Knowledge

RAG integration with document processing and vector search

Overview

The Knowledge system provides retrieval-augmented generation (RAG) capabilities, allowing agents to access and utilize external documents in their responses. It automatically processes documents, creates vector embeddings, and enables semantic search for relevant information. Agents with knowledge can provide more accurate, contextual responses based on your documents.

Enabling Knowledge

Enable knowledge for an agent by setting the knowledge option to true:

import { Agent } from '@astreus-ai/astreus';

const agent = await Agent.create({
  name: 'KnowledgeAgent',
  model: 'gpt-4o',
  knowledge: true,  // Enable knowledge base access (default: false)
  embeddingModel: 'text-embedding-3-small' // Optional: specify embedding model
});

Adding Documents

Add Text Content

Add content directly as a string:

await agent.addKnowledge(
  'Your important content here',
  'Document Title',
  { category: 'documentation' }
);

Add from File

Add content from supported file types:

// Add PDF file
await agent.addKnowledgeFromFile(
  '/path/to/document.pdf',
  { source: 'manual', version: '1.0' }
);

// Add text file
await agent.addKnowledgeFromFile('/path/to/notes.txt');

Add from Directory

Process all supported files in a directory:

await agent.addKnowledgeFromDirectory(
  '/path/to/docs',
  { project: 'documentation' }
);

Supported File Types

  • Text files: .txt, .md, .json
  • PDF files: .pdf (with text extraction)

How It Works

The knowledge system processes documents through the following pipeline:

Document Processing

Documents are stored and indexed in the knowledge database with metadata.
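
The exact storage schema is internal to Astreus, but conceptually each stored document carries an ID, a title, the original content, and the metadata you supplied. A rough illustration (id, title, and created_at match what getKnowledgeDocuments() returns later in this guide; the other fields are assumptions for illustration only):

// Illustrative shape only; the real schema is internal to Astreus.
interface StoredKnowledgeDocument {
  id: number;
  title: string;
  content: string;                    // original document text
  metadata: Record<string, unknown>;  // e.g. { category: 'documentation' }
  created_at: string;
}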

Text Chunking

Content is split into 1000-character chunks with a 200-character overlap, so relevant passages can be retrieved without losing context at chunk boundaries.
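
A minimal sketch of this sliding-window split, assuming the defaults above (chunkText is a hypothetical helper shown for illustration, not part of the Astreus API):

// Hypothetical sketch of sliding-window chunking; Astreus does this internally.
function chunkText(text: string, chunkSize = 1000, overlap = 200): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    const end = Math.min(start + chunkSize, text.length);
    chunks.push(text.slice(start, end));
    if (end === text.length) break;
    start = end - overlap; // step back so neighboring chunks share 200 characters
  }
  return chunks;
}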

Vector Embeddings

Each chunk is converted to vector embeddings using OpenAI or Ollama embedding models.
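
Conceptually, each chunk is sent to an embeddings endpoint. The sketch below shows the raw OpenAI SDK call for the text-embedding-3-small model used in the examples above; Astreus performs this step for you when you add knowledge:

import OpenAI from 'openai';

// Conceptual sketch of the embedding step; Astreus handles this automatically.
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const response = await openai.embeddings.create({
  model: 'text-embedding-3-small',
  input: 'Rate limiting is 1000 requests per hour.'
});
const vector = response.data[0].embedding; // array of numbers (1536 dimensions for this model)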

Semantic Search

When agents receive queries, relevant chunks are retrieved using cosine similarity search over the stored embeddings.
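
Cosine similarity measures how closely the query embedding points in the same direction as each chunk embedding; chunks scoring above the similarity threshold are returned. A minimal sketch:

// Cosine similarity between a query embedding and a chunk embedding (sketch).
// Scores range from -1 to 1; higher means more semantically similar.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}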

Context Integration

Retrieved information is automatically added to the agent's context for enhanced responses.
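
You normally don't need to do anything for this step; ask() performs retrieval automatically when knowledge is enabled. For illustration, the same step can be reproduced by hand with the getKnowledgeContext method described later in this guide:

// ask() does this automatically when knowledge is enabled; shown here for illustration.
const question = 'What is our API rate limit?';
const context = await agent.getKnowledgeContext(question);
const answer = await agent.ask(
  `Use the following documentation when answering.\n\n${context}\n\nQuestion: ${question}`
);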

Example Usage

Here's a complete example of using knowledge with an agent:

import { Agent } from '@astreus-ai/astreus';

// Create agent with knowledge enabled
const agent = await Agent.create({
  name: 'DocumentAssistant',
  model: 'gpt-4o',
  knowledge: true,
  embeddingModel: 'text-embedding-3-small', // Optional: specify embedding model
  systemPrompt: 'You are a helpful assistant with access to company documentation.'
});

// Add documentation
await agent.addKnowledgeFromFile('./company-handbook.pdf', {
  type: 'handbook',
  department: 'hr'
});

await agent.addKnowledge(`
Our API uses REST principles with JSON responses.
Authentication is done via Bearer tokens.
Rate limiting is 1000 requests per hour.
`, 'API Documentation', {
  type: 'api-docs',
  version: '2.0'
});

// Query with automatic knowledge retrieval
const response = await agent.ask('What is our API rate limit?');
console.log(response);
// The agent will automatically search the knowledge base and include relevant context

// Manual knowledge search
const results = await agent.searchKnowledge('API authentication', 5, 0.7);
results.forEach(result => {
  console.log(`Similarity: ${result.similarity}`);
  console.log(`Content: ${result.content}`);
});

Managing Knowledge

Available Methods

// List all documents with metadata
const documents = await agent.getKnowledgeDocuments();
// Returns: Array<{ id: number; title: string; created_at: string }>

// Delete specific document by ID
const documentDeleted = await agent.deleteKnowledgeDocument(documentId);
// Returns: boolean indicating success

// Delete specific chunk by ID
const chunkDeleted = await agent.deleteKnowledgeChunk(chunkId);
// Returns: boolean indicating success

// Clear all knowledge for this agent
await agent.clearKnowledge();
// Returns: void

// Search with custom parameters
const results = await agent.searchKnowledge(
  'search query',
  10,    // limit: max results (default: 5)
  0.8    // threshold: similarity threshold (0-1, default: 0.7)
);
// Returns: Array<{ content: string; metadata: MetadataObject; similarity: number }>

// Get relevant context for a query
const context = await agent.getKnowledgeContext(
  'query text',
  5      // limit: max chunks to include (default: 5)
);
// Returns: string with concatenated relevant content

// Expand context around a specific chunk
const expandedChunks = await agent.expandKnowledgeContext(
  documentId,   // Document ID
  chunkIndex,   // Chunk index within document
  2,            // expandBefore: chunks to include before (default: 1)
  2             // expandAfter: chunks to include after (default: 1)
);
// Returns: Array<string> with expanded chunk content

Configuration

Environment Variables

# Database (required)
KNOWLEDGE_DB_URL=postgresql://user:password@host:port/database

# API key for embeddings (uses same provider as agent's model)
OPENAI_API_KEY=your_openai_key

Embedding Model Configuration

Specify the embedding model directly in the agent configuration:

const agent = await Agent.create({
  name: 'KnowledgeAgent',
  model: 'gpt-4o',
  embeddingModel: 'text-embedding-3-small',  // Specify embedding model here
  knowledge: true
});
