# Astreus AI Documentation - Full Text
Generated for Large Language Models
URL: https://astreus.org/llms-full.txt
Total Documents: 32
***
# Agent Persistence
URL: /docs/examples/agent-persistence
Source: /app/src/content/docs/examples/agent-persistence.mdx
Save and load agents from database for reusability.
## Quick Start
### Clone the Complete Example
The easiest way to get started is to clone the complete example repository:
```bash
git clone https://github.com/astreus-ai/agent-persistence
cd agent-persistence
npm install
```
### Or Install Package Only
If you prefer to build from scratch:
```bash
npm install @astreus-ai/astreus
```
## Environment Setup
```bash
# .env
OPENAI_API_KEY=sk-your-openai-api-key-here
DB_URL=sqlite://./astreus.db
```
## Agent Persistence
```typescript
import { Agent } from '@astreus-ai/astreus';
// Create and save an agent
const agent = await Agent.create({
  name: 'ProjectAssistant',
  model: 'gpt-4o',
  memory: true,
  systemPrompt: 'You are a project management assistant.'
});
// Use the agent
await agent.ask("Remember that our project deadline is March 15th");
// Later, load the same agent by name
const loadedAgent = await Agent.findByName('ProjectAssistant');
const response = await loadedAgent?.ask("What is our project deadline?");
console.log(response); // Should remember March 15th
```
## Running the Example
If you cloned the repository:
```bash
npm run dev
```
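If you built from scratch, create an `index.ts` file with the code above and run:
```bash
npx tsx index.ts
```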
## Repository
The complete example is available on GitHub: [astreus-ai/agent-persistence](https://github.com/astreus-ai/agent-persistence)
***
# Agent with Knowledge
URL: /docs/examples/agent-with-knowledge
Source: /app/src/content/docs/examples/agent-with-knowledge.mdx
Create agents with knowledge base capabilities for enhanced information retrieval.
## Quick Start
### Clone the Complete Example
The easiest way to get started is to clone the complete example repository:
```bash
git clone https://github.com/astreus-ai/agent-with-knowledge
cd agent-with-knowledge
npm install
```
### Or Install Package Only
If you prefer to build from scratch:
```bash
npm install @astreus-ai/astreus
```
## Environment Setup
```bash
# .env
# LLM API key
OPENAI_API_KEY=sk-your-openai-api-key-here
# Knowledge database (required for RAG)
KNOWLEDGE_DB_URL=postgresql://username:password@localhost:5432/knowledge_db
# Main database for agent persistence
DB_URL=sqlite://./astreus.db
```
## Knowledge Agent
```typescript
import { Agent } from '@astreus-ai/astreus';
// Create agent with knowledge enabled
const agent = await Agent.create({
  name: 'CosmosBot',
  model: 'gpt-4o',
  embeddingModel: 'text-embedding-3-small', // Specify embedding model directly
  knowledge: true,
  systemPrompt: 'You can search and retrieve information from scientific knowledge bases about the cosmos and universe.'
});
// Add knowledge from scientific book about the sun and cosmos
await agent.addKnowledgeFromFile(
  './data/The Sun\'s Light and Heat.pdf',
  { category: 'solar-physics', version: '1.0' }
);
// Agent automatically uses knowledge in conversations
const response = await agent.ask("What is Correction for Atmospheric Absorption? Explain.");
console.log(response); // Uses knowledge base automatically
```
## Running the Example
If you cloned the repository:
```bash
npm run dev
```
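If you built from scratch, create an `index.ts` file with the code above and run:
```bash
npx tsx index.ts
```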
## Repository
The complete example is available on GitHub: [astreus-ai/agent-with-knowledge](https://github.com/astreus-ai/agent-with-knowledge)
***
# Agent with Memory
URL: /docs/examples/agent-with-memory
Source: /app/src/content/docs/examples/agent-with-memory.mdx
Build agents with persistent memory capabilities.
## Quick Start
### Clone the Complete Example
The easiest way to get started is to clone the complete example repository:
```bash
git clone https://github.com/astreus-ai/agent-with-memory
cd agent-with-memory
npm install
```
### Or Install Package Only
If you prefer to build from scratch:
```bash
npm install @astreus-ai/astreus
```
## Environment Setup
```bash
# .env
OPENAI_API_KEY=sk-your-openai-api-key-here
# Database for memory storage
DB_URL=sqlite://./astreus.db
```
## Memory Agent
```typescript
import { Agent } from '@astreus-ai/astreus';
const agent = await Agent.create({
  name: 'MemoryBot',
  model: 'gpt-4o',
  memory: true,
  systemPrompt: 'You remember our conversation history.'
});
// First conversation
const response1 = await agent.ask("My name is John and I like TypeScript");
console.log(response1);
// Later conversation - agent remembers
const response2 = await agent.ask("What's my name and what do I like?");
console.log(response2); // Should remember John and TypeScript
```
## Running the Example
If you cloned the repository:
```bash
npm run dev
```
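If you built from scratch, create an `index.ts` file with the code above and run:
```bash
npx tsx index.ts
```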
## Repository
The complete example is available on GitHub: [astreus-ai/agent-with-memory](https://github.com/astreus-ai/agent-with-memory)
***
# Agent with Vision
URL: /docs/examples/agent-with-vision
Source: /app/src/content/docs/examples/agent-with-vision.mdx
Create agents capable of processing and analyzing images.
## Quick Start
### Clone the Complete Example
The easiest way to get started is to clone the complete example repository:
```bash
git clone https://github.com/astreus-ai/agent-with-vision
cd agent-with-vision
npm install
```
### Or Install Package Only
If you prefer to build from scratch:
```bash
npm install @astreus-ai/astreus
```
## Environment Setup
```bash
# .env
# Vision-capable model API key
OPENAI_API_KEY=sk-your-openai-api-key-here
# Database for agent persistence
DB_URL=sqlite://./astreus.db
```
## Vision Agent
```typescript
import { Agent } from '@astreus-ai/astreus';
const agent = await Agent.create({
  name: 'VisionBot',
  model: 'gpt-4o',
  visionModel: 'gpt-4o',
  vision: true,
  systemPrompt: 'You can analyze and describe images in detail.'
});
// Analyze an image
const result = await agent.ask("Analyze this image and describe what you see", {
  attachments: [{
    type: 'image',
    path: './screenshot.png'
  }]
});
console.log(result); // Detailed image analysis
```
## Running the Example
If you cloned the repository:
```bash
npm run dev
```
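If you built from scratch, create an `index.ts` file with the code above and run:
```bash
npx tsx index.ts
```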
## Repository
The complete example is available on GitHub: [astreus-ai/agent-with-vision](https://github.com/astreus-ai/agent-with-vision)
***
# Basic Graphs
URL: /docs/examples/basic-graphs
Source: /app/src/content/docs/examples/basic-graphs.mdx
Create simple workflow graphs to orchestrate multi-step processes.
## Quick Start
### Clone the Complete Example
The easiest way to get started is to clone the complete example repository:
```bash
git clone https://github.com/astreus-ai/basic-graphs
cd basic-graphs
npm install
```
### Or Install Package Only
If you prefer to build from scratch:
```bash
npm install @astreus-ai/astreus
```
## Environment Setup
```bash
# .env
OPENAI_API_KEY=sk-your-openai-api-key-here
DB_URL=sqlite://./astreus.db
```
## Basic Graph Workflow
```typescript
import { Agent, Graph } from '@astreus-ai/astreus';
// Create an agent
const agent = await Agent.create({
  name: 'WorkflowAgent',
  model: 'gpt-4o'
});
// Create a simple sequential graph
const graph = new Graph({
  name: 'research-workflow'
}, agent);
// Add tasks with dependencies
const research = graph.addTaskNode({
  prompt: 'Research artificial intelligence trends'
});
const summary = graph.addTaskNode({
  prompt: 'Summarize the research findings',
  dependencies: [research]
});
// Execute the workflow
const results = await graph.run();
// Parse the result and extract the response
if (results.success && results.results[summary]) {
  const summaryResult = JSON.parse(results.results[summary]);
  console.log(summaryResult.response);
}
```
## Running the Example
If you cloned the repository:
```bash
npm run dev
```
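If you built from scratch, create an `index.ts` file with the code above and run:
```bash
npx tsx index.ts
```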
## Repository
The complete example is available on GitHub: [astreus-ai/basic-graphs](https://github.com/astreus-ai/basic-graphs)
***
# Complex Workflows
URL: /docs/examples/complex-workflows
Source: /app/src/content/docs/examples/complex-workflows.mdx
Build sophisticated multi-agent workflows with advanced orchestration patterns.
## Quick Start
### Clone the Complete Example
The easiest way to get started is to clone the complete example repository:
```bash
git clone https://github.com/astreus-ai/complex-workflows
cd complex-workflows
npm install
```
### Or Install Package Only
If you prefer to build from scratch:
```bash
npm install @astreus-ai/astreus
```
## Environment Setup
```bash
# .env
OPENAI_API_KEY=sk-your-openai-api-key-here
DB_URL=sqlite://./astreus.db
```
## Multi-Agent Workflow
```typescript
import { Agent, Graph } from '@astreus-ai/astreus';
// Create specialized agents
const researcher = await Agent.create({
  name: 'Researcher',
  model: 'gpt-4o',
  systemPrompt: 'You research topics thoroughly.'
});
const writer = await Agent.create({
  name: 'Writer',
  model: 'gpt-4o',
  systemPrompt: 'You create engaging content.'
});
// Create workflow pipeline
const pipeline = new Graph({
  name: 'content-pipeline'
}, researcher);
// Define workflow steps
const research = pipeline.addTaskNode({
  prompt: 'Research AI trends in 2024',
  agentId: researcher.id
});
const article = pipeline.addTaskNode({
  prompt: 'Write an article based on the research',
  agentId: writer.id,
  dependencies: [research]
});
// Execute the workflow
const results = await pipeline.run();
// Parse the result and extract the response
if (results.success && results.results[article]) {
  const articleResult = JSON.parse(results.results[article]);
  console.log(articleResult.response);
}
```
## Running the Example
If you cloned the repository:
```bash
npm run dev
```
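If you built from scratch, create an `index.ts` file with the code above and run:
```bash
npx tsx index.ts
```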
## Repository
The complete example is available on GitHub: [astreus-ai/complex-workflows](https://github.com/astreus-ai/complex-workflows)
***
# Context Compression
URL: /docs/examples/context-compression
Source: /app/src/content/docs/examples/context-compression.mdx
Use Astreus's auto context compression system to automatically manage long conversations by summarizing older messages while preserving important context.
## Quick Start
### Clone the Complete Example
The easiest way to get started is to clone the complete example repository:
```bash
git clone https://github.com/astreus-ai/context-compression
cd context-compression
npm install
```
### Or Install Package Only
If you prefer to build from scratch:
```bash
npm install @astreus-ai/astreus
```
## Environment Setup
```bash
# .env
OPENAI_API_KEY=sk-your-openai-api-key-here
DB_URL=sqlite://./astreus.db
```
## Auto Context Compression
The `autoContextCompression` feature automatically summarizes older messages when the conversation gets too long, maintaining context while reducing token usage:
```typescript
import { Agent } from '@astreus-ai/astreus';
const agent = await Agent.create({
  name: 'ContextAgent',
  model: 'gpt-4o',
  memory: true,
  autoContextCompression: true,
  systemPrompt: 'You can handle very long conversations efficiently.'
});
// Have a long conversation
for (let i = 1; i <= 20; i++) {
  await agent.ask(`Tell me an interesting fact about space. This is message #${i}.`);
}
// Test memory - agent should remember early facts despite context compression
const response = await agent.ask("What was the first space fact you told me?");
console.log(response);
```
## Running the Example
If you cloned the repository:
```bash
npm run dev
```
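If you built from scratch, create an `index.ts` file with the code above and run:
```bash
npx tsx index.ts
```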
## Repository
The complete example is available on GitHub: [astreus-ai/context-compression](https://github.com/astreus-ai/context-compression)
***
# Custom Plugins
URL: /docs/examples/custom-plugins
Source: /app/src/content/docs/examples/custom-plugins.mdx
Create and register custom plugins with tools for agents.
## Quick Start
### Clone the Complete Example
The easiest way to get started is to clone the complete example repository:
```bash
git clone https://github.com/astreus-ai/custom-plugins
cd custom-plugins
npm install
```
### Or Install Package Only
If you prefer to build from scratch:
```bash
npm install @astreus-ai/astreus
```
## Environment Setup
```bash
# .env
OPENAI_API_KEY=sk-your-openai-api-key-here
DB_URL=sqlite://./astreus.db
```
## Custom Weather Plugin
```typescript
import { Agent, ToolDefinition, PluginDefinition } from '@astreus-ai/astreus';
// Define a custom tool
const weatherTool: ToolDefinition = {
  name: 'get_weather',
  description: 'Get current weather for a location',
  parameters: {
    location: {
      name: 'location',
      type: 'string',
      description: 'City name',
      required: true
    },
    units: {
      name: 'units',
      type: 'string',
      description: 'Temperature units',
      required: false,
      enum: ['celsius', 'fahrenheit']
    }
  },
  handler: async (params) => {
    try {
      // Simulate weather API call
      const weather = {
        temperature: 22,
        conditions: 'sunny',
        location: params.location
      };
      return {
        success: true,
        data: weather
      };
    } catch (error) {
      return {
        success: false,
        error: error instanceof Error ? error.message : 'Unknown error'
      };
    }
  }
};
// Create plugin
const weatherPlugin: PluginDefinition = {
  name: 'weather-plugin',
  version: '1.0.0',
  description: 'Weather information tools',
  tools: [weatherTool]
};
// Create agent and register plugin
const agent = await Agent.create({
  name: 'WeatherAgent',
  model: 'gpt-4o'
});
await agent.registerPlugin(weatherPlugin);
// Use the plugin in conversation
const response = await agent.ask("What's the weather like in Tokyo?");
console.log(response); // Agent automatically uses the weather tool
```
## Running the Example
If you cloned the repository:
```bash
npm run dev
```
If you built from scratch, create an `index.ts` file with the code above and run:
```bash
npx tsx index.ts
```
## Repository
The complete example is available on GitHub: [astreus-ai/custom-plugins](https://github.com/astreus-ai/custom-plugins)
***
# Graph + Sub-Agents
URL: /docs/examples/graph-sub-agents
Source: /app/src/content/docs/examples/graph-sub-agents.mdx
Combine Graph workflows with Sub-Agent coordination for sophisticated hierarchical task distribution.
## Quick Start
### Clone the Complete Example
The easiest way to get started is to clone the complete example repository:
```bash
git clone https://github.com/astreus-ai/graph-sub-agents
cd graph-sub-agents
npm install
```
### Or Install Package Only
If you prefer to build from scratch:
```bash
npm install @astreus-ai/astreus
```
## Environment Setup
```bash
# .env
OPENAI_API_KEY=sk-your-openai-api-key-here
ANTHROPIC_API_KEY=your-anthropic-api-key-here
DB_URL=sqlite://./astreus.db
```
## Graph + Sub-Agent Integration
```typescript
import { Agent, Graph } from '@astreus-ai/astreus';
// Create specialized sub-agents
const researcher = await Agent.create({
  name: 'ResearchSpecialist',
  model: 'gpt-4o',
  systemPrompt: 'You conduct thorough research and gather comprehensive information.',
  knowledge: true,
  memory: true
});
const analyst = await Agent.create({
  name: 'DataAnalyst',
  model: 'gpt-4o',
  systemPrompt: 'You analyze data and provide actionable insights.',
  useTools: true
});
const writer = await Agent.create({
  name: 'ContentWriter',
  model: 'claude-3-5-sonnet-20241022',
  systemPrompt: 'You create compelling, well-structured content.',
  vision: true
});
// Create main coordinator agent with sub-agents
const coordinator = await Agent.create({
  name: 'ProjectCoordinator',
  model: 'gpt-4o',
  systemPrompt: 'You coordinate complex projects using specialized sub-agents.',
  subAgents: [researcher, analyst, writer]
});
// Create graph with sub-agent awareness
const projectGraph = new Graph({
  name: 'Market Analysis Project',
  defaultAgentId: coordinator.id,
  subAgentAware: true,
  maxConcurrency: 2
}, coordinator);
// Add tasks that leverage sub-agents
const researchTask = projectGraph.addTaskNode({
  name: 'Market Research',
  prompt: 'Research the AI healthcare market, including key players, market size, and growth trends',
  useSubAgents: true,
  subAgentDelegation: 'auto'
});
const analysisTask = projectGraph.addTaskNode({
  name: 'Data Analysis',
  prompt: 'Analyze the research data and identify key opportunities and challenges',
  dependencies: [researchTask],
  useSubAgents: true,
  subAgentCoordination: 'parallel'
});
const reportTask = projectGraph.addTaskNode({
  name: 'Executive Report',
  prompt: 'Create a comprehensive executive report with recommendations',
  dependencies: [analysisTask],
  useSubAgents: true
});
// Execute the graph with intelligent sub-agent coordination
const result = await projectGraph.run();
// Display results
console.log('Project completed:', result.success);
console.log(`Tasks completed: ${result.completedNodes}/${result.completedNodes + result.failedNodes}`);
console.log(`Duration: ${result.duration}ms`);
// Display task results
if (result.results) {
  console.log('\nTask Results:');
  for (const [nodeId, nodeResult] of Object.entries(result.results)) {
    console.log(`\n${nodeId}:`, nodeResult);
  }
}
// Get the final report from the last task
const finalReport = result.results?.[reportTask];
if (finalReport) {
  console.log('\n=== Final Executive Report ===');
  console.log(finalReport);
}
```
## Running the Example
If you cloned the repository:
```bash
npm run dev
```
If you built from scratch, create an `index.ts` file with the code above and run:
```bash
npx tsx index.ts
```
## Repository
The complete example is available on GitHub: [astreus-ai/graph-sub-agents](https://github.com/astreus-ai/graph-sub-agents)
***
# MCP Integration
URL: /docs/examples/mcp-integration
Source: /app/src/content/docs/examples/mcp-integration.mdx
Connect agents with external tools using Model Context Protocol.
## Quick Start
### Clone the Complete Example
The easiest way to get started is to clone the complete example repository:
```bash
git clone https://github.com/astreus-ai/mcp-integration
cd mcp-integration
npm install
```
### Or Install Package Only
If you prefer to build from scratch:
```bash
npm install @astreus-ai/astreus
```
## Environment Setup
```bash
# .env
OPENAI_API_KEY=sk-your-openai-api-key-here
DB_URL=sqlite://./astreus.db
```
## MCP Server Integration
```typescript
import { config } from 'dotenv';
import { Agent, Graph } from '@astreus-ai/astreus';
config();
async function main() {
  const mainAgent = await Agent.create({
    name: 'FileAnalysisAgent',
    model: 'gpt-4o',
    systemPrompt: 'You are a file analysis agent that processes files using graph-based workflows and MCP tools.'
  });

  const mcpConfig = {
    name: 'filesystem',
    command: "npx",
    args: ["@modelcontextprotocol/server-filesystem", process.cwd()]
  };
  await mainAgent.addMCPServers([mcpConfig]);

  // Give the MCP server a moment to start before running the workflow
  await new Promise(resolve => setTimeout(resolve, 3000));

  const analysisGraph = new Graph({
    name: 'File Summary Workflow',
    maxConcurrency: 1
  }, mainAgent);

  const readTask = analysisGraph.addTaskNode({
    name: 'File Reading',
    prompt: 'Read the content of "./info.txt" file and analyze it.',
    priority: 1
  });

  const summaryTask = analysisGraph.addTaskNode({
    name: 'Summary Creation',
    prompt: 'Based on the analyzed file content from the previous task, create a concise summary and save it as "./summary.txt" file.',
    dependencies: [readTask],
    priority: 2
  });

  const result = await analysisGraph.run({
    stream: true,
    onChunk: (chunk) => {
      console.log(chunk);
    }
  });
}
main().catch(console.error);
```
## Running the Example
If you cloned the repository:
```bash
npm run dev
```
If you built from scratch, create an `index.ts` file with the code above and run:
```bash
npx tsx index.ts
```
## Repository
The complete example is available on GitHub: [astreus-ai/mcp-integration](https://github.com/astreus-ai/mcp-integration)
***
# Scheduled Workflows
URL: /docs/examples/scheduled-workflows
Source: /app/src/content/docs/examples/scheduled-workflows.mdx
Build time-based automated workflows with simple schedule strings and dependency management.
## Quick Start
### Clone the Complete Example
The easiest way to get started is to clone the complete example repository:
```bash
git clone https://github.com/astreus-ai/scheduled-workflows
cd scheduled-workflows
npm install
```
### Or Install Package Only
If you prefer to build from scratch:
```bash
npm install @astreus-ai/astreus
```
## Environment Setup
```bash
# .env
OPENAI_API_KEY=sk-your-openai-api-key-here
DB_URL=sqlite://./astreus.db
NODE_ENV=development
```
## Quick Test Example (Seconds)
```typescript
import { Agent, Graph } from '@astreus-ai/astreus';
const agent = await Agent.create({
  name: 'ContentAgent',
  model: 'gpt-4o',
  systemPrompt: 'You are a content creation specialist.'
});
const graph = new Graph({
  name: 'Quick Test Pipeline',
  description: 'Test automated workflow with seconds interval',
  maxConcurrency: 2
}, agent);
// Run after 5 seconds
const researchNode = graph.addTaskNode({
  name: 'Content Research',
  prompt: 'Research trending topics in AI and technology. Find 3-5 compelling topics for blog content.',
  schedule: 'after:5s'
});
// Run after 10 seconds
const creationNode = graph.addTaskNode({
  name: 'Content Creation',
  prompt: 'Based on the research findings, create a short blog post summary on one of the trending topics.',
  schedule: 'after:10s',
  dependsOn: ['Content Research']
});
// Run after 15 seconds
const seoNode = graph.addTaskNode({
  name: 'SEO Optimization',
  prompt: 'Optimize the blog post for SEO: add meta description and keywords.',
  schedule: 'after:15s',
  dependsOn: ['Content Creation']
});
// Run after 20 seconds
const publishNode = graph.addTaskNode({
  name: 'Content Publishing',
  prompt: 'Create a social media post for the optimized content.',
  schedule: 'after:20s',
  dependsOn: ['SEO Optimization']
});
console.log('Starting scheduled workflow test...');
console.log('Tasks will run at:');
console.log('- Research: 5 seconds from now');
console.log('- Creation: 10 seconds from now');
console.log('- SEO: 15 seconds from now');
console.log('- Publishing: 20 seconds from now\n');
// Run the graph and get results
const result = await graph.run();
// Display execution results
console.log('\n=== Workflow Execution Results ===');
console.log('Success:', result.success);
console.log(`Completed: ${result.completedNodes} tasks`);
console.log(`Failed: ${result.failedNodes} tasks`);
console.log(`Duration: ${result.duration}ms`);
// Display each task result
if (result.results) {
  console.log('\n=== Task Results ===');
  for (const [nodeId, nodeResult] of Object.entries(result.results)) {
    console.log(`\n[${nodeId}]:`);
    console.log(nodeResult);
  }
}
// Check for errors
if (result.errors && Object.keys(result.errors).length > 0) {
  console.log('\n=== Errors ===');
  for (const [nodeId, error] of Object.entries(result.errors)) {
    console.log(`[${nodeId}]: ${error}`);
  }
}
console.log('\n✅ Scheduled workflow test completed!');
```
## Daily Content Pipeline (Production)
For production use with actual daily schedules:
```typescript
// Schedule format examples:
// 'daily@06:00' - Every day at 6 AM
// 'weekly@monday@09:00' - Every Monday at 9 AM
// 'monthly@15@10:00' - 15th of every month at 10 AM
// 'after:5s' - After 5 seconds (for testing)
// 'after:2h' - After 2 hours
// 'every:30m' - Every 30 minutes
const researchNode = graph.addTaskNode({
  name: 'Content Research',
  prompt: 'Research trending topics in AI and technology.',
  schedule: 'daily@06:00'
});
```
## Running the Example
If you cloned the repository:
```bash
npm run dev
```
If you built from scratch, create an `index.ts` file with the code above and run:
```bash
npx tsx index.ts
```
## Repository
The complete example is available on GitHub: [astreus-ai/scheduled-workflows](https://github.com/astreus-ai/scheduled-workflows)
***
# Advanced Sub-Agents
URL: /docs/examples/sub-agents-advanced
Source: /app/src/content/docs/examples/sub-agents-advanced.mdx
Build sophisticated multi-agent workflows with complex coordination patterns and specialized capabilities.
## Quick Start
### Clone the Complete Example
The easiest way to get started is to clone the complete example repository:
```bash
git clone https://github.com/astreus-ai/sub-agents-advanced
cd sub-agents-advanced
npm install
```
### Or Install Package Only
If you prefer to build from scratch:
```bash
npm install @astreus-ai/astreus
```
## Environment Setup
```bash
# .env
OPENAI_API_KEY=sk-your-openai-api-key-here
ANTHROPIC_API_KEY=your-anthropic-api-key-here
DB_URL=sqlite://./astreus.db
```
## Multi-Model Agent Team
```typescript
import { Agent } from '@astreus-ai/astreus';
// Create diverse agent team with different models
const strategicPlanner = await Agent.create({
  name: 'StrategicPlanner',
  model: 'gpt-4o', // High reasoning for strategy
  systemPrompt: 'You are a strategic business consultant with deep analytical thinking.',
  memory: true,
  knowledge: true
});
const creativeWriter = await Agent.create({
  name: 'CreativeWriter',
  model: 'claude-3-5-sonnet-20241022', // Excellent for creative writing
  systemPrompt: 'You are a creative copywriter who crafts compelling narratives.',
  vision: true
});
const dataScientist = await Agent.create({
  name: 'DataScientist',
  model: 'gpt-4o', // Strong analytical capabilities
  systemPrompt: 'You are a data scientist specializing in statistical analysis and insights.',
  useTools: true
});
const executiveTeam = await Agent.create({
  name: 'ExecutiveTeam',
  model: 'gpt-4o', // High-level coordination
  systemPrompt: 'You coordinate executive-level strategic initiatives across expert teams.',
  subAgents: [strategicPlanner, creativeWriter, dataScientist]
});
const businessPlan = await executiveTeam.ask(
  'Develop comprehensive go-to-market strategy for AI-powered healthcare platform',
  {
    useSubAgents: true,
    delegation: 'auto',
    coordination: 'sequential'
  }
);
console.log('Business plan completed:', businessPlan);
```
## Running the Example
If you cloned the repository:
```bash
npm run dev
```
If you built from scratch, create an `index.ts` file with the code above and run:
```bash
npx tsx index.ts
```
## Repository
The complete example is available on GitHub: [astreus-ai/sub-agents-advanced](https://github.com/astreus-ai/sub-agents-advanced)
***
# Basic Sub-Agents
URL: /docs/examples/sub-agents-basic
Source: /app/src/content/docs/examples/sub-agents-basic.mdx
Create and coordinate multiple AI agents for complex task delegation.
## Quick Start
### Clone the Complete Example
The easiest way to get started is to clone the complete example repository:
```bash
git clone https://github.com/astreus-ai/sub-agents-basic
cd sub-agents-basic
npm install
```
### Or Install Package Only
If you prefer to build from scratch:
```bash
npm install @astreus-ai/astreus
```
## Environment Setup
```bash
# .env
OPENAI_API_KEY=sk-your-openai-api-key-here
DB_URL=sqlite://./astreus.db
```
## Simple Sub-Agent Setup
```typescript
import { Agent } from '@astreus-ai/astreus';
// Create specialized sub-agents
const researcher = await Agent.create({
  name: 'Researcher',
  model: 'gpt-4o',
  systemPrompt: 'You are an expert researcher who gathers comprehensive information.'
});
const writer = await Agent.create({
  name: 'Writer',
  model: 'gpt-4o',
  systemPrompt: 'You are a skilled writer who creates clear, engaging content.'
});
// Create main coordinator agent
const mainAgent = await Agent.create({
  name: 'Coordinator',
  model: 'gpt-4o',
  systemPrompt: 'You coordinate tasks between specialized agents.',
  subAgents: [researcher, writer]
});
// Use auto delegation
const result = await mainAgent.ask(
  'Research artificial intelligence trends and write a summary',
  {
    useSubAgents: true,
    delegation: 'auto'
  }
);
console.log(result);
```
## Running the Example
If you cloned the repository:
```bash
npm run dev
```
If you built from scratch, create an `index.ts` file with the code above and run:
```bash
npx tsx index.ts
```
## Repository
The complete example is available on GitHub: [astreus-ai/sub-agents-basic](https://github.com/astreus-ai/sub-agents-basic)
***
# Task Attachments
URL: /docs/examples/task-attachments
Source: /app/src/content/docs/examples/task-attachments.mdx
Attach multiple file types to tasks for comprehensive analysis.
## Quick Start
### Clone the Complete Example
The easiest way to get started is to clone the complete example repository:
```bash
git clone https://github.com/astreus-ai/task-attachments
cd task-attachments
npm install
```
### Or Install Package Only
If you prefer to build from scratch:
```bash
npm install @astreus-ai/astreus
```
## Environment Setup
```bash
# .env
OPENAI_API_KEY=sk-your-openai-api-key-here
DB_URL=sqlite://./astreus.db
```
## Task with Multiple Attachments
```typescript
import { Agent } from '@astreus-ai/astreus';
const agent = await Agent.create({
  name: 'AnalysisAgent',
  model: 'gpt-4o',
  visionModel: 'gpt-4o', // Specify vision model directly
  vision: true // Enable vision for images
});
// Code review task with multiple file types
const reviewTask = await agent.createTask({
  prompt: `Perform a comprehensive analysis:
    1. Review the code for security issues
    2. Check the design mockup for usability
    3. Verify dependencies are up to date
    4. Review documentation completeness`,
  attachments: [
    {
      type: 'code',
      path: './src/auth/login.ts',
      name: 'Login Controller',
      language: 'typescript'
    },
    {
      type: 'image',
      path: './designs/login-ui.png',
      name: 'Login UI Mockup'
    },
    {
      type: 'json',
      path: './package.json',
      name: 'Dependencies'
    },
    {
      type: 'markdown',
      path: './docs/api.md',
      name: 'API Documentation'
    }
  ],
  metadata: {
    type: 'comprehensive-review',
    priority: 'high'
  }
});
const result = await agent.executeTask(reviewTask.id);
console.log('Analysis complete:', result.response);
```
## Running the Example
If you cloned the repository:
```bash
npm run dev
```
If you built from scratch, create an `index.ts` file with the code above and run:
```bash
npx tsx index.ts
```
## Repository
The complete example is available on GitHub: [astreus-ai/task-attachments](https://github.com/astreus-ai/task-attachments)
***
# Your First Agent
URL: /docs/examples/your-first-agent
Source: /app/src/content/docs/examples/your-first-agent.mdx
Create your first AI agent with Astreus framework.
## Quick Start
### Clone the Complete Example
The easiest way to get started is to clone the complete example repository:
```bash
git clone https://github.com/astreus-ai/your-first-agent
cd your-first-agent
npm install
```
### Or Install Package Only
If you prefer to build from scratch:
```bash
npm install @astreus-ai/astreus
```
## Environment Setup
```bash
# .env
OPENAI_API_KEY=sk-your-openai-api-key-here
DB_URL=sqlite://./astreus.db
```
## Basic Agent
```typescript
import { Agent } from '@astreus-ai/astreus';
const agent = await Agent.create({
  name: 'MyFirstAgent',
  model: 'gpt-4o',
  systemPrompt: 'You are a helpful assistant.'
});
// Create and execute a task
const task = await agent.createTask({
  prompt: "Hello, introduce yourself"
});
const result = await agent.executeTask(task.id);
console.log(result.response);
```
## Running the Example
If you cloned the repository:
```bash
npm run dev
```
If you built from scratch, create an `index.ts` file with the code above and run:
```bash
npx tsx index.ts
```
## Repository
The complete example is available on GitHub: [astreus-ai/your-first-agent](https://github.com/astreus-ai/your-first-agent)
***
# Agent
URL: /docs/framework/agent
Source: /app/src/content/docs/framework/agent.mdx
**Core AI entity with modular capabilities and decorator-based composition**
## Overview
Agents are the fundamental building blocks in Astreus. They provide intelligent conversation capabilities with configurable features like memory, tools, knowledge bases, and vision processing. Each agent operates independently with its own context, memory, and specialized capabilities.
## Creating an Agent
Creating an agent in Astreus is straightforward:
```typescript
import { Agent } from '@astreus-ai/astreus';
const agent = await Agent.create({
  name: 'MyAssistant', // Unique name for the agent
  model: 'gpt-4o', // LLM model to use
  systemPrompt: 'You are a helpful assistant', // Custom instructions
  memory: true // Enable persistent memory
});
```
## Choosing the LLM Model
Astreus supports multiple LLM providers out of the box:
```typescript
const agent = await Agent.create({
  name: 'MyAssistant',
  model: 'gpt-4.5' // Set model here. Latest: 'gpt-4.5', 'claude-sonnet-4-20250514', 'gemini-2.5-pro', 'deepseek-r1'
});
```
[Learn supported LLM providers and models →](/docs/framework/llm)
## Agent Architecture
```mermaid
graph TB
    A[Agent Core] --> B[LLM Integration]
    A --> C[Memory System]
    A --> D[Knowledge Base]
    A --> E[Vision Processing]
    A --> F[Tool System]
    A --> G[Context Management]
    B --> B1[OpenAI/GPT]
    B --> B2[Anthropic/Claude]
    B --> B3[Google/Gemini]
    B --> B4[DeepSeek]
    C --> C1[Short-term Memory]
    C --> C2[Long-term Memory]
    C --> C3[Context Retention]
    D --> D1[Vector Storage]
    D --> D2[Semantic Search]
    D --> D3[RAG Pipeline]
    E --> E1[Image Analysis]
    E --> E2[OCR Processing]
    E --> E3[Visual Understanding]
    F --> F1[Custom Plugins]
    F --> F2[Built-in Tools]
    F --> F3[External APIs]
    G --> G1[Auto Compression]
    G --> G2[Token Management]
    G --> G3[Context Window]
    style A fill:#f9f,stroke:#333,stroke-width:4px
    style B fill:#bbf,stroke:#333,stroke-width:2px
    style C fill:#bfb,stroke:#333,stroke-width:2px
    style D fill:#fbb,stroke:#333,stroke-width:2px
    style E fill:#fbf,stroke:#333,stroke-width:2px
    style F fill:#bff,stroke:#333,stroke-width:2px
    style G fill:#ffb,stroke:#333,stroke-width:2px
```
## Agent Attributes
Agents can be configured with various attributes to customize their behavior:
### Core Attributes
```typescript
interface AgentConfig {
  name: string; // Unique identifier for the agent
  description?: string; // Agent description
  model?: string; // LLM model to use (default: 'gpt-4o-mini')
  embeddingModel?: string; // Specific model for embeddings (auto-detected)
  visionModel?: string; // Specific model for vision (auto-detected)
  temperature?: number; // Control response randomness (0-1, default: 0.7)
  maxTokens?: number; // Maximum response length (default: 2000)
  systemPrompt?: string; // Custom system instructions
  memory?: boolean; // Enable persistent memory (default: false)
  knowledge?: boolean; // Enable knowledge base access (default: false)
  vision?: boolean; // Enable image processing (default: false)
  useTools?: boolean; // Enable tool/plugin usage (default: true)
  autoContextCompression?: boolean; // Enable smart context management (default: false)
  maxContextLength?: number; // Token limit before compression (default: 8000)
  preserveLastN?: number; // Recent messages to keep uncompressed (default: 3)
  compressionRatio?: number; // Target compression ratio (default: 0.3)
  compressionStrategy?: 'summarize' | 'selective' | 'hybrid'; // Algorithm (default: 'hybrid')
  debug?: boolean; // Enable debug logging (default: false)
  subAgents?: IAgent[]; // Sub-agents for delegation and coordination
}
```
### Example with All Attributes
```typescript
// Create sub-agents first
const researcher = await Agent.create({
  name: 'ResearchAgent',
  systemPrompt: 'You are an expert researcher who gathers comprehensive information.'
});
const writer = await Agent.create({
  name: 'WriterAgent',
  systemPrompt: 'You create engaging, well-structured content.'
});
const fullyConfiguredAgent = await Agent.create({
  name: 'AdvancedAssistant',
  description: 'Multi-purpose AI assistant',
  model: 'gpt-4o',
  embeddingModel: 'text-embedding-3-small', // Optional: specific embedding model
  visionModel: 'gpt-4o', // Optional: specific vision model
  temperature: 0.7,
  maxTokens: 2000,
  systemPrompt: 'You are an expert software architect...',
  memory: true,
  knowledge: true,
  vision: true,
  useTools: true,
  autoContextCompression: true,
  maxContextLength: 6000, // Compress at 6000 tokens
  preserveLastN: 4, // Keep last 4 messages
  compressionRatio: 0.4, // 40% compression target
  compressionStrategy: 'hybrid', // Use hybrid strategy
  debug: true, // Enable debug logging
  subAgents: [researcher, writer] // Add sub-agents for delegation
});
```
***
# Context
URL: /docs/framework/context
Source: /app/src/content/docs/framework/context.mdx
**Smart context management for long conversations with automatic compression**
## Overview
Auto context compression in Astreus provides intelligent conversation management by automatically handling long conversation histories. The system compresses older messages while preserving important information, ensuring agents can maintain coherent long conversations without exceeding model token limits.
## Basic Usage
Enable auto context compression to get automatic conversation management:
```typescript
import { Agent } from '@astreus-ai/astreus';
// Create an agent with auto context compression enabled
const agent = await Agent.create({
  name: 'ContextAwareAgent',
  model: 'gpt-4o',
  autoContextCompression: true // Enable smart context management
});
// Long conversations are automatically managed
for (let i = 1; i <= 50; i++) {
  const response = await agent.ask(`Tell me fact #${i} about TypeScript`);
  console.log(`Fact ${i}:`, response);
}
// Agent can still reference early conversation through compressed context
const summary = await agent.ask('What was the first fact you told me?');
console.log(summary); // System retrieves from compressed context
```
## Example with Tasks
Auto context compression works with both direct conversations and tasks:
```typescript
const agent = await Agent.create({
  name: 'ResearchAgent',
  model: 'gpt-4o',
  autoContextCompression: true,
  memory: true // Often used together with memory
});
// Create multiple related tasks
const task1 = await agent.createTask({
  prompt: "Research the latest trends in AI development"
});
const result1 = await agent.executeTask(task1.id);
const task2 = await agent.createTask({
  prompt: "Based on the research, what are the key opportunities?"
});
const result2 = await agent.executeTask(task2.id);
// Task can reference previous context even if it was compressed
```
Auto context compression ensures agents can handle conversations and tasks of any length while maintaining coherence and staying within token limits.
## Configuration Options
You can customize the auto context compression behavior with these parameters:
```typescript
const agent = await Agent.create({
  name: 'CustomContextAgent',
  model: 'gpt-4o',
  autoContextCompression: true,
  // Context compression configuration
  maxContextLength: 4000, // Trigger compression at 4000 tokens
  preserveLastN: 5, // Keep last 5 messages uncompressed
  compressionRatio: 0.4, // Target 40% size reduction
  compressionStrategy: 'hybrid', // Use hybrid compression strategy
  memory: true
});
```
### Configuration Parameters
| Parameter | Type | Default | Description |
| ------------------------ | --------- | ---------- | ---------------------------------------------- |
| `autoContextCompression` | `boolean` | `false` | Enable automatic context compression |
| `maxContextLength` | `number` | `8000` | Token limit before compression triggers |
| `preserveLastN` | `number` | `3` | Number of recent messages to keep uncompressed |
| `compressionRatio` | `number` | `0.3` | Target compression ratio (0.1 = 90% reduction) |
| `compressionStrategy` | `string` | `'hybrid'` | Compression algorithm to use |
### Compression Mathematics
The compression ratio determines how much the context is reduced:
$\text{Compression Ratio} = \frac{\text{compressed tokens}}{\text{original tokens}}$
For example, with a ratio of 0.3:
* Original: 1000 tokens
* Compressed: 300 tokens
* **Reduction: 70%**
The token reduction percentage is calculated as:
$\text{Reduction \%} = (1 - \text{ratio}) \times 100\%$
With `compressionRatio = 0.3`:
$\text{Reduction} = (1 - 0.3) \times 100\% = 70\%$
### Compression Strategies
Choose the compression strategy that best fits your use case:
#### `'summarize'` - Text Summarization
* **Best for**: General conversations, Q\&A, discussions
* **How it works**: Creates concise summaries of message groups
* **Pros**: Maintains context flow, good for most use cases
* **Cons**: May lose specific details
```typescript
const agent = await Agent.create({
  name: 'SummarizingAgent',
  autoContextCompression: true,
  compressionStrategy: 'summarize',
  preserveLastN: 4
});
```
#### `'selective'` - Important Message Selection
* **Best for**: Task-oriented conversations, technical discussions
* **How it works**: Uses AI to identify and preserve important messages
* **Pros**: Keeps crucial information intact
* **Cons**: May be more resource intensive
```typescript
const agent = await Agent.create({
  name: 'SelectiveAgent',
  autoContextCompression: true,
  compressionStrategy: 'selective',
  preserveLastN: 3
});
```
#### `'hybrid'` - Combined Approach (Recommended)
* **Best for**: Most applications, balanced approach
* **How it works**: Combines summarization and selective preservation
* **Pros**: Balanced between context preservation and efficiency
* **Cons**: None significant
```typescript
const agent = await Agent.create({
  name: 'HybridAgent',
  autoContextCompression: true,
  compressionStrategy: 'hybrid' // Default and recommended
});
```
## Advanced Usage
### Custom Compression Settings by Use Case
#### High-Frequency Conversations
For chatbots or interactive agents with many short messages:
```typescript
const chatbot = await Agent.create({
  name: 'Chatbot',
  autoContextCompression: true,
  maxContextLength: 2000, // Compress more frequently
  preserveLastN: 8, // Keep more recent messages
  compressionRatio: 0.5, // Target 50% size reduction
  compressionStrategy: 'summarize'
});
```
#### Long-Form Content Creation
For agents working with detailed content:
```typescript
const writer = await Agent.create({
  name: 'ContentWriter',
  autoContextCompression: true,
  maxContextLength: 12000, // Allow longer context
  preserveLastN: 3, // Keep recent context tight
  compressionRatio: 0.2, // Target 80% size reduction
  compressionStrategy: 'selective'
});
```
#### Technical Documentation
For agents handling complex technical discussions:
```typescript
const techAgent = await Agent.create({
  name: 'TechnicalAssistant',
  autoContextCompression: true,
  maxContextLength: 6000,
  preserveLastN: 5,
  compressionRatio: 0.3,
  compressionStrategy: 'hybrid' // Best for mixed content
});
```
## How Context Compression Works
### Compression Process
1. **Token Monitoring**: The agent continuously monitors the total token count in the conversation.
2. **Trigger Point**: When tokens exceed `maxContextLength`, compression is triggered.
3. **Message Preservation**: The most recent `preserveLastN` messages are kept uncompressed.
4. **Content Analysis**: Older messages are analyzed based on the chosen strategy.
5. **Compression**: Messages are compressed into summaries or selections.
6. **Context Update**: The compressed context replaces the original messages.
### What Gets Preserved
* **System prompts**: Always preserved
* **Recent messages**: Last N messages based on `preserveLastN`
* **Important context**: Key information identified by the compression strategy
* **Compressed summaries**: Condensed versions of older conversations
### Example Compression Flow
```typescript
// Before compression (1200 tokens)
[
  { role: 'user', content: 'Tell me about TypeScript' },
  { role: 'assistant', content: 'TypeScript is...' },
  { role: 'user', content: 'What about interfaces?' },
  { role: 'assistant', content: 'Interfaces in TypeScript...' },
  { role: 'user', content: 'Show me an example' },
  { role: 'assistant', content: 'Here\'s an example...' },
]
// After compression (400 tokens)
[
  { role: 'system', content: '[Compressed] User asked about TypeScript basics, interfaces, and examples. Assistant provided comprehensive explanations...' },
  { role: 'user', content: 'Show me an example' },
  { role: 'assistant', content: 'Here\'s an example...' },
]
```
## Monitoring and Debugging
### Context Window Information
Get details about the current context state:
```typescript
const contextWindow = agent.getContextWindow();
console.log({
  messageCount: contextWindow.messages.length,
  totalTokens: contextWindow.totalTokens,
  maxTokens: contextWindow.maxTokens,
  utilization: `${contextWindow.utilizationPercentage.toFixed(1)}%`
});
// Check if compression occurred
const hasCompression = contextWindow.messages.some(
  msg => msg.metadata?.type === 'summary'
);
console.log('Context compressed:', hasCompression);
```
### Context Analysis
Analyze context for optimization opportunities:
```typescript
const analysis = agent.analyzeContext();
console.log({
  compressionNeeded: analysis.compressionNeeded,
  averageTokensPerMessage: analysis.averageTokensPerMessage,
  suggestedCompressionRatio: analysis.suggestedCompressionRatio
});
```
***
# Environment
URL: /docs/framework/env-example
Source: /app/src/content/docs/framework/env-example.mdx
**Environment configuration for Astreus AI applications**
## Complete Configuration
```bash
# ===========================================
# Astreus AI - Environment Variables Example
# ===========================================
# ===== DATABASE CONFIGURATION =====
# Main Application Database (Agents, Memory, Tasks, etc.)
# SQLite (default for development)
DB_URL=sqlite://./astreus.db
# PostgreSQL (production recommended)
# DB_URL=postgresql://username:password@localhost:5432/astreus_db
# ===== KNOWLEDGE/RAG SYSTEM =====
# Knowledge Vector Database (PostgreSQL with pgvector extension required)
# This is separate from the main database and stores vector embeddings
KNOWLEDGE_DB_URL=postgresql://username:password@localhost:5432/knowledge_db
# ===== LLM PROVIDERS API KEYS =====
# OpenAI
OPENAI_API_KEY=sk-your-openai-api-key-here # Primary key (fallback for vision/embedding)
OPENAI_VISION_API_KEY=sk-your-vision-api-key-here # Optional dedicated vision key
OPENAI_EMBEDDING_API_KEY=sk-your-embedding-api-key-here # Optional dedicated embedding key
OPENAI_BASE_URL=https://api.openai.com/v1 # Primary base URL (fallback for vision/embedding)
OPENAI_VISION_BASE_URL=https://api.openai.com/v1 # Optional dedicated vision base URL
OPENAI_EMBEDDING_BASE_URL=https://api.openai.com/v1 # Optional dedicated embedding base URL
# Anthropic Claude
ANTHROPIC_API_KEY=your-anthropic-api-key-here # Primary key (fallback for vision)
ANTHROPIC_VISION_API_KEY=your-vision-api-key-here # Optional dedicated vision key
ANTHROPIC_BASE_URL=https://api.anthropic.com # Primary base URL (fallback for vision)
ANTHROPIC_VISION_BASE_URL=https://api.anthropic.com # Optional dedicated vision base URL
# Google Gemini
GEMINI_API_KEY=your-gemini-api-key-here # Primary key (replaces GOOGLE_API_KEY)
GEMINI_VISION_API_KEY=your-vision-api-key-here # Optional dedicated vision key
GEMINI_EMBEDDING_API_KEY=your-embedding-api-key-here # Optional dedicated embedding key
GEMINI_BASE_URL=https://generativelanguage.googleapis.com # Primary base URL (fallback for vision/embedding)
GEMINI_VISION_BASE_URL=https://generativelanguage.googleapis.com # Optional dedicated vision base URL
GEMINI_EMBEDDING_BASE_URL=https://generativelanguage.googleapis.com # Optional dedicated embedding base URL
# Ollama (for local models)
OLLAMA_BASE_URL=http://localhost:11434 # Same as before
# ===== APPLICATION SETTINGS =====
# Environment
NODE_ENV=development # Options: 'development' | 'production' | 'test'
# ===== DATABASE ENCRYPTION =====
# Enable/disable field-level encryption for sensitive data
ENCRYPTION_ENABLED=true # Options: 'true' | 'false'
# Master encryption key (required when ENCRYPTION_ENABLED=true)
# IMPORTANT: Generate a strong 32+ character key and keep it secure!
# You can generate one with: openssl rand -hex 32
ENCRYPTION_MASTER_KEY=your-256-bit-encryption-key-here-keep-it-safe-and-secure
# Encryption algorithm (default: aes-256-gcm)
ENCRYPTION_ALGORITHM=aes-256-gcm
```
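Once these variables are defined, load them before creating any agents. A minimal sketch, assuming the same `dotenv` pattern used in the MCP integration example (the agent name here is illustrative):
```typescript
import { config } from 'dotenv';
import { Agent } from '@astreus-ai/astreus';

// Load .env so DB_URL, OPENAI_API_KEY, etc. are available via process.env
config();

// The provider key and DB_URL are picked up from the environment
const agent = await Agent.create({
  name: 'EnvConfiguredAgent', // Illustrative name
  model: 'gpt-4o'
});
```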
***
# Graph
URL: /docs/framework/graph
Source: /app/src/content/docs/framework/graph.mdx
**Workflow orchestration with dependency management and parallel execution**
## Overview
The Graph system enables you to create complex workflows by connecting tasks and agents with dependencies, conditions, and parallel execution capabilities. It provides a visual and programmatic way to orchestrate multi-step processes, handle branching logic, and coordinate multiple agents working together.
## Creating a Graph
Graphs are composed of nodes (tasks or agents) and edges (connections between them):
```typescript
import { Agent, Graph } from '@astreus-ai/astreus';
// Create a workflow graph with agent reference
const agent = await Agent.create({
  name: 'ContentAgent',
  model: 'gpt-4o'
});
const graph = new Graph({
  name: 'content-creation-pipeline',
  description: 'Research and write technical content',
  defaultAgentId: agent.id // Use the agent ID
}, agent);
// Add task nodes
const researchNodeId = graph.addTaskNode({
  prompt: 'Research the latest TypeScript features and summarize key findings',
  model: 'gpt-4o',
  priority: 10,
  metadata: { type: 'research' }
});
const writeNodeId = graph.addTaskNode({
  prompt: 'Write a comprehensive blog post based on the research findings',
  model: 'gpt-4o',
  dependencies: [researchNodeId], // Depends on research completion
  priority: 5,
  metadata: { type: 'writing' }
});
// Execute the graph
const results = await graph.run();
console.log('Success:', results.success);
console.log('Completed nodes:', results.completedNodes);
console.log('Failed nodes:', results.failedNodes);
console.log('Duration:', results.duration, 'ms');
console.log('Results:', results.results);
```
## Graph Execution Flow
### Node Resolution
Graph analyzes all nodes and their dependencies to determine execution order.
### Parallel Execution
Independent nodes run simultaneously for optimal performance.
### Dependency Waiting
Dependent nodes wait for their prerequisites to complete before starting.
### Result Collection
All node outputs are collected and made available in the final result.
## Advanced Example
Here's a complex workflow with dependencies, parallel execution, and error handling:
```typescript
import { Agent, Graph } from '@astreus-ai/astreus';
// Create workflow graph with default agent
const agent = await Agent.create({
  name: 'OptimizationAgent',
  model: 'gpt-4o'
});
const graph = new Graph({
  name: 'code-optimization-pipeline',
  description: 'Analyze and optimize codebase',
  defaultAgentId: agent.id,
  maxConcurrency: 3, // Allow 3 parallel nodes
  timeout: 300000, // 5 minute timeout
  retryAttempts: 2 // Retry failed nodes twice
}, agent);
// Add task nodes with proper configuration
const analysisNodeId = graph.addTaskNode({
  prompt: 'Analyze the codebase for performance issues and categorize them by severity',
  model: 'gpt-4o',
  priority: 10, // High priority
  metadata: { step: 'analysis', category: 'review' }
});
const optimizationNodeId = graph.addTaskNode({
  prompt: 'Based on the analysis, implement performance optimizations',
  model: 'gpt-4o',
  dependencies: [analysisNodeId], // Depends on analysis
  priority: 8,
  metadata: { step: 'optimization', category: 'implementation' }
});
const testNodeId = graph.addTaskNode({
  prompt: 'Run performance tests and validate the optimizations',
  model: 'gpt-4o',
  dependencies: [optimizationNodeId], // Depends on optimization
  priority: 6,
  stream: true, // Enable streaming for real-time feedback
  metadata: { step: 'testing', category: 'validation' }
});
const documentationNodeId = graph.addTaskNode({
  prompt: 'Document all changes and performance improvements',
  model: 'gpt-4o',
  dependencies: [analysisNodeId], // Can run parallel to optimization
  priority: 5, // Lower priority
  metadata: { step: 'documentation', category: 'docs' }
});
// Add edges (optional, as dependencies already create edges)
graph.addEdge(analysisNodeId, optimizationNodeId);
graph.addEdge(analysisNodeId, documentationNodeId);
graph.addEdge(optimizationNodeId, testNodeId);
// Execute the graph
const results = await graph.run();
console.log('Pipeline results:', results);
console.log('Completed nodes:', results.completedNodes);
console.log('Failed nodes:', results.failedNodes);
console.log('Duration:', results.duration, 'ms');
// Access individual node results
Object.entries(results.results).forEach(([nodeId, result]) => {
  console.log(`Node ${nodeId}:`, result);
});
// Check for errors
if (results.errors && Object.keys(results.errors).length > 0) {
  console.log('Errors:', results.errors);
}
```
## Graph Configuration
Graphs support various configuration options:
```typescript
interface GraphConfig {
  id?: string; // Optional graph ID
  name: string; // Graph name (required)
  description?: string; // Graph description
  defaultAgentId?: number; // Default agent for task nodes
  maxConcurrency?: number; // Max parallel execution (default: 1)
  timeout?: number; // Execution timeout in ms
  retryAttempts?: number; // Retry attempts for failed nodes
  metadata?: MetadataObject; // Custom metadata
  subAgentAware?: boolean; // Enable sub-agent awareness and optimization
  optimizeSubAgentUsage?: boolean; // Optimize sub-agent delegation patterns
  subAgentCoordination?: 'parallel' | 'sequential' | 'adaptive'; // Default sub-agent coordination
}
// Example with full configuration including sub-agent support
const graph = new Graph({
  name: 'advanced-pipeline',
  description: 'Complex workflow with error handling and sub-agent coordination',
  defaultAgentId: agent.id,
  maxConcurrency: 5,
  timeout: 600000, // 10 minutes
  retryAttempts: 3,
  subAgentAware: true,
  optimizeSubAgentUsage: true,
  subAgentCoordination: 'adaptive',
  metadata: { project: 'automation', version: '1.0' }
}, agent);
```
## Node Types and Options
### Task Nodes
```typescript
interface AddTaskNodeOptions {
  name?: string; // Node name for easy referencing
  prompt: string; // Task prompt (required)
  model?: string; // Override model for this task
  agentId?: number; // Override default agent
  stream?: boolean; // Enable streaming for this task
  schedule?: string; // Simple schedule string (e.g., 'daily@09:00')
  dependencies?: string[]; // Node IDs this task depends on
  dependsOn?: string[]; // Node names this task depends on (easier than IDs)
  priority?: number; // Execution priority (higher = earlier)
  metadata?: MetadataObject; // Custom metadata
  useSubAgents?: boolean; // Force enable/disable sub-agent usage for this task
  subAgentDelegation?: 'auto' | 'manual' | 'sequential'; // Sub-agent delegation strategy
  subAgentCoordination?: 'parallel' | 'sequential'; // Sub-agent coordination pattern
}
```
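Name-based dependencies (`dependsOn`) are often easier to wire than raw node IDs. A minimal sketch reusing `addTaskNode` as shown above; the task names are illustrative:
```typescript
// First task, referenced later by its name rather than its returned ID
graph.addTaskNode({
  name: 'Draft Outline',
  prompt: 'Outline a short article about TypeScript generics'
});

// Depends on the task above by name (equivalent to dependencies: [nodeId])
graph.addTaskNode({
  name: 'Write Article',
  prompt: 'Write the article based on the outline',
  dependsOn: ['Draft Outline']
});
```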
### Agent Nodes
```typescript
interface AddAgentNodeOptions {
  agentId: number; // Agent ID (required)
  dependencies?: string[]; // Node IDs this agent depends on
  priority?: number; // Execution priority
  metadata?: MetadataObject; // Custom metadata
}
```
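A sketch of how these options might be used; note that the `addAgentNode` method name is an assumption inferred from this interface and does not appear elsewhere in these docs:
```typescript
// Hypothetical call shape (method name assumed from AddAgentNodeOptions)
const agentNodeId = graph.addAgentNode({
  agentId: analyst.id, // An agent created earlier with Agent.create
  dependencies: [researchNodeId], // Run after the research task completes
  priority: 5,
  metadata: { role: 'analysis' }
});
```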
## Sub-Agent Configuration Options
When configuring graphs with sub-agent support, you have comprehensive control over delegation and coordination:
### Graph-Level Sub-Agent Configuration
* **subAgentAware**: Enables automatic detection and optimization of sub-agent opportunities across the graph
* **optimizeSubAgentUsage**: Enables real-time performance monitoring and automatic strategy adjustment for better efficiency
* **subAgentCoordination**: Sets the default coordination pattern:
* `'parallel'`: Sub-agents work simultaneously across different nodes
* `'sequential'`: Sub-agents work in dependency order, passing context between executions
* `'adaptive'`: Dynamically chooses the best coordination pattern based on task complexity and dependencies
### Node-Level Sub-Agent Configuration
Each task node can override graph-level settings with specific sub-agent behavior:
* **useSubAgents**: Force enable or disable sub-agent delegation for specific nodes
* **subAgentDelegation**: Control how tasks are distributed to sub-agents at the node level
* **subAgentCoordination**: Override the graph's default coordination pattern for specific nodes
### Enhanced Graph Workflow with Sub-Agents
```typescript
import { Graph, Agent } from '@astreus-ai/astreus';
// Create specialized sub-agents
const researcher = await Agent.create({
  name: 'DataResearcher',
  systemPrompt: 'You specialize in gathering and analyzing data from multiple sources.'
});
const analyst = await Agent.create({
  name: 'TechnicalAnalyst',
  systemPrompt: 'You provide technical insights and recommendations.'
});
const writer = await Agent.create({
  name: 'TechnicalWriter',
  systemPrompt: 'You create clear, comprehensive technical documentation.'
});
// Main coordinator with sub-agents
const coordinator = await Agent.create({
  name: 'ProjectCoordinator',
  systemPrompt: 'You orchestrate complex projects using specialized team members.',
  subAgents: [researcher, analyst, writer]
});
// Create sub-agent optimized graph
const projectGraph = new Graph({
  name: 'Technical Documentation Pipeline',
  description: 'Automated technical documentation creation with specialized agents',
  defaultAgentId: coordinator.id,
  maxConcurrency: 3,
  subAgentAware: true,
  optimizeSubAgentUsage: true,
  subAgentCoordination: 'adaptive'
}, coordinator);
// Research phase with automatic sub-agent delegation
const researchNode = projectGraph.addTaskNode({
  name: 'Market Research',
  prompt: 'Research current trends in cloud computing and serverless architecture',
  useSubAgents: true,
  subAgentDelegation: 'auto',
  priority: 10,
  metadata: { phase: 'research', category: 'data-gathering' }
});
// Analysis phase with sequential sub-agent coordination
const analysisNode = projectGraph.addTaskNode({
  name: 'Technical Analysis',
  prompt: 'Analyze research findings and identify key technical patterns',
  dependencies: [researchNode],
  useSubAgents: true,
  subAgentDelegation: 'auto',
  subAgentCoordination: 'sequential',
  priority: 8,
  metadata: { phase: 'analysis', category: 'insights' }
});
// Documentation phase with parallel sub-agent work
const docNode = projectGraph.addTaskNode({
  name: 'Documentation Creation',
  prompt: 'Create comprehensive technical documentation and executive summary',
  dependencies: [analysisNode],
  useSubAgents: true,
  subAgentDelegation: 'manual',
  subAgentCoordination: 'parallel',
  priority: 6,
  metadata: { phase: 'documentation', category: 'deliverables' }
});
// Execute with performance monitoring
const result = await projectGraph.run();
// Access sub-agent performance insights
if (projectGraph.generateSubAgentPerformanceReport) {
  const performanceReport = projectGraph.generateSubAgentPerformanceReport();
  console.log('Sub-agent performance:', performanceReport);
}
console.log('Pipeline completed:', result.success);
console.log('Node results:', result.results);
```
***
# Install
URL: /docs/framework/install
Source: /app/src/content/docs/framework/install.mdx
import { DocImage } from '@/components/DocImage';
## Node.js Version Requirements
Astreus requires Node.js >=16.0.0. Here's how to check your version:
```bash
node --version
```
If you need to update Node.js, visit [nodejs.org](https://nodejs.org/downloads).
## Installing Astreus
### 1. Install Astreus
Install Astreus using npm or your preferred package manager:
```bash
npm install @astreus-ai/astreus
```
Or with yarn:
```bash
yarn add @astreus-ai/astreus
```
Or with pnpm:
```bash
pnpm add @astreus-ai/astreus
```
Installation successful! You're ready to create your first AI agent.
***
# Intro
URL: /docs/framework/intro
Source: /app/src/content/docs/framework/intro.mdx
import { Card, Cards } from 'fumadocs-ui/components/card';
import { Brain, Network, GitBranch, Puzzle, Eye, MessageSquare, Layers, Users } from 'lucide-react';
**Open-source AI agent framework for building autonomous systems that solve real-world tasks effectively.**
Astreus is the developer-friendly AI agent framework that lets you build powerful, production-ready agents in minutes. Rapidly develop sophisticated AI systems with full control over capabilities while maintaining clean, maintainable code.
## Installation
```bash
npm install @astreus-ai/astreus
```
## Basic Usage
```typescript
import { Agent } from '@astreus-ai/astreus';
const agent = await Agent.create({
name: 'Assistant',
model: 'gpt-4o',
memory: true
});
const response = await agent.ask('How can you help me?');
```
## Core Features
* **Sub-Agents**: Intelligent task delegation with specialized agent coordination, hierarchical workflows, and LLM-powered assignment.
* **Advanced Memory System**: Per-agent persistent memory with automatic context integration and vector search capabilities.
* **Task Orchestration**: Structured task execution with status tracking, dependency management, and streaming support.
* **Graph Workflows**: Complex workflow orchestration with conditional execution, parallel processing, and sub-agent integration.
* **MCP Integration**: Model Context Protocol support for seamless external tool and service connections.
* **Plugin System**: Extensible tool integration with JSON schema validation and automatic LLM function calling.
* **Vision Processing**: Built-in image analysis and document processing capabilities for multimodal interactions.
* **Knowledge Base**: RAG integration with document chunking, vector embeddings, and similarity search.
***
# Knowledge
URL: /docs/framework/knowledge
Source: /app/src/content/docs/framework/knowledge.mdx
import { DocImage } from '@/components/DocImage';
**RAG integration with document processing and vector search**
import { Step, Steps } from 'fumadocs-ui/components/steps';
## Overview
The Knowledge system provides retrieval-augmented generation (RAG) capabilities, allowing agents to access and utilize external documents in their responses. It automatically processes documents, creates vector embeddings, and enables semantic search for relevant information. Agents with knowledge can provide more accurate, contextual responses based on your documents.
## Enabling Knowledge
Enable knowledge for an agent by setting the `knowledge` option to `true`:
```typescript
import { Agent } from '@astreus-ai/astreus';
const agent = await Agent.create({
name: 'KnowledgeAgent',
model: 'gpt-4o',
knowledge: true, // Enable knowledge base access (default: false)
embeddingModel: 'text-embedding-3-small' // Optional: specify embedding model
});
```
## Adding Documents
### Add Text Content
Add content directly as a string:
```typescript
await agent.addKnowledge(
'Your important content here',
'Document Title',
{ category: 'documentation' }
);
```
### Add from File
Add content from supported file types:
```typescript
// Add PDF file
await agent.addKnowledgeFromFile(
'/path/to/document.pdf',
{ source: 'manual', version: '1.0' }
);
// Add text file
await agent.addKnowledgeFromFile('/path/to/notes.txt');
```
### Add from Directory
Process all supported files in a directory:
```typescript
await agent.addKnowledgeFromDirectory(
'/path/to/docs',
{ project: 'documentation' }
);
```
## Supported File Types
* **Text files**: `.txt`, `.md`, `.json`
* **PDF files**: `.pdf` (with text extraction)
## How It Works
The knowledge system follows a sophisticated processing pipeline:
### Document Processing
Documents are stored and indexed in the knowledge database with metadata.
### Text Chunking
Content is split into chunks (1000 characters with 200 character overlap) for optimal retrieval.
The overlap ensures context continuity:
$\text{overlap ratio} = \frac{200}{1000} = 0.2 = 20\%$
This prevents important information from being split across chunk boundaries.
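To see how those boundaries fall, here is an illustrative sketch (not framework code) of the chunking arithmetic with the 1000/200 settings described above:
```typescript
// Illustrative only: compute chunk start offsets for 1000-char chunks with 200-char overlap
function chunkOffsets(textLength: number, chunkSize = 1000, overlap = 200): number[] {
  const step = chunkSize - overlap; // each chunk starts 800 chars after the previous one
  const offsets: number[] = [];
  for (let start = 0; start < textLength; start += step) {
    offsets.push(start);
    if (start + chunkSize >= textLength) break; // this chunk already reaches the end
  }
  return offsets;
}

console.log(chunkOffsets(2500)); // [0, 800, 1600]: chunks 0-1000, 800-1800, 1600-2500
```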
### Vector Embeddings
Each chunk is converted to vector embeddings using OpenAI or Ollama embedding models.
Common embedding dimensions:
* `text-embedding-3-small`: 1536 dimensions
* `text-embedding-3-large`: 3072 dimensions
* `text-embedding-ada-002`: 1536 dimensions
The Euclidean distance between vectors can also be used:
$d(\vec{p}, \vec{q}) = \sqrt{\sum_{i=1}^{n}(p_i - q_i)^2}$
### Semantic Search
When agents receive queries, relevant chunks are retrieved using cosine similarity search.
The similarity between query and document vectors is calculated using:
$\text{cosine similarity} = \cos(\theta) = \frac{\vec{A} \cdot \vec{B}}{||\vec{A}|| \cdot ||\vec{B}||} = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^2} \cdot \sqrt{\sum_{i=1}^{n} B_i^2}}$
Where:
* $\vec{A}$ is the query embedding vector
* $\vec{B}$ is the document chunk embedding vector
* Higher values (closer to 1) indicate greater similarity
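The same formula in code, as a minimal standalone sketch (the framework computes this internally; the function here is only for illustration):
```typescript
// Cosine similarity between two equal-length embedding vectors
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```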
### Context Integration
Retrieved information is automatically added to the agent's context for enhanced responses.
## Example Usage
Here's a complete example of using knowledge with an agent:
```typescript
import { Agent } from '@astreus-ai/astreus';
// Create agent with knowledge enabled
const agent = await Agent.create({
name: 'DocumentAssistant',
model: 'gpt-4o',
knowledge: true,
embeddingModel: 'text-embedding-3-small', // Optional: specify embedding model
systemPrompt: 'You are a helpful assistant with access to company documentation.'
});
// Add documentation
await agent.addKnowledgeFromFile('./company-handbook.pdf', {
type: 'handbook',
department: 'hr'
});
await agent.addKnowledge(`
Our API uses REST principles with JSON responses.
Authentication is done via Bearer tokens.
Rate limiting is 1000 requests per hour.
`, 'API Documentation', {
type: 'api-docs',
version: '2.0'
});
// Query with automatic knowledge retrieval
const response = await agent.ask('What is our API rate limit?');
console.log(response);
// The agent will automatically search the knowledge base and include relevant context
// Manual knowledge search
const results = await agent.searchKnowledge('API authentication', 5, 0.7);
results.forEach(result => {
console.log(`Similarity: ${result.similarity}`);
console.log(`Content: ${result.content}`);
});
```
## Managing Knowledge
### Available Methods
```typescript
// List all documents with metadata
const documents = await agent.getKnowledgeDocuments();
// Returns: Array<{ id: number; title: string; created_at: string }>
// Delete specific document by ID
const success = await agent.deleteKnowledgeDocument(documentId);
// Returns: boolean indicating success
// Delete specific chunk by ID
const success = await agent.deleteKnowledgeChunk(chunkId);
// Returns: boolean indicating success
// Clear all knowledge for this agent
await agent.clearKnowledge();
// Returns: void
// Search with custom parameters
const results = await agent.searchKnowledge(
'search query',
10, // limit: max results (default: 5)
0.8 // threshold: similarity threshold (0-1, default: 0.7)
);
// Returns: Array<{ content: string; metadata: MetadataObject; similarity: number }>
// Get relevant context for a query
const context = await agent.getKnowledgeContext(
'query text',
5 // limit: max chunks to include (default: 5)
);
// Returns: string with concatenated relevant content
// Expand context around a specific chunk
const expandedChunks = await agent.expandKnowledgeContext(
documentId, // Document ID
chunkIndex, // Chunk index within document
2, // expandBefore: chunks to include before (default: 1)
2 // expandAfter: chunks to include after (default: 1)
);
// Returns: Array with expanded chunk content
```
## Configuration
### Environment Variables
```bash
# Database (required)
KNOWLEDGE_DB_URL=postgresql://user:password@host:port/database
# API key for embeddings (uses same provider as agent's model)
OPENAI_API_KEY=your_openai_key
```
### Embedding Model Configuration
Specify the embedding model directly in the agent configuration:
```typescript
const agent = await Agent.create({
name: 'KnowledgeAgent',
model: 'gpt-4o',
embeddingModel: 'text-embedding-3-small', // Specify embedding model here
knowledge: true
});
```
***
# LLM
URL: /docs/framework/llm
Source: /app/src/content/docs/framework/llm.mdx
import { DocImage } from '@/components/DocImage';
**Unified interface for multiple LLM providers with automatic routing**
import { Step, Steps } from 'fumadocs-ui/components/steps';
## Overview
The LLM abstraction layer provides seamless integration with multiple AI providers, allowing you to switch between OpenAI, Claude, Gemini, and Ollama without changing your code. It handles provider-specific implementations, message formatting, and streaming while providing a consistent API across all providers.
## Supported Providers
Astreus supports four major LLM providers with automatic model routing:
### OpenAI
**All 14 supported models:**
* **Latest**: `gpt-4.5`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `o4-mini`, `o4-mini-high`, `o3`
* **Stable**: `gpt-4o`, `gpt-4o-mini`, `gpt-4-turbo`, `gpt-4`, `gpt-3.5-turbo`, `gpt-3.5-turbo-16k`, `gpt-3.5-turbo-instruct`
* **API Key**: Set `OPENAI_API_KEY` environment variable
### Anthropic Claude
**All 9 supported models:**
* **Latest**: `claude-sonnet-4-20250514`, `claude-opus-4-20250514`, `claude-3.7-sonnet-20250224`
* **Stable**: `claude-3-5-sonnet-20241022`, `claude-3-5-sonnet-20240620`, `claude-3-5-haiku-20241022`, `claude-3-opus-20240229`, `claude-3-sonnet-20240229`, `claude-3-haiku-20240307`
* **API Key**: Set `ANTHROPIC_API_KEY` environment variable
### Google Gemini
**All 12 supported models:**
* **Latest**: `gemini-2.5-pro`, `gemini-2.5-pro-deep-think`, `gemini-2.5-flash`, `gemini-2.5-flash-lite`
* **Stable**: `gemini-2.0-flash`, `gemini-2.0-flash-thinking`, `gemini-2.0-flash-lite`, `gemini-2.0-pro-experimental`, `gemini-1.5-pro`, `gemini-1.5-flash`, `gemini-1.5-flash-8b`, `gemini-pro`
* **API Key**: Set `GOOGLE_API_KEY` environment variable
### Ollama (Local)
**All 32 supported models:**
* **Latest**: `deepseek-r1`, `deepseek-v3`, `deepseek-v2.5`, `deepseek-coder`, `deepseek-coder-v2`, `qwen3`, `qwen2.5-coder`, `llama3.3`, `gemma3`, `phi4`
* **Popular**: `mistral-small`, `codellama`, `llama3.2`, `llama3.1`, `qwen2.5`, `gemma2`, `phi3`, `mistral`, `codegemma`, `wizardlm2`
* **Additional**: `dolphin-mistral`, `openhermes`, `deepcoder`, `stable-code`, `wizardcoder`, `magicoder`, `solar`, `yi`, `zephyr`, `orca-mini`, `vicuna`
* **Configuration**: Set `OLLAMA_BASE_URL` (default: `http://localhost:11434`)
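Because model names route automatically to their provider, pointing an agent at a local Ollama model is just a matter of choosing one of the names above. A minimal sketch, assuming an Ollama server is running at `OLLAMA_BASE_URL`:
```typescript
import { Agent } from '@astreus-ai/astreus';

// 'llama3.3' routes to the Ollama provider automatically
const localAgent = await Agent.create({
  name: 'LocalAgent',
  model: 'llama3.3'
});

const reply = await localAgent.ask('Summarize the benefits of running models locally.');
console.log(reply);
```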
## Configuration
### Environment Variables
Set up your API keys and configuration:
```bash
# OpenAI
export OPENAI_API_KEY="your-openai-key"
export OPENAI_BASE_URL="https://api.openai.com/v1" # Optional
# Anthropic Claude
export ANTHROPIC_API_KEY="your-anthropic-key"
export ANTHROPIC_BASE_URL="https://api.anthropic.com" # Optional
# Google Gemini
export GOOGLE_API_KEY="your-google-key"
# Ollama (Local)
export OLLAMA_BASE_URL="http://localhost:11434" # Optional
```
### Agent Configuration
Specify the model when creating agents:
```typescript
import { Agent } from '@astreus-ai/astreus';
const agent = await Agent.create({
name: 'MyAgent',
model: 'gpt-4.5', // Model automatically routes to correct provider
temperature: 0.7,
maxTokens: 2000
});
```
## Usage Examples
### Basic LLM Usage
```typescript
import { getLLM } from '@astreus-ai/astreus';
const llm = getLLM();
// Generate response
const response = await llm.generateResponse({
model: 'claude-sonnet-4-20250514',
messages: [{ role: 'user', content: 'Explain quantum computing' }],
temperature: 0.7,
maxTokens: 1000
});
console.log(response.content);
```
### Streaming Responses
```typescript
// Stream response in real-time
for await (const chunk of llm.generateStreamResponse({
model: 'gpt-4.5',
messages: [{ role: 'user', content: 'Write a story about AI' }],
stream: true
})) {
if (!chunk.done) {
process.stdout.write(chunk.content);
}
}
```
### Function Calling
```typescript
const response = await llm.generateResponse({
model: 'gpt-4o',
messages: [{ role: 'user', content: 'What\'s the weather in Tokyo?' }],
tools: [{
type: 'function',
function: {
name: 'get_weather',
description: 'Get current weather information',
parameters: {
type: 'object',
properties: {
location: {
type: 'string',
description: 'City name'
}
},
required: ['location']
}
}
}]
});
// Handle tool calls
if (response.toolCalls) {
response.toolCalls.forEach(call => {
console.log(`Tool: ${call.function.name}`);
console.log(`Args: ${call.function.arguments}`);
});
}
```
## LLM Options
Configure LLM behavior with these options:
```typescript
interface LLMRequestOptions {
model: string; // Required: Model identifier
messages: LLMMessage[]; // Required: Conversation history
temperature?: number; // Creativity level (0.0-1.0, default: 0.7)
maxTokens?: number; // Max output tokens (default: 4096)
stream?: boolean; // Enable streaming responses
systemPrompt?: string; // System instructions
tools?: Tool[]; // Function calling tools
}
```
### Parameter Details
* **temperature**: Controls randomness (0.0 = deterministic, 1.0 = very creative)
* **maxTokens**: Maximum tokens in the response (varies by model)
* **stream**: Enable real-time streaming for long responses
* **systemPrompt**: Sets behavior and context for the model
* **tools**: Enable function calling capabilities
## Provider Features
| Feature | OpenAI | Claude | Gemini | Ollama |
| ---------------- | ------ | ------ | ------- | ------- |
| Streaming | ✅ | ✅ | ✅ | ✅ |
| Function Calling | ✅ | ✅ | Limited | Limited |
| Token Usage | ✅ | ✅ | Limited | ✅ |
| Custom Base URL | ✅ | ✅ | ❌ | ✅ |
| Local Models | ❌ | ❌ | ❌ | ✅ |
## Model Selection Guide
### For Code Generation
* **Best**: `gpt-4o`, `claude-3-5-sonnet-20241022`, `deepseek-coder`
* **Fast**: `gpt-4o-mini`, `claude-3-5-haiku-20241022`
### For Reasoning Tasks
* **Best**: `claude-opus-4-20250514`, `gpt-4.5`, `o3`
* **Balanced**: `claude-sonnet-4-20250514`, `gpt-4o`
### For Creative Writing
* **Best**: `gpt-4.5`, `claude-3-opus-20240229`
* **Fast**: `gemini-2.5-pro`, `gpt-4o-mini`
### For Privacy/Local Use
* **Best**: `deepseek-r1`, `llama3.3`, `qwen3`
* **Code**: `deepseek-coder`, `codellama`
***
# MCP
URL: /docs/framework/mcp
Source: /app/src/content/docs/framework/mcp.mdx
import { DocImage } from '@/components/DocImage';
**Model Context Protocol integration for connecting agents with external tools and services**
## Overview
MCP (Model Context Protocol) enables Astreus agents to connect with external tools and services seamlessly. Define MCP servers as simple objects with automatic environment variable loading and use them at different levels - agent, task, or conversation level.
## Creating MCP Servers
Define MCP servers as array objects with automatic environment loading:
```typescript
import { Agent } from '@astreus-ai/astreus';
// Define MCP servers array
const mcpServers = [
{
name: 'github',
command: "npx",
args: ["-y", "@modelcontextprotocol/server-github"]
// GITHUB_PERSONAL_ACCESS_TOKEN loaded from .env automatically
},
{
name: 'filesystem',
command: "npx",
args: ["-y", "@modelcontextprotocol/server-filesystem", "/Users/username/Documents"]
}
];
const agent = await Agent.create({
name: 'DevAgent',
model: 'gpt-4'
});
// Add MCP servers to agent
await agent.addMCPServers(mcpServers);
// Use automatically in conversations
const response = await agent.ask("List my repositories and save to repos.txt");
```
## Example
Here's a complete example showing MCP integration:
```typescript
import { Agent } from '@astreus-ai/astreus';
// Create agent
const agent = await Agent.create({
name: 'DevAssistant',
model: 'gpt-4',
systemPrompt: 'You are a helpful development assistant with access to various tools.'
});
// Add MCP servers
await agent.addMCPServers([
{
name: 'github',
command: "npx",
args: ["-y", "@modelcontextprotocol/server-github"]
},
{
name: 'filesystem',
command: "npx",
args: ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
},
{
name: 'search',
command: "npx",
args: ["-y", "@modelcontextprotocol/server-brave-search"]
}
]);
// Agent now has access to GitHub, filesystem, and search tools
const response = await agent.ask(`
Check my latest repositories,
create a summary file in my project directory,
and search for TypeScript best practices
`);
console.log(response);
```
## Environment Variables
MCP servers automatically load environment variables from your `.env` file:
```bash
# .env
GITHUB_PERSONAL_ACCESS_TOKEN=ghp_xxxxxxxxxxxx
BRAVE_API_KEY=your_brave_api_key
GOOGLE_APPLICATION_CREDENTIALS=/path/to/credentials.json
```
No need to specify environment variables in code - they're loaded automatically and securely.
## Server Types
### Local Servers (stdio)
For servers that run as local processes:
```typescript
const localServers = [
{
name: 'sqlite',
command: "npx",
args: ["-y", "@modelcontextprotocol/server-sqlite", "--db-path", "/path/to/db.sqlite"],
cwd: "/working/directory"
}
];
```
### Remote Servers (SSE)
For servers that connect via HTTP/SSE:
```typescript
const remoteServers = [
{
name: 'api-server',
url: "https://api.example.com/mcp/events"
}
];
```
## Multi-Level Usage
### Agent Level
Available for all tasks and conversations:
```typescript
// Agent-level: Available everywhere
await agent.addMCPServers([
{
name: 'filesystem',
command: "npx",
args: ["-y", "@modelcontextprotocol/server-filesystem", "/Users/username/Documents"]
}
]);
```
### Task Level
Available for specific tasks:
```typescript
// Task-level: Available for this task only
const task = await agent.createTask({
prompt: "Analyze my GitHub repositories",
mcpServers: [
{
name: 'github',
command: "npx",
args: ["-y", "@modelcontextprotocol/server-github"]
}
]
});
```
### Conversation Level
Available for single conversations:
```typescript
// Conversation-level: Available for this conversation only
const response = await agent.ask("Search for TypeScript news", {
mcpServers: [
{
name: 'search',
command: "npx",
args: ["-y", "@modelcontextprotocol/server-brave-search"]
}
]
});
```
## Manual Tool Access
Access MCP tools programmatically:
```typescript
// List available MCP tools
const tools = agent.getMCPTools();
console.log('Available MCP tools:', tools.map(t => t.name));
// Call specific MCP tool
const result = await agent.callMCPTool('github:list_repos', {
owner: 'username'
});
```
MCP integration provides powerful external tool access while maintaining security and simplicity.
***
# Memory
URL: /docs/framework/memory
Source: /app/src/content/docs/framework/memory.mdx
import { DocImage } from '@/components/DocImage';
**Persistent conversation memory with vector search and automatic context integration**
## Overview
The Memory system provides agents with long-term memory capabilities, enabling them to remember past conversations, learn from interactions, and maintain context across sessions. When memory is enabled, agents automatically store and retrieve relevant information from previous conversations, creating a more personalized and context-aware experience.
## Enabling Memory
Enable memory for an agent by setting the `memory` option to `true`:
```typescript
import { Agent } from '@astreus-ai/astreus';
const agent = await Agent.create({
name: 'MemoryAgent',
model: 'gpt-4o',
memory: true // Enable persistent memory
});
```
## Basic Usage
Here's a complete example showing how memory works across conversations:
```typescript
import { Agent } from '@astreus-ai/astreus';
// Create an agent with memory
const agent = await Agent.create({
name: 'PersonalAssistant',
model: 'gpt-4o',
memory: true,
systemPrompt: 'You are a helpful personal assistant who remembers user preferences.'
});
// First conversation
const response1 = await agent.ask('My name is John and I love TypeScript');
console.log(response1);
// Output: "Nice to meet you, John! It's great that you love TypeScript..."
// Later conversation - agent remembers
const response2 = await agent.ask('What programming language do I like?');
console.log(response2);
// Output: "You mentioned that you love TypeScript, John!"
// Memory persists even after restarting
const sameAgent = await Agent.create({
name: 'PersonalAssistant', // Same name retrieves existing memories
model: 'gpt-4o',
memory: true
});
const response3 = await sameAgent.ask('Do you remember my name?');
console.log(response3);
// Output: "Yes, your name is John!"
```
## Memory Methods
When memory is enabled, agents have access to these memory management methods:
```typescript
// Add a memory manually
const memory = await agent.addMemory(
'Important project information: Budget is $50k',
{ type: 'project', category: 'budget' }
);
// Remember conversation with role context
const userMemory = await agent.rememberConversation(
'I prefer TypeScript over JavaScript',
'user'
);
// Get a specific memory by ID
const existingMemory = await agent.getMemory(memory.id);
// Search memories by content (semantic search with embeddings)
const budgetMemories = await agent.searchMemories('budget', {
limit: 5,
startDate: new Date('2024-01-01')
});
// Vector similarity search for semantic matching
const happyMemories = await agent.searchMemoriesBySimilarity('joyful moments', {
similarityThreshold: 0.7, // Minimum similarity score
limit: 10
});
// List all memories with options
const allMemories = await agent.listMemories({
limit: 20,
orderBy: 'createdAt',
order: 'desc'
});
// Update a memory
const updatedMemory = await agent.updateMemory(memory.id, {
content: 'Updated budget: $75k',
metadata: { type: 'project', category: 'budget', updated: true }
});
// Delete a specific memory
const deleted = await agent.deleteMemory(memory.id);
// Generate embedding for existing memory (migration/repair)
const result = await agent.generateEmbeddingForMemory(memory.id);
if (result.success) {
console.log('✅ Embedding generated successfully');
}
// Clear all memories
const deletedCount = await agent.clearMemories();
```
## Similarity Search Mathematics
When searching memories using vector similarity, the system calculates similarity scores between query and memory embeddings:
### Cosine Similarity Score
$\text{similarity} = \frac{\vec{q} \cdot \vec{m}}{||\vec{q}|| \cdot ||\vec{m}||} \in [-1, 1]$
Where:
* $\vec{q}$ is the query embedding vector
* $\vec{m}$ is the memory embedding vector
* Values closer to 1 indicate near-identical content; embedding vectors rarely point in opposite directions, so in practice scores fall between 0 and 1
### Distance-Based Score
For distance metrics, the similarity score is calculated as:
$\text{score} = \frac{1}{1 + d(\vec{q}, \vec{m})}$
Where $d$ is the Euclidean distance between vectors.
### Threshold Filtering
Memories are returned only if:
$\text{similarity} \geq \theta$
Where $\theta$ is the `similarityThreshold` parameter (default: 0.7).
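Put together, here is an illustrative sketch of the scoring and filtering logic (the framework applies this internally during similarity search; the `StoredMemory` type is a stand-in for the real memory record):
```typescript
interface StoredMemory {
  content: string;
  embedding: number[];
}

// Euclidean distance between query and memory embeddings
function euclideanDistance(q: number[], m: number[]): number {
  return Math.sqrt(q.reduce((sum, qi, i) => sum + (qi - m[i]) ** 2, 0));
}

// Distance-based score: 1 when vectors coincide, approaching 0 as they diverge
function distanceScore(q: number[], m: number[]): number {
  return 1 / (1 + euclideanDistance(q, m));
}

// Threshold filtering: keep only memories scoring at or above the threshold
function filterByThreshold(query: number[], memories: StoredMemory[], threshold = 0.7): StoredMemory[] {
  return memories.filter(mem => distanceScore(query, mem.embedding) >= threshold);
}
```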
## Memory Object Structure
```typescript
interface Memory {
id?: number; // Unique memory identifier
agentId: number; // ID of the owning agent
content: string; // Memory content
embedding?: number[]; // Vector embedding (auto-generated)
metadata?: MetadataObject; // Custom metadata
createdAt?: Date; // When memory was created
updatedAt?: Date; // Last update time
}
interface MemorySearchOptions {
limit?: number; // Max results (default: 10 for search, 100 for list)
offset?: number; // Skip results (default: 0)
orderBy?: 'createdAt' | 'updatedAt' | 'relevance'; // Sort field
order?: 'asc' | 'desc'; // Sort order (default: 'desc')
startDate?: Date; // Filter from date
endDate?: Date; // Filter to date
similarityThreshold?: number; // Similarity threshold (0-1, default: 0.7)
useEmbedding?: boolean; // Use embedding search (default: true)
}
```
***
# Plugin
URL: /docs/framework/plugin
Source: /app/src/content/docs/framework/plugin.mdx
import { DocImage } from '@/components/DocImage';
**Extensible tool system with JSON schema validation and automatic function calling**
import { Step, Steps } from 'fumadocs-ui/components/steps';
## Overview
Plugins extend agent capabilities by providing tools that can be called during conversations. The plugin system is built around a decorator pattern that enhances agents with tool execution capabilities. It provides automatic parameter validation, error handling, and seamless LLM integration with function calling.
## Built-in Tools
Astreus comes with several built-in tools available to all agents:
### Knowledge Tools
* **search\_knowledge**: Search through the agent's knowledge base for relevant information
* `query` (string, required): Search query
* `limit` (number, optional): Maximum results (default: 5)
* `threshold` (number, optional): Similarity threshold (default: 0.7)
### Vision Tools
* **analyze\_image**: General image analysis with custom prompts
* **describe\_image**: Generate accessibility-friendly descriptions
* **extract\_text\_from\_image**: OCR capabilities for text extraction
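Built-in tools share the same tool-call interface as plugin tools, so they can also be invoked manually via `executeToolCall` (described later on this page). A sketch using the `search_knowledge` parameters listed above, assuming an `agent` with knowledge enabled:
```typescript
const result = await agent.executeToolCall({
  id: 'call-knowledge-1',
  name: 'search_knowledge',
  parameters: {
    query: 'API rate limits',
    limit: 3,       // optional, default: 5
    threshold: 0.7  // optional, default: 0.7
  }
});
console.log(result.success ? result.data : result.error);
```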
## Creating Custom Plugins
### Define Your Tool
Create a tool definition with handler function:
```typescript
import { ToolDefinition, ToolContext } from '@astreus-ai/astreus';
const weatherTool: ToolDefinition = {
name: 'get_weather',
description: 'Get current weather information for a location',
parameters: {
location: {
name: 'location',
type: 'string',
description: 'City name or location',
required: true
},
units: {
name: 'units',
type: 'string',
description: 'Temperature units (celsius or fahrenheit)',
required: false
}
},
handler: async (params: Record<string, any>, context?: ToolContext) => {
try {
// Your tool implementation
const weather = await fetchWeather(params.location, params.units);
return {
success: true,
data: {
temperature: weather.temp,
conditions: weather.conditions,
location: params.location
}
};
} catch (error) {
return {
success: false,
error: error instanceof Error ? error.message : 'Unknown error'
};
}
}
};
```
### Create the Plugin
Bundle your tools into a plugin:
```typescript
import { PluginDefinition } from '@astreus-ai/astreus';
const weatherPlugin: PluginDefinition = {
name: 'weather-plugin',
version: '1.0.0',
description: 'Weather information tools',
tools: [weatherTool],
// Optional: Plugin initialization
initialize: async (config?: Record<string, any>) => {
console.log('Weather plugin initialized');
},
// Optional: Plugin cleanup
cleanup: async () => {
console.log('Weather plugin cleaned up');
}
};
```
### Register with Agent
Register your plugin with an agent:
```typescript
import { Agent } from '@astreus-ai/astreus';
const agent = await Agent.create({
name: 'WeatherAgent',
model: 'gpt-4o'
});
// Register the plugin
await agent.registerPlugin(weatherPlugin);
```
## Tool Parameter Types
The plugin system supports comprehensive parameter validation:
```typescript
// Parameter type definitions
interface ToolParameter {
name: string; // Parameter name
type: 'string' | 'number' | 'boolean' | 'object' | 'array';
description: string; // Parameter description
required: boolean; // Whether parameter is required
enum?: string[]; // Allowed values (for string types)
default?: any; // Default value
properties?: Record<string, ToolParameter>; // For object types
items?: ToolParameter; // For array types
}
```
### Parameter Examples
```typescript
const advancedTool: ToolDefinition = {
name: 'process_data',
description: 'Process data with various options',
parameters: {
// String with enum values
format: {
name: 'format',
type: 'string',
description: 'Output format',
required: true,
enum: ['json', 'csv', 'xml']
},
// Number with default
limit: {
name: 'limit',
type: 'number',
description: 'Maximum records to process',
required: false,
default: 100
},
// Object with nested properties
options: {
name: 'options',
type: 'object',
description: 'Processing options',
required: false,
properties: {
includeHeaders: {
name: 'includeHeaders',
type: 'boolean',
description: 'Include column headers',
required: false,
default: true
}
}
},
// Array of strings
fields: {
name: 'fields',
type: 'array',
description: 'Fields to include',
required: false,
items: {
name: 'field',
type: 'string',
description: 'Field name'
}
}
},
handler: async (params) => {
// Tool implementation
return { success: true, data: params };
}
};
```
## Using Tools in Conversations
### Automatic Tool Usage
Agents with registered plugins can automatically use tools during conversations:
```typescript
const agent = await Agent.create({
name: 'AssistantAgent',
model: 'gpt-4o'
});
await agent.registerPlugin(weatherPlugin);
// Agent can automatically call tools based on conversation
const response = await agent.ask("What's the weather like in Tokyo?");
// Agent will automatically call get_weather tool and incorporate results
console.log(response);
// "The current weather in Tokyo is 22°C with clear skies..."
```
### Manual Tool Execution
You can also execute tools manually:
```typescript
// Execute single tool
const result = await agent.executeToolCall({
id: 'call-123',
name: 'get_weather',
parameters: {
location: 'New York',
units: 'celsius'
}
});
console.log(result.success ? result.data : result.error);
// Execute multiple tools
const results = await agent.executeToolCalls([
{ id: 'call-1', name: 'get_weather', parameters: { location: 'Tokyo' } },
{ id: 'call-2', name: 'get_weather', parameters: { location: 'London' } }
]);
```
### Tool-Enhanced Tasks
Use tools in structured tasks via the Task module:
```typescript
const task = await agent.createTask({
prompt: "Compare the weather in Tokyo, London, and New York",
useTools: true
});
const result = await agent.executeTask(task.id, {
stream: true,
onChunk: (chunk) => {
console.log(chunk);
}
});
```
## Tool Context and Metadata
Tools receive execution context with useful information:
```typescript
const contextAwareTool: ToolDefinition = {
name: 'log_action',
description: 'Log an action with context',
parameters: {
action: {
name: 'action',
type: 'string',
description: 'Action to log',
required: true
}
},
handler: async (params, context) => {
// Access execution context
console.log(`Agent ${context?.agentId} performed: ${params.action}`);
console.log(`Task ID: ${context?.taskId}`);
console.log(`User ID: ${context?.userId}`);
console.log(`Metadata:`, context?.metadata);
return {
success: true,
data: { logged: true, timestamp: new Date().toISOString() }
};
}
};
```
***
# Quickstart
URL: /docs/framework/quickstart
Source: /app/src/content/docs/framework/quickstart.mdx
import { DocImage } from '@/components/DocImage';
import { Step, Steps } from 'fumadocs-ui/components/steps';
**Build your first AI agent with Astreus in under 2 minutes**
Let's create a simple agent that can execute tasks and respond intelligently.
Before we proceed, make sure you have Astreus installed. If you haven't installed it yet, follow the [installation guide](/docs/framework/install).
### Create Environment File
Create a `.env` file in your project root and add your OpenAI API key:
```bash
touch .env
```
Add your API key to the `.env` file:
```bash
OPENAI_API_KEY=sk-your-openai-api-key-here
```
### Create your First Agent
Create an agent with memory and system prompt:
```typescript
import { Agent } from '@astreus-ai/astreus';
// Create agent
const agent = await Agent.create({
name: 'ResearchAgent',
model: 'gpt-4o',
memory: true,
systemPrompt: 'You are an expert research assistant.'
});
```
### Create and Execute Task
Create a task and execute it with your agent:
```typescript
import { Agent } from '@astreus-ai/astreus';
// Create agent
const agent = await Agent.create({
name: 'ResearchAgent',
model: 'gpt-4o',
memory: true,
systemPrompt: 'You are an expert research assistant.'
});
// Create a task
const task = await agent.createTask({
prompt: "Research latest news in Anthropic and OpenAI"
});
// Execute the task
const result = await agent.executeTask(task.id);
console.log(result.response);
```
### Build a Graph Workflow
Create a workflow graph with multiple tasks:
```typescript
import { Agent, Graph } from '@astreus-ai/astreus';
// Create agent
const agent = await Agent.create({
name: 'ResearchAgent',
model: 'gpt-4o',
memory: true,
systemPrompt: 'You are an expert research assistant.'
});
// Create a graph for complex workflows
const graph = new Graph({
name: 'Research Pipeline',
defaultAgentId: agent.id
});
// Add task nodes
const researchNode = graph.addTaskNode({
prompt: 'Research the latest AI developments'
});
const analysisNode = graph.addTaskNode({
prompt: 'Analyze the research findings',
dependencies: [researchNode]
});
const summaryNode = graph.addTaskNode({
prompt: 'Create a summary report',
dependencies: [analysisNode]
});
// Run the graph
const graphResult = await graph.run();
console.log(graphResult.results[summaryNode]);
```
Congratulations! You've created your first AI agent with Astreus.
***
# Scheduler
URL: /docs/framework/scheduler
Source: /app/src/content/docs/framework/scheduler.mdx
import { DocImage } from '@/components/DocImage';
**Simple time-based execution with minimal setup**
import { Step, Steps } from 'fumadocs-ui/components/steps';
## Overview
The Astreus scheduler provides simple time-based execution for tasks and graphs using intuitive schedule strings. No complex configuration needed - just add a `schedule` field and you're done!
## Basic Task Scheduling
Schedule individual tasks with simple syntax:
```typescript
import { Agent } from '@astreus-ai/astreus';
const agent = await Agent.create({
name: 'SchedulerAgent',
model: 'gpt-4o'
});
// Create a scheduled task - scheduler starts automatically when needed
const scheduledTask = await agent.createTask({
prompt: 'Generate monthly report for December',
schedule: 'once@2024-12-25@09:00'
});
// Create a recurring task
const dailyTask = await agent.createTask({
prompt: 'Daily health check and status report',
schedule: 'daily@08:00'
});
// Execute the task - scheduler will handle the scheduling automatically
await agent.executeTask(scheduledTask.id);
```
## Schedule Configuration
Use simple schedule strings for easy configuration:
```typescript
// Supported schedule formats:
'daily@07:00' // Daily at 7 AM
'weekly@monday@09:00' // Weekly on Monday at 9 AM
'monthly@1@10:00' // Monthly on 1st day at 10 AM
'hourly' // Every hour (default time)
'@15:30' // Once today at 3:30 PM
'once@2024-12-25@10:00' // Once on specific date and time
// Examples:
await agent.createTask({
prompt: 'Morning briefing',
schedule: 'daily@08:00'
});
await agent.createTask({
prompt: 'Weekly report',
schedule: 'weekly@friday@17:00'
});
```
## Graph Scheduling with Dependencies
Schedule graphs with intelligent dependency resolution:
```typescript
import { Graph } from '@astreus-ai/astreus';
const graph = new Graph({
name: 'Morning Workflow',
defaultAgentId: agent.id
}, agent);
// Node A: Data collection at 6 AM
const nodeA = graph.addTaskNode({
name: 'Data Collection',
prompt: 'Collect overnight data from all sources',
schedule: 'once@2024-12-20@06:00'
});
// Node B: Processing (depends on A completing first)
const nodeB = graph.addTaskNode({
name: 'Data Processing',
prompt: 'Process collected data and generate insights',
schedule: 'once@2024-12-20@07:00',
dependsOn: ['Data Collection'] // Must wait for A
});
// Node C: Report generation at 8 AM
const nodeC = graph.addTaskNode({
name: 'Report Generation',
prompt: 'Generate morning executive report',
schedule: 'once@2024-12-20@08:00',
dependsOn: ['Data Processing']
});
// Execute - scheduler starts automatically for scheduled nodes
await graph.run();
// Result: A runs at 06:00, B waits and runs after A (~06:05), C runs at 08:00
```
## Recurring Patterns
Create sophisticated recurring schedules:
### Daily Schedules
```typescript
// Every day at 8 AM
{
type: 'recurring',
executeAt: new Date('2024-12-20T08:00:00Z'),
recurrence: {
pattern: 'daily',
interval: 1,
maxExecutions: 365 // Stop after 1 year
}
}
// Every 3 days
{
type: 'recurring',
executeAt: new Date('2024-12-20T08:00:00Z'),
recurrence: {
pattern: 'daily',
interval: 3
}
}
```
### Weekly Schedules
```typescript
// Every Monday at 9 AM
{
type: 'recurring',
executeAt: new Date('2024-12-23T09:00:00Z'), // Monday
recurrence: {
pattern: 'weekly',
interval: 1,
daysOfWeek: [1] // Monday
}
}
// Every Monday and Friday
{
type: 'recurring',
executeAt: new Date('2024-12-23T09:00:00Z'),
recurrence: {
pattern: 'weekly',
interval: 1,
daysOfWeek: [1, 5] // Monday and Friday
}
}
```
### Monthly and Yearly
```typescript
// 15th of every month
{
type: 'recurring',
executeAt: new Date('2024-12-15T10:00:00Z'),
recurrence: {
pattern: 'monthly',
interval: 1,
dayOfMonth: 15
}
}
// Every January 1st
{
type: 'recurring',
executeAt: new Date('2025-01-01T00:00:00Z'),
recurrence: {
pattern: 'yearly',
interval: 1,
monthOfYear: 1
}
}
```
## Scheduler Management
Monitor and control scheduled executions:
```typescript
// Get scheduler status
const status = agent.getSchedulerStatus();
console.log(`Running: ${status.running}, Active jobs: ${status.activeJobs}`);
// List all scheduled items
const pending = await agent.listScheduledItems('pending');
const completed = await agent.listScheduledItems('completed');
// Get specific scheduled item
const item = await agent.getScheduledItem('task_123_456');
// Cancel a scheduled item
await agent.cancelScheduledItem('task_123_456');
// Delete a scheduled item
await agent.deleteScheduledItem('task_123_456');
// Stop the scheduler
await agent.stopScheduler();
```
## Advanced Options
Configure retry logic and execution parameters:
```typescript
await agent.scheduleTask({
prompt: 'Critical system backup',
schedule: {
type: 'recurring',
executeAt: new Date('2024-12-20T02:00:00Z'),
recurrence: { pattern: 'daily', interval: 1 }
},
options: {
maxRetries: 3, // Retry failed executions
retryDelay: 60000, // 1 minute between retries
timeout: 300000, // 5 minute execution timeout
respectDependencies: true // Honor dependencies (default)
}
});
```
## Dependency Resolution Logic
The scheduler intelligently resolves conflicts between schedules and dependencies:
| Scenario | Behavior |
| -------------------------------- | ------------------------------------ |
| Node scheduled before dependency | **Waits for dependency to complete** |
| Node scheduled after dependency | **Runs at scheduled time** |
| Multiple dependencies | **Waits for ALL dependencies** |
| Circular dependencies | **Error thrown during validation** |
| Mixed scheduled/immediate nodes | **Works seamlessly together** |
The scheduler provides a robust foundation for building automated, time-based AI workflows that respect dependencies and scale with your needs.
***
# Security
URL: /docs/framework/security
Source: /app/src/content/docs/framework/security.mdx
import { DocImage } from '@/components/DocImage';
**Field-level encryption for protecting sensitive data in your Astreus agents**
## Overview
Astreus includes built-in **AES-256-GCM encryption** to protect sensitive data stored in your database. This feature provides transparent field-level encryption for conversations, system prompts, task data, and knowledge base content.
## Quick Setup
### 1. Generate Encryption Key
```bash
# Generate a cryptographically secure 256-bit key
openssl rand -hex 32
```
### 2. Configure Environment
```bash
# Enable encryption
ENCRYPTION_ENABLED=true
# Your secure master key (keep this secret!)
ENCRYPTION_MASTER_KEY=your-256-bit-encryption-key-here
# Optional: specify algorithm (default: aes-256-gcm)
ENCRYPTION_ALGORITHM=aes-256-gcm
```
### 3. Use Normally
```javascript
import { Agent } from '@astreus-ai/astreus';
// Create agent with sensitive system prompt
const agent = await Agent.create({
name: 'SecureAgent',
systemPrompt: 'Your confidential business logic here', // ← Automatically encrypted
memory: true,
knowledge: true
});
// All interactions automatically encrypted
const response = await agent.ask('Sensitive question here');
// Knowledge uploads automatically encrypted
await agent.addKnowledge(
'Sensitive content here', // ← Automatically encrypted
'Confidential Document'
);
```
## Key Management
### Master Key Requirements
* **Minimum Length**: 32 characters (256 bits)
* **Generation**: Use cryptographically secure random generators
* **Storage**: Store securely outside of codebase
* **Rotation**: Plan for periodic key rotation
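If you prefer to generate the key from Node.js rather than OpenSSL, the built-in `crypto` module produces an equivalent 256-bit hex key:
```typescript
import { randomBytes } from 'crypto';

// 32 random bytes = 256 bits, hex-encoded to a 64-character key
const masterKey = randomBytes(32).toString('hex');
console.log(masterKey); // set this as ENCRYPTION_MASTER_KEY
```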
***
# Sub-Agents
URL: /docs/framework/sub-agents
Source: /app/src/content/docs/framework/sub-agents.mdx
import { DocImage } from '@/components/DocImage';
**Intelligent task delegation with specialized agents working in coordination**
import { Step, Steps } from 'fumadocs-ui/components/steps';
## Overview
Sub-Agents enable sophisticated multi-agent coordination where a main agent intelligently delegates tasks to specialized sub-agents. Each sub-agent has its own expertise, capabilities, and role, working together to complete complex workflows that would be challenging for a single agent.
**New**: Sub-Agents now integrate seamlessly with Graph workflows, enabling hierarchical task distribution across complex workflow orchestration systems.
## Creating Sub-Agents
Sub-agents are created independently and then attached to a main coordinator agent:
```typescript
import { Agent } from '@astreus-ai/astreus';
// Create specialized sub-agents
const researcher = await Agent.create({
name: 'ResearcherBot',
model: 'gpt-4o',
systemPrompt: 'You are an expert researcher who gathers and analyzes information thoroughly.',
memory: true,
knowledge: true
});
const writer = await Agent.create({
name: 'WriterBot',
model: 'gpt-4o',
systemPrompt: 'You are a skilled content writer who creates engaging, well-structured content.',
vision: true
});
const analyst = await Agent.create({
name: 'AnalystBot',
model: 'gpt-4o',
systemPrompt: 'You are a data analyst who provides insights and recommendations.',
useTools: true
});
// Create main agent with sub-agents
const mainAgent = await Agent.create({
name: 'CoordinatorAgent',
model: 'gpt-4o',
systemPrompt: 'You coordinate complex tasks between specialized sub-agents.',
subAgents: [researcher, writer, analyst]
});
```
## Delegation Strategies
### Auto Delegation
The main agent uses LLM intelligence to analyze tasks and assign them optimally:
```typescript
const result = await mainAgent.ask(
'Research AI market trends, analyze the data, and write an executive summary',
{
useSubAgents: true,
delegation: 'auto' // AI-powered task distribution
}
);
```
With auto delegation, the coordinator works through four steps:
1. **Task Analysis**: The main agent analyzes the complex task using LLM reasoning.
2. **Agent Matching**: It evaluates each sub-agent's capabilities and specializations.
3. **Optimal Assignment**: It creates specific subtasks for the most appropriate agents.
4. **Coordinated Execution**: It manages execution flow and result aggregation.
### Manual Delegation
Explicitly assign specific tasks to specific agents using their IDs:
```typescript
const result = await mainAgent.ask(
'Complex multi-step project',
{
useSubAgents: true,
delegation: 'manual',
taskAssignment: {
[researcher.id]: 'Research market opportunities in healthcare AI',
[analyst.id]: 'Analyze market size and growth potential',
[writer.id]: 'Create executive summary with recommendations'
}
}
);
```
### Sequential Delegation
Sub-agents work in sequence, building on previous results:
```typescript
const result = await mainAgent.ask(
'Create a comprehensive business plan for an AI startup',
{
useSubAgents: true,
delegation: 'sequential' // Each agent builds on the previous work
}
);
```
## Coordination Patterns
```mermaid
graph TD
A[Main Coordinator Agent] --> B{Task Analysis}
B -->|Research Tasks| C[ResearcherBot]
B -->|Analysis Tasks| D[AnalystBot]
B -->|Content Tasks| E[WriterBot]
C --> F[Research Results]
D --> G[Analysis Results]
E --> H[Written Content]
F --> I[Result Aggregation]
G --> I
H --> I
I --> J[Final Output]
style A fill:#f9f,stroke:#333,stroke-width:4px
style C fill:#bbf,stroke:#333,stroke-width:2px
style D fill:#bfb,stroke:#333,stroke-width:2px
style E fill:#fbb,stroke:#333,stroke-width:2px
```
### Parallel Execution
Sub-agents work simultaneously for maximum efficiency:
```typescript
const result = await mainAgent.ask(
'Multi-faceted analysis task',
{
useSubAgents: true,
delegation: 'auto',
coordination: 'parallel' // All agents work concurrently
}
);
```
### Sequential Execution
Sub-agents work in order with context passing:
```typescript
const result = await mainAgent.ask(
'Research → Analyze → Report workflow',
{
useSubAgents: true,
delegation: 'auto',
coordination: 'sequential' // Agents work in dependency order
}
);
```
## Sub-Agent Configuration
### Specialized Roles
Configure sub-agents for specific expertise areas:
```typescript
// Research Specialist
const researcher = await Agent.create({
name: 'ResearchSpecialist',
systemPrompt: 'You conduct thorough research using multiple sources and methodologies.',
knowledge: true, // Access to knowledge base
memory: true, // Remember research context
useTools: true // Use research tools
});
// Content Creator
const creator = await Agent.create({
name: 'ContentCreator',
systemPrompt: 'You create compelling content across different formats and audiences.',
vision: true, // Process visual content
useTools: true // Use content creation tools
});
// Technical Analyst
const analyst = await Agent.create({
name: 'TechnicalAnalyst',
systemPrompt: 'You analyze technical data and provide actionable insights.',
useTools: true // Use analysis tools
});
```
## Graph Integration
Sub-Agents work seamlessly with Graph workflows for complex orchestration:
```typescript
import { Agent, Graph } from '@astreus-ai/astreus';
// Create specialized sub-agents
const researcher = await Agent.create({
name: 'ResearchBot',
systemPrompt: 'You conduct thorough research and analysis.',
knowledge: true
});
const writer = await Agent.create({
name: 'WriterBot',
systemPrompt: 'You create compelling content and reports.',
vision: true
});
// Main coordinator with sub-agents
const coordinator = await Agent.create({
name: 'ProjectCoordinator',
systemPrompt: 'You coordinate complex projects using specialized teams.',
subAgents: [researcher, writer]
});
// Create sub-agent aware graph
const projectGraph = new Graph({
name: 'Market Analysis Project',
defaultAgentId: coordinator.id,
subAgentAware: true,
optimizeSubAgentUsage: true
}, coordinator);
// Add tasks with intelligent sub-agent delegation
const researchTask = projectGraph.addTaskNode({
name: 'Market Research',
prompt: 'Research AI healthcare market trends and opportunities',
useSubAgents: true,
subAgentDelegation: 'auto'
});
const reportTask = projectGraph.addTaskNode({
name: 'Executive Report',
prompt: 'Create comprehensive executive report based on research',
dependencies: [researchTask],
useSubAgents: true,
subAgentCoordination: 'sequential'
});
// Execute with performance monitoring
const result = await projectGraph.run();
console.log('Performance:', projectGraph.generateSubAgentPerformanceReport());
```
### Graph Sub-Agent Features
* **Automatic Detection**: Graph nodes automatically use sub-agents when beneficial
* **Context Passing**: Workflow context flows to sub-agents for better coordination
* **Performance Optimization**: Real-time monitoring and automatic strategy adjustment
* **Flexible Configuration**: Per-node sub-agent settings with inheritance from graph config
## Advanced Examples
### Content Production Pipeline
```typescript
const contentPipeline = await Agent.create({
name: 'ContentPipeline',
model: 'gpt-4o',
subAgents: [researcher, writer, analyst]
});
const blogPost = await contentPipeline.ask(
'Create a comprehensive blog post about quantum computing applications in finance',
{
useSubAgents: true,
delegation: 'auto',
coordination: 'sequential'
}
);
```
### Market Research Workflow
```typescript
const marketResearch = await Agent.create({
name: 'MarketResearchTeam',
model: 'gpt-4o',
subAgents: [researcher, analyst, writer]
});
const report = await marketResearch.ask(
'Analyze the fintech market and create investor presentation',
{
useSubAgents: true,
delegation: 'manual',
coordination: 'parallel',
taskAssignment: {
[researcher.id]: 'Research fintech market trends and competitors',
[analyst.id]: 'Analyze market data and financial projections',
[writer.id]: 'Create compelling investor presentation'
}
}
);
```
***
# Task
URL: /docs/framework/task
Source: /app/src/content/docs/framework/task.mdx
import { DocImage } from '@/components/DocImage';
**Structured task execution with status tracking and tool integration**
import { Step, Steps } from 'fumadocs-ui/components/steps';
## Overview
Tasks provide a way to organize and execute complex operations with your agents. They support status tracking, tool usage, and can be composed into larger workflows. Each task can have dependencies, execute specific actions, and maintain its own state throughout execution.
## Creating Tasks
Tasks are created through agents using a simple prompt-based approach:
```typescript
import { Agent } from '@astreus-ai/astreus';
const agent = await Agent.create({
name: 'TaskAgent',
model: 'gpt-4o'
});
// Create a task
const task = await agent.createTask({
prompt: 'Analyze the TypeScript code and suggest performance improvements'
});
// Execute the task
const result = await agent.executeTask(task.id);
console.log(result.response);
```
## Task Attributes
Tasks can be configured with the following attributes:
```typescript
interface TaskRequest {
prompt: string; // The task instruction or query
useTools?: boolean; // Enable/disable tool usage (default: true)
mcpServers?: MCPServerDefinition[]; // Task-level MCP servers
plugins?: Array<{ // Task-level plugins
plugin: Plugin;
config?: PluginConfig;
}>;
attachments?: Array<{ // Files to attach to the task
type: 'image' | 'pdf' | 'text' | 'markdown' | 'code' | 'json' | 'file';
path: string; // File path
name?: string; // Display name
language?: string; // Programming language (for code files)
}>;
schedule?: string; // Simple schedule string (e.g., 'daily@09:00')
metadata?: MetadataObject; // Custom metadata for tracking
useSubAgents?: boolean; // Enable sub-agent delegation for this task
subAgentDelegation?: 'auto' | 'manual' | 'sequential'; // Delegation strategy
subAgentCoordination?: 'parallel' | 'sequential'; // How sub-agents coordinate
taskAssignment?: Record<number, string>; // Manual task assignment (agentId -> task)
}
```
### Attribute Details
* **prompt**: The main instruction or query for the task. This is the only required field.
* **useTools**: Controls whether the task can use tools/plugins. Defaults to `true` (inherits from agent if not specified).
* **mcpServers**: Task-specific MCP (Model Context Protocol) servers to enable for this task.
* **plugins**: Task-specific plugins to register for this task execution.
* **attachments**: Array of files to attach to the task. Supports images, PDFs, text files, code files, and more.
* **schedule**: Simple schedule string for time-based execution (e.g., `'daily@09:00'`, `'weekly@friday@17:00'`). Optional field that enables automatic scheduling.
* **metadata**: Custom key-value pairs for organizing and tracking tasks (e.g., category, priority, tags).
#### Sub-Agent Integration
* **useSubAgents**: Enable sub-agent delegation for this specific task. When `true`, the main agent will intelligently delegate portions of the task to its registered sub-agents.
* **subAgentDelegation**: Strategy for task delegation:
* `'auto'`: AI-powered intelligent task distribution based on sub-agent capabilities
* `'manual'`: Explicit task assignment using `taskAssignment` mapping
* `'sequential'`: Sub-agents work in sequence, building on previous results
* **subAgentCoordination**: Coordination pattern for sub-agent execution:
* `'parallel'`: Sub-agents work simultaneously for maximum efficiency
* `'sequential'`: Sub-agents work in order with context passing between them
* **taskAssignment**: Manual task assignment mapping (only used with `subAgentDelegation: 'manual'`). Maps agent IDs to specific task instructions.
## Task Lifecycle
Tasks go through several states during execution:
```typescript
type TaskStatus = 'pending' | 'in_progress' | 'completed' | 'failed';
```
```mermaid
stateDiagram-v2
[*] --> Pending: Task Created
Pending --> InProgress: Execute Task
InProgress --> Completed: Success
InProgress --> Failed: Error
Completed --> [*]
Failed --> [*]
InProgress --> InProgress: Using Tools
note right of Pending
Waiting for execution
or dependencies
end note
note right of InProgress
Actively executing
May use tools/plugins
end note
note right of Completed
Task finished successfully
Results available
end note
note right of Failed
Error encountered
Error details available
end note
```
### Pending
Task is created but not yet started. Waiting for execution or dependencies.
### In Progress
Task is actively being executed by the agent. Tools may be used during this phase.
### Completed
Task has finished successfully with results available.
### Failed
Task encountered an error during execution. Error details are available.
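You can observe these states with the task listing API shown in the example below; a minimal sketch:
```typescript
// Inspect task states; see also the attachments example below
const tasks = await agent.listTasks({ orderBy: 'createdAt', order: 'desc' });
for (const task of tasks) {
  console.log(`Task ${task.id}: ${task.status}`); // 'pending' | 'in_progress' | 'completed' | 'failed'
}
```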
## Example with Attachments and Tools
Here's a complete example showing tasks with file attachments and tool integration:
```typescript
import { Agent } from '@astreus-ai/astreus';
// Create an agent
const agent = await Agent.create({
name: 'CodeReviewAssistant',
model: 'gpt-4o',
vision: true // Enable vision for screenshots
});
// Code review task with multiple file types
const codeReviewTask = await agent.createTask({
prompt: `Please perform a comprehensive code review:
1. Check for security vulnerabilities
2. Identify performance issues
3. Suggest improvements for code quality
4. Review the UI mockup for usability issues`,
attachments: [
{
type: 'code',
path: './src/auth/login.ts',
name: 'Login Controller',
language: 'typescript'
},
{
type: 'code',
path: './src/middleware/security.js',
name: 'Security Middleware',
language: 'javascript'
},
{
type: 'json',
path: './package.json',
name: 'Package Dependencies'
},
{
type: 'image',
path: './designs/login-mockup.png',
name: 'Login UI Mockup'
},
{
type: 'markdown',
path: './docs/security-requirements.md',
name: 'Security Requirements'
}
],
metadata: {
type: 'code-review',
priority: 'high',
reviewer: 'ai-assistant'
}
});
// Execute task with streaming
const result = await agent.executeTask(codeReviewTask.id, {
model: 'gpt-4o', // Override model for this task
stream: true // Enable streaming response
});
console.log('Code review completed:', result.response);
// Documentation task with text files
const docTask = await agent.createTask({
prompt: 'Update the API documentation based on the latest code changes',
attachments: [
{ type: 'text', path: '/api/routes.txt', name: 'API Routes' },
{ type: 'markdown', path: '/README.md', name: 'Current Documentation' }
]
});
// List recent tasks and check which have attachments
const tasksWithFiles = await agent.listTasks({
orderBy: 'createdAt',
order: 'desc'
});
tasksWithFiles.forEach(task => {
console.log(`Task ${task.id}: ${task.status}`);
if (task.metadata?.attachments) {
console.log(` - Has attachments`);
}
if (task.completedAt) {
console.log(` - Completed: ${task.completedAt.toISOString()}`);
}
});
```
## Sub-Agent Task Delegation
Tasks support sub-agent delegation directly through task creation and execution:
```typescript
import { Agent } from '@astreus-ai/astreus';
// Create specialized sub-agents
const researcher = await Agent.create({
name: 'ResearchBot',
systemPrompt: 'You are an expert researcher who gathers comprehensive information.'
});
const writer = await Agent.create({
name: 'WriterBot',
systemPrompt: 'You create engaging, well-structured content.'
});
const mainAgent = await Agent.create({
name: 'ContentCoordinator',
subAgents: [researcher, writer]
});
// Create task with automatic sub-agent delegation
const autoTask = await mainAgent.createTask({
prompt: 'Research renewable energy trends and write a comprehensive report',
useSubAgents: true,
subAgentDelegation: 'auto',
subAgentCoordination: 'sequential',
metadata: { type: 'research-report', priority: 'high' }
});
// Create task with manual sub-agent assignment
const manualTask = await mainAgent.createTask({
prompt: 'Create market analysis presentation',
useSubAgents: true,
subAgentDelegation: 'manual',
subAgentCoordination: 'parallel',
taskAssignment: {
[researcher.id]: 'Research market data and competitor analysis',
[writer.id]: 'Create presentation slides and executive summary'
},
metadata: { type: 'presentation', deadline: '2024-12-01' }
});
// Execute tasks - sub-agent coordination happens automatically
const autoResult = await mainAgent.executeTask(autoTask.id);
const manualResult = await mainAgent.executeTask(manualTask.id);
console.log('Auto-delegated result:', autoResult.response);
console.log('Manually-assigned result:', manualResult.response);
```
### Alternative: Agent Methods for Sub-Agent Execution
You can also leverage sub-agents through agent methods for immediate execution:
```typescript
// Direct execution with sub-agent delegation via agent.ask()
const result = await mainAgent.ask('Research renewable energy trends and write report', {
useSubAgents: true,
delegation: 'auto',
coordination: 'sequential'
});
// Manual delegation with specific task assignments
const manualResult = await mainAgent.ask('Create market analysis presentation', {
useSubAgents: true,
delegation: 'manual',
coordination: 'parallel',
taskAssignment: {
[researcher.id]: 'Research market data and competitor analysis',
[writer.id]: 'Create presentation slides and executive summary'
}
});
```
### Benefits of Task-Level Sub-Agent Delegation
* **Persistent Configuration**: Sub-agent settings are stored with the task and persist across sessions
* **Reproducible Workflows**: Task definitions can be reused with consistent sub-agent behavior
* **Flexible Execution**: Tasks can be executed immediately or scheduled for later with the same sub-agent coordination
* **Audit Trail**: Task metadata includes sub-agent delegation history for tracking and debugging
## Managing Tasks
Tasks can be managed and tracked throughout their lifecycle:
```typescript
// Update task with additional metadata
await agent.updateTask(task.id, {
metadata: {
...task.metadata,
progress: 50,
estimatedCompletion: new Date()
}
});
// Delete a specific task
await agent.deleteTask(task.id);
// Clear all tasks for an agent
const deletedCount = await agent.clearTasks();
console.log(`Deleted ${deletedCount} tasks`);
// Search tasks with filters
const pendingTasks = await agent.listTasks({
status: 'pending',
limit: 5
});
const recentTasks = await agent.listTasks({
orderBy: 'completedAt',
order: 'desc',
limit: 10
});
```
***
# Vision
URL: /docs/framework/vision
Source: /app/src/content/docs/framework/vision.mdx
import { DocImage } from '@/components/DocImage';
**Image analysis and document processing for multimodal interactions**
import { Step, Steps } from 'fumadocs-ui/components/steps';
## Overview
The Vision system enables agents to process and analyze images, providing multimodal AI capabilities for richer interactions. It supports multiple image formats, offers various analysis modes, and integrates seamlessly with both OpenAI and local Ollama providers for flexible deployment options.
## Enabling Vision
Enable vision capabilities for an agent by setting the `vision` option to `true`:
```typescript
import { Agent } from '@astreus-ai/astreus';
const agent = await Agent.create({
name: 'VisionAgent',
model: 'gpt-4o', // Vision-capable model
vision: true // Enable vision capabilities (default: false)
});
```
## Attachment System
Astreus supports an intuitive attachment system for working with images:
```typescript
// Clean, modern attachment API
const response = await agent.ask("What do you see in this image?", {
attachments: [
{ type: 'image', path: '/path/to/image.jpg', name: 'My Photo' }
]
});
```
The attachment system automatically:
* Detects the file type and selects appropriate tools
* Enhances the prompt with attachment information
* Enables tool usage when attachments are present
## Vision Capabilities
The vision system provides three core capabilities through built-in tools:
### 1. General Image Analysis
Analyze images with custom prompts and configurable detail levels:
```typescript
// Using attachments (recommended approach)
const response = await agent.ask("Please analyze this screenshot and describe the UI elements", {
attachments: [
{ type: 'image', path: '/path/to/screenshot.png', name: 'UI Screenshot' }
]
});
// Using the analyze_image tool through conversation
const response2 = await agent.ask("Please analyze the image at /path/to/screenshot.png and describe the UI elements");
// Direct method call
const analysis = await agent.analyzeImage('/path/to/image.jpg', {
prompt: 'What UI elements are visible in this interface?',
detail: 'high',
maxTokens: 1500
});
```
### 2. Image Description
Generate structured descriptions for different use cases:
```typescript
// Accessibility-friendly description
const description = await agent.describeImage('/path/to/image.jpg', 'accessibility');
// Available styles:
// - 'detailed': Comprehensive description of all visual elements
// - 'concise': Brief description of main elements
// - 'accessibility': Screen reader-friendly descriptions
// - 'technical': Technical analysis including composition and lighting
```
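To compare the styles side by side, you can loop the same call over each style. A short sketch with an illustrative image path:
```typescript
const styles = ['detailed', 'concise', 'accessibility', 'technical'] as const;
// Generate one description per style for the same image
for (const style of styles) {
  const description = await agent.describeImage('/path/to/image.jpg', style);
  console.log(`--- ${style} ---\n${description}`);
}
```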
### 3. Text Extraction (OCR)
Extract and transcribe text from images:
```typescript
// Extract text with language hint
const text = await agent.extractTextFromImage('/path/to/document.jpg', 'english');
// The system maintains original formatting and structure
console.log(text);
```
## Supported Formats
The vision system supports these image formats:
* **JPEG** (`.jpg`, `.jpeg`)
* **PNG** (`.png`)
* **GIF** (`.gif`)
* **BMP** (`.bmp`)
* **WebP** (`.webp`)
## Input Sources
### File Paths
Analyze images from local file system:
```typescript
const result = await agent.analyzeImage('/path/to/image.jpg');
```
### Base64 Data
Analyze images from base64-encoded data:
```typescript
const base64Image = 'data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQ...';
const result = await agent.analyzeImageFromBase64(base64Image);
```
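If the image is a raw file on disk, you can build the data URI yourself before calling `analyzeImageFromBase64`. A minimal Node sketch; make sure the MIME type matches your image format:
```typescript
import { readFileSync } from 'fs';
// Read the file and encode it as a base64 data URI
const buffer = readFileSync('/path/to/image.jpg');
const base64Image = `data:image/jpeg;base64,${buffer.toString('base64')}`;
const result = await agent.analyzeImageFromBase64(base64Image);
```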
## Configuration
### Vision Model Configuration
Specify the vision model directly in the agent configuration:
```typescript
const agent = await Agent.create({
name: 'VisionAgent',
model: 'gpt-4o',
visionModel: 'gpt-4o', // Specify vision model here
vision: true
});
```
### Environment Variables
```bash
# API keys (auto-detected based on model)
OPENAI_API_KEY=your_openai_key # For OpenAI models
ANTHROPIC_API_KEY=your_anthropic_key # For Claude models
GOOGLE_API_KEY=your_google_key # For Gemini models
# Ollama configuration (local)
OLLAMA_BASE_URL=http://localhost:11434 # Default if not set
```
The vision system automatically selects the appropriate provider based on the `visionModel` specified in the agent configuration.
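For example, pointing `visionModel` at a local Ollama model keeps image processing on your machine. A sketch, assuming Ollama is running locally with the `llava` model pulled (the Ollama default noted in the provider comparison below):
```typescript
const localAgent = await Agent.create({
  name: 'LocalVisionAgent',
  model: 'gpt-4o', // text model; only vision calls run locally in this sketch
  visionModel: 'llava', // resolved to the Ollama provider
  vision: true
});
// Image analysis now runs against the local Ollama instance
const description = await localAgent.describeImage('/path/to/photo.jpg', 'concise');
```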
### Analysis Options
Configure analysis behavior with these options:
```typescript
interface AnalysisOptions {
prompt?: string; // Custom analysis prompt
maxTokens?: number; // Response length limit (default: 1000)
detail?: 'low' | 'high'; // Analysis detail level (OpenAI only)
}
```
## Usage Examples
### Screenshot Analysis
```typescript
const agent = await Agent.create({
name: 'UIAnalyzer',
model: 'gpt-4o',
vision: true
});
// Analyze a UI screenshot
const analysis = await agent.analyzeImage('/path/to/app-screenshot.png', {
prompt: 'Analyze this mobile app interface. Identify key UI components, layout structure, and potential usability issues.',
detail: 'high'
});
console.log(analysis);
```
### Document Processing
```typescript
// Extract text from scanned documents
const documentText = await agent.extractTextFromImage('/path/to/scanned-invoice.jpg', 'english');
// Generate accessible descriptions
const accessibleDesc = await agent.describeImage('/path/to/chart.png', 'accessibility');
```
### Multimodal Conversations
```typescript
// Using attachments for cleaner API
const response = await agent.ask("I'm getting an error. Can you analyze this screenshot and help me fix it?", {
attachments: [
{ type: 'image', path: '/Users/john/Desktop/error.png', name: 'Error Screenshot' }
]
});
// Multiple attachments
const response2 = await agent.ask("Compare these UI mockups and suggest improvements", {
attachments: [
{ type: 'image', path: '/designs/mockup1.png', name: 'Design A' },
{ type: 'image', path: '/designs/mockup2.png', name: 'Design B' }
]
});
// Traditional approach (still works)
const response3 = await agent.ask(
"Please analyze the error screenshot at /Users/john/Desktop/error.png and suggest how to fix the issue"
);
```
## Provider Comparison
| Feature | OpenAI (gpt-4o) | Ollama (llava) |
| ---------------- | --------------- | ---------------- |
| Analysis Quality | Excellent | Good |
| Processing Speed | Fast | Variable |
| Cost | Pay-per-use | Free (local) |
| Privacy | Cloud-based | Local processing |
| Detail Levels | Low/High | Standard |
| Language Support | Extensive | Good |
### OpenAI Provider
* **Best for**: Production applications requiring high accuracy
* **Default Model**: `gpt-4o`
* **Features**: Detail level control, excellent text recognition
### Ollama Provider (Local)
* **Best for**: Privacy-sensitive applications or development
* **Default Model**: `llava`
* **Features**: Local processing, no API costs, offline capability
## Batch Processing
Process multiple images efficiently:
```typescript
const images = [
'/path/to/image1.jpg',
'/path/to/image2.png',
'/path/to/image3.gif'
];
// Process all images in parallel
const results = await Promise.all(
images.map(imagePath =>
agent.describeImage(imagePath, 'concise')
)
);
console.log('Analysis results:', results);
// Or use task attachments for batch processing
const batchTask = await agent.createTask({
prompt: 'Analyze all these images and provide a comparative report',
attachments: images.map(path => ({
type: 'image',
path,
name: path.split('/').pop() ?? path // guard against undefined when the path has no separator
}))
});
const batchResult = await agent.executeTask(batchTask.id);
```
## Built-in Vision Tools
When vision is enabled, these tools are automatically available:
### analyze\_image
* **Parameters**:
* `image_path` (string, required): Path to image file
* `prompt` (string, optional): Custom analysis prompt
* `detail` (string, optional): 'low' or 'high' detail level
### describe\_image
* **Parameters**:
* `image_path` (string, required): Path to image file
* `style` (string, optional): Description style ('detailed', 'concise', 'accessibility', 'technical')
### extract\_text\_from\_image
* **Parameters**:
* `image_path` (string, required): Path to image file
* `language` (string, optional): Language hint for better OCR accuracy
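Because these tools are registered automatically, you rarely invoke them by name: mentioning an image path (or passing an attachment) in a normal `ask` call lets the agent choose the right tool. A brief sketch with illustrative paths:
```typescript
// The agent selects extract_text_from_image based on the request
const receiptText = await agent.ask(
  'Extract all the text from the receipt at /path/to/receipt.jpg'
);
// The agent selects describe_image with an accessibility-oriented style
const altText = await agent.ask(
  'Write an accessibility-friendly description of /path/to/chart.png'
);
```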