# Agent

URL: /docs/framework/agent
Source: /app/src/content/docs/framework/agent.mdx

import { DocImage } from '@/components/DocImage';

**Core AI entity with modular capabilities and decorator-based composition**

## Overview

Agents are the fundamental building blocks in Astreus. They provide intelligent conversation capabilities with configurable features like memory, tools, knowledge bases, and vision processing. Each agent operates independently with its own context, memory, and specialized capabilities.

## Creating an Agent

Creating an agent in Astreus is straightforward:

```typescript
import { Agent } from '@astreus-ai/astreus';

const agent = await Agent.create({
  name: 'MyAssistant',                          // Unique name for the agent
  model: 'gpt-4o',                              // LLM model to use
  systemPrompt: 'You are a helpful assistant',  // Custom instructions
  memory: true                                  // Enable persistent memory
});
```

## Choosing the LLM Model

Astreus supports multiple LLM providers out of the box:

```typescript
const agent = await Agent.create({
  name: 'MyAssistant',
  model: 'gpt-4.5' // Set model here. Latest: 'gpt-4.5', 'claude-sonnet-4-20250514', 'gemini-2.5-pro', 'deepseek-r1'
});
```

[Learn supported LLM providers and models →](/docs/framework/llm)

## Agent Architecture

```mermaid
graph TB
    A[Agent Core] --> B[LLM Integration]
    A --> C[Memory System]
    A --> D[Knowledge Base]
    A --> E[Vision Processing]
    A --> F[Tool System]
    A --> G[Context Management]

    B --> B1[OpenAI/GPT]
    B --> B2[Anthropic/Claude]
    B --> B3[Google/Gemini]
    B --> B4[DeepSeek]

    C --> C1[Short-term Memory]
    C --> C2[Long-term Memory]
    C --> C3[Context Retention]

    D --> D1[Vector Storage]
    D --> D2[Semantic Search]
    D --> D3[RAG Pipeline]

    E --> E1[Image Analysis]
    E --> E2[OCR Processing]
    E --> E3[Visual Understanding]

    F --> F1[Custom Plugins]
    F --> F2[Built-in Tools]
    F --> F3[External APIs]

    G --> G1[Auto Compression]
    G --> G2[Token Management]
    G --> G3[Context Window]

    style A fill:#f9f,stroke:#333,stroke-width:4px
    style B fill:#bbf,stroke:#333,stroke-width:2px
    style C fill:#bfb,stroke:#333,stroke-width:2px
    style D fill:#fbb,stroke:#333,stroke-width:2px
    style E fill:#fbf,stroke:#333,stroke-width:2px
    style F fill:#bff,stroke:#333,stroke-width:2px
    style G fill:#ffb,stroke:#333,stroke-width:2px
```

## Agent Attributes

Agents can be configured with various attributes to customize their behavior:

### Core Attributes

```typescript
interface AgentConfig {
  name: string;                      // Unique identifier for the agent
  description?: string;              // Agent description
  model?: string;                    // LLM model to use (default: 'gpt-4o-mini')
  embeddingModel?: string;           // Specific model for embeddings (auto-detected)
  visionModel?: string;              // Specific model for vision (auto-detected)
  temperature?: number;              // Control response randomness (0-1, default: 0.7)
  maxTokens?: number;                // Maximum response length (default: 2000)
  systemPrompt?: string;             // Custom system instructions
  memory?: boolean;                  // Enable persistent memory (default: false)
  knowledge?: boolean;               // Enable knowledge base access (default: false)
  vision?: boolean;                  // Enable image processing (default: false)
  useTools?: boolean;                // Enable tool/plugin usage (default: true)
  autoContextCompression?: boolean;  // Enable smart context management (default: false)
  maxContextLength?: number;         // Token limit before compression (default: 8000)
  preserveLastN?: number;            // Recent messages to keep uncompressed (default: 3)
  compressionRatio?: number;         // Target compression ratio (default: 0.3)
  compressionStrategy?: 'summarize' | 'selective' | 'hybrid'; // Algorithm (default: 'hybrid')
  debug?: boolean;                   // Enable debug logging (default: false)
  subAgents?: IAgent[];              // Sub-agents for delegation and coordination
}
```
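The context-management attributes work together: once the conversation history grows past `maxContextLength` tokens, everything except the most recent `preserveLastN` messages is compressed toward roughly `compressionRatio` of its original size, using the selected `compressionStrategy`. The sketch below only illustrates that interplay under those assumptions; it is not the Astreus implementation, and `Message`, `countTokens`, and `summarize` are hypothetical helpers.

```typescript
// Illustrative sketch only -- not the Astreus implementation.
// Message, countTokens, and summarize are hypothetical helpers.
interface Message { role: 'user' | 'assistant'; content: string; }

declare function countTokens(messages: Message[]): number;
declare function summarize(messages: Message[], targetRatio: number): Message[];

function maybeCompress(
  history: Message[],
  maxContextLength = 8000,  // token limit before compression kicks in
  preserveLastN = 3,        // recent messages kept verbatim
  compressionRatio = 0.3    // target size of the compressed portion
): Message[] {
  if (countTokens(history) <= maxContextLength) return history;

  const older = history.slice(0, -preserveLastN);  // candidates for compression
  const recent = history.slice(-preserveLastN);    // always preserved as-is
  return [...summarize(older, compressionRatio), ...recent];
}
```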
### Example with All Attributes

```typescript
// Create sub-agents first
const researcher = await Agent.create({
  name: 'ResearchAgent',
  systemPrompt: 'You are an expert researcher who gathers comprehensive information.'
});

const writer = await Agent.create({
  name: 'WriterAgent',
  systemPrompt: 'You create engaging, well-structured content.'
});

const fullyConfiguredAgent = await Agent.create({
  name: 'AdvancedAssistant',
  description: 'Multi-purpose AI assistant',
  model: 'gpt-4o',
  embeddingModel: 'text-embedding-3-small', // Optional: specific embedding model
  visionModel: 'gpt-4o',                    // Optional: specific vision model
  temperature: 0.7,
  maxTokens: 2000,
  systemPrompt: 'You are an expert software architect...',
  memory: true,
  knowledge: true,
  vision: true,
  useTools: true,
  autoContextCompression: true,
  maxContextLength: 6000,          // Compress at 6000 tokens
  preserveLastN: 4,                // Keep last 4 messages
  compressionRatio: 0.4,           // 40% compression target
  compressionStrategy: 'hybrid',   // Use hybrid strategy
  debug: true,                     // Enable debug logging
  subAgents: [researcher, writer]  // Add sub-agents for delegation
});
```
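Once configured, the agent can be prompted directly. The snippet below is a minimal usage sketch; the `ask()` method name is an assumption on this page, so check the framework's API reference for the exact call.

```typescript
// Minimal usage sketch -- the ask() method name is an assumption,
// not confirmed by this page; consult the Astreus API reference.
const reply = await fullyConfiguredAgent.ask(
  'Design a modular architecture for a note-taking app.'
);
console.log(reply);
```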