# Context Processor

The Context Processor is Astreus's adaptive context window management system. It manages conversation context across three hierarchical layers, using AI-powered sentiment and importance analysis, automatic content prioritization, and keyword/topic frequency tracking. Context retention is optimized automatically based on importance, recency, sentiment, and the available token budget.
## Overview
The Context Processor implements a three-layer hierarchical architecture:
- **Immediate Layer** (40% of tokens) - Recent messages
- **Summarized Layer** (35% of tokens) - Compressed conversation summaries
- **Persistent Layer** (25% of tokens) - Long-term important facts
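As a rough mental model, the layered context can be pictured as a structure like the one below. This is an illustrative sketch only; the actual Astreus types and field names may differ.

```typescript
// Illustrative shape only -- not the library's actual types.
interface ContextLayers {
  immediate: { role: string; content: string; timestamp: Date }[]; // recent messages, kept verbatim
  summarized: string[]; // compressed summaries of older conversation spans
  persistent: string[]; // long-term important facts that outlive compression
}
```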
## Key Features
- **Hierarchical Memory**: Three-layer architecture (immediate, summarized, persistent)
- **AI-Powered Analysis**: Intelligent sentiment and importance scoring using LLM providers
- **Frequency Tracking**: Automatic keyword and topic frequency analysis for better prioritization
- **Token Budgeting**: Intelligent allocation of context window space
- **Priority-Based Retention**: Content prioritized using recency, frequency, importance, sentiment, and user interaction
- **Automatic Compression**: Context is compressed when token limits are reached
- **Compression Strategies**: `SUMMARIZE`, `KEYWORD_EXTRACT`, `SEMANTIC_CLUSTER`, `TEMPORAL_COMPRESS`
- **Fallback Heuristics**: Smart heuristic analysis when AI providers are unavailable
- **Unicode Support**: International language support with Unicode-aware text processing
- **Session Isolation**: Each session has its own context manager
- **Seamless Integration**: Works transparently with all other components
- **Cleanup Management**: Automatic cleanup of inactive context managers
## Automatic Context Management (Recommended)
The easiest way to use adaptive context is through the memory system:
```typescript
import { createMemory, createAgent } from '@astreus-ai/astreus';

// Enable adaptive context management
const memory = await createMemory({
  database: db,
  tableName: "memories",
  enableAdaptiveContext: true, // Enable adaptive context
  maxEntries: 100,
  // Optional: Custom token budget
  tokenBudget: {
    total: 4000,
    immediate: 1600,  // 40% - recent messages
    summarized: 1400, // 35% - conversation summaries
    persistent: 1000  // 25% - important facts
  }
});

// Create agent with adaptive context
const agent = await createAgent({
  name: 'SmartAgent',
  provider: provider,
  memory: memory, // Context management is automatic
  database: db,
  systemPrompt: 'You are an intelligent assistant with advanced memory.'
});

// Context is automatically managed during conversations
const response = await agent.chat("Remember that I prefer concise answers");
const response2 = await agent.chat("What did I just tell you about my preferences?");
// The agent will remember preferences even in long conversations
```
## Manual Context Control
For advanced use cases, you can manually control the adaptive context:
```typescript
import {
  AdaptiveContextManager,
  DEFAULT_TOKEN_BUDGET,
  DEFAULT_PRIORITY_WEIGHTS,
  CompressionStrategy
} from '@astreus-ai/astreus';

// Get adaptive context for a session
const contextLayers = await memory.getAdaptiveContext("session-1", 4000);
console.log("Current context layers:", contextLayers);

// Update context with new information
await memory.updateContextLayers("session-1", {
  role: "user",
  content: "Important: I'm a software engineer",
  timestamp: new Date()
});

// Compress context when needed
const compressionResult = await memory.compressContext("session-1", CompressionStrategy.SUMMARIZE);
console.log("Compression result:", compressionResult);

// Get formatted context for display
const formattedContext = await memory.getFormattedContext("session-1", 4000);
console.log("Formatted context:", formattedContext);
```
## Custom Configuration
You can customize the adaptive context behavior:
```typescript
import { createMemory } from '@astreus-ai/astreus';

const memory = await createMemory({
  database: db,
  enableAdaptiveContext: true,
  // Custom token budget allocation
  tokenBudget: {
    total: 6000,
    immediate: 3000,  // 50% for recent messages
    summarized: 2000, // 33% for summaries
    persistent: 1000  // 17% for persistent data
  },
  // Custom priority weights
  priorityWeights: {
    recency: 0.4,    // Prioritize recent messages
    frequency: 0.1,  // Less weight to frequency
    importance: 0.4, // High importance weight
    userInteraction: 0.1,
    sentiment: 0.0   // Ignore sentiment
  }
});
```
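Note that both the default weights and the custom weights above sum to 1.0, which keeps priority scores on a comparable scale. If you tune them, a quick sanity check helps; the helper below is hypothetical, not part of the Astreus API.

```typescript
// Hypothetical sanity check -- not an Astreus export.
function assertWeightsSumToOne(weights: Record<string, number>): void {
  const sum = Object.values(weights).reduce((a, b) => a + b, 0);
  if (Math.abs(sum - 1) > 1e-6) {
    throw new Error(`Priority weights should sum to 1.0, got ${sum}`);
  }
}

assertWeightsSumToOne({
  recency: 0.4,
  frequency: 0.1,
  importance: 0.4,
  userInteraction: 0.1,
  sentiment: 0.0
}); // passes
```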
## Through Chat Service
The chat service provides convenient access to adaptive context:
```typescript
import { createChat } from '@astreus-ai/astreus';

const chat = await createChat({
  database: db,
  memory: memory,
  tableName: 'chats',
  enableAdaptiveContext: true // Enable for chat sessions
});

// Get adaptive context for a chat
const adaptiveContext = await chat.getAdaptiveContext("chat-1");

// Get formatted context for display
const formattedContext = await chat.getFormattedContext("chat-1", 4000);
```
## Token Budget Configuration
The default token budget allocates tokens across the three layers:
```typescript
export const DEFAULT_TOKEN_BUDGET = {
  total: 4000,
  immediate: 1600,  // 40% - recent messages
  summarized: 1400, // 35% - conversation summaries
  persistent: 1000  // 25% - important facts
};
```
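To keep the default 40/35/25 split at a different total, the per-layer budgets can be derived rather than hand-tuned. The helper below is a hypothetical convenience, not an Astreus export.

```typescript
// Hypothetical helper: derive a budget from a total using the default 40/35/25 split.
function makeTokenBudget(total: number) {
  return {
    total,
    immediate: Math.floor(total * 0.40),  // recent messages
    summarized: Math.floor(total * 0.35), // conversation summaries
    persistent: Math.floor(total * 0.25)  // important facts
  };
}

const budget = makeTokenBudget(8000);
// { total: 8000, immediate: 3200, summarized: 2800, persistent: 2000 }
```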
## Priority Weights
Content priority is calculated using multiple AI-powered and heuristic factors:
```typescript
export const DEFAULT_PRIORITY_WEIGHTS = {
  recency: 0.3,          // Recent messages get higher priority
  frequency: 0.2,        // Frequently repeated topics (AI-extracted keywords and topics)
  importance: 0.25,      // Important information (AI-analyzed or heuristic scoring)
  userInteraction: 0.15, // User messages prioritized over system messages
  sentiment: 0.1         // Emotional intensity (AI-analyzed or heuristic scoring)
};
```
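Conceptually, each piece of content receives a priority equal to the weighted sum of its normalized factor scores. The calculation below is a minimal sketch of that idea, assuming 0-1 factor scores; it is not the library's internal implementation.

```typescript
// Illustrative priority calculation: weighted sum of normalized (0-1) factor scores.
type Factors = {
  recency: number;
  frequency: number;
  importance: number;
  userInteraction: number;
  sentiment: number;
};

function priorityScore(scores: Factors, weights: Factors): number {
  return (Object.keys(weights) as (keyof Factors)[])
    .reduce((sum, key) => sum + weights[key] * scores[key], 0);
}

// A recent, important user message under the default weights:
priorityScore(
  { recency: 0.9, frequency: 0.3, importance: 0.8, userInteraction: 1.0, sentiment: 0.4 },
  { recency: 0.3, frequency: 0.2, importance: 0.25, userInteraction: 0.15, sentiment: 0.1 }
); // 0.27 + 0.06 + 0.20 + 0.15 + 0.04 = 0.72
```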
**AI-Powered Analysis**: When an LLM provider is available, the system uses AI to score sentiment and importance, analyzing content context, emotional tone, and semantic importance for more accurate prioritization.

**Fallback Heuristics**: When no AI provider is available, the system falls back to heuristic analysis using language-agnostic patterns, Unicode-aware text processing, and content characteristics such as length, special characters, and structural elements.
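For intuition, a fallback heuristic of this kind might score importance from simple, language-agnostic content characteristics. The sketch below only illustrates the idea; it is not Astreus's actual heuristic.

```typescript
// Purely illustrative fallback: estimate importance (0-1) from content characteristics.
function heuristicImportance(content: string): number {
  let score = 0;
  score += Math.min(content.length / 500, 0.4);      // length: longer messages often carry more detail
  if (/\p{N}/u.test(content)) score += 0.2;          // digits (Unicode-aware) often mark concrete facts
  if (/[!?]/.test(content)) score += 0.1;            // special characters signaling emphasis or questions
  if (/(^|\n)\s*[-*]\s/.test(content)) score += 0.1; // structural elements such as list markers
  if (/\p{Lu}{2,}/u.test(content)) score += 0.2;     // runs of uppercase letters (any cased script)
  return Math.min(score, 1);
}
```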
## Compression Strategies
When context exceeds token limits, different compression strategies are used:
- `SUMMARIZE`: Summarize content to preserve key information
- `KEYWORD_EXTRACT`: Extract key terms and phrases
- `SEMANTIC_CLUSTER`: Group similar content together
- `TEMPORAL_COMPRESS`: Compress by time periods
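All four strategies are values of the `CompressionStrategy` enum shown earlier, so switching strategy is a one-argument change to `compressContext`. Which strategy fits which situation is an assumption here, offered only as an illustration.

```typescript
// Illustrative strategy choice (the heuristic itself is an assumption, not a library rule):
// narrative-heavy sessions -> SUMMARIZE; sessions dominated by repeated topics -> KEYWORD_EXTRACT.
const topicHeavy = true; // placeholder flag for this sketch

const strategy = topicHeavy
  ? CompressionStrategy.KEYWORD_EXTRACT
  : CompressionStrategy.SUMMARIZE;

const result = await memory.compressContext("session-1", strategy);
console.log("Compressed with", strategy, "->", result);
```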
## Architecture Benefits
The Context Processor provides several advantages:
- **Memory Efficiency**: Optimal use of available context tokens
- **Content Preservation**: Important information is retained longer
- **Automatic Management**: No manual intervention required
- **Scalable**: Handles long conversations gracefully
- **Flexible**: Customizable for different use cases
- **Session Aware**: Each conversation has its own context
## Integration with Other Components
The Context Processor integrates seamlessly with:
- **Memory Storage**: Automatic context management in memory operations
- **Chat Service**: Context-aware chat sessions
- **Agents**: Transparent context management during conversations
- **Task Execution**: Context-aware task processing
## Best Practices
- **Enable by Default**: Use `enableAdaptiveContext: true` for most applications
- **Tune Token Budget**: Adjust allocation based on your specific needs
- **Monitor Performance**: Watch context compression frequency
- **Custom Weights**: Adjust priority weights for domain-specific applications
- **Session Management**: Clean up inactive sessions to free resources
The Context Processor is a powerful system that makes Astreus agents intelligent about memory management, ensuring optimal performance even in extended conversations.