Context Compression
Use Astreus's auto context compression system to automatically manage long conversations by summarizing older messages while preserving important context.
Quick Start
Clone the Complete Example
The easiest way to get started is to clone the complete example repository:
```bash
git clone https://github.com/astreus-ai/context-compression
cd context-compression
npm install
```
Or Install Package Only
If you prefer to build from scratch:
```bash
npm install @astreus-ai/astreus
```
Environment Setup
```bash
# .env
OPENAI_API_KEY=sk-your-openai-api-key-here
DB_URL=sqlite://./astreus.db
```
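If you are building from scratch rather than running the cloned example, make sure these variables are loaded into the process environment before the agent is created. A minimal sketch, assuming the `dotenv` package is installed (it is not part of the Astreus install above):

```typescript
// Minimal sketch: load .env and sanity-check the required variables
// (assumes the dotenv package: npm install dotenv).
import 'dotenv/config';

for (const key of ['OPENAI_API_KEY', 'DB_URL']) {
  if (!process.env[key]) {
    throw new Error(`Missing required environment variable: ${key}`);
  }
}
```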
Auto Context Compression
The `autoContextCompression` feature automatically summarizes older messages when a conversation grows too long, maintaining context while reducing token usage:
```typescript
import { Agent } from '@astreus-ai/astreus';

const agent = await Agent.create({
  name: 'ContextAgent',
  model: 'gpt-4o',
  memory: true,
  autoContextCompression: true,
  systemPrompt: 'You can handle very long conversations efficiently.'
});

// Have a long conversation
for (let i = 1; i <= 20; i++) {
  await agent.ask(`Tell me an interesting fact about space. This is message #${i}.`);
}

// Test memory - the agent should remember early facts despite context compression
const response = await agent.ask("What was the first space fact you told me?");
console.log(response);
```
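To make the behavior concrete, here is a simplified, library-agnostic sketch of the general idea behind this kind of compression. It is not Astreus's actual implementation; `summarize` is a hypothetical placeholder for a model call that condenses older messages:

```typescript
interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Hypothetical summarizer: in practice this would be an LLM call that
// condenses the given messages into a short summary string.
async function summarize(messages: Message[]): Promise<string> {
  return `Summary of ${messages.length} earlier messages.`;
}

// Simplified illustration: keep the most recent messages verbatim and
// replace everything older with a single summary message.
async function compressHistory(
  history: Message[],
  keepRecent = 10
): Promise<Message[]> {
  if (history.length <= keepRecent) return history;

  const older = history.slice(0, history.length - keepRecent);
  const recent = history.slice(history.length - keepRecent);
  const summary = await summarize(older);

  return [
    { role: 'system', content: `Earlier conversation summary: ${summary}` },
    ...recent,
  ];
}
```

In a real system the threshold would be based on token counts rather than a fixed message count, but the shape of the transformation is the same: older turns collapse into a summary while recent turns stay intact.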
Running the Example
If you cloned the repository:
```bash
npm run dev
```
Repository
The complete example is available on GitHub: https://github.com/astreus-ai/context-compression