# LLM

Unified interface for multiple LLM providers with automatic routing.

## Overview

The LLM abstraction layer provides seamless integration with multiple AI providers, allowing you to switch between OpenAI, Claude, Gemini, and Ollama without changing your code. It handles provider-specific implementations, message formatting, and streaming while providing a consistent API across all providers.
## Supported Providers

Astreus supports four major LLM providers with automatic model routing:
### OpenAI

All 14 supported models:

- **Latest:** `gpt-4.5`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `o4-mini`, `o4-mini-high`, `o3`
- **Stable:** `gpt-4o`, `gpt-4o-mini`, `gpt-4-turbo`, `gpt-4`, `gpt-3.5-turbo`, `gpt-3.5-turbo-16k`, `gpt-3.5-turbo-instruct`
- **API Key:** Set the `OPENAI_API_KEY` environment variable
### Anthropic Claude

All 9 supported models:

- **Latest:** `claude-sonnet-4-20250514`, `claude-opus-4-20250514`, `claude-3.7-sonnet-20250224`
- **Stable:** `claude-3-5-sonnet-20241022`, `claude-3-5-sonnet-20240620`, `claude-3-5-haiku-20241022`, `claude-3-opus-20240229`, `claude-3-sonnet-20240229`, `claude-3-haiku-20240307`
- **API Key:** Set the `ANTHROPIC_API_KEY` environment variable
### Google Gemini

All 12 supported models:

- **Latest:** `gemini-2.5-pro`, `gemini-2.5-pro-deep-think`, `gemini-2.5-flash`, `gemini-2.5-flash-lite`
- **Stable:** `gemini-2.0-flash`, `gemini-2.0-flash-thinking`, `gemini-2.0-flash-lite`, `gemini-2.0-pro-experimental`, `gemini-1.5-pro`, `gemini-1.5-flash`, `gemini-1.5-flash-8b`, `gemini-pro`
- **API Key:** Set the `GOOGLE_API_KEY` environment variable
### Ollama (Local)

All 31 supported models:

- **Latest:** `deepseek-r1`, `deepseek-v3`, `deepseek-v2.5`, `deepseek-coder`, `deepseek-coder-v2`, `qwen3`, `qwen2.5-coder`, `llama3.3`, `gemma3`, `phi4`
- **Popular:** `mistral-small`, `codellama`, `llama3.2`, `llama3.1`, `qwen2.5`, `gemma2`, `phi3`, `mistral`, `codegemma`, `wizardlm2`
- **Additional:** `dolphin-mistral`, `openhermes`, `deepcoder`, `stable-code`, `wizardcoder`, `magicoder`, `solar`, `yi`, `zephyr`, `orca-mini`, `vicuna`
- **Configuration:** Set `OLLAMA_BASE_URL` (default: `http://localhost:11434`)
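Ollama models typically need to be pulled onto your machine before they can serve requests. A quick setup sketch using the standard Ollama CLI (this assumes Ollama is already installed):

```bash
# Start the local server if it is not already running
ollama serve

# Pull one of the models listed above, then verify it is available
ollama pull llama3.3
ollama list
```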
## Configuration

### Environment Variables

Set up your API keys and configuration:

```bash
# OpenAI
export OPENAI_API_KEY="your-openai-key"
export OPENAI_BASE_URL="https://api.openai.com/v1"  # Optional

# Anthropic Claude
export ANTHROPIC_API_KEY="your-anthropic-key"
export ANTHROPIC_BASE_URL="https://api.anthropic.com"  # Optional

# Google Gemini
export GOOGLE_API_KEY="your-google-key"

# Ollama (Local)
export OLLAMA_BASE_URL="http://localhost:11434"  # Optional
```
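If you prefer keeping these values in a local `.env` file rather than exporting them in your shell, load them before creating any agents. A minimal sketch assuming the `dotenv` package (Astreus reads the standard environment variables either way):

```typescript
// Load .env into process.env before any Astreus code runs
import 'dotenv/config';

import { Agent } from '@astreus-ai/astreus';
```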
### Agent Configuration

Specify the model when creating agents:

```typescript
import { Agent } from '@astreus-ai/astreus';

const agent = await Agent.create({
  name: 'MyAgent',
  model: 'gpt-4.5', // Model automatically routes to the correct provider
  temperature: 0.7,
  maxTokens: 2000
});
```
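Because routing is keyed off the model name, switching providers means changing only the `model` string; the rest of the agent code stays the same. A sketch (assuming the relevant API keys are set, and that `llama3.3` has been pulled if you try the local option):

```typescript
// Same configuration shape, three different providers
const claudeAgent = await Agent.create({
  name: 'ClaudeAgent',
  model: 'claude-sonnet-4-20250514', // routed to Anthropic
  temperature: 0.7
});

const geminiAgent = await Agent.create({
  name: 'GeminiAgent',
  model: 'gemini-2.5-flash',         // routed to Google
  temperature: 0.7
});

const localAgent = await Agent.create({
  name: 'LocalAgent',
  model: 'llama3.3',                 // routed to the local Ollama server
  temperature: 0.7
});
```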
## Usage Examples

### Basic LLM Usage

```typescript
import { getLLM } from '@astreus-ai/astreus';

const llm = getLLM();

// Generate a response
const response = await llm.generateResponse({
  model: 'claude-sonnet-4-20250514',
  messages: [{ role: 'user', content: 'Explain quantum computing' }],
  temperature: 0.7,
  maxTokens: 1000
});

console.log(response.content);
```
### Streaming Responses

```typescript
// Stream the response in real time
for await (const chunk of llm.generateStreamResponse({
  model: 'gpt-4.5',
  messages: [{ role: 'user', content: 'Write a story about AI' }],
  stream: true
})) {
  if (!chunk.done) {
    process.stdout.write(chunk.content);
  }
}
```
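If you also need the complete text once streaming finishes (for logging or storage), accumulate the chunks as they arrive. A small sketch built on the same call:

```typescript
// Collect chunks while still rendering them incrementally
let fullText = '';
for await (const chunk of llm.generateStreamResponse({
  model: 'gpt-4.5',
  messages: [{ role: 'user', content: 'Write a story about AI' }],
  stream: true
})) {
  if (!chunk.done) {
    fullText += chunk.content;
    process.stdout.write(chunk.content);
  }
}
console.log(`\nTotal characters streamed: ${fullText.length}`);
```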
### Function Calling

```typescript
const response = await llm.generateResponse({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: "What's the weather in Tokyo?" }],
  tools: [{
    type: 'function',
    function: {
      name: 'get_weather',
      description: 'Get current weather information',
      parameters: {
        type: 'object',
        properties: {
          location: {
            type: 'string',
            description: 'City name'
          }
        },
        required: ['location']
      }
    }
  }]
});

// Handle tool calls
if (response.toolCalls) {
  response.toolCalls.forEach(call => {
    console.log(`Tool: ${call.function.name}`);
    console.log(`Args: ${call.function.arguments}`);
  });
}
```
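A tool call is only a request: your code runs the function and sends the result back so the model can produce a final answer. A sketch of that round trip, using a hypothetical local `getWeather` helper and assuming the provider accepts tool results as a follow-up message (the exact role and message shape may differ per provider):

```typescript
// Hypothetical local implementation of the declared tool
async function getWeather(location: string): Promise<string> {
  return `Sunny, 22°C in ${location}`; // stand-in for a real weather API call
}

if (response.toolCalls) {
  for (const call of response.toolCalls) {
    // Arguments arrive as a JSON string; parse before use
    const args = JSON.parse(call.function.arguments);
    const result = await getWeather(args.location);

    // Send the result back so the model can answer in natural language.
    // NOTE: the 'tool' role here is an assumption; check your provider's
    // expected shape for tool-result messages.
    const followUp = await llm.generateResponse({
      model: 'gpt-4o',
      messages: [
        { role: 'user', content: "What's the weather in Tokyo?" },
        { role: 'tool', content: result }
      ]
    });
    console.log(followUp.content);
  }
}
```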
## LLM Options

Configure LLM behavior with these options:

```typescript
interface LLMRequestOptions {
  model: string;          // Required: model identifier
  messages: LLMMessage[]; // Required: conversation history
  temperature?: number;   // Creativity level (0.0-1.0, default: 0.7)
  maxTokens?: number;     // Max output tokens (default: 4096)
  stream?: boolean;       // Enable streaming responses
  systemPrompt?: string;  // System instructions
  tools?: Tool[];         // Function calling tools
}
```
### Parameter Details

- **temperature**: Controls randomness (0.0 = deterministic, 1.0 = very creative)
- **maxTokens**: Maximum number of tokens in the response (limit varies by model)
- **stream**: Enables real-time streaming for long responses
- **systemPrompt**: Sets behavior and context for the model
- **tools**: Enables function calling capabilities
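These options compose in a single request. For example, a low temperature plus a system prompt keeps output terse and reproducible (the model choice here is illustrative):

```typescript
const summary = await llm.generateResponse({
  model: 'gpt-4o-mini',
  systemPrompt: 'You are a concise technical summarizer. Answer in two sentences.',
  messages: [{ role: 'user', content: 'Summarize the CAP theorem.' }],
  temperature: 0.1, // near-deterministic output
  maxTokens: 150    // hard cap keeps the summary short
});
console.log(summary.content);
```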
## Provider Features

| Feature          | OpenAI | Claude | Gemini  | Ollama  |
|------------------|--------|--------|---------|---------|
| Streaming        | ✅     | ✅     | ✅      | ✅      |
| Function Calling | ✅     | ✅     | Limited | Limited |
| Token Usage      | ✅     | ✅     | Limited | ✅      |
| Custom Base URL  | ✅     | ✅     | ❌      | ✅      |
| Local Models     | ❌     | ❌     | ❌      | ✅      |
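One practical consequence of a uniform interface is cheap failover: if a hosted provider errors out (rate limit, outage), you can retry the same request against another provider. A sketch, assuming a local Ollama model is available as the fallback:

```typescript
// Try a hosted model first, then fall back to a local one on failure
async function generateWithFallback(prompt: string) {
  const messages = [{ role: 'user' as const, content: prompt }];
  try {
    return await llm.generateResponse({ model: 'gpt-4o', messages });
  } catch (error) {
    console.warn('Hosted provider failed, falling back to local model:', error);
    return await llm.generateResponse({ model: 'llama3.3', messages });
  }
}
```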
## Model Selection Guide

### For Code Generation

- **Best:** `gpt-4o`, `claude-3-5-sonnet-20241022`, `deepseek-coder`
- **Fast:** `gpt-4o-mini`, `claude-3-5-haiku-20241022`

### For Reasoning Tasks

- **Best:** `claude-opus-4-20250514`, `gpt-4.5`, `o3`
- **Balanced:** `claude-sonnet-4-20250514`, `gpt-4o`

### For Creative Writing

- **Best:** `gpt-4.5`, `claude-3-opus-20240229`
- **Fast:** `gemini-2.5-pro`, `gpt-4o-mini`

### For Privacy/Local Use

- **Best:** `deepseek-r1`, `llama3.3`, `qwen3`
- **Code:** `deepseek-coder`, `codellama`
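If you route different workloads programmatically, these recommendations can be encoded as a small lookup. The mapping below is illustrative, not exhaustive:

```typescript
// Illustrative task-to-model mapping based on the recommendations above
type Task = 'code' | 'reasoning' | 'creative' | 'local';

const MODEL_FOR_TASK: Record<Task, string> = {
  code: 'gpt-4o',
  reasoning: 'claude-opus-4-20250514',
  creative: 'gpt-4.5',
  local: 'deepseek-r1'
};

const reasoningAgent = await Agent.create({
  name: 'TaskRouter',
  model: MODEL_FOR_TASK['reasoning'] // routes to Anthropic automatically
});
```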