Providers

Providers establish connections to Language Model APIs, allowing your agents to generate text, answer questions, and perform reasoning tasks.

Supported Providers

Astreus currently supports the following LLM providers:

  • OpenAI: Access to models like GPT-4o, GPT-4o-mini, and GPT-3.5 Turbo
  • Claude (Anthropic): Access to Claude 3.5 Sonnet, Claude 3.5 Haiku, and other Claude models
  • Gemini (Google): Access to Gemini 1.5 Pro, Gemini 1.5 Flash, and other Gemini models
  • Ollama: Connect to locally hosted models like Llama, Mistral, and others

Creating a Provider

Use the createProvider function to set up a provider:

import { createProvider } from '@astreus-ai/astreus';

// For OpenAI
const openaiProvider = createProvider({
  type: 'openai',
  model: 'gpt-4o-mini',
  apiKey: process.env.OPENAI_API_KEY // Optional, defaults to OPENAI_API_KEY env var
});

// For Claude (Anthropic)
const claudeProvider = createProvider({
  type: 'claude',
  model: 'claude-3-5-sonnet-20241022',
  apiKey: process.env.ANTHROPIC_API_KEY // Optional, defaults to ANTHROPIC_API_KEY env var
});

// For Gemini (Google)
const geminiProvider = createProvider({
  type: 'gemini',
  model: 'gemini-1.5-pro',
  apiKey: process.env.GOOGLE_API_KEY // Optional, defaults to GOOGLE_API_KEY env var
});

// For Ollama (local models)
const ollamaProvider = createProvider({
  type: 'ollama',
  baseUrl: "http://localhost:11434",
  model: "llama3.1"
});

Provider Configuration

OpenAI Provider Options

Option | Type | Description | Required
type | string | Must be 'openai' | Yes
model | string | Model name (e.g., 'gpt-4o', 'gpt-4o-mini', 'gpt-3.5-turbo') | Yes
apiKey | string | OpenAI API key | No (defaults to OPENAI_API_KEY env var)
baseUrl | string | Custom API endpoint | No
organizationId | string | Organization ID for OpenAI | No
embeddingModel | string | Embedding model for RAG | No (defaults to 'text-embedding-3-small')
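The optional fields can be passed alongside type and model. A minimal sketch using the options from the table above (the values are placeholders, and OPENAI_ORG_ID is a hypothetical environment variable, not one Astreus reads itself):

// OpenAI provider using the optional settings from the table above
const customOpenAI = createProvider({
  type: 'openai',
  model: 'gpt-4o',
  apiKey: process.env.OPENAI_API_KEY,
  baseUrl: 'https://api.openai.com/v1',        // custom or proxy endpoint
  organizationId: process.env.OPENAI_ORG_ID,   // hypothetical env var holding your org ID
  embeddingModel: 'text-embedding-3-small'     // embedding model used for RAG
});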

Claude (Anthropic) Provider Options

Option | Type | Description | Required
type | string | Must be 'claude' | Yes
model | string | Model name (e.g., 'claude-3-5-sonnet-20241022', 'claude-3-5-haiku-20241022') | Yes
apiKey | string | Anthropic API key | No (defaults to ANTHROPIC_API_KEY env var)
baseUrl | string | Custom API endpoint | No
maxTokens | number | Maximum tokens per response | No (defaults to 4000)

Gemini (Google) Provider Options

Option | Type | Description | Required
type | string | Must be 'gemini' | Yes
model | string | Model name (e.g., 'gemini-1.5-pro', 'gemini-1.5-flash') | Yes
apiKey | string | Google API key | No (defaults to GOOGLE_API_KEY env var)
baseUrl | string | Custom API endpoint | No
safetySettings | array | Safety settings for content filtering | No
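The exact shape of safetySettings is not documented above; the sketch below assumes it is passed through to Google's Generative AI API, which uses category/threshold pairs:

// Gemini provider with content-filtering settings
// Assumption: safetySettings is forwarded as-is to Google's API
const strictGemini = createProvider({
  type: 'gemini',
  model: 'gemini-1.5-flash',
  safetySettings: [
    { category: 'HARM_CATEGORY_HARASSMENT', threshold: 'BLOCK_MEDIUM_AND_ABOVE' },
    { category: 'HARM_CATEGORY_DANGEROUS_CONTENT', threshold: 'BLOCK_LOW_AND_ABOVE' }
  ]
});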

Ollama Provider Options

Option | Type | Description | Required
type | string | Must be 'ollama' | Yes
model | string | Model name (e.g., 'llama3.1', 'mistral', 'mixtral') | Yes
baseUrl | string | Ollama API URL | No (defaults to 'http://localhost:11434')
options | object | Model-specific options | No
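The options object is described only as "model-specific options"; the sketch below assumes it maps to Ollama's generation options such as temperature and num_ctx:

// Ollama provider with model-specific generation options
// Assumption: options is passed through to Ollama's API
const tunedOllama = createProvider({
  type: 'ollama',
  model: 'llama3.1',
  options: {
    temperature: 0.2, // lower temperature for more deterministic output
    num_ctx: 8192     // larger context window, if the model supports it
  }
});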

Working with Models

Once you have created a provider, you can access specific models:

// Get the default model name
const defaultModelName = provider.getDefaultModel();

// List available models
const availableModels = provider.listModels();

// Get a specific model
const gpt4 = provider.getModel('gpt-4o');

Using Providers with Agents

Providers are typically used when creating agents:

import { createAgent, createProvider, createMemory, createDatabase } from '@astreus-ai/astreus';

async function createAgentWithProvider() {
  const db = await createDatabase();
  const memory = await createMemory({ database: db });
  
  // Create provider
  const provider = createProvider({
    type: 'openai',
    model: 'gpt-4o-mini',
    apiKey: process.env.OPENAI_API_KEY
  });

  // Create agent with provider
  const agent = await createAgent({
    name: 'MyAgent',
    provider: provider,
    memory: memory,
    systemPrompt: "You are a helpful AI assistant."
  });

  return agent;
}

Generating Text Completions

You can also use a provider's models directly to generate completions, without going through an agent:

// Get a model from the provider
const model = provider.getModel('gpt-4o-mini');

// Generate a completion
const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'What is the capital of France?' }
];

const completion = await model.complete(messages);
console.log(completion); // "The capital of France is Paris."

Advanced Usage: Embedding Generation

Providers can also generate embeddings (vector representations of text) for semantic search:

// Generate embeddings for text content (OpenAI only)
const text = "This is a sample text for embedding generation";
const embedding = await provider.generateEmbedding(text);

// The embedding is a vector (number array) that can be used for semantic search
console.log(`Generated embedding with ${embedding.length} dimensions`);

Provider Methods

Core Methods

All providers implement these core methods:

// List available models
const models = provider.listModels();
console.log('Available models:', models);

// Get default model name
const defaultModelName = provider.getDefaultModel();
console.log('Default model:', defaultModelName);

// Get a specific model instance
const specificModel = provider.getModel('gpt-4o');

// Get provider type
console.log('Provider type:', provider.type);

OpenAI-Specific Methods

OpenAI providers have additional methods:

// Get embedding model name
const embeddingModel = provider.getEmbeddingModel();
console.log('Embedding model:', embeddingModel);

// Generate embeddings
const embedding = await provider.generateEmbedding("Sample text");
console.log('Embedding dimensions:', embedding?.length);

Environment Variables

Global Configuration (Provider-Agnostic)

Configure models globally for all providers:

# Global Model Configuration
MODEL_NAME=gpt-4o-mini           # Model name for any provider
TEMPERATURE=0.7                  # Model temperature (0.0 - 1.0)
MAX_TOKENS=2048                 # Maximum tokens per response
EMBEDDING_MODEL=text-embedding-3-small  # Embedding model

# Switch providers easily by changing MODEL_NAME:
# MODEL_NAME=claude-3-haiku-20240307     # For Claude
# MODEL_NAME=gemini-1.5-flash           # For Gemini  
# MODEL_NAME=llama3.2                   # For Ollama
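If you prefer to resolve the provider in your own startup code, the MODEL_NAME convention above can be mapped to a provider type by hand. This is an illustrative sketch, not Astreus' internal resolution logic:

// Illustrative only: choose a provider type from MODEL_NAME at startup
const modelName = process.env.MODEL_NAME || 'gpt-4o-mini';

const providerType =
  modelName.startsWith('claude') ? 'claude' :
  modelName.startsWith('gemini') ? 'gemini' :
  modelName.startsWith('gpt')    ? 'openai' :
  'ollama'; // assume anything else is a locally hosted Ollama model

const provider = createProvider({
  type: providerType,
  model: modelName
});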

Provider API Keys

# Provider API Keys
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=sk-ant-your_anthropic_api_key
GOOGLE_API_KEY=your_google_api_key

# Optional Provider Base URLs
OPENAI_BASE_URL=https://api.openai.com/v1
ANTHROPIC_BASE_URL=https://api.anthropic.com
GOOGLE_BASE_URL=https://generativelanguage.googleapis.com
OLLAMA_BASE_URL=http://localhost:11434

Best Practices

  1. Environment Variables: Store API keys in environment variables, not in code
  2. Model Selection: Use MODEL_NAME environment variable to easily switch between models and providers
  3. Error Handling: Wrap provider operations in try-catch blocks (see the sketch after this list)
  4. Rate Limiting: Be aware of provider rate limits and implement appropriate handling
  5. Cost Management: Monitor usage and costs, especially with premium models
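
As a concrete example of points 3 and 4, a model call can be wrapped in try-catch with a simple retry and exponential backoff. This is a minimal sketch; production code should inspect the provider's specific error codes (not shown here) before deciding whether to retry:

// Minimal sketch: retry a completion with exponential backoff on failure
async function completeWithRetry(model, messages, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await model.complete(messages);
    } catch (error) {
      if (attempt === maxAttempts) throw error;  // give up after the last attempt
      const delayMs = 1000 * 2 ** (attempt - 1); // 1s, 2s, 4s, ...
      console.warn(`Attempt ${attempt} failed, retrying in ${delayMs}ms`);
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}

// Usage with a model obtained from any provider
const reply = await completeWithRetry(
  provider.getModel('gpt-4o-mini'),
  [{ role: 'user', content: 'Hello!' }]
);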

Example: Multi-Provider Setup

You can create multiple providers for different use cases:

import { createAgent, createProvider, createMemory, createDatabase } from '@astreus-ai/astreus';

async function createMultiProviderSystem() {
  const db = await createDatabase();
  const memory = await createMemory({ database: db });

  // Fast provider for simple tasks
  const fastProvider = createProvider({
    type: 'openai',
    model: 'gpt-4o-mini'
  });

  // Powerful provider for complex tasks
  const powerfulProvider = createProvider({
    type: 'claude',
    model: 'claude-3-5-sonnet-20241022'
  });

  // Creative provider for content generation
  const creativeProvider = createProvider({
    type: 'gemini',
    model: 'gemini-1.5-pro'
  });

  // Local provider for privacy-sensitive tasks
  const localProvider = createProvider({
    type: 'ollama',
    model: 'llama3.1',
    baseUrl: 'http://localhost:11434'
  });

  // Create agents with different providers
  const fastAgent = await createAgent({
    name: 'FastAgent',
    provider: fastProvider,
    memory: memory,
    systemPrompt: "You provide quick, concise responses."
  });

  const powerfulAgent = await createAgent({
    name: 'PowerfulAgent',
    provider: powerfulProvider,
    memory: memory,
    systemPrompt: "You provide detailed, comprehensive responses with careful reasoning."
  });

  const creativeAgent = await createAgent({
    name: 'CreativeAgent',
    provider: creativeProvider,
    memory: memory,
    systemPrompt: "You are a creative assistant that generates engaging content."
  });

  const localAgent = await createAgent({
    name: 'LocalAgent',
    provider: localProvider,
    memory: memory,
    systemPrompt: "You are a privacy-focused assistant running locally."
  });

  return { fastAgent, powerfulAgent, creativeAgent, localAgent };
}

Troubleshooting

Provider not working?

  • Check API keys are correctly set in environment variables
  • Verify network connectivity to provider endpoints
  • Ensure model names are correct and available

OpenAI API errors?

  • Check API key validity and permissions
  • Monitor rate limits and usage quotas
  • Verify model availability in your region

Claude (Anthropic) API errors?

  • Ensure your ANTHROPIC_API_KEY is valid
  • Check rate limits and usage quotas
  • Verify model access permissions

Gemini (Google) API errors?

  • Validate GOOGLE_API_KEY is correctly set
  • Check Google Cloud project settings
  • Ensure the Generative AI API is enabled

Ollama connection issues?

  • Make sure Ollama is running locally
  • Check the base URL is correct (http://localhost:11434)
  • Verify the model is downloaded and available (ollama list)
  • Try pulling the model: ollama pull llama3.1
