OpenClaw Model Setup: OpenAI, Anthropic, Ollama
A basic guide to configuring models in OpenClaw. See each provider's official docs for complete details.
Overview
OpenClaw supports multiple model providers including OpenAI, Anthropic, Google, Ollama, and more. This guide covers how to configure models, set up fallbacks, and optimize for your use case.
Quick Start
The fastest way to get started is setting environment variables:
# Option 1: OpenAI
OPENAI_API_KEY=sk-...
# Option 2: Anthropic
ANTHROPIC_API_KEY=sk-ant-...
# Option 3: Ollama (local)
MODEL_BACKEND_URL=http://localhost:11434
Then set your model in config:
agents:
  defaults:
    model:
      primary: openai/gpt-5
Model Providers
OpenClaw supports many providers. Here's how to configure each:
OpenAI
OPENAI_API_KEY=sk-...
MODEL_BACKEND_URL=https://api.openai.com/v1
Anthropic
ANTHROPIC_API_KEY=sk-ant-...
MODEL_BACKEND_URL=https://api.anthropic.com
Ollama (Local)
MODEL_BACKEND_URL=http://localhost:11434
Make sure Ollama is running: ollama serve
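To sanity-check the connection before pointing OpenClaw at it, Ollama's /api/tags endpoint lists the locally pulled models. A small helper sketch (not part of OpenClaw; the `fetch` parameter is only there so the function can be exercised without a live server):

```python
import json
import urllib.request

def ollama_models(base_url="http://localhost:11434", fetch=None):
    """Return the model names a local Ollama server reports via /api/tags."""
    url = f"{base_url}/api/tags"
    if fetch is None:
        # Requires `ollama serve` to be running locally
        with urllib.request.urlopen(url) as resp:
            body = resp.read()
    else:
        body = fetch(url)
    return [m["name"] for m in json.loads(body)["models"]]
```

If the list comes back empty, pull a model first (e.g. ollama pull <model>).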
Google (Gemini)
GOOGLE_API_KEY=your-google-api-key
MODEL_BACKEND_URL=https://generativelanguage.googleapis.com/v1
Azure OpenAI
AZURE_OPENAI_API_KEY=your-key
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_DEPLOYMENT=gpt-4
Configuration Options
Full model configuration in your config file:
agents:
  defaults:
    model:
      primary: anthropic/claude-sonnet-4-5
    imageModel:
      primary: openai/gpt-5o
    thinkingDefault: "low"   # off | low | medium | high
    verboseDefault: "off"    # off | on
    elevatedDefault: "on"    # on | off
    timeoutSeconds: 600
    mediaMaxMb: 5
    contextTokens: 200000
    maxConcurrent: 3
Fallback Models
Set fallback models in case your primary fails:
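Conceptually, fallbacks are an ordered retry list: the primary is tried first, then each fallback in turn. A minimal sketch of that selection logic (illustrative only; `pick_model` and `is_available` are hypothetical names, not OpenClaw APIs):

```python
def pick_model(primary, fallbacks, is_available):
    """Try the primary model first, then each fallback in order."""
    for model in [primary, *fallbacks]:
        if is_available(model):
            return model
    raise RuntimeError("no configured model is available")

# Example: the primary is down, so the first fallback is chosen
chosen = pick_model(
    "anthropic/claude-opus-4-6",
    ["anthropic/claude-sonnet-4-5", "openai/gpt-5"],
    is_available=lambda m: m != "anthropic/claude-opus-4-6",
)
```

The config below expresses the same ordering declaratively.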
agents:
  defaults:
    model:
      primary: anthropic/claude-opus-4-6
      fallbacks:
        - anthropic/claude-sonnet-4-5
        - openai/gpt-5
        - google/gemini-2-pro
Model Aliases
Create shortcuts for commonly used models:
agents:
  defaults:
    models:
      "opus":
        provider: anthropic/claude-opus-4-6
      "sonnet":
        provider: anthropic/claude-sonnet-4-5
      "gpt":
        provider: openai/gpt-5
      "mini":
        provider: openai/gpt-5-mini
Now you can switch models with /model opus or /model gpt.
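Alias resolution is conceptually just a lookup table: a short name maps to a full provider/model id, and anything that isn't an alias passes through unchanged. A sketch under that assumption (illustrative, not OpenClaw's actual code):

```python
# Aliases mirroring the config above
ALIASES = {
    "opus": "anthropic/claude-opus-4-6",
    "sonnet": "anthropic/claude-sonnet-4-5",
    "gpt": "openai/gpt-5",
    "mini": "openai/gpt-5-mini",
}

def resolve(name):
    """Map an alias like 'opus' to its full model id; pass full ids through."""
    return ALIASES.get(name, name)
```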
Context Windows & Tuning
Adjust context settings for your needs:
Context Tokens
Set the maximum context window. Larger windows let the model see more history per request, but cost more.
contextTokens: 200000 # Claude 3 Opus max
contextTokens: 128000 # GPT-4 Turbo max
contextTokens: 32000  # Smaller models
Temperature
Control randomness (0 = deterministic, 1 = creative).
models:
  "my-model":
    params:
      temperature: 0.7
Max Tokens
Limit response length.
models:
  "my-model":
    params:
      maxTokens: 4096
Image Models
Configure a separate model for image analysis (used when the primary model doesn't support images):
agents:
  defaults:
    model:
      primary: anthropic/claude-opus-4-6
    imageModel:
      primary: openrouter/qwen/qwen-2.5-vl-72b-instruct:free
      fallbacks:
        - openrouter/google/gemini-2.0-flash-vision:free