Run Local AI for Free: OpenClaw + Ollama


Run AI models locally with Ollama. No API costs, complete privacy. Configure OpenClaw to use local models.

5 min read · Last updated Feb 19, 2026
Stuck? Check the troubleshooting index or ask in Discord.

Overview

Ollama lets you run AI models locally on your machine. No API costs, complete privacy, and full control. OpenClaw integrates with Ollama's native API, supporting streaming and tool calling.

Ollama itself is free; you only pay for your hardware. That makes it great for development, testing, or running 24/7 without API bills.
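Under the hood, OpenClaw talks to Ollama's native HTTP API on port 11434. You can hit the same endpoint yourself to watch the streamed, newline-delimited JSON chunks. A minimal sketch (the `ollama_url` helper is ours, not an Ollama or OpenClaw command, and assumes a server is running with llama3.3 pulled):

```shell
# Default Ollama endpoint; override with OLLAMA_HOST if yours differs.
# (Helper name is our own convention, not part of either tool.)
ollama_url() { echo "${OLLAMA_HOST:-http://localhost:11434}"; }

# /api/generate streams one JSON object per line until "done": true.
curl -s "$(ollama_url)/api/generate" \
  -d '{"model": "llama3.3", "prompt": "Say hello in one word."}' \
  || echo "Ollama is not reachable at $(ollama_url)"
```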

Quick Start

1. Install Ollama

Download from ollama.ai

2. Pull a model

bash
ollama pull llama3.3
# or
ollama pull mistral
# or
ollama pull qwen2.5-coder

3. Enable in OpenClaw

bash
# Set environment variable
export OLLAMA_API_KEY="ollama-local"

4. Use the model

bash
openclaw models set ollama/llama3.3

Installation

Download and install Ollama from the official website. After installation, Ollama runs as a service on your machine.

bash
# macOS / Linux
curl -fsSL https://ollama.ai/install.sh | bash

# Windows - download from ollama.ai

Verify installation:

bash
ollama --version
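If the install succeeded but you're unsure whether the background service actually started, probe the API directly. This sketch assumes the default port; `/api/version` is a lightweight endpoint that just reports the server version:

```shell
# -s keeps curl quiet, -f makes it fail on HTTP errors.
if curl -sf http://localhost:11434/api/version; then
  echo "Ollama service is up"
else
  echo "Service not running; start it with: ollama serve"
fi
```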

Configuration

OpenClaw auto-discovers Ollama when OLLAMA_API_KEY is set. For custom setups:

json5
{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://localhost:11434",
        "apiKey": "ollama-local"
      }
    }
  }
}
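If Ollama runs on a different machine (a home server with a GPU, say), the same provider block can point at it; the address below is a placeholder:

```json5
{
  "models": {
    "providers": {
      "ollama": {
        // replace with your server's address; 11434 is Ollama's default port
        "baseUrl": "http://192.168.1.50:11434",
        "apiKey": "ollama-local"
      }
    }
  }
}
```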

Or use environment variable:

bash
export OLLAMA_API_KEY="ollama-local"
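The export above only lasts for the current shell session. To make it stick, append it to your shell profile (path assumes bash; zsh users want `~/.zshrc`):

```shell
# Append once; the grep guard keeps repeated runs from duplicating the line.
grep -q 'OLLAMA_API_KEY' ~/.bashrc 2>/dev/null \
  || echo 'export OLLAMA_API_KEY="ollama-local"' >> ~/.bashrc
```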

Available Models

Popular models for OpenClaw:

Llama 3.3

General purpose, excellent reasoning. ~70B parameters.

Mistral

Great balance of speed and capability.

Qwen 2.5 Coder

Optimized for code. Best for programming tasks.

DeepSeek R1

Reasoning model with strong capabilities.

See all models:

bash
ollama list
openclaw models list
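`ollama list` reads the same data the server exposes at `/api/tags`, which returns JSON shaped like `{"models": [{"name": ...}]}`. If you want just the names programmatically, plain POSIX tools are enough (no jq required):

```shell
# Extract model names from the /api/tags response.
curl -s http://localhost:11434/api/tags \
  | grep -o '"name":"[^"]*"' \
  | cut -d'"' -f4
```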

Troubleshooting

Ollama not detected

Make sure Ollama is running:

bash
ollama serve
# Verify
curl http://localhost:11434/api/tags

No models available

Pull a model:

bash
ollama pull llama3.3

Connection refused

Check if Ollama is running:

bash
ps aux | grep ollama
ollama serve
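On Linux, the install script registers Ollama as a systemd service, so running `ollama serve` by hand can conflict with the unit that is already managing it. Check what's listening on the default port first (`ss` is Linux-specific; use `lsof -i :11434` on macOS):

```shell
# Is anything bound to Ollama's default port?
ss -ltn 2>/dev/null | grep 11434 || echo "nothing listening on 11434"

# On Linux service installs, manage it via systemd instead:
# systemctl status ollama
# sudo systemctl restart ollama
```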