# Ollama configured, but OpenClaw still uses Anthropic (or model discovery keeps failing)
Fix local Ollama setups where gateway logs show Anthropic fallback or repeated Ollama model-discovery failures by pinning provider config, verifying connectivity from the gateway runtime, and separating model selection problems from OpenAI-compatible payload problems.
## Symptoms
- You set an Ollama model, but gateway logs still show `agent model: anthropic/...`.
- Local model requests fail silently (or look like provider fallback), especially with no Anthropic key configured.
- Logs repeatedly show `Failed to discover Ollama models: TypeError: fetch failed`, even though a manual `curl` may work from another shell.
## Cause
OpenClaw can only use Ollama if the gateway runtime can resolve and reach the configured Ollama endpoint, and if model/provider config is explicit enough for model selection.
Common causes:
- `models.providers.ollama.baseUrl` is reachable from your host shell, but not from the actual runtime context (Docker/WSL/container namespace).
- Discovery requests fail and leave model resolution relying on stale/default provider settings.
- `agents.defaults.model.primary` is not pinned to an `ollama/...` id, so selection falls back to another provider.
- You defined `models.providers.ollama.models[]`, but set each `id` to a fully-qualified ref like `ollama/qwen2.5:7b`. In provider catalogs, `id` should be the raw model id (for example `qwen2.5:7b`), while `ollama/qwen2.5:7b` is the reference you use when selecting a model in `agents.defaults.model.primary`.
- You are using Ollama through its `/v1` OpenAI-compatible path and assuming it is equivalent to the native Ollama API for tools, multi-turn agent flows, and streaming. It often is not.
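The id-versus-ref rule above amounts to a one-line transformation: the selection ref is the raw catalog id with an `ollama/` prefix. A minimal sketch (the model name is only an example):

```shell
# The ref you put in agents.defaults.model.primary (example model):
ref="ollama/qwen2.5:7b"

# The raw id that belongs in models.providers.ollama.models[].id
# is the same string with the provider prefix stripped:
raw_id="${ref#ollama/}"

echo "$raw_id"   # prints: qwen2.5:7b
```

If discovery logs show a model id that already contains a `/`, this mismatch is the likely cause.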
## Fix
### 1) Verify connectivity from the same runtime as the gateway

Run this from the same host/container that runs the gateway:

```shell
curl -sS http://<ollama-host>:11434/v1/models
```
If this fails inside the runtime, fix networking first (host mapping, bridge address, firewall, WSL host routing).
Tip: if `localhost` resolves to IPv6 (`::1`) on your system but Ollama is only listening on IPv4, prefer `127.0.0.1`.
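For the Docker case, the usual fix is to point `baseUrl` at the host rather than at the container's own `localhost`. A sketch, assuming Docker Desktop's `host.docker.internal` name (on plain Linux you may need to add it with `--add-host=host.docker.internal:host-gateway`):

```json5
{
  models: {
    providers: {
      ollama: {
        // Resolved from inside the container; "localhost" here would
        // point at the container itself, not the host running Ollama.
        baseUrl: "http://host.docker.internal:11434/v1",
      },
    },
  },
}
```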
### 2) Pin the Ollama provider and models explicitly
Use explicit provider config and model ids instead of relying only on auto-discovery.
```json5
{
  models: {
    mode: "merge",
    providers: {
      ollama: {
        baseUrl: "http://127.0.0.1:11434/v1",
        apiKey: "ollama-local",
        api: "openai-completions",
        // IMPORTANT: provider catalogs use raw ids (no "ollama/" prefix).
        models: [{ id: "qwen2.5:7b", name: "qwen2.5:7b" }],
      },
    },
  },
  agents: {
    defaults: {
      model: {
        primary: "ollama/qwen2.5:7b",
      },
    },
  },
}
```
### 2.5) Decide whether to use the native Ollama API or the /v1 OpenAI-compatible mode
This distinction matters a lot.
If your goal is just:
- basic local chat,
- a proxy layer that only speaks OpenAI-style `/v1`,
- or a quick compatibility test,

then `api: "openai-completions"` can be a reasonable starting point.
But if your goal is:
- reliable tool calling,
- multi-turn tool-result continuation,
- or the most natural Ollama behavior,
then the native Ollama API is usually the safer expectation boundary.
In other words: OpenClaw successfully selecting `ollama/...` does not guarantee that Ollama's `/v1` compatibility mode will behave like a full agent runtime.
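This boundary can be probed directly with `curl`. A sketch, assuming Ollama on `127.0.0.1:11434` and a pulled `qwen2.5:7b`; the same tool definition is sent to both surfaces so the responses can be compared:

```shell
BASE="http://127.0.0.1:11434"
TOOLS='[{"type":"function","function":{"name":"get_time","description":"Current time","parameters":{"type":"object","properties":{}}}}]'
MSGS='[{"role":"user","content":"What time is it?"}]'

# Native Ollama API (POST /api/chat):
curl -sS "$BASE/api/chat" \
  -d "{\"model\":\"qwen2.5:7b\",\"messages\":$MSGS,\"stream\":false,\"tools\":$TOOLS}" || true

# OpenAI-compatible surface (POST /v1/chat/completions):
curl -sS "$BASE/v1/chat/completions" -H 'Content-Type: application/json' \
  -d "{\"model\":\"qwen2.5:7b\",\"messages\":$MSGS,\"tools\":$TOOLS}" || true
# ('|| true' only keeps the sketch from aborting when no server is up.)
```

If plain chat works on both surfaces but structured tool calls only behave on one, the problem is payload compatibility, not model selection.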
### 3) Confirm active model selection and probe
```shell
openclaw config get agents.defaults.model.primary
openclaw models status --probe
```
If the probe still resolves to Anthropic, restart the gateway and recheck the runtime config path/environment.
If the probe succeeds and the active model is clearly `ollama/...`, but a real agent or TUI run still fails, you have likely moved past model selection and into payload-compatibility territory.
Typical symptoms of that next layer:
- plain chat works but tools fail,
- `curl` works but OpenClaw agent runs fail,
- reasoning or streaming changes behavior unexpectedly,
- or the server rejects tool-related roles/messages on later turns.
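The last symptom can be reproduced with a second-turn payload. A sketch of the OpenAI-style message sequence that compatibility layers most often reject (the id and function name are illustrative):

```json
{
  "model": "qwen2.5:7b",
  "messages": [
    { "role": "user", "content": "What time is it?" },
    {
      "role": "assistant",
      "content": "",
      "tool_calls": [
        { "id": "call_1", "type": "function",
          "function": { "name": "get_time", "arguments": "{}" } }
      ]
    },
    { "role": "tool", "tool_call_id": "call_1", "content": "12:00" }
  ]
}
```

POST this to `/v1/chat/completions`: if the server errors on the `tool` role or the `tool_calls` field, you are hitting a compatibility-mode limitation rather than a model-selection problem.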
## Verify
- Gateway logs show an `ollama/...` model as active (not Anthropic fallback).
- A test message produces a normal response using the local model.
- Repeated Ollama discovery errors no longer appear continuously (or no longer affect serving).
## Related
- Configuration overview (models/providers, refs vs ids, local endpoints): /guides/openclaw-configuration
- Relay / OpenAI-compatible runtime mismatch guide: /guides/openclaw-relay-and-api-proxy-troubleshooting
- For generic auth/rate-limit/provider failures, use: