Relays and API proxies are incredibly useful (especially if you need multi-provider routing, unified billing, or a China-friendly endpoint), but they also introduce a new class of “it works in curl but not in OpenClaw” failures.
This guide is a battle-tested checklist for the most common issues reported by the community: 403 blocks, 404 errors (often caused by a wrong baseUrl), and blank/empty replies caused by response-format mismatches.
If the problem appeared immediately after an OpenClaw upgrade, keep a rollback path ready instead of debugging forever on a broken production setup:
If you’re new to OpenClaw configuration, start here first:
If your TUI shows “(no output)” specifically:
What this guide helps you finish
By the end of this guide, you should be able to say:
- which API contract your relay actually implements,
- which config file and provider path OpenClaw is really using,
- and whether your failure is URL, header, payload, or runtime compatibility.
That is the real finish line: one relay path that is explicit enough to debug and stable enough to keep.
Who this is for (and not for)
Use this guide if:
- curl works but OpenClaw does not,
- you see 403, 404, (no output), or blank replies through a relay,
- you are using NewAPI, OneAPI, AnyRouter, LiteLLM, vLLM, or a similar compatibility layer,
- or probe success does not match what real agent runs do.
This is not the main page for:
- direct first-party OpenAI/Anthropic setups with no relay,
- broader OpenClaw config basics,
- or model quality questions unrelated to transport/contract failures.
Before you touch config: collect these five facts
Before changing anything, collect these five facts:
- Which api mode are you trying to use?
- What exact baseUrl is the running gateway calling?
- Which config file or per-agent override is actually in effect?
- Does the relay require special headers, region allowlists, or a browser-like User-Agent?
- Does the failing request include runtime fields your simple curl test omitted?
Once you have those facts, most relay incidents stop feeling mysterious.
0) Mental model: API mode decides the contract
In OpenClaw, a relay “provider” typically looks like:
{
  models: {
    mode: "merge",
    providers: {
      myrelay: {
        baseUrl: "https://example.com/...", // important: varies by API mode
        apiKey: "${MYRELAY_API_KEY}",
        api: "openai-completions" // or "openai-responses" / "anthropic-messages"
      }
    }
  }
}
The api mode is not just a label — it decides:
- which endpoint path OpenClaw calls,
- how OpenClaw authenticates,
- and what response JSON shape OpenClaw expects.
Practical takeaway: when a relay says “OpenAI compatible”, it often means one of these, not all of them:
- OpenAI Chat Completions (/v1/chat/completions) → OpenClaw api: "openai-completions"
- OpenAI Responses (/v1/responses) → OpenClaw api: "openai-responses"
- Anthropic Messages (/v1/messages) → OpenClaw api: "anthropic-messages"
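As a sketch, the mapping above can be written as a lookup from api mode to the full endpoint path (from the host root) plus the auth header each contract conventionally uses. The helper name is illustrative, not OpenClaw internals:

```python
# Illustrative sketch: api mode -> endpoint path (from the host root)
# and the auth header each contract conventionally uses.
API_MODES = {
    "openai-completions": {"path": "/v1/chat/completions", "auth": "Authorization"},
    "openai-responses":   {"path": "/v1/responses",        "auth": "Authorization"},
    "anthropic-messages": {"path": "/v1/messages",         "auth": "x-api-key"},
}

def endpoint_for(host_root: str, api_mode: str) -> str:
    """Final URL a given api mode would call, starting from the host root."""
    return host_root.rstrip("/") + API_MODES[api_mode]["path"]

print(endpoint_for("https://my-relay.example.com", "anthropic-messages"))
# https://my-relay.example.com/v1/messages
```

If the URL this produces does not match what your relay's docs show, you have a contract mismatch before you have ever sent a request.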
Compatibility is layered, not binary
The most important mindset shift for relay debugging is this:
- Endpoint compatibility: can OpenClaw reach the URL with the right auth?
- Contract compatibility: does the relay actually implement the API mode you picked?
- Runtime payload compatibility: does it still work once OpenClaw sends tools, multi-turn state, reasoning controls, or streaming?
That is why users so often report one of these confusing combinations:
- curl works, but OpenClaw fails
- openclaw models status --probe works, but openclaw agent or TUI fails
- plain chat works, but tool calling or reasoning fails
Those are usually not network problems anymore. They are payload-compatibility problems.
1) Pick the right api mode (and don’t “invent” values)
If you see an error like:
Invalid discriminator value. Expected 'anthropic-messages' | 'openai-responses' | 'openai-completions'
It means your config is being schema-validated and api: must be one of the supported enum values for your OpenClaw version.
Compatibility heuristics (fast)
- If your relay advertises “Anthropic compatible / Claude compatible”, start with api: "anthropic-messages".
- If your relay only advertises “OpenAI compatible” and shows examples with /v1/chat/completions, start with api: "openai-completions".
- If your relay supports /v1/responses, prefer api: "openai-responses" (newer contract, generally fewer edge cases than legacy completions).
If you’re unsure, verify support by checking the relay docs or trying a minimal request (see the curl section below).
2) Confirm you’re editing the config the gateway is actually using
When changes “don’t work”, 80% of the time you edited the wrong file or the gateway never reloaded.
2.1 The two most common config locations
Depending on your setup, you may have more than one config file in the state directory:
- ~/.openclaw/openclaw.json (main gateway config for most modern setups)
- ~/.openclaw/config.json (older/legacy config used by some flows)
Rule: update the file that your running gateway instance loads, then restart the gateway.
2.2 Per-agent overrides can bypass your provider catalog
Some setup flows store model/provider settings in a per-agent file like:
~/.openclaw/agents/<agentId>/agent/models.json
If you set a model in an agent-specific config, it can override what you think is “default” in openclaw.json.
Quick checks:
openclaw config get agents.defaults.model.primary
openclaw config get agents.defaults.model.fallbacks
openclaw models status --probe
If --probe resolves to a different provider than expected, you’re debugging model selection / precedence, not networking.
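To see whether any per-agent override exists at all, you can scan the state directory for the models.json path shown above. This is a minimal sketch that assumes the ~/.openclaw layout described in this section:

```python
from pathlib import Path

def find_agent_overrides(state_dir: Path) -> list:
    """List per-agent models.json files that can override the main provider catalog."""
    return sorted(state_dir.glob("agents/*/agent/models.json"))

# Any path printed here is a candidate for overriding openclaw.json:
for override in find_agent_overrides(Path.home() / ".openclaw"):
    print(override)
```

If this prints anything, open those files before touching the main config: one of them may be pinning the provider you are trying to debug.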
3) Base URL rules: prevent 404s (and watch for accidental /v1/v1)
Different API modes commonly want different baseUrl formats.
3.1 Known-good baseUrl patterns (copy/paste)
A) Anthropic Messages relays
For many Anthropic-compatible relays, baseUrl should be the host root (no /v1), because the Messages API endpoint itself is /v1/messages:
{
  models: {
    providers: {
      myrelay: {
        baseUrl: "https://my-relay.example.com",
        apiKey: "${MYRELAY_API_KEY}",
        api: "anthropic-messages"
      }
    }
  }
}
B) OpenAI Chat Completions relays (openai-completions)
OpenAI-style relays commonly expose chat completions under a /v1 prefix. Many known-good configs use:
{
  models: {
    providers: {
      myrelay: {
        baseUrl: "https://my-relay.example.com/v1",
        apiKey: "${MYRELAY_API_KEY}",
        api: "openai-completions"
      }
    }
  }
}
C) OpenAI Responses relays (openai-responses)
Most relays that implement the Responses API also use a /v1 base:
{
  models: {
    providers: {
      myrelay: {
        baseUrl: "https://my-relay.example.com/v1",
        apiKey: "${MYRELAY_API_KEY}",
        api: "openai-responses"
      }
    }
  }
}
3.2 The only rule that always works: verify the final URL
If your gateway logs show something like:
- .../v1/v1/messages (Anthropic mode)
- .../v1/v1/chat/completions (OpenAI mode)
You have a duplicated /v1. Remove it from the baseUrl (or from the relay path), restart, and try again.
If logs show a URL without /v1 and your relay requires it, add /v1 back.
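The duplication usually comes from plain string concatenation of baseUrl and the endpoint path. A small sketch of the check (hypothetical helper, not how OpenClaw builds URLs internally):

```python
def build_url(base_url: str, endpoint_path: str) -> str:
    """Join baseUrl and endpoint path, rejecting a duplicated /v1 prefix."""
    url = base_url.rstrip("/") + endpoint_path
    if "/v1/v1/" in url:
        # Fix: drop /v1 from either the baseUrl or the endpoint path.
        raise ValueError("duplicated /v1 in " + url)
    return url

print(build_url("https://my-relay.example.com", "/v1/messages"))
# build_url("https://my-relay.example.com/v1", "/v1/messages")  -> raises ValueError
```

Running your own baseUrl through a check like this is faster than guessing from memory which of the two places the /v1 belongs in.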
4) 403 Forbidden: WAF / User-Agent blocks are real
Symptom pattern:
- Your key is valid.
- A manual request from Postman/curl works.
- OpenClaw gets a 403 (sometimes with a message like “Request Blocked”, “Forbidden”, or a vendor WAF page).
Common causes:
- The relay blocks SDK user agents (for example Anthropic/*, OpenAI/*) or unknown automation traffic.
- The relay has region/IP restrictions or requires allowlisting.
- The relay expects an additional header (custom auth, tenant id, etc.).
4.1 Workaround: set custom headers (especially User-Agent)
If your relay blocks the default SDK user-agent, set a browser-like UA:
{
  models: {
    providers: {
      myrelay: {
        api: "anthropic-messages",
        baseUrl: "https://my-relay.example.com",
        apiKey: "${MYRELAY_API_KEY}",
        headers: {
          "User-Agent": "Mozilla/5.0 (OpenClaw; +https://coclaw.com)"
        }
      }
    }
  }
}
If your relay is strict, you may need to coordinate with the relay operator:
- ask them to allowlist your gateway IP,
- or to disable WAF rules that block SDK traffic.
4.2 Prove the gateway process is using the same proxy path as your curl test
This is one of the most common false assumptions:
- curl works in your current shell,
- but the OpenClaw gateway runs as a service,
- so the gateway may not inherit the same HTTP_PROXY, HTTPS_PROXY, or ALL_PROXY values.
Practical rule:
- trust the gateway process environment more than your interactive shell,
- and trust a same-host curl repro only after you confirm both are using the same proxy path.
Good places to define proxy env vars:
- the gateway parent process environment,
- ~/.openclaw/.env,
- or a deliberate service environment setup.
If you changed proxy environment and nothing improved, restart the gateway before concluding the proxy change failed.
This matters especially for reports that look like:
- curl succeeds,
- but OpenClaw still gets region-style 403s,
- or the issue appeared only after an upgrade even though your endpoint and key did not change.
In that case, ask a narrower question:
Is the gateway really using the same proxy path as curl?
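On Linux you can answer that question directly by reading the gateway process's environment out of /proc instead of guessing from your shell. A sketch (replace os.getpid() with the gateway's actual PID):

```python
import os

PROXY_VARS = ("HTTP_PROXY", "HTTPS_PROXY", "ALL_PROXY", "NO_PROXY")

def proxy_env_of(pid: int) -> dict:
    """Read proxy-related variables from a running process's environment (Linux /proc)."""
    with open("/proc/%d/environ" % pid, "rb") as f:
        raw = f.read().split(b"\0")
    env = dict(
        entry.decode(errors="replace").split("=", 1)
        for entry in raw
        if b"=" in entry
    )
    # Match case-insensitively: lowercase http_proxy is also honored by many tools.
    return {k: v for k, v in env.items() if k.upper() in PROXY_VARS}

# Compare your interactive shell (this process) against the gateway's PID:
print(proxy_env_of(os.getpid()))
```

If the two dicts differ, you have found the gap: fix the gateway's environment and restart it before changing anything else.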
If the answer becomes “I should probably stop and preserve the current good state before trying more changes,” take that hint and create a backup first:
5) “Empty reply”, blank output, or silent failures
This is where most relay integrations go wrong: the HTTP request succeeds, but OpenClaw can’t parse the result (or the relay returns a different shape than the chosen API mode expects).
5.1 OpenAI “completions” vs “chat completions” JSON mismatch
In OpenAI-compatible relays, “completions” can mean two different payload shapes:
- Legacy text completions often return output in choices[0].text
- Chat completions often return output in choices[0].message.content
If OpenClaw expects one shape but the relay returns the other, you can get a “successful HTTP request” but a blank/empty UI.
This is most often reported when a relay advertises “OpenAI compatible” but only fully implements one of the two contracts.
Fix options:
- Switch API mode (openai-responses or anthropic-messages) if your relay supports it.
- In the relay admin panel, enable the compatibility mode that matches what you configured in OpenClaw.
- Test the exact response JSON with curl and compare it with the contract of the endpoint you’re claiming to support.
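When comparing response JSON, a shape-tolerant extractor makes the mismatch obvious: if a helper like the hypothetical one below returns None for your relay's output, a parser that expects one specific shape will likely render a blank reply too:

```python
def extract_text(resp: dict):
    """Pull model output from either OpenAI-style response shape, else None."""
    choice = (resp.get("choices") or [{}])[0]
    # Chat completions shape: choices[0].message.content
    msg = choice.get("message") or {}
    if isinstance(msg.get("content"), str):
        return msg["content"]
    # Legacy text completions shape: choices[0].text
    if isinstance(choice.get("text"), str):
        return choice["text"]
    return None

print(extract_text({"choices": [{"message": {"content": "hi"}}]}))  # hi
print(extract_text({"choices": [{"text": "hi"}]}))                  # hi
print(extract_text({"choices": [{}]}))                              # None
```

Paste your relay's actual curl response into a helper like this: a None result tells you which compatibility mode the relay really implements.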
5.2 Don’t enable reasoning: true unless you know your relay supports it
Several relay users report “blank” or broken responses when a model entry includes reasoning: true but the upstream does not
support that toggle consistently.
If you’re troubleshooting, set reasoning: false first and re-test.
5.3 curl success does not prove agent compatibility
A minimal curl test usually sends a tiny payload like:
- one messages item,
- no tools,
- no tool results,
- no reasoning controls,
- no session history,
- often stream: false.
That proves the endpoint is alive and can handle basic chat completions.
It does not prove the relay also supports everything OpenClaw may send during a real run.
The usual breakpoints on OpenAI-compatible relays are:
- tools
- tool_choice
- parallel_tool_calls
- reasoning_effort
- store
- stream: true
If your curl request succeeds but OpenClaw fails with 400/422/empty output, assume the relay is rejecting one of those runtime fields until proven otherwise.
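One way to find the offending field is to bisect: start from the minimal payload and re-add one suspect field at a time. A sketch that prints curl-ready bodies (the model name and field values are illustrative):

```python
import json

BASE_PAYLOAD = {
    "model": "my-model",
    "messages": [{"role": "user", "content": "ping"}],
}

# Suspect runtime fields, with illustrative values.
SUSPECTS = {
    "stream": True,
    "tools": [{"type": "function",
               "function": {"name": "noop",
                            "parameters": {"type": "object", "properties": {}}}}],
    "tool_choice": "auto",
    "parallel_tool_calls": True,
    "reasoning_effort": "low",
    "store": False,
}

def payload_variants():
    """Yield (field, JSON body) pairs, each adding one suspect field to the minimal payload."""
    for key, value in SUSPECTS.items():
        yield key, json.dumps({**BASE_PAYLOAD, key: value})

for key, body in payload_variants():
    print("# re-test with only '%s' added:" % key)
    print(body)
```

Send each body with the same curl command that already succeeds; the first variant that returns 400/422/empty names the field your relay cannot handle.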
5.4 models status --probe can pass while real runs fail
openclaw models status --probe is a very useful first test, but it is still a smaller request than a real agent run.
It is best treated as:
- “can OpenClaw authenticate and get one basic response from this provider?”
It is not equivalent to:
- “this provider fully supports my actual session shape, tools, streaming, and reasoning settings.”
Practical takeaway:
- if probe fails, fix auth/network/config first;
- if probe succeeds but real chat still fails, move immediately to runtime payload compatibility.
5.5 Quick downgrade path for flaky custom providers
If a custom OpenAI-compatible relay is unstable, first treat it as a plain chat endpoint and only add advanced features back after it is stable.
Conservative example:
{
  models: {
    providers: {
      myrelay: {
        api: "openai-completions",
        baseUrl: "https://example.com/v1",
        apiKey: "${MYRELAY_API_KEY}",
        models: [
          {
            id: "my-model",
            reasoning: false,
            input: ["text"],
            compat: {
              supportsTools: false,
            },
          },
        ],
      },
    },
  },
}
That will not fix every relay edge case, but it is the fastest way to separate:
- “the endpoint only supports plain chat” from
- “the endpoint is fundamentally misconfigured or unreachable.”
5.6 If the gateway makes zero API calls, you’re not hitting that provider
If you expect “relay X” to be called but logs show no outbound request at all, the problem is typically:
- model selection chose a different provider,
- your relay provider is defined but your agent is pinned elsewhere,
- or the provider name collides with something you didn’t expect.
Tip: avoid naming a custom relay provider openai or anthropic. Use a unique key like newapi, oneapi, anyrouter,
or relay.
6) A tight debug loop (fastest way to converge)
6.1 Probe from OpenClaw first
openclaw models status --probe
openclaw logs --follow
You’re looking for:
- the resolved model (which provider OpenClaw actually chose),
- the exact request URL (does it contain /v1 exactly once?),
- the HTTP status code (403/404/429),
- and any structured error body.
Important: a successful probe only proves the minimal path works. If the real agent still fails, keep debugging — you have likely moved from config/auth problems into payload-compatibility problems.
6.2 Reproduce with curl using the same URL and headers
Once you see the final URL in logs, reproduce it directly. Examples (adjust to match your API mode):
OpenAI-compatible Chat Completions request (common for many relays):
curl -sS https://my-relay.example.com/v1/chat/completions \
-H "Authorization: Bearer $MYRELAY_API_KEY" \
-H "Content-Type: application/json" \
-H "User-Agent: Mozilla/5.0" \
-d '{"model":"gpt-4.1-mini","messages":[{"role":"user","content":"ping"}]}'
OpenAI-compatible Legacy Completions request (some relays still use this):
curl -sS https://my-relay.example.com/v1/completions \
-H "Authorization: Bearer $MYRELAY_API_KEY" \
-H "Content-Type: application/json" \
-H "User-Agent: Mozilla/5.0" \
-d '{"model":"gpt-4.1-mini","prompt":"ping","max_tokens":64}'
Anthropic Messages-style:
curl -sS https://my-relay.example.com/v1/messages \
-H "x-api-key: $MYRELAY_API_KEY" \
-H "Content-Type: application/json" \
-H "anthropic-version: 2023-06-01" \
-H "User-Agent: Mozilla/5.0" \
-d '{"model":"claude-3-5-sonnet-latest","max_tokens":64,"messages":[{"role":"user","content":"ping"}]}'
If curl works but OpenClaw doesn’t, compare:
- request URL paths,
- auth header format (Authorization vs x-api-key),
- required vendor headers,
- response JSON shape,
- and whether the failing OpenClaw run includes runtime fields your curl test did not.
Ask specifically whether the real failing request body contains any of these:
- tools
- tool_choice
- parallel_tool_calls
- reasoning_effort
- store
- stream
Quick checklist (print this)
- api is one of: anthropic-messages, openai-responses, openai-completions
- baseUrl matches the mode (and logs do not show /v1/v1)
- You edited the config the gateway uses, then restarted it
- No per-agent override is pinning a different provider/model
- If 403: set headers.User-Agent, and verify IP/region allowlist
- If curl works through a proxy: confirm the gateway service gets the same HTTP_PROXY/HTTPS_PROXY/ALL_PROXY environment
- If blank output: verify response JSON matches the endpoint contract; disable reasoning: true
- If probe passes but real runs fail: compare runtime fields (tools, tool_choice, parallel_tool_calls, reasoning_effort, store, stream)
Verification: prove the relay path is really stable
Treat the relay as healthy only when all of these are true:
- openclaw models status --probe succeeds against the intended provider,
- the gateway logs show the exact URL and contract you expected,
- a curl repro using the same path and headers succeeds,
- one real OpenClaw run returns content instead of blank output,
- and the same setup survives a restart without silently switching to another provider or override.
That final step matters. Many relay incidents are not “fixed” until the next restart proves the config path is still the same.
What to change first when the relay still feels haunted
Use this order:
- Fix contract mismatches (api mode, endpoint family, response shape).
- Fix path/header mismatches (baseUrl, User-Agent, vendor headers, proxy env).
- Fix config-precedence problems (wrong file, per-agent override, stale gateway).
- Only then investigate model-specific or reasoning-specific runtime edges.
This keeps you from debugging payload nuance on top of a broken path.
Further reading
- Self-hosted AI API compatibility matrix:
- Choosing a local or proxy API path:
- OpenClaw model concepts and configuration:
- Community-reported relay edge cases (discussion threads):