
API works in curl, but OpenClaw still fails

Fix custom or local AI API integrations where direct curl requests succeed, but OpenClaw still errors, returns blank output, or fails during real agent runs.

By CoClaw Team

Symptoms

  • A direct curl request to your provider succeeds.
  • openclaw models status --probe may also succeed.
  • But a real openclaw agent run, TUI message, or channel reply still fails.
  • You may see HTTP 422, blank output, a fast failure, or “no output” even though the endpoint is reachable.

Cause

This usually means you have already solved the endpoint problem and are now hitting a runtime payload compatibility problem.

A minimal curl request usually proves only that the backend accepts:

  • the URL,
  • the auth header,
  • a small messages payload,
  • and basic chat generation.
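
As a sketch, here is the shape of payload such a minimal test typically sends (the model id is a placeholder; a real curl test would POST this JSON to your baseUrl's /chat/completions path with an Authorization header):

```python
import json

# A minimal "does the endpoint work at all?" payload, similar to what a
# hand-written curl test usually sends: one user message, nothing else.
minimal_payload = {
    "model": "my-model",  # placeholder model id
    "messages": [
        {"role": "user", "content": "hi"},
    ],
    "stream": False,
}

print(json.dumps(minimal_payload, indent=2))
```

Passing with this body proves basic chat generation only, nothing about the richer fields a real run sends.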

A real OpenClaw run may send more than that, including:

  • session history,
  • tools,
  • tool_choice,
  • parallel_tool_calls,
  • reasoning_effort,
  • store,
  • or stream.

If the backend only partially implements an OpenAI-compatible contract, the minimal curl test can pass while the real run fails.
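
To make the gap concrete, here is a small sketch (field names taken from the lists above) that diffs the minimal curl payload against a fuller runtime-style payload:

```python
# Top-level fields the minimal curl test exercises.
minimal = {"model", "messages"}

# Fields a real agent run may add on top of that (from the list above).
runtime = minimal | {
    "tools", "tool_choice", "parallel_tool_calls",
    "reasoning_effort", "store", "stream",
}

# Any of these extras can trip up a backend that only partially
# implements the OpenAI-compatible contract.
extra = sorted(runtime - minimal)
print(extra)
```

Each field in `extra` is a candidate culprit when probe passes but real runs fail.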

Fix

1) Confirm whether probe also passes

Run:

openclaw models status --probe

Interpret the result like this:

  • If probe fails, you are still in auth/config/network territory.
  • If probe succeeds but real runs fail, move to payload-compatibility debugging.

2) Compare the minimal request with the real failing path

Ask these questions:

  • Does your successful curl use only one user message?
  • Does it avoid tools and multi-turn history?
  • Does it force stream: false?
  • Does it avoid reasoning-related fields?

If the answer to each is yes, your curl test proves only basic chat support, not compatibility with the full runtime payload.
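
One way to bridge the two is to take a captured request body from a failing run and strip it back down to the minimal shape, then replay that reduced body with curl. A hypothetical helper (the `captured` body below is a dummy, not a real OpenClaw request):

```python
import json

# Runtime-only fields that a minimal curl test never sends.
RUNTIME_ONLY = {
    "tools", "tool_choice", "parallel_tool_calls",
    "reasoning_effort", "store",
}

def to_minimal(body: dict) -> dict:
    """Reduce a captured chat-completions body to the minimal curl shape."""
    minimal = {k: v for k, v in body.items() if k not in RUNTIME_ONLY}
    minimal["stream"] = False  # force non-streaming, like the curl test
    # Keep only the last user message, dropping multi-turn history.
    msgs = [m for m in body.get("messages", []) if m.get("role") == "user"]
    minimal["messages"] = msgs[-1:] if msgs else body.get("messages", [])
    return minimal

captured = {
    "model": "my-model",
    "messages": [
        {"role": "system", "content": "be brief"},
        {"role": "user", "content": "hi"},
    ],
    "tools": [{"type": "function", "function": {"name": "read_file"}}],
    "tool_choice": "auto",
    "stream": True,
}
print(json.dumps(to_minimal(captured), indent=2))
```

If the stripped body succeeds where the full body fails, you have isolated which extras the backend cannot handle.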

3) Temporarily treat the provider as plain chat only

If this is a custom OpenAI-compatible provider, start with a conservative model config:

{
  models: {
    providers: {
      myprovider: {
        api: "openai-completions",
        baseUrl: "http://host:port/v1",
        apiKey: "${MY_API_KEY}",
        models: [
          {
            id: "my-model",
            reasoning: false,
            input: ["text"],
            compat: {
              supportsTools: false,
            },
          },
        ],
      },
    },
  },
}

That isolates whether the endpoint can survive a real OpenClaw run as a plain chat backend.

4) Retry with a fresh session

Do not reuse a session that may already contain failed tool turns or provider-specific history.

openclaw agent --session-id "compat-test-1" -m "hi"

In the TUI, use /reset before re-testing.

5) Capture the most useful next evidence

If it still fails, the most valuable next artifact is the server-side log or the final request body fields. You do not need to expose secrets.

Ask whether the failing request includes any of these:

  • tools
  • tool_choice
  • parallel_tool_calls
  • reasoning_effort
  • store
  • stream
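
The presence check above can be scripted. A minimal sketch, assuming the failing request body has been captured as JSON (field names alone are enough; no secrets are needed):

```python
import json

# Fields worth flagging, from the list above.
SUSPECT_FIELDS = (
    "tools", "tool_choice", "parallel_tool_calls",
    "reasoning_effort", "store", "stream",
)

def flag_fields(raw_body: str) -> list:
    """Return which suspect fields appear in a captured request body."""
    body = json.loads(raw_body)
    return [f for f in SUSPECT_FIELDS if f in body]

# Example with a dummy captured body.
sample = '{"model": "my-model", "messages": [], "tools": [], "stream": true}'
print(flag_fields(sample))
```

Sharing just that list of flagged fields with the provider or in a bug report is usually enough to pinpoint the incompatibility.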

Verify

The fix path is working if:

  • a fresh openclaw agent run succeeds, not just curl,
  • TUI replies are normal,
  • and logs show the expected provider/model being called without a fast 400/422 failure.

Verification & references

  • Reviewed by: CoClaw Code Team
  • Last reviewed: March 14, 2026
  • Verified on: macOS · Linux · Windows
Related Resources

Custom provider fails only when reasoning is enabled
Fix
Fix custom OpenAI-compatible providers that work in basic chat mode but fail once reasoning or thinking controls are enabled.
TUI: '(no output)' or no response after sending a message
Fix
If the OpenClaw TUI shows '(no output)' or appears stuck, check connection status, gateway logs, model auth, and whether your provider only supports minimal chat payloads but not real OpenClaw runtime requests.
Moonshot (Kimi): API works in browser but OpenClaw keeps failing (wrong endpoint)
Fix
Fix Moonshot/Kimi model failures by using the correct baseUrl for your region (api.moonshot.ai vs api.moonshot.cn), and ensuring MOONSHOT_API_KEY is set on the gateway host.
Venice AI: models unavailable or requests make no API calls
Fix
Fix Venice provider issues by checking VENICE_API_KEY, network reachability to api.venice.ai, model refs, and credits/billing.
OpenClaw Relay & API Proxy Troubleshooting (NewAPI/OneAPI/AnyRouter): Fix 403s, 404s, and Empty Replies
Guide
A practical integration guide for using OpenClaw with OpenAI/Anthropic-compatible relays and API proxies (NewAPI, OneAPI, AnyRouter, LiteLLM, vLLM): choose the right API mode, set baseUrl correctly, avoid config precedence traps, and debug 403/404/blank-output failures fast.
OpenClaw Not Responding: Fix 'no output', Incorrect API, Rate Limits, and Silent Failures
Guide
A high-signal checklist for when OpenClaw stops replying (TUI shows '(no output)', channels go quiet, or logs show 401/403/429). Covers config precedence, provider auth, model allowlists, relay API-mode mismatch, and rate-limit/billing traps.