
Custom provider fails only when reasoning is enabled

Fix custom OpenAI-compatible providers that work in basic chat mode but fail once reasoning or thinking controls are enabled.

By CoClaw Team

Symptoms

  • Basic chat works.
  • The same provider fails after enabling reasoning or thinking features.
  • Errors may appear only for certain models or only after changing defaults.
  • Tool calling may also become unstable once reasoning is enabled.

Cause

“Reasoning-capable model” and “this endpoint accepts reasoning-control fields” are not the same thing.

A custom OpenAI-compatible endpoint may accept normal chat requests but reject or mis-handle fields such as:

  • reasoning_effort,
  • provider-specific thinking controls,
  • or translated reasoning payloads passed through a proxy layer.

This is why a model can be genuinely reasoning-capable while the compatibility layer in front of it is not.
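One way to see the distinction concretely is a small sketch of what a defensive client or proxy shim could do: strip the reasoning-control fields out of an OpenAI-style chat payload before forwarding it to a compatibility layer that only accepts plain chat. The field list here is an assumption for illustration; `reasoning_effort` comes from the article, while `reasoning` and `thinking` stand in for provider-specific controls.

```python
# Fields a plain-chat-only endpoint may reject or mishandle.
# This set is illustrative, not an exhaustive or official list.
REASONING_FIELDS = {"reasoning_effort", "reasoning", "thinking"}

def to_plain_chat(payload: dict) -> dict:
    """Return a copy of the request with reasoning controls removed."""
    return {k: v for k, v in payload.items() if k not in REASONING_FIELDS}

request = {
    "model": "my-model",
    "messages": [{"role": "user", "content": "hello"}],
    "reasoning_effort": "high",
}

plain = to_plain_chat(request)
print(sorted(plain))  # → ['messages', 'model']
```

If the stripped request succeeds where the original fails, the model itself was never the problem; the compatibility layer was.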

Fix

1) Turn reasoning off for the compatibility test

When debugging, first prove the endpoint can survive a plain run:

{
  models: {
    providers: {
      myprovider: {
        models: [
          {
            id: "my-model",
            reasoning: false,
          },
        ],
      },
    },
  },
}

Then retry with a fresh session.

2) Check whether the failure appears only after changing thinking defaults

Look for:

  • global thinking defaults,
  • per-model reasoning flags,
  • or a recent configuration change that enabled reasoning-like behavior.

If the problem started only after that change, your endpoint likely rejects the reasoning-control layer, not the model itself.
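As a sketch of what to look for, a configuration that mixes a global thinking default with a per-model flag might look like the following. Note that the `agents.defaults.thinking` key is a hypothetical example of a global default, not a documented key; only the per-model `reasoning` flag follows the shape shown in step 1.

{
  agents: {
    defaults: {
      thinking: "high", // hypothetical global thinking default
    },
  },
  models: {
    providers: {
      myprovider: {
        models: [
          {
            id: "my-model",
            reasoning: true, // per-model flag that re-enables reasoning
          },
        ],
      },
    },
  },
}

Either layer can be the change that started sending reasoning-control fields to your endpoint.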

3) Treat the endpoint as plain chat until proven otherwise

For custom compatibility layers, the safe default is:

  • reasoning: false during initial integration,
  • then re-enable only after proving the specific endpoint contract supports it.

4) Distinguish model capability from transport compatibility

The right mental model is:

  • the model may support reasoning,
  • but the proxy, relay, or OpenAI-compatible wrapper may not support the control fields OpenClaw uses.

That is a transport limitation, not proof that the underlying model is weak or broken.
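The mental model above reduces to a small decision table over the two test results (plain chat with reasoning off, then with reasoning on). This function and its return strings are illustrative, not part of OpenClaw:

```python
def diagnose(plain_chat_ok: bool, reasoning_ok: bool) -> str:
    """Classify a failure from the two compatibility-test results."""
    if not plain_chat_ok:
        # Fails even without reasoning controls: not a reasoning issue.
        return "endpoint broken: fails even for plain chat"
    if reasoning_ok:
        return "compatible: endpoint accepts reasoning controls"
    # Plain chat works, reasoning does not: the transport layer rejects
    # the control fields, regardless of what the model can do.
    return "transport limitation: endpoint rejects reasoning controls"

print(diagnose(plain_chat_ok=True, reasoning_ok=False))
```

Only the "transport limitation" outcome matches the symptoms this article describes.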

Verify

You have confirmed the diagnosis if:

  • the provider works with reasoning off,
  • it fails only when reasoning is enabled,
  • and it becomes stable again after reverting to plain chat mode.

Verification & references

  • Reviewed by: CoClaw Code Team
  • Last reviewed: March 14, 2026
  • Verified on: macOS · Linux · Windows