TUI: '(no output)' or no response after sending a message
If the OpenClaw TUI shows '(no output)' or appears stuck, check connection status, gateway logs, model auth, and whether your provider only supports minimal chat payloads but not real OpenClaw runtime requests.
Symptoms
- After you send a message in `openclaw tui`, the assistant reply is blank or shows "(no output)".
- The UI looks idle, but you never get a response.
- This can be intermittent or only happen with certain models/providers.
Cause
Most common causes:
- The TUI is connected, but the gateway run failed (provider auth, rate limit, model mismatch).
- The gateway is still busy, stalled, or disconnected.
- Delivery is off (you expected a reply in a chat channel, but the TUI is only showing local chat).
- A provider streaming edge case (some providers/models have had TUI rendering bugs).
- A custom or local OpenAI-compatible backend accepts a direct `curl` or `openclaw models status --probe`, but rejects the fuller payload used in a real agent run.
Fix
1) Confirm the TUI is connected and the run state is sane
In the TUI:
- Run `/status`.
- If it shows disconnected, reconnect with the correct `--url`/`--token`/`--password`.
2) Check gateway logs (most actionable)
On the gateway host:
openclaw logs --follow
If you see auth or model errors, fix those first (then retry).
3) Sanity-check model/auth
On the gateway host:
openclaw models status
openclaw models status --probe
If probe fails, you are still in auth/config/network territory.
If the probe succeeds but the TUI still shows "(no output)" or the run errors immediately, suspect a runtime payload compatibility problem rather than a basic connectivity problem.
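To see why a probe can pass while a real run fails, it helps to compare the two kinds of request. The sketch below is an illustration, not OpenClaw's actual internals: the field names follow the OpenAI-compatible chat completions schema, and the exact set of fields a real run adds is an assumption.

```python
# Sketch: a probe-style minimal chat payload vs. a fuller run-style payload.
# Field names follow the OpenAI-compatible chat completions schema; which
# fields OpenClaw actually sends during a run is an assumption here.

def minimal_payload(model: str) -> dict:
    """Roughly what a connectivity probe sends: model + messages only."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": "ping"}],
    }

def run_payload(model: str) -> dict:
    """Roughly what a real agent run might add on top of the minimum."""
    payload = minimal_payload(model)
    payload.update({
        "stream": True,
        "store": False,
        "tools": [{"type": "function",
                   "function": {"name": "read_file",
                                "parameters": {"type": "object"}}}],
        "tool_choice": "auto",
        "parallel_tool_calls": True,
        "reasoning_effort": "low",
    })
    return payload

# The difference is exactly the set of fields a minimal-only backend
# may reject even though the probe's bare request succeeds.
extra = sorted(set(run_payload("m")) - set(minimal_payload("m")))
print(extra)
# ['parallel_tool_calls', 'reasoning_effort', 'store', 'stream',
#  'tool_choice', 'tools']
```

A backend that only implements the minimal schema will happily answer the first payload and return an error (or silence) for the second, which is what the TUI surfaces as "(no output)".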
4) If probe passes, test whether the provider only supports minimal chat payloads
This pattern is increasingly common with custom OpenAI-compatible endpoints, local proxies, and some local-model servers:
- direct `curl` works,
- `openclaw models status --probe` works,
- but a real TUI or `openclaw agent` run still fails.
That usually means the backend accepts a minimal chat request but rejects one of the fields that OpenClaw may send during a real run, such as:
`tools`, `tool_choice`, `parallel_tool_calls`, `reasoning_effort`, `store`, `stream`
Practical next steps:
- Try a new clean session with `/reset`.
- If you control the model config, temporarily treat that provider as a plain chat backend first.
If you are using a custom OpenAI-compatible provider, the most useful follow-up is the dedicated guide for custom OpenAI-compatible providers.
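If you want to pinpoint exactly which field the backend rejects, one approach is to re-send the minimal request with each runtime field added one at a time and see which variant fails. This is a generic bisection sketch, not an OpenClaw command; the field list, endpoint, key, and model are placeholders you would replace with your own values.

```python
import json
import urllib.request
import urllib.error

# Optional fields a real run may add to a minimal chat request.
# (Assumed list; check your gateway's logs for the actual requests.)
RUNTIME_FIELDS = {
    "stream": True,
    "store": False,
    "tools": [],
    "tool_choice": "auto",
    "parallel_tool_calls": True,
    "reasoning_effort": "low",
}

def payload_variants(base: dict) -> list[tuple[str, dict]]:
    """One payload per runtime field: the base request plus that one field."""
    return [(name, {**base, name: value})
            for name, value in RUNTIME_FIELDS.items()]

def find_rejected_fields(url: str, api_key: str, base: dict) -> list[str]:
    """POST each single-field variant; collect the fields the server rejects."""
    rejected = []
    for name, payload in payload_variants(base):
        req = urllib.request.Request(
            url,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json",
                     "Authorization": f"Bearer {api_key}"},
        )
        try:
            urllib.request.urlopen(req, timeout=30).read()
        except urllib.error.HTTPError as err:
            rejected.append(f"{name}: HTTP {err.code}")
    return rejected

# Usage (fill in your own endpoint, key, and model):
#   base = {"model": "my-local-model",
#           "messages": [{"role": "user", "content": "ping"}]}
#   find_rejected_fields("http://localhost:8000/v1/chat/completions",
#                        "sk-placeholder", base)
```

If every variant succeeds individually, try the combined payload next; some servers only fail when several of these fields appear together.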
5) If you expected messages to be delivered to a channel
Turn on delivery:
`/deliver on` (or start with `openclaw tui --deliver`)
6) Reset the session / restart the TUI
In the TUI:
- `/reset` (new clean session), or
- exit and rerun `openclaw tui`.
If the issue only happens with one provider/model, switch models temporarily and update OpenClaw to the latest version.
Verify
- A new message returns a normal assistant response in the TUI.
- Gateway logs show a successful model call.