Telegram: network request failed, 7s+ delays, or timeouts due to IPv6/routing
Fix slow or failing Telegram Bot API calls by checking IPv6-first behavior, regional routing problems, and outbound reachability to api.telegram.org.
Symptoms
You are likely in this failure pattern if one or more of these are true:
- gateway logs show errors like:
HttpError: Network request for 'sendMessage' failed
HttpError: Network request for 'sendChatAction' failed
fetch failed
- Telegram messages eventually go through, but each call takes 7-10+ seconds instead of feeling near-instant,
- the bot looks alive but replies arrive very late or intermittently time out,
- setMyCommands or other startup calls fail on some boots and succeed on others,
- repeated network failures trigger restart loops or make the bot feel “randomly unavailable.”
If you are in Taiwan or on a network with poor routing to Telegram, the slow version of this problem can be as important as the hard-fail version.
Cause
The most useful cause family is not “Telegram auth is broken.” It is that the path to api.telegram.org is unhealthy.
Most often that means one of these:
- api.telegram.org resolves to IPv6 first, but your host has no usable IPv6 egress,
- your IPv6 path exists but is much slower or less reliable than IPv4,
- outbound DNS/HTTPS to Telegram is restricted,
- or a regional route to Telegram is simply bad enough that the bot feels stalled even when requests eventually succeed.
Recent operator evidence in issue #48727 adds a sharper real-world pattern: in Taiwan, especially on Chunghwa Telecom, Telegram API traffic could become extremely slow or intermittently unreachable. In that report, Node.js 22+ IPv6-first behavior made the symptom worse by burning several seconds on a broken IPv6 path before falling back.
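A quick way to tell which side of that pattern you are on is to test IPv6 egress directly. The sketch below is a generic curl check, not an OpenClaw command; the check_v6 helper name is ours:

```shell
# Hypothetical helper: does this host have usable IPv6 egress to a given URL?
# A failure here means IPv6-first resolution will stall before falling back to IPv4.
check_v6() {
  curl -6 -sS --connect-timeout 5 -o /dev/null "$1" && echo "ipv6 ok" || echo "no ipv6"
}
# Example: check_v6 "https://api.telegram.org"
```

If this reports no usable IPv6 but the gateway still resolves IPv6 first, you are in the exact fallback-delay pattern described above.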
Confidence boundary: the slow-routing pattern is well supported by the operator report. The exact limits of SDK-level workarounds, proxying, or API-root overrides are more environment-specific and should be treated as advanced paths rather than guaranteed fixes.
Fix
1) Prove whether this is a path problem, not a bot-token problem
On the gateway host, start with OpenClaw’s own probe:
openclaw channels status --probe
If Telegram fails there, run one direct API timing check from the same host:
time curl -sS "https://api.telegram.org/bot$TELEGRAM_BOT_TOKEN/getMe" >/dev/null
What to look for:
- healthy path: consistently fast and repeatable across attempts,
- unhealthy path: repeated stalls, 7s+ latency, intermittent failures, or large variance between attempts.
This separates “Telegram is slow from this machine” from “the bot token is invalid.”
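To make the variance visible, repeat the timing a few times instead of trusting one sample. A small sketch (the probe_latency function is illustrative, not an OpenClaw command):

```shell
# Hypothetical helper: run the same request N times and print total time per attempt.
probe_latency() {
  url="$1"; count="${2:-5}"
  for _ in $(seq 1 "$count"); do
    curl -sS -o /dev/null -w '%{time_total}\n' "$url"
  done
}
# Example: probe_latency "https://api.telegram.org/bot$TELEGRAM_BOT_TOKEN/getMe" 5
```

A healthy path typically prints tight, sub-second numbers; a broken IPv6-first path often shows repeated multi-second attempts with large spread.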
2) Try the lowest-risk IPv4-first recovery first
If your host has no stable IPv6 egress, or IPv6 is clearly the slow path, prefer IPv4 for this environment.
Common approaches:
- enable working IPv6 on the host or network,
- configure the host/runtime to prefer IPv4,
- or disable broken IPv6 resolution for the OpenClaw runtime.
A low-cost thing to try first is forcing IPv4-first DNS order for the runtime:
export NODE_OPTIONS="--dns-result-order=ipv4first"
Then restart the gateway and retest.
Important boundary: this can help when the real problem is IPv6-first resolution, but it is not a guaranteed fix for every Telegram networking case. If the network route itself is bad, forcing IPv4 may reduce delay without fully solving the incident.
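You can confirm the flag is actually picked up by the Node runtime before restarting the gateway. dns.getDefaultResultOrder() (available in Node 18.17+/20.1+) reports the effective ordering:

```shell
# With the flag set, Node should report "ipv4first" instead of the default "verbatim".
NODE_OPTIONS="--dns-result-order=ipv4first" node -p "require('dns').getDefaultResultOrder()"
```

If this still prints the default, the export is not reaching the process that runs the gateway (for example, a service manager with its own environment).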
3) If the network itself is the problem, move the egress path
If probes and curl are still slow or flaky after the IPv4-first test, treat this as a routing problem rather than a Telegram-config problem.
The most reliable operator moves are:
- run the gateway on a host with better egress to Telegram,
- move the gateway to a VPS or another network,
- or keep Telegram on a network path that can reach api.telegram.org consistently.
This is the most important branch for Taiwan-style cases: when the ISP route is bad, changing model config, reinstalling OpenClaw, or rotating the bot token will not fix the path.
4) Treat proxy or patched API-root workarounds as advanced, unsupported recovery
Issue #48727 reports a stronger workaround using a reverse proxy plus patching compiled Telegram client paths.
That may be useful for advanced operators who fully control their runtime, but it is not the first-line OpenClaw troubleshooting answer because it is:
- environment-specific,
- upgrade-fragile,
- and not a clean supported configuration path today.
Use it only if:
- you have confirmed the network route is the real blocker,
- moving the gateway to better egress is not practical,
- and you are willing to own a custom maintenance path.
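For context only, the reverse-proxy half of that workaround looks roughly like the sketch below. Everything here is hypothetical (the server name, the omitted TLS setup, and the assumption that the proxy host has clean egress to Telegram); it is not a supported OpenClaw configuration:

```nginx
# Hypothetical nginx reverse proxy on a host with good egress to Telegram.
# TLS setup (ssl_certificate / ssl_certificate_key) is omitted for brevity.
server {
    listen 443 ssl;
    server_name tg-proxy.example.com;

    location / {
        proxy_pass https://api.telegram.org;
        # Send the correct SNI when proxying to an HTTPS upstream.
        proxy_ssl_server_name on;
    }
}
```

You would still need to point the Telegram client at the proxy host, which is exactly the patched-API-root step that makes this path upgrade-fragile.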
Verify
Close the loop in this order:
1) Probe from OpenClaw again
openclaw channels status --probe
2) Time the direct Telegram API call again
time curl -sS "https://api.telegram.org/bot$TELEGRAM_BOT_TOKEN/getMe" >/dev/null
You want the timing to become both faster and more repeatable.
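If you want this check to be explicit rather than eyeballed, a small threshold test can gate it. The check_latency helper and the 2-second default are our assumptions; tune the limit for your link:

```shell
# Hypothetical helper: print OK/SLOW depending on whether one request beats a threshold.
check_latency() {
  url="$1"; limit="${2:-2.0}"
  t=$(curl -sS -o /dev/null -w '%{time_total}' "$url") || return 1
  awk -v t="$t" -v l="$limit" 'BEGIN { exit !(t+0 < l+0) }' \
    && echo "OK: ${t}s" || echo "SLOW: ${t}s"
}
# Example: check_latency "https://api.telegram.org/bot$TELEGRAM_BOT_TOKEN/getMe" 2.0
```

Run it a few times in a row; one OK followed by several SLOW results still means the path is unhealthy.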
3) Send one real Telegram message
Confirm:
- the bot replies without a long visible stall,
- gateway logs stop showing repeated sendMessage/sendChatAction network failures,
- and the process does not fall back into restart or crash-loop behavior.
4) Watch through a few real interactions
Do not stop at one lucky success. The fix is only proven when several messages in a row stay fast enough that the bot no longer feels regionally broken.
Related
- /channels/telegram
- GitHub issue: openclaw/openclaw#48727
- OpenClaw docs: Telegram, channel troubleshooting