Telegram: repeated getUpdates 409 conflict blocks bot replies
Recover a Telegram bot that keeps logging getUpdates 409 Conflict errors by stopping duplicate pollers, restarting cleanly, and proving only one listener still owns the token.
Symptoms
- Gateway logs repeat an error like:
[telegram] getUpdates conflict: Call to 'getUpdates' failed! (409: Conflict: terminated by other getUpdates request; make sure that only one bot instance is running)
- The Telegram bot stops replying even though the gateway process or container still looks alive.
- /agents, /status, and normal chat messages silently fail because polling is stuck in the conflict loop.
- The problem comes back even after:
  - creating a fresh bot token,
  - disabling gateway.channelHealthCheckMinutes,
  - and confirming no webhook is set.
Cause
A Telegram 409 Conflict on getUpdates means more than one long-poll listener is trying to own the same bot token.
That is the important operator boundary:
- a fresh token removes many obvious external conflicts,
- disabled health checks remove one obvious restart path,
- and a cleared webhook removes the webhook-vs-polling split.
If the 409 loop still returns after all three, the evidence points much more strongly to duplicate polling ownership than to a bad credential.
What is still inference, not settled fact: in issue #49822, the remaining likely cause was an internal polling-loop leak or restart race inside OpenClaw’s Telegram lifecycle. The issue evidence supports that cause family, but it does not yet prove one exact code path.
Fix
Start with operator actions that restore single ownership of the token.
1) Make sure only one OpenClaw runtime is active
Look for duplicate containers, services, shells, or watchdogs that might still be polling the same bot token.
Common checks:
docker ps --format 'table {{.Names}}\t{{.Status}}'
docker compose ps
ps aux | rg openclaw
If you use a service manager, also check for units or scheduled jobs that can re-spawn the gateway outside your current shell.
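The manual checks above can also be scripted. A minimal sketch, assuming the gateway's process name contains "openclaw" (adjust the needle to match your deployment):

```python
import subprocess

def count_matching_processes(ps_output: str, needle: str) -> int:
    """Count process-table lines that mention the gateway. More than one
    match suggests duplicate runtimes may be polling the same token."""
    return sum(
        1
        for line in ps_output.splitlines()
        if needle in line
    )

if __name__ == "__main__":
    # Snapshot the process table once, then count candidate pollers.
    ps = subprocess.run(["ps", "aux"], capture_output=True, text=True).stdout
    n = count_matching_processes(ps, "openclaw")
    print(f"{n} process(es) matching 'openclaw'")
    if n > 1:
        print("More than one runtime may be polling the same bot token.")
```

This only sees processes on the current host; duplicate containers or remote hosts still need the docker checks above.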
2) Stop everything for that token, then wait before starting again
Do a clean stop, not an in-place restart:
docker compose down
If you are not in Docker, stop the gateway service and confirm the process is fully gone.
Then wait about 10-15 seconds before starting again.
Why this helps: Telegram can still consider a previous long-poll active for a short window after process shutdown. A short pause lowers the chance that your replacement listener races a stale one.
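The stop-wait-start sequence can be sketched as a small helper. The `stop` and `start` callables are stand-ins for whatever actually controls your gateway (for Docker, `docker compose down` and `docker compose up -d`):

```python
import time
from typing import Callable, List

def clean_restart(
    stop: Callable[[], None],
    start: Callable[[], None],
    pause_seconds: float = 15.0,
    sleep: Callable[[float], None] = time.sleep,
) -> List[str]:
    """Stop the gateway, wait out Telegram's stale long-poll window,
    then start exactly one replacement. Returns the action order."""
    order = []
    stop()           # full stop, not an in-place restart
    order.append("stop")
    sleep(pause_seconds)  # let Telegram drop the previous listener
    order.append("wait")
    start()          # exactly one replacement instance
    order.append("start")
    return order
```

The point of the pause sitting between the two calls, rather than before the whole sequence, is that the clock only matters once the old listener is actually gone.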
3) Start exactly one gateway instance
Bring the gateway back in one place only:
docker compose up -d
Avoid mixing:
- a foreground debug shell,
- a service manager,
- and a container
against the same Telegram token at the same time.
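If you wrap the gateway in your own launcher, a POSIX lock file is one way to make "exactly one instance" enforceable rather than hoped for. This is an illustrative guard, not part of OpenClaw:

```python
import fcntl

def acquire_poller_lock(path: str):
    """Try to take an exclusive, non-blocking lock for this bot token.
    Returns the open lock file on success, None if another poller
    already holds it. The lock is released when the file is closed."""
    f = open(path, "w")
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f
    except BlockingIOError:
        f.close()
        return None
```

A launcher would call this before starting the poller and refuse to start on `None`, turning a silent 409 loop into a loud startup failure.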
4) If you previously used webhook mode, clear it once
This is not the main issue pattern in #49822, but it is still a quick boundary check if this token was reused:
curl -sS "https://api.telegram.org/bot$TELEGRAM_BOT_TOKEN/deleteWebhook"
If you are already on a fresh token, or already confirmed webhook deletion, do not keep circling here.
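If you prefer to script the webhook check, the Bot API answers deleteWebhook with a JSON body of the form {"ok": true, "result": true} on success. A minimal sketch (the token comes from your environment; never hardcode it):

```python
import json

API = "https://api.telegram.org"

def delete_webhook_url(token: str) -> str:
    """Build the Bot API deleteWebhook URL for a given token."""
    return f"{API}/bot{token}/deleteWebhook"

def webhook_cleared(response_body: str) -> bool:
    """Interpret the Bot API response: both 'ok' and 'result' must be true."""
    data = json.loads(response_body)
    return bool(data.get("ok")) and bool(data.get("result"))
```

In real use you would fetch `delete_webhook_url(os.environ["TELEGRAM_BOT_TOKEN"])` with `urllib.request.urlopen` and pass the body to `webhook_cleared`.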
5) If the conflict still returns, treat it as a likely internal polling lifecycle bug
Once you have confirmed:
- one visible runtime only,
- fresh token or cleared webhook,
- disabled health checks,
- and a clean stop/start cycle,
the most credible next move is to stop changing Telegram credentials and start collecting runtime evidence.
Capture:
- the first startup logs,
- the first moment the 409 loop begins,
- whether it appears after 60-120 seconds,
- and proof that only one visible process or container exists.
That is the point where #49822 suggests a likely internal duplicate poller, leaked loop, or restart race rather than operator misconfiguration.
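One way to turn those captures into evidence is to measure the time from startup to the first conflict. A minimal sketch, assuming each log line starts with an ISO-like timestamp (adjust `fmt` to your gateway's actual log format):

```python
from datetime import datetime

def seconds_to_first_conflict(lines, fmt="%Y-%m-%dT%H:%M:%S"):
    """Return the seconds between the first log line and the first
    getUpdates conflict, or None if no conflict appears."""
    start = None
    for line in lines:
        stamp, _, rest = line.partition(" ")
        t = datetime.strptime(stamp, fmt)
        if start is None:
            start = t  # treat the first line as process startup
        if "getUpdates conflict" in rest:
            return (t - start).total_seconds()
    return None
```

A result that lands repeatedly in the 60-120 second range, with only one visible runtime, is exactly the signature worth attaching to a bug report.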
Why a fresh token still conflicts
The token is not the real unit of failure here. Polling ownership is.
A new token helps only if some outside process was still using the old one.
It does not help if the same OpenClaw runtime:
- starts two getUpdates loops for the same account,
- restarts polling before the earlier loop has fully stopped,
- or leaks an older listener across restart boundaries.
That is why “fresh token + disabled health checks + still no replies” is such a strong clue. Those steps rule out the outside explanations and leave duplicate poller ownership as the most likely remaining family of causes.
Verify
After the clean stop/start cycle, verify in this order:
1) Logs stay quiet on getUpdates
docker compose logs -f gateway
Watch for at least 2 minutes. Do not stop at “provider started.”
2) The bot answers a real command
In Telegram, send:
/agents
Then send one normal message.
3) The bot keeps working after the initial window
The issue evidence says conflicts often reappeared within 60-120 seconds. Keep watching through that window before declaring recovery complete.
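That 60-120 second watch can be scripted instead of eyeballed. A minimal sketch that scans a stream of log lines (for example, piped in from `docker compose logs -f gateway`) and reports the first conflict inside the window:

```python
import time

CONFLICT_MARKER = "getUpdates conflict"

def watch_for_conflicts(lines, window_seconds=120, clock=time.monotonic):
    """Scan a line stream until window_seconds have passed; return the
    first conflict line seen, or None if the window stays quiet.
    Note: a fully silent stream blocks on the next line, so this relies
    on the gateway logging at least occasionally during the window."""
    deadline = clock() + window_seconds
    for line in lines:
        if CONFLICT_MARKER in line:
            return line
        if clock() >= deadline:
            return None
    return None
```

Run with `watch_for_conflicts(sys.stdin)`: a `None` result after the window means recovery held; a returned line is the evidence to capture.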
You have closed the loop when:
- no repeated getUpdates conflict messages appear,
- /agents responds,
- a normal chat message gets a reply,
- and no second process or container comes back unexpectedly.