On March 17, 2026, a LocalLLaMA thread challenged whether OpenCode was really “truly local.” The thread matters less for settling the product question than for exposing the operator question people actually care about:
What exactly is local, what still leaves the machine, and who can still observe or steer the system?
That is the right frame.
Local-first is not a moral category. It is not a fandom label. It is not a binary identity claim. It is a boundary claim.
If a tool runs a local model but syncs sessions to a vendor when you share, fetches remote defaults after auth, exposes a controllable HTTP server on your LAN, or silently depends on remote updates and plugins, then “local” may still be true in one layer and false in another. The operator job is to name those layers precisely.
## Why “truly local” debates flare up so fast
These arguments usually explode because one phrase is being used to describe four different promises:
- Execution locality: Is inference happening on your hardware or against a remote API?
- Data locality: Do prompts, files, session history, or metadata ever leave your machine?
- Control locality: Can anyone outside the box change defaults, observe state, or drive the runtime?
- Dependency locality: Does the system still rely on remote auth, updates, package installs, hosted frontends, or cloud tools?
A product can be local on one axis and remote on the others.
That is why the phrase “truly local” is so slippery. Users hear “nothing important leaves the machine.” Builders often mean “you can run this with local models.” Those are not the same claim.
## OpenCode is a useful example because the boundary is mixed, not simple
OpenCode’s current docs support several things at once.
On the local side:
- OpenCode documents local model setups through llama.cpp, LM Studio, and Ollama.
- The current Web docs say `opencode web` starts a local server on `127.0.0.1` by default and opens the browser against that local server.
- The current Server docs say `opencode serve` defaults to `127.0.0.1`, with network exposure becoming explicit when you change the hostname or add browser origins.
On the non-local or potentially widened side:
- The Providers docs say provider credentials added through `/connect` are stored in `~/.local/share/opencode/auth.json`, which is local storage, but still part of your audit surface.
- The Share docs say shared conversations sync history to OpenCode servers and remain accessible until you unshare them.
- The Zen docs say Zen models are hosted in the US, and note retention exceptions and 30-day retention for some third-party APIs.
- The Config docs say remote organizational defaults can be fetched from `.well-known/opencode` automatically when you authenticate with a supporting provider.
- The same config docs say OpenCode can auto-download updates at startup unless you disable `autoupdate`.
- They also say plugins can load from npm, MCP servers can be configured, and permissions allow all operations by default unless you tighten them.
- The Server docs document an HTTP API, OAuth endpoints, and a `/tui` endpoint that can drive the terminal client through the server.
That combination is exactly why the right conclusion is not “OpenCode is local” or “OpenCode is fake local.”
The better conclusion is: OpenCode exposes multiple boundaries, and the trustworthiness of a given deployment depends on which ones you actually use and how you configure them.
## Treat the Reddit thread as signal, not verdict
The March 17, 2026 Reddit post claimed the web UI still depended on app.opencode.ai. I am not treating that claim as settled product fact.
Why not?
Because the current official docs now state that `opencode web` starts a local server on `127.0.0.1` by default, while network exposure is opt-in through `--hostname 0.0.0.0`, `--mdns`, or extra CORS origins. That means the Reddit thread is best used as operator demand signal: people care enough about browser path and hosted dependencies to scrutinize them.
And that instinct is correct.
Even if a given implementation has changed, the audit question remains the same:
- Is the browser loading a local UI or a hosted frontend?
- Which origin is making requests?
- What traffic leaves localhost during startup, auth, model selection, sharing, or updates?
- What breaks when outbound internet is blocked?
That is how you convert forum anxiety into operator judgment.
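The traffic question above can be answered mechanically: snapshot established TCP connections before launch and again after startup, auth, model selection, and sharing, then diff. The sketch below is a minimal, Linux-only, IPv4-only version that parses `/proc/net/tcp` directly so it needs no extra tooling; the path parameter exists only so the parser can be exercised against a sample file.

```python
import ipaddress
from pathlib import Path

def external_tcp_peers(proc_net_tcp: str = "/proc/net/tcp") -> list[str]:
    """List remote IPv4 peers of ESTABLISHED TCP connections that are
    not loopback. Linux-only: reads the kernel's /proc/net/tcp table."""
    peers = []
    for line in Path(proc_net_tcp).read_text().splitlines()[1:]:  # skip header
        fields = line.split()
        if len(fields) < 4:
            continue
        remote, state = fields[2], fields[3]
        if state != "01":  # 01 = ESTABLISHED
            continue
        ip_hex, _, port_hex = remote.partition(":")
        # /proc/net/tcp stores IPv4 addresses as little-endian hex
        addr = ipaddress.ip_address(bytes.fromhex(ip_hex)[::-1])
        if not addr.is_loopback:
            peers.append(f"{addr}:{int(port_hex, 16)}")
    return peers
```

Anything that shows up beyond loopback during a supposedly local workflow is a boundary crossing you should be able to explain.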
## Local execution and privacy are related, but they are not the same
Here is the cleanest distinction to keep in your head:
| Claim | What it can mean | What it does not guarantee |
|---|---|---|
| Local model | Inference runs against a loopback or self-hosted endpoint | No remote sharing, no remote config, no remote packages, no exposed control plane |
| Self-hosted | You run the main control plane yourself | That prompts never leave, that channels are private, or that updates and dependencies are local |
| Private | Fewer third parties can see code, prompts, state, or metadata | That everything executes locally |
| Local-first | The default posture tries to keep core work close to the operator | That every optional feature preserves the same boundary |
A tool can be local-inference but weak on privacy if it exposes a remote control surface, syncs transcripts, or installs unreviewed extensions.
A tool can be more private than a SaaS coding assistant even if it still uses a hosted model API, because the most sensitive workflow state and tool execution stay on your infrastructure.
That is the same point we make in Privacy-First AI: the real question is not whether a product sounds private, but where the boundary actually sits.
## What to inspect before you trust a “local-first” claim
If you only remember one section from this article, make it this one.
### 1) Model calls
Start with the obvious boundary first.
Ask:
- Is the active model pointed at `localhost`, `127.0.0.1`, a LAN host, or a vendor API?
- Are there fallback or “small model” paths that quietly use a different provider?
- Are routing, retries, or observability proxies injecting another cloud hop?
OpenCode’s docs make this mixed story explicit: you can use local model backends, but you can also use Zen or any number of hosted providers. That means “supports local models” is not the same as “this deployment is local.”
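The endpoint question can be checked mechanically rather than by eyeball. A minimal sketch, assuming an OpenAI-compatible base URL (the `http://127.0.0.1:11434/v1` form that Ollama uses is one example): resolve the configured host and confirm every address it resolves to is loopback.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_loopback_endpoint(url: str) -> bool:
    """True only if every address the endpoint's host resolves to is loopback."""
    host = urlparse(url).hostname
    if host is None:
        raise ValueError(f"no host in {url!r}")
    return all(
        ipaddress.ip_address(info[4][0]).is_loopback
        for info in socket.getaddrinfo(host, None)
    )
```

Run it against every configured provider entry, including fallback and “small model” paths, not just the primary one.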
### 2) Auth and credential paths
Then check how identity enters the system.
Ask:
- Where are API keys stored?
- Does login require a vendor website or OAuth callback?
- Are org-level defaults fetched after authentication?
In OpenCode’s case, credentials can live locally in `~/.local/share/opencode/auth.json`, but the docs also describe OAuth-capable provider flows and remote `.well-known/opencode` defaults for supported auth paths. That matters because auth is often where a supposedly local system quietly reconnects to an external control plane.
### 3) Browser and UI boundary
A local server is not the same thing as a local user interface boundary.
Ask:
- Does the browser open a localhost origin or a hosted frontend?
- Which domains need CORS permission?
- Does the UI still work with outbound internet blocked?
Current OpenCode docs describe `opencode web` as localhost by default, which is a meaningful boundary improvement over any design that would require a hosted web app. But the general lesson is broader than OpenCode: always verify the browser path, because “I opened a tab” tells you nothing about who actually served it.
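One way to verify the browser path with only the standard library: fetch the UI entry point and confirm that the final URL, after any server-side redirects, still resolves only to loopback addresses. The URL to pass in is whatever the tool printed at startup; this is a generic sketch, not an OpenCode API.

```python
import ipaddress
import socket
from urllib.parse import urlparse
from urllib.request import urlopen

def ui_served_locally(url: str, timeout: float = 5.0) -> bool:
    """Fetch a UI entry point and check that the final URL, after any
    HTTP redirects, still points only at loopback addresses."""
    with urlopen(url, timeout=timeout) as resp:
        final_host = urlparse(resp.geturl()).hostname
    return all(
        ipaddress.ip_address(info[4][0]).is_loopback
        for info in socket.getaddrinfo(final_host, None)
    )
```

Note the limitation: this catches server-side redirects only. A locally served page can still load scripts from a hosted origin, which is what the outbound-traffic check is for.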
### 4) Remote control surfaces
Many local-first systems expose a strong local core and then reopen the boundary through convenience features.
Ask:
- Is there an HTTP API?
- Can another client drive the runtime remotely?
- Is the server reachable only on localhost, or on LAN or internet interfaces?
- Is authentication on by default, or only recommended?
OpenCode’s server docs are clear here: there is an HTTP server, an OpenAPI surface, and a `/tui` endpoint; `opencode serve` defaults to `127.0.0.1`; network reachability expands if you change the hostname, use mDNS, or otherwise expose it. That is not an indictment. It is simply the boundary you must account for.
This is also why The OpenClaw Security Nightmare keeps hammering exposed dashboards and broad control planes: once the control surface is reachable, “local” no longer means “only I can touch it.”
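You can probe the reachability question directly: try the server’s port on loopback and on the machine’s LAN address. A sketch (the port is whatever your server reports; `lan_address()` is a best-effort guess that sends no packets):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def lan_address() -> str:
    """Best-effort guess at this machine's non-loopback address.
    Connecting a UDP socket only selects a route; nothing is sent."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("192.0.2.1", 1))  # TEST-NET-1 documentation range
        return s.getsockname()[0]
```

If `port_open("127.0.0.1", p)` is true while `port_open(lan_address(), p)` is false, the control surface is loopback-only. If both are true, anyone on your network segment can drive it.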
### 5) Sharing, telemetry, and observability sinks
A system can keep inference local and still export high-value context elsewhere.
Ask:
- Can transcripts be shared or auto-shared?
- Do plugins, proxies, or analytics headers forward session information?
- Are logs local only, or copied to a service?
OpenCode’s share docs make one part of this explicit: shared conversations sync to OpenCode servers and remain public to anyone with the link until you unshare them. The docs I reviewed do not establish a default always-on telemetry pipeline, so I would not claim one. But they do show optional observability and proxy surfaces, which means telemetry belongs on the checklist even when it is not the default.
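A practical way to keep telemetry on the checklist: capture the destination hosts the tool actually contacts (from a packet capture or proxy log), then diff them against an explicit allowlist. A minimal sketch; the host names in the test are illustrative, not observed facts about any product.

```python
from fnmatch import fnmatch

def unexpected_sinks(observed: list[str], allowlist: list[str]) -> list[str]:
    """Return observed destination hosts that match no allowlist pattern.
    Patterns use shell-style wildcards, e.g. "*.example.com"."""
    return sorted(
        {h for h in observed if not any(fnmatch(h, p) for p in allowlist)}
    )
```

Anything this flags is either a dependency you forgot to document or a sink you did not intend.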
### 6) Updates, packages, and extension supply chain
This is the part people forget when they say “it runs locally.”
Ask:
- Does the runtime auto-update?
- Can it pull plugins or packages from npm or other registries?
- Are MCP servers local, remote, or organization-provided?
- What code gets executed because you installed an integration, not because you changed the core tool?
OpenCode’s docs say `autoupdate` is on unless you disable it, plugins can be loaded from npm, and MCP servers can come from config, including remote organization defaults. None of that makes the tool illegitimate. It does mean the locality claim must include the dependency plane, not just the model plane.
### 7) Failure-mode locality
Finally, test the claim under stress.
Ask:
- What still works with the internet unplugged?
- What breaks if vendor auth is unavailable?
- What fails if package registries are blocked?
- Can you still inspect and control the runtime without cloud help?
This is the easiest practical test of all. Many “local-first” products sound local only until you remove internet, DNS, or the vendor dashboard.
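The cheapest version of this test is environmental: pull the cable, or run the tool in a network namespace or behind a deny-all firewall rule. For in-process checks of your own glue code, here is a coarse sketch that makes every new socket connection fail. Note it blocks loopback too, so it simulates total network removal, not just loss of internet.

```python
import socket
from contextlib import contextmanager

@contextmanager
def no_network():
    """Make all new socket connections fail inside the block, then restore."""
    real_connect = socket.socket.connect

    def refuse(self, address):
        raise OSError(f"network disabled by test harness: {address!r}")

    socket.socket.connect = refuse
    try:
        yield
    finally:
        socket.socket.connect = real_connect
```

Wrap a workflow step in `with no_network():`. If it raises, that step sat on the network failure path.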
## OpenCode-style claims and OpenClaw reality point to the same judgment
This is where OpenCode and OpenClaw converge.
OpenCode gives you a good example of how a modern coding agent can be locally anchored while still offering remote providers, sharing, plugins, OAuth, and browser/server surfaces.
OpenClaw gives you a good example of how “self-hosted” can still leave major privacy and security questions unresolved if you rely on hosted models, messaging channels, exposed dashboards, or broad tool permissions. That is the same boundary logic behind Privacy-First AI and the risk framing in The OpenClaw Security Nightmare.
So the operator question is not:
Which tool has the purest branding?
It is:
Which deployment leaves the smallest, clearest, and most auditable trust boundary for my workload?
In practice, a carefully configured OpenCode setup using localhost model endpoints, disabled share, tightened permissions, disabled autoupdate, and no remote plugins may be meaningfully more local than a sloppy “self-hosted” stack that still routes prompts to cloud models and exposes its control plane.
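As an illustration only, that hardened posture might look like the config sketch below. The key names are assumptions inferred from the features the docs describe (sharing, autoupdate, permissions); verify them against your version’s config reference before copying anything.

```json
{
  "share": "disabled",
  "autoupdate": false,
  "permission": {
    "edit": "ask",
    "bash": "ask",
    "webfetch": "deny"
  }
}
```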
And a self-hosted OpenClaw stack may still be the better privacy posture than a SaaS assistant even when it is not fully local, because the highest-value state stays under operator control.
That is why local-first must be audited as a boundary claim instead of defended as a vibe.
## A reusable locality audit checklist
Use this before you trust any assistant, coding agent, or self-hosted AI stack:
| Audit question | Pass condition | Warning sign |
|---|---|---|
| Where does inference run? | Model endpoint is local or on infrastructure you control | Hidden fallback to hosted APIs |
| What does the browser actually talk to? | Local origin or fully self-hosted frontend path is verified | Hosted SPA or unexplained third-party requests |
| What leaves the machine by design? | Sharing, sync, and logging are explicit and disabled when not needed | Public links, transcript sync, or metadata export you forgot was on |
| Who can drive the runtime? | Control surfaces stay on localhost or behind deliberate auth | LAN exposure, internet exposure, or weak default auth |
| Who can change defaults? | Config provenance is local and reviewable | Remote org defaults, silent policy injection, or opaque bootstrap config |
| What code can be added? | Extensions, plugins, MCP servers, and packages are curated | One-command installs from remote registries with broad privileges |
| How do updates happen? | Updates are pinned, reviewed, or intentionally scheduled | Silent autoupdate on startup |
| What still works offline? | Core workflow survives network removal | Tool becomes unusable without vendor auth, hosted UI, or cloud services |
| What is stored locally? | Secrets and state are known, minimized, and protected | Surprise credential files, long-lived plaintext tokens, or unclear retention |
| What is the operator assumption? | You can explain the trust boundary in one sentence | You are still saying “I think it’s basically local” |
If you cannot answer those questions, you do not yet know whether the tool is local-first in any meaningful operational sense.
## Final take
The recent OpenCode “truly local” concern is useful because it pushes the conversation in the right direction.
Not toward purity tests. Not toward vendor dunking. Not toward the lazy binary of “local” versus “not local.”
Toward boundary tracing.
That is the standard that holds up across coding agents, assistants, and self-hosted stacks:
local-first is real only when you can audit what runs where, what leaves the machine, who can observe it, and which remote systems still sit on the failure path.
If a product team wants to claim “truly local,” that claim should survive a packet capture, a config review, an extension audit, and an unplugged-network test.
Anything weaker is branding.