Intermediate
macOS / Linux / Windows (WSL2) / Docker / Self-hosted
Estimated time: 12 min

OpenClaw Brave `llm-context` Web Search: When It’s Better Than Normal Search

A practical guide to Brave's `llm-context` mode in OpenClaw: what it returns, when to use it instead of normal web search, how to think about freshness and region, and how to avoid common key and environment mistakes.


This mode is not just “Brave search with a different name”: it is optimized to return extracted grounding snippets and source metadata.

OpenClaw 2026.3.8 added an opt-in Brave mode:

tools: {
  web: {
    search: {
      provider: "brave",
      brave: {
        mode: "llm-context"
      }
    }
  }
}

That sounds small, but it changes how you should think about Brave inside OpenClaw.

The short version:

  • use normal search when you want a broad result list,
  • use llm-context when you want the search provider to return more directly usable grounding snippets with source metadata.
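By contrast, a normal-search setup would simply omit the mode override. A minimal sketch, assuming `brave.mode` is optional and that leaving it unset falls back to standard result-list search (which is how “opt-in” reads above):

```json5
tools: {
  web: {
    search: {
      provider: "brave"
      // no brave.mode set → standard result-list search (assumed default)
    }
  }
}
```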

1) What llm-context is actually for

Standard web search is good when the agent needs a broad search surface:

  • multiple links,
  • result diversity,
  • exploratory browsing,
  • or a more manual follow-up flow.

Brave llm-context is better when the task is closer to:

  • “summarize the current consensus,”
  • “find the most relevant current facts,”
  • “ground this answer with concise source-backed snippets.”

In other words, it is more useful when your real need is grounded synthesis, not just SERP exploration.


2) When llm-context is a better default

Use it when:

  • the question is fact-heavy and current,
  • you want cleaner grounding in the returned search payload,
  • you care more about answerable context than a large pile of links,
  • or you are feeding search results into a downstream model step.

Good examples:

  • latest product or release changes,
  • current provider status or pricing checks,
  • policy/rules/restrictions with strong recency requirements,
  • high-signal research summaries.

3) When normal Brave search is still better

Do not force llm-context everywhere.

Normal search is often better when:

  • you want wider coverage,
  • you expect to click through several sources,
  • the task is exploratory,
  • or the user really wants “search results” more than a grounded summary.

Rule of thumb:

  • browse many possibilities → normal search
  • extract grounded context quickly → llm-context

4) Query shape matters more than people think

llm-context works best when the query is tight.

Better:

  • one topic,
  • one time horizon,
  • one comparison axis.

Worse:

  • broad, multi-part, everything-bagel prompts,
  • several unrelated goals in one query,
  • vague “tell me everything” phrasing.

If the question is too broad, break it into two searches instead of hoping one giant query gets magically better because the mode changed.


5) Country, language, and freshness are worth using

Brave search in OpenClaw supports filters like:

  • country
  • search_lang
  • ui_lang
  • freshness

These are especially helpful when:

  • results differ by region,
  • you need non-English sources,
  • or the topic is time-sensitive.

Examples of where this matters:

  • release availability by country,
  • region-locked models/services,
  • recent announcements,
  • current policy or pricing pages.

If a query feels “kind of right but oddly off,” poor region/language assumptions are often the reason.
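Putting the filters together, a region-and-freshness-aware setup might look like the sketch below. The nesting of the filter keys under `brave`, and the exact accepted values (the freshness value shown uses Brave's past-week shorthand), are assumptions here; check the docs for your OpenClaw version.

```json5
tools: {
  web: {
    search: {
      provider: "brave",
      brave: {
        mode: "llm-context",
        country: "DE",      // bias results toward one region
        search_lang: "de",  // language of the sources searched
        ui_lang: "de-DE",   // language of result metadata
        freshness: "pw"     // "past week" shorthand, for time-sensitive topics
      }
    }
  }
}
```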


6) The most common operational mistake: the key is not visible to the gateway

If `web_search` fails before you ever get to compare modes, the usual problem is not Brave itself — it is that the running gateway cannot see `BRAVE_API_KEY`.

The most reliable fix path is still:

printf 'BRAVE_API_KEY=%s\n' 'YOUR_BRAVE_KEY' >> ~/.openclaw/.env
openclaw gateway restart

This matters because the gateway's service environment is the real source of truth, not the shell where you last ran `openclaw configure`.
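If you have run the append step more than once, duplicate `BRAVE_API_KEY` lines can linger in the env file. A minimal sketch of a sanity check, using a temp directory as a stand-in for `~/.openclaw` so it is safe to dry-run:

```shell
# Stand-in for ~/.openclaw so the dry-run touches nothing real
ENV_DIR="$(mktemp -d)"

# The append step from this guide, pointed at the stand-in file
printf 'BRAVE_API_KEY=%s\n' 'YOUR_BRAVE_KEY' >> "$ENV_DIR/.env"

# Count how many BRAVE_API_KEY lines the gateway would see
COUNT="$(grep -c '^BRAVE_API_KEY=' "$ENV_DIR/.env")"
echo "$COUNT"   # 1 → exactly one entry; more means earlier appends linger
```

On the real file, the same grep against `~/.openclaw/.env` tells you whether earlier appends left stale duplicates; clean those up before `openclaw gateway restart`.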



7) A sane rollout pattern

If you are unsure whether your workflow benefits from llm-context, do this:

  1. enable Brave search cleanly,
  2. test a few recurring research tasks,
  3. compare result usefulness rather than raw number of links,
  4. keep normal search as the fallback mental model.

This is not a religion question. It is a workflow-fit question.


8) Good candidate workflows

llm-context is especially promising for:

  • release note triage,
  • issue investigation with current web context,
  • pricing/policy verification,
  • and “brief me fast, but keep sources attached” tasks.

That makes it a strong fit for operators, maintainers, and agents that summarize current information for humans.


Quick checklist

  • I know whether I want broad browsing or grounded synthesis
  • My query is narrow enough to answer well
  • I set region/language/freshness intentionally when the topic needs it
  • The gateway can actually see `BRAVE_API_KEY`
  • I am evaluating usefulness, not just result count

Verification & references

  • Reviewed by: CoClaw Editorial Team
  • Last reviewed: March 14, 2026
  • Verified on: macOS · Linux · Windows (WSL2) · Docker · Self-hosted

Need live assistance?

Ask in the community forum or Discord support channels.
