OpenClaw provides multiple ways to search and retrieve information. Each tool has distinct strengths, tradeoffs, and ideal use cases. Understanding these differences helps you build more effective agents that choose the right approach for each task.
The three search approaches
OpenClaw offers three primary search capabilities: lightweight web extraction, full browser automation, and local memory search. Selecting the right one depends on your content source, complexity requirements, and performance needs.
web_fetch: Lightweight extraction
The web_fetch tool retrieves content from URLs and converts it to readable text or markdown. It is fast, simple, and requires no browser infrastructure.
When to use web_fetch
- Static content that does not require JavaScript
- Documentation pages, articles, and blog posts
- Quick research where speed matters
- Batch processing of multiple URLs
Limitations
- Cannot execute JavaScript or handle dynamic content
- No interaction with page elements
- May fail on sites with bot protection
- Cannot access authenticated content
Best practices
Use web_fetch as your default for public web content. It is the fastest option and works well for most documentation and article sites. Always validate that the content you need is actually present in the extracted output.
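To make the "lightweight extraction" idea concrete, here is a minimal stdlib sketch of what a tool like web_fetch does conceptually: download a page and reduce it to visible text. The names `fetch_text` and `html_to_text` are illustrative, not OpenClaw's actual API, and a real extractor would also handle encodings, redirects, and markdown conversion.

```python
from html.parser import HTMLParser
from urllib.request import urlopen


class _TextExtractor(HTMLParser):
    """Collects visible text, skipping script/style/noscript blocks."""

    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())


def html_to_text(html: str) -> str:
    """Strip markup and return the page's visible text, one fragment per line."""
    parser = _TextExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)


def fetch_text(url: str, timeout: float = 10.0) -> str:
    """Fetch a URL and return its visible text. Static content only:
    anything rendered by JavaScript will be missing from the result."""
    with urlopen(url, timeout=timeout) as resp:
        return html_to_text(resp.read().decode("utf-8", errors="replace"))
```

Note that the output contains only what was in the raw HTML, which is exactly why validating the extracted content matters: a JavaScript-rendered page may come back nearly empty.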
Browser automation: Full interaction
Browser automation controls a real browser through Playwright. It handles JavaScript, user interactions, and complex workflows that web_fetch cannot manage.
When to use browser automation
- JavaScript-rendered content and SPAs
- Pages requiring login or authentication
- Interactions: clicks, form fills, scrolling
- Complex extraction workflows
Limitations
- Slower than web_fetch due to browser overhead
- Requires more system resources
- May encounter CAPTCHAs or bot detection
- More complex to configure and maintain
Best practices
Reserve browser automation for cases where web_fetch fails or when you need interaction. Use specific selectors and wait conditions to make workflows reliable. Consider browser profiles for authenticated sessions.
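A hedged sketch of that advice using Playwright's Python sync API (the underlying engine named above): navigate, wait on an explicit selector rather than a fixed sleep, then extract. The function name `fetch_rendered` and its parameters are illustrative; running it requires `pip install playwright` and `playwright install chromium`, so the import is deferred into the function.

```python
def fetch_rendered(url: str, selector: str, timeout_ms: int = 15000) -> str:
    """Render a page in a headless browser and return the text of `selector`.

    An explicit wait condition on a specific selector is what makes this
    reliable: it returns as soon as the element exists, instead of guessing
    at load times with sleeps.
    """
    # Imported lazily so the module loads even without Playwright installed.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        try:
            page = browser.new_page()
            page.goto(url, wait_until="domcontentloaded")
            page.wait_for_selector(selector, timeout=timeout_ms)
            return page.inner_text(selector)
        finally:
            browser.close()
```

For authenticated sessions, Playwright also supports persistent browser profiles, which avoids re-running login flows on every call.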
memory_search: Local knowledge
memory_search queries your agent's memory files: MEMORY.md and files in the memory/ directory. It is completely private, instant, and ideal for personal knowledge retrieval.
When to use memory_search
- Recalling prior conversations and decisions
- Accessing user preferences and settings
- Retrieving project history and context
- Facts that do not require external lookup
Limitations
- Only searches local memory files
- Requires prior information to be stored
- No access to external web content
- Limited by what you have recorded
Best practices
Search memory first before fetching external content. This saves time and API costs when the answer already exists in your history. Maintain well-organized memory files for better search results.
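As a rough illustration of a local memory lookup, the sketch below ranks lines from MEMORY.md and memory/*.md by naive keyword overlap. This is not how memory_search actually works internally (it uses semantic matching, as noted in Troubleshooting below); it only shows the shape of a private, file-based lookup, with `memory_keyword_scan` as a hypothetical helper name.

```python
from pathlib import Path


def memory_keyword_scan(root: str, query: str, limit: int = 5):
    """Return up to `limit` (score, path, lineno, line) hits from memory files,
    scored by how many query terms each line contains."""
    terms = {t.lower() for t in query.split()}
    candidates = [Path(root) / "MEMORY.md",
                  *sorted((Path(root) / "memory").glob("*.md"))]
    hits = []
    for path in candidates:
        if not path.is_file():
            continue
        for lineno, line in enumerate(
                path.read_text(encoding="utf-8").splitlines(), start=1):
            score = sum(1 for t in terms if t in line.lower())
            if score:
                hits.append((score, str(path), lineno, line.strip()))
    hits.sort(key=lambda h: -h[0])  # best matches first
    return hits[:limit]
```

Keeping memory files well organized (one topic per file, descriptive headings) directly improves what any such scan, keyword or semantic, can surface.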
Comparison matrix
| Feature | web_fetch | Browser | memory_search |
|---|---|---|---|
| Speed | Fast | Slow | Instant |
| JavaScript | No | Yes | N/A |
| Authentication | No | Yes | N/A |
| Interaction | No | Full | N/A |
| Privacy | External | External | Private |
| Cost | None | Higher | None |
Decision workflow
Follow this logic when choosing a search tool:
1. Check memory first: Do I already know this? Use memory_search.
2. Simple web content: Is it static public content? Use web_fetch.
3. Complex requirements: Needs JavaScript, login, or interaction? Use browser automation.
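The workflow above collapses into a small routing function. This is an illustrative sketch, assuming the agent can already answer the four yes/no questions; the name `choose_tool` is hypothetical.

```python
def choose_tool(in_memory: bool, needs_js: bool,
                needs_auth: bool, needs_interaction: bool) -> str:
    """Route a task to a search tool, mirroring the decision workflow:
    memory first, then the lightest web tool that can do the job."""
    if in_memory:
        return "memory_search"
    if needs_js or needs_auth or needs_interaction:
        return "browser"
    return "web_fetch"
```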
Practical use cases
Research assistant
A research agent might check memory for prior findings, use web_fetch to gather documentation, and fall back to browser automation only for complex dashboard extractions.
Content curator
Content agents can use web_fetch for RSS feeds and articles, browser automation for social media interactions, and memory_search to avoid recommending previously shared content.
Personal knowledge base
Agents focused on personal productivity rely heavily on memory_search for user preferences, past decisions, and project context, minimizing external API calls.
Common patterns
Cascade search
Try memory_search first, then web_fetch, then browser automation. This minimizes cost and latency by using the fastest viable option.
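A minimal sketch of the cascade, assuming each tier is exposed as a callable. A tier that raises or returns nothing simply falls through to the next, slower one; the function name `cascade_search` is illustrative.

```python
def cascade_search(query, searchers):
    """Try each (name, search_fn) pair in order; return the first
    non-empty result along with the name of the tier that produced it."""
    for name, search_fn in searchers:
        try:
            result = search_fn(query)
        except Exception:
            continue  # a failed tier falls through to the next one
        if result:
            return name, result
    return None, None
```

In practice the list would be ordered `[("memory", ...), ("web_fetch", ...), ("browser", ...)]` so the cheapest viable tier always answers.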
Parallel search
For comprehensive research, run web_fetch on multiple URLs simultaneously while checking memory. Combine results for a complete picture.
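A sketch of the parallel half using a stdlib thread pool, with the fetcher passed in so failures degrade to `None` entries instead of aborting the batch. `fetch_many` is a hypothetical helper, not an OpenClaw API.

```python
from concurrent.futures import ThreadPoolExecutor


def fetch_many(urls, fetch_fn, max_workers: int = 8) -> dict:
    """Fetch several URLs concurrently with `fetch_fn`; a URL whose fetch
    raises maps to None so one bad page never sinks the batch."""
    def safe(url):
        try:
            return fetch_fn(url)
        except Exception:
            return None

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(zip(urls, pool.map(safe, urls)))
```

Threads suit this workload because the time is spent waiting on the network, not on the CPU; a memory_search can run alongside and the results merged afterwards.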
Cached extraction
Store web_fetch results in memory for frequently accessed pages. Future queries hit memory_search instead of repeating external calls.
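One way to sketch that pattern is a small on-disk cache keyed by URL hash, again with illustrative names (`cached_fetch`) and a pluggable fetcher; a real agent would write the results into its memory files so memory_search can find them.

```python
import hashlib
import json
import time
from pathlib import Path


def cached_fetch(url: str, fetch_fn, cache_dir: str,
                 max_age_s: float = 3600.0) -> str:
    """Return cached text for `url` if fetched within `max_age_s` seconds;
    otherwise call `fetch_fn(url)` and store the result for next time."""
    cache = Path(cache_dir)
    cache.mkdir(parents=True, exist_ok=True)
    entry = cache / (hashlib.sha256(url.encode()).hexdigest() + ".json")

    if entry.exists():
        record = json.loads(entry.read_text(encoding="utf-8"))
        if time.time() - record["fetched_at"] < max_age_s:
            return record["text"]  # cache hit: no external call made

    text = fetch_fn(url)
    entry.write_text(json.dumps({"fetched_at": time.time(), "text": text}),
                     encoding="utf-8")
    return text
```

The `max_age_s` knob is the tradeoff dial: long for stable documentation pages, short for anything that changes often.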
Troubleshooting
web_fetch returns empty content
- The site may require JavaScript rendering; try browser automation
- Bot protection may block the request; check for alternatives
- The URL may redirect; verify the final destination
Browser automation is slow
- Use specific selectors instead of broad searches
- Set appropriate wait conditions to avoid unnecessary delays
- Consider if web_fetch could handle the content instead
memory_search finds nothing
- Verify information was previously stored
- Check file paths and naming conventions
- Use different search terms; memory uses semantic matching
Questions about search tools?
Get help choosing the right tool, optimizing extraction workflows, and troubleshooting issues in the OpenClaw community.
FAQ
Which search tool is fastest?
web_fetch is fastest for simple page extraction. It retrieves and parses HTML directly without browser overhead. For JavaScript-heavy sites, browser automation is slower but more capable.
Can I search my own memory files?
Yes. memory_search searches your agent's MEMORY.md and memory/*.md files. It's private, fast, and ideal for recalling prior conversations, preferences, and decisions.
When should I use browser automation over web_fetch?
Use browser automation when content requires JavaScript rendering, user interaction (clicks, forms), or authenticated access. Use web_fetch for static content and speed.
Do these tools work with all websites?
Most websites work, but some block automated access. web_fetch may fail on sites with bot protection. Browser automation can handle more complex sites but may still encounter CAPTCHAs or rate limits.
Can I combine multiple search tools?
Absolutely. A common pattern: use web_fetch for quick research, browser automation for complex interactions, and memory_search to check if you've already encountered similar information.