Release Overview
OpenClaw 2026.4.14 is a broad quality release focused on model provider improvements for the GPT-5 family and channel provider stability. The update includes explicit turn improvements for GPT-5.4, security hardening across multiple surfaces, and significant core codebase refactors for better performance.
Major Features
GPT-5.4 Pro Forward Compatibility
OpenClaw now includes forward-compat support for gpt-5.4-pro, including Codex pricing, limits, and list/status visibility. This means when OpenAI releases GPT-5.4 Pro, your OpenClaw deployment will recognize it immediately without requiring an emergency update.
Telegram Forum Topics Enhancement
Human-readable topic names now surface in agent context, prompt metadata, and plugin hook metadata. Previously, forum topics appeared as opaque numeric IDs. Now agents understand they are in "Bug Reports" or "Feature Requests" rather than "topic_42".
// Topic names persist across restarts
{
  "channels": {
    "telegram": {
      "groups": {
        "-1001234567890": {
          "topics": {
            "5": "Bug Reports",
            "12": "Feature Requests"
          }
        }
      }
    }
  }
}
Critical Fixes for GPT/Codex Users
Embedded Run Timeout Fix
The configured embedded-run timeout now properly flows into the global undici stream timeout tuning. Previously, slow local Ollama runs would hit the default stream cutoff instead of respecting your operator-set timeout. This caused premature failures on long-running local model operations.
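As an illustrative sketch only (the field names below are assumed, not taken from OpenClaw's documented schema), an operator-set embedded-run timeout might look like:

```json
{
  "agents": {
    "embedded": {
      "timeoutSeconds": 600
    }
  }
}
```

With this release, a value like this flows into the undici stream timeout tuning instead of being ignored in favor of the default cutoff.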
Codex Provider Catalog Validation
The apiKey field now appears in Codex provider catalog output. This fixes a bug where the Pi ModelRegistry validator would reject the entry and silently drop all custom models from every provider in models.json. Your custom model configurations now persist correctly.
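As a hypothetical sketch (the entry layout here is assumed, not the actual catalog schema), a Codex catalog entry that now passes the Pi ModelRegistry validator might look like:

```json
{
  "id": "openai-codex",
  "apiKey": "${OPENAI_API_KEY}",
  "models": [
    { "id": "gpt-5.4" }
  ]
}
```

Before this fix, the missing apiKey field caused the validator to reject the entry and silently drop every custom model in models.json.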
Minimal Thinking Mapping
OpenClaw's minimal thinking setting now maps correctly to OpenAI's supported low reasoning effort for GPT-5.4 requests. Previously, embedded runs failed request validation when minimal thinking was specified.
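The mapping can be sketched in a few lines (an illustration only; the names here are hypothetical, not OpenClaw's internals). The key point is that "minimal" has no direct API equivalent, so it folds into "low" instead of being sent through unmapped:

```python
# Illustrative sketch: map OpenClaw-style thinking levels onto the
# reasoning-effort values the OpenAI API accepts. "minimal" previously
# passed through unmapped and failed request validation.
THINKING_TO_EFFORT = {
    "minimal": "low",
    "low": "low",
    "medium": "medium",
    "high": "high",
}

def reasoning_effort(thinking_level: str) -> str:
    try:
        return THINKING_TO_EFFORT[thinking_level]
    except KeyError:
        raise ValueError(f"unknown thinking level: {thinking_level!r}")
```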
Legacy Alias Canonicalization
The legacy openai-codex/gpt-5.4-codex runtime alias now canonicalizes to openai-codex/gpt-5.4 while still honoring alias-specific and canonical per-model overrides. This eliminates confusion when using older configuration references.
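For illustration only (the per-model override layout below is assumed, not taken from the docs), both the alias-specific and the canonical override remain honored, while requests against the alias resolve to openai-codex/gpt-5.4:

```json
{
  "models": {
    "openai-codex/gpt-5.4-codex": { "temperature": 0.2 },
    "openai-codex/gpt-5.4": { "maxTokens": 8192 }
  }
}
```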
Security Hardening
Security Improvements (AI-Assisted)
- Heartbeat security: Force owner downgrade for untrusted hook:wake system events
- Browser SSRF: Enforce SSRF policy on snapshot, screenshot, and tab routes
- Microsoft Teams: Enforce sender allowlist checks on SSO signin invokes
- Config redaction: Redact sourceConfig and runtimeConfig alias fields in snapshots
Dangerous Config Protection
The gateway tool now rejects config.patch and config.apply calls when they would newly enable any flag enumerated by openclaw security audit. This includes flags like:
- dangerouslyDisableDeviceAuth
- allowInsecureAuth
- dangerouslyAllowHostHeaderOriginFallback
- hooks.gmail.allowUnsafeExternalContent
- tools.exec.applyPatch.workspaceOnly: false
Already-enabled flags pass through unchanged, so non-dangerous edits in the same patch still apply. Direct authenticated operator RPC behavior is unchanged.
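The guard logic can be sketched as follows. This is a simplified Python illustration, not OpenClaw's implementation: only three of the audited flags are shown, dotted flags such as tools.exec.applyPatch.workspaceOnly are omitted, and the function names are hypothetical.

```python
# Illustrative sketch: reject a config patch only when it would NEWLY
# enable a dangerous flag. Flags that are already enabled pass through,
# so non-dangerous edits in the same patch still apply.
DANGEROUS_FLAGS = {
    "dangerouslyDisableDeviceAuth",
    "allowInsecureAuth",
    "dangerouslyAllowHostHeaderOriginFallback",
}

def newly_enabled_dangerous(current: dict, patch: dict) -> set:
    return {
        flag for flag in DANGEROUS_FLAGS
        if patch.get(flag) is True and current.get(flag) is not True
    }

def apply_patch(current: dict, patch: dict) -> dict:
    blocked = newly_enabled_dangerous(current, patch)
    if blocked:
        raise PermissionError(f"patch would newly enable: {sorted(blocked)}")
    return {**current, **patch}
```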
Slack Interactive Security
The configured global allowFrom owner allowlist now applies to channel block-action and modal interactive events. Interactive triggers can no longer bypass the documented allowlist intent in channels without a users list. Open-by-default behavior is preserved when no allowlists are configured.
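A minimal sketch, assuming the allowFrom key named in the notes (the surrounding channel layout is hypothetical): with a config like this, block-action and modal interactions from users outside the list are now ignored as well.

```json
{
  "channels": {
    "slack": {
      "allowFrom": ["U0123456789"]
    }
  }
}
```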
Browser & Media Fixes
Browser SSRF Policy Restoration
Hostname navigation is restored under the default browser SSRF policy, while explicit strict mode remains reachable from config. Managed loopback CDP /json/new fallback requests stay on the local CDP control policy. This fixes regressions in which earlier browser hardening blocked normal navigation or blocked OpenClaw's own local CDP control.
Local Browser Reconnection
Loopback CDP readiness checks are now reachable under strict SSRF defaults. OpenClaw can reconnect to locally started managed Chrome without misclassifying a healthy child browser as "not reachable after start".
Image/PDF Tool Normalization
Configured provider/model refs are now normalized before media-tool registry lookup. This fixes cases where image and PDF tool runs rejected valid Ollama vision models as unknown because the tool path skipped the usual model-ref normalization step.
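The normalization step can be sketched like this (an illustration under assumed rules, not OpenClaw's exact normalizer): provider names are treated case-insensitively and whitespace is trimmed, so variants of the same ref resolve to one registry entry.

```python
# Illustrative sketch: normalize a "provider/model" ref before media-tool
# registry lookup so "Ollama/llava" and "ollama/llava " hit the same entry.
def normalize_model_ref(ref: str) -> str:
    provider, sep, model = ref.strip().partition("/")
    if not sep:
        raise ValueError(f"expected provider/model, got {ref!r}")
    # Provider names are case-insensitive; model names keep their case.
    return f"{provider.lower()}/{model.strip()}"
```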
Google Image Generation Fix
A trailing /openai suffix is now stripped from configured Google base URLs only when calling the native Gemini image API. This fixes 404 errors on Gemini image requests without breaking explicit OpenAI-compatible Google endpoints.
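The behavior can be sketched as follows (helper name and signature hypothetical; the conditional stripping itself is what the release notes describe):

```python
# Illustrative sketch: strip a trailing "/openai" from a configured
# Google base URL only when the request targets the native Gemini image
# API, leaving OpenAI-compatible endpoints untouched.
def gemini_image_base_url(configured: str, native_gemini_call: bool) -> str:
    base = configured.rstrip("/")
    if native_gemini_call and base.endswith("/openai"):
        base = base[: -len("/openai")]
    return base
```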
Agent & Subagent Improvements
Subagent Registry Fix
The subagent registry lazy-runtime stub now emits on the stable dist path that both source and bundled runtime imports resolve. This fixes ERR_MODULE_NOT_FOUND errors at runtime when using runtime: "subagent".
Subagent NPM Build Fix
The dist/agents/subagent-registry.runtime.js file is now shipped in npm builds. Previously, runtime: "subagent" runs would stall in queued state after the registry import failed.
Context Engine Compaction
Engine-owned sessions now compact from the first tool-loop delta, and ingest fallback is preserved when afterTurn is absent. Long-running tool loops can now stay bounded without dropping engine state.
Tool Retry Logic
Streamed unknown-tool retries are now only marked as counted when a streamed message actually classifies an unavailable tool. Incomplete streamed tool names no longer reset the retry streak before the final assistant message arrives.
Cron & Scheduling Fixes
Cron Next-Run Repair
The scheduler no longer invents short retries when cron next-run calculation returns no valid future slot. A maintenance wake stays armed so enabled unscheduled jobs recover without entering a refire loop.
Error Backoff Preservation
The active error-backoff floor is now preserved when maintenance repair recomputes a missing cron next-run. Recurring errored jobs no longer resume early after a transient next-run resolution failure.
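Taken together, the two cron fixes amount to logic along these lines (a simplified sketch with hypothetical names; times are abstract numbers here):

```python
# Illustrative sketch: when maintenance repair recomputes a missing
# next-run, keep the active error-backoff floor instead of resuming
# early, and return None (wait for the next maintenance wake) when
# cron yields no valid future slot instead of inventing a short retry.
def repaired_next_run(cron_next, backoff_floor):
    if cron_next is None:
        return None  # no short refire; maintenance wake stays armed
    if backoff_floor is not None:
        return max(cron_next, backoff_floor)
    return cron_next
```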
Memory & Context Fixes
Active Memory Path Change
Recalled memory now moves onto the hidden untrusted prompt-prefix path instead of system prompt injection. The visible Active Memory status line fields are now labeled, and the resolved recall provider/model appears in gateway debug logs. This ensures trace/debug output matches what the model actually saw.
Ollama Embedding Restoration
The built-in ollama embedding adapter is restored in memory-core. Explicit memorySearch.provider: "ollama" works again. Endpoint-aware cache keys ensure different Ollama hosts do not reuse each other's embeddings.
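An endpoint-aware cache key can be sketched like this (the key layout is assumed, not memory-core's actual format): including the host in the key is what keeps two Ollama deployments from reusing each other's vectors.

```python
# Illustrative sketch: build an embedding cache key that includes the
# Ollama endpoint, so different hosts never share cached vectors.
import hashlib

def embedding_cache_key(endpoint: str, model: str, text: str) -> str:
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return f"{endpoint.rstrip('/')}|{model}|{digest}"
```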
Memory/QMD Collection Fix
Legacy lowercase memory.md is no longer treated as a second default root collection. QMD recall no longer searches phantom memory-alt-* collections, and builtin/QMD root-memory fallback stays aligned.
Channel-Specific Fixes
WhatsApp Media Encryption
Installed Baileys media encryption writes are now patched during OpenClaw postinstall. The default npm/install.sh delivery path waits for encrypted media files to finish flushing before readback, avoiding transient ENOENT crashes on image sends.
BlueBubbles Cache Refresh
The Private API server-info cache now lazy-refreshes on send when reply threading or message effects are requested but status is unknown. Sends no longer silently degrade to plain messages when the 10-minute cache expires.
Feishu Allowlist Canonicalization
Allowlist entries are now canonicalized by explicit user/chat kind. Repeated feishu:/lark: provider prefixes are stripped, and opaque Feishu IDs are no longer folded to lowercase. Allowlist matching no longer crosses user/chat namespaces or widens to case-insensitive ID matches.
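The canonicalization rules can be sketched as follows (function name and exact rules hypothetical; the prefix stripping, explicit kind tagging, and case preservation match what the notes describe):

```python
# Illustrative sketch: strip repeated feishu:/lark: provider prefixes,
# tag each allowlist entry by explicit kind, and preserve the case of
# opaque Feishu IDs (no lowercase folding).
def canonicalize_allowlist_entry(entry: str, kind: str) -> str:
    if kind not in ("user", "chat"):
        raise ValueError(f"unknown kind: {kind!r}")
    rest = entry
    while rest.lower().startswith(("feishu:", "lark:")):
        rest = rest.split(":", 1)[1]
    return f"{kind}:{rest}"
```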
TTS Voice Note Fix
OpenClaw temp voice outputs now persist into managed outbound media and pass through reply-media normalization. Voice-note replies no longer silently drop.
Upgrade Recommendations
Who Should Upgrade Immediately
- GPT-5.4/Codex users experiencing validation or timeout issues
- Telegram forum users wanting human-readable topic names
- Slack users with interactive components and allowlists
- Ollama users with embedding or vision model configurations
- Anyone using subagents with runtime: "subagent"
Upgrade Command
# Via CLI
openclaw update
# Or via agent prompt
"Please update OpenClaw to version 2026.4.14"
Post-Upgrade Verification
- Run openclaw doctor to verify installation health
- Test a GPT-5.4 embedded run with reasoning enabled
- Verify Telegram forum topics show human-readable names
- Confirm subagent spawning works without module errors
- Check browser automation starts correctly
Related Resources
- Fixing 'Lazy' GPT Agents in OpenClaw
Three configuration changes that transform GPT-5.4 performance.
- Choosing & Routing Models
Optimize model selection with primary/fallback configurations.
- Security Best Practices
Harden your OpenClaw deployment against common attack vectors.
Release date: April 14, 2026 · 84 community reactions · 44 mentions · 50+ fixes
Full changelog: github.com/openclaw/openclaw/releases/tag/v2026.4.14
FAQ
Should I upgrade immediately?
If you use GPT-5.4/Codex, Telegram forums, Slack with interactive components, or Ollama embeddings: yes. The fixes for validation errors, timeout issues, and model routing are significant. Otherwise, upgrade at your next maintenance window.
Will this break my existing setup?
The update is backward compatible. However, back up your openclaw.json before upgrading. The update changes default behaviors for some security policies, but existing configurations are preserved.
What's the fastest way to update?
Run 'openclaw update' in your terminal, or tell your agent: 'Please update OpenClaw to version 2026.4.14'. The update typically takes under a minute.
Do I need to change my model configuration?
If you were using the legacy 'openai-codex/gpt-5.4-codex' alias, it now automatically maps to 'openai-codex/gpt-5.4'. No manual changes needed unless you want to enable the new strict mode or Codex plugin.
Need help from people who already use this?
Connect with other OpenClaw users, discuss the 2026.4.14 update, and learn from the community.