Self-Hosted AI Agents 2026: OpenClaw, ZeroClaw, IronClaw, Hermes Agent, NanoClaw, NanoBot, PicoClaw, NullClaw, and QClaw — open-source personal AI assistants compared on RAM, security, integrations, and self-improvement
The self-hosted AI assistant ecosystem was transformed between late 2025 and early 2026. OpenClaw launched as 'Clawdbot' in November 2025 and went viral — 358,000 GitHub stars, 24+ messaging integrations, a 5,700+ skill marketplace. A security crisis (CVE-2026-25253 'ClawBleed', one-click RCE; plus 341 malicious ClawHub skills) drove demand for security-first alternatives. ZeroClaw (Rust, <5MB, deny-by-default) and IronClaw (WASM sandbox + hardware TEE) emerged as the security-focused options. NullClaw (678KB Zig binary, <2ms boot) and PicoClaw (Go, <10MB, RISC-V) target extreme resource constraints. Hermes Agent (Nous Research) stands apart: the only personal agent with a genuine closed self-improving loop that adapts from your conversations. The ecosystem also includes MetaClaw (adds RL self-improvement as a proxy to any Claw agent), HiClaw (Alibaba multi-agent orchestration), QwenPaw/CoPaw (China-ecosystem), and Moltis (Rust, enterprise voice + observability).
Self-Hosted Personal AI Assistants
Open-source agents designed to run on your own hardware — from Raspberry Pi to enterprise servers. Evaluated on resource footprint, security model, integrations, and unique capabilities.
| Dimension | OpenClaw | ZeroClaw | IronClaw | Hermes Agent | NanoClaw | NanoBot | PicoClaw | NullClaw | QClaw |
|---|---|---|---|---|---|---|---|---|---|
| Language / runtime | TypeScript / Node.js 22.16+ (~390MB Node runtime) | Rust — single static binary, zero runtime dependency | Rust + WebAssembly — all plugins run in WASM sandbox | Python 3.12+ — flexible, research-grade codebase | TypeScript / Node.js — ~700 lines, auditable in 8 min | Python — ~4,000 lines; academic/hackable | Go — single static binary, RISC-V + ARM + x86 | Zig — compiled to native machine code; zero VM/interpreter overhead | JavaScript / Node.js 20+ — `npm i -g quantumclaw` |
| RAM footprint | ~1.5 GB baseline | <5 MB — ~300× smaller than OpenClaw | ~5 MB estimated (Rust + WASM overhead) | Flexible — from $5/mo VPS to GPU cluster | ~50–200 MB | ~191 MB (benchmarked on Raspberry Pi 3B+) | <10 MB (targets <10MB embedded boards) | ~1 MB — smallest in the ecosystem | Not publicly benchmarked |
| Startup time | ~6 seconds | <10 ms on ARM64 edge nodes | Fast (Rust compiled binary) | Not benchmarked | Fast | Fast | ~1 second | <2 ms on Apple Silicon — fastest in ecosystem | Not benchmarked |
| Security model | Opt-in Docker sandbox; 5 CVEs in 2026 incl. CVE-2026-25253 ClawBleed (CVSS 8.8, one-click RCE); 341 malicious ClawHub skills in the ClawHavoc campaign | Restrictive-by-default: localhost bind, DM pairing codes, explicit command allowlists, forbidden paths (/etc /root ~/.ssh), AES-encrypted secrets, Landlock/Bubblewrap sandboxing | WASM sandbox per tool (capability-based permissions); AES-256-GCM credential vault; credential leak scanning via Aho-Corasick; TEE hardware attestation; endpoint allowlisting | Conservative defaults; Tirith pre-execution terminal scanner; container hardening; prompt injection scanning; no major public CVEs | Per-conversation Docker container — each chat group gets its own isolated filesystem; audit logging; permission gates; OpenTelemetry tracing | Single Docker container + bwrap sandbox; not per-conversation isolated | Minimal (IoT threat model) — embedded-focused, not enterprise-grade | ChaCha20-Poly1305 encrypted API keys; multi-layer sandboxing (Landlock + Firejail + Docker); vtable interfaces for all subsystems | AES-256-GCM encrypted SQLite secrets; immutable VALUES.md trust kernel; AGEX cryptographic agent identity protocol; per-agent isolation |
| Messaging channels | 24+ — WhatsApp, Telegram, Slack, Discord, Signal, iMessage, LINE, Matrix, Teams, Feishu, WeChat, QQ, and more | 25+ — WhatsApp, Telegram, Slack, Discord, Signal, iMessage, Matrix, IRC, Email, Bluesky, DingTalk, Lark, Nostr, Reddit, LinkedIn, Twitter, MQTT, QQ, WeChat Work, and more | Telegram, Discord (via WASM channels) + HTTP webhooks + web gateway (SSE/WebSocket) | Telegram, Discord, Slack, WhatsApp, Signal, Email (6 channels) | WhatsApp, Telegram, Slack, Discord, Gmail (5 channels — per-conversation isolated) | Telegram, WhatsApp, Discord, Feishu/Lark (4 channels — strong China platform support) | Telegram, Discord (2 channels — IoT-focused) | 18–19 channels despite 678KB binary — comparable breadth to ZeroClaw | Telegram, Discord, WhatsApp, Slack, Email (5 channels) + voice transcription |
| GitHub stars (Apr 2026) | ~358,000 — dominant project; moved to independent foundation | ~30,200 | ~11,500 (Near AI / NEARCON 2026) | ~91,200 — Nous Research; rapidly growing since Feb 2026 | ~26,800 — 7,000+ in first week (Jan 31, 2026) | ~38,400 — HKU Data Science Lab | ~13,300 — Sipeed embedded hardware company | ~5,300 — March 2026 (MarkTechPost coverage) | Small — ALLIN1.APP LTD |
| Self-improvement / learning loop | No — static capability set | No — static capability set | No — static capability set | Yes — closed self-improving loop: creates skills from experience, fine-tunes via Atropos RL, searches past conversations, builds deepening user model (Honcho dialectic) | No — static capability set | No — static capability set | No — static capability set | No — static capability set | Partial — 3-tier memory (vector search + structured knowledge + optional Cognee knowledge graph); no RL self-improvement |
| LLM providers | All major providers (Anthropic, OpenAI, Google, Ollama, etc.) | 22+ providers including Ollama, Groq, Mistral, OpenAI, Anthropic | Anthropic, OpenAI, GitHub Copilot, Gemini, MiniMax, Mistral, Ollama, OpenRouter (300+ models), Together AI, Fireworks, vLLM, LiteLLM | Nous Portal, OpenRouter (200+ models), Kimi, MiniMax, GLM, OpenAI, Anthropic, Hugging Face — swap with `hermes model` | Claude-first (Anthropic Agents SDK); expandable via OpenAI-compatible endpoints | OpenRouter, Anthropic, OpenAI, DeepSeek, Gemini, Groq + local via vLLM/Ollama (8+ providers) | Standard LLM APIs — embedded-optimised | 22+ providers; 50+ in test suite | 5-tier cost routing for automatic model selection |
| Best for | Maximum integrations and features; power users; home labs with ample RAM | Edge / IoT devices; $10 single-board computers; deny-by-default security | Regulated industries (healthcare, finance, legal); zero-trust; hardware TEE | Users who want an agent that gets smarter over time; research-grade RL pipelines | Compliance-heavy environments; auditable codebase; per-conversation isolation | Developers who want to read and modify agent code; China-platform integration; Raspberry Pi | IoT gateways, routers, IP cameras, RISC-V boards; battery-powered devices | Absolute minimalism; <2ms boot; microcontrollers; Zig language preference | Knowledge-intensive agents with graph-based memory; AGEX agent identity protocol |
When to choose each
OpenClaw
- Feature-richest option with 24+ messaging channels and 5,700+ ClawHub skills
- Power users wanting voice wake, live canvas, multi-agent routing out of the box
- Home lab setups where 1.5GB RAM is not a constraint
- Largest community and ecosystem support (~358K GitHub stars)
ZeroClaw
- IoT or edge devices with <64MB RAM — <5MB binary, <10ms start
- Security-first users who want deny-by-default access controls
- Cost-sensitive scaling — same core features as OpenClaw in roughly 1/300th of the RAM
- Teams migrating from OpenClaw via built-in `zeroclaw migrate openclaw` command
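ZeroClaw's restrictive-by-default model (explicit command allowlists plus forbidden paths) can be sketched in a few lines. This is a conceptual illustration of the pattern, not ZeroClaw's actual Rust implementation; the allowlist and path set below are invented examples.

```python
# Sketch of a deny-by-default command gate: everything is refused unless
# explicitly allowlisted, and no argument may touch a forbidden path.
# Commands and paths here are illustrative, not ZeroClaw's real defaults.
from pathlib import PurePosixPath

ALLOWED_COMMANDS = {"ls", "cat", "git"}                     # explicit allowlist
FORBIDDEN_PREFIXES = ("/etc", "/root", "/home/user/.ssh")   # never-touch paths

def is_permitted(command: str, args: list[str]) -> bool:
    """Deny unless the command is allowlisted AND no arg falls under a forbidden path."""
    if command not in ALLOWED_COMMANDS:
        return False
    for arg in args:
        if any(str(PurePosixPath(arg)).startswith(p) for p in FORBIDDEN_PREFIXES):
            return False
    return True

print(is_permitted("ls", ["/tmp"]))          # True: allowlisted, safe path
print(is_permitted("cat", ["/etc/shadow"]))  # False: forbidden path
print(is_permitted("rm", ["/tmp/x"]))        # False: not allowlisted
```

The key property is the default branch: an unknown command falls through to denial rather than approval.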
IronClaw
- Regulated industries where credential leaks are unacceptable (WASM + AES-256-GCM vault)
- Enterprises requiring hardware-level attestation via TEE (Trusted Execution Environment)
- Zero-trust environments where every tool must be explicitly capability-granted
- Teams building custom plugins without risking host system access
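IronClaw's credential-leak scanning is described as using Aho-Corasick, which matches many secret patterns against output text in a single pass. Below is a minimal, self-contained sketch of that algorithm; the patterns and text are invented examples, and IronClaw's real scanner (in Rust) will differ in detail.

```python
# Minimal Aho-Corasick multi-pattern matcher: builds a trie with failure
# links, then scans text in one pass, reporting every pattern occurrence.
from collections import deque

def build_automaton(patterns):
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:                      # insert each pattern into the trie
        node = 0
        for ch in pat:
            if ch not in goto[node]:
                goto.append({}); fail.append(0); out.append(set())
                goto[node][ch] = len(goto) - 1
            node = goto[node][ch]
        out[node].add(pat)
    q = deque(goto[0].values())               # depth-1 nodes fail to the root
    while q:                                  # BFS to compute failure links
        node = q.popleft()
        for ch, nxt in goto[node].items():
            q.append(nxt)
            f = fail[node]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[nxt] = goto[f].get(ch, 0)
            out[nxt] |= out[fail[nxt]]        # inherit matches ending here
    return goto, fail, out

def scan(text, automaton):
    goto, fail, out = automaton
    node, hits = 0, []
    for i, ch in enumerate(text):
        while node and ch not in goto[node]:  # follow failure links on mismatch
            node = fail[node]
        node = goto[node].get(ch, 0)
        for pat in out[node]:
            hits.append((i - len(pat) + 1, pat))
    return hits

auto = build_automaton(["AKIA", "sk-"])       # example secret prefixes
print(scan("key=sk-123 id=AKIAXYZ", auto))    # [(4, 'sk-'), (14, 'AKIA')]
```

Because the scan is linear in the text length regardless of how many patterns are loaded, this approach stays cheap even with large secret databases.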
Hermes Agent
- Users who want an agent that genuinely improves and personalises over time
- Research teams building RL fine-tuning pipelines from agent trajectory data
- Developers who switch LLM providers frequently — `hermes model` swaps instantly
- Serverless deployments via Modal or Daytona without managing infrastructure
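To make the "creates skills from experience" idea concrete, here is one purely conceptual way an agent could promote repeated successful tool-call sequences into reusable skills. This is NOT Hermes Agent's actual pipeline (which uses Atropos RL fine-tuning); the class, names, and threshold below are invented for illustration.

```python
# Conceptual sketch only: promote a tool-call sequence to a named "skill"
# after it succeeds the same way several times. Invented for illustration;
# not how Hermes Agent actually implements its learning loop.
from collections import Counter

class SkillMiner:
    def __init__(self, promote_after: int = 3):
        self.promote_after = promote_after
        self.seen = Counter()      # trajectory -> success count
        self.skills = {}           # skill name -> tool-call sequence

    def record(self, trajectory: tuple[str, ...]) -> None:
        """Log one successful tool-call sequence; promote it if seen often enough."""
        self.seen[trajectory] += 1
        if (self.seen[trajectory] >= self.promote_after
                and trajectory not in self.skills.values()):
            self.skills[f"skill_{len(self.skills)}"] = trajectory

miner = SkillMiner()
for _ in range(3):
    miner.record(("fetch_calendar", "summarize", "send_telegram"))
print(miner.skills)   # one promoted skill after three identical successes
```

The real systems add an RL reward signal on top; the sketch only shows the structural idea of trajectories hardening into capabilities.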
NanoClaw
- Compliance-heavy environments needing per-conversation process isolation
- Security-conscious teams who want to audit every line (~700 lines total)
- Production deployments requiring OpenTelemetry tracing and audit logs
- WhatsApp-first workflows with strong multi-group isolation guarantees
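NanoClaw's per-conversation isolation means each chat group maps to its own container with a private volume. The sketch below only constructs such a `docker run` command string and never executes it; the image name, flag set, and naming scheme are assumptions for illustration, not NanoClaw's actual configuration.

```python
# Sketch of per-conversation container isolation: derive a dedicated
# container name and private volume from the conversation ID. The command
# is only built as a string here, never run; all names are illustrative.
import shlex

def container_cmd(conversation_id: str, image: str = "agent-runtime:latest") -> str:
    # Sanitise the ID so it is safe to use in container/volume names.
    safe_id = "".join(c for c in conversation_id if c.isalnum() or c in "-_")
    return shlex.join([
        "docker", "run", "--rm",
        "--name", f"conv-{safe_id}",            # one container per chat group
        "--network", "none",                    # no network unless a tool opts in
        "-v", f"conv-{safe_id}-fs:/workspace",  # private filesystem per conversation
        image,
    ])

print(container_cmd("family-group-42"))
```

The design point is that a prompt-injected agent in one chat group can, at worst, damage that group's own sandbox, not the host or other conversations.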
NanoBot
- Developers who want to understand and modify the agent code (~4K Python lines)
- Academic / research deployments backed by HKU Data Science Lab
- China-platform integration (Feishu/Lark) alongside Western channels
- Raspberry Pi 3B+ deployments at ~191MB RAM
PicoClaw
- IoT gateways, home automation controllers, and network routers (32MB RAM)
- RISC-V, ARM, and x86 embedded boards including Sipeed LicheeRV Nano
- Battery-powered devices requiring a low-power idle profile
- GPIO and hardware peripheral control (ESP32, Arduino, Raspberry Pi) alongside LLM
NullClaw
- Absolute minimalists who need <2ms boot time and ~1MB RAM footprint
- Microcontrollers where even PicoClaw's <10MB is too large
- Teams using Zig for their stack who want native language integration
- Deployments inside Docker, WASM, or native — same static binary for all targets
QClaw
- Knowledge-intensive workflows needing graph-based long-term memory via Cognee
- Multi-agent systems using AGEX cryptographic identity for trust and scoped permissions
- Voice + media pipelines (Deepgram, Whisper, ElevenLabs) integrated with a personal agent
- Teams wanting ClawHub skills (3,286+) in a lighter package than OpenClaw
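QClaw advertises "5-tier cost routing" without documenting the algorithm, so the sketch below shows the general technique: estimate a task's difficulty and send it to the cheapest model tier that clears it. The tier names and the difficulty heuristic are invented for illustration.

```python
# Illustrative tiered cost routing: pick the cheapest model tier whose
# difficulty ceiling covers the task. Tiers, names, and the heuristic
# are invented examples, not QClaw's actual routing table.
TIERS = [            # (difficulty ceiling, model) — cheapest first
    (1, "local-small"),
    (2, "local-large"),
    (3, "cloud-fast"),
    (4, "cloud-standard"),
    (5, "cloud-frontier"),
]

def estimate_difficulty(prompt: str) -> int:
    """Crude heuristic: long prompts and hard-task keywords raise the score."""
    score = 1
    if len(prompt) > 500:
        score += 1
    if any(k in prompt.lower() for k in ("prove", "refactor", "debug")):
        score += 2
    return min(score, 5)

def route(prompt: str) -> str:
    d = estimate_difficulty(prompt)
    return next(model for ceiling, model in TIERS if ceiling >= d)

print(route("what's the weather?"))            # stays on the cheapest tier
print(route("debug this race condition"))      # escalates to a stronger tier
```

In production routers the difficulty estimate is usually itself a small classifier model rather than a keyword heuristic, but the cheapest-tier-that-qualifies structure is the same.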
Our verdict
The Claw ecosystem now has agents for every use case. OpenClaw (358K stars) wins on breadth — 24+ channels, 5,700+ skills, voice wake — but carries real security risk (5 CVEs including ClawBleed). ZeroClaw wins on efficiency (<5MB, <10ms, deny-by-default) and is the natural migration target from OpenClaw. IronClaw (WASM + TEE) is the only agent with hardware-level attestation — the right pick for regulated industries. Hermes Agent stands alone with a genuine self-improving learning loop. NanoClaw's per-conversation Docker isolation is stronger than OpenClaw's optional sandboxing at a fraction of the footprint. NullClaw (678KB Zig, <2ms) serves extreme embedded use cases. Beyond this list: MetaClaw (Python proxy) adds RL self-improvement to any Claw agent; HiClaw (Alibaba) enables multi-agent team orchestration; QwenPaw/CoPaw integrates the Qwen ecosystem for China-first deployments; Moltis (Rust, 44MB) adds enterprise voice I/O, WebAuthn, and Prometheus observability.
Sources & References
- OpenClaw GitHub
~358K stars; TypeScript; 24+ messaging integrations; 5,700+ ClawHub skills
- ZeroClaw GitHub
~30K stars; Rust; <5MB; 25+ channels; built-in OpenClaw migration tool
- IronClaw GitHub (Near AI)
~11.5K stars; Rust + WASM; TEE support; AES-256-GCM credential vault
- Hermes Agent
~91K stars; Nous Research; closed self-improving learning loop; MCP server mode
- NanoClaw GitHub
~26.8K stars; TypeScript; per-conversation Docker isolation; 700-line codebase
- NanoBot GitHub
~38K stars; HKU Data Science Lab; Python; ~191MB on Raspberry Pi 3B+
- PicoClaw GitHub
~13K stars; Sipeed; Go; targets $10 embedded boards and RISC-V
- NullClaw GitHub
~5.3K stars; Zig; 678KB binary; <2ms boot; <1MB RAM — smallest in ecosystem
- QClaw / QuantumClaw GitHub
JavaScript; Cognee knowledge graph memory; AGEX cryptographic agent identity
- EvoAI Labs — Claw Ecosystem Overview
Overview of the Claw ecosystem and its rapid growth in early 2026
- CVE-2026-25253 — OpenClaw ClawBleed
CVSS 8.8 — cross-site WebSocket hijacking enabling one-click RCE