Self-Hosted AI Agents 2026: OpenClaw, ZeroClaw, IronClaw, Hermes Agent, NanoClaw, NanoBot, PicoClaw, NullClaw, and QClaw — open-source personal AI assistants compared on RAM, security, integrations, and self-improvement

The self-hosted AI assistant ecosystem was transformed between late 2025 and early 2026. OpenClaw launched as 'Clawdbot' in November 2025 and went viral — 358,000 GitHub stars, 24+ messaging integrations, a 5,700+ skill marketplace. A security crisis (CVE-2026-25253 'ClawBleed', one-click RCE; plus 341 malicious ClawHub skills) drove demand for security-first alternatives. ZeroClaw (Rust, <5MB, deny-by-default) and IronClaw (WASM sandbox + hardware TEE) emerged as the security-focused options. NullClaw (678KB Zig binary, <2ms boot) and PicoClaw (Go, <10MB, RISC-V) target extreme resource constraints. Hermes Agent (Nous Research) stands apart: the only personal agent with a genuine closed self-improving loop that adapts from your conversations. The ecosystem also includes MetaClaw (adds RL self-improvement as a proxy to any Claw agent), HiClaw (Alibaba multi-agent orchestration), QwenPaw/CoPaw (China-ecosystem), and Moltis (Rust, enterprise voice + observability).

Self-Hosted Personal AI Assistants

Open-source agents designed to run on your own hardware — from Raspberry Pi to enterprise servers. Evaluated on resource footprint, security model, integrations, and unique capabilities.

Dimension-by-dimension comparison

Language / runtime
  • OpenClaw: TypeScript / Node.js 22.16+ (~390MB Node runtime)
  • ZeroClaw: Rust — single static binary, zero runtime dependencies
  • IronClaw: Rust + WebAssembly — all plugins run in a WASM sandbox
  • Hermes Agent: Python 3.12+ — flexible, research-grade codebase
  • NanoClaw: TypeScript / Node.js — ~700 lines, auditable in 8 minutes
  • NanoBot: Python — ~4,000 lines; academic and hackable
  • PicoClaw: Go — single static binary; RISC-V, ARM, and x86
  • NullClaw: Zig — compiled to native machine code; zero VM/interpreter overhead
  • QClaw: JavaScript / Node.js 20+ — `npm i -g quantumclaw`

RAM footprint
  • OpenClaw: ~1.5 GB baseline
  • ZeroClaw: <5 MB — ~300× smaller than OpenClaw
  • IronClaw: ~5 MB estimated (Rust + WASM overhead)
  • Hermes Agent: flexible — from a $5/mo VPS to a GPU cluster
  • NanoClaw: ~50–200 MB
  • NanoBot: ~191 MB (benchmarked on a Raspberry Pi 3B+)
  • PicoClaw: <10 MB (targets embedded boards)
  • NullClaw: ~1 MB — smallest in the ecosystem
  • QClaw: not publicly benchmarked

Startup time
  • OpenClaw: ~6 seconds
  • ZeroClaw: <10 ms on ARM64 edge nodes
  • IronClaw: fast (compiled Rust binary)
  • Hermes Agent: not benchmarked
  • NanoClaw: fast
  • NanoBot: fast
  • PicoClaw: ~1 second
  • NullClaw: <2 ms on Apple Silicon — fastest in the ecosystem
  • QClaw: not benchmarked

Security model
  • OpenClaw: opt-in Docker sandbox; 5 CVEs in 2026, incl. CVE-2026-25253 ClawBleed (CVSS 8.8, one-click RCE); 341 malicious ClawHub skills in the ClawHavoc campaign
  • ZeroClaw: restrictive by default — localhost bind, DM pairing codes, explicit command allowlists, forbidden paths (/etc, /root, ~/.ssh), AES-encrypted secrets, Landlock/Bubblewrap sandboxing
  • IronClaw: WASM sandbox per tool (capability-based permissions); AES-256-GCM credential vault; credential-leak scanning via Aho-Corasick; TEE hardware attestation; endpoint allowlisting
  • Hermes Agent: conservative defaults; Tirith pre-execution terminal scanner; container hardening; prompt-injection scanning; no major public CVEs
  • NanoClaw: per-conversation Docker container — each chat group gets its own isolated filesystem; audit logging; permission gates; OpenTelemetry tracing
  • NanoBot: single Docker container + bwrap sandbox; not per-conversation isolated
  • PicoClaw: minimal (IoT threat model) — embedded-focused, not enterprise-grade
  • NullClaw: ChaCha20-Poly1305 encrypted API keys; multi-layer sandboxing (Landlock + Firejail + Docker); vtable interfaces for all subsystems
  • QClaw: AES-256-GCM encrypted SQLite secrets; immutable VALUES.md trust kernel; AGEX cryptographic agent identity protocol; per-agent isolation

Messaging channels
  • OpenClaw: 24+ — WhatsApp, Telegram, Slack, Discord, Signal, iMessage, LINE, Matrix, Teams, Feishu, WeChat, QQ, and more
  • ZeroClaw: 25+ — WhatsApp, Telegram, Slack, Discord, Signal, iMessage, Matrix, IRC, Email, Bluesky, DingTalk, Lark, Nostr, Reddit, LinkedIn, Twitter, MQTT, QQ, WeChat Work, and more
  • IronClaw: Telegram and Discord (via WASM channels), plus HTTP webhooks and a web gateway (SSE/WebSocket)
  • Hermes Agent: Telegram, Discord, Slack, WhatsApp, Signal, Email (6 channels)
  • NanoClaw: WhatsApp, Telegram, Slack, Discord, Gmail (5 channels, per-conversation isolated)
  • NanoBot: Telegram, WhatsApp, Discord, Feishu/Lark (4 channels; strong China-platform support)
  • PicoClaw: Telegram, Discord (2 channels; IoT-focused)
  • NullClaw: 18–19 channels despite a 678KB binary — comparable breadth to ZeroClaw
  • QClaw: Telegram, Discord, WhatsApp, Slack, Email (5 channels), plus voice transcription

GitHub stars (Apr 2026)
  • OpenClaw: ~358,000 — dominant project; moved to an independent foundation
  • ZeroClaw: ~30,200
  • IronClaw: ~11,500 (Near AI / NEARCON 2026)
  • Hermes Agent: ~91,200 — Nous Research; rapidly growing since Feb 2026
  • NanoClaw: ~26,800 — 7,000+ in the first week (launched Jan 31, 2026)
  • NanoBot: ~38,400 — HKU Data Science Lab
  • PicoClaw: ~13,300 — Sipeed, an embedded-hardware company
  • NullClaw: ~5,300 — March 2026 (MarkTechPost coverage)
  • QClaw: small — ALLIN1.APP LTD

Self-improvement / learning loop
  • Hermes Agent: yes — a closed self-improving loop that creates skills from experience, fine-tunes via Atropos RL, searches past conversations, and builds a deepening user model (Honcho dialectic)
  • QClaw: partial — 3-tier memory (vector search + structured knowledge + optional Cognee knowledge graph), but no RL self-improvement
  • All others (OpenClaw, ZeroClaw, IronClaw, NanoClaw, NanoBot, PicoClaw, NullClaw): no — static capability set

LLM providers
  • OpenClaw: all major providers (Anthropic, OpenAI, Google, Ollama, etc.)
  • ZeroClaw: 22+ providers, including Ollama, Groq, Mistral, OpenAI, Anthropic
  • IronClaw: Anthropic, OpenAI, GitHub Copilot, Gemini, MiniMax, Mistral, Ollama, OpenRouter (300+ models), Together AI, Fireworks, vLLM, LiteLLM
  • Hermes Agent: Nous Portal, OpenRouter (200+ models), Kimi, MiniMax, GLM, OpenAI, Anthropic, Hugging Face — swap with `hermes model`
  • NanoClaw: Claude-first (Anthropic Agents SDK); expandable via OpenAI-compatible endpoints
  • NanoBot: OpenRouter, Anthropic, OpenAI, DeepSeek, Gemini, Groq, plus local models via vLLM/Ollama (8+ providers)
  • PicoClaw: standard LLM APIs, embedded-optimised
  • NullClaw: 22+ providers; 50+ in the test suite
  • QClaw: 5-tier cost routing for automatic model selection

Best for
  • OpenClaw: maximum integrations and features; power users; home labs with ample RAM
  • ZeroClaw: edge/IoT devices; $10 single-board computers; deny-by-default security
  • IronClaw: regulated industries (healthcare, finance, legal); zero trust; hardware TEE
  • Hermes Agent: users who want an agent that gets smarter over time; research-grade RL pipelines
  • NanoClaw: compliance-heavy environments; auditable codebase; per-conversation isolation
  • NanoBot: developers who want to read and modify agent code; China-platform integration; Raspberry Pi
  • PicoClaw: IoT gateways, routers, IP cameras, RISC-V boards; battery-powered devices
  • NullClaw: absolute minimalism; <2ms boot; microcontrollers; Zig language preference
  • QClaw: knowledge-intensive agents with graph-based memory; AGEX agent identity protocol

When to choose each

OpenClaw

  • Feature-richest option with 24+ messaging channels and 5,700+ ClawHub skills
  • Power users wanting voice wake, live canvas, multi-agent routing out of the box
  • Home lab setups where 1.5GB RAM is not a constraint
  • Largest community and ecosystem support (~358K GitHub stars)

ZeroClaw

  • IoT or edge devices with <64MB RAM — <5MB binary, <10ms start
  • Security-first users who want deny-by-default access controls
  • Cost-sensitive scaling — same core features as OpenClaw in 300× less RAM
  • Teams migrating from OpenClaw via built-in `zeroclaw migrate openclaw` command
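ZeroClaw's deny-by-default posture combines an explicit command allowlist with forbidden paths. A minimal sketch of that pattern, assuming hypothetical allowlist entries and paths (this is not ZeroClaw's actual configuration or code):

```python
# Illustrative deny-by-default command gate: a command runs only if its
# binary is explicitly allowlisted AND it touches no forbidden path.
import shlex

ALLOWED_COMMANDS = {"ls", "cat", "git"}                    # example allowlist
FORBIDDEN_PREFIXES = ("/etc", "/root", "/home/user/.ssh")  # example forbidden paths

def is_permitted(command_line: str) -> bool:
    parts = shlex.split(command_line)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        return False                       # deny anything not allowlisted
    for arg in parts[1:]:
        if arg.startswith(FORBIDDEN_PREFIXES):
            return False                   # deny access to protected paths
    return True

print(is_permitted("ls /tmp"))             # True: allowlisted, safe path
print(is_permitted("cat /etc/shadow"))     # False: forbidden path
print(is_permitted("curl http://x.dev"))   # False: not on the allowlist
```

The key design choice is the default: an unknown command fails closed, rather than requiring someone to anticipate and block it.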

IronClaw

  • Regulated industries where credential leaks are unacceptable (WASM + AES-256-GCM vault)
  • Enterprises requiring hardware-level attestation via TEE (Trusted Execution Environment)
  • Zero-trust environments where every tool must be explicitly capability-granted
  • Teams building custom plugins without risking host system access
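IronClaw is described as scanning for credential leaks via Aho-Corasick, which finds every occurrence of many secret-shaped patterns in a single pass over the text. A from-scratch sketch of that technique (the patterns below are illustrative, not IronClaw's actual rule set):

```python
# Multi-pattern credential scanning with the Aho-Corasick automaton:
# one pass over the text finds all matches of all patterns.
from collections import deque

def build_automaton(patterns):
    # goto[s]: char -> next state; fail[s]: fallback state on mismatch;
    # out[s]: patterns that end at state s.
    goto, fail, out = [dict()], [0], [set()]
    for pat in patterns:
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append(dict()); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    q = deque(goto[0].values())            # BFS to compute failure links
    while q:
        r = q.popleft()
        for ch, s in goto[r].items():
            q.append(s)
            f = fail[r]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[s] = goto[f].get(ch, 0)
            out[s] |= out[fail[s]]         # inherit matches from the fallback
    return goto, fail, out

def scan(text, automaton):
    goto, fail, out = automaton
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]                    # fall back on mismatch
        s = goto[s].get(ch, 0)
        for pat in out[s]:                 # report patterns ending at i
            hits.append((i - len(pat) + 1, pat))
    return hits

ac = build_automaton(["sk-", "AKIA", "BEGIN RSA"])
print(scan('key = "sk-abc123"', ac))       # [(7, 'sk-')]
```

Because the automaton's scan cost is linear in the text length regardless of how many patterns are loaded, it suits scanning every tool output before it leaves the sandbox.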

Hermes Agent

  • Users who want an agent that genuinely improves and personalises over time
  • Research teams building RL fine-tuning pipelines from agent trajectory data
  • Developers who switch LLM providers frequently — `hermes model` swaps instantly
  • Serverless deployments via Modal or Daytona without managing infrastructure
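Instant provider swapping of the `hermes model` kind usually rests on a registry pattern: backends are registered under names and the agent holds only a reference. A minimal sketch of that pattern (class, provider, and model names here are hypothetical, not Hermes' actual internals):

```python
# Provider-registry pattern behind "swap the model with one command".
from dataclasses import dataclass
from typing import Dict

@dataclass
class Provider:
    name: str
    base_url: str
    model: str

REGISTRY: Dict[str, Provider] = {
    "openrouter": Provider("openrouter", "https://openrouter.ai/api/v1", "example-72b"),
    "anthropic":  Provider("anthropic",  "https://api.anthropic.com",    "example-sonnet"),
}

class Agent:
    def __init__(self, provider: str):
        self.provider = REGISTRY[provider]

    def switch(self, provider: str):
        # Only the backend reference changes; conversation state is untouched.
        self.provider = REGISTRY[provider]

agent = Agent("openrouter")
agent.switch("anthropic")
print(agent.provider.model)  # example-sonnet
```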

NanoClaw

  • Compliance-heavy environments needing per-conversation process isolation
  • Security-conscious teams who want to audit every line (~700 lines total)
  • Production deployments requiring OpenTelemetry tracing and audit logs
  • WhatsApp-first workflows with strong multi-group isolation guarantees
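The per-conversation isolation idea can be sketched as a router that lazily creates one sandbox per chat id, each with its own private working directory. This stub only illustrates the routing logic; NanoClaw itself uses real Docker containers, and all names below are hypothetical:

```python
# Sketch of per-conversation isolation: each chat id maps to its own sandbox
# with a private working directory, so state never leaks between groups.
import tempfile

class ConversationSandbox:
    def __init__(self, chat_id: str):
        self.chat_id = chat_id
        # Private filesystem root per conversation (stand-in for a container).
        self.workdir = tempfile.mkdtemp(prefix=f"chat-{chat_id}-")

class Router:
    def __init__(self):
        self._sandboxes: dict[str, ConversationSandbox] = {}

    def sandbox_for(self, chat_id: str) -> ConversationSandbox:
        # Lazily create one sandbox per conversation; reuse on later messages.
        if chat_id not in self._sandboxes:
            self._sandboxes[chat_id] = ConversationSandbox(chat_id)
        return self._sandboxes[chat_id]

router = Router()
a = router.sandbox_for("family-group")
b = router.sandbox_for("work-group")
print(a is router.sandbox_for("family-group"))  # True — stable per conversation
print(a.workdir != b.workdir)                   # True — no shared filesystem
```

The security property is that a compromised skill in one chat can only see that chat's filesystem, never another group's.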

NanoBot

  • Developers who want to understand and modify the agent code (~4K Python lines)
  • Academic / research deployments backed by HKU Data Science Lab
  • China-platform integration (Feishu/Lark) alongside Western channels
  • Raspberry Pi 3B+ deployments at ~191MB RAM

PicoClaw

  • IoT gateways, home automation controllers, and network routers (32MB RAM)
  • RISC-V, ARM, and x86 embedded boards including Sipeed LicheeRV Nano
  • Battery-powered devices that need a low-power runtime profile
  • GPIO and hardware peripheral control (ESP32, Arduino, Raspberry Pi) alongside LLM

NullClaw

  • Absolute minimalists who need <2ms boot time and ~1MB RAM footprint
  • Microcontrollers where even PicoClaw's <10MB is too large
  • Teams using Zig for their stack who want native language integration
  • Deployments inside Docker, WASM, or native — same static binary for all targets

QClaw

  • Knowledge-intensive workflows needing graph-based long-term memory via Cognee
  • Multi-agent systems using AGEX cryptographic identity for trust and scoped permissions
  • Voice + media pipelines (Deepgram, Whisper, ElevenLabs) integrated with a personal agent
  • Teams wanting ClawHub skills (3,286+) in a lighter package than OpenClaw
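QClaw's 5-tier cost routing selects a model automatically per request. The general technique is to estimate task difficulty, then pick the cheapest tier whose capability rating covers it; the tiers, prices, and scoring heuristic below are invented for illustration and are not QClaw's actual routing table:

```python
# Tiered cost routing: send each request to the cheapest model tier whose
# capability score covers the estimated task difficulty.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    usd_per_mtok: float
    capability: int  # 1 (cheap/simple) .. 5 (frontier)

TIERS = sorted([
    Tier("local-small", 0.00, 1), Tier("flash", 0.10, 2), Tier("mid", 0.50, 3),
    Tier("large", 3.00, 4), Tier("frontier", 15.00, 5),
], key=lambda t: t.usd_per_mtok)

def difficulty(prompt: str) -> int:
    # Toy heuristic standing in for whatever a real router uses to grade tasks.
    p = prompt.lower()
    score = 1
    if len(prompt) > 200:
        score += 1
    if any(k in p for k in ("prove", "refactor", "debug")):
        score += 2
    if "step by step" in p:
        score += 1
    return min(score, 5)

def route(prompt: str) -> Tier:
    # TIERS is sorted by price, so the first capable tier is the cheapest.
    return next(t for t in TIERS if t.capability >= difficulty(prompt))

print(route("What time is it?").name)                # local-small
print(route("Help me debug this stack trace").name)  # mid
```

The point of the tiering is economic: trivial messages never touch an expensive frontier model, while hard requests are still escalated automatically.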

Our verdict

OpenClaw for max features; ZeroClaw for edge/IoT; IronClaw for regulated industries; Hermes for self-improvement

The Claw ecosystem now has agents for every use case. OpenClaw (358K stars) wins on breadth — 24+ channels, 5,700+ skills, voice wake — but carries real security risk (5 CVEs including ClawBleed). ZeroClaw wins on efficiency (<5MB, <10ms, deny-by-default) and is the natural migration target from OpenClaw. IronClaw (WASM + TEE) is the only agent with hardware-level attestation — the right pick for regulated industries. Hermes Agent stands alone with a genuine self-improving learning loop. NanoClaw's per-conversation Docker isolation is stronger than OpenClaw's optional sandboxing at a fraction of the footprint. NullClaw (678KB Zig, <2ms) serves extreme embedded use cases. Beyond this list: MetaClaw (Python proxy) adds RL self-improvement to any Claw agent; HiClaw (Alibaba) enables multi-agent team orchestration; QwenPaw/CoPaw integrates the Qwen ecosystem for China-first deployments; Moltis (Rust, 44MB) adds enterprise voice I/O, WebAuthn, and Prometheus observability.

Sources & References

  1. OpenClaw GitHub: ~358K stars; TypeScript; 24+ messaging integrations; 5,700+ ClawHub skills
  2. ZeroClaw GitHub: ~30K stars; Rust; <5MB; 25+ channels; built-in OpenClaw migration tool
  3. IronClaw GitHub (Near AI): ~11.5K stars; Rust + WASM; TEE support; AES-256-GCM credential vault
  4. Hermes Agent: ~91K stars; Nous Research; closed self-improving learning loop; MCP server mode
  5. NanoClaw GitHub: ~26.8K stars; TypeScript; per-conversation Docker isolation; 700-line codebase
  6. NanoBot GitHub: ~38K stars; HKU Data Science Lab; Python; ~191MB on Raspberry Pi 3B+
  7. PicoClaw GitHub: ~13K stars; Sipeed; Go; targets $10 embedded boards and RISC-V
  8. NullClaw GitHub: ~5.3K stars; Zig; 678KB binary; <2ms boot; <1MB RAM — smallest in ecosystem
  9. QClaw / QuantumClaw GitHub: JavaScript; Cognee knowledge graph memory; AGEX cryptographic agent identity
  10. EvoAI Labs — Claw Ecosystem Overview: overview of the Claw ecosystem and its rapid growth in early 2026
  11. CVE-2026-25253 — OpenClaw ClawBleed: CVSS 8.8; cross-site WebSocket hijacking enabling one-click RCE
