Bring AI to every device on Earth.

A complete autonomous AI agent runtime in 528 kilobytes of pure C.
One static binary. Zero dependencies. Boots in under 2 milliseconds.


528 KB
Binary
< 2 ms
Startup
50+
Providers
34
Channels
67+
Tools

To run an AI assistant today, you need...

A container runtime
A Python environment
2 GB of RAM
A cloud API key
A stable internet connection
A prayer

h-uman needs one thing: a CPU.


From data centers to five-dollar boards.

The same static binary. The same config. Every device.

Cloud

Docker, WASM, Cloudflare Workers. Full gateway with webhooks, tunnels, and multi-channel routing.

Workstation

macOS, Linux, Windows. Interactive TUI, local Ollama/llama.cpp. No cloud dependency.

Edge

Raspberry Pi, ARM SBCs. Under 5 MB RAM. Under 2 ms startup. Runs headless on boot.

Embedded

Arduino, STM32, Nucleo. Serial JSON protocol. Hardware peripherals via vtable interface.

Swap anything. Lock in nothing.

Every subsystem is a vtable interface. Change any layer with a config edit.

h-uman Core

Providers: 50+ AI model providers (OpenAI, Anthropic, Ollama, and more)
Channels: 34 messaging channels (Telegram, Discord, Slack, iMessage)
Memory: SQLite + FTS5 + vector embeddings for persistent context
Tools: 67+ tools (shell, files, git, browser, web search, hardware)
Security: sandbox, AEAD encryption, path traversal protection
Runtime: native, Docker, WASM, Cloudflare Workers
Tunnels: Cloudflare, ngrok, Tailscale for webhook exposure
Hardware: Arduino, STM32, Raspberry Pi peripherals

50+ providers. 34 channels. 67+ tools.

Connect to any AI model. Reach users on any platform.

Providers: OpenAI, Anthropic, Gemini, OpenRouter, Ollama, llama.cpp, Groq, Mistral, xAI, DeepSeek, Together, Fireworks, Perplexity, Cohere, LM Studio, vLLM, sglang, Cerebras, SambaNova, Hyperbolic
Channels: Telegram, Discord, Slack, Signal, iMessage, Matrix, WhatsApp, IRC, Nostr, Email, Line, Mattermost
Tools: Shell, File Ops, Git, Browser, Memory, Web Search, Cron, HTTP, Hardware, Delegate, and more...

Three commands. Zero dependencies.

Clone, build, run. No containers. No interpreters. No package managers.


A command center, not a status page.

Real-time stats. Live activity feed. Sparkline trends. All in 528 KB.

h-uman Dashboard (live): panels for active providers, connected channels, stable tools, and active sessions.

Try the live demo — no installation required.

Bring AI to every device on Earth.

Open source. MIT licensed. ~528 KB. Runs anywhere.

How h-uman Compares

Less code. Less memory. More integrations. By orders of magnitude.

              h-uman    Claude Code    OpenAI Codex
Binary Size   528 KB    ~200 MB        ~50 MB
Peak RAM      <6 MB     ~300 MB        ~100 MB
Dependencies  0         900+           200+
Providers     50+       1              1
Channels      34        1              1

Lower is better for binary size, peak RAM, and dependencies; higher is better for providers and channels.