h-uman — almost human.

Bring AI to every device on Earth.

1,750 KB binary · <6 MB RAM · <30 ms startup · 0 dependencies
$ git clone https://github.com/sethdford/h-uman.git
1,750 KB
Binary
<30 ms
Startup
4,980+
Tests
50+
Providers
38
Channels
83+
Tools
CLI Automation · DevOps · IoT · Security · Research · Enterprise

See it in action

human
$ human chat
human> Summarize the latest commits
Analyzing 12 commits from the last 24 hours...
3 features, 2 fixes, 1 refactor. Key changes: agent loop optimization, new Discord channel, security hardening for tool execution.

The dependency problem.

Most AI agent frameworks need:

A container runtime
A Python environment
2 GB of RAM
A cloud API key
A stable internet connection
A prayer

h-uman needs one thing: a CPU.

A typical Python agent: agent.py + langchain + openai + chromadb. 900+ dependencies, ~200 MB installed.
h-uman: 0 dependencies, 1,750 KB. That's it.
118× smaller · 50× less RAM · Zero dependencies

vs. leading AI agent frameworks

What's inside 1,750 kilobytes?

Every byte has a purpose. No runtime. No interpreter. No garbage collector. Just compiled C.


From data centers to five-dollar boards.

The same binary. The same config. Every device on the spectrum.

Cloud: 38 channels
Workstation: <6 MB RAM
Edge: <30 ms boot
Embedded: 1,750 KB

Maximum power. Minimum footprint.

Swap anything. Lock in nothing.

Every subsystem is a vtable interface. Change any layer with a config edit.

h-uman Core
Providers: 50+ AI model providers, including OpenAI, Anthropic, and Ollama
Channels: 38 messaging channels, including Telegram, Discord, Slack, and iMessage
Memory: SQLite + FTS5 + vector embeddings for persistent context
Tools: 83+ tools for shell, files, git, browser, web search, and hardware
Security: sandbox, AEAD encryption, path traversal protection
Runtime: native, Docker, WASM, Cloudflare Workers
Tunnels: Cloudflare, ngrok, and Tailscale for webhook exposure
Hardware: Arduino, STM32, and Raspberry Pi peripherals

Programs, not prompts.

HuLa is a typed intermediate representation that turns tool calls into composable, traceable, policy-checked programs. Eight opcodes. Automatic skill promotion.

Structured Execution: call, seq, par, branch, loop, delegate, emit, assert. Composable by design.
Skill Promotion: successful traces are promoted to reusable skills automatically; the agent learns from execution.
Secure by Default: every program node is checked against runtime policy before execution. Deny-by-default.
example.hula.json
{
  "name": "deploy-review",
  "root": "main",
  "nodes": {
    "main": { "op": "seq", "children": ["lint", "test", "deploy"] },
    "lint":   { "op": "call", "tool": "shell", "args": { "cmd": "npm run lint" } },
    "test":   { "op": "call", "tool": "shell", "args": { "cmd": "npm test" } },
    "deploy": { "op": "call", "tool": "shell", "args": { "cmd": "npm run deploy" } }
  }
}
Learn more about HuLa

An ecosystem, not a walled garden.

Swap any provider, channel, or tool with a single config change. No vendor lock-in.

50+ AI Providers
Cloud APIs and local models. Switch with one line of config.
OpenAI · Anthropic · Gemini · OpenRouter · Ollama · llama.cpp · Groq · Mistral · +42 more
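A one-line provider swap might look like the fragment below; the key names are illustrative, not h-uman's documented config schema:

```json
{
  "provider": "ollama",
  "model": "llama3",
  "channels": ["telegram"]
}
```

Changing `"provider"` (and `"model"` to match) would be the entire migration; channels, tools, and memory are untouched.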
38 Channels
One agent, every platform. Users message you where they already are.
Telegram · Discord · Slack · Signal · iMessage · Matrix · WhatsApp · IRC · +26 more
83+ Tools
Not a chatbot: an agent. Real access to your system.
Shell · File Ops · Git · Browser · Memory · Web Search · Cron · HTTP · Hardware · Delegate · +57 more

Three commands. Zero dependencies.

Clone, build, run. No containers. No interpreters. No package managers.

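The three steps might look like this; the `make` target and binary name are assumptions, so check the repository README for the exact commands:

```shell
git clone https://github.com/sethdford/h-uman.git   # 1. clone
cd h-uman
make                                                # 2. build (assumed default target)
./human chat                                        # 3. run
```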

A command center, not a status page.

Real-time stats. Live activity feed. Sparkline trends. All in under 2 MB.

h-uman Dashboard (live): real-time counters for active providers, connected channels, stable tools, and active sessions.

Try the live demo — no installation required.

Built to a standard, not a deadline.

4,980+ tests. Zero ASan errors. Every allocation freed. Every path covered.

4,980+ Tests
Unit, integration, and fuzz tests. Full suite runs in under 7 seconds. Zero flakes.

0 Memory Leaks
AddressSanitizer on every build. Every allocation tracked. Every byte freed.

98 Accessibility Score
WCAG 2.1 AA. Lighthouse 98+. Focus rings, keyboard nav, reduced motion, all built in.

Orders of magnitude smaller.

Not marginally better. Categorically different.

Metric        | h-uman  | Claude Code | OpenAI Codex
Binary Size   | 1.7 MB  | ~200 MB     | ~50 MB
Peak RAM      | <6 MB   | ~300 MB     | ~100 MB
Dependencies  | 0       | 900+        | 200+
Providers     | 50+     | 1           | 1
Channels      | 38      | 1           | 1

Smaller is better for size, memory, and dependencies; larger is better for providers and channels.

Ship your first agent in minutes.

Open source. MIT licensed. Under 2 MB. Runs on anything with a CPU.