# Configuration

Human reads configuration from `~/.human/config.json`. Environment variables override config values.
## Config location

| Platform | Path |
|---|---|
| Unix/macOS | `~/.human/config.json` |
| Windows | `%USERPROFILE%\.human\config.json` |
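As a minimal sketch, resolving the path from the table above could look like this (the helper names are illustrative, not Human's actual API; `Path.home()` maps to `~` on Unix/macOS and `%USERPROFILE%` on Windows):

```python
import json
from pathlib import Path

def config_path() -> Path:
    # ~/.human/config.json on Unix/macOS;
    # %USERPROFILE%\.human\config.json on Windows.
    return Path.home() / ".human" / "config.json"

def load_config() -> dict:
    path = config_path()
    if not path.exists():
        return {}  # fall back to defaults when no config file is present
    return json.loads(path.read_text())
```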
## Key sections

| Section | Description |
|---|---|
| `default_provider` | Default AI provider (e.g. `openai`, `anthropic`, `ollama`) |
| `default_model` | Default model name |
| `default_temperature` | Sampling temperature (0.0–2.0, default 0.7) |
| `providers` | Array of provider entries with `name`, `api_key`, `base_url` |
| `channels` | Channel enablement (`cli`, `telegram`, `discord`, etc.) |
| `tools` | Tool config (`enabled_tools`, `disabled_tools`, timeouts) |
| `memory` | Memory backend (`sqlite`, `markdown`, `none`) |
| `mcp_servers` | External MCP server connections (`command`, `args` per server) |
| `cron` | Cron job scheduling (`enabled`, interval) |
| `security` | Sandbox, autonomy level, resource limits |
| `autonomy` | Action limits, workspace scoping |
| `gateway` | Host, port, pairing, webhook HMAC |
| `tunnel` | Tunnel provider (e.g. `ngrok`) |
| `runtime` | Execution environment (`native`, `docker`, `wasm`) |
## Autonomy levels

Control how much the agent can do without approval via `autonomy.level` or `security.autonomy_level` (0–4):
| Level | autonomy.level | Behavior |
|---|---|---|
| 0 | readonly | No shell or file writes; read-only tools only |
| 1 | supervised | Ask before destructive or high-impact commands (default) |
| 2+ | full | Autonomous execution; use with caution |
Config example:
```json
{
  "autonomy": {
    "level": "supervised",
    "workspace_only": true,
    "max_actions_per_hour": 20
  },
  "security": { "autonomy_level": 1 }
}
```

Use `HUMAN_AUTONOMY=0` (readonly) through `4` (full) to override via environment.
## Environment variable overrides

These override config values when set:
| Variable | Overrides |
|---|---|
| `HUMAN_PROVIDER` | `default_provider` |
| `HUMAN_MODEL` | `default_model` |
| `HUMAN_API_KEY` | Default API key |
| `HUMAN_TEMPERATURE` | `default_temperature` |
| `HUMAN_GATEWAY_PORT` | `gateway.port` |
| `HUMAN_GATEWAY_HOST` | `gateway.host` |
| `HUMAN_WORKSPACE` | Workspace directory |
| `HUMAN_ALLOW_PUBLIC_BIND` | `gateway.allow_public_bind` |
| `HUMAN_WEBHOOK_HMAC_SECRET` | Webhook HMAC secret |
| `HUMAN_AUTONOMY` | Autonomy level (0–4) |
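The precedence rule ("environment wins over config") can be sketched as follows; the `resolve` helper and the `ENV_OVERRIDES` map are illustrative, with variable and key names taken from the table above:

```python
import os

# Maps each override variable to the config key it shadows
# (a subset of the table above).
ENV_OVERRIDES = {
    "HUMAN_PROVIDER": "default_provider",
    "HUMAN_MODEL": "default_model",
    "HUMAN_TEMPERATURE": "default_temperature",
    "HUMAN_GATEWAY_PORT": "gateway.port",
}

def resolve(key, config, env=None):
    """Return the effective value for a config key: environment wins."""
    env = os.environ if env is None else env
    for var, config_key in ENV_OVERRIDES.items():
        if config_key == key and var in env:
            return env[var]
    # Fall back to a dotted lookup in the config dict.
    value = config
    for part in key.split("."):
        if not isinstance(value, dict) or part not in value:
            return None
        value = value[part]
    return value
```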
Provider-specific keys are used when neither `HUMAN_API_KEY` nor a provider-specific key in the config is set:

- `OPENAI_API_KEY` — OpenAI
- `ANTHROPIC_API_KEY` — Anthropic
- `GEMINI_API_KEY` — Google Gemini
- `OLLAMA_HOST` — Ollama base URL (e.g. `http://localhost:11434`)
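The lookup order described above (`HUMAN_API_KEY` first, then the config entry, then a provider-specific variable) might look like this; the function name and map are illustrative, not Human's actual implementation:

```python
import os

# Provider-specific fallback variables listed above.
PROVIDER_KEY_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "gemini": "GEMINI_API_KEY",
}

def api_key_for(provider, env=None):
    """HUMAN_API_KEY wins, then the config api_key, then e.g. OPENAI_API_KEY."""
    env = os.environ if env is None else env
    if "HUMAN_API_KEY" in env:
        return env["HUMAN_API_KEY"]
    if provider.get("api_key"):
        return provider["api_key"]
    var = PROVIDER_KEY_VARS.get(provider.get("name", ""))
    return env.get(var) if var else None  # None is fine for local providers
```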
## Complete example config

```json
{
  "workspace": "~/.human/workspace",
  "default_provider": "openai",
  "default_model": "gpt-4o",
  "default_temperature": 0.7,
  "providers": [
    { "name": "openai", "api_key": "sk-your-openai-key", "base_url": null, "native_tools": true },
    { "name": "anthropic", "api_key": "sk-ant-your-anthropic-key" },
    { "name": "ollama", "api_key": null, "base_url": "http://localhost:11434" }
  ],
  "channels": {
    "cli": true,
    "default_channel": "cli",
    "email": {
      "smtp_host": "smtp.gmail.com",
      "smtp_port": 587,
      "from_address": "bot@example.com",
      "smtp_user": "bot@example.com",
      "smtp_pass": "app-password",
      "imap_host": "imap.gmail.com",
      "imap_port": 993
    },
    "imessage": { "default_target": "+15551234567" }
  },
  "memory": {
    "backend": "sqlite",
    "auto_save": true,
    "sqlite_path": null,
    "max_entries": 0
  },
  "security": { "sandbox": "auto", "autonomy_level": 1 },
  "autonomy": {
    "level": "supervised",
    "workspace_only": true,
    "max_actions_per_hour": 20
  },
  "gateway": {
    "enabled": true,
    "port": 3000,
    "host": "127.0.0.1",
    "require_pairing": true,
    "allow_public_bind": false,
    "pair_rate_limit_per_minute": 10,
    "webhook_hmac_secret": null
  },
  "tunnel": { "provider": "none", "domain": null },
  "runtime": { "kind": "native", "docker_image": null },
  "tools": {
    "shell_timeout_secs": 60,
    "shell_max_output_bytes": 1048576,
    "web_fetch_max_chars": 100000,
    "enabled_tools": [],
    "disabled_tools": []
  },
  "mcp_servers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem"]
    }
  },
  "cron": { "enabled": true, "interval_minutes": 1 }
}
```

## Provider config

Each entry in the `providers` array can have:
| Field | Type | Description |
|---|---|---|
| `name` | string | Provider identifier: `openai`, `anthropic`, `ollama`, `llamacpp`, `lmstudio`, `vllm`, `sglang`, `openrouter`, etc. |
| `api_key` | string | API key (optional for local providers) |
| `base_url` | string | Override base URL (e.g. `http://localhost:11434` for Ollama) |
| `native_tools` | boolean | Use provider-native tool format (default: `true`) |
Local providers (no API key required): `ollama`, `llamacpp`, `llama.cpp`, `lmstudio`, `lm-studio`, `vllm`, `sglang`, `osaurus`
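For example, a local provider entry needs only a name and a base URL; the LM Studio URL below is a typical default and an assumption here, so adjust it to wherever your local server listens:

```json
{
  "providers": [
    { "name": "lmstudio", "api_key": null, "base_url": "http://localhost:1234/v1" }
  ]
}
```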
## Memory backends

- `sqlite` — SQLite with FTS5 and vector search (requires `HU_ENABLE_SQLITE`)
- `markdown` — File-based markdown storage
- `none` — No memory
## Tool filtering

Use `enabled_tools` and `disabled_tools` to control which tools are available:

```json
{
  "tools": {
    "enabled_tools": ["shell", "file_read", "memory_store"],
    "disabled_tools": ["browser_open"]
  }
}
```

An empty `enabled_tools` means all tools are enabled unless listed in `disabled_tools`.
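The rule above can be sketched as a small predicate; the function is illustrative, and it assumes `disabled_tools` takes precedence when a tool appears in both lists:

```python
def tool_allowed(name, tools_cfg):
    """Empty enabled_tools allows everything; disabled_tools always wins."""
    enabled = tools_cfg.get("enabled_tools", [])
    disabled = tools_cfg.get("disabled_tools", [])
    if name in disabled:
        return False
    return not enabled or name in enabled
```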
## MCP Servers

Connect to external MCP (Model Context Protocol) servers. Human can act as both client and server:
As client — connect to external tool servers at startup:
```json
{
  "mcp_servers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem"]
    }
  }
}
```

MCP tools are automatically loaded and available to the agent as `mcp_0_<tool_name>`.
As server — expose all Human tools over MCP:
```shell
human mcp
```

This runs the JSON-RPC 2.0 server on stdin/stdout, compatible with Claude Code and other MCP clients.
## Service Mode
Section titled “Service Mode”Run Human as an always-on service:
```shell
human service        # Daemonize (background)
human service-loop   # Foreground (for containers)
human status         # Check if running
```

The service loop executes cron jobs and will poll configured channels in future releases.