
Configuration

Human reads configuration from ~/.human/config.json. Environment variables override config values.

| Platform | Path |
| --- | --- |
| Unix/macOS | `~/.human/config.json` |
| Windows | `%USERPROFILE%\.human\config.json` |
| Section | Description |
| --- | --- |
| `default_provider` | Default AI provider (e.g. `openai`, `anthropic`, `ollama`) |
| `default_model` | Default model name |
| `default_temperature` | Sampling temperature (0.0–2.0, default 0.7) |
| `providers` | Array of provider entries with `name`, `api_key`, `base_url` |
| `channels` | Channel enablement (`cli`, `telegram`, `discord`, etc.) |
| `tools` | Tool config (`enabled_tools`, `disabled_tools`, timeouts) |
| `memory` | Memory backend (`sqlite`, `markdown`, `none`) |
| `mcp_servers` | External MCP server connections (`command`, `args` per server) |
| `cron` | Cron job scheduling (`enabled`, `interval`) |
| `security` | Sandbox, autonomy level, resource limits |
| `autonomy` | Action limits, workspace scoping |
| `gateway` | Host, port, pairing, webhook HMAC |
| `tunnel` | Tunnel provider (e.g. `ngrok`) |
| `runtime` | Execution environment (`native`, `docker`, `wasm`) |

Control how much the agent can do without approval via autonomy.level or security.autonomy_level (0–4):

| Level | `autonomy.level` | Behavior |
| --- | --- | --- |
| 0 | `readonly` | No shell or file writes; read-only tools only |
| 1 | `supervised` | Ask before destructive or high-impact commands (default) |
| 2+ | `full` | Autonomous execution; use with caution |

Config example:

```json
{
  "autonomy": {
    "level": "supervised",
    "workspace_only": true,
    "max_actions_per_hour": 20
  },
  "security": {
    "autonomy_level": 1
  }
}
```

Use HUMAN_AUTONOMY=0 (readonly) through 4 (full) to override via environment.

These override config values when set:

| Variable | Overrides |
| --- | --- |
| `HUMAN_PROVIDER` | `default_provider` |
| `HUMAN_MODEL` | `default_model` |
| `HUMAN_API_KEY` | Default API key |
| `HUMAN_TEMPERATURE` | `default_temperature` |
| `HUMAN_GATEWAY_PORT` | `gateway.port` |
| `HUMAN_GATEWAY_HOST` | `gateway.host` |
| `HUMAN_WORKSPACE` | Workspace directory |
| `HUMAN_ALLOW_PUBLIC_BIND` | `gateway.allow_public_bind` |
| `HUMAN_WEBHOOK_HMAC_SECRET` | Webhook HMAC secret |
| `HUMAN_AUTONOMY` | Autonomy level (0–4) |
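For example, a single session can be pointed at a local model in read-only mode by exporting the variables above before launching Human (the model name and the commented `human` invocation are illustrative):

```shell
# Override provider, model, and autonomy for this session only.
export HUMAN_PROVIDER=ollama
export HUMAN_MODEL=llama3       # example model name; use whatever your provider serves
export HUMAN_AUTONOMY=0         # readonly: no shell or file writes
# human                         # start Human with the overrides applied
echo "$HUMAN_PROVIDER/$HUMAN_MODEL at autonomy $HUMAN_AUTONOMY"
```

Because these are plain environment variables, they also work per-invocation (`HUMAN_AUTONOMY=0 human ...`) without touching the config file.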

Provider-specific environment keys (used when neither `HUMAN_API_KEY` nor a provider-specific key in the config is set):

  • OPENAI_API_KEY — OpenAI
  • ANTHROPIC_API_KEY — Anthropic
  • GEMINI_API_KEY — Google Gemini
  • OLLAMA_HOST — Ollama base URL (e.g. http://localhost:11434)
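The lookup order this implies (`HUMAN_API_KEY`, then the provider-specific variable, then the config file) can be sketched with shell parameter expansion; the variable names are real, but the fallback logic below is an illustration of the rule, not Human's actual implementation:

```shell
# Resolve an OpenAI key per the stated precedence:
# HUMAN_API_KEY wins, else OPENAI_API_KEY, else the api_key from config.json.
unset HUMAN_API_KEY                      # pretend the global override is not set
OPENAI_API_KEY="sk-from-environment"     # example value
config_key="sk-from-config"              # stand-in for providers[].api_key
key="${HUMAN_API_KEY:-${OPENAI_API_KEY:-$config_key}}"
echo "using key: $key"
```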
A full example config:

```json
{
  "workspace": "~/.human/workspace",
  "default_provider": "openai",
  "default_model": "gpt-4o",
  "default_temperature": 0.7,
  "providers": [
    {
      "name": "openai",
      "api_key": "sk-your-openai-key",
      "base_url": null,
      "native_tools": true
    },
    {
      "name": "anthropic",
      "api_key": "sk-ant-your-anthropic-key"
    },
    {
      "name": "ollama",
      "api_key": null,
      "base_url": "http://localhost:11434"
    }
  ],
  "channels": {
    "cli": true,
    "default_channel": "cli",
    "email": {
      "smtp_host": "smtp.gmail.com",
      "smtp_port": 587,
      "from_address": "bot@example.com",
      "smtp_user": "bot@example.com",
      "smtp_pass": "app-password",
      "imap_host": "imap.gmail.com",
      "imap_port": 993
    },
    "imessage": {
      "default_target": "+15551234567"
    }
  },
  "memory": {
    "backend": "sqlite",
    "auto_save": true,
    "sqlite_path": null,
    "max_entries": 0
  },
  "security": {
    "sandbox": "auto",
    "autonomy_level": 1
  },
  "autonomy": {
    "level": "supervised",
    "workspace_only": true,
    "max_actions_per_hour": 20
  },
  "gateway": {
    "enabled": true,
    "port": 3000,
    "host": "127.0.0.1",
    "require_pairing": true,
    "allow_public_bind": false,
    "pair_rate_limit_per_minute": 10,
    "webhook_hmac_secret": null
  },
  "tunnel": {
    "provider": "none",
    "domain": null
  },
  "runtime": {
    "kind": "native",
    "docker_image": null
  },
  "tools": {
    "shell_timeout_secs": 60,
    "shell_max_output_bytes": 1048576,
    "web_fetch_max_chars": 100000,
    "enabled_tools": [],
    "disabled_tools": []
  },
  "mcp_servers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem"]
    }
  },
  "cron": {
    "enabled": true,
    "interval_minutes": 1
  }
}
```
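When editing the file by hand, a quick syntax check avoids startup failures. This sketch uses `python3 -m json.tool` purely as a JSON validator (any validator works) and writes a minimal config as a demonstration:

```shell
CONFIG="${HOME}/.human/config.json"
mkdir -p "${HOME}/.human"
# Write a minimal config (skip this step if you already have one).
cat > "$CONFIG" <<'EOF'
{
  "default_provider": "ollama",
  "providers": [{ "name": "ollama", "api_key": null, "base_url": "http://localhost:11434" }]
}
EOF
# Validate the JSON syntax before starting Human.
python3 -m json.tool "$CONFIG" > /dev/null && echo "config OK"
```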

Each entry in the providers array can have:

| Field | Type | Description |
| --- | --- | --- |
| `name` | string | Provider identifier: `openai`, `anthropic`, `ollama`, `llamacpp`, `lmstudio`, `vllm`, `sglang`, `openrouter`, etc. |
| `api_key` | string | API key (optional for local providers) |
| `base_url` | string | Override base URL (e.g. `http://localhost:11434` for Ollama) |
| `native_tools` | boolean | Use provider-native tool format (default: `true`) |

Local providers (no API key required): ollama, llamacpp, llama.cpp, lmstudio, lm-studio, vllm, sglang, osaurus
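For instance, a local OpenAI-compatible server needs only a `name` and `base_url`; the port and `/v1` path below are typical vLLM defaults and are an assumption here, so adjust them to your server:

```json
{
  "providers": [
    { "name": "vllm", "api_key": null, "base_url": "http://localhost:8000/v1" }
  ]
}
```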

The `memory.backend` field selects one of:

  • `sqlite` — SQLite with FTS5 and vector search (requires HU_ENABLE_SQLITE)
  • `markdown` — plain Markdown files on disk
  • `none` — no memory
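For example, a markdown-backed setup needs only the backend and save behavior; the fields mirror the `memory` block in the full example above (`sqlite_path` and `max_entries` apply only to the `sqlite` backend):

```json
{
  "memory": {
    "backend": "markdown",
    "auto_save": true
  }
}
```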

Use enabled_tools and disabled_tools to control which tools are available:

```json
{
  "tools": {
    "enabled_tools": ["shell", "file_read", "memory_store"],
    "disabled_tools": ["browser_open"]
  }
}
```

An empty `enabled_tools` list means all tools are enabled except those listed in `disabled_tools`.
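That resolution rule can be sketched as a small shell predicate; this is an illustration of the semantics described above, not Human's actual code:

```shell
# A tool is available if it is not in disabled_tools, and
# enabled_tools is either empty or contains it.
is_enabled() {
  tool="$1"; enabled="$2"; disabled="$3"   # space-separated lists
  case " $disabled " in *" $tool "*) return 1 ;; esac
  [ -z "$enabled" ] && return 0
  case " $enabled " in *" $tool "*) return 0 ;; esac
  return 1
}

is_enabled shell "" "browser_open"        && echo "shell: enabled"
is_enabled browser_open "" "browser_open" || echo "browser_open: disabled"
```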

Connect to external MCP (Model Context Protocol) servers. Human can act as both client and server:

As client — connect to external tool servers at startup:

```json
{
  "mcp_servers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem"]
    }
  }
}
```

MCP tools are automatically loaded and available to the agent as `mcp_0_<tool_name>`.

As server — expose all Human tools over MCP:

```shell
human mcp
```

This runs the JSON-RPC 2.0 server on stdin/stdout, compatible with Claude Code and other MCP clients.
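As a sketch of the wire format, an MCP client enumerates the exposed tools with a standard JSON-RPC 2.0 request written to the server's stdin (`tools/list` is defined by the MCP specification; the `id` is arbitrary):

```json
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }
```

The server replies on stdout with a matching-`id` response whose result lists each tool's name and input schema.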

Run Human as an always-on service:

```shell
human service        # Daemonize (background)
human service-loop   # Foreground (for containers)
human status         # Check if running
```

The service loop executes cron jobs; polling of configured channels is planned for future releases.