Provider Overview

Human uses a vtable-driven provider interface. All AI model backends implement hu_provider_t and are selected by name at runtime.

Each provider implements:

| Method | Description |
| --- | --- |
| chat | Non-streaming chat completion |
| chat_with_tools | Chat with tool calls (optional) |
| stream_chat | Streaming completion (optional) |
| supports_native_tools | Whether the provider uses the OpenAI-style tool schema |
| get_name | Provider identifier |
| deinit | Cleanup |

Optional methods: warmup, supports_streaming, supports_vision.
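
The vtable shape can be sketched in C. This is a hypothetical illustration of the pattern, not the actual Human source: the field signatures, the `stub_*` helpers, and the "ollama" stub are all assumptions; only the method names come from the table above.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of the provider vtable; signatures are assumed. */
typedef struct hu_provider hu_provider_t;

struct hu_provider {
    /* required */
    int  (*chat)(hu_provider_t *self, const char *prompt,
                 char *out, size_t out_len);
    const char *(*get_name)(hu_provider_t *self);
    void (*deinit)(hu_provider_t *self);
    /* optional: left NULL when a backend does not implement them */
    int  (*chat_with_tools)(hu_provider_t *self, const char *prompt,
                            char *out, size_t out_len);
    int  (*stream_chat)(hu_provider_t *self, const char *prompt,
                        void (*on_token)(const char *token));
    bool (*supports_native_tools)(hu_provider_t *self);
};

/* A stub backend showing how one implementation fills the vtable. */
static const char *stub_name(hu_provider_t *self) { (void)self; return "ollama"; }
static int stub_chat(hu_provider_t *self, const char *prompt,
                     char *out, size_t out_len) {
    (void)self; (void)prompt;
    strncpy(out, "ok", out_len);   /* placeholder response */
    return 0;
}
static void stub_deinit(hu_provider_t *self) { (void)self; }

hu_provider_t stub_provider = {
    .chat     = stub_chat,
    .get_name = stub_name,
    .deinit   = stub_deinit,
    /* optional slots stay NULL */
};
```

Callers can then check the optional slots for NULL before dispatching, which is how a method like stream_chat can remain unimplemented without breaking the interface.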

The following providers have dedicated native implementations with format-specific request and response handling:

| Name | Type | Default base URL |
| --- | --- | --- |
| openai | Native | https://api.openai.com/v1 |
| anthropic | Native | https://api.anthropic.com |
| gemini | Native | https://generativelanguage.googleapis.com |
| google | Native | (same as gemini) |
| ollama | Native | http://localhost:11434 |
| openrouter | Native | https://openrouter.ai/api/v1 |
| compatible | Native | (configurable) |
| claude_cli | Native | (Claude CLI) |
| codex_cli | Native | (Codex CLI) |
| openai-codex | Native | (OpenAI Codex) |

The provider names below use the compatible backend with preset base URLs:

| Name | Default base URL |
| --- | --- |
| groq | https://api.groq.com/openai |
| mistral | https://api.mistral.ai/v1 |
| deepseek | https://api.deepseek.com |
| xai, grok | https://api.x.ai |
| cerebras | https://api.cerebras.ai/v1 |
| perplexity | https://api.perplexity.ai |
| cohere | https://api.cohere.com/compatibility |
| together, together-ai | https://api.together.xyz |
| fireworks, fireworks-ai | https://api.fireworks.ai/inference/v1 |
| huggingface | https://router.huggingface.co/v1 |
| siliconflow | https://api.siliconflow.cn/v1 |
| venice | https://api.venice.ai |
| vercel, vercel-ai | https://ai-gateway.vercel.sh/v1 |
| chutes | https://chutes.ai/api/v1 |
| synthetic | https://api.synthetic.new/openai/v1 |
| opencode, opencode-zen | https://opencode.ai/zen/v1 |
| astrai | https://as-trai.com/v1 |
| poe | https://api.poe.com/v1 |
| moonshot, kimi | https://api.moonshot.cn/v1 |
| glm, zhipu, zai, z.ai | https://api.z.ai/api/paas/v4 |
| minimax | https://api.minimax.io/v1 |
| qwen, dashscope | https://dashscope.aliyuncs.com/compatible-mode/v1 |
| qianfan, baidu | https://aip.baidubce.com |
| doubao, volcengine, ark | https://ark.cn-beijing.volces.com/api/v3 |
| byteplus | https://ark.ap-southeast.bytepluses.com/api/v3 |
| bedrock, aws-bedrock | https://bedrock-runtime.us-east-1.amazonaws.com |
| cloudflare, cloudflare-ai | https://gateway.ai.cloudflare.com/v1 |
| copilot, github-copilot | https://api.githubcopilot.com |
| nvidia, nvidia-nim | https://integrate.api.nvidia.com/v1 |
| ovhcloud, ovh | https://oai.endpoints.kepler.ai.cloud.ovh.net/v1 |

Local inference servers are also preconfigured:

| Name | Default base URL |
| --- | --- |
| lmstudio, lm-studio | http://localhost:1234/v1 |
| vllm | http://localhost:8000/v1 |
| llamacpp, llama.cpp | http://localhost:8080/v1 |
| sglang | http://localhost:30000/v1 |
| osaurus | http://localhost:1337/v1 |
| litellm | http://localhost:4000 |

No API key is required for local providers.
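
The alias-to-URL mapping above can be sketched as a simple lookup table. This is an illustrative sketch, not the actual Human implementation: the `provider_alias_t` type, the `resolve_base_url` function, and the subset of entries are assumptions; the name/URL pairs themselves are taken from the tables above.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical alias table: each compatible-backend name maps to a
 * preset base URL (subset of the entries listed above). */
typedef struct {
    const char *name;
    const char *base_url;
} provider_alias_t;

static const provider_alias_t aliases[] = {
    { "groq",     "https://api.groq.com/openai" },
    { "mistral",  "https://api.mistral.ai/v1" },
    { "xai",      "https://api.x.ai" },
    { "grok",     "https://api.x.ai" },          /* alias of xai */
    { "lmstudio", "http://localhost:1234/v1" },  /* local: no API key */
    { "vllm",     "http://localhost:8000/v1" },
};

/* Linear scan over the alias table; returns NULL for unknown names,
 * in which case the caller must configure a base URL explicitly. */
const char *resolve_base_url(const char *name) {
    for (size_t i = 0; i < sizeof aliases / sizeof aliases[0]; i++)
        if (strcmp(aliases[i].name, name) == 0)
            return aliases[i].base_url;
    return NULL;
}
```

A runtime lookup like this is what lets multiple names (xai and grok, for example) share one backend while differing only in their preset URL.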

Set default_provider in config or pass --provider on the CLI:

```sh
./human agent --provider ollama --model llama3
```

For compatible providers, add a providers entry with your API key:

```json
{
  "providers": [{ "name": "groq", "api_key": "gsk_..." }],
  "default_provider": "groq"
}
```