# Vector Memory & Embeddings
Human supports vector embeddings for semantic similarity search. Text is embedded into vectors, stored, and queried by similarity (e.g. cosine) to find relevant context even when wording differs.
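The similarity measure mentioned above is typically cosine similarity: the dot product of two vectors divided by the product of their norms. A minimal sketch (illustrative only, not the library's implementation):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product over the product of the norms.
    1.0 means same direction, 0.0 means orthogonal (unrelated)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))  # 1.0 (same direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```

Because cosine compares direction rather than magnitude, two texts embed to "close" vectors when they are about the same thing, even if they share no words.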
## Architecture

The vector stack consists of:
- Embedder: Converts text to a fixed-dimension vector
- Vector store: Stores embeddings and supports similarity search
- Retrieval engine: Combines keyword (FTS5) and vector search when both are available
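Put together, the three components can be sketched as follows. This is a conceptual Python sketch, not the library's actual (C) API; the class and method names here are illustrative assumptions:

```python
from typing import Protocol

class Embedder(Protocol):
    """Converts text to a fixed-dimension vector."""
    dimensions: int
    def embed(self, text: str) -> list[float]: ...

class VectorStore(Protocol):
    """Stores embeddings and answers nearest-neighbor queries."""
    def add(self, doc_id: str, vector: list[float]) -> None: ...
    def search(self, query: list[float], k: int) -> list[tuple[str, float]]: ...

class RetrievalEngine:
    """Wires an embedder to a vector store: embed the query, then search."""
    def __init__(self, embedder: Embedder, store: VectorStore) -> None:
        self.embedder = embedder
        self.store = store

    def retrieve(self, query: str, k: int = 5) -> list[tuple[str, float]]:
        return self.store.search(self.embedder.embed(query), k)
```

Any embedder can be paired with any store as long as both agree on the vector dimension.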
## Built-in embedders

| Embedder | Dimensions | Description |
|---|---|---|
| Local (TF-IDF) | 384 | In-process, no network; hash projection |
| Ollama | varies | Uses local Ollama embedding API |
| Voyage | varies | Voyage AI API |
| Gemini | varies | Google Gemini embedding API |
The default local embedder (`hu_embedder_local_create`) produces 384-dimensional vectors. No API key is required.
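The hash-projection idea behind the local embedder can be illustrated as follows. This is a hedged sketch of the general technique, not the library's actual algorithm; `embed_local` and its details are assumptions for illustration:

```python
import hashlib
import math

DIM = 384  # matches the default local embedder's output size

def embed_local(text: str, dim: int = DIM) -> list[float]:
    """Hash-projection embedding: each token is hashed to a bucket and a
    sign, producing a fixed-dimension vector with no model weights or
    network calls. Deterministic: the same text always embeds identically."""
    vec = [0.0] * dim
    for token in text.lower().split():
        digest = hashlib.sha256(token.encode("utf-8")).digest()
        bucket = int.from_bytes(digest[:4], "big") % dim
        sign = 1.0 if digest[4] % 2 == 0 else -1.0
        vec[bucket] += sign
    # L2-normalize so cosine similarity reduces to a plain dot product.
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm else vec

v = embed_local("vector memory search")
print(len(v))  # 384
```

Hash projection captures lexical overlap cheaply; it is weaker at true semantics than model-based embedders like Ollama, Voyage, or Gemini, which is the usual trade-off for a zero-dependency default.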
## Vector stores

| Store | Description |
|---|---|
| In-memory | `hu_vector_store_mem_create` — ephemeral, process lifetime |
| (Future) | Persistent backends (SQLite, LanceDB, etc.) as configured |
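An ephemeral store like the in-memory one can be as simple as a map plus a brute-force scan. A conceptual Python sketch (the real `hu_vector_store_mem_create` implementation is C and may differ):

```python
import math

class InMemoryVectorStore:
    """Brute-force store: keeps vectors in a dict and scans all of them
    per query. Fine for small corpora; persistent backends would add
    indexing to avoid the full scan."""

    def __init__(self) -> None:
        self._vectors: dict[str, list[float]] = {}

    def add(self, doc_id: str, vector: list[float]) -> None:
        self._vectors[doc_id] = vector

    def search(self, query: list[float], k: int = 5) -> list[tuple[str, float]]:
        def cosine(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0

        scored = [(doc_id, cosine(query, v)) for doc_id, v in self._vectors.items()]
        return sorted(scored, key=lambda s: s[1], reverse=True)[:k]

store = InMemoryVectorStore()
store.add("a", [1.0, 0.0])
store.add("b", [0.0, 1.0])
print(store.search([0.9, 0.1], k=1))  # "a" ranks first
```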
## Memory engines with vector support

Some memory engines integrate vector search:
- LanceDB: embedded columnar vector database with built-in vector indexing
- Lucid: SQLite-based with Lucid CLI for cross-project sync
These engines may use their own vector storage; see their documentation for configuration.
## Retrieval

The retrieval pipeline can use:
- Keyword only: FTS5 / substring match when no vector store is configured
- Keyword + vector: Hybrid retrieval with Reciprocal Rank Fusion (RRF) when both are available
Hybrid search combines keyword ranks and vector similarity ranks via RRF for final ordering.
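RRF needs only the rank of each document in each list, which makes it a natural way to merge keyword and vector results whose raw scores are not comparable. Each list contributes `1 / (k + rank)` per document; `k = 60` is a common default, though the library's constant is not specified here:

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: sum 1 / (k + rank) across all ranked
    lists; documents ranked highly in several lists float to the top."""
    scores: dict[str, float] = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["a", "b", "c"]   # e.g. from FTS5
vector_hits = ["a", "c", "d"]    # e.g. from the vector store
print(rrf_fuse([keyword_hits, vector_hits]))  # ['a', 'c', 'b', 'd']
```

Here "a" wins because it tops both lists, and "c" beats "b" because it appears in both lists while "b" appears in only one.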
## Configuration

Vector and embedding behavior is configured through the memory and retrieval setup. The agent and gateway create a retrieval engine from config; the default uses an in-memory vector store and the local embedder.
Example (conceptual; actual keys depend on config schema):
```json
{
  "memory": {
    "backend": "sqlite",
    "sqlite_path": "~/.human/memory.db"
  }
}
```

For external embedding APIs (Ollama, Voyage, Gemini), configure the appropriate provider and API key in the usual provider config.