diff --git a/crates/tabby-scheduler/src/doc/mod.rs b/crates/tabby-scheduler/src/doc/mod.rs
index 94869b87008e..0a8304cd3e8d 100644
--- a/crates/tabby-scheduler/src/doc/mod.rs
+++ b/crates/tabby-scheduler/src/doc/mod.rs
@@ -72,7 +72,7 @@ impl IndexAttributeBuilder for DocBuilder {
             }
 
             let chunk = json!({
-                doc::fields::CHUNK_TEXT: chunk_text,
+                doc::fields::CHUNK_TEXT: chunk_text,
             });
 
             yield (chunk_embedding_tokens, chunk)
diff --git a/website/docs/administration/context/index.mdx b/website/docs/administration/context/index.mdx
index f3437a5cbdef..55615f9100ce 100644
--- a/website/docs/administration/context/index.mdx
+++ b/website/docs/administration/context/index.mdx
@@ -25,7 +25,7 @@ The repository context is used to connect Tabby with a source code repository fr
 Git
 
 For GitHub / GitLab, a personal access token is required to access private repositories.
 
-  * Check the instructions in the corresponding tab to create a token.
+  * Check the instructions in the corresponding tab to create a token.
 
 GitHub or GitLab
@@ -59,4 +59,32 @@ Once connected, the indexing job will start automatically. You can check the sta
 
 Additionally, you can also visit the **Code Browser** page to view the connected repository.
 
-code browser
\ No newline at end of file
+code browser
+
+## Internal: Vector Index
+
+When a document is added, it is converted into vectors that help quickly find relevant context. During searches or chats, queries and messages are also converted into vectors to locate the most similar documents.
+
+### Use the default embedding model
+
+The default embedding model is "Nomic-Embed-Text", a high-performing open embedding model with a large token context window.
+
+Currently, "Nomic-Embed-Text" is the only supported local embedding model.
+
+### Using a remote embedding model provider
+
+You can also add a remote embedding model provider by adding a new section to the `~/.tabby/config.toml` file.
+
+```toml
+[model.embedding.http]
+kind = "openai/embedding"
+api_key = "sk-..."
+model_name = "text-embedding-3-small"
+```
+
+The following embedding model providers are supported:
+
+* `openai/embedding`
+* `voyageai/embedding`
+* `llama.cpp/embedding`
+* `ollama/embedding`
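The added documentation lists `ollama/embedding` among the supported kinds but only shows an OpenAI example. For a local provider, a minimal sketch might look like the following; the `api_endpoint` field name and the `nomic-embed-text` model name are assumptions based on a typical Ollama setup, not confirmed by this diff:

```toml
# Hypothetical sketch: pointing Tabby at a local Ollama server for embeddings.
# Field names and values are assumptions; check Tabby's model configuration docs.
[model.embedding.http]
kind = "ollama/embedding"
model_name = "nomic-embed-text"
api_endpoint = "http://localhost:11434"
```

No `api_key` is needed for a local server in this sketch, which is the main practical difference from the hosted `openai/embedding` example above.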