diff --git a/en/advanced/ai.md b/en/advanced/ai.md
index 491f2a497..4b458d53a 100644
--- a/en/advanced/ai.md
+++ b/en/advanced/ai.md
@@ -6,7 +6,7 @@
 
 **Requirements**: choose one available from combo box
 
-Chat model specifies what AI models can you use. This will differ from one provider to other. Models vary in their accuracy, knowledge of the world, context window (what amount of information can they process).
+The Chat model setting specifies which AI models you can use. This will differ from one provider to another. Models vary in their accuracy, knowledge of the world, and context window (the amount of information they can process).
 
 Currently only OpenAI models are supported.
@@ -16,11 +16,11 @@ Currently only OpenAI models are supported.
 
 **Requirements**: choose one available from combo box
 
-Embedding model transforms a document (or a piece of text) into a vector (an ordered collection of numbers). This is used to supply the AI with relevant information regarding your questions.
+The Embedding model transforms a document (or a piece of text) into a vector (an ordered collection of numbers). This is used to supply the AI with relevant information regarding your questions.
 
-Different embedding models have different performance: this includes accuracy and how fast embeddings can be computed. `Q` at the end of model name usually means *quantized* (meaning *reduced*, *simplified*). These models are fast, but provide less accuracy.
+Different embedding models have different performance: this includes accuracy and how fast embeddings can be computed. `Q` at the end of the model name usually means *quantized* (meaning *reduced*, *simplified*). These models are faster and smaller than their original counterparts, but provide slightly less accuracy.
 
-Currently only local embedding models are supported. That means you don't have to provide a new API key and all the logic will be run on your machine.
+Currently, only local embedding models are supported. That means you don't have to provide a new API key, and all the logic runs on your machine.
 
 ## Instruction
@@ -76,4 +76,4 @@ Setting this parameter controls the scope of information the AI model uses to ge
 
 The "Retrieval augmented generation: minimum score" parameter sets the threshold for relevance when retrieving chunks of text for generation. It specifies the minimum score that segments must achieve to be included in the results. Any text segments scoring below this threshold are excluded from consideration in the AI's response generation process.
 
-This parameter is crucial in ensuring that the AI model focuses on retrieving and utilizing only the most relevant information from the retrieved chunks. By filtering out segments that do not meet the specified relevance score, the AI enhances the quality and accuracy of its responses, aligning more closely with the user's needs and query context.
\ No newline at end of file
+This parameter is crucial in ensuring that the AI model focuses on retrieving and utilizing only the most relevant information from the retrieved chunks. By filtering out segments that do not meet the specified relevance score, the AI enhances the quality and accuracy of its responses, aligning more closely with the user's needs and query context.
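
For intuition, the embedding-and-minimum-score retrieval described in the hunks above can be sketched roughly as follows. This is an illustrative toy only: the `retrieve` helper and the 3-dimensional vectors are hypothetical (real embedding models produce vectors with hundreds of dimensions), and this is not the application's actual implementation.

```python
import math


def cosine_similarity(a, b):
    """Score two embedding vectors by the cosine of the angle between them."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def retrieve(query_vec, chunks, min_score):
    """Return (score, text) pairs whose score meets the minimum-score
    threshold, highest-scoring first; everything below is excluded."""
    scored = [(cosine_similarity(query_vec, vec), text) for text, vec in chunks]
    return sorted((pair for pair in scored if pair[0] >= min_score), reverse=True)


# Toy "embeddings": each chunk of text is paired with its vector.
chunks = [
    ("relevant chunk", [0.9, 0.1, 0.0]),
    ("irrelevant chunk", [0.0, 0.2, 0.9]),
]

# Only the chunk similar to the query survives a 0.5 threshold.
query = [1.0, 0.0, 0.0]
print(retrieve(query, chunks, min_score=0.5))
```

Raising `min_score` makes the filter stricter, so fewer (but more relevant) chunks reach the model; lowering it admits more context at the risk of noise.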