diff --git a/docs/docs/concepts/llm-configuration.mdx b/docs/docs/concepts/llm-configuration.mdx
index e678a04baad5..d8679895448b 100644
--- a/docs/docs/concepts/llm-configuration.mdx
+++ b/docs/docs/concepts/llm-configuration.mdx
@@ -43,7 +43,7 @@ endpoint configuration:
 
 ```yaml title="endpoints.yml"
 nlg:
-  type: rasa_plus.ml.LLMResponseRephraser
+  type: rasa_plus.ml.ContextualResponseRephraser
 ```
 
 Additional configuration parameters are explained in detail in the documentation
@@ -52,7 +52,7 @@ pages for each of these components:
 
 - [LLMCommandGenerator](./llm-command-generator.mdx)
 - [FlowPolicy](../llms/flow-policy.mdx)
 - [Docseach](../llms/llm-docsearch.mdx)
-- [LLMResponseRephraser](../llms/llm-nlg.mdx)
+- [ContextualResponseRephraser](../llms/contextual-response-rephraser.mdx)
 
 ## OpenAI Configuration
 
@@ -116,7 +116,7 @@ To configure models per component, follow these steps described on the pages
 for each component:
 
 1. [intent classification instructions](../llms/llm-intent.mdx#llm--embeddings)
-2. [rephrasing instructions](../llms/llm-nlg.mdx#llm-configuration)
+2. [rephrasing instructions](../llms/contextual-response-rephraser.mdx#llm-configuration)
 3. [intentless policy instructions](../llms/llm-intentless.mdx#llm--embeddings)
 4. [docsearch instructions](../llms/llm-docsearch.mdx#llm--embeddings)
 
@@ -344,7 +344,7 @@ should use the [Cohere](https://cohere.ai/) embeddings provider rather than Open
 
 :::note Only Some Components need Embeddings
 Not every component uses embeddings. For example, the
-[LLMResponseRephraser](../llms/llm-nlg.mdx) component does not use embeddings.
+[ContextualResponseRephraser](../llms/contextual-response-rephraser.mdx) component does not use embeddings.
 For these components, no `embeddings` property is needed.
 :::
diff --git a/docs/docs/llms/llm-nlg.mdx b/docs/docs/llms/contextual-response-rephraser.mdx
similarity index 97%
rename from docs/docs/llms/llm-nlg.mdx
rename to docs/docs/llms/contextual-response-rephraser.mdx
index 2ac1dccd2056..9ca105ddab10 100644
--- a/docs/docs/llms/llm-nlg.mdx
+++ b/docs/docs/llms/contextual-response-rephraser.mdx
@@ -1,5 +1,5 @@
 ---
-id: llm-nlg
+id: contextual-response-rephraser
 sidebar_label: NLG using LLMs
 title: LLMs for Natural Language Generation
 abstract: |
@@ -95,7 +95,7 @@ To use rephrasing, add the following lines to your `endpoints.yml` file:
 
 ```yaml-rasa title="endpoints.yml"
 nlg:
-  type: rasa_plus.ml.LLMResponseRephraser
+  type: rasa_plus.ml.ContextualResponseRephraser
 ```
 
 By default, rephrasing is only enabled for responses that specify
@@ -115,7 +115,7 @@ If you want to enable rephrasing for all responses, you can set the
 
 ```yaml-rasa title="endpoints.yml"
 nlg:
-  type: rasa_plus.ml.LLMResponseRephraser
+  type: rasa_plus.ml.ContextualResponseRephraser
   rephrase_all: true
 ```
 
@@ -131,7 +131,7 @@ by setting the `rephrase_all` property to `true` in the `endpoints.yml` file:
 
 ```yaml-rasa title="endpoints.yml"
 nlg:
-  type: rasa_plus.ml.LLMResponseRephraser
+  type: rasa_plus.ml.ContextualResponseRephraser
   rephrase_all: true
 ```
 
@@ -147,7 +147,7 @@ You can specify the openai model to use for rephrasing by setting the
 
 ```yaml-rasa title="endpoints.yml"
 nlg:
-  type: rasa_plus.ml.LLMResponseRephraser
+  type: rasa_plus.ml.ContextualResponseRephraser
   llm:
     model_name: text-davinci-003
 ```
@@ -170,7 +170,7 @@ The used LLM provider provider can be configured in the
 
 ```yaml-rasa title="endpoints.yml"
 nlg:
-  type: rasa_plus.ml.LLMResponseRephraser
+  type: rasa_plus.ml.ContextualResponseRephraser
   llm:
     type: "cohere"
 ```
@@ -188,7 +188,7 @@ You can specify the temperature to use for rephrasing by setting the
 
 ```yaml-rasa title="endpoints.yml"
 nlg:
-  type: rasa_plus.ml.LLMResponseRephraser
+  type: rasa_plus.ml.ContextualResponseRephraser
   llm:
     temperature: 0.3
 ```
@@ -241,7 +241,7 @@ property in the `endpoints.yml` file:
 
 ```yaml-rasa title="endpoints.yml"
 nlg:
-  type: rasa_plus.ml.LLMResponseRephraser
+  type: rasa_plus.ml.ContextualResponseRephraser
   prompt: |
     The following is a conversation with an AI assistant. The assistant
     is helpful, creative, clever, and very friendly.
diff --git a/docs/docs/llms/llm-configuration.mdx b/docs/docs/llms/llm-configuration.mdx
index b2a3dc2504c2..e3e4d00a7159 100644
--- a/docs/docs/llms/llm-configuration.mdx
+++ b/docs/docs/llms/llm-configuration.mdx
@@ -43,7 +43,7 @@ endpoint configuration:
 
 ```yaml title="endpoints.yml"
 nlg:
-  type: rasa_plus.ml.LLMResponseRephraser
+  type: rasa_plus.ml.ContextualResponseRephraser
 ```
 
 Additional configuration parameters are explained in detail in the documentation
@@ -52,7 +52,7 @@ pages for each of these components:
 
 - [LLMCommandGenerator](./llm-command-generator.mdx)
 - [FlowPolicy](../llms/flow-policy.mdx)
 - [Docseach](../llms/llm-docsearch.mdx)
-- [LLMResponseRephraser](../llms/llm-nlg.mdx)
+- [ContextualResponseRephraser](../llms/contextual-response-rephraser.mdx)
 
 ## OpenAI Configuration
 
@@ -116,7 +116,7 @@ To configure models per component, follow these steps described on the pages
 for each component:
 
 1. [intent classification instructions](../llms/llm-intent.mdx#llm--embeddings)
-2. [rephrasing instructions](../llms/llm-nlg.mdx#llm-configuration)
+2. [rephrasing instructions](../llms/contextual-response-rephraser.mdx#llm-configuration)
 3. [intentless policy instructions](../llms/llm-intentless.mdx#llm--embeddings)
 4. [docsearch instructions](../llms/llm-docsearch.mdx#llm--embeddings)
 
@@ -344,7 +344,7 @@ should use the [Cohere](https://cohere.ai/) embeddings provider rather than Open
 
 :::note Only Some Components need Embeddings
 Not every component uses embeddings. For example, the
-[LLMResponseRephraser](../llms/llm-nlg.mdx) component does not use embeddings.
+[ContextualResponseRephraser](../llms/contextual-response-rephraser.mdx) component does not use embeddings.
 For these components, no `embeddings` property is needed.
 :::
diff --git a/docs/docs/llms/llm-custom.mdx b/docs/docs/llms/llm-custom.mdx
index 0641c4bfd104..ea3d91baa2da 100644
--- a/docs/docs/llms/llm-custom.mdx
+++ b/docs/docs/llms/llm-custom.mdx
@@ -147,7 +147,7 @@ examples and doing a similarity search.
     """
 ```
 
-### LLMResponseRephraser
+### ContextualResponseRephraser
 
 #### rephrase
 
diff --git a/docs/docs/llms/llm-next-gen.mdx b/docs/docs/llms/llm-next-gen.mdx
index cbdf0b56f374..a1e86b456617 100644
--- a/docs/docs/llms/llm-next-gen.mdx
+++ b/docs/docs/llms/llm-next-gen.mdx
@@ -91,7 +91,7 @@ Here's a breakdown of our approach:
   know where we are in the dialogue and what's been covered.
 - We employ LLMs to update the state of these Flows. These language models
   process user input, adjusting the conversation's course as needed.
-- We use LLMs to [improve the chatbot's responses](./llm-nlg.mdx). These
+- We use LLMs to [improve the chatbot's responses](./contextual-response-rephraser.mdx). These
   language models generate the chatbot's responses, ensuring they feel natural
   and fluent.
 
diff --git a/docs/docs/start-here.mdx b/docs/docs/start-here.mdx
index e532672ff314..cb80dcdda1bd 100644
--- a/docs/docs/start-here.mdx
+++ b/docs/docs/start-here.mdx
@@ -72,7 +72,7 @@ Out of the box, this assistant can already handle a variety of conversations. Th
 where the user always provides the information the assistant requests.
 But if users change their mind, answer indirectly, or interject with questions, this assistant can handle
 those cases as well. Try out some of these conversations yourself to get a feel for things.
-If you want your assistant to sound a bit more natural, you can activate [contextual rephrasing](./llms/llm-nlg.mdx) of responses.
+If you want your assistant to sound a bit more natural, you can activate [contextual rephrasing](./llms/contextual-response-rephraser.mdx) of responses.