diff --git a/docs/docs/concepts/components/llm-configuration.mdx b/docs/docs/concepts/components/llm-configuration.mdx
index b87ecbd17627..396199d8c2bc 100644
--- a/docs/docs/concepts/components/llm-configuration.mdx
+++ b/docs/docs/concepts/components/llm-configuration.mdx
@@ -1,7 +1,7 @@
---
id: llm-configuration
-sidebar_label: Configuration
-title: Configuration
+sidebar_label: LLM Providers
+title: LLM Providers
abstract: |
Instructions on how to setup and configure Large Language Models from
OpenAI, Cohere, and other providers.
@@ -19,41 +19,22 @@ import RasaLabsBanner from "@theme/RasaLabsBanner";
## Overview
-This guide will walk you through the process of configuring Rasa to connect to an
-LLM, including deployments that rely on Azure OpenAI service. Instructions for
-other LLM providers are further down the page.
+All Rasa components which make use of an LLM can be configured.
+This includes:
+* The LLM provider
+* The model
+* The sampling temperature
+* The prompt template
-## Assitant Configration
+and other settings.
+This page applies to the following components which use LLMs:
-To use LLMs in your assistant, you need to configure the following components:
+* [LLMCommandGenerator](../dialogue-understanding.mdx)
+* DocsearchPolicy
+* IntentlessPolicy
+* ContextualResponseRephraser
+* LLMIntentClassifier
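+
+For any of these components, those settings can be overridden directly in
+`config.yml`. As an illustrative sketch (the prompt file path is a hypothetical
+example, and the exact parameters supported by each component are documented on
+its own page), such an override looks roughly like this:
+
+```yaml title="config.yml"
+pipeline:
+  - name: LLMCommandGenerator
+    llm:
+      type: "openai"
+      model_name: "gpt-4"
+      temperature: 0.0
+    prompt: prompts/command-generator.jinja2
+```
+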
-```yaml title="config.yml"
-recipe: default.v1
-language: en
-pipeline:
- - name: LLMCommandGenerator
-
-policies:
- - name: rasa.core.policies.flow_policy.FlowPolicy
- - name: rasa_plus.ml.DocsearchPolicy
- - name: RulePolicy
-```
-
-To use the rephrasing capability, you'll also need to add the following to your
-endpoint configuration:
-
-```yaml title="endpoints.yml"
-nlg:
- type: rasa_plus.ml.ContextualResponseRephraser
-```
-
-Additional configuration parameters are explained in detail in the documentation
-pages for each of these components:
-
-- [LLMCommandGenerator](../dialogue-understanding.mdx)
-- [FlowPolicy](../policies.mdx#flow-policy)
-- Docseacrh
-- [ContextualResponseRephraser](../contextual-response-rephraser.mdx)
## OpenAI Configuration
@@ -63,18 +44,10 @@ can be configured with different LLMs, but OpenAI is the default.
If you want to configure your assistant with a different LLM, you can find
instructions for other LLM providers further down the page.
-### Prerequisites
-
-Before beginning, make sure that you have:
-
-- Access to OpenAI's services
-- Ability to generate API keys for OpenAI
### API Token
-The API token is a key element that allows your Rasa instance to connect and
-communicate with OpenAI. This needs to be configured correctly to ensure seamless
-interaction between the two.
+The API token authenticates your requests to the OpenAI API.
To configure the API token, follow these steps:
@@ -110,16 +83,24 @@ To configure the API token, follow these steps:
### Model Configuration
-Rasa allow you to use different models for different components. For example,
-you might use one model for intent classification and another for rephrasing.
+Many LLM providers offer multiple models through their API.
+The model is specified individually for each component, so you can use a
+combination of different models if you want to. For instance, here is how you could
+configure different models for the `LLMCommandGenerator` and the `DocsearchPolicy`:
-To configure models per component, follow these steps described on the
-pages for each component:
+```yaml title="config.yml"
+recipe: default.v1
+language: en
+pipeline:
+  - name: LLMCommandGenerator
+    llm:
+      model_name: "gpt-4"
+
+policies:
+  - name: rasa.core.policies.flow_policy.FlowPolicy
+  - name: rasa_plus.ml.DocsearchPolicy
+    llm:
+      model_name: "gpt-3.5-turbo"
+```
-1. [intent classification instructions](../../nlu-based-assistants/components.mdx#llmintentclassifier)
-2. [rephrasing instructions](../contextual-response-rephraser.mdx#llm-configuration)
-3. [intentless policy instructions](../policies.mdx#flow-policy)
-4. docsearch instructions
### Additional Configuration for Azure OpenAI Service
diff --git a/docs/docs/concepts/components/llm-custom.mdx b/docs/docs/concepts/components/llm-custom.mdx
index 49ce32829300..e65de8c241d8 100644
--- a/docs/docs/concepts/components/llm-custom.mdx
+++ b/docs/docs/concepts/components/llm-custom.mdx
@@ -1,6 +1,6 @@
---
id: llm-custom
-sidebar_label: Customization
+sidebar_label: Customizing LLM Components
title: Customizing LLM based Components
abstract:
---
diff --git a/docs/docs/concepts/components/overview.mdx b/docs/docs/concepts/components/overview.mdx
index b7024ff09986..3f7dca576c3c 100644
--- a/docs/docs/concepts/components/overview.mdx
+++ b/docs/docs/concepts/components/overview.mdx
@@ -1,62 +1,107 @@
---
id: overview
-sidebar_label: Overview
-title: Model Configuration
-description: Learn about model configuration for Rasa.
+sidebar_label: Configuration
+title: Configuration
+description: Configure your Rasa Assistant.
abstract: The configuration file defines the components and policies that your model will use to make predictions based on user input.
---
-The recipe key allows for different types of config and model architecture.
-Currently, "default.v1" and the experimental "graph.v1" recipes are supported.
+import RasaProLabel from '@theme/RasaProLabel';
+import RasaProBanner from "@theme/RasaProBanner";
-:::info New in 3.5
+You can customize many aspects of how Rasa works by modifying the `config.yml` file.
-The config file now includes a new mandatory key `assistant_id` which represents the unique assistant identifier.
+A minimal configuration for a [CALM](../../calm.mdx) assistant looks like this:
+
+```yaml-rasa title="config.yml"
+recipe: default.v1
+language: en
+assistant_id: 20230405-114328-tranquil-mustard
+
+pipeline:
+ - name: LLMCommandGenerator
+
+policies:
+ - name: rasa.core.policies.flow_policy.FlowPolicy
+```
+
+:::tip Default Configuration
+For backwards compatibility, running `rasa init` will create an NLU-based assistant.
+To create a CALM assistant with the right `config.yml`, add the
+`--template` argument:
+
+```bash
+rasa init --template calm
+```
:::
-The `assistant_id` key must specify a unique value to distinguish multiple assistants in deployment.
-The assistant identifier will be propagated to each event's metadata, alongside the model id.
-Note that if the config file does not include this required key or the placeholder default value is not replaced, a random
-assistant name will be generated and added to the configuration everytime when running `rasa train`.
+## The recipe, language, and assistant_id keys
+
+The `recipe` key only needs to be modified if you want to use a [custom graph recipe](./graph-recipe.mdx).
+The vast majority of projects should use the default value `"default.v1"`.
-The language and pipeline keys specify the components used by the model to make NLU predictions.
-The policies key defines the policies used by the model to predict the next action.
+The `language` key is a 2-letter ISO 639-1 code for the language your assistant supports.
+The `assistant_id` key should be a unique value and allows you to distinguish multiple
+deployed assistants.
+This id is added to each event's metadata, together with the model id.
+See [event brokers](../../production/event-brokers.mdx) for more information.
+Note that if the config file does not include this required key or the placeholder default value
+is not replaced, a random assistant name will be generated and added to the configuration
+every time you run `rasa train`.
-## Suggested Config
-TODO: update
-You can leave the pipeline and/or policies key out of your configuration file.
-When you run `rasa train`, the Suggested Config feature will select a default configuration
-for the missing key(s) to train the model.
+## Pipeline
-Make sure to specify the language key in your `config.yml` file with the
-2-letter ISO language code.
+The `pipeline` key lists the components which will be used to process and understand the messages
+that end users send to your assistant.
+In a CALM assistant, the output of your component pipeline is a list of [commands](../dialogue-understanding.mdx).
-Example `config.yml` file:
+The main component in your pipeline is the `LLMCommandGenerator`.
+Here is what an example configuration looks like:
-```yaml-rasa (docs/sources/data/configs_for_docs/example_for_suggested_config.yml)
+```yaml-rasa title="config.yml"
+pipeline:
+ - name: LLMCommandGenerator
+ llm:
+ model_name: "gpt-4"
+ request_timeout: 7
+ temperature: 0.0
```
-The selected configuration will also be written as comments into the `config.yml` file,
-so you can see which configuration was used. For the example above, the resulting file
-might look e.g. like this:
+The full set of configurable parameters is listed [here](../dialogue-understanding.mdx).
+
+All components which make use of LLMs share common configuration parameters, which are listed [here](./llm-configuration.mdx).
+
-```yaml-rasa (docs/sources/data/configs_for_docs/example_for_suggested_config_after_train.yml)
+### Combining CALM and NLU-based components
+
+
+Rasa Pro allows you to combine both NLU-based and CALM components in your pipeline.
+See a full list of NLU-based components [here](../../nlu-based-assistants/components.mdx).
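+
+As an illustrative sketch (the specific NLU components chosen here are just
+examples, not a recommendation), a combined pipeline might look like this:
+
+```yaml-rasa title="config.yml"
+pipeline:
+  - name: WhitespaceTokenizer
+  - name: CountVectorsFeaturizer
+  - name: DIETClassifier
+    epochs: 100
+  - name: LLMCommandGenerator
+```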
+
+## Policies
+
+The `policies` key lists the [dialogue policies](../policies.mdx) your assistant will use
+to progress the conversation.
+
+```yaml-rasa title="config.yml"
+policies:
+ - name: rasa.core.policies.flow_policy.FlowPolicy
```
-If you like, you can then un-comment the suggested configuration for one or both of the
-keys and make modifications. Note that this will disable automatic suggestions for this
-key when training again.
-As long as you leave the configuration commented out and don't specify any configuration
-for a key yourself, a default configuration will be suggested whenever you train a new
-model.
+The [FlowPolicy](../policies.mdx#flow-policy) currently doesn't have any additional configuration parameters.
-:::note nlu- or dialogue- only models
+### Combining CALM and NLU-based dialogue policies
-Only the default configuration for `pipeline` will be automatically selected
-if you run `rasa train nlu`, and only the default configuration for `policies`
-will be selected if you run `rasa train core`.
-:::
+
+Rasa Pro allows you to use both NLU-based and CALM dialogue policies in your assistant.
+See a full list of NLU-based policies [here](../../nlu-based-assistants/policies.mdx).
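+
+As an illustrative sketch (the specific NLU-based policies chosen here are just
+examples), a combined `policies` section might look like this:
+
+```yaml-rasa title="config.yml"
+policies:
+  - name: rasa.core.policies.flow_policy.FlowPolicy
+  - name: RulePolicy
+  - name: MemoizationPolicy
+```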
diff --git a/docs/docs/concepts/dialogue-understanding.mdx b/docs/docs/concepts/dialogue-understanding.mdx
index abe169853706..d10d081fd71f 100644
--- a/docs/docs/concepts/dialogue-understanding.mdx
+++ b/docs/docs/concepts/dialogue-understanding.mdx
@@ -30,6 +30,7 @@ You can find all generated commands in the [command reference](#command-referenc
To use this component in your assistant, you need to add the
`LLMCommandGenerator` to your NLU pipeline in the `config.yml` file.
+Read more about the `config.yml` file [here](./components/overview.mdx).
```yaml-rasa title="config.yml"
pipeline:
diff --git a/docs/sidebars.js b/docs/sidebars.js
index 211182f7f3d6..f109346032bd 100644
--- a/docs/sidebars.js
+++ b/docs/sidebars.js
@@ -63,9 +63,9 @@ module.exports = {
items: [
"concepts/components/overview",
"concepts/components/llm-configuration",
- "concepts/components/llm-custom",
"concepts/components/custom-graph-components",
- "concepts/components/graph-recipe",
+ "concepts/components/llm-custom",
+ "concepts/components/graph-recipe",
],
},
"concepts/policies", // TODO: ENG-538