Merge pull request #12865 from RasaHQ/docs-cleanup-components
Docs cleanup components
m-vdb authored Oct 17, 2023
2 parents 2571266 + 5052c54 commit 3e1e40d
Showing 5 changed files with 118 additions and 91 deletions.
83 changes: 32 additions & 51 deletions docs/docs/concepts/components/llm-configuration.mdx
@@ -1,7 +1,7 @@
---
id: llm-configuration
sidebar_label: Configuration
title: Configuration
sidebar_label: LLM Providers
title: LLM Providers
abstract: |
Instructions on how to setup and configure Large Language Models from
OpenAI, Cohere, and other providers.
@@ -19,41 +19,22 @@ import RasaLabsBanner from "@theme/RasaLabsBanner";

## Overview

This guide will walk you through the process of configuring Rasa to connect to an
LLM, including deployments that rely on Azure OpenAI service. Instructions for
other LLM providers are further down the page.
All Rasa components which make use of an LLM can be configured.
This includes:
* The LLM provider
* The model
* The sampling temperature
* The prompt template

## Assistant Configuration
and other settings.
This page applies to the following components which use LLMs:

To use LLMs in your assistant, you need to configure the following components:
* [LLMCommandGenerator](../dialogue-understanding.mdx)
* DocsearchPolicy
* IntentlessPolicy

* ContextualResponseRephraser
* LLMIntentClassifier

```yaml title="config.yml"
recipe: default.v1
language: en
pipeline:
- name: LLMCommandGenerator

policies:
- name: rasa.core.policies.flow_policy.FlowPolicy
- name: rasa_plus.ml.DocsearchPolicy
- name: RulePolicy
```
To use the rephrasing capability, you'll also need to add the following to your
endpoint configuration:
```yaml title="endpoints.yml"
nlg:
type: rasa_plus.ml.ContextualResponseRephraser
```
Additional configuration parameters are explained in detail in the documentation
pages for each of these components:
- [LLMCommandGenerator](../dialogue-understanding.mdx)
- [FlowPolicy](../policies.mdx#flow-policy)
- Docsearch
- [ContextualResponseRephraser](../contextual-response-rephraser.mdx)

## OpenAI Configuration

Expand All @@ -63,18 +44,10 @@ can be configured with different LLMs, but OpenAI is the default.
If you want to configure your assistant with a different LLM, you can find
instructions for other LLM providers further down the page.

### Prerequisites
Before beginning, make sure that you have:
- Access to OpenAI's services
- Ability to generate API keys for OpenAI

### API Token

The API token is a key element that allows your Rasa instance to connect and
communicate with OpenAI. This needs to be configured correctly to ensure seamless
interaction between the two.
The API token authenticates your requests to the OpenAI API.

To configure the API token, follow these steps:

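As a sketch of the end result, the key is typically exposed to the Rasa process through an environment variable (assuming the standard `OPENAI_API_KEY` convention used by OpenAI clients):

```shell
# Expose the OpenAI API key to the Rasa process.
# Replace the placeholder with your real key before starting the assistant.
export OPENAI_API_KEY="sk-your-api-key"
```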
@@ -110,16 +83,24 @@ To configure the API token, follow these steps:

### Model Configuration

Rasa allows you to use different models for different components. For example,
you might use one model for intent classification and another for rephrasing.
Many LLM providers offer multiple models through their API.
The model is specified individually for each component, so you can combine
several models if you want to. For instance, here is how you could configure
different models for the `LLMCommandGenerator` and the `DocsearchPolicy`:

To configure models per component, follow these steps described on the
pages for each component:
```yaml title="config.yml"
recipe: default.v1
language: en
pipeline:
- name: LLMCommandGenerator
model: "gpt-4"

policies:
- name: rasa.core.policies.flow_policy.FlowPolicy
- name: rasa_plus.ml.DocsearchPolicy
model: "gpt-3.5-turbo"

```
1. [intent classification instructions](../../nlu-based-assistants/components.mdx#llmintentclassifier)
2. [rephrasing instructions](../contextual-response-rephraser.mdx#llm-configuration)
3. [intentless policy instructions](../policies.mdx#flow-policy)
4. docsearch instructions
### Additional Configuration for Azure OpenAI Service
2 changes: 1 addition & 1 deletion docs/docs/concepts/components/llm-custom.mdx
@@ -1,6 +1,6 @@
---
id: llm-custom
sidebar_label: Customization
sidebar_label: Customizing LLM Components
title: Customizing LLM based Components
abstract:
---
119 changes: 82 additions & 37 deletions docs/docs/concepts/components/overview.mdx
@@ -1,62 +1,107 @@
---
id: overview
sidebar_label: Overview
title: Model Configuration
description: Learn about model configuration for Rasa.
sidebar_label: Configuration
title: Configuration
description: Configure your Rasa Assistant.
abstract: The configuration file defines the components and policies that your model will use to make predictions based on user input.
---

The recipe key allows for different types of config and model architecture.
Currently, "default.v1" and the experimental "graph.v1" recipes are supported.
import RasaProLabel from '@theme/RasaProLabel';
import RasaProBanner from "@theme/RasaProBanner";

:::info New in 3.5
You can customize many aspects of how Rasa works by modifying the `config.yml` file.

The config file now includes a new mandatory key `assistant_id` which represents the unique assistant identifier.
A minimal configuration for a [CALM](../../calm.mdx) assistant looks like this:

```yaml-rasa title="config.yml"
recipe: default.v1
language: en
assistant_id: 20230405-114328-tranquil-mustard
pipeline:
- name: LLMCommandGenerator
policies:
- name: rasa.core.policies.flow_policy.FlowPolicy
```

:::tip Default Configuration
For backwards compatibility, running `rasa init` will create an NLU-based assistant.
To create a CALM assistant with the right `config.yml`, add the
additional `--template` argument:

```bash
rasa init --template calm
```

:::

The `assistant_id` key must specify a unique value to distinguish multiple assistants in deployment.
The assistant identifier will be propagated to each event's metadata, alongside the model id.
Note that if the config file does not include this required key or the placeholder default value is not replaced, a random
assistant name will be generated and added to the configuration every time you run `rasa train`.
## The recipe, language, and assistant_id keys

The `recipe` key only needs to be modified if you want to use a [custom graph recipe](./graph-recipe.mdx).
The vast majority of projects should use the default value `"default.v1"`.

The language and pipeline keys specify the components used by the model to make NLU predictions.
The policies key defines the policies used by the model to predict the next action.
The `language` key is a 2-letter ISO code for the language your assistant supports.

The `assistant_id` key should be a unique value and allows you to distinguish multiple
deployed assistants.
This id is added to each event's metadata, together with the model id.
See [event brokers](../../production/event-brokers.mdx) for more information.
Note that if the config file does not include this required key or the placeholder default value
is not replaced, a random assistant name will be generated and added to the configuration
every time you run `rasa train`.
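For illustration, an event consumed from an event broker would then carry both identifiers. A rough sketch of such a payload (not the exact schema — field placement here is an assumption based on the description above):

```json
{
  "event": "user",
  "metadata": {
    "assistant_id": "20230405-114328-tranquil-mustard",
    "model_id": "..."
  }
}
```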

## Suggested Config

TODO: update

You can leave the pipeline and/or policies key out of your configuration file.
When you run `rasa train`, the Suggested Config feature will select a default configuration
for the missing key(s) to train the model.
## Pipeline

Make sure to specify the language key in your `config.yml` file with the
2-letter ISO language code.
The `pipeline` key lists the components which will be used to process and understand the messages
that end users send to your assistant.
In a CALM assistant, the output of your components pipeline is a list of [commands](../dialogue-understanding.mdx).

Example `config.yml` file:
The main component in your pipeline is the `LLMCommandGenerator`.
Here is what an example configuration looks like:

```yaml-rasa (docs/sources/data/configs_for_docs/example_for_suggested_config.yml)
```yaml-rasa title="config.yml"
pipeline:
- name: LLMCommandGenerator
llm:
model_name: "gpt-4"

request_timeout: 7
temperature: 0.0
```

The selected configuration will also be written as comments into the `config.yml` file,
so you can see which configuration was used. For the example above, the resulting file
might look like this:
The full set of configurable parameters is listed [here](../dialogue-understanding.mdx).

All components which make use of LLMs share common configuration parameters, which are listed [here](./llm-configuration.mdx).


```yaml-rasa (docs/sources/data/configs_for_docs/example_for_suggested_config_after_train.yml)
### Combining CALM and NLU-based components

<RasaProLabel />

<RasaProBanner />

Rasa Pro allows you to combine both NLU-based and CALM components in your pipeline.
See a full list of NLU-based components [here](../../nlu-based-assistants/components.mdx).
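As a sketch of what such a mixed pipeline could look like (the NLU component names below are taken from the linked reference; the exact combination depends on your assistant):

```yaml-rasa title="config.yml"
pipeline:
  - name: WhitespaceTokenizer
  - name: CountVectorsFeaturizer
  - name: DIETClassifier
  - name: LLMCommandGenerator
```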

## Policies

The `policies` key lists the [dialogue policies](../policies.mdx) your assistant will use
to progress the conversation.

```yaml-rasa title="config.yml"
policies:
- name: rasa.core.policies.flow_policy.FlowPolicy
```

If you like, you can then un-comment the suggested configuration for one or both of the
keys and make modifications. Note that this will disable automatic suggestions for this
key when training again.
As long as you leave the configuration commented out and don't specify any configuration
for a key yourself, a default configuration will be suggested whenever you train a new
model.
The [FlowPolicy](../policies.mdx#flow-policy) currently doesn't have any additional configuration parameters.

:::note nlu- or dialogue- only models
### Combining CALM and NLU-based dialogue policies

Only the default configuration for `pipeline` will be automatically selected
if you run `rasa train nlu`, and only the default configuration for `policies`
will be selected if you run `rasa train core`.
:::
<RasaProLabel />

<RasaProBanner />

Rasa Pro allows you to use both NLU-based and CALM dialogue policies in your assistant.
See a full list of NLU-based policies [here](../../nlu-based-assistants/policies.mdx).
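As a sketch, a `policies` section combining the CALM `FlowPolicy` with the NLU-based `RulePolicy` (the same pairing shown in the configuration example earlier on this page) could look like:

```yaml-rasa title="config.yml"
policies:
  - name: rasa.core.policies.flow_policy.FlowPolicy
  - name: RulePolicy
```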
1 change: 1 addition & 0 deletions docs/docs/concepts/dialogue-understanding.mdx
@@ -30,6 +30,7 @@ You can find all generated commands in the [command reference](#command-referenc

To use this component in your assistant, you need to add the
`LLMCommandGenerator` to your NLU pipeline in the `config.yml` file.
Read more about the `config.yml` file [here](./components/overview.mdx).

```yaml-rasa title="config.yml"
pipeline:
4 changes: 2 additions & 2 deletions docs/sidebars.js
@@ -63,9 +63,9 @@ module.exports = {
items: [
"concepts/components/overview",
"concepts/components/llm-configuration",
"concepts/components/llm-custom",
"concepts/components/custom-graph-components",
"concepts/components/graph-recipe",
"concepts/components/llm-custom",
"concepts/components/graph-recipe",
],
},
"concepts/policies", // TODO: ENG-538
