Merge pull request #12848 from RasaHQ/ENG-553-docs-cleanups

Eng 553 docs cleanups

m-vdb authored Sep 22, 2023
2 parents 145ce06 + 369f3fb commit fdb8cf7
Showing 47 changed files with 478 additions and 494 deletions.
150 changes: 75 additions & 75 deletions CHANGELOG.mdx

Large diffs are not rendered by default.

6 changes: 3 additions & 3 deletions docs/docs/action-server/knowledge-base-actions.mdx
@@ -134,7 +134,7 @@ In this section:
- we will annotate `mention` entities so that our model detects indirect mentions of objects like “the
first one”

-- we will use [synonyms](../building-classic-assistants/training-data-format.mdx#synonyms) extensively
+- we will use [synonyms](../nlu-based-assistants/training-data-format.mdx#synonyms) extensively

For the bot to understand that the user wants to retrieve information from the knowledge base, you need to define
a new intent. We will call it `query_knowledge_base`.
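
For context, a minimal sketch of how such an intent could be declared in NLU training data (the example utterances here are invented for illustration, not taken from this diff):

```yaml-rasa
nlu:
- intent: query_knowledge_base
  examples: |
    - what restaurants can you recommend?
    - list some restaurants
    - does the first one have outside seating?
```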
@@ -170,7 +170,7 @@ In addition to adding a variety of training examples for each query type,
you need to specify and annotate the following entities in your training examples:

- `object_type`: Whenever a training example references a specific object type from your knowledge base, the object type should
-be marked as an entity. Use [synonyms](../building-classic-assistants/training-data-format.mdx#synonyms) to map e.g. `restaurants` to `restaurant`, the correct
+be marked as an entity. Use [synonyms](../nlu-based-assistants/training-data-format.mdx#synonyms) to map e.g. `restaurants` to `restaurant`, the correct
object type listed as a key in the knowledge base.

- `mention`: If the user refers to an object via “the first one”, “that one”, or “it”, you should mark those terms
@@ -404,7 +404,7 @@ The ordinal mention mapping maps a string, such as “1”, to the object in a l
object at index `0`.

As the ordinal mention mapping does not, for example, include an entry for “the first one”,
-it is important that you use [Entity Synonyms](../building-classic-assistants/training-data-format.mdx#synonyms) to map “the first one” in your NLU data to “1”:
+it is important that you use [Entity Synonyms](../nlu-based-assistants/training-data-format.mdx#synonyms) to map “the first one” in your NLU data to “1”:

```yaml-rasa
intents:
```
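
Since the rest of that example is collapsed in this view, here is a hedged sketch of the entity synonym syntax the sentence above relies on (a minimal illustration, not necessarily the collapsed content):

```yaml-rasa
nlu:
# maps "the first one" to the canonical value "1" in extracted entities
- synonym: "1"
  examples: |
    - the first one
```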
4 changes: 2 additions & 2 deletions docs/docs/action-server/validation-action.mdx
@@ -23,7 +23,7 @@ can be set or updated outside of a form context.
:::note
`ValidationAction` is intended for extracting slots **outside** the context of a form.
It will ignore extraction and validation methods for any slots that have a form specified under the
-[slot mapping's `conditions`](../building-classic-assistants/domain.mdx#mapping-conditions).
+[slot mapping's `conditions`](../nlu-based-assistants/domain.mdx#mapping-conditions).
It will not run these methods when the specified form is active nor when no form is active.
Please extend the [`FormValidationAction`](./validation-action.mdx#formvalidationaction-class) class to apply
custom slot mappings only within the context of a form.
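
For context, mapping conditions tie a slot mapping to a specific form. A minimal sketch of such a slot in the domain (the slot and form names are invented):

```yaml-rasa
slots:
  cuisine:
    type: text
    mappings:
    - type: from_entity
      entity: cuisine
      # this mapping only applies while restaurant_form is active
      conditions:
      - active_loop: restaurant_form
```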
@@ -212,7 +212,7 @@ the custom action should inherit from `FormValidationAction` rather than from `ValidationAction`.
Since a custom action extending `FormValidationAction` runs on every user turn only as long as the form is active, it is
not required to use mapping conditions in this case.

-To learn more about how to implement this class, see Forms [Advanced Usage](../building-classic-assistants/forms.mdx#advanced-usage).
+To learn more about how to implement this class, see Forms [Advanced Usage](../nlu-based-assistants/forms.mdx#advanced-usage).

### `FormValidationAction` class implementation

41 changes: 19 additions & 22 deletions docs/docs/calm.mdx
@@ -1,22 +1,21 @@
---
id: calm
-sidebar_label: CALM
-title: CALM
-description: CALM is an LLM-native approach to building reliable conversational AI.
+sidebar_label: Conversational AI with Language Models
+title: Conversational AI with Language Models
+description: Conversational AI with Language Models (CALM) is an LLM-native approach to building reliable conversational AI.
---

-import useBaseUrl from '@docusaurus/useBaseUrl';
+import useBaseUrl from "@docusaurus/useBaseUrl";

-CALM is an LLM-native approach to building reliable conversational AI.
+Conversational AI with Language Models (CALM) is an LLM-native approach to building reliable conversational AI.

If you are familiar with building NLU-based chatbots in Rasa or another platform,
go [here](#calm-compared-to-nlu-based-assistants) to understand how CALM differs.

If you are familiar with building LLM Agents and want to understand how that
approach relates to CALM, go [here](#calm-compared-to-llm-agents).

The CALM approach is best described as a set of interacting modules.

<img
alt="high level outline of the CALM approach"
@@ -25,39 +24,37 @@ The CALM approach is best described as a set of interacting modules.

## CALM compared to NLU-based assistants

The diagram above might look familiar if you have built NLU-based assistants before.
In that paradigm, an NLU module interprets one user message at a time and represents its meaning by predicting
intents and entities.
The dialogue manager then decides what to do based on the NLU output.

## CALM compared to LLM Agents

"LLM Agents" refers to the idea of using LLMs as a _reasoning engine_,
as in the [Reasoning and Acting](https://arxiv.org/abs/2210.03629) (ReAct) framework.

-CALM uses an LLM to figure out *how the user wants to progress the conversation*.
+CALM uses an LLM to figure out _how the user wants to progress the conversation_.
It does not use an LLM to guess what the correct set of steps are to complete that process.
This is the primary difference between the two approaches.

In CALM, business logic is described by a `Flow` and executed precisely using the `FlowPolicy`.
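
For context, a rough sketch of what a flow looks like (the flow name, collected slots, and custom action here are invented, and the exact step syntax may vary between Rasa versions):

```yaml-rasa
flows:
  transfer_money:
    description: Help the user send money to a recipient they name.
    steps:
      # the assistant asks for each slot in turn, then runs the action
      - collect: recipient
      - collect: amount
      - action: action_execute_transfer
```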

So CALM uses an LLM to reason about the user side of the conversation, but not the system side.
In a ReAct-style agent, an LLM is used for both.

When each of these is appropriate:

| LLM Agents                                                                   | CALM                                                          |
| ---------------------------------------------------------------------------- | ------------------------------------------------------------- |
| allow users to make open-ended use of tools / API endpoints                  | known business logic for a finite set of skills / user goals  |
| you have an effectively infinite set of possible tasks and open-ended goals  | business logic needs to be strictly enforced                  |
| you want to give the end-user of the bot full autonomy                       | limits to what end-users are allowed to do                    |

For business use cases, CALM has advantages over LLM Agents:

1. Your business logic is explicitly written down, can be easily edited, and you can be sure that it will be followed faithfully
2. Your business logic is known up front so you don't need to rely on an LLM to guess it. This avoids doing multiple LLM calls in series in response to a single user message, which is infeasibly slow for most applications.
3. You can make your business logic arbitrarily complex and not worry about whether the LLM “forgets” a step.
4. You can validate every piece of information a user provides (i.e. every slot value) as the conversation progresses. So you don’t have to wait until all the information is collected and the API response gives you an error.
-5. End users cannot use prompt injection to override your business logic. *A Language model as a reasoning engine* is a fundamentally insecure proposition - see the [OWASP top 10](https://owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-slides-v1_0.pdf).
+5. End users cannot use prompt injection to override your business logic. _A Language model as a reasoning engine_ is a fundamentally insecure proposition - see the [OWASP top 10](https://owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-slides-v1_0.pdf).
14 changes: 7 additions & 7 deletions docs/docs/command-line-interface.mdx
@@ -158,7 +158,7 @@ The following arguments can be used to configure the training process:
```

-See the section on [data augmentation](./building-classic-assistants/policies.mdx#data-augmentation) for info on how data augmentation works
+See the section on [data augmentation](./nlu-based-assistants/policies.mdx#data-augmentation) for info on how data augmentation works
and how to choose a value for the flag. Note that `TEDPolicy` is the only policy affected by data augmentation.

See the following section on [incremental training](#incremental-training) for more information about the `--epoch-fraction` argument.
@@ -217,7 +217,7 @@ rasa interactive

This will first train a model and then start an interactive shell session.
You can then correct your assistant's predictions as you talk to it.
-If [`UnexpecTEDIntentPolicy`](./building-classic-assistants/policies.mdx#unexpected-intent-policy) is
+If [`UnexpecTEDIntentPolicy`](./nlu-based-assistants/policies.mdx#unexpected-intent-policy) is
included in the pipeline, [`action_unlikely_intent`](./concepts/default-actions.mdx#action_unlikely_intent)
can be triggered at any conversation turn. Subsequently, the following message will be displayed:

@@ -339,7 +339,7 @@ The following arguments can be used to configure your Rasa server:
```

For more information on the additional parameters, see [Model Storage](./production/model-storage.mdx).
-See the Rasa [HTTP API](./http-api.mdx) page for detailed documentation of all the endpoints.
+See the Rasa [HTTP API](./production/http-api.mdx) page for detailed documentation of all the endpoints.

## rasa run actions

@@ -398,8 +398,8 @@ rasa test nlu
```

You can find more details on specific arguments for each testing type in
-[Evaluating an NLU Model](./building-classic-assistants/testing-your-assistant.mdx#evaluating-an-nlu-model) and
-[Evaluating a Dialogue Management Model](./building-classic-assistants/testing-your-assistant.mdx#evaluating-a-dialogue-model).
+[Evaluating an NLU Model](./nlu-based-assistants/testing-your-assistant.mdx#evaluating-an-nlu-model) and
+[Evaluating a Dialogue Management Model](./nlu-based-assistants/testing-your-assistant.mdx#evaluating-a-dialogue-model).

The following arguments are available for `rasa test`:

@@ -542,7 +542,7 @@ If no arguments are specified, the default domain path (`domain.yml`) will be used.
This command will also back-up your 2.0 domain file(s) into a different `original_domain.yml` file or
directory labeled `original_domain`.

-Note that the slots in the migrated domain will contain [mapping conditions](./building-classic-assistants/domain.mdx#mapping-conditions) if these
+Note that the slots in the migrated domain will contain [mapping conditions](./nlu-based-assistants/domain.mdx#mapping-conditions) if these
slots are part of a form's `required_slots`.

:::caution
@@ -592,7 +592,7 @@ rasa data validate stories
```

:::note
-Running `rasa data validate` does **not** test if your [rules](./building-classic-assistants/rules.mdx) are consistent with your stories.
+Running `rasa data validate` does **not** test if your [rules](./nlu-based-assistants/rules.mdx) are consistent with your stories.
However, during training, the `RulePolicy` checks for conflicts between rules and stories. Any such conflict will abort training.

Also, if you use end-to-end stories, then this might not capture all conflicts. Specifically, if two user inputs
2 changes: 1 addition & 1 deletion docs/docs/concepts/actions.mdx
@@ -18,7 +18,7 @@ API call, or to query a database for example.

## Forms

-[Forms](../building-classic-assistants/forms.mdx) are a special type of custom action, designed to handle business logic. If you have
+[Forms](../nlu-based-assistants/forms.mdx) are a special type of custom action, designed to handle business logic. If you have
any conversation designs where you expect the assistant to ask for a specific set of
information, you should use forms.
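
For context, a minimal sketch of a form declared in the domain (the form and slot names are invented):

```yaml-rasa
forms:
  restaurant_form:
    # the assistant asks for each required slot until all are filled
    required_slots:
      - cuisine
      - number_of_people
```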
