update descriptions of flow policy and intentless policy #12880
Conversation
here you go 👍
docs/docs/concepts/policies.mdx (outdated)
> For more detailed information on defining flows, refer to the
> [flows specification](../concepts/flows.mdx).
> triggers new flows when needed.
this line makes it sound like it would start flows autonomously and regularly. It only does so itself in the case of intents and links. Maybe, to not introduce too much detail upfront: [...] handles state transitions, and in a few situations triggers new flows.
docs/docs/concepts/policies.mdx (outdated)
> There are many things a user might say *other than* providing the value of the slot your
> assistant has requested.
> The may clarify that they didn't want to send money after all, or ask a clarifying question,
they
redundant "or"
docs/docs/concepts/policies.mdx (outdated)
> [LLM Command Generator](../concepts/dialogue-understanding.mdx), to identify it as a
> correction and set the corresponding slots. Based on this, the `FlowPolicy`
> initiates the correction flow.
> - Flows can be automatically added by the `FlowPolicy` in the case of
I would highlight that these do not tend to be user-defined business logic flows, but shorter meta flows that smooth out the conversation.
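To make the distinction concrete, here is a minimal sketch of a user-defined business-logic flow, following the flow syntax the quoted docs link to (the flow name, slots, and response are illustrative, not from this PR):

```yaml
flows:
  transfer_money:
    description: Help the user send money to a recipient.
    steps:
      # business logic: collect the slots the transfer needs
      - collect: recipient
      - collect: amount
      # then confirm with a response defined in the domain
      - action: utter_transfer_complete
```

The meta flows the comment refers to (e.g. for corrections) are, by contrast, started by the `FlowPolicy` itself rather than written out like this.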
> ### High-stakes Topics
> In addition to chitchat, another common use case for the Intentless Policy is
> providing answers on high-stakes topics.
> For example, if users have questions about policies, legal terms, or guarantees,
> like: _"My situation is X, I need Y. Is that covered by my policy?"_
> In these cases the DocSearchPolicy's RAG approach is risky.
> Even with the relevant content present in the prompt, a RAG approach allows an
> LLM to make interpretations of documents.
> The answer users get will vary depending on the exact phrasing of their question,
> and may change when the underlying model changes.
>
> For high-stakes topics, it is safest to send a self-contained, vetted answer
> rather than relying on a generative model.
> The Intentless Policy provides that capability in a CALM assistant.
hmm, not without risk either. You might have responses like:
- "If you have X, that is covered"

but also:
- "yes"
- "no"

There's not a 100% chance that you'll get the right one. Hard to tell, but the additional context could make RAG safer in some situations.
yes I see what you're saying. That was the point about "self-contained" answers, but agree that if "yes" and "no" are candidates for intentless then you do run this risk. So there's definitely a design aspect to it.
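The design aspect discussed here can be sketched in domain responses: make every candidate self-contained, so that none is ambiguous if the policy selects it for a slightly different question. A hedged sketch (hypothetical response names and text, not from this PR):

```yaml
responses:
  # Self-contained: still correct even if matched to a paraphrased question.
  utter_flood_coverage:
    - text: "Flood damage is only covered under the premium plan. Please see section 4.2 of your policy document for the exact terms."

  # Risky as an intentless candidate on a high-stakes topic:
  # a bare "Yes." is only correct for some of the questions it could match.
  # utter_affirm:
  #   - text: "Yes."
```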
> ### Interjections
> When a flow reaches a `collect` step and your assistant asks the user for information,
> your user might ask a clarifying question, refuse to answer, or otherwise interject in
> the continuation of the flow.
> In these cases, the Intentless Policy can contextually select appropriate `responses`,
> while afterwards allowing the flow to continue.
clarifying questions might be more the domain of doc search - hard to tell
An example here would help the reader to picture what sort of a clarifying question we are talking about.
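As a concrete illustration of the kind of clarifying question meant here: during a money-transfer flow, the assistant's `collect` step asks for an account number and the user interjects with "Why do you need that?". A response the Intentless Policy could select (the response name and text are illustrative, not from this PR):

```yaml
responses:
  utter_explain_account_number:
    - text: "I need the account number to check which transfer limits apply to you. It is only used for this transfer."
```

After sending it, the flow would continue with the original `collect` step.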
> ## Adding the Intentless Policy to your bot
> The Intentless Policy is used to send `responses` in a conversation which
> are not part of a flow.
"conversation which are not part of a flow" is confusing to understand. Maybe: "The Intentless Policy is used to send responses to user messages that are not modelled by an existing flow"?
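For reference, enabling the policy alongside the `FlowPolicy` is a `config.yml` change; a minimal sketch (the exact policy name or module path may differ between Rasa versions, so treat this as an assumption to check against your installation):

```yaml
# config.yml
policies:
  - name: FlowPolicy        # handles the business-logic flows
  - name: IntentlessPolicy  # handles responses outside any flow
```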
> In these cases the DocSearchPolicy's RAG approach is risky.
> Even with the relevant content present in the prompt, a RAG approach allows an
> LLM to make interpretations of documents.
> The answer users get will vary depending on the exact phrasing of their question,
> and may change when the underlying model changes.
Not sure if I follow this part completely.
Kudos, SonarCloud Quality Gate passed! 0 Bugs. No Coverage information. The version of Java (11.0.20) you have used to run this analysis is deprecated and we will stop accepting it soon. Please update to at least Java 17.
🚀 A preview of the docs has been deployed at the following URL: https://12880--rasahq-docs-rasa-v2.netlify.app/docs/rasa