
update descriptions of flow policy and intentless policy #12880

Merged
3 commits merged into docs-dm2 from docs-dm2-update-intentless on Oct 18, 2023

Conversation

@amn41 (Contributor) commented Sep 28, 2023

Proposed changes:

Status (please check what you already did):

  • added some tests for the functionality
  • updated the documentation
  • updated the changelog (please check changelog for instructions)
  • reformat files using black (please check Readme for instructions)

@amn41 amn41 requested a review from dakshvar22 September 28, 2023 10:58
@twerkmeister (Contributor) left a comment:

here you go 👍


For more detailed information on defining flows, refer to the
[flows specification](../concepts/flows.mdx).
triggers new flows when needed.
A contributor commented:

this line makes it sound like it would start flows autonomously and regularly. It only does it itself in the case of intents and links. Maybe to not introduce too much detail upfront: [...] handles state transitions, and in a few situations triggers new flows.
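
For orientation, the flows under discussion are the YAML-defined business logic that the `FlowPolicy` executes. A minimal sketch of a flow definition; the flow id, slot names, and response below are illustrative, not taken from this PR:

```yaml
flows:
  transfer_money:
    description: Help the user send money to a recipient.
    steps:
      - collect: recipient               # FlowPolicy asks for this slot and waits for an answer
      - collect: amount
      - action: utter_transfer_complete  # runs once both slots are filled
```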


There are many things a user might say *other than* providing the value of the slot your
assistant has requested.
The may clarify that they didn't want to send money after all, or ask a clarifying question,
A contributor commented:

they

Another contributor commented:

redundant "or"

[LLM Command Generator](../concepts/dialogue-understanding.mdx), to identify it as a
correction and set the corresponding slots. Based on this, the `FlowPolicy`
initiates the correction flow.
- Flows can be automatically added by the `FlowPolicy` in the case of
A contributor commented:

I would highlight that these do not tend to be user-defined business logic flows but shorter meta flows that smooth out the conversation
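
The "meta flows" mentioned here are Rasa's built-in conversation repair patterns, which are themselves flows and can be overridden in your own flow files. A hedged sketch, assuming the `pattern_correction` flow name and the `action_correct_flow_slot` built-in action as described in Rasa's conversation repair docs (verify both against your Rasa version):

```yaml
flows:
  pattern_correction:
    description: Handle the user changing their mind about a previously filled slot.
    steps:
      - action: action_correct_flow_slot  # built-in action that applies the corrected slot value
```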

Comment on lines +361 to +374
### High-stakes Topics
In addition to chitchat, another common use case for the Intentless Policy is
providing answers on high-stakes topics.
For example, if users have questions about policies, legal terms, or guarantees,
like: _"My situation is X, I need Y. Is that covered by my policy?"_
In these cases the DocSearchPolicy's RAG approach is risky.
Even with the relevant content present in the prompt, a RAG approach allows an
LLM to make interpretations of documents.
The answer users get will vary depending on the exact phrasing of their question,
and may change when the underlying model changes.

For high-stakes topics, it is safest to send a self-contained, vetted answer
rather than relying on a generative model.
The Intentless Policy provides that capability in a CALM assistant.
A contributor commented:

hmm, not without risk either. You might have responses

  • If you have X, that is covered

but also

  • yes
  • no

not 100% chance that you'll get the right one. Hard to tell, but the additional context could make RAG safer in some situations.

@amn41 (author) replied:

yes I see what you're saying. That was the point about "self-contained" answers, but agree that if yes and no are candidates for intentless then you do run this risk. So there's definitely a design aspect to it
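
To make the "self-contained, vetted answer" point concrete: a sketch of a domain response the Intentless Policy could select verbatim. The response name and wording are invented for illustration:

```yaml
responses:
  utter_flood_coverage:
    - text: >-
        Flood damage is covered under the Premium plan only, up to the limit
        stated in your policy schedule. Basic plans do not include flood cover;
        please check your policy document to confirm.
```

Because the whole answer lives in one response, the assistant can only send it verbatim or not at all; the residual risk, as the thread above notes, is selecting the wrong response, not paraphrasing the right one.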

Comment on lines +376 to +381
### Interjections
When a flow reaches a `collect` step and your assistant asks the user for information,
your user might ask a clarifying question, refuse to answer, or otherwise interject in
the continuation of the flow.
In these cases, the Intentless Policy can contextually select appropriate `responses`,
while afterwards allowing the flow to continue.
A contributor commented:

clarifying questions might be more the domain of doc search - hard to tell

Another contributor commented:

An example here would help the reader to picture what sort of a clarifying question we are talking about.
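
As a concrete instance of the example the reviewers ask for: suppose a `collect: account_number` step asks "What is your account number?" and the user interjects with "why do you need that?". A sketch of a response the Intentless Policy could select before the flow re-asks; the response name and wording are illustrative:

```yaml
responses:
  utter_why_account_number:
    - text: >-
        I need your account number to make sure the transfer comes from the
        right account. It is only used for this transaction.
```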


## Adding the Intentless Policy to your bot
The Intentless Policy is used to send `responses` in a conversation which
are not part of a flow.
A contributor commented:

"conversation which are not part of a flow" is confusing to understand.
The Intentless Policy is used to send responses to user messages that are not modelled by an existing flow?
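
For reference, enabling the policy is a small `config.yml` change. A sketch assuming a CALM assistant that already uses the `FlowPolicy` (check the policy names against your Rasa version):

```yaml
policies:
  - name: FlowPolicy        # executes the business logic defined in your flows
  - name: IntentlessPolicy  # selects free-standing responses for messages no flow models
```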

Comment on lines +366 to +370
In these cases the DocSearchPolicy's RAG approach is risky.
Even with the relevant content present in the prompt, a RAG approach allows an
LLM to make interpretations of documents.
The answer users get will vary depending on the exact phrasing of their question,
and may change when the underlying model changes.
A contributor commented:

Not sure if I follow this part completely.


@sonarqubecloud commented:

Kudos, SonarCloud Quality Gate passed!

  • Bugs: 0 (rating A)
  • Vulnerabilities: 0 (rating A)
  • Security Hotspots: 0 (rating A)
  • Code Smells: 0 (rating A)
  • No Coverage information
  • No Duplication information

Warning: The version of Java (11.0.20) you have used to run this analysis is deprecated and we will stop accepting it soon. Please update to at least Java 17.

@github-actions (Contributor) commented:

🚀 A preview of the docs has been deployed at the following URL: https://12880--rasahq-docs-rasa-v2.netlify.app/docs/rasa

@amn41 amn41 merged commit 634a3a7 into docs-dm2 Oct 18, 2023
97 checks passed
@amn41 amn41 deleted the docs-dm2-update-intentless branch October 18, 2023 12:59