---

copyright:

lastupdated: "2023-08-03"

subcollection: assistant

---
{{site.data.keyword.attribute-definition-list}}
Documentation for the classic {{site.data.keyword.assistant_classic_short}} experience has moved. For the most up-to-date version, see Dialog creation workflow{: external}. {: attention}
# Development process
{: #dev-process}
Use {{site.data.keyword.assistant_classic_short}} to leverage AI as you build, deploy, and incrementally improve a conversational assistant. {: shortdesc}
## Development workflow
{: #dev-process-workflow}
The typical workflow for an assistant project includes the following steps:
1. Define a narrow set of key customer needs that you want the assistant to address on your behalf, including any business processes that it can initiate or complete for your customers. Start small.

1. Create intents that represent the customer needs that you identified in the previous step. For example, intents such as `#about_company` or `#place_order`.

1. Build a dialog that detects the defined intents and addresses them, either with simple responses or with a dialog flow that collects more information first.

1. Define any entities that are needed to more clearly understand the user's meaning. For example, you might add an `@product` entity that you can use with the `#place_order` intent to understand what product the customer wants to buy.

   Mine existing intent user examples for common entity value mentions. Using annotations to define entities captures not only the text of the entity value, but also the context in which the entity value is typically used in a sentence.

1. Test each function that you add to the assistant in the "Try it" pane, incrementally, as you go.

1. When you have a working assistant that can successfully handle key tasks, add an integration that deploys the assistant to a development environment. Test the deployed assistant and make refinements.

1. After you build an effective assistant, take a snapshot of the dialog skill and save it as a version.

   Saving a version when you reach a development milestone gives you something that you can go back to if subsequent changes to the skill decrease its effectiveness. See Creating skill versions.

1. Deploy the version of the assistant into a test environment, and test it.

   If you use the preview, you can share the URL with others to get their help with testing.

1. Use metrics from the Analytics tab to find areas for improvement, and then make adjustments.

   If you need to test alternative approaches to addressing an issue, create a version for each solution so that you can deploy and test each one independently and compare the results.

1. When you are happy with the performance of your assistant, deploy the best version of the assistant into a production environment.

1. Monitor the logs from conversations that users have with the deployed assistant.

   You can view the logs for a version of a skill that is running in production from the Analytics tab of a development version of the skill. As you find misclassifications or other issues, you can correct them in the development version of the skill, and then deploy the improved version to production after testing. See Improving across assistants for more details.
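To make the intent and entity concepts from the steps above concrete, the following sketch mimics them with simple keyword matching. This is an illustrative toy only, not how {{site.data.keyword.assistant_classic_short}} works internally (the product uses a trained natural language model), and the intent examples and entity synonyms are hypothetical.

```python
# Toy sketch of intents (what the customer wants) and entities (details
# that refine the request). Hypothetical data; not the product's algorithm.
from typing import Optional

INTENTS = {
    "#about_company": ["who are you", "tell me about your company"],
    "#place_order": ["i want to buy", "place an order", "order a product"],
}

# An @product entity: each value has a list of synonyms.
PRODUCT_ENTITY = {
    "shirt": ["shirt", "tee", "t-shirt"],
    "mug": ["mug", "cup"],
}

def detect_intent(utterance: str) -> Optional[str]:
    """Return the intent whose examples share the most words with the input."""
    words = set(utterance.lower().split())
    best, best_score = None, 0
    for intent, examples in INTENTS.items():
        score = max(len(words & set(ex.split())) for ex in examples)
        if score > best_score:
            best, best_score = intent, score
    return best

def extract_entities(utterance: str) -> list:
    """Return (@product, value) pairs for any synonym mentioned in the input."""
    text = utterance.lower()
    return [("@product", value)
            for value, synonyms in PRODUCT_ENTITY.items()
            if any(s in text for s in synonyms)]

print(detect_intent("I want to buy a shirt"))     # -> #place_order
print(extract_entities("I want to buy a shirt"))  # -> [('@product', 'shirt')]
```

Note how the `@product` entity adds information that the `#place_order` intent alone cannot capture: the intent identifies the goal, and the entity identifies which product the order concerns.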
The process of analyzing the logs and improving the dialog skill is ongoing. Over time, you might want to expand the tasks that the assistant can handle for you. Customer needs also change. As new needs arise, the metrics generated by your deployed assistants can help you identify and address them in subsequent iterations.
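As a sketch of the ongoing log-analysis loop, the snippet below triages a batch of conversation log records to surface intents that are often matched with low confidence, which are good candidates for adding training examples. The record shape and field names here are invented for illustration; they are not the product's log format.

```python
# Hypothetical log-triage helper: count how often each intent was the top
# match but with low confidence. Record fields are invented for this sketch.
from collections import Counter

def low_confidence_intents(logs, threshold=0.5):
    """Return (intent, count) pairs, most frequent first, for low-confidence matches."""
    counts = Counter()
    for record in logs:
        intent = record.get("intent")
        if intent and record.get("confidence", 1.0) < threshold:
            counts[intent] += 1
    return counts.most_common()

# Example batch of (invented) log records from a deployed assistant.
logs = [
    {"input": "i wanna return this", "intent": "#place_order", "confidence": 0.41},
    {"input": "buy a mug", "intent": "#place_order", "confidence": 0.92},
    {"input": "company info", "intent": "#about_company", "confidence": 0.38},
    {"input": "return policy?", "intent": "#place_order", "confidence": 0.33},
]

print(low_confidence_intents(logs))
# -> [('#place_order', 2), ('#about_company', 1)]
```

A report like this might suggest, for example, that return-related questions are being misclassified as `#place_order`, pointing to a missing intent that you could add in the development version of the skill.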