
Commit

Notebook tidy
Signed-off-by: Mark Sze <[email protected]>
marklysze committed Dec 5, 2024
1 parent f218c66 commit e69118e
Showing 1 changed file with 84 additions and 48 deletions.
132 changes: 84 additions & 48 deletions notebook/agentchat_structured_outputs.ipynb
@@ -7,16 +7,16 @@
"source": [
"# Structured output\n",
"\n",
"OpenAI offers a functionality for defining a structure of the messages generated by LLMs, AutoGen enables this functionality by propagating `response_format` passed to your agents to the underlying client.\n",
"OpenAI offers functionality for defining the structure of messages generated by LLMs; AG2 enables it by propagating the `response_format` passed to your agents down to the underlying client.\n",
"\n",
"For more information on structured outputs, see the [OpenAI guide](https://platform.openai.com/docs/guides/structured-outputs).\n",
"\n",
"\n",
"````{=mdx}\n",
":::info Requirements\n",
"Install `pyautogen`:\n",
"Install `ag2`:\n",
"```bash\n",
"pip install pyautogen\n",
"pip install ag2\n",
"```\n",
"\n",
"For more information, please refer to the [installation guide](/docs/installation/).\n",
@@ -31,14 +31,23 @@
"source": [
"## Set your API Endpoint\n",
"\n",
"The [`config_list_from_json`](https://ag2ai.github.io/ag2/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
"The [`config_list_from_json`](https://ag2ai.github.io/ag2/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a JSON file. OpenAI's Structured Outputs feature is supported by gpt-4o-mini-2024-07-18, gpt-4o-2024-08-06, and later models."
]
},
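To recall the shape such a configuration takes, the sketch below parses a hypothetical `OAI_CONFIG_LIST` entry with the standard library and applies the tag filter by hand — the model name, placeholder key, and tag are illustrative assumptions, and the notebook itself uses `config_list_from_json` with a `filter_dict` instead:

```python
import json

# Hypothetical OAI_CONFIG_LIST contents -- the model name, placeholder key,
# and tag below are illustrative, not taken from the notebook.
raw = """
[
    {"model": "gpt-4o-mini", "api_key": "sk-placeholder", "tags": ["structured"]}
]
"""

# Emulate the tag-based filtering that config_list_from_json's filter_dict performs.
config_list = [c for c in json.loads(raw) if "structured" in c.get("tags", [])]
print(config_list[0]["model"])  # -> gpt-4o-mini
```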
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/usr/local/lib/python3.11/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
" from .autonotebook import tqdm as notebook_tqdm\n"
]
}
],
"source": [
"import autogen\n",
"\n",
@@ -68,7 +77,7 @@
"source": [
"## Example: math reasoning\n",
"\n",
"Using structured output, we can enforce chain-of-thought reasoning in the model to output an answer in a structured, step-by-step way"
"Using structured output, we can enforce chain-of-thought reasoning in the model to output an answer in a structured, step-by-step way."
]
},
{
@@ -77,12 +86,12 @@
"source": [
"### Define the reasoning model\n",
"\n",
"First we will define the math reasoning model. This model will indirectly force the LLM to solve the posed math problems iteratively trough math reasoning steps."
"First, we will define the math reasoning model. This model will indirectly force the LLM to solve the posed math problems iteratively through math reasoning steps."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
@@ -113,7 +122,7 @@
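The reasoning model's source is collapsed at this point in the diff; from the class definitions that appear further down, it is the following Pydantic pair, reproduced here as a self-contained sketch (assuming Pydantic v2):

```python
from pydantic import BaseModel

# Reproduced from the class definitions visible later in this diff:
# each reasoning step pairs an explanation with its worked output.
class Step(BaseModel):
    explanation: str
    output: str

class MathReasoning(BaseModel):
    steps: list[Step]
    final_answer: str

reasoning = MathReasoning(
    steps=[Step(explanation="Subtract 7 from both sides.", output="8x = -30")],
    final_answer="x = -3.75",
)
print(reasoning.final_answer)  # -> x = -3.75
```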
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
@@ -143,9 +152,36 @@
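When `response_format` is set to a Pydantic model, the client derives a JSON schema that the API then enforces on the completion. A minimal illustration of that derivation, repeating the model so the snippet stands alone (again assuming Pydantic v2's `model_json_schema`):

```python
from pydantic import BaseModel

class Step(BaseModel):
    explanation: str
    output: str

class MathReasoning(BaseModel):
    steps: list[Step]
    final_answer: str

# The kind of schema the client hands to the API for response_format enforcement.
schema = MathReasoning.model_json_schema()
print(sorted(schema["properties"]))  # the top-level fields the LLM must emit
```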
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mUser_proxy\u001b[0m (to Math_solver):\n",
"\n",
"how can I solve 8x + 7 = -23\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mMath_solver\u001b[0m (to User_proxy):\n",
"\n",
"{\"steps\":[{\"explanation\":\"To isolate the term with x, we first subtract 7 from both sides of the equation.\",\"output\":\"8x + 7 - 7 = -23 - 7 -> 8x = -30.\"},{\"explanation\":\"Now that we have 8x = -30, we divide both sides by 8 to solve for x.\",\"output\":\"x = -30 / 8 -> x = -3.75.\"}],\"final_answer\":\"x = -3.75\"}\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
},
{
"data": {
"text/plain": [
"ChatResult(chat_id=None, chat_history=[{'content': 'how can I solve 8x + 7 = -23', 'role': 'assistant', 'name': 'User_proxy'}, {'content': '{\"steps\":[{\"explanation\":\"To isolate the term with x, we first subtract 7 from both sides of the equation.\",\"output\":\"8x + 7 - 7 = -23 - 7 -> 8x = -30.\"},{\"explanation\":\"Now that we have 8x = -30, we divide both sides by 8 to solve for x.\",\"output\":\"x = -30 / 8 -> x = -3.75.\"}],\"final_answer\":\"x = -3.75\"}', 'role': 'user', 'name': 'Math_solver'}], summary='{\"steps\":[{\"explanation\":\"To isolate the term with x, we first subtract 7 from both sides of the equation.\",\"output\":\"8x + 7 - 7 = -23 - 7 -> 8x = -30.\"},{\"explanation\":\"Now that we have 8x = -30, we divide both sides by 8 to solve for x.\",\"output\":\"x = -30 / 8 -> x = -3.75.\"}],\"final_answer\":\"x = -3.75\"}', cost={'usage_including_cached_inference': {'total_cost': 0.00015089999999999998, 'gpt-4o-mini-2024-07-18': {'cost': 0.00015089999999999998, 'prompt_tokens': 582, 'completion_tokens': 106, 'total_tokens': 688}}, 'usage_excluding_cached_inference': {'total_cost': 0}}, human_input=[])"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"user_proxy.initiate_chat(assistant, message=\"how can I solve 8x + 7 = -23\", max_turns=1, summary_method=\"last_msg\")"
]
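The solver's arithmetic checks out; substituting the returned answer back into the original equation confirms it:

```python
x = -3.75  # the solver's final answer
assert 8 * x + 7 == -23  # the left-hand side reproduces the right-hand side
print("verified")
```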
@@ -156,7 +192,7 @@
"source": [
"## Formatting a response\n",
"\n",
"When defining a `response_format`, you have the flexibility to customize how the output is parsed and presented, making it more user-friendly. To demonstrate this, we’ll add a format method to our `MathReasoning` model. This method will define the logic for transforming the raw JSON response into a more human-readable and accessible format."
"When defining a `response_format`, you have the flexibility to customize how the output is parsed and presented, making it more user-friendly. To demonstrate this, we’ll add a `format` method to our `MathReasoning` model. This method will define the logic for transforming the raw JSON response into a more human-readable and accessible format."
]
},
{
@@ -170,7 +206,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
@@ -181,7 +217,6 @@
" explanation: str\n",
" output: str\n",
"\n",
"\n",
"class MathReasoning(BaseModel):\n",
" steps: list[Step]\n",
" final_answer: str\n",
@@ -206,9 +241,41 @@
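The `format` method's body is collapsed at this point in the diff; judging from the rendered transcript in the later cell, a plausible reconstruction looks like this (the exact wording and spacing are inferred from the printed output, not taken from the hidden source):

```python
from pydantic import BaseModel

class Step(BaseModel):
    explanation: str
    output: str

class MathReasoning(BaseModel):
    steps: list[Step]
    final_answer: str

    def format(self) -> str:
        # Render "Step N: explanation / Output: ..." lines, then the answer,
        # mirroring the transcript printed by the chat cell.
        steps = "\n".join(
            f"Step {i + 1}: {s.explanation}\n  Output: {s.output}"
            for i, s in enumerate(self.steps)
        )
        return f"{steps}\n\nFinal Answer: {self.final_answer}"
```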
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mUser_proxy\u001b[0m (to Math_solver):\n",
"\n",
"how can I solve 8x + 7 = -23\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mMath_solver\u001b[0m (to User_proxy):\n",
"\n",
"Step 1: To isolate the term with x, we first subtract 7 from both sides of the equation.\n",
" Output: 8x + 7 - 7 = -23 - 7 -> 8x = -30.\n",
"Step 2: Now that we have 8x = -30, we divide both sides by 8 to solve for x.\n",
" Output: x = -30 / 8 -> x = -3.75.\n",
"\n",
"Final Answer: x = -3.75\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
},
{
"data": {
"text/plain": [
"ChatResult(chat_id=None, chat_history=[{'content': 'how can I solve 8x + 7 = -23', 'role': 'assistant', 'name': 'User_proxy'}, {'content': 'Step 1: To isolate the term with x, we first subtract 7 from both sides of the equation.\\n Output: 8x + 7 - 7 = -23 - 7 -> 8x = -30.\\nStep 2: Now that we have 8x = -30, we divide both sides by 8 to solve for x.\\n Output: x = -30 / 8 -> x = -3.75.\\n\\nFinal Answer: x = -3.75', 'role': 'user', 'name': 'Math_solver'}], summary='Step 1: To isolate the term with x, we first subtract 7 from both sides of the equation.\\n Output: 8x + 7 - 7 = -23 - 7 -> 8x = -30.\\nStep 2: Now that we have 8x = -30, we divide both sides by 8 to solve for x.\\n Output: x = -30 / 8 -> x = -3.75.\\n\\nFinal Answer: x = -3.75', cost={'usage_including_cached_inference': {'total_cost': 0.00015089999999999998, 'gpt-4o-mini-2024-07-18': {'cost': 0.00015089999999999998, 'prompt_tokens': 582, 'completion_tokens': 106, 'total_tokens': 688}}, 'usage_excluding_cached_inference': {'total_cost': 0}}, human_input=[])"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"user_proxy = autogen.UserProxyAgent(\n",
" name=\"User_proxy\",\n",
@@ -224,37 +291,6 @@
"\n",
"user_proxy.initiate_chat(assistant, message=\"how can I solve 8x + 7 = -23\", max_turns=1, summary_method=\"last_msg\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Function calling still works alongside structured output"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"@assistant.register_for_execution()\n",
"@assistant.register_for_llm(description=\"You can use this function call to solve addition\")\n",
"def add(x: int, y: int) -> int:\n",
" return x + y\n",
"\n",
"\n",
"user_proxy.initiate_chat(\n",
" assistant, message=\"solve 3 + 4 by calling appropriate function\", max_turns=1, summary_method=\"last_msg\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
