create new notebook example #74
base: main
Conversation
uses custom chain
Can we set env vars here the way we do in other examples, please? That way, we can easily build tests from this.
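A minimal sketch of what this comment suggests. The variable names (`GALILEO_CONSOLE_URL`, `GALILEO_API_KEY`) are assumptions; the real names should match the other examples in the repo:

```python
import os

# Set credentials up front so the notebook can run unattended in tests.
# NOTE: these environment variable names are illustrative assumptions.
os.environ.setdefault("GALILEO_CONSOLE_URL", "https://console.galileo.example")
os.environ.setdefault("GALILEO_API_KEY", "your-api-key")
```

With the credentials in the environment, a test harness can execute the notebook end to end without interactive setup.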
"cell_type": "code", | ||
"source": [ | ||
"for i, prompt in enumerate(inputs):\n", | ||
" if prompt[\"role\"] == \"system\":\n", |
Can you fix this? And include the entire history below?
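One way to address this, sketched with hypothetical inputs (the real notebook builds these in its own cells): accumulate the full conversation history and pass it to each LLM call, rather than only the current prompt.

```python
# Illustrative user turns; the notebook defines its own `inputs` list.
inputs = [
    {"role": "user", "content": "Hi, can you tell me the delivery date for my order?"},
    {"role": "user", "content": "My order id is 1234."},
]

history = []
for prompt in inputs:
    history.append(prompt)
    # The real notebook would call the model with the entire history, e.g.:
    # response = client.chat.completions.create(model=model, messages=history)
```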
" tool_output = get_delivery_date(order_id)\n", | ||
" wf.add_tool(\n", | ||
" input=response.choices[0].message.tool_calls[0].function.arguments,\n", | ||
" output=f\"Your delivery date is {tool_output}.\",\n", |
I would log the output of the tool as-is (without putting it into a "Your delivery date is..." string).
Then separately, if "Your delivery date is {tool_output}." is what you're showing to the user, log that as a step.
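A runnable sketch of the suggested split, using a stub in place of the Galileo workflow object (the real `wf` and `get_delivery_date` come from the notebook):

```python
import json

class StubWorkflow:
    """Stand-in for the notebook's Galileo workflow object."""
    def __init__(self):
        self.steps = []
    def add_tool(self, input, output):
        self.steps.append(("tool", input, output))

def get_delivery_date(order_id):  # stand-in for the notebook's tool
    return "2024-07-01"

wf = StubWorkflow()
order_id = "1234"
tool_args = json.dumps({"order_id": order_id})

tool_output = get_delivery_date(order_id)

# Log the raw tool output, not the user-facing sentence.
wf.add_tool(input=tool_args, output=tool_output)

# The user-facing string is built separately and would be logged
# as its own step if it's what the user actually sees.
user_message = f"Your delivery date is {tool_output}."
```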
" )\n", | ||
"\n", | ||
" # On LLM steps, also run Protect\n", | ||
" if output_message.content is not None:\n", |
There's an if output_message.content is not None check already. Combine the two? Or rearrange the code?
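An illustrative restructuring: run Protect and use its result inside a single content check instead of two separate `if output_message.content is not None` blocks. The `Msg` class and `protect_galileo` here are stubs with an assumed return shape:

```python
class Msg:
    content = "Your delivery date is 2024-07-01."

def protect_galileo(user_input, model_output):
    """Stand-in for the notebook's helper; return shape is assumed."""
    return {"status": "pass", "text": model_output}

output_message = Msg()
prompt = {"content": "When will my order arrive?"}

# Single check covering both the Protect call and any follow-up logic.
final_output = None
if output_message.content is not None:
    response_protect = protect_galileo(prompt["content"], output_message.content)
    final_output = response_protect["text"]
```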
" )\n", | ||
"\n", | ||
" # Conclude the workflow\n", | ||
" wf.conclude(output={\"output\": output_message.content})\n", |
You're not using the output of the tool, or the output of protect here.
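A sketch of concluding with the text that was actually produced and checked, rather than the raw `output_message.content`. The workflow object is stubbed here; the real `wf` is the Galileo workflow from the notebook:

```python
class StubWorkflow:
    """Stand-in for the notebook's Galileo workflow object."""
    def __init__(self):
        self.concluded = None
    def conclude(self, output):
        self.concluded = output

wf = StubWorkflow()
tool_output = "2024-07-01"                                # from get_delivery_date(...)
protected_text = f"Your delivery date is {tool_output}."  # Protect-checked text

# Conclude with the tool-derived, Protect-checked output.
wf.conclude(output={"output": protected_text})
```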
"\n", | ||
" # On LLM steps, also run Protect\n", | ||
" if output_message.content is not None:\n", | ||
" response_protect = protect_galileo(prompt[\"content\"], output_message.content)\n", |
Do something with the response? e.g. if protect triggers, alter the response?
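One possible shape for acting on the Protect response. The response schema (`status`, `text`) is an assumption for illustration; the real fields depend on your Protect ruleset:

```python
def protect_galileo(user_input, model_output):
    """Stand-in for the notebook's helper; return shape is assumed."""
    return {"status": "triggered", "text": "I'm sorry, I can't share that."}

output_content = "raw model output"
response_protect = protect_galileo("user question", output_content)

# If Protect fires, replace the model output with the safe alternative
# before showing anything to the user or concluding the workflow.
if response_protect["status"] == "triggered":
    output_content = response_protect["text"]
```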
"source": [ | ||
"# Conversational Flow\n", | ||
"inputs = [\n", | ||
"{\"role\": \"system\", \"content\": \"You are a helpful customer support assistant. Use the supplied tools to assist the user.\"},\n", |
I would not include the system message as an input here. It's not an input from 'the user'; it's an instruction / your prompt template.
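The separation the reviewer asks for can be sketched like this, with an illustrative user turn: keep the system instruction out of `inputs` and prepend it only when building the actual LLM request.

```python
# The instruction lives outside the user-turn list.
system_message = {
    "role": "system",
    "content": "You are a helpful customer support assistant. Use the supplied tools to assist the user.",
}

# Only genuine user turns go in `inputs`.
inputs = [
    {"role": "user", "content": "Hi, can you tell me the delivery date for my order?"},
]

# Prepend the system message when constructing the request messages.
messages = [system_message] + inputs
```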
Uses a custom chain; per feedback, add it as an example and link out instead of putting it in the docs: https://github.com/rungalileo/docs-mintlify/pull/86