Chat with an Assistant #699
Replies: 4 comments
-
Hey @the-s-anton 👋🏻! Do you have any code to share? If not -- this might be a good starting point:

```ruby
class KayakAPI < Langchain::Tool::Base
  def get_flights(destination:, origin:, start_date:, end_date:)
    # Make the API call and return data in the format that will be passed back to the AI assistant
  end

  def get_hotels(destination:, start_date:, end_date:)
    # ...
  end
end
```

(Take a look at how the NewsRetriever tool is built; you could probably copy those 2 files, rename them, and start from there.)
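For illustration, here's a minimal sketch of what a `get_flights` body might look like -- the `KAYAK_ENDPOINT` constant and the query parameter names are made up, so swap in whatever flight API you're actually calling. The important part is that whatever string you return is what gets handed back to the LLM as the tool output:

```ruby
require "net/http"

def get_flights(destination:, origin:, start_date:, end_date:)
  # KAYAK_ENDPOINT is a placeholder -- point this at your real flight API
  uri = URI("#{KAYAK_ENDPOINT}/flights")
  uri.query = URI.encode_www_form(
    origin: origin,
    destination: destination,
    depart: start_date,
    return: end_date
  )
  Net::HTTP.get(uri) # return the raw JSON string as the tool output
end
```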
```ruby
# Instantiate the KayakAPI tool
kayak_api = KayakAPI.new

# Instantiate the LLM client
openai = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])

assistant = Langchain::Assistant.new(
  llm: openai,
  instructions: "You are a travel planner AI that ...", # Describe all of the desired behavior here, like "I don't want the Assistant to summarize the tool's output." etc.
  tools: [kayak_api]
)

# Add your user message
assistant.add_message content: "Help me plan a trip to Sarajevo for EuRuKo for 9/11-9/15. Find me some flights please"

# Inspect what kind of messages are in there now
assistant.messages

# Run it (send it to the LLM)
assistant.run

# If the assistant calls your KayakAPI tool, you can always auto-execute it with:
assistant.run auto_tool_execution: true
```

I hope this is a good starting point!
-
@andreibondarev Thank you for the response! I guess what I am missing here is the proper Tool usage. Currently, here is my setup:

```ruby
class Message::CreatedWorker
  include Sidekiq::Worker

  sidekiq_options queue: :messages, retry: 0

  INSTRUCTIONS = <<~INSTRUCTIONS
    ...
  INSTRUCTIONS

  def perform(message_id)
    message = Message.includes(:account, :user, chat: [:messages]).find(message_id)
    prepare_response(message)
  rescue => e
    handle_error(message, e)
  end

  private

  def prepare_response(message)
    llm_response = fetch_llm_response(message)
    response = message.chat.messages.create!(
      account: message.account,
      content: llm_response.last.content,
      role: :assistant,
      metadata: {
        tool_calls: llm_response.last.tool_calls,
        tool_call_id: llm_response.last.tool_call_id,
      },
    )
    broadcast_response(response)
  end

  def fetch_llm_response(message)
    llm = llm_client(message.account)
    llm_assistant = initialize_llm_assistant(llm, message)
    llm_assistant.run(auto_tool_execution: true)
  end

  def initialize_llm_assistant(llm, message)
    thread = ::Langchain::Thread.new
    assistant = ::Langchain::Assistant.new(
      llm:,
      thread:,
      instructions: INSTRUCTIONS,
      tools: [
        ::Tool::User::Flight.new(user: message.user, chat: message.chat),
      ],
    )
    messages = message.chat.messages.order(:created_at).select(:id, :content, :role)
    messages.each do |chat_message|
      assistant.add_message(content: chat_message.content, role: chat_message.role)
    end
    assistant
  end

  def llm_client(account)
    ::Langchain::LLM::OpenAI.new(
      api_key: account.llm_api_key || ENV.fetch("OPENAI_API_KEY", nil),
      default_options: {
        model: "gpt-4o",
        chat_completion_model_name: "gpt-4o",
      },
    )
  end

  def broadcast_response(response)
    Turbo::StreamsChannel.broadcast_append_to(
      response.chat.prefix_id,
      target: response.chat.prefix_id,
      partial: "chats/messages/message",
      locals: { message: response },
    )
  end

  def handle_error(message, error)
    ErrorHanding.process(class_name: self.class.name, error: error, record: message)
    # broadcast_error(message)
  end
end
```

Demo tool:

```ruby
class Tool::User::Flight < Langchain::Tool::Base
  NAME = "flight_booking"
  ANNOTATIONS_PATH = File.join(__dir__, "flight_booking.json")

  def initialize(user:, chat:)
    @user = user
    @chat = chat
  end

  def flight_booking(params)
    # broadcast -> "flight_booking #{params}"
    # invoke API to book a flight
    # return response
    # broadcast -> "flight_booking_response #{response}"
  end
end
```

I get proper AI output, and the Assistant understands the context of a thread. What I would like now is the interaction with the Tool itself. For example, suppose the Assistant decides to book a flight. In that case, I want to broadcast that a particular tool is being used (e.g. "Searching for a flight to Toronto") and then broadcast a review message with an action button to book the flight. In this case, the AI itself doesn't need to summarize anything. I also want to keep track of these interactions in my database.
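Roughly, this is what I am picturing inside the tool -- just a sketch on my end, where the `chats/messages/tool_status` partial, the `search_flights` call, and the `tool_interactions` association are all hypothetical (the broadcast mirrors my `broadcast_response` method above):

```ruby
def flight_booking(params)
  # Tell the user which tool is running, e.g. "Searching for a flight to Toronto"
  Turbo::StreamsChannel.broadcast_append_to(
    @chat.prefix_id,
    target: @chat.prefix_id,
    partial: "chats/messages/tool_status",
    locals: { text: "Searching for a flight to #{params["destination"]}" }
  )

  response = search_flights(params) # hypothetical API client call

  # Keep a record of the interaction in the database for follow-up conversations
  @chat.tool_interactions.create!(name: NAME, params: params, response: response)

  response.to_json
end
```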
-
@the-s-anton I think you've got a few different approaches. One is to split your main tool method into several small methods:

```ruby
def initiate_booking(params)
end

def confirm_with_user
end

def finalize_booking
end
```

Take a glance at this example to see how the instructions for completing an order are written out step by step: https://github.com/patterns-ai-core/ecommerce-ai-assistant-demo

You also have greater control with manually submitting tool output if you'd like, with:

```ruby
assistant.submit_tool_output tool_call_id:, output:
```
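For instance, here's a rough sketch of that manual flow -- assuming OpenAI-style tool_call hashes and that `flight_tool` is the `Tool::User::Flight` instance you registered; adjust to the message API you're actually on:

```ruby
require "json"

assistant.run # without auto_tool_execution: true, the run pauses once the LLM requests a tool

last_message = assistant.messages.last
last_message.tool_calls.each do |tool_call|
  args = JSON.parse(tool_call.dig("function", "arguments"))

  # This is where you'd broadcast "Searching for a flight to ..." and
  # persist the interaction, before anything goes back to the LLM.
  output = flight_tool.flight_booking(args)

  assistant.submit_tool_output(tool_call_id: tool_call["id"], output: output)
end

assistant.run # optional second run: only if you want the LLM to see (and summarize) the output
```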
-
@andreibondarev I found the solution; thank you so much for your time and answer!
-
I am trying to re-create ChatGPT-style chatting with an Assistant where you can see what the AI is doing.
In other words, I would want to have something like this:
We have a `thread` that I can use; however, what would be the best way to design this system? I assume I need to create a message record for each AI entry (please let me know if I am wrong), but how do I do this? Besides, I would want to display to the user which parameters the AI used to come up with the summary (similar to what OpenAI does).
I will use all of this information for the follow-up conversations.
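Something like this is what I picture after a run -- a sketch on my side, where `chat` is my own ActiveRecord association; I am not sure this is the intended usage, and deduplication is left out:

```ruby
assistant.run(auto_tool_execution: true)

# Persist one record per thread entry, keeping tool_calls so the UI
# can later show which parameters the AI used
assistant.messages.each do |msg|
  chat.messages.create!(
    role: msg.role,
    content: msg.content,
    metadata: { tool_calls: msg.tool_calls, tool_call_id: msg.tool_call_id }
  )
end
```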
Here is the screenshot from ChatGPT to showcase what I am trying to get:
I am creating a message record inside the Tool, which is not the correct approach. Moreover, in some cases, I don't want the Assistant to summarize the tool's output.
At this moment, I think that I am overthinking or overcomplicating something; any advice or guidance is highly appreciated.
Thank you