Standardizing Tool Calling with Chat Models #20343
Replies: 12 comments 22 replies
-
This is great ✨🥵
-
Great, finally a standardised input and output!
-
It's so useful!
-
Cool! ty!
-
Thanks! Nice feature for scaling apps more easily.
-
Can I ask,
-
Got the following issue running:

```python
llm = ChatGroq(temperature=0, model_name="mixtral-8x7b-32768",
               groq_api_key=env_settings.groq_api_key)
agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
    agent=agent, tools=tools, max_iterations=5, handle_parsing_errors=True, verbose=True)
```
-
Thanks for the updated function. I am able to use my custom tools with OpenAI and MistralAI without any issues. However, I also want to implement chat memory. I was able to get it working with the OpenAI model but not with Mistral. Do you have any suggestions? When I run with MistralAI, I get the following error.
-
Are there any requirements on the prompt? And does create_tool_calling_agent/tool_calls have any effect on the prompt of an agent?
-
I want to make my agent return structured outputs,
-
Love the standardised interface! Just wondering if there is a timeline for other chat models to implement bind_tools? Is this planned for a specific release or dependent on the providers? I'm restricted to a non-implemented provider, so just wondering how long I will need to put my Agent dreams on hold :D
-
I am trying to get it to work with ChatLlamaCpp, but it seems that I hit weird behaviour and an error. Here is the code:

```python
import os
from typing import Dict

from langchain import hub
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain.agents.format_scratchpad.tools import format_to_tool_messages
from langchain.agents.output_parsers.tools import ToolsAgentOutputParser
from langchain_community.chat_models import ChatLlamaCpp
from langchain_core.messages import HumanMessage, SystemMessage, AIMessage
from langchain_core.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
)
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import RunnablePassthrough, RunnableLambda
from langchain_core.tools import tool

os.environ["LOCAL_MODEL_PATH"] = r"C:\Users\<username>\lm-studio\models\NousResearch\Hermes-2-Pro-Llama-3-8B-GGUF\Hermes-2-Pro-Llama-3-8B-Q8_0.gguf"


@tool("multiply")
def multiply(first_int: int, second_int: int) -> int:
    """Multiply two integers together. Responds to questions like what's {first_int} times {second_int}?"""
    return first_int * second_int


@tool
def add(first_int: int, second_int: int) -> int:
    """Add two integers."""
    return first_int + second_int


@tool
def exponentiate(base: int, exponent: int) -> int:
    """Exponentiate the base to the exponent power."""
    return base**exponent


def create_tool_calling_agent_local(llm_with_tools, prompt):
    agent = (
        RunnablePassthrough.assign(
            agent_scratchpad=lambda x: format_to_tool_messages(x["intermediate_steps"])
        )
        | prompt
        | llm_with_tools
        | ToolsAgentOutputParser()
    )
    return agent


if __name__ == "__main__":
    local_model = os.environ.get("LOCAL_MODEL_PATH")
    print("Loading local model at path = ", local_model)
    llm = ChatLlamaCpp(
        temperature=0,
        model_path=local_model,
        n_ctx=8192,
        # n_gpu_layers=8,
        n_batch=300,  # Should be between 1 and n_ctx; consider the amount of VRAM in your GPU.
        max_tokens=512,
        n_threads=15,
        repeat_penalty=1.5,
        top_p=1,
        verbose=True,
    )
    # prompt = hub.pull("homanp/superagent")
    prompt = hub.pull("hwchase17/openai-tools-agent")
    # tools = [multiply, add, exponentiate]
    tools = [add]
    llm_with_tools = llm.bind_tools(
        tools=[multiply],
        tool_choice={"type": "function", "function": {"name": "multiply"}},
    )
    msg = llm_with_tools.invoke("what is the product of 100 and 500")
    print(msg)
    agent = create_tool_calling_agent_local(llm_with_tools, prompt)
    tool_out = agent.invoke({"input": "what is 2 times 3", "intermediate_steps": []})
    print(tool_out)
    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
    agent_executor.invoke({"input": "what is the sum of 100 and 500"})
```

The error trace is:
-
We're happy to introduce a more standardized interface for using tools:

- ChatModel.bind_tools(): a method for attaching tool definitions to model calls.
- AIMessage.tool_calls: an attribute on the AIMessage returned from the model for easily accessing the tool calls the model decided to make.
- create_tool_calling_agent(): an agent constructor that works with ANY model that implements bind_tools and returns tool_calls.

We'll share some sample code snippets below. You can read more about this interface on our blog or in the python docs. We'd love to hear feedback from you or any issues that you encounter in this discussion!

Bind Tools:
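A minimal sketch of attaching a tool with bind_tools (this assumes langchain-openai is installed and OPENAI_API_KEY is set; any chat model that implements bind_tools works the same way):

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI


# Tool definitions can be given as pydantic classes; the docstring and
# field descriptions are what the model sees.
class Multiply(BaseModel):
    """Multiply two integers together."""

    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")


llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)

# Attach the tool definition to every call made with the returned runnable.
llm_with_tools = llm.bind_tools([Multiply])
```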
AIMessage.tool_calls:
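A sketch of reading the parsed calls off the response, continuing from the snippet above:

```python
msg = llm_with_tools.invoke("What is 3 * 12?")

# .tool_calls is a list of dicts, each carrying the tool's name, its parsed
# arguments, and an id, e.g.:
# [{"name": "Multiply", "args": {"a": 3, "b": 12}, "id": "call_..."}]
print(msg.tool_calls)
```

Because the same tool_calls shape comes back regardless of provider, any model exposing these two pieces plugs into the new agent constructor. A sketch of tying it together, reusing llm from above; the executable @tool function and the question are illustrative, and the prompt is the hwchase17/openai-tools-agent handle from the hub:

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.tools import tool


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers together."""
    return a * b


# A chat prompt that includes the agent_scratchpad placeholder the agent needs.
prompt = hub.pull("hwchase17/openai-tools-agent")

agent = create_tool_calling_agent(llm, [multiply], prompt)
agent_executor = AgentExecutor(agent=agent, tools=[multiply], verbose=True)
agent_executor.invoke({"input": "What is 3 * 12?"})
```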
Please let us know if you have any feedback!