This proposal is a generalized solution for this issue.
Currently there are several proposals to add support for a variety of new features in specific LLMs. However, this is not practical: given that each vendor/model may implement custom features, it doesn't make sense to extend LMQL every time there is an innovation.
To solve this, and to avoid LMQL being left behind, I propose adding a new keyword to the language:

    "[OUTPUT]" where … using ModelCallConfig(**kwargs)
This keyword lets the user pass a ModelCallConfig that can define arbitrary arguments for the model call. The ModelCallConfig object achieves this by modifying the data that is about to be sent to the model.

To do this, we can define in the class a serializer and an "injector" for each parameter accepted by the config. The injector takes the full prompt data as input and inserts the serialized parameter in the appropriate place. The data is only passed through an injector if its associated parameter was actually passed, and the default serializer is the __str__ function.
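As a rough sketch of what this could look like (all names below are illustrative, not existing LMQL API):

    class ModelCallConfig:
        # Per-parameter hooks; vendor-specific subclasses override these.
        serializers = {}  # parameter name -> callable(value) -> serialized form
        injectors = {}    # parameter name -> callable(prompt_data, serialized) -> prompt_data

        def __init__(self, **kwargs):
            self.params = kwargs

        def apply(self, prompt_data):
            # A parameter is only injected if it was actually passed; the
            # default serializer is str (i.e. __str__), and the default
            # injector sets the value as a top-level field of the request body.
            for name, value in self.params.items():
                serialize = self.serializers.get(name, str)
                inject = self.injectors.get(
                    name, lambda data, v, key=name: {**data, key: v}
                )
                prompt_data = inject(prompt_data, serialize(value))
            return prompt_data

Under this scheme, a vendor-specific subclass only has to register its own serializers and injectors; the core language never needs to change.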
Example
Here is an example of usage for the previously mentioned issue:
    def add(a: int, b: int):
        '''
        Adds two numbers. This function takes two parameters, 'a' and 'b',
        and returns their sum.

        Parameters:
        - a (int): The first number.
        - b (int): The second number.
        '''
        return a + b

    "{:user} I need you to add two numbers together 1 and 2"
    "{:assistant}[answer]" using OpenAICallConfig(tools=[add])
    do_something(answer)
So a request that would normally look like this:
{
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "user",
"content": "I need you to add two numbers together 1 and 2"
}
]
}
would become:
{
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "user",
"content": "What is the weather like in Boston?"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "add",
"description": "Adds two numbers. This function takes two parameters, 'a' and 'b', and returns their sum.",
"parameters": {
"type": "object",
"properties": {
"a": {
"type": "int",
"description": "The first number."
},
"b": {
"type": "int",
"description": "The second number."
}
},
"required": []
}
}
}
],
"tool_choice": "auto"
}
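For illustration, here is one way an OpenAICallConfig could implement the tools parameter under the scheme above. The schema generation is deliberately simplified (it ignores per-parameter descriptions from the docstring), and none of these names exist in LMQL today:

    import inspect

    PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}

    def serialize_tools(functions):
        # Build OpenAI-style function schemas from plain Python callables.
        tools = []
        for fn in functions:
            props = {
                name: {"type": PY_TO_JSON.get(param.annotation, "string")}
                for name, param in inspect.signature(fn).parameters.items()
            }
            tools.append({
                "type": "function",
                "function": {
                    "name": fn.__name__,
                    "description": (fn.__doc__ or "").strip(),
                    "parameters": {
                        "type": "object",
                        "properties": props,
                        "required": list(props),
                    },
                },
            })
        return tools

    def inject_tools(prompt_data, serialized_tools):
        # Insert the serialized schemas into the outgoing request body.
        prompt_data["tools"] = serialized_tools
        prompt_data["tool_choice"] = "auto"
        return prompt_data

    class OpenAICallConfig(ModelCallConfig):
        serializers = {"tools": serialize_tools}
        injectors = {"tools": inject_tools}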
Other uses
Adaptive temperature
"{:user}Make a cool movie title for a movie""{:assistant}[TITLE]""{:user} now translate it to spanish""{:assistant}[TRANSLATION]"usingModelCallConfig(temperature=0)
Multi-modal prompts
    my_image = get_image()

    "{:user} What do you see in this image?"
    "{:assistant}[RESPONSE]" using MultiModalCallConfig(image_attachment=my_image)
Open question
Some model features may require custom handling of the model's response; in those cases it might be necessary to add a callback to the config, as sketched below.
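A minimal sketch of what that could look like, building on the ModelCallConfig sketch above (purely illustrative):

    class CallbackCallConfig(ModelCallConfig):
        def __init__(self, on_response=None, **kwargs):
            super().__init__(**kwargs)
            self.on_response = on_response  # optional user-supplied hook

        def handle_response(self, response):
            # The runtime would invoke this after the model call, letting the
            # config post-process the raw response (e.g. execute tool calls).
            if self.on_response is not None:
                return self.on_response(response)
            return response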