LMM: use _generate_oai_reply_from_client
#58
base: main
Conversation
NOTE: the test case is failing because of the API key. Let me know how to address it.
We can just skip the tests for this, if it has been tested locally on all OSes.
@BabyCNM I'm on Windows, let me know if it has already been tested.
@Hk669 Thanks! I have tested on macOS.
LGTM! This change simplifies the LMM agent a lot! Thanks for the contribution!
I couldn't test it yet; let me do it tomorrow. But it looks good to me.
The LMM tests should be skipped when skip-openai is specified. Please add a skip condition similar to the other contrib tests (for example, the sketch below), and add the test back to the contrib-openai CI.
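A minimal sketch of such a skip guard, assuming the `skip_openai` flag exposed by the contrib tests' `conftest.py` (the import path and the test name `test_lmm_reply` are illustrative assumptions, not the exact code requested):

```python
import os
import sys

import pytest

# Assumption: mirrors the conftest.py pattern used by other contrib tests,
# where `skip_openai` is derived from the --skip-openai pytest option.
sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
from conftest import skip_openai  # noqa: E402


@pytest.mark.skipif(skip_openai, reason="requested to skip openai tests")
def test_lmm_reply():
    # Body that actually calls the OpenAI API goes here; it only runs
    # when OpenAI tests are not skipped.
    ...
```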
Ok. Will take a look at this and get back soon. |
…ter presentation. (autogenhub#58)
* Improve assistant and chess examples to make them more robust and better presentation.
* type dep
* format
Why are these changes needed?
This PR has only a one-line change (as the major change) and a new test case. However, it addresses several issues mentioned before:
microsoft#2550
microsoft#3507
Details:
In 2024, we introduced the function `_generate_oai_reply_from_client` in ConversableAgent (microsoft#1575), which handles function calling, model dumping, reflection, and many other features. However, this change was never applied to the MultimodalConversableAgent, which has since caused many issues for the multimodal agent in function calling, group chat, and other places. The fix is simple.
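For context, the shape of the change is roughly the following (a sketch, not the exact diff; the signatures and attribute names follow my reading of ConversableAgent's own `generate_oai_reply` and may differ):

```python
# Sketch of generate_oai_reply in MultimodalConversableAgent after this PR.
# Instead of calling the OpenAI client directly and parsing the response by
# hand, the agent delegates to the shared helper on ConversableAgent, which
# already handles function calling, model dumping, reflection, etc.
def generate_oai_reply(self, messages=None, sender=None, config=None):
    client = self.client if config is None else config
    if client is None:
        return False, None
    if messages is None:
        messages = self._oai_messages[sender]

    # The one-line core of the fix: reuse the shared helper.
    extracted_response = self._generate_oai_reply_from_client(
        client, self._oai_system_message + messages, self.client_cache
    )
    return (False, None) if extracted_response is None else (True, extracted_response)
```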
Updates:
* Use `_generate_oai_reply_from_client` in the MultimodalConversableAgent's reply generation.
* Use `gpt-4-turbo` as the default model instead of `gpt-4-vision-preview`, as the preview model is now deprecated by OpenAI. (A minimal usage sketch follows this list.)
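A minimal usage sketch with the new default (the import path is autogen's contrib module as I understand it; the agent name, `llm_config` values, and API key are illustrative placeholders):

```python
# Sketch: constructing the multimodal agent after this PR; gpt-4-turbo
# replaces the deprecated gpt-4-vision-preview as the default model.
from autogen.agentchat.contrib.multimodal_conversable_agent import (
    MultimodalConversableAgent,
)

image_agent = MultimodalConversableAgent(
    name="image-explainer",  # illustrative name
    llm_config={
        "config_list": [{"model": "gpt-4-turbo", "api_key": "sk-..."}],  # placeholder key
        "temperature": 0.5,
    },
)
```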
Related issue number
Checks