[Bugfix]: allow extra fields in requests to openai compatible server #10463
Conversation
It would be nice if the client could know what arguments were ignored. I believe this was set up to avoid silent ignores.
@mgoin yes, the original idea was indeed to have a way to make the user aware of, e.g., a typo in one of their fields, which would explain why the response they get is not what they wanted. However, in practice it seems to break compatibility with lots of systems that have been developed against the official OpenAI API. Let me investigate, though: based on the issue linked from a comment on this pydantic issue (which describes exactly our current need, i.e. telling the user they added an unknown field while still allowing the extra fields), it seems there might be a way to do it.
@mgoin, based on pydantic #3455 I have added a warning log listing the extra fields that were ignored. I agree that with this solution, the information about ignored fields won't be directly present in the request response, but at least if the user performing the request has access to the vLLM server logs, debugging can still be done efficiently. Not sure what more could be done while still matching the OpenAI API behavior.
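For reference, a minimal sketch (assuming pydantic v2; not the exact code from this PR) of how extra fields can be allowed while still surfacing them in the server logs:

```python
import logging

from pydantic import BaseModel, ConfigDict, model_validator

logger = logging.getLogger(__name__)


class OpenAIBaseModel(BaseModel):
    # Mirror the official OpenAI API: accept unknown fields instead of
    # rejecting the whole request with a 400 error.
    model_config = ConfigDict(extra="allow")

    @model_validator(mode="after")
    def _warn_on_extra_fields(self):
        # With extra="allow", unrecognized fields end up in `model_extra`;
        # log them so that e.g. a typo in a field name can still be
        # debugged from the server logs.
        if self.model_extra:
            logger.warning(
                "The following fields were present in the request "
                "but ignored: %s",
                list(self.model_extra.keys()),
            )
        return self
```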
I agree that logging a warning in this case is more reasonable. Thanks for improving the user experience!
Please fix the test failure though.
Thanks for adding the server-side log. After thinking further on it, we generally strive to follow the behavior of OpenAI, so it makes sense to let the requests through.
@DarkLight1337 happy to report that all tests passed 😊. @mgoin what could be possible though is, for example, to add a server-side option to explicitly reject requests with extra fields.
I think that is a good idea, and certainly would be useful when testing the server in CI!
Ok, I’ll prepare something and I’ll ping you on it to see if that would work for all the use cases you have in mind. |
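A hypothetical sketch of what such an opt-in strict mode could look like (the environment variable name is illustrative, not from this PR or a follow-up):

```python
import os

from pydantic import BaseModel, ConfigDict

# Illustrative toggle: reject extra fields only when explicitly requested,
# e.g. in CI, while keeping the OpenAI-compatible default of allowing them.
STRICT_VALIDATION = os.getenv("VLLM_STRICT_REQUEST_VALIDATION", "0") == "1"


class OpenAIBaseModel(BaseModel):
    model_config = ConfigDict(extra="forbid" if STRICT_VALIDATION else "allow")
```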
There have been several reports of `BadRequestError: Error code: 400` due to the vLLM OpenAI-compatible server not accepting extra arguments in the requests' payloads, while the same requests would work with the official OpenAI API, e.g. `truncate_prompt_tokens` (#6890).

Similarly, code completion plugins like CodeCompanion are currently not working with models served by the vLLM OpenAI-compatible server, failing with the same `BadRequestError: Error code: 400` error, while the plugin works fine with the official OpenAI API.

The reason for the error is that the vLLM OpenAI-compatible server currently forbids extra fields in its base pydantic model (introduced by #4355), while OpenAI's native pydantic definitions all inherit from a BaseModel that explicitly allows extra fields in its model_config.
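A minimal reproduction of the failure mode (using a hypothetical request model, assuming pydantic v2):

```python
from pydantic import BaseModel, ConfigDict, ValidationError


class StrictRequest(BaseModel):
    # This is what the vLLM base model effectively did before this PR.
    model_config = ConfigDict(extra="forbid")
    prompt: str


try:
    # Any field the model does not declare makes validation fail, which
    # the server then surfaces to the client as a 400 BadRequestError.
    StrictRequest(prompt="Hello", some_unknown_field=42)
except ValidationError as err:
    print(err)  # "Extra inputs are not permitted" for some_unknown_field
```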
This PR updates the vLLM `OpenAIBaseModel` to allow extra fields, like the official OpenAI API.

cc: @DarkLight1337 @simon-mo
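With this change, a request like the following is accepted instead of returning a 400 (the model name and extra field are placeholders; `extra_body` is the openai-python escape hatch for sending non-standard fields):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="my-model",  # placeholder for the served model name
    messages=[{"role": "user", "content": "Hello!"}],
    # An unrecognized field: previously rejected with BadRequestError 400,
    # now ignored (with a warning in the vLLM server logs).
    extra_body={"some_unknown_field": 42},
)
print(completion.choices[0].message.content)
```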