Replies: 2 comments
- following, since I also encounter the same issue.
- Got the same issue.
Description
When invoking the LLM directly, I get a much longer response than when using `chat`. Inspecting the chat output showed `ChatCompletionOutputUsage(completion_tokens=100, *)` in the response, and I'm wondering how the `completion_tokens` limit can be increased.
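
The 100-token cap reported in `ChatCompletionOutputUsage` usually comes from the default `max_tokens` of the chat-completion call rather than a hard server limit. Since the original example code is missing, here is a minimal sketch assuming the chat call goes through `huggingface_hub`'s `InferenceClient`; the model name and prompt are placeholders:

```python
from huggingface_hub import InferenceClient

# Placeholder model; substitute whichever endpoint your setup targets.
client = InferenceClient(model="HuggingFaceH4/zephyr-7b-beta")

response = client.chat_completion(
    messages=[{"role": "user", "content": "Summarize the history of Unicode."}],
    max_tokens=512,  # raise the completion-token budget above the small default
)

print(response.choices[0].message.content)
print(response.usage)  # e.g. ChatCompletionOutputUsage(completion_tokens=..., ...)
```

If the call instead goes through LangChain's `ChatHuggingFace`, the equivalent knob is typically `max_new_tokens` on the underlying `HuggingFaceEndpoint`, or `max_tokens` passed as a keyword at invoke time; the exact plumbing depends on the `langchain-huggingface` version in use.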