Max Token Limits Incorrect? #7243
Replies: 1 comment
200k is the context window size, and 4k is the maximum number of output tokens. I think that's the limit you're seeing.
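To make the distinction concrete, here is a minimal sketch of how the two limits interact. The numbers are illustrative assumptions (a 200k context window and a 4k output cap, as mentioned above), not values read from any model's actual configuration:

```python
# Illustrative values: context window vs. per-request output cap.
CONTEXT_WINDOW = 200_000   # total tokens the model can attend to (prompt + completion)
MAX_OUTPUT_TOKENS = 4_096  # cap on tokens the model may generate in one request

def max_completion_budget(prompt_tokens: int) -> int:
    """Tokens available for the completion, given a prompt of `prompt_tokens`."""
    remaining = CONTEXT_WINDOW - prompt_tokens
    # The completion is bounded by both the remaining window and the output cap.
    return max(0, min(remaining, MAX_OUTPUT_TOKENS))

print(max_completion_budget(150_000))  # 4096 -> the output cap binds
print(max_completion_budget(199_000))  # 1000 -> the remaining window binds
```

So the "Max Tokens" slider in a model's options controls output length only; the 200k figure is how much total prompt-plus-completion context the model can hold, which is why the slider stops well below it.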
Self Checks
1. Is this request related to a challenge you're experiencing? Tell me about your story.
Hey guys, for some reason many models in both the Dify Cloud and Self-Hosted instances have Max Token limits far short of their actual capabilities. Surely I'm just missing something. The Claude models, for example, should support 200k tokens, yet when I adjust the model options in the chatbot builder, it only lets me go up to 8k. The GPT-4 models are also well short of what they should be, aside from the 32k model, which appropriately allows up to 32k. What's going on?
2. Additional context or comments
No response