update max_tokens to max_completions_tokens #5677
Conversation
**Walkthrough**: The changes in this pull request involve renaming the `max_tokens` property to `max_completions_tokens` in the request payload and configuration settings.
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (3)
app/store/config.ts (1)

`130-132`: **LGTM! Consider adding a clarifying comment.**

The renaming of the validation method from `max_tokens` to `max_completions_tokens` in the `ModalConfigValidator` object is consistent with the PR objective and the earlier change in `DEFAULT_CONFIG`. The validation logic remains unchanged, which maintains existing behavior.

Consider adding a brief comment to explain the purpose of this validation method and the significance of the limits. For example:

```diff
 max_completions_tokens(x: number) {
+  // Validate and limit the maximum number of tokens for completions
   return limitNumber(x, 0, 512000, 1024);
 },
```

This addition would improve code clarity and maintainability.
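For context, the validator pattern under discussion can be sketched as a runnable snippet. Note that the signature and fallback behavior of `limitNumber` here are assumptions for illustration, not the project's actual implementation:

```typescript
// Assumed clamp helper: keep x inside [min, max], fall back to def
// when x is not a usable number.
function limitNumber(x: number, min: number, max: number, def: number): number {
  if (typeof x !== "number" || isNaN(x)) {
    return def;
  }
  return Math.min(max, Math.max(min, x));
}

const ModalConfigValidator = {
  max_completions_tokens(x: number) {
    // Validate and limit the maximum number of tokens for completions
    return limitNumber(x, 0, 512000, 1024);
  },
};

console.log(ModalConfigValidator.max_completions_tokens(600000)); // clamped to 512000
```

Values above the upper bound are clamped down, negative values are clamped to zero, and non-numeric input falls back to the default under these assumptions.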
app/client/platforms/openai.ts (2)

`235-240`: **LGTM: Special handling for vision models**

The addition of special handling for `max_completions_tokens` in vision models is a good improvement. It ensures that vision models have an appropriate token limit while maintaining flexibility with the model configuration.

Suggestion for minor improvement: consider extracting the magic number 4000 into a named constant for better readability and maintainability. For example:

```ts
const MIN_VISION_MODEL_TOKEN_LIMIT = 4000;
// ...
if (visionModel) {
  requestPayload["max_completions_tokens"] = Math.max(
    modelConfig.max_completions_tokens,
    MIN_VISION_MODEL_TOKEN_LIMIT,
  );
}
```

This change would make the code more self-documenting and easier to update if the minimum token limit for vision models changes in the future.
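The suggested refactor can be isolated into a self-contained sketch. The helper function below is hypothetical (the review excerpt sets the payload field inline); only the `Math.max` logic and the 4000 minimum come from the review:

```typescript
const MIN_VISION_MODEL_TOKEN_LIMIT = 4000;

// Hypothetical helper: vision models get at least the minimum limit,
// other models keep whatever the model config specifies.
function resolveMaxCompletionsTokens(
  configured: number,
  isVisionModel: boolean,
): number {
  return isVisionModel
    ? Math.max(configured, MIN_VISION_MODEL_TOKEN_LIMIT)
    : configured;
}

console.log(resolveMaxCompletionsTokens(1024, true)); // 4000: raised to the vision minimum
console.log(resolveMaxCompletionsTokens(8000, true)); // 8000: already above the minimum
console.log(resolveMaxCompletionsTokens(1024, false)); // 1024: non-vision models are untouched
```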
**Inconsistent Renaming of `max_tokens` Across the Codebase**

The renaming of `max_tokens` to `max_completions_tokens` in `app/client/platforms/openai.ts` is incomplete. The shell script output reveals numerous remaining instances of `max_tokens` in other parts of the codebase, which can lead to confusion and potential issues.

Affected Areas:

- `app/store/chat.ts`
- `app/client/platforms/alibaba.ts`
- `app/client/platforms/iflytek.ts`
- `app/client/platforms/google.ts`
- Multiple locale files (e.g., `app/locales/cn.ts`, `app/locales/fr.ts`, etc.)
- `app/client/platforms/anthropic.ts`
- `app/client/platforms/bytedance.ts`
- `app/masks/en.ts`, `app/masks/cn.ts`, `app/masks/tw.ts`, etc.

Please ensure all instances of `max_tokens` are uniformly renamed to maintain consistency and prevent potential discrepancies in API calls.

🔗 Analysis chain
Line range hint `1-500`: **Summary of changes and main concern**

This PR successfully renames `max_tokens` to `max_completions_tokens` in the `RequestPayload` interface and implements special handling for vision models. However, there's an inconsistency in how `max_completions_tokens` is used:

- It's commented out in the main `requestPayload` object with an unclear explanation.
- It's explicitly set for vision models.

This inconsistency could lead to confusion and potential issues in API calls. Please review and address the comments, especially regarding the commented-out `max_completions_tokens` in the main `requestPayload`.

To ensure consistency across the codebase, please run the script below to check for any remaining instances of `max_tokens`. This will help identify any places where the renaming might have been missed.
🏁 Scripts executed

The following scripts were executed for the analysis:

```shell
rg "max_tokens" --type ts
```

Length of output: 3959
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (2)
- app/client/platforms/openai.ts (2 hunks)
- app/store/config.ts (2 hunks)
🧰 Additional context used
🔇 Additional comments (2)
app/store/config.ts (1)
Line range hint `1-268`: **Overall LGTM! Ensure comprehensive testing.**

The changes in this file consistently rename `max_tokens` to `max_completions_tokens` in both the `DEFAULT_CONFIG` object and the `ModalConfigValidator`. These changes align well with the PR objective.

To ensure the changes don't introduce any regressions:

- Conduct thorough testing of all features that rely on the `max_completions_tokens` configuration.
- Update and run any existing unit tests that involve this configuration.
- Verify that the application's behavior remains unchanged when using different values for `max_completions_tokens`.

Consider running the following command to identify any test files that might need updating:

```shell
#!/bin/bash
# Description: Find test files that might need updating
# Test: Search for test files containing 'max_tokens'
rg --type-add 'test:*{test,spec}.{ts,js}' --type test 'max_tokens'
```

Update any identified test files to use `max_completions_tokens` instead of `max_tokens`.

app/client/platforms/openai.ts (1)
`66-66`: **LGTM: Renamed property in `RequestPayload` interface**

The change from `max_tokens` to `max_completions_tokens` in the `RequestPayload` interface is correct and aligns with the PR objective. The property remains optional and maintains the correct type.
```diff
@@ -65,7 +65,7 @@ export const DEFAULT_CONFIG = {
   providerName: "OpenAI" as ServiceProvider,
   temperature: 0.5,
   top_p: 1,
-  max_tokens: 4000,
+  max_completions_tokens: 4000,
```
💡 Codebase verification
Incomplete Refactor: 'max_tokens' Still in Use
The renaming of `max_tokens` to `max_completions_tokens` was not fully applied. The following files still reference `max_tokens` and need to be updated:
- `app/store/chat.ts`
- `app/masks/en.ts`
- `app/masks/cn.ts`
- `app/masks/tw.ts`
- `app/locales/*.ts`
- `app/client/platforms/moonshot.ts`
- `app/client/platforms/google.ts`
- `app/client/platforms/iflytek.ts`
- `app/client/platforms/bytedance.ts`
- `app/client/platforms/alibaba.ts`
- `app/client/platforms/anthropic.ts`
- `app/client/platforms/baidu.ts`
Please update all instances of `max_tokens` to `max_completions_tokens` to ensure consistency and prevent potential issues.
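One possible way to apply the rename mechanically is sketched below. This helper is hypothetical and not part of the PR; it assumes GNU sed (`-i` with no suffix argument), and locale and mask files likely deserve manual review rather than a blind replacement, so inspect the resulting diff before committing:

```shell
# Hypothetical helper: rename max_tokens to max_completions_tokens
# in every .ts file under the directory given as $1.
# Assumes GNU sed; always review the resulting diff before committing.
rename_max_tokens() {
  grep -rl --include='*.ts' 'max_tokens' "$1" \
    | xargs -r sed -i 's/\bmax_tokens\b/max_completions_tokens/g'
}
```

For example, `rename_max_tokens app/client/platforms` would limit the change to the platform adapters.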
🔗 Analysis chain
LGTM! Verify usage across the codebase.
The renaming of `max_tokens` to `max_completions_tokens` in the `DEFAULT_CONFIG` object is consistent with the PR objective. The value remains unchanged, which should maintain existing behavior.
To ensure this change doesn't break any existing functionality, please run the following script to check for any remaining usage of `max_tokens` in the codebase. If any occurrences are found, they may need to be updated to use `max_completions_tokens` instead.
🏁 Scripts executed

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Check for any remaining usage of 'max_tokens' in the codebase
# Test: Search for 'max_tokens' in all TypeScript and JavaScript files
rg --type-add 'script:*.{ts,js}' --type script 'max_tokens'
```

Length of output: 160

```shell
#!/bin/bash
# Description: Check for any remaining usage of 'max_tokens' in the codebase
# Test: Search for 'max_tokens' in all TypeScript and JavaScript files
rg --type-add 'script:*.{ts,js}' --type script 'max_tokens'
```

Length of output: 3812
```ts
// max_completions_tokens: Math.max(modelConfig.max_completions_tokens, 1024),
// Please do not ask me why not send max_completions_tokens, no reason, this param is just shit, I dont want to explain anymore.
```
**Clarify the decision to comment out `max_completions_tokens`**

The commented-out `max_completions_tokens` property and the accompanying comment raise some concerns:

- Commenting out this property contradicts the PR objective of renaming `max_tokens` to `max_completions_tokens`.
- The comment doesn't provide a clear explanation for why this parameter is problematic or shouldn't be sent.
- Removing this parameter might affect the API call behavior, potentially leading to unexpected results.

Could you please provide more context on why `max_completions_tokens` is considered problematic? If there are known issues with this parameter, it would be helpful to document them more clearly.

Consider the following options:

- If the parameter should be removed, update the `RequestPayload` interface to reflect this change.
- If the parameter is still needed in some cases, consider adding a condition to include it only when necessary.
- If there are specific issues with this parameter, add a more detailed comment explaining the problems and any workarounds.
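The second option above (send the parameter only when a usable value is configured) could look like the following minimal sketch. The builder function and its guard condition are assumptions for illustration, not the project's actual code:

```typescript
interface RequestPayload {
  messages: { role: string; content: string }[];
  max_completions_tokens?: number;
}

// Hypothetical builder: omit max_completions_tokens entirely unless a
// positive limit is configured, so the API never receives a zero or
// undefined token limit.
function buildPayload(
  messages: { role: string; content: string }[],
  tokenLimit?: number,
): RequestPayload {
  const payload: RequestPayload = { messages };
  if (tokenLimit !== undefined && tokenLimit > 0) {
    payload.max_completions_tokens = tokenLimit;
  }
  return payload;
}

const withLimit = buildPayload([{ role: "user", content: "hi" }], 1024);
const withoutLimit = buildPayload([{ role: "user", content: "hi" }]);
console.log("max_completions_tokens" in withLimit); // true
console.log("max_completions_tokens" in withoutLimit); // false
```

Making the omission explicit at the construction site documents the intent better than a commented-out line.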
```diff
@@ -63,7 +63,7 @@ export interface RequestPayload {
   presence_penalty: number;
   frequency_penalty: number;
   top_p: number;
-  max_tokens?: number;
+  max_completions_tokens?: number;
```
Only the o1 parameter changed, not all of them. How can you make the change like this? Look at the problem carefully.
A new PR has been opened, and this old one can be closed #5788
💻 变更类型 | Change Type
🔀 变更说明 | Description of Change
Changed `max_tokens` to `max_completions_tokens`
📝 补充信息 | Additional Information
Summary by CodeRabbit
New Features
Bug Fixes
- Renamed the token limit property to `max_completions_tokens` in request payloads and configuration settings.