update max_tokens to max_completions_tokens #5677

Closed · wants to merge 1 commit
app/client/platforms/openai.ts (13 changes: 8 additions & 5 deletions)
@@ -63,7 +63,7 @@ export interface RequestPayload {
   presence_penalty: number;
   frequency_penalty: number;
   top_p: number;
-  max_tokens?: number;
+  max_completions_tokens?: number;

Member commented:
Only the o1 parameter changed, not every model's. How can you change it like this? Look at the problem carefully.
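
One way to address this concern would be to pick the parameter name per model rather than renaming it globally. A minimal sketch of that idea (the helper is illustrative, not code from this PR; OpenAI's documented replacement for the o1 series is spelled max_completion_tokens):

// Sketch: choose the token-limit field per model instead of renaming it
// for every provider. The o1 check mirrors the isO1 flag used later in
// this file.
type TokenLimitField =
  | { max_completion_tokens: number } // o1-series models
  | { max_tokens: number }; // all other chat models

function tokenLimitFor(model: string, limit: number): TokenLimitField {
  const isO1 = model.startsWith("o1");
  return isO1 ? { max_completion_tokens: limit } : { max_tokens: limit };
}

// usage: Object.assign(requestPayload, tokenLimitFor(modelConfig.model, 4000));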

}

export interface DalleRequestPayload {
@@ -228,13 +228,16 @@ export class ChatGPTApi implements LLMApi {
   presence_penalty: !isO1 ? modelConfig.presence_penalty : 0,
   frequency_penalty: !isO1 ? modelConfig.frequency_penalty : 0,
   top_p: !isO1 ? modelConfig.top_p : 1,
-  // max_tokens: Math.max(modelConfig.max_tokens, 1024),
-  // Please do not ask me why not send max_tokens, no reason, this param is just shit, I dont want to explain anymore.
+  // max_completions_tokens: Math.max(modelConfig.max_completions_tokens, 1024),
+  // Please do not ask me why not send max_completions_tokens, no reason, this param is just shit, I dont want to explain anymore.

Contributor commented on lines +231 to +232:

⚠️ Potential issue

Clarify the decision to comment out max_completions_tokens

The commented-out max_completions_tokens property and the accompanying comment raise some concerns:

  1. Commenting out this property contradicts the PR objective of renaming max_tokens to max_completions_tokens.
  2. The comment doesn't provide a clear explanation for why this parameter is problematic or shouldn't be sent.
  3. Removing this parameter might affect the API call behavior, potentially leading to unexpected results.

Could you please provide more context on why max_completions_tokens is considered problematic? If there are known issues with this parameter, it would be helpful to document them more clearly.

Consider the following options:

  1. If the parameter should be removed, update the RequestPayload interface to reflect this change.
  2. If the parameter is still needed in some cases, consider adding a condition to include it only when necessary (a sketch of this option follows the list).
  3. If there are specific issues with this parameter, add a more detailed comment explaining the problems and any workarounds.
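
A sketch of option 2, assuming both fields remain optional on the payload type so that exactly one of them is populated per request (illustrative, not code from this PR; the field spelling follows OpenAI's documented max_completion_tokens, which differs from this PR's max_completions_tokens):

// Sketch: declare both limits as optional so a request can carry either
// the legacy field or the o1-series replacement, never both.
interface TokenLimitedPayload {
  model: string;
  max_tokens?: number; // legacy field, still accepted by most models
  max_completion_tokens?: number; // o1-series replacement
}

const payload: TokenLimitedPayload = { model: "gpt-4o", max_tokens: 4000 };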

};

-  // add max_tokens to vision model
+  // add max_completions_tokens to vision model
   if (visionModel) {
-    requestPayload["max_tokens"] = Math.max(modelConfig.max_tokens, 4000);
+    requestPayload["max_completions_tokens"] = Math.max(
+      modelConfig.max_completions_tokens,
+      4000,
+    );
   }
}

app/store/config.ts (4 changes: 2 additions & 2 deletions)
@@ -65,7 +65,7 @@ export const DEFAULT_CONFIG = {
   providerName: "OpenAI" as ServiceProvider,
   temperature: 0.5,
   top_p: 1,
-  max_tokens: 4000,
+  max_completions_tokens: 4000,

Contributor commented:

💡 Codebase verification

Incomplete Refactor: 'max_tokens' Still in Use

The renaming of max_tokens to max_completions_tokens was not fully applied. The following files still reference max_tokens and need to be updated:

  • app/store/chat.ts
  • app/masks/en.ts
  • app/masks/cn.ts
  • app/masks/tw.ts
  • app/locales/*.ts
  • app/client/platforms/moonshot.ts
  • app/client/platforms/google.ts
  • app/client/platforms/iflytek.ts
  • app/client/platforms/bytedance.ts
  • app/client/platforms/alibaba.ts
  • app/client/platforms/anthropic.ts
  • app/client/platforms/baidu.ts

Please update all instances of max_tokens to max_completions_tokens to ensure consistency and prevent potential issues.
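
If the rename really is meant to apply codebase-wide, a scripted pass could do it in one step. A sketch reusing the same rg file-type filter as the verification scripts below (assumes GNU sed; review git diff before committing):

#!/bin/bash
# Sketch: rewrite every occurrence, then re-run the search to confirm
# nothing is left. On macOS, use `sed -i ''` instead of `sed -i`.
rg -l --type-add 'script:*.{ts,js}' --type script 'max_tokens' \
  | xargs sed -i 's/max_tokens/max_completions_tokens/g'
rg --type-add 'script:*.{ts,js}' --type script 'max_tokens' || echo 'no occurrences remain'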

🔗 Analysis chain

LGTM! Verify usage across the codebase.

The renaming of max_tokens to max_completions_tokens in the DEFAULT_CONFIG object is consistent with the PR objective. The value remains unchanged, which should maintain existing behavior.

To ensure this change doesn't break any existing functionality, please run the following script to check for any remaining usage of max_tokens in the codebase:

If any occurrences are found, they may need to be updated to use max_completions_tokens instead.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for any remaining usage of 'max_tokens' in the codebase

# Test: Search for 'max_tokens' in all TypeScript and JavaScript files
rg --type-add 'script:*.{ts,js}' --type script 'max_tokens'

Length of output: 160


Script:

#!/bin/bash
# Description: Check for any remaining usage of 'max_tokens' in the codebase

# Test: Search for 'max_tokens' in all TypeScript and JavaScript files
rg --type-add 'script:*.{ts,js}' --type script 'max_tokens'

Length of output: 3812

presence_penalty: 0,
frequency_penalty: 0,
sendMemory: true,
@@ -127,7 +127,7 @@ export const ModalConfigValidator = {
   model(x: string) {
     return x as ModelType;
   },
-  max_tokens(x: number) {
+  max_completions_tokens(x: number) {
     return limitNumber(x, 0, 512000, 1024);
   },
   presence_penalty(x: number) {
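
For reference, limitNumber presumably clamps the configured value into [0, 512000] with 1024 as the fallback for invalid input. A sketch of the assumed semantics (the actual helper lives elsewhere in the repo):

// Assumed semantics of limitNumber(x, min, max, defaultValue): return the
// default for non-numeric input, otherwise clamp x into [min, max].
function limitNumber(x: number, min: number, max: number, defaultValue: number): number {
  if (typeof x !== "number" || isNaN(x)) return defaultValue;
  return Math.min(max, Math.max(min, x));
}

// e.g. limitNumber(-5, 0, 512000, 1024) === 0; limitNumber(NaN, 0, 512000, 1024) === 1024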