fix model leak issue #5883
base: main
Conversation
@code-october is attempting to deploy a commit to the NextChat Team on Vercel. A member of the Team first needs to authorize it.
Walkthrough: This pull request introduces changes across multiple API files to update the logic for checking model availability; the import of `isModelAvailableInServer` is replaced with the new `isModelNotavailableInServer` function.
Actionable comments posted: 3
🧹 Outside diff range and nitpick comments (13)
app/api/xai.ts (1)
Line range hint `91-103`: Strengthen error handling for model availability checks. While the model availability check is implemented correctly, the error handling could be improved. Currently, if JSON parsing fails, the model check is silently skipped, which could potentially allow unauthorized model access.
Consider this safer implementation:
```diff
 try {
   const clonedBody = await req.text();
   fetchOptions.body = clonedBody;

   const jsonBody = JSON.parse(clonedBody) as { model?: string };

+  if (!jsonBody?.model) {
+    return NextResponse.json(
+      { error: true, message: "Model specification is required" },
+      { status: 400 }
+    );
+  }

   if (
     isModelNotavailableInServer(
       serverConfig.customModels,
       jsonBody.model,
       ServiceProvider.XAI,
     )
   ) {
     return NextResponse.json(
       {
         error: true,
         message: `you are not allowed to use ${jsonBody.model} model`,
       },
       { status: 403 }
     );
   }
 } catch (e) {
-  console.error(`[XAI] filter`, e);
+  return NextResponse.json(
+    { error: true, message: "Invalid request body" },
+    { status: 400 }
+  );
 }
```

app/api/bytedance.ts (2)
Line range hint `91-104`: Consider security improvements in model availability check. While the core logic is correct, there are a few security considerations:
- The error message reveals the specific model name, which could aid in model enumeration attacks
- The error handling could be more robust for malformed requests
Consider applying these improvements:
```diff
 if (
   isModelNotavailableInServer(
     serverConfig.customModels,
     jsonBody?.model as string,
     ServiceProvider.ByteDance as string,
   )
 ) {
   return NextResponse.json(
     {
       error: true,
-      message: `you are not allowed to use ${jsonBody?.model} model`,
+      message: "Requested model is not available",
     },
     {
       status: 403,
     },
   );
 }
```
Line range hint `105-107`: Strengthen error handling for model availability checks. The current error handling only logs errors and continues execution, which could potentially allow bypass of model checks if the request body is malformed.
Consider failing closed on parse errors:
```diff
   }
 } catch (e) {
   console.error(`[ByteDance] filter`, e);
+  return NextResponse.json(
+    {
+      error: true,
+      message: "Invalid request format",
+    },
+    {
+      status: 400,
+    },
+  );
 }
```

app/api/iflytek.ts (2)
Line range hint `92-96`: Improve type safety in model availability check. While the implementation correctly addresses the model leak issue, there are some type safety improvements that could be made:
Consider applying these changes:
```diff
 if (
   isModelNotavailableInServer(
     serverConfig.customModels,
-    jsonBody?.model as string,
-    ServiceProvider.Iflytek as string,
+    jsonBody?.model ?? '',
+    ServiceProvider.Iflytek
   )
 ) {
```

The changes:
- Replace unsafe type assertion with nullish coalescing
- Remove unnecessary string cast for ServiceProvider enum
Line range hint `82-106`: Optimize request body handling to avoid multiple reads. The current implementation reads the request body and assigns it before validation, and this could be optimized for better performance.
Consider refactoring like this:
```diff
 if (serverConfig.customModels && req.body) {
   try {
     const clonedBody = await req.text();
-    fetchOptions.body = clonedBody;

     const jsonBody = JSON.parse(clonedBody) as { model?: string };

     if (
       isModelNotavailableInServer(
         serverConfig.customModels,
         jsonBody?.model ?? '',
         ServiceProvider.Iflytek
       )
     ) {
       return NextResponse.json(
         {
           error: true,
           message: `you are not allowed to use ${jsonBody?.model} model`,
         },
         {
           status: 403,
         },
       );
     }
+    fetchOptions.body = clonedBody;
   } catch (e) {
     console.error(`[Iflytek] filter`, e);
   }
 }
```

This ensures we only set the body once after validation passes.
app/api/moonshot.ts (1)
Line range hint `91-103`: Effective implementation of model access control. The implementation correctly addresses the model leak issue by properly enforcing model restrictions and returning appropriate error responses.
Consider these improvements:
- Enhance the error message to provide more context about the restriction
- Add logging for unauthorized access attempts
```diff
 return NextResponse.json(
   {
     error: true,
-    message: `you are not allowed to use ${jsonBody?.model} model`,
+    message: `Access to model '${jsonBody?.model}' is restricted based on current configuration`,
   },
   {
     status: 403,
   },
 );
```

Also consider adding before the return:
```ts
console.warn(`[Moonshot] Unauthorized model access attempt: ${jsonBody?.model}`);
```

app/api/glm.ts (1)
Line range hint `92-96`: Strengthen error handling in the model availability check. While the model availability check is correctly implemented, the error handling in the surrounding try-catch block (lines 89-107) could potentially allow unauthorized model access if JSON parsing fails.
Consider this safer implementation:
```diff
 if (serverConfig.customModels && req.body) {
   try {
     const clonedBody = await req.text();
     fetchOptions.body = clonedBody;

     const jsonBody = JSON.parse(clonedBody) as { model?: string };

+    if (!jsonBody?.model) {
+      return NextResponse.json(
+        {
+          error: true,
+          message: "Invalid request: model field is required",
+        },
+        { status: 400 }
+      );
+    }

     if (
       isModelNotavailableInServer(
         serverConfig.customModels,
         jsonBody.model,
         ServiceProvider.ChatGLM as string,
       )
     ) {
       return NextResponse.json(
         {
           error: true,
           message: `you are not allowed to use ${jsonBody.model} model`,
         },
         { status: 403 }
       );
     }
   } catch (e) {
-    console.error(`[GLM] filter`, e);
+    return NextResponse.json(
+      {
+        error: true,
+        message: "Invalid request body",
+      },
+      { status: 400 }
+    );
   }
 }
```

app/api/alibaba.ts (1)
Line range hint `92-107`: Strengthen error handling for model availability checks. While the implementation correctly checks model availability, the error handling could potentially leak model access. If the check fails (caught in the catch block), the request continues processing instead of failing securely.
Apply this diff to fail securely:
```diff
 } catch (e) {
   console.error(`[Alibaba] filter`, e);
+  return NextResponse.json(
+    {
+      error: true,
+      message: "Failed to validate model availability",
+    },
+    {
+      status: 403,
+    }
+  );
 }
```

Also, consider adding input validation:
```diff
 const jsonBody = JSON.parse(clonedBody) as { model?: string };
+if (!jsonBody?.model) {
+  return NextResponse.json(
+    {
+      error: true,
+      message: "Model parameter is required",
+    },
+    {
+      status: 400,
+    }
+  );
+}
```

app/api/baidu.ts (2)
`11-11`: Consider a more explicit function name. While `isModelNotavailableInServer` works, a more explicit name like `isModelRestrictedInServer` or `isModelBlockedInServer` might better convey the intent of checking for restricted access.
Line range hint `107-112`: Remove unnecessary string cast and enhance error message. The implementation correctly addresses the model leak issue, but has two minor improvements:

- The `ServiceProvider.Baidu as string` cast is unnecessary, as ServiceProvider is already a string enum
- The error message could be more informative about why the model is restricted
Apply this diff:
```diff
 if (
   isModelNotavailableInServer(
     serverConfig.customModels,
     jsonBody?.model as string,
-    ServiceProvider.Baidu as string,
+    ServiceProvider.Baidu,
   )
 ) {
   return NextResponse.json(
     {
       error: true,
-      message: `you are not allowed to use ${jsonBody?.model} model`,
+      message: `Access to model '${jsonBody?.model}' is restricted by server configuration`,
     },
     {
       status: 403,
     },
   );
 }
```

app/api/anthropic.ts (1)
Line range hint `125-139`: Improve error handling and type safety in model availability check. While the core logic is correct, there are several areas for improvement:
- Error handling should be more specific and not catch all errors
- Type assertion on ServiceProvider.Anthropic is unnecessary
- Error message could be more informative
Here's the suggested improvement:
```diff
 if (
-  isModelNotavailableInServer(
-    serverConfig.customModels,
-    jsonBody?.model as string,
-    ServiceProvider.Anthropic as string,
-  )
+  isModelNotavailableInServer(
+    serverConfig.customModels,
+    jsonBody?.model ?? '',
+    ServiceProvider.Anthropic,
+  )
 ) {
   return NextResponse.json(
     {
       error: true,
-      message: `you are not allowed to use ${jsonBody?.model} model`,
+      message: `Model '${jsonBody?.model}' is not available based on your current configuration. Please check CUSTOM_MODELS setting.`,
     },
     {
       status: 403,
     },
   );
 }
```

Also, consider improving the error handling:
```diff
 } catch (e) {
-  console.error(`[Anthropic] filter`, e);
+  if (e instanceof SyntaxError) {
+    console.error(`[Anthropic] Invalid JSON in request body:`, e);
+  } else {
+    console.error(`[Anthropic] Error checking model availability:`, e);
+  }
+  throw e; // Re-throw to handle it in the outer try-catch
 }
```

app/api/common.ts (1)
`5-5`: Consider a clearer naming convention. While the change from `isModelAvailableInServer` to `isModelNotavailableInServer` is functionally correct, the double negative might impact code readability. Consider using a more direct name like `isModelRestricted` or `shouldBlockModel`.
`215-228`: Consider adding input validation and improving case handling. While the implementation is functionally correct, consider these improvements for better robustness:
- Add input validation for empty/invalid inputs
- Handle provider name case normalization consistently
Here's a suggested improvement:
```diff
 export function isModelNotavailableInServer(
   customModels: string,
   modelName: string,
   providerNames: string | string[],
 ) {
+  if (!modelName?.trim()) {
+    throw new Error('Model name cannot be empty');
+  }
+  if (!providerNames || (Array.isArray(providerNames) && providerNames.length === 0)) {
+    throw new Error('At least one provider name must be specified');
+  }
   const modelTable = collectModelTable(DEFAULT_MODELS, customModels);
   const providerNamesArray = Array.isArray(providerNames) ? providerNames : [providerNames];
   for (const providerName of providerNamesArray) {
+    if (!providerName?.trim()) continue;
     const fullName = `${modelName}@${providerName.toLowerCase()}`;
     if (modelTable[fullName]?.available === true) return false;
   }
   return true;
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (10)
- app/api/alibaba.ts (2 hunks)
- app/api/anthropic.ts (2 hunks)
- app/api/baidu.ts (2 hunks)
- app/api/bytedance.ts (2 hunks)
- app/api/common.ts (2 hunks)
- app/api/glm.ts (2 hunks)
- app/api/iflytek.ts (2 hunks)
- app/api/moonshot.ts (2 hunks)
- app/api/xai.ts (2 hunks)
- app/utils/model.ts (1 hunks)
🔇 Additional comments (14)
app/api/xai.ts (2)
`11-11`: LGTM! Import change aligns with the new availability check logic. The change from `isModelAvailableInServer` to `isModelNotavailableInServer` better represents the function's purpose and aligns with the PR's objective to fix model availability checks.
Line range hint `91-103`: Verify model restriction logic across different scenarios. Let's verify that the model restriction logic works correctly for all cases mentioned in the PR objectives, particularly for DISABLE_GPT4 and CUSTOM_MODELS restrictions.
✅ Verification successful
Based on the code analysis, I can now provide a final response regarding the model restriction logic verification:
Model restriction logic is properly implemented and consistent
The implementation shows a robust model restriction system that works across different scenarios:
- The server config (`app/config/server.ts`) properly handles DISABLE_GPT4 by automatically adding GPT-4 models to the restricted list in customModels
- The model utility (`app/utils/model.ts`) implements `isModelNotavailableInServer()`, which correctly handles model restrictions by:
- Checking against customModels configuration
- Supporting multiple provider checks
- Properly handling model@provider format
- The XAI API endpoint correctly uses this restriction logic with proper 403 status code response
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the implementation of model restrictions across the codebase
# Check for other files that should implement similar restrictions
echo "Checking for other API endpoints that need model restrictions..."
rg -l "customModels.*model" --type ts
# Check if DISABLE_GPT4 setting is properly handled
echo "Checking DISABLE_GPT4 implementation..."
rg "DISABLE_GPT4" --type ts
# Check for consistent error handling across all API endpoints
echo "Checking error handling consistency..."
rg "isModelNotavailableInServer.*403" --type ts
Length of output: 606
Script:
#!/bin/bash
# Let's examine the model restriction implementation in detail
# Check the model utility implementation
echo "Checking model utility implementation..."
cat app/utils/model.ts
# Check server config implementation
echo "Checking server config implementation..."
cat app/config/server.ts
# Look for all API endpoints that might need model restrictions
echo "Checking all API routes..."
fd "route|api" --type f --extension ts --exec cat {}
Length of output: 37707
app/api/bytedance.ts (1)
`11-11`: LGTM! Verify consistent usage across other API routes. The import change from `isModelAvailableInServer` to `isModelNotavailableInServer` aligns with the PR objective to fix model leak issues.
Let's verify this change is consistent across other API routes:
✅ Verification successful
All API routes consistently use the new `isModelNotavailableInServer` function
The verification confirms that:
- All API routes (bytedance.ts, xai.ts, moonshot.ts, glm.ts, iflytek.ts, common.ts, anthropic.ts, alibaba.ts, baidu.ts) have adopted the new `isModelNotavailableInServer` function
- No instances of the old `isModelAvailableInServer` function remain in the API routes
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if all API routes use the new isModelNotavailableInServer function
# and none still use the old isModelAvailableInServer
echo "Checking for new function usage:"
rg "isModelNotavailableInServer" "app/api/"
echo -e "\nChecking for old function usage (should return nothing):"
rg "isModelAvailableInServer" "app/api/"
Length of output: 1547
app/api/iflytek.ts (1)
`11-11`: LGTM! Verify consistent usage across the codebase. The semantic change from `isModelAvailableInServer` to `isModelNotavailableInServer` aligns with the PR objectives to fix model leak issues.
Let's verify this change is consistently applied across all API handlers:
✅ Verification successful
All API handlers consistently use the new `isModelNotavailableInServer` function
The verification shows that:
- The old function `isModelAvailableInServer` only exists in `app/utils/model.ts`, where it's defined
- The new function `isModelNotavailableInServer` is consistently imported and used across all API handlers:
- alibaba.ts
- baidu.ts
- bytedance.ts
- common.ts
- glm.ts
- iflytek.ts
- moonshot.ts
- xai.ts
The change has been consistently applied throughout the codebase.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any remaining uses of the old function name
# and verify consistent usage of the new function across API handlers
# Check for any remaining references to the old function
rg "isModelAvailableInServer" --type ts
# Check the usage pattern of the new function in API handlers
rg "isModelNotavailableInServer" --type ts -A 5
Length of output: 5707
app/api/moonshot.ts (2)
Line range hint `1-134`: Verify consistency across API files. The implementation looks good, but let's verify that the same model availability check pattern is consistently implemented across all API files.
✅ Verification successful
Model availability check is consistently implemented across API handlers

The verification shows that the `isModelNotavailableInServer` check is properly implemented across all relevant API handlers including:
- alibaba.ts
- anthropic.ts
- baidu.ts
- bytedance.ts
- common.ts (for OpenAI/Azure)
- glm.ts
- iflytek.ts
- moonshot.ts
- xai.ts
Each implementation follows the same pattern:
- Uses the same utility function
- Checks against `serverConfig.customModels`
- Returns a consistent 403 error response with the same message format
- Properly specifies the respective `ServiceProvider`
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify consistent implementation across API files
# Expected: Find similar model availability check patterns in other API files
# Check for similar implementation pattern across API files
rg -A 10 "isModelNotavailableInServer.*ServiceProvider\." "app/api/*.ts"
Length of output: 127
Script:
#!/bin/bash
# Let's first find all API handler files
fd "\.ts$" app/api/
# Then check for model availability patterns across these files
rg -A 10 "isModelNotavailableInServer" app/api/
Length of output: 9996
`11-11`: Verify the imported function implementation. The import statement has been updated to use `isModelNotavailableInServer`. Let's ensure this function is properly implemented in the utils module.
✅ Verification successful
Function implementation is correct and consistently used

The `isModelNotavailableInServer` function is properly implemented in `app/utils/model.ts` with clear documentation and logic. The function is being used consistently across multiple API files (alibaba, anthropic, baidu, bytedance, glm, iflytek, moonshot, xai) as part of the model availability check improvements. The implementation correctly inverts the logic to return `true` when a model is not available on any of the specified providers, which aligns with the PR's objective to fix model leak issues.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the implementation of isModelNotavailableInServer function
# Expected: Find the function definition and its test coverage
# Check for function implementation
ast-grep --pattern 'export function isModelNotavailableInServer($_) {
$$$
}'
# Check for test coverage
rg -l "isModelNotavailableInServer" "**/*test*"
Length of output: 178
Script:
#!/bin/bash
# Let's try a more comprehensive search for the function and its implementation
# Search for the function name in all files
rg "isModelNotavailableInServer"
# Look for the file where it should be defined
fd "model.ts" app/utils/
# Check the content of model.ts if found
fd "model.ts" app/utils/ -x cat {}
Length of output: 8376
app/api/glm.ts (2)
`11-11`: LGTM! Verify consistent import changes across related files. The semantic change from `isModelAvailableInServer` to `isModelNotavailableInServer` aligns with the PR objective to fix model availability checks.
✅ Verification successful
Import changes have been consistently applied across all API files ✅
The verification shows that:
- No instances of the old `isModelAvailableInServer` remain in the API directory
- The new `isModelNotavailableInServer` is consistently imported and used across all API files:
- alibaba.ts
- anthropic.ts
- baidu.ts
- bytedance.ts
- common.ts
- glm.ts
- iflytek.ts
- moonshot.ts
- xai.ts
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that all API files have been updated consistently
# Expected: All API files should use isModelNotavailableInServer
# Search for any remaining uses of the old function
echo "Checking for any remaining uses of isModelAvailableInServer:"
rg "isModelAvailableInServer" app/api/
# Verify the new function is used consistently
echo "Verifying consistent usage of isModelNotavailableInServer:"
rg "isModelNotavailableInServer" app/api/
Length of output: 1601
Line range hint `89-107`: Verify complete implementation of model restriction features. Since this PR aims to fix issues with both DISABLE_GPT4 and CUSTOM_MODELS settings, let's verify the complete implementation.
✅ Verification successful
Model restriction features are properly implemented across the codebase
The verification shows a comprehensive implementation of both DISABLE_GPT4 and CUSTOM_MODELS features:
- DISABLE_GPT4 setting is properly handled in `app/config/server.ts`, automatically adding GPT-4 models to the restricted list
- CUSTOM_MODELS restrictions are consistently enforced across all API routes (glm.ts, alibaba.ts, common.ts, etc.)
- The `isModelNotavailableInServer` function is used uniformly in all provider-specific API handlers
- Model availability checks are integrated with the UI components and client-side stores
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify implementation of all model restriction features
# Expected: Find all model restriction logic and settings usage
# Check for DISABLE_GPT4 setting usage
echo "Checking DISABLE_GPT4 implementation:"
rg -A 5 "DISABLE_GPT4" app/
# Check for CUSTOM_MODELS handling
echo "Checking CUSTOM_MODELS implementation:"
rg -A 5 "customModels" app/
# Check the isModelNotavailableInServer implementation
echo "Checking model availability function implementation:"
ast-grep --pattern 'function isModelNotavailableInServer($_) {
$$$
}'
Length of output: 14829
app/api/baidu.ts (1)
Line range hint `107-112`: Verify consistent implementation across API endpoints. Since this is a critical security feature for model access control, let's verify that the same pattern is implemented consistently across other API endpoints.
✅ Verification successful
Model availability check is consistently implemented across all API endpoints

The verification shows that `isModelNotavailableInServer` is consistently implemented across all API endpoints with the same pattern:
- All endpoints import the function from `@/app/utils/model`
- Each endpoint uses the same structure: checking model availability with server config, model name, and appropriate service provider
- The error response format is identical across all implementations
- No legacy `isModelAvailableInServer` implementations were found
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for consistent implementation of model availability checks across API endpoints
# Look for similar model availability check patterns in other API files
echo "Checking for model availability implementations:"
rg -A 10 "isModelNotavailableInServer" "app/api/"
# Look for any remaining old implementations
echo "Checking for potential missed conversions:"
rg -A 10 "isModelAvailableInServer" "app/api/"
Length of output: 9758
app/api/anthropic.ts (2)
`12-12`: Verify consistent usage of `isModelNotavailableInServer` across the codebase. The import change from `isModelAvailableInServer` to `isModelNotavailableInServer` looks good and aligns with the PR objective to fix model availability checks.
Let's verify this change is consistently applied across all API files:
✅ Verification successful
Change to `isModelNotavailableInServer` is consistently applied across the codebase
The verification shows that:
- The old function `isModelAvailableInServer` exists only in its definition file (`app/utils/model.ts`)
- The new function `isModelNotavailableInServer` is properly imported and used across all API files:
- anthropic.ts
- baidu.ts
- bytedance.ts
- common.ts
- glm.ts
- iflytek.ts
- moonshot.ts
- xai.ts
The change is consistent and complete throughout the codebase.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any remaining instances of old function and confirm new function usage
echo "Checking for any remaining instances of old function..."
rg "isModelAvailableInServer" --type ts
echo "Verifying new function usage in API files..."
rg "isModelNotavailableInServer" "app/api/*.ts"
Length of output: 421
Script:
#!/bin/bash
# Let's verify this more thoroughly with correct paths and patterns
# First, check the implementation of both functions
echo "Checking function implementations in model.ts..."
rg -A 5 "export function isModel(Not)?[aA]vailableInServer" app/utils/model.ts
# Then check all usages across the codebase
echo "\nChecking all usages of both functions..."
rg "isModel(Not)?[aA]vailableInServer" --type ts
# Finally, check specifically in the anthropic.ts file
echo "\nChecking anthropic.ts specifically..."
rg "isModel(Not)?[aA]vailableInServer" app/api/anthropic.ts
Length of output: 2343
Line range hint `125-139`: Verify security implications of model availability check. Since this is a security-critical change that controls model access, let's ensure:
- The check cannot be bypassed
- Server configuration is properly loaded
✅ Verification successful
Model validation implementation is secure and properly implemented
The verification shows that:
- The model validation is implemented securely through `isModelNotavailableInServer`, which:
  - Properly validates models against both default and custom configurations
  - Uses a strict validation that requires explicit model availability (`true`) to pass
  - Handles provider names case-insensitively
- The check is consistently applied across all API routes using the same validation function
- Server configuration is properly loaded and used in all API endpoints
- No direct model access bypassing the validation was found
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify security aspects of the model availability implementation
# Check if there are any other model validation checks that might conflict
echo "Checking for other model validation points..."
rg -B 2 -A 2 "model.*validation|validate.*model" --type ts
# Verify server config loading
echo "Checking server config usage..."
rg "serverConfig\.customModels" --type ts
# Look for potential bypass points
echo "Checking for potential bypass points..."
rg "model.*override|override.*model" --type ts
Length of output: 1699
Script:
#!/bin/bash
# Let's verify the model validation implementation and configuration loading
# Check the implementation of isModelNotavailableInServer function
echo "Checking model availability validation implementation..."
ast-grep --pattern 'function isModelNotavailableInServer'
# Check how serverConfig is imported and initialized
echo "Checking server config initialization..."
rg -B 2 -A 2 "serverConfig.*=|import.*serverConfig" --type ts
# Check the structure of model validation across different API routes
echo "Checking model validation pattern..."
ast-grep --pattern 'if ($serverConfig.customModels && $req.body) {
$$$
isModelNotavailableInServer(
$$$
)
}'
Length of output: 6151
Script:
#!/bin/bash
# Let's check the actual implementation of model validation and config loading
# Check the implementation in utils/model.ts
echo "Checking model validation implementation..."
rg -B 5 -A 5 "export function isModelNotavailableInServer" --type ts
# Check server config implementation
echo "Checking server config implementation..."
rg -B 5 -A 5 "export function getServerSideConfig" --type ts
# Check if there are any direct model access without validation
echo "Checking for direct model access..."
rg "model.*=.*req\.body|body\.model" --type ts
Length of output: 1138
Script:
#!/bin/bash
# Let's check the complete model validation flow and configuration
# Check the complete model validation implementation
echo "Checking complete model validation implementation..."
rg -B 15 -A 15 "isModelNotavailableInServer" app/utils/model.ts
# Check the model table collection implementation
echo "Checking model table collection..."
rg -B 10 -A 10 "function collectModelTable" app/utils/model.ts
# Check server configuration structure
echo "Checking server configuration..."
rg -B 10 -A 10 "interface ServerConfig" --type ts
Length of output: 2841
app/api/common.ts (1)
`121-128`: Verify model restriction effectiveness. Let's ensure that the model restrictions are working as intended across different scenarios. Run the following script to analyze the model restriction implementation:
✅ Verification successful
Let me gather more information about the model restriction implementation to ensure its effectiveness.
Based on the gathered information, I can now provide a final response about the model restriction effectiveness:
Model restriction implementation is properly secured
The model restriction system is well-implemented with multiple security layers:
- The `isModelNotavailableInServer` function correctly validates models against both custom configurations and provider specifications
- Model availability is controlled through a centralized model table that properly maps model names to their providers
- The implementation in common.ts correctly handles both OpenAI and Azure providers, with proper support for provider-unspecified models
- Custom model restrictions are properly processed at the server configuration level
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify model restriction scenarios
# Check for any direct model references that might bypass restrictions
rg -A 5 "model[\"']?\s*[:=]\s*[\"']gpt-4" --type ts
# Check for proper handling of CUSTOM_MODELS in the codebase
rg -A 5 "CUSTOM_MODELS" --type ts
# Look for potential model restriction bypasses
ast-grep --pattern 'isModelNotavailableInServer($$$)'
Length of output: 5312
Script:
#!/bin/bash
# Check the implementation of isModelNotavailableInServer function
ast-grep --pattern 'function isModelNotavailableInServer($$$)'
# Check how custom models are parsed and validated
rg -A 10 "customModels\s*=\s*process\.env\.CUSTOM_MODELS" --type ts
# Check for model validation and error handling
rg -A 5 "isModelNotavailableInServer.*return" --type ts
Length of output: 835
Script:
#!/bin/bash
# Let's try a different approach to find the implementation
rg -A 15 "export function isModelNotavailableInServer" --type ts
# Check for model validation logic
rg -A 10 "function.*isModelNotavailable" --type ts
# Look for model restriction related constants
rg -A 5 "DEFAULT_MODELS\s*=|BUILTIN_MODELS\s*=" --type ts
Length of output: 1895
Script:
#!/bin/bash
# Check the model table collection implementation
ast-grep --pattern 'function collectModelTable($$$)'
# Check OpenAI models definition
rg -A 10 "const openaiModels\s*=" --type ts
# Check how model availability is determined
rg -B 5 -A 10 "available:\s*(?:true|false)" --type ts
Length of output: 8543
Script:
#!/bin/bash
# Find the collectModelTable implementation
rg -A 15 "function collectModelTable" --type ts
# Check how custom models are processed
rg -A 10 "customModels\.split" --type ts
# Look for model validation in the API routes
rg -B 5 "if.*isModelNotavailableInServer.*{" --type ts
Length of output: 1775
app/utils/model.ts (2)
`206-214`: Well-documented implementation with clear purpose. The documentation clearly explains the function's purpose, parameters, and return value. This is particularly important for security-related functions.
`215-228`: Verify complete migration to new function. Since this change is critical for fixing model availability checks, we should verify:
- All API files have migrated to the new function
- No instances of the old function remain
✅ Verification successful
Migration to new function is complete and properly implemented
The verification shows:
- The old function `isModelAvailableInServer` exists only in its definition in `app/utils/model.ts`
- The new function `isModelNotavailableInServer` is properly imported and used across all relevant API files:
- common.ts (OpenAI/Azure)
- glm.ts
- moonshot.ts
- xai.ts
- baidu.ts
- bytedance.ts
- anthropic.ts
- alibaba.ts
All API endpoints are correctly using the new function with appropriate provider parameters, ensuring proper model availability checks.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for any remaining uses of the old function
echo "Checking for remaining uses of isModelAvailableInServer:"
rg "isModelAvailableInServer" --type ts
# Verify all API files are using the new function
echo "Verifying new function usage in API files:"
rg "isModelNotavailableInServer" "app/api/*.ts"
Length of output: 419
Script:
#!/bin/bash
# Let's first find all API-related files to ensure correct path
echo "Finding API files:"
fd "\.ts$" app/api
# Check for the old function definition and its usage
echo "Checking old function definition and usage:"
rg -A 5 "isModelAvailableInServer" --type ts
# Check for the new function usage
echo "Checking new function usage:"
rg -A 5 "isModelNotavailableInServer" --type ts
# Let's also check the model-related files
echo "Finding model-related files:"
fd "model" --type file --extension ts
Length of output: 6778
```diff
@@ -8,7 +8,7 @@ import {
 import { prettyObject } from "@/app/utils/format";
 import { NextRequest, NextResponse } from "next/server";
 import { auth } from "@/app/api/auth";
-import { isModelAvailableInServer } from "@/app/utils/model";
+import { isModelNotavailableInServer } from "@/app/utils/model";
```
💡 Codebase verification
Remove unused `isModelAvailableInServer` function from model.ts

The old function is not being used anywhere in the codebase but still exists in `app/utils/model.ts`. This dead code should be removed to prevent confusion and potential future misuse.

- `app/utils/model.ts`: Remove the unused `isModelAvailableInServer` function
🔗 Analysis chain
LGTM! Verify consistent usage across the codebase. The import change from `isModelAvailableInServer` to `isModelNotavailableInServer` aligns with the PR objective to fix model leak issues.
Both old and new functions coexist in model.ts and need cleanup

The verification revealed that while the new `isModelNotavailableInServer` function is consistently used across API handlers, the old `isModelAvailableInServer` function still exists in `app/utils/model.ts`. Let's verify if it's actually being used anywhere:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that all files have been updated to use isModelNotavailableInServer
# and no occurrences of the old isModelAvailableInServer remain
echo "Checking for any remaining usage of old function name..."
rg "isModelAvailableInServer"
echo "Verifying consistent usage of new function..."
rg "isModelNotavailableInServer"
Length of output: 1674
Script:
#!/bin/bash
# Check if the old function is used anywhere in the codebase
rg -A 2 "isModelAvailableInServer\(" --type ts
Length of output: 192
```ts
      isModelNotavailableInServer(
        serverConfig.customModels,
        jsonBody?.model as string,
        ServiceProvider.OpenAI as string,
      ) ||
      isModelAvailableInServer(
        serverConfig.customModels,
        jsonBody?.model as string,
        ServiceProvider.Azure as string,
        [
          ServiceProvider.OpenAI,
          ServiceProvider.Azure,
          jsonBody?.model as string, // support provider-unspecified model
        ],
```
🛠️ Refactor suggestion
Improve error handling and code clarity.
Several improvements can be made to this section:
- The comment "try to refuse gpt4 request" is outdated as this now handles all model restrictions
- The error message could be more informative
- The model parameter is cast to string unnecessarily at line 127
Apply these changes:
```diff
-      // #1815 try to refuse gpt4 request
+      // Check if the requested model is restricted based on server configuration
       if (
         isModelNotavailableInServer(
           serverConfig.customModels,
           jsonBody?.model as string,
           [
             ServiceProvider.OpenAI,
             ServiceProvider.Azure,
-            jsonBody?.model as string, // support provider-unspecified model
+            jsonBody?.model, // support provider-unspecified model
           ],
         )
       ) {
         return NextResponse.json(
           {
             error: true,
-            message: `you are not allowed to use ${jsonBody?.model} model`,
+            message: `Access to model '${jsonBody?.model}' is restricted by server configuration`,
           },
           {
             status: 403,
           },
         );
       }
```
📝 Committable suggestion
```diff
-      isModelNotavailableInServer(
-        serverConfig.customModels,
-        jsonBody?.model as string,
-        ServiceProvider.OpenAI as string,
-      ) ||
-      isModelAvailableInServer(
-        serverConfig.customModels,
-        jsonBody?.model as string,
-        ServiceProvider.Azure as string,
-        [
-          ServiceProvider.OpenAI,
-          ServiceProvider.Azure,
-          jsonBody?.model as string, // support provider-unspecified model
-        ],
+      isModelNotavailableInServer(
+        serverConfig.customModels,
+        jsonBody?.model as string,
+        [
+          ServiceProvider.OpenAI,
+          ServiceProvider.Azure,
+          jsonBody?.model, // support provider-unspecified model
+        ],
```
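As a side note on the `// support provider-unspecified model` entry above, here is a minimal, self-contained sketch of the behaviour it relies on. The names and the assumed table entry are inferred from the tests added in this PR; this is not the repository's `collectModelTable` code.

```ts
// Illustrative sketch only. Based on the tests in this PR, a custom model declared
// without a provider (CUSTOM_MODELS="-all,mistral-large") appears to be keyed as
// "<model>@<model>" in the model table, which is why the model name itself is also
// passed as a "provider" candidate in common.ts.
type ModelTable = Record<string, { available: boolean } | undefined>;

const modelTable: ModelTable = {
  // assumed entry for a provider-less custom model
  "mistral-large@mistral-large": { available: true },
};

const requestedModel = "mistral-large";
const providerCandidates = ["OpenAI", "Azure", requestedModel]; // support provider-unspecified model

// Same decision rule as isModelNotavailableInServer: block unless some candidate
// provider explicitly marks the model as available.
const blocked = !providerCandidates.some(
  (p) => modelTable[`${requestedModel}@${p.toLowerCase()}`]?.available === true,
);
console.log(blocked); // false -> the request is allowed through
```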
```diff
@@ -202,3 +202,27 @@ export function isModelAvailableInServer(
   const modelTable = collectModelTable(DEFAULT_MODELS, customModels);
   return modelTable[fullName]?.available === false;
 }
 
```
Critical: Function has incorrect logic that caused the model leak
The `isModelAvailableInServer` function has inverted logic that likely caused the model leak issue:

- The function name suggests it checks if a model is available, but it returns `true` when the model is NOT available
- The condition `available === false` means it only returns `true` when a model is explicitly marked as unavailable, allowing access to undefined or unspecified models
This function should be deprecated in favor of the new `isModelNotavailableInServer` function, as its behavior is counterintuitive and could lead to security issues.
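To make the contrast concrete, here is a minimal, self-contained sketch of the two checks as described above. The sample `modelTable` contents and function names are illustrative only, not the project's defaults:

```ts
type ModelTable = Record<string, { available: boolean } | undefined>;

// Old check (per the hunk above): despite its name, callers treated the result as
// "should block", and it only blocks models explicitly marked unavailable.
function oldShouldBlock(modelTable: ModelTable, fullName: string): boolean {
  // A model that is simply absent from the table yields `undefined`;
  // `undefined === false` is false, so the request slips through. That is the leak.
  return modelTable[fullName]?.available === false;
}

// New check (isModelNotavailableInServer in this PR): block unless some allowed
// provider explicitly marks the model as available.
function newShouldBlock(modelTable: ModelTable, model: string, providers: string[]): boolean {
  for (const provider of providers) {
    if (modelTable[`${model}@${provider.toLowerCase()}`]?.available === true) return false;
  }
  return true;
}

// Example: a model that is not marked available (e.g. removed via "-all") is now blocked.
const table: ModelTable = { "gpt-4o-mini@openai": { available: true } };
console.log(oldShouldBlock(table, "gpt-4@openai")); // false -> would have been allowed
console.log(newShouldBlock(table, "gpt-4", ["OpenAI"])); // true -> blocked
```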
Add unit tests~
@Dogtiti Added
Actionable comments posted: 3
🧹 Outside diff range and nitpick comments (2)
test/model-available.test.ts (2)
`1-3`: Consider using path aliases for imports. Instead of using relative paths with `../`, consider configuring and using path aliases (e.g., `@app/utils/model`) to improve maintainability and make imports more readable.
`3-43`: Improve test organization with nested describe blocks. Consider organizing related test cases using nested describe blocks for better readability and maintenance.
Example structure:
describe("isModelNotavailableInServer", () => { describe("basic functionality", () => { test("default behavior with no custom models"); test("behavior with custom models"); }); describe("multiple providers", () => { test("model unavailable on one provider"); test("model available on one provider"); }); describe("custom models", () => { test("without provider"); }); describe("edge cases", () => { // Add suggested edge cases here }); });
test/model-available.test.ts (Outdated)
test("test model will return false, which means the model is not available", () => { | ||
const customModels = "-all,gpt-4o-mini"; | ||
const modelName = "gpt-4"; | ||
const providerNames = "OpenAI"; | ||
const result = isModelNotavailableInServer(customModels, modelName, providerNames); | ||
expect(result).toBe(true); | ||
}); |
Fix misleading test description
The test description says "will return false" but the test expects `true`. This contradiction makes the test confusing to read.
- test("test model will return false, which means the model is not available", () => {
+ test("test model will return true when model is not available in custom models", () => {
📝 Committable suggestion
test("test model will return false, which means the model is not available", () => { | |
const customModels = "-all,gpt-4o-mini"; | |
const modelName = "gpt-4"; | |
const providerNames = "OpenAI"; | |
const result = isModelNotavailableInServer(customModels, modelName, providerNames); | |
expect(result).toBe(true); | |
}); | |
test("test model will return true when model is not available in custom models", () => { | |
const customModels = "-all,gpt-4o-mini"; | |
const modelName = "gpt-4"; | |
const providerNames = "OpenAI"; | |
const result = isModelNotavailableInServer(customModels, modelName, providerNames); | |
expect(result).toBe(true); | |
}); |
Adopted.
@code-october, thank you for adopting my suggestion!
test("test custom model without setting provider", () => { | ||
const customModels = "-all,mistral-large"; | ||
const modelName = "mistral-large"; | ||
const providerNames = modelName; | ||
const result = isModelNotavailableInServer(customModels, modelName, providerNames); | ||
expect(result).toBe(false); | ||
}); |
Review provider name assignment in custom model test
Setting `providerNames` to `modelName` seems incorrect and might hide potential issues. Consider using a specific provider name or an array of providers to match the real-world usage.
```diff
- const providerNames = modelName;
+ const providerNames = "Default"; // or ["Default"] to match other test patterns
```
📝 Committable suggestion
test("test custom model without setting provider", () => { | |
const customModels = "-all,mistral-large"; | |
const modelName = "mistral-large"; | |
const providerNames = modelName; | |
const result = isModelNotavailableInServer(customModels, modelName, providerNames); | |
expect(result).toBe(false); | |
}); | |
test("test custom model without setting provider", () => { | |
const customModels = "-all,mistral-large"; | |
const modelName = "mistral-large"; | |
const providerNames = "Default"; // or ["Default"] to match other test patterns | |
const result = isModelNotavailableInServer(customModels, modelName, providerNames); | |
expect(result).toBe(false); | |
}); |
This is a provider design issue; `Default` is not a valid parameter.
@code-october, understood, and thank you for the explanation. Indeed, `Default` is not a valid parameter, so keeping the original `providerNames = modelName` setting is reasonable. Thanks for the clarification!
✏️ Learnings added
Learnt from: code-october
PR: ChatGPTNextWeb/ChatGPT-Next-Web#5883
File: test/model-available.test.ts:46-52
Timestamp: 2024-11-30T07:30:13.651Z
Learning: In the test file, for models that do not specify a provider, `providerNames` should be set to `modelName`; `"Default"` is not a valid provider parameter and should be avoided.
💻 Change Type

🔀 Description of Change

Re-optimized the model availability check to prevent model leakage.

📝 Additional Information

Inspired by #5881, I re-examined the model-disabling rules and found a serious problem: not only does DISABLE_GPT4 fail to take effect, but even when the list of available models is restricted via CUSTOM_MODELS, model requests are still not truly restricted. The restricted models merely cannot be selected in the frontend; models that the API supports but that CUSTOM_MODELS has excluded can still be requested, for example by specifying the model in a curl request.
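For illustration, a direct request of roughly the following shape could previously reach a model that CUSTOM_MODELS had removed. The host and endpoint path below are placeholders, not the project's exact route:

```ts
// Hypothetical illustration of the bypass described above; host and path are placeholders.
async function requestRestrictedModel() {
  const res = await fetch("https://your-deployment.example.com/api/openai/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4", // hidden in the UI by DISABLE_GPT4 / CUSTOM_MODELS, yet previously accepted
      messages: [{ role: "user", content: "hi" }],
    }),
  });
  // After this PR the server rejects such requests with HTTP 403 and a body like
  // { error: true, message: "you are not allowed to use gpt-4 model" }.
  console.log(res.status);
}
```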
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Tests