Feature/onfinish #5759

Conversation
@Dogtiti is attempting to deploy a commit to the NextChat Team on Vercel. A member of the Team first needs to authorize it.
Walkthrough: The changes in this pull request primarily focus on enhancing the `onFinish` callback across the chat client implementations so that it receives the raw `Response` object along with the final message, enabling status-aware response handling.
Your build has completed!
Actionable comments posted: 4
Outside diff range and nitpick comments (26)
app/utils/stream.ts (4)
Line range hint 7-17: Add JSDoc comments to type definitions.

The types `ResponseEvent` and `StreamResponse` would benefit from documentation explaining their purpose and the meaning of their fields, especially since they're crucial for the Tauri integration.

```diff
+/**
+ * Represents an event received from Tauri's stream-response channel
+ * @property id - The event identifier
+ * @property payload - The event payload containing request details and data chunks
+ */
 type ResponseEvent = {
   id: number;
   payload: {
     request_id: number;
     status?: number;
     chunk?: number[];
   };
 };

+/**
+ * Represents the initial response from Tauri's stream_fetch command
+ * @property request_id - The unique identifier for the request
+ * @property status - HTTP status code
+ * @property status_text - HTTP status message
+ * @property headers - Response headers
+ */
 type StreamResponse = {
   request_id: number;
   status: number;
   status_text: string;
   headers: Record<string, string>;
 };
```
Line range hint 31-44: Enhance error handling in stream operations.

While the stream handling is generally well-implemented, the writer operations could benefit from more robust error handling.

```diff
 const close = () => {
   if (closed) return;
   closed = true;
   unlisten && unlisten();
-  writer.ready.then(() => {
-    writer.close().catch((e) => console.error(e));
-  });
+  writer.ready
+    .then(() => writer.close())
+    .catch((e) => {
+      console.error('Failed to close writer:', e);
+      // Optionally trigger an error callback or cleanup here
+    });
 };
```
Line range hint 45-63: Add defensive checks for chunk processing.

The chunk processing logic could be more resilient to malformed payloads and edge cases.

```diff
 window.__TAURI__.event
   .listen("stream-response", (e: ResponseEvent) =>
     requestIdPromise.then((request_id) => {
       const { request_id: rid, chunk, status } = e?.payload || {};
-      if (request_id != rid) {
+      if (request_id != rid || !e?.payload) {
         return;
       }
-      if (chunk) {
+      if (Array.isArray(chunk) && chunk.length > 0) {
         writer.ready.then(() => {
-          writer.write(new Uint8Array(chunk));
+          writer.write(new Uint8Array(chunk)).catch((error) => {
+            console.error('Failed to write chunk:', error);
+            close();
+          });
         });
       } else if (status === 0) {
         // end of body
         close();
       }
     }),
   )
```
Line range hint 95-102: Enhance error reporting and handling.

The current error handling could be more informative and provide better context for debugging.

```diff
-    .catch((e) => {
-      console.error("stream error", e);
-      // throw e;
-      return new Response("", { status: 599 });
+    .catch((error: Error) => {
+      const errorMessage = {
+        message: error.message,
+        type: error.name,
+        timestamp: new Date().toISOString(),
+      };
+      console.error("Stream error:", errorMessage);
+      return new Response(JSON.stringify(errorMessage), {
+        status: 599,
+        headers: { 'Content-Type': 'application/json' }
+      });
     });
```

app/client/platforms/glm.ts (2)
Line range hint 182-185: Consider calling onFinish in error cases.

The error handling only calls `onError` but doesn't call `onFinish`. For consistency with other implementations and proper cleanup, consider calling `onFinish` with the error response.

```diff
 } catch (e) {
   console.log("[Request] failed to make a chat request", e);
   options.onError?.(e as Error);
+  options.onFinish("", new Response(null, { status: 500 }));
 }
```
Line range hint 89-106: Consider improving type safety for plugin tools.

The code uses type assertions (`as any`) when handling plugin tools. Consider defining proper types to improve type safety and maintainability.

```diff
-  tools as any,
+  tools as PluginTool[], // Define appropriate interface
```

app/client/platforms/xai.ts (2)
Line range hint 89-176: Ensure consistent response handling between streaming and non-streaming paths.

The streaming implementation doesn't pass the response object to `onFinish`, making it inconsistent with the non-streaming path. Consider updating the `stream` utility to pass the response object:

```diff
 // In the stream utility (not shown in this file)
-options.onFinish(message);
+options.onFinish(message, response);
```

Additionally, consider standardizing error handling between both paths. The non-streaming path only logs the error, while the streaming path's error handling isn't visible in this code.
Line range hint 1-176: Consider implementing request interceptors for consistent request/response handling.

To ensure consistent handling of requests and responses across streaming and non-streaming paths, consider implementing request/response interceptors.
This would allow:
- Centralized error handling
- Consistent response transformation
- Unified logging
- Easier testing and debugging
Example structure:
```typescript
interface RequestInterceptor {
  onRequest?: (config: RequestConfig) => Promise<RequestConfig>;
  onRequestError?: (error: Error) => Promise<Error>;
}

interface ResponseInterceptor {
  onResponse?: (response: Response) => Promise<Response>;
  onResponseError?: (error: Error) => Promise<Error>;
}

class XAIApi implements LLMApi {
  private requestInterceptors: RequestInterceptor[] = [];
  private responseInterceptors: ResponseInterceptor[] = [];

  addRequestInterceptor(interceptor: RequestInterceptor) {
    this.requestInterceptors.push(interceptor);
  }

  addResponseInterceptor(interceptor: ResponseInterceptor) {
    this.responseInterceptors.push(interceptor);
  }
}
```

app/client/platforms/moonshot.ts (2)
Line range hint 176-184: Consider enhancing error handling for non-streaming responses.

While the change to `onFinish` is good, the non-streaming path could benefit from additional error handling:
- Response status validation
- Error response parsing
Consider applying this improvement:
```diff
 const res = await fetch(chatPath, chatPayload);
 clearTimeout(requestTimeoutId);
+if (!res.ok) {
+  throw new Error(`HTTP error! status: ${res.status}`);
+}
 const resJson = await res.json();
+if (resJson.error) {
+  throw new Error(resJson.error.message || 'Unknown error');
+}
 const message = this.extractMessage(resJson);
 options.onFinish(message, res);
```
Add error handling for HTTP response status and parsing errors
Based on the analysis of error handling patterns across the codebase:
- Several implementations (Tencent, Bytedance, iFlytek, Baidu, Alibaba) check `res.ok` for HTTP status
- Most implementations use `options.onError` for error reporting
- The current implementation in moonshot.ts lacks HTTP status checks and response parsing error handling
Add these error checks before parsing the response:
```typescript
if (!res.ok) {
  const errorText = await res.text();
  options.onError?.(new Error(`HTTP error ${res.status}: ${errorText}`));
  return;
}
try {
  const resJson = await res.json();
  const message = this.extractMessage(resJson);
  options.onFinish(message, res);
} catch (e) {
  options.onError?.(new Error(`Failed to parse response: ${e.message}`));
}
```

Analysis chain
Line range hint 176-184: Verify error handling consistency across API implementations.

Let's check if other API implementations have similar error handling patterns that we should align with.

The previous results show inconsistent error handling across implementations. Some files check `res.ok`, while others don't. Let's gather more information about error handling patterns, including try-catch blocks and error callbacks, to make a comprehensive assessment.

Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Search for error handling patterns in other API implementations
echo "Searching for error handling patterns in API classes..."
rg -A 5 "const res = await fetch" "app/client/platforms/"

echo "Searching for HTTP status checks..."
rg "!res.ok" "app/client/platforms/"

echo "Searching for error response parsing..."
rg "resJson.error" "app/client/platforms/"
```

Length of output: 6128
Script:
```bash
#!/bin/bash
# Let's check for error handling patterns in try-catch blocks and error callbacks
echo "Searching for try-catch blocks around fetch..."
rg -B 2 -A 10 "try \{.*?const res = await fetch" -U --multiline "app/client/platforms/"

echo "Searching for onError callback usage..."
rg "options\.onError" "app/client/platforms/"

echo "Searching for error message extraction..."
rg "error.*message|message.*error" "app/client/platforms/"
```

Length of output: 2824
app/client/platforms/bytedance.ts (2)
179-179: Consider initializing responseRes to prevent undefined access.

The `responseRes` variable is assigned in the `onopen` handler, but it could potentially be accessed before this if the finish callback is triggered early (e.g., by controller abort). Consider initializing it with a default value.

```diff
-let responseRes: Response;
+let responseRes: Response | undefined;
```
Line range hint 133-245: Consider standardizing error handling with the enhanced response object.

Now that the response object is available in the onFinish callback, consider standardizing error handling across the codebase to leverage this additional context. This could include (a sketch of the first point follows the list):
- Creating common error handling utilities that can extract detailed error information from the response
- Implementing consistent error reporting patterns
- Adding response status and headers to error logs
This would improve debugging capabilities and provide better error context to users.
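A minimal sketch of such a shared helper, assuming only the standard Fetch `Response` API (the helper name and location are hypothetical, not part of the existing codebase):

```typescript
// Hypothetical shared utility for the platform clients; illustrative only.
export interface ResponseErrorInfo {
  status: number;
  statusText: string;
  headers: Record<string, string>;
  body?: string;
}

export async function extractResponseError(
  res: Response,
): Promise<ResponseErrorInfo> {
  // Clone before reading so callers can still consume the original body.
  const clone = res.clone();
  let body: string | undefined;
  try {
    body = await clone.text();
  } catch {
    body = undefined;
  }
  return {
    status: res.status,
    statusText: res.statusText,
    headers: Object.fromEntries(res.headers.entries()),
    body,
  };
}
```

Error logs could then carry the same status/header context regardless of which platform client produced the response.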
app/client/platforms/iflytek.ts (2)
160-160: Consider enhancing error logging

The response capture is well-placed and used effectively in error handling. Consider adding response headers to error logs for better debugging context.

```diff
-  console.log("[Spark] request response content type: ", contentType);
+  console.log("[Spark] request response details: ", {
+    contentType,
+    status: res.status,
+    headers: Object.fromEntries(res.headers.entries())
+  });
```
235-235: Consider type consistency for response parameter

The response object is correctly passed to onFinish. For better type safety, consider using the same Response type annotation in both streaming and non-streaming paths.

```diff
-  options.onFinish(message, res);
+  options.onFinish(message, res as Response);
```

app/client/platforms/alibaba.ts (2)
192-192: Consider enhancing error logging for debugging.

The response object capture is well-placed. To improve debugging capabilities, consider adding structured logging for error cases.

```diff
-  console.log("[Alibaba] request response content type: ", contentType);
+  console.log("[Alibaba] request response:", {
+    status: res.status,
+    contentType,
+    headers: Object.fromEntries(res.headers.entries())
+  });
```
259-259: Consider standardizing error handling between streaming and non-streaming modes.

The addition of the response object to `onFinish` is correct. However, the error handling differs between streaming and non-streaming modes.

```diff
 const resJson = await res.json();
 const message = this.extractMessage(resJson);
+if (message.length === 0) {
+  throw new Error("empty response from server");
+}
 options.onFinish(message, res);
```

app/client/platforms/tencent.ts (1)
257-257: Consider adding response status check

While the response object is now correctly passed to `onFinish`, consider adding a status check before processing the response to ensure consistent error handling with the streaming path.

```diff
-options.onFinish(message, res);
+if (!res.ok) {
+  const errorText = await res.clone().text();
+  throw new Error(`HTTP ${res.status}: ${errorText}`);
+}
+options.onFinish(message, res);
```

app/client/platforms/baidu.ts (2)
208-208: Consider enhancing error handling with response status

While the response capture is correct, consider adding specific error handling for different response status codes to provide more detailed error messages.

```diff
 responseRes = res;
+if (!res.ok) {
+  const errorMessage = `HTTP error! status: ${res.status}`;
+  console.error("[Baidu API]", errorMessage);
+  options.onError?.(new Error(errorMessage));
+}
```
Line range hint 165-195: Consider cleanup for animation frame

The animation frame callback should be properly cleaned up to prevent potential memory leaks.

```diff
 let finished = false;
+let animationFrameId: number;

 function animateResponseText() {
   if (finished || controller.signal.aborted) {
     responseText += remainText;
     console.log("[Response Animation] finished");
     if (responseText?.length === 0) {
       options.onError?.(new Error("empty response from server"));
     }
+    if (animationFrameId) {
+      cancelAnimationFrame(animationFrameId);
+    }
     return;
   }
   // ... existing animation code ...
-  requestAnimationFrame(animateResponseText);
+  animationFrameId = requestAnimationFrame(animateResponseText);
 }
```

app/client/platforms/google.ts (1)
276-278: Consider enhancing error handling with response metadata.

Since we now have access to the raw Response object, consider checking the response status and headers before calling `onFinish`. This could help catch and handle HTTP-level errors more gracefully.

```diff
 const resJson = await res.json();
+if (!res.ok) {
+  throw new Error(`HTTP error! status: ${res.status}, message: ${resJson?.error?.message || 'Unknown error'}`);
+}
 if (resJson?.promptFeedback?.blockReason) {
   // being blocked
   options.onError?.(
     new Error(
       "Message is being blocked for reason: " +
         resJson.promptFeedback.blockReason,
     ),
   );
 }
 const message = apiClient.extractMessage(resJson);
 options.onFinish(message, res);
```

app/client/api.ts (1)
73-73: Document the Response parameter usage

Consider adding JSDoc comments to explain:
- The purpose of the Response parameter
- Expected handling of different response statuses
- Common usage patterns
Example documentation:
```diff
 export interface ChatOptions {
   messages: RequestMessage[];
   config: LLMConfig;

   onUpdate?: (message: string, chunk: string) => void;
+  /**
+   * Callback invoked when the chat request completes
+   * @param message The final message content
+   * @param responseRes The raw Response object for status/header access
+   */
   onFinish: (message: string, responseRes: Response) => void;
```

app/utils.ts (1)
269-271: Consider moving visionKeywords array outside the function.

To optimize performance, consider moving the `visionKeywords` array outside the function to prevent recreation on each call.

```diff
+const VISION_KEYWORDS = [
+  "vision",
+  "claude-3",
+  "gemini-1.5-pro",
+  "gemini-1.5-flash",
+  "gpt-4o",
+  "gpt-4o-mini",
+];

 export function isVisionModel(model: string) {
   // Note: This is a better way using the TypeScript feature instead of `&&` or `||` (ts v5.5.0-dev.20240314 I've been using)
-  const visionKeywords = [
-    "vision",
-    "claude-3",
-    "gemini-1.5-pro",
-    "gemini-1.5-flash",
-    "gpt-4o",
-    "gpt-4o-mini",
-  ];
   const isGpt4Turbo =
     model.includes("gpt-4-turbo") && !model.includes("preview");

   return (
-    visionKeywords.some((keyword) => model.includes(keyword)) ||
+    VISION_KEYWORDS.some((keyword) => model.includes(keyword)) ||
     isGpt4Turbo ||
     isDalle3(model)
   );
 }
```

app/client/platforms/anthropic.ts (1)
320-321: Use a more semantically correct HTTP status code for aborted requests.

The current implementation uses status code 400 (Bad Request) for aborted requests, which doesn't accurately represent the nature of the cancellation. Consider using either:
- 499 (Client Closed Request) - More specific to client-side cancellations
- 408 (Request Timeout) - Suitable for timeout-based cancellations
```diff
-options.onFinish("", new Response(null, { status: 400 }));
+options.onFinish("", new Response(null, { status: 499 }));
```

app/client/platforms/openai.ts (2)
Line range hint 352-365: Consider standardizing response handling between streaming and non-streaming paths.

While the non-streaming path now provides access to the raw response via `onFinish(message, res)`, the streaming path still uses the old signature. Consider standardizing the response handling to maintain consistency. This could be achieved by:

- Collecting the complete streamed response
- Passing both the final message and the collected response to `onFinish` (see the sketch below)
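A minimal sketch of that approach, assuming the `ChatOptions` shape from `app/client/api.ts` (the helper name and import path are illustrative, not part of the existing code):

```typescript
import type { ChatOptions } from "../api"; // path assumed for illustration

// Hypothetical helper: join the streamed chunks and synthesize a Response
// carrying the original status/headers, so onFinish receives both values.
function finishStreamedResponse(
  options: ChatOptions,
  chunks: string[],
  originalRes: Response,
) {
  const message = chunks.join("");
  const finalRes = new Response(message, {
    status: originalRes.status,
    statusText: originalRes.statusText,
    headers: originalRes.headers,
  });
  options.onFinish(message, finalRes);
}
```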
363-365: Enhance error handling for response processing.

Consider adding more robust error handling around the response processing:
- Type checking for the response structure
- Handling edge cases where `extractMessage` might fail

Example implementation:

```diff
-  const resJson = await res.json();
-  const message = await this.extractMessage(resJson);
-  options.onFinish(message, res);
+  try {
+    const resJson = await res.json();
+    if (!resJson) {
+      throw new Error('Empty response received');
+    }
+    const message = await this.extractMessage(resJson);
+    options.onFinish(message, res);
+  } catch (e) {
+    console.error('[Response] failed to process chat response', e);
+    options.onError?.(e as Error);
+  }
```

app/store/chat.ts (1)
673-673: Extract magic number to a constant.

The fallback value of 4000 should be defined as a named constant for better maintainability and documentation.

```diff
+const DEFAULT_MAX_TOKENS = 4000;
+
-if (historyMsgLength > (modelConfig?.max_tokens || 4000)) {
+if (historyMsgLength > (modelConfig?.max_tokens || DEFAULT_MAX_TOKENS)) {
```
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (16)
- app/client/api.ts (1 hunks)
- app/client/platforms/alibaba.ts (4 hunks)
- app/client/platforms/anthropic.ts (1 hunks)
- app/client/platforms/baidu.ts (4 hunks)
- app/client/platforms/bytedance.ts (4 hunks)
- app/client/platforms/glm.ts (1 hunks)
- app/client/platforms/google.ts (1 hunks)
- app/client/platforms/iflytek.ts (4 hunks)
- app/client/platforms/moonshot.ts (1 hunks)
- app/client/platforms/openai.ts (1 hunks)
- app/client/platforms/tencent.ts (4 hunks)
- app/client/platforms/xai.ts (1 hunks)
- app/store/chat.ts (3 hunks)
- app/utils.ts (1 hunks)
- app/utils/chat.ts (3 hunks)
- app/utils/stream.ts (1 hunks)
Additional context used

Biome
app/store/chat.ts
[error] 656-658: The assignment should not be in an expression.
The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.
(lint/suspicious/noAssignInExpressions)
Additional comments (24)
app/utils/stream.ts (2)
22-22: Great improvement in type safety!

The change from `Promise<any>` to `Promise<Response>` is a significant improvement that:
- Provides better type safety
- Matches the Web Fetch API's return type
- Makes the function more predictable and easier to use
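As a small illustration of what the narrower type buys callers (the import path and export name are assumed here):

```typescript
import { fetch as tauriFetch } from "@/app/utils/stream"; // path assumed

async function probe(url: string): Promise<string> {
  const res = await tauriFetch(url);
  // With Promise<Response>, these members are type-checked instead of `any`:
  console.log(res.status, res.headers.get("content-type"));
  return res.text();
}
```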
Line range hint 22-102: Verify error handling across different scenarios.
Please ensure that error handling works correctly in the following scenarios:
- Network failures
- Malformed responses from Tauri
- Aborted requests
- Large payloads
Verification successful
Based on the search results, I can now generate the final response as there is sufficient evidence to verify the error handling implementation:
Error handling implementation is robust and follows established patterns
The error handling in the stream fetch implementation aligns with the codebase's error handling patterns:
- Network failures: Returns a Response with status 599 when the stream fetch fails
- Malformed responses: Properly closes the writer and handles cleanup via the `close()` function
- Aborted requests: Implements proper signal handling with event listener cleanup
- Large payloads: Uses TransformStream for efficient streaming and handles write errors
The implementation matches error handling patterns seen across the codebase, including:
- Consistent error status code usage (599 for failures)
- Proper cleanup of resources and event listeners
- Error propagation through Promise chains
- Console error logging for debugging
Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for tests covering error scenarios
rg -l "describe|test|it.*fetch.*error" --type ts
# Look for error handling patterns in related files
rg "catch|throw|Error|status.*599" --type ts
Length of output: 16456
app/client/platforms/glm.ts (1)
180-180: LGTM! Verify streaming flow consistency.

The updated `onFinish` callback correctly passes both the message and response object. However, we should ensure consistent behavior in the streaming flow.
Let's verify the streaming implementation:
Verification successful
Streaming implementation is consistent with non-streaming flow
The verification shows that both streaming and non-streaming implementations correctly pass the message and response object to `onFinish`:

- Non-streaming: `options.onFinish(message, res)`
- Streaming: `options.onFinish(responseText + remainText, responseRes)`
Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if the streaming flow handles the response object consistently
rg -A 10 "onFinish.*Response" "app/utils/chat.ts"
Length of output: 49
Script:
#!/bin/bash
# Let's check the streaming implementation in the chat utility and GLM client
rg -A 10 "onFinish" "app/utils/chat.ts" "app/client/platforms/glm.ts"
Length of output: 1049
app/client/platforms/moonshot.ts (1)
183-183: LGTM! The onFinish callback update aligns with the PR objectives.

The addition of the response object to the `onFinish` callback provides better response handling capabilities, consistent with the changes across other API classes.
app/client/platforms/bytedance.ts (3)
133-133: LGTM: Well-scoped response variable declaration.
The variable is correctly typed and appropriately scoped for use across the streaming handlers.
245-245: LGTM: Consistent response handling in non-streaming mode.
The implementation maintains parity between streaming and non-streaming modes by passing the response object to onFinish in both cases.
163-163: Verify error handling for the enhanced onFinish callback.
The response object is correctly passed to onFinish, enhancing error handling capabilities. However, we should verify that all error paths properly handle the response object.
Verification successful
Let me gather more context about the error handling implementations.
Let me check how the error handling is implemented in the store where the onFinish callback is consumed.
Based on the comprehensive analysis of the codebase, I can now provide the final response:
Error handling for the enhanced onFinish callback is properly implemented
The response object is consistently handled across the codebase:
- All platform implementations properly pass the response object to onFinish
- Error status codes (especially 401) are properly handled with appropriate error messages
- The chat store correctly checks response status (200) before processing the message
- Error paths include both streaming and non-streaming scenarios
Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for other onFinish implementations to ensure consistent error handling
rg "onFinish.*Response" --type ts
Length of output: 218
Script:
#!/bin/bash
# Search for onFinish usage patterns across the codebase
rg "onFinish" -A 3 --type ts
Length of output: 5502
Script:
#!/bin/bash
# Search for error handling in chat store
rg "status.*=== (?:4|5)" -A 3 --type ts
Length of output: 1683
app/client/platforms/iflytek.ts (2)
120-120: LGTM: Response variable declaration

The `responseRes` variable is properly declared with the correct type and scope.

147-147: Verify onFinish handler compatibility
The response object is now correctly passed to onFinish. This change aligns with the standardization of onFinish signatures across the codebase.
Verification successful
The previous ast-grep pattern didn't yield results, so let's try with ripgrep to get a broader view of how onFinish is used and defined across the codebase. This will help us verify the compatibility of the handler signatures.
Verified: onFinish handler signature is consistent across the codebase
The ripgrep results show that the `onFinish` handler consistently accepts two parameters across all implementations:

- The type definition in `app/client/api.ts` explicitly declares `onFinish: (message: string, responseRes: Response) => void`
- All platform implementations (iflytek, openai, anthropic, etc.) follow this signature by passing both the message and response object
- Both streaming and non-streaming code paths maintain this contract

The change in `iflytek.ts` correctly aligns with the standardized signature used throughout the codebase.
Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Verify that all implementations of onFinish in the codebase accept the Response parameter
ast-grep --pattern 'onFinish($_, $_)'
Length of output: 39
Script:
#!/bin/bash
# Search for onFinish usage patterns with ripgrep for better context
rg "onFinish" -A 2 -B 2
# Also search for type definitions or interfaces that might define onFinish
rg "onFinish.*:" -A 2 -B 2
Length of output: 7443
app/client/platforms/alibaba.ts (2)
146-146: LGTM: Response object tracking added.

The addition of `responseRes` enables proper response object tracking throughout the streaming process.

176-176: Verify onFinish callback implementation in consumers.

The modification to include `responseRes` in the `onFinish` callback is correct. However, ensure all consumers of this API are updated to handle the additional parameter.
Verification successful
The previous command failed because of incorrect file type specification. Let's try with the correct file type flags and also look for type definitions to understand the callback interface.
The onFinish callback modification is consistent across the codebase
The verification shows that:
- The type definition in `app/client/api.ts` correctly declares `onFinish` with both message and Response parameters
- All platform implementations (alibaba, baidu, tencent, etc.) consistently use the callback with both parameters
- The consumers in `app/store/chat.ts` properly handle both parameters, using the Response status for additional logic
Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for onFinish implementations to verify they handle the Response parameter
rg -t typescript -t javascript "onFinish.*=.*=>" --no-heading
Length of output: 98
Script:
#!/bin/bash
# Search for onFinish usage patterns with ripgrep
rg "onFinish" -t ts -t js --no-heading -A 2
# Also search for type definitions of onFinish
rg "onFinish.*:" -t ts -t js --no-heading -A 1
Length of output: 4290
app/client/platforms/tencent.ts (3)
145-145: LGTM: Response storage enhancement

The addition of the `responseRes` variable improves response handling by preserving the full Response object for later use.

175-175: LGTM: Enhanced streaming response handling

The streaming path now correctly captures and forwards the Response object while maintaining existing error handling capabilities.
Also applies to: 191-192
Line range hint 145-257: Verify consistency with other platform implementations
Let's ensure that the response handling pattern is consistent across other platform implementations.
Verification successful
Let me analyze the response handling patterns more deeply by checking the streaming implementations.
Response handling pattern is consistent across platform implementations
The verification shows that the response handling pattern in the Tencent platform implementation is consistent with other platforms that support streaming (Alibaba, Baidu, ByteDance, iFlytek). All these implementations:
- Use the same
fetchEventSource
for streaming responses - Share similar
onopen
,onmessage
handlers - Follow the same pattern for handling text responses and error cases
- Use consistent animation and text accumulation approach
- Call
onFinish
with the same parameter structure (message and response object)
The changes are well-aligned with the established patterns across the codebase.
π Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for onFinish implementations in other platform files
echo "Checking onFinish implementations in other platform files:"
rg -A 2 "onFinish\(" "app/client/platforms/"
Length of output: 3183
Script:
#!/bin/bash
# Search for streaming response handling patterns
echo "Checking streaming implementations:"
rg -B 2 -A 5 "fetchEventSource|onmessage|onopen" "app/client/platforms/"
Length of output: 9100
app/client/platforms/baidu.ts (3)
165-165
: LGTM: Response variable declaration
The new variable is properly typed and well-placed to capture the response object.
271-271
: LGTM: Consistent response handling
The non-streaming mode now correctly passes the response object to onFinish
, maintaining consistency with the streaming implementation.
195-195
: Verify the onFinish callback implementation
The enhanced onFinish
call now correctly passes both the response text and response object, improving error handling capabilities.
app/client/platforms/google.ts (1)
277-277
: Verify type compatibility with ChatOptions interface.
The addition of the res
parameter to onFinish
needs to be type-safe. Ensure that the ChatOptions
interface in app/client/api.ts
has been updated to include the Response parameter in its onFinish
callback definition.
Verification successful
Type compatibility verified for onFinish
callback
The ChatOptions
interface in app/client/api.ts
has been correctly updated to include the Response parameter: onFinish: (message: string, responseRes: Response) => void
. This matches the usage in the Google API client implementation.
Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if ChatOptions interface has been updated to include Response parameter
# Expected: Should find the updated interface definition
# Search for ChatOptions interface definition
ast-grep --pattern 'interface ChatOptions {
$$$
onFinish($$$): void;
$$$
}'
# Alternatively, search for the specific parameter
rg -t ts "onFinish.*Response.*\)" app/client/api.ts
Length of output: 199
app/client/api.ts (1)
73-73
: Breaking Change: Updated onFinish callback signature
The addition of the responseRes
parameter to onFinish
improves error handling capabilities but constitutes a breaking change that requires updates to all implementations.
Let's verify that all implementations have been updated:
Verification successful
Breaking change has been properly implemented across the codebase
The verification shows that all implementations have been updated to use the new onFinish(message, responseRes)
signature:
- All platform implementations (OpenAI, Google, Anthropic, etc.) correctly pass both message and response parameters
- Stream handlers in various platforms properly handle the responseRes parameter
- Error cases (like in Anthropic platform) correctly create and pass Response objects
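For downstream callers, adapting to the breaking change is mostly a matter of accepting and checking the extra parameter. A hedged sketch (import paths and the `api.llm.chat` usage are assumed to mirror how the store invokes the clients):

```typescript
import { ClientApi } from "@/app/client/api"; // path assumed
import type { RequestMessage, LLMConfig } from "@/app/client/api";

function askModel(api: ClientApi, messages: RequestMessage[], config: LLMConfig) {
  api.llm.chat({
    messages,
    config,
    onFinish(message, responseRes) {
      // New second argument: check the HTTP status before trusting the text.
      if (responseRes?.status !== 200) {
        console.error("[Chat] request failed with status", responseRes?.status);
        return;
      }
      console.log("[Chat] finished:", message);
    },
    onError(error) {
      console.error("[Chat] request error", error);
    },
  });
}
```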
Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for onFinish implementations that might need updating
rg "onFinish.*=.*\(" --type ts
# Search for chat method implementations that should pass the Response
rg "onFinish\(((?!Response).)*\)" --type ts
Length of output: 321
Script:
#!/bin/bash
# Search for onFinish implementations and their usage
rg "onFinish" --type ts -A 2
# Also search for specific class implementations that might use this interface
ast-grep --pattern 'class $_ implements ChatOptions {
$$$
}'
# And search for direct assignments to onFinish
ast-grep --pattern 'onFinish = $_'
Length of output: 4055
app/utils.ts (2)
269-271
: LGTM! The DALL-E 3 check is a logical addition.
The addition of the DALL-E 3 check to the vision model detection is appropriate since DALL-E 3 is indeed a vision-capable model. The code maintains good readability with clear conditions.
269-271
: Verify the relationship between vision model changes and onFinish functionality.
The changes to the vision model detection logic appear unrelated to the PR's main focus on enhancing the onFinish
method's signature. Could you clarify how these changes support the PR's objectives?
app/client/platforms/anthropic.ts (2)
327-327
: LGTM! Enhanced response handling.
The modification to pass the raw response object to onFinish
is a good improvement, allowing consumers to access response metadata when needed while maintaining backward compatibility.
320-327
: Verify consistent response handling across API implementations.
These changes enhance error handling by providing access to the full response object. To maintain architectural consistency, ensure that other API implementations (if any) follow the same pattern of passing both the message and response object to onFinish
.
Verification successful
Let me gather more information about how other implementations handle their responses.
Response handling is consistent across all API implementations
The verification shows that all API implementations in the codebase follow the same pattern of passing both the message and response object to onFinish
. This includes both streaming and non-streaming implementations:
- Non-streaming APIs (XAI, GLM, Moonshot, etc.) use:
options.onFinish(message, res)
- Streaming APIs (Tencent, Baidu, Alibaba, etc.) use:
options.onFinish(responseText + remainText, responseRes)
- Error handling in Anthropic's implementation matches the pattern:
options.onFinish("", new Response(null, { status: 400 }))
The changes maintain architectural consistency across all LLM API implementations.
Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if other API implementations follow the same pattern
# Search for other API implementations and their onFinish calls
rg -A 2 "implements LLMApi" app/client/platforms/
rg "onFinish.*Response" app/client/platforms/
Length of output: 2429
Script:
#!/bin/bash
# Search for onFinish calls in chat/completion implementations across all API files
rg -B 2 -A 2 "onFinish\(" app/client/platforms/ --type ts
# Search for response handling patterns
rg -B 2 -A 2 "const res = await fetch" app/client/platforms/ --type ts
Length of output: 9287
app/client/platforms/openai.ts (1)
364-364
: Verify type safety for the updated onFinish
callback.
The change to pass both message
and res
to onFinish
looks good, as it provides access to the raw response for advanced use cases. However, we should ensure type safety.
Let's verify the interface definition and usage:
Verification successful
Let me try a different search strategy to verify the type safety.
Type safety for onFinish
callback is verified and consistent
The implementation is type-safe and consistent across the codebase:
- The
ChatOptions
interface inapp/client/api.ts
correctly definesonFinish
with signature:onFinish: (message: string, responseRes: Response) => void
- All LLM implementations (
ChatGPTApi
,ClaudeApi
,GeminiProApi
, etc.) consistently pass both the message and response object toonFinish
- The usage in chat store and utilities also aligns with this type signature
- The change in
openai.ts
matches the interface contract and follows the same pattern as other implementations
Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check the LLMApi interface definition and other implementations
# to ensure consistent typing of the onFinish callback
# Search for the ChatOptions interface definition
echo "Searching for ChatOptions interface definition..."
rg --type typescript "interface ChatOptions" -A 10
# Search for other implementations of the chat method
echo "Searching for other chat method implementations..."
ast-grep --pattern 'interface LLMApi {
$$$
chat(options: ChatOptions): $$$
$$$
}'
Length of output: 397
Script:
#!/bin/bash
# Search for ChatOptions and LLMApi definitions
echo "Searching for ChatOptions and LLMApi..."
rg "ChatOptions|LLMApi" -A 5
# Search for onFinish usage in the codebase
echo "Searching for onFinish usage..."
rg "onFinish" -A 2
Length of output: 20372
```diff
@@ -173,7 +173,7 @@ export class XAIApi implements LLMApi {
       const resJson = await res.json();
       const message = this.extractMessage(resJson);
-      options.onFinish(message);
+      options.onFinish(message, res);
```
Caution: Response object might be consumed.

The Response object being passed to `onFinish` has already been consumed by `res.json()`. This could lead to issues if the caller tries to read the response body again.

Consider one of these approaches:
- const resJson = await res.json();
- const message = this.extractMessage(resJson);
- options.onFinish(message, res);
+ const resClone = res.clone();
+ const resJson = await res.json();
+ const message = this.extractMessage(resJson);
+ options.onFinish(message, resClone);
Or:
- const resJson = await res.json();
- const message = this.extractMessage(resJson);
- options.onFinish(message, res);
+ const resJson = await res.json();
+ const message = this.extractMessage(resJson);
+ options.onFinish(message, { status: res.status, statusText: res.statusText, headers: res.headers });
```ts
onFinish(message, responseRes) {
  if (responseRes?.status === 200) {
    console.log("[Memory] ", message);
    get().updateCurrentSession((session) => {
      session.lastSummarizeIndex = lastSummarizeIndex;
      session.memoryPrompt = message; // Update the memory prompt for stored it in local storage
    });
  }
```
Refactor suggestion
Improve error handling and remove redundant comment.
Consider these improvements:
- Handle non-200 status codes explicitly
- Remove the redundant comment about local storage
- onFinish(message, responseRes) {
- if (responseRes?.status === 200) {
- console.log("[Memory] ", message);
- get().updateCurrentSession((session) => {
- session.lastSummarizeIndex = lastSummarizeIndex;
- session.memoryPrompt = message; // Update the memory prompt for stored it in local storage
- });
- }
+ onFinish(message, responseRes) {
+ if (responseRes?.status !== 200) {
+ console.error("[Memory] Failed to update memory prompt:", responseRes?.status);
+ return;
+ }
+ console.log("[Memory] ", message);
+ get().updateCurrentSession((session) => {
+ session.lastSummarizeIndex = lastSummarizeIndex;
+ session.memoryPrompt = message;
+ });
},
```ts
onFinish(message, responseRes) {
  if (responseRes?.status === 200) {
    get().updateCurrentSession(
      (session) =>
        (session.topic =
          message.length > 0 ? trimTopic(message) : DEFAULT_TOPIC),
    );
  }
```
Refactor suggestion
Improve error handling and code clarity.
While the status check is a good addition, consider these improvements:
- Handle non-200 status codes explicitly
- Restructure the assignment for better readability
- onFinish(message, responseRes) {
- if (responseRes?.status === 200) {
- get().updateCurrentSession(
- (session) =>
- (session.topic =
- message.length > 0 ? trimTopic(message) : DEFAULT_TOPIC),
- );
- }
+ onFinish(message, responseRes) {
+ if (responseRes?.status !== 200) {
+ console.error("[Topic] Failed to update topic:", responseRes?.status);
+ return;
+ }
+ get().updateCurrentSession((session) => {
+ const newTopic = message.length > 0 ? trimTopic(message) : DEFAULT_TOPIC;
+ session.topic = newTopic;
+ });
},
Tools

Biome
[error] 656-658: The assignment should not be in an expression.
The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.
(lint/suspicious/noAssignInExpressions)
```diff
@@ -174,6 +174,7 @@ export function stream(
   let finished = false;
   let running = false;
   let runTools: any[] = [];
+  let responseRes: Response;
```
Potential undefined `responseRes` in `options.onFinish`

There's a risk that `responseRes` may be `undefined` when `options.onFinish` is called. Since `responseRes` is assigned within the `onopen` callback, if `finish()` is invoked before `onopen` is triggered (e.g., due to an early error or abort), `responseRes` will remain `undefined`. This could lead to runtime errors if `onFinish` expects a defined `Response` object.

Consider updating the variable declaration to allow for `undefined` and ensure that `onFinish` can handle this case. Apply this diff to adjust the variable declaration:
- let responseRes: Response;
+ let responseRes: Response | undefined;
Additionally, ensure that any usage of `responseRes` in `onFinish` accounts for the possibility of it being `undefined`.
Also applies to: 276-276
Branch history summary: the PR branch was synced with upstream main, picking up the onFinish-related commits (44fc5b5 "fix: onfinish responseRes", 2d3f7c9 "fix: vision model dalle3", e49466f "feat: update real 'currentSession'") along with unrelated upstream work such as the realtime-chat feature, Claude 3.5 Haiku support, GLM fixes, and dependency updates.
Change Type

Description of Change

Additional Information
Summary by CodeRabbit
Release Notes
New Features
- The `onFinish` callback now receives the response object, allowing for more detailed handling of responses.

Bug Fixes

- Improved error handling in `chat` methods to provide clearer feedback when requests fail or are aborted.

Documentation