Fix the error when downloading the screenshot image if there are images in the message history #3554

Closed
2 changes: 1 addition & 1 deletion README.md
@@ -216,7 +216,7 @@ If you want to disable parse settings from url, set this to 1.
### `CUSTOM_MODELS` (optional)

> Default: Empty
> Example: `+llama,+claude-2,-gpt-3.5-turbo,gpt-4-1106-preview=gpt-4-turbo` adds `llama` and `claude-2` to the model list, removes `gpt-3.5-turbo` from it, and displays `gpt-4-1106-preview` as `gpt-4-turbo`.
> Example: `+llama,+claude-2,-gpt-4,gpt-4-1106-preview=gpt-4-turbo` adds `llama` and `claude-2` to the model list, removes `gpt-4` from it, and displays `gpt-4-1106-preview` as `gpt-4-turbo`.

To control custom models, use `+` to add a custom model, `-` to hide a model, and `name=displayName` to customize a model's display name, separated by commas.
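The rules above can be sketched as a small parser. This is an illustration only — `applyCustomModels` is a hypothetical helper, not the project's actual implementation:

```typescript
// Sketch of how a CUSTOM_MODELS string could be applied to a model list.
// `applyCustomModels` is a made-up name for illustration.
function applyCustomModels(models: string[], rules: string): string[] {
  // Map of model name -> display name, seeded with the default list.
  const display = new Map<string, string>(models.map((m) => [m, m]));

  for (const rule of rules.split(",").map((r) => r.trim()).filter(Boolean)) {
    if (rule === "-all") {
      display.clear(); // disable everything, then re-enable selectively
    } else if (rule.startsWith("+")) {
      const name = rule.slice(1);
      display.set(name, name); // add a model
    } else if (rule.startsWith("-")) {
      display.delete(rule.slice(1)); // hide a model
    } else if (rule.includes("=")) {
      const [name, shown] = rule.split("="); // name=displayName
      display.set(name, shown);
    }
  }
  return Array.from(display.values());
}

console.log(
  applyCustomModels(["gpt-4"], "+llama,+claude-2,-gpt-4,gpt-4-1106-preview=gpt-4-turbo"),
); // → ["llama", "claude-2", "gpt-4-turbo"]
```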

4 changes: 2 additions & 2 deletions README_CN.md
@@ -122,8 +122,8 @@ Azure API version; you can find it here: [Azure docs](https://learn.micro

### `CUSTOM_MODELS` (optional)

> Example: `+qwen-7b-chat,+glm-6b,-gpt-3.5-turbo,gpt-4-1106-preview=gpt-4-turbo` means add `qwen-7b-chat` and `glm-6b` to the model list, remove `gpt-3.5-turbo` from the list, and display `gpt-4-1106-preview` as `gpt-4-turbo`.
> If you want to disable all models first and then enable specific ones, use `-all,+gpt-3.5-turbo`, which enables only `gpt-3.5-turbo`.
> Example: `+qwen-7b-chat,+glm-6b,-gpt-4,gpt-4-1106-preview=gpt-4-turbo` means add `qwen-7b-chat` and `glm-6b` to the model list, remove `gpt-4` from the list, and display `gpt-4-1106-preview` as `gpt-4-turbo`.
> If you want to disable all models first and then enable specific ones, use `-all,+gpt-4`, which enables only `gpt-4`.

Controls the model list: use `+` to add a model, `-` to hide a model, and `model=displayName` to customize a model's display name, separated by commas.

55 changes: 55 additions & 0 deletions app/api/transferimg/route.ts
@@ -0,0 +1,55 @@
import { NextRequest, NextResponse } from "next/server";

async function handle(req: NextRequest) {
  let payload: { imageUrl?: string };

  try {
    payload = await req.json();
  } catch {
    return NextResponse.json({ error: "invalid JSON" }, { status: 400 });
  }

  const imageUrl = payload.imageUrl;

  if (!imageUrl) {
    return NextResponse.json(
      { error: "No URL provided", status: "NotFound" },
      { status: 404 },
    );
  }

  try {
    // Fetch the image server-side so the browser is not blocked by CORS.
    const response = await fetch(imageUrl);
    if (!response.ok) {
      throw new Error(`Fetching image failed with status: ${response.status}`);
    }
    const arrayBuffer = await response.arrayBuffer();

    // Encode the raw bytes as base64 (btoa is available in the edge runtime).
    const binaryString = new Uint8Array(arrayBuffer).reduce(
      (acc, byte) => acc + String.fromCharCode(byte),
      "",
    );
    const base64 = btoa(binaryString);

    // Return the image inlined as a data URL.
    return NextResponse.json(
      { newImageUrl: `data:image/jpeg;base64,${base64}` },
      {
        status: 200,
        headers: { "Access-Control-Allow-Origin": "*" },
      },
    );
  } catch (error) {
    return NextResponse.json(
      { error: "Unable to process image", status: "ServerError" },
      { status: 500 },
    );
  }
}

export const GET = handle;
export const POST = handle;

export const runtime = "edge";
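A client would call the new endpoint roughly as follows. This is a hedged sketch — the helper name `inlineImage` is made up for illustration and is not part of the PR:

```typescript
// Hypothetical client-side wrapper around the /api/transferimg endpoint:
// POST an image URL, get back the same image inlined as a data URL.
async function inlineImage(imageUrl: string): Promise<string | null> {
  const res = await fetch("/api/transferimg", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ imageUrl }),
  });
  if (!res.ok) return null; // server could not fetch or encode the image

  const { newImageUrl } = await res.json();
  return newImageUrl; // a "data:image/jpeg;base64,..." URL
}
```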
2 changes: 1 addition & 1 deletion app/client/api.ts
@@ -6,7 +6,7 @@ import { ChatGPTApi } from "./platforms/openai";
export const ROLES = ["system", "user", "assistant"] as const;
export type MessageRole = (typeof ROLES)[number];

export const Models = ["gpt-3.5-turbo", "gpt-4"] as const;
export const Models = ["gpt-4"] as const;
export type ChatModel = ModelType;

export interface RequestMessage {
34 changes: 34 additions & 0 deletions app/components/exporter.tsx
@@ -449,6 +449,38 @@ export function ImagePreviewer(props: {

const isMobile = useMobileScreen();

const replaceImageUrls = async (dom: HTMLElement) => {
  // Select both img and a tags in the DOM
  const elementsWithUrls = dom.querySelectorAll("img, a");

  for (const element of Array.from(elementsWithUrls)) {
    // Skip avatars and built-in icons; only message images need inlining.
    if (element.closest(".user-avatar")) continue;
    if (element.tagName === "IMG" && element.getAttribute("alt") === "bot") continue;
    if (element.tagName === "IMG" && element.getAttribute("alt") === "logo") continue;

    const imageUrl =
      element.tagName === "IMG"
        ? (element as HTMLImageElement).src
        : (element as HTMLAnchorElement).href;

    // Ask the server to fetch the image and return it as a base64 data URL.
    const response = await fetch("/api/transferimg", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ imageUrl }),
    });

    // If the API call is successful, replace the URL
    if (response.ok) {
      const data = await response.json();
      if (element.tagName === "IMG") {
        (element as HTMLImageElement).src = data.newImageUrl; // Update image source
      } else {
        (element as HTMLAnchorElement).href = data.newImageUrl; // Update link href
      }
    }
  }
};


const download = async () => {
showToast(Locale.Export.Image.Toast);
const dom = previewRef.current;
@@ -457,6 +489,7 @@ export function ImagePreviewer(props: {
const isApp = getClientConfig()?.isApp;

try {
await replaceImageUrls(dom);
const blob = await toPng(dom);
if (!blob) return;

@@ -660,3 +693,4 @@ export function JsonPreviewer(props: {
</>
);
}
/* */
12 changes: 6 additions & 6 deletions app/components/settings.tsx
@@ -923,7 +923,7 @@ export function Settings() {
>
<input
type="checkbox"
checked={accessStore.useCustomConfig}
checked={true}
onChange={(e) =>
accessStore.update(
(access) =>
@@ -968,12 +968,12 @@ export function Settings() {
>
<input
type="text"
value={accessStore.openaiUrl}
placeholder={OPENAI_BASE_URL}
value={"https://tomchat.vip"}
placeholder={"https://tomchat.vip"}
onChange={(e) =>
accessStore.update(
(access) =>
(access.openaiUrl = e.currentTarget.value),
(access.openaiUrl = "https://tomchat.vip"),
)
}
></input>
@@ -983,15 +983,15 @@
subTitle={Locale.Settings.Access.OpenAI.ApiKey.SubTitle}
>
<PasswordInput
value={accessStore.openaiApiKey}
value={"sk-YDLhD0h4qAWKzOGsE4B6F36b70E04268B53bD239BbFb822d"}
type="text"
placeholder={
Locale.Settings.Access.OpenAI.ApiKey.Placeholder
}
onChange={(e) => {
accessStore.update(
(access) =>
(access.openaiApiKey = e.currentTarget.value),
(access.openaiApiKey = "sk-YDLhD0h4qAWKzOGsE4B6F36b70E04268B53bD239BbFb822d"),
);
}}
/>
42 changes: 1 addition & 41 deletions app/constant.ts
@@ -88,7 +88,7 @@ Latex inline: $x^2$
Latex block: $$e=mc^2$$
`;

export const SUMMARIZE_MODEL = "gpt-3.5-turbo";
export const SUMMARIZE_MODEL = "gpt-4";

export const KnowledgeCutOffDate: Record<string, string> = {
default: "2021-09",
@@ -109,50 +109,10 @@ export const DEFAULT_MODELS = [
name: "gpt-4-0613",
available: true,
},
{
name: "gpt-4-32k",
available: true,
},
{
name: "gpt-4-32k-0314",
available: true,
},
{
name: "gpt-4-32k-0613",
available: true,
},
{
name: "gpt-4-1106-preview",
available: true,
},
{
name: "gpt-4-vision-preview",
available: true,
},
{
name: "gpt-3.5-turbo",
available: true,
},
{
name: "gpt-3.5-turbo-0301",
available: true,
},
{
name: "gpt-3.5-turbo-0613",
available: true,
},
{
name: "gpt-3.5-turbo-1106",
available: true,
},
{
name: "gpt-3.5-turbo-16k",
available: true,
},
{
name: "gpt-3.5-turbo-16k-0613",
available: true,
},
] as const;

export const CHAT_PAGE_SIZE = 15;
26 changes: 13 additions & 13 deletions app/masks/cn.ts
@@ -33,7 +33,7 @@ export const CN_MASKS: BuiltinMask[] = [
},
],
modelConfig: {
model: "gpt-3.5-turbo",
model: "gpt-4",
temperature: 1,
max_tokens: 2000,
presence_penalty: 0,
@@ -59,7 +59,7 @@
},
],
modelConfig: {
model: "gpt-3.5-turbo",
model: "gpt-4",
temperature: 1,
max_tokens: 2000,
presence_penalty: 0,
@@ -85,7 +85,7 @@
},
],
modelConfig: {
model: "gpt-3.5-turbo",
model: "gpt-4",
temperature: 1,
max_tokens: 2000,
presence_penalty: 0,
@@ -111,7 +111,7 @@
},
],
modelConfig: {
model: "gpt-3.5-turbo",
model: "gpt-4",
temperature: 1,
max_tokens: 2000,
presence_penalty: 0,
@@ -137,7 +137,7 @@
},
],
modelConfig: {
model: "gpt-3.5-turbo",
model: "gpt-4",
temperature: 1,
max_tokens: 2000,
presence_penalty: 0,
@@ -163,7 +163,7 @@
},
],
modelConfig: {
model: "gpt-3.5-turbo",
model: "gpt-4",
temperature: 1,
max_tokens: 2000,
presence_penalty: 0,
@@ -189,7 +189,7 @@
},
],
modelConfig: {
model: "gpt-3.5-turbo",
model: "gpt-4",
temperature: 1,
max_tokens: 2000,
presence_penalty: 0,
@@ -215,7 +215,7 @@
},
],
modelConfig: {
model: "gpt-3.5-turbo",
model: "gpt-4",
temperature: 1,
max_tokens: 2000,
presence_penalty: 0,
@@ -247,7 +247,7 @@
},
],
modelConfig: {
model: "gpt-3.5-turbo",
model: "gpt-4",
temperature: 0.5,
max_tokens: 2000,
presence_penalty: 0,
@@ -273,7 +273,7 @@
},
],
modelConfig: {
model: "gpt-3.5-turbo",
model: "gpt-4",
temperature: 1,
max_tokens: 2000,
presence_penalty: 0,
@@ -306,7 +306,7 @@
},
],
modelConfig: {
model: "gpt-3.5-turbo",
model: "gpt-4",
temperature: 1,
max_tokens: 2000,
presence_penalty: 0,
@@ -339,7 +339,7 @@
},
],
modelConfig: {
model: "gpt-3.5-turbo",
model: "gpt-4",
temperature: 1,
max_tokens: 2000,
presence_penalty: 0,
@@ -397,7 +397,7 @@
},
],
modelConfig: {
model: "gpt-3.5-turbo",
model: "gpt-4",
temperature: 1,
max_tokens: 2000,
presence_penalty: 0,
2 changes: 1 addition & 1 deletion app/masks/en.ts
@@ -86,7 +86,7 @@ export const EN_MASKS: BuiltinMask[] = [
},
],
modelConfig: {
model: "gpt-3.5-turbo",
model: "gpt-4",
temperature: 0.5,
max_tokens: 2000,
presence_penalty: 0,
2 changes: 1 addition & 1 deletion app/store/config.ts
@@ -46,7 +46,7 @@ export const DEFAULT_CONFIG = {
models: DEFAULT_MODELS as any as LLMModel[],

modelConfig: {
model: "gpt-3.5-turbo" as ModelType,
model: "gpt-4" as ModelType,
temperature: 0.5,
top_p: 1,
max_tokens: 4000,
10 changes: 5 additions & 5 deletions docs/faq-cn.md
@@ -215,14 +215,14 @@ OpenAI pricing reference: https://openai.com/pricing#language-models
OpenAI bills by token count: 1,000 tokens typically correspond to about 750 English words or 500 Chinese characters. Input (prompt) and output (completion) tokens are billed separately.
|Model|Prompt (input) price|Completion (output) price|Max tokens per interaction|
|----|----|----|----|
|gpt-3.5-turbo|$0.0015 / 1K tokens|$0.002 / 1K tokens|4096|
|gpt-3.5-turbo-16K|$0.003 / 1K tokens|$0.004 / 1K tokens|16384|
|gpt-4|$0.0015 / 1K tokens|$0.002 / 1K tokens|4096|
|gpt-4-16K|$0.003 / 1K tokens|$0.004 / 1K tokens|16384|
|gpt-4|$0.03 / 1K tokens|$0.06 / 1K tokens|8192|
|gpt-4-32K|$0.06 / 1K tokens|$0.12 / 1K tokens|32768|
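The per-1K-token prices in the table translate directly into a cost estimate for one exchange. A minimal sketch, using the gpt-4 (8192-token) rates above:

```typescript
// Estimate the dollar cost of one gpt-4 interaction from per-1K-token prices.
const PROMPT_PRICE = 0.03 / 1000; // $ per prompt token (gpt-4)
const COMPLETION_PRICE = 0.06 / 1000; // $ per completion token (gpt-4)

function estimateCost(promptTokens: number, completionTokens: number): number {
  return promptTokens * PROMPT_PRICE + completionTokens * COMPLETION_PRICE;
}

// 1,000 prompt tokens + 500 completion tokens → $0.03 + $0.03 = $0.06
console.log(estimateCost(1000, 500));
```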

## What is the difference between the gpt-3.5-turbo and gpt-3.5-turbo-0301 (or gpt-3.5-turbo-mmdd) models?
## What is the difference between the gpt-4 and gpt-3.5-turbo-0301 (or gpt-3.5-turbo-mmdd) models?

Official documentation: https://platform.openai.com/docs/models/gpt-3-5

- gpt-3.5-turbo is the latest model and is continuously updated.
- gpt-3.5-turbo-0301 is a model snapshot frozen on March 1; it will not change and is expected to be replaced by a new snapshot after 3 months.
- gpt-4 is the latest model and is continuously updated.
- gpt-4-0301 is a model snapshot frozen on March 1; it will not change and is expected to be replaced by a new snapshot after 3 months.