
Insufficient GPT token count #27

Open
caboodles opened this issue Jan 31, 2024 · 2 comments

Comments

@caboodles
Contributor

No response is returned because the GPT token limit is exceeded:

```
/workspace/app/node_modules/openai/error.js:44
            return new BadRequestError(status, error, message, headers);
                   ^

BadRequestError: 400 This model's maximum context length is 4097 tokens. However, your messages resulted in 9132 tokens. Please reduce the length of the messages.
    at APIError.generate (/workspace/app/node_modules/openai/error.js:44:20)
    at OpenAI.makeStatusError (/workspace/app/node_modules/openai/core.js:255:33)
    at OpenAI.makeRequest (/workspace/app/node_modules/openai/core.js:294:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async /workspace/app/node_modules/@langchain/openai/dist/chat_models.cjs:649:29
    at async RetryOperation._fn (/workspace/app/node_modules/p-retry/index.js:50:12) {
  status: 400,
  headers: {
    'access-control-allow-origin': '*',
    'alt-svc': 'h3=":443"; ma=86400',
    'cf-cache-status': 'DYNAMIC',
    'cf-ray': '84e1aaba69f2edfd-ICN',
    connection: 'keep-alive',
    'content-length': '281',
    'content-type': 'application/json',
    date: 'Wed, 31 Jan 2024 11:38:11 GMT',
    'openai-organization': 'user-8ajoec35ib7hi96vqkdktor7',
    'openai-processing-ms': '24',
    'openai-version': '2020-10-01',
    server: 'cloudflare',
    'set-cookie': '__cf_bm=kNxVjphN92oG5koQyhZv.UKfI4d7vpyqbzn9QL86bfU-1706701091-1-AdrsP8/+Ds72nNnfufoeQydJJov+PB5hyP5mQjbBqaEUzbC7CDeW/xKO35ZBSAPywhEvV7OeTk0KKbXRCZQgWj8=; path=/; expires=Wed, 31-Jan-24 12:08:11 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None, _cfuvid=0v3CnWu5zimAnHbZydxUNf6iaT2oIMsFhp3S9gIPngc-1706701091305-0-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None',
    'strict-transport-security': 'max-age=15724800; includeSubDomains',
    'x-ratelimit-limit-requests': '10000',
    'x-ratelimit-limit-tokens': '60000',
    'x-ratelimit-remaining-requests': '9999',
    'x-ratelimit-remaining-tokens': '54738',
    'x-ratelimit-reset-requests': '8.64s',
    'x-ratelimit-reset-tokens': '5.262s',
    'x-request-id': '921a6b6911e568f7a7919cbea1832160'
  },
  error: {
    message: "This model's maximum context length is 4097 tokens. However, your messages resulted in 9132 tokens. Please reduce the length of the messages.",
    type: 'invalid_request_error',
    param: 'messages',
    code: 'context_length_exceeded'
  },
  code: 'context_length_exceeded',
  param: 'messages',
  type: 'invalid_request_error',
  attemptNumber: 1,
  retriesLeft: 6
}
```

@caboodles
Contributor Author

For prompting, this can be solved by switching to a model with a larger context window, e.g.:

```js
const model = new ChatOpenAI({
  modelName: "gpt-3.5-turbo-0125",
  apiKey: apiKey,
});
```
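Until (or in addition to) swapping models, the overrun can also be avoided by trimming the conversation before each call. The sketch below is illustrative only: the `trimMessages` helper is made up for this issue, and the ~4-characters-per-token estimate is a rough heuristic, not a real tokenizer (tiktoken would give exact counts):

```js
// Rough guard against context_length_exceeded: estimate tokens with the
// ~4-characters-per-token heuristic and drop the oldest non-system
// messages until the estimate fits within the model's context window.
const estimateTokens = (text) => Math.ceil(text.length / 4);

function trimMessages(messages, maxTokens) {
  const trimmed = [...messages]; // don't mutate the caller's array
  let total = trimmed.reduce((sum, m) => sum + estimateTokens(m.content), 0);
  // Drop from index 1 so the system message at index 0 is preserved.
  while (total > maxTokens && trimmed.length > 1) {
    const [removed] = trimmed.splice(1, 1);
    total -= estimateTokens(removed.content);
  }
  return trimmed;
}
```

This keeps the request under the limit at the cost of forgetting the oldest turns; summarizing the dropped turns instead would preserve more context.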

For embedding:

OpenAI does not appear to offer an embedding model with a larger max input token limit.

  • Option 1: use an embedder other than OpenAI
  • Option 2: have the model summarize the text first, then embed the summary
  • Option 3: split the txt file into chunks before storing
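Option 3 can be sketched as a plain character-based splitter with overlap (the `splitText` helper and its defaults below are hypothetical; LangChain's RecursiveCharacterTextSplitter is a ready-made alternative):

```js
// Split a long text into overlapping chunks so each chunk fits the
// embedding model's input limit. chunkSize/overlap are character counts;
// ~4 characters per token is a rough heuristic for English text.
function splitText(text, chunkSize = 1000, overlap = 200) {
  if (chunkSize <= overlap) throw new Error("chunkSize must exceed overlap");
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```

Each chunk would then be embedded and stored separately; the overlap reduces the chance that a relevant passage is cut in half at a chunk boundary.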

@jnnkk
Contributor

jnnkk commented Feb 8, 2024

Any estimate on when this issue could be resolved?

Development

No branches or pull requests

2 participants