Hello, great work! When I try to reproduce the results of RepoCoder, I run into an issue:
When I run codegen_inference.py with the prompt file rg-one-gram-ws-20-ss-2.jsonl, I get the error: the current text generation call will exceed the model's predefined maximum length (2048).
I noticed that you limit the retrieval context to 1k tokens for the CodeGen model. However, the original prompt in api_level_completion_1k_context_codegen.test.jsonl is already long, and the additional retrieval context is concatenated directly onto it, so the total input length can reach 1900+ tokens, which triggers this error. How should I solve this problem? Thanks!
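As a workaround while waiting for an answer, one option is to left-truncate the tokenized prompt so that the prompt length plus the generation budget stays within the model's 2048-token window. This is only a sketch of the idea, not the repo's actual code; the limit and generation budget below (MAX_MODEL_LEN, MAX_NEW_TOKENS) are assumed values, and token_ids stands in for whatever the tokenizer returns:

```python
# Sketch: left-truncate the prompt so prompt + generation fits the context window.
# MAX_MODEL_LEN and MAX_NEW_TOKENS are assumptions, not values from the repo.

MAX_MODEL_LEN = 2048
MAX_NEW_TOKENS = 100  # assumed generation budget


def truncate_prompt(token_ids, max_model_len=MAX_MODEL_LEN,
                    max_new_tokens=MAX_NEW_TOKENS):
    """Keep only the last tokens so generation cannot exceed the limit.

    Truncating from the left keeps the code immediately before the
    completion point, which matters most for code completion.
    """
    budget = max_model_len - max_new_tokens
    return token_ids[-budget:] if len(token_ids) > budget else token_ids


# A 2000-token prompt is cut down to the last 1948 tokens,
# leaving room for 100 generated tokens within the 2048 limit.
prompt = list(range(2000))
truncated = truncate_prompt(prompt)
assert len(truncated) + MAX_NEW_TOKENS <= MAX_MODEL_LEN
```

Truncating from the left (rather than the right) seems preferable here, since the tokens nearest the completion cursor carry the most signal for the model.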