
Docker Run with Self Hosted Gitlab & Ollama error #1389

Open
aqidd opened this issue Dec 10, 2024 · 2 comments


aqidd commented Dec 10, 2024

Hi there, thanks for this project!

I'm trying to run the following Docker command against my self-hosted GitLab and Ollama instances:

docker run --rm -it -e CONFIG.max_model_tokens=2048 -e CONFIG.MODEL="llama3.1:8b-instruct-fp16" -e CONFIG.FALLBACK_MODELS="[llama3.1:8b-instruct-fp16]" -e OLLAMA.API_BASE="https://ai.domain.tld/" -e CONFIG.GIT_PROVIDER=gitlab -e GITLAB.PERSONAL_ACCESS_TOKEN="glpat-xxx" -e GITLAB.URL="https://gitlab.cs.tld/" codiumai/pr-agent:latest --pr_url "https://gitlab.cs.tld/bukhori.aqid/testrepo/-/merge_requests/1" review

and I got the error below:

2024-12-10 10:49:11.281 | INFO     | pr_agent.git_providers.git_provider:get_main_pr_language:272 - No languages detected
2024-12-10 10:49:13.810 | INFO     | pr_agent.tools.pr_reviewer:run:130 - Reviewing PR: https://gitlab.cs.tld/bukhori.aqid/testrepo/-/merge_requests/1 ...
2024-12-10 10:49:14.651 | INFO     | pr_agent.algo.pr_processing:get_pr_diff:73 - PR main language: Other
2024-12-10 10:49:14.655 | WARNING  | pr_agent.algo.pr_processing:retry_with_fallback_models:349 - Failed to generate prediction with llama3.1:8b-instruct-fp16
2024-12-10 10:49:15.058 | INFO     | pr_agent.algo.pr_processing:get_pr_diff:73 - PR main language: Other
2024-12-10 10:49:15.063 | WARNING  | pr_agent.algo.pr_processing:retry_with_fallback_models:349 - Failed to generate prediction with [llama3.1:8b-instruct-fp16]
2024-12-10 10:49:15.063 | ERROR    | pr_agent.tools.pr_reviewer:run:178 - Failed to review PR: Failed to generate prediction with any model of ['llama3.1:8b-instruct-fp16', '[llama3.1:8b-instruct-fp16]']
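One detail in that last line stands out: the second model entry is the literal string '[llama3.1:8b-instruct-fp16]', brackets included, which suggests the CONFIG.FALLBACK_MODELS value was passed through as a single model name rather than parsed as a list. A minimal sketch of the kind of normalization that appears to be missing (the helper is hypothetical, not pr-agent's actual parser):

```python
def parse_fallback_models(raw: str) -> list[str]:
    """Hypothetical normalization: turn '[a,b]' or '["a", "b"]' into ['a', 'b']."""
    raw = raw.strip()
    # Drop surrounding brackets if the value looks like a list literal.
    if raw.startswith("[") and raw.endswith("]"):
        raw = raw[1:-1]
    # Split on commas and strip whitespace and optional quotes from each entry.
    return [item.strip().strip("\"'") for item in raw.split(",") if item.strip()]

print(parse_fallback_models("[llama3.1:8b-instruct-fp16]"))
# prints ['llama3.1:8b-instruct-fp16']
```

If pr-agent does not perform this normalization itself, the bracketed string would be handed to the backend verbatim, which would match the failure in the log.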

I've already made sure that my deployed Ollama instance is accessible with this command:

curl https://ai.domain.tld/api/generate -d '{
  "model": "llama3.1:8b-instruct-fp16",
  "prompt": "Why is the sky blue?"
}'
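The same connectivity check can be scripted, which makes it easier to iterate on payloads. A sketch using only the standard library; the host is the placeholder from the report, and "stream": False is the Ollama option that asks for a single complete response instead of a stream. No request is sent until the commented-out urlopen call is enabled:

```python
import json
import urllib.request

def build_generate_payload(model: str, prompt: str) -> bytes:
    """JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

# Placeholder host from the report; constructing the Request does no network I/O.
req = urllib.request.Request(
    "https://ai.domain.tld/api/generate",
    data=build_generate_payload("llama3.1:8b-instruct-fp16", "Why is the sky blue?"),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req, timeout=60)  # uncomment to actually send
# print(response.read().decode())
```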

What can I do to debug this issue further?

mrT23 (Collaborator) commented Dec 11, 2024

Hi. In general, we cannot debug your local deployment, especially not with a configuration as complicated as this one (local model, GitLab server).

Some general tips:

  • The model you are using is not good enough for code.

  • Just for debugging, try setting the environment variable LOG_LEVEL=DEBUG, or run from the CLI so you can set a breakpoint.

  • To get good results, use a top model like Claude Sonnet. For a GitLab server (a highly commercial environment), I recommend using Qodo Merge Pro: https://qodo-merge-docs.qodo.ai/overview/pr_agent_pro/
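Putting the debug-logging tip together with the original invocation, a hypothetical re-run might look like the following. The hostnames, token, and model names are the placeholders from the original report; LOG_LEVEL=DEBUG is the variable suggested above, and the JSON-style quoting of FALLBACK_MODELS is an assumption about how the config loader expects list values:

```shell
docker run --rm -it \
  -e LOG_LEVEL=DEBUG \
  -e CONFIG.max_model_tokens=2048 \
  -e CONFIG.MODEL="llama3.1:8b-instruct-fp16" \
  -e CONFIG.FALLBACK_MODELS='["llama3.1:8b-instruct-fp16"]' \
  -e OLLAMA.API_BASE="https://ai.domain.tld/" \
  -e CONFIG.GIT_PROVIDER=gitlab \
  -e GITLAB.PERSONAL_ACCESS_TOKEN="glpat-xxx" \
  -e GITLAB.URL="https://gitlab.cs.tld/" \
  codiumai/pr-agent:latest \
  --pr_url "https://gitlab.cs.tld/bukhori.aqid/testrepo/-/merge_requests/1" review
```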

aqidd (Author) commented Dec 12, 2024

Hi there,

  • I tried the GitLab setup hosted with Groq, and it can review & improve the PR (although there are errors when I try the describe and update changelog commands; probably not supported yet?)
  • The model and the VM should be good enough to produce and analyze code. I use the same VM to test bolt.diy, and it can generate a code project.
  • Log level DEBUG doesn't really give much more information (see below):
2024-12-12 16:05:34.267 | INFO     | pr_agent.git_providers.git_provider:get_main_pr_language:272 - No languages detected
2024-12-12 16:05:34.285 | DEBUG    | pr_agent.tools.pr_code_suggestions:__init__:69 - AI metadata is disabled for this command
2024-12-12 16:05:34.654 | INFO     | pr_agent.tools.pr_code_suggestions:run:102 - Generating code suggestions for PR...
2024-12-12 16:05:34.656 | DEBUG    | pr_agent.tools.pr_code_suggestions:run:105 - Relevant configs
2024-12-12 16:05:36.433 | DEBUG    | pr_agent.algo.pr_processing:retry_with_fallback_models:342 - Generating prediction with llama3.1:8b-instruct-fp16
2024-12-12 16:05:43.392 | WARNING  | pr_agent.algo.pr_processing:retry_with_fallback_models:349 - Failed to generate prediction with llama3.1:8b-instruct-fp16
2024-12-12 16:05:43.392 | DEBUG    | pr_agent.algo.pr_processing:retry_with_fallback_models:342 - Generating prediction with [llama3.1:8b-instruct-fp16]
2024-12-12 16:05:44.584 | WARNING  | pr_agent.algo.pr_processing:retry_with_fallback_models:349 - Failed to generate prediction with [llama3.1:8b-instruct-fp16]
2024-12-12 16:05:44.587 | ERROR    | pr_agent.tools.pr_code_suggestions:run:211 - Failed to generate code suggestions for PR, error: Failed to generate prediction with any model of ['llama3.1:8b-instruct-fp16', '[llama3.1:8b-instruct-fp16]']
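Since pr-agent reaches Ollama through a model-routing layer rather than the raw /api/generate endpoint used in the curl check, another way to narrow this down is to exercise a chat-style request directly. This is a sketch under assumptions: the host is the placeholder from the report, and the /v1/chat/completions path assumes a reasonably recent Ollama build with its OpenAI-compatible API available. As before, nothing is sent until the urlopen line is uncommented:

```python
import json
import urllib.request

def build_chat_payload(model: str, content: str) -> bytes:
    """OpenAI-style chat body, the shape used by OpenAI-compatible endpoints."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }).encode()

# Placeholder host from the report; constructing the Request does no network I/O.
req = urllib.request.Request(
    "https://ai.domain.tld/v1/chat/completions",
    data=build_chat_payload("llama3.1:8b-instruct-fp16", "Say hello."),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req, timeout=120)  # uncomment to actually send
# print(response.read().decode())
```

If this call succeeds while pr-agent still fails, the problem is more likely in the configuration handed to pr-agent than in the model or the VM.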

Any other help would be much appreciated.
