Description
I am trying to connect Bedrock to CrewAI. Even though the model ID is correct and the agents are configured correctly at startup, whenever an agent tries to call Bedrock it receives a 400 Bad Request error caused by something LiteLLM does internally.
Steps to Reproduce
1. Set a custom LLM class for the agents
2. Provide the model_id
3. Set up the agents, manager agent, tasks, and crew
4. Call the crew.kickoff() method and wait for the output (a minimal reproduction sketch is included under Screenshots/Code snippets below)
Expected behavior
The user receives an output that answers the provided question.
Actual behavior
The agent receives a 400 error from LiteLLM. The crew then hits the max_rpm threshold, waits a minute to retry, and stays in that cycle unless I interrupt it.
Screenshots/Code snippets
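A minimal, self-contained reproduction sketch (the agent/task wording and the temperature value are illustrative; only the LLM configuration matches the failing setup):

from crewai import Agent, Crew, Task, LLM

# Same LLM configuration as in the Evidence section below;
# the temperature value is a placeholder for chat_temperature.
llm = LLM(
    model="bedrock/meta.llama3-1-70b-instruct-v1:0",
    temperature=0.2,
)

researcher = Agent(
    role="Researcher",
    goal="Answer the user's question",
    backstory="A general-purpose assistant.",
    llm=llm,
)

task = Task(
    description="Answer the question: what is CrewAI?",
    expected_output="A short, direct answer.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task], max_rpm=10)

# Fails here: every call to Bedrock returns a 400 from LiteLLM, the crew
# hits max_rpm, waits a minute, retries, and loops until interrupted.
result = crew.kickoff()
print(result)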
Operating System
Other (specify in additional context)
Python Version
3.10
crewAI Version
0.80.0
crewAI Tools Version
0.14.0
Virtual Environment
Venv
Evidence
I am using the following snippet to instantiate the CrewAI LLM class:
from crewai import LLM

llm = LLM(
    model=chat_provider + "/" + chat_model,
    temperature=chat_temperature,
)
where chat_provider = "bedrock" and chat_model = "meta.llama3-1-70b-instruct-v1:0". Following the YAML file configuration convention, the resulting model string would be something like:
llm: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
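To isolate whether the 400 comes from LiteLLM's Bedrock integration rather than from CrewAI, a sanity check that calls LiteLLM directly with the same model string may help (a sketch; it assumes AWS credentials and AWS_REGION_NAME are set in the environment):

import litellm

# Same provider/model string that CrewAI hands to LiteLLM internally.
response = litellm.completion(
    model="bedrock/meta.llama3-1-70b-instruct-v1:0",
    messages=[{"role": "user", "content": "Hello, can you hear me?"}],
    temperature=0.2,
)
print(response.choices[0].message.content)

If this call also fails with a 400, the request LiteLLM builds for Bedrock is malformed independently of CrewAI.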
Possible Solution
Adapt the internal code that calls the boto3 client so that it passes through the user_message received as the question from the crew.kickoff() method.
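For reference, this is a sketch of a direct Bedrock call through boto3's Converse API with the user message passed explicitly, i.e. the layer such a fix would touch (the region and message text are placeholders):

import boto3

# Bedrock runtime client; the region is a placeholder.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# The user_message received from crew.kickoff() would need to end up here.
response = client.converse(
    modelId="meta.llama3-1-70b-instruct-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello, can you hear me?"}]}],
    inferenceConfig={"temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])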
Additional context
Linux pop-os 6.9.3-76060903-generic #202405300957~1726766035~22.04~4092a0e SMP PREEMPT_DYNAMIC Thu S x86_64 x86_64 x86_64 GNU/Linux