Support for Amazon Bedrock #2828
Comments
Hi all, my team at AWS is working on this, more to report soon! |
So cool! Is there anything LangChain users can do to help? |
We will post in this issue when we have a PR open. We would love help reviewing and testing as people get access to the service. If anyone wants to chat in the meantime, please DM me on Twitter. |
bump |
Any news on this? |
Completed with #5464 |
There seems to be a minor bug in the check for a user-provided Boto3 client, which leaves the Bedrock client uninitialized and results in this error log:
workaround:
|
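The error log and the fix referenced above were not captured here, but as later comments in this thread show, the common workaround is to construct the boto3 client yourself and pass it in. A minimal sketch, assuming the preview boto3 wheels are installed:

```python
import boto3
from langchain.llms.bedrock import Bedrock

# Build the client explicitly so LangChain does not have to initialize it itself
bedrock_client = boto3.client("bedrock", region_name="us-east-1")
llm = Bedrock(model_id="amazon.titan-tg1-large", client=bedrock_client)
```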
@rajeshkumarravi |
I still appear to have the issue in v0.0.189, which is resolved by @rajeshkumarravi's fix. @3coins, maybe it will be in the next release? |
I am also getting the same issue: Error raised by bedrock service: 'NoneType' object has no attribute 'invoke_model', and I am using v0.0.189 |
@garystafford @sudhir2016 @garystafford |
I can't find the boto3.client the implementation is using; is there a dev version? |
You can find info about boto3 here: https://github.com/boto/boto3 |
I know about boto3, but the latest version ('1.26.154') doesn't contain the client for bedrock. |
@rpauli |
For anyone searching while Bedrock is still in preview: once you get Bedrock access, click Info > User Guide. In the User Guide you can find a set of instructions, which include how to access the boto3 wheels. |
Thanks a lot @mendhak . I got access but I have not been able to find that "Info > User Guide" that you mentioned. Could you be a little more explicit? I am having trouble applying the fix described by @rajeshkumarravi |
Hi there, go to https://us-east-1.console.aws.amazon.com/bedrock/home?region=us-east-1#/text-playground |
Thanks a lot!!! Much appreciated! |
I'm getting "Could not load credentials to authenticate with AWS Client"; am I missing something below? I installed the preview boto3 wheels from Amazon, and I've got the latest langchain (0.0.229). My AWS credentials are in the environment variables (and tested with sts), so I was hoping not to have to pass any profile name:
|
It seems the workaround is still required:

```python
BEDROCK_CLIENT = boto3.client("bedrock", 'us-east-1')
llm = Bedrock(model_id="amazon.titan-tg1-large", client=BEDROCK_CLIENT)
```
|
I feel I'm missing something with the Bedrock integration. For example, I am trying the Claude model, using the few-shot example. The output is odd and doesn't stop when it should.
The code is quite basic:

```python
import boto3
from langchain.llms.bedrock import Bedrock
from langchain import LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

def get_llm():
    BEDROCK_CLIENT = boto3.client("bedrock", 'us-east-1')
    bedrock_llm = Bedrock(
        model_id="anthropic.claude-instant-v1",
        client=BEDROCK_CLIENT
    )
    return bedrock_llm

template = "You are a helpful assistant that translates english to pirate."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
example_human = HumanMessagePromptTemplate.from_template("Hi")
example_ai = AIMessagePromptTemplate.from_template("Argh me mateys")
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages(
    [system_message_prompt, example_human, example_ai, human_message_prompt]
)
chain = LLMChain(llm=get_llm(), prompt=chat_prompt, verbose=True)
print(chain.run("I love programming."))
```

I'm wondering if it's because of what the verbose output shows. The Claude API page says:
|
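One likely culprit (a guess, not confirmed in this thread): Anthropic's Claude models expect prompts framed with "\n\nHuman:" / "\n\nAssistant:" markers and matching stop sequences, which the generic chat prompt template does not produce. A minimal sketch of prompting Claude directly with those markers; the max_tokens_to_sample value is an arbitrary assumption:

```python
import boto3
from langchain.llms.bedrock import Bedrock

bedrock_client = boto3.client("bedrock", region_name="us-east-1")
llm = Bedrock(
    model_id="anthropic.claude-instant-v1",
    client=bedrock_client,
    # Stop generating when the model starts a new Human turn
    model_kwargs={"max_tokens_to_sample": 200, "stop_sequences": ["\n\nHuman:"]},
)

prompt = (
    "\n\nHuman: You are a helpful assistant that translates English to pirate."
    "\nTranslate: I love programming."
    "\n\nAssistant:"
)
print(llm(prompt))
```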
I am just wondering: how do I apply streaming with Bedrock in LangChain? |
@brianadityagdp |
@3coins - any updates on the streaming functionality? |
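For reference, a later comment in this thread streams by passing streaming=True together with a stdout callback handler. A minimal sketch along those lines, assuming a LangChain version where the Bedrock wrapper accepts the streaming flag and a boto3 version that knows the bedrock-runtime service:

```python
import boto3
from langchain.llms.bedrock import Bedrock
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

bedrock_client = boto3.client("bedrock-runtime", region_name="us-east-1")
llm = Bedrock(
    model_id="anthropic.claude-v2",
    client=bedrock_client,
    streaming=True,  # emit chunks as they arrive instead of one final string
    callbacks=[StreamingStdOutCallbackHandler()],  # print each chunk to stdout
)
llm("Write a haiku about the sea.")
```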
`BEDROCK_CLIENT = boto3.client("bedrock", 'us-east-1')`. Anyone have any idea? |
@leonliangquchen did you download the custom Python wheels? You can find them in the PDF linked in my comment. Be sure to get them from the PDF, because they have changed that URL a few times now. |
Hello, I have a problem when trying to interact with the model:
Does anyone know what could cause this issue? |
How do I call the stability.stable-diffusion-xl model using LangChain? Does PromptTemplate not support the stability.stable-diffusion-xl model? It is asking for a [text_prompts] key. How do I provide it in PromptTemplate? My code snippet (only `def get_llm(): ... prompt = PromptTemplate(` survives here) throws the error below: |
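The text-completion wrappers may simply not map a plain prompt onto the image model's request shape. One hedged workaround is to skip PromptTemplate and call the runtime API directly; in this sketch the body fields (text_prompts, cfg_scale, steps) and the response layout are assumptions based on the Stability SDXL request format, not anything confirmed in this thread:

```python
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# text_prompts is the key the model asks for; supply it directly
body = {
    "text_prompts": [{"text": "a watercolor painting of a lighthouse"}],
    "cfg_scale": 7,
    "steps": 30,
}
response = runtime.invoke_model(
    modelId="stability.stable-diffusion-xl",
    body=json.dumps(body),
)
payload = json.loads(response["body"].read())
# The generated image comes back base64-encoded in the artifacts list
print(payload["artifacts"][0]["base64"][:80], "...")
```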
@andypindus

```python
import boto3
from langchain.llms.bedrock import Bedrock

bedrock_client = boto3.client('bedrock')
llm = Bedrock(
    model_id="anthropic.claude-v2",
    client=bedrock_client
)
llm("Hi there!")
```
|
@ChoubeTK |
@3coins Well spotted! Thank you, and sorry for bothering you. |
I'm experiencing the same issue and was wondering if there are any workarounds? |
@aripo99 |
Hello. I tried to run the Bedrock Claude model and I got |
I am trying to implement Bedrock with RetrievalQA and I get the same error as @hongyishi .
Any ideas on how to get it to work? |
I got the same error, and it looks like boto3 had some updates to the Bedrock client. There are now two clients, Bedrock and BedrockRuntime, and the invoke_model function now belongs to the BedrockRuntime object, not Bedrock anymore. I think the LangChain code has not been updated yet, since AWS made this change last week. The workaround I use is to download an earlier version of boto3, botocore, and the AWS CLI by following this tutorial:

```shell
pip install --no-build-isolation --force-reinstall \
  ../dependencies/awscli-*-py3-none-any.whl \
  ../dependencies/boto3-*-py3-none-any.whl \
  ../dependencies/botocore-*-py3-none-any.whl
```

I hope it helps! Note: here's a linked issue about the same error |
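To make the split concrete, a minimal sketch of the two clients as described above (region and model id are placeholder assumptions):

```python
import json
import boto3

# Control plane: model discovery and management lives on "bedrock"
bedrock = boto3.client("bedrock", region_name="us-east-1")
print(bedrock.list_foundation_models()["modelSummaries"][0]["modelId"])

# Data plane: invoke_model moved to "bedrock-runtime"
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
response = runtime.invoke_model(
    modelId="anthropic.claude-v2",
    body=json.dumps({"prompt": "\n\nHuman: Hi\n\nAssistant:", "max_tokens_to_sample": 50}),
)
print(json.loads(response["body"].read())["completion"])
```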
@Druizm128 @hongyishi @Druizm128 |
Has anyone here implemented Bedrock (or ChatBedrock) with a statistics callback function?

```python
with get_openai_callback() as cb:
    ...
    save_stats(llm_answer, cb.total_tokens, cb.prompt_tokens ...)
```

It'd be great if we could get this support as well, as I am currently tasked with making our company's chatbot service use Amazon Bedrock instead of OpenAI in certain cases. |
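Nothing in this thread confirms a Bedrock equivalent of get_openai_callback, but a custom callback handler can collect rough statistics in the meantime. A hedged sketch; BedrockStatsHandler is hypothetical, and it tallies characters rather than tokens because Bedrock responses may not report token usage:

```python
from langchain.callbacks.base import BaseCallbackHandler

class BedrockStatsHandler(BaseCallbackHandler):
    """Hypothetical stand-in for get_openai_callback when using Bedrock."""

    def __init__(self):
        self.calls = 0
        self.completion_chars = 0

    def on_llm_end(self, response, **kwargs):
        # response is an LLMResult; tally generated text length per call
        self.calls += 1
        for generations in response.generations:
            for gen in generations:
                self.completion_chars += len(gen.text)

# usage sketch: stats = BedrockStatsHandler(); llm = Bedrock(..., callbacks=[stats])
```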
This should work after the changes from AWS:

```python
session = boto3.Session(profile_name='aws_profile')
BEDROCK_CLIENT = session.client("bedrock-runtime", 'us-east-1')
```
It seems Llama input validation has some issues. I was expecting this code to work:

```python
from langchain.prompts import (
    ChatPromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate
)
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import LLMChain
import boto3
from langchain.llms import Bedrock

session = boto3.Session(region_name='us-east-1')
boto3_bedrock = session.client(service_name="bedrock-runtime")
inference_modifier = {
    "temperature": 0.01,
    "max_tokens": 100,
    "stop_sequence": ["\n\nHuman:", "\n\nAssistant:"]
}
llm = Bedrock(client=boto3_bedrock, model_id="meta.llama2-70b-chat-v1", region_name='us-east-1')
prompt = ChatPromptTemplate(
    messages=[
        # The variable name must be the same as in buffer memory
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template("{instruction}")
    ]
)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
conversation = LLMChain(
    llm=llm,
    prompt=prompt,
    verbose=False,
    memory=memory
)
instruction = "Hi, how are you?"
instruction_2 = "\n\nHuman:Hi, how are you?\n\nAssistant:"
conversation({"instruction": instruction_2})
```

I get the following error:
I have tried different variations of this, with |
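One possibility, hedged and not confirmed in this thread: inference_modifier above is never passed to the Bedrock wrapper, and its keys don't match the Meta Llama 2 parameter names on Bedrock, which as far as I know are max_gen_len, temperature, and top_p. A sketch of the adjusted initialization under that assumption:

```python
llm = Bedrock(
    client=boto3_bedrock,
    model_id="meta.llama2-70b-chat-v1",
    region_name="us-east-1",
    # Parameters go through model_kwargs, using Llama 2's own names
    model_kwargs={
        "temperature": 0.01,
        "max_gen_len": 100,
    },
)
```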
@emilmirzayev Have you tried using the |
An AWS employee told us at our company that we should use |
Any update on this? It looks like Llama2 is available in all regions, but I'm also getting that same error: |
Has anyone been able to successfully use BedrockChat?

```python
def get_llm_answer(config: Config):
    self.boto_client = boto3.client('bedrock', 'us-west-2')
    messages = []
    messages.append(HumanMessage(content=prompt))
    kwargs = {
        "model_id": config.model or "anthropic.claude-v2",
        "client": self.boto_client,
        "model_kwargs": {
            "temperature": config.temperature,
            "max_tokens_to_sample": config.max_tokens,
        },
    }
    if config.top_p:
        kwargs["model_kwargs"]["top_p"] = config.top_p
    if config.stream:
        from langchain.callbacks.streaming_stdout import \
            StreamingStdOutCallbackHandler
        callbacks = [StreamingStdOutCallbackHandler()]
        chat = BedrockChat(**kwargs, streaming=config.stream, callbacks=callbacks)
    else:
        chat = BedrockChat(**kwargs)
    return chat(messages).content
```

```
File "/Users/deven/.venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 600, in __call__
    generation = self.generate(
                 ^^^^^^^^^^^^^^
  File "/Users/deven/.venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 349, in generate
    raise e
  File "/Users/deven/.venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 339, in generate
    self._generate_with_cache(
  File "/Users/deven/.venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 492, in _generate_with_cache
    return self._generate(
           ^^^^^^^^^^^^^^^
  File "/Users/deven/.venv/lib/python3.11/site-packages/langchain/chat_models/bedrock.py", line 89, in _generate
    completion = self._prepare_input_and_invoke(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/deven/.venv/lib/python3.11/site-packages/langchain/llms/bedrock.py", line 258, in _prepare_input_and_invoke
    raise ValueError(f"Error raised by bedrock service: {e}")
ValueError: Error raised by bedrock service: 'Bedrock' object has no attribute 'invoke_model'
```

@3coins am I missing anything? Please help. Also, this documentation from langchain is useless; it doesn't mention anything about initializing the
|
🤖 Hello @mats16! Great to meet you! I'm Dosu, a friendly bot here to lend a hand. I'm here to assist you with bugs, answer your queries, and guide you on contributing to LangChain. While we wait for a human maintainer, feel free to ask anything you need help with. Let's make your experience with LangChain even better! Thank you for your interest in contributing to the LangChain project, specifically in adding support for the new Amazon Bedrock service. Here are some steps you can follow:
Please note that these steps assume you have a good understanding of Python programming and the LangChain framework. If you're not familiar with these, you might need to spend some time learning about them before you can effectively contribute to the project. I hope this helps! If you have any further questions, feel free to ask.

Sources

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot. |
@3coins can investigate further, but you may need to update boto3. |
@deven298 `boto_client = boto3.client('bedrock-runtime', 'us-west-2')` |
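Putting the two suggestions together, a minimal sketch of the corrected setup (model id carried over from the snippet above; assumes a boto3 recent enough to know the bedrock-runtime service):

```python
import boto3
from langchain.chat_models import BedrockChat
from langchain.schema import HumanMessage

# bedrock-runtime is the data-plane client that actually has invoke_model
boto_client = boto3.client('bedrock-runtime', 'us-west-2')
chat = BedrockChat(model_id="anthropic.claude-v2", client=boto_client)
print(chat([HumanMessage(content="Hi there!")]).content)
```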
@3coins Thank you for your help! We are releasing the AWS Bedrock support in Embedchain soon! |
This workaround worked for me:

```python
session = boto3.Session(profile_name='default')
```
|
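The rest of that snippet was not captured; presumably it continues like the earlier session-based workaround, creating the runtime client from the session. A guess based on that comment, not on the original text:

```python
import boto3

session = boto3.Session(profile_name='default')
# Assumed continuation: build the runtime client from the session
bedrock_client = session.client("bedrock-runtime", region_name="us-east-1")
```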
Hello,
I would like to request the addition of support for Amazon Bedrock to the LangChain library. As Amazon Bedrock is a new service, it would be beneficial for LangChain to include it as a supported platform.
On 2023-04-13, Amazon announced the new Amazon Bedrock service.
Blog: https://aws.amazon.com/blogs/machine-learning/announcing-new-tools-for-building-with-generative-ai-on-aws/