LLM Module Readme Update #16389
Merged
Changes from all commits (5 commits)
- 6b9c7cb incorrect LLM import (anoopshrma)
- e1e4d3c Updating README in LLM module (anoopshrma)
- 4239704 Merge branch 'main' of https://github.com/anoopshrma/llama_index (anoopshrma)
- 73d5349 more readme added (anoopshrma)
- c0a7c80 Final Readme updates (anoopshrma)
llama-index-integrations/llms/llama-index-llms-anthropic/README.md (190 changes: 189 additions & 1 deletion)
# LlamaIndex LLM Integration: Anthropic

Anthropic is an AI research company focused on developing advanced language models, notably the Claude series. Their flagship model, Claude, is designed to generate human-like text while prioritizing safety and alignment with human intentions. Anthropic aims to create AI systems that are not only powerful but also responsible, addressing potential risks associated with artificial intelligence.

### Installation

```sh
pip install llama-index-llms-anthropic
pip install llama-index
```

First, set the tokenizer, which is slightly different from tiktoken. Note that the Claude 3 tokenizer has not been updated yet; using the existing Anthropic tokenizer leads to context-overflow errors at 200k tokens, so the max tokens for Claude 3 has temporarily been set to 180k.

### Basic Usage

```py
import os

from llama_index.core import Settings
from llama_index.llms.anthropic import Anthropic

# Set the global tokenizer to match Claude's tokenizer
tokenizer = Anthropic().tokenizer
Settings.tokenizer = tokenizer

# The API key is read from the ANTHROPIC_API_KEY environment variable;
# to pass it explicitly instead, use: llm = Anthropic(api_key="<api_key>")
os.environ["ANTHROPIC_API_KEY"] = "YOUR ANTHROPIC API KEY"

llm = Anthropic(model="claude-3-opus-20240229")

# Call complete with a prompt
resp = llm.complete("Paul Graham is ")
print(resp)

# Sample response
# Paul Graham is a well-known entrepreneur, programmer, venture capitalist, and essayist.
# He is best known for co-founding Viaweb, one of the first web application companies, which was later
# sold to Yahoo! in 1998 and became Yahoo! Store. Graham is also the co-founder of Y Combinator, a highly
# successful startup accelerator that has helped launch numerous successful companies, such as Dropbox,
# Airbnb, and Reddit.
```

### Using the Anthropic model through Vertex AI

```py
import os

# Setting region and project_id makes Anthropic use the Vertex AI client
os.environ["ANTHROPIC_PROJECT_ID"] = "YOUR PROJECT ID HERE"
os.environ["ANTHROPIC_REGION"] = "YOUR PROJECT REGION HERE"

llm = Anthropic(
    model="claude-3-5-sonnet@20240620",
    region=os.getenv("ANTHROPIC_REGION"),
    project_id=os.getenv("ANTHROPIC_PROJECT_ID"),
)

resp = llm.complete("Paul Graham is ")
print(resp)
```

### Chat example with a list of messages

```py
from llama_index.core.llms import ChatMessage
from llama_index.llms.anthropic import Anthropic

messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="Tell me a story"),
]
resp = Anthropic(model="claude-3-opus-20240229").chat(messages)
print(resp)
```
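
The `chat` call returns a `ChatResponse` whose string form includes the role prefix. If you only want the reply text, a minimal sketch (assuming the standard llama-index `ChatResponse` interface, where the reply lives on `resp.message`) is:

```py
# Extract just the assistant's text from the ChatResponse
# (assumes the standard llama-index ChatResponse interface).
print(resp.message.content)
```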

### Streaming example

```py
from llama_index.llms.anthropic import Anthropic

llm = Anthropic(model="claude-3-opus-20240229", max_tokens=100)
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
    print(r.delta, end="")
```

### Chat streaming with pirate story

```py
from llama_index.core.llms import ChatMessage
from llama_index.llms.anthropic import Anthropic

llm = Anthropic(model="claude-3-opus-20240229")
messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="Tell me a story"),
]
resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")
```

### Configure Model

```py
from llama_index.llms.anthropic import Anthropic

llm = Anthropic(model="claude-3-sonnet-20240229")
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
    print(r.delta, end="")
```

### Async completion

```py
from llama_index.llms.anthropic import Anthropic

llm = Anthropic(model="claude-3-sonnet-20240229")
# Top-level await works in a notebook; see the script variant below
resp = await llm.acomplete("Paul Graham is ")
print(resp)
```
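
Top-level `await` like this works inside a notebook. To run the same call from a plain Python script, a minimal sketch (assuming `acomplete` follows the standard llama-index async LLM interface) is:

```py
import asyncio

from llama_index.llms.anthropic import Anthropic


async def main() -> None:
    llm = Anthropic(model="claude-3-sonnet-20240229")
    # acomplete is the async counterpart of complete
    resp = await llm.acomplete("Paul Graham is ")
    print(resp)


# Drive the coroutine to completion outside a notebook
asyncio.run(main())
```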

### Structured Prediction Example

```py
from typing import List

from llama_index.core.bridge.pydantic import BaseModel
from llama_index.core.prompts import PromptTemplate
from llama_index.llms.anthropic import Anthropic


class MenuItem(BaseModel):
    """A menu item in a restaurant."""

    course_name: str
    is_vegetarian: bool


class Restaurant(BaseModel):
    """A restaurant with name, city, and cuisine."""

    name: str
    city: str
    cuisine: str
    menu_items: List[MenuItem]


llm = Anthropic(model="claude-3-5-sonnet-20240620")
prompt_tmpl = PromptTemplate("Generate a restaurant in a given city {city_name}")

# Option 1: Use `as_structured_llm`
restaurant_obj = (
    llm.as_structured_llm(Restaurant)
    .complete(prompt_tmpl.format(city_name="Miami"))
    .raw
)
print(restaurant_obj)

# Option 2: Use `structured_predict`
# restaurant_obj = llm.structured_predict(Restaurant, prompt_tmpl, city_name="Miami")

# Streaming Structured Prediction
from llama_index.core.llms import ChatMessage
from IPython.display import clear_output
from pprint import pprint

input_msg = ChatMessage.from_str("Generate a restaurant in San Francisco")

sllm = llm.as_structured_llm(Restaurant)
stream_output = sllm.stream_chat([input_msg])
for partial_output in stream_output:
    clear_output(wait=True)
    pprint(partial_output.raw.dict())
```
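
`clear_output` is IPython-specific, so the streaming loop above assumes a notebook. In a terminal script you can drop it and print each partial object as it arrives; a minimal sketch, reusing `sllm` and `input_msg` from above:

```py
# Terminal-friendly variant: print each partial Restaurant as it streams in,
# without IPython's clear_output.
for partial_output in sllm.stream_chat([input_msg]):
    pprint(partial_output.raw.dict())
```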

### LLM Implementation example

https://docs.llamaindex.ai/en/stable/examples/llm/anthropic/

llama-index-integrations/llms/llama-index-llms-anyscale/README.md (91 changes: 91 additions & 0 deletions)
# LlamaIndex LLM Integration: Anyscale

### Installation

```bash
pip install llama-index-llms-anyscale
pip install llama-index
```

### Basic Usage

```py
from llama_index.llms.anyscale import Anyscale
from llama_index.core.llms import ChatMessage

# Call chat with a list of ChatMessages.
# You need to either set the ANYSCALE_API_KEY environment variable
# or pass api_key in the class constructor.

# Example of setting the API key through an environment variable:
# import os
# os.environ["ANYSCALE_API_KEY"] = "<your-api-key>"

# Initialize the Anyscale LLM with your API key
llm = Anyscale(api_key="<your-api-key>")

# Chat Example
message = ChatMessage(role="user", content="Tell me a joke")
resp = llm.chat([message])
print(resp)

# Expected Output:
# assistant: Sure, here's a joke for you:
#
# Why couldn't the bicycle stand up by itself?
#
# Because it was two-tired!
#
# I hope that brought a smile to your face! Is there anything else I can assist you with?
```

### Streaming Example

```py
message = ChatMessage(role="user", content="Tell me a story in 250 words")
resp = llm.stream_chat([message])
for r in resp:
    print(r.delta, end="")

# Output Example:
# Once upon a time, there was a young girl named Maria who lived in a small village surrounded by lush green forests.
# Maria was a kind and gentle soul, loved by everyone in the village. She spent most of her days exploring the forests,
# discovering new species of plants and animals, and helping the villagers with their daily chores...
# (Story continues until it reaches the word limit.)
```

### Completion Example

```py
resp = llm.complete("Tell me a joke")
print(resp)

# Expected Output:
# assistant: Sure, here's a joke for you:
#
# Why couldn't the bicycle stand up by itself?
#
# Because it was two-tired!
```

### Streaming Completion Example

```py
resp = llm.stream_complete("Tell me a story in 250 words")
for r in resp:
    print(r.delta, end="")

# Example Output:
# Once upon a time, there was a young girl named Maria who lived in a small village...
# (Stream continues as the story is generated.)
```

### Model Configuration

```py
llm = Anyscale(model="codellama/CodeLlama-34b-Instruct-hf")
resp = llm.complete("Show me the c++ code to send requests to HTTP Server")
print(resp)
```
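
Like the other llama-index LLMs, `Anyscale` should also expose async variants through the shared LLM interface. A hedged sketch, assuming `acomplete` is inherited from that interface:

```py
import asyncio

from llama_index.llms.anyscale import Anyscale


async def main() -> None:
    # Assumes the standard llama-index async LLM interface applies to Anyscale
    llm = Anyscale(api_key="<your-api-key>")
    resp = await llm.acomplete("Tell me a joke")
    print(resp)


asyncio.run(main())
```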

### LLM Implementation example

https://docs.llamaindex.ai/en/stable/examples/llm/anyscale/

Review comment:
We actually don't need to vbump these for README updates -- is it easy to undo these? 😅

Reply:
I think yeah, for LlamaHub we don't need to update the version, but for PyPI to reflect the new README update we do.