LLM Module Readme Update (#16389)
anoopshrma authored Oct 8, 2024
1 parent ae58151 commit cae78d5
Showing 62 changed files with 3,407 additions and 32 deletions.
2 changes: 1 addition & 1 deletion docs/docs/examples/llm/cohere.ipynb
@@ -176,7 +176,7 @@
"metadata": {},
"outputs": [],
"source": [
"from llama_index.llms.openai import OpenAI\n",
"from llama_index.llms.cohere import Cohere\n",
"\n",
"llm = Cohere(api_key=api_key)\n",
"resp = llm.stream_complete(\"Paul Graham is \")"
190 changes: 189 additions & 1 deletion llama-index-integrations/llms/llama-index-llms-anthropic/README.md
@@ -1 +1,189 @@
- # LlamaIndex Llms Integration: Anthropic
+ # LlamaIndex LLM Integration: Anthropic

Anthropic is an AI research company focused on developing advanced language models, notably the Claude series. Their flagship model, Claude, is designed to generate human-like text while prioritizing safety and alignment with human intentions. Anthropic aims to create AI systems that are not only powerful but also responsible, addressing potential risks associated with artificial intelligence.

### Installation

```sh
pip install llama-index-llms-anthropic
pip install llama-index
```

Set the tokenizer first; Anthropic's tokenizer is slightly different from TikToken. NOTE: the Claude 3 tokenizer has not been updated yet, and using the existing Anthropic tokenizer leads to context-overflow errors at 200k tokens, so the maximum token count for Claude 3 is temporarily capped at 180k.
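
Setting the tokenizer matters because LlamaIndex uses it for context-window accounting. A minimal sketch of a token-count sanity check, assuming the tokenizer exposes an `encode` method returning a sequence of token ids (LlamaIndex's tokenizer protocol):

```py
from llama_index.llms.anthropic import Anthropic

# Assumption: `encode` returns a sequence of token ids; adjust if your
# tokenizer returns a richer encoding object instead.
tokenizer = Anthropic().tokenizer
token_ids = tokenizer.encode("Paul Graham is ")
print(len(token_ids))
```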

### Basic Usage

```py
import os

from llama_index.core import Settings
from llama_index.llms.anthropic import Anthropic

# Set the global tokenizer to Anthropic's
tokenizer = Anthropic().tokenizer
Settings.tokenizer = tokenizer

# Set the API key via an environment variable ...
os.environ["ANTHROPIC_API_KEY"] = "YOUR ANTHROPIC API KEY"

# ... or pass it to the constructor directly;
# otherwise it is looked up from ANTHROPIC_API_KEY:
# llm = Anthropic(api_key="<api_key>")
llm = Anthropic(model="claude-3-opus-20240229")

# Call complete with a prompt
resp = llm.complete("Paul Graham is ")
print(resp)

# Sample response
# Paul Graham is a well-known entrepreneur, programmer, venture capitalist, and essayist.
# He is best known for co-founding Viaweb, one of the first web application companies, which was later
# sold to Yahoo! in 1998 and became Yahoo! Store. Graham is also the co-founder of Y Combinator, a highly
# successful startup accelerator that has helped launch numerous successful companies, such as Dropbox,
# Airbnb, and Reddit.
```

### Using Anthropic models through Vertex AI

```py
import os

os.environ["ANTHROPIC_PROJECT_ID"] = "YOUR PROJECT ID HERE"
os.environ["ANTHROPIC_REGION"] = "YOUR PROJECT REGION HERE"
# Set region and project_id to make Anthropic use the Vertex AI client

llm = Anthropic(
model="claude-3-5-sonnet@20240620",
region=os.getenv("ANTHROPIC_REGION"),
project_id=os.getenv("ANTHROPIC_PROJECT_ID"),
)

resp = llm.complete("Paul Graham is ")
print(resp)
```
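
Note: the Vertex AI path authenticates through your Google Cloud credentials (typically application-default credentials, e.g. via `gcloud auth application-default login`); the project id and region must point to a Vertex AI project where the Claude model is enabled.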

### Chat example with a list of messages

```py
from llama_index.core.llms import ChatMessage
from llama_index.llms.anthropic import Anthropic

messages = [
ChatMessage(
role="system", content="You are a pirate with a colorful personality"
),
ChatMessage(role="user", content="Tell me a story"),
]
resp = Anthropic(model="claude-3-opus-20240229").chat(messages)
print(resp)
```

### Streaming example

```py
from llama_index.llms.anthropic import Anthropic

llm = Anthropic(model="claude-3-opus-20240229", max_tokens=100)
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
print(r.delta, end="")
```

### Chat streaming with pirate story

```py
llm = Anthropic(model="claude-3-opus-20240229")
messages = [
ChatMessage(
role="system", content="You are a pirate with a colorful personality"
),
ChatMessage(role="user", content="Tell me a story"),
]
resp = llm.stream_chat(messages)
for r in resp:
print(r.delta, end="")
```

### Configure Model

```py
from llama_index.llms.anthropic import Anthropic

llm = Anthropic(model="claude-3-sonnet-20240229")
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
print(r.delta, end="")
```
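
Beyond the model name, the constructor accepts generation controls. A sketch: `max_tokens` appears in the streaming example above, while `temperature` is assumed to follow the usual LlamaIndex LLM constructor parameters:

```py
from llama_index.llms.anthropic import Anthropic

# max_tokens is used earlier in this README; temperature is an assumed
# standard constructor parameter (lower = more deterministic output).
llm = Anthropic(
    model="claude-3-sonnet-20240229",
    temperature=0.2,
    max_tokens=256,
)
print(llm.complete("Paul Graham is "))
```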

### Async completion

```py
from llama_index.llms.anthropic import Anthropic

llm = Anthropic("claude-3-sonnet-20240229")
resp = await llm.acomplete("Paul Graham is ")
print(resp)
```
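
The `await` above assumes an environment with a running event loop, such as a Jupyter notebook. In a plain script, wrap the call in `asyncio.run`, as in this sketch:

```py
import asyncio

from llama_index.llms.anthropic import Anthropic


async def main() -> None:
    llm = Anthropic(model="claude-3-sonnet-20240229")
    resp = await llm.acomplete("Paul Graham is ")
    print(resp)


asyncio.run(main())
```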

### Structured Prediction Example

```py
from llama_index.llms.anthropic import Anthropic
from llama_index.core.prompts import PromptTemplate
from llama_index.core.bridge.pydantic import BaseModel
from typing import List


class MenuItem(BaseModel):
"""A menu item in a restaurant."""

course_name: str
is_vegetarian: bool


class Restaurant(BaseModel):
"""A restaurant with name, city, and cuisine."""

name: str
city: str
cuisine: str
menu_items: List[MenuItem]


llm = Anthropic("claude-3-5-sonnet-20240620")
prompt_tmpl = PromptTemplate(
"Generate a restaurant in a given city {city_name}"
)

# Option 1: Use `as_structured_llm`
restaurant_obj = (
llm.as_structured_llm(Restaurant)
.complete(prompt_tmpl.format(city_name="Miami"))
.raw
)
print(restaurant_obj)

# Option 2: Use `structured_predict`
# restaurant_obj = llm.structured_predict(Restaurant, prompt_tmpl, city_name="Miami")

# Streaming Structured Prediction
from llama_index.core.llms import ChatMessage
from IPython.display import clear_output
from pprint import pprint

input_msg = ChatMessage.from_str("Generate a restaurant in San Francisco")

sllm = llm.as_structured_llm(Restaurant)
stream_output = sllm.stream_chat([input_msg])
for partial_output in stream_output:
clear_output(wait=True)
pprint(partial_output.raw.dict())
```
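
Both options yield the same `Restaurant` object: `as_structured_llm` wraps the LLM so each `chat`/`complete` call parses its output into the Pydantic model (exposed on `.raw`), while `structured_predict` is a one-shot convenience for a single prompt template.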

### LLM Implementation example

https://docs.llamaindex.ai/en/stable/examples/llm/anthropic/
2 changes: 1 addition & 1 deletion llama-index-integrations/llms/llama-index-llms-anthropic/pyproject.toml
@@ -27,7 +27,7 @@ exclude = ["**/BUILD"]
license = "MIT"
name = "llama-index-llms-anthropic"
readme = "README.md"
version = "0.3.4"
version = "0.3.5"

[tool.poetry.dependencies]
python = ">=3.8.1,<4.0"
91 changes: 91 additions & 0 deletions llama-index-integrations/llms/llama-index-llms-anyscale/README.md
@@ -1 +1,92 @@
# LlamaIndex Llms Integration: Anyscale

### Installation

```bash
pip install llama-index-llms-anyscale
pip install llama-index
```

### Basic Usage

```py
from llama_index.llms.anyscale import Anyscale
from llama_index.core.llms import ChatMessage

# Call chat with a list of ChatMessages.
# Either set the ANYSCALE_API_KEY environment variable or pass api_key to the constructor.

# Example of setting API key through environment variable
# import os
# os.environ['ANYSCALE_API_KEY'] = '<your-api-key>'

# Initialize the Anyscale LLM with your API key
llm = Anyscale(api_key="<your-api-key>")

# Chat Example
message = ChatMessage(role="user", content="Tell me a joke")
resp = llm.chat([message])
print(resp)

# Expected Output:
# assistant: Sure, here's a joke for you:
#
# Why couldn't the bicycle stand up by itself?
#
# Because it was two-tired!
#
# I hope that brought a smile to your face! Is there anything else I can assist you with?
```

### Streaming Example

```py
message = ChatMessage(role="user", content="Tell me a story in 250 words")
resp = llm.stream_chat([message])
for r in resp:
print(r.delta, end="")

# Output Example:
# Once upon a time, there was a young girl named Maria who lived in a small village surrounded by lush green forests.
# Maria was a kind and gentle soul, loved by everyone in the village. She spent most of her days exploring the forests,
# discovering new species of plants and animals, and helping the villagers with their daily chores...
# (Story continues until it reaches the word limit.)
```

### Completion Example

```py
resp = llm.complete("Tell me a joke")
print(resp)

# Expected Output:
# assistant: Sure, here's a joke for you:
#
# Why couldn't the bicycle stand up by itself?
#
# Because it was two-tired!
```

### Streaming Completion Example

```py
resp = llm.stream_complete("Tell me a story in 250 words")
for r in resp:
print(r.delta, end="")

# Example Output:
# Once upon a time, there was a young girl named Maria who lived in a small village...
# (Stream continues as the story is generated.)
```

### Model Configuration

```py
llm = Anyscale(model="codellama/CodeLlama-34b-Instruct-hf")
resp = llm.complete("Show me the c++ code to send requests to HTTP Server")
print(resp)
```
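
Generation parameters can be tuned at construction time as well. A sketch, assuming the standard `temperature` and `max_tokens` parameters that Anyscale shares with LlamaIndex's OpenAI-style LLMs:

```py
from llama_index.llms.anyscale import Anyscale

# Assumed standard parameters: lower temperature for more deterministic code output.
llm = Anyscale(
    model="codellama/CodeLlama-34b-Instruct-hf",
    temperature=0.1,
    max_tokens=512,
)
print(llm.complete("Show me the C++ code to send requests to an HTTP server"))
```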

### LLM Implementation example

https://docs.llamaindex.ai/en/stable/examples/llm/anyscale/
2 changes: 1 addition & 1 deletion llama-index-integrations/llms/llama-index-llms-anyscale/pyproject.toml
@@ -27,7 +27,7 @@ exclude = ["**/BUILD"]
license = "MIT"
name = "llama-index-llms-anyscale"
readme = "README.md"
version = "0.2.0"
version = "0.2.1"

[tool.poetry.dependencies]
python = ">=3.8.1,<4.0"