Add Anthropic Prompt caching notebook (#16272)
* Add support for anthropic prompt caching
* Update prompt caching logic
* Add notebook
* remove print statements
* Update notebook
* Add google colab link
1 parent e46fafa · commit 5b1fe25
Showing 3 changed files with 321 additions and 2 deletions.
docs/docs/examples/llm/anthropic_prompt_caching.ipynb
@@ -0,0 +1,310 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/llm/anthropic_prompt_caching.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
"\n",
"# Anthropic Prompt Caching\n",
"\n",
"In this Notebook, we will demonstrate the usage of [Anthropic Prompt Caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching) with LlamaIndex abstractions.\n", | ||
"\n", | ||
"Prompt Caching is enabled by marking `cache_control` in the messages request.\n", | ||
"\n", | ||
"\n", | ||
"## How Prompt Caching works\n", | ||
"\n", | ||
"When you send a request with Prompt Caching enabled:\n", | ||
"\n", | ||
"1. The system checks if the prompt prefix is already cached from a recent query.\n", | ||
"2. If found, it uses the cached version, reducing processing time and costs.\n", | ||
"3. Otherwise, it processes the full prompt and caches the prefix for future use.\n", | ||
"\n", | ||
"\n", | ||
"**Note:** \n", | ||
"\n", | ||
"A. Prompt caching works with `Claude 3.5 Sonnet`, `Claude 3 Haiku` and `Claude 3 Opus` models.\n", | ||
"\n", | ||
"B. The minimum cacheable prompt length is:\n", | ||
"\n", | ||
" 1. 1024 tokens for Claude 3.5 Sonnet and Claude 3 Opus\n", | ||
" 2. 2048 tokens for Claude 3 Haiku\n", | ||
"\n", | ||
"C. Shorter prompts cannot be cached, even if marked with `cache_control`." | ||
] | ||
}, | ||
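{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"If you're opening this notebook on Colab, you will likely need to install LlamaIndex and its Anthropic integration first. The pip package names below are assumed to be the standard LlamaIndex integration packages; skip this cell if they are already installed."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install llama-index llama-index-llms-anthropic"
]
},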
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Setup API Keys"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\n",
"    \"ANTHROPIC_API_KEY\"\n",
"] = \"sk-...\"  # replace with your Anthropic API key"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Setup LLM"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from llama_index.llms.anthropic import Anthropic\n",
"\n",
"llm = Anthropic(model=\"claude-3-5-sonnet-20240620\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Download Data\n",
"\n",
"In this demonstration, we will use the text from the `Paul Graham Essay`. We will cache the text and run some queries based on it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"--2024-09-28 01:22:14--  https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt\n",
"Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 2606:50c0:8000::154, 2606:50c0:8001::154, 2606:50c0:8002::154, ...\n",
"Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|2606:50c0:8000::154|:443... connected.\n",
"HTTP request sent, awaiting response... 200 OK\n",
"Length: 75042 (73K) [text/plain]\n",
"Saving to: ‘./paul_graham_essay.txt’\n",
"\n",
"./paul_graham_essay 100%[===================>]  73.28K  --.-KB/s    in 0.01s\n",
"\n",
"2024-09-28 01:22:14 (5.73 MB/s) - ‘./paul_graham_essay.txt’ saved [75042/75042]\n",
"\n"
]
}
],
"source": [
"!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O './paul_graham_essay.txt'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from llama_index.core import SimpleDirectoryReader\n",
"\n",
"documents = SimpleDirectoryReader(\n",
"    input_files=[\"./paul_graham_essay.txt\"],\n",
").load_data()\n",
"\n",
"document_text = documents[0].text"
]
},
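{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before marking the document for caching, we can sanity-check that it is long enough to be cacheable (see the minimum prompt lengths noted above). The check below is a rough sketch that approximates the token count as characters divided by four rather than using an exact tokenizer, so treat it only as a ballpark estimate."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Rough sanity check: is the document long enough to be cached?\n",
"# Approximates tokens as ~4 characters per token; not an exact tokenizer.\n",
"MIN_CACHEABLE_TOKENS = 1024  # Claude 3.5 Sonnet / Claude 3 Opus (2048 for Claude 3 Haiku)\n",
"\n",
"approx_tokens = len(document_text) // 4\n",
"print(f\"Approximate token count: {approx_tokens}\")\n",
"print(f\"Likely cacheable: {approx_tokens >= MIN_CACHEABLE_TOKENS}\")"
]
},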
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prompt Caching\n",
"\n",
"To enable prompt caching:\n",
"\n",
"1. Include `\"cache_control\": {\"type\": \"ephemeral\"}` in the text block you want to cache.\n",
"2. Add `extra_headers={\"anthropic-beta\": \"prompt-caching-2024-07-31\"}` to the request."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can verify whether the text was cached by checking the following parameters:\n",
"\n",
"`cache_creation_input_tokens:` Number of tokens written to the cache when creating a new entry.\n",
"\n",
"`cache_read_input_tokens:` Number of tokens retrieved from the cache for this request.\n",
"\n",
"`input_tokens:` Number of input tokens that were neither read from the cache nor used to create a cache entry."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from llama_index.core.llms import ChatMessage\n", | ||
"\n", | ||
"messages = [\n", | ||
" ChatMessage(role=\"system\", content=\"You are helpful AI Assitant.\"),\n", | ||
" ChatMessage(\n", | ||
" role=\"user\",\n", | ||
" content=[\n", | ||
" {\n", | ||
" \"text\": f\"{document_text}\",\n", | ||
" \"type\": \"text\",\n", | ||
" \"cache_control\": {\"type\": \"ephemeral\"},\n", | ||
" },\n", | ||
" {\"text\": \"Why did Paul Graham start YC?\", \"type\": \"text\"},\n", | ||
" ],\n", | ||
" ),\n", | ||
"]\n", | ||
"\n", | ||
"resp = llm.chat(\n", | ||
" messages, extra_headers={\"anthropic-beta\": \"prompt-caching-2024-07-31\"}\n", | ||
")" | ||
] | ||
}, | ||
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's examine the raw response."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'id': 'msg_01KCcFZnbAGjxSKJm7LnXajp',\n",
" 'content': [TextBlock(text=\"Based on the essay, it seems Paul Graham started Y Combinator for a few key reasons:\\n\\n1. He had been thinking about ways to improve venture capital and startup funding, like making smaller investments in younger, more technical founders.\\n\\n2. He wanted to try angel investing but hadn't gotten around to it yet, despite intending to for years after Yahoo acquired his company Viaweb.\\n\\n3. He missed working with his former Viaweb co-founders Robert Morris and Trevor Blackwell and wanted to find a project they could collaborate on.\\n\\n4. His girlfriend (later wife) Jessica Livingston was looking for a new job after interviewing at a VC firm, and Graham had been telling her ideas for how to improve VC.\\n\\n5. When giving a talk to Harvard students about startups, he realized there was demand for seed funding and advice from experienced founders.\\n\\n6. They wanted to create an investment firm that would actually implement Graham's ideas about how to better fund and support early-stage startups.\\n\\n7. They were somewhat naïve about how to be angel investors, which allowed them to take novel approaches like the batch model of funding multiple startups at once.\\n\\nSo it was a convergence of Graham's ideas about improving startup funding, his desire to angel invest and work with his former co-founders again, and the opportunity presented by Jessica looking for a new job. Their lack of experience in traditional VC allowed them to take an innovative approach.\", type='text')],\n",
" 'model': 'claude-3-5-sonnet-20240620',\n",
" 'role': 'assistant',\n",
" 'stop_reason': 'end_turn',\n",
" 'stop_sequence': None,\n",
" 'type': 'message',\n",
" 'usage': Usage(input_tokens=12, output_tokens=313, cache_creation_input_tokens=17470, cache_read_input_tokens=0)}"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"resp.raw"
]
},
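{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cache-related counters can also be read programmatically. The sketch below assumes `resp.raw` is the dictionary shown above, with a `usage` object carrying the token counts."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Read the cache-related token counters from the raw Anthropic response.\n",
"# Assumes resp.raw is the dictionary shown above, containing a `usage` object.\n",
"usage = resp.raw[\"usage\"]\n",
"\n",
"print(\"cache_creation_input_tokens:\", usage.cache_creation_input_tokens)\n",
"print(\"cache_read_input_tokens:\", usage.cache_read_input_tokens)\n",
"print(\"input_tokens:\", usage.input_tokens)"
]
},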
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, `17470` tokens have been cached, as indicated by `cache_creation_input_tokens`.\n",
"\n",
"Now, let’s run another query on the same document. It should retrieve the document text from the cache, which will be reflected in `cache_read_input_tokens`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"messages = [\n", | ||
" ChatMessage(role=\"system\", content=\"You are helpful AI Assitant.\"),\n", | ||
" ChatMessage(\n", | ||
" role=\"user\",\n", | ||
" content=[\n", | ||
" {\n", | ||
" \"text\": f\"{document_text}\",\n", | ||
" \"type\": \"text\",\n", | ||
" \"cache_control\": {\"type\": \"ephemeral\"},\n", | ||
" },\n", | ||
" {\"text\": \"What did Paul Graham do growing up?\", \"type\": \"text\"},\n", | ||
" ],\n", | ||
" ),\n", | ||
"]\n", | ||
"\n", | ||
"resp = llm.chat(\n", | ||
" messages, extra_headers={\"anthropic-beta\": \"prompt-caching-2024-07-31\"}\n", | ||
")" | ||
] | ||
}, | ||
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'id': 'msg_01CpwhtuvJ8UR64xSbpxoutZ',\n",
" 'content': [TextBlock(text='Based on the essay, here are some key things Paul Graham did growing up:\\n\\n1. As a teenager, he focused mainly on writing and programming outside of school. He tried writing short stories but says they were \"awful\".\\n\\n2. In 9th grade (age 13-14), he started programming on an IBM 1401 computer at his school district\\'s data processing center. He used an early version of Fortran.\\n\\n3. He convinced his father to buy a TRS-80 microcomputer around 1980 when he was in high school. He wrote simple games, a program to predict model rocket flight, and a word processor his father used.\\n\\n4. He planned to study philosophy in college, thinking it was more powerful than other fields. \\n\\n5. In college, he got interested in artificial intelligence after reading a novel featuring an intelligent computer and seeing a documentary about an AI program called SHRDLU.\\n\\n6. He taught himself Lisp programming language in college since there were no AI classes offered.\\n\\n7. For his undergraduate thesis, he reverse-engineered the SHRDLU AI program.\\n\\n8. He graduated college with a degree in \"Artificial Intelligence\" (in quotes on the diploma).\\n\\n9. He applied to grad schools for AI and ended up going to Harvard for graduate studies.\\n\\nSo in summary, his main interests and activities growing up centered around writing, programming, and eventually artificial intelligence as he entered college and graduate school.', type='text')],\n",
" 'model': 'claude-3-5-sonnet-20240620',\n",
" 'role': 'assistant',\n",
" 'stop_reason': 'end_turn',\n",
" 'stop_sequence': None,\n",
" 'type': 'message',\n",
" 'usage': Usage(input_tokens=12, output_tokens=313, cache_creation_input_tokens=0, cache_read_input_tokens=17470)}"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"resp.raw"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, the response was generated using cached text, as indicated by `cache_read_input_tokens`."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "llamacloud",
"language": "python",
"name": "llamacloud"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}