From b02eedfdeb120dbd17b62231683c6d50795ba261 Mon Sep 17 00:00:00 2001 From: John Alling <44934218+jalling97@users.noreply.github.com> Date: Fri, 4 Oct 2024 09:56:04 -0400 Subject: [PATCH 01/11] Create _index.md --- website/content/en/docs/dev-with-lfai-guide/_index.md | 7 +++++++ 1 file changed, 7 insertions(+) create mode 100644 website/content/en/docs/dev-with-lfai-guide/_index.md diff --git a/website/content/en/docs/dev-with-lfai-guide/_index.md b/website/content/en/docs/dev-with-lfai-guide/_index.md new file mode 100644 index 000000000..71e16fb89 --- /dev/null +++ b/website/content/en/docs/dev-with-lfai-guide/_index.md @@ -0,0 +1,7 @@ +--- +title: Guide to Development with LeapfrogAI +type: docs +weight: 1 +--- + +This documentation serves as a basic guide for getting started with development using the LeapfrogAI API and the OpenAI SDK. From 7d8290f58616285b6cb5b11fb75ac416eabb2680 Mon Sep 17 00:00:00 2001 From: John Alling <44934218+jalling97@users.noreply.github.com> Date: Fri, 4 Oct 2024 10:57:52 -0400 Subject: [PATCH 02/11] Create dev_guide.md --- .../en/docs/dev-with-lfai-guide/dev_guide.md | 23 +++++++++++++++++++ 1 file changed, 23 insertions(+) create mode 100644 website/content/en/docs/dev-with-lfai-guide/dev_guide.md diff --git a/website/content/en/docs/dev-with-lfai-guide/dev_guide.md b/website/content/en/docs/dev-with-lfai-guide/dev_guide.md new file mode 100644 index 000000000..8de5a9972 --- /dev/null +++ b/website/content/en/docs/dev-with-lfai-guide/dev_guide.md @@ -0,0 +1,23 @@ +--- +title: App Development using LeapfrogAI +type: docs +weight: 1 +--- + +Prior to developing applications using LeapfrogAI, ensure that you have a valid instance of LeapfrogAI deployed in your environment. See the [Quick Start](https://docs.leapfrog.ai/docs/local-deploy-guide/quick_start/) page for more info on how to get started.
+ +## OpenAI Compatibility in LeapfrogAI + +The LeapfrogAI API is an OpenAI-compatible API, meaning that the endpoints built out within the LeapfrogAI API are the same as those found within the OpenAI API. **Note:** Not all endpoints/functionality in OpenAI are implemented in LeapfrogAI. To see what endpoints are implemented in your deployment, reference the [Quick Start](https://docs.leapfrog.ai/docs/local-deploy-guide/quick_start/#checking-deployment) guide for how to check the API reference. + +## Using the OpenAI SDK with LeapfrogAI + +### OpenAI API Refernce + +### Creating the Client + +### Running Chat Completions + +### Building a RAG Pipeline using Assistants + +## Questions/Feedback From 05b0adf46097e67cc722fc655b0fd24f79672dbf Mon Sep 17 00:00:00 2001 From: John Alling <44934218+jalling97@users.noreply.github.com> Date: Fri, 4 Oct 2024 11:21:45 -0400 Subject: [PATCH 03/11] Update _index.md --- website/content/en/docs/dev-with-lfai-guide/_index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/website/content/en/docs/dev-with-lfai-guide/_index.md b/website/content/en/docs/dev-with-lfai-guide/_index.md index 71e16fb89..8b2d1513b 100644 --- a/website/content/en/docs/dev-with-lfai-guide/_index.md +++ b/website/content/en/docs/dev-with-lfai-guide/_index.md @@ -1,7 +1,7 @@ --- -title: Guide to Development with LeapfrogAI +title: Development Guide type: docs -weight: 1 +weight: 2 --- This documentation serves as a basic guide for getting started with development using the LeapfrogAI API and the OpenAI SDK.
From fd1a9542c93f14c48ab413ded34aab7f9dfb80b1 Mon Sep 17 00:00:00 2001 From: John Alling <44934218+jalling97@users.noreply.github.com> Date: Fri, 4 Oct 2024 11:22:08 -0400 Subject: [PATCH 04/11] Update dev_guide.md --- website/content/en/docs/dev-with-lfai-guide/dev_guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/content/en/docs/dev-with-lfai-guide/dev_guide.md b/website/content/en/docs/dev-with-lfai-guide/dev_guide.md index 8de5a9972..8ed024814 100644 --- a/website/content/en/docs/dev-with-lfai-guide/dev_guide.md +++ b/website/content/en/docs/dev-with-lfai-guide/dev_guide.md @@ -1,5 +1,5 @@ --- -title: App Development using LeapfrogAI +title: App Development with LeapfrogAI type: docs weight: 1 --- From af3d4741b12b78d78721d20fa34f35fd6889bbc0 Mon Sep 17 00:00:00 2001 From: John Alling <44934218+jalling97@users.noreply.github.com> Date: Fri, 4 Oct 2024 16:20:29 -0400 Subject: [PATCH 05/11] begin adding code samples --- .../en/docs/dev-with-lfai-guide/dev_guide.md | 109 +++++++++++++++++- 1 file changed, 108 insertions(+), 1 deletion(-) diff --git a/website/content/en/docs/dev-with-lfai-guide/dev_guide.md b/website/content/en/docs/dev-with-lfai-guide/dev_guide.md index 8ed024814..5aae37c4a 100644 --- a/website/content/en/docs/dev-with-lfai-guide/dev_guide.md +++ b/website/content/en/docs/dev-with-lfai-guide/dev_guide.md @@ -12,12 +12,119 @@ The LeapfrogAI API is an OpenAI-compatible API, meaning that the endpoints built -### OpenAI API Refernce +### OpenAI API Reference + +The best place to look for help with using the OpenAI SDK is the [OpenAI API Reference](https://platform.openai.com/docs/api-reference/introduction), so it is recommended to return to this reference when you need to understand how specific endpoints work. + +### Getting a LeapfrogAI API Key + +In order to utilize the LeapfrogAI API outside of the User Interface, you'll need to get an API key.
This can be done one of two ways: + +#### Via the UI + +To create a LeapfrogAI API key via the user interface, perform the following in the UI (reference the [Quick Start](https://docs.leapfrog.ai/docs/local-deploy-guide/quick_start/#checking-deployment) guide for where the UI is deployed): +- Select the **Settings** icon ⚙️ in the top-right corner +- Select **API Keys** +- Select **Create New** + - Provide a name for the API key + - Choose a lifespan for the API key +- Select **Create** +- Copy the API key for future use + +#### Via the deployment + +TODO: Write this ### Creating the Client +Now that you have your API key, you can create an OpenAI client using LeapfrogAI on the backend: + +```python +import openai + +# set base url and api key (recommended that these are set as env vars) +LEAPFROGAI_API_KEY="api-key" # insert actual API key here +LEAPFROGAI_API_URL="https://leapfrogai-api.uds.dev" # the API may be at a different URL + +# create an openai client +client = openai.OpenAI(api_key=LEAPFROGAI_API_KEY, base_url=LEAPFROGAI_API_URL+"/openai/v1") +``` + ### Running Chat Completions +Now that you have a client created, you can utilize it (with LeapfrogAI on the backend) to handle basic chat completion requests: + +```python +... # using the same code from above + +completion = client.chat.completions.create( + model="vllm", # in LFAI, the "model" refers to the backend which services the model itself + messages=[ + { + "role": "user", + "content": "Please tell me a fun fact about frogs.", + } + ] +) +``` + +This is just a basic example; check out the [chat completion reference](https://platform.openai.com/docs/api-reference/chat/create) for more options!
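One point worth knowing when building on this: chat completions are stateless, so the endpoint has no memory of previous requests, and multi-turn conversations work by resending the accumulated message history on each call. A minimal sketch of maintaining that history yourself (the assistant reply below is an illustrative stand-in, not real model output):

```python
# build up a running chat history; each chat.completions.create() call
# receives the full list, since the endpoint itself is stateless
history = [
    {"role": "system", "content": "You are a helpful, frog-themed assistant."},
    {"role": "user", "content": "Please tell me a fun fact about frogs."},
]

# after a completion call, append the assistant's reply, then the next user turn:
# reply = completion.choices[0].message.content
reply = "Some frogs can survive being frozen solid over winter."  # illustrative stand-in
history.append({"role": "assistant", "content": reply})
history.append({"role": "user", "content": "Tell me another one."})

# `history` is now ready to pass as `messages` in the next create() call
print(len(history))  # → 4
```

Passing the full `history` as `messages` on the next request is what gives the model "memory" of the conversation.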
+ ### Building a RAG Pipeline using Assistants +Now that we've seen a basic example, let's leverage OpenAI assistants using LeapfrogAI to handle a more complex task: Retrieval Augmented Generation (RAG) + +We'll break this example down into a few step: + +#### Create a Vector Store + +#### Upload a file + +#### Create an Assistant + +Assuming you've created an OpenAI as detailed above, create an assistant: + +```python +# these instructions are for example only, your use case may require more explicit directions +instructions = """ + You are a helpful, frog-themed AI bot that answers questions for a user. Keep your response short and direct. + You may receive a set of context and a question that will relate to the context. + Do not give information outside the document or repeat your findings. +""" + +# create an assistant +assistant = client.beta.assistants.create( + name="Frog Buddy", + instructions=instructions, + model="vllm", + tools=[{"type": "file_search"}], + tool_resources={ + "file_search": {"vector_store_ids": [self.vector_store.id]} + }, +) +``` +#### Create a Thread and Get Messages + +```python +# create thread +thread = client.beta.threads.create() +client.beta.threads.messages.create( + thread_id=thread.id, + role="user", + content=message_prompt, +) + +# create run +run = client.beta.threads.runs.create_and_poll( + assistant_id=assistant.id, thread_id=thread.id +) + +# get messages +messages = self.client.beta.threads.messages.list( + thread_id=thread.id, run_id=run.id +).data + +``` + + ## Questions/Feedback From 989a3e6da178a5b208d9fea963d4cd8d73986252 Mon Sep 17 00:00:00 2001 From: John Alling Date: Mon, 7 Oct 2024 10:23:17 -0400 Subject: [PATCH 06/11] update guide with vector store section --- .../en/docs/dev-with-lfai-guide/dev_guide.md | 87 +++++++++++++++++-- 1 file changed, 79 insertions(+), 8 deletions(-) diff --git a/website/content/en/docs/dev-with-lfai-guide/dev_guide.md b/website/content/en/docs/dev-with-lfai-guide/dev_guide.md index 
5aae37c4a..22903243d 100644 --- a/website/content/en/docs/dev-with-lfai-guide/dev_guide.md +++ b/website/content/en/docs/dev-with-lfai-guide/dev_guide.md @@ -23,6 +23,7 @@ In order to utilize the LeapfrogAI API outside of the User Interface, you'll nee #### Via the UI To create a LeapfrogAI API key via the user interface, perform the following in the UI (reference the [Quick Start](https://docs.leapfrog.ai/docs/local-deploy-guide/quick_start/#checking-deployment) guide for where the UI is deployed): + - Select the **Settings** icon ⚙️ in the top-right corner - Select **API Keys** - Select **Create New** @@ -72,21 +73,62 @@ This is just a basic example; check out the [chat completion reference](https:// ### Building a RAG Pipeline using Assistants -Now that we've seen a basic example, let's leverage OpenAI assistants using LeapfrogAI to handle a more complex task: Retrieval Augmented Generation (RAG) +Now that we've seen a basic example, let's leverage OpenAI assistants using LeapfrogAI to handle a more complex task: **Retrieval Augmented Generation (RAG)**. -We'll break this example down into a few step: +We'll break this example down into a few steps: #### Create a Vector Store +A [vector database](https://www.pinecone.io/learn/vector-database/) is a fundamental piece of RAG-enabled systems. Vector databases store vectorized representations of data, and creating one is the first step to building a RAG pipeline. + +Assuming you've created an OpenAI client as detailed above, create a vector store: + +```python +vector_store = client.beta.vector_stores.create( + name="RAG Demo Vector Store", + file_ids=[], + expires_after={"anchor": "last_active_at", "days": 5}, + metadata={"project": "RAG Demo", "version": "0.1"}, +) +``` + +#### Upload a file + +Now that you have a vector store, let's add some documents. For a simple example, let's assume you have two text files with the following contents: + +**doc_1.txt** + +```text +Joseph has a pet frog named Milo.
+``` + +**doc_2.txt** + +```text +Milo the frog's birthday is on October 7th. +``` + +You can add these documents to the vector store: + +```python +# upload some documents +documents = ['doc_1.txt','doc_2.txt'] +for doc in documents: + with open(doc, "rb") as file: # read these files in binary mode + _ = client.beta.vector_stores.files.upload( + vector_store_id=vector_store.id, file=file + ) +``` + +When you upload files to a vector store, this creates a `vector_store_file` object. You can record these to reference later, but it's not necessary to track these when chatting with your documents. + #### Create an Assistant -Assuming you've created an OpenAI as detailed above, create an assistant: +[OpenAI Assistants](https://platform.openai.com/docs/assistants/overview) carry specific instructions and can reference specific tools to add functionality to your workflows. In this case, we'll add the ability for this assistant to search files in our vector store: ```python -# these instructions are for example only, your use case may require more explicit directions -instructions = """ +# these instructions are for example only, your use case may require different directions +INSTRUCTIONS = """ You are a helpful, frog-themed AI bot that answers questions for a user. Keep your response short and direct. You may receive a set of context and a question that will relate to the context. Do not give information outside the document or repeat your findings. @@ -95,36 +137,65 @@ instructions = """ # create an assistant assistant = client.beta.assistants.create( name="Frog Buddy", - instructions=instructions, + instructions=INSTRUCTIONS, model="vllm", tools=[{"type": "file_search"}], tool_resources={ - "file_search": {"vector_store_ids": [self.vector_store.id]} + "file_search": {"vector_store_ids": [vector_store.id]} }, ) ``` + #### Create a Thread and Get Messages +Now that we have an assistant that is able to pull context from our vector store, let's query the assistant. 
This is done with the assistance of threads and runs (see the [assistants overview](https://platform.openai.com/docs/assistants/overview) for more info). + +We'll make a query specific to the information in the documents we've uploaded: + ```python +# this query can only be answered using the uploaded documents +query = "When is the birthday of Joseph's pet frog?" + # create thread thread = client.beta.threads.create() client.beta.threads.messages.create( thread_id=thread.id, role="user", - content=message_prompt, + content=query, ) # create run run = client.beta.threads.runs.create_and_poll( assistant_id=assistant.id, thread_id=thread.id ) +``` + +You'll notice that both documents are needed in order to answer this question. One contains the actual birthday date, while the other contains the relationship information between Joseph and Milo the frog. This is one of the reasons LLMs are utilized when extracting information from documents; they can integrate specific pieces of information across multiple sources. +#### View the Response + +With the run executed, you can now list the messages associated with that run to get the response to our query + +```python # get messages messages = self.client.beta.threads.messages.list( thread_id=thread.id, run_id=run.id ).data +# print messages +print(messages) +``` + +The output of this `print(messages)` command will look something like this: + +```text +INSERT OUTPUT ``` +You'll see that our Frog Buddy assistant was able to recieve the contextual information it needed in order to know how to answer the query. + +And this just scratches the surface of what you can create with the OpenAI SDK leveraging LeapfrogAI. This may just be a simple example that doesn't necessarily require the added overhead of RAG, but when you need to search for information hidden in hundreds or thousands of documents, you may not be able to hand your LLM all the data at once, which is where RAG really comes in handy. 
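To see why the vector store can surface the right documents at all, it helps to know that retrieval boils down to ranking stored chunks by vector similarity to the query's embedding. A toy sketch of that idea using cosine similarity with invented 3-dimensional vectors (a real deployment uses an embedding model with hundreds of dimensions; all numbers here are made up for illustration):

```python
import math

def cosine_similarity(a, b):
    # cosine similarity = dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# toy "embeddings" for the stored chunks -- invented for illustration
chunks = {
    "Joseph has a pet frog named Milo.": [0.9, 0.1, 0.2],
    "Milo the frog's birthday is on October 7th.": [0.7, 0.3, 0.6],
    "The weather today is sunny.": [0.1, 0.9, 0.1],
}

# pretend embedding of the query "When is the birthday of Joseph's pet frog?"
query_embedding = [0.7, 0.25, 0.55]

# rank chunks by similarity to the query, most relevant first
ranked = sorted(chunks, key=lambda c: cosine_similarity(chunks[c], query_embedding), reverse=True)
print(ranked[0])  # → Milo the frog's birthday is on October 7th.
```

The `file_search` tool performs this chunking, embedding, and ranking behind the scenes, which is why the assistant can combine facts spread across multiple uploaded files.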
## Questions/Feedback + +If you have any questions, feedback, or specific update requests on this development guide, please open an issue on the [LeapfrogAI Github Repository](https://github.com/defenseunicorns/leapfrogai) From f5d2b7b430956c7891397f112860306e23baef9f Mon Sep 17 00:00:00 2001 From: John Alling Date: Mon, 7 Oct 2024 11:14:29 -0400 Subject: [PATCH 07/11] refine code examples --- .../en/docs/dev-with-lfai-guide/dev_guide.md | 59 +++++++++++++------ 1 file changed, 40 insertions(+), 19 deletions(-) diff --git a/website/content/en/docs/dev-with-lfai-guide/dev_guide.md b/website/content/en/docs/dev-with-lfai-guide/dev_guide.md index 22903243d..7108d720b 100644 --- a/website/content/en/docs/dev-with-lfai-guide/dev_guide.md +++ b/website/content/en/docs/dev-with-lfai-guide/dev_guide.md @@ -10,7 +10,7 @@ Prior to developing applications using LeapfrogAI, ensure that you have a valid The LeapfrogAI API is an OpenAI-compatible API, meaning that the endpoints built out within the LeapfrogAI API are the same as those found within the OpenAI API. **Note:** Not all endpoints/functionality in OpenAI are implemented in LeapfrogAI. To see what endpoints are implemented in your deployment, reference the [Quick Start](https://docs.leapfrog.ai/docs/local-deploy-guide/quick_start/#checking-deployment) guide for how to check the API reference. -## Using the OpenAI SDK with LeapfrogAI +## Basic Usage of the OpenAI SDK with LeapfrogAI ### OpenAI API Reference @@ -36,9 +36,19 @@ To create a LeapfrogAI API key via the user interface, perform the following in TODO: Write this +### Install dependencies + +It's recommended to use Python version `3.11.6` or greater.
+ +You'll need the pip Python package manager and the OpenAI SDK: + +```bash +pip install openai +``` + ### Creating the Client -Now that you have your API key, you can create an OpenAI client using LeapfrogAI on the backend: +Now that you have your API key, you can create an OpenAI client using LeapfrogAI on the backend in a Python script: ```python import openai @@ -71,19 +81,28 @@ This is just a basic example; check out the [chat completion reference](https:// -### Building a RAG Pipeline using Assistants +## Building a RAG Pipeline using Assistants Now that we've seen a basic example, let's leverage OpenAI assistants using LeapfrogAI to handle a more complex task: **Retrieval Augmented Generation (RAG)**. We'll break this example down into a few steps: +### Requirements + +Referencing the [Basic Usage](#basic-usage-of-the-openai-sdk-with-leapfrogai) section, you'll need: + +- A LeapfrogAI API key +- The URL of the LeapfrogAI API instance you'll be using +- An OpenAI Client using LeapfrogAI + +### Create a Vector Store -#### Create a Vector Store A [vector database](https://www.pinecone.io/learn/vector-database/) is a fundamental piece of RAG-enabled systems. Vector databases store vectorized representations of data, and creating one is the first step to building a RAG pipeline. Assuming you've created an OpenAI client as detailed above, create a vector store: ```python +# create a vector store vector_store = client.beta.vector_stores.create( name="RAG Demo Vector Store", file_ids=[], @@ -92,7 +111,7 @@ vector_store = client.beta.vector_stores.create( ) ``` -#### Upload a file +### Upload a file Now that you have a vector store, let's add some documents. For a simple example, let's assume you have two text files with the following contents: **doc_1.txt** ```text Joseph has a pet frog named Milo.
``` -You can add these documents to the vector store: +Create these documents so you can add them to the vector store: ```python # upload some documents documents = ['doc_1.txt','doc_2.txt'] for doc in documents: with open(doc, "rb") as file: # read these files in binary mode - _ = client.beta.vector_stores.files.upload( + vector_store_file = client.beta.vector_stores.files.upload( vector_store_id=vector_store.id, file=file ) + print(f"{doc} vector store file id: {vector_store_file.id}") ``` -When you upload files to a vector store, this creates a `vector_store_file` object. You can record these to reference later, but it's not necessary to track these when chatting with your documents. +When you upload files to a vector store, this creates a `VectorStoreFile` object. You can record these for later usage, but for now we'll just print each ID for reference. -#### Create an Assistant +### Create an Assistant [OpenAI Assistants](https://platform.openai.com/docs/assistants/overview) carry specific instructions and can reference specific tools to add functionality to your workflows. In this case, we'll add the ability for this assistant to search files in our vector store: ```python # these instructions are for example only, your use case may require different directions INSTRUCTIONS = """ - You are a helpful, frog-themed AI bot that answers questions for a user. Keep your response short and direct. + You are a helpful AI bot that answers questions for a user. Keep your response short and direct. You may receive a set of context and a question that will relate to the context. Do not give information outside the document or repeat your findings. """ @@ -146,7 +166,7 @@ assistant = client.beta.assistants.create( ) ``` -#### Create a Thread and Get Messages +### Create a Thread and Get Messages Now that we have an assistant that is able to pull context from our vector store, let's query the assistant. 
This is done with the assistance of threads and runs (see the [assistants overview](https://platform.openai.com/docs/assistants/overview) for more info). @@ -172,30 +192,31 @@ run = client.beta.threads.runs.create_and_poll( You'll notice that both documents are needed in order to answer this question. One contains the actual birthday date, while the other contains the relationship information between Joseph and Milo the frog. This is one of the reasons LLMs are utilized when extracting information from documents; they can integrate specific pieces of information across multiple sources. -#### View the Response +### View the Response With the run executed, you can now list the messages associated with that run to get the response to our query ```python # get messages -messages = self.client.beta.threads.messages.list( +messages = client.beta.threads.messages.list( thread_id=thread.id, run_id=run.id ).data # print messages -print(messages) +print(messages[1].content[0].text.value) +# we need the second message in the list, as the first one is associated with our request to the LLM ``` -The output of this `print(messages)` command will look something like this: +The output will look something like this: ```text -INSERT OUTPUT +The birthday of Joseph's pet frog, Milo, is on October 7th. [f1e1f9b7-2ec8-4f72-a0cb-42d4eb97c204] [4e48550b-8cf8-49ba-8398-c69389150903] ``` -You'll see that our Frog Buddy assistant was able to recieve the contextual information it needed in order to know how to answer the query. +As you can see, our Frog Buddy assistant was able to recieve the contextual information it needed in order to know how to answer the query. You'll also notice that the attached annotations in the response correspond to the IDs for the vector store files we uploaded earlier, so we know we're pulling our information from the right place! -And this just scratches the surface of what you can create with the OpenAI SDK leveraging LeapfrogAI. 
This may just be a simple example that doesn't necessarily require the added overhead of RAG, but when you need to search for information hidden in hundreds or thousands of documents, you may not be able to hand your LLM all the data at once, which is where RAG really comes in handy. +This just scratches the surface of what you can create with the OpenAI SDK leveraging LeapfrogAI. This may be a simple example that doesn't necessarily require the added overhead of RAG, but when you need to search for information hidden in hundreds or thousands of documents, you may not be able to hand your LLM all the data at once, which is where RAG really comes in handy. ## Questions/Feedback -If you have any questions, feedback, or specific update requests on this development guide, please open an issue on the [LeapfrogAI Github Repository](https://github.com/defenseunicorns/leapfrogai) +If you have any questions, feedback, or specific update requests on this development guide, please open an issue on the [LeapfrogAI Github Repository](https://github.com/defenseunicorns/leapfrogai). 
From 799c8f4ed689a108358dacbdf619d5711fbb8c14 Mon Sep 17 00:00:00 2001 From: John Alling Date: Mon, 7 Oct 2024 11:26:21 -0400 Subject: [PATCH 08/11] minor reformatting edits --- .../en/docs/dev-with-lfai-guide/dev_guide.md | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-) diff --git a/website/content/en/docs/dev-with-lfai-guide/dev_guide.md b/website/content/en/docs/dev-with-lfai-guide/dev_guide.md index 7108d720b..5850661d6 100644 --- a/website/content/en/docs/dev-with-lfai-guide/dev_guide.md +++ b/website/content/en/docs/dev-with-lfai-guide/dev_guide.md @@ -63,7 +63,7 @@ client = openai.OpenAI(api_key=LEAPFROGAI_API_KEY, base_url=LEAPFROGAI_API_URL+" ### Running Chat Completions -Now that you have a client created, you can utilize it (with LeapfrogAI on the backend) to handle basic chat completion requests: +Now that you have a client created, you can utilize it to handle basic chat completion requests: ```python ... # using the same code from above @@ -83,7 +83,7 @@ This is just a basic example; check out the [chat completion reference](https:// ## Building a RAG Pipeline using Assistants -Now that we've seen a basic example, let's leverage OpenAI assistants using LeapfrogAI to handle a more complex task: **Retrieval Augmented Generation (RAG)**. +Now that we've seen a basic example, let's leverage OpenAI assistants using LeapfrogAI to handle a more complex task: [**Retrieval Augmented Generation (RAG)**](https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/). We'll break this example down into a few steps: @@ -142,9 +142,16 @@ for doc in documents: When you upload files to a vector store, this creates a `VectorStoreFile` object. You can record these for later usage, but for now we'll just print each ID for reference. 
+Output: + +```text +doc_1.txt vector store file id: 4e48550b-8cf8-49ba-8398-c69389150903 +doc_2.txt vector store file id: f1e1f9b7-2ec8-4f72-a0cb-42d4eb97c204 +``` + ### Create an Assistant -[OpenAI Assistants](https://platform.openai.com/docs/assistants/overview) carry specific instructions and can reference specific tools to add functionality to your workflows. In this case, we'll add the ability for this assistant to search files in our vector store: +[OpenAI Assistants](https://platform.openai.com/docs/assistants/overview) carry specific instructions and can reference specific tools to add functionality to your workflows. In this case, we'll add the ability for this assistant to search files in our vector store using the `file_search` tool: ```python # these instructions are for example only, your use case may require different directions @@ -194,7 +201,7 @@ You'll notice that both documents are needed in order to answer this question. O ### View the Response -With the run executed, you can now list the messages associated with that run to get the response to our query +With the run executed, you can now list the messages associated with that run to get the response to our query. ```python # get messages From 19ed0890d11790191a3ee42f2cb62c186f743e87 Mon Sep 17 00:00:00 2001 From: John Alling Date: Mon, 7 Oct 2024 11:36:40 -0400 Subject: [PATCH 09/11] final editing tweaks --- website/content/en/docs/dev-with-lfai-guide/dev_guide.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/website/content/en/docs/dev-with-lfai-guide/dev_guide.md b/website/content/en/docs/dev-with-lfai-guide/dev_guide.md index 5850661d6..874baf6e3 100644 --- a/website/content/en/docs/dev-with-lfai-guide/dev_guide.md +++ b/website/content/en/docs/dev-with-lfai-guide/dev_guide.md @@ -142,7 +142,7 @@ for doc in documents: When you upload files to a vector store, this creates a `VectorStoreFile` object. 
You can record these for later usage, but for now we'll just print each ID for reference. -Output: +Output (the IDs will be randomly generated each time): ```text doc_1.txt vector store file id: 4e48550b-8cf8-49ba-8398-c69389150903 @@ -224,6 +224,8 @@ As you can see, our Frog Buddy assistant was able to recieve the contextual info This just scratches the surface of what you can create with the OpenAI SDK leveraging LeapfrogAI. This may be a simple example that doesn't necessarily require the added overhead of RAG, but when you need to search for information hidden in hundreds or thousands of documents, you may not be able to hand your LLM all the data at once, which is where RAG really comes in handy. +As a reminder, the [OpenAI API Reference](https://platform.openai.com/docs/api-reference/introduction) has lots of information on using the OpenAI SDK, and much of it is compatible with LeapfrogAI! + ## Questions/Feedback -If you have any questions, feedback, or specific update requests on this development guide, please open an issue on the [LeapfrogAI Github Repository](https://github.com/defenseunicorns/leapfrogai). +If you have any questions, feedback, or specific update requests on this development guide, please open an issue on the [LeapfrogAI Github Repository](https://github.com/defenseunicorns/leapfrogai). Additionally, if you have specific feature requests for the LeapfrogAI API (for example, certain endpoints that are not yet compatible with OpenAI), please create an issue in Github. 
From ee2aa81e93f0e67abd44f55e4c627f153cba05de Mon Sep 17 00:00:00 2001 From: John Alling Date: Mon, 7 Oct 2024 17:31:34 -0400 Subject: [PATCH 10/11] add api key via deployment section --- website/content/en/docs/dev-with-lfai-guide/dev_guide.md | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/website/content/en/docs/dev-with-lfai-guide/dev_guide.md b/website/content/en/docs/dev-with-lfai-guide/dev_guide.md index 874baf6e3..fe93b4dae 100644 --- a/website/content/en/docs/dev-with-lfai-guide/dev_guide.md +++ b/website/content/en/docs/dev-with-lfai-guide/dev_guide.md @@ -22,7 +22,7 @@ In order to utilize the LeapfrogAI API outside of the User Interface, you'll nee #### Via the UI -To create a LeapfrogAI API key via the user interface, perform the following in the UI (reference the [Quick Start](https://docs.leapfrog.ai/docs/local-deploy-guide/quick_start/#checking-deployment) guide for where the UI is deployed): +The easiest way to create a LeapfrogAI API key is via the user interface. 
Perform the following in the UI (reference the [Quick Start](https://docs.leapfrog.ai/docs/local-deploy-guide/quick_start/#checking-deployment) guide for where the UI is deployed): - Select the **Settings** icon ⚙️ in the top-right corner - Select **API Keys** - Select **Create New** @@ -34,7 +34,11 @@ #### Via the deployment -TODO: Write this +If you prefer not to use the UI (or the UI is not deployed), you can use the following guides to create an API key: + +1-hour JWT via Supabase: [instructions](https://github.com/defenseunicorns/leapfrogai/blob/main/packages/supabase/README.md#troubleshooting) + +Long-lived API key via the API: [instructions](https://github.com/defenseunicorns/leapfrogai/blob/main/src/leapfrogai_api/README.md#running) ### Install dependencies From 7b8f17dfc3ba089428f8dc9875083d6f50c50582 Mon Sep 17 00:00:00 2001 From: John Alling Date: Mon, 7 Oct 2024 17:35:52 -0400 Subject: [PATCH 11/11] update annotations --- .../en/docs/dev-with-lfai-guide/dev_guide.md | 16 ++++------------ 1 file changed, 4 insertions(+), 12 deletions(-) diff --git a/website/content/en/docs/dev-with-lfai-guide/dev_guide.md b/website/content/en/docs/dev-with-lfai-guide/dev_guide.md index fe93b4dae..d6ce130dc 100644 --- a/website/content/en/docs/dev-with-lfai-guide/dev_guide.md +++ b/website/content/en/docs/dev-with-lfai-guide/dev_guide.md @@ -138,20 +138,12 @@ Create these documents so you can add them to the vector store: documents = ['doc_1.txt','doc_2.txt'] for doc in documents: with open(doc, "rb") as file: # read these files in binary mode - vector_store_file = client.beta.vector_stores.files.upload( + _ = client.beta.vector_stores.files.upload( vector_store_id=vector_store.id, file=file ) - print(f"{doc} vector store file id: {vector_store_file.id}") ``` -When you upload files to a vector store, this creates a `VectorStoreFile` object. You can record these for later usage, but for now we'll just print each ID for reference. - -Output (the IDs will be randomly generated each time): - -```text -doc_1.txt vector store file id: 4e48550b-8cf8-49ba-8398-c69389150903 -doc_2.txt vector store file id: f1e1f9b7-2ec8-4f72-a0cb-42d4eb97c204 -``` +When you upload files to a vector store, this creates a `VectorStoreFile` object.
You can record these for later usage, but for now they aren't needed for simple chatting with your documents. ### Create an Assistant @@ -221,10 +213,10 @@ print(messages[1].content[0].text.value) The output will look something like this: ```text -The birthday of Joseph's pet frog, Milo, is on October 7th. [f1e1f9b7-2ec8-4f72-a0cb-42d4eb97c204] [4e48550b-8cf8-49ba-8398-c69389150903] +The birthday of Joseph's pet frog, Milo, is on October 7th. 【4:0†doc_2.txt】 【4:0†doc_1.txt】 ``` -As you can see, our Frog Buddy assistant was able to recieve the contextual information it needed in order to know how to answer the query. You'll also notice that the attached annotations in the response correspond to the IDs for the vector store files we uploaded earlier, so we know we're pulling our information from the right place! +As you can see, our Frog Buddy assistant was able to receive the contextual information it needed in order to know how to answer the query. You'll also notice that the attached annotations correspond to the files we uploaded earlier, so we know we're pulling our information from the right place! This just scratches the surface of what you can create with the OpenAI SDK leveraging LeapfrogAI. This may be a simple example that doesn't necessarily require the added overhead of RAG, but when you need to search for information hidden in hundreds or thousands of documents, you may not be able to hand your LLM all the data at once, which is where RAG really comes in handy.