From 344889efac3d2ec326d079fdc07fcb2f1bcef18a Mon Sep 17 00:00:00 2001
From: Logan Markewich
Date: Sun, 15 Dec 2024 19:25:34 -0600
Subject: [PATCH] fix links in RAG understanding

---
 docs/docs/understanding/rag/index.md | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/docs/docs/understanding/rag/index.md b/docs/docs/understanding/rag/index.md
index 60888b9566969..e35ac6ad71218 100644
--- a/docs/docs/understanding/rag/index.md
+++ b/docs/docs/understanding/rag/index.md
@@ -1,5 +1,5 @@
 !!! tip
-    If you haven't, [install LlamaIndex](./installation.md) and complete the [starter tutorial](./starter_example.md) before you read this. It will help ground these steps in your experience.
+    If you haven't, [install LlamaIndex](../../getting_started/installation.md) and complete the [starter tutorial](../../getting_started/starter_example.md) before you read this. It will help ground these steps in your experience.

 LLMs are trained on enormous bodies of data but they aren't trained on **your** data. Retrieval-Augmented Generation (RAG) solves this problem by adding your data to the data LLMs already have access to. You will see references to RAG frequently in this documentation. Query engines, chat engines and agents often use RAG to complete their tasks.

@@ -7,7 +7,7 @@ In RAG, your data is loaded and prepared for queries or "indexed". User queries

 Even if what you're building is a chatbot or an agent, you'll want to know RAG techniques for getting data into your application.

-![](../_static/getting_started/basic_rag.png)
+![](../../_static/getting_started/basic_rag.png)

 ### Stages within RAG

@@ -23,7 +23,7 @@ There are five key stages within RAG, which in turn will be a part of most large

 - **Evaluation**: a critical step in any flow is checking how effective it is relative to other strategies, or when you make changes. Evaluation provides objective measures of how accurate, faithful and fast your responses to queries are.

-![](../_static/getting_started/stages.png)
+![](../../_static/getting_started/stages.png)

 ### Important concepts within RAG

@@ -31,28 +31,28 @@ There are also some terms you'll encounter that refer to steps within each of th

 #### Loading stage

-[**Nodes and Documents**](../module_guides/loading/documents_and_nodes/index.md): A `Document` is a container around any data source - for instance, a PDF, an API output, or retrieve data from a database. A `Node` is the atomic unit of data in LlamaIndex and represents a "chunk" of a source `Document`. Nodes have metadata that relate them to the document they are in and to other nodes.
+[**Nodes and Documents**](../../module_guides/loading/documents_and_nodes/index.md): A `Document` is a container around any data source - for instance, a PDF, an API output, or retrieved data from a database. A `Node` is the atomic unit of data in LlamaIndex and represents a "chunk" of a source `Document`. Nodes have metadata that relate them to the document they are in and to other nodes.

-[**Connectors**](../module_guides/loading/connector/index.md):
+[**Connectors**](../../module_guides/loading/connector/index.md):
 A data connector (often called a `Reader`) ingests data from different data sources and data formats into `Documents` and `Nodes`.

 #### Indexing Stage

-[**Indexes**](../module_guides/indexing/index.md):
+[**Indexes**](../../module_guides/indexing/index.md):
 Once you've ingested your data, LlamaIndex will help you index the data into a structure that's easy to retrieve. This usually involves generating `vector embeddings` which are stored in a specialized database called a `vector store`. Indexes can also store a variety of metadata about your data.

-[**Embeddings**](../module_guides/models/embeddings.md): LLMs generate numerical representations of data called `embeddings`. When filtering your data for relevance, LlamaIndex will convert queries into embeddings, and your vector store will find data that is numerically similar to the embedding of your query.
+[**Embeddings**](../../module_guides/models/embeddings.md): LLMs generate numerical representations of data called `embeddings`. When filtering your data for relevance, LlamaIndex will convert queries into embeddings, and your vector store will find data that is numerically similar to the embedding of your query.

 #### Querying Stage

-[**Retrievers**](../module_guides/querying/retriever/index.md):
+[**Retrievers**](../../module_guides/querying/retriever/index.md):
 A retriever defines how to efficiently retrieve relevant context from an index when given a query. Your retrieval strategy is key to the relevancy of the data retrieved and the efficiency with which it's done.

-[**Routers**](../module_guides/querying/router/index.md):
+[**Routers**](../../module_guides/querying/router/index.md):
 A router determines which retriever will be used to retrieve relevant context from the knowledge base. More specifically, the `RouterRetriever` class, is responsible for selecting one or multiple candidate retrievers to execute a query. They use a selector to choose the best option based on each candidate's metadata and the query.

-[**Node Postprocessors**](../module_guides/querying/node_postprocessors/index.md):
+[**Node Postprocessors**](../../module_guides/querying/node_postprocessors/index.md):
 A node postprocessor takes in a set of retrieved nodes and applies transformations, filtering, or re-ranking logic to them.

-[**Response Synthesizers**](../module_guides/querying/response_synthesizers/index.md):
+[**Response Synthesizers**](../../module_guides/querying/response_synthesizers/index.md):
 A response synthesizer generates a response from an LLM, using a user query and a given set of retrieved text chunks.
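Since this patch touches the page that walks through the five RAG stages, the flow it describes can be sketched end to end. The snippet below is a toy illustration only, not LlamaIndex code: a term-frequency `Counter` stands in for a real embedding model so it runs with no dependencies, and the "synthesis" step is a plain string template rather than an LLM call.

```python
from collections import Counter
from math import sqrt


def embed(text):
    """'Embedding': a term-frequency vector (a stand-in for a real model)."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# Loading: split source "documents" into node-like chunks.
documents = [
    "RAG adds your data to what the LLM already knows.",
    "A vector store finds data numerically similar to the query.",
]
nodes = [chunk for doc in documents for chunk in doc.split(". ")]

# Indexing: embed every node and keep (node, embedding) pairs.
index = [(node, embed(node)) for node in nodes]


# Querying: retrieve the most similar node, then "synthesize" a response.
def query(question, top_k=1):
    qvec = embed(question)
    ranked = sorted(index, key=lambda pair: cosine(qvec, pair[1]), reverse=True)
    context = " ".join(node for node, _ in ranked[:top_k])
    return f"Based on: {context}"


print(query("What does a vector store find?"))
# → Based on: A vector store finds data numerically similar to the query.
```

A real pipeline would swap `embed` for a learned embedding model backed by a vector store, and the final step of `query` for an LLM call; evaluation would then compare answers across such configurations.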
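The querying-stage concepts the page names (retriever, node postprocessor, response synthesizer) compose as a pipeline. The sketch below mimics that composition with stand-in functions; none of these names are LlamaIndex classes, and the word-overlap scoring and length filter are deliberate simplifications.

```python
# Toy sketch of the querying-stage pipeline the doc describes:
# retriever -> node postprocessor -> response synthesizer.

def retrieve(question, nodes, top_k=2):
    """Retriever: rank nodes by naive word overlap with the question."""
    qwords = set(question.lower().split())
    scored = sorted(nodes,
                    key=lambda n: len(qwords & set(n.lower().split())),
                    reverse=True)
    return scored[:top_k]


def postprocess(nodes, min_words=4):
    """Node postprocessor: filter out chunks too short to be useful."""
    return [n for n in nodes if len(n.split()) >= min_words]


def synthesize(question, nodes):
    """Response synthesizer: combine the question and context into one answer."""
    return f"Q: {question} | context: {' / '.join(nodes)}"


nodes = [
    "Embeddings are numerical representations of data.",
    "Short note.",
    "A retriever finds relevant context in an index.",
]
answer = synthesize("What are embeddings?",
                    postprocess(retrieve("What are embeddings?", nodes)))
print(answer)
```

Each stage is swappable independently, which is the point of splitting the query path this way: a router would choose between several `retrieve` implementations, and a re-ranker is just another `postprocess`.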