
AutoRAG

RAG AutoML tool for automatically finding an optimal RAG pipeline for your data.


There are many RAG pipelines and modules out there, but you don't know which pipeline is best for your own data and your own use case. Building and evaluating every RAG module is time-consuming and hard to do, but without it you will never know which RAG pipeline is best for your use case.

AutoRAG is a tool for finding the optimal RAG pipeline for your data. You can evaluate various RAG modules automatically with your own evaluation data and find the best RAG pipeline for your use case.

AutoRAG supports a simple way to evaluate many RAG module combinations. Try it now and find the best RAG pipeline for your use case.

Explore our 📖 Documentation!

Plus, join our 📞 Discord Community.


Do you have difficulties optimizing your RAG pipeline? Or is it hard to set things up to use AutoRAG? Try the AutoRAG Cloud beta. We will help you run AutoRAG and optimize it. Plus, we can help you build a RAG evaluation dataset.

Starts at $9.99 per optimization.


YouTube Tutorial

AutoRAG.Tutorial.1.1.mp4 (muted by default; enable sound for the voice-over)

You can also watch it on YouTube.

Use AutoRAG in HuggingFace Space 🚀

Colab Tutorial


Quick Install

We recommend using Python version 3.10 or higher for AutoRAG.

pip install AutoRAG

If you want to use local models, you need to install the GPU version.

pip install "AutoRAG[gpu]"

Or, if you also need document parsing, install the parsing version.

pip install "AutoRAG[gpu,parse]"

Data Creation


RAG optimization requires two types of data: a QA dataset and a corpus dataset.

  1. QA dataset file (qa.parquet)
  2. Corpus dataset file (corpus.parquet)

The QA dataset is important for accurate and reliable evaluation and optimization.

The corpus dataset is critical to RAG performance, because RAG retrieves documents from the corpus and generates answers from them.
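For orientation, here is a minimal sketch of what the two files contain. The column names below follow the AutoRAG data-format documentation, so verify them against your installed version.

import pandas as pd

# qa.parquet: one row per question
#   qid           - unique question id
#   query         - the question text
#   retrieval_gt  - ground-truth doc_id lists for retrieval evaluation
#   generation_gt - ground-truth answer strings for generation evaluation
qa_df = pd.read_parquet("./qa.parquet")
print(qa_df.columns.tolist())

# corpus.parquet: one row per chunk
#   doc_id   - unique chunk id (referenced by retrieval_gt)
#   contents - the chunk text
#   metadata - dict of extra fields (e.g. source file name)
corpus_df = pd.read_parquet("./corpus.parquet")
print(corpus_df.columns.tolist())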

Quick Start

1. Parsing

Set YAML File

modules:
  - module_type: langchain_parse
    parse_method: pdfminer

You can also use multiple Parse modules at once. However, in this case, you'll need to run a separate downstream process for each parsed result.

Start Parsing

You can parse your raw documents with just a few lines of code.

from autorag.parser import Parser

parser = Parser(data_path_glob="your/data/path/*")
parser.start_parsing("your/path/to/parse_config.yaml")

2. Chunking

Set YAML File

modules:
  - module_type: llama_index_chunk
    chunk_method: Token
    chunk_size: 1024
    chunk_overlap: 24
    add_file_name: en

You can also use multiple Chunk modules at once. In this case, you need to use one corpus to create the QA dataset and then map the remaining corpora to it. Because a different chunk method produces a different retrieval_gt, it must be remapped to the QA dataset, as sketched below.
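Here is a minimal sketch of that remapping. The QA, Corpus, and update_corpus usage follows the AutoRAG data-creation docs, and the file names are illustrative assumptions; verify against your installed version.

import pandas as pd
from autorag.data.qa.schema import Raw, Corpus, QA

raw = Raw(pd.read_parquet("your/path/to/parsed.parquet"))

# The corpus that was used to create the QA dataset
token_corpus = Corpus(pd.read_parquet("your/path/to/corpus_token.parquet"), raw)
qa = QA(pd.read_parquet("your/path/to/qa.parquet"), token_corpus)

# Remap retrieval_gt onto a corpus chunked with a different method
sentence_corpus = Corpus(pd.read_parquet("your/path/to/corpus_sentence.parquet"), raw)
remapped_qa = qa.update_corpus(sentence_corpus)
remapped_qa.to_parquet("your/path/to/qa_sentence.parquet", "your/path/to/corpus_sentence.parquet")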

Start Chunking

You can chunk your parsed results with just a few lines of code.

from autorag.chunker import Chunker

chunker = Chunker.from_parquet(parsed_data_path="your/parsed/data/path")
chunker.start_chunking("your/path/to/chunk_config.yaml")

3. QA Creation

You can create a QA dataset with just a few lines of code.

import pandas as pd
from llama_index.llms.openai import OpenAI

from autorag.data.qa.filter.dontknow import dontknow_filter_rule_based
from autorag.data.qa.generation_gt.llama_index_gen_gt import (
    make_basic_gen_gt,
    make_concise_gen_gt,
)
from autorag.data.qa.schema import Raw, Corpus
from autorag.data.qa.query.llama_gen_query import factoid_query_gen
from autorag.data.qa.sample import random_single_hop

llm = OpenAI()
raw_df = pd.read_parquet("your/path/to/parsed.parquet")
raw_instance = Raw(raw_df)

corpus_df = pd.read_parquet("your/path/to/corpus.parquet")
corpus_instance = Corpus(corpus_df, raw_instance)

initial_qa = (
    corpus_instance.sample(random_single_hop, n=3)
    .map(
        lambda df: df.reset_index(drop=True),
    )
    .make_retrieval_gt_contents()
    .batch_apply(
        factoid_query_gen,  # query generation
        llm=llm,
    )
    .batch_apply(
        make_basic_gen_gt,  # answer generation (basic)
        llm=llm,
    )
    .batch_apply(
        make_concise_gen_gt,  # answer generation (concise)
        llm=llm,
    )
    .filter(
        dontknow_filter_rule_based,  # filter don't know
        lang="en",
    )
)

initial_qa.to_parquet('./qa.parquet', './corpus.parquet')
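The Docker tutorial below evaluates against a qa_test.parquet file. If you want a held-out test split, a plain-pandas sketch is enough (the 8:2 ratio here is an arbitrary illustration):

import pandas as pd

qa_df = pd.read_parquet("./qa.parquet")
train_df = qa_df.sample(frac=0.8, random_state=42)  # 80% for optimization
test_df = qa_df.drop(train_df.index)                # 20% held out for testing
train_df.to_parquet("./qa_train.parquet")
test_df.to_parquet("./qa_test.parquet")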

RAG Optimization


How does AutoRAG optimize a RAG pipeline?


🐳 AutoRAG Docker Guide

This guide provides a quick overview of building and running the AutoRAG Docker container for production, with instructions on setting up the environment for evaluation using your configuration and data paths.

🚀 Building the Docker Image

Tip: If you want an image for the GPU version, you can use autoraghq/autorag:gpu or autoraghq/autorag:gpu-parsing.

1. Download the dataset for Tutorial Step 1

python sample_dataset/eli5/load_eli5_dataset.py --save_path projects/tutorial_1

2. Run evaluate

Note: This step may take a long time to complete and involves OpenAI API calls, which may cost approximately $0.30.

docker run --rm -it \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -v $(pwd)/projects:/usr/src/app/projects \
  -e OPENAI_API_KEY=${OPENAI_API_KEY} \
  autoraghq/autorag:api evaluate \
  --config /usr/src/app/projects/tutorial_1/config.yaml \
  --qa_data_path /usr/src/app/projects/tutorial_1/qa_test.parquet \
  --corpus_data_path /usr/src/app/projects/tutorial_1/corpus.parquet \
  --project_dir /usr/src/app/projects/tutorial_1/

3. Run validate

docker run --rm -it \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -v $(pwd)/projects:/usr/src/app/projects \
  -e OPENAI_API_KEY=${OPENAI_API_KEY} \
  autoraghq/autorag:api validate \
  --config /usr/src/app/projects/tutorial_1/config.yaml \
  --qa_data_path /usr/src/app/projects/tutorial_1/qa_test.parquet \
  --corpus_data_path /usr/src/app/projects/tutorial_1/corpus.parquet

4. Run dashboard

docker run --rm -it \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -v $(pwd)/projects:/usr/src/app/projects \
  -e OPENAI_API_KEY=${OPENAI_API_KEY} \
  -p 8502:8502 \
  autoraghq/autorag:api dashboard \
    --trial_dir /usr/src/app/projects/tutorial_1/0

5. Run run_web

docker run --rm -it \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -v $(pwd)/projects:/usr/src/app/projects \
  -e OPENAI_API_KEY=${OPENAI_API_KEY} \
  -p 8501:8501 \
  autoraghq/autorag:api run_web --trial_path ./projects/tutorial_1/0

Key Points:

  • -v ~/.cache/huggingface:/root/.cache/huggingface: mounts the host machine’s Hugging Face cache into the container, enabling access to pre-downloaded models.
  • -e OPENAI_API_KEY=${OPENAI_API_KEY}: passes the OPENAI_API_KEY from your host environment into the container.

For more detailed instructions, refer to the Docker Installation Guide.

Quick Start

1. Set YAML File

First, you need to set the config YAML file for your RAG optimization.

You can get various config YAML files here. We highly recommend using a pre-made config YAML file to start.

If you want to make your own config YAML files, check out the Config YAML file section.

Here is an example of the config YAML file to use retrieval, prompt_maker, and generator nodes.

node_lines:
- node_line_name: retrieve_node_line  # Set Node Line (Arbitrary Name)
  nodes:
    - node_type: retrieval  # Set Retrieval Node
      strategy:
        metrics: [retrieval_f1, retrieval_recall, retrieval_ndcg, retrieval_mrr]  # Set Retrieval Metrics
      top_k: 3
      modules:
        - module_type: vectordb
          vectordb: default
        - module_type: bm25
        - module_type: hybrid_rrf
          weight_range: (4,80)
- node_line_name: post_retrieve_node_line  # Set Node Line (Arbitrary Name)
  nodes:
    - node_type: prompt_maker  # Set Prompt Maker Node
      strategy:
        metrics:   # Set Generation Metrics
          - metric_name: meteor
          - metric_name: rouge
          - metric_name: sem_score
            embedding_model: openai
      modules:
        - module_type: fstring
          prompt: "Read the passages and answer the given question. \n Question: {query} \n Passage: {retrieved_contents} \n Answer : "
    - node_type: generator  # Set Generator Node
      strategy:
        metrics:  # Set Generation Metrics
          - metric_name: meteor
          - metric_name: rouge
          - metric_name: sem_score
            embedding_model: openai
      modules:
        - module_type: openai_llm
          llm: gpt-4o-mini
          batch: 16

2. Run AutoRAG

You can evaluate your RAG pipeline with just a few lines of code.

from autorag.evaluator import Evaluator

evaluator = Evaluator(qa_data_path='your/path/to/qa.parquet', corpus_data_path='your/path/to/corpus.parquet')
evaluator.start_trial('your/path/to/config.yaml')

Or you can use the command line interface:

autorag evaluate --config your/path/to/default_config.yaml --qa_data_path your/path/to/qa.parquet --corpus_data_path your/path/to/corpus.parquet

Once it is done, you will see several files and folders created in your current directory. In the trial folder, which is named with a number (like 0), you can check the summary.csv file that summarizes the evaluation results and identifies the best RAG pipeline for your data.

For more details, you can check out what the folder structure looks like here.
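For example, a quick programmatic look at a finished trial (assuming the trial folder is 0, as above) might be:

import pandas as pd

# summary.csv lives in the numbered trial folder created by the run
summary = pd.read_csv("./0/summary.csv")
print(summary)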

3. Run Dashboard

You can run a dashboard to easily see the result.

autorag dashboard --trial_dir /your/path/to/trial_dir

sample dashboard

dashboard

4. Deploy your optimal RAG pipeline (for testing)

4-1. Run as a Code

You can use the optimal RAG pipeline right away from the trial folder. The trial folder is the directory used when running the dashboard (like 0, 1, 2, ...).

from autorag.deploy import Runner

runner = Runner.from_trial_folder('/your/path/to/trial_dir')
runner.run('your question')

4-2. Run as an API server

You can run this pipeline as an API server.

Check out the API endpoint here.

import nest_asyncio
from autorag.deploy import ApiRunner

nest_asyncio.apply()

runner = ApiRunner.from_trial_folder('/your/path/to/trial_dir')
runner.run_api_server()
Or run it from the command line:

autorag run_api --trial_dir your/path/to/trial_dir --host 0.0.0.0 --port 8000

The CLI command uses the extracted config YAML file. If you want to know more about it, check out here.
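If you have extracted a pipeline YAML, a hedged sketch of loading it directly is below; Runner.from_yaml and the pipeline.yaml path follow the deployment docs, so verify against your installed version.

from autorag.deploy import Runner

# Load a deployed pipeline from an extracted pipeline YAML file
runner = Runner.from_yaml('your/path/to/pipeline.yaml')
runner.run('your question')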

4-3. Run as a Web Interface

You can run this pipeline as a web interface.

Check out the web interface here.

autorag run_web --trial_path your/path/to/trial_path

sample web interface

web_interface

Use the advanced web interface

You can deploy the advanced web interface, powered by Kotaemon, to fly.io. Go here to use it and deploy it to fly.io.

Example:

Kotaemon Example

📌 Supported Data Creation Modules


  • You can check all our Parsing Modules here
  • You can check all our Chunk Modules here

❗ Supported RAG Optimization Nodes & Modules


You can check all our supported Nodes & Modules here

❗ Supported Evaluation Metrics


You can check all our supported Evaluation Metrics here

☎️ FAQ

🛣️ Roadmap

💻 Hardware Specs

Running AutoRAG

🍯 Tips/Tricks

☎️ Troubleshooting

💬 Talk with Founders

Talk with us! We are always open to talking with you.


✨ Contributors ✨

Thanks go to these wonderful people:

Contribution

We are developing AutoRAG as open source.

This project welcomes contributions and suggestions. Feel free to contribute.

Plus, check out our detailed documentation here.
