From ea2373c97ed11bcde7f428140a971d69df24d716 Mon Sep 17 00:00:00 2001
From: debrupf2946
Date: Fri, 23 Aug 2024 17:46:44 +0530
Subject: [PATCH] Updated Readme.md of evaluation module

---
 graph_rag/evaluation/README.MD | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/graph_rag/evaluation/README.MD b/graph_rag/evaluation/README.MD
index cc78982..e648d0b 100644
--- a/graph_rag/evaluation/README.MD
+++ b/graph_rag/evaluation/README.MD
@@ -17,7 +17,7 @@ This section demonstrates how to use the functions provided in the module:
 
 This module offers tools to generate question-answer (QA) pairs from input documents using a language model and critique them based on various criteria like groundedness, relevance, and standalone quality.
 
-#### Generate and Critique QA Pairs
+> #### Generate and Critique QA Pairs
 
 To use this module, follow these steps:
 
@@ -81,7 +81,7 @@ critiqued_qa_pairs = critique_qa(qa_pairs)
 
 You can easily evaluate the performance of your query engine using this module.
 
-#### 1. Load and Evaluate Your Dataset
+> #### Load and Evaluate Your Dataset
 
 Use the `load_test_dataset` function to load your dataset and directly evaluate it using the `evaluate` function. This method handles all necessary steps, including batching the data.
 
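The README sections this patch touches describe a two-step workflow: generate and critique QA pairs, then load a test dataset and evaluate a query engine against it. As a minimal sketch of how that workflow might be driven: only `critique_qa(qa_pairs)` appears verbatim in the hunk context above, so the import path, the `generate_qa_pairs` helper, the argument shapes, and the file name below are illustrative assumptions, not the module's confirmed API.

```python
# Hedged sketch of the workflow described in the patched README sections.
# Only `critique_qa(qa_pairs)` is quoted verbatim in this patch; the import
# path, the other signatures, and the dataset path are assumptions.
from graph_rag.evaluation import (
    generate_qa_pairs,  # assumed name for the QA-pair generator
    critique_qa,        # quoted in the hunk context above
    load_test_dataset,  # named in the README prose
    evaluate,           # named in the README prose
)

documents = [...]   # your input documents, loaded elsewhere
query_engine = ...  # your query engine, built elsewhere

# Step 1: generate QA pairs and critique them on groundedness, relevance,
# and standalone quality (the criteria the README lists).
qa_pairs = generate_qa_pairs(documents)
critiqued_qa_pairs = critique_qa(qa_pairs)

# Step 2: load a saved test dataset and evaluate the query engine against
# it; per the README, batching is handled internally by these helpers.
test_dataset = load_test_dataset("test_dataset.json")
results = evaluate(query_engine, test_dataset)
```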