miranthajayatilake/nanoQA

Question-answering on your own data with Large Language Models (LLMs)

Chat with your data to find answers 🔍 ⚡

nanoQA uses Large Language Models (LLMs) to build a question-answering application on your own data.

Please refer to this blog post for a comprehensive guide: LINK
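
At its core, this kind of application follows a retrieve-then-answer pattern: documents relevant to the question are fetched from the data store and passed to an LLM along with the question. The sketch below is a minimal illustration of that flow, not nanoQA's actual code; the `search_documents` and `ask_llm` helpers, the `text` field name, and the local Elasticsearch address are all assumptions for the example.

```python
# Minimal sketch of the retrieve-then-answer pattern (illustrative, not nanoQA's code).
from elasticsearch import Elasticsearch  # assumes the Python Elasticsearch client is available

es = Elasticsearch("http://localhost:9200")  # assumes the default local datastore address

def search_documents(index: str, question: str, top_k: int = 3) -> list[str]:
    """Fetch the documents most relevant to the question from the index."""
    # The "text" field name is an assumption; adjust it to match your index mapping.
    hits = es.search(index=index, query={"match": {"text": question}}, size=top_k)  # elasticsearch-py 8.x style
    return [hit["_source"].get("text", "") for hit in hits["hits"]["hits"]]

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM client you use."""
    raise NotImplementedError

def answer(index: str, question: str) -> str:
    # Build a prompt that grounds the LLM in the retrieved documents.
    context = "\n\n".join(search_documents(index, question))
    prompt = f"Answer the question using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
    return ask_llm(prompt)
```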

Demo



Quick start

  • Create a virtual environment with Python (tested with Python 3.10.9 on Anaconda).
  • Run pip install -r requirements.txt to install all dependencies.
  • Make sure Docker is up and running in your local environment. Docker is used to set up Elasticsearch as the data store.
  • Run bash datastore.sh to pull and set up Elasticsearch. Wait until this step completes.
  • Run python sample_data.py data/faq_covid https://s3.eu-central-1.amazonaws.com/deepset.ai-farm-qa/datasets/documents/small_faq_covid.csv.zip index_qa. This script downloads a sample dataset of FAQs on COVID-19 and indexes it under index_qa. The dataset and index name are for demo purposes; you can replace them with your own data and naming.
  • Run streamlit run app.py to spin up the user interface.

Now you can provide your index name and start chatting with your data. (A quick way to sanity-check the index first is sketched below.)
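
If you want to confirm that indexing succeeded before opening the UI, you can query the Elasticsearch container directly. The snippet below is a small sketch that assumes Elasticsearch is listening on the default local port 9200 and that the demo index is named index_qa; adjust both if your setup differs.

```python
# Quick sanity check of the demo index (assumes Elasticsearch on localhost:9200).
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

index_name = "index_qa"  # replace with your own index name if you changed it
print("documents indexed:", es.count(index=index_name)["count"])

# Peek at a few documents to confirm the content looks right.
sample = es.search(index=index_name, query={"match_all": {}}, size=3)  # elasticsearch-py 8.x style
for hit in sample["hits"]["hits"]:
    print(hit["_source"])
```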

Contributing

Contributions to this project in any form are highly appreciated. The project is still at a very early stage, so suggestions for additional features and functionality are welcome. General instructions on how to contribute can be found in CONTRIBUTING.

Getting help

Please use this repository's issue tracker to report bugs or ask questions.

You can also join the DISCORD.
