Welcome to the Jarvis project! We appreciate your interest in contributing. This guide provides information on how to contribute to the project.
- Fork the repository on GitHub.
- Clone your fork to your local machine.
- Create a new branch for your work.
- Make your changes in your branch.
- Push your changes to your fork.
- Submit a pull request from your fork to the main repository.
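The branch-and-commit part of the flow above can be tried offline in a throwaway repo. This is just a sketch: the branch name `fix/my-bugfix` is an example, and for a real contribution you would `git clone` your fork and push the branch there instead.

```shell
set -e
# Work in a throwaway repo so this sketch is runnable anywhere;
# for a real contribution, clone your fork of the project instead.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo

# Create a feature branch for your work (example name)
git checkout -q -b fix/my-bugfix

# Make a change and commit it on the branch
echo "my change" > file.txt
git add file.txt
git -c user.name=you -c user.email=you@example.com commit -qm "Describe the fix"

# The branch now holds your work; push it to your fork and open a PR
git branch --show-current
```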
We expect all contributors to follow our Code of Conduct. Please review it before you start contributing.
You can contribute in several ways:
- Submitting a pull request for bug fixes or new features.
- Improving documentation.
- Reporting issues.
- Suggesting new features or improvements.
- Improving the dataset.
We use llama.cpp to convert and quantize the model, and deploy it with Ollama. If you make changes that affect the model, you may need to quantize it and run it locally before pushing your code. Please refer to the Ollama documentation for more information.
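For the local Ollama run, a minimal Modelfile sketch might look like the following. The file name `quantized.bin` matches the output of the quantize step below; the model name `jarvis` is just an example, not a project convention.

```
# Modelfile — point Ollama at the locally quantized weights
FROM ./quantized.bin

# Then build and run it locally:
#   ollama create jarvis -f Modelfile
#   ollama run jarvis
```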
- Clone this repo.
- Make sure you have the Python dependencies installed; we use Python 3.10.13 for this project.
- That's it, you are in.
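A quick way to confirm your interpreter matches is a small check like the one below. This is a convenience sketch, not a script that ships with the repo:

```python
import sys

# The project targets Python 3.10.13; any 3.10.x interpreter should match
REQUIRED = (3, 10)

def version_ok(version_info, required=REQUIRED):
    """True if the interpreter's major.minor equals the required version."""
    return tuple(version_info[:2]) == required

if __name__ == "__main__":
    status = "OK" if version_ok(sys.version_info) else "mismatch"
    print(f"Python {sys.version.split()[0]}: {status}")
```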
First, clone the ollama/ollama repo:
git clone https://github.com/ollama/ollama.git
cd ollama
After cloning, fetch its llama.cpp submodule:
git submodule init
git submodule update llm/llama.cpp
Next, install the Python dependencies:
python3 -m venv llm/llama.cpp/.venv
source llm/llama.cpp/.venv/bin/activate
pip install -r llm/llama.cpp/requirements.txt
You can use whatever Python version manager you prefer; we use Anaconda.
Then build the quantize tool:
make -C llm/llama.cpp quantize
Note: some model architectures require specific convert scripts. For example, Qwen models require running convert-hf-to-gguf.py instead of convert.py.
Then convert and quantize the model:
python llm/llama.cpp/convert.py ../jarvis-hf --outtype f16 --outfile converted.bin
llm/llama.cpp/quantize converted.bin quantized.bin q4_0
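As a sanity check after converting, you can verify that the output file starts with the GGUF magic bytes that llama.cpp's converters write. This is a sketch that assumes your llama.cpp version emits GGUF files (older versions wrote the GGML format instead):

```python
GGUF_MAGIC = b"GGUF"  # first four bytes of files written by llama.cpp converters

def looks_like_gguf(path):
    """Cheap sanity check: does the file start with the GGUF magic bytes?"""
    with open(path, "rb") as f:
        return f.read(4) == GGUF_MAGIC
```

Running it on converted.bin and quantized.bin before loading them into Ollama can catch a failed or truncated conversion early.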
If you have any questions, please open an issue and we'll be happy to help.
Let's bring Jarvis to life together with today's tech :)