The purpose of this document is to make it easy for open-source community members to contribute to this project. We'd love to discuss your contributions with you via a GitHub Issue or Discussion, or on Discord!
This project uses a GitHub workflow to enforce code standards. The `rusty-hook` project is used to run a similar set of checks automatically before committing. If you would like to run these checks locally, use `cargo run -p precommit-check`.
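rusty-hook is configured through a `.rusty-hook.toml` file at the repository root that maps Git hooks to commands. The snippet below is an illustrative sketch rather than a copy of this repository's configuration; a setup that runs the pre-commit check could look like this:

```toml
# Illustrative .rusty-hook.toml; see the repository's own file for the real configuration
[hooks]
# Run the same checks as CI before each commit
pre-commit = "cargo run -p precommit-check"

[logging]
verbose = true
```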
Follow these steps to update the GGML submodule and regenerate the Rust bindings (this is only necessary if your changes depend on new GGML features):
```sh
git submodule update --remote
cargo run --release --package generate-ggml-bindings
```
This repository includes a `launch.json` file that can be used for debugging with Visual Studio Code; this file will need to be updated to reflect where models are stored on your system. Debugging with Visual Studio Code requires a language extension that depends on your operating system. Keep in mind that debugging text generation is extremely slow, but debugging model loading is not.
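For orientation, a minimal `launch.json` entry might look like the sketch below, assuming the CodeLLDB extension (one common choice). The configuration name, binary name, subcommand, and model path are placeholders, not this repository's actual values; adapt them to the configurations already present in the file and to wherever your models live.

```jsonc
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "lldb",
      "request": "launch",
      "name": "Debug inference (example)",
      "cargo": {
        // Placeholder: build the binary you want to step through
        "args": ["build", "--bin", "llm"]
      },
      // Placeholder arguments: point the model path at a model on your system
      "args": ["infer", "--model-path", "${workspaceFolder}/models/your-model.bin"]
    }
  ]
}
```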
Here are some tried-and-true references for learning more about large language models:
- The Illustrated GPT-2 - an excellent technical description of how this seminal language model generates text
- Andrej Karpathy's "Neural Networks: Zero to Hero" - a series of in-depth YouTube videos that guide the viewer through creating a neural network, a large language model, and a fully functioning chatbot, from scratch (in Python)
- rustygrad - a native Rust implementation of Andrej Karpathy's micrograd
- Understanding Deep Learning (Chapter 12 specifically)