
Alpaca Lora 4bit

Made some adjustments to the code in peft and GPTQ-for-LLaMa to make LoRA finetuning possible with a 4-bit base model. The same adjustments can be made for 2-, 3-, and 8-bit models.
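To make the idea concrete, here is a minimal sketch (not this repo's actual implementation) of a linear layer whose frozen base weight stands in for the dequantized 4-bit matrix, with a trainable LoRA adapter on top:

import torch
import torch.nn as nn

class Lora4bitLinear(nn.Module):
    """Sketch: frozen base projection plus a trainable low-rank update.
    The real code dequantizes packed 4-bit weights; a frozen fp16
    weight stands in for that here."""

    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        # Stand-in for the dequantized 4-bit base weight: never trained.
        self.base_weight = nn.Parameter(
            torch.randn(out_features, in_features) * 0.02,
            requires_grad=False)
        # LoRA factors: the only trainable parameters.
        # B starts at zero so the adapter initially changes nothing.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.02)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        base = x @ self.base_weight.t()
        update = (x @ self.lora_A.t()) @ self.lora_B.t() * self.scaling
        return base + update

Only lora_A and lora_B receive gradients, which is what keeps the memory cost of finetuning low even with a large base model.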

Update Logs

  • Resolved a numerical instability issue.

  • Reconstructing the fp16 matrix from the 4-bit data and calling torch.matmul greatly increased inference speed (see the sketch after this list).

  • Added install scripts for Windows and Linux.

  • Added gradient checkpointing. With it enabled, a 30B model can be finetuned in 4-bit on a single GPU with 24 GB of VRAM (finetune.py updated). Note that it reduces training speed, so skip this option if you have enough VRAM; see the sketch in the Finetune section.

  • Added an install manual by s4rduk4r.
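As a rough illustration of the speedup noted above, the idea is to unpack the 4-bit weights into a full fp16 matrix once per call and hand the multiplication to torch.matmul. The packing layout below (two 4-bit values per byte) is illustrative only, not the actual GPTQ format:

import torch

def dequant_matmul(x, qweight, scales, zeros):
    """Sketch: reconstruct an fp16 weight matrix from 4-bit data,
    then use the fast fused fp16 GEMM in torch.matmul.

    qweight: (out, in // 2) uint8, two 4-bit values per byte
    scales, zeros: (out, 1) fp16 per-row quantization parameters
    """
    low = (qweight & 0x0F).to(torch.float16)
    high = (qweight >> 4).to(torch.float16)
    # Interleave the two nibbles back into a full (out, in) matrix.
    w = torch.stack((low, high), dim=-1).reshape(qweight.shape[0], -1)
    w = (w - zeros) * scales        # reconstruct fp16 weights
    return torch.matmul(x, w.t())   # let cuBLAS do the heavy lifting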

Requirements

gptq-for-llama: https://github.com/qwopqwop200/GPTQ-for-LLaMa
peft: https://github.com/huggingface/peft.git

Install

Copy the files from GPTQ-for-LLaMa into your GPTQ-for-LLaMa path and re-compile the CUDA extension.
Copy peft/tuners/lora.py into your peft path, replacing the original file.

Linux:

./install.sh

Windows:

./install.bat

Finetune

The same finetune script from https://github.com/tloen/alpaca-lora can be used.

After installation, this script can be used:

python finetune.py
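If you want gradient checkpointing (see the update log above), the relevant calls look roughly like this; the checkpoint name is a placeholder and the exact wiring lives in finetune.py:

from transformers import LlamaForCausalLM

# Placeholder checkpoint; the 4-bit path loads the model differently.
model = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")

# Recompute activations during backward instead of storing them,
# trading extra compute for a much smaller memory footprint.
model.gradient_checkpointing_enable()

# With a frozen base model, inputs must require grad so checkpointed
# segments still propagate gradients to the LoRA parameters.
model.enable_input_require_grads()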

Inference

After installation, this script can be used:

python inference.py
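For orientation, a hedged sketch of what inference with the 4-bit model looks like; the loader name below is assumed from this repo's autograd_4bit module, and the paths are placeholders:

import torch
# Assumed helper; name and signature are an assumption, not verified API.
from autograd_4bit import load_llama_model_4bit_low_ram

model, tokenizer = load_llama_model_4bit_low_ram(
    "./llama-13b-4bit/", "./llama-13b-4bit.pt")  # placeholder paths

prompt = "Tell me about alpacas."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))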

Text Generation Webui Monkey Patch

Clone the latest version of text-generation-webui and copy all the files from this repo into ./text-generation-webui/

git clone https://github.com/oobabooga/text-generation-webui.git

Open server.py and insert the following line at the beginning:

import custom_monkey_patch # apply monkey patch
import gc
import io
...

Then run the server with:

python server.py
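If the term is unfamiliar: a monkey patch simply rebinds a module attribute at import time so that all later callers get the replacement. Here is a self-contained toy example of the pattern (this is not the contents of custom_monkey_patch.py):

import json

_original_loads = json.loads

def loads_with_logging(s, *args, **kwargs):
    # Wrapper that adds behavior, then defers to the original function.
    print(f"json.loads called on {len(s)} chars")
    return _original_loads(s, *args, **kwargs)

json.loads = loads_with_logging  # every later caller gets the wrapper
print(json.loads('{"patched": true}'))

custom_monkey_patch applies the same idea to the webui's model-loading path, which is why the import must sit at the top of server.py, before anything else runs.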
