Refine axolotl quickstart (#10957)
* Add default accelerate config for axolotl quickstart.
* Fix requirement link.
* Upgrade peft to 0.10.0 in requirement.
qiyuangong authored May 8, 2024
1 parent c801c37 commit 164e695
Showing 4 changed files with 38 additions and 4 deletions.
11 changes: 9 additions & 2 deletions docs/readthedocs/source/doc/LLM/Quickstart/axolotl_quickstart.md
@@ -33,7 +33,7 @@ git clone https://github.com/OpenAccess-AI-Collective/axolotl/tree/v0.4.0
cd axolotl
# replace requirements.txt
rm requirements.txt
-wget -O requirements.txt https://github.com/intel-analytics/ipex-llm/blob/main/python/llm/example/GPU/LLM-Finetuning/axolotl/requirements-xpu.txt
+wget -O requirements.txt https://raw.githubusercontent.com/intel-analytics/ipex-llm/main/python/llm/example/GPU/LLM-Finetuning/axolotl/requirements-xpu.txt
pip install -e .
pip install transformers==4.36.0
# to avoid https://github.com/OpenAccess-AI-Collective/axolotl/issues/1544
@@ -92,7 +92,14 @@ Configure oneAPI variables by running the following command:
```

-Configure accelerate to avoid training with CPU
+Configure `accelerate` to avoid training on the CPU. You can download a default `default_config.yaml` with `use_cpu: false` already set.

```bash
mkdir -p ~/.cache/huggingface/accelerate/
wget -O ~/.cache/huggingface/accelerate/default_config.yaml https://raw.githubusercontent.com/intel-analytics/ipex-llm/main/python/llm/example/GPU/LLM-Finetuning/axolotl/default_config.yaml
```

Alternatively, you can configure `accelerate` interactively based on your requirements.

```bash
accelerate config
11 changes: 10 additions & 1 deletion python/llm/example/GPU/LLM-Finetuning/axolotl/README.md
@@ -36,13 +36,22 @@ source /opt/intel/oneapi/setvars.sh

#### 2.2 Configure `accelerate` in the command line interactively.

You can download a default `default_config.yaml` with `use_cpu: false` already set.

```bash
mkdir -p ~/.cache/huggingface/accelerate/
wget -O ~/.cache/huggingface/accelerate/default_config.yaml https://raw.githubusercontent.com/intel-analytics/ipex-llm/main/python/llm/example/GPU/LLM-Finetuning/axolotl/default_config.yaml
```

Alternatively, you can configure `accelerate` interactively based on your requirements.

```bash
accelerate config
```

Please answer `NO` to the prompt `Do you want to run your training on CPU only (even if a GPU / Apple Silicon device is available)? [yes/NO]:`.

-After finish accelerate config, check if `use_cpu` is disable (i.e., ` use_cpu: false`) in accelerate config file (`~/.cache/huggingface/accelerate/default_config.yaml`).
+After finishing the accelerate config, check that `use_cpu` is disabled (i.e., `use_cpu: false`) in the accelerate config file (`~/.cache/huggingface/accelerate/default_config.yaml`).
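The manual check above can be scripted; a minimal sketch, assuming the config file lives at accelerate's default path (the `printf` fallback is illustration only, so the grep has something to inspect on a fresh machine):

```shell
CONFIG="$HOME/.cache/huggingface/accelerate/default_config.yaml"
mkdir -p "$(dirname "$CONFIG")"
# Illustration only: seed a minimal config if none exists yet.
[ -f "$CONFIG" ] || printf 'use_cpu: false\n' > "$CONFIG"
# Warn if accelerate is still set to train on the CPU.
if grep -q '^use_cpu: false' "$CONFIG"; then
    echo "OK: use_cpu is disabled"
else
    echo "WARNING: accelerate may train on CPU" >&2
fi
```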

#### 2.3 (Optional) Set `HF_HUB_OFFLINE=1` to avoid Hugging Face Hub sign-in.

18 changes: 18 additions & 0 deletions python/llm/example/GPU/LLM-Finetuning/axolotl/default_config.yaml
@@ -0,0 +1,18 @@
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: 'NO'
downcast_bf16: 'no'
gpu_ids: all
ipex_config:
use_xpu: true
machine_rank: 0
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 1
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
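To sanity-check a downloaded copy, the two flags that steer training off the CPU and onto the Intel XPU can be pulled out with standard shell tools; a sketch using a temp copy of the relevant lines of the config above:

```shell
tmp=$(mktemp -d)
# Recreate the relevant lines of default_config.yaml from this commit.
cat > "$tmp/default_config.yaml" <<'EOF'
compute_environment: LOCAL_MACHINE
distributed_type: 'NO'
ipex_config:
  use_xpu: true
num_machines: 1
num_processes: 1
use_cpu: false
EOF
# Print the two lines that matter for GPU training.
grep -E 'use_xpu|use_cpu' "$tmp/default_config.yaml"
```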
2 changes: 1 addition & 1 deletion python/llm/example/GPU/LLM-Finetuning/axolotl/requirements-xpu.txt
@@ -1,7 +1,7 @@
# This file is copied from https://github.com/OpenAccess-AI-Collective/axolotl/blob/v0.4.0/requirements.txt
--extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
packaging==23.2
-peft==0.5.0
+peft==0.10.0
tokenizers
bitsandbytes>=0.41.1
accelerate==0.23.0
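After `pip install -e .` with the replaced requirements file, a guard like the following can confirm the peft pin from this commit took effect (a sketch; assumes `python` is on PATH, and prints a notice rather than failing when `peft` is absent):

```shell
want="0.10.0"
have=$(python -c "import importlib.metadata as m; print(m.version('peft'))" 2>/dev/null || echo "missing")
if [ "$have" = "$want" ]; then
    echo "peft pin OK ($have)"
else
    echo "peft is $have, expected $want"
fi
```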
