* add llama3.2 GPU example
* change prompt format reference url
* update
* add Meta-Llama-3.2-1B-Instruct sample output
* update wording
Showing 5 changed files with 249 additions and 1 deletion.
python/llm/example/GPU/HuggingFace/LLM/llama3.2/README.md
155 changes: 155 additions & 0 deletions
# Llama3.2
In this directory, you will find examples of how to apply IPEX-LLM INT4 optimizations to Llama3.2 models on [Intel GPUs](../../../README.md). For illustration purposes, we use [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) and [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) as reference Llama3.2 models.

## 0. Requirements
To run these examples with IPEX-LLM on Intel GPUs, there are some recommended requirements for your machine; please refer to [here](../../../README.md#requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a Llama3.2 model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT4 optimizations on Intel GPUs.
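At its core, the example follows the standard IPEX-LLM flow: load the checkpoint with `load_in_4bit=True`, move the model to the `xpu` device, and call `generate()`. The following is a minimal sketch distilled from [generate.py](./generate.py); the model path and token budget are placeholder values, and the Llama3.2 chat prompt formatting is omitted for brevity:

```python
import torch
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-3.2-3B-Instruct"  # or a local checkpoint folder

# Load with IPEX-LLM INT4 optimizations, then move the model to the Intel GPU
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             optimize_model=True,
                                             use_cache=True)
model = model.half().to('xpu')
tokenizer = AutoTokenizer.from_pretrained(model_path)

with torch.inference_mode():
    input_ids = tokenizer.encode("What is AI?", return_tensors="pt").to('xpu')
    output = model.generate(input_ids, max_new_tokens=32)
    output = output.cpu()
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```
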
### 1. Install
#### 1.1 Installation on Linux
We suggest using conda to manage the environment:
```bash
conda create -n llm python=3.11
conda activate llm
# the below command will install intel_extension_for_pytorch==2.1.10+xpu by default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

pip install transformers==4.45.0
pip install accelerate==0.33.0
pip install trl
```

#### 1.2 Installation on Windows
We suggest using conda to manage the environment:
```bash
conda create -n llm python=3.11 libuv
conda activate llm

# the below command will install intel_extension_for_pytorch==2.1.10+xpu by default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

pip install transformers==4.45.0
pip install accelerate==0.33.0
pip install trl
```

### 2. Configure OneAPI environment variables for Linux

> [!NOTE]
> Skip this step if you are running on Windows.

This step is required on Linux when oneAPI is installed via APT or an offline installer; skip it if oneAPI was installed via pip.

```bash
source /opt/intel/oneapi/setvars.sh
```

### 3. Runtime Configurations
For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.
#### 3.1 Configurations for Linux
<details>

<summary>For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series</summary>

```bash
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
```

</details>

<details>

<summary>For Intel Data Center GPU Max Series</summary>

```bash
export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
export ENABLE_SDP_FUSION=1
```
> Note: `libtcmalloc.so` can be installed with `conda install -c conda-forge -y gperftools=2.10`.
</details>
<details>

<summary>For Intel iGPU</summary>

```bash
export SYCL_CACHE_PERSISTENT=1
export BIGDL_LLM_XMX_DISABLED=1
```

</details>

#### 3.2 Configurations for Windows
<details>

<summary>For Intel iGPU</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
set BIGDL_LLM_XMX_DISABLED=1
```

</details>

<details>

<summary>For Intel Arc™ A-Series Graphics</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
```

</details>

> [!NOTE]
> The first time each model runs on an Intel iGPU, Intel Arc™ A300-Series, or Pro A60 graphics card, it may take several minutes to compile.
### 4. Running examples

```bash
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id of the Llama3.2 model (e.g. `meta-llama/Llama-3.2-3B-Instruct`) to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'meta-llama/Llama-3.2-3B-Instruct'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (wrapped with the integrated chat prompt format; see the sketch below). It defaults to `'What is AI?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
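
For reference, the chat prompt format that [generate.py](./generate.py) wraps around `--prompt` for a single-turn conversation (no system prompt, no history) can be reproduced with the short sketch below; the exact logic lives in the script's `get_prompt` helper, and `build_llama32_prompt` is only an illustrative name used here:

```python
def build_llama32_prompt(user_input: str) -> str:
    # Mirrors get_prompt() in generate.py for a single turn without a system prompt
    return ('<|begin_of_text|>'
            f'<|start_header_id|>user<|end_header_id|>\n\n{user_input.strip()}<|eot_id|>'
            '<|start_header_id|>assistant<|end_header_id|>\n\n')

print(build_llama32_prompt("What is AI?"))
```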

#### Sample Output
#### [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)
```log
Inference time: xxxx s
-------------------- Prompt --------------------
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
What is AI?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
-------------------- Output (skip_special_tokens=False) --------------------
<|begin_of_text|><|begin_of_text|><|start_header_id|>user<|end_header_id|>
What is AI?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Artificial Intelligence (AI) refers to the development of computer systems that can perform tasks that would typically require human intelligence, such as learning, problem-solving, and
```

#### [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
```log
Inference time: xxxx s
-------------------- Prompt --------------------
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
What is AI?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
-------------------- Output (skip_special_tokens=False) --------------------
<|begin_of_text|><|begin_of_text|><|start_header_id|>user<|end_header_id|>
What is AI?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Artificial Intelligence (AI) refers to the development of computer systems that can perform tasks that typically require human intelligence, such as learning, problem-solving, and decision
```
python/llm/example/GPU/HuggingFace/LLM/llama3.2/generate.py
91 changes: 91 additions & 0 deletions
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

# You could tune the prompt based on your own model.
# The prompt format here follows https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_2/
DEFAULT_SYSTEM_PROMPT = """\
"""

def get_prompt(user_input: str, chat_history: list[tuple[str, str]],
               system_prompt: str) -> str:
    prompt_texts = [f'<|begin_of_text|>']

    if system_prompt != '':
        prompt_texts.append(f'<|start_header_id|>system<|end_header_id|>\n\n{system_prompt}<|eot_id|>')

    for history_input, history_response in chat_history:
        prompt_texts.append(f'<|start_header_id|>user<|end_header_id|>\n\n{history_input.strip()}<|eot_id|>')
        prompt_texts.append(f'<|start_header_id|>assistant<|end_header_id|>\n\n{history_response.strip()}<|eot_id|>')

    prompt_texts.append(f'<|start_header_id|>user<|end_header_id|>\n\n{user_input.strip()}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n')
    return ''.join(prompt_texts)
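# For example, with an empty system prompt and no chat history,
# get_prompt("What is AI?", [], "") produces:
# '<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nWhat is AI?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n'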

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for Llama3.2 model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="meta-llama/Llama-3.2-3B-Instruct",
                        help='The huggingface repo id for the Llama3.2 model (e.g. `meta-llama/Llama-3.2-3B-Instruct`) to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load the model in 4 bit,
    # which converts the relevant layers in the model into INT4 format.
    # When running LLMs on Intel iGPUs on Windows, we recommend setting `cpu_embedding=True` in the from_pretrained function.
    # This will allow the memory-intensive embedding layer to utilize the CPU instead of the iGPU.
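    # For example, a Windows-iGPU variant of the call below could look like this (illustrative only):
    #   model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True, optimize_model=True,
    #                                                trust_remote_code=True, use_cache=True, cpu_embedding=True)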
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 load_in_4bit=True,
                                                 optimize_model=True,
                                                 trust_remote_code=True,
                                                 use_cache=True)
    model = model.half().to('xpu')

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = get_prompt(args.prompt, [], system_prompt=DEFAULT_SYSTEM_PROMPT)

        input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')
        # ipex_llm model needs a warmup, then inference time can be accurate
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)

        # start inference
        st = time.time()
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)
        torch.xpu.synchronize()
        end = time.time()
        output = output.cpu()
        output_str = tokenizer.decode(output[0], skip_special_tokens=False)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output (skip_special_tokens=False)', '-'*20)
        print(output_str)