Download the pre-quantized model:

```bash
curl -LO https://huggingface.co/wasmedge/llama2/blob/main/llama-2-7b-chat-q5_k_m.gguf
```
Run the inference application in WasmEdge.

```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-7b-chat-q5_k_m.gguf \
llama-chat.wasm --prompt-template llama-2-chat
```

After executing the command, you may need to wait a moment for the input prompt to appear. You can enter your question once you see the `[USER]:` prompt:

```bash
[USER]:
I have two apples, each costing 5 dollars. What is the total cost of these apple
*** [prompt begin] ***
<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as short as possible, while being safe. <</SYS>>
I have two apples, each costing 5 dollars. What is the total cost of these apple [/INST]
*** [prompt end] ***
[ASSISTANT]:
The total cost of the two apples is 10 dollars.
[USER]:
How about four apples?
*** [prompt begin] ***
<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as short as possible, while being safe. <</SYS>>
I have two apples, each costing 5 dollars. What is the total cost of these apple [/INST] The total cost of the two apples is 10 dollars. </s><s>[INST] How about four apples? [/INST]
*** [prompt end] ***
[ASSISTANT]:
The total cost of four apples is 20 dollars.
```
## Build and run
Download the llama-2-13b model:

```bash
curl -LO https://huggingface.co/wasmedge/llama2/blob/main/llama-2-13b-q5_k_m.gguf
```
Next, use WasmEdge to load the llama-2-13b model and then ask the model questions.
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-13b-chat-q5_k_m.gguf \
llama-chat.wasm --prompt-template llama-2-chat
```
After executing the command, you may need to wait a moment for the input prompt to appear. You can enter your question once you see the `[USER]:` prompt:
```bash
[USER]:
Who is Robert Oppenheimer?
*** [prompt begin] ***
<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as short as possible, while being safe. <</SYS>>
Who is Robert Oppenheimer? [/INST]
*** [prompt end] ***
[ASSISTANT]:
Robert Oppenheimer was an American theoretical physicist and director of the Manhattan Project, which developed the atomic bomb during World War II. He is widely regarded as one of the most important physicists of the 20th century and is known for his contributions to the development of quantum mechanics and the theory of the atomic nucleus. Oppenheimer was also a prominent figure in the post-war nuclear weapons debate and was a strong advocate for international cooperation on nuclear weapons control.
```
## Optional: run the model with different CLI options
You can use CLI options to configure the model execution:
```bash
-m, --model-alias <ALIAS>
Model alias [default: default]
-c, --ctx-size <CTX_SIZE>
Size of the prompt context [default: 4096]
-n, --n-predict <N_PREDICT>
Number of tokens to predict [default: 1024]
-g, --n-gpu-layers <N_GPU_LAYERS>
Number of layers to run on the GPU [default: 100]
-b, --batch-size <BATCH_SIZE>
Batch size for prompt processing [default: 4096]
-r, --reverse-prompt <REVERSE_PROMPT>
Halt generation at PROMPT, return control.
-s, --system-prompt <SYSTEM_PROMPT>
System prompt message string [default: "[Default system message for the prompt template]"]
-p, --prompt-template <TEMPLATE>
Prompt template. [default: llama-2-chat] [possible values: llama-2-chat, codellama-instruct, mistral-instruct-v0.1, mistrallite, openchat, belle-llama-2-chat, vicuna-chat, chatml]
--log-prompts
Print prompt strings to stdout
--log-stat
Print statistics to stdout
--log-enable
Print all log information to stdout
--stream-stdout
Print the output to stdout in the streaming way
-h, --help
Print help
```
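For instance, the following command (a sketch that simply combines options from the list above) runs the 7b chat model with a 2048-token context, caps each response at 512 tokens, and streams the output to stdout:

```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-7b-chat-q5_k_m.gguf \
  llama-chat.wasm --prompt-template llama-2-chat \
  --ctx-size 2048 --n-predict 512 --stream-stdout
```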
For example, the following command tells WasmEdge to print out logs and statistics of the model at runtime.
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-7b-chat-q5_k_m.gguf \
llama-chat.wasm --prompt-template llama-2-chat --log-enable
..................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size = 256.00 MB
llama_new_context_with_model: compute buffer total size = 76.63 MB
[2023-11-07 02:07:44.019] [info] [WASI-NN] GGML backend: llama_system_info: AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 |
llama_print_timings: load time = 11523.19 ms
llama_print_timings: sample time = 2.62 ms / 102 runs ( 0.03 ms per token, 38961.04 tokens per second)
llama_print_timings: prompt eval time = 11479.27 ms / 49 tokens ( 234.27 ms per token, 4.27 tokens per second)
llama_print_timings: eval time = 13571.37 ms / 101 runs ( 134.37 ms per token, 7.44 tokens per second)
llama_print_timings: total time = 25104.57 ms
[ASSISTANT]:
Ah, a fellow Peanuts enthusiast! Snoopy is Charlie Brown's lovable and imaginative beagle, known for his wild and wacky adventures in the comic strip and television specials. He's a loyal companion to Charlie Brown and the rest of the Peanuts gang, and his antics often provide comic relief in the series. Is there anything else you'd like to know about Snoopy? 🐶
```
## Improve performance
Next, the prompt is converted into bytes and set as the input tensor for the model.
.expect("Failed to set prompt as the input tensor");
```
Next, execute the model inference.
```rust
// execute the inference
context.compute().expect("Failed to complete inference");

// ... (the elided code reads the output tensor from `context` into an `output` String) ...

println!("\nprompt: {}", &prompt);
println!("\noutput: {}", output);
```
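Putting the steps together, here is a minimal sketch of the whole inference flow. It assumes the `wasmedge-wasi-nn` crate and the `default` model alias registered by `--nn-preload`; the prompt text and buffer size are illustrative:

```rust
use wasmedge_wasi_nn::{ExecutionTarget, GraphBuilder, GraphEncoding, TensorType};

fn main() {
    let prompt = "Who is Robert Oppenheimer?";

    // load the model preloaded by `--nn-preload` under the alias "default"
    let graph = GraphBuilder::new(GraphEncoding::Ggml, ExecutionTarget::AUTO)
        .build_from_cache("default")
        .expect("Failed to load the model");
    let mut context = graph
        .init_execution_context()
        .expect("Failed to init the execution context");

    // convert the prompt into bytes and set it as the input tensor
    let tensor_data = prompt.as_bytes().to_vec();
    context
        .set_input(0, TensorType::U8, &[1], &tensor_data)
        .expect("Failed to set prompt as the input tensor");

    // run the inference
    context.compute().expect("Failed to complete inference");

    // read the generated text back from the output tensor
    let mut output_buffer = vec![0u8; 4096]; // illustrative buffer size
    let output_size = context
        .get_output(0, &mut output_buffer)
        .expect("Failed to get the output tensor");
    let output = String::from_utf8_lossy(&output_buffer[..output_size]);

    println!("\nprompt: {}", prompt);
    println!("\noutput: {}", output);
}
```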
The code above is a simple [one-time chat with the llama 2 model](https://github.com/second-state/llama-utils/tree/main/simple). But we have more!
* If you're looking for continuous conversations with llama 2 models, please check out the source code [here](https://github.com/second-state/llama-utils/tree/main/chat).
* If you want to construct OpenAI-compatible APIs for your own llama2 model, or for the Llama2 model itself, please check out the source code [here](https://github.com/second-state/llama-utils/tree/main/api-server). A sketch of such an API request follows below.
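As a quick illustration of the API server idea, a chat request could look like the following sketch. The listening address, port, and model name here are assumptions that depend on how you start the server; the request body follows the standard OpenAI chat-completions format:

```bash
curl -X POST http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"model": "llama-2-chat", "messages": [{"role": "user", "content": "Who is Robert Oppenheimer?"}]}'
```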