diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/start/install.md b/i18n/zh/docusaurus-plugin-content-docs/current/start/install.md
index f51aed37..b7679cf6 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/start/install.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/start/install.md
@@ -145,12 +145,19 @@ Then, go to [HTTPS request in Rust chapter](../develop/rust/http_service/client.
 
 WasmEdge supports various backends for `WASI-NN`.
 
+- [ggml backend](#wasi-nn-plug-in-with-ggml-backend): supported on `Ubuntu above 20.04` (x86_64) and macOS (M1 and M2).
 - [PyTorch backend](#wasi-nn-plug-in-with-pytorch-backend): supported on `Ubuntu above 20.04` and `manylinux2014_x86_64`.
 - [OpenVINO™ backend](#wasi-nn-plug-in-with-openvino-backend): supported on `Ubuntu above 20.04`.
 - [TensorFlow-Lite backend](#wasi-nn-plug-in-with-tensorflow-lite-backend): supported on `Ubuntu above 20.04`, `manylinux2014_x86_64`, and `manylinux2014_aarch64`.
 
-Noticed that the backends are exclusive. Developers can only choose and install one backend for the `WASI-NN` plug-in.
+Note that the backends are exclusive. Developers can choose and install only one backend for the `WASI-NN` plug-in.
 
+#### WASI-NN plug-in with ggml backend
+
+`WASI-NN` plug-in with `ggml` backend allows WasmEdge applications to perform llama2 model inference. To install WasmEdge with the `WASI-NN ggml backend` plug-in, please use the `--plugins wasi_nn-ggml` parameter when [running the installer command](#generic-linux-and-macos).
+
+Then, go to the [Llama2 inference in Rust chapter](../develop/rust/wasinn/llm-inference) to see how to run AI inference with the llama2 series of models.
+
 #### WASI-NN plug-in with PyTorch backend
 
 `WASI-NN` plug-in with `PyTorch` backend allows WasmEdge applications to perform `PyTorch` model inference. To install WasmEdge with `WASI-NN PyTorch backend` plug-in on Linux, please use the `--plugins wasi_nn-pytorch` parameter when [running the installer command](#generic-linux-and-macos).
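
For reference, a quick sanity check of the command the new ggml section points to. This is a sketch assuming the standard WasmEdge install script that the rest of install.md documents under its "Generic Linux and macOS" section; the exact invocation should match what that section shows.

```bash
# Install WasmEdge with the WASI-NN ggml backend plug-in.
# Assumes the standard WasmEdge install script URL used elsewhere in install.md.
curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash -s -- --plugins wasi_nn-ggml

# The WASI-NN backends are exclusive, so pass exactly one plug-in flag here;
# e.g. use --plugins wasi_nn-pytorch instead to get the PyTorch backend.
```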