Update install.md
Signed-off-by: alabulei1 <[email protected]>
alabulei1 authored Oct 30, 2023
1 parent 106c750 commit e549761
Showing 1 changed file with 6 additions and 0 deletions.
@@ -145,12 +145,18 @@ Then, go to [HTTPS request in Rust chapter](../develop/rust/http_service/client.

WasmEdge supports various backends for `WASI-NN`.

- [ggml backend](#wasi-nn-plug-in-with-ggml-backend): supported on `Ubuntu 20.04 or later` (x86_64) and macOS (M1 and M2).
- [PyTorch backend](#wasi-nn-plug-in-with-pytorch-backend): supported on `Ubuntu 20.04 or later` and `manylinux2014_x86_64`.
- [OpenVINO™ backend](#wasi-nn-plug-in-with-openvino-backend): supported on `Ubuntu 20.04 or later`.
- [TensorFlow-Lite backend](#wasi-nn-plug-in-with-tensorflow-lite-backend): supported on `Ubuntu 20.04 or later`, `manylinux2014_x86_64`, and `manylinux2014_aarch64`.

Note that the backends are mutually exclusive: developers can choose and install only one backend for the `WASI-NN` plug-in.

#### WASI-NN plug-in with ggml backend

The `WASI-NN` plug-in with the `ggml` backend allows WasmEdge to run Llama2 inference. To install WasmEdge with the `WASI-NN ggml backend` plug-in, please use the `--plugins wasi_nn-ggml` parameter when [running the installer command](#generic-linux-and-macos), as shown below.
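For example, a minimal sketch that assumes the generic WasmEdge install script (see the linked section for the authoritative command):

```bash
# Install WasmEdge together with the WASI-NN plug-in, ggml backend.
curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash -s -- --plugins wasi_nn-ggml
```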

Then, go to the [Llama2 inference in Rust chapter](../develop/rust/wasinn/llm-inference) to see how to run AI inference with the Llama2 series of models.
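As a rough preview of that chapter, an invocation might look like the following sketch; the model file and the `llama-chat.wasm` application are hypothetical placeholders, and `--nn-preload` is assumed to map a named model into the ggml backend:

```bash
# Hypothetical example: run a ggml-quantized Llama2 model with a compiled
# inference application. Both file names are placeholders.
wasmedge --dir .:. \
  --nn-preload default:GGML:AUTO:llama-2-7b-chat.Q5_K_M.gguf \
  llama-chat.wasm
```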

#### WASI-NN plug-in with PyTorch backend

The `WASI-NN` plug-in with the `PyTorch` backend allows WasmEdge applications to perform `PyTorch` model inference. To install WasmEdge with the `WASI-NN PyTorch backend` plug-in on Linux, please use the `--plugins wasi_nn-pytorch` parameter when [running the installer command](#generic-linux-and-macos), as in the sketch below.
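Mirroring the ggml sketch above, under the same assumption about the install script:

```bash
# Install WasmEdge together with the WASI-NN plug-in, PyTorch backend.
curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash -s -- --plugins wasi_nn-pytorch
```

Note that the `PyTorch` backend also depends on the `libtorch` C++ runtime libraries being available on the host at run time.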
