diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/develop/rust/wasinn/llm-inference.md b/i18n/zh/docusaurus-plugin-content-docs/current/develop/rust/wasinn/llm-inference.md
index b7cf3739..fbca319f 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/develop/rust/wasinn/llm-inference.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/develop/rust/wasinn/llm-inference.md
@@ -96,8 +96,7 @@
 You can use environment variables to configure the model execution.
 
 | Option |Default |Function |
 | -------|-----------|----- |
-| |
-LLAMA_LOG| 0 |The backend will print diagnostic information when this value is set to 1|
+| LLAMA_LOG| 0 |The backend will print diagnostic information when this value is set to 1|
 |LLAMA_N_CTX |512| The context length is the max number of tokens in the entire conversation|
 |LLAMA_N_PREDICT |512|The number of tokens to generate in each response from the model|
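
For context, the table this patch repairs documents options that are passed as environment variables when the app runs. A minimal sketch of how they might be supplied on the command line, assuming the WasmEdge CLI's `--env` and `--nn-preload` flags and using placeholder model and wasm file names:

```bash
# Illustrative only: file names are placeholders, not values from this patch.
wasmedge --dir .:. \
  --env LLAMA_LOG=1 \
  --env LLAMA_N_CTX=1024 \
  --env LLAMA_N_PREDICT=512 \
  --nn-preload default:GGML:AUTO:llama-2-7b-chat.Q5_K_M.gguf \
  llama-chat.wasm
```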