From c03bda27217059c7eafd752a2f895b2a88ef07f2 Mon Sep 17 00:00:00 2001
From: ThiloteE <73715071+ThiloteE@users.noreply.github.com>
Date: Sun, 6 Oct 2024 14:42:48 +0200
Subject: [PATCH] Update local-llm.md (#520)

Add how to run ollama in local LLM docs
---
 en/ai/local-llm.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/en/ai/local-llm.md b/en/ai/local-llm.md
index 8f71cc23d..5a0a9615a 100644
--- a/en/ai/local-llm.md
+++ b/en/ai/local-llm.md
@@ -23,9 +23,9 @@ The following steps guide you on how to use `ollama` to download and runn local
 1. Install `ollama` from [their website](https://ollama.com/download)
 2. Select a model that you want to run. The `ollama` provides [a large list of models](https://ollama.com/library) to choose from (we recommend trying [`gemma2:2b`](https://ollama.com/library/gemma2:2b), or [`mistral:7b`](https://ollama.com/library/mistral), or [`tinyllama`](https://ollama.com/library/tinyllama))
-3. When you have selected your model, type `ollama pull <model name>:<parameter count>` in your terminal. `<model name>` refers to the model name like `gemma2` or `mistral`, and `<parameter count>` refers to parameters count like `2b` or `9b`
+3. When you have selected your model, type `ollama pull <model name>:<parameter count>` in your terminal. `<model name>` refers to the model name like `gemma2` or `mistral`, and `<parameter count>` refers to parameters count like `2b` or `9b`.
 4. `ollama` will download the model for you
-5. After that, you can run ollama serve to start a local web server. This server will accept requests and respond with LLM output. Note: The ollama server may already be running, so do not be alarmed by a cannot bind error.
+5. After that, you can run ollama serve to start a local web server. This server will accept requests and respond with LLM output. Note: The ollama server may already be running, so do not be alarmed by a cannot bind error. If it is not yet running, use the following command: `ollama run <model name>:<parameter count>`
 6. Go to JabRef Preferences -> AI
 7. Set the "AI provider" to "OpenAI"
 8. Set the "Chat Model" to the model you have downloaded in the format `<model name>:<parameter count>`
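
For illustration, a minimal terminal session following the patched steps could look like this; it assumes the recommended `gemma2:2b` model, but any other `<model name>:<parameter count>` from the ollama library can be substituted:

```bash
# Download the recommended small model from the ollama library
ollama pull gemma2:2b

# Start the local web server that JabRef will connect to.
# If the ollama server is already running, this reports a "cannot bind" error, which can be ignored.
ollama serve

# If the server is not yet running, the model can also be started directly:
ollama run gemma2:2b
```

In JabRef, "AI provider" would then be set to "OpenAI" and "Chat Model" to `gemma2:2b`.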