From 85a6c5a5ccd6e48941a42148bdfed8689dbc5f90 Mon Sep 17 00:00:00 2001
From: "mergify[bot]" <37929162+mergify[bot]@users.noreply.github.com>
Date: Fri, 26 Jan 2024 10:37:09 +0100
Subject: [PATCH] [DOCS] Completes air-gapped ELSER install procedure. (#2655)
 (#2657)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

(cherry picked from commit 0d82ed4b2883e3209d20d96d097b567f03a68402)

Co-authored-by: István Zoltán Szabó
---
 docs/en/stack/ml/nlp/ml-nlp-elser.asciidoc | 41 +++++++++++++++++++---
 1 file changed, 36 insertions(+), 5 deletions(-)

diff --git a/docs/en/stack/ml/nlp/ml-nlp-elser.asciidoc b/docs/en/stack/ml/nlp/ml-nlp-elser.asciidoc
index 2cc5cbb03..49973223b 100644
--- a/docs/en/stack/ml/nlp/ml-nlp-elser.asciidoc
+++ b/docs/en/stack/ml/nlp/ml-nlp-elser.asciidoc
@@ -130,7 +130,7 @@ allocations and threads per allocation values.
 [role="screenshot"]
 image::images/ml-nlp-deployment-id-elser-v2.png[alt="Deploying ELSER",align="center"]
 --
-5. Click Start.
+5. Click **Start**.


 [discrete]
@@ -244,14 +244,25 @@ options:
 * put the model artifacts into a directory inside the config directory on all
 {ref}/modules-node.html#master-node[master-eligible nodes].

-You need the following files in your system:
+[discrete]
+[[elser-model-artifacts]]
+=== Model artifact files
+
+For the cross-platform version, you need the following files in your system:
 ```
 https://ml-models.elastic.co/elser_model_2.metadata.json
 https://ml-models.elastic.co/elser_model_2.pt
 https://ml-models.elastic.co/elser_model_2.vocab.json
 ```

+For the optimized version, you need the following files in your system:
+```
+https://ml-models.elastic.co/elser_model_2_linux-x86_64.metadata.json
+https://ml-models.elastic.co/elser_model_2_linux-x86_64.pt
+https://ml-models.elastic.co/elser_model_2_linux-x86_64.vocab.json
+```
+

 [discrete]
 === Using an HTTP server
@@ -262,7 +273,7 @@ supports passwordless HTTP servers.

 You can use any HTTP service to deploy ELSER.
 This example uses the official
 Nginx Docker image to set a new HTTP download service up.

-. Download the model artifact files from https://ml-models.elastic.co/.
+. Download the <<elser-model-artifacts,model artifact files>>.
 . Put the files into a subdirectory of your choice.
 . Run the following commands:
 +
@@ -305,7 +316,17 @@ master-eligible nodes can reach the server you specify.
 --
 . Repeat step 5 on all master-eligible nodes.
 . {ref}/restart-cluster.html#restart-cluster-rolling[Restart] the
-master-eligible nodes one by one.
+master-eligible nodes one by one.
+. Navigate to the **Trained Models** page in {kib}. ELSER can be found in the
+list of trained models.
+. Click the **Add trained model** button, select the ELSER model version you
+downloaded in step 1 and want to deploy, and click **Download**. The selected
+model will be downloaded from the HTTP/HTTPS server you configured.
+. After the download is finished, start the deployment by clicking the
+**Start deployment** button.
+. Provide a deployment ID, select the priority, and set the number of
+allocations and threads per allocation values.
+. Click **Start**.

 The HTTP server is only required for downloading the model. After the download
 has finished, you can stop and delete the service. You can stop the Docker image
@@ -322,7 +343,7 @@ docker stop ml-models

 For a file-based access, follow these steps:

-. Download the model artifact files from https://ml-models.elastic.co/.
+. Download the <<elser-model-artifacts,model artifact files>>.
 . Put the files into a `models` subdirectory inside the `config` directory of
 your Elasticsearch deployment.
 . Point your Elasticsearch deployment to the model directory by adding the
@@ -336,6 +357,16 @@ xpack.ml.model_repository: file://${path.home}/config/models/`
 . Repeat step 2 and step 3 on all master-eligible nodes.
 . {ref}/restart-cluster.html#restart-cluster-rolling[Restart] the
 master-eligible nodes one by one.
+. Navigate to the **Trained Models** page in {kib}. ELSER can be found in the
+list of trained models.
+. Click the **Add trained model** button, select the ELSER model version you
+downloaded in step 1 and want to deploy, and click **Download**. The selected
+model will be downloaded from the model directory where you put them in step 2.
+. After the download is finished, start the deployment by clicking the
+**Start deployment** button.
+. Provide a deployment ID, select the priority, and set the number of
+allocations and threads per allocation values.
+. Click **Start**.

 [discrete]