# Running Ollama in Docker

## My environment

- Ubuntu 24.04
- RTX 4060 Ti, 16 GB VRAM
## 1. Environment setup

1. Install docker-ce: [install guide](https://docs.docker.com/engine/install/)
2. Install docker-compose: https://docs.docker.com/compose/install/
3. Install and configure the Docker GPU runtime:
   https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
   https://docs.docker.com/engine/containers/resource_constraints/#gpu
4. Install VS Code
5. Install the VS Code Docker extension
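Before moving on, you can sanity-check these installs. A minimal sketch (output varies by version):

```shell
docker --version            # Docker Engine is installed
docker-compose --version    # or: docker compose version, for the plugin form
nvidia-smi                  # host driver sees the GPU
```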
## 2. Prepare the docker-compose file

Create a directory named `ollama-openwebui-docker-compose`.

In that directory, create `docker-compose.yaml` with the following content:
```yaml
services:
  ollama:
    volumes:
      - ./ollama_data:/root/.ollama
      # Change this path to your local model directory; it can live on a larger disk
      - ./本地模型位置:/ollama_models
    container_name: ollama
    pull_policy: always
    ports:
      - "11434:11434"
    tty: true
    restart: unless-stopped
    image: ollama/ollama
    extra_hosts:
      - "host.docker.internal:host-gateway"
    environment:
      # - https_proxy=http://host.docker.internal:20003
      # - http_proxy=http://host.docker.internal:20003
      # - no_proxy=localhost,127.0.0.1,host.docker.internal,open-webui
      - OLLAMA_HOST=0.0.0.0
      - OLLAMA_MODELS=/ollama_models
      - OLLAMA_ORIGINS=*
    runtime: nvidia
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    pull_policy: always
    ports:
      - "28899:8080"
    volumes:
      - ./data:/app/backend/data
      - /home/w/MY_CODE/tts-test/pdf-to-markdown/output4:/data/docs
      - ./nltk_data:/root/nltk_data
    extra_hosts:
      - "host.docker.internal:host-gateway"
    restart: always
    environment:
      - https_proxy=http://host.docker.internal:7890
      - http_proxy=http://host.docker.internal:7890
      - no_proxy=localhost,127.0.0.1,host.docker.internal,open-webui
      - 'OLLAMA_BASE_URL=http://ollama:11434'
```
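A note on the wiring: inside the compose network, Open WebUI reaches Ollama by service name (`OLLAMA_BASE_URL=http://ollama:11434`), while the host uses the published ports (11434 and 28899). The commented-out proxy variables on the ollama service are only needed behind a proxy; if you enable them, keep `open-webui` in `no_proxy` so inter-container traffic bypasses the proxy.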
## 3. Start the services

```shell
docker-compose up -d
```
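To confirm both containers are up and load a first model, something like the following should work (the model name is only an example):

```shell
# Both services should show as running
docker-compose ps

# Pull a model inside the ollama container
docker exec -it ollama ollama pull qwen2.5:7b
```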
## 4. Access the services

Open http://localhost:28899 in a browser to reach Open WebUI.

The Ollama API is served at http://localhost:11434.
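A quick smoke test of the API (assuming the example model above has been pulled):

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "qwen2.5:7b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```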
# Using an NVIDIA GPU with Docker on Ubuntu

## Reference documentation

https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html

## Installation

### Install nvidia-container-toolkit via apt
```shell
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
```
### Configure the container runtime

```shell
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```
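For reference, `nvidia-ctk runtime configure` registers an `nvidia` runtime in `/etc/docker/daemon.json`; the resulting entry looks roughly like this (your file may carry other settings as well):

```json
{
  "runtimes": {
    "nvidia": {
      "args": [],
      "path": "nvidia-container-runtime"
    }
  }
}
```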
## Verification

```shell
sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
```
Output:

```text
Sat Sep 14 05:17:16 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.42.06              Driver Version: 555.42.06      CUDA Version: 12.5     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4060 Ti     Off |   00000000:10:00.0  On |                  N/A |
|  0%   43C    P8             14W / 165W  |    1279MiB /  16380MiB |     11%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
+-----------------------------------------------------------------------------------------+
```
## Using nvidia-docker

### Run with docker run
```shell
docker run -it --rm \
  --name comfyui \
  --runtime=nvidia \
  --gpus all \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/home/runner \
  -e CLI_ARGS="" \
  yanwk/comfyui-boot:cu121
```
### Run ComfyUI with docker-compose
```yaml
services:
  comfyui:
    image: yanwk/comfyui-boot:cu121
    container_name: comfyui
    restart: always
    ports:
      - "8188:8188"
    volumes:
      - ./storage:/home/runner
    extra_hosts:
      - "host.docker.internal:host-gateway"
    environment:
      - no_proxy=localhost,127.0.0.1,host.docker.internal
      - https_proxy=http://host.docker.internal:20003
      - http_proxy=http://host.docker.internal:20003
    runtime: nvidia
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```
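Usage mirrors the Ollama setup; note the proxy variables assume a local proxy listening on host port 20003 and can be removed if you have direct network access:

```shell
docker-compose up -d
# ComfyUI is then available at http://localhost:8188
```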
# AI topics

- [Running Ollama in Docker](./Docker中跑Ollama.md)
- [Using an NVIDIA GPU with Docker on Ubuntu](./docker使用nvidia显卡跑ai.md)
- [LLM API parameters explained](./LLM-API参数解读.md)
- [Installing and using Ollama](./ollama安装和使用.md)