Commit: upload

mib1185 committed Dec 1, 2024
0 parents commit a6ce641
Showing 4 changed files with 105 additions and 0 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -0,0 +1,2 @@
data/
compose.yaml
28 changes: 28 additions & 0 deletions Dockerfile
@@ -0,0 +1,28 @@
FROM debian:12

ARG WYOMING_FASTER_WHISPER_VERSION=2.2.0
ARG NVIDIA_DRIVER_VERSION=550.127.05

# install requirements
RUN apt-get update \
&& apt-get install -y --no-install-recommends python3-pip wget kmod \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*

# install nvidia linux drivers (user-space libraries only, no kernel module)
RUN cd /tmp \
&& wget "https://de.download.nvidia.com/XFree86/Linux-x86_64/${NVIDIA_DRIVER_VERSION}/NVIDIA-Linux-x86_64-${NVIDIA_DRIVER_VERSION}.run" \
&& chmod +x NVIDIA-Linux-x86_64-${NVIDIA_DRIVER_VERSION}.run \
&& bash -c "./NVIDIA-Linux-x86_64-${NVIDIA_DRIVER_VERSION}.run -s --no-kernel-module" \
&& rm NVIDIA-Linux-x86_64-${NVIDIA_DRIVER_VERSION}.run

# install wyoming-faster-whisper and nvidia cuda libs
RUN python3 -m pip install --no-cache-dir --break-system-packages "wyoming-faster-whisper @ https://github.com/rhasspy/wyoming-faster-whisper/archive/refs/tags/v${WYOMING_FASTER_WHISPER_VERSION}.tar.gz" \
&& python3 -m pip install --no-cache-dir --break-system-packages nvidia-cublas-cu12 nvidia-cudnn-cu12==9.*

EXPOSE 10300/tcp
VOLUME [ "/data" ]

COPY run.sh /

ENTRYPOINT [ "/run.sh" ]
70 changes: 70 additions & 0 deletions README.md
@@ -0,0 +1,70 @@
# wyoming-faster-whisper-cuda

This wraps [wyoming-faster-whisper](https://github.com/rhasspy/wyoming-faster-whisper) into an NVIDIA CUDA-enabled container.

**Note** Currently, this is only supported on x86_64 systems.

## Usage

### Prerequisites

1. an NVIDIA CUDA-compatible GPU
2. [NVIDIA Linux drivers](https://www.nvidia.com/en-us/drivers/unix) installed on the host
3. an up and running [Docker](https://docs.docker.com/engine/install) installation on the host
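
The prerequisites above can be sanity-checked with a small hypothetical helper (not part of this repo). It only assumes the NVIDIA driver creates `/dev/nvidia*` device files and that the Docker CLI is on `PATH`:

```python
# Hypothetical helper (not part of this repo): report which prerequisites
# are missing before trying to start the container.
import glob
import shutil


def missing_prerequisites(device_paths=None, docker=None):
    """Return human-readable names of missing prerequisites."""
    if device_paths is None:
        # device files created by the NVIDIA kernel driver
        device_paths = glob.glob("/dev/nvidia*")
    if docker is None:
        # path to the docker CLI, or None if not installed
        docker = shutil.which("docker")
    missing = []
    if not device_paths:
        missing.append("nvidia driver devices (/dev/nvidia*)")
    if not docker:
        missing.append("docker")
    return missing


if __name__ == "__main__":
    for item in missing_prerequisites():
        print(f"missing: {item}")
```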

### Installation

1. clone this repository

```shell
$ git clone https://github.com/mib1185/wyoming-faster-whisper-cuda.git
```

2. create a `compose.yaml` file

The `compose.yaml` file should:

- build from the local `Dockerfile`
- pass the needed `model` and `language` options as command-line parameters
- (_optional_) enable `debug` logging via command-line parameter
- provide a `data` volume or directory
- expose port `10300/tcp`
- map your NVIDIA GPU related devices into the container (_obtain them with `ls -la /dev/nvidia*`_)
- (_optional_) set `restart: always`

**example `compose.yaml` file**

```yaml
name: wyoming
services:
faster-whisper-cuda:
container_name: faster-whisper-cuda
build: .
command: "--model large --language de --debug"
volumes:
- ./data:/data
ports:
- 10300:10300/tcp
devices:
- /dev/nvidia-uvm:/dev/nvidia-uvm
- /dev/nvidia-uvm-tools:/dev/nvidia-uvm-tools
- /dev/nvidia0:/dev/nvidia0
- /dev/nvidiactl:/dev/nvidiactl
restart: always
```
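
The `devices:` entries above simply map each host device file to the same path inside the container. A hypothetical helper (not part of this repo) that turns the host's `/dev/nvidia*` entries into those mappings:

```python
# Hypothetical helper (not part of this repo): turn the host's /dev/nvidia*
# device files into compose `devices:` mappings (host path == container path).
import glob


def device_mappings(paths=None):
    """Build 'host:container' device mapping strings."""
    if paths is None:
        paths = sorted(glob.glob("/dev/nvidia*"))
    return [f"{p}:{p}" for p in paths]


if __name__ == "__main__":
    print("    devices:")
    for mapping in device_mappings():
        print(f"      - {mapping}")
```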

3. start the service

on first start, the docker image is built, which takes some time

```shell
$ docker compose up -d
```

4. check that the service is running

```shell
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
474e37a84326 wyoming-faster-whisper-cuda "/run.sh --model lar…" 3 minutes ago Up 3 minutes 0.0.0.0:10300->10300/tcp, :::10300->10300/tcp faster-whisper-cuda
```
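
Beyond `docker ps`, the service can be probed over its Wyoming port. This is a sketch under the assumption that the Wyoming protocol exchanges newline-delimited JSON event headers over TCP and answers a `describe` event with an `info` event; the helpers below only build and parse those JSON lines:

```python
# Sketch (assumption: Wyoming exchanges newline-delimited JSON event headers
# over TCP, and the server answers "describe" with an "info" event).
import json
import socket


def describe_event():
    """Serialize a 'describe' event header as one JSON line."""
    return (json.dumps({"type": "describe"}) + "\n").encode("utf-8")


def event_type(line):
    """Extract the event type from a received JSON event header line."""
    return json.loads(line).get("type")


def query(host="127.0.0.1", port=10300):
    """Send a describe event and return the type of the first reply."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(describe_event())
        with sock.makefile("rb") as reader:
            return event_type(reader.readline())
```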
5 changes: 5 additions & 0 deletions run.sh
@@ -0,0 +1,5 @@
#!/bin/bash

# add the pip-installed cuBLAS and cuDNN libraries to the loader search path
export LD_LIBRARY_PATH=$(python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))')

python3 -m wyoming_faster_whisper --uri tcp://0.0.0.0:10300 --data-dir /data --download-dir /data --device cuda "$@"
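
The `python3 -c` one-liner in `run.sh` joins the directories containing the pip-installed cuBLAS and cuDNN shared libraries into an `LD_LIBRARY_PATH` value. The same pattern, demonstrated with stdlib modules as stand-ins since the `nvidia.*` packages are only present after the image's pip install:

```python
# Same pattern as the one-liner in run.sh, demonstrated with stdlib modules
# standing in for nvidia.cublas.lib / nvidia.cudnn.lib.
import os


def library_dirs(*modules):
    """Join the directories containing the given modules, ':'-separated."""
    return ":".join(os.path.dirname(m.__file__) for m in modules)


if __name__ == "__main__":
    import json
    import xml  # stand-ins for the nvidia packages
    print(library_dirs(json, xml))
```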
