The NVIDIA device plugin for Kubernetes is a DaemonSet that allows you to automatically:
- Expose the number of GPUs on each node of your cluster
- Keep track of the health of your GPUs
- Run GPU-enabled containers in your Kubernetes cluster.
This repository contains NVIDIA's official implementation of the Kubernetes device plugin.
The list of prerequisites for running the NVIDIA device plugin is described below:
- NVIDIA drivers ~= 361.93
- nvidia-docker version > 2.0 (see how to install and its prerequisites)
- docker configured with nvidia as the default runtime (see the sketch after this list)
- Kubernetes version = 1.9
- The DevicePlugins feature gate enabled
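The snippet below is a minimal sketch of how the nvidia runtime is commonly set as Docker's default via /etc/docker/daemon.json; the runtime path shown is the usual nvidia-docker 2.0 install location and may differ on your nodes, so verify it before applying.

```shell
# Illustrative only: set nvidia as the default Docker runtime.
# Verify the runtime path on your own nodes before applying.
sudo tee /etc/docker/daemon.json <<'EOF'
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
sudo systemctl restart docker
```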
First you will need to enable the DevicePlugins feature gate on the GPU nodes.
If you are using kubeadm and systemd, open /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and add the following environment argument:
```shell
Environment="KUBELET_EXTRA_ARGS=--feature-gates=DevicePlugins=true"
```
Reload and restart the kubelet to pick up the config change:
```shell
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```
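As an optional sanity check (not part of the official steps), you can confirm that the running kubelet actually picked up the feature gate, for example:

```shell
# Illustrative check: the kubelet command line should include the DevicePlugins feature gate
ps aux | grep '[k]ubelet' | grep -o 'feature-gates=[^ ]*'
```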
Once you have enabled this option on all the GPU nodes you wish to use, you can then deploy the DaemonSet:
```shell
kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.9/nvidia-device-plugin.yml
```
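To verify the deployment, a rough check (assuming the manifest creates the DaemonSet in the kube-system namespace; replace <gpu-node-name> with one of your GPU nodes) is:

```shell
# Check that the device plugin pods are running
kubectl get pods -n kube-system | grep nvidia-device-plugin

# Check that the node now advertises the nvidia.com/gpu resource
kubectl describe node <gpu-node-name> | grep 'nvidia.com/gpu'
```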
NVIDIA GPUs can now be consumed via container-level resource requirements using the resource name nvidia.com/gpu:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
    - name: cuda-container
      image: nvidia/cuda:9.0
      resources:
        limits:
          nvidia.com/gpu: 2 # requesting 2 GPUs
    - name: digits-container
      image: nvidia/digits:6.0
      resources:
        limits:
          nvidia.com/gpu: 2 # requesting 2 GPUs
```
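As a usage sketch, assuming the manifest above is saved as gpu-pod.yml (a file name chosen here for illustration), you could create the pod and confirm the GPU limits were applied:

```shell
# Create the pod from the manifest above (file name assumed)
kubectl create -f gpu-pod.yml

# Confirm the pod was scheduled and shows the nvidia.com/gpu limits
kubectl describe pod gpu-pod | grep 'nvidia.com/gpu'
```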
WARNING: if you don't request GPUs when using the device plugin with NVIDIA images, all the GPUs on the machine will be exposed inside your container.
Please note that:
- the device plugin feature is still alpha, which is why it requires the feature gate to be enabled.
- the NVIDIA device plugin is still considered alpha and is missing:
  - Security features
  - More comprehensive GPU health checking features
  - GPU cleanup features
  - ...
- support will only be provided for the official NVIDIA device plugin.
The next sections are focused on building the device plugin and running it.
Option 1, pull the prebuilt image from Docker Hub:
```shell
docker pull nvidia/k8s-device-plugin:1.9
```
Option 2, build without cloning the repository:
```shell
docker build -t nvidia/k8s-device-plugin:1.9 https://github.com/NVIDIA/k8s-device-plugin.git
```
Option 3, if you want to modify the code:
```shell
git clone https://github.com/NVIDIA/k8s-device-plugin.git && cd k8s-device-plugin
docker build -t nvidia/k8s-device-plugin:1.9 .
```
To run the plugin locally with Docker:
```shell
docker run --security-opt=no-new-privileges --cap-drop=ALL --network=none -it -v /var/lib/kubelet/device-plugins:/var/lib/kubelet/device-plugins nvidia/k8s-device-plugin:1.9
```
Or deploy it as a DaemonSet:
```shell
kubectl create -f nvidia-device-plugin.yml
```
To build and run without Docker:
```shell
C_INCLUDE_PATH=/usr/local/cuda/include LIBRARY_PATH=/usr/local/cuda/lib64 go build
./k8s-device-plugin
```
- The device plugin API changed and is no longer compatible with Kubernetes 1.8
- Error messages were added
- You can report a bug by filing a new issue
- You can contribute by opening a pull request