The Kubernetes vSphere driver contains bugs related to detaching volumes from offline nodes. See the Volume detach bug section for more details.
When creating worker nodes for a user cluster, the user can specify an existing image. Defaults may be set in the datacenters.yaml.
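For illustration, a vSphere datacenter entry in the datacenters.yaml might look roughly like the sketch below. Field names and layout can differ between Kubermatic versions, and every value shown (endpoint, datacenter, datastore, cluster, paths, template names) is a placeholder:

```yaml
# Hypothetical sketch of a vSphere entry in datacenters.yaml; verify the field
# names against your Kubermatic version. All values are placeholders.
datacenters:
  vsphere-example:
    location: Example
    country: DE
    spec:
      vsphere:
        endpoint: "https://vcenter.example.com"
        datacenter: "dc-1"
        datastore: "datastore1"
        cluster: "cl-1"
        root_path: "/dc-1/vm/kubermatic"
        templates:
          # Default images used when the user does not specify one explicitly.
          ubuntu: "ubuntu-template"
          centos: "centos-template"
          coreos: "coreos-template"
```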
Supported operating systems:
- Ubuntu
- CentOS
- CoreOS
To create a template VM, you can either deploy an OVA image or convert and upload a qcow2 cloud image (shown below for CentOS):
- Go into the vSphere WebUI, select your datacenter, right-click it and choose “Deploy OVF Template”
- Fill in the “URL” field with the URL of the OVA image
- Click through the dialog until “Select storage”
- Select the same storage you want to use for your machines
- Select the same network you want to use for your machines
- Leave everything in the “Customize Template” and “Ready to complete” dialogs as it is
- Wait until the VM has been fully imported and the “Snapshots” => “Create Snapshot” button is no longer greyed out.
- The template VM must have the disk.enableUUID flag set to 1; this can be done using the govc tool with the following command:
govc vm.change -e="disk.enableUUID=1" -vm='/PATH/TO/VM'
- Convert the CentOS qcow2 cloud image to vmdk:
qemu-img convert -f qcow2 -O vmdk CentOS-7-x86_64-GenericCloud.qcow2 CentOS-7-x86_64-GenericCloud.vmdk
- Upload it to a Datastore of your vSphere installation
- Create a new virtual machine that uses the uploaded vmdk as its root disk (a govc sketch for these last steps follows this list).
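A minimal sketch of the upload and VM creation steps using govc, assuming govc is already configured against your vCenter; datastore, network, paths and names are placeholders:

```sh
# Upload and convert the vmdk so it can be used as a VM disk
# (datastore and target folder are placeholders).
govc import.vmdk -ds=datastore1 CentOS-7-x86_64-GenericCloud.vmdk centos-template

# Create a VM that uses the uploaded vmdk as its root disk; keep it powered off
# so it can serve as a template.
govc vm.create -ds=datastore1 -net="VM Network" -c=2 -m=2048 -on=false \
  -disk="centos-template/CentOS-7-x86_64-GenericCloud.vmdk" centos-template

# As with the OVA-based template, expose the disk UUID to the guest.
govc vm.change -e="disk.enableUUID=1" -vm='/dc-1/vm/centos-template'
```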
Modifications like network, disk size, etc. must be made to the OVA template before creating a worker node from it. If user clusters have dedicated networks, each user cluster therefore needs its own template.
During the creation of a user cluster, Kubermatic creates a dedicated VM folder in the root path on the Datastore (defined in the datacenters.yaml). That folder will contain all worker nodes of the user cluster.
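For example, assuming a root path of /dc-1/vm/kubermatic (placeholder), the per-cluster folders and their worker VMs can be inspected with govc:

```sh
# List the per-cluster VM folders Kubermatic created below the configured root path.
govc ls /dc-1/vm/kubermatic

# List the worker node VMs of one user cluster (cluster folder name is a placeholder).
govc ls /dc-1/vm/kubermatic/cluster-abc123
```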
Kubernetes needs to talk to vSphere to enable storage inside the cluster. For this, Kubernetes needs a config called cloud-config. This config contains all details needed to connect to a vCenter installation, including credentials. As this config must also be deployed onto each worker node of a user cluster, it is recommended to use individual credentials for each user cluster.
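For illustration, the cloud-config for the in-tree vSphere cloud provider is an INI-style file roughly along these lines; all values are placeholders and the exact set of fields depends on the Kubernetes version in use:

```ini
# Sketch only - all values are placeholders.
[Global]
user          = "kubermatic-cluster-xyz"
password      = "changeme"
port          = "443"
insecure-flag = "0"

[VirtualCenter "vcenter.example.com"]
datacenters = "dc-1"

[Workspace]
server            = "vcenter.example.com"
datacenter        = "dc-1"
default-datastore = "datastore1"
folder            = "kubermatic-cluster-xyz"

[Disk]
scsicontrollertype = pvscsi
```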
The vSphere user must have the following permissions on the correct resources:
- Role k8c-storage-vmfolder-propagate
  - Granted at VM Folder and Template Folder, propagated
  - Permissions
    - Virtual machine
      - Change Configuration
        - Add existing disk
        - Add new disk
        - Add or remove device
        - Remove disk
    - Folder
      - Create folder
      - Delete folder
- Role k8c-storage-datastore-propagate
  - Granted at Datastore, propagated
  - Permissions
    - Datastore
      - Allocate space
      - Low-level file operations
- Role Read-only (predefined)
  - Granted at …, not propagated
    - Datacenter
- Role k8c-user-vcenter
  - Granted at vCenter level, not propagated
  - Needed to customize the VM during provisioning
  - Permissions
    - Virtual machine
      - Provisioning
        - Modify customization specification
        - Read customization specifications
- Role k8c-user-datacenter
  - Granted at datacenter level, not propagated
  - Needed for cloning the template VM (this is not done in a folder at this time)
  - Permissions
    - Datastore
      - Allocate space
      - Browse datastore
      - Low-level file operations
      - Remove file
    - vApp
      - vApp application configuration
      - vApp instance configuration
    - Virtual machine
      - Change CPU count
      - Memory
      - Settings
    - Inventory
      - Create from existing
- Role k8c-user-cluster-propagate
  - Granted at the cluster level, propagated
  - Needed for the upload of cloud-init.iso (Ubuntu and CentOS) or defining the Ignition config into Guestinfo (CoreOS)
  - Permissions
    - Host
      - Configuration
        - System Management
      - Local operations
        - Reconfigure virtual machine
    - Resource
      - Assign virtual machine to resource pool
      - Migrate powered off virtual machine
      - Migrate powered on virtual machine
    - vApp
      - vApp application configuration
      - vApp instance configuration
- Role k8s-network-attach
  - Granted for each network that should be used
  - Permissions
    - Network
      - Assign network
- Role k8c-user-datastore-propagate
  - Granted at datastore/datastore cluster level, propagated
  - Permissions
    - Datastore
      - Allocate space
      - Browse datastore
      - Low-level file operations
- Role k8c-user-folder-propagate
  - Granted at VM Folder and Template Folder level, propagated
  - Needed for managing the node VMs
  - Permissions
    - Folder
      - Create folder
      - Delete folder
    - Global
      - Set custom attribute
    - Virtual machine
      - Change Configuration
      - Edit Inventory
      - Guest operations
      - Interaction
      - Provisioning
      - Snapshot management
The described permissions have been tested with vSphere 6.7 and might be different for other vSphere versions.
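The roles can be created and assigned in the vSphere UI or scripted. The following is a rough govc sketch for the first role only, assuming an administrator session; the privilege IDs are vSphere's internal names for the permissions listed above, and the principal and inventory path are placeholders:

```sh
# Create the role with the privileges required on the VM and template folders.
govc role.create k8c-storage-vmfolder-propagate \
  VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk \
  VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.RemoveDisk \
  Folder.Create Folder.Delete

# Grant the role to the cluster user on the VM folder, propagated to children.
govc permissions.set -principal kubermatic@vsphere.local \
  -role k8c-storage-vmfolder-propagate -propagate=true /dc-1/vm/kubermatic
```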
After a node is powered off, the Kubernetes vSphere driver doesn't detach disks associated with PVCs mounted on that node. This makes it impossible to reschedule pods using these PVCs until the disks are manually detached in vCenter.
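Until this is fixed upstream, the detach has to be done by hand. Besides the vCenter UI, a hedged govc sketch looks like this; the VM path and device name are placeholders from your environment, and -keep matters so the backing vmdk (the PV data) is not deleted:

```sh
# Find the stuck PV disk on the powered-off node.
govc device.ls -vm /dc-1/vm/cluster-abc123/worker-node-1 | grep disk

# Detach the disk but keep the backing file on the datastore.
govc device.remove -keep -vm /dc-1/vm/cluster-abc123/worker-node-1 disk-1000-1
```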
Upstream Kubernetes has been working on the issue for a long time and is tracking it under the following tickets: