This Helm chart manages the deployment of addons for a Kubernetes cluster deployed using Cluster API. It is a dependency of the cluster management charts from this repository, e.g. openstack-cluster.
Addons are managed using custom resources provided by the Cluster API Addon Provider, which must be installed. Please also read the addon provider's documentation to see how addons are defined. The chart currently manages the following addons:
- Container Network Interface (CNI) plugins
- OpenStack integrations
- Ingress controllers
- Metrics server
- Monitoring and logging
- Custom addons
This chart can install either Calico or Cilium as a CNI plugin to provide the pod networking in a Kubernetes cluster. By default, the Calico CNI will be installed.
To switch the CNI to Cilium, use the following in your Helm values:
```yaml
cni:
  type: cilium
```
> [!NOTE]
> When Cilium is used, the Cilium kube-proxy replacement is configured by default, with no further action required.
To disable the installation of a CNI completely, use:
```yaml
cni:
  enabled: false
```
Additional configuration options are available for each CNI - see `values.yaml`.
Kubernetes allows cloud providers to provide various plugins to integrate with the underlying infrastructure, for example Cloud Controller Managers (CCMs), Container Storage Interface (CSI) implementations and authenticating webhooks.
This chart is able to deploy the CCM, the Cinder and Manila CSI plugins and the Keystone authenticating webhook from the Kubernetes OpenStack cloud provider, which allows your Kubernetes cluster to integrate with the OpenStack cloud on which it is deployed. This enables features like automatic labelling of nodes with OpenStack information (e.g. server ID and flavor), automatic configuration of hostnames and IP addresses, managed load balancers for services and dynamic provisioning of RWO and RWX volumes.
By default, the OpenStack integrations are not enabled. To enable OpenStack integrations on the target cluster, use the following in your Helm values:
```yaml
openstack:
  enabled: true
```
> [!TIP]
> When using the openstack-cluster chart, the OpenStack integrations are enabled by default in the values for the chart.
To configure options for the `[Networking]`, `[LoadBalancer]`, `[BlockStorage]` and `[Metadata]` sections of the cloud-config file, you can use Helm values, e.g.:
```yaml
openstack:
  cloudConfig:
    Networking:
      public-network-name: public-internet
    LoadBalancer:
      lb-method: LEAST_CONNECTIONS
      create-monitor: true
    BlockStorage:
      ignore-volume-az: true
    Metadata:
      search-order: metadataService
```
The `[Globals]` section is populated such that the credential used by the `OpenStackCluster` object is also used by the OpenStack integrations on the cluster.
For the available options, consult the documentation for the CCM and the Cinder CSI plugin.
Additional configuration options are available for the OpenStack integrations - see values.yaml for more details.
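As an illustration of the managed load balancer support, exposing a workload is just a matter of creating a Service of type `LoadBalancer`, which the CCM fulfils using the cloud's load-balancing service (typically Octavia). A minimal sketch, in which the Service name and selector labels are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical name
spec:
  type: LoadBalancer      # the OpenStack CCM provisions a cloud load balancer
  selector:
    app: my-app           # hypothetical pod labels
  ports:
    - port: 80            # port exposed on the load balancer
      targetPort: 8080    # port the pods listen on
```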
The Cinder service in an OpenStack cloud provides block volumes for workloads. These volumes can only be attached to a single node at a time, referred to as ReadWriteOnce (RWO).
Cinder is available on the vast majority of OpenStack clouds, and so the Cinder CSI is installed by default whenever the OpenStack integrations are enabled. As part of this, a default storage class is installed that allows Cinder volumes to be requested and attached to pods using persistent volume claims. This storage class uses the default Cinder volume type and the `nova` availability zone, and is configured as the default storage class for the cluster.
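Because this class is registered as the cluster default, a persistent volume claim does not need to name it explicitly. A minimal sketch (the claim name is hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data              # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # storageClassName is omitted, so the default Cinder storage class is used
```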
To change the Cinder availability zone or volume type for the default storage class, use the following values:
```yaml
openstack:
  csiCinder:
    defaultStorageClass:
      availabilityZone: az1
      volumeType: fast-ssd
```
In contrast to Cinder, the Manila service provides shared filesystems for cloud workloads. These volumes can be attached to multiple pods simultaneously, referred to as ReadWriteMany (RWX).
Because Manila is often not deployed on OpenStack clouds, it is not enabled by default even when the OpenStack integrations are enabled. To enable the Manila CSI, use the following Helm values:
```yaml
openstack:
  csiManila:
    enabled: true
```
Manila supports multiple backends, but currently only the CephFS backend is supported in the CAPI Helm charts. To utilise the CephFS support in the Manila CSI, the CephFS CSI plugin must also be enabled:
```yaml
csi:
  cephfs:
    enabled: true
```
By default, this will result in the Manila CSI creating volumes using the `cephfs` share type. If you need to use a different share type, use the following:
```yaml
openstack:
  csiManila:
    defaultStorageClass:
      parameters:
        type: cephfs_type
```
Any of the storage class parameters specified in the Manila CSI docs can be given under `openstack.csiManila.defaultStorageClass.parameters`. For example, to use the `kernel` mounter rather than the default `fuse` mounter, which can help performance, use the following:
```yaml
openstack:
  csiManila:
    defaultStorageClass:
      parameters:
        cephfs-mounter: kernel
```
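Once the Manila CSI is enabled, an RWX volume is requested with a claim that names the Manila storage class explicitly. A minimal sketch - the claim name is hypothetical and the storage class name is an assumption, so check `kubectl get storageclass` on your cluster for the actual name:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data                      # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: csi-manila-cephfs    # assumed name - verify on your cluster
  resources:
    requests:
      storage: 10Gi
```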
The `k8s-keystone-auth` webhook can be installed by enabling the `k8sKeystoneAuth` subchart. Note that you will need to provide the auth URL and project ID for the OpenStack project into which you are deploying your cluster:
```yaml
openstack:
  k8sKeystoneAuth:
    enabled: true
    values:
      openstackAuthUrl: $OS_AUTH_URL
      projectId: $OS_PROJECT_ID
```
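The webhook takes care of authentication; authorisation for the resulting Keystone users can then be granted with standard Kubernetes RBAC. A minimal sketch, assuming a hypothetical Keystone user named `demo-user`, that grants read-only access:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: keystone-demo-user-view
subjects:
  - kind: User
    name: demo-user             # hypothetical Keystone username
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                    # built-in read-only role
  apiGroup: rbac.authorization.k8s.io
```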
Running an Ingress Controller on your Kubernetes cluster enables the use of Ingress resources to manage HTTP(S) traffic flowing in and out of the cluster. This allows your web applications to take advantage of load-balancing, name-based virtual hosting, path-based routing and TLS termination using the same declarative approach as other Kubernetes resources. When combined with a cert-manager issuer (see above), this provides an almost frictionless way to secure your web services.
It is possible to install multiple Ingress Controllers and select the preferred one for a particular Ingress resource using Ingress Classes.
This chart can install the Nginx Ingress Controller onto the target cluster.
The Nginx Ingress Controller is disabled by default. To enable it, use the following Helm values:
```yaml
ingress:
  enabled: true
```
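As a sketch of what this enables, the following Ingress routes HTTP traffic for a host to a backend Service. The hostname and Service name are hypothetical, and `nginx` is the class name conventionally registered by the Nginx Ingress Controller - check `kubectl get ingressclass` for the actual name:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx          # assumed class name - verify on your cluster
  rules:
    - host: my-app.example.org     # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app       # hypothetical Service
                port:
                  number: 80
```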
In order to use features like `kubectl top` to observe resource usage, and also to use Horizontal Pod Autoscalers, the metrics server must be installed.
This chart is able to install the metrics server, and it is enabled by default. To disable it, use the following Helm values:
```yaml
metricsServer:
  enabled: false
```
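As an illustration, with the metrics server running a Horizontal Pod Autoscaler can scale a workload based on observed CPU usage. A minimal sketch, targeting a hypothetical Deployment named `my-app`:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                     # hypothetical Deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80     # scale up above 80% average CPU
```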
This chart is able to deploy a monitoring and logging stack using Prometheus, Grafana and Loki.
The monitoring stack is installed using the kube-prometheus-stack chart, which ensures that many useful exporters are installed and that dashboards are available for them. It also configures alerts for the cluster, but does not configure any alert sinks by default.
Loki is installed using the loki-stack chart, which also installs and configures promtail to ship logs to Loki. A simple dashboard is installed into the Grafana provided by kube-prometheus-stack to make the logs available for browsing.
The monitoring stack is not enabled by default. To enable it, use the following Helm values:
```yaml
monitoring:
  enabled: true
```
By default, Grafana is only available from within the cluster and must be accessed using port forwarding:
```sh
kubectl -n monitoring-system port-forward svc/kube-prometheus-stack-grafana 3000:80
```
This chart is able to manage the deployment of custom addons.
For example, to manage the deployment of a custom Helm chart:
```yaml
custom:
  # This is the name of the Helm release
  my-custom-helm-release:
    kind: HelmRelease
    spec:
      # The namespace for the release
      namespace: my-namespace
      # The chart to use
      chart:
        repo: https://my-project/charts
        name: my-chart
        version: 1.5.0
      # The values to use for the release
      values:
        name1: value1
        name2:
          complex:
            nested:
              - value
```
It is also possible to manage the deployment of arbitrary manifests to the cluster. Under the hood, the manifests are wrapped in a Helm chart and deployed as a Helm release, which then manages the lifecycle of the resulting resources. To specify custom manifests to install:
```yaml
custom:
  # The name of the Helm release that will contain the resources
  my-custom-manifests:
    kind: Manifests
    spec:
      # The namespace for the Helm release that will contain the resources
      # For namespace-scoped resources, this is the namespace that the resources will be created
      # in (unless overridden in the manifest itself)
      namespace: my-namespace
      manifests:
        secret.yaml: |-
          apiVersion: v1
          kind: Secret
          metadata:
            name: my-secret
          stringData:
            secret-file: "secret-data"
```