A trait represents a piece of add-on functionality that attaches to a component instance. Traits augment components with operational features such as traffic routing rules (including load balancing policy, network ingress routing, circuit breaking, and rate limiting), auto-scaling policies, upgrade strategies, and more. As such, traits represent operational concerns of the system, as opposed to developer concerns. In terms of implementation, traits are Rudr-defined Kubernetes CRDs.
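For example, you can verify that the trait definition is registered on your cluster. This assumes a standard Rudr installation where the CRD is named traits.core.oam.dev:
$ kubectl get crd traits.core.oam.dev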
Traits are assigned to component workloads by an application operator.
Currently, Rudr supports the following traits, each described in detail below:
- Manual Scaler
- Autoscaler
- Ingress
- Volume Mounter
Specific traits are assigned to component workloads of an application via the ApplicationConfiguration file. For example:
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: first-app
spec:
  components:
    - componentName: nginx-component
      instanceName: first-app-nginx
      parameterValues:
        - name: poet
          value: Eliot
        - name: poem
          value: The Wasteland
      traits:
        - name: ingress.core.oam.dev/v1alpha1
          properties:
            hostname: example.com
            path: /
            servicePort: 80
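Once you've saved a configuration like this, install it as you would any Kubernetes manifest (the file name here is just an illustration):
$ kubectl apply -f first-app-config.yaml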
You can assign a trait to a component by specifying its name (as listed by kubectl get traits) and its specific properties (as described by kubectl get trait <trait-name> -o yaml). For more on using specific traits, refer to the sections below.
Rudr supports several traits, with more rolling out in the future, including support for defining custom traits. To provide maximum flexibility to infrastructure operators, however, Rudr does not install default implementations for some of these traits. Specifically, the Autoscaler and Ingress traits require you to select and install a Kubernetes controller before you can use them in your Rudr application, since they map to primitive Kubernetes features that can be fulfilled by different controllers. You can search for implementations of these traits at Helm Hub.
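For example, with Helm 3 you can query Helm Hub from the command line (the search term is just an illustration):
$ helm search hub ingress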
Here's how to get information on the traits supported by your Rudr installation.
List supported traits:
$ kubectl get traits
Show the schema details of a trait:
$ kubectl get trait <trait-name> -o yaml
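For example, to inspect the schema of the ingress trait (use whatever trait names kubectl get traits reports on your cluster):
$ kubectl get trait ingress -o yaml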
The manual scaler trait is used to manually scale components with replicable workload types.
The manual scaler trait has no external dependencies; nothing needs to be installed before using it. It applies to the following workload types:
- Server
- Task
Name | Description | Allowable values | Required | Default |
---|---|---|---|---|
replicaCount | Number of replicas to run. | int | ☑ | |
Here's an example of a manual scaler trait. You would attach this to a component within the application configuration:
# Example manual scaler trait entry
traits:
- name: manual-scaler.core.oam.dev/v1alpha1
properties:
replicaCount: 3
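For context, here is a minimal sketch of a complete ApplicationConfiguration carrying this trait; the component and instance names are hypothetical:
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: scaled-app
spec:
  components:
    - componentName: nginx-component
      instanceName: scaled-app-nginx
      traits:
        - name: manual-scaler.core.oam.dev/v1alpha1
          properties:
            replicaCount: 3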
The autoscaler trait is used to automatically scale components with replicable workload types. This is implemented by the Kubernetes Horizontal Pod Autoscaler.
To use the autoscaler trait, you must install a controller for the Kubernetes HorizontalPodAutoscaler. We recommend using the Kubernetes-based Event Driven Autoscaling (KEDA) controller:
$ helm install keda stable/keda
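You can confirm the release installed successfully:
$ helm status keda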
The autoscaler trait applies to the following workload types:
- Server
- Task
Name | Description | Allowable values | Required | Default |
---|---|---|---|---|
minimum | Lower threshold of replicas to run. | int | | 1 |
maximum | Upper threshold of replicas to run. | int. Cannot be less than the minimum value. | | 10 |
memory | Memory consumption threshold (as a percentage) that will cause a scale event. | int | | |
cpu | CPU consumption threshold (as a percentage) that will cause a scale event. | int | | |
Here's an example of an autoscaler trait. You would attach this to a component within the application configuration:
# Example autoscaler trait entry
traits:
  - name: auto-scaler.core.oam.dev/v1alpha1
    properties:
      maximum: 6
      minimum: 2
      cpu: 50
      memory: 50
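For intuition, these properties map onto the fields of a native HorizontalPodAutoscaler. The sketch below shows a roughly equivalent HPA; it is illustrative only (not the object Rudr literally emits), and the target Deployment name is hypothetical:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: example-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-component  # hypothetical workload name
  minReplicas: 2             # from the trait's minimum
  maxReplicas: 6             # from the trait's maximum
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50  # from the trait's cpu
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 50  # from the trait's memory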
The ingress trait is used for components with service workloads and provides load balancing, SSL termination, and name-based virtual hosting.
To successfully use an ingress trait, you will need to install one of the Kubernetes Ingress controllers. We recommend nginx-ingress:
$ helm install nginx-ingress stable/nginx-ingress
Note: You still must manage your DNS configuration as well. Mapping an ingress to example.com will not work if you do not also control the domain mapping for example.com.
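For quick local testing you can bypass public DNS by pointing the hostname at your ingress controller's external IP. The service name below assumes the nginx-ingress release installed above, and 203.0.113.10 is a placeholder IP:
# Find the controller's external IP
$ kubectl get svc nginx-ingress-controller
# Temporarily map the hostname to that IP
$ echo "203.0.113.10 example.com" | sudo tee -a /etc/hosts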
The ingress trait applies to the following workload types:
- Server
- SingletonServer
Name | Description | Allowable values | Required | Default |
---|---|---|---|---|
hostname | Host name for the ingress. | string | ☑ | |
servicePort | Port number on the service to bind to the ingress. | int. See notes below. | ☑ | |
path | Path to expose. | string | | / |
To find your service port, you can do one of two things:
- find the port on the ComponentSchematic
- find the port on the desired Kubernetes Service object
For example, here's how to find the port on a ComponentSchematic:
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
name: nginx-replicated-v1
spec:
workloadType: core.oam.dev/v1alpha1.Server
containers:
- image: nginx:latest
name: server
ports:
- containerPort: 80 # <-- this is the service port
name: http
protocol: TCP
So to use this on an ingress, you would need to add this to your ApplicationConfiguration:
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
name: example
spec:
components:
- componentName: nginx-replicated-v1
instanceName: example-app
traits:
- name: ingress.core.oam.dev/v1alpha1
properties:
hostname: example.com
path: /
servicePort: 80 # <-- set this to the value in the component
Because each component may have multiple ports, the specific port must be defined in the ApplicationConfiguration.
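After the configuration is applied, you can confirm that Rudr created the underlying Kubernetes Ingress resource (the resource name depends on your instance name):
$ kubectl get ingress
$ kubectl describe ingress <ingress-name>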
The volume mounter trait is responsible for attaching a Kubernetes PersistentVolumeClaim (PVC) to a component.
The volume mounter trait has no external dependencies; nothing needs to be installed before using it. It applies to the following workload types:
- Server
- SingletonServer
- Worker
- SingletonWorker
- Task
- SingletonTask
Name | Description | Allowable values | Required | Default |
---|---|---|---|---|
volumeName | The name of the volume this backs. | string. Matches the volume name declared in ComponentSchematic. | ☑ | |
storageClass | The storage class that a PVC requires. | string. One of the StorageClasses available in your cluster (kubectl get storageclass), or default. | ☑ | |
Here's an example of how to attach a storage volume to your container:
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
name: server-with-volume-v1
spec:
workloadType: core.oam.dev/v1alpha1.Server
containers:
- name: server
image: nginx:latest
resources:
volumes:
- name: myvol
mountPath: /myvol
disk:
required: "50M"
ephemeral: true
In the component schematic volumes section, one volume is specified. It must be at least 50M in size. It is ephemeral, which means that the component author does not expect the data to persist if the pod is destroyed.
Sometimes, components need to persist data. In such cases, the ephemeral flag should be set to false:
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
name: server-with-volume-v1
spec:
workloadType: core.oam.dev/v1alpha1.Server
containers:
- name: server
image: nginx:latest
resources:
volumes:
- name: myvol
mountPath: /myvol
disk:
required: "50M"
ephemeral: false
In the Kubernetes implementation of OAM, a PersistentVolumeClaim (PVC) is used to satisfy the non-ephemeral case. However, by default Rudr does not create this PVC automatically. A trait must be applied that indicates how the PVC is created:
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
name: example-server-with-volume
spec:
components:
- componentName: server-with-volume-v1
instanceName: example-server-with-volume
traits:
- name: volume-mounter.core.oam.dev/v1alpha1
properties:
volumeName: myvol
storageClass: default
The volume-mounter trait ensures that a PVC is created with the given name (myvol) using the given storage class (default). Typically, the volumeName should match the resources.volumes[].name field from the ComponentSchematic. Thus myvol above will match the volume declared in the volumes section of server-with-volume-v1.
When this request is processed by Rudr, it will first create the Kubernetes PVC named myvol and then create a Kubernetes pod that attaches that PVC as a volumeMount.
Attaching PVCs to Pods may take extra time, as the underlying system must first provision storage.
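You can watch the claim while the storage is provisioned; its status should move from Pending to Bound. The PVC name follows the volumeName given above:
$ kubectl get pvc myvol --watch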