Commit d1d4dd5: Network binding plugin: Address sidecar's requests and limits

Signed-off-by: Orel Misan <[email protected]>
orelmisan committed Jun 23, 2024 (1 parent: 145661d)

File changed: design-proposals/network-binding-plugin/network-binding-plugin.md (+80 lines)
- Mount points
- Downward API

##### Sidecar Resource Requirements / Limits
Currently, the network binding plugin sidecar container's CPU/memory requests and/or limits are set in any of the following scenarios:
1. The VMI was configured to have dedicated CPUs (by setting `Spec.Domain.CPU.DedicatedCPUPlacement`).
2. The VMI was configured to have a virt-launcher pod with [Guaranteed QoS class](https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/#guaranteed) (by setting `Spec.Domain.Resources`).
3. The cluster admin has configured sidecar container requests and/or limits on the [KubeVirt CR](https://kubevirt.io/api-reference/main/definitions.html#_v1_supportcontainerresources).

> **Note**
>
> Setting CPU/memory requests and/or limits for sidecar containers on the KubeVirt CR applies uniformly to all [hook sidecar containers](https://kubevirt.io/user-guide/user_workloads/hook-sidecar/) and network binding plugin sidecar containers.
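
For illustration, a minimal sketch of the KubeVirt CR setting from scenario 3, assuming `sidecar` is the support container type that covers hook and network binding plugin sidecars (the resource values are placeholders):

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    supportContainerResources:
      - type: sidecar        # assumed type name for hook/binding sidecar containers
        resources:
          requests:
            cpu: 200m
            memory: 20Mi
          limits:
            cpu: 200m
            memory: 20Mi
```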
In the common scenario of a regular VM and no special configuration on the KubeVirt CR,
the network binding plugin sidecar containers have no CPU/memory requests or limits.
A sidecar container with a memory leak could therefore consume unbounded memory and destabilize the node.


Alternatives:

1. The cluster admin could configure worst-case CPU/memory requests and/or limits on the KubeVirt CR:

Pros:
- Already implemented.

Cons:
   - Applies uniformly to all hook sidecars and network binding plugins.
   - Worst-case CPU/memory requests/limits must be defined on the KubeVirt CR, which may be wasteful when several hook sidecars and network binding plugins are combined.
- Coarse level of control.
- Only supports CPU and memory requests and limits.

2. Additional API for sidecar resource configuration:
The network binding plugin API in the KubeVirt CR could receive an additional input field to specify the sidecar resource requirements:

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    network:
      binding:
        mynetbindingplugin:
          sidecarImage: quay.io/kubevirt/mynetbindingplugin
          sidecarResources:
            requests:
              cpu: 200m
              memory: 20Mi
```

   Pros:
   - Cluster-wide definition of network binding plugin sidecar resources, per plugin.
   - Finer-grained control over the resources allocated to network binding plugin sidecars.
   - Decoupling from the existing hook sidecar configuration.
   - Resources other than CPU and memory could be requested.

   Cons:
   - Requires an API change.
   - The API will probably evolve as additional plugins are created.
   - May require cluster admins to adjust plugin resources during the plugin's lifecycle.

3. Mutating webhook for the virt-launcher pod:
For each network binding plugin used, the VMI controller will add a label to the virt-launcher pod in the following format:
`kubevirt.io/network-binding-plugin:<plugin-name>`

The binding plugin authors will have to provide a mutating webhook that intercepts
the creation of virt-launcher pods carrying the above label and adds the appropriate
resource requests/limits to every relevant sidecar container.
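
As a sketch, such a webhook could be registered with an `objectSelector` that matches the label above, so only virt-launcher pods created for VMs using the plugin are intercepted. The webhook name, service name, namespace, and path below are hypothetical:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mynetbindingplugin-sidecar-resources
webhooks:
  - name: sidecar-resources.mynetbindingplugin.example
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
    # Intercept only virt-launcher pods labeled for this plugin.
    objectSelector:
      matchLabels:
        kubevirt.io/network-binding-plugin: mynetbindingplugin
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      # Hypothetical service exposing the plugin author's webhook server.
      service:
        namespace: mynetbindingplugin-system
        name: mynetbindingplugin-webhook
        path: /mutate-pods
```

The webhook server itself would return a JSON patch that sets `resources` on the matching sidecar containers; the choice of `failurePolicy` trades safety against VM creation availability, which relates to the cons below.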

Pros:
- Plugin authors have full control over the sidecar's resources.
- No additional API is added to KubeVirt.
- Very small change in KubeVirt.

Cons:
- Plugin authors must provide and integrate an additional component.
- Additional point of failure.
- Additional latency when creating VMs with network binding plugins.


#### Configure Pod netns

The CNI plugin has privileged access to the pod network namespace and