design-proposal: VirtualMachineInstanceMigration - Live migration to a named node #320
Conversation
Follow-up and derived from: kubevirt#10712
Implements: kubevirt/community#320
TODO: add functional tests
Signed-off-by: zhonglin6666 <[email protected]>
Signed-off-by: Simone Tiraboschi <[email protected]>
Lovely to see this clear design proposal (even if I don't like anything that assumes a specific node is long-living). I have two questions, though.
design-proposals/migration-target.md (outdated)
> ## Goals
> - A user allowed to trigger a live-migration of a VM and list the nodes in the cluster is able to rely on a simple and direct API to try to live migrate a VM to a specific node.
> - The explicit migration target overrules a nodeSelector or affinity and anti-affinity rules defined by the VM owner.
I find this odd, as the VM and the application in it may not function well (or at all) if affinity is ignored. Can you share more about the origins of this goal? I'd expect the target node to be ANDed with existing anti/affinity rules.
I tend to think that, as a cluster admin who is trying to force a VM to migrate to a named node, this is the natural and expected behaviour:
if I explicitly select a named node, I'm expecting that my VM will eventually be migrated there and nowhere else (such as to a different node selected by the scheduler according to a weighted combination of affinity criteria, resource availability and so on). I can then tolerate that the live migration will fail because I chose a wrong node, but the controller should only try to live-migrate it according to what I'm explicitly asking for.
And by the way this is absolutely consistent with the native k8s behaviour for pods.
`spec.nodeName` for pods is under spec for historical reasons but it's basically controlled by the scheduler:
when a pod is going to be executed, the scheduler checks it and, according to available cluster resources, nodeSelectors, weighted affinity and anti-affinity rules and so on, it selects a node and writes its name to `spec.nodeName` on the pod object. At this point the kubelet on the named node will try to execute the Pod on that node.
If the user explicitly sets `spec.nodeName` on a pod (or in the template of a deployment and so on), the scheduler is not going to be involved in the process since the pod is basically already scheduled for that node and nothing else, and so the kubelet on that node will directly try to execute it there, eventually failing.
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodename explicitly states:
> If the nodeName field is not empty, the scheduler ignores the Pod and the kubelet on the named node tries to place the Pod on that node.
> Using nodeName overrules using nodeSelector or affinity and anti-affinity rules.
And this in my opinion is exactly how we should treat a live migration attempt to a named node.
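To make the behaviour concrete, here is a minimal Pod sketch (name and image are hypothetical) with `spec.nodeName` pre-set: the scheduler skips it entirely and the kubelet on the named node just tries to run it.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod                      # hypothetical name, for illustration only
spec:
  nodeName: my-new-target-node          # pre-assigned node: the scheduler ignores this Pod,
                                        # so nodeSelector/affinity/anti-affinity are not evaluated
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
```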
Let's take the following example (this is a real world use-case):
- An admin is adding a new node to a cluster to take it into prod. This node has a taint to prevent workloads from immediately landing there.
- The admin wants to migrate a VM to this node now to validate that it is working properly.
If we `AND` a new selector for this node, then the migration will not take place, because there is the taint. We'd also need to add a toleration to get the VM scheduled to that node.
With `spec.nodeName` it would be no issue - initially - it could become one if `Require*atRuntime` effects are used.
However, with `spec.nodeName` all other validations - CPU caps, extended, storage, and local resources etc. - will be ignored. We are asking a VM to not start.
Worse: it would be really hard now to understand WHY the VM is not launching.
Thus I think we have to `AND` to the node selector, but need code to understand taints specifically (because taints keep workloads away).
Then we still need to think about a generic mechanism to deal with the reasons why a pod can not be placed on the selected node.
I do not like taking examples from the historically-understandable `Pod.spec.nodeName`. Node identity is not something that should have typically been exposed to workload owners.
Can you summarize your reasoning into the proposal? I think I understand it now, but I am not at ease with it. For example, a cluster admin may easily violate anti/affinity rules that are important for app availability.
@fabiand with taints it is a bit more complex: the valid effects for a taint are `NoExecute`, `NoSchedule` and `PreferNoSchedule`.
Bypassing the scheduler by directly setting `spec.nodeName` will allow us to bypass taints with the `NoSchedule` and `PreferNoSchedule` effects but, AFAIK, it will still be blocked by a `NoExecute` taint, which is also enforced by the kubelet with eviction.
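As a sketch of the difference (taint key and values are hypothetical): a `NoSchedule` taint only matters at scheduling time and is therefore skipped when `spec.nodeName` is pre-set, while a `NoExecute` taint is still enforced after placement, so the target pod would need a matching toleration to stay on the node.

```yaml
# Hypothetical taint on the target node
apiVersion: v1
kind: Node
metadata:
  name: my-new-target-node
spec:
  taints:
  - key: node.example.com/maintenance   # hypothetical taint key
    value: "true"
    effect: NoExecute                   # enforced even after placement: pods without a toleration get evicted
---
# Toleration the target pod would need even if spec.nodeName bypasses the scheduler
apiVersion: v1
kind: Pod
metadata:
  name: target-pod                      # hypothetical name
spec:
  nodeName: my-new-target-node
  tolerations:
  - key: node.example.com/maintenance
    operator: Equal
    value: "true"
    effect: NoExecute
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
```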
@dankenigsberg yes, this is a critical aspect of this design proposal so we should carefully explore and weigh the different alternatives, tracking them down in the design proposal itself as a future reference.
In my opinion the choice strictly depends on the use case and the power we want to offer to the cluster admin when creating a live migration request to a named node.
Directly setting `spec.nodeName` on the target pod will completely bypass all the scheduling hints (`spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution`) and constraints (`spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution`), meaning that the target pod will be started on the named node regardless of how the VM is actually configured.
Another option is trying to append/merge (this sub-topic deserves another discussion by itself) something like

```yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchFields:
          - key: metadata.name
            operator: In
            values:
            - <nodeName>
```

to the affinity rules already defined on the VM.
My concern with this choice is that the affinity/anti-affinity grammar is pretty complex, so if the VM owner already defined some affinity/anti-affinity rules we can easily end up with a set of conflicting rules such that the target pod cannot be scheduled on the named node nor on any other node.
If the use case that we want to address is giving the cluster admin the right to try migrating a generic VM to a named node (for instance for maintenance/emergency reasons), this approach is not fully addressing it, with many possible cases where the only viable option is still manually overriding the affinity/anti-affinity rules set by the VM owner.
I still tend to think that always bypassing the scheduler with `spec.nodeName` is the K.I.S.S. approach here if forcing a live migration to a named node is exactly what the cluster admin is trying to do.
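To illustrate the conflict concern (labels and node name are hypothetical), assume the injected `matchFields` rule is merged into a required term the VM owner already defined: entries inside a single `nodeSelectorTerm` are ANDed, so if the named node does not carry the owner's required label, no node can satisfy the term and the target pod stays `Pending`.

```yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:              # required by the VM owner
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - zone-a
          matchFields:                   # injected for the migration target
          - key: metadata.name
            operator: In
            values:
            - node-in-zone-b             # hypothetical node outside zone-a: nothing matches both requirements
```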
I summarized these considerations into the proposal itself, let's continue from there.
design-proposals/migration-target.md (outdated)
> # Implementation Phases
> A really close attempt was already tried in the past with https://github.com/kubevirt/kubevirt/pull/10712 but the PR got some pushback.
> A similar PR should be reopened, refined, and we should implement functional tests.
Would you outline the nature of the pushback? Do we currently have good answers to the issues raised back then?
Trying to summarize (@EdDev please keep me honest on this), it was somehow considered a semi-imperative approach and it was pointed out that a similar behavior could already be achieved indirectly by modifying on the fly and then reverting the affinity rules on the VM object.
see: kubevirt/kubevirt#10712 (comment)
and: kubevirt/kubevirt#10712 (comment)
How much this is imperative is questionable: in the end we already have a `VirtualMachineInstanceMigration` object that you can use to declare that you want to trigger a live migration; this is only about letting you also declare that you want to have a live migration to a named host.
The alternative approach based on amending the affinity rules on the VM object and waiting for the LiveUpdate rollout strategy to propagate it to the VMI before trying a live migration is described, pointing out its main drawback, in the `Alternative design` section of this proposal.
Can you inline this succinctly? E.g., that the PR got some pushback because it was not clear why a new API for one-off migration is needed, and that we give here a better explanation why this one-off migration destination request is necessary.
done
- The "one-time" operation convinced me.
- The reasoning for the real need is hard for me, but I did feedback on this proposal what is convincing me.
/cc
design-proposals/migration-target.md (outdated)
> - Cluster-admin: the administrator of the cluster
>
> ## User Stories
> - As a cluster admin I want to be able to try to live-migrate a VM to specific node for maintenance reasons eventually overriding what the VM owner set
I'd like to see more fleshed out user stories. It's unclear to me based on these user stories why the existing methods wouldn't suffice.
> As a cluster admin I want to be able to try to live-migrate a VM to specific node for maintenance reasons eventually overriding what the VM owner set
For example, why wouldn't the cluster admin taint the source node and live migrate the vms away using the existing methods? Why would the admin need direct control over the exact node the VM goes to? I'd like to see a solid answer for why this is necessary over existing methods.
That's where this discussion usually falls apart and why it hasn't seen progress through the years. I'm not opposed to this feature, but I do think we need to articulate clearly why the feature is necessary
I expanded this section
design-proposals/migration-target.md (outdated)
> ## User Stories
> - As a cluster admin I want to be able to try to live-migrate a VM to specific node for various possible reasons such as:
>   - I just added to the cluster a new powerful node and I want to migrate a selected VM there without trying more than once according to scheduler decisions
>   - I'm not using any automatic workload rebalancing mechanism and I periodically want to manually rebalance my cluster according to my observations
>   - Foreseeing a peak in application load (e.g. new product announcement), I'd like to balance in advance my cluster according to my expectation and not to current observations
>   - During a planned maintenance window, I'm planning to drain more than one node in a sequence, so I want to be sure that the VM is going to land on a node that is not going to be drained in a near future (needing then a second migration) and being not interested in cordoning it also for other pods
>   - I just added a new node and I want to validate it trying to live migrate a specific VM there
nice! these are good reasons that hadn't been explored during previous discussions, thanks
design-proposals/migration-target.md (outdated)
> When a pod is going to be executed, the scheduler is going to check it and, according to available cluster resources, nodeSelectors, weighted affinity and anti-affinity rules and so on,
> the scheduler is going to select a node and write its name on `spec.nodeName` on the pod object. At this point the kubelet on the named node will try to execute the Pod on that node.
>
> If `spec.nodeName` is already set on a pod object as in this approach, the scheduler is not going to be involved in the process since the pod is basically already scheduled for that node and only for the named node, and so the kubelet on that node will directly try to execute it there, eventually failing.
I think using `pod.spec.nodeName` is likely the most straightforward approach. This does introduce some new failure modes that might not be obvious to admins.
For example, today if a target pod is unschedulable due to lack of resources, the migration object will time out due to the pod being stuck in "pending". This information is fed back to the admin as a k8s event associated with the migration object.
However, by setting `pod.spec.nodeName` directly, we'd be bypassing the checks that ensure the required resources are available on the node (like the node having the "kvm" device available, for instance), and the pod would likely get scheduled and immediately fail. I don't think we are currently bubbling up these types of errors to the migration object, so this could leave admins wondering why their migration failed.
I guess what I'm trying to get at here is: I like this approach, let's make sure the new failure modes get reported back on the migration object so the admin has some sort of clue as to why a migration has failed.
@davidvossel We already report the failure reason on the VMIM. This is part of the VMIM status.
`pod.spec.nodeName` entirely bypasses the scheduler, making AAQ unusable as it relies on "pod scheduling readiness".
From my pov, bypassing the scheduler is a no go.
> From my pov, bypassing the scheduler is a no go.

luckily we have also another option, as described in:
> ### B. appending/merging an additional nodeAffinity rule on the target virt-launcher pod (merging it with VM owner set affinity/anti-affinity rules)
This will add an additional constraint for the scheduler, summing it up with existing constraints/hints.
In case of mismatching/opposing rules, the destination pod will not be scheduled and the migration will fail.
@vladikr @davidvossel +1.
`spec.nodeName` is a horrible field that is not being removed from Kubernetes only due to backward compatibility and causes a lot of trouble. I agree that it should be considered as a no-go.
I understand the intention behind introducing the nodeName field, but I fail to see how something like this may work at scale. It seems to me that most, if not all, of the user stories listed in the proposal can already be achieved through existing methods. Adding this field could potentially cause confusion for admins and lead to unnecessary friction with the Kubernetes scheduler and descheduler flows. I'd prefer to see solutions to the user stories aligned closely with established patterns (descheduler policies or scheduler plugins).
> ## User Stories
> - As a cluster admin I want to be able to try to live-migrate a VM to specific node for various possible reasons such as:
>   - I just added to the cluster a new powerful node and I want to migrate a selected VM there without trying more than once according to scheduler decisions
I wonder what would be so special about these VMs that cannot be handled by a descheduler?
Also, how would the admin know that the said descheduler did not remove these VMs at a later time?
The descheduler is going to decide according to its internal policy.
In the more general use case it will be a cluster admin who can decide to live migrate a VM just because he thinks it's the right thing to do.
>   - I'm not using any automatic workload rebalancing mechanism and I periodically want to manually rebalance my cluster according to my observations
>   - Foreseeing a peak in application load (e.g. new product announcement), I'd like to balance in advance my cluster according to my expectation and not to current observations
>   - During a planned maintenance window, I'm planning to drain more than one node in a sequence, so I want to be sure that the VM is going to land on a node that is not going to be drained in a near future (needing then a second migration) and being not interested in cordoning it also for other pods
>   - I just added a new node and I want to validate it trying to live migrate a specific VM there
This could be achieved today by modifying the VM's node selector or creating a new VM. New nodes will very likely be the scheduler's preferred target for new pods already.
Right,
from a pure technical perspective this feature can already be achieved simply by directly manipulating the node affinity rules on the VM object. Now we have the `LiveUpdate` rollout strategy, so the new affinity rules will be quickly propagated to the VMI and so consumed on the target pod of the live-migration.
No doubt, on the technical side it will work.
But the central idea of this proposal is about allowing a cluster admin to do that without touching the VM object.
This for two main reasons:
- separation of personas: the VM owner can set rules on his VM; a cluster admin could still be interested in migrating a VM without messing up or altering the configuration set by the owner on the VM object.
- separating what is a one-off configuration for the single migration attempt (so set on the `VirtualMachineInstanceMigration` object), relevant only for this single migration attempt and which should not produce any side effect in the future, from what is a long-term configuration that is going to stay there and be applied also later on (future live migrations, restarts).
This comment applies to all the user stories here.
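For reference, a minimal sketch of that existing workaround (node name is hypothetical): the admin temporarily adds a required node affinity under the VM object, waits for the `LiveUpdate` rollout strategy to propagate it to the VMI, triggers the migration, and then reverts the change.

```yaml
# Temporarily added under vm.spec.template.spec, then reverted after the migration completes
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchFields:
        - key: metadata.name
          operator: In
          values:
          - my-new-target-node          # hypothetical target node
```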
design-proposals/migration-target.md (outdated)
> ## User Stories
> - As a cluster admin I want to be able to try to live-migrate a VM to specific node for various possible reasons such as:
>   - I just added to the cluster a new powerful node and I want to migrate a selected VM there without trying more than once according to scheduler decisions
>   - I'm not using any automatic workload rebalancing mechanism and I periodically want to manually rebalance my cluster according to my observations
This is also doable today as the default scheduler will try to choose the least busy node to schedule the target pod.
> - As a cluster admin I want to be able to try to live-migrate a VM to specific node for various possible reasons such as:
>   - I just added to the cluster a new powerful node and I want to migrate a selected VM there without trying more than once according to scheduler decisions
>   - I'm not using any automatic workload rebalancing mechanism and I periodically want to manually rebalance my cluster according to my observations
>   - Foreseeing a peak in application load (e.g. new product announcement), I'd like to balance in advance my cluster according to my expectation and not to current observations
Could you please elaborate on this?
How would the cluster look according to the admin's expectations?
Couldn't a taint be placed on some nodes to resolve capacity before the new product announcement?
Same, I do not want to argue with an admin on how the cluster should be managed, but this is surely not a recommended way we want to encourage/support.
Right, I also added this note:
> Note technically all of this can be already achieved manipulating the node affinity rules on the VM object, but as a cluster admin I want to keep a clear boundary between what is a long-lasting setting for a VM, defined by the VM owner, and what is a single-shot requirement for a one-off migration
I spoke with @fabiand offline. My main concern with this proposal is that it may promote a wrong assumption that manual cluster balancing is preferred instead of relying on the scheduler/descheduler - while this is just a local minimum.
I think that exposing the whole node affinity/anti-affinity (+ tolerations + ...) grammar on the …
I think it's up to us to emphasize this assumption in the API documentation making absolutely clear that the …
I'm proposing something like:

```go
// NodeName is a request to try to migrate this VMI to a specific node.
// If it is non-empty, the migration controller will simply try to configure the target VMI pod to be started on that node,
// assuming that it fits resource, limits and other node placement constraints; it will override nodeSelector and affinity
// and anti-affinity rules set on the VM.
// If it is empty (recommended), the scheduler becomes responsible for finding the best Node to migrate the VMI to.
// +optional
NodeName string `json:"nodeName,omitempty"`
```

I'm adding it to this proposal.
Setting affinity and toleration is exactly what any other user would need to do to allow scheduling a workload on a tainted node; not sure why we need to facilitate this in the migration case. Generally speaking,
from my pov, we could get away without any API changes and without advertising this option at all - making it available for special cases and not mainstream.
I'm sorry but now I'm a bit confused.
Today a user who is allowed to trigger a live migration of a VM can simply create something like:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migration-job
spec:
  vmiName: vmi-fedora
```

or, more imperatively, execute something like:

```
$ virtctl migrate vmi-fedora
```

that under the hood is going to create a `VirtualMachineInstanceMigration` object like that.
This proposal is now about extending it with the optional capability to try to live migrate to a named node:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migration-job
spec:
  vmiName: vmi-fedora
  nodeName: my-new-target-node
```

or executing something like:

```
$ virtctl migrate vmi-fedora --nodeName=my-new-target-node
```

and this because one of the key points here is that the cluster admin is not supposed to be required to amend the VM object.
The migration controller will simply notice that … and set

```yaml
spec:
  nodeName: <nodeName>
```

or

```yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchFields:
          - key: metadata.name
            operator: In
            values:
            - <nodeName>
```

on the target virt-launcher pod.
Can you please summarize what you exactly mean with … ?
Yes, apologies. I meant to say a dedicated API.
I think that by using …
Yes. As I mentioned above, I would prefer to let the admin add …
> Such a capability is expected from traditional virtualization solutions but, with certain limitations, is still pretty common across the most popular cloud providers (at least when using dedicated and not shared nodes).
> - For instance on Amazon EC2 the user can already live-migrate a `Dedicated Instance` from a `Dedicated Host` to another `Dedicated Host` explicitly choosing it from the EC2 console, see: https://repost.aws/knowledge-center/migrate-dedicated-different-host
> - also on Google Cloud Platform Compute Engine the user can easily and directly live-migrate a VM from a `sole-tenancy` node to another one via CLI or REST API, see: https://cloud.google.com/compute/docs/nodes/manually-live-migrate#gcloud
I think this is an entirely different offering from what KubeVirt in general does.
I don't know how relevant this example is. Here the user specifically knows what node was given to him.
> Sole-tenancy lets you have exclusive access to a sole-tenant node, which is a physical Compute Engine server that is dedicated to hosting only your project's VMs
Why do you think that this use case is different from `Dedicated Hosts` on AWS EC2?
See for instance figure 1 on https://cloud.google.com/compute/docs/nodes/sole-tenant-nodes
Normally your VMs are going to be executed on a multi-tenant host shared with other customers.
If you have special requirements in terms of physical isolation you can decide to pay more and have a set of physical hosts that are exclusively dedicated to you; in that case you know the names of the hosts and you can also decide (for various reasons) to manually live migrate a VM to a named node within your sole-tenancy node group.
Now if you are the cluster admin of an on-premise cluster (or not, but with dedicated hosts) with KubeVirt, as for this proposal, you will be able to live migrate a VM to a named node.
The assumption is still that you are the cluster admin and you know the names of the hosts in your cluster.
Why is it different?
Thank you. I would add a sentence somewhere (maybe documentation is enough), something along the lines of: `With this feature, the admin can migrate virtual machines to specific nodes, but the Kubernetes descheduler may move those virtual machines to other nodes to optimize cluster resources.`
/approve
Just wanted to add that for cluster admins it is always nice to have such tools out of the box, and it is a very common case for any maintenance to move out VMs in a predicted way. We have used KubeVirt in our production for 2 years already and we are looking forward to seeing this feature in the nearest releases.
Thank you Vladik! @jean-edouard are you willing to lgtm?
@fabiand, Frankly, I'm not very happy with my approval of this proposal. I asked around and didn't find a single contributor who didn't think this was a terrible idea that would harm us in the future. I'm sure that there is an alternative way of achieving the same functionality without exposing such a direct API that someone could take advantage of to disrupt system flows that rely on live migration.
Well, this sounds a little like what ppl said about KubeVirt itself a few years ago :) I think Simone himself, and me, are also contributors. And we have @vriabyk.
+1 from me
@fabiand, you are right. Here is a list of approvers and reviewers in this repo: …
I will let someone else approve this proposal, which I personally 99% disagree with.
/approve cancel
[APPROVALNOTIFIER] This PR is NOT APPROVED
@vriabyk (Happy to hear from anyone)
I think we could move forward with this proposal, but there are some areas that concern me and others. I encourage all KubeVirt approvers to speak up. Please express your opinion. Here are some suggestions from my side.
It isn't just because "it's how we've always done it"; other VM management platforms implemented and continue to maintain similar features because they are useful for a variety of reasons. As an admin, being able to easily and deterministically force a one-time migration to a specified node without altering any other system parameters or the VM definition is immensely useful.
Kubernetes uses many variables to decide where to place workloads and when to move them around. If VMIMigration objects had an option to decide where to move VMIs, nothing would guarantee that the VMI would stay there for any amount of time! While we put a lot of effort into making KubeVirt look and feel like "traditional" virtualization platforms, we also have to play by the cloud rules, which means some old habits need to be adjusted. The user/admin workflow that led to this design proposal is one of them.
I think it is expressed in the user stories section, which is the important part.
The maintainers/approvers seem to look for the "why" behind the need to even do this action.
Limiting this to an admin makes sense to me. Using priority queues sounds much bigger and more complicated, so it sounds less attractive to me. @vladikr, assuming we will have a new CRD version for VMIM, strictly accessible by cluster admins, will that make this doable from your side?
KubeVirt is aimed at running on both cloud and BM, therefore the need as I see it is valid. The solution should resolve potential problems if they exist.
Correct and desired: it's just for the one-off migration attempt; it's not a long-term requirement, which can already be set on the VM object (we already have node selectors and node affinity there as for regular pods).
Why?
And we already have an API to declaratively do that on VMs; this proposal is only about extending it, in a fully declarative way, to the `VirtualMachineInstanceMigration` object.
Something like a cluster scoped?

```yaml
apiVersion: kubevirt.io/v1
kind: EmergencyVirtualMachineInstanceMigration
metadata:
  name: migration-job
spec:
  vmiName: vmi-fedora
  vmiNamespace: usernamespace
  addedNodeSelector:
    accelerator: gpuenabled123
    kubernetes.io/hostname: "ip-172-20-114-199.example"
```
💯
Put this behind a feature flag and add an RBAC check just for this call. Document all the warnings about consequences around the RBAC role and the feature flag (default disabled). Give the barrier to entry a bit of friction along with warnings, and let admins shoot themselves in the foot if they so desire.
@greg-bock Thanks. I'd love to hear more. Could you please explain how you determine (out of potentially thousands of nodes) to which node you need VM X to move? What is so special about this node? (Perhaps we could propose a different API and programmatically determine that via a different API??) When choosing this node, how do you take into account all the variables the scheduler takes into account when scheduling a node?
> - Workload balancing solution doesn't always work as expected
>   > I have configured my cluster with the descheduler and a load aware scheduler (trimaran), thus by default, my VMs will be regularly descheduled if utilization is not balanced, and trimaran will ensure that my VMs will be scheduled to underutilized nodes. Often this is working, however, in exceptional cases, i.e. if the load changes too quickly, or only 1 VM is suffering, and I want to avoid that all Vms on the cluster are moved, I need - for exception - a tool to move one VM, once to deal with this exceptional situation.
> - Troubleshooting a node
> - Validating a new node migrating there a specific VM
@tiraboschi @fabiand Sorry but I don't understand why live-migration is better than creating a new VM with a node selector of `kubernetes.io/hostname`?
> - Experienced admins are used to control where their critical workloads are move to
>   > I as an admin, notice that a VM with guaranteed resources is having issues (I watched the cpu iowait metric). In order to resolve the performance issue and keep my user happy, I as admin want to move the VM, without interruption, to a node which is currently underutilized - and will make the user's vm perform better.
> - Workload balancing solution doesn't always work as expected
>   > I have configured my cluster with the descheduler and a load aware scheduler (trimaran), thus by default, my VMs will be regularly descheduled if utilization is not balanced, and trimaran will ensure that my VMs will be scheduled to underutilized nodes. Often this is working, however, in exceptional cases, i.e. if the load changes too quickly, or only 1 VM is suffering, and I want to avoid that all Vms on the cluster are moved, I need - for exception - a tool to move one VM, once to deal with this exceptional situation.
Since this is a distributed system, if you've observed at T1 that a node is under utilized it doesn't mean that when you will trigger the live-migration the target node will still be under utilized, how would you guarantee that no new workloads will be scheduled on that same node?
Exactly my point. If objects themselves want to run on a node that has specific characteristics, they can specify them via node selector/affinities.
Advanced troubleshooting where a node might need to be prepped before the migration to collect information during the migration, or perhaps I need to move multiple production workloads (not necessarily all KubeVirt VMs) to the same node to cause behavior that a non-production reproducer hasn't been found for yet. I might not be using KubeVirt scheduling logic at all, or I have a separate orchestration layer influencing those scheduler decisions for non-technical reasons (like bin packing host machines for accounting/tax purposes). Perhaps I've run into an odd issue with a maintenance where the scheduler is making things difficult and I just need to move one workload one time. I've run into so many similar situations where these types of features are useful. No matter how many examples or use cases I might give, I'll get the same response for each one: go skin the cat these other 9 ways that are cumbersome, complicated, error prone, or might have worse unintended side effects.
We aren't; we only want to influence one of those variables one time, and if it fails to start for other reasons like not enough resources or some other issue then that's fine. This would be no different than altering the VMI definition to target the node. If I brought 10, 25, 100 more people into this conversation just stating they want this feature, when would you all acquiesce, if at all?
kubevirt/kubevirt#13497 is removing "write rights" for the …
You mentioned non-kubevirt VMs.
I'm curious whether a descheduler was considered for this use-case.
This is an interesting question. When a company develops a product with the intention of making a profit, customer or potential customer requests for a feature are arguably the strongest reason to consider implementing it (though it's not the only factor, it is certainly extremely significant). However, Kubevirt is not a product, but an open-source project. This means it doesn't belong to a specific company but rather to a community composed of various stakeholders. These stakeholders work for different companies, have different customers, and different interests. IMHO to get features into healthy open-source projects, consensus among the stakeholders is required. This usually involves convincing them that the feature is not only useful (and explaining why it is useful) but also that it won't negatively impact the different interests of the various stakeholders. In conclusion, while many people requesting the same feature is a strong indicator of its potential usefulness, it is not enough to only show that people want the feature IMO. It is crucial to also demonstrate how the feature aligns with the interests of the various stakeholders and ensure it does not negatively impact their different priorities in the shorter and longer term.
I trust VMM operators in the sense that I'm sure they're speaking from their experience and reflecting on real pain that they have. However, I think it is a reasonable (and even an expected) request to understand if they have considered certain alternatives and understand why it didn't help them.
Container and VM workloads are different; this is why we have KubeVirt.
I am not sure what the question is here.
I know containers and VMs are different and that containers don't live-migrate. The use-case is to move both VMs and regular pods to a node, so I'm asking how it's being done.
@iholder101 I think that it will cost time and money to train people to get used to new habits. I also spoke to a technical solution architect; he said that the IT departments in many large companies tend to just cordon nodes and move workloads manually, and they do it before upgrades. The reason is that they don't trust the scheduler. I am talking about the old virtualization orchestration systems. Perhaps the scheduler there is really hard to configure, and easy to disable. Bottom line, no technical reason as I see it. I have a question: is this feature going to harm the project's stability?
What this PR does / why we need it:
Adding a design proposal to extend the `VirtualMachineInstanceMigration` object with an additional API to let a cluster admin try to trigger a live migration of a VM, injecting on the fly an additional NodeSelector constraint.
The additional NodeSelector can only restrict the set of Nodes that are valid targets for the migration (eventually down to a single host).
All the affinity rules defined on the VM spec are still going to be satisfied.
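For illustration, a sketch of how this could look end to end (the `addedNodeSelector` field name is taken from the example earlier in this thread; node and label values are hypothetical), assuming the injected selector is simply merged with the VM's own `nodeSelector` on the target virt-launcher pod so it can only narrow the candidate set:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migration-job
spec:
  vmiName: vmi-fedora
  addedNodeSelector:                          # proposed field: restricts, never widens, the candidate nodes
    kubernetes.io/hostname: my-new-target-node
---
# Resulting nodeSelector on the target virt-launcher pod (sketch): both entries must match,
# and all affinity/anti-affinity rules from the VM spec still apply.
nodeSelector:
  accelerator: gpuenabled123                  # hypothetical selector already set by the VM owner
  kubernetes.io/hostname: my-new-target-node
```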
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes https://issues.redhat.com/browse/CNV-7075
Special notes for your reviewer:
Something like this was directly proposed/implemented with kubevirt/kubevirt#10712 and already discussed there.
Checklist
This checklist is not enforcing, but it's a reminder of items that could be relevant to every PR.
Approvers are expected to review this list.
Release note: