diff --git a/2024/changelog-v1.3.0.html b/2024/changelog-v1.3.0.html new file mode 100644 index 0000000000..5a59e66438 --- /dev/null +++ b/2024/changelog-v1.3.0.html @@ -0,0 +1,516 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + KubeVirt v1.3.0 | KubeVirt.io + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
+
+
+
+

+

KubeVirt v1.3.0

+ +
+
+

v1.3.0

+ +

Released on: Wed Jul 17 15:09:44 2024 +0000

+ +
    +
  • [PR #12319][Sreeja1725] Add v1.3.0 perf and scale benchmarks data
  • +
  • [PR #12330][kubevirt-bot] Fix wrong KubeVirtVMIExcessiveMigrations alert calculation in an upgrade scenario.
  • +
  • [PR #12328][acardace] enable only for VMs with memory >= 1Gi
  • +
  • [PR #12272][Sreeja1725] Add unit tests to check for API backward compatibility
  • +
  • [PR #12296][orelmisan] Network binding plugins: Enable the ability to specify compute memory overhead
  • +
  • [PR #12279][kubevirt-bot] Fix: persistent tpm can be used with vmis containing dots in their name
  • +
  • [PR #12226][kubevirt-bot] Virt export route has an edge termination of redirect
  • +
  • [PR #12240][kubevirt-bot] Updated common-instancetypes bundles to v1.0.1
  • +
  • [PR #12249][kubevirt-bot] Fix missing performance metrics for VMI resources
  • +
  • [PR #12237][vladikr] Only a single vgpu display option with ramfb will be configured per VMI.
  • +
  • [PR #12122][kubevirt-bot] Fix VMPools when LiveUpdate is used as the vmRolloutStrategy.
  • +
  • [PR #12201][kubevirt-bot] fix RerunOnFailure stuck in Provisioning
  • +
  • [PR #12151][kubevirt-bot] Bugfix: Implement retry mechanism in export server and vmexport
  • +
  • [PR #12171][kubevirt-bot] PreferredDiskDedicatedIoThread is now only applied to virtio disk devices
  • +
  • [PR #12146][kubevirt-bot] Memory Hotplug fixes and stabilization
  • +
  • [PR #12185][kubevirt-bot] VMs with a single socket and NetworkInterfaceMultiqueue enabled require a restart to hotplug additional CPU sockets.
  • +
  • [PR #12132][kubevirt-bot] Introduce validatingAdmissionPolicy to restrict node patches on virt-handler
  • +
  • [PR #12109][acardace] Support Memory Hotplug with Hugepages
  • +
  • [PR #12009][xpivarc] By enabling the NodeRestriction feature gate, KubeVirt now authorizes virt-handler’s requests to modify VMs (a sketch of enabling a feature gate follows this list).
  • +
  • [PR #11681][lyarwood] The CommonInstancetypesDeployment feature and gate are retrospectively moved to Beta from the 1.2.0 release.
  • +
  • [PR #12025][fossedihelm] Add descheduler compatibility
  • +
  • [PR #12097][fossedihelm] Bump k8s deps to 0.30.0
  • +
  • [PR #12089][jean-edouard] Less privileged virt-operator ClusterRole
  • +
  • [PR #12064][akalenyu] BugFix: Graceful deletion skipped for any delete call to the VM (not VMI) resource
  • +
  • [PR #10490][jschintag] Add support for building and running kubevirt on s390x.
  • +
  • [PR #12079][EdDev] Network hotplug feature is declared as Beta.
  • +
  • [PR #11455][lyarwood] LiveUpdates of VMs using instance types are now supported with the same caveats as when making changes to a vanilla VM.
  • +
  • [PR #12000][machadovilaca] Create kubevirt_vmi_launcher_memory_overhead_bytes metric
  • +
  • [PR #11915][ormergi] The ‘passt’ core network binding is discontinued and removed.
  • +
  • [PR #12016][acardace] fix starting VM with Manual RunStrategy
  • +
  • [PR #11533][alicefr] Implement volume migration and introduce the migration updateVolumesStrategy field
  • +
  • [PR #11934][assafad] Add kubevirt_vmi_last_connection_timestamp_seconds metric
  • +
  • [PR #11956][mhenriks] Introduce export.kubevirt.io/v1beta1
  • +
  • [PR #11996][ShellyKa13] BugFix: Fix restore panic in case of volumesnapshot missing
  • +
  • [PR #11957][mhenriks] snapshot: Ignore unfreeze error if VMSnapshot deleting
  • +
  • [PR #11906][machadovilaca] Create kubevirt_vmi_info metric
  • +
  • [PR #11969][iholder101] Infra components control-plane nodes NoSchedule toleration
  • +
  • [PR #11955][mhenriks] Introduce snapshot.kubevirt.io/v1beta1
  • +
  • [PR #11883][orelmisan] Restart of a VM is required when the CPU socket count is reduced
  • +
  • [PR #11835][talcoh2x] add Intel Gaudi to adopters.
  • +
  • [PR #11344][aerosouund] Refactor device plugins to use a base plugin and define a common interface
  • +
  • [PR #11973][fossedihelm] Bug fix: Correctly reflect RestartRequired condition
  • +
  • [PR #11963][acardace] Fix RerunOnFailure RunStrategy
  • +
  • [PR #11962][lyarwood] VirtualMachines referencing an instance type are now allowed when the LiveUpdate feature is enabled and will trigger the RestartRequired condition if the reference within the VirtualMachine is changed.
  • +
  • [PR #11942][ido106] Update virtctl to use v1beta1 endpoint for both regular and async image upload
  • +
  • [PR #11648][kubevirt-bot] Updated common-instancetypes bundles to v1.0.0
  • +
  • [PR #11659][iholder101] Require scheduling infra components onto control-plane nodes
  • +
  • [PR #10545][lyarwood] ControllerRevisions containing instance types and preferences are now upgraded to their latest available version when the VirtualMachine owning them is resync’d by virt-controller.
  • +
  • [PR #11901][EdDev] The ‘macvtap’ core network binding is discontinued and removed.
  • +
  • [PR #11922][alromeros] Bugfix: Fix VM manifest rendering in export controller
  • +
  • [PR #11908][victortoso] sidecar-shim: allow stderr log from binary hooks
  • +
  • [PR #11729][lyarwood] spreadOptions have been introduced to preferences in order to allow for finer grain control of the preferSpread preferredCPUTopology. This includes the ability to now spread vCPUs across guest visible sockets, cores and threads.
  • +
  • [PR #11655][acardace] Allow hotplugging vCPUs for VMs with CPU requests and/or limits set
  • +
  • [PR #11701][EdDev] The SLIRP core binding is deprecated and removed.
  • +
  • [PR #11773][jean-edouard] Persistent TPM/UEFI will use the default storage class if none is specified in the CR.
  • +
  • [PR #11846][victortoso] SMBios sidecar can be built out-of-tree
  • +
  • [PR #11788][ormergi] The network-info annotation is now used for mapping between SR-IOV network and the underlying device PCI address
  • +
  • [PR #11700][alicefr] Add the updateVolumeStrategy field
  • +
  • [PR #11256][andreabolognani] This version of KubeVirt includes upgraded virtualization technology based on libvirt 10.0.0 and QEMU 8.2.0.
  • +
  • [PR #11482][brianmcarey] Build KubeVirt with go v1.22.2
  • +
  • [PR #11641][alicefr] Add kubevirt.io/testWorkloadUpdateMigrationAbortion annotation and a mechanism to abort workload updates
  • +
  • [PR #11770][alicefr] Fix the live updates for volumes and disks
  • +
  • [PR #11790][aburdenthehand] Re-adding Cloudflare to our ADOPTERS list
  • +
  • [PR #11718][fossedihelm] Fix: SEV methods in client-go now satisfy the proxy server configuration, if provided
  • +
  • [PR #11685][fossedihelm] Updated go version of the client-go to 1.21
  • +
  • [PR #11618][AlonaKaplan] Extend network binding plugin to support device-info DownwardAPI.
  • +
  • [PR #11283][assafad] Collect VMI OS info from the Guest agent as kubevirt_vmi_phase_count metric labels
  • +
  • [PR #11676][machadovilaca] Rename rest client metrics to include kubevirt prefix
  • +
  • [PR #11557][avlitman] New memory statistics added named kubevirt_memory_delta_from_requested_bytes
  • +
  • [PR #11678][Vicente-Cheng] Improve the handling of ordinal pod interface name for upgrade
  • +
  • [PR #11653][EdDev] Build the passt custom CNI binary statically, for the passt network binding plugin.
  • +
  • [PR #11294][machadovilaca] Fix kubevirt_vm_created_total being broken down by virt-api pod
  • +
  • [PR #11307][machadovilaca] Add e2e tests for metrics
  • +
  • [PR #11479][vladikr] Virtual machine instances will no longer be stuck in an irrecoverable state after an interrupted post-copy migration. Instead, they will fail and can be restarted.
  • +
  • [PR #11416][dhiller] emission of k8s logs when using programmatic focus with FIt
  • +
  • [PR #11272][dharmit] Make ‘image’ field in hook sidecar annotation optional.
  • +
  • [PR #11500][iholder101] Support HyperV Passthrough: automatically use all available HyperV features
  • +
  • [PR #11484][jcanocan] Reduce the downwardMetrics server maximum number of requests per second to 1.
  • +
  • [PR #11498][acardace] Allow hotplugging memory for VMs with memory limits set
  • +
  • [PR #11470][brianmcarey] Build KubeVirt with Go version 1.21.8
  • +
  • [PR #11312][alromeros] Improve handling of export resources in virtctl vmexport
  • +
  • [PR #11367][alromeros] Bugfix: Allow vmexport download redirections by printing logs into stderr
  • +
  • [PR #11219][alromeros] Bugfix: Improve handling of IOThreads with incompatible buses
  • +
  • [PR #11149][0xFelix] virtctl: It is possible to import volumes from GCS when creating a VM now
  • +
  • [PR #11404][avlitman] KubeVirtComponentExceedsRequestedCPU and KubeVirtComponentExceedsRequestedMemory alerts are deprecated; they do not indicate a genuine issue.
  • +
  • [PR #11331][anjuls] add cloudraft to adopters.
  • +
  • [PR #11387][alaypatel07] add perf-scale benchmarks for release v1.2
  • +
  • [PR #11095][ShellyKa13] Expose volumesnapshot error in vmsnapshot object
  • +
  • [PR #11372][xpivarc] Bug-fix: Fix nil panic if VM update fails
  • +
  • [PR #11267][mhenriks] BugFix: Ensure DataVolumes created by virt-controller (DataVolumeTemplates) are recreated and owned by the VM in the case of DR and backup/restore.
  • +
  • [PR #10900][KarstenB] BugFix: Fixed incorrect APIVersion of APIResourceList
  • +
  • [PR #11306][fossedihelm] fix(ksm): set the kubevirt.io/ksm-enabled node label to true if KSM is managed by KubeVirt, instead of reflecting the actual KSM value.
  • +
  • [PR #11330][jean-edouard] More information in the migration state of VMI / migration objects
  • +
  • [PR #11264][machadovilaca] Fix perfscale buckets error
  • +
  • [PR #11183][dhiller] Extend OWNERS for sig-buildsystem
  • +
  • [PR #11058][fossedihelm] fix(vmclone): delete vmclone resource when the target vm is deleted
  • +
  • [PR #11265][xpivarc] Bug fix: VM controller doesn’t corrupt its cache anymore
  • +
  • [PR #11205][akalenyu] Fix migration breaking in case the VM has an rng device after hotplugging a block volume on cgroupsv2
  • +
  • [PR #11051][alromeros] Bugfix: Improve error reporting when fsfreeze fails
  • +
  • [PR #11156][nunnatsa] Move some verification from the VMI create validation webhook to the CRD
  • +
  • [PR #11146][RamLavi] node-labeller: Remove obsolete functionalities
  • +
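Several of the entries above are controlled by feature gates (for example, NodeRestriction from PR #12009). As a minimal sketch, and not part of the release notes themselves, a feature gate is enabled by listing it in the KubeVirt custom resource; the field layout below mirrors the KubeVirt CR excerpt that appears elsewhere on this site:

apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      featureGates:
        - NodeRestriction   # gate name taken from the PR #12009 entry above

Applying the change (for example with kubectl apply -f kubevirt-cr.yaml, where the file name is purely illustrative) lets virt-operator roll it out; consult the user guide for the authoritative procedure.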
+ +
+ + + + +
+ + +
+
+
+
+ + + + +
+ + + + + + + + + + + + + + + + + + + diff --git a/404.html b/404.html index c0a2af923b..42a293c285 100644 --- a/404.html +++ b/404.html @@ -52,7 +52,7 @@ - + diff --git a/application-aware-quota/index.html b/application-aware-quota/index.html index ea4ac03e5f..e03d51cb44 100644 --- a/application-aware-quota/index.html +++ b/application-aware-quota/index.html @@ -52,7 +52,7 @@ - + diff --git a/applications-aware-quota/index.html b/applications-aware-quota/index.html index e262401cce..de3be01431 100644 --- a/applications-aware-quota/index.html +++ b/applications-aware-quota/index.html @@ -52,7 +52,7 @@ - + diff --git a/blogs/community.html b/blogs/community.html index 57d55a730e..0bfdbda40a 100644 --- a/blogs/community.html +++ b/blogs/community.html @@ -52,7 +52,7 @@ - + @@ -656,6 +656,8 @@

Additional filters

+ + diff --git a/blogs/date.html b/blogs/date.html index 769df6f0d5..3045113b8b 100644 --- a/blogs/date.html +++ b/blogs/date.html @@ -52,7 +52,7 @@ - + @@ -437,6 +437,16 @@

Additional filters

+ + + + + + + + + + @@ -4255,7 +4265,7 @@

Additional filters

Post calendar

- +
JanFebMarAprMayJunJulAugSepOctNovDec
2024  2         
2023  3 1 3 2 2 
20223112121311  
20212123112222 1
2020352432321214
2019322222532744
2018523486313442
2017      2 4454
JanFebMarAprMayJunJulAugSepOctNovDec
2024  2   1     
2023  3 1 3 2 2 
20223112121311  
20212123112222 1
2020352432321214
2019322222532744
2018523486313442
2017      2 4454
@@ -4373,6 +4383,14 @@

2024

+ + + + + + + + @@ -4388,6 +4406,22 @@

2024

+ + + + + + +

July

+ + + diff --git a/blogs/index.html b/blogs/index.html index 3d5a41dd32..c553fed805 100644 --- a/blogs/index.html +++ b/blogs/index.html @@ -52,7 +52,7 @@ - + @@ -305,6 +305,17 @@

Additional filters

diff --git a/blogs/news.html b/blogs/news.html index b1c1962f46..72ac6fba61 100644 --- a/blogs/news.html +++ b/blogs/news.html @@ -52,7 +52,7 @@ - + @@ -296,6 +296,8 @@

Additional filters

]]>kube🤖KubeVirt v0.53.02022-05-09T00:00:00+00:002022-05-09T00:00:00+00:00https://kubevirt.io//2022/changelog-v0.53.0v0.53.0 - -

Released on: Mon May 9 14:02:20 2022 +0000

- -
    -
  • [PR #7533][akalenyu] Add several VM snapshot metrics
  • -
  • [PR #7574][rmohr] Pull in cdi dependencies with minimized transitive dependencies to ease API adoption
  • -
  • [PR #7318][iholder-redhat] Snapshot restores now support restoring to a target VM different than the source
  • -
  • [PR #7474][borod108] Added the following metrics for live migration: kubevirt_migrate_vmi_data_processed_bytes, kubevirt_migrate_vmi_data_remaining_bytes, kubevirt_migrate_vmi_dirty_memory_rate_bytes
  • -
  • [PR #7441][rmohr] Add virtctl scp to ease copying files from and to VMs and VMIs
  • -
  • [PR #7265][rthallisey] Support steady-state job types in the load-generator tool
  • -
  • [PR #7544][fossedihelm] Upgraded go version to 1.17.8
  • -
  • [PR #7582][acardace] Fix failed reported migrations when actually they were successful.
  • -
  • [PR #7546][0xFelix] Update virtio-container-disk to virtio-win version 0.1.217-1
  • -
  • [PR #7530][iholder-redhat] [External Kernel Boot]: Disallow kernel args without providing custom kernel
  • -
  • [PR #7493][davidvossel] Adds new EvictionStrategy “External” for blocking eviction which is handled by an external controller
  • -
  • [PR #7563][akalenyu] Switch VolumeSnapshot to v1
  • -
  • [PR #7406][acardace] Reject LiveMigrate as a workload-update strategy if the LiveMigration feature gate is not enabled.
  • -
  • [PR #7103][jean-edouard] Non-persistent vTPM now supported. Keep in mind that the state of the TPM is wiped after each shutdown. Do not enable Bitlocker!
  • -
  • [PR #7277][andreabolognani] This version of KubeVirt includes upgraded virtualization technology based on libvirt 8.0.0 and QEMU 6.2.0.
  • -
  • [PR #7130][Barakmor1] Add field to kubevirtCR to set Prometheus ServiceMonitor object’s namespace
  • -
  • [PR #7401][iholder-redhat] virt-api deployment is now scalable - replicas are determined by the number of nodes in the cluster
  • -
  • [PR #7500][awels] BugFix: Fixed RBAC for admin/edit user to allow virtualmachine/addvolume and removevolume. This allows for persistent disks
  • -
  • [PR #7328][apoorvajagtap] Don’t ignore –identity-file when setting –local-ssh=true on virtctl ssh
  • -
  • [PR #7469][xpivarc] Users can now enable the NonRoot feature gate instead of NonRootExperimental
  • -
  • [PR #7451][fossedihelm] Reduce virt-launcher memory usage by splitting monitoring and launcher processes
  • -
]]>
kube🤖
\ No newline at end of file +]]>kube🤖 \ No newline at end of file diff --git a/feed/uncategorized.xml b/feed/uncategorized.xml index 929c8e2d9d..5fcd2cd152 100644 --- a/feed/uncategorized.xml +++ b/feed/uncategorized.xml @@ -1,4 +1,4 @@ -Jekyll2024-07-10T14:40:21+00:00https://kubevirt.io//feed/uncategorized.xmlKubeVirt.io | UncategorizedVirtual Machine Management on KubernetesMonitoring KubeVirt VMs from the inside2020-12-10T00:00:00+00:002020-12-10T00:00:00+00:00https://kubevirt.io//2020/Monitoring-KubeVirt-VMs-from-the-insideMonitoring KubeVirt VMs from the inside +Jekyll2024-07-29T11:25:46+00:00https://kubevirt.io//feed/uncategorized.xmlKubeVirt.io | UncategorizedVirtual Machine Management on KubernetesMonitoring KubeVirt VMs from the inside2020-12-10T00:00:00+00:002020-12-10T00:00:00+00:00https://kubevirt.io//2020/Monitoring-KubeVirt-VMs-from-the-insideMonitoring KubeVirt VMs from the inside

This blog post will guide you on how to monitor KubeVirt Linux based VirtualMachines with Prometheus node-exporter. Since node_exporter will run inside the VM and expose metrics at an HTTP endpoint, you can use this same guide to expose custom applications that expose metrics in the Prometheus format.

diff --git a/feed/updates.xml b/feed/updates.xml index 0ffc23e003..1a2b52ace8 100644 --- a/feed/updates.xml +++ b/feed/updates.xml @@ -1,4 +1,4 @@ -Jekyll2024-07-10T14:40:21+00:00https://kubevirt.io//feed/updates.xmlKubeVirt.io | UpdatesVirtual Machine Management on KubernetesThis Week In Kube Virt 232018-04-27T00:00:00+00:002018-04-27T00:00:00+00:00https://kubevirt.io//2018/This-Week-in-Kube-Virt-23This is a close-to weekly update from the KubeVirt team.

+Jekyll2024-07-29T11:25:46+00:00https://kubevirt.io//feed/updates.xmlKubeVirt.io | UpdatesVirtual Machine Management on KubernetesThis Week In Kube Virt 232018-04-27T00:00:00+00:002018-04-27T00:00:00+00:00https://kubevirt.io//2018/This-Week-in-Kube-Virt-23This is a close-to weekly update from the KubeVirt team.

In general there is now more work happening outside of the core kubevirt repository.

diff --git a/gallery/index.html b/gallery/index.html index 6f7404f5fb..290130d733 100644 --- a/gallery/index.html +++ b/gallery/index.html @@ -52,7 +52,7 @@ - + diff --git a/hostpath-provisioner-operator/index.html b/hostpath-provisioner-operator/index.html index e0a4849d38..b914471a62 100644 --- a/hostpath-provisioner-operator/index.html +++ b/hostpath-provisioner-operator/index.html @@ -52,7 +52,7 @@ - + diff --git a/hostpath-provisioner/index.html b/hostpath-provisioner/index.html index 4ebd033a3b..115df4886e 100644 --- a/hostpath-provisioner/index.html +++ b/hostpath-provisioner/index.html @@ -52,7 +52,7 @@ - + diff --git a/index.html b/index.html index 28368ae4a9..85e6fe7f9b 100644 --- a/index.html +++ b/index.html @@ -52,7 +52,7 @@ - + @@ -327,6 +327,13 @@

What can I do with KubeVirt?

Recent posts

+ + - -

diff --git a/kubevirt/index.html b/kubevirt/index.html index 54da3b3129..d4e9c2dfc5 100644 --- a/kubevirt/index.html +++ b/kubevirt/index.html @@ -52,7 +52,7 @@ - + diff --git a/labs/index.html b/labs/index.html index fac6e32ce7..ffdbf2c05d 100644 --- a/labs/index.html +++ b/labs/index.html @@ -52,7 +52,7 @@ - + diff --git a/labs/kubernetes/lab1.html b/labs/kubernetes/lab1.html index cd7c569b74..55a2d4d63b 100644 --- a/labs/kubernetes/lab1.html +++ b/labs/kubernetes/lab1.html @@ -52,7 +52,7 @@ - + @@ -469,10 +469,14 @@

Create a Virtual Machine

Apply the manifest to Kubernetes.

kubectl apply -f https://kubevirt.io/labs/manifests/vm.yaml
-virtualmachine.kubevirt.io "testvm" created
-  virtualmachineinstancepreset.kubevirt.io "small" created
 
+

You should see the following results

+
+

virtualmachine.kubevirt.io "testvm" created +virtualmachineinstancepreset.kubevirt.io "small" created

+
+

Manage Virtual Machines (optional):

To get a list of existing Virtual Machines (note the running status):

@@ -506,12 +510,16 @@

Manage Virtual Machines (optional):'{"spec":{"running":false}}' -

Now that the Virtual Machine has been started, check the status. Note the running status.

+

Now that the Virtual Machine has been started, check the status (kubectl get vms). Note the Running status.
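For reference, a successful status check looks roughly like the following; the AGE value will differ in your cluster:

kubectl get vms
NAME     AGE   STATUS    READY
testvm   30s   Running   True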

+ +

You now want to see the instance of the VM you just started:

kubectl get vmis
 kubectl get vmis -o yaml testvm
 
+

Note the difference between the VM (virtual machine) resource and the VMI (virtual machine instance) resource. The VMI does not exist before the VM is started, and it is deleted when you stop the VM. (Also note that a restart of the VM is needed if you want to change some properties; modifying the VM alone is not sufficient, the VMI has to be replaced.)
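A quick way to see this for yourself is to stop and start the VM with virtctl (the commands used under Controlling the State of the VM below) and watch the VMI disappear and come back; the exact kubectl wording may vary:

virtctl stop testvm    # the VMI is torn down shortly afterwards
kubectl get vmis       # should report that no VMI resources are found
virtctl start testvm   # a new VMI is created and the guest boots again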

+

Accessing VMs (serial console)

Connect to the serial console of the Cirros VM. Hit return / enter a few times and login with the displayed username and password.

@@ -521,6 +529,8 @@

Accessing VMs (serial console)

Disconnect from the virtual machine console by typing: ctrl+].

+

If you want to see the complete boot sequence logs from the console, you need to connect to the serial console just after starting the VM (you can test this by stopping and starting the VM again; see below).

+

Controlling the State of the VM

To shut it down:

diff --git a/labs/kubernetes/lab2.html b/labs/kubernetes/lab2.html index 0864ef2fe4..8cea3cd473 100644 --- a/labs/kubernetes/lab2.html +++ b/labs/kubernetes/lab2.html @@ -52,7 +52,7 @@ - + diff --git a/labs/kubernetes/lab3.html b/labs/kubernetes/lab3.html index 5cb347d025..983eda66ce 100644 --- a/labs/kubernetes/lab3.html +++ b/labs/kubernetes/lab3.html @@ -52,7 +52,7 @@ - + diff --git a/labs/kubernetes/migration.html b/labs/kubernetes/migration.html index 6e4e5a6e81..5243f9c5fb 100644 --- a/labs/kubernetes/migration.html +++ b/labs/kubernetes/migration.html @@ -52,7 +52,7 @@ - + diff --git a/machine-remediation/index.html b/machine-remediation/index.html index 74389d7681..446b3dd65c 100644 --- a/machine-remediation/index.html +++ b/machine-remediation/index.html @@ -52,7 +52,7 @@ - + diff --git a/managed-tenant-quota/index.html b/managed-tenant-quota/index.html index 63dcecae67..c42401a5ec 100644 --- a/managed-tenant-quota/index.html +++ b/managed-tenant-quota/index.html @@ -52,7 +52,7 @@ - + diff --git a/node-maintenance-operator/index.html b/node-maintenance-operator/index.html index 1ce01291dc..eb15f57c87 100644 --- a/node-maintenance-operator/index.html +++ b/node-maintenance-operator/index.html @@ -52,7 +52,7 @@ - + diff --git a/privacy/index.html b/privacy/index.html index 997e234993..b21ccfe22f 100644 --- a/privacy/index.html +++ b/privacy/index.html @@ -52,7 +52,7 @@ - + diff --git a/qe-tools/index.html b/qe-tools/index.html index bb4615b9c8..0c66ef44bd 100644 --- a/qe-tools/index.html +++ b/qe-tools/index.html @@ -52,7 +52,7 @@ - + diff --git a/quickstart_cloud/index.html b/quickstart_cloud/index.html index 0d81a1b1e0..3aa0586647 100644 --- a/quickstart_cloud/index.html +++ b/quickstart_cloud/index.html @@ -52,7 +52,7 @@ - + diff --git a/quickstart_kind/index.html b/quickstart_kind/index.html index 263106b5dd..6557f18c76 100644 --- a/quickstart_kind/index.html +++ b/quickstart_kind/index.html @@ -52,7 +52,7 @@ - + diff --git a/quickstart_minikube/index.html b/quickstart_minikube/index.html index da8eef5d10..a709b6427d 100644 --- a/quickstart_minikube/index.html +++ b/quickstart_minikube/index.html @@ -52,7 +52,7 @@ - + diff --git a/search.html b/search.html index 9571a20dd2..b259bd61ac 100644 --- a/search.html +++ b/search.html @@ -52,7 +52,7 @@ - + @@ -316,1791 +316,1798 @@

var documents = [{ "id": 0, + "url": "/2024/changelog-v1.3.0.html", + "title": "KubeVirt v1.3.0", + "author" : "kube🤖", + "tags" : "release notes, changelog", + "body": "v1. 3. 0: Released on: Wed Jul 17 15:09:44 2024 +0000 [PR #12319][Sreeja1725] Add v1. 3. 0 perf and scale benchmarks data [PR #12330][kubevirt-bot] Fix wrong KubeVirtVMIExcessiveMigrations alert calculation in an upgrade scenario. [PR #12328][acardace] enable only for VMs with memory >= 1Gi [PR #12272][Sreeja1725] Add unit tests to check for API backward compatibility [PR #12296][orelmisan] Network binding plugins: Enable the ability to specify compute memory overhead [PR #12279][kubevirt-bot] Fix: persistent tpm can be used with vmis containing dots in their name [PR #12226][kubevirt-bot] Virt export route has an edge termination of redirect [PR #12240][kubevirt-bot] Updated common-instancetypes bundles to v1. 0. 1 [PR #12249][kubevirt-bot] Fix missing performance metrics for VMI resources [PR #12237][vladikr] Only a single vgpu display option with ramfb will be configured per VMI. [PR #12122][kubevirt-bot] Fix VMPools when LiveUpdate as vmRolloutStrategy is used. [PR #12201][kubevirt-bot] fix RerunOnFailure stuck in Provisioning [PR #12151][kubevirt-bot] Bugfix: Implement retry mechanism in export server and vmexport [PR #12171][kubevirt-bot] PreferredDiskDedicatedIoThread is now only applied to virtio disk devices [PR #12146][kubevirt-bot] Memory Hotplug fixes and stabilization [PR #12185][kubevirt-bot] VMs with a single socket and NetworkInterfaceMultiqueue enabled require a restart to hotplug additional CPU sockets. [PR #12132][kubevirt-bot] Introduce validatingAdmissionPolicy to restrict node patches on virt-handler [PR #12109][acardace] Support Memory Hotplug with Hugepages [PR #12009][xpivarc] By enabling NodeRestriction feature gate, Kubevirt now authorize virt-handler’s requests to modify VMs. [PR #11681][lyarwood] The CommonInstancetypesDeployment feature and gate are retrospectively moved to Beta from the 1. 2. 0 release. [PR #12025][fossedihelm] Add descheduler compatibility [PR #12097][fossedihelm] Bump k8s deps to 0. 30. 0 [PR #12089][jean-edouard] Less privileged virt-operator ClusterRole [PR #12064][akalenyu] BugFix: Graceful deletion skipped for any delete call to the VM (not VMI) resource [PR #10490][jschintag] Add support for building and running kubevirt on s390x. [PR #12079][EdDev] Network hotplug feature is declared as Beta. [PR #11455][lyarwood] LiveUpdates of VMs using instance types are now supported with the same caveats as when making changes to a vanilla VM. [PR #12000][machadovilaca] Create kubevirt_vmi_launcher_memory_overhead_bytes metric [PR #11915][ormergi] The ‘passt’ core network binding is discontinued and removed. [PR #12016][acardace] fix starting VM with Manual RunStrategy [PR #11533][alicefr] Implement volume migration and introduce the migration updateVolumesStrategy field [PR #11934][assafad] Add kubevirt_vmi_last_connection_timestamp_seconds metric [PR #11956][mhenriks] Introduce export. kibevirt. io/v1beta1 [PR #11996][ShellyKa13] BugFix: Fix restore panic in case of volumesnapshot missing [PR #11957][mhenriks] snapshot: Ignore unfreeze error if VMSnapshot deleting [PR #11906][machadovilaca] Create kubevirt_vmi_info metric [PR #11969][iholder101] Infra components control-plane nodes NoSchedule toleration [PR #11955][mhenriks] Introduce snapshot. kibevirt. 
io/v1beta1 [PR #11883][orelmisan] Restart of a VM is required when the CPU socket count is reduced [PR #11835][talcoh2x] add Intel Gaudi to adopters. [PR #11344][aerosouund] Refactor device plugins to use a base plugin and define a common interface [PR #11973][fossedihelm] Bug fix: Correctly reflect RestartRequired condition [PR #11963][acardace] Fix RerunOnFailure RunStrategy [PR #11962][lyarwood] VirtualMachines referencing an instance type are now allowed when the LiveUpdate feature is enabled and will trigger the RestartRequired condition if the reference within the VirtualMachine is changed. [PR #11942][ido106] Update virtctl to use v1beta1 endpoint for both regular and async image upload [PR #11648][kubevirt-bot] Updated common-instancetypes bundles to v1. 0. 0 [PR #11659][iholder101] Require scheduling infra components onto control-plane nodes [PR #10545][lyarwood] ControllerRevisions containing instance types and preferences are now upgraded to their latest available version when the VirtualMachine owning them is resync’d by virt-controller. [PR #11901][EdDev] The ‘macvtap’ core network binding is discontinued and removed. [PR #11922][alromeros] Bugfix: Fix VM manifest rendering in export controller [PR #11908][victortoso] sidecar-shim: allow stderr log from binary hooks [PR #11729][lyarwood] spreadOptions have been introduced to preferences in order to allow for finer grain control of the preferSpread preferredCPUTopology. This includes the ability to now spread vCPUs across guest visible sockets, cores and threads. [PR #11655][acardace] Allow to hotplug vcpus for VMs with CPU requests and/or limits set [PR #11701][EdDev] The SLIRP core binding is deprecated and removed. [PR #11773][jean-edouard] Persistent TPM/UEFI will use the default storage class if none is specified in the CR. [PR #11846][victortoso] SMBios sidecar can be built out-of-tree [PR #11788][ormergi] The network-info annotation is now used for mapping between SR-IOV network and the underlying device PCI address [PR #11700][alicefr] Add the updateVolumeStrategy field [PR #11256][andreabolognani] This version of KubeVirt includes upgraded virtualization technology based on libvirt 10. 0. 0 and QEMU 8. 2. 0. [PR #11482][brianmcarey] Build KubeVirt with go v1. 22. 2 [PR #11641][alicefr] Add kubevirt. io/testWorkloadUpdateMigrationAbortion annotation and a mechanism to abort workload updates [PR #11770][alicefr] Fix the live updates for volumes and disks [PR #11790][aburdenthehand] Re-adding Cloudflare to our ADOPTERS list [PR #11718][fossedihelm] Fix: SEV methods in client-go now satisfy the proxy server configuration, if provided [PR #11685][fossedihelm] Updated go version of the client-go to 1. 21 [PR #11618][AlonaKaplan] Extend network binding plugin to support device-info DownwardAPI. [PR #11283][assafad] Collect VMI OS info from the Guest agent as kubevirt_vmi_phase_count metric labels [PR #11676][machadovilaca] Rename rest client metrics to include kubevirt prefix [PR #11557][avlitman] New memory statistics added named kubevirt_memory_delta_from_requested_bytes [PR #11678][Vicente-Cheng] Improve the handling of ordinal pod interface name for upgrade [PR #11653][EdDev] Build the passtcustom CNI binary statically, for the passt network binding plugin. 
[PR #11294][machadovilaca] Fix kubevirt_vm_created_total being broken down by virt-api pod [PR #11307][machadovilaca] Add e2e tests for metrics [PR #11479][vladikr] virtual machines instance will no longer be stuck in an irrecoverable state after an interrupted postcopy migration. Instead, these will fail and could be restarted again. [PR #11416][dhiller] emission of k8s logs when using programmatic focus with FIt [PR #11272][dharmit] Make ‘image’ field in hook sidecar annotation optional. [PR #11500][iholder101] Support HyperV Passthrough: automatically use all available HyperV features [PR #11484][jcanocan] Reduce the downwardMetrics server maximum number of request per second to 1. [PR #11498][acardace] Allow to hotplug memory for VMs with memory limits set [PR #11470][brianmcarey] Build KubeVirt with Go version 1. 21. 8 [PR #11312][alromeros] Improve handling of export resources in virtctl vmexport [PR #11367][alromeros] Bugfix: Allow vmexport download redirections by printing logs into stderr [PR #11219][alromeros] Bugfix: Improve handling of IOThreads with incompatible buses [PR #11149][0xFelix] virtctl: It is possible to import volumes from GCS when creating a VM now [PR #11404][avlitman] KubeVirtComponentExceedsRequestedCPU and KubeVirtComponentExceedsRequestedMemory alerts are deprecated; they do not indicate a genuine issue. [PR #11331][anjuls] add cloudraft to adopters. [PR #11387][alaypatel07] add perf-scale benchmarks for release v1. 2 [PR #11095][ShellyKa13] Expose volumesnapshot error in vmsnapshot object [PR #11372][xpivarc] Bug-fix: Fix nil panic if VM update fails [PR #11267][mhenriks] BugFix: Ensure DataVolumes created by virt-controller (DataVolumeTemplates) are recreated and owned by the VM in the case of DR and backup/restore. [PR #10900][KarstenB] BugFix: Fixed incorrect APIVersion of APIResourceList [PR #11306][fossedihelm] fix(ksm): set the kubevirt. io/ksm-enabled node label to true if the ksm is managed by KubeVirt, instead of reflect the actual ksm value. [PR #11330][jean-edouard] More information in the migration state of VMI / migration objects [PR #11264][machadovilaca] Fix perfscale buckets error [PR #11183][dhiller] Extend OWNERS for sig-buildsystem [PR #11058][fossedihelm] fix(vmclone): delete vmclone resource when the target vm is deleted [PR #11265][xpivarc] Bug fix: VM controller doesn’t corrupt its cache anymore [PR #11205][akalenyu] Fix migration breaking in case the VM has an rng device after hotplugging a block volume on cgroupsv2 [PR #11051][alromeros] Bugfix: Improve error reporting when fsfreeze fails [PR #11156][nunnatsa] Move some verification from the VMI create validation webhook to the CRD [PR #11146][RamLavi] node-labeller: Remove obsolete functionalities" + }, { + "id": 1, "url": "/2024/KubeVirt-Summit-2024-CfP.html", "title": "KubeVirt Summit 2024 CfP is open!", "author" : "Andrew Burden", "tags" : "kubevirt, event, community", "body": "We are very pleased to announce the details for this year’s KubeVirt Summit!! What is KubeVirt Summit?: KubeVirt Summit is our annual online conference, now in its fourth year, in which the entire broader community meets to showcase technical architecture, new features, proposed changes, and in-depth tutorials. We have two tracks to cater for developer talks, and another for end users to share their deployment journey with KubeVirt and their use case(s) at scale. 
And there’s no reason why a talk can’t be both :) When is it?: The event will take place online over two half-days: Dates: June 25 and 26 June 24 and 25, 2024 Time: TBD (In the past we have aimed for 1200-1700 UTC but may modify these times slightly depending on our speaker timezones)How to submit a proposal?: Please submit through this Googleform CfP closes: May 20, 2024 Schedule will announced at the end of MayDo consider proposing a session, and help make our fourth Summit as valuable as possible. We welcome a range of session types, any of which can be simple and intended for beginners or face-meltingly technical. Check out last year’s talks for some ideas. Have questions?: Our KubeVirt Summit 2024 page will continue to evolve with details as we get closer. You can also reach out on our virtualization Slack channel (in the Kubernetes workspace). Keep up to date: Connect with the KubeVirt Community through our mailing list, slack channels, weekly meetings, and more, all list in our community repo. Good luck! " }, { - "id": 1, + "id": 2, "url": "/2024/changelog-v1.2.0.html", "title": "KubeVirt v1.2.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v1. 2. 0: Released on: Tue Mar 5 20:25:04 2024 +0000 [PR #11318][fossedihelm] fix(vmclone): delete vmclone resource when the target vm is deleted [PR #11393][kubevirt-bot] Bug-fix: Fix nil panic if VM update fails [PR #11354][kubevirt-bot] Fix perfscale buckets error [PR #11378][fossedihelm] fix(ksm): set the kubevirt. io/ksm-enabled node label to true if the ksm is managed by KubeVirt, instead of reflect the actual ksm value. [PR #11271][kubevirt-bot] Bug fix: VM controller doesn’t corrupt its cache anymore [PR #11242][kubevirt-bot] Fix migration breaking in case the VM has an rng device after hotplugging a block volume on cgroupsv2 [PR #11144][0xFelix] virtctl: Specifying size when creating a VM and using –volume-import to clone a PVC or a VolumeSnapshot is optional now [PR #11054][jean-edouard] New cluster-wide vmRolloutStrategy setting to define whether changes to VMs should either be always staged or live-updated when possible. [PR #11064][AlonaKaplan] Introduce a new API to mark a binding plugin as migratable. [PR #11122][brianmcarey] Update runc dependency to v1. 1. 12 [PR #10982][machadovilaca] Refactor monitoring metrics [PR #11069][ormergi] Bug fix: Packet drops during the initial phase of VM live migration https://issues. redhat. com/browse/CNV-28040 [PR #10961][jcanocan] Reduced VM rescheduling time on node failure [PR #11065][fossedihelm] fix(vmclone): Generate VM patches from vmsnapshotcontent, instead of current VM [PR #10888][fossedihelm] [Bugfix] Clone VM with WaitForFirstConsumer binding mode PVC now works. [PR #11068][brianmcarey] Update container base image to use current stable debian 12 base [PR #11047][jschintag] Fix potential crash when trying to list USB devices on host without any [PR #10970][alromeros] Expose fs disk information via GuestOsInfo [PR #11050][fossedihelm] restrict default cluster role to authenticated only users [PR #11025][0xFelix] Allow unprivileged users read-only access to VirtualMachineCluster{Instancetypes,Preferences} by default. [PR #10853][machadovilaca] Refactor monitoring collectors [PR #11001][fossedihelm] Allow kubevirt. 
io:default clusterRole to get,list kubevirts [PR #10905][tiraboschi] Aggregate DVs conditions on VMI (and so VM) [PR #10963][alromeros] Bugfix: Reject volume exports when no output is specified [PR #10962][machadovilaca] Update monitoring file structure [PR #10981][AlonaKaplan] Report IP of interfaces using network binding plugin. [PR #10922][kubevirt-bot] Updated common-instancetypes bundles to v0. 4. 0 [PR #10914][brianmcarey] KubeVirt is now built with go 1. 21. 5 [PR #10846][RamLavi] Change vm. status. PrintableStatus default value to “Stopped” [PR #10787][matthewei] # Create a manifest for a clone with template label filters: [PR #10918][orelmisan] VMClone: Emit an event in case restore creation fails [PR #10916][orelmisan] Fix the value of VMI Status. GuestOSInfo. Version [PR #10924][AlonaKaplan] Deprecate macvtap [PR #10898][matthewei] vmi status’s guestOsInfo adds Machine [PR #10866][AlonaKaplan] Raise an error in case passt feature gate or API are used. [PR #10879][brianmcarey] Built with golang 1. 20. 12 [PR #10872][RamLavi] IsolateEmulatorThread: Add cluster-wide parity completion setting [PR #10700][machadovilaca] Refactor monitoring alerts [PR #10839][RamLavi] Change second emulator thread assign strategy to best-effort. [PR #10863][dhiller] Remove year from generated code copyright [PR #10747][acardace] Fix KubeVirt for CRIO 1. 28 by using checksums to verify containerdisks when migrating VMIs [PR #10860][akalenyu] BugFix: Double cloning with filter fails [PR #10567][awels] Attachment pod creation is now rate limited [PR #10845][orelmisan] Reject VirtualMachineClone creation when target name is equal to source name [PR #10840][acardace] Requests/Limits can now be configured when using CPU/Memory hotplug [PR #10418][machadovilaca] Add total VMs created metric [PR #10800][AlonaKaplan] Support macvtap as a binding plugin [PR #10753][victortoso] Fixes device permission when using USB host passthrough [PR #10774][victortoso] Windows offline activation with ACPI SLIC table [PR #10783][RamLavi] Support multiple CPUs in Housekeeping cgroup [PR #10809][orelmisan] Source virt-launcher: Log migration info by default [PR #10046][victortoso] Add v1alpha3 for hooks [PR #10651][machadovilaca] Refactor monitoring recording-rules [PR #10732][AlonaKaplan] Extend kubvirt CR by adding domain attachment option to the network binding plugin API. [PR #10244][hshitomi] Added “adm” subcommand under “virtctl”, and “log-verbosity” subcommand under “adm”. The log-verbosity command is: [PR #10658][matthewei] 1. Support “Clone API” to filter VirtualMachine. spec. template. annotation and VirtualMachine. spec. template. label [PR #10593][RamLavi] Fixes SMT Alignment Error in virt-launcher pod by optimizing isolateEmulatorThread feature (BZ#2228103). [PR #10720][awels] Restored hotplug attachment pod request/limit to original value [PR #10657][germag] Exposing Filesystem Persistent Volumes (PVs) to the VM using unprivilege virtiofsd. 
[PR #10637][dharmit] Functional tests for sidecar hook with ConfigMap [PR #10598][alicefr] Add PVC option to the hook sidecars for supplying additional debugging tools [PR #10526][cfilleke] [PR #10699][qinqon] virt-launcher: fix qemu non root log path [PR #10689][akalenyu] BugFix: cgroupsv2 device allowlist is bound to virt-handler internal state/block disk device overwritten on hotplug [PR #10693][machadovilaca] Remove MigrateVmiDiskTransferRateMetric [PR #10615][orelmisan] Remove leftover NonRoot feature gate [PR #10529][alromeros] Allow LUN disks to be hotplugged [PR #10582][orelmisan] Remove leftover NonRootExperimental feature gate [PR #10596][mhenriks] Disable HTTP/2 to mitigate CVE-2023-44487 [PR #10570][machadovilaca] Fix LowKVMNodesCount not firing [PR #10571][tiraboschi] vmi memory footprint increase by 35M when guest serial console logging is turned on (default on). [PR #10425][ormergi] Introduce network binding plugin for Passt networking, interfacing with Kubevirt new network binding plugin API. [PR #10479][dharmit] Ability to run scripts through hook sidecar" }, { - "id": 2, + "id": 3, "url": "/2023/Announcing-KubeVirt-v1-1.html", "title": "Announcing KubeVirt v1.1", "author" : "KubeVirt Community", "tags" : "KubeVirt, v1.1.0, release, community, cncf", "body": "The KubeVirt Community is very pleased to announce the release of KubeVirt v1. 1. This comes 17 weeks after our celebrated v1. 0 release, and follows the predictable schedule we moved to three releases ago to follow the Kubernetes release cadence. You can read the full v1. 1 release notes here, but we’ve asked the KubeVirt SIGs to summarize their largest successes, as well as one of the community members from Arm to list their integration accomplishments for this release. SIG-compute: SIG-compute covers the core functionality of KubeVirt. This includes scheduling VMs, the API, and all KubeVirt operators. For the v1. 1 release, we have added quite a few features. This includes memory hotplug, as a follow up to CPU hotplug, which was part of the 1. 0 release. Basic KSM support was already part of KubeVirt, but we have now extended that with more tuning parameters and KubeVirt can also dynamically configure KSM based on system pressure. We’ve added persistent NVRAM support (requires that a VM use UEFI) so that settings are preserved across reboots. We’ve also added host-side USB passthrough support, so that USB devices on a cluster node can be made available to workloads. KubeVirt can now automatically apply limits to a VM running in a namespace with quotas. We’ve also added refinements to VM cloning, as well as the ability to create clones using the virtctl CLI tool. And you can now stream guest’s console logs. Finally, on the confidential computing front, we now have an API for SEV attestation. SIG-infra: SIG-infra takes care of KubeVirt’s own infrastructure, user workloads and other user-focused integrations through automation and the reduction of complexity wherever possible, providing a quality experience for end users. In this release, two major instance type-related features were added to KubeVirt. The first feature is the deployment of Common InstanceTypes by the virt-operator. This provides users with a useful set of InstanceTypes and Preferences right out of the box and allows them to easily create virtual machines tailored to the needs of their workloads. For now this feature remains behind a feature gate, but in future versions we aim to enable the deployment by default. 
Secondly, the inference of InstanceTypes and Preferences has been enabled by default when creating virtual machines with virtctl. This feature was already present in the previous release, but users still needed to explicitly enable it. Now it is enabled by default, being as transparent as possible so as to not let the creation of virtual machines fail if inference should not be possible. This significantly improves usability, as the command line for creating virtual machines is now even simpler. SIG-network: SIG-network is committed to enhancing and maintaining all aspects of Virtual Machine network connectivity and management in KubeVirt. For the v1. 1 release, we have re-designed the interface hot plug/unplug API, while adding hotplug support for SR-IOV interfaces. On top of that, we have added a network binding option allowing the community to extend the KubeVirt network configuration in the pod by injecting custom CNI plugins to configure the networking stack, and a sidecar to configure the libvirt domain. The existing slirp network configuration has been extracted from the code and re-designed as one such network binding, and can be used by the community as an example on how to extend KubeVirt bindings. SIG-scale: SIG-scale continues to track scale and performance across releases. The v1. 1 testing lanes ran on Kubernetes 1. 27 and we observed a slight performance improvement from Kubernetes. There’s no other notable performance or scale changes in KubeVirt v1. 1 as our focus has been on improving our tracking. vmiCreationToRunningSecondsP95: The gray dotted line in the graph is Feb 1, 2023, denoting release of v0. 59 The blue dotted line in the graph is March 1, 2023, denoting release of v0. 60 The green dotted line in the graph is July 6, 2023, denoting release of v1. 0. 0 The red dotted line in the graph is September 6, 2023, denoting change in k8s provider from v1. 25 to v1. 27 Full v1. 1 data source: https://github. com/kubevirt/kubevirt/blob/main/docs/perf-scale-benchmarks. md SIG-storage: SIG-storage is focused on providing persistent storage to KubeVirt VMs and managing that storage throughout the lifecycle of the VM. This begins with provisioning and populating PVCs with bootable images but also includes features such as disk hotplug, snapshots, backup and restore, disaster recovery, and virtual machine export. For this release we aimed to draw closer to Kubernetes principles when it comes to managing storage artifacts. Introducing CDI volume populators, which is CDI’s implementation of importing/uploading/cloning data to PVCs using the dataSourceRef field. This follows the Kubernetes way of populating PVCs and enables us to populate PVCs directly without the need for DataVolumes, an important but bespoke object that has served the KubeVirt use case for many years. Speaking of DataVolumes, they will no longer be garbage collected by default, something that violated a fundamental principle of Kubernetes (even though it was very useful for our use case). And, finally, we can now use snapshots to store operating system “golden images”, to serve as the base image for cloning. KubeVirt and Arm: We are excited to announce the successful integration of KubeVirt on Arm64 platforms. Here are some key accomplishments: Building and Compiling: We have released multi-architecture KubeVirt component images and binaries, while also allowing cross-compiling Arm64 architecture images and binaries on x86_64 platforms. 
Core Functionality: Our dedicated efforts have focused on enabling the core functionality of KubeVirt on Arm64 platforms. Testing Integration: Quality assurance is of paramount importance. We have integrated unit tests and end-to-end tests on Arm64 servers into the pull request (PR) pre-submit process. This guarantees that KubeVirt maintains its reliability and functionality on Arm64. Comprehensive Documentation: To provide valuable insights into KubeVirt’s capabilities on Arm64 platforms, we have compiled extensive documentation. Explore the status of feature gates and dive into device status documentation. Hybrid Cluster Compatibility Preview: Hybrid x86_64 and Arm64 clusters can work together now as a preview feature. Try it out and provide feedback. We are thrilled to declare that KubeVirt now offers tier-one support on Arm64 platforms. This milestone represents a culmination of collaborative efforts, unwavering dedication, and a commitment to innovation within the KubeVirt community. KubeVirt is no longer just an option; it has evolved to become a first-class citizen on Arm64 platforms. Conclusion: Thank you to everyone in the KubeVirt Community who contributed to this release, whether you pitched in on any of the features listed above, helped out with any of the other features or maintenance improvements listed in our release notes, or made any number of non-code contributions to our website, user guide or meetings. " }, { - "id": 3, + "id": 4, "url": "/2023/changelog-v1.1.0.html", "title": "KubeVirt v1.1.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v1. 1. 0: Released on: Mon Nov 6 16:28:56 2023 +0000 [PR #10669][kubevirt-bot] Introduce network binding plugin for Passt networking, interfacing with Kubevirt new network binding plugin API. [PR #10646][jean-edouard] The dedicated migration network should now always be properly detected by virt-handler [PR #10602][kubevirt-bot] Fix LowKVMNodesCount not firing [PR #10566][fossedihelm] Add 100Mi of memory overhead for vmi with dedicatedCPU or that wants GuaranteedQos [PR #10568][ormergi] Network binding plugin API support CNIs, new integration point on virt-launcher pod creation. [PR #10496][fossedihelm] Automatically set cpu limits when a resource quota with cpu limits is associated to the creation namespace and the AutoResourceLimits FeatureGate is enabled [PR #10309][lyarwood] cluster-wide common-instancetypes resources can now deployed by virt-operator using the CommonInstancetypesDeploymentGate feature gate. [PR #10543][0xFelix] Clear VM guest memory when ignoring inference failures [PR #9590][xuzhenglun] fix embed version info of virt-operator [PR #10532][alromeros] Add –volume-mode flag in image-upload [PR #10515][iholder101] Bug-fix: Stop copying VMI spec to VM during snapshots [PR #10320][victortoso] sidecar-shim implements PreCloudInitIso hook [PR #10463][0xFelix] VirtualMachines: Introduce InferFromVolumeFailurePolicy in Instancetype- and PreferenceMatchers [PR #10393][iholder101] [Bugfix] [Clone API] Double-cloning is now working as expected. [PR #10486][assafad] Deprecation notice for the metrics listed in the PR. Please update your systems to use the new metrics names. [PR #10438][lyarwood] A new instancetype. kubevirt. io:view ClusterRole has been introduced that can be bound to users via a ClusterRoleBinding to provide read only access to the cluster scoped VirtualMachineCluster{Instancetype,Preference} resources. 
[PR #10477][jean-edouard] Dynamic KSM enabling and configuration [PR #10110][tiraboschi] Stream guest serial console logs from a dedicated container [PR #10015][victortoso] Implements USB host passthrough in permittedHostDevices of KubeVirt CRD [PR #10184][acardace] Add memory hotplug feature [PR #10044][machadovilaca] Add operator-observability package [PR #10489][maiqueb] Remove the network-attachment-definition list and watch verbs from virt-controller’s RBAC [PR #10450][0xFelix] virtctl: Enable inference in create vm subcommand by default [PR #10447][fossedihelm] Add a Feature Gate to KV CR to automatically set memory limits when a resource quota with memory limits is associated to the creation namespace [PR #10253][rmohr] Stop trying to create unused directory /var/run/kubevirt-ephemeral-disk in virt-controller [PR #10231][kvaps] Propogate public-keys to cloud-init NoCloud meta-data [PR #10400][alromeros] Add new vmexport flags to download raw images, either directly (–raw) or by decompressing (–decompress) them [PR #9673][germag] DownwardMetrics: Expose DownwardMetrics through virtio-serial channel. [PR #10086][vladikr] allow live updating VM affinity and node selector [PR #10050][victortoso] Updating the virt stack: QEMU 8. 0. 0, libvirt to 9. 5. 0, edk2 20230524, [PR #10370][benjx1990] N/A [PR #10391][awels] BugFix: VMExport now works in a namespace with quotas defined. [PR #10386][liuzhen21] KubeSphere added to the adopter’s file! [PR #10380][alromeros] Bugfix: Allow image-upload to recover from PendingPopulation phase [PR #10366][ormergi] Kubevirt now delegates Slirp networking configuration to Slirp network binding plugin. In case you haven’t registered Slirp network binding plugin image yet (i. e. : specify in Kubevirt config) the following default image would be used: quay. io/kubevirt/network-slirp-binding:20230830_638c60fc8. On next release (v1. 2. 0) no default image will be set and registering an image would be mandatory. [PR #10167][0xFelix] virtctl: Apply namespace to created manifests [PR #10148][alromeros] Add port-forward functionalities to vmexport [PR #9821][sradco] Deprecation notice for the metrics listed in the PR. Please update your systems to use the new metrics names. [PR #10272][ormergi] Introduce network binding plugin for Slirp networking, interfacing with Kubevirt new network binding plugin API. [PR #10284][AlonaKaplan] Introduce an API for network binding plugins. The feature is behind “NetworkBindingPlugins” gate. [PR #10275][awels] Ensure new hotplug attachment pod is ready before deleting old attachment pod [PR #9231][victortoso] Introduces sidecar-shim container image [PR #10254][rmohr] Don’t mark the KubeVirt “Available” condition as false on up-to-date and ready but misscheduled virt-handler pods. [PR #10185][AlonaKaplan] Add support to migration based SRIOV hotplug. [PR #10182][iholder101] Stop considering nodes without kubevirt. io/schedulable label when finding lowest TSC frequency on the cluster [PR #10138][machadovilaca] Change kubevirt_vmi_*_usage_seconds from Gauge to Counter [PR #10173][rmohr] [PR #10101][acardace] Deprecate spec. config. machineType in KubeVirt CR. [PR #10020][akalenyu] Use auth API for DataVolumes, stop importing kubevirt. io/containerized-data-importer [PR #10107][PiotrProkop] Expose kubevirt_vmi_vcpu_delay_seconds_total reporting amount of seconds VM spent in waiting in the queue instead of running. [PR #10099][iholder101] Bugfix: target virt-launcher pod hangs when migration is cancelled. 
[PR #10056][jean-edouard] UEFI guests now use Bochs display instead of VGA emulation [PR #10070][machadovilaca] Remove affinities label from kubevirt_vmi_cpu_affinity and use sum as value [PR #10165][awels] BugFix: deleting hotplug attachment pod will no longer detach volumes that were not removed. [PR #9878][jean-edouard] The EFI NVRAM can now be configured to persist across reboots [PR #9932][lyarwood] ControllerRevisions containing instancetype. kubevirt. io CRDs are now decorated with labels detailing specific metadata of the underlying stashed object [PR #10039][simonyangcj] fix guaranteed qos of virt-launcher pod broken when use virtiofs [PR #10116][ormergi] Existing detached interfaces with ‘absent’ state will be cleared from VMI spec. [PR #9982][fabiand] Introduce a support lifecycle and Kubernetes target version. [PR #10118][akalenyu] Change exportserver default UID to succeed exporting CDI standalone PVCs (not attached to VM) [PR #10106][acardace] Add boot-menu wait time when starting the VM as paused. [PR #10058][alicefr] Add field errorPolicy for disks [PR #10004][AlonaKaplan] Hoyplug/unplug interfaces should be done by updating the VM spec template. virtctl and REST API endpoints were removed. [PR #10067][iholder101] Bug fix: virtctl create clone marshalling and replacement of kubectl with kubectl virt [PR #9989][alaypatel07] Add perf scale benchmarks for VMIs [PR #10001][machadovilaca] Fix kubevirt_vmi_phase_count not being created [PR #9896][ormergi] The VM controller now replicates spec interfaces MAC addresses to the corresponding interfaces in the VMI spec. [PR #9840][dhiller] Increase probability for flake checker script to find flakes [PR #9988][enp0s3] always deploy the outdated VMI workload alert [PR #7708][VirrageS] nodeSelector and schedulerName fields have been added to VirtualMachineInstancetype spec. [PR #7197][vasiliy-ul] Experimantal support of SEV attestation via the new API endpoints [PR #9958][AlonaKaplan] Disable network interface hotplug/unplug for VMIs. It will be supported for VMs only. [PR #9882][dhiller] Add some context for initial contributors about automated testing and draft pull requests. [PR #9935][xpivarc] Bug fix - correct logging in container disk [PR #9552][phoracek] gRPC client now works correctly with non-Go gRPC servers [PR #9918][ShellyKa13] Fix for hotplug with WFFC SCI storage class which uses CDI populators [PR #9737][AlonaKaplan] On hotunplug - remove bridge, tap and dummy interface from virt-launcher and the caches (file and volatile) from the node. [PR #9861][rmohr] Fix the possibility of data corruption when requestin a force-restart via “virtctl restart” [PR #9818][akrejcir] Added “virtctl credentials” commands to dynamically change SSH keys in a VM, and to set user’s password. [PR #9872][alromeros] Bugfix: Allow lun disks to be mapped to DataVolume sources [PR #9073][machadovilaca] Fix incorrect KubevirtVmHighMemoryUsage description" }, { - "id": 4, + "id": 5, "url": "/2023/KubeVirt-on-autoscaling-nodes.html", "title": "Running KubeVirt with Cluster Autoscaler", "author" : "Mark Maglana, Jonathan Kinred, Paul Myjavec", "tags" : "Kubevirt, kubernetes, virtual machine, VM, Cluster Autoscaler, AWS, EKS", "body": "Introduction: For this article, we’ll learn about the process of setting upKubeVirt with ClusterAutoscaleron EKS. In addition, we’ll be using bare metal nodes to host KubeVirt VMs. 
Required Base Knowledge: This article will talk about how to make various software systems work togetherbut introducing each one in detail is outside of its scope. Thus, you must already: Know how to administer a Kubernetes cluster; Be familiar with AWS, specifically IAM and EKS; and Have some experience with KubeVirt. Companion Code: All the code used in this article may also be found atgithub. com/relaxdiego/kubevirt-cas-baremetal. Set Up the Cluster: Shared environment variables: First let’s set some environment variables: # The name of the EKS cluster we're going to createexport RD_CLUSTER_NAME=my-cluster# The region where we will create the clusterexport RD_REGION=us-west-2# Kubernetes version to useexport RD_K8S_VERSION=1. 27# The name of the keypair that we're going to inject into the nodes. You# must create this ahead of time in the correct region. export RD_EC2_KEYPAIR_NAME=eks-my-clusterPrepare the cluster. yaml file: Using eksctl, prepare an EKS cluster config: eksctl create cluster \ --dry-run \ --name=${RD_CLUSTER_NAME} \ --nodegroup-name ng-infra \ --node-type m5. xlarge \ --nodes 2 \ --nodes-min 2 \ --nodes-max 2 \ --node-labels workload=infra \ --region=${RD_REGION} \ --ssh-access \ --ssh-public-key ${RD_EC2_KEYPAIR_NAME} \ --version ${RD_K8S_VERSION} \ --vpc-nat-mode HighlyAvailable \ --with-oidc \> cluster. yaml--dry-run means the command will not actually create the cluster but willinstead output a config to stdout which we then write to cluster. yaml. Open the file and look at what it has produced. For more info on the schema used by cluster. yaml, see the Config fileschema page from eksctl. io This cluster will start out with a node group that we will use to host our“infra” services. This is why we are using the cheaper m5. xlarge rather thana baremetal instance type. However, we also need to ensure that none of our VMswill ever be scheduled in these nodes. Thus we need to taint them. In thegenerated cluster. yaml file, append the following taint to the only nodegroup in the managedNodeGroups list: managedNodeGroups:- amiFamily: AmazonLinux2 . . . taints: - key: CriticalAddonsOnly effect: NoScheduleCreate the cluster: We can now create the cluster: eksctl create cluster --config-file cluster. yamlExample output: 2023-08-20 07:59:14 [ℹ] eksctl version . . . 2023-08-20 07:59:14 [ℹ] using region us-west-2 . . . 2023-08-20 07:59:14 [ℹ] subnets for us-west-2a . . . 2023-08-20 07:59:14 [ℹ] subnets for us-west-2b . . . 2023-08-20 07:59:14 [ℹ] subnets for us-west-2c . . . . . . 2023-08-20 08:14:06 [ℹ] kubectl command should work with . . . 2023-08-20 08:14:06 [✔] EKS cluster my-cluster in us-west-2 is readyOnce the command is done, you should be able to query the the kube API. Forexample: kubectl get nodesExample output: NAME STATUS ROLES AGE VERSIONip-XXX. compute. internal Ready <none> 32m v1. 27. 4-eks-2d98532ip-YYY. compute. internal Ready <none> 32m v1. 27. 4-eks-2d98532Create the Node Groups: As per this section of the Cluster Autoscalerdocs: If you’re using Persistent Volumes, your deployment needs to run in the sameAZ as where the EBS volume is, otherwise the pod scheduling could fail if itis scheduled in a different AZ and cannot find the EBS volume. To overcomethis, either use a single AZ ASG for this use case, or an ASG-per-AZ whileenabling --balance-similar-node-groups. Based on the above, we will create a node group for each of the availabilityzones (AZs) that was declared in cluster. 
yaml so that the Cluster Autoscaler willalways bring up a node in the AZ where a VM’s EBS-backed PV is located. To do that, we will first prepare a template that we can then feed toenvsubst. Save the following in node-group. yaml. template: ---# See: Config File Schema <https://eksctl. io/usage/schema/>apiVersion: eksctl. io/v1alpha5kind: ClusterConfigmetadata: name: ${RD_CLUSTER_NAME} region: ${RD_REGION}managedNodeGroups: - name: ng-${EKS_AZ}-c5-metal amiFamily: AmazonLinux2 instanceType: c5. metal availabilityZones: - ${EKS_AZ} desiredCapacity: 1 maxSize: 3 minSize: 0 labels: alpha. eksctl. io/cluster-name: my-cluster alpha. eksctl. io/nodegroup-name: ng-${EKS_AZ}-c5-metal workload: vm privateNetworking: false ssh: allow: true publicKeyPath: ${RD_EC2_KEYPAIR_NAME} volumeSize: 500 volumeIOPS: 10000 volumeThroughput: 750 volumeType: gp3 propagateASGTags: true tags: alpha. eksctl. io/nodegroup-name: ng-${EKS_AZ}-c5-metal alpha. eksctl. io/nodegroup-type: managed k8s. io/cluster-autoscaler/my-cluster: owned k8s. io/cluster-autoscaler/enabled: true # The following tags help CAS determine that this node group is able # to satisfy the label and resource requirements of the KubeVirt VMs. # See: https://github. com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README. md#auto-discovery-setup k8s. io/cluster-autoscaler/node-template/resources/devices. kubevirt. io/kvm: 1 k8s. io/cluster-autoscaler/node-template/resources/devices. kubevirt. io/tun: 1 k8s. io/cluster-autoscaler/node-template/resources/devices. kubevirt. io/vhost-net: 1 k8s. io/cluster-autoscaler/node-template/resources/ephemeral-storage: 50M k8s. io/cluster-autoscaler/node-template/label/kubevirt. io/schedulable: true The last few tags bears additional emphasis. They are required because when avirtual machine is created, it will have the following requirements: requests: devices. kubevirt. io/kvm: 1 devices. kubevirt. io/tun: 1 devices. kubevirt. io/vhost-net: 1 ephemeral-storage: 50MnodeSelectors: kubevirt. io/schedulable=trueHowever, at least when scaling from zero for the first time, CAS will have noknowledge of this information unless the correct AWS tags are added to the nodegroup. This is why we have the following added to the managed node group’stags: k8s. io/cluster-autoscaler/node-template/resources/devices. kubevirt. io/kvm: 1 k8s. io/cluster-autoscaler/node-template/resources/devices. kubevirt. io/tun: 1 k8s. io/cluster-autoscaler/node-template/resources/devices. kubevirt. io/vhost-net: 1 k8s. io/cluster-autoscaler/node-template/resources/ephemeral-storage: 50Mk8s. io/cluster-autoscaler/node-template/label/kubevirt. io/schedulable: true For more information on these tags, see Auto-DiscoverySetup. Create the VM Node Groups: We can now create the node group: yq . availabilityZones[] cluster. yaml -r | \ xargs -I{} bash -c export EKS_AZ={}; envsubst < node-group. yaml. template | \ eksctl create nodegroup --config-file - Deploy KubeVirt: The following was adapted from KubeVirt quickstart with cloudproviders. Deploy the KubeVirt operator: kubectl create -f https://github. com/kubevirt/kubevirt/releases/download/v1. 0. 0/kubevirt-operator. yamlSo that the operator will know how to deploy KubeVirt, let’s add the KubeVirtresource: cat <<EOF | kubectl apply -f -apiVersion: kubevirt. 
io/v1kind: KubeVirtmetadata: name: kubevirt namespace: kubevirtspec: certificateRotateStrategy: {} configuration: developerConfiguration: featureGates: [] customizeComponents: {} imagePullPolicy: IfNotPresent workloadUpdateStrategy: {} infra: nodePlacement: nodeSelector: workload: infra tolerations: - key: CriticalAddonsOnly operator: ExistsEOF Notice how we are specifically configuring KubeVirt itself to tolerate theCriticalAddonsOnly taint. This is so that the KubeVirt services themselvescan be scheduled in the infra nodes instead of the bare metal nodes which wewant to scale down to zero when there are no VMs. Wait until KubeVirt is in a Deployed state: kubectl get -n kubevirt -o=jsonpath= {. status. phase} \ kubevirt. kubevirt. io/kubevirtExample output: DeployedDouble check that all KubeVirt components are healthy: kubectl get pods -n kubevirtExample output: NAME READY STATUS RESTARTS AGEpod/virt-api-674467958c-5chhj 1/1 Running 0 98dpod/virt-api-674467958c-wzcmk 1/1 Running 0 5dpod/virt-controller-6768977b-49wwb 1/1 Running 0 98dpod/virt-controller-6768977b-6pfcm 1/1 Running 0 5dpod/virt-handler-4hztq 1/1 Running 0 5dpod/virt-handler-x98x5 1/1 Running 0 98dpod/virt-operator-85f65df79b-lg8xb 1/1 Running 0 5dpod/virt-operator-85f65df79b-rp8p5 1/1 Running 0 98dDeploy a VM to test: The following is copied fromkubevirt. io. First create a secret from your public key: kubectl create secret generic my-pub-key --from-file=key1=~/. ssh/id_rsa. pubNext, create the VM: # Create a VM referencing the Secret using propagation method configDrivecat <<EOF | kubectl create -f -apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: name: testvmspec: running: true template: spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk rng: {} resources: requests: memory: 1024M terminationGracePeriodSeconds: 0 accessCredentials: - sshPublicKey: source: secret: secretName: my-pub-key propagationMethod: configDrive: {} volumes: - containerDisk: image: quay. io/containerdisks/fedora:latest name: containerdisk - cloudInitConfigDrive: userData: |- #cloud-config password: fedora chpasswd: { expire: False } name: cloudinitdiskEOFCheck that the test VM is running: kubectl get vmExample output: NAME AGE STATUS READYtestvm 30s Running TrueDelete the VM: kubectl delete testvmSet Up Cluster Autoscaler: Prepare the permissions for Cluster Autoscaler: So that CAS can set the desired capacity of each node group dynamically, wemust grant it limited access to certain AWS resources. The first step to thisis to define the IAM policy. This section is based off of the “Create an IAM policy and role” section ofthe AWSAutoscalingdocumentation. Create the cluster-specific policy document: Prepare the policy document by rendering the following file. cat > policy. json <<EOF{ Version : 2012-10-17 , Statement : [ { Sid : VisualEditor0 , Effect : Allow , Action : [ autoscaling:SetDesiredCapacity , autoscaling:TerminateInstanceInAutoScalingGroup ], Resource : * }, { Sid : VisualEditor1 , Effect : Allow , Action : [ autoscaling:DescribeAutoScalingInstances , autoscaling:DescribeAutoScalingGroups , ec2:DescribeLaunchTemplateVersions , autoscaling:DescribeTags , autoscaling:DescribeLaunchConfigurations , ec2:DescribeInstanceTypes ], Resource : * } ]}EOFThe above should be enough for CAS to do its job. Next, create the policy: aws iam create-policy \ --policy-name eks-${RD_REGION}-${RD_CLUSTER_NAME}-ClusterAutoscalerPolicy \ --policy-document file://policy. 
json IMPORTANT: Take note of the returned policy ARN. You will need that below. Create the IAM role and k8s service account pair: The Cluster Autoscaler needs a service account in the k8s cluster that’sassociated with an IAM role that consumes the policy document we created in theprevious section. This is normally a two-step process but can be created in asingle command using eksctl: For more information on what eksctl is doing under the covers, see How ItWorks from theeksctl documentation for IAM Roles for Service Accounts. export RD_POLICY_ARN= <Get this value from the last command's output> eksctl create iamserviceaccount \ --cluster=${RD_CLUSTER_NAME} \ --region=${RD_REGION} \ --namespace=kube-system \ --name=cluster-autoscaler \ --attach-policy-arn=${RD_POLICY_ARN} \ --override-existing-serviceaccounts \ --approveDouble check that the cluster-autoscaler service account has been correctlyannotated with the IAM role that was created by eksctl in the same step: kubectl get sa cluster-autoscaler -n kube-system -ojson | \ jq -r '. metadata. annotations | . eks. amazonaws. com/role-arn 'Example output: arn:aws:iam::365499461711:role/eksctl-my-cluster-addon-iamserviceaccount-. . . Check from the AWS Console if the above role contains the policy that we createdearlier. Deploy Cluster Autoscaler: First, find the most recent Cluster Autoscaler version that has the same MAJORand MINOR version as the kubernetes cluster you’re deploying to. Get the kube cluster’s version: kubectl version -ojson | jq -r . serverVersion. gitVersionExample output: v1. 27. 4-eks-2d98532Choose the appropriate version for CAS. You can get the latest ClusterAutoscaler versions from its Github ReleasesPage. Example: export CLUSTER_AUTOSCALER_VERSION=1. 27. 3Next, deploy the cluster autoscaler using the deployment template that Iprepared in the companionrepo envsubst < <(curl https://raw. githubusercontent. com/relaxdiego/kubevirt-cas-baremetal/main/cas-deployment. yaml. template) | \ kubectl apply -f -Check the cluster autoscaler status: kubectl get deploy,pod -l app=cluster-autoscaler -n kube-systemExample output: NAME READY UP-TO-DATE AVAILABLE AGEdeployment. apps/cluster-autoscaler 1/1 1 1 4m1sNAME READY STATUS RESTARTS AGEpod/cluster-autoscaler-6c58bd6d89-v8wbn 1/1 Running 0 60sTail the cluster-autoscaler pod’s logs to see what’s happening: kubectl -n kube-system logs -f deployment. apps/cluster-autoscalerBelow are example log entries from Cluster Autoscaler terminating an unneedednode: node ip-XXXX. YYYY. compute. internal may be removed. . . ip-XXXX. YYYY. compute. internal was unneeded for 1m3. 743475455sOnce the timeout has been reached (default: 10 minutes), CAS will scale downthe group: Scale-down: removing empty node ip-XXXX. YYYY. compute. internalEvent(v1. ObjectReference{Kind: ConfigMap , Namespace: kube-system , . . . Successfully added ToBeDeletedTaint on node ip-XXXX. YYYY. compute. internalTerminating EC2 instance: i-ZZZZDeleteInstances was called: . . . For more information on how Cluster Autoscaler scales down a node group, seeHow does scale-downwork?from the project’s FAQ. When you try to get the list of nodes, you should see the bare metal nodestainted such that they are no longer schedulable: NAME STATUS ROLES AGE VERSIONip-XXXX Ready,SchedulingDisabled <none> 70m v1. 27. 3-eks-a5565adip-XXXX Ready,SchedulingDisabled <none> 70m v1. 27. 3-eks-a5565adip-XXXX Ready,SchedulingDisabled <none> 70m v1. 27. 3-eks-a5565adip-XXXX Ready <none> 112m v1. 27. 3-eks-a5565adip-XXXX Ready <none> 112m v1. 27. 
3-eks-a5565adIn a few more minutes, the nodes will be deleted. To try the scale up, just deploy a VM. Expanding Node Group eks-ng-eacf8ebb . . . Best option to resize: eks-ng-eacf8ebbEstimated 1 nodes needed in eks-ng-eacf8ebbFinal scale-up plan: [{eks-ng-eacf8ebb 0->1 (max: 3)}]Scale-up: setting group eks-ng-eacf8ebb size to 1Setting asg eks-ng-eacf8ebb size to 1Done: At this point you should have a working, auto-scaling EKS cluster that can hostVMs on bare metal nodes. If you have any questions, ask themhere. References: Amazon EKS Autoscaling Cluster Autoscaler in Plain English AWS EKS Best Practices Guide IAM roles for service accounts eksctl create iamserviceaccount" }, { - "id": 5, + "id": 6, "url": "/2023/Managing-KubeVirt-VMs-with-Ansible.html", "title": "Managing KubeVirt VMs with Ansible", "author" : "Felix Matouschek, Andrew Block", "tags" : "Kubevirt, kubernetes, virtual machine, VM, Ansible, ansible collection, kubevirt.core, iac", "body": "Introduction: Infrastructure teams managing virtual machines (VMs) and the end users of these systems make use of a variety of tools as part of their day-to-day world. One such tool that is shared amongst these two groups is Ansible, an agentless automation tool for the enterprise. To simplify both the adoption and usage of KubeVirt as well as to integrate seamlessly into existing workflows, the KubeVirt community is excited to introduce the release of the first version of the KubeVirt collection for Ansible, called kubevirt. core, which includes a number of tools that you do not want to miss. This article will review some of the features and their use associated with this initial release. Note: There is also a video version of this blog, which can be found on the KubeVirt YouTube channel. Motivation: Before diving into the featureset of the collection itself, let’s review why the collection was created in the first place. While adopting KubeVirt and Kubernetes has the potential to disrupt the workflows of teams that typically manage VM infrastructure, including the end users themselves, many of the same paradigms remain: Kubernetes and the resources associated with KubeVirt can be represented in a declarative fashion. In many cases, communicating with KubeVirt VMs makes use of the same protocols and schemes as non-Kubernetes-based environments. The management of VMs still represents a challenge. For these reasons and more, it is only natural that a tool, like Ansible, is introduced within the KubeVirt community. Not only can it help manage KubeVirt and Kubernetes resources, like VirtualMachines, but also to enable the extensive Ansible ecosystem for managing guest configurations. Included capabilities: As part of the initial release, an Ansible Inventory plugin and management module is included. They are available in the same distribution location containing Ansible automation content, Ansible Galaxy. The resources encompassing the collection itself are detailed in the following sections. Inventory: To work with KubeVirt VMs in Ansible, they need to be available in Ansible’s hosts inventory. Since KubeVirt is already using the Kubernetes API to manage VMs, it would be nice to leverage this API to discover hosts with Ansible too. This is where the dynamic inventory of the kubevirt. core collection comes into play. The dynamic inventory capability allows you to query the Kubernetes API for available VMs in a given namespace or namespaces, along with additional filtering options, such as labels. 
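If you want to see every filter and connection option the inventory plugin supports before writing an inventory file, its documentation can be inspected locally once the collection is installed (a quick check, assuming the collection is already present on the Ansible control node):

# Show the documented parameters of the dynamic inventory plugin,
# e.g. namespaces, label_selector and network_name
ansible-doc -t inventory kubevirt.core.kubevirt
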
To allow Ansible to find the right connection parameters for a VM, the network name of a secondary interface can also be specified. Under the hood, the dynamic inventory uses either your default kubectl credentials or credentials specified in the inventory parameters to establish the connection with a cluster. Managing VMs: While working with existing VMs is already quite useful, it would be even better to control the entire lifecycle of KubeVirt VirtualMachines from Ansible. This is made possible by the kubevirt_vm module provided by the kubevirt. core collection. The kubevirt_vm module is a thin wrapper around the kubernetes. core. k8s module and it allows you to control the essential fields of a KubeVirt VirtualMachine’s specification. In true Ansible fashion, this module tries to be as idempotent as possible and only makes changes to objects within Kubernetes if necessary. With its wait feature, it is possible to delay further tasks until a VM was successfully created or updated and the VM is in the ready state or was successfully deleted. Getting started: Now that we’ve provided an introduction to the featureset, it is time to illustrate how you can get up to speed using the collection including a few examples to showcase the capabilities provided by the collection. Prerequisites: Please note that as a prerequisite, Ansible needs to be installed and configured along with a working Kubernetes cluster with KubeVirt and the KubeVirt Cluster Network Addons Operator. The cluster also needs to have a secondary network configured, which can be attached to VMs so that the machine can be reached from the Ansible control node. Items covered: Installing the collection from Ansible Galaxy Creating a Namespace and a Secret with an SSH public key Creating a VM Listing available VMs Executing a command on the VM Removing the previously created resourcesWalkthrough: First, install the kubevirt. core collection from Ansible Galaxy: ansible-galaxy collection install kubevirt. coreThis will also install the kubernetes. core collection as a dependency. Second, create a new Namespace and a Secret containing a public key for SSH authentication: ssh-keygen -f my-keykubectl create namespace kubevirt-ansiblekubectl create secret generic my-pub-key --from-file=key1=my-key. pub -n kubevirt-ansibleWith the collection now installed and the public key pair created, create a file called play-create. yml containing an Ansible playbook to deploy a new VM called testvm: - hosts: localhost connection: local tasks: - name: Create VM kubevirt. core. kubevirt_vm: state: present name: testvm namespace: kubevirt-ansible labels: app: test instancetype: name: u1. medium preference: name: fedora spec: domain: devices: interfaces: - name: default masquerade: {} - name: secondary-network bridge: {} networks: - name: default pod: {} - name: secondary-network multus: networkName: secondary-network accessCredentials: - sshPublicKey: source: secret: secretName: my-pub-key propagationMethod: configDrive: {} volumes: - containerDisk: image: quay. io/containerdisks/fedora:latest name: containerdisk - cloudInitConfigDrive: userData: |- #cloud-config # The default username is: fedora name: cloudinit wait: yesRun the playbook by executing the following command: ansible-playbook play-create. 
ymlOnce the playbook completes successfully, the defined VM will be running in the kubevirt-ansible namespace, which can be confirmed by querying for VirtualMachines in this namespace: kubectl get VirtualMachine -n kubevirt-ansibleWith the VM deployed, it is eligible for use in Ansible automation activities. Let’s illustrate how it can be queried and added to an Ansible inventory dynamically using the plugin provided by the kubevirt. core collection. Create a file called inventory. kubevirt. yml containing the following content: plugin: kubevirt. core. kubevirtconnections:- namespaces: - kubevirt-ansible network_name: secondary-network label_selector: app=testUse the ansible-inventory command to confirm the VM becomes added to the Ansible inventory: ansible-inventory -i inventory. kubevirt. yml --listNext, make use of the host by querying for all of the facts exposed by the VM using the setup module: ansible -i inventory. kubevirt. yml -u fedora --key-file my-key all -m setupComplete the lifecycle of the VM by destroying the previously created VirtualMachine and Namespace. Create a file called play-delete. yml containing the following playbook: - hosts: localhost tasks: - name: Delete VM kubevirt. core. kubevirt_vm: name: testvm namespace: kubevirt-ansible state: absent wait: yes - name: Delete namespace kubernetes. core. k8s: name: kubevirt-ansible api_version: v1 kind: Namespace state: absentRun the playbook to remove the VM: ansible-playbook play-delete. ymlMore information including the full list of parameters and options can be found within the collection documentation: https://kubevirt. io/kubevirt. core What next?: This has been a brief introduction to the concepts and usage of the newly released kubevirt. core collection. Nevertheless, we hope that it helped to showcase the integration now available between KubeVirt and Ansible, including how easy it is to manage KubeVirt assets. A next potential iteration could be to expose a VM via a Kubernetes Service using one of the methods described in this article instead of a secondary interface as was covered in this walkthrough. Not only does it leverage existing models outside the KubeVirt ecosystem, but it helps to enable a uniform method for exposing content. Interested in learning more, providing feedback or contributing? Head over to the kubevirt. core GitHub repository to continue your journey and get involved. https://github. com/kubevirt/kubevirt. core " }, { - "id": 6, + "id": 7, "url": "/2023/OVN-kubernetes-secondary-networks-policies.html", "title": "NetworkPolicies for KubeVirt VMs secondary networks using OVN-Kubernetes", "author" : "Miguel Duarte Barroso", "tags" : "Kubevirt, kubernetes, virtual machine, VM, SDN, OVN, NetworkPolicy", "body": "Introduction: Kubernetes NetworkPolicies are constructs to control traffic flow at the IPaddress or port level (OSI layers 3 or 4). They allow the user to specify how a pod (or group of pods) is allowed tocommunicate with other entities on the network. In simpler words: the user canspecify ingress from or egress to other workloads, using L3 / L4 semantics. Keeping in mind NetworkPolicy is a Kubernetes construct - which only caresabout a single network interface - they are only usable for the cluster’sdefault network interface. This leaves a considerable gap for Virtual Machineusers, since they are heavily invested in secondary networks. 
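For reference, a plain NetworkPolicy restricting ingress on the default cluster network looks like the minimal sketch below; it deliberately reuses the same pod selector and CIDR range as the multi-network policy we will build later in this post, so the two APIs can be compared side by side:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-ipblock
spec:
  podSelector:
    matchLabels:
      name: access-control
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.200.0.0/30
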
The k8snetworkplumbingwg has addressed this limitation by providing aMultiNetworkPolicy CRD - it features the exact same API as NetworkPolicybut can target network-attachment-definitions. OVN-Kubernetes implements this API, and configures access control accordinglyfor secondary networks in the cluster. In this post we will see how we can govern access control for VMs using themulti-network policy API. On our simple example, we’ll only allow into our VMsfor traffic ingressing from a particular CIDR range. Current limitations of MultiNetworkPolicies for VMs: Kubernetes NetworkPolicy has three types of policy peers: namespace selectors: allows ingress-from, egress-to based on the peer’s namespace labels pod selectors: allows ingress-from, egress-to based on the peer’s labels ip block: allows ingress-from, egress-to based on the peer’s IP addressWhile MultiNetworkPolicy allows these three types, when used with VMs werecommend using only the IPBlock policy peer - both namespace and podselectors prevent the live-migration of Virtual Machines (these policy peersrequire OVN-K managed IPAM, and currently the live-migration feature is onlyavailable when IPAM is not enabled on the interfaces). Demo: To run this demo, we will prepare a Kubernetes cluster with the followingcomponents installed: OVN-Kubernetes multus-cni KubeVirt Multi-Network policy APIThe following section will show you how to create aKinD cluster, with upstream latest OVN-Kubernetes,upstream latest multus-cni, and the multi-network policy CRDs deployed. Setup demo environment: Refer to the OVN-Kubernetes repoKIND documentationfor more details; the gist of it is you should clone the OVN-Kubernetesrepository, and run their kind helper script: git clone git@github. com:ovn-org/ovn-kubernetes. gitcd ovn-kubernetespushd contrib ; . /kind. sh --multi-network-enable ; popdThis will get you a running kind cluster (one control plane, and two workernodes), configured to use OVN-Kubernetes as the default cluster network,configuring the multi-homing OVN-Kubernetes feature gate, and deployingmultus-cni in the cluster. Install KubeVirt in the cluster: Follow Kubevirt’suser guideto install the latest released version (currently, v1. 0. 0). export RELEASE=$(curl https://storage. googleapis. com/kubevirt-prow/release/kubevirt/kubevirt/stable. txt)kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator. yaml kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr. yaml kubectl -n kubevirt wait kv kubevirt --timeout=360s --for condition=AvailableNow we have a Kubernetes cluster with all the pieces to start the Demo. Limiting ingress to a KubeVirt VM: In this example, we will configure a MultiNetworkPolicy allowing ingress intoour VMs only from a particular CIDR range - let’s say 10. 200. 0. 0/30. Provision the following NAD (to allow our VMs to live-migrate, we do not definea subnet): ---apiVersion: k8s. cni. cncf. io/v1kind: NetworkAttachmentDefinitionmetadata: name: flatl2netspec: config: |2 { cniVersion : 0. 4. 0 , name : flatl2net , type : ovn-k8s-cni-overlay , topology : layer2 , netAttachDefName : default/flatl2net }Let’s now provision our six VMs, with the following name to IP address(statically configured via cloud-init) association: vm1: 10. 200. 0. 1 vm2: 10. 200. 0. 2 vm3: 10. 200. 0. 3 vm4: 10. 200. 0. 4 vm5: 10. 200. 0. 5 vm6: 10. 200. 0. 6---apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: labels: kubevirt. 
io/vm: vm1 name: vm1spec: running: true template: metadata: labels: name: access-control kubevirt. io/domain: vm1 kubevirt. io/vm: vm1 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - bridge: {} name: flatl2-overlay rng: {} resources: requests: memory: 1024Mi networks: - multus: networkName: flatl2net name: flatl2-overlay termination/GracePeriodSeconds: 30 volumes: - containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:v1. 0. 0 name: containerdisk - cloudInitNoCloud: networkData: | ethernets: eth0: addresses: - 10. 200. 0. 1/24 version: 2 userData: |- #cloud-config user: fedora password: password chpasswd: { expire: False } name: cloudinitdisk---apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: labels: kubevirt. io/vm: vm2 name: vm2spec: running: true template: metadata: labels: name: access-control kubevirt. io/domain: vm2 kubevirt. io/vm: vm2 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - bridge: {} name: flatl2-overlay rng: {} resources: requests: memory: 1024Mi networks: - multus: networkName: flatl2net name: flatl2-overlay termination/GracePeriodSeconds: 30 volumes: - containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:v1. 0. 0 name: containerdisk - cloudInitNoCloud: networkData: | ethernets: eth0: addresses: - 10. 200. 0. 2/24 version: 2 userData: |- #cloud-config user: fedora password: password chpasswd: { expire: False } name: cloudinitdisk---apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: labels: kubevirt. io/vm: vm3 name: vm3spec: running: true template: metadata: labels: name: access-control kubevirt. io/domain: vm3 kubevirt. io/vm: vm3 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - bridge: {} name: flatl2-overlay rng: {} resources: requests: memory: 1024Mi networks: - multus: networkName: flatl2net name: flatl2-overlay termination/GracePeriodSeconds: 30 volumes: - containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:v1. 0. 0 name: containerdisk - cloudInitNoCloud: networkData: | ethernets: eth0: addresses: - 10. 200. 0. 3/24 version: 2 userData: |- #cloud-config user: fedora password: password chpasswd: { expire: False } name: cloudinitdisk---apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: labels: kubevirt. io/vm: vm4 name: vm4spec: running: true template: metadata: labels: name: access-control kubevirt. io/domain: vm4 kubevirt. io/vm: vm4 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - bridge: {} name: flatl2-overlay rng: {} resources: requests: memory: 1024Mi networks: - multus: networkName: flatl2net name: flatl2-overlay termination/GracePeriodSeconds: 30 volumes: - containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:v1. 0. 0 name: containerdisk - cloudInitNoCloud: networkData: | ethernets: eth0: addresses: - 10. 200. 0. 4/24 version: 2 userData: |- #cloud-config user: fedora password: password chpasswd: { expire: False } name: cloudinitdisk---apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: labels: kubevirt. io/vm: vm5 name: vm5spec: running: true template: metadata: labels: name: access-control kubevirt. io/domain: vm5 kubevirt. 
io/vm: vm5 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - bridge: {} name: flatl2-overlay rng: {} resources: requests: memory: 1024Mi networks: - multus: networkName: flatl2net name: flatl2-overlay termination/GracePeriodSeconds: 30 volumes: - containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:v1. 0. 0 name: containerdisk - cloudInitNoCloud: networkData: | ethernets: eth0: addresses: - 10. 200. 0. 5/24 version: 2 userData: |- #cloud-config user: fedora password: password chpasswd: { expire: False } name: cloudinitdisk---apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: labels: kubevirt. io/vm: vm6 name: vm6spec: running: true template: metadata: labels: name: access-control kubevirt. io/domain: vm6 kubevirt. io/vm: vm6 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - bridge: {} name: flatl2-overlay rng: {} resources: requests: memory: 1024Mi networks: - multus: networkName: flatl2net name: flatl2-overlay termination/GracePeriodSeconds: 30 volumes: - containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:v1. 0. 0 name: containerdisk - cloudInitNoCloud: networkData: | ethernets: eth0: addresses: - 10. 200. 0. 6/24 version: 2 userData: |- #cloud-config user: fedora password: password chpasswd: { expire: False } name: cloudinitdiskNOTE: it is important to highlight all the Virtual Machines (and thenetwork-attachment-definition) are defined in the default namespace. After this step, we should have the following deployment: Let’s check the VMs vm1 and vm4 can ping their peers in the same subnet. For that we willconnect to the VMs over their serial console: First, let’s check vm1: ➜ virtctl console vm1Successfully connected to vm1 console. The escape sequence is ^][fedora@vm1 ~]$ ping 10. 200. 0. 2 -c 4PING 10. 200. 0. 2 (10. 200. 0. 2) 56(84) bytes of data. 64 bytes from 10. 200. 0. 2: icmp_seq=1 ttl=64 time=5. 16 ms64 bytes from 10. 200. 0. 2: icmp_seq=2 ttl=64 time=1. 41 ms64 bytes from 10. 200. 0. 2: icmp_seq=3 ttl=64 time=34. 2 ms64 bytes from 10. 200. 0. 2: icmp_seq=4 ttl=64 time=2. 56 ms--- 10. 200. 0. 2 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3005msrtt min/avg/max/mdev = 1. 406/10. 841/34. 239/13. 577 ms[fedora@vm1 ~]$ ping 10. 200. 0. 6 -c 4PING 10. 200. 0. 6 (10. 200. 0. 6) 56(84) bytes of data. 64 bytes from 10. 200. 0. 6: icmp_seq=1 ttl=64 time=3. 77 ms64 bytes from 10. 200. 0. 6: icmp_seq=2 ttl=64 time=1. 46 ms64 bytes from 10. 200. 0. 6: icmp_seq=3 ttl=64 time=5. 47 ms64 bytes from 10. 200. 0. 6: icmp_seq=4 ttl=64 time=1. 74 ms--- 10. 200. 0. 6 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3007msrtt min/avg/max/mdev = 1. 459/3. 109/5. 469/1. 627 ms[fedora@vm1 ~]$ And from vm4: ➜ ~ virtctl console vm4Successfully connected to vm4 console. The escape sequence is ^][fedora@vm4 ~]$ ping 10. 200. 0. 1 -c 4PING 10. 200. 0. 1 (10. 200. 0. 1) 56(84) bytes of data. 64 bytes from 10. 200. 0. 1: icmp_seq=1 ttl=64 time=3. 20 ms64 bytes from 10. 200. 0. 1: icmp_seq=2 ttl=64 time=1. 62 ms64 bytes from 10. 200. 0. 1: icmp_seq=3 ttl=64 time=1. 44 ms64 bytes from 10. 200. 0. 1: icmp_seq=4 ttl=64 time=0. 951 ms--- 10. 200. 0. 1 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3006msrtt min/avg/max/mdev = 0. 951/1. 803/3. 201/0. 843 ms[fedora@vm4 ~]$ ping 10. 200. 0. 6 -c 4PING 10. 200. 0. 6 (10. 200. 0. 
6) 56(84) bytes of data. 64 bytes from 10. 200. 0. 6: icmp_seq=1 ttl=64 time=1. 85 ms64 bytes from 10. 200. 0. 6: icmp_seq=2 ttl=64 time=1. 02 ms64 bytes from 10. 200. 0. 6: icmp_seq=3 ttl=64 time=1. 27 ms64 bytes from 10. 200. 0. 6: icmp_seq=4 ttl=64 time=0. 970 ms--- 10. 200. 0. 6 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3005msrtt min/avg/max/mdev = 0. 970/1. 275/1. 850/0. 350 msWe will now provision a MultiNetworkPolicy applying to all the VMs definedabove. To do this mapping correcly, the policy has to: Be in the same namespace as the VM. Set k8s. v1. cni. cncf. io/policy-for annotation matching the secondary network used by the VM. Set matchLabels selector matching the labels set on VM’sspec. template. metadata. This policy will allow ingress into these access-control labeled pods only if the traffic originates from within the 10. 200. 0. 0/30 CIDR range(IPs 10. 200. 0. 1-3). ---apiVersion: k8s. cni. cncf. io/v1beta1kind: MultiNetworkPolicymetadata: name: ingress-ipblock annotations: k8s. v1. cni. cncf. io/policy-for: default/flatl2netspec: podSelector: matchLabels: name: access-control policyTypes: - Ingress ingress: - from: - ipBlock: cidr: 10. 200. 0. 0/30Taking into account our example, onlyvm1, vm2, and vm3 will be able to contact any of its peers, as picturedby the following diagram: Let’s try again the ping after provisioning the MultiNetworkPolicy object: From vm1 (inside the allowed ip block range): [fedora@vm1 ~]$ ping 10. 200. 0. 2 -c 4PING 10. 200. 0. 2 (10. 200. 0. 2) 56(84) bytes of data. 64 bytes from 10. 200. 0. 2: icmp_seq=1 ttl=64 time=6. 48 ms64 bytes from 10. 200. 0. 2: icmp_seq=2 ttl=64 time=4. 40 ms64 bytes from 10. 200. 0. 2: icmp_seq=3 ttl=64 time=1. 28 ms64 bytes from 10. 200. 0. 2: icmp_seq=4 ttl=64 time=1. 51 ms--- 10. 200. 0. 2 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3006msrtt min/avg/max/mdev = 1. 283/3. 418/6. 483/2. 154 ms[fedora@vm1 ~]$ ping 10. 200. 0. 6 -c 4PING 10. 200. 0. 6 (10. 200. 0. 6) 56(84) bytes of data. 64 bytes from 10. 200. 0. 6: icmp_seq=1 ttl=64 time=3. 81 ms64 bytes from 10. 200. 0. 6: icmp_seq=2 ttl=64 time=2. 67 ms64 bytes from 10. 200. 0. 6: icmp_seq=3 ttl=64 time=1. 68 ms64 bytes from 10. 200. 0. 6: icmp_seq=4 ttl=64 time=1. 63 ms--- 10. 200. 0. 6 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3006msrtt min/avg/max/mdev = 1. 630/2. 446/3. 808/0. 888 msFrom vm4 (outside the allowed ip block range): [fedora@vm4 ~]$ ping 10. 200. 0. 1 -c 4PING 10. 200. 0. 1 (10. 200. 0. 1) 56(84) bytes of data. --- 10. 200. 0. 1 ping statistics ---4 packets transmitted, 0 received, 100% packet loss, time 3083ms[fedora@vm4 ~]$ ping 10. 200. 0. 6 -c 4PING 10. 200. 0. 6 (10. 200. 0. 6) 56(84) bytes of data. --- 10. 200. 0. 6 ping statistics ---4 packets transmitted, 0 received, 100% packet loss, time 3089msConclusions: In this post we’ve shown how MultiNetworkPolicies can be used to provideaccess control to VMs with secondary network interfaces. We have provided a comprehensive example on how a policy can be used to limitingress to our VMs only from desired sources, based on the client’s IP address. " }, { - "id": 7, + "id": 8, "url": "/2023/KubeVirt-v1-has-landed.html", "title": "KubeVirt v1.0 has landed!", "author" : "KubeVirt Maintainers", "tags" : "KubeVirt, v1.0, release, community, cncf, milestone, party time", "body": "The KubeVirt community is proud to announce the release of KubeVirt v1. 0! 
This release demonstrates the accomplishments of the community and user adoption over the years and represents an important milestone for everyone involved. A brief history: The KubeVirt project started in Red Hat at the end of 2016, with the question: Can virtual machines (VMs) run in containers and be deployed by Kubernetes?It proved to be not only possible, but quickly emerged as a promising solution to the future of virtual machines in the container age. KubeVirt joined the CNCF as a Sandbox project in September 2019, and an Incubating project in April 2022. From a handful of people hacking away on a proof of concept, KubeVirt has grown into 45 active repositories, with the primary kubevirt/kubevirt repo having 17k commits and 1k forks. What does v1. 0 mean to the community?: The v1. 0 release signifies the incredible growth that the community has gone through in the past six years from an idea to a production-ready Virtual Machine Management solution. The next stage with v1. 0 is the additional focus on maintaining APIs while continuing to grow the project. This has led KubeVirt to adopt community practices from Kubernetes in key parts of the project. Leading up to this release we had a shift in release cadence: from monthly to 3 times a year, following the Kubernetes release model. This allows our developer community additional time to ensure stability and compatibility, our users more time to plan and comfortably upgrade, and also aligns our releases with Kubernetes to simplify maintenance and supportability. The theme ‘aligning with Kubernetes’ is also felt through the other parts of the community, by following their governance processes; introducing SIGs to split test and review responsibilities, as well as a SIG release repo to handle everything related to a release; and regular SIG meetings that now include SIG scale and performance and SIG storage alongside our weekly Community meetings. What’s included in this release?: This release demonstrates the accomplishments of the community and user adoption over the past many months. The full list of feature and bug fixes can be found in our release notes, but we’ve also asked representatives from some of our SIGs for a summary. SIG-scale: KubeVirt’s SIG-scale drives the performance and scalability initiatives in the community. Our focus for the v1. 0 release was on sharing the performance results over the past 6 months. The benchmarks since December 2022 which cover the past two release - v0. 59 (Mar 2023) and v1. 0 (July 2023) are as follows: Performance benchmarks for v1. 0 release Scalability benchmarks for v1. 0 release Publishing these measurements provides the community and end-users visibility into the performance and scalability over multiple releases. In addition, these results help identify the effects of code changes so that community members can diagnose performance problems and regressions. End-users can use the same tools and techniques SIG-scale uses to analyze performance and scalability in their own deployments. Since performance and scalability are mostly relative to the deployment stack, the same strategies should be used to further contextualize the community’s measurements. SIG-storage: SIG-storage is focused on providing persistent storage to KubeVirt VMs and managing that storage throughout the lifecycle of the VM. This begins with provisioning and populating PVCs with bootable images but also includes features such as disk hotplug, snapshots, backup and restore, disaster recovery, and virtual machine export. 
For v1. 0, SIG-storage delivered the following features: providing a flexible VM export API, enabling persistent SCSI reservation, provisioning VMs from a retained snapshot, and setting out-of-the-box defaults for additional storage provisioners. Another major effort was to implement Volume Populator alternatives to the KubeVirt DataVolume API in order to better leverage platform capabilities. The SIG meets every 2 weeks and welcomes anyone to join us for interesting storage discussions. SIG-compute: SIG-compute is focused on the core virtualization functionality of KubeVirt, but also encompasses features that don’t fit well into another SIG. Some examples of SIG-compute’s scope include the lifecycle of VMs, migration, as well as maintenance of the core API. For v1. 0, SIG-compute developed features for memory over-commit. This includes initial support for KSM and FreePageReporting. We added support for persistent vTPM, which makes it much easier to use BitLocker on Windows installs. Additionally, there’s now an initial implementation for CPU Hotplug (currently hidden behind a feature gate). SIG-network: SIG-network is committed to enhancing and maintaining all aspects of Virtual Machine network connectivity and management in KubeVirt. For the v1. 0 release, we have introduced hot plug and hot unplug (as alpha), which enables users to add and remove VM secondary network interfaces that use bridge binding on a running VM. Hot plug API stabilization and support for SR-IOV interfaces is under development for the next minor release. SIG-infra: The effort to simplify the VirtualMachine UX is still ongoing and with the v1. 0 release we were able to introduce the v1beta1 version of the instancetype. kubevirt. io API. In the future KubeVirt v1. 1. 0 release we are aiming to finally graduate the instancetype. kubevirt. io API to v1. With the new version it is now possible to control the memory overcommit of virtual machines as a percentage within instance types. Resource requirements were added to preferences, which allows users to ensure that requirements of a workload are met. Also several new preference attributes have been added to cover more use cases. Moreover, virtctl was extended to make use of the new instance type and preference features. What next for KubeVirt?: From a development perspective, we will continue to introduce and improve features that make life easier for virtualization users in a manner that is as native to Kubernetes as possible. From a community perspective, we are improving our new contributor experience so that we can continue to grow and help new members learn and be a part of the cloud native ecosystem. In addition, with this milestone we can now shift our attention on becoming a CNCF Graduated project. " }, { - "id": 8, + "id": 9, "url": "/2023/changelog-v1.0.0.html", "title": "KubeVirt v1.0.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v1. 0. 0: Released on: Thu Jul 6 17:39:42 2023 +0000 [PR #10037][kubevirt-bot] The VM controller now replicates spec interfaces MAC addresses to the corresponding interfaces in the VMI spec. [PR #9992][machadovilaca] Fix incorrect KubevirtVmHighMemoryUsage description [PR #9965][kubevirt-bot] Disable network interface hotplug/unplug for VMIs. It will be supported for VMs only. 
[PR #9931][kubevirt-bot] Fix for hotplug with WFFC SCI storage class which uses CDI populators [PR #9946][kubevirt-bot] On hotunplug - remove bridge, tap and dummy interface from virt-launcher and the caches (file and volatile) from the node. [PR #9757][enp0s3] Introduce CPU hotplug [PR #9811][machadovilaca] Remove unnecessary marketplace tool [PR #7742][Fuzzy-Math] Experimental support for AMD SEV-ES [PR #9799][vladikr] Introduce an ability to set memory overcommit percentage in instanceType spec [PR #8780][lyarwood] Add basic support for expressing minimum resource requirements for CPU and Memory within VirtualMachine{Preferences,ClusterPreferences} [PR #9812][mhenriks] Handle DataVolume PendingPopulation phase [PR #9858][fossedihelm] build virtctl for all os/architectures when KUBEVIRT_RELEASE env var is true [PR #9765][lyarwood] Allow to define preferred cpu features in VirtualMachine{Preferences,ClusterPreferences} [PR #9844][EdDev] Drop the kubevirt. io/interface resource name API for reserving domain resources for network interfaces. [PR #9841][ormergi] Support hot-unplug of network interfaces on VirtualMachine objects [PR #9851][lxs137] virt-api: portfowrad can handle IPv6 VM [PR #9845][lxs137] DHCPv6 server handle request without iana option [PR #9769][lyarwood] Allow to define the preferred subdomain in VirtualMachine{Preferences,ClusterPreferences} [PR #9246][jean-edouard] Fixed migration issue for VMIs that have RWX disks backed by filesystem storage classes. [PR #9808][jcanocan] DownwardMetrics: Rename AllocatedToVirtualServers metric to AllocatedToVirtualServers and add ResourceProcessorLimit metric [PR #9832][tiraboschi] build virtctl also for arm64 for linux, darwin and windows [PR #9744][lyarwood] Allow to define the preferred termination grace period in VirtualMachine{Preferences,ClusterPreferences} [PR #9828][rthallisey] Publish multiarch manifests with each release [PR #9761][lyarwood] Allow to define the preferred masquerade configuration in VirtualMachine{Preferences,ClusterPreferences} [PR #9768][jean-edouard] New CR option to enable auto CPU limits for virt-launcher on some namespaces [PR #9779][EdDev] Support hot-unplug of network interfaces on VMI objects [PR #9688][xpivarc] Users are warned about the usage of deprecated fields [PR #9798][rmohr] Add LiveMigrateIfPossible eviction strategy to allow admins to express a live migration preference instead of a live migration requirement for evictions. [PR #9764][fossedihelm] Cluster admins can enable ksm in a set of nodes via kv configuration [PR #9753][lyarwood] The following flags have been added to the virtctl image-upload command allowing users to associate a default instance type and/or preference with an image during upload. --default-instancetype, --default-instancetype-kind, --default-preference and --default-preference-kind. See the user-guide documentation for more details on using the uploaded image with the inferFromVolume feature during VirtualMachine creation. [PR #9575][lyarwood] A new v1beta1 version of the instancetype. kubevirt. io API and CRDs has been introduced. [PR #9738][Barakmor1] Add condition to migrations that indicates that migration was rejected by ResourceQuota [PR #9730][assafad] Add kubevirt_vmi_memory_cached_bytes metric [PR #9674][fossedihelm] Introduce cluster configuration VirtualMachineOptions to specify virtual machine behavior at cluster level [PR #9724][0xFelix] An alert which triggers when KubeVirt APIs marked as deprecated are used was added. 
[PR #9623][rmohr] Bump to apimachinery 1. 26 [PR #9747][lyarwood] action required - With the v1. 0. 0 release of KubeVirt the storage version of all core kubevirt. io APIs will be moving to version v1. To accommodate the eventual removal of the v1alpha3 version with KubeVirt >=v1. 2. 0 it is recommended that operators deploy the kube-storage-version-migrator tool within their environment. This will ensure any existing v1alpha3 stored objects are migrated to v1 well in advance of the removal of the underlying v1alpha3 version. [PR #9268][ormergi] virt-launcher pods network interfaces name scheme is changed to hashed names (SHA256), based on the VMI spec network names. [PR #9746][EdDev] Introduce the kubevirt. io/interface resource name to reserve domain resources for network interfaces. [PR #9652][machadovilaca] Add kubevirt_number_of_vms recording rule [PR #9691][fossedihelm] ksm enabled nodes will have kubevirt. io/ksm-enabled label [PR #9628][lyarwood] * The kubevirt. io/v1 apiVersion is now the default storage version for newly created objects [PR #8293][daghaian] Add multi-arch support to KubeVirt. This allows a single KubeVirt installation to run VMs on different node architectures in the same cluster. [PR #9686][maiqueb] Fix ownership of macvtap’s char devices on non-root pods [PR #9631][0xFelix] virtctl: Allow to infer instancetype or preference from specified volume when creating VMs [PR #9665][rmohr] Expose the final resolved qemu machine type on the VMI on status. machine [PR #9609][germag] Add support for running virtiofsd in an unprivileged container when sharing configuration volumes. [PR #9651][0xFelix] virtctl: Allow to specify memory of created VMs. Default to 512Mi if no instancetype was specified or is inferred. [PR #9640][jean-edouard] TSC-enabled VMs can now migrate to a node with a non-identical (but close-enough) frequency [PR #9629][0xFelix] virtctl: Allow to specify the boot order of volumes when creating VMs [PR #9632][toelke] * Add Genesis Cloud to the adopters list [PR #9572][fossedihelm] Enable freePageReporting for new non high performance vmi [PR #9435][rmohr] Ensure existence of all PVCs attached to the VMI before creating the VM target pod. [PR #8156][jean-edouard] TPM VM device can now be set to persistent [PR #8575][iholder101] QEMU-level migration parallelism (a. k. a. multifd) + Upgrade QEMU to 7. 2. 0-11. el9 [PR #9603][qinqon] Adapt node-labeller. sh script to work at non kvm envs with emulation. [PR #9591][awels] BugFix: allow multiple NFS disks to be used/hotplugged [PR #9596][iholder101] Add “virtctl create clone” command [PR #9422][awels] Ability to specify cpu/mem request limit for supporting containers (hotplug/container disk/virtiofs/side car) [PR #9536][akalenyu] BugFix: virtualmachineclusterinstancetypes/preferences show up for get all -n [PR #9177][alicefr] Adding SCSI persistent reservation [PR #9470][machadovilaca] Enable libvirt GetDomainStats on paused VMs [PR #9407][assafad] Use env RUNBOOK_URL_TEMPLATE for the runbooks URL template [PR #9399][maiqueb] Compute the interfaces to be hotplugged based on the current domain info, rather than on the interface status. [PR #9491][orelmisan] API, AddInterfaceOptions: Rename NetworkName to NetworkAttachmentDefinitionName and InterfaceName to Name [PR #9327][jcanocan] DownwardMetrics: Swap KubeVirt build info with qemu version in VirtProductInfo field [PR #9478][xpivarc] Bug fix: Fixes case when migration is not retried if the migration Pod gets denied. 
[PR #9421][lyarwood] Requests to update the target Name of a {Instancetype,Preference}Matcher without also updating the RevisionName are now rejected. [PR #9367][machadovilaca] Add VM instancetype and preference label to vmi_phase_count metric [PR #9392][awels] virtctl supports retrieving vm manifest for VM export [PR #9442][EdDev] Remove the VMI Status interface podConfigDone field in favor of a new source option in infoSource. [PR #9376][ShellyKa13] Fix vmrestore with WFFC snapshotable storage class [PR #6852][maiqueb] Dev preview: Enables network interface hotplug for VMs / VMIs [PR #9300][xpivarc] Bug fix: API and virtctl invoked migration is not rejected when the VM is paused [PR #9189][xpivarc] Bug fix: DNS integration continues to work after migration [PR #9322][iholder101] Add guest-to-request memory headroom ratio. [PR #8906][machadovilaca] Alert if there are no available nodes to run VMs [PR #9320][darfux] node-labeller: Check arch on the handler side [PR #9127][fossedihelm] Use ECDSA instead of RSA for key generation [PR #9330][qinqon] Skip label kubevirt. io/migrationTargetNodeName from virtctl expose service selector [PR #9163][vladikr] fixes the requests/limits CPU number mismatch for VMs with isolatedEmulatorThread [PR #9250][vladikr] externally created mediated devices will not be deleted by virt-handler [PR #9193][qinqon] Add annotation for live migration and bridged pod interface [PR #9260][ShellyKa13] Fix bug of possible re-trigger of memory dump [PR #9241][akalenyu] BugFix: Guestfs image url not constructed correctly [PR #9220][orelmisan] client-go: Added context to VirtualMachine’s methods. [PR #9228][rumans] Bump virtiofs container limit [PR #9169][lyarwood] The dedicatedCPUPlacement attribute is once again supported within the VirtualMachineInstancetype and VirtualMachineClusterInstancetype CRDs after a recent bugfix improved VirtualMachine validations, ensuring defaults are applied before any attempt to validate. [PR #9159][andreabolognani] This version of KubeVirt includes upgraded virtualization technology based on libvirt 9. 0. 0 and QEMU 7. 2. 0. [PR #8989][rthallisey] Integrate multi-architecture container manifests into the bazel make recipes [PR #9188][awels] Default RBAC for clone and export [PR #9145][awels] Show VirtualMachine name in the VMExport status [PR #8937][fossedihelm] Added foreground finalizer to virtual machine [PR #9133][ShellyKa13] Fix addvolume not rejecting adding existing volume source, fix removevolume allowing to remove non hotpluggable volume [PR #9047][machadovilaca] Deprecate VM stuck in status alerts" }, { - "id": 9, + "id": 10, "url": "/2023/OVN-kubernetes-secondary-networks-localnet.html", "title": "Secondary networks connected to the physical underlay for KubeVirt VMs using OVN-Kubernetes", "author" : "Miguel Duarte Barroso", "tags" : "Kubevirt, kubernetes, virtual machine, VM, SDN, OVN", "body": "Introduction: OVN (Open Virtual Network) is a series of daemons for the Open vSwitch thattranslate virtual network configurations into OpenFlow. It provides virtualnetworking capabilities for any type of workload on a virtualized platform(virtual machines and containers) using the same API. OVN provides a higher-layer of abstraction than Open vSwitch, working withlogical routers and logical switches, rather than flows. More details can be found in the OVN architectureman page. In this post we will repeat the scenario ofits bridge CNI equivalent,using this SDN approach. 
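Once the demo cluster below is up, you can see OVN's logical switch/router abstraction first-hand by listing the logical objects with ovn-nbctl (a sketch; run the commands from whichever pod or container hosts the OVN northbound database, as its exact name varies between OVN-Kubernetes deployment modes):

# List the logical switches and logical routers OVN-Kubernetes has created
ovn-nbctl ls-list
ovn-nbctl lr-list
# Show the full logical topology, including the ports attached to each switch
ovn-nbctl show
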
This secondary network topology is akin to the onedescribed in the flatL2 topology,but allows connectivity to the physical underlay. Demo: To run this demo, we will prepare a Kubernetes cluster with the followingcomponents installed: OVN-Kubernetes multus-cni KubeVirtThe following section will show you how to create aKinD cluster, with upstream latest OVN-Kubernetes,and upstream latest multus-cni deployed. Setup demo environment: Refer to the OVN-Kubernetes repoKIND documentationfor more details; the gist of it is you should clone the OVN-Kubernetesrepository, and run their kind helper script: git clone git@github. com:ovn-org/ovn-kubernetes. gitcd ovn-kubernetespushd contrib ; . /kind. sh --multi-network-enable ; popdThis will get you a running kind cluster, configured to use OVN-Kubernetes asthe default cluster network, configuring the multi-homing OVN-Kubernetes featuregate, and deployingmultus-cni in the cluster. Install KubeVirt in the cluster: Follow Kubevirt’suser guideto install the latest released version (currently, v0. 59. 0). export RELEASE=$(curl https://storage. googleapis. com/kubevirt-prow/release/kubevirt/kubevirt/stable. txt)kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator. yaml kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr. yaml kubectl -n kubevirt wait kv kubevirt --timeout=360s --for condition=AvailableNow we have a Kubernetes cluster with all the pieces to start the Demo. Single broadcast domain: In this scenario we will see how traffic from a single localnet network can beconnected to a physical network in the host using a dedicated bridge. This scenario does not use any VLAN encapsulation, thus is simpler, since thenetwork admin does not need to provision any VLANs in advance. Configuring the underlay: When you’ve started the KinD cluster with the --multi-network-enable flag anadditional OCI network was created, and attached to each of the KinD nodes. But still, further steps may be required, depending on the desired L2configuration. Let’s first create a dedicated OVS bridge, and attach the aforementionedvirtualized network to it: for node in $(kubectl -n ovn-kubernetes get pods -l app=ovs-node -o jsonpath= {. items[*]. metadata. name} )do kubectl -n ovn-kubernetes exec -ti $node -- ovs-vsctl --may-exist add-br ovsbr1 kubectl -n ovn-kubernetes exec -ti $node -- ovs-vsctl --may-exist add-port ovsbr1 eth1 kubectl -n ovn-kubernetes exec -ti $node -- ovs-vsctl set open . external_ids:ovn-bridge-mappings=physnet:breth0,localnet-network:ovsbr1doneThe first two commands are self-evident: you create an OVS bridge, and attacha port to it; the last one is not. In it, we’re using theOVN bridge mappingAPI to configure which OVS bridge must be used for each physical network. It creates a patch port between the OVN integration bridge - br-int - and theOVS bridge you tell it to, and traffic will be forwarded to/from it with thehelp of alocalnet port. NOTE: The provided mapping must match the name within thenet-attach-def. Spec. Config JSON, otherwise, the patch ports will not becreated. You will also have to configure an IP address on the bridge for theextra-network the kind script created. For that, you first need to identify thebridge’s name. In the example below we’re providing a command for the podmanruntime: podman network inspect underlay --format '{{ . NetworkInterface }}'podman3ip addr add 10. 128. 0. 
1/24 dev podman3NOTE: for docker, please use the following command: ip a | grep `docker network inspect underlay --format '{{ index . IPAM. Config 0 Gateway }}'` | awk '{print $NF}'br-0aeb0318f71fip addr add 10. 128. 0. 1/24 dev br-0aeb0318f71fLet’s also use an IP in the same subnet as the network subnet (defined in theNAD). This IP address must be excluded from the IPAM pool (also on the NAD),otherwise the OVN-Kubernetes IPAM may assign it to a workload. Defining the OVN-Kubernetes networks: Once the underlay is configured, we can now provision the attachment configuration: ---apiVersion: k8s. cni. cncf. io/v1kind: NetworkAttachmentDefinitionmetadata: name: localnet-networkspec: config: |2 { cniVersion : 0. 3. 1 , name : localnet-network , type : ovn-k8s-cni-overlay , topology : localnet , subnets : 10. 128. 0. 0/24 , excludeSubnets : 10. 128. 0. 1/32 , netAttachDefName : default/localnet-network }It is required to list the gateway IP in the excludedSubnets attribute, thuspreventing OVN-Kubernetes from assigning that IP address to the workloads. Spin up the VMs: These two VMs can be used for the single broadcast domain scenario (no VLANs). ---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-serverspec: running: true template: spec: nodeSelector: kubernetes. io/hostname: ovn-worker domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: localnet bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: localnet multus: networkName: localnet-network terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true userData: |- #cloud-config password: fedora chpasswd: { expire: False }---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-clientspec: running: true template: spec: nodeSelector: kubernetes. io/hostname: ovn-worker2 domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: localnet bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: localnet multus: networkName: localnet-network terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true userData: |- #cloud-config password: fedora chpasswd: { expire: False }Test East / West communication: You can check east/west connectivity between both VMs via ICMP: $ kubectl get vmi vm-server -ojsonpath= { @. status. interfaces } | jq[ { infoSource : domain, guest-agent, multus-status , interfaceName : eth0 , ipAddress : 10. 128. 0. 2 , ipAddresses : [ 10. 128. 0. 2 , fe80::e83d:16ff:fe76:c1bd ], mac : ea:3d:16:76:c1:bd , name : localnet , queueCount : 1 }]$ virtctl console vm-clientSuccessfully connected to vm-client console. The escape sequence is ^][fedora@vm-client ~]$ ping 10. 128. 0. 2PING 10. 128. 0. 2 (10. 128. 0. 2) 56(84) bytes of data. 64 bytes from 10. 128. 0. 2: icmp_seq=1 ttl=64 time=0. 808 ms64 bytes from 10. 128. 0. 2: icmp_seq=2 ttl=64 time=0. 478 ms64 bytes from 10. 128. 0. 2: icmp_seq=3 ttl=64 time=0. 536 ms64 bytes from 10. 128. 0. 2: icmp_seq=4 ttl=64 time=0. 507 ms--- 10. 128. 0. 
2 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3005msrtt min/avg/max/mdev = 0. 478/0. 582/0. 808/0. 131 msCheck underlay services: We can now start HTTP servers listening to the IPs attached onthe gateway: python3 -m http. server --bind 10. 128. 0. 1 9000And finally curl this from your client: [fedora@vm-client ~]$ curl -v 10. 128. 0. 1:9000* Trying 10. 128. 0. 1:9000. . . * Connected to 10. 128. 0. 1 (10. 128. 0. 1) port 9000 (#0)> GET / HTTP/1. 1> Host: 10. 128. 0. 1:9000> User-Agent: curl/7. 69. 1> Accept: */*> * Mark bundle as not supporting multiuse* HTTP 1. 0, assume close after body< HTTP/1. 0 200 OK< Server: SimpleHTTP/0. 6 Python/3. 11. 3< Date: Thu, 01 Jun 2023 16:05:09 GMT< Content-type: text/html; charset=utf-8< Content-Length: 2923. . . Multiple physical networks pointing to the same OVS bridge: This example will feature 2 physical networks, each with a different VLAN,both pointing at the same OVS bridge. Configuring the underlay: Again, the first thing to do is create a dedicated OVS bridge, and attach theaforementioned virtualized network to it, while defining it as a trunk portfor two broadcast domains, with tags 10 and 20. for node in $(kubectl -n ovn-kubernetes get pods -l app=ovs-node -o jsonpath= {. items[*]. metadata. name} )do kubectl -n ovn-kubernetes exec -ti $node -- ovs-vsctl --may-exist add-br ovsbr1 kubectl -n ovn-kubernetes exec -ti $node -- ovs-vsctl --may-exist add-port ovsbr1 eth1 trunks=10,20 vlan_mode=trunk kubectl -n ovn-kubernetes exec -ti $node -- ovs-vsctl set open . external_ids:ovn-bridge-mappings=physnet:breth0,tenantblue:ovsbr1,tenantred:ovsbr1doneWe must now configure the physical network; since the packets are leaving theOVS bridge tagged with either the 10 or 20 VLAN, we must configure the physicalnetwork where the virtualized nodes run to handle the tagged traffic. For that we will create two VLANed interfaces, each with a different subnet; wewill need to know the name of the bridge the kind script created to implementthe extra network it required. Those VLAN interfaces also need to be configuredwith an IP address: (for docker see previous example) podman network inspect underlay --format '{{ . NetworkInterface }}'podman3# create the VLANsip link add link podman3 name podman3. 10 type vlan id 10ip addr add 192. 168. 123. 1/24 dev podman3. 10ip link set dev podman3. 10 upip link add link podman3 name podman3. 20 type vlan id 20ip addr add 192. 168. 124. 1/24 dev podman3. 20ip link set dev podman3. 20 upNOTE: both the tenantblue and tenantred networks forward their trafficto the ovsbr1 OVS bridge. Defining the OVN-Kubernetes networks: Let us now provision the attachment configuration for the two physical networks. Notice they do not have a subnet defined, which means our workloads mustconfigure static IPs via cloud-init. ---apiVersion: k8s. cni. cncf. io/v1kind: NetworkAttachmentDefinitionmetadata: name: tenantredspec: config: |2 { cniVersion : 0. 3. 1 , name : tenantred , type : ovn-k8s-cni-overlay , topology : localnet , vlanID : 10, netAttachDefName : default/tenantred }---apiVersion: k8s. cni. cncf. io/v1kind: NetworkAttachmentDefinitionmetadata: name: tenantbluespec: config: |2 { cniVersion : 0. 3. 1 , name : tenantblue , type : ovn-k8s-cni-overlay , topology : localnet , vlanID : 20, netAttachDefName : default/tenantblue }NOTE: each of the tenantblue and tenantred networks tags their trafficwith a different VLAN, which must be listed on the port trunks configuration. 
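Before spinning up the VMs, it may be worth sanity-checking the trunk and the bridge mappings on every node. A quick verification sketch, reusing the same ovs-node pods as the loop above (only read-only ovs-vsctl queries, nothing new is configured here):
for node in $(kubectl -n ovn-kubernetes get pods -l app=ovs-node -o jsonpath='{.items[*].metadata.name}')
do
  # eth1 should report vlan_mode=trunk and trunks=[10, 20] on ovsbr1
  kubectl -n ovn-kubernetes exec -ti $node -- ovs-vsctl get port eth1 vlan_mode trunks
  # both tenantred and tenantblue should map to ovsbr1
  kubectl -n ovn-kubernetes exec -ti $node -- ovs-vsctl get open . external_ids:ovn-bridge-mappings
done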
Spin up the VMs: These two VMs can be used for the OVS bridge sharing scenario (two physicalnetworks share the same OVS bridge, each on a different VLAN). ---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-red-1spec: running: true template: spec: nodeSelector: kubernetes. io/hostname: ovn-worker domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: physnet-red bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: physnet-red multus: networkName: tenantred terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: addresses: [ 192. 168. 123. 10/24 ] userData: |- #cloud-config password: fedora chpasswd: { expire: False }---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-red-2spec: running: true template: spec: nodeSelector: kubernetes. io/hostname: ovn-worker domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: flatl2-overlay bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: flatl2-overlay multus: networkName: tenantred terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: addresses: [ 192. 168. 123. 20/24 ] userData: |- #cloud-config password: fedora chpasswd: { expire: False }---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-blue-1spec: running: true template: spec: nodeSelector: kubernetes. io/hostname: ovn-worker domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: physnet-blue bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: physnet-blue multus: networkName: tenantblue terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: addresses: [ 192. 168. 124. 10/24 ] userData: |- #cloud-config password: fedora chpasswd: { expire: False }---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-blue-2spec: running: true template: spec: nodeSelector: kubernetes. io/hostname: ovn-worker domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: physnet-blue bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: physnet-blue multus: networkName: tenantblue terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: addresses: [ 192. 168. 124. 20/24 ] userData: |- #cloud-config password: fedora chpasswd: { expire: False }Test East / West communication: You can check east/west connectivity between both red VMs via ICMP: $ kubectl get vmi vm-red-2 -ojsonpath= { @. status. interfaces } | jq[ { infoSource : domain, guest-agent , interfaceName : eth0 , ipAddress : 192. 168. 123. 20 , ipAddresses : [ 192. 168. 123. 
20 , fe80::e83d:16ff:fe76:c1bd ], mac : ea:3d:16:76:c1:bd , name : flatl2-overlay , queueCount : 1 }]$ virtctl console vm-red-1Successfully connected to vm-red-1 console. The escape sequence is ^][fedora@vm-red-1 ~]$ ping 192. 168. 123. 20PING 192. 168. 123. 20 (192. 168. 123. 20) 56(84) bytes of data. 64 bytes from 192. 168. 123. 20: icmp_seq=1 ttl=64 time=0. 534 ms64 bytes from 192. 168. 123. 20: icmp_seq=2 ttl=64 time=0. 246 ms64 bytes from 192. 168. 123. 20: icmp_seq=3 ttl=64 time=0. 178 ms64 bytes from 192. 168. 123. 20: icmp_seq=4 ttl=64 time=0. 236 ms--- 192. 168. 123. 20 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3028msrtt min/avg/max/mdev = 0. 178/0. 298/0. 534/0. 138 msThe same behavior can be seen on the VMs attached to the blue network: $ kubectl get vmi vm-blue-2 -ojsonpath= { @. status. interfaces } | jq[ { infoSource : domain, guest-agent , interfaceName : eth0 , ipAddress : 192. 168. 124. 20 , ipAddresses : [ 192. 168. 124. 20 , fe80::6cae:e4ff:fefc:bd02 ], mac : 6e:ae:e4:fc:bd:02 , name : physnet-blue , queueCount : 1 }]$ virtctl console vm-blue-1Successfully connected to vm-blue-1 console. The escape sequence is ^][fedora@vm-blue-1 ~]$ ping 192. 168. 124. 20PING 192. 168. 124. 20 (192. 168. 124. 20) 56(84) bytes of data. 64 bytes from 192. 168. 124. 20: icmp_seq=1 ttl=64 time=0. 531 ms64 bytes from 192. 168. 124. 20: icmp_seq=2 ttl=64 time=0. 255 ms64 bytes from 192. 168. 124. 20: icmp_seq=3 ttl=64 time=0. 688 ms64 bytes from 192. 168. 124. 20: icmp_seq=4 ttl=64 time=0. 648 ms--- 192. 168. 124. 20 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3047msrtt min/avg/max/mdev = 0. 255/0. 530/0. 688/0. 169 msAccessing the underlay services: We can now start HTTP servers listening to the IPs attached on the VLANinterfaces: python3 -m http. server --bind 192. 168. 123. 1 9000 &python3 -m http. server --bind 192. 168. 124. 1 9000 &And finally curl this from your client (blue network): [fedora@vm-blue-1 ~]$ curl -v 192. 168. 124. 1:9000* Trying 192. 168. 124. 1:9000. . . * Connected to 192. 168. 124. 1 (192. 168. 124. 1) port 9000 (#0)> GET / HTTP/1. 1> Host: 192. 168. 124. 1:9000> User-Agent: curl/7. 69. 1> Accept: */*> * Mark bundle as not supporting multiuse* HTTP 1. 0, assume close after body< HTTP/1. 0 200 OK< Server: SimpleHTTP/0. 6 Python/3. 11. 3< Date: Thu, 01 Jun 2023 16:05:09 GMT< Content-type: text/html; charset=utf-8< Content-Length: 2923. . . And from the client connected to the red network: [fedora@vm-red-1 ~]$ curl -v 192. 168. 123. 1:9000* Trying 192. 168. 123. 1:9000. . . * Connected to 192. 168. 123. 1 (192. 168. 123. 1) port 9000 (#0)> GET / HTTP/1. 1> Host: 192. 168. 123. 1:9000> User-Agent: curl/7. 69. 1> Accept: */*> * Mark bundle as not supporting multiuse* HTTP 1. 0, assume close after body< HTTP/1. 0 200 OK< Server: SimpleHTTP/0. 6 Python/3. 11. 3< Date: Thu, 01 Jun 2023 16:06:02 GMT< Content-type: text/html; charset=utf-8< Content-Length: 2923< . . . Conclusions: In this post we have seen how to use OVN-Kubernetes to create secondarynetworks connected to the physical underlay, allowing both east/westcommunication between VMs, and access to services running outside theKubernetes cluster. 
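If you want to undo the demo afterwards, here is a minimal cleanup sketch; it assumes the object names used throughout this post (the VMs, the NetworkAttachmentDefinitions and the ovsbr1 bridge created above):
kubectl delete vm vm-server vm-client vm-red-1 vm-red-2 vm-blue-1 vm-blue-2
kubectl delete net-attach-def localnet-network tenantred tenantblue
for node in $(kubectl -n ovn-kubernetes get pods -l app=ovs-node -o jsonpath='{.items[*].metadata.name}')
do
  # remove the demo bridge and revert the mappings to the default physnet entry
  kubectl -n ovn-kubernetes exec -ti $node -- ovs-vsctl --if-exists del-br ovsbr1
  kubectl -n ovn-kubernetes exec -ti $node -- ovs-vsctl set open . external_ids:ovn-bridge-mappings=physnet:breth0
done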
" }, { - "id": 10, + "id": 11, "url": "/2023/OVN-kubernetes-secondary-networks.html", "title": "Secondary networks for KubeVirt VMs using OVN-Kubernetes", "author" : "Miguel Duarte Barroso", "tags" : "Kubevirt, kubernetes, virtual machine, VM, SDN, OVN", "body": "Introduction: OVN (Open Virtual Network) is a series of daemons for the Open vSwitch thattranslate virtual network configurations into OpenFlow. It provides virtualnetworking capabilities for any type of workload on a virtualized platform(virtual machines and containers) using the same API. OVN provides a higher-layer of abstraction than Open vSwitch, working withlogical routers and logical switches, rather than flows. More details can be found in the OVN architectureman page. In this post we will repeat the scenario ofits bridge CNI equivalent,using this SDN approach, which uses virtual networking infrastructure: thus, itis not required to provision VLANs or other physical network resources. Demo: To run this demo, you will need a Kubernetes cluster with the followingcomponents installed: OVN-Kubernetes multus-cni KubeVirtThe following section will show you how to create aKinD cluster, with upstream latest OVN-Kubernetes,and upstream latest multus-cni deployed. Please skip this section if yourcluster already features these components (e. g. Openshift). Setup demo environment: Refer to the OVN-Kubernetes repoKIND documentationfor more details; the gist of it is you should clone the OVN-Kubernetesrepository, and run their kind helper script: git clone git@github. com:ovn-org/ovn-kubernetes. gitcd ovn-kubernetespushd contrib ; . /kind. sh --multi-network-enable ; popdThis will get you a running kind cluster, configured to use OVN-Kubernetes asthe default cluster network, configuring the multi-homing OVN-Kubernetes featuregate, and deployingmultus-cni in the cluster. Install KubeVirt in the cluster: Follow Kubevirt’suser guideto install the latest released version (currently, v0. 59. 0). Please skip thissection if you already have a running cluster with KubeVirt installed in it. export RELEASE=$(curl https://storage. googleapis. com/kubevirt-prow/release/kubevirt/kubevirt/stable. txt)kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator. yaml kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr. yaml kubectl -n kubevirt wait kv kubevirt --timeout=360s --for condition=AvailableNow we have a Kubernetes cluster with all the pieces to start the Demo. Define the overlay network: Provision the following yaml to define the overlay which will configure thesecondary attachment for the KubeVirt VMs. Please refer to the OVN-Kubernetesuserdocumentationfor details into each of the knobs. cat <<EOF | kubectl apply -f -apiVersion: k8s. cni. cncf. io/v1kind: NetworkAttachmentDefinitionmetadata: name: l2-network namespace: defaultspec: config: |2 { cniVersion : 0. 3. 1 , name : l2-network , type : ovn-k8s-cni-overlay , topology : layer2 , netAttachDefName : default/l2-network }EOFThe above example will configure a cluster-wide overlay without a subnetdefined. This means the users will have to define static IPs for their VMs. It is also worth to point out the value of the netAttachDefName attributemust match the <namespace>/<name> of the surroundingNetworkAttachmentDefinition object. Spin up the VMs: cat <<EOF | kubectl apply -f ----apiVersion: kubevirt. 
io/v1alpha3kind: VirtualMachinemetadata: name: vm-serverspec: running: true template: spec: domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: default masquerade: {} - name: flatl2-overlay bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: default pod: {} - name: flatl2-overlay multus: networkName: l2-network terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: addresses: [ 192. 0. 2. 20/24 ] userData: |- #cloud-config password: fedora chpasswd: { expire: False }---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vm-clientspec: running: true template: spec: domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: default masquerade: {} - name: flatl2-overlay bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: default pod: {} - name: flatl2-overlay multus: networkName: l2-network terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: quay. io/kubevirt/fedora-with-test-tooling-container-disk:devel - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: addresses: [ 192. 0. 2. 10/24 ] userData: |- #cloud-config password: fedora chpasswd: { expire: False }EOFProvision these two Virtual Machines, and wait for them to boot up. Test connectivity: To verify connectivity over our layer 2 overlay, we need first to ensure the IPaddress of the server VM; let’s query the VMI status for that: kubectl get vmi vm-server -ojsonpath= { @. status. interfaces } | jq[ { infoSource : domain, guest-agent , interfaceName : eth0 , ipAddress : 10. 244. 2. 8 , ipAddresses : [ 10. 244. 2. 8 ], mac : 52:54:00:23:1c:c2 , name : default , queueCount : 1 }, { infoSource : domain, guest-agent , interfaceName : eth1 , ipAddress : 192. 0. 2. 20 , ipAddresses : [ 192. 0. 2. 20 , fe80::7cab:88ff:fe5b:39f ], mac : 7e:ab:88:5b:03:9f , name : flatl2-overlay , queueCount : 1 }]You can afterwards connect to them via console and ping vm-server: Note The user and password for this VMs is fedora; check the VM template spec cloudinit userData virtctl console vm-clientip a # confirm the IP address is the one set via cloud-init[fedora@vm-client ~]$ ip a1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127. 0. 0. 1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:29:de:53 brd ff:ff:ff:ff:ff:ff altname enp1s0 inet 10. 0. 2. 2/24 brd 10. 0. 2. 255 scope global dynamic noprefixroute eth0 valid_lft 86313584sec preferred_lft 86313584sec inet6 fe80::5054:ff:fe29:de53/64 scope link valid_lft forever preferred_lft forever3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc fq_codel state UP group default qlen 1000 link/ether 36:f9:29:65:66:55 brd ff:ff:ff:ff:ff:ff altname enp2s0 inet 192. 0. 2. 10/24 brd 192. 0. 2. 255 scope global noprefixroute eth1 valid_lft forever preferred_lft forever inet6 fe80::34f9:29ff:fe65:6655/64 scope link valid_lft forever preferred_lft forever[fedora@vm-client ~]$ ping -c4 192. 
0. 2. 20 # ping the vm-server static IPPING 192. 0. 2. 20 (192. 0. 2. 20) 56(84) bytes of data. 64 bytes from 192. 0. 2. 20: icmp_seq=1 ttl=64 time=1. 05 ms64 bytes from 192. 0. 2. 20: icmp_seq=2 ttl=64 time=1. 05 ms64 bytes from 192. 0. 2. 20: icmp_seq=3 ttl=64 time=0. 995 ms64 bytes from 192. 0. 2. 20: icmp_seq=4 ttl=64 time=0. 902 ms--- 192. 0. 2. 20 ping statistics ---4 packets transmitted, 4 received, 0% packet loss, time 3006msrtt min/avg/max/mdev = 0. 902/0. 997/1. 046/0. 058 msConclusion: In this post we have seen how to use OVN-Kubernetes to create an overlay toconnect VMs in different nodes using secondary networks, without having toconfigure any physical networking infrastructure. " }, { - "id": 11, + "id": 12, "url": "/2023/KubeVirt-Summit-2023.html", "title": "KubeVirt Summit 2023!", "author" : "Andrew Burden", "tags" : "kubevirt, event, community", "body": "The third online KubeVirt Summit starts March 29, 2023! When: The event will take place online over two half-days: Dates: March 29 and 30, 2023. Time: 14:00 – 19:00 UTC (9:00–14:00 EST, 15:00–20:00 CET)Register: KubeVirt Summit is hosted on Community. CNCF. io. This is a free event but you need to register in order to attend. Register for KubeVirt Summit 2023 If this is your first time attending, you will need to create an account with CNCF. io. Schedule: The schedule is available on the CNCF Community Events page where you register, as well as on the KubeVirt Summit page. Keep up to date: Connect with the KubeVirt Community through our community page. See you there! " }, { - "id": 12, + "id": 13, "url": "/2023/changelog-v0.59.0.html", "title": "KubeVirt v0.59.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 59. 0: Released on: Wed Mar 1 16:49:27 2023 +0000 [PR #9311][kubevirt-bot] fixes the requests/limits CPU number mismatch for VMs with isolatedEmulatorThread [PR #9276][fossedihelm] Added foreground finalizer to virtual machine [PR #9295][kubevirt-bot] Fix bug of possible re-trigger of memory dump [PR #9270][kubevirt-bot] BugFix: Guestfs image url not constructed correctly [PR #9234][kubevirt-bot] The dedicatedCPUPlacement attribute is once again supported within the VirtualMachineInstancetype and VirtualMachineClusterInstancetype CRDs after a recent bugfix improved VirtualMachine validations, ensuring defaults are applied before any attempt to validate. [PR #9267][fossedihelm] This version of KubeVirt includes upgraded virtualization technology based on libvirt 9. 0. 0 and QEMU 7. 2. 0. [PR #9197][kubevirt-bot] Fix addvolume not rejecting adding existing volume source, fix removevolume allowing to remove non hotpluggable volume [PR #9120][0xFelix] Fix access to portforwarding on VMs/VMIs with the cluster roles kubevirt. io:admin and kubevirt. io:edit [PR #9116][EdDev] Allow the specification of the ACPI Index on a network interface. [PR #8774][avlitman] Added new Virtual machines CPU metrics: [PR #9087][zhuchenwang] Open /dev/vhost-vsock explicitly to ensure that the right vsock module is loaded [PR #9020][feitnomore] Adding support for status/scale subresources so that VirtualMachinePool now supports HorizontalPodAutoscaler [PR #9085][0xFelix] virtctl: Add options to infer instancetype and preference when creating a VM [PR #8917][xpivarc] Kubevirt can be configured with Seccomp profile. It now ships a custom profile for the launcher. [PR #9054][enp0s3] do not inject LimitRange defaults into VMI [PR #7862][vladikr] Store the finalized VMI migration status in the migration objects. 
[PR #8878][0xFelix] Add ‘create vm’ command to virtctl [PR #9048][jean-edouard] DisableCustomSELinuxPolicy feature gate introduced to disable our custom SELinux policy [PR #8953][awels] VMExport now has endpoint containing entire VM definition. [PR #8976][iholder101] Fix podman CRI detection [PR #9043][iholder101] Adjust operator functional tests to custom images specification [PR #8875][machadovilaca] Rename migration metrics removing ‘total’ keyword [PR #9040][lyarwood] inferFromVolume now uses labels instead of annotations to lookup default instance type and preference details from a referenced Volume. This has changed in order to provide users with a way of looking up suitably decorated resources through these labels before pointing to them within the VirtualMachine. [PR #9039][orelmisan] client-go: Added context to additional VirtualMachineInstance’s methods. [PR #9018][orelmisan] client-go: Added context to additional VirtualMachineInstance’s methods. [PR #9025][akalenyu] BugFix: Hotplug pods have hardcoded resource req which don’t comply with LimitRange maxLimitRequestRatio of 1 [PR #8908][orelmisan] client-go: Added context to some of VirtualMachineInstance’s methods. [PR #6863][rmohr] The install strategy job will respect the infra node placement from now on [PR #8948][iholder101] Bugfix: virt-handler socket leak [PR #8649][acardace] KubeVirt is now able to run VMs inside restricted namespaces. [PR #8992][iholder101] Align with k8s fix for default limit range requirements [PR #8889][rmohr] Add basic TLS encryption support for vsock websocket connections [PR #8660][huyinhou] Fix remoteAddress field in virt-api log being truncated when it is an ipv6 address [PR #8961][rmohr] Bump distroless base images [PR #8952][rmohr] Fix read-only sata disk validation [PR #8657][fossedihelm] Use an increasingly exponential backoff before retrying to start the VM, when an I/O error occurs. [PR #8480][lyarwood] New inferFromVolume attributes have been introduced to the {Instancetype,Preference}Matchers of a VirtualMachine. When provided the Volume referenced by the attribute is checked for the following annotations with which to populate the {Instancetype,Preference}Matchers: [PR #7762][VirrageS] Service kubevirt-prometheus-metrics now sets ClusterIP to None to make it a headless service. [PR #8599][machadovilaca] Change KubevirtVmHighMemoryUsage threshold from 20MB to 50MB [PR #7761][VirrageS] imagePullSecrets field has been added to KubeVirt CR to support deployments form private registries [PR #8887][iholder101] Bugfix: use virt operator image if provided [PR #8750][jordigilh] Fixes an issue that prevented running real time workloads in non-root configurations due to libvirt’s dependency on CAP_SYS_NICE to change the vcpu’s thread’s scheduling and priority to FIFO and 1. The change of priority and scheduling is now executed in the virt-launcher for both root and non-root configurations, removing the dependency in libvirt. [PR #8845][lyarwood] An empty Timer is now correctly omitted from Clock fixing bug #8844. [PR #8842][andreabolognani] The virt-launcher pod no longer needs the SYS_PTRACE capability. [PR #8734][alicefr] Change libguestfs-tools image using root appliance in qcow2 format [PR #8764][ShellyKa13] Add list of included and excluded volumes in vmSnapshot [PR #8811][iholder101] Custom components: support gs [PR #8770][dhiller] Add Ginkgo V2 Serial decorator to serial tests as preparation to simplify parallel vs. 
serial test run logic [PR #8808][acardace] Apply migration backoff only for evacuation migrations. [PR #8525][jean-edouard] CR option mediatedDevicesTypes is deprecated in favor of mediatedDeviceTypes [PR #8792][iholder101] Expose new custom components env vars to csv-generator and manifest-templator [PR #8701][enp0s3] Consider the ParallelOutboundMigrationsPerNode when evicting VMs [PR #8740][iholder101] Fix: Align Reenlightenment flows between converter. go and template. go [PR #8530][acardace] Use exponential backoff for failing migrations [PR #8720][0xFelix] The expand-spec subresource endpoint was renamed to expand-vm-spec and made namespaced [PR #8458][iholder101] Introduce support for clones with a snapshot source (e. g. clone snapshot -> VM) [PR #8716][rhrazdil] Add overhead of interface with Passt binding when no ports are specified [PR #8619][fossedihelm] virt-launcher: use virtqemud daemon instead of libvirtd [PR #8736][knopt] Added more precise rest_client_request_latency_seconds histogram buckets [PR #8624][zhuchenwang] Add the REST API to be able to talk to the application in the guest VM via VSOCK. [PR #8625][AlonaKaplan] iptables are no longer used by masquerade binding. Nodes with iptables only won’t be able to run VMs with masquerade binding. [PR #8673][iholder101] Allow specifying custom images for core components [PR #8622][jean-edouard] Built with golang 1. 19 [PR #8336][alicefr] Flag for setting the guestfs uid and gid [PR #8667][huyinhou] connect VM vnc failed when virt-launcher work directory is not / [PR #8368][machadovilaca] Use collector to set migration metrics [PR #8558][xpivarc] Bug-fix: LimitRange integration now works when VMI is missing namespace [PR #8404][andreabolognani] This version of KubeVirt includes upgraded virtualization technology based on libvirt 8. 7. 0, QEMU 7. 1. 0 and CentOS Stream 9. [PR #8652][akalenyu] BugFix: Exporter pod does not comply with restricted PSA [PR #8563][xpivarc] Kubevirt now runs with nonroot user by default [PR #8442][kvaps] Add Deckhouse to the Adopters list [PR #8546][zhuchenwang] Provides the Vsock feature for KubeVirt VMs. [PR #8598][acardace] VMs configured with hugepages can now run using the default container_t SELinux type [PR #8594][kylealexlane] Fix permission denied on on selinux relabeling on some kernel versions [PR #8521][akalenyu] Add an option to specify a TTL for VMExport objects [PR #7918][machadovilaca] Add alerts for VMs unhealthy states [PR #8516][rhrazdil] When using Passt binding, virl-launcher has unprivileged_port_start set to 0, so that passt may bind to all ports. [PR #7772][jean-edouard] The SELinux policy for virt-launcher is down to 4 rules, 1 for hugepages and 3 for virtiofs. [PR #8402][jean-edouard] Most VMIs now run under the SELinux type container_t [PR #8513][alromeros] [Bug-fix] Fix error handling in virtctl image-upload" }, { - "id": 13, + "id": 14, "url": "/2022/changelog-v0.58.0.html", "title": "KubeVirt v0.58.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 58. 0: Released on: Thu Oct 13 00:24:51 2022 +0000 [PR #8578][rhrazdil] When using Passt binding, virl-launcher has unprivileged_port_start set to 0, so that passt may bind to all ports. [PR #8463][Barakmor1] Improve metrics documentation [PR #8282][akrejcir] Improves instancetype and preference controller revisions. This is a backwards incompatible change and introduces a new v1alpha2 api for instancetype and preferences. 
[PR #8272][jean-edouard] No more empty section in the kubevirt-cr manifest [PR #8536][qinqon] Don’t show a failure if ConfigDrive cloud init has UserDataSecretRef and not NetworkDataSecretRef [PR #8375][xpivarc] Virtiofs can be used with Nonroot feature gate [PR #8465][rmohr] Add a vnc screenshot REST endpoint and a “virtctl vnc screenshot” command for UI and script integration [PR #8418][alromeros] Enable automatic token generation for VirtualMachineExport objects [PR #8488][0xFelix] virtctl: Be less verbose when using the local ssh client [PR #8396][alicefr] Add group flag for setting the gid and fsgroup in guestfs [PR #8476][iholder-redhat] Allow setting virt-operator log verbosity through Kubevirt CR [PR #8366][rthallisey] Move KubeVirt to a 15 week release cadence [PR #8479][arnongilboa] Enable DataVolume GC by default in cluster-deploy [PR #8474][vasiliy-ul] Fixed migration failure of VMs with containerdisks on systems with containerd [PR #8316][ShellyKa13] Fix possible race when deleting unready vmsnapshot and the vm remaining frozen [PR #8436][xpivarc] Kubevirt is able to run with restricted Pod Security Standard enabled with an automatic escalation of namespace privileges. [PR #8197][alromeros] Add vmexport command to virtctl [PR #8252][fossedihelm] Add tlsConfiguration to Kubevirt Configuration [PR #8431][rmohr] Fix shadow status updates and periodic status updates on VMs, performed by the snapshot controller [PR #8359][iholder-redhat] [Bugfix]: HyperV Reenlightenment VMIs should be able to start when TSC Frequency is not exposed [PR #8330][jean-edouard] Important: If you use docker with SELinux enabled, set the DockerSELinuxMCSWorkaround feature gate before upgrading [PR #8401][machadovilaca] Rename metrics to follow the naming convention" }, { - "id": 14, + "id": 15, "url": "/2022/changelog-v0.57.0.html", "title": "KubeVirt v0.57.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 57. 0: Released on: Mon Sep 12 14:00:44 2022 +0000 [PR #8129][mlhnono68] Fixes virtctl to support connection to clusters proxied by RANCHER or having special paths [PR #8337][0xFelix] virtctl’s native SSH client is now useable in the Windows console without workarounds [PR #8257][awels] VirtualMachineExport now supports VM export source type. [PR #8367][vladikr] fix the guest memory conversion by setting it to resources. requests. memory when guest memory is not explicitly provided [PR #7990][ormergi] Deprecate SR-IOV live migration feature gate. [PR #8069][lyarwood] The VirtualMachineInstancePreset resource has been deprecated ahead of removal in a future release. Users should instead use the VirtualMachineInstancetype and VirtualMachinePreference resources to encapsulate any shared resource or preferences characteristics shared by their VirtualMachines. [PR #8326][0xFelix] virtctl: Do not log wrapped ssh command by default [PR #8325][rhrazdil] Enable route_localnet sysctl option for masquerade binding at virt-handler [PR #8159][acardace] Add support for USB disks [PR #8006][lyarwood] AutoattachInputDevice has been added to Devices allowing an Input device to be automatically attached to a VirtualMachine on start up. PreferredAutoattachInputDevice has also been added to DevicePreferences allowing users to control this behaviour with a set of preferences. [PR #8134][arnongilboa] Support DataVolume garbage collection [PR #8157][StefanKro] TrilioVault for Kubernetes now supports KubeVirt for backup and recovery. [PR #8273][alaypatel07] add server-side validations for spec. 
topologySpreadConstraints during object creation [PR #8049][alicefr] Set RunAsNonRoot as default for the guestfs pod [PR #8107][awels] Allow VirtualMachineSnapshot as a VirtualMachineExport source [PR #7846][janeczku] Added support for configuring topology spread constraints for virtual machines. [PR #8215][alaypatel07] support validation for spec. affinity fields during vmi creation [PR #8071][oshoval] Relax networkInterfaceMultiqueue semantics: multi queue will configure only what it can (virtio interfaces). [PR #7549][akrejcir] Added new API subresources to expand instancetype and preference. " }, { - "id": 15, + "id": 16, "url": "/2022/changelog-v0.56.0.html", "title": "KubeVirt v0.56.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 56. 0: Released on: Thu Aug 18 20:10:29 2022 +0000 [PR #7599][iholder-redhat] Introduce a mechanism to abort non-running migrations - fixes “Unable to cancel live-migration if virt-launcher pod in pending state” bug [PR #8027][alaypatel07] Wait deletion to succeed all the way till objects are finalized in perfscale tests [PR #8198][rmohr] Improve path handling for non-root virt-launcher workloads [PR #8136][iholder-redhat] Fix cgroups unit tests: mock out underlying runc cgroup manager [PR #8047][iholder-redhat] Deprecate live migration feature gate [PR #7986][iholder-redhat] [Bug-fix]: Windows VM with WSL2 guest fails to migrate [PR #7814][machadovilaca] Add VMI filesystem usage metrics [PR #7849][AlonaKaplan] [TECH PREVIEW] Introducing passt - a new approach to user-mode networking for virtual machines [PR #7991][ShellyKa13] Virtctl memory dump with create flag to create a new pvc [PR #8039][lyarwood] The flavor API and associated CRDs of VirtualMachine{Flavor,ClusterFlavor} are renamed to instancetype and VirtualMachine{Instancetype,ClusterInstancetype}. [PR #8112][AlonaKaplan] Changing the default of virtctl expose ip-family parameter to be empty value instead of IPv4. [PR #8073][orenc1] Bump runc to v1. 1. 2 [PR #8092][Barakmor1] Bump the version of emicklei/go-restful from 2. 15. 0 to 2. 16. 0 [PR #8053][alromeros] [Bug-fix]: Fix mechanism to fetch fs overhead when CDI resource has a different name [PR #8035][0xFelix] Add option to wrap local scp client to scp command [PR #7981][lyarwood] Conflicts will now be raised when using flavors if the VirtualMachine defines any CPU or Memory resource requests. [PR #8068][awels] Set cache mode to match regular disks on hotplugged disks. " }, { - "id": 16, + "id": 17, "url": "/2022/KubeVirt-Introduction-of-instancetypes.html", "title": "Simplifying KubeVirt's `VirtualMachine` UX with Instancetypes and Preferences", "author" : "Lee Yarwood", "tags" : "kubevirt, kubernetes, virtual machine, VM, instancetypes, preferences, VirtualMachine, VirtualMachineInstancetype, VirtualMachinePreference", "body": "KubeVirt’s VirtualMachine API contains many advanced options for tuning a virtual machine’s resources and performance that go beyond what typical users need to be aware of. Users have until now been unable to simply define the storage/network they want assigned to their VM and then declare in broad terms what quality of resources and kind of performance they need for their VM. Instead, the user has to be keenly aware how to request specific compute resources alongside all of the performance tunings available on the VirtualMachine API and how those tunings impact their guest’s operating system in order to get a desired result. 
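For a concrete sense of that surface area, one way to browse the tunables being referred to is kubectl explain against the installed CRDs. A small sketch, assuming KubeVirt is already installed so the schema is published (the field paths below are examples, not an exhaustive list):
kubectl explain vmi.spec.domain.cpu
kubectl explain vmi.spec.domain.memory
kubectl explain vmi.spec.domain.devices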
A common pattern for IaaS is to have abstractions separating the resource sizing and performance of a workload from the user-defined values related to launching their custom application. This pattern is evident across all the major cloud providers (also known as hyperscalers) as well as open source IaaS projects like OpenStack. AWS has instance types, GCP has machine types, Azure has instance VM sizes, and OpenStack has flavors. Let’s take AWS for example to help visualize what this abstraction enables. Launching an EC2 instance only requires a few top level arguments; the disk image, instance type, keypair, security group, and subnet: $ aws ec2 run-instances --image-id ami-xxxxxxxx \ --count 1 \ --instance-type c4. xlarge \ --key-name MyKeyPair \ --security-group-ids sg-903004f8 \ --subnet-id subnet-6e7f829eWhen creating the EC2 instance the user doesn’t define the amount of resources, what processor to use, how to optimize the performance of the instance, or what hardware to schedule the instance on. Instead, all of that information is wrapped up in that single --instance-type c4. xlarge CLI argument. c4 denotes a specific performance profile version, in this case from the Compute Optimized family and xlarge denotes a specific amount of compute resources provided by the instance type, in this case 4 vCPUs, 7. 5 GiB of RAM, 750 Mbps EBS bandwidth, etc. While hyperscalers can provide predefined types with performance profiles and compute resources already assigned IaaS and virtualization projects such as OpenStack and KubeVirt can only provide the raw abstractions for operators, admins, and even vendors to then create instances of these abstractions specific to each deployment. Instancetype API: The recently renamed instancetype API and associated CRDs aim to address this by providing KubeVirt users with a set of APIs and abstractions that allow them to make fewer choices when creating a VirtualMachine while still ending up with a working, performant guest at runtime. VirtualMachineInstancetype: ---apiVersion: instancetype. kubevirt. io/v1alpha1kind: VirtualMachineInstancetypemetadata: name: example-instancetypespec: cpu: guest: 1 memory: guest: 128MiKubeVirt now provides two instancetype based CRDs, a cluster wide VirtualMachineClusterInstancetype and a namespaced VirtualMachineInstancetype. These CRDs encapsulate the following resource related characteristics of a VirtualMachine through a shared VirtualMachineInstancetypeSpec: CPU : Required number of vCPUs presented to the guest Memory : Required amount of memory presented to the guest GPUs : Optional list of vGPUs to passthrough HostDevices: Optional list of HostDevices to passthrough IOThreadsPolicy : Optional IOThreadsPolicy to be used LaunchSecurity: Optional LaunchSecurity to be usedAnything provided within an instancetype cannot be overridden within a VirtualMachine. For example, CPU and Memory are both required attributes of an instancetype. If a user makes any requests for CPU or Memory resources within their VirtualMachine, the instancetype will conflict and the request will be rejected. VirtualMachinePreference: ---apiVersion: instancetype. kubevirt. io/v1alpha1kind: VirtualMachinePreferencemetadata: name: example-preferencespec: devices: preferredDiskBus: virtio preferredInterfaceModel: virtioKubeVirt also provides two further preference based CRDs, again a cluster-wide VirtualMachineClusterPreference and namespaced VirtualMachinePreference. 
These CRDs encapsulate the preferred value of any remaining attributes of a VirtualMachine required to run a given workload, again this is through a shared VirtualMachinePreferenceSpec. Unlike instancetypes, preferences only represent the preferred values and as such can be overridden by values in the VirtualMachine provided by the user. VirtualMachine{Instancetype,Preference}Matcher: ---apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: name: example-vmspec:[. . ] instancetype: kind: VirtualMachineInstancetype name: example-instancetype preference: kind: VirtualMachinePreference name: example-preference[. . ]The previous instancetype and preference CRDs are matched to a given VirtualMachine through the use of a matcher. Each matcher consists of the following: Name (string): Name of the resource being referenced Kind (string): Optional, defaults to the cluster wide CRD kinds of VirtualMachineClusterInstancetype or VirtualMachineClusterPreference RevisionName (string) : Optional, name of a ControllerRevision containing a copy of the VirtualMachineInstancetypeSpec or VirtualMachinePreferenceSpec taken when the VirtualMachine is first started. VirtualMachineInstancePreset Deprecation: The new instancetype API and CRDs conflict somewhat with the existing VirtualMachineInstancePreset CRD. The approach taken by the CRD has also been removed in core k8s so, as advertised on the mailing list, I have started the process of deprecating VirtualMachineInstancePreset in favor of the Instancetype CRDs listed above. Examples: The following example is taken from the KubeVirt User Guide: $ cat << EOF | kubectl apply -f - ---apiVersion: instancetype. kubevirt. io/v1alpha1kind: VirtualMachineInstancetypemetadata: name: cmediumspec: cpu: guest: 1 memory: guest: 1Gi---apiVersion: instancetype. kubevirt. io/v1alpha1kind: VirtualMachinePreferencemetadata: name: fedoraspec: devices: preferredDiskBus: virtio preferredInterfaceModel: virtio preferredRng: {} features: preferredAcpi: {} preferredSmm: {} firmware: preferredUseEfi: true preferredUseSecureBoot: true ---apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: creationTimestamp: null name: fedoraspec: instancetype: name: cmedium kind: virtualMachineInstancetype preference: name: fedora kind: virtualMachinePreference runStrategy: Always template: metadata: creationTimestamp: null spec: domain: devices: {} volumes: - containerDisk: image: quay. io/containerdisks/fedora:latest name: containerdisk - cloudInitNoCloud: userData: |- #cloud-config users: - name: admin sudo: ALL=(ALL) NOPASSWD:ALL ssh_authorized_keys: - ssh-rsa AAAA. . . name: cloudinitEOFWe can compare the original VirtualMachine spec with that of the running VirtualMachineInstance to confirm our instancetype and preferences have been applied using the following diff command: $ diff --color -u <( kubectl get vms/fedora -o json | jq . spec. template. spec) <( kubectl get vmis/fedora -o json | jq . spec)[. . 
] { domain : {- devices : {},+ cpu : {+ cores : 1,+ model : host-model ,+ sockets : 1,+ threads : 1+ },+ devices : {+ disks : [+ {+ disk : {+ bus : virtio + },+ name : containerdisk + },+ {+ disk : {+ bus : virtio + },+ name : cloudinit + }+ ],+ interfaces : [+ {+ bridge : {},+ model : virtio ,+ name : default + }+ ],+ rng : {}+ },+ features : {+ acpi : {+ enabled : true+ },+ smm : {+ enabled : true+ }+ },+ firmware : {+ bootloader : {+ efi : {+ secureBoot : true+ }+ },+ uuid : 98f07cdd-96da-5880-b6c7-1a5700b73dc4 + }, machine : { type : q35 },- resources : {}+ memory : {+ guest : 1Gi + },+ resources : {+ requests : {+ memory : 1Gi + }+ } },+ networks : [+ {+ name : default ,+ pod : {}+ }+ ], volumes : [ { containerDisk : {- image : quay. io/containerdisks/fedora:latest + image : quay. io/containerdisks/fedora:latest ,+ imagePullPolicy : Always }, name : containerdisk },Future work: There’s still plenty of work required before the API and CRDs can move from their current alpha version to beta. We have a specific kubevirt/kubevirt issue tracking our progress to beta. As set out there and in the KubeVirt community API Graduation Phase Expecations, part of this work is to seek feedback from the wider community so please do feel free to chime in there with any and all feedback on the API and CRDs. You can also track our work on this API through the area/instancetype tag or my personal blog where I will be posting regular updates and demos for instancetypes. " }, { - "id": 17, + "id": 18, "url": "/2022/KubeVirt-installing_Microsoft_Windows_11_from_an_iso.html", "title": "KubeVirt: installing Microsoft Windows 11 from an ISO", "author" : "Jed Lejosne", "tags" : "kubevirt, kubernetes, virtual machine, Microsoft Windows kubernetes, Microsoft Windows container, Windows", "body": "This blog post describes a simple way to deploy a Windows 11 VM with KubeVirt, using an installation ISO as a starting point. Although only tested with Windows 11, the steps described here should also work to deploy other recent versions of Windows. Pre-requisites: You’ll need a Kubernetes cluster with worker node(s) that have at least 6GB of available memory KubeVirt and CDI both deployed on the cluster A storage backend, such as Rook Ceph A Windows iso. One can be found at https://www. microsoft. com/software-download/windows11A suitable test cluster can easily be deployed thanks to KubeVirtCI by running the following commands from the KubeVirt source repository: $ export KUBEVIRT_MEMORY_SIZE=8192M$ export KUBEVIRT_STORAGE=rook-ceph-default$ make cluster-up && make cluster-syncPreparation: Before the virtual machine can be created, we need to setup storage volumes for the ISO and the drive, and write the appropriate VM(I) yaml. Uploading the ISO to a PVC KubeVirt provides a simple tool that is able to do that for us: virtctl. Here’s the command to upload the ISO, just replace /storage/win11. iso with the path to your Windows 11 ISO:virtctl image-upload pvc win11cd-pvc --size 6Gi --image-path=/storage/win11. iso --insecure Creating a persistent volume to use as the Windows drive This will depend on the storage configuration of your cluster. The following yaml, to apply to the cluster using kubectl create, should work just fine on a KubeVirtCI cluster: apiVersion: v1kind: PersistentVolumemetadata: name: task-pv-volume labels: type: localspec: storageClassName: hostpath capacity: storage: 15Gi accessModes: - ReadWriteOnce hostPath: path: /tmp/hostImages/win11 Note Microsoft actually recommends at least 64GB of storage. 
But, unlike some other requirements, the installer will accept smaller disks. This is convenient when testing with KubeVirtCI, as nodes only have about 20GB of free space. However, please bear in mind that such a small drive should only be used for testing purposes, and might lead to instabilities. Creating a persistent volume claim (PVC) for the drive Once again, your milage may vary, but the following PVC yaml works fine on KubeVirtCI: apiVersion: v1kind: PersistentVolumeClaimmetadata: name: disk-windowsspec: accessModes: - ReadWriteOnce resources: requests: storage: 15Gi storageClassName: hostpath The name of PVC, disk-windows here, will be used in the yaml of the VM(I) as the main volume. Creating the VM(I) yaml file KubeVirt already includes an example Windows VMI yaml file, which we’ll use as a starting point here for convenience. Using a VMI yaml is more than enough for testing purposes, however for more serious applications you might want to consider changing it into a VM. First, in the yaml above, bump the memory up to 4Gi, which is a hard requirement of Windows 11. (Windows 10 is happy with 2Gi). Then, let’s add the ISO created above. Add is as a cdrom in the disks section: - cdrom: bus: sata name: winiso And the corresponding volume at the bottom: - name: winiso persistentVolumeClaim: claimName: win11cd-pvc Note that the names should match, and that the claimName is what we used in the virtctl command above. Here is what the VMI looks like after those changes: ---apiVersion: kubevirt. io/v1kind: VirtualMachineInstancemetadata: labels: special: vmi-windows name: vmi-windowsspec: domain: clock: timer: hpet: present: false hyperv: {} pit: tickPolicy: delay rtc: tickPolicy: catchup utc: {} cpu: cores: 2 devices: disks: - disk: bus: sata name: pvcdisk - cdrom: bus: sata name: winiso interfaces: - masquerade: {} model: e1000 name: default tpm: {} features: acpi: {} apic: {} hyperv: relaxed: {} spinlocks: spinlocks: 8191 vapic: {} smm: {} firmware: bootloader: efi: secureBoot: true uuid: 5d307ca9-b3ef-428c-8861-06e72d69f223 resources: requests: memory: 4Gi networks: - name: default pod: {} terminationGracePeriodSeconds: 0 volumes: - name: pvcdisk persistentVolumeClaim: claimName: disk-windows - name: winiso persistentVolumeClaim: claimName: win11cd-pvc Note When customizing this VMI definition or creating your own, please keep in mind that the TPM device and the UEFI firmware with SecureBoot are both hard requirements of Windows 11. Not having them will cause the Windows 11 installation to fail early. Please also note that the SMM CPU feature is required for UEFI + SecureBoot. However, they can all be omitted in the case of a Windows 10 VM(I). Finally, we do not currently support TPM persistence, so any secret stored in the emulated TPM will be lost next time you boot the VMI. For example, do not enable BitLocker, as it will fail to find the encryption key next boot and you will have to manually enter the (55 characters!) recovery key each boot. Windows installation: You should now be able to create the VMI and start the Windows installation process. Just use kubectl to start the VMI created above: kubectl create -f vmi-windows. yaml. Shortly after, open a VNC session to it using virtctl vnc vmi-windows (keep trying until the VMI is running and the VNC session pops up). You should now see the boot screen, and shortly after a prompt to “Press any key to boot from CD or DVD…”. You have a few seconds to do so or the VM will fail to boot. Then just follow the steps to install Windows. 
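If you would rather not rerun the virtctl vnc command by hand, a tiny sketch that keeps retrying until the VMI is up and the session opens (same VMI name as above):
# keep retrying until virt-launcher is running and the VNC session connects;
# be ready to press a key at the "Press any key to boot from CD or DVD..." prompt
until virtctl vnc vmi-windows; do sleep 2; done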
VirtIO drivers installation (optional): Once Windows is installed, it’s a good ideas to install the VirtIO drivers inside the VM, as they can drastically improve performance. The latest version can be downloaded here. virtio-win-gt-x64. msi is the simplest package to install, as you just have to run it as Administrator. Alternatively, KubeVirt has a containerdisk image that can be mounted inside the VM. To use it, just add a simple cdrom disk to the VMI, like: - cdrom: bus: sata name: virtioand the volume: - containerDisk: image: kubevirt/virtio-container-disk name: virtioWhen using KubeVirtCI, a local copy of the image is also available at registry:5000/kubevirt/virtio-container-disk:devel. Further performance improvements: Windows is quite resource-hungry, and you might find that the VM created above is too slow, even with the VirtIO drivers installed. Here are a few steps you can take to improve things: Increasing the RAM is always a good idea, if you have enough available of course. Increasing the number of CPUs, and/or using CPUManager to assign dedicated CPU to the VM should also help a lot. Once the VirtIO drivers are installed, the main drive can also be switched from sata to virtio, and the attached CDROMs can be removed. " }, { - "id": 18, + "id": 19, "url": "/2022/changelog-v0.55.0.html", "title": "KubeVirt v0.55.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 55. 0: Released on: Thu Jul 14 16:33:25 2022 +0000 [PR #7336][iholder-redhat] Introduce clone CRD, controller and API [PR #7791][iholder-redhat] Introduction of an initial deprecation policy [PR #7875][lyarwood] ControllerRevisions of any VirtualMachineFlavorSpec or VirtualMachinePreferenceSpec are stored during the initial start of a VirtualMachine and used for subsequent restarts ensuring changes to the original VirtualMachineFlavor or VirtualMachinePreference do not modify the VirtualMachine and the VirtualMachineInstance it creates. [PR #8011][fossedihelm] Increase virt-launcher memory overhead [PR #7963][qinqon] Bump alpine_with_test_tooling [PR #7881][ShellyKa13] Enable memory dump to be included in VMSnapshot [PR #7926][qinqon] tests: Move main clean function to global AfterEach and create a VM per each infra_test. go Entry. [PR #7845][janeczku] Fixed a bug that caused make generate to fail when API code comments contain backticks. (#7844, @janeczku) [PR #7932][marceloamaral] Addition of kubevirt_vmi_migration_phase_transition_time_from_creation_seconds metric to monitor how long it takes to transition a VMI Migration object to a specific phase from creation time. [PR #7879][marceloamaral] Faster VM phase transitions thanks to an increased virt-controller QPS/Burst [PR #7807][acardace] make cloud-init ‘instance-id’ persistent across reboots [PR #7928][iholder-redhat] bugfix: node-labeller now removes “host-model-cpu. node. kubevirt. io/” and “host-model-required-features. node. kubevirt. io/” prefixes [PR #7841][jean-edouard] Non-root VMs will now migrate to root VMs after a cluster disables non-root. [PR #7933][akalenyu] BugFix: Fix vm restore in case of restore size bigger then PVC requested size [PR #7919][lyarwood] Device preferences are now applied to any default network interfaces or missing volume disks added to a VirtualMachineInstance at runtime. 
[PR #7910][qinqon] tests: Create the expected readiness probe instead of liveness [PR #7732][acardace] Prevent virt-handler from starting a migration twice [PR #7594][alicefr] Enable to run libguestfs-tools pod to run as noroot user [PR #7811][raspbeep] User now gets information about the type of commands which the guest agent does not support. [PR #7590][awels] VMExport allows filesystem PVCs to be exported as either disks or directories. [PR #7683][alicefr] Add –command and –local-ssh-opts” options to virtctl ssh to execute remote command using local ssh method" }, { - "id": 19, + "id": 20, "url": "/2022/KubeVirt-at-KubeCon-EU-2022.html", "title": "KubeVirt at KubeCon EU 2022", "author" : "Andrew Burden", "tags" : "kubevirt, event, community, KubeCon", "body": "KubeCon EU was in Valencia, Spain this year from May 16-20. For many of the 7000+ physical attendees, it was their first in-person conference in several years. With luck, it was the first of many more to come, as KubeCon is a rare opportunity to learn about, from, and with a rich variety of adopters, communities, and vendors that make up the open source and cloud native ecosystem. The KubeVirt community presented two sessions, both on Wednesday May 18th: A Virtual Open Office Hours session, and A Maintainer Track session: ‘It’s All for the Users. More Durable, Secure, and Pluggable. KubeVirt v0. 53’Virtual Open Office Hours: This was a 45-minute project virtual session, hosted by the CNCF. This was on the Bevy platform (which will be familiar to KubeVirt Summit attendees from the past two years) and we had five lovely people from the KubeVirt community ready with a variety of demos and presentations and to answer questions from attendees:+Alice Frosi, Itamar Holder, Miguel Duarte de Mora Barroso, Luboslav Pivarc, and Bartosz Rybacki This was an opportunity for KubeCon attendees (virtual and physical) to ask questions and discuss any topics, and our presenters covered the following: Introduction to KubeVirt, live migration, Istio integration, and CDI hotplug/resize. Despite some initial technical issues and improvised changes, this session went really well. We had about ~25 consistent attendees, and we received a good range of Q&A and interaction with the attendees on all topics presented. It was a very solid 45 minutes. Unfortunately, due to a miscommunication, there is no recording of this session. A huge thanks to the presenters for their time and collaboration in preparing for this. Maintainer Track: Later that day, on the Maintainer Track, Alice also gave an in-depth breakdown of a whole slew of new KubeVirt features and showed a demo with the KubeVirt Cluster API: deploying Kubernetes on top of Kubernetes. You can watch the CNCF recording here, and download the demo video and slides that are available from the schedule. There was a healthy amount of questions, both during Q&A and after the talk. The participants were particularly interested to know how to prepare and customize VM disks with KubeVirt, how to run Windows VM, especially combined with GPUs, and how to expose the Kubernetes API service of a deployed cluster to the KubeVirt cluster API provider outside of the KubeVirt VM. There were additional questions on the status of TPM support and VM migration when the hosting node goes down. Thank you!: Big thanks again to our presenters: Alice Frosi, Itamar Holder, Miguel Duarte de Mora Barroso, Luboslav Pivarc, and Bartosz Rybacki. And everyone who attended the sessions, listened, and asked great questions. 
Want to see more from KubeCon EU 2022?: If you’re interested in seeing more photos and recordings from the event: CNCF’s Photo album (Flickr) of the event. The CNCF video recordings of the sessions on Youtube. And the event schedule to help you find sessions. " }, { - "id": 20, + "id": 21, "url": "/2022/changelog-v0.54.0.html", "title": "KubeVirt v0.54.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 54. 0: Released on: Wed Jun 8 14:15:43 2022 +0000 [PR #7757][orenc1] new alert for excessive number of VMI migrations in a period of time. [PR #7517][ShellyKa13] Add virtctl Memory Dump command [PR #7801][VirrageS] Empty (nil values) of Address and Driver fields in XML will be omitted. [PR #7475][raspbeep] Adds the reason of a live-migration failure to a recorded event in case EvictionStrategy is set but live-migration is blocked due to its limitations. [PR #7739][fossedihelm] Allow virtualmachines/migrate subresource to admin/edit users [PR #7618][lyarwood] The requirement to define a Disk or Filesystem for each Volume associated with a VirtualMachine has been removed. Any Volumes without a Disk or Filesystem defined will have a Disk defined within the VirtualMachineInstance at runtime. [PR #7529][xpivarc] NoReadyVirtController and NoReadyVirtOperator should be properly fired. [PR #7465][machadovilaca] Add metrics for migrations and respective phases [PR #7592][akalenyu] BugFix: virtctl guestfs incorrectly assumes image name" }, { - "id": 21, + "id": 22, "url": "/2022/changelog-v0.53.0.html", "title": "KubeVirt v0.53.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 53. 0: Released on: Mon May 9 14:02:20 2022 +0000 [PR #7533][akalenyu] Add several VM snapshot metrics [PR #7574][rmohr] Pull in cdi dependencies with minimized transitive dependencies to ease API adoption [PR #7318][iholder-redhat] Snapshot restores now support restoring to a target VM different than the source [PR #7474][borod108] Added the following metrics for live migration: kubevirt_migrate_vmi_data_processed_bytes, kubevirt_migrate_vmi_data_remaining_bytes, kubevirt_migrate_vmi_dirty_memory_rate_bytes [PR #7441][rmohr] Add virtctl scp to ease copying files from and to VMs and VMIs [PR #7265][rthallisey] Support steady-state job types in the load-generator tool [PR #7544][fossedihelm] Upgraded go version to 1. 17. 8 [PR #7582][acardace] Fix failed reported migrations when actually they were successful. [PR #7546][0xFelix] Update virtio-container-disk to virtio-win version 0. 1. 217-1 [PR #7530][iholder-redhat] [External Kernel Boot]: Disallow kernel args without providing custom kernel [PR #7493][davidvossel] Adds new EvictionStrategy “External” for blocking eviction which is handled by an external controller [PR #7563][akalenyu] Switch VolumeSnapshot to v1 [PR #7406][acardace] Reject LiveMigrate as a workload-update strategy if the LiveMigration feature gate is not enabled. [PR #7103][jean-edouard] Non-persistent vTPM now supported. Keep in mind that the state of the TPM is wiped after each shutdown. Do not enable Bitlocker! [PR #7277][andreabolognani] This version of KubeVirt includes upgraded virtualization technology based on libvirt 8. 0. 0 and QEMU 6. 2. 0. 
[PR #7130][Barakmor1] Add field to kubevirtCR to set Prometheus ServiceMonitor object’s namespace [PR #7401][iholder-redhat] virt-api deployment is now scalable - replicas are determined by the number of nodes in the cluster [PR #7500][awels] BugFix: Fixed RBAC for admin/edit user to allow virtualmachine/addvolume and removevolume. This allows for persistent disks [PR #7328][apoorvajagtap] Don’t ignore –identity-file when setting –local-ssh=true on virtctl ssh [PR #7469][xpivarc] Users can now enable the NonRoot feature gate instead of NonRootExperimental [PR #7451][fossedihelm] Reduce virt-launcher memory usage by splitting monitoring and launcher processes" }, { - "id": 22, + "id": 23, "url": "/2022/changelog-v0.52.0.html", "title": "KubeVirt v0.52.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 52. 0: Released on: Fri Apr 8 16:17:56 2022 +0000 [PR #7024][fossedihelm] Add an warning message if the client and server virtctl versions are not aligned [PR #7486][rmohr] Move stable. txt location to a more appropriate path [PR #7372][saschagrunert] Fixed KubeVirtComponentExceedsRequestedMemory alert complaining about many-to-many matching not allowed. [PR #7426][iholder-redhat] Add warning for manually determining core-component replica count in Kubevirt CR [PR #7424][maiqueb] Provide interface binding types descriptions, which will be featured in the KubeVirt API. [PR #7422][orelmisan] Fixed setting custom guest pciAddress and bootOrder parameter(s) to a list of SR-IOV NICs. [PR #7421][rmohr] Fix knowhosts file corruption for virtctl ssh [PR #6854][rmohr] Make virtctl ssh work with ssh-rsa+ preauthentication [PR #7267][iholder-redhat] Applied migration configurations can now be found in VMI’s status [PR #7321][iholder-redhat] [Migration Policies]: precedence to VMI labels over Namespace labels [PR #7326][oshoval] The Ginkgo dependency has been upgraded to v2. 1. 3 (major version upgrade) [PR #7361][SeanKnight] Fixed a bug that prevents virtctl from working with clusters accessed via Rancher authentication proxy, or any other cluster where the server URL contains a path component. (#3760) [PR #7255][tyleraharrison] Users are now able to specify --address [ip_address] when using virtctl vnc rather than only using 127. 0. 0. 1 [PR #7275][enp0s3] Add observedGeneration to virt-operator to have a race-free way to detect KubeVirt config rollouts [PR #7233][xpivarc] Bug fix: Successfully aborted migrations should be reported now [PR #7158][AlonaKaplan] Add masquerade VMs support to single stack IPv6. [PR #7227][rmohr] Remove VMI informer from virt-api to improve scaling characteristics of virt-api [PR #7288][raspbeep] Users now don’t need to specify container for kubectl logs <vmi-pod> and kubectl exec <vmi-pod>. [PR #6709][xpivarc] Workloads will be migrated to nonroot implementation if NonRoot feature gate is set. (Except VirtioFS) [PR #7241][lyarwood] Fixed a bug that prevents only a unattend. xml configmap or secret being provided as contents for a sysprep disk. (#7240, @lyarwood)" }, { - "id": 23, + "id": 24, "url": "/2022/Virtual-Machines-with-MetalLB.html", "title": "Load-balancer for virtual machines on bare metal Kubernetes clusters", "author" : "Ram Lavi", "tags" : "Kubevirt, kubernetes, virtual machine, VM, load-balancer, MetalLB", "body": "Introduction: Over the last year, Kubevirt and MetalLB have shown to be powerful duo in order to support fault-tolerant access to an application on virtual machines through an external IP address. 
As a Cluster administrator using an on-prem cluster without a network load-balancer, now it’s possible to use MetalLB operator to provide load-balancer capabilities (with Services of type LoadBalancer) to virtual machines. MetalLB: MetalLB allows you to create Kubernetes services of type LoadBalancer, and provides network load-balancer implementation in on-prem clusters that don’t run on a cloud provider. MetalLB is responsible for assigning/unassigning an external IP Address to your service, using IPs from pre-configured pools. In order for the external IPs to be announced externally, MetalLB works in 2 modes, Layer 2 and BGP: Layer 2 mode (ARP/NDP): This mode - which actually does not implement real load-balancing behavior - provides a failover mechanism where a single node owns the LoadBalancer service, until it fails, triggering another node to be chosen as the service owner. This configuration mode makes the IPs reachable from the local network. In this method, the MetalLB speaker pod announces the IPs in ARP (for IPv4) and NDP (for IPv6) protocols over the host network. From a network perspective, the node owning the service appears to have multiple IP addresses assigned to a network interface. After traffic is routed to the node, the service proxy sends the traffic to the application pods. BGP mode: This mode provides real load-balancing behavior, by establishing BGP peering sessions with the network routers - which advertise the external IPs of the LoadBalancer service, distributing the load over the nodes. To read more on MetalLB concepts, implementation and limitations, please read its documentation. Demo: Virtual machine with external IP and MetalLB load-balancer: With the following recipe we will end up with a nginx server running on a virtual machine, accessible outside the cluster using MetalLB load-balancer with Layer 2 mode. Demo environment setup: We are going to use kind provider as an ephemeral Kubernetes cluster. Prerequirements: First install kind on your machine following its installation guide. To use kind, you will also need to install docker. External IPs on macOS and Windows: This demo runs Docker on Linux, which allows sending traffic directly to the load-balancer’s external IP if the IP space is within the docker IP space. On macOS and Windows however, docker does not expose the docker network to the host, rendering the external IP unreachable from other kind nodes. In order to workaround this, one could expose pods and services using extra port mappings as shown in the extra port mappings section of kind’s Configuration Guide. Deploying cluster: To start a kind cluster: kind create clusterIn order to interact with the specific cluster created: kubectl cluster-info --context kind-kindInstalling components: Installing MetalLB on the cluster: There are many ways to install MetalLB. For the sake of this example, we will install MetalLB via manifests. To do this, follow this guide. Confirm successful installation by waiting for MetalLB pods to have a status of Running: kubectl get pods -n metallb-system --watchInstalling Kubevirt on the cluster: Following Kubevirt user guide to install released version v0. 51. 0 export RELEASE=v0. 51. 0kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator. yaml kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr. 
yaml kubectl -n kubevirt wait kv kubevirt --timeout=360s --for condition=AvailableNow we have a Kubernetes cluster with all the pieces to start the Demo. Network resources configuration: Setting Address Pool to be used by the LoadBalancer: In order to complete the Layer 2 mode configuration, we need to set a range of IP addresses for the LoadBalancer to use. On Linux we can use the docker kind network (macOS and Windows users see External IPs Prerequirement), so by using this command: docker network inspect -f '' kindYou should get the subclass you can set the IP range from. The output should contain a cidr such as 172. 18. 0. 0/16. Using this result we will create the following Layer 2 address pool with 172. 18. 1. 1-172. 18. 1. 16 range: cat <<EOF | kubectl apply -f -apiVersion: v1kind: ConfigMapmetadata: namespace: metallb-system name: configdata: config: | address-pools: - name: addresspool-sample1 protocol: layer2 addresses: - 172. 18. 1. 1-172. 18. 1. 16EOFNetwork utilization: Spin up a Virtual Machine running Nginx: Now it’s time to start-up a virtual machine running nginx using the following yaml. The virtual machine has a metallb-service=nginx we created to use when creating the service. cat <<EOF | kubectl apply -f -apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: name: fedora-nginx namespace: default labels: metallb-service: nginxspec: running: true template: metadata: labels: metallb-service: nginx spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} name: default resources: requests: memory: 1024M networks: - name: default pod: {} terminationGracePeriodSeconds: 0 volumes: - containerDisk: image: kubevirt/fedora-cloud-container-disk-demo name: containerdisk - cloudInitNoCloud: userData: |- #cloud-config password: fedora chpasswd: { expire: False } packages: - nginx runcmd: - [ systemctl , enable , --now , nginx ] name: cloudinitdiskEOFExpose the virtual machine with a typed LoadBalancer service: When creating the LoadBalancer typed service, we need to remember annotating the address-pool we want to use addresspool-sample1 and also add the selector metallb-service: nginx: cat <<EOF | kubectl apply -f -kind: ServiceapiVersion: v1metadata: name: metallb-nginx-svc namespace: default annotations: metallb. universe. tf/address-pool: addresspool-sample1spec: externalTrafficPolicy: Local ipFamilies: - IPv4 ports: - name: tcp-5678 protocol: TCP port: 5678 targetPort: 80 type: LoadBalancer selector: metallb-service: nginxEOFNotice that the service got assigned with an external IP from the range assigned by the address pool: kubectl get service -n default metallb-nginx-svcExample output: NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGEmetallb-nginx-svc LoadBalancer 10. 96. 254. 136 172. 18. 1. 1 5678:32438/TCP 53sAccess the virtual machine from outside the cluster: Finally, we can check that the nginx server is accessible from outside the cluster: curl -s -o /dev/null 172. 18. 1. 1:5678 && echo URL exists Example output: URL existsNote that it may take a short while for the URL to work after setting the service. Doing this on your own cluster: Moving outside the demo example, one who would like use MetalLB on their real life cluster, should also take other considerations in mind: User privileges: you should have cluster-admin privileges on the cluster - in order to install MetalLB. 
IP Ranges for MetalLB: getting IP Address pools allocation for MetalLB depends on your cluster environment: If you’re running a bare-metal cluster in a shared host environment, you need to first reserve this IP Address pool from your hosting provider. Alternatively, if you’re running on a private cluster, you can use one of the private IP Address spaces (a. k. a RFC1918 addresses). Such addresses are free, and work fine as long as you’re only providing cluster services to your LAN. Conclusion: In this blog post we used MetalLB to expose a service using an external IP assigned to a virtual machine. This illustrates how virtual machine traffic can be load-balanced via a service. " }, { - "id": 24, + "id": 25, "url": "/2022/changelog-v0.51.0.html", "title": "KubeVirt v0.51.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 51. 0: Released on: Tue Mar 8 21:06:59 2022 +0000 [PR #7102][machadovilaca] Add Virtual Machine name label to virt-launcher pod [PR #7139][davidvossel] Fixes inconsistent VirtualMachinePool VM/VMI updates by using controller revisions [PR #6754][jean-edouard] New and resized disks are now always 1MiB-aligned [PR #7086][acardace] Add ‘EvictionStrategy’ as a cluster-wide setting in the KubeVirt CR [PR #7232][rmohr] Properly format the PDB scale event during migrations [PR #7223][Barakmor1] Add a name label to virt-operator pods [PR #7221][davidvossel] RunStrategy: Once - allows declaring a VM should run once to a finalized state [PR #7091][EdDev] SR-IOV interfaces are now reported in the VMI status even without an active guest-agent. [PR #7169][rmohr] Improve device plugin de-registration in virt-handler and some test stabilizations [PR #6604][alicefr] Add shareable option to identify if the disk is shared with other VMs [PR #7144][davidvossel] Garbage collect finalized migration objects only leaving the most recent 5 objects [PR #6110][xpivarc] [Nonroot] SRIOV is now available. " }, { - "id": 25, + "id": 26, "url": "/2022/changelog-v0.50.0.html", "title": "KubeVirt v0.50.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 50. 0: Released on: Wed Feb 9 18:01:08 2022 +0000 [PR #7056][fossedihelm] Update k8s dependencies to 0. 23. 1 [PR #7135][davidvossel] Switch from reflects. DeepEquals to equality. Semantic. DeepEquals() across the entire project [PR #7052][sradco] Updated recording rule “kubevirt_vm_container_free_memory_bytes” [PR #7000][iholder-redhat] Adds a possibility to override default libvirt log filters though VMI annotations [PR #7064][davidvossel] Fixes issue associated with blocked uninstalls when VMIs exist during removal [PR #7097][iholder-redhat] [Bug fix] VMI with kernel boot stuck on “Terminating” status if more disks are defined [PR #6700][VirrageS] Simplify replacing time. Ticker in agent poller and fix default values for qemu-*-interval flags [PR #6581][ormergi] SRIOV network interfaces are now hot-plugged when disconnected manually or due to aborted migrations. [PR #6924][EdDev] Support for legacy GPU definition is removed. Please see https://kubevirt. io/user-guide/virtual_machines/host-devices on how to define host-devices. [PR #6735][uril] The command migrate_cancel was added to virtctl. It cancels an active VM migration. [PR #6883][rthallisey] Add instance-type to cloud-init metadata [PR #6999][maya-r] When expanding disk images, take the minimum between the request and the capacity - avoid using the full underlying file system on storage like NFS, local. 
[PR #6946][vladikr] Numa information of an assigned device will be presented in the devices metadata [PR #6042][iholder-redhat] Fully support cgroups v2, include a new cohesive package and perform major refactoring. [PR #6968][vladikr] Added Writeback disk cache support [PR #6995][sradco] Alert OrphanedVirtualMachineImages name was changed to OrphanedVirtualMachineInstances. [PR #6923][rhrazdil] Fix issue with ssh being unreachable on VMIs with Istio proxy [PR #6821][jean-edouard] Migrating VMIs that contain dedicated CPUs will now have properly dedicated CPUs on target [PR #6793][oshoval] Add infoSource field to vmi. status. interfaces. " }, { - "id": 26, + "id": 27, "url": "/2022/Dedicated-migration-network.html", "title": "Dedicated migration network in KubeVirt", "author" : "Jed Lejosne", "tags" : "kubevirt, kubernetes, virtual machine, VM, live migration, dedicated network", "body": "Since version 0. 49, KubeVirt supports live migrating VMIs over a separate network than the one Kubernetes is running on. Running migrations over a dedicated network is a great way to increase migration bandwidth and reliability. This article gives an overview of the feature as well as a concrete example. For more technical information, refer to the KubeVirt documentation. Hardware configuration: The simplest way to use the feature is to find an unused NIC on every worker node, and to connect them all to the same switch. All NICs must have the same name. If they don’t, they should be permanently renamed. The process for renaming NICs varies depending on your operating system, refer to its documentation if you need help. Adding servers to the network for services like DHCP or DNS is an option but it is not required. If a DHCP is running, it is best if it doesn’t provide routes to other networks / the internet, to keep the migration network isolated. Cluster configuration: The interface between the physical network and KubeVirt is a NetworkAttachmentDefinition (NAD), created in the namespace where KubeVirt is installed. The implementation of the NAD is up to the admin, as long as it provides a link to the secondary network. The admin must also ensure that the NAD is able to provide cluster-wide IPs, either through a physical DHCP, or with another CNI plugin like whereabouts Important: the subnet used here must be completely distinct from the ones used by the main Kubernetes network, to ensure proper routing. Testing: If you just want to test the feature, KubeVirtCI supports the creation of multiple nodes, as well as secondary networks. All you need is to define the right environment variables before starting the cluster. See the example below for more info (note that text in the “video” can actually be selected and copy/pasted). Example: Here is a quick example of a dual-node KubeVirtCI cluster running a migration over a secondary network. The description of the clip includes more detailed information about the steps involved. " }, { - "id": 27, + "id": 28, "url": "/2022/KubeVirt-Summit-2022.html", "title": "KubeVirt Summit is coming back!", "author" : "Chandler Wilkerson", "tags" : "kubevirt, event, community", "body": "The second online KubeVirt Summit is coming on February 16, 2022! When: The event will take place online during two half-days: Dates: February 16 and 17, 2022. Time: 14:00 – 19:00 UTC (9:00–14:00 EST, 15:00–20:00 CET)Register: KubeVirt Summit is hosted on Community. CNCF. io. 
Because of how that platform works, you need to register for each of the two days of the summit independantly: Register for Day 1 Register for Day 2You will need to create an account with CNCF. io if you have not before. Attendance is free. Keep up to date: Connect with the KubeVirt Community through our community page. We are looking forward to meeting you there! " }, { - "id": 28, + "id": 29, "url": "/2022/changelog-v0.49.0.html", "title": "KubeVirt v0.49.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 49. 0: Released on: Tue Jan 11 17:27:09 2022 +0000 [PR #7004][iholder-redhat] Bugfix: Avoid setting block migration for volumes used by read-only disks [PR #6959][enp0s3] generate event when target pod enters unschedulable phase [PR #6888][assafad] Added common labels into alert definitions [PR #6166][vasiliy-ul] Experimental support of AMD SEV [PR #6980][vasiliy-ul] Updated the dependencies to include the fix for CVE-2021-43565 (KubeVirt is not affected) [PR #6944][iholder-redhat] Remove disabling TLS configuration from Live Migration Policies [PR #6800][jean-edouard] CPU pinning doesn’t require hardware-assisted virtualization anymore [PR #6501][ShellyKa13] Use virtctl image-upload to upload archive content [PR #6918][iholder-redhat] Bug fix: Unscheduable host-model VMI alert is now properly triggered [PR #6796][Barakmor1] ‘kubevirt-operator’ changed to ‘virt-operator’ on ‘managed-by’ label in kubevirt’s components made by virt-operator [PR #6036][jean-edouard] Migrations can now be done over a dedicated multus network [PR #6933][erkanerol] Add a new lane for monitoring tests [PR #6949][jean-edouard] KubeVirt components should now be successfully removed on CR deletion, even when using only 1 replica for virt-api and virt-controller [PR #6954][maiqueb] Update the virtctl exposed services IPFamilyPolicyType default to IPFamilyPolicyPreferDualStack [PR #6931][fossedihelm] added DryRun to AddVolumeOptions and RemoveVolumeOptions [PR #6379][nunnatsa] Fix issue https://bugzilla. redhat. com/show_bug. cgi?id=1945593 [PR #6399][iholder-redhat] Introduce live migration policies that allow system-admins to have fine-grained control over migration configuration for different sets of VMs. [PR #6880][iholder-redhat] Add full Podman support for make and make test [PR #6702][acardace] implement virt-handler canary upgrade and rollback for faster and safer rollouts [PR #6717][davidvossel] Introducing the VirtualMachinePools feature for managing stateful VMs at scale [PR #6698][rthallisey] Add tracing to the virt-controller work queue [PR #6762][fossedihelm] added DryRun mode to virtcl to migrate command [PR #6891][rmohr] Fix “Make raw terminal failed: The handle is invalid?” issue with “virtctl console” when not executed in a pty [PR #6783][rmohr] Skip SSH RSA auth if no RSA key was explicitly provided and not key exists at the default location" }, { - "id": 29, + "id": 30, "url": "/2021/changelog-v0.48.0.html", "title": "KubeVirt v0.48.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 48. 0: Released on: Mon Dec 6 18:26:51 2021 +0000 [PR #6670][futuretea] Added ‘virtctl soft-reboot’ command to reboot the VMI. 
[PR #6861][orelmisan] virtctl errors are written to stderr instead of stdout [PR #6836][enp0s3] Added PHASE and VMI columns for the ‘kubectl get vmim’ CLI output [PR #6784][nunnatsa] kubevirt-config configMap is no longer supported for KubeVirt configuration [PR #6839][ShellyKa13] fix restore of VM with RunStrategy [PR #6533][zcahana] Paused VMIs are now marked as unready even when no readinessProbe is specified [PR #6858][rmohr] Fix a nil pointer in virtctl in combination with some external auth plugins [PR #6780][fossedihelm] Add PatchOptions to the Patch request of the VirtualMachineInstanceInterface [PR #6773][iholder-redhat] alert if migration for VMI with host-model CPU is stuck since no node is suitable [PR #6714][rhrazdil] Shorten timeout for Istio proxy detection [PR #6725][fossedihelm] added DryRun mode to virtcl for pause and unpause commands [PR #6737][davidvossel] Pending migration target pods timeout after 5 minutes when unschedulable [PR #6814][fossedihelm] Changed some terminology to be more inclusive [PR #6649][Barakmor1] Designate the apps. kubevirt. io/component label for KubeVirt components. [PR #6650][victortoso] Introduces support to ich9 or ac97 sound devices [PR #6734][Barakmor1] replacing the command that extract libvirtd’s pid to avoid this error: [PR #6802][rmohr] Maintain a separate api package which synchronizes to kubevirt. io/api for better third party integration with client-gen [PR #6730][zhhray] change kubevrit cert secret type from Opaque to kubernetes. io/tls [PR #6508][oshoval] Add missing domain to guest search list, in case subdomain is used. [PR #6664][vladikr] enable the display and ramfb for vGPUs by default [PR #6710][iholder-redhat] virt-launcher fix - stop logging successful shutdown when it isn’t true [PR #6162][vladikr] KVM_HINTS_REALTIME will always be set when dedicatedCpusPlacement is requested [PR #6772][zcahana] Bugfix: revert #6565 which prevented upgrades to v0. 47. [PR #6722][zcahana] Remove obsolete scheduler. alpha. kubernetes. io/critical-pod annotation [PR #6723][acardace] remove stale pdbs created by < 0. 41. 1 virt-controller [PR #6721][iholder-redhat] Set default CPU model in VMI spec, even if not defined in KubevirtCR [PR #6713][zcahana] Report WaitingForVolumeBinding VM status when PVC/DV-type volumes reference unbound PVCs [PR #6681][fossedihelm] Users can use –dry-run flag [PR #6663][jean-edouard] The number of virt-api and virt-controller replicas is now configurable in the CSV [PR #5981][maya-r] Always resize disk. img files to the largest size at boot. " }, { - "id": 30, + "id": 31, "url": "/2021/Running-Realtime-Workloads.html", "title": "Running real-time workloads with improved performance", "author" : "Jordi Gil", "tags" : "kubevirt, kubernetes, virtual machine, VM, real-time, NUMA, CPUManager", "body": "Motivation: It has been possible in KubeVirt for some time already to run a VM running with a RT kernel, however the performance of such workloads never achieved parity against running on top of a bare metal host virtualized. With the availability of NUMA and CPUManager as features in KubeVirt, we were close to a point where we had almost all the ingredients to deliver the recommended tunings in libvirt for achieving the low CPU latency needed for such workloads. We were missing two important settings: The ability to configure the VCPUs to run with real-time scheduling policy. Lock the VMs huge pages in RAM to prevent swapping. 
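In the VirtualMachineInstance API these two knobs surface as the fields collected in the abridged fragment below. Treat it as an illustrative sketch of the relevant field paths only; the complete, working manifest appears later in this post.

```yaml
# Abridged fragment, not a complete spec: the real-time-related fields
# as they appear in the full manifest later in this post.
spec:
  domain:
    cpu:
      dedicatedCpuPlacement: true   # pin VCPUs via the CPU manager
      realtime:
        mask: 0                     # apply the real-time scheduling policy to VCPU 0 only
    memory:
      hugepages:
        pageSize: 1Gi               # back guest memory with 1Gi huge pages; KubeVirt locks it to prevent swapping
```

Omitting the mask (realtime: {}) applies the real-time scheduling policy to all dedicated VCPUs, as discussed further down.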
Setting up the Environment: To achieve the lowest latency possible in a given environment, first it needs to be configured to allow its resources to be consumed efficiently. The Cluster: The target node has to be configured to reserve memory for hugepages and the kernel to allow threads to run with real-time scheduling policy. The memory can be reserved as a kernel boot parameter or by changing the kernel’s page count at runtime. The kernel’s runtime scheduling limit can be adjusted either by installing a real-time kernel in the node (the recommended option), or changing the kernel’s setting kernel. sched_rt_runtime_us to equal -1, to allow for unlimited runtime of real-time scheduled threads. This kernel setting defines the time period to be devoted to running real-time threads. KubeVirt will detect if the node has been configured with unlimited runtime and will label the node with kubevirt. io/realtime to highlight the capacity of running real-time workloads. Later on we’ll come back to this label when we talk about how the workload is scheduled. It is also recommended tuning the node’s BIOS settings for optimal real-time performance is also recommended to achieve even lower CPU latencies. Consult with your hardware provider to obtain the information on how to best tune your equipment. KubeVirt: The VM will require to be granted fully dedicated CPUs and be able to use huge pages. These requirements can be achieved in KubeVirt by enabling the feature gates of CPUManager and NUMA in the KubeVirt CR. There is no dedicated feature gate to enable the new real-time optimizations. The Manifest: With the cluster configured to provide the dedicated resources for the workload, it’s time to review an example of a VM manifest using the optimizations for low CPU latency. The first focus is to reduce the VM’s I/O by limiting it’s devices to only serial console: spec. domain. devices. autoattachSerialConsole: truespec. domain. devices. autoattachMemBalloon: falsespec. domain. devices. autoattachGraphicsDevice: falseThe pod needs to have a guaranteed QoS for its memory and CPU resources, to make sure that the CPU manager will dedicate the requested CPUs to the pod. spec. domain. resources. request. cpu: 2spec. domain. resources. request. memory: 1Gispec. domain. resources. limits. cpu: 2spec. domain. resources. limits. memory: 1GiStill on the CPU front, we add the settings to instruct the KVM to give a clear visibility of the host’s features to the guest, request the CPU manager in the node to isolate the assigned CPUs and to make sure that the emulator and IO threads in the VM run in their own dedicated VCPU rather than sharing the computational time with the workload. spec. domain. cpu. model: host-passthroughspec. domain. cpu. dedicateCpuPlacement: truespec. domain. cpu. isolateEmulatorThread: truespec. domain. cpu. ioThreadsPolicy: autoWe also request the huge pages size and guaranteed NUMA topology that will pin the CPU and memory resources to a single NUMA node in the host. The Kubernetes scheduler will perform due diligence to schedule the pod in a node with enough free huge pages of the given size. spec. domain. cpu. numa. guestMappingPassthrough: {}spec. domain. memory. hugepages. pageSize: 1GiLastly, we define the new real-time settings to instruct KubeVirt to apply the real-time scheduling policy for the pinned VCPUs and lock the process memory to avoid from being swapped by the host. In this example, we’ll configure the workload to only apply the real-time scheduling policy to VCPU 0. spec. 
domain. cpu. realtime. mask: 0Alternatively, if no mask value is specified, all requested CPUs will be configured for real-time scheduling. spec. domain. cpu. realtime: {}The following yaml is a complete manifest including all the settings we just reviewed. ---apiVersion: kubevirt. io/v1kind: VirtualMachinemetadata: labels: kubevirt. io/vm: fedora-realtime name: fedora-realtime namespace: pocspec: running: true template: metadata: labels: kubevirt. io/vm: fedora-realtime spec: domain: devices: autoattachSerialConsole: true autoattachMemBalloon: false autoattachGraphicsDevice: false disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk machine: type: resources: requests: memory: 1Gi cpu: 2 limits: memory: 1Gi cpu: 2 cpu: model: host-passthrough dedicatedCpuPlacement: true isolateEmulatorThread: true ioThreadsPolicy: auto features: - name: tsc-deadline policy: require numa: guestMappingPassthrough: {} realtime: mask: 0 memory: hugepages: pageSize: 1Gi terminationGracePeriodSeconds: 0 volumes: - containerDisk: image: quay. io/kubevirt/fedora-realtime-container-disk:20211008_5a22acb18 name: containerdisk - cloudInitNoCloud: userData: |- #cloud-config password: fedora chpasswd: { expire: False } bootcmd: - tuned-adm profile realtime name: cloudinitdiskThe Deployment: Because the manifest has enabled the real-time setting, when deployed KubeVirt applies the node label selector so that the Kubernetes scheduler will place the deployment in a node that is able to run threads with real-time scheduling policy (node label kubevirt. io/realtime). But there’s more, because the manifest also specifies the pod’s resource need of dedicated CPUs, KubeVirt will also add the node selector of cpumanager=true to guarantee that the pod is able to use the assigned CPUs alone. And finally, the scheduler also takes care of guaranteeing that the target node has sufficient free huge pages of the specified size (1Gi in our example) to satisfy the memory requested. With all these validations checked, the pod is successfully scheduled. Key Takeaways: Being able to run real-time workloads in KubeVirt with lower CPU latency opens new possibilities and expands the use cases where KubeVirt can assist in migrating legacy VMs into the cloud. Real-time workloads are extremely sensitive to the amount of layers between the bare metal and its runtime: the more layers in between, the higher the latency will be. The changes introduced in KubeVirt help reduce such waste and provide lower CPU latencies as the hardware is more efficiently tuned. " }, { - "id": 31, + "id": 32, "url": "/2021/changelog-v0.46.0.html", "title": "KubeVirt v0.46.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 46. 0: Released on: Fri Oct 8 21:12:33 2021 +0000 [PR #6425][awels] Hotplug disks are possible when iothreads are enabled. [PR #6297][acardace] mutate migration PDBs instead of creating an additional one for the duration of the migration. [PR #6464][awels] BugFix: Fixed hotplug race between kubelet and virt-handler when virt-launcher dies unexpectedly. [PR #6465][salanki] Fix corrupted DHCP Gateway Option from local DHCP server, leading to rejected IP configuration on Windows VMs. [PR #6458][vladikr] Tagged SR-IOV interfaces will now appear in the config drive metadata [PR #6446][brybacki] Access mode for virtctl image upload is now optional. This version of virtctl now requires CDI v1. 
34 or greater [PR #6391][zcahana] Cleanup obsolete permissions from virt-operator’s ClusterRole [PR #6419][rthallisey] Fix virt-controller panic caused by lots of deleted VMI events [PR #5972][kwiesmueller] Add a ssh command to virtctl that can be used to open SSH sessions to VMs/VMIs. [PR #6403][jrife] Removed go module pinning to an old version (v0. 3. 0) of github. com/go-kit/kit [PR #6367][brybacki] virtctl imageupload now uses DataVolume. spec. storage [PR #6198][iholder-redhat] Fire a Prometheus alert when a lot of REST failures are detected in virt-api [PR #6211][davidvossel] cluster-profiler pprof gathering tool and corresponding “ClusterProfiler” feature gate [PR #6323][vladikr] switch live migration to use unix sockets [PR #6374][vladikr] Fix the default setting of CPU requests on vmipods [PR #6283][rthallisey] Record the time it takes to delete a VMI and expose it as a metric [PR #6251][rmohr] Better place vcpu threads on host cpus to form more efficient passthrough architectures [PR #6377][rmohr] Don’t fail on failed selinux relabel attempts if selinux is permissive [PR #6308][awels] BugFix: hotplug was broken when using it with a hostpath volume that was on a separate device. [PR #6186][davidvossel] Add resource and verb labels to rest_client_requests_total metric" }, { - "id": 32, + "id": 33, "url": "/2021/Importing-EC2-to-KubeVirt.html", "title": "Import AWS AMIs as KubeVirt Golden Images", "author" : "David Vossel", "tags" : "kubevirt, kubernetes, virtual machine, VM, AWS, EC2, AMI", "body": "Breaking Out: There comes a point where an operations team has invested so heavily in a Iaas platform that they are effectively locked into that platform. For example, here’s one scenario outlining how this can happen. An operations team has created automation around building VM images and keeping images up-to-date. In AWS that automation likely involves starting an EC2 instance, injecting some application logic into that instance, sealing the instance’s boot source as an AMI, and finally copying that AMI around to all the AWS regions the team deploys in. If the team was interested in evaluating KubeVirt as an alternative Iaas platform to AWS’s EC2, given the team’s existing tooling there’s not a clear path for doing this. It’s that scenario where the tooling in the kubevirt-cloud-import project comes into play. Kubevirt Cloud Import: The KubeVirt Cloud Import project explores the practicality of transitioning VMs from various cloud providers into KubeVirt. As of writing this, automation for exporting AMIs from EC2 into KubeVirt works, and it’s really not all that complicated. This blog post will explore the fundamentals of how AMIs are exported, and how the KubeVirt Cloud Import project leverages these techniques to build automation pipelines. Nuts and Bolts of Importing AMIs: Official AWS AMI Export Support: AWS supports an api for exporting AMIs as a file to an s3 bucket. This support works quite well, however there’s a long list of limitations that impact what AMIs are eligible for export. The most limiting of those items is the one that prevents any image built from an AMI on the marketplace from being eligible for the official export support. Unofficial AWS export Support: Regardless of what AWS officially supports or not, there’s absolutely nothing preventing someone from exporting an AMI’s contents themselves. 
The technique just involves creating an EC2 instance, attaching an EBS volume (containing the AMI contents) as a block device, then streaming that block devices contents where ever you want. Theoretically, the steps roughly look like this. Convert AMI to a volume by finding the underlying AMI’s snapshot and converting it to an EBS volume. Create an EC2 instance with the EBS volume containing the AMI contents as a secondary data device. Within the EC2 guest, copy the EBS device’s contents as a disk img dd if=/dev/xvda of=/tmp/disk/disk. img Then upload the disk image to an object store like s3. aws s3 cp /tmp/disk/disk. img s3://my-b1-bucket/ upload: . . /tmp/disk/disk. img to s3://my-b1-bucket/disk. imgBasics of Importing Data into KubeVirt: Once a disk image is in s3, a KubeVirt companion project called the Containerized Data Importer (or CDI for short) can be used to import the disk from s3 into a PVC within the KubeVirt cluster. This import flow can be expressed as a CDI DataVolume custom resource. Below is an example yaml for importing s3 contents into a PVC using a DataVolume apiVersion: cdi. kubevirt. io/v1beta1kind: DataVolumemetadata: name: example-import-dv spec: source: s3: url: https://s3. us-west-2. amazonaws. com/my-ami-exports/kubevirt-image-exports/export-ami-0dc4e69702f74df50. vmdk secretRef: my-s3-credentials pvc: accessModes: - ReadWriteOnce resources: requests: storage: 6Gi Once the AMI file content is stored in a PVC, CDI can be used further to clone that AMI’s PVC on a per VM basis. This effectively recreates the AMI to EC2 relationship that exists in AWS. You can find more information about CDI here Automating AMI import: Using the technique of exporting an AMI to an s3 bucket and importing the AMI from s3 into a KubeVirt cluster using CDI, the Kubevirt Cloud Import project provides the glue necessary for tying all of these pieces together in the form of the import-ami cli command and a Tekton task. Automation using the import-ami CLI command: The import-ami takes a set of arguments related to the AMI you wish to import into KubeVirt and the name of the PVC you’d like the AMI to be imported into. Upon execution, import-ami will call all the appropriate AWS and KubeVirt APIs to make this work. The result is a PVC with the AMI contents that is capable of being launched by a KubeVirt VM. In the example below, A publicly shared fedora34 AMI is imported into the KubeVirt cluster as a PVC called fedora34-golden-image export S3_BUCKET=my-bucketexport S3_SECRET=s3-readonly-credexport AWS_REGION=us-west-2export AMI_ID=ami-00a4fdd3db8bb2851export PVC_STORAGECLASS=rook-ceph-blockexport PVC_NAME=fedora34-golden-imageimport-ami --s3-bucket $S3_BUCKET --region $AWS_REGION --ami-id $AMI_ID --pvc-storageclass $PVC_STORAGECLASS --s3-secret $S3_SECRET --pvc-name $PVC_NAMEAutomation using the import-ami Tekton Task: In addition to the import-ami cli command, the KubeVirt Cloud Import project also includes a Tekton task which wraps the cli command and allows integrating AMI import into a Tekton pipeline. Using a Tekton pipeline, someone can combine the task of importing an AMI into KubeVirt with the task of starting a VM using that AMI. An example pipeline can be found here which outlines how this is accomplished. Below is a pipeline run that uses the example pipeline to import the publicly shared fedora34 AMI into a PVC, then starts a VM using that imported AMI. cat << EOF > pipeline-run. yamlapiVersion: tekton. 
dev/v1beta1kind: PipelineRunmetadata: name: my-vm-creation-pipeline namespace: defaultspec: serviceAccountName: my-kubevirt-service-account pipelineRef: name: create-vm-pipeline params: - name: vmName value: vm-fedora34 - name: s3Bucket value: my-kubevirt-exports - name: s3ReadCredentialsSecret value: my-s3-read-only-credentials - name: awsRegion value: us-west-2 - name: amiId value: ami-00a4fdd3db8bb2851 - name: pvcStorageClass value: rook-ceph-block - name: pvcName value: fedora34 - name: pvcNamespace value: default - name: pvcSize value: 6Gi - name: pvcAccessMode value: ReadWriteOnce - name: awsCredentialsSecret value: my-aws-credentialsEOFkubectl create -f pipeline-run. yamlAfter posting the pipeline run, watch for the pipeline run to complete. $ kubectl get pipelinerunselecting docker as container runtimeNAME SUCCEEDED REASON STARTTIME COMPLETIONTIMEmy-vm-creation-pipeline True Succeeded 11m 9m54sThen observe that the resulting VM is online $ kubectl get vmiselecting docker as container runtimeNAME AGE PHASE IP NODENAME READYvm-fedora34 11m Running 10. 244. 196. 175 node01 TrueFor more detailed and up-to-date information about how to automate AMI import using Tekton, view the KubeVirt Cloud Import README. md Key Takeaways: The portability of workloads across different environments is becoming increasingly important and operations teams need to be vigilant about avoiding vendor lock in. For containers, Kubernetes is an attractive option because it provides a consistent API layer that can run across multiple cloud platforms. KubeVirt can provide that same level of consistency for VMs. As a community we need to invest further into automation tools that allow people to make the transition to KubeVirt. " }, { - "id": 33, + "id": 34, "url": "/2021/changelog-v0.45.0.html", "title": "KubeVirt v0.45.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 45. 0: Released on: Wed Sep 8 13:56:47 2021 +0000 [PR #6191][marceloamaral] Addition of perfscale-load-generator to perform stress tests to evaluate the control plane [PR #6248][VirrageS] Reduced logging in hot paths [PR #6079][weihanglo] Hotplug volume can be unplugged at anytime and reattached after a VM restart. [PR #6101][rmohr] Make k8s client rate limits configurable [PR #6204][sradco] This PR adds to each alert the runbook url that points to a runbook that provides additional details on each alert and how to mitigate it. [PR #5974][vladikr] a list of desired mdev types can now be provided in KubeVirt CR to kubevirt to configure these devices on relevant nodes [PR #6147][rmohr] Fix rbac permissions for freeze/unfreeze, addvolume/removevolume, guestosinfo, filesystemlist and userlist [PR #6161][ashleyschuett] Remove HostDevice validation on VMI creation [PR #6078][zcahana] Report ErrImagePull/ImagePullBackOff VM status when image pull errors occur [PR #6176][kwiesmueller] Fix goroutine leak in virt-handler, potentially causing issues with a high turnover of VMIs. [PR #6047][ShellyKa13] Add phases to the vm snapshot api, specifically a failure phase [PR #6138][ansijain] NA" }, { - "id": 34, + "id": 35, "url": "/2021/Virtual-machines-in-Istio-service-mesh.html", "title": "Running virtual machines in Istio service mesh", "author" : "Radim Hrazdil", "tags" : "kubevirt, istio, virtual machine, VM, service mesh, mesh", "body": "Introduction: This blog post demonstrates running virtual machines in Istio service mesh. 
Istio service mesh allows to monitor, visualize, and manage traffic between pods and external services byinjecting a proxy container - a sidecar - which forwards inbound and outbound traffic of a pod/virtual machine. This allows the sidecar to collect metadata about the proxied traffic and also actively interfere with it. For more in-depth information about the Istio proxy mechanism, see this blog post published by Dough Smith et al. The main features of Istio are traffic shifting (migrating traffic from an old to new version of a service), dynamic request routing, fault injection or traffic mirroring for testing/debugging purposes, and more. Visit Istio documentation to learn about all its features. Istio featureset may be further extended by installing addons. Kiali, for example, is a UI dashboard that provides traffic informationof all microservices in a mesh, capable of composing communication graph between all microservices. Prerequisites: KubeVirt v0. 43. 0 CRI-O v1. 19. 0Limitations: Istio is only supported with masquerade network binding and pod network over IPv4. Demo: This section covers deployment of a local cluster with Istio service mesh, KubeVirt installation and creation of an Istio-enabled virtual machine. Finally, Kiali dashboard is used to examine both inbound and outbound traffic of the created virtual machine. Run Kubernetes cluster: In this blog post, we are going to use kubevirtci as our Kubernetes ephemeral cluster provider. Follow these steps to deploy a local cluster with pre-installed Istio service mesh: git clone https://github. com/kubevirt/kubevirtcicd kubevirtciexport KUBEVIRTCI_TAG=2108222252-0007793# Pin to version used in this blog post in case# k8s-1. 21 provider version disappears in the futuregit checkout $KUBEVIRTCI_TAGexport KUBEVIRT_NUM_NODES=2export KUBEVIRT_PROVIDER=k8s-1. 21export KUBEVIRT_DEPLOY_ISTIO=trueexport KUBEVIRT_WITH_CNAO=truemake cluster-upexport KUBECONFIG=$(. /cluster-up/kubeconfig. sh)For details about Istio configuration, see Istio kubevirtci install script. Install Kubevirt: Following KubeVirt user guide to install released version v0. 43. 0: export RELEASE=v0. 43. 0kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator. yaml kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr. yaml kubectl -n kubevirt wait kv kubevirt --timeout=180s --for condition=AvailableInstall Istio addons: While the ephemeral kubevirtci installs core Istio components, addons like Kiali dashboard are not installed by default. Download Istio manifests and client binary by running the following command: export ISTIO_VERSION=1. 10. 0curl -L https://istio. io/downloadIstio | sh -and export path to the istioctl binary by following the output of the above command. Finally, deploy Kiali, Jaeger and Prometheus addons: kubectl create -f istio-${ISTIO_VERSION}/samples/addons/kiali. yamlkubectl create -f istio-${ISTIO_VERSION}/samples/addons/jaeger. yamlkubectl create -f istio-${ISTIO_VERSION}/samples/addons/prometheus. yamlNote: If there are errors when installing the addons, try running the command again. There may be timing issues which will be resolved when the command is run again. Prepare target namespace: Before creating virtual machines, the target namespace needs to be configured for the Istio sidecar to be injected and working properly. This involves adding a label and creating a NetworkAttachmentDefinition in the target namespace. 
Istio sidecar injection: Istio supports two ways of injecting a sidecar to a pod - automatic and manual. For simplicity, we will only consider automatic sidecar injection in this demo, which is enabled by adding istio-injection=enabled label to target namespace: kubectl label namespace default istio-injection=enabledNetwork attachment definiton: When Multus is installed in k8s cluster, a NetworkAttachmentDefinition called istio-cni must be created in each namespace where Istio sidecar containers are to be used: cat <<EOF | kubectl create -f -apiVersion: k8s. cni. cncf. io/v1 kind: NetworkAttachmentDefinitionmetadata: name: istio-cniEOFThe NetworkAttachmentDefinition spec is empty, as its only purpose is to trigger the istio-cni binary, which configures the in-pod traffic routing. Topology: To demonstrate monitoring and tracing capabilities, we will create two VMIs within Istio service mesh: istio-vmi repeatedly requests external HTTP service kubevirt. io, and serves a simple HTTP server on port 8080, cirros-vmi repeatedly request the HTTP service running on the istio-vmi VMI. With this setup, both inbound and outboundtraffic metrics can be observed in Kiali dashboard for istio-vmi. Create VMI resources: An Istio aware virtual machine must be annotated with sidecar. istio. io/inject: true , regardless of used Istio injection mechanism. Without this annotation, traffic would not be properly routed through the istio proxy sidecar. Additonally, Istio uses app label for adding contextual information to the collected telemetry. Both, the annotation and label can be seen in the following virtual machine example: cat <<EOF | kubectl create -f -apiVersion: kubevirt. io/v1kind: VirtualMachineInstancemetadata: annotations: sidecar. istio. io/inject: true labels: app: istio-vmi name: istio-vmispec: domain: devices: interfaces: - name: default masquerade: {} ports: - port: 8080 disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M networks: - name: default pod: {} readinessProbe: httpGet: port: 8080 initialDelaySeconds: 120 periodSeconds: 10 timeoutSeconds: 5 failureThreshold: 3 successThreshold: 3 terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-container-disk-demo:devel - cloudInitNoCloud: userData: | #cloud-config password: fedora chpasswd: { expire: False } runcmd: - dnf install -y screen nc - while true ; do sh -c nc -lp 8080 -c \ echo -e 'HTTP/1. 1 200 OK\n\nHello'\ ; done & - while true ; do curl kubevirt. io >out 2>/dev/null ; sleep 1 ; done & name: cloudinitdiskEOFThe cloud init section of the VMI runs two loops requesting kubevirt. io website every second to generate outbound traffic (from the VMI) and serving simple HTTP server on port 8080, which will be used for monitoring of inbound traffic (to the VMI). Let’s also create a service for the VMI that will be used to access the http server in istio-vmi: cat <<EOF | kubectl create -f-apiVersion: v1kind: Servicemetadata: name: istio-vmi-svcspec: selector: app: istio-vmi ports: - port: 8080 protocol: TCPEOFFinally, create the cirros-vmi VMI, for the purpose of generating inbound traffic to istio-vmi VMI: cat <<EOF | kubectl create -f -apiVersion: kubevirt. io/v1kind: VirtualMachineInstancemetadata: annotations: sidecar. istio. 
io/inject: true name: cirros-vmi labels: app: cirros-vmispec: domain: devices: interfaces: - name: default masquerade: {} disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 128M networks: - name: default pod: {} terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: kubevirt/cirros-container-disk-demo:devel - name: cloudinitdisk cloudInitNoCloud: userData: | #!/bin/sh while true ; do curl istio-vmi-svc. default. svc. cluster. local:8080 ; sleep 1 ; doneEOFWait for the istio-vmi to be ready: kubectl wait --for=condition=ready --timeout=180s pod -l app=istio-vmiAfter creating the VMIs, the corresponding virt-launcher pods should have 3 ready containers, as shown in the snippet below: kubectl get podsNAME READY STATUS RESTARTS AGEvirt-launcher-istio-vmi-XYZ 3/3 Running 0 4m13svirt-launcher-cirros-vmi-XYZ 3/3 Running 0 2m21sIstioctl proxy-status should report that the sidecar proxies running inside the virt-launcher pods have synced with Istio control plane: istioctl proxy-statusNAME CDS LDS EDS RDS ISTIOD VERSIONvirt-launcher-cirros-vmi-9f765. default SYNCED SYNCED SYNCED SYNCED istiod-7d96484d6b-5d79g 1. 10. 0virt-launcher-istio-vmi-99t8t. default SYNCED SYNCED SYNCED SYNCED istiod-7d96484d6b-nk4cd 1. 10. 0Note: Displaying only relevant VMI entities. Monitor traffic in Kiali dashboard: With both VMIs up and running, we can open the Kiali dashboard and observe the traffic metrics. Run the following command, to access Kiali dashboard: istioctl dashboard kialiTopology graph: Let’s start by navigating to the topology graph by clicking the Graph menu item. In the topology graph, we can observe the following traffic flows: requests from cirros-vmi to istio-vmi via istio-vmi-svc service, requests from istio-vmi to PasstroughCluster. The PastroughCluster marks destinations external to our service mesh. Workloads: Navigate to istio-vmi workload overview by clicking the Workloads menu item and selecting istio-vmi from the list. The overview page presents partial topology graph with traffic related to istio-vmi. In our case, this graph is the same as the graph of our entire mesh. Navigate to Inbound Metrics tab to see metrics charts of inbound traffic. In Request volume chart we can see that number of requests stabilizes at around 1 ops, which matches our loop sending one reqest per second. Request throughput chart reveals that the requests consume around 4 kbit/s of bandwidth. Remaining two charts provide information about Request duration and size. The same metrics are collected for outbound traffic as well, which can be seen in Outbound Metrics tab. Cluster teardown: Run the following command to deprovision the ephemeral cluster: make cluster-downConclusion: KubeVirt introduced support for Istio, allowing virtual machines to be part of a service mesh. This blog post covered running KubeVirt virtual machine in Istio service mesh using an ephemeral kubevirtci cluster. Kiali dashboard was used to observe inbound and outbound traffic of a virtual machine. " }, { - "id": 35, + "id": 36, "url": "/2021/changelog-v0.44.0.html", "title": "KubeVirt v0.44.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 44. 
0: Released on: Mon Aug 9 14:20:14 2021 +0000 [PR #6058][acardace] Fix virt-launcher exit pod race condition [PR #6035][davidvossel] Addition of perfscale-audit tool for auditing performance of control plane during stress tests [PR #6145][acardace] virt-launcher: disable unencrypted TCP socket for libvirtd. [PR #6163][davidvossel] Handle qemu processes in defunc (zombie) state [PR #6105][ashleyschuett] Add VirtualMachineInstancesPerNode to KubeVirt CR under Spec. Configuration [PR #6104][zcahana] Report FailedUnschedulable VM status when scheduling errors occur [PR #5905][davidvossel] VM CrashLoop detection and Exponential Backoff [PR #6070][acardace] Initiate Live-Migration using a unix socket (exposed by virt-handler) instead of an additional TCP<->Unix migration proxy started by virt-launcher [PR #5728][vasiliy-ul] Live migration of VMs with hotplug volumes is now enabled [PR #6109][rmohr] Fix virt-controller SCC: Reflect the need for NET_BIND_SERVICE in the virt-controller SCC. [PR #5942][ShellyKa13] Integrate guest agent to online VM snapshot [PR #6034][ashleyschuett] Go version updated to version 1. 16. 6 [PR #6040][yuhaohaoyu] Improved debuggability by keeping the environment of a failed VMI alive. [PR #6068][dhiller] Add check that not all tests have been skipped [PR #6041][xpivarc] [Experimental] Virt-launcher can run as non-root user [PR #6062][iholder-redhat] replace dead “stress” binary with new, maintained, “stress-ng” binary [PR #6029][mhenriks] CDI to 1. 36. 0 with DataSource support [PR #4089][victortoso] Add support to USB Redirection with usbredir [PR #5946][vatsalparekh] Add guest-agent based ping probe [PR #6005][acardace] make containerDisk validation memory usage limit configurable [PR #5791][zcahana] Added a READY column to the tabular output of “kubectl get vm/vmi” [PR #6006][awels] DataVolumes created by DataVolumeTemplates will follow the associated VMs priority class. [PR #5982][davidvossel] Reduce vmi Update collisions (http code 409) during startup [PR #5891][akalenyu] BugFix: Pending VMIs when creating concurrent bulk of VMs backed by WFFC DVs [PR #5925][rhrazdil] Fix issue with Windows VMs not being assigned IP address configured in network-attachment-definition IPAM. [PR #6007][rmohr] Fix: The bandwidth limitation on migrations is no longer ignored. Caution: The default bandwidth limitation of 64Mi is changed to “unlimited” to not break existing installations. [PR #4944][kwiesmueller] Add /portforward subresource to VirtualMachine and VirtualMachineInstance that can tunnel TCP traffic through the API Server using a websocket stream. [PR #5402][alicefr] Integration of libguestfs-tools and added new command guestfs to virtctl [PR #5953][ashleyschuett] Allow Failed VMs to be stopped when using --force --gracePeriod 0 [PR #5876][mlsorensen] KubeVirt CR supports specifying a runtime class for virt-launcher pods via ‘launcherRuntimeClass’. 
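As a rough illustration of how the two cluster-wide settings from this release (PR #6105 and PR #5876) could be expressed in the KubeVirt CR, consider the sketch below. The field names are taken from the release notes above; their placement under spec.configuration mirrors PR #6105 and is an assumption for the runtime class field, and the runtime class value is purely hypothetical, so verify against the KubeVirt API reference before use.

```yaml
# Illustrative sketch only; field names come from the release notes above,
# values are hypothetical.
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    virtualMachineInstancesPerNode: 10        # PR #6105: cap on VMIs scheduled per node
    launcherRuntimeClass: my-runtime-class    # PR #5876: runtime class for virt-launcher pods (hypothetical value)
```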
" }, { - "id": 36, + "id": 37, "url": "/2021/kubevirt-api-auth.html", "title": "Kubernetes Authentication Options using KubeVirt Client Library", "author" : "Mark DeNeve", "tags" : "kubevirt, go, api, authentication", "body": " Introduction Requirements Setup Compiling our test application Running our application externally leveraging a kubeconfig file Using the default kubeconfig Creating a kubeconfig for the service account Running in a Kubernetes Cluster Extending RBAC Role across Namespaces Creating Custom RBAC Roles Conclusion ReferencesIntroduction: Most interaction with the KubeVirt service can be handled using the virtctl command, or raw yaml applied to your Kubernetes cluster. But what if you want to have more direct programmatic control over the instantiation and management of those virtual machines? The KubeVirt project supplies a Go client library for interacting with KubeVirt called client-go. This library allows you to write your own applications that interact directly with the KubeVirt api quickly and easily. In this post, we will use a simple application to demonstrate how the KubeVirt client library authenticates with your Kubernetes cluster both in and out of your cluster. This application is based on the example application in the “client-go” library with a few small modifications to it, to allow for running both locally and within in the cluster. This tutorial assumes you have some knowledge of Go, and is not meant to be a Go training doc. Requirements: In order to compile and run the test application locally you will need to have the Go programming language installed on your machine. If you do not have the latest version of Go installed, follow the steps on the Downloads page of the Go web site before proceeding with the rest of the steps in this blog. The steps listed here were tested with Go version 1. 16. You will need a Kubernetes cluster running with the KubeVirt operator installed. If you do not have a cluster available, the easiest way to do this is to follow the steps outlined in the Quick Start with Minikube lab. The example application we will be using to demonstrate the authentication methods lists out the VMI and VM instances in your cluster in the current namespace. If you do not have any running VMs in your cluster, be sure to create at least one new virtual machine instance in your cluster. For guidance in creating a quick test vm see the Use KubeVirt lab. Setup: Compiling our test application: Start by cloning the example application repo https://github. com/xphyr/kubevirt-apiauth and compiling our test application: git clone https://github. com/xphyr/kubevirt-apiauth. gitcd kubevirt-apiauth/listvmsgo buildOnce the program compiles, test to ensure that the application compiled correctly. If you have a working Kubernetes context, running this command may return some values. If you do not have a current context, you will get an error. This is OK, we will discuss authentication next. $ . /listvms2021/06/23 16:51:28 cannot obtain KubeVirt vm list: Get http://localhost:8080/apis/kubevirt. io/v1alpha3/namespaces/default/virtualmachines : dial tcp 127. 0. 0. 1:8080: connect: connection refusedAs long as the program runs, you are all set to move onto the next step. Running our application externally leveraging a kubeconfig file: The default authentication file for Kubernetes is the kubeconfig file. We will not be going into details of this file, but you can click the link to goto the documentation on the kubeconfig file to learn more about it. 
All you need to know at this time is that when you use the kubectl command you are using a kubeconfig file for your authentication. Using the default kubeconfig: If you haven’t already done so, validate that you have a successful connection to your cluster with the “kubectl” command: $ kubectl get nodesNAME STATUS ROLES AGE VERSIONminikube Ready control-plane,master 5d21h v1. 20. 7We now have a valid kubeconfig. On *nix OS such as Linux and OSX, this file is stored in your home directory at ~/. kube/config. You should now be able to run our test application and get some results (assuming you have some running vms in your cluster). $ . /listvms/listvmsType Name Namespace StatusVirtualMachine testvm default falseVirtualMachineInstance testvm default ScheduledThis is great, but there is an issue. The authentication method we used is your primary Kubernetes authentication. It has roles and permissions to do many different things in your k8s cluster. Wouldn’t it be better if we could scope that authentication and ensure that your application had a dedicated account, with only the proper permissions to interact with just what your application will need. This is what Kubernetes Service Accounts are for. Service Accounts are accounts for processes as opposed to users. By default they are scoped to a namespace, but you can give service accounts access to other namespaces through RBAC rules that we will discuss later. In this demo, we will be using the “default” project/namespace, so the service account we create will be initially scoped only to this namespace. Start by creating a new service account called “mykubevirtrunner” using your default Kubernetes account: $ kubectl create sa mykubevirtrunner$ kubectl describe sa mykubevirtrunnerName: mykubevirtrunnerNamespace: defaultLabels: <none>Annotations: <none>Image pull secrets: <none>Mountable secrets: mykubevirtrunner-token-pd2mqTokens: mykubevirtrunner-token-pd2mqEvents: <none>In the describe output you can see that a token and a mountable secret have been created. Let’s take a look at the contents of the secret: $ kubectl describe secret mykubevirtrunner-token-pd2mqName: mykubevirtrunner-token-pd2mqNamespace: defaultLabels: <none>Annotations: kubernetes. io/service-account. name: mykubevirtrunner kubernetes. io/service-account. uid: f401493b-658a-489d-bcce-0ccce39160a0Type: kubernetes. io/service-account-tokenData====namespace: 7 bytestoken: eyJhbGciOiJS. . . ca. crt: 1111 bytesThe data listed for the “token” key is the information we will use in the next step, your output will be much longer, it has been truncated for this document. Ensure when copying the value that you get the entire token value. Creating a kubeconfig for the service account: We will create a new kubeconfig file that leverages the service account and token we just created. The easiest way to do this is to create an empty kubeconfig file, and use the “kubectl” command to log in with the new token. Open a NEW terminal window. This will be the window we use for the service account. 
In this new terminal window start by setting the KUBECONFIG environment variable to point to a file in our local directory, and then using the “kubectl” command to generate a new kubeconfig file: export KUBECONFIG=$(pwd)/sa-kubeconfigkubectl config set-cluster minikube --server=https://<update IP address>:8443 --insecure-skip-tls-verifykubectl config set-credentials mykubevirtrunner --token=<paste token from last step here>kubectl config set-context minikube --cluster=minikube --namespace=default --user=mykubevirtrunnerkubectl config use-context minikubeWe can test that the new kubeconfig file is working by running a kubectl command: $ kubectl get podsError from server (Forbidden): pods is forbidden: User system:serviceaccount:default:mykubevirtrunner cannot list resource pods in API group in the namespace default Note that the “User” is now listed as “system:serviceaccount:default:mykubevirtrunner” so we know we are using our new service account. Now try running our test program and note that it is using the service account as well: $ listvms/listvms2021/07/07 14:53:23 cannot obtain KubeVirt vm list: virtualmachines. kubevirt. io is forbidden: User system:serviceaccount:default:mykubevirtrunner cannot list resource virtualmachines in API group kubevirt. io in the namespace default You can see we are now using our service account in our application, but that service account doesn’t have the right permissions… We now need to assign a role to our service account to give it the proper API access. We will start simple and give the service account the kubevirt. io:view role, which will allow the service account to see the KubeVirt objects within the “default” namespace: $ kubectl create clusterrolebinding kubevirt-viewer --clusterrole=kubevirt. io:view --serviceaccount=default:mykubevirtrunnerclusterrolebinding. rbac. authorization. k8s. io/kubevirt-viewer createdNow run the listvms command again: . /listvms/listvmsType Name Namespace StatusVirtualMachineInstance vm-fedora-ephemeral myvms RunningSuccess! Our application is now using the service account that we created for authentication to the cluster. The service account can be extended by adding additional default roles to the account, or by creating custom roles that limit the scope of the service account to only the exact actions you want to take. When you install KubeVirt you get a set of default roles including “View”, “Edit” and “Admin”. Additional details about these roles are available here: KubeVirt Default RBAC Cluster Roles Running in a Kubernetes Cluster: So all of this is great if you want to run the application outside of your cluster … but what if you want your application to run INSIDE you cluster. You could create a kubeconfig file, and add it to your namespace as a secret and then mount that secret as a volume inside your pod, but there is an easier way that continues to leverage the service account that we created. By default Kubernetes creates a few environment variables for every pod that indicate that the container is running within Kubernetes, and it makes a Kubernetes authentication token for the service account that the container is running as available at /var/run/secrets/kubernetes. io/serviceaccount/token. The client-go KubeVirt library can detect that it is running inside a Kubernetes hosted container and will transparently use the authentication token provided with no additional configuration needed. A container image with the listvms binary is available at quay. io/markd/listvms. 
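If you want to see what that transparent detection amounts to, it can be reproduced with plain k8s.io/client-go primitives. The helper name restConfig below is only for illustration; kubevirt.io/client-go performs the equivalent internally, which is why the same binary works unchanged inside and outside the cluster.

```go
package main

import (
	"log"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// restConfig prefers the in-cluster configuration (built from the service
// account token mounted at /var/run/secrets/kubernetes.io/serviceaccount)
// and falls back to the local kubeconfig when the process is not running in a pod.
func restConfig() (*rest.Config, error) {
	if cfg, err := rest.InClusterConfig(); err == nil {
		return cfg, nil
	}
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	return clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		rules, &clientcmd.ConfigOverrides{},
	).ClientConfig()
}

func main() {
	cfg, err := restConfig()
	if err != nil {
		log.Fatalf("no usable Kubernetes configuration found: %v", err)
	}
	// The resulting *rest.Config is what any client, including the KubeVirt
	// one, is ultimately constructed from.
	log.Printf("talking to API server %s", cfg.Host)
}
```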
We can start a copy of this container using the deployment yaml file located in the ‘listvms/listvms_deployment. yaml’ file. Switch back to your original terminal window that is using your primary kubeconfig file, and using the “kubectl” command deploy one instance of the test pod, and then check the logs of the pod: $ kubectl create -f listvms/listvms_deployment. yaml$ kubectl get podsNAME READY STATUS RESTARTS AGElistvms-7b8f865c8d-2zqqn 1/1 Running 0 7m30svirt-launcher-vm-fedora-ephemeral-4ljg4 2/2 Running 0 24h$ kubectl logs listvms-7b8f865c8d-2zqqn2021/07/07 19:06:42 cannot obtain KubeVirt vm list: virtualmachines. kubevirt. io is forbidden: User system:serviceaccount:default:default cannot list resource virtualmachines in API group kubevirt. io in the namespace default NOTE: Be sure to deploy this demo application in a namespace that contains at least one running VM or VMI. The application is unable to run the operation, because it is running as the default service account in the “default” namespace. If you remember previously we created a service account in this namespace called “mykubevirtrunner”. We need only update the deployment to use this service account and we should see some success. Use the “kubectl edit deployment/listvms” command to update the container spec to include the “serviceAccount: mykubevirtrunner” line as show below: spec: containers: - name: listvms image: quay. io/markd/listvms serviceAccount: mykubevirtrunner securityContext: {} schedulerName: default-schedulerThis change will trigger Kubernetes to redeploy your pod, using the new serviceAccount. We should now see some output from our program: $ kubectl get podsNAME READY STATUS RESTARTS AGElistvms-7b8f865c8d-2qzzn 1/1 Running 0 7m30svirt-launcher-vm-fedora-ephemeral-4ljg4 2/2 Running 0 24h$ kubectl logs listvms-7b8f865c8d-2qzznType Name Namespace StatusVirtualMachineInstance vm-fedora-ephemeral myvms Runningawaiting signalExtending RBAC Role across Namespaces: As currently configured, the mykubevirtrunner service account can only “view” KubeVirt resources within its own namespace. If we want to extend that ability to other namespaces, we can add the view role for other namespaces to the mykubevirtrunner serviceAccount. kubectl create namespace myvms<launch an addition vm here>kubectl create clusterrolebinding kubevirt-viewer --clusterrole=kubevirt. io:view --serviceaccount=default:mykubevirtrunner -n myvmsWe can test that the ServiceAccount has been updated to also have permissions to view in the “myvms” namespace by running our listvms command one more time, this time passing in the optional flag –namespaces. Switch to your terminal window that is using the service account kubeconfig file and run the following command: $ listvms/listvms --namespaces myvmsadditional namespaces to check are: myvmsChecking the following namespaces: [default myvms]Type Name Namespace StatusVirtualMachine testvm default falseVirtualMachineInstance testvm default ScheduledVirtualMachine testvm myvms falseYou can see that now, the ServiceAccount can view the vm and vmi that are in both the default namespace as well as the myvms namespace. Creating Custom RBAC Roles: In this demo we used RBAC roles created as part of the KubeVirt install. You can also create custom RBAC roles for KubeVirt. 
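To make the idea of a custom role concrete before you head to the documentation linked below: a read-only role covering KubeVirt resources in a single namespace comes down to one RBAC rule. The sketch expresses it with the Go RBAC types instead of the YAML you would normally apply with kubectl; the kubevirt-readonly role and binding names are invented for this example, and only the mykubevirtrunner service account comes from the steps above.

```go
package main

import (
	"context"
	"log"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(), &clientcmd.ConfigOverrides{},
	).ClientConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// A minimal read-only role for KubeVirt resources in the "default" namespace.
	role := &rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{Name: "kubevirt-readonly", Namespace: "default"},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{"kubevirt.io"},
			Resources: []string{"virtualmachines", "virtualmachineinstances"},
			Verbs:     []string{"get", "list", "watch"},
		}},
	}
	if _, err := clientset.RbacV1().Roles("default").Create(context.TODO(), role, metav1.CreateOptions{}); err != nil {
		log.Fatalf("creating role: %v", err)
	}

	// Bind the role to the mykubevirtrunner service account created earlier.
	binding := &rbacv1.RoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "kubevirt-readonly-binding", Namespace: "default"},
		Subjects: []rbacv1.Subject{{
			Kind: "ServiceAccount", Name: "mykubevirtrunner", Namespace: "default",
		}},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io", Kind: "Role", Name: "kubevirt-readonly",
		},
	}
	if _, err := clientset.RbacV1().RoleBindings("default").Create(context.TODO(), binding, metav1.CreateOptions{}); err != nil {
		log.Fatalf("creating role binding: %v", err)
	}
}
```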
Documentation on how this can be done is available in the KubeVirt documentation Creating Custom RBAC Roles Conclusion: It is possible to control and manage your KubeVirt machines with the use of Kubernetes service accounts and the “client-go” library. When using service accounts, you want to ensure that the account has the minimum role or permissions to do it’s job to ensure the security of your cluster. The “client-go” library gives you options on how you authenticate with your Kubernetes cluster, allowing you to deploy your application both in and out of your Kubernetes cluster. References: KubeVirt Client Go KubeVirt API Access Control KubeVirt Default RBAC Cluster Roles " }, { - "id": 37, + "id": 38, "url": "/2021/changelog-v0.43.0.html", "title": "KubeVirt v0.43.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 43. 0: Released on: Fri Jul 9 15:46:22 2021 +0000 [PR #5952][mhenriks] Use CDI beta API. CDI v1. 20. 0 is now the minimum requirement for kubevirt. [PR #5846][rmohr] Add “spec. cpu. numaTopologyPassthrough” which allows emulating a host-alligned virtual numa topology for high performance [PR #5894][rmohr] Add spec. migrations. disableTLS to the KubeVirt CR to allow disabling encrypted migrations. They stay secure by default. [PR #5649][awels] Enhancement: remove one attachment pod per disk limit (behavior on upgrade with running VM with hotplugged disks is undefined) [PR #5742][rmohr] VMIs which choose evictionStrategy LifeMigrate and request the invtsc cpuflag are now live-migrateable [PR #5911][dhiller] Bumps kubevirtci, also suppresses kubectl. sh output to avoid confusing checks [PR #5863][xpivarc] Fix: ioerrors don’t cause crash-looping of notify server [PR #5867][mlsorensen] New build target added to export virt-* images as a tar archive. [PR #5766][davidvossel] Addition of kubevirt_vmi_phase_transition_seconds_since_creation to monitor how long it takes to transition a VMI to a specific phase from creation time. [PR #5823][dhiller] Change default branch to main for kubevirt/kubevirt repository [PR #5763][nunnatsa] Fix bug 1945589: Prevent migration of VMIs that uses virtiofs [PR #5827][mlsorensen] Auto-provisioned disk images on empty PVCs now leave 128KiB unused to avoid edge cases that run the volume out of space. [PR #5849][davidvossel] Fixes event recording causing a segfault in virt-controller [PR #5797][rhrazdil] Add serviceAccountDisk automatically when Istio is enabled in VMI annotations [PR #5723][ashleyschuett] Allow virtctl to stop VM and ignore the graceful shutdown period [PR #5806][mlsorensen] configmap, secret, and cloud-init raw disks now work when underlying node storage has 4k blocks. [PR #5623][iholder-redhat] [bugfix]: Allow migration of VMs with host-model CPU to migrate only for compatible nodes [PR #5716][rhrazdil] Fix issue with virt-launcher becoming NotReady after migration when Istio is used. [PR #5778][ashleyschuett] Update ca-bundle if it is unable to be parsed [PR #5787][acardace] migrated references of authorization/v1beta1 to authorization/v1 [PR #5461][rhrazdil] Add support for Istio proxy when no explicit ports are specified on masquerade interface [PR #5751][acardace] EFI VMIs with secureboot disabled can now be booted even when only OVMF_CODE. secboot. fd and OVMF_VARS. 
fd are present in the virt-launcher image [PR #5629][andreyod] Support starting Virtual Machine with its guest CPU paused using virtctl start --paused [PR #5725][dhiller] Generate REST API coverage report after functional tests [PR #5758][davidvossel] Fixes kubevirt_vmi_phase_count to include all phases, even those that occur before handler hand off. [PR #5745][ashleyschuett] Alert with resource usage exceeds resource requests [PR #5759][mhenriks] Update CDI to 1. 34. 1 [PR #5038][kwiesmueller] Add exec command to VM liveness and readinessProbe executed through the qemu-guest-agent. [PR #5431][alonSadan] Add NFT and IPTables rules to allow port-forward to non-declared ports on the VMI. Declaring ports on VMI will limit" }, { - "id": 38, + "id": 39, "url": "/2021/changelog-v0.42.0.html", "title": "KubeVirt v0.42.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 42. 0: Released on: Tue Jun 8 12:09:49 2021 +0000 [PR #5738][rmohr] Stop releasing jinja2 templates of our operator. Kustomize is the preferred way for customizations. [PR #5691][ashleyschuett] Allow multiple shutdown events to ensure the event is received by ACPI [PR #5558][ormergi] Drop virt-launcher SYS_RESOURCE capability [PR #5694][davidvossel] Fixes null pointer dereference in migration controller [PR #5416][iholder-redhat] [feature] support booting VMs from a custom kernel/initrd images with custom kernel arguments [PR #5495][iholder-redhat] Go version updated to version 1. 16. 1. [PR #5502][rmohr] Add downwardMetrics volume to expose a limited set of hots metrics to guests [PR #5601][maya-r] Update libvirt-go to 7. 3. 0 [PR #5661][davidvossel] Validation/Mutation webhooks now explicitly define a 10 second timeout period [PR #5652][rmohr] Automatically discover kube-prometheus installations and configure kubevirt monitoring [PR #5631][davidvossel] Expand backport policy to include logging and debug fixes [PR #5528][zcahana] Introduced a “status. printableStatus” field in the VirtualMachine CRD. This field is now displayed in the tabular output of “kubectl get vm”. [PR #5200][rhrazdil] Add support for Istio proxy traffic routing with masquerade interface. nftables is required for this feature. [PR #5560][oshoval] virt-launcher now populates domain’s guestOS info and interfaces status according guest agent also when doing periodic resyncs. [PR #5514][rhrazdil] Fix live-migration failing when VM with masquarade iface has explicitly specified any of these ports: 22222, 49152, 49153 [PR #5583][dhiller] Reenable coverage [PR #5129][davidvossel] Gracefully shutdown virt-api connections and ensure zero exit code under normal shutdown conditions [PR #5582][dhiller] Fix flaky unit tests [PR #5600][davidvossel] Improved logging around VM/VMI shutdown and restart [PR #5564][omeryahud] virtctl rename support is dropped [PR #5585][iholder-redhat] [bugfix] - reject VM defined with volume with no matching disk [PR #5595][zcahana] Fixes adoption of orphan DataVolumes [PR #5566][davidvossel] Release branches are now cut on the first business day of the month rather than the first day. [PR #5108][Omar007] Fixes handling of /proc//mountpoint by working on the device information instead of mount information [PR #5250][mlsorensen] Controller health checks will no longer actively test connectivity to the Kubernetes API. They will rely in health of their watches to determine if they have API connectivity. 
[PR #5563][ashleyschuett] Set KubeVirt resources flags in the KubeVirt CR [PR #5328][andreabolognani] This version of KubeVirt includes upgraded virtualization technology based on libvirt 7. 0. 0 and QEMU 5. 2. 0. " }, { - "id": 39, + "id": 40, "url": "/2021/changelog-v0.41.0.html", "title": "KubeVirt v0.41.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 41. 0: Released on: Wed May 12 14:30:49 2021 +0000 [PR #5586][kubevirt-bot] This version of KubeVirt includes upgraded virtualization technology based on libvirt 7. 0. 0 and QEMU 5. 2. 0. [PR #5344][ashleyschuett] Reconcile PrometheusRules and ServiceMonitor resources [PR #5542][andreyod] Add startStrategy field to VMI spec to allow Virtual Machine start in paused state. [PR #5459][ashleyschuett] Reconcile service resource [PR #5520][ashleyschuett] Reconcile required labels and annotations on ConfigMap resources [PR #5533][rmohr] Fix docker save and docker push issues with released kubevirt images [PR #5428][oshoval] virt-launcher now populates domain’s guestOS info and interfaces status according guest agent also when doing periodic resyncs. [PR #5410][ashleyschuett] Reconcile ServiceAccount resources [PR #5109][Omar007] Add support for specifying a logical and physical block size for disk devices [PR #5471][ashleyschuett] Reconcile APIService resources [PR #5513][ashleyschuett] Reconcile Secret resources [PR #5496][davidvossel] Improvements to migration proxy logging [PR #5376][ashleyschuett] Reconcile CustomResourceDefinition resources [PR #5435][AlonaKaplan] Support dual stack service on “virtctl expose”- [PR #5425][davidvossel] Fixes VM restart during eviction when EvictionStrategy=LiveMigrate [PR #5423][ashleyschuett] Add resource requests to virt-controller, virt-api, virt-operator and virt-handler [PR #5343][erkanerol] Some cleanups and small additions to the storage metrics [PR #4682][stu-gott] Updated Guest Agent Version compatibility check. The new approach is much more accurate. [PR #5485][rmohr] Fix fallback to iptables if nftables is not used on the host on arm64 [PR #5426][rmohr] Fix fallback to iptables if nftables is not used on the host [PR #5403][tiraboschi] Added a kubevirt_ prefix to several recording rules and metrics [PR #5241][stu-gott] Introduced Duration and RenewBefore parameters for cert rotation. Previous values are now deprecated. [PR #5463][acardace] Fixes upgrades from KubeVirt v0. 36 [PR #5456][zhlhahaha] Enable arm64 cross-compilation [PR #3310][davidvossel] Doc outlines our Kubernetes version compatibility commitment [PR #3383][EdDev] Add vmIPv6NetworkCIDR under NetworkSource. pod to support custom IPv6 CIDR for the vm network when using masquerade binding. [PR #3415][zhlhahaha] Make kubevirt code fit for arm64 support. No testing is at this stage performed against arm64 at this point. [PR #5147][xpivarc] Remove CAP_NET_ADMIN from the virt-launcher pod(second take). 
[PR #5351][awels] Support hotplug with virtctl using addvolume and removevolume commands [PR #5050][ashleyschuett] Fire Prometheus Alert when a vmi is orphaned for more than an hour" }, { - "id": 40, + "id": 41, "url": "/2021/intel-vgpu-kubevirt.html", "title": "Using Intel vGPUs with Kubevirt", "author" : "Mark DeNeve", "tags" : "kubevirt, vGPU, Windows, GPU, Intel, minikube, Fedora", "body": " Introduction Prerequisites Fedora Workstation Prep Preparing the Intel vGPU driver Install Kubernetes with minikube Install kubevirt Validate vGPU detection Install Containerize Data Importer Install Windows Accessing the Windows VM Using the GPUIntroduction: Graphical User Interfaces (GUIs) have come along way over the past few years and most modern desktop environments expect some form of GPU acceleration in order to give you a seamless user experience. If you have tried running things like Windows 10 within Kubevirt you may have noticed that the desktop experience felt a little slow. This is due to Windows 10 reliance on GPU acceleration. In addition many applications are also now taking advantage of GPU acceleration and it can even be used in web based applications such as “FishGL”: Without GPU hardware acceleration the user experience of a Virtual machine can be greatly impacted. Starting with 5th generation Intel Core processors that have embedded Intel graphics processing units it is possible to share the graphics processor between multiple virtual machines. In Linux, this sharing of a GPU is typically enabled through the use of mediated GPU devices, also known as vGPUs. Kubevirt has supported the use of GPUs including GPU passthrough and vGPU since v0. 22. 0 back in 2019. This support was centered around one specific vendor, and only worked with expensive enterprise class cards and required additional licensing. Starting with Kubevirt 0. 40 support for detecting and allocating the Intel based vGPUs has been added to Kubevirt. Support for the creation of these virtualized Intel GPUs is available in the Linux Kernel since the 4. 19 release. What does this meaning for you? You no longer need additional drivers or licenses to test out GPU accelerated virtual machines. The total number of Intel vGPUs you can create is dependent on your specific hardware as well as support for changing the Graphics aperture size and shared graphics memory within your BIOS. For more details on this see Create vGPU (KVMGT only) in the Intel GVTg wiki. Minimally configured devices can typically make at least two vGPU devices. You can reproduce this work on any Kubernetes cluster running kubevirt v0. 40. 0 or later, but the steps you need to take to load the kernel modules and enable the virtual devices will vary based on the underlying OS your Kubernetes cluster is running on. In order to demonstrate how you can enable this feature, we will use an all-in-one Kubernetes cluster built using Fedora 32 and minikube. Note This blog post is a more advanced topic and assumes some Linux and Kubernetes understanding. 
Prerequisites: Before we begin you will need a few things to make use of the Intel GPU: A workstation or server with a 5th Generation or higher Intel Core Processor, or E3_v4 or higher Xeon Processor and enough memory to virtualize one or more VMs A preinstalled Fedora 32 Workstation with at least 50Gb of free space in the “/” filesystem The following software: minikube - See minikube start virtctl - See kubevirt releases kubectl - See Install and Set Up kubectl on Linux A Windows 10 Install ISO Image - See Download Windows 10 Disk ImageFedora Workstation Prep: In order to use minikube on Fedora 32 we will be installing multiple applications that will be used throughout this demo. In addition we will be configuring the workstation to use cgroups v1 and we will be updating the firewall to allow proper communication to our Kubernetes cluster as well as any hosted applications. Finally we will be disabling SELinux per the minikube bare-metal install instructions: Note This post assumes that we are starting with a fresh install of Fedora 32. If you are using an existing configured Fedora 32 Workstation, you may have some software conflicts. sudo dnf update -ysudo dnf install -y pciutils podman podman-docker conntrack tigervnc rdesktopsudo grubby --update-kernel=ALL --args= systemd. unified_cgroup_hierarchy=0 # Setup firewall rules to allow inbound and outbound connections from your minikube clustersudo firewall-cmd --add-port=30000-65535/tcp --permanentsudo firewall-cmd --add-port=30000-65535/udp --permanentsudo firewall-cmd --add-port=10250-10252/tcp --permanentsudo firewall-cmd --add-port=10248/tcp --permanentsudo firewall-cmd --add-port=2379-2380/tcp --permanentsudo firewall-cmd --add-port=6443/tcp --permanentsudo firewall-cmd --add-port=8443/tcp --permanentsudo firewall-cmd --add-port=9153/tcp --permanentsudo firewall-cmd --add-service=dns --permanentsudo firewall-cmd --add-interface=cni-podman0 --permanentsudo firewall-cmd --add-masquerade --permanentsudo vi /etc/selinux/config# change the SELINUX=enforcing to SELINUX=permissive sudo setenforce 0sudo systemctl enable sshd --nowWe will now install the CRIO runtime: sudo dnf module enable -y cri-o:1. 18sudo dnf install -y cri-o cri-toolssudo systemctl enable --now crioPreparing the Intel vGPU driver: In order to make use of the Intel vGPU driver, we need to make a few changes to our all-in-one host. The commands below assume you are using a Fedora based host. If you are using a different base OS, be sure to update your commands for that specific distribution. The following commands will do the following: load the kvmgt module to enable support within kvm enable gvt in the i915 module update the Linux kernel to enable Intel IOMMUsudo sh -c echo kvmgt > /etc/modules-load. d/gpu-kvmgt. conf sudo grubby --update-kernel=ALL --args= intel_iommu=on i915. enable_gvt=1 sudo shutdown -r nowAfter the reboot check to ensure that the proper kernel modules have been loaded: $ sudo lsmod | grep kvmgtkvmgt 32768 0mdev 20480 2 kvmgt,vfio_mdevvfio 32768 3 kvmgt,vfio_mdev,vfio_iommu_type1kvm 798720 2 kvmgt,kvm_inteli915 2494464 4 kvmgtdrm 557056 4 drm_kms_helper,kvmgt,i915We will now create our vGPU devices. These virtual devices are created by echoing a GUID into a sys device created by the Intel driver. This needs to be done every time the system boots. The easiest way to do this is using a systemd service that runs on every boot. Before we create this systemd service, we need to validate the PCI ID of your Intel Graphics card. 
To do this we will use the lspci command $ sudo lspci00:00. 0 Host bridge: Intel Corporation Device 9b53 (rev 03)00:02. 0 VGA compatible controller: Intel Corporation Device 9bc8 (rev 03)00:08. 0 System peripheral: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture ModelTake note that in the above output the Intel GPU is on “00:02. 0”. Now create the /etc/systemd/system/gvtg-enable. service but be sure to update the PCI ID as appropriate for your machine: cat > ~/gvtg-enable. service << EOF[Unit]Description=Create Intel GVT-g vGPU[Service]Type=oneshotExecStart=/bin/sh -c echo '56a4c4e2-c81f-4cba-82bf-af46c30ea32d' > /sys/devices/pci0000:00/0000:00:02. 0/mdev_supported_types/i915-GVTg_V5_8/create ExecStart=/bin/sh -c echo '973069b7-2025-406b-b3c9-301016af3150' > /sys/devices/pci0000:00/0000:00:02. 0/mdev_supported_types/i915-GVTg_V5_8/create ExecStop=/bin/sh -c echo '1' > /sys/devices/pci0000:00/0000:00:02. 0/56a4c4e2-c81f-4cba-82bf-af46c30ea32d/remove ExecStop=/bin/sh -c echo '1' > /sys/devices/pci0000:00/0000:00:02. 0/973069b7-2025-406b-b3c9-301016af3150/remove RemainAfterExit=yes[Install]WantedBy=multi-user. targetEOFsudo mv ~/gvtg-enable. service /etc/systemd/system/gvtg-enable. servicesudo systemctl enable gvtg-enable --nowNote The above systemd service will create two vGPU devices, you can repeat the commands with additional unique GUIDs up to a maximum of 8 vGPU if your particular hardware supports it. We can validate that the vGPU devices were created by looking in the /sys/devices/pci0000:00/0000:00:02. 0/ directory. $ ls -lsa /sys/devices/pci0000:00/0000:00:02. 0/56a4c4e2-c81f-4cba-82bf-af46c30ea32dtotal 0lrwxrwxrwx. 1 root root 0 Apr 20 13:56 driver -> . . /. . /. . /. . /bus/mdev/drivers/vfio_mdevdrwxr-xr-x. 2 root root 0 Apr 20 14:41 intel_vgpulrwxrwxrwx. 1 root root 0 Apr 20 14:41 iommu_group -> . . /. . /. . /. . /kernel/iommu_groups/8lrwxrwxrwx. 1 root root 0 Apr 20 14:41 mdev_type -> . . /mdev_supported_types/i915-GVTg_V5_8drwxr-xr-x. 2 root root 0 Apr 20 14:41 power--w-------. 1 root root 4096 Apr 20 14:41 removelrwxrwxrwx. 1 root root 0 Apr 20 13:56 subsystem -> . . /. . /. . /. . /bus/mdev-rw-r--r--. 1 root root 4096 Apr 20 13:56 ueventNote that “mdev_type” points to “i915-GVTg_V5_8”, this will come into play later when we configure kubevirt to detect the vGPU. Install Kubernetes with minikube: We will now install Kubernetes onto our Fedora Workstation. Minikube will help quickly set up our Kubernetes cluster environment. We will start by getting the latest release of minikube and kubectl. curl -LO https://storage. googleapis. com/minikube/releases/latest/minikube-linux-amd64sudo install minikube-linux-amd64 /usr/local/bin/minikubeVERSION=$(minikube kubectl version | head -1 | awk -F', ' {'print $3'} | awk -F':' {'print $2'} | sed s/\ //g)sudo install ${HOME}/. minikube/cache/linux/${VERSION}/kubectl /usr/local/binWe will be using the minikube driver “none” which will install Kubernetes directly onto this machine. This will allow you to maintain a copy of the virtual machines that you build through a reboot. Later in this post we will create persistent volumes for virtual machine storage in “/data”. As previously noted, ensure that you have at least 50Gb of free space in “/data” to complete this setup. The minikube install will take a few minutes to complete. $ sudo mkdir -p /data/winhd1-pv$ sudo minikube start --driver=none --container-runtime=crio😄 minikube v1. 19. 
0 on Fedora 32✨ Using the none driver based on user configuration👍 Starting control plane node minikube in cluster minikube🤹 Running on localhost (CPUs=12, Memory=31703MB, Disk=71645MB) . . . ℹ️ OS release is Fedora 32 (Workstation Edition)🐳 Preparing Kubernetes v1. 20. 2 on Docker 20. 10. 6 . . . ▪ Generating certificates and keys . . . ▪ Booting up control plane . . . ▪ Configuring RBAC rules . . . 🤹 Configuring local host environment . . . 🔎 Verifying Kubernetes components. . . ▪ Using image gcr. io/k8s-minikube/storage-provisioner:v5🌟 Enabled addons: storage-provisioner, default-storageclass🏄 Done! kubectl is now configured to use minikube cluster and default namespace by defaultIn order to make our interaction with Kubernetes a little easier, we will need to copy some files and update our . kube/config mkdir -p ~/. minikube/profiles/minikubesudo cp -r /root/. kube /home/$USERsudo cp /root/. minikube/ca. crt /home/$USER/. minikube/ca. crtsudo cp /root/. minikube/profiles/minikube/client. crt /home/$USER/. minikube/profiles/minikubesudo cp /root/. minikube/profiles/minikube/client. key /home/$USER/. minikube/profiles/minikubesudo chown -R $USER:$USER /home/$USER/. kubesudo chown -R $USER:$USER /home/$USER/. minikubesed -i s/root/home\/$USER/ ~/. kube/configOnce the minikube install is complete, validate that everything is working properly. $ kubectl get nodesNAME STATUS ROLES AGE VERSIONkubevirt Ready control-plane,master 4m5s v1. 20. 2As long as you don’t get any errors, your base Kubernetes cluster is ready to go. Install kubevirt: Our all-in-one Kubernetes cluster is now ready for installing Installing Kubevirt. Using the minikube addons manager, we will install kubevirt into our cluster: sudo minikube addons enable kubevirtkubectl -n kubevirt wait kubevirt kubevirt --for condition=Available --timeout=300sAt this point, we need to update our instance of kubevirt in the cluster. We need to configure kubevirt to detect the Intel vGPU by giving it an mdevNameSelector to look for, and a resourceName to assign to it. The mdevNameSelector comes from the “mdev_type” that we identified earlier when we created the two virtual GPUs. When the kubevirt device manager finds instances of this mdev type, it will record this information and tag the node with the identified resourceName. We will use this resourceName later when we start up our virtual machine. cat > kubevirt-patch. yaml << EOFspec: configuration: developerConfiguration: featureGates: - GPU permittedHostDevices: mediatedDevices: - mdevNameSelector: i915-GVTg_V5_8 resourceName: intel. com/U630 EOFkubectl patch kubevirt kubevirt -n kubevirt --patch $(cat kubevirt-patch. yaml) --type=mergeWe now need to wait for kubevirt to reload its configuration. Validate vGPU detection: Now that kubevirt is installed and running, lets ensure that the vGPU was identified correctly. Describe the minikube node, using the command kubectl describe node and look for the “Capacity” section. If kubevirt properly detected the vGPU you will see an entry for “intel. com/U630” with a capacity value of greater than 0. $ kubectl describe nodeName: kubevirtRoles: control-plane,masterLabels: beta. kubernetes. io/arch=amd64 beta. kubernetes. io/os=linux. . . Capacity: cpu: 12 devices. kubevirt. io/kvm: 110 devices. kubevirt. io/tun: 110 devices. kubevirt. io/vhost-net: 110 ephemeral-storage: 71645Mi hugepages-1Gi: 0 hugepages-2Mi: 0 intel. com/U630: 2 memory: 11822640Ki pods: 110There it is, intel. com/U630 - two of them are available. 
Now all we need is a virtual machine to consume them. Install Containerize Data Importer: In order to install Windows 10, we are going to need to upload a Windows 10 install ISO to the cluster. This can be facilitated through the use of the Containerized Data Importer. The following steps are taken from the Experiment with the Containerized Data Importer (CDI) web page: export VERSION=$(curl -s https://github. com/kubevirt/containerized-data-importer/releases/latest | grep -o v[0-9]\. [0-9]*\. [0-9]* )kubectl create -f https://github. com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator. yamlkubectl create -f https://github. com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr. yamlkubectl -n cdi wait cdi cdi --for condition=Available --timeout=300sNow that our CDI is available, we will expose it for consumption using a nodePort. This will allow us to connect to the cdi-proxy in the next steps. cat > cdi-nodeport. yaml << EOFapiVersion: v1kind: Servicemetadata: name: cdi-proxy-nodeport namespace: cdispec: type: NodePort selector: cdi. kubevirt. io: cdi-uploadproxy ports: - port: 8443 nodePort: 30443EOFkubectl create -f cdi-nodeport. yamlOne final step, lets get the latest release of virtctl which we will be using as we install Windows. VERSION=$(kubectl get kubevirt. kubevirt. io/kubevirt -n kubevirt -o=jsonpath= {. status. observedKubeVirtVersion} )curl -L -o virtctl https://github. com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-linux-amd64sudo install virtctl /usr/local/binInstall Windows: At this point we can now install a Windows VM in order to test this feature. The steps below are based on KubeVirt: installing Microsoft Windows from an ISO however we will be using Windows 10 instead of Windows Server 2012. The commands below assume that you have a Windows 10 ISO file called win10-virtio. iso. If you need a Windows 10 CD, please see Download Windows 10 Disk Image and come back here after you have obtained your install CD. $ virtctl image-upload \ --image-path=win10-virtio. iso \ --pvc-name=iso-win10 \ --access-mode=ReadWriteOnce \ --pvc-size=6G \ --uploadproxy-url=https://127. 0. 0. 1:30443 \ --insecure \ --wait-secs=240We need a place to store our Windows 10 virtual disk, use the following to create a 40Gb space to store our file. In order to do this within minikube we will manually create a PersistentVolume (PV) as well as a PersistentVolumeClaim (PVC). These steps assume that you have 45+ GiB of free space in “/”. We will create a “/data” directory as well as a subdirectory for storing our PV. If you do not have at least 45 GiB of free space in “/”, you will need to free up space, or mount storage on “/data” to continue. cat > win10-pvc. yaml << EOF---apiVersion: v1kind: PersistentVolumemetadata: name: pvwinhd1spec: accessModes: - ReadWriteOnce capacity: storage: 43Gi claimRef: namespace: default name: winhd1 hostPath: path: /data/winhd1-pv---apiVersion: v1kind: PersistentVolumeClaimmetadata: name: winhd1spec: accessModes: - ReadWriteOnce resources: requests: storage: 40GiEOFkubectl create -f win10-pvc. yamlWe can now create our Windows 10 virtual machine. Use the following to create a virtual machine definition file that includes a vGPU: cat > win10vm1. yaml << EOFapiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: win10vm1spec: running: false template: metadata: creationTimestamp: null labels: kubevirt. 
io/domain: win10vm1 spec: domain: clock: timer: hpet: present: false hyperv: {} pit: tickPolicy: delay rtc: tickPolicy: catchup utc: {} cpu: cores: 1 sockets: 2 threads: 1 devices: gpus: - deviceName: intel. com/U630 name: gpu1 disks: - cdrom: bus: sata name: windows-guest-tools - bootOrder: 1 cdrom: bus: sata name: cdrom - bootOrder: 2 disk: bus: sata name: disk-1 inputs: - bus: usb name: tablet type: tablet interfaces: - masquerade: {} model: e1000e name: nic-0 features: acpi: {} apic: {} hyperv: relaxed: {} spinlocks: spinlocks: 8191 vapic: {} machine: type: pc-q35-rhel8. 2. 0 resources: requests: memory: 8Gi hostname: win10vm1 networks: - name: nic-0 pod: {} terminationGracePeriodSeconds: 3600 volumes: - name: cdrom persistentVolumeClaim: claimName: iso-win10 - name: disk-1 persistentVolumeClaim: claimName: winhd1 - containerDisk: image: quay. io/kubevirt/virtio-container-disk name: windows-guest-toolsEOFkubectl create -f win10vm1. yamlNOTE This VM is not optimized to use virtio devices to simplify the OS install. By using SATA devices as well as an emulated e1000 network card, we do not need to worry about loading additional drivers. The key piece of information that we have added to this virtual machine definition is this snippet of yaml: devices: gpus: - deviceName: intel. com/U630 name: gpu1Here we are identifying the gpu device that we want to attach to this VM. The deviceName relates back to the name that we gave to kubevirt to identify the Intel GPU resources. It also is the same identifier that shows up in the “Capacity” section of a node when you run kubectl describe node. We can now start the virtual machine: virtctl start win10vm1kubectl get vmi --watchWhen the output of shows that the vm is in a “Running” phase you can “CTRL+C” to end the watch command. Accessing the Windows VM: Since we are running this VM on this local machine, we can now take advantage of the virtctl command to connect to the VNC console of the virtual machine. virtctl vnc win10vm1A new VNC Viewer window will open and you should now see the Windows 10 install screen. Follow standard Windows 10 install steps at this point. Once the install is complete you have a Windows 10 VM running with a GPU available. You can test that GPU acceleration is available by opening the Windows 10 task manager, selecting Advanced and then select the “Performance” tab. Note that the first time you start up, Windows is still detecting and installing the appropriate drivers. It may take a minute or two for the GPU information to show up in the Performance tab. Try testing out the GPU acceleration. Open a web browser in your VM and navigate to “https://webglsamples. org/fishtank/fishtank. html” HOWEVER don’t be surprised by the poor performance. The default kubevirt console does not take advantage of the GPU. For that we need to take one final step to use the Windows Remote Desktop Protocol (RDP) which can use the GPU. Using the GPU: In order to take advantage of the virtual GPU we have added, we will need to connect to the virtual machine over Remote Desktop Protocol (RDP). Follow these steps to enable RDP: In the Windows 10 search bar, type “Remote Desktop Settings” and then open the result. Select “Enable Remote Desktop” and confirm the change. Select “Advanced settings” and un-check “Require computers to use Network level Authentication”, and confirm this change. Finally reboot the Windows 10 Virtual machine. 
Now, run the following commands in order to expose the RDP server to outside your Kubernetes cluster: $ virtctl expose vm win10vm1 --port=3389 --type=NodePort --name=win10vm1-rdp$ kubectl get svcNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGEkubernetes ClusterIP 10. 96. 0. 1 <none> 443/TCP 18hwin10vm1-rdp NodePort 10. 105. 159. 184 <none> 3389:30627/TCP 39sNote the port that was assigned to this service we will use it in the next step. In the above output the port is 30627. We can now use the rdesktop tool to connect to our VM and get the full advantages of the vGPU. From a command line run rdesktop localhost:<port> being sure to update the port based on the output from above. When prompted by rdesktop accept the certificate. Log into your Windows 10 client. You can now test out the vGPU. Let’s try FishGL again. Open a browser and go to https://webglsamples. org/fishtank/fishtank. html. You should notice a large improvement in the applications performance. You can also open the Task Manager and look at the performance tab to see the GPU under load. Note that since you are running your Fedora 32 workstation on this same GPU you are already sharing the graphics workload between your primary desktop, and the virtualized Windows Desktop also running on this machine. Congratulations! You now have a VM running in Kubernetes using an Intel vGPU. If your test machine has enough resources you can repeat the steps and create multiple virtual machines all sharing the one Intel GPU. " }, { - "id": 41, + "id": 42, "url": "/2021/Automated-Windows-Installation-With-Tekton-Pipelines.html", "title": "Automated Windows Installation With Tekton Pipelines", "author" : "Filip Křepinský", "tags" : "kubevirt, Kubernetes, virtual machine, VM, Tekton Pipelines, KubeVirt Tekton Tasks, Windows", "body": "Introduction: This blog shows how we can easily automate a process of installing Windows VMs on KubeVirt with Tekton Pipelines. Tekton Pipelines can be used to create a single Pipeline that encapsulates the installation process which can be run and replicated with PipelineRuns. The pipeline will be built with KubeVirt Tekton Tasks, which includes all the necessary tasks for this example. Pipeline Description: The pipeline will prepare an empty Persistent Volume Claim (PVC) and download a Windows source ISO into another PVC. Both of them will be initialized with Containerized Data Importer (CDI). It will then spin up an installation VM and use Windows Answer Files to automatically install the VM. Then the pipeline will wait for the installation to complete and will delete the installation VM while keeping the artifact PVC with the installed operating system. You can later use the artifact PVC as a base image and copy it for new VMs. Prerequisites: KubeVirt v0. 39. 0 Tekton Pipelines v0. 19. 0 KubeVirt Tekton Tasks v0. 3. 0Running Windows Installer Pipeline: Obtaining a URL of Windows Source ISO: First we have to obtain a Download URL of Windows Source ISO. Go to https://www. microsoft. com/en-us/software-download/windows10ISO. You can also obtain a server edition for evaluation at https://www. microsoft. com/en-us/evalcenter/evaluate-windows-server-2019. Fill in the edition and English language (other languages need to be updated in windows-10-autounattend ConfigMap below) and go to the download page. Right-click on the 64-bit download button and copy the download link. The link should be valid for 24 hours. We will need this URL a bit later when running the pipeline. Preparing autounattend. 
xml ConfigMap: Now we have to prepare our autounattend. xml Answer File with the installation instructions. We will store it in a ConfigMap, but optionally it can be stored in a Secret as well. The configuration file can be generated with Windows SIMor it can be specified manually according to Answer File Referenceand Answer File Components Reference. The following config map includes the required drivers and guest disk configuration. It also specifies how the installation should proceed and what users should be created. In our case it is an Administrator user with changepassword password. You can also change the Answer File according to your needs by consulting the already mentioned documentation. apiVersion: v1kind: ConfigMapmetadata: name: windows-10-autounattenddata: Autounattend. xml: | <?xml version= 1. 0 encoding= utf-8 ?> <unattend xmlns= urn:schemas-microsoft-com:unattend > <settings pass= windowsPE > <component name= Microsoft-Windows-PnpCustomizationsWinPE publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS processorArchitecture= amd64 xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State > <DriverPaths> <PathAndCredentials wcm:action= add wcm:keyValue= 1 > <Path>E:\viostor\w10\amd64</Path> </PathAndCredentials> <PathAndCredentials wcm:action= add wcm:keyValue= 2 > <Path>E:\NetKVM\w10\amd64</Path> </PathAndCredentials> <PathAndCredentials wcm:action= add wcm:keyValue= 3 > <Path>E:\viorng\w10\amd64</Path> </PathAndCredentials> </DriverPaths> </component> <component name= Microsoft-Windows-International-Core-WinPE processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. org/2001/XMLSchema-instance > <SetupUILanguage> <UILanguage>en-US</UILanguage> </SetupUILanguage> <InputLocale>0409:00000409</InputLocale> <SystemLocale>en-US</SystemLocale> <UILanguage>en-US</UILanguage> <UILanguageFallback>en-US</UILanguageFallback> <UserLocale>en-US</UserLocale> </component> <component name= Microsoft-Windows-Setup processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. 
org/2001/XMLSchema-instance > <DiskConfiguration> <Disk wcm:action= add > <CreatePartitions> <CreatePartition wcm:action= add > <Order>1</Order> <Type>Primary</Type> <Size>100</Size> </CreatePartition> <CreatePartition wcm:action= add > <Extend>true</Extend> <Order>2</Order> <Type>Primary</Type> </CreatePartition> </CreatePartitions> <ModifyPartitions> <ModifyPartition wcm:action= add > <Active>true</Active> <Format>NTFS</Format> <Label>System Reserved</Label> <Order>1</Order> <PartitionID>1</PartitionID> <TypeID>0x27</TypeID> </ModifyPartition> <ModifyPartition wcm:action= add > <Active>true</Active> <Format>NTFS</Format> <Label>OS</Label> <Letter>C</Letter> <Order>2</Order> <PartitionID>2</PartitionID> </ModifyPartition> </ModifyPartitions> <DiskID>0</DiskID> <WillWipeDisk>true</WillWipeDisk> </Disk> </DiskConfiguration> <ImageInstall> <OSImage> <InstallTo> <DiskID>0</DiskID> <PartitionID>2</PartitionID> </InstallTo> <InstallToAvailablePartition>false</InstallToAvailablePartition> </OSImage> </ImageInstall> <UserData> <AcceptEula>true</AcceptEula> <FullName>Administrator</FullName> <Organization></Organization> <ProductKey> <Key>W269N-WFGWX-YVC9B-4J6C9-T83GX</Key> </ProductKey> </UserData> </component> </settings> <settings pass= offlineServicing > <component name= Microsoft-Windows-LUA-Settings processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. org/2001/XMLSchema-instance > <EnableLUA>false</EnableLUA> </component> </settings> <settings pass= generalize > <component name= Microsoft-Windows-Security-SPP processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. org/2001/XMLSchema-instance > <SkipRearm>1</SkipRearm> </component> </settings> <settings pass= specialize > <component name= Microsoft-Windows-International-Core processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. org/2001/XMLSchema-instance > <InputLocale>0409:00000409</InputLocale> <SystemLocale>en-US</SystemLocale> <UILanguage>en-US</UILanguage> <UILanguageFallback>en-US</UILanguageFallback> <UserLocale>en-US</UserLocale> </component> <component name= Microsoft-Windows-Security-SPP-UX processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. org/2001/XMLSchema-instance > <SkipAutoActivation>true</SkipAutoActivation> </component> <component name= Microsoft-Windows-SQMApi processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. org/2001/XMLSchema-instance > <CEIPEnabled>0</CEIPEnabled> </component> <component name= Microsoft-Windows-Shell-Setup processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. 
org/2001/XMLSchema-instance > <ComputerName>WindowsVM</ComputerName> <ProductKey>W269N-WFGWX-YVC9B-4J6C9-T83GX</ProductKey> </component> </settings> <settings pass= oobeSystem > <component name= Microsoft-Windows-Shell-Setup processorArchitecture= amd64 publicKeyToken= 31bf3856ad364e35 language= neutral versionScope= nonSxS xmlns:wcm= http://schemas. microsoft. com/WMIConfig/2002/State xmlns:xsi= http://www. w3. org/2001/XMLSchema-instance > <AutoLogon> <Password> <Value>changepassword</Value> <PlainText>true</PlainText> </Password> <Enabled>true</Enabled> <Username>Administrator</Username> </AutoLogon> <OOBE> <HideEULAPage>true</HideEULAPage> <HideOEMRegistrationScreen>true</HideOEMRegistrationScreen> <HideOnlineAccountScreens>true</HideOnlineAccountScreens> <HideWirelessSetupInOOBE>true</HideWirelessSetupInOOBE> <NetworkLocation>Home</NetworkLocation> <SkipUserOOBE>true</SkipUserOOBE> <SkipMachineOOBE>true</SkipMachineOOBE> <ProtectYourPC>3</ProtectYourPC> </OOBE> <UserAccounts> <LocalAccounts> <LocalAccount wcm:action= add > <Password> <Value>changepassword</Value> <PlainText>true</PlainText> </Password> <Description></Description> <DisplayName>Administrator</DisplayName> <Group>Administrators</Group> <Name>Administrator</Name> </LocalAccount> </LocalAccounts> </UserAccounts> <RegisteredOrganization></RegisteredOrganization> <RegisteredOwner>Administrator</RegisteredOwner> <DisableAutoDaylightTimeSet>false</DisableAutoDaylightTimeSet> <FirstLogonCommands> <SynchronousCommand wcm:action= add > <Description>Control Panel View</Description> <Order>1</Order> <CommandLine>reg add HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\ControlPanel /v StartupPage /t REG_DWORD /d 1 /f</CommandLine> <RequiresUserInput>true</RequiresUserInput> </SynchronousCommand> <SynchronousCommand wcm:action= add > <Order>2</Order> <Description>Control Panel Icon Size</Description> <RequiresUserInput>false</RequiresUserInput> <CommandLine>reg add HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\ControlPanel /v AllItemsIconView /t REG_DWORD /d 0 /f</CommandLine> </SynchronousCommand> <SynchronousCommand wcm:action= add > <Order>3</Order> <RequiresUserInput>false</RequiresUserInput> <CommandLine>cmd /C wmic useraccount where name= Administrator set PasswordExpires=false</CommandLine> <Description>Password Never Expires</Description> </SynchronousCommand> <SynchronousCommand wcm:action= add > <Order>4</Order> <Description>Remove AutoAdminLogon</Description> <RequiresUserInput>false</RequiresUserInput> <CommandLine>reg add HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon /v AutoAdminLogon /t REG_SZ /d 0 /f</CommandLine> </SynchronousCommand> <SynchronousCommand wcm:action= add > <Order>5</Order> <RequiresUserInput>false</RequiresUserInput> <CommandLine>cmd /c shutdown /s /f /t 10</CommandLine> <Description>Shuts down the system</Description> </SynchronousCommand> </FirstLogonCommands> <TimeZone>Alaskan Standard Time</TimeZone> </component> </settings> </unattend>---Creating the Pipeline: Let’s create a pipeline which consists of the following tasks. create-source-dv --- create-vm-from-manifest --- wait-for-vmi-status --- cleanup-vm | create-base-dv -- create-source-dv task downloads a Windows source ISO into a PVC called windows-10-source-*. create-base-dv task creates an empty PVC for new windows installation called windows-10-base-*. 
create-vm-from-manifest task creates a VM called windows-installer-*from the empty PVC and with the windows-10-source-* PVC attached as a CD-ROM. wait-for-vmi-status task waits until the VM shuts down. cleanup-vm deletes the installer VM and ISO PVC. The output artifact will be the windows-10-base-* PVC with the Windows installation. apiVersion: tekton. dev/v1beta1kind: Pipelinemetadata: name: windows-installerspec: params: - name: winImageDownloadURL type: string - name: autounattendConfigMapName default: windows-10-autounattend type: string tasks: - name: create-source-dv params: - name: manifest value: | apiVersion: cdi. kubevirt. io/v1beta1 kind: DataVolume metadata: generateName: windows-10-source- spec: pvc: accessModes: - ReadWriteOnce resources: requests: storage: 7Gi volumeMode: Filesystem source: http: url: $(params. winImageDownloadURL) - name: waitForSuccess value: 'true' timeout: '2h' taskRef: kind: ClusterTask name: create-datavolume-from-manifest - name: create-base-dv params: - name: manifest value: | apiVersion: cdi. kubevirt. io/v1beta1 kind: DataVolume metadata: generateName: windows-10-base- spec: pvc: accessModes: - ReadWriteOnce resources: requests: storage: 20Gi volumeMode: Filesystem source: blank: {} - name: waitForSuccess value: 'true' taskRef: kind: ClusterTask name: create-datavolume-from-manifest - name: create-vm-from-manifest params: - name: manifest value: | apiVersion: kubevirt. io/v1alpha3 kind: VirtualMachine metadata: generateName: windows-installer- annotation: description: Windows VM generated by windows-installer pipeline labels: app: windows-installer spec: runStrategy: RerunOnFailure template: metadata: labels: kubevirt. io/domain: windows-installer spec: domain: cpu: sockets: 2 cores: 1 threads: 1 resources: requests: memory: 2Gi devices: disks: - name: installcdrom cdrom: bus: sata bootOrder: 1 - name: rootdisk bootOrder: 2 disk: bus: virtio - name: virtiocontainerdisk cdrom: bus: sata - name: sysprepconfig cdrom: bus: sata interfaces: - bridge: {} name: default inputs: - type: tablet bus: usb name: tablet networks: - name: default pod: {} volumes: - name: installcdrom - name: rootdisk - name: virtiocontainerdisk containerDisk: image: kubevirt/virtio-container-disk - name: sysprepconfig sysprep: configMap: name: $(params. autounattendConfigMapName) - name: ownDataVolumes value: - installcdrom:$(tasks. create-source-dv. results. name) - name: dataVolumes value: - rootdisk:$(tasks. create-base-dv. results. name) runAfter: - create-source-dv - create-base-dv taskRef: kind: ClusterTask name: create-vm-from-manifest - name: wait-for-vmi-status params: - name: vmiName value: $(tasks. create-vm-from-manifest. results. name) - name: successCondition value: status. phase == Succeeded - name: failureCondition value: status. phase in (Failed, Unknown) runAfter: - create-vm-from-manifest timeout: '2h' taskRef: kind: ClusterTask name: wait-for-vmi-status - name: cleanup-vm params: - name: vmName value: $(tasks. create-vm-from-manifest. results. name) - name: delete value: true runAfter: - wait-for-vmi-status taskRef: kind: ClusterTask name: cleanup-vmRunning the Pipeline: To run the pipeline we need to create the following PipelineRun which references our Pipeline. Before we do that, we should replace DOWNLOAD_URL with the Windows source URL we obtained earlier. The PipelineRun also specifies the serviceAccount names for all the steps/tasks and the timeout for the whole Pipeline. 
The timeout should be changed appropriately; for example if you have a slow download connection. You can also set a timeout for each task in the Pipeline definition. apiVersion: tekton. dev/v1beta1kind: PipelineRunmetadata: generateName: windows-installer-run-spec: params: - name: winImageDownloadURL value: DOWNLOAD_URL pipelineRef: name: windows-installer timeout: '5h' serviceAccountNames: - taskName: create-source-dv serviceAccountName: create-datavolume-from-manifest-task - taskName: create-base-dv serviceAccountName: create-datavolume-from-manifest-task - taskName: create-vm-from-manifest serviceAccountName: create-vm-from-manifest-task - taskName: wait-for-vmi-status serviceAccountName: wait-for-vmi-status-task - taskName: cleanup-vm serviceAccountName: cleanup-vm-taskInspecting the output: Firstly, you can inspect the progress of the windows-10-source and windows-10-base import: kubectl get dvs | grep windows-10-> windows-10-base-8zxwr Succeeded 100. 0% 21s> windows-10-source-jdv64 ImportInProgress 1. 01% 20sTo inspect the status of the pipeline run: kubectl get pipelinerun -l tekton. dev/pipeline=windows-installer > NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME> windows-installer-run-n2mjf Unknown Running 118sTo check the status of each task and its pods: kubectl get pipelinerun -o yaml -l tekton. dev/pipeline=windows-installer kubectl get pods -l tekton. dev/pipeline=windows-installer Once the pipeline run completes, you should be left with a windows-10-base-xxxxx PVC (backed by a DataVolume). You can then create a new VM with a copy of this PVC to test it. You need to replace PVC_NAME with windows-10-base-xxxxx (you can use kubectl get dvs -o name | grep -o windows-10-base-. * ) and PVC_NAMESPACE with the correct namespace in the following YAML. apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: windows-10-vmspec: dataVolumeTemplates: - apiVersion: cdi. kubevirt. io/v1beta1 kind: DataVolume metadata: name: windows-10-vm-root spec: pvc: accessModes: - ReadWriteMany resources: requests: storage: 20Gi source: pvc: name: PVC_NAME namespace: PVC_NAMESPACE running: false template: metadata: labels: kubevirt. io/domain: windows-10-vm spec: domain: cpu: sockets: 2 cores: 1 threads: 1 resources: requests: memory: 2Gi devices: disks: - name: rootdisk bootOrder: 1 disk: bus: virtio - name: virtiocontainerdisk cdrom: bus: sata interfaces: - bridge: {} name: default inputs: - type: tablet bus: usb name: tablet networks: - name: default pod: {} volumes: - name: rootdisk dataVolume: name: windows-10-vm-root - name: virtiocontainerdisk containerDisk: image: kubevirt/virtio-container-diskYou can start the VM and login with Administrator : changepassword credentials. Then you should be welcomed by your fresh VM. Resources: YAML files used in this example KubeVirt Tekton Tasks Tekton Pipelines" }, { - "id": 42, + "id": 43, "url": "/2021/changelog-v0.40.0.html", "title": "KubeVirt v0.40.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 40. 0: Released on: Mon Apr 19 12:25:41 2021 +0000 [PR #5467][rmohr] Fixes upgrades from KubeVirt v0. 36 [PR #5350][jean-edouard] Removal of entire permittedHostDevices section will now remove all user-defined host device plugins. [PR #5242][jean-edouard] Creating more than 1 migration at the same time for a given VMI will now fail [PR #4907][vasiliy-ul] Initial cgroupv2 support [PR #5324][jean-edouard] Default feature gates can now be defined in the provider configuration. 
[PR #5006][alicefr] Add discard=unmap option [PR #5022][davidvossel] Fixes race condition between operator adding service and webhooks that can result in installs/uninstalls failing [PR #5310][ashleyschuett] Reconcile CRD resources [PR #5102][iholder-redhat] Go version updated to 1. 14. 14 [PR #4746][ashleyschuett] Reconcile Deployments, DaemonSets, MutatingWebhookConfigurations and ValidatingWebhookConfigurations [PR #5037][ormergi] Hot-plug SR-IOV VF interfaces to VM’s post a successful migration. [PR #5269][mlsorensen] Prometheus metrics scraped from virt-handler are now served from the VMI informer cache, rather than calling back to the Kubernetes API for VMI information. [PR #5138][davidvossel] virt-handler now waits up to 5 minutes for all migrations on the node to complete before shutting down. [PR #5191][yuvalturg] Added a metric for monitoring CPU affinity [PR #5215][xphyr] Enable detection of Intel GVT-g vGPU. [PR #4760][rmohr] Make virt-handler heartbeat more efficient and robust: Only one combined PATCH and no need to detect different cluster types anymore. [PR #5091][iholder-redhat] QEMU SeaBios debug logs are being seen as part of virt-launcher log. [PR #5221][rmohr] Remove workload placement validation webhook which blocks placement updates when VMIs are running [PR #5128][yuvalturg] Modified memory related metrics by adding several new metrics and splitting the swap traffic bytes metric [PR #5084][ashleyschuett] Add validation to CustomizeComponents object on the KubeVirt resource [PR #5182][davidvossel] New [release-blocker] functional test marker to signify tests that can never be disabled before making a release [PR #5137][davidvossel] Added our policy around release branch backporting in docs/release-branch-backporting. md [PR #5096][yuvalturg] Modified networking metrics by adding new metrics, splitting existing ones by rx/tx and using the device alias for the interface name when available [PR #5088][awels] Hotplug works with hostpath storage. [PR #4908][dhiller] Move travis tag and master builds to kubevirt prow. [PR #4741][EdDev] Allow live migration for SR-IOV VM/s without preserving the VF interfaces. " }, { - "id": 43, + "id": 44, "url": "/2021/changelog-v0.39.0.html", "title": "KubeVirt v0.39.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 39. 0: Released on: Wed Mar 10 14:51:58 2021 +0000 [PR #5010][jean-edouard] Migrated VMs stay persistent and can therefore survive S3, among other things. [PR #4952][ashleyschuett] Create warning NodeUnresponsive event if a node is running a VMI pod but not a virt-handler pod [PR #4686][davidvossel] Automated workload updates via new KubeVirt WorkloadUpdateStrategy API [PR #4886][awels] Hotplug support for WFFC datavolumes. [PR #5026][AlonaKaplan] virt-launcher, masquerade binding - prefer nft over iptables. [PR #4921][borod108] Added support for Sysprep in the API. A user can now add a answer file through a ConfigMap or a Secret. The User Guide is updated accordingly. /kind feature [PR #4874][ormergi] Add new feature-gate SRIOVLiveMigration, [PR #4917][iholder-redhat] Now it is possible to enable QEMU SeaBios debug logs setting virt-launcher log verbosity to be greater than 5. [PR #4966][arnongilboa] Solve virtctl “Error when closing file … file already closed” that shows after successful image upload [PR #4489][salanki] Fix a bug where a disk. 
img file was created on filesystems mounted via Virtio-FS [PR #4982][xpivarc] Fixing handling of transient domain [PR #4984][ashleyschuett] Change customizeComponents. patches such that ‘*’ resourceName or resourceType matches all, all fields of a patch (type, patch, resourceName, resourceType) are now required. [PR #4972][vladikr] allow disabling pvspinlock to support older guest kernels [PR #4927][yuhaohaoyu] Fix of XML and JSON marshalling/unmarshalling for user defined device alias names which can make migrations fail. [PR #4552][rthallisey] VMs using bridged networking will survive a kubelet restart by having kubevirt create a dummy interface on the virt-launcher pods, so that some Kubernetes CNIs, that have implemented the CHECK RPC call, will not cause VMI pods to enter a failed state. [PR #4883][iholder-redhat] Bug fixed: Enabling libvirt debug logs only if debugLogs label value is “true”, disabling otherwise. [PR #4840][alicefr] Generate k8s events on IO errors [PR #4940][vladikr] permittedHostDevices will support both upper and lowercase letters in the device ID" }, { - "id": 44, + "id": 45, "url": "/2021/KubeVirt-Summit-Wrap-Up.html", "title": "The KubeVirt Summit 2021 is a wrap!", "author" : "Pep Turró Mauri", "tags" : "kubevirt, event, community", "body": "Just a few weeks ago, the KubeVirt community had their first ever dedicatedonline event, the KubeVirt Summit! We are very happy to have had this opportunity to meet so many communitymembers, hear from users, vendors and contributors, and learn so many thingsabout KubeVirt. If you missed the event, or if you were there and want to remember the greattime we had, the session recordingsare available in the KubeVirt YouTube channel. The landing page about the KubeVirt Summitcontains a detailed list of all the sessions, with information about thecontents, presenters, and direct links to each session recording and slides(where available). Thanks: We would like to thank everyone who contributed to make this event happen: allthe presenters / session leads, everyone who proposed a session, the variouscommunity members who contributed to the organization, all the attendees, andthe Container-native Computing Foundation who sponsored theevent. I want more: We are just wrapping up this first edition. Based on this experience, we reallyhope to have more community events of this type in the future, but it is still abit early to say when/where how. For now, please keep the conversations going through the various community channels: The mailing list The #virtualization Slack channel in Kubernetes Slack Our community meetings The github repositories Twitter: @kubevirt" }, { - "id": 45, + "id": 46, "url": "/2021/changelog-v0.38.0.html", "title": "KubeVirt v0.38.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 38. 0: Released on: Mon Feb 8 13:15:32 2021 +0000 [PR #4870][qinqon] Bump k8s deps to 0. 20. 2 [PR #4571][yuvalturg] Added os, workflow and flavor labels to the kubevirt_vmi_phase_count metric [PR #4659][salanki] Fixed an issue where non-root users inside a guest could not write to a Virtio-FS mount. 
[PR #4844][xpivarc] Fixed limits/requests to accept int again [PR #4850][rmohr] virtio-scsi now respects the useTransitionalVirtio flag instead of assigning a virtio version depending on the machine layout [PR #4672][vladikr] allow increasing logging verbosity of infra components in KubeVirt CR [PR #4838][rmohr] Fix an issue where it may not be able to update the KubeVirt CR after creation for up to minutes due to certificate propagation delays [PR #4806][rmohr] Make the mutating webhooks for VMIs and VMs required to avoid letting entities into the cluster which are not properly defaulted [PR #4779][brybacki] Error messsge on virtctl image-upload to WaitForFirstConsumer DV [PR #4749][davidvossel] KUBEVIRT_CLIENT_GO_SCHEME_REGISTRATION_VERSION env var for specifying exactly what client-go scheme version is registered [PR #4772][jean-edouard] Faster VMI phase transitions thanks to an increased number of VMI watch threads in virt-controller [PR #4730][rmohr] Add spec. domain. devices. useVirtioTransitional boolean to support virtio-transitional for old guests" }, { - "id": 46, + "id": 47, "url": "/2021/changelog-v0.37.0.html", "title": "KubeVirt v0.37.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 37. 0: Released on: Mon Jan 18 17:57:03 2021 +0000 [PR #4654][AlonaKaplan] Introduce virt-launcher DHCPv6 server. [PR #4669][kwiesmueller] Add nodeSelector to kubevirt components restricting them to run on linux nodes only. [PR #4648][davidvossel] Update libvirt base container to be based of packages in rhel-av 8. 3 [PR #4653][qinqon] Allow configure cloud-init with networkData only. [PR #4644][ashleyschuett] Operator validation webhook will deny updates to the workloads object of the KubeVirt CR if there are running VMIs [PR #3349][davidvossel] KubeVirt v1 GA api [PR #4645][maiqueb] Re-introduce the CAP_NET_ADMIN, to allow migration of VMs already having it. [PR #4546][yuhaohaoyu] Failure detection and handling for VM with EFI Insecure Boot in KubeVirt environments where EFI Insecure Boot is not supported by design. [PR #4625][awels] virtctl upload now shows error when specifying access mode of ReadOnlyMany [PR #4396][xpivarc] KubeVirt is now explainable! [PR #4517][danielBelenky] Fix guest agent reporting. " }, { - "id": 47, + "id": 48, "url": "/2021/KubeVirt-Summit-announce.html", "title": "KubeVirt Summit is coming!", "author" : "Pep Turró Mauri", "tags" : "kubevirt, event, community", "body": "Exciting news! The KubeVirt community are in the process of planning the first ever KubeVirt Summit! Save the dates: The event will take place online during two half-days: Dates: February 9 and 10, 2021. Time: 15:00 – 19:00 UTC (10:00–14:00 EST, 16:00–20:00 CET)Proposing topics: We want to encourage anyone who is interested in presenting to submit a topic toour community repohere. Simplycopy thetemplate in that repo directory as a new file, fill in the details pertaining to yoursession, and submit your proposal as a Pull Request. Keep up to date: The event has a landing page here. More details will be shared as they become available, here in the website and also on our mailing list, twitter and our weekly community meetings. Reach out through any of these channels to get involved. Looking forward to meeting you there! " }, { - "id": 48, + "id": 49, "url": "/2020/changelog-v0.36.0.html", "title": "KubeVirt v0.36.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 36. 
0: Released on: Wed Dec 16 14:30:37 2020 +0000 [PR #4667][kubevirt-bot] Update libvirt base container to be based of packages in rhel-av 8. 3 [PR #4634][kubevirt-bot] Failure detection and handling for VM with EFI Insecure Boot in KubeVirt environments where EFI Insecure Boot is not supported by design. [PR #4647][kubevirt-bot] Re-introduce the CAP_NET_ADMIN, to allow migration of VMs already having it. [PR #4627][kubevirt-bot] Fix guest agent reporting. [PR #4458][awels] It is now possible to hotplug DataVolume and PVC volumes into a running Virtual Machine. [PR #4025][brybacki] Adds a special handling for DataVolumes in WaitForFirstConsumer state to support CDI’s delayed binding mode. [PR #4217][mfranczy] Set only an IP address for interfaces reported by qemu-guest-agent. Previously that was CIDR. [PR #4195][davidvossel] AccessCredentials API for dynamic user/password and ssh public key injection [PR #4335][oshoval] VMI status displays SRIOV interfaces with their network name only when they have originally [PR #4408][andreabolognani] This version of KubeVirt includes upgraded virtualization technology based on libvirt 6. 6. 0 and QEMU 5. 1. 0. [PR #4514][ArthurSens] domain label removed from metric kubevirt_vmi_memory_unused_bytes [PR #4542][danielBelenky] Fix double migration on node evacuation [PR #4506][maiqueb] Remove CAP_NET_ADMIN from the virt-launcher pod. [PR #4501][AlonaKaplan] CAP_NET_RAW removed from virt-launcher. [PR #4488][salanki] Disable Virtio-FS metadata cache to prevent OOM conditions on the host. [PR #3937][vladikr] Generalize host devices assignment. Provides an interface between kubevirt and external device plugins. Provides a mechanism for whitelisting host devices. [PR #4443][rmohr] All kubevirt webhooks support now dry-runs. " }, { - "id": 49, + "id": 50, "url": "/2020/Monitoring-KubeVirt-VMs-from-the-inside.html", "title": "Monitoring KubeVirt VMs from the inside", "author" : "arthursens", "tags" : "kubevirt, Kubernetes, virtual machine, VM, prometheus, prometheus-operator, node-exporter, monitoring", "body": "Monitoring KubeVirt VMs from the inside: This blog post will guide you on how to monitor KubeVirt Linux based VirtualMachines with Prometheus node-exporter. Since node_exporter will run inside the VM and expose metrics at an HTTP endpoint, you can use this same guide to expose custom applications that expose metrics in the Prometheus format. Environment: This set of tools will be used on this guide: Helm v3 - To deploy the Prometheus-Operator. minikube - Will provide us a k8s cluster, you are free to choose any other k8s provider though. kubectl - To deploy different k8s resources virtctl - to interact with KubeVirt VirtualMachines, can be downloaded from the KubeVirt repo. Deploy Prometheus Operator: Once you have your k8s cluster, with minikube or any other provider, the first step will be to deploy the Prometheus Operator. The reason is that the KubeVirt CR, when installed on the cluster, will detect if the ServiceMonitor CR already exists. If it does, then it will create ServiceMonitors configured to monitor all the KubeVirt components (virt-controller, virt-api, and virt-handler) out-of-the-box. Although monitoring KubeVirt itself is not covered in this guide, it is a good practice to always deploy the Prometheus Operator before deploying KubeVirt. To deploy the Prometheus Operator, you will need to create its namespace first, e. g. 
monitoring: kubectl create ns monitoringThen deploy the operator in the new namespace: helm fetch stable/prometheus-operatortar xzf prometheus-operator*. tgzcd prometheus-operator/ && helm install -n monitoring -f values. yaml kubevirt-prometheus stable/prometheus-operatorAfter everything is deployed, you can delete everything that was downloaded by helm: cd . . rm -rf prometheus-operator*One thing to keep in mind is the release name we added here: kubevirt-prometheus. The release name will be used when declaring our ServiceMonitor later on. . Deploy KubeVirt Operators and KubeVirt CustomResources: Alright, the next step will be deploying KubeVirt itself. We will start with its operator. We will fetch the latest version, then use kubectl create to deploy the manifest directly from Github:: export KUBEVIRT_VERSION=$(curl -s https://api. github. com/repos/kubevirt/kubevirt/releases | grep tag_name | grep -v -- - | sort -V | tail -1 | awk -F':' '{print $2}' | sed 's/,//' | xargs)kubectl create -f https://github. com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator. yamlBefore deploying the KubeVirt CR, make sure that all kubevirt-operator replicas are ready, you can do that with: kubectl rollout status -n kubevirt deployment virt-operatorAfter that, we can deploy KubeVirt and wait for all it’s components to get ready in a similar manner: kubectl create -f https://github. com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr. yamlkubectl rollout status -n kubevirt deployment virt-apikubectl rollout status -n kubevirt deployment virt-controllerkubectl rollout status -n kubevirt daemonset virt-handlerIf we want to monitor VMs that can restart, we want our node-exporter to be persisted and, thus, we need to set up persistent storage for them. CDI will be the component responsible for that, so we will deploy it’s operator and custom resource as well. As always, waiting for the right components to get ready before proceeding: export CDI_VERSION=$(curl -s https://github. com/kubevirt/containerized-data-importer/releases/latest | grep -o v[0-9]\. [0-9]*\. [0-9]* )kubectl create -f https://github. com/kubevirt/containerized-data-importer/releases/download/$CDI_VERSION/cdi-operator. yamlkubectl rollout status -n cdi deployment cdi-operatorkubectl create -f https://github. com/kubevirt/containerized-data-importer/releases/download/$CDI_VERSION/cdi-cr. yamlkubectl rollout status -n cdi deployment cdi-apiserverkubectl rollout status -n cdi deployment cdi-uploadproxykubectl rollout status -n cdi deployment cdi-deploymentDeploying a VirtualMachine with persistent storage: Alright, cool. We have everything we need now. Let’s setup the VM. We will start with the PersistenVolume’s required by CDI’s DataVolume resources. Since I’m using minikube with no dynamic storage provider, I’ll be creating 2 PVs with a reference to the PVCs that will claim them. Notice claimRef in each of the PVs. apiVersion: v1kind: PersistentVolumemetadata: name: example-volumespec: storageClassName: claimRef: namespace: default name: cirros-dv accessModes: - ReadWriteOnce capacity: storage: 2Gi hostPath: path: /data/example-volume/---apiVersion: v1kind: PersistentVolumemetadata: name: example-volume-scratchspec: storageClassName: claimRef: namespace: default name: cirros-dv-scratch accessModes: - ReadWriteOnce capacity: storage: 2Gi hostPath: path: /data/example-volume-scratch/With the persistent storage in place, we can create our VM with the following manifest: apiVersion: kubevirt. 
io/v1alpha3kind: VirtualMachinemetadata: name: monitorable-vmspec: running: true template: metadata: name: monitorable-vm labels: prometheus. kubevirt. io: node-exporter spec: domain: resources: requests: memory: 1024Mi devices: disks: - disk: bus: virtio name: my-data-volume volumes: - dataVolume: name: cirros-dv name: my-data-volume dataVolumeTemplates: - metadata: name: cirros-dv spec: source: http: url: https://download. cirros-cloud. net/0. 4. 0/cirros-0. 4. 0-x86_64-disk. img pvc: storageClassName: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi Notice that KubeVirt’s VirtualMachine resource has a VirtualMachine template and a dataVolumeTemplate. On the VirtualMachine template, it is important noticing that we named our VM monitorable-vm, and we will use this name to connect to its console with virtctl later on. The label we’ve added, prometheus. kubevirt. io: node-exporter , is also important, since we’ll use it when configuring Prometheus to scrape the VM’s node-exporter On dataVolumeTemplate, it is important noticing that we named the PVC cirros-dv and the DataVolume resource will create 2 PVCs with that, cirros-dv and cirros-dv-scratch. Notice that cirros-dv and cirros-dv-scratch are the names referenced on our PersistentVolume manifests. The names must match for this to work. Installing the node-exporter inside the VM: Once the VirtualMachineInstance is running, we can connect to its console using virtctl console monitorable-vm. If user and password are required, provide your credentials accordingly. If you are using the same disk image from this guide, the user and password are cirros and gocubsgo respectively. The following script will install node-exporter and configure the VM to always start the exporter when booting: curl -LO -k https://github. com/prometheus/node_exporter/releases/download/v1. 0. 1/node_exporter-1. 0. 1. linux-amd64. tar. gzgunzip -c node_exporter-1. 0. 1. linux-amd64. tar. gz | tar xopf -. /node_exporter-1. 0. 1. linux-amd64/node_exporter &sudo /bin/sh -c 'cat > /etc/rc. local <<EOF#!/bin/shecho Starting up node_exporter at :9100! /home/cirros/node_exporter-1. 0. 1. linux-amd64/node_exporter 2>&1 > /dev/null &EOF'sudo chmod +x /etc/rc. localP. S. : If you are using a different base image, please configure node-exporter to start at boot time accordingly Configuring Prometheus to scrape the VM’s node-exporter: To configure Prometheus to scrape the node-exporter (or other applications) is really simple. All we need is to create a new Service and a ServiceMonitor: apiVersion: v1kind: Servicemetadata: name: monitorable-vm-node-exporter labels: prometheus. kubevirt. io: node-exporter spec: ports: - name: metrics port: 9100 targetPort: 9100 protocol: TCP selector: prometheus. kubevirt. io: node-exporter ---apiVersion: monitoring. coreos. com/v1kind: ServiceMonitormetadata: name: kubevirt-node-exporters-servicemonitor namespace: monitoring labels: prometheus. kubevirt. io: node-exporter release: monitoringspec: namespaceSelector: any: true selector: matchLabels: prometheus. kubevirt. io: node-exporter endpoints: - port: metrics interval: 15sLet’s break this down just to make sure we set up everything right. Starting with the Service: spec: ports: - name: metrics port: 9100 targetPort: 9100 protocol: TCP selector: prometheus. kubevirt. io: node-exporter On the specification, we are creating a new port named metrics that will be redirected to every pod labeled with prometheus. kubevirt. 
io: node-exporter , at port 9100, which is the default port number for the node-exporter. apiVersion: v1kind: Servicemetadata: name: monitorable-vm-node-exporter labels: prometheus. kubevirt. io: node-exporter We are also labeling the Service itself with prometheus. kubevirt. io: node-exporter , that will be used by the ServiceMonitor object. Now let’s take a look at our ServiceMonitor specification: spec: namespaceSelector: any: true selector: matchLabels: prometheus. kubevirt. io: node-exporter endpoints: - port: metrics interval: 15sSince our ServiceMonitor will be deployed at the monitoring namespace, but our service is at the default namespace, we need namespaceSelector. any=true. We are also telling our ServiceMonitor that Prometheus needs to scrape endpoints from services labeled with prometheus. kubevirt. io: node-exporter and which ports are named metrics. Luckily, that’s exactly what we did with our Service! One last thing to keep an eye on. Prometheus configuration can be set up to watch multiple ServiceMonitors. We can see which ServiceMonitors our Prometheus is watching with the following command: # Look for Service Monitor Selectorkubectl describe -n monitoring prometheuses. monitoring. coreos. com monitoring-prometheus-oper-prometheusMake sure our ServiceMonitor has all labels required by Prometheus’ Service Monitor Selector. One common selector is the release name that we’ve set when deploying our Prometheus with helm! Testing: You can do a quick test by port-forwarding Prometheus web UI and executing some PromQL: kubectl port-forward -n monitoring prometheus-monitoring-prometheus-oper-prometheus-0 9090:9090To make sure everything is working, access localhost:9090/graph and execute the PromQL up{pod=~ virt-launcher. * }. Prometheus should return data that is being collected from monitorable-vm’s node-exporter. You can play around with virtctl, stop and starting the VM to see how the metrics behave. You will notice that when stopping the VM with virtctl stop monitorable-vm, the VirtualMachineInstance is killed and, thus, so is it’s pod. This will result with our service not being able to find the pod’s endpoint and then it will be removed from Prometheus’ targets. With this behavior, alerts like the one below won’t work since our target is literally gone, not down. - alert: KubeVirtVMDown expr: up{pod=~ virt-launcher. * } == 0 for: 1m labels: severity: warning annotations: summary: KubeVirt VM {{ $labels. pod }} is down. BUT, if the VM is constantly crashing without being stopped, the pod won’t be killed and the target will still be monitored. Node-exporter will never start or will go down constantly alongside the VM, so an alert like this might work: - alert: KubeVirtVMCrashing expr: up{pod=~ virt-launcher. * } == 0 for: 5m labels: severity: critical annotations: summary: KubeVirt VM {{ $labels. pod }} is constantly crashing before node-exporter starts at boot. Conclusion: In this blog post we used node-exporter to expose metrics out of a KubeVirt VM. We also configured Prometheus Operator to collect these metrics. This illustrates how to bring Kubernetes monitoring best practices with applications running inside KubeVirt VMs. 
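For completeness, the alerts discussed above can also be evaluated continuously by Prometheus, rather than pasted into the web UI, by wrapping them in a PrometheusRule object, which the Prometheus Operator picks up much like it picks up ServiceMonitors. Below is a minimal sketch with just the crash alert; the object name is arbitrary, and it assumes the rule selector of your Prometheus matches the same release: monitoring label we attached to the ServiceMonitor:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: kubevirt-vm-alerts
  namespace: monitoring
  labels:
    release: monitoring
spec:
  groups:
    - name: kubevirt-vms
      rules:
        - alert: KubeVirtVMCrashing
          expr: up{pod=~"virt-launcher.*"} == 0
          for: 5m
          labels:
            severity: critical
          annotations:
            summary: "KubeVirt VM {{ $labels.pod }} is constantly crashing before node-exporter starts at boot."
As with the ServiceMonitor, if the rule does not appear under Status > Rules in the Prometheus web UI, compare its labels against the Rule Selector shown by kubectl describe on the Prometheus object.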
" }, { - "id": 50, + "id": 51, "url": "/2020/Customizing-images-for-containerized-vms.html", "title": "Customizing images for containerized VMs part I", "author" : "Alberto Losada Grande", "tags" : "kubevirt, kubernetes, virtual machine, okd, containerDisk, registry, composer-cli, virt-customize, builder tool", "body": "Table of contents The vision Preparation of the environment Configuration of the Builder image server Building standard CentOS 8 image Image creation with Builder Tool Verify the custom-built image Image tailoring with virt-customize Building a standard CentOS 7 image from cloud images Image creation with virt-customize Information The content of this article has been divided into two: this one, which is the first part, explains how to create a golden image using different tools such as Builder Tool and virt-customize. Once the custom-built image is ready, it is containerized so that it can be uploaded and stored into a container registry. The second part deals with the different ways the developers can deploy, modify and connect to the VirtualMachineInstance running in the OKD Kubernetes cluster. The vision: If you work for a software factory, some kind of development environment standardization is probably in place. There are a lot of approaches which fit different use cases. In this blog post, our example company has allowed developers to choose their preferred editing tools and debugging environment locally to their workstations. However, before committing their changes to a Git repository, they need to validate them in a specifically tailored environment. This environment, due to legal restrictions, contains exact versions of the libraries, databases, web server or any other software previously agreed with customers. Note Aside from the pre-commit environments, the company already has an automated continuous integration workflow composed by several shared environments: development, integration and production. This blog post focuses on showing a use case where containerized VMs running on top of Kubernetes ease the deployment and creation of standardized VMs to our developers. These VMs are meant to be ephemeral. However, if necessary, additional non-persistent disk or shared persistent storage can be attached so that important information can be kept safe. Along the process, different approaches and tools to create custom VM images that will be stored in a corporate registry are detailed. Containerizing VMs means adapting them so that they can be saved in a container registry. Being able to manage VMs as container images leverages the benefits of a container registry, such as: The registry becomes a source of truth for the VMs you want to run. Everybody can list all VMs available searching on a centralized point. The container registry, depending on the storage size, contains historical information of all the VMs, which might have multiple different versions, identified by their tags. Any developer with the proper permissions is able to run any specific version of your standardized VM. It is the unique point where all your VMs are stored avoiding having them spread all over your infrastructure. Information A container image registry is a service that stores container images, and is hosted either by a third-party or as a public/private registry such as Docker Hub, Quay, and so on. The ambitious goal is allowing the developers to deploy standardized VMs on the current Kubernetes infrastructure. 
Then, execute the required tests and if they are good, push the code to the corporate Git repositories and delete the VM eventually. This goal is divided into three main procedures: Create custom standardized VM images, also known as golden images. Containerize the resulting golden VM images. Deploy the proper VM images from the corporate registry into the OKD Kubernetes cluster. Preparation of the environment: Running containerized VMs in KubeVirt uses the containerDisk feature which provides the ability to store and distributed VM disks in the container image registry. The disks are pulled from the container registry and reside on the local node hosting the VMs that consume the disks. The company already have an OKD 4 Kubernetes cluster installed which provides out of the box a container registry and some required security features such as Role Based Access Controls (RBAC) and Security Context Constraints (SCC). Information In the OpenShift blog there is a post called Enterprise Kubernetes with OpenShift where you can find valuable information between the similarities and differences between OKD and Kubernetes. On top of the OKD cluster, KubeVirt is required so that we can run our virtual machines. The installation process is pretty well detailed in the KubeVirt’s documentation. Below it is shown how KubeVirt components can be seen from the OKD web console. Information KubeVirt version deployed is 0. 34. 2 which is the latest at the moment of writing. $ echo $KUBEVIRT_VERSION0. 34. 2$ kubectl create -f https://github. com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator. yaml$ kubectl create -f https://github. com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr. yaml Warning oc is the specific command-line tool for OKD, however, it is based in kubectl plus some additional features detailed here. It is probably that along the blog post, you can find executions with oc or kubectl interchangeably. containerDisks are created from RAW or QCOW2 virtual machine images. Nevertheless, virtual machine images with all the agreed software and proper configuration must be created previously. The company currently uses CentOS 7 as their approved base operating system to run their applications. However, during the last months, it has been encouraging to move to the recently released version 8 of CentOS. From a long time they had been using the prebuilt CentOS cloud images with virt-customize, which allowed them to modify the prebuilt cloud images. As a trade-off, they had to trust on the cloud image provided by CentOS or verify if new packages were added on each release. Information virt-customize can customize a virtual machine (disk image) by installing packages, editing configuration files, and so on. Virt-customize modifies the guest or disk image in place. Now, they are starting to use a new tool called Image Builder that creates deployment-ready customized system images from scratch. Furthermore, there is an integration with Cockpit where you can create custom CentOS images in various formats including QCOW2 for OpenStack, AMI (Amazon Machine Image), VHD (Azure Disk Image) etc. from a friendly user interface. Note There are a lot of tools that can accomplish the objective of creating custom images. Here we are focusing on two: virt-customize and Image Builder. Along this blog post, both tools are used together in the image building process, leveraging their strengths. 
In the following diagram is depicted the different agents that take part in the process of running our standardized VMs in Kubernetes. This workflow includes the creation and customization of the images, their containerization, storing them into the OKD container registry and finally the creation of the VMs in Kubernetes by the employees. okd imageStream devstation Configuration of the Builder image server: In order to prepare the building environment, it is recommended to install Image Builder in a dedicated server as it has specific security requirements. Actually, the lorax-composer which is one of its components doesn’t work properly with SELinux running, as it installs an entire OS image in an alternate directory. Warning As shown in the lorax-composer documentation SELinux must be disabled. However, I have been able to create custom images successfully with SELinux enabled. In case you find any problems during your building, check the lorax-composer logs in journal in order to get more detailed information. Here it is a table where the software required to run the builds along with the versions have been used. Note Operating System is CentOS 8 since CentOS 7 Image Builder is still an experimental feature | Component | Version || ——— | ———————————————————————————————– || Operating System | CentOS Linux release 8. 2. 2004 (Core) || Libvirt | libvirtd (libvirt) 4. 5. 0 || virt-customize | virt-customize 1. 38. 4rhel=8,release=14. module_el8. 1. 0+248+298dec18,libvirt || Image Builder | lorax-composer (28. 14. 42-2), composer-cli (composer-cli-28. 14. 42-2), cockpit-composer (cockpit-composer-12. 1-1. el8. noarch) | Once the builder image server is provisioned with latest CentOS 8, the Virtualization Host group package is installed. It will be required to test our customized images locally before containerizing and pushing them to the OKD registry. yum groupinstall Virtualization Host -ysystemctl enable libvirtd --now Next, virt-customize is installed from the libguestfs-tools package along with the Image Builder. The latest is composed by lorax-composer, the Cockpit composer plugin and the composer-cli, which will be used to interact directly with Composer using command-line. dnf install -y libguestfs-tools lorax-composer composer-cli cockpit-composersystemctl enable lorax-composer. socketsystemctl enable lorax-composer --nowsystemctl start cockpit Then, the local firewall is configured so that we can connect to the Cockpit web user interface from our workstation. firewall-cmd --add-service=cockpit && firewall-cmd --add-service=cockpit --permanentFinally, connect to the Cockpit user interface by typing the IP or name of the Builder image server and port TCP/9090 (Cockpit’s default) in your favourite web browser. Then, log in with a local administrator account. The following image shows the Image Build plugin web page. Actually, what it is depicted are the different Image Build blueprints that are shipped by default. The blueprint defines what should be included in your image. This includes packages, users, files, server settings … Error If Cockpit’s web UI is not working, take a look at the output of the lorax service with the command: journalctl -fu lorax-composerBuilding standard CentOS 8 image: It is time to create our standardized CentOS 8 image or also called golden CentOS 8 image. This image will be built from the ground up using the Image Builder tool. 
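Before creating the first blueprint, it can be useful to confirm that the lorax-composer API is actually answering; if the socket is not active, both the Cockpit plugin and composer-cli will fail with connection errors. A quick check from the Builder image server (composer-cli talks to the same API socket that Cockpit uses):
$ systemctl is-active lorax-composer
active
$ composer-cli status show
If the status command returns an error, inspect the journal as described in the note above.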
Image creation with Builder Tool: The easiest way to start is creating a new blueprint (devstation-centos8) from the Cockpit user interface. This will produce a scaffold file where all the required modifications can be made. Here it is shown the process of creation a new blueprint from Cockpit: I would also suggest adding some users and all the packages you want to install from the user interface. In our case, we are going to create the following users by clicking on the details tab of the new blueprint. In both cases, the password is known by the respective group of users and also belongs to the wheel group. Users Note sysadmin Privileged user owned by the Systems Engineering team to troubleshoot and have access to the VM developer These are the credentials used by the developers to access the VM Next, select the packages to include. Add the proper version of the package already agreed with the customer. Package Version httpd 2. 4. 37 mod_ssl 2. 4. 37 php 7. 2. 24 mariadb-server 10. 3. 17 openssh-server latest At this point, you already have a blueprint template to start working. In addition to using the web console, you can also use the Image Builder CLI to create images. When using the CLI, you have access to a few more customization options, such as managing firewall rules or download files from Git. Since we already have installed the composer-cli package in the Image Builder server, let’s use it to further customize our golden image. First, access to the Builder Image server and download the custom blueprint called devstation-centos8. $ composer-cli blueprints listdevstation-centos8example-atlasexample-developmentExample-http-server$ composer-cli blueprints save devstation-centos8$ lsdevstation-centos8. tomlInformation All composer-cli options are documented in the official webpage. Take a look if you need further detail. Now, let’s edit the devstation-centos8. toml file which is in charge of building our custom image. The time zone has been added to match Europe/Madrid with proper NTP servers. The kernel has been modified to allow connection via console. Several firewall rules have been added to allow our services being accessed from outside. Some services have been configured so that they are enabled and started at boot. A Git repository has been configured to be cloned. Actually, it is a Git repository that contains a manual detailing how the custom image is configured and how it must be used. Warning It is important to add console as a kernel option since the Builder Image tool disables access to serial console by default. It will allow the virtctl command to connect to the VM while it is booting in our OKD Kubernetes cluster. This is the final building configuration file, it can be downloaded from here name = devstation-centos8 description = A developer station version = 0. 0. 1 modules = []groups = [][[packages]]name = httpd version = 2. 4. 37 [[packages]]name = mod_ssl version = 2. 4. 37 [[packages]]name = php version = 7. 2. 24 [[packages]]name = mariadb-server version = 10. 3. 17 [[packages]]name = openssh-server version = * [customizations]hostname = devstation [customizations. kernel]append = console=tty0 console=ttyS0,19200n81 [customizations. timezone]timezone = Europe/Madrid ntpservers = [ 0. europe. pool. ntp. org , 1. europe. pool. ntp. org ][[customizations. user]]name = sysadmin description = Company Systems Admin password = $6$ZGmDxvGu3Q0M4RO/$KkfU0bD32FrLNpUCWEL8sy3dknJVyqExoy. NJMOcSCRjpt1H6sFKFjx8mFWn8H5CWTP7. bibPLBrRSRq3MrDb. 
home = /home/sysadmin/ shell = /usr/bin/bash groups = [ users , wheel ][[customizations. user]]name = developer description = developer groups = [ wheel ]password = $6$wlIgNacMnqCcXn3o$mPpw0apT4iZ3jDq0q6epXN3xCmNN. oVGFW. Gvu9r0nDVX. FXY3iCwfFkfPEcmhj7Kxw4Ppoes2LsUzPtNRjez0 [customizations. services]enabled = [ httpd , mariadb , sshd ][customizations. firewall. services]enabled = [ http , https , mysql , ssh ][[repos. git]]rpmname = manual rpmversion = 1. 0 rpmrelease = 1 summary = Manual how to work with devstation repo = https://github. com/alosadagrande/lorax ref = master destination = /var/www/html/manual Note In this case, we are using a Git repository to download useful information on how to deal with the customized image. However, it is possible to download for instance code or other information that can be stored in Git. And what is most important, it is versioned. Once edited, push the configuration to Image Builder and start the building process by selecting the blueprint and the output format. Builder Image tool can export the same blueprint into multiple output formats. Thus, one blueprint might create the same custom image running on multiple providers (qcow2 in our case). $ composer-cli blueprints push devstation-centos8. toml$ composer-cli compose start devstation-centos8 qcow2Compose 248161f5-0870-41e8-b871-001348395ca7 added to the queueNote It is possible to verify that the modified blueprint has been pushed successfully by executing the show command. $ composer-cli blueprints show devstation-centos8The building process can take tens of minutes. It is possible to see the process by checking the lorax-composer logs in the journal or request the status of the blueprint built from the composer-cli: $ composer-cli compose status248161f5-0870-41e8-b871-001348395ca7 RUNNING Fri Nov 27 15:12:09 2020 devstation-centos8 0. 0. 2 qcow2$ journalctl -u lorax-composer -fNov 27 15:13:31 eko3. cloud. lab. eng. bos. redhat. com lorax-composer[38218]: 2020-11-27 15:13:31,715: Installing. Nov 27 15:13:31 eko3. cloud. lab. eng. bos. redhat. com lorax-composer[38218]: 2020-11-27 15:13:31,716: Starting package installation processNov 27 15:13:31 eko3. cloud. lab. eng. bos. redhat. com lorax-composer[38218]: 2020-11-27 15:13:31,716: Downloading packagesNov 27 15:13:31 eko3. cloud. lab. eng. bos. redhat. com lorax-composer[38218]: 2020-11-27 15:13:31,716: Downloading 474 RPMs, 3. 75 MiB / 396. 83 MiB (0%) done. Nov 27 15:13:31 eko3. cloud. lab. eng. bos. redhat. com lorax-composer[38218]: 2020-11-27 15:13:31,716: Downloading 474 RPMs, 15. 58 MiB / 396. 83 MiB (3%) done. Once the building process is finished, it is time to download the QCOW2 file. It can be downloaded from Cockpit UI or from the composer-cli: $ composer-cli compose image 248161f5-0870-41e8-b871-001348395ca7248161f5-0870-41e8-b871-001348395ca7-disk. qcow2: 1854. 31 MB$ ls -lhrt-rw-r--r--. 1 root root 1. 5K Nov 27 15:11 devstation-centos8. toml-rw-r--r--. 1 root root 1. 9G Nov 27 15:26 248161f5-0870-41e8-b871-001348395ca7-disk. qcow2Afterwards, the image is suggested to be renamed to something more meaningful. Below the information given by qemu is exhibited: $ mv 248161f5-0870-41e8-b871-001348395ca7-disk. qcow2 golden-devstation-centos8-disk. qcow2$ qemu-img info golden-devstation-centos8-disk. qcow2image: golden-devstation-centos8-disk. qcow2file format: qcow2virtual size: 4. 3G (4566548480 bytes)disk size: 1. 8Gcluster_size: 65536Format specific information: compat: 1. 
1 lazy refcounts: false refcount bits: 16 corrupt: falseWarning Virtual size of the image is 4. 3G, since we agreed 10G the disk must be resized and root filesystem expanded before being containerized. Currently, there is no way to specify disk capacity in containerDisk as it can be done with emptyDisks. The size of the root filesystem and disk when running in KubeVirt is driven by the image. It is recommended to save the QCOW2 images under /var/lib/libvirt/images/ so that qemu user have permissions to expand or resize them. $ qemu-img resize golden-devstation-centos8-disk. qcow2 10GImage resized. The expansion is executed on the root partition, which in case of our golden image is /dev/sda2 partition. It must be checked previously, for instance using the virt-filesystems utility: $ virt-filesystems --partitions --long -a golden-devstation-centos8-disk. qcow2Name Type MBR Size Parent/dev/sda1 partition 83 1073741824 /dev/sda/dev/sda2 partition 83 2966421504 /dev/sda Note that a copy of the golden image is created and that’s the one expanded. $ cp golden-devstation-centos8-disk. qcow2 golden-devstation-centos8-disk-10G. qcow2$ virt-resize --expand /dev/sda2 golden-devstation-centos8-disk. qcow2 golden-devstation-centos8-disk-10G. qcow2[ 0. 0] Examining golden-devstation-centos8-disk-10G. qcow2**********Summary of changes:/dev/sda1: This partition will be left alone. /dev/sda2: This partition will be resized from 2. 7G to 9. 0G. Thefilesystem xfs on /dev/sda2 will be expanded using the ‘xfs_growfs’method. **********[ 2. 2] Setting up initial partition table on golden-devstation-centos8-disk-10G. qcow2[ 3. 1] Copying /dev/sda1[ 4. 0] Copying /dev/sda2 100%[ 8. 5] Expanding /dev/sda2 using the ‘xfs_growfs’ methodResize operation completed with no errors. Before deleting the old disk,carefully check that the resized disk boots and works correctly. Finally, it is verified that the image meets the expected size (see virtual size): $ qemu-img info golden-devstation-centos8-disk-10G. qcow2image: golden-devstation-centos8-disk-10G. qcow2file format: qcow2virtual size: 10G (10737418240 bytes)disk size: 1. 8Gcluster_size: 65536Format specific information: compat: 1. 1 lazy refcounts: false refcount bits: 16 corrupt: falseNote In case the developers are allowed to select between multiple flavours, e. g. different root filesystem sizes, you will end up with multiple containerized VM images. In the event that an additional block device is needed, emptyDisk is the proper way to go. Verify the custom-built image: Before continuing, it is suggested to verify the golden expanded image. Since the qcow2 image is not yet containerized, it can easily run on KVM/libvirt. In our case, the builder server has already in place the Virtualization Host group packages. Information There are a lot of tools that allow us to run a qcow2 image in libvirt. In this example, virt-install is used, however, other tool that makes easy to deploy VM images and worth exploring is kcli First, install virt-install, which is a command-line tool for creating new KVM, Xen, or Linux container guests using the “libvirt” hypervisor management library, and run a new VM from the golden image: $ yum install virt-install -y$ virt-install --version2. 2. 1$ virt-install --memory 2048 --vcpus 2 --name devstation-centos8 --disk /var/lib/libvirt/images/golden-devstation-centos8-disk-10G. qcow2,device=disk --os-type Linux --os-variant rhel8. 1 --virt-type kvm --graphics none --network default --importStarting install. . . 
Connected to domain devstation-centos8Escape character is ^]CentOS Linux 8 (Core)Kernel 4. 18. 0-147. 5. 1. el8_1. x86_64 on an x86_64devstation login:Login as developer or sysadmin user, scale privileges and check that the VM is configured as expected. $ firewall-cmd --list-allpublic (active) target: default icmp-block-inversion: no interfaces: ens3 sources: services: cockpit dhcpv6-client http https mysql ssh$ systemctl is-active httpdactive$ systemctl is-active mariadbactive$ systemctl is-active sshdactiveVerify the disk and partition sizes are correctly configured: $ lsblkNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTvda 252:0 0 10G 0 disk├─vda1 252:1 0 1G 0 part /boot└─vda2 252:2 0 9G 0 part /$ df -hFilesystem Size Used Avail Use% Mounted ondevtmpfs 962M 0 962M 0% /devtmpfs 995M 0 995M 0% /dev/shmtmpfs 995M 17M 979M 2% /runtmpfs 995M 0 995M 0% /sys/fs/cgroup/dev/vda2 9. 0G 1. 9G 7. 2G 21% /Note In case you are unsure on which partition you need to expand or contains the root filesystem, just run a VM from the golden qcow2 image and execute the previous commands. Then delete the VM and expand the image accordingly. Finally, notice how the cloned repository has been copied successfully during the built process. Users can check the custom image information connecting to the local Apache server: [root@devstation ~]# curl localhost/manual/Dear developer,<br><br>Welcome to the devstation server. <h2> How to use the devstation server </h2>Remember that before committing your changes to the corporate source management control server, you need to validate your code here. <h2> Need help? </h2>Please contact us at sysadmin@corporate. com Image tailoring with virt-customize: In the previous section, we verified that the golden image was successfully built. However, there are still a few things that need to be added so that the golden image can be successfully containerized and run on top of our OKD Kubernetes cluster. First, a worthy package that is suggested to be included in the golden image is cloud-init. KubeVirt allows you to create VM objects along with cloud-init configurations. Cloud-init will let our developers further adapt the custom image to their application needs. On the other hand, it has been agreed with the Software Engineering team to add a graphical interface to the custom image since there are developers that are not familiar with the terminal. The result will be two golden images CentOS 8, both with cloud-init, but one will include a GUI and the other is terminal-based and therefore much lighter. Warning It is important to set the memsize of the building process to 4096m and have expanded the root filesystem otherwise you will face an out of space or/and out of memory error while installing the GNOME GUI. cp golden-devstation-centos8-disk-10G. qcow2 golden-devstation-centos8-disk-10G-gui. qcow2virt-customize --format qcow2 -a /var/lib/libvirt/images/golden-devstation-centos8-disk-10G. qcow2 --install cloud-init --memsize 4096 --selinux-relabelvirt-customize --format qcow2 -a /var/lib/libvirt/images/golden-devstation-centos8-disk-10G-gui. qcow2 --install @graphical-server-environment,cloud-init --memsize 4096 --run-command systemctl set-default graphical. target --selinux-relabel At this point we built: A golden CentOS 8 image which can run on libvirt/KVM virtualization servers (golden-devstation-centos8-disk. qcow2) A 10G CentOS 8 image prepared to be executed by KubeVirt including cloud-init. (golden-devstation-centos8-disk-10G. 
qcow2) A 10G CentOS 8 image prepared to be executed by KubeVirt including both cloud-init and GNOME GUI (golden-devstation-centos8-disk-10G-gui. qcow2)Building a standard CentOS 7 image from cloud images: In the previous section, it was shown how we can build and customize images from scratch using the Builder Image tool. However, there are settings that could not be configured even with the composer-cli. Thus, virt-customize is used to fine-tune the custom image, i. e, add cloud-init and a graphical user interface. Since the Builder Tool is an experimental tool in CentOS 7, the company continues creating their golden CentOS 7 images based on CentOS cloud images. Comparing with the CentOS 8 workflow, the cloud image corresponds to the golden image even it is not built by the Systems Engineering department. Warning Note that with CentOS 7 images, the company is trusting a cloud image provided by a third party instead of creating one from scratch. Image creation with virt-customize: The process to create the golden CentOS 7 image is quite similar to the CentOS 8 one. However, in this case, the customize procedure is entirely done with virt-customize. The first step is to download the cloud image. curl -o /var/lib/libvirt/images/golden-devstation-centos7-disk. qcow2 https://cloud. centos. org/centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2Then, it is required to resize and expand the image to meet the agreed size of 10GB. The details are the same explained in the previous section $ qemu-img info golden-devstation-centos7-disk. qcow2image: golden-devstation-centos7-disk. qcow2file format: qcow2virtual size: 8. 0G (8589934592 bytes)disk size: 819Mcluster_size: 65536Format specific information: compat: 1. 1 lazy refcounts: false refcount bits: 16 corrupt: false$ qemu-img resize golden-devstation-centos7-disk. qcow2 10GImage resized. $ cp golden-devstation-centos7-disk. qcow2 golden-devstation-centos7-disk-10G. qcow2$ virt-resize --expand /dev/sda1 golden-devstation-centos7-disk. qcow2 golden-devstation-centos7-disk-10G. qcow2Warning In this case, unlike CentOS 8 image, the partition where the root filesystem resides is /dev/sda1. That’s the partition that needs to be expanded. Below it is the virt-customize command that modifies the CentOS 7 expanded cloud image by: Installing the required packages (however, not the exact versions) Changing the root password Setting devstation as hostname to the customized image Configuring the time zone Enabling the installed services Including files from the manual. Note Manual files must be pulled first from alosadagrande/lorax GitHub repository. $ virt-customize --format qcow2 -a /var/lib/libvirt/images/golden-devstation-centos7-disk-10G. qcow2 \ --install cloud-init,mod_ssl,httpd,mariadb-server,php,openssh-server \ --memsize 4096 --hostname devstation --selinux-relabel --timezone Europe/Madrid \ --root-password password:toor --password centos:password:developer123 \ --run-command 'systemctl enable httpd' --run-command 'systemctl enable mariadb' \ --mkdir /var/www/html/manual --upload ~/lorax/index. html:/var/www/html/manual/index. htmlInformation Instead of executing all parameters in the command-line it is possible to create a file that is used as an input file for virt-customize. See option commands-from-file Next, we need to create the graphical user interface image in a similar way as we did previously with CentOS 8 image. cp golden-devstation-centos7-disk-10G. qcow2 golden-devstation-centos7-disk-10G-gui. 
qcow2virt-customize --format qcow2 -a /var/lib/libvirt/images/golden-devstation-centos7-disk-10G-gui. qcow2 --install cloud-init --memsize 4096 --run-command yum groupinstall 'GNOME Desktop' -y --run-command systemctl set-default graphical. target --selinux-relabelAt this point we built: A golden CentOS 7 image which can run on libvirt/KVM virtualization servers (golden-devstation-centos7-disk. qcow2). A 10G CentOS 7 image prepared to be executed by KubeVirt which includes cloud-init (golden-devstation-centos7-disk-10G. qcow2). A 10G CentOS 7 image prepared to be executed by KubeVirt which includes cloud-init and GNOME GUI (golden-devstation-centos7-disk-10G-gui. qcow2). Image containerization procedure: The procedure to inject a VirtualMachineInstance disk into a container images is pretty well explained in containerDisk Workflow example from the official documentation. Only RAW and QCOW2 formats are supported and the disk it is recommended to be placed into the /disk directory inside the container. Actually, it can be placed in other directories, but then, it must be explicitly configured when creating the VirtualMachine Currently, there are 4 standardized images ready to be containerized. The process is the same for all of them, so in order to keep it short, we are just going to show the process of creating a container image from the CentOS 8 QCOW2 images. Information These are the four available images: CentOS 8 with GNOME, CentOS 8 terminal only, CentOS 7 with GNOME and CentOS 7 terminal only. $ cat << EOF > ContainerfileFROM scratchADD golden-devstation-centos8-disk-10G. qcow2 /disk/EOF$ cat ContainerfileFROM scratchADD golden-devstation-centos8-disk-10G-gui. qcow2 /disk/Then, it is time to build the image. In our case, podman has chosen to execute the task, however, we could have used docker or buildah. $ podman build . -t openshift/devstation-centos8:terminalSTEP 1: FROM scratchSTEP 2: ADD golden-devstation-centos8-disk-10G. qcow2 /disk/STEP 3: COMMIT openshift/devstation-centos8:terminal8a9e83db71f08995fa73699c4e5a2d331c61b393daa18aa0b63269dc10078467$ podman build . -t openshift/devstation-centos8:guiSTEP 1: FROM scratchSTEP 2: ADD golden-devstation-centos8-disk-10G-gui. qcow2 /disk/STEP 3: COMMIT openshift/devstation-centos8:gui2a4ecc7bf9da91bcb5847fd1cf46f4cd10726a4ceae88815eb2a9ab38b316be4After the successful build, the images are stored locally to the local server, in our case the Builder Server. Remember that they must be uploaded to the OKD container registry. $ podman imagesREPOSITORY TAG IMAGE ID CREATED SIZElocalhost/openshift/devstation-centos8 gui 2a4ecc7bf9da 3 minutes ago 5. 72 GBlocalhost/openshift/devstation-centos8 terminal 8a9e83db71f0 13 minutes ago 1. 94 GBStore the image in the container registry: Before pushing the images to the corporate container registry, it must be verified that the OKD registry is available outside the Kubernetes cluster. This allows any authenticated user to gain external access to push images into the OKD Kubernetes cluster. Exposing the secure registry consists basically on configuring a route and expose that route in the OKD routers. Once done, external authenticated access is allowed. $ oc get route -n openshift-image-registryNAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARDdefault-route default-route-openshift-image-registry. apps. okd. okdlabs. com image-registry <all> reencrypt NoneNote In order to upload your containerized images to the OKD registry, the user must be authenticated and authorized to execute the push action. 
The role that must be added to the OKD user is the registry-editor In order to authenticate with the OKD container registry, podman is employed as explained in the official documentation. $ oc login https://api. okd. okdlabs. com:6443 -u alosadagThe server uses a certificate signed by an unknown authority. You can bypass the certificate check, but any data you send to the server could be intercepted by others. Use insecure connections? (y/n): yAuthentication required for https://api. okd. okdlabs. com:6443 (openshift)Username: alosadagPassword:Login successful. $ HOST=$(oc get route default-route -n openshift-image-registry -o jsonpath='{. spec. host }')$ echo $HOSTdefault-route-openshift-image-registry. apps. okd. okdlabs. com$ podman login -u $(oc whoami) -p $(oc whoami -t) --tls-verify=false $HOSTLogin Succeeded!Before pushing the images, adapt container images to the proper name so they can be uploaded to private registries. Since it is agreed that all developers must be able to pull the images into their namespaces, the images need to be pushed to the openshift project. Information Understanding containers, images and imageStreams from OpenShift documentation deeply explains container image naming. podman tag localhost/openshift/devstation-centos8:gui default-route-openshift-image-registry. apps. okd. okdlabs. com/openshift/devstation:v8-terminalpodman push default-route-openshift-image-registry. apps. okd. okdlabs. com/openshift/devstation:v8-terminal --tls-verify=falsepodman tag localhost/openshift/devstation-centos:gui default-route-openshift-image-registry. apps. okd. okdlabs. com/openshift/devstation:v8-guipodman push default-route-openshift-image-registry. apps. okd. okdlabs. com/openshift/devstation:v8-gui --tls-verify=falseVerify that the images are stored correctly in the OKD container registry by checking the imageStream. As shown below, both images were uploaded successfully since the devstation imageStream contains two images with v8-gui and v8-terminal tags respectively. oc describe imageStream devstation -n openshiftName: devstationNamespace: openshiftCreated: 23 hours agoLabels: <none>Annotations: <none>Image Repository: default-route-openshift-image-registry. apps. okd. okdlabs. com/openshift/devstationImage Lookup: local=falseUnique Images: 2Tags: 2v8-gui no spec tag * image-registry. openshift-image-registry. svc:5000/openshift/devstation@sha256:e301d935c1cb5a64d41df340d78e6162ddb0ede9b9b5df9c20df10d78f8fde0f 2 hours agov8-terminal no spec tag * image-registry. openshift-image-registry. svc:5000/openshift/devstation@sha256:47c2ba0c463da84fa1569b7fb8552c07167f3464a9ce3b6e3f607207ba4cee65At this point, the images are stored in a private registry and ready to be consumed by the developers. Information In case you do not have a corporate private registry available, you can upload images to any free public container registry. Then, consume the container images from the public container registry. Just in case you want to use them or take a look, it has been uploaded to my public container image repository at quay. io In the next article, we will show how our developers can consume the custom-built images to run into the OKD Kubernetes cluster. Summary: In this blog post, it was detailed a real use of a company that uses KubeVirt to run standardized environments to run and test the code of their applications. In their use case, VMs are spinned up on-demand in the OKD Kubernetes cluster by the developers. 
Summary: In this blog post, we detailed a real use case of a company that uses KubeVirt to run standardized environments in which to run and test the code of their applications. In their use case, VMs are spun up on demand in the OKD Kubernetes cluster by the developers. This makes them completely autonomous, creating their environments and deleting them once their tasks are accomplished. The article explained how to create a golden image using different tools such as Image Builder and virt-customize. Once the custom-built image was ready, it was transformed into a container image so that it can be uploaded to and stored in a container registry. Information In the next blog post, the custom-built containerized VM will be deployed from our corporate registry into our Kubernetes cluster. We will show how the developers can fine-tune the image deployment even further, how extra storage can be requested and how to connect to the VirtualMachineInstance. Stay tuned! References: KubeVirt installation Image Builder: Building custom system images Composer-cli information Custom-built images available at quay.io" }, { - "id": 51, + "id": 52, "url": "/2020/run_strategies.html", "title": "High Availability -- RunStrategies for Virtual Machines", "author" : "Stu Gott", "tags" : "kubevirt, Kubernetes, virtual machine, VM", "body": "Why Isn’t My VM Running?: There’s been a longstanding point of confusion in KubeVirt’s API, one that was raised yet again a few times recently. The confusion stems from the “Running” field of the VM spec. Language has meaning. It’s natural to take it at face value that “Running” means “Running”, right? Well, not so fast. Spec vs Status: KubeVirt objects follow Kubernetes convention in that they generally have Spec and Status stanzas. The Spec is user configurable and allows the user to indicate the desired state of the cluster in a declarative manner. Meanwhile, Status sections are not user configurable and reflect the actual state of things in the cluster. In short, users edit the Spec and controllers edit the Status. So back to the Running field. In this case the Running field is in the VM’s Spec. In other words, it expresses the user’s intent that the VM should be running. It doesn’t reflect the actual running state of the VM. RunStrategy: There’s a flip side to the above, equally as confusing: “Running” isn’t always what the user wants. If a user logs into a VM and shuts it down from inside the guest, KubeVirt will dutifully re-spawn it! There certainly exist high availability use cases where that’s exactly the correct reaction, but in most cases that’s just plain confusing. Shutdown is not restart! We decided to tackle both issues at the same time–by deprecating the “Running” field. As already noted, we could have picked a better name to begin with. By using the name “RunStrategy”, it should hopefully be clearer to the end user that they’re asking for a state, which is of course completely separate from what the system can actually provide. While RunStrategy helps address the nomenclature confusion, it also happens to be an enumerated value. Since Running is a boolean, it can only be true or false; an enumerated value lets us define more meaningful states to accommodate different use cases. Four RunStrategies currently exist: Always: If a VM is stopped for any reason, a new instance will be spawned. RerunOnFailure: If a VM ends execution in an error state, a new instance will be spawned. This addresses the second concern listed above: if a user halts a VM manually, a new instance will not be spawned. Manual: This is exactly what it means. KubeVirt will neither attempt to start nor stop a VM. In order to change state, the user must invoke start/stop/restart from the API. There exist convenience functions in the virtctl command line client as well. 
Halted: The VM will be stopped if it’s running, and will remain off. An example using the RerunOnFailure RunStrategy was presented in KubeVirt VM Image Usage Patterns High Availability: No discussion of RunStrategies is complete without mentioning High Availability. After all, the implication behind the RerunOnFailure and Always RunStrategies is that your VM should always be available. For the most part this is completely true, but there’s one important scenario where there’s a gap to be aware of: if a node fails completely, e. g. loss of networking or power. Without some means of automatic detection that the node is no longer active, KubeVirt won’t know that the VM has failed. On OpenShift clusters installed using Installer Provisioned Infrastructure (IPI) with MachineHealthCheck enabled can detect failed nodes and reschedule workloads running there. Mode information on IPI and MHC can be found here: Installer Provisioned InfrastructureMachine Health Check " }, { - "id": 52, + "id": 53, "url": "/2020/changelog-v0.35.0.html", "title": "KubeVirt v0.35.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 35. 0: Released on: Mon Nov 9 13:08:27 2020 +0000 [PR #4409][vladikr] Increase the static memory overhead by 10Mi [PR #4272][maiqueb] Add ip-family to the virtctl expose command. [PR #4398][rmohr] VMIs reflect deleted stuck virt-launcher pods with the “PodTerminating” Reason in the ready condition. The VMIRS detects this reason and immediately creates replacement VMIs. [PR #4393][salanki] Disable legacy service links in virt-launcher Pods to speed up Pod instantiation and decrease Kubelet load in namespaces with many services. [PR #2935][maiqueb] Add the macvtap BindMechanism. [PR #4132][mstarostik] fixes a bug that prevented unique device name allocation when configuring both scsi and sata drives [PR #3257][xpivarc] Added support of kubectl explain for Kubevirt resources. [PR #4288][ezrasilvera] Adding DownwardAPI volumes type [PR #4233][maya-r] Update base image used for pods to Fedora 31. [PR #4192][xpivarc] We now run gosec in Kubevirt [PR #4328][stu-gott] Version 2. x QEMU guest agents are supported. [PR #4289][AlonaKaplan] Masquerade binding - set the virt-launcher pod interface MTU on the bridge. [PR #4300][maiqueb] Update the NetworkInterfaceMultiqueue openAPI documentation to better specify its semantics within KubeVirt. [PR #4277][awels] PVCs populated by DVs are now allowed as volumes. [PR #4265][dhiller] Fix virtctl help text when running as a plugin [PR #4273][dhiller] Only run Travis build for PRs against release branches" }, { - "id": 53, + "id": 54, "url": "/2020/Multiple-Network-Attachments-with-bridge-CNI.html", "title": "Multiple Network Attachments with bridge CNI", "author" : "ellorent", "tags" : "kubevirt-hyperconverged, cnao, cluster-network-addons-operator, kubernetes-nmstate, nmstate, bridge, multus, networking, CNI, multiple networks", "body": "Introduction: Over the last years the KubeVirt project has improved a lot regarding secondary interfaces networking configuration. Now it’s possible to do an end to end configuration from host networking to a VM using just the Kubernetes API withspecial Custom Resource Definitions. Moreover, the deployment of all the projects has been simplified by introducing KubeVirt hyperconverged cluster operator (HCO) and cluster network addons operator (CNAO) to install the networking components. 
The following is the operator hierarchy list presenting the deployment responsibilities of the HCO and CNAO operators used in this blog post: kubevirt-hyperconverged-cluster-operator (HCO) cluster-network-addons-operator (CNAO) multus bridge-cni kubemacpool kubernetes-nmstate KubeVirt Introducing cluster-network-addons-operator: The cluster network addons operator manages the lifecycle (deploy/update/delete) of different Kubernetes network components needed toconfigure secondary interfaces, manage MAC addresses and defines networking on hosts for pods and VMs. A Good thing about having an operator is that everything is done through the API and you don’t have to go over all nodes to install these components yourself and assures smooth updates. In this blog post we are going to use the following components, explained in a greater detail later on: multus: to start a secondary interface on containers in pods linux bridge CNI: to use bridge CNI and connect the secondary interfaces from pods to a linux bridge at nodes kubemacpool: to manage mac addresses kubernetes-nmstate: to configure the linux bridge on the nodesThe list of components we want CNAO to deploy is specified by the NetworkAddonsConfig Custom Resource (CR) and the progress of the installation appears in the CR status field, split per component. To inspectthis progress we can query the CR status with the following command: kubectl get NetworkAddonsConfig cluster -o yamlTo simplify this blog post we are going to use directly the NetworkAddonsConfig from HCO, which by default installs all the network components, but just to illustrate CNAO configuration, the following is a NetworkAddonsConfig CR instructing to deploy multus, linuxBridge, nmstate and kubemacpool components: apiVersion: networkaddonsoperator. network. kubevirt. io/v1kind: NetworkAddonsConfigmetadata: name: clusterspec: multus: {} linuxBridge: {} nmstate: {} imagePullPolicy: AlwaysConnecting Pods, VMs and Nodes over a single secondary network with bridge CNI: Although Kubernetes provides a default interface that gives connectivity to pods and VMs, it’s not easy to configure which NIC should be used for specific pods or VMs in a multi NIC node cluster. A Typical use case is to split control/traffic planes isolated by different NICs on nodes. With linux bridge CNI + multus it’s possible to create a secondary NIC in pod containers and attach it to a L2 linux bridge on nodes. This will add container’s connectivity to a specific NIC on nodes if that NIC is part of the L2 linux bridge. To ensure the configuration is applied only in pods on nodes that have the bridge, the k8s. v1. cni. cncf. io/resourceName label is added. This goes hand in hand with another component, bridge-marker which inspects nodes networking and if a new bridge pops up it will mark the node status with it. This is an example of the results from bridge-marker on nodes where bridge br0 is already configured: ---status: allocatable: bridge. network. kubevirt. io/br0: 1k capacity: bridge. network. kubevirt. io/br0: 1kThis is an example of NetworkAttachmentDefinition to expose the bridge available on the host to users: apiVersion: k8s. cni. cncf. io/v1kind: NetworkAttachmentDefinitionmetadata: name: bridge-network annotations: k8s. v1. cni. cncf. io/resourceName: bridge. network. kubevirt. io/br0spec: config: > { cniVersion : 0. 3. 
1 , name : br0-l2 , plugins : [{ type : bridge , bridge : br0 , ipam : {} }] }Then adding the bridge secondary network to a pod is a matter of adding the following annotation toit: annotations: k8s. v1. cni. cncf. io/networks: bridge-networkSetting up node networking with NodeNetworkConfigurationPolicy (aka nncp): Changing Kubernetes cluster node networking can be done manually iterating over all the cluster nodes and making changes or using different automatization tools like ansible. However, using just another Kubernetes resource is more convenient. For this purpose the kubernetes-nmstate project was born as a cluster wide node network administrator based on Kubernetes CRs on top of nmstate. It works as a Kubernetes DaemonSet running pods on all the cluster nodes and reconciling three different CRs: NodeNetworkConfigurationPolicy to specify cluster node network desired configuration NodeNetworkConfigurationEnactment (nnce) to troubleshoot issues with nncp NodeNetworkState (nns) to view the node’s networking configurationNote Project kubernetes-nmstate has a distributed architecture to reduce kube-apiserver connectivity dependency, this means that every pod will configure the networking on the node that it’s running without much interaction with kube-apiserver. In case something goes wrong and the pod changing the node network cannot ping the default gateway, resolve DNS root servers or has lost the kube-apiserver connectivity it will rollback to the previous configuration to go back to a working state. Those errors can be checked by running kubectl get nnce. The command displays potential issues per node and nncp. The desired state fields follow the nmstate API described at their awesome doc Also for more details on kubernetes-nmstate there are guides covering reporting, configuration and troubleshooting. There are also nncp examples. Demo: mixing it all together, VM to VM communication between nodes: With the following recipe we will end up with a pair of virtual machines pair on two different nodes with one secondary NICs, eth1 at vlan 100. They will be connected to each other usingthe same bridge on nodes that also have the external secondary NIC eth1 connected. Demo environment setup: We are going to use a kubevirtci as Kubernetes ephemeral cluster provider. To start it up with two nodes and one secondary NIC and install NetworkManager >= 1. 22 (needed for kubernetes-nmstate) and dnsmasq follow these steps: git clone https://github. com/kubevirt/kubevirtcicd kubevirtci# Pin to version working with blog post steps in case# k8s-1. 19 provider disappear in the futuregit reset d5d8e3e376b4c3b45824fbfe320b4c5175b37171 --hardexport KUBEVIRT_PROVIDER=k8s-1. 19export KUBEVIRT_NUM_NODES=2export KUBEVIRT_NUM_SECONDARY_NICS=1make cluster-upexport KUBECONFIG=$(. /cluster-up/kubeconfig. sh)Installing components: To install KubeVirt we are going to use the operator kubevirt-hyper-converged-operator, this will install all the componentsneeded to have a functional KubeVirt with all the features including the ones we are going to use: multus, linux-bridge, kubemacpool and kubernetes-nmstate. curl https://raw. githubusercontent. com/kubevirt/hyperconverged-cluster-operator/master/deploy/deploy. sh | bashkubectl wait hco -n kubevirt-hyperconverged kubevirt-hyperconverged --for condition=Available --timeout=500sNow we have a Kubernetes cluster with all the pieces to startup a VM with bridge attached to a secondary NIC. 
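Before configuring the node network, it can be useful to confirm that the components rolled out through CNAO are healthy; a quick sanity check might look like the following (pod names and counts vary by version, so treat the output as illustrative):

```bash
# Check the per-component rollout status reported by CNAO
kubectl get networkaddonsconfig cluster -o jsonpath='{.status.conditions}'

# Multus, the bridge CNI, kubemacpool and kubernetes-nmstate pods
# should show up as Running in the HCO namespace
kubectl get pods -n kubevirt-hyperconverged
```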
Creating the br0 on nodes with a port attached to secondary NIC eth1: First step is to create a L2 linux-bridge at nodes with one port on the secondary NIC eth1, this will beused later on by the bridge CNI. cat <<EOF | kubectl apply -f -apiVersion: nmstate. io/v1alpha1kind: NodeNetworkConfigurationPolicymetadata: name: br0-eth1spec: desiredState: interfaces: - name: br0 description: Linux bridge with eth1 as a port type: linux-bridge state: up bridge: options: stp: enabled: false port: - name: eth1EOFNow we wait for the bridge to be created checking nncp conditions: kubectl wait nncp br0-eth1 --for condition=Available --timeout 2mAfter the nncp becomes available, we can query the nncp resources in the clusterand see it listed with successful status. kubectl get nncpNAME STATUSbr0-eth1 SuccessfullyConfiguredWe can inspect the status of applying the policy to each node. For that there is the NodeNetworkConfigurationEnactment CR (nnce): kubectl get nnceNAME STATUSnode01. br0-eth1 SuccessfullyConfigurednode02. br0-eth1 SuccessfullyConfiguredNote In case of errors it is possible to retrieve the error dumped by nmstate runningkubectl get nnce -o yaml the status will contain the error. We can also inspect the network state on the nodes by retrieving the NodeNetworkState andchecking if the bridge br0 is up using jsonpath kubectl get nns node01 -o=jsonpath='{. status. currentState. interfaces[?(@. name== br0 )]. state}'kubectl get nns node02 -o=jsonpath='{. status. currentState. interfaces[?(@. name== br0 )]. state}'When inspecting the full currentState yaml we get the followinginterface configuration: kubectl get nns node01 -o yamlstatus: currentState: interfaces: - bridge: options: group-forward-mask: 0 mac-ageing-time: 300 multicast-snooping: true stp: enabled: false forward-delay: 15 hello-time: 2 max-age: 20 priority: 32768 port: - name: eth1 stp-hairpin-mode: false stp-path-cost: 100 stp-priority: 32 description: Linux bridge with eth1 as a port ipv4: dhcp: false enabled: false ipv6: autoconf: false dhcp: false enabled: false mac-address: 52:55:00:D1:56:00 mtu: 1500 name: br0 state: up type: linux-bridgeWe can also check that the bridge-marker is working and check verify on nodes: kubectl get node node01 -o yamlThe following should appear stating that br0can be consumed on the node: status: allocatable: bridge. network. kubevirt. io/br0: 1k capacity: bridge. network. kubevirt. io/br0: 1kAt this point we have an L2 linux bridge ready and connected to NIC eth1. Configure network attachment with a L2 bridge and a vlan: In order to make the bridge a L2 bridge, we specify no IPAM (IP Address Management) since we arenot going to configure any ip address for the bridge. To configurebridge vlan-filtering we add the vlan we want to use to isolate our VMs: cat <<EOF | kubectl apply -f -apiVersion: k8s. cni. cncf. io/v1kind: NetworkAttachmentDefinitionmetadata: name: br0-100-l2 annotations: k8s. v1. cni. cncf. io/resourceName: bridge. network. kubevirt. io/br0spec: config: > { cniVersion : 0. 3. 1 , name : br0-100-l2-config , plugins : [ { type : bridge , bridge : br0 , vlan : 100, ipam : {} }, { type : tuning } ] }EOFStart a pair of VMs on different nodes using the multus configuration to connect a secondary interfaces to br0: Now it’s time to startup the VMs running on different nodes so we can check external connectivity ofbr0. They will also have a secondary NIC eth1 to connect to the other VM running at different node, so they goover the br0 at nodes. 
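Before creating the VMs, it is also worth confirming that the br0-100-l2 NetworkAttachmentDefinition created above exists in the namespace where the VMs will run (the default namespace in this demo); a quick check against the multus CRD:

```bash
# NetworkAttachmentDefinitions are regular namespaced resources once multus is installed
kubectl get network-attachment-definitions br0-100-l2 -o yaml
```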
The following picture illustrates the cluster: bridgecluster_kubevirtcikubevirtci clustercluster_node01node01cluster_vmavmacluster_node02node02cluster_vmbvmbnd_br1_kubevirtcibr1nd_br0_node01br0nd_eth1_node01eth1nd_br0_node01--nd_eth1_node01nd_eth1_vmaeth1nd_br0_node01--nd_eth1_vmand_eth1_node01--nd_br1_kubevirtcind_br0_node02br0nd_eth1_node02eth1nd_br0_node02--nd_eth1_node02nd_eth1_vmbeth1nd_br0_node02--nd_eth1_vmbnd_eth1_node02--nd_br1_kubevirtciFirst step is to install the virtctl command line tool to play with virtual machines: curl -L -o virtctl https://github. com/kubevirt/kubevirt/releases/download/v0. 33. 0/virtctl-v0. 33. 0-linux-amd64chmod +x virtctlsudo install virtctl /usr/local/binNow let’s create two VirtualMachines on each node. They will have one secondary NIC connected to br0 using the multus configuration for vlan 100. We will also activate kubemacpool to be sure that mac addresses are unique in the cluster and install the qemu-guest-agent so IP addresses from secondary NICs are reported to VM and we can inspect them later on. cat <<EOF | kubectl apply -f -apiVersion: v1kind: Namespacemetadata: name: default labels: mutatevirtualmachines. kubemacpool. io: allocate---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vmaspec: running: true template: spec: nodeSelector: kubernetes. io/hostname: node01 domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: default masquerade: {} - name: br0-100 bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: default pod: {} - name: br0-100 multus: networkName: br0-100-l2 terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-container-disk-demo - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: addresses: [ 10. 200. 0. 1/24 ] userData: |- #!/bin/bash echo fedora |passwd fedora --stdin dnf -y install qemu-guest-agent sudo systemctl enable qemu-guest-agent sudo systemctl start qemu-guest-agent---apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: vmbspec: running: true template: spec: nodeSelector: kubernetes. io/hostname: node02 domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: default masquerade: {} - name: br0-100 bridge: {} machine: type: resources: requests: memory: 1024M networks: - name: default pod: {} - name: br0-100 multus: networkName: br0-100-l2 terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-container-disk-demo - name: cloudinitdisk cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: addresses: [ 10. 200. 0. 2/24 ] userData: |- #!/bin/bash echo fedora |passwd fedora --stdin dnf -y install qemu-guest-agent sudo systemctl enable qemu-guest-agent sudo systemctl start qemu-guest-agentEOFWait for the two VMs to be ready. Eventually you will see something like this: kubectl get vmiNAME AGE PHASE IP NODENAMEvma 2m4s Running 10. 244. 196. 142 node01vmb 2m4s Running 10. 244. 140. 86 node02We can check that they have one secondary NIC withoutaddress assigned: kubectl get vmi -o yaml## vma interfaces: - interfaceName: eth0 ipAddress: 10. 244. 196. 144 ipAddresses: - 10. 244. 196. 144 - fd10:244::c48f mac: 02:4a:be:00:00:0a name: default - interfaceName: eth1 ipAddress: 10. 200. 0. 1/24 ipAddresses: - 10. 200. 0. 
1/24 - fe80::4a:beff:fe00:b/64 mac: 02:4a:be:00:00:0b name: br0-100## vmb interfaces: - interfaceName: eth0 ipAddress: 10. 244. 140. 84 ipAddresses: - 10. 244. 140. 84 - fd10:244::8c53 mac: 02:4a:be:00:00:0e name: default - interfaceName: eth1 ipAddress: 10. 200. 0. 2/24 ipAddresses: - 10. 200. 0. 2/24 - fe80::4a:beff:fe00:f/64 mac: 02:4a:be:00:00:0f name: br0-100Let’s finish this section by verifying connectivity between vma and vmb using ping. Open the console of vma virtual machine and use ping command with destination IP address 10. 200. 0. 2, which is the address assigned to the secondary interface of vmb: Note The user and password for this VMs is fedora, it was configured at cloudinit userData virtctl console vmaping 10. 200. 0. 2 -c 3PING 10. 200. 0. 2 (10. 200. 0. 2): 56 data bytes64 bytes from 10. 200. 0. 2: seq=0 ttl=50 time=357. 040 ms64 bytes from 10. 200. 0. 2: seq=1 ttl=50 time=379. 742 ms64 bytes from 10. 200. 0. 2: seq=2 ttl=50 time=404. 066 ms--- 10. 200. 0. 2 ping statistics ---3 packets transmitted, 3 packets received, 0% packet lossround-trip min/avg/max = 357. 040/380. 282/404. 066 msConclusion: In this blog post we used network components from KubeVirt project to connect two VMs on different nodesthrough a linux bridge connected to a secondary NIC. This illustrates how VM traffic can be directed to a specific NICon a node using a secondary NIC on a VM. " }, { - "id": 54, + "id": 55, "url": "/2020/changelog-v0.34.0.html", "title": "KubeVirt v0.34.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 34. 0: Released on: Wed Oct 7 13:59:50 2020 +0300 [PR #4315][kubevirt-bot] PVCs populated by DVs are now allowed as volumes. [PR #3837][jean-edouard] VM interfaces with no bootOrder will no longer be candidates for boot when using the BIOS bootloader, as documented [PR #3879][ashleyschuett] KubeVirt should now be configured through the KubeVirt CR configuration key. The usage of the kubevirt-confg ConfigMap will be deprecated in the future. [PR #4074][stu-gott] Fixed bug preventing non-admin users from pausing/unpausing VMs [PR #4252][rhrazdil] Fixes https://bugzilla. redhat. com/show_bug. cgi?id=1853911 [PR #4016][ashleyschuett] Allow for post copy VMI migrations [PR #4235][davidvossel] Fixes timeout failure that occurs when pulling large containerDisk images [PR #4263][rmohr] Add readiness and liveness probes to virt-handler, to clearly indicate readiness [PR #4248][maiqueb] always compile KubeVirt with selinux support on pure go builds. [PR #4012][danielBelenky] Added support for the eviction API for VMIs with eviction strategy. This enables VMIs to be live-migrated when the node is drained or when the descheduler wants to move a VMI to a different node. [PR #4075][ArthurSens] Metric kubevirt_vmi_vcpu_seconds’ state label is now exposed as a human-readable state instead of an integer [PR #4162][vladikr] introduce a cpuAllocationRatio config parameter to normalize the number of CPUs requested for a pod, based on the number of vCPUs [PR #4177][maiqueb] Use vishvananda/netlink instead of songgao/water to create tap devices. [PR #4092][stu-gott] Allow specifying nodeSelectors, affinity and tolerations to control where KubeVirt components will run [PR #3927][ArthurSens] Adds new metric kubevirt_vmi_memory_unused_bytes [PR #3493][vladikr] virtIO-FS is being added as experimental, protected by a feature-gate that needs to be enabled in the kubevirt config by the administrator [PR #4193][mhenriks] Add snapshot. kubevirt. 
io to admin/edit/view roles [PR #4149][qinqon] Bump kubevirtci to k8s-1. 19 [PR #3471][crobinso] Allow hiding that the VM is running on KVM, so that Nvidia graphics cards can be passed through [PR #4115][phoracek] Add conformance automation and manifest publishing [PR #3733][davidvossel] each PRs description. [PR #4082][mhenriks] VirtualMachineRestore API and implementation [PR #4154][davidvossel] Fixes issue with Serivce endpoints not being updated properly in place during KubeVirt updates. [PR #3289][vatsalparekh] Add option to run only VNC Proxy in virtctl [PR #4027][alicefr] Added memfd as default memory backend for hugepages. This introduces the new annotation kubevirt. io/memfd to disable memfd as default and fallback to the previous behavior. [PR #3612][ashleyschuett] Adds customizeComponents to the kubevirt api [PR #4029][cchengleo] Fix an issue which prevented virt-operator from installing monitoring resources in custom namespaces. [PR #4031][rmohr] Initial support for sonobuoy for conformance testing" }, { - "id": 55, + "id": 56, "url": "/2020/changelog-v0.33.0.html", "title": "KubeVirt v0.33.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 33. 0: Released on: Tue Sep 15 14:46:00 2020 +0000 [PR #3226][vatsalparekh] Added tests to verify custom pciAddress slots and function [PR #4048][davidvossel] Improved reliability for failed migration retries [PR #3585][mhenriks] “virtctl image-upload pvc …” will create the PVC if it does not exist [PR #3945][xpivarc] KubeVirt is now being built with Go1. 13. 14 [PR #3845][ArthurSens] action required: The domain label from VMI metrics is being removed and may break dashboards that use the domain label to identify VMIs. Use name and namespace labels instead [PR #4011][dhiller] ppc64le arch has been disabled for the moment, see https://github. com/kubevirt/kubevirt/issues/4037 [PR #3875][stu-gott] Resources created by KubeVirt are now labelled more clearly in terms of relationship and role. [PR #3791][ashleyschuett] make node as kubevirt. io/schedulable=false on virt-handler restart [PR #3998][vladikr] the local provider is usable again. [PR #3290][maiqueb] Have virt-handler (KubeVirt agent) create the tap devices on behalf of the virt-launchers. [PR #3957][AlonaKaplan] virt-launcher support Ipv6 on dual stack cluster. [PR #3952][davidvossel] Fixes rare situation where vmi may not properly terminate if failure occurs before domain starts. [PR #3973][xpivarc] Fixes VMs with clock. timezone set. [PR #3923][danielBelenky] Add support to configure QEMU I/O mode for VMIs [PR #3889][rmohr] The status fields for our CRDs are now protected on normal PATCH and PUT operations. The /status subresource is now used where possible for status updates. [PR #3568][xpivarc] Guest swap metrics available" }, { - "id": 56, + "id": 57, "url": "/2020/changelog-v0.32.0.html", "title": "KubeVirt v0.32.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 32. 0: Released on: Tue Aug 11 19:21:56 2020 +0000 [PR #3921][vladikr] use correct memory units in libvirt xml [PR #3893][davidvossel] Adds recurring period that resyncs virt-launcher domains with virt-handler [PR #3880][sgarbour] Better error message when input parameters are not the expected number of parameters for each argument. Help menu will popup in case the number of parameters is incorrect. 
[PR #3785][xpivarc] Vcpu wait metrics available [PR #3642][vatsalparekh] Add a way to update VMI Status with latest Pod IP for Masquerade bindings [PR #3636][ArthurSens] Adds kubernetes metadata. labels as VMI metrics’ label [PR #3825][awels] Virtctl now prints error messages from the response body on upload errors. [PR #3830][davidvossel] Fixes re-establishing domain notify client connections when domain notify server restarts due to an error event. [PR #3778][danielBelenky] Do not emit a SyncFailed event if we fail to sync a VMI in a final state [PR #3803][andreabolognani] Not sure what to write here (see above) [PR #2694][rmohr] Use native go libraries for selinux to not rely on python-selinux tools like semanage, which are not always present. [PR #3692][victortoso] QEMU logs can now be fetched from outside the pod [PR #3738][enp0s3] Restrict creation of VMI if it has labels that are used internally by Kubevirt components. [PR #3725][danielBelenky] The tests binary is now part of the release and can be consumed from the GitHub release page. [PR #3684][rmohr] Log if critical devices, like kvm, which virt-handler wants to expose are not present on the node. [PR #3166][petrkotas] Introduce new virtctl commands: [PR #3708][andreabolognani] Make qemu work on GCE by pulling in a fix for https://bugzilla. redhat. com/show_bug. cgi?id=1822682" }, { - "id": 57, + "id": 58, "url": "/2020/Import-VM-from-oVirt.html", "title": "Import virtual machine from oVirt", "author" : "Ondra Machacek", "tags" : "kubevirt, Kubernetes, virtual machine, VM, import, oVirt", "body": "About vm-import-operator: Virtual machine import operator makes life easier for users who want to migrate their virtual machine workload from different infrastructures to KubeVirt. Currently the operator supports migration from oVirt only. The operator is configurable so user can define how the storage or network should be mapped. For the disk import vm import operator is using the CDI, so in order to have the vm import working you must have both KubeVirt and CDI installed. Import rules: Before the import process is initiated we run validation of the source VM, to be sure the KubeVirt will run the source VM smoothly. We have many rules defined including storage, network and the VM. You will see all warning messages in the conditions field. For example: - lastHeartbeatTime: 2020-08-11T11:13:31Z lastTransitionTime: 2020-08-11T11:13:31Z message: 'VM specifies IO Threads: 1, VM has NUMA tune mode secified: interleave' reason: MappingRulesVerificationReportedWarnings status: True type: MappingRulesVerifiedSupported Guest Operating Systems: We support following guest operating systems: Red Hat Enterprise Linux 6 Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Microsoft Windows 10 Microsoft Windows Server 2012r2 Microsoft Windows Server 2016 Microsoft Windows Server 2019 CentOS Linux 6 CentOS Linux 7 CentOS Linux 8 Ubuntu 18. 04 Fedora openSUSESetup vm-import-operator: Source code for virtual machine import operator is hosted on github under KubeVirt organization. You can very easily deploy it on your Kubernetes by running following commands: kubectl apply -f https://github. com/kubevirt/vm-import-operator/releases/download/v0. 1. 0/namespace. yamlkubectl apply -f https://github. com/kubevirt/vm-import-operator/releases/download/v0. 1. 0/operator. yamlkubectl apply -f https://github. com/kubevirt/vm-import-operator/releases/download/v0. 1. 0/vmimportconfig_cr. 
yamlBy default the operator is deployed to kubevirt-hyperconverged namespace,you can verify that the operator is deployed and running by running: kubectl get deploy vm-import-controller -n kubevirt-hyperconvergedIf you are using HCO, you don’t have to install it manually,because the HCO takes care of that. Importing virtual machine from oVirt: In order to import a virtual machine from oVirt user must obtain credentials for the oVirt environment. oVirt environment is usually accessed using username, password and http URL. Note that you must provide CA certificate of your oVirt environment. If you have those - create a secret out of them: ---apiVersion: v1kind: Secretmetadata: name: ovirt-secrettype: OpaquestringData: ovirt: |- apiUrl: https://engine-url/ovirt-engine/api username: admin@internal password: secretpassword caCert: | -----BEGIN CERTIFICATE----- MIIEMjCCAxqgAwIBAgICEAAwDQYJKoZIhvcNAQELBQAwbDELMAkGA1UEBhMCVVMxJDAiBgNVBAoM . . . . fFyt91ClrUtTE707IFnYdQQUiZ4zI0q+6pmw6+xx8mH5k8Ad6D71pF718xCM1NiBx/Cusg== -----END CERTIFICATE-----Another step to initiate the import is creating the mappings. The mappings has three categories - storage mapping, disk mapping and network mapping. For storage mapping user can define which oVirt storage domain will be mapped to which storage class. Disk mapping can override the storage mapping for specific disks. The network mappings map oVirt network to the kubernetes network. So here an simple example of mapping: apiVersion: v2v. kubevirt. io/v1alpha1kind: ResourceMappingmetadata: name: myvm-mapping namespace: defaultspec: ovirt: networkMappings: - source: name: ovirtmgmt/ovirtmgmt target: name: pod type: pod storageMappings: - source: name: mystoragedomain target: name: mystorageclassThe above mapping maps ovirtmgmt/ovirtmgmt which is in format of vNIC profile/network to the pod network and disks from mystoragedomain to mystorageclass. Once we have mapping and the secret, we can initiate the import by creating a VM import CR. You must provide the name of the mapping, secret, source VM and target VM name. apiVersion: v2v. kubevirt. io/v1alpha1kind: VirtualMachineImportmetadata: name: myvm namespace: defaultspec: providerCredentialsSecret: name: ovirt-secret resourceMapping: name: myvm-mapping targetVmName: testvm source: ovirt: vm: name: myvm cluster: name: myclusterNote that it is also possible to use internal mappings, so the user can create the mappings inside the VM import CR, for example: apiVersion: v2v. kubevirt. io/v1alpha1kind: VirtualMachineImportmetadata: name: myvm namespace: defaultspec: providerCredentialsSecret: name: ovirt-secret namespace: default targetVmName: testvm source: ovirt: mappings: networkMappings: - source: name: ovirtmgmt/ovirtmgmt target: name: pod type: pod storageMappings: - source: name: mystoragedomain target: name: mystorageclass vm: name: myvm cluster: name: myclusterNow let the operator do its work. You can explore the status by checking the status of the VM import CR . . . 
status: conditions: - lastHeartbeatTime: 2020-08-05T13:09:22Z lastTransitionTime: 2020-08-05T13:09:22Z message: Validation completed successfully reason: ValidationCompleted status: True type: Valid - lastHeartbeatTime: 2020-08-05T13:09:22Z lastTransitionTime: 2020-08-05T13:09:22Z message: 'VM specifies IO Threads: 1, VM has NUMA tune mode secified: interleave' reason: MappingRulesVerificationReportedWarnings status: True type: MappingRulesVerified - lastHeartbeatTime: 2020-08-05T13:10:29Z lastTransitionTime: 2020-08-05T13:09:22Z message: Copying virtual machine disks reason: ProcessingCompleted status: True type: Processing - lastHeartbeatTime: 2020-08-05T13:10:29Z lastTransitionTime: 2020-08-05T13:10:29Z message: Virtual machine disks import done reason: VirtualMachineReady status: True type: Succeeded dataVolumes: - name: testvm-26097887-1f4d-4718-961f-f5b63a49c3f5 targetVmName: testvmThe import process goes through different stages. The first stage is the validation where HCO checks for unsupported mappings. The others are for processing and reporting to provide VM and disks ready status. Future: For future releases it is planned to support importing virtual machines from VMware, reporting Prometheus metrics and SR-IOV. " }, { - "id": 58, + "id": 59, "url": "/2020/Minikube_KubeVirt_Addon.html", "title": "Minikube KubeVirt addon", "author" : "Chris Callegari", "tags" : "kubevirt, Kubernetes, virtual machine, VM, minikube, addons", "body": "Deploying KubeVirt has just gotten easier: With the latest release (v1. 12) ofminikube we can now deploy KubeVirt witha one-liner. Deploy minikube: Start minikube. Since my host is Fedora 32 I will use --driver=kvm2 and I will also use --container-runtime=crio minikube start --driver=kvm2 --container-runtime=cri-o Check that kubectl client is working correctly kubectl cluster-info Enable the minikube kubevirt addon minikube addons enable kubevirt Verify KubeVirt components have been deployed to the kubevirt namespace kubectl get ns; kubectl get all -n kubevirt SUCCESS: From here a user can proceed on to theKubevirt Laboratory 1: Use KubeVirt As you can see it is now much easier to deploy KubeVirt in a minikubeKubernetes environment. " }, { - "id": 59, + "id": 60, "url": "/2020/changelog-v0.31.0.html", "title": "KubeVirt v0.31.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 31. 0: Released on: Thu Jul 9 16:08:18 2020 +0300 [PR 3690][davidvossel] Update go-grpc dependency to v1. 30. 0 in order to improve stability [PR 3628][AlonaKaplan] Avoid virt-handler crash in case of virt-launcher network configuration error [PR 3635][jean-edouard] The “HostDisk” feature gate has to be enabled to use hostDisks [PR 3641][vatsalparekh] Reverts kubevirt/kubevirt#3488 because CI seems to have merged it without all tests passing [PR 3488][vatsalparekh] Add a way to update VMI Status with latest Pod IP for Masquerade bindings [PR 3406][tomob] If a PVC was created by a DataVolume, it cannot be used as a Volume Source for a VM. The owning DataVolume has to be used instead. [PR 3566][kraxel] added: tigervnc support for linux & windows [PR 3529][jean-edouard] Enabling EFI will also enable Secure Boot, which requires SMM to be enabled. 
[PR 3455][ashleyschuett] Add KubevirtConfiguration, MigrationConfiguration, DeveloperConfiguration and NetworkConfiguration to API-types [PR 3520][rmohr] Fix hot-looping on the VMI sync-condition if errors happen during the Scheduled phase of a VMI [PR 3220][mhenriks] API and controller/webhook for VirtualMachineSnapshots" }, { - "id": 60, + "id": 61, "url": "/2020/Common_templates.html", "title": "Common-templates", "author" : "Karel Simon", "tags" : "kubevirt, Kubernetes, virtual machine, VM, common-templates", "body": "What is a virtual machine template?: The KubeVirt project provides a set of templates https://github. com/kubevirt/common-template to create VMS to handle common usage scenarios. These templates provide a combination of some key factors that could be further customized and processed to have a Virtual Machine object. With common templates you can easily start in a few minutes many VMS with predefined hardware resources (e. g. number of CPUs, requested memory, etc. ). Beware common templates work only on OpenShift. Kubernetes doesn’t have support for templates. What does a VM template cover?: The key factors which define a template are Guest Operating System (OS) This allows to ensure that the emulated hardware is compatible with the guest OS. Furthermore, it allows to maximize the stability of the VM, and allows performance optimizations. Currently common templates support RHEL 6, 7, 8, Centos 6, 7, 8, Fedora 31 and newer, Windows 10, Windows server 2008, 2012 R2, 2016, 2019. The Ansible playbook generate-templates. yaml describes all combinations of templates that should be generated. Workload type of most virtual machines should be server or desktop to have maximum flexibility; the highperformance workload trades some of this flexibility (ioThreadsPolicy is set to shared) to provide better performances (e. g. IO threads). Size (flavor) Defines the amount of resources (CPU, memory) to allocate to the VM. There are 4 sizes: tiny (1 core, 1 Gi memory), small (1 core, 2 Gi memory), medium (1 core, 4 Gi memory), large (2 cores, 8 Gi memory). If these predefined sizes don’t suit you, you can create a new template based on common templates via UI (choose Workloads in the left panel » press Virtualization » press Virtual Machine Templates » press Create Virtual Machine Template blue button) or CLI (update yaml template and create new template). Accessing the virtual machine templates: If you installed KubeVirt using a supported method, you should find the common templates preinstalled in the cluster. If you want to upgrade the templates, or install them from scratch, you can use one of the supported releasesThere are two ways to install and configure templates: Via CLI: To install the templates: $ export VERSION= v0. 11. 2 $ oc create -f https://github. com/kubevirt/common-templates/releases/download/$VERSION/common-templates-$VERSION. yaml To create VM from template: $ oc process rhel8-server-tiny PVCNAME=mydisk NAME=rheltinyvm | oc apply -f - To start VM from created objectThe created object is now a regular VirtualMachine object and from now it can be controlled by accessing Kubernetes API resources. The preferred way to do this is to use virtctl tool. $ virtctl start rheltinyvm An alternative way to start the VM is with the oc patch command. Example: $ oc patch virtualmachine rheltinyvm --type merge -p '{ spec :{ running :true}}' As soon as VM starts, openshift creates a new type of object - VirtualMachineInstance. It has a similar name to VirtualMachine. 
Via UI: The Kubevirt project has an official plugin in OpenShift Cluster Console Web UI. This UI supports the creation of VMS using templates and template features - flavors and workload profiles. To install the templates: Install OpenShift virtualization operator from Operators > OperatorHub. The operator-based deployment takes care of installing various components, including the common templates. To create VM from template: To create a VM from a template, choose Workloads in the left panel » press Virtualization » press Create Virtual Machine blue button » choose New with Wizard. Next, you have to see Create Virtual Machine window This wizard leads you through the basic setup of vm (like guest operating system, workload, flavor, …). After vm is created you can start requested vm. Note after the generation step (UI and CLI), VM objects and template objects have no relationship with each other besides the vm. kubevirt. io/template: rhel8-server-tiny-v0. 10. 0 label. This means that changes in templates do not automatically affect VMS, or vice versa. " }, { - "id": 61, + "id": 62, "url": "/2020/win_workload_in_k8s.html", "title": "Migrate a sample Windows workload to Kubernetes using KubeVirt and CDI", "author" : "Chris Callegari", "tags" : "kubevirt, Kubernetes, virtual machine, VM, images, storage, windows", "body": "The goal of this blog is to demonstrate that a web service can continue to runafter a Windows guest virtual machine providing the service is migrated fromMS Windows and Oracle VirtualBox to a guest virtual machine orchestrated byKubernetes and KubeVirt on a Fedora Linux host. Yes! It can be done! Source details: Host platform: Windows 2019 Datacenter Virtualization platform: Oracle VirtualBox 6. 1 Guest platform: Windows 2019 Datacenter (guest to be migrated)1 Guest application: My favorite dotnet applicationJellyfinTarget details: Host platform: Fedora 32 with latest updates applied Kubernetes cluster created KubeVirt and CDI installed in the Kubernetes cluster. Procedure: Tasks to performed on source host: Before we begin let's take a moment to ensure the service is running and web browser accessible Power down the guest virtual machine to ensure all changes to the filesystem are quiesced to disk. VBoxManage. exe controlvm testvm poweroff Upload the guest virtual machine disk image to the Kubernetes cluster and a target DataVolume called testvm 2 virtctl. exe image-upload dv testvm --size=14Gi --image-path= C:\Users\Administrator\VirtualBox VMs\testvm\testvm. vdi Verify the PersistentVolumeClaim created via the DataVolume image upload in the previous step kubectl describe pvc/testvm Create a guest virtual machine definition that references the DataVolume containing our guest virtual machine disk image kubectl create -f vm_testvm. yaml 3 Expose the Jellyfin service in Kubernetes via a NodePort type service kubectl create -f service_jellyfin. yaml 4 Let's verify the running guest virtual machine by using the virtctl command to open a vnc session to the MS Window console. While we are here let's also open a web browser and confirm web browser access to the application. virtctl vnc testvm Task to performed on user workstation: And finally let's confirm web browser access via the Kubernetes service url. SUCCESS: Here we have successfully demonstrated how simple it can be to migrate anexisting MS Windows platform and application to Kubernetes control. Forquestions feel free to join the conversation via one of the project forums. 
Footnotes: Fedora virtio drivers need to be installed on Windows hosts or virtual machines that will be migrated into a Kubernetes environment. Drivers can be found here . ↩ Please note: • Users without certificate authority trusted certificates added to the kubernetes api and cdi cdi-proxyuploader secret will require the --insecure arg. • Users without the uploadProxyURLOverride patch to the cdi cdiconfig. cdi. kubevirt. io/config crd will require the --uploadProxyURL arg. • Users need a correctly configured $HOME/. kube/config along with client authentication certificate. ↩ vm_testvm. yaml : Virtual machine manifest ↩ service_jellyfin. yaml : Service manifest ↩ " }, { - "id": 62, + "id": 63, "url": "/2020/changelog-v0.30.0.html", "title": "KubeVirt v0.30.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 30. 0: Released on: Fri Jun 5 12:19:57 2020 +0200 Tests: Many more test fixes Security: Introduce a custom SELinux policy for virt-launcher More user friendly IPv6 default CIDR for IPv6 addresses Fix OpenAPI compatibility issues by switching to openapi-gen Improved support for EFI boot (configurable OVMF path and test fixes) Improved VMI IP reporting Support propagation of annotations from VMI to pods Support for more fine grained (NET_RAW( capability granting to virt-launcher Support for eventual consistency with DataVolumes" }, { - "id": 63, + "id": 64, "url": "/2020/SELinux-from-basics-to-KubeVirt.html", "title": "SELinux, from basics to KubeVirt", "author" : "Jed Lejosne", "tags" : "kubevirt, kubernetes, virtual machine, VM, design, architecture, security, libvirt, qemu", "body": "SELinux is one of many security mechanisms leveraged by KubeVirt. For an overview of KubeVirt security, please first read this excellent article. SELinux 101: At its core, SELinux is a allow list-based security policy system intended to limit interactions between Linux processes and files. Simplified, it can be visualized as a “syscall firewall”. Policies are based on statically defined types, that can be assigned to files, processes and other objects. A simple policy example would be to allow a /bin/test program to read its /etc/test. conf configuration file. The policy for that would include directives to: Assign types to files and processes, like test_bin_t for /bin/test, test_conf_t for /etc/test. conf, and test_t for instances of the test program Configure a transition from test_bin_t to test_t Allow test_t processes to read test_conf_t files. The SELinux standard Reference Policy: Since SELinux policies are allow lists, a setup running with the above policy would not be allowed to do anything, except for that test program. A policy for an entire Linux distribution as seen in the wild is made of millions of lines, which wouldn’t be practical to write and maintain on a per-distribution basis. That is why the Reference Policy (refpolicy) was written. The refpolicy implements various mechanisms to simplify policy writing, but also contains modules for most core Linux applications. Most use-cases can be addressed with the “standard” refpolicy, plus optionally some custom modules for specific applications not covered by the Reference Policy. Limitations start to arise for use-cases that run the same binary multiple times concurrently, and expect instances to be isolated from each other. Virtualization is one of those use cases. Indeed if 2 virtual machines are running on the same system, it is usually desirable that one VM can’t see the resources of the other one. 
As an example, if qemu processes are labeled qemu_t and disk files are labeled qemu_disk_t, allowing qemu_t to read/write qemu_disk_t files would allow all qemu processes to access all disk files. Another mechanism is necessary to provide VM isolation. That is what SELinux MCS addresses. SELinux Multi-Category Security (MCS): Multi-Category Security, or MCS, provides the ability to dynamically add numerical IDs (called categories) to any SELinux type on any object (file/process/socket/…). Categories range from 0 to 1023. Since only 1024 unique IDs would be quite limiting, most virtualization-related applications combine 2 categories, which add up to about 500,000 combinations. It’s important to note that categories have no order, so c42,c42 is equivalent to c42, and c1,c2 is equivalent to c2,c1. In the example above, we can now: Dynamically compute a unique random category for each VM Assign the corresponding categories to all VM resources, like qemu instance and disk files Only allow access when all the involved resources have the same category number. And that is exactly what libvirt does when compiled with SELinux support, as shown in the diagram below. Note: MCS can do a lot more, this article only describes the bits that are used by libvirt and kubernetes. MCS and containers: Another application that leverages MCS is Linux containers. In fact, containers use very few SELinux types and rely mostly on MCS to provide container isolation. For example, all the files and processes in container filesystems have the same SELinux types. For a non-super-privileged container, those types are usually container_file_t for file and container_t for processes. Most operations are permitted within those types, and the categories are really what matters. As with libvirt, categories have to match for access to be granted, effectively blocking inter-container communication. Super-privileged containers however are exempt from categories. They use the spc_t SELinux type, which allows them to do pretty much anything, at least as far as SELinux is concerned. That is all defined as an SELinux module in the container-selinux Github repository MCS and container orchestrators: Container orchestrators add a level of management. They define pods of containers, and within a pod, cross-container communication is acceptable and often even necessary. Categories are therefore managed at the pod level, and all the containers that belong to the same pod are assigned the same categories, as illustrated by the following diagram. SELinux in Kubevirt: Finally getting to KubeVirt, which relies on all of the above, as it runs libvirt in a container managed by a container orchestrator on SELinux-enabled systems. In that context, libvirt runs inside a regular container and can’t manage SELinux object like types and categories. However, MCS isolation is provided by the container orchestrator, and every VM runs in its own pod (virt-launcher). And since no 2 virt-launcher pods will ever have the same categories on a given node, SELinux isolation of VMs is guaranteed. Note: As some host configuration is usually required for VMs to run, each node also runs a super-privileged pod (virt-handler), dedicated to such operations. 
" }, { - "id": 64, + "id": 65, "url": "/2020/KubeVirt-VM-Image-Usage-Patterns.html", "title": "KubeVirt VM Image Usage Patterns", "author" : "David Vossel", "tags" : "kubevirt, kubernetes, virtual machine, VM, images, storage", "body": "Building a VM Image Repository: You know what I hear a lot from new KubeVirt users? “How do I manage VM images with KubeVirt? There’s a million options and I have no idea where to start. ” And I agree. It’s not obvious. There are a million ways to use and manipulate VM images with KubeVirt. That’s by design. KubeVirt is meant to be as flexible as possible, but in the process I think we dropped the ball on creating some well defined workflows people can use as a starting point. So, that’s what I’m going to attempt to do. I’ll show you how to make your images accessible in the cluster. I’ll show you how to make a custom VM image repository for use within the cluster. And I’ll show you how to use this at scale using the same patterns you may have used in AWS or GCP. The pattern we’ll use here is… Import a base VM image into the cluster as an PVC Use KubeVirt to create a new immutable custom image with application assets Scale out as many VMIs as we’d like using the pre-provisioned immutable custom image. Remember, this isn’t “the definitive” way of managing VM images in KubeVirt. This is just an example workflow to help people get started. Importing a Base Image: Let’s start with importing a base image into a PVC. For our purposes in this workflow, the base image is meant to be immutable. No VM will use this image directly, instead VMs spawn with their own unique copy of this base image. Think of this just like you would containers. A container image is immutable, and a running container instance is using a copy of an image instead of the image itself. Step 0. Install KubeVirt with CDI: I’m not covering this. Use our documentation linked to below. Understand that CDI (containerized data importer) is the tool we’ll be using to help populate and manage PVCs. Installing KubeVirtInstalling CDI Step 1. Create a namespace for our immutable VM images: We’ll give users the ability to clone VM images living on PVCs from this namespace to their own namespace, but not directly create VMIs within this namespace. kubectl create namespace vm-imagesStep 2. Import your image to a PVC in the image namespace: Below are a few options for importing. For each example, I’m using the Fedora Cloud x86_64 qcow2 image that can be downloaded here If you try these examples yourself, you’ll need to download the current Fedora-Cloud-Base qcow2 image file in your working directory. Example: Import a local VM from your desktop environment using virtctl If you don’t have ingress setup for the cdi-uploadproxy service endpoint (which you don’t if you’re reading this) we can set up a local port forward using kubectl. That gives a route into the cluster to upload the image. Leave the command below executing to open the port. kubectl port-forward -n cdi service/cdi-uploadproxy 18443:443In a separate terminal upload the image over the port forward connection using the virtctl tool. Note that the size of the PVC must be the size of what the qcow image will expand to when converted to a raw image. In this case I chose 5 gigabytes as the PVC size. virtctl image-upload dv fedora-cloud-base --namespace vm-images --size=5Gi --image-path Fedora-Cloud-Base-XX-X. X. x86_64. qcow2 --uploadproxy-url=https://127. 0. 0. 
1:18443 --insecureOnce that completes, you’ll have a PVC in the vm-images namespace that contains the Fedora Cloud image. kubectl get pvc -n vm-imagesNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGEfedora-cloud-base Bound local-pv-e824538e 5Gi RWO local 60sExample: Import using a container registry If the image’s footprint is small like our Fedora Cloud Base qcow image, then it probably makes sense to use a container image registry to import our image from a container image to a PVC. In the example below, I start by building a container image with the Fedora Cloud Base qcow VM image in it, and push that container image to my container registry. cat << END > DockerfileFROM scratchADD Fedora-Cloud-Base-XX-X. X. x86_64. qcow2 /disk/ENDdocker build -t quay. io/dvossel/fedora:cloud-base . docker push quay. io/dvossel/fedora:cloud-baseNext a CDI DataVolume is used to import the VM image into a new PVC from the container image you just uploaded to your container registry. Posting the DataVolume manifest below will result in a new 5 gigabyte PVC being created and the VM image being placed on that PVC in a way KubeVirt can consume it. cat << END > fedora-cloud-base-datavolume. yamlapiVersion: cdi. kubevirt. io/v1alpha1kind: DataVolumemetadata: name: fedora-cloud-base namespace: vm-imagesspec: source: registry: url: docker://quay. io/dvossel/fedora:cloud-base pvc: accessModes: - ReadWriteOnce resources: requests: storage: 5GiENDkubectl create -f fedora-cloud-base-datavolume. yamlYou can observe the CDI complete the import by watching the DataVolume object. kubectl describe datavolume fedora-cloud-base -n vm-images. . . Status: Phase: Succeeded Progress: 100. 0%Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ImportScheduled 2m49s datavolume-controller Import into fedora-cloud-base scheduled Normal ImportInProgress 2m46s datavolume-controller Import into fedora-cloud-base in progress Normal Synced 40s (x11 over 2m51s) datavolume-controller DataVolume synced successfully Normal ImportSucceeded 40s datavolume-controller Successfully imported into PVC fedora-cloud-baseOnce the import is complete, you’ll see the image available as a PVC in your vm-images namespace. The PVC will have the same name given to the DataVolume. kubectl get pvc -n vm-imagesNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGEfedora-cloud-base Bound local-pv-e824538e 5Gi RWO local 60sExample: Import an image from an http or s3 endpoint While I’m not going to provide a detailed example here, another option for importing VM images into a PVC is to host the image on an http server (or as an s3 object) and then use a DataVolume to import the VM image into the PVC from a URL. Replace the url in this example with one hosting the qcow2 image. More information about this import method can be found here. kind: DataVolumemetadata: name: fedora-cloud-base namespace: vm-imagesspec: source: http: url: http://your-web-server-here/images/Fedora-Cloud-Base-XX-X. X. x86_64. qcow2 pvc: accessModes: - ReadWriteOnce resources: requests: storage: 5GiProvisioning New Custom VM Image: The base image itself isn’t that useful to us. Typically what we really want is an immutable VM image preloaded with all our application related assets. This way when the VM boots up, it already has everything it needs pre-provisioned. The pattern we’ll use here is to provision the VM image once, and then use clones of the pre-provisioned VM image as many times as we’d like. 
For this example, I want a new immutable VM image preloaded with an nginx webserver. We can actually describe this entire process of creating this new VM image using the single VM manifest below. Note that I’m starting the VM inside the vm-images namespace. This is because I want the resulting VM image’s cloned PVC to remain in our vm-images repository namespace. apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: labels: kubevirt. io/vm: nginx-provisioner name: nginx-provisioner namespace: vm-imagesspec: runStrategy: RerunOnFailure template: metadata: labels: kubevirt. io/vm: nginx-provisioner spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 - disk: bus: virtio name: cloudinitdisk machine: type: resources: requests: memory: 1Gi terminationGracePeriodSeconds: 0 volumes: - dataVolume: name: fedora-nginx name: datavolumedisk1 - cloudInitNoCloud: userData: | #!/bin/sh yum install -y nginx systemctl enable nginx # removing instances ensures cloud init will execute again after reboot rm -rf /var/lib/cloud/instances shutdown now name: cloudinitdisk dataVolumeTemplates: - metadata: name: fedora-nginx spec: pvc: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi source: pvc: namespace: vm-images name: fedora-cloud-baseThere are a few key takeaways from this manifest worth discussing. Usage of runStrategy: “RerunOnFailure”. This tells KubeVirt to treat the VM’s execution similar to a Kubernetes Job. We want the VM to continue retrying until the VM guest shuts itself down gracefully. Usage of the cloudInitNoCloud volume. This volume allows us to inject a script into the VM’s startup procedure. In our case, we want this script to install nginx, configure nginx to launch on startup, and then immediately shutdown the guest gracefully once that is complete. Usage of the dataVolumeTemplates section. This allows us to define a new PVC which is a clone of our fedora-cloud-base base image. The resulting VM image attached to our VM will be a new image pre-populated with nginx. After posting the VM manifest to the cluster, wait for the corresponding VMI to reach the Succeeded phase. kubectl get vmi -n vm-imagesNAME AGE PHASE IP NODENAMEnginx-provisioner 2m26s Succeeded 10. 244. 0. 22 node01This tells us the VM successfully executed the cloud-init script which installed nginx and shut down the guest gracefully. A VMI that never shuts down or repeatedly fails means something is wrong with the provisioning. All that’s left now is to delete the VM and leave the resulting PVC behind as our immutable artifact. We do this by deleting the VM using the –cascade=false option. This tells Kubernetes to delete the VM, but leave behind anything owned by the VM. In this case we’ll be leaving behind the PVC that has nginx provisioned on it. kubectl delete vm nginx-provisioner -n vm-images --cascade=falseAfter deleting the VM, you can see the nginx provisioned PVC in your vm-images namespace. kubectl get pvc -n vm-imagesNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGEfedora-cloud-base Bound local-pv-e824538e 5Gi RWO local 60sfedora-nginx Bound local-pv-8dla23ds 5Gi RWO local 60sUnderstanding the VM Image Repository: At this point we have a namespace, vm-images, that contains PVCs with our VM images on them. Those PVCs represent VM images in the same way AWS’s AMIs represent VM images and this vm-images namespace is our VM image repository. 
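Because the repository is nothing more than a namespace full of PVCs, ordinary Kubernetes labels can play the role that AMI tags usually do. This is purely an optional convention (the image-type label key here is just an example name, not anything KubeVirt or CDI looks at), but it can make the repository easier to browse as the number of images grows:

kubectl label pvc fedora-cloud-base -n vm-images image-type=base
kubectl label pvc fedora-nginx -n vm-images image-type=nginx
kubectl get pvc -n vm-images -l image-type=nginx

Nothing in the rest of this workflow depends on these labels; cloning works the same either way.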
Using CDI’s icross namespace cloning feature, VM’s can now be launched across multiple namespaces throughout the entire cluster using the PVCs in this “repository”. Note that non-admin users need a special RBAC role to allow for this cross namespace PVC cloning. Any non-admin user who needs the ability to access the vm-images namespace for PVC cloning will need the RBAC permissions outlined here. Below is an example of the RBAC necessary to enable cross namespace cloning from the vm-images namespace to the default namespace using the default service account. apiVersion: rbac. authorization. k8s. io/v1kind: ClusterRolemetadata: name: cdi-clonerrules:- apiGroups: [ cdi. kubevirt. io ] resources: [ datavolumes/source ] verbs: [ create ]---apiVersion: rbac. authorization. k8s. io/v1kind: RoleBindingmetadata: name: default-cdi-cloner namespace: vm-imagessubjects:- kind: ServiceAccount name: default namespace: defaultroleRef: kind: ClusterRole name: cdi-cloner apiGroup: rbac. authorization. k8s. ioHorizontally Scaling VMs Using Custom Image: Now that we have our immutable custom VM image, we can create as many VMs as we want using that custom image. Example: Scale out VMI instances using the custom VM image: Clone the custom VM image from the vm-images namespace into the namespace the VMI instances will be running in as a ReadOnlyMany PVC. This will allow concurrent access to a single PVC. apiVersion: cdi. kubevirt. io/v1alpha1kind: DataVolumemetadata: name: nginx-rom namespace: defaultspec: source: pvc: namespace: vm-images name: fedora-nginx pvc: accessModes: - ReadOnlyMany resources: requests: storage: 5GiNext, create a VirtualMachineInstanceReplicaSet that references the nginx-rom PVC as an ephemeral volume. With an ephemeral volume, KubeVirt will mount the PVC read only, and use a cow (copy on write) ephemeral volume on local storage to back each individual VMI. This ephemeral data’s life cycle is limited to the life cycle of each VMI. Here’s an example manifest of a VirtualMachineInstanceReplicaSet starting 5 instances of our nginx server in separate VMIs. apiVersion: kubevirt. io/v1alpha3kind: VirtualMachineInstanceReplicaSetmetadata: labels: kubevirt. io/vmReplicaSet: nginx name: nginxspec: replicas: 5 template: metadata: labels: kubevirt. io/vmReplicaSet: nginx spec: domain: devices: disks: - disk: bus: virtio name: nginx-image - disk: bus: virtio name: cloudinitdisk machine: type: resources: requests: memory: 1Gi terminationGracePeriodSeconds: 0 volumes: - ephemeral: name: nginx-image persistentVolumeClaim: claimName: nginx-rom - cloudInitNoCloud: userData: | # add any custom logic you want to occur on startup here. echo “cloud-init script execution name: cloudinitdiskExample: Launching a Single “Pet” VM from Custom Image: In the manifest below, we’re starting a new VM with a PVC cloned from our pre-provisioned VM image that contains the nginx server. When the VM boots up, a new PVC will be created in the VM’s namespace that is a clone of the PVC referenced in our vm-images namespace. apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: labels: kubevirt. io/vm: nginx name: nginxspec: running: true template: metadata: labels: kubevirt. 
io/vm: nginx spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 - disk: bus: virtio name: cloudinitdisk machine: type: resources: requests: memory: 1Gi terminationGracePeriodSeconds: 0 volumes: - dataVolume: name: nginx name: datavolumedisk1 - cloudInitNoCloud: userData: | # add any custom logic you want to occur on startup here. echo “cloud-init script execution name: cloudinitdisk dataVolumeTemplates: - metadata: name: nginx spec: pvc: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi source: pvc: namespace: vm-images name: fedora-nginxOther Custom Creation Image Tools: In my example I imported a VM base image into the cluster and used KubeVirt to provision a custom image with a technique that used cloud-init. This may or may not make sense for your use case. It’s possible you need to pre-provision the VM image before importing into the cluster at all. If that’s the case, I suggest looking into two tools. Packer. io using the qemu builder. This allows you to automate building custom images on your local machine using configuration files that describe all the build steps. I like this tool because it closely matches the Kubernetes “declarative” approach. Virt-customize is a cli tool that allows you to customize local VM images by injecting/modifying files on disk and installing packages. Virt-install is a cli tool that allows you to automate a VM install as if you were installing it from a cdrom. You’ll want to look into using a kickstart file to fully automate the process. The resulting VM image artifact created from any of these tools can then be imported into the cluster in the same way we imported the base image earlier in this document. " }, { - "id": 65, + "id": 66, "url": "/2020/changelog-v0.29.0.html", "title": "KubeVirt v0.29.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 29. 0: Released on: Wed May 6 15:01:57 2020 +0200 Tests: Many many test fixes Tests: Many more test fixes CI: Add lane with SELinux enabled CI: Drop PPC64 support for now Drop Genie support Drop the use of hostPaths in the virt-launcher for improved security Support priority classes for important componenets Support IPv6 over masquerade binding Support certificate rotations based on shared secrets Support for VM ready condition Support for advanced node labelling (supported CPU Families and machine types)" }, { - "id": 66, + "id": 67, "url": "/2020/KubeVirt-Operation-Fundamentals.html", "title": "KubeVirt Operation Fundamentals", "author" : "David Vossel", "tags" : "kubevirt, kubernetes, virtual machine, VM, design, architecture, operation", "body": "Simplicity Above All Else: In the late 1970s and early 1980s there were two video recording tape formats competing for market domination. The Betamax format was the technically superior option. Yet despite having better audio, video, and build quality, Betamax still eventually lost to the technically inferior VHS format. VHS won because it was “close enough” in terms of quality and drastically reduced the cost to the consumer. I’ve seen this same pattern play out in the open source world as well. It doesn’t matter how technically superior one project might be over another if no one can operate the thing. The “cost” here is operational complexity. The project people can actually get up and running in 5 minutes as a proof of concept is usually going to win over another project they struggle to stand up for several hours or days. 
With KubeVirt, our aim is Betamax for quality and VHS for operational complexity costs. When we have to choose between the two, the option that involves less operational complexity wins 9 out of 10 times. Essentially, above all else, KubeVirt must be simple to use. Installation Made Easy: From my experience, the first (and perhaps the largest) hurdle a user faces when approaching a new project is installation. When the KubeVirt architecture team placed their bet’s on what technical direction to take the project early on, picking a design that was easy to install was a critical component of the decision making process. As a result, our goal from day one has always been to make installing KubeVirt as simple as posting manifests to the cluster with standard Kubernetes client tooling (like kubectl). No per node package installations, no host level configurations. All KubeVirt components have to be delivered as containers and managed with Kubernetes. We’ve maintained this simplicity today. Installing KubeVirt v0. 27. 0 is as simple as… Step 1: posting the KubeVirt operator manifest kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/v0. 27. 0/kubevirt-operator. yamlStep 2: posting the KubeVirt install object, which you can use to define exactly what version you want to install using the KubeVirt operator. In our example here, this custom resource defaults to the release that matches the installed operator. kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/v0. 27. 0/kubevirt-cr. yamlStep 3: and then optionally waiting for the KubeVirt install object’s “Available” condition, which indicates installation has succeeded. kubectl -n kubevirt wait kv kubevirt --for condition=AvailableMaintaining this simplicity played a critical role in our design process early on. At one point we had to make a decision whether to use the existing Kubernetes container runtimes or create our own special virtualization runtime to run in parallel to the cluster’s container runtime. We certainly had more control with our own runtime, but there was no practical way of delivering our own CRI implementation that would be easy to install on existing Kubernetes clusters. The installation would require invasive per node modifications and fall outside of the scope of what we could deliver using Kubernetes manifests alone, so we dropped the idea. Lucky for us, reusing the existing container runtime was both the simplest approach operationally and eventually proved to be the superior approach technically for our use case. Zero Downtime Updates: While installation is likely the first hurdle for evaluating a project, how to perform updates quickly becomes the next hurdle before placing a project into production. This is why we created the KubeVirt virt-operator. If you go back and look at the installation steps in the previous section, you’ll notice the first step is to post the virt-operator manifest and the second step is posting a custom resource object. What we’re doing here is bringing up the virt-operator somewhere in the cluster, and then posting a custom resource object representing the KubeVirt install. That second step is telling virt-operator to install KubeVirt. The third step is simply watching our install object to determine when virt-operator has reported the install is complete. Using our default installation instructions, zero downtime updates are as simple as posting a new virt-operator deployment. Step 1. Update virt-operator from our original install of v0. 27. 0 to v0. 
28. 0 by applying a new virt-operator manifest. kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/v0. 28. 0/kubevirt-operator. yamlStep 2: Watch the install object to see when the installation completes. Eventually it will report v0. 28. 0 as the observed version which indicates the update has completed. kubectl get kv -o yaml -n kubevirt | grep observedKubeVirtVersionBehind the scenes, virt-operator is coordinating the roll out of all the new KubeVirt components in a way that ensures existing virtual machine workloads are not disrupted. The KubeVirt community supports and tests the update path between each KubeVirt minor release to ensure workloads remain available both before, during, and after an update has completed. Furthermore, there are a set of functional tests that run on every pull request made to the project that validate the code about to be submitted does not disrupt the update path from the latest KubeVirt release. Our merge process won’t even allow code to enter the code base without first passing these update functional tests on a live cluster. " }, { - "id": 67, + "id": 68, "url": "/2020/KubeVirt-Security-Fundamentals.html", "title": "KubeVirt Security Fundamentals", "author" : "David Vossel", "tags" : "kubevirt, kubernetes, virtual machine, VM, design, architecture, security", "body": "Security Guidelines: In KubeVirt, our approach to security can be summed up by adhering to the following guidelines. Maintain the principle of least privilege for all our components, meaning each component only has access to exactly the minimum privileges required to operate. Establish boundaries between trusted vs untrusted components. In our case, an untrusted component is typically anything that executes user third party logic. Inter-component network communication must be secured by TLS with mutual peer authentication. Let’s take a look at what each of these guidelines mean for us practically when it comes to KubeVirt’s design. The Principle of Least Privilege: By limiting each component to only the exact privileges it needs to operate, we reduce the blast radius that occurs if a component is compromised. Here’s a simple and rather obvious example. If a component needs access to a secret in a specific namespace, then we give that component read-only access to that single secret and not access to read all secrets. If that component is compromised, we’ve then limited the blast radius for what can be exploited. For KubeVirt, the principle of least privilege can be broken into two categories. Cluster Level Access: The resources and APIs a component is permitted to access on the cluster. Host Level Access: The local resources a component is permitted to access on the host it is running on. Cluster Level Access: For cluster level access the primary tools we have to grant and restrict access to cluster resources are cluster Namespaces and RBAC (Role Based Access Control). Each KubeVirt component only has access to the exact RBAC permissions within the limited set of Namespaces it requires to operate. For example, let’s take a look at the KubeVirt control plane and runtime components highlighted in orange below. Virt-controller is the component responsible for spinning up pods across the entire cluster for virtual machines to live in. As a result, this component needs access to RBAC permissions to manage pods. However, another part of virt-controller’s operation involves needing access to a single secret that contains its TLS certificate information. 
We aren’t going to give virt-controller access to manage secrets as well as pods simply because it needs access to read a single secret. In fact we aren’t even going to give virt-controller direct API access to any secrets at all. Instead we use the ability to pass a cluster secret as a pod volume into the virt-controller’s pod in order to provide read-only access. Virt-api is the component that validates our api and provides virtual machine console and VNC access. This component doesn’t need access to create Pods like virt-controller does. Instead it mostly only requires read and modify access to existing KubeVirt API objects. As a result, if virt-api is compromised the blast radius is mostly limited to KubeVirt objects. Virt-handler is a privileged daemonset that resides at the host level on every node that is capable of spinning up KubeVirt virtual machines. This component needs cluster access to the KubeVirt VirtualMachineInstance objects in order to manage the startup flow of virtual machines. However it doesn’t need cluster access to the pod objects the virtual machines live in. Similar to virt-api, what little cluster access this component has is mostly read-only and limited to the KubeVirt API. Virt-launcher is a non-privileged component that resides in every virtual machine’s pod. This component is responsible for starting and monitoring the qemu-kvm process. Since this process lives within an “untrusted” environment that is executing third party logic, we’ve designed this component to require no cluster API access. As a result, this pod only receives the default service account for the namespace the pod resides in. If virt-launcher is compromised, cluster API access should not be impacted. Host Level Access: For host level access, the primary tools we have at our disposal for limiting access primarily reside within the Pod specification’s securityContext section. It’s here that we can define settings like what local user a container runs with, whether a container has access to host namespaces, and SELinux related options. Other tools for host level access involve exposing hostPath volumes for shared host directory access and DevicePlugins to pass host devices into the pod environment. Let’s take a look at a few examples of how host access is managed for our components. Virt-controller and virt-api are cluster level components only, and have no need for access to host resources. These components run as non-privileged and non-root within their own isolated namespaces. No special host level access is granted to these components. For OpenShift clusters, the SCC (Security Context Constraint) feature even provides the ability to restrict virt-controller’s permissions in a way that prevents it from creating pods with host access. Virt-launcher is a host level component that is non-privileged and untrusted. However this component still needs access to host level devices (like /dev/kvm, gpus, and network devices) in order to start the virtual machine. Through the use of the Kubernetes Device Plugin feature, we can expose host devices into a pod’s environment in a controlled way that doesn’t compromise namespace isolation or require hostPath volumes. Virt-handler is a host level component that is both privileged and trusted. This component’s responsibilities involve reaching into the virt-launcher’s pod to perform actions we don’t want the untrusted virt-launcher component to have permissions to perform itself. 
The primary method we have to restrict virt-handler’s access to the host is through SELinux. Since virt-handler requires maintaining some limited persistent state, hostPath volumes are also utilized to allow virt-handler to store persistent information on the host that can persist through virt-handler updates. Trusted vs Untrusted Components: For KubeVirt, the separation between trusted and untrusted components comes when users can execute their own third party logic within a component’s environment. We can clearly illustrate this concept using the boundary between our two host level components, virt-launcher and virt-handler. Establishing Boundaries: The virt-launcher pod is an untrusted environment. The third party code executed within this environment is the user’s kvm virtual machine. KubeVirt has no control over what is executing within this virtual machine guest, so if there is a security vulnerability that allows breaking out of the kvm hypervisor, we want to make sure the environment that’s broken into is as limited as possible. This is why the virt-launcher’s pod has such restricted cluster and host access. The virt-handler pod, on the other hand, is a trusted environment that does not involve executing any third party code. During the virtual machine startup flow, there are privileged tasks that need to take place on the host in order to prepare the virtual machine for starting. This ranges from performing the Device Plugin logic that injects a host device into a pod’s environment, to setting up network bridges and interfaces within a pod’s environment. To accomplish this, we use the trusted virt-handler component to reach into the untrusted virt-launcher environment to perform privileged tasks. The boundary established here is that we trust virt-handler with the ability to influence and provide information about all virtual machines running on a host, and limit virt-launcher to only influence and provide information about itself. Securing Boundaries: Any communication channel that gives an untrusted environment the ability to present information to a trusted environment must be heavily scrutinized to prevent the possibility of privilege escalation. For example, the boundary between virt-handler and virt-launcher is meant to work like a one way mirror. The trusted virt-handler component can reach directly into the untrusted virt-launcher environments, but each virt-launcher can’t reach outside of its own isolated environment. Host namespace isolation provides a reasonable guarantee that virt-launcher can’t reach outside of its own environment directly, however we still have to be mindful about indirection communication. Virt-handler observes information presented to it by each virt-launcher pod. If a virt-launcher environment is able to present fake information about another virtual machine, then the untrusted virt-launcher environment could indirectly influence the execution of another workload. To counter this, when designing communication channels between trusted and untrusted components, we have to be careful to only allow communication from untrusted sources to influence itself and furthermore only influence itself in a way that can’t result in escalated privileges. Mutual TLS Authentication: There is a built in trust that components have for interacting with one another. For example, virt-api is allowed to establish virtual machine Console/VNC streams with virt-handler, and live migration is performed by streaming information between two virt-handler instances. 
However for these types of interactions to work, we have to have a strong guarantee that the endpoints we’re talking to are in fact who they present themselves to be. Otherwise we could live migrate a virtual machine to an untrusted location, or provide VNC access to a virtual machine to an unauthorized endpoint. In KubeVirt we solve this issue of inter-component communication trust in the same way Kubernetes solves it. Each component receives a unique TLS certificate signed by a cluster Certificate Authority which is used to guarantee the component is who they say they are. The certificate and CA information is injected into each component using a secret passed in as a Pod volume. Whenever a component acts as a client establishing a new connection with another component, it uses its unique certificate to prove its identify. Likewise, the server accepting the clients connection also presents its certificate to the client. This mutual peer certificate authentication allows both the client and server to establish trust. So, when virt-api attempts to establish a VNC console stream with a virt-handler component, virt-handler is configured to only allow that stream to be opened by an endpoint providing a valid virt-api certificate, and virt-api will only talk to a server that presents the expected virt-handler certificate. CA and Certificate Rotation: In KubeVirt both our CA and certificates are rotated on a user defined recurring interval. In the event that either the CA key or a certificate is compromised, this information will eventually be rendered stale and unusable regardless if the compromise is known or not. If the compromise is known, a forced CA and certificate rotation can be invoked by the cluster admin simply by deleting the corresponding secrets in the KubeVirt install namespace. " }, { - "id": 68, + "id": 69, "url": "/2020/KubeVirt-Architecture-Fundamentals.html", "title": "KubeVirt Architecture Fundamentals", "author" : "David Vossel", "tags" : "kubevirt, kubernetes, virtual machine, VM, design, architecture", "body": "Placing our Bets: Back in 2017 the KubeVirt architecture team got together and placed their bets on a set of core design principles that became the foundation of what KubeVirt is today. At the time, our decisions broke convention. We chose to take some calculated risks with the understanding that those risks had a real chance of not playing out in our favor. Luckily, time has proven our bets were well placed. Since those early discussions back in 2017, KubeVirt has grown from a theoretical prototype into a project deployed in production environments with a thriving open source community. While KubeVirt has grown in maturity and sophistication throughout the past few years, the initial set of guidelines established in those early discussions still govern the project’s architecture today. Those guidelines can be summarized nearly entirely by the following two key decisions. Virtual machines run in Pods using the existing container runtimes. This decision came at a time when other Kubernetes virtualization efforts were creating their own virtualization specific CRI runtimes. We took a bet on our ability to successfully launch virtual machines using existing and future container runtimes within an unadulterated Pod environment. Virtual machines are managed using a custom “Kubernetes like” declarative API. When this decision was made, imperative APIs were the defacto standard for how other platforms managed virtual machines. 
However, we knew in order to succeed in our mission to deliver a truly cloud-native API managed using existing Kubernetes tooling (like kubectl), we had to adhere fully to the declarative workflow. We took a bet that the lackluster Kubernetes Third Party Resource support (now known as CRDs) would eventually provide the ability to create custom declarative APIs as first class citizens in the cluster. Let’s dive into these two points a bit and take a look at how these two key decisions permeated throughout our entire design. Virtual Machines as Pods: We often pitch KubeVirt by saying something like “KubeVirt allows you to run virtual machines side by side with your container workloads”. However, the reality is we’re delivering virtual machines as container workloads. So as far as Kubernetes is concerned, there are no virtual machines, just pods and containers. Fundamentally, KubeVirt virtual machines just look like any other containerized application to the rest of the cluster. It’s our KubeVirt API and control plane that make these containerized virtual machines behave like you’d expect from using other virtual machine management platforms. The payoff from running virtual machines within a Kubernetes Pod has been huge for us. There’s an entire ecosystem that continues to grow around how to provide pods with access to networks, storage, host devices, cpu, memory, and more. This means every time a problem or feature is added to pods, it’s yet another tool we can use for virtual machines. Here are a few examples of how pod features meet the needs of virtual machines as well. Storage: Virtual machines need persistent disks. Users should be able to stop a VM, start a VM, and have the data persist. There’s a Kubernetes storage abstraction called a PVC (persistent volume claim) that allows persistent storage to be attached to a pod. This means by placing the virtual machine in a pod, we can use the existing PVC mechanisms of delivering persistent storage to deliver our virtual machine disks. Network: Virtual machines need access to cluster networking. Pods are provided network interfaces that tie directly into the pod network via CNI. We can give a virtual machine running in a pod access to the pod network using the default CNI allocated network interfaces already present in the pod’s environment. CPU/Memory: Users need the ability to assign cpu and memory resources to Virtual machines. We can assign cpu and memory to pods using the resource requests/limits on the pod spec. This means through the use of pod resource requests/limits we are able to assign resources directly to virtual machines as well. This list goes on and on. As problems are solved for pods, KubeVirt leverages the solution and translates it to the virtual machine equivalent. The Declarative KubeVirt Virtualization API: While a KubeVirt virtual machine runs within a pod, that doesn’t change the fact that people working with virtual machines have a different set of expectations for how virtual machines should work compared to how pods are managed. Here’s the conflict. Pods are mortal workloads. A pod is declared by posting it’s manifest to the cluster, the pod runs once to completion, and that’s it. It’s done. Virtual machines are immortal workloads. A virtual machine doesn’t just run once to completion. Virtual machines have state. They can be started, stopped, and restarted any number of times. Virtual machines have concepts like live migration as well. 
Furthermore if the node a virtual machine is running on dies, the expectation is for that exact same virtual machine to resurrect on another node maintaining its state. So, pods run once and virtual machines live forever. How do we reconcile the two? Our solution came from taking a play directly out of the Kubernetes playbook. The Kubernetes core apis have this concept of layering objects on top of one another through the use of workload controllers. For example, the Kubernetes ReplicaSet is a workload controller layered on top of pods. The ReplicaSet controller manages ensuring that there are always ‘x’ number of pod replicas running within the cluster. If a ReplicaSet object declares that 5 pod replicas should be running, but a node dies bringing that total to 4, then the ReplicaSet workload controller manages spinning up a 5th pod in order to meet the declared replica count. The workload controller is always reconciling on the ReplicaSet objects desired state. Using this established Kubernetes pattern of layering objects on top of one another, we came up with our own virtualization specific API and corresponding workload controller called a “VirtualMachine” (big surprise there on the name, right?). Users declare a VirtualMachine object just like they would a pod by posting the VirtualMachine object’s manifest to the cluster. The big difference here that deviates from how pods are managed is that we allow VirtualMachine objects to be declared to exist in different states. For example, you can declare you want to “start” a virtual machine by setting “running: true” on the VirtualMachine object’s spec. Likewise you can declare you want to “stop” a virtual machine by setting “running: false” on the VirtualMachine object’s spec. Behind the scenes, setting the “running” field to true or false results in the workload controller creating or deleting a pod for the virtual machine to live in. In the end, we essentially created the concept of an immortal VirtualMachine by laying our own custom API on top of mortal pods. Our API and controller knows how to resurrect a “stopped” VirtualMachine by constructing a pod with all the right network, storage volumes, cpu, and memory attached to in order to accurately bring the VirtualMachine back to life with the exact same state it stopped with. " }, { - "id": 69, + "id": 70, "url": "/2020/changelog-v0.28.0.html", "title": "KubeVirt v0.28.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 28. 
0: Released on: Thu Apr 9 23:01:29 2020 +0200 CI: Try to discover flaky tests before merge Fix the use of priorityClasses Fix guest memory overhead calculation Fix SR-IOV device overhead requirements Fix loading of tun module during virt-handler initialization Fixes for several test cases Fixes to support running with container_t Support for renaming a vM Support ioEmulator thread pinning Support a couple of alerts for virt-handler Support for filesystem listing using the guest agent Support for retrieving data from the guest agent Support for device role tagging Support for assigning devices to the PCI root bus Support for guest overhead override Rewrite container-disk in C to in order to reduce it’s memory footprint" }, { - "id": 70, + "id": 71, "url": "/2020/Live-migration.html", "title": "Live Migration in KubeVirt", "author" : "Pablo Iranzo Gómez", "tags" : "kubevirt, kubernetes, virtual machine, VM, Live Migration, node drain", "body": " Introduction Enabling Live Migration Configuring Live Migration Performing the Live Migration Cancelling a Live Migration What can go wrong? Node Eviction Conclusion ReferencesIntroduction: This blog post will be explaining on KubeVirt’s ability to perform live migration of virtual machines. Live Migration is a process during which a running Virtual Machine Instance moves to another compute node while the guest workload continues to run and remain accessible. The concept of live migration is already well-known among virtualization platforms and enables administrators to keep user workloads running while the servers can be moved to maintenance for any reason that you might think of like: Hardware maintenance (physical, firmware upgrades, etc) Power management, by moving workloads to a lower number of hypervisors during off-peak hours etcKubeVirt also includes support for virtual machine migration within Kubernetes when enabled. Keep reading to learn how! Enabling Live Migration: To enable live migration we need to enable the feature-gate for it by adding LiveMigration to the key: apiVersion: v1kind: ConfigMapmetadata: name: kubevirt-config namespace: kubevirt labels: kubevirt. io: data: feature-gates: LiveMigration A current kubevirt-config can be edited to append “LiveMigration” to an existing configuration: kubectl edit configmap kubevirt-config -n kubevirtdata: feature-gates: DataVolumes,LiveMigration Configuring Live Migration: If we want to alter the defaults for Live-Migration, we can further edit the kubevirt-config like: apiVersion: v1kind: ConfigMapmetadata: name: kubevirt-config namespace: kubevirt labels: kubevirt. io: data: feature-gates: LiveMigration migrations: |- parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 bandwidthPerMigration: 64Mi completionTimeoutPerGiB: 800 progressTimeout: 150Parameters are explained in the below table (check the documentation for more details): Parameter Default value Description parallelMigrationsPerCluster 5 How many migrations might happen at the same time parallelOutboundMigrationsPerNode 2 How many outbound migrations for a particular node bandwidthPerMigration 64Mi MiB/s to have the migration limited to, in order to not affect other systems completionTimeoutPerGiB 800 Time for a GiB of data to wait to be completed before aborting the migration. 
progressTimeout 150 Time to wait for Live Migration to progress in transferring data Performing the Live Migration: Limitations Virtual Machines using PVC must have a RWX access mode to be Live-Migrated Additionally, pod network binding of bridge interface is not allowedLive migration is initiated by posting an object VirtualMachineInstanceMigration to the cluster, indicating the VM name to migrate, like in the following example: apiVersion: kubevirt. io/v1alpha3kind: VirtualMachineInstanceMigrationmetadata: name: migration-jobspec: vmiName: vmi-fedoraThis will trigger the process for the VM. NOTE When a VM is started, a calculation has been already performed indicating if the VM is live-migratable or not. This information is stored in the VMI. status. conditions. Currently, most of the calculation is based on the Access Mode for the VMI volumes but can be based on multiple parameters. For example: Status: Conditions: Status: True Type: LiveMigratable Migration Method: BlockMigrationIf the VM is Live-Migratable, the request will submit successfully. The status change will be reported under VMI. status. Once live migration is complete, a status of Completed or Failed will be indicated. Watch out! The Migration Method field can contain: BlockMigration : meaning that the disk data is being copied from source to destination LiveMigration: meaning that only the memory is copied from source to destinationVMs with block devices located on shared storage backends like the ones provided by Rook that provide PVCs with ReadWriteMany access have the option to live-migrate only memory contents instead of having to also migrate the block devices. Cancelling a Live Migration: If we want to abort the Live Migration, ‘Kubernetes-Style’, we’ll just delete the object we created for triggering it. In this case, the VM status for migration will report some additional information: Migration State: Abort Requested: true Abort Status: Succeeded Completed: true End Timestamp: 2019-03-29T04:02:49Z Failed: true Migration Config: Completion Timeout Per GiB: 800 Progress Timeout: 150 Migration UID: 57a693d6-51d7-11e9-b370-525500d15501 Source Node: node02 Start Timestamp: 2019-03-29T04:02:47Z Target Direct Migration Node Ports: 39445: 0 43345: 49152 44222: 49153 Target Node: node01 Target Node Address: 10. 128. 0. 46 Target Node Domain Detected: true Target Pod: virt-launcher-testvmimcbjgw6zrzcmp8wpddvztvzm7x2k6cjbdgktwv8tkqNote that there are some additional fields that indicate that Abort Requested happened and in the above example that it has Succeded, in this case, the original fields for migration will report as Completed (because there’s no running migration) and Failed set to true. What can go wrong?: Live-migration is a complex process that requires transferring data from one ‘VM’ in one node to another ‘VM’ into another one, this requires that the activity of the VM being live-migrated to be compatible with the network configuration and throughput so that all the data can be migrated faster than the data is changed at the original VM, this is usually referred to as converging. Some values can be adjusted (check the table for settings that can be tuned), to allow it to succeed but as a trade-off: Increasing the number of VMs that can migrate at once, will reduce the available bandwidth. Increasing the bandwidth could affect applications running on that node (origin and target). 
Storage migration (check the Info note in the Performing the Live Migration section on the differences) might also consume bandwidth and resources. Node Eviction: Sometimes, a node requires to be put on maintenance and it includes workloads on it, either containers or, in KubeVirt’s case, VM’s. It is possible to use selectors, for example, move all the virtual machines to another node via kubectl drain <nodename>, for example, evicting all KubeVirt VM’s from a node can be done via: kubectl drain <node name> --delete-local-data --ignore-daemonsets=true --force --pod-selector=kubevirt. io=virt-launcherReenabling node after eviction Once the node has been tainted for eviction, we can use kubectl uncordon <nodename> to make it schedulable again. According to documentation, --delete-local-data, --ignore-daemonsets and --force are required because: Pods using emptyDir can be deleted because the data is ephemeral. VMI will have DaemonSets via virt-handler so it’s safe to proceed. VMIs are not owned by a ReplicaSet or DaemonSet, so kubectl can’t guarantee that those are restarted. KubeVirt has its own controllers for it managing VMI, so kubectl shouldn’t bother about it. If we omit the --pod-selector, we’ll force eviction of all Pods and VM’s from a node. In order to have VMIs using LiveMigration for eviction, we have to add a specific spec in the VMI YAML, so that when the node is tainted with kubevirt. io/drain:NoSchedule is added to a node. spec: evictionStrategy: LiveMigrateFrom that point, when kubectl taint nodes <foo> kubevirt. io/drain=draining:NoSchedule is executed, the migrations will start. Conclusion: As a briefing on the above data: LiveMigrate needs to be enabled on KubeVirt as a feature gate. LiveMigrate will add status to the VMI object indicating if it’s a candidate or not and if so, which mode to use (Block or Live) Based on the storage backend and other conditions, it will enable LiveMigration or just BlockMigration. References: Live Migration Node Drain/Eviction Rook" }, { - "id": 71, + "id": 72, "url": "/2020/changelog-v0.27.0.html", "title": "KubeVirt v0.27.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 27. 0: Released on: Fri Mar 6 22:40:34 2020 +0100 Support for more guest agent informations in the API Support setting priorityClasses on VMs Support for additional control plane alerts via prometheus Support for io and emulator thread pinning Support setting a custom SELinux type for the launcher Support to perform network configurations from handler instead of launcher Support to opt-out of auto attaching the serial console Support for different uninstallStaretgies for data protection Fix to let qemu run in the qemu group Fix guest agen connectivity check after i. e. live migrations" }, { - "id": 72, + "id": 73, "url": "/2020/Advanced-scheduling-with-affinity-rules.html", "title": "Advanced scheduling using affinity and anti-affinity rules", "author" : "Alberto Losada Grande", "tags" : "kubevirt, kubernetes, virtual machine, VM, Advanced VM scheduling, affinity, scheduling, topologyKeys", "body": "This blog post shows how KubeVirt can take advantage of Kubernetes inner features to provide an advanced scheduling mechanism to virtual machines (VMs). The same or even more complex affinity and anti-affinity rules can be assigned to VMs or Pods in Kubernetes than in traditional virtualization solutions. It is important to notice that from the Kubernetes scheduler stand point, which will be explained later, it only manages Pod and node scheduling. 
Since the VM is wrapped up in a Pod, the same scheduling rules are completely valid to KubeVirt VMs. Warning As informed in the official Kubernetes documentation: inter-pod affinity and anti-affinity require substantial amount of processing which can slow-down scheduling in large clusters significantly. This can be specially notorious in clusters larger than several hundred nodes. Introduction: In a Kubernetes cluster, kube-scheduler is the default scheduler and runs as part of the control plane. Kube-scheduler is in charge of selecting an optimal node for every newly created or unscheduled pod to run on. However, every container within a pod and the pods themselves, have different requirements for resources. Therefore, existing nodes need to be filtered according to the specific requirements. Note If you want and need to, you can write your own scheduling component and use it instead. When we talk about scheduling, we refer basically to making sure that Pods are matched to Nodes so that a Kubelet can run them. Actually, kube-scheduler selects a node for the pod in a 2-step operation: Filtering. The filtering step finds the set of candidate Nodes where it’s possible to schedule the Pod. The result is a list of Nodes, usually more than one. Scoring. In the scoring step, the scheduler ranks the remaining nodes to choose the most suitable Pod placement. This is accomplished based on a score obtained from a list of scoring rules that are applied by the scheduler. The obtained list of candidate nodes is evaluated using multiple priority criteria, which add up to a weighted score. Nodes with higher values are better candidates to run the pod. Among the criteria are affinity and anti-affinity rules; nodes with higher affinity for the pod have a higher score, and nodes with higher anti-affinity have a lower score. Finally, kube-scheduler assigns the Pod to the Node with the highest score. If there is more than one node with equal scores, kube-scheduler selects one of these randomly. In this blog post we are going to focus on examples of affinity and anti-affinity rules applied to solve real use cases. A common use for affinity rules is to schedule related pods to be close to each other for performance reasons. A common use case for anti-affinity rules is to schedule related pods not too close to each other for high availability reasons. Goal: Run my customapp: In this example, our mission is to run a customapp that is composed of 3 tiers: A web proxy cache based on varnish HTTP cache. A web appliance delivered by a third provider. A clustered database running on MS Windows. Instructions were delivered to deploy the application in our production Kubernetes cluster taking advantage of the existing KubeVirt integration and making sure the application is resilient to any problems that can occur. The current status of the cluster is the following: A stretched Kubernetes cluster is already up and running. KubeVirt is already installed. There is enough free CPU, Memory and disk space in the cluster to deploy customapp stack. information The Kubernetes stretched cluster is running in 3 different geographical locations to provide high availability. Also, all locations are close and well-connected to provide low latency between the nodes. Topology used is common for large data centers, such as cloud providers, which is based in organizing hosts into regions and zones: A region is a set of hosts in a close geographic area, which guarantees high-speed connectivity between them. 
A zone, also called an availability zone, is a set of hosts that might fail together because they share common critical infrastructure components, such as a network, storage, or power. There are some important labels when creating advanced scheduling workflows with affinity and anti-affinity rules. As explained previously, they are very close linked to common topologies used in datacenters. Labels such as: topology. kubernetes. io/zone topology. kubernetes. io/region kubernetes. io/hostname kubernetes. io/arch kubernetes. io/osWarning As it is detailed in the labels and annotations official documentation, starting in v1. 17, label failure-domain. beta. kubernetes. io/region and failure-domain. beta. kubernetes. io/zone are deprecated in favour of topology. kubernetes. io/region and topology kubernetes. io/zone respectively. Previous labels are just prepopulated Kubernetes labels that the system uses to denote such a topology domain. In our case, the cluster is running in Iberia region across three different zones: scu, bcn and sab. Therefore, it must be labelled accordingly since advanced scheduling rules are going to be applied: Information Pod anti-affinity requires nodes to be consistently labelled, i. e. every node in the cluster must have an appropriate label matching topologyKey. If some or all nodes are missing the specified topologyKey label, it can lead to unintended behavior. Below you can find a cluster labeling where topology is based in one region and several zones spread across geographically. Additionally, special high performing nodes composed by nodes with a high number of resources available including memory, cpu, storage and network are marked as well. $ kubectl label node kni-worker topology. kubernetes. io/region=iberia topology. kubernetes. io/zone=scunode/kni-worker labeled$ kubectl label node kni-worker2 topology. kubernetes. io/region=iberia topology. kubernetes. io/zone=scu performance=highnode/kni-worker2 labeled$ kubectl label node kni-worker3 topology. kubernetes. io/region=iberia topology. kubernetes. io/zone=bcnnode/kni-worker3 labeled$ kubectl label node kni-worker4 topology. kubernetes. io/region=iberia topology. kubernetes. io/zone=bcn performance=highnode/kni-worker4 labeled$ kubectl label node kni-worker5 topology. kubernetes. io/region=iberia topology. kubernetes. io/zone=sabnode/kni-worker5 labeled$ kubectl label node kni-worker6 topology. kubernetes. io/region=iberia topology. kubernetes. io/zone=sab performance=highnode/kni-worker6 labeledAt this point, Kubernetes cluster nodes are labelled as expected: $ kubectl get nodes --show-labelsNAME STATUS ROLES AGE VERSION LABELSkni-control-plane Ready master 18m v1. 17. 0 beta. kubernetes. io/arch=amd64,beta. kubernetes. io/os=linux,kubernetes. io/arch=amd64,kubernetes. io/hostname=kni-control-plane,kubernetes. io/os=linux,node-role. kubernetes. io/master=kni-worker Ready <none> 17m v1. 17. 0 beta. kubernetes. io/arch=amd64,beta. kubernetes. io/os=linux,kubernetes. io/arch=amd64,kubernetes. io/hostname=kni-worker,kubernetes. io/os=linux,topology. kubernetes. io/region=iberia,topology. kubernetes. io/zone=scukni-worker2 Ready <none> 17m v1. 17. 0 beta. kubernetes. io/arch=amd64,beta. kubernetes. io/os=linux,kubernetes. io/arch=amd64,kubernetes. io/hostname=kni-worker2,kubernetes. io/os=linux,performance=high,topology. kubernetes. io/region=iberia,topology. kubernetes. io/zone=scukni-worker3 Ready <none> 17m v1. 17. 0 beta. kubernetes. io/arch=amd64,beta. kubernetes. io/os=linux,kubernetes. 
io/arch=amd64,kubernetes. io/hostname=kni-worker3,kubernetes. io/os=linux,topology. kubernetes. io/region=iberia,topology. kubernetes. io/zone=bcnkni-worker4 Ready <none> 17m v1. 17. 0 beta. kubernetes. io/arch=amd64,beta. kubernetes. io/os=linux,kubernetes. io/arch=amd64,kubernetes. io/hostname=kni-worker4,kubernetes. io/os=linux,performance=high,topology. kubernetes. io/region=iberia,topology. kubernetes. io/zone=bcnkni-worker5 Ready <none> 17m v1. 17. 0 beta. kubernetes. io/arch=amd64,beta. kubernetes. io/os=linux,kubernetes. io/arch=amd64,kubernetes. io/hostname=kni-worker5,kubernetes. io/os=linux,topology. kubernetes. io/region=iberia,topology. kubernetes. io/zone=sabkni-worker6 Ready <none> 17m v1. 17. 0 beta. kubernetes. io/arch=amd64,beta. kubernetes. io/os=linux,kubernetes. io/arch=amd64,kubernetes. io/hostname=kni-worker6,kubernetes. io/os=linux,performance=high,topology. kubernetes. io/region=iberia,topology. kubernetes. io/zone=sabFinally, the cluster is ready to run and deploy our specific customapp. The clustered database: A MS Windows 2016 Server virtual machine is already containerized and ready to be deployed. As we have to deploy 3 replicas of the operating system a VirtualMachineInstanceReplicaSet has been created. Once the replicas are up and running, database administrators will be able to reach the VMs running in our Kubernetes cluster through Remote Desktop Protocol (RDP). Eventually, MS SQL2016 database is installed and configured as a clustered database to provide high availability to our customapp. Information Check KubeVirt: installing Microsoft Windows from an ISO if you need further information on how to deploy a MS Windows VM on KubeVirt. Regarding the scheduling, a Kubernetes node of each zone has been labelled as high-performance, e. g. it has more memory, storage, CPU and higher performing disk and network than the other node that shares the same zone. This specific Kubernetes node was provisioned to run the database VM due to the hardware requirements to run database applications. Therefore, a scheduling rule is needed to be sure that all MSSQL2016 instances run only in these high-performance servers. Note These nodes were labelled as performance=high. There are two options to accomplish our requirement, use nodeSelector or configure nodeAffinity rules. In our first approach, nodeSelector instead of nodeAffinity rule is used. nodeSelector matches the nodes where the performance key is equal to high and makes the VirtualMachineInstance to run on top of the matching nodes. The following code snippet shows the configuration: apiVersion: kubevirt. io/v1alpha3kind: VirtualMachineInstanceReplicaSetmetadata: name: mssql2016spec: replicas: 3 selector: matchLabels: kubevirt. io/domain: mssql2016 template: metadata: name: mssql2016 labels: kubevirt. io/domain: mssql2016 spec: nodeSelector: #nodeSelector matches nodes where performance key has high as value. performance: high domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - bridge: {} name: default machine: type: resources: requests: memory: 16Gi networks: - name: default pod: {}Next, the VirtualMachineInstanceReplicaSet configuration partially shown previously is applied successfully. $ kubectl create -f vmr-windows-mssql. yamlvirtualmachineinstancereplicaset. kubevirt. io/mssql2016 createdThen, it is expected that the 3 VirtualMachineInstances will eventually run on the nodes where matching key/value label is configured. 
Actually, based on the hostname those are the even nodes. $ kubectl get pods -o wideNAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATESvirt-launcher-mssql2016p948r-257pn 0/2 ContainerCreating 0 16s <none> kni-worker4 <none> <none>virt-launcher-mssql2016rd4lk-6zz9d 0/2 ContainerCreating 0 16s <none> kni-worker2 <none> <none>virt-launcher-mssql2016z2qnw-t924b 0/2 ContainerCreating 0 16s <none> kni-worker6 <none> <none>[root@eko1 ind-affinity]# kubectl get vmi -o wideNAME AGE PHASE IP NODENAME LIVE-MIGRATABLEmssql2016p948r 34s Schedulingmssql2016rd4lk 34s Schedulingmssql2016z2qnw 34s Scheduling$ kubectl get vmi -o wideNAME AGE PHASE IP NODENAME LIVE-MIGRATABLEmssql2016p948r 3m25s Running 10. 244. 1. 4 kni-worker4 Falsemssql2016rd4lk 3m25s Running 10. 244. 2. 4 kni-worker2 Falsemssql2016z2qnw 3m25s Running 10. 244. 5. 4 kni-worker6 FalseWarning nodeSelector provides a very simple way to constrain pods to nodes with particular labels. The affinity/anti-affinity feature greatly expands the types of constraints you can express. Let’s test what happens if the node running the database must be rebooted due to an upgrade or any other valid reason. First, a node drain must be executed in order to evacuate all pods running and mark the node as unschedulable. $ kubectl drain kni-worker2 --delete-local-data --ignore-daemonsets=true --forcenode/kni-worker2 already cordonedevicting pod virt-launcher-mssql2016rd4lk-6zz9d pod/virt-launcher-mssql2016rd4lk-6zz9d evictednode/kni-worker2 evictedThe result is an unwanted scenario, where two databases are being executed in the same high performing server. This leads us to more advanced scheduling features like affinity and anti-affinity. $ kubectl get vmi -o wideNAME AGE PHASE IP NODENAME LIVE-MIGRATABLEmssql201696sz9 7m16s Running 10. 244. 5. 5 kni-worker6 Falsemssql2016p948r 19m Running 10. 244. 1. 4 kni-worker4 Falsemssql2016z2qnw 19m Running 10. 244. 5. 4 kni-worker6 FalseThe affinity/anti-affinity rules solve much more complex scenarios compared to nodeSelector. Some of the key enhancements are: The language is more expressive (not just “AND or exact match”). You can indicate that the rule is “soft”/”preference” rather than a hard requirement, so if the scheduler can’t satisfy it, the pod will still be scheduled. You can constrain against labels on other pods running on the node (or other topological domain), rather than against labels on the node itself, which allows rules about which pods can and cannot be co-located. Before going into more detail, it should be noticed that there are currently two types of affinity that applies to both Node and Pod affinity. They are called requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution. You can think of them as hard and soft respectively, in the sense that the former specifies rules that must be met for a pod to be scheduled onto a node (just like nodeSelector but using a more expressive syntax), while the latter specifies preferences that the scheduler will try to enforce but will not guarantee. Said that, it is time to edit the VirtualMachineInstanceReplicaSet YAML file. Actually, nodeSelector must be removed and two different affinity rules created instead. nodeAffinity rule. This rule ensures that during scheduling time the application (MS SQL2016) must be placed only on nodes where the key performance contains the value high. Note the word only, there is no room for other nodes. podAntiAffinity rule. 
This rule ensures that two applications with the key kubevirt. io/domain equals to mssql2016 must not run in the same zone. Notice that the only application with this key value is the database itself and more important, notice that this rule applies to the topologyKey topology. kubernetes. io/zone. This means that only one database instance can run in each zone, e. g. one database in scu, bcn and sab respectively. In principle, the topologyKey can be any legal label-key. However, for performance and security reasons, there are some constraints on topologyKey that need to be taken into consideration: For affinity and for requiredDuringSchedulingIgnoredDuringExecution pod anti-affinity, empty topologyKey is not allowed. For preferredDuringSchedulingIgnoredDuringExecution pod anti-affinity, empty topologyKey is interpreted as “all topologies” (“all topologies” here is now limited to the combination of kubernetes. io/hostname, topology. kubernetes. io/zone and topology. kubernetes. io/region). For requiredDuringSchedulingIgnoredDuringExecution pod anti-affinity, the admission controller LimitPodHardAntiAffinityTopology was introduced to limit topologyKey to kubernetes. io/hostname. Verify if it is enabled or disabled. Here is the VirtualMachineInstanceReplicaSet object replaced. $ kubectl edit virtualmachineinstancereplicaset. kubevirt. io/mssql2016virtualmachineinstancereplicaset. kubevirt. io/mssql2016 editedNow, it contains both affinity rules: apiVersion: kubevirt. io/v1alpha3kind: VirtualMachineInstanceReplicaSetmetadata: name: mssql2016replicasetspec: replicas: 3 selector: matchLabels: kubevirt. io/domain: mssql2016 template: metadata: name: mssql2016 labels: kubevirt. io/domain: mssql2016 spec: affinity: nodeAffinity: #This rule ensures the application (MS SQL2016) must ONLY be placed on nodes where the key performance contains the value high requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: performance operator: In values: - high podAntiAffinity: #This rule ensures that two applications with the key kubevirt. io/domain equals to mssql2016 cannot run in the same zone requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: kubevirt. io/domain operator: In values: - mssql2016 topologyKey: topology. kubernetes. io/zone domain:Notice that the VM or POD placement is executed only during the scheduling process, therefore we need to delete one of the VirtualMachineInstances (VMI) running in the same node. Deleting the VMI will make Kubernetes spin up a new one to reconcile the desired number of replicas (3). Information Remember to mark the kni-worker2 as schedulable again. $ kubectl uncordon kni-worker2node/kni-worker2 uncordonedBelow shows the current status, where two databases are running in the kni-worker6 node. By applying the previous affinity rules this should not happen again: $ kubectl get vmi -o wideNAME AGE PHASE IP NODENAME LIVE-MIGRATABLEmssql201696sz9 12m Running 10. 244. 5. 5 kni-worker6 Falsemssql2016p948r 24m Running 10. 244. 1. 4 kni-worker4 Falsemssql2016z2qnw 24m Running 10. 244. 5. 4 kni-worker6 FalseNext, we delete one of the VMIs running in kni-worker6 and wait for the rules to be applied at scheduling time. As can be seen, databases are distributed across zones and high performing nodes: $ kubectl delete vmi mssql201696sz9virtualmachineinstance. kubevirt. io mssql201696sz9 deleted$ kubectl get vmi -o wideNAME AGE PHASE IP NODENAME LIVE-MIGRATABLEmssql2016p948r 40m Running 10. 244. 1. 
4 kni-worker4 Falsemssql2016tpj6n 22s Running 10. 244. 2. 5 kni-worker2 Falsemssql2016z2qnw 40m Running 10. 244. 5. 4 kni-worker6 FalseDuring the deployment of the clustered database nodeAffinity and nodeSelector rules were compared, however, there are a couple of things to take into consideration when creating node affinity rules, it is worth taking a look at node affinity in Kubernetes documentation. Information If you remove or change the label of the node where the Pod is scheduled, the Pod will not be removed. In other words, the affinity selection works only at the time of scheduling the Pod. The proxy http cache: Now, that the database is configured by database administrators and running across multiple zones, it’s time to spin up the varnish http-cache container image. This time we are going to run it as a Pod instead of as a KubeVirt VM. , however, scheduling rules are still valid for both objects. A detailed explanation on how to run a Varnish Cache in a Kubernetes cluster can be found in kube-httpcache repository. Below are detailed the steps taken: Start by creating a ConfigMap that contains a VCL template and a Secret object that contains the secret for the Varnish administration port. Then apply the Varnish deployment config. $ kubectl create -f configmap. yamlconfigmap/vcl-template created$ kubectl create secret generic varnish-secret --from-literal=secret=$(head -c32 /dev/urandom | base64)secret/varnish-secret createdIn our specific mandate, 3 replicas of our web cache application are needed. Each one must be running in a different zone or datacenter. Preferably, if possible, expected to run in a Kubernetes node different from the database since as administrators we would like the database to take advantage of all the possible resources of the high-performing server. Taken into account this prerequisite, the following advanced rules are applied: nodeAffinity rule. This rule ensures that during scheduling time the application should be placed if possible on nodes where the key performance does not contain the value high. Note the word if possible. This means, it will try to run on a not performing server, however, if there none available it will be co-located with the database. podAntiAffinity rule. This rule ensures that two applications with the key app equals to cache must not run in the same zone. Notice that the only application with this key value is the Varnish http-cache itself and more important, notice that this rule applies to the topologyKey topology. kubernetes. io/zone. This means that only one Varnish http-cache instance can run in each zone, e. g. one http-cache in scu, bcn and sab respectively. apiVersion: apps/v1kind: Deploymentmetadata: name: varnish-cachespec: selector: matchLabels: app: cache replicas: 3 template: metadata: labels: app: cache spec: affinity: nodeAffinity: #This rule ensures that during scheduling time the application must be placed *if possible* on nodes NOT performance=high preferredDuringSchedulingIgnoredDuringExecution: - weight: 10 preference: matchExpressions: - key: performance operator: NotIn values: - high podAntiAffinity: #This rule ensures that the application cannot run in the same zone (app=cache). requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - cache topologyKey: topology. kubernetes. io/zone containers: - name: cache image: quay. 
io/spaces/kube-httpcache:stable imagePullPolicy: AlwaysInformation In this set of affinity rules, a new scheduling policy has been introduced: preferredDuringSchedulingIgnoredDuringExecution. It can be thought as “soft” scheduling, in the sense that it specifies preferences that the scheduler will try to enforce but will not guarantee. The weight field in preferredDuringSchedulingIgnoredDuringExecution must be in the range 1-100 and it is taken into account in the scoring step. Remember that the node(s) with the highest total score is/are the most preferred. Here, the modified deployment is applied: [root@eko1 varnish]# kubectl create -f deployment. yamldeployment. apps/varnish-cache createdThe Pod is scheduled as expected since there is a node available in each zone without the performance=high label. $ kubectl get pods -o wideNAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATESvarnish-cache-54489f9fc9-5pbr2 1/1 Running 0 91s 10. 244. 4. 5 kni-worker5 <none> <none>varnish-cache-54489f9fc9-9s9tm 1/1 Running 0 91s 10. 244. 3. 5 kni-worker3 <none> <none>varnish-cache-54489f9fc9-dflzs 1/1 Running 0 91s 10. 244. 6. 5 kni-worker <none> <none>virt-launcher-mssql2016p948r-257pn 2/2 Running 0 70m 10. 244. 1. 4 kni-worker4 <none> <none>virt-launcher-mssql2016tpj6n-l2fnf 2/2 Running 0 31m 10. 244. 2. 5 kni-worker2 <none> <none>virt-launcher-mssql2016z2qnw-t924b 2/2 Running 0 70m 10. 244. 5. 4 kni-worker6 <none> <none>At this point, database and http-cache components of our customapp are up and running. Only the appliance created by an external provider needs to be deployed to complete the stack. The third-party appliance virtual machine: A third-party provider delivered a black box (appliance) in the form of a virtual machine where the application bought by the finance department is installed. Lucky to us, we have been able to transform it into a container VM ready to be run in our cluster with the help of KubeVirt. Following up with our objective, this web application must take advantage of the web cache application running as a Pod. So we require the appliance to be co-located in the same server that Varnish Cache in order to accelerate the delivery of the content provided by the appliance. Also, it is required to run every replica of the appliance in different zones or data centers. Taken into account these prerequisites, the following advanced rules are configured: podAffinity rule. This rule ensures that during scheduling time the application must be placed on nodes where an application (Pod) with key app' equals tocache` is running. That is to say where the Varnish Cache is running. Note that this is mandatory, it will only run co-located with the web cache Pod. podAntiAffinity rule. This rule ensures that two applications with the key kubevirt. io/domain equals to blackbox must not run in the same zone. Notice that the only application with this key value is the appliance and more important, notice that this rule applies to the topologyKey topology. kubernetes. io/zone. This means that only one appliance instance can run in each zone, e. g. one appliance in scu, bcn and sab respectively. apiVersion: kubevirt. io/v1alpha3kind: VirtualMachineInstanceReplicaSetmetadata: name: blackboxspec: replicas: 3 selector: matchLabels: kubevirt. io/domain: blackbox template: metadata: name: blackbox labels: kubevirt. 
io/domain: blackbox spec: affinity: podAffinity: #This rule ensures that during scheduling time the application must be placed on nodes where Varnish Cache is running requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - cache topologyKey: topology. kubernetes. io/hostname podAntiAffinity: #This rule ensures that two applications with the key `kubevirt. io/domain` equals to `blackbox` cannot run in the same zone requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: kubevirt. io/domain operator: In values: - blackbox topologyKey: topology. kubernetes. io/zone domain:Here, the modified deployment is applied. As expected the VMI is scheduled as expected in the same Kubernetes nodes as Varnish Cache and each one in a different datacenter or zone. $ kubectl get pods,vmi -o wideNAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATESpod/varnish-cache-54489f9fc9-5pbr2 1/1 Running 0 172m 10. 244. 4. 5 kni-worker5 <none> <none>pod/varnish-cache-54489f9fc9-9s9tm 1/1 Running 0 172m 10. 244. 3. 5 kni-worker3 <none> <none>pod/varnish-cache-54489f9fc9-dflzs 1/1 Running 0 172m 10. 244. 6. 5 kni-worker <none> <none>pod/virt-launcher-blackboxtk49x-nw45s 2/2 Running 0 2m31s 10. 244. 6. 6 kni-worker <none> <none>pod/virt-launcher-blackboxxt829-snjth 2/2 Running 0 2m31s 10. 244. 4. 9 kni-worker5 <none> <none>pod/virt-launcher-blackboxzf9kt-6mh56 2/2 Running 0 2m31s 10. 244. 3. 6 kni-worker3 <none> <none>pod/virt-launcher-mssql2016p948r-257pn 2/2 Running 0 4h1m 10. 244. 1. 4 kni-worker4 <none> <none>pod/virt-launcher-mssql2016tpj6n-l2fnf 2/2 Running 0 3h22m 10. 244. 2. 5 kni-worker2 <none> <none>pod/virt-launcher-mssql2016z2qnw-t924b 2/2 Running 0 4h1m 10. 244. 5. 4 kni-worker6 <none> <none>NAME AGE PHASE IP NODENAME LIVE-MIGRATABLEvirtualmachineinstance. kubevirt. io/blackboxtk49x 2m31s Running 10. 244. 6. 6 kni-worker Falsevirtualmachineinstance. kubevirt. io/blackboxxt829 2m31s Running 10. 244. 4. 9 kni-worker5 Falsevirtualmachineinstance. kubevirt. io/blackboxzf9kt 2m31s Running 10. 244. 3. 6 kni-worker3 Falsevirtualmachineinstance. kubevirt. io/mssql2016p948r 4h1m Running 10. 244. 1. 4 kni-worker4 Falsevirtualmachineinstance. kubevirt. io/mssql2016tpj6n 3h22m Running 10. 244. 2. 5 kni-worker2 Falsevirtualmachineinstance. kubevirt. io/mssql2016z2qnw 4h1m Running 10. 244. 5. 4 kni-worker6 FalseAt this point, our stack has been successfully deployed and configured accordingly to the requirements agreed. However, it is important before going into production to verify the proper behaviour in case of node failures. That’s what is going to be shown in the next section. Verify the resiliency of our customapp: In this section, several tests must be executed to validate that the scheduling already in place is line up with the expected behaviour of the customapp application. Draining a regular node: In this test, the node located in scu zone which is not labelled as high-performance will be upgraded. The proper procedure to maintain a Kubernetes node is as follows: drain the node, upgrade packages and then reboot it. As it is depicted, once the kni-worker is marked as unschedulable and drained, the Varnish Cache pod and the black box appliance VM are automatically moved to the high-performance node in the same zone. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATESpod/varnish-cache-54489f9fc9-5pbr2 1/1 Running 0 3h8m 10. 244. 4. 
5 kni-worker5 <none> <none>pod/varnish-cache-54489f9fc9-9s5sr 1/1 Running 0 2m32s 10. 244. 2. 7 kni-worker2 <none> <none>pod/varnish-cache-54489f9fc9-9s9tm 1/1 Running 0 3h8m 10. 244. 3. 5 kni-worker3 <none> <none>pod/virt-launcher-blackboxxh5tg-g7hns 2/2 Running 0 13m 10. 244. 2. 8 kni-worker2 <none> <none>pod/virt-launcher-blackboxxt829-snjth 2/2 Running 0 18m 10. 244. 4. 9 kni-worker5 <none> <none>pod/virt-launcher-blackboxzf9kt-6mh56 2/2 Running 0 18m 10. 244. 3. 6 kni-worker3 <none> <none>pod/virt-launcher-mssql2016p948r-257pn 2/2 Running 0 4h17m 10. 244. 1. 4 kni-worker4 <none> <none>pod/virt-launcher-mssql2016tpj6n-l2fnf 2/2 Running 0 3h37m 10. 244. 2. 5 kni-worker2 <none> <none>pod/virt-launcher-mssql2016z2qnw-t924b 2/2 Running 0 4h17m 10. 244. 5. 4 kni-worker6 <none> <none>NAME AGE PHASE IP NODENAME LIVE-MIGRATABLEvirtualmachineinstance. kubevirt. io/blackboxxh5tg 13m Running 10. 244. 2. 8 kni-worker2 Falsevirtualmachineinstance. kubevirt. io/blackboxxt829 18m Running 10. 244. 4. 9 kni-worker5 Falsevirtualmachineinstance. kubevirt. io/blackboxzf9kt 18m Running 10. 244. 3. 6 kni-worker3 Falsevirtualmachineinstance. kubevirt. io/mssql2016p948r 4h17m Running 10. 244. 1. 4 kni-worker4 Falsevirtualmachineinstance. kubevirt. io/mssql2016tpj6n 3h37m Running 10. 244. 2. 5 kni-worker2 Falsevirtualmachineinstance. kubevirt. io/mssql2016z2qnw 4h17m Running 10. 244. 5. 4 kni-worker6 FalseRemember that this is happening because: There is a mandatory policy that only one replica of each application can run at the same time in the same zone. There is a soft policy (preferred) that both applications should run on a non high-performance node. However, since there are any of these nodes available it has been scheduled in the high-performance server along with the database. Both applications must run in the same nodeInformation Note that uncordoning the node will not make the blackbox appliance and the Varnish Cache pod to come back to the previous node. $ kubectl uncordon kni-workernode/kni-worker uncordonedNAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATESpod/varnish-cache-54489f9fc9-5pbr2 1/1 Running 0 3h10m 10. 244. 4. 5 kni-worker5 <none> <none>pod/varnish-cache-54489f9fc9-9s5sr 1/1 Running 0 5m29s 10. 244. 2. 7 kni-worker2 <none> <none>pod/varnish-cache-54489f9fc9-9s9tm 1/1 Running 0 3h10m 10. 244. 3. 5 kni-worker3 <none> <none>pod/virt-launcher-blackboxxh5tg-g7hns 2/2 Running 0 16m 10. 244. 2. 8 kni-worker2 <none> <none>pod/virt-launcher-blackboxxt829-snjth 2/2 Running 0 21m 10. 244. 4. 9 kni-worker5 <none> <none>pod/virt-launcher-blackboxzf9kt-6mh56 2/2 Running 0 21m 10. 244. 3. 6 kni-worker3 <none> <none>pod/virt-launcher-mssql2016p948r-257pn 2/2 Running 0 4h20m 10. 244. 1. 4 kni-worker4 <none> <none>pod/virt-launcher-mssql2016tpj6n-l2fnf 2/2 Running 0 3h40m 10. 244. 2. 5 kni-worker2 <none> <none>pod/virt-launcher-mssql2016z2qnw-t924b 2/2 Running 0 4h20m 10. 244. 5. 4 kni-worker6 <none> <none>In order to return to the most desirable state, the pod and VM from kni-worker2 must be deleted. Information Both applications must be deleted since the requiredDuringSchedulingIgnoredDuringExecution policy is only applied during scheduling time. $ kubectl delete pod/varnish-cache-54489f9fc9-9s5srpod varnish-cache-54489f9fc9-9s5sr deleted$ kubectl delete virtualmachineinstance. kubevirt. io/blackboxxh5tgvirtualmachineinstance. kubevirt. 
io blackboxxh5tg deletedOnce done, the scheduling process is run again for both applications and the applications are placed in the most desirable node taking into account affinity rules configured. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATESpod/varnish-cache-54489f9fc9-5pbr2 1/1 Running 0 3h13m 10. 244. 4. 5 kni-worker5 <none> <none>pod/varnish-cache-54489f9fc9-9s9tm 1/1 Running 0 3h13m 10. 244. 3. 5 kni-worker3 <none> <none>pod/varnish-cache-54489f9fc9-fldhc 1/1 Running 0 2m7s 10. 244. 6. 7 kni-worker <none> <none>pod/virt-launcher-blackbox54l7t-4c6wh 2/2 Running 0 23s 10. 244. 6. 8 kni-worker <none> <none>pod/virt-launcher-blackboxxt829-snjth 2/2 Running 0 23m 10. 244. 4. 9 kni-worker5 <none> <none>pod/virt-launcher-blackboxzf9kt-6mh56 2/2 Running 0 23m 10. 244. 3. 6 kni-worker3 <none> <none>pod/virt-launcher-mssql2016p948r-257pn 2/2 Running 0 4h23m 10. 244. 1. 4 kni-worker4 <none> <none>pod/virt-launcher-mssql2016tpj6n-l2fnf 2/2 Running 0 3h43m 10. 244. 2. 5 kni-worker2 <none> <none>pod/virt-launcher-mssql2016z2qnw-t924b 2/2 Running 0 4h23m 10. 244. 5. 4 kni-worker6 <none> <none>NAME AGE PHASE IP NODENAME LIVE-MIGRATABLEvirtualmachineinstance. kubevirt. io/blackbox54l7t 23s Running 10. 244. 6. 8 kni-worker Falsevirtualmachineinstance. kubevirt. io/blackboxxt829 23m Running 10. 244. 4. 9 kni-worker5 Falsevirtualmachineinstance. kubevirt. io/blackboxzf9kt 23m Running 10. 244. 3. 6 kni-worker3 Falsevirtualmachineinstance. kubevirt. io/mssql2016p948r 4h23m Running 10. 244. 1. 4 kni-worker4 Falsevirtualmachineinstance. kubevirt. io/mssql2016tpj6n 3h43m Running 10. 244. 2. 5 kni-worker2 Falsevirtualmachineinstance. kubevirt. io/mssql2016z2qnw 4h23m Running 10. 244. 5. 4 kni-worker6 FalseThis behaviour can be extrapolated to a failure or shutdown of any odd or non high-performance worker node. In that case, all workloads will be moved to the high performing server in the same zone. Although this is not ideal, our customapp will be still available while the node recovery is ongoing. Draining a high-performance node: On the other hand, in case of a high-performance worker node failure, which was shown previously, the database will not be able to move to another server, since there is only one high performing server per zone. A possible solution is just adding a stand-by high-performance node in each zone. However, since the database is configured as a clustered database, the application running in the same zone as the failed database will still be able to establish a connection to any of the other two running databases located in different zones. This configuration is done at the application level. Actually, from the application standpoint, it just connects to a database pool of resources. Since this is not ideal either, e. g. establishing a connection to another zone or datacenter takes longer than in the same datacenter, the application will be still available and providing service to the clients. Affinity rules are everywhere: As written in the title section, affinity rules are essential to provide high availability and resiliency to Kubernetes applications. Furthermore, KubeVirt’s components also take advantage of these rules to avoid unwanted situations that could compromise the stability of the VMs running in the cluster. For instance, below it is partly shown a snippet of the deployment object for virt-api and virt-controller. Notice the following affinity rule created: podAntiAffinity rule. 
This rule ensures that two replicas of the same application should not run if possible in the same Kubernetes node (kubernetes. io/hostname). It is used the key kubevirt. io to match the application virt-api or virt-controller. See that it is a soft requirement, which means that the kube-scheduler will try to match the rule, however, if it is not possible it can place both replicas in the same node. apiVersion: apps/v1 kind: Deployment metadata: labels: kubevirt. io: virt-api name: virt-api namespace: kubevirt spec: replicas: 2 selector: matchLabels: kubevirt. io: virt-api strategy: {} template: metadata: annotations: scheduler. alpha. kubernetes. io/critical-pod: scheduler. alpha. kubernetes. io/tolerations: '[{ key : CriticalAddonsOnly , operator : Exists }]' labels: kubevirt. io: virt-api prometheus. kubevirt. io: name: virt-api spec: affinity: podAntiAffinity: #This rule ensures that two replicas of the same application should not run if possible in the same Kubernetes node preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchExpressions: - key: kubevirt. io operator: In values: - virt-api topologyKey: kubernetes. io/hostname weight: 1 containers: . . . apiVersion: apps/v1kind: Deploymentmetadata: labels: kubevirt. io: virt-controller name: virt-controller namespace: kubevirtspec: replicas: 2 selector: matchLabels: kubevirt. io: virt-controller strategy: {} template: metadata: annotations: scheduler. alpha. kubernetes. io/critical-pod: scheduler. alpha. kubernetes. io/tolerations: '[{ key : CriticalAddonsOnly , operator : Exists }]' labels: kubevirt. io: virt-controller prometheus. kubevirt. io: name: virt-controller spec: affinity: podAntiAffinity: #This rule ensures that two replicas of the same application should not run if possible in the same Kubernetes node preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchExpressions: - key: kubevirt. io operator: In values: - virt-controller topologyKey: kubernetes. io/hostname weight: 1Information It is worth mentioning that DaemonSets internally also uses advanced scheduling rules. Basically, they are nodeAffinity rules in order to place each replica in each Kubernetes node of the cluster. A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created. [root@eko1 varnish]# kubectl get daemonset -n kubevirtNAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGEkubevirt virt-handler 6 6 6 6 6 <none> 25hSee the partial snippet of a virt-handler Pod created by a DaemonSet (see ownerReferences section, kind: DaemonSet) that configures a nodeAffinity rule that requires the Pod to run in a specific hostname matched by the key metadata. name and value the name of the node (kni-worker2). Note that the value of the key changes depending on the nodes that are part of the cluster, this is done by the DaemonSet. apiVersion: v1kind: Podmetadata: annotations: kubevirt. io/install-strategy-identifier: 0000ee7f7cd4756bb221037885c3c86816db6de7 kubevirt. io/install-strategy-registry: index. docker. io/kubevirt kubevirt. io/install-strategy-version: v0. 26. 0 scheduler. alpha. kubernetes. io/critical-pod: scheduler. alpha. kubernetes. io/tolerations: '[{ key : CriticalAddonsOnly , operator : Exists }]' creationTimestamp: 2020-02-12T11:11:14Z generateName: virt-handler- labels: app. 
kubernetes. io/managed-by: kubevirt-operator controller-revision-hash: 84d96d4775 kubevirt. io: virt-handler pod-template-generation: 1 prometheus. kubevirt. io: name: virt-handler-ctzcg namespace: kubevirt ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: virt-handler uid: 6e7faece-a7aa-4ed0-959e-4332b2be4ec3 resourceVersion: 28301 selfLink: /api/v1/namespaces/kubevirt/pods/virt-handler-ctzcg uid: 95d68dad-ad06-489f-b3d3-92413bcae1daspec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata. name operator: In values: - kni-worker2Summary: In this blog post, a real use case has been detailed on how advanced scheduling can be configured in a hybrid scenario where VMs and Pods are part of the same application stack. The reader can realize that Kubernetes itself already provides a lot of functionality out of the box to Pods running on top of it. One of these inherited capabilities is the possibility to create even more complex affinity or/and anti-affinity rules than traditional virtualization products. References: Kubernetes labels and annotations official documentation] Kubevirt node drain blog post Kubernetes node affinity documentation. Kubernetes design proposal for Inter-pod topological affinity and anti-affinity KubeVirt add affinity to virt pods pull request discussion" }, { - "id": 73, + "id": 74, "url": "/2020/KubeVirt-installing_Microsoft_Windows_from_an_iso.html", "title": "KubeVirt: installing Microsoft Windows from an ISO", "author" : "Pedro Ibáñez Requena", "tags" : "kubevirt, kubernetes, virtual machine, Microsoft Windows kubernetes, Microsoft Windows container, Windows", "body": "Warning! While this post still contains valuable information, a lot of it is outdated. For more up-to-date information, including Windows 11 installation, please refer to this post Hello! nowadays each operating system vendor has its cloud image available to download ready to import and deploy a new Virtual Machine (VM) inside Kubernetes with KubeVirt,but what if you want to follow the traditional way of installing a VM using an existing iso attached as a CD-ROM? In this blogpost, we are going to explain how to prepare that VM with the ISO file and the needed drivers to proceed with the installation of Microsoft Windows. Pre-requisites: A Kubernetes cluster is already up and running KubeVirt and CDI are already installed There is enough free CPU, Memory and disk space in the cluster to deploy a Microsoft Windows VM, in this example, the version 2012 R2 VM is going to be usedPreparation: To proceed with the Installation steps the different elements involved are listed: NOTE No need for executing any command until the Installation section. An empty KubeVirt Virtual Machine apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: win2k12-isospec: running: false template: metadata: labels: kubevirt. io/domain: win2k12-iso spec: domain: cpu: cores: 4 devices: . . . machine: type: q35 resources: requests: memory: 8G volumes: . . . A PVC with the Microsoft Windows ISO file attached as CD-ROM to the VM, would be automatically created with the virtctl command when uploading the file First thing here is to download the ISO file of the Microsoft Windows, for that the Microsoft Evaluation Center offersthe ISO files to download for evaluation purposes: To be able to start the evaluation some personal data has to be filled in. 
Afterwards, the architecture to be checked is “64 bit” and the language selected as shown inthe following picture: Once the ISO file is downloaded it has to be uploaded with virtctl, the parameters used in this example are the following: image-upload: Upload a VM image to a PersistentVolumeClaim --image-path: The path of the ISO file --pvc-name: The name of the PVC to store the ISO file, in this example is iso-win2k12 --access-mode: the access mode for the PVC, in the example ReadOnlyMany has been used. --pvc-size: The size of the PVC, is where the ISO will be stored, in this case, the ISO is 4. 3G so a PVC OS 5G should be enough --uploadproxy-url: The URL of the cdi-upload proxy service, in the following example, the CLUSTER-IP is 10. 96. 164. 35 and the PORT is 443 Information To upload data to the cluster, the cdi-uploadproxy service must be accessible from outside the cluster. In a production environment, this probably involves setting up an Ingress or a LoadBalancer Service. $ kubectl get services -n cdi NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE cdi-api ClusterIP 10. 96. 117. 29 <none> 443/TCP 6d18h cdi-uploadproxy ClusterIP 10. 96. 164. 35 <none> 443/TCP 6d18hIn this example the ISO file was copied to the Kubernetes node, to allow the virtctl to find it and to simplify the operation. --insecure: Allow insecure server connections when using HTTPS --wait-secs: The time in seconds to wait for upload pod to start. (default 60)The final command with the parameters and the values would look like: $ virtctl image-upload \ --image-path=/root/9600. 17050. WINBLUE_REFRESH. 140317-1640_X64FRE_SERVER_EVAL_EN-US-IR3_SSS_X64FREE_EN-US_DV9. ISO \ --pvc-name=iso-win2k12 \ --access-mode=ReadOnlyMany \ --pvc-size=5G \ --uploadproxy-url=https://10. 96. 164. 35:443 \ --insecure \ --wait-secs=240 A PVC for the hard drive where the Operating System is going to be installed, in this example it is called winhd and the space requested is 15Gi: apiVersion: v1kind: PersistentVolumeClaimmetadata: name: winhdspec: accessModes: - ReadWriteOnceresources: requests: storage: 15GistorageClassName: hostpath A container with the virtio drivers attached as a CD-ROM to the VM. The container image has to be pulled to have it available in the local registry. docker pull kubevirt/virtio-container-disk And also it has to be referenced in the VM YAML, in this example the name for the containerDisk is virtiocontainerdisk. - disk: bus: sata name: virtiocontainerdisk---- containerDisk: image: kubevirt/virtio-container-disk name: virtiocontainerdisk If the pre-requisites are fulfilled, the final YAML (win2k12. yml), will look like: apiVersion: v1kind: PersistentVolumeClaimmetadata: name: winhdspec: accessModes: - ReadWriteOnce resources: requests: storage: 15Gi storageClassName: hostpathapiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: win2k12-isospec: running: false template: metadata: labels: kubevirt. 
io/domain: win2k12-iso spec: domain: cpu: cores: 4 devices: disks: - bootOrder: 1 cdrom: bus: sata name: cdromiso - disk: bus: virtio name: harddrive - cdrom: bus: sata name: virtiocontainerdisk machine: type: q35 resources: requests: memory: 8G volumes: - name: cdromiso persistentVolumeClaim: claimName: iso-win2k12 - name: harddrive persistentVolumeClaim: claimName: winhd - containerDisk: image: kubevirt/virtio-container-disk name: virtiocontainerdisk Information Special attention to the bootOrder: 1 parameter in the first disk as it is the volume containing the ISO and it has to be marked as the first device to boot from. Installation: To proceed with the installation the commands commented above are going to be executed: Uploading the ISO file to the PVC: $ virtctl image-upload \--image-path=/root/9600. 17050. WINBLUE_REFRESH. 140317-1640_X64FRE_SERVER_EVAL_EN-US-IR3_SSS_X64FREE_EN-US_DV9. ISO \--pvc-name=iso-win2k12 \--access-mode=ReadOnlyMany \--pvc-size=5G \--uploadproxy-url=https://10. 96. 164. 35:443 \--insecure \--wait-secs=240DataVolume default/iso-win2k12 createdWaiting for PVC iso-win2k12 upload pod to be ready. . . Pod now readyUploading data to https://10. 96. 164. 35:4434. 23 GiB / 4. 23 GiB [=======================================================================================================================================================================] 100. 00% 1m21sUploading /root/9600. 17050. WINBLUE_REFRESH. 140317-1640_X64FRE_SERVER_EVAL_EN-US-IR3_SSS_X64FREE_EN-US_DV9. ISO completed successfully Pulling the virtio container image to the locally: $ docker pull kubevirt/virtio-container-diskUsing default tag: latestTrying to pull repository docker. io/kubevirt/virtio-container-disk . . . latest: Pulling from docker. io/kubevirt/virtio-container-diskDigest: sha256:7e5449cb6a4a9586a3cd79433eeaafd980cb516119c03e499492e1e37965fe82Status: Image is up to date for docker. io/kubevirt/virtio-container-disk:latest Creating the PVC and Virtual Machine definitions: $ kubectl create -f win2k12. ymlvirtualmachine. kubevirt. io/win2k12-iso configuredpersistentvolumeclaim/winhd created Starting the Virtual Machine Instance: $ virtctl start win2k12-isoVM win2k12-iso was scheduled to start$ kubectl get vmiNAME AGE PHASE IP NODENAMEwin2k12-iso 82s Running 10. 244. 0. 53 master-00. kubevirt-io Once the status of the VMI is RUNNING it’s time to connect using VNC: virtctl vnc win2k12-iso Here is important to comment that to be able to connect through VNC using virtctl it’s necessary to reach the Kubernetes API. The following video shows how to go through the Microsoft Windows installation process: Once the Virtual Machine is created, the PVC with the ISO and the virtio drivers can be unattached from the Virtual Machine. References: KubeVirt user-guide: Virtio Windows Driver disk usage Creating a registry image with a VM disk CDI Upload User Guide KubeVirt user-guide: How to obtain virtio drivers?" }, { - "id": 74, + "id": 75, "url": "/2020/changelog-v0.26.0.html", "title": "KubeVirt v0.26.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 26. 
0: Released on: Fri Feb 7 09:40:07 2020 +0100 Fix incorrect ownerReferences to avoid VMs getting GCed Fixes for several tests Fix greedy permissions around Secrets by delegating them to kubelet Fix OOM infra pod by increasing it’s memory request Clarify device support around live migrations Support for an uninstall strategy to protect workloads during uninstallation Support for more prometheus metrics and alert rules Support for testing SRIOV connectivity in functional tests Update Kubernetes client-go to 1. 16. 4 FOSSA fixes and status" }, { - "id": 75, + "id": 76, "url": "/2020/KubeVirt_deep_dive-virtualized_gpu_workloads.html", "title": "NA KubeCon 2019 - KubeVirt Deep Dive: Virtualized GPU Workloads on KubeVirt - David Vossel, Red Hat & Vishesh Tanksale, NVIDIA", "author" : "Pedro Ibáñez Requena", "tags" : "KubeCon, GPU, NVIDIA, GPU workloads, pass-through, passthrough, KVM, QEMU", "body": "In this video, David and Vishesh explore the architecture behind KubeVirt and how NVIDIA is leveraging that architecture to power GPU workloads on Kubernetes. Using NVIDIA’s GPU workloads as a case of study, they provide a focused view on how host device passthrough is accomplished with KubeVirt as well as providing someperformance metrics comparing KubeVirt to standalone KVM. KubeVirt Intro: David introduces the talk showing what KubeVirt is and what is not: KubeVirt is not involved with managing AWS or GCP instances KubeVirt is not a competitor to Firecracker or Kata containers KubeVirt is not a container runtime replacementHe likes to define KubeVirt as: KubeVirt is a Kubernetes extension that allows running traditional VM workloads natively side by side with Container workloads. But why KubeVirt? Already have on-premise solutions like OpenStack, oVirt And then there’s the public cloud, AWS, GCP, Azure Why are we doing this VM management stuff yet again?The answer is that the initial motivation for it was this idea of infrastructure convergence: The transition to the cloud model involves multiple stacks, containers and VMs, old code and new code. With KubeVirt all this is simplified with just one stack to manage containers and VMs to run old code and new code. The workflow convergence means that: Converging VM management into container management workflows Using the same tooling (kubectl) for containers and Virtual Machines Keeping the declarative API for VM management (just like pods, deployments, etc…)An example of a VM Instance in YAML could be so simple as the following example: $ cat <<EOF | kubectl create -f -apiVersion: kubevirt. io/v1alpha1kind: VirtualMachineInstance. . . spec: domain: cpu: cores: 2 devices: disk: fedora29Architecture: The truth here is that a KubeVirt VM is a KVM+qemu process running inside a pod. It’s as simple as that. The VM Launch flow is shown in the following diagram. Since the user posts a VM manifest to the cluster until the Kubelet spins up the VM pod. And finally the virt-handler instructs the virt-launcher how to launch the qemu. The storage in KubeVirt is used in the same way as the pods, if there is a need to have persistent storage in a VM a PVC (Persistent Volume Claim)needs to be created. For example, if you have a VM in your laptop, you can upload that image using the containerized-data-importer (CDI) to a PVC and then you can attachthat PVC to the VM pod to get it running. About the use of network services, the traffic routes to the KubeVirt VM in the same way it does to container workloads. 
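As a brief illustration of that point, a VirtualMachineInstance can be exposed with an ordinary Kubernetes Service, exactly as a pod would be. The snippet below is only a sketch with assumed names: a VMI carrying the label kubevirt.io/domain: fedora29 and SSH on port 22; the Service selector matches the VMI labels, which are carried by its virt-launcher pod.

apiVersion: v1
kind: Service
metadata:
  name: fedora29-ssh               # illustrative name
spec:
  selector:
    kubevirt.io/domain: fedora29   # assumed label set on the VMI
  ports:
  - protocol: TCP
    port: 22                       # SSH reachable through the usual Service mechanics

The same result can be obtained imperatively with virtctl, for example: virtctl expose vmi fedora29 --name fedora29-ssh --port 22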
Also with Multus there isthe possibility to have different network interfaces per VM. For using the Host Resources: VM Guest CPU and NUMA Affinity CPU Manager (pinning) Topology Manager (NUMA nodes) VM Guest CPU/MEM requirements POD resource request/limits VM Guest use of Host Devices Device Plugins for access to (/dev/kvm, SR-IOV, GPU passthrough) POD resource request/limits for device allocation GPU/vGPU in Kubevirt VMs: After the introduction of David, Vishesh takes over and talks in-depth the whys and hows of GPUs in VM. Lots of new Machine and Deep learning applicationsare taking advance of the GPU workloads. Nowadays the Big data is one of the main consumers of GPUs but there are some gaps, the gaming and professional graphics sectorstill need to run VMs and have native GPU functionalities, that is why NVIDIA decided to work with KubeVirt. To enable the device pass-through NVIDIA has developed the KubeVirt GPU device Plugin, it is available in GitHubIt’s open-source and anybody can take a look to it and download it. Using the device plugin framework is a natural choice to provide GPU access to Kubevirt VMs,the following diagram shows the different layers involved in the GPU pass-through architecture: Vishesh also comments an example of a YAML code where it can be seen the Node Status containing the NVIDIA card information (5 GPUs in that node), the Virtual Machine specificationcontaining the deviceName that points to that NVIDIA card and also the Pod Status where the user can set the limits and request for that resource asany other else in Kubernetes. The main Device Plugin Functions are: GPU and vGPU device Discovery GPUs with VFIO-PCI driver on the host are identified vGPUs configured using NVIDIA vGPU manager are identified GPU and vGPU device Advertising Discovered devices are advertised to kubelet as allocatable resources GPU and vGPU device Allocation Returns the PCI address of allocated GPU device GPU and vGPU Health Check Performs health check on the discovered GPU and vGPU devices To understand how the GPU passthrough lifecycle works Vishesh shows the different phases involve in the process using the following diagram: In the following diagram there are some of the Key features that NVIDIA is using with KubeVirt: If you are interested in the details of how the lifecycle works or in why NVIDIA is highly using some of the KubeVirt features listed above, you may be interested intaking a look to the following video. Video: Speakers: David Vossel David Vossel is a Principal Software Engineer at Red Hat. He is currently working on OpenShift’s Container Native Virtualization (CNV)and is a core contributor to the open source KubeVirt project. Vishesh Tanksale is currently a Senior Software Engineer at NVIDIA. He is focussing on different aspects of enabling VM workload management on Kubernetes Cluster. He is specifically interested in GPU workloads on VMs. He is an active contributor to Kubevirt, a CNCF Sandbox Project. 
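To make the deviceName mechanism discussed above more tangible, here is a hedged sketch of a VirtualMachineInstance requesting a GPU advertised by the NVIDIA KubeVirt GPU device plugin. The VMI name, the container disk and the exact resource name are illustrative; the plugin advertises one resource per discovered GPU model, so the nvidia.com/... value has to match what the node actually reports.

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: gpu-vmi                                    # illustrative name
spec:
  domain:
    devices:
      gpus:
      - name: gpu1
        deviceName: nvidia.com/GV100GL_Tesla_V100  # example resource advertised by the device plugin
      disks:
      - name: containerdisk
        disk:
          bus: virtio
    resources:
      requests:
        memory: 8Gi
  volumes:
  - name: containerdisk
    containerDisk:
      image: kubevirt/fedora-cloud-container-disk-demo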
References: YouTube video: KubeVirt Deep Dive: Virtualized GPU Workloads on KubeVirt - David Vossel, Red Hat & Vishesh Tanksale, NVIDIA Presentation: Virtualized GPU workloads on KubeVirt KubeCon NA 2019 event" }, { - "id": 76, + "id": 77, "url": "/2020/KubeVirt_Intro-Virtual_Machine_Management_on_Kubernetes.html", "title": "NA KubeCon 2019 - KubeVirt introduction by Steve Gordon and Chandrakanth Jakkidi", "author" : "Pedro Ibáñez Requena", "tags" : "video, virtual machine management, KubeCon, NUMA, CPU pinning, QEMU, KVM", "body": "In this session, Steve and Chandra provide an introduction to the KubeVirt project, which turns Kubernetes into anorchestration engine for not just application containers but virtual machine workloads as well. This provides aunified development platform where developers can build, modify, and deploy applications made up of both applicationcontainers as well as the virtual machines (VM) in a common, shared environment. They show how the KubeVirt community is continuously growing and helping with their contributions to the code inKubeVirt GitHub repository. In the session, you will learn more about why KubeVirt exists: Growing velocity behind Kubernetes and surrounding ecosystem for new applications. Reality that users will be dealing with virtual machine workloads for many years to come. Focus on building transition paths for users with workloads that will either never be containerized: Technical reasons (e. g. older operating system or kernel) Business reasons (e. g. time to market, cost of conversion) …or will be decomposed over a longer time horizon. They also explain the common use cases, how people are using it today: To run VMs to support new development Build new applications relying on existing VM-based applications and APIs. Leverage Kubernetes-based developer flows while bringing in these VM-based dependencies. To run VMs to support applications that can’t lift and shift Users with very old applications who are not in a position to change them significantly. Vendors with appliances (customer kernels, custom kmods, optimized workflows to build appliances, …) they want to bring to the cloud-native ecosystem. To run Kubernetes in Kubernetes (!) KubeVirt as a Cluster API provider Hard Multi-Tenancy Community provided cloud-provider-kubevirt To run Virtual Network Functions (VNFs) and other virtual appliances VNFs in the context of Kubernetes are of continued interest, in parallel to Cloud-native Network Function exploration. Kubernetes is an attractive target for VNFs. Compute features and management approach is appealing. But: VNFs are hard to containerize! They also cover how the project actually works from an architectural perspective and the ideal environment. And how is the ideal environment with KubeVirt: A walk through the KubeVirt components is also shown: virt-api-server: The entry point to KubeVirt for all virtualization related flows and takes care to update the virtualization related custom resource definition (CRD) virt-launcher: A VM is inside a POD launched by virt-launcher using Libvirt virt-controller: Each Object has a corresponding controller virt-handler: is a Daemonset that acts as a minion communication to Libvirt via socket libvirtd: toolkit to manage virtualization platformsIn the Video, a short demo of the project in action is shown. Eventually, Chandra shows how to install KubeVirt and bring up a virtual machine in a short time! 
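For readers who want to reproduce that quick demo, the sketch below installs KubeVirt through its operator and boots the example VM from the KubeVirt labs. The release version and the example manifest URL are illustrative and should be adjusted to whatever is current.

# Deploy the KubeVirt operator and its custom resource (version shown is only an example)
$ export VERSION=v0.26.0
$ kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml
$ kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml

# Create the example VirtualMachine, start it and open its serial console
$ kubectl apply -f https://kubevirt.io/labs/manifests/vm.yaml
$ virtctl start testvm
$ virtctl console testvm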
Finally, you will hear about future plans for developing KubeVirt capabilities that are emerging from the community. Some hints: Better support for deterministic workloads: CPU Pinning○NUMA Topology Alignment IO Thread pinning Storage-assisted snapshot and cloning. Forensic virtual machine capture GPU passthrough Policy-based live migration and additional migration modes. Hotplugging of CPUs, RAM, disks, and NICs (not necessarily in that order!). Video: Speakers: Steve Gordon is currently a Principal Product Manager at Red Hat based in Toronto, Canada. Focused on building infrastructure solutions for compute use cases using a spectrum of virtualization, containerization,and bare-metal provisioning technologies. He got his start in Open Source while building out and managing web-based solutions for the Earth Systems Science ComputationalCentre (ESSCC) at the University of Queensland. After graduating with degrees in Information Technology and Commerce, Stephen tooka multi-year detour into the wonderful world of the z-Series mainframe while writing new COBOL applications for the Australian Tax Office (ATO). Stephen then landed at Red Hat where he has grown his knowledge of the infrastructure space working across multiple roles and solutionsat the intersection of the Linux virtualization stack (KVM, QEMU, Libvirt), OpenStack, and more recently Kubernetes. Now he is working with ateam attempting to realize a vision for unification of application containers and virtual machines enabled by the KubeVirt project. Stephen has previously presented on a variety of infrastructure topics at OpenStack Summit, multiple Red Hat Summit, KVM Forum, OpenStack Days Canada,OpenStack Silicon Valley, and local meetups. Chandrakanth Reddy Jakkidi is an active Open Source Contributor. He is involved in CNCF and open infrastructure community projects. He has contributed to OpenStack and Kubernetes projects. Presently an active contributor to the Kubevirt Project. Chandrakanth is having 14+ years experience in networking, virtualization, cloud, Kubernetes, SDN, NFV, OpenStack and infrastructure technologies. He is currently working with F5 Networks as Senior Software Engineer. He previously worked with Cisco Systems, Starent Networks, Emerson/Artesyn EmbeddedTechnologies and NXP/Freescale Semiconductors/Intoto Network Security companies. He is a speaker and drives local open source meetups. His present passionis towards CNCF projects. In 2018, he was a speaker of the DevOpsDays Event. References: YouTube Video: KubeVirt Intro: Virtual Machine Management on Kubernetes - Steve Gordon & Chandrakanth Jakkidi Presentation: KubeVirt Intro: Virtual Machine Management on Kubernetes - Steve Gordon & Chandrakanth Jakkidi KubeCon NA 2019 event" }, { - "id": 77, + "id": 78, "url": "/2020/OKD-web-console-install.html", "title": "Managing KubeVirt with OpenShift Web Console", "author" : "Alberto Losada Grande", "tags" : "OpenShift web console, web interface, OKD", "body": "In the previous post, KubeVirt user interface options were described and showed some features, pros and cons of using OKD console to manage our KubeVirt deployment. This blog post will focus on installing and running the OKD web console in a Kubernetes cluster so that it can leverage the deep integrations between KubeVirt and the OKD web console. There are two options to run the OKD web console to manage a Kubernetes cluster: Executing the web console as a binary. This installation method is the only one described in the OKD web console repository. 
Personally, it looks more targeted at developers who want to quickly iterate over the development process while hacking on the web console. This development approach is described in the native Kubernetes section of the OpenShift console code repository. Executing the web console as another pod. The idea is leveraging the containerized version available as origin-console in the OpenShift container image repository and executing it inside a Kubernetes cluster as a regular application, e.g. as a pod. What is the OKD web console: The OKD web console is a user interface accessible from a web browser. Developers can use the web console to visualize, browse, and manage the contents of namespaces. It is also referred to as a more friendly kubectl in the form of a single-page web application. It integrates with other services like monitoring, chargeback and the Operator Lifecycle Manager or OLM. Some things that go on behind the scenes include: Proxying the Kubernetes API under /api/kubernetes Providing additional non-Kubernetes APIs for interacting with the cluster Serving all frontend static assets User Authentication. As stated in the official GitHub repository, the OKD web console runs as a binary listening on a local port. The static assets required to run the web console are served by the binary itself. It is possible to customize the web console using extensions. The extensions have to be committed directly to the sources of the console. When the web console is accessed from a browser, it first loads all required static assets. It then makes requests to the Kubernetes API using the values defined as environment variables on the host where the console is running. There is a script called environment.sh that helps export the proper values for these environment variables. The web console uses WebSockets to maintain a persistent connection with the API server and receive updated information as soon as it is available. Note as well that JavaScript must be enabled to use the web console. For the best experience, use a web browser that supports WebSockets. The OKD web console developers state that Google Chrome/Chromium version 76 or greater is used in their integration tests. Unlike what is explained in the official repository, OKD actually executes the OKD web console in a pod. Therefore, even though it is not officially documented, how to run the OKD web console as a pod in a native Kubernetes cluster will be described later. Binary installation: This method can be considered a development installation, since it is mainly used by the OKD web console developers to test new features. In this section the OKD web console will be compiled from source code and executed as a binary artifact on a CentOS 8 server which does not belong to any Kubernetes cluster. The following diagram shows the relationship between all the components: user, OKD web console and Kubernetes cluster. Note that it is possible to run the OKD web console on a Kubernetes master, on a regular node or, as shown, on a server outside the cluster. In the latter case, the external server must have access to the master API. Notice also that it can be configured with different security and network settings or even different hardware resources. The first step when using the binary installation is cloning the repository. Dependencies: The dependencies needed to compile the OKD web console artifact are detailed below: Operating System.
CentOS 8 was chosen as our operating system for the server running the OKD web console. Kubernetes cluster is running latest CentOS 7. $ cat /etc/redhat-releaseCentOS Linux release 8. 0. 1905 (Core) Kubernetes. It has been deployed the latest available version at the moment of writing: v1. 17. Kubernetes cluster is comprised by one master node and one regular node with enough CPU (4vCPUs) and memory (16Gi) to run KubeVirt and a couple of KubeVirt’s VirtualMachineInstances. No extra storage was needed since the virtual machines will run as container-disk instances. $ kubectl get nodes -o wideNAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIMEblog-master-00. kubevirt. local Ready master 29h v1. 17. 0 192. 168. 123. 250 <none> CentOS Linux 7 (Core) 3. 10. 0-957. 27. 2. el7. x86_64 docker://1. 13. 1blog-worker-00. kubevirt. local Ready <none> 29h v1. 17. 0 192. 168. 123. 232 <none> CentOS Linux 7 (Core) 3. 10. 0-957. 27. 2. el7. x86_64 docker://1. 13. 1 Node. js >= 8. Node. js 10 is available as an AppStream module. $ sudo yum module install nodejsInstalled: nodejs-1:10. 16. 3-2. module_el8. 0. 0+186+542b25fc. x86_64 npm-1:6. 9. 0-1. 10. 16. 3. 2. module_el8. 0. 0+186+542b25fc. x86_64Complete! yarn >= 1. 3. 2. Yarn is a dependency of Node. js. In this case, the official yarn repository has to be added as a local repositories. $ sudo curl --silent --location https://dl. yarnpkg. com/rpm/yarn. repo | sudo tee /etc/yum. repos. d/yarn. repo$ sudo rpm --import https://dl. yarnpkg. com/rpm/pubkey. gpg$ sudo yum install yarn$ yarn --version1. 21. 1 go >= 1. 11+. Golang is available as an AppStream module in CentOS 8:$ sudo yum module install go-toolsetInstalled: golang-1. 11. 6-1. module_el8. 0. 0+192+8b12aa21. x86_64 jq (for contrib/environment. sh). Finally, jq is installed in order to work with JSON data. $ yum install jqInstalled: jq. x86_64 0:1. 5-1. el7Compiling OKD web console: Once all dependencies are met, access the cloned directory and export the correct variables that will be used during the building process. Then, execute build. sh script which actually calls the build-frontend and build-backend scripts. $ git clone https://github. com/openshift/console. git$ cd console/$ export KUBECONFIG=~/. kube/config$ source . /contrib/environment. shUsing https://192. 168. 123. 250:6443$ . /build. sh. . . Done in 215. 91s. The result of the process is a binary file called bridge inside the bin folder. Prior to run the “bridge”, it has to be verified that the port where the OKD web console is expecting connections is not blocked. iptables -A INPUT -p tcp --dport 9000 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPTThen, the artifact can be executed: $ . /bin/bridge2020/01/7 10:21:17 cmd/main: cookies are not secure because base-address is not https!2020/01/7 10:21:17 cmd/main: running with AUTHENTICATION DISABLED!2020/01/7 10:21:17 cmd/main: Binding to 0. 0. 0. 0:9000. . . 2020/01/7 10:21:17 cmd/main: not using TLSAt this point, the connection to the OKD web console from your network should be established successfully. Note that by default there is no authentication required to login into the console and the connection is using HTTP protocol. There are variables in the environment. sh file that can change this default behaviour. Probably, the following issue will be faced when connecting to the web user interface: services is forbidden: User system:serviceaccount:kube-system:default cannot list resource services in API group in the namespace default . 
The problem is that the default service account does not have enough privileges to show all the cluster objects. There are two options to fix the issue: one is granting cluster-admin permissions to the default service account. That, it is not recommended since default service account is, at its very name indicates, the default service account for all pods if not explicitly configured. This means granting cluster-admin permissions to some applications running in kube-system namespace and even future ones when no service account is configured. The other option is create a new service account called console, grant cluster-admin permissions to it and configure the web console to run with this new service account: kubectl create serviceaccount console -n kube-systemkubectl create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-systemOnce created, modify the environment. sh file and change the line that starts with secretname as shown below: vim contrib/environment. shsecretname=$(kubectl get serviceaccount **console** --namespace=kube-system -o jsonpath='{. secrets[0]. name}')Now, variables configured in the environment. sh file have to be exported again and the connection to the console must be reloaded. source . /contrib/environment. shDeploy KubeVirt using the Hyperconverged Cluster Operator (HCO): In order to ease the installation of KubeVirt, the unified operator called HCO will be deployed. The goal of the hyperconverged-cluster-operator (HCO) is to provide a single entrypoint for multiple operators - kubevirt, cdi, networking, etc… - where users can deploy and configure them in a single object. This operator is sometimes referred to as a “meta operator” or an “operator for operators”. Most importantly, this operator doesn’t replace or interfere with OLM. It only creates operator CRs, which is the user’s prerogative. More information about HCO can be found in the post published in this blog by Ryan Hallisey: Hyper Converged Operator on OCP 4 and K8s(HCO). The HCO repository provides plenty of information on how to install the operator. In this lab, it has been installed as Using the HCO without OLM or Marketplace, which basically executes this deployment script: $ curl https://raw. githubusercontent. com/kubevirt/hyperconverged-cluster-operator/main/deploy/deploy. sh | bash+ kubectl create ns kubevirt-hyperconvergednamespace/kubevirt-hyperconverged created+ namespaces=( openshift )+ for namespace in ${namespaces[@]}++ kubectl get ns openshiftError from server (NotFound): namespaces openshift not found+ [[ '' == '' ]]+ kubectl create ns openshiftnamespace/openshift created++ kubectl config current-context+ kubectl config set-context kubernetes-admin@kubernetes --namespace=kubevirt-hyperconvergedContext kubernetes-admin@kubernetes modified. + kubectl create -f https://raw. githubusercontent. com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/cluster-network-addons00. crd. yamlcustomresourcedefinition. apiextensions. k8s. io/networkaddonsconfigs. networkaddonsoperator. network. kubevirt. io created+ kubectl create -f https://raw. githubusercontent. com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/containerized-data-importer00. crd. yamlcustomresourcedefinition. apiextensions. k8s. io/cdis. cdi. kubevirt. io created+ kubectl create -f https://raw. githubusercontent. com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/hco. crd. yamlcustomresourcedefinition. apiextensions. k8s. io/hyperconvergeds. hco. 
kubevirt. io created+ kubectl create -f https://raw. githubusercontent. com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/kubevirt00. crd. yamlcustomresourcedefinition. apiextensions. k8s. io/kubevirts. kubevirt. io created+ kubectl create -f https://raw. githubusercontent. com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/node-maintenance00. crd. yamlcustomresourcedefinition. apiextensions. k8s. io/nodemaintenances. kubevirt. io created+ kubectl create -f https://raw. githubusercontent. com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/scheduling-scale-performance00. crd. yamlcustomresourcedefinition. apiextensions. k8s. io/kubevirtcommontemplatesbundles. kubevirt. io created+ kubectl create -f https://raw. githubusercontent. com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/scheduling-scale-performance01. crd. yamlcustomresourcedefinition. apiextensions. k8s. io/kubevirtmetricsaggregations. kubevirt. io created+ kubectl create -f https://raw. githubusercontent. com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/scheduling-scale-performance02. crd. yamlcustomresourcedefinition. apiextensions. k8s. io/kubevirtnodelabellerbundles. kubevirt. io created+ kubectl create -f https://raw. githubusercontent. com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/scheduling-scale-performance03. crd. yamlcustomresourcedefinition. apiextensions. k8s. io/kubevirttemplatevalidators. kubevirt. io created+ kubectl create -f https://raw. githubusercontent. com/kubevirt/hyperconverged-cluster-operator/main/deploy/crds/v2vvmware. crd. yamlcustomresourcedefinition. apiextensions. k8s. io/v2vvmwares. kubevirt. io created+ kubectl create -f https://raw. githubusercontent. com/kubevirt/hyperconverged-cluster-operator/main/deploy/cluster_role. yamlrole. rbac. authorization. k8s. io/cluster-network-addons-operator createdclusterrole. rbac. authorization. k8s. io/hyperconverged-cluster-operator createdclusterrole. rbac. authorization. k8s. io/cluster-network-addons-operator createdclusterrole. rbac. authorization. k8s. io/kubevirt-operator createdclusterrole. rbac. authorization. k8s. io/kubevirt-ssp-operator createdclusterrole. rbac. authorization. k8s. io/cdi-operator createdclusterrole. rbac. authorization. k8s. io/node-maintenance-operator created+ kubectl create -f https://raw. githubusercontent. com/kubevirt/hyperconverged-cluster-operator/main/deploy/service_account. yamlserviceaccount/cdi-operator createdserviceaccount/cluster-network-addons-operator createdserviceaccount/hyperconverged-cluster-operator createdserviceaccount/kubevirt-operator createdserviceaccount/kubevirt-ssp-operator createdserviceaccount/node-maintenance-operator created+ kubectl create -f https://raw. githubusercontent. com/kubevirt/hyperconverged-cluster-operator/main/deploy/cluster_role_binding. yamlrolebinding. rbac. authorization. k8s. io/cluster-network-addons-operator createdclusterrolebinding. rbac. authorization. k8s. io/hyperconverged-cluster-operator createdclusterrolebinding. rbac. authorization. k8s. io/cluster-network-addons-operator createdclusterrolebinding. rbac. authorization. k8s. io/kubevirt-operator createdclusterrolebinding. rbac. authorization. k8s. io/kubevirt-ssp-operator createdclusterrolebinding. rbac. authorization. k8s. io/cdi-operator createdclusterrolebinding. rbac. authorization. k8s. io/node-maintenance-operator created+ kubectl create -f https://raw. githubusercontent. com/kubevirt/hyperconverged-cluster-operator/main/deploy/operator. 
yamldeployment. apps/hyperconverged-cluster-operator createddeployment. apps/cluster-network-addons-operator createddeployment. apps/virt-operator createddeployment. apps/kubevirt-ssp-operator createddeployment. apps/cdi-operator createddeployment. apps/node-maintenance-operator created+ kubectl create -f https://raw. githubusercontent. com/kubevirt/hyperconverged-cluster-operator/main/deploy/hco. cr. yamlhyperconverged. hco. kubevirt. io/hyperconverged-cluster createdThe result is a new namespace called kubevirt-hyperconverged with all the operators, Custom Resources (CRs) and objects needed by KubeVirt: $ kubectl get pods -n kubevirt-hyperconverged -o wideNAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATESbridge-marker-bwq6f 1/1 Running 0 12m 192. 168. 123. 250 blog-master-00. kubevirt. local <none> <none>bridge-marker-st7f7 1/1 Running 0 12m 192. 168. 123. 232 blog-worker-00. kubevirt. local <none> <none>cdi-apiserver-6f59996849-2hmm9 1/1 Running 0 12m 10. 244. 1. 17 blog-worker-00. kubevirt. local <none> <none>cdi-deployment-57c68dbddc-c4n8l 1/1 Running 0 12m 10. 244. 1. 22 blog-worker-00. kubevirt. local <none> <none>cdi-operator-64bbf595c-48v7k 1/1 Running 1 24m 10. 244. 1. 12 blog-worker-00. kubevirt. local <none> <none>cdi-uploadproxy-5cbf6f4897-95fn5 1/1 Running 0 12m 10. 244. 1. 16 blog-worker-00. kubevirt. local <none> <none>cluster-network-addons-operator-5956598648-5r79l 1/1 Running 0 24m 10. 244. 1. 10 blog-worker-00. kubevirt. local <none> <none>hyperconverged-cluster-operator-d567b5dd8-7d8wq 0/1 Running 0 24m 10. 244. 1. 9 blog-worker-00. kubevirt. local <none> <none>kube-cni-linux-bridge-plugin-kljvq 1/1 Running 0 12m 10. 244. 1. 19 blog-worker-00. kubevirt. local <none> <none>kube-cni-linux-bridge-plugin-p6dkz 1/1 Running 0 12m 10. 244. 0. 7 blog-master-00. kubevirt. local <none> <none>kube-multus-ds-amd64-84gcj 1/1 Running 1 12m 10. 244. 1. 23 blog-worker-00. kubevirt. local <none> <none>kube-multus-ds-amd64-flq8s 1/1 Running 2 12m 10. 244. 0. 10 blog-master-00. kubevirt. local <none> <none>kubemacpool-mac-controller-manager-675ff47587-pb57c 1/1 Running 0 11m 10. 244. 1. 20 blog-worker-00. kubevirt. local <none> <none>kubemacpool-mac-controller-manager-675ff47587-rf65j 1/1 Running 0 11m 10. 244. 0. 8 blog-master-00. kubevirt. local <none> <none>kubevirt-ssp-operator-7b5dcb45c4-qd54h 1/1 Running 0 24m 10. 244. 1. 11 blog-worker-00. kubevirt. local <none> <none>nmstate-handler-8r6d5 1/1 Running 0 11m 192. 168. 123. 232 blog-worker-00. kubevirt. local <none> <none>nmstate-handler-cq5vs 1/1 Running 0 11m 192. 168. 123. 250 blog-master-00. kubevirt. local <none> <none>node-maintenance-operator-7f8f78c556-q6flt 1/1 Running 0 24m 10. 244. 0. 5 blog-master-00. kubevirt. local <none> <none>ovs-cni-amd64-4z2qt 2/2 Running 0 11m 192. 168. 123. 250 blog-master-00. kubevirt. local <none> <none>ovs-cni-amd64-w8fzj 2/2 Running 0 11m 192. 168. 123. 232 blog-worker-00. kubevirt. local <none> <none>virt-api-7b7d486d88-hg4rd 1/1 Running 0 11m 10. 244. 1. 21 blog-worker-00. kubevirt. local <none> <none>virt-api-7b7d486d88-r9s2d 1/1 Running 0 11m 10. 244. 0. 9 blog-master-00. kubevirt. local <none> <none>virt-controller-754466fb86-js6r7 1/1 Running 0 10m 10. 244. 1. 25 blog-worker-00. kubevirt. local <none> <none>virt-controller-754466fb86-mcxwd 1/1 Running 0 10m 10. 244. 0. 11 blog-master-00. kubevirt. local <none> <none>virt-handler-cz7q2 1/1 Running 0 10m 10. 244. 0. 12 blog-master-00. kubevirt. local <none> <none>virt-handler-k6npr 1/1 Running 0 10m 10. 244. 1. 
24 blog-worker-00.kubevirt.local <none> <none> virt-operator-84f5588df6-2k49b 1/1 Running 0 24m 10.244.1.14 blog-worker-00.kubevirt.local <none> <none> virt-operator-84f5588df6-zzrsb 1/1 Running 1 24m 10.244.0.4 blog-master-00.kubevirt.local <none> <none> Note Only once HCO is completely deployed can VirtualMachines be managed from the web console. This is because the web console ships with a specific plugin that detects a KubeVirt installation by the presence of the KubeVirt Custom Resource Definitions (CRDs) in the cluster. Once detected, it automatically shows a new option under the Workload left pane menu to manage KubeVirt resources. It is worth noting that there is an ongoing effort to adapt the OpenShift web console’s user interface to native Kubernetes in addition to OKD or OpenShift, where it is expected to run. As an example, a few days ago the non-applicable Virtual Machine Templates option was removed from the Workload menu and the VM Wizard was made fully functional when native Kubernetes is detected. Containerized installation: The OKD web console actually runs as a pod in OKD along with its deployment, services and all the objects needed to run properly. The idea is to take advantage of the containerized OKD web console available and execute it on one of the nodes of a native Kubernetes cluster. Note that, unlike the binary installation, the pod must run on a node inside our Kubernetes cluster. Running the OKD web console as a native Kubernetes application brings all the Kubernetes advantages: rolling deployments, easy upgrades, high availability, scalability, auto-restart in case of failure, liveness and readiness probes… An example of how easy it is to update the OKD web console to a newer version will be presented as well. Deploying OKD web console: In order to configure the deployment of the OKD web console, the proper Kubernetes objects have to be created. As shown previously in Compiling OKD web console, there are quite a few environment variables that need to be set. When dealing with Kubernetes objects, these variables should be included in the deployment object. A YAML file containing deployment and service objects that mimic the binary installation is already prepared. It can be downloaded from here and configured depending on the user’s local installation. Then, create a specific service account (console) for running the OpenShift web console, in case it was not created previously, and grant it cluster-admin permissions: kubectl create serviceaccount console -n kube-system kubectl create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system Next, extract the token secret name associated with the console service account: $ kubectl get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[0].name}' console-token-ppfc2 Finally, the downloaded YAML file must be modified, assigning the token secret name extracted above to the token section of the deployment shown below: apiVersion: apps/v1 kind: Deployment metadata: name: console-deployment namespace: kube-system labels: app: console spec: replicas: 1 selector: matchLabels: app: console template: metadata: labels: app: console spec: containers: - name: console-app image: quay.io/openshift/origin-console:4.2
env: - name: BRIDGE_USER_AUTH value: disabled # no authentication required - name: BRIDGE_K8S_MODE value: off-cluster - name: BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT value: https://kubernetes.default #master api - name: BRIDGE_K8S_MODE_OFF_CLUSTER_SKIP_VERIFY_TLS value: true # no tls enabled - name: BRIDGE_K8S_AUTH value: bearer-token - name: BRIDGE_K8S_AUTH_BEARER_TOKEN valueFrom: secretKeyRef: name: console-token-ppfc2 # console serviceaccount token key: token --- kind: Service apiVersion: v1 metadata: name: console-np-service namespace: kube-system spec: selector: app: console type: NodePort # nodePort configuration ports: - name: http port: 9000 targetPort: 9000 nodePort: 30036 protocol: TCP --- Finally, the deployment and service objects can be created. The deployment will trigger the download and installation of the OKD web console image. $ kubectl create -f okd-web-console-install.yaml deployment.apps/console-deployment created service/console-service created $ kubectl get pods -o wide -n kube-system NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES console-deployment-59d8956db5-td462 1/1 Running 0 4m49s 10.244.0.13 blog-master-00.kubevirt.local <none> <none> $ kubectl get svc -o wide -n kube-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE console-np-service NodePort 10.96.195.45 <none> 9000:30036/TCP 19m kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 20d Once running, a connection to the NodePort defined in the service object can be established, and it can be checked that the OKD web console is up and running version 4.2. It can also be verified that it is possible to see and manage VirtualMachines running inside the native Kubernetes cluster. Upgrade OKD web console: The upgrade process is really straightforward. All available image versions of the OpenShift console can be consulted in the official OpenShift container image repository. Then, the deployment object must be modified accordingly to run the desired version of the OKD web console. In this case, the web console will be updated to the newest version, which is 4.5.0 (tag 4.5). Note that this is not linked to the latest tag; at the time of writing, latest actually points to version 4.4. The upgrade only involves updating the image value to the desired container image, quay.io/openshift/origin-console:4.5, and saving the deployment. apiVersion: apps/v1 kind: Deployment metadata: name: console-deployment namespace: kube-system labels: app: console spec: replicas: 1 selector: matchLabels: app: console template: metadata: labels: app: console spec: containers: - name: console-app image: quay.io/openshift/origin-console:4.5 #new image version env: - name: BRIDGE_USER_AUTH value: disabled - name: BRIDGE_K8S_MODE value: off-cluster - name: BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT value: https://kubernetes.default - name: BRIDGE_K8S_MODE_OFF_CLUSTER_SKIP_VERIFY_TLS value: true - name: BRIDGE_K8S_AUTH value: bearer-token - name: BRIDGE_K8S_AUTH_BEARER_TOKEN valueFrom: secretKeyRef: name: console-token-ppfc2 key: token Once the deployment has been saved, a new pod with the configured version of the OKD web console is created and eventually replaces the old one. $ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE console-deployment-5588f98644-bw7jr 0/1 ContainerCreating 0 5s console-deployment-59d8956db5-td462 1/1 Running 0 16h In the video below, the procedure explained in this section is shown.
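Rather than editing the manifest by hand, the same image bump can be done imperatively with standard kubectl commands; a small sketch, reusing the deployment and container names from the manifest above:

```bash
# Point the console-app container at the new image version
kubectl -n kube-system set image deployment/console-deployment \
    console-app=quay.io/openshift/origin-console:4.5

# Watch the rolling update until the new pod replaces the old one
kubectl -n kube-system rollout status deployment/console-deployment

# If the new version misbehaves, roll back to the previous revision
kubectl -n kube-system rollout undo deployment/console-deployment
```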
Summary: In this post, two ways to install the OKD web console to manage a KubeVirt deployment in a native Kubernetes cluster have been explored. Running the OKD web console allows you to create, manage and delete virtual machines running in a native cluster from a friendly user interface. It also lets you delegate the creation and maintenance of virtual machines to developers or other users who do not have a deep knowledge of Kubernetes. Personally, I would like to see more user interfaces to manage and configure KubeVirt deployments and their virtual machines. In a previous post, KubeVirt user interface options, some options were explored; however, only the OKD web console was found to be deeply integrated with KubeVirt. Ping us or feel free to comment on this post in case there are other existing options that we did not notice. References: KubeVirt user interface options Managing KubeVirt with OpenShift web console running as a container application on Youtube Managing KubeVirt with OpenShift web console running as a compiled binary on Youtube Kubevirt Laboratory 1 blogpost: Use Kubevirt KubeVirt basic operations video on Youtube Kubevirt installation notes First steps with KubeVirt - Katacoda scenario" }, { - "id": 78, + "id": 79, "url": "/2020/KubeVirt_lab3_upgrade.html", "title": "KubeVirt Laboratory 3, upgrades", "author" : "Pedro Ibáñez Requena", "tags" : "lab, kubevirt upgrade, upgrading, lifecycle", "body": "In this video, we are showing, step by step, the KubeVirt Laboratory 3: Upgrades. Pre-requisites: In the video, there is a Kubernetes cluster already running. Also, the virtctl command is already installed and available in the PATH.
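One handy check while following the upgrade lab is confirming which KubeVirt version is actually running before and after the operator bump. A possible query, assuming KubeVirt was installed in the kubevirt namespace with the default CR name:

```bash
# Version the operator has fully rolled out (reported in the KubeVirt CR status)
kubectl -n kubevirt get kubevirt kubevirt \
    -o jsonpath='{.status.observedKubeVirtVersion}{"\n"}'

# Image tags of the running control-plane pods, as a cross-check
kubectl -n kubevirt get pods \
    -o jsonpath='{range .items[*]}{.spec.containers[0].image}{"\n"}{end}' | sort -u
```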
Wikipedia:User Interface In this blogpost we show the results of a research about the different options existing in the market to enable KubeVirt with a user interface to manage, access and control the life cycle of the Virtual Machines inside Kubernetes with KubeVirt. The different UI options available for KubeVirt that we have been checking, at the moment of writing this article, are the following: Octant OKD: The Origin Community Distribution of Kubernetes OpenShift console running on vanilla Kubernetes Cockpit noVNCOctant: As the Octant webpage claims: Octant is an open-source developer-centric web interface for Kubernetes that lets you inspect a Kubernetes cluster and its applications. Octant is a tool for developers to understand how applications run on a Kubernetes cluster. It aims to be part of the developer’s toolkit for gaining insight and approaching complexity found in Kubernetes. Octant offers a combination of introspective tooling, cluster navigation, and object management along with a plugin system to further extend its capabilities. Some of the key features of this tool can be checked in their latest release notes: Resource Viewer: Graphically visualize relationships between objects in a Kubernetes cluster. The status of individual objects is represented by colour to show workload performance. Summary View: Consolidated status and configuration information in a single page aggregated from output typically found using multiple kubectl commands. Port Forward: Forward a local port to a running pod with a single button for debugging applications and even port forward multiple pods across namespaces. Log Stream: View log streams of pod and container activity for troubleshooting or monitoring without holding multiple terminals open. Label Filter: Organize workloads with label filtering for inspecting clusters with a high volume of objects in a namespace. Cluster Navigation: Easily change between namespaces or contexts across different clusters. Multiple kubeconfig files are also supported. Plugin System: Highly extensible plugin system for users to provide additional functionality through gRPC. Plugin authors can add components on top of existing views. We installed it and found out that: Octant provides a very basic dashboard for Kubernetes and it is pretty straightforward to install. It can be installed in your laptop or in a remote server. Regular Kubernetes objects can be seen from the UI. Pod logs can be checked as well. However, mostly everything is in view mode, even the YAML description of the objects. Therefore, as a developer or cluster operator you cannot edit YAML files directly from the UI Custom resources (CRs) and custom resource definitions (CRDs) are automatically detected and shown in the UI. This means that KubeVirt CRs can be viewed from the dashboard. However, VirtualMachines and VirtualMachineInstances cannot be modified from Octant, they can only be deleted. There is an option to extend the functionality adding plugins to the dashboard. No specific options to manage KubeVirt workloads have been found. With further work and investigation, it could be an option to develop a specific plugin to enable remote console or VNC access to KubeVirt workloads. OKD: The Origin Community Distribution of Kubernetes: As defined in the official webpage: OKD is a distribution of Kubernetes optimized for continuous application development and multi-tenant deployment. 
OKD adds developer and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams. OKD is the upstream Kubernetes distribution embedded in Red Hat OpenShift. OKD embeds Kubernetes and extends it with security and other integrated concepts. OKD is also referred to as Origin in GitHub and in the documentation. An OKD release corresponds to the Kubernetes distribution - for example, OKD 1.10 includes Kubernetes 1.10. A few weeks ago the Kubernetes distribution OKD4 was released as a preview. OKD is the official upstream version of Red Hat’s OpenShift. Since OpenShift has included KubeVirt (Red Hat calls it CNV) as a tech-preview feature for a couple of releases, there is already a lot of integration going on between the OKD console and KubeVirt. Note that OKD4 is in preview, which means that only a subset of platforms and functionality will be available until it reaches beta. That being said, we have found a behaviour similar to testing KubeVirt with OpenShift. We have noticed that from the UI a user can: Install the KubeVirt operator from the operator marketplace. Create Virtual Machines by importing YAML files or following a wizard. The wizard prevents you from moving to the next screen until you provide values in the required fields. Modify the status of the Virtual Machine: stop, start, migrate, clone, edit labels, edit annotations, edit CD-ROMs and delete. Edit network interfaces: it is possible to add multiple network interfaces to the VM. Add disks to the VM. Connect to the VM via serial or VNC console. Edit the YAML object files online. Create VM templates. The web console features an interactive wizard that guides you through the Basic Settings, Networking, and Storage screens to simplify the process of creating virtual machine templates. Check VM events in real time. Gather metrics and utilization of the VM. Pretty much everything you can do with KubeVirt from the command line. One of the drawbacks is that the current KubeVirt HCO operator contains KubeVirt version 0.18.1, which is quite outdated. Note that last week version 0.24 of KubeVirt was released. Using such an old release could cause some issues when creating VMs using newer container disk images. For instance, we have not been able to run the latest Fedora cloud container disk image and instead were forced to use the one tagged as v0.18.1, which matches the version of KubeVirt deployed. If for any reason there is a need to deploy the latest version, it can be done by following the unreleased bundles using the HCO without marketplace instructions, which apply the HCO operator directly. Note that in this case automatic updates to KubeVirt are not triggered or advertised in OKD, as happens with the operator. OpenShift console (bridge): There is actually a KubeVirt Web User Interface; however, the standalone project was deprecated in favor of OpenShift Console, where it is included as a plugin. As we reviewed previously, the OpenShift web console is just another piece inside OKD. It is an independent part and, as stated in its official GitHub repository, it can run on top of native Kubernetes. OpenShift Console, a.k.a. bridge, is defined as: a friendly kubectl in the form of a single page web application. It also integrates with other services like monitoring, chargeback, and OLM.
Some things that go on behind the scenes include: Proxying the Kubernetes API under /api/kubernetes Providing additional non-Kubernetes APIs for interacting with the cluster Serving all frontend static assets User authentication. Then, as briefly explained in their repository, our Kubernetes cluster can be configured to run the OpenShift Console and leverage its integrations with KubeVirt. Features related to KubeVirt are similar to the ones found in the OKD installation, except: KubeVirt installation is done using the Hyperconverged Cluster Operator (HCO) without OLM or Marketplace instead of the KubeVirt operator; therefore, available updates to KubeVirt are not triggered or advertised automatically. VirtualMachine objects can only be created from YAML. Although the wizard dialog is still available in the console, it does not function properly because it uses specific OpenShift objects under the hood; these objects are not available in our native Kubernetes deployment. Connection to the VM via serial or VNC console is flaky. VM templates can only be created from YAML, since the wizard dialog is based on OpenShift templates. Note that the OpenShift console documentation briefly points out how to integrate the OpenShift console with a native Kubernetes deployment. It is uncertain if it can be installed in any other Kubernetes cluster. Cockpit: When testing cockpit on a CentOS 7 server with a Kubernetes cluster and KubeVirt, we realised that some of the containers/k8s features have to be enabled by installing extra cockpit packages: To see containers and images, the cockpit-docker package has to be installed; a new option called Containers then appears in the menu. To see the k8s cluster, the cockpit-kubernetes package has to be installed; a new tab appears in the left menu. The new options allow you to: Overview: filtering by project, it shows Pods, volumes, Nodes, services and resources used. Nodes: nodes and the resources they use are shown here. Containers: a full list of containers and some metadata about them is displayed in this option. Topology: a graph with the pods, services and nodes is shown in this option. Details: allows filtering by project and type of resource and shows some metadata in the results. Volumes: allows filtering by project and shows the volumes with their type and status. In CentOS 7 there are also the following packages: cockpit-machines.x86_64: Cockpit user interface for virtual machines. If “virt-install” is installed, you can also create new virtual machines. It adds a new option in the main menu called Virtual Machines, but it uses libvirt and is not KubeVirt related. cockpit-machines-ovirt.noarch: Cockpit user interface for oVirt virtual machines, like the package above but with support for oVirt. At the moment none of the cockpit add-ons supports KubeVirt Virtual Machines; KubeVirt support for cockpit was removed in Fedora 29. noVNC: noVNC is a JavaScript VNC client using WebSockets and HTML5 Canvas. It just allows you to connect through VNC to a virtual machine already deployed in KubeVirt. No VM management or even a dashboard comes with this option; it is pure DIY code that can embed VNC access to the VM into HTML in any application or webpage. Summary: From the different options we have investigated, we can conclude that the OpenShift Console, along with the OKD Kubernetes distribution, provides a powerful way to manage and control our KubeVirt objects.
From the user interface, a developer or operator can do pretty much everything you do in the command line. Additionally, users can create custom reusable templates to deploy their virtual machines with specific requirements. Wizard dialogs are provided as well in order to guide new users during the creation of their VMs. OpenShift Console can also be considered as an interesting option in case your KubeVirt installation is running on a native Kubernetes cluster. On the other hand, noVNC provides a lightweight interface to simply connect to the console of your virtual machine. Octant, although it does not have any specific integration with KubeVirt, looks like a promising Kubernetes user interface that could be extended to manage our KubeVirt instances in the future. Note We encourage our readers to let us know of user interfaces that can be used to manage our KubeVirt virtual machines. Then, we can include them in this list References: Octant OKD OKD Console Cockpit virtVNC, noVNC for Kubevirt" }, { - "id": 81, + "id": 82, "url": "/2019/KubeVirt_lab2_experiment_with_cdi.html", "title": "KubeVirt Laboratory 2, experimenting with CDI", "author" : "Pedro Ibáñez Requena", "tags" : "lab, CDI, containerized data importer, vm import", "body": "In this video, we are showing the step by step of the KubeVirt Laboratory 2: Experiment with CDI Pre-requisites: In the video, there is a Kubernetes cluster together with KubeVirt already running. If you need help for preparing that setup you can check the KubeVirt installation notes or try it yourself in the First steps with KubeVirt Katacoda scenario. Also, the virtctl command is already installed and available in the PATH. Video: Operations: The following operations are shown in the video: Configure the storage for the Virtual Machine Install the CDI Operator and the CR for the importer Create and customize the Fedora Virtual Machine from the cloud image Connect to the console of the Virtual Machine Connect to the Virtual Machine using SSH Redirect a host port to the Virtual Machine to enable external SSH connectivityReferences: Kubevirt Laboratory 2: Experimenting with CDI Kubevirt Laboratory 1 blogpost: Use Kubevirt Kubevirt Laboratory 1: Use KubeVirt KubeVirt basic operations video on youtube Kubevirt installation notes First steps with KubeVirt - Katacoda scenario" }, { - "id": 82, + "id": 83, "url": "/2019/KubeVirt_lab1_use_kubevirt.html", "title": "KubeVirt Laboratory 1, use KubeVirt", "author" : "Pedro Ibáñez Requena", "tags" : "laboratory, lab, installing kubevirt, use kubevirt, admin operations", "body": "In this video, we are showing the step by step of the KubeVirt Laboratory 1: Use KubeVirt Pre-requisites: In the video, there is a Kubernetes cluster together with KubeVirt already running. If you need help for preparing that setup you can check the KubeVirt installation notes or try it yourself in the First steps with KubeVirt Katacoda scenario. Also, the virtctl command is already installed and available in the PATH. 
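The lab builds its virtual machine from a plain YAML manifest backed by a containerdisk; a minimal sketch of what such a manifest looks like is shown below (the name and image are illustrative, and the apiVersion depends on the KubeVirt release in use, kubevirt.io/v1 on newer ones):

```yaml
apiVersion: kubevirt.io/v1alpha3   # kubevirt.io/v1 on newer KubeVirt releases
kind: VirtualMachine
metadata:
  name: testvm                     # illustrative name
spec:
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 64Mi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo   # demo containerdisk image
```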
Video: Operations: The following operations are shown in the video: Creating a Virtual Machine from a YAML and a containerdisk Starting the Virtual Machine Connecting to the Virtual Machine using the console Stopping and removing the Virtual Machine. References: Kubevirt Laboratory 1: Use KubeVirt instructions KubeVirt basic operations video on youtube Kubevirt installation notes First steps with KubeVirt - Katacoda scenario" }, { - "id": 83, + "id": 84, "url": "/2019/changelog-v0.24.0.html", "title": "KubeVirt v0.24.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0.24.0: Released on: Tue Dec 3 15:34:34 2019 +0100 CI: Support for Kubernetes 1.15 CI: Support for Kubernetes 1.16 Add and fix a couple of test cases Support for pausing and unpausing VMs Update of libvirt to 5.6.0 Fix bug related to parallel scraping of Prometheus endpoints Fix to reliably test VNC" }, { - "id": 84, + "id": 85, "url": "/2019/KubeVirt_basic_operations_video.html", "title": "KubeVirt basic operations video", "author" : "Pedro Ibáñez Requena", "tags" : "admin, operations, create vm, start vm, connect to console, connect to ssh, stop vm, remove vm, virtual machine, operator manual, basic operations", "body": "In this video, we are showing how KubeVirt can be used from a beginner point of view. Pre-requisites: In the video, there is a Kubernetes cluster together with KubeVirt already running. If you need help preparing that setup, you can check the KubeVirt installation notes or try it yourself in the First steps with KubeVirt Katacoda scenario. Also, the virtctl command is already installed and available in the PATH. Video: Operations: The following operations are shown in the video: Creating a Virtual Machine from a YAML and a cloud image Starting the Virtual Machine Connecting through the console and SSH to the Virtual Machine Connecting to the Virtual Machine through VNC Stopping and removing the Virtual Machine. References: KubeVirt basic operations video on youtube Kubevirt installation notes First steps with KubeVirt - Katacoda scenario" }, { - "id": 85, + "id": 86, "url": "/2019/jenkins-ci-server-upgrade-and-jobs-for-kubevirt.html", "title": "Jenkins Infra upgrade", "author" : "Pablo Iranzo Gómez", "tags" : "jenkins, community, infrastructure, contra-lib", "body": "Introduction: In the article Jenkins Jobs for KubeVirt Lab Validation, we covered how Jenkins got the information about the labs and jobs to perform from the KubeVirt repositories. In this article, we’ll cover the configuration changes in both Jenkins and the Jenkinsfiles required to get our CI setup updated to the latest versions and syntax. Jenkins: Our Jenkins instance runs on top of CentOS CI (https://pagure.io/centos-infra/) and is one of the OS-enhanced Jenkins instances that provide persistent storage and other pieces bundled, required for non-testing setups. What we found is that Jenkins was already complaining about pending updates (security, engine, etc.), but the jenkins.war was embedded in the container image we were using. Initial attempts tried to use environment variables to override the WAR to use, but our image was not prepared for that; generating a new container image was an option, but it seemed a bad approach, as our image also contained the custom libraries (contra-lib) that enable communicating with OpenShift to run the tests inside containers there.
During the investigation and testing, we found that the persistent storage folder we were using (/var/lib/jenkins) contained a war folder with the unarchived jenkins.war, so the next attempt was to manually download the latest jenkins.war and unzip it into that folder, which finally allowed us to upgrade the Jenkins core. The plugins: After upgrading the Jenkins core, we could use the internal plugin manager to upgrade all the remaining plugins; however, that meant a big change in the plugins, configurations, etc. After initially being able to run the lab validations for a while, the next morning we wanted to release a new image (from Cloud-Image-Builder) and it failed to build because of the external libraries, which also affected the lab validations again, so we were back to square one for the upgrade process. That left us with the decision to go forward with the full upgrade: the latest stable Jenkins and available plugins, reconfiguring whatever had changed to suit the upgraded requirements. Here we’ll show you the configuration settings for each one of the new/updated plugins. OpenShift Plugin: Updated to configure the CentOS OpenShift instance with the system account for accessing it. OpenShift Jenkins Sync: Updated and configured to use the console as well, with the kubevirt namespace. Global Pipeline Libraries: Here we added the libraries we used, but targeting the master branch instead of a specific commit: contra-lib: https://github.com/openshift/contra-lib.git cico-pipeline-library: https://github.com/CentOS/cico-pipeline-library.git contra-library: https://github.com/CentOS-PaaS-SIG/contra-env-sample-project For all of them, we ticked: Load Implicitly, Allow default version to be overridden, Include @Library changes in job recent changes. Jenkins replied with the ‘currently maps to revision: hash’ message for each one of them after loading them properly, indicating that it was successful. Slack plugin: In addition to the regular plugins used for builds, we incorporated the Slack plugin to validate the notifications of build status to a test Slack channel. Configuration is really easy: from within Slack, when the Jenkins notifications plugin is added, a token is provided that must be configured in Jenkins, together with a default room to send notifications to. This allows us to get notified when a new build is started and of the resulting status, just in case something was generated with errors or something external changed (remember that we use the latest KubeVirt release and the latest virtctl and kubectl tools, and a new image is generated out of them to validate the labs). Kubernetes ‘Cloud’: In addition, the Kubernetes Cloud was configured pointing to the same console access and using the kubevirt namespace. The libraries we added automatically add some pod templates for jenkins-contra-slave. Other changes: Our environment also used other helper tools as regular OpenShift builds (contra): we had to update the repositories from using some older forks (no longer valid and outdated) to the latest versions, and for the ansible-executor we also created a fork to use the newest libraries for accessing the Google Cloud environment and to tune some other variables (https://github.com/CentOS-PaaS-SIG/contra-env-infra/pull/59) (the changes have now landed in the upstream repo). The issue that we were facing was related to the failure to write temporary files to the user’s $HOME folder, so the ansible configuration was forced to use a temporary, writable folder instead, as sketched below.
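A minimal sketch of that kind of workaround, assuming the job environment can export variables before invoking ansible-playbook (the exact paths and playbook names used in contra-env-infra may differ):

```bash
# Run Ansible from a writable location instead of the user's $HOME
export HOME=/tmp
export ANSIBLE_LOCAL_TEMP=/tmp/.ansible/tmp
mkdir -p "$ANSIBLE_LOCAL_TEMP"

# 'playbook.yml' stands in for whichever playbook the job actually runs
ansible-playbook -i inventory playbook.yml
```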
Additionally, Google Cloud access required updating the authentication libraries that were failing as well; that is fixed via the Dockerfile that generates the ansible-executor container image. Job Changes: Our Jenkins jobs were defined (as documented in the prior article) inside each repository, which made that part easy on one side but also required some tuning and changes: We have disabled the Minikube validation, as it was failing for both AWS and GCP unless using bare metal (so we’re wondering about using another approach here). We’ve added code to do the actual Slack notification mentioned above. We extended the try/catch block to include a ‘finally’ to send the notifications. We changed the syntax for ‘artifacts’, as it was previously ArchiveArtifacts and is now a postBuild step. The outcome: After several attempts at fine-tuning the configuration, the builds started succeeding. Of course, one of the advantages is that builds happen automatically every day or on code changes in the repositories. There is still room for improvement identified for the next iterations: find running instances on cloud providers that are no longer needed, to reduce the bills; trigger builds when new releases of KubeVirt happen (out of the kubevirt/kubevirt repo); unify testing on the Prow instance. Of course, the builds can still fail for external reasons (like a VM in the cloud provider taking longer to start up and have SSH available, or the nested VM inside it after importing, etc.), but they are still a great help for checking that things work as they should and, of course, for improving the validations to reduce the number of false positives. " }, { - "id": 86, + "id": 87, "url": "/2019/kubecon-na-2019.html", "title": "KubeVirt at KubeCon + CloudNativeCon North America", "author" : "Pep Turró Mauri", "tags" : "KubeCon, cloudnativecon, America, conference, talk, gathering", "body": "The KubeCon + CloudNativeCon North America 2019 conference is next week in San Diego, California. KubeVirt will have a presence at the event and this post highlights some activities that will have a KubeVirt focus there. Sessions: There are two sessions covering KubeVirt specifically: On Tuesday at 2:25 PM Chandrakanth Jakkidi and Steve Gordon will present an introduction to KubeVirt that will cover the background of the project, its motivation and use cases, and an architectural overview and demo. On Wednesday at 10:55 AM David Vossel and Vishesh Tanksale will take a deep-dive on virtualized GPU workloads on KubeVirt where they will show KubeVirt’s capabilities around host device passthrough using NVIDIA GPU workloads as a case study. Users and Contributors gathering: KubeVirt users and contributors will get together to talk about KubeVirt, brainstorm ideas to help us shape the project’s next steps, and generally get some face to face time. If you are already using or contributing to KubeVirt, are considering trying it, or just want to present your use case and discuss KubeVirt’s fit or needs, we’d be very glad to meet you there. Red Hat is sponsoring a venue for the meetup right next to the conference’s venue. Space is limited, so we are asking people to register in advance. Demos: KubeVirt will also be featured in a couple of demos at the Red Hat booth in the exposition hall. You can find the demo schedule at their event landing page. Keeping in touch: Follow @kubevirt on Twitter for updates. We look forward to seeing you at KubeCon + CloudNativeCon!
" }, { - "id": 87, + "id": 88, "url": "/2019/changelog-v0.23.0.html", "title": "KubeVirt v0.23.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 23. 0: Released on: Mon Nov 4 16:42:54 2019 +0100 Guest OS Information is available under the VMI status now Updated to Go 1. 12. 8 and latest bazel Updated go-yaml to v2. 2. 4, which has a ddos vulnerability fixed Cleaned up and fixed CRD scheme registration Several bugfixes Many CI improvements (e. g. more logs in case of test failures)" }, { - "id": 88, + "id": 89, "url": "/2019/prow-jobs-for-kubevirt.html", "title": "Prow jobs for KubeVirt website and Tutorial repo", "author" : "Pablo Iranzo Gómez", "tags" : "prow, infrastructure, kubevirt-tutorial, CI-CD, continuous integration, community", "body": "Introduction: Prow is a Kubernetes based CI/CD system that has several types of jobs and is used at KubeVirt project. General PR’s, etc are tested by Prow to be validated to be reviewed by doing some sanity checks defined by developers. In general, the internals on how it works can be checked at Life of a Prow Job. Community repositories: There are other repos (involved in the project ecosystem) that have tests to validate the information provided on them. The community repositories include: KubeVirt website KubeVirt tutorial Katacoda Scenarios Community repo Cloud Image BuilderThose repos contain useful information for new users, like the try-it scenarios, the Laboratories, Katacoda scenarios, Community supporting files (like logos, proposals, etc). The jobs: For each repo we’ve some types of jobs: periodical: Run automatically to validate that the repo, without further changes is still working (for example, detecting broken URL’s). presubmit: Validates that the incoming PR will not break the environment. post-submit: After merging the PR, the repo is still working. Jobs are defined in the project-infra repository, for example: https://github. com/kubevirt/project-infra/blob/master/github/ci/prow-deploy/files/jobs/kubevirt/kubevirt-tutorial/kubevirt-tutorial-periodics. yaml https://github. com/kubevirt/project-infra/blob/master/github/ci/prow-deploy/files/jobs/kubevirt/kubevirt-tutorial/kubevirt-tutorial-presubmits. yamlThose jobs define the image to use (image and tag), and the commands to execute. In the examples above we’re using ‘Docker-in-Docker’ (dind) images and we’re targetting the KubeVirt-tutorial repository. KubeVirt-tutorial: The jobs, when executed as part of the Prow workflow, run the commands defined in the repo itself, for example for kubevirt-tutorial check the following folder: https://github. com/kubevirt/kubevirt-tutorial/tree/master/hackThat folder contains three scripts: build, test_lab and tests, which do setup the environment for running the validations, that is: install required software on top of the used images. prepare the scripts to be executed via mdsh which extracts markdown from lab files to be executed against the cluster setup by Prow (using dind). Run each script and report statusOnce the execution has finished, if the final status is ok, the status is reported back to the GitHub PR so that it can be reviewed by mantainers of the repo. Job status: The jobs executed and the logs are available on the Prow instance we use, for example: https://KubeVirt. io Pre-submit link checker: https://prow. apps. ovirt. org/?job=kubevirt-io-presubmit-link-checker Periodical link checker: https://prow. apps. ovirt. org/?job=kubevirt-io-periodic-link-checker KubeVirt Tutorial Pre-submit: https://prow. 
apps. ovirt. org/?job=kubevirt-tutorial-presubmit-lab-testing-k8s-1. 13. 3 Periodical: https://prow. apps. ovirt. org/?job=periodic-kubevirt-tutorial-lab-testing Wrap-up: If you find that a test should be performed to further validate the integrity and information provided, feel free to raise issues or even a PR against the project-infra repository so that we can get it improved! " }, { - "id": 89, + "id": 90, "url": "/2019/jenkins-jobs-for-kubevirt-lab-validation.html", "title": "Jenkins Jobs for KubeVirt lab validation", "author" : "Pablo Iranzo Gómez", "tags" : "prow, infrastructure, kubevirt-tutorial, CI-CD, continuous integration, jenkins", "body": "Introduction: Jenkins is an open-source automation server that allows to define jobs, triggers, etc to validate that certain conditions are met. Jobs can run either after a trigger has been received (for example from a repo merge or PR), periodically to validate that a previous ‘validated’ job is still ‘valid’ or even manually to force refresh of information. Community repositories: Outside of the main KubeVirt binaries, there are other repos that are involved in the project ecosystem have tests to validate the information provided on them. The community repositories include: KubeVirt website KubeVirt tutorial Katacoda Scenarios Community repo Cloud Image BuilderThose repos contain useful information for new users, like the try-it scenarios, the Laboratories, Katacoda scenarios, Community supporting files (like logos, proposals, etc). The jobs: Our Jenkins instance is hosted at CentOS OpenShift instance and it’s available at https://jenkins-kubevirt. apps. ci. centos. org/ There, we’ve two jobs we’re currently refining to get better results: Cloud Image Builder / https://jenkins-kubevirt. apps. ci. centos. org/job/cloud-image-builder, which builds, according to the repo defined above contents what the AWS, GCP and Minikube images contain (binaries, KubeVirt version, Minikube version). The resulting AWS images / https://jenkins-kubevirt. apps. ci. centos. org/job/cloud-image-builder/job/master/lastSuccessfulBuild/artifact/new-images. json are copied to each region. The resulting GCP images are also copied The resulting Minikube image is used for lab validation Lab Validation / https://jenkins-kubevirt. apps. ci. centos. org/job/Lab%20Validation/ which uses above created images with the contents of the /tests folder at Kubevirt. github. io repository to spin up instances and validate that the contents of the labs are validBoth tests can be executed periodically (by default each day), causing a repository rescan to detect new changes and later validation of them and only on branch master. If you’re curious about what Jenkins does, check the file JenkinsFile at the root of each repository: https://github. com/kubevirt/kubevirt. github. io/blob/abd315b2bcdabd2effa71fd3e6af1207d8fcbf42/Jenkinsfile https://github. com/kubevirt/cloud-image-builder/blob/master/JenkinsfileBoth of them define pipelines so that runs can be executed in parallel for each one of the environments: GCP, AWS, Minikube. Cloud Image Builder: Cloud Image Builder has primarily two steps: build publishBuild takes most of the logic, as it prepares virtctl and kubectl binaries and then for each one of the environments it executes the required ansible playbooks: ${environment}-provision. yml: Which creates the VM instance on the provider (for Minikube, it’s a VM inside GCP) ${environment}-setup. 
yml: Which configures the VM instance (repositories, packages, first-boot script, KubeVirt installation, virtctl binaries, etc.). ${environment}-mkimage.yml: Which creates an image out of the instance generated by the steps above. ${environment}-publish.yml: Which, for GCP and AWS, publishes the image generated in the step above. Once the images have been published, the jobs end and the instances are removed from the providers. Lab Validation: Lab Validation is meant to check that the labs described on the website work on the three platforms (GCE, AWS, Minikube). Unlike KubeVirt Tutorial, it doesn’t use mdsh yet for extracting the actual commands out of the lab text; instead it uses ansible playbooks to imitate the lab execution: https://github.com/kubevirt/kubevirt.github.io/blob/abd315b2bcdabd2effa71fd3e6af1207d8fcbf42/tests/ansible/lab1.yml https://github.com/kubevirt/kubevirt.github.io/blob/abd315b2bcdabd2effa71fd3e6af1207d8fcbf42/tests/ansible/lab2.yml In addition, it contains files for setting up the instance for running the tests (${environment}-provision.yml) and doing the later cleanup (${environment}-cleanup.yml). The first playbook creates a new instance on the environment being checked using the images created by the Cloud Image Builder, which means that not only are the labs validated, but the generated images are validated as well, to detect possible defects like missing binaries, wrongly installed software, etc. The biggest part of the lab validation is the lab.sh script, which accepts the lab being executed and the environment as parameters, and takes care of provisioning the instance, running the lab and performing the cleanup afterwards. " }, { - "id": 90, + "id": 91, "url": "/2019/KubeVirt_storage_rook_ceph.html", "title": "Persistent storage of your Virtual Machines in KubeVirt with Rook", "author" : "Pedro Ibáñez Requena", "tags" : "rook, ceph, ntp, chronyd", "body": " Introduction: Quoting Wikipedia: In computer science, persistence refers to the characteristic of state that outlives the process that created it. This is achieved in practice by storing the state as data in computer data storage. Programs have to transfer data to and from storage devices and have to provide mappings from the native programming-language data structures to the storage device data structures. In this post, we are going to show how to set up persistent storage for VM images with the help of Ceph and the automation provided by Rook. Pre-requisites: Some prerequisites have to be met: An existing Kubernetes cluster with 3 masters and 1 worker (minimum) is already set up; this layout is not mandatory, but it allows demonstrating an example of an HA Ceph installation. Each Kubernetes node has an extra empty disk connected (it has to be blank, with no filesystem). KubeVirt is already installed and running. In this example the following system names and IP addresses are used: System Purpose IP kv-master-00 Kubernetes Master node 00 192.168.122.6 kv-master-01 Kubernetes Master node 01 192.168.122.106 kv-master-02 Kubernetes Master node 02 192.168.122.206 kv-worker-00 Kubernetes Worker node 00 192.168.122.222 To be able to import Virtual Machines, the KubeVirt CDI has to be configured too. Containerized-Data-Importer (CDI) is a persistent storage management add-on for Kubernetes. Its primary goal is to provide a declarative way to build Virtual Machine disks on PVCs for KubeVirt VMs.
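As an illustration of that declarative flow, a DataVolume roughly like the sketch below asks CDI to populate a PVC from an HTTP source (the name and image URL are placeholders, and the API version depends on the CDI release in use):

```yaml
apiVersion: cdi.kubevirt.io/v1alpha1   # newer CDI releases use v1beta1
kind: DataVolume
metadata:
  name: fedora-dv                      # illustrative name
spec:
  source:
    http:
      url: "https://example.com/Fedora-Cloud-Base.qcow2"   # placeholder image URL
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
```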
CDI works with standard core Kubernetes resources and is storage device-agnostic, while its primary focus is to build disk images for Kubevirt, it’s also useful outside of a KubeVirt context to use for initializing your Kubernetes Volumes with data. In the case your cluster doesn’t have CDI, the following commands will cover CDI operator and the CR setup: [root@kv-master-00 ~]# export VERSION=$(curl -s https://github. com/kubevirt/containerized-data-importer/releases/latest | grep -o v[0-9]\+\. [0-9]*\. [0-9]* )[root@kv-master-00 ~]# kubectl create -f https://github. com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator. yamlnamespace/cdi createdcustomresourcedefinition. apiextensions. k8s. io/cdis. cdi. kubevirt. io createdconfigmap/cdi-operator-leader-election-helper createdserviceaccount/cdi-operator createdclusterrole. rbac. authorization. k8s. io/cdi-operator-cluster createdclusterrolebinding. rbac. authorization. k8s. io/cdi-operator createddeployment. apps/cdi-operator createdcontainerized-data-importer/releases/download/$VERSION/cdi-operator. yaml[root@kv-master-00 ~]# kubectl create -f https://github. com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr. yamlcdi. cdi. kubevirt. io/cdi createdThe nodes of the cluster have to be time synchronized This should have been done for you by chronyd It can’t harm to do it again: [root@kv-master-00 ~]# for i in $(echo 6 106 206 222); do ssh -oStrictHostKeyChecking=no \ root@192. 168. 122. $i sudo chronyc -a makestep; doneWarning: Permanently added '192. 168. 122. 6' (ECDSA) to the list of known hosts. 200 OKWarning: Permanently added '192. 168. 122. 106' (ECDSA) to the list of known hosts. 200 OKWarning: Permanently added '192. 168. 122. 206' (ECDSA) to the list of known hosts. 200 OKWarning: Permanently added '192. 168. 122. 222' (ECDSA) to the list of known hosts. 200 OKThis step could also be done with ansible (one line or rhel-system-roles. noarch). Installing Rook in Kubernetes to handle the Ceph cluster: Next, the latest upstream release of Rook has to be cloned: [root@kv-master-00 ~]# git clone https://github. com/rook/rookCloning into 'rook'. . . remote: Enumerating objects: 1, done. remote: Counting objects: 100% (1/1), done. remote: Total 37745 (delta 0), reused 0 (delta 0), pack-reused 37744Receiving objects: 100% (37745/37745), 13. 02 MiB | 1. 54 MiB/s, done. Resolving deltas: 100% (25309/25309), done. Now, change the actual directory to the location of the Kubernetes examples where the respective resource definitions can be found: [root@kv-master-00 ~]# cd rook/cluster/examples/kubernetes/cephThe Rook common resources that make up Rook have to be created: [root@kv-master-00 ~]# kubectl create -f common. yaml(output removed)Next, create the Kubernetes Rook operator: [root@kv-master-00 ~]# kubectl create -f operator. yamldeployment. apps/rook-ceph-operator createdTo check the progress of the operator pod and the discovery pods starting up, the commands below can be executed. The discovery pods are responsible for investigating the available resources (e. g. 
disks that can make up OSD’s) across all available Nodes: [root@kv-master-00 ~]# watch kubectl get pods -n rook-cephNAME READY STATUS RESTARTS AGErook-ceph-operator-fdfbcc5c5-qs7x8 1/1 Running 1 3m14srook-discover-7v65m 1/1 Running 2 2m19srook-discover-cjfdz 1/1 Running 0 2m19srook-discover-f8k4s 0/1 ImagePullBackOff 0 2m19srook-discover-x22hh 1/1 Running 0 2m19sNAME READY STATUS RESTARTS AGEpod/rook-ceph-operator-fdfbcc5c5-qs7x8 1/1 Running 1 4m21spod/rook-discover-7v65m 1/1 Running 2 3m26spod/rook-discover-cjfdz 1/1 Running 0 3m26spod/rook-discover-f8k4s 1/1 Running 0 3m26spod/rook-discover-x22hh 1/1 Running 0 3m26sAfter, the Ceph cluster configuration inside of the Rook operator has to be prepared: [root@kv-master-00 ~]# kubectl create -f cluster. yamlcephcluster. ceph. rook. io/rook-ceph createdOne of the key elements of the default cluster configuration is to configure the Ceph cluster to use all nodes and use all devices, i. e. run Rook/Ceph on every system and consume any free disks that it finds; this makes configuring Rook a lot more simple: [root@kv-master-00 ~]# grep useAll cluster. yml useAllNodes: true useAllDevices: true # Individual nodes and their config can be specified as well, but 'useAllNodes' above must be set to false. Then, only the namedThe progress can be checked now, check the pods in the rook-ceph namespace: [root@kv-master-00 ~]# watch kubectl -n rook-ceph get podsNAME READY STATUS RESTARTS AGEcsi-cephfsplugin-2kqbd 3/3 Running 0 36scsi-cephfsplugin-hjnf9 3/3 Running 0 36scsi-cephfsplugin-provisioner-75c965db4f-tbgfn 4/4 Running 0 36scsi-cephfsplugin-provisioner-75c965db4f-vgcwv 4/4 Running 0 36scsi-cephfsplugin-svcjk 3/3 Running 0 36scsi-cephfsplugin-tv6rs 3/3 Running 0 36scsi-rbdplugin-dsdwk 3/3 Running 0 37scsi-rbdplugin-provisioner-69c9869dc9-bwjv4 5/5 Running 0 37scsi-rbdplugin-provisioner-69c9869dc9-vzzp9 5/5 Running 0 37scsi-rbdplugin-vzhzz 3/3 Running 0 37scsi-rbdplugin-w5n6x 3/3 Running 0 37scsi-rbdplugin-zxjcc 3/3 Running 0 37srook-ceph-mon-a-canary-84c7fc67ff-pf7t5 1/1 Running 0 14srook-ceph-mon-b-canary-5f7c7cfbf4-8dvcp 1/1 Running 0 8srook-ceph-mon-c-canary-7779478497-7x25x 0/1 ContainerCreating 0 3srook-ceph-operator-fdfbcc5c5-qs7x8 1/1 Running 1 9m30srook-discover-7v65m 1/1 Running 2 8m35srook-discover-cjfdz 1/1 Running 0 8m35srook-discover-f8k4s 1/1 Running 0 8m35srook-discover-x22hh 1/1 Running 0 8m35sWait until the Ceph monitor pods are created. Next, the toolbox pod has to be created; this is useful to verify the status/health of the cluster, getting/setting authentication, and querying the Ceph cluster using standard Ceph tools: [root@kv-master-00 ~]# kubectl create -f toolbox. yamldeployment. apps/rook-ceph-tools createdTo check how well this is progressing: [root@kv-master-00 ~]# kubectl -n rook-ceph get pods | grep toolrook-ceph-tools-856c5bc6b4-s47qm 1/1 Running 0 31sBefore proceeding with the pool and the storage class the Ceph cluster status can be checked already: [root@kv-master-00 ~]# toolbox=$(kubectl -n rook-ceph get pods -o custom-columns=NAME:. metadata. name --no-headers | grep tools)[root@kv-master-00 ~]# kubectl -n rook-ceph exec -it $toolbox shsh-4. 2# ceph status cluster: id: 5a0bbe74-ce42-4f49-813d-7c434af65aad health: HEALTH_WARN clock skew detected on mon. c services: mon: 3 daemons, quorum a,b,c (age 3m) mgr: a(active, since 2m) osd: 4 osds: 4 up (since 105s), 4 in (since 105s) data: pools: 0 pools, 0 pgs objects: 0 objects, 0 B usage: 4. 
0 GiB used, 72 GiB / 76 GiB avail pgs:Note In this example, the health value is HEALTH_WARN because there is a clock skew between the monitor in node c and the rest of the cluster. If this is your case, go to the troubleshooting point at the end of the blogpost to find out how to solve this issue and get a HEALTH_OK. Next, some other resources need to be created. First, the block pool that defines the name (and specification) of the RBD pool that will be used for creating persistent volumes, in this case, is called replicapool: Configuring the CephBlockPool and the Kubernetes StorageClass for using Ceph hosting the Virtual Machines: The cephblockpool. yml is based in the pool. yml, you can check that file in the same directory to learn about the details of each parameter: [root@kv-master-00 ~]# cat pool. yml################################################################################################################## Create a Ceph pool with settings for replication in production environments. A minimum of 3 OSDs on# different hosts are required in this example. # kubectl create -f pool. yaml#################################################################################################################apiVersion: ceph. rook. io/v1kind: CephBlockPoolmetadata: name: replicapool namespace: rook-cephspec: # The failure domain will spread the replicas of the data across different failure zones failureDomain: host # For a pool based on raw copies, specify the number of copies. A size of 1 indicates no redundancy. replicated: size: 3 # A key/value list of annotations annotations: # key: valueThe following file has to be created to define the CephBlockPool: [root@kv-master-00 ~]# vim cephblockpool. ymlapiVersion: ceph. rook. io/v1kind: CephBlockPoolmetadata: name: replicapool namespace: rook-cephspec: failureDomain: host replicated: size: 2[root@kv-master-00 ~]# kubectl create -f cephblockpool. ymlcephblockpool. ceph. rook. io/replicapool created[root@kv-master-00 ~]# kubectl get cephblockpool -n rook-cephNAME AGEreplicapool 19sNow is time to create the Kubernetes storage class that would be used to create the volumes later: [root@kv-master-00 ~]# vim storageclass. ymlapiVersion: storage. k8s. io/v1kind: StorageClassmetadata: name: rook-ceph-block# Change rook-ceph provisioner prefix to match the operator namespace if neededprovisioner: rook-ceph. rbd. csi. ceph. comparameters: # clusterID is the namespace where the rook cluster is running clusterID: rook-ceph # Ceph pool into which the RBD image shall be created pool: replicapool # RBD image format. Defaults to 2 . imageFormat: 2 # RBD image features. Available for imageFormat: 2 . CSI RBD currently supports only `layering` feature. imageFeatures: layering # The secrets contain Ceph admin credentials. csi. storage. k8s. io/provisioner-secret-name: rook-ceph-csi csi. storage. k8s. io/provisioner-secret-namespace: rook-ceph csi. storage. k8s. io/node-stage-secret-name: rook-ceph-csi csi. storage. k8s. io/node-stage-secret-namespace: rook-ceph # Specify the filesystem type of the volume. If not specified, csi-provisioner # will set default as `ext4`. csi. storage. k8s. io/fstype: xfs# Delete the rbd volume when a PVC is deletedreclaimPolicy: Delete[root@kv-master-00 ~]# kubectl create -f storageclass. ymlstorageclass. storage. k8s. io/rook-ceph-block created[root@kv-master-00 ~]# kubectl get storageclassNAME PROVISIONER AGErook-ceph-block rook-ceph. rbd. csi. ceph. 
com 61sSpecial attention to the pool name, it has to be the same as configured in the CephBlockPool. Now, simply wait for the Ceph OSD’s to finish provisioning and we’ll be done with our Ceph deployment: [root@kv-master-00 ~]# watch kubectl -n rook-ceph get pods | grep rook-ceph-osd-prepare rook-ceph-osd-prepare-kv-master-00. kubevirt-io-4npmf 0/1 Completed 0 20mrook-ceph-osd-prepare-kv-master-01. kubevirt-io-69smd 0/1 Completed 0 20mrook-ceph-osd-prepare-kv-master-02. kubevirt-io-zm7c2 0/1 Completed 0 20mrook-ceph-osd-prepare-kv-worker-00. kubevirt-io-5qmjg 0/1 Completed 0 20mThis process may take a few minutes as it has to zap the disks, deploy a BlueStore configuration on them, and start the OSD service pods across our nodes. The cluster deployment can be validated now: [root@kv-master-00 ~]# kubectl -n rook-ceph exec -it $toolbox shsh-4. 2# ceph -s cluster: id: 5a0bbe74-ce42-4f49-813d-7c434af65aad health: HEALTH_WARN too few PGs per OSD (4 < min 30) services: mon: 3 daemons, quorum a,b,c (age 12m) mgr: a(active, since 21m) osd: 4 osds: 4 up (since 20m), 4 in (since 20m) data: pools: 1 pools, 8 pgs objects: 0 objects, 0 B usage: 4. 0 GiB used, 72 GiB / 76 GiB avail pgs: 8 active+cleanOh Wait! the health value is again HEALTH_WARN, no problem! it is because there are too few PGs per OSD, in this case 4, for a minimum value of 30. Let’s fix it changing that value to 256: [root@kv-master-00 ~]# kubectl -n rook-ceph exec -it $toolbox shsh-4. 2# ceph osd pool set replicapool pg_num 256set pool 1 pg_num to 256sh-4. 2# ceph -s cluster: id: 5a0bbe74-ce42-4f49-813d-7c434af65aad health: HEALTH_OK services: mon: 3 daemons, quorum a,b,c (age 18m) mgr: a(active, since 27m) osd: 4 osds: 4 up (since 26m), 4 in (since 26m) data: pools: 1 pools, 256 pgs objects: 0 objects, 0 B usage: 4. 0 GiB used, 72 GiB / 76 GiB avail pgs: 12. 109% pgs unknown 0. 391% pgs not active 224 active+clean 31 unknown 1 peeringIn a moment, Ceph will end peering and the status of the pgs would be active+clean: sh-4. 2# ceph -s cluster: id: 5a0bbe74-ce42-4f49-813d-7c434af65aad health: HEALTH_OK services: mon: 3 daemons, quorum a,b,c (age 21m) mgr: a(active, since 29m) osd: 4 osds: 4 up (since 28m), 4 in (since 28m) data: pools: 1 pools, 256 pgs objects: 0 objects, 0 B usage: 4. 0 GiB used, 72 GiB / 76 GiB avail pgs: 256 active+cleanSome additional checks on the Ceph cluster can be performed: sh-4. 2# ceph osd treeID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF-1 0. 07434 root default-9 0. 01859 host kv-master-00-kubevirt-io 3 hdd 0. 01859 osd. 3 up 1. 00000 1. 00000-7 0. 01859 host kv-master-01-kubevirt-io 2 hdd 0. 01859 osd. 2 up 1. 00000 1. 00000-3 0. 01859 host kv-master-02-kubevirt-io 0 hdd 0. 01859 osd. 0 up 1. 00000 1. 00000-5 0. 01859 host kv-worker-00-kubevirt-io 1 hdd 0. 01859 osd. 1 up 1. 00000 1. 00000sh-4. 2# ceph osd status+----+--------------------------+-------+-------+--------+---------+--------+---------+-----------+| id | host | used | avail | wr ops | wr data | rd ops | rd data | state |+----+--------------------------+-------+-------+--------+---------+--------+---------+-----------+| 0 | kv-master-02. kubevirt-io | 1026M | 17. 9G | 0 | 0 | 0 | 0 | exists,up || 1 | kv-worker-00. kubevirt-io | 1026M | 17. 9G | 0 | 0 | 0 | 0 | exists,up || 2 | kv-master-01. kubevirt-io | 1026M | 17. 9G | 0 | 0 | 0 | 0 | exists,up || 3 | kv-master-00. kubevirt-io | 1026M | 17. 
9G | 0 | 0 | 0 | 0 | exists,up |+----+--------------------------+-------+-------+--------+---------+--------+---------+-----------+That should match the available block devices in the nodes, let’s check it in the kv-master-00 node: [root@kv-master-00 ~]# lsblkNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTsr0 11:0 1 512K 0 romvda 253:0 0 50G 0 disk└─vda1 253:1 0 50G 0 part /vdb 253:16 0 20G 0 disk└─ceph--09112f92--11cd--4284--b763--447065cc169c-osd--data--0102789c--852c--4696--96ce--54c2ad3a848b 252:0 0 19G 0 lvmTo validate that the pods are running on the correct nodes, check the NODE column below: [root@kv-master-00 ~]# kubectl get pods -n rook-ceph -o wide | egrep '(NAME|osd)'NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATESrook-ceph-osd-0-8689c68c78-rgdbj 1/1 Running 0 31m 10. 244. 2. 9 kv-master-02. kubevirt-io <none> <none>rook-ceph-osd-1-574cb85d9d-vs2jc 1/1 Running 0 31m 10. 244. 3. 18 kv-worker-00. kubevirt-io <none> <none>rook-ceph-osd-2-65b54c458f-zkk6v 1/1 Running 0 31m 10. 244. 1. 10 kv-master-01. kubevirt-io <none> <none>rook-ceph-osd-3-5fd97cd4c9-2xd6c 1/1 Running 0 30m 10. 244. 0. 10 kv-master-00. kubevirt-io <none> <none>rook-ceph-osd-prepare-kv-master-00. kubevirt-io-4npmf 0/1 Completed 0 31m 10. 244. 0. 9 kv-master-00. kubevirt-io <none> <none>rook-ceph-osd-prepare-kv-master-01. kubevirt-io-69smd 0/1 Completed 0 31m 10. 244. 1. 9 kv-master-01. kubevirt-io <none> <none>rook-ceph-osd-prepare-kv-master-02. kubevirt-io-zm7c2 0/1 Completed 0 31m 10. 244. 2. 8 kv-master-02. kubevirt-io <none> <none>rook-ceph-osd-prepare-kv-worker-00. kubevirt-io-5qmjg 0/1 Completed 0 31m 10. 244. 3. 17 kv-worker-00. kubevirt-io <none> <none>All good! For validating the storage provisioning through the new Ceph cluster managed by the Rook operator, a persistent volume claim (PVC) can be created: [root@kv-master-00 ~]# vim pvc. ymlapiVersion: v1kind: PersistentVolumeClaimmetadata: name: pv-claimspec: storageClassName: rook-ceph-block accessModes: - ReadWriteOnce resources: requests: storage: 1Gi[root@kv-master-00 ceph]# kubectl create -f pvc. ymlpersistentvolumeclaim/pv-claim createdEnsure that the storageClassName contains the name of the storage class you have created, in this case, rook-ceph-block For checking that it has been bound, list the PVCs and look for the ones in the rook-ceph-block storageclass: [root@kv-master-00 ~]# kubectl get pvcNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGEpv-claim Bound pvc-62a9738a-e027-4a68-9ecf-16278711ff64 1Gi RWO rook-ceph-block 63sIf the volume is still in a ‘Pending’ state, likely, that one of the pods haven’t come up correctly or one of the steps above has been missed. To check it, the command ‘kubectl get pods -n rook-ceph’ can be executed for viewing the running/failed pods. Before proceeding let’s clean up the temporary PVC: [root@kv-master-00 ~]# kubectl delete pvc pv-claimpersistentvolumeclaim pv-claim deletedCreating a Virtual Machine in KubeVirt backed by Ceph: Once the Ceph cluster is up and running, the first Virtual Machine can be created, to do so, a YML example file is being downloaded and modified: [root@kv-master-00 ~]# wget https://raw. githubusercontent. com/kubevirt/containerized-data-importer/master/manifests/example/vm-dv. yaml[root@kv-master-00 ~]# sed -i 's/hdd/rook-ceph-block/' vm-dv. yaml[root@kv-master-00 ~]# sed -i 's/fedora/centos/' vm-dv. yaml[root@kv-master-00 ~]# sed -i 's@https://download. cirros-cloud. net/0. 4. 0/cirros-0. 4. 0-x86_64-disk. img@http://cloud. centos. 
org/centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2@' vm-dv. yaml[root@kv-master-00 ~]# sed -i 's/storage: 100M/storage: 9G/' vm-dv. yaml[root@kv-master-00 ~]# sed -i 's/memory: 64M/memory: 1G/' vm-dv. yamlThe modified YAML could be run already like this but a user won’t be able to log in as we don’t know the password used in that image. cloud-init can be used to change the password of the default user of that image centos and grant us access, two parts have to be added: Add a second disk after the datavolumevolume (already existing), in this example is called cloudint:[root@kv-master-00 ~]# vim vm-dv. yaml---template: metadata: labels: kubevirt. io/vm: vm-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumevolume - disk: bus: virtio name: cloudinit Afterwards, add the volume at the end of the file, after the volume already defined as datavolumevolume, in this example it’s also called cloudinit:[root@kv-master-00 ~]# vim vm-dv. yaml---volumes: - dataVolume: name: centos-dv name: datavolumevolume - cloudInitNoCloud: userData: | #cloud-config password: changeme chpasswd: { expire: False } name: cloudinitThe password value (changeme in this example), can be set to your preferred one. Once the YAML file is prepared the Virtual Machine can be created and started: [root@kv-master-00 ~]# kubectl create -f vm-dv. yamlvirtualmachine. kubevirt. io/vm-centos-datavolume created[root@kv-master-00 ~]# kubectl get vmNAME AGE RUNNING VOLUMEvm-centos-datavolume 62m falseLet’s wait a little bit until the importer pod finishes, meanwhile you can check it with: [root@kv-master-00 ~]# kubectl get podsNAME READY STATUS RESTARTS AGEimporter-centos-dv-8v6l5 0/1 ContainerCreating 0 12sOnce that pods ends, the Virtual Machine can be started (in this case the virt parameter can be used because of the krew plugin system: [root@kv-master-00 tmp]# kubectl virt start vm-centos-datavolumeVM vm-centos-datavolume was scheduled to start[root@kv-master-00 ~]# kubectl get vmiNAME AGE PHASE IP NODENAMEvm-centos-datavolume 2m4s Running 10. 244. 3. 20 kv-worker-00. kubevirt-io[root@kv-master-00 ~]# kubectl get pvcNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGEcentos-dv Bound pvc-5604eb4a-21dd-4dca-8bb7-fbacb0791402 9Gi RWO rook-ceph-block 7m34sAwesome! the Virtual Machine is running in a pod through KubeVirt and it’s backed up with Ceph under the management of Rook. Now it’s the time for grabbing a coffee to allow cloud-init to do its job. A little while later let’s connect to that VM console: [root@kv-master-00 ~]# kubectl virt console vm-centos-datavolumeSuccessfully connected to vm-centos-datavolume console. The escape sequence is ^]CentOS Linux 7 (Core)Kernel 3. 10. 0-957. 27. 2. el7. x86_64 on an x86_64vm-centos-datavolume login: centosPassword:[centos@vm-centos-datavolume ~]$And there it is! our Kubernetes cluster provided with virtualization capabilities thanks to KubeVirt and backed up with a strong Ceph cluster under the management of Rook. Troubleshooting: It might happen that once the Ceph cluster is created, the hosts are not properly time-synchronized, in that case, the Ceph configuration can be modified to allow a bigger time difference between the nodes, in this case, the variable mon clock drift allowed is changed to 0. 
5 seconds, the steps to do so are the following: Connect to the toolbox pod to check the cluster status Modify the configMap with the Ceph cluster configuration Verify the changes Remove the mon pods to apply the new configuration[root@kv-master-00 ~]# kubectl -n rook-ceph exec -it $toolbox shsh-4. 2# ceph status cluster: id: 5a0bbe74-ce42-4f49-813d-7c434af65aad health: HEALTH_WARN clock skew detected on mon. c services: mon: 3 daemons, quorum a,b,c (age 3m) mgr: a(active, since 2m) osd: 4 osds: 4 up (since 105s), 4 in (since 105s) data: pools: 0 pools, 0 pgs objects: 0 objects, 0 B usage: 4. 0 GiB used, 72 GiB / 76 GiB avail pgs:[root@kv-master-00 ~]# kubectl -n rook-ceph edit ConfigMap rook-config-override -o yamlconfig: | [global] mon clock drift allowed = 0. 5[root@kv-master-00 ~]# kubectl -n rook-ceph get ConfigMap rook-config-override -o yamlapiVersion: v1data: config: | [global] mon clock drift allowed = 0. 5kind: ConfigMapmetadata: creationTimestamp: 2019-10-18T14:08:39Z name: rook-config-override namespace: rook-ceph ownerReferences: - apiVersion: ceph. rook. io/v1 blockOwnerDeletion: true kind: CephCluster name: rook-ceph uid: d0bd3351-e630-44af-b981-550e8a2a50ec resourceVersion: 12831 selfLink: /api/v1/namespaces/rook-ceph/configmaps/rook-config-override uid: bdf1f1fb-967a-410b-a2bd-b4067ce005d2[root@kv-master-00 ~]# kubectl -n rook-ceph delete pod $(kubectl -n rook-ceph get pods -o custom-columns=NAME:. metadata. name --no-headers| grep mon)pod rook-ceph-mon-a-8565577958-xtznq deletedpod rook-ceph-mon-b-79b696df8d-qdcpw deletedpod rook-ceph-mon-c-5df78f7f96-dr2jn deleted[root@kv-master-00 ~]# kubectl -n rook-ceph exec -it $toolbox shsh-4. 2# ceph status cluster: id: 5a0bbe74-ce42-4f49-813d-7c434af65aad health: HEALTH_OK services: mon: 3 daemons, quorum a,b,c (age 43s) mgr: a(active, since 9m) osd: 4 osds: 4 up (since 8m), 4 in (since 8m) data: pools: 0 pools, 0 pgs objects: 0 objects, 0 B usage: 4. 0 GiB used, 72 GiB / 76 GiB avail pgs:References: Kubernetes getting started KubeVirt Containerized Data Importer Ceph: free-software storage platform Ceph hardware recommendations Rook: Open-Source,Cloud-Native Storage for Kubernetes KubeVirt User Guide" }, { - "id": 91, + "id": 92, "url": "/2019/KubeVirt_k8s_crio_from_scratch_installing_KubeVirt.html", "title": "KubeVirt on Kubernetes with CRI-O from scratch - Installing KubeVirt", "author" : "Pedro Ibáñez Requena", "tags" : "cri-o, kubevirt installation", "body": "Building your environment for testing or automation purposes can be difficult when using different technologies. In this guide, you’ll find how to set up your system step-by-step to work with the latest versions of Kubernetes (up to today), CRI-O and KubeVirt. In this series of blogposts the following topics are going to be covered en each post: Requirements: dependencies and containers runtime Kubernetes: Cluster and Network KubeVirt: requirements and first Virtual MachineIn the first blogpost of the series (KubeVirt on Kubernetes with CRI-O from scratch) the initial set up for a CRI-O runtime environment has been covered. In the second blogpost of the series (Kubernetes: Cluster and Network) the Kubernetes cluster and network were set up based on the CRI-O installation already prepared in the first post. This is the last blogpost of the series of 3, in this case KubeVirt is going to be installed and also would be used to deploy an example Virtual Machine. Installing KubeVirt: What is KubeVirt? 
if you navigate to the KubeVirt webpage you can read: KubeVirt technology addresses the needs of development teams that have adopted or want to adopt Kubernetes but possess existing Virtual Machine-based workloads that cannot be easily containerized. More specifically, the technology provides a unified development platform where developers can build, modify, and deploy applications residing in both Application Containers as well as Virtual Machines in a common, shared environment. Benefits are broad and significant. Teams with a reliance on existing virtual machine-based workloads are empowered to rapidly containerize applications. With virtualized workloads placed directly in development workflows, teams can decompose them over time while still leveraging remaining virtualized components as is comfortably desired. In this example there is a Kubernetes cluster composed of a single master; for it to be schedulable and able to host the KubeVirt pods, a small modification has to be made: k8s-test.local# kubectl taint nodes k8s-test node-role.kubernetes.io/master:NoSchedule- The latest version of KubeVirt at the time of writing is v0.20.8; to confirm it, the following command can be executed: k8s-test.local# export KUBEVIRT_VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases/latest | jq -r .tag_name) k8s-test.local# echo $KUBEVIRT_VERSION v0.20.8 To install KubeVirt, the operator and the CR are going to be created with the following commands: k8s-test.local# kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml k8s-test.local# kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr.yaml This demo environment already runs within a virtualized environment, and in order to be able to run VMs here we need to pre-configure KubeVirt so it uses software-emulated virtualization instead of trying to use real hardware virtualization: k8s-test.local# kubectl create configmap kubevirt-config -n kubevirt --from-literal debug.useEmulation=true The deployment can be checked with the following command: k8s-test.local# kubectl get pods -n kubevirt NAME READY STATUS RESTARTS AGE virt-api-5546d58cc8-5sm4v 1/1 Running 0 16h virt-api-5546d58cc8-pxkgt 1/1 Running 0 16h virt-controller-5c749d77bf-cxxj8 1/1 Running 0 16h virt-controller-5c749d77bf-wwkxm 1/1 Running 0 16h virt-handler-cx7q7 1/1 Running 0 16h virt-operator-6b4dccb44d-bqxld 1/1 Running 0 16h virt-operator-6b4dccb44d-m2mvf 1/1 Running 0 16h Now that KubeVirt is installed, it is the right time to download the client tool used to interact with the Virtual Machines: k8s-test.local# wget -O virtctl https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/virtctl-${KUBEVIRT_VERSION}-linux-amd64 k8s-test.local# chmod +x virtctl k8s-test.local# ./virtctl Available Commands: console Connect to a console of a virtual machine instance. expose Expose a virtual machine instance, virtual machine, or virtual machine instance replica set as a new service. help Help about any command image-upload Upload a VM image to a PersistentVolumeClaim. restart Restart a virtual machine. start Start a virtual machine. stop Stop a virtual machine. version Print the client and server version information. vnc Open a vnc connection to a virtual machine instance. Use virtctl <command> --help for more information about a given command. Use virtctl options for a list of global command-line options (applies to all commands).
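Before moving on to the optional krew plugin below, a quick sanity check of the freshly downloaded client does not hurt. The following lines are only a sketch: virtctl version and the lifecycle subcommands listed in the help output above are real, while the VM name testvm is a placeholder for the VirtualMachine that gets created later in this guide.
# confirm the client can reach the virt-api deployed above and print both client and server versions
./virtctl version
# basic lifecycle operations against a VirtualMachine named "testvm" (created later in this guide)
./virtctl start testvm
./virtctl console testvm
./virtctl stop testvm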
This step is optional, right now anything related with the Virtual Machines can be done running the virtctl command. In case there’s a need to interact with the Virtual Machines without leaving the scope of the kubectl command, the virt plugin for Krew can be installed following the instructions below: k8s-test. local# ( set -x; cd $(mktemp -d) && curl -fsSLO https://github. com/kubernetes-sigs/krew/releases/download/v0. 3. 1/krew. {tar. gz,yaml} && tar zxvf krew. tar. gz && . /krew- $(uname | tr '[:upper:]' '[:lower:]')_amd64 install \ --manifest=krew. yaml --archive=krew. tar. gz). . . Installed plugin: krewWARNING: You installed a plugin from the krew-index plugin repository. These plugins are not audited for security by the Krew maintainers. Run them at your own risk. The warning printed by the Krew maintainers can be ignored. To have the krew plugin available, the PATH variable has to be modified: k8s-test. local# vim ~/. bashrcexport PATH= ${KREW_ROOT:-$HOME/. krew}/bin:$PATH k8s-test. local# source ~/. bashrcNow, the virt plugin is going to be installed using the krew plugin manager: k8s-test. local# kubectl krew install virtUpdated the local copy of plugin index. Installing plugin: virtCAVEATS:\ | virt plugin is a wrapper for virtctl originating from the KubeVirt project. In order to use virtctl you will | need to have KubeVirt installed on your Kubernetes cluster to use it. See https://kubevirt. io/ for details | | Run | | kubectl virt help | | to get an overview of the available commands | | See | | https://kubevirt. io/user-guide/virtual_machines/graphical_and_console_access/ | | for a usage example/Installed plugin: virtWARNING: You installed a plugin from the krew-index plugin repository. These plugins are not audited for security by the Krew maintainers. Run them at your own risk. Installing the first Virtual Machine in KubeVirt: For this example, a cirros Virtual Machine is going to be created, in this example, the kind of disk used is a registry disk (not persistent): k8s-test. local# kubectl apply -f https://kubevirt. io/labs/manifests/vm. yamlk8s-test. local# kubectl get vmsNAME AGE RUNNING VOLUMEtestvm 13s falseAfter the Virtual Machine has been created, it has to be started, to do so, the virtctl or the kubectl can be used (depending on what method has been chosen in previous steps). k8s-test. local# . /virtctl start testvmVM vm-cirros was scheduled to startk8s-test. local# kubectl get vmsNAME AGE RUNNING VOLUMEtestvm 7m11s trueNext thing to do is to use the kubectl command for getting the IP address and the actual status of the virtual machines: k8s-test. local# kubectl get vmiskubectl get vmisNAME AGE PHASE IP NODENAMEtestvm 14s Schedulingk8s-test. local# kubectl get vmisNAME AGE PHASE IP NODENAMEtestvm 63s Running 10. 244. 0. 15 k8s-testSo, finally the Virtual Machine is running and has an IP address. To connect to that VM, the console can be used (. /virtctl console testvm) or also a direct connection with SSH can be made: k8s-test. local# ssh cirros@10. 244. 0. 15cirros@10. 244. 0. 15's password: gocubsgo$ uname -aLinux testvm 4. 4. 0-28-generic #47-Ubuntu SMP Fri Jun 24 10:09:13 UTC 2016 x86_64 GNU/Linux$ exitTo stop the Virtual Machine one of the following commands can be executed: k8s-test. local# . /virtctl stop testvmVM testvm was scheduled to stopk8s-test. 
local# kubectl virt stop testvmVM testvm was scheduled to stopTroubleshooting: Each step of this guide has a place where to look for possible issues, in general, the troubleshooting guide of kubernetes can be checked. The following list tries to ease the possible troubleshooting in case of problems during each step of this guide: CRI-O: check the status of the CRI-O service systemctl status crio and also the messages in the journal journalctl -u crio -lf Kubernetes: check the status of the Kubelet service systemctl status kubelet and also the messages in the journal journalctl -u kubelet -fl Pods: for checking the status of the pods the kubectl command can be used in different ways kubectl get pods -A kubectl describe pod $pod Nodes: a Ready status would mean everything is ok with the node, otherwise the details of that node can be checked. kubectl get nodes -o wide kubectl get node <nodename> -o yaml KubeVirt: to check the status of the KubeVirt pods use kubectl get pods -n kubevirtReferences: Kubernetes getting started Kubernetes installing kubeadm Running CRI-O with kubeadm Kubernetes pod-network configuration Kubectl cheatsheet Multus KubeVirt User Guide KubeVirt Katacoda scenarios" }, { - "id": 92, + "id": 93, "url": "/2019/KubeVirt_k8s_crio_from_scratch_installing_kubernetes.html", "title": "KubeVirt on Kubernetes with CRI-O from scratch - Installing Kubernetes", "author" : "Pedro Ibáñez Requena", "tags" : "cri-o, kubernetes, ansible", "body": "Building your environment for testing or automation purposes can be difficult when using different technologies. In this guide you’ll find how to set up your system step-by-step to work with the latest versions of Kubernetes (up to today), CRI-O and KubeVirt. In this series of blogposts the following topics are going to be covered en each post: Requirements: dependencies and containers runtime Kubernetes: Cluster and Network KubeVirt: requirements and first Virtual MachineIn the first blogpost of the series (KubeVirt on Kubernetes with CRI-O from scratch) the initial set up for a CRI-O runtime environment has been covered. In this post is shown the installation and configuration of Kubernetes based in the previous CRI-O environment. Installing Kubernetes: If the ansible way was chosen, you may want to skip this section since the repository and needed packages were already installed during execution. To install the K8s packages a new repo has to be added: k8s-test. local# vim /etc/yum. repos. d/kubernetes. repo[Kubernetes]name=Kubernetesbaseurl=https://packages. cloud. google. com/yum/repos/kubernetes-el7-x86_64enabled=1gpgcheck=1repo_gpgcheck=1gpgkey=https://packages. cloud. google. com/yum/doc/yum-key. gpghttps://packages. cloud. google. com/yum/doc/rpm-package-key. gpgNow, the gpg keys of the packages can be imported into the system and the installation can proceed: k8s-test. local# rpm --import https://packages. cloud. google. com/yum/doc/yum-key. gpg https://packages. cloud. google. com/yum/doc/rpm-package-key. gpgk8s-test. local# yum install -y kubelet kubeadm kubectlOnce the Kubelet is configured and CRI-O also ready, the CRI-O daemon can be started and the setup of the cluster can be done: Note The kubelet will not start successfully until the Kubernetes cluster is installed. k8s-test. local# systemctl restart criok8s-test. 
local# systemctl enable --now kubeletInstalling the Kubernetes cluster: There are multiple ways for installing a Kubernetes cluster, in this example it will be done with the command kubeadm, the pod network cidr is the same that has been previously used for the CRI-O bridge in the 10-crio-bridge. conf configuration file: k8s-test. local# kubeadm init --pod-network-cidr=10. 244. 0. 0/16When the installation finishes the command will print a similar message like this one: Your Kubernetes control-plane has initialized successfully!To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/. kube sudo cp -i /etc/kubernetes/admin. conf $HOME/. kube/config sudo chown $(id -u):$(id -g) $HOME/. kube/configYou should now deploy a pod network to the cluster. Run kubectl apply -f [podnetwork]. yaml with one of the options listed at: https://kubernetes. io/docs/concepts/cluster-administration/addons/Then you can join any number of worker nodes by running the following on each as root:kubeadm join 192. 168. 0. 10:6443 --token 6fsrbi. iqsw1girupbwue5o \ --discovery-token-ca-cert-hash sha256:c7cf9d9681876856f9b7819067841436831f19004caadab0b5838a9bf7f4126aNow, it’s time to deploy the pod network. If the reader is curious and want to already check the status of the cluster, the following commands can be executed for getting all the pods running and their status: k8s-test. local# export KUBECONFIG=/etc/kubernetes/kubelet. confk8s-test. local# kubectl get pods -ANAMESPACE NAME READY STATUS RESTARTS AGEkube-system coredns-5644d7b6d9-ffnvx 1/1 Running 0 101skube-system coredns-5644d7b6d9-lh9gm 1/1 Running 0 101skube-system etcd-k8s-test 1/1 Running 0 59skube-system kube-apiserver-k8s-test 1/1 Running 0 54skube-system kube-controller-manager-k8s-test 1/1 Running 0 58skube-system kube-proxy-tdcdv 1/1 Running 0 101skube-system kube-scheduler-k8s-test 1/1 Running 0 50sInstalling the pod network: The Kubernetes pod-network documentation shows different add-on to handle the communications between the pods. In this example Virtual Machines will be deployed with KubeVirt and also they will have multiple network interfaces attached to the VMs, in this example Multus is going to be used. Some of the Multus Prerequisites indicate: After installing Kubernetes, you must install a default network CNI plugin. If you’re using kubeadm, refer to the “Installing a pod network add-on” section in the kubeadm documentation. If it’s your first time, we generally recommend using Flannel for the sake of simplicity. So flannel is going to be installed running the following commands: k8s-test. local# cd /rootk8s-test. local# wget https://raw. githubusercontent. com/coreos/flannel/master/Documentation/kube-flannel. ymlThe version of CNI has to be checked and ensured that is the 0. 3. 1 version, otherwise, it has to be changed, in this example the version 0. 2. 0 is replaced by the 0. 3. 1: k8s-test. local# grep cniVersion kube-flannel. yml cniVersion : 0. 2. 0 ,k8s-test. local# sed -i 's/0. 2. 0/0. 3. 1/g' kube-flannel. ymlk8s-test. local# kubectl apply -f kube-flannel. ymlpodsecuritypolicy. policy/psp. flannel. unprivileged createdclusterrole. rbac. authorization. k8s. io/flannel createdclusterrolebinding. rbac. authorization. k8s. io/flannel createdserviceaccount/flannel createdconfigmap/kube-flannel-cfg createddaemonset. apps/kube-flannel-ds-amd64 createddaemonset. apps/kube-flannel-ds-arm64 createddaemonset. apps/kube-flannel-ds-arm createddaemonset. apps/kube-flannel-ds-ppc64le createddaemonset. 
apps/kube-flannel-ds-s390x createdOnce the flannel network has been created the Multus can be defined, to check the status of the pods the following command can be executed: k8s-test. local# kubectl get pods -ANAMESPACE NAME READY STATUS RESTARTS AGEkube-system coredns-5644d7b6d9-9mfc9 1/1 Running 0 20hkube-system coredns-5644d7b6d9-sd6ck 1/1 Running 0 20hkube-system etcd-k8s-test 1/1 Running 0 20hkube-system kube-apiserver-k8s-test 1/1 Running 0 20hkube-system kube-controller-manager-k8s-test 1/1 Running 0 20hkube-system kube-flannel-ds-amd64-ml68d 1/1 Running 0 20hkube-system kube-proxy-lqjpv 1/1 Running 0 20hkube-system kube-scheduler-k8s-test 1/1 Running 0 20hTo load the multus configuration, the multus-cni repository has to be cloned, and also the kube-1. 16-change branch has to be used: k8s-test. local# git clone https://github. com/intel/multus-cni /root/src/github. com/multus-cnik8s-test. local# cd /root/src/github. com/multus-cnik8s-test. local# git checkout origin/kube-1. 16-changek8s-test. local# cd multus-cni/imagesTo load the multus daemonset the following command has to be executed: k8s-test. local# kubectl create -f multus-daemonset-crio. ymlcustomresourcedefinition. apiextensions. k8s. io/network-attachment-definitions. k8s. cni. cncf. io createdclusterrole. rbac. authorization. k8s. io/multus createdclusterrolebinding. rbac. authorization. k8s. io/multus createdserviceaccount/multus createdconfigmap/multus-cni-config createddaemonset. apps/kube-multus-ds-amd64 createddaemonset. apps/kube-multus-ds-ppc64le createdIn the next post KubeVirt: requirements and first Virtual Machine, the KubeVirt requirements will be set up together with the binaries and YAML files and also the first virtual Machines will be deployed. " }, { - "id": 93, + "id": 94, "url": "/2019/changelog-v0.22.0.html", "title": "KubeVirt v0.22.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 22. 0: Released on: Thu Oct 10 18:55:08 2019 +0200 Support for Nvidia GPUs and vGPUs exposed by Nvidia Kubevirt Device Plugin. VMIs now successfully start if they get a 0xfe prefixed MAC address assigned from the pod network Removed dependency on host semanage in SELinux Permissive mode Some changes as result of entering the CNCF sandbox (DCO check, FOSSA check, best practice badge) Many bug fixes and improvements in several areas CI: Introduced a OKD 4 test lane CI: Many improved tests, resulting in less flakyness" }, { - "id": 94, + "id": 95, "url": "/2019/KubeVirt_k8s_crio_from_scratch.html", "title": "KubeVirt on Kubernetes with CRI-O from scratch", "author" : "Pedro Ibáñez Requena", "tags" : "lab, cri-o, quickstart, homelab", "body": "Building your environment for testing or automation purposes can be difficult when using different technologies. In this guide you’ll find how to set up your system step-by-step to work with the latest versions up to today of Kubernetes, CRI-O and KubeVirt. In this series of blogposts the following topics are going to be covered en each post: Requirements: dependencies and containers runtime Kubernetes: Cluster and Network KubeVirt: requirements and first Virtual MachinePre-requisites: Versions: The following versions are going to be used: Software Purpose Version CentOS Operating System 7. 7. 1908 Kubernetes Orchestration v1. 16. 0 CRI-O Containers runtime 1. 16. 0-dev KubeVirt Virtual Machine Management on Kubernetes v0. 20. 7 Ansible (optional) Automation tool 2. 8. 
4 Requirements: It is a requirement to have a Virtual Machine (VM) with enough resources, in my case I am running a 16GB memory and 4vCPUs VM, but should probably be run with less resources. Operating System (OS) running this VM as indicated in the table above has to be CentOS 7. 7. 1908 and you should take care of its deployment. In my lab I used latest Centos 7 cloud image to speed up the provisioning process. In this guide the system will be named k8s-test. local and the IP address is 192. 168. 0. 10. A second system called laptop would be used to run the playbooks (if you choose to go the easy and automated way). It is also needed to have access to the root account in the VM for installing the required software and configure some kernel parameters. In this example only a Kubernetes master would be used. Instructions: Preparing the VM: Ensure the VM system is updated to the latest versions of the software and also ensure that the epel repository is installed: k8s-test. local# yum install epel-release -yk8s-test. local# yum update -yk8s-test. local# yum install vim jq -yThe following kernel parameters have to be configured: k8s-test. local# cat > /etc/sysctl. d/99-kubernetes-cri. conf <<EOFnet. bridge. bridge-nf-call-iptables = 1net. ipv4. ip_forward = 1net. bridge. bridge-nf-call-ip6tables = 1EOFAnd also the following kernel modules have to be installed: k8s-test. local# modprobe br_netfilterk8s-test. local# echo br_netfilter > /etc/modules-load. d/br_netfilter. confk8s-test. local# modprobe overlayk8s-test. local# echo overlay > /etc/modules-load. d/overlay. confThe new sysctl parameters have to be loaded in the system with the following command: k8s-test. local# sysctl -p/etc/sysctl. d/99-kubernetes-cri. confThe next step is to disable SELinux: k8s-test. local# setenforce 0k8s-test. local# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/configAnd the installation of Kubernetes and CRI-O can proceed. Installing Kubernetes and CRI-O: To install Kubernetes and CRI-O several ways can be used, in this guide there is the step-by-step guide where the user can do everything by himself or the alternative option, taking the easy road and running the ansible-playbook that will take care of almost everything. The ansible way: we are waiting for the PR to be merged in the official cri-o-ansible repository, meantime a fork in an alternative repository would be used. Also, note that the following commands are executed from a different place, in this case from a computer called laptop: laptop$ sudo yum install ansible -ylaptop# git clone https://github. com/ptrnull/cri-o-ansiblelaptop# cd cri-o-ansiblelaptop# git checkout fixes_k8s_1_16laptop# ansible-playbook cri-o. yml -i 192. 168. 0. 10,Once the playbook ends the system would be ready for getting CRI-O configured. The step-by-step way: If the ansible way was chosen, you may want to skip this section. Otherwise, let’s configure each piece. The required packages may be installed in the system running the following command: k8s-test. local# yum install btrfs-progs-devel container-selinux device-mapper-devel gcc git glib2-devel glibc-devel glibc-static gpgme-devel json-glib-devel libassuan-devel libgpg-error-devel libseccomp-devel make pkgconfig skopeo-containers tar wget -yInstall golang and the md2man packages: depending on the operating system running in your VM, it may be needed to change the name of the md2man golang package. k8s-test. 
local# yum install golang-github-cpuguy83-go-md2man golang -yThe following directories have to be created: /usr/local/go /etc/systemd/system/kubelet. service. d/ /var/lib/etcd /etc/cni/net. dk8s-test. local# for d in /usr/local/go /etc/systemd/system/kubelet. service. d/ /var/lib/etcd /etc/cni/net. d /etc/containers ; do mkdir -p $d; doneClone the runc repository: k8s-test. local# git clone https://github. com/opencontainers/runc /root/src/github. com/opencontainers/runcClone the CRI-O repository: k8s-test. local# git clone https://github. com/cri-o/cri-o /root/src/github. com/cri-o/cri-oClone the CNI repository: k8s-test. local# git clone https://github. com/containernetworking/plugins /root/src/github. com/containernetworking/pluginsTo build each part, a series of commands have to be executed, first building runc: k8s-test. local# cd /root/src/github. com/opencontainers/runck8s-test. local# export GOPATH=/rootk8s-test. local# make BUILDTAGS= seccomp selinux k8s-test. local# make installAnd also runc has to be linked in the correct path: k8s-test. local# ln -sf /usr/local/sbin/runc /usr/bin/runcNow building CRI-O (special focus on switching the branch): k8s-test. local# export GOPATH=/rootk8s-test. local# export GOBIN=/usr/local/go/bink8s-test. local# export PATH=/usr/local/go/bin:$PATHk8s-test. local# cd /root/src/github. com/cri-o/cri-ok8s-test. local# git checkout release-1. 16k8s-test. local# makek8s-test. local# make installk8s-test. local# make install. systemdk8s-test. local# make install. configCRI-O also needs the conmon software as a dependency: k8s-test. local# git clone http://github. com/containers/conmon /root/src/github. com/conmonk8s-test. local# cd /root/src/github. com/conmonk8s-test. local# makek8s-test. local# make installNow, the ContainerNetworking plugins have to be built and installed: k8s-test. local# cd /root/src/github. com/containernetworking/pluginsk8s-test. local# . /build_linux. shk8s-test. local# mkdir -p /opt/cni/bink8s-test. local# cp bin/* /opt/cni/bin/The cgroup manager has to be changed in the CRI-O configuration from the value of systemd to cgroupfs, to get it done, the file /etc/crio/crio. conf has to be edited and the variable cgroup_manager has to be replaced from its original value of systemd to cgroupfs (it could be already set it up to that value, in that case this step can be skipped): k8s-test. local# vim /etc/crio/crio. conf# group_manager = systemd group_manager = cgroupfs In the same file, the storage_driver is not configured, the variable storage_driver has to be uncommented and the value has to be changed from overlay to overlay2: k8s-test. local# vim /etc/crio/crio. conf#storage_driver = overlay storage_driver = overlay2 Also related with the storage, the storage_option has to be configured to have the following value: k8s-test. local# vim /etc/crio/crio. confstorage_option = [ overlay2. override_kernel_check=1 ]Preparing CRI-O: CRI-O is the lightweight container runtime for Kubernetes. As it is pointed in the CRI-O Website: CRI-O is an implementation of the Kubernetes CRI (Container Runtime Interface) to enable using OCI (Open Container Initiative) compatible runtimes. It is a lightweight alternative to using Docker as the runtime for Kubernetes. It allows Kubernetes to use any OCI-compliant runtime as the container runtime for running pods. Today it supports runc and Kata Containers as the container runtimes but any OCI-conformant runtime can be plugged in principle. 
CRI-O supports OCI container images and can pull from any container registry. It is a lightweight alternative to using Docker, Moby or rkt as the runtime for Kubernetes. The first step is to change the configuration of the network_dir parameter in the CRI-O configuration file, for doing so, the network_dir parameter in the /etc/crio/crio. conf file has to be changed to point to /etc/crio/net. d k8s-test. local$ vim /etc/crio/crio. conf[crio. network]# Path to the directory where CNI configuration files are located. network_dir = /etc/crio/net. d/ Also that directory has to be created: k8s-test. local$ mkdir /etc/crio/net. dThe reason behind that change is because CRI-O and kubeadm reset don’t play well together, as kubeadm reset empties /etc/cni/net. d/. Therefore, it is good to change the crio. network. network_dir in crio. conf to somewhere kubeadm won’t touch. To get more information the following link [Running CRI-O with kubeadm] in the References section can be checked. Now Kubernetes has to be configured to be able to talk to CRI-O, to proceed, a new file has to be created in /etc/default/kubelet with the following content: KUBELET_EXTRA_ARGS=--feature-gates= AllAlpha=false,RunAsGroup=true --container-runtime=remote --cgroup-driver=cgroupfs --container-runtime-endpoint='unix:///var/run/crio/crio. sock' --runtime-request-timeout=5mNow the systemd has to be reloaded: k8s-test. local# systemctl daemon-reloadCRI-O will use flannel network as it is recommended for multus so the following file has to be downloaded and configured: k8s-test. local# cd /etc/crio/net. d/k8s-test. local# wget https://raw. githubusercontent. com/cri-o/cri-o/master/contrib/cni/10-crio-bridge. confk8s-test. local# sed -i 's/10. 88. 0. 0/10. 244. 0. 0/g' 10-crio-bridge. confAs the previous code block has shown, the network used is 10. 244. 0. 0, now the crio service can be started and enabled: k8s-test. local# systemctl enable criok8s-test. local# systemctl start criok8s-test. local# systemctl status crio● crio. service - Container Runtime Interface for OCI (CRI-O) Loaded: loaded (/usr/local/lib/systemd/system/crio. service; enabled; vendor preset: disabled) Active: active (running) since mié 2019-10-02 16:17:06 CEST; 3s ago Docs: https://github. com/cri-o/cri-o Main PID: 15427 (crio) CGroup: /system. slice/crio. service └─15427 /usr/local/bin/criooct 02 16:17:06 k8s-test systemd[1]: Starting Container Runtime Interface for OCI (CRI-O). . . oct 02 16:17:06 k8s-test systemd[1]: Started Container Runtime Interface for OCI (CRI-O). In the next posts, the Kubernetes cluster will be set up, together with the pod Network and also the KubeVirt with the virtual Machines deployments. " }, { - "id": 95, + "id": 96, "url": "/2019/changelog-v0.21.0.html", "title": "KubeVirt v0.21.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 21. 0: Released on: Mon Sep 9 09:59:08 2019 +0200 CI: Support for Kubernetes 1. 
14 Many bug fixes in several areas Support for virtctl migrate Support configurable number of controller threads Support to opt-out of bridge binding for podnetwork Support for OpenShift Prometheus monitoring Support for setting more SMBIOS fields Improved containerDisk memory usage and speed Fix CRI-O memory limit Drop spc_t from launcher Add feature gates to security sensitive features" }, { - "id": 96, + "id": 97, "url": "/2019/CNCF-Sandbox.html", "title": "KubeVirt is now part of CNCF Sandbox", "author" : "Pablo Iranzo Gómez", "tags" : "CNCF, sandbox", "body": "Some time ago, with the PR https://github. com/cncf/toc/pull/265, KubeVirt was proposed to be part of the CNCF Sandbox. On 9th September 2019, the project has finally accomplished the required steps to get in (including two sponsors) to get listed as part of it at https://www. cncf. io/sandbox-projects/ The document with the proposal can be read at the final repo at https://github. com/cncf/toc/blob/master/proposals/sandbox/kubevirt. adoc for more information. It’s interesting to see the messages of support at the PR that show interesting use cases by our users, so keep an eye on them! " }, { - "id": 97, + "id": 98, "url": "/2019/changelog-v0.20.0.html", "title": "KubeVirt v0.20.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 20. 0: Released on: Fri Aug 9 16:42:41 2019 +0200 Containerdisks are now secure and they are not copied anymore on every start. Create specific SecurityContextConstraints on OKD instead of using the Added clone authorization check for DataVolumes with PVC source The sidecar feature is feature-gated now Use container image shasums instead of tags for KubeVirt deployments Protect control plane components against voluntary evictions with a Replaced hardcoded virtctl by using the basename of the call, this enables Added RNG device to all Fedora VMs in tests and examples (newer kernels might The virtual memory is now set to match the memory limit, if memory limit is Support nftable for CoreOS Added a block-volume flag to the virtctl image-upload command Improved virtctl console/vnc data flow Removed DataVolumes feature gate in favor of auto-detecting CDI support Removed SR-IOV feature gate, it is enabled by default now VMI-related metrics have been renamed from kubevirt_vm_ to kubevirt_vmi_ Added metric to report the VMI count Improved integration with HCO by adding a CSV generator tool and modified CI Improvements:" }, { - "id": 98, + "id": 99, "url": "/2019/Kubevirt-CR-Condition-Types-Rename-Now-ACTIVE.html", "title": "KubeVirt Condition Types Renamed", "author" : "Pablo Iranzo Gómez", "tags" : "Condition types", "body": "Hi,As previously announced in /2019/KubeVirt-CR-Condition-Types-Rename. html, Condition Types have been renamed from Ready to Available and from Updating to Progressing, check the linked article for more details. Check Release notes on kubevirt. io for v0. 20. 0 to see the changes. " }, { - "id": 99, + "id": 100, "url": "/2019/KubeVirt-CR-Condition-Types-Rename.html", "title": "KubeVirt Condition Types Rename in Custom Resource", "author" : "Pablo Iranzo Gómez", "tags" : "condition types", "body": "The announcement: Hi KubeVirt Community! As per the message from Marc Sluiter on our mailing list: Hello everybody,today we merged a PR [0], which renamed the condition types on the KubeVirt custom resources. 
This was done for alignment of conditions of all components in the KubeVirt ecosystem, which are deployed by the Hyperconverged Cluster Operator (HCO)[1], in order to make it easier for HCO to determine the deployment status of these components. The conditions are explained in detail in [2]. For KubeVirt this means that especially the Ready condition was renamed to Available . This might affect you in case you used the Ready condition for waiting for a successful deployment of KubeVirt. If so, you need to update the corresponding command to something like `kubectl -n kubevirt wait kv kubevirt --for condition=Available`. The second renamed condition is Updating . This one is named Progressing now. As explained in [2], there also is a new condition named Degraded . The Created and Synchronized conditions are unchanged. These changes take effect immediately if you are deploying KubeVirt from the master branch, or starting with the upcoming v0. 20. 0 release. [0] https://github. com/kubevirt/kubevirt/pull/2548[1] https://github. com/kubevirt/hyperconverged-cluster-operator[2] https://github. com/kubevirt/hyperconverged-cluster-operator/blob/main/docs/conditions. mdBest regards,We’re renaming some of the prior ‘conditions’ reported by the Custom Resources. What does it mean to us: CR Rename We’re making KubeVirt more compatible with the standard for Operators, when doing so, some of the conditions are changing, so check your scripts using checks for conditions to use the new ones. | Prior | Actual | Note || :———-: | :—————: | :—————- || Ready | Available | Updated || Updating | Progressing | Updated || - | Degraded | New condition || Created | Created | Unchanged || Synchronized | Synchronized | Unchanged | References: Check for more information on the following URL’s https://github. com/kubevirt/kubevirt/pull/2548 https://github. com/kubevirt/hyperconverged-cluster-operator https://github. com/kubevirt/hyperconverged-cluster-operator/blob/main/docs/conditions. md" }, { - "id": 100, + "id": 101, "url": "/2019/NodeDrain-KubeVirt.html", "title": "Node Drain in KubeVirt", "author" : "DirectedSoul", "tags" : "node drain, eviction, nmo", "body": "Introduction: In a Kubernetes (k8s) cluster, the control plane(scheduler) is responsible for deploying workloads(pods, deployments, replicasets) on the worker nodes depending on the resource availability. What do we do with the workloads if the need arises for maintaining this node? Well, there is good news, node-drain feature and node maintenance operator(NMO) both come to our rescue in this situation. This post discusses evicting the VMI(virtual machine instance) and other resources from the node using node drain feature and NMO. Note The environment used for writing this post is based on OpenShift 4 with 3 Masters and 3 Worker nodes. HyperconvergedClusterOperator: The goal of the hyper-converged-cluster-operator (HCO) is to provide a single entry point for multiple operators (kubevirt, cdi, networking, etc) where users can deploy and configure them in a single object. This operator is sometimes referred to as a “meta operator” or an “operator for operators”. Most importantly, this operator doesn’t replace or interfere with OLM which is an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. Check for more information about OLM. It only creates operator CRs, which is the user’s prerogative. 
In our cluster (3 masters and 3 workers) we'll be able to see something similar to: $oc get nodes ip-10-0-132-147.us-east-2.compute.internal Ready worker 14m v1.13.4+27816e1b1 ip-10-0-142-95.us-east-2.compute.internal Ready master 15m v1.13.4+27816e1b1 ip-10-0-144-125.us-east-2.compute.internal Ready worker 14m v1.13.4+27816e1b1 ip-10-0-150-125.us-east-2.compute.internal Ready master 14m v1.13.4+27816e1b1 ip-10-0-161-166.us-east-2.compute.internal Ready master 15m v1.13.4+27816e1b1 ip-10-0-173-203.us-east-2.compute.internal Ready worker 15m v1.13.4+27816e1b1 To test the node eviction, there are two methods. Method 1: Use the kubectl node drain command: Before sending a node into the maintenance state it is necessary to evict the resources running on it: VMIs, pods, deployments, etc. One of the easiest options is to stick to the oc adm drain command. For this, select the node from the cluster from which you want the VMIs to be evicted (oc get nodes), here ip-10-0-173-203.us-east-2.compute.internal, and then issue the following command: oc adm drain <node-name> --delete-local-data --ignore-daemonsets=true --force --pod-selector=kubevirt.io=virt-launcher --delete-local-data is used to remove any VMIs that use emptyDir volumes; however, the data in those volumes is ephemeral, which means it is safe to delete it after termination. --ignore-daemonsets=true is a required flag because KubeVirt deploys a DaemonSet named virt-handler that runs on each node. DaemonSets are not allowed to be evicted using kubectl drain. By default, if this command encounters a DaemonSet on the target node, the command will fail. This flag tells the command it is safe to proceed with the eviction and to just ignore DaemonSets. The --pod-selector=kubevirt.io=virt-launcher flag tells the command to evict the pods that are managed by KubeVirt. Evict a node: If you want to evict all pods from the node just use: oc adm drain <node name> --delete-local-data --ignore-daemonsets=true --force How to evacuate VMIs via Live Migration from a Node: If the LiveMigration feature gate is enabled, it is possible to specify an evictionStrategy on VMIs which will react to specific taints on nodes with live migrations. The following snippet on a VMI ensures that the VMI is migrated if the kubevirt.io/drain:NoSchedule taint is added to a node: spec: evictionStrategy: LiveMigrate Once the VMI is created, taint the node with kubectl taint nodes foo kubevirt.io/drain=draining:NoSchedule This command will then trigger a migration. Behind the scenes a PodDisruptionBudget is created for each VMI which has an evictionStrategy defined. This ensures that evictions are blocked on these VMIs and that we can guarantee that a VMI will be migrated instead of shut off. Re-enabling a Node after Eviction: We have seen how to make the node unschedulable, now let's see how to re-enable the node. The oc adm drain command will result in the target node being marked as unschedulable. This means the node will not be eligible for running new VirtualMachineInstances or Pods. If the target node should become schedulable again, the following command must be run: oc adm uncordon <node name> Method 2: Use the Node Maintenance Operator (NMO): NMO is part of the HyperConvergedClusterOperator, so we need to deploy it. Either check the gist for deploying HCO or the blog post on HCO. Here we will continue using the gist for demonstration purposes.
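Deploying HCO from the gist takes a few minutes, so it can be convenient to block until everything the meta operator creates reports Ready before continuing. The command below is only a convenience sketch (it is not part of the original gist) and assumes the kubevirt-hyperconverged namespace used in the rest of this post.
# wait until every pod created by HCO and its operands is Ready
oc wait --for=condition=Ready pods --all -n kubevirt-hyperconverged --timeout=10m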
Observe the resources that get created after the HCO is installed $oc get pods -n kubevirt-hyperconvergedNAME READY STATUS RESTARTS AGEcdi-apiserver-769fcc7bdf-xgpt8 1/1 Running 0 12mcdi-deployment-8b64c5585-gq46b 1/1 Running 0 12mcdi-operator-77b8847b96-kx8rx 1/1 Running 0 13mcdi-uploadproxy-8dcdcbff-47lng 1/1 Running 0 12mcluster-network-addons-operator-584dff99b8-2c96w 1/1 Running 0 13mhco-operator-59b559bd44-vpznq 1/1 Running 0 13mkubevirt-ssp-operator-67b78446f7-b9klr 1/1 Running 0 13mkubevirt-web-ui-operator-9df6b67d9-f5l4l 1/1 Running 0 13mnode-maintenance-operator-6b464dc85-zd6nt 1/1 Running 0 13mvirt-api-7655b9696f-g48p8 1/1 Running 1 12mvirt-api-7655b9696f-zfsw9 1/1 Running 0 12mvirt-controller-7c4584f4bc-6lmxq 1/1 Running 0 11mvirt-controller-7c4584f4bc-6m62t 1/1 Running 0 11mvirt-handler-cfm5d 1/1 Running 0 11mvirt-handler-ff6c8 1/1 Running 0 11mvirt-handler-mcl7r 1/1 Running 1 11mvirt-operator-87d7c98b-fvvzt 1/1 Running 0 13mvirt-operator-87d7c98b-xzc42 1/1 Running 0 13mvirt-template-validator-76cbbd6f68-5fbzx 1/1 Running 0 12mAs seen from above HCO deploys the node-maintenance-operator. Next, let’s install a kubevirt CR to start using VM workloads on worker nodes. Please feel free to follow the steps here and deploy a VMI as explained. Please feel free to check the video that explains the same $oc get vmsNAME AGE RUNNING VOLUMEtestvm 2m13s trueDeploy a node-maintenance-operator CR: As seen from above NMO is deployed from HCO, the purpose of this operator is to watch the node maintenance CustomResource(CR) called NodeMaintenance which mainly contains the node that needs a maintenance and the reason for it. The below actions are performed If a NodeMaintenance CR is created: Marks the node as unschedulable, cordons it and evicts all the pods from that node If a NodeMaintenance CR is deleted: Marks the node as schedulable, uncordons it, removes pod from maintenance. To install the NMO, please follow upsream instructions at NMO Either use HCO to create NMO Operator or deploy NMO operator as shown below After you follow the instructions: Create a CRD oc create -f deploy/crds/nodemaintenance_crd. yamlcustomresourcedefinition. apiextensions. k8s. io/nodemaintenances. kubevirt. io created Create the NS oc create -f deploy/namespace. yamlnamespace/node-maintenance-operator created Create a Service Account: oc create -f deploy/service_account. yamlserviceaccount/node-maintenance-operator created Create a ROLE oc create -f deploy/role. yamlclusterrole. rbac. authorization. k8s. io/node-maintenance-operator created Create a ROLE Binding oc create -f deploy/role_binding. yamlclusterrolebinding. rbac. authorization. k8s. io/node-maintenance-operator created Then finally make sure to add the image version of the NMO operator in the deploy/operator. yml image: quay. io/kubevirt/node-maintenance-operator:v0. 3. 0 and then deploy the NMO Operator as shown oc create -f deploy/operator. yamldeployment. apps/node-maintenance-operator created Finally, We can verify the deployment for the NMO Operator as below oc get deployment -n node-maintenance-operatorNAME READY UP-TO-DATE AVAILABLE AGEnode-maintenance-operator 1/1 1 1 4m23sNow that the NMO operator is created, we can create the NMO CR which puts the node into maintenance mode (this CR has the info about the node->from which the pods needs to be evicted and the reason for the maintenance) cat deploy/crds/nodemaintenance_cr. yamlapiVersion: kubevirt. 
io/v1alpha1kind: NodeMaintenancemetadata: name: nodemaintenance-xyzspec: nodeName: <Node-Name> reason: Test node maintenance For testing purpose, we can deploy a sample VM instance as shown: kubectl apply -f https://kubevirt. io/labs/manifests/vm. yamlNow start the VM testvm . /virtctl start testvmWe can see that it’s up and running kubectl get vmisNAME AGE PHASE IP NODENAMEtestvm 92s Running 10. 131. 0. 17 ip-10-0-173-203. us-east-2. compute. internalAlso, we can see the status: kubectl get vmis -o yaml testvm. . . interfaces: - ipAddress: 10. 131. 0. 17 mac: 0a:58:0a:83:00:11 name: default migrationMethod: BlockMigration nodeName: ip-10-0-173-203. us-east-2. compute. internal #NoteDown the nodeName phase: RunningNote down the node name and edit the nodemaintenance_cr. yaml file and then issue the CR manifest which sends the node into maintenance. Now to evict the pods from the node ip-10-0-173-203. us-east-2. compute. internal, edit the node-maintenance_cr. yaml as shown: cat deploy/crds/nodemaintenance_cr. yamlapiVersion: kubevirt. io/v1alpha1kind: NodeMaintenancemetadata: name: nodemaintenance-xyzspec: nodeName: ip-10-0-173-203. us-east-2. compute. internal reason: Test node maintenance As soon as you apply the above CR, the current VM gets deployed in the other node, oc apply -f deploy/crds/nodemaintenance_cr. yamlnodemaintenance. kubevirt. io/nodemaintenance-xyz createdWhich immediately evicts the VMI kubectl get vmisNAME AGE PHASE IP NODENAMEtestvm 33s Schedulingkubectl get vmisNAME AGE PHASE IP NODENAMEtestvm 104s Running 10. 128. 2. 20 ip-10-0-132-147. us-east-2. compute. internalip-10-0-173-203. us-east-2. compute. internal Ready,SchedulingDisabled workerWhen all of this happens, we can view the changes that are taking place with: oc logs pods/node-maintenance-operator-645f757d5-89d6r -n node-maintenance-operator. . . { level : info , ts :1559681430. 650298, logger : controller_nodemaintenance , msg : Applying Maintenance mode on Node: ip-10-0-173-203. us-east-2. compute. internal with Reason: Test node maintenance , Request. Namespace : , Request. Name : nodemaintenance-xyz }{ level : info , ts :1559681430. 7509086, logger : controller_nodemaintenance , msg : Taints: [{\ key\ :\ node. kubernetes. io/unschedulable\ ,\ effect\ :\ NoSchedule\ },{\ key\ :\ kubevirt. io/drain\ ,\ effect\ :\ NoSchedule\ }] will be added to node ip-10-0-173-203. us-east-2. compute. internal }{ level : info , ts :1559681430. 7509348, logger : controller_nodemaintenance , msg : Applying kubevirt. io/drain taint add on Node: ip-10-0-173-203. us-east-2. compute. internal }{ level : info , ts :1559681430. 7509415, logger : controller_nodemaintenance , msg : Patchi{ level : info , ts :1559681430. 9903986, logger : controller_nodemaintenance , msg : evicting pod \ virt-controller-b94d69456-b9dkw\ \n }{ level : info , ts :1559681430. 99049, logger : controller_nodemaintenance , msg : evicting pod \ community-operators-5cb68db58-4m66j\ \n }{ level : info , ts :1559681430. 9905066, logger : controller_nodemaintenance , msg : evicting pod \ alertmanager-main-1\ \n }{ level : info , ts :1559681430. 9905581, logger : controller_nodemaintenance , msg : evicting pod \ virt-launcher-testvm-q5t7l\ \n }{ level : info , ts :1559681430. 9905746, logger : controller_nodemaintenance , msg : evicting pod \ redhat-operators-6b6f6bd788-zx8nm\ \n }{ level : info , ts :1559681430. 990588, logger : controller_nodemaintenance , msg : evicting pod \ image-registry-586d547bb5-t9lwr\ \n }{ level : info , ts :1559681430. 
9906075, logger : controller_nodemaintenance , msg : evicting pod \ kube-state-metrics-5bbd4c45d5-sbnbg\ \n }{ level : info , ts :1559681430.9906383, logger : controller_nodemaintenance , msg : evicting pod \ certified-operators-9f9f6fd5c-9ltn8\ \n }{ level : info , ts :1559681430.9908028, logger : controller_nodemaintenance , msg : evicting pod \ virt-api-59d7c4b595-dkpvs\ \n }{ level : info , ts :1559681430.9906204, logger : controller_nodemaintenance , msg : evicting pod \ router-default-6b57bcc884-frd57\ \n }{ level : info , ts :1559681430.9908257, logger : controller_nodemaintenance , msg : evict

Clearly we can see that the previous node went into the SchedulingDisabled state and the VMI was evicted and placed onto another node in the cluster. This demonstrates node eviction using NMO. VirtualMachine eviction notes: The eviction of any VirtualMachineInstance that is owned by a VirtualMachine set to running=true will result in the VirtualMachineInstance being re-scheduled to another node. The VirtualMachineInstance in this case will be forced to power down and restart on another node. In the future, once KubeVirt introduces live migration support, the VM will be able to seamlessly migrate to another node during eviction. Wrap-up: The NMO achieved its aim of successfully evicting the VMIs from the node, so we can now safely repair/update the node and make it available for running workloads again once the maintenance is over. " }, { - "id": 101, + "id": 102, "url": "/2019/How-To-Import-VM-into-Kubevirt.html", "title": "How to import VM into KubeVirt", "author" : "DirectedSoul", "tags" : "cdi, vm import", "body": "Introduction: Kubernetes has become the new way to orchestrate containers and handle microservices, but what if I already have applications running on my old VMs in my datacenter? Can those apps ever be made k8s friendly? Well, if that is the use case for you, then we have a solution with KubeVirt! In this blog post we will show you how to deploy a VM as a YAML template and the steps required to import it as a PVC into your Kubernetes environment using the CDI and KubeVirt add-ons. Assumptions: A basic understanding of the k8s architecture: in its simplest terms, Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem; Kubernetes services, support, and tools are widely available. For complete details check Kubernetes-architecture. The user is familiar with the concept of a libvirt-based VM. A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator; feel free to read more on Persistent Volume (PV). A Persistent Volume Claim (PVC) is a request for storage by a user. It is similar to a pod: pods consume node resources and PVCs consume PV resources. Feel free to read more on Persistent Volume Claim (PVC). The user is familiar with the concepts of the KubeVirt-architecture and CDI-architecture. The user has already installed KubeVirt in an available k8s environment; if not, please follow the link Installing KubeVirt before proceeding. The user is already familiar with VM operation in Kubernetes; for a refresher on how to use Virtual Machines in Kubernetes, please check LAB 1 before proceeding. 
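Since PVCs come up repeatedly in what follows, here is a minimal, hedged refresher; the claim name, namespace and size are purely illustrative, and later sections show how CDI can populate such a claim with a disk image:

```sh
# A plain PVC request; KubeVirt can later attach the bound volume as a VM disk
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-vm-disk   # hypothetical name
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
```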
Creating Virtual Machines from local images with CDI and virtctl: The Containerized Data Importer (CDI) project provides facilities for enabling Persistent Volume Claims (PVCs) to be used as disks for KubeVirt VMs. The three main CDI use cases are: Import a disk image from a URL to a PVC (HTTP/S3) Clone an existing PVC Upload a local disk image to a PVCThis document covers the third use case and covers the HTTP based import use case at the end of this post. NOTE: You should have CDI installed in your cluster, a VM disk that you’d like to upload, and virtctl in your path Please follow the instructions for the installation of CDI (v1. 9. 0 as of this writing) Expose cdi-uploadproxy service: The cdi-uploadproxy service must be accessible from outside the cluster. Here are some ways to do that: NodePort Service Ingress RouteWe can take a look at example manifests here The supported image formats are: . img . iso . qcow2 Compressed (. tar, . gz or . xz) of the above formats. We will use this image from CirrOS Project (in . img format) We can use virtctl command for uploading the image as shown below: virtctl image-upload --helpUpload a VM image to a PersistentVolumeClaim. Usage: virtctl image-upload [flags]Examples: # Upload a local disk image to a newly created PersistentVolumeClaim: virtctl image-upload --uploadproxy-url=https://cdi-uploadproxy. mycluster. com --pvc-name=upload-pvc --pvc-size=10Gi --image-path=/images/fedora28. qcow2Flags: --access-mode string The access mode for the PVC. (default ReadWriteOnce ) -h, --help help for image-upload --image-path string Path to the local VM image. --insecure Allow insecure server connections when using HTTPS. --no-create Don't attempt to create a new PVC. --pvc-name string The destination PVC. --pvc-size string The size of the PVC to create (ex. 10Gi, 500Mi). --storage-class string The storage class for the PVC. --uploadproxy-url string The URL of the cdi-upload proxy service. --wait-secs uint Seconds to wait for upload pod to start. (default 60)Use virtctl options for a list of global command-line options (applies to all commands). Creation of VirtualMachineInstance from a PVC: Here, virtctl image-upload works by creating a PVC of the requested size, sending an UploadTokenRequest to the cdi-apiserver, and uploading the file to the cdi-uploadproxy. virtctl image-upload --pvc-name=cirros-vm-disk --pvc-size=500Mi --image-path=/home/shegde/images/cirros-0. 4. 0-x86_64-disk. img --uploadproxy-url=<url to upload proxy service>The data inside are ephemeral meaning is lost when the VM restarts, in order to prevent that, and provide a persistent data storage, we use PVC (persistentVolumeClaim) which allows connecting a PersistentVolumeClaim to a VM disk. cat <<EOF | kubectl apply -f -apiVersion: kubevirt. io/v1alpha3kind: VirtualMachineInstancemetadata: name: cirros-vmspec: domain: devices: disks: - disk: bus: virtio name: pvcdisk machine: type: resources: requests: memory: 64M terminationGracePeriodSeconds: 0 volumes: - name: pvcdisk persistentVolumeClaim: claimName: cirros-vm-diskstatus: {}EOFA PersistentVolume can be in filesystem or block mode: Filesystem: For KubeVirt to be able to consume the disk present on a PersistentVolume’s filesystem, the disk must be named disk. img and be placed in the root path of the filesystem. Currently the disk is also required to be in raw format. Important: The disk. img image file needs to be owned by the user-id 107 in order to avoid permission issues. Additionally, if the disk. 
img image file has not been created manually before starting a VM then it will be created automatically with the PersistentVolumeClaim size. Since not every storage provisioner provides volumes with the exact usable amount of space as requested (e. g. due to filesystem overhead), KubeVirt tolerates up to 10% less available space. This can be configured with the pvc-tolerate-less-space-up-to-percent value in the kubevirt-config ConfigMap. Block: Use a block volume for consuming raw block devices. To do that, BlockVolume feature gate must be enabled. A simple example which attaches a PersistentVolumeClaim as a disk may look like this: metadata: name: testvmi-pvcapiVersion: kubevirt. io/v1alpha3kind: VirtualMachineInstancespec: domain: resources: requests: memory: 64M devices: disks: - name: mypvcdisk lun: {} volumes: - name: mypvcdisk persistentVolumeClaim: claimName: mypvcCreation with a DataVolume: DataVolumes are a way to automate importing virtual machine disks onto pvc’s during the virtual machine’s launch flow. Without using a DataVolume, users have to prepare a pvc with a disk image before assigning it to a VM or VMI manifest. With a DataVolume, both the pvc creation and import is automated on behalf of the user. DataVolume VM Behavior: DataVolumes can be defined in the VM spec directly by adding the DataVolumes to the dataVolumeTemplates list. Below is an example. apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: labels: kubevirt. io/vm: vm-alpine-datavolume name: vm-alpine-datavolumespec: running: false template: metadata: labels: kubevirt. io/vm: vm-alpine-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 resources: requests: memory: 64M volumes: - dataVolume: #Note the type is dataVolume name: alpine-dv name: datavolumedisk1 dataVolumeTemplates: # Automatically a PVC of size 2Gi is created - metadata: name: alpine-dv spec: pvc: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: #This is the source where the ISO file resides http: url: http://cdi-http-import-server. kubevirt/images/alpine. isoFrom the above manifest the two main sections that needs an attention are source and pvc. The source part declares that there is a disk image living on an http server that we want to use as a volume for this VM. The pvc part declares the spec that should be used to create the pvc that hosts the source data. When this VM manifest is posted to the cluster, as part of the launch flow a pvc will be created using the spec provided and the source data will be automatically imported into that pvc before the VM starts. When the VM is deleted, the storage provisioned by the DataVolume will automatically be deleted as well. A few caveats to be considered before using DataVolumes: A DataVolume is a custom resource provided by the Containerized Data Importer (CDI) project. KubeVirt integrates with CDI in order to provide users a workflow for dynamically creating pvcs and importing data into those pvcs. In order to take advantage of the DataVolume volume source on a VM or VMI, the DataVolumes feature gate must be enabled in the kubevirt-config config map before KubeVirt is installed. CDI must also be installed(follow the steps as mentioned above). Enabling the DataVolumes feature gate: Below is an example of how to enable DataVolume support using the kubevirt-config config map. cat <<EOF | kubectl create -f -apiVersion: v1kind: ConfigMapmetadata: name: kubevirt-config namespace: kubevirt labels: kubevirt. 
io: data: feature-gates: DataVolumes EOFThis config map assumes KubeVirt will be installed in the KubeVirt namespace. Change the namespace to suit your installation. First post the configmap above, then install KubeVirt. At that point DataVolume integration will be enabled. Wrap-up: As demonstrated, VM can be imported as a k8s object using a CDI project along with KubeVirt. For more detailed insights, please feel free to follow the KubeVirt project. " }, { - "id": 102, + "id": 103, "url": "/2019/website-roadmap.html", "title": "Website roadmap", "author" : "Pablo Iranzo Gómez", "tags" : "website, community, roadmap", "body": "Detour ahead!Working with websites and with this KubeVirt website for a while has given the idea of things that should improve it. As this is a community-driven effort, what could better do rather than ask YOU for feedback? We’ve created a TODO. md file to track what we’ve identified. Additionally, as that file is part of the repository, it can be PR’d, commented via the PR’s to it, etc. Please, let us know what do you think about what is proposed, or propose new ideas to be added (or create new issues to have them added) Thanks for being part of KubeVirt community! " }, { - "id": 103, + "id": 104, "url": "/2019/kubevirt-with-ansible-part-2.html", "title": "KubeVirt with Ansible, part 2", "author" : "mmazur", "tags" : "ansible", "body": "Part 1 contained a short introduction to basic VM management with Ansible’s kubevirt_vm module. This time we’ll paint a more complete picture of all the features on offer. As before, examples found herein are also available as full working playbooks in ourplaybooks example repository. Additionally, each section of this post links to the corresponding module’s Ansible documentation page. Those pages always contain an Examples section, which the reader is encouraged to look through, as they havemany more ways of using the modules than can reasonably fit here. More VM management: Virtual machines managed by KubeVirt are highly customizable. Among the features accessible from Ansible, are: various libvirt–level virtualized hardware tweaks (e. g. machine_type or cpu_model), network interface configuration (interfaces), including multi–NIC utilizing the Multus CNI, non–persistent VMs (ephemeral: yes), direct DataVolumes support (datavolumes), and OpenShift Templates support (template). Further resources: Ansible module documentation Examples, lots of examples DataVolumes Introductory blog post Upstream documentation Multus Introductory blog post GitHub repo VM Image Management with the Containerized Data Importer: The main functionality of the kubevirt_pvc module is to manage Persistent Volume Claims. The following snippetshould seem familiar to anyone who dealt with PVCs before: kubevirt_pvc: name: pvc1 namespace: default size: 100Mi access_modes: - ReadWriteOnceRunning it inside a playbook will result in a new PVC named pvc1 with the access mode ReadWriteOnce and at least100Mi of storage assigned. The option dedicated to working with VM images is named cdi_source and lets one fill a PVC with data immediatelyupon creation. But before we get to the examples, the Containerized Data Importer needs to be properly deployed,which is as simple as running the following commands: export CDI_VER=$(curl -s https://github. com/kubevirt/containerized-data-importer/releases/latest | grep -o v[0-9]\. [0-9]*\. [0-9]* )kubectl apply -f https://github. com/kubevirt/containerized-data-importer/releases/download/$CDI_VER/cdi-operator. 
yamlkubectl apply -f https://github. com/kubevirt/containerized-data-importer/releases/download/$CDI_VER/cdi-cr. yamlOnce kubectl get pods -n cdi confirms all pods are ready, CDI is good to go. The module can instruct CDI to fill the PVC with data from: a remote HTTP(S) server (http:), a container registry (registry:), a local file (upload: yes), though this requires using kubevirt_cdi_upload for the actual upload step, or nowhere (the blank: yes option). Here’s a simple example: kubevirt_pvc:name: pvc2namespace: defaultsize: 100Miaccess_modes: - ReadWriteOncewait: yescdi_source: http: url: https://download. cirros-cloud. net/0. 4. 0/cirros-0. 4. 0-x86_64-disk. img infoPlease notice the wait: yes parameter. The module will only exit after CDI has completed transferring its data. Let’s see this in action: [mmazur@klapek part2]$ ansible-playbook pvc_cdi. yaml(…)TASK [Create pvc and fetch data] **********************************************************************************changed: [localhost]PLAY RECAP ********************************************************************************************************localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0[mmazur@klapek part2]$ kubectl get pvcNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGEpvc2 Bound local-pv-6b6380e2 37Gi RWO local 71s[mmazur@klapek part2]$ kubectl get pvc/pvc2 -o yaml|grep cdi cdi. kubevirt. io/storage. import. endpoint: https://download. cirros-cloud. net/0. 4. 0/cirros-0. 4. 0-x86_64-disk. img cdi. kubevirt. io/storage. import. importPodName: importer-pvc2-gvn5c cdi. kubevirt. io/storage. import. source: http cdi. kubevirt. io/storage. pod. phase: SucceededEverything worked as expected. Further resources: Ansible module documentation (kubevirt_pvc) Ansible module documentation (kubevirt_cdi_upload) CDI GitHub RepoInventory plugin: The default way of using Ansible is to iterate over a list of hosts and perform operations on each one. Listing KubeVirt VMs can be done using the KubeVirt inventory plugin. It needs a bit of setting up before it canbe used. First, enable the plugin in ansible. cfg: [inventory]enable_plugins = kubevirtThen configure the plugin using a file named kubevirt. yml or kubevirt. yaml: plugin: kubevirtconnections: - namespaces: - default network_name: defaultAnd now let’s see if it worked and there’s a VM running in the default namespace (as represented by thenamespace_default inventory group): [mmazur@klapek part2]$ ansible -i kubevirt. yaml namespace_default --list-hosts [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does notmatch 'all' hosts (0):Right, we don’t have any VMs running. Let’s go back to part 1, create vm1, make sure it’s runingand then try again: [mmazur@klapek part2]$ ansible-playbook . . /part1/02_vm1. yaml(…)PLAY RECAP ********************************************************************************************************localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0[mmazur@klapek part2]$ ansible-playbook . . /part1/01_vm1_running. yaml(…)PLAY RECAP ********************************************************************************************************localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0[mmazur@klapek part2]$ ansible -i kubevirt. yaml namespace_default --list-hosts hosts (1): default-vm1-2c680040-9e75-11e9-8839-525500d15501Works! 
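From here the dynamic inventory can be fed into any regular Ansible run; for example, a hedged one-liner (the playbook name site.yml is purely illustrative) that targets every VMI discovered in the default namespace could look like this:

```sh
# Run a playbook only against the hosts in the namespace_default group
# produced by the kubevirt inventory plugin configured in kubevirt.yaml
ansible-playbook -i kubevirt.yaml --limit namespace_default site.yml
```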
Further resources: Ansible inventory plugin documentationMore: Lastly, for the sake of brevity, a quick mention of the remaining modules: kubevirt_presets allows setting upVM presets to be used by deployed VMs, kubevirt_template brings in a generictemplating mechanism, when running on top of OpenShift or OKD, and kubevirt_rs lets one configure KubeVirt’sown ReplicaSets for running multiple instances of a specified virtual machine. " }, { - "id": 104, + "id": 105, "url": "/2019/changelog-v0.19.0.html", "title": "KubeVirt v0.19.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 19. 0: Released on: Fri Jul 5 12:52:16 2019 +0200 Fixes when run on kind Fixes for sub-resource RBAC Limit pod network interface bindings Many additional bug fixes in many areas Additional testcases for updates, disk types, live migration with NFS Additional testcases for memory over-commit, block storage, cpu manager, Improvements around HyperV Improved error handling for runStartegies Improved update procedure Improved network metrics reporting (packets and errors) Improved guest overhead calculation Improved SR-IOV testsuite Support for live migration auto-converge Support for config-drive disks Support for setting a pullPolicy con containerDisks Support for unprivileged VMs when using SR-IOV Introduction of a project security policy" }, { - "id": 105, + "id": 106, "url": "/2019/changelog-v0.18.0.html", "title": "KubeVirt v0.18.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 18. 0: Released on: Wed Jun 5 22:25:09 2019 +0200 Build: Use of go modules CI: Support for Kubernetes 1. 13 Countless testcase fixes and additions Several smaller bug fixes Improved upgrade documentation" }, { - "id": 106, + "id": 107, "url": "/2019/Kubevirt-vagrant-provider.html", "title": "KubeVirt vagrant provider", "author" : "pkliczewski", "tags" : "vagrant, lifecycle, virtual machines", "body": "IntroductionVagrant is a command line utility for managing the lifecycle of virtual machines. There are number of providers available which allow to control and provision virtual machines in different environment. In this blog post we update how to use the provider to manage KubeVirt. The KubeVirt Vagrant provider implements the following features: Manages virtual machines lifecycle - start, halt, status and destroy. Creates virtual machines using templates, container disks or existing pvc. Supports Vagrant built-in provisioners. Provides ability to ssh to the virtual machines Supports folder synchronization by using rsyncInstallationIn order to use the provider we need to install Vagrant first. The steps how to do it are available here. Once command line tool is available in our system, we can install the plugin by running: $ vagrant plugin install vagrant-kubevirtNow, we can obtain predefined box and start it using: $ vagrant up --provider=kubevirtVirtual machine definitionInstead of building a virtual machine from scratch, which would be a slow and tedious process, Vagrant uses a base image as template for virtual machines. These base images are known as “boxes” and every provider must introduce its own box format. The provider introduces kubevirt boxes. You can view an example box here. There are two ways to tell Vagrant, how to connect to KubeVirt cluster in Vagrantfile: use Kubernetes configuration file. When no other connection details provided, the provider will look for kubeconfig using value of KUBECONFIG environment variable or $HOME/. kube/config location. 
define connection details as part of the box definition:

Vagrant.configure("2") do |config|
  config.vm.provider :kubevirt do |kubevirt|
    kubevirt.hostname = '<kubevirt host>'
    kubevirt.port = '<kubevirt port>'
    kubevirt.token = '<token>'
  end
end

Values used in the above sample box: kubevirt host - hostname where KubeVirt is deployed; kubevirt port - port on which KubeVirt is listening; token - user token used to authenticate requests. There are a number of options we can customize for a specific virtual machine: cpus - number of CPUs used by the virtual machine; memory - amount of memory used by the virtual machine. We can choose one of the three following options: template - name of a template which will be used to create the virtual machine; image - name of a container disk stored in a registry; pvc - name of a persistent volume claim containing the virtual machine disk. Below you can find a sample Vagrantfile exposing all the supported features:

Vagrant.configure("2") do |config|
  # name of the box
  config.vm.box = 'kubevirt'
  # vm boot timeout
  config.vm.boot_timeout = 360
  # disables the default vagrant folder
  config.vm.synced_folder ".", "/vagrant", disabled: true
  # synchronizes a directory between the host and the virtual machine
  config.vm.synced_folder "$HOME/src", "/srv/website", type: "rsync"
  # uses the provision action to touch a file in the virtual machine
  config.vm.provision "shell" do |s|
    s.inline = "touch example.txt"
  end
  # defines virtual machine resources and the source of its disk
  config.vm.provider :kubevirt do |kubevirt|
    kubevirt.cpus = 2
    kubevirt.memory = 512
    kubevirt.image = 'kubevirt/fedora-cloud-container-disk-demo'
  end
  # defines a user configured on the virtual machine using cloud-init
  config.ssh.username = 'vagrant'
  config.ssh.password = 'vagrant'
end

Usage: Now that we have defined a virtual machine, let's see how to use the provider to manage it. vagrant up: the above command starts a virtual machine and performs any additional operations defined in the Vagrantfile, such as provisioning and folder synchronization. For more information check here. vagrant halt: the above command stops a virtual machine. For more information check here. vagrant status: the above command reports the status of a virtual machine. For more information check here. vagrant destroy: the above command stops a virtual machine and destroys all the resources it used. For more information check here. vagrant provision: the above command runs the configured provisioners for a specific virtual machine. For more information check here. vagrant ssh: the above command opens an SSH session to a running virtual machine. For more information check here. Future work: There are still a couple of features we would like to implement, such as network management and user-friendly box packaging. " }, { - "id": 107, + "id": 108, "url": "/2019/kubevirt-with-ansible-part-1.html", "title": "KubeVirt with Ansible, part 1 – Introduction", "author" : "mmazur", "tags" : "ansible", "body": "KubeVirt is a great solution for migrating existing workloads towards Kubernetes without having to containerize everything all at once (or at all). If some parts of your system can run as pods, while others are perfectly fine as virtual machines, KubeVirt is the technology that lets you seamlessly run both in a single cluster. And with the recent release of Ansible 2.8 containing a new set of dedicated modules, it's now possible to treat KubeVirt just like any other Ansible-supported VM hosting system. Already an Ansible user? 
Or maybe still researching your options?This series of posts should give you a good primer on how combining both technologies can ease your Kubernetes journey. Prerequisites: While it’s possible to specify the connection and authentication details of your k8s cluster directly in theplaybook, for the purpose of this introduction, we’ll assume you have a working kubeconfig file in your system. Ifrunning kubectl get nodes correctly returns a list of nodes and you’ve already deployed KubeVirt, then you’regood to go. If not, here’s a KubeVirt quickstart (with Minikube). Basic VM management: Before we get down to the YAML, please keep in mind that this post contains only the most interesting bits of the playbooks. To get actually runnable versions of each example, take a look at this code repository. Let’s start with creating the most basic VM by utilizing the kubevirt_vm module, like so: kubevirt_vm: namespace: default name: vm1 state: runningAnd now run it: [mmazur@klapek blog1]$ ansible-playbook 01_vm1_running. yaml(…)TASK [Create first vm?] *******************************************************************************************fatal: [localhost]: FAILED! => { changed : false, msg : It's impossible to create an empty VM or change state of a non-existent VM. }PLAY RECAP ********************************************************************************************************localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0Oops, too basic. Let’s try again, but this time with a small set of parameters specifying cpu, memory and a boot disk. The latter will be a demo image provided by the KubeVirt project. kubevirt_vm: namespace: default name: vm1 cpu_cores: 1 memory: 64Mi disks: - name: containerdisk volume: containerDisk: image: kubevirt/cirros-container-disk-demo:latest disk: bus: virtioAnd run it: [mmazur@klapek blog1]$ ansible-playbook 02_vm1. yaml(…)TASK [Create first vm, for real this time] ************************************************************************changed: [localhost]PLAY RECAP ********************************************************************************************************localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0It worked! One thing to note is that by default kubevirt_vm will not start a newly–created VM. Running kubectl get vms -n default will confirm as much. Changing this behavior requires specifying state: running as one of the module’s parameters when creating a new VM. Or we can get vm1 toboot by running the first playbook one more time, since this time the task will be interpreted as attempting to change the state ofan existing VM to running, which is what we want. [mmazur@klapek blog1]$ ansible-playbook 01_vm1_running. yaml(…)TASK [Create first vm] ********************************************************************************************changed: [localhost]PLAY RECAP ********************************************************************************************************localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0While the first two runs likely finished almost immediately, this time around ansible-playbook is waiting for the VM to boot, sodon’t be alarmed if that takes a bit of time. If everything went correctly, you should have an actual virtual machine running inside your k8s cluster. If present, the virtctl toolcan be used to log onto the new VM and to take a look around. Run virtctl console vm1 -n default and press ENTER to get a login prompt. 
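If you prefer a single step, the creation and start can be combined by adding state: running to the same task; a minimal hedged sketch (the file name and the play skeleton around the module call are illustrative) might look like this:

```sh
# Write a one-task playbook that creates vm1 and starts it immediately,
# reusing the parameters shown above, then run it.
cat > vm1_create_and_run.yaml <<'EOF'
- hosts: localhost
  connection: local
  tasks:
    - name: Create and start vm1 in one go
      kubevirt_vm:
        namespace: default
        name: vm1
        state: running
        cpu_cores: 1
        memory: 64Mi
        disks:
          - name: containerdisk
            volume:
              containerDisk:
                image: kubevirt/cirros-container-disk-demo:latest
            disk:
              bus: virtio
EOF
ansible-playbook vm1_create_and_run.yaml
```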
It’s useful to note at this point something about how Ansible and Kubernetes operate. This is best illustrated with an example. Let’s runthe first playbook one more time: [mmazur@klapek blog1]$ ansible-playbook 01_vm1_running. yaml(…)TASK [Create first vm?] *******************************************************************************************ok: [localhost]PLAY RECAP ********************************************************************************************************localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0The output is almost the same as on the previous run, with the one difference being that this time no changes were reported (changed=0). This is a concept called idempotency and is present in both Kubernetes and Ansible (though not everywhere). In this context it means that if the state you want to achieve with your playbook (have the VM running) is the state that the clustercurrently is in (the VM is already running) then nothing will change, no matter how many times you attempt the operation. Note Kubernetes versions prior to 1. 12 contain a bug that might report operations that didn’t really do anything as having changed things. If your second (and third, etc. ) run of 01_vm1_running. yaml keep reporting changed=1, this might be the reason why. Let’s finish with cleaning up after ourselves by removing vm1. First the relevant YAML: kubevirt_vm: namespace: default name: vm1 state: absentAnd run it: [mmazur@klapek blog1]$ ansible-playbook 03_vm1_absent. yaml(…)TASK [Delete the vm] **********************************************************************************************changed: [localhost]PLAY RECAP ********************************************************************************************************localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0Now the VM is gone, which running kubectl get vms -n default will confirm. Just like before, if you run the playbook a few more times, the play recap will keep reporting changed=0. Next: Please read part two for a wider overview of available features. " }, { - "id": 108, + "id": 109, "url": "/2019/changelog-v0.17.0.html", "title": "KubeVirt v0.17.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 17. 0: Released on: Mon May 6 16:18:01 2019 +0200 Several testcase additions Improved virt-controller node distribution Improved support between version migrations Support for a configurable MachineType default Support for live-migration of a VM on node taints Support for VM swap metrics Support for versioned virt-launcher / virt-handler communication Support for HyperV flags Support for different VM run strategies (i. e manual and rerunOnFailure) Several fixes for live-migration (TLS support, protected pods)" }, { - "id": 109, + "id": 110, "url": "/2019/Hyper-Converged-Operator.html", "title": "Hyper Converged Operator", "author" : "DirectedSoul", "tags" : "HCO, hyperconverged operator", "body": "HCO known as Hyper Converged OperatorPrerequisites: This Blog assumes that the reader is aware of the concept of Operators and how it works in K8’s environment. Before proceeding further, feel free to take a look at this concept using CoreOS BlogPost What it does?: The goal of the hyperconverged-cluster-operator (HCO) is to provide a single entrypoint for multiple operators - kubevirt, cdi, networking, etc… - where users can deploy and configure them in a single object. 
This operator is sometimes referred to as a “meta operator” or an “operator for operators”. Most importantly, this operator doesn’t replace or interfere with OLM. It only creates operator CRs, which is the user’s prerogative. How does it work?: In this blog post, I’d like to focus on the first method(i. e by deploying a HCO using a CustomResourceDefinition method)which might seem like the most immediate benefit of this feature. Let’s get started! Environment description: We can use HCO both on minikube and also on OpenShift 4. We will be using OpenShift 4 for HCO in this post. Note: All the commands for installing HCO on minikube will remain the same as documented below, please follow the link Install_HCO_minikube install minikube by adjusting the memory to your requirement(atleast 4GiB of RAM is recommended). Deploying HCO on OpenShift 4 Cluster. : OpenShift Installation steps for OpenShift 4 including video tutorial can be found here Upon successful installation of OpenShift, we will have a cluster consisting of 3 masters and 3 workers which can be used for HCO integration $oc versionClient Version: version. Info{Major: 4 , Minor: 1+ , GitVersion: v4. 1. 0 , GitCommit: 2793c3316 , GitTreeState: , BuildDate: 2019-04-23T07:46:06Z , GoVersion: , Compiler: , Platform: }Server Version: version. Info{Major: 1 , Minor: 12+ , GitVersion: v1. 12. 4+0ba401e , GitCommit: 0ba401e , GitTreeState: clean , BuildDate: 2019-03-31T22:28:12Z , GoVersion: go1. 10. 8 , Compiler: gc , Platform: linux/amd64 }Check the nodes: $oc get nodesNAME STATUS ROLES AGE VERSIONip-10-0-133-213. us-east-2. compute. internal Ready worker 12m v1. 13. 4+da48e8391ip-10-0-138-120. us-east-2. compute. internal Ready master 18m v1. 13. 4+da48e8391ip-10-0-146-51. us-east-2. compute. internal Ready master 18m v1. 13. 4+da48e8391ip-10-0-150-215. us-east-2. compute. internal Ready worker 12m v1. 13. 4+da48e8391ip-10-0-160-201. us-east-2. compute. internal Ready master 17m v1. 13. 4+da48e8391ip-10-0-168-28. us-east-2. compute. internal Ready worker 12m v1. 13. 4+da48e8391Clone the HCO repo: git clone https://github. com/kubevirt/hyperconverged-cluster-operator. gitThis gives all the necessary go packages and yaml manifests for the next steps. Let’s create a NameSpace for the HCO deployment oc create new-project kubevirt-hyperconvergedNow switch to the kubevirt-hyperconverged NameSpace oc project kubevirt-hyperconvergedNow launch all the CRD’s oc create -f deploy/converged/crds/hco. crd. yamloc create -f deploy/converged/crds/kubevirt. crd. yamloc create -f deploy/converged/crds/cdi. crd. yamloc create -f deploy/converged/crds/cna. crd. yamlLet’s see the yaml file for HCO Custom Resource Definition ---apiVersion: apiextensions. k8s. io/v1beta1kind: CustomResourceDefinitionmetadata: name: hyperconvergeds. hco. kubevirt. iospec: additionalPrinterColumns: - JSONPath: . metadata. creationTimestamp name: Age type: date - JSONPath: . status. phase name: Phase type: string group: hco. kubevirt. io names: kind: HyperConverged plural: hyperconvergeds shortNames: - hco - hcos singular: hyperconverged scope: Namespaced subresources: status: {} version: v1alpha1 versions: - name: v1alpha1 served: true storage: trueLet’s create ClusterRoleBindings, ClusterRole, ServerAccounts and Deployments for the operator $ oc create -f deploy/convergedAnd after verifying all the above resources we can now finally deploy our HCO custom resource $ oc create -f deploy/converged/crds/hco. cr. 
yamlWe can take a look at the YAML definition of the CustomResource of HCO: Let’s create ClusterRoleBindings, ClusterRole, ServerAccounts and Deployments for the operator $ oc create -f deploy/convergedAnd after verifying all the above resources we can now finally deploy our HCO custom resource $ oc create -f deploy/converged/crds/hco. cr. yamlWe can take a look at the YAML definition of the CustomResource of HCO: ---apiVersion: hco. kubevirt. io/v1alpha1kind: HyperConvergedmetadata: name: hyperconverged-clusterAfter successfully executing the above commands,we should be now be having a virt-controller pod, HCO pod, and a network-addon pod functional and can be viewed as below. Let’s see the deployed pods: $oc get podsNAME READY STATUS RESTARTS AGEcdi-apiserver-769fcc7bdf-rv8zt 1/1 Running 0 5m2scdi-deployment-8b64c5585-g7zfk 1/1 Running 0 5m2scdi-operator-c77447cc7-58ld2 1/1 Running 0 11mcdi-uploadproxy-8dcdcbff-rddl6 1/1 Running 0 5m2scluster-network-addons-operator-85cd468ff5-xjgds 1/1 Running 0 11mhyperconverged-cluster-operator-75dd9c96f9-pqvdk 1/1 Running 0 11mvirt-api-7f5bfb4c58-bkbhq 1/1 Running 0 4m59svirt-api-7f5bfb4c58-kkvwc 1/1 Running 1 4m59svirt-controller-6ccbfb7d5b-m7ljf 1/1 Running 0 3m49svirt-controller-6ccbfb7d5b-mbvlv 1/1 Running 0 3m49svirt-handler-hqz9d 1/1 Running 0 3m49svirt-operator-667b6c845d-jfnsr 1/1 Running 0 11mAlso the below deployments: $oc get deploymentsNAME READY UP-TO-DATE AVAILABLE AGEcdi-apiserver 1/1 1 1 10mcdi-deployment 1/1 1 1 10mcdi-operator 1/1 1 1 16mcdi-uploadproxy 1/1 1 1 10mcluster-network-addons-operator 1/1 1 1 16mhyperconverged-cluster-operator 1/1 1 1 16mvirt-api 2/2 2 2 9m58svirt-controller 2/2 2 2 8m49svirt-operator 1/1 1 1 16mNote Here, Once we applied the Custom Resource the operator took care of deploying the actual KubeVirt pods (virt-api, virt-controller and virt-handler), CDI pods(cdi-upload-proxy, cdi-apiserver, cdi-deployment, cdi-operator) and Network add-on pods ( cluster-network-addons-operator). We will need to wait until all of the resources are up and running. This can be done using the command above or by using the command above with the -wflag. After the HCO is up and running on the cluster, we should be able to see the info of CRD’s $oc get crds | grep kubevirtcdiconfigs. cdi. kubevirt. io 2019-05-07T20:22:17Zcdis. cdi. kubevirt. io 2019-05-07T20:20:58Zdatavolumes. cdi. kubevirt. io 2019-05-07T20:22:17Zhyperconvergeds. hco. kubevirt. io 2019-05-07T20:20:58Zkubevirtcommontemplatesbundles. kubevirt. io 2019-05-07T20:20:58Zkubevirtnodelabellerbundles. kubevirt. io 2019-05-07T20:20:58Zkubevirts. kubevirt. io 2019-05-07T20:20:58Zkubevirttemplatevalidators. kubevirt. io 2019-05-07T20:20:58Zkwebuis. kubevirt. io 2019-05-07T20:20:58Znetworkaddonsconfigs. networkaddonsoperator. network. kubevirt. io 2019-05-07T20:20:58Znodemaintenances. kubevirt. io 2019-05-07T20:20:58Zvirtualmachineinstancemigrations. kubevirt. io 2019-05-07T20:23:02Zvirtualmachineinstancepresets. kubevirt. io 2019-05-07T20:23:01Zvirtualmachineinstancereplicasets. kubevirt. io 2019-05-07T20:23:02Zvirtualmachineinstances. kubevirt. io 2019-05-07T20:23:01Zvirtualmachines. kubevirt. io 2019-05-07T20:23:02ZNote In OpenShift we can use both kubectl and oc interchangeably to interact with the cluster objects once HCO is up and running. 
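If you would rather not poll the pod list by hand, the wait can also be scripted; a hedged sketch (the timeout value is arbitrary, and oc wait assumes a reasonably recent client) is shown below:

```sh
# Stream pod status until the HCO components settle (Ctrl+C to stop)...
oc get pods -n kubevirt-hyperconverged -w

# ...or block until every pod in the namespace reports Ready
oc wait --for=condition=Ready pods --all -n kubevirt-hyperconverged --timeout=10m

# Then confirm the KubeVirt-related CRDs registered by the operators
oc get crds | grep kubevirt
```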
You can also read more about CDI, CNA, ssp-operator, web-ui and KubeVirt:: CDI CNA KubeVirt ssp-operator kubevirt-web-ui NodeMaintenanceHCO using the OLM methodNote The complete architecture of OLM and its components that connect together can be understood using the link OLM_architecture Replace with your Docker organization as official operator-registry images for HCO will not be provided. Next, build and publish the converged HCO operator-registry image. cd deploy/convergedexport HCO_DOCKER_ORG=<docker_org>docker build --no-cache -t docker. io/$HCO_DOCKER_ORG/hco-registry:example -f Dockerfile . docker push docker. io/$HCO_DOCKER_ORG/hco-registry:exampleAs an example deployment, Let’s take the value of operator-registry image as docker. io/rthallisey/hyperconverged-cluster-operator:latestNow, Let’s create the kubevirt-hyperconverged NS as below oc create ns kubevirt-hyperconvergedCreate the OperatorGroup cat <<EOF | oc create -f -apiVersion: operators. coreos. com/v1alpha2kind: OperatorGroupmetadata: name: hco-operatorgroup namespace: kubevirt-hyperconvergedEOFCreate a Catalog Source backed by a grpc registry cat <<EOF | oc create -f -apiVersion: operators. coreos. com/v1alpha1kind: CatalogSourcemetadata: name: hco-catalogsource namespace: openshift-operator-lifecycle-manager imagePullPolicy: Alwaysspec: sourceType: grpc image: docker. io/rthallisey/hco-registry:v0. 1-8 displayName: KubeVirt HyperConverged publisher: Red HatEOFPlease wait until the hco-catalogsource pod comes up Next is to create a subscription, we can create a subscription from the OpenShift4 web interface as shown below: Once subscribed, we can create a kubevirt Hyperconverged Operator from UI: Install the HCO Operator: Please wait until the virt-operator, cdi-operator and cluster-network-addons-operator comes up. After they are up, its now time to launch the HCO-Custom Resource itself: Once the HCO Operator is deployed in the kubevirt-hyperconverged NS, we can see all the pods are up and running: We can verify the same from the CLI: oc get pods -n kubevirt-hyperconvergedNAME READY STATUS RESTARTS AGEcdi-apiserver-769fcc7bdf-b5v8n 1/1 Running 0 4m5scdi-deployment-8b64c5585-qs527 1/1 Running 0 4m4scdi-operator-77b8847b96-5kmb2 1/1 Running 0 4m55scdi-uploadproxy-8dcdcbff-xgnxf 1/1 Running 0 4m5scluster-network-addons-operator-584dff99b8-c5kz5 1/1 Running 0 4m55shco-operator-59b559bd44-lgdnm 1/1 Running 0 4m55skubevirt-ssp-operator-67b78446f7-l7rfv 1/1 Running 0 4m55skubevirt-web-ui-operator-9df6b67d9-mzf6s 1/1 Running 0 4m55snode-maintenance-operator-6b464dc85-v6vmw 1/1 Running 0 4m55svirt-api-7b56d7dd89-8s78r 1/1 Running 0 2m59svirt-api-7b56d7dd89-h75t8 1/1 Running 1 2m59svirt-controller-77c6d6d779-9qpp4 1/1 Running 0 2m32svirt-controller-77c6d6d779-vbbxg 1/1 Running 0 2m32svirt-handler-4bfb9 1/1 Running 0 2m32svirt-handler-ns97x 1/1 Running 0 2m32svirt-handler-q7wbh 1/1 Running 0 2m32svirt-operator-87d7c98b-mh8pg 1/1 Running 0 4m55svirt-operator-87d7c98b-p6mbd 1/1 Running 0 4m55sWe can see how OLM operator manages the HCO pods from the openshift-operator-lifecycle-manager NS: The above method demonstrates the integration of HCO operator in OpenShift4. So, after HCO is up and running we need to test it by deploying a small instance of a VM. To deploy an instance follow the instructions here minikube_quickstart: Conclusion: What to expect next? HCO achieved its goal which was to provide a single entrypoint for multiple operators - kubevirt, cdi, networking, etc. 
where users can deploy and configure them in a single object as seen above. Now, we can also launch the HCO through OLM. Note Until we publish (and consume) the HCO and component operators through operatorhub. io, this is a means to demonstrate the HCO workflow without OLMOnce we publish operators through Marketplace at OperatorHub. io, it will be available here " }, { - "id": 110, + "id": 111, "url": "/2019/changelog-v0.16.0.html", "title": "KubeVirt v0.16.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 16. 0: Released on: Fri Apr 5 23:18:22 2019 +0200 Bazel fixes Initial work to support upgrades (not finalized) Initial support for HyperV features Support propagation of MAC addresses to multus Support live migration cancellation Support for table input devices Support for generating OLM metadata Support for triggering VM live migration on node taints" }, { - "id": 111, + "id": 112, "url": "/2019/More-about-Kubevirt-metrics.html", "title": "More About Kubevirt Metrics", "author" : "fromanirh", "tags" : "metrics, prometheus", "body": "More about KubeVirt and Prometheus metricsIn this blog post, we update about the KubeVirt metrics, continuing the series started earlier this year. Since the previous post, the initial groundwork and first set of metrics was merged, and it is expectedto be available with KubeVirt v0. 15. 0 and onwards. Make sure you followed the steps described in the previous post to set up properly the monitoring stackin your KubeVirt-powered cluster. New metrics: Let’s look at the initial set of metrics exposed by KubeVirt 0. 15. 0: kubevirt_info{goversion= go1. 11. 4 ,kubeversion= v0. 15. 0-alpha. 0. 74+d7aaf3b5df4a60-dirty }kubevirt_vm_memory_resident_bytes{domain= $VM_NAME }kubevirt_vm_network_traffic_bytes_total{domain= $VM_NAME ,interface= $IFACE_NAME0 ,type= rx }kubevirt_vm_network_traffic_bytes_total{domain= $VM_NAME ,interface= $IFACE_NAME0 ,type= tx }kubevirt_vm_storage_iops_total{domain= $VM_NAME ,drive= $DRIVE_NAME ,type= read }kubevirt_vm_storage_iops_total{domain= $VM_NAME ,drive= $DRIVE_NAME ,type= write }kubevirt_vm_storage_times_ms_total{domain= $VM_NAME ,drive= $DRIVE_NAME ,type= read }kubevirt_vm_storage_times_ms_total{domain= $VM_NAME ,drive= $DRIVE_NAME ,type= write }kubevirt_vm_storage_traffic_bytes_total{domain= $VM_NAME ,drive= $DRIVE_NAME ,type= read }kubevirt_vm_storage_traffic_bytes_total{domain= $VM_NAME ,drive= $DRIVE_NAME ,type= write }kubevirt_vm_vcpu_seconds{domain= $VM_NAME ,id= 0 ,state= 1 }The metrics expose versioning information according to the recommendations using the kubevirt_info metric; the other metrics should be self-explanatory. As we can expect, labels like domain, drive and interface depend on the specifics of the VM. type, however, is not and represents the subtype of the metric. Let’s now see a real life example, from this idle, diskless VM: apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: creationTimestamp: null labels: kubevirt. io/vm: vm-test-01 name: vm-test-01spec: running: false template: metadata: creationTimestamp: null labels: kubevirt. io/vm: vm-test-01 spec: domain: devices: interfaces: - name: default bridge: {} machine: type: resources: requests: memory: 64M networks: - name: default pod: {} terminationGracePeriodSeconds: 0status: {}Querying the endpoint (see below) yields something like kubevirt_info{goversion= go1. 11. 4 ,kubeversion= v0. 15. 0 } 1kubevirt_vm_memory_resident_bytes{domain= default_vm-test-01 } 4. 
25984e+07kubevirt_vm_network_traffic_bytes_total{domain= default_vm-test-01 ,interface= vnet0 ,type= rx } 90kubevirt_vm_network_traffic_bytes_total{domain= default_vm-test-01 ,interface= vnet0 ,type= tx } 0kubevirt_vm_vcpu_seconds{domain= default_vm-test-01 ,id= 0 ,state= 1 } 613Example of how the kubevirt_vm_memory_resident_bytes metric looks like in the Prometheus UI Accessing the metrics programmatically: We can access the VM metrics using the standard Prometheus API. For example, let’s get the same data about the memory consumption we have seen above in the Prometheus UI. The following query yields all the data for the year 2019, aggregated every two hours. Not much data in this case, but beware of potentially large result sets. curl -g 'http://$CLUSTER_IP:9090/api/v1/query_range?query=kubevirt_vm_memory_resident_bytes&start=2019-01-01T00:00:00. 001Z&end=2019-12-31T23:59:59. 999Z&step=7200s' | json_ppWhich yields something like { data : { resultType : matrix , result : [ { values : [ [1552514400. 001, 44036096 ], [1552521600. 001, 42348544 ], [1552528800. 001, 44040192 ], [1552536000. 001, 42291200 ], [1552543200. 001, 42450944 ], [1552550400. 001, 43315200 ] ], metric : { __name__ : kubevirt_vm_memory_resident_bytes , job : kubevirt-prometheus-metrics , endpoint : metrics , pod : virt-handler-6ng6j , domain : default_vm-test-01 , instance : 10. 244. 0. 29:8443 , service : kubevirt-prometheus-metrics , namespace : kubevirt } } ] }, status : success }Troubleshooting tips: We strive to make the monitoring experience seamless, streamlined and working out of the box, but the stack is still evolving fast,and there are many options to actually set up the monitoring stack. Here we present some troubleshooting tips for the most common issues. prometheus targets: An underused feature of the Prometheus server is the target configuration. The Prometehus server exposes data about the targets it islooking for, so we can easily asses if the Prometheus server knows that it must scrape the kubevirt endpoints for metrics. We can see this both in the Prometheus UI: Or programmatically, with the Prometheus REST API: curl -g 'http://192. 168. 48. 7:9090/api/v1/targets' | json_pp(output trimmed for brevity): { data : { activeTargets : [ { lastError : , lastScrape : 2019-03-14T13:38:52. 886262669Z , scrapeUrl : https://10. 244. 0. 72:8443/metrics , labels : { service : kubevirt-prometheus-metrics , instance : 10. 244. 0. 72:8443 , job : kubevirt-prometheus-metrics , pod : virt-handler-6ng6j , endpoint : metrics , namespace : kubevirt }, discoveredLabels : { __meta_kubernetes_pod_phase : Running , __meta_kubernetes_endpoints_name : kubevirt-prometheus-metrics , __meta_kubernetes_endpoint_address_target_name : virt-handler-6ng6j , __meta_kubernetes_service_name : kubevirt-prometheus-metrics , __meta_kubernetes_pod_label_pod_template_generation : 1 , __meta_kubernetes_endpoint_port_name : metrics , __meta_kubernetes_service_label_app_kubernetes_io_managed_by : kubevirt-operator , __meta_kubernetes_pod_name : virt-handler-6ng6j , __address__ : 10. 244. 0. 
72:8443 , __meta_kubernetes_pod_container_name : virt-handler , __meta_kubernetes_pod_container_port_number : 8443 , __meta_kubernetes_pod_controller_kind : DaemonSet , __meta_kubernetes_pod_label_kubevirt_io : virt-handler , __meta_kubernetes_pod_label_controller_revision_hash : 7bc9c7665b , __meta_kubernetes_pod_container_port_name : metrics , __meta_kubernetes_pod_ready : true , __scheme__ : https , __meta_kubernetes_namespace : kubevirt , __meta_kubernetes_pod_annotation_scheduler_alpha_kubernetes_io_tolerations : [{\ key\ :\ CriticalAddonsOnly\ ,\ operator\ :\ Exists\ }] , __meta_kubernetes_pod_container_port_protocol : TCP , __meta_kubernetes_pod_annotation_scheduler_alpha_kubernetes_io_critical_pod : , __meta_kubernetes_pod_label_prometheus_kubevirt_io : , __metrics_path__ : /metrics , __meta_kubernetes_pod_controller_name : virt-handler , __meta_kubernetes_pod_node_name : c7-allinone-2. kube. lan , __meta_kubernetes_endpoint_address_target_kind : Pod , __meta_kubernetes_endpoint_port_protocol : TCP , __meta_kubernetes_service_label_prometheus_kubevirt_io : , __meta_kubernetes_pod_uid : 7d65f67a-45c8-11e9-8567-5254000be9ec , job : kubevirt/kubevirt/0 , __meta_kubernetes_service_label_kubevirt_io : , __meta_kubernetes_pod_ip : 10. 244. 0. 72 , __meta_kubernetes_endpoint_ready : true , __meta_kubernetes_pod_host_ip : 192. 168. 48. 7 }, health : up } ], droppedTargets : [ { discoveredLabels : { __meta_kubernetes_service_name : virt-api , __meta_kubernetes_endpoint_address_target_name : virt-api-649859444c-dnvnm , __meta_kubernetes_pod_phase : Running , __meta_kubernetes_endpoints_name : virt-api , __meta_kubernetes_pod_container_name : virt-api , __meta_kubernetes_service_label_app_kubernetes_io_managed_by : kubevirt-operator , __meta_kubernetes_pod_name : virt-api-649859444c-dnvnm , __address__ : 10. 244. 0. 59:8443 , __meta_kubernetes_endpoint_port_name : , __meta_kubernetes_pod_container_port_name : virt-api , __meta_kubernetes_pod_ready : true , __meta_kubernetes_pod_label_kubevirt_io : virt-api , __meta_kubernetes_pod_controller_kind : ReplicaSet , __meta_kubernetes_pod_container_port_number : 8443 , __meta_kubernetes_namespace : kubevirt , __meta_kubernetes_pod_annotation_scheduler_alpha_kubernetes_io_tolerations : [{\ key\ :\ CriticalAddonsOnly\ ,\ operator\ :\ Exists\ }] , __scheme__ : https , __meta_kubernetes_pod_label_prometheus_kubevirt_io : , __meta_kubernetes_pod_annotation_scheduler_alpha_kubernetes_io_critical_pod : , __meta_kubernetes_pod_container_port_protocol : TCP , __metrics_path__ : /metrics , __meta_kubernetes_endpoint_address_target_kind : Pod , __meta_kubernetes_endpoint_port_protocol : TCP , __meta_kubernetes_pod_controller_name : virt-api-649859444c , __meta_kubernetes_pod_label_pod_template_hash : 649859444c , __meta_kubernetes_pod_node_name : c7-allinone-2. kube. lan , __meta_kubernetes_pod_host_ip : 192. 168. 48. 7 , job : kubevirt/kubevirt/0 , __meta_kubernetes_service_label_kubevirt_io : virt-api , __meta_kubernetes_endpoint_ready : true , __meta_kubernetes_pod_ip : 10. 244. 0. 59 , __meta_kubernetes_pod_uid : 7d5c3299-45c8-11e9-8567-5254000be9ec } } ] }, status : success }The Prometheus target state gives us a very useful information that shapes the next steps during the troubleshooting: does the Prometheus server know it should scrape our target? If no, we should check the Prometheus configuration, which is, in our case, driven by the Prometheus operator. Otherwise: can the Prometheus server access the endpoint? 
If no, we need to check the network connectivity/DNS configuration, or the endpoint itselfservicemonitors: servicemonitors are the objects the prometheus-operator consume to produce the right prometheus configuration that the server running in the clusterwill consume to scrape the metrics endpoints. See the documentation for all the details. We describe two of the most common pitfalls. create the servicemonitor in the right namespace: KubeVirt services run in the kubevirt namespace. Make sure to create the servicemonitor in the same namespace: kubectl get pods -n kubevirtNAME READY STATUS RESTARTS AGEvirt-api-649859444c-dnvnm 1/1 Running 2 19hvirt-api-649859444c-j9d94 1/1 Running 2 19hvirt-controller-7f49b8f77c-8kh46 1/1 Running 2 19hvirt-controller-7f49b8f77c-qk4hq 1/1 Running 2 19hvirt-handler-6ng6j 1/1 Running 2 19hvirt-operator-6c5db798d4-wr9wl 1/1 Running 6 19hkubectl get servicemonitor -n kubevirtNAME AGEkubevirt 16hActually, the servicemonitor should be created in the same namespace on which the kubevirt-prometheus-metrics service is defined: kubectl get svc -n kubevirtNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGEkubevirt-prometheus-metrics ClusterIP 10. 109. 85. 101 <none> 443/TCP 19hvirt-api ClusterIP 10. 109. 162. 102 <none> 443/TCP 19hSee the KubeVirt documentation for all the details. configure the Prometheus instance to look in the right namespace: The prometheus server instance(s) run by default in their own namespace; this is the recommended configuration, and running them in the same kubevirt namespaceis not recommended anyway. So, make sure that the prometheus configuration we use looks in all the relevant namespaces, using something like apiVersion: monitoring. coreos. com/v1kind: Prometheusmetadata: name: prometheusspec: serviceAccountName: prometheus serviceMonitorNamespaceSelector: matchLabels: prometheus. kubevirt. io: serviceMonitorSelector: matchLabels: prometheus. kubevirt. io: resources: requests: memory: 400MiPlease note the usage of the serviceMonitorNamespaceSelector. See here and herefor more details. Namespaces must have the right label, prometheus. kubevirt. io, to be searched for servicemonitors. The kubevirt namespace is, of course, set correctly by default apiVersion: v1kind: Namespacemetadata: creationTimestamp: 2019-03-13T19:43:25Z labels: kubevirt. io: prometheus. kubevirt. io: name: kubevirt resourceVersion: 228178 selfLink: /api/v1/namespaces/kubevirt uid: 44a0783f-45c8-11e9-8567-5254000be9ecspec: finalizers: - kubernetesstatus: phase: ActiveBut please make sure that any other namespace you may want to monitor has the correct label. endpoint state: As in KubeVirt 0. 15. 0, virt-handler is the component which exposes the VM metrics through its Prometheus endpoint. Let’s check it reports the data correctly. First, let’s get the virt-handler IP address. We look out the instance we want to check with kubectl get pods -n kubevirtThen we query the address: kubectl get pod -o json -n KubeVirt $VIRT_HANDLER_POD | jq -r '. status. podIP'Prometheus tooling adds lots of metrics about internal state. In this case we care only about kubevirt-related metrics, so we filter out everything else with something like grep -E '^kubevirt_'Putting all together: curl -s -k -L https://$(kubectl get pod -o json -n KubeVirt virt-handler-6ng6j | jq -r '. status. podIP'):8443/metrics | grep -E '^kubevirt_'Let’s see how a healthy output looks like: kubevirt_info{goversion= go1. 11. 4 ,kubeversion= v0. 15. 0 } 1kubevirt_vm_memory_resident_bytes{domain= default_vm-test-01 } 4. 
1168896e+07kubevirt_vm_network_traffic_bytes_total{domain= default_vm-test-01 ,interface= vnet0 ,type= rx } 90kubevirt_vm_network_traffic_bytes_total{domain= default_vm-test-01 ,interface= vnet0 ,type= tx } 0kubevirt_vm_vcpu_seconds{domain= default_vm-test-01 ,id= 0 ,state= 1 } 5173Please remember that some metrics can be correctly omitted for some VMs. In general, we should always see metrics about version (pseudo metric), memory, network, and CPU. But there are known cases on which not having storage metrics is expected and correct: for example this case, since we are using a diskless VM. Coming next: The KubeVirt team is still working to enhance and refine the metrics support. There are two main active topics. First, discussion is ongoing about adding more metrics,depending on the needs of the community or the needs of the ecosystem. Furthermore, there is work in progress to increase the robustnessand the reliability of the monitoring. We also have plans to improve the integration with kubernetes. Stay tuned for more updates! " }, { - "id": 112, + "id": 113, "url": "/2019/changelog-v0.15.0.html", "title": "KubeVirt v0.15.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 15. 0: Released on: Tue Mar 5 10:35:08 2019 +0100 CI: Several fixes Fix configurable number of KVM devices Narrow virt-handler permissions Use bazel for development builds Support for live migration with shared and non-shared disks Support for live migration progress tracking Support for EFI boot Support for libvirt 5. 0 Support for extra DHCP options Support for a hook to manipualte cloud-init metadata Support setting a VM serial number Support for exposing infra and VM metrics Support for a tablet input device Support for extra CPU flags Support for ignition metadata Support to set a default CPU model Update to go 1. 11. 5" }, { - "id": 113, + "id": 114, "url": "/2019/federated-kubevirt.html", "title": "Federated Kubevirt", "author" : "karmab", "tags" : "federation, kubefed, multicluster", "body": "Federated KubeVirtFederated KubeVirt is a reference implementation of deploying and managing KubeVirt across multipleKubernetes clusters using Federation-v2. Federation-v2 is an API and control-plane for actively managing multiple Kubernetes clusters and applications in thoseclusters. This makes Federation-v2 a viable solution for managing KubeVirt deployments that span multiple Kubernetesclusters. Federation-v2 Deployment: We assume federation is already deployed (using latest stable version) and you have configured your two clusters with context cluster1 and cluster2 Federated KubeVirt Deployment: We create KubeVirt namespace on first cluster: kubectl create ns kubevirtWe then create a placement for this namespace to get replicated to the second cluster. kubectl create -f federated_namespace. yamlNOTE: This yaml file was generated for version 0. 14. 0. but feel free to edit in order to use a more recent version of the operator We create the federated objects required as per kubevirt deployment: kubefed2 enable ClusterRoleBindingkubefed2 enable CustomResourceDefinitionAnd federated kubevirt itself, with placements so that it gets deployed at both sites. kubectl create -f federated_kubevirt-operator. yamlThis gets KubeVirt operator deployed at both sites, which creates the Custom Resource definition KubeVirt. We then deploy kubevirt by federating this CRD and creates an instance of it. kubefed2 enable kubevirtskubectl create -f federated_kubevirt-cr. 
yamlTo help starting/stopping vms and connecting to consoles, we install virtctl (which is aware of contexts): VERSION= v0. 14. 0 wget https://github. com/kubevirt/kubevirt/releases/download/$VERSION/virtctl-$VERSION-linux-amd64mv virtctl-$VERSION-linux-amd64 /usr/bin/virtctlchmod +x /usr/bin/virtctlKubeVirt Deployment Verification: Verify that all KubeVirt pods are running in the clusters: $ for c in cluster1 cluster2; do kubectl get pods -n kubevirt --context ${c} ; doneNAME READY STATUS RESTARTS AGEvirt-api-578cff4f56-2dsml 1/1 Running 0 3mvirt-api-578cff4f56-8mk27 1/1 Running 0 3mvirt-controller-7d8c4fbc4c-pfwll 1/1 Running 0 3mvirt-controller-7d8c4fbc4c-xvlvr 1/1 Running 0 3mvirt-handler-plfg7 1/1 Running 0 3mvirt-operator-67c86544f7-pnjjk 1/1 Running 0 5mNAME READY STATUS RESTARTS AGEvirt-api-578cff4f56-jjbmf 1/1 Running 0 3mvirt-api-578cff4f56-m6g2c 1/1 Running 0 3mvirt-controller-7d8c4fbc4c-tt9tz 1/1 Running 0 3mvirt-controller-7d8c4fbc4c-zf6hh 1/1 Running 0 3mvirt-handler-bldss 1/1 Running 0 3mvirt-operator-67c86544f7-zz5jc 1/1 Running 0 5mNow that KubeVirt is up and created its own custom resource types, we federate them: kubefed2 enable virtualmachineskubefed2 enable virtualmachineinstanceskubefed2 enable virtualmachineinstancepresetskubefed2 enable virtualmachineinstancereplicasetskubefed2 enable virtualmachineinstancemigrationsFor demo purposes, we also federate persistent volume claims: kubefed2 enable persistentvolumeclaimDemo Workflow: We create a federated persistent volume claim, pointing to an existing pv created at both sites, against the same nfs server: kubectl create -f federated_pvc. yamlWe then create a federated virtualmachine, with a placement so that it’s only created at cluster1 kubectl create -f federated_vm. yamlWe can check how its underlying pod only got created at one site: $ for c in cluster1 cluster2; do kubectl get pods --context ${c} ; doneNAME READY STATUS RESTARTS AGEvirt-launcher-testvm2-9dq48 2/2 Running 0 6mNo resources found. Once the vm is up, we connect to it and format its secondary disk, put some data there Playing with placement resource, we have it stopping at cluster1 and launch at cluster2. kubectl patch federatedvirtualmachineplacements testvm2 --type=merge -p '{ spec :{ clusterNames : [ cluster2 ]}}'We can then connect there and see how the data is still available!!! Final ThoughtsFederating KubeVirt allows interesting use cases around kubevirt like disaster recovery scenarios. More over, the pattern used to federate this product can be seen as a generic way to federate modern applications: federate operator federate the CRD deploying the app (either at both sites or selectively) federate the CRDS handled by the app" }, { - "id": 114, + "id": 115, "url": "/2019/changelog-v0.14.0.html", "title": "KubeVirt v0.14.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 14. 0: Released on: Mon Feb 4 22:04:14 2019 +0100 CI: Several stabilizing fixes docs: Document the KubeVirt Razor build: golang update Update to Kubernetes 1. 12 Update CDI Support for Ready and Created Operator conditions Support (basic) EFI Support for generating cloud-init network-config" }, { - "id": 115, + "id": 116, "url": "/2019/An-overview-to-KubeVirt-metrics.html", "title": "An Overview To Kubevirt Metrics", "author" : "tripledes", "tags" : "metrics, prometheus, grafana", "body": "KubeVirt and Prometheus metricsIn this blog post, we will explore the current state of integration between KubeVirt and Prometheus. 
For that, we’ll be using the following bits and pieces: minikube, as local Kubernetes deployment. kube-prometheus bundle, to quickly and easily deploy the whole monitoring stack, Promtheus, Grafana, … KubeVirtStarting Kubernetes up: Installing minikube is detailed on the Installation section of the project’s README. If you happen to be running Fedora 29, this Copr repository can be used. Following the documentation on both minikube and kube-prometheus bundle, the command I used to start Kubernetes is the following one: $ minikube start --cpus 2 --disk-size 30g --memory 10240 --vm-driver kvm2 --feature-gates=DevicePlugins=true --bootstrapper=kubeadm --extra-config=kubelet. authentication-token-webhook=true --extra-config=kubelet. authorization-mode=Webhook --extra-config=scheduler. address=0. 0. 0. 0 --extra-config=controller-manager. address=0. 0. 0. 0 --kubernetes-version=v1. 11. 5 With that command you’ll get a VM, using 2 vCPUS with 10GiB of RAM and running Kubernetes version 1. 11. 5, please adjust that to your needs. Installing the monitoring stack: Follow this README for step by step installation instructions. Once installed, we can verify everything is up and running by checking out the monitoring namespace: $ kubectl get all -n monitoringNAME READY STATUS RESTARTS AGEpod/alertmanager-main-0 2/2 Running 2 3dpod/alertmanager-main-1 2/2 Running 2 3dpod/alertmanager-main-2 2/2 Running 2 3dpod/grafana-7b9578fb4-jb2ts 1/1 Running 1 3dpod/kube-state-metrics-fb7d5f59b-dr5zp 4/4 Running 5 3dpod/node-exporter-jf2zk 2/2 Running 2 3dpod/prometheus-adapter-69bd74fc7-vlfcq 1/1 Running 2 3dpod/prometheus-k8s-0 3/3 Running 4 3dpod/prometheus-k8s-1 3/3 Running 4 3dpod/prometheus-operator-6db8dbb7dd-5cb6r 1/1 Running 2 3dNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGEservice/alertmanager-main ClusterIP 10. 100. 239. 1 <none> 9093/TCP 3dservice/alertmanager-operated ClusterIP None <none> 9093/TCP,6783/TCP 3dservice/grafana ClusterIP 10. 104. 160. 71 <none> 3000/TCP 3dservice/kube-state-metrics ClusterIP None <none> 8443/TCP,9443/TCP 3dservice/node-exporter ClusterIP None <none> 9100/TCP 3dservice/prometheus-adapter ClusterIP 10. 109. 240. 201 <none> 443/TCP 3dservice/prometheus-k8s ClusterIP 10. 103. 208. 241 <none> 9090/TCP 3dservice/prometheus-operated ClusterIP None <none> 9090/TCP 3dservice/prometheus-operator ClusterIP None <none> 8080/TCP 3dNAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGEdaemonset. apps/node-exporter 1 1 1 1 1 beta. kubernetes. io/os=linux 3dNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGEdeployment. apps/grafana 1 1 1 1 3ddeployment. apps/kube-state-metrics 1 1 1 1 3ddeployment. apps/prometheus-adapter 1 1 1 1 3ddeployment. apps/prometheus-operator 1 1 1 1 3dNAME DESIRED CURRENT READY AGEreplicaset. apps/grafana-7b9578fb4 1 1 1 3dreplicaset. apps/kube-state-metrics-6dc79554cd 0 0 0 3dreplicaset. apps/kube-state-metrics-fb7d5f59b 1 1 1 3dreplicaset. apps/prometheus-adapter-69bd74fc7 1 1 1 3dreplicaset. apps/prometheus-operator-6db8dbb7dd 1 1 1 3dNAME DESIRED CURRENT AGEstatefulset. apps/alertmanager-main 3 3 3dstatefulset. apps/prometheus-k8s 2 2 3d So we can see that everything is up and running and we can even test that the access to Grafana and PromUI are working: For Grafana, forward the port 3000 as follows and access http://localhost:3000: $ kubectl --namespace monitoring port-forward svc/grafana 3000 At the time of this writing, the username and password for Grafana are both admin. 
For PromUI, forward the port 9090 as follows and access http://localhost:9090: $ kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090 Let’s deploy KubeVirt and dig on the metrics components: Deploy KubeVirt using the official documentation. This blog post uses version 0. 11. 0. Metrics: If you’ve installed KubeVirt before, there’s a service that might be unfamiliar to you, service/kubevirt-prometheus-metrics, this service uses a selector set to match the label prometheus. kubevirt. io: ““ which is included on all the main KubeVirt components, like the virt-api, virt-controllers and virt-handler. The kubevirt-prometheus-metrics also exposes the metrics port set to 443 so Promtheus can scrape the metrics for all the components through it. Let’s have a first look to the metrics that are exported: $ kubectl --namespace kube-system port-forward svc/kubevirt-prometheus-metrics 8443:443$ curl -k https://localhost:8443/metrics# TYPE go_gc_duration_seconds summarygo_gc_duration_seconds{quantile= 0 } 2. 856e-05go_gc_duration_seconds{quantile= 0. 25 } 8. 4197e-05go_gc_duration_seconds{quantile= 0. 5 } 0. 000148234go_gc_duration_seconds{quantile= 0. 75 } 0. 000358119go_gc_duration_seconds{quantile= 1 } 0. 014123096go_gc_duration_seconds_sum 0. 481708749go_gc_duration_seconds_count 328. . . # HELP rest_client_requests_total Number of HTTP requests, partitioned by status code, method, and host. # TYPE rest_client_requests_total counterrest_client_requests_total{code= 200 ,host= 10. 96. 0. 1:443 ,method= GET } 125rest_client_requests_total{code= 200 ,host= 10. 96. 0. 1:443 ,method= PATCH } 284rest_client_requests_total{code= 404 ,host= 10. 96. 0. 1:443 ,method= GET } 284rest_client_requests_total{code= <error> ,host= 10. 96. 0. 1:443 ,method= GET } 2 As can be seen in the output from curl, there are quite some metrics, but we’ll focus here mostly about the ones starting by rest as the others are mostly about Golang runtime and few other process internals, so the metrics list will be reduced to the following: rest_client_request_latency_seconds_bucket rest_client_request_latency_seconds_count rest_client_request_latency_seconds_sum rest_client_requests_total The rest_client_request_latency_seconds, represents the latency for each request being made to the API components broken down by verb and URL. The rest_client_requests_total, represents the number of HTTP requests, partitioned by status code, method, and host. Now, following again KubeVirt’s docs, we need to deploy two resources: A Prometheus resource. Just copy the YAML snippet from KubeVirt’s docs and apply it as follows: $ kubectl apply -f kubevirt-prometheus. yml -n kube-system A ServiceMonitor resource, again, take the YAML snippet from KubeVirt’s docs and apply it to the cluster: $ kubectl apply -f kubevirt-servicemonitor. yml -n kube-system At this point we’re ready to fire up PromUI and start querying, accessing to it at http://localhost:9090, here are some examples: Let’s query for the rest_client_requests_total filterying by service name kubevirt-prometheus-metrics: Now, the same metric, but let’s apply rate function, on 1 minute intervals, looking at the graph tab we can see each component, with different HTTP status codes, methods (verbs) and more labels being added by Prometheus itself: On both images, there is one status code, that I feel it’s worth a special mention, as it might be confusing, it’s <error>. 
This is not an actual HTTP code, obviously, but rather a real error logged by the component in question, in this case the pod virt-handler-2pxcb. What does it mean? To keep the variety of metrics under control, any error string logged during a request is translated into the string we see in the images, <error>, and it's meant to make us notice that there might be issues that need our attention. Checking the pod logs for errors, we can find the following ones: $ kubectl logs virt-handler-2pxcb -n kube-system | grep -i error{ component : virt-handler , level : error , msg : kubevirt. io/kubevirt/pkg/virt-handler/vm. go:440: Failed to list *v1. VirtualMachineInstance: Get https://10. 96. 0. 1:443/apis/kubevirt. io/v1alpha2/virtualmachineinstances?labelSelector=kubevirt. io%2FnodeName+in+%28minikube%29\u0026limit=500\u0026resourceVersion=0: dial tcp 10. 96. 0. 1:443: i/o timeout , pos : reflector. go:205 , timestamp : 2018-12-21T09:46:27. 921051Z }{ component : virt-handler , level : error , msg : kubevirt. io/kubevirt/pkg/virt-handler/vm. go:441: Failed to list *v1. VirtualMachineInstance: Get https://10. 96. 0. 1:443/apis/kubevirt. io/v1alpha2/virtualmachineinstances?labelSelector=kubevirt. io%2FmigrationTargetNodeName+in+%28minikube%29\u0026limit=500\u0026resourceVersion=0: dial tcp 10. 96. 0. 1:443: i/o timeout , pos : reflector. go:205 , timestamp : 2018-12-21T09:46:27. 921168Z }Looking back at the first image, we can see that the information there matches what the logs say: exactly two occurrences with method GET. So far, in this case, it's nothing to worry about as it seems to be a temporary issue, but if the count grows, it's likely there are serious issues that need fixing. With that in mind, it's not hard to create a dashboard in Grafana that gives us a glimpse of how KubeVirt is doing. The three rectangles at the top are singlestat panels, in Grafana's own terms; they first apply rate() over 5-minute samples and then apply count() to aggregate the results into a single value. So the query is: count(rate(rest_client_requests_total{service=”kubevirt-prometheus-metrics”,code=”XXX”} [5m])), replacing XXX with 404, 500 or <error>. The singlestat panel is useful for counters and for quickly seeing how a system/service is doing, as thresholds can be defined that change the background (or the value) color based on the current measured amount. The graph below runs the same query, but without the aggregation, so we can see each component with its different status codes and verbs. Closing thoughts: Even though the current state might not look very exciting, it's a start: we can now monitor the KubeVirt components and make sure we get alerts when something is wrong. Besides, there's more to come: the KubeVirt team is working hard to bring VM metrics to the table. Once this work is completed, we'll write another blog post, so stay tuned! " }, { - "id": 116, + "id": 117, "url": "/2019/changelog-v0.13.0.html", "title": "KubeVirt v0.13.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 13. 0: Released on: Tue Jan 15 08:26:25 2019 +0100 CI: Fix virt-api race API: Remove volumeName from disks" }, { - "id": 117, + "id": 118, "url": "/2019/changelog-v0.12.0.html", "title": "KubeVirt v0.12.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 12. 
0: Released on: Fri Jan 11 22:22:02 2019 +0100 Introduce a KubeVirt Operator for KubeVirt life-cycle management Introduce dedicated kubevirt namespace Support VMI ready conditions Support vCPU threads and sockets Support scale and HPA for VMIRS Support to pass NTP related DHCP options Support guest IP address reporting via qemu guest agent Support for live migration with shared storage Support scheduling of VMs based on CPU family Support masquerade network interface binding" }, { - "id": 118, + "id": 119, "url": "/2018/kubevirt-autolatest.html", "title": "Kubevirt Autolatest", "author" : "karmab", "tags" : "gcp, autodeployer", "body": "How to easily test specific versions of KubeVirt on GCPAt KubeVirt, we created cloud images on gcp and aws to ease evaluation of the project. It works fine, has a dedicated CI and is updated when new releases come out, but i wanted to go a little bit further and see if i could easily spawn a vm which would default to latest versions of the components, or that would allow me to test a given PR without focusing on deployment details So What did I come up with: the image is called autolatest and can be found on Google Storage I assume that you have a Google account with an active payment methodor a free trial. You also need to make sure that you have a default keypairinstalled. From console. cloud. google. com, go to “Compute Engine”, “Images” and then clickon “Create Image” or click this link. Fill in the following data: Name: kubevirt-autodeployer Family: centos-7 (optional) Source: cloud storage file Cloud storage file: kubevirt-button/autolatest-v0. 1. tar. gz Then you can create a new instance based on this image. Go to “Compute Engine”, then to “VM instances”, and then click on “Create instance”. It’s recommended to select: the 2 CPU / 7. 5GB instance a zone that supports the Haswell CPU Platform or newer (for nested virtualization to work), us-central1-b for instanceUnder boot disk, select the image that you created above. If you want to use specific versions for any of the following components, create the corresponding metadata entry in Management/Metadata k8s_version flannel_version kubevirt_version cdi_version Now hit Create to start the instance. Once vm is up, you should be able to connect and see through the presented banner which components got deployed What happened under the hood: When the vm boots, it executes a boot script which does the following: Gather metadata for the following variables k8s_version flannel_version kubevirt_version cdi_version If those metadata variables are not set, rely on values fetched from this url Once those variables are set, the corresponding elements are deployed. When latest or a PR number is specified for one of the components, we gather the corresponding latest release tag from the product repo and use it to deploy When master or a number is specified for kubevirt, we build containers from source and deploy kubevirt with them The full script is available here and can be adapted to other platforms " }, { - "id": 119, + "id": 120, "url": "/2018/changelog-v0.11.0.html", "title": "KubeVirt v0.11.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 11. 0: Released on: Thu Dec 6 10:15:51 2018 +0100 API: registryDisk got renamed to containreDisk CI: User OKD 3. 11 Fix: Tolerate if the PVC has less capacity than expected Aligned to use ownerReferences Update to libvirt-4. 10. 
0 Support for VNC on MAC OSX Support for network SR-IOV interfaces Support for custom DHCP options Support for VM restarts via a custom endpoint Support for liveness and readiness probes" }, { - "id": 120, + "id": 121, "url": "/2018/kubevirt-at-kubecon-na.html", "title": "Kubevirt At Kubecon Na", "author" : "xsgordon", "tags" : "kubecon, conference", "body": "KubeCon + CloudNativeCon North America 2018 (Seattle, December 11-13) israpidly approaching and promises to be another jam packed event for followers ofcloud-native technologies. Given the increasing scope of the event we thought it might be useful to preparea guide to where you are likely to find KubeVirt featured at the event. These sessions will provide you an opportunity not just to learn aboutKubeVirt’s to turning Kubernetes into a common platform for containers andvirtual machines, but also to meet other members of the community: KubeVirt Introductory Birds of a Feather (BoF) Session led by RyanHallisey, Software Engineer, Red Hat and Daniel Gonzalez Nothnagel, CloudInfrastructure Developer, SAP Tuesday, December 11 @ 10:50 AM PST KubeVirt Deep Dive Birds of a Feather (BoF) Session led by Scott Collier,Consulting Engineer, Red Hat and Vishesh Ajay Tanksale, nVidia Tuesday, December 11 @ 1:45 PM PST Connecting and Testing Virtual Network Topologies on Kubernetes presentedby Gage Orsburn, Software Architect, One Source Integrations and RichRenner, Solutions Architect, One Source Integrations Tuesday, December 11 @ 2:35 PM PST Running VM Workloads Side by Side with Container Workloads presented bySebastian Scheele, Co-founder and CEO, Loodse Thursday, December 13 @ 10:50 AM PST As previously announced on the kubevirt-dev mailing list we willalso be holding a users and contributors meetup on the Tuesday evening of theevent: Location: Sheraton Grand, Seattle Room: Aspen Room, 2nd Floor (Union Street Tower) Date: Tuesday, December 11th Time: 6:45 - 8:45 PM PSTWhile we wont have Ice Cube, we do plan to have food, so if you plan to attendplease register your interest so that we can cater accordingly! We lookforward to seeing you all at the event! Don’t forget to follow kubevirton Twitter for updates throughout! " }, { - "id": 121, + "id": 122, "url": "/2018/ignition-support.html", "title": "Ignition Support", "author" : "karmab", "tags" : "ignition, coreos, rhcos", "body": "Introduction: Ignition is a new provisioning utility designed specifically for CoreOS/RhCOS. At the most basic level, it is a tool for manipulating a node during early boot. This includes: Partitioning disks. Formatting partitions. Writing files (regular files, systemd units, networkd units). Configuring users and their associated ssh public keys. Recently, we added support for it in KubeVirt so ignition data can now be embedded in a vm specification, through a dedicated annotation. Ignition support is still needed in the guest operating system. Enabling Ignition Support: Ignition Support has to be enabled through a feature gate. This is achieved by creating (or editing ) the kubevirt-config ConfigMap in the kubevirt namespace. A minimal config map would look like this: apiVersion: v1kind: ConfigMapmetadata: name: kubevirt-config namespace: kubevirt labels: kubevirt. io: data: feature-gates: ExperimentalIgnitionSupportMake sure to delete kubevirt related pods afterward for the configuration to be taken into account: kubectl delete pod --all -n kubevirtWorkThrough: We assume that you already have a Kubernetes or OpenShift cluster running with KubeVirt installed. 
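Before moving on, it can be worth confirming that the feature gate from the previous section was actually picked up. A minimal check, assuming the kubevirt-config ConfigMap was created in the kubevirt namespace exactly as shown above:
kubectl get configmap kubevirt-config -n kubevirt -o jsonpath='{.data.feature-gates}'
The output should include ExperimentalIgnitionSupport before you proceed with the workthrough below.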
Step 1: Create The following VM spec in the file myvm1. yml: apiVersion: kubevirt. io/v1alpha3kind: VirtualMachinemetadata: name: myvm1spec: running: true template: metadata: labels: kubevirt. io/size: small annotations: kubevirt. io/ignitiondata: | { ignition : { config : {}, version : 2. 2. 0 }, networkd : {}, passwd : { users : [ { name : core , sshAuthorizedKeys : [ ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC/AvM9VbO2yiIb9AillBp/kTr8jqIErRU1LFKqhwPTm4AtVIjFSaOuM4AlspfCUIz9IHBrDcZmbcYKai3lC3JtQic7M/a1OWUjWE1ML8CEvNsGPGu5yNVUQoWC0lmW5rzX9c6HvH8AcmfMmdyQ7SgcAnk0zir9jw8ed2TRAzHn3vXFd7+saZLihFJhXG4zB8vh7gJHjLfjIa3JHptWzW9AtqF9QsoBY/iu58Rf/hRnrfWscyN3x9pGCSEqdLSDv7HFuH2EabnvNFFQZr4J1FYzH/fKVY3Ppt3rf64UWCztDu7L44fPwwkI7nAzdmQVTaMoD3Ej8i7/OSFZsC2V5IBT kboumedh@bumblefoot ] }, ] } } spec: domain: devices: disks: - name: containerdisk disk: bus: virtio interfaces: - name: default bridge: {} resources: requests: memory: 64M networks: - name: default pod: {} volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-container-disk-demoNote We simply inject the ignition data as a string in vm/spec/domain/spec/metadata/annotations, using kubevirt. io/ignitiondata as an annotation Step 2: Create the VM: $ kubectl apply -f myvm1. ymlvirtualmachine myvm1 createdAt this point, when VM boots, ignition data will be injected. How does it work under the hood?: We currently leverage Pass-through of arbitrary qemu commands although there is some discussion around using a metadata server instead Summary: Ignition Support brings the ability to run CoreOS/RHCOS distros on KubeVirt and to customize them at boot time. " }, { - "id": 122, + "id": 123, "url": "/2018/new-volume-types.html", "title": "New Volume Types", "author" : "slintes", "tags" : "volume types, serviceaccount", "body": "Introduction: Recently three new volume types were introduced, which can be used for additional VM disks, and allow better integration of virtual machines withwell known Kubernetes resources. ConfigMap and Secret: Both ConfigMapsand Secrets are used to provide configuration settings and credentials to Pods. In order to use them in your VM too, you can add them as additional disks, using the new configMapand secret volume types. ServiceAccount: Kubernetes pods can be configured to get a special type of secret injected, which can be used foraccessing the Kubernetes API. With the third new volume type serviceAccount you can get this information into your VM, too. Example: We assume that you already have a Kubernetes or OpenShift cluster running with KubeVirt installed. Step 1: Create a ConfigMap and Secret, which will be used in your VM: $ kubectl create secret generic mysecret --from-literal=PASSWORD=hiddensecret mysecret created$ kubectl create configmap myconfigmap --from-literal=DATABASE=stagingconfigmap myconfigmap createdStep 2: Define a VirtualMachineInstance which uses all three new volume types, and save it to vmi-fedora. yaml. Note how we add 3 disks for the ConfigMap and Secret we just created, and for the default ServiceAccount. In order to automount these disks, we also add a cloudInitNoCloud disk with mount instructions. Details onhow to do this might vary depending on the VM’s operating system. apiVersion: kubevirt. 
io/v1alpha2kind: VirtualMachineInstancemetadata: name: vmi-fedoraspec: domain: devices: disks: - name: registrydisk volumeName: registryvolume - name: cloudinitdisk volumeName: cloudinitvolume - name: configmap-disk serial: configmap volumeName: configmap-volume - name: secret-disk serial: secret volumeName: secret-volume - name: serviceaccount-disk serial: serviceaccount volumeName: serviceaccount-volume resources: requests: memory: 1024M volumes: - name: registryvolume registryDisk: image: kubevirt/fedora-cloud-container-disk-demo:latest - name: cloudinitvolume cloudInitNoCloud: userData: |- #cloud-config password: fedora chpasswd: { expire: False } bootcmd: # mount the disks - mkdir /mnt/{myconfigmap,mysecret,myserviceaccount} - mount /dev/disk/by-id/ata-QEMU_HARDDISK_configmap /mnt/myconfigmap - mount /dev/disk/by-id/ata-QEMU_HARDDISK_secret /mnt/mysecret - mount /dev/disk/by-id/ata-QEMU_HARDDISK_serviceaccount /mnt/myserviceaccount - name: configmap-volume configMap: name: myconfigmap - name: secret-volume secret: secretName: mysecret - name: serviceaccount-volume serviceAccount: serviceAccountName: defaultStep 3: Create the VMI: $ kubectl apply -f vmi-fedora. yamlvirtualmachineinstance vmi-fedora createdStep 4: Inspect the new disks: $ virtctl console vmi-fedoravmi-fedora login: fedoraPassword:[fedora@vmi-fedora ~]$ ls -R /mnt//mnt/:myconfigmap mysecret myserviceaccount/mnt/myconfigmap:DATABASE/mnt/mysecret:PASSWORD/mnt/myserviceaccount:ca. crt namespace token[fedora@vmi-fedora ~]$ cat /mnt/myconfigmap/DATABASEstaging[fedora@vmi-fedora ~]$ cat /mnt/mysecret/PASSWORDhidden[fedora@vmi-fedora ~]$ cat /mnt/myserviceaccount/namespacedefaultSummary: With these new volume types KubeVirt further improves the integration with native Kubernetes resources. Learn more about all available volume types on the userguide. " }, { - "id": 123, + "id": 124, "url": "/2018/changelog-v0.10.0.html", "title": "KubeVirt v0.10.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 10. 0: Released on: Thu Nov 8 15:21:34 2018 +0100 Support for vhost-net Support for block multi-queue Support for custom PCI addresses for virtio devices Support for deploying KubeVirt to a custom namespace Support for ServiceAccount token disks Support for multus backed networks Support for genie backed networks Support for kuryr backed networks Support for block PVs Support for configurable disk device caches Support for pinned IO threads Support for virtio net multi-queue Support for image upload (depending on CDI) Support for custom entity lists with more VM details (cusomt columns) Support for IP and MAC address reporting of all vNICs Basic support for guest agent status reporting More structured logging Better libvirt error reporting Stricter CR validation Better ownership references Several test improvements" }, { - "id": 124, + "id": 125, "url": "/2018/CDI-DataVolumes.html", "title": "Cdi Datavolumes", "author" : "tripledes", "tags" : "cdi, datavolumes", "body": "CDI DataVolumesContainerized Data Importer (or CDI for short), is a data import service for Kubernetes designed with KubeVirt in mind. Thanks to CDI, we can now enjoy the addition of DataVolumes, which greatly improve the workflow of managing KubeVirt and its storage. What it does: DataVolumes are an abstraction of the Kubernetes resource, PVC (Persistent Volume Claim) and it also leverages other CDI features to ease the process of importing data into a Kubernetes cluster. 
DataVolumes can be defined by themselves or embedded within a VirtualMachine resource definition, the first method can be used to orchestrate events based on the DataVolume status phases while the second eases the process of providing storage for a VM. How does it work?: In this blog post, I’d like to focus on the second method, embedding the information within a VirtualMachine definition, which might seem like the most immediate benefit of this feature. Let’s get started! Environment description: OpenShift For testing DataVolumes, I’ve spawned a new OpenShift cluster, using dynamic provisioning for storage running OpenShift Cloud Storage (GlusterFS), so the Persistent Volumes (PVs for short) are created on-demand. Other than that, it’s a regular OpenShift cluster, running with a single master (also used for infrastructure components) and two compute nodes. CDI We also need CDI, of course, CDI can be deployed either together with KubeVirt or independently, the instructions can be found in the project’s GitHub repo. KubeVirt Last but not least, we’ll need KubeVirt to run the VMs that will make use of the DataVolumes. Enabling DataVolumes feature: As of this writing, DataVolumes have to be enabled through a feature gate, for KubeVirt, this is achieved by creating the kubevirt-config ConfigMap on the namespace where KubeVirt has been deployed, by default kube-system. Let’s create the ConfigMap with the following definition: ---apiVersion: v1data: feature-gates: DataVolumeskind: ConfigMapmetadata: name: kubevirt-config namespace: kube-system$ oc create -f kubevirt-config-cm. ymlAlternatively, the following one-liner can also be used to achieve the same result: $ oc create configmap kubevirt-config --from-literal feature-gates=DataVolumes -n kube-systemIf the ConfigMap was already present on the system, just use oc edit to add the DataVolumes feature gate under the data field like the YAML above. If everything went as expected, we should see the following log lines on the virt-controller pods: level=info timestamp=2018-10-09T08:16:53. 602400Z pos=application. go:173 component=virt-controller msg= DataVolume integration enabled NOTE: It’s worth noting the values in the ConfigMap are not dynamic, in the sense that virt-controller and virt-api will need to be restarted, scaling their deployments down and back up again, just remember to scale it up to the same number of replicas they previously had. Creating a VirtualMachine embedding a DataVolume: Now that the cluster is ready to use the feature, let’s have a look at our VirtualMachine definition, which includes a DataVolume. apiVersion: kubevirt. io/v1alpha2kind: VirtualMachinemetadata: labels: kubevirt. io/vm: testvm1 name: testvm1spec: dataVolumeTemplates: - metadata: name: centos7-dv spec: pvc: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi source: http: url: https://cloud. centos. org/centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2 running: true template: metadata: labels: kubevirt. io/vm: testvm1 spec: domain: cpu: cores: 1 devices: disks: - volumeName: test-datavolume name: disk0 disk: bus: virtio - name: cloudinitdisk volumeName: cloudinitvolume cdrom: bus: virtio resources: requests: memory: 8Gi volumes: - dataVolume: name: centos7-dv name: test-datavolume - cloudInitNoCloud: userData: | #cloud-config hostname: testvm1 users: - name: kubevirt gecos: KubeVirt Project sudo: ALL=(ALL) NOPASSWD:ALL passwd: $6$JXbc3063IJir. e5h$ypMlYScNMlUtvQ8Il1ldZi/mat7wXTiRioGx6TQmJjTVMandKqr. jJfe99. QckyfH/JJ. OdvLb5/OrCa8ftLr. 
shell: /bin/bash home: /home/kubevirt lock_passwd: false name: cloudinitvolumeThe new addition to a regular VirtualMachine definition is the dataVolumeTemplates block, which will trigger the import of the CentOS-7 cloud image defined on the url field, storing it on a PV, the resulting DataVolume will be named centos7-dv, being referenced on the volumes section, it will serve as the boot disk (disk0) for our VirtualMachine. Going ahead and applying the above manifest to our cluster results in the following set of events: The DataVolume is created, triggering the creation of a PVC and therefore, using the dynamic provisioning configured on the cluster, a PV is provisioned to satisfy the needs of the PVC. An importer pod is started, this pod is the one actually downloading the image defined in the url field and storing it on the provisioned PV. Once the image has been downloaded and stored, the DataVolume status changes to Succeeded, from that point the virt launcher controller will go ahead and schedule the VirtualMachine. Taking a look to the resources created after applying the VirtualMachine manifest, we can see the following: $ oc get podsNAME READY STATUS RESTARTS AGEimporter-centos7-dv-t9zx2 0/1 Completed 0 11mvirt-launcher-testvm1-cpt8n 1/1 Running 0 8mLet’s look at the importer pod logs to understand what it did: $ oc logs importer-centos7-dv-t9zx2I1009 12:37:45. 384032 1 importer. go:32] Starting importerI1009 12:37:45. 393461 1 importer. go:37] begin import processI1009 12:37:45. 393519 1 dataStream. go:235] copying https://cloud. centos. org/centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2 to /data/disk. img . . . I1009 12:37:45. 393569 1 dataStream. go:112] IMPORTER_ACCESS_KEY_ID and/or IMPORTER_SECRET_KEY are emptyI1009 12:37:45. 393606 1 dataStream. go:298] create the initial Reader based on the endpoint's https schemeI1009 12:37:45. 393665 1 dataStream. go:208] Attempting to get object https://cloud. centos. org/centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2 via http clientI1009 12:37:45. 762330 1 dataStream. go:314] constructReaders: checking compression and archive formats: /centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2I1009 12:37:45. 841564 1 dataStream. go:323] found header of type qcow2 I1009 12:37:45. 841618 1 dataStream. go:338] constructReaders: no headers found for file /centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2 I1009 12:37:45. 841635 1 dataStream. go:340] done processing /centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2 headersI1009 12:37:45. 841650 1 dataStream. go:138] NewDataStream: endpoint https://cloud. centos. org/centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2 's computed byte size: 8589934592I1009 12:37:45. 841698 1 dataStream. go:566] Validating qcow2 fileI1009 12:37:46. 848736 1 dataStream. go:572] Doing streaming qcow2 to raw conversionI1009 12:40:07. 546308 1 importer. go:43] import completeSo, following the events we see, it fetched the image from the defined url, validated its format and converted it to raw for being used by qemu. $ oc describe dv centos7-dvName: centos7-dvNamespace: test-dvLabels: kubevirt. io/created-by=1916da5f-cbc0-11e8-b467-c81f666533c3Annotations: kubevirt. io/owned-by=virt-controllerAPI Version: cdi. kubevirt. io/v1alpha1Kind: DataVolumeMetadata: Creation Timestamp: 2018-10-09T12:37:34Z Generation: 1 Owner References: API Version: kubevirt. 
io/v1alpha2 Block Owner Deletion: true Controller: true Kind: VirtualMachine Name: testvm1 UID: 1916da5f-cbc0-11e8-b467-c81f666533c3 Resource Version: 2474310 Self Link: /apis/cdi. kubevirt. io/v1alpha1/namespaces/test-dv/datavolumes/centos7-dv UID: 19186b29-cbc0-11e8-b467-c81f666533c3Spec: Pvc: Access Modes: ReadWriteOnce Resources: Requests: Storage: 10Gi Source: Http: URL: https://cloud. centos. org/centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2Status: Phase: SucceededEvents: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Synced 29s (x13 over 14m) datavolume-controller DataVolume synced successfully Normal Synced 18s datavolume-controller DataVolume synced successfullyThe DataVolume description matches what was defined under dataVolumeTemplates. Now, as we know it uses a PV/PVC underneath, let’s have a look: $ oc describe pvc centos7-dvName: centos7-dvNamespace: test-dvStorageClass: glusterfs-storageStatus: BoundVolume: pvc-191d27c6-cbc0-11e8-b467-c81f666533c3Labels: app=containerized-data-importer cdi-controller=centos7-dvAnnotations: cdi. kubevirt. io/storage. import. endpoint=https://cloud. centos. org/centos/7/images/CentOS-7-x86_64-GenericCloud. qcow2 cdi. kubevirt. io/storage. import. importPodName=importer-centos7-dv-t9zx2 cdi. kubevirt. io/storage. pod. phase=Succeeded pv. kubernetes. io/bind-completed=yes pv. kubernetes. io/bound-by-controller=yes volume. beta. kubernetes. io/storage-provisioner=kubernetes. io/glusterfsFinalizers: [kubernetes. io/pvc-protection]Capacity: 10GiAccess Modes: RWOEvents: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ProvisioningSucceeded 18m persistentvolume-controller Successfully provisioned volume pvc-191d27c6-cbc0-11e8-b467-c81f666533c3 using kubernetes. io/glusterfsIt’s important to pay attention to the annotations, these are monitored/set by CDI. CDI triggers an import when it detects the cdi. kubevirt. io/storage. import. endpoint, assigns a pod as the import task owner and updates the pod phase annotation. At this point, everything is in place, the DataVolume has its underlying components, the image has been imported so now the VirtualMachine can start the VirtualMachineInstance based on its definition and using the CentOS7 image as boot disk, as users we can connect to its console as usual, for instance running the following command: $ virtctl console testvm1Cleaning it up: Once we’re happy with the results, it’s time to clean up all these tests. The task is easy: $ oc delete vm testvm1Once the VM (and its associated VMI) are gone, all the underlying storage resources are removed, there is no trace of the PVC, PV or DataVolume. $ oc get dv centos7-dv$ oc get pvc centos7-dv$ oc get pv pvc-191d27c6-cbc0-11e8-b467-c81f666533c3All three commands returned No resources found. " }, { - "id": 125, + "id": 126, "url": "/2018/containerized-data-importer.html", "title": "Containerized Data Importer", "author" : "tavni", "tags" : "import, clone, upload, virtual machine, disk image, cdi", "body": "IntroductionContainerized Data Importer (CDI) is a utility to import, upload and clone Virtual Machine images for use with KubeVirt. At a high level, a persistent volume claim (PVC), which defines VM-suitable storage via a storage class, is created. A custom controller watches for specific annotation on the persistent volume claim, and when discovered, starts an import, upload or clone process. 
The status of each process is reflected in an additional annotation on the associated claim, and when the process completes KubeVirt can create the VM based on the new image. The Containerized Data Cloner gives the option to clone the imported/uploaded VM image from one PVC to another, either within the same namespace or across two different namespaces. This Containerized Data Importer project is designed with KubeVirt in mind and provides a declarative method for importing and uploading VM images into a Kubernetes cluster. KubeVirt detects when the VM disk image import/upload is complete and uses the same PVC that triggered the import/upload process to create the VM. This approach supports two main use-cases: A cluster administrator can build an abstract registry of immutable images (referred to as “Golden Images”) which can be cloned and later consumed by KubeVirt. An ad-hoc user (granted access) can import a VM image into their own namespace and feed this image directly to KubeVirt, bypassing the cloning step. For an in-depth look at the system and workflow, see the Design documentation. Data Format: The Containerized Data Importer is capable of performing certain functions that streamline its use with KubeVirt. It automatically decompresses gzip and xz files, and un-tar’s tar archives. Also, qcow2 images are converted into the raw format which is required by KubeVirt, resulting in the final file being a simple . img file. Supported file formats are: Tar archive Gzip compressed file XZ compressed file Raw image data ISO image data Qemu qcow2 image data. Note: CDI also supports combinations of these formats such as gzipped tar archives, gzipped raw images, etc. Deploying CDI: Assumptions: A running Kubernetes cluster that is capable of binding PVCs to dynamically or statically provisioned PVs. A storage class and provisioner (only for dynamically provisioned PVs). An HTTP file server hosting VM images. An optional “golden” namespace acting as the image repository. The default namespace is fine for tire kicking. Deploy CDI from a release: Deploying the CDI controller is straightforward. In this document the default namespace is used, but in a production setup a protected namespace that is inaccessible to regular users should be used instead. Ensure that the cdi-sa service account has proper authority to run privileged containers; typically in a kube environment this is true by default. If you are running an OpenShift variation of Kubernetes you may need to enable privileged containers in the security context: $ oc adm policy add-scc-to-user privileged -z cdi-sa Deploy the controller from the release manifest: $ VERSION=<cdi version> $ kubectl create -f https://github. com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-controller. yaml Deploy CDI using a template: By default when using manifests/generated/cdi-controller. yaml CDI will deploy into the kube-system namespace using default settings. You can customize the deployment by using the generated manifests/generated/cdi-controller. yaml. j2 jinja2 template. This allows you to alter the install namespace, docker image repo, docker image tags, etc. To deploy using the template follow these steps: Install j2cli: $ pip install j2cli Install CDI: $ cdi_namespace=default \ docker_prefix=kubevirt \ docker_tag=v1. 2. 0 \ pull_policy=IfNotPresent \ verbosity=1 \ j2 manifests/generated/cdi-controller. yaml. j2 | kubectl create -f - Check the template file and make sure to supply values for all variables.
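Whichever deployment method is used, it is a good idea to verify that the controller actually came up before creating any import, clone or upload PVCs. A minimal sketch, using the cdi-deployment name referenced later in this document (replace kube-system with whatever namespace CDI was deployed into):
$ kubectl -n kube-system get pods | grep cdi-deployment
The cdi-deployment-<RANDOM> pod should reach the Running state before continuing.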
Notes: The default verbosity level is set to 1 in the controller deployment file, which is minimal logging. If greater details are desired increase the -v number to 2 or 3. The importer pod uses the same logging verbosity as the controller. If a different level of logging is required after the controller has been started, the deployment can be edited and applied by using kubectl apply -f . This will not alter the running controller's logging level but will affect importer pods created after the change. To change the running controller's log level requires it to be restarted after the deployment has been edited. Download CDIThere are few ways to download CDI through command line: git clone command:$ git clone https://github. com/kubevirt/containerized-data-importer. git $GOPATH/src/kubevirt. io/containerized-data-importer download only the yamls:$ mkdir cdi-manifests && cd cdi-manifests$ wget https://raw. githubusercontent. com/kubevirt/containerized-data-importer/kubevirt-centric-readme/manifests/example/golden-pvc. yaml$ wget https://raw. githubusercontent. com/kubevirt/containerized-data-importer/kubevirt-centric-readme/manifests/example/endpoint-secret. yaml go get command:$ go get kubevirt. io/containerized-data-importerStart Importing ImagesImport disk image is achieved by creating a new PVC with the ‘cdi. kubevirt. io/storage. import. endpoint’ annotation indicating the url of the source image that we want to download from. Once the controller detects the PVC, it starts a pod which is responsible for importing the image from the given url. Create a PVC yaml file named golden-pvc. yaml: apiVersion: v1kind: PersistentVolumeClaimmetadata: name: golden-pvc labels: app: containerized-data-importer annotations: cdi. kubevirt. io/storage. import. endpoint: https://download. cirros-cloud. net/0. 4. 0/cirros-0. 4. 0-x86_64-disk. img # Required. Format: (http||s3)://www. myUrl. com/path/of/dataspec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi # Optional: Set the storage class or omit to accept the default # storageClassName: localEdit the PVC above - cdi. kubevirt. io/storage. import. endpoint: The full URL to the VM image in the format of: http://www. myUrl. com/path/of/data or s3://bucketName/fileName. storageClassName: The default StorageClass will be used if not set. Otherwise, set to a desired StorageClass. Note: It is possible to use authentication when importing the image from the endpoint url. Please see using secret during import Deploy the manifest yaml files: Create the persistent volume claim to trigger the import process:$ kubectl -n <NAMESPACE> create -f golden-pvc. yaml (Optional) Monitor the cdi-controller:$ kubectl -n <CDI-NAMESPACE> logs cdi-deployment-<RANDOM> (Optional )Monitor the importer pod:$ kubectl -n <NAMESPACE> logs importer-<PVC-NAME> # pvc name is shown in controller log Verify the import is completed by checking the following annotation value:$ kubectl -n <NAMESPACE> get pvc golden-pvc. yaml -o yamlannotation to verify - cdi. kubevirt. io/storage. pod. phase: Succeeded Start cloning disk imageCloning is achieved by creating a new PVC with the ‘k8s. io/CloneRequest’ annotation indicating the name of the PVC the image is copied from. Once the controller detects the PVC, it starts two pods (source and target pods) which are responsible for the cloning of the image from one PVC to another using a unix socket that is created on the host itself. When the cloning is completed, the PVC which the image was copied to, is assigned with the ‘k8s. 
io/CloneOf’ annotation to indicate cloning completion. The copied VM image can be used by a new pod only after the cloning process is completed. The two cloning pods must execute on the same node. Pod adffinity is used to enforce this requirement; however, the cluster also needs to be configured to delay volume binding until pod scheduling has completed. When using local storage and Kubernetes 1. 9 and older, export KUBE_FEATURE_GATES before bringing up the cluster: $ export KUBE_FEATURE_GATES= PersistentLocalVolumes=true,VolumeScheduling=true,MountPropagation=true These features default to true in Kubernetes 1. 10 and later and thus do not need to be set. Regardless of the Kubernetes version, a storage class with volumeBindingMode set to “WaitForFirstConsumer” needs to be created. Eg: kind: StorageClass apiVersion: storage. k8s. io/v1 metadata: name: <local-storage-name> provisioner: kubernetes. io/no-provisioner volumeBindingMode: WaitForFirstConsumerCreate a PVC yaml file named target-pvc. yaml: apiVersion: v1kind: PersistentVolumeClaimmetadata: name: target-pvc namespace: target-ns labels: app: Host-Assisted-Cloning annotations: k8s. io/CloneRequest: source-ns/golden-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 10GiEdit the PVC above - k8s. io/CloneRequest: The name of the PVC we copy the image from (including its namespace). For example: “source-ns/golden-pvc”. add the name of the storage class which defines volumeBindingMode per above. Note, this is not required in Kubernetes 1. 10 and later. Deploy the manifest yaml files: (Optional) Create the namespace where the target PVC will be deployed:$ kubectl create ns <TARGET-NAMESPACE> Deploy the target PVC:$ kubectl -n <TARGET-NAMESPACE> create -f target-pvc. yaml (Optional) Monitor the cloning pods:$ kubectl -n <SOURCE-NAMESPACE> logs <clone-source-pod-name>$ kubectl -n <TARGET-NAMESPACE> logs <clone-target-pod-name> Check the target PVC for ‘k8s. io/CloneOf’ annotation:$ kubectl -n <TARGET-NAMESPACE> get pvc <target-pvc-name> -o yamlStart uploading disk imageUploading a disk image is achieved by creating a new PVC with the ‘cdi. kubevirt. io/storage. upload. target’ annotation indicating the request for uploading. Part of the uploading process is the authentication of upload requests with an UPLOAD_TOKEN header. The user posts an upload token request to the cluster, and the encrypted Token is returned immediately within the response in the status field. For this to work, a dedicated service is deployed with a nodePort field. At that point, a curl request including the token is sent to start the upload process. Given the upload PVC and the curl request, the controller starts a pod which is responsible for uploading the local image to the PVC. Create a PVC yaml file named upload-pvc. yaml: apiVersion: v1kind: PersistentVolumeClaimmetadata: name: upload-pvc labels: app: containerized-data-importer annotations: cdi. kubevirt. io/storage. upload. target: spec: accessModes: - ReadWriteOnce resources: requests: storage: 1GiCreate the upload-token. yaml file: apiVersion: upload. cdi. kubevirt. io/v1alpha1kind: UploadTokenRequestmetadata: name: upload-pvc namespace: defaultspec: pvcName: upload-pvcUpload an image: deploy the upload-pvc$ kubectl apply -f upload-pvc. yaml Request for upload token$ TOKEN=$(kubectl apply -f upload-token. yaml -o= jsonpath={. status. token} ) Upload the image$ curl -v --insecure -H Authorization: Bearer $TOKEN --data-binary @tests/images/cirros-qcow2. 
img https://$(minikube ip):31001/v1alpha1/uploadSecurity ConfigurationsRBAC Roles: CDI runs under a custom ServiceAccount (cdi-sa) and uses the Kubernetes RBAC model to apply an application specific custom ClusterRole with rules to properly access needed resources such as PersistentVolumeClaims and Pods. Protecting VM Image Namespaces: Currently there is no support for automatically implementing Kubernetes ResourceQuotas and Limits on desired namespaces and resources, therefore administrators need to manually lock down all new namespaces from being able to use the StorageClass associated with CDI/KubeVirt and cloning capabilities. This capability of automatically restricting resources is planned for future releases. Below are some examples of how one might achieve this level of resource protection: Lock Down StorageClass Usage for Namespace:apiVersion: v1kind: ResourceQuotametadata: name: protect-mynamespacespec: hard: <STORAGE-CLASS-NAME>. storageclass. storage. k8s. io/requests. storage: 0 Note . storageclass. storage. k8s. io/persistentvolumeclaims: 0 would also accomplish the same affect by not allowing any pvc requests against the storageclass for this namespace. Open Up StorageClass Usage for Namespace:apiVersion: v1kind: ResourceQuotametadata: name: protect-mynamespacespec: hard: <STORAGE-CLASS-NAME>. storageclass. storage. k8s. io/requests. storage: 500Gi Note . storageclass. storage. k8s. io/persistentvolumeclaims: 4 could be used and this would only allow for 4 pvc requests in this namespace, anything over that would be denied. " }, { - "id": 126, + "id": 127, "url": "/2018/changelog-v0.9.0.html", "title": "KubeVirt v0.9.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 9. 0: Released on: Thu Oct 4 14:42:28 2018 +0200 CI: NetworkPolicy tests CI: Support for an external provider (use a preconfigured cluster for tests) Fix virtctl console issues with CRI-O Support to initialize empty PVs Support for basic CPU pinning Support for setting IO Threads Support for block volumes Move preset logic to mutating webhook Introduce basic metrics reporting using prometheus metrics Many stabilizing fixes in many places" }, { - "id": 127, + "id": 128, "url": "/2018/KubeVirt-Network-Rehash.html", "title": "Kubevirt Network Rehash", "author" : "jcpowermac", "tags" : "networking, multus, ovs-cni, iptables", "body": "IntroductionThis post is a quick rehash of the previous post regarding KubeVirt networking. It has been updated to reflect the updates that are included with v0. 8. 0 which includesoptional layer 2 support via Multus and the ovs-cni. I won’t be covering the installationof OKD, Kubernetes, KubeVirt, Multus or ovs-cni all can be found in other documentation orposts. KubeVirt Virtual MachinesLike in the previous post I will deploy two virtual machines on two different hosts within an OKD cluster. These instances are where we will install our simple NodeJS and MongoDB application. Create Objects and Start the Virtual Machines: One of the first objects to create is the NetworkAttachmentDefinition. We are using a fairly simple definition for this post with an ovs bridge br1 and no vlan configured. apiVersion: k8s. cni. cncf. io/v1 kind: NetworkAttachmentDefinitionmetadata: name: ovs-net-br1spec: config: '{ cniVersion : 0. 3. 1 , type : ovs , bridge : br1 }'oc create -f https://gist. githubusercontent. com/jcpowermac/633de0066ee7990afc09fbd35ae776fe/raw/ac259386e1499b7f9c51316e4d5dcab152b60ce7/mongodb. yamloc create -f https://gist. githubusercontent. 
com/jcpowermac/633de0066ee7990afc09fbd35ae776fe/raw/ac259386e1499b7f9c51316e4d5dcab152b60ce7/nodejs. yamlStart the virtual machines instances ~/virtctl start nodejs~/virtctl start mongodbReview KubeVirt virtual machine related objects $ oc get net-attach-defNAME AGEovs-net-br1 16d$ oc get vmNAME AGEmongodb 4dnodejs 4d$ oc get vmiNAME AGEmongodb 3hnodejs 3h$ oc get podNAME READY STATUS RESTARTS AGEvirt-launcher-mongodb-bw2t8 2/2 Running 0 3hvirt-launcher-nodejs-dlgv6 2/2 Running 0 3hService and Endpoints: We may still want to use services and routes with a KubeVirt virtual machine instance utilizingmultiple interfaces. The service object below is consideredheadlessbecause the clusterIP is set to None. We don’t want load-balancing or single service IP asthis would force traffic over the cluster network which in this example we are trying to avoid. Mongo: ---kind: ServiceapiVersion: v1metadata: name: mongospec: clusterIP: None ports: - port: 27017 targetPort: 27017 name: mongo nodePort: 0selector: {}---kind: EndpointsapiVersion: v1metadata: name: mongosubsets: - addresses: - ip: 192. 168. 123. 139 ports: - port: 27017 name: mongoThe above ip address is provided by DHCP via dnsmasq to the virtual machine instance’s eth1 interface. All the nodes are virtual instances configured by libvirt. After creating the service and endpoints objects lets confirm that DNS is resolving correctly. $ ssh fedora@$(oc get pod -l kubevirt-vm=nodejs --template '{{ range . items }}{{. status. podIP}}{{end}}') \ python3 -c \ import socket;print(socket. gethostbyname('mongo. vm. svc. cluster. local'))\ 192. 168. 123. 139Node: We can also add a service, endpoints and route for the nodejs virtual machine so the applicationis accessible from the defined subdomain. apiVersion: v1kind: Servicemetadata: name: nodespec: clusterIP: None ports: - name: node port: 8080 protocol: TCP targetPort: 8080 sessionAffinity: None type: ClusterIP---apiVersion: v1kind: Endpointsmetadata: name: nodesubsets: - addresses: - ip: 192. 168. 123. 140 ports: - name: node port: 8080 protocol: TCP---apiVersion: v1kind: Routemetadata: name: nodespec: to: kind: Service name: nodeTesting our application: I am using the same application and method of installation as the previous post so I won’tduplicate it here. Just in case though let’s make sure that the application is availablevia the route. $ curl http://node-vm. apps. 192. 168. 122. 101. nip. io <!DOCTYPE html><html lang= en > <head> <meta charset= utf-8 /> <meta http-equiv= X-UA-Compatible content= IE=edge,chrome=1 /> <title>Welcome to OpenShift</title> . . . outout. . . <p> Page view count: <span class= code id= count-value >2</span> . . . output. . . </p> </head></html>Networking in DetailJust like in the previous post we should confirm how this works all together. Let’s review the virtual machine to virtual machinecommunication and route to virtual machine. Kubernetes-level: services: We have created two headless services one for node and one for mongo. This allows us to use the hostname mongo to connect to MongoDB via the alternative interface. $ oc get servicesNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGEmongo ClusterIP None <none> 27017/TCP 8hnode ClusterIP None <none> 8080/TCP 7h$ ssh fedora@$(oc get pod virt-launcher-nodejs-dlgv6 --template '{{. status. podIP}}') cat /etc/sysconfig/nodejsMONGO_URL=mongodb://nodejs:nodejspassword@mongo. vm. svc. cluster. local/nodejsendpoints: The endpoints below were manually created for each virtual machine based on the IP Address of eth1. 
$ oc get endpointsNAME ENDPOINTS AGEmongo 192. 168. 123. 139:27017 8hnode 192. 168. 123. 140:8080 7hroute: This will allow us access the NodeJS example application using the route url. $ oc get route NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARDnode node-vm. apps. 192. 168. 122. 101. nip. io node <all> NoneHost-level: In addition to the existing interface eth0 and bridge br0, eth1 is the uplink for the ovs-cni bridge br1. This needs to be manually configured prior to use. interfaces: ip a . . . output. . . 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 52:54:00:5f:90:85 brd ff:ff:ff:ff:ff:ff inet 192. 168. 122. 111/24 brd 192. 168. 122. 255 scope global noprefixroute dynamic eth0 valid_lft 2282sec preferred_lft 2282sec inet6 fe80::5054:ff:fe5f:9085/64 scope link valid_lft forever preferred_lft forever3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP group default qlen 1000 link/ether 52:54:01:5f:90:85 brd ff:ff:ff:ff:ff:ff. . . output. . . 5: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 2a:6e:65:7e:65:3a brd ff:ff:ff:ff:ff:ff9: br1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 6e:d5:db:12:b5:43 brd ff:ff:ff:ff:ff:ff10: br0: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group default qlen 1000 link/ether aa:3c:bd:5a:ac:46 brd ff:ff:ff:ff:ff:ff. . . output. . . Bridge: The command and output below shows the Open vSwitch bridge and interfaces. The veth8bf25a9b interfaceis one of the veth pair created to connect the virtual machine to the Open vSwitch bridge. ovs-vsctl show 77147900-3d26-46c6-ac0b-755da3aa4b97 Bridge br1 Port br1 Interface br1 type: internal Port veth8bf25a9b Interface veth8bf25a9b Port eth1 Interface eth1 . . . output. . . Pod-level: interfaces: There are two bridges k6t-eth0 and k6t-net0. eth0 and net1 are a veth pair with the alternate sideavailable on the host. eth0 is a member of the k6t-eth0 bridge. net1 is a member of the k6t-net0 bridge. ~ oc exec -n vm -c compute virt-launcher-nodejs-76xk7 -- ip a . . . output3: eth0@if41: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master k6t-eth0 state UP group default link/ether 0a:58:0a:17:79:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet6 fe80::858:aff:fe17:7904/64 scope link valid_lft forever preferred_lft forever5: net1@if42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master k6t-net1 state UP group default link/ether 02:00:00:74:17:75 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet6 fe80::ff:fe74:1775/64 scope link valid_lft forever preferred_lft forever6: k6t-eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default link/ether 0a:58:0a:17:79:04 brd ff:ff:ff:ff:ff:ff inet 169. 254. 75. 10/32 brd 169. 254. 75. 10 scope global k6t-eth0 valid_lft forever preferred_lft forever inet6 fe80::858:aff:fe82:21/64 scope link valid_lft forever preferred_lft forever7: k6t-net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether 02:00:00:74:17:75 brd ff:ff:ff:ff:ff:ff inet 169. 254. 75. 11/32 brd 169. 254. 75. 
11 scope global k6t-net1 valid_lft forever preferred_lft forever inet6 fe80::ff:fe07:2182/64 scope link dadfailed tentative valid_lft forever preferred_lft forever8: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast master k6t-eth0 state UNKNOWN group default qlen 1000 link/ether fe:58:0a:82:00:21 brd ff:ff:ff:ff:ff:ff inet6 fe80::fc58:aff:fe82:21/64 scope link valid_lft forever preferred_lft forever9: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master k6t-net1 state UNKNOWN group default qlen 1000 link/ether fe:37:cf:e0:ad:f2 brd ff:ff:ff:ff:ff:ff inet6 fe80::fc37:cfff:fee0:adf2/64 scope link valid_lft forever preferred_lft foreverShowing the bridge k6t-eth0 and k6t-net member ports. ~ oc exec -n vm -c compute virt-launcher-nodejs-dlgv6 -- bridge link show 3: eth0 state UP @if41: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 master k6t-eth0 state forwarding priority 32 cost 25: net1 state UP @if42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master k6t-net1 state forwarding priority 32 cost 28: vnet0 state UNKNOWN : <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 master k6t-eth0 state forwarding priority 32 cost 1009: vnet1 state UNKNOWN : <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master k6t-net1 state forwarding priority 32 cost 100DHCP: The virtual machine network is configured by DHCP. You can see virt-launcher has UDP port 67 openon the k6t-eth0 interface to serve DHCP to the virtual machine. As described in the previouspost the virt-launcher process containsa simple DHCP server that provides an offer and typical options to the virtual machine instance. ~ oc exec -n vm -c compute virt-launcher-nodejs-dlgv6 -- ss -tuanp Netid State Recv-Q Send-Q Local Address:Port Peer Address:Portudp UNCONN 0 0 0. 0. 0. 0%k6t-eth0:67 0. 0. 0. 0:* users:(( virt-launcher ,pid=7,fd=15))libvirt: With virsh domiflist we can also see that the vnet0 interface is a member on the k6t-eth0 bridge and vnet1 is a member of the k6t-net1 bridge. ~ oc exec -n vm -c compute virt-launcher-nodejs-dlgv6 -- virsh domiflist vm_nodejs Interface Type Source Model MAC-------------------------------------------------------vnet0 bridge k6t-eth0 virtio 0a:58:0a:82:00:2avnet1 bridge k6t-net1 virtio 20:37:cf:e0:ad:f2VM-level: interfaces: Fortunately the vm interfaces are fairly typical. Two interfaces: one that has been assigned the originalpod ip address and the other the ovs-cni layer 2 interface. The eth1 interface receives a IP addressfrom DHCP provided by dnsmasq that was configured by libvirt network on the physical host. ~ ssh fedora@$(oc get pod virt-launcher-nodejs-dlgv6 --template '{{. status. podIP}}') sudo ip a . . . output. . . 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc fq_codel state UP group default qlen 1000 link/ether 0a:58:0a:82:00:2a brd ff:ff:ff:ff:ff:ff inet 10. 130. 0. 42/23 brd 10. 130. 1. 255 scope global dynamic eth0 valid_lft 86239518sec preferred_lft 86239518sec inet6 fe80::858:aff:fe82:2a/64 scope link tentative dadfailed valid_lft forever preferred_lft forever3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 20:37:cf:e0:ad:f2 brd ff:ff:ff:ff:ff:ff inet 192. 168. 123. 140/24 brd 192. 168. 123. 255 scope global dynamic eth1 valid_lft 3106sec preferred_lft 3106sec inet6 fe80::2237:cfff:fee0:adf2/64 scope link valid_lft forever preferred_lft foreverConfiguration and DNS: In this example we want to use Kubernetes services so special care must be used whenconfiguring the network interfaces. 
The default route and dns configuration must bemaintained by eth0. eth1 has both route and dns configuration disabled. ~ ssh fedora@$(oc get pod virt-launcher-nodejs-dlgv6 --template '{{. status. podIP}}') sudo cat /etc/sysconfig/network-scripts/ifcfg-eth0 BOOTPROTO=dhcpDEVICE=eth0ONBOOT=yesTYPE=EthernetUSERCTL=no# Use route and dns from DHCPDEFROUTE=yesPEERDNS=yes~ ssh fedora@$(oc get pod virt-launcher-nodejs-dlgv6 --template '{{. status. podIP}}') sudo cat /etc/sysconfig/network-scripts/ifcfg-eth1 BOOTPROTO=dhcpDEVICE=eth1IPV6INIT=noNM_CONTROLLED=noONBOOT=yesTYPE=Ethernet# Do not use route and dns from DHCPPEERDNS=noDEFROUTE=noJust quickly wanted to cat the /etc/resolv. conf file to show that DNS is configured so that kube-dns will be properly queried. ~ ssh fedora@$(oc get pod virt-launcher-nodejs-76xk7 --template '{{. status. podIP}}') sudo cat /etc/resolv. conf search vm. svc. cluster. local. svc. cluster. local. cluster. local. 168. 122. 112. nip. io. nameserver 192. 168. 122. 112VM to VM communication: The virtual machines are on different hosts. This was done purposely to show that connectivitybetween virtual machine and hosts. Here we finally get to use Skydive. The real-time topology below along witharrows annotate the flow of packets between the host and virtual machine network devices. VM to VM Connectivity Tests: To confirm connectivity we are going to do a few things. First look for an establishedconnection to MongoDB and finally check the NodeJS logs looking for confirmation of database connection. TCP connection: After connecting to the nodejs virtual machine via ssh we can use ss to determine the current TCP connections. We are specifically looking for the established connections to the MongoDB service that is running on the mongodb virtual machine. ssh fedora@$(oc get pod virt-launcher-nodejs-dlgv6 --template '{{. status. podIP}}') sudo ss -tanp State Recv-Q Send-Q Local Address:Port Peer Address:Port. . . output. . . ESTAB 0 0 192. 168. 123. 140:33156 192. 168. 123. 139:27017 users:(( node ,pid=12893,fd=11))ESTAB 0 0 192. 168. 123. 140:33162 192. 168. 123. 139:27017 users:(( node ,pid=12893,fd=13))ESTAB 0 0 192. 168. 123. 140:33164 192. 168. 123. 139:27017 users:(( node ,pid=12893,fd=14)). . . output. . . Logs: Here we are reviewing the logs of node to confirm we have a database connection to mongo via the service hostname. ssh fedora@$(oc get pod virt-launcher-nodejs-dlgv6 --template '{{. status. podIP}}') sudo journalctl -u nodejs . . . output. . . October 01 18:28:09 nodejs. localdomain systemd[1]: Started OpenShift NodeJS Example. October 01 18:28:10 nodejs. localdomain node[12893]: Server running on http://0. 0. 0. 0:8080October 01 18:28:10 nodejs. localdomain node[12893]: Connected to MongoDB at: mongodb://nodejs:nodejspassword@mongo. vm. svc. cluster. local/nodejs. . . output. . . Route to VM communication: Finally let’s confirm that when using the OKD route that traffic is successfully routed to nodejs eth1 interface. HAProxy Traffic Status: OKD HAProxy provides optional traffic status - which we already enabled. The screenshot below showsthe requests that Nginx is receiving for nodejs. ingress. virtomation. com. haproxy-stats HAProxy to NodeJS VM: The HAProxy pod runs on the master OKD in this scenario. Using skydive we can see a TCP 8080 connection to nodejs eth1 interface exiting eth1 of the master. $ oc get pod -o wide -n default -l router=router NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODErouter-2-nfqr4 0/1 Running 0 20h 192. 168. 122. 101 192. 168. 
122. 101. nip. io <none> haproxy-vm " }, { - "id": 128, + "id": 129, "url": "/2018/attaching-to-multiple-networks.html", "title": "Attaching To Multiple Networks", "author" : "yuvalif", "tags" : "multus, networking, CNI, multiple networks", "body": "IntroductionVirtual Machines often need multiple interfaces connected to different networks. This could be because the application running on it expect to be connected to different interfaces (e. g. a virtual router running on the VM); because the VM need L2 connectivity to a network not managed by Kubernetes (e. g. allow for PXE booting); because an existing VM, ported into KubeVirt, runs applications that expect multiple interfaces to exists; or any other reason - which we’ll be happy to hear about in the comment section! In KubeVirt, as nicely explained in this blog post, there is already a mechanism to take an interface from the pod and move it into the Virtual Machine. However, Kubernetes allows for a single network plugin to be used in a cluster (across all pods), and provide one interface for each pod. This forces us to choose between having pod network connectivity and any other network connectivity for the pod and, in the context of KubeVirt, the Virtual Machine within. To overcome this limitation, we use Multus, which is a “meta” CNI (Container Network Interface), allowing multiple CNIs to coexist, and allow for a pod to use the right ones for its networking needs. How Does it Work for Pods?The magic is done via a new CRD (Custom Resource Definition) called NetworkAttachmentDefinition introduced by the Multus project, and adopted by the Kubernetes community as the de-facto standard for attaching pods to one or more networks. These network definition contains a field called type which indicates the name of the actual CNI that provide the network, and different configuration payloads which the Multus CNI is passing to the actual CNI. For example, the following network definition: apiVersion: k8s. cni. cncf. io/v1 kind: NetworkAttachmentDefinitionmetadata: name: a-bridge-networkspec: config: '{ cniVersion : 0. 3. 0 , name : a-bridge-network , type : bridge , bridge : br0 , isGateway : true, ipam : { type : host-local , subnet : 192. 168. 5. 0/24 , dataDir : /mnt/cluster-ipam }}'Allows attaching a pod into a network provided by the bridge CNI. Once a pod with the following annotation is created: apiVersion: v1kind: Podmetadata: name: samplepod annotations: k8s. v1. cni. cncf. io/networks: a-bridge-networkspec:The Multus CNI will find out whether a CNI of type bridge exists, and invoke it with the rest of the configuration in the CRD. Even without Multus, this exact configuration could have been put under /etc/cni/net. d, and provide the same network to the pod, using the bride CNI. But, in such a case, this would have been the only network interface to the pod, since Kubernetes just takes the first configuration file from that directory (sorted by alphabetical order) and use it to provide a single interface for all pods. If we have Multus around, and some other CNI (e. g. flannel), in addition to the bridge one, we could have have defined another NetworkAttachmentDefinition object, of type flannel, with its configuration, for example: apiVersion: k8s. cni. cncf. io/v1 kind: NetworkAttachmentDefinitionmetadata: name: flannel-networkspec: config: '{ cniVersion : 0. 3. 
0 , type : flannel , delegate : { isDefaultGateway : true } }'Add a reference to it in the pod’s annotation, and have two interfaces, connected to two different networks on the pod. It is quite common that basic networking is provided by one of the mainstream CNIs (flannel, calico, weave etc. ) for all pods, and more advanced cases are added specifically when needed. For that, a default CNI could be configured for Multus, so that a NetworkAttachmentDefinition object is not needed, nor any annotation at pod level. The interface provided for such a network wil be marked as eth0 on the pod, for smooth transition when Multus is introduced into an cluster with networking. Any other interface added to the pod due to an explicit NetworkAttachmentDefinition object, will be marked as: net1, net2 and so on. How Does it Work in KubeVirt?Most initial steps would be the same as in the pod’s case: Install the different CNIs that you would like to provide networks to our Virtual Machines Install Multus Configure Multus with some default CNI that we would like to provide eth0 for all Virtual Machines Add NetworkAttachmentDefinition object for each network that we would like some of our Virtual Machines to be usingNow, inside the VMI (virtual Machine Instance) definition, a new type of network called multus should be added: networks: - name: default-net pod: {} - name: another-net multus: networkName: a-bridge-networkThis would allow VMI interfaces to be connected to two networks: default which is connected to the CNI which is defined as the default one for Multus. No NetworkAttachmentDefinition CRD is needed for this one, and we assume that the needed configuration is just taken from the default CNI’s configuration under /etc/cni/net. d/. We also assume that an IP address will be provided to eth0 on the pod, which will be delegated to the Virtual Machine’s eth0 interface. another-net which is connected to the network defined by a NetworkAttachmentDefinition CRD named a-bridge-network. The identity fo the CNI that would actually provide the network, as well as the configuration for this network are all defined in the CRD. An interface named net1 connected to that network wil be created on the pod. If this interface get an IP address from the CNI, this IP will be delegated to the Virtual Machine’s eth1 interface. If no IP address is given by the CNI, no IP will be given to eth1 on the Virtual Machine, and only L2 connectivity will be provided. Deployment ExampleIn the following example we use flannel as the CNI that provides the primary pod network, and an OVS bridge CNI provides a secondary network. Install Kubernetes: This was tested with latest version, on a single node cluster. Best would be to just follow these instructions Since we use a single node cluster, Don’t forget to allow scheduling pods on the master:$ kubectl taint nodes --all node-role. kubernetes. io/master- If running kubectl from master itself, don’t forget to copy over the conf file:$ mkdir -p /$USER/. kube && cp /etc/kubernetes/admin. conf /$USER/. kube/configInstall Flannel: Make sure pass these parameters are used when starting kubeadm:$ kubeadm init --pod-network-cidr=10. 244. 0. 0/16 Then call:$ kubectl apply -f https://raw. githubusercontent. com/coreos/flannel/v0. 10. 0/Documentation/kube-flannel. 
ymlInstall and Start OVS: On Fedora28 that would be (see here for other options):$ dnf install openvswitch$ systemctl start openvswitchInstall and Configure Multus: Install Multus as a daemon set (flannel is already set as the default CNI in the yaml below):$ kubectl apply -f https://raw. githubusercontent. com/intel/multus-cni/master/images/multus-daemonset. yml Make sure that Multus is the first CNI under: /etc/cni/net. d/. If not, rename it so it would be the first, e. g. : mv /etc/cni/net. d/70-multus. conf /etc/cni/net. d/00-multus. confInstall and Configure OVS CNI: First step would be to create the OVS bridge:ovs-vsctl add-br blue To install the OVS CNI use:$ kubectl apply -f https://raw. githubusercontent. com/k8snetworkplumbingwg/ovs-cni/main/examples/ovs-cni. yml Create a NetworkAttachmentDefinition CRD for the “blue” bridge:apiVersion: k8s. cni. cncf. io/v1 kind: NetworkAttachmentDefinitionmetadata: name: ovs-bluespec: config: '{ cniVersion : 0. 3. 1 , type : ovs , bridge : blue }' To use as specific port/vlan from that bridge, you should first create one:ovs-vsctl add-br blue1 blue 100 Then, define its NetworkAttachmentDefinition CRD:apiVersion: k8s. cni. cncf. io/v1 kind: NetworkAttachmentDefinitionmetadata: name: ovs-blue100spec: config: '{ cniVersion : 0. 3. 1 , type : ovs , bridge : blue100 , vlan : 100 }' More information could be found in the OVS CNI documentationDeploy a Virtual Machine with 2 Interfaces: First step would be to deploy KubeVirt (note that 0. 8 is needed for Multus support):$ export VERSION=v0. 8. 0$ kubectl create -f https://github. com/kubevirt/kubevirt/releases/download/$VERSION/kubevirt. yaml Now, create a VMI with 2 interfaces, one connected to the default network (flannel in our case) and one to the OVS “blue” bridge:apiVersion: kubevirt. io/v1alpha2kind: VirtualMachineInstancemetadata: creationTimestamp: null labels: special: vmi-multus-multiple-net name: vmi-multus-multiple-netspec: domain: devices: disks: - disk: bus: virtio name: registrydisk volumeName: registryvolume - disk: bus: virtio name: cloudinitdisk volumeName: cloudinitvolume interfaces: - bridge: {} name: default - bridge: {} name: ovs-blue-net machine: type: resources: requests: memory: 1024M networks: - name: default pod: {} - multus: networkName: ovs-blue name: ovs-blue-net terminationGracePeriodSeconds: 0 volumes: - name: registryvolume registryDisk: image: kubevirt/fedora-cloud-container-disk-demo - cloudInitNoCloud: userData: | #!/bin/bash echo fedora | passwd fedora --stdin name: cloudinitvolumestatus: {} Once the machine is up and running, you can use virtctl to log into it and make sure that eth0 exists as the default interface (with an IP address on the flannel subnet) and eth1 as the interface connected to the OVS bridge (without an IP)" }, { - "id": 129, + "id": 130, "url": "/2018/KubeVirt-Memory-Overcommit.html", "title": "Kubevirt Memory Overcommit", "author" : "tripledes", "tags" : "memory, overcommitment", "body": "KubeVirt memory overcommitmentOne of the latest additions to KubeVirt has been the memory overcommitment feature which allows the memory being assigned to a Virtual Machine Instance to be different than what it requests to Kubernetes. What it does: As you might already know, when a pod is created in Kubernetes, it can define requests for resources like CPU or memory, those requests are taken into account for deciding to what node the pod will be scheduled. 
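For illustration, such a request is declared directly in the pod spec; a minimal sketch (the pod name and image below are placeholders, not taken from this post):

apiVersion: v1
kind: Pod
metadata:
  name: request-demo        # placeholder name for this sketch
spec:
  containers:
  - name: app
    image: example.registry/app:latest   # placeholder image
    resources:
      requests:
        memory: 1Gi
        cpu: 500m

The scheduler will only place this pod on a node whose remaining allocatable resources still cover 1Gi of memory and half a CPU, which is exactly the accounting the memory overcommitment feature lets a VMI sidestep.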
Usually, on a node, there are already some resources reserved or requested, Kubernetes itself reserves some resources for its processes and there might be monitoring pods or storage pods already requesting resources as well, all those are also accounted for what is left to run pods. Having the memory overcommitment feature included in KubeVirt allows the users to assign the VMI more or less memory than set into the requests, offering more flexibility, giving the user the option to overcommit (or undercommit) the node’s memory if needed. How does it work?: It’s not too complex to get this working, all that is needed is to have, at least, KubeVirt version 0. 8. 0 installed, which includes the aforementioned feature, and use the following settings on the VMI definition: domain. memory. guest: Defines the amount memory assigned to the VMI process (by libvirt). domain. resources. requests. memory: Defines the memory requested to Kubernetes by the pod that will run the VMI. domain. resources. overcommitGuestOverhead: Boolean switch to enable the feature. Once those are in place, Kubernetes will consider the requested memory for scheduling while libvirt will define the domain with the amount of memory defined in domain. memory. guest. For example, let’s define a VMI which requests 24534983Ki but wants to use 25761732Kiinstead. apiVersion: kubevirt. io/v1alpha2kind: VirtualMachineInstancemetadata: name: testvm1 namespace: kubevirtspec: domain: memory: guest: 25761732Ki resources: requests: memory: 24534983Ki overcommitGuestOverhead: true devices: disks: - volumeName: myvolume name: mydisk disk: bus: virtio - name: cloudinitdisk volumeName: cloudinitvolume cdrom: bus: virtio volumes: - name: myvolume registryDisk: image: <registry_address>/kubevirt/fedora-cloud-container-disk-demo:latest - cloudInitNoCloud: userData: | #cloud-config hostname: testvm1 users: - name: kubevirt gecos: KubeVirt Project sudo: ALL=(ALL) NOPASSWD:ALL passwd: $6$JXbc3063IJir. e5h$ypMlYScNMlUtvQ8Il1ldZi/mat7wXTiRioGx6TQmJjTVMandKqr. jJfe99. QckyfH/JJ. OdvLb5/OrCa8ftLr. shell: /bin/bash home: /home/kubevirt lock_passwd: false name: cloudinitvolumeAs explained already, the QEMU process spawn by libvirt, will get 25761732Ki of RAM, minus some amount for the graphics and firmwares, the guest OS will see its total memory close to that amount, while Kubernetes would think the pod requests 24534983Ki, making more room to schedule more pods if needed. Now let’s imagine we want to undercommit, here’s the same YAML definition but setting less memory than requested: apiVersion: kubevirt. io/v1alpha2kind: VirtualMachineInstancemetadata: name: testvm1 namespace: kubevirtspec: domain: memory: guest: 23308234Ki resources: requests: memory: 24534983Ki overcommitGuestOverhead: true devices: disks: - volumeName: myvolume name: mydisk disk: bus: virtio - name: cloudinitdisk volumeName: cloudinitvolume cdrom: bus: virtio volumes: - name: myvolume registryDisk: image: <registry_url>/kubevirt/fedora-cloud-container-disk-demo:latest - cloudInitNoCloud: userData: | #cloud-config hostname: testvm1 users: - name: kubevirt gecos: KubeVirt Project sudo: ALL=(ALL) NOPASSWD:ALL passwd: $6$JXbc3063IJir. e5h$ypMlYScNMlUtvQ8Il1ldZi/mat7wXTiRioGx6TQmJjTVMandKqr. jJfe99. QckyfH/JJ. OdvLb5/OrCa8ftLr. shell: /bin/bash home: /home/kubevirt lock_passwd: false name: cloudinitvolumeWhy this is needed: At this point you might be asking yourself why would this feature be needed if Kubernetes already does resource management for you, right? 
Well, there might be few scenarios where this feature would be needed, for instance imagine you decide to have a cluster or few nodes completely dedicated to run Virtual Machines, this feature allows you to make use of all the memory in the nodes without really accounting for the already reserved or requested memory in the system. Let’s put it as an example, say a node has 100GiB of RAM, with 2GiB of reserved memory plus 1GiB requested by monitoring and storage pods, that leaves the user 97GiB of allocatable memory to schedule pods, so each VMI that needs to be started on a node needs to request an amount that would fit, if the user wants to run 10 VMIs on each node with 10GiB of RAM Kubernetes wouldn’t allow that cause the sum of their requests would be more than what’s allocatable in the node. Using the memory overcommitment feature the user can tell Kubernetes that each VMI requests 9. 7GiB and set domain. memory. guest to 10GiB. The other way around, undercommitting the node, also works, for instance, to make sure that no matter how many VMIs will be under memory pressure the node will still be in good shape. Using the same node sizing, 100GiB, we could define 10 VMIs to request 9. 7GiB, while giving them exactly 9. 0GiB, that’d leave around 7GiB for the node processes while Kubernetes wouldn’t try to schedule any more pods on it cause all the requests already sum up to 100% of the allocatable memory. " }, { - "id": 130, + "id": 131, "url": "/2018/changelog-v0.8.0.html", "title": "KubeVirt v0.8.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 8. 0: Released on: Thu Sep 6 14:25:22 2018 +0200 Support for DataVolume Support for a subprotocol for webbrowser terminals Support for virtio-rng Support disconnected VMs Support for setting host model Support for host CPU passthrough Support setting a vNICs mac and PCI address Support for memory over-commit Support booting from network devices Use less devices by default, aka disable unused ones Improved VMI shutdown status More logging to improve debugability A lot of small fixes, including typos and documentation fixes Race detection in tests Hook improvements Update to use Fedora 28 (includes updates of dependencies like libvirt and Move CI to support Kubernetes 1. 11" }, { - "id": 131, + "id": 132, "url": "/2018/kubevirtci.html", "title": "Kubevirtci", "author" : "awels", "tags" : "kubevirtci, ci-cd, cicd, qemu, virtual machine, container", "body": "Building Clusters with kubevirtciOne of the projects in the KubeVirt github organization is a project called kubevirtci. While this may sound like it’s the repo containing the KubeVirt CI system and scripts, that’s not completely accurate. We leverage kubevirtci for our CI process, but there’s more to the CI system than just this repo. Today, we’re not going to talk about the CI system in general, but instead we’re going to talk about the kubevirtci project, what it does, how it does it and why it’s pretty cool. What it does: In short: Deploys a Kubernetes (or OpenShift) cluster using QEMU Virtual Machines, that run inside Docker containers. First a base image is provisioned, this image contains a stock a Kubernetes node. Then one or more of these images are deployed in the target environment to make up the cluster. Provisioning: Building the Kubernetes Node virtual machine happens in several steps: Use a Fedora cloud image. Provision Kubernetes using Ansible onto that image. 
Install any drivers and providers/provisioners for storage or other resources needed for the cluster. Once the node is provisioned, it is packaged inside a Docker container and published to a registry. Deployment: To create a cluster, all one has to do is start up a number of containers, which in turn start up the Virtual Machine contained within that container. The first node is designated as the master node and a script is run during startup to configure any master node specific configuration. Any additional nodes have a post startup script run to register themselves as nodes with the master. As this point we have a basic kubernetes cluster running the number of specified nodes. Plus some other interesting services described later. How does it work?: After the provision step, which has created a pre-built Kubernetes node in a virtual machine, the deployment step doesn’t happen all by itself. Starting up the cluster is handled by a cli application, aptly named ‘gocli’. Gocli is a go application that contains the knowledge needed to start the cluster, make a node the master node, and then register the other nodes with the master node as compute nodes. It also does a few other nice things, like start a dnsmasq container for dns inside the cluster, and a docker registry container for images. It also can extract and update the kubeconfig for the cluster, to make it possible for external tools to interact with the cluster (such as kubectl). And it can of course shut down the cluster in an orderly fashion. The entire cluster is ephemeral which is very helpful when developing new applications which could potentially damage the cluster. Simply shut down the cluster, and start a new fresh one. The ephemeral nature and ease of deployment makes this extremely useful in a CI/CD pipeline context Why is the registry container useful? Let’s say you are developing a new application and you wish to test it in a Kubernetes cluster. You would build the container image with whatever tools you normally use, you can then publish it to the cluster registry, and deploy the manifest that uses the registry to deploy the container in the cluster. The gocli command tool has the ability to find an external name and port for the internal registry into which you can publish your images. All of the above allows you to now with a few commands, spin up a kubernetes cluster, build application images, push those images into a registry inside that cluster, and deploy that application inside the cluster. As a developer this allows quick compile/test scenarios in a somewhat realistic environment on a regular workstation. It allows for automatic end to end testing in a real cluster, and it allows for CI/CD tools to run tests against an ephemeral cluster that can be spun up and destroyed easily. Why this is cool: So how is this better than simply creating some Virtual Machines and cloning them to add new nodes? For one packaging is much easier, you can simply drop it in a container registry and its available everywhere the registry is available. Another thing is a lot of the configuration is stored in the container instead of in the Virtual Machine, so it is easy to layer on top of that container a different configuration for different use cases. The Virtual Machine stays the same, it’s the container that changes to meet the needs of the use case. 
Since the container knows all the details of the Virtual Machine it is also possible to construct utilities that retrieve information about the Virtual Machine or pass new information to the Virtual Machine through the container. But the biggest advantage is that this can all be easily automated and repeated. Each time you spin up a cluster, it will be identical from a previous run. You have a mechanism to allow you to populate the cluster with an application you are developing/testing and then running automated processes against that application. KubeVirt itself works on a similar principal, embed a QEMU process in a container to start a Virtual Machine with configuration obtained from the encapsulating container. And the KubeVirt development team uses kubevirtci images in their development workflow as well. And a final interesting thought: Everything mentioned in this article is a container, from the Virtual Machine images, to the gocli utility. It might be an interesting exercise to see if we can leverage kubernetes to manage the life cycle in a CI/CD system. We would then be creating Kubernetes clusters inside a kubernetes cluster to run CI/CD. " }, { - "id": 132, + "id": 133, "url": "/2018/Kubevirt-v0.7.0.html", "title": "Kubevirt V0.7.0", "author" : "karmab", "tags" : "hilights, release notes, review, hugepages", "body": "IntroductionKubeVirt v0. 7. 0 was released a few weeks ago and brings a bunch of new features that this blog post will detail. The full list is visible here but we will pick the ones oriented to the end user Featureshugepages support: To use hugepages as backing memory, we need to indicate a desired amount of memory (resources. requests. memory) and size of hugepages to use (memory. hugepages. pageSize) apiVersion: kubevirt. io/v1alpha1kind: VirtualMachinemetadata: name: myvmspec: domain: resources: requests: memory: 64Mi memory: hugepages: pageSize: 2Mi disks: - name: myimage volumeName: myimage disk: {} volumes: - name: myimage persistentVolumeClaim: claimname: myclaimNote that a node must have pre-allocated hugepages hugepages size cannot be bigger than requested memory requested memory must be divisible by hugepages sizesetting network interface model and MAC address: the following syntax within interfaces section allows us to set both a mac address and network model kind: VMspec: domain: devices: interfaces: - name: red macAddress: de:ad:00:00:be:af model: e1000 bridge: {} networks: - name: red pod: {}alternative network models can be e1000 e1000e ne2k_pci pcnet rtl8139 virtiosetting a disks serial number: The new keyword serial in the disks section allows us to specify a serial number apiVersion: kubevirt. io/v1alpha1kind: VirtualMachinemetadata: name: myvmspec: domain: resources: requests: memory: 64Mi disks: - name: myimage volumeName: myimage serial: sn-11223344 disk: {} volumes: - name: myimage persistentVolumeClaim: claimname: myclaimspecifying the CPU model: Setting the CPU model is possible via spec. domain. cpu. model. The following VM will have a CPU with the Conroe model: metadata: name: myvmispec: domain: cpu: # this sets the CPU model model: ConroeThe available models are listed here Additionally, we can also use host-model host-passthroughvirtctl expose: To access services listening within vms, we can expose their ports using standard kubernetes services. 
Alternatively, we can make use of the virtctl binary to achieve the same result: to expose a cluster ip servicevirtctl expose virtualmachineinstance vmi-ephemeral --name vmiservice --port 27017 --target-port 22 to expose a node port servicevirtctl expose virtualmachineinstance vmi-ephemeral --name nodeport --type NodePort --port 27017 --target-port 22 --node-port 30000 to expose a load balancer servicevirtctl expose virtualmachineinstance vmi-ephemeral --name lbsvc --type LoadBalancer --port 27017 --target-port 3389Kubernetes compatible networking approach (SLIRP): In slirp mode, virtual machines are connected to the network backend using QEMU user networking mode. In this mode, QEMU allocates internal IP addresses to virtual machines and hides them behind NAT. kind: VMspec: domain: devices: interfaces: - name: red slirp: {} # connect using SLIRP mode networks: - name: red pod: {}Role aggregation for our roles: Every KubeVirt installation after version v0. 5. 1 comes a set of default RBAC cluster roles that can be used to grant users access to VirtualMachineInstances. The kubevirt. io:admin and kubevirt. io:edit ClusterRoles have console and VNC access permissions built into them. ConclusionThis concludes our review of latest kubevirt features. Enjoy them ! " }, { - "id": 133, + "id": 134, "url": "/2018/changelog-v0.7.0.html", "title": "KubeVirt v0.7.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 7. 0: Released on: Wed Jul 4 17:41:33 2018 +0200 CI: Move test storage to hostPath CI: Add support for Kubernetes 1. 10. 4 CI: Improved network tests for multiple-interfaces CI: Drop Origin 3. 9 support CI: Add test for testing templates on Origin VM to VMI rename VM affinity and anti-affinity Add awareness for multiple networks Add hugepage support Add device-plugin based kvm Add support for setting the network interface model Add (basic and inital) Kubernetes compatible networking approach (SLIRP) Add role aggregation for our roles Add support for setting a disks serial number Add support for specyfing the CPU model Add support for setting an network intefraces MAC address Relocate binaries for FHS conformance Logging improvements Template fixes Fix OpenShift CRD validation virtctl: Improve vnc logging improvements virtctl: Add expose virtctl: Use PATCH instead of PUT" }, { - "id": 134, + "id": 135, "url": "/2018/Unit-Test-Howto.html", "title": "Unit Test Howto", "author" : "yuvalif", "tags" : "unit testing", "body": "There are way too many reasons to write unit tests, but my favorite one is: the freedom to hack, modify and improve the code without fear, and get quick feedback that you are on the right track. Of course, writing good integration tests (the stuff under the tests directory) is the best way to validate that everything works, but unit tests has great value as: They are much faster to run (~30 seconds in our case) You get nice coverage reports with coveralls No need to: make cluster up/sync Cover corner cases and easier to debug Some Notes: We use same frameworks (ginkgo, gomega) for unit testing and integration testing, which means that with the same learning curve, you can develop much more! “Bang for the Buck” - it usually takes 20% of the time to get to 80% coverage, and 80% of the time to get to 100%. Which mean that you have to use common sense when improving coverage - some code is just fine with 80% coverage (e. g. large files calling some other APIs with little logic), and other would benefit from getting close to 100% (e. g. 
complex core functionality handling lots of error cases) Follow the “boy (or girl) scout rule” - every time you enhance/fix some code, add more testing around the existing code as well Avoid “white box testing”, as this will cause endless maintenance of the test code. Best way to assure that, is to put the test code under a different package than the code under test Explore coveralls. Not only it will show you the coverage and the overall trend, it will also help you understand which tests are missing. When drilling down into a file, you can see hits per line, and make better decision on what needs to be covered next FrameworksThere are several frameworks we use to write unit tests: The tests themselves are written using ginkgo, which is a Behavior-Driven Development (BDD) framework The library used for assertions in the tests is gomega. It has a very rich set of matchers, so, before you write you own code around the “equal” matcher, check here to see if there is a more expressive assertion you can use We use GoMock to generate mocks for the different kubevirt interfaces and objects. The command make generate will (among other things) create a file holding the mocked version of our objects and interfaces Many examples exist in our code on how to use this framework Also see here for sample code from GoMock If you need mocks for k8s objects and interfaces, use their framework. They have a tool called client-gen, which generates both the code and the mocks based on the defined APIs The generated mock interfaces and objects of the k8s client are here. Note that they a use a different mechanism to control the mocked behavior than the one used in GoMock Mocked actions are more are here Unit test utilities are placed under testutils Some integration test utilities are also useful for unit testing, see this file When testing interfaces, a mock HTTP server is usually needed. For that we use the golang httptest package gomega also has a package called ghttp that could be used for same purpose Best Practices and Tipsginkgo: Don’t mix setup and tests, use BeforeEach/JustBeforeEach for setup and It/Specify for tests Don’t write setup/cleanup code under Describe/Context clause, which is not inside BeforeEach/AfterEach etc. Make sure that any state change inside an “It” clause, that may impact other tests, is reverted in “AfterEach” Don’t assume the “It” clauses, which are at the same level, are invoked in any specific ordergomega: Be verbose and use specific matchers. For example, to check that an array has N elements, you can use: Expect(len(arr)). To(Equal(N))But a better way would be: Expect(arr). To(HaveLen(N))Function Override: Sometimes the code under test is invoking a function which is not mocked. In most cases, this is an indication that the code needs to be refactored, so this function, or its return values, will be passed as part of the API of the code being tested. However, if this refactoring is not possible (or too costly), you can inject your own implementation of this function. The original function should be defined as a closure, and assigned to a global variable. Since functions are 1st class citizens in go, you can assign your implementation to that function variable. 
More detailed example is here " }, { - "id": 135, + "id": 136, "url": "/2018/Run-Istio-with-kubevirt.html", "title": "Run Istio With Kubevirt", "author" : "SchSeba", "tags" : "istio, service mesh", "body": "On this blog post, we are going to deploy virtual machines with the KubeVirt project and insert them into the Istio service mesh. Some information about the technologies we are going to use in this blog post. Kubernetes: Production-Grade Container Orchestration. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. Kubeadm: kubeadm helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices. Calico: Calico provides secure network connectivity for containers and virtual machine workloads. Calico creates and manages a flat layer 3 network, assigning each workload a fully routable IP address. Workloads can communicate without IP encapsulation or network address translation for bare metal performance, easier troubleshooting, and better interoperability. In environments that require an overlay, Calico uses IP-in-IP tunneling or can work with other overlay networking such as flannel. KubeVirt: Virtualization API for kubernetes in order to manage virtual machines KubeVirt technology addresses the needs of development teams that have adopted or want to adopt Kubernetes but possess existing Virtual Machine-based workloads that cannot be easily containerized. More specifically, the technology provides a unified development platform where developers can build, modify, and deploy applications residing in both Application Containers as well as Virtual Machines in a common, shared environment. Benefits are broad and significant. Teams with a reliance on existing virtual machine-based workloads are empowered to rapidly containerize applications. With virtualized workloads placed directly in development workflows, teams can decompose them over time while still leveraging remaining virtualized components as is comfortably desired. Istio: An open platform to connect, manage, and secure microservices. Istio provides an easy way to create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, without requiring any changes in service code. You add Istio support to services by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices, configured and managed using Istio’s control plane functionality. Bookinfo application: A simple application that displays information about a book, similar to a single catalog entry of an online book store. Displayed on the page is a description of the book, book details (ISBN, number of pages, and so on), and a few book reviews. The Bookinfo application is broken into four separate microservices: productpage. The productpage microservice calls the details and reviews microservices to populate the page. details. The details microservice contains book information. reviews. The reviews microservice contains book reviews. It also calls the ratings microservice. ratings. The ratings microservice contains book ranking information that accompanies a book review. Note: This demo is going to be deployed on a kubernetes 1. 10 cluster. Requirements docker kubeadmFollow this document to install everything we need for the POC DeploymentFor the POC we clone this repo The repo contains all the configuration we need to deploy KubeVirt and Istio. kubevirt. yaml istio-demo-auth. 
yamlIt also contains the deployment configuration of our sample application. bookinfo. yaml bookinfo-gateway. yamlRun the bash script cd kubevirt-istio-poc. /deploy-istio-poc. shDemo applicationWe are going to use the bookinfo sample application from the istio webpage. The following yaml will deploy the bookinfo application with a ‘small’ change the details service will run on a virtual machine inside our kubernetes cluster! Note: it will take like 5 minutes for the application to by running inside the virtual machine because we install git and ruby, then clone the istio repo and start the application. POC detailsLets start with the bash script: #!/bin/bashset -xkubeadm init --pod-network-cidr=192. 168. 0. 0/16yes | cp -i /etc/kubernetes/admin. conf $HOME/. kube/configkubectl apply -f https://docs. projectcalico. org/v3. 0/getting-started/kubernetes/installation/hosted/kubeadm/1. 7/calico. yamlwhile [[ $(kubectl get po -n kube-system | grep kube-dns | grep Running | wc -l) -eq 0 ]]do echo Calico deployment is no ready yet. sleep 5doneecho Calico is ready. echo Taint the master node. kubectl taint nodes --all node-role. kubernetes. io/master-echo Deploy kubevirt. kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/v0. 7. 0/kubevirt. yamlecho Deploy istio. kubectl apply -f istio-demo-auth. yamlecho Add istio-injection to the default namespace. kubectl label namespace default istio-injection=enabledwhile [[ $(kubectl get po -n istio-system | grep sidecar-injector | grep Running | wc -l) -eq 0 ]]do echo Istio deployment is no ready yet. sleep 5doneecho Istio is ready. sleep 20echo Deploy the bookinfo example applicationkubectl apply -f bookinfo. yamlkubectl apply -f bookinfo-gateway. yamlThe follow script create a kubernetes cluster using the kubeadm command, deploy calico as a network CNI and taint the master node (have only one node in the cluster). After the cluster is up the script deploy both istio with mutual TLS and kubevirt projects, it also add the auto injection to the default namespace. At last the script deploy the bookinfo demo application that we change a bit. Lets take a closer look in the virtual machine part inside the bookinfo. yaml file ################################################################################################### Details service##################################################################################################apiVersion: v1kind: Servicemetadata: name: details labels: app: detailsspec: ports: - port: 9080 name: http selector: app: details---apiVersion: kubevirt. io/v1alpha2kind: VirtualMachineInstancemetadata: creationTimestamp: null labels: special: vmi-details app: details version: v1 name: vmi-detailsspec: domain: devices: disks: - disk: bus: virtio name: registrydisk volumeName: registryvolume - disk: bus: virtio name: cloudinitdisk volumeName: cloudinitvolume interfaces: - name: testSlirp slirp: {} ports: - name: http port: 9080 machine: type: resources: requests: memory: 1024M networks: - name: testSlirp pod: {} terminationGracePeriodSeconds: 0 volumes: - name: registryvolume registryDisk: image: kubevirt/fedora-cloud-container-disk-demo:latest - cloudInitNoCloud: userData: |- #!/bin/bash echo fedora |passwd fedora --stdin yum install git ruby -y git clone https://github. com/istio/istio. git cd istio/samples/bookinfo/src/details/ ruby details. rb 9080 & name: cloudinitvolumestatus: {}---. . . . . . . . . . 
Details:: Create a network of type podnetworks: - name: testSlirp pod: {} Create an interface of type slirp and connect it to the pod network by matching the pod network name Add our application portinterfaces: - name: testSlirp slirp: {} ports: - name: http port: 9080 Use the cloud init script to download install and run the details application- cloudInitNoCloud: userData: |- #!/bin/bash echo fedora |passwd fedora --stdin yum install git ruby -y git clone https://github. com/istio/istio. git cd istio/samples/bookinfo/src/details/ ruby details. rb 9080 & name: cloudinitvolumePOC CheckAfter running the bash script the environment should look like this NAME READY STATUS RESTARTS AGEproductpage-v1-7bbdd59459-w6nwq 2/2 Running 0 1hratings-v1-76dc7f6b9-6n6s9 2/2 Running 0 1hreviews-v1-64545d97b4-tvgl2 2/2 Running 0 1hreviews-v2-8cb9489c6-wjp9x 2/2 Running 0 1hreviews-v3-6bc884b456-hr5bm 2/2 Running 0 1hvirt-launcher-vmi-details-94pb6 3/3 Running 0 1hLet’s find the istio ingress service port # kubectl get service -n istio-system | grep istio-ingressgatewayistio-ingressgateway LoadBalancer 10. 97. 163. 91 <pending> 80:31380/TCP,443:31390/TCP,31400:31400/TCP 3hThen browse the following url http://<k8s-node-ip-address>:<istio-ingress-service-port-exposed-by-k8s>/productpageExample: http://10. 0. 0. 1:31380/productpageConclusionsThis POC show how we can use KubeVirt with Istio to integrate the Istio service mesh to virtual machine workloads running inside our kubernetes cluster. " }, { - "id": 136, + "id": 137, "url": "/2018/KVM-Using-Device-Plugins.html", "title": "Kvm Using Device Plugins", "author" : "stu-gott", "tags" : "kvm, qemu, device plugins", "body": "As of Kubernetes 1. 10, the Device Plugins API is now in beta! KubeVirt is nowusing this framework to provide hardware acceleration and network devices tovirtual machines. The motivation behind this is that virt-launcher pods are nolonger responsible for creating their own device nodes. Or stated another way:virt-launcher pods no longer require excess privileges just for the purpose ofcreating device nodes. Kubernetes Device Plugin Basics: Device Plugins consist of two main parts: a server that provides devices andpods that consume them. Each plugin server is used to share a preconfiguredlist of devices local to the node with pods scheduled on that node. Kubernetesmarks each node with the devices it’s capable of sharing, and uses the presenceof such devices when scheduling pods. Device Plugins In KubeVirt: Providing Devices: In KubeVirt virt-handler takes on the role of the device plugin server. When itstarts up on each node, it registers with the Kubernetes Device Plugin API andadvertises KVM and TUN devices. apiVersion: v1kind: Nodemetadata: . . . spec: . . . status: allocatable: cpu: 2 devices. kubevirt. io/kvm: 110 devices. kubevirt. io/tun: 110 pods: 110 . . . capacity: cpu: 2 devices. kubevirt. io/kvm: 110 devices. kubevirt. io/tun: 110 pods: 110 . . . In this case advertising 110 KVM or TUN devices is simply an arbitrary defaultbased on the number of pods that node is limited to. Consuming Devices: Now any pod that requests a devices. kubevirt. io/kvm ordevices. kubevirt. io/tun device can only be scheduled on nodes which providethem. On clusters where KubeVirt is deployed this conveniently happens to beall nodes in the cluster that have these physical devices, which normally meansall nodes in the cluster. Here’s an excerpt of what the pod spec looks like in this case. apiVersion: v1kind: Podmetadata: . . . 
spec: containers: - command: - /entrypoint. sh . . . name: compute . . . resources: limits: devices. kubevirt. io/kvm: 1 devices. kubevirt. io/tun: 1 requests: devices. kubevirt. io/kvm: 1 devices. kubevirt. io/tun: 1 memory: 161679432 securityContext: capabilities: add: - NET_ADMIN privileged: false runAsUser: 0 . . . Of special note is the securityContext stanza. The only special privilegerequired is the NET_ADMIN capability! This is needed by libvirt to set up thedomain’s networking stack. " }, { - "id": 137, + "id": 138, "url": "/2018/Proxy-vm-conclusion.html", "title": "Proxy VM Conclusion", "author" : "SchSeba", "tags" : "istio, multus, roadmap", "body": "This blog post follow my previous research on how to allow vms inside a k8s cluster tp play nice with istio and other sidecars. Research conclusions and network roadmapAfter the deep research about different options/ways to connect VM to pods, we find that all the solution have different pros and cons. All the represented solution need access to kernel modules and have the risk of conflicting with other networking tools. We decided to implement a 100% Kubernetes compatible network approach on the KubeVirt project by using the slirp interface qemu provides. This approach let the VM (from a networking perspective) behave like a process. Thus all traffic is going in and out of TCP or UDP sockets. The approach especially needs to avoid to rely on any specific Kernel configurations (like iptables, ebtables, tc, …) in order to not conflict with other Kubernetes networking tools like Istio or multus. This is just an intermediate solution, because it’s shortcomings (unmaintained, unsafe, not performing well) Slirp interface: Pros: vm ack like a process No external modules needed No external process needed Works with any sidecar solution no rely on any specific Kernel configurations pod can run without privilegeCons: poor performance use userspace network stackIptables only: Pros: No external modules needed No external process needed All the traffic is handled by the kernel user space not involvedCons: Istio dedicated solution! Not other process can change the iptables rulesIptables with a nat-proxy: Pros: No external modules needed Works with any sidecar solutionCons: Not other process can change the iptables rules External process needed The traffic is passed to user space Only support ingress TCP connectionIptables with a trasperent-proxy: Pros: other process can change the nat table (this solution works on the mangle table) better preformance comparing to nat-proxy Works with any sidecar solutionCons: Need NET_ADMIN capability for the docker External process needed The traffic is passed to user space Only support ingress TCP connection" }, { - "id": 138, + "id": 139, "url": "/2018/changelog-v0.6.0.html", "title": "KubeVirt v0.6.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 6. 0: Released on: Mon Jun 11 09:30:28 2018 +0200 A range of flakyness reducing test fixes Vagrant setup got deprectated Updated Docker and CentOS versions Add Kubernetes 1. 10. 
3 to test matrix A couple of ginkgo concurrency fixes A couple of spelling fixes A range if infra updates Use /dev/kvm if possible, otherwise fallback to emulation Add default view/edit/admin RBAC Roles Network MTU fixes CDRom drives are now read-only Secrets can now be correctly referenced on VMs Add disk boot ordering Add virtctl version Add virtctl expose Fix virtual machine memory calculations Add basic virtual machine Network API" }, { - "id": 139, + "id": 140, "url": "/2018/Non-Dockerized-Build.html", "title": "Non Dockerized Build", "author" : "yuvalif", "tags" : "docker, container, build", "body": "In this post we will set up an alternative to the existing containerized build system used in KubeVirt. A new makefile will be presented here, which you can for experimenting (if you are brave enough…) Why?Current build system for KubeVirt is done inside docker. This ensures a robust and consistent build environment: No need to install system dependencies Controlled versions of these dependencies Agnostic of local golang environmentSo, in general, you should just use the dockerized build system. Still, there are some drawbacks there: Tool integration: Since your tools are not running in the dockerized environment, they may give different outcome than the ones running in the dockerized environment Invoking any of the dockerized scripts (under hack directory) may be inconsistent with the outside environment (e. g. file path is different than the one on your machine) Build time: the dockerized build has some small overheads, and some improvements are still needed to make sure that caching work properly and build is optimized And last, but not least, sometimes it is just hard to resist the tinkering…How?: Currently, the Makefile includes targets that address different things: building, dependencies, cluster management, testing etc. - here I tried to modify the minimum which is required for non-containerized build. Anything not related to it, should just be done using the existing Makefile. note “NoteCross compilation is not covered here (e. g. building virtctl for mac and windows) Prerequisites: Best place to look for that is in the docker file definition for the build environment: hack/docker-builder/Dockerfile Note that not everything from there is needed for building, so the bare minimum on Fedora27 would be: sudo dnf install -y gitsudo dnf install -y libvirt-develsudo dnf install -y golangsudo dnf install -y dockersudo dnf install -y qemu-imgSimilarly to the containerized case, docker is still needed (e. g. all the cluster stuff is done via docker), and therefore, any docker related preparations are needed as well. This would include running docker on startup and making sure that docker commands does not need root privileges. On Fedora27 this would mean: sudo groupadd dockersudo usermod -aG docker $USERsudo systemctl enable dockersudo systemctl start dockerNow, getting the actual code could be done either via go get (don’t forget to set the GOPATH environment variable): go get -d kubevirt. io/kubevirt/. . . Or git clone: mkdir -p $GOPATH/src/kubevirt. io/ && cd $GOPATH/src/kubevirt. io/git clone https://github. com/kubevirt/kubevirtMakefile. nocontainer: all: buildbootstrap: go get -u github. com/onsi/ginkgo/ginkgo go get -u mvdan. cc/sh/cmd/shfmt go get -u -d k8s. io/code-generator/cmd/deepcopy-gen go get -u -d k8s. io/code-generator/cmd/defaulter-gen go get -u -d k8s. io/code-generator/cmd/openapi-gen cd ${GOPATH}/src/k8s. io/code-generator/cmd/deepcopy-gen && git checkout release-1. 
9 && go install cd ${GOPATH}/src/k8s. io/code-generator/cmd/defaulter-gen && git checkout release-1. 9 && go install cd ${GOPATH}/src/k8s. io/code-generator/cmd/openapi-gen && git checkout release-1. 9 && go installgenerate: . /hack/generate. shapidocs: generate . /hack/gen-swagger-doc/gen-swagger-docs. sh v1 htmlbuild: check go install -v . /cmd/. . . . /pkg/. . . . /hack/copy-cmd. shtest: build go test -v -cover . /pkg/. . . check: . /hack/check. shOUT_DIR=. /_outTESTS_OUT_DIR=${OUT_DIR}/testsfunctest: build go build -v . /tests/. . . ginkgo build . /tests mkdir -p ${TESTS_OUT_DIR}/ mv . /tests/tests. test ${TESTS_OUT_DIR}/ . /hack/functests. shcluster-sync: build . /hack/build-copy-artifacts. sh . /hack/build-manifests. sh . /hack/build-docker. sh build . /cluster/clean. sh . /cluster/deploy. sh. PHONY: bootstrap generate apidocs build test check functest cluster-syncTargets: To execute any of the targets use: make -f Makefile. nocontainer <target>File has the following targets: bootstrap: this is actually part of the prerequisites, but added all golang tool dependencies here, since this is agnostic of the running platform Should be called once Note that the k8s code generators use specific version Note that these are not code dependencies, as they are handled by using a vendor directory, as well as the distclean, deps-install and deps-update targets in the standard makefile generate: Calling hack/generate. sh script similarly to the standard makefile. It builds all generators (under the tools directory) and use them to generate: test mocks, KubeVirt resources and test yamls apidocs: this is similar to apidocs target in the standard makefile build: this is building all product binaries, and then using a script (copy-cmd. sh, should be placed under: hack) to copy the binaries from their standard location into the _out directory, where the cluster management scripts expect them test: building and running unit testscheck: using similar code to the one used in the standard makefile: formatting files, fixing package imports and calling go vet functest: building and running integration tests. After tests are built , they are moved to the _out directory so that the standard script for running integration tests would find them cluster-sync: this is the only “cluster management” target that had to be modified from the standard makefile" }, { - "id": 140, + "id": 141, "url": "/2018/Research-run-VMs-with-istio-service-mesh.html", "title": "Research Run Vms With Istio Service Mesh", "author" : "SchSeba", "tags" : "istio, iptables, libvirt, tproxy, service mesh, ebtables", "body": "In this blog post we are going to talk about istio and virtual machines on top of Kubernetes. Some of the components we are going to use are istio, libvirt, ebtables, iptables, and tproxy. Please review the links provided for an overview and deeper dive into each technology Research explanationOur research goal was to give virtual machines running inside pods (KubeVirt project) all the benefits Kubernetes have to offer, one of them is a service mesh like istio. Iptables only with dnat and source nat configuration: This configuration is istio only! 
For this solution we created the following architecture With the following yaml configuration apiVersion: v1kind: Servicemetadata: name: application-devel labels: app: libvirtd-develspec: ports: - port: 9080 name: http selector: app: libvirtd-devel---apiVersion: v1kind: Servicemetadata: name: libvirtd-client-devel labels: app: libvirtd-develspec: ports: - port: 16509 name: client-connection - port: 5900 name: spice - port: 22 name: ssh selector: app: libvirtd-devel type: LoadBalancer---apiVersion: extensions/v1beta1kind: Deploymentmetadata: creationTimestamp: null name: libvirtd-develspec: replicas: 1 strategy: {} template: metadata: annotations: sidecar. istio. io/status: '{ version : 43466efda2266e066fb5ad36f2d1658de02fc9411f6db00ccff561300a2a3c78 , initContainers :[ istio-init , enable-core-dump ], containers :[ istio-proxy ], volumes :[ istio-envoy , istio-certs ]}' creationTimestamp: null labels: app: libvirtd-devel spec: containers: - image: docker. io/sebassch/mylibvirtd:devel imagePullPolicy: Always name: compute ports: - containerPort: 9080 - containerPort: 16509 - containerPort: 5900 - containerPort: 22 securityContext: capabilities: add: - ALL privileged: true runAsUser: 0 volumeMounts: - mountPath: /var/lib/libvirt/images name: test-volume - mountPath: /host-dev name: host-dev - mountPath: /host-sys name: host-sys resources: {} env: - name: LIBVIRTD_DEFAULT_NETWORK_DEVICE value: eth0 - args: - proxy - sidecar - --configPath - /etc/istio/proxy - --binaryPath - /usr/local/bin/envoy - --serviceCluster - productpage - --drainDuration - 45s - --parentShutdownDuration - 1m0s - --discoveryAddress - istio-pilot. istio-system:15005 - --discoveryRefreshDelay - 1s - --zipkinAddress - zipkin. istio-system:9411 - --connectTimeout - 10s - --statsdUdpAddress - istio-mixer. istio-system:9125 - --proxyAdminPort - 15000 - --controlPlaneAuthPolicy - MUTUAL_TLS env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata. name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata. namespace - name: INSTANCE_IP valueFrom: fieldRef: fieldPath: status. podIP image: docker. io/istio/proxy:0. 7. 1 imagePullPolicy: IfNotPresent name: istio-proxy resources: {} securityContext: privileged: false readOnlyRootFilesystem: true runAsUser: 1337 volumeMounts: - mountPath: /etc/istio/proxy name: istio-envoy - mountPath: /etc/certs/ name: istio-certs readOnly: true initContainers: - args: - -p - 15001 - -u - 1337 image: docker. io/istio/proxy_init:0. 7. 1 imagePullPolicy: IfNotPresent name: istio-init resources: {} securityContext: capabilities: add: - NET_ADMIN - args: - -c - sysctl -w kernel. core_pattern=/etc/istio/proxy/core. %e. %p. %t && ulimit -c unlimited command: - /bin/sh image: alpine imagePullPolicy: IfNotPresent name: enable-core-dump resources: {} securityContext: privileged: true volumes: - emptyDir: medium: Memory name: istio-envoy - name: istio-certs secret: optional: true secretName: istio. default - name: host-dev hostPath: path: /dev type: Directory - name: host-sys hostPath: path: /sys type: Directory - name: test-volume hostPath: # directory location on host path: /bricks/brick1/volume/Images # this field is optional type: Directorystatus: {}---apiVersion: extensions/v1beta1kind: Ingressmetadata: name: gateway-devel annotations: kubernetes. io/ingress. class: istio spec: rules: - http: paths: - path: /devel-myvm backend: serviceName: application-devel servicePort: 9080When the my-libvirt container starts it runs an entry point script for iptables configuration. 1. 
iptables -t nat -D PREROUTING 12. iptables -t nat -A PREROUTING -p tcp -m comment --comment KubeVirt Spice --dport 5900 -j ACCEPT3. iptables -t nat -A PREROUTING -p tcp -m comment --comment KubeVirt virt-manager --dport 16509 -j ACCEPT4. iptables -t nat -A PREROUTING -d 10. 96. 0. 0/12 -m comment --comment istio/redirect-ip-range-10. 96. 0. 0/12-service cidr -j ISTIO_REDIRECT5. iptables -t nat -A PREROUTING -d 192. 168. 0. 0/16 -m comment --comment istio/redirect-ip-range-192. 168. 0. 0/16-Pod cidr -j ISTIO_REDIRECT6. iptables -t nat -A OUTPUT -d 127. 0. 0. 1/32 -p tcp -m comment --comment KubeVirt mesh application port --dport 9080 -j DNAT --to-destination 10. 0. 0. 27. iptables -t nat -A POSTROUTING -s 127. 0. 0. 1/32 -d 10. 0. 0. 2/32 -m comment --comment KubeVirt VM Forward -j SNAT --to-source `ifconfig eth0 | grep inet | awk '{print $2}'Now let’s explain every one of these lines: Remove istio ingress connection rule that send all the ingress traffic directly to the envoy proxy (our vm traffic is ingress traffic for our pod) Allow ingress connection with spice port to get our libvirt process running in the pod Allow ingress connection with virt-manager port to get our libvirt process running in the pod Redirect all the traffic that came from the k8s clusters services to the envoy process Redirect all the traffic that came from the k8s clusters pods to the envoy process Send all the traffic that came from envoy process to our vm by changing the destination ip address to ur vm ip address Change the source ip address of the packet send by envoy from localhost to the pod ip address so the virtual machine can return the connectionIptables configuration conclusions: With this configuration all the traffic that exit the virtual machine to a k8s service will pass the envoy process and will enter the istio service mash. Also all the traffic that came into the pod will be pass to envoy and after that it will be send to our virtual machine Egress data flow in this solution: Ingress data flow in this solution: Pros: No external modules needed No external process needed All the traffic is handled by the kernel user space not involvedCons: Istio dedicated solution! Not other process can change the iptables rulesIptables with a nat-proxy process: For this solution a created the following architecture With the following yaml configuration apiVersion: v1kind: Servicemetadata: name: application-nat-proxt labels: app: libvirtd-nat-proxtspec: ports: - port: 9080 name: http selector: app: libvirtd-nat-proxt type: LoadBalancer---apiVersion: v1kind: Servicemetadata: name: libvirtd-client-nat-proxt labels: app: libvirtd-nat-proxtspec: ports: - port: 16509 name: client-connection - port: 5900 name: spice - port: 22 name: ssh selector: app: libvirtd-nat-proxt type: LoadBalancer---apiVersion: extensions/v1beta1kind: Deploymentmetadata: creationTimestamp: null name: libvirtd-nat-proxtspec: replicas: 1 strategy: {} template: metadata: annotations: sidecar. istio. io/status: '{ version : 43466efda2266e066fb5ad36f2d1658de02fc9411f6db00ccff561300a2a3c78 , initContainers :[ istio-init , enable-core-dump ], containers :[ istio-proxy ], volumes :[ istio-envoy , istio-certs ]}' creationTimestamp: null labels: app: libvirtd-nat-proxt spec: containers: - image: docker. 
io/sebassch/mylibvirtd:devel imagePullPolicy: Always name: compute ports: - containerPort: 9080 - containerPort: 16509 - containerPort: 5900 - containerPort: 22 securityContext: capabilities: add: - ALL privileged: true runAsUser: 0 volumeMounts: - mountPath: /var/lib/libvirt/images name: test-volume - mountPath: /host-dev name: host-dev - mountPath: /host-sys name: host-sys resources: {} env: - name: LIBVIRTD_DEFAULT_NETWORK_DEVICE value: eth0 - image: docker. io/sebassch/mynatproxy:devel imagePullPolicy: Always name: proxy resources: {} securityContext: privileged: true capabilities: add: - NET_ADMIN - args: - proxy - sidecar - --configPath - /etc/istio/proxy - --binaryPath - /usr/local/bin/envoy - --serviceCluster - productpage - --drainDuration - 45s - --parentShutdownDuration - 1m0s - --discoveryAddress - istio-pilot. istio-system:15005 - --discoveryRefreshDelay - 1s - --zipkinAddress - zipkin. istio-system:9411 - --connectTimeout - 10s - --statsdUdpAddress - istio-mixer. istio-system:9125 - --proxyAdminPort - 15000 - --controlPlaneAuthPolicy - MUTUAL_TLS env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata. name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata. namespace - name: INSTANCE_IP valueFrom: fieldRef: fieldPath: status. podIP image: docker. io/istio/proxy:0. 7. 1 imagePullPolicy: IfNotPresent name: istio-proxy resources: {} securityContext: privileged: false readOnlyRootFilesystem: true runAsUser: 1337 volumeMounts: - mountPath: /etc/istio/proxy name: istio-envoy - mountPath: /etc/certs/ name: istio-certs readOnly: true initContainers: - args: - -p - 15001 - -u - 1337 - -i - 10. 96. 0. 0/12,192. 168. 0. 0/16 image: docker. io/istio/proxy_init:0. 7. 1 imagePullPolicy: IfNotPresent name: istio-init resources: {} securityContext: capabilities: add: - NET_ADMIN - args: - -c - sysctl -w kernel. core_pattern=/etc/istio/proxy/core. %e. %p. %t && ulimit -c unlimited command: - /bin/sh image: alpine imagePullPolicy: IfNotPresent name: enable-core-dump resources: {} securityContext: privileged: true volumes: - emptyDir: medium: Memory name: istio-envoy - name: istio-certs secret: optional: true secretName: istio. default - name: host-dev hostPath: path: /dev type: Directory - name: host-sys hostPath: path: /sys type: Directory - name: test-volume hostPath: # directory location on host path: /bricks/brick1/volume/Images # this field is optional type: Directorystatus: {}---apiVersion: extensions/v1beta1kind: Ingressmetadata: name: gateway-nat-proxt annotations: kubernetes. io/ingress. class: istio spec: rules: - http: paths: - path: /nat-proxt-myvm backend: serviceName: application-nat-proxt servicePort: 9080When the mynatproxy container starts it runs an entry point script for iptables configuration. 1. iptables -t nat -I PREROUTING 1 -p tcp -s 10. 0. 1. 2 -m comment --comment nat-proxy redirect -j REDIRECT --to-ports 80802. iptables -t nat -I OUTPUT 1 -p tcp -s 10. 0. 1. 2 -j ACCEPT3. iptables -t nat -I POSTROUTING 1 -s 10. 0. 1. 2 -p udp -m comment --comment nat udp connections -j MASQUERADENow let’s explain every one of these lines: Redirect all the tcp traffic that came from the virtual machine to our proxy on port 8080 Accept all the traffic that go from the pod to the virtual machine Nat all the udp traffic that came from the virtual machineThis solution uses a container I created that has two processes inside, one for the egress traffic of the virtual machine and one for the ingress traffic. 
For the egress traffic I used a program writen in golang, and for the ingress traffic I used haproxy. The nat-proxy used a system call to get the original destination address and port that it’s being redirected to us from the iptables rules I created. The extract function: func getOriginalDst(clientConn *net. TCPConn) (ipv4 string, port uint16, newTCPConn *net. TCPConn, err error) { if clientConn == nil { log. Printf( copy(): oops, dst is nil! ) err = errors. New( ERR: clientConn is nil ) return } // test if the underlying fd is nil remoteAddr := clientConn. RemoteAddr() if remoteAddr == nil { log. Printf( getOriginalDst(): oops, clientConn. fd is nil! ) err = errors. New( ERR: clientConn. fd is nil ) return } srcipport := fmt. Sprintf( %v , clientConn. RemoteAddr()) newTCPConn = nil // net. TCPConn. File() will cause the receiver's (clientConn) socket to be placed in blocking mode. // The workaround is to take the File returned by . File(), do getsockopt() to get the original // destination, then create a new *net. TCPConn by calling net. Conn. FileConn(). The new TCPConn // will be in non-blocking mode. What a pain. clientConnFile, err := clientConn. File() if err != nil { log. Printf( GETORIGINALDST|%v->?->FAILEDTOBEDETERMINED|ERR: could not get a copy of the client connection's file object , srcipport) return } else { clientConn. Close() } // Get original destination // this is the only syscall in the Golang libs that I can find that returns 16 bytes // Example result: &{Multiaddr:[2 0 31 144 206 190 36 45 0 0 0 0 0 0 0 0] Interface:0} // port starts at the 3rd byte and is 2 bytes long (31 144 = port 8080) // IPv4 address starts at the 5th byte, 4 bytes long (206 190 36 45) addr, err := syscall. GetsockoptIPv6Mreq(int(clientConnFile. Fd()), syscall. IPPROTO_IP, SO_ORIGINAL_DST) log. Printf( getOriginalDst(): SO_ORIGINAL_DST=%+v\n , addr) if err != nil { log. Printf( GETORIGINALDST|%v->?->FAILEDTOBEDETERMINED|ERR: getsocketopt(SO_ORIGINAL_DST) failed: %v , srcipport, err) return } newConn, err := net. FileConn(clientConnFile) if err != nil { log. Printf( GETORIGINALDST|%v->?->%v|ERR: could not create a FileConn fron clientConnFile=%+v: %v , srcipport, addr, clientConnFile, err) return } if _, ok := newConn. (*net. TCPConn); ok { newTCPConn = newConn. (*net. TCPConn) clientConnFile. Close() } else { errmsg := fmt. Sprintf( ERR: newConn is not a *net. TCPConn, instead it is: %T (%v) , newConn, newConn) log. Printf( GETORIGINALDST|%v->?->%v|%s , srcipport, addr, errmsg) err = errors. New(errmsg) return } ipv4 = itod(uint(addr. Multiaddr[4])) + . + itod(uint(addr. Multiaddr[5])) + . + itod(uint(addr. Multiaddr[6])) + . + itod(uint(addr. Multiaddr[7])) port = uint16(addr. Multiaddr[2])<<8 + uint16(addr. Multiaddr[3]) return}After we get the original destination address and port we start a connection to it and copy all the packets. var streamWait sync. WaitGroupstreamWait. Add(2)streamConn := func(dst io. Writer, src io. Reader) { io. Copy(dst, src) streamWait. Done()}go streamConn(remoteConn, VMconn)go streamConn(VMconn, remoteConn)streamWait. Wait()The Haproxy help us with the ingress traffic with the follow configuration defaults mode tcpfrontend main bind *:9080 default_backend guestbackend guest server guest 10. 0. 1. 2:9080 maxconn 2048It sends all the traffic to our virtual machine on the service port the machine is listening. Code repository nat proxy conclusions: This solution is a general solution, not a dedicated solution to istio only. 
Its make the vm traffic look like a regular process inside the pod so it will work with any sidecars projects Egress data flow in this solution: Ingress data flow in this solution: Pros: No external modules needed Works with any sidecar solutionCons: Not other process can change the iptables rules External process needed The traffic is passed to user space Only support ingress TCP connectionIptables with a trasperent-proxy process: This is the last solution I used in my research, it use a kernel module named TPROXY The official documentation from the linux kernel documentation. For this solution I created the following architecture With the follow yaml configuration apiVersion: v1kind: Servicemetadata: name: application-devel labels: app: libvirtd-develspec: ports: - port: 9080 name: http selector: app: libvirtd-devel type: LoadBalancer---apiVersion: v1kind: Servicemetadata: name: libvirtd-client-devel labels: app: libvirtd-develspec: ports: - port: 16509 name: client-connection - port: 5900 name: spice - port: 22 name: ssh selector: app: libvirtd-devel type: LoadBalancer---apiVersion: extensions/v1beta1kind: Deploymentmetadata: creationTimestamp: null name: libvirtd-develspec: replicas: 1 strategy: {} template: metadata: annotations: sidecar. istio. io/status: '{ version : 43466efda2266e066fb5ad36f2d1658de02fc9411f6db00ccff561300a2a3c78 , initContainers :[ istio-init , enable-core-dump ], containers :[ istio-proxy ], volumes :[ istio-envoy , istio-certs ]}' creationTimestamp: null labels: app: libvirtd-devel spec: containers: - image: docker. io/sebassch/mylibvirtd:devel imagePullPolicy: Always name: compute ports: - containerPort: 9080 - containerPort: 16509 - containerPort: 5900 - containerPort: 22 securityContext: capabilities: add: - ALL privileged: true runAsUser: 0 volumeMounts: - mountPath: /var/lib/libvirt/images name: test-volume - mountPath: /host-dev name: host-dev - mountPath: /host-sys name: host-sys resources: {} env: - name: LIBVIRTD_DEFAULT_NETWORK_DEVICE value: eth0 - image: docker. io/sebassch/mytproxy:devel imagePullPolicy: Always name: proxy resources: {} securityContext: privileged: true capabilities: add: - NET_ADMIN - args: - proxy - sidecar - --configPath - /etc/istio/proxy - --binaryPath - /usr/local/bin/envoy - --serviceCluster - productpage - --drainDuration - 45s - --parentShutdownDuration - 1m0s - --discoveryAddress - istio-pilot. istio-system:15005 - --discoveryRefreshDelay - 1s - --zipkinAddress - zipkin. istio-system:9411 - --connectTimeout - 10s - --statsdUdpAddress - istio-mixer. istio-system:9125 - --proxyAdminPort - 15000 - --controlPlaneAuthPolicy - MUTUAL_TLS env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata. name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata. namespace - name: INSTANCE_IP valueFrom: fieldRef: fieldPath: status. podIP image: docker. io/istio/proxy:0. 7. 1 imagePullPolicy: IfNotPresent name: istio-proxy resources: {} securityContext: privileged: false readOnlyRootFilesystem: true runAsUser: 1337 volumeMounts: - mountPath: /etc/istio/proxy name: istio-envoy - mountPath: /etc/certs/ name: istio-certs readOnly: true initContainers: - args: - -p - 15001 - -u - 1337 - -i - 10. 96. 0. 0/12,192. 168. 0. 0/16 image: docker. io/istio/proxy_init:0. 7. 1 imagePullPolicy: IfNotPresent name: istio-init resources: {} securityContext: capabilities: add: - NET_ADMIN - args: - -c - sysctl -w kernel. core_pattern=/etc/istio/proxy/core. %e. %p. 
%t && ulimit -c unlimited command: - /bin/sh image: alpine imagePullPolicy: IfNotPresent name: enable-core-dump resources: {} securityContext: privileged: true volumes: - emptyDir: medium: Memory name: istio-envoy - name: istio-certs secret: optional: true secretName: istio. default - name: host-dev hostPath: path: /dev type: Directory - name: host-sys hostPath: path: /sys type: Directory - name: test-volume hostPath: # directory location on host path: /bricks/brick1/volume/Images # this field is optional type: Directorystatus: {}---apiVersion: extensions/v1beta1kind: Ingressmetadata: name: gateway-devel annotations: kubernetes. io/ingress. class: istio spec: rules: - http: paths: - path: /devel-myvm backend: serviceName: application-devel servicePort: 9080When the tproxy container starts it runs an entry point script for iptables configuration but this time the proxy redirect came in the mangle table and not in the nat table that because TPROXY module avilable only in the mangle table. TPROXYThis target is only valid in the mangle table, in thePREROUTING chain and user-defined chains which are onlycalled from this chain. It redirects the packet to a localsocket without changing the packet header in any way. It canalso change the mark value which can then be used inadvanced routing rules. iptables rules: iptables -t mangle -vLiptables -t mangle -N KUBEVIRT_DIVERTiptables -t mangle -A KUBEVIRT_DIVERT -j MARK --set-mark 8iptables -t mangle -A KUBEVIRT_DIVERT -j ACCEPTtable=mangleiptables -t ${table} -N KUBEVIRT_INBOUNDiptables -t ${table} -A PREROUTING -p tcp -m comment --comment KubeVirt Spice --dport 5900 -j RETURNiptables -t ${table} -A PREROUTING -p tcp -m comment --comment KubeVirt virt-manager --dport 16509 -j RETURNiptables -t ${table} -A PREROUTING -p tcp -i vnet0 -j KUBEVIRT_INBOUNDiptables -t ${table} -N KUBEVIRT_TPROXYiptables -t ${table} -A KUBEVIRT_TPROXY ! -d 127. 0. 0. 1/32 -p tcp -j TPROXY --tproxy-mark 8/0xffffffff --on-port 9401#iptables -t mangle -A KUBEVIRT_TPROXY ! -d 127. 0. 0. 1/32 -p udp -j TPROXY --tproxy-mark 8/0xffffffff --on-port 8080# If an inbound packet belongs to an established socket, route it to the# loopback interface. iptables -t ${table} -A KUBEVIRT_INBOUND -p tcp -m socket -j KUBEVIRT_DIVERT#iptables -t mangle -A KUBEVIRT_INBOUND -p udp -m socket -j KUBEVIRT_DIVERT# Otherwise, it's a new connection. Redirect it using TPROXY. iptables -t ${table} -A KUBEVIRT_INBOUND -p tcp -j KUBEVIRT_TPROXY#iptables -t mangle -A KUBEVIRT_INBOUND -p udp -j KUBEVIRT_TPROXYiptables -t ${table} -I OUTPUT 1 -d 10. 0. 1. 2 -j ACCEPTtable=nat# Remove vm Connection from iptables rulesiptables -t ${table} -I PREROUTING 1 -s 10. 0. 1. 2 -j ACCEPTiptables -t ${table} -I OUTPUT 1 -d 10. 0. 1. 2 -j ACCEPT# Allow guest -> world -- using nat for UDPiptables -t ${table} -I POSTROUTING 1 -s 10. 0. 1. 2 -p udp -j MASQUERADEFor this solution we also need to load the bridge kernel module modprobe bridgeAnd create some ebtables rules so egress and ingress traffict from the virtial machine will exit the l2 rules and pass to the l3 rules: ebtables -t broute -F # Flush the table # inbound traffic ebtables -t broute -A BROUTING -p IPv4 --ip-dst 10. 0. 1. 2 \ -j redirect --redirect-target DROP # returning outbound traffic ebtables -t broute -A BROUTING -p IPv4 --ip-src 10. 0. 1. 
2 \ -j redirect --redirect-target DROPWe also need to disable rp_filter on the virtual machine interface and the libvirt bridge interface echo 0 > /proc/sys/net/ipv4/conf/virbr0/rp_filterecho 0 > /proc/sys/net/ipv4/conf/virbr0-nic/rp_filterecho 0 > /proc/sys/net/ipv4/conf/vnet0/rp_filterAfter this configuration the container start the semi-tproxy process for egress traffic and the haproxy process for the ingress traffic. The semi-tproxy program is a golag program,binding a listener socket with the IP_TRANSPARENT socket optionPreparing a socket to receive connections with TProxy is really no different than what is normally done when setting up a socket to listen for connections. The only difference in the process is before the socket is bound, the IP_TRANSPARENT socket option. syscall. SetsockoptInt(fileDescriptor, syscall. SOL_IP, syscall. IP_TRANSPARENT, 1)About IP_TRANSPARENT IP_TRANSPARENT (since Linux 2. 6. 24)Setting this boolean option enables transparent proxying onthis socket. This socket option allows the calling applica‐tion to bind to a nonlocal IP address and operate both as aclient and a server with the foreign address as the localend‐point. NOTE: this requires that routing be set up ina way that packets going to the foreign address are routedthrough the TProxy box (i. e. , the system hosting theapplication that employs the IP_TRANSPARENT socket option). Enabling this socket option requires superuser privileges(the CAP_NET_ADMIN capability). TProxy redirection with the iptables TPROXY target alsorequires that this option be set on the redirected socket. Then we set the IP_TRANSPARENT socket option on outbound connectionsSame goes for making connections to a remote host pretending to be the client, the IP_TRANSPARENT socket option is set and the Linux kernel will allow the bind so along as a connection was intercepted with those details being used for the bind. When the process get a new connection we start a connection to the real destination address and copy the traffic between both sockets var streamWait sync. WaitGroupstreamWait. Add(2)streamConn := func(dst io. Writer, src io. Reader) { io. Copy(dst, src) streamWait. Done()}go streamConn(remoteConn, VMconn)go streamConn(VMconn, remoteConn)streamWait. Wait()The Haproxy helps us with the ingress traffic with the follow configuration defaults mode tcpfrontend main bind *:9080 default_backend guestbackend guest server guest 10. 0. 1. 2:9080 maxconn 2048It sends all the traffic to our virtual machine on the service port the machine is listening. Code repository tproxy conclusions: This solution is a general solution, not a dedicated solution to istio only. Its make the vm traffic look like a regular process inside the pod so it will work with any sidecars projects Egress data flow in this solution: Ingress data flow in this solution: Pros: other process can change the nat table (this solution works on the mangle table) better preformance comparing to nat-proxy Works with any sidecar solutionCons: Need NET_ADMIN capability for the docker External process needed The traffic is passed to user space Only support ingress TCP connectionResearch ConclustionKubeVirt shows it is possible to run virtual machines inside a kubernetes cluster, and this post shows that the virtual machine can also get the benefit of it. 
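For readers who would rather see the shape of the semi-tproxy program than dig through the code repository, a heavily simplified Go sketch follows. It is illustrative only: the function and variable names are invented, only the listener port (9401) is taken from the TPROXY iptables rule above, and running it requires CAP_NET_ADMIN.

```go
// Minimal TPROXY-style interceptor sketch (Linux only, illustrative names).
package main

import (
	"io"
	"log"
	"net"
	"os"
	"sync"
	"syscall"
)

// transparentListener creates a TCP listener with IP_TRANSPARENT set before
// bind(), which is what allows it to accept connections whose destination
// address is not local (requires CAP_NET_ADMIN).
func transparentListener(port int) (net.Listener, error) {
	fd, err := syscall.Socket(syscall.AF_INET, syscall.SOCK_STREAM, syscall.IPPROTO_TCP)
	if err != nil {
		return nil, err
	}
	if err := syscall.SetsockoptInt(fd, syscall.SOL_IP, syscall.IP_TRANSPARENT, 1); err != nil {
		syscall.Close(fd)
		return nil, err
	}
	if err := syscall.Bind(fd, &syscall.SockaddrInet4{Port: port}); err != nil {
		syscall.Close(fd)
		return nil, err
	}
	if err := syscall.Listen(fd, 128); err != nil {
		syscall.Close(fd)
		return nil, err
	}
	f := os.NewFile(uintptr(fd), "tproxy")
	defer f.Close()
	return net.FileListener(f)
}

// handle proxies one intercepted connection. With TPROXY the packet headers
// are untouched, so the accepted socket's local address is the original
// destination; no SO_ORIGINAL_DST lookup is needed here.
func handle(vmConn net.Conn) {
	defer vmConn.Close()
	remoteConn, err := net.Dial("tcp", vmConn.LocalAddr().String())
	if err != nil {
		log.Printf("dial %v: %v", vmConn.LocalAddr(), err)
		return
	}
	defer remoteConn.Close()

	var streamWait sync.WaitGroup
	streamWait.Add(2)
	streamConn := func(dst io.Writer, src io.Reader) {
		io.Copy(dst, src)
		streamWait.Done()
	}
	go streamConn(remoteConn, vmConn)
	go streamConn(vmConn, remoteConn)
	streamWait.Wait()
}

func main() {
	// 9401 matches the --on-port value of the TPROXY iptables rule above.
	ln, err := transparentListener(9401)
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go handle(conn)
	}
}
```

Because TPROXY leaves the packet headers untouched, the original destination can be read straight from the accepted socket's local address, which is why the SO_ORIGINAL_DST getsockopt call from the nat-proxy variant is not needed in this sketch.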
" }, { - "id": 141, + "id": 142, "url": "/2018/Use-VS-Code-for-Kube-Virt-Development.html", "title": "Use Vs Code For Kube Virt Development", "author" : "SchSeba", "tags" : "vscode, development, debug", "body": "In this post we will install and configure Visual Studio Code (vscode) for KubeVirt development and debug. Visual Studio Code is a source code editor developed by Microsoft for Windows, Linux and macOS. It includes support for debugging, embedded Git control, syntax highlighting, intelligent code completion, snippets, and code refactoring. Golang InstallationGO installation is required, We can find the binaries in golang page. Golang Linux Installation: After downloading the binaries extract them with the following command: tar -C /usr/local -xzf go$VERSION. $OS-$ARCH. tar. gzNow let’s Add /usr/local/go/bin to the PATH environment variable. You can do this by adding this line to your /etc/profile (for a system-wide installation) or $HOME/. profile: export PATH=$PATH:/usr/local/go/binGolang Windows Installation: Open the MSI file and follow the prompts to install the Go tools. By default, the installer puts the Go distribution in C:\Go. The installer should put the C:\Go\bin directory in your PATH environment variable. You may need to restart any open command prompts for the change to take effect. VSCODE InstallationNow we will install Visual Studio Code in our system. For linux machines: We need to choose our linux distribution. For RHEL/Centos/Fedora: The following script will install the key and repository: sudo rpm --import https://packages. microsoft. com/keys/microsoft. ascsudo sh -c 'echo -e [code]\nname=Visual Studio Code\nbaseurl=https://packages. microsoft. com/yumrepos/vscode\nenabled=1\ngpgcheck=1\ngpgkey=https://packages. microsoft. com/keys/microsoft. asc > /etc/yum. repos. d/vscode. repo'Then update the package cache and install the package using dnf (Fedora 22 and above): dnf check-updatesudo dnf install codeOr on older versions using yum: yum check-updatesudo yum install codeFor Debian/Ubuntu: We need to download the . deb package from the vscode download page,and from the command line run the package management. sudo dpkg -i <file>. debsudo apt-get install -f # Install dependenciesFor Windows machinesDownload the Visual Studio Code installer, and then run the installer (VSCodeSetup-version. exe) Go Project structLet’s create the following structure for our kubevirt project development environment: ├── <Go-projects-folder> # Your Golang projects root folder│ ├── bin│ ├── pkg│ ├── src│ │ ├── kubevirt. ioNow navigate to kubevirt. io folder and run: git clone <kubevirt-fork>Install VSCODE ExtensionsNow we are going to install some extensions for a better development experience with the IDE. Open vscode and select your go project root folder you created in the last step. On the extensions tab (Ctrl+Shift+X), search for golang and install it. Now open the command palette (Ctrl+Shift+P) view->Command Palette and type “Go: install/update tools”, this will install all the requirements for example: delve debugger, etc… (optional) We can install docker extension for syntax highlighting, commands, etc. . GOPATH and GOROOT configurationOpen the vscode configuration file (ctrl+,) file->preferences->settings. Now on the right file we need to add this configuration: go. gopath : <Go-projects-folder> , go. 
goroot : /usr/local/go , Create debug configurationFor the last part we are going to configure the debugger file, open it by Debug->Open Configurations and add to the configuration list the following structure ** Change the parameter to your golang projects root directory { name : Kubevirt , type : go , request : launch , mode : debug , remotePath : , port : 2345, host : 127. 0. 0. 1 , program : ${fileDirname} , env : {}, args : [ --kubeconfig , cluster/k8s-1. 9. 3/. kubeconfig , --port , 1234 ], showLog : true, cwd : ${workspaceFolder}/src/kubevirt. io/kubevirt , output : <Go-projects-folder>/bin/${fileBasenameNoExtension} ,} Debug ProcessFor debug we need to open the main package we want to debug. For example if we want to debug the virt-api component, open the main package: kubevirt. io/cmd/virt-api/virt-api. go Now change to debug view (ctrl+shift+D), check that we are using the kubevirt configuration and hit the play button More Information: For more information, keyboard shortcuts and advance vscode usage please refer the following link editor code basics " }, { - "id": 142, + "id": 143, "url": "/2018/ovn-multi-network-plugin-for-kubernetes-kubetron.html", "title": "Ovn Multi Network Plugin For Kubernetes Kubetron", "author" : "phoracek", "tags" : "ovn, kubetron, network, neutron", "body": "Kubernetes networking model is suited for containerized applications, based mostly around L4 and L7 services, where all pods are connected to one big network. This is perfectly ok for most use cases. However, sometimes there is a need for fine-grained network configuration with better control. Use-cases such as L2 networks, static IP addresses, interfaces dedicated for storage traffic etc. For such needs there is ongoing effort in Kubernetes sig-network to support multiple networks (see Kubernetes Network CRD De-Facto Standard. There exist many prototypes of plugins providing such functionality. You are reading about one of them. Kubetron (working name, kubernetes + neutron, quite misleading since we want to support setup without Neutron involved too), allows users to connect their pods to multiple networks configured on OVN. Important part here is, that such networks are configured by an external tool, be it OVN Northbound Database client or higher level tool such as Neutron or oVirt Provider OVN. This allows administrators to configure complicated networks, Kubernetes then only knows enough about the known networks to be able to connect to them - but not all the complexity involved to manage them. Kubetron does not affect default Kubernetes networking at all, default networks will be left intact. In order to enable the use-cases outlined above, Kubetron can be used to provide multiple interfaces to a pod, further more KubeVirt will then use those interfaces to pass them to its virtual machines via the in progress VirtualMachine networking API. You can find source code in Kubetron GitHub repository. Contents: Desired Model and Usage Proof of Concept Demo Try it Yourself Looking for Help DisclaimerDesired Model and Usage: Let’s talk about how Kubetron looks from administrator’s and user’s point of view. Please note that following examples are still for the desired state and some of them might not be implemented in PoC yet. If you want to learn more about deployment and architecture, check Kubetron slide deck. Configure OVN Networks: First of all, administrator must create and configure networks in OVN. That could be done either directly on OVN northbound database (e. g. 
using ovn-nbctl) or via OVN manager (e. g. Neutron or oVirt Provider OVN, using ansible). Expose Available Networks: Once the networks are configured, there are two options how to expose available networks to a user. First one is providing some form of access to OVN or Neutron API, this one is completely out of Kubernetes’ and Kubetron’sscope. Second option is to enable Network object support (as described in Kubernetes Network CRD De-Facto standard). With this option, administrator must create a Network object per each OVN network is allowed to be used by a user. This object allows administrator to expose only limited subset of networks or to limit access per Namespace. This process could be automated, e. g. via a service that monitors available logical switches and exposes them as Networks. # List networks (Logical Switches) directly from OVN Northbound databaseovn-nbctl ls-list# List networks available on Neutronneutron net-list# List networks as Network objects created in Kuberneteskubectl get networksAttach pod to a Network: Once user selects a desired network based on options described in previous section, he or she can request them for a pod using an annotation. This annotation is compatible with earlier mentioned Kubernetes Network CRD De-Facto Standard. apiVersion: v1kind: podmetadata: name: network-consumer annotations: kubernetes. v1. cni. cncf. io/networks: red # requested networksspec: containers: - name: busybox image: busyboxAccess the Network from the pod: Once the pod is created, a user can list its interfaces and their assigned IP addresses: $ kubectl exec -it network-consumer -- ip address. . . 10: red-bcxoeffrsw: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc noqueue state UNKNOWN qlen 1000 link/ether 4e:71:3b:ee:a5:f4 brd ff:ff:ff:ff:ff:ff inet 10. 1. 0. 3/24 brd 10. 1. 0. 255 scope global dynamic red-bcxoeffrsw valid_lft 86371sec preferred_lft 86371sec inet6 fe80::4c71:3bff:feee:a5f4/64 scope link valid_lft forever preferred_lft forever. . . In order to make it easier to obtain the network’s interface name inside pod’s containers, environment variables with network-interface mapping are created: $ echo $NETWORK_INTERFACE_REDred-bcxoeffrswProof of Concept: As for now, current implementation does not completely implement the desired model yet: Only Neutron mode is implemented, Kubetron can not be used with OVN alone Network object handling is not implemented, Kubetron obtains networks directly from Neutron Interface names are not exposed as environment variablesIt might be unstable and there are some missing parts. However, basic scenario works, at least in development environment. Demo: In the following recording we create two networks red and blue using Neutron API via Ansible. Then we create two pods and connect them to both mentioned networks. And then we ping. Try it Yourself: I encourage you to try Kubetron yourself. It has not yet been tested on regular Kubernetes deployment (and it likely won’t work without some tuning). Fortunately, Kubetron repository contains Vagrant file and set of scripts that will help you deploy multi-node Kuberneteswith OVN and Kubetron installed. On top of that it describes how to create networks and connect pods to them. Check out Kubetron README. md and give it a try! Looking for Help: If you are interested in contributing to Kubetron, please follow its GitHub repository. There are many missing features and possible improvements, I will open issues to track them soon. Stay tuned! 
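As a small illustration of how an application inside the pod could consume the planned NETWORK_INTERFACE_&lt;NAME&gt; variables (see the $NETWORK_INTERFACE_RED example earlier), here is a hypothetical Go snippet. Remember that the PoC does not export these variables yet, so this is a sketch of the desired model only.

```go
// Illustrative only: locate the pod interface that Kubetron's planned
// NETWORK_INTERFACE_<NAME> variable points at and print its addresses.
package main

import (
	"fmt"
	"log"
	"net"
	"os"
)

func main() {
	ifname := os.Getenv("NETWORK_INTERFACE_RED")
	if ifname == "" {
		log.Fatal("NETWORK_INTERFACE_RED is not set (not implemented in the PoC yet)")
	}

	iface, err := net.InterfaceByName(ifname)
	if err != nil {
		log.Fatalf("lookup %s: %v", ifname, err)
	}

	addrs, err := iface.Addrs()
	if err != nil {
		log.Fatal(err)
	}
	for _, addr := range addrs {
		fmt.Printf("network 'red' -> %s -> %s\n", iface.Name, addr.String())
	}
}
```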
Disclaimer: Kubetron is in early development stage, both it’s architecture and tools to use it will change. " }, { - "id": 143, + "id": 144, "url": "/2018/Use-GlusterFS-Cloning-with-KubeVirt.html", "title": "Use Glusterfs Cloning With Kubevirt", "author" : "karmab", "tags" : "glusterfs, storage", "body": "Gluster seems like a good fit for storage in kubernetes and in particular in kubevirt. Still, as for other storage backends, we will likely need to use a golden set of images and deploy vms from them. That’s where cloning feature of gluster comes at rescue! Contents: Prerequisites Installing Gluster provisioner Using The cloning feature ConclusionPrerequisites: I assume you already have a running instance of openshift and kubevirt along with gluster and an already existing pvc where you copied a base operating system ( you can get those from here) For reference, I used the following components and versions: 3 baremetal servers with Rhel 7. 4 as base OS OpenShift and CNS 3. 9 KubeVirt latestInstalling Gluster provisioner: initial deployment: We will deploy the custom provisioner using this template, along with cluster rules located in this file Note that we also patch the image to use an existing one from gluster org located at docker. io instead of quay. io, as the corresponding repository is private by the time of this writing, and the heketi one, to make sure it has the code required to handle cloning NAMESPACE= app-storage oc create -f openshift-clusterrole. yamloc process -f glusterfile-provisioner-template. yml | oc apply -f - -n $NAMESPACEoc adm policy add-cluster-role-to-user cluster-admin -z glusterfile-provisioner -n $NAMESPACEoc adm policy add-scc-to-user privileged -z glusterfile-provisioneroc set image dc/heketi-storage heketi=gluster/heketiclone:latest -n $NAMESPACEoc set image dc/glusterfile-provisioner glusterfile-provisioner=gluster/glusterfileclone:latest -n $NAMESPACEAnd you will see something similar to this in your storage namespace [root@master01 ~]# NAMESPACE= app-storage [root@master01 ~]# kubectl get pods -n $NAMESPACENAME READY STATUS RESTARTS AGEglusterfile-provisioner-3-vhkx6 1/1 Running 0 1dglusterfs-storage-b82x4 1/1 Running 1 23dglusterfs-storage-czthc 1/1 Running 0 23dglusterfs-storage-z68hm 1/1 Running 0 23dheketi-storage-2-qdrks 1/1 Running 0 6hadditional configuration: for the custom provisioner to work, we need two additional things: a storage class pointing to it, but also containing the details of the current heketi installation a secret similar to the one used by the current heketi installation, but using a different typeYou can use the following NAMESPACE= app-storage oc get sc glusterfs-storage -o yamloc get secret heketi-storage-admin-secret -n $NAMESPACE-o yamlthen, create the following objects: glustercloning-heketi-secret secret in your storage namespace glustercloning storage classfor reference, here are samples of those files. Note how we change the type for the secret and add extra options for our storage class (in particular, enabling smartclone). apiVersion: v1data: key: eEt0NUJ4cklPSmpJb2RZcFpqVExSSjUveFV5WHI4L0NxcEtMME1WVlVjQT0=kind: Secretmetadata: name: glustercloning-heketi-secret namespace: app-storagetype: gluster. org/glusterfileapiVersion: storage. k8s. io/v1kind: StorageClassmetadata: name: glustercloningparameters: restsecretname: glustercloning-heketi-secret restsecretnamespace: app-storage resturl: http://heketi-storage. 192. 168. 122. 10. xip. 
io restuser: admin smartclone: true snapfactor: 10 volumeoptions: group virtprovisioner: gluster. org/glusterfilereclaimPolicy: DeleteThe full set of supported parameters can be found here Using the cloning feature: Once deployed, you can now provision pvcs from a base origin Cloning single pvcs: For instance, provided you have an existing pvc named cirros containing this base operating system, and that this PVC contains an annotion of the following (. . . )metadata: annotations: gluster. org/heketi-volume-id: f0cbbb29ef4202c5226f87708da57e5c(. . . )A cloned pvc can be created with the following yaml ( note that we simply indicate a clone request in the annotations) apiVersion: v1kind: PersistentVolumeClaimmetadata: name: testclone1 namespace: default annotations: k8s. io/CloneRequest: cirrosspec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: glustercloningstatus: accessModes: - ReadWriteOnce capacity: storage: 1GiOnce provisioned, the pvc will contain this additional annotation created by the provisioner (. . . )metadata: annotations: k8s. io/CloneOf: cirros(. . . )Leveraging the feature in openshift templates: We can make direct use of the feature in this openshift template which would create the following objects: a persistent volume claim as a clone of an existing pvc (cirros by default) an offline virtual machine object additional services for ssh and http accessyou can use it with something like oc process -f template. yml -p Name=myvm | oc process -f - -n defaultConclusion: Cloning features in the storage backend allow us to simply use a given set of pvcs as base OS for the deployment of our vms. This feature is growing in gluster, worth giving it a try! " }, { - "id": 144, + "id": 145, "url": "/2018/KubeVirt-API-Access-Control.html", "title": "Kubevirt Api Access Control", "author" : "davidvossel", "tags" : "api, rbac, roles", "body": "Access to KubeVirt resources are controlled entirely by Kubernete’s ResourceBased Access Control (RBAC) system. This system allows KubeVirt to tie directlyinto the existing authentication and authorization mechanisms Kubernetesalready provides to its core api objects. KubeVirt RBAC Role Basics: Typically, when people think of Kubernetes RBAC system, they’re thinking aboutgranting users access to create/delete kubernetes objects (like Pods,deployments, etc), however those same RBAC mechanisms work naturally withKubeVirt objects as well. When we look at KubeVirt’s objects, we can see they are structured just likethe objects that come predefined in the Kubernetes core. For example, look here’s an example of a VirtualMachine spec. apiVersion: kubevirt. io/v1alpha1kind: VirtualMachinemetadata: name: vm-ephemeralspec: domain: devices: disks: - disk: bus: virtio name: registrydisk volumeName: registryvolume resources: requests: memory: 64M volumes: - name: registryvolume registryDisk: image: kubevirt/cirros-container-disk-demo:develIn the spec above, we see the KubeVirt VirtualMachine object has an apiVersionfield and a kind field just like a Pod spec does. The kubevirt. io portionof the apiVersion field represents KubeVirt apiGroup the resource is a part of. The kind field reflects the resource type. Using that information, we can create an RBAC role that gives a user permissionto create, delete, and view all VirtualMachine objects. apiVersion: rbac. authorization. k8s. io/v1beta1kind: ClusterRolemetadata: name: vm-access labels: kubevirt. io: rules: - apiGroups: - kubevirt. 
io resources: - virtualmachines verbs: - get - delete - create - update - patch - list - watchThis same logic can be applied when creating RBAC roles for other KubeVirtobjects as well. If we wanted to extend this RBAC role to grant similarpermissions for VirtualMachinePreset objects, we’d just have to add a secondresource kubevirt. io resource list. The result would look like this. apiVersion: rbac. authorization. k8s. io/v1beta1kind: ClusterRolemetadata: name: vm-access labels: kubevirt. io: rules: - apiGroups: - kubevirt. io resources: - virtualmachines - virtualmachinepresets verbs: - get - delete - create - update - patch - list - watchKubeVirt Subresource RBAC Roles: Access to a VirtualMachines’s VNC and console stream using KubeVirt’svirtctl tool is managed by the Kubernetes RBAC system as well. Permissionsfor these resources work slightly different than the other KubeVirt objectsthough. Console and VNC access is performed using the KubeVirt Stream API, which hasits own api group called subresources. kubevirt. io. Below is an example ofhow to create a role that grants access to the VNC and console streams APIs. apiVersion: rbac. authorization. k8s. io/v1beta1kind: ClusterRolemetadata: name: vm-vnc-access labels: kubevirt. io: rules: - apiGroups: - subresources. kubevirt. io resources: - virtualmachines/console - virtualmachines/vnc verbs: - getLimiting RBAC To a Single Namespace. : A ClusterRole can be bound to a user in two different ways. When a ClusterRoleBinding is used, a user is permitted access to all resourcesdefined in the ClusterRole across all namespaces in the cluster. When a RoleBinding is used, a user is limited to accessing only the resourcesdefined in the ClusterRole within the namespace RoleBinding exists in. Limiting RBAC To a Single Resource. : A user can also be limit to accessing only a single resource within a resourcetype. Below is an example that only grants VNC access to the VirtualMachinenamed ‘bobs-vm’ apiVersion: rbac. authorization. k8s. io/v1beta1kind: ClusterRolemetadata: name: vm-vnc-access labels: kubevirt. io: rules: - apiGroups: - subresources. kubevirt. io resources: - virtualmachines/console - virtualmachines/vnc resourceName: - bobs-vm verbs: - getDefault KubeVirt RBAC Roles: The next release of KubeVirt is coming with three default ClusterRoles thatadmins can use to grant users access to KubeVirt resources. In most cases,these roles will prevent admins from ever having to create their own customKubeVirt RBAC roles. More information about these default roles can be found in the KubeVirtuser guide here " }, { - "id": 145, + "id": 146, "url": "/2018/KubeVirt-objects.html", "title": "Kubevirt Objects", "author" : "jcpowermac", "tags" : "custom resources, kubevirt objects, objects, VirtualMachine", "body": "The KubeVirt project provides extensions to Kubernetes via custom resources. These resources are a collection a API objects that defines a virtual machine within Kubernetes. I think it’s important to point out the two great resources that I used tocompile information for this post: user-guide api-referenceWith that let’s take a look at the objects that are available. KubeVirt top-level objectsBelow is a list of the top level API objects and descriptions that KubeVirt provides. VirtualMachine (vm[s]) - represents a virtual machine in the runtime environment of Kubernetes. OfflineVirtualMachine (ovm[s]) - handles the virtual machines that are not running or are in a stopped state. 
VirtualMachinePreset (vmpreset[s]) - is an extension to general VirtualMachine configuration behaving much like PodPresets from Kubernetes. When a VirtualMachine is created, any applicable VirtualMachinePresets will be applied to the existing spec for the VirtualMachine. This allows for re-use of common settings that should apply to multiple VirtualMachines. VirtualMachineReplicaSet (vmrs[s]) - tries to ensures that a specified number of VirtualMachine replicas are running at any time. DomainSpec is listed as a top-level object but is only used within all of the objects above. Currently the DomainSpec is a subset of what is configurable via libvirt domain XML. VirtualMachine: VirtualMachine is mortal object just like aPod within Kubernetes. It only runs once and cannot be resurrected. This might seem problematic especiallyto an administrator coming from a traditional virtualization background. Fortunatelylater we will discuss OfflineVirtualMachines which will address this. First let’s use kubectl to retrieve a list of VirtualMachine objects. $ kubectl get vms -n nodejs-exNAME AGEmongodb 5dnodejs 5dWe can also use kubectl describe $ kubectl describe vms -n testName: testvmNamespace: testLabels: guest=testvm kubevirt. io/nodeName=kn2. virtomation. com kubevirt. io/size=small. . . output. . . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 59m virtualmachine-controller Created virtual machine pod virt-launcher-testvm-8h927 Normal SuccessfulHandOver 59m virtualmachine-controller Pod owner ship transfered to the node virt-launcher-testvm-8h927 Normal Created 59m (x2 over 59m) virt-handler, kn2. virtomation. com VM defined. Normal Started 59m virt-handler, kn2. virtomation. com VM started. And just in case if you want to return the yaml definition of a VirtualMachine object here is an example. $ kubectl -o yaml get vms mongodb -n nodejs-exapiVersion: kubevirt. io/v1alpha1kind: VirtualMachine. . . output. . . The first object we will annotate is VirtualMachine. The important sections . spec for VirtualMachineSpec and . spec. domain for DomainSpec will be annotated only in this section then referred to in the other object sections. apiVersion: kubevirt. io/v1alpha1kind: VirtualMachinemetadata: annotations: {} labels: {} name: string namespace: stringspec: {}Node Placement: Kubernetes has the ability to schedule a pod to specific nodes based on affinity and anti-affinity rules. Node affinity is also possible with KubeVirt. To constrain a virtual machine to run on a node define a matching expressions using node labels. affinity: nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: - preference: matchExpressions: - key: string operator: string values: - string weight: 0 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: string operator: string values: - stringA virtual machine can also more easily be constrained by using nodeSelector which is defined by node’s label and value. Here is an example nodeSelector: kubernetes. io/hostname: kn1. virtomation. comClocks and Timers: Configures the virtualize hardware clock provided by QEMU. domain: clock: timezone: string utc: offsetSeconds: 0The timer defines the type and policy attribute that determines what action is take when QEMU misses a deadline for injecting a tick to the guest. 
domain: clock: timer: hpet: present: true tickPolicy: string hyperv: present: true kvm: present: true pit: present: true tickPolicy: string rtc: present: true tickPolicy: string track: stringCPU and Memory: The number of CPU cores a virtual machine will be assigned. . spec. domain. cpu. cores will not be used for scheduling use . spec. domain. resources. requests. cpu instead. cpu: cores: 1There are two supported resource limits and requests: cpu and memory. A . spec. domain. resources. requests. memory should be defined to determine the allocation of memory provided to the virtual machine. These values will be used to in scheduling decisions. resources: limits: {} requests: {}Watchdog Devices: . spec. domain. watchdog automatically triggers an action via Libvirt and QEMU when the virtual machine operating system hangs or crashes. watchdog: i6300esb: action: string name: stringFeatures: . spec. domain. featuresare hypervisor cpu or machine features that can be enabled. After reviewing both Linux and Microsoft QEMU virtual machines managed byLibvirtboth acpi andapicshould be enabled. The hyperv features should be enabled only for Windows-based virtual machines. For additional information regarding features please visit the virtual hardware configuration in the kubevirt user guide. features: acpi: enabled: true apic: enabled: true endOfInterrupt: true hyperv: relaxed: enabled: true reset: enabled: true runtime: enabled: true spinlocks: enabled: true spinlocks: 0 synic: enabled: true synictimer: enabled: true vapic: enabled: true vendorid: enabled: true vendorid: string vpindex: enabled: trueQEMU Machine Type: . spec. domain. machine. type is the emulated machine architecture provided by QEMU. machine: type: stringHere is an example how to retrieve the supported QEMU machine types. $ qemu-system-x86_64 --machine help Supported machines are: . . . output. . . pc Standard PC (i440FX + PIIX, 1996) (alias of pc-i440fx-2. 10) pc-i440fx-2. 10 Standard PC (i440FX + PIIX, 1996) (default) . . . output. . . q35 Standard PC (Q35 + ICH9, 2009) (alias of pc-q35-2. 10) pc-q35-2. 10 Standard PC (Q35 + ICH9, 2009)Disks and Volumes: . spec. domain. devices. disks configures a QEMU type of disk to the virtual machine and assigns a specific volume and its type to that disk via the volumeName. devices: disks: - cdrom: bus: string readonly: true tray: string disk: bus: string readonly: true floppy: readonly: true tray: string lun: bus: string readonly: true name: string volumeName: stringcloudInitNoCloudinjects scripts and configuration into a virtual machine operating system. There are three different parameters that can be used to provide thecloud-init coniguration: secretRef, userData or userDataBase64. See the user-guide for examples of how to use . spec. volumes. cloudInitNoCloud. volumes: - cloudInitNoCloud: secretRef: name: string userData: string userDataBase64: stringAn emptyDisk volume creates an extra qcow2 disk that is created with the virtual machine. It will be removed if the VirtualMachine object is deleted. emptyDisk: capacity: stringEphemeral volume creates a temporary local copy on write image storage that will be discarded when the VirtualMachine is removed. ephemeral: persistentVolumeClaim: claimName: string readOnly: truename: stringpersistentVolumeClaim volume persists after the VirtualMachine is deleted. persistentVolumeClaim: claimName: string readOnly: trueregistryDisk volume type uses a virtual machine disk that is stored in a container image registry. 
registryDisk: image: string imagePullSecret: stringVirtual Machine Status: Once the VirtualMachine object has been created the VirtualMachineStatus will be available. VirtualMachineStatus can be used in automation tools such as Ansible to confirm running state, determine where a VirtualMachine is running via nodeName or the ipAddress of the virtual machine operating system. kubectl -o yaml get vm mongodb -n nodejs-ex# . . . output. . . status: interfaces: - ipAddress: 10. 244. 2. 7 nodeName: kn2. virtomation. com phase: RunningExample using --template to retrieve the . status. phase of the VirtualMachine. kubectl get vm mongodb --template {{. status. phase}} -n nodejs-exRunningExamples: https://kubevirt. io/user-guide/virtual_machines/virtual_machine_instances/#virtualmachineinstance-apiOfflineVirtualMachine: An OfflineVirtualMachine is an immortal object within KubeVirt. The VirtualMachinedescribed within the spec will be recreated with a start power operation, host issueor simply a accidental deletion of the VirtualMachine object. For a traditional virtual administrator this object might be appropriate formost use-cases. Just like VirtualMachine we can retrieve the OfflineVirtualMachine objects. $ kubectl get ovms -n nodejs-exNAME AGEmongodb 5dnodejs 5dAnd display the object in yaml. $ kubectl -o yaml get ovms mongodb -n nodejs-exapiVersion: kubevirt. io/v1alpha1kind: OfflineVirtualMachinemetadata:. . . output. . . We continue by annotating OfflineVirtualMachine object. apiVersion: kubevirt. io/v1alpha1kind: OfflineVirtualMachinemetadata: annotations: {} labels: {} name: string namespace: stringspec:What is Running in OfflineVirtualMachine?: . spec. running controls whether the associated VirtualMachine object is created. In other words this changes the power status of the virtual machine. running: trueThis will create a VirtualMachine object which will instantiate and power on a virtual machine. kubectl patch offlinevirtualmachine mongodb --type merge -p '{ spec :{ running :true }}' -n nodejs-exThis will delete the VirtualMachine object which will power off the virtual machine. kubectl patch offlinevirtualmachine mongodb --type merge -p '{ spec :{ running :false }}' -n nodejs-exAnd if you would rather not have to remember the kubectl patch command abovethe KubeVirt team has provided a cli tool virtctl that can start and stopa guest. . /virtctl start mongodb -n nodejs-ex. /virtctl stop mongodb -n nodejs-exOffline Virtual Machine Status: Once the OfflineVirtualMachine object has been created the OfflineVirtualMachineStatus will be available. Like VirtualMachineStatus OfflineVirtualMachineStatus can be used for automation tools such as Ansible. kubectl -o yaml get ovms mongodb -n nodejs-ex# . . . output. . . status: created: true ready: trueExample using --template to retrieve the . status. conditions[0]. type of OfflineVirtualMachine. kubectl get ovm mongodb --template {{. status. ready}} -n nodejs-extrueVirtualMachineReplicaSet: VirtualMachineReplicaSet is great when you want to run multiple identical virtual machines. Just like the other top-level objects we can retrieve VirtualMachineReplicaSet. $ kubectl get vmrs -n nodejs-exNAME AGEreplica 1mWith the replicas parameter set to 2 the command below displays the two VirtualMachine objects that were created. $ kubectl get vms -n nodejs-exNAME AGEreplicanmgjl 7mreplicarjhdz 7mPause rollout: The . spec. paused parameter if true pauses the deployment of the VirtualMachineReplicaSet. paused: trueReplica quantity: The . spec. 
replicas number of VirtualMachine objects that should be created. replicas: 0The selector must be defined and match labels defined in the template. It is used by the controller to keep track of managed virtual machines. selector: matchExpressions: - key: string operator: string values: - string matchLabels: {}Virtual Machine Template Spec: The VMTemplateSpec is the definition of a VirtualMachine objects that will be created. In the VirtualMachine section the . spec VirtualMachineSpec describes the available parameters for that object. template: metadata: annotations: {} labels: {} name: string namespace: string spec: {}Replica Status: Like the other objects we already have discussed VMReplicaSetStatus is an important object to use for automation. status: readyReplicas: 0 replicas: 0Example using --template to retrieve the . status. readyReplicas and . status. replicas of VirtualMachineReplicaSet. $ kubectl get vmrs replica --template {{. status. readyReplicas}} -n nodejs-ex2$ kubectl get vmrs replica --template {{. status. replicas}} -n nodejs-ex2Examples: https://kubevirt. io/user-guide/virtual_machines/replicaset/#exampleVirtualMachinePreset: This is used to define a DomainSpec that can be used for multiple virtual machines. To configure a DomainSpec for multiple VirtualMachine objects the selector defines which VirtualMachine the VirtualMachinePreset should be applied to. $ kubectl get vmpreset -n nodejs-exNAME AGEm1. small 17sDomain Spec: See the VirtualMachine section above for annotated details of the DomainSpec object. spec: domain: {}Preset Selector: The selector is optional but if not defined will be applied to all VirtualMachine objects; which is probably not the intended purpose so I recommend always including a selector. selector: matchExpressions: - key: string operator: string values: - string matchLabels: {}Examples: https://kubevirt. io/user-guide/virtual_machines/presets/#examplesWe provided an annotated view into the KubeVirt objects - VirtualMachine, OfflineVirtualMachine, VirtualMachineReplicaSet and VirtualMachinePreset. Hopefully this will help a user of KubeVirt to understand the options and parameters that are currently available when creating a virtual machine on Kubernetes. " }, { - "id": 146, + "id": 147, "url": "/2018/Deploying-VMs-on-Kubernetes-GlusterFS-KubeVirt.html", "title": "Deploying Vms On Kubernetes Glusterfs Kubevirt", "author" : "rwsu", "tags" : "glusterfs, heketi, virtual machine, weavenet", "body": "Kubernetes is traditionally used to deploy and manage containerized applications. Did you know Kubernetes can also be used to deploy and manage virtual machines? This guide will walk you through installing a Kubernetes environment backed by GlusterFS for storage and the KubeVirt add-on to enable deployment and management of VMs. Contents: Prerequisites Known Issues Installing Kubernetes Installing GlusterFS and Heketi using gk-deploy Installing KubeVirt Deploying Virtual MachinesPrerequisites: You should have access to at least three baremetal servers. One server will be the master Kubernetes node and other two servers will be the worker nodes. Each server should have a block device attached for GlusterFS, this is in addition to the ones used by the OS. You may use virtual machines in lieu of baremetal servers. Performance may suffer and you will need to ensure your hardware supports nested virtualization and that the relevant kernel modules are loaded in the OS. 
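If you are unsure whether a server satisfies these prerequisites, a quick check along these lines can help; this is a hedged sketch, and the kvm_intel paths apply to Intel CPUs (use kvm_amd on AMD hosts):

# Verify nested virtualization and the KVM kernel modules on each server
cat /sys/module/kvm_intel/parameters/nested   # expect 'Y' or '1' (check kvm_amd on AMD systems)
lsmod | grep kvm                              # kvm and kvm_intel (or kvm_amd) should be listed
ls -l /dev/kvm                                # the KVM device node must exist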
For reference, I used the following components and versions: baremetal servers with CentOS version 7. 4 as the base OS latest version of Kubernetes (at the time v1. 10. 1) Weave Net as the Container Network Interface (CNI), v2. 3. 0 gluster-kubernetes master commit 2a2a68ce5739524802a38f3871c545e4f57fa20a KubeVirt v0. 4. 1. Known Issues: You may need to set SELinux to permissive mode prior to running “kubeadm init” if you see failures attributed to etcd in /var/log/audit. log. Prior to installing GlusterFS, you may need to disable firewalld until this issue is resolved: https://github. com/gluster/gluster-kubernetes/issues/471 kubevirt-ansible install may fail in storage-glusterfs role: https://github. com/kubevirt/kubevirt-ansible/issues/219Installing Kubernetes: Create the Kubernetes cluster by using kubeadm. Detailed instructions can be found at https://kubernetes. io/docs/setup/independent/install-kubeadm/. Use Weave Net as the CNI. Other CNIs may work, but I have only tested Weave Net. If you are using only 2 servers as workers, then you will need to allow scheduling of pods on the master node because GlusterFS requires at least three nodes. To schedule pods on the master node, see “Master Isolation” in the kubeadm guide or execute this command: kubectl taint nodes --all node-role. kubernetes. io/master-Move onto the next step when your master and worker nodes are Ready. [root@master ~]# kubectl get nodesNAME STATUS ROLES AGE VERSIONmaster. somewhere. com Ready master 6d v1. 10. 1worker1. somewhere. com Ready <none> 6d v1. 10. 1worker2. somewhere. com Ready <none> 6d v1. 10. 1And all of the pods in the kube-system namespace are Running. [root@master ~]# kubectl get pods -n kube-systemNAME READY STATUS RESTARTS AGEetcd-master. somewhere. com 1/1 Running 0 6dkube-apiserver-master. somewhere. com 1/1 Running 0 6dkube-controller-manager-master. somewhere. com 1/1 Running 0 6dkube-dns-86f4d74b45-glv4k 3/3 Running 0 6dkube-proxy-b6ksg 1/1 Running 0 6dkube-proxy-jjxs5 1/1 Running 0 6dkube-proxy-kw77k 1/1 Running 0 6dkube-scheduler-master. somewhere. com 1/1 Running 0 6dweave-net-ldlh7 2/2 Running 0 6dweave-net-pmhlx 2/2 Running 1 6dweave-net-s4dp6 2/2 Running 0 6dInstalling GlusterFS and Heketi using gluster-kubernetes: The next step is to deploy GlusterFS and Heketi onto Kubernetes. GlusterFS provides the storage system on which the virtual machine images are stored. Heketi provides the REST API that Kubernetes uses to provision GlusterFS volumes. The gk-deploy tool is used to deploy both of these components as pods in the Kubernetes cluster. There is a detailed setup guide for gk-deploy. Note each node must have a raw block device that is reserved for use by heketi and they must not contain any data or be pre-formatted. You can reset your block device to a useable state by running: wipefs -a <path to device>To aid you, below are the commands you will need to run if you are following the setup guide. On all nodes: # Open ports for GlusterFS communicationssudo iptables -I INPUT 1 -p tcp --dport 2222 -j ACCEPTsudo iptables -I INPUT 1 -p tcp --dport 24007 -j ACCEPTsudo iptables -I INPUT 1 -p tcp --dport 24008 -j ACCEPTsudo iptables -I INPUT 1 -p tcp --dport 49152:49251 -j ACCEPT# Load kernel modulessudo modprobe dm_snapshotsudo modprobe dm_thin_poolsudo modprobe dm_mirror# Install glusterfs-fuse and git packagessudo yum install -y glusterfs-fuse gitOn the master node: # checkout gluster-kubernetes repogit clone https://github. 
com/gluster/gluster-kubernetescd gluster-kubernetes/deployBefore running the gk-deploy script, we need to first create a topology. json file that maps the nodes present in the GlusterFS cluster and the block devices attached to each node. The block devices should be raw and unformatted. Below is a sample topology. json file for a 3 node cluster all operating in the same zone. The gluster-kubernetes/deploy directory also contains a sample topology. json file. # topology. json{ clusters : [ { nodes : [ { node : { hostnames : { manage : [ master. somewhere. com ], storage : [ 192. 168. 10. 100 ] }, zone : 1 }, devices : [ /dev/vdb ] }, { node : { hostnames : { manage : [ worker1. somewhere. com ], storage : [ 192. 168. 10. 101 ] }, zone : 1 }, devices : [ /dev/vdb ] }, { node : { hostnames : { manage : [ worker2. somewhere. com ], storage : [ 192. 168. 10. 102 ] }, zone : 1 }, devices : [ /dev/vdb ] } ] } ]}Under “hostnames”, the node’s hostname is listed under “manage” and its IP address is listed under “storage”. Multiple block devices can be listed under “devices”. If you are using VMs, the second block device attached to the VM will usually be /dev/vdb. For multi-path, the device path will usually be /dev/mapper/mpatha. If you are using a second disk drive, the device path will usually be /dev/sdb. Once you have your topology. json file and saved it in gluster-kubernetes/deploy, we can execute gk-deploy to create the GlusterFS and Heketi pods. You will need to specify an admin-key which will be used in the next step and will be discovered during the KubeVirt installation. # from gluster-kubernetes/deploy. /gk-deploy -g -v -n kube-system --admin-key my-admin-keyAdd the end of the installation, you will see: heketi is now running and accessible via http://10. 32. 0. 4:8080 . To runadministrative commands you can install 'heketi-cli' and use it as follows: # heketi-cli -s http://10. 32. 0. 4:8080 --user admin --secret '<ADMIN_KEY>' cluster listYou can find it at https://github. com/heketi/heketi/releases . Alternatively,use it from within the heketi pod: # /usr/bin/kubectl -n kube-system exec -i heketi-b96c7c978-dcwlw -- heketi-cli -s http://localhost:8080 --user admin --secret '<ADMIN_KEY>' cluster listFor dynamic provisioning, create a StorageClass similar to this:\Take note of the URL for Heketi which will be used next step. If successful, 4 additional pods will be shown as Running in the kube-system namespace. [root@master deploy]# kubectl get pods -n kube-systemNAME READY STATUS RESTARTS AGE. . . snip. . . glusterfs-h4nwf 1/1 Running 0 6dglusterfs-kfvjk 1/1 Running 0 6dglusterfs-tjm2f 1/1 Running 0 6dheketi-b96c7c978-dcwlw 1/1 Running 0 6d. . . snip. . . Installing KubeVirt and setting up storage: The final component to install and which will enable us to deploy VMs on Kubernetes is KubeVirt. We will use kubevirt-ansible to deploy KubeVirt which will also help us configure a Secret and a StorageClass that will allow us to provision Persistent Volume Claims (PVCs) on GlusterFS. Let’s first clone the kubevirt-ansible repo. git clone https://github. com/kubevirt/kubevirt-ansiblecd kubevirt-ansibleEdit the inventory file in the kubevirt-ansible checkout. Modify the section that starts with “#BEGIN CUSTOM SETTINGS”. As an example using the servers from above: # BEGIN CUSTOM SETTINGS[masters]# Your master FQDNmaster. somewhere. com[etcd]# Your etcd FQDNmaster. somewhere. com[nodes]# Your nodes FQDN'sworker1. somewhere. comworker2. somewhere. 
com[nfs]# Your nfs server FQDN[glusterfs]# Your glusterfs nodes FQDN# Each node should have the glusterfs_devices variable, which# points to the block device that will be used by gluster. master. somewhere. comworker1. somewhere. comworker1. somewhere. com## If you run openshift deployment# You can add your master as schedulable node with option openshift_schedulable=true# Add at least one node with lable to run on it router and docker containers# openshift_node_labels= {'region': 'infra','zone': 'default'} # END CUSTOM SETTINGSNow let’s run the kubevirt. yml playbook: ansible-playbook -i inventory playbooks/kubevirt. yml -e cluster=k8s -e storage_role=storage-glusterfs -e namespace=kube-system -e glusterfs_namespace=kube-system -e glusterfs_name= -e heketi_url=http://10. 32. 0. 4:8080 -vIf successful, we should see 7 additional pods as Running in the kube-system namespace. [root@master kubevirt-ansible]# kubectl get pods -n kube-systemNAME READY STATUS RESTARTS AGEvirt-api-785fd6b4c7-rdknl 1/1 Running 0 6dvirt-api-785fd6b4c7-rfbqv 1/1 Running 0 6dvirt-controller-844469fd89-c5vrc 1/1 Running 0 6dvirt-controller-844469fd89-vtjct 0/1 Running 0 6dvirt-handler-78wsb 1/1 Running 0 6dvirt-handler-csqbl 1/1 Running 0 6dvirt-handler-hnlqn 1/1 Running 0 6dDeploying Virtual Machines: To deploy a VM, we must first grab a VM image in raw format, place the image into a PVC, define the VM in a yaml file, source the VM definition into Kubernetes, and then start the VM. The containerized data importer (CDI) is usually used to import VM images into Kubernetes, but there are some patches and additional testing to be done before the CDI can work smoothly with GlusterFS. For now, we will be placing the image into the PVC using a Pod that curls the image from the local filesystem using httpd. On master or on a node where kubectl is configured correctly install and start httpd. sudo yum install -y httpdsudo systemctl start httpdDownload the cirros cloud image and convert it into raw format. curl http://download. cirros-cloud. net/0. 4. 0/cirros-0. 4. 0-x86_64-disk. img -o /var/www/html/cirros-0. 4. 0-x86_64-disk. imgsudo yum install -y qemu-imgqemu-img convert /var/www/html/cirros-0. 4. 0-x86_64-disk. img /var/www/html/cirros-0. 4. 0-x86_64-disk. rawCreate the PVC to store the cirros image. cat <<EOF | kubectl create -f -apiVersion: v1kind: PersistentVolumeClaimmetadata: name: gluster-pvc-cirros annotations: volume. beta. kubernetes. io/storage-class: kubevirtspec: accessModes: - ReadWriteOnce resources: requests: storage: 5GiEOFCheck the PVC was created and has “Bound” status. [root@master ~]# kubectl get pvcNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGEgluster-pvc-cirros Bound pvc-843bd508-4dbf-11e8-9e4e-149ecfc53021 5Gi RWO kubevirt 2mCreate a Pod to curl the cirros image into the PVC. Note: You will need to substitute with actual hostname or IP address. cat <<EOF | kubectl create -f -apiVersion: v1kind: Podmetadata: name: image-importer-cirrosspec: restartPolicy: OnFailure containers: - name: image-importer-cirros image: kubevirtci/disk-importer env: - name: CURL_OPTS value: -L - name: INSTALL_TO value: /storage/disk. img - name: URL value: http://<hostname>/cirros-0. 4. 0-x86_64-disk. raw volumeMounts: - name: storage mountPath: /storage volumes: - name: storage persistentVolumeClaim: claimName: gluster-pvc-cirrosEOFCheck and wait for the image-importer-cirros Pod to complete. 
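If you are scripting this step, one way to block until the importer finishes is to poll the pod phase. This is a generic kubectl sketch, not a command from the original guide, and it has no timeout handling:

# Wait for the importer pod to reach the Succeeded phase
until [ "$(kubectl get pod image-importer-cirros -o jsonpath='{.status.phase}')" = "Succeeded" ]; do
  sleep 5
done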
[root@master ~]# kubectl get podsNAME READY STATUS RESTARTS AGEimage-importer-cirros 0/1 Completed 0 28sCreate a Virtual Machine definition for your VM and source it into Kubernetes. Note the PVC containing the cirros image must be listed as the first disk under spec. domain. devices. disks. cat <<EOF | kubectl create -f -apiVersion: kubevirt. io/v1alpha2kind: VirtualMachinemetadata: creationTimestamp: null labels: kubevirt. io/ovm: cirros name: cirrosspec: running: false template: metadata: creationTimestamp: null labels: kubevirt. io/ovm: cirros spec: domain: devices: disks: - disk: bus: virtio name: pvcdisk volumeName: cirros-pvc - disk: bus: virtio name: cloudinitdisk volumeName: cloudinitvolume machine: type: resources: requests: memory: 64M terminationGracePeriodSeconds: 0 volumes: - cloudInitNoCloud: userDataBase64: IyEvYmluL3NoCgplY2hvICdwcmludGVkIGZyb20gY2xvdWQtaW5pdCB1c2VyZGF0YScK name: cloudinitvolume - name: cirros-pvc persistentVolumeClaim: claimName: gluster-pvc-cirrosstatus: {}Finally start the VM. export VERSION=v0. 4. 1curl -L -o virtctl https://github. com/kubevirt/kubevirt/releases/download/$VERSION/virtctl-$VERSION-linux-amd64chmod +x virtctl. /virtctl start cirrosWait for the VM pod to be in “Running” status. [root@master ~]# kubectl get podsNAME READY STATUS RESTARTS AGEimage-importer-cirros 0/1 Completed 0 28svirt-launcher-cirros-krvv2 0/1 Running 0 13sOnce it is running, we can then connect to its console. . /virtctl console cirrosPress enter if a login prompt doesn’t appear. " }, { - "id": 147, + "id": 148, "url": "/2018/changelog-v0.5.0.html", "title": "KubeVirt v0.5.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 5. 0: Released on: Fri May 4 18:25:32 2018 +0200 Better controller health signaling Better virtctl error messages Improvements to enable CRI-O support Run CI on stable OpenShift Add test coverage for multiple PVCs Improved controller life-cycle guarantees Add Webhook validation Add tests coverage for node eviction OfflineVirtualMachine status improvements RegistryDisk API update" }, { - "id": 148, + "id": 149, "url": "/2018/Deploying-KubeVirt-on-a-Single-oVirt-VM.html", "title": "Deploying Kubevirt On A Single Ovirt Vm", "author" : "awels", "tags" : "ovirt, openshift", "body": "In this blog post we are exploring the possibilities of deploying KubeVirt on top of OpenShift which is running inside an oVirt VM. First we must prepare the environment. In my testing I created a VM with 4 cpus, 14G memory and a 100G disk. I then installed CentOS 7. 4 minimal on it. I also have nested virtualizationenabled on my hosts, so any VMs I create can run VMs inside them. These instructions are specific to oVirt, however if you are running another virtualizationplatform that can nested virtualization this will also work. For this example I chose to use a single VM for everything, but I could have done different VMs for my master/nodes/storage/etc, for simplicity I used a singleVM. Preparing the VM: First we will need to enable epel and install some needed tools, like git to get at the source, and ansible to do the deploy: As root: $ yum -y install epel-release$ yum -y install ansible git wgetoptionalInstall ovirt-guest-agent so you can see information in your oVirt admin view. As root: $ yum -y install ovirt-guest-agent$ systemctl start ovirt-guest-agent$ systemctl enable ovirt-guest-agentMake a template out of the VM, so if something goes wrong you have a good starting point to try again. 
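Before continuing, a few optional sanity checks can confirm the VM is ready for the next steps. This is an illustrative sketch; the guest-agent check only applies if you installed the optional package above:

hostname -f                             # should print the VM's fully qualified domain name
systemctl is-active ovirt-guest-agent   # 'active' if the optional agent was installed and started
ls -l /dev/kvm                          # nested virtualization must expose /dev/kvm inside this VM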
Make sure the VM has a fully qualified domain name, using either DNS or editing /etc/hosts. As we are going to install openshift we will need to install the openshift client tooling from openshift githubin this article I opted to simply copy the oc command into /usr/bin, but anywhere in your path will do. Alternatively you can add oc to your PATH. As root: $ wget https://github. com/openshift/origin/releases/download/v3. 9. 0/openshift-origin-client-tools-v3. 9. 0-191fece-linux-64bit. tar. gz$ tar zxvf openshift-origin-client-tools-v3. 9. 0-191fece-linux-64bit. tar. gz$ cp openshift-origin-client-tools-v3. 9. 0-191fece-linux-64bit/oc /usr/binNext we will install docker and configure it for use with open shift. As root: $ yum -y install dockerWe need to setup an insecure registry in docker before we can start open shift. To do this we must add:INSECURE_REGISTRY=”–insecure-registry 172. 30. 0. 0/16”to the end of /etc/sysconfig/docker Now we can start docker. As root: $ systemctl start docker$ systemctl enable dockerNow we are ready to test if we can bring our cluster to up. As root: $ oc cluster upInstalling KubeVirt with Ansible: Now that we have everything configured we can the rest as a regular user. Also note that if you had an existing cluster you can could have skipped the previous section. Clone the kube-virt ansible repo, and setup the ansible galaxy roles needed to deploy. As user: $ git clone https://github. com/kubevirt/kubevirt-ansible$ cd kubevirt-ansible$ mkdir $HOME/galaxy-roles$ ansible-galaxy install -p $HOME/galaxy-roles -r requirements. yml$ export ANSIBLE_ROLES_PATH=$HOME/galaxy-rolesNow that we are in the kubevirt-ansible directory, we have to edit the inventory file on where we are going to deploy the different open shift nodes. Because we opted to install everything on a single VM the FQDN we enter is the same as the one we defined for our VM. Had we had different nodes we wouldenter the FQDN of each in the inventory file. Lets assume our VMs FQDN is kubevirt. demo, we would changed the inventory file as follows: As user: [masters]kubevirt. demo[etcd]kubevirt. demo[nodes]kubevirt. demo openshift_node_labels= {'region': 'infra','zone': 'default'} openshift_schedulable=true[nfs]kubevirt. demoIn order to allow ansible to ssh into the box using ssh keys instead of a password we will need to generate some, assuming we don’t have theseconfigured already: As root: $ ssh-keygen -t rsaFill out the information in the questions, which will generate two files in /root/. ssh, id_rsa and id_rsa. pub. The id_rsa. pub is the public key which will allowssh to verify your identify when you ssh into a machine. Since we are doing all of this on the same machine, we can simply append the contents ofid_rsa. pub to authorized_keys in /root/. ssh. If that file doesn’t exist you can simply copy id_rsa. pub to authorized_keys. If you are deploying to multiple hostsyou need to append the contents of id_rsa. pub on each host. Next we need to configure docker storage, one can write a whole book about how to do that, so I will post a link https://docs. okd. io/1. 5/install_config/install/host_preparation. html#configuring-docker-storage to the installation document and for now go with the defaults which are not recommended for production, but since this is an introduction its fine. As root: $ docker-storage-setupLets double check the cluster is up before we start running the ansible play books. As root: $ oc cluster upInstall kubernetes. 
As root: $ ansible-playbook -i inventory playbooks/cluster/kubernetes/config. ymlDisable selinux on all hosts, this hopefully won’t be needed in the future. As root: $ ansible-playbook -i inventory playbooks/selinux. ymllog in as admin to give developer user rights. As root: $ oc login -u system:admin$ oc adm policy add-cluster-role-to-user cluster-admin developerLog in as the developer user. As user: $ oc login -u developerThe password for the developer user is developer. Now finally deploy kubevirt. As user: $ ansible-playbook -i localhost playbooks/kubevirt. yml -e@vars/all. ymlVerify that the pods are running, you should be in the kube-system namespace, if not switch with oc project kube-system. As user: $ kubectl get podsNAME READY STATUS RESTARTS AGEvirt-api-747745669-mswk8 1/1 Running 0 10mvirt-api-747745669-t9dsp 1/1 Running 0 10mvirt-controller-648945bbcb-ln7dv 1/1 Running 0 10mvirt-controller-648945bbcb-nxrj8 0/1 Running 0 10mvirt-handler-6zh77 1/1 Running 0 10mNow that we have KubeVirt up and running we are ready to try to start a VM. Let’s install virtctl to make it easier tostart and stop VMs. The latest available version while writing this was 0. 4. 1. As user: $ export VERSION=v0. 4. 1$ curl -L -o virtctl \ https://github. com/kubevirt/kubevirt/releases/download/$VERSION/virtctl-$VERSION-linux-amd64$ chmod +x virtctlLets grab the demo VM specification from the kubevirt github page. As user: $ kubectl apply -f https://raw. githubusercontent. com/kubevirt/demo/master/manifests/vm. yamlNow we can start the VM. As user: $ . /virtctl start testvmNow a new pod will be running that is controlling the VM. As user: $ kubectl get podsNAME READY STATUS RESTARTS AGEvirt-api-747745669-mswk8 1/1 Running 0 15mvirt-api-747745669-t9dsp 1/1 Running 0 15mvirt-controller-648945bbcb-ln7dv 1/1 Running 0 15mvirt-controller-648945bbcb-nxrj8 0/1 Running 0 15mvirt-handler-6zh77 1/1 Running 0 15mvirt-launcher-testvm-gv5nt 2/2 Running 0 23sCongratulations you now have a VM running in OpenShift using KubeVirt inside an oVirt VM. Useful resources: KubeVirt KubeVirt Ansible Minikube kubevirt Demo Kubectl installation" }, { - "id": 149, + "id": 150, "url": "/2018/This-Week-in-Kube-Virt-23.html", "title": "This Week In Kube Virt 23", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is a close-to weekly update from the KubeVirt team. In general there is now more work happening outside of the core kubevirtrepository. We are currently driven by Closing a lot of loose ends Stepping back to identify gaps for 1. 0 Within the last two weeks we achieved to: Release KubeVirt v0. 4. 1 to address some shutdown issues https://github. com/kubevirt/kubevirt/releases/tag/v0. 4. 1 Many VM life-cycle and guarantee fixes (@rmohr @vossel) https://github. com/kubevirt/kubevirt/pull/951 https://github. com/kubevirt/kubevirt/pull/948 https://github. com/kubevirt/kubevirt/pull/935 https://github. com/kubevirt/kubevirt/pull/838 https://github. com/kubevirt/kubevirt/pull/907 https://github. com/kubevirt/kubevirt/pull/883 Pass labels from VM to pod for better Service integration (@rmohr) https://github. com/kubevirt/kubevirt/pull/939 Packaging preparations (@rmohr) https://github. com/kubevirt/kubevirt/pull/941 https://github. com/kubevirt/kubevirt/issues/924 https://github. com/kubevirt/kubevirt/pull/950 Controller readiness clarifications (@rmohr) https://github. com/kubevirt/kubevirt/pull/901 Validation improvements using CRD scheme and webhooks (@vossel) Webhook: https://github. 
com/kubevirt/kubevirt/pull/911 Scheme: https://github. com/kubevirt/kubevirt/pull/850 https://github. com/kubevirt/kubevirt/pull/917 Add Windows tests (@alukiano) https://github. com/kubevirt/kubevirt/pull/809 Improve PVC tests (@petrkotas) https://github. com/kubevirt/kubevirt/pull/862 Enable SELinux in OpenShift CI environment Tests to run KubeVirt on Kubernetes 1. 10 In addition to this, we are also working on: virtctl expose convenience verb (@yuvalif) https://github. com/kubevirt/kubevirt/pull/962 CRIO support in CI virtctl bash/zsh completion (@rmohr) https://github. com/kubevirt/kubevirt/pull/916 Improved error messages from virtctl (@fromanirh) https://github. com/kubevirt/kubevirt/pull/934 Improved validation feedback (@vossel) https://github. com/kubevirt/kubevirt/pull/960 Take a look at the pulse, to get an overview over all changes of thisweek: https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 150, + "id": 151, "url": "/2018/KubeVirt-Network-Deep-Dive.html", "title": "Kubevirt Network Deep Dive", "author" : "jcpowermac, booxter", "tags" : "network, flannel, kubevirt-ansible, Skydive", "body": "In this post we will research and discover how KubeVirt networking functions along with Kubernetes objects services and ingress. This should also provide enough technical details to start troubleshooting your own environment if a problem should arise. So with that let’s get started. Remember to also check KubeVirt Network Rehash which provides updates to this article. Component InstallationWe are going to walk through the installation that assisted me to write this post. I have created three CentOS 7. 4 with nested virtualization enabled where Kubernetes will be installed, which is up next. Kubernetes: I am rehashing what is available in Kubernetes documentation just to make it easier to follow along and provide an identical environment that I used to research KubeVirt networking. Packages: Add the Kubernetes repository cat <<EOF > /etc/yum. repos. d/kubernetes. repo[kubernetes]name=Kubernetesbaseurl=https://packages. cloud. google. com/yum/repos/kubernetes-el7-\$basearchenabled=1gpgcheck=1repo_gpgcheck=1gpgkey=https://packages. cloud. google. com/yum/doc/yum-key. gpg https://packages. cloud. google. com/yum/doc/rpm-package-key. gpgEOFUpdate and install prerequisites. yum update -yyum install kubelet-1. 9. 4 \ kubeadm-1. 9. 4 \ kubectl-1. 9. 4 \ docker \ ansible \ git \ curl \ wget -yDocker prerequisites: For docker storage we will use a new disk vdb formatted XFS using the Overlay driver. cat <<EOF > /etc/sysconfig/docker-storage-setupSTORAGE_DRIVER=overlay2DEVS=/dev/vdbCONTAINER_ROOT_LV_NAME=dockerlvCONTAINER_ROOT_LV_SIZE=100%FREECONTAINER_ROOT_LV_MOUNT_PATH=/var/lib/dockerVG=dockervgEOFStart and enable Docker systemctl start dockersystemctl enable dockerAdditional prerequisites: In this section we continue with the required prerequistes. This is also described in the install kubeadm kubernetes documentation. systemctl enable kubeletThis is a requirement for Flannel - pass bridged IPv4 traffic to iptables’ chains cat <<EOF > /etc/sysctl. d/k8s. conf net. bridge. bridge-nf-call-ip6tables = 1 net. bridge. 
bridge-nf-call-iptables = 1 EOF sysctl --systemTemporarily disable selinux so we can run kubeadm init setenforce 0And let’s also permanently disable selinux - yes I know. If this isn’t done once you reboot your node kubernetes won’t start and then you will be wondering what happened :) cat <<EOF > /etc/selinux/config # This file controls the state of SELinux on the system. # SELINUX= can take one of these three values: # enforcing - SELinux security policy is enforced. # permissive - SELinux prints warnings instead of enforcing. # disabled - No SELinux policy is loaded. SELINUX=disabled # SELINUXTYPE= can take one of three two values: # targeted - Targeted processes are protected, # minimum - Modification of targeted policy. Only selected processes are protected. # mls - Multi Level Security protection. SELINUXTYPE=targeted EOFInitialize cluster: Now we are ready to create our cluster starting with the first and only master. Note --pod-network-cidr is required for Flannel kubeadm init --pod-network-cidr=10. 244. 0. 0/16. . . output. . . mkdir -p $HOME/. kube sudo cp -i /etc/kubernetes/admin. conf $HOME/. kube/config sudo chown $(id -u):$(id -g) $HOME/. kube/configThere are multiple CNI providers in this example environment just going to use Flannel since its simple to deploy and configure. kubectl apply -f https://raw. githubusercontent. com/coreos/flannel/v0. 9. 1/Documentation/kube-flannel. ymlAfter Flannel is deployed join the nodes to the cluster. kubeadm join --token 045c1c. 04765c236e1bd8da 172. 31. 50. 221:6443 \ --discovery-token-ca-cert-hash sha256:redactedOnce all the nodes have been joined check the status. $ kubectl get nodeNAME STATUS ROLES AGE VERSIONkm1. virtomation. com Ready master 11m v1. 9. 4kn1. virtomation. com Ready <none> 10m v1. 9. 4kn2. virtomation. com Ready <none> 10m v1. 9. 4Additional Components: KubeVirt: The recommended installation method is to use kubevirt-ansible. For this example I don’t require storage so just deploying using kubectl create. For additional information regarding KubeVirt install see the installation readme. $ kubectl create -f https://github. com/kubevirt/kubevirt/releases/download/v0. 4. 1/kubevirt. yamlserviceaccount kubevirt-apiserver created. . . output . . . customresourcedefinition offlinevirtualmachines. kubevirt. io createdLet’s make sure that all the pods are running. $ kubectl get pod -n kube-system -l 'kubevirt. io'NAME READY STATUS RESTARTS AGEvirt-api-747745669-62cww 1/1 Running 0 4mvirt-api-747745669-qtn7f 1/1 Running 0 4mvirt-controller-648945bbcb-dfpwm 0/1 Running 0 4mvirt-controller-648945bbcb-tppgx 1/1 Running 0 4mvirt-handler-xlfc2 1/1 Running 0 4mvirt-handler-z5lsh 1/1 Running 0 4mSkydive: I have used Skydive in the past. It is a great tool to understand the topology of software-defined-networking. The only caveat is that Skydive doesn’t create a complete topology when using Flannel but there is still a good picture of what is going on. So with that let’s go ahead and install. kubectl create ns skydivekubectl create -n skydive -f https://raw. githubusercontent. com/skydive-project/skydive/master/contrib/kubernetes/skydive. yamlCheck the status of Skydive agent and analyzer $ kubectl get pod -n skydiveNAME READY STATUS RESTARTS AGEskydive-agent-5hh8k 1/1 Running 0 5mskydive-agent-c29l7 1/1 Running 0 5mskydive-analyzer-5db567b4bc-m77kq 2/2 Running 0 5mingress-nginx: To provide external access our example NodeJS application we need to an ingress controller. 
For this example we are going to use ingress-nginx I created a simple script ingress. sh that follows the installation documentation for ingress-nginx with a couple minor modifications: Patch the nginx-configuration ConfigMap to enable vts status Add an additional containerPort to the deployment and an additional port to the service. Create an ingress to access nginx status page The script and additional files are available in the github repo listed below. git clone https://github. com/jcpowermac/kubevirt-network-deepdivecd kubevirt-network-deepdive/kubernetes/ingressbash ingress. shAfter the script is complete confirm that ingress-nginx pods are running. $ kubectl get pod -n ingress-nginxNAME READY STATUS RESTARTS AGEdefault-http-backend-55c6c69b88-jpl95 1/1 Running 0 1mnginx-ingress-controller-85c8787886-vf5tp 1/1 Running 0 1mKubeVirt Virtual MachinesNow, we are at a point where we can deploy our first KubeVirt virtual machines. These instances are where we will install our simple NodeJS and MongoDB application. Create objects: Let’s create a clean new namespace to use. $ kubectl create ns nodejs-exnamespace nodejs-ex createdThe nodejs-ex. yaml contains multiple objects. The definitions for our two virtual machines - mongodb and nodejs. Two Kubernetes Services and a one Kubernetes Ingress object. These instances will be created as offline virtual machines so after kubectl create we will start them up. $ kubectl create -f https://raw. githubusercontent. com/jcpowermac/kubevirt-network-deepdive/master/kubernetes/nodejs-ex. yaml -n nodejs-exofflinevirtualmachine nodejs createdofflinevirtualmachine mongodb createdservice mongodb createdservice nodejs createdingress nodejs createdStart the nodejs virtual machine $ kubectl patch offlinevirtualmachine nodejs --type merge -p '{ spec :{ running :true}}' -n nodejs-exofflinevirtualmachine nodejs patchedStart the mongodb virtual machine $ kubectl patch offlinevirtualmachine mongodb --type merge -p '{ spec :{ running :true}}' -n nodejs-exofflinevirtualmachine mongodb patchedReview kubevirt virtual machine objects $ kubectl get ovms -n nodejs-exNAME AGEmongodb 7mnodejs 7m$ kubectl get vms -n nodejs-exNAME AGEmongodb 4mnodejs 5mWhere are the virtual machines and what is their IP address? $ kubectl get pod -o wide -n nodejs-exNAME READY STATUS RESTARTS AGE IP NODEvirt-launcher-mongodb-qdpmg 2/2 Running 0 4m 10. 244. 2. 7 kn2. virtomation. comvirt-launcher-nodejs-5r59c 2/2 Running 0 4m 10. 244. 1. 8 kn1. virtomation. comNote To test virtual machine to virtual machine network connectivity I purposely set the host where which instance would run by using a nodeSelector. Installing the NodeJS Example Application: To quickly deploy our example application Ansible project is included in the repository. Two inventory files need to be modified before executing ansible-playbook. Within all. yml change the analyzers IP address to what is listed in the command below. $ kubectl get endpoints -n skydiveNAME ENDPOINTS AGEskydive-analyzer 10. 244. 1. 2:9200,10. 244. 1. 2:12379,10. 244. 1. 2:8082 + 1 more. . . 18hAnd finally use the IP Addresses from the kubectl get pod -o wide -n nodejs-ex command (example above) to modify inventory/hosts. ini. Now we can run ansible-playbook. cd kubevirt-network-deepdive/ansiblevim inventory/group_vars/all. ymlvim inventory/hosts. iniansible-playbook -i inventory/hosts. ini playbook/main. yml. . . output . . . Determine Ingress URL: First let’s find the host. This is defined within the Ingress object. In this case it is nodejs. 
ingress. virtomation. com. $ kubectl get ingress -n nodejs-exNAME HOSTS ADDRESS PORTS AGEnodejs nodejs. ingress. virtomation. com 80 22mWhat are the NodePorts? For this installation Service spec was modified to include nodePort for http (30000) and http-mgmt (32000). Note When deploying ingress-nginx using the provided Service definition the nodePort is undefined. Kubernetes will assign a random port to ports defined in the spec. $ kubectl get service ingress-nginx -n ingress-nginxNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGEingress-nginx NodePort 10. 110. 173. 97 <none> 80:30000/TCP,443:30327/TCP,18080:32000/TCP 52mWhat node is the nginx-ingress controller running on? This is needed to configure DNS. $ kubectl get pod -n ingress-nginx -o wideNAME READY STATUS RESTARTS AGE IP NODEdefault-http-backend-55c6c69b88-jpl95 1/1 Running 0 53m 10. 244. 1. 3 kn1. virtomation. comnginx-ingress-controller-85c8787886-vf5tp 1/1 Running 0 53m 10. 244. 1. 4 kn1. virtomation. comConfigure DNS: In my homelab I am using dnsmasq. To support ingress add the host where the controller is running as an A record. [root@dns1 ~]# cat /etc/dnsmasq. d/virtomation. conf. . . output . . . address=/km1. virtomation. com/172. 31. 50. 221address=/kn1. virtomation. com/172. 31. 50. 231address=/kn2. virtomation. com/172. 31. 50. 232# Needed for nginx-ingressaddress=/. ingress. virtomation. com/172. 31. 50. 231. . . output . . . Restart dnsmasq for the new config systemctl restart dnsmasqTesting our application: This application uses MongoDB to store the views of the website. Listing the count-value shows that the database is connected and networking is functioning correctly. $ curl http://nodejs. ingress. virtomation. com:30000/<!doctype html><html lang= en >. . . output. . . <p>Page view count:<span class= code id= count-value >7</span></p>. . . output. . . KubeVirt NetworkingNow that we shown that kubernetes, kubevirt, ingress-nginx and flannel work together how is it accomplished? First let’s go over what is going on in kubevirt specifically. KubeVirt networking virt-launcher - virtwrap: virt-launcher is the pod that runs the necessary components instantiate and run a virtual machine. We are only going to concentrate on the network portion in this post. virtwrap manager: Before the virtual machine is started the preStartHook will run SetupPodNetwork. SetupPodNetwork → SetupDefaultPodNetwork: This function calls three functions that are detailed below discoverPodNetworkInterface, preparePodNetworkInterface and StartDHCP discoverPodNetworkInterface: This function gathers the following information about the pod interface: IP Address Routes Gateway MAC address This is stored for later use in configuring DHCP. preparePodNetworkInterfaces: Once the current details of the pod interface have been stored following operations are performed: Delete the IP address from the pod interface Set the pod interface down Change the pod interface MAC address Set the pod interface up Create the bridge Add the pod interface to the bridge This will provide libvirt a bridge to use for the virtual machine that will be created. StartDHCP → DHCPServer → SingleClientDHCPServer: This DHCP server only provides a single address to a client in this case the virtual machine that will be started. The network details - the IP address, gateway, routes, DNS servers and suffixes are taken from the pod which will be served to the virtual machine. 
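To make the sequence above more concrete, here is a conceptual shell equivalent of what preparePodNetworkInterfaces does. This is not the actual KubeVirt code; the interface and bridge names mirror the eth0/br1 names that appear in the pod-level section later, and the MAC address is purely illustrative:

# Conceptual equivalent of preparePodNetworkInterfaces, run inside the pod's network namespace
ip addr flush dev eth0                       # delete the IP address from the pod interface
ip link set eth0 down                        # set the pod interface down
ip link set eth0 address aa:bb:cc:dd:ee:01   # change the pod interface MAC address (illustrative value)
ip link set eth0 up                          # set the pod interface up
ip link add br1 type bridge                  # create the bridge
ip link set eth0 master br1                  # add the pod interface to the bridge
ip link set br1 up
# The saved pod IP, routes and DNS settings are then served to the VM by the single-client DHCP server.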
Networking in detailNow that we have a clearier picture of kubevirt networking we will continue with details regarding kubernetes objects, host, pod and virtual machine networking components. Then we will finish up with two scenarios: virtual machine to virtual machine communication and ingress to virtual machine. Kubernetes-level: services: There are two services defined in the manifest that was deployed above. One each for mongodb and nodejs applications. This allows us to use the hostname mongodb to connect to MongoDB. Review DNS for Services and Pods for additional information. $ kubectl get services -n nodejs-exNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGEmongodb ClusterIP 10. 108. 188. 170 <none> 27017/TCP 3hnodejs ClusterIP 10. 110. 233. 114 <none> 8080/TCP 3hendpoints: The endpoints below were automatically created because there was a selector spec: selector: kubevirt. io: virt-launcher kubevirt. io/domain: nodejsdefined in the Service object. $ kubectl get endpoints -n nodejs-exNAME ENDPOINTS AGEmongodb 10. 244. 2. 7:27017 1hnodejs 10. 244. 1. 8:8080 1hingress: Also defined in the manifest was the ingress object. This will allow us to contact the NodeJS example application using a URL. $ kubectl get ingress -n nodejs-exNAME HOSTS ADDRESS PORTS AGEnodejs nodejs. ingress. virtomation. com 80 3hHost-level: interfaces: A few important interfaces to note. The flannel. 1 interface is type vxlan for connectivity between hosts. I removed from the ip a output the veth interfaces but the details are shown further below with bridge link show. [root@kn1 ~]# ip a. . . output. . . 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 52:54:00:97:a6:ee brd ff:ff:ff:ff:ff:ff inet 172. 31. 50. 231/24 brd 172. 31. 50. 255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::5054:ff:fe97:a6ee/64 scope link valid_lft forever preferred_lft forever. . . output. . . 4: flannel. 1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN link/ether ce:4e:fb:41:1d:af brd ff:ff:ff:ff:ff:ff inet 10. 244. 1. 0/32 scope global flannel. 1 valid_lft forever preferred_lft forever inet6 fe80::cc4e:fbff:fe41:1daf/64 scope link valid_lft forever preferred_lft forever5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP qlen 1000 link/ether 0a:58:0a:f4:01:01 brd ff:ff:ff:ff:ff:ff inet 10. 244. 1. 1/24 scope global cni0 valid_lft forever preferred_lft forever inet6 fe80::341b:eeff:fe06:7ec/64 scope link valid_lft forever preferred_lft forever. . . output. . . cni0 is a bridge where one side of the veth interface pair is attached. 
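The bridge link show output below lists the ports attached to cni0; if you prefer to inspect the same thing from the ip side, commands like the following also work (the veth name is taken from the output below and will differ on your nodes):

ip link show master cni0        # list interfaces enslaved to the cni0 bridge
ip -d link show vethb4424886    # '-d' prints veth/bridge details, including the peer's link-netnsid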
[root@kn1 ~]# bridge link show6: vethb4424886 state UP @docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 master cni0 state forwarding priority 32 cost 27: veth1657737b state UP @docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 master cni0 state forwarding priority 32 cost 28: vethdfd32c87 state UP @docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 master cni0 state forwarding priority 32 cost 29: vethed0f8c9a state UP @docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 master cni0 state forwarding priority 32 cost 210: veth05e4e005 state UP @docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 master cni0 state forwarding priority 32 cost 211: veth25933a54 state UP @docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 master cni0 state forwarding priority 32 cost 212: vethe3d701e7 state UP @docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 master cni0 state forwarding priority 32 cost 2routes: The pod network subnet is 10. 244. 0. 0/16 and broken up per host: km1 - 10. 244. 0. 0/24 kn1 - 10. 244. 1. 0/24 kn2 - 10. 244. 2. 0/24 So the table will route the packets to correct interface. [root@kn1 ~]# ip rdefault via 172. 31. 50. 1 dev eth010. 244. 0. 0/24 via 10. 244. 0. 0 dev flannel. 1 onlink10. 244. 1. 0/24 dev cni0 proto kernel scope link src 10. 244. 1. 110. 244. 2. 0/24 via 10. 244. 2. 0 dev flannel. 1 onlink172. 17. 0. 0/16 dev docker0 proto kernel scope link src 172. 17. 0. 1172. 31. 50. 0/24 dev eth0 proto kernel scope link src 172. 31. 50. 231iptables: To also support kubernetes services kube-proxy writes iptables rules for those services. In the output below you can see our mongodb and nodejs services with destination NAT rules defined. For more information regarding iptables and services refer to debug-service in the kubernetes documentation. [root@kn1 ~]# iptables -n -L -t nat | grep nodejs-exKUBE-MARK-MASQ all -- 10. 244. 1. 8 0. 0. 0. 0/0 /* nodejs-ex/nodejs: */DNAT tcp -- 0. 0. 0. 0/0 0. 0. 0. 0/0 /* nodejs-ex/nodejs: */ tcp to:10. 244. 1. 8:8080KUBE-MARK-MASQ all -- 10. 244. 2. 7 0. 0. 0. 0/0 /* nodejs-ex/mongodb: */DNAT tcp -- 0. 0. 0. 0/0 0. 0. 0. 0/0 /* nodejs-ex/mongodb: */ tcp to:10. 244. 2. 7:27017KUBE-MARK-MASQ tcp -- !10. 244. 0. 0/16 10. 108. 188. 170 /* nodejs-ex/mongodb: cluster IP */ tcp dpt:27017KUBE-SVC-Z7W465PEPK7G2UVQ tcp -- 0. 0. 0. 0/0 10. 108. 188. 170 /* nodejs-ex/mongodb: cluster IP */ tcp dpt:27017KUBE-MARK-MASQ tcp -- !10. 244. 0. 0/16 10. 110. 233. 114 /* nodejs-ex/nodejs: cluster IP */ tcp dpt:8080KUBE-SVC-LATB7COHB4ZMDCEC tcp -- 0. 0. 0. 0/0 10. 110. 233. 114 /* nodejs-ex/nodejs: cluster IP */ tcp dpt:8080KUBE-SEP-JOPA2J4R76O5OVH5 all -- 0. 0. 0. 0/0 0. 0. 0. 0/0 /* nodejs-ex/nodejs: */KUBE-SEP-QD4L7MQHCIVOWZAO all -- 0. 0. 0. 0/0 0. 0. 0. 0/0 /* nodejs-ex/mongodb: */Pod-level: interfaces: The bridge br1 is the main focus in the pod level. It contains the eth0 and vnet0 ports. eth0 becomes the uplink to the bridge which is the other side of the veth pair which is a port on the host’s cni0 bridge. Important Since eth0 has no IP address and br1 is in the self-assigned range the pod has no network access. There are also no routes in the pod. This can be resolved for troubleshooting by creating a veth pair, adding one of the interfaces to the bridge and assigning an IP address in the pod subnet for the host. Routes are also required to be added. This is performed for running skydive in the pod see skydive. sh for more details. $ kubectl exec -n nodejs-ex -c compute virt-launcher-nodejs-5r59c -- ip a. . . output. . . 
3: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br1 state UP group default link/ether a6:97:da:96:cf:07 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet6 fe80::a497:daff:fe96:cf07/64 scope link valid_lft forever preferred_lft forever4: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default link/ether 32:8a:f5:59:10:02 brd ff:ff:ff:ff:ff:ff inet 169. 254. 75. 86/32 brd 169. 254. 75. 86 scope global br1 valid_lft forever preferred_lft forever inet6 fe80::a497:daff:fe96:cf07/64 scope link valid_lft forever preferred_lft forever5: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast master br1 state UNKNOWN group default qlen 1000 link/ether fe:58:0a:f4:01:08 brd ff:ff:ff:ff:ff:ff inet6 fe80::fc58:aff:fef4:108/64 scope link valid_lft forever preferred_lft foreverShowing the bridge br1 member ports. $ kubectl exec -n nodejs-ex -c compute virt-launcher-nodejs-5r59c -- bridge link show3: eth0 state UP @if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 master br1 state forwarding priority 32 cost 25: vnet0 state UNKNOWN : <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 master br1 state forwarding priority 32 cost 100DHCP: The virtual machine network is configured by DHCP. You can see virt-launcher has UDP port 67 open on the br1 interface to serve DHCP to the virtual machine. $ kubectl exec -n nodejs-ex -c compute virt-launcher-nodejs-5r59c -- ss -tuapnNetid State Recv-Q Send-Q Local Address:Port Peer Address:Portudp UNCONN 0 0 0. 0. 0. 0%br1:67 0. 0. 0. 0:* users:(( virt-launcher ,pid=10,fd=12))libvirt: With virsh domiflist we can also see that the vnet0 interface is a port on the br1 bridge. $ kubectl exec -n nodejs-ex -c compute virt-launcher-nodejs-5r59c -- virsh domiflist nodejs-ex_nodejsInterface Type Source Model MACvnet0 bridge br1 e1000 0a:58:0a:f4:01:08VM-level: interfaces: Fortunately the vm interfaces are fairly typical. Just the single interface that has been assigned the original pod ip address. Warning The MTU of the virtual machine interface is set to 1500. The network interfaces upstream are set to 1450. [fedora@nodejs ~]$ ip a. . . output. . . 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 0a:58:0a:f4:01:08 brd ff:ff:ff:ff:ff:ff inet 10. 244. 1. 8/24 brd 10. 244. 1. 255 scope global dynamic eth0 valid_lft 86299761sec preferred_lft 86299761sec inet6 fe80::858:aff:fef4:108/64 scope link valid_lft forever preferred_lft foreverDNS: Just quickly wanted to cat the /etc/resolv. conf file to show that DNS is configured so that kube-dns will be properly queried. [fedora@nodejs ~]$ cat /etc/resolv. conf; generated by /usr/sbin/dhclient-scriptsearch nodejs-ex. svc. cluster. local. svc. cluster. local. cluster. local. nameserver 10. 96. 0. 10VM to VM communication: The virtual machines are on differnet hosts. This was done purposely to show that connectivity between virtual machine and hosts. Here we finally get to use Skydive. The real-time topology below along with arrows annotate the flow of packets between the host, pod and virtual machine network devices. VM to VM Connectivity Tests: To confirm connectivity we are going to do a few things. First check for DNS resolution for the mongodb service. Next look a established connection to MongoDB and finally check the NodeJS logs looking for confirmation of database connection. DNS resolution: Service-based DNS resolution is an important feature of Kubernetes. 
Since dig,host or nslookup are not installed in our virtual machine a quick python script fills in. This output below shows that the mongodb name is available for resolution. [fedora@nodejs ~]$ python3 -c import socket;print(socket. gethostbyname('mongodb. nodejs-ex. svc. cluster. local')) 10. 108. 188. 170[fedora@nodejs ~]$ python3 -c import socket;print(socket. gethostbyname('mongodb')) 10. 108. 188. 170TCP connection: After connecting to the nodejs virtual machine via ssh we can use ss to determine the current TCP connections. We are specifically looking for the established connections to the MongoDB service that is running on the mongodb virtual machine on node kn2. [fedora@nodejs ~]$ ss -tanpState Recv-Q Send-Q Local Address:Port Peer Address:Port. . . output . . . LISTEN 0 128 *:8080 *:*ESTAB 0 0 10. 244. 1. 8:47826 10. 108. 188. 170:27017ESTAB 0 0 10. 244. 1. 8:47824 10. 108. 188. 170:27017. . . output . . . Logs: [fedora@nodejs ~]$ journalctl -u nodejs. . . output. . Apr 18 20:07:37 nodejs. localdomain node[4303]: Connected to MongoDB at: mongodb://nodejs:nodejspassword@mongodb/nodejs. . . output. . . Ingress to VM communication: The topology image below shows the packet flow when using a ingress kubernetes object. The commands below the image will provide additional details. Ingress to VM The kube-proxy has port 30000 open that was defined by the nodePort of the ingress-nginx service. Additional details on kube-proxy and iptables role is available from Service - IPs and VIPs in the Kubernetes documentation. [root@kn1 ~]# ss -tanp | grep 30000LISTEN 0 128 :::30000 :::* users:(( kube-proxy ,pid=6534,fd=13))[root@kn1 ~]# iptables -n -L -t nat | grep ingress-nginx/ingress-nginx | grep http | grep -v https | grep -v http-mgmtKUBE-MARK-MASQ tcp -- 0. 0. 0. 0/0 0. 0. 0. 0/0 /* ingress-nginx/ingress-nginx:http */ tcp dpt:30000KUBE-SVC-REQ4FPVT7WYF4VLA tcp -- 0. 0. 0. 0/0 0. 0. 0. 0/0 /* ingress-nginx/ingress-nginx:http */ tcp dpt:30000KUBE-MARK-MASQ all -- 10. 244. 1. 4 0. 0. 0. 0/0 /* ingress-nginx/ingress-nginx:http */DNAT tcp -- 0. 0. 0. 0/0 0. 0. 0. 0/0 /* ingress-nginx/ingress-nginx:http */ tcp to:10. 244. 1. 4:80KUBE-MARK-MASQ tcp -- !10. 244. 0. 0/16 10. 110. 173. 97 /* ingress-nginx/ingress-nginx:http cluster IP */ tcp dpt:80KUBE-SVC-REQ4FPVT7WYF4VLA tcp -- 0. 0. 0. 0/0 10. 110. 173. 97 /* ingress-nginx/ingress-nginx:http cluster IP */ tcp dpt:80KUBE-SEP-BKJT4JXHZ3TCOTKA all -- 0. 0. 0. 0/0 0. 0. 0. 0/0 /* ingress-nginx/ingress-nginx:http */Since the ingress-nginx pod is on the same host as the nodejs virtual machine we just need to be routed to the cni0 bridge to communicate with the pod and vm. [root@kn1 ~]# ip r. . . output. . . 10. 244. 1. 0/24 dev cni0 proto kernel scope link src 10. 244. 1. 1. . . output. . . Connectivity Tests: In the section where we installed the application we already tested for connectivity but let’s take this is little further to confirm. Nginx Vhost Traffic Status: ingress-nginx provides an optional setting to enable traffic status - which we already enabled. The screenshot below shows the requests that Nginx is receiving for nodejs. ingress. virtomation. com. nginx-vts Service NodePort to Nginx Pod: My tcpdump fu is lacking so I found an example query that will provide the details we are looking for. I removed a significant amount of the content but you can see my desktop (172. 31. 51. 52) create a GET request to the NodePort 30000. 
This could have also been done in Skydive but I wanted to provide an alternative if you didn’t want to install it or just stick to the cli. # tcpdump -nni eth0 -A -s 0 'tcp port 30000 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'. . . output. . . 13:24:52. 197092 IP 172. 31. 51. 52. 36494 > 172. 31. 50. 231. 30000: Flags [P. ], seq 2685726663:2685727086, ack 277056091, win 491, options [nop,nop,TS val 267689990 ecr 151714950], length 423E. . . . @. ?. Z. . . 34. . 2. . . u0. . . . . . . [. . . . r. . . . . . . . . . . . GET / HTTP/1. 1Host: nodejs. ingress. virtomation. com:30000User-Agent: Mozilla/5. 0 (X11; Fedora; Linux x86_64; rv:59. 0) Gecko/20100101 Firefox/59. 0Accept: text/html,application/xhtml+xml,application/xml;q=0. 9,*/*;q=0. 8Accept-Language: en-US,en;q=0. 5Accept-Encoding: gzip, deflateConnection: keep-aliveUpgrade-Insecure-Requests: 1If-None-Match: W/ 9edb-O5JGhneli0eCE6G2kFY5haMKg5k Cache-Control: max-age=013:24:52. 215284 IP 172. 31. 50. 231. 30000 > 172. 31. 51. 52. 36494: Flags [P. ], seq 1:2362, ack 423, win 236, options [nop,nop,TS val 151723713 ecr 267689990], length 2361E. m|. @. ?. . . . . 2. . . 34u0. . . . . [. . . n. . . . . . . . . . . . . . . . . . HTTP/1. 1 200 OK Server: nginx/1. 13. 12 Date: Fri, 20 Apr 2018 13:24:52 GMT Content-Type: text/html; charset=utf-8 Transfer-Encoding: chunked Connection: keep-alive Vary: Accept-Encoding X-Powered-By: Express ETag: W/ 9edb-SZeP35LuygZ9MOrPTIySYOu9sAE Content-Encoding: gzipNginx Pod to NodeJS VM: In (1) we can see flows to and from 10. 244. 1. 4 and 10. 244. 1. 8. . 8 is the nodejs virtual machine and . 4 is as listed below the nginx-ingress-controller. $ kubectl get pod --all-namespaces -o wideNAMESPACE NAME READY STATUS RESTARTS AGE IP NODE. . . output. . . ingress-nginx nginx-ingress-controller-85c8787886-vf5tp 1/1 Running 0 1d 10. 244. 1. 4 kn1. virtomation. com. . . output. . . ingress-vm Final ThoughtsWe have went through quite a bit in this deep dive from installation, KubeVirt specific networking details and kubernetes, host, pod and virtual machine level configurations. Finishing up with the packet flow between virtual machine to virtual machine and ingress to virtual machine. " }, { - "id": 151, + "id": 152, "url": "/2018/This-Week-in-Kube-Virt-22.html", "title": "This Week In Kube Virt 22", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is a close-to weekly update from the KubeVirt team. In general there is now more work happening outside of the core kubevirtrepository. We are currently driven by Building a solid user-story around KubeVirt Caring about end-to-end (backend, core, ui) Getting dependencies into shape (storage) Improve the user-experience for users (UI, deployment) Being easier to be used on Kubernetes and OpenShift Within the last two weeks we achieved to: Release KubeVirt v0. 4. 0(https://github. com/kubevirt/kubevirt/releases/tag/v0. 4. 0) Many networking fixes (@mlsorensen @vladikr)(https://github. com/kubevirt/kubevirt/pull/870https://github. com/kubevirt/kubevirt/pull/869https://github. com/kubevirt/kubevirt/pull/847https://github. com/kubevirt/kubevirt/pull/856https://github. com/kubevirt/kubevirt/pull/839https://github. com/kubevirt/kubevirt/pull/830) Aligned config reading for virtctl (@rmohr)(https://github. com/kubevirt/kubevirt/pull/860) Subresource Aggregated API server for console endpoints (@vossel)(https://github. com/kubevirt/kubevirt/pull/770) Enable OpenShift tests in CI (@alukiano @rmohr)(https://github. 
com/kubevirt/kubevirt/pull/833) virtctl convenience functions for start/stop of VMs (@sgott)(https://github. com/kubevirt/kubevirt/pull/817) Ansible - Improved Gluster support for kubevirt-ansible(https://github. com/kubevirt/kubevirt-ansible/pull/174) POC Device Plugins for KVM and network (@mpolednik @phoracek)https://github. com/kubevirt/kubernetes-device-plugins In addition to this, we are also working on: Additional network glue approach (@vladikr)(https://github. com/kubevirt/kubevirt/pull/787) CRD validation using OpenAPIv3 (@vossel)(https://github. com/kubevirt/kubevirt/pull/850) Windows VM tests (@alukiano)(https://github. com/kubevirt/kubevirt/pull/809) Data importer - Functional tests(https://github. com/kubevirt/containerized-data-importer/pull/81) Take a look at the pulse, to get an overview over all changes of thisweek: https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 152, + "id": 153, "url": "/2018/changelog-v0.4.0.html", "title": "KubeVirt v0.4.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 4. 0: Released on: Fri Apr 6 16:40:31 2018 +0200 Fix several networking issues Add and enable OpenShift support to CI Add conditional Windows tests (if an image is present) Add subresources for console access virtctl config alignmnet with kubectl Fix API reference generation Stable UUIDs for OfflineVirtualMachines Build virtctl for MacOS and Windows Set default architecture to x86_64 Major improvement to the CI infrastructure (all containerized) virtctl convenience functions for starting and stopping a VM" }, { - "id": 153, + "id": 154, "url": "/2018/This-Week-in-Kube-Virt-21.html", "title": "This Week In Kube Virt 21", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is a weekly update from the KubeVirt team. In general there is now more work happening outside of the core kubevirtrepository. We are currently driven by Building a solid user-story around KubeVirt Caring about end-to-end (backend, core, ui) Getting dependencies into shape (storage) Improve the user-experience for users (UI, deployment) Being easier to be used on Kubernetes and OpenShift Within the last two weeks we achieved to: Multi platform (Windows, Mac, Linux) support for virtctl (@slintes)(https://github. com/kubevirt/kubevirt/pull/811) Stable UUIDs for OfflineVirtualMachines (@fromanirh)(https://github. com/kubevirt/kubevirt/pull/766) OpenShift support for CI (@alukiano, @rmohr)(https://github. com/kubevirt/kubevirt/pull/792) v2v improvements - for easier imports of existing VMs (@pkliczewski)(https://github. com/kubevirt/v2v-job) Data importer - to import existing disk images (@copejon @jeffvance)(https://github. com/kubevirt/containerized-data-importer) POC Device Plugins for KVM and network (@mpolednik @phoracek)https://github. com/kubevirt/kubernetes-device-plugins In addition to this, we are also working on: Subresources for consoles (@davidvossel)(https://github. com/kubevirt/kubevirt/pull/770) Additional network glue approach (@vladikr)(https://github. com/kubevirt/kubevirt/pull/787) virtctl convenience functions for start/stop of VMs (@sgott)(https://github. com/kubevirt/kubevirt/pull/817) Take a look at the pulse, to get an overview over all changes of thisweek: https://github. 
com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 154, + "id": 155, "url": "/2018/This-Week-in-Kube-Virt-20.html", "title": "This Week In Kube Virt 20", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is a weekly update from the KubeVirt team. We are currently driven by Building a solid user-story around KubeVirt Caring about end-to-end (backend, core, ui) Getting dependencies into shape (storage) Improve the user-experience for users (UI, deployment) Being easier to be used on Kubernetes and OpenShift Within the last two weeks we achieved to: Released KubeVirt v0. 3. 0https://github. com/kubevirt/kubevirt/releases/tag/v0. 3. 0 Merged VirtualMachinePresets (@stu-gott)(https://github. com/kubevirt/kubevirt/pull/652) Merged OfflineVirtualMachine (@pkotas)(https://github. com/kubevirt/kubevirt/pull/667) Merged ephemeral disk support (@alukiano)(https://github. com/kubevirt/kubevirt/pull/728) Fixes to test KubeVirt on OpenShift (@alukiano)(https://github. com/kubevirt/kubevirt/pull/774) Scheduler awareness of VM pods (@vladikr)(https://github. com/kubevirt/kubevirt/pull/673) Plain text inline cloud-init (@alukiano)(https://github. com/kubevirt/kubevirt/pull/757) Define guest specific labels to be used with presets (@yanirq)(https://github. com/kubevirt/kubevirt/pull/767) Special note: A ton of automation, CI, and test fixes (@rmohr) In addition to this, we are also working on: Stable UUIDs for OfflineVirtualMachines (@fromanirh)(https://github. com/kubevirt/kubevirt/pull/766) Subresources for consoles (@davidvossel)(https://github. com/kubevirt/kubevirt/pull/770) Additional network glue approach (@vladikr)(https://github. com/kubevirt/kubevirt/pull/787) Improvement for testing on OpenShift (@alukiano)(https://github. com/kubevirt/kubevirt/pull/792) Take a look at the pulse, to get an overview over all changes of thisweek: https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 155, + "id": 156, "url": "/2018/changelog-v0.3.0.html", "title": "KubeVirt v0.3.0", "author" : "kube🤖", "tags" : "release notes, changelog", "body": "v0. 3. 0: Released on: Thu Mar 8 10:21:57 2018 +0100 Kubernetes compatible networking Kubernetes compatible PV based storage VirtualMachinePresets support OfflineVirtualMachine support RBAC improvements Switch to q35 machien type by default A large number of test and CI fixes Ephemeral disk support" }, { - "id": 156, + "id": 157, "url": "/2018/This-Week-in-Kube-Virt-19.html", "title": "This Week In Kube Virt 19", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is a bi-weekly update from the KubeVirt team. 
We are currently driven by Building a solid user-story around KubeVirt Caring about end-to-end (backend, core, ui) Getting dependencies into shape (storage) Improve the user-experience for users (UI, deployment) Being easier to be used on Kubernetes and OpenShift Within the last two weeks we achieved to: Support for native file-system PVs as disk storage (@alukiano,@davidvossel) (https://github. com/kubevirt/kubevirt/pull/734,https://github. com/kubevirt/kubevirt/pull/671) Support for native pod networking for VMs (@vladikr)(https://github. com/kubevirt/kubevirt/pull/686) Many patches to improve kubevirt-ansible usability(https://github. com/kubevirt/kubevirt-ansible/pulse/monthly) Introduce the kubernetes-device-plugins (@mpolednik)(https://github. com/kubevirt/kubernetes-device-plugins/) Introduce the kubernetes-device-plugin for bridge networking(@mpolednik)(https://github. com/kubevirt/kubernetes-device-plugins/pull/4) Add vendor/ tree (@davidvossel)(https://github. com/kubevirt/kubevirt/pull/715) Expose disk bus (@fromani)(https://github. com/kubevirt/kubevirt/pull/672) Allow deploying OpenShift in vagrant (@alukiano)(https://github. com/kubevirt/kubevirt/pull/631) Release of v0. 3. 0-alpha. 3(https://github. com/kubevirt/kubevirt/releases/tag/v0. 3. 0-alpha. 3) In addition to this, we are also working on: Implement VirtualMachinePresets (@stu-gott)(https://github. com/kubevirt/kubevirt/pull/652) Implement OfflineVirtualMachines (@pkotas)(https://github. com/kubevirt/kubevirt/pull/667) Expose CPU requirements in VM pod (@vladikr)(https://github. com/kubevirt/kubevirt/pull/673) Take a look at the pulse, to get an overview over all changes of thisweek: https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 157, + "id": 158, "url": "/2018/This-Week-in-Kube-Virt-18.html", "title": "This Week In Kube Virt 18", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is a weekly update from the KubeVirt team. We are currently driven by Building a solid user-story around KubeVirt Caring about end-to-end (backend, core, ui) Rework our architecture Getting dependencies into shape (storage) Improve the user-experience for users (UI, deployment) Being easier to be used on Kubernetes and OpenShift Within the last weeks we achieved to: Move to a decentralized the architecture (@davidvossel)(https://github. com/kubevirt/kubevirt/pull/663) Drop live migration for now (@davidvossel)(https://github. com/kubevirt/kubevirt/pull/702) Change default network provider to flannel (@alukiano)(https://github. com/kubevirt/kubevirt/pull/710) Adjust uuid API (@mpolednik)(https://github. com/kubevirt/kubevirt/pull/675) Make cirros and alpine ready for q35 (@rmohr)(https://github. com/kubevirt/kubevirt/pull/688) In addition to this, we are also working on: Decentralized pod networking (@vladikr)(https://github. com/kubevirt/kubevirt/pull/686) Implement VirtualMachinePresets (@stu-gott)(https://github. com/kubevirt/kubevirt/pull/652) Implement OfflineVirtualMachines (@pkotas)(https://github. com/kubevirt/kubevirt/pull/667) Allow deploying OpenShift in vagrant (@alukiano)(https://github. com/kubevirt/kubevirt/pull/631) Expose CPU requirements in VM pod (@vladikr)(https://github. 
com/kubevirt/kubevirt/pull/673) Add support for PVs via kubelet (@alukiano)(https://github. com/kubevirt/kubevirt/pull/671) Take a look at the pulse, to get an overview over all changes of thisweek: https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 158, + "id": 159, "url": "/2018/This-Week-in-Kube-Virt-17.html", "title": "This Week In Kube Virt 17", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is a weekly update from the KubeVirt team. We are currently driven by Building a solid user-story around KubeVirt Caring about end-to-end (backend, core, ui) Rework our architecture Getting dependencies into shape (storage) Improve the user-experience for users (UI, deployment) Being easier to be used on Kubernetes and OpenShift Over the weekend you could have seen our talks at devconf. cz: “Kubernetes Cloud Autoscaler for IsolatedWorkloads” by @rmohr “Outcast: Virtualization in a containerworld?” by @fabiand Within the last weeks we achieved to: Introduced Fedora Cloud image for testing (@davidvossel)(https://github. com/kubevirt/kubevirt/pull/685) Switch to q35 by default (@mpolednik)(https://github. com/kubevirt/kubevirt/pull/650) In addition to this, we are also working on: Decentralize the architecture (@davidvossel)(https://github. com/kubevirt/kubevirt/pull/663) Decentralized pod networking (@vladikr)(https://github. com/kubevirt/kubevirt/pull/686) Implement VirtualMachinePresets (@stu-gott)(https://github. com/kubevirt/kubevirt/pull/652) Allow deploying OpenShift in vagrant (@alukiano)(https://github. com/kubevirt/kubevirt/pull/631) Expose CPU requirements in VM pod (@vladikr)(https://github. com/kubevirt/kubevirt/pull/673) Adjust uuid API (@mpolednik)(https://github. com/kubevirt/kubevirt/pull/675) Make cirros and alpine ready for q35 (@rmohr)(https://github. com/kubevirt/kubevirt/pull/688) Take a look at the pulse, to get an overview over all changes of thisweek: https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 159, + "id": 160, "url": "/2018/This-Week-in-Kube-Virt-16-size-XL.html", "title": "This Week In Kube Virt 16 Size Xl", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is a weekly update from the KubeVirt team - including the holidaybacklog update. We are currently driven by Building a solid user-story around KubeVirt Caring about end-to-end (backend, core, ui) Rework out architecture Getting dependencies into shape (storage) Improve the user-experience for users (UI, deployment) Being easier to be used on Kubernetes and OpenShift Within the last weeks we achieved to: Drop of HAProxy and redeisng of console access (@davidvossel)(https://github. com/kubevirt/kubevirt/pull/618) Dockerized builds to make sure the build env matches the runtime env(@rmohr and others)(https://github. com/kubevirt/kubevirt/pull/647) OwnerReference fixes (@alukiano)(https://github. com/kubevirt/kubevirt/pull/642) OfflineVirtualMachineDesign documentation (@petrkotas)(https://github. 
com/kubevirt/kubevirt/pull/641) Further RBAC improvements (@gbenhaim)(https://github. com/kubevirt/kubevirt/pull/640) User-Guide The guide saw many updates also for planned stuff Update to reflect v0. 2. 0 changes (@rmohr)(https://github. com/kubevirt/user-guide/pull/12) NodeSelector and affinity (@rmohr)(https://github. com/kubevirt/user-guide/pull/15) Hardware configuration (@rmohr)(https://github. com/kubevirt/user-guide/pull/14) Volumes and disks (@rmohr)(https://github. com/kubevirt/user-guide/pull/13) Cloud-Init (@davidvossel)(https://github. com/kubevirt/user-guide/pull/10) API Reference Now updated regularly (@lukas-bednar)(https://github. com/kubevirt/kubevirt/pull/643)https://kubevirt. io/api-reference/master/definitions. html Demo Got updated to v0. 2. 0 (@fabiand) But an issue with virtctl was introduced https://github. com/kubevirt/demo UI The WIP KubeVirt provider for ManageIQ was showcased(@masayag @pkliczewski) https://github. com/ManageIQ/manageiq-providers-kubevirt/ Video: https://www. youtube. com/watch?v=9Gf2Nv7h558 Screenshot: UI The Cockpit plugin makes some progress (@mlibra) https://github. com/cockpit-project/cockpit/wiki/Feature:-Kubernetes:-KubeVirt-support-enhancements https://github. com/cockpit-project/cockpit/pull/7830 Screenshot: Ansible Move to stable kubevirt release manifests (@gbenhaim)(https://github. com/kubevirt-incubator/kubevirt-ansible/pull/37) Many improvements to make it work seamlessly(@gbenhaim @lukas-bednar) In addition to this, we are also working on: Decentralize the architecture (@davidvossel)(https://github. com/kubevirt/kubevirt/pull/663) Implement VirtualMachinePresets (@stu-gott)(https://github. com/kubevirt/kubevirt/pull/652) virtctl fixes (@davidvossel and @awels)(https://github. com/kubevirt/kubevirt/pull/648) Move to q35 machine type (@mpolednik)(https://github. com/kubevirt/kubevirt/pull/650) Allow deploying OpenShift in vagrant (@alukiano)(https://github. com/kubevirt/kubevirt/pull/631) User-Guide: Offline Virtual Machine docs (@petrkotas)(https://github. com/kubevirt/user-guide/pull/9) Persistent Virtual Machines (@stu-gott)(https://github. com/kubevirt/user-guide/pull/11) Storage Working on enabling PV cloning using PVannotations (@aglitke)(https://github. com/aglitke/external-storage/tree/clone-poc) Working on optimizing Gluster for in-cluster storage Working on the ability to simplify VM image uploads Take a look at the pulse, to get an overview over all changes of thisweek: https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 160, + "id": 161, "url": "/2018/This-Week-in-Kube-Virt-16-Holiday-Wrap-Up-Edition.html", "title": "This Week In Kube Virt 16 Holiday Wrap Up Edition", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is a weekly update from the KubeVirt team - including the holidaybacklog update. We are currently driven by Being easier to be used on Kubernetes and OpenShift Rework out architecture Getting dependencies into shape (storage) Improve the user-experience for users (UI, deployment) Within the last weeks we achieved to: Drop of HAProxy and redeisng of console access (@davidvossel)(https://github. 
com/kubevirt/kubevirt/pull/618) Dockerized builds to make sure the build env matches the runtime env(@rmohr and others)(https://github. com/kubevirt/kubevirt/pull/647) OwnerReference fixes (@alukiano)(https://github. com/kubevirt/kubevirt/pull/642) OfflineVirtualMachineDesign documentation (@petrkotas)(https://github. com/kubevirt/kubevirt/pull/641) Further RBAC improvements (@gbenhaim)(https://github. com/kubevirt/kubevirt/pull/640) User-Guide The guide saw many updates also for planned stuff Update to reflect v0. 2. 0 changes (@rmohr)(https://github. com/kubevirt/user-guide/pull/12) NodeSelector and affinity (@rmohr)(https://github. com/kubevirt/user-guide/pull/15) Hardware configuration (@rmohr)(https://github. com/kubevirt/user-guide/pull/14) Volumes and disks (@rmohr)(https://github. com/kubevirt/user-guide/pull/13) Cloud-Init (@davidvossel)(https://github. com/kubevirt/user-guide/pull/10) API Reference Now updated regularly (@lukas-bednar)(https://github. com/kubevirt/kubevirt/pull/643)https://kubevirt. io/api-reference/master/definitions. html Demo Got updated to v0. 2. 0 (@fabiand) But an issue with virtctl was introduced https://github. com/kubevirt/demo UI The WIP KubeVirt provider for ManageIQ was showcased(@masayag @pkliczewski) https://github. com/ManageIQ/manageiq-providers-kubevirt/ https://www. youtube. com/watch?v=9Gf2Nv7h558 UI The Cockpit plugin makes some progress (@mlibra): Ansible Move to stable kubevirt release manifests (@gbenhaim)(https://github. com/kubevirt-incubator/kubevirt-ansible/pull/37) Many improvements to make it work seamlessly(@gbenhaim @lukas-bednar) In addition to this, we are also working on: Decentralize the architecture (@davidvossel)(https://github. com/kubevirt/kubevirt/pull/663) Implement VirtualMachinePresets (@stu-gott)(https://github. com/kubevirt/kubevirt/pull/652) virtctl fixes (@davidvossel and @awels)(https://github. com/kubevirt/kubevirt/pull/648) Move to q35 machine type (@mpolednik)(https://github. com/kubevirt/kubevirt/pull/650) Allow deploying OpenShift in vagrant (@alukiano)(https://github. com/kubevirt/kubevirt/pull/631) User-Guide: Offline Virtual Machine docs (@petrkotas)(https://github. com/kubevirt/user-guide/pull/9) Persistent Virtual Machines (@stu-gott)(https://github. com/kubevirt/user-guide/pull/11) Take a look at the pulse, to get an overview over all changes of thisweek: https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 161, + "id": 162, "url": "/2018/Some-notes-on-some-highlights-of-v020.html", "title": "Some Notes On Some Highlights Of V020", "author" : "fabiand", "tags" : "release notes, hilights", "body": "The very first KubeVirt release of KubeVirt in the new year(https://github. com/kubevirt/kubevirt/releases/v0. 2. 0) had a fewnotable highlights which were brewing over the last few weeks. VirtualMachine API redesignPreviously the VirtualMachine API was pretty much aligned, or a 1:1mapping, to libvirt’s domxml. With this change however, we took a stepback and redesigned the API to be more Kubernet-ish than libvirt-ish. Some changes, like the extraction of source volumes, will actually helpus to implement other patterns - like VirtualMachinePresets. Removal of HAProxyThis is another nice one. 
So far we were using a custom API server forperforming object validation. But the use of this custom API serverrequired that the client was accessing the custom API server, and notthe main one. The multiplexing of redirecting certain requests to ourand other requests to the main API server was done by HA proxy. Somewhatlike a poor mans API server aggregation. However, now we are focusing on CRDs completely (we considered to go toAPI server aggregation, but dropped this approach), which involves doingthe validation of the CRD based on a json scheme. Redesign of VNC/Console accessThe afore mentioned custom API server contained subresources to permitinbound access to the graphical and serial consoel of VMs. But this doesnot work with CRDs and thus we are now using a different approach toprovide access to those. The new implementation leverages the kubectl exec path in order topipe the graphical and serial console from the VM to the client. This ispretty nice, as we are leveraging Kubernetes for doing the piping, wemerely provide a kubectl plugin in order to ease the consumption ofthis. Side note is that the API of the kubectl plugin did actually notchange. " }, { - "id": 162, + "id": 163, "url": "/2018/Kube-Virt-v020.html", "title": "Kube Virt v0.2.0", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This release follows v0. 1. 0 and consists of 131 changes, contributed by6 people, leading to 148 files changed, 9096 insertions(+), 5871deletions(-). The source code and selected binaries are available for download at:https://github. com/kubevirt/kubevirt/releases/tag/v0. 2. 0. The primary release artifact of KubeVirt is the git tree. The releasetag is signed and can be verified using [git-evtag][git-evtag]. Pre-built containers are published on Docker Hub and can be viewed at:https://hub. docker. com/u/kubevirt/. Notable changes VM launch and shutdown flow improvements VirtualMachine API redesign Removal of HAProxy Redesign of VNC/Console access Initial support for different vagrant providers Contributors6 people contributed to this release: 65 Roman Mohr <rmohr@redhat. com>60 David Vossel <dvossel@redhat. com> 2 Fabian Deutsch <fabiand@redhat. com> 2 Stu Gott <sgott@redhat. com> 1 Marek Libra <mlibra@redhat. com> 1 Martin Kletzander <mkletzan@redhat. com>Test Results Ran 40 of 42 Specs in 703. 532 seconds SUCCESS! — 40 Passed 0 Failed     0 Pending 2 Skipped PASS Additional Resources Mailing list: https://groups. google. com/forum/#!forum/kubevirt-dev IRC: <irc://irc. freenode. net/#kubevirt> An easy to use demo: https://github. com/kubevirt/demo [How to contribute][contributing] [License][license] [git-evtag]: https://github. com/cgwalters/git-evtag#using-git-evtag[contributing]:https://github. com/kubevirt/kubevirt/blob/main/CONTRIBUTING. md[license]: https://github. com/kubevirt/kubevirt/blob/main/LICENSE " }, { - "id": 163, + "id": 164, "url": "/2017/This-Week-in-Kube-Virt-15.html", "title": "This Week In Kube Virt 15", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is a weekly update from the KubeVirt team. We are currently driven by Being easier to be used on Kubernetes and OpenShift Enabling people to contribute Streamlining and improving the Kubernetes experience This week we achieved to: VM Grace period and shutdown improvements (@davidvossel)(https://github. com/kubevirt/kubevirt/pull/526)On another side we were also successful in: Initiating a Kubernetes WG Virtualization mainlinglist:https://groups. google. 
com/forum/#!forum/kubernetes-wg-virtualization Triggering a #virtualization slack channel in the Kubernetes org In addition to this, we are also working on quite a few things: Serial console rework (@davidvossel)(https://github. com/kubevirt/kubevirt/pull/613) VM API Redesign (@rmohr)(https://github. com/kubevirt/kubevirt/pull/606) Add OpenShift support (@karimb)(https://github. com/kubevirt/kubevirt/pull/608,https://github. com/kubevirt-incubator/kubevirt-ansible/pull/29) Ansible Broker support (@gbenhaim)(https://github. com/kubevirt-incubator/kubevirt-ansible/pull/30) Improve development builds (@petrkotas)(https://github. com/kubevirt/kubevirt/pull/609) - Take a look atthe pulse, to get an overview over all changes of this week:https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 164, + "id": 165, "url": "/2017/This-Week-in-Kube-Virt-14.html", "title": "This Week In Kube Virt 14", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is a weekly update from the KubeVirt team. We are currently driven by Being easier to be used on Kubernetes and OpenShift Enabling people to contribute This week you could have met us at: KubeCon NA: Virtualizing Workloads Saloon(https://kccncna17. sched. com/event/43ebdf89846d7f4939810bbaeb5a3229)Some minutes athttps://docs. google. com/document/d/1B3zbJA0MTQ82yu2JNMREEiVaQJTG3PfGBfdogsFISBE/editThis week we achieved to: Release KubeVirt v0. 1. 0(https://github. com/kubevirt/kubevirt/releases/tag/v0. 1. 0) Improve the manifest situation (@rmohr)(https://github. com/kubevirt/kubevirt/pull/602) In addition to this, we are also working on: OpenAPI improvements (@lukas-bednar)(https://github. com/kubevirt/kubevirt/pull/603) Describe how device assignment can work (@mpolednik)(https://github. com/kubevirt/kubevirt/pull/593) VM Unknown state (@rmohr)(https://github. com/kubevirt/kubevirt/issues/543) Take a look at the pulse, to get an overview over all changes of thisweek: https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 165, + "id": 166, "url": "/2017/Kube-Virt-v010.html", "title": "Kube Virt v0.1.0", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This release follows v0. 0. 4 and consists of 115 changes, contributed by11 people, leading to 121 files changed, 5278 insertions(+), 1916deletions(-). The source code and selected binaries are available for download at:https://github. com/kubevirt/kubevirt/releases/tag/v0. 1. 0. The primary release artifact of KubeVirt is the git tree. The releasetag is signed and can be verified using [git-evtag][git-evtag]. Pre-built containers are published on Docker Hub and can be viewed at:https://hub. docker. com/u/kubevirt/. 
Notable changes Many API improvements for a proper OpenAPI reference Add watchdog support Drastically improve the deployment on non-vagrant setups Dropped nodeSelectors Separated inner component deployment from edge component deployment Created separate manifests for developer, test, and releasedeployments Moved komponents to kube-system namespace Improved and unified flag parsing Contributors11 people contributed to this release: 42 Roman Mohr <rmohr@redhat. com>20 David Vossel <dvossel@redhat. com>18 Lukas Bednar <lbednar@redhat. com>14 Martin Polednik <mpolednik@redhat. com> 7 Fabian Deutsch <fabiand@redhat. com> 6 Lukianov Artyom <alukiano@redhat. com> 3 Vladik Romanovsky <vromanso@redhat. com> 2 Petr Kotas <petr. kotas@gmail. com> 1 Barak Korren <bkorren@redhat. com> 1 Francois Deppierraz <francois@ctrlaltdel. ch> 1 Saravanan KR <skramaja@redhat. com>Test Results Ran 44 of 46 Specs in 851. 185 seconds SUCCESS! — 44 Passed 0 Failed     0 Pending 2 Skipped PASS Additional Resources Mailing list: https://groups. google. com/forum/#!forum/kubevirt-dev IRC: <irc://irc. freenode. net/#kubevirt> An easy to use demo: https://github. com/kubevirt/demo [How to contribute][contributing] [License][license] [git-evtag]: https://github. com/cgwalters/git-evtag#using-git-evtag[contributing]:https://github. com/kubevirt/kubevirt/blob/main/CONTRIBUTING. md[license]: https://github. com/kubevirt/kubevirt/blob/main/LICENSE " }, { - "id": 166, + "id": 167, "url": "/2017/This-Week-in-Kube-Virt-13.html", "title": "This Week In Kube Virt 13", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is a weekly update from the KubeVirt team. We are currently driven by Being easier to be used on Kubernetes and OpenShift Enabling people to contribute This week you can meet us at: KubeCon NA: Virtualizing Workloads Saloon(https://kccncna17. sched. com/event/43ebdf89846d7f4939810bbaeb5a3229)This week we still achieved to: Owner References for VM ReplicaSet (@rmohr)(https://github. com/kubevirt/kubevirt/pull/596)In addition to this, we are also working on: Manifest refactoring (@rmohr)(https://github. com/kubevirt/kubevirt/pull/602) OpenAPI improvements (@lukas-bednar)(https://github. com/kubevirt/kubevirt/pull/603) Describe how device assignment can work (@mpolednik)(https://github. com/kubevirt/kubevirt/pull/593) VM Unknown state (@rmohr)(https://github. com/kubevirt/kubevirt/issues/543) Take a look at the pulse, to get an overview over all changes of thisweek: https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 167, + "id": 168, "url": "/2017/This-Week-in-Kube-Virt-12.html", "title": "This Week In Kube Virt 12", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is a weekly update from the KubeVirt team. We are currently driven by Being easier to be used on Kubernetes and OpenShift Enabling people to contribute This week we was really slow, but we still achieved to: Improve vagrant setup (@cynepco3hahue)(https://github. com/kubevirt/kubevirt/pull/586)In addition to this, we are also working on: GlusterFS support (@humblec)(https://github. com/kubevirt/kubevirt/pull/578) Describe how device assignment can work (@mpolednik)(https://github. 
com/kubevirt/kubevirt/pull/593) Take a look at the pulse, to get an overview over all changes of thisweek: https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 168, + "id": 169, "url": "/2017/This-Week-in-Kube-Virt-11.html", "title": "This Week In Kube Virt 11", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is a weekly update from the KubeVirt team. We are currently driven by Being easier to be used on Kubernetes and OpenShift Enabling people to contribute This week we achieved to: Generation of API documentation (@lukas-bednar)(https://github. com/kubevirt/kubevirt/pull/571)(https://kubevirt. io/api-reference/master/definitions. html) Move components to kube-system namespace (@cynepco3hahue)(https://github. com/kubevirt/kubevirt/pull/558) Use glide again (@davidvossel)(https://github. com/kubevirt/kubevirt/pull/576) In addition to this, we are also working on: GlusterFS support (@humblec)(https://github. com/kubevirt/kubevirt/pull/578)Take a look at the pulse, to get an overview over all changes of thisweek: https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 169, + "id": 170, "url": "/2017/This-Week-in-Kube-Virt-10-base-10.html", "title": "This Week In Kube Virt 10 Base 10", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is a weekly update from the KubeVirt team. We are currently driven by Being easier to be used on Kubernetes and OpenShift Enabling people to contribute Node Isolator use-case (more informations soon) Non-code wise this week The KVM Forum recording was published: “Running Virtual Machines onKubernetes with libvirt & KVM by Fabian Deutsch & Roman Mohr”(https://www. youtube. com/watch?v=Wh-ejUyuHJ0) Preparing the “virtualization saloon” at KubeCon NA(https://kccncna17. sched. com/event/CU8m) This week we achieved to: Further improve API documentation (@lukas-bednar)(https://github. com/kubevirt/kubevirt/pull/549) Virtual Machine watchdog device support (@davidvossel)(https://github. com/kubevirt/kubevirt/pull/544) Introduction of virt-dhcp (@vladikr)(https://github. com/kubevirt/kubevirt/pull/525) Less specific manifests(https://github. com/kubevirt/kubevirt/pull/560) (@fabiand) In addition to this, we are also working on: Addition of more tests to pod networking (@vladikr)(https://github. com/kubevirt/kubevirt/pull/525) Adding helm charts (@cynepco3hahue)(https://github. com/kubernetes/charts/pull/2669) Move manifests to kube-system namespace (@cynepco3hahue)(https://github. com/kubevirt/kubevirt/pull/558) Drafting the publishing of API docs (@lukas-bednar)(https://github. com/kubevirt-incubator/api-reference)(https://kubevirt. io/api-reference/master/definitions. html) Take a look at the pulse, to get an overview over all changes of thisweek: https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. 
com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 170, + "id": 171, "url": "/2017/Kube-Virt-v004.html", "title": "Kube Virt v0.0.4", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This release follows v0. 0. 3 and consists of 133 changes, contributed by14 people, leading to 109 files changed, 7093 insertions(+), 2437deletions(-). The source code and selected binaries are available for download at:https://github. com/kubevirt/kubevirt/releases/tag/v0. 0. 4. The primary release artifact of KubeVirt is the git tree. The releasetag is signed and can be verified using [git-evtag][git-evtag]. Pre-built containers are published on Docker Hub and can be viewed at:https://hub. docker. com/u/kubevirt/. Notable changes Add support for node affinity to VM. Spec Add OpenAPI specification Drop swagger 1. 2 specification virt-launcher refactoring Leader election mechanism for virt-controller Move from glide to dep for dependency management Improve virt-handler synchronization loops Add support for running the functional tests on oVirt infrastructure Several tests fixes (spice, cleanup, …​) Add console test tool Improve libvirt event notification Contributors14 people contributed to this release: 46 David Vossel <dvossel@redhat. com>46 Roman Mohr <rmohr@redhat. com>12 Lukas Bednar <lbednar@redhat. com>11 Lukianov Artyom <alukiano@redhat. com> 4 Martin Sivak <msivak@redhat. com> 4 Petr Kotas <pkotas@redhat. com> 2 Fabian Deutsch <fabiand@redhat. com> 2 Milan Zamazal <mzamazal@redhat. com> 1 Artyom Lukianov <alukiano@redhat. com> 1 Barak Korren <bkorren@redhat. com> 1 Clifford Perry <coperry94@gmail. com> 1 Martin Polednik <mpolednik@redhat. com> 1 Stephen Gordon <sgordon@redhat. com> 1 Stu Gott <sgott@redhat. com>Test Results Ran 45 of 47 Specs in 797. 286 seconds SUCCESS! — 45 Passed 0 Failed     0 Pending 2 Skipped PASS Additional Resources Mailing list: https://groups. google. com/forum/#!forum/kubevirt-dev IRC: <irc://irc. freenode. net/#kubevirt> An easy to use demo: https://github. com/kubevirt/demo [How to contribute][contributing] [License][license] [git-evtag]: https://github. com/cgwalters/git-evtag#using-git-evtag[contributing]:https://github. com/kubevirt/kubevirt/blob/main/CONTRIBUTING. md[license]: https://github. com/kubevirt/kubevirt/blob/main/LICENSE " }, { - "id": 171, + "id": 172, "url": "/2017/This-Week-in-Kube-Virt-9.html", "title": "This Week In Kube Virt 9", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is a weekly update from the KubeVirt team. We are currently driven by Being easier to be used on Kubernetes and OpenShift Enabling people to contribute Node Isolator use-case (more informations soon) This week we achieved to: Release KubeVirt v0. 0. 4(https://github. com/kubevirt/kubevirt/releases/tag/v0. 0. 4) virt-handler refactoring (@rmohr)(https://github. com/kubevirt/kubevirt/pull/530) Add support for running functional tests on oVirt infrastructure(@bkorren) (https://github. com/kubevirt/kubevirt/pull/379) Add OpenAPI specification (@lbednar)(https://github. com/kubevirt/kubevirt/pull/535) Consolidate console functional tests (@dvossel)(https://github. com/kubevirt/kubevirt/pull/541) Improve libvirt event notification (@rmohr)(https://github. com/kubevirt/kubevirt/pull/351) In addition to this, we are also working on: Addition of more tests to pod networking (@vladikr)(https://github. com/kubevirt/kubevirt/pull/525) Watchdog support (@dvossel)(https://github. 
com/kubevirt/kubevirt/pull/544) Leveraging ingress (@fabiand)(https://github. com/kubevirt/kubevirt/pull/538) Adding helm charts (@cynepco3hahue)(https://github. com/kubernetes/charts/pull/2669) Take a look at the pulse, to get an overview over all changes of thisweek: https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 172, + "id": 173, "url": "/2017/This-Week-in-Kube-Virt-8.html", "title": "This Week In Kube Virt 8", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is a weekly update from the KubeVirt team. We are currently driven by Being easier to be used on Kubernetes and OpenShift Enabling people to contribute Node Isolator use-case (more informations soon) This week we achieved to: Present at KVM Forum, Prague (@rmohr, @fabiand)http://slides. com/fabiand/running-virtual-machines-on-kubernetes-at-kvm-forum-2017# Proposal on how to construct the VM API (@rmohr, @michalskrivanek)https://github. com/kubevirt/kubevirt/pull/466 Pod deletion improvements (@davidvossel)https://github. com/kubevirt/kubevirt/pull/531 In addition to this, we are also working on: Addition of more tests to pod networking (@vladikr)(https://github. com/kubevirt/kubevirt/pull/525) Access to the node control network (@rmohr)(https://github. com/kubevirt/kubevirt/pull/499) Custom VM metrics discussion (@fromanirh)(https://github. com/kubevirt/kubevirt/pull/487) Simple persistence mechanism documentation (@mpolednik)(https://github. com/kubevirt/user-guide/pull/6) Take a look at the pulse, to get an overview over all changes of thisweek: https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 173, + "id": 174, "url": "/2017/This-Week-in-Kube-Virt-7.html", "title": "This Week In Kube Virt 7", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is the seventh weekly update from the KubeVirt team. This week you can read more or speak to us at: KVM Forum, Prague Thursday, October 26, 10:00 - 10:45https://kvmforum2017. sched. com/event/BnoA “KubeWHAT?” by S Gordon - On KubeVirt and OpenStack (past event)https://www. slideshare. net/sgordon2/kubewhat We are currently driven by Being easier to be used on Kubernetes and OpenShift Enabling people to contribute Node Isolator use-case (more informations soon) This week we achieved to: VMs and components are now running in the host pid namespace(@dvossel) https://github. com/kubevirt/kubevirt/pull/506 Move dependency management from glide to dep ()https://github. com/kubevirt/kubevirt/pull/511 Add a leader election mechanism to virt-controller (@)https://github. com/kubevirt/kubevirt/pull/461 Add OpenAPI specification (@)https://github. com/kubevirt/kubevirt/pull/494 Put work on api server aggregation on hold for now (@stu-gott) To beresolved: API server storage(https://github. com/kubevirt/kubevirt/pull/355) In addition to this, we are also working on: Finalization of pod networking (@vladikr)(https://github. 
com/kubevirt/kubevirt/pull/525) Access to the node control network (@rmohr)(https://github. com/kubevirt/kubevirt/pull/499) Custom VM metrics discussion (@fromanirh)(https://github. com/kubevirt/kubevirt/pull/487) Simple persistence mechanism (@petrkotas)(https://github. com/petrkotas/virt-vmconfig-crd/) Take a look at the pulse, to get an overview over all changes of thisweek: https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 174, + "id": 175, "url": "/2017/This-Week-in-Kube-Virt-6.html", "title": "This Week In Kube Virt 6", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is the sixth weekly update from the KubeVirt team. This week you could watch us at: Kubernetes Community Meeting introducing and demoing KubeVirt:https://www. youtube. com/watch?v=oBhu1MeGbss Or follow us at our new blog: https://kubevirt. github. io/blogs/ We are currently driven by Being easier to be used on Kubernetes and OpenShift Enabling people to contribute Node Isolator use-case (more informations soon) This week we achieved to: Add support for node affinity to VM. Spec (@MarSik)(https://github. com/kubevirt/kubevirt/pull/446)In addition to this, we are also working on: Access to the node control network (@rmohr)(https://github. com/kubevirt/kubevirt/pull/499) Custom VM metrics discussion (@fromanirh)(https://github. com/kubevirt/kubevirt/pull/487) Continued work on api server aggregation (@stu-gott)(https://github. com/kubevirt/kubevirt/pull/355) Revived VM Config discussion (@mpolednik)(https://github. com/kubevirt/kubevirt/pull/408) Take a look at the pulse, to get an overview over all changes of thisweek: https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 175, + "id": 176, "url": "/2017/This-Week-in-Kube-Virt-5.html", "title": "This Week In Kube Virt 5", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is the fith weekly update from the KubeVirt team. We are currently driven by Being easier to be used on Kubernetes and OpenShift Enabling people to contribute Node Isolator use-case (more informations soon) This week we achieved to: Improved sagger documentation (for SDK generation) (@lukas-bednar)(https://github. com/kubevirt/kubevirt/pull/476) Kubernetes 1. 8 fixes (@cynepco3hahue)(https://github. com/kubevirt/kubevirt/pull/479https://github. com/kubevirt/kubevirt/pull/484) Ephemeral disk rewrite (@davidvossel)(https://github. com/kubevirt/kubevirt/pull/460) Custom VM metrics proposal (@fromanirh )(https://github. com/kubevirt/kubevirt/pull/487) [WIP] Add API server PKI tool (@jhernand)(https://github. com/kubevirt/kubevirt/pull/498) KubeVirt provider for the cluster autoscaler (@rmohr)(https://github. com/rmohr/autoscaler/pull/1) In addition to this, we are also working on: Finally some good progress with layer 3 network connectivity(@vladikr) (https://github. com/kubevirt/kubevirt/pull/450https://github. com/vladikr/kubevirt/tree/veth-bridge-taphttps://github. 
com/vladikr/kubevirt/tree/veth-macvtap) Continued work on api server aggregation (@stu-gott)(https://github. com/kubevirt/kubevirt/pull/355) Take a look at the pulse, to get an overview over all changes of thisweek: https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 176, + "id": 177, "url": "/2017/This-Week-in-Kube-Virt-4.html", "title": "This Week In Kube Virt 4", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is the fourth weekly update from the KubeVirt team. We are currently driven by Being easier to be used on Kubernetes and OpenShift Enabling people to contribute Node Isolator use-case (more informations soon) This week you can find us at: Ohio Linux Fest (@stu-gott) “KubeVirt, Virtual Machine ManagementUsing Kubernetes” https://ohiolinux. orgThis week we achieved to: ReplicaSet for VirtualMachines (@rmohr)(https://github. com/kubevirt/kubevirt/pull/453) Swagger documentation improvements (@rmohr, @lukas-bednar)(https://github. com/kubevirt/kubevirt/pull/475) Hot-standby for our controller (@cynepco3hahue)(https://github. com/kubevirt/kubevirt/pull/461) domxml/VM Spec mapping rules proposal (@rmohr, @michalskrivanek)(https://github. com/kubevirt/kubevirt/pull/466) Launch flow improvement proposal (@davidvossel)(https://github. com/kubevirt/kubevirt/pull/469) In addition to this, we are also working on: Debug layer 3 network connectivity issues for VMs (@vladikr)(https://github. com/kubevirt/kubevirt/pull/450) Review of the draft code for the api server aggregation (@stu-gott)(https://github. com/kubevirt/kubevirt/pull/355) Take a look at the pulse, to get an overview over all changes of thisweek: https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 177, + "id": 178, "url": "/2017/This-Week-in-Kube-Virt-3.html", "title": "This Week In Kube Virt 3", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is the third weekly update from the KubeVirt team. We are currently driven by Being easier to be used on Kubernetes and OpenShift Enabling people to contribute Node Isolator use-case (more informations soon) This week we achieved to: Renamed VM kind to VirtualMachine (@cynepco3hahue)(https://github. com/kubevirt/kubevirt/pull/452) Proposal for VirtualMachineReplicaSet to scale VMs (@rmohr)(https://github. com/kubevirt/kubevirt/pull/453) Ephemeral Registry Disk Rewrite (@vossel)(https://github. com/kubevirt/kubevirt/pull/460) Fix some race in our CI (@rmohr)(https://github. com/kubevirt/kubevirt/pull/459) In addition to this, we are also working on: Review of the draft code to get layer 3 network connectivity for VMs(@vladikr) (https://github. com/kubevirt/kubevirt/pull/450) Review of the draft code for the api server aggregation (@stu-gott)(https://github. com/kubevirt/kubevirt/pull/355) Review of the proposal integrate with host networking (@rmohr)(https://github. 
com/kubevirt/kubevirt/pull/367) Converging multiple ansible playbooks for deployment on OpenShift(@petrkotas, @cynepco3hahue, @lukas-bednar)(https://github. com/kubevirt-incubator/kubevirt-ansible) Continued discussion of VM persistence and ABI stability(https://groups. google. com/d/topic/kubevirt-dev/G0FpxJYFhf4/discussion) Take a look at the pulse, to get an overview over all changes of thisweek: https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 178, + "id": 179, "url": "/2017/This-Week-in-Kube-Virt-2.html", "title": "This Week In Kube Virt 2", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is the second weekly update from the KubeVirt team. We are currently driven by Being easier to be used on Kubernetes and OpenShift Enabling people to contribute This week we achieved to: Keep cloud-init data in Secrets (@vossel)(https://github. com/kubevirt/kubevirt/pull/433) First draft code to get layer 3 network connectivity for VMs(@vladikr) (https://github. com/kubevirt/kubevirt/pull/450) First draft code for the api server aggregation (@stu-gott)(https://github. com/kubevirt/kubevirt/pull/355) Add further migration documentation (@rmohr)(https://github. com/kubevirt/user-guide/pull/1) In addition to this, we are also working on: Progress on how to integrate with host networking (@rmohr)(https://github. com/kubevirt/kubevirt/pull/367) Converging multiple ansible playbooks for deployment on OpenShift(@petrkotas, @cynepco3hahue, @lukas-bednar)(https://github. com/kubevirt-incubator/kubevirt-ansible) Initial support for Anti- & Affinity for VMs (@MarSik)(https://github. com/kubevirt/kubevirt/issues/438) Initial support for memory and cpu mapping (@MarSik)(https://github. com/kubevirt/kubevirt/pull/388) Discussing VM persistence and ABI stability(https://groups. google. com/d/topic/kubevirt-dev/G0FpxJYFhf4/discussion) Take a look at the pulse, to get an overview over all changes of thisweek: https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues And keep track of events at our calendar18pc0jur01k8f2cccvn5j04j1g@group. calendar. google. com If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt> " }, { - "id": 179, + "id": 180, "url": "/2017/This-Week-in-Kube-Virt-1.html", "title": "This Week In Kube Virt 1", "author" : "fabiand", "tags" : "release notes, changelog", "body": "This is the first weekly update from the KubeVirt team. We are currently driven by Being easier to consume on Kubernetes and OpenShiftThis week we achieved to merge a design for cloud-init support(https://github. com/kubevirt/kubevirt/pull/372) release KubeVirt v0. 0. 2(https://github. com/kubevirt/kubevirt/releases/tag/v0. 0. 2) Minikube based demo (https://github. com/kubevirt/demo) OpenShift Community presentation(https://www. youtube. com/watch?v=IfuL2rYhMKY) In addition to this, we are also working on: Support stock Kubernetes networking(https://github. com/kubevirt/kubevirt/issues/261) Move to a custom API Server suitable for API Server aggregation(https://github. com/kubevirt/kubevirt/issues/205) Writing a user facing getting started guide(https://github. 
com/kubevirt/kubevirt/issues/410) Ansible playbooks for deployment on OpenShift Take a look at the pulse, to get an overview over all changes of thisweek: https://github. com/kubevirt/kubevirt/pulse Finally you can view our open issues athttps://github. com/kubevirt/kubevirt/issues If you need some help or want to chat you can find us on<irc://irc. freenode. net/#kubevirt>. " }, { - "id": 180, + "id": 181, "url": "/2017/technology-comparison.html", "title": "Comparing KubeVirt to other technologies", "author" : "Fabian Deutsch", "tags" : "KubeVirt, ClearContainers, virtlet, CRI, OpenStack, ovirt", "body": "Is KubeVirt a replacement for $MYVMMGMTSYSTEM?: Maybe. The primary goal of KubeVirt is to allow running virtual machines ontop of Kubernetes. It’s focused on the virtualization bits. General virtualization management systems like i. e. OpenStack or oVirt usuallyconsist of some additional services which take care of i. e. network management,host provisioning, data warehousing, just to name a few. These services are outof scope of KubeVirt. That being said, KubeVirt is intended to be part of a virtualization managementsystem. It can be seen as an VM cluster runtime, and additional componentsprovide additional functionality to provide a nice coherent user-experience. Is KubeVirt like ClearContainers?: No. ClearContainersare about using VMs to isolate pods or containers on the container runtimelevel. KubeVirt on the other hand is about allowing to manage virtual machines on acluster level. But beyond that it’s also how virtual machines are exposed. ClearContainers hide the fact that a virtual machine is used, but KubeVirt ishighly interested in providing an API to configure a virtual machine. Is KubeVirt like virtlet?: Somewhat. virtlet is a CRIimplementation to run virtual machines instead of containers. The key differences to KubeVirt are: It’s a CRI. This implies that the VM runtime is on the host, and that thekubelet is configured to use it. KubeVirt on the other hand can be deployed as a native Kubernetes add-on. Pod API. The virtlet is using a Pod API to specify the VM. Certainfields like i. e. volumes are mapped to the corresponding VM functionality. This is problematic, there are many details to VMs which can not be mappedto a Pod counterpart. Eventually annotations can be used to cover thoseproperties. KubeVirt on the other hand exposes a VM specific API, which tries to coverall properties of a VM. Why Kubernetes and not bringing containers to OpenStack or oVirt ?: We think that Container workloads are the future. Therefore we want to add VMsupport on top of a container management system instead of building containersupport into a VM management system. " }, { - "id": 181, + "id": 182, "url": "/2017/role-of-libvirt.html", "title": "The Role of LibVirt", "author" : "Fabian Deutsch", "tags" : "libvirt", "body": "Libvirt project. Can I perform a 1:1 translation of my libvirt domain xml to a VM Spec?: Probably not, libvirt is intended to be run on a host and the domain XML isbased on this assumption, this implies that the domain xml allows you to accesshost local resources i. e. local paths, host devices, and host deviceconfigurations. A VM Spec on the other hand is designed to work with cluster resources. And itdoes not permit to address host resources. Does a VM Spec support all features of libvirt?: No, libvirt has a wide range of features, reaching beyond pure virtualizationfeatures, into host, network, and storage management. 
The API was driven by therequirements of running virtualization on a host. A VM Spec however is a VM definition on the cluster level, this by itselfmeans that the specification has different requirements, i. e. it also needs toinclude scheduling information and KubeVirt specifically builds on Kubernetes, which allows it to reuse thesubsystems for consuming network and storage, which on the other hand meansthat the corresponding libvirt features will not be exposed. " }, { - "id": 182, + "id": 183, "url": "/galleries/2020-01-31-DevConfCZ2020-in-pictures", "title": "DevConf.cz 2020 in pictures", "author" : "Pablo Iranzo Gómez", "tags" : "", "body": "Here are some of the pictures of KubeVirt presence at DevConf. cz 2020. " }, { - "id": 183, + "id": 184, "url": "/galleries/2020-02-03-Fosdem2020-communty-presence", "title": "FOSDEM 2020 in pictures", "author" : "Pablo Iranzo Gómez", "tags" : "", "body": "Here are some of the pictures of KubeVirt presence at FOSDEM 2020. " }, , { - "id": 184, + "id": 185, "url": "/pages/alicloud", "title": "Easy install using AliCloud", "author" : "", "tags" : "", "body": " - " }, , , { - "id": 185, + "id": 186, "url": "/pages/azure", "title": "Easy install using Azure", "author" : "", "tags" : "", "body": " - " }, , , { - "id": 186, + "id": 187, "url": "/pages/cloud", "title": "Easy install on cloud providers", "author" : "", "tags" : "", "body": " - " }, , { - "id": 187, + "id": 188, "url": "/category/community.html", "title": "Community", "author" : "", "tags" : "", "body": " - " }, , { - "id": 188, + "id": 189, "url": "/blogs/community", "title": "Community", "author" : "", "tags" : "", - "body": " - Blogs Categories: News Weekly Updates Releases Uncategorized Additional filters: Grouped by Date " + "body": " - Blogs Categories: News Weekly Updates Releases Uncategorized Additional filters: Grouped by Date " }, , , , { - "id": 189, + "id": 190, "url": "/blogs/date", "title": "Grouped by Date", "author" : "", "tags" : "", - "body": " - Blogs Categories: News Weekly Updates Releases Uncategorized Additional filters: Grouped by Date Post calendar: JanFebMarAprMayJunJulAugSepOctNovDec2024  2         2023  3 1 3 2 2 20223112121311  20212123112222 12020352432321214201932222253274420185234863134422017      2 4454 2024: March: 📅 19: KubeVirt Summit 2024 CfP is open! 📅 05: KubeVirt v1. 2. 0 2023: November: 📅 07: Announcing KubeVirt v1. 1 📅 06: KubeVirt v1. 1. 0 September: 📅 06: Running KubeVirt with Cluster Autoscaler 📅 05: Managing KubeVirt VMs with Ansible July: 📅 24: NetworkPolicies for KubeVirt VMs secondary networks using OVN-Kubernetes 📅 11: KubeVirt v1. 0 has landed! 📅 06: KubeVirt v1. 0. 0 May: 📅 31: Secondary networks connected to the physical underlay for KubeVirt VMs using OVN-Kubernetes March: 📅 06: Secondary networks for KubeVirt VMs using OVN-Kubernetes 📅 03: KubeVirt Summit 2023! 📅 01: KubeVirt v0. 59. 0 2022: October: 📅 13: KubeVirt v0. 58. 0 September: 📅 12: KubeVirt v0. 57. 0 August: 📅 18: KubeVirt v0. 56. 0 📅 12: Simplifying KubeVirt's `VirtualMachine` UX with Instancetypes and Preferences 📅 02: KubeVirt: installing Microsoft Windows 11 from an ISO July: 📅 14: KubeVirt v0. 55. 0 June: 📅 28: KubeVirt at KubeCon EU 2022 📅 08: KubeVirt v0. 54. 0 May: 📅 09: KubeVirt v0. 53. 0 April: 📅 08: KubeVirt v0. 52. 0 📅 03: Load-balancer for virtual machines on bare metal Kubernetes clusters March: 📅 08: KubeVirt v0. 51. 0 February: 📅 09: KubeVirt v0. 50. 0 January: 📅 25: Dedicated migration network in KubeVirt 📅 24: KubeVirt Summit is coming back! 
📅 11: KubeVirt v0. 49. 0 2021: December: 📅 06: KubeVirt v0. 48. 0 October: 📅 13: Running real-time workloads with improved performance 📅 08: KubeVirt v0. 46. 0 September: 📅 21: Import AWS AMIs as KubeVirt Golden Images 📅 08: KubeVirt v0. 45. 0 August: 📅 13: Running virtual machines in Istio service mesh 📅 09: KubeVirt v0. 44. 0 July: 📅 16: Kubernetes Authentication Options using KubeVirt Client Library 📅 09: KubeVirt v0. 43. 0 June: 📅 08: KubeVirt v0. 42. 0 May: 📅 12: KubeVirt v0. 41. 0 April: 📅 30: Using Intel vGPUs with Kubevirt 📅 21: Automated Windows Installation With Tekton Pipelines 📅 19: KubeVirt v0. 40. 0 March: 📅 10: KubeVirt v0. 39. 0 📅 03: The KubeVirt Summit 2021 is a wrap! February: 📅 08: KubeVirt v0. 38. 0 January: 📅 18: KubeVirt v0. 37. 0 📅 12: KubeVirt Summit is coming! 2020: December: 📅 16: KubeVirt v0. 36. 0 📅 10: Monitoring KubeVirt VMs from the inside 📅 10: Customizing images for containerized VMs part I 📅 04: High Availability -- RunStrategies for Virtual Machines November: 📅 09: KubeVirt v0. 35. 0 October: 📅 21: Multiple Network Attachments with bridge CNI 📅 07: KubeVirt v0. 34. 0 September: 📅 15: KubeVirt v0. 33. 0 August: 📅 11: KubeVirt v0. 32. 0 📅 06: Import virtual machine from oVirt July: 📅 20: Minikube KubeVirt addon 📅 09: KubeVirt v0. 31. 0 📅 01: Common-templates June: 📅 22: Migrate a sample Windows workload to Kubernetes using KubeVirt and CDI 📅 05: KubeVirt v0. 30. 0 May: 📅 25: SELinux, from basics to KubeVirt 📅 12: KubeVirt VM Image Usage Patterns 📅 06: KubeVirt v0. 29. 0 April: 📅 30: KubeVirt Operation Fundamentals 📅 29: KubeVirt Security Fundamentals 📅 28: KubeVirt Architecture Fundamentals 📅 09: KubeVirt v0. 28. 0 March: 📅 22: Live Migration in KubeVirt 📅 06: KubeVirt v0. 27. 0 February: 📅 25: Advanced scheduling using affinity and anti-affinity rules 📅 14: KubeVirt: installing Microsoft Windows from an ISO 📅 07: KubeVirt v0. 26. 0 📅 06: NA KubeCon 2019 - KubeVirt Deep Dive: Virtualized GPU Workloads on KubeVirt - David Vossel, Red Hat & Vishesh Tanksale, NVIDIA 📅 01: NA KubeCon 2019 - KubeVirt introduction by Steve Gordon and Chandrakanth Jakkidi January: 📅 24: Managing KubeVirt with OpenShift Web Console 📅 21: KubeVirt Laboratory 3, upgrades 📅 13: KubeVirt v0. 25. 0 2019: December: 📅 17: KubeVirt user interface options 📅 10: KubeVirt Laboratory 2, experimenting with CDI 📅 04: KubeVirt Laboratory 1, use KubeVirt 📅 03: KubeVirt v0. 24. 0 November: 📅 28: KubeVirt basic operations video 📅 22: Jenkins Infra upgrade 📅 12: KubeVirt at KubeCon + CloudNativeCon North America 📅 04: KubeVirt v0. 23. 0 October: 📅 31: Prow jobs for KubeVirt website and Tutorial repo 📅 31: Jenkins Jobs for KubeVirt lab validation 📅 30: Persistent storage of your Virtual Machines in KubeVirt with Rook 📅 23: KubeVirt on Kubernetes with CRI-O from scratch - Installing KubeVirt 📅 16: KubeVirt on Kubernetes with CRI-O from scratch - Installing Kubernetes 📅 10: KubeVirt v0. 22. 0 📅 09: KubeVirt on Kubernetes with CRI-O from scratch September: 📅 09: KubeVirt v0. 21. 0 📅 09: KubeVirt is now part of CNCF Sandbox August: 📅 09: KubeVirt v0. 20. 0 📅 09: KubeVirt Condition Types Renamed 📅 01: KubeVirt Condition Types Rename in Custom Resource July: 📅 30: Node Drain in KubeVirt 📅 29: How to import VM into KubeVirt 📅 12: Website roadmap 📅 08: KubeVirt with Ansible, part 2 📅 05: KubeVirt v0. 19. 0 June: 📅 05: KubeVirt v0. 18. 0 📅 04: KubeVirt vagrant provider May: 📅 21: KubeVirt with Ansible, part 1 – Introduction 📅 06: KubeVirt v0. 17. 0 April: 📅 17: Hyper Converged Operator 📅 05: KubeVirt v0. 16. 
0 March: 📅 14: More About Kubevirt Metrics 📅 05: KubeVirt v0. 15. 0 February: 📅 22: Federated Kubevirt 📅 04: KubeVirt v0. 14. 0 January: 📅 22: An Overview To Kubevirt Metrics 📅 15: KubeVirt v0. 13. 0 📅 11: KubeVirt v0. 12. 0 2018: December: 📅 13: Kubevirt Autolatest 📅 06: KubeVirt v0. 11. 0 November: 📅 26: Kubevirt At Kubecon Na 📅 20: Ignition Support 📅 16: New Volume Types 📅 08: KubeVirt v0. 10. 0 October: 📅 10: Cdi Datavolumes 📅 09: Containerized Data Importer 📅 04: KubeVirt v0. 9. 0 📅 03: Kubevirt Network Rehash September: 📅 12: Attaching To Multiple Networks 📅 11: Kubevirt Memory Overcommit 📅 06: KubeVirt v0. 8. 0 August: 📅 08: Kubevirtci July: 📅 23: Kubevirt V0. 7. 0 📅 04: KubeVirt v0. 7. 0 📅 03: Unit Test Howto June: 📅 21: Run Istio With Kubevirt 📅 20: Kvm Using Device Plugins 📅 13: Proxy VM Conclusion 📅 11: KubeVirt v0. 6. 0 📅 07: Non Dockerized Build 📅 03: Research Run Vms With Istio Service Mesh May: 📅 22: Use Vs Code For Kube Virt Development 📅 16: Ovn Multi Network Plugin For Kubernetes Kubetron 📅 16: Use Glusterfs Cloning With Kubevirt 📅 16: Kubevirt Api Access Control 📅 08: Kubevirt Objects 📅 07: Deploying Vms On Kubernetes Glusterfs Kubevirt 📅 04: KubeVirt v0. 5. 0 📅 04: Deploying Kubevirt On A Single Ovirt Vm April: 📅 27: This Week In Kube Virt 23 📅 25: Kubevirt Network Deep Dive 📅 06: This Week In Kube Virt 22 📅 06: KubeVirt v0. 4. 0 March: 📅 20: This Week In Kube Virt 21 📅 08: This Week In Kube Virt 20 📅 08: KubeVirt v0. 3. 0 February: 📅 23: This Week In Kube Virt 19 📅 10: This Week In Kube Virt 18 January: 📅 30: This Week In Kube Virt 17 📅 19: This Week In Kube Virt 16 Size Xl 📅 19: This Week In Kube Virt 16 Holiday Wrap Up Edition 📅 05: Some Notes On Some Highlights Of V020 📅 05: Kube Virt v0. 2. 0 2017: December: 📅 15: This Week In Kube Virt 15 📅 08: This Week In Kube Virt 14 📅 08: Kube Virt v0. 1. 0 📅 04: This Week In Kube Virt 13 November: 📅 25: This Week In Kube Virt 12 📅 21: This Week In Kube Virt 11 📅 10: This Week In Kube Virt 10 Base 10 📅 07: Kube Virt v0. 0. 4 📅 06: This Week In Kube Virt 9 October: 📅 28: This Week In Kube Virt 8 📅 24: This Week In Kube Virt 7 📅 15: This Week In Kube Virt 6 📅 06: This Week In Kube Virt 5 September: 📅 29: This Week In Kube Virt 4 📅 22: This Week In Kube Virt 3 📅 15: This Week In Kube Virt 2 📅 08: This Week In Kube Virt 1 July: 📅 18: Comparing KubeVirt to other technologies 📅 18: The Role of LibVirt " + "body": " - Blogs Categories: News Weekly Updates Releases Uncategorized Additional filters: Grouped by Date Post calendar: JanFebMarAprMayJunJulAugSepOctNovDec2024  2   1     2023  3 1 3 2 2 20223112121311  20212123112222 12020352432321214201932222253274420185234863134422017      2 4454 2024: July: 📅 17: KubeVirt v1. 3. 0 March: 📅 19: KubeVirt Summit 2024 CfP is open! 📅 05: KubeVirt v1. 2. 0 2023: November: 📅 07: Announcing KubeVirt v1. 1 📅 06: KubeVirt v1. 1. 0 September: 📅 06: Running KubeVirt with Cluster Autoscaler 📅 05: Managing KubeVirt VMs with Ansible July: 📅 24: NetworkPolicies for KubeVirt VMs secondary networks using OVN-Kubernetes 📅 11: KubeVirt v1. 0 has landed! 📅 06: KubeVirt v1. 0. 0 May: 📅 31: Secondary networks connected to the physical underlay for KubeVirt VMs using OVN-Kubernetes March: 📅 06: Secondary networks for KubeVirt VMs using OVN-Kubernetes 📅 03: KubeVirt Summit 2023! 📅 01: KubeVirt v0. 59. 0 2022: October: 📅 13: KubeVirt v0. 58. 0 September: 📅 12: KubeVirt v0. 57. 0 August: 📅 18: KubeVirt v0. 56. 
0 📅 12: Simplifying KubeVirt's `VirtualMachine` UX with Instancetypes and Preferences 📅 02: KubeVirt: installing Microsoft Windows 11 from an ISO July: 📅 14: KubeVirt v0. 55. 0 June: 📅 28: KubeVirt at KubeCon EU 2022 📅 08: KubeVirt v0. 54. 0 May: 📅 09: KubeVirt v0. 53. 0 April: 📅 08: KubeVirt v0. 52. 0 📅 03: Load-balancer for virtual machines on bare metal Kubernetes clusters March: 📅 08: KubeVirt v0. 51. 0 February: 📅 09: KubeVirt v0. 50. 0 January: 📅 25: Dedicated migration network in KubeVirt 📅 24: KubeVirt Summit is coming back! 📅 11: KubeVirt v0. 49. 0 2021: December: 📅 06: KubeVirt v0. 48. 0 October: 📅 13: Running real-time workloads with improved performance 📅 08: KubeVirt v0. 46. 0 September: 📅 21: Import AWS AMIs as KubeVirt Golden Images 📅 08: KubeVirt v0. 45. 0 August: 📅 13: Running virtual machines in Istio service mesh 📅 09: KubeVirt v0. 44. 0 July: 📅 16: Kubernetes Authentication Options using KubeVirt Client Library 📅 09: KubeVirt v0. 43. 0 June: 📅 08: KubeVirt v0. 42. 0 May: 📅 12: KubeVirt v0. 41. 0 April: 📅 30: Using Intel vGPUs with Kubevirt 📅 21: Automated Windows Installation With Tekton Pipelines 📅 19: KubeVirt v0. 40. 0 March: 📅 10: KubeVirt v0. 39. 0 📅 03: The KubeVirt Summit 2021 is a wrap! February: 📅 08: KubeVirt v0. 38. 0 January: 📅 18: KubeVirt v0. 37. 0 📅 12: KubeVirt Summit is coming! 2020: December: 📅 16: KubeVirt v0. 36. 0 📅 10: Monitoring KubeVirt VMs from the inside 📅 10: Customizing images for containerized VMs part I 📅 04: High Availability -- RunStrategies for Virtual Machines November: 📅 09: KubeVirt v0. 35. 0 October: 📅 21: Multiple Network Attachments with bridge CNI 📅 07: KubeVirt v0. 34. 0 September: 📅 15: KubeVirt v0. 33. 0 August: 📅 11: KubeVirt v0. 32. 0 📅 06: Import virtual machine from oVirt July: 📅 20: Minikube KubeVirt addon 📅 09: KubeVirt v0. 31. 0 📅 01: Common-templates June: 📅 22: Migrate a sample Windows workload to Kubernetes using KubeVirt and CDI 📅 05: KubeVirt v0. 30. 0 May: 📅 25: SELinux, from basics to KubeVirt 📅 12: KubeVirt VM Image Usage Patterns 📅 06: KubeVirt v0. 29. 0 April: 📅 30: KubeVirt Operation Fundamentals 📅 29: KubeVirt Security Fundamentals 📅 28: KubeVirt Architecture Fundamentals 📅 09: KubeVirt v0. 28. 0 March: 📅 22: Live Migration in KubeVirt 📅 06: KubeVirt v0. 27. 0 February: 📅 25: Advanced scheduling using affinity and anti-affinity rules 📅 14: KubeVirt: installing Microsoft Windows from an ISO 📅 07: KubeVirt v0. 26. 0 📅 06: NA KubeCon 2019 - KubeVirt Deep Dive: Virtualized GPU Workloads on KubeVirt - David Vossel, Red Hat & Vishesh Tanksale, NVIDIA 📅 01: NA KubeCon 2019 - KubeVirt introduction by Steve Gordon and Chandrakanth Jakkidi January: 📅 24: Managing KubeVirt with OpenShift Web Console 📅 21: KubeVirt Laboratory 3, upgrades 📅 13: KubeVirt v0. 25. 0 2019: December: 📅 17: KubeVirt user interface options 📅 10: KubeVirt Laboratory 2, experimenting with CDI 📅 04: KubeVirt Laboratory 1, use KubeVirt 📅 03: KubeVirt v0. 24. 0 November: 📅 28: KubeVirt basic operations video 📅 22: Jenkins Infra upgrade 📅 12: KubeVirt at KubeCon + CloudNativeCon North America 📅 04: KubeVirt v0. 23. 0 October: 📅 31: Prow jobs for KubeVirt website and Tutorial repo 📅 31: Jenkins Jobs for KubeVirt lab validation 📅 30: Persistent storage of your Virtual Machines in KubeVirt with Rook 📅 23: KubeVirt on Kubernetes with CRI-O from scratch - Installing KubeVirt 📅 16: KubeVirt on Kubernetes with CRI-O from scratch - Installing Kubernetes 📅 10: KubeVirt v0. 22. 0 📅 09: KubeVirt on Kubernetes with CRI-O from scratch September: 📅 09: KubeVirt v0. 
21. 0 📅 09: KubeVirt is now part of CNCF Sandbox August: 📅 09: KubeVirt v0. 20. 0 📅 09: KubeVirt Condition Types Renamed 📅 01: KubeVirt Condition Types Rename in Custom Resource July: 📅 30: Node Drain in KubeVirt 📅 29: How to import VM into KubeVirt 📅 12: Website roadmap 📅 08: KubeVirt with Ansible, part 2 📅 05: KubeVirt v0. 19. 0 June: 📅 05: KubeVirt v0. 18. 0 📅 04: KubeVirt vagrant provider May: 📅 21: KubeVirt with Ansible, part 1 – Introduction 📅 06: KubeVirt v0. 17. 0 April: 📅 17: Hyper Converged Operator 📅 05: KubeVirt v0. 16. 0 March: 📅 14: More About Kubevirt Metrics 📅 05: KubeVirt v0. 15. 0 February: 📅 22: Federated Kubevirt 📅 04: KubeVirt v0. 14. 0 January: 📅 22: An Overview To Kubevirt Metrics 📅 15: KubeVirt v0. 13. 0 📅 11: KubeVirt v0. 12. 0 2018: December: 📅 13: Kubevirt Autolatest 📅 06: KubeVirt v0. 11. 0 November: 📅 26: Kubevirt At Kubecon Na 📅 20: Ignition Support 📅 16: New Volume Types 📅 08: KubeVirt v0. 10. 0 October: 📅 10: Cdi Datavolumes 📅 09: Containerized Data Importer 📅 04: KubeVirt v0. 9. 0 📅 03: Kubevirt Network Rehash September: 📅 12: Attaching To Multiple Networks 📅 11: Kubevirt Memory Overcommit 📅 06: KubeVirt v0. 8. 0 August: 📅 08: Kubevirtci July: 📅 23: Kubevirt V0. 7. 0 📅 04: KubeVirt v0. 7. 0 📅 03: Unit Test Howto June: 📅 21: Run Istio With Kubevirt 📅 20: Kvm Using Device Plugins 📅 13: Proxy VM Conclusion 📅 11: KubeVirt v0. 6. 0 📅 07: Non Dockerized Build 📅 03: Research Run Vms With Istio Service Mesh May: 📅 22: Use Vs Code For Kube Virt Development 📅 16: Ovn Multi Network Plugin For Kubernetes Kubetron 📅 16: Use Glusterfs Cloning With Kubevirt 📅 16: Kubevirt Api Access Control 📅 08: Kubevirt Objects 📅 07: Deploying Vms On Kubernetes Glusterfs Kubevirt 📅 04: KubeVirt v0. 5. 0 📅 04: Deploying Kubevirt On A Single Ovirt Vm April: 📅 27: This Week In Kube Virt 23 📅 25: Kubevirt Network Deep Dive 📅 06: This Week In Kube Virt 22 📅 06: KubeVirt v0. 4. 0 March: 📅 20: This Week In Kube Virt 21 📅 08: This Week In Kube Virt 20 📅 08: KubeVirt v0. 3. 0 February: 📅 23: This Week In Kube Virt 19 📅 10: This Week In Kube Virt 18 January: 📅 30: This Week In Kube Virt 17 📅 19: This Week In Kube Virt 16 Size Xl 📅 19: This Week In Kube Virt 16 Holiday Wrap Up Edition 📅 05: Some Notes On Some Highlights Of V020 📅 05: Kube Virt v0. 2. 0 2017: December: 📅 15: This Week In Kube Virt 15 📅 08: This Week In Kube Virt 14 📅 08: Kube Virt v0. 1. 0 📅 04: This Week In Kube Virt 13 November: 📅 25: This Week In Kube Virt 12 📅 21: This Week In Kube Virt 11 📅 10: This Week In Kube Virt 10 Base 10 📅 07: Kube Virt v0. 0. 4 📅 06: This Week In Kube Virt 9 October: 📅 28: This Week In Kube Virt 8 📅 24: This Week In Kube Virt 7 📅 15: This Week In Kube Virt 6 📅 06: This Week In Kube Virt 5 September: 📅 29: This Week In Kube Virt 4 📅 22: This Week In Kube Virt 3 📅 15: This Week In Kube Virt 2 📅 08: This Week In Kube Virt 1 July: 📅 18: Comparing KubeVirt to other technologies 📅 18: The Role of LibVirt " }, { - "id": 190, + "id": 191, "url": "/docs/", "title": "Introduction", "author" : "", "tags" : "", "body": " - Check out the user guide! " }, { - "id": 191, + "id": 192, "url": "/pages/ec2", "title": "Easy install using AWS", "author" : "", "tags" : "", "body": " - " }, { - "id": 192, + "id": 193, "url": "/gallery/", "title": "Gallery", "author" : "", "tags" : "picture gallery, photos", "body": " - DevConf. cz 2020 in pictures : January 31, 2020 This article shows some of the KubeVirt presence in DevConf. 
cz 2020 Read More FOSDEM 2020 in pictures : February 2, 2020 This article shows KubeVirt presence at FOSDEM 2020 Read More " }, { - "id": 193, + "id": 194, "url": "/pages/gcp", "title": "Easy install using GCP", "author" : "", "tags" : "", "body": " - " }, , , , { - "id": 194, + "id": 195, "url": "/blogs/", "title": "Blogs", "author" : "", "tags" : "", - "body": " - Blogs Categories: News Weekly Updates Releases Uncategorized Additional filters: Grouped by Date KubeVirt Summit 2024 CfP is open!: Mar 19, 2024 Join us for the KubeVirt community's fourth annual dedicated online event Read More KubeVirt v1. 2. 0: March 05, 2024 This article provides information about KubeVirt release v1. 2. 0 changes Read More Announcing KubeVirt v1. 1: November 07, We are very pleased to announce the release of KubeVirt v1. 1! Read More KubeVirt v1. 1. 0: November 06, 2023 This article provides information about KubeVirt release v1. 1. 0 changes Read More Running KubeVirt with Cluster Autoscaler: September 6, 2023 This post explains how to set up KubeVirt with Cluster Autoscaler on EKS Read More New 1 of 37 Old " + "body": " - Blogs Categories: News Weekly Updates Releases Uncategorized Additional filters: Grouped by Date KubeVirt v1. 3. 0: July 17, 2024 This article provides information about KubeVirt release v1. 3. 0 changes Read More KubeVirt Summit 2024 CfP is open!: Mar 19, 2024 Join us for the KubeVirt community's fourth annual dedicated online event Read More KubeVirt v1. 2. 0: March 05, 2024 This article provides information about KubeVirt release v1. 2. 0 changes Read More Announcing KubeVirt v1. 1: November 07, We are very pleased to announce the release of KubeVirt v1. 1! Read More KubeVirt v1. 1. 0: November 06, 2023 This article provides information about KubeVirt release v1. 1. 0 changes Read More New 1 of 37 Old " }, , { - "id": 195, + "id": 196, "url": "/videos/interviews", "title": "Interviews", "author" : "", "tags" : "", "body": " - KubeVirt Community Interviews: Interviews with various media outlets about all things KubeVirt. Watch the most recent interview, and be sure to check out our YouTube Interviews playlist to see more. " }, { - "id": 196, + "id": 197, "url": "/videos/kubevirt-summit", "title": "Kubevirt Summit", "author" : "", "tags" : "", "body": " - KubeVirt Summit 2023: Watch Session 1, Day 1 of KubeVirt Summit 2023 above. Click here for a playlist of all of our KubeVirt Summit 2023 sessions. KubeVirt Summit 2022: Watch Session 1, Day 1 of KubeVirt Summit 2022 above. Click here for a playlist of all of our KubeVirt Summit 2022 sessions. KubeVirt Summit 2021: Watch Session 1, Day 1 of KubeVirt Summit 2021 above. Our inaugural KubeVirt Summit! Click here for a playlist of all of our KubeVirt Summit 2021 sessions. " }, , { - "id": 197, + "id": 198, "url": "/labs/kubernetes/lab1", "title": "Use KubeVirt", "author" : "", "tags" : "laboratory, kubevirt installation, start vm, stop vm, delete vm, access console, lab", - "body": " - Use KubeVirt You can experiment with this lab online at KillercodaCreate a Virtual Machine: Download the VM manifest and explore it. Note it uses a container disk and as such doesn’t persist data. Such container disks currently exist for alpine, cirros and fedora. wget https://kubevirt. io/labs/manifests/vm. yamlless vm. yamlApply the manifest to Kubernetes. kubectl apply -f https://kubevirt. io/labs/manifests/vm. yamlvirtualmachine. kubevirt. io testvm created virtualmachineinstancepreset. kubevirt. 
io small createdManage Virtual Machines (optional):: To get a list of existing Virtual Machines. Note the running status. kubectl get vmskubectl get vms -o yaml testvmTo start a Virtual Machine you can use: virtctl start testvmIf you installed virtctl via krew, you can use kubectl virt: # Start the virtual machine:kubectl virt start testvm# Stop the virtual machine:kubectl virt stop testvmAlternatively you could use kubectl patch: # Start the virtual machine:kubectl patch virtualmachine testvm --type merge -p \ '{ spec :{ running :true}}'# Stop the virtual machine:kubectl patch virtualmachine testvm --type merge -p \ '{ spec :{ running :false}}'Now that the Virtual Machine has been started, check the status. Note the running status. kubectl get vmiskubectl get vmis -o yaml testvmAccessing VMs (serial console): Connect to the serial console of the Cirros VM. Hit return / enter a few times and login with the displayed username and password. virtctl console testvmDisconnect from the virtual machine console by typing: ctrl+]. Controlling the State of the VM: To shut it down: virtctl stop testvmTo delete a Virtual Machine: kubectl delete vm testvmThis concludes this section of the lab. You can watch how the laboratory is done in the following video: Next Lab " + "body": " - Use KubeVirt You can experiment with this lab online at Killercoda Create a Virtual Machine: Download the VM manifest and explore it. Note it uses a container disk and as such doesn’t persist data. Such container disks currently exist for alpine, cirros and fedora. wget https://kubevirt. io/labs/manifests/vm. yamlless vm. yamlApply the manifest to Kubernetes. kubectl apply -f https://kubevirt. io/labs/manifests/vm. yamlYou should see the following results: virtualmachine. kubevirt. io “testvm” createdvirtualmachineinstancepreset. kubevirt. io “small” created Manage Virtual Machines (optional): To get a list of existing Virtual Machines. Note the running status. kubectl get vmskubectl get vms -o yaml testvmTo start a Virtual Machine you can use: virtctl start testvmIf you installed virtctl via krew, you can use kubectl virt: # Start the virtual machine:kubectl virt start testvm# Stop the virtual machine:kubectl virt stop testvmAlternatively you could use kubectl patch: # Start the virtual machine:kubectl patch virtualmachine testvm --type merge -p \ '{ spec :{ running :true}}'# Stop the virtual machine:kubectl patch virtualmachine testvm --type merge -p \ '{ spec :{ running :false}}'Now that the Virtual Machine has been started, check the status (kubectl get vms). Note the Running status. You now want to see the instance of the VM you just started: kubectl get vmiskubectl get vmis -o yaml testvmNote the difference between the VM (virtual machine) resource and the VMI (virtual machine instance) resource. The VMI does not exist before starting the VM, and the VMI will be deleted when you stop the VM. (Also note that a restart of the VM is needed if you would like to change some properties; just modifying the VM is not sufficient, the VMI has to be replaced. ) Accessing VMs (serial console): Connect to the serial console of the Cirros VM. Hit return / enter a few times and login with the displayed username and password. virtctl console testvmDisconnect from the virtual machine console by typing: ctrl+]. If you would like to see the complete boot sequence logs from the console, you need to connect to the serial console just after starting the VM (you can test this by stopping and starting the VM again, see below).
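For reference, the vm.yaml downloaded at the start of this lab defines a VirtualMachine of roughly the following shape (a simplified sketch written as a shell heredoc; the file name, image and resource values here are illustrative of the lab's cirros container disk, not the exact file served from kubevirt.io):

cat <<EOF > vm-sketch.yaml    # illustrative file name, not part of the lab
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: false              # the VM is defined but not started until virtctl start / kubectl patch
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 64M       # small guest; values are illustrative
      volumes:
        - name: containerdisk
          containerDisk:      # ephemeral disk baked into a container image, hence no persistence
            image: quay.io/kubevirt/cirros-container-disk-demo
EOF

The key point for the rest of the lab is that running: false is why virtctl start (or the kubectl patch shown above) is needed before a VMI appears.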
Controlling the State of the VM: To shut it down: virtctl stop testvmTo delete a Virtual Machine: kubectl delete vm testvmThis concludes this section of the lab. You can watch how the laboratory is done in the following video: Next Lab " }, { - "id": 198, + "id": 199, "url": "/labs/kubernetes/lab2", "title": "Experiment with CDI", "author" : "", "tags" : "laboratory, importer, vm import, containerized data importer, CDI, lab", "body": " - Experiment with the Containerized Data Importer (CDI) You can experiment with this lab online at KillercodaIn this lab, you will learn how to use Containerized Data Importer (CDI) to import Virtual Machine images for use with Kubevirt. CDI simplifies the process of importing data from various sources into Kubernetes Persistent Volumes, making it easier to use that data within your virtual machines. CDI introduces DataVolumes, custom resources meant to be used as abstractions of PVCs. A custom controller watches for DataVolumes and handles the creation of a target PVC with all the spec and annotations required for importing the data. Depending on the type of source, other specific CDI controller will start the import process and create a raw image named disk. img with the desired content into the target PVC. This ‘lab’ targets deployment on one node as it uses Minikube and its hostpath storage class which can create PersistentVolumes (PVs) on only one node at a time. In production use, a StorageClass capable of ReadWriteOnce or better operation should be deployed to ensure PVs are accessible from any node. Install the CDI: In this exercise we deploy the latest release of CDI using its Operator. export VERSION=$(basename $(curl -s -w %{redirect_url} https://github. com/kubevirt/containerized-data-importer/releases/latest))kubectl create -f https://github. com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator. yamlkubectl create -f https://github. com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr. yamlCheck the status of the cdi CustomResource (CR) created in the previous step. The CR’s Phase will change from Deploying to Deployed as the pods it deploys are created and reach the Running state. kubectl get cdi cdi -n cdiReview the “cdi” pods that were added. kubectl get pods -n cdiUse CDI to Import a Disk Image: First, you need to create a DataVolume that points to the source data you want to import. In this example, we’ll use a DataVolume to import a Fedora37 Cloud Image into a PVC and launch a Virtual Machine making use of it. cat <<EOF > dv_fedora. ymlapiVersion: cdi. kubevirt. io/v1beta1kind: DataVolumemetadata: name: fedora spec: storage: resources: requests: storage: 5Gi source: http: url: https://download. fedoraproject. org/pub/fedora/linux/releases/37/Cloud/x86_64/images/Fedora-Cloud-Base-37-1. 7. x86_64. raw. xz EOFkubectl create -f dv_fedora. ymlA custom CDI controller will use this DataVolume to create a PVC with the same name and proper spec/annotations so that an import-specific controller detects it and launches an importer pod. This pod will gather the image specified in the source field. kubectl get pvc fedora -o yamlkubectl get pod # Make note of the pod name assigned to the import processkubectl logs -f importer-fedora-pnbqh # Substitute your importer-fedora pod name here. Notice that the importer downloaded the publicly available Fedora Cloud qcow image. Once the importer pod completes, this PVC is ready for use in kubevirt. 
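While the import is running, the DataVolume itself can also be watched instead of tailing the importer pod (a quick sketch; dv is the short name CDI registers for DataVolumes, and the exact phase names may vary slightly between CDI versions):

kubectl get dv fedora -w     # phase typically moves from ImportScheduled / ImportInProgress to Succeeded
kubectl get pvc fedora       # the backing PVC that CDI created from the DataVolume

Once the DataVolume reports Succeeded, the PVC can be attached to a VirtualMachine exactly as the next steps do.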
If the importer pod completes in error, you may need to retry it or specify a different URL to the fedora cloud image. To retry, first delete the importer pod and the DataVolume, and then recreate the DataVolume. kubectl delete -f dv_fedora. yml --waitkubectl create -f dv_fedora. ymlThe following error occurs when the storage provider is not recognized by KubeVirt: message: no accessMode defined DV nor on StorageProfile for standard StorageClassEdit the DataVolume YAML to specify accessMode manually and retry it: spec: storage:+ accessModes:+ - ReadWriteOnceLet’s create a Virtual Machine making use of it. Review the file vm1_pvc. yml. wget https://kubevirt. io/labs/manifests/vm1_pvc. ymlcat vm1_pvc. ymlWe change the yaml definition of this Virtual Machine to inject the default public key of user in the cloud instance. # Generate a password-less SSH key using the default location. ssh-keygenPUBKEY=`cat ~/. ssh/id_rsa. pub`sed -i s%ssh-rsa. *%$PUBKEY% vm1_pvc. ymlkubectl create -f vm1_pvc. ymlThis will create and start a Virtual Machine named vm1. We can use the following command to check our Virtual Machine is running and to gather its IP. You are looking for the IP address beside the virt-launcher pod. kubectl get pod -o wideSince we are running an all in one setup, the corresponding Virtual Machine is actually running on the same node, we can check its qemu process. ps -ef | grep qemu | grep vm1Wait for the Virtual Machine to boot and to be available for login. You may monitor its progress through the console. The speed at which the VM boots depends on whether baremetal hardware is used. It is much slower when nested virtualization is used, which is likely the case if you are completing this lab on an instance on a cloud provider. virtctl console vm1Disconnect from the virtual machine console by typing: ctrl+] Finally, we will connect to vm1 Virtual Machine (VM) as a regular user would do, i. e. via ssh. This can be achieved by just ssh to the gathered ip in case we are in the Kubernetes software defined network (SDN). This is true, if we are connected to a node that belongs to the Kubernetes cluster network. Probably if you followed the Easy install using AWS or Easy install using GCP your cloud instance is already part of the cluster. ssh fedora@VM_IPOn the other side, if you followed Easy install using minikube take into account that you will need to ssh into Minikube first, as shown below. $ kubectl get vmiNAME AGE PHASE IP NODENAMEvm1 109s Running 172. 17. 0. 16 minikube$ minikube ssh _ _ _ _ ( ) ( ) ___ ___ (_) ___ (_)| |/') _ _ | |_ __/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)$ ssh fedora@172. 17. 0. 16The authenticity of host '172. 17. 0. 16 (172. 17. 0. 16)' can't be established. ECDSA key fingerprint is SHA256:QmJUvc8vbM2oXiEonennW7+lZ8rVRGyhUtcQBVBTnHs. Are you sure you want to continue connecting (yes/no)? yesWarning: Permanently added '172. 17. 0. 16' (ECDSA) to the list of known hosts. fedora@172. 17. 0. 16's password:Finally, on a usual situation you will probably want to give access to your vm1 VM to someone else from outside the Kubernetes cluster nodes. Someone who is actually connecting from his or her laptop. This can be achieved with the virtctl tool already installed in Easy install using minikube. 
Note that this is the same case as connecting from our laptop to vm1 VM running on our local Minikube instance First, we are going expose the ssh port of the vm1 as NodePort type. Then verify that the Kubernetes object service was created successfully on a random port of the Minikube or cloud instance. $ virtctl expose vmi vm1 --name=vm1-ssh --port=20222 --target-port=22 --type=NodePort Service vm1-ssh successfully exposed for vmi vm1$ kubectl get svcNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGEvm1-ssh NodePort 10. 101. 226. 150 <none> 20222:32495/TCP 24mOnce exposed successfully, check the IP of your Minikube VM or cloud instance and verify you can reach the VM using your public SSH key previously configured. In case of cloud instances verify that security group applied allows traffic to the random port created. minikube ip192. 168. 39. 74$ ssh -i ~/. ssh/id_rsa fedora@192. 168. 39. 74 -p 32495 Last login: Wed Oct 9 13:59:29 2019 from 172. 17. 0. 1 [fedora@vm1 ~]$This concludes this section of the lab. You can watch how the laboratory is done in the following video: Previous Lab " }, { - "id": 199, + "id": 200, "url": "/labs/kubernetes/lab3", "title": "KubeVirt Upgrades", "author" : "", "tags" : "laboratory, kubevirt upgrades, upgrade, lifecycle, lab", "body": " - Experiment with KubeVirt UpgradesDeploy KubeVirt: NOTE: For upgrading to the latest KubeVirt version, first we will install a specific older version of the operator, if you’re already using latest, please start with an older KubeVirt version and follow Lab1 to deploy KubeVirt on it, but using version v0. 56. 1 instead. If you’ve already covered this, jump over this section. Let’s stick to use the release v0. 56. 1: export KUBEVIRT_VERSION=v0. 56. 1Let’s deploy the KubeVirt Operator by running the following command: $ kubectl create -f https://github. com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator. yamlnamespace/kubevirt created. . . deployment. apps/virt-operator createdLet’s wait for the operator to become ready: $ kubectl wait --for condition=ready pod -l kubevirt. io=virt-operator -n kubevirt --timeout=100spod/virt-operator-5ddb4674b9-6fbrv condition metNow let’s deploy KubeVirt by creating a Custom Resource that will trigger the ‘operator’ and perform the deployment: $ kubectl create -f https://github. com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr. yamlkubevirt. kubevirt. io/kubevirt createdIf you’re running in a virtualized environment, in order to be able to run VMs here we need to pre-configure KubeVirt so it uses software-emulated virtualization instead of trying to use real hardware virtualization. 
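A quick way to tell whether the host exposes hardware virtualization at all, before falling back to emulation (a heuristic check run on the node, not a KubeVirt command; a count of 0 means the patch shown next is required):

grep -cE 'vmx|svm' /proc/cpuinfo   # counts CPU flags for Intel VT-x (vmx) or AMD-V (svm); 0 means no hardware virtualization visible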
$ kubectl -n kubevirt patch kubevirt kubevirt --type=merge --patch '{ spec :{ configuration :{ developerConfiguration :{ useEmulation :true}}}}'configmap/kubevirt-config createdLet’s check the deployment: $ kubectl get pods -n kubevirtOnce it’s ready, it will show something similar to the information below: $ kubectl get pods -n kubevirtNAME READY STATUS RESTARTS AGEvirt-api-7fc57db6dd-g4s4w 1/1 Running 0 3mvirt-api-7fc57db6dd-zd95q 1/1 Running 0 3mvirt-controller-6849d45bcc-88zd4 1/1 Running 0 3mvirt-controller-6849d45bcc-cmfzk 1/1 Running 0 3mvirt-handler-fvsqw 1/1 Running 0 3mvirt-operator-5649f67475-gmphg 1/1 Running 0 4mvirt-operator-5649f67475-sw78k 1/1 Running 0 4mDeploy a VM: Once all the containers are with the status “Running” you can execute the command below for applying a YAML definition of a virtual machine into our current Kubernetes environment: First, let’s wait for all the pods to be ready like previously provided example: $ kubectl wait --for condition=ready pod -l kubevirt. io=virt-api -n kubevirt --timeout=100spod/virt-api-5ddb4674b9-6fbrv condition met$ kubectl wait --for condition=ready pod -l kubevirt. io=virt-controller -n kubevirt --timeout=100spod/virt-controller-p3d4o-1fvfz condition met$ kubectl wait --for condition=ready pod -l kubevirt. io=virt-handler -n kubevirt --timeout=100spod/virt-handler-1b4n3z4674b9-sf1rl condition metAnd proceed with the VM creation: $ kubectl apply -f https://kubevirt. io/labs/manifests/vm. yamlvirtualmachine. kubevirt. io/testvm createdUsing the command below for checking that the VM is defined: $ kubectl get vmsNAME AGE RUNNING VOLUMEtestvm 22s falseNotice from the output that the VM is not running yet. To start a VM, virtctl~~~` should be used: $ virtctl start testvmVM testvm was scheduled to startNow you can check again the VM status: $ kubectl get vmsNAME AGE RUNNING VOLUMEtestvm 0s falseOnce the VM is running you can inspect its status: kubectl get vmis$ kubectl get vmisNAME AGE PHASE IP NODENAMEtestvm 10s SchedulingOnce it’s ready, the command above will print something like: $ kubectl get vmisNAME AGE PHASE IP NODENAMEtestvm 1m Running 10. 32. 0. 11 masterWhile the PHASE is still Scheduling you can run the same command for checking again: $ kubectl get vmisOnce the PHASE will change to Running, we’re ready for upgrading KubeVirt. Define the next version to upgrade to: KubeVirt starting from v0. 17. 0 onwards, allows to upgrade one version at a time, by using two approaches as defined in the user-guide: Patching the imageTag value in the KubeVirt CR spec Updating the operator if no imageTag is defined (defaulting to upgrade to match the operator version)WARNING: In both cases, the supported scenario is updating from N-1 to N NOTE: Zero downtime rolling updates are supported starting with release v0. 17. 0 onwards. Updating from any release prior to the KubeVirt v0. 17. 0 release is not supported. Performing the upgrade: Updating the KubeVirt operator if no imageTag value is setWhen no imageTag value is set in the KubeVirt CR, the system assumes that the version of KubeVirt is locked to the version of the operator. This means that updating the operator will result in the underlying KubeVirt installation being updated as well. Let’s upgrade to the newer version after the one installed (v0. 56. 1 -> v0. 57. 0): $ export KUBEVIRT_VERSION=v0. 57. 0$ kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator. 
yaml Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply. . . deployment. apps/virt-operator configured NOTE: Compared to the first step of the lab, we are now using apply instead of create to deploy the newer version because the operator already exists. In any case, we can check that the VM is still running: $ kubectl get vmisNAME AGE PHASE IP NODENAMEtestvm 1m Running 10. 32. 0. 11 master Final upgrades: You can keep testing in this lab updating ‘one version at a time’ until reaching the value of KUBEVIRT_LATEST_VERSION: $ export KUBEVIRT_LATEST_VERSION=$(curl -s https://api. github. com/repos/kubevirt/kubevirt/releases/latest | jq -r . tag_name)$ echo -e CURRENT: $KUBEVIRT_VERSION LATEST: $KUBEVIRT_LATEST_VERSION Compare the two values and continue upgrading ‘one release at a time’ by: Choosing the target version: $ export KUBEVIRT_VERSION=vX. XX. X Updating the operator to that release: $ kubectl apply -f https://github. com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator. yamlWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply. . . deployment. apps/virt-operator configured The following command shows how to check the operator version: $ echo $(kubectl get deployment. apps virt-operator -n kubevirt -o jsonpath='{. spec. template. spec. containers[0]. env[?(@. name== KUBEVIRT_VERSION )]. value}')Wrap-up: Shutting down a VM works by either using virtctl or editing the VM. $ virtctl stop testvmVM testvm was scheduled to stop Finally, the VM can be deleted using: $ kubectl delete vms testvmvirtualmachine. kubevirt. io testvm deleted When updating using the operator, we can see that the ‘AGE’ of the containers is similar between them, but when updating only the KubeVirt version, the operator ‘AGE’ keeps increasing because it is not ‘recreated’. This concludes this section of the lab. You can watch how the laboratory is done in the following video: Previous Lab " }, { - "id": 200, + "id": 201, "url": "/labs/", "title": "Labs", "author" : "", "tags" : "", "body": " - Check out the Available labs on the side menu " }, , , , { - "id": 201, + "id": 202, "url": "/videos/community/meetings", "title": "Weekly Meetings", "author" : "", "tags" : "", "body": " - KubeVirt Community Meetings: Watch our latest KubeVirt Community Meeting, which meets every Wednesday. Click here for a playlist of all of our KubeVirt community weekly meetings. KubeVirt SIG Performance & Scale Weekly Meetings: Watch our latest SIG Performance and Scale Meeting, which meets every Thursday. Click here for a playlist of all of our KubeVirt SIG Performance & Scale weekly meetings. KubeVirt SIG Storage Bi-weekly Meetings: Watch the latest SIG Storage Meeting above, which meets every second Monday. Click here for a playlist of all of our KubeVirt SIG Storage bi-weekly meetings. Want to be involved?: Be sure to check out the KubeVirt calendar to see all of our recurring and one-off meetings. " }, { - "id": 202, + "id": 203, "url": "/labs/kubernetes/migration", "title": "Live Migration", "author" : "", "tags" : "laboratory, kubevirt installation, feature-gate, VM, Live Migration, lab", "body": " - Live Migration Live Migration is a common virtualization feature supported by KubeVirt where virtual machines running on one cluster node move to another cluster node without shutting down the guest OS or its applications.
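Later in this lab the migration is kicked off with virtctl migrate; the same action can also be expressed declaratively through a VirtualMachineInstanceMigration object (a sketch for reference only; the lab itself sticks to virtctl, and the object name used here is illustrative):

cat <<EOF | kubectl create -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migration-testvm     # illustrative name
spec:
  vmiName: testvm            # the running VMI to move to another node
EOF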
To experiment with KubeVirt live migration in a Kubernetes test environment, somesetup is required. Start a Kubernetes cluster with the following requirements: Two or more nodes CNI plugin: Flannel is a good pick for proof on concept environments. Nested or emulated virtualization KubeVirtFor a simple test environment using Minikube, refer to the Minikube Quickstart on this site. Check the status of nodes and kubevirt: To check on the nodes and their IP ranges run: kubectl get nodes -o wideThis will return a report like NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIMEminikube Ready control-plane,master 2m43s v1. 20. 7 192. 168. 39. 240 <none> Buildroot 2020. 02. 12 4. 19. 182 docker://20. 10. 6minikube-m02 Ready <none> 118s v1. 20. 7 192. 168. 39. 245 <none> Buildroot 2020. 02. 12 4. 19. 182 docker://20. 10. 6Check that kubevirt has fully deployed: kubectl -n kubevirt get kubevirtNAME AGE PHASEkubevirt 3m20s DeployedEnable Live Migration: Live migration is, at the time of writing, not a standard feature in KubeVirt. To enable the feature, create a ConfigMap in the “kubevirt” Namespace called “kubevirt-config”. kubectl apply -f - <<EOFapiVersion: v1kind: ConfigMapmetadata: name: kubevirt-config namespace: kubevirt labels: kubevirt. io: data: feature-gates: LiveMigration EOFCreate a Virtual Machine: Next, create a VM. This lab uses the “testvm” from lab1. kubectl apply -f https://kubevirt. io/labs/manifests/vm. yamlvirtctl start testvmIn a multi-node environment, it is helpful to know on which node a pod is running. View its node using -o wide: kubectl get pod -o wideNotice in this example, the pod shows as running on NODE “minikube-m02”: NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATESvirt-launcher-testvm-c8nzz 2/2 Running 0 32s 10. 244. 1. 12 minikube-m02 <none> <none>Start a Service on the Virtual Machine: Using virtctl, expose two ports for testing, ssh and http/8080: virtctl expose vmi testvm --name=testvm-ssh --port=22 --type=NodePortvirtctl expose vmi testvm --name=testvm-http --port=8080 --type=NodePortStart by logging in to the console and running a simple web server using netcat: virtctl console testvmThe default user “cirros” and its password are mentioned on the console loginprompt, use them to log in. Next, run the following while loop to continuouslyrespond to any http connection attempt with a test message: while true; do ( echo HTTP/1. 0 200 Ok ; echo; echo Migration test ) | nc -l -p 8080; doneLeave the loop running, and either break out of the console with CTRL-] or openanother terminal on the same machine. To test the service, several bits of information will need to be coordinated. To collect the minikube node IP address and the NodePort of the http service, run: IP=$(minikube ip)PORT=$(kubectl get svc testvm-http -o jsonpath='{. spec. ports[0]. nodePort}')Now use curl to read data from the simple web service: curl ${IP}:${PORT}This should output Migration test. If all is well, it is time to migrate thevirtual machine to another node. Migrate VM: To migrate the testvm vmi from one node to the other, run: virtctl migrate testvmTo ensure migration happens, watch the pods in “wide” view: kubectl get pods -o wideNAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATESvirt-launcher-testvm-8src7 0/2 Completed 0 5m 10. 244. 1. 14 minikube-m02 <none> <none>virt-launcher-testvm-zxlts 2/2 Running 0 21s 10. 244. 0. 
7 minikube <none> <none>Notice the original virt-launcher pod has entered the Completed state and the virtual machine is now running on the minikube node. Test the service previously started is still running: curl ${IP}:${PORT}Again, this should output Migration test. Summary: This lab is now concluded. This exercise has demonstrated the ability ofKubeVirt Live Migration to move a running virtual machine from one node toanother without requiring restart of running applications. " }, { - "id": 203, + "id": 204, "url": "/category/news.html", "title": "News", "author" : "", "tags" : "", "body": " - " }, { - "id": 204, + "id": 205, "url": "/blogs/news", "title": "News", "author" : "", "tags" : "", - "body": " - Blogs Categories: News Weekly Updates Releases Uncategorized Additional filters: Grouped by Date KubeVirt Summit 2024 CfP is open!: Mar 19, 2024 Join us for the KubeVirt community's fourth annual dedicated online event Read More Announcing KubeVirt v1. 1: November 07, We are very pleased to announce the release of KubeVirt v1. 1! Read More Running KubeVirt with Cluster Autoscaler: September 6, 2023 This post explains how to set up KubeVirt with Cluster Autoscaler on EKS Read More Managing KubeVirt VMs with Ansible: September 5, 2023 This post explains how to manage KubeVirt VMs with the kubevirt. core Ansible collection. Read More NetworkPolicies for KubeVirt VMs secondary networks using OVN-Kubernetes: July 21, 2023 This post explains how to configure NetworkPolicies for KubeVirt VMs secondary networks. Read More KubeVirt v1. 0 has landed!: July 11, We are very pleased to announce the release of KubeVirt v1. 0! Read More Secondary networks connected to the physical underlay for KubeVirt VMs using OVN-Kubernetes: May 31, 2023 This post explains how to configure secondary networks connected to the physical underlay for KubeVirt virtual machines. Read More Secondary networks for KubeVirt VMs using OVN-Kubernetes: March 06, 2023 This post explains how to configure cluster-wide overlays as secondary networks for KubeVirt virtual machines. Read More KubeVirt Summit 2023!: Mar 03, 2023 Join us for the KubeVirt community's third annual dedicated online event Read More Simplifying KubeVirt's `VirtualMachine` UX with Instancetypes and Preferences: August 12, 2022 An introduction to Instancetypes and preferences in KubeVirt Read More KubeVirt: installing Microsoft Windows 11 from an ISO: August, 2, 2022 This blog post describes how to create a Microsoft Windows 11 virtual machine with KubeVirt Read More KubeVirt at KubeCon EU 2022: June 28, 2022 A short report on the two sessions KubeVirt presented at KubeCon EU 2022 Read More Load-balancer for virtual machines on bare metal Kubernetes clusters: May 03, 2022 This post illustrates setting up a virtual machine with MetalLB LoadBalancer service. 
Read More Dedicated migration network in KubeVirt: January 25, 2022 KubeVirt now supports using a separate network for live migrations Read More KubeVirt Summit is coming back!: Jan 24, 2022 Join us for the KubeVirt community's second annual dedicated online event Read More Running real-time workloads with improved performance: October 14, 2021 This blog post details the various enhancements made to improve the performance of real-time workloads in KubeVirt Read More Import AWS AMIs as KubeVirt Golden Images: September 21, 2021 This blog post outlines the fundamentals for how to import VMs from AWS into KubeVirt Read More Running virtual machines in Istio service mesh: August 25, 2021 This blog post demonstrates running virtual machines in Istio service mesh. Read More Kubernetes Authentication Options using KubeVirt Client Library: July 16, 2021 This blog post discusses authentication methods that can be used with the KubeVirt client-go library. Read More Using Intel vGPUs with Kubevirt: April 30, 2021 This blog post guides users on how to improve VM graphics performance using Intel Core processors, GPU Virtualization and Kubevirt. Read More Automated Windows Installation With Tekton Pipelines: April 21, 2021 This blog shows how KubeVirt Tekton Tasks can be utilized to automatically install and setup Windows VMs from scratch Read More The KubeVirt Summit 2021 is a wrap!: Mar 3, 2021 The KubeVirt community held their first dedicated online event last month. Read More KubeVirt Summit is coming!: Jan 12, 2021 The KubeVirt community will have their first dedicated online event Read More Customizing images for containerized VMs part I: December 10, 2020 A use case is exposed where containerized VMs running on top of Kubernetes ease the deployment of standardized VMs required by software developers. In this first part, we focus on creating standard images using different tools and then containerize them so that they can be stored in a container registry. . . . Read More High Availability -- RunStrategies for Virtual Machines: December 04, 2020 This blog post outlines the various RunStrategies available to VMs Read More Multiple Network Attachments with bridge CNI: October 21, 2020 This post illustrates configuring secondary interfaces at VMs with a L2 linux bridge at nodes using just kube api. Read More Import virtual machine from oVirt: August 07, 2020 This blog post describes how to import virtual machine using vm-import-operator Read More Minikube KubeVirt addon: July 20, 2020 This blog post describes how to use minikube and the KubeVirt addon Read More Common-templates: July 01, 2020 This blog post describe basic factors and usage of common-templates Read More Migrate a sample Windows workload to Kubernetes using KubeVirt and CDI: June 22, 2020 This blog post outlines methods to migrate a sample Windows workload to Kubernetes using KubeVirt and CDI Read More SELinux, from basics to KubeVirt: May 25, 2020 This blog details step by step how SELinux is leveraged in KubeVirt to isolate virtual machines from each other. Read More KubeVirt VM Image Usage Patterns: May 12, 2020 This blog post outlines methods for building and using virtual machine images with KubeVirt Read More KubeVirt Operation Fundamentals: April 30, 2020 This blog post outlines fundamentals around the KubeVirt's approach to installs and updates. Read More KubeVirt Security Fundamentals: April 29, 2020 This blog post outlines fundamentals around the KubeVirt's approach to security. 
Read More KubeVirt Architecture Fundamentals: April 28, 2020 This blog post outlines the core set of design decisions that shaped KubeVirt into what it is today. Read More Live Migration in KubeVirt: March 22, 2020 KubeVirt leverages Live Migration to support workloads to keep running while nodes can be moved to maintenance, etc Check what is needed to get it working and how it works. Read More Advanced scheduling using affinity and anti-affinity rules: February 25, 2020 KubeVirt can take advantage of Kubernetes inner features to provide an advanced scheduling mechanism to virtual machines (VMs). The same or even more complex affinity and anti-affinity rules can be assigned to VMs or Pods in Kubernetes than in traditional virtualization solutions. Read More KubeVirt: installing Microsoft Windows from an ISO: February, 14, 2020 In this blogpost a Virtual Machine is created to install Microsoft Windows in KubeVirt from an ISO following the traditional way. Read More NA KubeCon 2019 - KubeVirt Deep Dive: Virtualized GPU Workloads on KubeVirt - David Vossel, Red Hat & Vishesh Tanksale, NVIDIA: February, 06, 2019 In this blogpost, we talk about the presentation that David Vossel and Vishesh Tanksale did at the KubeCon 2019 in North America. The talk is called KubeVirt Deep Dive: Virtualized GPU Workloads on KubeVirt and they go through from a KubeVirt introduction until a complex architecture with NVIDIA GPU devices. . . Read More NA KubeCon 2019 - KubeVirt introduction by Steve Gordon and Chandrakanth Jakkidi: February, 01, 2019 KubeVirt Intro: Virtual Machine Management on Kubernetes - Steve Gordon & Chandrakanth Jakkidi Read More Managing KubeVirt with OpenShift Web Console: January 24, 2020 This article focuses on running the OKD web console in a native Kubernetes cluster leveraging the deep integrations with KubeVirt. 
OKD web console will allow us to create, manage and delete virtual machines from a friendly user interface Read More KubeVirt Laboratory 3, upgrades: January, 21, 2020 In this video, we are showing the step by step of the KubeVirt Laboratory 3 how to upgrade KubeVirt Read More KubeVirt user interface options: December, 2019 Overview of different user interface options to manage KubeVirt Read More KubeVirt Laboratory 2, experimenting with CDI: December 10, 2019 In this video, we are showing the step by step of the KubeVirt Laboratory 2 Experimenting with CDI Read More KubeVirt Laboratory 1, use KubeVirt: December 4, 2019 In this video, we are showing the step by step of the KubeVirt Laboratory 1 Use KubeVirt Read More KubeVirt basic operations video: November 28, 2019 KubeVirt basic operations video Read More Jenkins Infra upgrade: November 22, 2019 Jenkins CI server upgrade and jobs for KubeVirt labs and image creation refresh Read More KubeVirt at KubeCon + CloudNativeCon North America: November 12, 2019 A summary of KubeVirt related activities during KubeCon + CloudNativeCon North America 2019 in San Diego Read More Prow jobs for KubeVirt website and Tutorial repo: October 31, 2019 How prow is used to keep website and tutorials 'up' Read More Jenkins Jobs for KubeVirt lab validation: October 31, 2019 How Jenkins is leveraged for automation at KubeVirt Cloud Image Builder and Lab Validation Read More Persistent storage of your Virtual Machines in KubeVirt with Rook: October, 2019 Persistent storage of your Virtual Machines in KubeVirt with Rook Read More KubeVirt on Kubernetes with CRI-O from scratch - Installing KubeVirt: October 23, 2019 How to setup a home lab environment with Kubernetes, CRI-O and KubeVirt step by step guide - Installing KubeVirt Read More KubeVirt on Kubernetes with CRI-O from scratch - Installing Kubernetes: October 16, 2019 How to setup a home lab environment with Kubernetes, CRI-O and KubeVirt step by step guide - Installing Kubernetes Read More KubeVirt on Kubernetes with CRI-O from scratch: October, 2019 How to setup a home lab environment with Kubernetes, CRI-O and KubeVirt step by step guide Read More KubeVirt is now part of CNCF Sandbox: September, 2019 KubeVirt has been approved as a project in the sandbox Read More KubeVirt Condition Types Renamed: Aug 9, 2019 Condition Types have been RENAMED Read More KubeVirt Condition Types Rename in Custom Resource: Aug 1, 2019 KubeVirt is renaming Condition Types in next release Read More Node Drain in KubeVirt: Jul 30, 2019 Evicting VM's using Node Drain Functionality Read More How to import VM into KubeVirt: Jul 29, 2019 Import a VM into the Kubernetes Platform using CDI Read More Website roadmap: 8 Jul, 2019 List of identified things that might need an improvement Read More KubeVirt with Ansible, part 2: 8 Jul, 2019 A deeper dive into Ansible 2. 8's KubeVirt features Read More KubeVirt vagrant provider: June 4, 2019 The post describes how to use kubevirt vagrant provider Read More KubeVirt with Ansible, part 1 – Introduction: May 21, 2019 With the release of Ansible 2. 
8 comes a new set of KubeVirt modules Read More Hyper Converged Operator: May 08, 2019 Hyper Converged Operator on OCP 4 and K8s(HCO) Read More More About Kubevirt Metrics: Mar 14, 2019 A status update about KubeVirt metrics Read More Federated Kubevirt: Feb 22, 2019 Federated KubeVirt Read More An Overview To Kubevirt Metrics: Jan 22, 2019 An overview to KubeVirt metrics Read More Kubevirt Autolatest: Dec 13, 2018 KubeVirt Autodeployer Read More Kubevirt At Kubecon Na: November 26, 2018 KubeVirt at KubeCon North America 2019 Read More Ignition Support: November 20, 2018 Ignition Support Read More New Volume Types: November 16, 2018 New Volume Types - ConfigMap, Secret and ServiceAccount Read More Cdi Datavolumes: October 11, 2018 CDI DataVolumes Read More Containerized Data Importer: October 09, 2018 This post describes how to import, clone and upload a Virtual Machine disk image to kubernetes cluster. Read More Kubevirt Network Rehash: October 11, 2018 Quick rehash of the network deep-dive Read More Attaching To Multiple Networks: September 12, 2018 This post describes how to connect a Virtual Machine to more than one network using the Multus CNI. Read More Kubevirt Memory Overcommit: Sept 11, 2018 KubeVirt Memory Overcommitment Read More Kubevirtci: August 8, 2018 This post tries to give a quick overview of kubevirtci and why we use it to build our testing clusters. Read More Kubevirt V0. 7. 0: July 23, 2018 KubeVirt 0. 7. 0 Highlights Read More Unit Test Howto: July 3, 2018 This post tries to demystify some of our unit test mechanism, hopefully will make it easier to write more tests and increase our code coverage! Read More Run Istio With Kubevirt: June 21, 2018 Use Istio with KubeVirt Read More Kvm Using Device Plugins: June 20, 2018 KubeVirt Using Device Plugins For KVM Read More Some Notes On Some Highlights Of V020: January 05, 2018 The very first KubeVirt release of KubeVirt in the new year () had a few notable highlights which were brewing over the last few weeks. Read More Comparing KubeVirt to other technologies: July, 18, 2017 In this blogpost, we discuss on the technology provided by KubeVirt and how it stands against other technologies available Read More The Role of LibVirt: July, 18, 2017 In this blogpost, we discuss on libvirt role in KubeVirt Read More " + "body": " - Blogs Categories: News Weekly Updates Releases Uncategorized Additional filters: Grouped by Date KubeVirt Summit 2024 CfP is open!: Mar 19, 2024 Join us for the KubeVirt community's fourth annual dedicated online event Read More Announcing KubeVirt v1. 1: November 07, We are very pleased to announce the release of KubeVirt v1. 1! Read More Running KubeVirt with Cluster Autoscaler: September 6, 2023 This post explains how to set up KubeVirt with Cluster Autoscaler on EKS Read More Managing KubeVirt VMs with Ansible: September 5, 2023 This post explains how to manage KubeVirt VMs with the kubevirt. core Ansible collection. Read More NetworkPolicies for KubeVirt VMs secondary networks using OVN-Kubernetes: July 21, 2023 This post explains how to configure NetworkPolicies for KubeVirt VMs secondary networks. Read More KubeVirt v1. 0 has landed!: July 11, We are very pleased to announce the release of KubeVirt v1. 0! Read More Secondary networks connected to the physical underlay for KubeVirt VMs using OVN-Kubernetes: May 31, 2023 This post explains how to configure secondary networks connected to the physical underlay for KubeVirt virtual machines. 
Read More Secondary networks for KubeVirt VMs using OVN-Kubernetes: March 06, 2023 This post explains how to configure cluster-wide overlays as secondary networks for KubeVirt virtual machines. Read More KubeVirt Summit 2023!: Mar 03, 2023 Join us for the KubeVirt community's third annual dedicated online event Read More Simplifying KubeVirt's `VirtualMachine` UX with Instancetypes and Preferences: August 12, 2022 An introduction to Instancetypes and preferences in KubeVirt Read More KubeVirt: installing Microsoft Windows 11 from an ISO: August, 2, 2022 This blog post describes how to create a Microsoft Windows 11 virtual machine with KubeVirt Read More KubeVirt at KubeCon EU 2022: June 28, 2022 A short report on the two sessions KubeVirt presented at KubeCon EU 2022 Read More Load-balancer for virtual machines on bare metal Kubernetes clusters: May 03, 2022 This post illustrates setting up a virtual machine with MetalLB LoadBalancer service. Read More Dedicated migration network in KubeVirt: January 25, 2022 KubeVirt now supports using a separate network for live migrations Read More KubeVirt Summit is coming back!: Jan 24, 2022 Join us for the KubeVirt community's second annual dedicated online event Read More Running real-time workloads with improved performance: October 14, 2021 This blog post details the various enhancements made to improve the performance of real-time workloads in KubeVirt Read More Import AWS AMIs as KubeVirt Golden Images: September 21, 2021 This blog post outlines the fundamentals for how to import VMs from AWS into KubeVirt Read More Running virtual machines in Istio service mesh: August 25, 2021 This blog post demonstrates running virtual machines in Istio service mesh. Read More Kubernetes Authentication Options using KubeVirt Client Library: July 16, 2021 This blog post discusses authentication methods that can be used with the KubeVirt client-go library. Read More Using Intel vGPUs with Kubevirt: April 30, 2021 This blog post guides users on how to improve VM graphics performance using Intel Core processors, GPU Virtualization and Kubevirt. Read More Automated Windows Installation With Tekton Pipelines: April 21, 2021 This blog shows how KubeVirt Tekton Tasks can be utilized to automatically install and setup Windows VMs from scratch Read More The KubeVirt Summit 2021 is a wrap!: Mar 3, 2021 The KubeVirt community held their first dedicated online event last month. Read More KubeVirt Summit is coming!: Jan 12, 2021 The KubeVirt community will have their first dedicated online event Read More Customizing images for containerized VMs part I: December 10, 2020 A use case is exposed where containerized VMs running on top of Kubernetes ease the deployment of standardized VMs required by software developers. In this first part, we focus on creating standard images using different tools and then containerize them so that they can be stored in a container registry. . . . Read More High Availability -- RunStrategies for Virtual Machines: December 04, 2020 This blog post outlines the various RunStrategies available to VMs Read More Multiple Network Attachments with bridge CNI: October 21, 2020 This post illustrates configuring secondary interfaces at VMs with a L2 linux bridge at nodes using just kube api. 
Read More Import virtual machine from oVirt: August 07, 2020 This blog post describes how to import virtual machine using vm-import-operator Read More Minikube KubeVirt addon: July 20, 2020 This blog post describes how to use minikube and the KubeVirt addon Read More Common-templates: July 01, 2020 This blog post describe basic factors and usage of common-templates Read More Migrate a sample Windows workload to Kubernetes using KubeVirt and CDI: June 22, 2020 This blog post outlines methods to migrate a sample Windows workload to Kubernetes using KubeVirt and CDI Read More SELinux, from basics to KubeVirt: May 25, 2020 This blog details step by step how SELinux is leveraged in KubeVirt to isolate virtual machines from each other. Read More KubeVirt VM Image Usage Patterns: May 12, 2020 This blog post outlines methods for building and using virtual machine images with KubeVirt Read More KubeVirt Operation Fundamentals: April 30, 2020 This blog post outlines fundamentals around the KubeVirt's approach to installs and updates. Read More KubeVirt Security Fundamentals: April 29, 2020 This blog post outlines fundamentals around the KubeVirt's approach to security. Read More KubeVirt Architecture Fundamentals: April 28, 2020 This blog post outlines the core set of design decisions that shaped KubeVirt into what it is today. Read More Live Migration in KubeVirt: March 22, 2020 KubeVirt leverages Live Migration to support workloads to keep running while nodes can be moved to maintenance, etc Check what is needed to get it working and how it works. Read More Advanced scheduling using affinity and anti-affinity rules: February 25, 2020 KubeVirt can take advantage of Kubernetes inner features to provide an advanced scheduling mechanism to virtual machines (VMs). The same or even more complex affinity and anti-affinity rules can be assigned to VMs or Pods in Kubernetes than in traditional virtualization solutions. Read More KubeVirt: installing Microsoft Windows from an ISO: February, 14, 2020 In this blogpost a Virtual Machine is created to install Microsoft Windows in KubeVirt from an ISO following the traditional way. Read More NA KubeCon 2019 - KubeVirt Deep Dive: Virtualized GPU Workloads on KubeVirt - David Vossel, Red Hat & Vishesh Tanksale, NVIDIA: February, 06, 2019 In this blogpost, we talk about the presentation that David Vossel and Vishesh Tanksale did at the KubeCon 2019 in North America. The talk is called KubeVirt Deep Dive: Virtualized GPU Workloads on KubeVirt and they go through from a KubeVirt introduction until a complex architecture with NVIDIA GPU devices. . . Read More NA KubeCon 2019 - KubeVirt introduction by Steve Gordon and Chandrakanth Jakkidi: February, 01, 2019 KubeVirt Intro: Virtual Machine Management on Kubernetes - Steve Gordon & Chandrakanth Jakkidi Read More Managing KubeVirt with OpenShift Web Console: January 24, 2020 This article focuses on running the OKD web console in a native Kubernetes cluster leveraging the deep integrations with KubeVirt. 
OKD web console will allow us to create, manage and delete virtual machines from a friendly user interface Read More KubeVirt Laboratory 3, upgrades: January, 21, 2020 In this video, we are showing the step by step of the KubeVirt Laboratory 3 how to upgrade KubeVirt Read More KubeVirt user interface options: December, 2019 Overview of different user interface options to manage KubeVirt Read More KubeVirt Laboratory 2, experimenting with CDI: December 10, 2019 In this video, we are showing the step by step of the KubeVirt Laboratory 2 Experimenting with CDI Read More KubeVirt Laboratory 1, use KubeVirt: December 4, 2019 In this video, we are showing the step by step of the KubeVirt Laboratory 1 Use KubeVirt Read More KubeVirt basic operations video: November 28, 2019 KubeVirt basic operations video Read More Jenkins Infra upgrade: November 22, 2019 Jenkins CI server upgrade and jobs for KubeVirt labs and image creation refresh Read More KubeVirt at KubeCon + CloudNativeCon North America: November 12, 2019 A summary of KubeVirt related activities during KubeCon + CloudNativeCon North America 2019 in San Diego Read More Prow jobs for KubeVirt website and Tutorial repo: October 31, 2019 How prow is used to keep website and tutorials 'up' Read More Jenkins Jobs for KubeVirt lab validation: October 31, 2019 How Jenkins is leveraged for automation at KubeVirt Cloud Image Builder and Lab Validation Read More Persistent storage of your Virtual Machines in KubeVirt with Rook: October, 2019 Persistent storage of your Virtual Machines in KubeVirt with Rook Read More KubeVirt on Kubernetes with CRI-O from scratch - Installing KubeVirt: October 23, 2019 How to setup a home lab environment with Kubernetes, CRI-O and KubeVirt step by step guide - Installing KubeVirt Read More KubeVirt on Kubernetes with CRI-O from scratch - Installing Kubernetes: October 16, 2019 How to setup a home lab environment with Kubernetes, CRI-O and KubeVirt step by step guide - Installing Kubernetes Read More KubeVirt on Kubernetes with CRI-O from scratch: October, 2019 How to setup a home lab environment with Kubernetes, CRI-O and KubeVirt step by step guide Read More KubeVirt is now part of CNCF Sandbox: September, 2019 KubeVirt has been approved as a project in the sandbox Read More KubeVirt Condition Types Renamed: Aug 9, 2019 Condition Types have been RENAMED Read More KubeVirt Condition Types Rename in Custom Resource: Aug 1, 2019 KubeVirt is renaming Condition Types in next release Read More Node Drain in KubeVirt: Jul 30, 2019 Evicting VM's using Node Drain Functionality Read More How to import VM into KubeVirt: Jul 29, 2019 Import a VM into the Kubernetes Platform using CDI Read More Website roadmap: 8 Jul, 2019 List of identified things that might need an improvement Read More KubeVirt with Ansible, part 2: 8 Jul, 2019 A deeper dive into Ansible 2. 8's KubeVirt features Read More KubeVirt vagrant provider: June 4, 2019 The post describes how to use kubevirt vagrant provider Read More KubeVirt with Ansible, part 1 – Introduction: May 21, 2019 With the release of Ansible 2. 
8 comes a new set of KubeVirt modules Read More Hyper Converged Operator: May 08, 2019 Hyper Converged Operator on OCP 4 and K8s(HCO) Read More More About Kubevirt Metrics: Mar 14, 2019 A status update about KubeVirt metrics Read More Federated Kubevirt: Feb 22, 2019 Federated KubeVirt Read More An Overview To Kubevirt Metrics: Jan 22, 2019 An overview to KubeVirt metrics Read More Kubevirt Autolatest: Dec 13, 2018 KubeVirt Autodeployer Read More Kubevirt At Kubecon Na: November 26, 2018 KubeVirt at KubeCon North America 2019 Read More Ignition Support: November 20, 2018 Ignition Support Read More New Volume Types: November 16, 2018 New Volume Types - ConfigMap, Secret and ServiceAccount Read More Cdi Datavolumes: October 11, 2018 CDI DataVolumes Read More Containerized Data Importer: October 09, 2018 This post describes how to import, clone and upload a Virtual Machine disk image to kubernetes cluster. Read More Kubevirt Network Rehash: October 11, 2018 Quick rehash of the network deep-dive Read More Attaching To Multiple Networks: September 12, 2018 This post describes how to connect a Virtual Machine to more than one network using the Multus CNI. Read More Kubevirt Memory Overcommit: Sept 11, 2018 KubeVirt Memory Overcommitment Read More Kubevirtci: August 8, 2018 This post tries to give a quick overview of kubevirtci and why we use it to build our testing clusters. Read More Kubevirt V0. 7. 0: July 23, 2018 KubeVirt 0. 7. 0 Highlights Read More Unit Test Howto: July 3, 2018 This post tries to demystify some of our unit test mechanism, hopefully will make it easier to write more tests and increase our code coverage! Read More Run Istio With Kubevirt: June 21, 2018 Use Istio with KubeVirt Read More Kvm Using Device Plugins: June 20, 2018 KubeVirt Using Device Plugins For KVM Read More Some Notes On Some Highlights Of V020: January 05, 2018 The very first KubeVirt release of KubeVirt in the new year () had a few notable highlights which were brewing over the last few weeks. Read More Comparing KubeVirt to other technologies: July, 18, 2017 In this blogpost, we discuss on the technology provided by KubeVirt and how it stands against other technologies available Read More The Role of LibVirt: July, 18, 2017 In this blogpost, we discuss on libvirt role in KubeVirt Read More " }, , { - "id": 205, + "id": 206, "url": "/privacy/", "title": "Privacy", "author" : "", "tags" : "privacy, cookies, hosting", "body": " - Privacy Statement for the KubeVirt Project: As KubeVirt is a project of the Cloud Native Computing Foundation, this site falls under the Linux Foundation Privacy Policy. All terms of that privacy policy apply to this site. How to Contact Us: If you have any questions about any of these practices or KubeVirt’s use of your personal information, please feel free to contact us or file an Issue in our Github repo. KubeVirt will work with you to resolve any concerns you may have about this Statement. Changes to this Privacy Statement: KubeVirt reserves the right to change this policy from time to time. If we do make changes, the revised Privacy Statement will be posted on this site. A notice will be posted on our blog and/or mailing lists whenever this privacy statement is changed in a material way. This Privacy Statement was last amended on August 17, 2021. 
" }, , { - "id": 206, + "id": 207, "url": "/quickstart_cloud/", "title": "KubeVirt quickstart with cloud providers", "author" : "", "tags" : "AliCloud, Amazon, AWS, Google, GCP, Kubernetes, KubeVirt, quickstart, tutorial, VM, virtual machine", "body": " - Easy install using cloud providers: KubeVirt can be used on cloud computing providers such as AWS, Azure, GCP, AliCloud. Prepare a cloud based Kubernetes cluster: A kubectl client is necessary for operating a Kubernetes cluster. It is important to install a kubectl client version that matches the kubernetes version to avoid issues regarding skew. To install kubectl client please follow the official documentation for your system using the instructions located here. Check the Kubernetes. io Turnkey Cloud Solutions guide for each cloud provider on how to build infrastructure to match your use case. Be aware of the costs of associated with using infrastructure provided by cloud computing providers. Future labs will require at least 30 GiB of disk space. Deploy KubeVirt: KubeVirt can be installed using the KubeVirt operator, which manages the lifecycle of all the KubeVirt core components. Use kubectl to deploy the KubeVirt operator: export VERSION=$(curl -s https://storage. googleapis. com/kubevirt-prow/release/kubevirt/kubevirt/stable. txt)echo $VERSIONkubectl create -f https://github. com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator. yaml Nested virtualization If the minikube cluster runs on a virtual machine consider enabling nested virtualization. Follow the instructions described here. If for any reason nested virtualization cannot be enabled do enable KubeVirt emulation as follows: kubectl -n kubevirt patch kubevirt kubevirt --type=merge --patch '{ spec :{ configuration :{ developerConfiguration :{ useEmulation :true}}}}' Again use kubectl to deploy the KubeVirt custom resource definitions: kubectl create -f https://github. com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr. yaml Verify components: By default KubeVirt will deploy 7 pods, 3 services, 1 daemonset, 3 deployment apps, 3 replica sets. Check the deployment: kubectl get kubevirt. kubevirt. io/kubevirt -n kubevirt -o=jsonpath= {. status. phase} Check the components: kubectl get all -n kubevirt Virtctl: KubeVirt provides an additional binary called virtctl for quick access to the serial and graphical ports of a VM and also handle start/stop operations. Install: virtctl can be retrieved from the release page of the KubeVirt github page. Run the following: VERSION=$(kubectl get kubevirt. kubevirt. io/kubevirt -n kubevirt -o=jsonpath= {. status. observedKubeVirtVersion} )ARCH=$(uname -s | tr A-Z a-z)-$(uname -m | sed 's/x86_64/amd64/') || windows-amd64. exeecho ${ARCH}curl -L -o virtctl https://github. com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-${ARCH}chmod +x virtctlsudo install virtctl /usr/local/bin Install as Krew plugin: virtctl can be installed as a plugin via the krew plugin manager. Occurrences of virtctl <command>. . . can then be read as kubectl virt <command>. . . . Run the following to install: kubectl krew install virt What’s next: Labs: After you have deployed KubeVirt you can work through the labs to help you get acquainted with KubeVirt and how it can be used to create and deploy VMs with Kubernetes. The first lab is “Use KubeVirt”. This lab walks through the creation of a Virtual Machine Instance (VMI) on Kubernetes and then how virtctl is used to interact with its console. 
The second lab is “Experiment with CDI”. This lab shows how to use the Containerized Data Importer (CDI) to import a VM image into a Persistent Volume Claim (PVC) and then how to attach the PVC to a VM as a block device. The third lab is “KubeVirt upgrades”. This lab shows how easy and safe is to upgrade the KubeVirt installation with zero down-time. Found a bug?: We are interested in hearing about your experience. Please report any problems to the kubevirt. io issue tracker. " }, { - "id": 207, + "id": 208, "url": "/quickstart_kind/", "title": "KubeVirt quickstart with kind", "author" : "", "tags" : "Kubernetes, kind, kubevirt, VM, virtual machine", "body": " - Easy install using kind: Kind quickly sets up a local Kubernetes cluster on macOS, Linux, and Windows allowing software developers to quickly get started working with Kubernetes. Prepare kind Kubernetes environment: A kubectl client is necessary for operating a Kubernetes cluster. It is important to install a kubectl client version that matches the kubernetes version to avoid issues regarding skew. To install kubectl client please follow the official documentation for your system using the instructions located here. To install kind please follow the official documentation for your system using the instructions located here. Starting kind can be as simple as running the following command: kind create cluster See the kind User Guide here for advanced start options and instructions on how to operate kind. Deploy KubeVirt: KubeVirt can be installed using the KubeVirt operator, which manages the lifecycle of all the KubeVirt core components. Use kubectl to deploy the KubeVirt operator: export VERSION=$(curl -s https://storage. googleapis. com/kubevirt-prow/release/kubevirt/kubevirt/stable. txt)echo $VERSIONkubectl create -f https://github. com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator. yaml Nested virtualization If the kind cluster runs on a virtual machine consider enabling nested virtualization. Follow the instructions described here. If for any reason nested virtualization cannot be enabled do enable KubeVirt emulation as follows: kubectl -n kubevirt patch kubevirt kubevirt --type=merge --patch '{ spec :{ configuration :{ developerConfiguration :{ useEmulation :true}}}}' Again use kubectl to deploy the KubeVirt custom resource definitions: kubectl create -f https://github. com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr. yaml Verify components: By default KubeVirt will deploy 7 pods, 3 services, 1 daemonset, 3 deployment apps, 3 replica sets. Check the deployment: kubectl get kubevirt. kubevirt. io/kubevirt -n kubevirt -o=jsonpath= {. status. phase} Check the components: kubectl get all -n kubevirt Virtctl: KubeVirt provides an additional binary called virtctl for quick access to the serial and graphical ports of a VM and also handle start/stop operations. Install: virtctl can be retrieved from the release page of the KubeVirt github page. Run the following: VERSION=$(kubectl get kubevirt. kubevirt. io/kubevirt -n kubevirt -o=jsonpath= {. status. observedKubeVirtVersion} )ARCH=$(uname -s | tr A-Z a-z)-$(uname -m | sed 's/x86_64/amd64/') || windows-amd64. exeecho ${ARCH}curl -L -o virtctl https://github. com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-${ARCH}chmod +x virtctlsudo install virtctl /usr/local/bin Install as Krew plugin: virtctl can be installed as a plugin via the krew plugin manager. Occurrences of virtctl <command>. . . 
can then be read as kubectl virt <command>. . . . Run the following to install: kubectl krew install virt What’s next: Labs: After you have deployed KubeVirt you can work through the labs to help you get acquainted with KubeVirt and how it can be used to create and deploy VMs with Kubernetes. The first lab is “Use KubeVirt”. This lab walks through the creation of a Virtual Machine Instance (VMI) on Kubernetes and then how virtctl is used to interact with its console. The second lab is “Experiment with CDI”. This lab shows how to use the Containerized Data Importer (CDI) to import a VM image into a Persistent Volume Claim (PVC) and then how to attach the PVC to a VM as a block device. The third lab is “KubeVirt upgrades”. This lab shows how easy and safe is to upgrade the KubeVirt installation with zero down-time. Found a bug?: We are interested in hearing about your experience. Please report any problems to the kubevirt. io issue tracker. " }, { - "id": 208, + "id": 209, "url": "/quickstart_minikube/", "title": "KubeVirt quickstart with Minikube", "author" : "", "tags" : "Kubernetes, minikube, minikube addons, kubevirt, VM, virtual machine", "body": " - Easy install using minikube: Minikube quickly sets up a local Kubernetes cluster on macOS, Linux, and Windows allowing software developers to quickly get started working with Kubernetes. Prepare minikube Kubernetes environment: A kubectl client is necessary for operating a Kubernetes cluster. It is important to install a kubectl client version that matches the kubernetes version to avoid issues regarding skew. To install kubectl client please follow the official documentation for your system using the instructions located here. Minikube ships a kubectl client version that matches the kubernetes version to avoid skew issues. To use the minikube shipped client do one of the following: All normal kubectl commands should be performed as minikube kubectl It can be added to aliases by running the following: alias kubectl='minikube kubectl --' It can be installed directly to the host by running the following: VERSION=$(minikube kubectl version | head -1 | awk -F', ' {'print $3'} | awk -F':' {'print $2'} | sed s/\ //g)sudo install ${HOME}/. minikube/cache/linux/${VERSION}/kubectl /usr/local/bin To install minikube please follow the official documentation for your system using the instructions located here. Starting minikube can be as simple as running the following command: minikube start --cni=flannel CNI: We add the container network interface (CNI) called flannel to make minikube work with VMs that use a masquerade type network interface. If a CNI does not work for you, switch instances of “masquerade” to “bridge” in example VM definitions. See the minikube handbook here for advanced start options and instructions on how to operate minikube. Multi-Node Minikube: Minikube has support for adding additional nodes to a cluster. This can behelpful in experimenting with KubeVirt on minikube as some operations like nodeaffinity or live migration require more than one cluster node to demonstrate. Container Network Interface: By default, minikube sets up a kubernetes cluster using either a virtualmachine appliance or a container. For a single node setup, local networkconnectivity is sufficient. In the case where multiple nodes are involved, evenwhen using containers or VMs on the same host, kubernetes needs to define ashared network to allow pods on one host to communicate with pods on the otherhost. 
To this end, minikube supports a number of Container Network Interface(CNI) pluginsthe simplest of which is flannel. Updating the minikube start command: To have minikube start up with the flannel CNI plugin over two nodes, alter the minikube start command: minikube start --nodes=2 --cni=flannelCore DNS race condition An issue has beenreported where thecoredns pod in multi-node minikube comes up with the wrong IP address. Ifthis happens, kubevirt will fail to install properly. To work around, deletethe coredns pod from the kube-system namespace and disable/enable thekubevirt addon in minikube. Deploy KubeVirt: KubeVirt can be installed using the KubeVirt operator, which manages the lifecycle of all the KubeVirt core components. Below are two examples of how to install KubeVirt using the latest release. The easy way: Addon currently broken An issue has been reported where more recent versions of minikube break the kubevirt addon. Fall back to the “in-depth” section below until this is resolved. Installing KubeVirt can be as simple as the following command: minikube addons enable kubevirt The in-depth way: Use kubectl to deploy the KubeVirt operator: export VERSION=$(curl -s https://storage. googleapis. com/kubevirt-prow/release/kubevirt/kubevirt/stable. txt)echo $VERSIONkubectl create -f https://github. com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator. yaml Nested virtualization If the minikube cluster runs on a virtual machine consider enabling nested virtualization. Follow the instructions described here. If for any reason nested virtualization cannot be enabled do enable KubeVirt emulation as follows: kubectl -n kubevirt patch kubevirt kubevirt --type=merge --patch '{ spec :{ configuration :{ developerConfiguration :{ useEmulation :true}}}}' Again use kubectl to deploy the KubeVirt custom resource definitions: kubectl create -f https://github. com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr. yaml Verify components: By default KubeVirt will deploy 7 pods, 3 services, 1 daemonset, 3 deployment apps, 3 replica sets. Check the deployment: kubectl get kubevirt. kubevirt. io/kubevirt -n kubevirt -o=jsonpath= {. status. phase} Check the components: kubectl get all -n kubevirt When using the minikube KubeVirt addon check logs of the kubevirt-install-manager pod: kubectl logs pod/kubevirt-install-manager -n kube-system Virtctl: KubeVirt provides an additional binary called virtctl for quick access to the serial and graphical ports of a VM and also handle start/stop operations. Install: virtctl can be retrieved from the release page of the KubeVirt github page. Run the following: VERSION=$(kubectl get kubevirt. kubevirt. io/kubevirt -n kubevirt -o=jsonpath= {. status. observedKubeVirtVersion} )ARCH=$(uname -s | tr A-Z a-z)-$(uname -m | sed 's/x86_64/amd64/') || windows-amd64. exeecho ${ARCH}curl -L -o virtctl https://github. com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-${ARCH}chmod +x virtctlsudo install virtctl /usr/local/bin Install as Krew plugin: virtctl can be installed as a plugin via the krew plugin manager. Occurrences of virtctl <command>. . . can then be read as kubectl virt <command>. . . . Run the following to install: kubectl krew install virt What’s next: Labs: After you have deployed KubeVirt you can work through the labs to help you get acquainted with KubeVirt and how it can be used to create and deploy VMs with Kubernetes. The first lab is “Use KubeVirt”. 
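For convenience, the emulation fallback and the virtctl installation covered in the steps above can be run as follows before starting the labs. This is a sketch of the quoted commands for Linux and macOS hosts; Windows users should instead download the windows-amd64 virtctl executable from the release page:

```bash
# Fall back to software emulation when nested virtualization cannot be enabled
kubectl -n kubevirt patch kubevirt kubevirt --type=merge \
  --patch '{"spec":{"configuration":{"developerConfiguration":{"useEmulation":true}}}}'

# Download the virtctl binary that matches the deployed KubeVirt version
VERSION=$(kubectl get kubevirt.kubevirt.io/kubevirt -n kubevirt \
  -o=jsonpath="{.status.observedKubeVirtVersion}")
ARCH=$(uname -s | tr A-Z a-z)-$(uname -m | sed 's/x86_64/amd64/')
curl -L -o virtctl \
  "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-${ARCH}"
chmod +x virtctl
sudo install virtctl /usr/local/bin

# Alternatively, install virtctl as a kubectl plugin via krew
kubectl krew install virt
```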
This lab walks through the creation of a Virtual Machine Instance (VMI) on Kubernetes and then how virtctl is used to interact with its console. The second lab is “Experiment with CDI”. This lab shows how to use the Containerized Data Importer (CDI) to import a VM image into a Persistent Volume Claim (PVC) and then how to attach the PVC to a VM as a block device. The third lab is “KubeVirt upgrades”. This lab shows how easy and safe is to upgrade the KubeVirt installation with zero down-time. Found a bug?: We are interested in hearing about your experience. Please report any problems to the kubevirt. io issue tracker. " }, { - "id": 209, + "id": 210, "url": "/category/releases.html", "title": "Releases", "author" : "", "tags" : "", "body": " - " }, { - "id": 210, + "id": 211, "url": "/blogs/releases", "title": "Releases", "author" : "", "tags" : "", - "body": " - Blogs Categories: News Weekly Updates Releases Uncategorized Additional filters: Grouped by Date KubeVirt v1. 2. 0: March 05, 2024 This article provides information about KubeVirt release v1. 2. 0 changes Read More KubeVirt v1. 1. 0: November 06, 2023 This article provides information about KubeVirt release v1. 1. 0 changes Read More KubeVirt v1. 0. 0: July 06, 2023 This article provides information about KubeVirt release v1. 0. 0 changes Read More KubeVirt v0. 59. 0: March 01, 2023 This article provides information about KubeVirt release v0. 59. 0 changes Read More KubeVirt v0. 58. 0: October 13, 2022 This article provides information about KubeVirt release v0. 58. 0 changes Read More KubeVirt v0. 57. 0: September 12, 2022 This article provides information about KubeVirt release v0. 57. 0 changes Read More KubeVirt v0. 56. 0: August 18, 2022 This article provides information about KubeVirt release v0. 56. 0 changes Read More KubeVirt v0. 55. 0: July 14, 2022 This article provides information about KubeVirt release v0. 55. 0 changes Read More KubeVirt v0. 54. 0: June 08, 2022 This article provides information about KubeVirt release v0. 54. 0 changes Read More KubeVirt v0. 53. 0: May 09, 2022 This article provides information about KubeVirt release v0. 53. 0 changes Read More KubeVirt v0. 52. 0: April 08, 2022 This article provides information about KubeVirt release v0. 52. 0 changes Read More KubeVirt v0. 51. 0: March 08, 2022 This article provides information about KubeVirt release v0. 51. 0 changes Read More KubeVirt v0. 50. 0: February 09, 2022 This article provides information about KubeVirt release v0. 50. 0 changes Read More KubeVirt v0. 49. 0: January 11, 2022 This article provides information about KubeVirt release v0. 49. 0 changes Read More KubeVirt v0. 48. 0: December 06, 2021 This article provides information about KubeVirt release v0. 48. 0 changes Read More KubeVirt v0. 46. 0: October 08, 2021 This article provides information about KubeVirt release v0. 46. 0 changes Read More KubeVirt v0. 45. 0: September 08, 2021 This article provides information about KubeVirt release v0. 45. 0 changes Read More KubeVirt v0. 44. 0: August 09, 2021 This article provides information about KubeVirt release v0. 44. 0 changes Read More KubeVirt v0. 43. 0: July 09, 2021 This article provides information about KubeVirt release v0. 43. 0 changes Read More KubeVirt v0. 42. 0: June 08, 2021 This article provides information about KubeVirt release v0. 42. 0 changes Read More KubeVirt v0. 41. 0: May 12, 2021 This article provides information about KubeVirt release v0. 41. 0 changes Read More KubeVirt v0. 40. 
0: April 19, 2021 This article provides information about KubeVirt release v0. 40. 0 changes Read More KubeVirt v0. 39. 0: March 10, 2021 This article provides information about KubeVirt release v0. 39. 0 changes Read More KubeVirt v0. 38. 0: February 08, 2021 This article provides information about KubeVirt release v0. 38. 0 changes Read More KubeVirt v0. 37. 0: January 18, 2021 This article provides information about KubeVirt release v0. 37. 0 changes Read More KubeVirt v0. 36. 0: December 16, 2020 This article provides information about KubeVirt release v0. 36. 0 changes Read More KubeVirt v0. 35. 0: November 09, 2020 This article provides information about KubeVirt release v0. 35. 0 changes Read More KubeVirt v0. 34. 0: October 07, 2020 This article provides information about KubeVirt release v0. 34. 0 changes Read More KubeVirt v0. 33. 0: September 15, 2020 This article provides information about KubeVirt release v0. 33. 0 changes Read More KubeVirt v0. 32. 0: August 11, 2020 This article provides information about KubeVirt release v0. 32. 0 changes Read More KubeVirt v0. 31. 0: July 09, 2020 This article provides information about KubeVirt release v0. 31. 0 changes Read More KubeVirt v0. 30. 0: June 05, 2020 This article provides information about KubeVirt release v0. 30. 0 changes Read More KubeVirt v0. 29. 0: May 06, 2020 This article provides information about KubeVirt release v0. 29. 0 changes Read More KubeVirt v0. 28. 0: April 09, 2020 This article provides information about KubeVirt release v0. 28. 0 changes Read More KubeVirt v0. 27. 0: March 06, 2020 This article provides information about KubeVirt release v0. 27. 0 changes Read More KubeVirt v0. 26. 0: February 07, 2020 This article provides information about KubeVirt release v0. 26. 0 changes Read More KubeVirt v0. 25. 0: January 13, 2020 This article provides information about KubeVirt release v0. 25. 0 changes Read More KubeVirt v0. 24. 0: December 03, 2019 This article provides information about KubeVirt release v0. 24. 0 changes Read More KubeVirt v0. 23. 0: November 04, 2019 This article provides information about KubeVirt release v0. 23. 0 changes Read More KubeVirt v0. 22. 0: October 10, 2019 This article provides information about KubeVirt release v0. 22. 0 changes Read More KubeVirt v0. 21. 0: September 09, 2019 This article provides information about KubeVirt release v0. 21. 0 changes Read More KubeVirt v0. 20. 0: August 09, 2019 This article provides information about KubeVirt release v0. 20. 0 changes Read More KubeVirt v0. 19. 0: July 05, 2019 This article provides information about KubeVirt release v0. 19. 0 changes Read More KubeVirt v0. 18. 0: June 05, 2019 This article provides information about KubeVirt release v0. 18. 0 changes Read More KubeVirt v0. 17. 0: May 06, 2019 This article provides information about KubeVirt release v0. 17. 0 changes Read More KubeVirt v0. 16. 0: April 05, 2019 This article provides information about KubeVirt release v0. 16. 0 changes Read More KubeVirt v0. 15. 0: March 05, 2019 This article provides information about KubeVirt release v0. 15. 0 changes Read More KubeVirt v0. 14. 0: February 04, 2019 This article provides information about KubeVirt release v0. 14. 0 changes Read More KubeVirt v0. 13. 0: January 15, 2019 This article provides information about KubeVirt release v0. 13. 0 changes Read More KubeVirt v0. 12. 0: January 11, 2019 This article provides information about KubeVirt release v0. 12. 0 changes Read More KubeVirt v0. 11. 
0: December 06, 2018 This article provides information about KubeVirt release v0. 11. 0 changes Read More KubeVirt v0. 10. 0: November 08, 2018 This article provides information about KubeVirt release v0. 10. 0 changes Read More KubeVirt v0. 9. 0: October 04, 2018 This article provides information about KubeVirt release v0. 9. 0 changes Read More KubeVirt v0. 8. 0: September 06, 2018 This article provides information about KubeVirt release v0. 8. 0 changes Read More KubeVirt v0. 7. 0: July 04, 2018 This article provides information about KubeVirt release v0. 7. 0 changes Read More KubeVirt v0. 6. 0: June 11, 2018 This article provides information about KubeVirt release v0. 6. 0 changes Read More KubeVirt v0. 5. 0: May 04, 2018 This article provides information about KubeVirt release v0. 5. 0 changes Read More KubeVirt v0. 4. 0: April 06, 2018 This article provides information about KubeVirt release v0. 4. 0 changes Read More KubeVirt v0. 3. 0: March 08, 2018 This article provides information about KubeVirt release v0. 3. 0 changes Read More Kube Virt v0. 2. 0: January 05, 2018 This release follows v0. 1. 0 and consists of 131 changes, contributed by 6 people, leading to 148 files changed, 9096 insertions(+), 5871 deletions(-). Read More Kube Virt v0. 1. 0: December 08, 2017 This release follows v0. 0. 4 and consists of 115 changes, contributed by 11 people, leading to 121 files changed, 5278 insertions(+), 1916 deletions(-). Read More Kube Virt v0. 0. 4: November 07, 2017 This release follows v0. 0. 3 and consists of 133 changes, contributed by 14 people, leading to 109 files changed, 7093 insertions(+), 2437 deletions(-). Read More " + "body": " - Blogs Categories: News Weekly Updates Releases Uncategorized Additional filters: Grouped by Date KubeVirt v1. 3. 0: July 17, 2024 This article provides information about KubeVirt release v1. 3. 0 changes Read More KubeVirt v1. 2. 0: March 05, 2024 This article provides information about KubeVirt release v1. 2. 0 changes Read More KubeVirt v1. 1. 0: November 06, 2023 This article provides information about KubeVirt release v1. 1. 0 changes Read More KubeVirt v1. 0. 0: July 06, 2023 This article provides information about KubeVirt release v1. 0. 0 changes Read More KubeVirt v0. 59. 0: March 01, 2023 This article provides information about KubeVirt release v0. 59. 0 changes Read More KubeVirt v0. 58. 0: October 13, 2022 This article provides information about KubeVirt release v0. 58. 0 changes Read More KubeVirt v0. 57. 0: September 12, 2022 This article provides information about KubeVirt release v0. 57. 0 changes Read More KubeVirt v0. 56. 0: August 18, 2022 This article provides information about KubeVirt release v0. 56. 0 changes Read More KubeVirt v0. 55. 0: July 14, 2022 This article provides information about KubeVirt release v0. 55. 0 changes Read More KubeVirt v0. 54. 0: June 08, 2022 This article provides information about KubeVirt release v0. 54. 0 changes Read More KubeVirt v0. 53. 0: May 09, 2022 This article provides information about KubeVirt release v0. 53. 0 changes Read More KubeVirt v0. 52. 0: April 08, 2022 This article provides information about KubeVirt release v0. 52. 0 changes Read More KubeVirt v0. 51. 0: March 08, 2022 This article provides information about KubeVirt release v0. 51. 0 changes Read More KubeVirt v0. 50. 0: February 09, 2022 This article provides information about KubeVirt release v0. 50. 0 changes Read More KubeVirt v0. 49. 0: January 11, 2022 This article provides information about KubeVirt release v0. 49. 
0 changes Read More KubeVirt v0. 48. 0: December 06, 2021 This article provides information about KubeVirt release v0. 48. 0 changes Read More KubeVirt v0. 46. 0: October 08, 2021 This article provides information about KubeVirt release v0. 46. 0 changes Read More KubeVirt v0. 45. 0: September 08, 2021 This article provides information about KubeVirt release v0. 45. 0 changes Read More KubeVirt v0. 44. 0: August 09, 2021 This article provides information about KubeVirt release v0. 44. 0 changes Read More KubeVirt v0. 43. 0: July 09, 2021 This article provides information about KubeVirt release v0. 43. 0 changes Read More KubeVirt v0. 42. 0: June 08, 2021 This article provides information about KubeVirt release v0. 42. 0 changes Read More KubeVirt v0. 41. 0: May 12, 2021 This article provides information about KubeVirt release v0. 41. 0 changes Read More KubeVirt v0. 40. 0: April 19, 2021 This article provides information about KubeVirt release v0. 40. 0 changes Read More KubeVirt v0. 39. 0: March 10, 2021 This article provides information about KubeVirt release v0. 39. 0 changes Read More KubeVirt v0. 38. 0: February 08, 2021 This article provides information about KubeVirt release v0. 38. 0 changes Read More KubeVirt v0. 37. 0: January 18, 2021 This article provides information about KubeVirt release v0. 37. 0 changes Read More KubeVirt v0. 36. 0: December 16, 2020 This article provides information about KubeVirt release v0. 36. 0 changes Read More KubeVirt v0. 35. 0: November 09, 2020 This article provides information about KubeVirt release v0. 35. 0 changes Read More KubeVirt v0. 34. 0: October 07, 2020 This article provides information about KubeVirt release v0. 34. 0 changes Read More KubeVirt v0. 33. 0: September 15, 2020 This article provides information about KubeVirt release v0. 33. 0 changes Read More KubeVirt v0. 32. 0: August 11, 2020 This article provides information about KubeVirt release v0. 32. 0 changes Read More KubeVirt v0. 31. 0: July 09, 2020 This article provides information about KubeVirt release v0. 31. 0 changes Read More KubeVirt v0. 30. 0: June 05, 2020 This article provides information about KubeVirt release v0. 30. 0 changes Read More KubeVirt v0. 29. 0: May 06, 2020 This article provides information about KubeVirt release v0. 29. 0 changes Read More KubeVirt v0. 28. 0: April 09, 2020 This article provides information about KubeVirt release v0. 28. 0 changes Read More KubeVirt v0. 27. 0: March 06, 2020 This article provides information about KubeVirt release v0. 27. 0 changes Read More KubeVirt v0. 26. 0: February 07, 2020 This article provides information about KubeVirt release v0. 26. 0 changes Read More KubeVirt v0. 25. 0: January 13, 2020 This article provides information about KubeVirt release v0. 25. 0 changes Read More KubeVirt v0. 24. 0: December 03, 2019 This article provides information about KubeVirt release v0. 24. 0 changes Read More KubeVirt v0. 23. 0: November 04, 2019 This article provides information about KubeVirt release v0. 23. 0 changes Read More KubeVirt v0. 22. 0: October 10, 2019 This article provides information about KubeVirt release v0. 22. 0 changes Read More KubeVirt v0. 21. 0: September 09, 2019 This article provides information about KubeVirt release v0. 21. 0 changes Read More KubeVirt v0. 20. 0: August 09, 2019 This article provides information about KubeVirt release v0. 20. 0 changes Read More KubeVirt v0. 19. 0: July 05, 2019 This article provides information about KubeVirt release v0. 19. 0 changes Read More KubeVirt v0. 18. 
0: June 05, 2019 This article provides information about KubeVirt release v0. 18. 0 changes Read More KubeVirt v0. 17. 0: May 06, 2019 This article provides information about KubeVirt release v0. 17. 0 changes Read More KubeVirt v0. 16. 0: April 05, 2019 This article provides information about KubeVirt release v0. 16. 0 changes Read More KubeVirt v0. 15. 0: March 05, 2019 This article provides information about KubeVirt release v0. 15. 0 changes Read More KubeVirt v0. 14. 0: February 04, 2019 This article provides information about KubeVirt release v0. 14. 0 changes Read More KubeVirt v0. 13. 0: January 15, 2019 This article provides information about KubeVirt release v0. 13. 0 changes Read More KubeVirt v0. 12. 0: January 11, 2019 This article provides information about KubeVirt release v0. 12. 0 changes Read More KubeVirt v0. 11. 0: December 06, 2018 This article provides information about KubeVirt release v0. 11. 0 changes Read More KubeVirt v0. 10. 0: November 08, 2018 This article provides information about KubeVirt release v0. 10. 0 changes Read More KubeVirt v0. 9. 0: October 04, 2018 This article provides information about KubeVirt release v0. 9. 0 changes Read More KubeVirt v0. 8. 0: September 06, 2018 This article provides information about KubeVirt release v0. 8. 0 changes Read More KubeVirt v0. 7. 0: July 04, 2018 This article provides information about KubeVirt release v0. 7. 0 changes Read More KubeVirt v0. 6. 0: June 11, 2018 This article provides information about KubeVirt release v0. 6. 0 changes Read More KubeVirt v0. 5. 0: May 04, 2018 This article provides information about KubeVirt release v0. 5. 0 changes Read More KubeVirt v0. 4. 0: April 06, 2018 This article provides information about KubeVirt release v0. 4. 0 changes Read More KubeVirt v0. 3. 0: March 08, 2018 This article provides information about KubeVirt release v0. 3. 0 changes Read More Kube Virt v0. 2. 0: January 05, 2018 This release follows v0. 1. 0 and consists of 131 changes, contributed by 6 people, leading to 148 files changed, 9096 insertions(+), 5871 deletions(-). Read More Kube Virt v0. 1. 0: December 08, 2017 This release follows v0. 0. 4 and consists of 115 changes, contributed by 11 people, leading to 121 files changed, 5278 insertions(+), 1916 deletions(-). Read More Kube Virt v0. 0. 4: November 07, 2017 This release follows v0. 0. 3 and consists of 133 changes, contributed by 14 people, leading to 109 files changed, 7093 insertions(+), 2437 deletions(-). Read More " }, , { - "id": 211, + "id": 212, "url": "/sponsor/", "title": "KubeVirt Summit Sponsor Opportunities", "author" : "", "tags" : "", "body": " - " }, , { - "id": 212, + "id": 213, "url": "/summit/", "title": "KubeVirt Summit 2024!", "author" : "", "tags" : "", "body": " - The fourth online KubeVirt Summit is coming on June 24-25! What is KubeVirt Summit?: KubeVirt Summit is our annual online conference, now in its fourth year, in which the entire broader community meets to showcase technical architecture, new features, proposed changes, and in-depth tutorials. We have two tracks to cater for developer talks, and another for end users to share their deployment journey with KubeVirt and their use case(s) at scale. When is it?: The event will take place online over two half-days: Dates: June 24 and 25, 2024 Time: 12:00-17:00 UTC note “Note”Previously Summit had been announced with dates of June 25-26 How do I register?: Register now at the CNCF KubeVirt Summit event page Schedule: All times are in UTC. 
Monday, June 24: 12:00-12:25 Welcome, Opening Remarks, and Community Updates 12:30-12:55 Keep your VM dancing with volume migration Alice Frosi, Red Hat Description: Want a refresh of your storage? Is there a better performing class available in your cluster? Is an old one being deprecated? Don’t panic, you don’t need to turn off your VM. The new option for updateVolumeStrategy allows you to update and migrate storage while the VM is running. You just need to apply a new VM definition that replaces the existing volumes with new ones. This update will initiate the volume migration, and at the end of the process, the VM will have entirely copied the data from the old volumes to the new ones without any workload disruptions. The talk will cover what you can do with this new feature and go over the technical decisions that lead to the final design. 13:00-13:25 Running Kubernetes clusters the Kubernetes-native way Andrei Kvapil, Ænix Description: Have you ever thought about building your own cloud? I bet you have. But is it possible to do this using only modern technologies and approaches, without leaving the cozy Kubernetes ecosystem? How KubeVirt helps create secure environments. Hard multi-tenancy with KubeVirt and Kamaji. Cluster autoscaler configuration for Cluster API. Operating KubeVirt with its CSI and CNI drivers. Securing API by removing controllers from the user clusters. 13:30-13:55 Using CPU & Memory hotplug for VM vertical scaling Igor Bezukh, Red Hat Description: I am going to present the CPU & Memory hotplug features towards the live update API, how it can be actually used for VM vertical scaling, and the future plans to use in-place pod update instead of live-migration. If time will allow, I will also try to discuss what needs to be accomplished to support auto-scaling. 14:00-14:25 Long-Lived User-Session VM: Enhancing Zero Downtime Rolling Deployments for Virtual Machines with Kubernetes Operators Sheng Lin, Nvidia Description: NVIDIA’s cloud-gaming service, GeForce Now, leverages Kubernetes-managed data centers to render games and stream the output in real-time to a variety of client devices, including computers, smartphones, and tablets. Implementing rolling deployments for long-lived user-session virtual machines presents unique challenges. Each client’s traffic is tied to a specific virtual machine, so any disruption can result in immediate user impact. Depending on the service, a rolling deployment could take up to eight hours as it waits for in-progress sessions to conclude. In this presentation, we will delve into how NVIDIA GeForce Now enhances zero downtime rolling deployments by utilizing a Kubernetes operator. Attendees will gain insights into: The specific challenges of rolling deployments for long-lived VMs in a Kubernetes environment. The architecture, development, and deployment of the Kubernetes operator that facilitates seamless VM rollovers. Real-world case studies comparing service availability and maintenance overhead before and after implementing the Kubernetes operator. Practical recommendations for adapting and generalizing this operator-based approach for other VM-based services. Join us to learn how to achieve efficient and reliable rolling deployments, ensuring uninterrupted user experience and streamlined maintenance processes. 
14:30-14:55 KubeVirt vDPA Workflow: From Host to Pod to Domain Taekyung Kim, SK Telecom Description: In this presentation, we will explore the KubeVirt vDPA workflow, tracing its path from the host system to Kubernetes pods and the libvirt domain. We will start with an overview of the switchdev model, highlighting its significance in network hardware offload and the implementation in the host kernel. Following this, we will examine the integration of VF representors into Kubernetes CNI, illustrating how the CNI recognizes and manages both VF network interface and VF representor. Finally, we will explore the KubeVirt network binding plugin, demonstrating the process of passing vhost_vdpa from a Kubernetes pod to a libvirt domain. This session will provide valuable insights into enhancing network performance with KubeVirt and vDPA, offering practical guidance for real-world applications. 15:00-15:25 Build a full measurement chain using the CC-FDE solution in KubeVirt. Lei Zhou, Wenhui Zhang & Xiaocheng Dong, Intel Description: Confidential computing (CC) provides hardware-based protections for data-in-use beyond traditional data-in-transit(network) and data-at-rest(storage). Full Disk Encryption (FDE) adds an additional layer of preserving integrity and confidentiality of the initial state of the workload and also protection by encrypting data-at-rest. This multi-layered approach ensures that even if one layer is breached, sensitive data remains encrypted and inaccessible. With traditional FDE, use DM-verity to safeguard image integrity, and employ encryption techniques to ensure data confidentiality. In CC FDE design, the secret key can be better protected via a passwordless attestation process by binding to CC trust evidence. The process includes enrolling encryption keyid into OVMF, retrieving key via keyid through FDE agent via remote attestation, and decrypt automatically. In the above process, the TCB measurement is the key consideration. Although the advantage of KubeVirt flavor like confidential VM (by comparison with Confidential cluster/container) is smaller TCB, it still has the challenges like vTPM complexity, readiness and various vendor’s security harden Trust foundation. So, we created a vendor unlocked CC Trusted API. With CC FDE design by using CC Trusted API, it guarantees the integrity and confidentiality of images from building to deploying. The audience can learn how confidential computing impacts the traditional FDE, how to adopt CC FDE in KubeVirt for minimal TCB deployment flavor, then they can easily scale this solution in production quickly. 15:30-15:55 Optimizing GPU Capacity across multiple Data Centers Vinay Hangud, Nvidia Description: The talk will cover details about use case, design and workflow on how the Nvidia GeForce Now platform leverage a GPU Capacity to orchestrate VM workloads across a fleet of Data Centers. Use case: Nvidia GeForce Now platform intends to maximize GPU usage in a Data Center while catering to an on-demand continuous churning VM work load. Specific constraint: VM should get to running state and not get stuck at “not scheduled” due to capacity unavailability. Design details: The system leverages a GPU Capacity API to get accurate capacity information in a DC. Design details of the GPU Capacity API and Controller. Events driven mechanism to keep track of accurate GPU Capacity information during workload churn. 
Nvidia GeForce Now workflow: How Capacity API is leveraged to orchestrate workload across a fleet of Data Centers at full GPU capacity to achieve on-demand scheduling needs. 16:00-16:25 Isolating container workloads in Virtual Machines using KubeVirt Vladik Romanovsky, Red Hat Description: While Kubernetes provides container sandboxes that utilize Linux kernel features for isolation, vulnerabilities still exist, these may cause host kernel panics, exploits, memory access by other applications, and more. Join this session to learn more about how the MaroonedPods project offers to solve the workload isolation problem using KubeVirt and all native Kubernetes components without a need for a specialized environment or dedicated software. 16:30-16:55 DRA in KubeVirt: Overcoming Challenges and Implementing Changes Varun Ramachandra & Alay Patel, Nvidia Description: As the demand for GPU-accelerated workloads continues to surge across various industries, Kubernetes has emerged as a popular orchestration platform to manage these resources efficiently. However, the traditional GPU device plugin model in Kubernetes poses limitations in terms of flexibility, scalability, and resource optimization. This talk introduces the concept of Dynamic Resource Allocation (DRA) and demonstrates how transitioning GPU device plugins to DRA can address these challenges. In this session, attendees will learn about how Nvidia plans to solve the specific challenges in KubeVirt to leverage the advantages of DRA. Tuesday, June 25: 12:00-12:25 Optimizing live-migration for minimizing packet loss with KubeVirt and kube-ovn Zhuanlan & Zhangbingbing, China Mobile(SuZhou)Software Technology Co. ,Ltd. Description: We expose more detailed steps for virtual machine live-migration in KubeVirt. Which enable kube-ovn use the multi_chassis_bindings feature as will as triggering ACL policy changes based on RARP. In this way, we significantly reduces packet loss during live migrations. 12:30-12:55 KubeVirt Observability with Prometheus and Grafana Ananda Dwi Rahmawati, Activate Interactive Pte Ltd Description: As KubeVirt adoption grows, so does the need for effective observability. This talk delves into the essential metrics and tools required to ensure the health and performance of your KubeVirt environments. We’ll explore practical strategies for monitoring KubeVirt workloads, identifying potential issues, and gaining valuable insights into your virtualized environment. 13:00-13:25 Want a VM with local storage? Fine, just try it with Hwameistor! Pang Wei & Zhou MingMing, QiAnXin Description: There are many benefits to use VMs with local storage, like: Superior performance with local disk IOPS/latency/throughput. Less complexity compared to network-based storage system. Extremely low cost, VMs can be created easily even in an all-in-one k8s cluster. I’m glad to introduce a good way to do this. With the help of HwameiStor, VMs can be smoothly used with local storage(LVs or raw disk). 13:30-13:55 Real-Time Network Traffic Monitoring for KubeVirt VMs Using OVN and Switchdev SR-IOV VFIO Interfaces Girish Moodalbail & Vengupal Iyer, Nvidia Description: When the data plane is offloaded from the Linux kernel to Smart NICs that support the Linux SwitchDev driver model, the ability to perform real-time monitoring of network traffic for debugging or anomaly detection is lost. This issue is exacerbated when using legacy non-switchdev SR-IOV Virtual Functions (VFs), where packets sent through these VFs are only visible on the next hop switch. 
Consequently, any debugging efforts would require collaboration with network administrators managing the switches. Additionally, performing real-time monitoring of network traffic for anomaly detection on switches becomes significantly more challenging. In this presentation, we will explore how to achieve real-time monitoring of KubeVirt VMs/VMIs that are multi-homed with Switchdev SR-IOV VFIO interfaces using the open-source Open Virtual Network (OVN) SDN based on Open vSwitch (OVS). The API is defined by OVN-Kubernetes CNI as a Kubernetes Custom Resource. We will introduce the OVN packet mirroring feature, which enables the capture of packets from these interfaces and their forwarding to Scalable Functions (SFs) on the host. This process can be performed at wire speed thanks to NIC accelerators, allowing for the execution of tools like tcpdump, sFlow, and deep learning inference on the captured packets. 14:00-14:25 Enhancing KubeVirt Management: A Comprehensive Update on KubeVirt-Manager Marcelo Feitoza Parisi, Google Cloud Description: KubeVirt-Manager continues to evolve as a pivotal tool for simplifying and streamlining the management of virtualized environments with KubeVirt. In this session, we’ll present a comprehensive update highlighting the latest enhancements and features designed to further empower administrators and users, and help accelerate even more the adoption of KubeVirt. From improved visibility with unscheduled Virtual Machines grouped in the Virtual Machines page, to a revamped NoVNC implementation supporting complete console functionality, our latest release promises a more seamless experience. Additionally, we introduce a new detailed information page for Virtual Machines and Virtual Machine Pools, providing insights into crucial details such as operating system version, kernel version, networking configuration, and disk specifications. Not stopping there, we also present enhancements extend to the Virtual Machine creation process, now offering options to select cache mode and access mode for disks, as well as the ability to utilize multiple network interfaces. Virtual Machine Pools receive a significant upgrade with support for Liveness Probes and Readiness Probes, alongside the introduction of auto-scaling based on Kubernetes HPA. Moreover, our integration with Cluster API provides now a streamlined option for administrators and users to bring new Kubernetes clusters with just a few clicks. Join us as we delve into these exciting updates to facilitate the management of KubeVirt environments. 14:30-14:55 Hack your own network connectivity Edward Haas & Leonardo Milleri, Red Hat Description: KubeVirt provides common network connectivity options for Virtual Machines that satisfy most scenarios and users. As the project popularity grows, special scenarios and needs surface with requests to tweak and customize network details. The sig-network was faced with a double challenge, to satisfy the growing community needs and at the same time to keep the codebase in a maintainable state. The network binding plugin has been introduced to solve these challenges, giving the ability to extend the network connectivity in KubeVirt and at the same time assist in maintaining the core network functionality. We will present the pluggable infrastructure and demonstrate a success story that used it to extend the supported interfaces (vDPA). 
15:00-15:25 Introducing Application Aware Resource Quota Barak Mordehai, Red Hat Description: If you want to hear about how Application Aware Resource Quota solves KubeVirt’s Quota related issues during migrations and obscures Virtual Machines overhead from Quota - this session is for you. You will also hear how this project opens the door for plug-able policies to empower other operators to customize resource counting. 15:30-15:55 KubeVirt enablement on IBM Z and LinuxONE Nourhane Bziouech, IBM Description: Kubevirt keeps evolving and growing in features and capabilities , and one of the growth aspects is the platforms where it runs. This session will cover the usage of Kubevirt for s390x architecture when running it on IBM Z and LinuxONE. Join me in this session and let me take you through the journey of adding Kubevirt to the IBM Z and LinuxONE platform. 16:00-16:25 Ensuring High Availability for KubeVirt Virtual Machines Javier Cano Cano, Red Hat Description: Maintaining high availability of Pod workloads within Kubernetes is a common objective, typically achieved using Kubernetes’ built-in primitives. However, certain workloads, such as KubeVirt Virtual Machines (VMs), necessitate at-most-one semantics, which can only be achieved with the assistance of additional operators like Node Health Checks (NHC) and specific remediation operators. This presentation will explore the methods for establishing a high-availability infrastructure specifically designed for KubeVirt virtual machines (VMs). The discussion will encompass the development of a testing methodology for KubeVirt in conjunction with the Node Health Check (NHC), including the assessment of VM recovery times in the event of node failure. The presentation will detail the challenges encountered, the debugging process, and the solutions implemented, with a focus on the current status of VM recovery times. 16:30-17:00 Coordinated Upgrade for GPU Workloads Natalie Bandel & Ryan Hallisey Description: In this talk, we will explore how the Nvidia GeForce Now platform manages maintenance and upgrade activities on Kubernetes clusters with KubeVirt workloads by leveraging native Kubernetes components. Ongoing maintenance is essential for any Kubernetes cluster. The complexity of coordination and management of the maintenance activities only increases for Kubernetes clusters with KubeVirt and thousands of VMIs running simultaneously. In this talk, we will explore how the Nvidia GeForce Now platform leverages native Kubernetes components to manage and coordinate maintenance activities on Kubernetes clusters with thousands of GPU workloads across multiple data centers. We will present the architecture and benefits of our approach, explaining how we maintain a single source of truth for real-time status updates from the Kubernetes cluster. We will discuss our efficient scheduling algorithm, which takes into consideration existing VMI workloads for prioritizing maintenance tasks. We will cover our validation and failure handling processes. And finally, we will highlight actual improvements of operational efficiency in maintenance times during GeForce Now upgrades. Have questions?: Reach out on our virtualization Slack channel (in the Kubernetes workspace). Keep up to date: Connect with the KubeVirt Community through our community page. KubeVirt Summit is thankful for our sponsors: Our patron sponsors: " }, { - "id": 213, + "id": 214, "url": "/videos/talks", "title": "Talks", "author" : "", "tags" : "", "body": " - {% for item in site. data. 
videos-talks. list %} KubeVirt Talks and Conference Sessions: The KubeVirt Community gives talks at a variety of conferences and meetups throughout the year. Watch the most recent such session above, and be sure to check out our YouTube Talks playlist to see plenty more. {% endfor %}Upcoming Events: If you want to present a KubeVirt talk, or want to see one at a conference soon and/or meet some KubeVirt folks, check out our KubeVirt Community Events Wiki. This is where we track open CfPs for relevant conferences and meetups, as well as our upcoming conference/meetup sessions. " }, { - "id": 214, + "id": 215, "url": "/videos/tech-demos", "title": "Tech Demos", "author" : "", "tags" : "", "body": " - KubeVirt Basic Operations demo: Basic operations to run KubeVirt from a beginner point of view. KubeVirt Demos Playlist: A range of technical demos contributed by the KubeVirt Community or taken from talks. Watch the most recent such session above, and be sure to check out our YouTube Tech Demos playlist to see plenty more. " }, { - "id": 215, + "id": 216, "url": "/category/uncategorized.html", "title": "Uncategorized", "author" : "", "tags" : "", "body": " - " }, { - "id": 216, + "id": 217, "url": "/blogs/uncategorized", "title": "Uncategorized", "author" : "", "tags" : "", "body": " - {{ page. navbar_active }} {% include sidebar-blogs. html %} {% for post in site. posts %} {% if post. categories contains uncategorized %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html href_link= readmorelink %} {% endif %} {% endfor %} " }, { - "id": 217, + "id": 218, "url": "/blogs/updates", "title": "Weekly Updates", "author" : "", "tags" : "", "body": " - {{ page. navbar_active }} {% include sidebar-blogs. html %} {% for post in site. posts %} {% if post. categories contains updates %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html href_link= readmorelink %} {% endif %} {% endfor %} " }, { - "id": 218, + "id": 219, "url": "/videos/", "title": "Videos", "author" : "", "tags" : "", "body": " - {% for item in site. data. videos-talks. list %} <iframe style= width: 600px; height: 450px; src=https://www. youtube-nocookie. com/embed/uusM5SyK-vc?rel=0&controls=0&showinfo=0 frameborder= 0 allow= autoplay; encrypted-media title= KubeVirt Talks Playlist allowfullscreen></iframe> Our KubeVirt Intro Video: A brief conceptual introduction to the KubeVirt project. {% endfor %}Want to see more?: Check out our YouTube playlists for a range of videos that are updated regularly: Talks: Folks from the KubeVirt Community are regularly talking at conferences and meetups throughout the year. Demos: Technical demos contributed by the community or taken from talks. Interviews: Media interviews with folks from the KubeVirt Community. KubeVirt Summit: The annual KubeVirt Summit videos, where the broader KubeVirt ecosystem meets to showcase technical architecture, new features, proposed changes, and in-depth tutorials. Community Meetings: The KubeVirt Community and our SIGs meet weekly and bi-weekly. These sessions are recorded and invite anyone from the community to join in. You can find the full schedule of these meetings on the KubeVirt calendar. 
" }, { - "id": 219, + "id": 220, "url": "/category/weekly-updates.html", "title": "Weekly Updates", "author" : "", "tags" : "", "body": " - " }, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , { - "id": 220, + "id": 221, "url": "/blogs/page2/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 221, + "id": 222, "url": "/blogs/page3/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 222, + "id": 223, "url": "/blogs/page4/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. 
page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 223, + "id": 224, "url": "/blogs/page5/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 224, + "id": 225, "url": "/blogs/page6/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 225, + "id": 226, "url": "/blogs/page7/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. 
page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 226, + "id": 227, "url": "/blogs/page8/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 227, + "id": 228, "url": "/blogs/page9/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 228, + "id": 229, "url": "/blogs/page10/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. 
page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 229, + "id": 230, "url": "/blogs/page11/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 230, + "id": 231, "url": "/blogs/page12/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 231, + "id": 232, "url": "/blogs/page13/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. 
page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 232, + "id": 233, "url": "/blogs/page14/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 233, + "id": 234, "url": "/blogs/page15/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 234, + "id": 235, "url": "/blogs/page16/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. 
page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 235, + "id": 236, "url": "/blogs/page17/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 236, + "id": 237, "url": "/blogs/page18/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 237, + "id": 238, "url": "/blogs/page19/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. 
page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 238, + "id": 239, "url": "/blogs/page20/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 239, + "id": 240, "url": "/blogs/page21/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 240, + "id": 241, "url": "/blogs/page22/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. 
page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 241, + "id": 242, "url": "/blogs/page23/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 242, + "id": 243, "url": "/blogs/page24/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 243, + "id": 244, "url": "/blogs/page25/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. 
page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 244, + "id": 245, "url": "/blogs/page26/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 245, + "id": 246, "url": "/blogs/page27/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 246, + "id": 247, "url": "/blogs/page28/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. 
page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 247, + "id": 248, "url": "/blogs/page29/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 248, + "id": 249, "url": "/blogs/page30/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 249, + "id": 250, "url": "/blogs/page31/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. 
page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 250, + "id": 251, "url": "/blogs/page32/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 251, + "id": 252, "url": "/blogs/page33/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 252, + "id": 253, "url": "/blogs/page34/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. 
page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 253, + "id": 254, "url": "/blogs/page35/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. html %} " }, { - "id": 254, + "id": 255, "url": "/blogs/page36/", "title": "Blogs", "author" : "", "tags" : "", "body": " - {{ page. title }} {% if site. data. blogs_toc. toc[0] %} {% for item in site. data. blogs_toc. toc %} {% if item. subfolderitems[0] %} {{ item. title }}: {% for entry in item. subfolderitems %} <a href= {{ entry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ entry. page }} </a> {% if entry. subsubfolderitems[0] %} {% for subentry in entry. subsubfolderitems %} <a href= {{ subentry. url }} {% if page. title == entry. page %} class= blogs-navigation--item_link active {% else %} class= blogs-navigation--item_link {% endif %}> {{ subentry. page }} </a> {% endfor %} {% endif %} {% endfor %} {% endif %} {% endfor %} {% endif %} {% for post in paginator. posts %} {{ post. title }}: {{ post. pub-date }}, {{ post. pub-year }} {{ post. description | strip_html | truncatewords:50 }} {% capture readmorelink %} {{ post. url | prepend: site. baseurl }} {% endcapture %} {% include readmore. html %} {% endfor %} {% include paging. 
html %} " }, { - "id": 255, + "id": 256, "url": "/blogs/page37/", "title": "Blogs", "author" : "", diff --git a/sitemap.xml b/sitemap.xml index e39cffbe8a..54104bbad1 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -737,6 +737,10 @@ 2024-03-19T00:00:00+00:00 +https://kubevirt.io//2024/changelog-v1.3.0.html +2024-07-17T00:00:00+00:00 + + https://kubevirt.io//application-aware-quota/ @@ -1686,6 +1690,6 @@ https://kubevirt.io//assets/files/summit24-sponsor.pdf -2024-07-10T14:39:10+00:00 +2024-07-29T11:24:47+00:00 diff --git a/sponsor/index.html b/sponsor/index.html index 869b9290ba..f0efa0e580 100644 --- a/sponsor/index.html +++ b/sponsor/index.html @@ -52,7 +52,7 @@ - + diff --git a/ssp-operator/index.html b/ssp-operator/index.html index 57020c5530..3bfbbd5021 100644 --- a/ssp-operator/index.html +++ b/ssp-operator/index.html @@ -52,7 +52,7 @@ - + diff --git a/summit/index.html b/summit/index.html index 033d6adc63..08950b6a58 100644 --- a/summit/index.html +++ b/summit/index.html @@ -52,7 +52,7 @@ - + diff --git a/tag/addons.html b/tag/addons.html index f84960fff4..b48ece09c0 100644 --- a/tag/addons.html +++ b/tag/addons.html @@ -52,7 +52,7 @@ - + diff --git a/tag/admin-operations.html b/tag/admin-operations.html index 07b36fe280..a5b9867032 100644 --- a/tag/admin-operations.html +++ b/tag/admin-operations.html @@ -52,7 +52,7 @@ - + diff --git a/tag/admin.html b/tag/admin.html index 8efd356133..4ff7b64d22 100644 --- a/tag/admin.html +++ b/tag/admin.html @@ -52,7 +52,7 @@ - + diff --git a/tag/advanced-vm-scheduling.html b/tag/advanced-vm-scheduling.html index a93e9f1683..833c2d4b8e 100644 --- a/tag/advanced-vm-scheduling.html +++ b/tag/advanced-vm-scheduling.html @@ -52,7 +52,7 @@ - + diff --git a/tag/affinity.html b/tag/affinity.html index 497d0f38f0..3db13ebda8 100644 --- a/tag/affinity.html +++ b/tag/affinity.html @@ -52,7 +52,7 @@ - + diff --git a/tag/america.html b/tag/america.html index f1fe6a7f94..2bfe852e35 100644 --- a/tag/america.html +++ b/tag/america.html @@ -52,7 +52,7 @@ - + diff --git a/tag/ami.html b/tag/ami.html index 051b436b4c..6566eb38c6 100644 --- a/tag/ami.html +++ b/tag/ami.html @@ -52,7 +52,7 @@ - + diff --git a/tag/ansible-collection.html b/tag/ansible-collection.html index 5bc7ed0e04..1b6c9861d1 100644 --- a/tag/ansible-collection.html +++ b/tag/ansible-collection.html @@ -52,7 +52,7 @@ - + diff --git a/tag/ansible.html b/tag/ansible.html index cdbd1cfcac..3e94e7c83c 100644 --- a/tag/ansible.html +++ b/tag/ansible.html @@ -52,7 +52,7 @@ - + diff --git a/tag/api.html b/tag/api.html index 29ea4b6c7f..30bac90577 100644 --- a/tag/api.html +++ b/tag/api.html @@ -52,7 +52,7 @@ - + diff --git a/tag/architecture.html b/tag/architecture.html index 5d11affb73..dd4b5bb088 100644 --- a/tag/architecture.html +++ b/tag/architecture.html @@ -52,7 +52,7 @@ - + diff --git a/tag/authentication.html b/tag/authentication.html index 5906a3c2c3..57f1d8645c 100644 --- a/tag/authentication.html +++ b/tag/authentication.html @@ -52,7 +52,7 @@ - + diff --git a/tag/autodeployer.html b/tag/autodeployer.html index cf7d8bbe44..f48fd68b7a 100644 --- a/tag/autodeployer.html +++ b/tag/autodeployer.html @@ -52,7 +52,7 @@ - + diff --git a/tag/aws.html b/tag/aws.html index b0feb7e400..cfe28d2b74 100644 --- a/tag/aws.html +++ b/tag/aws.html @@ -52,7 +52,7 @@ - + diff --git a/tag/basic-operations.html b/tag/basic-operations.html index c76b189499..cc19817985 100644 --- a/tag/basic-operations.html +++ b/tag/basic-operations.html @@ -52,7 +52,7 @@ - + diff --git a/tag/bridge.html b/tag/bridge.html index 
8b61bce4c1..5233915aff 100644 --- a/tag/bridge.html +++ b/tag/bridge.html @@ -52,7 +52,7 @@ - + diff --git a/tag/build.html b/tag/build.html index e96c7d9241..eee993abaa 100644 --- a/tag/build.html +++ b/tag/build.html @@ -52,7 +52,7 @@ - + diff --git a/tag/builder-tool.html b/tag/builder-tool.html index 7c0fd6d307..6fa3188b0e 100644 --- a/tag/builder-tool.html +++ b/tag/builder-tool.html @@ -52,7 +52,7 @@ - + diff --git a/tag/cdi.html b/tag/cdi.html index 3756514c5a..511cfe451b 100644 --- a/tag/cdi.html +++ b/tag/cdi.html @@ -52,7 +52,7 @@ - + diff --git a/tag/ceph.html b/tag/ceph.html index bc0bc7f801..d06f3fb4ae 100644 --- a/tag/ceph.html +++ b/tag/ceph.html @@ -52,7 +52,7 @@ - + diff --git a/tag/changelog.html b/tag/changelog.html index 65a2a0adb7..72c117093f 100644 --- a/tag/changelog.html +++ b/tag/changelog.html @@ -52,7 +52,7 @@ - + @@ -249,6 +249,8 @@

Articles tagged with changelog

    +
  • KubeVirt v1.3.0 (17 Jul 2024 | , )
  • +
  • KubeVirt v1.2.0 (05 Mar 2024 | , )
  • KubeVirt v1.1.0 (06 Nov 2023 | , )
  • diff --git a/tag/chronyd.html b/tag/chronyd.html index a791b47467..fd3a17909f 100644 --- a/tag/chronyd.html +++ b/tag/chronyd.html @@ -52,7 +52,7 @@ - + diff --git a/tag/ci-cd.html b/tag/ci-cd.html index 72f678f676..9700f9c353 100644 --- a/tag/ci-cd.html +++ b/tag/ci-cd.html @@ -52,7 +52,7 @@ - + diff --git a/tag/cicd.html b/tag/cicd.html index 652168773c..03df12cafa 100644 --- a/tag/cicd.html +++ b/tag/cicd.html @@ -52,7 +52,7 @@ - + diff --git a/tag/clearcontainers.html b/tag/clearcontainers.html index ec5f688cad..84e84bbbbd 100644 --- a/tag/clearcontainers.html +++ b/tag/clearcontainers.html @@ -52,7 +52,7 @@ - + diff --git a/tag/clone.html b/tag/clone.html index d88bdd4146..9cb3c1ffd6 100644 --- a/tag/clone.html +++ b/tag/clone.html @@ -52,7 +52,7 @@ - + diff --git a/tag/cloudnativecon.html b/tag/cloudnativecon.html index febb545ded..d7025a5cd7 100644 --- a/tag/cloudnativecon.html +++ b/tag/cloudnativecon.html @@ -52,7 +52,7 @@ - + diff --git a/tag/cluster-autoscaler.html b/tag/cluster-autoscaler.html index 66d6c9c083..90bea96acd 100644 --- a/tag/cluster-autoscaler.html +++ b/tag/cluster-autoscaler.html @@ -52,7 +52,7 @@ - + diff --git a/tag/cluster-network-addons-operator.html b/tag/cluster-network-addons-operator.html index ae01c21325..8c03a0d7a2 100644 --- a/tag/cluster-network-addons-operator.html +++ b/tag/cluster-network-addons-operator.html @@ -52,7 +52,7 @@ - + diff --git a/tag/cnao.html b/tag/cnao.html index 00d8dc75ca..914590a7cb 100644 --- a/tag/cnao.html +++ b/tag/cnao.html @@ -52,7 +52,7 @@ - + diff --git a/tag/cncf.html b/tag/cncf.html index 6379a18754..d0752c0d0e 100644 --- a/tag/cncf.html +++ b/tag/cncf.html @@ -52,7 +52,7 @@ - + diff --git a/tag/cni.html b/tag/cni.html index 6b51ebadf6..aa8f1358b2 100644 --- a/tag/cni.html +++ b/tag/cni.html @@ -52,7 +52,7 @@ - + diff --git a/tag/cockpit.html b/tag/cockpit.html index b399609fb0..8488e97833 100644 --- a/tag/cockpit.html +++ b/tag/cockpit.html @@ -52,7 +52,7 @@ - + diff --git a/tag/common-templates.html b/tag/common-templates.html index 70a7a08b80..c8286f4093 100644 --- a/tag/common-templates.html +++ b/tag/common-templates.html @@ -52,7 +52,7 @@ - + diff --git a/tag/community.html b/tag/community.html index e02e9903c4..96632d910e 100644 --- a/tag/community.html +++ b/tag/community.html @@ -52,7 +52,7 @@ - + diff --git a/tag/composer-cli.html b/tag/composer-cli.html index a94b3d597c..78ed20445c 100644 --- a/tag/composer-cli.html +++ b/tag/composer-cli.html @@ -52,7 +52,7 @@ - + diff --git a/tag/condition-types.html b/tag/condition-types.html index ae8f1df24b..cbecfd96a8 100644 --- a/tag/condition-types.html +++ b/tag/condition-types.html @@ -52,7 +52,7 @@ - + diff --git a/tag/conference.html b/tag/conference.html index 2fb23967f5..872df6f12c 100644 --- a/tag/conference.html +++ b/tag/conference.html @@ -52,7 +52,7 @@ - + diff --git a/tag/connect-to-console.html b/tag/connect-to-console.html index 172a00f04d..1e17ac1e59 100644 --- a/tag/connect-to-console.html +++ b/tag/connect-to-console.html @@ -52,7 +52,7 @@ - + diff --git a/tag/connect-to-ssh.html b/tag/connect-to-ssh.html index 68c27202fc..e348faf1d7 100644 --- a/tag/connect-to-ssh.html +++ b/tag/connect-to-ssh.html @@ -52,7 +52,7 @@ - + diff --git a/tag/container.html b/tag/container.html index 39dbd2cf58..eb90d4bd4b 100644 --- a/tag/container.html +++ b/tag/container.html @@ -52,7 +52,7 @@ - + diff --git a/tag/containerdisk.html b/tag/containerdisk.html index a30a92c994..2560c303de 100644 --- a/tag/containerdisk.html +++ b/tag/containerdisk.html @@ -52,7 +52,7 
@@ - + diff --git a/tag/containerized-data-importer.html b/tag/containerized-data-importer.html index 6e7adc4ac7..a286a2dc35 100644 --- a/tag/containerized-data-importer.html +++ b/tag/containerized-data-importer.html @@ -52,7 +52,7 @@ - + diff --git a/tag/continuous-integration.html b/tag/continuous-integration.html index 30a16a3ab8..2387ac9f07 100644 --- a/tag/continuous-integration.html +++ b/tag/continuous-integration.html @@ -52,7 +52,7 @@ - + diff --git a/tag/contra-lib.html b/tag/contra-lib.html index d2c88bcfc5..4ec467ebec 100644 --- a/tag/contra-lib.html +++ b/tag/contra-lib.html @@ -52,7 +52,7 @@ - + diff --git a/tag/coreos.html b/tag/coreos.html index 83cdef8b0a..16adcd8e6a 100644 --- a/tag/coreos.html +++ b/tag/coreos.html @@ -52,7 +52,7 @@ - + diff --git a/tag/cpu-pinning.html b/tag/cpu-pinning.html index 3244a97665..37682f0563 100644 --- a/tag/cpu-pinning.html +++ b/tag/cpu-pinning.html @@ -52,7 +52,7 @@ - + diff --git a/tag/cpumanager.html b/tag/cpumanager.html index 0865726227..12da953820 100644 --- a/tag/cpumanager.html +++ b/tag/cpumanager.html @@ -52,7 +52,7 @@ - + diff --git a/tag/create-vm.html b/tag/create-vm.html index 144e5954f7..0f42fd535e 100644 --- a/tag/create-vm.html +++ b/tag/create-vm.html @@ -52,7 +52,7 @@ - + diff --git a/tag/cri-o.html b/tag/cri-o.html index d17f1c30de..7eff4e6995 100644 --- a/tag/cri-o.html +++ b/tag/cri-o.html @@ -52,7 +52,7 @@ - + diff --git a/tag/cri.html b/tag/cri.html index 960e1b8dd0..a4ff6dc57c 100644 --- a/tag/cri.html +++ b/tag/cri.html @@ -52,7 +52,7 @@ - + diff --git a/tag/custom-resources.html b/tag/custom-resources.html index 18221cb96b..4021246f17 100644 --- a/tag/custom-resources.html +++ b/tag/custom-resources.html @@ -52,7 +52,7 @@ - + diff --git a/tag/datavolumes.html b/tag/datavolumes.html index ac0065d3eb..7f6db80c08 100644 --- a/tag/datavolumes.html +++ b/tag/datavolumes.html @@ -52,7 +52,7 @@ - + diff --git a/tag/debug.html b/tag/debug.html index f3841403c2..6add56ed4c 100644 --- a/tag/debug.html +++ b/tag/debug.html @@ -52,7 +52,7 @@ - + diff --git a/tag/dedicated-network.html b/tag/dedicated-network.html index 27b9398109..01277e9553 100644 --- a/tag/dedicated-network.html +++ b/tag/dedicated-network.html @@ -52,7 +52,7 @@ - + diff --git a/tag/design.html b/tag/design.html index 4d001d74c6..f6b5cf893a 100644 --- a/tag/design.html +++ b/tag/design.html @@ -52,7 +52,7 @@ - + diff --git a/tag/development.html b/tag/development.html index da54993f61..1902bc451a 100644 --- a/tag/development.html +++ b/tag/development.html @@ -52,7 +52,7 @@ - + diff --git a/tag/device-plugins.html b/tag/device-plugins.html index faf5a22f89..94118ca1f1 100644 --- a/tag/device-plugins.html +++ b/tag/device-plugins.html @@ -52,7 +52,7 @@ - + diff --git a/tag/disk-image.html b/tag/disk-image.html index 979c3814a9..c0f7cb60db 100644 --- a/tag/disk-image.html +++ b/tag/disk-image.html @@ -52,7 +52,7 @@ - + diff --git a/tag/docker.html b/tag/docker.html index 0197c0e1cf..ef0cd58c84 100644 --- a/tag/docker.html +++ b/tag/docker.html @@ -52,7 +52,7 @@ - + diff --git a/tag/ebtables.html b/tag/ebtables.html index 2e3397165b..2fba921146 100644 --- a/tag/ebtables.html +++ b/tag/ebtables.html @@ -52,7 +52,7 @@ - + diff --git a/tag/ec2.html b/tag/ec2.html index 13c1680aa1..4d542b214b 100644 --- a/tag/ec2.html +++ b/tag/ec2.html @@ -52,7 +52,7 @@ - + diff --git a/tag/eks.html b/tag/eks.html index 2ab6accf44..677a83e9ba 100644 --- a/tag/eks.html +++ b/tag/eks.html @@ -52,7 +52,7 @@ - + diff --git a/tag/event.html b/tag/event.html index 
c70138133c..435524cf11 100644 --- a/tag/event.html +++ b/tag/event.html @@ -52,7 +52,7 @@ - + diff --git a/tag/eviction.html b/tag/eviction.html index 723f5fac4c..3720860fa5 100644 --- a/tag/eviction.html +++ b/tag/eviction.html @@ -52,7 +52,7 @@ - + diff --git a/tag/federation.html b/tag/federation.html index 7a9d100ab3..e830188ad4 100644 --- a/tag/federation.html +++ b/tag/federation.html @@ -52,7 +52,7 @@ - + diff --git a/tag/fedora.html b/tag/fedora.html index 70c7c14c1e..6b5568413b 100644 --- a/tag/fedora.html +++ b/tag/fedora.html @@ -52,7 +52,7 @@ - + diff --git a/tag/flannel.html b/tag/flannel.html index 9cf843bb6b..b70138be57 100644 --- a/tag/flannel.html +++ b/tag/flannel.html @@ -52,7 +52,7 @@ - + diff --git a/tag/gathering.html b/tag/gathering.html index 466abf9931..9b7bae011a 100644 --- a/tag/gathering.html +++ b/tag/gathering.html @@ -52,7 +52,7 @@ - + diff --git a/tag/gcp.html b/tag/gcp.html index 16033e7940..71fc858b00 100644 --- a/tag/gcp.html +++ b/tag/gcp.html @@ -52,7 +52,7 @@ - + diff --git a/tag/glusterfs.html b/tag/glusterfs.html index 73f6479ede..451c7a1e38 100644 --- a/tag/glusterfs.html +++ b/tag/glusterfs.html @@ -52,7 +52,7 @@ - + diff --git a/tag/go.html b/tag/go.html index 03882e72f7..db884d0f36 100644 --- a/tag/go.html +++ b/tag/go.html @@ -52,7 +52,7 @@ - + diff --git a/tag/gpu-workloads.html b/tag/gpu-workloads.html index 1cf1685c0e..6df6210e9b 100644 --- a/tag/gpu-workloads.html +++ b/tag/gpu-workloads.html @@ -52,7 +52,7 @@ - + diff --git a/tag/gpu.html b/tag/gpu.html index fbac959ab1..9d81e857f2 100644 --- a/tag/gpu.html +++ b/tag/gpu.html @@ -52,7 +52,7 @@ - + diff --git a/tag/grafana.html b/tag/grafana.html index ee98ca87e5..b0f91daa7c 100644 --- a/tag/grafana.html +++ b/tag/grafana.html @@ -52,7 +52,7 @@ - + diff --git a/tag/hco.html b/tag/hco.html index 6ea39ee1a0..8ec8701c97 100644 --- a/tag/hco.html +++ b/tag/hco.html @@ -52,7 +52,7 @@ - + diff --git a/tag/heketi.html b/tag/heketi.html index e71d3c01f6..15d91ae642 100644 --- a/tag/heketi.html +++ b/tag/heketi.html @@ -52,7 +52,7 @@ - + diff --git a/tag/hilights.html b/tag/hilights.html index 9eb65149de..12e13c0c33 100644 --- a/tag/hilights.html +++ b/tag/hilights.html @@ -52,7 +52,7 @@ - + diff --git a/tag/homelab.html b/tag/homelab.html index 79a0564828..aa95d02546 100644 --- a/tag/homelab.html +++ b/tag/homelab.html @@ -52,7 +52,7 @@ - + diff --git a/tag/hugepages.html b/tag/hugepages.html index c577a4d441..ca4fb7c76d 100644 --- a/tag/hugepages.html +++ b/tag/hugepages.html @@ -52,7 +52,7 @@ - + diff --git a/tag/hyperconverged-operator.html b/tag/hyperconverged-operator.html index 7b0367aae6..e1e2ae4dc7 100644 --- a/tag/hyperconverged-operator.html +++ b/tag/hyperconverged-operator.html @@ -52,7 +52,7 @@ - + diff --git a/tag/iac.html b/tag/iac.html index 6279c2f7ed..3a5b20c26a 100644 --- a/tag/iac.html +++ b/tag/iac.html @@ -52,7 +52,7 @@ - + diff --git a/tag/ignition.html b/tag/ignition.html index a6f21aec4a..31f5fd27ab 100644 --- a/tag/ignition.html +++ b/tag/ignition.html @@ -52,7 +52,7 @@ - + diff --git a/tag/images.html b/tag/images.html index 36d291f26b..67ef84316c 100644 --- a/tag/images.html +++ b/tag/images.html @@ -52,7 +52,7 @@ - + diff --git a/tag/import.html b/tag/import.html index b0e8b20ecc..33d5bb193d 100644 --- a/tag/import.html +++ b/tag/import.html @@ -52,7 +52,7 @@ - + diff --git a/tag/infrastructure.html b/tag/infrastructure.html index c58e7917f4..f278648509 100644 --- a/tag/infrastructure.html +++ b/tag/infrastructure.html @@ -52,7 +52,7 @@ - + diff --git 
[Each of the following tag pages receives the same one-line change (hunk @@ -52,7 +52,7 @@): tag/installing-kubevirt.html, tag/instancetypes.html, tag/intel.html, tag/iptables.html, tag/istio.html, tag/jenkins.html, tag/kubecon.html, tag/kubefed.html, tag/kubernetes-nmstate.html, tag/kubernetes.html, tag/kubetron.html, tag/kubevirt-ansible.html, tag/kubevirt-hyperconverged.html, tag/kubevirt-installation.html, tag/kubevirt-objects.html, tag/kubevirt-tekton-tasks.html, tag/kubevirt-tutorial.html, tag/kubevirt-upgrade.html, tag/kubevirt.core.html, tag/kubevirt.html, tag/kubevirtci.html, tag/kvm.html, tag/lab.html, tag/laboratory.html, tag/libvirt.html, tag/lifecycle.html, tag/live-migration.html, tag/load-balancer.html, tag/memory.html, tag/mesh.html, tag/metallb.html, tag/metrics.html, tag/microsoft-windows-container.html, tag/microsoft-windows-kubernetes.html, tag/milestone.html, tag/minikube.html, tag/monitoring.html, tag/multicluster.html, tag/multiple-networks.html, tag/multus.html, tag/network.html, tag/networking.html, tag/networkpolicy.html, tag/neutron.html, tag/nmo.html, tag/nmstate.html, tag/node-drain.html, tag/node-exporter.html, tag/novnc.html, tag/ntp.html, tag/numa.html, tag/nvidia.html, tag/objects.html, tag/octant.html, tag/okd-console.html, tag/okd.html, tag/openshift-console.html, tag/openshift-web-console.html, tag/openshift.html, tag/openstack.html, tag/operation.html, tag/operations.html, tag/operator-manual.html, tag/overcommitment.html, tag/ovirt.html, tag/ovn.html, tag/ovs-cni.html, tag/party-time.html, tag/pass-through.html, tag/passthrough.html, tag/preferences.html, tag/prometheus-operator.html, tag/prometheus.html, tag/prow.html, tag/qemu.html, tag/quickstart.html, tag/rbac.html, tag/real-time.html, tag/registry.html.]

diff --git a/tag/release-notes.html b/tag/release-notes.html
index d7e136d2ad..d6dcaaa2e8 100644
--- a/tag/release-notes.html
+++ b/tag/release-notes.html
@@ -52,7 +52,7 @@
@@ -249,6 +249,8 @@
     Articles tagged with release notes

+    • KubeVirt v1.3.0 (17 Jul 2024)
+
     • KubeVirt v1.2.0 (05 Mar 2024)
     • KubeVirt v1.1.0 (06 Nov 2023)
     •

[The same one-line change (hunk @@ -52,7 +52,7 @@) is also applied to each of: tag/release.html, tag/remove-vm.html, tag/review.html, tag/rhcos.html, tag/roadmap.html, tag/roles.html, tag/rook.html, tag/sandbox.html, tag/scheduling.html, tag/sdn.html, tag/security.html, tag/service-mesh.html, tag/serviceaccount.html, tag/skydive.html, tag/start-vm.html, tag/stop-vm.html, tag/storage.html, tag/talk.html, tag/tekton-pipelines.html, tag/topologykeys.html, tag/tproxy.html, tag/unit-testing.html, tag/upgrading.html, tag/upload.html, tag/use-kubevirt.html, tag/user-interface.html, tag/v1.0.html, tag/v1.1.0.html, tag/vagrant.html, tag/vgpu.html, tag/video.html, tag/virt-customize.html, tag/virtlet.html, tag/virtual-machine-management.html, tag/virtual-machine.html, tag/virtual-machines.html, tag/virtualmachine.html, tag/virtualmachineinstancetype.html, tag/virtualmachinepreference.html, tag/virtvnc.html, tag/vm-import.html, tag/vm.html, tag/volume-types.html, tag/vscode.html, tag/weavenet.html, tag/web-interface.html, tag/website.html, tag/windows.html, videos/community/meetings.html, videos/index.html, videos/interviews.html, videos/kubevirt-summit.html, videos/talks.html, videos/tech-demos.html.]