Much has changed since my last post introducing the VirtualMachine{Flavor,Preference} KubeVirt CRDs. In this post I'm going to touch on some of these changes, cover what's coming next and provide a quick demo at the end.
What’s new
Introduction of VirtualMachine{ClusterPreference,Preference}
https://github.com/kubevirt/kubevirt/pull/7554
https://github.com/kubevirt/kubevirt/pull/7578
The two main PRs referenced by my previous post have landed, refactoring the initial code and introducing the VirtualMachine{ClusterPreference,Preference} CRDs to KubeVirt.
PreferredCPUTopology now defaults to PreferSockets
https://github.com/kubevirt/kubevirt/pull/7812
This was a trivial change, as it is something the VirtualMachineInstance mutation webhook already defaults to when no topology is provided but a number of vCPUs is defined through resource requests.
func (mutator *VMIsMutator) setDefaultGuestCPUTopology(vmi *v1.VirtualMachineInstance) {
    cores := uint32(1)
    threads := uint32(1)
    sockets := uint32(1)
    vmiCPU := vmi.Spec.Domain.CPU
    if vmiCPU == nil || (vmiCPU.Cores == 0 && vmiCPU.Sockets == 0 && vmiCPU.Threads == 0) {
        // create cpu topology struct
        if vmi.Spec.Domain.CPU == nil {
            vmi.Spec.Domain.CPU = &v1.CPU{}
        }
        // if cores, sockets, threads are not set, take value from domain resources request or limits and
        // set value into sockets, which have best performance (https://bugzilla.redhat.com/show_bug.cgi?id=1653453)
        resources := vmi.Spec.Domain.Resources
        if cpuLimit, ok := resources.Limits[k8sv1.ResourceCPU]; ok {
            sockets = uint32(cpuLimit.Value())
        } else if cpuRequests, ok := resources.Requests[k8sv1.ResourceCPU]; ok {
            sockets = uint32(cpuRequests.Value())
        }
        vmi.Spec.Domain.CPU.Sockets = sockets
        vmi.Spec.Domain.CPU.Cores = cores
        vmi.Spec.Domain.CPU.Threads = threads
    }
}
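In flavor terms this means that, absent an explicit PreferredCPUTopology in a preference, the guest CPU count supplied by a flavor is now applied as sockets. Below is a minimal sketch of that behaviour; the flavor name and CPU count are illustrative values, not taken from the PR itself:

apiVersion: flavor.kubevirt.io/v1alpha1
kind: VirtualMachineFlavor
metadata:
  name: sockets-example   # illustrative name
spec:
  cpu:
    guest: 4              # with no PreferredCPUTopology supplied, this count is applied as sockets
# The resulting VirtualMachineInstance topology would then be:
#   spec:
#     domain:
#       cpu:
#         sockets: 4
#         cores: 1
#         threads: 1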
VirtualMachineInstance mutation webhook application dropped
https://github.com/kubevirt/kubevirt/pull/7806
Lots of work went into this PR, but ultimately the use cases around direct use of the VirtualMachineInstance CRD by end users aren't strong enough to justify the extra complexity it introduced.
Default device preference application
https://github.com/kubevirt/kubevirt/pull/7618
https://github.com/kubevirt/kubevirt/pull/7919
With the application of flavors and preferences no longer moving to the VirtualMachineInstance mutation webhook, we now had to ensure that all devices would be present by the time the existing application happens within the VirtualMachine controller.
The above changes move and share code from the VirtualMachineInstance mutation webhook that adds any missing Disks for listed Volumes and also adds a default Network and associated Interface if none are provided. This ensures that any preferences applied by the VirtualMachine controller to the VirtualMachineInstance object are also applied to these devices.
For example, given the following VirtualMachinePreference that defines a preferredDiskBus and preferredInterfaceModel of virtio, a VirtualMachine that doesn't list any Disks or Interfaces will now have these preferences applied to the devices added during the creation of the VirtualMachineInstance, with these devices now being introduced by the VirtualMachine controller itself instead of the VirtualMachineInstance mutation webhook.
cat <<EOF | ./cluster-up/kubectl.sh apply -f -
---
apiVersion: flavor.kubevirt.io/v1alpha1
kind: VirtualMachineFlavor
metadata:
  name: small
spec:
  cpu:
    guest: 2
  memory:
    guest: 128Mi
---
apiVersion: flavor.kubevirt.io/v1alpha1
kind: VirtualMachinePreference
metadata:
  name: virtio
spec:
  devices:
    preferredDiskBus: virtio
    preferredInterfaceModel: virtio
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example
spec:
  flavor:
    kind: VirtualMachineFlavor
    name: small
  preference:
    kind: VirtualMachinePreference
    name: virtio
  running: false
  template:
    spec:
      domain:
        devices: {}
      volumes:
      - containerDisk:
          image: registry:5000/kubevirt/cirros-container-disk-demo:devel
        name: containerdisk
      - cloudInitNoCloud:
          userData: |
            #!/bin/sh
            echo 'printed from cloud-init userdata'
        name: cloudinitdisk
EOF
$ ./cluster-up/virtctl.sh start example && ./cluster-up/kubectl.sh wait vms/example --for=condition=Ready
$ ./cluster-up/kubectl.sh get vmis/example -o json | jq .spec.domain.devices.disks
selecting docker as container runtime
[
  {
    "disk": {
      "bus": "virtio"
    },
    "name": "containerdisk"
  },
  {
    "disk": {
      "bus": "virtio"
    },
    "name": "cloudinitdisk"
  }
]
$ ./cluster-up/kubectl.sh get vmis/example -o json | jq .spec.domain.devices.interfaces
selecting docker as container runtime
[
  {
    "bridge": {},
    "model": "virtio",
    "name": "default"
  }
]
Removal of SharedInformers
https://github.com/kubevirt/kubevirt/pull/7935
I was alerted to issues around the use of SharedInformers within our API workers during the PR moving flavor application to the VirtualMachineInstance mutation webhook. After this I started to notice occasional CI failures, both upstream and downstream, that appeared to match the suggested symptoms: either a recently created VirtualMachine{Flavor,Preference} object would not be seen by another worker, or the generation of the object seen by the worker would be older than expected, leading to failures. As such we decided to remove the use of SharedInformers for flavors and preferences, reverting to direct client calls for retrieval instead. The impact of this change hasn't been fully measured yet, but it is on our radar for the coming weeks to ensure performance isn't impacted.
Upcoming changes
Versioning through ControllerRevisions
https://github.com/kubevirt/kubevirt/pull/7875
This is another large foundational PR, making the entire concept usable by users in the real world. The current design is for ControllerRevisions containing the VirtualMachineFlavorSpec and VirtualMachinePreferenceSpec to be created after the initial application of a flavor and preference to a VirtualMachineInstance by the VirtualMachine controller at start time. A reference to these ControllerRevisions is then patched into the FlavorMatcher and PreferenceMatcher associated with the VirtualMachine and used to gather the specs in the future, hopefully ensuring that future restarts will continue to produce the same VirtualMachineInstance object.
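As a rough sketch of that flow (the revisionName field and its values below are illustrative of the unmerged PR and may well change before it lands), the matchers on the VirtualMachine would end up carrying references to the stored revisions:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example
spec:
  flavor:
    kind: VirtualMachineFlavor
    name: small
    revisionName: example-flavor-small-v1        # illustrative; patched in by the VirtualMachine controller
  preference:
    kind: VirtualMachinePreference
    name: virtio
    revisionName: example-preference-virtio-v1   # illustrative; patched in by the VirtualMachine controller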
s/VirtualMachineFlavor/VirtualMachineInstancetype/g
I was not involved in the initial design and naming of the CRDs, but after coming onboard it was quickly highlighted that while OpenStack uses Flavors, all other public cloud providers use some form of Type object to contain their resource and performance characteristics. With that in mind we have agreed to rename the CRDs for KubeVirt.
VirtualMachineFlavor and VirtualMachineClusterFlavor will become VirtualMachineInstancetype and VirtualMachineClusterInstancetype.
This aligns us with the public cloud providers while making it clear that these CRDs relate to the VirtualMachineInstance. We couldn't shorten this to VirtualMachineType anyway, as that could easily clash with the MachineType term from QEMU that we already expose as part of our API.
VirtualMachinePreference and VirtualMachineClusterPreference will also become VirtualMachineInstancePreference and VirtualMachineInstanceClusterPreference.
How and when this happens is still up in the air, but the current suggestion is that these new CRDs will live alongside the existing CRDs while we deprecate and eventually remove the originals from the project.
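Assuming the spec shape carries over unchanged, the small flavor from the earlier example might then read as follows; note that the API group and version here are illustrative guesses for the renamed CRDs, not something settled by the project yet:

apiVersion: instancetype.kubevirt.io/v1alpha1   # illustrative group/version for the renamed CRD
kind: VirtualMachineInstancetype
metadata:
  name: small
spec:
  cpu:
    guest: 2
  memory:
    guest: 128Mi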
Design document updates
Between versioning and renaming there's a lot of change listed above, and I do want to get back to the design document before this all lands.
Demo
As noted above, this demo is using the unmerged versioning PR https://github.com/kubevirt/kubevirt/pull/7875.
The demo itself introduces the current CRDs, their basic behaviour, their interaction with default devices and finally their behaviour with the above versioning PR applied.
I was time limited in the downstream presentation I gave using this recording, so please be aware it moves pretty quickly between topics. I'd highly recommend downloading the file and playing it locally with asciinema, using the spacebar to pause between commands.