
I had the pleasure of delivering a very short KubeVirt Summit 2023 presentation with my colleague Felix last week covering some of our work around instance types and virtctl over the last year. Please find the slides and a recording below.

The implementation of this feature has now landed and will be included in the upcoming v0.59.0 release of KubeVirt.
https://github.com/kubevirt/kubevirt/pull/8480
There have been some small changes to the design. Notably, DataSources are now supported as a target of inferFromVolume, and the previously listed camel-case annotations are now hyphenated labels:
- instancetype.kubevirt.io/default-instancetype
- instancetype.kubevirt.io/default-instancetype-kind (defaults to VirtualMachineClusterInstancetype)
- instancetype.kubevirt.io/default-preference
- instancetype.kubevirt.io/default-preference-kind (defaults to VirtualMachineClusterPreference)

I've recorded a new demo below using an SSP operator development environment. The demo now covers the following:
- DataImportCrons deployed by the SSP operator
- kubevirt/common-instancetypes deployed by the SSP operator
- DataSources and PVCs created by CDI
- Inferring defaults from a DataSource for a VirtualMachine
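As a rough sketch of what the DataSource support enables (the resource names below are illustrative, not taken from the demo), a VirtualMachine can now infer its defaults through a DataVolume whose sourceRef points at a labelled DataSource:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: infer-from-datasource
spec:
  instancetype:
    inferFromVolume: rootdisk
  preference:
    inferFromVolume: rootdisk
  running: true
  dataVolumeTemplates:
  - metadata:
      name: infer-from-datasource
    spec:
      sourceRef:
        kind: DataSource
        name: centos-stream8
        namespace: kubevirt-os-images
      storage:
        resources:
          requests:
            storage: 10Gi
  template:
    spec:
      domain:
        devices: {}
      volumes:
      - dataVolume:
          name: infer-from-datasource
        name: rootdisk
```

With this in place KubeVirt follows the rootdisk Volume through the DataVolume to the DataSource and reads the default-instancetype and default-preference labels from there.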
Welcome to part #4 of this series following the development of instance types and preferences within KubeVirt!
inferFromVolume

This feature has now landed in full within KubeVirt with some subtle changes:
https://github.com/kubevirt/kubevirt/pull/8480
The previously discussed annotations have been replaced by labels to allow users (such as the downstream OpenShift UI within Red Hat) to use server side filtering to find suitably decorated resources within a given cluster.
$ env | grep KUBEVIRT
KUBEVIRT_PROVIDER=k8s-1.24
KUBEVIRT_MEMORY=16384
KUBEVIRT_STORAGE=rook-ceph-default
[..]
$ wget https://github.com/cirros-dev/cirros/releases/download/0.6.1/cirros-0.6.1-x86_64-disk.img
[..]
$ ./cluster-up/virtctl.sh image-upload pvc cirros --size=1Gi --image-path=./cirros-0.6.1-x86_64-disk.img
[..]
$ ./cluster-up/kubectl.sh kustomize https://github.com/kubevirt/common-instancetypes.git | ./cluster-up/kubectl.sh apply -f -
[..]
$ ./cluster-up/kubectl.sh label pvc/cirros instancetype.kubevirt.io/default-instancetype=server.tiny instancetype.kubevirt.io/default-preference=cirros
$ ./cluster-up/kubectl.sh get pvc/cirros -o json | jq .metadata.labels
selecting docker as container runtime
{
  "instancetype.kubevirt.io/default-instancetype": "server.tiny",
  "instancetype.kubevirt.io/default-preference": "cirros"
}
[..]
$ ./cluster-up/kubectl.sh apply -f - << EOF
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: cirros
spec:
  instancetype:
    inferFromVolume: cirros-disk
  preference:
    inferFromVolume: cirros-disk
  running: true
  template:
    spec:
      domain:
        devices: {}
      volumes:
      - persistentVolumeClaim:
          claimName: cirros
        name: cirros-disk
EOF
[..]
$ ./cluster-up/kubectl.sh get vms/cirros -o json | jq '.spec.instancetype, .spec.preference'
selecting docker as container runtime
{
  "kind": "virtualmachineclusterinstancetype",
  "name": "server.tiny",
  "revisionName": "cirros-server.tiny-ef0cbfb6-b48c-4e9f-aa7a-a06878b42503-1"
}
{
  "kind": "virtualmachineclusterpreference",
  "name": "cirros",
  "revisionName": "cirros-cirros-5bddae5d-47f8-433b-afa2-d4f846ef1830-1"
}
Changes have also been made to the CDI project ensuring these labels are passed down when importing volumes into an environment using the DataImportCron resource. Any DataVolumes, DataSources or PVCs created by this process will have these labels copied over from the initial DataImportCron. The following example is from an environment where the SSP operator has deployed a labelled DataImportCron to CDI:
$ kubectl get all,pvc -A -l instancetype.kubevirt.io/default-preference
NAMESPACE            NAME                                                       AGE
kubevirt-os-images   datasource.cdi.kubevirt.io/centos-stream8                  31m

NAMESPACE            NAME                                                       AGE
kubevirt-os-images   dataimportcron.cdi.kubevirt.io/centos-stream8-image-cron   4m29s

NAMESPACE            NAME                                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
kubevirt-os-images   persistentvolumeclaim/centos-stream8-2f16c067b974   Bound    pvc-4be6ea30-9d7d-480a-828c-38fa2abc6597   10Gi       RWX            rook-ceph-block   4m19s
$ kubectl get persistentvolumeclaim/centos-stream8-2f16c067b974 -n kubevirt-os-images -o json | jq .metadata.labels
{
  "app": "containerized-data-importer",
  "app.kubernetes.io/component": "storage",
  "app.kubernetes.io/managed-by": "cdi-controller",
  "cdi.kubevirt.io/dataImportCron": "centos-stream8-image-cron",
  "instancetype.kubevirt.io/default-instancetype": "server.medium",
  "instancetype.kubevirt.io/default-preference": "centos.8.stream"
}
I plan on recording and posting an updated demo shortly.
PreferredStorageClassName

A new PreferredStorageClassName preference has been added:
https://github.com/kubevirt/kubevirt/pull/8802
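From my reading of the PR the new preference surfaces under a volumes section of the preference spec; the sketch below assumes that shape, and the resource name and storage class are made up:

```yaml
apiVersion: instancetype.kubevirt.io/v1alpha2
kind: VirtualMachinePreference
metadata:
  name: preferred-storage-class-demo
spec:
  volumes:
    preferredStorageClassName: rook-ceph-block
```

The idea being that DataVolumes created through the dataVolumeTemplates of a VirtualMachine referencing such a preference default to this storage class when none is set explicitly.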
The common-instancetypes project has moved under the kubevirt namespace and had a number of rc releases:
https://github.com/kubevirt/common-instancetypes/releases
Recent changes include the introduction of new instancetypes, preferences and various bits of house-keeping.
The VirtualMachineCluster{Instancetype,Preference} resources are now also deployed by the SSP operator by default:
https://github.com/kubevirt/ssp-operator/pull/453
$ kubectl get all -A -l app.kubernetes.io/name=common-instancetypes
NAMESPACE NAME AGE
virtualmachineclusterpreference.instancetype.kubevirt.io/alpine 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.7 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.7.desktop 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.8.stream 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.8.stream.desktop 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.9.stream 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.9.stream.desktop 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/cirros 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/fedora 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.7 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.7.desktop 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.8 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.8.desktop 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.9 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.9.desktop 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/ubuntu 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.10 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.10.virtio 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.11 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.11.virtio 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k12 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k12.virtio 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k16 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k16.virtio 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k19 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k19.virtio 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k22 6h17m
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k22.virtio 6h17m
NAMESPACE NAME AGE
virtualmachineclusterinstancetype.instancetype.kubevirt.io/cx1.2xlarge 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/cx1.4xlarge 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/cx1.8xlarge 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/cx1.large 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/cx1.medium 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/cx1.xlarge 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/gn1.2xlarge 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/gn1.4xlarge 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/gn1.8xlarge 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/gn1.xlarge 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/highperformance.large 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/highperformance.medium 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/highperformance.small 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/m1.2xlarge 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/m1.4xlarge 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/m1.8xlarge 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/m1.large 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/m1.xlarge 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/n1.2xlarge 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/n1.4xlarge 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/n1.8xlarge 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/n1.large 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/n1.medium 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/n1.xlarge 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/server.large 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/server.medium 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/server.micro 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/server.small 6h17m
virtualmachineclusterinstancetype.instancetype.kubevirt.io/server.tiny 6h17m
virtctl create vm

A new virtctl command has been introduced that generates a VirtualMachine definition:
https://github.com/kubevirt/kubevirt/pull/8878
This includes basic support for instance types and preferences, with support for inferFromVolume hopefully landing in the near future:
$ virtctl create vm --instancetype foo --preference bar --running --volume-clone-ds=example/datasource --name test
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  creationTimestamp: null
  name: test
spec:
  dataVolumeTemplates:
  - metadata:
      creationTimestamp: null
      name: test-ds-datasource
    spec:
      sourceRef:
        kind: DataSource
        name: datasource
        namespace: example
      storage:
        resources: {}
  instancetype:
    name: foo
  preference:
    name: bar
  running: true
  template:
    metadata:
      creationTimestamp: null
    spec:
      domain:
        devices: {}
        resources: {}
      terminationGracePeriodSeconds: 180
      volumes:
      - dataVolume:
          name: test-ds-datasource
        name: test-ds-datasource
status: {}
Work is under way to add resource requests to Instance types:
https://github.com/kubevirt/kubevirt/pull/8729
This will close a previous gap with instance types and allow us to use the currently blocked dedicatedCPUPlacement feature again, which requires the use of resource requests.
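For context, dedicatedCPUPlacement is already part of the instance type CPU API; a minimal sketch of an instance type using it once the feature is unblocked (the name is illustrative):

```yaml
apiVersion: instancetype.kubevirt.io/v1alpha2
kind: VirtualMachineInstancetype
metadata:
  name: pinned-demo
spec:
  cpu:
    guest: 2
    dedicatedCPUPlacement: true
  memory:
    guest: 4Gi
```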
v1alpha3

The introduction of resource requests, and a possible move to make the guest-visible resources optional, has prompted us to look at introducing yet another alpha version of the API:
https://github.com/kubevirt/kubevirt/pull/9052
The logic being that we can't make part of the API optional without moving to a new version, and we can't move to v1beta1 while still making changes to the API. This version should remain backward compatible with the older versions, but work is still required to determine whether a conversion strategy is needed for stored objects, both in etcd and in ControllerRevisions.
v1alpha2 Deprecation

With the introduction of a new API version I also want to start looking into what it will take to deprecate our older versions while we are still in alpha:
https://github.com/kubevirt/kubevirt/issues/9051
This issue sets out the following tasks to be investigated:
- [ ] Introduce a new v1alpha3 version ahead of backward-incompatible changes landing
- [ ] Deprecate v1alpha1 and v1alpha2 versions
- [ ] Implement a conversion strategy for stored objects from v1alpha1 and v1alpha2
- [ ] Implement a conversion strategy for objects stored in ControllerRevisions associated with existing VirtualMachines
This work could well be deferred until after v1beta1, but it's still a useful mental exercise to plan out what will eventually be required.
A while ago I quickly drafted an idea around expressing the resource requirements of a workload within VirtualMachinePreferenceSpec:
https://github.com/kubevirt/kubevirt/pull/8780
The PR is still pretty rough but the demo text included sets out what I’d like to achieve with the feature eventually. The general idea being to ensure that an Instance type or raw VirtualMachine definition using a given Preference provides the required resources to run a given workload correctly.
$ ./cluster-up/kubectl.sh apply -f https://raw.githubusercontent.com/kubevirt/common-instancetypes/main/common-instancetypes-all-bundle.yaml
[..]
$ ./cluster-up/kubectl.sh get VirtualMachinePreference cirros -o json | jq .spec
selecting docker as container runtime
{
  "devices": {
    "preferredDiskBus": "virtio",
    "preferredInterfaceModel": "virtio"
  }
}
$ ./cluster-up/kubectl.sh patch VirtualMachinePreference cirros --type=json -p='[{"op": "add", "path": "/spec/requirements", "value": {"cpu":{"guest": 2}}}]'
$ ./cluster-up/kubectl.sh get VirtualMachinePreference cirros -o json | jq .spec
selecting docker as container runtime
{
  "devices": {
    "preferredDiskBus": "virtio",
    "preferredInterfaceModel": "virtio"
  },
  "requirements": {
    "cpu": {
      "guest": 2
    }
  }
}
$ ./cluster-up/kubectl.sh get virtualmachineinstancetype server.tiny -o json | jq .spec
selecting docker as container runtime
{
  "cpu": {
    "guest": 1
  },
  "memory": {
    "guest": "1.5Gi"
  }
}
$ ./cluster-up/kubectl.sh apply -f - << EOF
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: preference-requirements-demo
spec:
  instancetype:
    name: server.tiny
    kind: virtualmachineinstancetype
  preference:
    name: cirros
    kind: virtualmachinepreference
  running: false
  template:
    spec:
      domain:
        devices: {}
      volumes:
      - containerDisk:
          image: registry:5000/kubevirt/cirros-container-disk-demo:devel
        name: containerdisk
EOF
The request is invalid: spec.instancetype: Failure checking preference requirements: Insufficient CPU resources of 1 vCPU provided by instance type, preference requires 2 vCPU
$ ./cluster-up/kubectl.sh apply -f - << EOF
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: preference-requirements-demo
spec:
  preference:
    name: cirros
    kind: virtualmachinepreference
  running: false
  template:
    spec:
      domain:
        cpu:
          sockets: 1
        devices: {}
      volumes:
      - containerDisk:
          image: registry:5000/kubevirt/cirros-container-disk-demo:devel
        name: containerdisk
EOF
The request is invalid: spec.template.spec.domain.cpu: Failure checking preference requirements: Insufficient CPU resources of 1 vCPU provided by VirtualMachine, preference requires 2 vCPU
$ ./cluster-up/kubectl.sh get virtualmachineinstancetype server.large -o json | jq .spec
selecting docker as container runtime
{
  "cpu": {
    "guest": 2
  },
  "memory": {
    "guest": "8Gi"
  }
}
$ ./cluster-up/kubectl.sh apply -f - << EOF
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: preference-requirements-demo
spec:
  instancetype:
    name: server.large
    kind: virtualmachineinstancetype
  preference:
    name: cirros
    kind: virtualmachinepreference
  running: false
  template:
    spec:
      domain:
        devices: {}
      volumes:
      - containerDisk:
          image: registry:5000/kubevirt/cirros-container-disk-demo:devel
        name: containerdisk
EOF
virtualmachine.kubevirt.io/preference-requirements-demo created
As I alluded to in my previous demo post we have some exciting new features currently under development concerning instance types and preferences. This post introduces one of these, the ability for KubeVirt to infer the default instance type and preference of a VirtualMachine from a suggested Volume.
The current design document PR is listed below:
design-proposals: Default instance types inferred from volumes https://github.com/kubevirt/community/pull/190
The tl;dr being that the {Instancetype,Preference}Matchers will be extended with an inferFromVolume attribute that references the name of a Volume associated with the VirtualMachine. This Volume will then be used to infer defaults by looking for the following annotations:
- instancetype.kubevirt.io/defaultInstancetype
- instancetype.kubevirt.io/defaultInstancetypeKind (defaults to VirtualMachineClusterInstancetype)
- instancetype.kubevirt.io/defaultPreference
- instancetype.kubevirt.io/defaultPreferenceKind (defaults to VirtualMachineClusterPreference)

Initially only PVC and DataVolume derived Volumes will be supported but this will likely be extended to anything with annotations, such as Containers.
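A minimal sketch of the proposed flow, with illustrative resource names: a PVC decorated with these annotations and a VirtualMachine inferring its defaults from the Volume that references it:

```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fedora
  annotations:
    instancetype.kubevirt.io/defaultInstancetype: server.medium
    instancetype.kubevirt.io/defaultPreference: fedora
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora
spec:
  instancetype:
    inferFromVolume: rootdisk
  preference:
    inferFromVolume: rootdisk
  running: true
  template:
    spec:
      domain:
        devices: {}
      volumes:
      - name: rootdisk
        persistentVolumeClaim:
          claimName: fedora
```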
Feedback is welcome in the design review document if you have any!
This demo is based on some work in progress code posted below:
WIP - Introduce support for default instance type and preference PVC annotations https://github.com/kubevirt/kubevirt/pull/8480
The demo script, resource definitions and asciinema recording can be found below:
https://github.com/lyarwood/demos/tree/main/kubevirt/instancetypes/4-infer-default-instancetypes
This is my third demo for KubeVirt, this time introducing the following features and bugfixes:
- v1alpha2 instancetype API version
- AutoAttachInputDevice auto attach & PreferredAutoAttachInputDevice preference attributes
- PreferredMachineType preference

If you've been following my instance type development blog series you will note that some of these aren't that recent, but as I've not covered them in a demo until now I wanted to touch on them again.
Expect more demos in the coming weeks as I catch up with the current state of development.
I’m not going to include a complete transcript this time but the script and associated examples are available on my demos repo.
Welcome to part #3 of this series following the development of instancetypes and preferences within KubeVirt!
v1alpha2

https://github.com/kubevirt/kubevirt/pull/8282
A new v1alpha2 instancetype API version has now been introduced in the above PR from my colleague akrejcir. This switches the ControllerRevisions over to using complete objects instead of just the spec of the object. Amongst other things this means that kubectl can present the complete object to the user within the ControllerRevision as shown below:
$ kubectl.sh apply -f examples/csmall.yaml -f examples/vm-cirros-csmall.yaml
virtualmachineinstancetype.instancetype.kubevirt.io/csmall created
virtualmachine.kubevirt.io/vm-cirros-csmall created
$ kubectl get vm/vm-cirros-csmall -o json | jq .spec.instancetype
{
"kind": "VirtualMachineInstancetype",
"name": "csmall",
"revisionName": "vm-cirros-csmall-csmall-72c3a35b-6e18-487d-bebf-f73c7d4f4a40-1"
}
$ kubectl get controllerrevision/vm-cirros-csmall-csmall-72c3a35b-6e18-487d-bebf-f73c7d4f4a40-1 -o json | jq .
{
  "apiVersion": "apps/v1",
  "data": {
    "apiVersion": "instancetype.kubevirt.io/v1alpha2",
    "kind": "VirtualMachineInstancetype",
    "metadata": {
      "creationTimestamp": "2022-09-30T12:20:19Z",
      "generation": 1,
      "name": "csmall",
      "namespace": "default",
      "resourceVersion": "10303",
      "uid": "72c3a35b-6e18-487d-bebf-f73c7d4f4a40"
    },
    "spec": {
      "cpu": {
        "guest": 1
      },
      "memory": {
        "guest": "128Mi"
      }
    }
  },
  "kind": "ControllerRevision",
  "metadata": {
    "creationTimestamp": "2022-09-30T12:20:19Z",
    "name": "vm-cirros-csmall-csmall-72c3a35b-6e18-487d-bebf-f73c7d4f4a40-1",
    "namespace": "default",
    "ownerReferences": [
      {
        "apiVersion": "kubevirt.io/v1",
        "blockOwnerDeletion": true,
        "controller": true,
        "kind": "VirtualMachine",
        "name": "vm-cirros-csmall",
        "uid": "5216527a-1d31-4637-ad3a-b640cb9949a2"
      }
    ],
    "resourceVersion": "10307",
    "uid": "a7bc784b-4cea-45d7-8432-15418e1dd7d3"
  },
  "revision": 0
}
Please note that while the API version has been incremented, this new version is fully backward compatible with v1alpha1 and as a result requires no modifications to existing v1alpha1 resources.
expand-spec APIs

https://github.com/kubevirt/kubevirt/pull/7549
akrejcir also landed two new subresource APIs that can take either a raw VirtualMachine definition or an existing VirtualMachine resource and expand the VirtualMachineInstanceSpec within using any referenced VirtualMachineInstancetype or VirtualMachinePreference resources.
expand-spec for existing VirtualMachines

The following expands the spec of a defined vm-cirros-csmall VirtualMachine resource that references the example csmall instancetype, using diff to show the changes between the original and expanded definitions returned by the API:
$ ./cluster-up/kubectl.sh apply -f examples/csmall.yaml -f examples/vm-cirros-csmall.yaml
[..]
$ ./cluster-up/kubectl.sh proxy --port=8080 &
[..]
$ diff --color -u <(./cluster-up/kubectl.sh get vms/vm-cirros-csmall -o json | jq -S .spec.template.spec.domain) \
<(curl http://localhost:8080/apis/subresources.kubevirt.io/v1/namespaces/default/virtualmachines/vm-cirros-csmall/expand-spec | jq -S .spec.template.spec.domain)
[..]
--- /dev/fd/63 2022-10-05 15:51:23.599135528 +0100
+++ /dev/fd/62 2022-10-05 15:51:23.599135528 +0100
@@ -1,4 +1,9 @@
{
+ "cpu": {
+ "cores": 1,
+ "sockets": 1,
+ "threads": 1
+ },
"devices": {
"disks": [
{
@@ -16,5 +21,8 @@
"machine": {
"type": "q35"
},
+ "memory": {
+ "guest": "128Mi"
+ },
"resources": {}
}
expand-spec for a raw VirtualMachine definition

The following expands the spec of a raw, as yet undefined, VirtualMachine passed to the API that references the example csmall instancetype, again using diff to show the changes between the original raw definition and the returned expanded definition:
$ ./cluster-up/kubectl.sh apply -f examples/csmall.yaml
[..]
$ ./cluster-up/kubectl.sh proxy --port=8080 &
[..]
$ diff --color -u <(jq -S .spec.template.spec.domain ./examples/vm-cirros-csmall.yaml) \
    <(curl -X PUT -H "Content-Type: application/json" -d @./examples/vm-cirros-csmall.yaml http://localhost:8080/apis/subresources.kubevirt.io/v1/expand-spec | jq -S .spec.template.spec.domain)
--- /dev/fd/63 2022-10-05 16:19:56.035111587 +0100
+++ /dev/fd/62 2022-10-05 16:19:56.035111587 +0100
@@ -1,4 +1,9 @@
{
+ "cpu": {
+ "cores": 1,
+ "sockets": 1,
+ "threads": 1
+ },
"devices": {
"disks": [
{
@@ -16,5 +21,8 @@
"machine": {
"type": "q35"
},
+ "memory": {
+ "guest": "128Mi"
+ },
"resources": {}
}
Please note that some ongoing work remains around the raw VirtualMachine definition API.
AutoattachInputDevice

https://github.com/kubevirt/kubevirt/pull/8006
A new AutoattachInputDevice toggle to control the attachment of a default input device has been introduced:
$ ./cluster-up/kubectl.sh apply -f examples/csmall.yaml
$ ./cluster-up/kubectl.sh apply -f - << EOF
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-autoattachinputdevice
spec:
  instancetype:
    name: csmall
    kind: virtualmachineinstancetype
  running: true
  template:
    spec:
      domain:
        devices:
          autoattachInputDevice: true
      volumes:
      - containerDisk:
          image: registry:5000/kubevirt/cirros-container-disk-demo:devel
        name: containerdisk
EOF
$ ./cluster-up/kubectl.sh get vmis/demo-autoattachinputdevice -o json | jq .spec.domain.devices.inputs
selecting docker as container runtime
[
  {
    "bus": "usb",
    "name": "default-0",
    "type": "tablet"
  }
]
An associated PreferredAutoattachInputDevice preference has also been introduced to control AutoattachInputDevice along with the existing preferredInputType and preferredInputBus preferences:
$ ./cluster-up/kubectl.sh apply -f examples/csmall.yaml
[..]
$ ./cluster-up/kubectl.sh apply -f - << EOF
---
apiVersion: instancetype.kubevirt.io/v1alpha2
kind: VirtualMachinePreference
metadata:
  name: preferredinputdevice
spec:
  devices:
    preferredAutoattachInputDevice: true
    preferredInputType: tablet
    preferredInputBus: virtio
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-preferredinputdevice
spec:
  instancetype:
    name: csmall
    kind: virtualmachineinstancetype
  preference:
    name: preferredinputdevice
    kind: virtualmachinepreference
  running: true
  template:
    spec:
      domain:
        devices: {}
      volumes:
      - containerDisk:
          image: registry:5000/kubevirt/cirros-container-disk-demo:devel
        name: containerdisk
EOF
[..]
$ ./cluster-up/kubectl.sh get vms/demo-preferredinputdevice -o json | jq .spec.template.spec.domain.devices
{}
$ ./cluster-up/kubectl.sh get vmis/demo-preferredinputdevice -o json | jq .spec.domain.devices.autoattachInputDevice
true
$ ./cluster-up/kubectl.sh get vmis/demo-preferredinputdevice -o json | jq .spec.domain.devices.inputs
[
  {
    "bus": "virtio",
    "name": "default-0",
    "type": "tablet"
  }
]
common-instancetypes

https://github.com/lyarwood/common-instancetypes
The KubeVirt project has for a while now provided a set of common-templates to help users define VirtualMachines. These OpenShift/OKD templates cover a range of guest operating systems and workloads (server, desktop, highperformance, etc.).
I've created an instancetype-based equivalent outside of KubeVirt for the time being. My common-instancetypes repo provides instancetypes and preferences covering all of the combinations covered by common-templates, with some hopefully useful additions such as preferences for CirrOS and Alpine Linux.
The repo currently uses kustomize to generate everything so deployment into a cluster is extremely simple:
$ ./cluster-up/kubectl.sh kustomize https://github.com/lyarwood/common-instancetypes.git | ./cluster-up/kubectl.sh apply -f -
[..]
virtualmachineclusterinstancetype.instancetype.kubevirt.io/highperformance.large created
virtualmachineclusterinstancetype.instancetype.kubevirt.io/highperformance.medium created
virtualmachineclusterinstancetype.instancetype.kubevirt.io/highperformance.small created
virtualmachineclusterinstancetype.instancetype.kubevirt.io/server.large created
virtualmachineclusterinstancetype.instancetype.kubevirt.io/server.medium created
virtualmachineclusterinstancetype.instancetype.kubevirt.io/server.small created
virtualmachineclusterinstancetype.instancetype.kubevirt.io/server.tiny created
virtualmachineclusterpreference.instancetype.kubevirt.io/alpine created
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.7 created
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.7.desktop created
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.7.i440fx created
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.8 created
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.8.desktop created
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.9 created
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.9.desktop created
virtualmachineclusterpreference.instancetype.kubevirt.io/cirros created
virtualmachineclusterpreference.instancetype.kubevirt.io/fedora.35 created
virtualmachineclusterpreference.instancetype.kubevirt.io/fedora.36 created
virtualmachineclusterpreference.instancetype.kubevirt.io/pc-i440fx created
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.7 created
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.7.desktop created
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.7.i440fx created
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.8 created
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.8.desktop created
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.9 created
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.9.desktop created
virtualmachineclusterpreference.instancetype.kubevirt.io/ubuntu.18.04 created
virtualmachineclusterpreference.instancetype.kubevirt.io/ubuntu.20.04 created
virtualmachineclusterpreference.instancetype.kubevirt.io/ubuntu.22.04 created
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.10 created
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.10.virtio created
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.11 created
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.11.virtio created
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k12 created
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k12.virtio created
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k16 created
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k16.virtio created
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k19 created
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k19.virtio created
virtualmachineinstancetype.instancetype.kubevirt.io/highperformance.large created
virtualmachineinstancetype.instancetype.kubevirt.io/highperformance.medium created
virtualmachineinstancetype.instancetype.kubevirt.io/highperformance.small created
virtualmachineinstancetype.instancetype.kubevirt.io/server.large created
virtualmachineinstancetype.instancetype.kubevirt.io/server.medium created
virtualmachineinstancetype.instancetype.kubevirt.io/server.small created
virtualmachineinstancetype.instancetype.kubevirt.io/server.tiny created
virtualmachinepreference.instancetype.kubevirt.io/alpine created
virtualmachinepreference.instancetype.kubevirt.io/centos.7 created
virtualmachinepreference.instancetype.kubevirt.io/centos.7.desktop created
virtualmachinepreference.instancetype.kubevirt.io/centos.7.i440fx created
virtualmachinepreference.instancetype.kubevirt.io/centos.8 created
virtualmachinepreference.instancetype.kubevirt.io/centos.8.desktop created
virtualmachinepreference.instancetype.kubevirt.io/centos.9 created
virtualmachinepreference.instancetype.kubevirt.io/centos.9.desktop created
virtualmachinepreference.instancetype.kubevirt.io/cirros created
virtualmachinepreference.instancetype.kubevirt.io/fedora.35 created
virtualmachinepreference.instancetype.kubevirt.io/fedora.36 created
virtualmachinepreference.instancetype.kubevirt.io/pc-i440fx created
virtualmachinepreference.instancetype.kubevirt.io/rhel.7 created
virtualmachinepreference.instancetype.kubevirt.io/rhel.7.desktop created
virtualmachinepreference.instancetype.kubevirt.io/rhel.7.i440fx created
virtualmachinepreference.instancetype.kubevirt.io/rhel.8 created
virtualmachinepreference.instancetype.kubevirt.io/rhel.8.desktop created
virtualmachinepreference.instancetype.kubevirt.io/rhel.9 created
virtualmachinepreference.instancetype.kubevirt.io/rhel.9.desktop created
virtualmachinepreference.instancetype.kubevirt.io/ubuntu.18.04 created
virtualmachinepreference.instancetype.kubevirt.io/ubuntu.20.04 created
virtualmachinepreference.instancetype.kubevirt.io/ubuntu.22.04 created
virtualmachinepreference.instancetype.kubevirt.io/windows.10 created
virtualmachinepreference.instancetype.kubevirt.io/windows.10.virtio created
virtualmachinepreference.instancetype.kubevirt.io/windows.11 created
virtualmachinepreference.instancetype.kubevirt.io/windows.11.virtio created
virtualmachinepreference.instancetype.kubevirt.io/windows.2k12 created
virtualmachinepreference.instancetype.kubevirt.io/windows.2k12.virtio created
virtualmachinepreference.instancetype.kubevirt.io/windows.2k16 created
virtualmachinepreference.instancetype.kubevirt.io/windows.2k16.virtio created
virtualmachinepreference.instancetype.kubevirt.io/windows.2k19 created
virtualmachinepreference.instancetype.kubevirt.io/windows.2k19.virtio created
Users can also deploy specific resource kinds, such as the VirtualMachineClusterPreferences below:
$ ./cluster-up/kubectl.sh kustomize https://github.com/lyarwood/common-instancetypes.git/VirtualMachineClusterPreferences | ./cluster-up/kubectl.sh apply -f -
[..]
virtualmachineclusterpreference.instancetype.kubevirt.io/alpine unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.7 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.7.desktop unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.7.i440fx unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.8 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.8.desktop unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.9 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.9.desktop unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/cirros unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/fedora.35 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/fedora.36 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/pc-i440fx unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.7 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.7.desktop unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.7.i440fx unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.8 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.8.desktop unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.9 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.9.desktop unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/ubuntu.18.04 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/ubuntu.20.04 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/ubuntu.22.04 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.10 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.10.virtio unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.11 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.11.virtio unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k12 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k12.virtio unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k16 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k16.virtio unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k19 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k19.virtio unchanged
Finally, users can also deploy using one of the generated bundles kept in the repo:
# ./cluster-up/kubectl.sh apply -f https://raw.githubusercontent.com/lyarwood/common-instancetypes/main/common-clusterpreferences-bundle.yaml
selecting docker as container runtime
virtualmachineclusterpreference.instancetype.kubevirt.io/alpine unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.7 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.7.desktop unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.7.i440fx unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.8 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.8.desktop unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.9 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/centos.9.desktop unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/cirros unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/fedora.35 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/fedora.36 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/pc-i440fx unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.7 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.7.desktop unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.7.i440fx unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.8 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.8.desktop unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.9 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/rhel.9.desktop unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/ubuntu.18.04 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/ubuntu.20.04 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/ubuntu.22.04 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.10 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.10.virtio unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.11 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.11.virtio unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k12 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k12.virtio unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k16 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k16.virtio unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k19 unchanged
virtualmachineclusterpreference.instancetype.kubevirt.io/windows.2k19.virtio unchanged
The next step for the repo is obviously to move it under the main KubeVirt namespace, which will require lots of housekeeping to get CI set up, generate releases and so on. In the meantime, issues and pull requests against the original repo are still welcome and encouraged!
We also plan to have the cluster-wide instancetypes and preferences deployed by default by KubeVirt. This has yet to be raised formally with the community, so a deployment mechanism hasn't been agreed upon yet. Hopefully more on this in the near future.
VirtualMachineInstancetypes can be deleted before a VirtualMachine starts
https://github.com/kubevirt/kubevirt/issues/8142
https://github.com/kubevirt/kubevirt/pull/8346
As described in the issue, the VirtualMachine controller previously waited until the first start of a VirtualMachine to create any ControllerRevisions for a referenced instancetype or preference. As such, users could easily modify or even remove the referenced resources ahead of this first start, causing failures when the start request was eventually made:
# ./cluster-up/kubectl.sh apply -f examples/csmall.yaml -f examples/vm-cirros-csmall.yaml
[..]
virtualmachineinstancetype.instancetype.kubevirt.io/csmall created
virtualmachine.kubevirt.io/vm-cirros-csmall created
# ./cluster-up/kubectl.sh delete virtualmachineinstancetype/csmall
virtualmachineinstancetype.instancetype.kubevirt.io "csmall" deleted
# ./cluster-up/virtctl.sh start vm-cirros-csmall
Error starting VirtualMachine Internal error occurred: admission webhook "virtualmachine-validator.kubevirt.io" denied the request: Failure to find instancetype: virtualmachineinstancetypes.instancetype.kubevirt.io "csmall" not found
The fix here was to move the creation of these ControllerRevisions earlier in the VirtualMachine controller's reconcile loop, ensuring they are created as soon as the VirtualMachine is seen for the first time.
$ ./cluster-up/kubectl.sh apply -f examples/csmall.yaml -f examples/vm-cirros-csmall.yaml
[..]
virtualmachineinstancetype.instancetype.kubevirt.io/csmall created
virtualmachine.kubevirt.io/vm-cirros-csmall created
$ ./cluster-up/kubectl.sh get vms/vm-cirros-csmall -o json | jq .spec.instancetype
selecting docker as container runtime
{
"kind": "VirtualMachineInstancetype",
"name": "csmall",
"revisionName": "vm-cirros-csmall-csmall-6486bc40-955a-480f-b38a-19372812e388-1"
}
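From the examples in this post the revisionName appears to follow a `<vm-name>-<resource-name>-<resource-uid>-<generation>` pattern. This naming is a KubeVirt implementation detail and shouldn't be relied upon, but an illustrative parser under that assumption makes the structure easy to see:

```python
import re

# Hedged sketch: the revisionName format observed in the examples looks like
# <vm-name>-<resource-name>-<resource-uid>-<generation>. The prefix itself can
# contain hyphens, so we anchor the parse on the UUID and trailing generation.
REVISION_RE = re.compile(
    r"^(?P<prefix>.+)-"
    r"(?P<uid>[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})-"
    r"(?P<generation>\d+)$"
)

def parse_revision_name(name):
    """Split a ControllerRevision name into (prefix, resource uid, generation)."""
    m = REVISION_RE.match(name)
    if not m:
        raise ValueError(f"unrecognised revisionName: {name}")
    return m.group("prefix"), m.group("uid"), int(m.group("generation"))

prefix, uid, generation = parse_revision_name(
    "vm-cirros-csmall-csmall-6486bc40-955a-480f-b38a-19372812e388-1"
)
# prefix == "vm-cirros-csmall-csmall" (VirtualMachine plus instancetype name)
# uid == "6486bc40-955a-480f-b38a-19372812e388", generation == 1
```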
PreferredMachineType never applied to VirtualMachineInstance
https://github.com/kubevirt/kubevirt/issues/8338
https://github.com/kubevirt/kubevirt/pull/8352
Preferences are only applied to a VirtualMachineInstance when a user has not already provided a corresponding value within their VirtualMachine [1]. This is an issue for PreferredMachineType, however, as the VirtualMachine mutation webhook always provides some kind of default [2], resulting in the PreferredMachineType never being applied to the VirtualMachineInstance.
[1] https://github.com/kubevirt/kubevirt/blob/bcfbd78d803e9868e0665b51878a2a093e9b74c2/pkg/instancetype/instancetype.go#L950-L952
[2] https://github.com/kubevirt/kubevirt/blob/bcfbd78d803e9868e0665b51878a2a093e9b74c2/pkg/virt-api/webhooks/mutating-webhook/mutators/vm-mutator.go#L98-L112
The fix here was to look up and apply preferences during the VirtualMachine mutation webhook, ensuring PreferredMachineType is applied when the user hasn't already provided their own value.
$ ./cluster-up/kubectl.sh apply -f examples/csmall.yaml
[..]
$ ./cluster-up/kubectl.sh apply -f - << EOF
---
apiVersion: instancetype.kubevirt.io/v1alpha2
kind: VirtualMachinePreference
metadata:
name: preferredmachinetype
spec:
machine:
preferredMachineType: "pc-q35-6.2"
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: demo-preferredmachinetype
spec:
instancetype:
name: csmall
kind: virtualmachineinstancetype
preference:
name: preferredmachinetype
kind: virtualmachinepreference
running: true
template:
spec:
domain:
devices: {}
volumes:
- containerDisk:
image: registry:5000/kubevirt/cirros-container-disk-demo:devel
name: containerdisk
EOF
$ ./cluster-up/kubectl.sh get vms/demo-preferredmachinetype -o json | jq .spec.template.spec.domain.machine
selecting docker as container runtime
{
"type": "pc-q35-6.2"
}
VirtualMachineSnapshot of a VirtualMachine referencing a VirtualMachineInstancetype becomes stuck InProgress
https://github.com/kubevirt/kubevirt/issues/8435
https://github.com/kubevirt/kubevirt/pull/8448
As set out in the issue, attempting to create a VirtualMachineSnapshot of a VirtualMachine referencing a VirtualMachineInstancetype would previously leave the VirtualMachineSnapshot InProgress, as the VirtualMachine controller was unable to add the required snapshot finalizer.
The fix here was to ensure that any VirtualMachineInstancetype referenced was applied to a copy of the VirtualMachine when checking for conflicts in the VirtualMachine admission webhook, allowing the snapshot finalizer to later be added.
https://kubevirt.io/2022/KubeVirt-Introduction-of-instancetypes.html
After finally fixing a build issue with the blog, we now have a post introducing the basic instancetype concepts with examples.
https://kubevirt.io/user-guide/virtual_machines/instancetypes/
We now have basic user-guide documentation introducing instancetypes. Feedback welcome, please do /cc lyarwood on any issues or PRs related to this doc!
VirtualMachineInstancePreset deprecation in favor of VirtualMachineInstancetype
https://github.com/kubevirt/kubevirt/pull/8069
This deprecation has now landed. VirtualMachineInstancePresets are based on the PodPresets k8s resource and API that injected data into pods at creation time. However, this API never graduated from alpha and was removed in Kubernetes 1.20 [1]. While useful, there are some issues with the implementation that have resulted in alternative approaches, such as VirtualMachineInstancetypes and VirtualMachinePreferences, being made available within KubeVirt.
As per the CRD versioning docs [2], this change updated the generated CRD definition of VirtualMachineInstancePreset, marking the currently available v1 and v1alpha3 versions as deprecated.
More context and discussion is also available on the mailing-list [3].
[1] https://github.com/kubernetes/kubernetes/pull/94090
[2] https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#version-deprecation
[3] https://groups.google.com/g/kubevirt-dev/c/eM7JaDV_EU8
area/instancetype label on kubevirt/kubevirt
https://github.com/kubevirt/kubevirt/labels/area%2Finstancetype
We now have an area label for instancetypes on the kubevirt/kubevirt repo that I've been manually applying to PRs and issues. Please feel free to use this by commenting /area instancetype on anything you think is related to instancetypes! I do hope to automate this for specific files in the future.
v1beta1
https://github.com/kubevirt/kubevirt/issues/8235
With the new v1alpha2 version landing, I also wanted to draw attention again to the above issue tracking our progress towards v1beta1. There's obviously a long way to go, but if you have any suggested changes ahead of v1beta1 please feel free to comment there.
https://github.com/kubevirt/community/pull/190
https://github.com/kubevirt/kubevirt/pull/8480
This topic deserves its own blog post, but for now I'd just like to highlight the design doc and WIP code series above looking at introducing support for default instancetype and preference annotations into KubeVirt. The following example demonstrates the current PVC support in the series, but I'd also like to expand this to other volume types where possible. Again, feedback is welcome on the design doc or the code series itself!
$ wget https://github.com/cirros-dev/cirros/releases/download/0.5.2/cirros-0.5.2-x86_64-disk.img
[..]
$ ./cluster-up/virtctl.sh image-upload pvc cirros --size=1Gi --image-path=./cirros-0.5.2-x86_64-disk.img
[..]
$ ./cluster-up/kubectl.sh kustomize https://github.com/lyarwood/common-instancetypes.git | ./cluster-up/kubectl.sh apply -f -
[..]
$ ./cluster-up/kubectl.sh annotate pvc/cirros instancetype.kubevirt.io/defaultInstancetype=server.medium instancetype.kubevirt.io/defaultPreference=rhel.7
[..]
$ cat <<EOF | ./cluster-up/kubectl.sh apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: cirros
spec:
running: true
template:
spec:
domain:
devices: {}
volumes:
- persistentVolumeClaim:
claimName: cirros
name: disk
inferInstancetype: true
inferPreference: true
EOF
[..]
$ ./cluster-up/kubectl.sh get vms/cirros -o json | jq '.spec.instancetype, .spec.preference'
selecting docker as container runtime
{
"kind": "virtualmachineclusterinstancetype",
"name": "server.medium",
"revisionName": "cirros-server.medium-85cd3327-1825-45b1-9c8f-7ca18b2e4124-1"
}
{
"kind": "virtualmachineclusterpreference",
"name": "rhel.7",
"revisionName": "cirros-rhel.7-0b081d8a-f216-44e9-8248-9f0c373b2fdc-1"
}
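The inference rule being demonstrated is simple: when inference is requested, the default instancetype and preference annotations on the referenced PVC are copied into the VirtualMachine's matchers. The toy sketch below illustrates that rule only; it is not KubeVirt's implementation or API shape, and the annotation keys are taken from the `kubectl annotate` command above:

```python
# Toy illustration only (NOT KubeVirt's code): copy the default
# instancetype/preference annotations from a PVC into VM matchers.
DEFAULT_INSTANCETYPE_ANNOTATION = "instancetype.kubevirt.io/defaultInstancetype"
DEFAULT_PREFERENCE_ANNOTATION = "instancetype.kubevirt.io/defaultPreference"

def infer_matchers(pvc_annotations, infer_instancetype=True, infer_preference=True):
    """Return the matchers inferred from a PVC's annotations."""
    matchers = {}
    if infer_instancetype and DEFAULT_INSTANCETYPE_ANNOTATION in pvc_annotations:
        matchers["instancetype"] = {"name": pvc_annotations[DEFAULT_INSTANCETYPE_ANNOTATION]}
    if infer_preference and DEFAULT_PREFERENCE_ANNOTATION in pvc_annotations:
        matchers["preference"] = {"name": pvc_annotations[DEFAULT_PREFERENCE_ANNOTATION]}
    return matchers

# Mirrors the annotated cirros PVC from the example above
matchers = infer_matchers({
    DEFAULT_INSTANCETYPE_ANNOTATION: "server.medium",
    DEFAULT_PREFERENCE_ANNOTATION: "rhel.7",
})
print(matchers)
```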
https://github.com/kubevirt/kubevirt/pull/8537
The use of these informers was previously removed by ee4e266. After further discussions on the mailing-list, however, it became clear that removing these informers from the virt-controller was not required, and they can now be reintroduced.
https://github.com/kubevirt/kubevirt/issues/7760
I'd like to make a start on this in the coming weeks. The basic idea is that the new command would generate a VirtualMachine definition that could then be piped to kubectl apply, something like the following:
$ virtctl create vm --instancetype csmall --preference cirros --pvc cirros-disk | kubectl apply -f -
In my initial VirtualMachineFlavor Update #1 post I included an asciinema-recorded demo towards the end. I've re-recorded the demo given the recent rename, and I've also created a personal repo of demos to store the original script and recordings outside of asciinema.
#
# [..]
#
# Agenda
# 1. The Basics
# 2. VirtualMachineClusterInstancetype and VirtualMachineClusterPreference
# 3. VirtualMachineInstancetype vs VirtualMachinePreference vs VirtualMachine
# 4. Versioning
# 5. What's next...
# - https://github.com/kubevirt/kubevirt/issues/7897
# - https://blog.yarwood.me.uk/2022/07/21/kubevirt_instancetype_update_2/
#
#
# Demo #1 The Basics
#
# Lets start by creating a simple namespaced VirtualMachineInstancetype, VirtualMachinePreference and VirtualMachine
cat <<EOF | ./cluster-up/kubectl.sh apply -f -
---
apiVersion: instancetype.kubevirt.io/v1alpha1
kind: VirtualMachineInstancetype
metadata:
name: small
spec:
cpu:
guest: 2
memory:
guest: 128Mi
---
apiVersion: instancetype.kubevirt.io/v1alpha1
kind: VirtualMachinePreference
metadata:
name: cirros
spec:
devices:
preferredDiskBus: virtio
preferredInterfaceModel: virtio
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: demo
spec:
instancetype:
kind: VirtualMachineInstancetype
name: small
preference:
kind: VirtualMachinePreference
name: cirros
running: false
template:
spec:
domain:
devices: {}
volumes:
- containerDisk:
image: registry:5000/kubevirt/cirros-container-disk-demo:devel
name: containerdisk
- cloudInitNoCloud:
userData: |
#!/bin/sh
echo 'printed from cloud-init userdata'
name: cloudinitdisk
EOF
selecting docker as container runtime
virtualmachineinstancetype.instancetype.kubevirt.io/small created
virtualmachinepreference.instancetype.kubevirt.io/cirros created
virtualmachine.kubevirt.io/demo created
# # Starting the VirtualMachine applies the VirtualMachineInstancetype and VirtualMachinePreference to the VirtualMachineInstance
./cluster-up/virtctl.sh start demo && ./cluster-up/kubectl.sh wait vms/demo --for=condition=Ready
selecting docker as container runtime
VM demo was scheduled to start
selecting docker as container runtime
virtualmachine.kubevirt.io/demo condition met
# #
# We can check this by comparing the two VirtualMachineInstanceSpec fields from the VirtualMachine and VirtualMachineInstance
diff --color -u <( ./cluster-up/kubectl.sh get vms/demo -o json | jq .spec.template.spec) <( ./cluster-up/kubectl.sh get vmis/demo -o json | jq .spec)
selecting docker as container runtime
selecting docker as container runtime
--- /dev/fd/63 2022-08-03 13:36:29.588992874 +0100
+++ /dev/fd/62 2022-08-03 13:36:29.588992874 +0100
@@ -1,15 +1,65 @@
{
"domain": {
- "devices": {},
+ "cpu": {
+ "cores": 1,
+ "model": "host-model",
+ "sockets": 2,
+ "threads": 1
+ },
+ "devices": {
+ "disks": [
+ {
+ "disk": {
+ "bus": "virtio"
+ },
+ "name": "containerdisk"
+ },
+ {
+ "disk": {
+ "bus": "virtio"
+ },
+ "name": "cloudinitdisk"
+ }
+ ],
+ "interfaces": [
+ {
+ "bridge": {},
+ "model": "virtio",
+ "name": "default"
+ }
+ ]
+ },
+ "features": {
+ "acpi": {
+ "enabled": true
+ }
+ },
+ "firmware": {
+ "uuid": "c89d1344-ee03-5c55-99bd-5df16b72bea0"
+ },
"machine": {
"type": "q35"
},
- "resources": {}
+ "memory": {
+ "guest": "128Mi"
+ },
+ "resources": {
+ "requests": {
+ "memory": "128Mi"
+ }
+ }
},
+ "networks": [
+ {
+ "name": "default",
+ "pod": {}
+ }
+ ],
"volumes": [
{
"containerDisk": {
- "image": "registry:5000/kubevirt/cirros-container-disk-demo:devel"
+ "image": "registry:5000/kubevirt/cirros-container-disk-demo:devel",
+ "imagePullPolicy": "IfNotPresent"
},
"name": "containerdisk"
},
# #
# Demo #2 Cluster wide CRDs
# #
# We also have cluster wide instancetypes and preferences we can use, note these are the default if no kind is provided within the VirtualMachine.
cat <<EOF | ./cluster-up/kubectl.sh apply -f -
---
apiVersion: instancetype.kubevirt.io/v1alpha1
kind: VirtualMachineClusterInstancetype
metadata:
name: small-cluster
spec:
cpu:
guest: 2
memory:
guest: 128Mi
---
apiVersion: instancetype.kubevirt.io/v1alpha1
kind: VirtualMachineClusterPreference
metadata:
name: cirros-cluster
spec:
devices:
preferredDiskBus: virtio
preferredInterfaceModel: virtio
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: demo-cluster
spec:
instancetype:
name: small-cluster
preference:
name: cirros-cluster
running: false
template:
spec:
domain:
devices: {}
volumes:
- containerDisk:
image: registry:5000/kubevirt/cirros-container-disk-demo:devel
name: containerdisk
- cloudInitNoCloud:
userData: |
#!/bin/sh
echo 'printed from cloud-init userdata'
name: cloudinitdisk
EOF
selecting docker as container runtime
virtualmachineclusterinstancetype.instancetype.kubevirt.io/small-cluster created
virtualmachineclusterpreference.instancetype.kubevirt.io/cirros-cluster created
virtualmachine.kubevirt.io/demo-cluster created
# #
# InstancetypeMatcher and PreferenceMatcher default to the Cluster CRD Kinds
./cluster-up/kubectl.sh get vms/demo-cluster -o json | jq '.spec.instancetype, .spec.preference'
selecting docker as container runtime
{
"kind": "virtualmachineclusterinstancetype",
"name": "small-cluster"
}
{
"kind": "virtualmachineclusterpreference",
"name": "cirros-cluster"
}
# ./cluster-up/virtctl.sh start demo-cluster && ./cluster-up/kubectl.sh wait vms/demo-cluster --for=condition=Ready
diff --color -u <( ./cluster-up/kubectl.sh get vms/demo-cluster -o json | jq .spec.template.spec) <( ./cluster-up/kubectl.sh get vmis/demo-cluster -o json | jq .spec)
selecting docker as container runtime
VM demo-cluster was scheduled to start
selecting docker as container runtime
virtualmachine.kubevirt.io/demo-cluster condition met
selecting docker as container runtime
selecting docker as container runtime
--- /dev/fd/63 2022-08-03 13:37:04.897273573 +0100
+++ /dev/fd/62 2022-08-03 13:37:04.897273573 +0100
@@ -1,15 +1,65 @@
{
"domain": {
- "devices": {},
+ "cpu": {
+ "cores": 1,
+ "model": "host-model",
+ "sockets": 2,
+ "threads": 1
+ },
+ "devices": {
+ "disks": [
+ {
+ "disk": {
+ "bus": "virtio"
+ },
+ "name": "containerdisk"
+ },
+ {
+ "disk": {
+ "bus": "virtio"
+ },
+ "name": "cloudinitdisk"
+ }
+ ],
+ "interfaces": [
+ {
+ "bridge": {},
+ "model": "virtio",
+ "name": "default"
+ }
+ ]
+ },
+ "features": {
+ "acpi": {
+ "enabled": true
+ }
+ },
+ "firmware": {
+ "uuid": "05fa1ec0-3e45-581d-84e2-36ddc6b50633"
+ },
"machine": {
"type": "q35"
},
- "resources": {}
+ "memory": {
+ "guest": "128Mi"
+ },
+ "resources": {
+ "requests": {
+ "memory": "128Mi"
+ }
+ }
},
+ "networks": [
+ {
+ "name": "default",
+ "pod": {}
+ }
+ ],
"volumes": [
{
"containerDisk": {
- "image": "registry:5000/kubevirt/cirros-container-disk-demo:devel"
+ "image": "registry:5000/kubevirt/cirros-container-disk-demo:devel",
+ "imagePullPolicy": "IfNotPresent"
},
"name": "containerdisk"
},
# #
# Demo #3 Instancetypes vs Preferences vs VirtualMachine
# #
# Users cannot overwrite anything set by an instancetype in their VirtualMachine, for example CPU topologies
cat <<EOF | ./cluster-up/kubectl.sh apply -f -
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: demo-instancetype-conflict
spec:
instancetype:
kind: VirtualMachineInstancetype
name: small
preference:
kind: VirtualMachinePreference
name: cirros
running: false
template:
spec:
domain:
cpu:
threads: 1
cores: 3
sockets: 1
devices: {}
volumes:
- containerDisk:
image: registry:5000/kubevirt/cirros-container-disk-demo:devel
name: containerdisk
- cloudInitNoCloud:
userData: |
#!/bin/sh
echo 'printed from cloud-init userdata'
name: cloudinitdisk
EOF
selecting docker as container runtime
The request is invalid: spec.template.spec.domain.cpu: VM field conflicts with selected Instancetype
# #
# Users can however overwrite anything set by a preference in their VirtualMachine, for example disk buses etc.
cat <<EOF | ./cluster-up/kubectl.sh apply -f -
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: demo-instancetype-user-preference
spec:
instancetype:
kind: VirtualMachineInstancetype
name: small
preference:
kind: VirtualMachinePreference
name: cirros
running: false
template:
spec:
domain:
devices:
disks:
- disk:
bus: sata
name: containerdisk
volumes:
- containerDisk:
image: registry:5000/kubevirt/cirros-container-disk-demo:devel
name: containerdisk
- cloudInitNoCloud:
userData: |
#!/bin/sh
echo 'printed from cloud-init userdata'
name: cloudinitdisk
EOF
selecting docker as container runtime
virtualmachine.kubevirt.io/demo-instancetype-user-preference created
# ./cluster-up/virtctl.sh start demo-instancetype-user-preference && ./cluster-up/kubectl.sh wait vms/demo-instancetype-user-preference --for=condition=Ready
diff --color -u <( ./cluster-up/kubectl.sh get vms/demo-instancetype-user-preference -o json | jq .spec.template.spec) <( ./cluster-up/kubectl.sh get vmis/demo-instancetype-user-preference -o json | jq .spec)
selecting docker as container runtime
VM demo-instancetype-user-preference was scheduled to start
selecting docker as container runtime
virtualmachine.kubevirt.io/demo-instancetype-user-preference condition met
selecting docker as container runtime
selecting docker as container runtime
--- /dev/fd/63 2022-08-03 13:37:38.099537528 +0100
+++ /dev/fd/62 2022-08-03 13:37:38.099537528 +0100
@@ -1,5 +1,11 @@
{
"domain": {
+ "cpu": {
+ "cores": 1,
+ "model": "host-model",
+ "sockets": 2,
+ "threads": 1
+ },
"devices": {
"disks": [
{
@@ -7,18 +13,53 @@
"bus": "sata"
},
"name": "containerdisk"
+ },
+ {
+ "disk": {
+ "bus": "virtio"
+ },
+ "name": "cloudinitdisk"
+ }
+ ],
+ "interfaces": [
+ {
+ "bridge": {},
+ "model": "virtio",
+ "name": "default"
}
]
},
+ "features": {
+ "acpi": {
+ "enabled": true
+ }
+ },
+ "firmware": {
+ "uuid": "195ea4a3-8505-5368-b068-9536257886ea"
+ },
"machine": {
"type": "q35"
},
- "resources": {}
+ "memory": {
+ "guest": "128Mi"
+ },
+ "resources": {
+ "requests": {
+ "memory": "128Mi"
+ }
+ }
},
+ "networks": [
+ {
+ "name": "default",
+ "pod": {}
+ }
+ ],
"volumes": [
{
"containerDisk": {
- "image": "registry:5000/kubevirt/cirros-container-disk-demo:devel"
+ "image": "registry:5000/kubevirt/cirros-container-disk-demo:devel",
+ "imagePullPolicy": "IfNotPresent"
},
"name": "containerdisk"
},
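The two rules Demo #3 exercises can be reduced to a toy illustration (this is not KubeVirt's code, just the observable behaviour): a user-supplied value that an instancetype also defines is a conflict and the request is rejected, while a preference value is only applied when the user left the field unset.

```python
# Toy illustration only (NOT KubeVirt's implementation) of the rules above.
def apply_instancetype_value(user_value, instancetype_value):
    """Instancetype values are authoritative; a user-set value conflicts."""
    if user_value is not None and instancetype_value is not None:
        raise ValueError("VM field conflicts with selected Instancetype")
    return instancetype_value if instancetype_value is not None else user_value

def apply_preference_value(user_value, preference_value):
    """Preference values only fill gaps the user left unset."""
    return user_value if user_value is not None else preference_value

# The user-set sata disk bus wins over the preference's virtio...
print(apply_preference_value("sata", "virtio"))  # sata
# ...while a user-set CPU topology alongside an instancetype is rejected,
# matching the webhook error seen earlier in this demo.
```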
# #
# Demo #4 Versioning
# #
# We have versioning of instancetypes and preferences, note that the InstancetypeMatcher and PreferenceMatcher now have a populated revisionName field
./cluster-up/kubectl.sh get vms/demo -o json | jq '.spec.instancetype, .spec.preference'
selecting docker as container runtime
{
"kind": "VirtualMachineInstancetype",
"name": "small",
"revisionName": "demo-small-4a28a2f3-fd34-421a-98d8-a2659f9a8eb7-1"
}
{
"kind": "VirtualMachinePreference",
"name": "cirros",
"revisionName": "demo-cirros-d08c3914-7d2b-43b4-a295-9cd3687bf151-1"
}
# #
# These are the names of ControllerRevisions containing a copy of the VirtualMachine{Instancetype,Preference}Spec at the time of application
./cluster-up/kubectl.sh get controllerrevisions/$( ./cluster-up/kubectl.sh get vms/demo -o json | jq .spec.instancetype.revisionName | tr -d '"') -o json | jq .
./cluster-up/kubectl.sh get controllerrevisions/$( ./cluster-up/kubectl.sh get vms/demo -o json | jq .spec.preference.revisionName | tr -d '"') -o json | jq .
selecting docker as container runtime
selecting docker as container runtime
{
"apiVersion": "apps/v1",
"data": {
"apiVersion": "",
"spec": "eyJjcHUiOnsiZ3Vlc3QiOjJ9LCJtZW1vcnkiOnsiZ3Vlc3QiOiIxMjhNaSJ9fQ=="
},
"kind": "ControllerRevision",
"metadata": {
"creationTimestamp": "2022-08-03T12:36:20Z",
"name": "demo-small-4a28a2f3-fd34-421a-98d8-a2659f9a8eb7-1",
"namespace": "default",
"ownerReferences": [
{
"apiVersion": "kubevirt.io/v1",
"blockOwnerDeletion": true,
"controller": true,
"kind": "VirtualMachine",
"name": "demo",
"uid": "e67ad6ba-7792-40ab-9cd2-a411b6161971"
}
],
"resourceVersion": "53965",
"uid": "3f20c656-ea33-45d1-9195-fb3c4f7085b9"
},
"revision": 0
}
selecting docker as container runtime
selecting docker as container runtime
{
"apiVersion": "apps/v1",
"data": {
"apiVersion": "",
"spec": "eyJkZXZpY2VzIjp7InByZWZlcnJlZERpc2tCdXMiOiJ2aXJ0aW8iLCJwcmVmZXJyZWRJbnRlcmZhY2VNb2RlbCI6InZpcnRpbyJ9fQ=="
},
"kind": "ControllerRevision",
"metadata": {
"creationTimestamp": "2022-08-03T12:36:20Z",
"name": "demo-cirros-d08c3914-7d2b-43b4-a295-9cd3687bf151-1",
"namespace": "default",
"ownerReferences": [
{
"apiVersion": "kubevirt.io/v1",
"blockOwnerDeletion": true,
"controller": true,
"kind": "VirtualMachine",
"name": "demo",
"uid": "e67ad6ba-7792-40ab-9cd2-a411b6161971"
}
],
"resourceVersion": "53966",
"uid": "dc47f75f-b548-41fd-b0db-8af4b458994b"
},
"revision": 0
}
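The `data.spec` fields in those ControllerRevisions are base64-encoded JSON copies of the VirtualMachineInstancetypeSpec and VirtualMachinePreferenceSpec captured at the time of application. Decoding them shows exactly what was stored:

```python
import base64
import json

# Decode the two data.spec payloads from the ControllerRevisions above
instancetype_spec = json.loads(base64.b64decode(
    "eyJjcHUiOnsiZ3Vlc3QiOjJ9LCJtZW1vcnkiOnsiZ3Vlc3QiOiIxMjhNaSJ9fQ=="
))
preference_spec = json.loads(base64.b64decode(
    "eyJkZXZpY2VzIjp7InByZWZlcnJlZERpc2tCdXMiOiJ2aXJ0aW8iLCJwcmVmZXJyZWRJbnRlcmZhY2VNb2RlbCI6InZpcnRpbyJ9fQ=="
))
print(instancetype_spec)  # {'cpu': {'guest': 2}, 'memory': {'guest': '128Mi'}}
print(preference_spec)
```

As expected these match the `small` instancetype and `cirros` preference as originally created at the start of the demo.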
# # With versioning we can update the VirtualMachineInstancetype, create a new VirtualMachine to assert the changes and then check that our original VirtualMachine hasn't changed
cat <<EOF | ./cluster-up/kubectl.sh apply -f -
---
apiVersion: instancetype.kubevirt.io/v1alpha1
kind: VirtualMachineInstancetype
metadata:
name: small
spec:
cpu:
guest: 3
memory:
guest: 256Mi
---
apiVersion: instancetype.kubevirt.io/v1alpha1
kind: VirtualMachinePreference
metadata:
name: cirros
spec:
cpu:
preferredCPUTopology: preferCores
devices:
preferredDiskBus: virtio
preferredInterfaceModel: virtio
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: demo-updated
spec:
instancetype:
kind: VirtualMachineInstancetype
name: small
preference:
kind: VirtualMachinePreference
name: cirros
running: false
template:
spec:
domain:
devices: {}
volumes:
- containerDisk:
image: registry:5000/kubevirt/cirros-container-disk-demo:devel
name: containerdisk
- cloudInitNoCloud:
userData: |
#!/bin/sh
echo 'printed from cloud-init userdata'
name: cloudinitdisk
EOF
selecting docker as container runtime
virtualmachineinstancetype.instancetype.kubevirt.io/small configured
virtualmachinepreference.instancetype.kubevirt.io/cirros configured
virtualmachine.kubevirt.io/demo-updated created
# #
# Now start the updated VirtualMachine
./cluster-up/virtctl.sh start demo-updated && ./cluster-up/kubectl.sh wait vms/demo-updated --for=condition=Ready
selecting docker as container runtime
VM demo-updated was scheduled to start
selecting docker as container runtime
virtualmachine.kubevirt.io/demo-updated condition met
# #
# We now see the updated instancetype used by the new VirtualMachine and applied to the VirtualMachineInstance
diff --color -u <( ./cluster-up/kubectl.sh get vms/demo-updated -o json | jq .spec.template.spec) <( ./cluster-up/kubectl.sh get vmis/demo-updated -o json | jq .spec)
selecting docker as container runtime
selecting docker as container runtime
--- /dev/fd/63 2022-08-03 13:38:37.203007409 +0100
+++ /dev/fd/62 2022-08-03 13:38:37.204007417 +0100
@@ -1,15 +1,65 @@
{
"domain": {
- "devices": {},
+ "cpu": {
+ "cores": 3,
+ "model": "host-model",
+ "sockets": 1,
+ "threads": 1
+ },
+ "devices": {
+ "disks": [
+ {
+ "disk": {
+ "bus": "virtio"
+ },
+ "name": "containerdisk"
+ },
+ {
+ "disk": {
+ "bus": "virtio"
+ },
+ "name": "cloudinitdisk"
+ }
+ ],
+ "interfaces": [
+ {
+ "bridge": {},
+ "model": "virtio",
+ "name": "default"
+ }
+ ]
+ },
+ "features": {
+ "acpi": {
+ "enabled": true
+ }
+ },
+ "firmware": {
+ "uuid": "937dc645-17f0-599b-be81-c1e9dbde8075"
+ },
"machine": {
"type": "q35"
},
- "resources": {}
+ "memory": {
+ "guest": "256Mi"
+ },
+ "resources": {
+ "requests": {
+ "memory": "256Mi"
+ }
+ }
},
+ "networks": [
+ {
+ "name": "default",
+ "pod": {}
+ }
+ ],
"volumes": [
{
"containerDisk": {
- "image": "registry:5000/kubevirt/cirros-container-disk-demo:devel"
+ "image": "registry:5000/kubevirt/cirros-container-disk-demo:devel",
+ "imagePullPolicy": "IfNotPresent"
},
"name": "containerdisk"
},
# #
# With new ControllerRevisions referenced from the underlying VirtualMachine
./cluster-up/kubectl.sh get vms/demo-updated -o json | jq '.spec.instancetype, .spec.preference'
selecting docker as container runtime
{
"kind": "VirtualMachineInstancetype",
"name": "small",
"revisionName": "demo-updated-small-4a28a2f3-fd34-421a-98d8-a2659f9a8eb7-2"
}
{
"kind": "VirtualMachinePreference",
"name": "cirros",
"revisionName": "demo-updated-cirros-d08c3914-7d2b-43b4-a295-9cd3687bf151-2"
}
# #
# We can also stop and start the original VirtualMachine without changing the VirtualMachineInstance it spawns
./cluster-up/virtctl.sh stop demo && ./cluster-up/kubectl.sh wait vms/demo --for=condition=Ready=false
./cluster-up/virtctl.sh start demo && ./cluster-up/kubectl.sh wait vms/demo --for=condition=Ready
diff --color -u <( ./cluster-up/kubectl.sh get vms/demo -o json | jq .spec.template.spec) <( ./cluster-up/kubectl.sh get vmis/demo -o json | jq .spec)
selecting docker as container runtime
VM demo was scheduled to stop
selecting docker as container runtime
virtualmachine.kubevirt.io/demo condition met
selecting docker as container runtime
Error starting VirtualMachine Operation cannot be fulfilled on virtualmachine.kubevirt.io "demo": VM is already running
selecting docker as container runtime
selecting docker as container runtime
--- /dev/fd/63 2022-08-03 13:38:51.291119408 +0100
+++ /dev/fd/62 2022-08-03 13:38:51.291119408 +0100
@@ -1,15 +1,65 @@
{
"domain": {
- "devices": {},
+ "cpu": {
+ "cores": 1,
+ "model": "host-model",
+ "sockets": 2,
+ "threads": 1
+ },
+ "devices": {
+ "disks": [
+ {
+ "disk": {
+ "bus": "virtio"
+ },
+ "name": "containerdisk"
+ },
+ {
+ "disk": {
+ "bus": "virtio"
+ },
+ "name": "cloudinitdisk"
+ }
+ ],
+ "interfaces": [
+ {
+ "bridge": {},
+ "model": "virtio",
+ "name": "default"
+ }
+ ]
+ },
+ "features": {
+ "acpi": {
+ "enabled": true
+ }
+ },
+ "firmware": {
+ "uuid": "c89d1344-ee03-5c55-99bd-5df16b72bea0"
+ },
"machine": {
"type": "q35"
},
- "resources": {}
+ "memory": {
+ "guest": "128Mi"
+ },
+ "resources": {
+ "requests": {
+ "memory": "128Mi"
+ }
+ }
},
+ "networks": [
+ {
+ "name": "default",
+ "pod": {}
+ }
+ ],
"volumes": [
{
"containerDisk": {
- "image": "registry:5000/kubevirt/cirros-container-disk-demo:devel"
+ "image": "registry:5000/kubevirt/cirros-container-disk-demo:devel",
+ "imagePullPolicy": "IfNotPresent"
},
"name": "containerdisk"
},
# #
# The ControllerRevisions are owned by the VirtualMachines, as such removal of the VirtualMachines now removes the ControllerRevisions
./cluster-up/kubectl.sh get controllerrevisions
./cluster-up/kubectl.sh delete vms/demo vms/demo-updated vms/demo-cluster vms/demo-instancetype-user-preference
./cluster-up/kubectl.sh get controllerrevisions
selecting docker as container runtime
NAME CONTROLLER REVISION AGE
demo-cirros-d08c3914-7d2b-43b4-a295-9cd3687bf151-1 virtualmachine.kubevirt.io/demo 0 2m51s
demo-cluster-cirros-cluster-1562ae69-8a4b-4a75-8507-e0da5041c5d2-1 virtualmachine.kubevirt.io/demo-cluster 0 2m10s
demo-cluster-small-cluster-20c0a541-e24f-47c1-a1d7-1151e981a69c-1 virtualmachine.kubevirt.io/demo-cluster 0 2m10s
demo-instancetype-user-preference-cirros-d08c3914-7d2b-43b4-a295-9cd3687bf151-1 virtualmachine.kubevirt.io/demo-instancetype-user-preference 0 98s
demo-instancetype-user-preference-small-4a28a2f3-fd34-421a-98d8-a2659f9a8eb7-1 virtualmachine.kubevirt.io/demo-instancetype-user-preference 0 98s
demo-small-4a28a2f3-fd34-421a-98d8-a2659f9a8eb7-1 virtualmachine.kubevirt.io/demo 0 2m51s
demo-updated-cirros-d08c3914-7d2b-43b4-a295-9cd3687bf151-2 virtualmachine.kubevirt.io/demo-updated 0 41s
demo-updated-small-4a28a2f3-fd34-421a-98d8-a2659f9a8eb7-2 virtualmachine.kubevirt.io/demo-updated 0 41s
local-volume-provisioner-55dcc65dc7 daemonset.apps/local-volume-provisioner 1 3h32m
revision-start-vm-5786044a-c20b-41a4-bba3-c7744c624935-2 virtualmachine.kubevirt.io/demo-instancetype-user-preference 2 98s
revision-start-vm-a334ac37-aed4-4b98-b8b9-af819f54ffda-2 virtualmachine.kubevirt.io/demo-cluster 2 2m10s
revision-start-vm-e67ad6ba-7792-40ab-9cd2-a411b6161971-2 virtualmachine.kubevirt.io/demo 2 2m51s
revision-start-vm-f19cf23d-0ad6-438b-a166-20879a704fa9-2 virtualmachine.kubevirt.io/demo-updated 2 41s
selecting docker as container runtime
virtualmachine.kubevirt.io "demo" deleted
virtualmachine.kubevirt.io "demo-updated" deleted
virtualmachine.kubevirt.io "demo-cluster" deleted
virtualmachine.kubevirt.io "demo-instancetype-user-preference" deleted
selecting docker as container runtime
NAME CONTROLLER REVISION AGE
local-volume-provisioner-55dcc65dc7 daemonset.apps/local-volume-provisioner 1 3h32m
Welcome to part #2 of this series following the development of instancetypes and preferences within KubeVirt! Please note this is just a development journal of sorts; more formal documentation introducing and describing instancetypes will be forthcoming in the near future!
s/VirtualMachineFlavor/VirtualMachineInstancetype/g
https://github.com/kubevirt/kubevirt/pull/8039
If you haven’t already guessed from the title of this post the long awaited rename has landed, goodbye flavors and hello instancetypes!
The PR is unfortunately huge but did surface an interesting upgrade problem, the fix for which will require additional cleanup in a future release.
ControllerRevisions
https://github.com/kubevirt/kubevirt/pull/7875
Versioning through ControllerRevisions has been introduced. As previously discussed the underlying VirtualMachineInstancetypeSpec or VirtualMachinePreferenceSpec are stored by the VirtualMachine controller in a ControllerRevision unique to the VirtualMachine being started. A reference to the ControllerRevision is then added to the VirtualMachine for future look ups with the VirtualMachine also itself referenced as an owner of these ControllerRevisions ensuring their removal when the VirtualMachine is deleted.
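For illustration, after a first start the matchers on the VirtualMachine end up carrying a revisionName pointing at the stashed copy. This sketch is based on the demo output shown elsewhere in this post; the object and revision names are illustrative:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo
spec:
  instancetype:
    kind: VirtualMachineInstancetype
    name: small
    # Populated by the VirtualMachine controller at start time, pointing at
    # the ControllerRevision stashing the VirtualMachineInstancetypeSpec.
    revisionName: demo-small-4a28a2f3-fd34-421a-98d8-a2659f9a8eb7-1
  preference:
    kind: VirtualMachinePreference
    name: cirros
    revisionName: demo-cirros-d08c3914-7d2b-43b4-a295-9cd3687bf151-1
```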
ResourceRequests are present in VirtualMachine
VirtualMachine subresource APIs to expand VirtualMachineInstancetype
https://github.com/kubevirt/kubevirt/pull/7549
This PR will introduce new VirtualMachine subresource APIs to expand a referenced instancetype or set of preferences for an existing VirtualMachine or one provided by the caller.
Hopefully these APIs will be useful to users and fellow KubeVirt/OpenShift devs who want to validate or just present a fully rendered version of their VirtualMachine in some way.
It’s worth noting that during the development of this feature we encountered some interesting OpenAPI behaviour that took a while to debug and fix.
AutoattachInputDevice and PreferredAutoattachInputDevice
https://github.com/kubevirt/kubevirt/pull/8006
While working on a possible future migration of the common-templates project to using VirtualMachineInstancetypes and VirtualMachinePreferences it was noted that we had no way of automatically attaching an input device to a VirtualMachine.
This change introduces both an AutoattachInputDevice attribute to control this on vanilla VirtualMachines and a PreferredAutoattachInputDevice preference to control this behaviour from within a set of preferences.
The PR includes a simple rework of the application of DevicePreferences, applying them before any of the Autoattach logic fires within the VirtualMachine controller. This allows the PreferredAutoattach preferences to control the Autoattach logic, while the original application of preferences after this logic has fired ensures any remaining preferences are also applied to any newly added devices.
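As a sketch, assuming the attribute names land as described above (the API group/version and object names here are assumptions):

```yaml
---
apiVersion: instancetype.kubevirt.io/v1alpha1
kind: VirtualMachinePreference
metadata:
  name: desktop
spec:
  devices:
    # Preference-level knob: attach an input device automatically
    # unless the VirtualMachine explicitly opts out.
    preferredAutoattachInputDevice: true
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example
spec:
  template:
    spec:
      domain:
        devices:
          # VirtualMachine-level knob; when set, it wins over the preference.
          autoattachInputDevice: false
```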
VirtualMachineInstancePreset deprecation in favor of VirtualMachineInstancetype
https://github.com/kubevirt/kubevirt/pull/8069
This proposal still has to be raised formally with the community but as set out in the PR I’d like to start the deprecation cycle of VirtualMachineInstancePreset now as VirtualMachineInstancetype starts to mature as a replacement.
DomainSpec optional
https://github.com/kubevirt/kubevirt/pull/7969
Previous work has gone into removing the need to define Disks for all referenced Volumes within a VirtualMachineInstanceSpec and also ensuring preferences are applied correctly to the automatically added Disks.
The end goal for this work has been to make the entire DomainSpec within VirtualMachineInstanceSpec optional, hopefully simplifying our VirtualMachine definitions further when used in conjunction with instancetypes and preferences.
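If that lands, a VirtualMachine definition could shrink to something like the following sketch, assuming the referenced instancetype and preferences supply everything else (names here are illustrative):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: minimal
spec:
  instancetype:
    name: small
  preference:
    name: cirros
  running: false
  template:
    spec:
      # No domain section at all: CPU, memory, disks and interfaces are
      # filled in from the instancetype, preferences and volumes below.
      volumes:
      - containerDisk:
          image: registry:5000/kubevirt/cirros-container-disk-demo:devel
        name: containerdisk
```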
SharedInformers
https://github.com/kubevirt/kubevirt/pull/7935
The use of SharedInformers within the webhooks and VirtualMachine controller had proven problematic and their use was previously removed.
While no discernible performance impact has been seen thus far, this change will likely be revisited in the near future, as many controllers follow a pattern of retrying failed SharedInformer lookups with vanilla client calls.
VirtualMachineInstancetypes can be deleted before a VirtualMachine starts
https://github.com/kubevirt/kubevirt/issues/8142
The bug above covers a race with the current versioning implementation. This race allows a user to delete a referenced instancetype before the VirtualMachine referencing it has started and stashed a copy of the instancetype in a ControllerRevision. For example:
# ./cluster-up/kubectl.sh apply -f examples/csmall.yaml
virtualmachineinstancetype.instancetype.kubevirt.io/csmall created
# ./cluster-up/kubectl.sh apply -f examples/vm-cirros-csmall.yaml
virtualmachine.kubevirt.io/vm-cirros-csmall created
# ./cluster-up/kubectl.sh delete virtualmachineinstancetype/csmall
virtualmachineinstancetype.instancetype.kubevirt.io "csmall" deleted
# ./cluster-up/virtctl.sh start vm-cirros-csmall
Error starting VirtualMachine Internal error occurred: admission webhook "virtualmachine-validator.kubevirt.io" denied the request: Failure to find instancetype: virtualmachineinstancetypes.instancetype.kubevirt.io "csmall" not found
I believe we need one or more finalizers here ensuring that referenced instancetypes and preferences are not removed before they are stashed in a ControllerRevision.
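Sketching the idea, such a finalizer might look like this on the instancetype; the finalizer name is hypothetical, not an agreed-upon value:

```yaml
apiVersion: instancetype.kubevirt.io/v1alpha1
kind: VirtualMachineInstancetype
metadata:
  name: csmall
  finalizers:
  # Hypothetical finalizer name: deletion would be blocked until every
  # referencing VirtualMachine has stashed a ControllerRevision copy.
  - instancetype.kubevirt.io/vm-reference
spec:
  cpu:
    guest: 1
  memory:
    guest: 128Mi
```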
An alternative to this would be to create ControllerRevisions within the VirtualMachine admission webhooks earlier in the lifecycle of a VirtualMachine. I had tried this originally but failed to successfully Patch the VirtualMachine with a reference back to the ControllerRevision, often seeing failures with the VirtualMachine controller attempting to reconcile the changes.
v1beta1
With the rename now complete and the future direction hopefully set out above, I believe now is a good time to start looking into the graduation of the API itself from the experimental v1alpha1 stage to something more stable.
The Kubernetes API versioning documentation provides the following summary of the beta version:
The software is well tested. Enabling a feature is considered safe. Features are enabled by default.
The support for a feature will not be dropped, though the details may change.
The schema and/or semantics of objects may change in incompatible ways in a subsequent beta or stable release. When this happens, migration instructions are provided. Schema changes may require deleting, editing, and re-creating API objects. The editing process may not be straightforward. The migration may require downtime for applications that rely on the feature.
The software is not recommended for production uses. Subsequent releases may introduce incompatible changes. If you have multiple clusters which can be upgraded independently, you may be able to relax this restriction.
I believe the instancetype API can meet these criteria in the near future if it isn’t already and so I will be looking to start the process soon.
With the rename complete I have finally started drafting some upstream user-guide documentation that I hope to post in a PR soon.
Following on from the user-guide documentation I also plan on writing and publishing some material introducing instancetypes and preferences on the kubevirt.io blog.
Much has changed since my last post introducing the VirtualMachine{Flavor,Preference} KubeVirt CRDs. In this post I’m going to touch on some of this, what’s coming next and provide a quick demo at the end.
VirtualMachine{ClusterPreference,Preference}
https://github.com/kubevirt/kubevirt/pull/7554
https://github.com/kubevirt/kubevirt/pull/7578
The two main PRs referenced by my previous post have landed, refactoring the initial code and introducing the VirtualMachine{ClusterPreference,Preference} CRDs to KubeVirt.
PreferredCPUTopology now defaults to PreferSockets
https://github.com/kubevirt/kubevirt/pull/7812
This was a trivial change as it was something the VirtualMachineInstance mutation webhook already defaulted to when no topology is provided but a number of vCPUs is defined through resource requests.
func (mutator *VMIsMutator) setDefaultGuestCPUTopology(vmi *v1.VirtualMachineInstance) {
	cores := uint32(1)
	threads := uint32(1)
	sockets := uint32(1)
	vmiCPU := vmi.Spec.Domain.CPU
	if vmiCPU == nil || (vmiCPU.Cores == 0 && vmiCPU.Sockets == 0 && vmiCPU.Threads == 0) {
		// create cpu topology struct
		if vmi.Spec.Domain.CPU == nil {
			vmi.Spec.Domain.CPU = &v1.CPU{}
		}
		// if cores, sockets, threads are not set, take value from domain resources request or limits and
		// set value into sockets, which have best performance (https://bugzilla.redhat.com/show_bug.cgi?id=1653453)
		resources := vmi.Spec.Domain.Resources
		if cpuLimit, ok := resources.Limits[k8sv1.ResourceCPU]; ok {
			sockets = uint32(cpuLimit.Value())
		} else if cpuRequests, ok := resources.Requests[k8sv1.ResourceCPU]; ok {
			sockets = uint32(cpuRequests.Value())
		}
		vmi.Spec.Domain.CPU.Sockets = sockets
		vmi.Spec.Domain.CPU.Cores = cores
		vmi.Spec.Domain.CPU.Threads = threads
	}
}
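The same topology can be requested explicitly through a set of preferences. A minimal sketch using the preferSockets value that also appears in the Windows example later in this post (the object name is illustrative):

```yaml
apiVersion: flavor.kubevirt.io/v1alpha1
kind: VirtualMachinePreference
metadata:
  name: sockets
spec:
  cpu:
    # A guest CPU count of 2 then lands as sockets: 2, cores: 1,
    # threads: 1 on the resulting VirtualMachineInstance.
    preferredCPUTopology: preferSockets
```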
VirtualMachineInstance mutation webhook application dropped
https://github.com/kubevirt/kubevirt/pull/7806
Lots of work went into this PR but ultimately the use cases around the direct use of the VirtualMachineInstance CRD by end users isn’t strong enough to justify the extra complexity introduced by it.
https://github.com/kubevirt/kubevirt/pull/7618
https://github.com/kubevirt/kubevirt/pull/7919
With application of a flavor and preference no longer moving to the VirtualMachineInstance mutation webhook we now had to ensure that all devices would be present by the time the existing application happens within the VirtualMachine controller.
The above change moves and shares code from the VirtualMachineInstance mutation webhook that adds any missing Disks for listed Volumes and also adds a default Network and associated Interface if none are provided. This ensures that any preferences applied by the VirtualMachine controller to the VirtualMachineInstance object are also applied to these devices.
For example, given the following VirtualMachinePreference that defines a preferredDiskBus and preferredInterfaceModel of virtio, a VirtualMachine that doesn’t list any Disks or Interfaces will now have these preferences applied to the devices added during the creation of the VirtualMachineInstance, with these devices now being introduced by the VirtualMachine controller itself instead of the VirtualMachineInstance mutation webhook.
cat <<EOF | ./cluster-up/kubectl.sh apply -f -
---
apiVersion: flavor.kubevirt.io/v1alpha1
kind: VirtualMachineFlavor
metadata:
name: small
spec:
cpu:
guest: 2
memory:
guest: 128Mi
---
apiVersion: flavor.kubevirt.io/v1alpha1
kind: VirtualMachinePreference
metadata:
name: virtio
spec:
devices:
preferredDiskBus: virtio
preferredInterfaceModel: virtio
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: example
spec:
flavor:
kind: VirtualMachineFlavor
name: small
preference:
kind: VirtualMachinePreference
name: virtio
running: false
template:
spec:
domain:
devices: {}
volumes:
- containerDisk:
image: registry:5000/kubevirt/cirros-container-disk-demo:devel
name: containerdisk
- cloudInitNoCloud:
userData: |
#!/bin/sh
echo 'printed from cloud-init userdata'
name: cloudinitdisk
EOF
$ ./cluster-up/virtctl.sh start example && ./cluster-up/kubectl.sh wait vms/example --for=condition=Ready
$ ./cluster-up/kubectl.sh get vmis/example -o json | jq .spec.domain.devices.disks
selecting docker as container runtime
[
{
"disk": {
"bus": "virtio"
},
"name": "containerdisk"
},
{
"disk": {
"bus": "virtio"
},
"name": "cloudinitdisk"
}
]
$ ./cluster-up/kubectl.sh get vmis/example -o json | jq .spec.domain.devices.interfaces
selecting docker as container runtime
[
{
"bridge": {},
"model": "virtio",
"name": "default"
}
]
SharedInformers
https://github.com/kubevirt/kubevirt/pull/7935
I was alerted to issues around the use of SharedInformers within our API workers during the PR moving flavor application to the VirtualMachineInstance mutation webhook.
After this I started to notice the occasional CI failure both up and downstream that appeared to marry with the suggested symptoms. Either a recently created VirtualMachine{Flavor,Preference} object would not be seen by another worker or the generation of the object seen by the worker would be older than expected, leading to failures.
As such we decided to remove the use of SharedInformers for flavors and preferences, reverting back to straight client calls for retrieval instead. The impact of this change hasn’t been fully measured yet but is on our radar for the coming weeks to ensure performance isn’t impacted.
ControllerRevisions
https://github.com/kubevirt/kubevirt/pull/7875
This is another large foundational PR making the entire concept usable in the real world. The current design is for ControllerRevisions containing the VirtualMachineFlavorSpec and VirtualMachinePreferenceSpec to be created after the initial application of a flavor and preference to a VirtualMachineInstance by the VirtualMachine controller at start time. A reference to these ControllerRevisions is then patched into the FlavorMatcher and PreferenceMatcher associated with the VirtualMachine and used to gather the specs in the future, hopefully ensuring future restarts will continue to produce the same VirtualMachineInstance object.
s/VirtualMachineFlavor/VirtualMachineInstancetype/g
I was not involved in the initial design and naming of the CRDs but after coming onboard it was quickly highlighted that while OpenStack uses Flavors all other public cloud providers use some form of Type object to contain their resource and performance characteristics. With that in mind we have agreed to rename the CRDs for KubeVirt.
VirtualMachineFlavor and VirtualMachineClusterFlavor will become VirtualMachineInstancetype and VirtualMachineClusterInstancetype.
This aligns us with the public cloud providers while making it clear that these CRDs relate to the VirtualMachineInstance. We couldn’t shorten this to VirtualMachineType anyway as that could easily clash with the MachineType term from QEMU that we already expose as part of our API.
VirtualMachinePreference and VirtualMachineClusterPreference will also become VirtualMachineInstancePreference and VirtualMachineInstanceClusterPreference.
How and when this happens is still up in the air but the current suggestion is that these new CRDs will live alongside the existing CRDs while we deprecate and eventually remove them from the project.
Between versioning and renaming there’s lots of change listed above and I do want to get back to the design document before this all lands.
As listed at the start, this demo is using an unmerged versioning PR https://github.com/kubevirt/kubevirt/pull/7875.
The demo itself introduces the current CRDs, their basic behaviour, their interaction with default devices and finally their behaviour with the above versioning PR applied.
I was time limited in the downstream presentation I gave using this recording so please be aware it moves pretty quickly between topics. I’d highly recommend downloading the file and using asciinema to play it locally along with the spacebar to pause between commands.
The following is based on an active Design Proposal, an initial foundational PR, and a complete DNM/WIP series PR enhancing the existing Flavors API and introducing Preferences. Reviews are very much welcome on all of these PRs!
A common pattern for IaaS is to have abstractions separating the resource sizing and performance of a workload from the user defined values related to launching their custom application. This pattern is evident across all the major cloud providers (also known as hyperscalers) as well as open source IaaS projects like OpenStack. AWS has instance types, GCP has machine types, Azure has instance VM sizes and OpenStack has flavors.
Let’s take AWS for example to help visualize what this abstraction enables. Launching an EC2 instance only requires a few top level arguments: the disk image, instance type, keypair, security group, and subnet:
$ aws ec2 run-instances --image-id ami-xxxxxxxx \
--count 1 \
--instance-type c4.xlarge \
--key-name MyKeyPair \
--security-group-ids sg-903004f8 \
--subnet-id subnet-6e7f829e
When creating the EC2 instance the user doesn’t define the amount of resources, what processor to use, how to optimize the performance of the instance, or what hardware to schedule the instance on. Instead all of that information is wrapped up in that single --instance-type c4.xlarge CLI argument. c4 denoting a specific performance profile version, in this case from the Compute Optimized family and xlarge denoting a specific amount of compute resources provided by the instance type, in this case 4 vCPUs, 7.5 GiB of RAM, 750 Mbps EBS bandwidth etc.
While hyperscalers can provide predefined types with performance profiles and compute resources already assigned IaaS and virtualization projects such as OpenStack and KubeVirt can only provide the raw abstractions for operators, admins and even vendors to then create instances of these abstractions specific to each deployment.
KubeVirt’s VirtualMachine API contains many advanced options for tuning virtual machine performance that go beyond what typical users need to be aware of. Users are unable to simply define the storage/network they want assigned to their VM and then declare in broad terms what quality of resources and kind of performance they need for their VM.
Instead, the user has to be keenly aware how to request specific compute resources alongside all of the performance tunings available on the VirtualMachine API and how those tunings impact their guest’s operating system in order to get a desired result.
The partially implemented and currently v1alpha1 Virtual Machine Flavors API was an attempt to provide operators and users with a mechanism to define resource buckets that could be used during VM creation. At present this implementation provides cluster-wide VirtualMachineClusterFlavor and namespaced VirtualMachineFlavor CRDs. Each contains an array of VirtualMachineFlavorProfile that at present only encapsulates CPU resources, applying a full copy of the CPU type to the VirtualMachineInstance at runtime.
This approach has a few pitfalls, such as using embedded profiles within the CRDs, relying on the user to select the correct Flavor or VirtualMachineFlavorProfile that will allow their workload to run correctly, and not allowing a user to override viable attributes at runtime.
VirtualMachineFlavor refactor
As suggested in the title of this blog post, the ultimate goal of the Design Proposal is to provide the end user with a simple set of choices when defining a VirtualMachine within KubeVirt. We want to limit this to a flavor, optional set of preferences, volumes for storage and networks for connectivity.
To achieve this the existing VirtualMachineFlavor CRDs will be heavily modified and extended to better encapsulate resource, performance or schedulable attributes of a VM.
This will include the removal of the embedded VirtualMachineFlavorProfile type within the CRDs; it will be replaced with a singular VirtualMachineFlavorSpec type per flavor. The decision to remove VirtualMachineFlavorProfile has been made as the concept isn’t prevalent within the wider Kubernetes ecosystem and could be confusing to end users. Instead users looking to avoid duplication when defining flavors will be directed to use tools such as kustomize to generate their flavors. This tooling is already commonly used when defining resources within Kubernetes and should afford users plenty of flexibility when defining their flavors either statically or as part of a larger GitOps based workflow.
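A minimal kustomize sketch of that workflow; the file layout and names here are illustrative, not the structure of any actual repo:

```yaml
# kustomization.yaml: stamp out a size variant from a common flavor base
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- flavor-base.yaml          # a VirtualMachineFlavor with baseline cpu/memory
patches:
- target:
    kind: VirtualMachineFlavor
  patch: |-
    - op: replace
      path: /metadata/name
      value: m.xsmall
    - op: replace
      path: /spec/memory/guest
      value: 512M
```

Each size or class variant would then be a separate overlay applying different patch values to the same base.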
VirtualMachineFlavorSpec will also include elements of CPU, Devices, HostDevices, GPUs, Memory and LaunchSecurity defined fully below. Users will be unable to override any aspect of the flavor (for example, vCPU count or amount of Memory) within the VirtualMachine itself, any attempt to do so resulting in the VirtualMachine being rejected.
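A sketch of the rejection behaviour described above (the exact field placement is illustrative):

```yaml
# This VirtualMachine would be rejected by the admission webhook: it
# references a flavor that owns CPU and memory while also defining
# cpu.sockets directly in the template.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: conflicting
spec:
  flavor:
    kind: VirtualMachineFlavor
    name: m.xsmall
  template:
    spec:
      domain:
        cpu:
          sockets: 4   # conflicts with the flavor's cpu.guest
        devices: {}
```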
VirtualMachinePreference
A new set of VirtualMachinePreference CRDs will then be introduced to define any remaining attributes related to ensuring the selected guestOS can run. As the name suggests the VirtualMachinePreference CRDs will only define preferences, so unlike a flavor if a preference conflicts with something user defined within the VirtualMachine it will be ignored. For example, if a user selects a VirtualMachinePreference that requests a preferredDiskBus of virtio but then sets a disk bus of SATA for one or more disk devices within the VirtualMachine the supplied preferredDiskBus preference will not be applied to these disks. Any remaining disks that do not have a disk bus defined will however use the preferredDiskBus preference of virtio.
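Sketching that preferredDiskBus example (object names are illustrative; the preference spec shape matches the examples later in this post):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: mixed-buses
spec:
  preference:
    kind: VirtualMachinePreference
    name: virtio        # a preference defining preferredDiskBus: virtio
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: sata # user-defined: the preference is ignored here
            name: datadisk
          - disk: {}    # no bus set: picks up virtio from the preference
            name: containerdisk
```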
The Design Proposal contains a complete break down of where each VirtualMachineInstanceSpec attribute will reside, if at all, in this new approach.
Versioning of these CRDs is key to ensure VirtualMachine and VirtualMachineInstance remain unchanged even with modifications to an associated Flavor or Preference.
This is currently missing from the Design Proposal but is being worked on and will be incorporated shortly.
The current Design Proposal does list some useful ideas as non-goals for the initial implementation, these include:
Introspection of imported images to determine the correct guest OS related VirtualMachinePreferences to apply.
Using image labels to determine the correct guest OS related VirtualMachinePreferences to apply.
Remove the need to define Disks within DomainSpec when providing Volumes within a VirtualMachineInstanceSpec.
Remove the need to define Interfaces within DomainSpec when providing Networks within a VirtualMachineInstanceSpec.
All of which should be revisited before the Flavor API graduates from Alpha.
kustomize
I’ve created an example repo (many thanks to @fabiand for starting this) using kustomize to generate various classes and sizes of flavors alongside preferences.
$ KUBEVIRT_PROVIDER=k8s-1.23 ./cluster-up/kubectl.sh apply -f ../vmdefs/example.yaml
selecting docker as container runtime
virtualmachineflavor.flavor.kubevirt.io/c.large created
virtualmachineflavor.flavor.kubevirt.io/c.medium created
virtualmachineflavor.flavor.kubevirt.io/c.small created
virtualmachineflavor.flavor.kubevirt.io/c.xlarge created
virtualmachineflavor.flavor.kubevirt.io/c.xsmall created
virtualmachineflavor.flavor.kubevirt.io/g.medium created
virtualmachineflavor.flavor.kubevirt.io/g.xlarge created
virtualmachineflavor.flavor.kubevirt.io/g.xsmall created
virtualmachineflavor.flavor.kubevirt.io/m.large created
virtualmachineflavor.flavor.kubevirt.io/m.medium created
virtualmachineflavor.flavor.kubevirt.io/m.small created
virtualmachineflavor.flavor.kubevirt.io/m.xlarge created
virtualmachineflavor.flavor.kubevirt.io/m.xsmall created
virtualmachineflavor.flavor.kubevirt.io/r.large created
virtualmachineflavor.flavor.kubevirt.io/r.medium created
virtualmachineflavor.flavor.kubevirt.io/r.xlarge created
virtualmachineflavor.flavor.kubevirt.io/r.xsmall created
virtualmachinepreference.flavor.kubevirt.io/linux.cirros created
virtualmachinepreference.flavor.kubevirt.io/linux.fedora created
virtualmachinepreference.flavor.kubevirt.io/linux.rhel9 created
virtualmachinepreference.flavor.kubevirt.io/windows.windows10 created
$ cat ../vmdefs/example.yaml
[...]
---
apiVersion: flavor.kubevirt.io/v1alpha1
kind: VirtualMachineFlavor
metadata:
name: m.xsmall
spec:
cpu:
guest: 1
memory:
guest: 512M
[...]
---
apiVersion: flavor.kubevirt.io/v1alpha1
kind: VirtualMachinePreference
metadata:
name: linux.cirros
spec:
devices:
preferredCdromBus: virtio
preferredDiskBus: virtio
preferredRng: {}
[...]
$ cat ../vmdefs/cirros.yaml
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
labels:
kubevirt.io/vm: cirros
name: cirros
spec:
flavor:
name: m.xsmall
kind: VirtualMachineFlavor
preference:
name: linux.cirros
kind: VirtualMachinePreference
running: false
template:
metadata:
labels:
kubevirt.io/vm: cirros
spec:
domain:
devices:
disks:
- disk:
name: containerdisk
- disk:
name: cloudinitdisk
resources: {}
terminationGracePeriodSeconds: 0
volumes:
- containerDisk:
image: registry:5000/kubevirt/cirros-container-disk-demo:devel
name: containerdisk
- cloudInitNoCloud:
userData: |
#!/bin/sh
echo 'printed from cloud-init userdata'
name: cloudinitdisk
$ KUBEVIRT_PROVIDER=k8s-1.23 ./cluster-up/kubectl.sh apply -f ../vmdefs/cirros.yaml
selecting docker as container runtime
virtualmachine.kubevirt.io/cirros created
$ KUBEVIRT_PROVIDER=k8s-1.23 ./cluster-up/virtctl.sh start cirros
selecting docker as container runtime
VM cirros was scheduled to start
$ KUBEVIRT_PROVIDER=k8s-1.23 ./cluster-up/kubectl.sh get vmis
selecting docker as container runtime
NAME AGE PHASE IP NODENAME READY
cirros 9s Running 10.244.196.134 node01 True
$ diff <(KUBEVIRT_PROVIDER=k8s-1.23 ./cluster-up/kubectl.sh get vms/cirros -o json | jq --sort-keys .spec.template.spec) <(KUBEVIRT_PROVIDER=k8s-1.23 ./cluster-up/kubectl.sh get vmis/cirros -o json | jq --sort-keys .spec)
selecting docker as container runtime
selecting docker as container runtime
2a3,8
> "cpu": {
> "cores": 1,
> "model": "host-model",
> "sockets": 1,
> "threads": 1
> },
5a12,14
> "disk": {
> "bus": "virtio"
> },
8a18,20
> "disk": {
> "bus": "virtio"
> },
11c23,38
< ]
---
> ],
> "interfaces": [
> {
> "bridge": {},
> "name": "default"
> }
> ],
> "rng": {}
> },
> "features": {
> "acpi": {
> "enabled": true
> }
> },
> "firmware": {
> "uuid": "6784d43b-39fb-5ee7-8c17-ef10c49af985"
16c43,50
< "resources": {}
---
> "memory": {
> "guest": "512M"
> },
> "resources": {
> "requests": {
> "memory": "512M"
> }
> }
17a52,57
> "networks": [
> {
> "name": "default",
> "pod": {}
> }
> ],
22c62,63
< "image": "registry:5000/kubevirt/cirros-container-disk-demo:devel"
---
> "image": "registry:5000/kubevirt/cirros-container-disk-demo:devel",
> "imagePullPolicy": "IfNotPresent"
Below is a basic example taken from the Design Proposal that defines a single VirtualMachineFlavor and VirtualMachinePreference to simplify the creation of a Windows based VirtualMachine and, once started, a VirtualMachineInstance:
VirtualMachineFlavor
---
apiVersion: flavor.kubevirt.io/v1alpha1
kind: VirtualMachineFlavor
metadata:
name: clarge
spec:
cpu:
guest: 4
memory:
guest: 8Gi
VirtualMachinePreference
---
apiVersion: flavor.kubevirt.io/v1alpha1
kind: VirtualMachinePreference
metadata:
name: Windows
spec:
clock:
preferredClockOffset:
utc: {}
preferredTimer:
hpet:
present: false
hyperv: {}
pit:
tickPolicy: delay
rtc:
tickPolicy: catchup
cpu:
preferredCPUTopology: preferSockets
devices:
preferredDiskBus: sata
preferredInterfaceModel: e1000
preferredTPM: {}
features:
preferredAcpi: {}
preferredApic: {}
preferredHyperv:
relaxed: {}
spinlocks:
spinlocks: 8191
vapic: {}
preferredSmm: {}
firmware:
preferredUseEfi: true
preferredUseSecureBoot: true
VirtualMachine
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
labels:
kubevirt.io/vm: vm-windows-clarge-windows
name: vm-windows-clarge-windows
spec:
flavor:
kind: VirtualMachineFlavor
name: clarge
preference:
kind: VirtualMachinePreference
name: Windows
running: false
template:
metadata:
labels:
kubevirt.io/vm: vm-windows-clarge-windows
spec:
domain:
devices:
disks:
- disk: {}
name: containerdisk
resources: {}
terminationGracePeriodSeconds: 0
volumes:
- containerDisk:
image: registry:5000/kubevirt/windows-disk:devel
name: containerdisk
VirtualMachineInstance
---
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
annotations:
kubevirt.io/flavor-name: clarge
kubevirt.io/latest-observed-api-version: v1
kubevirt.io/preference-name: Windows
kubevirt.io/storage-observed-api-version: v1alpha3
creationTimestamp: "2022-04-19T10:51:53Z"
finalizers:
- kubevirt.io/virtualMachineControllerFinalize
- foregroundDeleteVirtualMachine
generation: 9
labels:
kubevirt.io/nodeName: node01
kubevirt.io/vm: vm-windows-clarge-windows
name: vm-windows-clarge-windows
namespace: default
ownerReferences:
- apiVersion: kubevirt.io/v1
blockOwnerDeletion: true
controller: true
kind: VirtualMachine
name: vm-windows-clarge-windows
uid: 8974d1e6-5f41-4486-996a-84cd6ebb3b37
resourceVersion: "8052"
uid: 369e9a17-8eca-47cc-91c2-c8f12e0f6f9f
spec:
domain:
clock:
timer:
hpet:
present: false
hyperv:
present: true
pit:
present: true
tickPolicy: delay
rtc:
present: true
tickPolicy: catchup
utc: {}
cpu:
cores: 1
model: host-model
sockets: 4
threads: 1
devices:
disks:
- disk:
bus: sata
name: containerdisk
interfaces:
- bridge: {}
name: default
tpm: {}
features:
acpi:
enabled: true
apic:
enabled: true
hyperv:
relaxed:
enabled: true
spinlocks:
enabled: true
spinlocks: 8191
vapic:
enabled: true
smm:
enabled: true
firmware:
bootloader:
efi:
secureBoot: true
uuid: bc694b87-1373-5514-9694-0f495fbae3b2
machine:
type: q35
memory:
guest: 8Gi
resources:
requests:
memory: 8Gi
networks:
- name: default
pod: {}
terminationGracePeriodSeconds: 0
volumes:
- containerDisk:
image: registry:5000/kubevirt/windows-disk:devel
imagePullPolicy: IfNotPresent
name: containerdisk
status:
activePods:
557c7fef-04b2-47c1-880b-396da944a7d3: node01
conditions:
- lastProbeTime: null
lastTransitionTime: "2022-04-19T10:51:57Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: null
message: cannot migrate VMI which does not use masquerade to connect to the pod
network
reason: InterfaceNotLiveMigratable
status: "False"
type: LiveMigratable
guestOSInfo: {}
interfaces:
- infoSource: domain
ipAddress: 10.244.196.149
ipAddresses:
- 10.244.196.149
- fd10:244::c494
mac: 66:f7:21:4e:d9:30
name: default
launcherContainerImageVersion: registry:5000/kubevirt/virt-launcher@sha256:40b2036eae39776560a73263198ff42ffd6a8f09c9aa208f8bbdc91ec35b42cf
migrationMethod: BlockMigration
migrationTransport: Unix
nodeName: node01
phase: Running
phaseTransitionTimestamps:
- phase: Pending
phaseTransitionTimestamp: "2022-04-19T10:51:53Z"
- phase: Scheduling
phaseTransitionTimestamp: "2022-04-19T10:51:53Z"
- phase: Scheduled
phaseTransitionTimestamp: "2022-04-19T10:51:57Z"
- phase: Running
phaseTransitionTimestamp: "2022-04-19T10:51:59Z"
qosClass: Burstable
runtimeUser: 0
virtualMachineRevisionName: revision-start-vm-8974d1e6-5f41-4486-996a-84cd6ebb3b37-2
volumeStatus:
- name: cloudinitdisk
size: 1048576
target: sdb