Project Description

Update the physical KubeVirt+Kubernetes test cluster in Provo from SLES15 SP2 and CaaSP to SLES15 SP3 and Rancher's k3s. This will allow us to easily experiment with and test Harvester.
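For reference, installing k3s is a one-liner per node; a minimal sketch (the server URL and token are placeholders, and any extra server options are omitted):

  # On the admin/server node (k3s bundles its own containerd and kubectl)
  curl -sfL https://get.k3s.io | sh -

  # On each additional node, join against the server
  curl -sfL https://get.k3s.io | K3S_URL=https://<server>:6443 K3S_TOKEN=<token> sh -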

Also investigate creating SLES and/or openSUSE based container disks for use with KubeVirt.

Goal for this Hackweek

Use Harvester with the updated cluster.

Use SLES-based container disks on the updated cluster.


This project is part of:

Hack Week 20

Activity

  • about 3 years ago: ories joined this project.
  • about 3 years ago: ories liked this project.
  • about 3 years ago: jfehlig started this project.
  • about 3 years ago: jfehlig originated this project.

  • Comments

    • jfehlig
      about 3 years ago by jfehlig

      The Provo cluster has been updated to SLES15 SP3 snapshot12 + latest k3s. The latest git master of Harvester is also installed on the cluster admin node and I've used it to successfully install a Tumbleweed VM from ISO.

      The installation took a very long time. I'll need to compare with a traditional virtualization host. If there are differences, the block IO path is highly suspect.

    • jfehlig
      about 3 years ago by jfehlig

      To be specific about installation time, it took ~75 minutes to install Tumbleweed in the Harvester-created VM. By comparison, it took ~15 minutes to install Tumbleweed in a similarly configured VM created on a traditional KVM host with virt-install.
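      For context, a roughly equivalent virt-install invocation for the traditional-host VM (memory, vCPU count, disk size, and ISO path are illustrative, not the exact values used):

        virt-install \
          --name tumbleweed \
          --memory 4096 \
          --vcpus 2 \
          --disk size=20 \
          --cdrom /path/to/openSUSE-Tumbleweed-DVD-x86_64-Current.iso \
          --os-variant opensusetumbleweed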

      To support my hypothesis that block IO might account for the long installation time, I ran some initial fio tests in the Tumbleweed VM created with Harvester and in the Tumbleweed VM created with virt-install:

      Test               Traditional VM   Harvester VM
      Sequential read    ~150MB/s         ~45MB/s
      Random read        ~732KB/s         ~564KB/s
      Sequential write   ~289MB/s         ~340MB/s
      Random write       ~35MB/s          ~195KB/s
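      For reference, fio invocations along these lines, run inside each guest, exercise the same IO patterns (file path, sizes, and job names are illustrative, not the exact jobs I ran):

        # Sequential read, large blocks
        fio --name=seqread --rw=read --bs=1M --size=1G --direct=1 --filename=/tmp/fio.dat

        # Random write, 4k blocks -- the case with the largest gap
        fio --name=randwrite --rw=randwrite --bs=4k --size=1G --direct=1 --filename=/tmp/fio.dat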

      The long install time of Tumbleweed in the Harvester VM might be explained by the difference in random write performance. More investigation of the storage stack used by the Harvester VM is needed...

    • jfehlig
      about 3 years ago by jfehlig

      I've created ContainerDisks based on the openSUSE Tumbleweed and SLES15 SP2 JeOS OpenStack Cloud images. For lack of a better place, I've pushed them to quay.io:

      https://quay.io/repository/jfehlig/tumbleweed
      https://quay.io/repository/jfehlig/sles15-sp2
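      For anyone wanting to reproduce these, a ContainerDisk is just a qcow2 image placed under /disk in a scratch-based container image. A minimal sketch (image file name and tag are illustrative):

        # Dockerfile: wrap a cloud image as a KubeVirt ContainerDisk
        FROM scratch
        ADD --chown=107:107 openSUSE-Tumbleweed-JeOS.x86_64-OpenStack-Cloud.qcow2 /disk/

        # build and push
        podman build -t quay.io/jfehlig/tumbleweed .
        podman push quay.io/jfehlig/tumbleweed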

      So far I have not been able to start a VM using a ContainerDisk in the test cluster. The ContainerDisk image is never imported or cloned into a PVC for use by the VM. I'll need to gain a better understanding of CDI to get it working. I've found no helpful hints in any pod, service, or system logs.
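      The import I'm expecting CDI to perform can also be requested explicitly with a DataVolume that pulls from the registry; a sketch of such a manifest (name and storage size are illustrative):

        apiVersion: cdi.kubevirt.io/v1beta1
        kind: DataVolume
        metadata:
          name: tumbleweed-dv
        spec:
          source:
            registry:
              url: "docker://quay.io/jfehlig/tumbleweed"
          pvc:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi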

      • jfehlig
        about 3 years ago by jfehlig

        It looks like the installation of k3s followed by Harvester has resulted in two default StorageClasses being defined:

        kubectl get sc -A
        NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
        local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  3d19h
        longhorn (default)     driver.longhorn.io      Delete          Immediate              true                   2d11h
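        Only one StorageClass should carry the default annotation; the standard fix is to clear it on one of them, e.g.:

          kubectl patch storageclass local-path \
            -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'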

    • jfehlig
      about 3 years ago by jfehlig

      After changing the local-path StorageClass to non-default, I'm able to import a ContainerDisk into a PVC, but I'm still having problems starting a VM that uses it.
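      A few commands that should surface where the VM start is getting stuck (namespace and VM name are illustrative):

        kubectl get dv,pvc,vmi -n default
        kubectl describe vmi tumbleweed -n default
        kubectl get events -n default --sort-by=.metadata.creationTimestamp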
