Rancher Support Matrix CLI Helper

A tool to bring the Rancher Support Matrix info into your CLI.

> Update: This project was not completed during Hackweek 22; however, development will continue as time allows, and our team is excited to pick the effort back up next year! We did make significant progress on two fronts: a) producing a static JSON API schema, and b) a system to store Rancher release version support information.

Project Description

The goal of this tool (for V1) is quite simply to pull up the Support Matrix info based on user input.

Project Components

CLI Tool - GoLang

This is the meat and potatoes of the Hackweek project. The other parts are important, but are all a means to this end.

The goal is to build it in Go so that a native binary can be provided for each platform. Go also keeps things close to K8s and Rancher, as opposed to Rust or other popular CLI languages.

Support Matrix Structured Data/API

This component is the data backing the CLI tool - it will be provided as a blob of structured data hosted on GitHub Pages.

In a strict sense this (mostly) static data will function as if it were an API, though it is not interactive at all. It will simply be a statically rendered blob of data hosted online, supporting only plain GET requests rather than the full set of HTTP verbs a true API would. The final schema of this "API" has not been decided yet; however, it will be informed by the needs of the CLI tool.
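
To illustrate how the CLI might consume this static "API", here is a minimal sketch in Go that fetches and decodes the index document with a plain GET request. The URL and the JSON field names are assumptions based on the draft index shown in the comments further down; nothing about the final schema is settled.

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // index mirrors the draft "index" document of the static API.
    // Field names are assumptions taken from the example output in the comments below.
    type index struct {
        About   string            `json:"about"`
        BaseURL string            `json:"base_url"`
        Routes  map[string]string `json:"routes"`
    }

    func main() {
        // Hypothetical URL - the final GitHub Pages location is not decided yet.
        resp, err := http.Get("https://example.github.io/rancher-support-matrix/index.json")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var idx index
        if err := json.NewDecoder(resp.Body).Decode(&idx); err != nil {
            panic(err)
        }
        fmt.Println("available routes:", idx.Routes)
    }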

Matrix Refresh Tool

This component will be used to keep the published Support Matrix structured data fresh and in sync.

Currently the data is not published in a structured format. This means we need to either: a) manually massage the data into the right formats, or b) create a system to sync that information. This tool is currently the furthest-developed part of the project, with a mostly working proof of concept completed.

It is unlikely that this tool will be published in the open. It merely exists as an "internal" tool to facilitate publishing the data in a structured way. It is also the component least likely to need collaboration during Hackweek, since the other components are the real goal.
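
To give a feel for what "structured" could mean here, the refresh tool might emit per-release records along the lines of the Go types below. This is only a sketch: the field names are assumptions lifted from the example 2.6.3 payload in the comments, flattened for readability (the actual draft wraps entries in data/links envelopes), and the package name is hypothetical.

    package matrix // hypothetical package name, for illustration only

    // RancherRelease is a sketch of a flattened per-release record; field names
    // are assumptions based on the example JSON for Rancher 2.6.3 shown in the
    // comments below.
    type RancherRelease struct {
        Version               string                 `json:"version"`
        RKEK8sRuntimes        []string               `json:"rkeK8sRuntimes"`
        RKECLIRuntimePairs    []RKECLIRuntimePair    `json:"rkeCliRuntimePairs"`
        RKEDistroDockerPairs  []DistroDockerPair     `json:"rkeDistroVersionDockerPair"`
        HostedRuntimeVersions []HostedRuntimeVersion `json:"hostedRuntimeVersions"`
    }

    // RKECLIRuntimePair records which RKE CLI version supports which K8s runtime.
    type RKECLIRuntimePair struct {
        CLI        string `json:"cli"`
        K8sRuntime string `json:"k8sRuntime"`
    }

    // DistroDockerPair records a supported OS distro/version and Docker version combination.
    type DistroDockerPair struct {
        Distro  string `json:"distro"`
        Version string `json:"version"`
        Docker  string `json:"docker"`
    }

    // HostedRuntimeVersion records a supported hosted provider (AKS/EKS/GKE) runtime version.
    type HostedRuntimeVersion struct {
        Provider string `json:"provider"`
        Version  string `json:"version"`
    }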

Inspiration

As Premium Support Engineers focused on Rancher, we often need to review the support matrix. This is critical to ensure a Rancher instance is properly configured within the expected versions. While doing this via the webpage is fine, as tech staff we spend a lot of time in CLIs. To that end, bringing this essential tool even closer to our "main workflows" is a no-brainer.

As mentioned above, the initial goal of Hackweek is simply to provide the information via a CLI report. While more could potentially be achieved within Hackweek, this conservative goal was selected to allow enough time to organize the data at hand. The project will be on much better footing once this data is organized and refresh methods are established.

Down the road it can be expanded to provide more functionality, e.g. a Validation mode (enter all the versions in use and it will highlight potential issues) or an Upgrade Path mode (input current versions and the desired Rancher version).
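
Purely as an illustration of the validation-mode idea (none of this exists yet; names and data are hypothetical, with the sample versions taken from the 2.6.3 example in the comments below), a check could be as simple as comparing the user's versions against the lists published for their Rancher release:

    package main

    import "fmt"

    // supportsK8s reports whether the given Kubernetes version appears in the
    // supported RKE runtime list published for a Rancher release. Illustrative
    // only; the real CLI's data model and function names are not decided yet.
    func supportsK8s(rkeK8sRuntimes []string, k8sVersion string) bool {
        for _, supported := range rkeK8sRuntimes {
            if supported == k8sVersion {
                return true
            }
        }
        return false
    }

    func main() {
        // Versions below come from the example 2.6.3 data shown in the comments.
        supported := []string{"v1.21.7", "v1.20.13", "v1.19.16", "v1.18.20"}
        fmt.Println(supportsK8s(supported, "v1.21.7")) // true
        fmt.Println(supportsK8s(supported, "v1.22.0")) // false - a validation mode would flag this
    }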

Goal for this Hackweek

  • Establish a structured data source for the Support Matrix
  • Publish the structured data version of the Support Matrix (to GitHub Pages)
  • Create a Go CLI tool to provide Support Matrix info

Resources

Looking for hackers with the skills:

rancher cli go golang

This project is part of:

Hack Week 21

Activity

  • over 3 years ago: kmaneshni joined this project.
  • over 3 years ago: kmaneshni liked this project.
  • over 3 years ago: mrussell liked this project.
  • over 3 years ago: nyounker joined this project.
  • over 3 years ago: nyounker liked this project.
  • over 3 years ago: dpock joined this project.
  • over 3 years ago: inichols liked this project.
  • over 3 years ago: dpock added keyword "cli" to this project.
  • over 3 years ago: dpock added keyword "go" to this project.
  • over 3 years ago: dpock added keyword "golang" to this project.
  • over 3 years ago: inichols started this project.
  • over 3 years ago: dpock added keyword "rancher" to this project.
  • over 3 years ago: dpock originated this project.

  • Comments

    • dpock
      over 3 years ago by dpock

      Just wanted to give a brief update on the progress as it's mid-week already.

      Ian and I have been working together on the design for the "structured data" version of the matrix. Our hope is that we will be able to land on a good export format and publish a few versions' worth of the data. Then we can start working on the golang CLI client that is the "real end goal".

      Even though the parts I've been working on are just "bootstrap" work to get the CLI project started, it's been great learning. I've updated the project info a bit to reflect some changes. I also published a mermaidjs diagram of the DB design being used for the CLI import tool here: https://gist.github.com/mallardduck/6bc19ed05029132370b8dda6b603f99e.

    • dpock
      over 3 years ago by dpock

      Here is an example of the API we created for the "index":

      ○ → curl http://rancher-support-matrix-full.test/ | jq
      {
        "about": "This is a static API that contains the Support information for Rancher releases!",
        "base_url": "http://rancher-support-matrix-full.test",
        "routes": {
          "api.rancherRelease": "api/release/{rancherRelease}.json",
          "api.rancherRelease.rkeK8sRuntimes": "api/release/{rancherRelease}/RkeK8sRuntimes.json",
          "api.rancherRelease.rkeK8sRuntimePair": "api/release/{rancherRelease}/RkeK8sRuntimePair.json",
          "api.rancherRelease.rkeDistroVersionDockerPair": "api/release/{rancherRelease}/RkeDistroVersionDockerPair.json",
          "api.rancherRelease.hostedRuntimes": "api/release/{rancherRelease}/HostedRuntimeVersions.json"
        },
        "rancherReleases": [
          {
            "data": { "version": "2.6.3" },
            "links": { "self": "http://rancher-support-matrix-full.test/api/release/2.6.3.json" }
          }
        ]
      }

    • dpock
      over 3 years ago by dpock

      And here is one for the 2.6.3 release - note it's not complete and only includes RKE and hosted runtime info:

      ○ → curl http://rancher-support-matrix-full.test/api/release/2.6.3.json | jq
      {
        "data": { "version": "2.6.3" },
        "relationships": {
          "rkeK8sRuntimes": {
            "data": [
              { "version": "v1.21.7" },
              { "version": "v1.20.13" },
              { "version": "v1.19.16" },
              { "version": "v1.18.20" }
            ],
            "links": { "self": "http://rancher-support-matrix-full.test/api/release/2.6.3/RkeK8sRuntimes.json" }
          },
          "rkeCliRuntimePairs": [
            { "data": { "cli": "v1.3.3", "k8sRuntime": "v1.21.7" } },
            { "data": { "cli": "v1.3.3", "k8sRuntime": "v1.20.13" } },
            { "data": { "cli": "v1.3.3", "k8sRuntime": "v1.19.16" } },
            { "data": { "cli": "v1.3.3", "k8sRuntime": "v1.18.20" } }
          ],
          "rkeDistroVersionDockerPair": [
            { "data": { "distro": "centos", "version": "7.7", "docker": "19.03.x" } },
            { "data": { "distro": "centos", "version": "7.7", "docker": "20.10.x" } },
            { "data": { "distro": "centos", "version": "7.8", "docker": "19.03.x" } },
            { "data": { "distro": "centos", "version": "7.8", "docker": "20.10.x" } },
            { "data": { "distro": "centos", "version": "7.9", "docker": "19.03.x" } },
            { "data": { "distro": "centos", "version": "7.9", "docker": "20.10.x" } },
            { "data": { "distro": "centos", "version": "8.3", "docker": "19.03.x" } },
            { "data": { "distro": "centos", "version": "8.3", "docker": "20.10.x" } },
            { "data": { "distro": "centos", "version": "8.4", "docker": "19.03.x" } },
            { "data": { "distro": "centos", "version": "8.4", "docker": "20.10.x" } },
            { "data": { "distro": "rocky-linux", "version": "8.4", "docker": "19.03.x" } },
            { "data": { "distro": "rocky-linux", "version": "8.4", "docker": "20.10.x" } },
            { "data": { "distro": "oracle-linux", "version": "7.7", "docker": "19.03.x" } },
            { "data": { "distro": "oracle-linux", "version": "7.7", "docker": "20.10.x" } },
            { "data": { "distro": "oracle-linux", "version": "7.9", "docker": "19.03.x" } },
            { "data": { "distro": "oracle-linux", "version": "7.9", "docker": "20.10.x" } },
            { "data": { "distro": "oracle-linux", "version": "8.2", "docker": "19.03.x" } },
            { "data": { "distro": "oracle-linux", "version": "8.2", "docker": "20.10.x" } },
            { "data": { "distro": "oracle-linux", "version": "8.3", "docker": "19.03.x" } },
            { "data": { "distro": "oracle-linux", "version": "8.3", "docker": "20.10.x" } },
            { "data": { "distro": "oracle-linux", "version": "8.4", "docker": "19.03.x" } },
            { "data": { "distro": "oracle-linux", "version": "8.4", "docker": "20.10.x" } },
            { "data": { "distro": "rhel", "version": "7.7", "docker": "1.13.x" } },
            { "data": { "distro": "rhel", "version": "7.7", "docker": "19.03.x" } },
            { "data": { "distro": "rhel", "version": "7.7", "docker": "20.10.x" } },
            { "data": { "distro": "rhel", "version": "7.8", "docker": "1.13.x" } },
            { "data": { "distro": "rhel", "version": "7.8", "docker": "19.03.x" } },
            { "data": { "distro": "rhel", "version": "7.8", "docker": "20.10.x" } },
            { "data": { "distro": "rhel", "version": "7.9", "docker": "1.13.x" } },
            { "data": { "distro": "rhel", "version": "7.9", "docker": "19.03.x" } },
            { "data": { "distro": "rhel", "version": "7.9", "docker": "20.10.x" } },
            { "data": { "distro": "rhel", "version": "8.2", "docker": "19.03.x" } },
            { "data": { "distro": "rhel", "version": "8.2", "docker": "20.10.x" } },
            { "data": { "distro": "rhel", "version": "8.3", "docker": "19.03.x" } },
            { "data": { "distro": "rhel", "version": "8.3", "docker": "20.10.x" } },
            { "data": { "distro": "rhel", "version": "8.4", "docker": "19.03.x" } },
            { "data": { "distro": "rhel", "version": "8.4", "docker": "20.10.x" } },
            { "data": { "distro": "sles", "version": "12 SP5", "docker": "19.03.x" } },
            { "data": { "distro": "sles", "version": "12 SP5", "docker": "20.10.x" } },
            { "data": { "distro": "sles", "version": "15SP1", "docker": "19.03.x" } },
            { "data": { "distro": "sles", "version": "15SP1", "docker": "20.10.x" } },
            { "data": { "distro": "sles", "version": "15SP2", "docker": "19.03.x" } },
            { "data": { "distro": "sles", "version": "15SP2", "docker": "20.10.x" } },
            { "data": { "distro": "sles", "version": "15SP3", "docker": "19.03.x" } },
            { "data": { "distro": "sles", "version": "15SP3", "docker": "20.10.x" } },
            { "data": { "distro": "opensuse-leap", "version": "15.3", "docker": "19.03.x" } },
            { "data": { "distro": "opensuse-leap", "version": "15.3", "docker": "20.10.x" } },
            { "data": { "distro": "ubuntu", "version": "18.04", "docker": "19.03.x" } },
            { "data": { "distro": "ubuntu", "version": "18.04", "docker": "20.10.x" } },
            { "data": { "distro": "ubuntu", "version": "20.04", "docker": "19.03.x" } },
            { "data": { "distro": "ubuntu", "version": "20.04", "docker": "20.10.x" } }
          ],
          "hostedRuntimeVersions": {
            "data": [
              { "provider": "aks", "version": "v1.20.9" },
              { "provider": "eks", "version": "v1.20.x" },
              { "provider": "gke", "version": "v1.21.5-gke.1302" }
            ],
            "links": { "self": "http://rancher-support-matrix-full.test/api/release/2.6.3/HostedRuntimeVersions.json" }
          }
        },
        "links": { "self": "http://rancher-support-matrix-full.test/api/release/2.6.3.json" }
      }

    Similar Projects

    A CLI for Harvester by mohamed.belgaied

    Harvester does not officially come with a CLI tool; the user is supposed to interact with Harvester mostly through the UI. Though it is theoretically possible to use kubectl to interact with Harvester, the manipulation of Kubevirt YAML objects is absolutely not user friendly. Inspired by tools like multipass from Canonical that easily and rapidly create one or multiple VMs, I began the development of Harvester CLI. Currently it works, but Harvester CLI needs some love to be up-to-date with Harvester v1.0.2, along with some bug fixes and improvements.

    Project Description

    Harvester CLI is a command line interface tool written in Go, designed to simplify interfacing with a Harvester cluster as a user. It is especially useful for testing purposes as you can easily and rapidly create VMs in Harvester by providing a simple command such as: harvester vm create my-vm --count 5 to create 5 VMs named my-vm-01 to my-vm-05.

    asciicast

    Harvester CLI is functional but needs a number of improvements: up-to-date functionality with Harvester v1.0.2 (some minor issues right now), modifying the default behaviour to create an opensuse VM instead of an ubuntu VM, solve some bugs, etc.

    Github Repo for Harvester CLI: https://github.com/belgaied2/harvester-cli

    Done in previous Hackweeks

    • Create a Github actions pipeline to automatically integrate Harvester CLI to Homebrew repositories: DONE
    • Automatically package Harvester CLI for OpenSUSE / Redhat RPMs or DEBs: DONE

    Goal for this Hackweek

    The goal for this Hackweek is to bring Harvester CLI up-to-speed with latest Harvester versions (v1.3.X and v1.4.X), and improve the code quality as well as implement some simple features and bug fixes.

    Some nice additions might be:

    • Improve handling of namespaced objects
    • Add features, such as network management or Load Balancer creation?
    • Add more unit tests and, why not, e2e tests
    • Improve CI
    • Improve the overall code quality
    • Test the program and create issues for it

    Issue list is here: https://github.com/belgaied2/harvester-cli/issues

    Resources

    The project is written in Go and uses client-go, the Kubernetes Go client library, to communicate with the Harvester API (which is in fact Kubernetes). Welcome contributions are:

    • Testing it and creating issues
    • Documentation
    • Go code improvement

    What you might learn

    Harvester CLI might be interesting to you if you want to learn more about:

    • GitHub Actions
    • Harvester as a SUSE Product
    • Go programming language
    • Kubernetes API
    • Kubevirt API objects (Manipulating VMs and VM Configuration in Kubernetes using Kubevirt)


    Cluster API Provider for Harvester by rcase

    Project Description

    The Cluster API "infrastructure provider" for Harvester, also named CAPHV, makes it possible to use Harvester with Cluster API. This enables people and organisations to create Kubernetes clusters running on VMs created by Harvester using a declarative spec.

    The project has been bootstrapped in HackWeek 23, and its code is available here.

    Work done in HackWeek 2023

    • Have an early working version of the provider available on Rancher Sandbox: DONE
    • Demonstrated the created cluster can be imported using Rancher Turtles: DONE
    • Stretch goal - demonstrate using the new provider with CAPRKE2: DONE and the templates are available on the repo

    DONE in HackWeek 24:

    DONE in 2025 (out of Hackweek)

    • Support of ClusterClass
    • Added to the clusterctl community providers; you can add it directly with clusterctl
    • Testing on newer versions of Harvester v1.4.X and v1.5.X
    • Support for clusterctl generate cluster ...
    • Improve Status Conditions to reflect current state of Infrastructure
    • Improve CI (some bugs for release creation)

    Goals for HackWeek 2025

    • FIRST and FOREMOST, any topic that is important to you
    • Add e2e testing
    • Certify the provider for Rancher Turtles
    • Add Machine pool labeling
    • Add PCI-e passthrough capabilities.
    • Other improvement suggestions are welcome!

    Thanks to @isim and Dominic Giebert for their contributions!

    Resources

    Looking for help from anyone interested in Cluster API (CAPI) or who wants to learn more about Harvester.

    This will be an infrastructure provider for Cluster API. Some background reading for the CAPI aspect:


    Rancher/k8s Trouble-Maker by tonyhansen

    Project Description

    When studying for my RHCSA, I found trouble-maker, which is a program that breaks a Linux OS and requires you to fix it. I want to create something similar for Rancher/k8s that can allow for troubleshooting an unknown environment.

    Goals for Hackweek 25

    • Update to modern Rancher and verify that existing tests still work
    • Change testing logic to populate secrets instead of requiring a secondary script
    • Add new tests

    Goals for Hackweek 24 (Complete)

    • Create a basic framework for creating Rancher/k8s cluster lab environments as needed for the Break/Fix
    • Create at least 5 modules that can be applied to the cluster and require troubleshooting

    Resources

    • https://github.com/celidon/rancher-troublemaker
    • https://github.com/rancher/terraform-provider-rancher2
    • https://github.com/rancher/tf-rancher-up
    • https://github.com/rancher/quickstart


    SUSE Health Check Tools by roseswe

    SUSE HC Tools Overview

    A collection of tools written in Bash or Go 1.24+ to make it easier to handle the bunch of tar.xz archives created by supportconfig.

    Background: For SUSE HC we receive a bunch of supportconfig tar balls to check them for misconfiguration, areas for improvement or future changes.

    Main focus on these HC are High Availability (pacemaker), SLES itself and SAP workloads, esp. around the SUSE best practices.

    Goals

    • Overall improvement of the tools
    • Adding new collectors
    • Add support for SLES16

    Resources

    csv2xls* example.sh go.mod listprodids.txt sumtext* trails.go README.md csv2xls.go exceltest.go go.sum m.sh* sumtext.go vercheck.py* config.ini csvfiles/ getrpm* listprodids* rpmdate.sh* sumxls* verdriver* credtest.go example.py getrpm.go listprodids.go sccfixer.sh* sumxls.go verdriver.go

    docollall.sh* extracthtml.go gethostnamectl* go.sum numastat.go cpuvul* extractcluster.go firmwarebug* gethostnamectl.go m.sh* numastattest.go cpuvul.go extracthtml* firmwarebug.go go.mod numastat* xtr_cib.sh*


    Q2Boot - A handy QEMU VM launcher by amanzini

    Description

    Q2Boot (Qemu Quick Boot) is a command-line tool that wraps QEMU to provide a streamlined experience for launching virtual machines. It automatically configures common settings like KVM acceleration, virtio drivers, and networking while allowing customization through both configuration files and command-line options.

    The project originally was a personal utility in D, now recently rewritten in idiomatic Go. It lives at repository https://github.com/ilmanzo/q2boot

    Goals

    Improve the project, test it with different scenarios, address issues and propose new features. It will benefit from some basic integration testing by providing small sample disk images.

    Resources


    Mammuthus - The NFS-Ganesha inside Kubernetes controller by vcheng

    Description

    As the user-space NFS provider, NFS-Ganesha is widely used with several projects, e.g. Longhorn/Rook. We want to create a Kubernetes controller to make configuring NFS-Ganesha easy. This controller will let users configure NFS-Ganesha through different backends like VFS/CephFS.

    Goals

    1. Create NFS-Ganesha Package on OBS: nfs-ganesha5, nfs-ganesha6
    2. Create NFS-Ganesha Container Image on OBS: Image
    3. Create a Kubernetes controller for NFS-Ganesha and support the VFS configuration on demand. Mammuthus

    Resources

    NFS-Ganesha


    terraform-provider-feilong by e_bischoff

    Project Description

    People need to test operating systems and applications on s390 platform.

    Installation from scratch solutions include:

    • just deploy and provision manually (with the help of the ftpboot script, if you are at SUSE)
    • use s3270 terminal emulation (used by openQA people?)
    • use LXC from IBM to start CP commands and analyze the results
    • use zPXE to do some PXE-alike booting (used by the orthos team?)
    • use tessia to install from scratch using autoyast
    • use libvirt for s390 to do some nested virtualization on some already deployed z/VM system
    • directly install a Linux kernel on a LPAR and use kvm + libvirt from there

    Deployment from image solutions include:

    • use ICIC web interface (openstack in disguise, contributed by IBM)
    • use ICIC from the openstack terraform provider (used by Rancher QA)
    • use zvm_ansible to control SMAPI
    • connect directly to SMAPI low-level socket interface

    IBM Cloud Infrastructure Center (ICIC) harnesses the Feilong API, but you can use Feilong without installing ICIC, provided you set up a "z/VM cloud connector" into one of your VMs following this schema.

    What about writing a terraform Feilong provider, just like we have the terraform libvirt provider? That would allow transparently calling Feilong from your main.tf files to deploy and destroy resources on your system/z.

    Other Feilong-based solutions include:

    • make libvirt Feilong-aware
    • simply call Feilong from shell scripts with curl
    • use zvmconnector client python library from Feilong
    • use zthin part of Feilong to directly command SMAPI.

    Goal for Hackweek 23

    My final goal is to be able to easily deploy and provision VMs automatically on a z/VM system, in a way that people might enjoy even outside of SUSE.

    My technical preference is to write a terraform provider plugin, as it is the approach that involves the least software components for our deployments, while remaining clean, and compatible with our existing development infrastructure.

    Goals for Hackweek 24

    Feilong provider works and is used internally by SUSE Manager team. Let's push it forward!

    Let's add support for fiberchannel disks and multipath.

    Possible goals for Hackweek 25

    Modernization, maturity, and maintenance.