Rancher Support Matrix CLI Helper

A tool to bring the Rancher Support Matrix info into your CLI.

> Update: This project was not completed during Hack Week 22; however, we will continue development as time allows, and our team is excited to continue the effort next year! We did make significant progress on both fronts: a) producing a static JSON API schema, and b) a system to store the Rancher release version support information.

Project Description

The goal of this tool (for V1) is quite simply to pull up the Support Matrix info based on user input.

Project Components

CLI Tool - GoLang

This is the meat and potatoes of the Hackweek project. The other parts are important, but are all a means to this end.

The goal is to build it in Go so as to provide a native binary for each platform. This also keeps things close to K8s and Rancher, as opposed to Rust or other popular CLI languages.

Support Matrix Structured Data/API

This component is the data backing the CLI tool - it will be provided as a blob of structured data hosted on GH pages.

In a strict sense this (mostly) static data will function as if it were an API; however, it is not interactive at all. It will simply be a statically rendered blob of data hosted online, so it supports only plain GET requests rather than the full set of HTTP verbs a true API would. The final schema of this "API" has not been decided yet, but it will be informed by the needs of the CLI tool.
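
Because the data is just static JSON served over plain GET requests, the CLI can consume it with nothing but the Go standard library. Below is a minimal, hypothetical sketch (the hosting URL and field names are assumptions, loosely based on the draft index shown in the comments further down this page):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // index mirrors a subset of the draft "index" document; the field names
    // here are assumptions taken from the example output in the comments.
    type index struct {
        About   string            `json:"about"`
        BaseURL string            `json:"base_url"`
        Routes  map[string]string `json:"routes"`
    }

    func main() {
        // Hypothetical published location of the static data (e.g. GitHub Pages).
        resp, err := http.Get("https://example.github.io/rancher-support-matrix/index.json")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var idx index
        if err := json.NewDecoder(resp.Body).Decode(&idx); err != nil {
            panic(err)
        }

        fmt.Println(idx.About)
        for name, route := range idx.Routes {
            fmt.Printf("%s -> %s/%s\n", name, idx.BaseURL, route)
        }
    }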

Matrix Refresh Tool

This component will be used to keep the published Support Matrix structured data fresh and in sync.

Currently the data is not published in a structured way. This means we need to either: a) manually massage the data into the right formats, or b) create a system to sync that information. This tool is currently the furthest-developed part of the project, with a mostly working proof of concept completed.

It is unlikely that this tool will be published in the open. It merely exists as an "internal" tool to facilitate publishing the data in a structured way. Similarly, this tool is the least likely to need collaboration during Hackweek, as the other components are the real goal.

Inspiration

As Premium Support Engineers focused on Rancher, we often need to review the support matrix. This is critical to ensure a Rancher instance is properly configured with the expected versions. While doing this via the webpage is fine, as technical staff we spend a lot of time in CLIs. To that end, bringing this essential tool even closer to our "main workflows" is a no-brainer.

As mentioned above, the initial goal for Hackweek is simply to provide the information via a CLI report. While more could potentially be achieved within Hackweek, this conservative goal was selected to allow enough time to organize the data at hand. The project will be on much better footing once this data is organized and refresh methods are established.

Down the road it can be expanded to provide more functionality, e.g. a validation mode (enter all the versions in use and it will highlight potential issues) or an upgrade-path mode (input the current versions and the desired Rancher version to get a suggested path).

Goal for this Hackweek

  • Establish a structured data source for the Support Matrix
  • Publish (to GitHub Pages) the structured-data version of the Support Matrix
  • Create a (golang) CLI tool to provide Support Matrix info

Resources

Looking for hackers with the skills:

rancher cli go golang

This project is part of:

Hack Week 21

Activity

  • over 3 years ago: kmaneshni joined this project.
  • over 3 years ago: kmaneshni liked this project.
  • over 3 years ago: mrussell liked this project.
  • over 3 years ago: nyounker joined this project.
  • over 3 years ago: nyounker liked this project.
  • over 3 years ago: dpock joined this project.
  • over 3 years ago: inichols liked this project.
  • over 3 years ago: dpock added keyword "cli" to this project.
  • over 3 years ago: dpock added keyword "go" to this project.
  • over 3 years ago: dpock added keyword "golang" to this project.
  • over 3 years ago: inichols started this project.
  • over 3 years ago: dpock added keyword "rancher" to this project.
  • over 3 years ago: dpock originated this project.

  • Comments

    • dpock
      over 3 years ago by dpock

      Just wanted to give a brief update on the progress as it's mid-week already.

      Ian and I have been working together on the design for the "structured data" version of the matrix. Our hope is that we will be able to land on a good format to export it as and publish a few versions' worth of the data, then start working on the golang CLI client that is the "real end goal".

      Even though these parts I've been working on are just "bootstrap" work to get the CLI project started, it's been great learning. I've updated the project info a bit to reflect some changes. I also published a mermaidjs diagram of the DB design being used for the CLI import tool here: https://gist.github.com/mallardduck/6bc19ed05029132370b8dda6b603f99e.

    • dpock
      over 3 years ago by dpock

      Here is an example of the API we created for the "index":

      $ curl http://rancher-support-matrix-full.test/ | jq
      {
        "about": "This is a static API that contains the Support information for Rancher releases!",
        "base_url": "http://rancher-support-matrix-full.test",
        "routes": {
          "api.rancherRelease": "api/release/{rancherRelease}.json",
          "api.rancherRelease.rkeK8sRuntimes": "api/release/{rancherRelease}/RkeK8sRuntimes.json",
          "api.rancherRelease.rkeK8sRuntimePair": "api/release/{rancherRelease}/RkeK8sRuntimePair.json",
          "api.rancherRelease.rkeDistroVersionDockerPair": "api/release/{rancherRelease}/RkeDistroVersionDockerPair.json",
          "api.rancherRelease.hostedRuntimes": "api/release/{rancherRelease}/HostedRuntimeVersions.json"
        },
        "rancherReleases": [
          {
            "data": { "version": "2.6.3" },
            "links": { "self": "http://rancher-support-matrix-full.test/api/release/2.6.3.json" }
          }
        ]
      }

    • dpock
      over 3 years ago by dpock

      And here is one for the 2.6.3 release - note it's not complete and only includes RKE and hosted runtime info:

      $ curl http://rancher-support-matrix-full.test/api/release/2.6.3.json | jq
      {
        "data": { "version": "2.6.3" },
        "relationships": {
          "rkeK8sRuntimes": {
            "data": [
              { "version": "v1.21.7" },
              { "version": "v1.20.13" },
              { "version": "v1.19.16" },
              { "version": "v1.18.20" }
            ],
            "links": { "self": "http://rancher-support-matrix-full.test/api/release/2.6.3/RkeK8sRuntimes.json" }
          },
          "rkeCliRuntimePairs": [
            { "data": { "cli": "v1.3.3", "k8sRuntime": "v1.21.7" } },
            { "data": { "cli": "v1.3.3", "k8sRuntime": "v1.20.13" } },
            { "data": { "cli": "v1.3.3", "k8sRuntime": "v1.19.16" } },
            { "data": { "cli": "v1.3.3", "k8sRuntime": "v1.18.20" } }
          ],
          "rkeDistroVersionDockerPair": [
            { "data": { "distro": "centos", "version": "7.7", "docker": "19.03.x" } },
            { "data": { "distro": "centos", "version": "7.7", "docker": "20.10.x" } },
            { "data": { "distro": "centos", "version": "7.8", "docker": "19.03.x" } },
            { "data": { "distro": "centos", "version": "7.8", "docker": "20.10.x" } },
            { "data": { "distro": "centos", "version": "7.9", "docker": "19.03.x" } },
            { "data": { "distro": "centos", "version": "7.9", "docker": "20.10.x" } },
            { "data": { "distro": "centos", "version": "8.3", "docker": "19.03.x" } },
            { "data": { "distro": "centos", "version": "8.3", "docker": "20.10.x" } },
            { "data": { "distro": "centos", "version": "8.4", "docker": "19.03.x" } },
            { "data": { "distro": "centos", "version": "8.4", "docker": "20.10.x" } },
            { "data": { "distro": "rocky-linux", "version": "8.4", "docker": "19.03.x" } },
            { "data": { "distro": "rocky-linux", "version": "8.4", "docker": "20.10.x" } },
            { "data": { "distro": "oracle-linux", "version": "7.7", "docker": "19.03.x" } },
            { "data": { "distro": "oracle-linux", "version": "7.7", "docker": "20.10.x" } },
            { "data": { "distro": "oracle-linux", "version": "7.9", "docker": "19.03.x" } },
            { "data": { "distro": "oracle-linux", "version": "7.9", "docker": "20.10.x" } },
            { "data": { "distro": "oracle-linux", "version": "8.2", "docker": "19.03.x" } },
            { "data": { "distro": "oracle-linux", "version": "8.2", "docker": "20.10.x" } },
            { "data": { "distro": "oracle-linux", "version": "8.3", "docker": "19.03.x" } },
            { "data": { "distro": "oracle-linux", "version": "8.3", "docker": "20.10.x" } },
            { "data": { "distro": "oracle-linux", "version": "8.4", "docker": "19.03.x" } },
            { "data": { "distro": "oracle-linux", "version": "8.4", "docker": "20.10.x" } },
            { "data": { "distro": "rhel", "version": "7.7", "docker": "1.13.x" } },
            { "data": { "distro": "rhel", "version": "7.7", "docker": "19.03.x" } },
            { "data": { "distro": "rhel", "version": "7.7", "docker": "20.10.x" } },
            { "data": { "distro": "rhel", "version": "7.8", "docker": "1.13.x" } },
            { "data": { "distro": "rhel", "version": "7.8", "docker": "19.03.x" } },
            { "data": { "distro": "rhel", "version": "7.8", "docker": "20.10.x" } },
            { "data": { "distro": "rhel", "version": "7.9", "docker": "1.13.x" } },
            { "data": { "distro": "rhel", "version": "7.9", "docker": "19.03.x" } },
            { "data": { "distro": "rhel", "version": "7.9", "docker": "20.10.x" } },
            { "data": { "distro": "rhel", "version": "8.2", "docker": "19.03.x" } },
            { "data": { "distro": "rhel", "version": "8.2", "docker": "20.10.x" } },
            { "data": { "distro": "rhel", "version": "8.3", "docker": "19.03.x" } },
            { "data": { "distro": "rhel", "version": "8.3", "docker": "20.10.x" } },
            { "data": { "distro": "rhel", "version": "8.4", "docker": "19.03.x" } },
            { "data": { "distro": "rhel", "version": "8.4", "docker": "20.10.x" } },
            { "data": { "distro": "sles", "version": "12 SP5", "docker": "19.03.x" } },
            { "data": { "distro": "sles", "version": "12 SP5", "docker": "20.10.x" } },
            { "data": { "distro": "sles", "version": "15SP1", "docker": "19.03.x" } },
            { "data": { "distro": "sles", "version": "15SP1", "docker": "20.10.x" } },
            { "data": { "distro": "sles", "version": "15SP2", "docker": "19.03.x" } },
            { "data": { "distro": "sles", "version": "15SP2", "docker": "20.10.x" } },
            { "data": { "distro": "sles", "version": "15SP3", "docker": "19.03.x" } },
            { "data": { "distro": "sles", "version": "15SP3", "docker": "20.10.x" } },
            { "data": { "distro": "opensuse-leap", "version": "15.3", "docker": "19.03.x" } },
            { "data": { "distro": "opensuse-leap", "version": "15.3", "docker": "20.10.x" } },
            { "data": { "distro": "ubuntu", "version": "18.04", "docker": "19.03.x" } },
            { "data": { "distro": "ubuntu", "version": "18.04", "docker": "20.10.x" } },
            { "data": { "distro": "ubuntu", "version": "20.04", "docker": "19.03.x" } },
            { "data": { "distro": "ubuntu", "version": "20.04", "docker": "20.10.x" } }
          ],
          "hostedRuntimeVersions": {
            "data": [
              { "provider": "aks", "version": "v1.20.9" },
              { "provider": "eks", "version": "v1.20.x" },
              { "provider": "gke", "version": "v1.21.5-gke.1302" }
            ],
            "links": { "self": "http://rancher-support-matrix-full.test/api/release/2.6.3/HostedRuntimeVersions.json" }
          }
        },
        "links": { "self": "http://rancher-support-matrix-full.test/api/release/2.6.3.json" }
      }

    Similar Projects

    Rancher/k8s Trouble-Maker by tonyhansen

    Project Description

    When studying for my RHCSA, I found trouble-maker, which is a program that breaks a Linux OS and requires you to fix it. I want to create something similar for Rancher/k8s that can allow for troubleshooting an unknown environment.

    Goals for Hackweek 25

    • Update to modern Rancher and verify that existing tests still work
    • Change testing logic to populate secrets instead of requiring a secondary script
    • Add new tests

    Goals for Hackweek 24 (Complete)

    • Create a basic framework for creating Rancher/k8s cluster lab environments as needed for the Break/Fix
    • Create at least 5 modules that can be applied to the cluster and require troubleshooting

    Resources

    • https://github.com/celidon/rancher-troublemaker
    • https://github.com/rancher/terraform-provider-rancher2
    • https://github.com/rancher/tf-rancher-up
    • https://github.com/rancher/quickstart


    Rancher Cluster Lifecycle Visualizer by jferraz

    Description

    Rancher’s v2 provisioning system represents each downstream cluster with several Kubernetes custom resources across multiple API groups, such as clusters.provisioning.cattle.io and clusters.management.cattle.io. Understanding why a cluster is stuck in states like "Provisioning", "Updating", or "Unavailable" often requires jumping between these resources, reading conditions, and correlating them with agent connectivity and known failure modes. This project will build a Cluster Lifecycle Visualizer: a small, read-only controller that runs in the Rancher management cluster and generates a single, human-friendly view per cluster. It will watch Rancher cluster CRDs, derive a simplified lifecycle phase, keep a history of phase transitions from installation time onward, and attach a short, actionable recommendation string that hints at what the operator should check or do next.
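
    To make the idea concrete, here is a rough, hypothetical sketch of the phase-derivation and recommendation ruleset (the condition names, phases, and hints below are illustrative placeholders, not the actual Rancher condition set or this project's final API):

      package main

      import "fmt"

      // Phase is the simplified lifecycle state shown to operators.
      type Phase string

      const (
          PhaseProvisioning           Phase = "Provisioning"
          PhaseWaitingForClusterAgent Phase = "WaitingForClusterAgent"
          PhaseActive                 Phase = "Active"
          PhaseUpdating               Phase = "Updating"
          PhaseError                  Phase = "Error"
      )

      // condition is a minimal stand-in for a Kubernetes status condition;
      // the real controller would read conditions from the Rancher cluster CRDs.
      type condition struct {
          Type   string
          Status string // "True", "False", or "Unknown"
      }

      // derivePhase collapses raw conditions into a single simplified phase.
      func derivePhase(conds []condition) Phase {
          byType := map[string]string{}
          for _, c := range conds {
              byType[c.Type] = c.Status
          }
          switch {
          case byType["Ready"] == "True":
              return PhaseActive
          case byType["Updated"] == "False":
              return PhaseUpdating
          case byType["AgentDeployed"] == "True" && byType["Connected"] != "True":
              return PhaseWaitingForClusterAgent
          case byType["Provisioned"] != "True":
              return PhaseProvisioning
          default:
              return PhaseError
          }
      }

      // recommendations is the small ruleset attaching an actionable hint to a phase.
      var recommendations = map[Phase]string{
          PhaseWaitingForClusterAgent: "Check cattle-cluster-agent logs and connectivity to the Rancher server URL.",
          PhaseUpdating:               "The cluster is still reconciling after a change; re-check in a few minutes.",
          PhaseError:                  "Inspect cluster conditions with kubectl describe and review Rancher logs.",
      }

      func main() {
          conds := []condition{
              {Type: "Provisioned", Status: "True"},
              {Type: "AgentDeployed", Status: "True"},
              {Type: "Connected", Status: "False"},
          }
          p := derivePhase(conds)
          fmt.Printf("phase=%s hint=%q\n", p, recommendations[p])
      }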

    Goals

    • Provide a compact lifecycle summary for each Rancher-managed cluster (e.g. Provisioning, WaitingForClusterAgent, Active, Updating, Error) derived from provisioning.cattle.io/v1 Cluster and management.cattle.io/v3 Cluster status and conditions.
    • Maintain a phase history for each cluster, allowing operators to see how its state evolved over time since the visualizer was installed.
    • Attach a recommended action to the current phase using a small ruleset based on common Rancher failure modes (for example, cluster agent not connected, cluster still stabilizing after an upgrade, or generic error states), to improve the day-to-day debugging experience.
    • Deliver an easy-to-install, read-only component (single YAML or small Helm chart) that Rancher users can deploy to their management cluster and inspect via kubectl get/describe, without UI changes or direct access to downstream clusters.
    • Use idiomatic Go, wrangler, and Rancher APIs.

    Resources

    • Rancher Manager documentation on RKE2 and K3s cluster configuration and provisioning flows.
    • Rancher API Go types for provisioning.cattle.io/v1 and management.cattle.io/v3 (from the rancher/rancher repository or published Go packages).
    • Existing Rancher architecture docs and internal notes about cluster provisioning, cluster agents, and node agents.
    • A local Rancher management cluster (k3s or RKE2) with a few test downstream clusters to validate phase detection, history tracking, and recommendations.


    The Agentic Rancher Experiment: Do Androids Dream of Electric Cattle? by moio

    Rancher is a beast of a codebase. Let's investigate if the new 2025 generation of GitHub Autonomous Coding Agents and Copilot Workspaces can actually tame it.

    [Image: a GitHub robot mascot trying to lasso a blue bull with a Kubernetes logo tattooed on it]


    The Plan

    Create a sandbox GitHub Organization, clone in key Rancher repositories, and let the AI loose to see if it can handle real-world enterprise OSS maintenance - or if it just hallucinates new breeds of Kubernetes resources!

    Specifically, throw "Agentic Coders" some typical tasks in a complex, long-lived open-source project, such as:


    The Grunt Work: generate missing GoDocs, unit tests, and refactorings. Rebase PRs.

    The Complex Stuff: fix actual (historical) bugs and feature requests to see if they can traverse the complexity without (too much) human hand-holding.

    Hunting Down Gaps: find areas lacking in docs, areas of improvement in code, dependency bumps, and so on.


    If time allows, also experiment with Model Context Protocol (MCP) to give agents context on our specific build pipelines and CI/CD logs.

    Why?

    We know AI can write "Hello World." and also moderately complex programs from a green field. But can it rebase a 3-month-old PR with conflicts in rancher/rancher? I want to find the breaking point of current AI agents to determine if and how they can help us to reduce our technical debt, work faster and better. At the same time, find out about pitfalls and shortcomings.

    The CONCLUSION!!!

    A State of the Union document was compiled to summarize lessons learned this week. For more gory details, just read on in the diary below!


    Self-Scaling LLM Infrastructure Powered by Rancher by ademicev0


    Description

    The Problem

    Running LLMs can get expensive and complex pretty quickly.

    Today there are typically two choices:

    1. Use cloud APIs like OpenAI or Anthropic. Easy to start with, but costs add up at scale.
    2. Self-host everything - set up Kubernetes, figure out GPU scheduling, handle scaling, manage model serving... it's a lot of work.

    What if there was a middle ground?

    What if infrastructure scaled itself instead of making you scale it?

    Can we use existing Rancher capabilities like CAPI, autoscaling, and GitOps to make this simpler instead of building everything from scratch?

    Project Repository: github.com/alexander-demicev/llmserverless


    What This Project Does

    A key feature is hybrid deployment: requests can be routed based on complexity or privacy needs. Simple or low-sensitivity queries can use public APIs (like OpenAI), while complex or private requests are handled in-house on local infrastructure. This flexibility allows balancing cost, privacy, and performance - using cloud for routine tasks and on-premises resources for sensitive or demanding workloads.

    A complete, self-scaling LLM infrastructure that:

    • Scales to zero when idle (no idle costs)
    • Scales up automatically when requests come in
    • Adds more nodes when needed, removes them when demand drops
    • Runs on any infrastructure - laptop, bare metal, or cloud

    Think of it as "serverless for LLMs" - focus on building, the infrastructure handles itself.

    How It Works

    A combination of open source tools working together:

    Flow:

    • Users interact with OpenWebUI (chat interface)
    • Requests go to LiteLLM Gateway
    • LiteLLM routes requests to:
      • Ollama (Knative) for local model inference (auto-scales pods)
      • Or cloud APIs for fallback


    A CLI for Harvester by mohamed.belgaied

    Harvester does not officially come with a CLI tool; the user is supposed to interact with Harvester mostly through the UI. Though it is theoretically possible to use kubectl to interact with Harvester, the manipulation of Kubevirt YAML objects is absolutely not user friendly. Inspired by tools like multipass from Canonical, which easily and rapidly creates one or multiple VMs, I began the development of Harvester CLI. Currently it works, but Harvester CLI needs some love to be up to date with Harvester v1.0.2, and it needs some bug fixes and improvements as well.

    Project Description

    Harvester CLI is a command line interface tool written in Go, designed to simplify interfacing with a Harvester cluster as a user. It is especially useful for testing purposes as you can easily and rapidly create VMs in Harvester by providing a simple command such as: harvester vm create my-vm --count 5 to create 5 VMs named my-vm-01 to my-vm-05.

    Harvester CLI is functional but needs a number of improvements: bringing functionality up to date with Harvester v1.0.2 (some minor issues right now), modifying the default behaviour to create an openSUSE VM instead of an Ubuntu VM, solving some bugs, etc.

    Github Repo for Harvester CLI: https://github.com/belgaied2/harvester-cli

    Done in previous Hackweeks

    • Create a GitHub Actions pipeline to automatically integrate Harvester CLI into Homebrew repositories: DONE
    • Automatically package Harvester CLI for openSUSE / Red Hat RPMs or DEBs: DONE

    Goal for this Hackweek

    The goal for this Hackweek is to bring Harvester CLI up-to-speed with latest Harvester versions (v1.3.X and v1.4.X), and improve the code quality as well as implement some simple features and bug fixes.

    Some nice additions might be:

    • Improve handling of namespaced objects
    • Add features, such as network management or Load Balancer creation?
    • Add more unit tests and, why not, e2e tests
    • Improve CI
    • Improve the overall code quality
    • Test the program and create issues for it

    Issue list is here: https://github.com/belgaied2/harvester-cli/issues

    Resources

    The project is written in Go and uses client-go, the Kubernetes Go client library, to communicate with the Harvester API (which is in fact Kubernetes). Welcome contributions are:

    • Testing it and creating issues
    • Documentation
    • Go code improvement

    What you might learn

    Harvester CLI might be interesting to you if you want to learn more about:

    • GitHub Actions
    • Harvester as a SUSE Product
    • Go programming language
    • Kubernetes API
    • Kubevirt API objects (Manipulating VMs and VM Configuration in Kubernetes using Kubevirt)


    Rewrite Distrobox in go (POC) by fabriziosestito

    Description

    Rewriting Distrobox in Go.

    Main benefits:

    • Easier to maintain and to test
    • Adapter pattern for different container backends (LXC, systemd-nspawn, etc.) - a rough interface sketch follows below
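
    As a rough illustration of that adapter idea (all names below are hypothetical, not the actual Distrobox or rewrite API), each backend would satisfy a common interface and the CLI would only ever talk to the interface:

      package main

      import "fmt"

      // ContainerBackend is a hypothetical adapter interface; each container
      // runtime (podman, LXC, systemd-nspawn, ...) gets its own implementation.
      type ContainerBackend interface {
          Create(name, image string) error
          Enter(name string) error
      }

      // podmanBackend is a stub adapter; a real one would shell out to podman
      // (or use a client library) instead of just printing the command.
      type podmanBackend struct{}

      func (podmanBackend) Create(name, image string) error {
          fmt.Printf("podman create --name %s %s\n", name, image)
          return nil
      }

      func (podmanBackend) Enter(name string) error {
          fmt.Printf("podman exec -it %s sh\n", name)
          return nil
      }

      func main() {
          var backend ContainerBackend = podmanBackend{}
          _ = backend.Create("mybox", "registry.opensuse.org/opensuse/tumbleweed")
          _ = backend.Enter("mybox")
      }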

    Goals

    • Build a minimal starting point with core commands
    • Keep the CLI interface compatible: existing users shouldn't notice any difference
    • Use a clean Go architecture with adapters for different container backends
    • Keep dependencies minimal and binary size small
    • Benchmark against the original shell script

    Resources

    • Upstream project: https://github.com/89luca89/distrobox/
    • Distrobox site: https://distrobox.it/
    • ArchWiki: https://wiki.archlinux.org/title/Distrobox


    Contribute to terraform-provider-libvirt by pinvernizzi

    Description

    The SUSE Manager (SUMA) teams' main tool for infrastructure automation, Sumaform, largely relies on terraform-provider-libvirt. That provider is also widely used by other teams, both inside and outside SUSE.

    It would be good to help the maintainers of this project and give back to the community around it, after all the amazing work that has been already done.

    If you're interested in any of these - infrastructure automation, Terraform, virtualization, tooling development, Go (...) - it is also a good chance to learn a bit about them all by putting your hands on an interesting, complex, real-use-case project.

    Goals

    • Get more familiar with Terraform provider development and libvirt bindings in Go
    • Solve some issues and/or implement some features
    • Get in touch with the community around the project

    Resources


    Help Create A Chat Control Resistant Turnkey Chatmail/Deltachat Relay Stack - Rootless Podman Compose, OpenSUSE BCI, Hardened, & SELinux by 3nd5h1771fy

    Description

    The Mission: Decentralized & Sovereign Messaging

    FYI: If you have never heard of "Chatmail", you can visit their site here; simply put, it can be thought of as the underlying protocol/platform that decentralized messengers like DeltaChat use for their communications. Do not confuse it with the honeypot-looking, non-open-source, paid-for product with better SEO that directs you to chatmailsecure(dot)com.

    In an era of increasing centralized surveillance by unaccountable bad actors (aka BigTech), "Chat Control," and the erosion of digital privacy, the need for sovereign communication infrastructure is critical. Chatmail is a pioneering initiative that bridges the gap between classic email and modern instant messaging, offering metadata-minimized, end-to-end encrypted (E2EE) communication that is interoperable and open.

    However, unless you are a seasoned sysadmin, the current recommended deployment method of a Chatmail relay is rigid, fragile, difficult to properly secure, and effectively takes over the entire host the "relay" is deployed on.

    Why This Matters

    A simple, host agnostic, reproducible deployment lowers the entry cost for anyone wanting to run a privacy‑preserving, decentralized messaging relay. In an era of perpetually resurrected chat‑control legislation threats, EU digital‑sovereignty drives, and many dangers of using big‑tech messaging platforms (Apple iMessage, WhatsApp, FB Messenger, Instagram, SMS, Google Messages, etc...) for any type of communication, providing an easy‑to‑use alternative empowers:

    • Censorship resistance - No single entity controls the relay; operators can spin up new nodes quickly.
    • Surveillance mitigation - End‑to‑end OpenPGP encryption ensures relay operators never see plaintext.
    • Digital sovereignty - Communities can host their own infrastructure under local jurisdiction, aligning with national data‑policy goals.

    By turning the Chatmail relay into a plug‑and‑play container stack, we enable broader adoption, foster a resilient messaging fabric, and give developers, activists, and hobbyists a concrete tool to defend privacy online.

    Goals

    As I indicated earlier, this project aims to drastically simplify the deployment of a Chatmail relay. By converting this architecture into a portable, containerized stack using Podman and openSUSE base container images, we allow anyone to deploy their own censorship-resistant, privacy-preserving communications node in minutes.

    Our goal for Hack Week: package every component into containers built on openSUSE/MicroOS base images, initially orchestrated with a single container-compose.yml (podman-compose compatible). The stack will:

    • Run on any host that supports Podman (including optimizations and enhancements for SELinux‑enabled systems).
    • Allow network decoupling by refactoring configurations to move from file-system-constrained Unix sockets to internal TCP networking, allowing containers to achieve stricter isolation.
    • Utilize enhanced security with SELinux: using purpose-built utilities such as udica, we can quickly generate custom SELinux policies for the container stack, ensuring strict confinement superior to standard/typical Docker deployments.
    • Allow the use of bind or remote mounted volumes for shared data (/var/vmail, DKIM keys, TLS certs, etc.).
    • Replace the local DNS server requirement with a remote DNS‑provider API for DKIM/TXT record publishing.

    By delivering a turnkey, host-agnostic, reproducible deployment, we lower the barrier for individuals and small communities to launch their own Chatmail relays, fostering a decentralized, censorship-resistant messaging ecosystem that can serve DeltaChat users and/or future services adopting this protocol.

    Resources


    Play with the userfaultfd(2) system call and download on demand using HTTP Range Requests with Golang by rbranco

    Description

    userfaultfd(2) is a cool system call for handling page faults in user space. This should allow me to list the contents of an ISO or similar archive without downloading the whole thing. The userfaultfd(2) part can also, in theory, be done with the PROT_NONE mprotect + SIGSEGV trick, for complete Unix portability, though that is reportedly slower.
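
    As a rough sketch of the HTTP Range Request half (not the project's library, just the Go standard library and a hypothetical URL), fetching an arbitrary byte range of a remote image could look like this:

      package main

      import (
          "fmt"
          "io"
          "net/http"
      )

      // fetchRange downloads bytes [start, end] (inclusive) of the given URL,
      // relying on the server honouring Range headers (RFC 7233).
      func fetchRange(url string, start, end int64) ([]byte, error) {
          req, err := http.NewRequest(http.MethodGet, url, nil)
          if err != nil {
              return nil, err
          }
          req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", start, end))

          resp, err := http.DefaultClient.Do(req)
          if err != nil {
              return nil, err
          }
          defer resp.Body.Close()

          // 206 Partial Content means the server honoured the range; a plain
          // 200 OK would mean it ignored it and is sending the whole file.
          if resp.StatusCode != http.StatusPartialContent {
              return nil, fmt.Errorf("no partial content: %s", resp.Status)
          }
          return io.ReadAll(resp.Body)
      }

      func main() {
          // Hypothetical URL: read just the first 32 KiB of a remote ISO,
          // e.g. enough to start inspecting its volume descriptors.
          data, err := fetchRange("https://example.com/some.iso", 0, 32*1024-1)
          if err != nil {
              panic(err)
          }
          fmt.Printf("fetched %d bytes\n", len(data))
      }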

    Goals

    1. Create my own library for userfaultfd(2) in Golang.
    2. Create my own library for HTTP Range Requests.
    3. Complete portability with Unix.
    4. Benchmarks.
    5. Contribute some tests to LTP.

    Resources

    1. https://docs.kernel.org/admin-guide/mm/userfaultfd.html
    2. https://www.cons.org/cracauer/cracauer-userfaultfd.html


    Create a go module to wrap happy-compta.fr by cbosdonnat

    Description

    https://happy-compta.fr is a simple bookkeeping tool for French works councils. While it does the job, it has no API to work with, and it is tedious to enter loads of operations.

    Goals

    Write a Go client module to be used as an API to programmatically manipulate the tool.

    Writing an example tool to load data from a CSV file would be good too.


    SUSE Health Check Tools by roseswe

    SUSE HC Tools Overview

    A collection of tools written in Bash or Go 1.24++ to make life easier when handling a bunch of tar.xz tarballs created by supportconfig.

    Background: for SUSE HCs we receive a bunch of supportconfig tarballs and check them for misconfiguration, areas for improvement, or future changes.

    The main focus of these HCs is High Availability (Pacemaker), SLES itself, and SAP workloads, especially around the SUSE best practices.

    Goals

    • Overall improvement of the tools
    • Adding new collectors
    • Add support for SLES16

    Resources

    csv2xls* example.sh go.mod listprodids.txt sumtext* trails.go README.md csv2xls.go exceltest.go go.sum m.sh* sumtext.go vercheck.py* config.ini csvfiles/ getrpm* listprodids* rpmdate.sh* sumxls* verdriver* credtest.go example.py getrpm.go listprodids.go sccfixer.sh* sumxls.go verdriver.go

    docollall.sh* extracthtml.go gethostnamectl* go.sum numastat.go cpuvul* extractcluster.go firmwarebug* gethostnamectl.go m.sh* numastattest.go cpuvul.go extracthtml* firmwarebug.go go.mod numastat* xtr_cib.sh*

    $ getrpm -r pacemaker
    >> Product ID: 2795 (SUSE Linux Enterprise Server for SAP Applications 15 SP7 x86_64), RPM Name:
    +--------------+----------------------------+--------+--------------+--------------------+
    | Package Name | Version                    | Arch   | Release      | Repository         |
    +--------------+----------------------------+--------+--------------+--------------------+
    | pacemaker    | 2.1.10+20250718.fdf796ebc8 | x86_64 | 150700.3.3.1 | sle-ha/15.7/x86_64 |
    | pacemaker    | 2.1.9+20250410.471584e6a2  | x86_64 | 150700.1.9   | sle-ha/15.7/x86_64 |
    +--------------+----------------------------+--------+--------------+--------------------+
    Total packages found: 2


    Create a Cloud-Native policy engine with notifying capabilities to optimize resource usage by gbazzotti

    Description

    The goal of this project is to begin the initial phase of development of an all-in-one Cloud-Native Policy Engine that notifies resource owners when their resources infringe predetermined policies. This was inspired by a current issue in the CES-SRE Team where other solutions seemed to not exactly correspond to the needs of the specific workloads running on the Public Cloud Team space.

    The initial architecture can be checked out on the Repository listed under Resources.

    Among the features that will differentiate this project from other monitoring/notification systems:

    • Pre-defined, sensible policies written at the software level, avoiding the learning curve of requiring users to write their own policies
    • All-in-one functionality: logging, mailing, and all other actions do not require installing any additional plugins/packages
    • Easy account management, with all required configuration parsed from a single JSON file
    • Eliminate integrations by not requiring metrics to go through a data aggregator

    Goals

    • Create a minimal working prototype following the workflow specified on the documentation
    • Provide instructions on installation/usage
    • Work on email notifying capabilities

    Resources