Description

As we delve deeper into the complexities of managing multiple CRD versions within a single Kubernetes cluster, I want to introduce "Bottles" - a proof of concept that aims to address these challenges.

Bottles proposes a novel approach to isolating and deploying different CRD versions in self-contained environments. This would allow for greater flexibility and efficiency in managing diverse workloads.

Goals

  • Evaluate feasibility: determine whether this approach is technically viable, and identify possible obstacles and limitations.
  • Reuse existing technology: leverage existing products whenever possible, e.g. building on top of Kubewarden as the admission controller.
  • Focus on Rancher's use case: the ultimate goal is to use this approach to solve Rancher users' needs.

Resources

Core concepts:

  • ConfigMaps: Bottles could be defined and configured using ConfigMaps.
  • Admission Controller: An admission controller will detect "bottled" CRDs being installed and replace the resource name used to store them.
  • Aggregated API Server: By analyzing the author of a request, the aggregated API server will determine the correct bottle and route the request accordingly, making it transparent for the user.
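As a concrete (and entirely hypothetical) illustration of the first concept, a Bottle definition carried in a ConfigMap could be parsed like this. The bottleSpec shape, the bottle.json key, and the idea of listing isolated API groups are assumptions made for the sketch, not part of the proposal:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// bottleSpec is an invented shape for what a Bottle ConfigMap might carry:
// the bottle's name and the API groups it isolates.
type bottleSpec struct {
	Name   string   `json:"name"`
	Groups []string `json:"groups"`
}

// parseBottle extracts a bottleSpec from a ConfigMap's data map, assuming
// the spec is stored as JSON under a well-known key.
func parseBottle(data map[string]string) (bottleSpec, error) {
	var s bottleSpec
	err := json.Unmarshal([]byte(data["bottle.json"]), &s)
	return s, err
}

func main() {
	// stands in for corev1.ConfigMap.Data
	cm := map[string]string{
		"bottle.json": `{"name":"dev","groups":["example.com"]}`,
	}
	s, err := parseBottle(cm)
	if err != nil {
		panic(err)
	}
	fmt.Printf("bottle %s isolates groups %v\n", s.Name, s.Groups)
}
```

In a real controller the data map would come from a corev1.ConfigMap watched through client-go or controller-runtime.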

Looking for hackers with the skills:

rancher kubernetes poc

This project is part of:

Hack Week 24

Activity

  • about 2 months ago: rapetz joined this project.
  • about 2 months ago: rapetz liked this project.
  • about 2 months ago: moio liked this project.
  • 2 months ago: aruiz started this project.
  • 2 months ago: aruiz added keyword "rancher" to this project.
  • 2 months ago: aruiz added keyword "kubernetes" to this project.
  • 2 months ago: aruiz added keyword "poc" to this project.
  • 2 months ago: aruiz originated this project.

  • Comments

    • aruiz
      about 1 month ago by aruiz | Reply

      We started the week by creating a rough plan of the areas we wanted to explore, in order to divide the problem into smaller parts and identify further areas of work.

      Rough Plan

      • Create example CRDs that allow experimenting in our local cluster without breaking it.
        • Nothing really fancy, manually crafted. E.g. copy an existing one from the cluster and just rename it.
      • A Kubernetes controller skeleton to use as the base.
        • Likely based on Kubebuilder/controller-runtime
      • Scripts for bringing a test/dev environment up and down.
      • Explore admission controllers:
        • Can we use Kubewarden, at least for some parts?
        • Requirements and possible functional alternatives.
      • Explore APIServices
        • What's the state-of-the-art for building APIServices? Does controller-runtime support it?
        • Is it possible to extract user information from requests? Hopefully without having to terminate auth here.
        • Does it support redirects?

      I talked to Rafa about the project, going over the different areas of exploration.

    • aruiz
      about 1 month ago by aruiz | Reply

      I started by exploring Kubewarden to check if we could just use a policy to manage the mapping of CRDs being installed along with a Bottle; the idea was to perform this transformation before the CRD is stored in Kubernetes, so we would need to use Admission/Mutating webhooks.

      I concluded that Kubewarden was not a good fit for this because:

      • Go policies are compiled with TinyGo, due to limited WebAssembly support in the official toolchain. TinyGo still has some limitations building the Go standard library, which prevents us from using the k8s.io libraries we would need to perform the desired steps, as they require a Kubernetes client.
      • CEL policies won't provide enough flexibility for our purpose.
      • Rust-based policies could be an option, but would probably require a bigger effort to implement (and differ from the language used for the rest of the project).

      I confirmed that Kubebuilder has support for writing defaulting webhooks, so we could use it for modifying the CRDs before they get persisted.
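As a minimal sketch of what such a defaulting webhook could do, the mutation can be reduced to a pure function. The crd struct below is a tiny stand-in for apiextensions-v1's CustomResourceDefinition, and the "prefix the API group with the bottle name" scheme is a placeholder for whatever mapping the PoC settles on:

```go
package main

import "fmt"

// crd is a minimal stand-in for apiextensionsv1.CustomResourceDefinition,
// carrying only the fields the webhook would rewrite.
type crd struct {
	Name   string // metadata.name, must equal <plural>.<group>
	Group  string // spec.group
	Plural string // spec.names.plural
}

// bottleCRD rewrites the group (and the derived metadata.name) so the CRD
// is stored under a bottle-scoped API group. The naming scheme is invented.
func bottleCRD(c crd, bottle string) crd {
	c.Group = bottle + "." + c.Group
	c.Name = c.Plural + "." + c.Group
	return c
}

func main() {
	in := crd{Name: "widgets.example.com", Group: "example.com", Plural: "widgets"}
	fmt.Printf("%s -> %s\n", in.Name, bottleCRD(in, "dev").Name)
}
```

A Kubebuilder webhook would apply the same transformation inside its Default() method, before the object is persisted.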

      This framework also creates scripts and resources for building the controller's image, as well as the manifests to install it in Kubernetes. Although it's very focused on adding your own APIs, with some tweaks we could use it to generate such boilerplate for a built-in type (CRDs).

      These "difficulties" made me think of alternative approaches to our goal. Assuming that Helm charts are the selected installation mechanism, we could:

      • Create an offline tool whose input is the complete manifests (including the Bottle spec), and produces the required modifications, so that they can be directly applied to Kubernetes.
      • This could be a kubectl (krew) or helm plugin.
      • An offline transformation is a simpler solution, since it moves the processing client-side (allowing dry-runs, storing the definitive manifests for GitOps, etc.), though it makes the approach less transparent.

    • aruiz
      about 1 month ago by aruiz | Reply

      Since we had already used a few days, I decided to prioritize exploring the remaining areas instead of continuing to build the admission part, since that was the easiest piece to fake while keeping the rest working.

      So I started looking into our options for implementing the aggregation layer in Kubernetes. Besides creating our own CRDs, which is not what we were looking for, the docs suggested two options (note that Kubebuilder/controller-runtime is not intended for this use case and has no support for it):

      • Use kubernetes-incubator/apiserver-builder (now named kubernetes-sigs/apiserver-builder-alpha), which aims to provide a similar pattern to Kubebuilder.
        • However, it seems not to be actively maintained (the latest release was over 2 years ago, there is no Go modules support, and there is a lack of activity in general).
      • Use the sample-apiserver project, which seems to be the base for the apiserver-builder generator, but is more up-to-date.
      • Both options seem to build on top of the apiserver-runtime framework, which, as I understand it, is the equivalent of controller-runtime for regular controllers. Nonetheless, its last commit was almost a year ago.

      I put some effort into trying to make apiserver-builder work, including making the generated project Go-modules aware, but then faced many problems upgrading Kubernetes dependencies to recent versions, so I gave up on that option.

      In this situation, forking sample-apiserver and modifying it to our needs looked like the best way forward.

      This sample project does work, but has very low-level requirements. In particular, it's meant to have access to etcd itself in order to serve the APIService. This option was not in our initial plans, as it would make it harder to run our controller. Nonetheless, I also found that the implemented interface does not necessarily have to be backed by etcd, which opens up the option of using a different storage (e.g. any database), as long as the interface methods are implemented. I decided not to pursue this route just yet, though, since it was out of the initial scope and I was running out of time. For the sake of the experiment, I tweaked the endpoint to use a Kubernetes client to try to obtain the original data from the main API server and then transform it. However, this obviously produced an infinite loop, since the control plane would just redirect such requests back to our APIService.

      The last thing that time allowed me to experiment with was the authorization part, as we need to identify which user produced each request. Even though there is functionality for this, I couldn't manage to make it available to my handler implementation, and wasn't able to identify why. I need to read more docs about how the workflows for Aggregated APIs work; maybe authorization is meant to be resolved by APIServices directly? Sadly, the apiserver-runtime library is not very well documented.

    Similar Projects

    CVE portal for SUSE Rancher products by gmacedo

    Description

    Currently it's a bit difficult for users to quickly see the list of CVEs affecting images in Rancher, RKE2, Harvester and Longhorn releases. Users need to individually look up each CVE in the SUSE CVE database page - https://www.suse.com/security/cve/ . This is not optimal, because those CVE pages are a bit hard to read and contain data for all SLE and BCI products too, making it difficult to easily see only the CVEs affecting the latest release of Rancher, for example. We understand that certain customers are only looking for CVE data for Rancher and not SLE or BCI.

    Goals

    The objective is to create a simple to read and navigate page that contains only CVE data related to Rancher, RKE2, Harvester and Longhorn, where it's easy to search by a CVE ID, an image name or a release version. The page should also provide the raw data as an exportable CSV file.

    It must be an MVP with the minimal amount of effort/time invested, but still providing great value to our users and saving the time that the Rancher Security team currently spends manually sharing such data. It might not be long-lived, as it can be replaced in 2-3 years with a better SUSE-wide solution.

    Resources

    • The page must be simple and easy to read.
    • The UI/UX must be as straightforward as possible with minimal visual noise.
    • The content must be created automatically from the raw data that we already have internally.
    • It must be updated automatically on a daily basis and on ad-hoc runs (when needed).
    • The CVE status must be aligned with VEX.
    • The raw data must be exportable as CSV file.
    • Ideally it will be written in Go or pure Shell script with basic HTML and no external dependencies in CSS or JS.


    Cluster API Provider for Harvester by rcase

    Project Description

    The Cluster API "infrastructure provider" for Harvester, also named CAPHV, makes it possible to use Harvester with Cluster API. This enables people and organisations to create Kubernetes clusters running on VMs created by Harvester using a declarative spec.

    The project has been bootstrapped in HackWeek 23, and its code is available here.

    Work done in HackWeek 2023

    • Have an early working version of the provider available on Rancher Sandbox: DONE
    • Demonstrated the created cluster can be imported using Rancher Turtles: DONE
    • Stretch goal - demonstrate using the new provider with CAPRKE2: DONE and the templates are available on the repo

    Goals for HackWeek 2024

    • Add support for ClusterClass
    • Add e2e testing
    • Add more Unit Tests
    • Improve Status Conditions to reflect current state of Infrastructure
    • Improve CI (some bugs for release creation)
    • Testing with newer Harvester version (v1.3.X and v1.4.X)
    • Due to the length and complexity of the templates, maybe package some of them as Helm Charts.
    • Other improvement suggestions are welcome!

    DONE in HackWeek 24:

    Thanks to @isim and Dominic Giebert for their contributions!

    Resources

    Looking for help from anyone interested in Cluster API (CAPI) or who wants to learn more about Harvester.

    This will be an infrastructure provider for Cluster API. Some background reading for the CAPI aspect:


    Longhorn UI Extension (POC) by yiya.chen

    Description

    The goal is to create a Longhorn UI extension within Rancher using existing resources.
    Longhorn’s UI is built using React, while Rancher’s UI extensions are built using Vue. Developers will explore different approaches to integrate and extend Longhorn’s UI within Rancher’s Vue-based ecosystem, aiming to create a seamless, functional UI extension.

    Goals

    • Build a Longhorn UI extension (look and feel)
    • Support theme switching to align with Rancher’s UI

    Results

    • https://github.com/a110605/longhorn-hackday
    • https://github.com/a110605/longhorn-ui/tree/darkmode
    • https://github.com/houhoucoop/hackweek/tree/main/hackweek24

    Resources

    • Longhorn UI: https://github.com/longhorn/longhorn-ui
    • Rancher UI Extension: https://extensions.rancher.io/extensions/next/home
    • darkreader: https://www.npmjs.com/package/darkreader
    • veaury: https://github.com/gloriasoft/veaury
    • module federation: https://webpack.js.org/concepts/module-federation/


    A CLI for Harvester by mohamed.belgaied

    Harvester does not officially come with a CLI tool; the user is supposed to interact with Harvester mostly through the UI. Though it is theoretically possible to use kubectl to interact with Harvester, the manipulation of KubeVirt YAML objects is absolutely not user friendly. Inspired by tools like Multipass from Canonical to easily and rapidly create one or multiple VMs, I began the development of Harvester CLI. Currently it works, but Harvester CLI needs some love to be up-to-date with Harvester v1.0.2, and needs some bug fixes and improvements as well.

    Project Description

    Harvester CLI is a command line interface tool written in Go, designed to simplify interfacing with a Harvester cluster as a user. It is especially useful for testing purposes as you can easily and rapidly create VMs in Harvester by providing a simple command such as: harvester vm create my-vm --count 5 to create 5 VMs named my-vm-01 to my-vm-05.

    asciicast

    Harvester CLI is functional but needs a number of improvements: up-to-date functionality with Harvester v1.0.2 (some minor issues right now), modifying the default behaviour to create an openSUSE VM instead of an Ubuntu VM, solving some bugs, etc.

    Github Repo for Harvester CLI: https://github.com/belgaied2/harvester-cli

    Done in previous Hackweeks

    • Create a Github actions pipeline to automatically integrate Harvester CLI to Homebrew repositories: DONE
    • Automatically package Harvester CLI as openSUSE / Red Hat RPMs or DEBs: DONE

    Goal for this Hackweek

    The goal for this Hackweek is to bring Harvester CLI up-to-speed with latest Harvester versions (v1.3.X and v1.4.X), and improve the code quality as well as implement some simple features and bug fixes.

    Some nice additions might be:

    • Improve handling of namespaced objects
    • Add features, such as network management or Load Balancer creation?
    • Add more unit tests and, why not, e2e tests
    • Improve CI
    • Improve the overall code quality
    • Test the program and create issues for it

    Issue list is here: https://github.com/belgaied2/harvester-cli/issues

    Resources

    The project is written in Go and uses client-go, the Kubernetes Go client library, to communicate with the Harvester API (which is in fact Kubernetes). Welcome contributions include:

    • Testing it and creating issues
    • Documentation
    • Go code improvement

    What you might learn

    Harvester CLI might be interesting to you if you want to learn more about:

    • GitHub Actions
    • Harvester as a SUSE Product
    • Go programming language
    • Kubernetes API


    Rancher microfrontend extensions by ftorchia

    Description

    Rancher UI Extensions allow users, developers, partners, and customers to extend and enhance the Rancher UI. Extensions are Helm charts that can only be installed once into a cluster. The charts contain a UI built package that is downloaded and linked to the Host UI at runtime; this means that the extension pkg needs to be implemented using the same technology and have the same APIs as Rancher UI.

    Goals

    We want to create a new type of Rancher extension, based on the microfrontend pattern. The extension is served from a Docker container in the k8s clusters and embedded in the host UI; this would allow us to create extensions unrelated to the Rancher UI architecture, in any technology.

    Non Goals

    We want to apply the microfrontend pattern to the product-level extensions; we don't want to apply it to cluster-level extensions.

    Resources

    rancher-extension-microfrontend, Rancher extensions


    Mammuthus - The NFS-Ganesha inside Kubernetes controller by vcheng

    Description

    As a user-space NFS provider, NFS-Ganesha is widely used by several projects, e.g. Longhorn and Rook. We want to create a Kubernetes controller to make configuring NFS-Ganesha easy. This controller will let users configure NFS-Ganesha through different backends like VFS/CephFS.

    Goals

    1. Create NFS-Ganesha Package on OBS: nfs-ganesha5, nfs-ganesha6
    2. Create NFS-Ganesha Container Image on OBS: Image
    3. Create a Kubernetes controller for NFS-Ganesha and support the VFS configuration on demand. Mammuthus

    Resources

    NFS-Ganesha


    Multi-pod, autoscalable Elixir application in Kubernetes using K8s resources by socon

    Description

    Elixir / Erlang use their own solutions to create clusters that work together. Kubernetes provides its own orchestration. Due to the nature of the BEAM, it looks like a very promising technology for applications that run in Kubernetes and need to be always on, specifically if they are created as web pages using Phoenix.

    Goals

    • Investigate and provide solutions that work in Phoenix LiveView using Kubernetes resources, so a multi-pod application can be used
    • Provide an end-to-end example that creates and deploys a container from source code.

    Resources

    https://github.com/dwyl/phoenix-liveview-counter-tutorial
    https://github.com/propedeutica/elixir-k8s-counter


    Extending KubeVirtBMC's capability by adding Redfish support by zchang

    Description

    In Hack Week 23, we delivered a project called KubeBMC (renamed to KubeVirtBMC now), which brings the good old-fashioned IPMI ways to manage virtual machines running on KubeVirt-powered clusters. This opens the possibility of integrating existing bare-metal provisioning solutions like Tinkerbell with virtualized environments. We even received an inquiry about transferring the project to the KubeVirt organization. So, a proposal was filed, which was accepted by the KubeVirt community, and the project was renamed after that. We have many tasks on our to-do list. Some of them are administrative tasks; some are feature-related. One of the most requested features is Redfish support.

    Goals

    Extend the capability of KubeVirtBMC by adding Redfish support. Currently, the virtbmc component only exposes IPMI endpoints. We need to implement another simulator to expose Redfish endpoints, as we did with the IPMI module. We aim at a basic set of functionalities:

    • Power management
    • Boot device selection
    • Virtual media mount (this one is not so basic)

    Resources


    Rancher/k8s Trouble-Maker by tonyhansen

    Project Description

    When studying for my RHCSA, I found trouble-maker, which is a program that breaks a Linux OS and requires you to fix it. I want to create something similar for Rancher/k8s that can allow for troubleshooting an unknown environment.

    Goal for this Hackweek

    • Create a basic framework for creating Rancher/k8s cluster lab environments as needed for the Break/Fix
    • Create at least 5 modules that can be applied to the cluster and require troubleshooting

    Resources

    https://github.com/rancher/terraform-provider-rancher2
    https://github.com/rancher/tf-rancher-up