Description

In Hack Week 23, we delivered a project called KubeBMC (since renamed to KubeVirtBMC), which brings the good old-fashioned IPMI way of managing servers to virtual machines running on KubeVirt-powered clusters. This opens up the possibility of integrating existing bare-metal provisioning solutions, such as Tinkerbell, with virtualized environments. We even received an inquiry about transferring the project to the KubeVirt organization, so a proposal was filed, the KubeVirt community accepted it, and the project was renamed accordingly. We still have many tasks on our to-do list: some are administrative, others are feature-related. One of the most requested features is Redfish support.

Goals

Extend the capability of KubeVirtBMC by adding Redfish support. Currently, the virtbmc component only exposes IPMI endpoints. We need to implement another simulator that exposes Redfish endpoints, as we did with the IPMI module. We are aiming for a basic set of functionality (a rough sketch of a power endpoint follows the list below):

  • Power management
  • Boot device selection
  • Virtual media mount (this one is not so basic)
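
To make the power-management goal concrete, here is a minimal sketch of a Redfish ComputerSystem.Reset handler in Go. It is illustrative only: the startVM/stopVM stubs, the hard-coded system ID and VM name, and the port are assumptions, not the actual virtbmc API; in virtbmc these operations would go through the KubeVirt subresource API.

```go
package main

import (
	"context"
	"encoding/json"
	"log"
	"net/http"
)

// resetRequest models the body of a Redfish ComputerSystem.Reset action.
type resetRequest struct {
	ResetType string `json:"ResetType"` // "On", "ForceOff", "GracefulShutdown", ...
}

// startVM and stopVM are placeholders for calls to the KubeVirt
// VirtualMachine "start"/"stop" subresources.
func startVM(ctx context.Context, name string) error { log.Printf("starting %s", name); return nil }
func stopVM(ctx context.Context, name string) error  { log.Printf("stopping %s", name); return nil }

// handleReset maps Redfish reset types onto VM power operations.
func handleReset(w http.ResponseWriter, r *http.Request) {
	var req resetRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	vm := "my-vm" // would be derived from the request path / BMC identity
	var err error
	switch req.ResetType {
	case "On":
		err = startVM(r.Context(), vm)
	case "ForceOff", "GracefulShutdown":
		err = stopVM(r.Context(), vm)
	default:
		http.Error(w, "unsupported ResetType", http.StatusBadRequest)
		return
	}
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	http.HandleFunc("/redfish/v1/Systems/1/Actions/ComputerSystem.Reset", handleReset)
	log.Fatal(http.ListenAndServe(":8000", nil))
}
```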

Resources

Looking for hackers with the skills:

kubernetes redfish bare-metal virtualization

This project is part of:

Hack Week 24, Hack Week 23

Activity

  • 14 days ago: cooper.tseng liked this project.
  • 16 days ago: kieferchang liked this project.
  • 29 days ago: zchang started this project.
  • about 1 month ago: vcheng liked this project.
  • about 1 month ago: zchang added keyword "bare-metal" to this project.
  • about 1 month ago: zchang added keyword "virtualization" to this project.
  • about 1 month ago: zchang added keyword "kubernetes" to this project.
  • about 1 month ago: zchang added keyword "redfish" to this project.
  • about 1 month ago: zchang originated this project.

  Comments

    • zchang
      6 days ago

      Here's the proof of the work that has been done, though the results are still considered immature. I will continue polishing them.

    Similar Projects

    ddflare: (Dynamic)DNS management via Cloudflare API in Kubernetes by fgiudici

    Description

    ddflare is a project started a couple of weeks ago to provide DDNS management using the Cloudflare v4 APIs: Cloudflare offers management via APIs and access tokens, so it is possible to register a domain and implement a DynDNS client without any external service other than their API.

    Since ddflare allows setting any IP for any domain name, one could manage multiple A and ALIAS records (a minimal update call is sketched below). Wouldn't it be cool to allow full DNS control from the project and integrate it with your Kubernetes cluster?
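
    For illustration, updating an existing A record through the Cloudflare v4 API is a single authenticated PUT. This is a hedged sketch, not ddflare's code: the zone ID, record ID, and token are assumed to be already known (they can be looked up via the list endpoints), and the token needs the Zone.DNS edit permission.

    ```go
    package main

    import (
        "bytes"
        "fmt"
        "net/http"
        "os"
    )

    // updateRecord updates an existing A record via the Cloudflare v4 API.
    func updateRecord(zoneID, recordID, name, ip, token string) error {
        url := fmt.Sprintf("https://api.cloudflare.com/client/v4/zones/%s/dns_records/%s", zoneID, recordID)
        body := fmt.Sprintf(`{"type":"A","name":"%s","content":"%s","ttl":120}`, name, ip)
        req, err := http.NewRequest(http.MethodPut, url, bytes.NewBufferString(body))
        if err != nil {
            return err
        }
        req.Header.Set("Authorization", "Bearer "+token)
        req.Header.Set("Content-Type", "application/json")
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("cloudflare API returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        // Illustrative values; a DDNS client would detect the public IP itself.
        err := updateRecord(os.Getenv("CF_ZONE_ID"), os.Getenv("CF_RECORD_ID"),
            "home.example.com", "203.0.113.7", os.Getenv("CF_API_TOKEN"))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
    ```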

    Goals

    Main goals are:

    1. add a containerized image for ddflare
    2. extend ddflare to add and remove DNS records (not just update existing ones)
    3. add documentation, also covering a sample pod deployment for Kubernetes
    4. write a ddflare Kubernetes operator to enable domain management via Kubernetes resources (using kubebuilder; a reconciler sketch follows below)

    Available tasks and improvements are tracked on the ddflare GitHub.
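
    For goal 4, the kubebuilder scaffolding centers on a reconciler. A minimal sketch, assuming a hypothetical DNSRecord CRD (the CRD type itself and the calls into ddflare are omitted):

    ```go
    package controller

    import (
        "context"

        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/client"
        "sigs.k8s.io/controller-runtime/pkg/log"
    )

    // DNSRecordReconciler reconciles the hypothetical DNSRecord resources.
    type DNSRecordReconciler struct {
        client.Client
    }

    func (r *DNSRecordReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
        logger := log.FromContext(ctx)
        // A real reconciler would fetch the DNSRecord object here, create,
        // update, or delete the matching Cloudflare record through ddflare,
        // and then update the resource status.
        logger.Info("reconciling DNS record", "name", req.NamespacedName)
        return ctrl.Result{}, nil
    }
    ```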

    Resources

    • https://github.com/fgiudici/ddflare
    • https://developers.cloudflare.com/api/
    • https://book.kubebuilder.io


    Multi-pod, autoscalable Elixir application in Kubernetes using K8s resources by socon

    Description

    Elixir / Erlang use their own solutions to create clusters of nodes that work together, while Kubernetes provides its own orchestration. Due to the nature of the BEAM, it looks like a very promising technology for applications that run in Kubernetes and are required to be always on, specifically if they are created as web pages using Phoenix.

    Goals

    • Investigate and provide solutions that make Phoenix LiveView work using Kubernetes resources, so that a multi-pod application can be used
    • Provide an end-to-end example that creates and deploys a container from source code.

    Resources

    • https://github.com/dwyl/phoenix-liveview-counter-tutorial
    • https://github.com/propedeutica/elixir-k8s-counter


    Learn enough Golang and hack on CoreDNS by jkuzilek

    Description

    I'm implementing a split-horizon DNS for my home Kubernetes cluster to be able to access my internal (and external) services over the local network through public domains. I managed to build a PoC with the k8s_gateway plugin for CoreDNS. However, I soon found out that it responds with the IPs of all Gateways assigned to HTTPRoutes, publishing the public IPs as well as the internal LoadBalancer ones.

    To remedy this issue, a simple filtering mechanism has to be implemented; a minimal sketch of such a filter follows.
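
    This is a hedged sketch of the filtering idea, assuming the plugin hands over Gateway objects; the function and allow-list are hypothetical names, not the actual k8s_gateway code:

    ```go
    package main

    import (
        "fmt"

        gatewayv1 "sigs.k8s.io/gateway-api/apis/v1"
    )

    // filterGateways keeps only Gateways whose GatewayClass is allow-listed,
    // so internal-only classes can be the sole source of answered IPs.
    func filterGateways(gws []gatewayv1.Gateway, allowed map[string]bool) []gatewayv1.Gateway {
        var out []gatewayv1.Gateway
        for _, gw := range gws {
            if allowed[string(gw.Spec.GatewayClassName)] {
                out = append(out, gw)
            }
        }
        return out
    }

    func main() {
        gws := []gatewayv1.Gateway{
            {Spec: gatewayv1.GatewaySpec{GatewayClassName: "internal"}},
            {Spec: gatewayv1.GatewaySpec{GatewayClassName: "public"}},
        }
        fmt.Println(len(filterGateways(gws, map[string]bool{"internal": true}))) // prints 1
    }
    ```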

    Goals

    • Learn an acceptable amount of Golang
    • Implement GatewayClass (and IngressClass) filtering for k8s_gateway
    • Deploy on homelab cluster
    • Profit?

    Resources

    EDIT: Feature mostly complete. An unfinished PR lies here. Successfully tested and working on the homelab cluster.


    Introducing "Bottles": A Proof of Concept for Multi-Version CRD Management in Kubernetes by aruiz

    Description

    As we delve deeper into the complexities of managing multiple CRD versions within a single Kubernetes cluster, I want to introduce "Bottles", a proof of concept that aims to address these challenges.

    Bottles proposes a novel approach to isolating and deploying different CRD versions in self-contained environments, allowing greater flexibility and efficiency in managing diverse workloads.

    Goals

    • Evaluate feasibility: determine whether this approach is technically viable and identify possible obstacles and limitations.
    • Reuse existing technology: leverage existing products whenever possible, e.g. build on top of Kubewarden as the admission controller.
    • Focus on Rancher's use case: the ultimate goal is to use this approach to solve Rancher users' needs.

    Resources

    Core concepts:

    • ConfigMaps: Bottles could be defined and configured using ConfigMaps.
    • Admission Controller: An admission controller will detect "bottled" CRDs being installed and replace the resource name used to store them (a minimal webhook sketch follows this list).
    • Aggregated API Server: By analyzing the author of a request, the aggregated API server will determine the correct bottle and route the request accordingly, making it transparent for the user.
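
    To make the admission-controller concept concrete, here is a minimal mutating-webhook sketch in plain Go (in practice Kubewarden would take this role, per the goals). Everything here is illustrative: the patch, the "bottle-a" name, and indeed whether the API server accepts a rename at admission time are exactly the feasibility questions the PoC must answer.

    ```go
    package main

    import (
        "encoding/json"
        "log"
        "net/http"

        admissionv1 "k8s.io/api/admission/v1"
    )

    // mutateCRD rewrites the stored name of a "bottled" CRD. The JSON patch
    // and the bottle name are placeholders; a real controller would resolve
    // the bottle from the ConfigMap that defines it.
    func mutateCRD(w http.ResponseWriter, r *http.Request) {
        var review admissionv1.AdmissionReview
        if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        patch := []byte(`[{"op":"replace","path":"/metadata/name","value":"widgets.bottle-a.example.com"}]`)
        pt := admissionv1.PatchTypeJSONPatch
        review.Response = &admissionv1.AdmissionResponse{
            UID:       review.Request.UID,
            Allowed:   true,
            Patch:     patch,
            PatchType: &pt,
        }
        json.NewEncoder(w).Encode(&review)
    }

    func main() {
        http.HandleFunc("/mutate", mutateCRD)
        // Admission webhooks must serve TLS; certificate setup is omitted.
        log.Fatal(http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil))
    }
    ```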


    Integrate Backstage with Rancher Manager by nwmacd

    Description

    Backstage (backstage.io) is an open-source CNCF project that allows you to create your own developer portal. There are many plugins for Backstage.

    This could be a great complement to Rancher Manager.

    Goals

    Learn and experiment with Backstage and look at how it could be integrated with Rancher Manager. The goal is to have some kind of integration completed during this Hack Week.

    Progress

    Screenshot of the home page at the end of Hack Week:

    [Screenshot: Backstage home page]

    Day One

    • Got Backstage running locally; understood its configuration with HTTPS
    • Got Backstage embedded in an IFRAME inside of Rancher
    • Added content into the software catalog (see: https://backstage.io/docs/features/techdocs/getting-started/)
    • Understood more about the entity model

    Day Two

    • Connected Backstage to the Rancher local cluster and configured the Kubernetes plugin.
    • Created Rancher theme to make the light theme more consistent with Rancher


    Days Three and Four

    • Created two backend plugins for Backstage:

      1. Catalog Entity Provider - this imports users from Rancher into Backstage
      2. Auth Provider - uses the proxied sign-in pattern: it checks the Rancher session cookie, uses that to authenticate the user with Rancher, and then logs them into Backstage by connecting this to the User entity imported by the Catalog Entity Provider plugin.
    • With this in place, you can single sign-on between Rancher and Backstage when Backstage is deployed within Rancher. Note that this currently only works when running locally for development.


    Day Five

    • Started to build out a production deployment for all of the above
    • Made some progress, but hit issues with authentication and proxying when running proxied within Rancher; this needs further investigation


    iSCSI integration in Warewulf by ncuralli

    Description

    This Hackweek project aims to enhance Warewulf’s capabilities by adding iSCSI support, enabling both remote boot and flexible mounting of iSCSI devices within the filesystem. The project, which already handles NFS, DHCP, and iPXE, will be extended to offer iSCSI services as well, centralizing all necessary services for provisioning and booting cluster nodes.

    Goals

    • iSCSI Boot Option: Enable nodes to boot directly from iSCSI volumes
    • Mounting iSCSI Volumes within the Filesystem: Implement support for mounting iSCSI devices at various points within the filesystem

    Resources

    https://warewulf.org/

    Steps

    • add a generic framework to wwctl to handle remote resources/filesystems [ ] (an interface sketch follows this list)
    • add iSCSI handling to wwctl configure [ ]
    • add iSCSI to dracut files [ ]
    • test it [ ]
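
    For the first step, the generic framework could start as a single interface that the iSCSI code (and later NFS or other backends) implements. A hedged sketch with hypothetical names, not current wwctl code:

    ```go
    package main

    import (
        "context"
        "fmt"
        "os/exec"
    )

    // RemoteResource is a hypothetical abstraction wwctl could use for
    // anything that must be attached before it can be mounted or booted from.
    type RemoteResource interface {
        Attach(ctx context.Context) (devicePath string, err error)
        Detach(ctx context.Context) error
    }

    // ISCSIVolume implements RemoteResource by shelling out to iscsiadm.
    type ISCSIVolume struct {
        Portal string // e.g. "10.0.0.5:3260"
        IQN    string // e.g. "iqn.2024-01.org.warewulf:node1"
        LUN    int
    }

    func (v *ISCSIVolume) Attach(ctx context.Context) (string, error) {
        out, err := exec.CommandContext(ctx, "iscsiadm",
            "-m", "node", "-T", v.IQN, "-p", v.Portal, "--login").CombinedOutput()
        if err != nil {
            return "", fmt.Errorf("iscsiadm login failed: %w: %s", err, out)
        }
        // udev exposes the attached LUN under a stable by-path name.
        return fmt.Sprintf("/dev/disk/by-path/ip-%s-iscsi-%s-lun-%d", v.Portal, v.IQN, v.LUN), nil
    }

    func (v *ISCSIVolume) Detach(ctx context.Context) error {
        _, err := exec.CommandContext(ctx, "iscsiadm",
            "-m", "node", "-T", v.IQN, "-p", v.Portal, "--logout").CombinedOutput()
        return err
    }

    func main() {
        var r RemoteResource = &ISCSIVolume{Portal: "10.0.0.5:3260", IQN: "iqn.2024-01.org.warewulf:node1"}
        if dev, err := r.Attach(context.Background()); err == nil {
            fmt.Println("attached at", dev)
        }
    }
    ```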


    SUSE KVM Best Practices by roseswe

    Description

    SUSE best practices around KVM, especially for SAP workloads. An early Google presentation has already been put together from various customer projects and SUSE sources.

    Goals

    Complete a presentation that we can reuse in SUSE Consulting projects

    Resources

    KVM (virt-manager) images

    SUSE/SAP/KVM Best Practices

    • https://documentation.suse.com/en-us/sles/15-SP6/single-html/SLES-virtualization/
    • SAP Note 1522993 - "Linux: SAP on SUSE KVM - Kernel-based Virtual Machine"
    • SAP Note 2284516 - "SAP HANA virtualized on SUSE Linux Enterprise hypervisors" - https://me.sap.com/notes/2284516
    • SUSECon24: [TUTORIAL-1253] Virtualizing SAP workloads with SUSE KVM - https://youtu.be/PTkpRVpX2PM
    • SUSE Best Practices for SAP HANA on KVM - https://documentation.suse.com/sbp/sap-15/html/SBP-SLES4SAP-HANAonKVM-SLES15SP4/index.html


    Contribute to terraform-provider-libvirt by pinvernizzi

    Description

    The SUSE Manager (SUMA) teams' main tool for infrastructure automation, Sumaform, largely relies on terraform-provider-libvirt. That provider is also widely used by other teams, both inside and outside SUSE.

    It would be good to help the maintainers of this project and give back to the community around it, after all the amazing work that has already been done.

    If you're interested in infrastructure automation, Terraform, virtualization, tooling development, Go, or any combination of these, it is also a good chance to learn a bit about them all by getting your hands on an interesting, complex, real-use-case project.

    Goals

    • Get more familiar with Terraform provider development and the libvirt bindings in Go (a minimal resource skeleton is sketched below)
    • Solve some issues and/or implement some features
    • Get in touch with the community around the project
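
    For orientation, Terraform resources built on terraform-plugin-sdk/v2 all follow the same shape. A minimal sketch with a hypothetical resource and fields; the actual libvirt calls are omitted, and this is not the provider's real schema:

    ```go
    package provider

    import (
        "context"

        "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
        "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
    )

    // resourceExampleVolume shows the shape of a Terraform resource; the
    // name and fields are hypothetical.
    func resourceExampleVolume() *schema.Resource {
        return &schema.Resource{
            CreateContext: resourceExampleVolumeCreate,
            ReadContext:   resourceExampleVolumeRead,
            DeleteContext: resourceExampleVolumeDelete,
            Schema: map[string]*schema.Schema{
                "name": {Type: schema.TypeString, Required: true, ForceNew: true},
                "size": {Type: schema.TypeInt, Optional: true, ForceNew: true},
            },
        }
    }

    func resourceExampleVolumeCreate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
        // A real implementation would call libvirt here to create the volume.
        d.SetId(d.Get("name").(string))
        return nil
    }

    func resourceExampleVolumeRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
        return nil // would refresh state from libvirt
    }

    func resourceExampleVolumeDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
        d.SetId("")
        return nil // would delete the volume in libvirt
    }
    ```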

    Resources


    Harvester Packer Plugin by mrohrich

    Description

    Hashicorp Packer is an automation tool that allows automatic, customized VM image builds, assuming the user has a virtualization tool at their disposal. To make use of Harvester as such a virtualization tool, a plugin for Packer needs to be written. With this plugin, users could make use of their Harvester cluster to build customized VM images, something they likely want to do if they have a Harvester cluster.

    Goals

    Write a Packer plugin bridging the gap between Harvester and Packer. Users should be able to create customized VM images using Packer and Harvester with no need to utilize another virtualization platform. A skeleton of the builder interface such a plugin must implement is sketched below.
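
    Every Packer builder implements the same small interface from the packer-plugin-sdk. A skeleton sketch, with the Harvester API calls omitted and all messages placeholders:

    ```go
    package main

    import (
        "context"
        "log"

        "github.com/hashicorp/hcl/v2/hcldec"
        "github.com/hashicorp/packer-plugin-sdk/packer"
        "github.com/hashicorp/packer-plugin-sdk/plugin"
    )

    // Builder would hold decoded configuration (Harvester URL, kubeconfig, ...).
    type Builder struct{}

    func (b *Builder) ConfigSpec() hcldec.ObjectSpec { return nil }

    // Prepare decodes and validates the builder configuration.
    func (b *Builder) Prepare(raws ...interface{}) ([]string, []string, error) {
        return nil, nil, nil
    }

    // Run would create a VM in Harvester, provision it, export the disk
    // image, and return it as a packer.Artifact.
    func (b *Builder) Run(ctx context.Context, ui packer.Ui, hook packer.Hook) (packer.Artifact, error) {
        ui.Say("building image on Harvester (sketch)")
        return nil, nil
    }

    func main() {
        pps := plugin.NewSet()
        pps.RegisterBuilder(plugin.DEFAULT_NAME, &Builder{})
        if err := pps.Run(); err != nil {
            log.Fatal(err)
        }
    }
    ```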

    Resources

    Hashicorp documentation for building custom plugins for Packer https://developer.hashicorp.com/packer/docs/plugins/creation/custom-builders

    Source repository of the Harvester Packer plugin https://github.com/m-ildefons/harvester-packer-plugin