Rust is a systems programming language from Mozilla. It has stronger safety guarantees than Go, and is well suited to working on cloud native infrastructure.

Most Kubernetes development happens in Go, and it would be great to have something like https://github.com/kubernetes/client-go in Rust.
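
For a rough idea of what such a client could look like, here is a minimal sketch of listing pods, modelled on the community kube and k8s-openapi crates (illustrative only, not part of this project):

```rust
// Sketch only: list pods in the "default" namespace.
// Assumes the kube, k8s-openapi, tokio and anyhow crates.
use k8s_openapi::api::core::v1::Pod;
use kube::{api::ListParams, Api, Client};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Build a client from the local kubeconfig or in-cluster configuration.
    let client = Client::try_default().await?;
    let pods: Api<Pod> = Api::namespaced(client, "default");

    for pod in pods.list(&ListParams::default()).await? {
        println!("{}", pod.metadata.name.unwrap_or_default());
    }
    Ok(())
}
```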

Looking for hackers with the skills:

rust kubernetes

This project is part of:

Hack Week 15

Activity

  • almost 9 years ago: robdaemon added keyword "rust" to this project.
  • almost 9 years ago: robdaemon added keyword "kubernetes" to this project.
  • almost 9 years ago: robdaemon originated this project.


    Similar Projects

    Build a terminal user-interface (TUI) for Agama by IGonzalezSosa

    Description

    Officially, Agama offers two different user interfaces. On the one hand, we have the web-based interface, which is the one you see when you run the installation media. On the other hand, we have a command-line interface. In both cases, you can use them from a remote system, either through a browser or through the agama CLI.

    We would expect most of the cases to be covered by this approach. However, if you cannot use the web-based interface and, for some reason, you cannot access the system through the network, your only option is to use the CLI. This interface offers a mechanism to modify Agama's configuration using an editor (vim, by default), but you might want a more user-friendly way.

    Goals

    The main goal of this project is to build a minimal terminal user-interface for Agama. This interface will allow the user to install the system by providing just a few settings (selecting a product, a storage device and a user password). It should then report the installation progress.

    Resources

    • https://agama-project.github.io/
    • https://ratatui.rs/
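
    For a rough idea of the building blocks involved, a minimal ratatui draw/event loop might look like this (hedged sketch assuming the ratatui and crossterm crates; the screen content is made up and this is not Agama code):

    ```rust
    // Minimal sketch: draw a static "installer" screen and quit on 'q'.
    use std::io;

    use crossterm::{
        event::{self, Event, KeyCode},
        terminal::{disable_raw_mode, enable_raw_mode, EnterAlternateScreen, LeaveAlternateScreen},
        ExecutableCommand,
    };
    use ratatui::{
        backend::CrosstermBackend,
        widgets::{Block, Borders, Paragraph},
        Terminal,
    };

    fn main() -> io::Result<()> {
        enable_raw_mode()?;
        io::stdout().execute(EnterAlternateScreen)?;
        let mut terminal = Terminal::new(CrosstermBackend::new(io::stdout()))?;

        loop {
            terminal.draw(|frame| {
                let block = Block::default().title("Agama installer").borders(Borders::ALL);
                let body = Paragraph::new("Product: <select>\nDisk: <select>\nRoot password: <set>\n\nPress 'q' to quit")
                    .block(block);
                frame.render_widget(body, frame.size());
            })?;

            // A real TUI would also handle selection and kick off the installation here.
            if let Event::Key(key) = event::read()? {
                if key.code == KeyCode::Char('q') {
                    break;
                }
            }
        }

        disable_raw_mode()?;
        io::stdout().execute(LeaveAlternateScreen)?;
        Ok(())
    }
    ```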

    Conclusions

    We have summarized our conclusions in a pull request. It includes screenshots ;-) We did not implement all the features we wanted, but we learned a lot during the process. We know that, if needed, we could write a TUI for Agama, and we have an idea about how to build it. Good enough.


    Learn a bit of embedded programming with Rust in a micro:bit v2 by aplanas

    Description

    micro:bit is a small single-board computer with an ARM Cortex-M4 with the FPU extension, a very constrained amount of memory, and a bunch of sensors and LEDs.

    The board is very well documented, with schematics and code for all the features available, so it is an excellent platform for learning embedded programming.

    Rust is a systems programming language that can generate ARM code, and it has crates (libraries) to access the micro:bit hardware. There is plenty of documentation about how to write small programs that will run on the micro:bit.

    Goals

    Start learning about embedded programming in Rust, and maybe write some code for the small KS4036F robot car from Keyestudio.

    Resources

    Diary

    Day 1

    • Start reading https://mb2.implrust.com/abstraction-layers.html
    • Prepare the dev environment (cross compiler, probe-rs)
    • Flash the first code onto the board (a blinky LED); see the sketch after this list
    • Checking differences between BSP and HAL
    • Compile and install a more complex example, with stack protection
    • Reading about the simplicity of xtask as an alias for workspace task execution
    • Reading the C++ code of the official micro:bit libraries. They have a font!
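
    For reference, the blinky mentioned above boils down to roughly this (sketch assuming the microbit-v2 BSP, cortex-m-rt and panic-halt crates, with embedded-hal 0.2 traits via the HAL prelude):

    ```rust
    #![no_std]
    #![no_main]

    use cortex_m_rt::entry;
    use microbit::{board::Board, hal::{prelude::*, timer::Timer}};
    use panic_halt as _;

    #[entry]
    fn main() -> ! {
        let mut board = Board::take().unwrap();
        let mut timer = Timer::new(board.TIMER0);

        // On the 5x5 LED matrix an LED lights when its row is high and its column is low.
        board.display_pins.col1.set_low().unwrap();

        loop {
            board.display_pins.row1.set_high().unwrap();
            timer.delay_ms(500_u16);
            board.display_pins.row1.set_low().unwrap();
            timer.delay_ms(500_u16);
        }
    }
    ```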

    Day 2

    • There are multiple BSPs for the micro:bit. One of them uses async code for non-blocking operations
    • Download and study a bit of the API for microbit-v2, the official nRF crate
    • Take a look at the KS4036F programming; it seems that the communication is multiplexed via I2C
    • The motor speed can be selected via PWM (pulse-width modulation): keep the pin powered for a larger share of each cycle (a higher duty cycle) and the speed increases
    • Scrolling some text
    • Debug by printing! defmt is a crate that can be used with probe-rs to emit logs (see the sketch after this list)
    • Start reading input from the board: buttons
    • The logo can be touched and detected as a floating-point value
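
    Since the defmt workflow came up above, here is what a minimal setup looks like (hedged sketch assuming the defmt, defmt-rtt, panic-probe, cortex-m and cortex-m-rt crates, with the logs read by probe-rs):

    ```rust
    #![no_std]
    #![no_main]

    use cortex_m_rt::entry;
    use defmt_rtt as _; // RTT transport that probe-rs reads the log frames from
    use panic_probe as _; // panic handler that reports panics through defmt

    #[entry]
    fn main() -> ! {
        defmt::info!("board is up");

        let presses: u32 = 3; // placeholder value, e.g. a button-press counter
        defmt::debug!("button A pressed {} times", presses);

        loop {
            cortex_m::asm::wfi(); // sleep until the next interrupt
        }
    }
    ```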

    Day 3

    • A bit confused about how to read the float value from a pin


    Exploring Rust's potential: from basics to security by sferracci

    Description

    This project aims to conduct a focused investigation and practical application of the Rust programming language, with a specific emphasis on its security model. A key component will be identifying and understanding the most common vulnerabilities that can be found in Rust code.

    Goals

    Achieve a beginner/intermediate level of proficiency in writing Rust code. This will be measured by trying to solve LeetCode problems focusing on common data structures and algorithms. Study Rust vulnerabilities and learn best practices to avoid them.
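
    As a flavour of what "vulnerabilities in Rust code" can mean in practice, here is a hypothetical snippet (not from the project) contrasting a bounds-checked access with an unchecked one:

    ```rust
    // Illustrative only: the classic out-of-bounds pattern that `unsafe` permits.
    fn main() {
        let v = vec![1u32, 2, 3];

        // Safe Rust: bounds-checked access returns Option, no undefined behaviour.
        assert_eq!(v.get(10), None);

        // Unsafe Rust: the caller promises the index is in bounds. Breaking that
        // promise is undefined behaviour (an out-of-bounds read), so auditing
        // `unsafe` blocks like this one is a big part of Rust security work.
        // let oob = unsafe { *v.get_unchecked(10) }; // UB -- kept commented out
    }
    ```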

    Resources

    Rust book: https://doc.rust-lang.org/book/


    OpenPlatform Self-Service Portal by tmuntan1

    Description

    In SUSE IT, we developed an internal developer platform for our engineers using SUSE technologies such as RKE2, SUSE Virtualization, and Rancher. While it works well for our existing users, the onboarding process could be better.

    To improve our customer experience, I would like to build a self-service portal to make it easy for people to accomplish common actions. To get started, I would have the portal create Jira SD tickets for our customers to have better information in our tickets, but eventually I want to add automation to reduce our workload.

    Goals

    • Build a frontend website (Angular) that helps customers create Jira SD tickets.
    • Build a backend (Rust with Axum) that does all the hard work for the frontend; a minimal sketch of such an endpoint follows below.
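
    A minimal sketch of such an endpoint (assuming the axum, tokio and serde crates; the route and payload shape are hypothetical, not the actual portal API):

    ```rust
    use axum::{routing::post, Json, Router};
    use serde::{Deserialize, Serialize};

    #[derive(Deserialize)]
    struct TicketRequest {
        summary: String,
        description: String,
    }

    #[derive(Serialize)]
    struct TicketResponse {
        key: String,
    }

    // In the real portal this handler would call the Jira Service Desk REST API;
    // here it just echoes a fake ticket key back to the frontend.
    async fn create_ticket(Json(req): Json<TicketRequest>) -> Json<TicketResponse> {
        let _ = req.description;
        Json(TicketResponse {
            key: format!("SD-0 ({})", req.summary),
        })
    }

    #[tokio::main]
    async fn main() {
        let app = Router::new().route("/tickets", post(create_ticket));
        let listener = tokio::net::TcpListener::bind("0.0.0.0:8080").await.unwrap();
        axum::serve(listener, app).await.unwrap();
    }
    ```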

    Resources (SUSE VPN only)

    • development site: https://ui-dev.openplatform.suse.com/login?returnUrl=%2Fopenplatform%2Fforms
    • https://gitlab.suse.de/itpe/core/open-platform/op-portal/backend
    • https://gitlab.suse.de/itpe/core/open-platform/op-portal/frontend


    Learn how to use the Relm4 Rust GUI crate by xiaoguang_wang

    Relm4 is based on gtk4-rs and compatible with libadwaita. The gtk4-rs crate provides all the tools necessary to develop applications. Building on this foundation, Relm4 makes developing more idiomatic, simpler, and faster.

    https://github.com/Relm4/Relm4


    Self-Scaling LLM Infrastructure Powered by Rancher by ademicev0


    Description

    The Problem

    Running LLMs can get expensive and complex pretty quickly.

    Today there are typically two choices:

    1. Use cloud APIs like OpenAI or Anthropic. Easy to start with, but costs add up at scale.
    2. Self-host everything - set up Kubernetes, figure out GPU scheduling, handle scaling, manage model serving... it's a lot of work.

    What if there was a middle ground?

    What if infrastructure scaled itself instead of making you scale it?

    Can we use existing Rancher capabilities like CAPI, autoscaling, and GitOps to make this simpler instead of building everything from scratch?

    Project Repository: github.com/alexander-demicev/llmserverless


    What This Project Does

    A key feature is hybrid deployment: requests can be routed based on complexity or privacy needs. Simple or low-sensitivity queries can use public APIs (like OpenAI), while complex or private requests are handled in-house on local infrastructure. This flexibility allows balancing cost, privacy, and performance - using cloud for routine tasks and on-premises resources for sensitive or demanding workloads.

    A complete, self-scaling LLM infrastructure that:

    • Scales to zero when idle (no idle costs)
    • Scales up automatically when requests come in
    • Adds more nodes when needed, removes them when demand drops
    • Runs on any infrastructure - laptop, bare metal, or cloud

    Think of it as "serverless for LLMs" - focus on building; the infrastructure handles itself.

    How It Works

    A combination of open source tools working together:

    Flow:

    • Users interact with OpenWebUI (chat interface)
    • Requests go to LiteLLM Gateway
    • LiteLLM routes requests to:
      • Ollama (Knative) for local model inference (auto-scales pods)
      • Or cloud APIs for fallback
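
    The routing decision described above can be thought of as a small policy function; a toy sketch (hypothetical, not the project's code):

    ```rust
    // Toy sketch of the hybrid routing idea: privacy always wins, then complexity.
    enum Backend {
        CloudApi,    // e.g. OpenAI, reached via the LiteLLM gateway
        LocalOllama, // Knative-backed Ollama, scaled from zero on demand
    }

    fn route(prompt: &str, contains_private_data: bool) -> Backend {
        // Hypothetical heuristic: keep private data in-house, send long/complex
        // prompts to local models, let routine short prompts use the cloud API.
        if contains_private_data || prompt.len() > 2_000 {
            Backend::LocalOllama
        } else {
            Backend::CloudApi
        }
    }

    fn main() {
        let backend = route("Summarize this internal incident report...", true);
        assert!(matches!(backend, Backend::LocalOllama));
    }
    ```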


    Kubernetes-Based ML Lifecycle Automation by lmiranda

    Description

    This project aims to build a complete end-to-end Machine Learning pipeline running entirely on Kubernetes, using Go and containerized ML components.

    The pipeline will automate the lifecycle of a machine learning model, including:

    • Data ingestion/collection
    • Model training as a Kubernetes Job
    • Model artifact storage in an S3-compatible registry (e.g. Minio)
    • A Go-based deployment controller that automatically deploys new model versions to Kubernetes using Rancher
    • A lightweight inference service that loads and serves the latest model
    • Monitoring of model performance and service health through Prometheus/Grafana

    The outcome is a working prototype of an MLOps workflow that demonstrates how AI workloads can be trained, versioned, deployed, and monitored using the Kubernetes ecosystem.

    Goals

    By the end of Hack Week, the project should:

    1. Produce a fully functional ML pipeline running on Kubernetes with:

      • Data collection job
      • Training job container
      • Storage and versioning of trained models
      • Automated deployment of new model versions
      • Model inference API service
      • Basic monitoring dashboards
    2. Showcase a Go-based deployment automation component, which scans the model registry and automatically generates & applies Kubernetes manifests for new model versions.

    3. Enable continuous improvement by making the system modular and extensible (e.g., additional models, metrics, autoscaling, or drift detection can be added later).

    4. Prepare a short demo explaining the end-to-end process and how new models flow through the system.

    Resources

    Project Repository

    Updates

    1. Training pipeline and datasets
    2. Inference Service py


    The Agentic Rancher Experiment: Do Androids Dream of Electric Cattle? by moio

    Rancher is a beast of a codebase. Let's investigate if the new 2025 generation of GitHub Autonomous Coding Agents and Copilot Workspaces can actually tame it. [Image: a GitHub robot mascot trying to lasso a blue bull with a Kubernetes logo tattooed on it]


    The Plan

    Create a sandbox GitHub Organization, clone in key Rancher repositories, and let the AI loose to see if it can handle real-world enterprise OSS maintenance - or if it just hallucinates new breeds of Kubernetes resources!

    Specifically, throw "Agentic Coders" some typical tasks in a complex, long-lived open-source project, such as:


    The Grunt Work: generate missing GoDocs, unit tests, and refactorings. Rebase PRs.

    The Complex Stuff: fix actual (historical) bugs and feature requests to see if they can traverse the complexity without (too much) human hand-holding.

    Hunting Down Gaps: find areas lacking in docs, areas of improvement in code, dependency bumps, and so on.


    If time allows, also experiment with Model Context Protocol (MCP) to give agents context on our specific build pipelines and CI/CD logs.

    Why?

    We know AI can write "Hello World." and also moderately complex programs from a green field. But can it rebase a 3-month-old PR with conflicts in rancher/rancher? I want to find the breaking point of current AI agents to determine if and how they can help us to reduce our technical debt, work faster and better. At the same time, find out about pitfalls and shortcomings.

    The CONCLUSION!!!

    A State of the Union document was compiled to summarize lessons learned this week. For more gory details, just read the diary below!


    Rancher/k8s Trouble-Maker by tonyhansen

    Project Description

    When studying for my RHCSA, I found trouble-maker, which is a program that breaks a Linux OS and requires you to fix it. I want to create something similar for Rancher/k8s that can allow for troubleshooting an unknown environment.

    Goals for Hackweek 25

    • Update to modern Rancher and verify that existing tests still work
    • Change testing logic to populate secrets instead of requiring a secondary script
    • Add new tests

    Goals for Hackweek 24 (Complete)

    • Create a basic framework for creating Rancher/k8s cluster lab environments as needed for the Break/Fix
    • Create at least 5 modules that can be applied to the cluster and require troubleshooting

    Resources

    • https://github.com/celidon/rancher-troublemaker
    • https://github.com/rancher/terraform-provider-rancher2
    • https://github.com/rancher/tf-rancher-up
    • https://github.com/rancher/quickstart


    A CLI for Harvester by mohamed.belgaied

    Harvester does not officially come with a CLI tool; the user is supposed to interact with Harvester mostly through the UI. Though it is theoretically possible to use kubectl to interact with Harvester, the manipulation of Kubevirt YAML objects is absolutely not user-friendly. Inspired by tools like multipass from Canonical to easily and rapidly create one or multiple VMs, I began the development of Harvester CLI. Currently, it works, but Harvester CLI needs some love to be up-to-date with Harvester v1.0.2 and needs some bug fixes and improvements as well.

    Project Description

    Harvester CLI is a command line interface tool written in Go, designed to simplify interfacing with a Harvester cluster as a user. It is especially useful for testing purposes as you can easily and rapidly create VMs in Harvester by providing a simple command such as: harvester vm create my-vm --count 5 to create 5 VMs named my-vm-01 to my-vm-05.

    asciicast

    Harvester CLI is functional but needs a number of improvements: bringing its functionality up-to-date with Harvester v1.0.2 (some minor issues right now), modifying the default behaviour to create an openSUSE VM instead of an Ubuntu VM, solving some bugs, etc.

    Github Repo for Harvester CLI: https://github.com/belgaied2/harvester-cli

    Done in previous Hackweeks

    • Create a Github actions pipeline to automatically integrate Harvester CLI to Homebrew repositories: DONE
    • Automatically package Harvester CLI for OpenSUSE / Redhat RPMs or DEBs: DONE

    Goal for this Hackweek

    The goal for this Hackweek is to bring Harvester CLI up-to-speed with latest Harvester versions (v1.3.X and v1.4.X), and improve the code quality as well as implement some simple features and bug fixes.

    Some nice additions might be:

    • Improve handling of namespaced objects
    • Add features, such as network management or Load Balancer creation?
    • Add more unit tests and, why not, e2e tests
    • Improve CI
    • Improve the overall code quality
    • Test the program and create issues for it

    Issue list is here: https://github.com/belgaied2/harvester-cli/issues

    Resources

    The project is written in Go and uses client-go, the Kubernetes Go client library, to communicate with the Harvester API (which is in fact Kubernetes). Welcome contributions are:

    • Testing it and creating issues
    • Documentation
    • Go code improvement

    What you might learn

    Harvester CLI might be interesting to you if you want to learn more about:

    • GitHub Actions
    • Harvester as a SUSE Product
    • Go programming language
    • Kubernetes API
    • Kubevirt API objects (Manipulating VMs and VM Configuration in Kubernetes using Kubevirt)