Since Kubernetes already has a clear path for migrating "in-tree" volume plugins to CSI, I would like to understand the concept of CSI by writing a simple driver for Kubernetes.
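To make the goal concrete: the first thing any CSI driver must do is answer the CSI Identity service over a UNIX socket, which is what kubelet and the sidecars use to discover it. Below is a minimal sketch assuming the official Go bindings (github.com/container-storage-interface/spec); the driver name and socket path are made up for illustration, and a real driver would add the Controller and Node services on top.
package main

import (
	"context"
	"log"
	"net"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

// identityServer answers the CSI Identity calls; the embedded
// Unimplemented type stubs out the RPCs we don't handle yet.
type identityServer struct {
	csi.UnimplementedIdentityServer
}

func (s *identityServer) GetPluginInfo(ctx context.Context, req *csi.GetPluginInfoRequest) (*csi.GetPluginInfoResponse, error) {
	return &csi.GetPluginInfoResponse{
		Name:          "hello.csi.example.com", // hypothetical driver name
		VendorVersion: "0.1.0",
	}, nil
}

func (s *identityServer) Probe(ctx context.Context, req *csi.ProbeRequest) (*csi.ProbeResponse, error) {
	return &csi.ProbeResponse{}, nil // always report healthy in this sketch
}

func main() {
	// kubelet and the CSI sidecars talk to the driver over a UNIX socket.
	lis, err := net.Listen("unix", "/tmp/csi.sock")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	csi.RegisterIdentityServer(srv, &identityServer{})
	log.Fatal(srv.Serve(lis))
}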
Similar Projects
Agama Expert Partitioner by joseivanlopez
Description
Agama is a new Linux installer that will very likely be used for SLES 16.
It offers a UI for configuring the target system (language, patterns, network, etc.). One of the more complex sections is the storage configuration, which is going to be revamped. This project consists of exploring the possibility of having something similar to the YaST Expert Partitioner for Agama.
Goals
- Explore different approaches for the storage UI in Agama.
Learn enough Golang and hack on CoreDNS by jkuzilek
Description
I'm implementing split-horizon DNS for my home Kubernetes cluster to be able to access my internal (and external) services over the local network through public domains. I managed to make a PoC with the k8s_gateway plugin for CoreDNS. However, I soon found out it responds with IPs for all Gateways assigned to HTTPRoutes, publishing the public IPs as well as the internal LoadBalancer ones.
To remedy this issue, a simple filtering mechanism has to be implemented.
Goals
- Learn an acceptable amount of Golang
- Implement GatewayClass (and IngressClass) filtering for k8s_gateway
- Deploy on homelab cluster
- Profit?
Resources
- https://github.com/ori-edge/k8s_gateway/issues/36
- https://github.com/coredns/coredns/issues/2465#issuecomment-593910983
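The core of the fix is an allow-list over GatewayClass names. Here is a minimal sketch of that idea, assuming the sigs.k8s.io/gateway-api Go types; it illustrates the filtering step only and is not the actual k8s_gateway patch.
package dnsfilter

import (
	gatewayv1 "sigs.k8s.io/gateway-api/apis/v1"
)

// filterGatewaysByClass keeps only the Gateways whose GatewayClassName is
// in the allowed set, so DNS answers never include addresses of Gateways
// (e.g. public-facing ones) the operator has not opted in.
func filterGatewaysByClass(gws []gatewayv1.Gateway, allowed map[string]bool) []gatewayv1.Gateway {
	var out []gatewayv1.Gateway
	for _, gw := range gws {
		if allowed[string(gw.Spec.GatewayClassName)] {
			out = append(out, gw)
		}
	}
	return out
}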
EDIT: Feature mostly complete. An unfinished PR lies here. Successfully tested on my homelab cluster.
SUSE AI Meets the Game Board by moio
Use tabletopgames.ai’s open source TAG and PyTAG frameworks to apply Statistical Forward Planning and Deep Reinforcement Learning to two board games of our own design. On an all-green, all-open source, all-AWS stack!
Results: Infrastructure Achievements
We successfully built and automated a containerized stack to support our AI experiments. This included:
- a Fully-Automated, One-Command, GPU-accelerated Kubernetes setup: we created an OpenTofu-based script, tofu-tag, to deploy SUSE's RKE2 Kubernetes running on CUDA-enabled nodes in AWS, powered by openSUSE with GPU drivers and gpu-operator
- Containerization of the TAG and PyTAG frameworks: TAG (Tabletop AI Games) and PyTAG were patched for seamless deployment in containerized environments. We automated the container image creation process with GitHub Actions. Our forks (PRs upstream upcoming):
./deploy.sh
and voilà - Kubernetes running PyTAG with GPU acceleration. [Screenshots: k9s showing the running pods; nvtop showing GPU utilization]
Results: Game Design Insights
Our project focused on modeling and analyzing two card games of our own design within the TAG framework:
- Game Modeling: We implemented models for Dario's "Bamboo" and Silvio's "Totoro" and "R3" games, enabling AI agents to play thousands of games ...in minutes!
- AI-driven optimization: By analyzing statistical data on moves, strategies, and outcomes, we iteratively tweaked the game mechanics and rules to achieve better balance and player engagement.
- Advanced analytics: Leveraging AI agents with Monte Carlo Tree Search (MCTS) and random action selection, we compared performance metrics to identify optimal strategies and uncover opportunities for game refinement.
- more about Bamboo on Dario's site
- more about R3 on Silvio's site (Italian, translation coming)
- more about Totoro on Silvio's site
[Photo: a family picture of our card games in progress. From the top: Bamboo, Totoro, R3]
Results: Learning, Collaboration, and Innovation
Beyond technical accomplishments, the project showcased innovative approaches to coding, learning, and teamwork:
- "Trio programming" with AI assistance: Our "trio programming" approach—two developers and GitHub Copilot—was a standout success, especially in handling slightly-repetitive but not-quite-exactly-copypaste tasks. Java as a language tends to be verbose and we found it to be fitting particularly well.
- AI tools for reporting and documentation: We extensively used AI chatbots to streamline writing and reporting. (Including writing this report! ...but this note was added manually during edit!)
- GPU compute expertise: Overcoming challenges with CUDA drivers and cloud infrastructure deepened our understanding of GPU-accelerated workloads in the open-source ecosystem.
- Game design as a learning platform: By blending AI techniques with creative game design, we learned not only about AI strategies but also about making games fun, engaging, and balanced.
Last but not least we had a lot of fun! ...and this was definitely not a chatbot generated line!
kubectl clone: Seamlessly Clone Kubernetes Resources Across Multiple Rancher Clusters and Projects by dpunia
Description
kubectl clone is a kubectl plugin that empowers users to clone Kubernetes resources across multiple clusters and projects managed by Rancher. It simplifies the process of duplicating resources from one cluster to another or within different namespaces and projects, with optional on-the-fly modifications. This tool enhances multi-cluster resource management, making it invaluable for environments where Rancher orchestrates numerous Kubernetes clusters.
Goals
- Seamless Multi-Cluster Cloning
- Clone Kubernetes resources across clusters/projects with one command.
- Simplifies management, reduces operational effort.
Resources
Rancher & Kubernetes Docs
- Rancher API, Cluster Management, Kubernetes client libraries.
Development Tools
- Kubectl plugin docs, Go programming resources.
Building and Installing the Plugin
- Set Environment Variables: Export the Rancher URL and API token:
export RANCHER_URL="https://rancher.example.com"
export RANCHER_TOKEN="token-xxxxx:xxxxxxxxxxxxxxxxxxxx"
- Build the Plugin: Compile the Go program:
go build -o kubectl-clone ./pkg/
- Install the Plugin: Move the executable to a directory in your PATH:
mv kubectl-clone /usr/local/bin/
Ensure the file is executable:
chmod +x /usr/local/bin/kubectl-clone
- Verify the Plugin Installation: Test the plugin by running:
kubectl clone --help
You should see the usage information for the kubectl-clone plugin.
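For context, here is a hedged sketch of how such a plugin's entry point might be wired up with cobra. The flag names mirror the usage examples below; the cobra skeleton and the elided clone logic are assumptions for illustration, not the actual kubectl-clone source.
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

func main() {
	var sourceCluster, targetCluster, resourceType, name, newName string

	cmd := &cobra.Command{
		Use:   "kubectl-clone",
		Short: "Clone Kubernetes resources across Rancher-managed clusters",
		RunE: func(cmd *cobra.Command, args []string) error {
			// The plugin authenticates against Rancher with the
			// environment variables exported during installation.
			rancherURL := os.Getenv("RANCHER_URL")
			token := os.Getenv("RANCHER_TOKEN")
			if rancherURL == "" || token == "" {
				return fmt.Errorf("RANCHER_URL and RANCHER_TOKEN must be set")
			}
			// Fetch the resource from the source cluster via the Rancher
			// API, rewrite metadata (name, namespace, labels), and create
			// it in the target cluster (omitted in this sketch).
			fmt.Printf("cloning %s/%s from %s to %s as %s\n",
				resourceType, name, sourceCluster, targetCluster, newName)
			return nil
		},
	}

	cmd.Flags().StringVar(&sourceCluster, "source-cluster", "", "source cluster ID")
	cmd.Flags().StringVar(&targetCluster, "target-cluster", "", "target cluster ID")
	cmd.Flags().StringVar(&resourceType, "type", "", "resource type, e.g. deployment")
	cmd.Flags().StringVar(&name, "name", "", "resource name")
	cmd.Flags().StringVar(&newName, "new-name", "", "name for the clone")

	if err := cmd.Execute(); err != nil {
		os.Exit(1)
	}
}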
Usage Examples
- Clone a Deployment from One Cluster to Another:
kubectl clone --source-cluster c-abc123 --type deployment --name nginx-deployment --target-cluster c-def456 --new-name nginx-deployment-clone
- Clone a Service into Another Namespace and Modify Labels:
Rancher/k8s Trouble-Maker by tonyhansen
Project Description
When studying for my RHCSA, I found trouble-maker, a program that breaks a Linux OS and requires you to fix it. I want to create something similar for Rancher/k8s that allows for troubleshooting an unknown environment.
Goal for this Hackweek
- Create a basic framework for creating Rancher/k8s cluster lab environments as needed for the Break/Fix
- Create at least 5 modules that can be applied to the cluster and require troubleshooting
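As a sketch of what one such module could look like (entirely an assumption about the eventual design), here is a Go snippet that breaks in-cluster DNS by scaling CoreDNS to zero replicas, leaving the student to diagnose and fix it:
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the lab cluster's kubeconfig from the default location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// "coredns" is a placeholder; the deployment name differs between
	// distributions (e.g. RKE2 ships CoreDNS under another name).
	deployments := client.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	scale.Spec.Replicas = 0
	if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("module applied: in-cluster DNS is now broken -- go troubleshoot")
}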
Resources
- https://github.com/rancher/terraform-provider-rancher2
- https://github.com/rancher/tf-rancher-up
Extending KubeVirtBMC's capability by adding Redfish support by zchang
Description
In Hack Week 23, we delivered a project called KubeBMC (now renamed KubeVirtBMC), which brings the good old-fashioned IPMI way of managing virtual machines to KubeVirt-powered clusters. This opens the possibility of integrating existing bare-metal provisioning solutions, like Tinkerbell, with virtualized environments. We even received an inquiry about transferring the project to the KubeVirt organization. So a proposal was filed and accepted by the KubeVirt community, and the project was renamed after that. We have many tasks on our to-do list: some are administrative, some are feature-related. One of the most requested features is Redfish support.
Goals
Extend the capability of KubeVirtBMC by adding Redfish support. Currently, the virtbmc component only exposes IPMI endpoints. We need to implement another simulator to expose Redfish endpoints, as we did with the IPMI module. We aim for a basic set of functionalities:
- Power management
- Boot device selection
- Virtual media mount (this one is not so basic)
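For a sense of scale, a Redfish simulator is essentially an HTTP service speaking the DMTF schema. Below is a minimal sketch (not KubeVirtBMC code) of a service root plus the ComputerSystem.Reset action used for power management; the paths follow the Redfish specification, while the single hardcoded system, the port, and the wiring to KubeVirt are made-up details:
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()

	// Redfish service root.
	mux.HandleFunc("/redfish/v1/", func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(map[string]any{
			"@odata.id":      "/redfish/v1/",
			"RedfishVersion": "1.6.0",
			"Systems":        map[string]string{"@odata.id": "/redfish/v1/Systems"},
		})
	})

	// Power management: the Reset action on a single, hypothetical system.
	mux.HandleFunc("/redfish/v1/Systems/1/Actions/ComputerSystem.Reset",
		func(w http.ResponseWriter, r *http.Request) {
			var body struct {
				ResetType string `json:"ResetType"` // e.g. "On", "ForceOff"
			}
			if err := json.NewDecoder(r.Body).Decode(&body); err != nil {
				http.Error(w, err.Error(), http.StatusBadRequest)
				return
			}
			// A real simulator would start/stop the backing KubeVirt
			// VirtualMachine here instead of touching hardware.
			log.Printf("reset requested: %s", body.ResetType)
			w.WriteHeader(http.StatusNoContent)
		})

	log.Fatal(http.ListenAndServe(":8443", mux))
}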
Resources