Project Description
Terraform's remote backend requires a Terraform Cloud account, and the initial offering is limited. While it is suitable for small teams managing a modest number of resources, this quickly becomes an issue as usage demands increase. In addition, Terraform's remote operations (plan, apply, etc.) are executed on HashiCorp's cloud resources, which presents issues for sensitive data such as passwords and access keys. Users are also constrained by what tooling is available in the remote execution environment, for example when kubectl is needed as part of a local-resource. This project aims to address these shortcomings with a native Kubernetes implementation.
Goal for this Hackweek
- Have remote plan and apply operations execution handled in Kubernetes
- State management: locking and storage
- Logging: execution output (plan and apply)
- User interface: view execution output and approve apply operations
- Helm chart for deploying project
Stretch Goals
- Trigger plan on GitHub PR
Implementation
We hope to take the Rancher terraform-controller and add a REST API to it that speaks the Terraform Cloud protocol (sometimes called tfe or terraform enterprise in its docs). You should then be able to deploy the controller, write HCL whose remote backend hostname points at this new API, and run terraform plan/apply from your desktop while the operations use resources in your k8s cluster.
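To make the workflow concrete, here is a minimal sketch of the kind of backend block this would enable; the hostname and organization are hypothetical placeholders for illustration, not anything the project provides yet.

```hcl
terraform {
  backend "remote" {
    # Hypothetical hostname: the in-cluster REST API added to terraform-controller
    hostname     = "tf.my-cluster.example.com"
    organization = "my-team"

    workspaces {
      name = "my-app"
    }
  }
}
```

With that in place, terraform plan and terraform apply run from your desktop would hand execution off to the controller running inside the cluster.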
This project is part of: Hack Week 20
Similar Projects
SUSE AI Meets the Game Board by moio
Use tabletopgames.ai’s open source TAG and PyTAG frameworks to apply Statistical Forward Planning and Deep Reinforcement Learning to two board games of our own design. On an all-green, all-open source, all-AWS stack!
Results: Infrastructure Achievements
We successfully built and automated a containerized stack to support our AI experiments. This included:
- a Fully-Automated, One-Command, GPU-accelerated Kubernetes setup: we created an OpenTofu-based script, tofu-tag, to deploy SUSE's RKE2 Kubernetes running on CUDA-enabled nodes in AWS, powered by openSUSE with GPU drivers and gpu-operator (a minimal HCL sketch of this kind of resource follows below)
- Containerization of the TAG and PyTAG frameworks: TAG (Tabletop AI Games) and PyTAG were patched for seamless deployment in containerized environments. We automated the container image creation process with GitHub Actions. Our forks (PRs upstream upcoming):
Run ./deploy.sh and voilà: Kubernetes running PyTAG (k9s screenshot above) with GPU acceleration (nvtop screenshot below).
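As a rough illustration (this is not tofu-tag's actual code), the kind of OpenTofu resource such a setup drives might look like the sketch below; the AMI variable and the instance type are assumptions made for the example.

```hcl
# Hypothetical sketch, not tofu-tag's actual code: a CUDA-capable AWS node
# of the kind the script provisions for RKE2.
resource "aws_instance" "gpu_node" {
  ami           = var.opensuse_gpu_ami # assumed variable: openSUSE AMI with NVIDIA drivers
  instance_type = "g4dn.xlarge"        # an NVIDIA GPU instance type on AWS

  tags = {
    Role = "rke2-gpu-worker"
  }
}
```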
Results: Game Design Insights
Our project focused on modeling and analyzing two card games of our own design within the TAG framework:
- Game Modeling: We implemented models for Dario's "Bamboo" and Silvio's "Totoro" and "R3" games, enabling AI agents to play thousands of games ...in minutes!
- AI-driven optimization: By analyzing statistical data on moves, strategies, and outcomes, we iteratively tweaked the game mechanics and rules to achieve better balance and player engagement.
- Advanced analytics: Leveraging AI agents with Monte Carlo Tree Search (MCTS) and random action selection, we compared performance metrics to identify optimal strategies and uncover opportunities for game refinement.
- more about Bamboo on Dario's site
- more about R3 on Silvio's site (Italian, translation coming)
- more about Totoro on Silvio's site
A family picture of our card games in progress. From the top: Bamboo, Totoro, R3
Results: Learning, Collaboration, and Innovation
Beyond technical accomplishments, the project showcased innovative approaches to coding, learning, and teamwork:
- "Trio programming" with AI assistance: Our "trio programming" approach—two developers and GitHub Copilot—was a standout success, especially in handling slightly-repetitive but not-quite-exactly-copypaste tasks. Java as a language tends to be verbose and we found it to be fitting particularly well.
- AI tools for reporting and documentation: We extensively used AI chatbots to streamline writing and reporting. (Including writing this report! ...but this note was added manually during edit!)
- GPU compute expertise: Overcoming challenges with CUDA drivers and cloud infrastructure deepened our understanding of GPU-accelerated workloads in the open-source ecosystem.
- Game design as a learning platform: By blending AI techniques with creative game design, we learned not only about AI strategies but also about making games fun, engaging, and balanced.
Last but not least we had a lot of fun! ...and this was definitely not a chatbot generated line!
Testing and adding GNU/Linux distributions on Uyuni by juliogonzalezgil
Join the Gitter channel! https://gitter.im/uyuni-project/hackweek
Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines. It also manages configuration, can run audits, build image containers, monitor and much more!
Currently there are a few distributions that are completely untested on Uyuni or SUSE Manager (AFAIK), or that have not been tested for a long time. It would be interesting to know how hard it would be to work with them and, if possible, to fix whatever is broken.
For newcomers, the easiest distributions are those based on DEB or RPM packages. Distributions with other package formats are doable, but will require adapting the Python and Java code to be able to sync and analyze such packages (and if salt does not support those packages, it will need changes as well). So if you want a distribution with other packages, make sure you are comfortable handling such changes.
No developer experience? No worries! We had non-developer contributors in the past, and we are ready to help as long as you are willing to learn. If you don't want to code at all, you can also help us prepare the documentation after someone else has the initial code ready, or you could also help with testing :-)
The idea is testing Salt and Salt-ssh clients, but NOT traditional clients, which are deprecated.
To consider that a distribution has basic support, we should cover at least (points 3-6 are to be tested for both salt minions and salt ssh minions):
- Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
- Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding OS to the bootstrap repository creator)
- Package management (install, remove, update...)
- Patching
- Applying any basic salt state (including a formula)
- Salt remote commands
- Bonus point: Java part for product identification, and monitoring enablement
- Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
- Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
- Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)
If something is breaking, we can try to fix it, but the main idea is to research how well supported it is right now. Beyond that, it's up to each project member how much to hack :-)
- If you don't have knowledge about some of the steps: ask the team
- If you still don't know what to do: switch to another distribution and keep testing.
This card is for EVERYONE, not just developers. Seriously! We had people from other teams helping who were not developers, and they added support for Debian and new SUSE Linux Enterprise and openSUSE Leap versions :-)
Pending
FUSS
FUSS is a complete GNU/Linux solution (server, client and desktop/standalone) based on Debian for managing an educational network.
https://fuss.bz.it/
Seems to be a Debian 12 derivative, so adding it could be quite easy.
- [W] Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
- [W] Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding OS to the bootstrap repository creator) --> Working for all 3 options (salt minion UI, salt minion bootstrap script and salt-ssh minion from the UI).
- [W] Package management (install, remove, update...) --> Installing a new package works, needs to test the rest.
- [I] Patching (if patch information is available, could require writing some code to parse it, but IIRC we have support for Ubuntu already). No patches detected. Do we support patches for Debian at all?
- [W] Applying any basic salt state (including a formula)
- [W] Salt remote commands
- [ ] Bonus point: Java part for product identification, and monitoring enablement
terraform-provider-feilong by e_bischoff
Project Description
People need to test operating systems and applications on the s390 platform.
Installation from scratch solutions include:
- just deploy and provision manually (with the help of the ftpboot script, if you are at SUSE)
- use s3270 terminal emulation (used by openQA people?)
- use LXC from IBM to start CP commands and analyze the results
- use zPXE to do some PXE-alike booting (used by the orthos team?)
- use tessia to install from scratch using autoyast
- use libvirt for s390 to do some nested virtualization on some already deployed z/VM system
- directly install a Linux kernel on an LPAR and use kvm + libvirt from there
Deployment from image solutions include:
- use the ICIC web interface (openstack in disguise, contributed by IBM)
- use ICIC from the openstack terraform provider (used by Rancher QA)
- use zvm_ansible to control SMAPI
- connect directly to the SMAPI low-level socket interface
IBM Cloud Infrastructure Center (ICIC) harnesses the Feilong API, but you can use Feilong without installing ICIC, provided you set up a "z/VM cloud connector" into one of your VMs following this schema.
What about writing a terraform Feilong provider, just like we have the terraform libvirt provider? That would allow transparently calling Feilong from your main.tf files to deploy and destroy resources on your system/z.
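To sketch the idea, a hypothetical main.tf could look like this; the provider source, the feilong_guest resource, and all of its attributes are invented for illustration, since the provider is only a proposal at this point.

```hcl
terraform {
  required_providers {
    feilong = {
      source = "example.com/suse/feilong" # hypothetical source address
    }
  }
}

provider "feilong" {
  # Assumed attribute: where the z/VM cloud connector listens
  connector = "http://zvm-connector.example.com:8080"
}

# Invented resource and attribute names, purely to illustrate the idea
resource "feilong_guest" "test" {
  name   = "testvm1"
  vcpus  = 2
  memory = "2G"
}
```

terraform apply against such a file would then drive Feilong through the cloud connector, with no z/VM-specific tooling needed on the desktop.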
Other Feilong-based solutions include:
- make libvirt Feilong-aware
- simply call Feilong from shell scripts with curl
- use the zvmconnector client python library from Feilong
- use the zthin part of Feilong to directly command SMAPI.
Goal for Hackweek 23
My final goal is to be able to easily deploy and provision VMs automatically on a z/VM system, in a way that people might enjoy even outside of SUSE.
My technical preference is to write a terraform provider plugin, as it is the approach that involves the least software components for our deployments, while remaining clean, and compatible with our existing development infrastructure.
Goals for Hackweek 24
The Feilong provider works and is used internally by the SUSE Manager team. Let's push it forward!
Let's add support for Fibre Channel disks and multipath.
Goals for Hackweek 25
- Finish support for Fibre Channel disks and multipath
- Fix problems with registration on the HashiCorp providers registry
Rancher/k8s Trouble-Maker by tonyhansen
Project Description
When studying for my RHCSA, I found trouble-maker, which is a program that breaks a Linux OS and requires you to fix it. I want to create something similar for Rancher/k8s that can allow for troubleshooting an unknown environment.
Goal for this Hackweek
- Create a basic framework for creating Rancher/k8s cluster lab environments as needed for the Break/Fix
- Create at least 5 modules that can be applied to the cluster and require troubleshooting
Resources
- https://github.com/rancher/terraform-provider-rancher2
- https://github.com/rancher/tf-rancher-up
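A minimal sketch of how the rancher2 provider listed above could stand up a disposable lab cluster for the break/fix modules; the API URL and the token variable are placeholders, and the exact resource arguments should be checked against the provider docs.

```hcl
provider "rancher2" {
  api_url   = "https://rancher.example.com/v3" # placeholder Rancher endpoint
  token_key = var.rancher2_token               # assumed variable holding an API token
}

# A disposable cluster to host the break/fix troubleshooting modules
resource "rancher2_cluster" "troublemaker_lab" {
  name        = "troublemaker-lab"
  description = "Lab cluster that the trouble-maker modules will break"
}
```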
Contribute to terraform-provider-libvirt by pinvernizzi
Description
The SUSE Manager (SUMA) teams' main tool for infrastructure automation, Sumaform, largely relies on terraform-provider-libvirt. That provider is also widely used by other teams, both inside and outside SUSE.
It would be good to help the maintainers of this project and give back to the community around it, after all the amazing work that has been already done.
If you're interested in any of infrastructure automation, Terraform, virtualization, tooling development, or Go (...), it is also a good chance to learn a bit about them all by putting your hands on an interesting, complex, real-use-case project.
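If you have never touched the provider, here is a minimal example of what it does, assuming a local libvirt daemon (resource arguments abridged):

```hcl
terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
    }
  }
}

provider "libvirt" {
  uri = "qemu:///system" # local libvirt daemon
}

# A small VM defined entirely in HCL
resource "libvirt_domain" "example" {
  name   = "example-vm"
  memory = 1024 # MiB
  vcpu   = 1
}
```

Sumaform builds on resources like this one to describe entire SUSE Manager test environments.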
Goals
- Get more familiar with Terraform provider development and libvirt bindings in Go
- Solve some issues and/or implement some features
- Get in touch with the community around the project
Resources
- CONTRIBUTING readme
- Go libvirt library in use by the project
- Terraform plugin development
- "Good first issue" list
Metrics Server viewer for Kubernetes by bkampen
This project is finished. Please visit the GitHub repo below for the tool.
Description
Build a CLI tool which can visualize Kubernetes metrics from the metrics-server, so you're able to watch these without installing Prometheus and Grafana on a cluster.
Goals
- Learn more about metrics-server
- Learn more about the inner workings of Kubernetes.
- Learn more about Go
Resources
https://github.com/bvankampen/metrics-viewer
Setup Kanidm as OIDC provider on Kubernetes by jkuzilek
Description
I am planning to upgrade my homelab Kubernetes cluster to the next level and need an OIDC provider for my services, including K8s itself.
Goals
- Successfully configure and deploy Kanidm on homelab cluster
- Integrate with K8s auth
- Integrate with other services (Envoy Gateway, Container Registry, future deployment of Forgejo?)
Mammuthus - The NFS-Ganesha inside Kubernetes controller by vcheng
Description
As a user-space NFS provider, NFS-Ganesha is widely used by several projects, e.g. Longhorn and Rook. We want to create a Kubernetes controller to make configuring NFS-Ganesha easy. This controller will let users configure NFS-Ganesha through different backends like VFS/CephFS.
Goals
- Create NFS-Ganesha Package on OBS: nfs-ganesha5, nfs-ganesha6
- Create NFS-Ganesha Container Image on OBS: Image
- Create a Kubernetes controller for NFS-Ganesha and support the VFS configuration on demand. Mammuthus
A CLI for Harvester by mohamed.belgaied
Harvester does not officially come with a CLI tool; the user is supposed to interact with Harvester mostly through the UI. Though it is theoretically possible to use kubectl to interact with Harvester, the manipulation of Kubevirt YAML objects is absolutely not user friendly. Inspired by tools like multipass from Canonical, which easily and rapidly create one or multiple VMs, I began the development of Harvester CLI. Currently, it works, but Harvester CLI needs some love to be up-to-date with Harvester v1.0.2 and needs some bug fixes and improvements as well.
Project Description
Harvester CLI is a command line interface tool written in Go, designed to simplify interfacing with a Harvester cluster as a user. It is especially useful for testing purposes as you can easily and rapidly create VMs in Harvester by providing a simple command such as:
harvester vm create my-vm --count 5
to create 5 VMs named my-vm-01 to my-vm-05.
Harvester CLI is functional but needs a number of improvements: up-to-date functionality with Harvester v1.0.2 (some minor issues right now), modifying the default behaviour to create an openSUSE VM instead of an Ubuntu VM, solving some bugs, etc.
Github Repo for Harvester CLI: https://github.com/belgaied2/harvester-cli
Done in previous Hackweeks
- Create a Github actions pipeline to automatically integrate Harvester CLI to Homebrew repositories: DONE
- Automatically package Harvester CLI for openSUSE / Red Hat RPMs or DEBs: DONE
Goal for this Hackweek
The goal for this Hackweek is to bring Harvester CLI up-to-speed with latest Harvester versions (v1.3.X and v1.4.X), and improve the code quality as well as implement some simple features and bug fixes.
Some nice additions might be:
- Improve handling of namespaced objects
- Add features, such as network management or Load Balancer creation?
- Add more unit tests and, why not, e2e tests
- Improve CI
- Improve the overall code quality
- Test the program and create issues for it
Issue list is here: https://github.com/belgaied2/harvester-cli/issues
Resources
The project is written in Go, using client-go, the Kubernetes Go client library, to communicate with the Harvester API (which is in fact Kubernetes).
Welcome contributions include:
- Testing it and creating issues
- Documentation
- Go code improvement
What you might learn
Harvester CLI might be interesting to you if you want to learn more about:
- GitHub Actions
- Harvester as a SUSE Product
- Go programming language
- Kubernetes API