Build a network of ("edge") humidity sensors using Raspberry Pis with SenseHats and additional cheaper sensors
For our house, I want to track the effectiveness of regularly ventilating the rooms by adding humidity sensors and recording the measurements over time.
We've already started with this little project:
https://github.com/benediktwerner/humidity-logger
Goal for this Hackweek
The setup we built over the holidays works just fine, but there are a few practical issues and a few stretch goals I'd have:
With a Raspberry Pi plus the Sense Hat, a single sensor is pretty expensive and over-specced. Using a Raspberry Pi as the master is OK (especially as I already have two, with two Sense Hats), but I'd like to add extra sensors that connect wirelessly to one of the Raspberry Pis via Bluetooth or WiFi. Those could either be ready-made, or a combination of a "bare" sensor and a cheap board like the Raspberry Pi Pico W or a similar board (e.g., based on the ESP32).
Currently, there's only a Grafana dashboard with a "forever" history. I would love to add extra reporting, e.g., sending alerts when certain humidity thresholds are exceeded, or archiving older data.
None of the setup is "SUSEfied" (using SUSE Linux images, k3s, Rancher, ...). I'd love to change that, so that the setup can be used as a showcase for SUSE Edge. The stretch goal would be to make the SUSE version at least as easy to use as the current Raspberry Pi OS setup.
I'm looking for contributors who want to hack on either the hardware part (building an affordable Bluetooth or WiFi humidity/temperature sensor from components) or the SUSEfied software stack or both.
The software stack has many areas to work on, from building out-of-the box containers that can be deployed from Rancher to improving the Grafana dashboards.
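To make the "bare sensor plus cheap board" idea concrete, here is a minimal MicroPython sketch for a Pico W node that reads a DHT22 and posts JSON to the Pi master over WiFi. The endpoint URL, GPIO pin, and credentials are assumptions for illustration, not part of the existing humidity-logger setup:

```python
# Minimal MicroPython sketch for a Pico W sensor node (assumptions:
# DHT22 on GPIO 15, HTTP endpoint on the Pi master; names are hypothetical).
import time
import network
import dht                      # built into MicroPython RP2 firmware
import urequests                # ships with most MicroPython builds
from machine import Pin

WIFI_SSID = "my-ssid"           # assumption: fill in your network
WIFI_PSK = "my-psk"
MASTER_URL = "http://192.168.0.10:8080/measurements"  # hypothetical endpoint

def connect_wifi():
    wlan = network.WLAN(network.STA_IF)
    wlan.active(True)
    wlan.connect(WIFI_SSID, WIFI_PSK)
    while not wlan.isconnected():
        time.sleep(1)

sensor = dht.DHT22(Pin(15))

connect_wifi()
while True:
    sensor.measure()  # triggers a reading; raises OSError on sensor failure
    payload = {"temperature": sensor.temperature(),
               "humidity": sensor.humidity()}
    try:
        urequests.post(MASTER_URL, json=payload).close()
    except OSError:
        pass  # drop the sample if the master is unreachable
    time.sleep(60)
```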
Resources
- https://github.com/benediktwerner/humidity-logger
- https://www.raspberrypi.com/products/sense-hat/
- https://www.raspberrypi.com/documentation/microcontrollers/raspberry-pi-pico.html
- https://community.ibm.com/community/user/cloud/blogs/alexei-karve/2022/05/08/microshift-15
Looking for hackers with the skills:
raspberrypi esp32 microos k3s rancher elemental sensors grafana influxdb containers helm microcontroller edge
This project is part of:
Hack Week 22
Activity
Comments
- almost 3 years ago by idefx
Hello! Have you checked out the Home Assistant and ESPHome projects?
I run Home Assistant on a k3s cluster with two Raspberry Pi 4s and two low-power Intel machines (a VM inside a NAS, and a NUC). Everything is on SLE Micro, and I use Rancher to manage the cluster and Longhorn for persistent data. For the sensor part, I have a couple of M5 Atom Lite boards. They support a variety of sensors, and with ESPHome it is super easy to connect them to Home Assistant. Then you can design automations, mobile notifications, etc. from Home Assistant, and even plug it into other services so you get a phone call if something goes wrong, for example.
Don't hesitate to reach out to me if you want to discuss this!
- almost 3 years ago by joachimwerner
Thanks for the great pointers! We started off with a much smaller scope (no home automation, really just data gathering and visualisation), but it makes perfect sense to think of it in the context of Home Assistant for the future (e.g., so that a smart thermostat automatically shuts down the heating in the room while it's being ventilated). Will certainly get back to you with some questions.
- almost 3 years ago by joachimwerner
Found this on how to get the Sense Hat to work on openSUSE: https://community.ibm.com/community/user/cloud/blogs/alexei-karve/2022/05/08/microshift-15
- almost 3 years ago by gpathak
Hi @joachimwerner, for adding extra sensors, I found out that it can be done with a DHT22 and an ESP8266. Some information about interfacing the DHT22 with the ESP8266 can be found here: Getting Started With the ESP8266 and DHT22 Sensor
- almost 3 years ago by bigironman
An alternative solution might be using a Raspberry Pi Pico W with MicroPython and a BME280 sensor (temperature, humidity, pressure). It is easy to program, and you can integrate it into nearly everything via WiFi. I'm using it in combination with Home Assistant and MQTT.
Similar Projects
Build a Single Camera 3D Scanner (Photogrammetry) by lparkin
Description
I want to see how fast I can develop a single-camera (Pi Camera Module 3) rig with a stepper motor controlling a turntable that rotates the model being scanned. The trick here is not to be super fancy with hundreds of sensors and data inputs; quite the opposite. I want to see how accurately I can scan objects into 3D-printable models using only a camera and as many fixed and known parameters as possible.
Development speed is to be augmented with an agentic AI coding companion. As it stands, I have a 3D printer and pretty much all the electronics I need.
Goals
- Design and print working/workable camera rig
- Design and print working/workable turntable (considering printing my own cylinder-style bearings as well)
- Assemble rig components into MVP assembly
- Develop an application that can hook into existing tools, or leverage a library like OpenCV, to process 2D images into a 3D model (see the feature-matching sketch after this list).
- Iterate until models are good enough to 3D print.
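To ground the OpenCV route, here is a minimal feature-matching sketch between two consecutive turntable frames, the first step toward estimating pose from known rotation steps. The frame file names are hypothetical:

```python
# Sketch: match ORB features between two consecutive turntable frames.
import cv2

img1 = cv2.imread("frame_000.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical files
img2 = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)          # patent-free alternative to SIFT
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance is the right metric for ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} matches; best distance {matches[0].distance}")
# With the camera intrinsics and the known turntable step angle, these
# correspondences feed cv2.findEssentialMat / cv2.recoverPose.
```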
Resources
- https://www.instructables.com/3D-scanning-Photogrammetry-with-a-rotating-platfor/
- https://www.instructables.com/3d-Scan-Anything-Using-Just-a-Camera/
- https://www.instructables.com/Build-a-DIY-Desktop-3d-Scanner-With-Infinite-Resol/
- https://www.instructables.com/3D-Laser-Scanning-DIY/
Play with esp32 to create domotics stuff by aginies
Description
Play with an ESP32 board and multiple small peripherals
https://github.com/aginies/domotique
Goals
- Finish the pool project
- Add support for NFC auth in the door project
- Improve the doc
- Project to manage a solar panel (router)
Resources
esp32 home
ESPClock: An open-source smart desk clock with Home Assistant integration by jbaier_cz
Description
ESPClock will be an open-source, Wi-Fi connected digital clock powered by ESP32 and ESPHome, designed to seamlessly integrate with Home Assistant. Featuring a 3D-printable case, the clock combines modern style with smart home functionality.
Goals
Key features:
- real-time clock
- native Home Assistant integration
- optional sensors for temperature, humidity and ambient light
- custom 3D-printable case
- open-source firmware and hardware design
- easy YAML-based configuration
Resources
- https://esphome.io/
- https://gist.github.com/baierjan/773e20a5061780f0a27ed86619dbffba
The Hacking
Chapter 1: Inventory
After thoroughly inspecting my closet, I managed to gather a handful of useful components. I decided to keep things simple and avoid making the project unnecessarily complex, opting for ready-made modules instead of assembling everything from individual parts. This approach saves time and reduces the chances of compatibility issues. The components I settled on are:
- Microcontroller: ESP32-LPkit
- 4-digit 7-segment display with integrated controller: TM1637
- Temperature and humidity sensor: DHT22
- Carbon dioxide sensor: MH-Z19
- PIR motion sensor: AM312
- Illumination sensor: VEML7700
- I2S-compatible microphone module: SPH0645LM4H
- A couple of micro switches
- A few LEDs with appropriate resistors
With this list, the essential environmental parameters should be well covered. The clock’s main function—displaying the current time—is handled by the bright 0.56-inch display. Additionally, the setup provides simple input options through buttons and possibly even voice commands in the future.
Chapter 2: Wiring Diagram
I went through the datasheets for all the components to determine the most effective way to connect them. After comparing different options and checking for compatibility, I finalized the following wiring diagram.
Chapter 3: Firmware
For the software part, I decided to use ESPHome, which offers an easy and reliable way to integrate the clock with Home Assistant. All the components from the inventory are natively supported, so there is no need to write much additional code.
The following example shows how the YAML configuration for the clock may look: espclock.yaml
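Since espclock.yaml isn't reproduced here, below is a minimal sketch of what such an ESPHome configuration could look like, using the TM1637 display and DHT22 from the inventory. The GPIO pin assignments are my assumptions, not the actual wiring:

```yaml
# Minimal ESPHome sketch for the clock; pin assignments are assumptions.
esphome:
  name: espclock
esp32:
  board: esp32dev

wifi:
  ssid: !secret wifi_ssid        # values come from secrets.yaml
  password: !secret wifi_password

api:  # native Home Assistant integration

time:
  - platform: homeassistant     # sync the clock from Home Assistant
    id: ha_time

display:
  - platform: tm1637
    clk_pin: GPIO18
    dio_pin: GPIO19
    update_interval: 1s
    lambda: |-
      it.strftime("%H.%M", id(ha_time).now());

sensor:
  - platform: dht
    model: DHT22
    pin: GPIO4
    temperature:
      name: "Room Temperature"
    humidity:
      name: "Room Humidity"
```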
Capyboard, ESP32 Development Board for Education by emiler
Capyboard is an ESP32 development board built to accept individual custom-made modules. The board is created primarily for use in education, where you want to focus on embedded programming instead of spending time connecting cables and parts on a breadboard, as you would with an Arduino and other such devices. The board is not limited to education, though; it can be used to build, for instance, a very capable home meteo station.
- github.com/realcharmer/capyboard
- github.com/realcharmer/capyboard-starter
- github.com/realcharmer/capyboard-docs
- github.com/realcharmer/capyboard-examples
- docs.capyboard.dev
Hack Week 25
My plan is to create a new revision of the board with updated dimensions and possibly even use a new ESP32 with Zigbee/Thread support. I also want to create an extensive library of example projects and expand the documentation. It would be nice to also design additional modules, such as a multiplexer or an environment module.
Goals
- Implement changes to a new board revision
- Design additional modules
- Expand documentation and examples
- Migrate documentation backend from MkDocs to Zensical
Hack Week 24
I created a new motherboard revision after testing my previous prototype, as well as a light module. This project was also a part of my master's thesis, which was defended successfully.
Goals
- Finish testing of a new prototype
- Publish source files
- Documentation completion
- Finish writing thesis
Self-Scaling LLM Infrastructure Powered by Rancher by ademicev0
Description
The Problem
Running LLMs can get expensive and complex pretty quickly.
Today there are typically two choices:
- Use cloud APIs like OpenAI or Anthropic. Easy to start with, but costs add up at scale.
- Self-host everything - set up Kubernetes, figure out GPU scheduling, handle scaling, manage model serving... it's a lot of work.
What if there was a middle ground?
What if infrastructure scaled itself instead of making you scale it?
Can we use existing Rancher capabilities like CAPI, autoscaling, and GitOps to make this simpler instead of building everything from scratch?
Project Repository: github.com/alexander-demicev/llmserverless
What This Project Does
A key feature is hybrid deployment: requests can be routed based on complexity or privacy needs. Simple or low-sensitivity queries can use public APIs (like OpenAI), while complex or private requests are handled in-house on local infrastructure. This flexibility allows balancing cost, privacy, and performance - using cloud for routine tasks and on-premises resources for sensitive or demanding workloads.
A complete, self-scaling LLM infrastructure that:
- Scales to zero when idle (no idle costs)
- Scales up automatically when requests come in
- Adds more nodes when needed, removes them when demand drops
- Runs on any infrastructure - laptop, bare metal, or cloud
Think of it as "serverless for LLMs" - focus on building, the infrastructure handles itself.
How It Works
A combination of open source tools working together:
Flow:
- Users interact with OpenWebUI (chat interface)
- Requests go to LiteLLM Gateway
- LiteLLM routes requests either to Ollama (on Knative) for local model inference, with pods auto-scaling, or to cloud APIs as a fallback (see the configuration sketch below)
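A sketch of what the LiteLLM gateway's routing config could look like under this flow; the model names, the in-cluster Ollama URL, and the fallback wiring are illustrative assumptions:

```yaml
# Hypothetical LiteLLM proxy config: local Ollama first, cloud as fallback.
model_list:
  - model_name: local-llama
    litellm_params:
      model: ollama/llama3
      api_base: http://ollama.default.svc.cluster.local:11434  # Knative service
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY   # read from the environment

router_settings:
  fallbacks:
    - local-llama:       # if local inference fails, retry against the cloud
        - gpt-4o
```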
Liz - Prompt autocomplete by ftorchia
Description
Liz is the Rancher AI assistant for cluster operations.
Goals
We want to help users as they write new messages to Liz by adding an autocomplete feature that completes their requests based on the context.
Example:
- User prompt: "Can you show me the list of p"
- Autocomplete suggestion: the prompt is completed to "Can you show me the list of pods in local cluster?"
Example:
- User prompt: "Show me the logs of #rancher-"
- Chat console: It shows a drop-down widget, next to the # character, with the list of available pod names starting with "rancher-".
Technical Overview
- The AI agent should expose a new ws/autocomplete endpoint to proxy autocomplete messages to the LLM (a minimal sketch follows below).
- The UI extension should be able to display prompt suggestions and allow users to apply the autocomplete to the Prompt via keyboard shortcuts.
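A minimal sketch of the proxy side of that design, here using Python with FastAPI. The endpoint path comes from the overview above; complete_prompt is a hypothetical stand-in for the real LLM call:

```python
# Hypothetical sketch of the ws/autocomplete proxy endpoint.
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

async def complete_prompt(prefix: str) -> str:
    """Placeholder for the real LLM call; returns the suggested suffix."""
    return "od in local cluster?" if prefix.endswith("p") else ""

@app.websocket("/ws/autocomplete")
async def autocomplete(ws: WebSocket):
    await ws.accept()
    try:
        while True:
            prefix = await ws.receive_text()            # partial prompt from the UI
            await ws.send_text(await complete_prompt(prefix))
    except WebSocketDisconnect:
        pass  # user closed the chat console
```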
Resources
The Agentic Rancher Experiment: Do Androids Dream of Electric Cattle? by moio
Rancher is a beast of a codebase. Let's investigate if the new 2025 generation of GitHub Autonomous Coding Agents and Copilot Workspaces can actually tame it. 
The Plan
Create a sandbox GitHub Organization, clone in key Rancher repositories, and let the AI loose to see if it can handle real-world enterprise OSS maintenance - or if it just hallucinates new breeds of Kubernetes resources!
Specifically, throw "Agentic Coders" some typical tasks in a complex, long-lived open-source project, such as:
❥ The Grunt Work: generate missing GoDocs, unit tests, and refactorings. Rebase PRs.
❥ The Complex Stuff: fix actual (historical) bugs and feature requests to see if they can traverse the complexity without (too much) human hand-holding.
❥ Hunting Down Gaps: find areas lacking in docs, areas of improvement in code, dependency bumps, and so on.
If time allows, also experiment with Model Context Protocol (MCP) to give agents context on our specific build pipelines and CI/CD logs.
Why?
We know AI can write "Hello World" and also moderately complex programs from a green field. But can it rebase a 3-month-old PR with conflicts in rancher/rancher? I want to find the breaking point of current AI agents to determine if and how they can help us reduce our technical debt and work faster and better, and at the same time find out about pitfalls and shortcomings.
The Outputs
❥ A "State of the Agentic Union" for SUSE engineers, detailing what works, what explodes, and how much coffee we can drink while the robots do the rebasing.
❥ Honest, Daily Updates With All the Gory Details
Cluster API Provider for Harvester by rcase
Project Description
The Cluster API "infrastructure provider" for Harvester, also named CAPHV, makes it possible to use Harvester with Cluster API. This enables people and organisations to create Kubernetes clusters running on VMs created by Harvester using a declarative spec.
The project has been bootstrapped in HackWeek 23, and its code is available here.
Work done in HackWeek 2023
- Have an early working version of the provider available on Rancher Sandbox: DONE
- Demonstrated the created cluster can be imported using Rancher Turtles: DONE
- Stretch goal - demonstrate using the new provider with CAPRKE2: DONE and the templates are available on the repo
DONE in HackWeek 24:
- Add more Unit Tests
- Improve Status Conditions for some phases
- Add cloud provider config generation
- Testing with Harvester v1.3.2
- Template improvements
- Issues creation
DONE in 2025 (out of Hackweek)
- Support for ClusterClass
- Added to the clusterctl community providers; you can add it directly with clusterctl (see the example below)
- Testing on newer versions of Harvester, v1.4.X and v1.5.X
- Support for clusterctl generate cluster ...
- Improve Status Conditions to reflect the current state of the infrastructure
- Improve CI (some bugs for release creation)
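Assuming the provider is registered under the name harvester, consuming it should look roughly like this (cluster name, version, and flags are illustrative):

```
# Install the infrastructure provider, then template and apply a cluster.
clusterctl init --infrastructure harvester
clusterctl generate cluster my-cluster \
  --infrastructure harvester \
  --kubernetes-version v1.30.0 | kubectl apply -f -
```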
Goals for HackWeek 2025
- FIRST and FOREMOST, any topic that is important to you
- Add e2e testing
- Certify the provider for Rancher Turtles
- Add Machine pool labeling
- Add PCI-e passthrough capabilities.
- Other improvement suggestions are welcome!
Thanks to @isim and Dominic Giebert for their contributions!
Resources
Looking for help from anyone interested in Cluster API (CAPI) or who wants to learn more about Harvester.
This will be an infrastructure provider for Cluster API. Some background reading for the CAPI aspect:
A CLI for Harvester by mohamed.belgaied
Harvester does not officially come with a CLI tool; the user is supposed to interact with Harvester mostly through the UI. Though it is theoretically possible to use kubectl to interact with Harvester, the manipulation of KubeVirt YAML objects is absolutely not user friendly. Inspired by tools like Multipass from Canonical, which can easily and rapidly create one or multiple VMs, I began the development of Harvester CLI. Currently it works, but Harvester CLI needs some love to be up to date with Harvester v1.0.2, and needs some bug fixes and improvements as well.
Project Description
Harvester CLI is a command line interface tool written in Go, designed to simplify interfacing with a Harvester cluster as a user. It is especially useful for testing purposes as you can easily and rapidly create VMs in Harvester by providing a simple command such as:
harvester vm create my-vm --count 5
to create 5 VMs named my-vm-01 to my-vm-05.
Harvester CLI is functional but needs a number of improvements: bringing functionality up to date with Harvester v1.0.2 (some minor issues right now), modifying the default behaviour to create an openSUSE VM instead of an Ubuntu VM, solving some bugs, etc.
Github Repo for Harvester CLI: https://github.com/belgaied2/harvester-cli
Done in previous Hackweeks
- Create a Github actions pipeline to automatically integrate Harvester CLI to Homebrew repositories: DONE
- Automatically package Harvester CLI for openSUSE / Red Hat RPMs or DEBs: DONE
Goal for this Hackweek
The goal for this Hackweek is to bring Harvester CLI up-to-speed with latest Harvester versions (v1.3.X and v1.4.X), and improve the code quality as well as implement some simple features and bug fixes.
Some nice additions might be:
- Improve handling of namespaced objects
- Add features, such as network management or Load Balancer creation?
- Add more unit tests and, why not, e2e tests
- Improve CI
- Improve the overall code quality
- Test the program and create issues for it
Issue list is here: https://github.com/belgaied2/harvester-cli/issues
Resources
The project is written in Go and uses client-go, the Kubernetes Go client library, to communicate with the Harvester API (which is in fact Kubernetes).
Welcome contributions include:
- Testing it and creating issues
- Documentation
- Go code improvement
What you might learn
Harvester CLI might be interesting to you if you want to learn more about:
- GitHub Actions
- Harvester as a SUSE Product
- Go programming language
- Kubernetes API
- Kubevirt API objects (Manipulating VMs and VM Configuration in Kubernetes using Kubevirt)
Uyuni Health-check Grafana AI Troubleshooter by ygutierrez
Description
This project explores the feasibility of using the open-source Grafana LLM plugin to enhance the Uyuni Health-check tool with LLM capabilities. The idea is to integrate a chat-based "AI Troubleshooter" directly into existing dashboards, allowing users to ask natural-language questions about errors, anomalies, or performance issues.
Goals
- Investigate if and how the grafana-llm-app plug-in can be used within the Uyuni Health-check tool.
- Investigate if this plug-in can be used to query LLMs for troubleshooting scenarios.
- Evaluate support for local LLMs and external APIs through the plugin.
- Evaluate if and how the Uyuni MCP server could be integrated as another source of information.
Resources
Flaky Tests AI Finder for Uyuni and MLM Test Suites by oscar-barrios
Description
Our current Grafana dashboards provide a great overview of test suite health, including a panel for "Top failed tests." However, identifying which of these failures are due to legitimate bugs versus intermittent "flaky tests" is a manual, time-consuming process. These flaky tests erode trust in our test suites and slow down development.
This project aims to build a simple but powerful Python script that automates flaky test detection. The script will directly query our Prometheus instance for the historical data of each failed test, using the jenkins_build_test_case_failure_age metric. It will then format this data and send it to the Gemini API with a carefully crafted prompt, asking it to identify which tests show a flaky pattern.
The final output will be a clean JSON list of the most probable flaky tests, which can then be used to populate a new "Top Flaky Tests" panel in our existing Grafana test suite dashboard.
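A condensed sketch of that pipeline; the Prometheus URL, the prompt wording, and the Gemini model/endpoint details are assumptions (the finished script lives in the repo linked under Outcome):

```python
# Sketch: pull per-test failure history from Prometheus, ask Gemini
# which tests look flaky, and save the result for Grafana.
import json
import os
import requests

PROM_URL = "http://prometheus.internal:9090"   # assumption: internal server
QUERY = 'jenkins_build_test_case_failure_age{status=~"FAILED|REGRESSION"}'

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY})
series = resp.json()["data"]["result"]

history = [
    {"suite": s["metric"].get("suite"), "case": s["metric"].get("case"),
     "failed_since": s["metric"].get("failedsince")}
    for s in series
]

prompt = (
    "Given this test failure history, list the test cases that show an "
    "intermittent (flaky) pattern, as a JSON array:\n" + json.dumps(history)
)

# Gemini REST API call; model choice and response parsing are assumptions.
gemini_url = ("https://generativelanguage.googleapis.com/v1beta/models/"
              "gemini-1.5-flash:generateContent")
r = requests.post(gemini_url,
                  params={"key": os.environ["GEMINI_API_KEY"]},
                  json={"contents": [{"parts": [{"text": prompt}]}]})
answer = r.json()["candidates"][0]["content"]["parts"][0]["text"]

with open("flaky_tests.json", "w") as f:
    f.write(answer)  # JSON list consumed by the Grafana panel
```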
Goals
By the end of Hack Week, we aim to have a single, working Python script that:
- Connects to Prometheus and executes a query to fetch detailed test failure history.
- Processes the raw data into a format suitable for the Gemini API.
- Successfully calls the Gemini API with the data and a clear prompt.
- Parses the AI's response to extract a simple list of flaky tests.
- Saves the list to a JSON file that can be displayed in Grafana.
- Populates a new panel in our dashboard listing the flaky tests
Resources
- Jenkins Prometheus Exporter: https://github.com/uyuni-project/jenkins-exporter/
- Data Source: Our internal Prometheus server.
- Key Metric: jenkins_build_test_case_failure_age{jobname, buildid, suite, case, status, failedsince}
- Existing Query for Reference: count by (suite) (max_over_time(jenkins_build_test_case_failure_age{status=~"FAILED|REGRESSION", jobname="$jobname"}[$__range]))
- AI Model: the Google Gemini API
- Example about how to interact with Gemini API: https://github.com/srbarrios/FailTale/
- Visualization: Our internal Grafana Dashboard.
- Internal IaC: https://gitlab.suse.de/galaxy/infrastructure/-/tree/master/srv/salt/monitoring
Outcome
- Jenkins Flaky Test Detector: https://github.com/srbarrios/jenkins-flaky-tests-detector and its container
- IaC on MLM Team: https://gitlab.suse.de/galaxy/infrastructure/-/tree/master/srv/salt/monitoring/jenkinsflakytestsdetector?reftype=heads, https://gitlab.suse.de/galaxy/infrastructure/-/blob/master/srv/salt/monitoring/grafana/dashboards/flaky-tests.json?ref_type=heads, and others.
- Grafana Dashboard: https://grafana.mgr.suse.de/d/flaky-tests/flaky-tests-detection
Help Create A Chat Control Resistant Turnkey Chatmail/Deltachat Relay Stack - Rootless Podman Compose, OpenSUSE BCI, Hardened, & SELinux by 3nd5h1771fy
Description
The Mission: Decentralized & Sovereign Messaging
FYI: if you have never heard of "Chatmail", you can visit their site here, but simply put, it can be thought of as the underlying protocol/platform that decentralized messengers like Delta Chat use for their communications. Do not confuse it with the honeypot-looking, non-open-source, paid-for product with better SEO that directs you to chatmailsecure(dot)com.
In an era of increasing centralized surveillance by unaccountable bad actors (aka BigTech), "Chat Control," and the erosion of digital privacy, the need for sovereign communication infrastructure is critical. Chatmail is a pioneering initiative that bridges the gap between classic email and modern instant messaging, offering metadata-minimized, end-to-end encrypted (E2EE) communication that is interoperable and open.
However, unless you are a seasoned sysadmin, the current recommended deployment method of a Chatmail relay is rigid, fragile, difficult to properly secure, and effectively takes over the entire host the "relay" is deployed on.
Why This Matters
A simple, host-agnostic, reproducible deployment lowers the entry cost for anyone wanting to run a privacy-preserving, decentralized messaging relay. In an era of perpetually resurrected chat-control legislation threats, EU digital-sovereignty drives, and the many dangers of using big-tech messaging platforms (Apple iMessage, WhatsApp, FB Messenger, Instagram, SMS, Google Messages, etc.) for any type of communication, providing an easy-to-use alternative enables:
- Censorship resistance - No single entity controls the relay; operators can spin up new nodes quickly.
- Surveillance mitigation - End‑to‑end OpenPGP encryption ensures relay operators never see plaintext.
- Digital sovereignty - Communities can host their own infrastructure under local jurisdiction, aligning with national data‑policy goals.
By turning the Chatmail relay into a plug‑and‑play container stack, we enable broader adoption, foster a resilient messaging fabric, and give developers, activists, and hobbyists a concrete tool to defend privacy online.
Goals
As I indicated earlier, this project aims to drastically simplify the deployment of a Chatmail relay. By converting this architecture into a portable, containerized stack using Podman and openSUSE base container images, we can allow anyone to deploy their own censorship-resistant, privacy-preserving communications node in minutes.
Our goal for Hack Week: package every component into containers built on openSUSE/MicroOS base images, initially orchestrated with a single container-compose.yml (podman-compose compatible; see the sketch after this list). The stack will:
- Run on any host that supports Podman (including optimizations and enhancements for SELinux‑enabled systems).
- Allow network decoupling by refactoring configurations to move from file-system-constrained Unix sockets to internal TCP networking, allowing containers to achieve stricter isolation.
- Enhance security with SELinux: using purpose-built utilities such as udica, we can quickly generate custom SELinux policies for the container stack, ensuring strict confinement superior to standard/typical Docker deployments.
- Allow the use of bind- or remote-mounted volumes for shared data (/var/vmail, DKIM keys, TLS certs, etc.).
- Replace the local DNS server requirement with a remote DNS-provider API for DKIM/TXT record publishing.
By delivering a turnkey, host-agnostic, reproducible deployment, we lower the barrier for individuals and small communities to launch their own Chatmail relays, fostering a decentralized, censorship-resistant messaging ecosystem that can serve Delta Chat users and/or future services adopting this protocol.
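For orientation, here is a rough sketch of the shape such a container-compose.yml could take, showing two of the usual mail-stack services. Service and image names are placeholders, not the stack's final layout:

```yaml
# Hypothetical container-compose.yml shape for the relay stack.
services:
  postfix:
    image: localhost/chatmail-postfix:latest   # placeholder image name
    ports:
      - "25:25"
      - "465:465"
    volumes:
      - vmail:/var/vmail:Z     # :Z relabels the volume for SELinux
    networks: [mailnet]
  dovecot:
    image: localhost/chatmail-dovecot:latest   # placeholder image name
    ports:
      - "993:993"
    volumes:
      - vmail:/var/vmail:Z
    networks: [mailnet]        # internal TCP network instead of shared sockets
volumes:
  vmail: {}
networks:
  mailnet: {}
```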
Resources
- The links included above
- https://chatmail.at/doc/relay/
- https://delta.chat/en/help
- Project repo -> https://codeberg.org/EndShittification/containerized-chatmail-relay
Technical talks at universities by agamez
Description
This project aims to empower the next generation of tech professionals by offering hands-on workshops on containerization and Kubernetes, with a strong focus on open-source technologies. By providing practical experience with these cutting-edge tools and fostering a deep understanding of open-source principles, we aim to bridge the gap between academia and industry.
For now, the scope is limited to Spanish universities, since we already have the contacts and have started some conversations.
Goals
- Technical Skill Development: equip students with the fundamental knowledge and skills to build, deploy, and manage containerized applications using open-source tools like Kubernetes.
- Open-Source Mindset: foster a passion for open-source software, encouraging students to contribute to open-source projects and collaborate with the global developer community.
- Career Readiness: prepare students for industry-relevant roles by exposing them to real-world use cases, best practices, and open-source in companies.
Resources
- Instructors: experienced open-source professionals with deep knowledge of containerization and Kubernetes.
- SUSE Expertise: leverage SUSE's expertise in open-source technologies to provide insights into industry trends and best practices.
Rewrite Distrobox in go (POC) by fabriziosestito
Description
Rewriting Distrobox in Go.
Main benefits:
- Easier to maintain and to test
- Adapter pattern for different container backends (LXC, systemd-nspawn, etc.)
Goals
- Build a minimal starting point with core commands
- Keep the CLI interface compatible: existing users shouldn't notice any difference
- Use a clean Go architecture with adapters for different container backends (see the sketch after this list)
- Keep dependencies minimal and binary size small
- Benchmark against the original shell script
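A sketch of what that adapter seam could look like; the interface and method names are invented for illustration, not Distrobox's actual design:

```go
// Hypothetical adapter seam: each container backend implements the same
// small interface, and commands are written against it.
package main

import "fmt"

// Backend abstracts a container runtime (podman, LXC, systemd-nspawn, ...).
type Backend interface {
	Create(name, image string) error
	Enter(name string) error
}

// PodmanBackend would shell out to podman; stubbed here for brevity.
type PodmanBackend struct{}

func (PodmanBackend) Create(name, image string) error {
	fmt.Printf("podman create --name %s %s\n", name, image)
	return nil
}

func (PodmanBackend) Enter(name string) error {
	fmt.Printf("podman exec -it %s sh\n", name)
	return nil
}

func main() {
	var b Backend = PodmanBackend{} // swap in an LXC or nspawn adapter here
	_ = b.Create("mybox", "registry.opensuse.org/opensuse/tumbleweed")
	_ = b.Enter("mybox")
}
```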
Resources
- Upstream project: https://github.com/89luca89/distrobox/
- Distrobox site: https://distrobox.it/
- ArchWiki: https://wiki.archlinux.org/title/Distrobox
Set Uyuni to manage edge clusters at scale by RDiasMateus
Description
Prepare a PoC on how to use MLM to manage edge clusters. Those clusters are normally identical across locations, and we have a large number of them.
The goal is to produce a set of steps/best practices/scripts to help users manage this kind of setup.
Goals
step 1: Manual set-up
Goal: Have a running application in k3s and be able to update it using the System Upgrade Controller (SUC)
- Deploy a Micro 6.2 machine
- Deploy k3s, single node (https://docs.k3s.io/quick-start)
- Build/find a simple web application (static page)
- Build/find a helm chart to deploy the application
- Deploy the application on the k3s cluster
- Install app updates through helm update
- Install OS updates using MLM
step 2: Automate day 1
Goal: Trigger the application deployment and update from MLM
- Salt states for the application (with static data) (see the sketch after this list)
- Deploy the application helm chart, if not present
- Install app updates through helm chart parameters
- Link it to Git
- Define how to link the state to the machines (based on some pillar data? Using configuration channels by importing the state? Naming convention?)
- Use a git update to trigger a helm chart app update
- Recurrent state applying the configuration channel?
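A sketch of what the application state could look like, shelling out to helm via cmd.run rather than a dedicated Salt helm state; the chart name, repo, and pillar key are assumptions:

```yaml
# Hypothetical Salt state: deploy/update the app chart via helm.
deploy_webapp:
  cmd.run:
    - name: >
        helm upgrade --install webapp myrepo/webapp
        --namespace webapp --create-namespace
        --version {{ pillar.get('webapp_version', '1.0.0') }}
    - env:
        - KUBECONFIG: /etc/rancher/k3s/k3s.yaml
```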
step 3: Multi-node cluster
Goal: Use SUC to update a multi-node cluster.
- Create a multi-node cluster
- Deploy application
- Call the helm update/install only on the control plane?
- Install App updates through helm update
- Prepare a SUC Plan for OS updates (k3s too? How? See the illustrative Plan after this list)
- https://github.com/rancher/system-upgrade-controller
- https://documentation.suse.com/cloudnative/k3s/latest/en/upgrades/automated.html
- Update/deploy the SUC?
- Update/deploy the SUC CRD with the update procedure
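For the k3s part, a SUC Plan along these lines is the likely starting point, following the k3s automated-upgrades docs linked above; the version and node selector are illustrative:

```yaml
# Illustrative SUC Plan for upgrading k3s on server nodes.
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: k3s-server
  namespace: system-upgrade
spec:
  concurrency: 1                 # upgrade one node at a time
  version: v1.33.4+k3s1
  serviceAccountName: system-upgrade
  nodeSelector:
    matchExpressions:
      - {key: node-role.kubernetes.io/control-plane, operator: Exists}
  upgrade:
    image: rancher/k3s-upgrade
```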
SUSE Edge Image Builder json schema by eminguez
Description
The current SUSE Edge Image Builder (EIB) tool doesn't provide a JSON schema that defines the configuration file syntax, allowed values, etc. (yes, I know EIB uses YAML, but it turns out JSON Schema can be used to validate YAML documents, yay!).
Having a JSON schema will make integrations straightforward: once it is in place, it can be used as the interface for other tools that consume and generate EIB definition files (TUI wizards, web UIs, etc.).
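That consumption path is only a few lines in practice; a sketch using the Python jsonschema and PyYAML libraries, with assumed file names:

```python
# Sketch: validate an EIB YAML definition against the generated JSON schema.
import json

import jsonschema
import yaml

with open("eib-schema.json") as f:        # assumed schema file name
    schema = json.load(f)
with open("eib-definition.yaml") as f:    # an EIB config to check
    definition = yaml.safe_load(f)

# Raises jsonschema.ValidationError with a pointer to the offending field.
jsonschema.validate(instance=definition, schema=schema)
print("definition is valid")
```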
I'll make use of AI tools for this, so I can learn more about vibe coding, agents, etc.
Goals
- Learn about JSON schemas
- Try to implement something that can take the EIB source code and output an initial JSON schema definition
- Create a PR for EIB so the schema can be adopted
- Learn more about AI tools and how those can help on similar projects.
Resources
- json-schema.org
- suse-edge/edge-image-builder
- Any AI tool that can help me!
Result
Pull Request created! https://github.com/suse-edge/edge-image-builder/pull/821
I've extensively used Gemini via the VS Code "Gemini Code Assist" plugin, but I found it not too good: my workstation froze for minutes while using it, even though I have a pretty beefy MacBook Pro M2 and AFAIK the model is executed in the cloud. So I basically spent a few days fighting with it. Then I switched to Antigravity and its agent mode, and it worked much better.
I've ended up learning a few things about "prompting", json schemas in general, some golang and AI in general :)
SUSE Edge Image Builder MCP by eminguez
Description
Based on my other Hack Week project, SUSE Edge Image Builder's JSON Schema, I would like to also build an MCP server, to be able to generate EIB config files the AI way.
Realistically, I don't think I'll be able to have something consumable at the end of this Hack Week, but at least I would like to start exploring MCPs, the difference between an API and an MCP, etc.
Goals
- Familiarize myself with MCPs
- Unrealistic: Have an MCP that can generate an EIB config file
Resources
Result
https://github.com/e-minguez/eib-mcp
I've extensively used Antigravity and its agent mode to code this. It heavily uses https://hackweek.opensuse.org/25/projects/suse-edge-image-builder-json-schema for the MCP to be built.
I've ended up learning a lot of things about "prompting", json schemas in general, some golang, MCPs and AI in general :)
Example:
Generate an Edge Image Builder configuration for an ISO image based on slmicro-6.2.iso, targeting x86_64 architecture. The output name should be 'my-edge-image' and it should install to /dev/sda. It should deploy a 3-node Kubernetes cluster with nodes named "node1", "node2" and "node3" as:
* hostname: node1, IP: 1.1.1.1, role: initializer
* hostname: node2, IP: 1.1.1.2, role: agent
* hostname: node3, IP: 1.1.1.3, role: agent
The kubernetes version should be k3s 1.33.4-k3s1 and it should deploy a cert-manager helm chart (the latest one available according to https://cert-manager.io/docs/installation/helm/). It should create a user
called "suse" with password "suse" and set ntp to "foo.ntp.org". The VIP address for the API should be 1.2.3.4
Generates:
```
apiVersion: "1.0"
image:
  arch: x86_64
  baseImage: slmicro-6.2.iso
  imageType: iso
  outputImageName: my-edge-image
kubernetes:
  helm:
    charts:
      - name: cert-manager
        repositoryName: jetstack
```