Build a network of ("edge") humidity sensors using Raspberry Pis with Sense HATs and additional, cheaper sensors

For our house, I want to track the effectiveness of regularly ventilating the rooms by adding humidity sensors and recording the measurements over time.

We've already started with this little project:

https://github.com/benediktwerner/humidity-logger

Goal for this Hackweek

The setup we built over the holidays works just fine, but there are a few practical issues, and I have a few stretch goals:

  • With a Raspberry Pi plus the Sense HAT, a single sensor is pretty expensive and over-specced. Using a Raspberry Pi as the master is OK (especially as I already have two, with two Sense HATs), but I'd like to add extra sensors that connect wirelessly to one of the Raspberry Pis via Bluetooth or WiFi. Those could either be ready-made or a combination of a "bare" sensor and a cheap board like the Raspberry Pi Pico W or a similar board (e.g., based on the ESP32); see the sketch after this list.

  • Currently, there's only a Grafana dashboard with a "forever" history. I'd love to add extra reporting, e.g. sending alerts when certain humidity thresholds are exceeded (a small alerting sketch follows below) and archiving older data.

  • None of the setup is "SUSEfied" (using SUSE Linux images, k3s, Rancher, ...). I'd love to change that, so that the setup can be used as a showcase for SUSE Edge. The stretch goal would be to make the SUSE version at least as easy to use as the current Raspberry Pi OS setup.
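
To make the first bullet above concrete, here is a minimal sketch of such a sensor node in MicroPython on a Raspberry Pi Pico W, pushing readings over WiFi to a collector service on the master Raspberry Pi. The network credentials, the collector endpoint, and read_humidity() are placeholders; a real node would call a driver for whichever sensor is chosen (DHT22, BME280, ...).

```python
# Sketch of a cheap WiFi sensor node (MicroPython on a Pico W).
# SSID, collector endpoint, and read_humidity() are placeholders.
import time
import network
import urequests  # named "requests" in recent MicroPython releases

COLLECTOR = "http://192.168.0.10:8080/measurements"  # hypothetical endpoint

def read_humidity():
    # Placeholder: swap in a real driver, e.g. dht.DHT22 or a BME280 module.
    return 48.2

wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect("my-ssid", "my-passphrase")  # placeholder credentials
while not wlan.isconnected():
    time.sleep(1)

while True:
    payload = {"room": "living-room", "humidity": read_humidity()}
    try:
        urequests.post(COLLECTOR, json=payload).close()
    except OSError:
        pass  # collector unreachable; just retry on the next cycle
    time.sleep(60)  # one reading per minute is plenty for room climate
```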

I'm looking for contributors who want to hack on the hardware part (building an affordable Bluetooth or WiFi humidity/temperature sensor from components), on the "SUSEfied" software stack, or on both.

The software stack has many areas to work on, from building out-of-the-box containers that can be deployed from Rancher to improving the Grafana dashboards.
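
Picking up the alerting goal from the list above: Grafana's built-in alerting is the obvious candidate, but as a sketch, a small script on the master Pi could also periodically compare the recent per-room average against a threshold. This assumes the influxdb-client package and an InfluxDB 2.x instance; the bucket, token, org, and notify() hook are all placeholders.

```python
# Sketch of a humidity threshold check against InfluxDB 2.x.
# Bucket, token, org, and notify() are placeholders, not the real setup.
from influxdb_client import InfluxDBClient

THRESHOLD = 65.0  # % relative humidity; would be tuned per room

query = '''
from(bucket: "humidity")
  |> range(start: -15m)
  |> filter(fn: (r) => r._measurement == "humidity")
  |> mean()
'''

def notify(room, value):
    # Placeholder: hook up e-mail, Matrix, ntfy, ... here.
    print(f"ALERT: {room} at {value:.1f}% relative humidity")

with InfluxDBClient(url="http://localhost:8086", token="...", org="home") as client:
    for table in client.query_api().query(query):
        for record in table.records:  # one record per series after mean()
            if record.get_value() > THRESHOLD:
                notify(record.values.get("room", "unknown"), record.get_value())
```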

Resources

  • https://github.com/benediktwerner/humidity-logger
  • https://www.raspberrypi.com/products/sense-hat/
  • https://www.raspberrypi.com/documentation/microcontrollers/raspberry-pi-pico.html
  • https://community.ibm.com/community/user/cloud/blogs/alexei-karve/2022/05/08/microshift-15

This project is part of:

Hack Week 22

Activity

  • about 2 months ago: gpathak liked this project.
  • about 2 months ago: gpathak disliked this project.
  • almost 3 years ago: maritawerner liked this project.
  • almost 3 years ago: dancermak liked this project.
  • almost 3 years ago: gpathak liked this project.
  • almost 3 years ago: gpathak started this project.
  • almost 3 years ago: mbrugger liked this project.
  • almost 3 years ago: aschnell liked this project.
  • almost 3 years ago: joachimwerner added keyword "containers" to this project.
  • almost 3 years ago: joachimwerner added keyword "helm" to this project.
  • almost 3 years ago: joachimwerner added keyword "microcontroller" to this project.
  • almost 3 years ago: joachimwerner added keyword "edge" to this project.
  • almost 3 years ago: joachimwerner added keyword "elemental" to this project.
  • almost 3 years ago: joachimwerner added keyword "sensors" to this project.
  • almost 3 years ago: joachimwerner added keyword "grafana" to this project.
  • almost 3 years ago: joachimwerner added keyword "influxdb" to this project.
  • almost 3 years ago: joachimwerner added keyword "raspberrypi" to this project.
  • almost 3 years ago: joachimwerner added keyword "esp32" to this project.
  • almost 3 years ago: joachimwerner added keyword "microos" to this project.
  • almost 3 years ago: joachimwerner added keyword "k3s" to this project.
  • almost 3 years ago: joachimwerner added keyword "rancher" to this project.
  • almost 3 years ago: joachimwerner liked this project.
  • almost 3 years ago: joachimwerner originated this project.

Comments

    • idefx
      almost 3 years ago by idefx | Reply

      Hello! Have you checked out the Home Assistant and ESPHome projects?

      I run Home Assistant on a k3s cluster with two Raspberry Pi 4s and two low-power Intel machines (a VM inside a NAS, and a NUC). Everything runs on SLE Micro; I use Rancher to manage the cluster and Longhorn for persistent data. For the sensor part, I have a couple of M5Stack Atom Lite boards. They support a variety of sensors, and with ESPHome it is super easy to connect them to Home Assistant. From there you can design automations, mobile notifications, etc., and even plug it into other services so that you get a phone call if something goes wrong, for example.

      Don't hesitate to reach out to me if you want to discuss this!

    • joachimwerner
      almost 3 years ago by joachimwerner | Reply

      Thanks for the great pointers! We started off with a much smaller scope (no home automation, really just data gathering and visualisation), but it makes perfect sense to think of it in the context of Home Assistant in the future (e.g., so that a smart thermostat automatically shuts down the heating in a room while it's being ventilated). I will certainly get back to you with some questions.

    • joachimwerner
      almost 3 years ago by joachimwerner | Reply

      Found this on how to get the Sense Hat to work on openSUSE: https://community.ibm.com/community/user/cloud/blogs/alexei-karve/2022/05/08/microshift-15

    • gpathak
      almost 3 years ago by gpathak | Reply

      Hi @joachimwerner! For adding extra sensors, I found that it can be done with a DHT22 and an ESP8266. Some information about interfacing the DHT22 with the ESP8266 can be found here: Getting Started With the ESP8266 and DHT22 Sensor
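
For reference, reading a DHT22 takes only a few lines if the board runs MicroPython, whose firmware ships a dht driver (the GPIO pin below is just an example):

```python
# MicroPython on an ESP8266/ESP32: one DHT22 reading (example pin GPIO4).
import dht
from machine import Pin

sensor = dht.DHT22(Pin(4))
sensor.measure()  # the DHT22 supports at most one reading every ~2 seconds
print(sensor.temperature(), "C")
print(sensor.humidity(), "% relative humidity")
```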

    • bigironman
      almost 3 years ago by bigironman | Reply

      An alternative solution might be a Raspberry Pi Pico W with MicroPython and a BME280 sensor (temperature, humidity, pressure). It is easy to program, and you can integrate it into nearly everything via WiFi. I'm using it in combination with Home Assistant and MQTT.
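
A rough sketch of that combination, assuming MicroPython's bundled umqtt.simple client and one of the community bme280 driver modules; the wiring, broker address, and topic are placeholders, and the WiFi setup is omitted:

```python
# Pico W + BME280 publishing humidity over MQTT (sketch).
# Assumes a community bme280 driver module copied onto the board;
# WiFi connection setup omitted for brevity.
import time
from machine import Pin, I2C
import bme280                    # community driver, not part of the firmware
from umqtt.simple import MQTTClient

i2c = I2C(0, scl=Pin(1), sda=Pin(0))                  # example wiring
sensor = bme280.BME280(i2c=i2c)
mqtt = MQTTClient("pico-livingroom", "192.168.0.10")  # hypothetical broker
mqtt.connect()

while True:
    temperature, pressure, humidity = sensor.values   # formatted strings
    mqtt.publish(b"home/livingroom/humidity", humidity.encode())
    time.sleep(60)
```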

    Similar Projects

    Build a Single Camera 3D Scanner (Photogrammetry) by lparkin

    Description

    I want to see how fast I can develop a single-camera (Pi Camera Module 3) rig with a stepper motor controlling a turntable that rotates the model being scanned. The trick here is not to be super fancy with hundreds of sensors and data inputs; quite the opposite: I want to see how accurately I can scan objects into 3D-printable models using only a camera and as many fixed and known parameters as possible.

    Development speed is to be augmented with an agentic AI coding companion. As it stands, I have a 3D printer and pretty much all the electronics I need.

    Goals

    • Design and print working/workable camera rig
    • Design and print working/workable turntable (considering printing my own cylinder-style bearings as well)
    • Assemble rig components into MVP assembly
    • Develop an application that can hook into existing tools, or leverage a library like OpenCV, to process 2D images into a 3D model (see the sketch after this list).
    • Iterate until models are good enough to 3D print.
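
To make the OpenCV goal above concrete: matching features between two consecutive turntable frames is the basic building block photogrammetry pipelines use to recover camera poses. A minimal sketch with placeholder file names:

```python
# Match ORB features between two consecutive turntable frames with OpenCV.
import cv2

img1 = cv2.imread("frame_000.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder files
img2 = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching suits ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches; best distance {matches[0].distance}")
```

With a known turntable step angle per frame, such matches constrain the relative camera pose far more tightly than in unconstrained photogrammetry, which is exactly the "fixed and known parameters" advantage described above.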

    Resources

    • https://www.instructables.com/3D-scanning-Photogrammetry-with-a-rotating-platfor/
    • https://www.instructables.com/3d-Scan-Anything-Using-Just-a-Camera/
    • https://www.instructables.com/Build-a-DIY-Desktop-3d-Scanner-With-Infinite-Resol/
    • https://www.instructables.com/3D-Laser-Scanning-DIY/


    Capyboard, ESP32 Development Board for Education by emiler

    Capyboard is an ESP32 development board built to accept individual custom-made modules. The board is created primarily for use in education, where you want to focus on embedded programming instead of spending time connecting cables and parts on a breadboard, as you would with an Arduino and similar devices. The board is not limited to education; it can be used to build, for instance, a very powerful indoor weather station and so on.

    Hack Week 25

    My plan is to create a new revision of the board with updated dimensions and possibly even use a new ESP32 with Zigbee/Thread support. I also want to create an extensive library of example projects and expand the documentation. It would be nice to also design additional modules, such as a multiplexer or an environment module.

    Goals

    • Implement changes to a new board revision
    • Design additional modules
    • Expand documentation and examples
    • Migrate documentation backend from MkDocs to Zensical

    Hack Week 24

    I created a new motherboard revision after testing my previous prototype, as well as a light module. This project was also a part of my master's thesis, which was defended successfully.

    Goals

    • Finish testing of a new prototype
    • Publish source files
    • Documentation completion
    • Finish writing thesis


    Play with esp32 to create domotics stuff by aginies

    Description

    Play with an ESP32 board and multiple small peripherals

    https://github.com/aginies/domotique

    Goals

    • Finish the pool project
    • Add support for NFC auth in the door project
    • Improve the documentation
    • Start a project to manage a solar panel (router)

    Resources

    esp32 home


    ESPClock: An open-source smart desk clock with Home Assistant integration by jbaier_cz

    Description

    ESPClock will be an open-source, Wi-Fi connected digital clock powered by ESP32 and ESPHome, designed to seamlessly integrate with Home Assistant. Featuring a 3D-printable case, the clock combines modern style with smart home functionality.

    Goals

    Key features:

    • real-time clock
    • native Home Assistant integration
    • optional sensors for temperature, humidity and ambient light
    • custom 3D-printable case
    • open-source firmware and hardware design
    • easy YAML-based configuration

    Resources

    1. https://esphome.io/
    2. https://gist.github.com/baierjan/773e20a5061780f0a27ed86619dbffba

    The Hacking

    Chapter 1: Inventory

    After thoroughly inspecting my closet, I managed to gather a handful of useful components. I decided to keep things simple and avoid making the project unnecessarily complex, opting for ready-made modules instead of assembling everything from individual parts. This approach saves time and reduces the chances of compatibility issues. The components I settled on are:

    • Microcontroller: ESP32-LPkit
    • 4-digit 7-segment display with integrated controller: TM1637
    • Temperature and humidity sensor: DHT22
    • Carbon dioxide sensor: MH-Z19
    • PIR motion sensor: AM312
    • Illumination sensor: VEML7700
    • I2S-compatible microphone module: SPH0645LM4H
    • A couple of micro switches
    • A few LEDs with appropriate resistors

    With this list, the essential environmental parameters should be well covered. The clock’s main function—displaying the current time—is handled by the bright 0.56-inch display. Additionally, the setup provides simple input options through buttons and possibly even voice commands in the future.

    Chapter 2: Wiring Diagram

    I went through the datasheets for all the components to determine the most effective way to connect them. After comparing different options and checking for compatibility, I finalized the following wiring diagram.

    Chapter 3: Firmware

    For the software part, I decided to use ESPHome, which offers an easy and reliable way to integrate the clock with Home Assistant. All the components from the inventory are natively supported, so there is no need to write much additional code.

    The following example shows how the YAML configuration for the clock may look: espclock.yaml


    SUSE Virtualization (Harvester): VM Import UI flow by wombelix

    Description

    SUSE Virtualization (Harvester) has a vm-import-controller that allows migrating VMs from VMware and OpenStack, but users need to write manifest files and apply them with kubectl to use it. This project is about adding the missing UI pieces to the harvester-ui-extension, making VM Imports accessible without requiring Kubernetes and YAML knowledge.

    VMware and OpenStack admins aren't automatically familiar with Kubernetes and YAML. Implementing the UI part of the VM Import feature makes it easier to use and more accessible. The Harvester Enhancement Proposal (HEP) for the VM Migration controller included a UI flow implementation in its scope. Issue #2274 received multiple comments saying that a UI integration would be a nice addition, and issue #4663 was created to request the implementation but eventually stalled.

    Right now users need to manually create either VmwareSource or OpenstackSource resources, then write VirtualMachineImport manifests with network mappings and all the other configuration options. Users should be able to do that and track import status through the UI without writing YAML.

    Work during the Hack Week will be done in this fork in a branch called suse-hack-week-25, making progress publicly visible and open for contributions. When everything works out and the branch is in good shape, it will be submitted as a pull request to harvester-ui-extension to get it included in the next Harvester release.

    Testing will focus on VMware since that's what is available in the lab environment (SUSE Virtualization 1.6 single-node cluster, ESXi 8.0 standalone host). Given that this is about UI and surfacing what the vm-import-controller handles, the implementation should work for OpenStack imports as well.

    This project is also a personal challenge to learn Vue.js and get familiar with Rancher Extensions development, since harvester-ui-extension is built on that framework.

    Goals

    • Learn Vue.js and Rancher Extensions fundamentals required to finish the project
    • Read and learn from other Rancher UI Extensions code, especially understanding the harvester-ui-extension code base
    • Understand what the vm-import-controller and its CRDs require; identify ready-to-use components in the Rancher UI Extension API that can be leveraged
    • Implement UI logic for creating and managing VmwareSource / OpenstackSource and VirtualMachineImport resources with all relevant configuration options and credentials
    • Implement UI elements to display VirtualMachineImport status and errors

    Resources

    HEP and related discussion

    SUSE Virtualization VM Import Documentation

    Rancher Extensions Documentation

    Rancher UI Plugin Examples

    Vue Router Essentials

    Vue Router API

    Vuex Documentation


    Rancher Cluster Lifecycle Visualizer by jferraz

    Description

    Rancher’s v2 provisioning system represents each downstream cluster with several Kubernetes custom resources across multiple API groups, such as clusters.provisioning.cattle.io and clusters.management.cattle.io. Understanding why a cluster is stuck in states like "Provisioning", "Updating", or "Unavailable" often requires jumping between these resources, reading conditions, and correlating them with agent connectivity and known failure modes.

    This project will build a Cluster Lifecycle Visualizer: a small, read-only controller that runs in the Rancher management cluster and generates a single, human-friendly view per cluster. It will watch Rancher cluster CRDs, derive a simplified lifecycle phase, keep a history of phase transitions from installation time onward, and attach a short, actionable recommendation string that hints at what the operator should check or do next.
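
The goals below call for idiomatic Go with wrangler, but the core loop can be sketched in a few lines with the Python Kubernetes client: watch the provisioning clusters, derive a phase from their conditions, and record transitions. The phase rules here are invented placeholders, not the project's actual ruleset.

```python
# Sketch: watch Rancher provisioning clusters and derive a simple phase.
# (The real project targets Go/wrangler; the phase rules are invented.)
from collections import defaultdict
from kubernetes import client, config, watch

config.load_kube_config()
api = client.CustomObjectsApi()
history = defaultdict(list)  # cluster name -> ordered list of phases

def derive_phase(status):
    conds = {c["type"]: c.get("status") for c in status.get("conditions", [])}
    if conds.get("Ready") == "True":
        return "Active"
    if conds.get("Updated") == "False":
        return "Updating"
    return "Provisioning"

for event in watch.Watch().stream(
        api.list_cluster_custom_object,
        group="provisioning.cattle.io", version="v1", plural="clusters"):
    obj = event["object"]
    name = obj["metadata"]["name"]
    phase = derive_phase(obj.get("status", {}))
    if not history[name] or history[name][-1] != phase:
        history[name].append(phase)  # a phase transition worth recording
        print(name, "->", phase)
```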

    Goals

    • Provide a compact lifecycle summary for each Rancher-managed cluster (e.g. Provisioning, WaitingForClusterAgent, Active, Updating, Error) derived from provisioning.cattle.io/v1 Cluster and management.cattle.io/v3 Cluster status and conditions.
    • Maintain a phase history for each cluster, allowing operators to see how its state evolved over time since the visualizer was installed.
    • Attach a recommended action to the current phase using a small ruleset based on common Rancher failure modes (for example, cluster agent not connected, cluster still stabilizing after an upgrade, or generic error states), to improve the day-to-day debugging experience.
    • Deliver an easy-to-install, read-only component (single YAML or small Helm chart) that Rancher users can deploy to their management cluster and inspect via kubectl get/describe, without UI changes or direct access to downstream clusters.
    • Use idiomatic Go, wrangler, and Rancher APIs.

    Resources

    • Rancher Manager documentation on RKE2 and K3s cluster configuration and provisioning flows.
    • Rancher API Go types for provisioning.cattle.io/v1 and management.cattle.io/v3 (from the rancher/rancher repository or published Go packages).
    • Existing Rancher architecture docs and internal notes about cluster provisioning, cluster agents, and node agents.
    • A local Rancher management cluster (k3s or RKE2) with a few test downstream clusters to validate phase detection, history tracking, and recommendations.


    Cluster API Provider for Harvester by rcase

    Project Description

    The Cluster API "infrastructure provider" for Harvester, also named CAPHV, makes it possible to use Harvester with Cluster API. This enables people and organisations to create Kubernetes clusters running on VMs created by Harvester using a declarative spec.

    The project has been bootstrapped in HackWeek 23, and its code is available here.

    Work done in HackWeek 2023

    • Have an early working version of the provider available on Rancher Sandbox: DONE
    • Demonstrate that the created cluster can be imported using Rancher Turtles: DONE
    • Stretch goal: demonstrate using the new provider with CAPRKE2: DONE, and the templates are available in the repo

    DONE in HackWeek 24:

    DONE in 2025 (out of Hackweek)

    • Support of ClusterClass
    • Added to the clusterctl community providers; you can add it directly with clusterctl
    • Testing on newer versions of Harvester v1.4.X and v1.5.X
    • Support for clusterctl generate cluster ...
    • Improve Status Conditions to reflect current state of Infrastructure
    • Improve CI (some bugs for release creation)

    Goals for HackWeek 2025

    • FIRST and FOREMOST, any topic that is important to you
    • Add e2e testing
    • Certify the provider for Rancher Turtles
    • Add Machine pool labeling
    • Add PCI-e passthrough capabilities.
    • Other improvement suggestions are welcome!

    Thanks to @isim and Dominic Giebert for their contributions!

    Resources

    Looking for help from anyone interested in Cluster API (CAPI) or who wants to learn more about Harvester.

    This will be an infrastructure provider for Cluster API. Some background reading for the CAPI aspect:


    Rancher/k8s Trouble-Maker by tonyhansen

    Project Description

    When studying for my RHCSA, I found trouble-maker, which is a program that breaks a Linux OS and requires you to fix it. I want to create something similar for Rancher/k8s that can allow for troubleshooting an unknown environment.

    Goals for Hackweek 25

    • Update to modern Rancher and verify that existing tests still work
    • Change testing logic to populate secrets instead of requiring a secondary script
    • Add new tests

    Goals for Hackweek 24 (Complete)

    • Create a basic framework for creating Rancher/k8s cluster lab environments as needed for the Break/Fix
    • Create at least 5 modules that can be applied to the cluster and require troubleshooting

    Resources

    • https://github.com/celidon/rancher-troublemaker
    • https://github.com/rancher/terraform-provider-rancher2
    • https://github.com/rancher/tf-rancher-up
    • https://github.com/rancher/quickstart


    Liz - Prompt autocomplete by ftorchia

    Description

    Liz is the Rancher AI assistant for cluster operations.

    Goals

    We want to help users when sending new messages to Liz, by adding an autocomplete feature to complete their requests based on the context.

    Example:

    • User prompt: "Can you show me the list of p"
    • Autocomplete suggestion: "Can you show me the list of p...od in local cluster?"

    Example:

    • User prompt: "Show me the logs of #rancher-"
    • Chat console: It shows a drop-down widget, next to the # character, with the list of available pod names starting with "rancher-".

    Technical Overview

    1. The AI agent should expose a new ws/autocomplete endpoint to proxy autocomplete messages to the LLM (sketched below).
    2. The UI extension should be able to display prompt suggestions and allow users to apply the autocomplete to the Prompt via keyboard shortcuts.
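
A minimal sketch of what the ws/autocomplete endpoint from point 1 could look like, assuming FastAPI on the agent side; complete() is a hypothetical helper standing in for the actual LLM call:

```python
# Sketch of a ws/autocomplete proxy endpoint (assumes FastAPI;
# complete() is a hypothetical stand-in for the real LLM call).
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

async def complete(partial_prompt: str) -> str:
    # Hypothetical: forward the partial prompt to the LLM with cluster
    # context and return the suggested continuation.
    return partial_prompt + "od in local cluster?"

@app.websocket("/ws/autocomplete")
async def autocomplete(ws: WebSocket):
    await ws.accept()
    try:
        while True:
            partial = await ws.receive_text()      # prompt typed so far
            await ws.send_text(await complete(partial))
    except WebSocketDisconnect:
        pass  # user closed the prompt box
```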

    Resources

    GitHub repository


    Uyuni Health-check Grafana AI Troubleshooter by ygutierrez

    Description

    This project explores the feasibility of using the open-source Grafana LLM plugin to enhance the Uyuni Health-check tool with LLM capabilities. The idea is to integrate a chat-based "AI Troubleshooter" directly into existing dashboards, allowing users to ask natural-language questions about errors, anomalies, or performance issues.

    Goals

    • Investigate if and how the grafana-llm-app plug-in can be used within the Uyuni Health-check tool.
    • Investigate if this plug-in can be used to query LLMs for troubleshooting scenarios.
    • Evaluate support for local LLMs and external APIs through the plugin.
    • Evaluate if and how the Uyuni MCP server could be integrated as another source of information.

    Resources

    Grafana LLM plug-in

    Uyuni Health-check


    Flaky Tests AI Finder for Uyuni and MLM Test Suites by oscar-barrios

    Description

    Our current Grafana dashboards provide a great overview of test suite health, including a panel for "Top failed tests." However, identifying which of these failures are due to legitimate bugs versus intermittent "flaky tests" is a manual, time-consuming process. These flaky tests erode trust in our test suites and slow down development.

    This project aims to build a simple but powerful Python script that automates flaky test detection. The script will directly query our Prometheus instance for the historical data of each failed test, using the jenkins_build_test_case_failure_age metric. It will then format this data and send it to the Gemini API with a carefully crafted prompt, asking it to identify which tests show a flaky pattern.

    The final output will be a clean JSON list of the most probable flaky tests, which can then be used to populate a new "Top Flaky Tests" panel in our existing Grafana test suite dashboard.

    Goals

    By the end of Hack Week, we aim to have a single, working Python script that:

    1. Connects to Prometheus and executes a query to fetch detailed test failure history.
    2. Processes the raw data into a format suitable for the Gemini API.
    3. Successfully calls the Gemini API with the data and a clear prompt.
    4. Parses the AI's response to extract a simple list of flaky tests.
    5. Saves the list to a JSON file that can be displayed in Grafana.
    6. Feeds a new panel in our Grafana dashboard listing the flaky tests.
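
A compressed sketch of that pipeline, assuming the google-generativeai package and Prometheus' HTTP query API; the Prometheus address, API key, model name, and prompt wording are placeholders:

```python
# Sketch: fetch failure history from Prometheus, ask Gemini which tests
# look flaky, and save the answer for Grafana. Address, key, model name,
# and prompt wording are placeholders.
import json
import requests
import google.generativeai as genai

PROM = "http://prometheus:9090"  # placeholder address
resp = requests.get(f"{PROM}/api/v1/query",
                    params={"query": "jenkins_build_test_case_failure_age"})
series = resp.json()["data"]["result"]

prompt = (
    "Given this per-test failure history, return a JSON array of the tests "
    "whose failures look intermittent (flaky) rather than persistent:\n"
    + json.dumps(series)
)

genai.configure(api_key="...")                      # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")   # example model name
answer = model.generate_content(prompt)

with open("flaky_tests.json", "w") as f:
    f.write(answer.text)  # picked up by the new Grafana panel
```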

    Resources

    Outcome


    Technical talks at universities by agamez

    Description

    This project aims to empower the next generation of tech professionals by offering hands-on workshops on containerization and Kubernetes, with a strong focus on open-source technologies. By providing practical experience with these cutting-edge tools and fostering a deep understanding of open-source principles, we aim to bridge the gap between academia and industry.

    For now, the scope is limited to Spanish universities, since we already have the contacts and have started some conversations.

    Goals

    • Technical Skill Development: equip students with the fundamental knowledge and skills to build, deploy, and manage containerized applications using open-source tools like Kubernetes.
    • Open-Source Mindset: foster a passion for open-source software, encouraging students to contribute to open-source projects and collaborate with the global developer community.
    • Career Readiness: prepare students for industry-relevant roles by exposing them to real-world use cases, best practices, and open-source in companies.

    Resources

    • Instructors: experienced open-source professionals with deep knowledge of containerization and Kubernetes.
    • SUSE Expertise: leverage SUSE's expertise in open-source technologies to provide insights into industry trends and best practices.


    Help Create A Chat Control Resistant Turnkey Chatmail/Deltachat Relay Stack - Rootless Podman Compose, OpenSUSE BCI, Hardened, & SELinux by 3nd5h1771fy

    Description

    The Mission: Decentralized & Sovereign Messaging

    FYI: If you have never heard of "Chatmail", you can visit their site here; simply put, it can be thought of as the underlying protocol/platform that decentralized messengers like DeltaChat use for their communications. Do not confuse it with the honeypot-looking, non-open-source, paid-for product with better SEO that directs you to chatmailsecure(dot)com.

    In an era of increasing centralized surveillance by unaccountable bad actors (aka BigTech), "Chat Control," and the erosion of digital privacy, the need for sovereign communication infrastructure is critical. Chatmail is a pioneering initiative that bridges the gap between classic email and modern instant messaging, offering metadata-minimized, end-to-end encrypted (E2EE) communication that is interoperable and open.

    However, unless you are a seasoned sysadmin, the current recommended deployment method of a Chatmail relay is rigid, fragile, difficult to properly secure, and effectively takes over the entire host the "relay" is deployed on.

    Why This Matters

    A simple, host-agnostic, reproducible deployment lowers the entry cost for anyone wanting to run a privacy‑preserving, decentralized messaging relay. In an era of perpetually resurrected chat‑control legislation threats, EU digital‑sovereignty drives, and the many dangers of using big‑tech messaging platforms (Apple iMessage, WhatsApp, FB Messenger, Instagram, SMS, Google Messages, etc.) for any type of communication, providing an easy‑to‑use alternative empowers:

    • Censorship resistance - No single entity controls the relay; operators can spin up new nodes quickly.
    • Surveillance mitigation - End‑to‑end OpenPGP encryption ensures relay operators never see plaintext.
    • Digital sovereignty - Communities can host their own infrastructure under local jurisdiction, aligning with national data‑policy goals.

    By turning the Chatmail relay into a plug‑and‑play container stack, we enable broader adoption, foster a resilient messaging fabric, and give developers, activists, and hobbyists a concrete tool to defend privacy online.

    Goals

    As indicated earlier, this project aims to drastically simplify the deployment of a Chatmail relay. By converting this architecture into a portable, containerized stack using Podman and openSUSE base container images, we can allow anyone to deploy their own censorship-resistant, privacy-preserving communications node in minutes.

    Our goal for Hack Week: package every component into containers built on openSUSE/MicroOS base images, initially orchestrated with a single container-compose.yml (podman-compose compatible). The stack will:

    • Run on any host that supports Podman (including optimizations and enhancements for SELinux‑enabled systems).
    • Allow network decoupling by refactoring configurations to move from file-system-constrained Unix sockets to internal TCP networking, allowing containers to achieve stricter isolation.
    • Utilize enhanced security with SELinux: using purpose-built utilities such as udica, we can quickly generate custom SELinux policies for the container stack, ensuring confinement stricter than typical Docker deployments.
    • Allow the use of bind or remote mounted volumes for shared data (/var/vmail, DKIM keys, TLS certs, etc.).
    • Replace the local DNS server requirement with a remote DNS‑provider API for DKIM/TXT record publishing.

    By delivering a turnkey, host-agnostic, reproducible deployment, we lower the barrier for individuals and small communities to launch their own Chatmail relays, fostering a decentralized, censorship‑resistant messaging ecosystem that can serve DeltaChat users and/or future services adopting this protocol.

    Resources


    Rewrite Distrobox in go (POC) by fabriziosestito

    Description

    Rewriting Distrobox in Go.

    Main benefits:

    • Easier to maintain and to test
    • Adapter pattern for different container backends (LXC, systemd-nspawn, etc.)

    Goals

    • Build a minimal starting point with core commands
    • Keep the CLI interface compatible: existing users shouldn't notice any difference
    • Use a clean Go architecture with adapters for different container backends
    • Keep dependencies minimal and binary size small
    • Benchmark against the original shell script

    Resources

    • Upstream project: https://github.com/89luca89/distrobox/
    • Distrobox site: https://distrobox.it/
    • ArchWiki: https://wiki.archlinux.org/title/Distrobox


    SUSE Edge Image Builder json schema by eminguez

    Description

    The current SUSE Edge Image Builder (EIB) tool doesn't provide a JSON schema that defines the configuration file syntax, values, etc. (yes, I know EIB uses YAML, but it turns out JSON Schema can be used to validate YAML documents, yay!).

    Having a JSON schema will make integrations straightforward: once it is in place, it can serve as the interface for other tools to consume and generate EIB definition files (TUI wizards, web UIs, etc.).
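
That interface idea is easy to demonstrate: once a schema exists, any tool can validate an EIB definition in a few lines of Python. The file names below are placeholders, and the schema itself is the artifact this project sets out to produce.

```python
# Validate an EIB YAML definition against a JSON schema (sketch; the
# file names are placeholders and the schema is this project's output).
import json
import yaml
from jsonschema import validate, ValidationError

with open("eib-schema.json") as f:
    schema = json.load(f)
with open("eib-definition.yaml") as f:
    # YAML parses into the same dict/list shapes that JSON Schema validates.
    definition = yaml.safe_load(f)

try:
    validate(instance=definition, schema=schema)
    print("definition is valid")
except ValidationError as err:
    print(f"invalid definition: {err.message}")
```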

    I'll make use of AI tools for this, so I'll learn more about vibe coding, agents, etc.

    Goals

    • Learn about JSON schemas
    • Try to implement something that can take the EIB source code and output an initial JSON schema definition
    • Create a PR so the schema can be adopted by EIB
    • Learn more about AI tools and how those can help on similar projects.

    Resources

    Result

    Pull Request created! https://github.com/suse-edge/edge-image-builder/pull/821

    I've extensively used Gemini via the VS Code "Gemini Code Assist" plugin, but I found it not too good: my workstation froze for minutes while using it. I have a pretty beefy MacBook Pro M2, and AFAIK the model is executed in the cloud, so I basically spent a few days fighting with it. Then I switched to Antigravity and its agent mode, and it worked much better.

    I've ended up learning a few things about "prompting", json schemas in general, some golang and AI in general :)


    SUSE Edge Image Builder MCP by eminguez

    Description

    Based on my other Hack Week project, SUSE Edge Image Builder json schema, I would also like to build an MCP server that can generate EIB config files the AI way.

    Realistically, I don't think I'll have something consumable by the end of this Hack Week, but at the very least I would like to start exploring MCPs, the difference between an API and an MCP, etc.

    Goals

    • Familiarize myself with MCPs
    • Unrealistic: Have an MCP that can generate an EIB config file

    Resources

    Result

    https://github.com/e-minguez/eib-mcp

    I've extensively used Antigravity and its agent mode to code this. It builds heavily on https://hackweek.opensuse.org/25/projects/suse-edge-image-builder-json-schema for the MCP.

    I've ended up learning a lot of things about "prompting", json schemas in general, some golang, MCPs and AI in general :)

    Example:

    Generate an Edge Image Builder configuration for an ISO image based on slmicro-6.2.iso, targeting x86_64 architecture. The output name should be 'my-edge-image' and it should install to /dev/sda. It should deploy a 3-node Kubernetes cluster with nodes named "node1", "node2" and "node3" as:

    • hostname: node1, IP: 1.1.1.1, role: initializer
    • hostname: node2, IP: 1.1.1.2, role: agent
    • hostname: node3, IP: 1.1.1.3, role: agent

    The Kubernetes version should be k3s 1.33.4-k3s1 and it should deploy a cert-manager helm chart (the latest one available according to https://cert-manager.io/docs/installation/helm/). It should create a user called "suse" with password "suse" and set NTP to "foo.ntp.org". The VIP address for the API should be 1.2.3.4.

    Generates:

```
apiVersion: "1.0"
image:
  arch: x86_64
  baseImage: slmicro-6.2.iso
  imageType: iso
  outputImageName: my-edge-image
kubernetes:
  helm:
    charts:
      - name: cert-manager
        repositoryName: jetstack
```

    (configuration truncated)


    Set Uyuni to manage edge clusters at scale by RDiasMateus

    Description

    Prepare a PoC on how to use MLM to manage edge clusters. Those clusters are normally identical across locations, and we have a large number of them.

    The goal is to produce a set of steps, best practices, and scripts to help users manage this kind of setup.

    Goals

    Step 1: Manual set-up

    Goal: Have a running application in k3s and be able to update it using the System Upgrade Controller (SUC)

    • Deploy a Micro 6.2 machine
    • Deploy k3s (single node): https://docs.k3s.io/quick-start
    • Build/find a simple web application (static page)
    • Build/find a Helm chart to deploy the application
    • Deploy the application on the k3s cluster
    • Install app updates through helm update
    • Install OS updates using MLM

    Step 2: Automate day 1

    Goal: Trigger the application deployment and update from MLM

    • Salt states for the application (with static data)
      • Deploy the application Helm chart, if not present
      • Install app updates through Helm chart parameters
    • Link it to Git
      • Define how to link the state to the machines (based on some pillar data? Using configuration channels by importing the state? A naming convention?)
      • Use a Git update to trigger a Helm chart app update
    • A recurring state applying the configuration channel?

    Step 3: Multi-node cluster

    Goal: Use SUC to update a multi-node cluster.

    • Create a multi-node cluster
    • Deploy the application
      • Call the helm update/install only on the control plane?
    • Install app updates through helm update
    • Prepare a SUC plan for OS updates (k3s too? How?)
      • https://github.com/rancher/system-upgrade-controller
      • https://documentation.suse.com/cloudnative/k3s/latest/en/upgrades/automated.html
      • Update/deploy the SUC?
      • Update/deploy the SUC CRD with the update procedure