Description

Prepare a PoC on how to use MLM (SUSE Multi-Linux Manager) to manage edge clusters. Those clusters are normally identical across locations, and there is a large number of them.

The goal is to produce a set of salt states/best practices/scripts to help users manage this kind of setup.

Goals

step 1: Manual set-up

Goal: Have a running application in k3s and be able to update it using the System Upgrade Controller (SUC). A sketch of the manual flow follows the list below.

  • Deploy a SL Micro 6.2 machine
  • Deploy k3s - single node

    • https://docs.k3s.io/quick-start
  • Build/find a simple web application (static page)

    • Build/find a helmchart to deploy the application
  • Deploy the application on the k3s cluster

  • Install app updates through helm upgrade

  • Install OS updates using MLM
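
For orientation, the manual flow could look roughly like this (a sketch assuming the k3s quick-start defaults; the chart name demo-app is a placeholder):

```
# Single-node k3s, per https://docs.k3s.io/quick-start
curl -sfL https://get.k3s.io | sh -

# Deploy the demo application from a local chart
helm install demo-app ./demo-app --kubeconfig /etc/rancher/k3s/k3s.yaml

# Later, roll out an application update
helm upgrade demo-app ./demo-app --kubeconfig /etc/rancher/k3s/k3s.yaml
```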

step 2: Automate day 1

Goal: Trigger the application deployment and update from MLM

  • Salt states for the application (with static data)
    • Deploy the application helm chart, if not present (a sketch follows this list)
    • Install app updates through helm chart parameters
  • Link it to Git
    • Define how to link the state to the machines (based on some pillar data? Using configuration channels by importing the state? A naming convention?)
    • Use a Git update to trigger the helm chart app update
  • Recurrent state apply via a configuration channel?
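
A minimal sketch of the deploy state, using plain helm through cmd.run so the chart is only installed when the release does not exist yet (all names and paths are placeholders, not the project's actual files):

```
# deployapp/init.sls -- hypothetical sketch
deploy-demo-app:
  cmd.run:
    - name: helm install demo-app /srv/charts/demo-app
    - env:
      - KUBECONFIG: /etc/rancher/k3s/k3s.yaml
    - unless: helm status demo-app
```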

step 3: Multi-node cluster

Goal: Use SUC to update a multi-node cluster.

  • Create a multi-node cluster
  • Deploy application
    • Call helm upgrade/install only on the control plane?
  • Install app updates through helm upgrade
  • Prepare a SUC plan for OS updates (k3s too? How? See the example Plan after this list.)

    • https://github.com/rancher/system-upgrade-controller
    • https://documentation.suse.com/cloudnative/k3s/latest/en/upgrades/automated.html
    • Update/deploy the SUC?
    • Update/deploy the SUC Plan CR with the update procedure
    • Update the SUC Plan CR in Git and apply the state to trigger the update of the machine.
  • Salt states to deploy the k3s cluster?
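
For reference, a SUC Plan CR for upgrading the k3s server nodes could look like the following, modeled on the automated-upgrades documentation linked above (the version value is only an example):

```
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: k3s-server
  namespace: system-upgrade
spec:
  concurrency: 1          # one node at a time
  cordon: true            # cordon the node before upgrading
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: In
        values: ["true"]
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  version: v1.33.4+k3s1   # example target version
```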

step 4: Customize salt states

Goal: Make it easy to set the version to install.

  • Do we need to customize the pillar data when setting up the salt states from step 2?
  • Provide a mechanism to customize salt states (application version) based on pillar (an example pillar file follows this list)

    • Pillars can be set in Git, and we can have a hierarchy definition. Test how to set pillars specific to minions. How to set pillars for each machine or group of machines? Use system groups (only the group ID is exposed, so some small code changes would be needed) or glue it with configuration channels?
    • Where to store the pillar data? Git pillar data, or should/can we have it on MLM?
  • Another option could be to define a new salt formula where we can have a UI to collect the pillar data and customize the pillar

    • The formula state could just include the state from Git
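
For illustration, the pillar data driving such customization could be as small as this (the keys are hypothetical and would have to match whatever the states consume):

```
# pillar/terminals.sls -- hypothetical keys
demo_app:
  chart_version: 0.2.0
  replicas: 2
k3s:
  version: v1.33.4+k3s1
```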

step 5: Scale it

  • Make it work on a Hub deployment, with more than one peripheral server.

step 6: Build a customized image for provisioning

  • Build a deployable image with k3s
  • Should we have an Ignition script that downloads the bootstrap script and runs it? (a sketch follows this list)
    • An alternative would be to have the agent pre-installed
    • We would need to have the MLM server FQDN pre-defined
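
A possible sketch of that first-boot hook, written here as a Combustion script (SL Micro's first-boot mechanism, closely related to Ignition) and assuming the standard MLM bootstrap script location; the server FQDN is a placeholder:

```
#!/bin/bash
# combustion: network
# Hypothetical first-boot hook: fetch the bootstrap script from the
# pre-defined MLM server and register the machine.
curl -Sks https://mlm.example.com/pub/bootstrap/bootstrap.sh -o /tmp/bootstrap.sh
bash /tmp/bootstrap.sh
```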

Resources

Looking for hackers with the skills:

uyuni susemanager edge

This project is part of:

Hack Week 25

Activity

  • 15 days ago: ygutierrez liked this project.
  • 17 days ago: pgonin liked this project.
  • 17 days ago: mfarina joined this project.
  • 17 days ago: mfarina liked this project.
  • 18 days ago: j_renner liked this project.
  • 18 days ago: moio liked this project.
  • 18 days ago: e_bischoff liked this project.
  • 20 days ago: RDiasMateus added keyword "susemanager" to this project.
  • 20 days ago: RDiasMateus added keyword "edge" to this project.
  • 20 days ago: RDiasMateus started this project.
  • 20 days ago: RDiasMateus added keyword "uyuni" to this project.
  • 20 days ago: RDiasMateus originated this project.

  • Comments

    • RDiasMateus
      16 days ago by RDiasMateus

      1. Working VM with SL Micro 6.2

        • k3s
        • helm
        • k9s
      2. Working image: deployed to Docker Hub - https://github.com/rjmateus/hackweek_2025/tree/main/image - https://hub.docker.com/repository/registry-1.docker.io/rjmateus/hw2025/tags

      3. Working helm chart - https://github.com/rjmateus/hackweek_2025/tree/main/helm - https://hub.docker.com/repository/docker/rjmateus/demo-app/general

         - deployed to a single-node k3s cluster
      

      4. Configure the connection to salt: states and pillars - states are done - pillars are done

      5. Machine registered to an MLM server, version 5.1.1

      6. Run a salt state coming from Git on the machine - looking here for the available states and how to install them: https://github.com/rjmateus/hackweek2025/tree/main/salt - ran `salt 'US*' state.apply deployapp` - the command failed, since it runs inside a transaction; if run outside, it should work - running in direct mode makes one of the states work: `salt 'US*' --module-executors='[direct_call]' state.apply deployapp`

      Decision 1: Helm definition and salt installation

      1. Use the salt kubernetes/helm states - https://docs.saltproject.io/en/latest/ref/states/all/salt.states.kubernetes.html - https://docs.saltproject.io/en/latest/ref/states/all/salt.states.helm.html - the user would have to define each component in salt states. This changes the way users normally work, without much benefit over the standard approach.

      2. Helm definition inside the salt folder - allows using salt pillar data directly in the helm definition - only one version at a time. To have more than one version, the user needs to duplicate the helm chart or handle the differences conditionally - copies the helm files to the local machine.

      3. Helm chart published in a registry - allows having different versions of the helm chart - pillar data can be passed as variables to the helm chart to customize each deployment - multiple helm chart versions can easily co-exist and be deployed on different machines (facilitates incremental roll-out).

      *** Suggestion: *** Option 3.
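
      A sketch of option 3 from salt, with the chart version and values taken from pillar (the registry path and pillar keys are illustrative, not the project's actual layout):

      ```
      # upgradeapp/init.sls -- hypothetical
      upgrade-demo-app:
        cmd.run:
          - name: >
              helm upgrade --install demo-app
              oci://registry-1.docker.io/rjmateus/demo-app
              --version {{ salt['pillar.get']('demo_app:chart_version', '0.1.0') }}
              --set replicaCount={{ salt['pillar.get']('demo_app:replicas', 1) }}
          - env:
            - KUBECONFIG: /etc/rancher/k3s/k3s.yaml
      ```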

      Decision 2: Pillar definition and salt state assignment

      We would need to be able to customize the salt states using pillar data: helm chart version/location, application version/location, etc. Normally, in large-scale deployments customers have a naming convention in place. We can try to leverage it.

      States can be defined in the Git repository. How should these states be assigned to each machine?

      1. top.sls file, with regexes matching the machines (can also be used with system groups)
      2. MLM configuration channels
      3. A salt formula assigned to system groups or directly to the machine (MLM for Retail uses this approach)

      Pillar data can also be defined in Git, but MLM can also set it:

      1. Pillars in the Git repo using a top.sls file, which needs to match the machines - users would be able to control the version to install on each machine through Git. It would be GitOps.
      2. Custom pillar data on each machine - needs to be defined manually for each machine and updated individually.
      3. Salt formulas - define the pillar data at system or group level - tied to the salt state - not GitOps.

      *** Suggestion: *** If we want to focus on GitOps, we should not use salt formulas. If we don't need GitOps, salt formulas would be the simplest approach on MLM. For pure GitOps, we should use option 1 for both states and pillars. We could also use a hybrid approach and have the states in Git, assign them with MLM, and keep the pillar data with option 3.

      *** Approach: *** Will try to set up pillar assignment in Git using system groups for matching. Salt state definitions in Git, but salt state assignment on MLM (using groups).

      Naming convention

      • Assumed naming convention (a matching top file is sketched below)
        • US01-S001-T001-N0 - machine deployed
        • US01-S001-T002-N0
        • US01-S001-T002-N1
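
      With that convention, a states top file in Git could target, for example, all terminals by minion ID glob (a sketch; the file layout is hypothetical):

      ```
      # salt/top.sls -- hypothetical
      base:
        'US*-S*-T*':
          - deployapp
      ```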

    • RDiasMateus
      15 days ago by RDiasMateus

      Day 2:

      • Provide a structure for pillar and states definition
        • Done.
      • Automate the application deployment through salt.
        • Done
      • Automate k3s deployment
        • Done
      • Trigger application update through a version update in the Git pillar data, using the MLM server/salt
        • DONE
      • Automatically install the application and its dependencies on machine registration in MLM
        • DONE
      • Recurrent state apply
        • Not tested

      Deploy MLM with:

      • Activation key with sles15sp7 channels
      • Assign the configuration channel to the system
        • For automatic deployment on registration, set the flag to apply the highstate on registration

      Naming convention:

      • Assuming naming convention
        • US01-S001-T001-N0 - machine deployed with sle micro 5.1
        • US01-S001-T002-N0 - sles15sp7
        • US01-S001-T003-N0 - sles15sp7
        • US01-S001-T003-N1 - sles15sp7

      Pillar definition

      • top.sls file to assign pillars to machines (see the sketch below)

        • Gives flexibility to users so they can target the systems however fits them best.
        • In this example, target all machines that are terminals
        • Note that users cannot use pillar data to define other pillar information: the top file targeting can only match on the minion ID and grains.
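
      For example, the pillar top file can combine minion ID globs and grain matches, but not pillar values (a sketch; the sls names are hypothetical):

      ```
      # pillar/top.sls -- hypothetical
      base:
        'US*-S*-T*':            # minion ID glob: all terminals
          - terminals
        'G@osrelease:15.7':     # grain match via compound matcher
          - match: compound
          - sles15sp7
      ```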

      Have a look at the repo: https://github.com/rjmateus/hackweek_2025

    • moio
      14 days ago by moio

      Great job so far!

    • RDiasMateus
      14 days ago by RDiasMateus

      Day 3

      1. Update the install sls file to create the initial cluster with a static key

      2. Single node with the System Upgrade Controller installed; the k3s version is controlled from the Git repo

      3. Able to trigger a k3s update on a single-node k3s cluster by changing the k3s version in the pillar data

      4. Configure pillar data in Git to automatically install k3s and join it to an existing cluster

      5. Trigger a k3s update on a multi-node cluster with the System Upgrade Controller, with the k3s version defined in Git (a sketch of the wiring follows this list)

      6. Control the application version and number of replicas from the Git pillar repository.
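
      A sketch of how points 3 and 5 can be wired together: a state renders the SUC Plan from a Jinja template that reads the k3s version from pillar, and re-applies it only when the rendered file changes (all paths are hypothetical):

      ```
      # k3s/upgrade.sls -- hypothetical
      /srv/k3s-server-plan.yaml:
        file.managed:
          - source: salt://k3s/files/server-plan.yaml.jinja
          - template: jinja      # template reads pillar['k3s']['version']

      apply-k3s-plan:
        cmd.run:
          - name: kubectl apply -f /srv/k3s-server-plan.yaml
          - env:
            - KUBECONFIG: /etc/rancher/k3s/k3s.yaml
          - onchanges:
            - file: /srv/k3s-server-plan.yaml
      ```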

    • RDiasMateus
      13 days ago by RDiasMateus

      Day 4.

      Run more tests and prepare a demo presentation. It will be published soon.

    • RDiasMateus
      13 days ago by RDiasMateus

      The demo is temporarily at this location. If someone cannot access it, let me know and I'll share it somewhere else: https://drive.google.com/file/d/1EBn91WgllXsthcsQzg6fz0T6MIQlQrLm/view?usp=drive_link

      • pgonin
        9 days ago by pgonin

        video is ok for me

    • jeremy_moffitt
      13 days ago by jeremy_moffitt

      I get a "file not found" error on the demo url

    Similar Projects

    Ansible to Salt integration by vizhestkov

    Description

    We already have initial integration of Ansible in Salt with the possibility to run playbooks from the salt-master on the salt-minion used as an Ansible Control node.

    In this project I want to check if it is possible to make Ansible work over the Salt transport. Basically, run playbooks with Ansible through the existing established Salt (ZeroMQ) transport and not use ssh at all.

    It could be a good solution for end users to reuse Ansible playbooks, or run the Ansible modules they are used to, with no complex configuration effort on an existing Salt (or Uyuni/SUSE Multi Linux Manager) infrastructure.

    Goals

    • [v] Prepare the testing environment with Salt and Ansible installed
    • [v] Discover Ansible codebase to figure out possible ways of integration
    • [v] Create Salt/Uyuni inventory module
    • [v] Make basic modules work without a separate ssh connection, reusing the existing Salt connection
    • [v] Test some of the most basic playbooks

    Resources

    GitHub page

    Video of the demo


    mgr-ansible-ssh - Intelligent, Lightweight CLI for Distributed Remote Execution by deve5h

    Description

    By the end of Hack Week, the target will be to deliver a minimal functional version 1 (MVP) of a custom command-line tool named mgr-ansible-ssh (a unified wrapper for BOTH ad-hoc shell & playbooks) that allows operators to:

    1. Execute arbitrary shell commands on thousands of remote machines simultaneously using Ansible Runner, with artifacts saved locally.
    2. Pass runtime options such as inventory file, remote command string / playbook execution, parallel forks, limits, dry-run mode, or no-std-ansible-output.
    3. Leverage existing SSH trust relationships without additional setup.
    4. Provide a clean, intuitive CLI interface with --help for ease of use. It should provide consistent UX & CI-friendly interface.
    5. Establish a foundation that can later be extended with advanced features such as logging, grouping, interactive shell mode, safe-command checks, and parallel execution tuning.

    The MVP should enable day-to-day operations to efficiently target thousands of machines with a single, consistent interface.

    Goals

    Primary Goals (MVP):

    Build a functional CLI tool (mgr-ansible-ssh) capable of executing shell commands on multiple remote hosts using Ansible Runner. Test the tool across a large distributed environment (1000+ machines) to validate its performance and reliability.

    Looking forward to significantly reducing the zypper deployment time across all 351 RMT VM servers in our MLM cluster by eliminating the dependency on the taskomatic service, bringing execution down to a fraction of the current duration. The tool should also support multiple runtime flags, such as:

    mgr-ansible-ssh: Remote command execution wrapper using Ansible Runner
    
    Usage: mgr-ansible-ssh [--help] [--version] [--inventory INVENTORY]
                       [--run RUN] [--playbook PLAYBOOK] [--limit LIMIT]
                       [--forks FORKS] [--dry-run] [--no-ansible-output]
    
    Required Arguments
    --inventory, -i      Path to Ansible inventory file to use
    
    Any One of the Arguments Is Required
    --run, -r            Execute the specified shell command on target hosts
    --playbook, -p       Execute the specified Ansible playbook on target hosts
    
    Optional Arguments
    --help, -h           Show the help message and exit
    --version, -v        Show the version and exit
    --limit, -l          Limit execution to specific hosts or groups
    --forks, -f          Number of parallel Ansible forks
    --dry-run            Run in Ansible check mode (requires -p or --playbook)
    --no-ansible-output  Suppress Ansible stdout output
    

    Secondary/Stretched Goals (if time permits):

    1. Add pretty output formatting (success/failure summary per host).
    2. Implement basic logging of executed commands and results.
    3. Introduce safety checks for risky commands (shutdown, rm -rf, etc.).
    4. Package the tool so it can be installed with pip or stored internally.

    Resources

    Collaboration is welcome from anyone interested in CLI tooling, automation, or distributed systems. Skills that would be particularly valuable include:

    1. Python especially around CLI dev (argparse, click, rich)


    Move Uyuni Test Framework from Selenium to Playwright + AI by oscar-barrios

    Description

    This project aims to migrate the existing Uyuni Test Framework from Selenium to Playwright. The move will improve the stability, speed, and maintainability of our end-to-end tests by leveraging Playwright's modern features. We'll be rewriting the current Selenium code in Ruby to Playwright code in TypeScript, which includes updating the test framework runner, step definitions, and configurations. This is also necessary because we're moving from Cucumber Ruby to CucumberJS.

    If you're still curious about the AI in the title, it was just a way to grab your attention. Thanks for your understanding.

    Nah, let's be honest: AI helped a lot to vibe code a good part of the Ruby methods of the test framework, moving them to TypeScript, along with the migration from Capybara to Playwright. I've been using "Cline" as a plugin for the WebStorm IDE, with the Gemini API behind it.


    Goals

    • Migrate Core tests including Onboarding of clients
    • Improve test reliability: Measure and confirm a significant reduction in flakiness.
    • Implement a robust framework: Establish a well-structured and reusable Playwright test framework using CucumberJS

    Resources


    Set Up an Ephemeral Uyuni Instance by mbussolotto

    Description

    To test, check, and verify the latest changes in the master branch, we want to easily set up an ephemeral environment.

    Goals

    • Create an ephemeral environment manually
    • Create an ephemeral environment automatically

    Resources

    • https://github.com/uyuni-project/uyuni

    • https://www.uyuni-project.org/uyuni-docs/en/uyuni/index.html


    Uyuni Health-check Grafana AI Troubleshooter by ygutierrez

    Description

    This project explores the feasibility of using the open-source Grafana LLM plugin to enhance the Uyuni Health-check tool with LLM capabilities. The idea is to integrate a chat-based "AI Troubleshooter" directly into existing dashboards, allowing users to ask natural-language questions about errors, anomalies, or performance issues.

    Goals

    • Investigate if and how the grafana-llm-app plug-in can be used within the Uyuni Health-check tool.
    • Investigate if this plug-in can be used to query LLMs for troubleshooting scenarios.
    • Evaluate support for local LLMs and external APIs through the plugin.
    • Evaluate if and how the Uyuni MCP server could be integrated as another source of information.

    Resources

    Grafana LLM plug-in

    Uyuni Health-check


    Enhance setup wizard for Uyuni by PSuarezHernandez

    Description

    This project wants to enhance the initial setup of Uyuni after its installation, so it's easier for a user to start using it.

    Uyuni currently uses "uyuni-tools" (mgradm) as the installation entrypoint to trigger the installation of Uyuni on the given host, but it does not really perform an initial setup, for instance:

    • user creation
    • adding products / channels
    • generating bootstrap repos
    • create activation keys
    • ...

    Goals

    • Provide an initial setup wizard as part of the mgradm Uyuni installation

    Resources


    Testing and adding GNU/Linux distributions on Uyuni by juliogonzalezgil

    Join the Gitter channel! https://gitter.im/uyuni-project/hackweek

    Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines. It also manages configuration, can run audits, build image containers, monitor and much more!

    Currently there are a few distributions that are completely untested on Uyuni or SUSE Manager (AFAIK), or just not tested for a long time, and it could be interesting to know how hard it would be to work with them and, if possible, fix whatever is broken.

    For newcomers, the easiest distributions are those based on DEB or RPM packages. Distributions with other package formats are doable, but will require adapting the Python and Java code to be able to sync and analyze such packages (and if salt does not support those packages, it will need changes as well). So if you want a distribution with other packages, make sure you are comfortable handling such changes.

    No developer experience? No worries! We had non-developer contributors in the past, and we are ready to help as long as you are willing to learn. If you don't want to code at all, you can also help us prepare the documentation after someone else has the initial code ready, or you could also help with testing :-)

    The idea is to test Salt (including bootstrapping with the bootstrap script) and salt-ssh clients.

    To consider that a distribution has basic support, we should cover at least (points 3-6 are to be tested for both salt minions and salt ssh minions):

    1. Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
    2. Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
    3. Package management (install, remove, update...)
    4. Patching
    5. Applying any basic salt state (including a formula)
    6. Salt remote commands
    7. Bonus point: Java part for product identification, and monitoring enablement
    8. Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
    9. Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
    10. Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)

    If something is breaking: we can try to fix it, but the main idea is to research how well supported it is right now. Beyond that, it's up to each project member how much to hack :-)

    • If you don't have knowledge about some of the steps: ask the team
    • If you still don't know what to do: switch to another distribution and keep testing.

    This card is for EVERYONE, not just developers. Seriously! We had people from other teams helping that were not developers, and added support for Debian and new SUSE Linux Enterprise and openSUSE Leap versions :-)

    In progress/done for Hack Week 25

    Guide

    We started writing a Guide: Adding a new client GNU Linux distribution to Uyuni at https://github.com/uyuni-project/uyuni/wiki/Guide:-Adding-a-new-client-GNU-Linux-distribution-to-Uyuni, to make things easier for everyone, especially those not too familiar with Uyuni or not technical.

    openSUSE Leap 16.0

    The distribution we all love!

    https://en.opensuse.org/openSUSE:Roadmap#DRAFTScheduleforLeap16.0

    Current status: We started last year, and it's complete now for Hack Week 25! :-D

    • [W] Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file) NOTE: Done; the client tools for SL Micro 6 are used, as those for SLE 16.0/openSUSE Leap 16.0 are not available yet
    • [W] Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
    • [W] Package management (install, remove, update...). Works, even reboot requirement detection


    SUSE Edge Image Builder MCP by eminguez

    Description

    Based on my other hackweek project, SUSE Edge Image Builder's Json Schema, I would like to also build an MCP to be able to generate EIB config files the AI way.

    Realistically, I don't think I'll be able to have something consumable at the end of this hackweek, but at least I would like to start exploring MCPs, the difference between an API and an MCP, etc.

    Goals

    • Familiarize myself with MCPs
    • Unrealistic: Have an MCP that can generate an EIB config file

    Resources

    Result

    https://github.com/e-minguez/eib-mcp

    I've extensively used antigravity and its agent mode to code this. This heavily uses https://hackweek.opensuse.org/25/projects/suse-edge-image-builder-json-schema for the MCP to be built.

    I've ended up learning a lot of things about "prompting", json schemas in general, some golang, MCPs and AI in general :)

    Example:

    Generate an Edge Image Builder configuration for an ISO image based on slmicro-6.2.iso, targeting x86_64 architecture. The output name should be 'my-edge-image' and it should install to /dev/sda. It should deploy a 3 nodes kubernetes cluster with nodes names "node1", "node2" and "node3" as: * hostname: node1, IP: 1.1.1.1, role: initializer * hostname: node2, IP: 1.1.1.2, role: agent * hostname: node3, IP: 1.1.1.3, role: agent The kubernetes version should be k3s 1.33.4-k3s1 and it should deploy a cert-manager helm chart (the latest one available according to https://cert-manager.io/docs/installation/helm/). It should create a user called "suse" with password "suse" and set ntp to "foo.ntp.org". The VIP address for the API should be 1.2.3.4

    Generates:

    ```
    apiVersion: "1.0"
    image:
      arch: x86_64
      baseImage: slmicro-6.2.iso
      imageType: iso
      outputImageName: my-edge-image
    kubernetes:
      helm:
        charts:
          - name: cert-manager
            repositoryName: jetstack
    ```


    SUSE Edge Image Builder json schema by eminguez

    Description

    The current SUSE Edge Image Builder tool doesn't provide a JSON schema (yes, I know EIB uses YAML, but it seems JSON Schema can be used to validate YAML documents, yay!) that defines the configuration file syntax, values, etc.

    Having a json schema will make integrations straightforward, as once the json schema is in place, it can be used as the interface for other tools to consume and generate EIB definition files (like TUI wizards, web UIs, etc.)

    I'll make use of AI tools for this so I'd learn more about vibe coding, agents, etc.

    Goals

    • Learn about json schemas
    • Try to implement something that can take the EIB source code and output an initial json schema definition
    • Create a PR for EIB to be adopted
    • Learn more about AI tools and how those can help on similar projects.

    Resources

    Result

    Pull Request created! https://github.com/suse-edge/edge-image-builder/pull/821

    I've extensively used gemini via the VScode "gemini code assist" plugin but I found it not too good... my workstation froze for minutes using it... I have a pretty beefy macbook pro M2 and AFAIK the model is being executed on the cloud... so I basically spent a few days fighting with it... Then I switched to antigravity and its agent mode... and it worked much better.

    I've ended up learning a few things about "prompting", json schemas in general, some golang and AI in general :)