Description

AI has the potential to help with something many of us spend a lot of time on: making sense of openQA logs when a job fails.

User Story

Allison Average has a puzzled look on their face while staring at log files that seem to make little sense. Is this a known issue, something completely new or maybe related to infrastructure changes?

Goals

  • Leverage a chat interface to help Allison (see the sketch after this list)
  • Create a model from scratch based on data from openQA
  • Proof of concept for automated analysis of openQA test results
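
To make the first goal concrete, here is a minimal sketch of how a locally running chat model could be asked to triage a failed job. It assumes a local Ollama server (the backend open-webui typically talks to) on its default port; the model name, system prompt and log file are placeholders, not part of the actual project.

```python
# Minimal sketch: ask a local LLM to triage an openQA failure.
# Assumptions: Ollama listens on localhost:11434, the "llama3.1" model is
# pulled, and "autoinst-log.txt" stands in for a downloaded job log.
import requests

with open("autoinst-log.txt") as f:
    log_excerpt = f.read()[-4000:]  # the tail of the log is usually the interesting part

payload = {
    "model": "llama3.1",
    "stream": False,
    "messages": [
        {"role": "system",
         "content": "You are an openQA test reviewer. Classify the failure as "
                    "product bug, test issue or infrastructure problem and explain briefly."},
        {"role": "user", "content": log_excerpt},
    ],
}

resp = requests.post("http://localhost:11434/api/chat", json=payload, timeout=300)
print(resp.json()["message"]["content"])
```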

Bonus

  • Use AI to suggest solutions to merge conflicts
    • This would need a merge conflict editor that can suggest solving the conflict
  • Use image recognition for needles

Resources

Timeline

Day 1

  • Conversing with open-webui to learn how to create a model based on openQA test results

Day 2

Highlights

  • I briefly tested and compared models to see whether they would make me more productive. Between llama, gemma and mistral there was no striking difference in the results for my case.
  • Convincing the chat interface to produce code specific to my use case required very explicit instructions.
  • Asking for advice on how to make better use of open-webui itself was frustratingly unfruitful, both for trivial and for more advanced questions.
  • Documentation on the source materials used by LLMs, and tooling to check this, seems virtually non-existent - specifically on whether a logo can be generated based on materials under particular licenses.

Outcomes

  • Chat-interface-supported development provides good starting points, and open-webui, being open source, is more flexible than Gemini, although it currently lacks some fancy features such as grounding and generated podcasts.
  • Allison still has to be very experienced with openQA to use a chat interface for test review. Publicly available system prompts would make that easier, though.
  • The proof of concept for a model based on test results (Testimony) looks promising, although for real-world use more effort needs to be put into improving the dataset and selecting relevant features (a rough sketch of the idea follows below).
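
As a rough illustration of the proof-of-concept direction (not Testimony's actual code), the sketch below pulls recent job results from the public openQA REST API and fits a tiny TensorFlow classifier on deliberately naive features; a real dataset would need far richer features and careful selection.

```python
# Hedged sketch of a "Testimony"-style proof of concept: fetch recent job
# results from openQA and fit a tiny TensorFlow classifier on them.
# The feature choice (test + job name) and the limit parameter are
# simplifications for illustration only.
import requests
import tensorflow as tf

resp = requests.get("https://openqa.opensuse.org/api/v1/jobs",
                    params={"limit": 500}, timeout=60)
jobs = resp.json().get("jobs", [])

texts = [f"{j.get('test', '')} {j.get('name', '')}" for j in jobs]
labels = [1.0 if j.get("result") == "failed" else 0.0 for j in jobs]

# Bag-of-words text features feeding a small dense network.
vectorize = tf.keras.layers.TextVectorization(max_tokens=5000, output_mode="tf_idf")
vectorize.adapt(texts)
features = vectorize(tf.constant(texts))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(features, tf.constant(labels), epochs=3)
```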

Looking for hackers with the skills:

ai openqa tensorflow testing python

This project is part of:

Hack Week 24

    Similar Projects

    Run local LLMs with Ollama and explore possible integrations with Uyuni by PSuarezHernandez

    Description

    Using Ollama you can easily run different LLM models on your local computer. This project is about exploring Ollama, testing different LLMs and trying to fine-tune them, as well as exploring potential ways of integrating it with Uyuni.
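
    As a small illustration of the exploration part, the sketch below runs the same prompt through every locally pulled model via Ollama's REST API; the port and the prompt are just examples, not part of the project.

    ```python
    # Hedged sketch: run one prompt through every locally available Ollama model.
    # Assumes Ollama's default REST API on localhost:11434.
    import requests

    prompt = "Summarize what a Salt minion is in two sentences."

    tags = requests.get("http://localhost:11434/api/tags", timeout=30).json()
    for entry in tags.get("models", []):
        name = entry["name"]
        resp = requests.post("http://localhost:11434/api/generate",
                             json={"model": name, "prompt": prompt, "stream": False},
                             timeout=300)
        print(f"--- {name} ---")
        print(resp.json()["response"])
    ```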

    Goals

    • Explore Ollama
    • Test different models
    • Fine tuning
    • Explore possible integration in Uyuni

    Resources

    • https://ollama.com/
    • https://huggingface.co/
    • https://apeatling.com/articles/part-2-building-your-training-data-for-fine-tuning/


    Save pytorch models in OCI registries by jguilhermevanz

    Description

    A prerequisite for running applications in a cloud environment is the presence of a container registry. Another common scenario is users performing machine learning workloads in such environments. However, these types of workloads require dedicated infrastructure to run properly. We can leverage these two facts to help users save resources by storing their machine learning models in OCI registries, similar to how we handle some WebAssembly modules. This approach will save users the resources typically required for a dedicated machine learning model repository, on top of the container registry they already need for the applications they run.

    Goals

    Allow PyTorch users to save and load machine learning models in OCI registries.
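
    One possible shape of this workflow, sketched under the assumption that the ORAS CLI and a reachable registry are available; the registry reference and media type below are placeholders, not the project's actual design.

    ```python
    # Hedged sketch: save a PyTorch model locally, then push/pull it as an
    # OCI artifact with the ORAS CLI. Registry reference and media type are
    # illustrative placeholders.
    import subprocess
    import torch

    model = torch.nn.Linear(4, 2)            # stand-in for a real model
    torch.save(model.state_dict(), "model.pt")

    ref = "registry.example.com/models/demo:v1"
    subprocess.run(["oras", "push", ref, "model.pt:application/octet-stream"], check=True)

    # Later, possibly on another machine: pull the artifact back and load it.
    subprocess.run(["oras", "pull", ref, "-o", "."], check=True)
    state = torch.load("model.pt")
    ```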

    Resources


    ghostwrAIter - a local AI assisted tool for helping with support cases by paolodepa

    Description

    This project is meant to fight the loneliness of the support team members, providing them with an AI assistant (hopefully) capable of scraping supportconfigs in a RAG fashion and trying to answer specific questions.

    Goals

    • Set up an Ollama backend, spinning up one (or more??) code-focused LLMs selected by license, performance and quality of the results between:
    • Set up a Web UI for it, choosing an easily extensible and customizable option between:
    • Extend the solution in order to be able to:
      • Add ZIU/Concord shared folders to its RAG context
      • Add BZ cases, split into comments, to its RAG context
        • A plus would be to log in to ghostwrAIter itself using the IDP portal and use the same credentials to query BZ
      • Add specific packages, picking them from IBS repos
        • A plus would be to log in to ghostwrAIter itself using the IDP portal and use the same credentials to query IBS
        • A plus would be to infer the packages of interest and the right channel and version to pick from the added BZ cases


    Use local/private LLM for semantic knowledge search by digitaltomm

    Description

    Use a local LLM, based on SUSE AI (ollama, openwebui) to power geeko search (public instance: https://geeko.port0.org/).

    Goals

    Build a SUSE internal instance of https://geeko.port0.org/ that can operate on internal resources, crawling confluence.suse.com, gitlab.suse.de, etc.

    Resources

    Repo: https://github.com/digitaltom/semantic-knowledge-search

    Public instance: https://geeko.port0.org/

    Results

    Internal instance:

    I have an internal test instance running which has indexed a couple of internal wiki pages from the SCC team. It uses the ollama (llama3.1:8b) backend of suse-ai.openplatform.suse.com to create embedding vectors for indexed resources and to generate chat responses. The semantic search for documents is done with a vector search inside sqlite, using sqlite-vec.
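
    A rough sketch of that indexing and search flow, assuming a local Ollama endpoint for embeddings and the sqlite-vec Python bindings; the table name, model and sample documents are made up for illustration.

    ```python
    # Hedged sketch of the indexing + search flow: Ollama embeddings feeding a
    # sqlite-vec virtual table. Model, table name and sample pages are placeholders.
    import sqlite3
    import requests
    import sqlite_vec
    from sqlite_vec import serialize_float32

    OLLAMA = "http://localhost:11434"

    def embed(text):
        r = requests.post(f"{OLLAMA}/api/embeddings",
                          json={"model": "llama3.1:8b", "prompt": text}, timeout=120)
        return r.json()["embedding"]

    db = sqlite3.connect("knowledge.db")
    db.enable_load_extension(True)
    sqlite_vec.load(db)
    db.enable_load_extension(False)

    dim = len(embed("probe"))
    db.execute(f"CREATE VIRTUAL TABLE IF NOT EXISTS docs USING vec0(embedding float[{dim}])")

    pages = {1: "How to renew an SCC subscription", 2: "Notes on setting up an openQA worker"}
    for rowid, text in pages.items():
        db.execute("INSERT INTO docs(rowid, embedding) VALUES (?, ?)",
                   (rowid, serialize_float32(embed(text))))
    db.commit()

    # KNN search: rows come back nearest-first.
    hits = db.execute("SELECT rowid, distance FROM docs WHERE embedding MATCH ? AND k = 2",
                      (serialize_float32(embed("subscription renewal")),)).fetchall()
    print(hits)
    ```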


    Use AI tools to convert legacy perl scripts to bash by nadvornik

    Description

    Use AI tools to convert legacy perl scripts to bash

    Goals

    The Uyuni project contains legacy Perl scripts used for setup. The Perl dependency could be removed to reduce the container size. The goal of this project is to research the use of AI tools for this task.

    Resources

    Aider

    Results

    Aider is not the right tool for this. It works OK for small changes, but not for a complete rewrite from one language to another.

    I got better results with direct API use from a script.
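
    The write-up does not say which API was used, so purely as an illustration the sketch below assumes an OpenAI-compatible chat-completions endpoint and asks for a straight translation of a single Perl file; endpoint, model and file names are placeholders.

    ```python
    # Hedged sketch: convert a legacy Perl script to bash by calling a chat
    # completions API directly from a script. Endpoint, API key and model are
    # placeholders for whatever OpenAI-compatible service is actually used.
    import os
    import requests

    with open("setup-db.pl") as f:          # hypothetical legacy script
        perl_source = f.read()

    resp = requests.post(
        "https://api.example.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},
        json={
            "model": "some-code-model",
            "messages": [
                {"role": "system",
                 "content": "Translate the following Perl script to bash. "
                            "Output only the script, no commentary."},
                {"role": "user", "content": perl_source},
            ],
        },
        timeout=600,
    )
    bash_source = resp.json()["choices"][0]["message"]["content"]

    with open("setup-db.sh", "w") as f:
        f.write(bash_source)
    ```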


    New features in openqa-trigger-from-obs for openQA by jlausuch

    Description

    Implement new features in openqa-trigger-from-obs to make the XML configuration more flexible.

    Goals

    One of the features to be implemented: the possibility to define "VERSION" and "ARCH" variables per flavor instead of globally.

    Resources

    https://github.com/os-autoinst/openqa-trigger-from-obs


    Enhance UV openQA helper script by mdonis

    Description

    A couple of months ago a UV openQA helper script was created to help/automate the searching phase inside openQA for a given MU to test. The script searches all our openQA job groups (qam-sle) related to a given MU and generates output suitable to add (copy & paste) to the update log.

    This is still a WIP and could use some enhancements.

    Goals

    • Move script from bash to python: this would be useful in case we want to include this into MTUI in the future. The script will be separate from MTUI for now. The idea is to have this as a CLI tool using the click library or something similar.
    • Add option to look for jobs in other sections inside aggregated updates: right now, when looking for regression tests under aggregated updates for a given MU, the script only looks inside the Core MU job group. This is where most of the regression tests we need are located, but some MUs have their regression tests under the YaST/Containers/Security MU job groups. We should keep the Core MU group as a default, but add an option to be able to look into other job groups under aggregated updates.
    • Remove the -a option: this option is used to indicate the update ID and is mandatory right now. This is a bit weird and goes against POSIX standards. It was developed this way in order to avoid using positional parameters. This problem should be fixed if we move the script to Python (see the sketch after this list).
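
    A minimal sketch of what the proposed click-based CLI could look like, with the update ID as a positional argument instead of -a; the option names and default job group are guesses for illustration, not the script's real interface.

    ```python
    # Hypothetical click-based CLI skeleton; option names, defaults and the
    # echo-only body are placeholders for the real search logic.
    import click

    @click.command()
    @click.argument("update_id")  # positional argument replacing the old mandatory -a
    @click.option("--job-group", "job_groups", multiple=True, default=("Core MU",),
                  show_default=True,
                  help="Aggregated-update job group(s) to search, e.g. YaST/Containers/Security.")
    def search(update_id, job_groups):
        """Search openQA job groups related to UPDATE_ID and print update-log lines."""
        for group in job_groups:
            click.echo(f"Searching '{group}' for update {update_id} ...")

    if __name__ == "__main__":
        search()
    ```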

    Some other ideas to consider:

    • Look into the QAM dashboard API. This has more info on each MU; we could use it to link general openQA build results, show whether the related RR is approved or not, etc.
    • Make it easier to see if there's regression tests for a package in an openQA test build. Check if there's a possibility to search for tests that have the package name in them inside each testsuite.
    • Unit testing?

    More ideas TBD

    Resources

    https://github.com/os-autoinst/scripts/blob/master/openqa-search-maintenance-core-jobs

    https://confluence.suse.com/display/maintenanceqa/Guide+on+how+to+test+Updates

    Post-Hackweek update

    All major features were implemented. Unit tests are still in progress, and the project will be moved to the SUSE GitHub org once everything's done. https://github.com/mjdonis/oqa-search


    Setup a new openQA on more powerful server by JNa

    Description

    • Currently the local openQA storage is insufficient

    Goals

    • Migrate to a more powerful machine

    Resources

    • Service Rainbow


    Learn obs/ibs sync tool by xlai

    Description

    Once images/repos are built on IBS/OBS, there is a tool to sync the image from IBS/OBS to the openQA asset directory and trigger openQA jobs accordingly.

    Goals

    Check how the tool is implemented, and become able to add/modify the images/repos we need in the future by ourselves.

    Resources

    • https://github.com/os-autoinst/openqa-trigger-from-obs
    • https://gitlab.suse.de/openqa/openqa-trigger-from-ibs-plugin/-/tree/master?ref_type=heads


    Hack on isotest-ng - a rust port of isotovideo (os-autoinst aka testrunner of openQA) by szarate

    Description

    Some time ago, I managed to convince ByteOtter to hack something that resembles isotovideo but in Rust, not because I believe that Perl is dead, but more because there are certain limitations in the Perl code (how it was written), and it's always hard to add new functionality when it is about implementing a new backend or fixing bugs (along with people complaining that Perl is dead and that they don't like it).

    In reality, I wanted to see if this could be done, and ByteOtter proved that it could be, while doing an amazing job at hacking a vnc console, and helping me understand better what RuPerl needs to work.

    I plan to keep working on this for the next few years, and while I don't aim for feature completion or for replacing isotovideo with isotest-ng (name in progress), I do plan to be able to use it on a daily basis, using specialized tooling with interfaces instead of reimplementing everything in the backend.

    Todo

    • Add make targets for testability, e.g. "spawn qemu and type"
    • Add image search matching algorithm
    • Add a Null test distribution provider
    • Add a Perl Test Distribution Provider
    • Fix unittests https://github.com/os-autoinst/isotest-ng/issues/5
    • Research OpenTofu: how to add new hypervisors/bare metal to OpenTofu
    • Add an interface to openQA cli

    Goals

    • Implement at least one of the above, prepare proposals for GSoC
    • Boot a system via its BMC

    Resources

    See https://github.com/os-autoinst/isotest-ng


    Automated Test Report reviewer by oscar-barrios

    Description

    In the SUMA/Uyuni team we spend a lot of time reviewing test reports, analyzing each failing test case, checking whether the test is flaky, checking logs, etc.

    Goals

    Speed up the review by automating some parts through AI, so that we can consume a summary of the report that is meaningful for the reviewer.

    Resources

    No idea about the resources yet, but we will make use of:

    • HTML/JSON Report (text + screenshots)
    • The Test Suite Status GitHub board (via API)
    • The environment tested (via SSH)
    • The test framework code (via files)


    Drag Race - comparative performance testing for pull requests by balanza

    Description

    «Sophia, a backend developer, submitted a pull request with optimizations for a critical database query. Once she pushed her code, an automated load test ran, comparing her query against the main branch. Moments later, she saw a new comment automatically added to her PR: the comparison results showed reduced execution time and improved efficiency. Smiling, Sophia messaged her team, “Performance gains confirmed!”»

    Goals

    • To have a convenient and ergonomic framework to describe test scenarios, including environment and seed
    • To compare results from different tests
    • To have a GitHub Action that executes such tests in a CI environment

    Resources

    The MVP will be built on top of Preevy and K6.


    Yearly Quality Engineering Ask me Anything - AMA for not-engineering by szarate

    Goal

    Get a closer look at how developers work on the Engineering team (R & D) of SUSE, and close the collaboration gap between GSI and Engineering

    Why?

    Santiago can go over different development workflows and do a deep dive into how Quality Engineering works (think of my QE team, the advocates for your customers). The idea of this session is to help open the doors to opportunities for collaboration, and broaden our understanding of SUSE as a whole.

    Objectives

    • Give $audience a small window on how to get some questions answered either on the spot or within days of how some things at engineering are done
    • Give Santiago Zarate from Quality Engineering a look into how $audience sees the engineering departments, and find out possibilities of further collaboration

    How?

    By running an "Ask me Anything" session, which is a format of a kind of open Q & A session, where participants ask the host multiple questions.

    How to make it happen?

    I'm happy to join a call, or we can do it async (online/in person is more fun). Ping me over email or Slack and let's make the magic happen! It doesn't need to be during Hack Week, but we gotta kickstart the idea during Hack Week ;)

    Rules

    The rules are simple: the more questions, the more fun it will be. While this will be only a window into engineering, it can also help all of us get to a similar level of understanding of the processes behind our respective areas of the organization.

    Dynamics

    The host will monitor the questions on some pre-agreed page and try to answer them to the best of their knowledge; if a question is too difficult or the host doesn't have the answer, he will do his best to provide one at a later date.

    Attendees are encouraged to add questions beforehand; if there aren't any, we will look at how Quality Engineering tests new products or performs regression tests.

    Agenda

    • Introduction of Santiago Zarate, Product Owner of Quality Engineering Core team
    • Introduction of the Group/Team/Persons interested
    • Ice breaker
    • AMA time! Add your questions $PAGE
    • Looking at QE workflows: how
      • a maintenance update is tested before being released to our customers
      • products in development are tested before they become generally available
    • Engineering Opportunity Board


    Testing and adding GNU/Linux distributions on Uyuni by juliogonzalezgil

    Join the Gitter channel! https://gitter.im/uyuni-project/hackweek

    Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines. It also manages configuration, can run audits, build image containers, monitor and much more!

    Currently there are a few distributions that are completely untested on Uyuni or SUSE Manager (AFAIK), or have not been tested for a long time, and it could be interesting to know how hard it would be to work with them and, if possible, fix whatever is broken.

    For newcomers, the easiest distributions are those based on DEB or RPM packages. Distributions with other package formats are doable, but will require adapting the Python and Java code to be able to sync and analyze such packages (and if salt does not support those packages, it will need changes as well). So if you want a distribution with other packages, make sure you are comfortable handling such changes.

    No developer experience? No worries! We had non-developer contributors in the past, and we are ready to help as long as you are willing to learn. If you don't want to code at all, you can also help us prepare the documentation after someone else has the initial code ready, or you could also help with testing :-)

    The idea is testing Salt and Salt-ssh clients, but NOT traditional clients, which are deprecated.

    To consider that a distribution has basic support, we should cover at least (points 3-6 are to be tested for both salt minions and salt ssh minions):

    1. Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
    2. Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
    3. Package management (install, remove, update...)
    4. Patching
    5. Applying any basic salt state (including a formula)
    6. Salt remote commands
    7. Bonus point: Java part for product identification, and monitoring enablement
    8. Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
    9. Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
    10. Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)

    If something is breaking: we can try to fix it, but the main idea is to research how well supported it is right now. Beyond that it's up to each project member how much to hack :-)

    • If you don't have knowledge about some of the steps: ask the team
    • If you still don't know what to do: switch to another distribution and keep testing.

    This card is for EVERYONE, not just developers. Seriously! We had people from other teams helping that were not developers, and added support for Debian and new SUSE Linux Enterprise and openSUSE Leap versions :-)

    Pending

    FUSS

    FUSS is a complete GNU/Linux solution (server, client and desktop/standalone) based on Debian for managing an educational network.

    https://fuss.bz.it/

    Seems to be a Debian 12 derivative, so adding it could be quite easy.

    • [W] Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
    • [W] Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding OS to the bootstrap repository creator) --> Working for all 3 options (salt minion UI, salt minion bootstrap script and salt-ssh minion from the UI).
    • [W] Package management (install, remove, update...) --> Installing a new package works, needs to test the rest.
    • [I] Patching (if patch information is available, could require writing some code to parse it, but IIRC we have support for Ubuntu already). No patches detected. Do we support patches for Debian at all?
    • [W] Applying any basic salt state (including a formula)
    • [W] Salt remote commands
    • [ ] Bonus point: Java part for product identification, and monitoring enablement


    Saline (state deployment control and monitoring tool for SUSE Manager/Uyuni) by vizhestkov

    Project Description

    Saline is an addition to Salt used in SUSE Manager/Uyuni, aimed at providing better control and visibility of state deployment in large-scale environments.

    In its current state the published version can be used only as a Prometheus exporter and is missing some of the key features implemented in the PoC (not published). It can provide metrics related to Salt events and the state apply process on the minions, but there is no control over this process implemented yet.

    Continue with implementation of the missing features and improve the existing implementation:

    • authentication (need to decide how it should be/or not related to salt auth)

    • web service providing the control of states deployment

    Goal for this Hackweek

    • Implement missing key features

    • Implement the tool for state deployment control with CLI

    Resources

    https://github.com/openSUSE/saline


    SUSE AI Meets the Game Board by moio

    Use tabletopgames.ai’s open source TAG and PyTAG frameworks to apply Statistical Forward Planning and Deep Reinforcement Learning to two board games of our own design. On an all-green, all-open source, all-AWS stack!
    A chameleon playing chess in a train car, as a metaphor of SUSE AI applied to games


    Results: Infrastructure Achievements

    We successfully built and automated a containerized stack to support our AI experiments. This included:

    A screenshot of k9s and nvtop showing PyTAG running in Kubernetes with GPU acceleration

    ./deploy.sh and voilà - Kubernetes running PyTAG (k9s, above) with GPU acceleration (nvtop, below)

    Results: Game Design Insights

    Our project focused on modeling and analyzing two card games of our own design within the TAG framework:

    • Game Modeling: We implemented models for Dario's "Bamboo" and Silvio's "Totoro" and "R3" games, enabling AI agents to play thousands of games ...in minutes!
    • AI-driven optimization: By analyzing statistical data on moves, strategies, and outcomes, we iteratively tweaked the game mechanics and rules to achieve better balance and player engagement.
    • Advanced analytics: Leveraging AI agents with Monte Carlo Tree Search (MCTS) and random action selection, we compared performance metrics to identify optimal strategies and uncover opportunities for game refinement.

    Cards from the three games

    A family picture of our card games in progress. From the top: Bamboo, Totoro, R3

    Results: Learning, Collaboration, and Innovation

    Beyond technical accomplishments, the project showcased innovative approaches to coding, learning, and teamwork:

    • "Trio programming" with AI assistance: Our "trio programming" approach—two developers and GitHub Copilot—was a standout success, especially in handling slightly-repetitive but not-quite-exactly-copypaste tasks. Java as a language tends to be verbose and we found it to be fitting particularly well.
    • AI tools for reporting and documentation: We extensively used AI chatbots to streamline writing and reporting. (Including writing this report! ...but this note was added manually during edit!)
    • GPU compute expertise: Overcoming challenges with CUDA drivers and cloud infrastructure deepened our understanding of GPU-accelerated workloads in the open-source ecosystem.
    • Game design as a learning platform: By blending AI techniques with creative game design, we learned not only about AI strategies but also about making games fun, engaging, and balanced.

    Last but not least we had a lot of fun! ...and this was definitely not a chatbot generated line!

    The Context: AI + Board Games


    ClusterOps - Easily install and manage your personal kubernetes cluster by andreabenini

    Description

    ClusterOps is a Kubernetes installer and operator designed to streamline the initial configuration and ongoing maintenance of kubernetes clusters. The focus of this project is primarily on personal or local installations. However, the goal is to expand its use to encompass all installations of Kubernetes for local development purposes.
    It simplifies cluster management by automating tasks and providing a single user-friendly YAML-based configuration file, config.yml.

    Overview

    • Simplified Configuration: Define your desired cluster state in a simple YAML file, and ClusterOps will handle the rest.
    • Automated Setup: Automates initial cluster configuration, including network settings, storage provisioning, special requirements (for example GPUs) and essential components installation.
    • Ongoing Maintenance: Performs routine maintenance tasks such as upgrades, security updates, and resource monitoring.
    • Extensibility: Easily extend functionality with custom plugins and configurations.
    • Self-Healing: Detects and recovers from common cluster issues, ensuring stability, idempotence and reliability. Same operation can be performed multiple times without changing the result.
    • Discreet: It works only on what it knows; if you are manually configuring parts of your kubernetes cluster and that configuration does not interfere with it, you can happily continue to work on those parts and use this tool only for what is needed.

    Features

    • Distribution and engine independence. Install your favorite kubernetes engine with your package manager, execute one script and you'll have a complete working environment at your disposal.
    • Basic config approach. One single config.yml file with configuration requirements (add/remove features): human readable, plain and simple. All fancy configs managed automatically (ingress, balancers, services, proxy, ...).
    • Local Builtin ContainerHub. The default installation provides a fully configured ContainerHub available locally along with the kubernetes installation. This configuration allows the user to build, upload and deploy custom container images as if they were provided from external sources. Internet public sources are still available but local development can be kept in this localhost server. Builtin ClusterOps operator will be fetched from this ContainerHub registry too.
    • Kubernetes official dashboard installed as a plugin, others planned too (k9s for example).
    • Kubevirt plugin installed and properly configured. Unleash the power of classic virtualization (KVM+QEMU) on top of Kubernetes and manage your entire system from there, libvirtd and virsh libs are required.
    • One operator to rule them all. The installation script configures your machine automatically during installation and adds one kubernetes operator to manage your local cluster. From there the operator takes care of the cluster on your behalf.
    • Clean installation and removal. Just test it, when you are done just use the same program to uninstall everything without leaving configs (or pods) behind.

    Planned features (Wishlist / TODOs)

    • Containerized Data Importer (CDI). Persistent storage management add-on for Kubernetes to provide a declarative way of building and importing Virtual Machine Disks on PVCs for

