Description

In the SUMA/Uyuni team we spend a lot of time reviewing test reports: analyzing each failing test case, checking whether a test is flaky, checking logs, etc.

Goals

Speed up the review by automating parts of it with AI, so that the reviewer can consume a meaningful summary of the report.

Resources

The exact resources are not defined yet, but we will make use of the following (a rough sketch of how they could be combined follows the list):

  • HTML/JSON Report (text + screenshots)
  • The Test Suite Status GitHub board (via API)
  • The environment tested (via SSH)
  • The test framework code (via files)
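
To make the goal more concrete, here is a minimal sketch of the kind of automation we have in mind. It assumes a Cucumber-style JSON report and an OpenAI-compatible chat endpoint; the file name, URL and model below are placeholders, not decisions taken by the project:

    import json

    import requests

    REPORT_PATH = "cucumber_report.json"  # hypothetical JSON report exported by the test suite
    LLM_URL = "http://localhost:11434/v1/chat/completions"  # any OpenAI-compatible endpoint
    MODEL = "llama3"  # placeholder model name


    def failing_scenarios(report_path):
        """Extract failing scenarios (feature, name, first error) from a Cucumber JSON report."""
        with open(report_path) as fh:
            features = json.load(fh)
        failures = []
        for feature in features:
            for scenario in feature.get("elements", []):
                failed = [s for s in scenario.get("steps", [])
                          if s.get("result", {}).get("status") == "failed"]
                if failed:
                    failures.append({
                        "feature": feature.get("name"),
                        "scenario": scenario.get("name"),
                        "error": failed[0]["result"].get("error_message", "")[:500],
                    })
        return failures


    def summarize(failures):
        """Ask the LLM for a short, reviewer-oriented summary of the failures."""
        prompt = ("Summarize these Uyuni test failures for a reviewer; group likely flaky tests, "
                  "infrastructure issues and product bugs:\n" + json.dumps(failures, indent=2))
        resp = requests.post(LLM_URL,
                             json={"model": MODEL,
                                   "messages": [{"role": "user", "content": prompt}]},
                             timeout=120)
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]


    print(summarize(failing_scenarios(REPORT_PATH)))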

Looking for hackers with the skills:

uyuni ai reports testing

This project is part of:

Hack Week 24

Activity

  • about 1 year ago: juliogonzalezgil liked this project.
  • about 1 year ago: livdywan liked this project.
  • about 1 year ago: oscar-barrios added keyword "uyuni" to this project.
  • about 1 year ago: oscar-barrios added keyword "ai" to this project.
  • about 1 year ago: oscar-barrios added keyword "reports" to this project.
  • about 1 year ago: oscar-barrios added keyword "testing" to this project.
  • about 1 year ago: oscar-barrios originated this project.

  • Comments

    • oscar-barrios, 28 days ago:

      I ended up continuing this project in my free time, and I made some progress here: https://github.com/srbarrios/FailTale

    Similar Projects

    Move Uyuni Test Framework from Selenium to Playwright + AI by oscar-barrios

    Description

    This project aims to migrate the existing Uyuni Test Framework from Selenium to Playwright. The move will improve the stability, speed, and maintainability of our end-to-end tests by leveraging Playwright's modern features. We'll be rewriting the current Selenium code in Ruby to Playwright code in TypeScript, which includes updating the test framework runner, step definitions, and configurations. This is also necessary because we're moving from Cucumber Ruby to CucumberJS.

    If you're still curious about the AI in the title, it was just a way to grab your attention. Thanks for your understanding.

    Nah, let's be honest: AI helped a lot to vibe-code a good part of the Ruby methods of the test framework, moving them to TypeScript, along with the migration from Capybara to Playwright. I've been using "Cline" as a plugin for the WebStorm IDE, with the Gemini API behind it.


    Goals

    • Migrate core tests, including onboarding of clients
    • Improve test reliability: measure and confirm a significant reduction in flakiness.
    • Implement a robust framework: establish a well-structured and reusable Playwright test framework using CucumberJS.

    Resources


    Uyuni Health-check Grafana AI Troubleshooter by ygutierrez

    Description

    This project explores the feasibility of using the open-source Grafana LLM plugin to enhance the Uyuni Health-check tool with LLM capabilities. The idea is to integrate a chat-based "AI Troubleshooter" directly into existing dashboards, allowing users to ask natural-language questions about errors, anomalies, or performance issues.

    Goals

    • Investigate if and how the grafana-llm-app plug-in can be used within the Uyuni Health-check tool.
    • Investigate if this plug-in can be used to query LLMs for troubleshooting scenarios.
    • Evaluate support for local LLMs and external APIs through the plugin.
    • Evaluate if and how the Uyuni MCP server could be integrated as another source of information.

    Resources

    Grafana LLM plug-in

    Uyuni Health-check


    Ansible to Salt integration by vizhestkov

    Description

    We already have an initial integration of Ansible in Salt, with the possibility to run playbooks from the salt-master on a salt-minion used as an Ansible control node.

    In this project I want to check whether it is possible to make Ansible work over the Salt transport: basically, run Ansible playbooks through the existing, established Salt (ZeroMQ) transport and not use SSH at all.

    It could be a good solution for end users to reuse Ansible playbooks, or run the Ansible modules they are used to, without the effort of complex configuration, on top of an existing Salt (or Uyuni/SUSE Multi-Linux Manager) infrastructure.
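
    As a rough illustration of the idea (not the actual implementation), a custom Salt execution module could trigger the playbook run locally on the minion acting as control node, so the request travels over the existing ZeroMQ channel instead of a new SSH session; Salt's bundled ansiblegate module already works along these lines. The module name, paths and options below are hypothetical:

        # /srv/salt/_modules/ansible_local.py -- hypothetical custom execution module,
        # synced to minions with saltutil.sync_modules.
        import json
        import os
        import subprocess

        __virtualname__ = "ansible_local"


        def __virtual__():
            return __virtualname__


        def playbook(path, inventory="localhost,", extra_vars=None):
            """Run an Ansible playbook on this minion, triggered over the Salt transport.

            CLI example: salt 'control-node' ansible_local.playbook /srv/playbooks/site.yml
            """
            cmd = ["ansible-playbook", "-i", inventory, path]
            if extra_vars:
                cmd += ["--extra-vars", json.dumps(extra_vars)]
            env = dict(os.environ, ANSIBLE_STDOUT_CALLBACK="json")  # machine-readable output
            proc = subprocess.run(cmd, capture_output=True, text=True, env=env)
            out = proc.stdout.strip()
            return {
                "retcode": proc.returncode,
                "result": json.loads(out) if out.startswith("{") else out,
                "stderr": proc.stderr,
            }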

    Goals

    • [v] Prepare the testing environment with Salt and Ansible installed
    • [v] Discover Ansible codebase to figure out possible ways of integration
    • [v] Create Salt/Uyuni inventory module
    • [v] Make basic modules work without a separate SSH connection, reusing the existing Salt connection instead
    • [v] Test some of the most basic playbooks

    Resources

    GitHub page

    Video of the demo


    Uyuni Saltboot rework by oholecek

    Description

    When Uyuni switched over to the containerized proxies we had to abandon the Salt-based saltboot infrastructure we had before. Uyuni already had an integration with the Cobbler provisioning server, and the saltboot infra was re-implemented on top of this Cobbler integration.

    What was not obvious from the start was that Cobbler, with all its features, is woefully slow when dealing with saltboot-sized environments. We made some performance improvements, introduced transactions, and generally tried to make this setup usable. However, the underlying slowness remained.

    Goals

    This project is not about inventing new things; it is about finally implementing the saltboot infrastructure directly in the Uyuni server core.

    Instead of having Cobbler generate GRUB and PXELINUX configurations for thousands of systems and branches, we will provide GET access points to retrieve the GRUB or PXELINUX file during boot:

    /saltboot/group/grub/$fqdn for groups, and similarly /saltboot/system/grub/$mac for systems

    Next, we adapt our tftpd translator to query these access points when asked for a default or MAC-based config.

    Lastly, a similar thing needs to be done on our Apache server when HTTP UEFI boot is used.
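
    The real access points would of course be implemented inside the Uyuni server core; purely to illustrate the proposed endpoints, here is a minimal sketch (Python/Flask used only for brevity, with the config rendering stubbed out):

        from flask import Flask, Response, abort

        app = Flask(__name__)


        def render_grub_for_group(fqdn):
            """Placeholder: resolve the saltboot group for this FQDN and render its grub.cfg."""
            return "# grub.cfg for the group serving {}\n".format(fqdn)


        def render_grub_for_system(mac):
            """Placeholder: resolve the system by MAC address and render its grub.cfg."""
            return "# grub.cfg for system {}\n".format(mac)


        @app.route("/saltboot/group/grub/<fqdn>")
        def group_grub(fqdn):
            cfg = render_grub_for_group(fqdn)
            return Response(cfg, mimetype="text/plain") if cfg else abort(404)


        @app.route("/saltboot/system/grub/<mac>")
        def system_grub(mac):
            cfg = render_grub_for_system(mac)
            return Response(cfg, mimetype="text/plain") if cfg else abort(404)

    The tftpd translator and the Apache rules for HTTP UEFI boot would then simply fetch these URLs instead of looking up Cobbler-generated files.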

    Resources


    Enhance setup wizard for Uyuni by PSuarezHernandez

    Description

    This project aims to enhance the initial setup of Uyuni after its installation, so that it's easier for a user to start using it.

    Uyuni currently uses "uyuni-tools" (mgradm) as the installation entry point to trigger the installation of Uyuni on the given host, but it does not really perform an initial setup, for instance (a rough sketch of scripting these steps follows the list):

    • user creation
    • adding products / channels
    • generating bootstrap repos
    • creating activation keys
    • ...
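
    As a rough sketch of the steps such a wizard would automate, here is how they could be scripted today against the Uyuni XML-RPC API; the host, credentials and values are placeholders, and the method signatures are quoted from memory, so they should be double-checked against the API documentation:

        import xmlrpc.client

        MANAGER_URL = "https://uyuni.example.com/rpc/api"  # placeholder host
        ADMIN_LOGIN, ADMIN_PASSWORD = "admin", "admin"      # account created during installation

        client = xmlrpc.client.ServerProxy(MANAGER_URL)
        key = client.auth.login(ADMIN_LOGIN, ADMIN_PASSWORD)

        # Create an additional user.
        client.user.create(key, "qa-user", "secret", "QA", "User", "qa@example.com")

        # Create an activation key (empty base channel label = default base channel).
        client.activationkey.create(key, "hackweek", "Initial setup key", "", [], False)

        client.auth.logout(key)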

    Goals

    • Provide an initial setup wizard as part of the mgradm Uyuni installation

    Resources


    Bugzilla goes AI - Phase 1 by nwalter

    Description

    This project, Bugzilla goes AI, aims to boost developer productivity by creating an autonomous AI bug agent during Hackweek. The primary goal is to reduce the time employees spend triaging bugs by integrating Ollama to summarize issues, recommend next steps, and push focused daily reports to a Web Interface.

    Goals

    To reduce employee time spent on Bugzilla by implementing an AI tool that triages and summarizes bug reports, providing actionable recommendations to the team via a web interface.
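
    A minimal sketch of the triage step, assuming the public Bugzilla REST API and a locally running Ollama instance; the Bugzilla URL, model name and bug ID are placeholders:

        import requests

        BUGZILLA_URL = "https://bugzilla.suse.com"  # assumed instance (may require an API key)
        OLLAMA_URL = "http://localhost:11434"       # default local Ollama endpoint
        MODEL = "llama3"                            # placeholder model


        def fetch_bug(bug_id):
            """Fetch basic bug data via the Bugzilla REST API."""
            resp = requests.get("{}/rest/bug/{}".format(BUGZILLA_URL, bug_id), timeout=30)
            resp.raise_for_status()
            return resp.json()["bugs"][0]


        def triage(bug):
            """Ask a local Ollama model to summarize the bug and recommend next steps."""
            prompt = ("Summarize this bug in three sentences and recommend next steps:\n"
                      "Summary: {summary}\nStatus: {status}\n"
                      "Component: {component}\nSeverity: {severity}".format(**bug))
            resp = requests.post("{}/api/generate".format(OLLAMA_URL),
                                 json={"model": MODEL, "prompt": prompt, "stream": False},
                                 timeout=300)
            resp.raise_for_status()
            return resp.json()["response"]


        print(triage(fetch_bug(123456)))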

    Project Charter

    Bugzilla goes AI Phase 1

    Description

    Project Achievements during Hackweek

    In this file you can read about what we achieved during Hackweek.

    Project Achievements


    Update M2Crypto by mcepl

    There are a couple of projects I work on which need my attention and some work to get them into shape:

    Goal for this Hackweek

    • Put M2Crypto into better shape (most issues closed, all pull requests processed)
    • Have more fun learning jujutsu
    • Play more with Gemini and see how much it helps (or not).
    • Perhaps also (just slightly related), help fix vis to work with LuaJIT, particularly to make vis-lspc work.


    Enable more features in mcp-server-uyuni by j_renner

    Description

    I would like to contribute to mcp-server-uyuni (the MCP server for Uyuni / Multi-Linux Manager) by exposing additional features as tools. There are lots of relevant features to be found throughout the API, for example:

    • System operations and infos
    • System groups
    • Maintenance windows
    • Ansible
    • Reporting
    • ...

    At the end of the week I managed to enable basic system group operations (a minimal sketch of one such tool follows the list):

    • List all system groups visible to the user
    • Create new system groups
    • List systems assigned to a group
    • Add and remove systems from groups
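
    The real implementation lives in the mcp-server-uyuni repository; purely to illustrate the shape of such a tool, here is a minimal sketch using the MCP Python SDK (FastMCP) wrapping the Uyuni XML-RPC systemgroup namespace. The URL and credentials come from hypothetical environment variables:

        import os
        import xmlrpc.client

        from mcp.server.fastmcp import FastMCP

        UYUNI_URL = os.environ.get("UYUNI_URL", "https://uyuni.example.com/rpc/api")
        UYUNI_USER = os.environ.get("UYUNI_USER", "admin")
        UYUNI_PASS = os.environ.get("UYUNI_PASS", "admin")

        mcp = FastMCP("uyuni-system-groups")


        @mcp.tool()
        def list_system_groups() -> list[dict]:
            """List all system groups visible to the configured user."""
            client = xmlrpc.client.ServerProxy(UYUNI_URL)
            key = client.auth.login(UYUNI_USER, UYUNI_PASS)
            try:
                return [{"name": g["name"], "description": g["description"]}
                        for g in client.systemgroup.listAllGroups(key)]
            finally:
                client.auth.logout(key)


        if __name__ == "__main__":
            mcp.run()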

    Goals

    • Set up a test environment locally with the MCP server and client + a recent MLM server [DONE]
    • Identify features and use cases offering a benefit with limited effort required for enablement [DONE]
    • Create a PR to the repo [DONE]

    Resources


    MCP Server for SCC by digitaltomm

    Description

    Provide an MCP server implementation for customers to access data on scc.suse.com via the MCP protocol. The core benefit of this MCP interface is that it has direct (read) access to customer data in SCC, so the AI agent gets enhanced knowledge about individual customer data, like subscriptions, orders and registered systems.

    Architecture

    Schema (architecture diagram)

    Goals

    We want to demonstrate a proof of concept of connecting to the SCC MCP server with any AI agent, for example gemini-cli or codex, enabling the user to ask questions regarding their SCC inventory.

    For this Hackweek, we target that users get proper responses to these example questions:

    • Which of my currently active systems are running products that are out of support?
    • Do I have ready-to-use registration codes for SLES?
    • What are the latest 5 released patches for SLES 15 SP6? Output as a list with release date, patch name, affected package names and fixed CVEs.
    • Which versions of kernel-default are available on SLES 15 SP6?

    Technical Notes

    Similar to the organization APIs, this can expose data to customers about their subscriptions, orders, systems and products. Authentication should be done with organization credentials, similar to what needs to be provided to RMT/MLM. Customers can connect to the SCC MCP server from their own MCP-compatible client and Large Language Model (LLM), so no third party is involved.

    Milestones

    • [x] Basic MCP API setup
      • MCP endpoints:
        • [x] Products / Repositories
        • [x] Subscriptions / Orders
        • [x] Systems
        • [x] Packages
    • [x] Document usage with Gemini CLI, Codex
    

    Resources

    Gemini CLI setup:

    ~/.gemini/settings.json:


    Background Coding Agent by mmanno

    Description

    I have only had bad experiences with AI one-shots. However, monitoring agent work closely and intervening often did result in productivity gains.

    Now other companies are using agents in pipelines, and that makes sense to me: just like with CI, we want to offload work to pipelines. Our engineering teams are consistently slowed down by "toil": low-impact, repetitive maintenance tasks. A simple linter rule change, a dependency bump, rebasing patch sets on top of newer releases, or an API deprecation requires dozens of manual PRs, draining time from feature development.

    So far we have been writing deterministic, script-based automation for these tasks, and it turns out to be a common trap: these scripts are brittle, complex, and become a massive maintenance burden themselves.

    Can we make prompts and workflows smart enough to succeed at background coding?

    Goals

    We will build a platform that allows engineers to execute complex code transformations using prompts.

    By automating this toil, we accelerate large-scale migrations and allow teams to focus on high-value work.

    Our platform will consist of three main components:

    • "Change" Definition: Engineers will define a transformation as a simple, declarative manifest:
      • The target repositories.
      • A wrapper to run a "coding agent", e.g., "gemini-cli".
      • The task as a natural language prompt.
    • "Change" Management Service: A central service that orchestrates the jobs. It will receive Change definitions and be responsible for the job lifecycle.
    • Execution Runners: We could use existing sandboxed CI runners (like GitHub/GitLab runners) to execute each job or spawn a container.
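
    As a sketch of what a "Change" definition and a bare-bones runner could look like (the manifest keys, the gemini -p invocation and the gh call are illustrative assumptions, not a defined format):

        import subprocess
        import tempfile

        # Hypothetical "Change" manifest; in the real platform this would live in a
        # declarative file (e.g. YAML) rather than a Python dict.
        CHANGE = {
            "name": "bump-linter-rules",
            "repositories": ["https://github.com/example-org/example-repo"],
            "agent": ["gemini", "-p"],  # assumed non-interactive coding-agent invocation
            "prompt": "Update the golangci-lint config to enable gofumpt and fix any new findings.",
        }


        def run_change(change):
            for repo in change["repositories"]:
                workdir = tempfile.mkdtemp(prefix="change-")
                branch = "auto/" + change["name"]
                subprocess.run(["git", "clone", "--depth", "1", repo, workdir], check=True)
                subprocess.run(["git", "checkout", "-b", branch], cwd=workdir, check=True)
                # Let the coding agent apply the transformation inside the checkout.
                subprocess.run(change["agent"] + [change["prompt"]], cwd=workdir, check=True)
                subprocess.run(["git", "add", "-A"], cwd=workdir, check=True)
                subprocess.run(["git", "commit", "-m", "chore: " + change["name"]], cwd=workdir, check=True)
                subprocess.run(["git", "push", "origin", branch], cwd=workdir, check=True)
                # Open the PR; gh must be authenticated inside the runner sandbox.
                subprocess.run(["gh", "pr", "create", "--fill"], cwd=workdir, check=True)


        run_change(CHANGE)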

    MVP

    • Define the Change manifest format.
    • Build the core Management Service that can accept and queue a Change.
    • Connect management service and runners, dynamically dispatch jobs to runners.
    • Create a basic runner script that can run a hard-coded prompt against a test repo and open a PR.

    Stretch Goals:

    • Multi-layered approach, Workflow Agents trigger Coding Agents:
      1. Workflow Agent: Gather information about the task interactively from the user.
      2. Coding Agent: Once the interactive agent has refined the task into a clear prompt, it hands this prompt off to the "coding agent." This background agent is responsible for executing the task and producing the actual pull request.
    • Use MCP:
      1. Workflow Agent gathers context information from Slack, Github, etc.
      2. Workflow Agent triggers a Coding Agent.
    • Create a "Standard Task" library with reliable prompts.
      1. Rebasing rancher-monitoring to a new version of kube-prom-stack
      2. Update charts to use new images
      3. Apply changes to comply with a new linter
      4. Bump complex Go dependencies, like k8s modules
      5. Backport pull requests to other branches
    • Add “review agents” that review the generated PR.

    See also


    openQA tests needles elaboration using AI image recognition by mdati

    Description

    In the openQA test framework, to identify the status of a target SUT image (a screenshot of a GUI or a CLI terminal), the needles framework scans the many pictures in its repository, each associated with a given set of tags (strings), selecting specific smaller parts of each available image. For needle management we currently need to keep many stored screenshots, variants of GUI and CLI-terminal images, each one accompanied by a dedicated set of data references (JSON).

    A smarter framework, using image recognition based on AI or other image-processing tools that are nowadays widely available, could improve the matching process and hopefully reduce time and errors during the image verification and detection process.
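
    As a tiny illustration of an alternative matcher, classic OpenCV template matching can stand in for a future AI/ML-based recognizer; the file names and threshold below are placeholders:

        import cv2  # pip install opencv-python

        SCREENSHOT = "sut_screenshot.png"  # current GUI/CLI screenshot from the running test
        FRAGMENT = "needle_fragment.png"   # the area we want to find (e.g. a button or prompt)
        THRESHOLD = 0.9                    # similarity required to call it a match


        def matches(screenshot_path, fragment_path, threshold=THRESHOLD):
            """Return True if the fragment is found in the screenshot with enough similarity."""
            screen = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)
            fragment = cv2.imread(fragment_path, cv2.IMREAD_GRAYSCALE)
            result = cv2.matchTemplate(screen, fragment, cv2.TM_CCOEFF_NORMED)
            _, max_val, _, max_loc = cv2.minMaxLoc(result)
            print("best similarity {:.3f} at {}".format(max_val, max_loc))
            return max_val >= threshold


        print("match" if matches(SCREENSHOT, FRAGMENT) else "no match")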

    Goals

    The main scope of this idea is to match a "graphical" image of the console or GUI status of a running openQA test (a shell-console or application-GUI screenshot) using less time, fewer resources, and fewer errors in data preparation and use than the current openQA needles framework; that is:

    • given an SUT (system under test) GUI or CLI-terminal screenshot, with a local distribution of pixels or text commands related to a running test status,
    • we want to identify a desired target, e.g. a screen image status or a data/commands context,
      • based on AI/ML-pretrained archives containing objects, or other suitable processing tools,
      • possibly also able to identify objects not present in the archive, i.e. by means of AI/ML mechanisms.
    • The matching result should then be adapted to continue working in the openQA test, in place of and equivalent to the result that would have been produced by the original openQA needles framework.
    • We expect an improvement in matching time (less time), reliability of the expected result (fewer errors), and simplification of archive maintenance when adding/removing objects (smaller DB and fewer actions).

    Hackweek POC:

    Main steps

    • Phase 1 - Plan
      • study the available tools
      • prepare a plan for the process to build
    • Phase 2 - Implement
      • write and build a draft application
    • Phase 3 - Data
      • prepare the data archive from a subset of needles
      • initialize/pre-train the base archive
      • select a screenshot from the subset, removing/changing some part
    • Phase 4 - Test
      • run the POC application
      • check that the image type is identified with a good success rate.

    Resources

    The first step of this project is the identification of useful resources for this scope; some possibilities are:

    • SUSE AI and other ML tools (e.g. TensorFlow)
    • Tools able to manage images
    • RPA test tools (e.g. Robot Framework)
    • others.

    Project references


    Multimachine on-prem test with opentofu, ansible and Robot Framework by apappas

    Description

    A long time ago I explored using Robot Framework for testing. A big deficiency compared to our openQA setup is that bringing up a test machine and configuring the connection to it is out of scope.

    Nowadays we have a way¹ to deploy SUTs outside openQA, but we only use it for cloud tests in conjunction with openQA. Using knowledge gained from that project, I am going to try to create a test scenario that replicates an openQA test, but this time including the deployment and setup of the SUT.

    Goals

    Create a simple multi-machine test scenario with the support server and SUT all created by Robot Framework.

    Resources

    1. https://github.com/SUSE/qe-sap-deployment
    2. terraform-libvirt-provider


    Testing and adding GNU/Linux distributions on Uyuni by juliogonzalezgil

    Join the Gitter channel! https://gitter.im/uyuni-project/hackweek

    Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines. It also manages configuration, can run audits, build image containers, monitor and much more!

    Currently there are a few distributions that are completely untested on Uyuni or SUSE Manager (AFAIK), or just not tested for a long time, and it could be interesting to know how hard it would be to work with them and, if possible, fix whatever is broken.

    For newcomers, the easiest distributions are those based on DEB or RPM packages. Distributions with other package formats are doable, but will require adapting the Python and Java code to be able to sync and analyze such packages (and if salt does not support those packages, it will need changes as well). So if you want a distribution with other packages, make sure you are comfortable handling such changes.

    No developer experience? No worries! We have had non-developer contributors in the past, and we are ready to help as long as you are willing to learn. If you don't want to code at all, you can also help us prepare the documentation after someone else has the initial code ready, or you could also help with testing :-)

    The idea is to test Salt clients (including bootstrapping with the bootstrap script) and salt-ssh clients.

    To consider that a distribution has basic support, we should cover at least (points 3-6 are to be tested for both salt minions and salt ssh minions):

    1. Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
    2. Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
    3. Package management (install, remove, update...)
    4. Patching
    5. Applying any basic salt state (including a formula)
    6. Salt remote commands
    7. Bonus point: Java part for product identification, and monitoring enablement
    8. Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
    9. Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
    10. Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)

    If something is breaking, we can try to fix it, but the main idea is to research how well supported it is right now. Beyond that, it's up to each project member how much to hack :-)

    • If you don't have knowledge about some of the steps: ask the team
    • If you still don't know what to do: switch to another distribution and keep testing.

    This card is for EVERYONE, not just developers. Seriously! We have had people from other teams helping who were not developers, and they added support for Debian and for new SUSE Linux Enterprise and openSUSE Leap versions :-)

    In progress/done for Hack Week 25

    Guide

    We started writing a guide, "Adding a new client GNU Linux distribution to Uyuni", at https://github.com/uyuni-project/uyuni/wiki/Guide:-Adding-a-new-client-GNU-Linux-distribution-to-Uyuni, to make things easier for everyone, especially those not too familiar with Uyuni or not very technical.

    openSUSE Leap 16.0

    The distribution we all love!

    https://en.opensuse.org/openSUSE:Roadmap#DRAFTScheduleforLeap16.0

    Current status: we started last year, and it's complete now for Hack Week 25! :-D

    • [W] Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file). NOTE: Done; the client tools for SL Micro 6 are being used, as those for SLE 16.0/openSUSE Leap 16.0 are not available yet
    • [W] Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
    • [W] Package management (install, remove, update...). Works, including reboot-requirement detection