I once had a bad dream.

It started well, a sunny day. I had just fixed an issue and pushed it to my fork in order to create a Pull Request. I was happy. It felt awesome to have found such an elegant fix. Two lines of code.

But then something happened. A cloud appeared in the sky and partially hid the sun. GitHub triggered the end-to-end test suite. You could see the waiting icon, you could almost hear the engines starting... lightning and thunder broke out, the sky turned dark, and the e2e tests started on our private Jenkins server.

One hour passed by... two hours... the e2e tests were still running... four hours, not even 50%. Finally, I had to go get some dinner and some rest, so I decided to look at it the next day.

Sleep was not good. I dreamt the tests would fail and the deadline would be missed... I couldn't sleep any more, so I woke up early in the morning, before sunrise. The computer, still open, was lighting up the room. Outside, the storm had turned into heavy rain. And the tests had not passed! Four and a half hours after starting them, so half an hour after I had left for dinner, there was a failed test.

Finding out what went wrong took two hours... it was not even related to the code, but to an infrastructure issue: a timeout when connecting to a mirror that was being restarted while the tests ran. Apparently we had hit a maintenance window.

Damn! So let's just "restart" the tests. This time, I decided to set an alarm for four hours later and switch to another task, just to avoid the anxiety of the previous evening staring at the results. Four hours passed by, and the tests were good. It was not raining anymore, a bit of sun was filtering through the window, hope was in the air. Four more hours, and the tests still had not failed. I went to dinner and bed again, setting the alarm for early in the morning. First thing in the morning I looked at the tests... still running, feels good. All tests had passed and there was only one final "cleanup" stage left. Shower, breakfast, sun is shining again!

Finally, all is green, so let's go and push the merge button... uh oh... clouds hide the sun again... where is the merge button?? No way, there is a conflict with the code in master and I can't merge!!! I can't merge!!! Shocked, I looked into the history... John had merged a PR overnight that touched the same file... hopeless, I started crying at my desk...

Scared?

This is not so different from what could happen if we ran the whole e2e test suite on SUSE Manager (Uyuni) Pull Requests. However, not running any e2e tests also has a bad consequence: we find the issues after the code has been merged into master, and then we spend days figuring out what caused them, reverting commits, and starting again.

However, if we could run just a subset of the e2e tests, we could shorten the time they take and thus run them at the PR level, so that no broken code gets merged.

The thing is, how do we select which tests to run?

This project is about using Machine Learning to do predictive test selection: choosing which tests to run based on the history of previous test runs.

Inspired by: https://engineering.fb.com/2018/11/21/developer-tools/predictive-test-selection/
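
To make this concrete, here is a minimal sketch of what such a selector could look like. Everything in it is illustrative (the record format exported from CI, the feature choices, the budget of 50 tests) rather than the project's actual design:

# Toy predictive test selector: learn from historical runs which tests
# are likely to fail for a given set of changed files.
from sklearn.ensemble import GradientBoostingClassifier

def features(changed_files, test, history):
    """Numeric features for one (change, test) pair."""
    runs = [h for h in history if h["test"] == test]
    fails = [h for h in runs if h["failed"]]
    # How often this test failed when overlapping files were touched before.
    overlap = sum(bool(set(h["changed_files"]) & set(changed_files)) for h in fails)
    return [len(fails) / max(len(runs), 1), overlap, len(changed_files)]

def train(history):
    # history: [{"changed_files": [...], "test": "...", "failed": bool}, ...]
    X = [features(h["changed_files"], h["test"], history) for h in history]
    y = [int(h["failed"]) for h in history]
    return GradientBoostingClassifier().fit(X, y)

def select_tests(model, changed_files, all_tests, history, budget=50):
    """Run only the `budget` tests most likely to fail for this change."""
    def prob(test):
        return model.predict_proba([features(changed_files, test, history)])[0][1]
    return sorted(all_tests, key=prob, reverse=True)[:budget]

A real system would have to handle feature leakage and class imbalance (most runs pass), but the shape of the problem is exactly this: a binary classifier over (change, test) pairs, ranked by failure probability.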

Looking for hackers with the skills:

ml ci qa ai

This project is part of:

Hack Week 20

Activity

  • almost 5 years ago: PSuarezHernandez liked this project.
  • almost 5 years ago: llansky3 liked this project.
  • almost 5 years ago: ories liked this project.
  • almost 5 years ago: j_renner liked this project.
  • almost 5 years ago: RDiasMateus liked this project.
  • almost 5 years ago: pagarcia liked this project.
  • almost 5 years ago: jordimassaguerpla added keyword "ci" to this project.
  • almost 5 years ago: jordimassaguerpla added keyword "qa" to this project.
  • almost 5 years ago: jordimassaguerpla added keyword "ai" to this project.
  • almost 5 years ago: jordimassaguerpla added keyword "ml" to this project.
  • almost 5 years ago: jordimassaguerpla originated this project.

  • Comments

    • llansky3
      almost 5 years ago by llansky3

      This could have some wider potential - clever picking of tests to shorten the time yet maximize the findings (and learning that over time from the test execution history to predict). I can see how that saves tons of money in cases where tests on real industrial machines need to be executed (e.g. the cost of running an engine on a test bed is $5-10k per day, so there is a huge difference between needing to run for 1 day or 3).

    Similar Projects

    Kubernetes-Based ML Lifecycle Automation by lmiranda

    Description

    This project aims to build a complete end-to-end Machine Learning pipeline running entirely on Kubernetes, using Go and containerized ML components.

    The pipeline will automate the lifecycle of a machine learning model, including:

    • Data ingestion/collection
    • Model training as a Kubernetes Job
    • Model artifact storage in an S3-compatible registry (e.g. Minio)
    • A Go-based deployment controller that automatically deploys new model versions to Kubernetes using Rancher
    • A lightweight inference service that loads and serves the latest model
    • Monitoring of model performance and service health through Prometheus/Grafana

    The outcome is a working prototype of an MLOps workflow that demonstrates how AI workloads can be trained, versioned, deployed, and monitored using the Kubernetes ecosystem.
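
    The deployment controller is the interesting moving part here. The project plans it in Go; as a rough illustration of the loop it would run (scan the registry, render a manifest, apply it), here is a Python sketch in which the endpoint, credentials, bucket layout, and image name are all made up:

    import subprocess
    import boto3

    # Assumed S3-compatible registry endpoint (e.g. an in-cluster Minio).
    s3 = boto3.client("s3", endpoint_url="http://minio:9000",
                      aws_access_key_id="minio", aws_secret_access_key="minio123")

    def latest_model_key(bucket="models", prefix="my-model/"):
        """Scan the registry and return the most recently uploaded version."""
        objects = s3.list_objects_v2(Bucket=bucket, Prefix=prefix).get("Contents", [])
        return max(objects, key=lambda o: o["LastModified"])["Key"] if objects else None

    def deploy(model_key):
        """Render a Deployment manifest pointing at the new model and apply it."""
        manifest = f"""
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: inference
    spec:
      replicas: 1
      selector:
        matchLabels: {{app: inference}}
      template:
        metadata:
          labels: {{app: inference}}
        spec:
          containers:
          - name: inference
            image: registry.example.com/inference:latest  # hypothetical image
            env:
            - name: MODEL_KEY
              value: {model_key}
    """
        subprocess.run(["kubectl", "apply", "-f", "-"],
                       input=manifest.encode(), check=True)

    A controller like this would poll latest_model_key() and call deploy() whenever the key changes; the Rancher integration and rollback handling are where the real work lies.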

    Goals

    By the end of Hack Week, the project should:

    1. Produce a fully functional ML pipeline running on Kubernetes with:

      • Data collection job
      • Training job container
      • Storage and versioning of trained models
      • Automated deployment of new model versions
      • Model inference API service
      • Basic monitoring dashboards
    2. Showcase a Go-based deployment automation component, which scans the model registry and automatically generates & applies Kubernetes manifests for new model versions.

    3. Enable continuous improvement by making the system modular and extensible (e.g., additional models, metrics, autoscaling, or drift detection can be added later).

    4. Prepare a short demo explaining the end-to-end process and how new models flow through the system.

    Resources

    Project Repository

    Updates

    1. Training pipeline and datasets
    2. Inference Service py


    The Agentic Rancher Experiment: Do Androids Dream of Electric Cattle? by moio

    Rancher is a beast of a codebase. Let's investigate if the new 2025 generation of GitHub Autonomous Coding Agents and Copilot Workspaces can actually tame it. (Image: a GitHub robot mascot trying to lasso a blue bull with a Kubernetes logo tattooed on it.)


    The Plan

    Create a sandbox GitHub Organization, clone in key Rancher repositories, and let the AI loose to see if it can handle real-world enterprise OSS maintenance - or if it just hallucinates new breeds of Kubernetes resources!

    Specifically, throw "Agentic Coders" some typical tasks in a complex, long-lived open-source project, such as:


    The Grunt Work: generate missing GoDocs, unit tests, and refactorings. Rebase PRs.

    The Complex Stuff: fix actual (historical) bugs and feature requests to see if they can traverse the complexity without (too much) human hand-holding.

    Hunting Down Gaps: find areas lacking in docs, areas of improvement in code, dependency bumps, and so on.


    If time allows, also experiment with Model Context Protocol (MCP) to give agents context on our specific build pipelines and CI/CD logs.

    Why?

    We know AI can write "Hello World." and also moderately complex programs from a green field. But can it rebase a 3-month-old PR with conflicts in rancher/rancher? I want to find the breaking point of current AI agents, to determine if and how they can help us reduce our technical debt and work faster and better, and at the same time to find out about their pitfalls and shortcomings.

    The CONCLUSION!!!

    A "State of the Union" document was compiled to summarize the lessons learned this week. For more gory details, just read on in the diary below!


    Multi-agent AI assistant for Linux troubleshooting by doreilly

    Description

    Explore multi-agent architecture as a way to avoid MCP context rot.

    Having one agent with many tools bloats the context with low-level details about tool descriptions, parameter schemas, etc., which hurts LLM performance. Instead, have many specialised agents, each with just the tools it needs for its role. A top-level supervisor agent takes the user prompt and delegates to the appropriate sub-agents.
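
    As a library-agnostic sketch of that split (the project itself targets Google's ADK, so the classes below are stand-ins, and the keyword routing fakes what would really be an LLM decision):

    import subprocess

    class SubAgent:
        """Owns only the tools for one subsystem, keeping its context small."""
        def __init__(self, name, description, tools):
            self.name, self.description, self.tools = name, description, tools

        def run(self, prompt):
            # A real agent would let the LLM pick a tool; we just run them all.
            return {tool: subprocess.run(cmd, capture_output=True, text=True).stdout
                    for tool, cmd in self.tools.items()}

    class Supervisor:
        """Sees only one-line descriptions of sub-agents, never tool schemas."""
        def __init__(self, routes):
            self.routes = routes  # [(keywords, agent), ...]

        def delegate(self, prompt):
            for keywords, agent in self.routes:
                if any(k in prompt.lower() for k in keywords):
                    return agent.run(prompt)
            return "no specialist found"

    systemd_agent = SubAgent("systemd", "service and boot issues",
                             {"failed-units": ["systemctl", "--failed"]})
    firewall_agent = SubAgent("firewalld", "network and firewall issues",
                              {"list-all": ["firewall-cmd", "--list-all"]})
    supervisor = Supervisor([({"service", "slow", "boot"}, systemd_agent),
                             ({"connect", "port", "firewall"}, firewall_agent)])
    print(supervisor.delegate("I can't connect to the apache webserver"))

    The structural point survives the simplification: the full tool definitions live inside each specialist, so the supervisor's context stays small no matter how many tools the system grows.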

    Goals

    Create an AI assistant with some sub-agents that are specialists at troubleshooting Linux subsystems, e.g. systemd, selinux, firewalld etc. The agents can get information from the system by implementing their own tools with simple function calls, or use tools from MCP servers, e.g. a systemd-agent can use tools from systemd-mcp.

    Example prompts/responses:

    user$ the system seems slow
    assistant$ process foo with pid 12345 is using 1000% cpu ...
    
    user$ I can't connect to the apache webserver
    assistant$ the firewall is blocking http ... you can open the port with firewall-cmd --add-port ...
    

    Resources

    Language: Python. The Python ADK is more mature than the Golang one.

    https://google.github.io/adk-docs/

    https://github.com/djoreilly/linux-helper


    Song Search with CLAP by gcolangiuli

    Description

    Contrastive Language-Audio Pretraining (CLAP) is an open-source library that enables training a neural network on both audio and text descriptions, making it possible to search for audio using a text input. Several pre-trained models for song search are already available on Hugging Face.

    SUSE Hackweek AI Song Search
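
    For a flavour of how such a search can work, here is a hedged sketch against one of the public CLAP checkpoints on Hugging Face; the file names and query are placeholders, and loading audio with librosa is an assumption:

    import torch
    import librosa
    from transformers import ClapModel, ClapProcessor

    model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
    processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

    # Embed the library once; CLAP expects 48 kHz mono audio.
    songs = ["song1.wav", "song2.wav"]  # placeholder files
    audio = [librosa.load(p, sr=48000, mono=True)[0] for p in songs]
    audio_inputs = processor(audios=audio, sampling_rate=48000, return_tensors="pt")
    with torch.no_grad():
        audio_embeds = model.get_audio_features(**audio_inputs)

    # Embed a free-text query and rank songs by cosine similarity.
    text_inputs = processor(text=["upbeat jazz with saxophone"], return_tensors="pt")
    with torch.no_grad():
        text_embed = model.get_text_features(**text_inputs)
    scores = torch.nn.functional.cosine_similarity(text_embed, audio_embeds)
    print(songs[int(scores.argmax())])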

    Goals

    Evaluate how CLAP can be used for song searching and determine which types of queries yield the best results by developing a Minimum Viable Product (MVP) in Python. Based on the results of this MVP, future steps could include:

    • Music Tagging;
    • Free text search;
    • Integration with an LLM (for example, with MCP or the OpenAI API) for music suggestions based on your own library.

    The code for this project will be entirely written using AI to better explore and demonstrate AI capabilities.

    Result

    In this MVP we implemented:

    • Async Song Analysis with the CLAP model
    • Free Text Search of the songs
    • Similar song search based on vector representation
    • Containerised version with web interface

    We also documented what went well and what can be improved in the use of AI.

    You can have a look at the result here:

    Future work could focus on performance improvements and on the stability of the analysis.

    References


    AI-Powered Unit Test Automation for Agama by joseivanlopez

    The Agama project is a multi-language Linux installer that leverages the distinct strengths of several key technologies:

    • Rust: Used for the back-end services and the core HTTP API, providing performance and safety.
    • TypeScript (React/PatternFly): Powers the modern web user interface (UI), ensuring a consistent and responsive user experience.
    • Ruby: Integrates existing, robust YaST libraries (e.g., yast-storage-ng) to reuse established functionality.

    The Problem: Testing Overhead

    Developing and maintaining code across these three languages requires a significant, tedious effort in writing, reviewing, and updating unit tests for each component. This high cost of testing is a drain on developer resources and can slow down the project's evolution.

    The Solution: AI-Driven Automation

    This project aims to eliminate the manual overhead of unit testing by exploring and integrating AI-driven code generation tools. We will investigate how AI can:

    1. Automatically generate new unit tests as code is developed.
    2. Intelligently correct and update existing unit tests when the application code changes.

    By automating this crucial but monotonous task, we can free developers to focus on feature implementation and significantly improve the speed and maintainability of the Agama codebase.
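
    As a rough sketch of the loop such a workflow could run (the gemini-cli invocation below is an assumption, so treat this as pseudocode for the integration rather than a finished tool):

    import subprocess

    def changed_source_files(base="origin/master"):
        """List the files this branch touches, via `git diff --name-only`."""
        out = subprocess.run(["git", "diff", "--name-only", base],
                             capture_output=True, text=True, check=True)
        return [f for f in out.stdout.splitlines()
                if f.endswith((".rs", ".ts", ".rb"))]  # Agama's three languages

    for path in changed_source_files():
        prompt = (f"Update or create unit tests for {path}, following the "
                  "existing test conventions in this repository.")
        # `-p` is gemini-cli's non-interactive prompt flag (assumed).
        subprocess.run(["gemini", "-p", prompt], check=True)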

    Goals

    • Proof of Concept: Successfully integrate and demonstrate an authorized AI tool (e.g., gemini-cli) to automatically generate unit tests.
    • Workflow Integration: Define and document a new unit test automation workflow that seamlessly integrates the selected AI tool into the existing Agama development pipeline.
    • Knowledge Sharing: Establish a set of best practices for using AI in code generation, sharing the learned expertise with the broader team.

    Contribution & Resources

    We are seeking contributors interested in AI-powered development and improving developer efficiency. Whether you have previous experience with code generation tools or are eager to learn, your participation is highly valuable.

    If you want to dive deep into AI for software quality, please reach out and join the effort!

    • Authorized AI Tools: Tools supported by SUSE (e.g., gemini-cli)
    • Focus Areas: Rust, TypeScript, and Ruby components within the Agama project.

    Interesting Links


    Exploring Modern AI Trends and Kubernetes-Based AI Infrastructure by jluo

    Description

    Build a solid understanding of the current landscape of Artificial Intelligence and how modern cloud-native technologies—especially Kubernetes—support AI workloads.

    Goals

    Use Gemini Learning Mode to guide the exploration, surface relevant concepts, and structure the learning journey:

    • Gain insight into the latest AI trends, tools, and architectural concepts.
    • Understand how Kubernetes and related cloud-native technologies are used in the AI ecosystem (model training, deployment, orchestration, MLOps).

    Resources

    • Red Hat AI Topic Articles

      • https://www.redhat.com/en/topics/ai
    • Kubeflow Documentation

      • https://www.kubeflow.org/docs/
    • Q4 2025 CNCF Technology Landscape Radar report:

      • https://www.cncf.io/announcements/2025/11/11/cncf-and-slashdata-report-finds-leading-ai-tools-gaining-adoption-in-cloud-native-ecosystems/
      • https://www.cncf.io/wp-content/uploads/2025/11/cncfreporttechradar_111025a.pdf
    • Agent-to-Agent (A2A) Protocol

      • https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/