I once had a bad dream.

It started well, on a sunny day. I had just fixed an issue and pushed it to my fork in order to create a Pull Request. I was happy. It felt awesome to have found such an elegant fix. Two lines of code.

But then something happened. A cloud appeared in the sky and partially hid the sun. GitHub triggered the end-to-end test suite. You could see the waiting icon, you could almost hear the engines starting... lightning and thunder broke out, the sky turned dark, and the e2e tests started on our private Jenkins server.

One hour passed by... two hours... the e2e tests were still running... four hours, not even 50%. Finally, I had to go get some dinner and some rest, so I decided to look at it the next day.

Sleep was not good. I dreamt the tests would fail and the deadline would be missed... I couldn't sleep any more, so I woke up early in the morning, before sunrise. The computer, still open, was lighting up the room. Outside, the storm had turned into heavy rain. And the tests had not passed! There was a failed test four and a half hours after starting them, so half an hour after I had left for dinner.

Finding out what went wrong took two hours... and it was not even related to the code, but to an infrastructure issue: a timeout when connecting to a mirror that was being restarted while the tests ran. Apparently we had hit a maintenance window.

Damn! So let's just restart the tests. This time, I decided to set an alarm for four hours later and switch to another task, just to avoid the anxiety of the previous evening spent staring at the results. Four hours passed by, and the tests were good. It was not raining anymore, a ray of sun was filtering through the window, hope was in the air. Four more hours, and the tests still had not failed. Off to dinner and bed again, with the alarm set for early in the morning. First thing in the morning I looked at the tests... still running, which felt good. All tests had passed and there was only one final "cleanup" stage left. Shower, breakfast, the sun was shining again!

Finally, all was green, so let's go and push the merge button... uh oh... clouds hid the sun again... where is the merge button?? No way, there is a conflict with the code in master and I can't merge!!! I can't merge!!! Shocked, I looked into the history... John had merged a PR overnight that touched the same file... hopeless, I started crying at my desk...

Scared?

This is not so different from what could happen if we ran the whole e2e test suite on SUSE Manager (Uyuni) Pull Requests. However, not running any e2e tests also has a bad consequence: we find the issues after the code has been merged into master, and then we spend days figuring out what could have caused them, reverting commits, and starting again.

However, if we could run a subset of the e2e tests, we could shorten the run time enough to run them at the PR level, so no broken code could get merged.

The thing is, how do we select which tests to run?

This project is about using Machine Learning to do predictive test selection, that is, selecting which tests to run based on the history of previous test runs.

Inspired by: https://engineering.fb.com/2018/11/21/developer-tools/predictive-test-selection/
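
To make this concrete, here is a minimal sketch of what such a selector could look like, in the spirit of the Facebook post above: a classifier trained on historical runs scores each candidate test by its probability of failing for a given change, and only the most suspicious ones run at the PR level. The feature names, the training data, and the use of scikit-learn are illustrative assumptions, not the actual Uyuni setup.

```python
# Minimal sketch of predictive test selection: learn from past runs how
# likely each test is to fail for a given change, then run only the top
# candidates. All features and data below are hypothetical.
from dataclasses import dataclass

from sklearn.ensemble import GradientBoostingClassifier


@dataclass
class TestRunRecord:
    file_overlap: float         # share of changed files this test has touched before
    recent_failure_rate: float  # failures / runs over the last N runs
    change_size: int            # lines changed in the PR
    failed: bool                # outcome of this historical run


# Hypothetical history; in practice this would be mined from Jenkins logs.
history = [
    TestRunRecord(0.8, 0.3, 120, True),
    TestRunRecord(0.1, 0.0, 15, False),
    TestRunRecord(0.5, 0.1, 40, False),
    TestRunRecord(0.9, 0.6, 300, True),
]

X = [[r.file_overlap, r.recent_failure_rate, r.change_size] for r in history]
y = [r.failed for r in history]
model = GradientBoostingClassifier().fit(X, y)


def select_tests(candidates, budget):
    """Rank candidate tests by predicted failure probability and keep
    the `budget` most suspicious ones."""
    scored = [(model.predict_proba([feats])[0][1], name) for name, feats in candidates]
    return [name for _, name in sorted(scored, reverse=True)[:budget]]


# Example: pick the 2 tests most worth running for this PR.
candidates = [
    ("test_channel_sync", [0.7, 0.2, 80]),
    ("test_login", [0.0, 0.0, 80]),
    ("test_salt_minion", [0.4, 0.5, 80]),
]
print(select_tests(candidates, budget=2))
```

With a budget like this, the full suite could still run post-merge as a safety net; the PR gate only needs to catch most regressions most of the time.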

Looking for hackers with the skills:

ml ci qa ai

This project is part of:

Hack Week 20

Activity

  • almost 5 years ago: PSuarezHernandez liked this project.
  • almost 5 years ago: llansky3 liked this project.
  • almost 5 years ago: ories liked this project.
  • almost 5 years ago: j_renner liked this project.
  • almost 5 years ago: RDiasMateus liked this project.
  • almost 5 years ago: pagarcia liked this project.
  • almost 5 years ago: jordimassaguerpla added keyword "ci" to this project.
  • almost 5 years ago: jordimassaguerpla added keyword "qa" to this project.
  • almost 5 years ago: jordimassaguerpla added keyword "ai" to this project.
  • almost 5 years ago: jordimassaguerpla added keyword "ml" to this project.
  • almost 5 years ago: jordimassaguerpla originated this project.

  • Comments

    • llansky3
      almost 5 years ago by llansky3

      This could have some wider potential - clever picking of tests to shorten the time yet maximize the findings (and learning that over time from test execution history to predict). I can see how that saves tons of money in cases where tests on real industrial machines need to be executed (e.g. the cost of running an engine on a test bed is $5-10k per day, so there is a huge difference between needing to run for one day or three).

    Similar Projects

    Kubernetes-Based ML Lifecycle Automation by lmiranda

    Description

    This project aims to build a complete end-to-end Machine Learning pipeline running entirely on Kubernetes, using Go and containerized ML components.

    The pipeline will automate the lifecycle of a machine learning model, including:

    • Data ingestion/collection
    • Model training as a Kubernetes Job
    • Model artifact storage in an S3-compatible registry (e.g. Minio)
    • A Go-based deployment controller that automatically deploys new model versions to Kubernetes using Rancher
    • A lightweight inference service that loads and serves the latest model
    • Monitoring of model performance and service health through Prometheus/Grafana

    The outcome is a working prototype of an MLOps workflow that demonstrates how AI workloads can be trained, versioned, deployed, and monitored using the Kubernetes ecosystem.
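
    As an illustration of the deployment-automation piece, here is a rough Python sketch of the controller's loop (the project itself plans to write this component in Go): it polls the S3-compatible registry for new model versions and patches the inference Deployment accordingly. The Minio endpoint, credentials, bucket layout, and Deployment name are assumptions for the sake of the example.

    ```python
    # Rough sketch of the deployment controller's control loop: watch the
    # model registry, roll the inference Deployment when a new version lands.
    import time

    from kubernetes import client, config
    from minio import Minio

    registry = Minio("minio.example.com:9000",
                     access_key="minioadmin", secret_key="minioadmin", secure=False)
    config.load_kube_config()
    apps = client.AppsV1Api()


    def latest_model_version(bucket="models", prefix="classifier/"):
        # Assumes objects are stored as classifier/<version>/model.pkl
        versions = {obj.object_name.split("/")[1]
                    for obj in registry.list_objects(bucket, prefix=prefix, recursive=True)}
        return max(versions) if versions else None


    deployed = None
    while True:
        latest = latest_model_version()
        if latest and latest != deployed:
            # Point the inference pods at the new model via an env var;
            # changing the pod template triggers a rolling update.
            patch = {"spec": {"template": {"spec": {"containers": [
                {"name": "inference",
                 "env": [{"name": "MODEL_VERSION", "value": latest}]}]}}}}
            apps.patch_namespaced_deployment(name="inference", namespace="ml", body=patch)
            deployed = latest
        time.sleep(30)
    ```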

    Goals

    By the end of Hack Week, the project should:

    1. Produce a fully functional ML pipeline running on Kubernetes with:

      • Data collection job
      • Training job container
      • Storage and versioning of trained models
      • Automated deployment of new model versions
      • Model inference API service
      • Basic monitoring dashboards
    2. Showcase a Go-based deployment automation component, which scans the model registry and automatically generates & applies Kubernetes manifests for new model versions.

    3. Enable continuous improvement by making the system modular and extensible (e.g., additional models, metrics, autoscaling, or drift detection can be added later).

    4. Prepare a short demo explaining the end-to-end process and how new models flow through the system.

    Resources

    Project Repository

    Updates

    1. Training pipeline and datasets
    2. Inference Service


    AI-Powered Unit Test Automation for Agama by joseivanlopez

    The Agama project is a multi-language Linux installer that leverages the distinct strengths of several key technologies:

    • Rust: Used for the back-end services and the core HTTP API, providing performance and safety.
    • TypeScript (React/PatternFly): Powers the modern web user interface (UI), ensuring a consistent and responsive user experience.
    • Ruby: Integrates existing, robust YaST libraries (e.g., yast-storage-ng) to reuse established functionality.

    The Problem: Testing Overhead

    Developing and maintaining code across these three languages requires a significant, tedious effort in writing, reviewing, and updating unit tests for each component. This high cost of testing is a drain on developer resources and can slow down the project's evolution.

    The Solution: AI-Driven Automation

    This project aims to eliminate the manual overhead of unit testing by exploring and integrating AI-driven code generation tools. We will investigate how AI can:

    1. Automatically generate new unit tests as code is developed.
    2. Intelligently correct and update existing unit tests when the application code changes.

    By automating this crucial but monotonous task, we can free developers to focus on feature implementation and significantly improve the speed and maintainability of the Agama codebase.
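
    As a hedged sketch of what this automation could look like in its simplest form, the snippet below hands a source file to gemini-cli and writes the generated test next to it. The exact CLI flags and the Agama file paths are assumptions, not a settled workflow.

    ```python
    # Minimal sketch: feed a source file to gemini-cli and save the
    # generated unit test. Flags and paths below are assumptions.
    import subprocess
    from pathlib import Path


    def generate_test(source: Path, test_path: Path) -> None:
        prompt = ("Write a unit test for the following code, using the idiomatic "
                  f"test framework for its language:\n\n{source.read_text()}")
        # gemini-cli in non-interactive mode; `-p` passes the prompt directly.
        result = subprocess.run(["gemini", "-p", prompt],
                                capture_output=True, text=True, check=True)
        test_path.write_text(result.stdout)


    # e.g. a Rust back-end module (hypothetical paths):
    generate_test(Path("rust/agama-server/src/users.rs"),
                  Path("rust/agama-server/tests/users_test.rs"))
    ```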

    Goals

    • Proof of Concept: Successfully integrate and demonstrate an authorized AI tool (e.g., gemini-cli) to automatically generate unit tests.
    • Workflow Integration: Define and document a new unit test automation workflow that seamlessly integrates the selected AI tool into the existing Agama development pipeline.
    • Knowledge Sharing: Establish a set of best practices for using AI in code generation, sharing the learned expertise with the broader team.

    Contribution & Resources

    We are seeking contributors interested in AI-powered development and improving developer efficiency. Whether you have previous experience with code generation tools or are eager to learn, your participation is highly valuable.

    If you want to dive deep into AI for software quality, please reach out and join the effort!

    • Authorized AI Tools: Tools supported by SUSE (e.g., gemini-cli)
    • Focus Areas: Rust, TypeScript, and Ruby components within the Agama project.

    Interesting Links


    Enable more features in mcp-server-uyuni by j_renner

    Description

    I would like to contribute to mcp-server-uyuni (the MCP server for Uyuni / Multi-Linux Manager) by exposing additional features as tools. There are lots of relevant features to be found throughout the API, for example:

    • System operations and infos
    • System groups
    • Maintenance windows
    • Ansible
    • Reporting
    • ...

    At the end of the week I managed to enable basic system group operations:

    • List all system groups visible to the user
    • Create new system groups
    • List systems assigned to a group
    • Add and remove systems from groups
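
    For illustration, here is a simplified sketch of what two of these system-group tools could look like, using the Python MCP SDK (FastMCP) and Uyuni's XML-RPC API; the URL, credentials, and error handling are assumptions rather than the actual mcp-server-uyuni code.

    ```python
    # Sketch of MCP tools wrapping Uyuni system-group API calls.
    from xmlrpc.client import ServerProxy

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("uyuni")
    api = ServerProxy("https://uyuni.example.com/rpc/api")
    session = api.auth.login("admin", "admin")  # assumed credentials


    @mcp.tool()
    def list_system_groups() -> list:
        """List all system groups visible to the authenticated user."""
        return api.systemgroup.listAllGroups(session)


    @mcp.tool()
    def add_systems_to_group(group: str, system_ids: list[int]) -> int:
        """Add the given systems to a system group."""
        return api.systemgroup.addOrRemoveSystems(session, group, system_ids, True)


    if __name__ == "__main__":
        mcp.run()  # stdio transport by default
    ```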

    Goals

    • Set up test environment locally with the MCP server and client + a recent MLM server [DONE]
    • Identify features and use cases offering a benefit with limited effort required for enablement [DONE]
    • Create a PR to the repo [DONE]

    Resources


    Liz - Prompt autocomplete by ftorchia

    Description

    Liz is the Rancher AI assistant for cluster operations.

    Goals

    We want to help users when sending new messages to Liz, by adding an autocomplete feature to complete their requests based on the context.

    Example:

    • User prompt: "Can you show me the list of p"
    • Autocomplete suggestion: "Can you show me the list of p...od in local cluster?"

    Example:

    • User prompt: "Show me the logs of #rancher-"
    • Chat console: It shows a drop-down widget, next to the # character, with the list of available pod names starting with "rancher-".

    Technical Overview

    1. The AI agent should expose a new ws/autocomplete endpoint to proxy autocomplete messages to the LLM.
    2. The UI extension should be able to display prompt suggestions and allow users to apply the autocomplete to the Prompt via keyboard shortcuts.
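
    As a rough sketch of point 1 above, the snippet below shows a ws/autocomplete WebSocket endpoint that forwards the partial prompt to an LLM and returns a suggestion. FastAPI and the complete_prompt helper are assumptions for illustration; the actual agent stack may differ.

    ```python
    # Sketch of a WebSocket autocomplete proxy for the AI agent.
    from fastapi import FastAPI, WebSocket

    app = FastAPI()


    async def complete_prompt(partial: str) -> str:
        # Placeholder for the real LLM call; it would include cluster
        # context (e.g. available pod names) so suggestions stay grounded.
        return partial + "ods in the local cluster?"


    @app.websocket("/ws/autocomplete")
    async def autocomplete(ws: WebSocket):
        await ws.accept()
        while True:
            partial = await ws.receive_text()  # e.g. "Can you show me the list of p"
            await ws.send_text(await complete_prompt(partial))
    ```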

    Resources

    GitHub repository


    Backporting patches using LLM by jankara

    Description

    Backporting Linux kernel fixes (either for CVE issues or as part of the general git-fixes workflow) is boring and mostly mechanical work (dealing with changes in context, renamed variables, new helper functions, etc.). The idea of this project is to explore the use of LLMs for backporting Linux kernel commits to SUSE kernels.
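
    As a sketch of the sandboxing idea, the snippet below runs gemini-cli inside a container that only sees the kernel tree, so the LLM never touches the rest of the filesystem. The image name, paths, and prompt are illustrative; the real container lives in the repository linked below.

    ```python
    # Sketch: run the backporting agent in a container with only the
    # kernel tree mounted, for privacy and security.
    import subprocess


    def backport_in_container(kernel_tree: str, commit: str) -> None:
        prompt = (f"Backport upstream commit {commit} onto the checked-out "
                  "branch, resolving any context conflicts.")
        subprocess.run([
            "docker", "run", "--rm",
            "-v", f"{kernel_tree}:/work",  # only the kernel tree is exposed
            "-w", "/work",
            "gemini-cli-backporter",       # hypothetical image name
            "gemini", "-p", prompt,
        ], check=True)
    ```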

    Goals

    • Create a safe environment allowing the LLM to run and backport patches without exposing the whole filesystem to it (for privacy and security reasons).
    • Write a prompt that will guide the LLM through the backporting process. Fine-tune it based on experimental results.
    • Explore the success rate of LLMs when backporting various patches.

    Resources

    • Docker
    • Gemini CLI

    Repository

    The current version of the container, with some instructions for use, is at: https://gitlab.suse.de/jankara/gemini-cli-backporter


    Background Coding Agent by mmanno

    Description

    I have had only bad experiences with AI one-shots. However, monitoring the agent's work closely and intervening often did result in productivity gains.

    Now, other companies are using agents in pipelines. That makes sense to me: just like CI, we want to offload work to pipelines. Our engineering teams are consistently slowed down by "toil": low-impact, repetitive maintenance tasks. A simple linter rule change, a dependency bump, rebasing patch sets on top of newer releases, or an API deprecation requires dozens of manual PRs, draining time from feature development.

    So far we have been writing deterministic, script-based automation for these tasks. And it turns out to be a common trap. These scripts are brittle, complex, and become a massive maintenance burden themselves.

    Can we make prompts and workflows smart enough to succeed at background coding?

    Goals

    We will build a platform that allows engineers to execute complex code transformations using prompts.

    By automating this toil, we accelerate large-scale migrations and allow teams to focus on high-value work.

    Our platform will consist of three main components:

    • "Change" Definition: Engineers will define a transformation as a simple, declarative manifest:
      • The target repositories.
      • A wrapper to run a "coding agent", e.g., "gemini-cli".
      • The task as a natural language prompt.
    • "Change" Management Service: A central service that orchestrates the jobs. It will receive Change definitions and be responsible for the job lifecycle.
    • Execution Runners: We could use existing sandboxed CI runners (like GitHub/GitLab runners) to execute each job or spawn a container.
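
    As a sketch of what such a "Change" definition could look like, here is the manifest modeled as a Python dataclass; the field names are assumptions, not a settled format.

    ```python
    # Hypothetical shape for a "Change" manifest.
    from dataclasses import dataclass


    @dataclass
    class Change:
        repos: list[str]           # target repositories
        prompt: str                # the task, in natural language
        agent: str = "gemini-cli"  # wrapper used to run the coding agent


    linter_migration = Change(
        repos=["org/repo-a", "org/repo-b"],  # hypothetical repositories
        prompt="Apply the new linter rules and fix all reported findings.",
    )
    ```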

    MVP

    • Define the Change manifest format.
    • Build the core Management Service that can accept and queue a Change.
    • Connect management service and runners, dynamically dispatch jobs to runners.
    • Create a basic runner script that can run a hard-coded prompt against a test repo and open a PR.

    Stretch Goals:

    • Multi-layered approach, Workflow Agents trigger Coding Agents:
      1. Workflow Agent: Gather information about the task interactively from the user.
      2. Coding Agent: Once the interactive agent has refined the task into a clear prompt, it hands this prompt off to the "coding agent." This background agent is responsible for executing the task and producing the actual pull request.
    • Use MCP:
      1. Workflow Agent gathers context information from Slack, Github, etc.
      2. Workflow Agent triggers a Coding Agent.
    • Create a "Standard Task" library with reliable prompts.
      1. Rebasing rancher-monitoring to a new version of kube-prom-stack
      2. Update charts to use new images
      3. Apply changes to comply with a new linter
      4. Bump complex Go dependencies, like k8s modules
      5. Backport pull requests to other branches
    • Add “review agents” that review the generated PR.

    See also