Project Description
Over the years, our Bugzilla database has grown considerably in size, becoming a very valuable source of truth for most support and development cases; still, searching for specific items is quite tricky, and the results do not always match expectations.
What about feeding a Machine Learning platform with the Bugzilla database, so that it could be queried through an AI interface? Wouldn't it be nice/convenient to ask an AI: "Gimme hints about this kernel dump!" or "What is the root cause of this stack trace?"
It is the age of choice in the end, isn't it?
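To make the idea a bit more concrete: one possible shape for such a query interface is plain semantic retrieval over embedded bug texts. The sketch below is purely illustrative, assuming bug summaries have already been exported from Bugzilla and using sentence-transformers as just one open source embedding option; none of this is a committed design.

```python
# Minimal retrieval sketch: embed Bugzilla summaries, then return the
# entries closest to a free-form question.
# Assumes `pip install sentence-transformers numpy`; the `bugs` list
# stands in for a real export from the Bugzilla DB.
import numpy as np
from sentence_transformers import SentenceTransformer

bugs = [
    {"id": 1, "summary": "kernel panic in ext4 during heavy writeback"},
    {"id": 2, "summary": "segfault in libfoo when parsing malformed input"},
    {"id": 3, "summary": "soft lockup on CPU 3 after resume from suspend"},
]

model = SentenceTransformer("all-MiniLM-L6-v2")
bug_vecs = model.encode([b["summary"] for b in bugs], normalize_embeddings=True)

def ask(question: str, top_k: int = 2):
    """Return (bug id, similarity) pairs for the bugs closest to the question."""
    q_vec = model.encode([question], normalize_embeddings=True)[0]
    scores = bug_vecs @ q_vec  # cosine similarity, since vectors are normalized
    best = np.argsort(scores)[::-1][:top_k]
    return [(bugs[i]["id"], float(scores[i])) for i in best]

print(ask("Gimme hints about this kernel dump!"))
```

A real setup would put a generative model on top of this retrieval step, but even retrieval alone already raises the legal and data-scoping questions listed below.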
Goal for this Hackweek
For this Hackweek, the focus is on triggering a discussion around the following non-exhaustive list of topics:
- What boundaries should be set when considering such an approach (legal, ethical, technological, whatever)?
- How much of the Bugzilla DB can be used to feed ML? (Can we use customers' data? What about partners' data?)
- Find an open source ML solution fitting our needs;
- Find hardware on which the solution could eventually run.
Anyone interested can join the discussion on the open Slack channel #discuss-bugzilla-ai
Resources
[1] https://blog.opensource.org/towards-a-definition-of-open-artificial-intelligence-first-meeting-recap/
This project is part of:
Hack Week 23
Activity
Comments
almost 2 years ago by paolodepa
Preliminary findings: talking to Amartya Chakraborty, who works on the Rancher AI project (https://github.com/rancher/opni), it seems that their framework can be attached to a Bugzilla instance for machine learning, and this will probably be explored in the future.
Similar Projects
Flaky Tests AI Finder for Uyuni and MLM Test Suites by oscar-barrios
Description
Our current Grafana dashboards provide a great overview of test suite health, including a panel for "Top failed tests." However, identifying which of these failures are due to legitimate bugs versus intermittent "flaky tests" is a manual, time-consuming process. These flaky tests erode trust in our test suites and slow down development.
This project aims to build a simple but powerful Python script that automates flaky test detection. The script will directly query our Prometheus instance for the historical data of each failed test, using the jenkins_build_test_case_failure_age metric. It will then format this data and send it to the Gemini API with a carefully crafted prompt, asking it to identify which tests show a flaky pattern.
The final output will be a clean JSON list of the most probable flaky tests, which can then be used to populate a new "Top Flaky Tests" panel in our existing Grafana test suite dashboard.
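As a rough illustration of the collection step, the snippet below pulls failure history straight from the Prometheus HTTP API. The server URL, job name, and time window are placeholders, not the real internal values.

```python
# Sketch: fetch per-test-case failure history from Prometheus.
# Assumes `pip install requests`; PROM_URL is a placeholder for the
# internal Prometheus server.
import requests

PROM_URL = "http://prometheus.example.internal:9090"  # placeholder
QUERY = (
    'jenkins_build_test_case_failure_age'
    '{status=~"FAILED|REGRESSION", jobname="uyuni-master"}[30d]'
)

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=60)
resp.raise_for_status()

# A range selector returns a "matrix": one series per label set, each
# carrying a list of [timestamp, value] samples.
history = {}
for series in resp.json()["data"]["result"]:
    case = series["metric"].get("case", "unknown")
    history.setdefault(case, []).extend(ts for ts, _ in series["values"])
```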
Goals
By the end of Hack Week, we aim to have a single, working Python script that:
- Connects to Prometheus and executes a query to fetch detailed test failure history.
- Processes the raw data into a format suitable for the Gemini API.
- Successfully calls the Gemini API with the data and a clear prompt (see the sketch after this list).
- Parses the AI's response to extract a simple list of flaky tests.
- Saves the list to a JSON file that can be displayed in Grafana.
- Feeds a new panel in our dashboard listing the flaky tests.
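Continuing the Prometheus sketch above, this is roughly what the Gemini call and the JSON output could look like. The model name, prompt wording, and response handling are assumptions; the real script will need its own prompt engineering.

```python
# Sketch: ask Gemini which test cases look flaky and save its answer
# as JSON for Grafana. Assumes `pip install google-generativeai` and a
# GEMINI_API_KEY environment variable.
import json
import os

import google.generativeai as genai

# In the real script this dict comes from the Prometheus step above.
history = {
    "test_login": [1714000000, 1714090000, 1714350000],     # sporadic failures
    "test_checkout": [1714000000, 1714003600, 1714007200],  # failing every run
}

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

prompt = (
    "Below is the failure history of test cases (name -> failure timestamps). "
    "List the cases that look flaky (intermittent rather than consistently "
    "failing). Answer ONLY with a JSON list of case names.\n\n"
    + json.dumps(history, indent=2)
)

response = model.generate_content(prompt)
# Gemini sometimes wraps JSON in a markdown fence; strip it defensively.
raw = response.text.strip().strip("`")
raw = raw.removeprefix("json").strip()
flaky = json.loads(raw)

with open("flaky_tests.json", "w") as f:
    json.dump(flaky, f, indent=2)
```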
Resources
- Jenkins Prometheus Exporter: https://github.com/uyuni-project/jenkins-exporter/
- Data Source: Our internal Prometheus server.
- Key Metric: jenkins_build_test_case_failure_age{jobname, buildid, suite, case, status, failedsince}
- Existing Query for Reference: count by (suite) (max_over_time(jenkins_build_test_case_failure_age{status=~"FAILED|REGRESSION", jobname="$jobname"}[$__range]))
- AI Model: The Google Gemini API.
- Example of how to interact with the Gemini API: https://github.com/srbarrios/FailTale/
- Visualization: Our internal Grafana Dashboard.
- Internal IaC: https://gitlab.suse.de/galaxy/infrastructure/-/tree/master/srv/salt/monitoring