Project Description
From some casual investigation, recent open source image generation AI systems tend to be fairly invasive of the host system when installed directly on it. A container is usually the better option, but it needs special configuration to access the required hardware. I'd like to run one of these in a container that uses the RDNA2 Radeon graphics card in my desktop computer.
The exact container type is to be evaluated, and of course existing solutions will be sought out first.
Goal for this Hackweek
The goal for the Hackweek is to have a suitably optimized container that can be created from scratch with one command and can generate SUSE-related images on the AMD graphics card with its 8 GB of VRAM (which is apparently a bit limited).
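As a rough illustration of the "special configuration" involved, here is a minimal sketch of the usual ROCm device passthrough for a Docker-style runtime. The image name and exact flags are generic placeholders, not necessarily what the linked repository ends up using:

```sh
# Generic ROCm GPU passthrough: expose the kernel driver (/dev/kfd) and the
# render nodes (/dev/dri) to the container. "rocm/pytorch" is a placeholder
# image, not this project's own build.
docker run -it --device=/dev/kfd --device=/dev/dri \
  --group-add video --security-opt seccomp=unconfined \
  rocm/pytorch:latest
```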
Resources
https://github.com/tjyrinki/sd-rocm
Results
See the GitHub link above, the images below, and the blog post at https://timojyrinki.gitlab.io/hugo/post/2023-02-02-stablediffusion-docker/
This project is part of:
Hack Week 22
Activity
Comments
over 2 years ago by tjyrinki_suse
Blog post at https://timojyrinki.gitlab.io/hugo/post/2023-02-02-stablediffusion-docker/ – read more there!
See the git repo for what has been done as part of this project.
Similar Projects
Flaky Tests AI Finder for Uyuni and MLM Test Suites by oscar-barrios
Description
Our current Grafana dashboards provide a great overview of test suite health, including a panel for "Top failed tests." However, identifying which of these failures are due to legitimate bugs versus intermittent "flaky tests" is a manual, time-consuming process. These flaky tests erode trust in our test suites and slow down development.
This project aims to build a simple but powerful Python script that automates flaky test detection. The script will directly query our Prometheus instance for the historical data of each failed test, using the jenkins_build_test_case_failure_age metric. It will then format this data and send it to the Gemini API with a carefully crafted prompt, asking it to identify which tests show a flaky pattern.
The final output will be a clean JSON list of the most probable flaky tests, which can then be used to populate a new "Top Flaky Tests" panel in our existing Grafana test suite dashboard.
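To make the intended flow concrete, here is a minimal sketch assuming the standard Prometheus HTTP API and the google-generativeai Python client; the Prometheus URL, API key handling, model name, time range, and prompt wording are all placeholder assumptions, not the project's actual code:

```python
import json
import requests
import google.generativeai as genai  # assumption: google-generativeai client

PROMETHEUS_URL = "http://prometheus.internal:9090"  # placeholder
QUERY = ('max_over_time(jenkins_build_test_case_failure_age'
         '{status=~"FAILED|REGRESSION", jobname="my-jobname"}[30d])')

# 1. Fetch the failure history of each test case from Prometheus.
resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": QUERY})
resp.raise_for_status()
series = resp.json()["data"]["result"]

# 2. Reduce the raw series to a compact structure suitable for the prompt.
history = [
    {
        "suite": s["metric"].get("suite"),
        "case": s["metric"].get("case"),
        "failed_since_build": s["metric"].get("failedsince"),
        "failure_age": s["value"][1],
    }
    for s in series
]

# 3. Ask Gemini which tests look intermittent rather than permanently broken.
genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption
prompt = (
    "Given this per-test failure history, return a JSON array of the test "
    "cases that look flaky (intermittent failures), as objects with 'suite' "
    "and 'case' keys:\n" + json.dumps(history, indent=2)
)
answer = model.generate_content(prompt)

# 4. Save the list so a Grafana panel can pick it up.
with open("flaky_tests.json", "w") as fh:
    fh.write(answer.text)
```

In practice the model's response would still need validation (it may wrap the JSON in markdown fences), and the fixed 30-day window stands in for the $__range variable used in the Grafana reference query below.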
Goals
By the end of Hack Week, we aim to have a single, working Python script that:
- Connects to Prometheus and executes a query to fetch detailed test failure history.
- Processes the raw data into a format suitable for the Gemini API.
- Successfully calls the Gemini API with the data and a clear prompt.
- Parses the AI's response to extract a simple list of flaky tests.
- Saves the list to a JSON file that can be displayed in Grafana.
- Populates a new panel in our dashboard listing the flaky tests.
Resources
- Jenkins Prometheus Exporter: https://github.com/uyuni-project/jenkins-exporter/
- Data Source: Our internal Prometheus server.
- Key Metric: jenkins_build_test_case_failure_age{jobname, buildid, suite, case, status, failedsince}
- Existing Query for Reference: count by (suite) (max_over_time(jenkins_build_test_case_failure_age{status=~"FAILED|REGRESSION", jobname="$jobname"}[$__range]))
- AI Model: The Google Gemini API.
- Example of how to interact with the Gemini API: https://github.com/srbarrios/FailTale/
- Visualization: Our internal Grafana Dashboard.
- Internal IaC: https://gitlab.suse.de/galaxy/infrastructure/-/tree/master/srv/salt/monitoring