Project Description
This project is planned as a collection of assorted small changes to the documentation files, in order to rid them of clutter in the content, outdated comments, inconsistent style (minor issues), unused parameters, duplications, and so on.
By their nature, these issues are not a problem for the documentation content itself or for the building process; they are a natural outcome of a fast-growing body of content. Therefore, an occasional deep clean-up is both desirable and needed.
These changes are mostly related to the quality and style of the raw files.
The ones that are suitable as a good first issue are listed at the top and are marked with (*).
If you want to contribute, please keep in mind to:
- create an issue in GitHub to indicate what you want to work on, to avoid duplication of work
- make all changes in small increments
- group the changes by the type of change (don't mix the changes that are different in nature)
- PRs mixing too many different types of changes, or touching more than one book, will be rejected
Goals for this Hackweek
- regular deep clean-up
- create an opportunity to contribute to the Uyuni documentation
Issues to work on
1(*) External links list
- create a list of all external links in the documentation (see the sketch below)
- check if they are up-to-date
- define who creates a request for the external link change
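A possible starting point for generating that list (a minimal sketch, assuming the sources are AsciiDoc files under the docs root; the output file name is a placeholder):

    # Sketch: collect external links from .adoc files and check their HTTP status.
    # Assumes AsciiDoc sources under the current directory; adjust paths as needed.
    import pathlib, re, csv
    import requests  # third-party; pip install requests

    URL_RE = re.compile(r'https?://[^\s\[\]<>"]+')

    rows = []
    for adoc in pathlib.Path(".").rglob("*.adoc"):
        for line_no, line in enumerate(adoc.read_text(errors="ignore").splitlines(), 1):
            for url in URL_RE.findall(line):
                url = url.rstrip(".,);")
                try:
                    status = requests.head(url, allow_redirects=True, timeout=10).status_code
                except requests.RequestException:
                    status = "unreachable"
                rows.append((str(adoc), line_no, url, status))

    with open("external-links.csv", "w", newline="") as fh:
        csv.writer(fh).writerows([("file", "line", "url", "status")] + rows)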
2(*) Architecture parameters
- check if all architecture parameters are used; remove unused ones (see the sketch below)
- check for duplicates
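A minimal sketch for the unused-parameter check, assuming the parameters are AsciiDoc attributes defined in one central file (the path below is only a placeholder) and referenced as {attr-name} in the sources:

    # Sketch: flag attributes that are defined but never referenced in the sources.
    # Assumes definitions like ":attr-name: value" in a central attributes file
    # (placeholder path) and usages written as {attr-name} in .adoc files.
    import pathlib, re
    from collections import Counter

    defs_file = pathlib.Path("modules/ROOT/partials/attributes.adoc")  # placeholder path
    defined = set(re.findall(r'^:([\w-]+):', defs_file.read_text(), re.MULTILINE))

    usage = Counter()
    for adoc in pathlib.Path(".").rglob("*.adoc"):
        if adoc == defs_file:
            continue
        for name in re.findall(r'\{([\w-]+)\}', adoc.read_text(errors="ignore")):
            usage[name] += 1

    for name in sorted(defined):
        if usage[name] == 0:
            print(f"unused attribute: {name}")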
3(*) Existing snippets
- review all snippets
- are they all used?
- if a snippet is used only once, decide whether to keep it as a snippet or move its content directly into the relevant file (see the usage-count sketch below)
- tidy up the snippet if possible (remove comments, check line spacing)
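For the usage count, something like the following sketch could help, assuming snippets are separate .adoc files under a snippets directory and are pulled in with include:: directives (both assumptions should be verified against the actual repository layout):

    # Sketch: count how often each snippet file is referenced via include::.
    # Assumes snippets live in a "snippets" directory; adjust the path as needed.
    import pathlib, re
    from collections import Counter

    includes = Counter()
    for adoc in pathlib.Path(".").rglob("*.adoc"):
        text = adoc.read_text(errors="ignore")
        for target in re.findall(r'^include::([^\[]+)\[', text, re.MULTILINE):
            includes[pathlib.Path(target).name] += 1

    for snippet in pathlib.Path(".").rglob("snippets/*.adoc"):
        count = includes.get(snippet.name, 0)
        if count <= 1:
            print(f"{snippet}: included {count} time(s) - candidate for inlining or removal")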
4(*) Quick Start Guide - create as a snippet?
- revisit the option of having a snippet that is overarching all books (currently not possible)
- if possible, see if Quick Start Guide and Install Guide instructions can be chunked up and relevant pages created from the sequences of shared snippets
- if not possible, look into the lowest possible maintenance process of these files (their content overlaps significantly)
5(*) Title capitalization
- Only the first letter needs to be capitalized (master branch only for now)
- Check: Chapters, paragraphs, procedures
- Work in this order: first fix all page titles and navigation, book by book, then move to chapters, paragraphs, and procedures (file by file); a rough checker sketch follows below
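A rough review helper (a sketch only; it assumes AsciiDoc section titles starting with '=' and will flag product names and acronyms as false positives, so the output is a review list, not an automatic fix):

    # Sketch: flag AsciiDoc section titles that are not in sentence case.
    import pathlib, re

    TITLE_RE = re.compile(r'^(=+)\s+(.*)')

    for adoc in pathlib.Path(".").rglob("*.adoc"):
        for line_no, line in enumerate(adoc.read_text(errors="ignore").splitlines(), 1):
            match = TITLE_RE.match(line)
            if not match:
                continue
            words = match.group(2).split()
            # Any capitalized word after the first one is a candidate for review.
            if any(w[0].isupper() for w in words[1:] if w[0].isalpha()):
                print(f"{adoc}:{line_no}: {match.group(2)}")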
6(*) Comments
- check ALL comments, book by book. Most relevant: Client Config Guide, Install and Upgrade Guide, Admin Guide. Use the git blame command to see when they were added; a sketch follows below. (If older than 2 years, they are likely candidates for complete removal.)
- check all FIXME comments. Fix them, or delete them.
- check all comments with the authors' initials: LB, KE, JC, OM
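A sketch of the blame-based age check, run from inside the repository; it assumes AsciiDoc line comments starting with // and uses the committer timestamp from git blame --line-porcelain:

    # Sketch: list AsciiDoc comment lines ("// ...") older than two years.
    import pathlib, subprocess, time

    CUTOFF = time.time() - 2 * 365 * 24 * 3600

    for adoc in pathlib.Path(".").rglob("*.adoc"):
        out = subprocess.run(["git", "blame", "--line-porcelain", str(adoc)],
                             capture_output=True, text=True).stdout
        timestamp, line_no = None, 0
        for raw in out.splitlines():
            if raw.startswith("committer-time "):
                timestamp = int(raw.split()[1])
            elif raw.startswith("\t"):          # the actual source line
                line_no += 1
                content = raw[1:]
                if content.lstrip().startswith("//") and timestamp and timestamp < CUTOFF:
                    print(f"{adoc}:{line_no}: {content.strip()}")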
7 Documentation analysis:
- images and tables: create a list, check whether they are actually necessary or need updating, check for duplicates (a sketch for finding identical images follows below), and see if we can come up with a best practice
- mark which images are version-dependent (and thus have priority for updating), and which are more generic
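For the duplicate check on images, a small sketch that groups files by content hash; the image locations and the extension list are assumptions:

    # Sketch: group image files by content hash to spot byte-identical duplicates.
    import hashlib, pathlib
    from collections import defaultdict

    by_hash = defaultdict(list)
    for img in pathlib.Path(".").rglob("*"):
        if img.is_file() and img.suffix.lower() in {".png", ".jpg", ".jpeg", ".svg", ".gif"}:
            by_hash[hashlib.sha256(img.read_bytes()).hexdigest()].append(img)

    for digest, paths in by_hash.items():
        if len(paths) > 1:
            print("duplicate image content:", ", ".join(str(p) for p in paths))

Near-duplicates (re-exported or resized screenshots) would still need a manual pass; this only catches identical files.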
8 - To be added...
This project is part of: Hack Week 23
Similar Projects
Flaky Tests AI Finder for Uyuni and MLM Test Suites by oscar-barrios
Description
Our current Grafana dashboards provide a great overview of test suite health, including a panel for "Top failed tests." However, identifying which of these failures are due to legitimate bugs versus intermittent "flaky tests" is a manual, time-consuming process. These flaky tests erode trust in our test suites and slow down development.
This project aims to build a simple but powerful Python script that automates flaky test detection. The script will directly query our Prometheus instance for the historical data of each failed test, using the jenkins_build_test_case_failure_age metric. It will then format this data and send it to the Gemini API with a carefully crafted prompt, asking it to identify which tests show a flaky pattern.
The final output will be a clean JSON list of the most probable flaky tests, which can then be used to populate a new "Top Flaky Tests" panel in our existing Grafana test suite dashboard.
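A minimal sketch of that pipeline, assuming the Prometheus HTTP API is reachable and the google-generativeai Python package is used for Gemini; the Prometheus URL, the query, the model name, and the prompt are all placeholders:

    # Sketch of the planned pipeline: query Prometheus, ask Gemini to spot flaky
    # patterns, and write the result to a JSON file for Grafana.
    import json
    import requests                      # pip install requests
    import google.generativeai as genai  # pip install google-generativeai

    PROM_URL = "http://prometheus.example.internal:9090"  # placeholder
    QUERY = 'jenkins_build_test_case_failure_age{status=~"FAILED|REGRESSION"}'  # placeholder

    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=30)
    failures = resp.json()["data"]["result"]

    genai.configure(api_key="YOUR_API_KEY")  # placeholder
    model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption
    prompt = ("Given these test failure records, return a JSON list of test cases "
              "that look flaky (intermittent failures rather than consistent ones):\n"
              + json.dumps(failures))
    answer = model.generate_content(prompt)

    with open("flaky_tests.json", "w") as fh:
        fh.write(answer.text)  # expected to be a JSON list of flaky test names

The real script would add per-build history grouping and a stricter, validated prompt, but the shape of the pipeline stays the same.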
Goals
By the end of Hack Week, we aim to have a single, working Python script that:
- Connects to Prometheus and executes a query to fetch detailed test failure history.
- Processes the raw data into a format suitable for the Gemini API.
- Successfully calls the Gemini API with the data and a clear prompt.
- Parses the AI's response to extract a simple list of flaky tests.
- Saves the list to a JSON file that can be displayed in Grafana.
- Feeds a new panel in our dashboard listing the flaky tests.
Resources
- Jenkins Prometheus Exporter: https://github.com/uyuni-project/jenkins-exporter/
- Data Source: Our internal Prometheus server.
- Key Metric: jenkins_build_test_case_failure_age{jobname, buildid, suite, case, status, failedsince}
- Existing Query for Reference: count by (suite) (max_over_time(jenkins_build_test_case_failure_age{status=~"FAILED|REGRESSION", jobname="$jobname"}[$__range]))
- AI Model: The Google Gemini API.
- Example about how to interact with Gemini API: https://github.com/srbarrios/FailTale/
- Visualization: Our internal Grafana Dashboard.
- Internal IaC: https://gitlab.suse.de/galaxy/infrastructure/-/tree/master/srv/salt/monitoring
Move Uyuni Test Framework from Selenium to Playwright + AI by oscar-barrios
Description
This project aims to migrate the existing Uyuni Test Framework from Selenium to Playwright. The move will improve the stability, speed, and maintainability of our end-to-end tests by leveraging Playwright's modern features. We'll be rewriting the current Selenium code in Ruby to Playwright code in TypeScript, which includes updating the test framework runner, step definitions, and configurations. This is also necessary because we're moving from Cucumber Ruby to CucumberJS.
If you're still curious about the AI in the title, it was just a way to grab your attention. Thanks for your understanding.
Goals
- Migrate Core tests including Onboarding of clients
- Improve test reliability: Measure and confirm a significant reduction of flakiness.
- Implement a robust framework: Establish a well-structured and reusable Playwright test framework using CucumberJS.
Resources
- Existing Uyuni Test Framework (Cucumber Ruby + Capybara + Selenium)
- My Template for CucumberJS + Playwright in TypeScript
- Started Hackweek Project
Advent of Code: The Diaries by amanzini
Description
It was the Night Before Compile Time ...
Hackweek 25 (December 1-5) perfectly coincides with the first five days of Advent of Code 2025. This project will leverage this overlap to participate in the event in real-time.
To add a layer of challenge and exploration (in the true spirit of Hackweek), the puzzles will be solved using a non-mainstream, modern language like D, Crystal, Gleam or Zig.
The primary intent is not simply to solve the puzzles, but to exercise result sharing and documentation. I'd create a public-facing repository documenting the process. This involves treating each day's puzzle as a mini-project: solving it, then documenting the solution with detailed write-ups, analysis of the language's performance and ergonomics, and visualizations.
[ASCII art of a Christmas tree; the original can be found at https://asciiart.website/art/1831]
Goals
Code, Docs, and Memes: An AoC Story
- Have fun!
- Involve more people, play together
- Solve Days 1-5: Successfully solve both parts of the Advent of Code 2025 puzzles for Days 1-5 using the chosen non-mainstream language.
- Daily Documentation & Language Review: Publish a detailed write-up for each day. This documentation will include the solution analysis, the chosen algorithm, and specific commentary on the language's ergonomics, performance, and standard library for the given task.