The Agama project is a multi-language Linux installer that leverages the distinct strengths of several key technologies:
- Rust: Used for the back-end services and the core HTTP API, providing performance and safety.
- TypeScript (React/PatternFly): Powers the modern web user interface (UI), ensuring a consistent and responsive user experience.
- Ruby: Integrates existing, robust YaST libraries (e.g., `yast-storage-ng`) to reuse established functionality.
The Problem: Testing Overhead
Developing and maintaining code across these three languages requires a significant, tedious effort in writing, reviewing, and updating unit tests for each component. This high cost of testing is a drain on developer resources and can slow down the project's evolution.
The Solution: AI-Driven Automation
This project aims to eliminate the manual overhead of unit testing by exploring and integrating AI-driven code generation tools. We will investigate how AI can:
- Automatically generate new unit tests as code is developed.
- Intelligently correct and update existing unit tests when the application code changes.
By automating this crucial but monotonous task, we can free developers to focus on feature implementation and significantly improve the speed and maintainability of the Agama codebase.
Goals
- Proof of Concept: Successfully integrate and demonstrate an authorized AI tool (e.g., `gemini-cli`) to automatically generate unit tests.
- Workflow Integration: Define and document a new unit test automation workflow that seamlessly integrates the selected AI tool into the existing Agama development pipeline.
- Knowledge Sharing: Establish a set of best practices for using AI in code generation, sharing the learned expertise with the broader team.
Contribution & Resources
We are seeking contributors interested in AI-powered development and improving developer efficiency. Whether you have previous experience with code generation tools or are eager to learn, your participation is highly valuable.
If you want to dive deep into AI for software quality, please reach out and join the effort!
- Authorized AI Tools: Tools supported by SUSE (e.g., `gemini-cli`)
- Focus Areas: Rust, TypeScript, and Ruby components within the Agama project.
This project is part of: Hack Week 25
Activity
Comments
25 days ago by ancorgs
Time for some reporting.
Both @joseivanlopez and I have been doing experiments with AI and the unit tests of Agama's web interface (JavaScript + React).
Probably @joseivanlopez will write in more detail about his experience. But this report is about some common experiments we both did using different AI solutions. Let's start with some context.
There is a branch `api-v2` in the Agama repository that dramatically changes how the web UI interacts with the backend. The code already works, but the JavaScript unit tests are not adapted accordingly yet. The main idea was to simplify the process of adapting those unit tests with the help of AI.

@joseivanlopez did it using the company-provided Gemini; this pull request shows some partial results. Gemini was able to adapt several tests, although it would be more accurate to say that it rewrote them. It feels like it ignored the current unit tests and wrote new ones from scratch. Those generated unit tests are indeed useful; they cover many scenarios and look quite sane, although some of them are not very semantic.
Gemini was not blazing fast (it took 10+ minutes to adapt a single test) and sometimes it struggled to find its way (felt like a pure trial and error process). But the outcome is certainly useful. The experiment can be labeled as a relative success.
But all that applies only to the `gemini-pro` model. Sadly, it looks like the SUSE-provided license grants a very limited number of tokens to be spent on `gemini-pro`. After spending those on adapting 4 or 5 unit tests, everything falls back to the useless `gemini-flash` model. That means only a few tests per developer can be adapted every day.

In parallel I ran a very similar experiment, but using Claude.ai, an AI solution that is not endorsed by SUSE, so we cannot use it for production code. I used the completely free version, which only provides access to a web console (so I had to upload many source-code files manually) and only allows a few queries to their intermediate model (using it for longer or accessing the advanced model would have implied a fee).
Even with all those limitations, I feel the experiment was clearly more successful than the Gemini one. You can see some partial results in this pull request.
When asked to adapt existing unit tests, Claude made all the necessary changes to get them running again, but without rewriting everything. Sometimes it added a missing scenario, but it respected the approach of the existing tests and scenarios. When asked to write a new test from scratch, it apparently produced a quite comprehensive and semantic unit test. Claude really felt like a tool that could potentially save a lot of manual work in a reliable way.
Compared to Gemini, Claude was way faster and straight to the point. It was able to produce good results in seconds, without really having access to my development environment. Gemini seemed to work a bit more by trial and error, with several iterations of adjusting things, running the tests, and adjusting again.
25 days ago by joseivanlopez
AI Experiment Report: Gemini-CLI for Agama Unit Test Automation
This report summarizes the results of an experiment using the `gemini-cli` tool (powered by the Gemini Pro model) to automatically update outdated React unit tests in the Agama UI codebase.
Scenario & Goal
The Agama UI code was adapted to use a new HTTP API, leaving existing unit tests broken and outdated. The goal was to use `gemini-cli` to automatically fix and adapt these broken React unit tests.
- Tool: `gemini-cli`
- Model: Gemini Pro
- Example Prompt: "Fix tests from @src/components/storage/PartitionPage.test.tsx"
Key Results and Observations
Success and Capability
- High Adaptation Rate: The AI demonstrated its capability to adapt a significant number of existing React tests to the new API structure and component logic. (See results: https://github.com/agama-project/agama/pull/2927)
- Actionable Output: The output was often directly usable, requiring minimal manual cleanup or correction.
Performance and Efficiency Challenges
- Speed/Time: The process was very slow. Adapting a single test suite typically took around 15 minutes. This time investment sometimes approaches or exceeds the time a developer might take to fix the tests manually, impacting developer workflow adoption.
- Reliability: The process was unstable and sometimes stalled completely. This requires developer intervention (canceling the request and resubmitting) to complete the task.
- Strategy: The model appeared to operate in a "try/error" mode (iterative guessing based on error messages) rather than demonstrating a deep comprehension of the code. This trial-and-error approach contributes directly to the poor performance and high latency observed.
Conclusion
Based on the experiment's results, while the Gemini Pro model currently exhibits significant performance issues (slowness and stalling) that make large-scale, automated fixes impractical, it demonstrates core capabilities that point to its potential value in specific scenarios within the Agama project.
Creating Tests From Scratch
Gemini is highly useful for generating the initial boilerplate and structure for new unit tests. A developer no longer needs to spend time setting up mocks, imports, and basic assertion structures for a new component: the AI can quickly create a functional test file based solely on the component's public interface. This dramatically lowers the barrier to writing new tests and speeds up the initial development phase, turning test creation from a chore into a rapid scaffolding process.
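To make that concrete, here is a minimal sketch of the kind of scaffold meant here: a Jest + React Testing Library test for a hypothetical `ProductSelector` component. The component, the `~/api/products` module, its `fetchProducts` function and the props are all invented for this example (and it assumes the usual `@testing-library/jest-dom` matchers are set up); it is not the tool's literal output nor real Agama code.

```tsx
// Illustrative scaffold only: hypothetical component, module path and props.
import React from "react";
import { render, screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import ProductSelector from "~/components/product/ProductSelector";

// Mock the HTTP layer so the test never talks to a real backend.
jest.mock("~/api/products", () => ({
  fetchProducts: jest.fn().mockResolvedValue([
    { id: "tumbleweed", name: "openSUSE Tumbleweed" },
    { id: "leap", name: "openSUSE Leap" },
  ]),
}));

describe("ProductSelector", () => {
  it("lists the available products", async () => {
    render(<ProductSelector />);
    expect(await screen.findByText("openSUSE Tumbleweed")).toBeInTheDocument();
    expect(screen.getByText("openSUSE Leap")).toBeInTheDocument();
  });

  it("notifies the parent when a product is selected", async () => {
    const user = userEvent.setup();
    const onSelect = jest.fn();
    render(<ProductSelector onSelect={onSelect} />);
    await user.click(await screen.findByText("openSUSE Leap"));
    expect(onSelect).toHaveBeenCalledWith("leap");
  });
});
```

Even when some generated queries need tweaking, starting from a file like this is much faster than writing the mocks and setup by hand.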
Progressive and Incremental Adaptation
The AI is valuable for progressive adaptations as code evolves. Instead of waiting for a massive refactor that breaks hundreds of tests (creating a daunting backlog), a developer should use the AI immediately after making small, targeted changes to a component's internal logic, API, or prop structure. This strategy ensures unit tests are fixed incrementally, preventing the large backlog of broken tests that often results from major refactoring efforts.
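As a purely hypothetical illustration of the granularity that works well: suppose a refactor turns a component's `device` prop from a plain string into an object. Prompting the AI right after that change usually means a one-line fix per test. All names and the prop shape below are invented for the example.

```tsx
// Hypothetical before/after: the `device` prop changed from a plain string to an
// object during a refactor, so only the render call needs updating.
import React from "react";
import { render, screen } from "@testing-library/react";
import StorageDeviceField from "~/components/storage/StorageDeviceField";

it("shows the target device", () => {
  // Before the refactor this was: render(<StorageDeviceField device="/dev/sda" />);
  render(<StorageDeviceField device={{ name: "/dev/sda", size: "512 GiB" }} />);
  expect(screen.getByText("/dev/sda")).toBeInTheDocument();
});
```

Keeping each request this small also helps with the token budget discussed in the next section.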
Resource Constraint: Token Limits
A critical limiting factor impacting the viability of extensive AI usage is the limited token quota provided by SUSE for the Gemini Pro model. Due to the model's observed "try/error" strategy and the resulting high number of queries needed to complete a task, the tokens are consumed rapidly, typically becoming exhausted after only about two hours of intensive usage.
This severe constraint means that even if the performance were better, continuous, large-scale automation is not possible under the current resource allocation.
In summary, given the constraints of high latency and limited token availability, we must pivot our strategy. We should shift the focus from using the AI as a brute-force bug-fixing tool to using it as a scaffolding and incremental maintenance assistant.
24 days ago by joseivanlopez
I also experimented with other command-line interface tools, specifically Cline. The tool performed exceptionally well, offering the key advantage of enabling concurrent execution of different AI models. This allows for testing free models available through platforms like Ollama (e.g., gpt-oss or deepseek-r1). I utilized it successfully with the claude-sonnet model. However, the severe limitations of the free usage tier ultimately prevented me from conducting any meaningful or conclusive tests.
23 days ago by ancorgs
I ran an extra experiment, not about unit tests but about code refactoring. TBH, I haven't had time yet to analyze the result, but some of the unit tests are still green (not all of them). See this pull request.
Similar Projects
Build a terminal user-interface (TUI) for Agama by IGonzalezSosa
Description
Officially, Agama offers two different user interfaces. On the one hand, we have the web-based interface, which is the one you see when you run the installation media. On the other hand, we have a command-line interface. In both cases, you can use them from a remote system, either through a browser or the agama CLI.
We would expect most of the cases to be covered by this approach. However, if you cannot use the web-based interface and, for some reason, you cannot access the system through the network, your only option is to use the CLI. This interface offers a mechanism to modify Agama's configuration using an editor (vim, by default), but you might want a more user-friendly way.
Goals
The main goal of this project is to build a minimal terminal user interface for Agama. This interface will allow the user to install the system by providing just a few settings (selecting a product, a storage device and a user password). Then it should report the installation progress.
Resources
- https://agama-project.github.io/
- https://ratatui.rs/
Conclusions
We have summarized our conclusions in a pull request. It includes screenshots ;-) We did not implement all the features we wanted, but we learned a lot during the process. We know that, if needed, we could write a TUI for Agama, and we have an idea about how to build it. Good enough.
Bring up Agama based tests for openSUSE Tumbleweed by szarate
Description
Agama has been around for some time already, and we have some tests for it on Tumbleweed; however, they are only in the development job group and are too few to be helpful in assessing the quality of a build.
This project aims at enabling and creating new test suites for the agama flavor, using the already existing DVD and NET flavors as starting points.
Goals
- Introduce tests based on the Agama flavor in the main Tumbleweed job group
- Create Tumbleweed YAML schedules for the agama installer and its own Jsonnet profile (the ones being used now are reused from Leap)
- Fan out tests that have long runtimes (i.e., tackle this ticket)
- Reduce redundancy in tests
Resources
Bugzilla goes AI - Phase 1 by nwalter
Description
This project, Bugzilla goes AI, aims to boost developer productivity by creating an autonomous AI bug agent during Hackweek. The primary goal is to reduce the time employees spend triaging bugs by integrating Ollama to summarize issues, recommend next steps, and push focused daily reports to a Web Interface.
Goals
To reduce employee time spent on Bugzilla by implementing an AI tool that triages and summarizes bug reports, providing actionable recommendations to the team via a web interface.
Project Charter
Description
Project Achievements during Hackweek
In this file you can read about what we achieved during Hackweek.
Local AI assistant with optional integrations and mobile companion by livdywan
Description
Set up a local AI assistant for research, brainstorming and proofreading. Look into SurfSense, Open WebUI and possibly alternatives. Explore integration with services like openQA. There should be no cloud dependencies. Mobile phone support or an additional companion app would be a bonus. The goal is not to develop everything from scratch.
User Story
- Allison Average wants a one-click local AI assistant on their openSUSE laptop.
- Ash Awesome wants AI on their phone without an expensive subscription.
Goals
- Evaluate a local SurfSense setup for day to day productivity
- Test opencode for vibe coding and tool calling
Timeline
Day 1
- Took a look at SurfSense and started setting up a local instance.
- Unfortunately the container setup did not work well. Though this was a great opportunity to learn some new podman commands and refresh my memory on how to recover a corrupted btrfs filesystem.
Day 2
- Due to its sheer size and complexity, SurfSense seems to have triggered btrfs fragmentation. Naturally this was not visible in any podman-related errors or in the journal. So this took up much of my second day.
Day 3
- Trying out opencode with Qwen3-Coder and Qwen2.5-Coder.
Day 4
- Context size is a thing, and models are not equally usable for vibe coding.
- Through arduous browsing for ollama models I did find some like `myaniu/qwen2.5-1m:7b` with a 1M context, but even then it is not obvious if they are meant for tool calls.
Day 5
- Whilst trying to make opencode usable I discovered ramalama, which worked instantly and very well.
Outcomes
surfsense
I could not easily set this up completely, maybe in part due to my filesystem issues. I was expecting this to be less of an effort.
opencode
Installing opencode and ollama in my distrobox container along with the following configs worked for me.
When preparing a new project from scratch it is a good idea to start out with a template.
opencode.json
``` {
Exploring Modern AI Trends and Kubernetes-Based AI Infrastructure by jluo
Description
Build a solid understanding of the current landscape of Artificial Intelligence and how modern cloud-native technologies—especially Kubernetes—support AI workloads.
Goals
Use Gemini Learning Mode to guide the exploration, surface relevant concepts, and structure the learning journey:
- Gain insight into the latest AI trends, tools, and architectural concepts.
- Understand how Kubernetes and related cloud-native technologies are used in the AI ecosystem (model training, deployment, orchestration, MLOps).
Resources
Red Hat AI Topic Articles
- https://www.redhat.com/en/topics/ai
Kubeflow Documentation
- https://www.kubeflow.org/docs/
Q4 2025 CNCF Technology Landscape Radar report:
- https://www.cncf.io/announcements/2025/11/11/cncf-and-slashdata-report-finds-leading-ai-tools-gaining-adoption-in-cloud-native-ecosystems/
- https://www.cncf.io/wp-content/uploads/2025/11/cncfreporttechradar_111025a.pdf
Agent-to-Agent (A2A) Protocol
- https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/
Song Search with CLAP by gcolangiuli
Description
Contrastive Language-Audio Pretraining (CLAP) is an open-source library that enables the training of a neural network on both audio and text descriptions, making it possible to search for audio using a text input. Several pre-trained models for song search are already available on huggingface.
Goals
Evaluate how CLAP can be used for song searching and determine which types of queries yield the best results by developing a Minimum Viable Product (MVP) in Python. Based on the results of this MVP, future steps could include:
- Music Tagging;
- Free text search;
- Integration with an LLM (for example, with MCP or the OpenAI API) for music suggestions based on your own library.
The code for this project will be entirely written using AI to better explore and demonstrate AI capabilities.
Result
In this MVP we implemented:
- Async song analysis with the CLAP model
- Free text search of the songs
- Similar song search based on vector representation (see the sketch after this list)
- Containerised version with web interface
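For context on the similarity search (the MVP itself is written in Python): it boils down to comparing the CLAP embedding vectors of songs, typically with cosine similarity. A minimal, illustrative sketch with made-up data structures:

```typescript
// Minimal sketch of similarity search over precomputed embeddings (made-up data shapes).
type Song = { title: string; embedding: number[] };

// Cosine similarity between two embedding vectors of the same length.
function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Return the songs whose embeddings are closest to the query embedding.
function mostSimilar(query: number[], library: Song[], topN = 3): Song[] {
  return [...library]
    .sort((a, b) => cosineSimilarity(query, b.embedding) - cosineSimilarity(query, a.embedding))
    .slice(0, topN);
}
```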
We also documented what went well and what can be improved in the use of AI.
You can have a look at the result here:
Future work could focus on performance improvements and on the stability of the analysis.
References
- CLAP: The main model being researched;
- huggingface: Pre-trained models for CLAP;
- Free Music Archive: Creative Commons songs that can be used for testing;
Update M2Crypto by mcepl
There are a couple of projects I work on which need my attention and putting into shape:
Goal for this Hackweek
- Put M2Crypto into better shape (most issues closed, all pull requests processed)
- Have more fun learning jujutsu
- Play more with Gemini and see how much it helps (or not).
- Perhaps, also (just slightly related), help to fix vis to work with LuaJIT, particularly to make vis-lspc work.
RMT.rs: High-Performance Registration Path for RMT using Rust by gbasso
Description
The SUSE Repository Mirroring Tool (RMT) is a critical component for managing software updates and subscriptions, especially for our Public Cloud Team (PCT). In a cloud environment, hundreds or even thousands of new SUSE instances (VPS/EC2) can be provisioned simultaneously. Each new instance attempts to register against an RMT server, creating a "thundering herd" scenario.
We have observed that the current RMT server, written in Ruby, faces performance issues under this high-concurrency registration load. This can lead to request overhead, slow registration times, and outright registration failures, delaying the readiness of new cloud instances.
This Hackweek project aims to explore a solution by re-implementing the performance-critical registration path in Rust. The goal is to leverage Rust's high performance, memory safety, and first-class concurrency handling to create an alternative registration endpoint that is fast, reliable, and can gracefully manage massive, simultaneous request spikes.
The new Rust module will be integrated into the existing RMT Ruby application, allowing us to directly compare the performance of both implementations.
Goals
The primary objective is to build and benchmark a high-performance Rust-based alternative for the RMT server registration endpoint.
Key goals for the week:
- Analyze & Identify: Dive into the
SUSE/rmtRuby codebase to identify and map out the exact critical path for server registration (e.g., controllers, services, database interactions). - Develop in Rust: Implement a functionally equivalent version of this registration logic in Rust.
- Integrate: Explore and implement a method for Ruby/Rust integration to "hot-wire" the new Rust module into the RMT application. This may involve using FFI, or libraries like
rb-sysormagnus. - Benchmark: Create a benchmarking script (e.g., using
k6,ab, or a custom tool) that simulates the high-concurrency registration load from thousands of clients. - Compare & Present: Conduct a comparative performance analysis (requests per second, latency, success/error rates, CPU/memory usage) between the original Ruby path and the new Rust path. The deliverable will be this data and a summary of the findings.
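As an illustration of the benchmarking step, here is a minimal `k6` sketch of the intended load pattern. The URL, endpoint path and payload are placeholders, not the real RMT registration API.

```typescript
// k6 load-test sketch: hundreds of virtual clients registering at once.
// URL, path and payload are placeholders for the real RMT registration endpoint.
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  vus: 500,          // simulated concurrent clients
  duration: '1m',    // sustained registration spike
};

export default function () {
  const res = http.post(
    'https://rmt.example.com/api/register',   // placeholder endpoint
    JSON.stringify({ hostname: 'hackweek-client' }),
    { headers: { 'Content-Type': 'application/json' } },
  );
  check(res, { 'registration accepted': (r) => r.status === 200 || r.status === 201 });
}
```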
Resources
- RMT Source Code (Ruby): https://github.com/SUSE/rmt
- RMT Documentation: https://documentation.suse.com/sles/15-SP7/html/SLES-all/book-rmt.html
- Tooling & Stacks:
  - RMT/Ruby development environment (for running the base RMT)
  - Rust development environment (`rustup`, `cargo`)
- Potential Integration Libraries:
  - rb-sys: https://github.com/oxidize-rb/rb-sys
  - Magnus: https://github.com/matsadler/magnus
- Benchmarking Tools:
  - `k6` (https://k6.io/)
  - `ab` (ApacheBench)
Mail client with mailing list workflow support in Rust by acervesato
Description
To create a mail user interface using the Rust programming language, supporting the mailing-list patch workflow. I know aerc is already there, but I would like to create something simpler, without integrated protocols: just a plain user interface that uses some crates to read and create emails, which are fetched and sent via external tools.
I already know Rust, but not the async support, which is needed in this case in order to handle events inside the mail folder and to send notifications.
Goals
- simple user interface in the style of `aerc`, with some vim keybindings for motions and search
- automatic run of external tools (like `mbsync`) for checking emails
- automatic run of commands for notifications
- apply patch sets from the mailing list
- tree-sitter support with styles
Resources
- ratatui: user interface (https://ratatui.rs/)
- notify: folder watcher (https://docs.rs/notify/latest/notify/)
- mail-parser: parser for emails (https://crates.io/crates/mail-parser)
- mail-builder: create emails in proper format (https://docs.rs/mail-builder/latest/mail_builder/)
- gitpatch: ML support (https://crates.io/crates/gitpatch)
- tree-sitter-rust: support for mail format (https://crates.io/crates/tree-sitter)
Exploring Rust's potential: from basics to security by sferracci
Description
This project aims to conduct a focused investigation and practical application of the Rust programming language, with a specific emphasis on its security model. A key component will be identifying and understanding the most common vulnerabilities that can be found in Rust code.
Goals
Achieve a beginner/intermediate level of proficiency in writing Rust code. This will be measured by trying to solve LeetCode problems focusing on common data structures and algorithms. Study Rust vulnerabilities and learn best practices to avoid them.
Resources
Rust book: https://doc.rust-lang.org/book/
Learn how to use the Relm4 Rust GUI crate by xiaoguang_wang
Relm4 is based on gtk4-rs and compatible with libadwaita. The gtk4-rs crate provides all the tools necessary to develop applications. Building on this foundation, Relm4 makes developing more idiomatic, simpler, and faster.
https://github.com/Relm4/Relm4
Learn a bit of embedded programming with Rust in a micro:bit v2 by aplanas
Description
micro:bit is a small single-board computer with an ARM Cortex-M4 with the FPU extension, a very constrained amount of memory, and a bunch of sensors and LEDs.
The board is very well documented, with schematics and code for all the features available, so it is an excellent platform for learning embedded programming.
Rust is a systems programming language that can generate ARM code, and has crates (libraries) to access the micro:bit hardware. There is plenty of documentation about how to make small programs that will run on the micro:bit.
Goals
Start learning about embedded programming in Rust, and maybe write some code for the small KS4036F robot car from Keyestudio.
Resources
- micro:bit
- KS4036F
- microbit technical documentation
- schematic
- impl Rust for micro:bit
- Rust Embedded MB2 Discovery Book
- nRF-HAL
- nRF Microbit-v2 BSP (blocking)
- knurling-rs
- C++ microbit codal
- microbit-bsp for Embassy
- Embassy
Diary
Day 1
- Start reading https://mb2.implrust.com/abstraction-layers.html
- Prepare the dev environment (cross compiler, probe-rs)
- Flash first code in the board (blinky led)
- Checking differences between BSP and HAL
- Compile and install a more complex example, with stack protection
- Reading about the simplicity of xtask, as alias for workspace execution
- Reading the CPP code of the official micro:bit libraries. They have a font!
Day 2
- There are multiple BSPs for the micro:bit. One uses async code for non-blocking operations
- Download and study a bit the API for microbit-v2, the nRF official crate
- Take a look at the KS4036F programming; it seems that the communication is multiplexed via I2C
- The motor speed can be selected via PWM (pulse-width modulation): power it for longer (higher duty cycle) and the speed increases
- Scrolling some text
- Debug by printing! defmt is a crate that can be used with probe-rs to emit logs
- Start reading input from the board: buttons
- The logo can be touched and detected as a floating point value
Day 3
- A bit confused how to read the float value from a pin
Move Uyuni Test Framework from Selenium to Playwright + AI by oscar-barrios

Description
This project aims to migrate the existing Uyuni Test Framework from Selenium to Playwright. The move will improve the stability, speed, and maintainability of our end-to-end tests by leveraging Playwright's modern features. We'll be rewriting the current Selenium code in Ruby to Playwright code in TypeScript, which includes updating the test framework runner, step definitions, and configurations. This is also necessary because we're moving from Cucumber Ruby to CucumberJS.
If you're still curious about the AI in the title, it was just a way to grab your attention. Thanks for your understanding.
Nah, let's be honest
AI helped a lot to vibe code a good part of the Ruby methods of the test framework, moving them to TypeScript, along with the migration from Capybara to Playwright. I've been using Cline as a plugin for the WebStorm IDE, with the Gemini API behind it.
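To give a flavor of the kind of translation involved (a hypothetical sketch, not actual Uyuni code): a Capybara step body such as `fill_in 'Hostname', with: hostname` becomes a CucumberJS step definition driving Playwright, assuming a custom World object that exposes a Playwright `page`:

```typescript
// Hypothetical CucumberJS + Playwright step definitions replacing Capybara steps.
// Assumes a custom World created in hooks exposes `this.page` (a Playwright Page).
import { When, Then } from '@cucumber/cucumber';

When('I enter {string} as the hostname', async function (this: any, hostname: string) {
  await this.page.getByLabel('Hostname').fill(hostname);
});

Then('I should see the text {string}', async function (this: any, text: string) {
  // Playwright's auto-waiting locator API replaces Capybara's implicit waits.
  await this.page.getByText(text).waitFor({ state: 'visible' });
});
```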
Goals
- Migrate Core tests including Onboarding of clients
- Improve test reliability: Measure and confirm a significant reduction of flakiness.
- Implement a robust framework: Establish a well-structured and reusable Playwright test framework using CucumberJS.
Resources
- Existing Uyuni Test Framework (Cucumber Ruby + Capybara + Selenium)
- My Template for CucumberJS + Playwright in TypeScript
- Started Hackweek Project