The Agama project is a multi-language Linux installer that leverages the distinct strengths of several key technologies:

  • Rust: Used for the back-end services and the core HTTP API, providing performance and safety.
  • TypeScript (React/PatternFly): Powers the modern web user interface (UI), ensuring a consistent and responsive user experience.
  • Ruby: Integrates existing, robust YaST libraries (e.g., yast-storage-ng) to reuse established functionality.

The Problem: Testing Overhead

Developing and maintaining code across these three languages requires a significant, tedious effort in writing, reviewing, and updating unit tests for each component. This high cost of testing is a drain on developer resources and can slow down the project's evolution.

The Solution: AI-Driven Automation

This project aims to eliminate the manual overhead of unit testing by exploring and integrating AI-driven code generation tools. We will investigate how AI can:

  1. Automatically generate new unit tests as code is developed.
  2. Intelligently correct and update existing unit tests when the application code changes.

By automating this crucial but monotonous task, we can free developers to focus on feature implementation and significantly improve the speed and maintainability of the Agama codebase.

Goals

  • Proof of Concept: Successfully integrate and demonstrate an authorized AI tool (e.g., gemini-cli) to automatically generate unit tests.
  • Workflow Integration: Define and document a new unit test automation workflow that seamlessly integrates the selected AI tool into the existing Agama development pipeline.
  • Knowledge Sharing: Establish a set of best practices for using AI in code generation, sharing the learned expertise with the broader team.

Contribution & Resources

We are seeking contributors interested in AI-powered development and improving developer efficiency. Whether you have previous experience with code generation tools or are eager to learn, your participation is highly valuable.

If you want to dive deep into AI for software quality, please reach out and join the effort!

  • Authorized AI Tools: Tools supported by SUSE (e.g., gemini-cli)
  • Focus Areas: Rust, TypeScript, and Ruby components within the Agama project.

Looking for hackers with the skills:

agama ai rust typescript react

This project is part of:

Hack Week 25

Comments

    • ancorgs (7 days ago)

      Time for some reporting.

      Both @joseivanlopez and I have been experimenting with AI and the unit tests of Agama's web interface (JavaScript + React).

      @joseivanlopez will probably write about his experience in more detail, but this report is about some common experiments we both did using different AI solutions. Let's start with some context.

      There is a branch, api-v2, in the Agama repository that dramatically changes how the web UI interacts with the backend. The code already works, but the JavaScript unit tests have not been adapted accordingly yet. The main idea was to simplify the process of adapting those unit tests with the help of AI.

      @joseivanlopez did it using the company-provided Gemini; this pull request shows some partial results. Gemini was able to adapt several tests, although it would be more accurate to say that it rewrote them: it feels like it ignored the current unit tests and wrote new ones from scratch. Those generated unit tests are indeed useful, they cover many scenarios and look quite sane, although some of them are not very semantic.

      Gemini was not blazing fast (it took 10+ minutes to adapt a single test) and sometimes it struggled to find its way (it felt like a pure trial-and-error process). But the outcome is certainly useful. The experiment can be labeled a relative success.

      But all that applies only to the gemini-pro model. Sadly, it looks like the SUSE-provided license grants a very limited number of tokens to be spent on gemini-pro. After spending those on adapting 4 or 5 unit tests, everything falls back to the useless gemini-flash model. That means only a few tests per developer can be adapted every day.

      In parallel, I ran a very similar experiment but using Claude.ai, an AI solution that is not endorsed by SUSE, so we cannot use it for production code. I used the completely free version, which only provides access to a web console (so I had to upload many source-code files manually) and only allows a few queries to their intermediate model (using it for longer or accessing the advanced model would have implied a fee).

      Even with all those limitations, I feel the experiment was clearly more successful than the Gemini one. You can see some partial results in this pull request.

      When asked to adapt existing unit tests, Claude really did all the necessary changes to get them running again, but without rewriting everything. Sometimes it added a missing scenario, but it respected the approach of the existing tests and scenarios. When asked to write a new test from scratch, it apparently produced a quite comprehensive and semantic unit test. Claude really felt like a tool that could potentially save a lot of manual work in a reliable way.

      Compared to Gemini, Claude was way faster and straight to the point. It was able to produce good results in seconds without really having access to my development environment. Gemini seemed to work a bit more by trial and error, with several iterations of adjusting things, running the tests, and adjusting again.

    • joseivanlopez (6 days ago)

      AI Experiment Report: Gemini-CLI for Agama Unit Test Automation

      This report summarizes the results of an experiment using the gemini-cli tool (powered by the Gemini Pro model) to automatically update outdated React unit tests in the Agama UI codebase.

      Scenario & Goal

      The Agama UI code was adapted to use a new HTTP API, leaving existing unit tests broken and outdated. The goal was to use gemini-cli to automatically fix and adapt these broken React unit tests.

      • Tool: gemini-cli
      • Model: Gemini Pro
      • Example Prompt: "Fix tests from @src/components/storage/PartitionPage.test.tsx"

      Key Results and Observations

      Success and Capability

      • High Adaptation Rate: The AI demonstrated its capability to adapt a significant number of existing React tests to the new API structure and component logic. (See results: https://github.com/agama-project/agama/pull/2927)
      • Actionable Output: The output was often directly usable, requiring minimal manual cleanup or correction.

      Performance and Efficiency Challenges

      • Speed/Time: The process was very slow. Adapting a single test suite typically took around 15 minutes. This time investment sometimes approaches or exceeds the time a developer would take to fix the tests manually, which hurts adoption in the developer workflow.
      • Reliability: The process was unstable and sometimes stalled completely, requiring developer intervention (canceling the request and resubmitting) to complete the task.
      • Strategy: The model appeared to operate in a trial-and-error mode (iterative guessing based on error messages) rather than demonstrating a deep comprehension of the code. This approach contributes directly to the poor performance and high latency observed.

      Conclusion

      Based on the experiment's results, while the Gemini Pro model currently exhibits significant performance issues (slowness and stalling) that make large-scale, automated fixes impractical, it demonstrates core capabilities that point to its potential value in specific scenarios within the Agama project.

      Creating Tests From Scratch

      Gemini is highly useful for generating the initial boilerplate and structure for new unit tests. A developer no longer needs to spend time setting up mocks, imports, and basic assertion structures for a new component: the AI can quickly create a functional test file based solely on the component's public interface. This dramatically lowers the barrier to writing new tests and speeds up the initial development phase, turning test creation from a chore into a rapid scaffolding process. A minimal sketch of such a scaffold is shown below.
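
      As a rough illustration, here is a minimal sketch of the kind of test file this scaffolding produces. The component name, module paths, and hook are hypothetical (not Agama's actual API), and it assumes the usual Jest + React Testing Library setup for *.test.tsx files:

      ```tsx
      // Hypothetical scaffold: PartitionPage and ~/queries/storage are illustrative names.
      // Assumes Jest with @testing-library/react and @testing-library/jest-dom configured.
      import React from "react";
      import { render, screen } from "@testing-library/react";
      import { PartitionPage } from "./PartitionPage";

      // Mock the data-fetching hook so the test runs without a backend.
      jest.mock("~/queries/storage", () => ({
        useDevices: () => [{ name: "/dev/sda", size: "500 GiB" }],
      }));

      describe("PartitionPage", () => {
        it("lists the available devices", () => {
          render(<PartitionPage />);
          expect(screen.getByText("/dev/sda")).toBeInTheDocument();
        });
      });
      ```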

      Progressive and Incremental Adaptation

      The AI is valuable for progressive adaptations as code evolves. Instead of waiting for a massive refactor that breaks hundreds of tests (creating a daunting backlog), a developer should use the AI immediately after making small, targeted changes to a component's internal logic, API, or prop structure. This strategy ensures unit tests are fixed incrementally, preventing the large backlog of broken tests that often results from major refactoring efforts.

      Resource Constraint: Token Limits

      A critical limiting factor impacting the viability of extensive AI usage is the limited token quota provided by SUSE for the Gemini Pro model. Due to the model's observed trial-and-error strategy, and the resulting high number of queries needed to complete a task, the tokens are consumed rapidly, typically becoming exhausted after only about two hours of intensive usage.

      This severe constraint means that even if the performance were better, continuous, large-scale automation is not possible under the current resource allocation.

      In summary, given the constraints of high latency and limited token availability, we must pivot our strategy. We should shift the focus from using the AI as a brute-force bug-fixing tool to using it as a scaffolding and incremental maintenance assistant.

    • ancorgs (5 days ago)

      I was planning to also try Copilot with some public LLM engine (e.g., ChatGPT), just for completeness. Unfortunately, I got sick on Thursday and spent the whole afternoon in bed. I don't expect it to get any better during Friday.

    • joseivanlopez (5 days ago)

      I also experimented with other command-line interface tools, specifically Cline. The tool performed exceptionally well, offering the key advantage of enabling concurrent execution of different AI models. This allows for testing free models available through platforms like Ollama (e.g., gpt-oss or deepseek-r1). I used it successfully with the claude-sonnet model. However, the severe limitations of the free usage tier ultimately prevented me from conducting any meaningful or conclusive tests.

    • ancorgs (5 days ago)

      I ran an extra experiment, not about unit tests but about code refactoring. TBH, I haven't had time to analyze the result yet, but some of the unit tests are still green (not all of them). See this pull request.

    • mvidner (1 day ago)

      My part: using Gemini 2.5 CLI to add a Rust integration test for agama-software::ZyppServer PR#2925

    Similar Projects

    Build a terminal user-interface (TUI) for Agama by IGonzalezSosa

    Description

    Officially, Agama offers two different user interfaces. On the one hand, we have the web-based interface, which is the one you see when you run the installation media. On the other hand, we have a command-line interface. In both cases, you can use them from a remote system, either through a browser or the agama CLI.

    We would expect most of the cases to be covered by this approach. However, if you cannot use the web-based interface and, for some reason, you cannot access the system through the network, your only option is to use the CLI. This interface offers a mechanism to modify Agama's configuration using an editor (vim, by default), but you might want a more user-friendly way.

    Goals

    The main goal of this project is to build a minimal terminal user-interface for Agama. This interface will allow the user to install the system by providing just a few settings (selecting a product, a storage device and a user password). Then it should report the installation progress.

    Resources

    • https://agama-project.github.io/
    • https://ratatui.rs/

    Conclusions

    We have summarized our conclusions in a pull request. It includes screenshots ;-) We did not implement all the features we wanted, but we learned a lot during the process. We know that, if needed, we could write a TUI for Agama, and we have an idea about how to build it. Good enough.


    Bring up Agama based tests for openSUSE Tumbleweed by szarate

    Description

    Agama has been around for some time already, and we have some tests for it on Tumbleweed. However, they are only in the development job group and are too few to be helpful in assessing the quality of a build.

    This project aims at enabling and creating new test suites for the agama flavor, using the already existing DVD and NET flavors as starting points.

    Goals

    • Introduce tests based on the Agama flavor in the main Tumbleweed job group
    • Create Tumbleweed yaml schedules for the agama installer and its own jsonette profile (the ones being used now are reused from Leap)
    • Fan out tests that have long runtimes (i.e., tackle this ticket)
    • Reduce redundancy in tests

    Resources


    Backporting patches using LLM by jankara

    Description

    Backporting Linux kernel fixes (either for CVE issues or as part of the general git-fixes workflow) is boring and mostly mechanical work (dealing with changes in context, renamed variables, new helper functions, etc.). The idea of this project is to explore the usage of LLMs for backporting Linux kernel commits to SUSE kernels.

    Goals

    • Create a safe environment allowing the LLM to run and backport patches without exposing the whole filesystem to it (for privacy and security reasons).
    • Write a prompt that will guide the LLM through the backporting process, and fine-tune it based on experimental results.
    • Explore the success rate of LLMs when backporting various patches.

    Resources

    • Docker
    • Gemini CLI

    Repository

    The current version of the container, with some instructions for use, is at: https://gitlab.suse.de/jankara/gemini-cli-backporter


    The Agentic Rancher Experiment: Do Androids Dream of Electric Cattle? by moio

    Rancher is a beast of a codebase. Let's investigate if the new 2025 generation of GitHub Autonomous Coding Agents and Copilot Workspaces can actually tame it.


    The Plan

    Create a sandbox GitHub Organization, clone in key Rancher repositories, and let the AI loose to see if it can handle real-world enterprise OSS maintenance - or if it just hallucinates new breeds of Kubernetes resources!

    Specifically, throw "Agentic Coders" some typical tasks in a complex, long-lived open-source project, such as:

    • The Grunt Work: generate missing GoDocs, unit tests, and refactorings. Rebase PRs.
    • The Complex Stuff: fix actual (historical) bugs and feature requests to see if they can traverse the complexity without (too much) human hand-holding.
    • Hunting Down Gaps: find areas lacking in docs, areas of improvement in code, dependency bumps, and so on.


    If time allows, also experiment with Model Context Protocol (MCP) to give agents context on our specific build pipelines and CI/CD logs.

    Why?

    We know AI can write "Hello World", and also moderately complex programs from a green field. But can it rebase a 3-month-old PR with conflicts in rancher/rancher? I want to find the breaking point of current AI agents, to determine if and how they can help us reduce our technical debt and work faster and better, and at the same time to find out about pitfalls and shortcomings.

    The CONCLUSION!!!

    A State of the Union document was compiled to summarize the lessons learned this week. For more gory details, just read on in the diary below!


    Is SUSE Trending? Popularity and Developer Sentiment Insight Using Native AI Capabilities by terezacerna

    Description

    This project aims to explore the popularity and developer sentiment around SUSE and its technologies compared to Red Hat and their technologies. Using publicly available data sources, I will analyze search trends, developer preferences, repository activity, and media presence. The final outcome will be an interactive Power BI dashboard that provides insights into how SUSE is perceived and discussed across the web and among developers.

    Goals

    1. Assess the popularity of SUSE products and brand compared to Red Hat using Google Trends.
    2. Analyze developer satisfaction and usage trends from the Stack Overflow Developer Survey.
    3. Use the GitHub API to compare SUSE and Red Hat repositories in terms of stars, forks, contributors, and issue activity (see the sketch after this list).
    4. Perform sentiment analysis on GitHub issue comments to measure community tone and engagement using built-in Copilot capabilities.
    5. Perform sentiment analysis on Reddit comments related to SUSE technologies using built-in Copilot capabilities.
    6. Use Gnews.io to track and compare the volume of news articles mentioning SUSE and Red Hat technologies.
    7. Test the integration of Copilot (AI) within Power BI for enhanced data analysis and visualization.
    8. Deliver a comprehensive Power BI report summarizing findings and insights.
    9. Test the full potential of Power BI, including its AI features and natural-language Q&A.
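
    As a rough sketch of goal 3: the public GitHub REST API already exposes the needed counters per repository. The repository names below are just examples, and Node 18+ is assumed for the built-in fetch:

    ```ts
    // Minimal sketch: fetch star/fork/issue counters for a repository from the
    // public GitHub REST API (unauthenticated; fine for a handful of requests).
    async function repoStats(fullName: string) {
      const res = await fetch(`https://api.github.com/repos/${fullName}`);
      if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);
      const repo = await res.json();
      return {
        repo: fullName,
        stars: repo.stargazers_count,
        forks: repo.forks_count,
        openIssues: repo.open_issues_count,
      };
    }

    // Example comparison of one SUSE and one Red Hat repository.
    const stats = await Promise.all(["rancher/rancher", "openshift/origin"].map(repoStats));
    console.table(stats);
    ```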

    Resources

    1. Google Trends: Web scraping for search popularity data
    2. Stack Overflow Developer Survey: For technology popularity and satisfaction comparison
    3. GitHub API: For repository data (stars, forks, contributors, issues, comments).
    4. Gnews.io API: For article volume and mentions analysis.
    5. Reddit: SUSE-related topics with comments.


    Extended private brain - RAG my own scripts and data into offline LLM AI by tjyrinki_suse

    Description

    For purely studying purposes, I'd like to find out if I could teach an LLM some of my own accumulated knowledge, to use it as a sort of extended brain.

    I might use qwen3-coder or something similar as a starting point.

    Everything would be done 100% offline without network available to the container, since I prefer to see when network is needed, and make it so it's never needed (other than initial downloads).

    Goals

    1. Learn something about RAG, LLM, AI.
    2. Find out if everything works offline as intended.
    3. As an end result, have a new way to access my own existing know-how, so that I can query the wisdom stored in my scripts and data.
    4. Be flexible to pivot in any direction, as long as there are new things learned.

    Resources

    To be found on the fly.

    Timeline

    Day 1 (of 4)

    • Tried out a RAG demo, expanded on feeding it my own data
    • Experimented with qwen3-coder to add persistent chat functionality, keeping vectors in a pickle file
    • Optimizations to keep everything within context window
    • Learn and add a bit of PyTest

    Day 2

    • More experimenting and more data
    • Study ChromaDB
    • Add a Web UI that works from another computer, even though the container itself sees the network as down

    Day 3

    • The above RAG is working well enough for demonstration purposes.
    • Pivot to trying out OpenCode, configuring local Ollama qwen3-coder there, to analyze the RAG demo.
    • Figured out how to configure the Ollama template to be usable under OpenCode. Running OpenCode locally is super slow compared to just running qwen3-coder alone.

    Day 4 (final day)

    • Battled with OpenCode, which was both slow and kept piling up broken things.
    • Call it a success, as after all the agentic AI was working locally.
    • Cleaned up the mess left behind a bit.

    Blog Post

    Summarized the findings in a blog post.


    Background Coding Agent by mmanno

    Description

    I had only bad experiences with AI one-shots. However, monitoring agent work closely and interfering often did result in productivity gains.

    Now, other companies are using agents in pipelines. That makes sense to me: just like CI, we want to offload work to pipelines. Our engineering teams are consistently slowed down by "toil": low-impact, repetitive maintenance tasks. A simple linter rule change, a dependency bump, rebasing patch-sets on top of newer releases, or an API deprecation requires dozens of manual PRs, draining time from feature development.

    So far we have been writing deterministic, script-based automation for these tasks. And it turns out to be a common trap. These scripts are brittle, complex, and become a massive maintenance burden themselves.

    Can we make prompts and workflows smart enough to succeed at background coding?

    Goals

    We will build a platform that allows engineers to execute complex code transformations using prompts.

    By automating this toil, we accelerate large-scale migrations and allow teams to focus on high-value work.

    Our platform will consist of three main components:

    • "Change" Definition: Engineers will define a transformation as a simple, declarative manifest:
      • The target repositories.
      • A wrapper to run a "coding agent", e.g., "gemini-cli".
      • The task as a natural language prompt.
    • "Change" Management Service: A central service that orchestrates the jobs. It will receive Change definitions and be responsible for the job lifecycle.
    • Execution Runners: We could use existing sandboxed CI runners (like GitHub/GitLab runners) to execute each job or spawn a container.
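
    A hypothetical sketch of what such a Change manifest could look like, written here as a TypeScript type with an example value. Every field name is an assumption; the actual format is one of the MVP deliverables below:

    ```ts
    // Illustrative only: the real Change manifest format is yet to be defined.
    interface Change {
      repositories: string[]; // target repositories to transform
      agent: string;          // wrapper used to run a coding agent, e.g. "gemini-cli"
      prompt: string;         // the task, as a natural-language prompt
    }

    const lintMigration: Change = {
      repositories: ["org/charts", "org/operator"],
      agent: "gemini-cli",
      prompt: "Apply the changes needed to comply with the new linter rules and open a PR.",
    };
    ```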

    MVP

    • Define the Change manifest format.
    • Build the core Management Service that can accept and queue a Change.
    • Connect management service and runners, dynamically dispatch jobs to runners.
    • Create a basic runner script that can run a hard-coded prompt against a test repo and open a PR.

    Stretch Goals:

    • Multi-layered approach, Workflow Agents trigger Coding Agents:
      1. Workflow Agent: Gather information about the task interactively from the user.
      2. Coding Agent: Once the interactive agent has refined the task into a clear prompt, it hands this prompt off to the "coding agent." This background agent is responsible for executing the task and producing the actual pull request.
    • Use MCP:
      1. Workflow Agent gathers context information from Slack, GitHub, etc.
      2. Workflow Agent triggers a Coding Agent.
    • Create a "Standard Task" library with reliable prompts.
      1. Rebasing rancher-monitoring to a new version of kube-prom-stack
      2. Update charts to use new images
      3. Apply changes to comply with a new linter
      4. Bump complex Go dependencies, like k8s modules
      5. Backport pull requests to other branches
    • Add “review agents” that review the generated PR.


    Mail client with mailing list workflow support in Rust by acervesato

    Description

    To create a mail user interface using the Rust programming language, supporting the mailing-list patch workflow. I know, aerc is already there, but I would like to create something simpler, without integrated protocols: just a plain user interface that uses some crates to read and create emails, which are fetched and sent via external tools.

    I already know Rust, but not the async support, which is needed in this case in order to handle events inside the mail folder and to send notifications.

    Goals

    • simple user interface in the style of aerc, with some vim keybindings for motions and search
    • automatic run of external tools (like mbsync) for checking emails
    • automatic run commands for notifications
    • apply patch set from ML
    • tree-sitter support with styles

    Resources

    • ratatui: user interface (https://ratatui.rs/)
    • notify: folder watcher (https://docs.rs/notify/latest/notify/)
    • mail-parser: parser for emails (https://crates.io/crates/mail-parser)
    • mail-builder: create emails in proper format (https://docs.rs/mail-builder/latest/mail_builder/)
    • gitpatch: ML support (https://crates.io/crates/gitpatch)
    • tree-sitter-rust: support for mail format (https://crates.io/crates/tree-sitter)


    OpenPlatform Self-Service Portal by tmuntan1

    Description

    In SUSE IT, we developed an internal developer platform for our engineers using SUSE technologies such as RKE2, SUSE Virtualization, and Rancher. While it works well for our existing users, the onboarding process could be better.

    To improve our customer experience, I would like to build a self-service portal to make it easy for people to accomplish common actions. To get started, I would have the portal create Jira SD tickets for our customers to have better information in our tickets, but eventually I want to add automation to reduce our workload.

    Goals

    • Build a frontend website (Angular) that helps customers create Jira SD tickets.
    • Build a backend (Rust with Axum) that does all the hard work for the frontend.

    Resources (SUSE VPN only)

    • development site: https://ui-dev.openplatform.suse.com/login?returnUrl=%2Fopenplatform%2Fforms
    • https://gitlab.suse.de/itpe/core/open-platform/op-portal/backend
    • https://gitlab.suse.de/itpe/core/open-platform/op-portal/frontend


    Looking at Rust if it could be an interesting programming language by jsmeix

    Get some basic understanding of Rust's security-related features from a general point of view.

    This Hack Week project is not to learn Rust to become a Rust programmer. This might happen later but it is not the goal of this Hack Week project.

    The goal of this Hack Week project is to evaluate if Rust could be an interesting programming language.

    An interesting programming language must make it easier to write code that is correct, and that stays correct as others maintain and enhance it over time, than to do the opposite.


    RMT.rs: High-Performance Registration Path for RMT using Rust by gbasso

    Description

    The SUSE Repository Mirroring Tool (RMT) is a critical component for managing software updates and subscriptions, especially for our Public Cloud Team (PCT). In a cloud environment, hundreds or even thousands of new SUSE instances (VPS/EC2) can be provisioned simultaneously. Each new instance attempts to register against an RMT server, creating a "thundering herd" scenario.

    We have observed that the current RMT server, written in Ruby, faces performance issues under this high-concurrency registration load. This can lead to request overhead, slow registration times, and outright registration failures, delaying the readiness of new cloud instances.

    This Hackweek project aims to explore a solution by re-implementing the performance-critical registration path in Rust. The goal is to leverage Rust's high performance, memory safety, and first-class concurrency handling to create an alternative registration endpoint that is fast, reliable, and can gracefully manage massive, simultaneous request spikes.

    The new Rust module will be integrated into the existing RMT Ruby application, allowing us to directly compare the performance of both implementations.

    Goals

    The primary objective is to build and benchmark a high-performance Rust-based alternative for the RMT server registration endpoint.

    Key goals for the week:

    1. Analyze & Identify: Dive into the SUSE/rmt Ruby codebase to identify and map out the exact critical path for server registration (e.g., controllers, services, database interactions).
    2. Develop in Rust: Implement a functionally equivalent version of this registration logic in Rust.
    3. Integrate: Explore and implement a method for Ruby/Rust integration to "hot-wire" the new Rust module into the RMT application. This may involve using FFI, or libraries like rb-sys or magnus.
    4. Benchmark: Create a benchmarking script (e.g., using k6, ab, or a custom tool) that simulates the high-concurrency registration load from thousands of clients.
    5. Compare & Present: Conduct a comparative performance analysis (requests per second, latency, success/error rates, CPU/memory usage) between the original Ruby path and the new Rust path. The deliverable will be this data and a summary of the findings.

    Resources

    • RMT Source Code (Ruby):
      • https://github.com/SUSE/rmt
    • RMT Documentation:
      • https://documentation.suse.com/sles/15-SP7/html/SLES-all/book-rmt.html
    • Tooling & Stacks:
      • RMT/Ruby development environment (for running the base RMT)
      • Rust development environment (rustup, cargo)
    • Potential Integration Libraries:
      • rb-sys: https://github.com/oxidize-rb/rb-sys
      • Magnus: https://github.com/matsadler/magnus
    • Benchmarking Tools:
      • k6 (https://k6.io/)
      • ab (ApacheBench)


    Move Uyuni Test Framework from Selenium to Playwright + AI by oscar-barrios

    Description

    This project aims to migrate the existing Uyuni Test Framework from Selenium to Playwright. The move will improve the stability, speed, and maintainability of our end-to-end tests by leveraging Playwright's modern features. We'll be rewriting the current Selenium code in Ruby to Playwright code in TypeScript, which includes updating the test framework runner, step definitions, and configurations. This is also necessary because we're moving from Cucumber Ruby to CucumberJS.

    If you're still curious about the AI in the title, it was just a way to grab your attention. Thanks for your understanding.

    Nah, let's be honest: AI helped a lot to vibe-code a good part of the Ruby methods of the test framework, moving them to TypeScript, along with the migration from Capybara to Playwright. I've been using Cline as a plugin for the WebStorm IDE, with the Gemini API behind it.
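
    For flavor, here is a minimal sketch of what a migrated step definition could look like, with CucumberJS driving Playwright. The step text, URL, and World shape are hypothetical, not the actual Uyuni framework code:

    ```ts
    // Hypothetical step definitions: assumes a custom Cucumber World exposing a Playwright page.
    import { Given, Then } from "@cucumber/cucumber";
    import { expect, Page } from "@playwright/test";

    interface UyuniWorld {
      page: Page;
    }

    Given("I am on the Systems overview page", async function (this: UyuniWorld) {
      await this.page.goto("https://uyuni.example.com/rhn/systems/Overview.do");
    });

    Then("I should see {string} as the page title", async function (this: UyuniWorld, title: string) {
      await expect(this.page).toHaveTitle(new RegExp(title));
    });
    ```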


    Goals

    • Migrate Core tests including Onboarding of clients
    • Improve test reliability: Measure and confirm a significant reduction of flakiness.
    • Implement a robust framework: Establish a well-structured and reusable Playwright test framework using CucumberJS.

    Resources