The Agama project is a multi-language Linux installer that leverages the distinct strengths of several key technologies:

  • Rust: Used for the back-end services and the core HTTP API, providing performance and safety.
  • TypeScript (React/PatternFly): Powers the modern web user interface (UI), ensuring a consistent and responsive user experience.
  • Ruby: Integrates existing, robust YaST libraries (e.g., yast-storage-ng) to reuse established functionality.

The Problem: Testing Overhead

Developing and maintaining code across these three languages requires a significant, tedious effort in writing, reviewing, and updating unit tests for each component. This high cost of testing is a drain on developer resources and can slow down the project's evolution.

The Solution: AI-Driven Automation

This project aims to eliminate the manual overhead of unit testing by exploring and integrating AI-driven code generation tools. We will investigate how AI can:

  1. Automatically generate new unit tests as code is developed.
  2. Intelligently correct and update existing unit tests when the application code changes.

By automating this crucial but monotonous task, we can free developers to focus on feature implementation and significantly improve the speed and maintainability of the Agama codebase.

Goals

  • Proof of Concept: Successfully integrate and demonstrate an authorized AI tool (e.g., gemini-cli) to automatically generate unit tests.
  • Workflow Integration: Define and document a new unit test automation workflow that seamlessly integrates the selected AI tool into the existing Agama development pipeline.
  • Knowledge Sharing: Establish a set of best practices for using AI in code generation, sharing the learned expertise with the broader team.

Contribution & Resources

We are seeking contributors interested in AI-powered development and improving developer efficiency. Whether you have previous experience with code generation tools or are eager to learn, your participation is highly valuable.

If you want to dive deep into AI for software quality, please reach out and join the effort!

  • Authorized AI Tools: Tools supported by SUSE (e.g., gemini-cli)
  • Focus Areas: Rust, TypeScript, and Ruby components within the Agama project.

Interesting Links

Looking for hackers with the skills:

agama ai rust typescript react

This project is part of:

Hack Week 25

Activity

  • about 1 month ago: joseivanlopez liked this project.
  • about 1 month ago: jreidinger liked this project.
  • about 2 months ago: enavarro_suse liked this project.
  • about 2 months ago: lkocman liked this project.
  • about 2 months ago: mvidner joined this project.
  • about 2 months ago: mvidner liked this project.
  • 2 months ago: joseivanlopez added keyword "rust" to this project.
  • 2 months ago: joseivanlopez added keyword "typescript" to this project.
  • 2 months ago: joseivanlopez added keyword "react" to this project.
  • 2 months ago: joseivanlopez joined this project.
  • 2 months ago: joseivanlopez added keyword "agama" to this project.
  • 2 months ago: joseivanlopez added keyword "ai" to this project.
  • 2 months ago: ygutierrez liked this project.
  • 2 months ago: dgdavid liked this project.
  • 2 months ago: ancorgs started this project.
  • 2 months ago: ancorgs liked this project.
  • 2 months ago: joseivanlopez originated this project.

  • Comments

    • ancorgs
      about 2 months ago by ancorgs

      Time for some reporting.

      Both @joseivanlopez and I have been experimenting with AI and the unit tests of Agama's web interface (JavaScript + React).

      @joseivanlopez will probably write about his experience in more detail, but this report covers some common experiments we both ran using different AI solutions. Let's start with some context.

      There is an api-v2 branch in the Agama repository that dramatically changes how the web UI interacts with the backend. The code already works, but the JavaScript unit tests have not been adapted accordingly yet. The main idea was to simplify the process of adapting those unit tests with the help of AI.

      @joseivanlopez did it using the company-provided Gemini; this pull request shows some partial results. Gemini was able to adapt several tests, although it would be more accurate to say that it rewrote them: it felt like it ignored the current unit tests and wrote new ones from scratch. The generated unit tests are indeed useful; they cover many scenarios and look quite sane, although some of them are not very semantic.

      Gemini was not blazing fast (it took 10+ minutes to adapt a single test) and sometimes it struggled to find its way (it felt like a pure trial-and-error process). But the outcome is certainly useful, and the experiment can be labeled as a relative success.

      But all that applies only to the gemini-pro model. Sadly, it looks like the SUSE-provided license includes a very limited number of tokens to be spent on gemini-pro. After spending those on adapting 4 or 5 unit tests, everything falls back to the useless gemini-flash model. That means only a few tests per developer can be adapted each day.

      In parallel, I ran a very similar experiment using Claude.ai, an AI solution that is not endorsed by SUSE, so we cannot use it for production code. I used the completely free version, which only provides access to a web console (so I had to upload many source-code files manually) and only allows a few queries to their intermediate model (using it for longer or accessing the advanced model would have implied a fee).

      Even with all those limitations, I feel the experiment was clearly more successful than the Gemini one. You can see some partial results in this pull request.

      When asked to adapt existing unit tests, Claude really made all the necessary changes to get them running again, without rewriting everything. Sometimes it added a missing scenario, but it respected the approach of the existing tests and scenarios. When asked to write a new test from scratch, it apparently produced a quite comprehensive and semantic unit test. Claude really felt like a tool that could save a lot of manual work in a reliable way.

      Compared to Gemini, Claude was way faster and straight to the point. It was able to produce good results in seconds without really having access to my development environment. Gemini seemed to work more by trial and error, with several iterations of adjusting things, running the tests, and adjusting again.

    • joseivanlopez
      about 1 month ago by joseivanlopez

      AI Experiment Report: Gemini-CLI for Agama Unit Test Automation

      This report summarizes the results of an experiment using the gemini-cli tool (powered by the Gemini Pro model) to automatically update outdated React unit tests in the Agama UI codebase.

      Scenario & Goal

      The Agama UI code was adapted to use a new HTTP API, leaving existing unit tests broken and outdated. The goal was to use gemini-cli to automatically fix and adapt these broken React unit tests.

      • Tool: gemini-cli
      • Model: Gemini Pro
      • Example Prompt: "Fix tests from src/components/storage/PartitionPage.test.tsx"

      Key Results and Observations

      Success and Capability

      • High Adaptation Rate: The AI demonstrated its capability to adapt a significant number of existing React tests to the new API structure and component logic. (See results: https://github.com/agama-project/agama/pull/2927)
      • Actionable Output: The output was often directly usable, requiring minimal manual cleanup or correction.

      Performance and Efficiency Challenges

      • Speed/Time: The process was very slow. Adapting a single test suite typically took around 15 minutes. This time investment sometimes approaches or exceeds the time a developer might take to fix the tests manually, impacting developer workflow adoption.
      • Reliability: The process was unstable and sometimes stalled completely, requiring developer intervention (canceling the request and resubmitting) to complete the task.
      • Strategy: The model appeared to operate in a "try/error" mode (iterative guessing based on error messages) rather than demonstrating a deep comprehension of the code. This trial-and-error approach contributes directly to the poor performance and high latency observed.

      Conclusion

      Based on the experiment's results, while the Gemini Pro model currently exhibits significant performance issues (slowness and stalling) that make large-scale, automated fixes impractical, it demonstrates core capabilities that point to its potential value in specific scenarios within the Agama project.

      Creating Tests From Scratch

      Gemini is highly useful for generating the initial boilerplate and structure for new unit tests. A developer no longer needs to spend time setting up mocks, imports, and basic assertion structures for a new component. The AI can quickly create a functional test file based solely on the component's public interface. This dramatically lowers the barrier to writing new tests and speeds up the initial development phase, turning test creation from a chore into a rapid scaffolding process.
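
      As a purely illustrative sketch of that scaffolding, assuming Jest with React Testing Library and jest-dom: the ProductSelector component, the useProducts hook, and the import paths below are invented for this example and are not taken from the actual Agama codebase.

      ```tsx
      // Hypothetical scaffold for a brand-new component test. The component name,
      // the mocked hook, and the import paths are assumptions for illustration only.
      import React from "react";
      import { render, screen } from "@testing-library/react";
      import userEvent from "@testing-library/user-event";
      import { ProductSelector } from "~/components/product/ProductSelector";

      // Mock the (hypothetical) products hook so the test never touches the HTTP API.
      jest.mock("~/queries/products", () => ({
        useProducts: () => [
          { id: "tumbleweed", name: "openSUSE Tumbleweed" },
          { id: "leap", name: "openSUSE Leap" },
        ],
      }));

      describe("ProductSelector", () => {
        it("lists the available products", () => {
          render(<ProductSelector />);
          expect(screen.getByText("openSUSE Tumbleweed")).toBeInTheDocument();
          expect(screen.getByText("openSUSE Leap")).toBeInTheDocument();
        });

        it("reports the selected product", async () => {
          const onSelect = jest.fn();
          render(<ProductSelector onSelect={onSelect} />);
          await userEvent.click(screen.getByRole("radio", { name: "openSUSE Leap" }));
          expect(onSelect).toHaveBeenCalledWith("leap");
        });
      });
      ```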

      Progressive and Incremental Adaptation

      The AI is valuable for progressive adaptations as code evolves. Instead of waiting for a massive refactor that breaks hundreds of tests (creating a daunting backlog), a developer should use the AI immediately after making small, targeted changes to a component's internal logic, API, or prop structure. This strategy ensures unit tests are fixed incrementally, preventing the large backlog of broken tests that often results from major refactoring efforts.
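
      To make the idea concrete, here is a hypothetical example of such a targeted adaptation (the component, hook name, and payload shape are invented): only the mock has to track the API change, while the existing assertions stay intact.

      ```tsx
      // Hypothetical adapted test after a small API change: the storage hook used to
      // return a flat device list and now wraps it in a `system` object. Only the
      // mock needed to change; the test logic and assertions stayed the same.
      import React from "react";
      import { render, screen } from "@testing-library/react";
      import { DeviceList } from "~/components/storage/DeviceList";

      jest.mock("~/queries/storage", () => ({
        // Old shape (before the change): () => [{ name: "/dev/sda", size: 1024 }]
        // New shape, matching the adapted component:
        useDevices: () => ({ system: [{ name: "/dev/sda", size: 1024 }] }),
      }));

      describe("DeviceList", () => {
        it("still lists the available devices", () => {
          render(<DeviceList />);
          expect(screen.getByText("/dev/sda")).toBeInTheDocument();
        });
      });
      ```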

      Resource Constraint: Token Limits

      A critical limiting factor impacting the viability of extensive AI usage is the limited token quota provided by SUSE for the Gemini Pro model. Due to the model's observed "try/error" strategy and the resulting high number of queries needed to complete a task, the tokens are consumed rapidly, typically becoming exhausted after only about two hours of intensive usage.

      This severe constraint means that even if the performance were better, continuous, large-scale automation is not possible under the current resource allocation.

      In summary, given the constraints of high latency and limited token availability, we must pivot our strategy. We should shift the focus from using the AI as a brute-force bug-fixing tool to using it as a scaffolding and incremental maintenance assistant.

    • ancorgs
      about 1 month ago by ancorgs

      I was planning to also try Copilot with some public LLM engine (e.g. ChatGPT), just for completeness. Unfortunately, I got sick on Thursday and spent the whole Thursday afternoon in bed. I don't expect it to get any better during Friday.

    • joseivanlopez
      about 1 month ago by joseivanlopez

      I also experimented with other command-line interface tools, specifically cline. The tool performed exceptionally well, offering the key advantage of enabling concurrent execution of different AI models. This allows for testing free models available through platforms like Ollama (e.g., gpt-oss or deepseek-r1). I used it successfully with the claude-sonnet model. However, the severe limitations of the free usage tier ultimately prevented me from conducting any meaningful or conclusive tests.

    • ancorgs
      about 1 month ago by ancorgs

      I ran an extra experiment, not about unit tests but about code refactoring. TBH, I didn't have time yet to analyze the result, but some of the unit tests are still green (not all of them). See this pull request.

    • mvidner
      about 1 month ago by mvidner

      My part: using Gemini 2.5 CLI to add a Rust integration test for agama-software::ZyppServer PR#2925

    Similar Projects

    Bring up Agama based tests for openSUSE Tumbleweed by szarate

    Description

    Agama has been around for some time already, and we have some tests for it on Tumbleweed. However, they are only in the development job group and are too few to be helpful in assessing the quality of a build.

    This project aims at enabling and creating new test suites for the agama flavor, using the already existing DVD and NET flavors as starting points.

    Goals

    • Introduce tests based on the Agama flavor in the main Tumbleweed job group
    • Create Tumbleweed yaml schedules for the agama installer and its own jsonnet profile (the ones being used now are reused from Leap)
    • Fan out tests that have long runtimes (i.e. tackle this ticket)
    • Reduce redundancy in tests

    Resources


    Build a terminal user-interface (TUI) for Agama by IGonzalezSosa

    Description

    Officially, Agama offers two different user interfaces. On the one hand, we have the web-based interface, which is the one you see when you run the installation media. On the other hand, we have a command-line interface. In both cases, you can use them from a remote system, either through a browser or the agama CLI.

    We would expect most of the cases to be covered by this approach. However, if you cannot use the web-based interface and, for some reason, you cannot access the system through the network, your only option is to use the CLI. This interface offers a mechanism to modify Agama's configuration using an editor (vim, by default), but you might want a more user-friendly way.

    Goals

    The main goal of this project is to build a minimal terminal user interface for Agama. This interface will allow the user to install the system by providing just a few settings (selecting a product, a storage device, and a user password). Then it should report the installation progress.

    Resources

    • https://agama-project.github.io/
    • https://ratatui.rs/

    Conclusions

    We have summarized our conclusions in a pull request. It includes screenshots ;-) We did not implement all the features we wanted, but we learned a lot during the process. We know that, if needed, we could write a TUI for Agama, and we have an idea about how to build it. Good enough.


    GenAI-Powered Systemic Bug Evaluation and Management Assistant by rtsvetkov

    Motivation

    What is the decision-critical question one can ask about a bug? How does this question affect the decision on the bug, and why?

    Let's make GenAI look at the bug from a systemic point of view and evaluate what we don't know. Which piece of information is missing to make a decision?

    Description

    To build a tool that takes a raw bug report (including error messages and context) and uses a large language model (LLM) to generate a series of structured, Socratic-style or Systemic questions designed to guide integration and development toward the root cause, rather than just providing a direct, potentially incorrect fix.

    Goals

    Set up a Python environment

    1. Set up the environment and get a Gemini API key.
    2. Collect 5-10 realistic bug reports (from open-source projects, personal projects, or public forums like Stack Overflow; include the error message and the initial context).

    Build the Dialogue Loop

    1. Write a basic Python script using the Gemini API.
    2. Implement a simple conversational loop (see the sketch below): User Input (Bug) -> AI Output (Question) -> User Input (Answer to AI's question) -> AI Output (Next Question).

    Code Implementation
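
    The script itself is planned in Python; purely to illustrate the shape of the loop, here is a minimal sketch in TypeScript using the @google/generative-ai SDK. The model name, system prompt, and turn limit are assumptions.

    ```ts
    // Minimal sketch of the Socratic dialogue loop described above.
    // Model name, system prompt, and the 10-turn limit are illustrative assumptions.
    import * as readline from "node:readline/promises";
    import { GoogleGenerativeAI } from "@google/generative-ai";

    async function main() {
      const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY ?? "");
      const model = genAI.getGenerativeModel({
        model: "gemini-1.5-pro",
        systemInstruction:
          "You are a debugging assistant. Ask one Socratic, systemic question at a " +
          "time to guide the user toward the root cause. Never propose a direct fix.",
      });

      const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
      const chat = model.startChat();

      // User Input (Bug) -> AI Output (Question) -> User Input (Answer) -> ...
      let userMessage = await rl.question("Paste the bug report: ");
      for (let turn = 0; turn < 10; turn++) {
        const result = await chat.sendMessage(userMessage);
        console.log(`\nAI question: ${result.response.text()}\n`);
        userMessage = await rl.question("Your answer: ");
      }
      rl.close();
    }

    main();
    ```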

    Socratic/Systemic Strategy Implementation

    1. Refine the logic to ensure the questions follow a Socratic and Systemic path (e.g., from symptom -> context -> assumptions -> critical parts).
    2. Implement Function Calling (an advanced feature of the Gemini API) to suggest specific actions to the user, like "Run a ping test" or "Check the database logs."
    3. Implement a Bugzilla call to collect the
    4. Implement the Questioning Framework as LLM pre-conditioning
    5. Define a set of instructions
    6. Assemble the Tool

    Resources

    What are Systemic Questions?

    Systemic questions explore the relationships, patterns, and interactions within a system rather than focusing on isolated elements.
    In IT, they help uncover hidden dependencies, feedback loops, assumptions, and side-effects during debugging or architecture analysis.

    Gitlab Project

    gitlab.suse.de/sle-prjmgr/BugDecisionCritical_Question


    Song Search with CLAP by gcolangiuli

    Description

    Contrastive Language-Audio Pretraining (CLAP) is an open-source library that enables the training of a neural network on both audio and text descriptions, making it possible to search for audio using a text input. Several pre-trained models for song search are already available on Hugging Face.

    SUSE Hackweek AI Song Search

    Goals

    Evaluate how CLAP can be used for song searching and determine which types of queries yield the best results by developing a Minimum Viable Product (MVP) in Python. Based on the results of this MVP, future steps could include:

    • Music Tagging;
    • Free text search;
    • Integration with an LLM (for example, with MCP or the OpenAI API) for music suggestions based on your own library.

    The code for this project will be entirely written using AI to better explore and demonstrate AI capabilities.

    Result

    In this MVP we implemented:

    • Async Song Analysis with the CLAP model
    • Free Text Search of the songs
    • Similar song search based on vector representation
    • Containerised version with web interface

    We also documented what went well and what can be improved in the use of AI.

    You can have a look at the result here:

    Future implementation can be related to performance improvement and stability of the analysis.

    References


    SUSE Observability MCP server by drutigliano

    Description

    The idea is to implement the SUSE Observability Model Context Protocol (MCP) Server as a specialized, middle-tier API designed to translate the complex, high-cardinality observability data from StackState (topology, metrics, and events) into highly structured, contextually rich, and LLM-ready snippets.

    This MCP Server abstracts the StackState APIs. Its primary function is to serve as a Tool/Function Calling target for AI agents. When an AI receives an alert or a user query (e.g., "What caused the outage?"), the AI calls an MCP Server endpoint. The server then fetches the relevant operational facts, summarizes them, normalizes technical identifiers (like URNs and raw metric names) into natural language concepts, and returns a concise JSON or YAML payload. This payload is then injected directly into the LLM's prompt, ensuring the final diagnosis or action is grounded in real-time, accurate SUSE Observability data, effectively minimizing hallucinations.
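
    As a rough sketch of what that normalization step could look like: the field names, health-state values, and mappings below are assumptions for illustration, not the actual StackState data model (the project itself targets a Golang server).

    ```ts
    // Hypothetical normalization: raw StackState identifiers in, LLM-friendly payload out.
    interface RawComponent {
      urn: string; // e.g. "urn:stackstate:component/checkout-service"
      healthState: "CLEAR" | "DEVIATING" | "CRITICAL";
      metrics: Record<string, number>;
    }

    interface LlmReadyComponent {
      serviceName: string;
      health: "Healthy" | "Unhealthy" | "Critical";
      keyMetrics: Record<string, number>;
    }

    // Map proprietary health states to plain terms an LLM can reason over.
    const HEALTH_MAP: Record<RawComponent["healthState"], LlmReadyComponent["health"]> = {
      CLEAR: "Healthy",
      DEVIATING: "Unhealthy",
      CRITICAL: "Critical",
    };

    function normalize(raw: RawComponent): LlmReadyComponent {
      return {
        // Turn the URN into a plain service name.
        serviceName: raw.urn.split("/").pop() ?? raw.urn,
        health: HEALTH_MAP[raw.healthState],
        keyMetrics: raw.metrics,
      };
    }

    // Example: produces { serviceName: "checkout-service", health: "Unhealthy", ... }
    console.log(JSON.stringify(normalize({
      urn: "urn:stackstate:component/checkout-service",
      healthState: "DEVIATING",
      metrics: { "cpu.usage": 0.92 },
    }), null, 2));
    ```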

    Goals

    • Grounding AI Responses: Ensure that all AI diagnoses, root cause analyses, and action recommendations are strictly based on verifiable, real-time data retrieved from the SUSE Observability StackState platform.
    • Simplifying Data Access: Abstract the complexity of StackState's native APIs (e.g., Time Travel, 4T Data Model) into simple, semantic functions that can be easily invoked by LLM tool-calling mechanisms.
    • Data Normalization: Convert complex, technical identifiers (like component URNs, raw metric names, and proprietary health states) into standardized, natural language terms that an LLM can easily reason over.
    • Enabling Automated Remediation: Define clear, action-oriented MCP endpoints (e.g., execute_runbook) that allow the AI agent to initiate automated operational workflows (e.g., restarts, scaling) after a diagnosis, closing the loop on observability.

     Hackweek STEP

    • Create a functional MCP endpoint exposing one (or more) tool(s) to answer queries like "What is the health of service X?" by fetching, normalizing, and returning live StackState data in an LLM-ready format.

     Scope

    • Implement read-only MCP server that can:
      • Connect to a live SUSE Observability instance and authenticate (with API token)
      • Use tools to fetch data for a specific component URN (e.g., current health state, metrics, possibly topology neighbors, ...).
      • Normalize response fields (e.g., URN to "Service Name," health state DEVIATING to "Unhealthy", raw metrics).
      • Return the data as a structured JSON payload compliant with the MCP specification.

    Deliverables

    • MCP Server v0.1: A running Golang MCP server with at least one tool.
    • A README.md and a test script (e.g., curl commands or a simple notebook) showing how an AI agent would call the endpoint and the resulting JSON payload.

    Outcome: A functional and testable API endpoint that proves the core concept of translating complex StackState data into a simple, LLM-ready format. This provides the foundation for developing AI-driven diagnostics and automated remediation.

    Resources

    • https://www.honeycomb.io/blog/its-the-end-of-observability-as-we-know-it-and-i-feel-fine
    • https://www.datadoghq.com/blog/datadog-remote-mcp-server
    • https://modelcontextprotocol.io/specification/2025-06-18/index
    • https://modelcontextprotocol.io/docs/develop/build-server

     Basic implementation

    • https://github.com/drutigliano19/suse-observability-mcp-server

    Results

    Successfully developed and delivered a fully functional SUSE Observability MCP Server that bridges language models with SUSE Observability's operational data. This project demonstrates how AI agents can perform intelligent troubleshooting and root cause analysis using structured access to real-time infrastructure data.

    Example execution


    Bugzilla goes AI - Phase 1 by nwalter

    Description

    This project, Bugzilla goes AI, aims to boost developer productivity by creating an autonomous AI bug agent during Hackweek. The primary goal is to reduce the time employees spend triaging bugs by integrating Ollama to summarize issues, recommend next steps, and push focused daily reports to a Web Interface.

    Goals

    To reduce employee time spent on Bugzilla by implementing an AI tool that triages and summarizes bug reports, providing actionable recommendations to the team via Web Interface.

    Project Charter

    Bugzilla goes AI Phase 1

    Description

    Project Achievements during Hackweek

    In this file you can read about what we achieved during Hackweek.

    Project Achievements


    Background Coding Agent by mmanno

    Description

    I had only bad experiences with AI one-shots. However, monitoring agent work closely and interfering often did result in productivity gains.

    Now, other companies are using agents in pipelines. That makes sense to me: just like CI, we want to offload work to pipelines. Our engineering teams are consistently slowed down by "toil": low-impact, repetitive maintenance tasks. A simple linter rule change, a dependency bump, rebasing patch sets on top of newer releases, or an API deprecation requires dozens of manual PRs, draining time from feature development.

    So far we have been writing deterministic, script-based automation for these tasks. And it turns out to be a common trap. These scripts are brittle, complex, and become a massive maintenance burden themselves.

    Can we make prompts and workflows smart enough to succeed at background coding?

    Goals

    We will build a platform that allows engineers to execute complex code transformations using prompts.

    By automating this toil, we accelerate large-scale migrations and allow teams to focus on high-value work.

    Our platform will consist of three main components:

    • "Change" Definition: Engineers will define a transformation as a simple, declarative manifest:
      • The target repositories.
      • A wrapper to run a "coding agent", e.g., "gemini-cli".
      • The task as a natural language prompt.
    • "Change" Management Service: A central service that orchestrates the jobs. It will receive Change definitions and be responsible for the job lifecycle.
    • Execution Runners: We could use existing sandboxed CI runners (like GitHub/GitLab runners) to execute each job or spawn a container.
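
    A hypothetical sketch of what such a Change manifest could look like as a typed structure; every field name is invented for illustration, since the format is yet to be defined.

    ```ts
    // Hypothetical shape of a "Change" manifest; every field name is illustrative.
    interface ChangeManifest {
      // Repositories the transformation should be applied to.
      targetRepositories: string[];
      // Wrapper used to run the coding agent, e.g. "gemini-cli".
      agent: {
        command: string;
        extraArgs?: string[];
      };
      // The task itself, expressed as a natural-language prompt.
      prompt: string;
    }

    const example: ChangeManifest = {
      targetRepositories: ["org/repo-a", "org/repo-b"],
      agent: { command: "gemini-cli" },
      prompt: "Update all Helm charts to pull images from the new registry.",
    };

    console.log(JSON.stringify(example, null, 2));
    ```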

    MVP

    • Define the Change manifest format.
    • Build the core Management Service that can accept and queue a Change.
    • Connect management service and runners, dynamically dispatch jobs to runners.
    • Create a basic runner script that can run a hard-coded prompt against a test repo and open a PR.

    Stretch Goals:

    • Multi-layered approach, Workflow Agents trigger Coding Agents:
      1. Workflow Agent: Gather information about the task interactively from the user.
      2. Coding Agent: Once the interactive agent has refined the task into a clear prompt, it hands this prompt off to the "coding agent." This background agent is responsible for executing the task and producing the actual pull request.
    • Use MCP:
      1. Workflow Agent gathers context information from Slack, Github, etc.
      2. Workflow Agent triggers a Coding Agent.
    • Create a "Standard Task" library with reliable prompts.
      1. Rebasing rancher-monitoring to a new version of kube-prom-stack
      2. Update charts to use new images
      3. Apply changes to comply with a new linter
      4. Bump complex Go dependencies, like k8s modules
      5. Backport pull requests to other branches
    • Add “review agents” that review the generated PR.

    See also


    OpenPlatform Self-Service Portal by tmuntan1

    Description

    In SUSE IT, we developed an internal developer platform for our engineers using SUSE technologies such as RKE2, SUSE Virtualization, and Rancher. While it works well for our existing users, the onboarding process could be better.

    To improve our customer experience, I would like to build a self-service portal to make it easy for people to accomplish common actions. To get started, I would have the portal create Jira SD tickets for our customers to have better information in our tickets, but eventually I want to add automation to reduce our workload.

    Goals

    • Build a frontend website (Angular) that helps customers create Jira SD tickets.
    • Build a backend (Rust with Axum) that does all the hard work for the frontend.

    Resources (SUSE VPN only)

    • development site: https://ui-dev.openplatform.suse.com/login?returnUrl=%2Fopenplatform%2Fforms
    • https://gitlab.suse.de/itpe/core/open-platform/op-portal/backend
    • https://gitlab.suse.de/itpe/core/open-platform/op-portal/frontend


    Learn how to use the Relm4 Rust GUI crate by xiaoguang_wang

    Relm4 is based on gtk4-rs and compatible with libadwaita. The gtk4-rs crate provides all the tools necessary to develop applications. Building on this foundation, Relm4 makes developing more idiomatic, simpler, and faster.

    https://github.com/Relm4/Relm4


    Exploring Rust's potential: from basics to security by sferracci

    Description

    This project aims to conduct a focused investigation and practical application of the Rust programming language, with a specific emphasis on its security model. A key component will be identifying and understanding the most common vulnerabilities that can be found in Rust code.

    Goals

    Achieve a beginner/intermediate level of proficiency in writing Rust code. This will be measured by trying to solve LeetCode problems focusing on common data structures and algorithms. Study Rust vulnerabilities and learn best practices to avoid them.

    Resources

    Rust book: https://doc.rust-lang.org/book/


    Arcticwolf - A rust based user space NFS server by vcheng

    Description

    Rust has performance similar to C. It also has a better async I/O model and tight integration with io_uring. This project aims to develop a user-space NFS server based on Rust.

    Goals

    • Get an understanding of how cargo works
    • Get an understanding of how XDR was generated with xdrgen
    • Create the Rust-based NFS server that supports basic operations like mount/readdir/read/write

    Result (2025 Hackweek)

    • In progress PR: https://github.com/Vicente-Cheng/arcticwolf/pull/1

    Resources

    https://github.com/Vicente-Cheng/arcticwolf


    Mail client with mailing list workflow support in Rust by acervesato

    Description

    To create a mail user interface using the Rust programming language, supporting the mailing-list patch workflow. I know aerc is already there, but I would like to create something simpler, without integrated protocols: just a plain user interface that uses some crates to read and create emails, which are fetched and sent via external tools.

    I already know Rust, but not the async support, which is needed in this case in order to handle events inside the mail folder and to send notifications.

    Goals

    • simple user interface in the style of aerc, with some vim keybindings for motions and search
    • automatic run of external tools (like mbsync) for checking emails
    • automatic run commands for notifications
    • apply patch set from ML
    • tree-sitter support with styles

    Resources

    • ratatui: user interface (https://ratatui.rs/)
    • notify: folder watcher (https://docs.rs/notify/latest/notify/)
    • mail-parser: parser for emails (https://crates.io/crates/mail-parser)
    • mail-builder: create emails in proper format (https://docs.rs/mail-builder/latest/mail_builder/)
    • gitpatch: ML support (https://crates.io/crates/gitpatch)
    • tree-sitter-rust: support for mail format (https://crates.io/crates/tree-sitter)


    Move Uyuni Test Framework from Selenium to Playwright + AI by oscar-barrios

    Description

    This project aims to migrate the existing Uyuni Test Framework from Selenium to Playwright. The move will improve the stability, speed, and maintainability of our end-to-end tests by leveraging Playwright's modern features. We'll be rewriting the current Selenium code in Ruby to Playwright code in TypeScript, which includes updating the test framework runner, step definitions, and configurations. This is also necessary because we're moving from Cucumber Ruby to CucumberJS.
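
    As a rough illustration of the target shape (CucumberJS step definitions driving Playwright in TypeScript), here is a sketch; the step texts, server URL, and selectors are invented and are not taken from the actual Uyuni test framework.

    ```ts
    // Hypothetical CucumberJS + Playwright step definitions in TypeScript.
    // Step texts, server URL, and selectors are assumptions, not actual Uyuni tests.
    import assert from "node:assert";
    import { Given, When, Then, Before, After, setDefaultTimeout } from "@cucumber/cucumber";
    import { chromium, Browser, Page } from "playwright";

    setDefaultTimeout(30_000);

    let browser: Browser;
    let page: Page;

    Before(async () => {
      browser = await chromium.launch();
      page = await browser.newPage();
    });

    After(async () => {
      await browser.close();
    });

    Given("I am on the login page", async () => {
      await page.goto("https://server.example.com"); // hypothetical server URL
    });

    When("I log in as {string} with password {string}", async (user: string, password: string) => {
      await page.fill("#username", user); // selectors are illustrative
      await page.fill("#password", password);
      await page.click("button[type=submit]");
    });

    Then("I should see the {string} page", async (title: string) => {
      assert.ok(await page.getByRole("heading", { name: title }).isVisible());
    });
    ```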

    If you're still curious about the AI in the title, it was just a way to grab your attention. Thanks for your understanding.

    Nah, let's be honest: AI helped a lot to vibe-code a good part of the Ruby methods of the test framework, moving them to TypeScript, along with the migration from Capybara to Playwright. I've been using "Cline" as a plugin for the WebStorm IDE, with the Gemini API behind it.


    Goals

    • Migrate Core tests including Onboarding of clients
    • Improve test reliability: Measure and confirm a significant reduction of flakiness.
    • Implement a robust framework: Establish a well-structured and reusable Playwright test framework using CucumberJS.

    Resources