Project Description

This Hack Week I want to figure out how best to use Playwright to test Mojolicious applications like openQA in unit tests. Playwright is a (mostly better) alternative to Selenium for browser automation. I'd like to find a way to write entire unit tests in JavaScript and have them run right next to the existing Perl tests, with the same test runner, using node-tap and the Test Anything Protocol.
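
To make the idea concrete, here is a rough sketch of what such a test could look like (not a finished library): it spawns a Mojolicious application as a child process, waits for it to accept connections, and then drives it with Playwright from a node-tap test. The startMojoServer helper, the hardcoded port and the app script path are illustrative assumptions.

    // A rough sketch of the idea, not a finished library: startMojoServer, the
    // hardcoded port and the app script path are illustrative assumptions.
    const t = require('tap');
    const { chromium } = require('playwright');
    const { spawn } = require('child_process');
    const net = require('net');

    const PORT = 31337; // fixed for the sketch; a real helper would pick a free port

    // Start the Mojolicious app as a child process and wait until it accepts connections.
    function startMojoServer(script) {
      const proc = spawn('perl', [script, 'daemon', '-l', `http://127.0.0.1:${PORT}`]);
      return new Promise((resolve, reject) => {
        proc.once('exit', () => reject(new Error('server exited before listening')));
        const tryConnect = () => {
          const socket = net.connect(PORT, '127.0.0.1');
          socket.once('connect', () => { socket.end(); resolve(proc); });
          socket.once('error', () => setTimeout(tryConnect, 100));
        };
        tryConnect();
      });
    }

    t.test('landing page renders', async (t) => {
      const server = await startMojoServer('./script/my_app'); // hypothetical app script
      const browser = await chromium.launch();
      const page = await browser.newPage();
      await page.goto(`http://127.0.0.1:${PORT}/`);
      t.match(await page.title(), /Mojolicious/, 'page title looks right');
      await browser.close();
      server.kill();
    });

Both the node-tap runner and prove speak TAP, which is what should make running these tests right next to the existing Perl tests feasible.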

Goals for this Hackweek

  • Get to know Playwright and its features
  • Write a node library to start and manage a Mojolicious web server
  • Combine TAP and Playwright into unit tests
  • Find a way to manage database fixtures for the Mojolicious application to be tested
  • Have fun along the way

Looking for hackers with the skills:

javascript perl playwright openqa nodejs mojolicious

This project is part of:

Hack Week 20

Activity

  • over 4 years ago: okurz liked this project.
  • over 4 years ago: kraih added keyword "mojolicious" to this project.
  • over 4 years ago: kraih started this project.
  • over 4 years ago: kraih added keyword "javascript" to this project.
  • over 4 years ago: kraih added keyword "perl" to this project.
  • over 4 years ago: kraih added keyword "playwright" to this project.
  • over 4 years ago: kraih added keyword "openqa" to this project.
  • over 4 years ago: kraih added keyword "nodejs" to this project.
  • over 4 years ago: kraih originated this project.


    Similar Projects

    Kudos aka openSUSE Recognition Platform by lkocman

    Description

    Relevant blog post at news-o-o

    I started the Kudos application shortly after Leap 16.0 to create a simple, friendly way to recognize people for their work and contributions to openSUSE. There’s so much more to our community than just submitting requests in OBS or Gitea: we have translations (not only in Weblate), wiki edits, forum and social media moderation, infrastructure maintenance, booth participation, talks, manual testing, openQA test suites, and more!

    Goals

    • Kudos under github.com/openSUSE/kudos with build previews via Netlify

    • Have a kudos.opensuse.org instance running in production

    • Build an easy-to-contribute recognition platform for the openSUSE community: a place where everyone can send and receive appreciation for their work, across all areas of contribution.

    • In the future, we could even explore reward options such as vouchers for t-shirts or other community swag, small tokens of appreciation to make recognition more tangible.

    Resources

    (Do not create new badge requests during hackweek, unless you'll make the badge during hackweek)


    MCP Perl SDK by kraih

    Description

    We've been using the MCP Perl SDK to connect openQA with AI. And while the basics are working pretty well, the SDK is not fully spec compliant yet. So let's change that!
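
    To give a sense of what spec compliance involves here, these are the kinds of JSON-RPC messages the SDK will need to produce or handle; the method names follow the MCP specification, while ids, URIs and values are made up for illustration.

        // Messages the SDK needs to produce or handle for resource support and update
        // notifications; method names follow the MCP spec, concrete values are made up.
        const listResourcesRequest = {
          jsonrpc: '2.0',
          id: 1,
          method: 'resources/list'
        };

        // Server-to-client notification that the set of available tools has changed
        const toolsChangedNotification = {
          jsonrpc: '2.0',
          method: 'notifications/tools/list_changed'
        };

        // Server-to-client notification that a subscribed resource was updated
        const resourceUpdatedNotification = {
          jsonrpc: '2.0',
          method: 'notifications/resources/updated',
          params: { uri: 'file:///logs/autoinst-log.txt' } // example URI
        };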

    Goals

    • Support for Resources
    • All response types (Audio, Resource Links, Embedded Resources...)
    • Tool/Prompt/Resource update notifications
    • Dynamic Tool/Prompt/Resource lists
    • New authentication mechanisms

    Resources


    Create a page with all devel:languages:perl packages and their versions by tinita

    Description

    Perl projects now live in git: https://src.opensuse.org/perl

    It would be useful to have an easy way to check which version of which Perl module is in devel:languages:perl. We also have meta overrides and patches for various modules, and it would be good to have them in a central place, so they are easier to look up and we can share them with other vendors.

    I did some initial data dump here a while ago: https://github.com/perlpunk/cpan-meta

    But I never had the time to automate this.

    I can also use the data to check if there are necessary updates (currently it uses data from download.opensuse.org, so there is some delay and it depends on building).

    Goals

    • Have a script that updates a central repository (e.g. https://src.opensuse.org/perl/_metadata) with metadata by looking at https://src.opensuse.org/perl/_ObsPrj (checking if there are any changes since the last run)
    • Create an HTML page with the list of packages (use JavaScript and some table library to make it easily searchable); see the sketch below
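
    A minimal sketch of what the page's client side could look like, assuming the update script also publishes a plain JSON mapping of package names to versions; the file name, element ids and data shape are assumptions, and a proper table library would replace the naive filtering.

        // Minimal client-side sketch, assuming a generated packages.json that maps
        // package names to versions; file name, element ids and shape are assumptions.
        async function renderPackageTable() {
          const res = await fetch('packages.json');
          const packages = await res.json();
          const tbody = document.querySelector('#packages tbody');
          for (const [name, version] of Object.entries(packages)) {
            const row = tbody.insertRow();
            row.insertCell().textContent = name;
            row.insertCell().textContent = version;
          }
          // naive substring filter; a proper table library would replace this
          document.querySelector('#search').addEventListener('input', (event) => {
            const needle = event.target.value.toLowerCase();
            for (const row of tbody.rows) {
              row.hidden = !row.cells[0].textContent.toLowerCase().includes(needle);
            }
          });
        }

        document.addEventListener('DOMContentLoaded', renderPackageTable);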

    Resources


    Move Uyuni Test Framework from Selenium to Playwright + AI by oscar-barrios

    Description

    This project aims to migrate the existing Uyuni Test Framework from Selenium to Playwright. The move will improve the stability, speed, and maintainability of our end-to-end tests by leveraging Playwright's modern features. We'll be rewriting the current Selenium code, written in Ruby, as Playwright code in TypeScript, which includes updating the test framework runner, step definitions, and configurations. This is also necessary because we're moving from Cucumber Ruby to CucumberJS.

    If you're still curious about the AI in the title, it was just a way to grab your attention. Thanks for your understanding.
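
    To make the migration target concrete, a step definition in the new stack could look roughly like this; it is sketched in plain JavaScript rather than TypeScript for brevity, and the URL, selector and hook layout are assumptions rather than the actual Uyuni framework.

        // Illustrative CucumberJS step definitions driving Playwright; written in plain
        // JavaScript for brevity, URL and selector are assumptions, not the real Uyuni UI.
        const { Given, Then, Before, After, setDefaultTimeout } = require('@cucumber/cucumber');
        const { chromium } = require('playwright');

        setDefaultTimeout(30 * 1000); // browser startup can exceed the 5s default

        let browser;
        let page;

        Before(async function () {
          browser = await chromium.launch();
          page = await browser.newPage();
        });

        After(async function () {
          await browser.close();
        });

        Given('I am on the Uyuni login page', async function () {
          await page.goto('https://uyuni.example.com/rhn/Login.do'); // assumed URL
        });

        Then('I should see the login form', async function () {
          await page.waitForSelector('form'); // assumed selector
        });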


    Goals

    • Migrate Core tests including Onboarding of clients
    • Improve test reliability: measure and confirm a significant reduction in flakiness.
    • Implement a robust framework: establish a well-structured and reusable Playwright test framework using CucumberJS

    Resources


    openQA tests needles elaboration using AI image recognition by mdati

    Description

    In the openQA test framework, to identify the status of a target SUT image (a screenshot of a GUI or CLI terminal), the needles framework scans the many pictures in its repository, each associated with a given set of tags (strings), and selects specific smaller parts of each available image. To manage the needles we currently need to keep many screenshots stored, variants of GUI and CLI-terminal images, each one accompanied by a dedicated set of data references (JSON).
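
    As a rough illustration, the metadata stored for a single needle looks roughly like this (values are made up; each needle in the repository is a PNG screenshot plus such a JSON file):

        // Illustrative shape of a single needle's metadata (values made up); each needle
        // in the repository is a PNG screenshot plus a JSON file roughly like this.
        const exampleNeedle = {
          tags: ['generic-desktop', 'displaymanager'], // strings used to select candidate needles
          area: [
            { xpos: 300, ypos: 200, width: 120, height: 80, type: 'match' } // region compared against the SUT screen
          ],
          properties: []
        };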

    A smarter framework, using image recognition based on AI or other image-processing tools that are nowadays widely available, could improve the matching process and hopefully reduce time and errors during image verification and detection.

    Goals

    The main scope of this idea is to match a non-text status of a running openQA test, i.e. an image of a shell console or an application-GUI screenshot, using less time and fewer resources, and with fewer errors in data preparation and use, than the current openQA needles framework; that is:

    • having a given SUT (system under test) GUI or CLI-terminal screenshot, with a local distribution of pixels or text commands related to a running test status,
    • we want to identify a desired target, e.g. a screen image status or a data/commands context,
      • based on AI/ML-pretrained archives of objects or other suitable processing tools,
      • possibly also able to identify objects not present in the archive, e.g. by means of AI/ML mechanisms.
    • the matching result should then be adapted so the openQA test can keep running, in place of the result that the original openQA needles framework would have produced.
    • We expect improvements in matching time (less time), reliability of the expected result (fewer errors) and simpler archive maintenance when adding/removing objects (smaller DB and fewer actions).

    Hackweek steps

    POC:

    • study the available tools
    • prepare a plan for the process to build
    • write and build a draft application
    • prepare the data archive from a subset of needles
    • initialize/pre-train the base archive
    • select a screenshot from the subset, removing/changing some part
    • run the POC application
    • check that the image type is identified with a good success rate.

    Resources

    The first step of this project is the identification of useful resources for this scope; some possibilities are:

    • SUSE AI and other ML tools (e.g. TensorFlow)
    • Tools able to manage images
    • RPA test tools (e.g. Robot Framework)
    • other.


    openQA log viewer by mpagot

    Description

    *** Warning: Are You at Risk for VOMIT? ***

    Do you find yourself staring at a screen, your eyes glossing over as thousands of lines of text scroll by? Do you feel a wave of text-based nausea when someone asks you to "just check the logs"?

    You may be suffering from VOMIT (Verbose Output Mental Irritation Toxicity).

    This dangerous, work-induced ailment is triggered by exposure to an overwhelming quantity of log data, especially from parallel systems. The human brain, not designed to mentally process 12 simultaneous autoinst-log.txt files, enters a state of toxic shock. It rejects the "Verbose Output," making it impossible to find the one critical error line buried in a 50,000-line sea of "INFO: doing a thing."

    Before you're forced to rm -rf /var/log in a fit of desperation, we present the digital antacid.

    No panic: we have The openQA Log Visualizer

    This is the UI antidote for handling toxic log environments. It bravely dives into the chaotic, multi-machine mess of your openQA test runs, finds all the related, verbose logs, and force-feeds them into a parser.
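
    As a toy illustration of the parsing side, a filter like the following could separate the interesting lines from the noise; the filtering rule and the log handling here are assumptions, real autoinst-log.txt files are richer and vary between test modules.

        // Toy filter for the parsing side; the "interesting line" rule is an assumption,
        // real autoinst-log.txt files are richer and vary between test modules.
        const fs = require('fs');
        const readline = require('readline');

        async function extractInterestingLines(path) {
          const interesting = [];
          const rl = readline.createInterface({ input: fs.createReadStream(path) });
          for await (const line of rl) {
            // keep warnings and errors, drop the "INFO: doing a thing" noise
            if (/\b(warn|error|die|fail)/i.test(line)) interesting.push(line);
          }
          return interesting;
        }

        extractInterestingLines('autoinst-log.txt').then((lines) => {
          for (const line of lines) console.log(line);
        });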


    Goals

    Work on the existing POC openqa-log-visualizer, focusing on a few specific tasks:

    • add support for more types of logs
    • extend the configuration file syntax beyond the current one
    • work on log parsing performance

    Find some beta-testers and collect feedback and ideas about features.

    If time allows, evaluate other UI frameworks and solutions (something simpler to distribute and run, maybe lower level to gain performance).

    Resources

    openqa-log-visualizer

