
Introduction:

As a qa-automation tester in Product QA for SLES and SUSE Manager, the SUTs (systems under test) I work with, such as SLE-12-SP2-beta, change every day: new packages and patches are merged into SP2, files change, and so on.

Problem: we don't have a tool that gives us metadata about the system, the way Machinery does.

```
machinery inspect SUT
machinery show SUT
```
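From a test script, the same inspection can be driven through the CLI. A minimal sketch in Python (the hostname is made up, and I assume the `machinery` command is available on the control host with SSH access to the SUT already set up):

```
import subprocess

# Inspect the SUT over SSH; machinery stores a "system description"
# under the host's name. (Hypothetical hostname.)
subprocess.run(["machinery", "inspect", "sle12sp2-sut.example.com"], check=True)

# Print the stored system description.
subprocess.run(["machinery", "show", "sle12sp2-sut.example.com"], check=True)
```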

Problem: what changed in SLE-12-SP2 from Build 8000 to Build 8400? (Oh, I lost the mail from the release manager!)

```
machinery compare
```

Problem: I found a regression with the systemd test suite on SLEnkins: the test suite fails on Build 7400, but Build 7399 is still OK.

What exactly has changed, not only for the package but for the whole system? -> Machinery
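To pin down a regression like that, one can inspect both builds and diff their stored descriptions. A minimal sketch, assuming both builds were already inspected and assuming machinery's `--scope` option to narrow the diff to packages (the description names are hypothetical):

```
import subprocess

def compare_builds(good, bad, scope="packages"):
    """Diff two stored machinery descriptions, limited to one scope."""
    result = subprocess.run(
        ["machinery", "compare", good, bad, "--scope=" + scope],
        capture_output=True, text=True)
    return result.stdout

# Hypothetical description names for the builds around the regression:
print(compare_builds("SLE-12-SP2-Build7399", "SLE-12-SP2-Build7400"))
```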

Problem: as QA I found a bug in NFS, and I have to report it.

Machinery can help me fill in the bug report, giving me exact information about quite different systems (SLES-12-SP1, openSUSE, etc.): what has changed with NFS? Or on the Fedora side?

RESULTS

First I want to thank the Machinery team, especially Mauro and Manuel, who supported me. During this Hack Week I integrated Machinery for qa-automation into the https://github.com/okirch/susetest library and into the SLEnkins automation framework.

This works really nicely for scanning systems under test (SLES, openSUSE).

For qa-automation, Machinery works nicely and I achieved what I was expecting! :)

I can scan and compare systems. This could be a Fedora, Debian, Arch Linux, whatever, against an openSUSE or a SLES.

From a QA and development perspective, and even a release management perspective, this is awesome.

NEW HACK!

A revolutionary perspective for QA auto-testing with Machinery

I'm really glad I can show you this:

https://slenkins.suse.de/jenkins/job/suite-machinery/32/console

In this example, I compare a SLES-12-SP2-LATEST with a build from 3-4 builds earlier.

The results are amazing.

With Machinery I managed to compare different builds of SLES-12-SP2; thanks to the scopes, I can see exactly what has changed and what has not. I can compare a SLE-12-SP2-GNOME with a SLE-12-SP2-Default and track the changes perfectly.
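As an illustration of how that flavor-against-flavor comparison could be scripted (the description names are made up; packages, patterns, and services are some of machinery's scopes):

```
import subprocess

# Compare the GNOME flavor against the Default one, one scope at a time.
for scope in ("packages", "patterns", "services"):
    subprocess.run(
        ["machinery", "compare", "SLE-12-SP2-GNOME", "SLE-12-SP2-Default",
         "--scope=" + scope])
```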

Concrete examples are here:

Scan of a system, with the console log for Machinery (after the tests are executed): https://slenkins.suse.de/jenkins/view/Test%20suites/job/suite-machinery/13/console

Or with the inspect command redirected to a file.txt in the Jenkins workspace:

```
setup()
machinery_sut = machinery(sut)

try:
    sometest(sut)
    machinery_sut.inspect()
    machinery_sut.show("tests-machinery")
    machinery_sut.compare("SLE-12-SP2-BUILDXXX-GNOME")
except Exception:
    # report the failure here
    raise
```
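For reference, a minimal sketch of what such a wrapper could look like. This is not the actual susetest API, just an illustration of a helper that shells out to the machinery CLI:

```
import subprocess

class Machinery:
    """Hypothetical helper binding the machinery CLI to one SUT."""

    def __init__(self, host):
        self.host = host  # hostname of the system under test

    def _run(self, *args):
        result = subprocess.run(["machinery", *args],
                                capture_output=True, text=True)
        return result.stdout

    def inspect(self):
        # Store a system description of the SUT under its hostname.
        return self._run("inspect", self.host)

    def show(self, name=None):
        # Print a stored description (defaults to this SUT's own).
        return self._run("show", name or self.host)

    def compare(self, other):
        # Diff this SUT's description against another stored one.
        return self._run("compare", self.host, other)

# Alias matching the lowercase name used in the snippet above:
machinery = Machinery
```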

Looking for hackers with the skills:

qa-automation susetest python slenkins machinery

This project is part of:

Hack Week 14


  Comments

    • e_bischoff
      over 8 years ago by e_bischoff | Reply

      For point 2), snapshots would be an alternative. Which does not mean that using machinery to do that is not interesting - on the contrary!

    • dmaiocchi
      over 8 years ago by dmaiocchi | Reply

      OK, first results are available here: https://slenkins.suse.de/jenkins/view/Test%20suites/job/suite-machinery/10/console
