Project Description
This project aims to create a tool for situations in which the current Cucumber test suite used for Uyuni and SUSE Manager is too heavyweight, and in which manual testing is still too time consuming.
I would like to create a tool that quickly sets up everything needed for the area under test, so manual testing is limited to the final checks and the decision whether a feature works or not.
The tool will be written in Rust, because the language itself looks just cool (and has some very interesting concepts) and could be an interesting choice for this purpose in combination with the XMLRPC API provided by Uyuni/SUSE Manager: XMLRPC calls are fast and error states are easy to handle.
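For illustration, a minimal sketch of what such calls could look like, using the community xmlrpc crate against the standard /rpc/api endpoint (the server address and credentials are placeholders, not part of the project):

// Cargo.toml (assumed): xmlrpc = "0.15", which uses a blocking reqwest client underneath
use xmlrpc::{Request, Value};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api = "https://uyuni.example.com/rpc/api";

    // auth.login returns a session key that all other calls take as their first argument.
    let session = Request::new("auth.login")
        .arg(String::from("admin"))
        .arg(String::from("secret"))
        .call_url(api)?;
    let key = session.as_str().ok_or("auth.login did not return a string")?;

    // Example follow-up call: list the systems visible to this user.
    let systems = Request::new("system.listSystems")
        .arg(key.to_owned())
        .call_url(api)?;
    if let Value::Array(list) = systems {
        println!("{} systems registered", list.len());
    }

    // Always close the session again.
    Request::new("auth.logout").arg(key.to_owned()).call_url(api)?;
    Ok(())
}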
Goal for this Hackweek
Implement support for the retail features, so that:
- retail formulas configuration
- build hosts preparation
- creation of Kiwi image profiles
- scheduling of Kiwi image builds
- applying of the highstate
...can all be tested via this tool.
Setup of the retail formulas will be handled via the JSON files already used to store their configuration.
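A rough sketch of how such a stored formula JSON file could be loaded and sanity-checked with serde_json before being converted to XMLRPC structures (the file name is hypothetical, and the exact formula API method still needs to be confirmed against the Uyuni API docs):

// Cargo.toml (assumed): serde_json = "1"
use serde_json::Value;
use std::fs;

/// Load a formula configuration from the same JSON file the formula UI stores.
fn load_formula(path: &str) -> Result<Value, Box<dyn std::error::Error>> {
    let raw = fs::read_to_string(path)?;
    let data: Value = serde_json::from_str(&raw)?;
    // Basic sanity check before sending the data anywhere: the top level must be an object.
    if !data.is_object() {
        return Err(format!("{path}: expected a JSON object at the top level").into());
    }
    Ok(data)
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Hypothetical path; in practice this is the stored retail/saltboot formula data.
    let formula = load_formula("saltboot-formula.json")?;
    println!("loaded {} top-level keys", formula.as_object().unwrap().len());
    // From here the value would be converted to XMLRPC structs and submitted
    // through the formula API calls (method names to be confirmed).
    Ok(())
}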
Resources
This project is part of:
Hack Week 20
Similar Projects
mgr-ansible-ssh - Intelligent, Lightweight CLI for Distributed Remote Execution by deve5h
Description
By the end of Hack Week, the target is to deliver a minimal functional first version (MVP) of a custom command-line tool named mgr-ansible-ssh (a unified wrapper for BOTH ad-hoc shell commands & playbooks) that allows operators to:
- Execute arbitrary shell commands on thousands of remote machines simultaneously using Ansible Runner, with artifacts saved locally.
- Pass runtime options such as the inventory file, the remote command string or playbook to execute, parallel forks, limits, dry-run mode, or suppressing standard Ansible output.
- Leverage existing SSH trust relationships without additional setup.
- Provide a clean, intuitive CLI with --help for ease of use, offering a consistent UX and a CI-friendly interface.
- Establish a foundation that can later be extended with advanced features such as logging, grouping, interactive shell mode, safe-command checks, and parallel execution tuning.
The MVP should enable day-to-day operations to efficiently target thousands of machines with a single, consistent interface.
Goals
Primary Goals (MVP):
Build a functional CLI tool (mgr-ansible-ssh) capable of executing shell commands on multiple remote hosts using Ansible Runner. Test the tool across a large distributed environment (1000+ machines) to validate its performance and reliability.
The aim is to significantly reduce the zypper deployment time across all 351 RMT VM servers in our MLM cluster by eliminating the dependency on the taskomatic service, bringing execution down to a fraction of the current duration. The tool should also support multiple runtime flags, such as:
mgr-ansible-ssh: Remote command execution wrapper using Ansible Runner
Usage: mgr-ansible-ssh [--help] [--version] [--inventory INVENTORY]
[--run RUN] [--playbook PLAYBOOK] [--limit LIMIT]
[--forks FORKS] [--dry-run] [--no-ansible-output]
Required Arguments
--inventory, -i Path to Ansible inventory file to use
Any One of the Arguments Is Required
--run, -r Execute the specified shell command on target hosts
--playbook, -p Execute the specified Ansible playbook on target hosts
Optional Arguments
--help, -h Show the help message and exit
--version, -v Show the version and exit
--limit, -l Limit execution to specific hosts or groups
--forks, -f Number of parallel Ansible forks
--dry-run Run in Ansible check mode (requires -p or --playbook)
--no-ansible-output Suppress Ansible stdout output
Secondary/Stretch Goals (if time permits):
- Add pretty output formatting (success/failure summary per host).
- Implement basic logging of executed commands and results.
- Introduce safety checks for risky commands (shutdown, rm -rf, etc.).
- Package the tool so it can be installed with pip or stored internally.
Resources
Collaboration is welcome from anyone interested in CLI tooling, automation, or distributed systems. Skills that would be particularly valuable include:
- Python, especially around CLI development (argparse, click, rich)
Ansible to Salt integration by vizhestkov
Description
We already have an initial integration of Ansible in Salt, with the possibility to run playbooks from the salt-master on a salt-minion used as an Ansible control node.
In this project I want to check whether it is possible to make Ansible work over the Salt transport: basically, run Ansible playbooks through the existing established Salt (ZeroMQ) transport without using SSH at all.
It could be a good way for end users to reuse the Ansible playbooks and modules they are used to, with no complex configuration effort, on top of an existing Salt (or Uyuni/SUSE Multi-Linux Manager) infrastructure.
Goals
- [v] Prepare the testing environment with Salt and Ansible installed
- [v] Explore the Ansible codebase to figure out possible ways of integration
- [v] Create a Salt/Uyuni inventory module
- [v] Make basic modules work without a separate SSH connection, reusing the existing Salt connection
- [v] Test some of the most basic playbooks
Resources
Enhance setup wizard for Uyuni by PSuarezHernandez
Description
This project aims to enhance the initial setup of Uyuni after its installation, so that it is easier for a user to start using it.
Uyuni currently uses "uyuni-tools" (mgradm) as the installation entrypoint to trigger the installation of Uyuni on the given host, but it does not really perform an initial setup, for instance:
- user creation
- adding products / channels
- generating bootstrap repos
- creating activation keys
- ...
Goals
- Provide an initial setup wizard as part of the mgradm Uyuni installation
Resources
Uyuni Saltboot rework by oholecek
Description
When Uyuni switched over to containerized proxies, we had to abandon the Salt-based saltboot infrastructure we had before. Uyuni already had an integration with the Cobbler provisioning server, and the saltboot infrastructure was re-implemented on top of this Cobbler integration.
What was not obvious from the start was that Cobbler, with all its features, is woefully slow when dealing with saltboot-sized environments. We made some performance improvements, introduced transactions, and generally tried to make this setup usable. However, the underlying slowness remained.
Goals
This project is not about inventing new things; it is about finally implementing the saltboot infrastructure directly in the Uyuni server core.
Instead of having Cobbler generate GRUB and pxelinux configurations for all the thousands of systems and branches, we will provide a GET access point to retrieve the GRUB or pxelinux file during boot:
/saltboot/group/grub/$fqdn, and similarly for systems /saltboot/system/grub/$mac
Next, we adapt our tftpd translator to query these endpoints when asked for the default or MAC-based config.
Lastly, a similar thing needs to be done on our Apache server when HTTP UEFI boot is used.
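As an illustration only (the real endpoints would live in the Uyuni server core, not in a separate service), a small sketch of how the proposed URL scheme could be decomposed:

/// Sketch of the proposed saltboot path scheme: /saltboot/<scope>/<format>/<id>.
#[derive(Debug, PartialEq)]
enum Scope { Group, System }

#[derive(Debug, PartialEq)]
enum Format { Grub, Pxelinux }

fn route(path: &str) -> Option<(Scope, Format, &str)> {
    let mut parts = path.trim_start_matches('/').splitn(4, '/');
    if parts.next()? != "saltboot" {
        return None;
    }
    let scope = match parts.next()? {
        "group" => Scope::Group,   // identified by FQDN
        "system" => Scope::System, // identified by MAC address
        _ => return None,
    };
    let format = match parts.next()? {
        "grub" => Format::Grub,
        "pxelinux" => Format::Pxelinux,
        _ => return None,
    };
    let id = parts.next()?; // $fqdn or $mac
    Some((scope, format, id))
}

fn main() {
    assert_eq!(
        route("/saltboot/system/grub/00:11:22:33:44:55"),
        Some((Scope::System, Format::Grub, "00:11:22:33:44:55"))
    );
    println!("routing OK");
}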
Resources
Set Up an Ephemeral Uyuni Instance by mbussolotto
Description
To test, check, and verify the latest changes in the master branch, we want to easily set up an ephemeral environment.
Goals
- Create an ephemeral environment manually
- Create an ephemeral environment automatically
Resources
https://github.com/uyuni-project/uyuni
https://www.uyuni-project.org/uyuni-docs/en/uyuni/index.html
Learn a bit of embedded programming with Rust in a micro:bit v2 by aplanas
Description
micro:bit is a small single-board computer with an ARM Cortex-M4 with the FPU extension, a very constrained amount of memory, and a bunch of sensors and LEDs.
The board is very well documented, with schematics and code for all the features available, so it is an excellent platform for learning embedded programming.
Rust is a systems programming language that can generate ARM code and has crates (libraries) to access the micro:bit hardware. There is plenty of documentation about how to write small programs that run on the micro:bit.
Goals
Start learning about embedded programming in Rust, and maybe write some code for the small KS4036F robot car from Keyestudio.
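A natural first milestone is the classic blinky from the Discovery Book; a minimal sketch, assuming the microbit-v2 BSP crate and the embedded-hal 0.2 style traits (exact crate versions and trait paths differ between releases):

#![no_std]
#![no_main]

// Assumed crates: cortex-m-rt, panic-halt, embedded-hal 0.2, microbit-v2 (re-exports nrf52833-hal).
use cortex_m_rt::entry;
use embedded_hal::digital::v2::OutputPin;
use microbit::{board::Board, hal::prelude::*, hal::timer::Timer};
use panic_halt as _;

#[entry]
fn main() -> ! {
    let mut board = Board::take().unwrap();
    let mut timer = Timer::new(board.TIMER0);

    // The 5x5 LED matrix is driven by rows and columns; with col1 pulled low,
    // toggling row1 blinks the top-left LED.
    board.display_pins.col1.set_low().unwrap();
    let mut row1 = board.display_pins.row1;

    loop {
        row1.set_high().unwrap();
        timer.delay_ms(500_u16);
        row1.set_low().unwrap();
        timer.delay_ms(500_u16);
    }
}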
Resources
- micro:bit
- KS4036F
- microbit technical documentation
- schematic
- impl Rust for micro:bit
- Rust Embedded MB2 Discovery Book
- nRF-HAL
- nRF Microbit-v2 BSP (blocking)
- knurling-rs
- C++ microbit codal
- microbit-bsp for Embassy
- Embassy
Diary
Day 1
- Start reading https://mb2.implrust.com/abstraction-layers.html
- Prepare the dev environment (cross compiler, probe-rs)
- Flash first code in the board (blinky led)
- Checking differences between BSP and HAL
- Compile and install a more complex example, with stack protection
- Reading about the simplicity of xtask as an alias for workspace execution
- Reading the C++ code of the official micro:bit libraries. They have a font!
Day 2
- There are multiple BSPs for the micro:bit. One of them uses async code for non-blocking operations
- Download and study a bit the API for microbit-v2, the official nRF crate
- Take a look at the KS4036F programming; it seems that the communication is multiplexed via I2C
- The motor speed can be selected via PWM (pulse width modulation): keep the pin powered for a larger fraction of each period (a higher duty cycle) and the speed increases
- Scrolling some text
- Debug by printing! defmt is a crate that can be used with probe-rs to emit logs
- Start reading input from the board: buttons
- The logo can be touched and detected as a floating point value
Day 3
- A bit confused about how to read the float value from a pin
Arcticwolf - A rust based user space NFS server by vcheng
Description
Rust has performance similar to C. It also has better async I/O support and good integration with io_uring. This project aims to develop a user-space NFS server based on Rust.
Goals
- Get an understanding of how cargo works
- Get an understanding of how XDR code is generated with xdrgen (see the encoding sketch after this list)
- Create a Rust-based NFS server that supports basic operations like mount/readdir/read/write
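To get a feeling for what xdrgen automates, here is a minimal hand-rolled sketch of XDR (RFC 4506) encoding; the field choices are made up for illustration and are not the project's actual protocol structs:

// XDR basics: every item is big-endian and padded to a 4-byte boundary.
fn xdr_u32(out: &mut Vec<u8>, v: u32) {
    out.extend_from_slice(&v.to_be_bytes());
}

fn xdr_opaque(out: &mut Vec<u8>, data: &[u8]) {
    xdr_u32(out, data.len() as u32);                      // length prefix
    out.extend_from_slice(data);                          // payload
    out.resize(out.len() + (4 - data.len() % 4) % 4, 0);  // zero padding to 4 bytes
}

fn main() {
    // Roughly what the arguments of an NFS LOOKUP look like: a file handle and a name.
    let mut buf = Vec::new();
    xdr_opaque(&mut buf, &[0xde, 0xad, 0xbe, 0xef]); // fake file handle
    xdr_opaque(&mut buf, b"hello.txt");              // file name
    println!("{} bytes on the wire: {:02x?}", buf.len(), buf);
}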
Result (2025 Hackweek)
- In progress PR: https://github.com/Vicente-Cheng/arcticwolf/pull/1
Resources
https://github.com/Vicente-Cheng/arcticwolf
Learn how to use the Relm4 Rust GUI crate by xiaoguang_wang
Relm4 is based on gtk4-rs and compatible with libadwaita. The gtk4-rs crate provides all the tools necessary to develop applications. Building on this foundation, Relm4 makes development more idiomatic, simpler, and faster.
https://github.com/Relm4/Relm4
Mail client with mailing list workflow support in Rust by acervesato
Description
The idea is to create a mail user interface in Rust that supports the mailing-list patch workflow. I know aerc is already there, but I would like to create something simpler, without integrated protocols: just a plain user interface that uses some crates to read and create emails, which are fetched and sent via external tools.
I already know Rust, but not the async support, which is needed in this case in order to handle events inside the mail folder and to send notifications.
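For the folder events, a minimal sketch of watching a maildir with the notify crate (the path is a placeholder for whatever mbsync populates); the real tool would feed these events into the async UI loop:

// Cargo.toml (assumed): notify = "6"
use notify::{recommended_watcher, Event, RecursiveMode, Watcher};
use std::path::Path;
use std::sync::mpsc;

fn main() -> notify::Result<()> {
    let (tx, rx) = mpsc::channel::<notify::Result<Event>>();
    let mut watcher = recommended_watcher(tx)?;
    // Hypothetical maildir path kept in sync by an external tool such as mbsync.
    watcher.watch(Path::new("Maildir/INBOX/new"), RecursiveMode::NonRecursive)?;

    for res in rx {
        match res {
            Ok(event) => println!("mail folder changed: {:?}", event.paths),
            Err(e) => eprintln!("watch error: {e}"),
        }
    }
    Ok(())
}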
Goals
- simple user interface in the style of aerc, with some vim keybindings for motions and search
- automatic run of external tools (like mbsync) for checking emails
- automatic run of commands for notifications
- apply patch sets from the ML
- tree-sitter support with styles
Resources
- ratatui: user interface (https://ratatui.rs/)
- notify: folder watcher (https://docs.rs/notify/latest/notify/)
- mail-parser: parser for emails (https://crates.io/crates/mail-parser)
- mail-builder: create emails in proper format (https://docs.rs/mail-builder/latest/mail_builder/)
- gitpatch: ML support (https://crates.io/crates/gitpatch)
- tree-sitter-rust: support for mail format (https://crates.io/crates/tree-sitter)
Looking at Rust if it could be an interesting programming language by jsmeix
Get some basic understanding of Rust's security-related features from a general point of view.
This Hack Week project is not to learn Rust to become a Rust programmer. This might happen later but it is not the goal of this Hack Week project.
The goal of this Hack Week project is to evaluate if Rust could be an interesting programming language.
An interesting programming language must make it easier to write code that is correct, and that stays correct as others maintain and enhance it over time, rather than the opposite.
Testing and adding GNU/Linux distributions on Uyuni by juliogonzalezgil
Join the Gitter channel! https://gitter.im/uyuni-project/hackweek
Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines. It also manages configuration, can run audits, build image containers, monitor and much more!
Currently there are a few distributions that are completely untested on Uyuni or SUSE Manager (AFAIK), or have not been tested for a long time, and it could be interesting to find out how hard it would be to work with them and, if possible, fix whatever is broken.
For newcomers, the easiest distributions are those based on DEB or RPM packages. Distributions with other package formats are doable, but will require adapting the Python and Java code to be able to sync and analyze such packages (and if Salt does not support those packages, it will need changes as well). So if you want a distribution with another package format, make sure you are comfortable handling such changes.
No developer experience? No worries! We had non-developers contributors in the past, and we are ready to help as long as you are willing to learn. If you don't want to code at all, you can also help us preparing the documentation after someone else has the initial code ready, or you could also help with testing :-)
The idea is to test Salt clients (including bootstrapping with the bootstrap script) and salt-ssh clients.
To consider that a distribution has basic support, we should cover at least (points 3-6 are to be tested for both salt minions and salt ssh minions):
- Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
- Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
- Package management (install, remove, update...)
- Patching
- Applying any basic salt state (including a formula)
- Salt remote commands
- Bonus point: Java part for product identification, and monitoring enablement
- Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
- Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
- Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)
If something is breaking, we can try to fix it, but the main idea is to research how well supported it is right now. Beyond that, it's up to each project member how much to hack :-)
- If you don't have knowledge about some of the steps: ask the team
- If you still don't know what to do: switch to another distribution and keep testing.
This card is for EVERYONE, not just developers. Seriously! We had people from other teams helping that were not developers, and added support for Debian and new SUSE Linux Enterprise and openSUSE Leap versions :-)
In progress/done for Hack Week 25
Guide
We started writing a guide, Adding a new client GNU Linux distribution to Uyuni, at https://github.com/uyuni-project/uyuni/wiki/Guide:-Adding-a-new-client-GNU-Linux-distribution-to-Uyuni, to make things easier for everyone, especially those not too familiar with Uyuni or not very technical.
openSUSE Leap 16.0
The distribution we will all love!
https://en.opensuse.org/openSUSE:Roadmap#DRAFTScheduleforLeap16.0
Current status: we started last year, and it's complete now for Hack Week 25! :-D
- [W] Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file). NOTE: done; the client tools for SL Micro 6 are being used, as those for SLE 16.0/openSUSE Leap 16.0 are not available yet.
- [W] Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator).
- [W] Package management (install, remove, update...). Works, even reboot requirement detection.
Multimachine on-prem test with opentofu, ansible and Robot Framework by apappas
Description
A long time ago I explored using the Robot Framework for testing. A big deficiency compared to our openQA setup is that bringing up and configuring the connection to a test machine is out of scope.
Nowadays we have a way¹ to deploy SUTs outside openQA, but we only use it for cloud tests in conjunction with openQA. Using knowledge gained from that project, I am going to try to create a test scenario that replicates an openQA test, but this time including the deployment and setup of the SUT.
Goals
Create a simple multimachine test scenario with the support server and SUT all created by the Robot Framework.
Resources
- https://github.com/SUSE/qe-sap-deployment
- terraform-libvirt-provider
openQA tests needles elaboration using AI image recognition by mdati
Description
In the openQA test framework, to identify the status of a target SUT image (a screenshot of a GUI or CLI terminal), the needles framework scans the many pictures in its repository, each associated with a given set of tags (strings), selecting specific smaller parts of each available image. For needle management we currently need to keep many screenshots stored, variants of GUI and CLI-terminal images, each one accompanied by a dedicated set of data references (JSON).
A smarter framework, using image recognition based on AI or other image-processing tools that are nowadays widely available, could improve the matching process and hopefully reduce time and errors during the image verification and detection process.
Goals
The main scope of this idea is to match a "graphical" image of the console or GUI status of a running openQA test (an image of a shell console or an application-GUI screenshot) using less time and fewer resources, and with fewer errors in data preparation and use, than the current openQA needles framework; that is:
- having a given SUT (system under test) GUI or CLI-terminal screenshot, with a local distribution of pixels or text commands related to a running test status,
- we want to identify a desired target, e.g. a screen image status or data/commands context,
- based on AI/ML-pretrained archives of objects or other suitable processing tools,
- possibly also able to identify objects not present in the archive, e.g. by means of AI/ML mechanisms.
- the matching result should then be adapted to continue working in the openQA test, in place of the result that would have been produced by the original openQA needles framework.
- We expect an improvement in matching time (less time), in reliability of the expected result (fewer errors), and a simplification of archive maintenance when adding/removing objects (a smaller DB and fewer actions).
Hackweek POC:
Main steps
- Phase 1 - Plan
- study the available tools
- prepare a plan for the process to build
- Phase 2 - Implement
- write and build a draft application
- Phase 3 - Data
- prepare the data archive from a subset of needles
- initialize/pre-train the base archive
- select a screenshot from the subset, removing/changing some part
- Phase 4 - Test
- run the POC application
- expect the image type to be identified with a good success rate.
Resources
The first step of this project is the identification of useful resources for this scope; some possibilities are:
- SUSE AI and other ML tools (e.g. TensorFlow)
- Tools able to manage images
- RPA test tools (e.g. Robot Framework)
- other.
Project references
- Repository: openqa-needles-AI-driven