I have bought a Raspberry Pi 400 and would like to experiment with how it integrates into the SUSE ecosystem.
Project Description
Possible experiments:
- flash a micro SD card from a MacBook to install Ubuntu, SLES and/or openSUSE
- same from an external USB-C SSD disk, compare speed
- deploy SUSE Manager locally and register Pi
- integrate into development VPN and register Pi into existing SUSE Manager instance
- set up a local PXE server and try to install via pure PXE/TFTP
- same from local SUSE Manager instance
- try to develop in ARM assembler on the Pi
- write findings or record a video
That's quite a lot and I will probably be able to do only a small part of it.
Goal for this Hackweek
Get familiar with the Raspberry Pi, our SUSE implementation and SUSE Manager integration.
Resources
Since the hardware is only available locally, it will probably be a one-man show, but feel free to join or just support!
Looking for hackers with the skills:
This project is part of:
Hack Week 20
Activity
Comments
- over 4 years ago by e_bischoff | Reply
Stealing some documentation from @nadvornik: the Raspberry Pi does not have a UEFI implementation in firmware; with SLES it uses U-Boot with UEFI support. This means it needs an SD card with a special image for the first boot. The boot process is as follows: the RPi boots from the SD card, loads U-Boot, the kernel and the initrd from the SD card, then connects to SUSE Manager and continues normally - it checks and, if needed, deploys the system image and boots it.
- over 4 years ago by mlnoga | Reply
Hi, some thoughts. Benchmarking has been done a couple of times, e.g. by Tom's Hardware.
A real problem to solve for SD-card based systems is card failure. Automated backup/restore onto a fresh SD card would find quite a few fans. Most best-practice sites just refer to full disk cloning. Maybe there's a smarter way, e.g. by using Machinery to tell apart the base OS & changes?
What I find really vexing on the Pi 4 is the lack of a proper 64-bit OS. Even on a Pi 4 with 8 GB, apps can use at most 2.5 GB. The 64-bit Raspberry Pi OS seems to have been stuck in perpetual beta since last summer.
- over 4 years ago by e_bischoff | Reply
@mlnoga yes, the Raspbian that was installed by default is 32-bit. The very first thing I did was to flash Ubuntu, which was 64-bit right away.
- over 4 years ago by e_bischoff | Reply
@a_faerber yes, I am planning to try from a USB disk as well as from an SD card.
- over 4 years ago by e_bischoff | Reply
I am updating this file as I progress: http://w3.suse.de/~ebischoff/hackweek20.pdf .
- over 4 years ago by e_bischoff | Reply
@a_faerber I managed to boot from a USB HDD, but not from a USB ISO. Any tips welcome.
- over 4 years ago by joachimwerner | Reply
A few additional comments from working with Raspberry Pi 4s with 8GB and the new SLE Micro 5.0 images:
The SLE Micro RAW images can easily be copied to SD cards with dd or a tool like the Mac one you used.
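For reference, a minimal sketch of that dd copy from a Mac terminal; the image file name and the /dev/disk2 device are placeholders and must be adapted to your download and to what diskutil list reports:

```
# Identify the SD card device first (destructive if you pick the wrong one!)
diskutil list

# Unmount the card, then write the decompressed RAW image (names are examples)
diskutil unmountDisk /dev/disk2
xz -d SLE-Micro-RaspberryPi.raw.xz
sudo dd if=SLE-Micro-RaspberryPi.raw of=/dev/rdisk2 bs=4m   # use bs=4M with GNU dd on Linux
sync
```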
The nice thing with the SLE Micro images is that you can even boot them completely headless by defining the root password and other configuration you'd like to be applied automatically via combustion and/or ignition. See the (beta) documentation here: https://susedoc.github.io/doc-sle/main/html/SLE-Micro-installation/article-installation.html#sec-slem-image-deployment
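As an illustration, a minimal Combustion script sketch along the lines of that documentation; it assumes a configuration medium whose filesystem is labelled combustion and which carries this file as /combustion/script, and the password and hostname are placeholder values:

```
#!/bin/bash
# combustion: network
set -euo pipefail

# Set a root password (placeholder value, change it)
echo 'root:linux' | chpasswd

# Further first-boot configuration, as examples
echo 'rpi400' > /etc/hostname
systemctl enable sshd.service
```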
The one thing that would be nice is if the ignition/combustion config could also be put directly into a directory on the SD card. Raspbian allows this to a certain extent. For example, you can just touch a file "ssh" in /boot to enable ssh.
I didn't try out booting from a hard disk or USB SSD.
- over 4 years ago by e_bischoff | Reply
Thanks for the hint Joachim, I read your draft documentation.
During my Hack Week, I indeed did my tests with JeOS raw images, and not with SUSE Linux Enterprise Micro.
I indeed also did not investigate automation of the initial configuration. My expectation was that I would see some cloud-init script in action, but if there was one, I missed it :-P . That being said, resizing the root filesystem to make it span the whole root partition seemed to be the only initial step I had to do by hand.
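For the record, a hedged sketch of that manual resize step; it assumes the SD card shows up as /dev/mmcblk0 with the root partition as partition 3 and a Btrfs root filesystem, which may differ on your image:

```
# Grow partition 3 to fill the card (growpart comes from cloud-utils-growpart)
growpart /dev/mmcblk0 3

# Let the Btrfs root filesystem use the newly added space
btrfs filesystem resize max /

# For an ext4 root, resize2fs /dev/mmcblk0p3 would be the equivalent step
```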
Similar Projects
Testing and adding GNU/Linux distributions on Uyuni by juliogonzalezgil
Join the Gitter channel! https://gitter.im/uyuni-project/hackweek
Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines. It also manages configuration, can run audits, build image containers, monitor and much more!
Currently there are a few distributions that are completely untested on Uyuni or SUSE Manager (AFAIK), or just have not been tested for a long time, and it could be interesting to find out how hard it would be to work with them and, if possible, fix whatever is broken.
For newcomers, the easiest distributions are those based on DEB or RPM packages. Distributions with other package formats are doable, but will require adapting the Python and Java code to be able to sync and analyze such packages (and if salt does not support those packages, it will need changes as well). So if you want a distribution with other packages, make sure you are comfortable handling such changes.
No developer experience? No worries! We had non-developer contributors in the past, and we are ready to help as long as you are willing to learn. If you don't want to code at all, you can also help us prepare the documentation after someone else has the initial code ready, or you could also help with testing :-)
The idea is testing Salt and Salt-ssh clients, but NOT traditional clients, which are deprecated.
To consider that a distribution has basic support, we should cover at least (points 3-6 are to be tested for both salt minions and salt ssh minions):
- Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
- Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
- Package management (install, remove, update...)
- Patching
- Applying any basic salt state (including a formula)
- Salt remote commands (see the sketch after this list)
- Bonus point: Java part for product identification, and monitoring enablement
- Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
- Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
- Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)
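To make points 5 and 6 concrete, a couple of standard Salt invocations that would typically be run from the Uyuni/SUSE Manager server during such a test; the minion IDs and the state name are placeholders, not part of the actual checklist:

```
# Apply a basic Salt state to a registered minion (state name is a placeholder)
salt 'debian13-minion' state.apply mystate

# Run a remote command on the same minion
salt 'debian13-minion' cmd.run 'uptime'

# The equivalent checks for a salt-ssh minion go through salt-ssh
salt-ssh 'debian13-sshminion' test.ping
```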
If something is breaking: we can try to fix it, but the main idea is to research how well supported it is right now. Beyond that it's up to each project member how much to hack :-)
- If you don't have knowledge about some of the steps: ask the team
- If you still don't know what to do: switch to another distribution and keep testing.
This card is for EVERYONE, not just developers. Seriously! We have had people from other teams, who were not developers, help out and add support for Debian and for new SUSE Linux Enterprise and openSUSE Leap versions :-)
Pending
Debian 13
The new version of the beloved Debian GNU/Linux OS
- [W] Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
- [ ] Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
- [ ] Package management (install, remove, update...)
- [ ] Patching (if patch information is available, could require writing some code to parse it, but IIRC we have support for Ubuntu already). Probably not for Debian as IIRC we don't support patches yet.
- [ ] Applying any basic salt state (including a formula)
- [ ] Salt remote commands
- [ ] Bonus point: Java part for product identification, and monitoring enablement
- [ ] Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
- [ ] Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
- [ ] Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)
Set Uyuni to manage edge clusters at scale by RDiasMateus
Description
Prepare a PoC on how to use MLM to manage edge clusters. Those clusters are normally identical across locations, and we have a large number of them.
The goal is to produce a set of steps/best practices/scripts to help users manage this kind of setup.
Goals
Step 1: Manual set-up
Goal: Have a running application in k3s and be able to update it using the System Upgrade Controller (SUC)
- Deploy a Micro 6.2 machine
- Deploy k3s - single node (see the sketch after this list)
  - https://docs.k3s.io/quick-start
- Build/find a simple web application (static page)
- Build/find a helm chart to deploy the application
- Deploy the application on the k3s cluster
- Install app updates through helm update
- Install OS updates using MLM
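A minimal sketch of the single-node part of step 1, following the k3s quick-start; the chart path, release name and namespace are placeholders for whatever simple web application gets picked:

```
# Single-node k3s install, as described in the quick-start guide
curl -sfL https://get.k3s.io | sh -

# Check the node is up
sudo k3s kubectl get nodes

# Deploy the example application from a (placeholder) helm chart,
# and later ship app updates with the same command
helm upgrade --install my-web ./charts/my-web --namespace web --create-namespace
```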
Step 2: Automate day 1
Goal: Trigger the application deployment and update from MLM
- Salt states for the application (with static data)
- Deploy the application helm chart, if not present (see the sketch after this list)
- Install app updates through helm chart parameters
- Link it to Git
- Define how to link the state to the machines (based on some pillar data? Using configuration channels by importing the state? Naming convention?)
- Use a git update to trigger a helm chart app update
- Recurrent state applying the configuration channel?
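One possible shape for the "deploy the helm chart if not present" state, written here as a plain Salt SLS dropped into the state tree; the paths, release name and state name are assumptions for illustration, not part of the original plan:

```
# Hypothetical state file, e.g. /srv/salt/webapp/init.sls on the MLM server
cat > /srv/salt/webapp/init.sls <<'EOF'
deploy_webapp_chart:
  cmd.run:
    - name: helm upgrade --install my-web /srv/charts/my-web --namespace web --create-namespace
    - unless: helm status my-web --namespace web
EOF

# Applied to the edge nodes with, for example:
salt 'edge*' state.apply webapp
```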
Step 3: Multi-node cluster
Goal: Use SUC to update a multi-node cluster.
- Create a multi-node cluster
- Deploy the application
  - Call the helm update/install only on the control plane?
- Install app updates through helm update
- Prepare a SUC for OS updates (k3s also? How?) (see the sketch after this list)
  - https://github.com/rancher/system-upgrade-controller
  - https://documentation.suse.com/cloudnative/k3s/latest/en/upgrades/automated.html
  - Update/deploy the SUC?
  - Update/deploy the SUC CRD with the update procedure
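For the k3s part, a hedged sketch of a system-upgrade-controller Plan, modelled on the k3s automated-upgrades documentation; the target k3s version and node label are placeholders, and the SUC itself must already be deployed in the cluster:

```
# Hypothetical upgrade plan for the control-plane nodes (version/labels are examples)
kubectl apply -f - <<'EOF'
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - {key: node-role.kubernetes.io/control-plane, operator: In, values: ["true"]}
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  version: v1.30.2+k3s1
EOF
```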
Move Uyuni Test Framework from Selenium to Playwright + AI by oscar-barrios

Description
This project aims to migrate the existing Uyuni Test Framework from Selenium to Playwright. The move will improve the stability, speed, and maintainability of our end-to-end tests by leveraging Playwright's modern features. We'll be rewriting the current Selenium code in Ruby to Playwright code in TypeScript, which includes updating the test framework runner, step definitions, and configurations. This is also necessary because we're moving from Cucumber Ruby to CucumberJS.
If you're still curious about the AI in the title, it was just a way to grab your attention. Thanks for your understanding.
Nah, let's be honest
AI helped a lot to vibe-code a good part of the Ruby methods of the test framework, moving them to TypeScript, along with the migration from Capybara to Playwright. I've been using "Cline" as a plugin for the WebStorm IDE, with the Gemini API behind it.
Goals
- Migrate core tests, including onboarding of clients
- Improve test reliability: Measure and confirm a significant reduction in flakiness.
- Implement a robust framework: Establish a well-structured and reusable Playwright test framework using CucumberJS
Resources
- Existing Uyuni Test Framework (Cucumber Ruby + Capybara + Selenium)
- My Template for CucumberJS + Playwright in TypeScript
- Started Hackweek Project
Ansible to Salt integration by vizhestkov
Description
We already have an initial integration of Ansible in Salt, with the possibility to run playbooks from the salt-master on a salt-minion used as an Ansible control node.
In this project I want to check whether it is possible to make Ansible work over Salt's transport. Basically, run Ansible playbooks through the existing established Salt (ZeroMQ) transport, without using SSH at all.
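For context, this is roughly how the existing integration is driven today via Salt's ansiblegate module; the minion ID, playbook name and rundir are placeholders, and exact argument names may vary between Salt releases:

```
# Run a playbook on the minion that acts as the Ansible control node
salt 'ansible-controlnode' ansible.playbooks playbook=site.yml rundir=/srv/ansible

# The idea of this project is to have the resulting Ansible tasks travel over
# the Salt (ZeroMQ) transport to the managed nodes instead of opening SSH sessions.
```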
Goals
- [v] Prepare the testing environment with Salt and Ansible installed
- [v] Discover Ansible codebase to figure out possible ways of integration
- [v] Create Salt/Uyuni inventory module
- [v] Make basic modules work without using a separate SSH connection, reusing the existing Salt connection instead
- [ ] Test some most common playbooks
Resources
TBD
Set Up an Ephemeral Uyuni Instance by mbussolotto
Description
To test, check, and verify the latest changes in the master branch, we want to easily set up an ephemeral environment.
Goals
- Create an ephemeral environment manually
- Create an ephemeral environment automatically
Resources
https://github.com/uyuni-project/uyuni
https://www.uyuni-project.org/uyuni-docs/en/uyuni/index.html
Uyuni read-only replica by cbosdonnat
Description
For now, there is no possible HA setup for Uyuni. The idea is to explore setting up a read-only shadow instance of an Uyuni server and make it as useful as possible.
Possible things to look at:
- live sync of the database, probably using the WAL (see the sketch after this list). Some of the tables may have to be skipped or some features disabled on the RO instance (taskomatic, PXT sessions…)
- Can we use a load balancer that routes read-only requests to either instance and everything else to the RW one? For example, packages or PXE data can be served by both, and so can API GET requests. The rest would be RW.
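One possible starting point for the database part is plain PostgreSQL streaming replication, sketched here; the hostname, replication user and data directory are assumptions, and the Uyuni-specific table skipping / feature disabling discussed above would still be needed on top:

```
# On the read-only replica: clone the primary and set it up as a WAL-streaming standby
# (hostname, user and data directory are placeholders)
pg_basebackup -h uyuni-primary.example.com -U replicator \
  -D /var/lib/pgsql/data -R -X stream -P

# -R writes standby.signal plus the primary_conninfo setting, so starting
# PostgreSQL on the replica brings it up as a hot standby
systemctl start postgresql
```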
Goals
- Prepare a document explaining how to do it.
- PR with the needed code changes to support it