Project Description
Building the Uyuni documentation requires a lot of dependencies. Keeping your system on the right versions, or reinstalling, can be a challenge.
Moreover, we don't build the documentation for PRs, so we can't inspect it there, and we don't have it available anywhere before we prepare releases.
Goal for this Hackweek
- Create a container image that can build the doc from a set of parameters (git repository, git reference, product)
- Publish the container image to GitHub (at least for now, OBS is not an option, as a lot of gems and npm packages are required) -> published to Docker Hub for now
- Create a GitHub Action to build the doc on demand and (somehow) publish it -> postponed
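As a rough sketch of how such a parameter-driven build could be invoked (image name, parameter names, and output path below are illustrative assumptions, not the final interface of uyuni-docs-helper):

```bash
# Hypothetical invocation: build the Uyuni docs for a given repo/ref/product
# and collect the generated site in ./build. All names here are assumptions.
podman run --rm \
  -e GIT_REPOSITORY=https://github.com/uyuni-project/uyuni-docs.git \
  -e GIT_REFERENCE=master \
  -e PRODUCT=uyuni \
  -v "$PWD/build:/output" \
  docker.io/example/uyuni-docs-helper
```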
Skills
- Able to write Dockerfiles
- Familiarity with GitHub actions
- Familiarity with container images on GitHub
- Familiarity with publishing artifacts (if possible, a static website) from PRs using GitHub tooling
Resources
- https://github.com/uyuni-project/uyuni-docs
- https://github.com/uyuni-project/uyuni-docs/wiki/Install-the-latest-documentation-toolchain
- https://github.com/jordimassaguerpla/uyuni/blob/master/.github/workflows/build_containers.yml
- https://github.com/jordimassaguerpla/uyuni/actions/runs/4024484061/workflow
Outcome
https://github.com/uyuni-project/uyuni-docs-helper
Looking for hackers with the skills:
uyuni containers github_actions github_page antora susemanager
This project is part of:
Hack Week 22
Activity
Comments
- almost 2 years ago by juliogonzalezgil
WIP at: https://github.com/juliogonzalez/uyuni-docs-container-image
For now, I focused on publishing the image to Docker Hub and on being able to build the doc from either a local clone or a remote git repository, so I can easily prepare a demo.
Publishing to GitHub and using this on PRs will come later.
- almost 2 years ago by juliogonzalezgil
Part of the work is still pending; it will be handled as part of my work for SUSE Manager.
The basics are there, and will be presented to the Uyuni Community and the SUSE Manager stakeholders in the next meetings.
Similar Projects
Create SUSE Manager users from ldap/ad groups by mbrookhuis
Description
This tool creates users in SUSE Manager Server based on LDAP/AD groups. For each LDAP/AD group, a role within SUSE Manager Server is defined. The tool also checks whether existing users still have the role they should have and, if not, corrects it. Likewise, if a user is disabled, it will be enabled again. If a user is no longer present in the LDAP/AD groups, it will be disabled or deleted, depending on the configuration.
The code is written for Python 3.6 (the default with SLES 15.x) but will also work with newer versions. It works against SUSE Manager 4.3 and 5.x.
Goals
Create a Python and/or Golang utility that manages users in SUSE Manager based on LDAP/AD group membership. A configuration file defines which roles the members of a group will get, as sketched below.
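For illustration only, such a configuration could map groups to roles along these lines (all key names here are hypothetical guesses; check the repository for the real configsm.yaml schema):

```bash
# Hypothetical configsm.yaml shape -- keys are illustrative assumptions only.
cat > configsm.yaml <<'EOF'
suman:
  server: suma.example.com
  user: admin
  password: secret
ldap:
  server: ldaps://ldap.example.com
  basedn: ou=groups,dc=example,dc=com
groups:
  suma-admins: org_admin
  suma-operators: system_group_admin
EOF
```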
Installation
To install this project, perform the following steps:
- Make sure Python 3.6 is installed, along with the python3-PyYAML module. The ldap3 module is also needed:

```bash
zypper in python3 python3-PyYAML
pip install ldap3
```
- On the server or PC where it should run, create a directory; on Linux, e.g. /opt/sm-ldap-users.
- Copy all the files to this directory.
- Edit configsm.yaml. All parameters should be entered. Tip: for the LDAP information, it is best to use the same values as for SSSD.
- Make sure the file sm-ldap-users.py is executable. It is good practice to set the owner to root:root, with only root able to read and execute:
```bash
chmod 600 *
chmod 700 sm-ldap-users.py
chown root:root *
```
Usage
This is very simple: once configsm.yaml contains the correct information, executing the following will do the magic:
```bash
./sm-ldap-users.py
```
Repository link
https://github.com/mbrookhuis/sm-ldap-users
Enable the containerized Uyuni server to run on different host OS by j_renner
Description
The Uyuni server is provided as a container, but we still require it to run on Leap Micro? This is not how people expect to use containerized applications, so it would be great if we tested other host OSs and enabled them by providing builds of the necessary tools (e.g. mgradm). Interesting candidates would be:
- openSUSE Leap
- CentOS 7
- Ubuntu
- ???
Goals
Make it really easy for anyone to run the Uyuni containerized server on whatever OS they want (with support for containers of course).
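For example, on Ubuntu the flow could look roughly like this (assuming mgradm binaries get published for the host OS; the exact subcommands and flags may differ between releases, and the FQDN is a placeholder):

```bash
# Sketch only: install a container runtime, then let mgradm set up the
# containerized Uyuni server on this host.
sudo apt-get update && sudo apt-get install -y podman
sudo mgradm install podman uyuni.example.com
```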
Uyuni developer-centric documentation by deneb_alpha
Description
While we currently have extensive documentation on user-oriented tasks such as adding minions, patching, fine-tuning, etc, there is a notable gap when it comes to centralizing and documenting core functionalities for developers.
The number of functionalities and side tools we have in Uyuni can be overwhelming. It would be nice to have a centralized place with a descriptive list of the main/core functionalities.
Goals
Create, aggregate, and review on the Uyuni wiki a set of developer-focused resources that also include some known common problems and troubleshooting.
The documentation will be helpful not only for everyone who is trying to learn the functionalities with all their inner processes, like newcomer developers or community enthusiasts, but also for anyone who needs a refresher.
Resources
The resources are currently aggregated here: https://github.com/uyuni-project/uyuni/wiki
Saltboot ability to deploy OEM images by oholecek
Description
Saltboot is the system deployment part of Uyuni. It is the mechanism behind deploying Kiwi-built system images from the central Uyuni server location.
With a system image, the image covers only one partition rather than the whole disk, so the deployment system has to take care of partitioning and fstab on top of integrity validation.
However, systems like Aeon, SUSE Linux Enterprise Micro and similar are distributed as disk images (so-called OEM images). Saltboot currently cannot deploy these systems.
The main problem for saltboot, however, is that saltboot support is currently built into the image itself. This step is not desired when using OEM images.
Goals
Saltboot needs to become standalone and be able to deploy OEM images. The responsibility of saltboot would then shrink to selecting the correct image, image integrity validation, deployment, and booting into the deployed system.
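In essence, deploying an OEM (whole-disk) image is simpler than the partition-image flow, since the partitioning ships inside the image. A bare-bones sketch of the idea, outside of saltboot (image URL, target disk, and partition number are assumptions):

```bash
# Rough idea of OEM image deployment: stream the disk image onto the target
# device, then grow the last partition to fill the disk.
curl -L https://example.com/images/slmicro.raw.xz | xz -dc \
  | dd of=/dev/sda bs=4M status=progress
sync
growpart /dev/sda 3   # assumes the root partition is number 3
```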
Resources
- Saltboot - https://github.com/uyuni-project/retail/tree/master
- Uyuni - https://github.com/uyuni-project/uyuni
Improve Development Environment on Uyuni by mbussolotto
Description
Currently, creating a dev environment for Uyuni might be complicated. The steps are:
- add the correct repo
- download packages
- configure your IDE (checkstyle, format rules, sonarlint....)
- setup debug environment
- ...
The current doc can be improved: some information is hard to find, and some is completely missing.
A dev container might solve this situation.
Goals
Uyuni development in no time:
- using VSCode:
  - settings.json should contain all settings (for all languages in Uyuni, with all checkstyle rules, etc.)
  - the dev container should contain all dependencies
  - set up the debug environment
- implement a GitHub Workspace solution
- rewrite the documentation
Lots of pieces are already implemented: we need to connect them into a consistent solution.
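A minimal sketch of the dev container direction (the base image, extension list, and settings below are illustrative assumptions, not the project's agreed setup):

```bash
# Hypothetical .devcontainer for Uyuni hacking; everything here is a guess
# meant to show the shape, not the actual configuration.
mkdir -p .devcontainer
cat > .devcontainer/devcontainer.json <<'EOF'
{
  "name": "uyuni-dev",
  "image": "registry.opensuse.org/opensuse/tumbleweed:latest",
  "customizations": {
    "vscode": {
      "extensions": ["redhat.java", "SonarSource.sonarlint-vscode"],
      "settings": { "editor.formatOnSave": true }
    }
  }
}
EOF
```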
Resources
- https://github.com/uyuni-project/uyuni/wiki
SUSE AI Meets the Game Board by moio
Use tabletopgames.ai’s open source TAG and PyTAG frameworks to apply Statistical Forward Planning and Deep Reinforcement Learning to two board games of our own design. On an all-green, all-open source, all-AWS stack!
Results: Infrastructure Achievements
We successfully built and automated a containerized stack to support our AI experiments. This included:
- a Fully-Automated, One-Command, GPU-accelerated Kubernetes setup: we created an OpenTofu-based script, tofu-tag, to deploy SUSE's RKE2 Kubernetes running on CUDA-enabled nodes in AWS, powered by openSUSE with GPU drivers and gpu-operator
- Containerization of the TAG and PyTAG frameworks: TAG (Tabletop AI Games) and PyTAG were patched for seamless deployment in containerized environments. We automated the container image creation process with GitHub Actions. Our forks (PRs upstream upcoming):

```bash
./deploy.sh
```

and voilà: Kubernetes running PyTAG (k9s screenshot, above) with GPU acceleration (nvtop screenshot, below)
Results: Game Design Insights
Our project focused on modeling and analyzing two card games of our own design within the TAG framework:
- Game Modeling: We implemented models for Dario's "Bamboo" and Silvio's "Totoro" and "R3" games, enabling AI agents to play thousands of games ...in minutes!
- AI-driven optimization: By analyzing statistical data on moves, strategies, and outcomes, we iteratively tweaked the game mechanics and rules to achieve better balance and player engagement.
- Advanced analytics: Leveraging AI agents with Monte Carlo Tree Search (MCTS) and random action selection, we compared performance metrics to identify optimal strategies and uncover opportunities for game refinement.
- more about Bamboo on Dario's site
- more about R3 on Silvio's site (italian, translation coming)
- more about Totoro on Silvio's site
A family picture of our card games in progress. From the top: Bamboo, Totoro, R3
Results: Learning, Collaboration, and Innovation
Beyond technical accomplishments, the project showcased innovative approaches to coding, learning, and teamwork:
- "Trio programming" with AI assistance: Our "trio programming" approach—two developers and GitHub Copilot—was a standout success, especially in handling slightly-repetitive but not-quite-exactly-copypaste tasks. Java as a language tends to be verbose and we found it to be fitting particularly well.
- AI tools for reporting and documentation: We extensively used AI chatbots to streamline writing and reporting. (Including writing this report! ...but this note was added manually during edit!)
- GPU compute expertise: Overcoming challenges with CUDA drivers and cloud infrastructure deepened our understanding of GPU-accelerated workloads in the open-source ecosystem.
- Game design as a learning platform: By blending AI techniques with creative game design, we learned not only about AI strategies but also about making games fun, engaging, and balanced.
Last but not least we had a lot of fun! ...and this was definitely not a chatbot generated line!
ADS-B receiver with MicroOS by epaolantonio
I would like to put one of my spare Raspberry Pis to good use, and what better way to see what flies above my head at any time?
There are various ready-to-use distros already set up to provide feeder data to platforms like Flightradar24, ADS-B Exchange, FlightAware, etc. The goal here would be to do it using MicroOS as a base, with containerized decoding of ADS-B data (via tools like dump1090) and a web frontend (tar1090).
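The containerized pipeline could look roughly like this (image names and wiring are placeholders; the project would build its own Tumbleweed-based images):

```bash
# Sketch: readsb decodes frames from the RTL-SDR dongle and writes JSON;
# tar1090 serves the map from that JSON. Image names are placeholders.
podman run -d --name readsb --device /dev/bus/usb \
  -v adsb-json:/run/readsb \
  localhost/readsb:latest --device-type rtlsdr --net --write-json /run/readsb
podman run -d --name tar1090 -p 8080:80 \
  -v adsb-json:/run/readsb:ro localhost/tar1090:latest
```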
Goals
- Create a working receiver using MicroOS as a base, and containers based on Tumbleweed
- Make it easy to install
- Optimize for maximum laziness (i.e. it should take care of itself with minimum intervention)
Resources
- 1x Small Board Computer capable of running MicroOS
- 1x RTL2832U DVB-T dongle
- 1x MicroSD card
- https://github.com/antirez/dump1090
- https://github.com/flightaware/dump1090 (dump1090 fork by FlightAware)
- https://github.com/wiedehopf/tar1090
Project status (2024-11-22)
So I'd say that I'm pretty satisfied with how it turned out. I've packaged readsb (as a replacement for dump1090), tar1090, tar1090-db and mlat-client (not used yet).
Current status:
- Able to set up a working receiver using combustion+ignition (web app based on Fuel Ignition)
- Able to feed to various feeds using the Beast protocol (Airplanes.live, ADSB.fi, ADSB.lol, ADSBExchange.com, Flyitalyadsb.com, Planespotters.net)
- Able to feed to Flightradar24 (initial-setup available but NOT tested! I've only tested using a key I already had)
- Local web interface (tar1090) to easily visualize the results
- Cockpit pre-configured to ease maintenance
What's missing:
- MLAT (Multilateration) support. I've packaged mlat-client already, but I have to wire it up
- FlightAware support
Give it a go at https://g7.github.io/adsbreceiver/!
Project links
- https://g7.github.io/adsbreceiver/
- https://github.com/g7/adsbreceiver
- https://build.opensuse.org/project/show/home:epaolantonio:adsbreceiver
Technical talks at universities by agamez
Description
This project aims to empower the next generation of tech professionals by offering hands-on workshops on containerization and Kubernetes, with a strong focus on open-source technologies. By providing practical experience with these cutting-edge tools and fostering a deep understanding of open-source principles, we aim to bridge the gap between academia and industry.
For now, the scope is limited to Spanish universities, since we already have the contacts and have started some conversations.
Goals
- Technical Skill Development: equip students with the fundamental knowledge and skills to build, deploy, and manage containerized applications using open-source tools like Kubernetes.
- Open-Source Mindset: foster a passion for open-source software, encouraging students to contribute to open-source projects and collaborate with the global developer community.
- Career Readiness: prepare students for industry-relevant roles by exposing them to real-world use cases, best practices, and open-source in companies.
Resources
- Instructors: experienced open-source professionals with deep knowledge of containerization and Kubernetes.
- SUSE Expertise: leverage SUSE's expertise in open-source technologies to provide insights into industry trends and best practices.
ClusterOps - Easily install and manage your personal kubernetes cluster by andreabenini
Description
ClusterOps is a Kubernetes installer and operator designed to streamline the initial configuration and ongoing maintenance of Kubernetes clusters. The focus of this project is primarily on personal or local installations. However, the goal is to expand its use to encompass all installations of Kubernetes for local development purposes.
It simplifies cluster management by automating tasks and providing just one user-friendly YAML-based configuration file, config.yml.
Overview
- Simplified Configuration: Define your desired cluster state in a simple YAML file, and ClusterOps will handle the rest.
- Automated Setup: Automates initial cluster configuration, including network settings, storage provisioning, special requirements (for example GPUs) and essential components installation.
- Ongoing Maintenance: Performs routine maintenance tasks such as upgrades, security updates, and resource monitoring.
- Extensibility: Easily extend functionality with custom plugins and configurations.
- Self-Healing: Detects and recovers from common cluster issues, ensuring stability, idempotence and reliability. Same operation can be performed multiple times without changing the result.
- Discreet: It works only on what it knows; if you are manually configuring parts of your Kubernetes cluster and that configuration does not interfere with it, you can happily continue to work on those parts and use this tool only for what is needed.
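To make the YAML-first idea above concrete, a config.yml might look something like this (all key names are assumptions for illustration, not ClusterOps' actual schema):

```bash
# Illustrative config.yml shape only -- not the project's real schema.
cat > config.yml <<'EOF'
cluster:
  name: local-dev
  engine: k3s            # whichever engine your package manager installed
features:
  dashboard: true        # official Kubernetes dashboard plugin
  kubevirt: false
  containerhub: true     # local builtin registry
maintenance:
  auto_upgrade: true
EOF
```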
Features
- Distribution and engine independence. Install your favorite Kubernetes engine with your package manager, execute one script, and you'll have a complete working environment at your disposal.
- Basic config approach. One single config.yml file with configuration requirements (add/remove features): human readable, plain and simple. All fancy configs are managed automatically (ingress, balancers, services, proxy, ...).
- Local builtin ContainerHub. The default installation provides a fully configured ContainerHub available locally along with the Kubernetes installation. This configuration allows the user to build, upload and deploy custom container images as if they were provided from external sources. Internet public sources are still available, but local development can be kept on this localhost server. The builtin ClusterOps operator will be fetched from this ContainerHub registry too.
- Kubernetes official dashboard installed as a plugin; others planned too (k9s, for example).
- Kubevirt plugin installed and properly configured. Unleash the power of classic virtualization (KVM+QEMU) on top of Kubernetes and manage your entire system from there; libvirtd and virsh libs are required.
- One operator to rule them all. The installation script configures your machine automatically during installation and adds one Kubernetes operator to manage your local cluster. From there the operator takes care of the cluster on your behalf.
- Clean installation and removal. Just test it; when you are done, use the same program to uninstall everything without leaving configs (or pods) behind.
Planned features (Wishlist / TODOs)
- Containerized Data Importer (CDI). Persistent storage management add-on for Kubernetes to provide a declarative way of building and importing Virtual Machine Disks on PVCs […]
Automate PR process by idplscalabrini
Description
This project aims to streamline and enhance the PR review process by adding automation to identify issues such as missing comments or sensitive information (e.g. credentials) in PRs. By leveraging GitHub Actions and Golang hooks, we can focus more on high-level reviews.
Goals
- Automate lints and code validations with GitHub Actions
- Automate code validation via hooks
- Implement a bot to pre-review the PRs
Resources
Golang hooks and GitHub Actions
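As a taste of the hook side (the regex and scope here are illustrative assumptions, not the project's actual rules), a minimal git pre-commit check could be:

```bash
#!/usr/bin/env bash
# Sketch: refuse commits whose staged diff appears to add credentials.
# The pattern is a deliberately naive placeholder.
pattern='(password|passwd|secret|api[_-]?key|PRIVATE KEY)'
if git diff --cached -U0 | grep -iEq "$pattern"; then
  echo "Possible credential in staged changes; aborting commit." >&2
  exit 1
fi
```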
ddflare: (Dynamic)DNS management via Cloudflare API in Kubernetes by fgiudici
Description
ddflare is a project started a couple of weeks ago to provide DDNS management using the v4 Cloudflare APIs: Cloudflare offers management via APIs and access tokens, so it is possible to register a domain and implement a DynDNS client without any external service other than their API.
Since ddflare allows setting any IP for any domain name, one could manage multiple A and ALIAS domain records. Wouldn't it be cool to allow full DNS control from the project and integrate it with your Kubernetes cluster?
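For reference, the kind of call ddflare automates boils down to a single Cloudflare v4 API request, roughly like this (zone/record IDs and the domain are placeholders):

```bash
# Update an existing A record with the current public IP via the v4 API.
TOKEN=...; ZONE_ID=...; RECORD_ID=...
IP=$(curl -s https://api.ipify.org)
curl -s -X PUT \
  "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"home.example.com\",\"content\":\"${IP}\"}"
```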
Goals
Main goals are:
- add containerized image for ddflare
- extend ddflare to be able to add and remove DNS records (and not just update existing ones)
- add documentation, covering also a sample pod deployment for Kubernetes
- write a ddflare Kubernetes operator to enable domain management via Kubernetes resources (using kubebuilder)
Available tasks and improvements are tracked on the ddflare GitHub.
Resources
- https://github.com/fgiudici/ddflare
- https://developers.cloudflare.com/api/
- https://book.kubebuilder.io
Saline (state deployment control and monitoring tool for SUSE Manager/Uyuni) by vizhestkov
Project Description
Saline is an addition to Salt used in SUSE Manager/Uyuni, aimed at providing better control and visibility of state deployment in large-scale environments.
In its current state, the published version can be used only as a Prometheus exporter and is missing some of the key features implemented in the PoC (not published). It can currently provide metrics related to Salt events and the state apply process on the minions, but there is no control over this process implemented yet.
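Since the published version works as a Prometheus exporter, wiring it up amounts to a scrape config along these lines (the job name and port are assumptions; check the project for the real defaults):

```bash
# Hypothetical Prometheus scrape entry for the Saline exporter; assumes a
# scrape_configs section already exists in prometheus.yml.
cat >> /etc/prometheus/prometheus.yml <<'EOF'
  - job_name: saline
    static_configs:
      - targets: ['uyuni-server.example.com:8216']  # port is an assumption
EOF
```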
Continue with the implementation of the missing features and improve the existing implementation:
- authentication (need to decide how it should or should not be related to Salt auth)
- a web service providing control of state deployment
Goal for this Hackweek
- Implement the missing key features
- Implement the tool for state deployment control with a CLI
Resources
https://github.com/openSUSE/saline
Testing and adding GNU/Linux distributions on Uyuni by juliogonzalezgil
Join the Gitter channel! https://gitter.im/uyuni-project/hackweek
Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines. It also manages configuration, can run audits, build image containers, monitor and much more!
Currently there are a few distributions that are completely untested on Uyuni or SUSE Manager (AFAIK), or that have not been tested for a long time. It could be interesting to know how hard it would be to work with them and, if possible, fix whatever is broken.
For newcomers, the easiest distributions are those based on DEB or RPM packages. Distributions with other package formats are doable, but will require adapting the Python and Java code to be able to sync and analyze such packages (and if Salt does not support those packages, it will need changes as well). So if you want a distribution with other package formats, make sure you are comfortable handling such changes.
No developer experience? No worries! We have had non-developer contributors in the past, and we are ready to help as long as you are willing to learn. If you don't want to code at all, you can also help us prepare the documentation after someone else has the initial code ready, or you could help with testing :-)
The idea is testing Salt and Salt-ssh clients, but NOT traditional clients, which are deprecated.
To consider that a distribution has basic support, we should cover at least the following (points 3-6 are to be tested for both salt minions and salt-ssh minions):
- Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file; see the sketch after this list)
- Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
- Package management (install, remove, update...)
- Patching
- Applying any basic salt state (including a formula)
- Salt remote commands
- Bonus point: Java part for product identification, and monitoring enablement
- Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
- Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
- Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)
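As mentioned in the first point above, reposync setup typically starts from spacewalk-common-channels. A sketch (the channel label is hypothetical and would only exist after the new distribution is added to the .ini file):

```bash
# Add a new distribution's channels once they are defined in
# /etc/rhn/spacewalk-common-channels.ini (label below is hypothetical).
spacewalk-common-channels -u admin -p <password> 'fuss12*'
```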
If something is breaking, we can try to fix it, but the main idea is to research how well supported it is right now. Beyond that, it's up to each project member how much to hack :-)
- If you don't have knowledge about some of the steps: ask the team
- If you still don't know what to do: switch to another distribution and keep testing.
This card is for EVERYONE, not just developers. Seriously! We had people from other teams helping that were not developers, and added support for Debian and new SUSE Linux Enterprise and openSUSE Leap versions :-)
Pending
FUSS
FUSS is a complete GNU/Linux solution (server, client and desktop/standalone) based on Debian for managing an educational network.
https://fuss.bz.it/
Seems to be a Debian 12 derivative, so adding it could be quite easy.
- [W] Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
- [W] Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator) --> Working for all 3 options (salt minion UI, salt minion bootstrap script and salt-ssh minion from the UI).
- [W] Package management (install, remove, update...) --> Installing a new package works; still need to test the rest.
- [I] Patching (if patch information is available, could require writing some code to parse it, but IIRC we have support for Ubuntu already). No patches detected. Do we support patches for Debian at all?
- [W] Applying any basic salt state (including a formula)
- [W] Salt remote commands
- [ ] Bonus point: Java part for product identification, and monitoring enablement