Description

One part of the Uyuni systems management tool is the ability to build custom images. Currently Uyuni supports only the Kiwi image builder.

Kiwi, however, is not the only image building system out there. With the goal of becoming familiar with other systems as well, this project aims to add support for Edge Image Builder (EIB) and systemd's mkosi.

Goals

Uyuni is able to:

  • provision EIB and mkosi build hosts
  • build EIB and mkosi images and store them (a minimal mkosi sketch follows below)
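
As an illustration of the mkosi half of the second goal, here is a minimal, hedged sketch of driving mkosi from Python, roughly as a Uyuni build host action might. The profile location and artifact store are hypothetical; only the mkosi CLI itself ("mkosi build", "--force") is real, and the default output directory varies between mkosi versions.

#!/usr/bin/env python3
"""Sketch: drive an mkosi build as a Uyuni build host step might.

Hypothetical layout: image profiles under /var/lib/uyuni/mkosi/<name>,
finished artifacts copied to /srv/www/os-images/<name>.
"""
import shutil
import subprocess
from pathlib import Path

PROFILE_ROOT = Path("/var/lib/uyuni/mkosi")   # hypothetical profile location
IMAGE_STORE = Path("/srv/www/os-images")      # hypothetical artifact store


def build_mkosi_image(name: str) -> Path:
    profile = PROFILE_ROOT / name
    # mkosi reads mkosi.conf from the working directory; "build" is the
    # default verb and --force rebuilds even if output already exists.
    subprocess.run(["mkosi", "--force", "build"], cwd=profile, check=True)

    # Recent mkosi versions place artifacts in mkosi.output/ if present;
    # adjust to the mkosi version actually deployed on the build host.
    dest = IMAGE_STORE / name
    shutil.copytree(profile / "mkosi.output", dest, dirs_exist_ok=True)
    return dest


if __name__ == "__main__":
    print(build_mkosi_image("tumbleweed-minimal"))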

Resources

  • Uyuni - https://github.com/uyuni-project/uyuni
  • Edge Image Builder - https://github.com/suse-edge/edge-image-builder
  • mkosi - https://github.com/systemd/mkosi

Looking for hackers with the skills:

uyuni, edge, eib, mkosi, imagebuilding

This project is part of:

Hack Week 24


    Similar Projects

    Saline (state deployment control and monitoring tool for SUSE Manager/Uyuni) by vizhestkov

    Project Description

    Saline is an addition to Salt used in SUSE Manager/Uyuni, aimed at providing better control and visibility for state deployment in large-scale environments.

    In its current state, the published version can be used only as a Prometheus exporter and is missing some of the key features implemented in the PoC (not published). Right now it can provide metrics related to Salt events and the state apply process on the minions, but there is no control over this process implemented yet.

    Continue implementing the missing features and improve the existing implementation:

    • authentication (need to decide how, or whether, it should be related to Salt auth)

    • web service providing the control of states deployment

    Goal for this Hackweek

    • Implement missing key features

    • Implement the state deployment control tool with a CLI

    Resources

    https://github.com/openSUSE/saline


    Uyuni developer-centric documentation by deneb_alpha

    Description

    While we currently have extensive documentation on user-oriented tasks such as adding minions, patching, fine-tuning, etc., there is a notable gap when it comes to centralizing and documenting core functionalities for developers.

    The number of functionalities and side tools we have in Uyuni can be overwhelming. It would be nice to have a centralized place with a descriptive list of the main/core functionalities.

    Goals

    Create, aggregate, and review on the Uyuni wiki a set of developer-focused resources that also include some known common problems and troubleshooting tips.

    The documentation will be helpful not only for anyone trying to learn the functionalities and all their inner processes, such as newcomer developers or community enthusiasts, but also for anyone who needs a refresher.

    Resources

    The resources are currently aggregated here: https://github.com/uyuni-project/uyuni/wiki


    Install Uyuni on Kubernetes in cloud-native way by cbosdonnat

    Description

    For now, installing Uyuni on Kubernetes requires running mgradm on a cluster node... which is not what users would do in the Kubernetes world. The idea is to implement an installation based only on Helm charts and probably an operator.

    Goals

    Install Uyuni from the Rancher UI.

    Resources


    Improve Development Environment on Uyuni by mbussolotto

    Description

    Currently, creating a dev environment for Uyuni can be complicated. The steps are:

    • add the correct repo
    • download packages
    • configure your IDE (checkstyle, format rules, sonarlint....)
    • setup debug environment
    • ...

    The current documentation can be improved: some information is hard to find, and some is missing entirely.

    Dev Containers might solve this situation.

    Goals

    Uyuni development in no time:

    • using VSCode:
      • settings.json should contain all settings (for all languages in Uyuni, with all checkstyle rules, etc.)
      • the dev container should contain all dependencies
      • set up the debug environment
    • implement a GitHub Codespaces solution
    • rewrite the documentation

    Lots of pieces are already implemented: we need to connect them into a consistent solution.

    Resources

    • https://github.com/uyuni-project/uyuni/wiki


    Testing and adding GNU/Linux distributions on Uyuni by juliogonzalezgil

    Join the Gitter channel! https://gitter.im/uyuni-project/hackweek

    Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines. It also manages configuration, can run audits, build container images, do monitoring and much more!

    Currently there are a few distributions that are completely untested on Uyuni or SUSE Manager (AFAIK), or not tested for a long time. It could be interesting to know how hard it would be to work with them and, if possible, to fix whatever is broken.

    For newcomers, the easiest distributions are those based on DEB or RPM packages. Distributions with other package formats are doable, but will require adapting the Python and Java code to be able to sync and analyze such packages (and if Salt does not support those packages, it will need changes as well). So if you want a distribution with another package format, make sure you are comfortable handling such changes.

    No developer experience? No worries! We have had non-developer contributors in the past, and we are ready to help as long as you are willing to learn. If you don't want to code at all, you can also help us prepare the documentation after someone else has the initial code ready, or you could help with testing :-)

    The idea is to test Salt and Salt-SSH clients, but NOT traditional clients, which are deprecated.

    To consider that a distribution has basic support, we should cover at least the following (points 3-6 are to be tested for both Salt minions and Salt-SSH minions):

    1. Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
    2. Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
    3. Package management (install, remove, update...)
    4. Patching
    5. Applying any basic salt state (including a formula)
    6. Salt remote commands
    7. Bonus point: Java part for product identification, and monitoring enablement
    8. Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
    9. Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
    10. Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)

    If something is breaking, we can try to fix it, but the main idea is to research how well supported it is right now. Beyond that, it's up to each project member how much to hack :-)

    • If you don't have knowledge about some of the steps: ask the team
    • If you still don't know what to do: switch to another distribution and keep testing.

    This card is for EVERYONE, not just developers. Seriously! We have had people from other teams who were not developers helping, and they added support for Debian and for new SUSE Linux Enterprise and openSUSE Leap versions :-)

    Pending

    FUSS

    FUSS is a complete GNU/Linux solution (server, client and desktop/standalone) based on Debian for managing an educational network.

    https://fuss.bz.it/

    Seems to be a Debian 12 derivative, so adding it could be quite easy.

    • [ ] Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
    • [ ] Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
    • [ ] Package management (install, remove, update...)
    • [ ] Patching (if patch information is available; this could require writing some code to parse it, but IIRC we already have support for Ubuntu)
    • [ ] Applying any basic salt state (including a formula)
    • [ ] Salt remote commands
    • [ ] Bonus point: Java part for product identification, and monitoring enablement


    Build Edge Image Builder ISO with SUSE Manager by mweiss2

    Description

    With SUSE Manager, we can build OS images using KIWI, as well as container images. Now that we have Edge Image Builder, we want to see if it is possible to use SUSE Manager to build/customize OS images by integrating Edge Image Builder as well.

    Goals

    To make the process easier for customers, a single-build pipeline that automatically adds the combustion and artifact files from the EIB process is desirable.

    • Kiwi and EIB need to come from a Git Repository.
    • Kiwi and EIB need to be running as containers.
    • Configuration options for the images used for Kiwi and EIB build.
    • X86 and ARM64 Support.
    • SUSE Manager 4.3 and 5.X Support.
    • SLES 15 SP6 / SL Micro 6.0 and SL Micro 6.1 Support.

    Outcome

    • Change the Kiwi build process to use Podman with the Kiwi image registry.suse.com/bci/kiwi:10.1.10 (a rough invocation sketch follows this list)
    • Change the Edge Image Builder to produce a combustion-only ISO
    • Extract the contents and write them to a dedicated /OEM partition that is integrated via Kiwi into the ISO it creates.
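
    A rough, hedged sketch of what the containerized Kiwi invocation could look like from Python. The /build volume layout and the description/target paths are assumptions, as is the idea that the image exposes the kiwi-ng CLI as a regular command; the container image reference comes from the list above, and "kiwi-ng system build --description ... --target-dir ..." is standard Kiwi CLI usage.

    #!/usr/bin/env python3
    """Sketch: run a Kiwi image build inside Podman (not the final pipeline)."""
    import os
    import subprocess

    KIWI_IMAGE = "registry.suse.com/bci/kiwi:10.1.10"

    def build_with_kiwi(description_dir: str, target_dir: str) -> None:
        # Podman volume mounts need absolute paths.
        desc = os.path.abspath(description_dir)
        tgt = os.path.abspath(target_dir)
        # Kiwi needs elevated privileges to set up loop devices and
        # filesystems, hence --privileged. The /build layout is assumed.
        subprocess.run(
            [
                "podman", "run", "--rm", "--privileged",
                "-v", f"{desc}:/build/description:ro",
                "-v", f"{tgt}:/build/target",
                KIWI_IMAGE,
                "kiwi-ng", "system", "build",
                "--description", "/build/description",
                "--target-dir", "/build/target",
            ],
            check=True,
        )

    if __name__ == "__main__":
        build_with_kiwi("./kiwi-image-micro-gpu-60", "./out")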

    Sources and PRs

    • https://github.com/Martin-Weiss/kiwi-image-micro-gpu-60
    • https://github.com/suse-edge/edge-image-builder/pull/618
    • https://github.com/uyuni-project/uyuni/pull/9507


    Small healthcheck tool for Longhorn by mbrookhuis

    Project Description

    We often have problems (e.g. pods not starting) that are related to PVCs not running, cluster nodes not all being up, or deployments not running or not completely running. This all prevents administration activities. Having something that can be run regularly to validate the status of the cluster would be helpful, instead of the many manual tasks needed today.

    As an addition (read: given enough time), we could add changing reservations, adding new disks, etc. --> This didn't make it, but the scripts can easily be adapted.

    This tool would decrease troubleshooting time, reduce the need to give admins rights to the Rancher GUI, and could be used in automation.

    Goal for this Hackweek

    In the end we should have a small Python tool that does a (very) basic health check on nodes, deployments and PVCs. The first attempt was to write it in Go, but that was taking too much time.

    Overview

    This tool will run a simple healthcheck on a kubernetes cluster. It will perform the following actions:

    • node check: this will check all nodes and display their status and k3s version. If a node's status is not "Ready" (this is only reported, not fixed), the cluster will be reported as having problems

    • deployment check: this will list all deployments and display the number of expected replicas and the number of replicas in use. If there are unused replicas, this will be displayed and the cluster will be reported as having problems.

    • pvc check: this will list all PVCs and display their status and robustness. If the robustness is not "Healthy", the cluster will be reported as having problems.

    If a problem is registered in any of the checks, there will be a warning that the cluster is not healthy and the program will exit with 1.

    The script has one mandatory parameter: the kubeconfig of the cluster, or of a node of the cluster.

    The code is written for Python 3.11, but will also work on 3.6 (the default with SLES 15.x). There is a venv present that contains all needed packages. The script can be run on the cluster itself or on any other Linux server.
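
    As an illustration of the node check described above, here is a minimal sketch using the official kubernetes Python client. It is not the project's actual code, and it simplifies the reporting to the node check only.

    #!/usr/bin/env python3
    """Minimal sketch of the node check (not the project's actual code).

    Usage: k8s-check.py <kubeconfig>
    Requires the 'kubernetes' package (pip install kubernetes).
    """
    import sys

    from kubernetes import client, config

    def check_nodes(kubeconfig: str) -> bool:
        config.load_kube_config(config_file=kubeconfig)
        healthy = True
        for node in client.CoreV1Api().list_node().items:
            # The "Ready" condition carries the node's overall health.
            ready = next(
                (c.status for c in node.status.conditions if c.type == "Ready"),
                "Unknown",
            )
            version = node.status.node_info.kubelet_version  # e.g. v1.30.3+k3s1
            print(f"{node.metadata.name}: Ready={ready} version={version}")
            if ready != "True":
                healthy = False
        return healthy

    if __name__ == "__main__":
        if len(sys.argv) != 2:
            sys.exit("usage: k8s-check.py <kubeconfig>")
        if not check_nodes(sys.argv[1]):
            print("WARNING: cluster is not healthy")
            sys.exit(1)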

    Installation

    To install this project, perform the following steps:

    • Create the directory /opt/k8s-check

    mkdir /opt/k8s-check

    • Copy all the files to this directory and make the script executable:

    chmod +x k8s-check.py


    Explore simple and distro-independent declarative Linux starting on Tumbleweed or Arch Linux by janvhs

    Description

    Inspired by mkosi, the idea is to experiment with a declarative approach to defining Linux systems. A lot of tools already make it possible to manage a system's infrastructure using description files rather than manual invocation. Examples of this are systemd presets for managing enabled services, or the /etc/fstab file for describing how partitions should be mounted.

    If we took inspiration from openSUSE MicroOS and its handling of the /etc/ directory, we could theoretically use systemd-sysupdate to swap out the /usr/ partition and create an A/B boot scheme, where the /usr/ partition is always freshly built according to a central system description. In the best case it would still be possible to utilise snapshots, but an A/B root scheme would be sufficient for the beginning. This way you could get the benefits of NixOS's declarative system definition, but still use the distro's package repositories and avoid the overhead of Flakes or the Nix language.

    Goals

    • A simple and understandable system
    • Check the fitness of mkosi, or write a simple extensible image builder tool for it
    • Create a declarative system specification
    • Create a system with swappable /usr/ partition
    • Create an A/B root scheme
    • Swap to the new system without a full reboot (kexec?)

    Resources

    • Ideas that have been floating around in my head for a while
    • https://0pointer.net/blog/fitting-everything-together.html
    • GNOME OS
    • MicroOS
    • systemd mkosi
    • Vanilla OS