Description

For now, there is no HA setup available for Uyuni. The idea is to explore setting up a read-only shadow instance of an Uyuni server and make it as useful as possible.

Possible things to look at:

  • Live sync of the database, probably using the WAL. Some tables may have to be skipped, or some features disabled on the RO instance (taskomatic, PXT sessions…)
  • Can we use a load balancer that routes read-only requests to either instance and write requests to the RW one? For example, packages, PXE data and API GET requests could be served by both; everything else would go to the RW instance. (See the sketch below.)
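
A minimal sketch of the routing idea, assuming psycopg2 is available and using placeholder hostnames and credentials; it relies on PostgreSQL's pg_is_in_recovery() to tell a read-only standby apart from the RW primary:

    import psycopg2

    # Hypothetical hosts: one streaming-replication standby, one primary.
    NODES = ["uyuni-ro.example.com", "uyuni-rw.example.com"]

    def connect(host):
        # Placeholder database name and user.
        return psycopg2.connect(host=host, dbname="susemanager", user="spacewalk")

    def is_replica(conn):
        # True on a hot standby replaying WAL, False on the RW primary.
        with conn.cursor() as cur:
            cur.execute("SELECT pg_is_in_recovery()")
            return cur.fetchone()[0]

    def get_connection(read_only=True):
        # Pick a standby for read-only work, the primary for everything else.
        for host in NODES:
            conn = connect(host)
            if is_replica(conn) == read_only:
                return conn
            conn.close()
        raise RuntimeError("no node with the requested role found")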

Goals

  • Prepare a document explaining how to do it.
  • PR with the needed code changes to support it

Looking for hackers with the skills:

uyuni ha database postgresql

This project is part of:

Hack Week 25

Activity

  • about 1 month ago: j_renner liked this project.
  • about 1 month ago: e_bischoff liked this project.
  • about 1 month ago: deneb_alpha liked this project.
  • about 1 month ago: epenchev liked this project.
  • about 1 month ago: juliogonzalezgil liked this project.
  • about 1 month ago: ygutierrez liked this project.
  • about 1 month ago: oholecek liked this project.
  • about 1 month ago: oholecek joined this project.
  • about 1 month ago: oscar-barrios liked this project.
  • about 1 month ago: cbosdonnat added keyword "database" to this project.
  • about 1 month ago: cbosdonnat added keyword "postgresql" to this project.
  • about 1 month ago: cbosdonnat added keyword "ha" to this project.
  • about 1 month ago: cbosdonnat added keyword "uyuni" to this project.
  • about 1 month ago: cbosdonnat started this project.
  • about 1 month ago: cbosdonnat originated this project.

  • Comments

    • epenchev
      about 1 month ago by epenchev | Reply

      Hi, I think there are a few solutions that might help.

      Since I'm dealing a lot with HA and databases, I would like to share my thoughts.

      One possible solution would be to go with pgpool-II: scaling PostgreSQL with master-replica load balancing and automatic failover.

      This approach is described in great detail here: https://medium.com/@deysouvik700/scaling-postgresql-with-pgpool-ii-master-replica-load-balancing-and-automatic-failover-091983d4dd9a. In the example architecture the PgPool proxy itself is a single point of failure, but the setup could be extended with an additional proxy instance, with both proxies managed by keepalived + a virtual IP. Of course there are other resources you can refer to as well.

      Another possible solution, somewhat more automated, would be to go with cnpg. However, this requires a K8s cluster for your stateful PostgreSQL workload, so ideally you need at least a 3-node HA K8s cluster. That is the minimal setup, with all 3 nodes taking both control-plane and worker roles; otherwise the standard setup goes up to 5 nodes (control planes plus additional worker nodes). With cnpg you can create multiple services (rw, ro, r) within your cluster and point clients to them: https://cloudnative-pg.io/documentation/1.27/service_management/ and https://cloudnative-pg.io/documentation/1.27/architecture/.

      Something more experimental that I've been working on recently, and that I hope will be much easier from an operational perspective, is https://github.com/kqlite/kqlite. It's SQLite over the PostgreSQL wire protocol, with support for replication and clustering. However, this limits the database functionality to what SQLite offers: unfortunately, PostgreSQL-specific features and data types will not work with kqlite.

      P.S. There is also plenty of documentation on the standard approach: Patroni + HAProxy + etcd.

      • cbosdonnat
        about 1 month ago by cbosdonnat | Reply

        The first issue will be the replication of the DB itself. Since we have sequences and those are not logically replicated, we will have to check the possible options there.

    Similar Projects

    Move Uyuni Test Framework from Selenium to Playwright + AI by oscar-barrios

    Description

    This project aims to migrate the existing Uyuni Test Framework from Selenium to Playwright. The move will improve the stability, speed, and maintainability of our end-to-end tests by leveraging Playwright's modern features. We'll be rewriting the current Selenium code in Ruby to Playwright code in TypeScript, which includes updating the test framework runner, step definitions, and configurations. This is also necessary because we're moving from Cucumber Ruby to CucumberJS.

    If you're still curious about the AI in the title, it was just a way to grab your attention. Thanks for your understanding.

    Nah, let's be honest: AI helped a lot to vibe-code a good part of the Ruby methods of the test framework, moving them to TypeScript, along with the migration from Capybara to Playwright. I've been using "Cline" as a plugin for the WebStorm IDE, with the Gemini API behind it.


    Goals

    • Migrate Core tests including Onboarding of clients
    • Improve test reliability: Measure and confirm a significant reduction in flakiness.
    • Implement a robust framework: Establish a well-structured and reusable Playwright test framework using CucumberJS

    Resources


    Enable more features in mcp-server-uyuni by j_renner

    Description

    I would like to contribute to mcp-server-uyuni (the MCP server for Uyuni / Multi-Linux Manager), exposing additional features as tools. There are lots of relevant features to be found throughout the API, for example:

    • System operations and infos
    • System groups
    • Maintenance windows
    • Ansible
    • Reporting
    • ...

    At the end of the week I managed to enable basic system group operations (see the sketch after this list):

    • List all system groups visible to the user
    • Create new system groups
    • List systems assigned to a group
    • Add and remove systems from groups
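
    For illustration, these operations map to calls in the Uyuni XML-RPC API; a hedged sketch with a placeholder URL and credentials, not necessarily how mcp-server-uyuni implements it:

        from xmlrpc.client import ServerProxy

        # Hypothetical Uyuni / MLM server and placeholder credentials.
        client = ServerProxy("https://uyuni.example.com/rpc/api")
        key = client.auth.login("admin", "secret")

        # List all system groups visible to the user.
        for group in client.systemgroup.listAllGroups(key):
            print(group["name"])

        client.auth.logout(key)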

    Goals

    • Set up test environment locally with the MCP server and client + a recent MLM server [DONE]
    • Identify features and use cases offering a benefit with limited effort required for enablement [DONE]
    • Create a PR to the repo [DONE]

    Resources


    Uyuni Saltboot rework by oholecek

    Description

    When Uyuni switched over to the containerized proxies, we had to abandon the salt-based saltboot infrastructure we had before. Uyuni already had an integration with a Cobbler provisioning server, and saltboot was re-implemented on top of this Cobbler integration.

    What was not obvious from the start was that Cobbler, with all its features, is woefully slow when dealing with saltboot-sized environments. We made some performance improvements, introduced transactions, and generally tried to make this setup usable. However, the underlying slowness remained.

    Goals

    This project is not trying to invent new things; it is finally implementing the saltboot infrastructure directly within the Uyuni server core.

    Instead of having Cobbler generate grub and pxelinux configurations for thousands of systems and branches, we will provide a GET access point to retrieve the grub or pxelinux file during boot:

    /saltboot/group/grub/$fqdn and similar for systems /saltboot/system/grub/$mac

    Next, we adapt our tftpd translator to query these endpoints when asked for a default or MAC-based config.

    Lastly, a similar thing needs to be done on our Apache server when HTTP UEFI boot is used.
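
    To make the shape of those GET access points concrete, here is a hypothetical sketch (a small Flask app with in-memory placeholder data instead of the Uyuni database; not the actual Uyuni implementation):

        from flask import Flask, Response, abort

        app = Flask(__name__)

        # Placeholder data; the real endpoints would query the Uyuni server core.
        GROUP_CONFIGS = {"branch1.example.com": "set default=saltboot\n"}
        SYSTEM_CONFIGS = {"00:11:22:33:44:55": "set default=saltboot\n"}

        @app.route("/saltboot/group/grub/<fqdn>")
        def group_grub(fqdn):
            cfg = GROUP_CONFIGS.get(fqdn)
            if cfg is None:
                abort(404)
            return Response(cfg, mimetype="text/plain")

        @app.route("/saltboot/system/grub/<mac>")
        def system_grub(mac):
            cfg = SYSTEM_CONFIGS.get(mac)
            if cfg is None:
                abort(404)
            return Response(cfg, mimetype="text/plain")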

    Resources


    mgr-ansible-ssh - Intelligent, Lightweight CLI for Distributed Remote Execution by deve5h

    Description

    By the end of Hack Week, the target will be to deliver a minimal functional version 1 (MVP) of a custom command-line tool named mgr-ansible-ssh (a unified wrapper for BOTH ad-hoc shell & playbooks) that allows operators to:

    1. Execute arbitrary shell commands on thousands of remote machines simultaneously using Ansible Runner, with artifacts saved locally.
    2. Pass runtime options such as inventory file, remote command string / playbook execution, parallel forks, limits, dry-run mode, or no-std-ansible-output.
    3. Leverage existing SSH trust relationships without additional setup.
    4. Provide a clean, intuitive CLI interface with --help for ease of use, offering a consistent UX and a CI-friendly interface.
    5. Establish a foundation that can later be extended with advanced features such as logging, grouping, interactive shell mode, safe-command checks, and parallel execution tuning.

    The MVP should enable day-to-day operations to efficiently target thousands of machines with a single, consistent interface.

    Goals

    Primary Goals (MVP):

    Build a functional CLI tool (mgr-ansible-ssh) capable of executing shell commands on multiple remote hosts using Ansible Runner. Test the tool across a large distributed environment (1000+ machines) to validate its performance and reliability.

    Looking forward to significantly reducing the zypper deployment time across all 351 RMT VM servers in our MLM cluster by eliminating the dependency on the taskomatic service, bringing execution down to a fraction of the current duration. The tool should also support multiple runtime flags, such as:

    mgr-ansible-ssh: Remote command execution wrapper using Ansible Runner
    
    Usage: mgr-ansible-ssh [--help] [--version] [--inventory INVENTORY]
                       [--run RUN] [--playbook PLAYBOOK] [--limit LIMIT]
                       [--forks FORKS] [--dry-run] [--no-ansible-output]
    
    Required Arguments
    --inventory, -i      Path to Ansible inventory file to use
    
    Any One of the Arguments Is Required
    --run, -r            Execute the specified shell command on target hosts
    --playbook, -p       Execute the specified Ansible playbook on target hosts
    
    Optional Arguments
    --help, -h           Show the help message and exit
    --version, -v        Show the version and exit
    --limit, -l          Limit execution to specific hosts or groups
    --forks, -f          Number of parallel Ansible forks
    --dry-run            Run in Ansible check mode (requires -p or --playbook)
    --no-ansible-output  Suppress Ansible stdout output
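
    A minimal sketch of how such a CLI could be wired up, assuming the ansible-runner Python package; the flag names follow the help text above, while the private data directory and defaults are illustrative only:

        import argparse
        import ansible_runner

        def main():
            parser = argparse.ArgumentParser(
                prog="mgr-ansible-ssh",
                description="Remote command execution wrapper using Ansible Runner")
            parser.add_argument("--inventory", "-i", required=True)
            mode = parser.add_mutually_exclusive_group(required=True)
            mode.add_argument("--run", "-r", help="shell command to execute")
            mode.add_argument("--playbook", "-p", help="playbook to execute")
            parser.add_argument("--limit", "-l")
            parser.add_argument("--forks", "-f", type=int, default=10)
            parser.add_argument("--dry-run", action="store_true")
            parser.add_argument("--no-ansible-output", action="store_true")
            args = parser.parse_args()

            if args.dry_run and not args.playbook:
                parser.error("--dry-run requires --playbook")

            common = dict(private_data_dir="/tmp/mgr-ansible-ssh",  # artifacts land here
                          inventory=args.inventory, forks=args.forks,
                          limit=args.limit, quiet=args.no_ansible_output)
            if args.playbook:
                result = ansible_runner.run(playbook=args.playbook,
                                            cmdline="--check" if args.dry_run else None,
                                            **common)
            else:
                result = ansible_runner.run(host_pattern="all", module="shell",
                                            module_args=args.run, **common)
            raise SystemExit(result.rc)

        if __name__ == "__main__":
            main()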
    

    Secondary/Stretched Goals (if time permits):

    1. Add pretty output formatting (success/failure summary per host).
    2. Implement basic logging of executed commands and results.
    3. Introduce safety checks for risky commands (shutdown, rm -rf, etc.).
    4. Package the tool so it can be installed with pip or stored internally.

    Resources

    Collaboration is welcome from anyone interested in CLI tooling, automation, or distributed systems. Skills that would be particularly valuable include:

    1. Python especially around CLI dev (argparse, click, rich)


    Ansible to Salt integration by vizhestkov

    Description

    We already have an initial integration of Ansible in Salt, with the possibility to run playbooks from the salt-master on a salt-minion used as an Ansible control node.

    In this project I want to check whether it is possible to make Ansible work over the Salt transport: basically, run Ansible playbooks through the existing, established Salt (ZeroMQ) transport, without using SSH at all.

    It could be a good solution for end users to reuse the Ansible playbooks and modules they are used to, with no complex configuration effort, on top of an existing Salt (or Uyuni/SUSE Multi-Linux Manager) infrastructure.
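
    As a starting point, the existing integration can already be driven from the salt-master via the ansiblegate execution module; a sketch with a hypothetical minion id, playbook and run directory (the project would then replace the SSH hop Ansible makes from the control node with the Salt transport itself):

        import salt.client

        # Assumes this runs on the salt-master with a working master config.
        local = salt.client.LocalClient()

        # Run an Ansible playbook on the minion acting as the Ansible control node.
        result = local.cmd(
            "control-node",                    # hypothetical minion id
            "ansible.playbooks",
            ["site.yml"],                      # hypothetical playbook
            kwarg={"rundir": "/srv/ansible"},  # hypothetical run directory
        )
        print(result)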

    Goals

    • [v] Prepare the testing environment with Salt and Ansible installed
    • [v] Discover Ansible codebase to figure out possible ways of integration
    • [v] Create Salt/Uyuni inventory module
    • [v] Make basic modules work without a separate SSH connection, reusing the existing Salt connection instead
    • [v] Test some of the most basic playbooks

    Resources

    GitHub page

    Video of the demo


    Casky – Lightweight C Key-Value Engine with Crash Recovery by pperego

    Description

    Casky is a lightweight, crash-safe key-value store written in C, designed for fast storage and retrieval of data with a minimal footprint. Built using Test-Driven Development (TDD), Casky ensures reliability while keeping the codebase clean and maintainable. It is inspired by Bitcask and aims to provide a simple, embeddable storage engine that can be integrated into microservices, IoT devices, and other C-based applications.

    Objectives:

    • Implement a minimal key-value store with append-only file storage (see the concept sketch after this list).
    • Support crash-safe persistence and recovery.
    • Expose a simple public API: store(key, value), load(key), delete(key).
    • Follow TDD methodology for robust and testable code.
    • Provide a foundation for future extensions, such as in-memory caching, compaction, and eventual integration with vector-based databases like PixelDB.
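
    To illustrate the append-only design (shown here in Python for brevity; Casky itself is C, and none of these names come from the Casky codebase): records are appended to a log and an in-memory "keydir" maps each key to the offset of its latest value, so crash recovery is simply a replay of the log.

        import os

        class TinyCask:
            def __init__(self, path):
                self.keydir = {}             # key -> (offset, length) of latest value
                self.f = open(path, "a+b")
                self._rebuild()

            def _rebuild(self):
                # Crash recovery: replay the append-only log to rebuild the keydir.
                self.f.seek(0)
                while True:
                    header = self.f.readline()
                    if not header:
                        break
                    klen, vlen = map(int, header.split())
                    key = self.f.read(klen)
                    offset = self.f.tell()
                    value = self.f.read(vlen)
                    self.f.read(1)           # trailing newline
                    if vlen == 0:
                        self.keydir.pop(key, None)   # tombstone
                    else:
                        self.keydir[key] = (offset, vlen)

            def put(self, key, value):
                # Append "<klen> <vlen>\n<key><value>\n" and remember the value offset.
                self.f.seek(0, os.SEEK_END)
                self.f.write(b"%d %d\n" % (len(key), len(value)))
                self.f.write(key)
                offset = self.f.tell()
                self.f.write(value + b"\n")
                self.f.flush()
                os.fsync(self.f.fileno())    # make the write crash-safe
                self.keydir[key] = (offset, len(value))

            def get(self, key):
                offset, length = self.keydir[key]
                self.f.seek(offset)
                return self.f.read(length)

            def delete(self, key):
                self.put(key, b"")           # write a tombstone...
                self.keydir.pop(key, None)   # ...and drop the key from memory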

    Why This Project is Interesting:

    Casky combines low-level C programming with modern database concepts, making it an ideal playground to explore storage engines, crash safety, and performance optimization. It’s small enough to complete during Hackweek, yet it provides a solid base for future experiments and more complex projects.

    Goals

    • Working prototype with append-only storage and memtable.
    • TDD test suite covering core functionality and recovery.
    • Demonstration of basic operations: insert, load, delete.
    • Optional bonus: LRU caching, file compaction, performance benchmarks.

    Future Directions:

    After Hackweek, Casky can evolve into a backend engine for projects like PixelDB, supporting vector storage and approximate nearest neighbor search, combining low-level performance with cutting-edge AI retrieval applications.

    Resources

    The Bitcask paper: https://riak.com/assets/bitcask-intro.pdf The Casky repository: https://github.com/thesp0nge/casky

    Day 1

    [0.10.0] - 2025-12-01

    Added

    • Core in-memory KeyDir and EntryNode structures
    • API functions: casky_open, casky_close, casky_put, casky_get, casky_delete
    • Hash function: casky_djb2_hash_xor
    • Error handling via casky_errno
    • Unit tests for all APIs using standard asserts
    • Test cleanup of temporary files

    Changed

    • None (first MVP)

    Fixed

    • None (first MVP)

    Day 2

    [0.20.0] - 2025-12-02


    Collection and organisation of information about Bulgarian schools by iivanov

    Description

    To achieve this, it will be necessary to:

    • Collect/download raw data from various government and non-governmental organizations
    • Clean up the raw data and organise it in some kind of database.
    • Create tool to make queries easy.
    • Or perhaps dump all data into AI and ask questions in natural language.

    Goals

    By selecting a particular school, information like this will be provided:

    • School scores on national exams.
    • School scores from the external evaluations exams.
    • School town, municipality and region.
    • Employment rate in a town or municipality.
    • Average health of the population in the region.

    Resources

    Some of these are available only in Bulgarian.

    • https://danybon.com/klasazia
    • https://nvoresults.com/index.html
    • https://ri.mon.bg/active-institutions
    • https://www.nsi.bg/nrnm/ekatte/archive

    Results

    • Information about all Bulgarian schools with their scores from recent years, cleaned and organised into SQL tables
    • Information about all Bulgarian villages, cities, municipalities and districts, cleaned and organised into SQL tables
    • Census information about all Bulgarian villages and cities since the beginning of this century, cleaned and organised into SQL tables.
    • Information about religion and ethnicity for all Bulgarian municipalities, cleaned and organised into SQL tables.
    • Data successfully loaded into a locally running Ollama with the help of Vanna.AI
    • Seems to be usable.

    TODO

    • Add more statistical information about municipalities and ....

    Code and data


    Work on kqlite (Lightweight remote SQLite with high availability and auto failover). by epenchev

    Description

    Continue the work on kqlite (Lightweight remote SQLite with high availability and auto failover).
    It's a solution for applications that require High Availability but don't need all the features of a complete RDBMS and can fit SQLite in their use case.
    Also, kqlite can be considered as a lightweight storage backend for K8s (https://docs.k3s.io/datastore) and the edge, allowing HA with only 2 nodes.

    Goals

    Push kqlite to a beta version.
    Provide kqlite as a library for Go programs.

    Resources

    https://github.com/kqlite/kqlite


    GRIT: GRaphs In Time by fvanlankvelt

    Description

    The current implementation of the Time-Travelling Topology database, StackGraph, has served SUSE Observability well over the years. But it depends on a number of complex components (Zookeeper, HDFS, HBase, Tephra), which lead to a large number of failure scenarios and parameters to tweak for optimal performance.

    The goal of this project is to take the high-level requirements (time-travelling topology, querying over time, transactional changes to topology, scalability) and design/prototype key components, to see where they would lead us if we were to start from scratch today.

    An example would be to use RocksDB to persist topology history. Its user-defined timestamps seem to match time-travelling well, and it has transaction support with fine-grained conflict detection.

    Goals

    Determine the feasibility of implementing the model on a whole new architecture. See how to model the graph and its history such that updates and querying are performant and transactional conflicts are minimized. Build a prototype to validate the model.

    Resources

    Backend developers, preferably experienced in distributed systems. Programming language: Scala 3, with some C++ for low-level work.

    Progress

    The project has started on GitHub: GRaphs In Time, a C++ project that

    • embeds RocksDB for persistence,
    • uses (nu)Raft for replication/consensus,
    • supports large transactions, with
    • SNAPSHOT isolation
    • time-travel
    • graph operations (add/remove vertex/edge, indexes)


    Sim racing track database by avicenzi

    Description

    Do you wonder which tracks are available in each sim racing game? Wonder no more.

    Goals

    Create a simple website that includes details about sim racing games.

    The website should be static and built with Alpine.JS and TailwindCSS. Data should be consumed from JSON, easily done with Alpine.JS.

    The main goal is to gather track information, because tracks vary by game. Older games might have older layouts, and newer games might have up-to-date layouts. Some games include historical layouts, some are laser scanned. Many tracks are available as DLCs.

    Initially include official tracks from:

    • ACC
    • iRacing
    • PC2
    • LMU
    • Raceroom
    • Rennsport

    These games have a short list of tracks and DLCs.

    Resources

    The hardest part is collecting information about tracks in each game. Active games usually have information on their website or even on Steam. Older games might be on Fandom or a Wiki. Real track information can be extracted from Wikipedia or the track website.