Let's make reposync faster

Every day,

Multiple times a day,

Every SUSE Manager customer,

Every Red Hat Satellite customer,

Every Spacewalk user,

And every Uyuni user...

...spends a lot of CPU and wall clock time in reposyncing.

Intro

A lot of that time is wasted by an old, overcomplicated and, most of all, inefficient algorithm that contributes heavily to heat dissipation and user patience depletion!

HackWeek hackers, we can change that!

Past attempts only partially succeeded: https://trello.com/c/inl9Wu0p/40-reduce-global-warming, https://trello.com/c/dYAR0J8K/13-reduce-global-warming-take-2

But we have better tools now!

Tooling

py-spy to the rescue: introduction

Install with:

curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py
pip install py-spy

Trace a running spacewalk-repo-sync with:

py-spy --nonblocking --pid `ps aux | grep spacewalk-repo-sync | grep -v grep | awk '{print $2}'` --flame output.svg --duration 10
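Note: this flag syntax matches the py-spy releases current at the time of writing; in py-spy 0.2 and later the CLI moved to subcommands, so the equivalent call would be something like:

py-spy record --nonblocking --duration 10 --output output.svg --pid $(pgrep -f spacewalk-repo-sync)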

Look at the results with:

python -m SimpleHTTPServer 8666
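On Python 3, where SimpleHTTPServer was renamed, the equivalent is:

python3 -m http.server 8666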

And point your browser to http://<hostname>:8666/output.svg. Here is one such example:

[Flame Graph image]

Current remarks:

  • we currently spend a lot of time in lookup functions
  • lookup functions SELECT rows at every INSERT, so every new row costs an extra database round trip (see the sketch below)
  • this is especially bad for checksums, capabilities and some other cases
  • the design comes from the Oracle days and can probably be changed!
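To make the SELECT-at-every-INSERT point concrete, here is a minimal sketch of the kind of change that could help, assuming a simplified checksum table with a unique constraint on (checksum_type, checksum); the real Spacewalk/Uyuni schema and code are more involved. On PostgreSQL 9.5 and later, the SELECT-then-INSERT dance can be collapsed into a single upsert that always returns the row id:

import psycopg2

# 'checksum' is a simplified stand-in for the real lookup tables.
conn = psycopg2.connect("dbname=susemanager")  # hypothetical DSN
cur = conn.cursor()

# Current pattern: SELECT first, INSERT only on a miss. Two round trips
# per row, plus a race window between the two statements.
def checksum_id_select_then_insert(checksum_type, checksum):
    cur.execute("SELECT id FROM checksum"
                " WHERE checksum_type = %s AND checksum = %s",
                (checksum_type, checksum))
    row = cur.fetchone()
    if row:
        return row[0]
    cur.execute("INSERT INTO checksum (checksum_type, checksum)"
                " VALUES (%s, %s) RETURNING id",
                (checksum_type, checksum))
    return cur.fetchone()[0]

# Possible replacement: one statement, one round trip. The no-op
# DO UPDATE makes RETURNING yield the id of a pre-existing row too.
def checksum_id_upsert(checksum_type, checksum):
    cur.execute("INSERT INTO checksum (checksum_type, checksum)"
                " VALUES (%s, %s)"
                " ON CONFLICT (checksum_type, checksum)"
                " DO UPDATE SET checksum = EXCLUDED.checksum"
                " RETURNING id",
                (checksum_type, checksum))
    return cur.fetchone()[0]

Besides halving the round trips, the upsert is atomic, so concurrent reposyncs inserting the same checksum no longer race each other.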

Looking for hackers with the skills:

python, performance, databases, postgresql

This project is part of:

Hack Week 18

Comments

    • ebischoff, over 5 years ago:

      See also this fate request: "Have a synchronization that does not take hours (or days)".

    • joachimwerner, over 5 years ago:

      Related, but probably out of scope for your hack week project: once we've optimized the syncing code, I think we could also reduce the scope of what needs to be synced for many customers. Especially for pilots, but also in real life, many of the older updates (e.g. several complete kernels, several Java updates) are never going to be needed, but are still synced. We should investigate how we can offer something like a "JeR" ("Just enough Repo") to speed things up even more. This could be done server-side (provide alternative repo metadata for a "current stuff only" repo) or client-side (but then some dependency resolution magic is probably needed).

    • chasecrum, over 5 years ago:

      Any update on how this turned out?

    Similar Projects

    Team Hedgehogs' Data Observability Dashboard by gsamardzhiev

    Description

    This project aims to develop a comprehensive Data Observability Dashboard that provides insights into key aspects of data quality and reliability. The dashboard will track:

    • Data Freshness: Monitor when data was last updated and flag potential delays.
    • Data Volume: Track table row counts to detect unexpected surges or drops in data.
    • Data Distribution: Analyze data for null values, outliers, and anomalies to ensure accuracy.
    • Data Schema: Track schema changes over time to prevent breaking changes.

    The dashboard will also keep historical data, supporting proactive data management and enhancing data trust across the data function.
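    A freshness check of this kind can be very small. A minimal sketch, assuming the load jobs record a timestamptz per table in an invented load_metadata table (Redshift speaks the PostgreSQL wire protocol, so psycopg2 works):

    from datetime import datetime, timedelta, timezone

    import psycopg2

    FRESHNESS_SLA = timedelta(hours=24)

    # 'load_metadata' and its columns are invented names; the idea is that
    # every Glue/Airflow load job stamps its table here after each run.
    conn = psycopg2.connect("dbname=analytics")  # hypothetical DSN
    cur = conn.cursor()
    cur.execute("SELECT table_name, last_loaded_at FROM load_metadata")

    now = datetime.now(timezone.utc)
    for table_name, last_loaded_at in cur.fetchall():
        if now - last_loaded_at > FRESHNESS_SLA:
            print("STALE: {} last loaded at {}".format(table_name, last_loaded_at))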

    Goals

    Although the final goal is to create a Power BI dashboard that we are able to monitor, our goals are to:

    1. Create the necessary tables that track the relevant metadata about our current data.
    2. Automate the process so it runs in a timely manner.

    Resources

    AWS Redshift, AWS Glue, Airflow, Python, SQL

    Why Hedgehogs?

    Because we like them.


    Run local LLMs with Ollama and explore possible integrations with Uyuni by PSuarezHernandez

    Description

    Using Ollama you can easily run different LLM models on your local computer. This project is about exploring Ollama, testing different LLMs and trying to fine-tune them, as well as exploring potential ways of integration with Uyuni (see the sketch below).
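    As a taste of how small the moving parts are: once the Ollama server runs locally, prompting a model is one HTTP call to its REST API (the model name is just an example and must have been pulled beforehand with ollama pull):

    import requests

    # Ollama listens on localhost:11434 by default.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3",
              "prompt": "In one paragraph: what does Uyuni do?",
              "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    print(resp.json()["response"])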

    Goals

    • Explore Ollama
    • Test different models
    • Fine-tune models
    • Explore possible integrations with Uyuni

    Resources

    • https://ollama.com/
    • https://huggingface.co/
    • https://apeatling.com/articles/part-2-building-your-training-data-for-fine-tuning/


    Symbol Relations by hli

    Description

    There are tools to build function call graphs based on parsing source code, for example, cscope.

    This project aims to achieve a similar goal by directly parsing the disassembly (i.e., objdump output) of a compiled binary. The assembly code is what the CPU sees, and is therefore more "direct". This may be useful in certain scenarios, such as gdb/crash debugging.
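    To give a flavor of the approach (a toy sketch, not the project's actual code, which is linked below): building a call graph from objdump output mostly amounts to remembering which function's listing you are in and collecting the targets of call instructions:

    import re
    import subprocess
    from collections import defaultdict

    # Function headers in objdump -d output, e.g. "0000000000401136 <main>:"
    FUNC_RE = re.compile(r'^[0-9a-f]+ <([^>]+)>:$')
    # Direct x86 calls, e.g. "callq  401021 <helper>"; stops before "+0x..." offsets
    CALL_RE = re.compile(r'\bcall[ql]?\s+[0-9a-f]+ <([^>+]+)')

    def call_graph(binary):
        """Map each function to the set of functions it directly calls."""
        out = subprocess.run(["objdump", "-d", binary], check=True,
                             stdout=subprocess.PIPE,
                             universal_newlines=True).stdout
        edges = defaultdict(set)
        current = None
        for line in out.splitlines():
            header = FUNC_RE.match(line)
            if header:
                current = header.group(1)
                continue
            call = CALL_RE.search(line)
            if current and call:
                edges[current].add(call.group(1))
        return edges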

    A detailed description and demos can be found in the README file.

    Supports x86 for now (because my customers only use x86 machines), but support for other architectures can be added easily.

    Tested with python3.6

    Goals

    Any comments are welcome.

    Resources

    https://github.com/lhb-cafe/SymbolRelations

    symrellib.py: implements the symbol relation graph and the disassembly parser

    symrel_tracer*.py: implements tracing (the -t option)

    symrel.py: the CLI parser


    ClusterOps - Easily install and manage your personal kubernetes cluster by andreabenini

    Description

    ClusterOps is a Kubernetes installer and operator designed to streamline the initial configuration and ongoing maintenance of Kubernetes clusters. The focus of this project is primarily on personal or local installations. However, the goal is to expand its use to encompass all installations of Kubernetes for local development purposes.
    It simplifies cluster management by automating tasks and providing a single user-friendly YAML-based configuration file, config.yml (see the illustrative sketch below).
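    Purely to illustrate the single-file idea, a hypothetical config.yml could look like this (all keys are invented for illustration, not ClusterOps' actual schema):

    # hypothetical config.yml: invented keys, for illustration only
    cluster:
      name: local-dev
      engine: k3s            # whichever engine your package manager installed
    features:
      dashboard: true        # Kubernetes official dashboard plugin
      kubevirt: true
      containerhub: true     # local registry for custom images
    maintenance:
      auto_upgrade: weekly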

    Overview

    • Simplified Configuration: Define your desired cluster state in a simple YAML file, and ClusterOps will handle the rest.
    • Automated Setup: Automates initial cluster configuration, including network settings, storage provisioning, special requirements (for example GPUs) and essential components installation.
    • Ongoing Maintenance: Performs routine maintenance tasks such as upgrades, security updates, and resource monitoring.
    • Extensibility: Easily extend functionality with custom plugins and configurations.
    • Self-Healing: Detects and recovers from common cluster issues, ensuring stability, idempotence and reliability. The same operation can be performed multiple times without changing the result.
    • Discreet: Works only on what it knows; if you are manually configuring parts of your Kubernetes cluster and that configuration does not interfere with it, you can happily continue to work on those parts and use this tool only for what is needed.

    Features

    • Distribution and engine independence. Install your favorite Kubernetes engine with your package manager, execute one script and you'll have a complete working environment at your disposal.
    • Basic config approach. One single config.yml file with configuration requirements (add/remove features): human readable, plain and simple. All fancy configs managed automatically (ingress, balancers, services, proxy, ...).
    • Local Builtin ContainerHub. The default installation provides a fully configured ContainerHub available locally along with the Kubernetes installation. This configuration allows the user to build, upload and deploy custom container images as if they were provided from external sources. Internet public sources are still available, but local development can be kept on this localhost server. The builtin ClusterOps operator will be fetched from this ContainerHub registry too.
    • Kubernetes official dashboard installed as a plugin, others planned too (k9s for example).
    • Kubevirt plugin installed and properly configured. Unleash the power of classic virtualization (KVM+QEMU) on top of Kubernetes and manage your entire system from there, libvirtd and virsh libs are required.
    • One operator to rule them all. The installation script configures your machine automatically during installation and adds one kubernetes operator to manage your local cluster. From there the operator takes care of the cluster on your behalf.
    • Clean installation and removal. Just test it, when you are done just use the same program to uninstall everything without leaving configs (or pods) behind.

    Planned features (Wishlist / TODOs)

    • Containerized Data Importer (CDI). Persistent storage management add-on for Kubernetes to provide a declarative way of building and importing Virtual Machine Disks on PVCs for KubeVirt VMs.


    Make more sense of openQA test results using AI by livdywan

    Description

    AI has the potential to help with something many of us spend a lot of time doing: making sense of openQA logs when a job fails.

    User Story

    Allison Average has a puzzled look on their face while staring at log files that seem to make little sense. Is this a known issue, something completely new or maybe related to infrastructure changes?

    Goals

    • Leverage a chat interface to help Allison
    • Create a model from scratch based on data from openQA
    • Proof of concept for automated analysis of openQA test results

    Bonus

    • Use AI to suggest solutions to merge conflicts
      • This would need a merge conflict editor that can suggest solving the conflict
    • Use image recognition for needles

    Resources

    Timeline

    Day 1

    • Conversing with open-webui to teach me how to create a model based on openQA test results

    Day 2

    Highlights

    • I briefly compared models to see if they would make me more productive. Between llama, gemma and mistral there was no amazing difference in the results for my use case.
    • Convincing the chat interface to produce code specific to my use case required very explicit instructions.
    • Asking for advice on how to use open-webui itself better was frustratingly unfruitful both in trivial and more advanced regards.
    • Documentation on the source materials used by LLMs, and tools for checking this, seems virtually non-existent - specifically whether a logo can be generated based on particular licenses.

    Outcomes

    • Chat-interface-supported development provides good starting points, and open-webui being open source makes it more flexible than Gemini, although currently some fancy features such as grounding and generated podcasts are missing.
    • Allison still has to be very experienced with openQA to use a chat interface for test review. Publicly available system prompts would make that easier, though (see the sketch below).
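    As one possible shape for such shared prompts (a sketch using Ollama's Modelfile format; the base model and the prompt text are assumptions, not artifacts of this project):

    # Modelfile: bake a shared system prompt into a derived model
    FROM llama3
    SYSTEM """You review failed openQA jobs. Given a log excerpt, name the
    failing test module and list likely known-issue candidates before
    suggesting a new bug report."""

    It would be built and tried with: ollama create openqa-reviewer -f Modelfile, then ollama run openqa-reviewer.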