There are several use cases where it's beneficial to be able to automatically rearrange VM instances in a cloud into a different placement. These usually fall into one of two categories, which could be called defragmentation and rebalancing:

  • Defragmentation - condense VMs onto fewer physical VM host machines (conceptually similar to VMware DPM). Example use cases:
    • Reduce power consumption
    • Increase CPU / RAM / I/O utilization (e.g. by over-committing and/or memory page sharing)
    • Evacuate physical servers
      • for repurposing
      • for maintenance (BIOS upgrades, re-cabling etc.)
      • to protect SLAs, e.g. when SMART monitors indicate potential imminent hardware failure, or when HVAC failures are likely to cause servers to shut down due to over-heating
  • Rebalancing - spread the VMs evenly across as many physical VM host machines as possible (conceptually similar to VMware DRS). Example use cases:
    • Optimise workloads for performance by reducing CPU / I/O hotspots
    • Maximise headroom on each physical machine
    • Reduce thermal hotspots in order to reduce power consumption

Custom rearrangements may be required according to other IT- or business-driven policies, e.g. only rearrange VM instances relating to a specific workload, in order to increase locality of reference, reduce latency, respect availability zones, or facilitate other out-of-band workflows.

It is clear from the above that VM placement policies are likely to vary greatly across clouds, and sometimes even within a single cloud. OpenStack Compute (nova) has fairly sophisticated scheduling capabilities which can be configured to implement some of the above policies on an incremental basis, i.e. every time a VM instance is started or migrated, the destination VM host can be chosen according to filters and weighted cost functions. However, this approach is somewhat limited, because the placement policies are applied cloud-wide and only one migration is considered at a time.
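
To make the incremental filter-and-weigh approach concrete, here is a minimal sketch in Python. It deliberately does not use nova's real scheduler API; the host attributes, the RAM filter, and the cost function are all illustrative assumptions.

    # Minimal sketch of filter-and-weigh host selection (not nova's real API).
    # Host attributes, the filter, and the cost function are illustrative only.

    hosts = [
        {"name": "host1", "free_ram_mb": 2048, "cpu_load": 0.7},
        {"name": "host2", "free_ram_mb": 8192, "cpu_load": 0.2},
        {"name": "host3", "free_ram_mb": 4096, "cpu_load": 0.5},
    ]

    def ram_filter(host, vm):
        """Filter: the host must have enough free RAM for the VM."""
        return host["free_ram_mb"] >= vm["ram_mb"]

    def weighted_cost(host):
        """Weigher: prefer lightly loaded hosts with plenty of free RAM."""
        return host["cpu_load"] - 0.0001 * host["free_ram_mb"]

    def schedule(vm):
        candidates = [h for h in hosts if ram_filter(h, vm)]
        if not candidates:
            raise RuntimeError("no host passes the filters")
        return min(candidates, key=weighted_cost)

    print(schedule({"ram_mb": 4096})["name"])  # -> host2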

Since OpenStack is rapidly evolving to the point where a VM's network and storage dependencies can be live-migrated along with the workload in a near-seamless fashion, it is advantageous to develop mechanisms for implementing finer-grained placement policies, where not only is VM rearrangement performed automatically, but the policies themselves can be varied dynamically over time as workload requirements change.

Developing algorithms to determine optimal placement is distinctly non-trivial. For example, the defragmentation scenario above is a complex variant of the bin packing problem, which is NP-hard. The following constraints add significant complexity to the problem:

  • A useful real world solution should take into account not only the RAM footprint of the VMs, but also CPU, disk, and network.

  • It also needs to ensure that SLAs are maintained whilst any rearrangement takes place.

  • If the cloud is sufficiently near capacity, it may not be possible to rearrange the VMs from their current placement to a more optimal placement without first shutting down some VMs, which could be prohibited by the SLAs.

  • Even if the arrangement is achievable purely via a sequence of live migrations, the algorithm must also be sensitive to the performance impact on running workloads when performing multiple live migrations: each live migration requires an intensive burst of network I/O in order to synchronize the VM's memory contents between the source and target hosts, followed by a momentary freeze of the VM as it flips from the source to the target. This trade-off between optimal resource utilization and service availability means that a sub-optimal final placement may be preferable to an optimal one.

  • In the case where the hypervisor is capable of sharing memory pages between VMs, the algorithm should try to place together VMs which are likely to share memory pages (e.g. VMs running the same OS platform, OS version, software libraries, or applications). A research paper published in 2011 demonstrated that VM packing which optimises placement in this fashion can be approximated in polynomial time, achieving a 32% to 50% reduction in servers and a 25% to 57% reduction in memory footprint compared to sharing-oblivious algorithms. (For contrast, a simple sharing-oblivious baseline is sketched below.)
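
As a point of reference for the sharing-aware approach above, here is a minimal sharing-oblivious baseline: first-fit decreasing (FFD) on RAM footprint, the classic bin-packing approximation applied to the defragmentation scenario. The VM and host figures are made-up assumptions.

    # First-fit decreasing (FFD) on RAM: a classic sharing-oblivious
    # approximation for the defragmentation scenario. Figures are made up.

    def pack_ffd(vms, host_ram_mb):
        """Pack VMs (name -> ram_mb) onto identical hosts using FFD."""
        hosts = []  # each host: {"free": remaining MB, "vms": [names]}
        for name, ram in sorted(vms.items(), key=lambda kv: -kv[1]):
            target = next((h for h in hosts if h["free"] >= ram), None)
            if target is None:
                target = {"free": host_ram_mb, "vms": []}
                hosts.append(target)
            target["free"] -= ram
            target["vms"].append(name)
        return hosts

    vms = {"db": 8192, "web1": 2048, "web2": 2048, "cache": 4096, "ci": 1024}
    for i, host in enumerate(pack_ffd(vms, host_ram_mb=16384)):
        print(f"host{i}: {host['vms']} (free {host['free']} MB)")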

As noted by the above paper, this area of computer science is still evolving. There is one constant, however: any rearrangement solution must provide not only a final VM placement optimised according to the chosen constraints, but also a sequence of migrations leading to it from the current placement. There will often be multiple migration sequences reaching the optimised placement from the current one, and their efficiency can vary widely. In other words, there are two questions which need answering:

  1. Given a starting placement A, which is the best final placement B to head for?
  2. What's the best way to get from A to B?

The above considerations strongly suggest that the first question is much harder to answer than the second. I propose that by adopting a divide-and-conquer approach, solving the second may simplify the first. Decoupling the two should also provide a mechanism for comparatively evaluating the effectiveness of potential answers to the first. Another bonus of this decoupling is that the path-finding algorithm should also be able to discover opportunities for parallelizing live migrations when walking the path, so that the target placement B can be reached more quickly.
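
As a sketch of what this decoupling buys, the toy planner below takes the current placement A and the target placement B as given, and searches only for a migration order, greedily picking moves whose destination host currently has room. The greedy strategy and the data shapes are my own illustrative assumptions; where this sketch gives up, a real planner would need a transient 'pivot' migration through a third host.

    # Toy planner: given placements A and B, find an ordering of migrations
    # that never overfills a host. The greedy strategy and data shapes are
    # illustrative assumptions only.

    def plan_migrations(current, target, ram, capacity):
        """current/target: vm -> host; ram: vm -> MB; capacity: host -> MB."""
        placement = dict(current)
        used = {host: 0 for host in capacity}
        for vm, host in placement.items():
            used[host] += ram[vm]
        moves = []
        pending = {vm for vm in placement if placement[vm] != target[vm]}
        while pending:
            movable = [vm for vm in pending
                       if used[target[vm]] + ram[vm] <= capacity[target[vm]]]
            if not movable:
                raise RuntimeError("stuck: a real planner would pivot a VM "
                                   "through a third host here")
            vm = movable[0]
            src, dst = placement[vm], target[vm]
            used[src] -= ram[vm]
            used[dst] += ram[vm]
            placement[vm] = dst
            moves.append((vm, src, dst))
            pending.discard(vm)
        return moves

Note that two moves whose (source, destination) host pairs are disjoint touch independent hosts, which is exactly the kind of parallelization opportunity mentioned above.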

Therefore, for this Hack Week project, I intend to design and implement an algorithm which reliably calculates a reasonably optimal sequence of VM migrations from a given initial placement to a given final placement. My goals are as follows:

  • If there is any migration path, the algorithm must find at least one.
  • It should find a path which is reasonably optimal with respect to the migration cost function. In prior work, I already tried Dijkstra's shortest path algorithm and demonstrated that an exhaustive search for the shortest path has intolerably high complexity.
  • The migration cost function should be pluggable. Initially it will simply consider the cost as proportional to the RAM footprint of the VM being migrated.
  • For now, use a smoke-and-mirrors in-memory model of the cloud's state. Later, the code can be ported to consume the nova API.
  • In light of the above, the implementation should be in Python.
  • Determination of which states are sane should be pluggable. For example, the algorithm should not hardcode any assumptions about whether physical servers can over-commit CPU or RAM. (A sketch of these plugin points follows this list.)
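
A minimal sketch of how the two plugin points above might look: a RAM-proportional cost model and a sanity predicate are passed into the planner rather than hardcoded. The class names and the over-commit ratio are my own illustrative assumptions.

    # Sketch of the two plugin points: migration cost and state sanity.
    # Class names and the over-commit ratio are illustrative assumptions.

    class RamCost:
        """Initial cost model: cost proportional to the VM's RAM footprint."""
        def cost(self, vm_ram_mb, src_host, dst_host):
            return vm_ram_mb

    class NoRamOvercommit:
        """Sanity check: total VM RAM on a host must not exceed physical RAM
        multiplied by a configurable over-commit ratio."""
        def __init__(self, overcommit_ratio=1.0):
            self.ratio = overcommit_ratio
        def is_sane(self, host_ram_mb, vm_ram_mbs):
            return sum(vm_ram_mbs) <= host_ram_mb * self.ratio

    # A planner would receive these as collaborators, e.g.:
    #   planner = Planner(cost_model=RamCost(), sanity=NoRamOvercommit(1.5))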

Looking for hackers with the skills:

cloud virtualization orchestration openstack performance python

This project is part of:

Hack Week 10

Activity

  • about 12 years ago: aspiers liked this project.
  • about 12 years ago: aspiers started this project.
  • about 12 years ago: aspiers added keyword "orchestration" to this project.
  • about 12 years ago: aspiers added keyword "openstack" to this project.
  • about 12 years ago: aspiers added keyword "performance" to this project.
  • about 12 years ago: aspiers added keyword "python" to this project.
  • about 12 years ago: aspiers added keyword "cloud" to this project.
  • about 12 years ago: aspiers added keyword "virtualization" to this project.
  • about 12 years ago: aspiers originated this project.

  • Comments

    • aspiers
      about 12 years ago by aspiers

      I completed this project, and intend to publish the code and also blog about it. I uploaded a YouTube video demoing what the code can do.

    Similar Projects

    Create a Cloud-Native policy engine with notifying capabilities to optimize resource usage by gbazzotti

    Description

    The goal of this project is to begin the initial phase of development of an all-in-one Cloud-Native Policy Engine that notifies resource owners when their resources infringe predetermined policies. It was inspired by a current issue in the CES-SRE Team, where existing solutions did not quite match the needs of the specific workloads running in the Public Cloud Team space.

    The initial architecture can be checked out on the Repository listed under Resources.

    Among the features that will differentiate this project from other monitoring/notification systems:

    • Pre-defined sensible policies written at the software level, avoiding the learning curve of requiring users to write their own policies
    • All-in-one functionality: logging, mailing, and other actions require no additional plugins/packages
    • Easy account management, with all required configuration parsed from a single JSON file
    • Eliminate integrations by not requiring metrics to go through a data aggregator

    Goals

    • Create a minimal working prototype following the workflow specified on the documentation
    • Provide instructions on installation/usage
    • Work on email notifying capabilities

    Resources


    Reassess HiFive Premier P550 board (for RISC-V virtualization) by a_faerber

    Description

    With growing interest in the RISC-V instruction set architecture, we need to re-evaluate ways of building packages for it:

    Currently openSUSE OBS uses x86_64 build workers running QEMU userspace-level (syscall) emulation inside KVM VMs. Occasionally this setup causes build failures, due to timing differences or incomplete emulation. Andreas Schwab and others have collected workarounds in projects like openSUSE:Factory:RISCV to deal with some of those issues.

    Ideally we would be using native riscv64 KVM VMs instead. This requires CPUs with the H extension. Two generally available development boards feature the ESWIN 7700X System-on-Chip with SiFive P550 CPUs: the HiFive Premier P550 and the Milk-V Megrez. We've had access to the HiFive Premier P550 for some time now, but the early version (based on Yocto) had issues with the bootloader, and reportedly later boards were booting to a dracut emergency shell for lack of block device drivers.

    Goals

    • Update the boot firmware
    • Test whether and how far openSUSE Tumbleweed boots

    Results

    • Boot firmware image 2025.11.00 successfully flashed onto board
      • Enables UEFI boot in U-Boot by default
      • U-Boot's embedded Flat Device Tree lacks a timebase-frequency property, required for recent (6.16.3) mainline kernels (panic leading to reset, visible via earlycon=sbi)
    • Tested eswin/eic7700-hifive-premier-p550.dtb from Ubuntu 2025.11.00 image
      • Allows booting past the above panic, but times out in the JeOS image while waiting for a block device, dropping to a dracut emergency shell
      • No devices shown in lsblk; kernel 6.16 still appears to lack the necessary device drivers

    Resources


    SUSE Virtualization (Harvester): VM Import UI flow by wombelix

    Description

    SUSE Virtualization (Harvester) has a vm-import-controller that allows migrating VMs from VMware and OpenStack, but users need to write manifest files and apply them with kubectl to use it. This project is about adding the missing UI pieces to the harvester-ui-extension, making VM Imports accessible without requiring Kubernetes and YAML knowledge.

    VMware and OpenStack admins aren't automatically familiar with Kubernetes and YAML. Implementing the UI part of the VM Import feature makes it easier to use and more accessible. The Harvester Enhancement Proposal (HEP) for the VM Migration controller included a UI flow implementation in its scope. Issue #2274 received multiple comments that a UI integration would be a nice addition, and issue #4663 was created to request the implementation but eventually stalled.

    Right now users need to manually create either VmwareSource or OpenstackSource resources, then write VirtualMachineImport manifests with network mappings and all the other configuration options. Users should be able to do that and track import status through the UI without writing YAML.

    Work during the Hack Week will be done in this fork in a branch called suse-hack-week-25, making progress publicly visible and open for contributions. When everything works out and the branch is in good shape, it will be submitted as a pull request to harvester-ui-extension to get it included in the next Harvester release.

    Testing will focus on VMware since that's what is available in the lab environment (SUSE Virtualization 1.6 single-node cluster, ESXi 8.0 standalone host). Given that this is about UI and surfacing what the vm-import-controller handles, the implementation should work for OpenStack imports as well.

    This project is also a personal challenge to learn vue.js and get familiar with Rancher Extensions development, since harvester-ui-extension is built on that framework.

    Goals

    • Learn Vue.js and Rancher Extensions fundamentals required to finish the project
    • Read and learn from other Rancher UI Extensions code, especially understanding the harvester-ui-extension code base
    • Understand what the vm-import-controller and its CRDs require, and identify ready-to-use components in the Rancher UI Extension API that can be leveraged
    • Implement UI logic for creating and managing VmwareSource / OpenstackSource and VirtualMachineImport resources with all relevant configuration options and credentials
    • Implement UI elements to display VirtualMachineImport status and errors

    Resources

    HEP and related discussion

    SUSE Virtualization VM Import Documentation

    Rancher Extensions Documentation

    Rancher UI Plugin Examples

    Vue Router Essentials

    Vue Router API

    Vuex Documentation


    SUSE KVM Best Practices - Focus on SAP Workloads and Use Cases by roseswe

    Description

    SUSE Best Practices around KVM, especially for SAP workloads. An early Google presentation has already been assembled from various customer projects and SUSE sources.

    Goals

    • Complete a presentation we can reuse in SUSE Consulting projects
    • 2025: Bring it to version 1.00 ready for customers

    Resources

    KVM (virt-manager) images

    SUSE/SAP/KVM Best Practices

    • https://documentation.suse.com/en-us/sles/15-SP6/single-html/SLES-virtualization/
    • SAP Note 1522993 - "Linux: SAP on SUSE KVM - Kernel-based Virtual Machine" && 2284516 - SAP HANA virtualized on SUSE Linux Enterprise hypervisors https://me.sap.com/notes/2284516
    • SUSECon24: [TUTORIAL-1253] Virtualizing SAP workloads with SUSE KVM || https://youtu.be/PTkpRVpX2PM
    • SUSE Best Practices for SAP HANA on KVM - https://documentation.suse.com/sbp/sap-15/html/SBP-SLES4SAP-HANAonKVM-SLES15SP4/index.html


    Extracting, converting and importing VMs from Nutanix into SUSE Virtualization by emendonca

    Description

    The idea is to delve into understanding Nutanix AHV internals: how it stores and runs VMs, and how to extract them in an automated way for importing into a KVM-compatible hypervisor like SUSE Virtualization/Harvester. The final product will be not only documentation but also a working prototype that can be used to automate the process.

    Goals

    1. Document how to create a simple lab with Nutanix AHV Community Edition
    2. Determine the basic elements we need to interact with
    3. Determine the best paths through which to grab the images, balancing speed and complexity
    4. Document possible issues and create a roadmap for tackling them
    5. Decide whether to adapt an existing solution or implement a new one
    6. Implement the solution!

    Resources

    • Similar project I created: https://github.com/doccaz/vm-import-ui
    • Nutanix AHV forums
    • Nutanix technical bulletins


    Q2Boot - A handy QEMU VM launcher by amanzini

    Description

    Q2Boot (Qemu Quick Boot) is a command-line tool that wraps QEMU to provide a streamlined experience for launching virtual machines. It automatically configures common settings like KVM acceleration, virtio drivers, and networking while allowing customization through both configuration files and command-line options.

    The project was originally a personal utility written in D, recently rewritten in idiomatic Go. It lives at the repository https://github.com/ilmanzo/q2boot

    Goals

    Improve the project: test it in different scenarios, address issues, and propose new features. It will benefit from some basic integration testing, enabled by providing small sample disk images.

    Updates

    • Dec 1, 2025 : refactor command line options, added structured logging. Released v0.0.2
    • Dec 2, 2025 : added external monitor via telnet option
    • Dec 4, 2025 : released v0.0.3 with architecture auto-detection
    • Dec 5, 2025 : filing new issues and general polishing. Designing E2E testing

    Resources


    RMT.rs: High-Performance Registration Path for RMT using Rust by gbasso

    Description

    The SUSE Repository Mirroring Tool (RMT) is a critical component for managing software updates and subscriptions, especially for our Public Cloud Team (PCT). In a cloud environment, hundreds or even thousands of new SUSE instances (VPS/EC2) can be provisioned simultaneously. Each new instance attempts to register against an RMT server, creating a "thundering herd" scenario.

    We have observed that the current RMT server, written in Ruby, faces performance issues under this high-concurrency registration load. This can lead to request overhead, slow registration times, and outright registration failures, delaying the readiness of new cloud instances.

    This Hackweek project aims to explore a solution by re-implementing the performance-critical registration path in Rust. The goal is to leverage Rust's high performance, memory safety, and first-class concurrency handling to create an alternative registration endpoint that is fast, reliable, and can gracefully manage massive, simultaneous request spikes.

    The new Rust module will be integrated into the existing RMT Ruby application, allowing us to directly compare the performance of both implementations.

    Goals

    The primary objective is to build and benchmark a high-performance Rust-based alternative for the RMT server registration endpoint.

    Key goals for the week:

    1. Analyze & Identify: Dive into the SUSE/rmt Ruby codebase to identify and map out the exact critical path for server registration (e.g., controllers, services, database interactions).
    2. Develop in Rust: Implement a functionally equivalent version of this registration logic in Rust.
    3. Integrate: Explore and implement a method for Ruby/Rust integration to "hot-wire" the new Rust module into the RMT application. This may involve using FFI, or libraries like rb-sys or magnus.
    4. Benchmark: Create a benchmarking script (e.g., using k6, ab, or a custom tool) that simulates the high-concurrency registration load from thousands of clients.
    5. Compare & Present: Conduct a comparative performance analysis (requests per second, latency, success/error rates, CPU/memory usage) between the original Ruby path and the new Rust path. The deliverable will be this data and a summary of the findings.
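
    As one possible shape for the custom benchmarking tool mentioned in goal 4, here is a minimal Python asyncio sketch that fires many concurrent registration requests and tallies the outcomes. The endpoint path and payload are placeholders, not RMT's actual registration API.

        # Minimal concurrency benchmark sketch (goal 4). The endpoint path and
        # payload are placeholders, not RMT's actual registration API.
        import asyncio
        import time

        import aiohttp

        URL = "http://localhost:4224/register"  # placeholder endpoint

        async def register(session, i):
            try:
                async with session.post(URL, json={"hostname": f"vm-{i}"}) as resp:
                    return resp.status
            except aiohttp.ClientError:
                return None

        async def main(n=1000):
            start = time.monotonic()
            async with aiohttp.ClientSession() as session:
                results = await asyncio.gather(*(register(session, i) for i in range(n)))
            elapsed = time.monotonic() - start
            ok = sum(1 for status in results if status and status < 400)
            print(f"{n} requests in {elapsed:.2f}s ({n / elapsed:.0f} req/s), {ok} succeeded")

        asyncio.run(main())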

    Resources

    • RMT Source Code (Ruby):
      • https://github.com/SUSE/rmt
    • RMT Documentation:
      • https://documentation.suse.com/sles/15-SP7/html/SLES-all/book-rmt.html
    • Tooling & Stacks:
      • RMT/Ruby development environment (for running the base RMT)
      • Rust development environment (rustup, cargo)
    • Potential Integration Libraries:
      • rb-sys: https://github.com/oxidize-rb/rb-sys
      • Magnus: https://github.com/matsadler/magnus
    • Benchmarking Tools:
      • k6 (https://k6.io/)
      • ab (ApacheBench)


    dynticks-testing: analyse perf / trace-cmd output and aggregate data by m.crivellari

    Description

    dynticks-testing is a project started years ago by Frederic Weisbecker. One of its features is to check the actual configuration (isolcpus, irqaffinity, etc.) and give feedback on it.

    An important goal of this tool is to parse the output of trace-cmd / perf and provide more readable data, showing the duration of every event grouped by PID (also showing the CPU number, whether the task has been migrated, etc.).

    An example of data captured on my laptop (incomplete!!):

              -0     [005] dN.2. 20310.270699: sched_wakeup:         WaylandProxy:46380 [120] CPU:005
              -0     [005] d..2. 20310.270702: sched_switch:         swapper/5:0 [120] R ==> WaylandProxy:46380 [120]
    ...
        WaylandProxy-46380 [004] d..2. 20310.295397: sched_switch:         WaylandProxy:46380 [120] S ==> swapper/4:0 [120]
              -0     [006] d..2. 20310.295397: sched_switch:         swapper/6:0 [120] R ==> firefox:46373 [120]
             firefox-46373 [006] d..2. 20310.295408: sched_switch:         firefox:46373 [120] S ==> swapper/6:0 [120]
              -0     [004] dN.2. 20310.295466: sched_wakeup:         WaylandProxy:46380 [120] CPU:004
    

    Output of noise_parse.py:

    Task: WaylandProxy Pid: 46380 cpus: {4, 5} (Migrated!!!)
            Wakeup Latency                                Nr:        24     Duration:          89
            Sched switch: kworker/12:2                    Nr:         1     Duration:           6
    

    My first contribution was around Nov. 2024!

    Goals

    • add more features (eg cpuset)
    • test / bugfix

    Resources

    Progress

    isolcpus and cpusets implemented and merged in master: dynticks-testing.git commit


    Liz - Prompt autocomplete by ftorchia

    Description

    Liz is the Rancher AI assistant for cluster operations.

    Goals

    We want to help users when sending new messages to Liz, by adding an autocomplete feature to complete their requests based on the context.

    Example:

    • User prompt: "Can you show me the list of p"
    • Autocomplete suggestion: "Can you show me the list of p...od in local cluster?"

    Example:

    • User prompt: "Show me the logs of #rancher-"
    • Chat console: It shows a drop-down widget, next to the # character, with the list of available pod names starting with "rancher-".

    Technical Overview

    1. The AI agent should expose a new ws/autocomplete endpoint to proxy autocomplete messages to the LLM.
    2. The UI extension should be able to display prompt suggestions and allow users to apply the autocomplete to the Prompt via keyboard shortcuts.
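
    A minimal sketch of what the ws/autocomplete proxy endpoint could look like, here assuming FastAPI; the agent's actual framework and the LLM call are assumptions, stubbed out below.

        # Minimal sketch of a ws/autocomplete endpoint (FastAPI assumed; the
        # agent's real framework and LLM integration may differ).
        from fastapi import FastAPI, WebSocket, WebSocketDisconnect

        app = FastAPI()

        async def complete_with_llm(prefix: str) -> str:
            """Stub for the proxied LLM call; the real agent would forward
            the prefix plus cluster context to the model."""
            return "od in local cluster?" if prefix.endswith(" p") else ""

        @app.websocket("/ws/autocomplete")
        async def autocomplete(ws: WebSocket):
            await ws.accept()
            try:
                while True:
                    prefix = await ws.receive_text()  # partial user prompt
                    await ws.send_text(await complete_with_llm(prefix))
            except WebSocketDisconnect:
                pass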

    Resources

    GitHub repository


    Update M2Crypto by mcepl

    There are a couple of projects I work on which need my attention and some effort to put them into shape:

    Goal for this Hackweek

    • Put M2Crypto into better shape (most issues closed, all pull requests processed)
    • Have more fun learning jujutsu
    • Play more with Gemini and see how much it helps (or not).
    • Perhaps also (just slightly related) help fix vis to work with LuaJIT, particularly to make vis-lspc work.


    Collection and organisation of information about Bulgarian schools by iivanov

    Description

    To achieve this, it will be necessary to:

    • Collect/download raw data from various government and non-governmental organizations
    • Clean up the raw data and organise it in some kind of database.
    • Create a tool to make queries easy.
    • Or perhaps dump all data into AI and ask questions in natural language.

    Goals

    By selecting a particular school, information like this will be provided:

    • School scores on national exams.
    • School scores from the external evaluations exams.
    • School town, municipality and region.
    • Employment rate in a town or municipality.
    • Average health of the population in the region.

    Resources

    Some of these are available only in Bulgarian.

    • https://danybon.com/klasazia
    • https://nvoresults.com/index.html
    • https://ri.mon.bg/active-institutions
    • https://www.nsi.bg/nrnm/ekatte/archive

    Results

    • Not fully ready.
    • Information about all Bulgarian schools with their scores during recent years, cleaned and organised into SQL tables
    • Information about all Bulgarian villages, cities, municipalities and districts, cleaned and organised into SQL tables
    • Census information for all Bulgarian villages and cities since the beginning of this century, cleaned and organised into SQL tables
    • Information about religion and ethnicity for all Bulgarian municipalities, cleaned and organised into SQL tables
    • Data successfully loaded into a locally running Ollama with help from Vanna.AI

    TODO

    • It seems that table columns need better descriptions to allow the LLM to do its magic
    • Add comments to column names. This will require a switch from SQLite to ... PostgreSQL, which supports COMMENTs
    • Add more statistical information about municipalities and ....

    Code and data


    Help Create A Chat Control Resistant Turnkey Chatmail/Deltachat Relay Stack - Rootless Podman Compose, OpenSUSE BCI, Hardened, & SELinux by 3nd5h1771fy

    Description

    The Mission: Decentralized & Sovereign Messaging

    FYI: If you have never heard of "Chatmail", you can visit their site here, but simply put it can be thought of as the underlying protocol/platform that decentralized messengers like DeltaChat use for their communications. Do not confuse it with the honeypot-looking, non-open-source, paid-for product with better SEO that directs you to chatmailsecure(dot)com.

    In an era of increasing centralized surveillance by unaccountable bad actors (aka BigTech), "Chat Control," and the erosion of digital privacy, the need for sovereign communication infrastructure is critical. Chatmail is a pioneering initiative that bridges the gap between classic email and modern instant messaging, offering metadata-minimized, end-to-end encrypted (E2EE) communication that is interoperable and open.

    However, unless you are a seasoned sysadmin, the current recommended deployment method of a Chatmail relay is rigid, fragile, difficult to properly secure, and effectively takes over the entire host the "relay" is deployed on.

    Why This Matters

    A simple, host agnostic, reproducible deployment lowers the entry cost for anyone wanting to run a privacy‑preserving, decentralized messaging relay. In an era of perpetually resurrected chat‑control legislation threats, EU digital‑sovereignty drives, and many dangers of using big‑tech messaging platforms (Apple iMessage, WhatsApp, FB Messenger, Instagram, SMS, Google Messages, etc...) for any type of communication, providing an easy‑to‑use alternative empowers:

    • Censorship resistance - No single entity controls the relay; operators can spin up new nodes quickly.
    • Surveillance mitigation - End‑to‑end OpenPGP encryption ensures relay operators never see plaintext.
    • Digital sovereignty - Communities can host their own infrastructure under local jurisdiction, aligning with national data‑policy goals.

    By turning the Chatmail relay into a plug‑and‑play container stack, we enable broader adoption, foster a resilient messaging fabric, and give developers, activists, and hobbyists a concrete tool to defend privacy online.

    Goals

    As I indicated earlier, this project aims to drastically simplify the deployment of Chatmail relay. By converting this architecture into a portable, containerized stack using Podman and OpenSUSE base container images, we can allow anyone to deploy their own censorship-resistant, privacy-preserving communications node in minutes.

    Our goal for Hack Week: package every component into containers built on openSUSE/MicroOS base images, initially orchestrated with a single container-compose.yml (podman-compose compatible). The stack will:

    • Run on any host that supports Podman (including optimizations and enhancements for SELinux‑enabled systems).
    • Allow network decoupling by refactoring configurations to move from file-system constrained Unix sockets to internal TCP networking, allowing containers to achieve stricter isolation.
    • Utilize enhanced security with SELinux: using purpose-built utilities such as udica, we can quickly generate custom SELinux policies for the container stack, ensuring strict confinement superior to standard/typical Docker deployments.
    • Allow the use of bind or remote mounted volumes for shared data (/var/vmail, DKIM keys, TLS certs, etc.).
    • Replace the local DNS server requirement with a remote DNS‑provider API for DKIM/TXT record publishing.

    By delivering a turnkey, host-agnostic, reproducible deployment, we lower the barrier for individuals and small communities to launch their own chatmail relays, fostering a decentralized, censorship‑resistant messaging ecosystem that can serve DeltaChat users and/or future services adopting this protocol.

    Resources


    Enhance git-sha-verify: A tool to checkout validated git hashes by gpathak

    Description

    git-sha-verify is a simple shell utility to verify and check out trusted git commits signed with a GPG key. This tool helps ensure that only authorized or validated commit hashes are checked out from a git repository, supporting better code integrity and security within the workflow.

    Supports:

    • Verifying the authenticity of commits signed with a GPG key
    • Checking out trusted commits

    Ideal for teams and projects where the integrity of git history is crucial.
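
    A minimal Python sketch of the core verify-then-checkout flow, relying on plain git verify-commit (which exits non-zero for unsigned or untrusted commits); the real tool layers its own policy checks on top.

        # Minimal sketch of the verify-then-checkout flow using plain git
        # commands; the real tool adds its own policy checks on top.
        import subprocess
        import sys

        def checkout_if_verified(commit: str) -> None:
            # 'git verify-commit' exits non-zero if the GPG signature is
            # missing or not made by a trusted key.
            verify = subprocess.run(["git", "verify-commit", commit],
                                    capture_output=True, text=True)
            if verify.returncode != 0:
                sys.exit(f"refusing to check out {commit}: {verify.stderr.strip()}")
            subprocess.run(["git", "checkout", commit], check=True)

        if __name__ == "__main__":
            checkout_if_verified(sys.argv[1])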

    Goals

    A minimal Python port of the shell script exists as a pull request.

    The goal of this hackweek is to:

    • DONE: Add more unit tests
      • New and more tests can be added later
    • Partially DONE: Make the python code modular
    • DONE: Add code coverage if possible

    Resources