There are several use cases where it's beneficial to be able to automatically rearrange VM instances in a cloud into a different placement. These usually fall into one of two categories, which could be called defragmentation and rebalancing:
- Defragmentation - condense VMs onto fewer physical VM host machines (conceptually similar to VMware DPM).
Example use cases:
- Reduce power consumption
- Increase CPU / RAM / I/O utilization (e.g. by over-committing and/or memory page sharing)
- Evacuate physical servers
- for repurposing
- for maintenance (BIOS upgrades, re-cabling etc.)
- to protect SLAs, e.g. when SMART monitors indicate potential imminent hardware failure, or when HVAC failures are likely to cause servers to shut down due to over-heating
- Rebalancing - spread the VMs evenly across as many physical VM host machines as possible (conceptually similar to VMware DRS).
Example use cases:
- Optimise workloads for performance, by reducing CPU / I/O hotspots
- Maximise headroom on each physical machine
- Reduce thermal hotspots in order to reduce power consumption
Custom rearrangements may be required according to other IT- or business-driven policies, e.g. only rearrange VM instances relating to a specific workload, in order to increase locality of reference, reduce latency, respect availability zones, or facilitate other out-of-band workflows.
It is clear from the above that VM placement policies are likely to vary greatly across clouds, and sometimes even within a single cloud.
OpenStack Compute (nova) has fairly sophisticated scheduling capabilities which can be configured to implement some of the above policies on an incremental basis, i.e. every time a VM instance is started or migrated, the destination VM host can be chosen according to filters and weighted cost functions. However, this approach is somewhat limited, because the placement policies are implemented cloud-wide, and migrations are only considered one at a time.
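As an illustration of that incremental approach, nova's filter scheduler can already be nudged toward packing or spreading via configuration. A minimal sketch, assuming option names from the Grizzly-era filter scheduler (values are illustrative only; check your release's documentation):

```ini
# nova.conf (scheduler host) -- illustrative values only
scheduler_default_filters = AvailabilityZoneFilter,RamFilter,ComputeFilter
# A positive ram_weight_multiplier spreads VMs across hosts (DRS-like);
# a negative value stacks them onto fewer hosts (DPM-like).
ram_weight_multiplier = -1.0
```

Even so, these knobs apply cloud-wide and influence only the next scheduling decision, which is exactly the limitation described above.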
Since OpenStack is rapidly evolving to the point where a VM's network and storage dependencies can be live migrated along with the workload in a near seamless fashion, it is advantageous to develop mechanisms for implementing finer-grained placement policies, where not only is VM rearrangement performed automatically, but the policies themselves can be varied dynamically over time as workload requirements change.
Developing algorithms to determine optimal placement is distinctly non-trivial. For example, the defragmentation scenario above is a complex variant of the bin packing problem, which is NP-hard. The following constraints add significant complexity to the problem:
- A useful real-world solution should take into account not only the RAM footprint of the VMs, but also CPU, disk, and network.
- It also needs to ensure that SLAs are maintained whilst any rearrangement takes place.
- If the cloud is sufficiently near capacity, it may not be possible to rearrange the VMs from their current placement to a more optimal placement without first shutting down some VMs, which could be prohibited by the SLAs.
- Even if the arrangement is achievable purely via a sequence of live migrations, the algorithm must also be sensitive to the performance impact on running workloads when performing multiple live migrations, since live migrations require intensive bursts of network I/O in order to synchronize the VM's memory contents between the source and target hosts, followed by a momentary freezing of the VM as it flips from the source to the target. This trade-off between optimal resource utilization and service availability means that a sub-optimal final placement may be preferable to an optimal one.
- In the case where the hypervisor is capable of sharing memory pages between VMs, the algorithm should try to place together VMs which are likely to share memory pages (e.g. VMs running the same OS platform, OS version, software libraries, or applications). A research paper published in 2011 demonstrated that VM packing which optimises placement in this fashion can be approximated in polynomial time, achieving a 32% to 50% reduction in servers and a 25% to 57% reduction in memory footprint compared to sharing-oblivious algorithms. (A toy illustration of sharing-aware packing follows this list.)
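To make the sharing-aware idea concrete, here is a toy greedy packer, not the approximation algorithm from the paper: it assumes VMs with the same OS/library "fingerprint" share a fixed fraction of their RAM when co-located. All names and the share ratio are invented for illustration:

```python
def pack(vms, host_ram, share_ratio=0.3):
    """First-fit-decreasing packer. vms is a list of (name, ram_mb, fingerprint)
    tuples; co-located VMs with equal fingerprints are assumed to share
    share_ratio of their RAM via page sharing (an invented approximation)."""
    hosts = []  # each: {'free': remaining RAM, 'fps': fingerprints, 'vms': names}
    for name, ram, fp in sorted(vms, key=lambda v: -v[1]):
        # Prefer hosts already running the same fingerprint.
        for host in sorted(hosts, key=lambda h: fp not in h['fps']):
            effective = ram * (1 - share_ratio) if fp in host['fps'] else ram
            if host['free'] >= effective:
                host['free'] -= effective
                host['fps'].add(fp)
                host['vms'].append(name)
                break
        else:
            hosts.append({'free': host_ram - ram, 'fps': {fp}, 'vms': [name]})
    return hosts
```

Packing same-fingerprint VMs together first is what yields the server and memory savings the paper quantifies.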
As noted by the above paper, this area of computer science is still evolving. There is one constant, however: any rearrangement solution must not only provide a final VM placement optimised according to the chosen constraints, but also a sequence of migrations leading to it from the current placement. There will often be multiple migration sequences reaching the optimised placement from the current one, and their efficiency can vary widely. In other words, there are two questions which need answering:
- Given a starting placement A, which is the best final placement B to head for?
- What's the best way to get from A to B?
The above considerations strongly suggest that the first question is much harder to answer than the second. I propose that, by adopting a divide-and-conquer approach, solving the second may simplify the first. Decoupling the two should also provide a mechanism for comparatively evaluating the effectiveness of potential answers to the first. Another bonus of this decoupling is that it should be possible for the path-finding algorithm to also discover opportunities for parallelizing live migrations when walking the path, so that the target placement B can be reached more quickly (a crude illustration of this batching follows).
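As a sketch of that last point: given a migration path expressed as (vm, source, target) steps, consecutive migrations touching disjoint hosts can be batched for concurrent execution. This toy helper ignores shared network links and capacity dependencies between batches, which a real implementation would have to model:

```python
def parallel_batches(path):
    """Greedily merge consecutive (vm, src, dst) migrations into batches
    whose host sets are disjoint, so each batch can run concurrently."""
    batches = []
    for vm, src, dst in path:
        if batches and not ({src, dst} & batches[-1]['hosts']):
            batches[-1]['moves'].append((vm, src, dst))
            batches[-1]['hosts'] |= {src, dst}
        else:
            batches.append({'moves': [(vm, src, dst)], 'hosts': {src, dst}})
    return [b['moves'] for b in batches]
```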
Therefore, for this Hack Week project, I intend to design and implement an algorithm which reliably calculates a reasonably optimal sequence of VM migrations from a given initial placement to a given final placement. My goals are as follows:
- If there is any migration path, the algorithm must find at least one.
- It should find a path which is reasonably optimal with respect to the migration cost function. In prior work, I already tried Dijkstra's shortest path algorithm and demonstrated that an exhaustive search for the shortest path has intolerably high complexity.
- The migration cost function should be pluggable. Initially it will simply consider the cost as proportional to the RAM footprint of the VM being migrated (see the sketch after this list).
- For now, use a smoke-and-mirrors in-memory model of the cloud's state. Later, the code can be ported to consume the nova API.
- In light of the above, the implementation should be in Python.
- Determination of which states are sane should be pluggable. For example, the algorithm should not hardcode any assumptions about whether physical servers can over-commit CPU or RAM.
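To make these goals concrete, here is a minimal sketch of how the pluggable pieces might fit together. Every name here (Model, ram_cost, is_sane, find_path) is hypothetical rather than the final API, and the breadth-first search is deliberately naive: it records the pluggable per-step cost but does not yet optimise for it, and it is exhaustive, so it serves only as a correctness baseline on toy models.

```python
from collections import deque

class Model:
    """Toy in-memory cloud model: per-host RAM capacity plus a placement
    mapping host -> {vm_name: ram_mb}."""
    def __init__(self, capacity, placement):
        self.capacity = capacity    # e.g. {'host1': 4096, 'host2': 4096}
        self.placement = placement  # e.g. {'host1': {'vm1': 1024}, 'host2': {}}

    def snapshot(self):
        # Hashable representation of the placement, for visited-set checks.
        return tuple(sorted((h, tuple(sorted(vms.items())))
                            for h, vms in self.placement.items()))

def ram_cost(ram_mb):
    """Pluggable cost function: cost proportional to RAM footprint."""
    return ram_mb

def is_sane(model, host):
    """Pluggable sanity check: here, no RAM over-commit on any host."""
    return sum(model.placement[host].values()) <= model.capacity[host]

def find_path(model, target):
    """Breadth-first search for a migration sequence from the model's
    current placement to `target` (a snapshot). Exhaustive, so only
    viable on toy models; the real algorithm needs pruning heuristics."""
    start = model.snapshot()
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == target:
            return path
        placement = {h: dict(vms) for h, vms in state}
        for src, vms in placement.items():
            for vm, ram in vms.items():
                for dst in placement:
                    if dst == src:
                        continue
                    # Tentatively migrate vm from src to dst.
                    trial = {h: dict(v) for h, v in placement.items()}
                    del trial[src][vm]
                    trial[dst][vm] = ram
                    candidate = Model(model.capacity, trial)
                    if not is_sane(candidate, dst):
                        continue
                    snap = candidate.snapshot()
                    if snap not in seen:
                        seen.add(snap)
                        queue.append((snap, path + [(vm, src, dst, ram_cost(ram))]))
    return None  # no migration path exists
```

Because is_sane and ram_cost are standalone functions, swapping in an over-commit-tolerant sanity check or a network-aware cost function leaves the search itself untouched, which is the point of making them pluggable.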
Looking for hackers with the skills:
cloud virtualization orchestration openstack performance python
This project is part of:
Hack Week 10
Activity
Comments
- about 11 years ago by aspiers
I completed this project, and intend to publish the code and also blog about it. I uploaded a YouTube video demoing what the code can do.
Similar Projects
Save pytorch models in OCI registries by jguilhermevanz
Description
A prerequisite for running applications in a cloud environment is the presence of a container registry. Another common scenario is users performing machine learning workloads in such environments. However, these types of workloads require dedicated infrastructure to run properly. We can leverage these two facts to help users save resources by storing their machine learning models in OCI registries, similar to how we handle some WebAssembly modules. This approach will save users the resources typically required for a machine learning model repository for the applications they need to run.
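A rough sketch of what the user-facing workflow might look like, assuming the oras-py client (the registry URL, tag, and file names are placeholders, and the eventual interface may differ):

```python
import torch
import oras.client  # pip install oras

# Serialize the trained model (`model` is your nn.Module).
torch.save(model.state_dict(), "model.pt")

# Push the weights to an OCI registry as an artifact.
client = oras.client.OrasClient()
client.push(files=["model.pt"], target="registry.example.com/models/mymodel:v1")

# Later, pull the artifact back and restore the weights.
client.pull(target="registry.example.com/models/mymodel:v1", outdir=".")
model.load_state_dict(torch.load("model.pt"))
```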
Goals
Allow PyTorch users to save and load machine learning models in OCI registries.
Resources
Mortgage Plan Analyzer by RMestre
Project Description
Many people face challenges when trying to renegotiate their mortgages with different banks. They receive offers from multiple lenders and struggle to compare them effectively. Each proposal may have slightly different terms and data presentation, making it hard to make informed decisions. Additionally, understanding the impact of various taxes and variables can be complex. The Mortgage Plan Analyzer project aims to address these issues.
Project Overview:
The Mortgage Plan Analyzer is a web-based tool built using PHP, Laravel, Livewire, and AdminLTE/bootstrap. It provides a user-friendly platform for individuals to input basic specifications about their mortgage, adjust taxes and variables, and obtain short-term projections for each proposal. Users can also compare multiple mortgage offers side by side, enabling them to make informed decisions about their mortgage renegotiation.
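For reference, the core of such projections is the standard French amortization (fixed-payment) formula; a quick illustration, shown in Python for brevity even though the project itself is PHP (the rate and amounts are made up):

```python
def monthly_payment(principal, annual_rate, years):
    """Fixed monthly payment for a loan (French amortization)."""
    n = years * 12        # number of monthly installments
    i = annual_rate / 12  # monthly interest rate
    if i == 0:
        return principal / n
    return principal * i / (1 - (1 + i) ** -n)

# e.g. a 150,000 loan over 30 years at 4% costs roughly 716/month
print(round(monthly_payment(150000, 0.04, 30), 2))  # 716.12
```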
Why Start This Project:
I found myself in this position, and most tools I found are either for marketing/selling purposes or not flexible enough. As I was starting to get lost in a jungle of spreadsheets, I thought I could create a tool to help me and others who may be experiencing the same struggles, providing clarity and transparency in the decision-making process.
Goals for Hackweek 24
- Improve the general behaviour of the forms:
  - Delay calculations after input change
  - Allow copying from a specific simulation
- Improve the comparison graphs: instead of having 4 graphs, maybe have it all in one
- Support multiple languages
Result of Hackweek 23
- Developed a base site that:
  - Allows adding up to 3 simulations
  - Creates financial plans
  - Shows a simulation comparison graph for the first 4 years
- Created Github project @ https://github.com/rjpmestre/mortgage-plan-analyzer
- Launched a demo instance using Oracle Cloud Free Tier currently @ http://138.3.251.182/
Goal for Hackweek 23
By the end of the week, I'm planning to have a basic web application with the following features:
- User-friendly interface for inputting mortgage specifications like the loan amount, term, fixed and variable rates, and taxes.
- Display of the financial plan
- Ability to adjust taxes and variables to see their impact on the financial plan.
- Aesthetically pleasing output using AdminLTE/bootstrap for a pleasant and intuitive user experience.
Resources
- Banco de Portugal: Main simulator all portuguese banks have to follow ( https://clientebancario.bportugal.pt/credito-habitacao )
- Laravel: A PHP web application framework for building robust and scalable applications. ( https://laravel.com/ )
- Livewire: A Laravel library for building dynamic interfaces without writing JavaScript. ( https://livewire.laravel.com/ )
- AdminLTE: A responsive admin dashboard template for creating a visually appealing interface. ( https://adminlte.io/ )
- GitHub: We will host the project on GitHub for version control and collaboration. ( bet you didn't know this one, https://github.com/ )
- Vercel ( https://vercel.com/ ) or Oracle Cloud ( https://www.oracle.com/cloud/free/ ): Consideration will be given to hosting a running instance of the application using Vercel or Oracle Cloud's free tier, ensuring accessibility and availability.
This will be my first collaborative project on GitHub. I'm pretty confident about setting up the basics of the project and the financial math around it, but I'll probably need some help:
- supporting multiple languages (if I have to pick only one language, I'll probably choose Portuguese, as this will surely fit users' needs and I'm more comfortable with Portuguese financial slang)
- setting up the running instance (as I'm not familiar with either Vercel or Oracle Cloud services)
SUSE KVM Best Practices by roseswe
Description
SUSE Best Practices around KVM, especially for SAP workloads. An early Google Slides presentation has already been assembled from various customer projects and SUSE sources.
Goals
Complete a presentation we can reuse in SUSE Consulting projects.
Resources
KVM (virt-manager) images
SUSE/SAP/KVM Best Practices
- https://documentation.suse.com/en-us/sles/15-SP6/single-html/SLES-virtualization/
- SAP Note 1522993 - "Linux: SAP on SUSE KVM - Kernel-based Virtual Machine" && 2284516 - SAP HANA virtualized on SUSE Linux Enterprise hypervisors https://me.sap.com/notes/2284516
- SUSECon24: [TUTORIAL-1253] Virtualizing SAP workloads with SUSE KVM || https://youtu.be/PTkpRVpX2PM
- SUSE Best Practices for SAP HANA on KVM - https://documentation.suse.com/sbp/sap-15/html/SBP-SLES4SAP-HANAonKVM-SLES15SP4/index.html
Contribute to terraform-provider-libvirt by pinvernizzi
Description
The SUSE Manager (SUMA) teams' main tool for infrastructure automation, Sumaform, largely relies on terraform-provider-libvirt. That provider is also widely used by other teams, both inside and outside SUSE.
It would be good to help the maintainers of this project and give back to the community around it, after all the amazing work that has been already done.
If you're interested in any of infrastructure automation, Terraform, virtualization, tooling development, or Go (...), it is also a good chance to learn a bit about them all by putting your hands on an interesting, complex, real-use-case project.
Goals
- Get more familiar with Terraform provider development and libvirt bindings in Go
- Solve some issues and/or implement some features
- Get in touch with the community around the project
Resources
- CONTRIBUTING readme
- Go libvirt library in use by the project
- Terraform plugin development
- "Good first issue" list
Extending KubeVirtBMC's capability by adding Redfish support by zchang
Description
In Hack Week 23, we delivered a project called KubeBMC (renamed to KubeVirtBMC now), which brings the good old-fashioned IPMI ways to manage virtual machines running on KubeVirt-powered clusters. This opens the possibility of integrating existing bare-metal provisioning solutions like Tinkerbell with virtualized environments. We even received an inquiry about transferring the project to the KubeVirt organization. So, a proposal was filed, which was accepted by the KubeVirt community, and the project was renamed after that. We have many tasks on our to-do list. Some of them are administrative tasks; some are feature-related. One of the most requested features is Redfish support.
Goals
Extend the capability of KubeVirtBMC by adding Redfish support. Currently, the virtbmc component only exposes IPMI endpoints. We need to implement another simulator to expose Redfish endpoints, as we did with the IPMI module. We aim at a basic set of functionalities (a sample Redfish request follows this list):
- Power management
- Boot device selection
- Virtual media mount (this one is not so basic)
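For example, powering on a machine is a standard Redfish ComputerSystem.Reset action; a client would send something like this to the simulator (the host, system ID, and credentials are placeholders):

```python
import requests

requests.post(
    "https://virtbmc.example/redfish/v1/Systems/vm-1/Actions/ComputerSystem.Reset",
    json={"ResetType": "On"},
    auth=("admin", "password"),
    verify=False,  # a simulator will typically use a self-signed certificate
)
```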
Resources
Symbol Relations by hli
Description
There are tools to build function call graphs based on parsing source code, for example, cscope.
This project aims to achieve a similar goal by directly parsing the disassembly (i.e. objdump output) of a compiled binary. The assembly code is what the CPU sees, and is therefore more "direct". This may be useful in certain scenarios, such as gdb/crash debugging.
A detailed description and demos can be found in the README file.
Supports x86 for now (because my customers only use x86 machines), but support for other architectures can be added easily.
Tested with Python 3.6.
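This is not the project's parser, just a minimal illustration of the underlying idea: walking `objdump -d` output and recording caller-to-callee edges from x86 call instructions:

```python
import re
import subprocess
import sys
from collections import defaultdict

FUNC = re.compile(r'^[0-9a-f]+ <(\S+)>:$')            # e.g. "0000000000001129 <main>:"
CALL = re.compile(r'\bcallq?\s+[0-9a-f]+ <([^+>]+)')  # e.g. "callq 1030 <printf@plt>"

def call_graph(binary):
    out = subprocess.check_output(['objdump', '-d', binary],
                                  universal_newlines=True)
    edges, current = defaultdict(set), None
    for line in out.splitlines():
        match = FUNC.match(line)
        if match:
            current = match.group(1)   # entered a new function body
        elif current:
            match = CALL.search(line)
            if match:
                edges[current].add(match.group(1))
    return edges

if __name__ == '__main__':
    for caller, callees in sorted(call_graph(sys.argv[1]).items()):
        print('{} -> {}'.format(caller, ', '.join(sorted(callees))))
```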
Goals
Any comments are welcome.
Resources
https://github.com/lhb-cafe/SymbolRelations
symrellib.py: implements the symbol relation graph and the disassembly parser
symrel_tracer*.py: implements tracing (-t option)
symrel.py: "cli parser"
Ansible for add-on management by lmanfredi
Description
Machines can contain various combinations of add-ons and are often modified over time.
The list of repos can change, so I would like to create an automation able to reset the status to a given state, based on metadata available for these machines.
Goals
Create an Ansible automation able to take care of add-on (repo list) configuration, using metadata as the reference (a sketch of such a task follows).
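A sketch of what one such task might look like, using the community.general.zypper_repository module (the loop variable and metadata source are hypothetical):

```yaml
# Reset each repo to the state recorded in the machine's metadata
# (addon_repos is a hypothetical variable populated from that metadata).
- name: Ensure add-on repositories match metadata
  community.general.zypper_repository:
    name: "{{ item.name }}"
    repo: "{{ item.url }}"
    state: present
    auto_import_keys: true
  loop: "{{ addon_repos }}"
```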
Resources
- Machines
- Repositories
- Developing modules
- Basic VM Guest management
- Module zypper_repository_list
- ansible-collections community.general
Results
Created WIP project Ansible-add-on-openSUSE
Saline (state deployment control and monitoring tool for SUSE Manager/Uyuni) by vizhestkov
Project Description
Saline is an addition to Salt used in SUSE Manager/Uyuni, aimed at providing better control and visibility for state deployment in large-scale environments.
In its current state, the published version can be used only as a Prometheus exporter, and is missing some of the key features implemented in the PoC (not published). It can currently provide metrics related to Salt events and the state apply process on the minions, but no control over this process is implemented yet.
Continue with implementation of the missing features and improve the existing implementation:
- authentication (need to decide how it should or should not be related to Salt auth)
- a web service providing control of state deployment
Goal for this Hackweek
- Implement missing key features
- Implement the tool for state deployment control with a CLI
Resources
https://github.com/openSUSE/saline
Team Hedgehogs' Data Observability Dashboard by gsamardzhiev
Description
This project aims to develop a comprehensive Data Observability Dashboard that provides insights into key aspects of data quality and reliability. The dashboard will track:
- Data Freshness: Monitor when data was last updated and flag potential delays.
- Data Volume: Track table row counts to detect unexpected surges or drops in data.
- Data Distribution: Analyze data for null values, outliers, and anomalies to ensure accuracy.
- Data Schema: Track schema changes over time to prevent breaking changes.
The dashboard will also keep historical records, supporting proactive data management and enhancing data trust across the data function.
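As a sketch of how the freshness and volume checks might be collected (the metadata table and per-table `updated_at` column are hypothetical; Redshift is reachable over the PostgreSQL wire protocol):

```python
import psycopg2  # Redshift speaks the PostgreSQL wire protocol

# Hypothetical metadata table recording one row per check.
CHECK = """
    INSERT INTO observability.table_stats
        (table_name, checked_at, row_count, last_updated)
    SELECT %s, GETDATE(), COUNT(*), MAX(updated_at) FROM {table}
"""

def probe(conn, tables):
    """Record row count and last-update timestamp for each monitored table."""
    with conn.cursor() as cur:
        for table in tables:
            cur.execute(CHECK.format(table=table), (table,))
    conn.commit()
```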
Goals
Although the final goal is to create a Power BI dashboard that we are able to monitor, our goals for this week are to:
1. Create the necessary tables that track the relevant metadata about our current data.
2. Automate the process so it runs in a timely manner.
Resources
AWS Redshift; AWS Glue, Airflow, Python, SQL
Why Hedgehogs?
Because we like them.
ClusterOps - Easily install and manage your personal kubernetes cluster by andreabenini
Description
ClusterOps is a Kubernetes installer and operator designed to streamline the initial configuration and ongoing maintenance of Kubernetes clusters. The focus of this project is primarily on personal or local installations. However, the goal is to expand its use to encompass all installations of Kubernetes for local development purposes.
It simplifies cluster management by automating tasks and providing just one user-friendly YAML-based configuration file, config.yml (a hypothetical sketch follows).
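The configuration might look something like this (all keys below are invented for illustration; consult the project for the real schema):

```yaml
# Hypothetical config.yml sketch -- keys invented for illustration
cluster:
  name: local-dev
  engine: k3s            # whichever engine your package manager provides
plugins:
  dashboard: true        # Kubernetes official dashboard
  kubevirt: true         # KVM+QEMU virtualization on top of Kubernetes
registry:
  local: true            # builtin ContainerHub
```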
Overview
- Simplified Configuration: Define your desired cluster state in a simple YAML file, and ClusterOps will handle the rest.
- Automated Setup: Automates initial cluster configuration, including network settings, storage provisioning, special requirements (for example GPUs) and essential components installation.
- Ongoing Maintenance: Performs routine maintenance tasks such as upgrades, security updates, and resource monitoring.
- Extensibility: Easily extend functionality with custom plugins and configurations.
- Self-Healing: Detects and recovers from common cluster issues, ensuring stability, idempotence and reliability. Same operation can be performed multiple times without changing the result.
- Discreet: It works only on what it knows; if you are manually configuring parts of your Kubernetes cluster and that configuration does not interfere with it, you can happily continue to work on those parts and use this tool only for what is needed.
Features
- Distribution and engine independence. Install your favorite Kubernetes engine with your package manager, execute one script, and you'll have a complete working environment at your disposal.
- Basic config approach. One single config.yml file with configuration requirements (add/remove features): human readable, plain and simple. All fancy configs managed automatically (ingress, balancers, services, proxy, ...).
- Local Builtin ContainerHub. The default installation provides a fully configured ContainerHub available locally along with the Kubernetes installation. This configuration allows the user to build, upload and deploy custom container images as if they were provided from external sources. Internet public sources are still available, but local development can be kept on this localhost server. The builtin ClusterOps operator will be fetched from this ContainerHub registry too.
- Kubernetes official dashboard installed as a plugin, others planned too (k9s for example).
- Kubevirt plugin installed and properly configured. Unleash the power of classic virtualization (KVM+QEMU) on top of Kubernetes and manage your entire system from there; libvirtd and virsh libs are required.
- One operator to rule them all. The installation script configures your machine automatically during installation and adds one kubernetes operator to manage your local cluster. From there the operator takes care of the cluster on your behalf.
- Clean installation and removal. Just test it, when you are done just use the same program to uninstall everything without leaving configs (or pods) behind.
Planned features (Wishlist / TODOs)
- Containerized Data Importer (CDI). Persistent storage management add-on for Kubernetes to provide a declarative way of building and importing Virtual Machine Disks on PVCs for