State: VM snapshotting and resume are working, but everything is still in a very hacky state.
Project Description
The build script (which is used in OBS, osc and pbuild) already supports the use of preinstall images. These are basically tarballs which get extracted instead of preinstalling and installing packages. This speeds up the build, because no database operations and no scripts need to run.
The goal of the project is to go one step further and convert these into entire VM snapshots. As a consequence, all variable data, like the sources and additional packages of the build, needs to be delivered to the VM via a hotplug device.
This has a number of positive consequences:
- Much of the I/O spent on preparing the VM can be saved
- The kernel is already booted, which reduces the startup time further
- The isolation of variable data solves the problem of reusing the VM with changed sources for local builds. => No more errors when doing builds as a user without root permissions
Goal for this Hackweek
Current state can be found here: https://github.com/adrianschroeter/obs-build/tree/vm
Open topics
- cleanup code to make it mergeable
- add support in obs-worker
- SOLVED (create some tooling to convert tarballs to an ext filesystem without requiring root permissions and without losing inode information like ownership, attributes and so on; see the sketch below)
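For the solved item, one way to do such a conversion without root is to let libguestfs populate the filesystem inside its appliance. The following is only a minimal sketch of that idea, assuming the virt-make-fs tool from guestfs-tools is installed; it is not necessarily the tooling used in the branch above.

```python
#!/usr/bin/env python3
"""Sketch: turn a preinstall tarball into an ext4 image without root.

Assumes virt-make-fs (libguestfs / guestfs-tools) is installed. Ownership,
permissions and other inode information stored in the tarball are kept,
because the filesystem is populated inside the libguestfs appliance
instead of being extracted on the host.
"""
import subprocess
import sys


def tarball_to_ext4(tarball: str, image: str, size: str = "4G") -> None:
    # virt-make-fs accepts a tarball directly as input and writes a
    # filesystem image of the requested type, format and size.
    subprocess.run(
        ["virt-make-fs", "--format=raw", "--type=ext4", f"--size={size}",
         tarball, image],
        check=True,
    )


if __name__ == "__main__":
    # e.g. ./tar2ext4.py preinstall.tar preinstall.img
    tarball_to_ext4(sys.argv[1], sys.argv[2])
```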
This project is part of:
Hack Week 20
Similar Projects
Testing and adding GNU/Linux distributions on Uyuni by juliogonzalezgil
Join the Gitter channel! https://gitter.im/uyuni-project/hackweek
Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines. It also manages configuration, can run audits, build container images, monitor and much more!
Currently there are a few distributions that are completely untested on Uyuni or SUSE Manager (AFAIK), or that have not been tested for a long time. It would be interesting to know how hard it would be to work with them and, if possible, to fix whatever is broken.
For newcomers, the easiest distributions are those based on DEB or RPM packages. Distributions with other package formats are doable, but will require adapting the Python and Java code to be able to sync and analyze such packages (and if Salt does not support those packages, it will need changes as well). So if you want a distribution with another package format, make sure you are comfortable handling such changes.
No developer experience? No worries! We had non-developer contributors in the past, and we are ready to help as long as you are willing to learn. If you don't want to code at all, you can also help us prepare the documentation after someone else has the initial code ready, or you could also help with testing :-)
The idea is testing Salt and Salt-ssh clients, but NOT traditional clients, which are deprecated.
To consider that a distribution has basic support, we should cover at least (points 3-6 are to be tested for both salt minions and salt ssh minions):
- Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
- Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
- Package management (install, remove, update...)
- Patching
- Applying any basic salt state (including a formula)
- Salt remote commands (see the sketch after this list)
- Bonus point: Java part for product identification, and monitoring enablement
- Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
- Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
- Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)
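To illustrate the last two checklist points, here is a minimal sketch of how a basic state apply and a remote command could be exercised from the Salt master using Salt's Python client API; the '*' target and the motd state name are only placeholders.

```python
#!/usr/bin/env python3
"""Sketch: exercise a state apply and a remote command from the Salt
master via Salt's LocalClient. The '*' target and the 'motd' state are
placeholders; salt-ssh minions would be driven through salt-ssh instead."""
import salt.client

local = salt.client.LocalClient()

# Salt remote command: run a shell command on all matching minions
print(local.cmd("*", "cmd.run", ["uptime"]))

# Basic salt state: apply /srv/salt/motd.sls (must exist on the master)
print(local.cmd("*", "state.apply", ["motd"]))
```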
If something is breaking: we can try to fix it, but the main idea is to research how well supported it is right now. Beyond that it's up to each project member how much to hack :-)
- If you don't have knowledge about some of the steps: ask the team
- If you still don't know what to do: switch to another distribution and keep testing.
This card is for EVERYONE, not just developers. Seriously! We had people from other teams helping who were not developers, and they added support for Debian and for new SUSE Linux Enterprise and openSUSE Leap versions :-)
Pending
FUSS
FUSS is a complete GNU/Linux solution (server, client and desktop/standalone) based on Debian for managing an educational network.
https://fuss.bz.it/
Seems to be a Debian 12 derivative, so adding it could be quite easy.
- Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file) [W]
- Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding OS to the bootstrap repository creator) --> Working for all 3 options (salt minion UI, salt minion bootstrap script and salt-ssh minion from the UI). [W]
- Package management (install, remove, update...) --> Installing a new package works, needs to test the rest. [I]
- Patching (if patch information is available, could require writing some code to parse it, but IIRC we have support for Ubuntu already). No patches detected. Do we support patches for Debian at all? [W]
- Applying any basic salt state (including a formula) [W]
- Salt remote commands [ ]
- Bonus point: Java part for product identification, and monitoring enablement
Bootstrap openSUSE on LoongArch by glaubitz
Description
LoongArch is a new architecture from China which has its roots in the MIPS architecture. It has been created by Loongson and is already supported by Debian Ports, Gentoo and Loongnix.
Upstream support for LoongArch is already quite complete and includes LLVM, Rust, Golang, GRUB, QEMU, LibreOffice and many more. In Debian Ports, where the port is called "loong64", more than 95% of the whole Debian archive has been successfully built for LoongArch.
QEMU support is rather complete and stable, such that packages can be built in emulated environments. Hardware can also be requested from Loongson for free, and access to real hardware is also provided through the GCC Compile Farm.
Goals
The initial goal should be to add LoongArch to OBS and build a minimal set of packages.
Resources
- Introduction to LoongArch: https://docs.kernel.org/arch/loongarch/introduction.html
- LoongArch community on Github: https://github.com/loongarchlinux
- Debian Ports repository for loong64: http://ftp.ports.debian.org/debian-ports/pool-loong64/main/
- Gentoo stage3 for loong: https://www.gentoo.org/downloads/#loong
Results
- An initial set of packages for openSUSE loongarch64 has been successfully bootstrapped
- An OBS project has been set up to build packages for openSUSE loongarch64 with more than 3000 packages being built already
- A work-in-progress guide on how to bootstrap a new openSUSE port from Debian has been created
- A work-in-progress guide on how to add a new target to the openSUSE toolchain has been created
Acknowledgements
- Thanks to Adrian Schröter and Rüdiger Oertl for the help with setting up the FTP space and OBS project
- Thanks to Dirk Müller for the input on how to get started with a new port
- Thanks to Richard Biener for quickly accepting my submit requests to add loongarch64 support to the toolchain
Research openqa-trigger-from-obs and openqa-trigger-from-ibs-plugin by qwang
Description
The openqa-trigger-from-obs project is a framework that OSD uses to automatically sync the defined images and repositories from OBS/IBS to its assets for testing. This framework will very likely be used for synchronizing to each location's openQA instance, including openqa.qa2.suse.asia. The Beijing local proxy SCC, scc-proxy.suse.asia (although it's not a MUST for our testing), is now rewriting requests to openqa.qa2.suse.asia instead of openqa.suse.de, so the assets/repos should be consistent in format. The Beijing local openQA maintains its own script, but that still needs many manual activities when a new build comes and is not consistent with OSD, which will require many test code changes due to the CC network change.
Goals
Research this framework in case it is re-used for the Beijing local openQA, where it would need to be set up and maintained by ourselves.
Resources
- https://github.com/os-autoinst/openqa-trigger-from-obs/tree/master
- https://gitlab.suse.de/openqa/openqa-trigger-from-ibs-plugin
- Beijing :rainbow machine
Explore the integration between OBS and GitHub by pdostal
Project Description
The goals:
1) When a GitHub pull request is created or modified, the OBS project will be forked and the build results reported back to GitHub.
2) When a new version of the GitHub project is published, OBS will re-download the sources and rebuild the project.
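As a rough illustration of the first goal, here is a minimal sketch of reporting a build result back to GitHub as a commit status via the GitHub REST API; the repository, commit SHA, token handling and the mapping from OBS build results are placeholders, not the actual integration.

```python
#!/usr/bin/env python3
"""Sketch: report an OBS build result to GitHub as a commit status.
Repository, commit SHA, token and the result mapping are placeholders."""
import os
import requests


def report_status(repo: str, sha: str, obs_result: str, target_url: str) -> None:
    # Map an OBS build result to one of GitHub's commit status states
    state = {"succeeded": "success", "failed": "failure"}.get(obs_result, "pending")
    resp = requests.post(
        f"https://api.github.com/repos/{repo}/statuses/{sha}",
        headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}"},
        json={
            "state": state,
            "context": "obs/build",
            "description": f"OBS build {obs_result}",
            "target_url": target_url,  # e.g. link to the package on build.opensuse.org
        },
        timeout=30,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    report_status("owner/repo", "0123abcd" * 5, "succeeded",
                  "https://build.opensuse.org/package/show/home:example/pkg")
```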
Goal for this Hackweek
Do as much as possible, blog about it and maybe use it in another existing project.
Resources
- The Blog post
- Issue: poo#123858 - build.opensuse.org: /usr/lib/obs/service//go_modules.service No such file or directory
Implement a full OBS api client in Rust by nbelouin
Description
I recently started to work on tooling for OBS using Rust. To do so, I started a Rust crate to interact with the OBS API, but I only implemented the few routes/resources I needed. What about making it a full-fledged OBS client library?
Goals
- Implement more routes/resources
- Implement a test suite against the actual OBS implementation
- Bonus: Create an osc like cli in Rust using the library
Resources
- https://github.com/suse-edge/obs-tools/tree/main/obs-client
- https://api.opensuse.org/apidocs/
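To give an idea of the API surface such a library has to cover, here is a minimal sketch of calling two common OBS API routes; Python and requests are used purely for illustration (the project itself is a Rust crate), and the credentials, project and package names are placeholders.

```python
#!/usr/bin/env python3
"""Sketch: two OBS API routes a full client library would need to wrap.
Python/requests is used only for illustration; credentials, project and
package names are placeholders."""
import requests
from requests.auth import HTTPBasicAuth

API = "https://api.opensuse.org"
auth = HTTPBasicAuth("myuser", "mypassword")  # placeholder credentials

# List the packages of a project (XML directory listing)
pkgs = requests.get(f"{API}/source/openSUSE:Factory", auth=auth, timeout=30)
pkgs.raise_for_status()
print(pkgs.text[:200])

# Fetch the build results of a single package in that project
res = requests.get(
    f"{API}/build/openSUSE:Factory/_result",
    params={"package": "osc"},
    auth=auth,
    timeout=30,
)
res.raise_for_status()
print(res.text[:200])
```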
A CLI for Harvester by mohamed.belgaied
Harvester does not officially come with a CLI tool; the user is supposed to interact with Harvester mostly through the UI. Though it is theoretically possible to use kubectl to interact with Harvester, the manipulation of Kubevirt YAML objects is absolutely not user friendly. Inspired by tools like multipass from Canonical to easily and rapidly create one or multiple VMs, I began the development of Harvester CLI. Currently it works, but Harvester CLI needs some love to be up to date with Harvester v1.0.2 and needs some bug fixes and improvements as well.
Project Description
Harvester CLI is a command line interface tool written in Go, designed to simplify interfacing with a Harvester cluster as a user. It is especially useful for testing purposes as you can easily and rapidly create VMs in Harvester by providing a simple command such as:
harvester vm create my-vm --count 5
to create 5 VMs named my-vm-01 to my-vm-05.
Harvester CLI is functional but needs a number of improvements: bringing functionality up to date with Harvester v1.0.2 (some minor issues right now), modifying the default behaviour to create an openSUSE VM instead of an Ubuntu VM, solving some bugs, etc.
Github Repo for Harvester CLI: https://github.com/belgaied2/harvester-cli
Done in previous Hackweeks
- Create a Github actions pipeline to automatically integrate Harvester CLI to Homebrew repositories: DONE
- Automatically package Harvester CLI for OpenSUSE / Redhat RPMs or DEBs: DONE
Goal for this Hackweek
The goal for this Hackweek is to bring Harvester CLI up-to-speed with latest Harvester versions (v1.3.X and v1.4.X), and improve the code quality as well as implement some simple features and bug fixes.
Some nice additions might be:
- Improve handling of namespaced objects
- Add features, such as network management or Load Balancer creation?
- Add more unit tests and, why not, e2e tests
- Improve CI
- Improve the overall code quality
- Test the program and create issues for it
Issue list is here: https://github.com/belgaied2/harvester-cli/issues
Resources
The project is written in Go, and uses client-go, the Kubernetes Go client library, to communicate with the Harvester API (which is in fact Kubernetes).
Welcome contributions include:
- Testing it and creating issues
- Documentation
- Go code improvement
What you might learn
Harvester CLI might be interesting to you if you want to learn more about:
- GitHub Actions
- Harvester as a SUSE Product
- Go programming language
- Kubernetes API
SUSE KVM Best Practices by roseswe
Description
SUSE Best Practices around KVM, especially for SAP workloads. An early Google presentation has already been put together from various customer projects and SUSE sources.
Goals
Complete the presentation so that we can reuse it in SUSE Consulting projects.
Resources
KVM (virt-manager) images
SUSE/SAP/KVM Best Practices
- https://documentation.suse.com/en-us/sles/15-SP6/single-html/SLES-virtualization/
- SAP Note 1522993 - "Linux: SAP on SUSE KVM - Kernel-based Virtual Machine" && 2284516 - SAP HANA virtualized on SUSE Linux Enterprise hypervisors https://me.sap.com/notes/2284516
- SUSECon24: [TUTORIAL-1253] Virtualizing SAP workloads with SUSE KVM || https://youtu.be/PTkpRVpX2PM
- SUSE Best Practices for SAP HANA on KVM - https://documentation.suse.com/sbp/sap-15/html/SBP-SLES4SAP-HANAonKVM-SLES15SP4/index.html
ClusterOps - Easily install and manage your personal kubernetes cluster by andreabenini
Description
ClusterOps is a Kubernetes installer and operator designed to streamline the initial configuration and ongoing maintenance of Kubernetes clusters. The focus of this project is primarily on personal or local installations. However, the goal is to expand its use to encompass all installations of Kubernetes for local development purposes.
It simplifies cluster management by automating tasks and providing just one user-friendly YAML-based configuration file, config.yml.
Overview
- Simplified Configuration: Define your desired cluster state in a simple YAML file, and ClusterOps will handle the rest.
- Automated Setup: Automates initial cluster configuration, including network settings, storage provisioning, special requirements (for example GPUs) and essential components installation.
- Ongoing Maintenance: Performs routine maintenance tasks such as upgrades, security updates, and resource monitoring.
- Extensibility: Easily extend functionality with custom plugins and configurations.
- Self-Healing: Detects and recovers from common cluster issues, ensuring stability, idempotence and reliability. The same operation can be performed multiple times without changing the result.
- Discreet: It works only on what it knows; if you are manually configuring parts of your Kubernetes setup and that configuration does not interfere with it, you can happily continue to work on those parts and use this tool only for what is needed.
Features
- Distribution and engine independence. Install your favorite Kubernetes engine with your package manager, execute one script and you'll have a complete working environment at your disposal.
- Basic config approach. One single config.yml file with configuration requirements (add/remove features): human readable, plain and simple. All fancy configs are managed automatically (ingress, balancers, services, proxy, ...).
- Local builtin ContainerHub. The default installation provides a fully configured ContainerHub available locally along with the Kubernetes installation. This configuration allows the user to build, upload and deploy custom container images as if they were provided from external sources. Internet public sources are still available, but local development can be kept on this localhost server. The builtin ClusterOps operator will be fetched from this ContainerHub registry too.
- Kubernetes official dashboard installed as a plugin, others planned too (k9s for example).
- Kubevirt plugin installed and properly configured. Unleash the power of classic virtualization (KVM+QEMU) on top of Kubernetes and manage your entire system from there, libvirtd and virsh libs are required.
- One operator to rule them all. The installation script configures your machine automatically during installation and adds one kubernetes operator to manage your local cluster. From there the operator takes care of the cluster on your behalf.
- Clean installation and removal. Just test it, when you are done just use the same program to uninstall everything without leaving configs (or pods) behind.
Planned features (Wishlist / TODOs)
- Containerized Data Importer (CDI). Persistent storage management add-on for Kubernetes to provide a declarative way of building and importing Virtual Machine Disks on PVCs for