When we started brainstorming a project for Hack Week, one of the floated ideas was to remake the 1983 film WarGames; for lack of available space, a local lot with storage units was proposed. Over the years of planning that followed, we realized this whole idea would not be the most feasible, but it still felt like we were onto something.
Eventually, we settled on keeping the name and changing the scope.
Instead, we propose to create a virtual environment in which war games will be played, with a focus on storage -- in particular, on Ceph and its ecosystem.
What are war games?
Wikipedia has a few entries on the concept of war games, pertaining to military exercises and simulations. We are not going to go into much detail about the whole concept, for two reasons: 1) getting definitions 100% right is a deep rabbit hole that would consume our whole week, and 2) "war game" was chosen mostly because it's a cool name.
In essence, though, the concept relies on simulating adverse conditions to develop, trial, and refine possible solutions, without actually exposing the participants to real-life scenarios where failure would (potentially) be catastrophic.
In the context of software-defined storage, and Ceph in particular, we are looking to leverage this concept to let participants develop their capabilities in recovering from cluster failures, as well as their understanding of how failures are caused and how to prevent them.
General Overview
The ten-thousand-foot view involves two teams: Red and Blue.
The Red team takes the adversarial position, meant to cause as much trouble for the Blue team as possible while observing whatever constraints are established for the duration of the exercise. The Blue team's objective is to keep a healthy, functioning cluster, thwarting Red's attempts at mayhem.
Each exercise will be bound by a set of constraints, defined before the exercise begins and observed by both teams. For instance, if a constraint is "no data shall be deleted", then the Red team shall not delete the data on the disks. Remember, this is meant for people to learn, be it by causing the problems or by fixing them.
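One way to make such constraints machine-checkable is to encode each exercise's rules in a small data structure that both teams (and the referee tooling) can consult. This is a hypothetical sketch — the schema, class name, and action names are all ours, not part of any existing tool:

```python
from dataclasses import dataclass, field

@dataclass
class ExerciseConstraints:
    """Rules both teams must observe for one exercise (hypothetical schema)."""
    name: str
    forbidden_actions: set[str] = field(default_factory=set)

    def allows(self, action: str) -> bool:
        # An action is permitted unless it is explicitly forbidden.
        return action not in self.forbidden_actions

# Example: the "no data shall be deleted" constraint from above.
rules = ExerciseConstraints(
    name="no-data-loss",
    forbidden_actions={"delete-data", "wipe-disk"},
)
print(rules.allows("stop-osd"))   # stopping a daemon is fair game
print(rules.allows("wipe-disk"))  # destroying data is not
```

A referee script could evaluate each logged command against these rules after the fact, rather than trying to block actions in real time.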
The exercises will take place on virtual machines: a healthy cluster will be set up, with all the services that are meant to be running for a given exercise, and there will be a login node accessible to both teams. Teams will log into this node and issue their actions against the cluster from it.
Each team will have a predefined, non-overlapping time window in which to perform its actions on the cluster. Once the window closes, the team will no longer be able to log in, and open connections will be booted off the node.
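The window check itself is simple to sketch. The windows below are made-up values for illustration; in practice something on the login node (a PAM hook or a cron job, say) would consult this kind of check to deny logins and kill sessions outside a team's slot:

```python
from datetime import datetime, time

# Hypothetical, non-overlapping action windows for each team.
WINDOWS = {
    "red":  (time(9, 0), time(12, 0)),
    "blue": (time(13, 0), time(16, 0)),
}

def team_allowed(team: str, now: datetime) -> bool:
    """Return True if `team` may be logged into the node at `now`."""
    start, end = WINDOWS[team]
    return start <= now.time() < end

# At 10:30, only Red may act; a Blue session would be booted off the node.
now = datetime(2018, 7, 9, 10, 30)
print(team_allowed("red", now))   # True
print(team_allowed("blue", now))  # False
```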
All actions and commands shall be logged to a remote node, along with cluster health and other relevant information, for further analysis, postmortems, etc.
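Each audit record only needs a timestamp, the team, the command, and a snapshot of cluster health. A minimal sketch of such a record, assuming JSON lines as the on-disk format (the function and field names are ours; shipping to the remote node, e.g. via syslog, is left out):

```python
import json
from datetime import datetime, timezone

def log_action(team: str, command: str, health: str) -> str:
    """Serialize one audit record as a JSON line. In practice this
    would be forwarded to the remote log node, not just returned."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "team": team,
        "command": command,
        "cluster_health": health,  # e.g. the output of `ceph health`
    }
    return json.dumps(record)

entry = log_action("red", "ceph osd out 3", "HEALTH_WARN")
print(entry)
```

Keeping records structured like this makes the postmortem step a matter of filtering and sorting JSON, rather than grepping shell history.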
Hack Week's Objective
Getting this working. Some of it? All of it? Finding out how much we will be diverging from the initial objective by week's end. :)
This project is part of:
Hack Week 17
Similar Projects
Contribute to terraform-provider-libvirt by pinvernizzi
Description
The SUSE Manager (SUMA) teams' main tool for infrastructure automation, Sumaform, largely relies on terraform-provider-libvirt. That provider is also widely used by other teams, both inside and outside SUSE.
It would be good to help the maintainers of this project and give back to the community around it, after all the amazing work that has already been done.
If you're interested in any of infrastructure automation, Terraform, virtualization, tooling development, or Go (...), this is also a good chance to learn a bit about them all by putting your hands on an interesting, complex, real-use-case project.
Goals
- Get more familiar with Terraform provider development and libvirt bindings in Go
- Solve some issues and/or implement some features
- Get in touch with the community around the project
Resources
- CONTRIBUTING readme
- Go libvirt library in use by the project
- Terraform plugin development
- "Good first issue" list
SUSE KVM Best Practices by roseswe
Description
SUSE best practices around KVM, especially for SAP workloads. An early Google presentation has already been put together from various customer projects and SUSE sources.
Goals
A complete presentation we can reuse in SUSE Consulting projects.
Resources
KVM (virt-manager) images
SUSE/SAP/KVM Best Practices
- https://documentation.suse.com/en-us/sles/15-SP6/single-html/SLES-virtualization/
- SAP Note 1522993 - "Linux: SAP on SUSE KVM - Kernel-based Virtual Machine" and SAP Note 2284516 - "SAP HANA virtualized on SUSE Linux Enterprise hypervisors" - https://me.sap.com/notes/2284516
- SUSECon24: [TUTORIAL-1253] Virtualizing SAP workloads with SUSE KVM - https://youtu.be/PTkpRVpX2PM
- SUSE Best Practices for SAP HANA on KVM - https://documentation.suse.com/sbp/sap-15/html/SBP-SLES4SAP-HANAonKVM-SLES15SP4/index.html
Harvester Packer Plugin by mrohrich
Description
Hashicorp Packer is an automation tool that allows automated, customized VM image builds - assuming the user has a virtualization tool at their disposal. To make use of Harvester as such a virtualization tool, a plugin for Packer needs to be written. With this plugin, users could make use of their Harvester cluster to build customized VM images, something they likely want to do if they have a Harvester cluster.
Goals
Write a Packer plugin bridging the gap between Harvester and Packer. Users should be able to create customized VM images using Packer and Harvester with no need to utilize another virtualization platform.
Resources
- Hashicorp documentation for building custom plugins for Packer: https://developer.hashicorp.com/packer/docs/plugins/creation/custom-builders
- Source repository of the Harvester Packer plugin: https://github.com/m-ildefons/harvester-packer-plugin
Extending KubeVirtBMC's capability by adding Redfish support by zchang
Description
In Hack Week 23, we delivered a project called KubeBMC (renamed to KubeVirtBMC now), which brings the good old-fashioned IPMI ways to manage virtual machines running on KubeVirt-powered clusters. This opens the possibility of integrating existing bare-metal provisioning solutions like Tinkerbell with virtualized environments. We even received an inquiry about transferring the project to the KubeVirt organization. So, a proposal was filed, which was accepted by the KubeVirt community, and the project was renamed after that. We have many tasks on our to-do list. Some of them are administrative tasks; some are feature-related. One of the most requested features is Redfish support.
Goals
Extend the capability of KubeVirtBMC by adding Redfish support. Currently, the virtbmc component only exposes IPMI endpoints. We need to implement another simulator to expose Redfish endpoints, as we did with the IPMI module. We aim for a basic set of functionalities:
- Power management
- Boot device selection
- Virtual media mount (this one is not so basic)
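For the first two items, the state a simulator has to track is small. The sketch below is hypothetical and not from the KubeVirtBMC codebase; it only borrows the resource path and `ResetType` values from the Redfish specification, with the HTTP layer and the actual VM power calls left out:

```python
# Minimal in-memory model of one Redfish ComputerSystem resource.
system = {
    "@odata.id": "/redfish/v1/Systems/1",
    "PowerState": "Off",
    "Boot": {"BootSourceOverrideTarget": "None"},
}

def computer_system_reset(reset_type: str) -> None:
    """Handle a POST to .../Actions/ComputerSystem.Reset.

    ResetType values ("On", "ForceOff", ...) come from the Redfish spec;
    a real simulator would translate these into KubeVirt API calls."""
    if reset_type in ("On", "ForceOn"):
        system["PowerState"] = "On"
    elif reset_type in ("ForceOff", "GracefulShutdown"):
        system["PowerState"] = "Off"

def set_boot_target(target: str) -> None:
    """Boot device selection, e.g. "Pxe", "Hdd", or "Cd"."""
    system["Boot"]["BootSourceOverrideTarget"] = target

computer_system_reset("On")
set_boot_target("Pxe")
print(system["PowerState"])  # On
```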
Resources