For events like engineering summits or hack weeks, it would be nice to have a SUSE instance of workadventu.re, with our own maps, wired up to (open)SUSE's Jitsi!
I am looking for folks willing to help in these three teams:
- Hosting WorkAdventure (before Hack Week, and seeing how it scales during it)
- Integrating with our other tools (Rocket.Chat, Jitsi)
- Building maps.
What does it involve?
- Contributing to the workadventu.re upstream code (fixing self-hosting issues first, then improving features like "interactions", the management API, and documentation)
- Working with the different teams at SUSE to make it official
- Building maps using Tiled
- Enjoying the WorkAdventure instance at SUSE!
We are syncing on Rocket.Chat in the channel #workadventure-at-suse. Don't hesitate to join us!
The idea for this project is to prepare for Hack Week by working on the hosting and the maps, then see how it scales during Hack Week itself. We'll need your help not only to build it, but also to TEST and USE it during Hack Week. Have fun with us!
This project is part of Hack Week 20.
Comments
about 4 years ago by digitaltomm
I'd like to help integrate the office maps that we created, for example: http://geekos.prv.suse.net/locations/NUE
about 4 years ago by jevrard
Awesome, @digitaltomm! We are building a crew to help with this; feel free to join us in our chats on RC! #workadventure-at-suse
almost 4 years ago by mlnoga
@SaraStephens has a somewhat related HackWeek idea of creating a game for SUSECon. Please be introduced.
almost 4 years ago by dleidi
In case you need more inspiration, I am also aware of gather.town.
almost 4 years ago by jevrard
gather.town is not open source! :(
almost 4 years ago by jevrard
It's still a good inspiration :) @dleidi do you have something particular in mind?
almost 4 years ago by dleidi
I like the idea of feeling like we are in the same office even when we are not. These days, when we are all remote, there are people missing human interaction (not me though, I've always been remote, but I am still aware of this common feeling), and those folks are used to standing up, going to a colleague's desk, and asking questions or cracking jokes. Of course this is not meant to "interrupt others while working", but more in the mood of acting the way office workers are used to. It could feel like being back in the university study room, or work for big meetings like workshops or kickoffs: you could have an office with multiple rooms where someone is having conversations/presentations, and you can join just by passing by, more for brainstorming and sharing ideas every now and then, without the need to turn the audio on and off manually every time or re-join Mumble rooms or Teams meetings via links, if you know what I mean. Imagine Hack Week (just to name one) where everyone is at home: such a visual room/office would help us feel closer and have fun together instead of alone. Just my 2c.
Similar Projects
Metrics Server viewer for Kubernetes by bkampen
This project is finished; please visit the GitHub repo below for the tool.
Description
Build a CLI tool which can visualize Kubernetes metrics from the metrics-server, so you're able to watch these without installing Prometheus and Grafana on a cluster.
Goals
- Learn more about metrics-server
- Learn more about the inner workings of Kubernetes.
- Learn more about Go
Resources
https://github.com/bvankampen/metrics-viewer
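As a rough illustration of the core of such a tool (not bkampen's actual code), here is a minimal Go sketch that reads node metrics from metrics-server using the official k8s.io/metrics client; the kubeconfig handling and output format are assumptions.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	metricsclient "k8s.io/metrics/pkg/client/clientset/versioned"
)

func main() {
	// Load the local kubeconfig the same way kubectl does.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	mc, err := metricsclient.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Query the metrics.k8s.io API served by metrics-server.
	nodes, err := mc.MetricsV1beta1().NodeMetricses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s\tcpu=%s\tmemory=%s\n",
			n.Name, n.Usage.Cpu().String(), n.Usage.Memory().String())
	}
}
```

A real viewer would refresh this list periodically and render it in a terminal UI instead of printing once.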
A CLI for Harvester by mohamed.belgaied
Harvester does not officially come with a CLI tool; the user is supposed to interact with Harvester mostly through the UI. Though it is theoretically possible to use kubectl to interact with Harvester, the manipulation of KubeVirt YAML objects is absolutely not user friendly. Inspired by tools like Multipass from Canonical that easily and rapidly create one or multiple VMs, I began the development of Harvester CLI. Currently it works, but Harvester CLI needs some love to be up to date with Harvester v1.0.2, and it needs some bug fixes and improvements as well.
Project Description
Harvester CLI is a command line interface tool written in Go, designed to simplify interfacing with a Harvester cluster as a user. It is especially useful for testing purposes as you can easily and rapidly create VMs in Harvester by providing a simple command such as:
harvester vm create my-vm --count 5
to create 5 VMs named my-vm-01 to my-vm-05.
Harvester CLI is functional but needs a number of improvements: bringing functionality up to date with Harvester v1.0.2 (some minor issues right now), modifying the default behaviour to create an openSUSE VM instead of an Ubuntu VM, solving some bugs, etc.
Github Repo for Harvester CLI: https://github.com/belgaied2/harvester-cli
Done in previous Hackweeks
- Create a GitHub Actions pipeline to automatically integrate Harvester CLI into Homebrew repositories: DONE
- Automatically package Harvester CLI as openSUSE/Red Hat RPMs or DEBs: DONE
Goal for this Hackweek
The goal for this Hackweek is to bring Harvester CLI up-to-speed with latest Harvester versions (v1.3.X and v1.4.X), and improve the code quality as well as implement some simple features and bug fixes.
Some nice additions might be:
- Improve handling of namespaced objects
- Add features, such as network management or Load Balancer creation?
- Add more unit tests and, why not, e2e tests
- Improve CI
- Improve the overall code quality
- Test the program and create issues for it
Issue list is here: https://github.com/belgaied2/harvester-cli/issues
Resources
The project is written in Go, using client-go, the Kubernetes Go client library, to communicate with the Harvester API (which is Kubernetes in fact).
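As a hedged sketch of the kind of call such a CLI makes under the hood (illustrative only, not the actual harvester-cli code), here is how client-go's dynamic client can list the KubeVirt VirtualMachine objects that back Harvester VMs; the namespace is an assumption.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Harvester VMs are KubeVirt VirtualMachine custom resources.
	gvr := schema.GroupVersionResource{Group: "kubevirt.io", Version: "v1", Resource: "virtualmachines"}
	vms, err := dyn.Resource(gvr).Namespace("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, vm := range vms.Items {
		fmt.Println(vm.GetName())
	}
}
```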
Welcome contributions include:
- Testing it and creating issues
- Documentation
- Go code improvement
What you might learn
Harvester CLI might be interesting to you if you want to learn more about:
- GitHub Actions
- Harvester as a SUSE Product
- Go programming language
- Kubernetes API
Introducing "Bottles": A Proof of Concept for Multi-Version CRD Management in Kubernetes by aruiz
Description
As we delve deeper into the complexities of managing multiple CRD versions within a single Kubernetes cluster, I want to introduce "Bottles" - a proof of concept that aims to address these challenges.
Bottles proposes a novel approach to isolating and deploying different CRD versions in a self-contained environment. This would allow for greater flexibility and efficiency in managing diverse workloads.
Goals
- Evaluate Feasibility: determine if this approach is technically viable, as well as identifying possible obstacles and limitations.
- Reuse existing technology: leverage existing products whenever possible, e.g. build on top of Kubewarden as admission controller.
- Focus on Rancher's use case: the ultimate goal is to be able to use this approach to solve Rancher users' needs.
Resources
Core concepts:
- ConfigMaps: Bottles could be defined and configured using ConfigMaps.
- Admission Controller: An admission controller will detect "bottled" CRDs being installed and replace the resource name used to store them.
- Aggregated API Server: By analyzing the author of a request, the aggregated API server will determine the correct bottle and route the request accordingly, making it transparent for the user.
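Kubewarden policies ship as WebAssembly modules, but to make the admission-controller idea concrete, here is a hedged Go sketch of the core mutation as a plain mutating webhook. The bottle label, the renaming scheme, and all other names are invented for illustration and are not taken from the actual PoC.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func serveMutate(w http.ResponseWriter, r *http.Request) {
	var review admissionv1.AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "bad admission review", http.StatusBadRequest)
		return
	}
	var crd apiextv1.CustomResourceDefinition
	if err := json.Unmarshal(review.Request.Object.Raw, &crd); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	resp := &admissionv1.AdmissionResponse{UID: review.Request.UID, Allowed: true}

	// A CRD counts as "bottled" when it carries the (hypothetical) bottle label.
	if bottle, ok := crd.Labels["bottles.example.com/name"]; ok {
		// Prefix the stored group so e.g. widgets.acme.com from bottle "v2" is
		// persisted as widgets.v2.acme.com and cannot clash with other bottles;
		// the aggregated API server later routes requests back transparently.
		patch := []map[string]any{{
			"op": "replace", "path": "/spec/group",
			"value": fmt.Sprintf("%s.%s", bottle, crd.Spec.Group),
		}, {
			"op": "replace", "path": "/metadata/name",
			"value": fmt.Sprintf("%s.%s.%s", crd.Spec.Names.Plural, bottle, crd.Spec.Group),
		}}
		resp.Patch, _ = json.Marshal(patch)
		pt := admissionv1.PatchTypeJSONPatch
		resp.PatchType = &pt
	}

	review.Response = resp
	json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/mutate", serveMutate)
	// TLS is mandatory for admission webhooks; certificate paths are placeholders.
	http.ListenAndServeTLS(":8443", "/certs/tls.crt", "/certs/tls.key", nil)
}
```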
Learn enough Golang and hack on CoreDNS by jkuzilek
Description
I'm implementing a split-horizon DNS for my home Kubernetes cluster to be able to access my internal (and external) services over the local network through public domains. I managed to make a PoC with the k8s_gateway plugin for CoreDNS. However, I soon found out it responds with IPs for all Gateways assigned to HTTPRoutes, publishing public IPs as well as the internal LoadBalancer ones.
To remedy this issue, a simple filtering mechanism has to be implemented (a sketch of the idea is shown at the end of this project).
Goals
- Learn an acceptable amount of Golang
- Implement GatewayClass (and IngressClass) filtering for k8s_gateway
- Deploy on homelab cluster
- Profit?
Resources
- https://github.com/ori-edge/k8s_gateway/issues/36
- https://github.com/coredns/coredns/issues/2465#issuecomment-593910983
EDIT: Feature mostly complete. An unfinished PR lies here. Successfully tested working on homelab cluster.
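The filtering mechanism mentioned above could boil down to something like this Go sketch: keep only Gateways whose GatewayClass is on a configured allow-list, so only internal addresses get published. This is illustrative only, not the actual k8s_gateway patch; the allow-list plumbing is assumed.

```go
package main

import (
	"fmt"

	gatewayv1 "sigs.k8s.io/gateway-api/apis/v1"
)

// filterGateways keeps only the Gateways whose class the plugin is
// (hypothetically) configured to expose, e.g. "internal".
func filterGateways(gws []gatewayv1.Gateway, allowed map[string]bool) []gatewayv1.Gateway {
	var out []gatewayv1.Gateway
	for _, gw := range gws {
		if allowed[string(gw.Spec.GatewayClassName)] {
			out = append(out, gw)
		}
	}
	return out
}

func main() {
	gws := []gatewayv1.Gateway{
		{Spec: gatewayv1.GatewaySpec{GatewayClassName: "internal"}},
		{Spec: gatewayv1.GatewaySpec{GatewayClassName: "public"}},
	}
	// Only the internal Gateway's addresses would be answered by the plugin.
	for _, gw := range filterGateways(gws, map[string]bool{"internal": true}) {
		fmt.Println(gw.Spec.GatewayClassName)
	}
}
```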
Extending KubeVirtBMC's capability by adding Redfish support by zchang
Description
In Hack Week 23, we delivered a project called KubeBMC (renamed to KubeVirtBMC now), which brings the good old-fashioned IPMI ways to manage virtual machines running on KubeVirt-powered clusters. This opens the possibility of integrating existing bare-metal provisioning solutions like Tinkerbell with virtualized environments. We even received an inquiry about transferring the project to the KubeVirt organization. So, a proposal was filed, which was accepted by the KubeVirt community, and the project was renamed after that. We have many tasks on our to-do list. Some of them are administrative tasks; some are feature-related. One of the most requested features is Redfish support.
Goals
Extend the capability of KubeVirtBMC by adding Redfish support. Currently, the virtbmc component only exposes IPMI endpoints. We need to implement another simulator to expose Redfish endpoints, as we did with the IPMI module. We aim at a basic set of functionalities:
- Power management
- Boot device selection
- Virtual media mount (this one is not so basic)
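To give a feel for the shape of such a simulator, here is a hedged Go sketch of a minimal Redfish-style power-management endpoint. The URL paths follow the Redfish specification, but the handler bodies and the wiring to KubeVirt are invented placeholders, not KubeVirtBMC code.

```go
package main

import (
	"encoding/json"
	"net/http"
)

var powerState = "Off" // in virtbmc this would reflect the real KubeVirt VM state

func main() {
	// GET /redfish/v1/Systems/1 returns the system resource, incl. PowerState.
	http.HandleFunc("/redfish/v1/Systems/1", func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(map[string]any{
			"@odata.id":  "/redfish/v1/Systems/1",
			"Id":         "1",
			"PowerState": powerState,
		})
	})
	// POST ComputerSystem.Reset with a body like {"ResetType": "On"}.
	http.HandleFunc("/redfish/v1/Systems/1/Actions/ComputerSystem.Reset",
		func(w http.ResponseWriter, r *http.Request) {
			var body struct{ ResetType string }
			if err := json.NewDecoder(r.Body).Decode(&body); err != nil {
				http.Error(w, err.Error(), http.StatusBadRequest)
				return
			}
			switch body.ResetType {
			case "On":
				powerState = "On" // placeholder: start the KubeVirt VM here
			case "ForceOff", "GracefulShutdown":
				powerState = "Off" // placeholder: stop the KubeVirt VM here
			default:
				http.Error(w, "unsupported ResetType", http.StatusBadRequest)
				return
			}
			w.WriteHeader(http.StatusNoContent)
		})
	http.ListenAndServe(":8000", nil)
}
```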
Resources
obs-service-vendor_node_modules by cdimonaco
Description
When building a JavaScript package for OBS, one option is to use https://github.com/openSUSE/obs-service-node_modules as a source service to make the project's npm dependencies available for package building.
obs-service-vendor_node_modules aims to be a source service that vendors npm dependencies, installing them with npm install (optionally only production ones) and then creating a tar archive of the installed dependencies.
The tar will be used as source in the package building definitions.
Goals
- Create an OBS service package that vendors the npm dependencies as a tar archive.
- Maybe add some macros to unpack the vendor package in the specfiles
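The core of the service is essentially two steps. Real OBS source services are usually small scripts plus a .service definition, so treat this Go sketch purely as an illustration of the intended behaviour; the tarball name is an invented placeholder.

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a command, streaming its output, and aborts on failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s failed: %v", name, err)
	}
}

func main() {
	// 1. Install dependencies; --omit=dev (npm 8+) keeps production deps only.
	run("npm", "install", "--omit=dev")
	// 2. Pack the installed node_modules as the vendor tarball that the
	//    package build definition will then list as a source.
	run("tar", "czf", "vendor-node-modules.tar.gz", "node_modules")
}
```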
Resources
ClusterOps - Easily install and manage your personal kubernetes cluster by andreabenini
Description
ClusterOps is a Kubernetes installer and operator designed to streamline the initial configuration
and ongoing maintenance of kubernetes clusters. The focus of this project is primarily on personal
or local installations. However, the goal is to expand its use to encompass all installations of
Kubernetes for local development purposes.
It simplifies cluster management by automating tasks and providing just one user-friendly YAML-based configuration file, config.yml.
Overview
- Simplified Configuration: Define your desired cluster state in a simple YAML file, and ClusterOps will handle the rest.
- Automated Setup: Automates initial cluster configuration, including network settings, storage provisioning, special requirements (for example GPUs) and essential components installation.
- Ongoing Maintenance: Performs routine maintenance tasks such as upgrades, security updates, and resource monitoring.
- Extensibility: Easily extend functionality with custom plugins and configurations.
- Self-Healing: Detects and recovers from common cluster issues, ensuring stability, idempotence and reliability. The same operation can be performed multiple times without changing the result.
- Discreet: It works only on what it knows; if you are manually configuring parts of your Kubernetes cluster and that configuration does not interfere with it, you can happily continue to work on those parts and use this tool only for what is needed.
Features
- Distribution and engine independence. Install your favorite Kubernetes engine with your package manager, execute one script, and you'll have a complete working environment at your disposal.
- Basic config approach. One single config.yml file with configuration requirements (add/remove features): human readable, plain and simple. All fancy configs are managed automatically (ingress, balancers, services, proxy, ...). A hypothetical sketch of such a file's shape follows after this list.
- Local builtin ContainerHub. The default installation provides a fully configured ContainerHub available locally along with the Kubernetes installation. This configuration allows the user to build, upload and deploy custom container images as if they were provided from external sources. Public internet sources are still available, but local development can be kept on this localhost server. The builtin ClusterOps operator will be fetched from this ContainerHub registry too.
- The official Kubernetes dashboard installed as a plugin; others are planned too (k9s, for example).
- Kubevirt plugin installed and properly configured. Unleash the power of classic virtualization (KVM+QEMU) on top of Kubernetes and manage your entire system from there; libvirtd and virsh libs are required.
- One operator to rule them all. The installation script configures your machine automatically during installation and adds one kubernetes operator to manage your local cluster. From there the operator takes care of the cluster on your behalf.
- Clean installation and removal. Just test it, when you are done just use the same program to uninstall everything without leaving configs (or pods) behind.
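As promised after the config bullet above, here is a purely hypothetical Go sketch of how such a config.yml could map onto types inside the operator; every field name is invented for illustration and is not taken from ClusterOps.

```go
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// Config mirrors an imagined config.yml: which engine to install and
// which optional features to enable.
type Config struct {
	Engine   string   `yaml:"engine"`   // e.g. k3s, minikube, kubeadm
	Features []string `yaml:"features"` // e.g. dashboard, kubevirt, registry
	GPU      bool     `yaml:"gpu"`      // special hardware requirements
}

func main() {
	data, err := os.ReadFile("config.yml")
	if err != nil {
		panic(err)
	}
	var cfg Config
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("installing %s with features %v\n", cfg.Engine, cfg.Features)
}
```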
Planned features (Wishlist / TODOs)
- Containerized Data Importer (CDI). Persistent storage management add-on for Kubernetes to provide a declarative way of building and importing Virtual Machine Disks on PVCs for