For events like engineering summits or hackweeks, it would be nice to have a SUSE instance of workadventu.re, with our own maps, wired up to (open)SUSE's Jitsi!

I am looking for folks willing to help in these three teams:

  • Hosting workadventure (setting it up beforehand and seeing how it scales during hackweek)
  • Integrating it with our other tools (Rocket.Chat, Jitsi)
  • Building maps.

What it involves:

  • Contribute to the workadventu.re upstream code (fixing self-hosting issues first, then improving features like "interactions", the management API, and documentation)
  • Integrate with different teams at SUSE to make it official
  • Build maps using Tiled
  • Enjoy the workadventure instance at SUSE!

We are syncing on Rocket.Chat in the channel #workadventure-at-suse. Don't hesitate to join us!

The idea for this project is to prepare for hackweek by working on the hosting and the maps, then see how it scales during hackweek itself. We'll need your help not only to build it but also to TEST and USE it during hackweek. Have fun with us!

Looking for hackers with the skills:

kubernetes nodejs social k3s

This project is part of:

Hack Week 20

Activity

  • almost 4 years ago: jevrard added keyword "k3s" to this project.
  • almost 4 years ago: jevrard added keyword "social" to this project.
  • almost 4 years ago: jevrard added keyword "kubernetes" to this project.
  • almost 4 years ago: jevrard added keyword "nodejs" to this project.
  • almost 4 years ago: lcaparroz liked this project.
  • almost 4 years ago: rsimai liked this project.
  • almost 4 years ago: digitaltomm left this project.
  • almost 4 years ago: dleidi liked this project.
  • almost 4 years ago: AZhou liked this project.
  • almost 4 years ago: kstreitova joined this project.
  • almost 4 years ago: asettle liked this project.
  • almost 4 years ago: dgedon liked this project.
  • almost 4 years ago: pagarcia liked this project.
  • almost 4 years ago: lnussel liked this project.
  • almost 4 years ago: ybonatakis joined this project.
  • almost 4 years ago: ybonatakis liked this project.
  • almost 4 years ago: mlnoga liked this project.
  • almost 4 years ago: kstreitova liked this project.
  • almost 4 years ago: rbueker joined this project.
  • almost 4 years ago: dancermak liked this project.
  • almost 4 years ago: dfaggioli liked this project.
  • almost 4 years ago: fos liked this project.
  • almost 4 years ago: hennevogel liked this project.
  • almost 4 years ago: digitaltomm joined this project.
  • almost 4 years ago: digitaltomm left this project.

    Comments

    • digitaltomm
      almost 4 years ago by digitaltomm | Reply

      I'd like to help integrating the office maps that we created, for example: http://geekos.prv.suse.net/locations/NUE

      • jevrard
        almost 4 years ago by jevrard | Reply

        Awesome @digitaltomm ! We are building a crew that helps on this, feel free to join us in our chats on RC! #workadventure-at-suse

    • mlnoga
      almost 4 years ago by mlnoga | Reply

      @SaraStephens has a somewhat related HackWeek idea of creating a game for SUSECon. Please be introduced.

    • jevrard
      almost 4 years ago by jevrard | Reply

      Hello everyone! This hackweek idea is progressing! If you want to participate, don't hesitate to join our rocket chat channel, or contact us here!

    • dleidi
      almost 4 years ago by dleidi | Reply

      In case you need more inspiration, I am aware of this alternative: gather.town

      • jevrard
        almost 4 years ago by jevrard | Reply

        gather.town is not open source! :(

        • jevrard
          almost 4 years ago by jevrard | Reply

          It's still a good inspiration :) @dleidi do you have something particular in mind?

          • dleidi
            almost 4 years ago by dleidi | Reply

            I like the idea of feeling like you are in the same office even when you are not. These days we are all remote, and some people miss human interaction (not me though, I've always been remote, but I am still aware of this common feeling): those folks are used to standing up, walking to a colleague's desk, and asking questions or cracking jokes. Of course this is not meant to "interrupt others while working", but more about acting the way office workers are used to, like feeling we are back in the university study room. :)

            It also works for huge meetings like workshops or kickoffs: you could have an office with multiple rooms where someone is having conversations/presentations, and you can join just by passing by, more for brainstorming and sharing ideas every now and then, without having to turn the audio on and off manually every time, or re-join Mumble rooms or Teams meetings via links, if you know what I mean. Imagine hackweek (just to name one) where everyone is at home: such a visual room/office would help us feel closer and have fun together instead of alone. Just my 2c.

        • dleidi
          almost 4 years ago by dleidi | Reply

          Yeah, that's a shame, I know :/

    Similar Projects

    ddflare: (Dynamic)DNS management via Cloudflare API in Kubernetes by fgiudici

    Description

    ddflare is a project started a couple of weeks ago to provide DDNS management using the v4 Cloudflare API: Cloudflare offers management via APIs and access tokens, so it is possible to register a domain and implement a DynDNS client without any external service other than their API.

    Since ddflare allows setting any IP for any domain name, one could manage multiple A and ALIAS domain records. Wouldn't it be cool to allow full DNS control from the project and integrate it with your Kubernetes cluster?

    Goals

    Main goals are:

    1. add a containerized image for ddflare
    2. extend ddflare to be able to add and remove DNS records (not just update existing ones)
    3. add documentation, also covering a sample pod deployment for Kubernetes
    4. write a ddflare Kubernetes operator to enable domain management via Kubernetes resources (using kubebuilder)

    Available tasks and improvements are tracked on the ddflare GitHub repository.
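
    The DynDNS flow described above can be sketched roughly as follows. This is not ddflare's actual code; the zone ID, record ID, and token are placeholders, and the payload shape is a minimal A-record update against the Cloudflare v4 API:

```python
import json
import urllib.request

CF_API = "https://api.cloudflare.com/client/v4"

def build_record_payload(name: str, ip: str, ttl: int = 300) -> dict:
    """Build the JSON body for an A-record update (DynDNS-style)."""
    return {"type": "A", "name": name, "content": ip, "ttl": ttl}

def update_record(token: str, zone_id: str, record_id: str,
                  name: str, ip: str) -> dict:
    """PATCH an existing DNS record with the current public IP."""
    req = urllib.request.Request(
        f"{CF_API}/zones/{zone_id}/dns_records/{record_id}",
        data=json.dumps(build_record_payload(name, ip)).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="PATCH",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

    Because the whole client is just an authenticated HTTP call, no agent or external DynDNS service is needed, which is exactly the point made above.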

    Resources

    • https://github.com/fgiudici/ddflare
    • https://developers.cloudflare.com/api/
    • https://book.kubebuilder.io


    Rancher/k8s Trouble-Maker by tonyhansen

    Project Description

    When studying for my RHCSA, I found trouble-maker, a program that breaks a Linux OS and requires you to fix it. I want to create something similar for Rancher/k8s that allows for troubleshooting an unknown environment.

    Goal for this Hackweek

    Create a basic framework for creating Rancher/k8s cluster lab environments as needed for the break/fix exercises. Create at least 5 modules that can be applied to the cluster and require troubleshooting.
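
    One way such a break/fix module could be shaped is sketched below. The module names, the `webapp` deployment, and the hints are invented for illustration; only the kubectl commands themselves are real:

```python
import subprocess

# Hypothetical break/fix "modules": each knows how to break the cluster
# and gives the trainee a symptom to investigate.
MODULES = {
    "scale-to-zero": {
        "break": ["kubectl", "scale", "deployment/webapp", "--replicas=0"],
        "hint": "Users report the web app is down. Why are no pods running?",
    },
    "bad-image": {
        "break": ["kubectl", "set", "image", "deployment/webapp",
                  "webapp=registry.invalid/webapp:none"],
        "hint": "Pods are stuck in ImagePullBackOff. Find the bad image.",
    },
}

def apply_module(name: str, dry_run: bool = True) -> str:
    """Apply a break module; with dry_run, only report the command."""
    cmd = MODULES[name]["break"]
    if not dry_run:
        subprocess.run(cmd, check=True)
    return " ".join(cmd)
```

    A framework like this would make adding a sixth or seventh module a matter of registering one more dict entry.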

    Resources

    • https://github.com/rancher/terraform-provider-rancher2
    • https://github.com/rancher/tf-rancher-up


    Introducing "Bottles": A Proof of Concept for Multi-Version CRD Management in Kubernetes by aruiz

    Description

    As we delve deeper into the complexities of managing multiple CRD versions within a single Kubernetes cluster, I want to introduce "Bottles" - a proof of concept that aims to address these challenges.

    Bottles proposes a novel approach to isolating and deploying different CRD versions in self-contained environments. This would allow for greater flexibility and efficiency in managing diverse workloads.

    Goals

    • Evaluate Feasibility: determine if this approach is technically viable, as well as identifying possible obstacles and limitations.
    • Reuse existing technology: leverage existing products whenever possible, e.g. build on top of Kubewarden as admission controller.
    • Focus on Rancher's use case: the ultimate goal is to be able to use this approach to solve Rancher users' needs.

    Resources

    Core concepts:

    • ConfigMaps: Bottles could be defined and configured using ConfigMaps.
    • Admission Controller: An admission controller will detect "bottled" CRDs being installed and replace the resource name used to store them.
    • Aggregated API Server: By analyzing the author of a request, the aggregated API server will determine the correct bottle and route the request accordingly, making it transparent for the user.
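
    The name-rewriting step the admission controller would perform can be illustrated with a small pure function. The naming scheme here is invented for the sake of the example, not Bottles' actual design:

```python
def bottle_resource_name(crd_name: str, bottle: str, version: str) -> str:
    """
    Rewrite a CRD's storage name so different bottles can hold different
    versions of the same CRD side by side.
    A CRD name has the form <plural>.<group>; we inject a bottle-scoped
    segment between the plural and the group.
    """
    plural, _, group = crd_name.partition(".")
    return f"{plural}.{bottle}-{version}.{group}"
```

    An aggregated API server could then invert this mapping per request, so the renaming stays transparent for the user, as described above.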


    A CLI for Harvester by mohamed.belgaied

    Harvester does not officially come with a CLI tool; the user is supposed to interact with Harvester mostly through the UI. Though it is theoretically possible to use kubectl to interact with Harvester, the manipulation of KubeVirt YAML objects is absolutely not user friendly. Inspired by tools like Multipass from Canonical, which easily and rapidly create one or multiple VMs, I began the development of Harvester CLI. Currently it works, but Harvester CLI needs some love to be up to date with Harvester v1.0.2, and needs some bug fixes and improvements as well.

    Project Description

    Harvester CLI is a command line interface tool written in Go, designed to simplify interfacing with a Harvester cluster as a user. It is especially useful for testing purposes as you can easily and rapidly create VMs in Harvester by providing a simple command such as: harvester vm create my-vm --count 5 to create 5 VMs named my-vm-01 to my-vm-05.
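
    The `--count` expansion described above (`my-vm` becoming `my-vm-01` to `my-vm-05`) boils down to zero-padded sequential naming; the two-digit padding is inferred from the example names, not checked against the CLI source:

```python
def vm_names(base: str, count: int) -> list[str]:
    """Expand a base name into zero-padded sequential VM names."""
    return [f"{base}-{i:02d}" for i in range(1, count + 1)]
```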

    [asciicast demo of Harvester CLI]

    Harvester CLI is functional but needs a number of improvements: bringing functionality up to date with Harvester v1.0.2 (some minor issues right now), modifying the default behaviour to create an openSUSE VM instead of an Ubuntu VM, solving some bugs, etc.

    Github Repo for Harvester CLI: https://github.com/belgaied2/harvester-cli

    Done in previous Hackweeks

    • Create a Github actions pipeline to automatically integrate Harvester CLI to Homebrew repositories: DONE
    • Automatically package Harvester CLI for OpenSUSE / Redhat RPMs or DEBs: DONE

    Goal for this Hackweek

    The goal for this Hackweek is to bring Harvester CLI up-to-speed with latest Harvester versions (v1.3.X and v1.4.X), and improve the code quality as well as implement some simple features and bug fixes.

    Some nice additions might be:

    • Improve handling of namespaced objects
    • Add features, such as network management or load balancer creation?
    • Add more unit tests and, why not, e2e tests
    • Improve CI
    • Improve the overall code quality
    • Test the program and create issues for it

    Issue list is here: https://github.com/belgaied2/harvester-cli/issues

    Resources

    The project is written in Go and uses client-go, the Kubernetes Go client library, to communicate with the Harvester API (which is in fact Kubernetes). Welcome contributions are:

    • Testing it and creating issues
    • Documentation
    • Go code improvement

    What you might learn

    Harvester CLI might be interesting to you if you want to learn more about:

    • GitHub Actions
    • Harvester as a SUSE Product
    • Go programming language
    • Kubernetes API


    Setup Kanidm as OIDC provider on Kubernetes by jkuzilek

    Description

    I am planning to upgrade my homelab Kubernetes cluster to the next level and need an OIDC provider for my services, including K8s itself.

    Goals

    • Successfully configure and deploy Kanidm on homelab cluster
    • Integrate with K8s auth
    • Integrate with other services (Envoy Gateway, Container Registry, future deployment of Forgejo?)

    Resources


    obs-service-vendor_node_modules by cdimonaco

    Description

    When building a JavaScript package for OBS, one option is to use https://github.com/openSUSE/obs-service-node_modules as a source service to make the project's npm dependencies available for package building.

    obs-service-vendor_node_modules aims to be a source service that vendors npm dependencies, installing them with npm install (optionally only production ones) and then creating a tar archive of the installed dependencies.

    The tar archive will be used as a source in the package build definitions.
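
    The vendoring step described above (install, then archive) can be sketched like this. The helper and function names are mine, not the service's API; only `npm install --omit=dev` is a real npm flag for skipping devDependencies:

```python
import subprocess
import tarfile
from pathlib import Path

def npm_install_cmd(production: bool = True) -> list[str]:
    """Build the npm command; --omit=dev skips devDependencies."""
    cmd = ["npm", "install"]
    if production:
        cmd.append("--omit=dev")
    return cmd

def vendor_node_modules(project: str, out: str, production: bool = True) -> str:
    """Install npm dependencies, then pack node_modules into a tarball."""
    subprocess.run(npm_install_cmd(production), cwd=project, check=True)
    with tarfile.open(out, "w:gz") as tar:
        tar.add(Path(project) / "node_modules", arcname="node_modules")
    return out
```

    The resulting tarball is what the spec file would then list as an additional source and unpack at build time.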

    Goals

    • Create an OBS service package that vendors the npm dependencies as a tar archive.
    • Maybe add some macros to unpack the vendor package in the spec files

    Resources


    ClusterOps - Easily install and manage your personal kubernetes cluster by andreabenini

    Description

    ClusterOps is a Kubernetes installer and operator designed to streamline the initial configuration and ongoing maintenance of Kubernetes clusters. The focus of this project is primarily on personal or local installations, but the goal is to expand its use to all Kubernetes installations for local development purposes.
    It simplifies cluster management by automating tasks and providing a single user-friendly YAML-based configuration file, config.yml.
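
    The declarative idea behind such a config.yml can be reduced to a small reconcile step: diff the features the user declares against what is already installed and emit only the missing or surplus actions, which also gives the idempotence mentioned below. The feature names here are placeholders, since ClusterOps' actual config schema is not shown on this page:

```python
def reconcile(desired: set[str], installed: set[str]) -> dict[str, set[str]]:
    """
    Compute the actions needed to move the cluster from its installed
    feature set to the desired one. Running it twice on an already
    reconciled cluster yields empty action sets (idempotence).
    """
    return {"install": desired - installed, "remove": installed - desired}
```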

    Overview

    • Simplified Configuration: Define your desired cluster state in a simple YAML file, and ClusterOps will handle the rest.
    • Automated Setup: Automates initial cluster configuration, including network settings, storage provisioning, special requirements (for example GPUs) and essential components installation.
    • Ongoing Maintenance: Performs routine maintenance tasks such as upgrades, security updates, and resource monitoring.
    • Extensibility: Easily extend functionality with custom plugins and configurations.
    • Self-Healing: Detects and recovers from common cluster issues, ensuring stability, idempotence and reliability: the same operation can be performed multiple times without changing the result.
    • Discreet: It only touches what it knows about. If you are manually configuring parts of your Kubernetes setup and that configuration does not interfere with it, you can happily continue to work on those parts and use this tool only for what is needed.

    Features

    • distribution and engine independence. Install your favorite kubernetes engine with your package manager, execute one script and you'll have a complete working environment at your disposal.
    • Basic config approach. One single config.yml file with configuration requirements (add/remove features): human readable, plain and simple. All fancy configs managed automatically (ingress, balancers, services, proxy, ...).
    • Local Builtin ContainerHub. The default installation provides a fully configured ContainerHub available locally alongside the Kubernetes installation. This allows the user to build, upload and deploy custom container images as if they were provided from external sources. Public internet sources are still available, but local development can be kept on this localhost server. The builtin ClusterOps operator will be fetched from this ContainerHub registry too.
    • Kubernetes official dashboard installed as a plugin, others planned too (k9s for example).
    • Kubevirt plugin installed and properly configured. Unleash the power of classic virtualization (KVM+QEMU) on top of Kubernetes and manage your entire system from there, libvirtd and virsh libs are required.
    • One operator to rule them all. The installation script configures your machine automatically during installation and adds one kubernetes operator to manage your local cluster. From there the operator takes care of the cluster on your behalf.
    • Clean installation and removal. Just test it, when you are done just use the same program to uninstall everything without leaving configs (or pods) behind.

    Planned features (Wishlist / TODOs)

    • Containerized Data Importer (CDI). Persistent storage management add-on for Kubernetes to provide a declarative way of building and importing Virtual Machine Disks on PVCs for