For events like engineering summits or hack weeks, it would be nice to have a SUSE instance of workadventu.re, with our own maps, wired to (open)SUSE's Jitsi!

I am looking for folks willing to help in these three teams:

  • Hosting workadventure (setting it up beforehand and seeing how it scales during Hack Week)
  • Integrating it with our other tools (Rocket.Chat, Jitsi)
  • Building maps.

What does it involve?

  • Contribute to the workadventu.re upstream code (fixing self-hosting issues first, then improving features such as "interactions", the management API, and documentation)
  • Work with different teams at SUSE to make it official
  • Build maps using Tiled
  • Enjoy the workadventure instance at SUSE!

We are syncing on Rocket.Chat in the #workadventure-at-suse channel. Don't hesitate to join us!

The idea for this project is to prepare for Hack Week by working on the hosting and the maps, then see how it scales during Hack Week itself. We'll need your help not only to build it but also to TEST and USE it during Hack Week. Have fun with us!

Looking for hackers with the skills:

kubernetes nodejs social k3s

This project is part of:

Hack Week 20

Activity

  • over 4 years ago: jevrard added keyword "k3s" to this project.
  • almost 5 years ago: jevrard added keyword "social" to this project.
  • almost 5 years ago: jevrard added keyword "kubernetes" to this project.
  • almost 5 years ago: jevrard added keyword "nodejs" to this project.
  • almost 5 years ago: lcaparroz liked this project.
  • almost 5 years ago: rsimai liked this project.
  • almost 5 years ago: digitaltomm left this project.
  • almost 5 years ago: dleidi liked this project.
  • almost 5 years ago: AZhou liked this project.
  • almost 5 years ago: kstreitova joined this project.
  • almost 5 years ago: asettle liked this project.
  • almost 5 years ago: dgedon liked this project.
  • almost 5 years ago: pagarcia liked this project.
  • almost 5 years ago: lnussel liked this project.
  • almost 5 years ago: ybonatakis joined this project.
  • almost 5 years ago: ybonatakis liked this project.
  • almost 5 years ago: mlnoga liked this project.
  • almost 5 years ago: kstreitova liked this project.
  • almost 5 years ago: rbueker joined this project.
  • almost 5 years ago: dancermak liked this project.
  • almost 5 years ago: dfaggioli liked this project.
  • almost 5 years ago: fos liked this project.
  • almost 5 years ago: hennevogel liked this project.
  • almost 5 years ago: digitaltomm joined this project.
  • almost 5 years ago: digitaltomm left this project.

    Comments

    • digitaltomm
      almost 5 years ago by digitaltomm

      I'd like to help integrate the office maps that we created, for example: http://geekos.prv.suse.net/locations/NUE

      • jevrard
        almost 5 years ago by jevrard

        Awesome @digitaltomm! We are building a crew to help with this; feel free to join our chats on RC! #workadventure-at-suse

    • mlnoga
      almost 5 years ago by mlnoga

      @SaraStephens has a somewhat related HackWeek idea of creating a game for SUSECon. Please be introduced.

    • jevrard
      almost 5 years ago by jevrard

      Hello everyone! This hackweek idea is progressing! If you want to participate, don't hesitate to join our rocket chat channel, or contact us here!

    • dleidi
      almost 5 years ago by dleidi

      In case you need more inspiration, I am also aware of this alternative: gather.town

      • jevrard
        almost 5 years ago by jevrard

        gather.town is not open source! :(

        • jevrard
          over 4 years ago by jevrard

          It's still a good inspiration :) @dleidi do you have something particular in mind?

          • dleidi
            over 4 years ago by dleidi

            I like the idea of feeling like you are in the same office even when you are not. These days, when we are all remote, there are people missing human interaction (not me though, I've always been remote, but I am still aware of this common feeling), and those folks are used to standing up, walking over to a colleague's desk, and asking questions or making a few jokes. Of course this is not meant to "interrupt others while working", but more about acting the way office workers are used to. It's like feeling we are back in the university study room :) , or even for big meetings like workshops or kickoffs: you could have an office with multiple rooms where someone is having conversations/presentations, and you can join just by passing by, more for brainstorming and sharing ideas every now and then, without the need to turn the audio on and off manually every time or re-join Mumble rooms or Teams meetings via links, if you know what I mean. Imagine Hack Week (just to name one) with everyone at home: such a visual room/office would help us feel closer and have fun together, instead of alone. Just my 2c

        • dleidi
          over 4 years ago by dleidi

          Yeah, that's a shame, I know :/

    Similar Projects

    Kubernetes-Based ML Lifecycle Automation by lmiranda

    Description

    This project aims to build a complete end-to-end Machine Learning pipeline running entirely on Kubernetes, using Go and containerized ML components.

    The pipeline will automate the lifecycle of a machine learning model, including:

    • Data ingestion/collection
    • Model training as a Kubernetes Job
    • Model artifact storage in an S3-compatible registry (e.g. Minio)
    • A Go-based deployment controller that automatically deploys new model versions to Kubernetes using Rancher (a rough sketch of the idea follows after this description)
    • A lightweight inference service that loads and serves the latest model
    • Monitoring of model performance and service health through Prometheus/Grafana

    The outcome is a working prototype of an MLOps workflow that demonstrates how AI workloads can be trained, versioned, deployed, and monitored using the Kubernetes ecosystem.
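
    To make the deployment controller idea concrete, here is a rough Go sketch. It is only an illustration under assumed names (a bucket called "models", an in-cluster MinIO endpoint, an inference Deployment called "model-server", and a shell-out to kubectl at the end), not the project's actual implementation: it lists the artifacts in the S3-compatible registry, picks the newest one, and rolls the serving Deployment to a matching image tag.

      package main

      import (
          "context"
          "log"
          "os/exec"
          "strings"

          "github.com/minio/minio-go/v7"
          "github.com/minio/minio-go/v7/pkg/credentials"
      )

      func main() {
          // Assumed in-cluster MinIO service; one object per trained model
          // artifact, e.g. "model-2024-06-01.tar.gz".
          mc, err := minio.New("minio.mlops.svc.cluster.local:9000", &minio.Options{
              Creds:  credentials.NewStaticV4("ACCESS_KEY", "SECRET_KEY", ""),
              Secure: false,
          })
          if err != nil {
              log.Fatal(err)
          }

          // Find the most recently uploaded model artifact.
          var latest minio.ObjectInfo
          for obj := range mc.ListObjects(context.Background(), "models",
              minio.ListObjectsOptions{Recursive: true}) {
              if obj.Err != nil {
                  log.Fatal(obj.Err)
              }
              if obj.LastModified.After(latest.LastModified) {
                  latest = obj
              }
          }
          if latest.Key == "" {
              log.Fatal("no model artifacts found in the registry")
          }

          // Roll the (hypothetical) inference Deployment to an image tagged
          // after the artifact. A real controller would render and apply full
          // manifests via client-go or Rancher instead of shelling out.
          image := "registry.example.com/model-server:" + strings.TrimSuffix(latest.Key, ".tar.gz")
          if out, err := exec.Command("kubectl", "set", "image",
              "deployment/model-server", "model-server="+image).CombinedOutput(); err != nil {
              log.Fatalf("deploy failed: %v\n%s", err, out)
          }
          log.Printf("deployed %s", image)
      }

    In the actual pipeline something like this would presumably run as a small controller loop or CronJob that watches the registry for new versions.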

    Goals

    By the end of Hack Week, the project should:

    1. Produce a fully functional ML pipeline running on Kubernetes with:

      • Data collection job
      • Training job container
      • Storage and versioning of trained models
      • Automated deployment of new model versions
      • Model inference API service
      • Basic monitoring dashboards
    2. Showcase a Go-based deployment automation component, which scans the model registry and automatically generates & applies Kubernetes manifests for new model versions.

    3. Enable continuous improvement by making the system modular and extensible (e.g., additional models, metrics, autoscaling, or drift detection can be added later).

    4. Prepare a short demo explaining the end-to-end process and how new models flow through the system.

    Resources

    Project Repository

    Updates

    1. Training pipeline and datasets
    2. Inference Service py


    Technical talks at universities by agamez

    Description

    This project aims to empower the next generation of tech professionals by offering hands-on workshops on containerization and Kubernetes, with a strong focus on open-source technologies. By providing practical experience with these cutting-edge tools and fostering a deep understanding of open-source principles, we aim to bridge the gap between academia and industry.

    For now, the scope is limited to Spanish universities, since we already have the contacts and have started some conversations.

    Goals

    • Technical Skill Development: equip students with the fundamental knowledge and skills to build, deploy, and manage containerized applications using open-source tools like Kubernetes.
    • Open-Source Mindset: foster a passion for open-source software, encouraging students to contribute to open-source projects and collaborate with the global developer community.
    • Career Readiness: prepare students for industry-relevant roles by exposing them to real-world use cases, best practices, and open-source in companies.

    Resources

    • Instructors: experienced open-source professionals with deep knowledge of containerization and Kubernetes.
    • SUSE Expertise: leverage SUSE's expertise in open-source technologies to provide insights into industry trends and best practices.


    Cluster API Provider for Harvester by rcase

    Project Description

    The Cluster API "infrastructure provider" for Harvester, also named CAPHV, makes it possible to use Harvester with Cluster API. This enables people and organisations to create Kubernetes clusters running on VMs created by Harvester using a declarative spec.

    The project has been bootstrapped in HackWeek 23, and its code is available here.
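
    For readers new to Cluster API, an infrastructure provider mostly contributes CRDs describing the infrastructure side of a cluster and its machines, which CAPI reconciles against the declarative spec the user writes. The Go types below are a purely hypothetical illustration of that shape; all field names are invented for this sketch and are not CAPHV's actual API.

      // Illustrative only - see the CAPHV repository for the real types.
      package v1alpha1

      import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

      // HarvesterMachineSpec would describe one Harvester VM backing a CAPI Machine.
      // A cluster-scoped sibling type would carry the Harvester endpoint and network.
      type HarvesterMachineSpec struct {
          CPU      int    `json:"cpu"`      // vCPUs for the node VM
          MemoryGi int    `json:"memoryGi"` // memory in GiB
          Image    string `json:"image"`    // VM image the node boots from
      }

      type HarvesterMachine struct {
          metav1.TypeMeta   `json:",inline"`
          metav1.ObjectMeta `json:"metadata,omitempty"`
          Spec              HarvesterMachineSpec `json:"spec,omitempty"`
      }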

    Work done in HackWeek 2023

    • Have an early working version of the provider available on Rancher Sandbox: DONE
    • Demonstrated the created cluster can be imported using Rancher Turtles: DONE
    • Stretch goal - demonstrate using the new provider with CAPRKE2: DONE, and the templates are available in the repo

    DONE in HackWeek 24:

    DONE in 2025 (out of Hackweek)

    • Support of ClusterClass
    • Added to the clusterctl community providers; you can now add it directly with clusterctl
    • Testing on newer versions of Harvester v1.4.X and v1.5.X
    • Support for clusterctl generate cluster ...
    • Improve Status Conditions to reflect current state of Infrastructure
    • Improve CI (some bugs for release creation)

    Goals for HackWeek 2025

    • FIRST and FOREMOST, any topic that is important to you
    • Add e2e testing
    • Certify the provider for Rancher Turtles
    • Add Machine pool labeling
    • Add PCI-e passthrough capabilities.
    • Other improvement suggestions are welcome!

    Thanks to @isim and Dominic Giebert for their contributions!

    Resources

    Looking for help from anyone interested in Cluster API (CAPI) or who wants to learn more about Harvester.

    This will be an infrastructure provider for Cluster API. Some background reading for the CAPI aspect:


    Preparing KubeVirtBMC for project transfer to the KubeVirt organization by zchang

    Description

    KubeVirtBMC is being prepared for transfer to the KubeVirt organization. One requirement is to improve the security of the API design. The current v1alpha1 API (the VirtualMachineBMC CRD) was designed during the proof-of-concept stage; it is immature and inherently insecure because its cross-namespace object references raise concerns from an RBAC perspective.

    The other long-awaited feature is the ability to mount virtual media so that virtual machines can boot from remote ISO images.
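
    One way to address the RBAC concern is to keep every reference namespace-local, so the BMC object lives in the same namespace as the VirtualMachine it fronts and standard namespaced RBAC is sufficient. The Go sketch below is only an illustration of that idea; the field names are invented here and are not the actual v1beta1 design.

      // Hypothetical shape for a namespace-local API, for illustration only.
      package v1beta1

      import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

      type VirtualMachineBMCSpec struct {
          // Name of the KubeVirt VirtualMachine in the *same* namespace.
          VirtualMachineName string `json:"virtualMachineName"`
          // Optional Secret (same namespace) holding the Redfish credentials.
          CredentialsSecretName string `json:"credentialsSecretName,omitempty"`
      }

      type VirtualMachineBMC struct {
          metav1.TypeMeta   `json:",inline"`
          metav1.ObjectMeta `json:"metadata,omitempty"`
          Spec              VirtualMachineBMCSpec `json:"spec,omitempty"`
      }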

    Goals

    1. Deliver the v1beta1 API and its corresponding controller implementation
    2. Enable the Redfish virtual media mount function for KubeVirt virtual machines

    Resources


    Self-Scaling LLM Infrastructure Powered by Rancher by ademicev0



    Description

    The Problem

    Running LLMs can get expensive and complex pretty quickly.

    Today there are typically two choices:

    1. Use cloud APIs like OpenAI or Anthropic. Easy to start with, but costs add up at scale.
    2. Self-host everything - set up Kubernetes, figure out GPU scheduling, handle scaling, manage model serving... it's a lot of work.

    What if there was a middle ground?

    What if infrastructure scaled itself instead of making you scale it?

    Can we use existing Rancher capabilities like CAPI, autoscaling, and GitOps to make this simpler instead of building everything from scratch?

    Project Repository: github.com/alexander-demicev/llmserverless


    What This Project Does

    A key feature is hybrid deployment: requests can be routed based on complexity or privacy needs. Simple or low-sensitivity queries can use public APIs (like OpenAI), while complex or private requests are handled in-house on local infrastructure. This flexibility allows balancing cost, privacy, and performance - using cloud for routine tasks and on-premises resources for sensitive or demanding workloads.

    A complete, self-scaling LLM infrastructure that:

    • Scales to zero when idle (no idle costs)
    • Scales up automatically when requests come in
    • Adds more nodes when needed, removes them when demand drops
    • Runs on any infrastructure - laptop, bare metal, or cloud

    Think of it as "serverless for LLMs" - focus on building; the infrastructure handles itself.

    How It Works

    A combination of open source tools working together:

    Flow:

    • Users interact with OpenWebUI (chat interface)
    • Requests go to LiteLLM Gateway
    • LiteLLM routes requests to:
      • Ollama (Knative) for local model inference (auto-scales pods)
      • Or cloud APIs for fallback
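
    As a rough illustration of the hybrid routing idea (not how LiteLLM actually configures it; the endpoint URLs, the length threshold, and the X-Sensitive header are all invented for this sketch), a tiny Go proxy could pick a backend per request:

      package main

      import (
          "bytes"
          "io"
          "log"
          "net/http"
      )

      // chooseBackend is a toy heuristic: short, non-sensitive prompts go to a
      // cloud API, everything else stays on the local Ollama endpoint.
      // Both URLs are assumptions for this sketch.
      func chooseBackend(prompt string, sensitive bool) string {
          if !sensitive && len(prompt) < 200 {
              return "https://cloud-llm.example.com/v1/chat/completions"
          }
          return "http://ollama.llm.svc.cluster.local:11434/api/generate"
      }

      func route(w http.ResponseWriter, r *http.Request) {
          body, err := io.ReadAll(r.Body)
          if err != nil {
              http.Error(w, err.Error(), http.StatusBadRequest)
              return
          }
          backend := chooseBackend(string(body), r.Header.Get("X-Sensitive") == "true")
          resp, err := http.Post(backend, "application/json", bytes.NewReader(body))
          if err != nil {
              http.Error(w, err.Error(), http.StatusBadGateway)
              return
          }
          defer resp.Body.Close()
          w.WriteHeader(resp.StatusCode)
          io.Copy(w, resp.Body) // stream the model's answer back to the caller
      }

      func main() {
          http.HandleFunc("/route", route)
          log.Fatal(http.ListenAndServe(":8080", nil))
      }

    In the project itself, this decision would presumably live in the LiteLLM gateway's routing rules rather than in custom code.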


    Kudos aka openSUSE Recognition Platform by lkocman

    Description

    Relevant blog post at news-o-o

    I started the Kudos application shortly after Leap 16.0 to create a simple, friendly way to recognize people for their work and contributions to openSUSE. There's so much more to our community than just submitting requests in OBS or Gitea: we have translations (not only in Weblate), wiki edits, forum and social media moderation, infrastructure maintenance, booth participation, talks, manual testing, openQA test suites, and more!

    Goals

    • Kudos hosted under github.com/openSUSE/kudos, with build previews on Netlify

    • Have a kudos.opensuse.org instance running in production

    • Build an easy-to-contribute recognition platform for the openSUSE community: a place where everyone can send and receive appreciation for their work, across all areas of contribution.

    • In the future, we could even explore reward options such as vouchers for t-shirts or other community swag, small tokens of appreciation to make recognition more tangible.

    Resources

    (Do not create new badge requests during hackweek, unless you'll make the badge during hackweek)