The goal of this project is to get an overview of the state-of-the-art technology for training and deploying machine learning projects with Kubernetes and to apply that to a SUSE CaaSP cluster.
With that in mind, we will train and deploy a model for summarizing GitHub issues:
https://github.com/kubeflow/examples/tree/master/github_issue_summarization
This example will make use of the following technology:
- kubeflow: Machine Learning Toolkit for Kubernetes
- Keras: The Python Deep Learning library
- Seldon Core: Machine Learning Deployment for Kubernetes
- TensorFlow: An open source machine learning framework for everyone
- cri-o: Lightweight Container Runtime for Kubernetes
- Kubernetes
- SUSE CaaSP: SUSE Container as a Service Platform
- NVIDIA container engine
- CUDA
For this project, I will use a workstation with an NVIDIA GeForce GTX 1060, which is supported by CUDA, and I will install SUSE CaaSP on it.
Looking for hackers with the skills:
machinelearning kubeflow keras seldoncore tensorflow cri-o kubernetes caasp nvidia cuda gpu containers
This project is part of:
Hack Week 17 Hack Week 18
Activity
Comments
-
over 7 years ago by jordimassaguerpla
I think I was a bit too ambitious when I wrote this description :) ... but it has been fun anyway.
This is what I accomplished
Setting up a SUSE CaaSP cluster where the admin and the master nodes were running on top of KVM and the worker was a workstation with an NVIDIA GPU. The first trick was to set up the virtual machines to use the ethernet network interface from the host (macvtap). For whatever reason I could not set this up with virt-manager run as a "normal user", but I could when I started virt-manager from YaST (with root permissions... may that be the reason?). The second trick was to restrict the master to 2GB of RAM and the admin to 4GB, so I could run this on my laptop (thanks @ereslibre!). Finally, the third trick was to add "hostname=UNIQUE_HOSTNAME" as a linuxrc parameter when installing each machine (otherwise they would all be named linux.lan :) ).
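As a rough illustration of that VM setup (my own sketch, not commands from the project; the ISO path, interface name and names are placeholders, and virt-install option spellings vary between versions):

virt-install --name caasp-admin --memory 4096 --vcpus 2 \
  --disk size=40 --cdrom /isos/SUSE-CaaSP-DVD-x86_64.iso \
  --network type=direct,source=eth0,source_mode=bridge   # macvtap on the host NIC

virt-install --name caasp-master --memory 2048 --vcpus 2 \
  --disk size=40 --cdrom /isos/SUSE-CaaSP-DVD-x86_64.iso \
  --network type=direct,source=eth0,source_mode=bridge

At the installer boot prompt of each machine, append a unique hostname, e.g. hostname=caasp-admin.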
Building NVIDIA packages for CaaSP. The NVIDIA packages built for SLE12SP3 and provided by NVIDIA at http://download.nvidia.com/suse/sles12sp3 had been built for an older kernel than the one released in CaaSP. Thus, when installing those packages, the nvidia kernel modules could not be loaded. For this reason, I built them for the latest kernel in openSUSE Leap 42.3 and installed them at the same time I was upgrading the kernel to the one in openSUSE Leap 42.3 (see [0] for why openSUSE Leap 42.3). You can download them from this project.
Installing and fixing nvidia-container-runtime-hook and libnvidia-container: There is no package for SUSE, so I took the ones from CentOS 7; the trick was to run a CentOS 7 container, follow the instructions from https://nvidia.github.io/libnvidia-container/, and add the "--download-only" option to yum. Luckily, the packages installed without any error... but they were not really working! Using "strace nvidia-container-cli info" I realized the problem was in the permissions of the /dev/nvidia* files. Thus, running "chmod 0666 /dev/nvidia*" fixed the installation... but you have to do this on every reboot (actually, every time the nvidia module is loaded). The trick was to use "transactional-update shell" to do all these changes :). Note I am not installing nvidia-container-runtime, but only the hook. That is because we will use cri-o and not docker. For cri-o we don't need to install nvidia-container-runtime.
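Condensed, the diagnosis and workaround described above look roughly like this (my own summary of the steps, not a verified recipe):

# see where nvidia-container-cli trips up (here it pointed at /dev/nvidia* permissions)
strace nvidia-container-cli info
# workaround: open up the device nodes; this has to be repeated after every
# reboot (or whenever the nvidia module is reloaded)
chmod 0666 /dev/nvidia*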
See as a "proof":
> nvidia-container-cli info
NVRM version: 390.67
CUDA version: 9.1
Device Index: 0
Device Minor: 0
Model: GeForce GTX 1060 3GB
GPU UUID: GPU-f96a76d4-7ba9-07cc-2774-bb7a55ef3e68
Bus Location: 00000000:02:00.0
Architecture: 6.1

Setting up the cri-o hook to use libnvidia-container: I just had to follow the instructions here: https://github.com/kubernetes-incubator/cri-o/issues/1222. I couldn't really verify this, but I am quite confident this worked, as kubelet was starting and parsing the hook.
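For context, wiring such a hook into cri-o typically means dropping a JSON file into the OCI hooks directory; the exact schema and paths changed between versions, so the following is only an illustrative sketch, not the configuration used in the project:

cat > /etc/containers/oci/hooks.d/oci-nvidia-hook.json <<'EOF'
{
    "version": "1.0.0",
    "hook": {
        "path": "/usr/bin/nvidia-container-runtime-hook",
        "args": ["nvidia-container-runtime-hook", "prestart"]
    },
    "when": { "always": true },
    "stages": ["prestart"]
}
EOF
systemctl restart crio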
and this is where I failed
Using a chained forward proxy to add the workstation into a SUSE CaaSP cluster which was running in a SUSE Cloud cluster. I tried configuring 2 proxies with apache2, using mod_proxy, mod_proxy_http and mod_proxy_connect, where both were configured as forward proxies and the second one used the "ProxyRemote" directive to "chain" with the first one. Then I placed the first one inside the SUSE Cloud cluster, as a virtual machine, and the second one on my laptop. The trick worked, and I was able to access the autoyast file from the admin node which was in the SUSE Cloud cluster (http://adminnode/autoyast) when installing the workstation via the DVD, even though the admin node was not accessible outside the SUSE Cloud cluster, and the SUSE Cloud cluster is inside the VPN, where the workstation is not (but the laptop is). It sounds a bit complicated but actually the solution was quite simple. However, salt-minion does not use http but zeromq, and was not going through the proxies.
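For reference, the proxy chaining described above boils down to something like this on Apache 2.4 (hostnames, ports and networks are placeholders):

# on both hosts: enable the forward-proxy modules
a2enmod proxy
a2enmod proxy_http
a2enmod proxy_connect

# on the laptop proxy: forward everything to the proxy inside the SUSE Cloud cluster
cat > /etc/apache2/conf.d/chained-forward-proxy.conf <<'EOF'
ProxyRequests On
ProxyRemote "*" "http://proxy-inside-cloud.example.com:8080"
<Proxy "*">
    Require ip 192.168.0.0/24
</Proxy>
EOF

# on the proxy inside the SUSE Cloud cluster: a plain forward proxy
cat > /etc/apache2/conf.d/forward-proxy.conf <<'EOF'
ProxyRequests On
<Proxy "*">
    Require ip 192.168.0.0/24
</Proxy>
EOF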
Building nvidia-container and libnvidia-container packages for SUSE: I tried getting the spec file from GitHub, but it required so much tuning that it would have taken me the whole hackweek (or more) to get it building for SUSE, so I ended up using the packages from CentOS 7.
Setting up k8s to schedule jobs that require a GPU: Even though cri-o seemed correctly configured, jobs were not being scheduled. Most docs I found on the internet referred to adding the "--experimental-nvidia-gpus=1" option to kubelet, but this is not possible because kubelet does not recognize this option and fails to start. Then, I read in the k8s docs about enabling this via a device plugin: https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/. This requires enabling feature gates, which are not enabled by default. Here I think I failed because I didn't know how to do it, and unfortunately I ran out of time... However, while writing this report, flavio pointed me to https://wiki.microfocus.com/index.php/SUSE_CaaS_Platform/FAQ#How_to_enable_Kubernetes_feature_gates (thanks @flavio_castelli!) where you can see how to enable the feature gates. This is where we should resume the work if we have some time at some point.
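Once the DevicePlugins feature gate and the NVIDIA device plugin are in place, GPU scheduling is requested through the extended resource described in the scheduling-gpus documentation; a minimal sketch (pod name and image are just examples):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cuda-smoke-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda-smoke-test
    image: nvidia/cuda:9.1-base
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1
EOF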
Run a kubeflow deployment: I didn't have time to reach this point. This was the last step and a project on its own. Next hackweek, maybe...
[0] Why openSUSE Leap 42.3? SLE12SP3 shares the same common code base as openSUSE Leap 42.3, and for the hackweek I wanted to build the nvidia package in the Open Build Service https://build.opensuse.org. Using openSUSE Leap 42.3 (plus its update repo) was easier than trying to build for the exact kernel that has been shipped in CaaSPv3.
-
over 7 years ago by jordimassaguerpla
and thanks to @vrothberg for helping me out with cri-o/podman.
-
over 7 years ago by jordimassaguerpla
The url for how to enable the feature gates got formatted weirdly ... This is the url
https://wiki.microfocus.com/index.php/SUSE_CaaS_Platform/FAQ#How_to_enable_Kubernetes_feature_gates
and I think this is an internal document, so for the ones that do not have access:
How to enable Kubernetes feature gates
Feature gates are a mechanism used by Kubernetes to enable experimental features in advance.
It's possible to enable Kubernetes feature gates on SUSE CaaS Platform 3.
Please note: feature gates are experimental features, hence they won't be supported by SUSE.
Let's assume a user wants to use two feature gates:
- DevicePlugins
- ReadOnlyAPIDataVolumes
The user would have to log into the admin node and execute this command:
docker exec $(docker ps | grep velum-dashboard | awk {'print $1'}) entrypoint.sh bundle exec rails runner "Pillar.apply(kubernetes_feature_gates: 'DevicePlugins=true,ReadOnlyAPIDataVolumes=true')"
And then issue an orchestration. This can be done using the following command on the admin node:
docker exec $(docker ps | grep salt-master | awk {'print $1'}) salt-run state.orchestrate orch.kubernetes
-
over 7 years ago by jordimassaguerpla
Following the instructions in the previous comment, I was able to enable the device plugin.
However, when deploying the nvidia plugin, this didn't deploy a pod as expected, so I opened an issue upstream asking for more information.
https://github.com/NVIDIA/k8s-device-plugin/issues/62
-
over 7 years ago by jordimassaguerpla
A namespaced RoleBinding would add host path mount privileges, without granting excess privileges over all namespaces:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nvidia-device-plugin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nvidia-device-plugin-psp-privileged
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: suse:caasp:psp:privileged
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: nvidia-device-plugin
  namespace: kube-system

And then in your DaemonSet spec, `serviceAccount: nvidia-device-plugin`. This creates the ServiceAccount+RoleBinding in the kube-system namespace - if you're deploying into another NS, swap out `kube-system` for the namespace you're using.
Thanks to Ludovic and Kiall
-
-
over 6 years ago by jordimassaguerpla
So I was able to make SUSE CaaSP schedule jobs that need an nvidia GPU to the node that has an nvidia GPU :)
Here the documentation:
> Disclaimer
>
> This is a hackweek project, so this is not ready for production use. It contains hacks and workarounds just to "make it work".
>
> This has been tested with SUSE CaaSP Beta 3 (public beta). I used that "kubernetes distribution" because, given I am involved in that project, I am familiar with it (and I love it :) )
>
> However, instructions here should also be valid for openSUSE Kubic and in general for any kubernetes+cri-o distribution.
How to setup SUSE CaaSP to work with GPU
...
Installing SUSE CaaSP
We need a cluster, that is obvious :), so let's start with installing two nodes with SUSE CaaSP 4.0 Beta3 powered by SUSE Linux Enterprise Server 15 SP1, which will become our worker and master nodes:
gpu: A bare metal workstation with an NVIDIA Quadro K2000 graphics card
master: A virtual machine
You can do that by following the SUSE CaaSP Beta 3 deployment instructions.
> Tip
Make sure you have a user "sles" which can run "sudo" without a password, and that both nodes have the other's public ssh key in "~sles/.ssh/authorized_keys". Also, as stated in the deployment guide, make sure the ssh-agent is running, and add the hostnames into /etc/hosts so they are both reachable by hostname. Then, disable firewalld and enable sshd.
> Tip
Do not configure a swap partition, and set the VM to use 2 CPUs. Otherwise, SUSE CaaSP will fail to install.
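A rough sketch of the first tip's prerequisites, run on each node (IPs and hostnames are placeholders; adapt to your setup):

# passwordless sudo for the "sles" user
echo 'sles ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/sles

# as the sles user: create a key, copy it to the other node, start the agent
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id sles@<other-node>
eval "$(ssh-agent)" && ssh-add

# make both nodes reachable by hostname
echo '192.168.1.10 master' >> /etc/hosts
echo '192.168.1.11 gpu'    >> /etc/hosts

# disable the firewall, enable sshd
systemctl disable --now firewalld
systemctl enable --now sshd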
Finally, we need both machines to be in the same network. For this, I set up the VM to use a macvtap host device. You can find more info on macvtap here. However, I just did that with virt-manager on openSUSE Leap 15.1*.
(*): This is not precise. Actually, for whatever reason, I could not set this up with virt-manager run as a "normal user", but I could if I started virt-manager from YaST (with root permissions... may that be the reason?)
> Tip
Do you want to know if the cluster is properly set up? Run the k8s conformance tests.
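The text does not name a tool for the conformance tests; one common option (an assumption on my side, not from the original) is Sonobuoy:

sonobuoy run --mode=certified-conformance
sonobuoy status
results=$(sonobuoy retrieve)
sonobuoy results "$results"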
Once we have a SUSE CaaSP cluster running, we can proceed to install the NVIDIA drivers.
Installing nvidia graphics driver kernel module
So we have a workstation with an NVIDIA GPU compatible with CUDA (in our case a Quadro K2000). Now it is time to install the right drivers so we can use it.
Drivers can be installed from the NVIDIA download servers:
zypper ar https://developer.nvidia.com/cuda-gpus nvidia
zypper ref
zypper install nvidia-gfxG05-kmp-default
You can check if the drivers are loaded:
lsmod | grep nvidia
Now that we have the drivers, we can install the NVIDIA driver for computing with GPUs using CUDA.
Installing NVIDIA driver for computing with GPUs using CUDA
After installing the nvidia drivers, we need to install the nvidia-computeG05 package. Given we set up the nvidia repo in the previous step, all we need to do is:
zypper install nvidia-computeG05
You can check that this works by running nvidia-smi
and you should get this output
Wed Jun 26 14:30:59 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.26       Driver Version: 430.26       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro K2000        Off  | 00000000:05:00.0 Off |                  N/A |
| 31%   47C    P8    N/A /  N/A |      0MiB /  1998MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
Installing libnvidia
Installing nvidia-container-cli
Installing nvidia-container-runtime-hook
Creating a Service Daemon Device Nvidia
Testing
TODOs
- Package
- Package text
-
over 6 years ago by jordimassaguerpla
I thought I was going to be able to edit the comment afterwards, sorry for this mess. I will instead create a github page and link it from here.
-
over 6 years ago by jordimassaguerpla
Docs: https://github.com/jordimassaguerpla/SUSEhackweek18/blob/master/HowtosetupSUSECaaSPkubernetescrio_GPU.md
-
-
over 6 years ago by jordimassaguerpla
The third document needs some "love", but I can say that this time I was able to make it work and use all these technologies:
- kubeflow: Machine Learning Toolkit for Kubernetes
- Keras: The Python Deep Learning library
- Seldon Core: Machine Learning Deployment for Kubernetes
- TensorFlow: An open source machine learning framework for everyone
- cri-o: Lightweight Container Runtime for Kubernetes
- Kubernetes
- SUSE CaaSP: SUSE Container as a Service Platform
- NVIDIA container engine
- CUDA
Similar Projects
Song Search with CLAP by gcolangiuli
Description
Contrastive Language-Audio Pretraining (CLAP) is an open-source library that enables the training of a neural network on both Audio and Text descriptions, making it possible to search for Audio using a Text input. Several pre-trained models for song search are already available on huggingface
Goals
Evaluate how CLAP can be used for song searching and determine which types of queries yield the best results by developing a Minimum Viable Product (MVP) in Python. Based on the results of this MVP, future steps could include:
- Music Tagging;
- Free text search;
- Integration with an LLM (for example, with MCP or the OpenAI API) for music suggestions based on your own library.
The code for this project will be entirely written using AI to better explore and demonstrate AI capabilities.
Resources
- CLAP: The main model being researched;
- huggingface: Pre-trained models for CLAP;
- Free Music Archive: Creative Commons songs that can be used for testing;
- Colab: To be used as the development environment;
- hw25-song-search: Github repo of the project.
The Agentic Rancher Experiment: Do Androids Dream of Electric Cattle? by moio
Rancher is a beast of a codebase. Let's investigate if the new 2025 generation of GitHub Autonomous Coding Agents and Copilot Workspaces can actually tame it. 
The Plan
Create a sandbox GitHub Organization, clone in key Rancher repositories, and let the AI loose to see if it can handle real-world enterprise OSS maintenance - or if it just hallucinates new breeds of Kubernetes resources!
Specifically, throw "Agentic Coders" some typical tasks in a complex, long-lived open-source project, such as:
❥ The Grunt Work: generate missing GoDocs, unit tests, and refactorings. Rebase PRs.
❥ The Complex Stuff: fix actual (historical) bugs and feature requests to see if they can traverse the complexity without (too much) human hand-holding.
❥ Hunting Down Gaps: find areas lacking in docs, areas of improvement in code, dependency bumps, and so on.
If time allows, also experiment with Model Context Protocol (MCP) to give agents context on our specific build pipelines and CI/CD logs.
Why?
We know AI can write "Hello World." and also moderately complex programs from a green field. But can it rebase a 3-month-old PR with conflicts in rancher/rancher? I want to find the breaking point of current AI agents to determine if and how they can help us to reduce our technical debt, work faster and better. At the same time, find out about pitfalls and shortcomings.
The Outputs
❥ A "State of the Agentic Union" for SUSE engineers, detailing what works, what explodes, and how much coffee we can drink while the robots do the rebasing.
❥ Honest, Daily Updates With All the Gory Details
Self-Scaling LLM Infrastructure Powered by Rancher by ademicev0
Self-Scaling LLM Infrastructure Powered by Rancher

Description
The Problem
Running LLMs can get expensive and complex pretty quickly.
Today there are typically two choices:
- Use cloud APIs like OpenAI or Anthropic. Easy to start with, but costs add up at scale.
- Self-host everything - set up Kubernetes, figure out GPU scheduling, handle scaling, manage model serving... it's a lot of work.
What if there was a middle ground?
Project Repository: github.com/alexander-demicev/llmserverless
What This Project Does
A key feature is hybrid deployment: requests can be routed based on complexity or privacy needs. Simple or low-sensitivity queries can use public APIs (like OpenAI), while complex or private requests are handled in-house on local infrastructure. This flexibility allows balancing cost, privacy, and performance - using cloud for routine tasks and on-premises resources for sensitive or demanding workloads.
A complete, self-scaling LLM infrastructure that:
- Scales to zero when idle (no idle costs)
- Scales up automatically when requests come in
- Adds more nodes when needed, removes them when demand drops
- Runs on any infrastructure - laptop, bare metal, or cloud
Think of it as "serverless for LLMs" - focus on building, the infrastructure handles itself.
How It Works
A combination of open source tools working together:
Flow:
- Users interact with OpenWebUI (chat interface)
- Requests go to LiteLLM Gateway
- LiteLLM routes requests to:
- Ollama (Knative) for local model inference (auto-scales pods)
- Or cloud APIs for fallback
- Cluster Autoscaler scales nodes up/down as needed
- Fleet keeps everything in sync via GitOps
Goals
OpenPlatform Self-Service Portal by tmuntan1
Description
In SUSE IT, we developed an internal developer platform for our engineers using SUSE technologies such as RKE2, SUSE Virtualization, and Rancher. While it works well for our existing users, the onboarding process could be better.
To improve our customer experience, I would like to build a self-service portal to make it easy for people to accomplish common actions. To get started, I would have the portal create Jira SD tickets for our customers to have better information in our tickets, but eventually I want to add automation to reduce our workload.
Goals
- Build a frontend website (Angular) that helps customers create Jira SD tickets.
- Build a backend (Rust with Axum), which would do all the hard work for the frontend.
Resources
Mammuthus - The NFS-Ganesha inside Kubernetes controller by vcheng
Description
As a user-space NFS provider, NFS-Ganesha is widely used by several projects, e.g. Longhorn/Rook. We want to create a Kubernetes controller to make configuring NFS-Ganesha easy. This controller will let users configure NFS-Ganesha through different backends like VFS/CephFS.
Goals
- Create NFS-Ganesha Package on OBS: nfs-ganesha5, nfs-ganesha6
- Create NFS-Ganesha Container Image on OBS: Image
- Create a Kubernetes controller for NFS-Ganesha and support the VFS configuration on demand. Mammuthus
Resources
A CLI for Harvester by mohamed.belgaied
Harvester does not officially come with a CLI tool; the user is supposed to interact with Harvester mostly through the UI. Though it is theoretically possible to use kubectl to interact with Harvester, the manipulation of Kubevirt YAML objects is absolutely not user friendly. Inspired by tools like multipass from Canonical to easily and rapidly create one or multiple VMs, I began the development of Harvester CLI. Currently it works, but Harvester CLI needs some love to be up-to-date with Harvester v1.0.2 and needs some bug fixes and improvements as well.
Project Description
Harvester CLI is a command line interface tool written in Go, designed to simplify interfacing with a Harvester cluster as a user. It is especially useful for testing purposes as you can easily and rapidly create VMs in Harvester by providing a simple command such as:
harvester vm create my-vm --count 5
to create 5 VMs named my-vm-01 to my-vm-05.
Harvester CLI is functional but needs a number of improvements: up-to-date functionality with Harvester v1.0.2 (some minor issues right now), modifying the default behaviour to create an opensuse VM instead of an ubuntu VM, solve some bugs, etc.
Github Repo for Harvester CLI: https://github.com/belgaied2/harvester-cli
Done in previous Hackweeks
- Create a Github actions pipeline to automatically integrate Harvester CLI to Homebrew repositories: DONE
- Automatically package Harvester CLI for OpenSUSE / Redhat RPMs or DEBs: DONE
Goal for this Hackweek
The goal for this Hackweek is to bring Harvester CLI up-to-speed with latest Harvester versions (v1.3.X and v1.4.X), and improve the code quality as well as implement some simple features and bug fixes.
Some nice additions might be:
- Improve handling of namespaced objects
- Add features, such as network management or Load Balancer creation?
- Add more unit tests and, why not, e2e tests
- Improve CI
- Improve the overall code quality
- Test the program and create issues for it
Issue list is here: https://github.com/belgaied2/harvester-cli/issues
Resources
The project is written in Go and uses client-go, the Kubernetes Go client library, to communicate with the Harvester API (which is in fact Kubernetes).
Welcome contributions include:
- Testing it and creating issues
- Documentation
- Go code improvement
What you might learn
Harvester CLI might be interesting to you if you want to learn more about:
- GitHub Actions
- Harvester as a SUSE Product
- Go programming language
- Kubernetes API
- Kubevirt API objects (Manipulating VMs and VM Configuration in Kubernetes using Kubevirt)
Technical talks at universities by agamez
Description
This project aims to empower the next generation of tech professionals by offering hands-on workshops on containerization and Kubernetes, with a strong focus on open-source technologies. By providing practical experience with these cutting-edge tools and fostering a deep understanding of open-source principles, we aim to bridge the gap between academia and industry.
For now, the scope is limited to Spanish universities, since we already have the contacts and have started some conversations.
Goals
- Technical Skill Development: equip students with the fundamental knowledge and skills to build, deploy, and manage containerized applications using open-source tools like Kubernetes.
- Open-Source Mindset: foster a passion for open-source software, encouraging students to contribute to open-source projects and collaborate with the global developer community.
- Career Readiness: prepare students for industry-relevant roles by exposing them to real-world use cases, best practices, and open-source in companies.
Resources
- Instructors: experienced open-source professionals with deep knowledge of containerization and Kubernetes.
- SUSE Expertise: leverage SUSE's expertise in open-source technologies to provide insights into industry trends and best practices.
Rewrite Distrobox in go (POC) by fabriziosestito
Description
Rewriting Distrobox in Go.
Main benefits:
- Easier to maintain and to test
- Adapter pattern for different container backends (LXC, systemd-nspawn, etc.)
Goals
- Build a minimal starting point with core commands
- Keep the CLI interface compatible: existing users shouldn't notice any difference
- Use a clean Go architecture with adapters for different container backends
- Keep dependencies minimal and binary size small
- Benchmark against the original shell script
Resources
- Upstream project: https://github.com/89luca89/distrobox/
- Distrobox site: https://distrobox.it/
- ArchWiki: https://wiki.archlinux.org/title/Distrobox
