
Make it faster!
There are use cases that put the Kubernetes API under heavy load - using Rancher at scale is one of them.
There are also use cases in which a connection to the Kubernetes API might not always be present, or might have poor bandwidth - using Rancher at the edge is one of them.
This project aims to create a local cache serving data from the Kubernetes API - with good performance, and serving last-known-good results over a flaky connection.
Goal for this Hackweek
Implement Proof-Of-Concept client-go components backed by SQLite.
https://github.com/moio/vai
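To make this concrete, here is a minimal sketch of the idea - not the actual vai code - of a SQLite-backed object store in the spirit of client-go's cache.Store (only Add and GetByKey shown; the real interface has more methods). It assumes the mattn/go-sqlite3 driver and client-go's "namespace/name" key convention:

package main

import (
	"database/sql"
	"encoding/json"
	"fmt"

	_ "github.com/mattn/go-sqlite3" // assumed driver; not necessarily what vai uses
)

type sqliteStore struct {
	db *sql.DB
}

func newSQLiteStore(path string) (*sqliteStore, error) {
	db, err := sql.Open("sqlite3", path)
	if err != nil {
		return nil, err
	}
	// One row per object, keyed like client-go: "namespace/name".
	_, err = db.Exec(`CREATE TABLE IF NOT EXISTS objects (key TEXT PRIMARY KEY, data BLOB NOT NULL)`)
	return &sqliteStore{db: db}, err
}

// Add serializes the object and upserts it by key.
func (s *sqliteStore) Add(key string, obj any) error {
	data, err := json.Marshal(obj)
	if err != nil {
		return err
	}
	_, err = s.db.Exec(`INSERT INTO objects(key, data) VALUES(?, ?)
	                    ON CONFLICT(key) DO UPDATE SET data = excluded.data`, key, data)
	return err
}

// GetByKey returns the cached copy, whether or not the API server is reachable.
func (s *sqliteStore) GetByKey(key string, into any) (bool, error) {
	var data []byte
	err := s.db.QueryRow(`SELECT data FROM objects WHERE key = ?`, key).Scan(&data)
	if err == sql.ErrNoRows {
		return false, nil
	}
	if err != nil {
		return false, err
	}
	return true, json.Unmarshal(data, into)
}

func main() {
	store, err := newSQLiteStore("cache.db")
	if err != nil {
		panic(err)
	}
	_ = store.Add("default/my-pod", map[string]string{"phase": "Running"})
	var pod map[string]string
	if ok, _ := store.GetByKey("default/my-pod", &pod); ok {
		fmt.Println("cached:", pod)
	}
}

Because reads are served from the local database, such a store can keep returning the last-known-good copy of an object even while the API server is unreachable.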
Resources
Golang and ideally Kubernetes hackers are more than welcome!
This project is part of Hack Week 22.
Technical talks at universities by agamez
Description
This project aims to empower the next generation of tech professionals by offering hands-on workshops on containerization and Kubernetes, with a strong focus on open-source technologies. By providing practical experience with these cutting-edge tools and fostering a deep understanding of open-source principles, we aim to bridge the gap between academia and industry.
For now, the scope is limited to Spanish universities, since we already have the contacts and have started some conversations.
Goals
- Technical Skill Development: equip students with the fundamental knowledge and skills to build, deploy, and manage containerized applications using open-source tools like Kubernetes.
- Open-Source Mindset: foster a passion for open-source software, encouraging students to contribute to open-source projects and collaborate with the global developer community.
- Career Readiness: prepare students for industry-relevant roles by exposing them to real-world use cases, best practices, and how open source is used in companies.
Resources
- Instructors: experienced open-source professionals with deep knowledge of containerization and Kubernetes.
- SUSE Expertise: leverage SUSE's expertise in open-source technologies to provide insights into industry trends and best practices.
The Agentic Rancher Experiment: Do Androids Dream of Electric Cattle? by moio
Rancher is a beast of a codebase. Let's investigate if the new 2025 generation of GitHub Autonomous Coding Agents and Copilot Workspaces can actually tame it. 
The Plan
Create a sandbox GitHub Organization, clone in key Rancher repositories, and let the AI loose to see if it can handle real-world enterprise OSS maintenance - or if it just hallucinates new breeds of Kubernetes resources!
Specifically, throw "Agentic Coders" some typical tasks in a complex, long-lived open-source project, such as:
❥ The Grunt Work: generate missing GoDocs, unit tests, and refactorings. Rebase PRs.
❥ The Complex Stuff: fix actual (historical) bugs and feature requests to see if they can traverse the complexity without (too much) human hand-holding.
❥ Hunting Down Gaps: find areas lacking in docs, areas of improvement in code, dependency bumps, and so on.
If time allows, also experiment with Model Context Protocol (MCP) to give agents context on our specific build pipelines and CI/CD logs.
Why?
We know AI can write "Hello World." and also moderately complex programs from a green field. But can it rebase a 3-month-old PR with conflicts in rancher/rancher? I want to find the breaking point of current AI agents, to determine if and how they can help us reduce our technical debt and work faster and better - and, along the way, to learn about their pitfalls and shortcomings.
The CONCLUSION!!!
A State of the Union document was compiled to summarize lessons learned this week. For more gory details, just read the diary below!
Self-Scaling LLM Infrastructure Powered by Rancher by ademicev0
Description
The Problem
Running LLMs can get expensive and complex pretty quickly.
Today there are typically two choices:
- Use cloud APIs like OpenAI or Anthropic. Easy to start with, but costs add up at scale.
- Self-host everything - set up Kubernetes, figure out GPU scheduling, handle scaling, manage model serving... it's a lot of work.
What if there was a middle ground?
What if infrastructure scaled itself instead of making you scale it?
Can we use existing Rancher capabilities like CAPI, autoscaling, and GitOps to make this simpler instead of building everything from scratch?
Project Repository: github.com/alexander-demicev/llmserverless
What This Project Does
A complete, self-scaling LLM infrastructure that:
- Scales to zero when idle (no idle costs)
- Scales up automatically when requests come in
- Adds more nodes when needed, removes them when demand drops
- Runs on any infrastructure - laptop, bare metal, or cloud
Think of it as "serverless for LLMs" - focus on building; the infrastructure handles itself.
A key feature is hybrid deployment: requests can be routed based on complexity or privacy needs. Simple or low-sensitivity queries can use public APIs (like OpenAI), while complex or private requests are handled in-house on local infrastructure. This flexibility allows balancing cost, privacy, and performance - using the cloud for routine tasks and on-premises resources for sensitive or demanding workloads.
How It Works
A combination of open source tools working together:
Flow:
- Users interact with OpenWebUI (chat interface)
- Requests go to LiteLLM Gateway
- LiteLLM routes requests to:
  - Ollama (Knative) for local model inference (auto-scales pods)
  - or cloud APIs as a fallback
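As a toy illustration of the routing decision - the endpoint URLs and the privacy rule here are assumptions, not the project's actual LiteLLM configuration:

package main

import "fmt"

type request struct {
	Prompt  string
	Private bool // privacy-sensitive requests must stay in-house
}

// backendFor picks a local (Ollama via Knative) or cloud
// (OpenAI-compatible) endpoint per request.
func backendFor(r request) string {
	if r.Private {
		return "http://ollama.default.svc.cluster.local/v1" // hypothetical in-cluster URL
	}
	return "https://api.openai.com/v1"
}

func main() {
	fmt.Println(backendFor(request{Prompt: "summarize this internal doc", Private: true}))
	fmt.Println(backendFor(request{Prompt: "what is the capital of France?", Private: false}))
}

In the real setup this decision lives in the LiteLLM gateway's routing configuration rather than in application code.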
Kubernetes-Based ML Lifecycle Automation by lmiranda
Description
This project aims to build a complete end-to-end machine learning pipeline running entirely on Kubernetes, using Go and containerized ML components.
The pipeline will automate the lifecycle of a machine learning model, including:
- Data ingestion/collection
- Model training as a Kubernetes Job
- Model artifact storage in an S3-compatible registry (e.g. Minio)
- A Go-based deployment controller that automatically deploys new model versions to Kubernetes using Rancher
- A lightweight inference service that loads and serves the latest model
- Monitoring of model performance and service health through Prometheus/Grafana
The outcome is a working prototype of an MLOps workflow that demonstrates how AI workloads can be trained, versioned, deployed, and monitored using the Kubernetes ecosystem.
Goals
By the end of Hack Week, the project should:
Produce a fully functional ML pipeline running on Kubernetes with:
- Data collection job
- Training job container
- Storage and versioning of trained models
- Automated deployment of new model versions
- Model inference API service
- Basic monitoring dashboards
Showcase a Go-based deployment automation component that scans the model registry and automatically generates and applies Kubernetes manifests for new model versions (see the sketch below).
Enable continuous improvement by making the system modular and extensible (e.g., additional models, metrics, autoscaling, or drift detection can be added later).
Prepare a short demo explaining the end-to-end process and how new models flow through the system.
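A hedged sketch of what that deployment automation component could look like - the bucket name, artifact layout, container image, and the kubectl-based apply step are all assumptions, not the project's actual design:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

const manifestTmpl = `apiVersion: apps/v1
kind: Deployment
metadata:
  name: model-%[1]s
spec:
  replicas: 1
  selector:
    matchLabels: {model: "%[1]s"}
  template:
    metadata:
      labels: {model: "%[1]s"}
    spec:
      containers:
      - name: inference
        image: registry.example.com/inference:latest # hypothetical image
        env:
        - {name: MODEL_VERSION, value: "%[1]s"}
`

func main() {
	// Connect to the S3-compatible registry (endpoint/creds are placeholders).
	client, err := minio.New("minio.local:9000", &minio.Options{
		Creds: credentials.NewStaticV4("ACCESS", "SECRET", ""),
	})
	if err != nil {
		panic(err)
	}
	// One scan pass; the real controller would loop or watch.
	for obj := range client.ListObjects(context.Background(), "models",
		minio.ListObjectsOptions{Recursive: true}) {
		if obj.Err != nil {
			continue
		}
		// Assumed layout: one artifact per version, e.g. "v3.bin".
		version := strings.TrimSuffix(obj.Key, ".bin")
		yaml := fmt.Sprintf(manifestTmpl, version)
		// Apply with kubectl for brevity; client-go would also work.
		cmd := exec.Command("kubectl", "apply", "-f", "-")
		cmd.Stdin = strings.NewReader(yaml)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Println("apply failed:", string(out))
		}
	}
}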
Resources
Updates
- Training pipeline and datasets
- Inference service
Preparing KubeVirtBMC for project transfer to the KubeVirt organization by zchang
Description
KubeVirtBMC is preparing for a project transfer to the KubeVirt organization. One requirement is to improve the security of the API design. The current v1alpha1 API (the VirtualMachineBMC CRD) was designed during the proof-of-concept stage; it is immature and inherently insecure due to its cross-namespace object references, which raise security concerns from an RBAC perspective.
The other long-awaited feature is the ability to mount virtual media so that virtual machines can boot from remote ISO images.
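For illustration only, a hypothetical Go type showing the general direction such a v1beta1 API could take - referencing the VirtualMachine by name within the CR's own namespace, so RBAC can stay namespace-scoped (the actual v1beta1 design is tracked in the issue linked under Resources):

package v1beta1

// VirtualMachineBMCSpec is a hypothetical shape, not the actual
// kubevirtbmc v1beta1 API.
type VirtualMachineBMCSpec struct {
	// VirtualMachineName names a KubeVirt VirtualMachine in the
	// same namespace as this VirtualMachineBMC object. Dropping
	// the cross-namespace reference keeps RBAC per-namespace.
	VirtualMachineName string `json:"virtualMachineName"`
}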
Goals
- Deliver the v1beta1 API and its corresponding controller implementation
- Enable the Redfish virtual media mount function for KubeVirt virtual machines
Resources
- The KubeVirtBMC repo: https://github.com/starbops/kubevirtbmc
- The new v1beta1 API: https://github.com/starbops/kubevirtbmc/issues/83
- Redfish virtual media mount: https://github.com/starbops/kubevirtbmc/issues/44
Bugzilla goes AI - Phase 1 by nwalter
Description
This project, Bugzilla goes AI, aims to boost developer productivity by creating an autonomous AI bug agent during Hackweek. The primary goal is to reduce the time employees spend triaging bugs by integrating Ollama to summarize issues, recommend next steps, and push focused daily reports to a Web Interface.
Goals
To reduce employee time spent on Bugzilla by implementing an AI tool that triages and summarizes bug reports, providing actionable recommendations to the team via a web interface.
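As a sketch of the Ollama integration, here is a minimal Go call against Ollama's /api/generate endpoint; the model name and bug text are placeholders, and the real agent would feed in actual Bugzilla data:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"model":  "llama3", // assumed locally pulled model
		"prompt": "Summarize this bug and recommend next steps:\n<bug text here>",
		"stream": false,
	})
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var out struct {
		Response string `json:"response"`
	}
	_ = json.NewDecoder(resp.Body).Decode(&out)
	fmt.Println(out.Response)
}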
Resources
- Project Charter
- Project Achievements during Hackweek: a write-up of what we achieved during the week
HTTP API for nftables by crameleon
Background
The idea originated in https://progress.opensuse.org/issues/164060 and is about building a RESTful API which translates authorized HTTP requests into nftables operations, possibly utilizing libnftables-json(5).
Originally, I started developing such an interface in Go, using https://github.com/google/nftables. The conversion of string networks to nftables set elements was problematic (unfortunately I have no record of the details), so I started a second attempt in Python, where the native nftables Python bindings made the interaction much simpler.
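As a rough Go sketch of the idea - the table and set names, the URL scheme, and the missing authorization layer are all assumptions - an endpoint that adds an IPv4 address to an existing nftables set via google/nftables could look like:

package main

import (
	"net"
	"net/http"

	"github.com/google/nftables"
)

func main() {
	http.HandleFunc("/sets/allowed/elements", func(w http.ResponseWriter, r *http.Request) {
		ip := net.ParseIP(r.URL.Query().Get("ip")).To4()
		if ip == nil {
			http.Error(w, "invalid ip", http.StatusBadRequest)
			return
		}
		c := &nftables.Conn{}
		// Reference an existing set "allowed" in the inet "filter" table.
		table := &nftables.Table{Family: nftables.TableFamilyINet, Name: "filter"}
		set := &nftables.Set{Table: table, Name: "allowed", KeyType: nftables.TypeIPAddr}
		if err := c.SetAddElements(set, []nftables.SetElement{{Key: ip}}); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		if err := c.Flush(); err != nil { // commits the netlink batch
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusCreated)
	})
	http.ListenAndServe(":8080", nil) // needs CAP_NET_ADMIN to talk to nftables
}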
Goals
- Find and track the issue with google/nftables
- Revisit and polish the Go or Python code (Go is preferred, but the choice may depend on implementing the missing functionality), primarily the server component
- Finish functionality to interact with nftables sets (retrieving and updating elements), which are of interest for the originating issue
- Align test suite
- Packaging
Resources
- https://git.netfilter.org/nftables/tree/py/src/nftables.py
- https://git.com.de/Georg/nftables-http-api (to be moved to GitHub)
- https://build.opensuse.org/package/show/home:crameleon:containers/pytest-nftables-container
Results
- Started new https://github.com/tacerus/nftables-http-api.
- The first Go nftables issue was related to set elements needing to be added with distinct start and end addresses - coincidentally, this was recently discovered by someone else, who added a useful helper function for it: https://github.com/google/nftables/pull/342.
- Further improvements submitted: https://github.com/google/nftables/pull/347.
Side results
While unifying the structure and implementing more functionality, I noticed missing JSON output support for some subcommands in libnftables. Submitted patches here as well:
- https://lore.kernel.org/netfilter-devel/20251203131736.4036382-2-georg@syscid.com/T/#u
Create a Cloud-Native policy engine with notifying capabilities to optimize resource usage by gbazzotti
Description
The goal of this project is to begin the initial phase of development of an all-in-one cloud-native policy engine that notifies resource owners when their resources violate predetermined policies. It was inspired by a current issue in the CES-SRE team, where existing solutions did not quite match the needs of the specific workloads running in the Public Cloud team's space.
The initial architecture can be found in the repository listed under Resources.
Features that set this project apart from other monitoring/notification systems:
- Pre-defined, sensible policies shipped at the software level, avoiding the learning curve of requiring users to write their own policies
- All-in-one functionality: logging, mailing, and all other actions work without installing any additional plugins or packages
- Easy account management: all required configuration is parsed from a single JSON file
- No extra integrations: metrics are not required to go through a data aggregator
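Purely as an illustration of the single-JSON-file idea, the configuration could map onto Go types like these (field names are assumptions, not the project's actual schema):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type Config struct {
	Accounts []Account `json:"accounts"`
	SMTP     SMTP      `json:"smtp"`
}

type Account struct {
	Name       string   `json:"name"`
	OwnerEmail string   `json:"ownerEmail"`
	Policies   []string `json:"policies"` // names of built-in policies to enforce
}

type SMTP struct {
	Host string `json:"host"`
	Port int    `json:"port"`
}

func main() {
	raw, err := os.ReadFile("config.json")
	if err != nil {
		panic(err)
	}
	var cfg Config
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%d accounts loaded\n", len(cfg.Accounts))
}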
Goals
- Create a minimal working prototype following the workflow specified in the documentation
- Provide instructions on installation/usage
- Work on email notification capabilities
Resources
Play with the userfaultfd(2) system call and download on demand using HTTP Range Requests with Golang by rbranco
Description
userfaultfd(2) is a cool system call for handling page faults in user space. It should allow me to list the contents of an ISO or similar archive without downloading the whole thing. The userfaultfd(2) part can in theory also be done with the PROT_NONE mprotect + SIGSEGV trick, for complete Unix portability, though that is reportedly slower.
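The HTTP Range Request half is easy to sketch in Go: ask the server for just the first bytes of a remote image instead of the whole file (the URL is a placeholder):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "https://example.com/image.iso", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Range", "bytes=0-2047") // inclusive byte range
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// 206 Partial Content means the server honored the range.
	fmt.Println("status:", resp.Status)
	data, _ := io.ReadAll(resp.Body)
	fmt.Println("got", len(data), "bytes")
}

Combining this with userfaultfd(2) would let page faults drive which byte ranges actually get fetched.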
Goals
- Create my own library for userfaultfd(2) in Golang.
- Create my own library for HTTP Range Requests.
- Complete Unix portability.
- Benchmarks.
- Contribute some tests to LTP.
Resources
- https://docs.kernel.org/admin-guide/mm/userfaultfd.html
- https://www.cons.org/cracauer/cracauer-userfaultfd.html
terraform-provider-feilong by e_bischoff
Project Description
People need to test operating systems and applications on the s390 platform. While this is straightforward with KVM, it is very difficult with z/VM.
IBM Cloud Infrastructure Center (ICIC) harnesses the Feilong API, but you can use Feilong without installing ICIC (see this schema).
What about writing a Terraform Feilong provider, just like we have the Terraform libvirt provider? That would allow you to transparently call Feilong from your main.tf files to deploy and destroy resources on your z/VM system.
Goal for Hackweek 23
I would like to be able to easily deploy and provision VMs automatically on a z/VM system, in a way that people might enjoy even outside of SUSE.
My technical preference is to write a Terraform provider plugin, as this approach involves the fewest software components for our deployments while remaining clean and compatible with our existing development infrastructure.
Goals for Hackweek 24
The Feilong provider works and is used internally by the SUSE Manager team. Let's push it forward!
Let's add support for Fibre Channel disks and multipath.
Goals for Hackweek 25
Modernization, maturity, and maintenance: support for SLES 16 and openTofu, new API calls, fixes...
Resources
Outcome
Rewrite Distrobox in go (POC) by fabriziosestito
Description
Rewriting Distrobox in Go.
Main benefits:
- Easier to maintain and to test
- Adapter pattern for different container backends (LXC, systemd-nspawn, etc.)
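A tiny sketch of the adapter idea in Go - the method set is an assumption, not the actual design:

package main

import "fmt"

// ContainerBackend abstracts the runtime distrobox drives.
type ContainerBackend interface {
	Create(name, image string) error
	Enter(name string) error
}

type podmanBackend struct{}

func (podmanBackend) Create(name, image string) error {
	fmt.Println("podman create --name", name, image) // real code would exec podman
	return nil
}

func (podmanBackend) Enter(name string) error {
	fmt.Println("podman exec -it", name, "sh")
	return nil
}

func main() {
	var be ContainerBackend = podmanBackend{}
	_ = be.Create("mybox", "registry.opensuse.org/opensuse/tumbleweed")
	_ = be.Enter("mybox")
}

Adding LXC or systemd-nspawn support then means writing another type that satisfies the same interface.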
Goals
- Build a minimal starting point with core commands
- Keep the CLI interface compatible: existing users shouldn't notice any difference
- Use a clean Go architecture with adapters for different container backends
- Keep dependencies minimal and binary size small
- Benchmark against the original shell script
Resources
- Upstream project: https://github.com/89luca89/distrobox/
- Distrobox site: https://distrobox.it/
- ArchWiki: https://wiki.archlinux.org/title/Distrobox
Updatecli Autodiscovery supporting WASM plugins by olblak
Description
Updatecli is a Go-based update policy engine that lets you describe update policies in YAML manifests. Updatecli already has a plugin ecosystem for common update strategies, such as automating updates of Dockerfiles or Kubernetes manifests from Git repositories.
This is what we call autodiscovery: Updatecli generates manifests and applies them dynamically based on some context.
Obviously, the Updatecli project doesn't accept plugins specific to one organization.
I have seen projects use different languages such as Python, C#, or JS to generate those manifests.
It would be great to be able to share and reuse those organization-specific plugins.
During the HackWeek, I'll hang out on the Updatecli Matrix channel:
https://matrix.to/#/#Updatecli_community:gitter.im
Goals
Implement autodiscovery plugins using WASM. I am planning to experiment with https://github.com/extism/extism: build a simple WASM autodiscovery plugin and run it from Updatecli.
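A hedged guest-side sketch of what such a plugin could look like with the Extism Go PDK - the "discover" export name and the JSON payload are assumptions, since defining Updatecli's actual WASM contract is the point of the experiment:

package main

import "github.com/extism/go-pdk"

// discover is the exported entry point the host would call.
//
//go:wasmexport discover
func discover() int32 {
	// The host passes context (e.g. a repository path) as plugin input.
	repo := string(pdk.Input())
	// Emit a generated manifest description for the host to apply.
	pdk.OutputString(`{"repo": "` + repo + `", "manifests": []}`)
	return 0
}

func main() {}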
Resources
- https://github.com/extism/extism
- https://github.com/updatecli/updatecli
- https://www.updatecli.io/docs/core/autodiscovery/
- https://matrix.to/#/#Updatecli_community:gitter.im
SUSE Health Check Tools by roseswe
SUSE HC Tools Overview
A collection of tools written in Bash or Go 1.24+ that make it easier to work with the pile of tar.xz archives created by supportconfig.
Background: for SUSE health checks (HC) we receive a number of supportconfig tarballs and check them for misconfigurations, areas for improvement, or upcoming changes.
The main focus of these health checks is High Availability (Pacemaker), SLES itself, and SAP workloads, especially around the SUSE best practices.
Goals
- Overall improvement of the tools
- Adding new collectors
- Add support for SLES16
Resources
Repository contents (directory listing): README.md, config.ini, cpuvul*, cpuvul.go, credtest.go, csv2xls*, csv2xls.go, csvfiles/, docollall.sh*, example.py, example.sh, exceltest.go, extractcluster.go, extracthtml*, extracthtml.go, firmwarebug*, firmwarebug.go, gethostnamectl*, gethostnamectl.go, getrpm*, getrpm.go, go.mod, go.sum, listprodids*, listprodids.go, listprodids.txt, m.sh*, numastat*, numastat.go, numastattest.go, rpmdate.sh*, sccfixer.sh*, sumtext*, sumtext.go, sumxls*, sumxls.go, trails.go, vercheck.py*, verdriver*, verdriver.go, xtr_cib.sh*
Example usage:
$ getrpm -r pacemaker
>> Product ID: 2795 (SUSE Linux Enterprise Server for SAP Applications 15 SP7 x86_64), RPM Name: pacemaker
+--------------+----------------------------+--------+--------------+--------------------+
| Package Name | Version | Arch | Release | Repository |
+--------------+----------------------------+--------+--------------+--------------------+
| pacemaker | 2.1.10+20250718.fdf796ebc8 | x86_64 | 150700.3.3.1 | sle-ha/15.7/x86_64 |
| pacemaker | 2.1.9+20250410.471584e6a2 | x86_64 | 150700.1.9 | sle-ha/15.7/x86_64 |
+--------------+----------------------------+--------+--------------+--------------------+
Total packages found: 2
Add support for todo.sr.ht to git-bug by mcepl
Description
I am a big fan of distributed issue tracking, and the best (and possibly only) credible such issue tracker is now git-bug. It has bridges to other centralized issue trackers, so a user can download (and modify) issues on GitHub, GitLab, Launchpad, and Jira. I am also a fan of SourceHut, which has its own issue tracker, so I would like to bridge the two. Alas, I don't know much about the Go programming language (in which git-bug is written) and absolutely nothing about GraphQL (which todo.sr.ht uses for communication). AI to the rescue: I would like to vibe-code (and eventually debug and make functional) a bridge to the SourceHut issue tracker.
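For orientation, a minimal Go call against the todo.sr.ht GraphQL endpoint with a personal access token might look like this; the query only fetches the current user's name, the real bridge would query trackers and tickets, and the exact fields should be checked against the live schema:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	payload, _ := json.Marshal(map[string]string{
		"query": `{ me { canonicalName } }`,
	})
	req, _ := http.NewRequest(http.MethodPost, "https://todo.sr.ht/query",
		bytes.NewReader(payload))
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+"<personal access token>")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}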
Goals
Functional fix for https://github.com/git-bug/git-bug/issues/1024
Resources
- anybody who actually understands how GraphQL and authentication (OAuth2) on SourceHut work
Create a go module to wrap happy-compta.fr by cbosdonnat
Description
https://happy-compta.fr is a simple bookkeeping tool for French works councils. While it does the job, it has no API to work with, and entering loads of operations is tedious.
Goals
Write a Go client module to be used as an API to programmatically manipulate the tool.
Writing an example tool that loads data from a CSV file would be good too.
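A rough sketch of the CSV-loading half - the Operation type and the upload call are hypothetical, since the client API is exactly what this project sets out to write:

package main

import (
	"encoding/csv"
	"fmt"
	"os"
	"strconv"
)

type Operation struct {
	Label  string
	Amount float64
}

func main() {
	f, err := os.Open("operations.csv")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	rows, err := csv.NewReader(f).ReadAll()
	if err != nil {
		panic(err)
	}
	for _, row := range rows { // expected columns: label, amount
		amount, err := strconv.ParseFloat(row[1], 64)
		if err != nil {
			panic(err)
		}
		op := Operation{Label: row[0], Amount: amount}
		fmt.Printf("would upload: %+v\n", op) // real tool would call something like client.AddOperation(op)
	}
}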
RMT.rs: High-Performance Registration Path for RMT using Rust by gbasso
Description
The SUSE Repository Mirroring Tool (RMT) is a critical component for managing software updates and subscriptions, especially for our Public Cloud Team (PCT). In a cloud environment, hundreds or even thousands of new SUSE instances (VPS/EC2) can be provisioned simultaneously. Each new instance attempts to register against an RMT server, creating a "thundering herd" scenario.
We have observed that the current RMT server, written in Ruby, faces performance issues under this high-concurrency registration load. This can lead to request overhead, slow registration times, and outright registration failures, delaying the readiness of new cloud instances.
This Hackweek project aims to explore a solution by re-implementing the performance-critical registration path in Rust. The goal is to leverage Rust's high performance, memory safety, and first-class concurrency handling to create an alternative registration endpoint that is fast, reliable, and can gracefully manage massive, simultaneous request spikes.
The new Rust module will be integrated into the existing RMT Ruby application, allowing us to directly compare the performance of both implementations.
Goals
The primary objective is to build and benchmark a high-performance Rust-based alternative for the RMT server registration endpoint.
Key goals for the week:
- Analyze & Identify: Dive into the
SUSE/rmtRuby codebase to identify and map out the exact critical path for server registration (e.g., controllers, services, database interactions). - Develop in Rust: Implement a functionally equivalent version of this registration logic in Rust.
- Integrate: Explore and implement a method for Ruby/Rust integration to "hot-wire" the new Rust module into the RMT application. This may involve using FFI, or libraries like
rb-sysormagnus. - Benchmark: Create a benchmarking script (e.g., using
k6,ab, or a custom tool) that simulates the high-concurrency registration load from thousands of clients. - Compare & Present: Conduct a comparative performance analysis (requests per second, latency, success/error rates, CPU/memory usage) between the original Ruby path and the new Rust path. The deliverable will be this data and a summary of the findings.
Resources
- RMT Source Code (Ruby): https://github.com/SUSE/rmt
- RMT Documentation: https://documentation.suse.com/sles/15-SP7/html/SLES-all/book-rmt.html
- Tooling & Stacks:
  - RMT/Ruby development environment (for running the base RMT)
  - Rust development environment (rustup, cargo)
- Potential Integration Libraries:
  - rb-sys: https://github.com/oxidize-rb/rb-sys
  - Magnus: https://github.com/matsadler/magnus
- Benchmarking Tools:
  - k6 (https://k6.io/)
  - ab (ApacheBench)
dynticks-testing: analyse perf / trace-cmd output and aggregate data by m.crivellari
Description
dynticks-testing is a project started years ago by Frederic Weisbecker. One of its features is checking the actual configuration (isolcpus, irqaffinity, etc.) and giving feedback on it.
An important goal of this tool is to parse the output of trace-cmd / perf and provide more readable data, showing the duration of every event grouped by PID (also showing the CPU number, whether the task has been migrated, and so on).
An example of data captured on my laptop (incomplete!!):
<idle>-0 [005] dN.2. 20310.270699: sched_wakeup: WaylandProxy:46380 [120] CPU:005
<idle>-0 [005] d..2. 20310.270702: sched_switch: swapper/5:0 [120] R ==> WaylandProxy:46380 [120]
...
WaylandProxy-46380 [004] d..2. 20310.295397: sched_switch: WaylandProxy:46380 [120] S ==> swapper/4:0 [120]
<idle>-0 [006] d..2. 20310.295397: sched_switch: swapper/6:0 [120] R ==> firefox:46373 [120]
firefox-46373 [006] d..2. 20310.295408: sched_switch: firefox:46373 [120] S ==> swapper/6:0 [120]
<idle>-0 [004] dN.2. 20310.295466: sched_wakeup: WaylandProxy:46380 [120] CPU:004
Output of noise_parse.py:
Task: WaylandProxy Pid: 46380 cpus: {4, 5} (Migrated!!!)
Wakeup Latency Nr: 24 Duration: 89
Sched switch: kworker/12:2 Nr: 1 Duration: 6
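To illustrate the kind of aggregation involved, here is a small Go sketch (the real tool, noise_parse.py, is Python) that pulls task, PID, and CPU out of sched_switch lines and flags tasks seen on more than one CPU; the second trace line below is made up to show a migration:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Matches the CPU column, the timestamp, and the task switched in after "==>".
var schedSwitch = regexp.MustCompile(`\[(\d+)\]\s+\S+\s+([\d.]+): sched_switch: .*==> (\S+):(\d+)`)

func main() {
	trace := `<idle>-0 [005] d..2. 20310.270702: sched_switch: swapper/5:0 [120] R ==> WaylandProxy:46380 [120]
<idle>-0 [004] d..2. 20310.295500: sched_switch: swapper/4:0 [120] R ==> WaylandProxy:46380 [120]
<idle>-0 [006] d..2. 20310.295397: sched_switch: swapper/6:0 [120] R ==> firefox:46373 [120]`

	cpusByTask := map[string]map[string]bool{}
	for _, line := range strings.Split(trace, "\n") {
		m := schedSwitch.FindStringSubmatch(line)
		if m == nil {
			continue
		}
		cpu, task, pid := m[1], m[3], m[4]
		key := task + ":" + pid
		if cpusByTask[key] == nil {
			cpusByTask[key] = map[string]bool{}
		}
		cpusByTask[key][cpu] = true
	}
	for key, cpus := range cpusByTask {
		note := ""
		if len(cpus) > 1 {
			note = " (Migrated!)"
		}
		fmt.Printf("Task %s ran on %d CPU(s)%s\n", key, len(cpus), note)
	}
}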
My first contribution was around Nov. 2024!
Goals
- add more features (e.g. cpuset)
- test / bugfix
Resources
- Frederic's public repository: https://git.kernel.org/pub/scm/linux/kernel/git/frederic/dynticks-testing.git/
- https://docs.kernel.org/timers/no_hz.html#testing
Progresses
isolcpus and cpusets implemented and merged in master: dynticks-testing.git commit