Project Description
Generate personalized avatar artwork by fine-tuning Stable Diffusion on personal pictures
Goal for this Hackweek
Get a fancy and unique new avatar!
Resources
- https://huggingface.co/docs/diffusers/using-diffusers/sdxl
- https://huggingface.co/docs/diffusers/training/dreambooth
- https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sdxl.md
- https://civitai.com/models/133005/juggernaut-xl?modelVersionId=198530
This project is part of:
Hack Week 23
Comments
[generated avatar images]
about 2 years ago by STorresi
These were generated after bespoke LoRA training with DreamBooth on top of the JuggernautXL model, which in turn is based on SDXL 1.0.
As you can see, hands are still tricky (apparently a known issue of diffusion models), but I didn't try inpainting or img2img fine-tuning, which are supposed to be the go-to ways to fix small issues like that. I must say the overall experience was quite painful due to the hardware requirements of SDXL and the number of memory leaks in PyTorch. A high-end consumer-grade GPU like an NVIDIA RTX 4080 with 16 GB of VRAM often wasn't enough and ran out of memory.
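For reference, a minimal inference sketch with diffusers, assuming a JuggernautXL checkpoint downloaded from Civitai and LoRA weights produced by DreamBooth training (the filenames and the "sks person" instance token are illustrative, not the exact ones used here):
```
import torch
from diffusers import StableDiffusionXLPipeline

# Load the JuggernautXL checkpoint (SDXL-based); the filename is an assumption.
pipe = StableDiffusionXLPipeline.from_single_file(
    "juggernautXL.safetensors", torch_dtype=torch.float16
).to("cuda")

# Apply the LoRA weights produced by DreamBooth training on personal pictures.
pipe.load_lora_weights("./dreambooth-lora-output")

# "sks person" stands in for the rare identifier token used during training.
image = pipe(
    "portrait photo of sks person as a sci-fi hero, detailed face, studio lighting",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("avatar.png")
```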
Similar Projects
MCP Trace Suite by r1chard-lyu
Description
This project plans to create an MCP Trace Suite, a system that consolidates commonly used Linux debugging tools such as bpftrace, perf, and ftrace.
The suite is implemented as an MCP server. This architecture allows an AI agent to diagnose Linux issues and perform targeted system debugging by remotely executing these powerful tools and retrieving their tracing data.
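A minimal sketch of what such a server could look like with the official MCP Python SDK (the tool name, script handling, and timeout are assumptions for illustration, not the project's actual implementation):
```
import subprocess
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("trace-suite")

@mcp.tool()
def run_bpftrace(script: str, seconds: int = 5) -> str:
    """Run a bpftrace script for a bounded time and return its output."""
    result = subprocess.run(
        ["timeout", str(seconds), "bpftrace", "-e", script],
        capture_output=True, text=True,
    )
    return result.stdout or result.stderr

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, so an agent can spawn and call it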
- Repo: https://github.com/r1chard-lyu/systracesuite
- Demo: Slides
Goals
- Build an MCP server that integrates various Linux debugging and tracing tools, including bpftrace, perf, ftrace, strace, and others, with support for adding more tools in the future.
- Test it by intentionally introducing bugs or issues that impact system performance, then letting an AI agent analyze and identify the root cause.
Resources
- Gemini CLI: https://geminicli.com/
- eBPF: https://ebpf.io/
- bpftrace: https://github.com/bpftrace/bpftrace/
- perf: https://perfwiki.github.io/main/
- ftrace: https://github.com/r1chard-lyu/tracium/
Self-Scaling LLM Infrastructure Powered by Rancher by ademicev0
Description
The Problem
Running LLMs can get expensive and complex pretty quickly.
Today there are typically two choices:
- Use cloud APIs like OpenAI or Anthropic. Easy to start with, but costs add up at scale.
- Self-host everything - set up Kubernetes, figure out GPU scheduling, handle scaling, manage model serving... it's a lot of work.
What if there was a middle ground?
What if infrastructure scaled itself instead of making you scale it?
Can we use existing Rancher capabilities like CAPI, autoscaling, and GitOps to make this simpler instead of building everything from scratch?
Project Repository: github.com/alexander-demicev/llmserverless
What This Project Does
A complete, self-scaling LLM infrastructure that:
- Scales to zero when idle (no idle costs)
- Scales up automatically when requests come in
- Adds more nodes when needed, removes them when demand drops
- Runs on any infrastructure - laptop, bare metal, or cloud
Think of it as "serverless for LLMs" - focus on building, and the infrastructure handles itself.
A key feature is hybrid deployment: requests can be routed based on complexity or privacy needs. Simple or low-sensitivity queries can use public APIs (like OpenAI), while complex or private requests are handled in-house on local infrastructure. This flexibility allows balancing cost, privacy, and performance - using the cloud for routine tasks and on-premises resources for sensitive or demanding workloads.
How It Works
A combination of open source tools working together:
Flow:
- Users interact with OpenWebUI (chat interface)
- Requests go to LiteLLM Gateway
- LiteLLM routes requests to (see the routing sketch below):
  - Ollama (Knative) for local model inference (auto-scales pods)
  - or cloud APIs as a fallback
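A minimal sketch of this routing with LiteLLM's Python Router (the model names, Ollama service URL, and fallback pairing are assumptions for illustration):
```
from litellm import Router

router = Router(
    model_list=[
        {   # local inference: Ollama served behind Knative
            "model_name": "local-chat",
            "litellm_params": {
                "model": "ollama/llama3",
                "api_base": "http://ollama.default.svc.cluster.local:11434",
            },
        },
        {   # cloud fallback
            "model_name": "cloud-chat",
            "litellm_params": {"model": "gpt-4o-mini"},
        },
    ],
    # If the local deployment is scaled to zero or errors out, retry on the cloud.
    fallbacks=[{"local-chat": ["cloud-chat"]}],
)

resp = router.completion(
    model="local-chat",
    messages=[{"role": "user", "content": "Summarize this log file."}],
)
print(resp.choices[0].message.content)
```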
SUSE Edge Image Builder MCP by eminguez
Description
Based on my other Hack Week project, SUSE Edge Image Builder's JSON Schema, I would like to also build an MCP server able to generate EIB config files the AI way.
Realistically, I don't think I'll have something consumable by the end of this Hack Week, but at least I would like to start exploring MCPs, the difference between an API and an MCP, etc.
Goals
- Familiarize myself with MCPs
- Unrealistic: Have an MCP that can generate an EIB config file
Result
https://github.com/e-minguez/eib-mcp
I've extensively used Antigravity and its agent mode to code this. It heavily relies on https://hackweek.opensuse.org/25/projects/suse-edge-image-builder-json-schema for the MCP to be built.
I ended up learning a lot about "prompting", JSON schemas in general, some Golang, MCPs, and AI in general :)
Example:
Generate an Edge Image Builder configuration for an ISO image based on slmicro-6.2.iso, targeting x86_64 architecture. The output name should be 'my-edge-image' and it should install to /dev/sda. It should deploy a 3-node Kubernetes cluster with nodes named "node1", "node2" and "node3" as:
* hostname: node1, IP: 1.1.1.1, role: initializer
* hostname: node2, IP: 1.1.1.2, role: agent
* hostname: node3, IP: 1.1.1.3, role: agent
The Kubernetes version should be k3s 1.33.4-k3s1 and it should deploy a cert-manager Helm chart (the latest one available according to https://cert-manager.io/docs/installation/helm/). It should create a user called "suse" with password "suse" and set NTP to "foo.ntp.org". The VIP address for the API should be 1.2.3.4.
Generates:
```
apiVersion: "1.0"
image:
  arch: x86_64
  baseImage: slmicro-6.2.iso
  imageType: iso
  outputImageName: my-edge-image
kubernetes:
  helm:
    charts:
      - name: cert-manager
        repositoryName: jetstack
```
Try AI training with ROCm and LoRA by bmwiedemann
Description
I want to set up a Radeon RX 9060 XT 16 GB at home with ROCm on Slowroll.
Goals
I want to test how fast AI inference can get with the GPU, and whether I can use LoRA to re-train an existing free model for some task.
Resources
- https://rocm.docs.amd.com/en/latest/compatibility/compatibility-matrix.html
- https://build.opensuse.org/project/show/science:GPU:ROCm
- https://src.opensuse.org/ROCm/
- https://www.suse.com/c/lora-fine-tuning-llms-for-text-classification/
Results
Got inference working with llama.cpp:
```
export LLAMACPP_ROCM_ARCH=gfx1200
HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -R)" \
cmake -S . -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=$LLAMACPP_ROCM_ARCH \
    -DCMAKE_BUILD_TYPE=Release -DLLAMA_CURL=ON \
    -Dhipblas_DIR=/usr/lib64/cmake/hipblaslt/ \
    && cmake --build build --config Release -j8

m=models/gpt-oss-20b-mxfp4.gguf
cd $P/llama.cpp && build/bin/llama-server --model $m --threads 8 --port 8005 \
    --host 0.0.0.0 --device ROCm0 --n-gpu-layers 999
```
Without the --device option it faulted. Maybe because my APU also appears there?
I updated/fixed various related packages: https://src.opensuse.org/ROCm/rocm-examples/pulls/1, https://src.opensuse.org/ROCm/hipblaslt/pulls/1 and SR 1320959.
Benchmark
I benchmarked inference with llama.cpp + gpt-oss-20b-mxfp4.gguf and ROCm offloading to a Radeon RX 9060 XT 16GB. I varied the number of layers that went to the GPU:
- 0 layers: 14.49 tokens/s (8 CPU cores)
- 9 layers: 17.79 tokens/s, 34% VRAM
- 15 layers: 22.39 tokens/s, 51% VRAM
- 20 layers: 27.49 tokens/s, 64% VRAM
- 24 layers: 41.18 tokens/s, 74% VRAM
- 25+ layers: 86.63 tokens/s, 75% VRAM (only 200% CPU load)
So there is a significant performance boost if the whole model fits into the GPU's VRAM.
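For anyone wanting to reproduce the numbers, a rough client-side measurement against the running llama-server, assuming its OpenAI-compatible endpoint on port 8005 (the prompt, token count, and timing method are illustrative; the server also reports its own timings):
```
import time
import urllib.request, json

# Ask the server to generate a fixed number of tokens and time the request.
payload = json.dumps({
    "prompt": "Explain how GPU offloading works in llama.cpp.",
    "max_tokens": 256,
}).encode()

req = urllib.request.Request(
    "http://localhost:8005/v1/completions",
    data=payload,
    headers={"Content-Type": "application/json"},
)

start = time.time()
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
elapsed = time.time() - start

tokens = body["usage"]["completion_tokens"]
print(f"{tokens} tokens in {elapsed:.1f}s -> {tokens / elapsed:.2f} tokens/s")
```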
Backporting patches using LLM by jankara
Description
Backporting Linux kernel fixes (either for CVE issues or as part of the general git-fixes workflow) is boring and mostly mechanical work (dealing with changes in context, renamed variables, new helper functions, etc.). The idea of this project is to explore the use of LLMs for backporting Linux kernel commits to SUSE kernels.
Goals
- Create a safe environment that allows the LLM to run and backport patches without exposing the whole filesystem to it (for privacy and security reasons), as sketched below.
- Write a prompt that guides the LLM through the backporting process, and fine-tune it based on experimental results.
- Explore the success rate of LLMs when backporting various patches.
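A minimal sketch of such a sandbox, assuming a container image with Gemini CLI installed and only the kernel tree bind-mounted in (the image name, paths, and prompt are assumptions, not the project's actual setup):
```
import subprocess

# Launch the agent in a container that can only see the kernel tree:
# the rest of the host filesystem stays hidden from the LLM.
subprocess.run(
    [
        "docker", "run", "--rm",
        "-v", "/home/user/kernel-source:/work",   # the tree to backport into
        "-w", "/work",
        "-e", "GEMINI_API_KEY",                   # pass the API key through
        "gemini-cli-backporter:latest",           # hypothetical image name
        "gemini", "-p",
        "Backport upstream commit abc123 into this tree; "
        "resolve context conflicts and adjust renamed variables.",
    ],
    check=True,
)
```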
Resources
- Docker
- Gemini CLI
Repository
The current version of the container, with some instructions for use, is at: https://gitlab.suse.de/jankara/gemini-cli-backporter