Project Description

Generate personalized avatar artwork by fine-tuning Stable Diffusion on personal pictures

Goal for this Hackweek

Get a new fancy and unique avatar!

Resources

  • https://huggingface.co/docs/diffusers/using-diffusers/sdxl
  • https://huggingface.co/docs/diffusers/training/dreambooth
  • https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sdxl.md
  • https://civitai.com/models/133005/juggernaut-xl?modelVersionId=198530

Looking for hackers with the skills:

ai stable-diffusion imaging

This project is part of:

Hack Week 23

Activity

  • over 2 years ago: dfaggioli liked this project.
  • over 2 years ago: mgrossu joined this project.
  • over 2 years ago: jgoldschmidt liked this project.
  • over 2 years ago: STorresi added keyword "ai" to this project.
  • over 2 years ago: STorresi added keyword "stable-diffusion" to this project.
  • over 2 years ago: STorresi added keyword "imaging" to this project.
  • over 2 years ago: STorresi started this project.
  • over 2 years ago: STorresi originated this project.

  • Comments

    • STorresi
      about 2 years ago by STorresi

      ...and here are the results!

      In Paris

      In New York City

    • STorresi
      about 2 years ago by STorresi

      These were generated after bespoke LoRA training using DreamBooth on top of the JuggernautXL model, which in turn is based on SDXL 1.0.

      As you can see, hands are still tricky (a known issue of diffusion models, apparently), but I didn't try inpainting or img2img fine-tuning, which are supposed to be the go-to ways to solve small issues like that. I must say the overall experience was quite painful due to the hardware requirements of SDXL and the number of memory leaks in PyTorch. Even a high-end consumer-grade GPU like an NVIDIA RTX 4080 with 16 GB of VRAM often wasn't enough and ran out of memory.
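
      For reference, generating with the trained LoRA boils down to a few lines of diffusers code. A minimal sketch, with placeholder model and LoRA paths and the usual "sks"-style trigger token (not my exact setup):

      # Load an SDXL-based checkpoint and apply DreamBooth LoRA weights
      import torch
      from diffusers import StableDiffusionXLPipeline

      pipe = StableDiffusionXLPipeline.from_pretrained(
          "stabilityai/stable-diffusion-xl-base-1.0",  # or a JuggernautXL checkpoint
          torch_dtype=torch.float16,
      ).to("cuda")
      pipe.load_lora_weights("path/to/lora", weight_name="pytorch_lora_weights.safetensors")

      image = pipe(
          prompt="portrait photo of sks person, highly detailed",
          num_inference_steps=30,
          guidance_scale=7.0,
      ).images[0]
      image.save("avatar.png")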

    Similar Projects

    The Agentic Rancher Experiment: Do Androids Dream of Electric Cattle? by moio

    Rancher is a beast of a codebase. Let's investigate whether the new 2025 generation of GitHub Autonomous Coding Agents and Copilot Workspaces can actually tame it. (Project mascot image: a GitHub robot trying to lasso a blue bull with a Kubernetes logo tattooed on it.)


    The Plan

    Create a sandbox GitHub Organization, clone in key Rancher repositories, and let the AI loose to see if it can handle real-world enterprise OSS maintenance - or if it just hallucinates new breeds of Kubernetes resources!

    Specifically, throw "Agentic Coders" some typical tasks in a complex, long-lived open-source project, such as:


    • The Grunt Work: generate missing GoDocs, unit tests, and refactorings; rebase PRs.

    • The Complex Stuff: fix actual (historical) bugs and feature requests, to see if the agents can traverse the complexity without (too much) human hand-holding.

    • Hunting Down Gaps: find areas lacking docs, areas for improvement in the code, dependency bumps, and so on.


    If time allows, also experiment with Model Context Protocol (MCP) to give agents context on our specific build pipelines and CI/CD logs.
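
    To make the MCP experiment concrete: a context server can be tiny. Below is a hypothetical Python sketch using the official MCP Python SDK; the tool name and log location are invented placeholders, not our real pipeline layout.

    # Hypothetical MCP server exposing one tool that returns the tail of a CI log,
    # so a coding agent can pull build context on demand.
    from pathlib import Path
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("rancher-ci-context")

    @mcp.tool()
    def tail_ci_log(pipeline: str, lines: int = 100) -> str:
        """Return the last `lines` lines of the most recent log for `pipeline`."""
        log = Path("/var/log/ci") / f"{pipeline}.log"  # placeholder path layout
        return "\n".join(log.read_text().splitlines()[-lines:])

    if __name__ == "__main__":
        mcp.run()  # serves over stdio so an MCP-capable agent can call the tool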

    Why?

    We know AI can write "Hello World" and even moderately complex programs from a green field. But can it rebase a 3-month-old PR with conflicts in rancher/rancher? I want to find the breaking point of current AI agents, to determine if and how they can help us reduce our technical debt and work faster and better, and at the same time to learn about their pitfalls and shortcomings.

    The CONCLUSION!!!

    A State of the Union document was compiled to summarize lessons learned this week. For more gory details, just read the diary below!


    Enable more features in mcp-server-uyuni by j_renner

    Description

    I would like to contribute to mcp-server-uyuni (the MCP server for Uyuni / Multi-Linux Manager) by exposing additional features as tools. There are lots of relevant features to be found throughout the API, for example:

    • System operations and infos
    • System groups
    • Maintenance windows
    • Ansible
    • Reporting
    • ...

    At the end of the week I managed to enable basic system group operations (a rough sketch of the underlying API calls follows the list):

    • List all system groups visible to the user
    • Create new system groups
    • List systems assigned to a group
    • Add and remove systems from groups
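
    Under the hood these tools wrap the Uyuni API; a rough sketch of the equivalent raw calls against the XML-RPC endpoint (host and credentials are placeholders, and the MCP server itself may use a different transport):

    # List system groups directly via Uyuni's XML-RPC API
    from xmlrpc.client import ServerProxy

    client = ServerProxy("https://uyuni.example.com/rpc/api")
    key = client.auth.login("admin", "password")         # returns a session key
    for group in client.systemgroup.listAllGroups(key):  # groups visible to this user
        print(group["name"], group["system_count"])
    client.auth.logout(key)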

    Goals

    • Set up a test environment locally with the MCP server and client, plus a recent MLM server [DONE]
    • Identify features and use cases offering a benefit with limited effort required for enablement [DONE]
    • Create a PR to the repo [DONE]

    Resources


    SUSE Observability MCP server by drutigliano

    Description

    The idea is to implement the SUSE Observability Model Context Protocol (MCP) Server as a specialized, middle-tier API designed to translate the complex, high-cardinality observability data from StackState (topology, metrics, and events) into highly structured, contextually rich, and LLM-ready snippets.

    This MCP Server abstracts the StackState APIs. Its primary function is to serve as a Tool/Function Calling target for AI agents. When an AI receives an alert or a user query (e.g., "What caused the outage?"), the AI calls an MCP Server endpoint. The server then fetches the relevant operational facts, summarizes them, normalizes technical identifiers (like URNs and raw metric names) into natural language concepts, and returns a concise JSON or YAML payload. This payload is then injected directly into the LLM's prompt, ensuring the final diagnosis or action is grounded in real-time, accurate SUSE Observability data, effectively minimizing hallucinations.

    Goals

    • Grounding AI Responses: Ensure that all AI diagnoses, root cause analyses, and action recommendations are strictly based on verifiable, real-time data retrieved from the SUSE Observability StackState platform.
    • Simplifying Data Access: Abstract the complexity of StackState's native APIs (e.g., Time Travel, 4T Data Model) into simple, semantic functions that can be easily invoked by LLM tool-calling mechanisms.
    • Data Normalization: Convert complex, technical identifiers (like component URNs, raw metric names, and proprietary health states) into standardized, natural language terms that an LLM can easily reason over.
    • Enabling Automated Remediation: Define clear, action-oriented MCP endpoints (e.g., execute_runbook) that allow the AI agent to initiate automated operational workflows (e.g., restarts, scaling) after a diagnosis, closing the loop on observability.

    Hackweek Step

    • Create a functional MCP endpoint exposing one (or more) tool(s) to answer queries like "What is the health of service X?" by fetching, normalizing, and returning live StackState data in an LLM-ready format.

    Scope

    • Implement a read-only MCP server that can (see the sketch after this list):
      • Connect to a live SUSE Observability instance and authenticate (with an API token)
      • Use tools to fetch data for a specific component URN (e.g., current health state, metrics, possibly topology neighbors, ...)
      • Normalize response fields (e.g., URN to "Service Name", health state DEVIATING to "Unhealthy", raw metric names to natural language terms)
      • Return the data as a structured JSON payload compliant with the MCP specification
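
    To make the normalization step concrete, here is a minimal sketch (Python for brevity, although the deliverable below is a Golang server; the raw field names are assumptions, not the StackState schema):

    # Hypothetical normalization: map URNs and health states to LLM-friendly terms
    HEALTH_MAP = {"CLEAR": "Healthy", "DEVIATING": "Unhealthy", "CRITICAL": "Critical"}

    def normalize_component(raw: dict) -> dict:
        """Turn a raw StackState component payload into an LLM-ready snippet."""
        urn = raw["identifier"]  # e.g. "urn:...:service/checkout" (assumed field)
        return {
            "service": urn.rsplit("/", 1)[-1],  # URN -> plain "Service Name"
            "health": HEALTH_MAP.get(raw["healthState"], raw["healthState"]),
            "metrics": {m["name"]: m["value"] for m in raw.get("metrics", [])},
        }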

    Deliverables

    • MCP Server v0.1: a running Golang MCP server with at least one tool.
    • A README.md and a test script (e.g., curl commands or a simple notebook) showing how an AI agent would call the endpoint and the resulting JSON payload.

    Outcome

    A functional and testable API endpoint that proves the core concept: translating complex StackState data into a simple, LLM-ready format. This provides the foundation for developing AI-driven diagnostics and automated remediation.

    Resources

    • https://www.honeycomb.io/blog/its-the-end-of-observability-as-we-know-it-and-i-feel-fine
    • https://www.datadoghq.com/blog/datadog-remote-mcp-server
    • https://modelcontextprotocol.io/specification/2025-06-18/index
    • https://modelcontextprotocol.io/docs/develop/build-server

    Basic implementation

    • https://github.com/drutigliano19/suse-observability-mcp-server

    Results

    Successfully developed and delivered a fully functional SUSE Observability MCP Server that bridges language models with SUSE Observability's operational data. This project demonstrates how AI agents can perform intelligent troubleshooting and root cause analysis using structured access to real-time infrastructure data.

    Example execution


    Bugzilla goes AI - Phase 1 by nwalter

    Description

    This project, Bugzilla goes AI, aims to boost developer productivity by creating an autonomous AI bug agent during Hackweek. The primary goal is to reduce the time employees spend triaging bugs by integrating Ollama to summarize issues, recommend next steps, and push focused daily reports to a Web Interface.
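
    The core Ollama call is simple to sketch; a hedged example (model name and prompt wording are placeholders, not the project's actual implementation):

    # Summarize a bug report with a local Ollama instance via its REST API
    import requests

    def summarize_bug(bug_text: str) -> str:
        r = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": "llama3",  # placeholder; any locally pulled model works
                "prompt": f"Summarize this bug report and recommend a next step:\n{bug_text}",
                "stream": False,
            },
            timeout=120,
        )
        return r.json()["response"]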

    Goals

    To reduce employee time spent on Bugzilla by implementing an AI tool that triages and summarizes bug reports, providing actionable recommendations to the team via Web Interface.

    Project Charter

    Bugzilla goes AI Phase 1

    Project Achievements during Hackweek

    What we achieved during Hackweek is documented in the Project Achievements file.


    Try AI training with ROCm and LoRA by bmwiedemann

    Description

    I want to set up a Radeon RX 9060 XT 16 GB at home with ROCm on Slowroll.

    Goals

    I want to test how fast AI inference can get with the GPU and if I can use LoRA to re-train an existing free model for some task.
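
    For the LoRA part, a common route is Hugging Face PEFT; a minimal sketch with a placeholder model and hyperparameters (not a tested recipe for this GPU):

    # Attach LoRA adapters to a small free model; only the adapters get trained
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-410m")
    config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                        target_modules=["query_key_value"])  # GPT-NeoX attention
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # only a fraction of a percent is trainable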

    Resources

    • https://rocm.docs.amd.com/en/latest/compatibility/compatibility-matrix.html
    • https://build.opensuse.org/project/show/science:GPU:ROCm
    • https://src.opensuse.org/ROCm/
    • https://www.suse.com/c/lora-fine-tuning-llms-for-text-classification/

    Results

    Got inference working with llama.cpp:

    # Build llama.cpp with ROCm/HIP support for the RX 9060 XT (RDNA4, gfx1200)
    export LLAMACPP_ROCM_ARCH=gfx1200
    HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -R)" \
    cmake -S . -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=$LLAMACPP_ROCM_ARCH \
    -DCMAKE_BUILD_TYPE=Release -DLLAMA_CURL=ON \
    -Dhipblas_DIR=/usr/lib64/cmake/hipblaslt/ \
    && cmake --build build --config Release -j8

    # Serve a quantized model, offloading all layers to the first ROCm device
    # ($P is a local source directory)
    m=models/gpt-oss-20b-mxfp4.gguf
    cd $P/llama.cpp && build/bin/llama-server --model $m --threads 8 --port 8005 --host 0.0.0.0 --device ROCm0 --n-gpu-layers 999

    Without the --device option it faulted; maybe because my APU also shows up as a ROCm device?
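
    Once the server is up, it can be exercised with any OpenAI-compatible client; a quick Python check against the port used above (the model name in the request is a placeholder, the single loaded model answers regardless):

    # Query llama-server's OpenAI-compatible chat endpoint
    import requests

    r = requests.post(
        "http://localhost:8005/v1/chat/completions",
        json={
            "model": "gpt-oss-20b",  # placeholder name
            "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        },
        timeout=120,
    )
    print(r.json()["choices"][0]["message"]["content"])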

    I updated/fixed various related packages:

    • https://src.opensuse.org/ROCm/rocm-examples/pulls/1
    • https://src.opensuse.org/ROCm/hipblaslt/pulls/1
    • SR 1320959

    Benchmark

    I benchmarked inference with llama.cpp + gpt-oss-20b-mxfp4.gguf and ROCm offloading to a Radeon RX 9060 XT 16GB. I varied the number of layers that went to the GPU:

    • 0 layers: 14.49 tokens/s (8 CPU cores)
    • 9 layers: 17.79 tokens/s, 34% VRAM
    • 15 layers: 22.39 tokens/s, 51% VRAM
    • 20 layers: 27.49 tokens/s, 64% VRAM
    • 24 layers: 41.18 tokens/s, 74% VRAM
    • 25+ layers: 86.63 tokens/s, 75% VRAM (only 200% CPU load)

    So there is a significant performance boost if the whole model fits into the GPU's VRAM.