Project Description

From casual investigation, recent open source image generation AI systems tend to be rather invasive of the host system when installed directly on it. A container is usually the better option, but it needs special configuration to access the required hardware. I'd like to run something in a container utilizing the RDNA2 Radeon graphics card I have in my desktop computer.
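
For reference, ROCm containers are typically started with the host's /dev/kfd and /dev/dri devices passed through (for example docker run --device=/dev/kfd --device=/dev/dri --group-add video ...). Below is a minimal sketch, assuming the container ships a ROCm build of PyTorch, for verifying that the card is actually visible from inside the container:

    # Minimal check from inside the container, assuming a ROCm build of PyTorch.
    # ROCm exposes the Radeon card through PyTorch's "cuda" device API.
    import torch

    if torch.cuda.is_available():
        print("GPU visible:", torch.cuda.get_device_name(0))
        print("VRAM (GB):", torch.cuda.get_device_properties(0).total_memory / 1e9)
    else:
        print("No GPU - check that /dev/kfd and /dev/dri are passed to the container")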

The exact container type would be evaluated, and of course existing solutions will be sought out first.

Goal for this Hackweek

The goal for this Hack Week is to have a suitably optimized container that can be created from scratch with a single command and can generate SUSE-related images on the AMD graphics card with its 8 GB of VRAM (which is apparently a bit limiting).
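
As a sketch of what generating such images could look like in practice, the snippet below uses half-precision weights and attention slicing to stay within 8 GB of VRAM. The diffusers-based approach, the model id and the prompt are only illustrative assumptions, not necessarily what the final container uses:

    # Illustrative only: a diffusers pipeline tuned for roughly 8 GB of VRAM.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",   # example model id
        torch_dtype=torch.float16,          # half precision roughly halves memory use
    )
    pipe = pipe.to("cuda")                  # ROCm PyTorch exposes the Radeon card as "cuda"
    pipe.enable_attention_slicing()         # smaller peak memory at a small speed cost

    image = pipe("the SUSE chameleon mascot as a watercolor painting").images[0]
    image.save("suse-chameleon.png")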

Resources

https://github.com/tjyrinki/sd-rocm

Results

See the GitHub link above, the images below, and the blog post at https://timojyrinki.gitlab.io/hugo/post/2023-02-02-stablediffusion-docker/

Looking for hackers with the skills:

gpu containers ai amd radeon rdna2

This project is part of:

Hack Week 22

Activity

  • over 2 years ago: punkioudi liked this project.
  • over 2 years ago: tjyrinki_suse started this project.
  • over 2 years ago: pdostal liked this project.
  • over 2 years ago: ilausuch liked this project.
  • over 2 years ago: dancermak liked this project.
  • almost 3 years ago: tschmitz liked this project.
  • almost 3 years ago: tjyrinki_suse added keyword "rdna2" to this project.
  • almost 3 years ago: tjyrinki_suse added keyword "gpu" to this project.
  • almost 3 years ago: tjyrinki_suse added keyword "containers" to this project.
  • almost 3 years ago: tjyrinki_suse added keyword "ai" to this project.
  • almost 3 years ago: tjyrinki_suse added keyword "amd" to this project.
  • almost 3 years ago: tjyrinki_suse added keyword "radeon" to this project.
  • almost 3 years ago: tjyrinki_suse originated this project.

  • Comments

    • tjyrinki_suse, over 2 years ago:

      Blog post at https://timojyrinki.gitlab.io/hugo/post/2023-02-02-stablediffusion-docker/ – read more there!

      See the git repo for what has been done as part of this project.

      example image

    • tjyrinki_suse, over 2 years ago:

      example image 2

    Similar Projects

    Technical talks at universities by agamez

    Description

    This project aims to empower the next generation of tech professionals by offering hands-on workshops on containerization and Kubernetes, with a strong focus on open-source technologies. By providing practical experience with these cutting-edge tools and fostering a deep understanding of open-source principles, we aim to bridge the gap between academia and industry.

    For now, the scope is limited to Spanish universities, since we already have the contacts and have started some conversations.

    Goals

    • Technical Skill Development: equip students with the fundamental knowledge and skills to build, deploy, and manage containerized applications using open-source tools like Kubernetes.
    • Open-Source Mindset: foster a passion for open-source software, encouraging students to contribute to open-source projects and collaborate with the global developer community.
    • Career Readiness: prepare students for industry-relevant roles by exposing them to real-world use cases, best practices, and open-source in companies.

    Resources

    • Instructors: experienced open-source professionals with deep knowledge of containerization and Kubernetes.
    • SUSE Expertise: leverage SUSE's expertise in open-source technologies to provide insights into industry trends and best practices.


    SUSE Observability MCP server by drutigliano

    Description

    The idea is to implement the SUSE Observability Model Context Protocol (MCP) Server as a specialized, middle-tier API designed to translate the complex, high-cardinality observability data from StackState (topology, metrics, and events) into highly structured, contextually rich, and LLM-ready snippets.

    This MCP Server abstracts the StackState APIs. Its primary function is to serve as a Tool/Function Calling target for AI agents. When an AI receives an alert or a user query (e.g., "What caused the outage?"), the AI calls an MCP Server endpoint. The server then fetches the relevant operational facts, summarizes them, normalizes technical identifiers (like URNs and raw metric names) into natural language concepts, and returns a concise JSON or YAML payload. This payload is then injected directly into the LLM's prompt, ensuring the final diagnosis or action is grounded in real-time, accurate SUSE Observability data, effectively minimizing hallucinations.
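
    A rough sketch of the kind of translation described above (the field names, the health-state mapping and the example record are illustrative assumptions, not the real StackState or MCP schema):

        # Illustrative only: turn a (hypothetical) raw StackState component record
        # into a compact, LLM-ready snippet for the agent.
        import json

        def summarize_component_health(raw):
            health_map = {"CRITICAL": "failing", "DEVIATING": "degraded", "CLEAR": "healthy"}
            return {
                "component": raw.get("name", "unknown"),
                "health": health_map.get(raw.get("healthState"), "unknown"),
                "recent_events": [e["message"] for e in raw.get("events", [])][:5],
            }

        raw = {"name": "checkout-service", "healthState": "DEVIATING",
               "events": [{"message": "Pod restarted 3 times in 10 minutes"}]}
        print(json.dumps(summarize_component_health(raw), indent=2))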

    Goals

    • Grounding AI Responses: Ensure that all AI diagnoses, root cause analyses, and action recommendations are strictly based on verifiable, real-time data retrieved from the SUSE Observability StackState platform.
    • Simplifying Data Access: Abstract the complexity of StackState's native APIs (e.g., Time Travel, 4T Data Model) into simple, semantic functions that can be easily invoked by LLM tool-calling mechanisms.
    • Data Normalization: Convert complex, technical identifiers (like component URNs, raw metric names, and proprietary health states) into standardized, natural language terms that an LLM can easily reason over.
    • Enabling Automated Remediation: Define clear, action-oriented MCP endpoints (e.g., execute_runbook) that allow the AI agent to initiate automated operational workflows (e.g., restarts, scaling) after a diagnosis, closing the loop on observability.

    Resources

    • https://www.honeycomb.io/blog/its-the-end-of-observability-as-we-know-it-and-i-feel-fine
    • https://www.datadoghq.com/blog/datadog-remote-mcp-server
    • https://modelcontextprotocol.io/specification/2025-06-18/index

     Basic implementation

    • https://github.com/drutigliano19/suse-observability-mcp-server


    Flaky Tests AI Finder for Uyuni and MLM Test Suites by oscar-barrios

    Description

    Our current Grafana dashboards provide a great overview of test suite health, including a panel for "Top failed tests." However, identifying which of these failures are due to legitimate bugs versus intermittent "flaky tests" is a manual, time-consuming process. These flaky tests erode trust in our test suites and slow down development.

    This project aims to build a simple but powerful Python script that automates flaky test detection. The script will directly query our Prometheus instance for the historical data of each failed test, using the jenkins_build_test_case_failure_age metric. It will then format this data and send it to the Gemini API with a carefully crafted prompt, asking it to identify which tests show a flaky pattern.

    The final output will be a clean JSON list of the most probable flaky tests, which can then be used to populate a new "Top Flaky Tests" panel in our existing Grafana test suite dashboard.
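
    A compact sketch of that pipeline (the Prometheus URL, metric label handling, Gemini model name and prompt wording are assumptions for illustration):

        # Illustrative only: fetch failure history, ask Gemini for flaky candidates,
        # write the result as JSON for Grafana.
        import json
        import requests
        import google.generativeai as genai

        PROM_URL = "https://prometheus.example.com"  # placeholder endpoint

        resp = requests.get(f"{PROM_URL}/api/v1/query",
                            params={"query": "jenkins_build_test_case_failure_age"})
        series = resp.json()["data"]["result"]
        history = [{"labels": s["metric"], "value": s["value"][1]} for s in series]

        genai.configure(api_key="...")  # real key would come from the environment
        model = genai.GenerativeModel("gemini-1.5-flash")  # example model name
        prompt = ("From this test failure history, list the tests that fail "
                  "intermittently rather than consistently, as a JSON array:\n"
                  + json.dumps(history))
        answer = model.generate_content(prompt)

        with open("flaky_tests.json", "w") as f:
            f.write(answer.text)  # feeds the new Grafana panel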

    Goals

    By the end of Hack Week, we aim to have a single, working Python script that:

    1. Connects to Prometheus and executes a query to fetch detailed test failure history.
    2. Processes the raw data into a format suitable for the Gemini API.
    3. Successfully calls the Gemini API with the data and a clear prompt.
    4. Parses the AI's response to extract a simple list of flaky tests.
    5. Saves the list to a JSON file that can be displayed in Grafana.
    6. Feeds a new panel in our dashboard listing the flaky tests.

    Resources