Description

To build a tool or system that takes a raw bug report (including error messages and context) and uses a large language model (LLM) to generate a series of structured, Socratic-style or systemic questions designed to guide the investigation and development toward the root cause, rather than just providing a direct, potentially incorrect fix.

Goals

Set up a Python environment

  1. Set up the environment and get a Gemini API key.
  2. Collect 5-10 realistic bug reports (from open-source projects, personal projects, or public forums like Stack Overflow); include the error message and the initial context.
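
A minimal setup sketch, assuming the google-generativeai Python package and a GEMINI_API_KEY environment variable (both are illustrative choices, not fixed by this project):

  # pip install google-generativeai  (assumed client library; any Gemini SDK would do)
  import os

  import google.generativeai as genai

  # Read the key from the environment rather than hard-coding it in the script.
  genai.configure(api_key=os.environ["GEMINI_API_KEY"])

  # Quick smoke test that the environment and key work.
  model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption
  print(model.generate_content("Say hello in one word.").text)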

Build the Dialogue Loop

  1. Write a basic Python script using the Gemini API.
  2. Implement a simple conversational loop: User Input (Bug) -> AI Output (Question) -> User Input (Answer to AI's question) -> AI Output (Next Question).
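
A rough sketch of that loop, again assuming the google-generativeai package; the system prompt wording and model name are placeholders for illustration:

  import os

  import google.generativeai as genai

  genai.configure(api_key=os.environ["GEMINI_API_KEY"])

  SYSTEM_PROMPT = (
      "You are a Socratic debugging assistant. Never propose a fix directly. "
      "Ask exactly one focused question per turn that moves the developer from "
      "symptom to context to assumptions to root cause."
  )

  model = genai.GenerativeModel("gemini-1.5-flash", system_instruction=SYSTEM_PROMPT)
  chat = model.start_chat()

  bug_report = input("Paste the bug report (error message + context): ")
  reply = chat.send_message(bug_report)

  while True:
      print(f"\nAI question: {reply.text}")
      answer = input("Your answer (or 'quit' to stop): ")
      if answer.strip().lower() == "quit":
          break
      reply = chat.send_message(answer)

The chat object keeps the running history, so each new question is grounded in the user's previous answers.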

Socratic Strategy Implementation

  1. Refine the logic to ensure the questions follow a Socratic path (e.g., from symptom -> context -> assumptions -> root cause).
  2. Implement Function Calling (an advanced feature of the Gemini API) to suggest specific actions to the user, like "Run a ping test" or "Check the database logs."
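
A hedged sketch of how function calling could surface suggested actions. The two tool functions below (run_ping_test, check_database_logs) are hypothetical placeholders, and automatic function calling is only one of the ways the Gemini SDK exposes this feature:

  import os

  import google.generativeai as genai

  genai.configure(api_key=os.environ["GEMINI_API_KEY"])


  def run_ping_test(host: str) -> str:
      """Hypothetical action: report whether a host answers a ping."""
      # A real tool would shell out to `ping`; here we only simulate the result.
      return f"ping {host}: 0% packet loss (simulated)"


  def check_database_logs(service: str) -> str:
      """Hypothetical action: return recent error lines from a service's DB log."""
      return f"no recent errors found in the {service} database logs (simulated)"


  # Expose the functions as tools so the model can decide when an action is useful.
  model = genai.GenerativeModel(
      "gemini-1.5-flash",
      tools=[run_ping_test, check_database_logs],
  )
  chat = model.start_chat(enable_automatic_function_calling=True)

  reply = chat.send_message(
      "The web app intermittently returns 500 errors since the last deploy. "
      "What should I check first?"
  )
  print(reply.text)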

Resources

What are Systemic Questions?

Systemic questions explore the relationships, patterns, and interactions within a system rather than focusing on isolated elements.
In IT, they help uncover hidden dependencies, feedback loops, assumptions, and side-effects during debugging or architecture analysis.

Looking for hackers with the skills:

ai gemini bugzilla

This project is part of:

Hack Week 25

Activity

  • 16 days ago: rtsvetkov added keyword "bugzilla" to this project.
  • 16 days ago: t.huynh liked this project.
  • 17 days ago: ybonatakis liked this project.
  • 20 days ago: doreilly liked this project.
  • 21 days ago: ancorgs liked this project.
  • 21 days ago: rtsvetkov added keyword "ai" to this project.
  • 21 days ago: rtsvetkov added keyword "gemini" to this project.
  • 21 days ago: rtsvetkov started this project.
  • 21 days ago: rtsvetkov liked this project.
  • 21 days ago: rtsvetkov originated this project.

  • Comments

    • rtsvetkov
      1 day ago by rtsvetkov

      === 1 Circular Questions
      Focus on feedback loops and mutual influence. Example debugging prompts:
      - "What components influence this module, and what does this module influence in return?"
      - "If Service A slows down, how does Service B respond?"

      === 2 Difference Questions
      Explore variations, exceptions, or changes over time. Example debugging prompts:
      - "When does the bug not occur? What is different then?"
      - "What changed in the system right before the issue appeared?"

      === 3 Scaling Questions
      Quantify experience, severity, or uncertainty. Example debugging prompts:
      - "On a scale from 1–10, how reproducible is this issue?"
      - "How much worse does the system behave under peak load versus normal load?"

      === 4 Hypothetical (‘If…Then’) Questions
      Explore consequences, alternative actions, or simulated scenarios. Example debugging prompts:
      - "If we disable caching, what do we expect to happen?"
      - "If the input doubles, which component fails first?"
      - "If we had unlimited time to prevent this exact bug from ever happening again, where in our development cycle (e.g., design, code review, testing) would we invest the most effort?"
      - "If we had to ship the next feature without fixing this bug, what workarounds or manual steps would we need to put in place?"

      === 5 Resource / Strength Questions
      Identify what works well and what can be reused. Example debugging prompts:
      - "Which environments run without this problem and why?"
      - "What parts of the system are stable and can guide the fix?"

      === 6 Perspective-Shifting Questions
      Examine the situation through different roles or components. Example debugging prompts:
      - "If you were the database, what would you ‘say’ is overwhelming you?"
      - "How would a network engineer interpret these logs differently from a backend developer?"
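
      One way to wire these categories into the tool is a small table of seed prompts that the script can rotate through; the snippet below is only an illustrative sketch (names and wording are made up for this example), not project code:

        # Illustrative mapping of systemic question categories to seed prompts.
        SYSTEMIC_QUESTION_TEMPLATES = {
            "circular": "What components influence {component}, and what does it influence in return?",
            "difference": "When does the bug NOT occur, and what is different then?",
            "scaling": "On a scale from 1-10, how reproducible is this issue?",
            "hypothetical": "If we disable {subsystem}, what do we expect to happen?",
            "resource": "Which environments run without this problem, and why?",
            "perspective": "How would a network engineer read these logs differently from a backend developer?",
        }

        def next_question(category: str, **context: str) -> str:
            """Pick the seed question for a category and fill in any known context."""
            return SYSTEMIC_QUESTION_TEMPLATES[category].format(**context)

        # Example: next_question("circular", component="the auth service")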

      == 2. Example Debugging Process Using Systemic Questions

      === Step 1: Clarify the Pattern - "When exactly does the API fail, and when does it succeed?"

      === Step 2: Identify Boundaries - "Which systems are definitely not involved?"

      === Step 3: Explore Changes - "What recent deployments or config changes might correlate?"

      === Step 4: Map Influences - "How does the latency of Service X influence the behaviour of Service Y?"

      === Step 5: Hypothesis Testing - "If we simulate traffic spikes, does the behaviour match production incidents?"

      === Step 6: Leverage What Works - "Why does staging not show the issue? What can this teach us about production?"
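
      If the tool wants to walk through these six steps mechanically, one simple option (purely a sketch, not project code) is to keep a stage pointer and prepend the current stage's intent to each model turn:

        # Hypothetical stage list mirroring the six steps above.
        DEBUG_STAGES = [
            "Clarify the pattern: when exactly does it fail, and when does it succeed?",
            "Identify boundaries: which systems are definitely not involved?",
            "Explore changes: what recent deployments or config changes correlate?",
            "Map influences: how do the involved services affect each other?",
            "Test hypotheses: what experiment would confirm or rule out the suspect?",
            "Leverage what works: what do the healthy environments teach us?",
        ]

        def stage_prompt(stage_index: int, user_answer: str) -> str:
            """Build the next model prompt from the current stage and the user's last answer."""
            stage = DEBUG_STAGES[min(stage_index, len(DEBUG_STAGES) - 1)]
            return (
                f"Current debugging stage: {stage}\n"
                f"User's last answer: {user_answer}\n"
                "Ask one follow-up question for this stage."
            )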

      == 3. Key Benefits for IT and Systems Theory
      * Makes hidden dependencies visible
      * Avoids tunnel vision in debugging
      * Encourages team alignment through shared system understanding
      * Supports root-cause analysis rather than symptom chasing

    • rtsvetkov
      about 14 hours ago by rtsvetkov

      Example manual research session: https://docs.google.com/document/d/1kgM0lBVavBnN0VeP1OgssWVjwIdE2hGf3jHmi2rxc/edit?usp=sharing

      as well as the additional interaction on https://bugzilla.suse.com/show_bug.cgi?id=1245907

      and the session: https://gemini.google.com/app/dd379133b4af2ec8?utmsource=applauncher&utmmedium=owned&utmcampaign=base_all

    • rtsvetkov
      about 13 hours ago by rtsvetkov

      Made a Gem: https://gemini.google.com/gem/1FVNTDtBRR8GD8fd3H01LGHxbhzYtz3vb?usp=sharing

    Similar Projects

    Explore LLM evaluation metrics by thbertoldi

    Description

    Learn the best practices for evaluating LLM performance with an open-source framework such as DeepEval.
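
    As a flavour of what such an evaluation looks like, a minimal sketch following DeepEval's documented test-case/metric pattern (the prompt, answer, and threshold below are made-up examples, and the relevancy metric needs a judge model configured separately):

      from deepeval import evaluate
      from deepeval.metrics import AnswerRelevancyMetric
      from deepeval.test_case import LLMTestCase

      # One test case: the model's answer to a prompt, scored for relevancy.
      test_case = LLMTestCase(
          input="How do I rotate the API key?",
          actual_output="Go to Settings > Keys and click 'Regenerate'.",
      )

      metric = AnswerRelevancyMetric(threshold=0.7)  # threshold chosen arbitrarily
      evaluate(test_cases=[test_case], metrics=[metric])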

    Goals

    Curate the knowledge learned during practice and present it to colleagues.

    -> Maybe publish a blog post on SUSE's blog?

    Resources

    https://deepeval.com

    https://docs.pactflow.io/docs/bi-directional-contract-testing


    Song Search with CLAP by gcolangiuli

    Description

    Contrastive Language-Audio Pretraining (CLAP) is an open-source library that enables the training of a neural network on both audio and text descriptions, making it possible to search for audio using a text input. Several pre-trained models for song search are already available on Hugging Face.
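
    For illustration only, a rough sketch of matching a text query against an audio clip with one of those pretrained checkpoints (the model name laion/clap-htsat-unfused and the dummy audio are assumptions for this example):

      import numpy as np
      import torch
      from transformers import ClapModel, ClapProcessor

      model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
      processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

      # Dummy 10-second mono clip at CLAP's expected 48 kHz sample rate.
      audio = np.zeros(48_000 * 10, dtype=np.float32)

      text_inputs = processor(text=["a slow acoustic ballad"], return_tensors="pt", padding=True)
      audio_inputs = processor(audios=[audio], sampling_rate=48_000, return_tensors="pt")

      with torch.no_grad():
          text_emb = model.get_text_features(**text_inputs)
          audio_emb = model.get_audio_features(**audio_inputs)

      # Cosine similarity: a higher score means the clip matches the query better.
      print(torch.nn.functional.cosine_similarity(text_emb, audio_emb).item())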

    SUSE Hackweek AI Song Search

    Goals

    Evaluate how CLAP can be used for song searching and determine which types of queries yield the best results by developing a Minimum Viable Product (MVP) in Python. Based on the results of this MVP, future steps could include:

    • Music Tagging;
    • Free text search;
    • Integration with an LLM (for example, with MCP or the OpenAI API) for music suggestions based on your own library.

    The code for this project will be entirely written using AI to better explore and demonstrate AI capabilities.

    Result

    References


    SUSE Edge Image Builder MCP by eminguez

    Description

    Based on my other Hackweek project, SUSE Edge Image Builder's Json Schema, I would like to also build an MCP to be able to generate EIB config files the AI way.

    Realistically, I don't think I'll be able to have something consumable at the end of this Hackweek, but at least I would like to start exploring MCPs, the difference between an API and an MCP, etc.

    Goals

    • Familiarize myself with MCPs
    • Unrealistic: Have an MCP that can generate an EIB config file

    Resources


    Extended private brain - RAG my own scripts and data into offline LLM AI by tjyrinki_suse

    Description

    For purely studying purposes, I'd like to find out if I could teach an LLM some of my own accumulated knowledge, to use it as a sort of extended brain.

    I might use qwen3-coder or something similar as a starting point.

    Everything would be done 100% offline without network available to the container, since I prefer to see when network is needed, and make it so it's never needed (other than initial downloads).

    Goals

    1. Learn something about RAG, LLM, AI.
    2. Find out if everything works offline as intended.
    3. As an end result, have a new way to access my own existing know-how, so that I can query the wisdom in my scripts and data.
    4. Be flexible to pivot in any direction, as long as there are new things learned.

    Resources

    To be found on the fly.


    Background Coding Agent by mmanno

    Description

    I had only bad experiences with AI one-shots. However, monitoring agent work closely and interfering often did result in productivity gains.

    Now, other companies are using agents in pipelines. That makes sense to me: just like CI, we want to offload work to pipelines. Our engineering teams are consistently slowed down by "toil": low-impact, repetitive maintenance tasks. A simple linter rule change, a dependency bump, rebasing patch sets on top of newer releases, or an API deprecation requires dozens of manual PRs, draining time from feature development.

    So far we have been writing deterministic, script-based automation for these tasks. And it turns out to be a common trap. These scripts are brittle, complex, and become a massive maintenance burden themselves.

    Can we make prompts and workflows smart enough to succeed at background coding?

    Goals

    We will build a platform that allows engineers to execute complex code transformations using prompts.

    By automating this toil, we accelerate large-scale migrations and allow teams to focus on high-value work.

    Our platform will consist of three main components:

    • "Change" Definition: Engineers will define a transformation as a simple, declarative manifest:
      • The target repositories.
      • A wrapper to run a "coding agent", e.g., "gemini-cli".
      • The task as a natural language prompt.
    • "Change" Management Service: A central service that orchestrates the jobs. It will receive Change definitions and be responsible for the job lifecycle.
    • Execution Runners: We could use existing sandboxed CI runners (like GitHub/GitLab runners) to execute each job or spawn a container.

    MVP

    • Define the Change manifest format.
    • Build the core Management Service that can accept and queue a Change.
    • Connect management service and runners, dynamically dispatch jobs to runners.
    • Create a basic runner script that can run a hard-coded prompt against a test repo and open a PR.

    Stretch Goals:

    • Multi-layered approach, Workflow Agents trigger Coding Agents:
      1. Workflow Agent: Gather information about the task interactively from the user.
      2. Coding Agent: Once the interactive agent has refined the task into a clear prompt, it hands this prompt off to the "coding agent." This background agent is responsible for executing the task and producing the actual pull request.
    • Use MCP:
      1. Workflow Agent gathers context information from Slack, Github, etc.
      2. Workflow Agent triggers a Coding Agent.
    • Create a "Standard Task" library with reliable prompts.
      1. Rebasing rancher-monitoring to a new version of kube-prom-stack
      2. Update charts to use new images
      3. Apply changes to comply with a new linter
      4. Bump complex Go dependencies, like k8s modules
      5. Backport pull requests to other branches
    • Add “review agents” that review the generated PR.

    See also


    Try out Neovim Plugins supporting AI Providers by enavarro_suse

    Description

    Experiment with several Neovim plugins that integrate AI model providers such as Gemini and Ollama.

    Goals

    Evaluate how these plugins enhance the development workflow, how they differ in capabilities, and how smoothly they integrate into Neovim for day-to-day coding tasks.

    Resources


    Bugzilla goes AI - Phase 1 by nwalter

    Description

    This project, Bugzilla goes AI, aims to boost developer productivity by creating an autonomous AI bug agent during Hackweek. The primary goal is to reduce the time employees spend triaging bugs by integrating Ollama to summarize issues, recommend next steps, and push focused daily reports to a Web Interface.

    Goals

    To reduce employee time spent on Bugzilla by implementing an AI tool that triages and summarizes bug reports, providing actionable recommendations to the team via Web Interface.

    Project Charter

    https://docs.google.com/document/d/1HbAvgrg8T3pd1FIx74nEfCObCljpO77zz5In_Jpw4as/edit?usp=sharing## Description