Motivation

What is the decision-critical question one can ask about a bug? How does this question affect the decision on the bug, and why?

Let's make GenAI look at the bug from a systemic point of view and evaluate what we don't know: which piece of information is missing to make a decision?

Description

To build a tool that takes a raw bug report (including error messages and context) and uses a large language model (LLM) to generate a series of structured, Socratic-style or systemic questions designed to guide integration and development toward the root cause, rather than just providing a direct, potentially incorrect fix.

Goals

Set up a Python environment

  1. Set up the environment and get a Gemini API key. A minimal setup sketch follows this list.
  2. Collect 5-10 realistic bug reports (from open-source projects, personal projects, or public forums like Stack Overflow; include the error message and the initial context).
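A minimal sketch of step 1, assuming the google-generativeai Python package and an API key stored in a GEMINI_API_KEY environment variable; the model name is only an example:

```
import os

import google.generativeai as genai

# Configure the client from the environment; get a key from Google AI Studio.
genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# Any available Gemini model works here; this name is just an example.
model = genai.GenerativeModel("gemini-1.5-flash")

# Quick smoke test that the key and environment are set up correctly.
response = model.generate_content("Reply with OK if you can read this.")
print(response.text)
```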

Build the Dialogue Loop

  1. Write a basic Python script using the Gemini API.
  2. Implement a simple conversational loop: User Input (Bug) -> AI Output (Question) -> User Input (Answer to AI's question) -> AI Output (Next Question). A sketch of this loop follows.
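A sketch of that loop, assuming the same google-generativeai setup as above; the prompts and the stop word are placeholders, not the project's final choices:

```
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")
chat = model.start_chat()  # the chat object keeps the conversation history

bug_report = input("Paste the raw bug report: ")
reply = chat.send_message(
    "Here is a bug report. Ask me exactly ONE Socratic question that moves "
    f"us toward the root cause. Do not propose a fix.\n\n{bug_report}"
)

while True:
    print("\nAI question:", reply.text)
    answer = input("\nYour answer (or 'quit' to stop): ")
    if answer.strip().lower() == "quit":
        break
    reply = chat.send_message(f"My answer: {answer}\nAsk the next single question.")
```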

Socratic/Systemic Strategy Implementation

  1. Refine the logic to ensure the questions follow a Socratic and systemic path (e.g., from symptom -> context -> assumptions -> critical parts -> root cause).
  2. Implement Function Calling (an advanced feature of the Gemini API) to suggest specific actions to the user, like "Run a ping test" or "Check the database logs."
  3. Implement a Bugzilla call to collect the bug report data (see the sketch after this list).
  4. Implement the questioning framework as LLM pre-conditioning (a system instruction).
  5. Define a set of instructions.
  6. Assemble the Tool
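A combined sketch of steps 3 and 4, assuming the standard Bugzilla REST API (endpoint /rest/bug/<id>; bugzilla.suse.com may additionally require an API key) and the system_instruction parameter of the Gemini SDK; the framework wording here is illustrative:

```
import os

import google.generativeai as genai
import requests

# The questioning framework, used to pre-condition the LLM (step 4).
QUESTIONING_FRAMEWORK = (
    "You are a systemic debugging assistant. Never propose a fix directly. "
    "Ask one question at a time, moving from symptom -> context -> "
    "assumptions -> critical parts -> root cause."
)

def fetch_bug(bug_id: int) -> dict:
    # Standard Bugzilla REST endpoint; authentication handling is simplified.
    url = f"https://bugzilla.suse.com/rest/bug/{bug_id}"
    resp = requests.get(url, params={"api_key": os.environ.get("BUGZILLA_API_KEY")})
    resp.raise_for_status()
    return resp.json()["bugs"][0]

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel(
    "gemini-1.5-flash",
    system_instruction=QUESTIONING_FRAMEWORK,  # LLM pre-conditioning
)

bug = fetch_bug(1245907)  # the bug discussed in the comments below
first = model.generate_content(
    f"Bug summary: {bug['summary']}\nStatus: {bug['status']}\n"
    "Ask your first decision-critical question."
)
print(first.text)
```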

Resources

What are Systemic Questions?

Systemic questions explore the relationships, patterns, and interactions within a system rather than focusing on isolated elements.
In IT, they help uncover hidden dependencies, feedback loops, assumptions, and side-effects during debugging or architecture analysis.

Gitlab Project

gitlab.suse.de/sle-prjmgr/BugDecisionCritical_Question

Looking for hackers with the skills:

ai gemini bugzilla genai

This project is part of:

Hack Week 25

Activity

  • 20 days ago: rtsvetkov added keyword "genai" to this project.
  • about 1 month ago: rtsvetkov added keyword "bugzilla" to this project.
  • about 1 month ago: t.huynh liked this project.
  • about 1 month ago: ybonatakis liked this project.
  • about 1 month ago: doreilly liked this project.
  • about 1 month ago: ancorgs liked this project.
  • about 1 month ago: rtsvetkov added keyword "ai" to this project.
  • about 1 month ago: rtsvetkov added keyword "gemini" to this project.
  • about 1 month ago: rtsvetkov started this project.
  • about 1 month ago: rtsvetkov liked this project.
  • about 1 month ago: rtsvetkov originated this project.

  • Comments

    • rtsvetkov
      22 days ago by rtsvetkov | Reply

      == 1. Types of Systemic Questions

      === 1. Circular Questions

      Focus on feedback loops and mutual influence. Example debugging prompts:

      - "What components influence this module, and what does this module influence in return?"
      - "If Service A slows down, how does Service B respond?"

      === 2. Difference Questions

      Explore variations, exceptions, or changes over time. Example debugging prompts:

      - "When does the bug not occur? What is different then?"
      - "What changed in the system right before the issue appeared?"

      === 3. Scaling Questions

      Quantify experience, severity, or uncertainty. Example debugging prompts:

      - "On a scale from 1–10, how reproducible is this issue?"
      - "How much worse does the system behave under peak load versus normal load?"

      === 4. Hypothetical (‘If…Then’) Questions

      Explore consequences, alternative actions, or simulated scenarios. Example debugging prompts:

      - "If we disable caching, what do we expect to happen?"
      - "If the input doubles, which component fails first?"
      - "If we had unlimited time to prevent this exact bug from ever happening again, where in our development cycle (e.g., design, code review, testing) would we invest the most effort?"
      - "If we had to ship the next feature without fixing this bug, what workarounds or manual steps would we need to put in place?"

      === 5. Resource / Strength Questions

      Identify what works well and what can be reused. Example debugging prompts:

      - "Which environments run without this problem and why?"
      - "What parts of the system are stable and can guide the fix?"

      === 6. Perspective-Shifting Questions

      Examine the situation through different roles or components. Example debugging prompts:

      - "If you were the database, what would you ‘say’ is overwhelming you?"
      - "How would a network engineer interpret these logs differently from a backend developer?"

      == 2. Example Debugging Process Using Systemic Questions

      === Step 1: Clarify the Pattern - "When exactly does the API fail, and when does it succeed?"

      === Step 2: Identify Boundaries - "Which systems are definitely not involved?"

      === Step 3: Explore Changes - "What recent deployments or config changes might correlate?"

      === Step 4: Map Influences - "How does the latency of Service X influence the behaviour of Service Y?"

      === Step 5: Hypothesis Testing - "If we simulate traffic spikes, does the behaviour match production incidents?"

      === Step 6: Leverage What Works - "Why does staging not show the issue? What can this teach us about production?"

      == 3. Key Benefits for IT and Systems Theory

      * Makes hidden dependencies visible
      * Avoids tunnel vision in debugging
      * Encourages team alignment through shared system understanding
      * Supports root-cause analysis rather than symptom chasing
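
      One way the six categories above could be encoded as prompt data for the questioner tool; this is a sketch, with illustrative names and an arbitrary rotation order:

      ```
      # The six systemic question categories from this comment, as prompt data.
      SYSTEMIC_CATEGORIES = {
          "circular": "Explore feedback loops and mutual influence.",
          "difference": "Explore variations, exceptions, or changes over time.",
          "scaling": "Quantify experience, severity, or uncertainty.",
          "hypothetical": "Explore consequences of 'if...then' scenarios.",
          "resource": "Identify what works well and can be reused.",
          "perspective": "Examine the bug through different roles or components.",
      }

      def category_instruction(step: int) -> str:
          """Rotate through the categories, one per dialogue turn."""
          order = ["difference", "circular", "hypothetical",
                   "resource", "perspective", "scaling"]
          name = order[step % len(order)]
          return f"Ask one {name} question. {SYSTEMIC_CATEGORIES[name]}"
      ```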

    • rtsvetkov
      21 days ago by rtsvetkov | Reply

      Example manual research session: https://docs.google.com/document/d/1kgM0lBVavBnN0VeP1OgssWVjwIdE2hGf3jHmi2rxc/edit?usp=sharing

      as well as the additional conversation on https://bugzilla.suse.com/show_bug.cgi?id=1245907

      and the Gemini session: https://gemini.google.com/app/dd379133b4af2ec8?utmsource=applauncher&utmmedium=owned&utmcampaign=base_all

    • rtsvetkov
      21 days ago by rtsvetkov | Reply

      Made a Gem: https://gemini.google.com/gem/1FVNTDtBRR8GD8fd3H01LGHxbhzYtz3vb?usp=sharing

    • rtsvetkov
      20 days ago by rtsvetkov | Reply

      GitLab code: https://gitlab.suse.de/sle-prjmgr/BugDecisionCritical_Question/

    • rtsvetkov
      20 days ago by rtsvetkov | Reply

      Some results: https://bugzilla.suse.com/show_bug.cgi?id=1245907#c8

    • rtsvetkov
      19 days ago by rtsvetkov | Reply

      Adding a port for DeepSeek

    • rtsvetkov
      19 days ago by rtsvetkov | Reply

      The Bugzilla code works, as does the Gemini Gem part. Need to commit the last changes to the repo.

    Similar Projects

    AI-Powered Unit Test Automation for Agama by joseivanlopez

    The Agama project is a multi-language Linux installer that leverages the distinct strengths of several key technologies:

    • Rust: Used for the back-end services and the core HTTP API, providing performance and safety.
    • TypeScript (React/PatternFly): Powers the modern web user interface (UI), ensuring a consistent and responsive user experience.
    • Ruby: Integrates existing, robust YaST libraries (e.g., yast-storage-ng) to reuse established functionality.

    The Problem: Testing Overhead

    Developing and maintaining code across these three languages requires a significant, tedious effort in writing, reviewing, and updating unit tests for each component. This high cost of testing is a drain on developer resources and can slow down the project's evolution.

    The Solution: AI-Driven Automation

    This project aims to eliminate the manual overhead of unit testing by exploring and integrating AI-driven code generation tools. We will investigate how AI can:

    1. Automatically generate new unit tests as code is developed.
    2. Intelligently correct and update existing unit tests when the application code changes.

    By automating this crucial but monotonous task, we can free developers to focus on feature implementation and significantly improve the speed and maintainability of the Agama codebase.

    Goals

    • Proof of Concept: Successfully integrate and demonstrate an authorized AI tool (e.g., gemini-cli) to automatically generate unit tests.
    • Workflow Integration: Define and document a new unit test automation workflow that seamlessly integrates the selected AI tool into the existing Agama development pipeline.
    • Knowledge Sharing: Establish a set of best practices for using AI in code generation, sharing the learned expertise with the broader team.

    Contribution & Resources

    We are seeking contributors interested in AI-powered development and improving developer efficiency. Whether you have previous experience with code generation tools or are eager to learn, your participation is highly valuable.

    If you want to dive deep into AI for software quality, please reach out and join the effort!

    • Authorized AI Tools: Tools supported by SUSE (e.g., gemini-cli)
    • Focus Areas: Rust, TypeScript, and Ruby components within the Agama project.

    Interesting Links


    Flaky Tests AI Finder for Uyuni and MLM Test Suites by oscar-barrios

    Description

    Our current Grafana dashboards provide a great overview of test suite health, including a panel for "Top failed tests." However, identifying which of these failures are due to legitimate bugs versus intermittent "flaky tests" is a manual, time-consuming process. These flaky tests erode trust in our test suites and slow down development.

    This project aims to build a simple but powerful Python script that automates flaky test detection. The script will directly query our Prometheus instance for the historical data of each failed test, using the jenkins_build_test_case_failure_age metric. It will then format this data and send it to the Gemini API with a carefully crafted prompt, asking it to identify which tests show a flaky pattern.

    The final output will be a clean JSON list of the most probable flaky tests, which can then be used to populate a new "Top Flaky Tests" panel in our existing Grafana test suite dashboard.

    Goals

    By the end of Hack Week, we aim to have a single, working Python script that:

    1. Connects to Prometheus and executes a query to fetch detailed test failure history.
    2. Processes the raw data into a format suitable for the Gemini API.
    3. Successfully calls the Gemini API with the data and a clear prompt.
    4. Parses the AI's response to extract a simple list of flaky tests.
    5. Saves the list to a JSON file that can be displayed in Grafana.
    6. Populates a new panel in our dashboard listing the flaky tests (a pipeline sketch follows this list).
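    An end-to-end sketch of these steps; the Prometheus URL and prompt wording are placeholders, while the metric name comes from the description above:

    ```
    import json
    import os

    import google.generativeai as genai
    import requests

    PROM_URL = "http://prometheus.example.com"  # placeholder address

    # 1. Fetch test failure history via the Prometheus HTTP API.
    resp = requests.get(
        f"{PROM_URL}/api/v1/query",
        params={"query": "jenkins_build_test_case_failure_age"},
    )
    resp.raise_for_status()
    series = resp.json()["data"]["result"]

    # 2. Reduce the raw series to something compact enough for a prompt.
    failures = [{"labels": s["metric"], "value": s["value"]} for s in series]

    # 3. and 4. Ask Gemini to spot flaky patterns and return a JSON list.
    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")
    reply = model.generate_content(
        "Given this test failure history, return a JSON array with the names "
        "of tests that look flaky (intermittent rather than consistently "
        "failing):\n" + json.dumps(failures)[:30000]  # crude context-window cap
    )

    # 5. Save the list so Grafana can display it.
    with open("flaky_tests.json", "w") as f:
        f.write(reply.text)
    ```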

    Resources

    Outcome


    Background Coding Agent by mmanno

    Description

    I had only bad experiences with AI one-shots. However, monitoring agent work closely and interfering often did result in productivity gains.

    Now, other companies are using agents in pipelines. That makes sense to me: just like CI, we want to offload work to pipelines. Our engineering teams are consistently slowed down by "toil": low-impact, repetitive maintenance tasks. A simple linter rule change, a dependency bump, rebasing patch sets on top of newer releases, or an API deprecation requires dozens of manual PRs, draining time from feature development.

    So far we have been writing deterministic, script-based automation for these tasks. And it turns out to be a common trap. These scripts are brittle, complex, and become a massive maintenance burden themselves.

    Can we make prompts and workflows smart enough to succeed at background coding?

    Goals

    We will build a platform that allows engineers to execute complex code transformations using prompts.

    By automating this toil, we accelerate large-scale migrations and allow teams to focus on high-value work.

    Our platform will consist of three main components:

    • "Change" Definition: Engineers will define a transformation as a simple, declarative manifest (a sketch follows this list):
      • The target repositories.
      • A wrapper to run a "coding agent", e.g., "gemini-cli".
      • The task as a natural language prompt.
    • "Change" Management Service: A central service that orchestrates the jobs. It will receive Change definitions and be responsible for the job lifecycle.
    • Execution Runners: We could use existing sandboxed CI runners (like GitHub/GitLab runners) to execute each job or spawn a container.
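    A sketch of what such a manifest might look like, expressed here as a Python dataclass; all field names are invented for illustration, since defining the real format is part of the MVP below:

    ```
    from dataclasses import dataclass

    @dataclass
    class Change:
        """Hypothetical declarative manifest for one code transformation."""
        repos: list[str]              # target repositories
        agent: str = "gemini-cli"     # wrapper command for the coding agent
        prompt: str = ""              # the task as a natural-language prompt

    change = Change(
        repos=["github.com/example/service-a", "github.com/example/service-b"],
        prompt="Replace deprecated logging calls with the new structured API.",
    )
    ```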

    MVP

    • Define the Change manifest format.
    • Build the core Management Service that can accept and queue a Change.
    • Connect management service and runners, dynamically dispatch jobs to runners.
    • Create a basic runner script that can run a hard-coded prompt against a test repo and open a PR.

    Stretch Goals:

    • Multi-layered approach, Workflow Agents trigger Coding Agents:
      1. Workflow Agent: Gather information about the task interactively from the user.
      2. Coding Agent: Once the interactive agent has refined the task into a clear prompt, it hands this prompt off to the "coding agent." This background agent is responsible for executing the task and producing the actual pull request.
    • Use MCP:
      1. Workflow Agent gathers context information from Slack, Github, etc.
      2. Workflow Agent triggers a Coding Agent.
    • Create a "Standard Task" library with reliable prompts.
      1. Rebasing rancher-monitoring to a new version of kube-prom-stack
      2. Update charts to use new images
      3. Apply changes to comply with a new linter
      4. Bump complex Go dependencies, like k8s modules
      5. Backport pull requests to other branches
    • Add “review agents” that review the generated PR.

    See also


    Extended private brain - RAG my own scripts and data into offline LLM AI by tjyrinki_suse

    Description

    For purely studying purposes, I'd like to find out if I could teach an LLM some of my own accumulated knowledge, to use it as a sort of extended brain.

    I might use qwen3-coder or something similar as a starting point.

    Everything would be done 100% offline without network available to the container, since I prefer to see when network is needed, and make it so it's never needed (other than initial downloads).

    Goals

    1. Learn something about RAG, LLM, AI.
    2. Find out if everything works offline as intended.
    3. As an end result, have a new way to access my own existing know-how, so that I can query the wisdom in it.
    4. Be flexible to pivot in any direction, as long as there are new things learned.

    Resources

    To be found on the fly.

    Timeline

    Day 1 (of 4)

    • Tried out a RAG demo, expanded on feeding it my own data
    • Experimented with qwen3-coder to add a persistent chat functionality, and keeping vectors in a pickle file
    • Optimizations to keep everything within context window
    • Learn and add a bit of PyTest

    Day 2

    • More experimenting and more data
    • Study ChromaDB
    • Add a Web UI that works from another computer even though the container sees network is down

    Day 3

    • The above RAG is working well enough for demonstration purposes.
    • Pivot to trying out OpenCode, configuring local Ollama qwen3-coder there, to analyze the RAG demo.
    • Figured out how to configure the Ollama template to be usable under OpenCode. OpenCode locally is super slow compared to just running qwen3-coder alone.

    Day 4 (final day)

    • Battle with OpenCode that was both slow and kept on piling up broken things.
    • Call it a success, as after all the agentic AI was working locally.
    • Clean up the mess left behind a bit.

    Blog Post

    Summarized the findings in a blog post.


    Local AI assistant with optional integrations and mobile companion by livdywan

    Description

    Set up a local AI assistant for research, brainstorming and proofreading. Look into SurfSense, Open WebUI and possibly alternatives. Explore integration with services like openQA. There should be no cloud dependencies. Mobile phone support or an additional companion app would be a bonus. The goal is not to develop everything from scratch.

    User Story

    • Allison Average wants a one-click local AI assistant on their openSUSE laptop.
    • Ash Awesome wants AI on their phone without an expensive subscription.

    Goals

    • Evaluate a local SurfSense setup for day to day productivity
    • Test opencode for vibe coding and tool calling

    Timeline

    Day 1

    • Took a look at SurfSense and started setting up a local instance.
    • Unfortunately the container setup did not work well. Though this was a great opportunity to learn some new podman commands and refresh my memory on how to recover a corrupted btrfs filesystem.

    Day 2

    • Due to its sheer size and complexity, SurfSense seems to have triggered btrfs fragmentation. Naturally this was not visible in any podman-related errors or in the journal. So this took up much of my second day.

    Day 3

    Day 4

    • Context size is a thing, and models are not equally usable for vibe coding.
    • Through arduous browsing for Ollama models I did find some, like myaniu/qwen2.5-1m:7b with a 1M context window, but even then it is not obvious if they are meant for tool calls.

    Day 5

    • Whilst trying to make opencode usable, I discovered ramalama, which worked instantly and very well.

    Outcomes

    surfsense

    I could not easily set this up completely. Maybe in part due to my filesystem issues. Was expecting this to be less of an effort.

    opencode

    Installing opencode and ollama in my distrobox container along with the following configs worked for me.

    When preparing a new project from scratch it is a good idea to start out with a template.

    opencode.json

    ``` {


    Try out Neovim Plugins supporting AI Providers by enavarro_suse

    Description

    Experiment with several Neovim plugins that integrate AI model providers such as Gemini and Ollama.

    Goals

    Evaluate how these plugins enhance the development workflow, how they differ in capabilities, and how smoothly they integrate into Neovim for day-to-day coding tasks.

    Resources


    issuefs: FUSE filesystem representing issues (e.g. JIRA) for use with AI agent code assistants by llansky3

    Description

    Creating a FUSE filesystem (issuefs) that mounts issues from various ticketing systems (Github, Jira, Bugzilla, Redmine) as files to your local file system.

    And why is this a good idea?

    • Users can use their favorite command-line tools to view and search tickets from various sources.
    • Users can use AI agent capabilities from their favorite IDE or CLI to ask questions about the issues, project, or functionality, with the relevant tickets provided as context without extra work.
    • Users can use it during development of new features, letting the AI agent jump-start the solution. The issuefs gives the AI agent context about the bug or requested features (AI agents just read a few more files). There is no need to copy and paste issues into the user prompt or to use extra MCP tools to access the issues; these you can still do, but this approach is deliberately different. A minimal sketch of the idea follows this list.
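    A minimal read-only sketch of the issuefs idea using the fusepy library; the tickets below are a hard-coded stand-in for data fetched from a real tracker:

    ```
    import errno
    import stat

    from fuse import FUSE, FuseOSError, Operations

    # Stand-in for tickets fetched from Github/Jira/Bugzilla/Redmine.
    TICKETS = {"BUG-1.txt": b"Title: Example bug\nStatus: NEW\n"}

    class IssueFS(Operations):
        """Exposes each ticket as a read-only file in the mounted directory."""

        def getattr(self, path, fh=None):
            if path == "/":
                return {"st_mode": stat.S_IFDIR | 0o555, "st_nlink": 2}
            name = path.lstrip("/")
            if name in TICKETS:
                return {"st_mode": stat.S_IFREG | 0o444, "st_nlink": 1,
                        "st_size": len(TICKETS[name])}
            raise FuseOSError(errno.ENOENT)

        def readdir(self, path, fh):
            return [".", ".."] + list(TICKETS)

        def read(self, path, size, offset, fh):
            return TICKETS[path.lstrip("/")][offset:offset + size]

    if __name__ == "__main__":
        # Mount read-only in the foreground, e.g. under /tmp/issuefs.
        FUSE(IssueFS(), "/tmp/issuefs", foreground=True, ro=True)
    ```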

    Goals

    1. Add Github issue support
    2. Prove the concept/approach by applying it to issuefs itself, using Github issues for tracking and development of new features
    3. Add support for Bugzilla and Redmine, using this approach in the process of doing it. Record a video of it.
    4. Clean up and test the implementation and create some documentation
    5. Create a blog post about this approach

    Resources

    There is a prototype implementation here. This currently sort of works with JIRA only.


    Bugzilla goes AI - Phase 1 by nwalter

    Description

    This project, Bugzilla goes AI, aims to boost developer productivity by creating an autonomous AI bug agent during Hackweek. The primary goal is to reduce the time employees spend triaging bugs by integrating Ollama to summarize issues, recommend next steps, and push focused daily reports to a Web Interface.

    Goals

    To reduce employee time spent on Bugzilla by implementing an AI tool that triages and summarizes bug reports, providing actionable recommendations to the team via a web interface.

    Project Charter

    Bugzilla goes AI Phase 1


    Project Achievements during Hackweek

    In this file you can read about what we achieved during Hackweek.

    Project Achievements