Description

This project aims to explore the popularity and developer sentiment around SUSE and its technologies compared to Red Hat and its technologies. Using publicly available data sources, I will analyze search trends, developer preferences, repository activity, and media presence. The final outcome will be an interactive Power BI dashboard that provides insights into how SUSE is perceived and discussed across the web and among developers.

Goals

  1. Assess the popularity of SUSE products and brand compared to Red Hat using Google Trends.
  2. Analyze developer satisfaction and usage trends from the Stack Overflow Developer Survey.
  3. Use the GitHub API to compare SUSE and Red Hat repositories in terms of stars, forks, contributors, and issue activity (a minimal sketch follows this list).
  4. Perform sentiment analysis on GitHub issue comments to measure community tone and engagement using built-in Copilot capabilities.
  5. Perform sentiment analysis on Reddit comments related to SUSE technologies using built-in Copilot capabilities.
  6. Use Gnews.io to track and compare the volume of news articles mentioning SUSE and Red Hat technologies.
  7. Test the integration of Copilot (AI) within Power BI for enhanced data analysis and visualization.
  8. Deliver a comprehensive Power BI report summarizing findings and insights.
  9. Test the full potential of Power BI, including its AI features and natural language Q&A.
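
For goal 3, the comparison can start from aggregate repository metrics pulled from the public GitHub REST API. The sketch below sums stars, forks, and open issues per organization; the organization names and the choice of metrics are assumptions to be refined, not the final methodology.

  # Minimal sketch: compare aggregate repository metrics for two GitHub
  # organizations via the public REST API (organization names are assumptions).
  import os
  import requests

  GITHUB_API = "https://api.github.com"
  TOKEN = os.environ.get("GITHUB_TOKEN")  # optional, raises the rate limit
  HEADERS = {"Authorization": f"Bearer {TOKEN}"} if TOKEN else {}

  def org_repo_stats(org: str) -> dict:
      """Sum stars, forks, and open issues across an organization's public repos."""
      stats = {"repos": 0, "stars": 0, "forks": 0, "open_issues": 0}
      page = 1
      while True:
          resp = requests.get(
              f"{GITHUB_API}/orgs/{org}/repos",
              params={"per_page": 100, "page": page, "type": "public"},
              headers=HEADERS,
              timeout=30,
          )
          resp.raise_for_status()
          repos = resp.json()
          if not repos:
              break
          for repo in repos:
              stats["repos"] += 1
              stats["stars"] += repo.get("stargazers_count", 0)
              stats["forks"] += repo.get("forks_count", 0)
              stats["open_issues"] += repo.get("open_issues_count", 0)
          page += 1
      return stats

  if __name__ == "__main__":
      for org in ("SUSE", "RedHatOfficial"):  # assumed organization accounts
          print(org, org_repo_stats(org))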

Resources

  1. Google Trends: web scraping for search popularity data.
  2. Stack Overflow Developer Survey: for technology popularity and satisfaction comparison.
  3. GitHub API: for repository data (stars, forks, contributors, issues, comments).
  4. Gnews.io API: for article volume and mentions analysis.
  5. Reddit: SUSE-related topics with comments (a minimal fetch sketch follows this list).
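
For resource 5, the Reddit data can be collected with the standard PRAW library before any sentiment analysis. The sketch below is a minimal, assumed setup: the search term, subreddit scope, and credential placeholders are illustrative, not the exact configuration used.

  # Minimal sketch: fetch recent SUSE-related Reddit posts and their comments
  # with PRAW for later sentiment analysis. Credentials come from a Reddit
  # "script" app; the search term and subreddit scope are assumptions.
  import praw

  reddit = praw.Reddit(
      client_id="YOUR_CLIENT_ID",
      client_secret="YOUR_CLIENT_SECRET",
      user_agent="suse-sentiment-hackweek/0.1",
  )

  rows = []
  for submission in reddit.subreddit("all").search("SUSE", sort="new", limit=100):
      submission.comments.replace_more(limit=0)  # drop "load more comments" stubs
      for comment in submission.comments.list():
          rows.append({
              "post_title": submission.title,
              "post_score": submission.score,
              "comment_body": comment.body,
              "comment_score": comment.score,
              "created_utc": comment.created_utc,
          })

  print(f"Collected {len(rows)} comments for sentiment analysis")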

Looking for hackers with the skills:

ai marketing powerbi analysis copilot trend github reddit

This project is part of:

Hack Week 25

Activity

  • about 2 months ago: lkocman liked this project.
  • about 2 months ago: terezacerna added keyword "copilot" to this project.
  • about 2 months ago: terezacerna added keyword "trend" to this project.
  • about 2 months ago: terezacerna added keyword "github" to this project.
  • about 2 months ago: terezacerna added keyword "reddit" to this project.
  • about 2 months ago: terezacerna added keyword "ai" to this project.
  • about 2 months ago: terezacerna added keyword "marketing" to this project.
  • about 2 months ago: terezacerna added keyword "powerbi" to this project.
  • about 2 months ago: terezacerna added keyword "analysis" to this project.
  • about 2 months ago: katiarojas liked this project.
  • about 2 months ago: terezacerna disliked this project.
  • about 2 months ago: terezacerna liked this project.
  • 2 months ago: horon liked this project.
  • 3 months ago: terezacerna started this project.
  • 3 months ago: terezacerna originated this project.

  • Comments

    • terezacerna
      about 1 month ago by terezacerna | Reply

      This project provides a comprehensive, data-driven assessment of SUSE’s presence, perception, and alignment within the global developer and open-source landscape. By integrating insights from the Stack Overflow Developer Survey, Google Trends, GitHub activity, GitHub issue sentiment, and Reddit discussions, the analysis offers a multi-layered view of how SUSE compares with key competitors—particularly Red Hat—and how the broader technical community engages with SUSE technologies. It is important to note that GitHub Issues and Reddit data were limited to approximately one month of available data, which constrains the depth of historical trend analysis, though still provides valuable directional insights into current community sentiment and interaction patterns.

      The Developer Survey analysis reveals how Linux users differ from non-Linux users in terms of platform choices, programming languages, professional roles, and technology preferences. This highlights the size and characteristics of SUSE’s core audience, while also identifying the tools and languages most relevant to SUSE’s ecosystem. Analyses of DevOps, SREs, SysAdmins, and cloud-native roles further quantify SUSE’s addressable market and assess alignment with industry trends.

      The Google Trends analysis adds an external perspective on brand interest, showing how public attention toward SUSE and Red Hat evolves over time and across regions. Related search terms provide insight into how each brand is associated with specific technologies and topics, highlighting opportunities for increased visibility or repositioning.

      The GitHub repository overview offers a look at SUSE’s open-source footprint relative to Red Hat, focusing on repository activity, stars, forks, issues, and programming language diversity. Trends in repository creation and updates illustrate innovation momentum and community engagement, while language usage highlights SUSE’s technical direction and ecosystem breadth.

      The SUSE GitHub Issues analysis deepens understanding of community interaction by examining issue volume, resolution speed, contributor patterns, and sentiment expressed in issue titles, bodies, and comments. Although based on one month of data, this analysis provides meaningful insights into developer satisfaction, recurring challenges, and project health. Categorization of issues helps identify potential areas for product improvement or documentation enhancement.

      The Reddit analysis extends sentiment exploration into broader public discussions, comparing SUSE-related and Red Hat–related posts and comments. Despite the one-month limitation, sentiment trends, discussion categories, and key influencers reveal how SUSE is perceived in informal technical communities and what factors drive positive or negative sentiment.

      Together, these components create a holistic view of SUSE’s position across developer preferences, market interest, community engagement, and open-source activity. The combined insights support strategic decision-making for product development, community outreach, marketing, and competitive positioning—helping SUSE understand where it stands today and where the strongest opportunities exist within the modern infrastructure and cloud-native ecosystem.

    • terezacerna
      about 1 month ago by terezacerna | Reply

      Demo View: LINK

      Full Power BI Report: LINK (additional access may be required)

    • terezacerna
      about 1 month ago by terezacerna | Reply

      Obstacles and limitations I have encountered:

      1. I was limited in the number of items I could scrape via the GitHub and Reddit APIs, and I could only retrieve roughly the last month of data from both platforms.

      2. Since I last explored this, Microsoft has moved the cognitive AI analyses, such as sentiment analysis and categorization, behind separate licensing that we don't have available at SUSE. I therefore had to change my plan and use Gemini outside of Power BI for these analyses (a minimal sketch of this workaround follows this list).

      3. Analyzing Stack Overflow could and should take much longer to build a real profile of a SUSE community user. For that I would need help from someone who knows SUSE products well technically and ideally also has some marketing knowledge.

      4. A next step for this analysis could be to examine when a community user becomes a paying customer.
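
      The Gemini workaround mentioned in point 2 can be as simple as a small script that labels each comment and writes the result to a CSV that Power BI then imports. The sketch below assumes a comments.csv file with a comment_body column and uses an assumed model name and prompt; it illustrates the approach rather than the exact script used.

        # Minimal sketch of the workaround: classify comment sentiment with Gemini
        # outside Power BI, then feed the labelled CSV back into the report.
        # Model name, prompt wording, and CSV layout are assumptions.
        import csv
        import google.generativeai as genai

        genai.configure(api_key="YOUR_GEMINI_API_KEY")
        model = genai.GenerativeModel("gemini-1.5-flash")

        PROMPT = (
            "Classify the sentiment of the following comment about SUSE or Red Hat "
            "as exactly one word: positive, negative, or neutral.\n\nComment: {text}"
        )

        def classify(text: str) -> str:
            response = model.generate_content(PROMPT.format(text=text))
            return response.text.strip().lower()

        with open("comments.csv", newline="", encoding="utf-8") as src, \
             open("comments_labelled.csv", "w", newline="", encoding="utf-8") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=reader.fieldnames + ["sentiment"])
            writer.writeheader()
            for row in reader:
                row["sentiment"] = classify(row["comment_body"])
                writer.writerow(row)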

    Similar Projects

    Flaky Tests AI Finder for Uyuni and MLM Test Suites by oscar-barrios

    Description

    Our current Grafana dashboards provide a great overview of test suite health, including a panel for "Top failed tests." However, identifying which of these failures are due to legitimate bugs versus intermittent "flaky tests" is a manual, time-consuming process. These flaky tests erode trust in our test suites and slow down development.

    This project aims to build a simple but powerful Python script that automates flaky test detection. The script will directly query our Prometheus instance for the historical data of each failed test, using the jenkins_build_test_case_failure_age metric. It will then format this data and send it to the Gemini API with a carefully crafted prompt, asking it to identify which tests show a flaky pattern.

    The final output will be a clean JSON list of the most probable flaky tests, which can then be used to populate a new "Top Flaky Tests" panel in our existing Grafana test suite dashboard.
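
    A compressed sketch of that pipeline is shown below, assuming a reachable Prometheus instance and a Gemini API key; the Prometheus address, model name, and prompt wording are placeholders rather than the project's actual configuration.

      # Minimal sketch: pull failure history from Prometheus, ask Gemini which
      # tests look flaky, and save a JSON list for Grafana. The Prometheus URL,
      # model name, and prompt are assumptions.
      import json
      import requests
      import google.generativeai as genai

      PROMETHEUS_URL = "http://prometheus.example.com"  # assumed address
      QUERY = "jenkins_build_test_case_failure_age"

      # 1. Fetch the current failure-age series for failed test cases.
      resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": QUERY}, timeout=30)
      resp.raise_for_status()
      series = resp.json()["data"]["result"]

      # 2. Condense the raw series into something prompt-sized.
      failures = [{"labels": s["metric"], "value": s["value"][1]} for s in series]

      # 3. Ask Gemini to flag flaky-looking tests and answer with JSON only.
      genai.configure(api_key="YOUR_GEMINI_API_KEY")
      model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name
      prompt = (
          "Given this test failure history, list the test names that show an "
          "intermittent (flaky) failure pattern. Answer with a JSON array of "
          f"strings only.\n\n{json.dumps(failures)}"
      )
      answer = model.generate_content(prompt).text

      # 4. Save the list so a Grafana panel can display it.
      with open("flaky_tests.json", "w", encoding="utf-8") as fh:
          fh.write(answer)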

    Goals

    By the end of Hack Week, we aim to have a single, working Python script that:

    1. Connects to Prometheus and executes a query to fetch detailed test failure history.
    2. Processes the raw data into a format suitable for the Gemini API.
    3. Successfully calls the Gemini API with the data and a clear prompt.
    4. Parses the AI's response to extract a simple list of flaky tests.
    5. Saves the list to a JSON file that can be displayed in Grafana.
    6. Provides the data for a new panel in our Grafana dashboard listing the flaky tests.

    Resources

    Outcome


    Explore LLM evaluation metrics by thbertoldi

    Description

    Learn the best practices for evaluating LLM performance with an open-source framework such as DeepEval.
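
    As a possible starting point, DeepEval's quickstart-style flow looks roughly like the sketch below: wrap a prompt and the model's answer in a test case and score it with a metric. The example input/output are invented, and the relevancy metric uses an LLM judge, so an API key for the judge model (OpenAI by default) is assumed to be configured.

      # Minimal sketch of a DeepEval check, roughly following its quickstart.
      # The test data is made up; the judge model's API key is assumed to be set.
      from deepeval import evaluate
      from deepeval.metrics import AnswerRelevancyMetric
      from deepeval.test_case import LLMTestCase

      test_case = LLMTestCase(
          input="How do I patch a SLES system?",  # prompt given to the LLM under test
          actual_output="Use zypper patch to apply the latest updates.",  # its answer
      )

      # Score how relevant the answer is to the question; fail below the threshold.
      evaluate(test_cases=[test_case], metrics=[AnswerRelevancyMetric(threshold=0.7)])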

    Goals

    Curate the knowledge learned during practice and present it to colleagues.

    -> Maybe publish a blog post on SUSE's blog?

    Resources

    https://deepeval.com

    https://docs.pactflow.io/docs/bi-directional-contract-testing


    GenAI-Powered Systemic Bug Evaluation and Management Assistant by rtsvetkov

    Motivation

    What is the decision-critical question one can ask about a bug? How does that question affect the decision on the bug, and why?

    Let's make GenAI look at the bug from a systemic point of view and evaluate what we don't know: which piece of information is missing to make a decision?

    Description

    To build a tool that takes a raw bug report (including error messages and context) and uses a large language model (LLM) to generate a series of structured, Socratic-style or systemic questions designed to guide integration and development toward the root cause, rather than just providing a direct, potentially incorrect fix.

    Goals

    Set up a Python environment

    1. Set up the environment and get a Gemini API key.
    2. Collect 5-10 realistic bug reports (from open-source projects, personal projects, or public forums like Stack Overflow); include the error message and the initial context.

    Build the Dialogue Loop

    1. Write a basic Python script using the Gemini API.
    2. Implement a simple conversational loop: User Input (Bug) -> AI Output (Question) -> User Input (Answer to AI's question) -> AI Output (Next Question); see the sketch after this list.
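
    A minimal version of that loop, using the Gemini chat API, could look like this; the system-style instruction and model name are assumptions.

      # Minimal sketch of the dialogue loop: the model asks one Socratic/systemic
      # question per turn instead of proposing a fix. Instruction text and model
      # name are assumptions.
      import google.generativeai as genai

      genai.configure(api_key="YOUR_GEMINI_API_KEY")
      model = genai.GenerativeModel(
          "gemini-1.5-flash",
          system_instruction=(
              "You are a debugging assistant. Never propose a fix directly. "
              "Ask one Socratic, systemic question at a time that moves the user "
              "from symptom to context, assumptions, and critical parts."
          ),
      )

      chat = model.start_chat(history=[])
      reply = chat.send_message(input("Paste the bug report: "))

      while True:
          print("\nAI question:", reply.text)
          answer = input("\nYour answer (or 'quit'): ")
          if answer.strip().lower() == "quit":
              break
          reply = chat.send_message(answer)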

    Socratic/Systemic Strategy Implementation

    1. Refine the logic to ensure the questions follow a Socratic and systemic path (e.g., symptom -> context -> assumptions -> critical parts).
    2. Implement Function Calling (an advanced feature of the Gemini API) to suggest specific actions to the user, like "Run a ping test" or "Check the database logs."
    3. Implement a Bugzilla call to collect the
    4. Implement the questioning framework as LLM pre-conditioning
    5. Define a set of instructions
    6. Assemble the Tool
    5. Define set of instructions
    6. Assemble the Tool

    Resources

    What are Systemic Questions?

    Systemic questions explore the relationships, patterns, and interactions within a system rather than focusing on isolated elements.
    In IT, they help uncover hidden dependencies, feedback loops, assumptions, and side-effects during debugging or architecture analysis.

    Gitlab Project

    gitlab.suse.de/sle-prjmgr/BugDecisionCritical_Question


    "what is it" file and directory analysis via MCP and local LLM, for console and KDE by rsimai

    Description

    Users sometimes wonder what the files or directories they find on their local PC are good for. If they can't tell from the filename or metadata, there should be an easy way to quickly analyze the content and at least guess the meaning. An LLM could help with that, through the use of a filesystem MCP and to-text converters for typical file types. Ideally this is integrated into the desktop environment but works as well from a console. All data is processed locally or "on premise"; no artifacts remain or leave the system.
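
    The console part of the idea can be sketched quickly: send a truncated text sample of the file to a locally running model and ask for a guess. Instead of a full filesystem MCP client/server pair, the sketch below simply calls a local Ollama HTTP endpoint, which is a simplification; the endpoint and model name are assumptions, and nothing leaves the machine.

      # Minimal "what is it" console sketch: ask a local LLM (via Ollama's HTTP
      # API) to guess the purpose of a file from a truncated content sample.
      # Endpoint and model name are assumptions; all processing stays local.
      import pathlib
      import sys
      import requests

      OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint
      MODEL = "llama3"  # assumed locally pulled model

      def whatisit(path: str) -> str:
          sample = pathlib.Path(path).read_bytes()[:4000].decode("utf-8", errors="replace")
          prompt = (
              f"File name: {path}\n\nContent sample:\n{sample}\n\n"
              "In two sentences, guess what this file is for."
          )
          resp = requests.post(
              OLLAMA_URL,
              json={"model": MODEL, "prompt": prompt, "stream": False},
              timeout=120,
          )
          resp.raise_for_status()
          return resp.json()["response"]

      if __name__ == "__main__":
          print(whatisit(sys.argv[1]))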

    Goals

    • The user can run a command from the console, to check on a file or directory
    • The filemanager contains the "analyze" feature within the context menu
    • The local LLM could serve for other use cases where privacy matters

    TBD

    • Find or write capable one-shot and interactive MCP client
    • Find or write simple+secure file access MCP server
    • Create local LLM service with appropriate footprint, containerized
    • Shell command with options
    • KDE integration (Dolphin)
    • Package
    • Document

    Resources


    SUSE Observability MCP server by drutigliano

    Description

    The idea is to implement the SUSE Observability Model Context Protocol (MCP) Server as a specialized, middle-tier API designed to translate the complex, high-cardinality observability data from StackState (topology, metrics, and events) into highly structured, contextually rich, and LLM-ready snippets.

    This MCP Server abstracts the StackState APIs. Its primary function is to serve as a Tool/Function Calling target for AI agents. When an AI receives an alert or a user query (e.g., "What caused the outage?"), the AI calls an MCP Server endpoint. The server then fetches the relevant operational facts, summarizes them, normalizes technical identifiers (like URNs and raw metric names) into natural language concepts, and returns a concise JSON or YAML payload. This payload is then injected directly into the LLM's prompt, ensuring the final diagnosis or action is grounded in real-time, accurate SUSE Observability data, effectively minimizing hallucinations.

    Goals

    • Grounding AI Responses: Ensure that all AI diagnoses, root cause analyses, and action recommendations are strictly based on verifiable, real-time data retrieved from the SUSE Observability StackState platform.
    • Simplifying Data Access: Abstract the complexity of StackState's native APIs (e.g., Time Travel, 4T Data Model) into simple, semantic functions that can be easily invoked by LLM tool-calling mechanisms.
    • Data Normalization: Convert complex, technical identifiers (like component URNs, raw metric names, and proprietary health states) into standardized, natural language terms that an LLM can easily reason over.
    • Enabling Automated Remediation: Define clear, action-oriented MCP endpoints (e.g., execute_runbook) that allow the AI agent to initiate automated operational workflows (e.g., restarts, scaling) after a diagnosis, closing the loop on observability.

     Hackweek STEP

    • Create a functional MCP endpoint exposing one (or more) tool(s) to answer queries like "What is the health of service X?" by fetching, normalizing, and returning live StackState data in an LLM-ready format.

     Scope

    • Implement read-only MCP server that can:
      • Connect to a live SUSE Observability instance and authenticate (with API token)
      • Use tools to fetch data for a specific component URN (e.g., current health state, metrics, possibly topology neighbors, ...).
      • Normalize response fields (e.g., URN to "Service Name", health state DEVIATING to "Unhealthy", raw metric names); a minimal sketch follows this list.
      • Return the data as a structured JSON payload compliant with the MCP specification.
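
    To illustrate the normalization step named above, independently of the Go implementation, a sketch in Python could look like this; the URN shape, state names, and payload layout are examples rather than the full StackState schema.

      # Illustration of the normalization idea: map StackState identifiers and
      # health states to LLM-friendly terms. The raw payload shape is assumed.
      HEALTH_STATES = {
          "CLEAR": "Healthy",
          "DEVIATING": "Unhealthy",
          "CRITICAL": "Critical",
          "UNKNOWN": "Unknown",
      }

      def normalize_component(raw: dict) -> dict:
          """Turn a raw component payload into a compact, LLM-ready snippet."""
          urn = raw.get("identifier", "")
          service_name = urn.rsplit("/", 1)[-1] or urn  # assumed URN shape ".../service/<name>"
          return {
              "service": service_name,
              "health": HEALTH_STATES.get(raw.get("healthState", "UNKNOWN"), "Unknown"),
              "metrics": {m["name"]: m["value"] for m in raw.get("metrics", [])},
          }

      # Example usage with an assumed raw payload:
      raw = {
          "identifier": "urn:stackstate:service/checkout",
          "healthState": "DEVIATING",
          "metrics": [{"name": "request_error_rate", "value": 0.12}],
      }
      print(normalize_component(raw))  # {'service': 'checkout', 'health': 'Unhealthy', ...}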

    Deliverables

    • MCP Server v0.1 A running Golang MCP server with at least one tool.
    • A README.md and a test script (e.g., curl commands or a simple notebook) showing how an AI agent would call the endpoint and the resulting JSON payload.

    Outcome

    A functional and testable API endpoint that proves the core concept: translating complex StackState data into a simple, LLM-ready format. This provides the foundation for developing AI-driven diagnostics and automated remediation.

    Resources

    • https://www.honeycomb.io/blog/its-the-end-of-observability-as-we-know-it-and-i-feel-fine
    • https://www.datadoghq.com/blog/datadog-remote-mcp-server
    • https://modelcontextprotocol.io/specification/2025-06-18/index
    • https://modelcontextprotocol.io/docs/develop/build-server

     Basic implementation

    • https://github.com/drutigliano19/suse-observability-mcp-server

    Results

    Successfully developed and delivered a fully functional SUSE Observability MCP Server that bridges language models with SUSE Observability's operational data. This project demonstrates how AI agents can perform intelligent troubleshooting and root cause analysis using structured access to real-time infrastructure data.

    Example execution


    issuefs: FUSE filesystem representing issues (e.g. JIRA) for use with AI agent code assistants by llansky3

    Description

    Creating a FUSE filesystem (issuefs) that mounts issues from various ticketing systems (GitHub, Jira, Bugzilla, Redmine) as files in your local file system.

    And why is this a good idea?

    • Users can use their favorite command-line tools to view and search tickets from various sources.
    • Users can use AI agent capabilities from their favorite IDE or CLI to ask questions about the issues, project, or functionality, with the relevant tickets provided as context without extra work.
    • Users can use it while developing new features, letting the AI agent jump-start the solution. issuefs gives the agent context about the bug or requested feature (the agent just reads a few more files), with no need to copy and paste issues into the prompt or use extra MCP tools to access them. Those options remain available, but this approach is deliberately different. A minimal sketch of the idea follows this list.
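
    As a proof of the mounting mechanism, a minimal read-only issuefs can be sketched with fusepy; here the issues are a hardcoded placeholder dict, where the real tool would fill them from GitHub, Jira, Bugzilla, or Redmine.

      # Minimal fusepy sketch: expose a placeholder set of issues as read-only
      # files under a mountpoint. In the real tool the ISSUES dict would be
      # populated from the ticketing systems' APIs.
      import errno
      import stat
      import sys
      import time

      from fuse import FUSE, FuseOSError, Operations

      ISSUES = {  # placeholder data standing in for fetched tickets
          "GH-1.md": "# GH-1: crash on start\n\nSteps to reproduce: ...\n",
          "GH-2.md": "# GH-2: add dark mode\n\nRequested by several users.\n",
      }

      class IssueFS(Operations):
          def getattr(self, path, fh=None):
              now = time.time()
              if path == "/":
                  return {"st_mode": stat.S_IFDIR | 0o555, "st_nlink": 2,
                          "st_ctime": now, "st_mtime": now, "st_atime": now}
              name = path.lstrip("/")
              if name in ISSUES:
                  return {"st_mode": stat.S_IFREG | 0o444, "st_nlink": 1,
                          "st_size": len(ISSUES[name].encode()),
                          "st_ctime": now, "st_mtime": now, "st_atime": now}
              raise FuseOSError(errno.ENOENT)

          def readdir(self, path, fh):
              return [".", ".."] + list(ISSUES)

          def read(self, path, size, offset, fh):
              data = ISSUES[path.lstrip("/")].encode()
              return data[offset:offset + size]

      if __name__ == "__main__":
          # Usage: python issuefs_sketch.py /tmp/issues
          FUSE(IssueFS(), sys.argv[1], foreground=True, ro=True)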

    Goals

    1. Add GitHub issue support
    2. Prove the concept/approach by applying it to itself, using GitHub issues for tracking and developing new features
    3. Add support for Bugzilla and Redmine, using this approach in the process of doing so. Record a video of it.
    4. Clean up and test the implementation and create some documentation
    5. Create a blog post about this approach

    Resources

    There is a prototype implementation here. This currently sort of works with JIRA only.


    The Agentic Rancher Experiment: Do Androids Dream of Electric Cattle? by moio

    Rancher is a beast of a codebase. Let's investigate whether the new 2025 generation of GitHub Autonomous Coding Agents and Copilot Workspaces can actually tame it.

    [Image: A GitHub robot mascot trying to lasso a blue bull with a Kubernetes logo tattooed on it]


    The Plan

    Create a sandbox GitHub Organization, clone in key Rancher repositories, and let the AI loose to see if it can handle real-world enterprise OSS maintenance - or if it just hallucinates new breeds of Kubernetes resources!

    Specifically, throw "Agentic Coders" some typical tasks in a complex, long-lived open-source project, such as:


    The Grunt Work: generate missing GoDocs, unit tests, and refactorings. Rebase PRs.

    The Complex Stuff: fix actual (historical) bugs and feature requests to see if they can traverse the complexity without (too much) human hand-holding.

    Hunting Down Gaps: find areas lacking in docs, areas of improvement in code, dependency bumps, and so on.


    If time allows, also experiment with Model Context Protocol (MCP) to give agents context on our specific build pipelines and CI/CD logs.

    Why?

    We know AI can write "Hello World." and also moderately complex programs from a green field. But can it rebase a 3-month-old PR with conflicts in rancher/rancher? I want to find the breaking point of current AI agents to determine if and how they can help us to reduce our technical debt, work faster and better. At the same time, find out about pitfalls and shortcomings.

    The CONCLUSION!!!

    A State of the Union document was compiled to summarize lessons learned this week. For more gory details, just read on in the diary below!