Description

This project aims to explore the popularity and developer sentiment around SUSE and its technologies compared to Red Hat and its technologies. Using publicly available data sources, I will analyze search trends, developer preferences, repository activity, and media presence. The final outcome will be an interactive Power BI dashboard that provides insights into how SUSE is perceived and discussed across the web and among developers.

Goals

  1. Assess the popularity of SUSE products and brand compared to Red Hat using Google Trends.
  2. Analyze developer satisfaction and usage trends from the Stack Overflow Developer Survey.
  3. Use the GitHub API to compare SUSE and Red Hat repositories in terms of stars, forks, contributors, and issue activity (a rough sketch follows this list).
  4. Perform sentiment analysis on GitHub issue comments to measure community tone and engagement using built-in Copilot capabilities.
  5. Perform sentiment analysis on Reddit comments related to SUSE technologies using built-in Copilot capabilities.
  6. Use Gnews.io to track and compare the volume of news articles mentioning SUSE and Red Hat technologies.
  7. Test the integration of Copilot (AI) within Power BI for enhanced data analysis and visualization.
  8. Deliver a comprehensive Power BI report summarizing findings and insights.
  9. Test the full potential of Power BI, including its AI features and natural-language Q&A.
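
For goal 3, a rough sketch of the kind of GitHub API query involved is shown below (the organization names are assumptions, only the first page of repositories is fetched, and unauthenticated requests are heavily rate-limited):

```python
# Sketch: compare top-level repository metrics of two GitHub organizations.
# Unauthenticated requests are limited to 60/hour; pass a token for real use.
import requests

ORGS = ["SUSE", "RedHatOfficial"]  # assumed organization names; adjust as needed

def org_repos(org, per_page=100):
    """Fetch the first page of an organization's public repositories."""
    url = f"https://api.github.com/orgs/{org}/repos"
    resp = requests.get(url, params={"per_page": per_page, "sort": "updated"})
    resp.raise_for_status()
    return resp.json()

for org in ORGS:
    repos = org_repos(org)
    stars = sum(r["stargazers_count"] for r in repos)
    forks = sum(r["forks_count"] for r in repos)
    issues = sum(r["open_issues_count"] for r in repos)
    print(f"{org}: {len(repos)} repos, {stars} stars, {forks} forks, {issues} open issues")
```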

Resources

  1. Google Trends: web scraping for search popularity data (a sketch follows this list).
  2. Stack Overflow Developer Survey: technology popularity and satisfaction comparison.
  3. GitHub API: repository data (stars, forks, contributors, issues, comments).
  4. Gnews.io API: article volume and mentions analysis.
  5. Reddit: SUSE-related topics and comments.
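
For the Google Trends data, one possible route (an assumption here, not necessarily what the project uses) is the unofficial pytrends library instead of raw web scraping:

```python
# Sketch: pull relative search interest for "SUSE" vs. "Red Hat" with the
# unofficial pytrends library (pip install pytrends). Values are Google's
# 0-100 relative index, not absolute search volumes.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(["SUSE", "Red Hat"], timeframe="today 5-y")

interest = pytrends.interest_over_time()   # weekly time series per term
by_region = pytrends.interest_by_region()  # breakdown by country

interest.to_csv("suse_vs_redhat_trends.csv")  # e.g. for import into Power BI
```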

Looking for hackers with the skills:

ai marketing powerbi analysis copilot trend github reddit

This project is part of:

Hack Week 25

Activity

  • 2 months ago: lkocman liked this project.
  • 2 months ago: terezacerna added keyword "copilot" to this project.
  • 2 months ago: terezacerna added keyword "trend" to this project.
  • 2 months ago: terezacerna added keyword "github" to this project.
  • 2 months ago: terezacerna added keyword "reddit" to this project.
  • 2 months ago: terezacerna added keyword "ai" to this project.
  • 2 months ago: terezacerna added keyword "marketing" to this project.
  • 2 months ago: terezacerna added keyword "powerbi" to this project.
  • 2 months ago: terezacerna added keyword "analysis" to this project.
  • 3 months ago: katiarojas liked this project.
  • 3 months ago: terezacerna disliked this project.
  • 3 months ago: terezacerna liked this project.
  • 3 months ago: horon liked this project.
  • 3 months ago: terezacerna started this project.
  • 3 months ago: terezacerna originated this project.

  • Comments

    • terezacerna, 2 months ago

      This project provides a comprehensive, data-driven assessment of SUSE’s presence, perception, and alignment within the global developer and open-source landscape. By integrating insights from the Stack Overflow Developer Survey, Google Trends, GitHub activity, GitHub issue sentiment, and Reddit discussions, the analysis offers a multi-layered view of how SUSE compares with key competitors—particularly Red Hat—and how the broader technical community engages with SUSE technologies. It is important to note that GitHub Issues and Reddit data were limited to approximately one month of available data, which constrains the depth of historical trend analysis, though still provides valuable directional insights into current community sentiment and interaction patterns.

      The Developer Survey analysis reveals how Linux users differ from non-Linux users in terms of platform choices, programming languages, professional roles, and technology preferences. This highlights the size and characteristics of SUSE’s core audience, while also identifying the tools and languages most relevant to SUSE’s ecosystem. Analyses of DevOps, SREs, SysAdmins, and cloud-native roles further quantify SUSE’s addressable market and assess alignment with industry trends.

      The Google Trends analysis adds an external perspective on brand interest, showing how public attention toward SUSE and Red Hat evolves over time and across regions. Related search terms provide insight into how each brand is associated with specific technologies and topics, highlighting opportunities for increased visibility or repositioning.

      The GitHub repository overview offers a look at SUSE’s open-source footprint relative to Red Hat, focusing on repository activity, stars, forks, issues, and programming language diversity. Trends in repository creation and updates illustrate innovation momentum and community engagement, while language usage highlights SUSE’s technical direction and ecosystem breadth.

      The SUSE GitHub Issues analysis deepens understanding of community interaction by examining issue volume, resolution speed, contributor patterns, and sentiment expressed in issue titles, bodies, and comments. Although based on one month of data, this analysis provides meaningful insights into developer satisfaction, recurring challenges, and project health. Categorization of issues helps identify potential areas for product improvement or documentation enhancement.

      The Reddit analysis extends sentiment exploration into broader public discussions, comparing SUSE-related and Red Hat–related posts and comments. Despite the one-month limitation, sentiment trends, discussion categories, and key influencers reveal how SUSE is perceived in informal technical communities and what factors drive positive or negative sentiment.

      Together, these components create a holistic view of SUSE’s position across developer preferences, market interest, community engagement, and open-source activity. The combined insights support strategic decision-making for product development, community outreach, marketing, and competitive positioning—helping SUSE understand where it stands today and where the strongest opportunities exist within the modern infrastructure and cloud-native ecosystem.

    • terezacerna, 2 months ago

      Demo View: LINK

      Full Power BI Report: LINK (additional access may be required)

    • terezacerna, 2 months ago

      Obstacles and limitations I have encountered:

      1. I was limited in the number of items I could scrape via the GitHub and Reddit APIs, and I could only get roughly the last month of data from both platforms.

      2. Since I last explored this, Microsoft has moved the cognitive AI analyses such as sentiment analysis and categorization behind separate licensing, which we don't have available at SUSE. I therefore had to change my plan and use Gemini outside of Power BI for these analyses (see the sketch after this list).

      3. Analyzing Stack Overflow could and should take much longer to build a real profile of a SUSE community user. For that I would need help from someone who knows SUSE products well technically and ideally has some marketing knowledge as well.

      4. A next step for this analysis could be to examine when a community user becomes a paying customer.
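
      For illustration, a minimal sketch of what classifying sentiment with Gemini outside of Power BI can look like (not the exact pipeline used here; it assumes the google-generativeai Python client, and the model name and prompt are placeholders):

      ```python
      # Illustrative sketch only: classify comment sentiment with the Gemini API
      # and export a CSV that Power BI can import. The API key, model name and
      # prompt are placeholders, not the exact ones used in this project.
      import csv, os
      import google.generativeai as genai

      genai.configure(api_key=os.environ["GEMINI_API_KEY"])
      model = genai.GenerativeModel("gemini-1.5-flash")

      comments = [
          "openSUSE Tumbleweed has been rock solid for me.",
          "The upgrade broke my setup again, really frustrating.",
      ]

      rows = []
      for text in comments:
          prompt = ("Classify the sentiment of the following comment as "
                    "positive, negative or neutral. Reply with one word only.\n\n" + text)
          label = model.generate_content(prompt).text.strip().lower()
          rows.append({"comment": text, "sentiment": label})

      with open("sentiment.csv", "w", newline="") as f:
          writer = csv.DictWriter(f, fieldnames=["comment", "sentiment"])
          writer.writeheader()
          writer.writerows(rows)
      ```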

    Similar Projects

    Kubernetes-Based ML Lifecycle Automation by lmiranda

    Description

    This project aims to build a complete end-to-end Machine Learning pipeline running entirely on Kubernetes, using Go and containerized ML components.

    The pipeline will automate the lifecycle of a machine learning model, including:

    • Data ingestion/collection
    • Model training as a Kubernetes Job
    • Model artifact storage in an S3-compatible registry (e.g. Minio)
    • A Go-based deployment controller that automatically deploys new model versions to Kubernetes using Rancher
    • A lightweight inference service that loads and serves the latest model
    • Monitoring of model performance and service health through Prometheus/Grafana

    The outcome is a working prototype of an MLOps workflow that demonstrates how AI workloads can be trained, versioned, deployed, and monitored using the Kubernetes ecosystem.

    Goals

    By the end of Hack Week, the project should:

    1. Produce a fully functional ML pipeline running on Kubernetes with:

      • Data collection job
      • Training job container
      • Storage and versioning of trained models
      • Automated deployment of new model versions
      • Model inference API service
      • Basic monitoring dashboards
    2. Showcase a Go-based deployment automation component, which scans the model registry and automatically generates and applies Kubernetes manifests for new model versions (a rough sketch of this loop follows the list).

    3. Enable continuous improvement by making the system modular and extensible (e.g., additional models, metrics, autoscaling, or drift detection can be added later).

    4. Prepare a short demo explaining the end-to-end process and how new models flow through the system.
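
    The Go controller itself is not sketched here; purely to illustrate the scan-and-redeploy loop from goal 2, the following Python sketch uses boto3 against a MinIO endpoint and the official Kubernetes client, simplified to patching an environment variable on the existing Deployment instead of generating full manifests (bucket, deployment, and credential values are hypothetical):

    ```python
    # Illustrative sketch of the registry-scan / redeploy loop (the project itself
    # plans this component in Go). All names and credentials are placeholders.
    import boto3
    from kubernetes import client, config

    s3 = boto3.client("s3", endpoint_url="http://minio:9000",
                      aws_access_key_id="minio", aws_secret_access_key="minio123")

    def latest_model_version(bucket="models", prefix="classifier/"):
        """Return the lexicographically newest object key under the prefix."""
        objects = s3.list_objects_v2(Bucket=bucket, Prefix=prefix).get("Contents", [])
        return max((o["Key"] for o in objects), default=None)

    def redeploy(model_key, namespace="ml", deployment="inference"):
        """Point the inference Deployment at the new model via an env var."""
        config.load_kube_config()  # or load_incluster_config() inside the cluster
        apps = client.AppsV1Api()
        patch = {"spec": {"template": {"spec": {"containers": [
            {"name": "inference", "env": [{"name": "MODEL_KEY", "value": model_key}]}
        ]}}}}
        apps.patch_namespaced_deployment(deployment, namespace, patch)

    key = latest_model_version()
    if key:
        redeploy(key)
    ```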

    Resources

    Project Repository

    Updates

    1. Training pipeline and datasets
    2. Inference Service py


    Explore LLM evaluation metrics by thbertoldi

    Description

    Learn the best practices for evaluating LLM performance with an open-source framework such as DeepEval.
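
    As a first orientation, a minimal DeepEval-style check could look like the following (a sketch based on DeepEval's public quickstart; the metric needs an LLM judge configured, e.g. an OpenAI API key, and the example strings are made up):

    ```python
    # Minimal DeepEval sketch: score one model answer for answer relevancy.
    # Requires `pip install deepeval` and an LLM judge (e.g. OPENAI_API_KEY set).
    from deepeval import evaluate
    from deepeval.metrics import AnswerRelevancyMetric
    from deepeval.test_case import LLMTestCase

    test_case = LLMTestCase(
        input="How do I enable automatic updates on openSUSE?",
        actual_output="You can use the yast2 online_update_configuration module.",
    )

    evaluate(test_cases=[test_case], metrics=[AnswerRelevancyMetric(threshold=0.7)])
    ```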

    Goals

    Curate the knowledge learned during practice and present it to colleagues.

    -> Maybe publish a blog post on SUSE's blog?

    Resources

    https://deepeval.com

    https://docs.pactflow.io/docs/bi-directional-contract-testing


    Try AI training with ROCm and LoRA by bmwiedemann

    Description

    I want to set up a Radeon RX 9060 XT 16 GB at home with ROCm on Slowroll.

    Goals

    I want to test how fast AI inference can get with the GPU and if I can use LoRA to re-train an existing free model for some task.
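
    For the LoRA part, a commonly used route with free models is Hugging Face PEFT on top of transformers. The following is a hedged sketch, not a tested recipe for this GPU: model and dataset names are placeholders, and on ROCm the PyTorch ROCm build has to be installed instead of the default CUDA wheel.

    ```python
    # Sketch: attach LoRA adapters to a small free model with Hugging Face PEFT.
    # Model/dataset names are placeholders and the training setup is minimal.
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer, TrainingArguments)
    from peft import LoraConfig, get_peft_model
    from datasets import load_dataset

    base = "facebook/opt-125m"  # placeholder small model
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # only the adapter weights are trained

    data = load_dataset("imdb", split="train[:1%]")
    data = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=256),
                    batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="lora-out", max_steps=50),
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
    )
    trainer.train()
    ```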

    Resources

    • https://rocm.docs.amd.com/en/latest/compatibility/compatibility-matrix.html
    • https://build.opensuse.org/project/show/science:GPU:ROCm
    • https://src.opensuse.org/ROCm/
    • https://www.suse.com/c/lora-fine-tuning-llms-for-text-classification/

    Results

    got inference working with llama.cpp:

    # Build llama.cpp with ROCm/HIP offloading for the gfx1200 GPU target
    export LLAMACPP_ROCM_ARCH=gfx1200
    HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -R)" \
    cmake -S . -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=$LLAMACPP_ROCM_ARCH \
    -DCMAKE_BUILD_TYPE=Release -DLLAMA_CURL=ON \
    -Dhipblas_DIR=/usr/lib64/cmake/hipblaslt/ \
    && cmake --build build --config Release -j8
    # Serve the model, offloading all layers to the discrete GPU (ROCm0)
    m=models/gpt-oss-20b-mxfp4.gguf
    cd $P/llama.cpp && build/bin/llama-server --model $m --threads 8 --port 8005 --host 0.0.0.0 --device ROCm0 --n-gpu-layers 999
    

    Without the --device option it faulted. Maybe because my APU also appears there?

    I updated/fixed various related packages: https://src.opensuse.org/ROCm/rocm-examples/pulls/1, https://src.opensuse.org/ROCm/hipblaslt/pulls/1, and SR 1320959.

    Benchmark

    I benchmarked inference with llama.cpp + gpt-oss-20b-mxfp4.gguf and ROCm offloading to a Radeon RX 9060 XT 16GB. I varied the number of layers that went to the GPU:

    • 0 layers 14.49 tokens/s (8 CPU cores)
    • 9 layers 17.79 tokens/s 34% VRAM
    • 15 layers 22.39 tokens/s 51% VRAM
    • 20 layers 27.49 tokens/s 64% VRAM
    • 24 layers 41.18 tokens/s 74% VRAM
    • 25+ layers 86.63 tokens/s 75% VRAM (only 200% CPU load)

    So there is a significant performance boost if the whole model fits into the GPU's VRAM.


    AI-Powered Unit Test Automation for Agama by joseivanlopez

    The Agama project is a multi-language Linux installer that leverages the distinct strengths of several key technologies:

    • Rust: Used for the back-end services and the core HTTP API, providing performance and safety.
    • TypeScript (React/PatternFly): Powers the modern web user interface (UI), ensuring a consistent and responsive user experience.
    • Ruby: Integrates existing, robust YaST libraries (e.g., yast-storage-ng) to reuse established functionality.

    The Problem: Testing Overhead

    Developing and maintaining code across these three languages requires a significant, tedious effort in writing, reviewing, and updating unit tests for each component. This high cost of testing is a drain on developer resources and can slow down the project's evolution.

    The Solution: AI-Driven Automation

    This project aims to eliminate the manual overhead of unit testing by exploring and integrating AI-driven code generation tools. We will investigate how AI can:

    1. Automatically generate new unit tests as code is developed.
    2. Intelligently correct and update existing unit tests when the application code changes.

    By automating this crucial but monotonous task, we can free developers to focus on feature implementation and significantly improve the speed and maintainability of the Agama codebase.

    Goals

    • Proof of Concept: Successfully integrate and demonstrate an authorized AI tool (e.g., gemini-cli) to automatically generate unit tests.
    • Workflow Integration: Define and document a new unit test automation workflow that seamlessly integrates the selected AI tool into the existing Agama development pipeline.
    • Knowledge Sharing: Establish a set of best practices for using AI in code generation, sharing the learned expertise with the broader team.

    Contribution & Resources

    We are seeking contributors interested in AI-powered development and improving developer efficiency. Whether you have previous experience with code generation tools or are eager to learn, your participation is highly valuable.

    If you want to dive deep into AI for software quality, please reach out and join the effort!

    • Authorized AI Tools: Tools supported by SUSE (e.g., gemini-cli)
    • Focus Areas: Rust, TypeScript, and Ruby components within the Agama project.

    Interesting Links


    SUSE Edge Image Builder MCP by eminguez

    Description

    Based on my other Hack Week project, SUSE Edge Image Builder's Json Schema, I would like to also build an MCP to be able to generate EIB config files the AI way.

    Realistically, I don't think I'll be able to have something consumable at the end of this hackweek, but at least I would like to start exploring MCPs, the difference between an API and an MCP, etc.

    Goals

    • Familiarize myself with MCPs
    • Unrealistic: Have an MCP that can generate an EIB config file

    Resources

    Result

    https://github.com/e-minguez/eib-mcp

    I've extensively used Antigravity and its agent mode to code this. It relies heavily on https://hackweek.opensuse.org/25/projects/suse-edge-image-builder-json-schema for building the MCP.

    I've ended up learning a lot of things about "prompting", json schemas in general, some golang, MCPs and AI in general :)

    Example:

    Generate an Edge Image Builder configuration for an ISO image based on slmicro-6.2.iso, targeting x86_64 architecture. The output name should be 'my-edge-image' and it should install to /dev/sda. It should deploy a 3 nodes kubernetes cluster with nodes names "node1", "node2" and "node3" as:

    • hostname: node1, IP: 1.1.1.1, role: initializer
    • hostname: node2, IP: 1.1.1.2, role: agent
    • hostname: node3, IP: 1.1.1.3, role: agent

    The kubernetes version should be k3s 1.33.4-k3s1 and it should deploy a cert-manager helm chart (the latest one available according to https://cert-manager.io/docs/installation/helm/). It should create a user called "suse" with password "suse" and set ntp to "foo.ntp.org". The VIP address for the API should be 1.2.3.4

    Generates:

    ```
    apiVersion: "1.0"
    image:
      arch: x86_64
      baseImage: slmicro-6.2.iso
      imageType: iso
      outputImageName: my-edge-image
    kubernetes:
      helm:
        charts:
          - name: cert-manager
            repositoryName: jetstack
    # ... (rest of the generated config not shown)
    ```


    issuefs: FUSE filesystem representing issues (e.g. JIRA) for the use with AI agents code-assistants by llansky3

    Description

    Creating a FUSE filesystem (issuefs) that mounts issues from various ticketing systems (GitHub, Jira, Bugzilla, Redmine) as files in your local file system.

    And why is this a good idea?

    • Users can use their favorite command-line tools to view and search tickets from various sources.
    • Users can use AI agent capabilities from their favorite IDE or CLI to ask questions about the issues, project, or functionality, with the relevant tickets provided as context without extra work.
    • Users can use it when developing new features and letting an AI agent jump-start the solution. issuefs gives the agent context about the bug or requested feature (the agent simply reads a few more files), so there is no need to copy and paste issues into the prompt or to use extra MCP tools to access them. Those options remain possible, but this approach is deliberately different.
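
    To make the idea concrete, here is a hedged read-only sketch of such a filesystem using the fusepy library (this is not code from the prototype repository; the issue content is hard-coded instead of being fetched from GitHub or Jira):

    ```python
    # Sketch: expose a dict of "issues" as read-only files via FUSE (fusepy).
    # In the real issuefs the content would be fetched from a ticketing system.
    import errno, stat, time
    from fuse import FUSE, FuseOSError, Operations

    class IssueFS(Operations):
        def __init__(self, issues):
            self.files = {name: text.encode() for name, text in issues.items()}
            self.now = time.time()

        def getattr(self, path, fh=None):
            if path == "/":
                return dict(st_mode=stat.S_IFDIR | 0o755, st_nlink=2,
                            st_ctime=self.now, st_mtime=self.now, st_atime=self.now)
            name = path.lstrip("/")
            if name in self.files:
                return dict(st_mode=stat.S_IFREG | 0o444, st_nlink=1,
                            st_size=len(self.files[name]),
                            st_ctime=self.now, st_mtime=self.now, st_atime=self.now)
            raise FuseOSError(errno.ENOENT)

        def readdir(self, path, fh):
            return [".", ".."] + list(self.files)

        def read(self, path, size, offset, fh):
            return self.files[path.lstrip("/")][offset:offset + size]

    if __name__ == "__main__":
        issues = {"GH-1.md": "# GH-1: example issue\nSteps to reproduce...\n"}
        FUSE(IssueFS(issues), "/tmp/issuefs", foreground=True, ro=True)
    ```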

    Goals

    1. Add GitHub issue support
    2. Prove the concept/approach by applying it to issuefs itself, using GitHub issues to track and develop new features
    3. Add support for Bugzilla and Redmine, using this approach while doing it. Record a video of it.
    4. Clean up and test the implementation and create some documentation
    5. Create a blog post about this approach

    Resources

    There is a prototype implementation here. This currently sort of works with JIRA only.


    The Agentic Rancher Experiment: Do Androids Dream of Electric Cattle? by moio

    Rancher is a beast of a codebase. Let's investigate if the new 2025 generation of GitHub Autonomous Coding Agents and Copilot Workspaces can actually tame it.

    [Project mascot: a GitHub robot trying to lasso a blue bull with a Kubernetes logo tattooed on it]


    The Plan

    Create a sandbox GitHub Organization, clone in key Rancher repositories, and let the AI loose to see if it can handle real-world enterprise OSS maintenance - or if it just hallucinates new breeds of Kubernetes resources!

    Specifically, throw "Agentic Coders" some typical tasks in a complex, long-lived open-source project, such as:


    • The Grunt Work: generate missing GoDocs, unit tests, and refactorings. Rebase PRs.
    • The Complex Stuff: fix actual (historical) bugs and feature requests to see if they can traverse the complexity without (too much) human hand-holding.
    • Hunting Down Gaps: find areas lacking in docs, areas of improvement in code, dependency bumps, and so on.


    If time allows, also experiment with Model Context Protocol (MCP) to give agents context on our specific build pipelines and CI/CD logs.

    Why?

    We know AI can write "Hello World." and also moderately complex programs from a green field. But can it rebase a 3-month-old PR with conflicts in rancher/rancher? I want to find the breaking point of current AI agents to determine if and how they can help us to reduce our technical debt, work faster and better. At the same time, find out about pitfalls and shortcomings.

    The CONCLUSION!!!

    A State of the Union document was compiled to summarize lessons learned this week. For more gory details, just read on in the diary below!