Last Hack Week filtra was created – a tool that extracts information like lead and cycle times from GitHub repos for (but not limited to) projects doing Kanban. The collected metrics can then be visualized with Grafana.
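
For context: lead time is measured from the moment an issue is opened until it is closed, while cycle time only counts from the moment work actually starts (e.g. the card enters an in-progress column). A minimal Go sketch of that arithmetic (the Issue type and its fields are illustrative, not filtra's actual data model):

```go
package main

import (
	"fmt"
	"time"
)

// Issue carries the three timestamps needed for the two Kanban metrics.
// The type and field names are illustrative, not filtra's actual model.
type Issue struct {
	CreatedAt time.Time // issue opened
	StartedAt time.Time // work started (card moved to in-progress)
	ClosedAt  time.Time // issue closed
}

// LeadTime: total time from opening an issue to closing it.
func LeadTime(i Issue) time.Duration { return i.ClosedAt.Sub(i.CreatedAt) }

// CycleTime: time from starting the work to closing the issue.
func CycleTime(i Issue) time.Duration { return i.ClosedAt.Sub(i.StartedAt) }

func main() {
	opened := time.Date(2019, 11, 4, 9, 0, 0, 0, time.UTC)
	i := Issue{
		CreatedAt: opened,
		StartedAt: opened.Add(48 * time.Hour),
		ClosedAt:  opened.Add(120 * time.Hour),
	}
	fmt.Println(LeadTime(i), CycleTime(i)) // 120h0m0s 72h0m0s
}
```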

Currently there are two problems with filtra:

  1. There are two branches: the master branch, which can query information from one project per instance, and the multi-board branch, which can query multiple projects. Unfortunately, the metrics look slightly different on the two branches, and this needs to be fixed. Unit tests are also badly needed! See: Metrics are different #20 and Missing Tests #23
  2. Currently filtra is basically a Prometheus endpoint. But Prometheus is not a perfect fit for this use case: we collect only a few metrics per day yet need very long retention, which is the opposite of a typical Prometheus workload. So the conclusion was that PostgreSQL would be a better fit for storing that data.
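
A rough sketch of what that storage could look like (not filtra's actual code; the schema, the driver choice, and the connection string are all assumptions):

```go
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/lib/pq" // hypothetical driver choice for PostgreSQL
)

func main() {
	db, err := sql.Open("postgres", "postgres://filtra@localhost/filtra?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// One row per sample: only a few rows per day, but kept forever.
	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS metrics (
		recorded_at TIMESTAMPTZ NOT NULL,
		repo        TEXT NOT NULL,
		name        TEXT NOT NULL, -- e.g. "lead_time_hours"
		value       DOUBLE PRECISION NOT NULL
	)`); err != nil {
		log.Fatal(err)
	}

	if _, err := db.Exec(
		`INSERT INTO metrics (recorded_at, repo, name, value) VALUES ($1, $2, $3, $4)`,
		time.Now(), "org/repo", "lead_time_hours", 37.5,
	); err != nil {
		log.Fatal(err)
	}
}
```

Grafana ships a PostgreSQL data source, so dashboards could query such a table directly. (As the comment below notes, the project later settled on SQLite; with database/sql that mostly means swapping the driver and making minor schema tweaks.)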

This project is part of:

Hack Week 19

Activity

  • over 5 years ago: PSuarezHernandez liked this project.
  • over 5 years ago: jcavalheiro liked this project.
  • over 5 years ago: jcavalheiro joined this project.
  • over 5 years ago: jochenbreuer started this project.
  • over 5 years ago: jochenbreuer added keyword "go" to this project.
  • over 5 years ago: jochenbreuer added keyword "golang" to this project.
  • over 5 years ago: jochenbreuer added keyword "github" to this project.
  • over 5 years ago: jochenbreuer added keyword "metrics" to this project.
  • over 5 years ago: jochenbreuer added keyword "graphql" to this project.
  • over 5 years ago: jochenbreuer added keyword "grafana" to this project.
  • over 5 years ago: jochenbreuer added keyword "postgresql" to this project.
  • over 5 years ago: jochenbreuer added keyword "leadtime" to this project.
  • over 5 years ago: jochenbreuer added keyword "cycletime" to this project.
  • over 5 years ago: jochenbreuer added keyword "projectmanagement" to this project.
  • over 5 years ago: jochenbreuer originated this project.

  • Comments

    • jochenbreuer, over 5 years ago:

      First update: We'll go with SQLite instead of PostgreSQL.

    Similar Projects

    Mammuthus - The NFS-Ganesha inside Kubernetes controller by vcheng

    Description

    As a user-space NFS provider, NFS-Ganesha is widely used by several projects, e.g. Longhorn and Rook. We want to create a Kubernetes controller that makes configuring NFS-Ganesha easy. The controller will let users configure NFS-Ganesha through different backends like VFS/CephFS.
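
    Such a controller typically boils down to a reconcile loop. A minimal sketch on top of controller-runtime (the NFSGanesha custom resource and the reconcile steps are hypothetical, not Mammuthus' actual API):

    ```go
    package controller

    import (
        "context"

        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // NFSGaneshaReconciler reconciles a hypothetical NFSGanesha custom resource
    // that declares the desired export and its backend (VFS or CephFS).
    type NFSGaneshaReconciler struct {
        client.Client
    }

    func (r *NFSGaneshaReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
        // Sketch of the reconcile steps:
        //  1. Fetch the NFSGanesha object named by req.NamespacedName.
        //  2. Render a ganesha.conf for the requested backend (VFS or CephFS).
        //  3. Create or update the ConfigMap, Deployment and Service running NFS-Ganesha.
        //  4. Write the observed state back into the object's status.
        return ctrl.Result{}, nil
    }
    ```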

    Goals

    1. Create NFS-Ganesha Package on OBS: nfs-ganesha5, nfs-ganesha6
    2. Create NFS-Ganesha Container Image on OBS: Image
    3. Create a Kubernetes controller for NFS-Ganesha and support the VFS configuration on demand. Mammuthus

    Resources

    NFS-Ganesha


    terraform-provider-feilong by e_bischoff

    Project Description

    People need to test operating systems and applications on the s390 platform.

    Solutions for installation from scratch include:

    • just deploy and provision manually (with the help of the ftpboot script, if you are at SUSE)
    • use s3270 terminal emulation (used by openQA people?)
    • use LXC from IBM to start CP commands and analyze the results
    • use zPXE to do some PXE-alike booting (used by the orthos team?)
    • use tessia to install from scratch using autoyast
    • use libvirt for s390 to do nested virtualization on an already deployed z/VM system
    • directly install a Linux kernel on an LPAR and use kvm + libvirt from there

    Solutions for deployment from image include:

    • use the ICIC web interface (OpenStack in disguise, contributed by IBM)
    • use ICIC from the OpenStack terraform provider (used by Rancher QA)
    • use zvm_ansible to control SMAPI
    • connect directly to SMAPI low-level socket interface

    IBM Cloud Infrastructure Center (ICIC) harnesses the Feilong API, but you can use Feilong without installing ICIC, provided you set up a "z/VM cloud connector" in one of your VMs, following this schema.

    What about writing a terraform Feilong provider, just like we have the terraform libvirt provider? That would allow transparently calling Feilong from your main.tf files to deploy and destroy resources on your system/z.
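
    For illustration, a terraform provider is essentially a Go binary that serves resource schemas with CRUD callbacks. A bare-bones sketch with the terraform-plugin-sdk (the feilong_guest resource and its attributes are made up here, not the provider's final interface):

    ```go
    package main

    import (
        "context"

        "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
        "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
        "github.com/hashicorp/terraform-plugin-sdk/v2/plugin"
    )

    func main() {
        plugin.Serve(&plugin.ServeOpts{
            ProviderFunc: func() *schema.Provider {
                return &schema.Provider{
                    ResourcesMap: map[string]*schema.Resource{
                        "feilong_guest": resourceFeilongGuest(), // hypothetical resource name
                    },
                }
            },
        })
    }

    func resourceFeilongGuest() *schema.Resource {
        noop := func(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
            return nil // real code would query the Feilong REST API here
        }
        return &schema.Resource{
            CreateContext: createGuest,
            ReadContext:   noop,
            DeleteContext: noop,
            Schema: map[string]*schema.Schema{
                "name":   {Type: schema.TypeString, Required: true, ForceNew: true},
                "vcpus":  {Type: schema.TypeInt, Optional: true, Default: 1, ForceNew: true},
                "memory": {Type: schema.TypeString, Optional: true, Default: "512M", ForceNew: true},
            },
        }
    }

    func createGuest(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
        // Here the provider would call the z/VM cloud connector (Feilong) API to
        // create and start the guest, then record its identifier for later reads.
        d.SetId(d.Get("name").(string))
        return nil
    }
    ```

    With such a plugin installed, a plain main.tf declaring a feilong_guest resource would let terraform apply and terraform destroy drive the z/VM system.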

    Other Feilong-based solutions include:

    • make libvirt Feilong-aware
    • simply call Feilong from shell scripts with curl
    • use the zvmconnector client Python library from Feilong
    • use the zthin part of Feilong to directly command SMAPI

    Goal for Hackweek 23

    My final goal is to be able to easily deploy and provision VMs automatically on a z/VM system, in a way that people might enjoy even outside of SUSE.

    My technical preference is to write a terraform provider plugin, as that approach involves the fewest software components in our deployments while remaining clean and compatible with our existing development infrastructure.

    Goals for Hackweek 24

    The Feilong provider works and is used internally by the SUSE Manager team. Let's push it forward!

    Let's add support for Fibre Channel disks and multipath.

    Possible goals for Hackweek 25

    Modernization, maturity, and maintenance.


    Flaky Tests AI Finder for Uyuni and MLM Test Suites by oscar-barrios

    Description

    Our current Grafana dashboards provide a great overview of test suite health, including a panel for "Top failed tests." However, identifying which of these failures are due to legitimate bugs versus intermittent "flaky tests" is a manual, time-consuming process. These flaky tests erode trust in our test suites and slow down development.

    This project aims to build a simple but powerful Python script that automates flaky test detection. The script will directly query our Prometheus instance for the historical data of each failed test, using the jenkins_build_test_case_failure_age metric. It will then format this data and send it to the Gemini API with a carefully crafted prompt, asking it to identify which tests show a flaky pattern.

    The final output will be a clean JSON list of the most probable flaky tests, which can then be used to populate a new "Top Flaky Tests" panel in our existing Grafana test suite dashboard.
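
    The plan is a Python script; purely to make the data flow concrete, here is a rough sketch of the first step, the Prometheus HTTP API query, in Go (the Prometheus URL, the exact PromQL, and the label names are assumptions):

    ```go
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "net/http"
        "net/url"
    )

    // promResponse mirrors the part of Prometheus' /api/v1/query reply we need.
    type promResponse struct {
        Data struct {
            Result []struct {
                Metric map[string]string `json:"metric"`
                Value  []interface{}     `json:"value"` // [unix timestamp, "value"]
            } `json:"result"`
        } `json:"data"`
    }

    func main() {
        // Assumed address and query; the real script would take these as config.
        base := "http://prometheus.example.com"
        query := `max_over_time(jenkins_build_test_case_failure_age[30d])`

        resp, err := http.Get(base + "/api/v1/query?query=" + url.QueryEscape(query))
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        var pr promResponse
        if err := json.NewDecoder(resp.Body).Decode(&pr); err != nil {
            log.Fatal(err)
        }
        for _, r := range pr.Data.Result {
            // "case" as the test-case label is an assumption about our metric.
            fmt.Println(r.Metric["case"], r.Value)
        }
    }
    ```

    Each test's failure history would then be embedded into the Gemini prompt, and the model's answer parsed down to the JSON list that feeds the Grafana panel.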

    Goals

    By the end of Hack Week, we aim to have a single, working Python script that:

    1. Connects to Prometheus and executes a query to fetch detailed test failure history.
    2. Processes the raw data into a format suitable for the Gemini API.
    3. Successfully calls the Gemini API with the data and a clear prompt.
    4. Parses the AI's response to extract a simple list of flaky tests.
    5. Saves the list to a JSON file that can be displayed in Grafana.
    6. Populates the new "Top Flaky Tests" panel in our dashboard.

    Resources