Description

Start experimenting with the generative SUSE-AI chatbot by asking questions on different areas of knowledge or science, and analyze the quality of the LLM responses, both individually and comparatively, checking the answers that different LLM models provide to the same query against suitable quality metrics, tools, or methodologies.

Try to define basic guidelines and requirements for quality test automation of AI-generated responses.

A first round of investigation can be based on manual testing: its methodologies, findings and data can then be used to organize valid automated testing.
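
As a purely illustrative sketch of such a multi-model query session, the snippet below sends the same question to each of the three models listed under Resources and prints the answers side by side. It assumes an Ollama-compatible HTTP endpoint; the URL and the /api/generate route are assumptions, not a description of the actual platform setup.

```python
# Hedged sketch: ask the same question to several LLM models and collect the answers.
# Assumption: an Ollama-compatible endpoint is reachable at OLLAMA_URL.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumption, adjust to the real deployment
MODELS = ["gemma:2b", "llama3.1:8b", "qwen2.5-coder:3b"]

def ask(model: str, question: str) -> str:
    """Send one prompt to one model and return the generated text."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": question, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    question = "Explain the difference between nuclear fission and nuclear fusion."
    for model in MODELS:
        print(f"--- {model} ---")
        print(ask(model, question))
        print()
```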

Goals

  • Identify criteria and measuring scales for assessing text content (a minimal scoring sketch follows this list).
  • Define the quality of an answer/text based on those criteria.
  • Identify some knowledge sectors and a suitable list of problems/questions per sector.
  • Manually run query sessions and apply the evaluation criteria to the answers.
  • Draft requirements for test automation of AI answers.
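
As an illustration of the first two goals, the sketch below shows one possible rubric: four criteria scored on a 1-5 scale and combined into a weighted overall score. The criteria names, weights and scale are assumptions meant as a starting point, not project decisions.

```python
# Illustrative rubric sketch: per-criterion scores (1-5) combined into a weighted average.
# The criteria and weights below are assumptions, to be refined during the project.
from dataclasses import dataclass

CRITERIA_WEIGHTS = {
    "correctness": 0.4,   # factual accuracy of the answer
    "completeness": 0.3,  # coverage of the relevant aspects of the question
    "reasoning": 0.2,     # quality of the explanation / justification given
    "clarity": 0.1,       # readability and structure of the text
}

@dataclass
class AnswerScore:
    """Scores assigned by a human reviewer, one per criterion, on a 1-5 scale."""
    correctness: int
    completeness: int
    reasoning: int
    clarity: int

    def overall(self) -> float:
        """Weighted average of the per-criterion scores, still on the 1-5 scale."""
        return sum(
            CRITERIA_WEIGHTS[name] * getattr(self, name) for name in CRITERIA_WEIGHTS
        )

# Example: one answer from one model, scored manually.
score = AnswerScore(correctness=4, completeness=3, reasoning=4, clarity=5)
print(f"Overall score: {score.overall():.2f} / 5")
```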

Resources

  • Announcement of SUSE-AI for Hack Week in Slack
  • Openplatform and the 3 related LLM models: gemma:2b, llama3.1:8b, qwen2.5-coder:3b.

Notes

  • Foundation models (FMs):
    are large deep learning neural networks, trained on massive datasets, that have changed the way data scientists approach machine learning (ML). Rather than develop artificial intelligence (AI) from scratch, data scientists use a foundation model as a starting point to develop ML models that power new applications more quickly and cost-effectively.

  • Large language models (LLMs):
    are a category of foundation models pre-trained on immense amounts of data. They acquire their abilities by learning statistical relationships from vast amounts of text during a self- and semi-supervised training process, which makes them capable of understanding and generating natural language and other types of content and of performing a wide range of tasks.
    LLMs can be used for generative AI (artificial intelligence) to produce content based on input prompts in human language.

Validating an AI-generated answer is not an easy task, whether it is done manually or automatically.
An LLM answer should reach a given level of quality in terms of correctness, completeness, description of the reasoning, etc.
We shall rely on properly applicable and measurable validation criteria to obtain an assessment within a limited amount of time and resources.
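
As one example of a properly applicable and measurable criterion that could later be automated, the sketch below computes keyword coverage against a reference checklist. It is a crude proxy, not a replacement for human judgement, and the example question and keywords are made up for illustration.

```python
# Hedged sketch of a simple automatable metric: keyword coverage of an answer
# against a reference checklist prepared by a human for each question.
def keyword_coverage(answer: str, expected_keywords: list[str]) -> float:
    """Return the fraction of expected keywords mentioned in the answer (case-insensitive)."""
    text = answer.lower()
    hits = sum(1 for keyword in expected_keywords if keyword.lower() in text)
    return hits / len(expected_keywords) if expected_keywords else 0.0

# Hypothetical example for a physics question.
answer = "Fission splits heavy nuclei such as uranium, while fusion joins light nuclei like hydrogen."
expected = ["fission", "fusion", "uranium", "hydrogen", "energy"]
print(f"Keyword coverage: {keyword_coverage(answer, expected):.0%}")
```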

Looking for hackers with the skills:

ai llm

This project is part of:

Hack Week 24

Activity

  • 11 months ago: mdati added keyword "llm" to this project.
  • 11 months ago: mdati added keyword "ai" to this project.
  • 11 months ago: mdati liked this project.
  • 11 months ago: mdati started this project.
  • 11 months ago: mdati originated this project.

  • Comments

    • livdywan, 11 months ago:

      You might want to add an ai tag
