Project Description

At SCC, we have a rotating task called COOTW (Commanding Officer of the Week). This task involves responding to customer requests from Jira and Slack help channels, monitoring production systems, and doing small chores. Usually, we have documentation to help the COOTW answer questions and quickly find fixes, but most of it is distributed across GitHub, Trello, and the SUSE Support knowledge base. The aim of this project is to explore the magic of LLMs and create a conversational bot.

Goals for this Hackweek

  • Build the data ingestion pipeline (a minimal sketch follows below). Data sources:
    • SUSE KB docs
    • SCC GitHub docs
    • SCC Trello knowledge board
  • Test out a new RAG architecture

  • Repository: https://gitlab.suse.de/ngetahun/cootwbot
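
A minimal sketch of the ingestion step, assuming a sentence-transformers embedding model and a Chroma vector store; the chunking parameters, collection name, and placeholder documents are illustrative assumptions, not project decisions:

```python
# Chunk already-fetched documents, embed them, and store them in a vector DB.
import chromadb
from sentence_transformers import SentenceTransformer

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows (naive chunking)."""
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small, CPU-friendly
client = chromadb.PersistentClient(path="./cootw_index")
collection = client.get_or_create_collection("cootw-docs")

# In the real pipeline these would come from the KB/GitHub/Trello fetchers.
docs = {
    "kb-0001": "...text of a SUSE KB article...",
    "gh-readme": "...text of an SCC GitHub doc...",
}

for doc_id, text in docs.items():
    chunks = chunk(text)
    collection.add(
        ids=[f"{doc_id}-{i}" for i in range(len(chunks))],
        documents=chunks,
        embeddings=embedder.encode(chunks).tolist(),
        metadatas=[{"source": doc_id}] * len(chunks),
    )
```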

Looking for hackers with the skills:

ai, llm, console, gpu, ml, scc

This project is part of:

Hack Week 23, Hack Week 24


    Similar Projects

    Make more sense of openQA test results using AI by livdywan

    Description

    AI has the potential to help with something many of us spend a lot of time doing: making sense of openQA logs when a job fails.

    User Story

    Allison Average has a puzzled look on their face while staring at log files that seem to make little sense. Is this a known issue, something completely new or maybe related to infrastructure changes?

    Goals

    • Leverage a chat interface to help Allison
    • Create a model from scratch based on data from openQA
    • Proof of concept for automated analysis of openQA test results

    Bonus

    • Use AI to suggest solutions to merge conflicts
      • This would need a merge conflict editor that can suggest solving the conflict
    • Use image recognition for needles

    Resources

    Timeline

    Day 1

    • Conversing with open-webui to teach me how to create a model based on openQA test results
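
    One plausible route, assuming an Ollama backend behind open-webui, is to bake openQA-specific instructions into a derived model via a Modelfile; the base model and prompt below are assumptions, not what was actually built:

    ```
    # Modelfile for a hypothetical "openqa-helper" model on a stock base model
    FROM llama3.1

    # Keep answers focused and fairly deterministic for log triage
    PARAMETER temperature 0.2

    SYSTEM """You are an openQA test-review assistant. Given excerpts from
    autoinst-log.txt or worker logs, summarize the probable failure cause and
    say whether it looks like a product bug, a test issue, or an
    infrastructure problem."""
    ```

    The model would then be registered with `ollama create openqa-helper -f Modelfile` and appear in the open-webui model picker.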

    Day 2

    Highlights

    • I briefly compared models to see if they would make me more productive. Between Llama, Gemma, and Mistral there was no notable difference in the results for my case.
    • Convincing the chat interface to produce code specific to my use case required very explicit instructions.
    • Asking for advice on how to use open-webui itself better was frustratingly unfruitful, for both trivial and more advanced questions.
    • Documentation on the source materials used by LLMs, and on tools for checking this, seems virtually non-existent; specifically, whether a logo can be generated based on particular licenses.

    Outcomes

    • Chat-interface-supported development provides good starting points, and open-webui, being open source, is more flexible than Gemini, although some fancy features such as grounding and generated podcasts are currently missing.
    • Allison still has to be very experienced with openQA to use a chat interface for test review. Publicly available system prompts would make that easier, though.


    SUSE AI Meets the Game Board by moio

    Use tabletopgames.ai’s open source TAG and PyTAG frameworks to apply Statistical Forward Planning and Deep Reinforcement Learning to two board games of our own design. On an all-green, all-open source, all-AWS stack!
    A chameleon playing chess in a train car, as a metaphor of SUSE AI applied to games


    Results: Infrastructure Achievements

    We successfully built and automated a containerized stack to support our AI experiments. This included:

    A screenshot of k9s and nvtop showing PyTAG running in Kubernetes with GPU acceleration

    ./deploy.sh and voilà - Kubernetes running PyTAG (k9s, above) with GPU acceleration (nvtop, below)
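
    For context, the way such a deployment typically requests a GPU from Kubernetes is a resource limit on the container; this generic fragment (pod and image names are illustrative, and the NVIDIA device plugin is assumed to be installed) shows the key line, and is not an excerpt from deploy.sh:

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: pytag-worker
    spec:
      containers:
        - name: pytag
          image: registry.example.com/pytag:latest  # illustrative image
          resources:
            limits:
              nvidia.com/gpu: 1   # schedule this pod onto a GPU node
    ```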

    Results: Game Design Insights

    Our project focused on modeling and analyzing two card games of our own design within the TAG framework:

    • Game Modeling: We implemented models for Dario's "Bamboo" and Silvio's "Totoro" and "R3" games, enabling AI agents to play thousands of games ...in minutes!
    • AI-driven optimization: By analyzing statistical data on moves, strategies, and outcomes, we iteratively tweaked the game mechanics and rules to achieve better balance and player engagement.
    • Advanced analytics: Leveraging AI agents with Monte Carlo Tree Search (MCTS) and random action selection, we compared performance metrics to identify optimal strategies and uncover opportunities for game refinement (the toy sketch below illustrates the comparison).
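
    The actual agents run inside the Java TAG framework, but the comparison loop is easy to show abstractly. Here is a toy Python sketch, using Nim as a stand-in game and flat Monte Carlo evaluation as a simplified stand-in for full MCTS (none of this is TAG/PyTAG code):

    ```python
    import random

    MOVES = (1, 2, 3)  # Nim: 15 stones, take 1-3, taking the last stone wins

    def legal(stones):
        return [m for m in MOVES if m <= stones]

    def random_agent(stones):
        return random.choice(legal(stones))

    def mc_agent(stones, n=200):
        """Flat Monte Carlo: run n random playouts per legal move and pick
        the move with the best win rate (a simplified stand-in for MCTS)."""
        def playout(s):
            # True iff the player to move at state s wins under random play
            turn = True
            while True:
                s -= random.choice(legal(s))
                if s == 0:
                    return turn
                turn = not turn
        best, best_rate = None, -1.0
        for m in legal(stones):
            if stones - m == 0:
                return m  # immediate win: take the last stone
            # After our move the opponent moves; we win iff they lose.
            wins = sum(not playout(stones - m) for _ in range(n))
            if wins / n > best_rate:
                best, best_rate = m, wins / n
        return best

    def play(first, second, stones=15):
        """Return 0 if `first` wins, 1 if `second` wins."""
        agents, i = (first, second), 0
        while True:
            stones -= agents[i](stones)
            if stones == 0:
                return i
            i = 1 - i

    games = 200
    mc_wins = sum(play(mc_agent, random_agent) == 0 for _ in range(games))
    print(f"flat-MC agent won {mc_wins}/{games} games going first")
    ```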

    Cards from the three games

    A family picture of our card games in progress. From the top: Bamboo, Totoro, R3

    Results: Learning, Collaboration, and Innovation

    Beyond technical accomplishments, the project showcased innovative approaches to coding, learning, and teamwork:

    • "Trio programming" with AI assistance: Our "trio programming" approach—two developers and GitHub Copilot—was a standout success, especially in handling slightly-repetitive but not-quite-exactly-copypaste tasks. Java as a language tends to be verbose and we found it to be fitting particularly well.
    • AI tools for reporting and documentation: We extensively used AI chatbots to streamline writing and reporting. (Including writing this report! ...but this note was added manually during edit!)
    • GPU compute expertise: Overcoming challenges with CUDA drivers and cloud infrastructure deepened our understanding of GPU-accelerated workloads in the open-source ecosystem.
    • Game design as a learning platform: By blending AI techniques with creative game design, we learned not only about AI strategies but also about making games fun, engaging, and balanced.

    Last but not least we had a lot of fun! ...and this was definitely not a chatbot generated line!

    The Context: AI + Board Games


    AI for product management by a_jaeger

    Description

    Learn about AI and how it can help me

    What are the tasks that a PM does where AI can help, and how?

    Goals

    • Investigate how AI can help with different tasks
    • Check out different AI tools and determine which one is best for which job
    • Summarize learning

    Resources

    • Reading some blog posts by PMs that looked into it
    • Popular and less popular AI tools

    Work is done SUSE internally at https://confluence.suse.com/display/~a_jaeger/Hackweek+25+-+AI+for+a+PM and subpages.


    Save pytorch models in OCI registries by jguilhermevanz

    Description

    A prerequisite for running applications in a cloud environment is the presence of a container registry. Another common scenario is users performing machine learning workloads in such environments. However, these types of workloads require dedicated infrastructure to run properly. We can leverage these two facts to help users save resources by storing their machine learning models in OCI registries, similar to how we handle some WebAssembly modules. This approach will save users the resources typically required for a machine learning model repository for the applications they need to run.

    Goals

    Allow PyTorch users to save and load machine learning models in OCI registries.
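
    A minimal sketch of the round trip, using oras-py as one possible client; the registry URL and tag are made up for the example, and the real project may well take a different route:

    ```python
    import torch
    import torchvision.models as models
    import oras.client

    # Serialize the model weights to an ordinary file
    model = models.resnet18(weights=None)
    torch.save(model.state_dict(), "resnet18.pt")

    # Push the file as an OCI artifact (assumes prior `client.login(...)`
    # or ambient registry credentials)
    client = oras.client.OrasClient()
    client.push(files=["resnet18.pt"],
                target="registry.example.com/ml/resnet18:v1")

    # Later: pull the artifact back and load the weights
    paths = client.pull(target="registry.example.com/ml/resnet18:v1")
    model.load_state_dict(torch.load(paths[0]))
    ```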

    Resources


    Learn how to integrate Elixir and Phoenix Liveview with LLMs by ninopaparo

    Description

    Learn how to integrate Elixir and Phoenix Liveview with LLMs by building an application that can provide answers to user queries based on a corpus of custom-trained data.

    Goals

    Develop an Elixir application via the Phoenix framework that:

    • Employs Retrieval Augmented Generation (RAG) techniques (see the sketch below)
    • Supports the integration and utilization of various Large Language Models (LLMs)
    • Is designed with extensibility and adaptability in mind to accommodate future enhancements and modifications
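
    The target stack is Elixir (likely Phoenix plus Bumblebee or an Ollama backend), but the retrieve-then-generate flow the app needs is language-agnostic; this Python sketch against a local Ollama server illustrates it, with the index path, collection, and model names as assumptions:

    ```python
    import chromadb
    import ollama
    from sentence_transformers import SentenceTransformer

    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    collection = chromadb.PersistentClient(
        path="./index").get_or_create_collection("docs")

    def answer(question: str) -> str:
        # Retrieve: embed the question and fetch the three closest chunks
        hits = collection.query(
            query_embeddings=[embedder.encode(question).tolist()], n_results=3
        )
        context = "\n\n".join(hits["documents"][0])
        # Generate: answer grounded in the retrieved context
        reply = ollama.chat(
            model="llama3.1",
            messages=[
                {"role": "system",
                 "content": f"Answer using only this context:\n{context}"},
                {"role": "user", "content": question},
            ],
        )
        return reply["message"]["content"]
    ```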

    Resources

    • https://elixir-lang.org/
    • https://www.phoenixframework.org/
    • https://github.com/elixir-nx/bumblebee
    • https://ollama.com/


    Gen-AI chatbots and test-automation of generated responses by mdati

    Description

    Start experimenting with the generative SUSE-AI chatbot: ask questions in different areas of knowledge or science and, where possible, analyze the quality of the LLM responses, both individually and comparatively, by checking the answers provided by different LLM models to the same query using proper quality metrics, tools, or methodologies.

    Try to define basic guidelines and requirements for quality test automation of AI-generated responses.

    A first investigation can be based on manual testing: its methodologies, findings, and data can then be useful to organize valid automated testing.

    Goals

    • Identify criteria and measuring scales for the assessment of a text content.
    • Define the quality of an answer/text based on the defined criteria.
    • Identify some knowledge sectors and a proper list of problems/questions per sector.
    • Manually run query sessions and apply the evaluation criteria to the answers.
    • Draft requirements for test automation of AI answers (see the sketch below).
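
    Once manual criteria exist, one way to bootstrap the automation is an LLM-as-judge loop that scores each answer against the defined criteria on a fixed scale; everything below (criteria, judge model, prompt) is a hypothetical sketch, not an agreed methodology:

    ```python
    import json
    import ollama

    CRITERIA = ["correctness", "completeness", "reasoning"]

    def judge(question: str, answer: str) -> dict:
        """Ask a judge model to score an answer 1-5 per criterion."""
        prompt = (
            f"Rate the ANSWER to the QUESTION on a 1-5 scale for each of "
            f"{', '.join(CRITERIA)}. Reply with JSON only, "
            f'e.g. {{"correctness": 4, "completeness": 3, "reasoning": 5}}.\n'
            f"QUESTION: {question}\nANSWER: {answer}"
        )
        reply = ollama.generate(model="llama3.1:8b", prompt=prompt,
                                format="json")
        return json.loads(reply["response"])

    # Compare two of the available models on the same query
    question = "What is the boiling point of water at sea level?"
    for model in ("gemma:2b", "llama3.1:8b"):
        answer = ollama.generate(model=model, prompt=question)["response"]
        print(model, judge(question, answer))
    ```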

    Resources

    • Announcement of SUSE-AI for Hack Week in Slack
    • Openplatform and the 3 related LLM models: gemma:2b, llama3.1:8b, qwen2.5-coder:3b.

    Notes

    • Foundation models (FMs):
      are large deep learning neural networks, trained on massive datasets, that have changed the way data scientists approach machine learning (ML). Rather than develop artificial intelligence (AI) from scratch, data scientists use a foundation model as a starting point to develop ML models that power new applications more quickly and cost-effectively.

    • Large language models (LLMs):
      are a category of foundation models pre-trained on immense amounts of data. They acquire their abilities by learning statistical relationships from vast amounts of text during a self- and semi-supervised training process, which makes them capable of understanding and generating natural language and other types of content to perform a wide range of tasks.
      LLMs can be used for generative AI (artificial intelligence) to produce content based on input prompts in human language.

    Validating an AI-generated answer is not an easy task, whether performed manually or automated.
    An LLM answer text shall contain a given level of information: correctness, completeness, description of reasoning, etc.
    We shall rely on properly applicable and measurable validation criteria to get an assessment within a limited amount of time and resources.


    Run local LLMs with Ollama and explore possible integrations with Uyuni by PSuarezHernandez

    Description

    Using Ollama you can easily run different LLM models on your local computer. This project is about exploring Ollama, testing different LLMs, and trying to fine-tune them, as well as exploring potential ways of integrating it with Uyuni.

    Goals

    • Explore Ollama
    • Test different models (see the sketch below)
    • Try fine-tuning a model
    • Explore possible integrations with Uyuni
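
    For the first two goals, the Ollama Python client mirrors the CLI; a minimal sketch, assuming an Ollama server on localhost and that the listed model names are ones you want to pull:

    ```python
    import ollama

    # Pull two models and ask each the same question
    # (equivalent to `ollama pull` followed by `ollama run`)
    for model in ("llama3.1", "mistral"):
        ollama.pull(model)
        reply = ollama.generate(model=model,
                                prompt="Summarize what Uyuni does.")
        print(f"--- {model} ---\n{reply['response']}\n")
    ```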

    Resources

    • https://ollama.com/
    • https://huggingface.co/
    • https://apeatling.com/articles/part-2-building-your-training-data-for-fine-tuning/


    New migration tool for Leap by lkocman

    Update

    I will call a meeting with other interested people at 11:00 CET https://meet.opensuse.org/migrationtool

    Description

    SLES 16 plans to ship without any YaST tooling. Leap 16 might keep some bits; however, we need a new tool for Leap-to-SLES migration, as this was previously handled by yast2-migration-sle.

    Goals

    A tool able to migrate Leap 16 to SLES 16. I would also like to cover other scenarios within openSUSE, as in many cases users would otherwise have to edit repository files manually (a sketch of that rewriting step follows the list below).

    • Leap -> Leap n+1 (minor and major version updates)
    • Leap -> SLES (docs)
    • Leap -> Tumbleweed
    • Leap -> Slowroll
    • Leap Micro -> Leap Micro n+1 (minor and major version updates)
    • Leap Micro -> MicroOS
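
    Each of these scenarios ultimately boils down to rewriting the URLs in the files under /etc/zypp/repos.d; the sketch below is only a hypothetical illustration of that single step (the URL patterns are examples), since the real opensuse-migration-tool does considerably more:

    ```python
    from pathlib import Path

    # Example: Leap -> Tumbleweed. URL patterns are illustrative.
    OLD = "https://download.opensuse.org/distribution/leap/16.0"
    NEW = "https://download.opensuse.org/tumbleweed"

    for repo in Path("/etc/zypp/repos.d").glob("*.repo"):
        text = repo.read_text()
        if OLD in text:
            repo.write_text(text.replace(OLD, NEW))
            print(f"rewrote {repo.name}")
    ```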

    Hackweek 24 update

    Marcela and I worked on the project from the Brno coworking space, as well as finalizing pieces after the hackweek. We've tested several migration scenarios and it works, but it needs further polishing and testing.

    The project was renamed to opensuse-migration-tool and submitted to the devel project: https://build.opensuse.org/requests/1227281

    Repository

    https://github.com/openSUSE/opensuse-migration-tool

    Out of scope is any migration to an immutable system. I know Richard already has some tool for that.

    Resources

    Tracker for YaST stack reduction: code-o-o/leap/features#173