Description

Large language models like ChatGPT have demonstrated remarkable capabilities across a variety of applications. However, their potential for enhancing the Linux development and user ecosystem remains largely unexplored. This project seeks to bridge that gap by researching practical applications of LLMs to improve workflows in areas such as backporting, packaging, log analysis, system migration, and more. By identifying patterns that LLMs can leverage, we aim to uncover new efficiencies and automation strategies that can benefit developers, maintainers, and end users alike.

Goals

  • Evaluate Existing LLM Capabilities: Research and document the current state of LLM usage in open-source and Linux development projects, noting successes and limitations.
  • Prototype Tools and Scripts: Develop proof-of-concept scripts or tools that leverage LLMs to perform specific tasks like automated log analysis, assisting with backporting patches, or generating packaging metadata (see the sketch after this list).
  • Assess Performance and Reliability: Test the tools' effectiveness on real-world Linux data and analyze their accuracy, speed, and reliability.
  • Identify Best Use Cases: Pinpoint which tasks are most suitable for LLM support, distinguishing high-impact applications from impractical ones.
  • Document Findings and Recommendations: Summarize results with clear documentation and suggest next steps for potential integration or further development.
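
As a rough illustration of the prototyping goal above, a proof-of-concept log triage script can stay very small. The sketch below assumes a locally hosted model served by Ollama on its default port; the model name, endpoint, and prompt are assumptions, not part of this project yet.

```
# Hypothetical proof of concept: ask a locally hosted model (served by Ollama,
# assumed to listen on its default port) to triage the tail of a system log.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint
MODEL = "llama3"  # assumption: any locally pulled instruction-tuned model

def triage_log(path: str, max_lines: int = 200) -> str:
    """Return the model's guess at likely root causes in a log excerpt."""
    with open(path, errors="replace") as f:
        excerpt = "".join(f.readlines()[-max_lines:])  # last N lines only
    prompt = (
        "You are helping a Linux maintainer. Given the following log excerpt, "
        "list the most likely root causes and the lines that support them.\n\n"
        + excerpt
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(triage_log("/var/log/messages"))
```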

Resources

  • Local LLM Implementations: Access to locally hosted LLMs such as LLaMA, GPT-J, or similar open-source models that can be run and fine-tuned on local hardware.
  • Computing Resources: Workstations or servers capable of running LLMs locally, equipped with sufficient GPU power for training and inference.
  • Sample Data: Logs, source code, patches, and packaging data from openSUSE or SUSE repositories for model training and testing.
  • Public LLMs for Benchmarking: Access to APIs from platforms like OpenAI or Hugging Face for comparative testing and performance assessment.
  • Existing NLP Tools: Libraries such as spaCy, Hugging Face Transformers, and PyTorch for building and interacting with local LLMs (a short loading example follows this list).
  • Technical Documentation: Tutorials and resources focused on setting up and optimizing local LLMs for tasks relevant to Linux development.
  • Collaboration: Engagement with community experts and teams experienced in AI and Linux for feedback and joint exploration.
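
For the loading example referenced in the NLP tools item above, a hedged sketch with Hugging Face Transformers might look like this; the model name is only an example, and anything that fits the available GPU or CPU memory could be substituted.

```
# Minimal sketch: run a small open model locally with Hugging Face Transformers.
# The model name is an example; device_map="auto" needs the accelerate package.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # example model, not a recommendation
    device_map="auto",  # place layers on GPU automatically if one is available
)

out = generator(
    "Explain what this RPM spec line does: %autosetup -p1",
    max_new_tokens=128,
    do_sample=False,
)
print(out[0]["generated_text"])
```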

Looking for hackers with the skills:

ai

This project is part of:

Hack Week 24

Activity

  • about 1 year ago: PSuarezHernandez liked this project.
  • about 1 year ago: jiriwiesner liked this project.
  • about 1 year ago: anicka added keyword "ai" to this project.
  • about 1 year ago: moio liked this project.
  • about 1 year ago: livdywan liked this project.
  • about 1 year ago: mwilck liked this project.
  • about 1 year ago: bfilho liked this project.
  • about 1 year ago: vlefebvre liked this project.
  • about 1 year ago: wfrisch liked this project.
  • about 1 year ago: anicka started this project.
  • about 1 year ago: anicka originated this project.

  • Comments

    • wfrisch
      about 1 year ago by wfrisch

      If someone could recreate Google's Project Naptime, or at least something similar to it, that would be very interesting:

      Two key features:

      • Tool use in general
      • Tool-assisted verification of LLM results

    • jiriwiesner
      about 1 year ago by jiriwiesner

      I would like to ask an LLM instance about the inner workings of the Linux kernel code. It is a common task of mine to look for a bug in a subsystem or a layer that can easily have tens of thousands of lines of code (e.g. bsc 1216813). I know that understanding the Linux code is what we do as developers, but my understanding and knowledge are always limited because I simply do not have the time to read all of the code possibly involved in an issue. If an LLM were trained on the source code of a specific version of Linux, a developer could then ask involved questions about the code using the terms found in the code base. It should basically be something that lets a developer find the interesting parts of the code better than just using grep.

      • anicka
        about 1 year ago by anicka

        Actually, it looks like off-the-shelf ChatGPT 4 can already be quite helpful with such tasks.

        But training something like Code Llama on our kernels is something I indeed want to look into next time, because if there is any way to leverage LLMs in our bugfixing or backporting, this is it.
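
        One way to explore this without retraining is retrieval: index a subsystem's sources with an embedding model and hand the best-matching chunks to an LLM as context. The sketch below is only an illustration; the kernel path, chunk size, and embedding model are assumptions.

```
# Sketch only: index a kernel subsystem's sources so that an LLM (or a human)
# can be pointed at the most relevant chunks for a question, instead of grep.
# The tree path, chunk size, and embedding model below are assumptions.
from pathlib import Path
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small example embedder

def chunk_file(path: Path, lines_per_chunk: int = 80):
    lines = path.read_text(errors="replace").splitlines()
    for i in range(0, len(lines), lines_per_chunk):
        yield f"{path}:{i + 1}", "\n".join(lines[i:i + lines_per_chunk])

# Index e.g. net/core/ from a checked-out kernel tree (hypothetical path).
chunks = [c for f in sorted(Path("linux/net/core").rglob("*.c")) for c in chunk_file(f)]
vectors = embedder.encode([text for _, text in chunks], normalize_embeddings=True)

def top_matches(question: str, k: int = 5):
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = vectors @ q  # cosine similarity (vectors are normalized)
    return [chunks[i][0] for i in np.argsort(scores)[::-1][:k]]

# file:line locations worth reading or pasting into an LLM prompt
print(top_matches("where is the qdisc dequeue path throttled?"))
```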

    Similar Projects

    "what is it" file and directory analysis via MCP and local LLM, for console and KDE by rsimai

    Description

    Users sometimes wonder what the files or directories they find on their local PC are good for. If they can't tell from the filename or metadata, there should be an easy way to quickly analyze the content and at least guess its meaning. An LLM could help with that, through the use of a filesystem MCP and to-text converters for typical file types. Ideally this is integrated into the desktop environment but works just as well from a console. All data is processed locally or "on premise"; no artifacts remain or leave the system.
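
    A minimal console-side sketch of the idea follows; the endpoint and model name are assumptions, and a real implementation would go through the filesystem MCP server and proper to-text converters rather than reading the file directly.

```
# Sketch of the console use case: guess what a file is for by sending its name
# and a small decoded sample to a locally hosted model (Ollama endpoint and
# model name assumed). A real version would use the MCP server and converters.
import sys
import requests

def whatisit(path: str) -> str:
    with open(path, "rb") as f:
        sample = f.read(4096).decode("utf-8", errors="replace")
    prompt = (
        f"File name: {path}\n"
        f"First bytes, decoded as text:\n{sample}\n\n"
        "In one or two sentences, what is this file probably for?"
    )
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=60,
    )
    r.raise_for_status()
    return r.json()["response"]

if __name__ == "__main__":
    print(whatisit(sys.argv[1]))
```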

    Goals

    • The user can run a command from the console, to check on a file or directory
    • The filemanager contains the "analyze" feature within the context menu
    • The local LLM could serve for other use cases where privacy matters

    TBD

    • Find or write capable one-shot and interactive MCP client
    • Find or write simple+secure file access MCP server
    • Create local LLM service with appropriate footprint, containerized
    • Shell command with options
    • KDE integration (Dolphin)
    • Package
    • Document

    Resources


    Explore LLM evaluation metrics by thbertoldi

    Description

    Learn the best practices for evaluating LLM performance with an open-source framework such as DeepEval.
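
    As a rough sketch of what such an evaluation can look like with DeepEval (the test data and threshold here are made up, the metric needs a judge model configured, and the exact API may differ between releases):

```
# Hedged sketch of a DeepEval-style check; inputs and threshold are invented,
# and AnswerRelevancyMetric needs an evaluation (judge) model configured.
from deepeval import assert_test
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

def test_answer_relevancy():
    test_case = LLMTestCase(
        input="How do I list installed packages on openSUSE?",
        actual_output="Use `zypper se --installed-only` or `rpm -qa`.",
    )
    # Fails if the judged relevancy score drops below the threshold.
    assert_test(test_case, [AnswerRelevancyMetric(threshold=0.7)])
```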

    Goals

    Curate the knowledge learned during practice and present it to colleagues.

    -> Maybe publish a blog post on SUSE's blog?

    Resources

    https://deepeval.com

    https://docs.pactflow.io/docs/bi-directional-contract-testing


    Try out Neovim Plugins supporting AI Providers by enavarro_suse

    Description

    Experiment with several Neovim plugins that integrate AI model providers such as Gemini and Ollama.

    Goals

    Evaluate how these plugins enhance the development workflow, how they differ in capabilities, and how smoothly they integrate into Neovim for day-to-day coding tasks.

    Resources


    SUSE Edge Image Builder MCP by eminguez

    Description

    Based on my other Hack Week project, SUSE Edge Image Builder's Json Schema, I would also like to build an MCP to be able to generate EIB config files the AI way.

    Realistically I don't think I'll have something consumable by the end of this Hack Week, but I would at least like to start exploring MCPs, the difference between an API and an MCP, etc.
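
    For a sense of scale, a tool-exposing MCP server can be very small. The sketch below uses the official Python MCP SDK's FastMCP helper with a placeholder check instead of real EIB logic; it illustrates the concept and is not the project's actual implementation (see Result below).

```
# Illustration only: a tiny MCP server exposing one tool over stdio. The
# validation logic is a placeholder, not real EIB schema handling.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("eib-config-helper")

@mcp.tool()
def validate_eib_config(yaml_text: str) -> str:
    """Placeholder check: does the EIB config name a base image at all?"""
    return "ok" if "baseImage:" in yaml_text else "missing baseImage field"

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio so an AI client can call the tool
```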

    Goals

    • Familiarize myself with MCPs
    • Unrealistic: Have an MCP that can generate an EIB config file

    Resources

    Result

    https://github.com/e-minguez/eib-mcp

    I've used antigravity and its agent mode extensively to code this. The MCP builds heavily on https://hackweek.opensuse.org/25/projects/suse-edge-image-builder-json-schema.

    I've ended up learning a lot of things about "prompting", json schemas in general, some golang, MCPs and AI in general :)

    Example:

    Generate an Edge Image Builder configuration for an ISO image based on slmicro-6.2.iso, targeting x86_64 architecture. The output name should be 'my-edge-image' and it should install to /dev/sda. It should deploy a 3 nodes kubernetes cluster with nodes names "node1", "node2" and "node3" as:

    • hostname: node1, IP: 1.1.1.1, role: initializer
    • hostname: node2, IP: 1.1.1.2, role: agent
    • hostname: node3, IP: 1.1.1.3, role: agent

    The kubernetes version should be k3s 1.33.4-k3s1 and it should deploy a cert-manager helm chart (the latest one available according to https://cert-manager.io/docs/installation/helm/). It should create a user called "suse" with password "suse" and set ntp to "foo.ntp.org". The VIP address for the API should be 1.2.3.4

    Generates:

```
apiVersion: "1.0"
image:
  arch: x86_64
  baseImage: slmicro-6.2.iso
  imageType: iso
  outputImageName: my-edge-image
kubernetes:
  helm:
    charts:
      - name: cert-manager
        repositoryName: jetstack
```


    Local AI assistant with optional integrations and mobile companion by livdywan

    Description

    Set up a local AI assistant for research, brainstorming and proofreading. Look into SurfSense, Open WebUI and possibly alternatives. Explore integration with services like openQA. There should be no cloud dependencies. Mobile phone support or an additional companion app would be a bonus. The goal is not to develop everything from scratch.

    User Story

    • Allison Average wants a one-click local AI assistant on their openSUSE laptop.
    • Ash Awesome wants AI on their phone without an expensive subscription.

    Goals

    • Evaluate a local SurfSense setup for day-to-day productivity
    • Test opencode for vibe coding and tool calling

    Timeline

    Day 1

    • Took a look at SurfSense and started setting up a local instance.
    • Unfortunately the container setup did not work well. Though this was a great opportunity to learn some new podman commands and refresh my memory on how to recover a corrupted btrfs filesystem.

    Day 2

    • Due to its sheer size and complexity, SurfSense seems to have triggered btrfs fragmentation. Naturally this was not visible in any podman-related errors or in the journal, so this took up much of my second day.

    Day 3

    Day 4

    • Context size is a thing, and models are not equally usable for vibe coding.
    • Through arduous browsing of ollama models I did find some, like myaniu/qwen2.5-1m:7b with a 1M context window, but even then it is not obvious whether they are meant for tool calls.

    Day 5

    • Whilst trying to make opencode usable I discovered ramalama, which worked instantly and very well.

    Outcomes

    surfsense

    I could not easily set this up completely, maybe in part due to my filesystem issues. I was expecting this to be less of an effort.

    opencode

    Installing opencode and ollama in my distrobox container along with the following configs worked for me.

    When preparing a new project from scratch it is a good idea to start out with a template.

    opencode.json

    ``` {