Description
Large language models like ChatGPT have demonstrated remarkable capabilities across a variety of applications. However, their potential for enhancing the Linux development and user ecosystem remains largely unexplored. This project seeks to bridge that gap by researching practical applications of LLMs to improve workflows in areas such as backporting, packaging, log analysis, system migration, and more. By identifying patterns that LLMs can leverage, we aim to uncover new efficiencies and automation strategies that can benefit developers, maintainers, and end users alike.
Goals
- Evaluate Existing LLM Capabilities: Research and document the current state of LLM usage in open-source and Linux development projects, noting successes and limitations.
- Prototype Tools and Scripts: Develop proof-of-concept scripts or tools that leverage LLMs to perform specific tasks such as automated log analysis, assisting with backporting patches, or generating packaging metadata (a minimal log-analysis sketch follows this list).
- Assess Performance and Reliability: Test the tools' effectiveness on real-world Linux data and analyze their accuracy, speed, and reliability.
- Identify Best Use Cases: Pinpoint which tasks are most suitable for LLM support, distinguishing between high-impact and impractical applications.
- Document Findings and Recommendations: Summarize results with clear documentation and suggest next steps for potential integration or further development.
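As a first taste of the log-analysis idea, here is a minimal sketch that sends the tail of a system log to a locally hosted model. It assumes an OpenAI-compatible endpoint such as the one Ollama exposes on localhost; the model name, prompt, and log path are illustrative only.

```python
"""Proof-of-concept: summarize anomalies in a system log with a locally
hosted LLM. Assumes an OpenAI-compatible endpoint such as the one Ollama
exposes on localhost:11434; model name and log path are assumptions."""
import requests

ENDPOINT = "http://localhost:11434/v1/chat/completions"  # assumption: local Ollama
MODEL = "llama3"  # assumption: any locally pulled chat model

def analyze_log(log_text: str) -> str:
    prompt = (
        "You are helping triage a Linux system log. "
        "List the most likely errors or anomalies, one per line:\n\n" + log_text
    )
    resp = requests.post(
        ENDPOINT,
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    with open("/var/log/messages") as f:
        # Keep the prompt small: only the last 200 lines.
        tail = "".join(f.readlines()[-200:])
    print(analyze_log(tail))
```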
Resources
- Local LLM Implementations: Access to locally hosted LLMs such as LLaMA, GPT-J, or similar open-source models that can be run and fine-tuned on local hardware.
- Computing Resources: Workstations or servers capable of running LLMs locally, equipped with sufficient GPU power for training and inference.
- Sample Data: Logs, source code, patches, and packaging data from openSUSE or SUSE repositories for model training and testing.
- Public LLMs for Benchmarking: Access to APIs from platforms like OpenAI or Hugging Face for comparative testing and performance assessment.
- Existing NLP Tools: Libraries such as spaCy, Hugging Face Transformers, and PyTorch for building and interacting with local LLMs (see the inference sketch after this list).
- Technical Documentation: Tutorials and resources focused on setting up and optimizing local LLMs for tasks relevant to Linux development.
- Collaboration: Engagement with community experts and teams experienced in AI and Linux for feedback and joint exploration.
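For instance, a local inference smoke test with Hugging Face Transformers can be as small as the following sketch. distilgpt2 is only a tiny placeholder model (an assumption); LLaMA-family or GPT-J weights would be loaded the same way, given the hardware.

```python
"""Smoke test for running an open model locally with Hugging Face
Transformers. distilgpt2 is a tiny placeholder (an assumption); larger
open models load through the same pipeline API."""
from transformers import pipeline

# Downloads the weights once, then runs fully locally from the cache.
generator = pipeline("text-generation", model="distilgpt2")

out = generator(
    "An RPM spec file is",
    max_new_tokens=40,
    do_sample=False,
)
print(out[0]["generated_text"])
```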
This project is part of:
Hack Week 24
Activity
Comments
about 1 year ago by jiriwiesner
I would like to ask an LLM instance about the inner workings of the Linux kernel code. It is a common task of mine to look for a bug in a subsystem or a layer that can easily have tens of thousands of lines of code (e.g. bsc 1216813). I know that having an understanding of the Linux code is what we do as developers, but my understanding and knowledge are always limited because I simply do not have the time to read all of the code possibly involved in an issue. If the LLM were trained to process the source code of a specific version of Linux, a developer could then ask involved questions about the code using the terms found in the code base. It should basically be something that allows a developer to find the interesting parts of the code better than just grep does.
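A minimal sketch of such a code-search helper, assuming the sentence-transformers package, a small general-purpose embedding model, and a kernel checkout as the working directory (a code-specific embedding model would likely rank better):

```python
"""Sketch of 'better than grep': embed kernel source chunks and rank them
against a natural-language question. The model choice, chunking by fixed
windows, and the net/ipv4 subtree are all assumptions."""
from pathlib import Path
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumption: general embedder

# Index: one chunk per 40-line window of each .c file in the subtree.
chunks, origins = [], []
for path in Path("net/ipv4").rglob("*.c"):
    lines = path.read_text(errors="ignore").splitlines()
    for i in range(0, len(lines), 40):
        chunks.append("\n".join(lines[i:i + 40]))
        origins.append(f"{path}:{i + 1}")

emb = model.encode(chunks, convert_to_tensor=True)

query = "where is the TCP retransmission timer armed?"
q = model.encode(query, convert_to_tensor=True)
for hit in util.semantic_search(q, emb, top_k=5)[0]:
    print(origins[hit["corpus_id"]], f"score={hit['score']:.2f}")
```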
about 1 year ago by anicka
Actually, it looks like off-the-shelf ChatGPT 4 can already be quite helpful in such tasks.
But training something like Code Llama on our kernels is something I indeed want to look into next time, because if there is any way to leverage LLMs in our bug fixing or backporting, this is it.
Similar Projects
Kubernetes-Based ML Lifecycle Automation by lmiranda
Description
This project aims to build a complete end-to-end Machine Learning pipeline running entirely on Kubernetes, using Go, and containerized ML components.
The pipeline will automate the lifecycle of a machine learning model, including:
- Data ingestion/collection
- Model training as a Kubernetes Job
- Model artifact storage in an S3-compatible registry (e.g. Minio)
- A Go-based deployment controller that automatically deploys new model versions to Kubernetes using Rancher
- A lightweight inference service that loads and serves the latest model
- Monitoring of model performance and service health through Prometheus/Grafana
The outcome is a working prototype of an MLOps workflow that demonstrates how AI workloads can be trained, versioned, deployed, and monitored using the Kubernetes ecosystem.
Goals
By the end of Hack Week, the project should:
Produce a fully functional ML pipeline running on Kubernetes with:
- Data collection job
- Training job container
- Storage and versioning of trained models
- Automated deployment of new model versions
- Model inference API service
- Basic monitoring dashboards
Showcase a Go-based deployment automation component, which scans the model registry and automatically generates and applies Kubernetes manifests for new model versions (a sketch of this logic follows below).
Enable continuous improvement by making the system modular and extensible (e.g., additional models, metrics, autoscaling, or drift detection can be added later).
Prepare a short demo explaining the end-to-end process and how new models flow through the system.
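The controller itself is planned in Go; purely to illustrate the scan-and-apply logic, here is a Python sketch. The bucket layout (model-vN.tar), the MinIO endpoint and credentials, and the image tag convention are all assumptions.

```python
"""Sketch of the deployment controller's core loop (the project plans Go;
this Python version only illustrates the logic). Assumptions: models are
versioned as model-vN.tar in a MinIO bucket, and kubectl is configured
for the target cluster."""
import re
import subprocess
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://minio.local:9000",  # assumption: local MinIO
    aws_access_key_id="minioadmin",
    aws_secret_access_key="minioadmin",
)

def latest_version(bucket: str = "models") -> int:
    """Return the highest model version found in the registry bucket."""
    objs = s3.list_objects_v2(Bucket=bucket).get("Contents", [])
    versions = [int(m.group(1)) for o in objs
                if (m := re.match(r"model-v(\d+)\.tar", o["Key"]))]
    return max(versions, default=0)

MANIFEST = """\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference
spec:
  replicas: 1
  selector: {{matchLabels: {{app: inference}}}}
  template:
    metadata: {{labels: {{app: inference}}}}
    spec:
      containers:
      - name: inference
        image: registry.local/inference:v{version}  # assumption: tag convention
"""

v = latest_version()
if v:
    # Render the manifest for the newest version and apply it via kubectl.
    subprocess.run(["kubectl", "apply", "-f", "-"],
                   input=MANIFEST.format(version=v).encode(), check=True)
```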
Resources
Updates
- Training pipeline and datasets
- Inference service (Python)
Local AI assistant with optional integrations and mobile companion by livdywan
Description
Set up a local AI assistant for research, brainstorming and proofreading. Look into SurfSense, Open WebUI and possibly alternatives. Explore integration with services like openQA. There should be no cloud dependencies. Mobile phone support or an additional companion app would be a bonus. The goal is not to develop everything from scratch.
User Story
- Allison Average wants a one-click local AI assistant on their openSUSE laptop.
- Ash Awesome wants AI on their phone without an expensive subscription.
Goals
- Evaluate a local SurfSense setup for day to day productivity
- Test opencode for vibe coding and tool calling
Timeline
Day 1
- Took a look at SurfSense and started setting up a local instance.
- Unfortunately the container setup did not work well. Though this was a great opportunity to learn some new podman commands and refresh my memory on how to recover a corrupted btrfs filesystem.
Day 2
- Due to its sheer size and complexity, SurfSense seems to have triggered btrfs fragmentation. Naturally this was not visible in any podman-related errors or in the journal, so this took up much of my second day.
Day 3
- Trying out opencode with Qwen3-Coder and Qwen2.5-Coder.
Day 4
- Context size is a thing, and models are not equally usable for vibe coding.
- Through arduous browsing for ollama models I did find some, like myaniu/qwen2.5-1m:7b with a 1M context window, but even then it is not obvious whether they are meant for tool calls.
Day 5
- Whilst trying to make opencode usable I discovered ramalama which worked instantly and very well.
Outcomes
surfsense
I could not get this set up completely, maybe in part due to my filesystem issues. I was expecting this to be less of an effort.
opencode
Installing opencode and ollama in my distrobox container along with the following configs worked for me.
When preparing a new project from scratch it is a good idea to start out with a template.
opencode.json
```json
{
```
Update M2Crypto by mcepl
There are a couple of projects I work on which need my attention and some work to get into shape:
Goal for this Hackweek
- Put M2Crypto into better shape (most issues closed, all pull requests processed)
- Have more fun learning jujutsu
- Play more with Gemini and see how much it helps (or not).
- Perhaps also (just slightly related) help to fix vis to work with LuaJIT, particularly to make vis-lspc work.
AI-Powered Unit Test Automation for Agama by joseivanlopez
The Agama project is a multi-language Linux installer that leverages the distinct strengths of several key technologies:
- Rust: Used for the back-end services and the core HTTP API, providing performance and safety.
- TypeScript (React/PatternFly): Powers the modern web user interface (UI), ensuring a consistent and responsive user experience.
- Ruby: Integrates existing, robust YaST libraries (e.g., yast-storage-ng) to reuse established functionality.
The Problem: Testing Overhead
Developing and maintaining code across these three languages requires a significant, tedious effort in writing, reviewing, and updating unit tests for each component. This high cost of testing is a drain on developer resources and can slow down the project's evolution.
The Solution: AI-Driven Automation
This project aims to eliminate the manual overhead of unit testing by exploring and integrating AI-driven code generation tools. We will investigate how AI can:
- Automatically generate new unit tests as code is developed.
- Intelligently correct and update existing unit tests when the application code changes.
By automating this crucial but monotonous task, we can free developers to focus on feature implementation and significantly improve the speed and maintainability of the Agama codebase.
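As a rough illustration of the intended workflow (not the final integration), the following sketch pipes a source file through gemini-cli's non-interactive prompt mode and saves the drafted tests next to it; the prompt wording and the output file naming are assumptions.

```python
"""Sketch: draft unit tests for a source file with an authorized AI tool.
Assumes gemini-cli is on PATH and supports a non-interactive -p/--prompt
mode; prompt wording and output path are illustrative only."""
import subprocess
import sys
from pathlib import Path

src = Path(sys.argv[1])          # e.g. a Rust, TypeScript, or Ruby file
prompt = (
    f"Write idiomatic unit tests for the following {src.suffix} file. "
    "Output only the test code.\n\n" + src.read_text()
)

# Ask the tool for a first draft of the tests.
result = subprocess.run(
    ["gemini", "-p", prompt],    # assumption: gemini-cli on PATH
    capture_output=True, text=True, check=True,
)

test_file = src.with_name(f"{src.stem}_test{src.suffix}")
test_file.write_text(result.stdout)
print(f"Drafted tests written to {test_file}")
```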
Goals
- Proof of Concept: Successfully integrate and demonstrate an authorized AI tool (e.g., gemini-cli) to automatically generate unit tests.
- Workflow Integration: Define and document a new unit test automation workflow that seamlessly integrates the selected AI tool into the existing Agama development pipeline.
- Knowledge Sharing: Establish a set of best practices for using AI in code generation, sharing the learned expertise with the broader team.
Contribution & Resources
We are seeking contributors interested in AI-powered development and improving developer efficiency. Whether you have previous experience with code generation tools or are eager to learn, your participation is highly valuable.
If you want to dive deep into AI for software quality, please reach out and join the effort!
- Authorized AI Tools: Tools supported by SUSE (e.g., gemini-cli)
- Focus Areas: Rust, TypeScript, and Ruby components within the Agama project.
Interesting Links
issuefs: FUSE filesystem representing issues (e.g. JIRA) for use with AI agents and code assistants by llansky3
Description
Creating a FUSE filesystem (issuefs) that mounts issues from various ticketing systems (GitHub, Jira, Bugzilla, Redmine) as files in your local file system (a minimal sketch appears after the list below).
And why is this a good idea?
- Users can use their favorite command-line tools to view and search tickets from various sources.
- Users can use AI agent capabilities from their favorite IDE or CLI to ask questions about the issues, the project, or functionality, with the relevant tickets provided as context without extra work.
- Users can use it during development of new features when letting an AI agent jump-start the solution. issuefs gives the AI agent context about the bug or the requested feature (the agent just reads a few more files). There is no need to copy and paste issues into the prompt or to use extra MCP tools to access them; you can still do that, but this approach is deliberately different.
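To make the idea concrete, here is a minimal read-only sketch using the fusepy bindings, exposing each open GitHub issue of a repository as a Markdown file. The flat layout and the file naming are assumptions; the actual prototype may differ.

```python
"""Minimal read-only sketch of issuefs with fusepy: each GitHub issue of a
repository becomes a text file. The repo argument, flat layout, and file
naming are assumptions."""
import errno
import stat
import requests
from fuse import FUSE, Operations, FuseOSError

class IssueFS(Operations):
    def __init__(self, repo: str):
        # Fetch the open issues once at mount time (sketch: no refresh).
        url = f"https://api.github.com/repos/{repo}/issues"
        self.files = {
            f"/{i['number']}-{i['title'][:30].replace('/', '_')}.md":
                f"# {i['title']}\n\n{i.get('body') or ''}\n".encode()
            for i in requests.get(url, timeout=30).json()
        }

    def getattr(self, path, fh=None):
        if path == "/":
            return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2}
        if path not in self.files:
            raise FuseOSError(errno.ENOENT)
        return {"st_mode": stat.S_IFREG | 0o444,
                "st_nlink": 1, "st_size": len(self.files[path])}

    def readdir(self, path, fh):
        return [".", ".."] + [name[1:] for name in self.files]

    def read(self, path, size, offset, fh):
        return self.files[path][offset:offset + size]

if __name__ == "__main__":
    import sys
    # Usage: python issuefs.py owner/repo /mnt/issues
    FUSE(IssueFS(sys.argv[1]), sys.argv[2], foreground=True, ro=True)
```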

Goals
- Add Github issue support
- Prove the concept/approach by applying it to issuefs itself, using GitHub issues for tracking and developing its new features
- Add support for Bugzilla and Redmine, using this approach in the process of doing so. Record a video of it.
- Clean-up and test the implementation and create some documentation
- Create a blog post about this approach
Resources
There is a prototype implementation here. This currently sort of works with JIRA only.