Description
This project is meant to fight the loneliness of support team members by providing them with an AI assistant (hopefully) capable of scraping supportconfigs in a RAG fashion and trying to answer specific questions.
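As a rough sketch of what that loop could look like (assuming the ollama Python client, numpy, and locally pulled models; the chunk sizes, file path and prompt are illustrative, not the project's actual code):
```
# Minimal RAG sketch: chunk a supportconfig, embed the chunks locally,
# retrieve the most similar ones and ask a code-focused LLM about them.
import ollama
import numpy as np

def chunk(text, size=1000, overlap=100):
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def embed(texts, model="mxbai-embed-large"):
    return np.array([ollama.embeddings(model=model, prompt=t)["embedding"]
                     for t in texts])

def ask(question, parts, vectors, model="deepseek-coder-v2", top_k=3):
    q = embed([question])[0]
    sims = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
    context = "\n---\n".join(parts[i] for i in np.argsort(sims)[-top_k:])
    prompt = (f"Answer using only these supportconfig excerpts:\n{context}\n\n"
              f"Question: {question}")
    return ollama.generate(model=model, prompt=prompt)["response"]

parts = chunk(open("supportconfig.txt").read())  # illustrative path
vectors = embed(parts)
print(ask("Which kernel version is installed?", parts, vectors))
```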
Goals
- Set up an Ollama backend, spinning up one (or more?) code-focused LLMs, selected by license, performance and quality of results from among:
- deepseek-coder-v2
- dolphin-mistral
- starcoder2
- (...others??)
- Set up a Web UI for it, choosing an easily extensible and customizable option.
- Extend the solution in order to be able to:
- Add ZIU/Concord shared folders to its RAG context
- Add BZ cases, split into comments, to its RAG context
- A plus would be to login using the IDP portal to ghostwrAIter itself and use the same credentials to query BZ
- Add specific packages picking them from IBS repos
- A plus would be to login using the IDP portal to ghostwrAIter itself and use the same credentials to query IBS
- A plus would be to infer the packages of interest, and the right channel and version to pick, from the added BZ cases
This project is part of:
Hack Week 24
Activity
Comments
- about 1 year ago by paolodepa
The project soon moved to a CLI, as the skills needed for integrating a Web UI are not my cup of tea :-/
Its description and source code can be found at ghostwrAIter
I tested the listed LLMs and also the following embedding models: mxbai-embed-large, nomic-embed-text, all-minilm.
My impression is that the current state of the art for truly open-source LLMs and embedding models is still not mature enough for production-grade use, and that a big gap exists with the best-known commercial products.
Hopefully I will run a refresh for the next Hackweek.
Similar Projects
Local AI assistant with optional integrations and mobile companion by livdywan
Description
Setup a local AI assistant for research, brainstorming and proof reading. Look into SurfSense, Open WebUI and possibly alternatives. Explore integration with services like openQA. There should be no cloud dependencies. Mobile phone support or an additional companion app would be a bonus. The goal is not to develop everything from scratch.
User Story
- Allison Average wants a one-click local AI assistant on their openSUSE laptop.
- Ash Awesome wants AI on their phone without an expensive subscription.
Goals
- Evaluate a local SurfSense setup for day to day productivity
- Test opencode for vibe coding and tool calling
Timeline
Day 1
- Took a look at SurfSense and started setting up a local instance.
- Unfortunately the container setup did not work well. Though this was a great opportunity to learn some new podman commands and refresh my memory on how to recover a corrupted btrfs filesystem.
Day 2
- Due to its sheer size and complexity, SurfSense seems to have triggered btrfs fragmentation. Naturally this was not visible in any podman-related errors or in the journal, so this took up much of my second day.
Day 3
- Trying out opencode with Qwen3-Coder and Qwen2.5-Coder.
Day 4
- Context size is a thing, and models are not equally usable for vibe coding.
- Through arduous browsing of ollama models I did find some, like myaniu/qwen2.5-1m:7b with a 1M context window, but even then it is not obvious if they are meant for tool calls.
Day 5
- Whilst trying to make opencode usable I discovered ramalama, which worked instantly and very well.
Outcomes
SurfSense
I could not easily set this up completely, maybe in part due to my filesystem issues. I was expecting this to be less of an effort.
opencode
Installing opencode and ollama in my distrobox container along with the following configs worked for me.
When preparing a new project from scratch it is a good idea to start out with a template.
opencode.json
```
{ ... }
```
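For orientation, a local Ollama provider in opencode.json generally follows opencode's documented custom-provider shape; the model name and baseURL below are assumptions, not necessarily the exact setup used here:
```
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "qwen2.5-coder:7b": {
          "name": "Qwen 2.5 Coder 7B"
        }
      }
    }
  }
}
```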
issuefs: FUSE filesystem representing issues (e.g. JIRA) for use with AI agent code-assistants by llansky3
Description
Creating a FUSE filesystem (issuefs) that mounts issues from various ticketing systems (Github, Jira, Bugzilla, Redmine) as files to your local file system.
And why is this a good idea?
- Users can use their favorite command-line tools to view and search tickets from various sources
- Users can use AI agent capabilities from their favorite IDE or CLI to ask questions about the issues, project or functionality, with the relevant tickets provided as context and no extra work
- Users can use it during development of new features, letting the AI agent jump-start the solution. issuefs gives the agent context about the bug or requested feature (the agent just reads a few more files), with no need to copy and paste issues into the prompt or to use extra MCP tools to access them. Both are still possible, but this approach is deliberately different (see the sketch below).
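To make the idea concrete (this is an illustration, not the prototype's code), a read-only FUSE filesystem exposing a couple of hardcoded issues as files can be sketched in Python with fusepy; a real version would fetch the issue bodies from Github/Jira/Bugzilla/Redmine instead:
```
# issuefs-style sketch: every issue becomes a read-only file under the mount.
import errno, stat, sys, time
from fuse import FUSE, FuseOSError, Operations

class IssueFS(Operations):
    def __init__(self, issues):
        self.issues = issues  # filename -> issue body (bytes); hardcoded here
        self.t = time.time()

    def getattr(self, path, fh=None):
        if path == "/":
            return dict(st_mode=stat.S_IFDIR | 0o555, st_nlink=2,
                        st_ctime=self.t, st_mtime=self.t, st_atime=self.t)
        data = self.issues.get(path.lstrip("/"))
        if data is None:
            raise FuseOSError(errno.ENOENT)
        return dict(st_mode=stat.S_IFREG | 0o444, st_nlink=1,
                    st_size=len(data),
                    st_ctime=self.t, st_mtime=self.t, st_atime=self.t)

    def readdir(self, path, fh):
        return [".", ".."] + list(self.issues)

    def read(self, path, size, offset, fh):
        return self.issues[path.lstrip("/")][offset:offset + size]

if __name__ == "__main__":
    issues = {"PROJ-1.md": b"# PROJ-1: crash on startup\nSteps: ...\n"}
    FUSE(IssueFS(issues), sys.argv[1], foreground=True, ro=True)
```
Mounted with e.g. `python issuefs.py /tmp/issues`, any CLI tool or AI agent can then read the tickets as ordinary files.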

Goals
- Add Github issue support
- Prove the concept/approach by applying it to itself, using Github issues for tracking and development of new features
- Add support for Bugzilla and Redmine, using this approach in the process. Record a video of it.
- Clean-up and test the implementation and create some documentation
- Create a blog post about this approach
Resources
There is a prototype implementation here. This currently sort of works with JIRA only.
Uyuni Health-check Grafana AI Troubleshooter by ygutierrez
Description
This project explores the feasibility of using the open-source Grafana LLM plugin to enhance the Uyuni Health-check tool with LLM capabilities. The idea is to integrate a chat-based "AI Troubleshooter" directly into existing dashboards, allowing users to ask natural-language questions about errors, anomalies, or performance issues.
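One way to probe feasibility from outside a dashboard is the plugin's OpenAI-compatible proxy; the resource path, the abstract model name and the token below are assumptions about how grafana-llm-app is wired, so treat this purely as a sketch:
```
# Hypothetical probe of the grafana-llm-app proxy from Python.
# Assumes Grafana on localhost:3000 with the plugin installed and configured;
# the resource path and the "base" model tier are assumptions, not verified.
import requests

GRAFANA = "http://localhost:3000"
TOKEN = "<service-account-token>"  # placeholder

resp = requests.post(
    f"{GRAFANA}/api/plugins/grafana-llm-app/resources/openai/v1/chat/completions",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "model": "base",
        "messages": [{"role": "user",
                      "content": "Why is CPU usage spiking on this Uyuni host?"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```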
Goals
- Investigate if and how the grafana-llm-app plug-in can be used within the Uyuni Health-check tool.
- Investigate if this plug-in can be used to query LLMs for troubleshooting scenarios.
- Evaluate support for local LLMs and external APIs through the plugin.
- Evaluate if and how the Uyuni MCP server could be integrated as another source of information.
Resources
Multi-agent AI assistant for Linux troubleshooting by doreilly
Description
Explore multi-agent architecture as a way to avoid MCP context rot.
Having one agent with many tools bloats the context with low-level details about tool descriptions, parameter schemas etc which hurts LLM performance. Instead have many specialised agents, each with just the tools it needs for its role. A top level supervisor agent takes the user prompt and delegates to appropriate sub-agents.
Goals
Create an AI assistant with some sub-agents that are specialists at troubleshooting Linux subsystems, e.g. systemd, selinux, firewalld etc. The agents can get information from the system by implementing their own tools with simple function calls, or use tools from MCP servers, e.g. a systemd-agent can use tools from systemd-mcp.
Example prompts/responses:
```
user$ the system seems slow
assistant$ process foo with pid 12345 is using 1000% cpu ...

user$ I can't connect to the apache webserver
assistant$ the firewall is blocking http ... you can open the port with firewall-cmd --add-port ...
```
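A minimal sketch of that supervisor/sub-agent shape with the Python ADK; the model choice, instructions and the firewalld tool are illustrative assumptions, not the linux-helper code:
```
# Sketch of a supervisor delegating to specialist sub-agents (google-adk).
import subprocess
from google.adk.agents import LlmAgent

def firewall_state() -> str:
    """Return current firewalld zones, services and open ports."""
    return subprocess.run(["firewall-cmd", "--list-all"],
                          capture_output=True, text=True).stdout

firewalld_agent = LlmAgent(
    name="firewalld_agent",
    model="gemini-2.0-flash",  # illustrative model choice
    description="Diagnoses firewalld and connectivity problems.",
    instruction="Inspect the firewall with your tools and explain blockages.",
    tools=[firewall_state],  # plain functions are wrapped as tools by the ADK
)

systemd_agent = LlmAgent(
    name="systemd_agent",
    model="gemini-2.0-flash",
    description="Diagnoses systemd unit and service problems.",
    instruction="Inspect failing units and summarize the root cause.",
    # tools could instead come from an MCP server such as systemd-mcp
)

# The supervisor only sees the sub-agents' one-line descriptions, keeping its
# context free of low-level tool schemas; it routes each prompt by topic.
supervisor = LlmAgent(
    name="supervisor",
    model="gemini-2.0-flash",
    instruction="Delegate troubleshooting questions to the matching specialist.",
    sub_agents=[firewalld_agent, systemd_agent],
)
```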
Resources
Language: Python. The Python ADK is more mature than the Golang one.
https://google.github.io/adk-docs/
https://github.com/djoreilly/linux-helper
MCP Server for SCC by digitaltomm
Description
Provide an MCP Server implementation for customers to access data on scc.suse.com via MCP protocol. The core benefit of this MCP interface is that it has direct (read) access to customer data in SCC, so the AI agent gets enhanced knowledge about individual customer data, like subscriptions, orders and registered systems.
Architecture

Goals
We want to demonstrate a proof of concept: connecting any AI agent, for example gemini-cli or codex, to the SCC MCP server, enabling the user to ask questions about their SCC inventory.
For this Hackweek, we target that users get proper responses to these example questions:
- Which of my currently active systems are running products that are out of support?
- Do I have ready to use registration codes for SLES?
- What are the latest 5 released patches for SLES 15 SP6? Output as a list with release date, patch name, affected package names and fixed CVEs.
- Which versions of kernel-default are available on SLES 15 SP6?
Technical Notes
Similar to the organization APIs, this can expose to customers data about their subscriptions, orders, systems and products. Authentication should be done with organization credentials, similar to what needs to be provided to RMT/MLM. Customers can connect to the SCC MCP server from their own MCP-compatible client and Large Language Model (LLM), so no third party is involved.
Milestones
- [x] Basic MCP API setup
- [x] MCP endpoints
  - [x] Products / Repositories
  - [x] Subscriptions / Orders
  - [x] Systems
  - [x] Packages
- [x] Document usage with Gemini CLI, Codex
Resources
Gemini CLI setup:
~/.gemini/settings.json:
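A typical shape for registering an MCP server with Gemini CLI uses the mcpServers key with an HTTP transport; the endpoint URL and auth header below are placeholders, not the actual SCC values:
```
{
  "mcpServers": {
    "scc": {
      "httpUrl": "https://scc.suse.com/mcp",
      "headers": {
        "Authorization": "Basic <organization-credentials>"
      }
    }
  }
}
```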