Description

Set up a local AI assistant for research, brainstorming and proofreading. Look into SurfSense, Open WebUI and possibly alternatives. Explore integration with services like openQA. There should be no cloud dependencies. Mobile phone support or an additional companion app would be a bonus. The goal is not to develop everything from scratch.

User Story

  • Allison Average wants a one-click local AI assistant on their openSUSE laptop.
  • Ash Awesome wants AI on their phone without an expensive subscription.

Goals

  • Evaluate a local SurfSense setup for day-to-day productivity
  • Test opencode for vibe coding and tool calling

Timeline

Day 1

  • Took a look at SurfSense and started setting up a local instance.
  • Unfortunately the container setup did not work well. Though this was a great opportunity to learn some new podman commands and refresh my memory on how to recover a corrupted btrfs filesystem.

Day 2

  • Due to its sheer size and complexity, SurfSense seems to have triggered btrfs fragmentation. Naturally this was not visible in any podman-related errors or in the journal, so it took up much of my second day.

Day 3

Day 4

  • Context size matters, and models are not equally usable for vibe coding.
  • After some arduous browsing of ollama models I did find a few, like myaniu/qwen2.5-1m:7b with a 1M context window, but even then it is not obvious whether they are meant for tool calls.

Day 5

  • While trying to make opencode usable I discovered ramalama, which worked instantly and very well.

Outcomes

SurfSense

I could not fully set this up, perhaps in part due to my filesystem issues. I was expecting this to take less effort.

opencode

Installing opencode and ollama in my distrobox container along with the following configs worked for me.
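Roughly, the setup looked like this (a sketch; the container name, the Tumbleweed image and the upstream install scripts are my assumptions, other images or packages should work too):

distrobox create --name ai --image registry.opensuse.org/opensuse/tumbleweed
distrobox enter ai
curl -fsSL https://ollama.com/install.sh | sh
curl -fsSL https://opencode.ai/install | bash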

When preparing a new project from scratch, it is a good idea to start out with a template.
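For example, you can ask opencode for a scaffold in a one-off run (the prompt below is purely illustrative):

opencode run "Scaffold a minimal Python project: pyproject.toml, src/ layout and a tests/ directory with one placeholder test"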

opencode.json

{ "$schema": "https://opencode.ai/config.json", "theme": "catppuccin", "model": "ollama/qwen2.5-coder:1.5b", "mode": { "plan": { "temperature": 0.0 }, "build": { "temperature": 0.0 } }, "provider": { "ollama": { "npm": "[@ai-sdk](/users/ai-sdk)/openai-compatible", "name": "Ollama (local)", "options": { "baseURL": "http://localhost:11434/v1" }, "models": { "qwen2.5-coder:1.5b": { "name": "Qwem2.5-Coder" } } } }, "mcp": { "openqa": { "type": "remote", "enabled": true, "url": "https://openqa.opensuse.org/experimental/mcp", "headers": { "Authorization": "Bearer {env:OPENQA_USER}:{env:OPENQA_APIKEY}:{env:OPENQA_APISECRET}" } }, "gh_grep": { "type": "remote", "url": "https://mcp.grep.app" } } }

The models need to be pulled with ollama pull first, and ollama serve needs to be running.
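For example, with the model from the config above:

ollama pull qwen2.5-coder:1.5b
ollama serve &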

AGENTS.md

Agents can be instructed per project or globally like so:

When you need to lookup openQA jobs or job groups, use `openqa` tools. If you are unsure how to do something, use `gh_grep` to search code examples from github.
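A per-project file goes into the repository root; for global instructions opencode reads from its config directory (the global path below assumes the default XDG location):

./AGENTS.md                      # per project
~/.config/opencode/AGENTS.md     # global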

Note: My results varied a lot between models. Increasing the available context length, e.g. OLLAMA_CONTEXT_LENGTH=8192 ollama serve &, gives the model more wiggle room, and lowering the temperature should also help, but I found myself tweaking the configuration a lot.

Horrible performance even with small models

Normally I don't hear the fan in this laptop much. opencode processed responses so slowly that I barely got anything done. Even figuring out why responses were unreliable took longer, because I had to wait so long for useless ones.
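One quick sanity check is whether ollama is running the model on the CPU or the GPU; recent ollama versions report this per loaded model:

ollama ps

A model silently falling back to 100% CPU would explain both the fan noise and the slow responses.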

Airgapped models

While investigating the horrible performance of opencode I stumbled upon ramalama, which runs models in containers optimized for different CPUs and isolated from the host:

ramalama serve --ctx-size 8192 -p 8080 -d kirito1/qwen3-coder:1.7b

I could not get it to work with opencode, which silently failed to communicate with it. Even so, ramalama is awesome.
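For reference, this is the provider entry I would have expected to work, assuming ramalama exposes an OpenAI-compatible endpoint on the port given above (untested, since the communication failed silently for me):

"provider": {
  "ramalama": {
    "npm": "@ai-sdk/openai-compatible",
    "name": "RamaLama (local)",
    "options": { "baseURL": "http://localhost:8080/v1" },
    "models": {
      "kirito1/qwen3-coder:1.7b": { "name": "Qwen3-Coder" }
    }
  }
}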
