Description

Set up a local AI assistant for research, brainstorming, and proofreading. Look into SurfSense, Open WebUI, and possibly alternatives. Explore integration with services like openQA. There should be no cloud dependencies. Mobile phone support or an additional companion app would be a bonus. The goal is not to develop everything from scratch.

User Story

  • Allison Average wants a one-click local AI assistant on their openSUSE laptop.
  • Ash Awesome wants AI on their phone without an expensive subscription.

Goals

  • Evaluate a local SurfSense setup for day-to-day productivity
  • Test opencode for vibe coding and tool calling

Timeline

Day 1

  • Took a look at SurfSense and started setting up a local instance.
  • Unfortunately the container setup did not work well. Though this was a great opportunity to learn some new podman commands and refresh my memory on how to recover a corrupted btrfs filesystem (sketched below).
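The exact commands are not recorded here, but the podman triage involved was along these lines (the container name is a hypothetical example):

podman ps -a                      # list all containers, including stopped ones
podman logs surfsense-backend     # inspect a failing container's output
podman system prune --volumes     # reclaim space from unused containers, images and volumes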

Day 2

  • Due to its sheer size and complexity, SurfSense seems to have triggered btrfs fragmentation. Naturally this was not visible in any podman-related errors or in the journal, so tracking it down took up much of my second day.
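For reference, standard btrfs housekeeping of this kind can help with fragmentation; this is a sketch of typical commands, not a record of exactly what I ran:

sudo btrfs scrub start -B /                                    # verify checksums
sudo btrfs filesystem defragment -r ~/.local/share/containers  # defragment container storage
sudo btrfs balance start -dusage=50 /                          # compact half-empty data block groups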

Day 3

Day 4

  • Context size matters, and models are not equally usable for vibe coding.
  • After some arduous browsing of Ollama models I did find some, like myaniu/qwen2.5-1m:7b with a 1M-token context, but even then it is not obvious whether they are meant for tool calling (see below).
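One way to take some of the guesswork out of this: ollama show prints a model's context length and, in recent versions, a capabilities list that includes "tools" when tool calling is supported. Something like:

ollama show myaniu/qwen2.5-1m:7b    # check "context length" and the capabilities list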

Day 5

  • Whilst trying to make opencode usable I discovered ramalama, which worked instantly and very well.

Outcomes

SurfSense

I could not easily set this up completely, maybe in part due to my filesystem issues. I was expecting this to be less of an effort.

opencode

Installing opencode and ollama in my distrobox container, along with the following configs, worked for me.
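For reference, the setup boiled down to something like the following; the box name is arbitrary, and the two one-liners are the upstream installers documented by the opencode and Ollama projects:

distrobox create --name ai-box --image registry.opensuse.org/opensuse/tumbleweed
distrobox enter ai-box
curl -fsSL https://opencode.ai/install | bash    # opencode installer
curl -fsSL https://ollama.com/install.sh | sh    # ollama installer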

When preparing a new project from scratch, it is a good idea to start out with a template.

opencode.json

{ "$schema": "https://opencode.ai/config.json", "theme": "catppuccin", "model": "ollama/qwen2.5-coder:1.5b", "mode": { "plan": { "temperature": 0.0 }, "build": { "temperature": 0.0 } }, "provider": { "ollama": { "npm": "[@ai-sdk](/users/ai-sdk)/openai-compatible", "name": "Ollama (local)", "options": { "baseURL": "http://localhost:11434/v1" }, "models": { "qwen2.5-coder:1.5b": { "name": "Qwem2.5-Coder" } } } }, "mcp": { "openqa": { "type": "remote", "enabled": true, "url": "https://openqa.opensuse.org/experimental/mcp", "headers": { "Authorization": "Bearer {env:OPENQA_USER}:{env:OPENQA_APIKEY}:{env:OPENQA_APISECRET}" } }, "gh_grep": { "type": "remote", "url": "https://mcp.grep.app" } } }

The models need to be pulled with ollama pull first, and ollama serve needs to be running.
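Concretely, that means something like this before launching opencode; the openQA credentials are placeholders for the MCP entry above:

ollama pull qwen2.5-coder:1.5b
ollama serve &
export OPENQA_USER=geekotest             # placeholder values, see your openQA API keys page
export OPENQA_APIKEY=AAAABBBBCCCCDDDD
export OPENQA_APISECRET=XXXXYYYYZZZZ
opencode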

AGENTS.md

Agents can be instructed per project or globally like so:

When you need to look up openQA jobs or job groups, use `openqa` tools. If you are unsure how to do something, use `gh_grep` to search code examples from github.

Note: My results varied a lot between models. Increasing the available context length, e.g. OLLAMA_CONTEXT_LENGTH=8192 ollama serve &, gives it more wiggle room, and lowering the temperature should also help, but I found myself tweaking the configuration a lot.

Horrible performance even with small models

Normally I don't hear the fan in this laptop much. opencode processed responses so slowly that I barely got anything done. Even figuring out why responses were unreliable took longer because I had to wait a lot for useless responses.
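In hindsight, checking whether the model actually landed on the GPU would have been a good first step; ollama ps reports the CPU/GPU split for loaded models (output below is illustrative):

ollama ps
NAME                  ID              SIZE      PROCESSOR    UNTIL
qwen2.5-coder:1.5b    0a1b2c3d4e5f    1.9 GB    100% CPU     4 minutes from now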

Airgapped models

While investigating the horrible performance of opencode I stumbled upon ramalama, which runs models in containers optimized for different CPUs, which are also isolated:

ramalama serve --ctx-size 8192 -p 8080 -d kirito1/qwen3-coder:1.7b

I could not get it to work with opencode, which just silently failed to communicate with it. Even so, ramalama is awesome.
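For the record, this is the untested provider entry I would expect to work, since ramalama's llama.cpp backend speaks the OpenAI-compatible API on the port given above; treat it as an assumption rather than a working recipe:

{
  "provider": {
    "ramalama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "RamaLama (local)",
      "options": { "baseURL": "http://localhost:8080/v1" },
      "models": {
        "kirito1/qwen3-coder:1.7b": { "name": "Qwen3-Coder" }
      }
    }
  }
}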
