Description

Set up a local AI assistant for research, brainstorming, and proofreading. Look into SurfSense, Open WebUI, and possibly alternatives. Explore integration with services like openQA. There should be no cloud dependencies. Mobile phone support or an additional companion app would be a bonus. The goal is not to develop everything from scratch.

User Story

  • Allison Average wants a one-click local AI assistant on their openSUSE laptop.
  • Ash Awesome wants AI on their phone without an expensive subscription.

Goals

  • Evaluate a local SurfSense setup for day-to-day productivity
  • Test opencode for vibe coding and tool calling

Timeline

Day 1

  • Took a look at SurfSense and started setting up a local instance.
  • Unfortunately the container setup did not work well. Still, this was a great opportunity to learn some new podman commands and refresh my memory on how to recover a corrupted btrfs filesystem.
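
For reference, a minimal sketch of the recovery steps, assuming you are booted from a rescue system; the device name and mount point are placeholders:

```sh
# Read-only integrity check first; avoid --repair unless you know you need it
sudo btrfs check /dev/sda2
# If the filesystem will not mount, try the backup tree roots before anything destructive
sudo mount -o ro,usebackuproot /dev/sda2 /mnt
```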

Day 2

  • Due to its sheer size and complexity, SurfSense seems to have triggered btrfs fragmentation. Naturally this was not visible in any podman-related errors or in the journal, so this took up much of my second day.
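
Cleaning this up came down to standard btrfs maintenance. A sketch, with the rootless podman storage path as an assumption:

```sh
# Scrub verifies checksums and surfaces corruption the journal may not show (-B waits in the foreground)
sudo btrfs scrub start -B /
# Defragment the container storage; -czstd recompresses with zstd along the way
sudo btrfs filesystem defragment -r -czstd ~/.local/share/containers
```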

Day 3

Day 4

  • Context size really matters, and models are not equally usable for vibe coding.
  • Through arduous browsing of ollama models I did find some, like myaniu/qwen2.5-1m:7b with a 1M token context window, but even then it is not obvious whether they are meant for tool calls.
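
One way to check what a pulled model actually offers is `ollama show`, which prints the context length and, in recent Ollama releases, a capabilities list that includes tool support:

```sh
# Inspect context length and capabilities of a pulled model
ollama show myaniu/qwen2.5-1m:7b
```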

Day 5

  • Whilst trying to make opencode usable I discovered ramalama, which worked instantly and very well.

Outcomes

SurfSense

I could not fully set this up, maybe in part due to my filesystem issues. I was expecting this to be less of an effort.

opencode

Installing opencode and ollama in my distrobox container, along with the following configs, worked for me.
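
Roughly like this; the container name is arbitrary and the install one-liners are the ones documented upstream (verify them before piping to a shell):

```sh
# Create and enter a Tumbleweed-based distrobox
distrobox create --name ai --image registry.opensuse.org/opensuse/tumbleweed:latest
distrobox enter ai
# Inside the container: install ollama and opencode via their upstream scripts
curl -fsSL https://ollama.com/install.sh | sh
curl -fsSL https://opencode.ai/install | bash
```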

When preparing a new project from scratch, it is a good idea to start out with a template.

opencode.json

{ "$schema": "https://opencode.ai/config.json", "theme": "catppuccin", "model": "ollama/qwen2.5-coder:1.5b", "mode": { "plan": { "temperature": 0.0 }, "build": { "temperature": 0.0 } }, "provider": { "ollama": { "npm": "[@ai-sdk](/users/ai-sdk)/openai-compatible", "name": "Ollama (local)", "options": { "baseURL": "http://localhost:11434/v1" }, "models": { "qwen2.5-coder:1.5b": { "name": "Qwem2.5-Coder" } } } }, "mcp": { "openqa": { "type": "remote", "enabled": true, "url": "https://openqa.opensuse.org/experimental/mcp", "headers": { "Authorization": "Bearer {env:OPENQA_USER}:{env:OPENQA_APIKEY}:{env:OPENQA_APISECRET}" } }, "gh_grep": { "type": "remote", "url": "https://mcp.grep.app" } } }

The models need to be pulled with ollama first, and ollama needs to be serving.
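
For example, with the model from the config above:

```sh
# Start the ollama server in the background, then fetch the model
ollama serve &
ollama pull qwen2.5-coder:1.5b
```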

AGENTS.md

Agents can be instructed per project or globally like so:

```
When you need to look up openQA jobs or job groups, use `openqa` tools. If you are unsure how to do something, use `gh_grep` to search code examples from GitHub.
```

Note: My results varied a lot between models. Increasing the available context length, e.g. `OLLAMA_CONTEXT_LENGTH=8192 ollama serve &`, gives the model more wiggle room, and lowering the temperature should also help, but I found myself tweaking the configuration a lot.

Horrible performance even with small models

Normally I don't hear the fan on this laptop much. opencode processed responses so slowly that I barely got anything done. Even figuring out why responses were unreliable took longer, because I had to wait so long for useless ones.

Airgapped models

While investigating the horrible performance of opencode I stumbled upon ramalama, which runs models in containers that are optimized for different CPUs and isolated from the network:

```sh
ramalama serve --ctx-size 8192 -p 8080 -d kirito1/qwen3-coder:1.7b
```
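
The endpoint speaks the OpenAI-compatible API, so a quick smoke test is possible with curl (the payload is a minimal assumed chat request; the port is taken from the command above):

```sh
# Smoke test the local ramalama endpoint
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "kirito1/qwen3-coder:1.7b", "messages": [{"role": "user", "content": "Say hello"}]}'
```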

I could not get it to work with opencode, which just silently failed to communicate with it. Even so, ramalama is awesome.

