There are a couple of projects I work on that need my attention and some work to get them into shape:

Goals for this Hackweek

  • Put M2Crypto into better shape (most issues closed, all pull requests processed)
  • Have more fun learning jujutsu
  • Play more with Gemini and see how much it helps (or not).
  • Perhaps also (only slightly related), help fix vis to work with LuaJIT, particularly to get vis-lspc working.

Looking for hackers with the skills:

vim python openssl jujutsu ai

This project is part of:

Hack Week 20, Hack Week 22, Hack Week 25

Activity

  • about 1 month ago: vizhestkov liked this project.
  • about 2 months ago: mcepl added keyword "ai" to this project.
  • about 2 months ago: mcepl added keyword "jujutsu" to this project.
  • about 2 months ago: mcepl removed keyword neovim from this project.
  • about 2 months ago: mcepl removed keyword lua from this project.
  • almost 3 years ago: asmorodskyi joined this project.
  • almost 3 years ago: msaquib liked this project.
  • almost 3 years ago: msaquib joined this project.
  • almost 5 years ago: mstrigl liked this project.
  • almost 5 years ago: kstreitova liked this project.
  • almost 5 years ago: mcepl started this project.
  • almost 5 years ago: mcepl added keyword "vim" to this project.
  • almost 5 years ago: mcepl added keyword "neovim" to this project.
  • almost 5 years ago: mcepl added keyword "lua" to this project.
  • almost 5 years ago: mcepl added keyword "python" to this project.
  • almost 5 years ago: mcepl added keyword "openssl" to this project.
  • almost 5 years ago: mcepl originated this project.

  • Comments

    • mcepl
      almost 3 years ago by mcepl

      • rope-based LSP server exists https://github.com/python-rope/pylsp-rope
      • spellsitter as a standalone hunspell-based spellchecker for nvim has been abandoned

    • asmorodskyi
      almost 3 years ago by asmorodskyi

      I have mid-level Python knowledge, basic OBS knowledge, and close to zero knowledge of encryption algorithms. I can try to fix some Python-specific problem within the package, or do some packaging task in OBS. Can you recommend something specific?

      • mcepl
        almost 3 years ago by mcepl

        Yeah, it is too late now, but many of https://gitlab.com/m2crypto/m2crypto/-/issues don’t require much encryption knowledge.

    • mcepl
      almost 3 years ago by mcepl

      There was actually some progress on this project: the master branch now passes the test suite on all platforms (including Windows! hint: I don’t have one ;)), and the release of the next milestone is blocked just by https://gitlab.com/m2crypto/m2crypto/-/merge_requests/234 not passing one test. If anybody knows anything about HTTP Transfer-Encoding: chunked and is willing to help, I am all ears!
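
      For anyone unfamiliar with the format, here is a minimal Python sketch of what Transfer-Encoding: chunked means on the wire. It is illustrative only (not M2Crypto's actual code), just to show the framing the failing test has to handle:

        def decode_chunked(body: bytes) -> bytes:
            """Decode a Transfer-Encoding: chunked payload into plain bytes."""
            result = b""
            while True:
                # Each chunk starts with its size in hex, terminated by CRLF
                # (optionally followed by chunk extensions after a ';').
                head, _, rest = body.partition(b"\r\n")
                size = int(head.split(b";")[0], 16)
                if size == 0:           # a zero-size chunk terminates the body
                    break
                result += rest[:size]
                body = rest[size + 2:]  # skip chunk data plus its trailing CRLF
            return result

        # Two chunks ("Hello, " and "world!") followed by the terminator.
        wire = b"7\r\nHello, \r\n6\r\nworld!\r\n0\r\n\r\n"
        assert decode_chunked(wire) == b"Hello, world!"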

    Similar Projects

    VimGolf Station by emiler

    Description

    VimGolf is a challenge game where the goal is to edit a given piece of text into a desired final form using as few keystrokes as possible in Vim.

    Some time ago, I built a rough portable station using a Raspberry Pi and a spare monitor. It was initially used to play VimGolf at the office and later repurposed for publicity at several events. This project aims to create a more robust version of that station and provide the necessary scripts and Ansible playbooks to make configuring your own VimGolf station easy.

    Goals

    • Refactor old existing scripts
    • Implement challenge selection
    • Load external configuration files
    • Create Ansible playbooks
    • Publish on GitHub

    Resources

    • https://www.vimgolf.com/
    • https://github.com/dstein64/vimgolf
    • https://github.com/igrigorik/vimgolf


    Mail client with mailing list workflow support in Rust by acervesato

    Description

    Create a mail user interface using the Rust programming language, supporting the mailing-list patch workflow. I know aerc is already there, but I would like to create something simpler, without integrated protocols: just a plain user interface that uses some crates to read and create emails, which are fetched and sent via external tools.

    I already know Rust, but not its async support, which is needed in this case to handle events inside the mail folder and to send notifications.

    Goals

    • simple user interface in the style of aerc, with some vim keybindings for motions and search
    • automatically run external tools (like mbsync) to check for new email
    • automatically run commands for notifications
    • apply patch sets from the mailing list
    • tree-sitter support with styles

    Resources

    • ratatui: user interface (https://ratatui.rs/)
    • notify: folder watcher (https://docs.rs/notify/latest/notify/)
    • mail-parser: parser for emails (https://crates.io/crates/mail-parser)
    • mail-builder: create emails in proper format (https://docs.rs/mail-builder/latest/mail_builder/)
    • gitpatch: ML support (https://crates.io/crates/gitpatch)
    • tree-sitter-rust: support for mail format (https://crates.io/crates/tree-sitter)


    Collection and organisation of information about Bulgarian schools by iivanov

    Description

    To achieve this, it will be necessary to:

    • Collect/download raw data from various government and non-governmental organizations
    • Clean up the raw data and organise it in some kind of database (see the sketch below).
    • Create a tool to make queries easy.
    • Or perhaps dump all data into AI and ask questions in natural language.
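
    A minimal sketch of the clean-up-and-load step, assuming the raw data arrives as CSV files (the file, table and column names here are hypothetical):

      import sqlite3
      import pandas as pd

      # Load a raw CSV export (hypothetical file and column names).
      df = pd.read_csv("nvo_scores_raw.csv")

      # Typical clean-up: normalise column names, drop rows without a school,
      # coerce scores to numbers.
      df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
      df = df.dropna(subset=["school_id"])
      df["score"] = pd.to_numeric(df["score"], errors="coerce")

      # Organise into an SQL table for later querying.
      with sqlite3.connect("schools.db") as conn:
          df.to_sql("exam_scores", conn, if_exists="replace", index=False)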

    Goals

    By selecting a particular school, information like this will be provided:

    • School scores on national exams.
    • School scores from the external evaluation exams.
    • School town, municipality and region.
    • Employment rate in a town or municipality.
    • Average health of the population in the region.

    Resources

    Some of these are available only in Bulgarian.

    • https://danybon.com/klasazia
    • https://nvoresults.com/index.html
    • https://ri.mon.bg/active-institutions
    • https://www.nsi.bg/nrnm/ekatte/archive

    Results

    • Information about all Bulgarian schools, with their scores over recent years, cleaned and organised into SQL tables
    • Information about all Bulgarian villages, cities, municipalities and districts, cleaned and organised into SQL tables
    • Census information about all Bulgarian villages and cities since the beginning of this century, cleaned and organised into SQL tables.
    • Information about religion and ethnicity for all Bulgarian municipalities, cleaned and organised into SQL tables.
    • Data successfully loaded into a locally running Ollama with the help of Vanna.AI (see the sketch below)
    • Seems to be usable.
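
    The Ollama + Vanna.AI hookup mentioned above typically follows Vanna's documented pattern of combining a vector store with an LLM backend; here is a sketch under assumed names (the model, database path and schema are made up):

      from vanna.ollama import Ollama
      from vanna.chromadb import ChromaDB_VectorStore

      # Vanna's documented pattern: mix a vector store with an LLM backend.
      class SchoolsVanna(ChromaDB_VectorStore, Ollama):
          def __init__(self, config=None):
              ChromaDB_VectorStore.__init__(self, config=config)
              Ollama.__init__(self, config=config)

      vn = SchoolsVanna(config={"model": "llama3"})  # any local Ollama model
      vn.connect_to_sqlite("schools.db")             # hypothetical DB path

      # Teach Vanna the schema so it can translate questions into SQL.
      vn.train(ddl="CREATE TABLE exam_scores (school_id TEXT, year INT, score REAL)")

      # Natural-language question -> generated SQL -> result set.
      print(vn.ask("Which schools had the highest average score in 2023?"))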

    TODO

    • Add more statistical information about municipalities and ....

    Code and data


    Enhance git-sha-verify: A tool to checkout validated git hashes by gpathak

    Description

    git-sha-verify is a simple shell utility to verify and check out trusted git commits signed with a GPG key. This tool helps ensure that only authorized or validated commit hashes are checked out from a git repository, supporting better code integrity and security within the workflow.

    Supports:

    • Verifying the authenticity of commits signed with a GPG key
    • Checking out trusted commits

    Ideal for teams and projects where the integrity of git history is crucial.
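
    The core check can be illustrated with a short Python sketch built around git's own verification command (a hypothetical illustration, not the project's actual code):

      import subprocess
      import sys

      def checkout_if_verified(sha: str) -> None:
          """Check out a commit only if its GPG signature verifies."""
          # `git verify-commit` exits non-zero when the signature is missing,
          # bad, or made by a key that is not in the local keyring.
          result = subprocess.run(
              ["git", "verify-commit", sha],
              capture_output=True, text=True,
          )
          if result.returncode != 0:
              sys.exit(f"refusing to check out {sha}: {result.stderr.strip()}")
          subprocess.run(["git", "checkout", sha], check=True)

      checkout_if_verified("abc1234")  # hypothetical commit hash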

    Goals

    A minimal Python version of the shell script exists as a pull request.

    The goal of this hackweek is to:

    • DONE: Add more unit tests
      • New and more tests can be added later
    • Partially DONE: Make the python code modular
    • DONE: Add code coverage if possible

    Resources


    Song Search with CLAP by gcolangiuli

    Description

    Contrastive Language-Audio Pretraining (CLAP) is an open-source library that enables training a neural network on both audio and text descriptions, making it possible to search for audio using a text input. Several pre-trained models for song search are already available on Hugging Face.
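
    As a rough sketch of the idea: text and audio are embedded into the same vector space and compared by similarity. This assumes the Hugging Face transformers CLAP wrappers and the laion/clap-htsat-unfused checkpoint; it is not the MVP's actual code:

      import numpy as np
      from transformers import ClapModel, ClapProcessor

      model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
      processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

      # Embed a text query into CLAP's shared text/audio space.
      text_in = processor(text=["upbeat jazz with saxophone"], return_tensors="pt")
      text_emb = model.get_text_features(**text_in).detach().numpy()[0]

      # Embed a song (48 kHz mono samples; random noise stands in here).
      audio = np.random.randn(48000 * 10)
      audio_in = processor(audios=audio, sampling_rate=48000, return_tensors="pt")
      audio_emb = model.get_audio_features(**audio_in).detach().numpy()[0]

      # Cosine similarity: the higher, the better the song matches the query.
      score = np.dot(text_emb, audio_emb) / (
          np.linalg.norm(text_emb) * np.linalg.norm(audio_emb))
      print(score)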

    SUSE Hackweek AI Song Search

    Goals

    Evaluate how CLAP can be used for song searching and determine which types of queries yield the best results by developing a Minimum Viable Product (MVP) in Python. Based on the results of this MVP, future steps could include:

    • Music Tagging;
    • Free text search;
    • Integration with an LLM (for example, with MCP or the OpenAI API) for music suggestions based on your own library.

    The code for this project will be entirely written using AI to better explore and demonstrate AI capabilities.

    Result

    In this MVP we implemented:

    • Async song analysis with the CLAP model
    • Free-text search of songs
    • Similar song search based on vector representation
    • Containerised version with web interface

    We also documented what went well and what can be improved in the use of AI.

    You can have a look at the result here:

    Future work could focus on improving the performance and stability of the analysis.

    References


    Improve chore and screen time doc generator script `wochenplaner` by gniebler

    Description

    I wrote a little Python script to generate PDF docs that can be used to track daily chore completion and screen-time usage for several people, with one page per person per week.

    I named this script wochenplaner and have been using it for a few months now.

    It needs some improvements and adjustments in how screen time is tracked and how chores are displayed.

    Goals

    • Fix chore field separation lines
    • Change the screen-time tracking logic from "global" (week-long) to daily subtraction with weekly addition of remainders (more intuitive than the current "weekly time budget" method)
    • Add logic to fill in chore fields/lines, ideally with pictures, falling back to text.

    Resources

    tbd (Gitlab repo)


    Improve/rework household chore tracker `chorazon` by gniebler

    Description

    I wrote a household chore tracker named chorazon, which is meant to be deployed as a web application in the household's local network.

    It features the ability to set up different (so far only weekly) schedules per task and per person, where tasks may span several days.

    There are "tokens", which can be collected by users. Tasks can (and usually will) have rewards configured where they yield a certain amount of tokens. The idea is that they can later be redeemed for (surprise) gifts, but this is not implemented yet. (So right now one needs to edit the DB manually to subtract tokens when they're redeemed.)

    Days are not rolled over automatically, to allow for task completion control.

    We used it in my household for several months, with mixed success. There are many limitations in the system that would warrant a revisit.

    It's written using the Pyramid Python framework with URL traversal, ZODB as the data store and Web Components for the frontend.

    Goals

    • Add admin screens for users, tasks and schedules
    • Add models, pages etc. to allow redeeming tokens for gifts/surprises
    • …?

    Resources

    tbd (Gitlab repo)


    Hackweek 25 from openSSL office in Brno, Czechia by lkocman

    Description

    Join South Moravian colleagues, Austrian friends, and local community members for Hackweek 25 at the openSSL corporation office in Brno, Czechia. This will be a relaxed and enjoyable in-person gathering where we can work on our Hackweek projects side by side, share ideas, help each other, and simply enjoy the atmosphere of hacking together for a week.

    Food, snacks, and coffee will be available to keep everyone energized and happy throughout the week. We'd like to throw a small party on Tuesday.

    Goals

    • Bring together SUSE employees and community members from the South Moravian region and nearby Austria.
    • Create a friendly space for collaboration and creativity during Hackweek 25.
    • Support each other’s projects, exchange knowledge, and experiment freely.
    • Strengthen local connections and enjoy a refreshing break from remote work.

    Resources

    Report from the grand opening of the office

    Photos on Google Photos


    Self-Scaling LLM Infrastructure Powered by Rancher by ademicev0

    Description

    The Problem

    Running LLMs can get expensive and complex pretty quickly.

    Today there are typically two choices:

    1. Use cloud APIs like OpenAI or Anthropic. Easy to start with, but costs add up at scale.
    2. Self-host everything - set up Kubernetes, figure out GPU scheduling, handle scaling, manage model serving... it's a lot of work.

    What if there was a middle ground?

    What if infrastructure scaled itself instead of making you scale it?

    Can we use existing Rancher capabilities like CAPI, autoscaling, and GitOps to make this simpler instead of building everything from scratch?

    Project Repository: github.com/alexander-demicev/llmserverless


    What This Project Does

    A key feature is hybrid deployment: requests can be routed based on complexity or privacy needs. Simple or low-sensitivity queries can use public APIs (like OpenAI), while complex or private requests are handled in-house on local infrastructure. This flexibility allows balancing cost, privacy, and performance - using cloud for routine tasks and on-premises resources for sensitive or demanding workloads.
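
    A toy sketch of such a router, using the LiteLLM client API that the stack already builds on (the model names and the privacy flag are assumptions; the real gateway would do this via LiteLLM routing rules rather than application code):

      from litellm import completion

      def route(prompt: str, private: bool) -> str:
          """Send private prompts to local Ollama, the rest to a cloud API."""
          if private:
              # Local inference via Ollama: no data leaves the host.
              resp = completion(
                  model="ollama/llama3",
                  messages=[{"role": "user", "content": prompt}],
                  api_base="http://localhost:11434",
              )
          else:
              # Routine, low-sensitivity query: a cheap cloud model.
              resp = completion(
                  model="gpt-4o-mini",
                  messages=[{"role": "user", "content": prompt}],
              )
          return resp.choices[0].message.content

      print(route("Summarize this public changelog", private=False))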

    A complete, self-scaling LLM infrastructure that:

    • Scales to zero when idle (no idle costs)
    • Scales up automatically when requests come in
    • Adds more nodes when needed, removes them when demand drops
    • Runs on any infrastructure - laptop, bare metal, or cloud

    Think of it as "serverless for LLMs" - focus on building, the infrastructure handles itself.

    How It Works

    A combination of open source tools working together:

    Flow:

    • Users interact with OpenWebUI (chat interface)
    • Requests go to LiteLLM Gateway
    • LiteLLM routes requests to:
      • Ollama (Knative) for local model inference (auto-scales pods)
      • Or cloud APIs for fallback


    Multi-agent AI assistant for Linux troubleshooting by doreilly

    Description

    Explore multi-agent architecture as a way to avoid MCP context rot.

    Having one agent with many tools bloats the context with low-level details about tool descriptions, parameter schemas, etc., which hurts LLM performance. Instead, have many specialised agents, each with just the tools it needs for its role. A top-level supervisor agent takes the user prompt and delegates to the appropriate sub-agents.

    Goals

    Create an AI assistant with sub-agents that are specialists at troubleshooting Linux subsystems, e.g. systemd, SELinux, firewalld. The agents can get information from the system by implementing their own tools as simple function calls, or by using tools from MCP servers; e.g. a systemd agent can use tools from systemd-mcp.
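
    A compact sketch of that supervisor/sub-agent split, assuming the Google ADK's Agent API (the model name and the single tool are hypothetical):

      import subprocess

      from google.adk.agents import Agent

      def list_failed_units() -> str:
          """Tool: return the list of failed systemd units."""
          out = subprocess.run(["systemctl", "--failed", "--no-pager"],
                               capture_output=True, text=True)
          return out.stdout

      systemd_agent = Agent(
          name="systemd_agent",
          model="gemini-2.0-flash",
          description="Diagnoses systemd service problems.",
          instruction="Inspect systemd state with your tools and explain failures.",
          tools=[list_failed_units],
      )

      # The supervisor sees only the sub-agents' descriptions, not their tools,
      # which keeps its own context small.
      root_agent = Agent(
          name="supervisor",
          model="gemini-2.0-flash",
          instruction="Delegate Linux troubleshooting questions to the right specialist.",
          sub_agents=[systemd_agent],
      )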

    Example prompts/responses:

    user$ the system seems slow
    assistant$ process foo with pid 12345 is using 1000% cpu ...
    
    user$ I can't connect to the apache webserver
    assistant$ the firewall is blocking http ... you can open the port with firewall-cmd --add-port ...
    

    Resources

    Language: Python. The Python ADK is more mature than the Go one.

    https://google.github.io/adk-docs/

    https://github.com/djoreilly/linux-helper


    Enable more features in mcp-server-uyuni by j_renner

    Description

    I would like to contribute to mcp-server-uyuni (the MCP server for Uyuni / Multi-Linux Manager) by exposing additional features as tools. There are lots of relevant features to be found throughout the API, for example:

    • System operations and infos
    • System groups
    • Maintenance windows
    • Ansible
    • Reporting
    • ...

    At the end of the week I managed to enable basic system group operations:

    • List all system groups visible to the user
    • Create new system groups
    • List systems assigned to a group
    • Add and remove systems from groups

    Goals

    • Set up test environment locally with the MCP server and client + a recent MLM server [DONE]
    • Identify features and use cases offering a benefit with limited effort required for enablement [DONE]
    • Create a PR to the repo [DONE]

    Resources


    MCP Server for SCC by digitaltomm

    Description

    Provide an MCP server implementation for customers to access data on scc.suse.com via the MCP protocol. The core benefit of this MCP interface is that it has direct (read) access to customer data in SCC, so the AI agent gets enhanced knowledge about individual customer data, like subscriptions, orders and registered systems.
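
    To make the shape concrete, a minimal tool in the Python MCP SDK's FastMCP style might look like this (the tool and the SCC endpoint shown are hypothetical, for illustration only):

      import httpx
      from mcp.server.fastmcp import FastMCP

      mcp = FastMCP("scc")

      @mcp.tool()
      def list_systems(org_user: str, org_password: str) -> list:
          """List systems registered for the organization in SCC."""
          # Read-only call authenticated with organization credentials.
          resp = httpx.get(
              "https://scc.suse.com/connect/organizations/systems",  # assumed endpoint
              auth=(org_user, org_password),
          )
          resp.raise_for_status()
          return resp.json()

      if __name__ == "__main__":
          mcp.run()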

    Architecture

    (architecture schema diagram)

    Goals

    We want to demonstrate a proof of concept: connecting any AI agent, for example gemini-cli or codex, to the SCC MCP server, enabling the user to ask questions about their SCC inventory.

    For this Hackweek, we aim for users to get proper responses to these example questions:

    • Which of my currently active systems are running products that are out of support?
    • Do I have ready-to-use registration codes for SLES?
    • What are the latest 5 released patches for SLES 15 SP6? Output as a list with release date, patch name, affected package names and fixed CVEs.
    • Which versions of kernel-default are available on SLES 15 SP6?

    Technical Notes

    Similar to the organization APIs, this can expose data about customers' subscriptions, orders, systems and products. Authentication should be done with organization credentials, similar to what needs to be provided to RMT/MLM. Customers can connect to the SCC MCP server from their own MCP-compatible client and Large Language Model (LLM), so no third party is involved.

    Milestones

    [x] Basic MCP API setup
      MCP endpoints
      [x] Products / Repositories
      [x] Subscriptions / Orders 
      [x] Systems
      [x] Packages
    [x] Document usage with Gemini CLI, Codex
    

    Resources

    Gemini CLI setup:

    ~/.gemini/settings.json:
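
    The actual settings block is not reproduced here; as a purely hypothetical illustration, Gemini CLI's mcpServers convention looks roughly like this (the server name and URL are made up; check the Gemini CLI docs for the exact schema):

      {
        "mcpServers": {
          "scc": {
            "httpUrl": "https://scc.suse.com/mcp"
          }
        }
      }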


    Extended private brain - RAG my own scripts and data into offline LLM AI by tjyrinki_suse

    Description

    Purely for study purposes, I'd like to find out whether I could teach an LLM some of my own accumulated knowledge, to use it as a sort of extended brain.

    I might use qwen3-coder or something similar as a starting point.

    Everything would be done 100% offline, with no network available to the container, since I prefer to see when the network is needed and to make it so that it's never needed (other than for the initial downloads).
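
    A bare-bones sketch of the retrieval step such a setup builds on, assuming a local sentence-transformers model and vectors kept in a pickle file (as in the Day 1 notes below); all names are illustrative:

      import pickle

      import numpy as np
      from sentence_transformers import SentenceTransformer

      # A local embedding model: downloaded once, then usable fully offline.
      model = SentenceTransformer("all-MiniLM-L6-v2")

      docs = ["notes on jujutsu workflows", "my backup scripts", "openssl cheatsheet"]
      vectors = model.encode(docs, normalize_embeddings=True)

      # Persist the vectors so later runs need no recomputation.
      with open("vectors.pkl", "wb") as f:
          pickle.dump((docs, vectors), f)

      def retrieve(query: str, k: int = 2) -> list:
          """Return the k documents most similar to the query."""
          q = model.encode([query], normalize_embeddings=True)[0]
          scores = vectors @ q  # cosine similarity, as vectors are normalized
          return [docs[i] for i in np.argsort(scores)[::-1][:k]]

      # The retrieved snippets are then pasted into the LLM prompt as context.
      print(retrieve("how do I rotate my backups?"))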

    Goals

    1. Learn something about RAG, LLM, AI.
    2. Find out if everything works offline as intended.
    3. As an end result, have a new way to access my own existing know-how, so that I can query the wisdom in it.
    4. Be flexible to pivot in any direction, as long as there are new things learned.

    Resources

    To be found on the fly.

    Timeline

    Day 1 (of 4)

    • Tried out a RAG demo, expanded on feeding it my own data
    • Experimented with qwen3-coder to add a persistent chat functionality, and keeping vectors in a pickle file
    • Optimized to keep everything within the context window
    • Learned and added a bit of pytest

    Day 2

    • More experimenting and more data
    • Studied ChromaDB
    • Added a web UI that works from another computer even though the container itself sees the network as down

    Day 3

    • The above RAG is working well enough for demonstration purposes.
    • Pivoted to trying out OpenCode, configuring the local Ollama qwen3-coder there to analyze the RAG demo.
    • Figured out how to configure the Ollama template to be usable under OpenCode. OpenCode locally is super slow compared to just running qwen3-coder alone.

    Day 4 (final day)

    • Battled with OpenCode, which was both slow and kept piling up broken things.
    • Called it a success, since in the end the agentic AI was working locally.
    • Cleaned up a bit of the mess left behind.

    Blog Post

    Summarized the findings in a blog post.