Description

Two years ago, I evaluated solar routers as part of Hack Week 24. I assembled one, and it has been running almost smoothly.

However, its code quality is not perfect and the codebase doesn't have any test cases (which is tricky, since it is embedded code and relies on external data to react).

Before improving the code itself, a test suite should be created to ensure code additions don't cause regressions.

Goals

Create a test suite that allows testing the solar router code in a virtual environment, using an LLM to help create this test suite.

If successful, try to improve the codebase itself by having it reviewed by an LLM.
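
To make the goal concrete, the kind of test I have in mind could look like the sketch below: a pytest case feeding simulated grid power readings to the routing decision logic. The compute_dimmer_ratio function and its thresholds are purely hypothetical placeholders for the real router code, assuming the decision logic can be extracted as a pure function.

    import pytest

    def compute_dimmer_ratio(grid_power_w, current_ratio):
        """Hypothetical stand-in for the router's decision logic:
        exporting to the grid raises the dimmer, importing lowers it."""
        step = 0.05
        if grid_power_w < -50:      # exporting surplus: push more power to the load
            return min(1.0, current_ratio + step)
        if grid_power_w > 50:       # importing from the grid: back off
            return max(0.0, current_ratio - step)
        return current_ratio        # dead band: hold

    @pytest.mark.parametrize("grid_power,start,expected", [
        (-200, 0.5, 0.55),   # surplus -> ramp up
        (300, 0.5, 0.45),    # drawing from grid -> ramp down
        (0, 0.5, 0.5),       # dead band -> hold
    ])
    def test_dimmer_follows_grid_power(grid_power, start, expected):
        assert compute_dimmer_ratio(grid_power, start) == pytest.approx(expected)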

Resources

Solar router GitHub project

Looking for hackers with the skills:

llm testcases testingframework embedded solarpanel

This project is part of:

Hack Week 25

Activity

  • about 1 month ago: fcrozat left this project.
  • about 1 month ago: pgonin liked this project.
  • 2 months ago: fcrozat started this project.
  • 2 months ago: fcrozat added keyword "llm" to this project.
  • 2 months ago: fcrozat added keyword "testcases" to this project.
  • 2 months ago: fcrozat added keyword "testingframework" to this project.
  • 2 months ago: fcrozat added keyword "embedded" to this project.
  • 2 months ago: fcrozat added keyword "solarpanel" to this project.
  • 2 months ago: fcrozat originated this project.


    Similar Projects

    Backporting patches using LLM by jankara

    Description

    Backporting Linux kernel fixes (either for CVE issues or as part of the general git-fixes workflow) is boring and mostly mechanical work (dealing with changes in context, renamed variables, new helper functions, etc.). The idea of this project is to explore the use of LLMs for backporting Linux kernel commits to SUSE kernels.

    Goals

    • Create a safe environment allowing the LLM to run and backport patches without exposing the whole filesystem to it (for privacy and security reasons); see the sketch after this list.
    • Write a prompt that will guide the LLM through the backporting process, and fine-tune it based on experimental results.
    • Explore the success rate of LLMs when backporting various patches.
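
    A minimal sketch of the first goal, assuming Docker is the sandbox: only the kernel tree is bind-mounted into the container, so the backporting tool cannot read the rest of the host filesystem. The image name and layout here are illustrative; the actual container with usage instructions lives in the repository linked below.

        import subprocess

        def run_sandboxed(kernel_tree, image="backporter:latest", command=("bash",)):
            # Only the kernel tree is exposed to the container; everything
            # else on the host stays invisible to the tool running inside.
            return subprocess.run(
                ["docker", "run", "--rm",
                 "-v", f"{kernel_tree}:/work", "-w", "/work",
                 image, *command],
                check=True,
            )

        # illustrative usage:
        # run_sandboxed("/path/to/kernel-source", command=("git", "log", "--oneline", "-5"))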

    Resources

    • Docker
    • Gemini CLI

    Repository

    The current version of the container, with some instructions for use, is at: https://gitlab.suse.de/jankara/gemini-cli-backporter


    Explore LLM evaluation metrics by thbertoldi

    Description

    Learn the best practices for evaluating LLM performance with an open-source framework such as DeepEval.
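
    As a rough illustration, a DeepEval test can look like the sketch below (based on its quickstart-style API; details may vary between versions, and the relevancy metric needs an LLM judge configured):

        from deepeval import assert_test
        from deepeval.metrics import AnswerRelevancyMetric
        from deepeval.test_case import LLMTestCase

        def test_answer_relevancy():
            # scores how relevant the model's answer is to the question
            metric = AnswerRelevancyMetric(threshold=0.7)
            test_case = LLMTestCase(
                input="What if these shoes don't fit?",
                actual_output="We offer a 30-day full refund at no extra cost.",
            )
            assert_test(test_case, [metric])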

    Goals

    Curate the knowledge learned during practice and present it to colleagues.

    -> Maybe publish a blog post on SUSE's blog?

    Resources

    https://deepeval.com

    https://docs.pactflow.io/docs/bi-directional-contract-testing


    Self-Scaling LLM Infrastructure Powered by Rancher by ademicev0



    Description

    The Problem

    Running LLMs can get expensive and complex pretty quickly.

    Today there are typically two choices:

    1. Use cloud APIs like OpenAI or Anthropic. Easy to start with, but costs add up at scale.
    2. Self-host everything - set up Kubernetes, figure out GPU scheduling, handle scaling, manage model serving... it's a lot of work.

    What if there was a middle ground?

    What if infrastructure scaled itself instead of making you scale it?

    Can we use existing Rancher capabilities like CAPI, autoscaling, and GitOps to make this simpler instead of building everything from scratch?

    Project Repository: github.com/alexander-demicev/llmserverless


    What This Project Does

    A key feature is hybrid deployment: requests can be routed based on complexity or privacy needs. Simple or low-sensitivity queries can use public APIs (like OpenAI), while complex or private requests are handled in-house on local infrastructure. This flexibility allows balancing cost, privacy, and performance - using cloud for routine tasks and on-premises resources for sensitive or demanding workloads.
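
    A minimal sketch of that routing idea using LiteLLM's Python API; the model names and the Ollama endpoint are placeholders for whatever the gateway actually exposes:

        import litellm

        def answer(prompt: str, private: bool = False):
            # Private or complex requests stay on the local Ollama deployment;
            # everything else can fall back to a cloud API.
            if private:
                return litellm.completion(
                    model="ollama/llama3",                  # placeholder local model
                    api_base="http://ollama.default.svc",   # placeholder Knative/Ollama endpoint
                    messages=[{"role": "user", "content": prompt}],
                )
            return litellm.completion(
                model="gpt-4o-mini",                        # placeholder cloud model
                messages=[{"role": "user", "content": prompt}],
            )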

    A complete, self-scaling LLM infrastructure that:

    • Scales to zero when idle (no idle costs)
    • Scales up automatically when requests come in
    • Adds more nodes when needed, removes them when demand drops
    • Runs on any infrastructure - laptop, bare metal, or cloud

    Think of it as "serverless for LLMs": focus on building, and the infrastructure handles itself.

    How It Works

    A combination of open source tools working together:

    Flow:

    • Users interact with OpenWebUI (chat interface)
    • Requests go to LiteLLM Gateway
    • LiteLLM routes requests to:
      • Ollama (Knative) for local model inference (auto-scales pods)
      • Or cloud APIs for fallback


    Song Search with CLAP by gcolangiuli

    Description

    Contrastive Language-Audio Pretraining (CLAP) is an open-source library that enables training a neural network on both audio and text descriptions, making it possible to search for audio using a text input. Several pre-trained models for song search are already available on Hugging Face.
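
    A minimal sketch of how such a search could work with a CLAP model from Hugging Face transformers (the model name, sample rate and placeholder audio are assumptions):

        import numpy as np
        import torch
        from transformers import ClapModel, ClapProcessor

        model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
        processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

        # placeholder: two 3-second clips of silence at 48 kHz stand in for real songs
        audio_samples = [np.zeros(48000 * 3, dtype=np.float32) for _ in range(2)]
        audio_inputs = processor(audios=audio_samples, sampling_rate=48000, return_tensors="pt")
        audio_emb = model.get_audio_features(**audio_inputs)

        text_inputs = processor(text=["melancholic piano ballad"], return_tensors="pt", padding=True)
        text_emb = model.get_text_features(**text_inputs)

        # cosine similarity ranks the songs against the text query
        scores = torch.nn.functional.cosine_similarity(text_emb, audio_emb)
        best_match = scores.argmax()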

    SUSE Hackweek AI Song Search

    Goals

    Evaluate how CLAP can be used for song searching and determine which types of queries yield the best results by developing a Minimum Viable Product (MVP) in Python. Based on the results of this MVP, future steps could include:

    • Music Tagging;
    • Free text search;
    • Integration with an LLM (for example, with MCP or the OpenAI API) for music suggestions based on your own library.

    The code for this project will be entirely written using AI to better explore and demonstrate AI capabilities.

    Result

    In this MVP we implemented:

    • Async song analysis with the CLAP model
    • Free-text search of the songs
    • Similar-song search based on vector representations
    • Containerised version with a web interface

    We also documented what went well and what can be improved in the use of AI.

    You can have a look at the result here:

    Future work could focus on performance improvements and the stability of the analysis.

    References


    Extended private brain - RAG my own scripts and data into offline LLM AI by tjyrinki_suse

    Description

    Purely for study purposes, I'd like to find out whether I could teach an LLM some of my own accumulated knowledge, to use it as a sort of extended brain.

    I might use qwen3-coder or something similar as a starting point.

    Everything would be done 100% offline, with no network available to the container, since I prefer to see when network access is needed and make it so that it's never needed (other than for the initial downloads).

    Goals

    1. Learn something about RAG, LLM, AI.
    2. Find out if everything works offline as intended.
    3. As an end result, have a new way to access my own existing know-how, so that I can query the wisdom in it.
    4. Be flexible to pivot in any direction, as long as there are new things learned.

    Resources

    To be found on the fly.

    Timeline

    Day 1 (of 4)

    • Tried out a RAG demo, expanded on feeding it my own data
    • Experimented with qwen3-coder to add a persistent chat functionality, keeping vectors in a pickle file (see the retrieval sketch after this list)
    • Optimizations to keep everything within the context window
    • Learn and add a bit of PyTest
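
    The pickle-based retrieval from Day 1 boils down to something like the sketch below; the embedding model is just an example of a locally cached one, and generation would then go to the local qwen3-coder via Ollama:

        import pickle
        import numpy as np
        from sentence_transformers import SentenceTransformer

        embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumes the model is already downloaded

        def build_index(chunks, path="vectors.pkl"):
            vecs = embedder.encode(chunks, normalize_embeddings=True)
            with open(path, "wb") as f:
                pickle.dump({"chunks": chunks, "vecs": vecs}, f)

        def retrieve(query, path="vectors.pkl", k=3):
            with open(path, "rb") as f:
                store = pickle.load(f)
            q = embedder.encode([query], normalize_embeddings=True)[0]
            scores = store["vecs"] @ q          # cosine similarity (vectors are normalized)
            top = np.argsort(scores)[::-1][:k]  # k best-matching chunks to put in the prompt
            return [store["chunks"][i] for i in top]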

    Day 2

    • More experimenting and more data
    • Study ChromaDB
    • Add a Web UI that works from another computer, even though the container sees the network as down

    Day 3

    • The above RAG is working well enough for demonstration purposes.
    • Pivot to trying out OpenCode, configuring local Ollama qwen3-coder there, to analyze the RAG demo.
    • Figured out how to configure the Ollama template to be usable under OpenCode. OpenCode locally is much slower than just running qwen3-coder alone.

    Day 4 (final day)

    • Battle with OpenCode, which was both slow and kept piling up broken things.
    • Call it a success, as the agentic AI was, after all, working locally.
    • Clean up the mess left behind a bit.

    Blog Post

    Summarized the findings in a blog post.


    Learn a bit of embedded programming with Rust in a micro:bit v2 by aplanas

    Description

    micro:bit is a small single-board computer with an ARM Cortex-M4 with the FPU extension, a very constrained amount of memory, and a bunch of sensors and LEDs.

    The board is very well documented, with schematics and code for all the features available, so it is an excellent platform for learning embedded programming.

    Rust is a systems programming language that can generate ARM code and has crates (libraries) to access the micro:bit hardware. There is plenty of documentation about how to make small programs that will run on the micro:bit.

    Goals

    Start learning about embedded programming in Rust, and maybe write some code for the small KS4036F robot car from Keyestudio.

    Resources

    Diary

    Day 1

    • Start reading https://mb2.implrust.com/abstraction-layers.html
    • Prepare the dev environment (cross compiler, probe-rs)
    • Flash the first code onto the board (blinky LED)
    • Checking differences between BSP and HAL
    • Compile and install a more complex example, with stack protection
    • Reading about the simplicity of xtask, as an alias for workspace execution
    • Reading the C++ code of the official micro:bit libraries. They have a font!

    Day 2

    • There are multiple BSPs for the micro:bit. One uses async code for non-blocking operations
    • Download and study a bit of the API for microbit-v2, the official nRF crate
    • Took a look at the KS4036F programming; it seems the communication is multiplexed via I2C
    • The motor speed can be selected via PWM (pulse-width modulation): drive the pin for a larger fraction of each cycle (higher duty cycle), and the speed will increase
    • Scrolling some text
    • Debug by printing! defmt is a crate that can be used with probe-rs to emit logs
    • Start reading input from the board: buttons
    • The logo can be touched and detected as a floating point value

    Day 3

    • A bit confused about how to read the float value from a pin


    OSHW USB token for Passkeys (FIDO2, U2F, WebAuthn) and PGP by duwe

    Description

    The idea of carrying your precious key material along in a specially secured hardware item is almost as old as public keys themselves, starting with the OpenPGP card. Nowadays, a USB plug or NFC are the hardware interfaces of choice, and password-less log-ins are fortunately becoming more popular and standardised.

    Meanwhile there are a few products available in that field, for example

    • yubikey - the "market leader", who continues to sell off buggy, allegedly unfixable firmware ROMs from old stock. Needless to say, it's all but open source, so assume backdoors.

    • nitrokey - the "start" variant is open source, but the hardware was found to leak its flash ROM content via the SWD debugging interface (even when the flash is read-protected!). Compute power is barely enough for Curve25519, and flash memory leaves room for only 3 keys.

    • solokey(2) - quite neat hardware, with a secure enclave called "TrustZone-M". Unfortunately, the OSS firmware development is stuck in a rusty dead end and cannot use it. Besides, NXP's support for open source toolchains for its devboards is extremely limited.

    I plan to base this project on the not-so-tiny USB stack, which is extremely easy to retarget, and to rewrite / refactor the crypto protocols to use the keys only via handles, so the actual key material can be stored securely. Best OSS support seems to be for STM32-based products.

    Goals

    Create a proof-of-concept item that can provide a second factor for logins and/or decrypt a PGP mail with your private key without disclosing the key itself. Implement or at least show a migration path to store the private key in a location with elevated hardware security.
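
    The handle-based idea can be sketched (in host-side Python using the cryptography package, purely as a conceptual illustration rather than firmware code): callers only ever hold an opaque handle, never the key bytes themselves.

        import os
        from cryptography.hazmat.primitives import serialization
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        class KeyStore:
            """Illustrative stand-in for the secured storage on the token."""

            def __init__(self):
                self._keys = {}  # handle -> private key; never leaves this object

            def create_key(self) -> str:
                handle = os.urandom(8).hex()
                self._keys[handle] = Ed25519PrivateKey.generate()
                return handle  # only the opaque handle is returned to the caller

            def sign(self, handle: str, data: bytes) -> bytes:
                return self._keys[handle].sign(data)

            def public_key(self, handle: str) -> bytes:
                return self._keys[handle].public_key().public_bytes(
                    serialization.Encoding.Raw, serialization.PublicFormat.Raw
                )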

    Resources

    STM32 Nucleo, blackmagic probe, tropicsquare tropic01, arm-none cross toolchain


    Smart lighting with Pico 2 by jmodak

    Description

    I am trying to create a smart-lighting project with a Raspberry Pi Pico that reacts to a movie's visuals and audio. This involves combining two distinct functions: ambient screen lighting (visual response) and sound-reactive lighting (audio response).

    Goals

    • Visuals: Capturing the screen's colour requires an external device to analyse screen content and send colour data to the MCU via serial communication.
    • Audio: A sound sensor module connected directly to the Pico that can detect sound volume.
    • Pico 2 W: The MCU receives data from both inputs and controls an LED strip (see the sketch after this list).
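
    A rough MicroPython sketch of that loop (assuming MicroPython firmware on the Pico; the pin assignments, the "R,G,B" serial format and the brightness mixing are my assumptions, not part of the project yet):

        from machine import ADC, Pin, UART
        from neopixel import NeoPixel

        NUM_LEDS = 30
        strip = NeoPixel(Pin(2), NUM_LEDS)      # WS2812 data line on GP2 (assumption)
        uart = UART(0, baudrate=115200)         # colour data from the PC-side screen grabber
        mic = ADC(26)                           # sound sensor on ADC0 / GP26 (assumption)

        def set_strip(r, g, b):
            for i in range(NUM_LEDS):
                strip[i] = (r, g, b)
            strip.write()

        while True:
            line = uart.readline()              # expects lines like "255,120,30\n"
            if line:
                try:
                    r, g, b = (int(x) for x in line.decode().strip().split(","))
                except ValueError:
                    continue                    # ignore malformed lines
                level = mic.read_u16() / 65535  # 0.0 .. 1.0 sound level
                set_strip(int(r * level), int(g * level), int(b * level))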

    Resources

    • Raspberry Pi Pico 2 W
    • RGB LED strip
    • Sound detecting sensor
    • Power supply
    • Breadboard and wires