Description

Two years ago, as part of Hack Week 24, I evaluated solar routers. I assembled one, and it has been running almost smoothly ever since.

However, its code quality is not perfect and the codebase doesn't have any test cases (which is tricky to fix, since it is embedded code that relies on external data to react).

Before improving the code itself, a test suite should be created to ensure that code changes don't cause regressions.

Goals

Create a test suite that allows the solar router code to be tested in a virtual environment, using an LLM to help write it.

If successful, try to improve the codebase itself by having it reviewed by an LLM.
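
To make the goal concrete, here is a minimal sketch of what one such virtual-environment test could look like, assuming the routing logic can be extracted into a pure function. The module `solar_router`, the function `compute_routing`, and its parameters are hypothetical stand-ins, not the project's actual API.

```python
# test_routing.py - sketch of a regression test run in a virtual
# environment instead of on the device. `solar_router.compute_routing`
# and its parameters are hypothetical, not the project's actual API.

import pytest

from solar_router import compute_routing  # hypothetical pure routing function


def test_surplus_is_diverted():
    # 2 kW of solar production against 500 W of house consumption:
    # the 1.5 kW surplus should be diverted to the load, not exported.
    routed_w = compute_routing(solar_w=2000, house_w=500)
    assert routed_w == pytest.approx(1500, abs=50)


def test_no_diversion_when_importing_from_grid():
    # The house consumes more than the panels produce: nothing to divert.
    assert compute_routing(solar_w=300, house_w=1200) == 0
```

Feeding simulated production and consumption readings in place of the hardware inputs is what would let such a suite run in CI without a panel attached.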

Resources

Solar router GitHub project

Looking for hackers with the skills:

llm testcases testingframework embedded solarpanel

This project is part of:

Hack Week 25



Similar Projects

SUSE Observability MCP server by drutigliano

Description

The idea is to implement the SUSE Observability Model Context Protocol (MCP) Server as a specialized middle-tier API that translates the complex, high-cardinality observability data from StackState (topology, metrics, and events) into highly structured, contextually rich, LLM-ready snippets.

This MCP Server abstracts the StackState APIs. Its primary function is to serve as a tool/function-calling target for AI agents. When an AI receives an alert or a user query (e.g., "What caused the outage?"), it calls an MCP Server endpoint. The server then fetches the relevant operational facts, summarizes them, normalizes technical identifiers (like URNs and raw metric names) into natural language concepts, and returns a concise JSON or YAML payload. This payload is injected directly into the LLM's prompt, ensuring the final diagnosis or action is grounded in real-time, accurate SUSE Observability data, effectively minimizing hallucinations.
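
As a rough illustration of that tool-calling shape, here is a minimal sketch following the quickstart pattern from the build-server guide linked under Resources. The tool name, the hard-coded payload fields, and the omitted StackState query are illustrative assumptions, not the server's actual API.

```python
# server.py - minimal sketch of one MCP tool, following the quickstart
# pattern from modelcontextprotocol.io/docs/develop/build-server.
# The tool name and the hard-coded payload fields are illustrative
# assumptions, not the real SUSE Observability API.

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("suse-observability")


@mcp.tool()
def get_component_health(component_name: str) -> dict:
    """Return a compact, LLM-ready health summary for one component."""
    # A real implementation would query StackState here, then summarize
    # and normalize the result; this stub only shows the shape of the
    # structured payload that gets injected into the LLM's prompt.
    return {
        "component": component_name,
        "health": "CRITICAL",
        "recent_events": ["restart loop detected in the last 10 minutes"],
        "related_components": ["checkout-db"],
    }


if __name__ == "__main__":
    mcp.run(transport="stdio")
```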

Goals

• Grounding AI Responses: Ensure that all AI diagnoses, root cause analyses, and action recommendations are strictly based on verifiable, real-time data retrieved from the SUSE Observability StackState platform.
• Simplifying Data Access: Abstract the complexity of StackState's native APIs (e.g., Time Travel, 4T Data Model) into simple, semantic functions that can be easily invoked by LLM tool-calling mechanisms.
• Data Normalization: Convert complex, technical identifiers (like component URNs, raw metric names, and proprietary health states) into standardized, natural language terms that an LLM can easily reason over; a small sketch follows this list.
• Enabling Automated Remediation: Define clear, action-oriented MCP endpoints (e.g., execute_runbook) that allow the AI agent to initiate automated operational workflows (e.g., restarts, scaling) after a diagnosis, closing the loop on observability.
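
As a rough sketch of the Data Normalization goal, the snippet below maps raw identifiers to terms an LLM can reason over; the URN layout and the health-state names are assumptions for illustration, not StackState's documented format.

```python
# Sketch of the "Data Normalization" goal: map raw identifiers to plain
# language. The URN layout and health-state names below are assumptions
# for illustration, not StackState's documented format.

import re

# Hypothetical mapping from proprietary health states to plain language.
HEALTH_LABELS = {"CLEAR": "healthy", "DEVIATING": "degraded", "CRITICAL": "failing"}


def normalize_urn(urn: str) -> str:
    """Turn e.g. 'urn:stackstate:component/checkout-db' into 'component checkout-db'."""
    match = re.match(r"urn:[\w.-]+:(?P<kind>[\w-]+)/(?P<name>.+)", urn)
    return f"{match['kind']} {match['name']}" if match else urn


if __name__ == "__main__":
    print(normalize_urn("urn:stackstate:component/checkout-db"))  # component checkout-db
    print(HEALTH_LABELS.get("DEVIATING", "unknown"))              # degraded
```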

Resources

• https://www.honeycomb.io/blog/its-the-end-of-observability-as-we-know-it-and-i-feel-fine
• https://www.datadoghq.com/blog/datadog-remote-mcp-server
• https://modelcontextprotocol.io/specification/2025-06-18/index
• https://modelcontextprotocol.io/docs/develop/build-server

Basic implementation

• https://github.com/drutigliano19/suse-observability-mcp-server