The idea is to explore the technologies and components needed to build an AI that predicts pitfalls in source code which could cause run-time misbehaviour.

The potential areas where this idea could have positive implications are:

  • embedded systems (e.g. autonomous driving cars, mission-critical applications, etc.)

  • general purpose software

  • integration into software development IDEs

  • integration into compilers

The goal is to reduce false positives during static code analysis to a bare minimum, and to ensure that hard-to-identify issues are highlighted to the developer.
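To make the goal concrete, here is a minimal, purely rule-based sketch of the kind of static check such a tool would start from; an ML layer could then rank or filter findings like these to cut false positives. It uses only Python's standard-library `ast` module, and the function name is illustrative, not part of the project:

```python
import ast

def find_mutable_defaults(source: str) -> list[str]:
    """Flag a classic run-time pitfall: mutable default arguments.

    Returns the names of functions whose default values are mutable
    literals (list, dict, or set), which are shared across calls.
    """
    risky = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    risky.append(node.name)
    return risky

code = """
def append(item, bucket=[]):
    bucket.append(item)
    return bucket

def safe(item, bucket=None):
    return [item] if bucket is None else bucket + [item]
"""
print(find_mutable_defaults(code))  # only 'append' is flagged
```

A rule like this never misses its pattern but also cannot judge intent; that gap between syntactic matches and actual bugs is exactly where a learned model could help.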

This project is part of:

Hack Week 18

Activity

  • over 6 years ago: lyan liked this project.
  • over 6 years ago: a_faerber liked this project.
  • over 6 years ago: moio liked this project.
  • over 6 years ago: acho liked this project.
  • over 6 years ago: mvarlese started this project.
  • over 6 years ago: afesta liked this project.
  • over 6 years ago: mvarlese added keyword "machinelearning" to this project.
  • over 6 years ago: mvarlese added keyword "artificial-intelligence" to this project.
  • over 6 years ago: mvarlese added keyword "staticanalysis" to this project.
  • over 6 years ago: mvarlese added keyword "toolchains" to this project.
  • over 6 years ago: mvarlese added keyword "compilers" to this project.
  • over 6 years ago: mvarlese originated this project.


    Similar Projects

    SUSE Observability MCP server by drutigliano

    Description

    The idea is to implement the SUSE Observability Model Context Protocol (MCP) Server as a specialized, middle-tier API designed to translate the complex, high-cardinality observability data from StackState (topology, metrics, and events) into highly structured, contextually rich, and LLM-ready snippets.

    This MCP Server abstracts the StackState APIs. Its primary function is to serve as a Tool/Function Calling target for AI agents. When an AI receives an alert or a user query (e.g., "What caused the outage?"), the AI calls an MCP Server endpoint. The server then fetches the relevant operational facts, summarizes them, normalizes technical identifiers (like URNs and raw metric names) into natural language concepts, and returns a concise JSON or YAML payload. This payload is then injected directly into the LLM's prompt, ensuring the final diagnosis or action is grounded in real-time, accurate SUSE Observability data, effectively minimizing hallucinations.
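    The fetch-normalize-return flow described above can be sketched as a single tool handler. Everything here is hypothetical: the URNs, the `FRIENDLY` mapping, and the `raw_facts` dict stand in for real StackState API responses, and the function name is not taken from the actual server.

```python
import json

# Hypothetical URN/metric-to-natural-language table; the real server
# would derive this from StackState topology and metric metadata.
FRIENDLY = {
    "urn:stackstate:component:pod-7f9c": "checkout-service pod",
    "mem_rss_bytes": "resident memory usage",
}

def get_component_health(urn: str) -> str:
    """Sketch of one MCP tool: fetch facts, normalize identifiers,
    and return a concise, LLM-ready JSON payload."""
    # Stand-in for a real StackState API call.
    raw_facts = {
        "identifier": urn,
        "healthState": "DEVIATING",
        "metric": "mem_rss_bytes",
        "value": 1.9e9,
    }
    metric = FRIENDLY.get(raw_facts["metric"], raw_facts["metric"])
    summary = {
        "component": FRIENDLY.get(urn, urn),
        "status": raw_facts["healthState"].lower(),
        "signal": f"{metric} at {raw_facts['value'] / 1e9:.1f} GB",
    }
    return json.dumps(summary)

print(get_component_health("urn:stackstate:component:pod-7f9c"))
```

    The key design point is that the LLM never sees URNs or raw metric names, only the short normalized payload, which keeps prompts small and grounded.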

    Goals

    • Grounding AI Responses: Ensure that all AI diagnoses, root cause analyses, and action recommendations are strictly based on verifiable, real-time data retrieved from the SUSE Observability StackState platform.
    • Simplifying Data Access: Abstract the complexity of StackState's native APIs (e.g., Time Travel, 4T Data Model) into simple, semantic functions that can be easily invoked by LLM tool-calling mechanisms.
    • Data Normalization: Convert complex, technical identifiers (like component URNs, raw metric names, and proprietary health states) into standardized, natural language terms that an LLM can easily reason over.
    • Enabling Automated Remediation: Define clear, action-oriented MCP endpoints (e.g., execute_runbook) that allow the AI agent to initiate automated operational workflows (e.g., restarts, scaling) after a diagnosis, closing the loop on observability.
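    The "simple, semantic functions" and action-oriented endpoints in the goals above would surface to agents as tool declarations. A sketch, assuming the MCP `tools/list` convention of `name`/`description`/`inputSchema`; `execute_runbook` comes from the project text, the rest is illustrative:

```python
import json

# Hypothetical MCP tool declarations for the goals listed above.
TOOLS = [
    {
        "name": "get_component_health",
        "description": "Current health state of a component, in plain terms.",
        "inputSchema": {
            "type": "object",
            "properties": {"component": {"type": "string"}},
            "required": ["component"],
        },
    },
    {
        "name": "execute_runbook",
        "description": "Run a remediation workflow (e.g. restart, scale).",
        "inputSchema": {
            "type": "object",
            "properties": {"runbook_id": {"type": "string"}},
            "required": ["runbook_id"],
        },
    },
]

print(json.dumps({"tools": TOOLS}, indent=2))
```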

    Resources

    • https://www.honeycomb.io/blog/its-the-end-of-observability-as-we-know-it-and-i-feel-fine
    • https://www.datadoghq.com/blog/datadog-remote-mcp-server
    • https://modelcontextprotocol.io/specification/2025-06-18/index

    Basic implementation

    • https://github.com/drutigliano19/suse-observability-mcp-server