A common challenge for OpenStack and Kubernetes deployments is debugging the network when things go awry. The aim of DPHAT is to provide operators of cloud infrastructure with tooling that can analyze the environment and supply the following:

  • Feedback that the environment is in a healthy operational state
  • Identification of and guidance about where something in the network fabric is broken
  • Guidance on remediation steps
  • A pluggable interface to enable support for various cloud platforms, their respective networking backends, and any hardware devices (i.e. switches/routers) present in the deployment (a sketch of such an interface follows this list)
  • A RESTful API, CLI, and UI
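
For the pluggable interface, a contract along the following lines is one possibility. This is a hypothetical sketch in Go (the project's implementation language is not stated), and every type and method name here is invented for illustration:

    // Hypothetical plugin contract for DPHAT backends. A backend (e.g. an
    // OVN/Neutron driver or a switch vendor integration) supplies its view
    // of the topology and runs probes on request.
    package dphat

    import "context"

    // Topology is a placeholder for the backend's view of the network fabric.
    type Topology struct {
        Nodes []string
        Links [][2]string
    }

    // ProbeResult reports whether an endpoint behaved as expected.
    type ProbeResult struct {
        Healthy bool
        Detail  string
    }

    // Backend is what each platform, networking backend, or hardware
    // plugin would implement.
    type Backend interface {
        Name() string
        CollectTopology(ctx context.Context) (*Topology, error)
        Probe(ctx context.Context, endpoint string) (ProbeResult, error)
        Remediations(issue string) []string
    }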

This involves:

  • Gathering information from any relevant SDN controller, representing the network topology for the cloud, and developing an algorithm for analyzing the topology
  • Probing VMs and containers via ARP, ICMP (ping), port scans, ofproto/trace, etc. to assess forwarding and security policy instantiation (a minimal probe sketch follows this list)
  • Reading pod / compute node state and identifying missing namespaces, tap devices, iptables chains, etc.
  • Building a database of remediation actions that can be correlated with issues flagged by DPHAT
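
As a concrete example of the probing above, here is a minimal sketch, again in Go with illustrative names, of an ICMP reachability check run from a compute node. It assumes the ip and ping utilities are available; in DPHAT the namespace and address would come from the SDN controller's view of the topology:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // probeICMP pings addr from inside the network namespace ns,
    // i.e. the equivalent of: ip netns exec <ns> ping -c 1 -W 2 <addr>
    func probeICMP(ns, addr string) error {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        cmd := exec.CommandContext(ctx, "ip", "netns", "exec", ns,
            "ping", "-c", "1", "-W", "2", addr)
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("probe of %s in %s failed: %v\n%s", addr, ns, err, out)
        }
        return nil
    }

    func main() {
        // Illustrative namespace and address only.
        if err := probeICMP("qdhcp-1234", "10.0.0.5"); err != nil {
            fmt.Println("UNHEALTHY:", err)
            return
        }
        fmt.Println("HEALTHY: endpoint reachable")
    }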

If you want to help alleviate the headache of debugging networking issues in the cloud, let's work together!

Looking for hackers with the skills:

openstack kubernetes networking sdn openvswitch

This project is part of:

Hack Week 18

Activity

  • almost 6 years ago: nicolasbock started this project.
  • almost 6 years ago: nicolasbock liked this project.
  • almost 6 years ago: rtidwell added keyword "openstack" to this project.
  • almost 6 years ago: rtidwell added keyword "kubernetes" to this project.
  • almost 6 years ago: rtidwell added keyword "networking" to this project.
  • almost 6 years ago: rtidwell added keyword "sdn" to this project.
  • almost 6 years ago: rtidwell added keyword "openvswitch" to this project.
  • almost 6 years ago: rtidwell originated this project.


    Similar Projects

    kubectl clone: Seamlessly Clone Kubernetes Resources Across Multiple Rancher Clusters and Projects by dpunia

    Description

    kubectl clone is a kubectl plugin that empowers users to clone Kubernetes resources across multiple clusters and projects managed by Rancher. It simplifies the process of duplicating resources from one cluster to another or within different namespaces and projects, with optional on-the-fly modifications. This tool enhances multi-cluster resource management, making it invaluable for environments where Rancher orchestrates numerous Kubernetes clusters.
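
    The core clone flow could look roughly like the following Go sketch using client-go. This is not the plugin's actual code, and for simplicity it reads two kubeconfig files directly rather than going through the Rancher API:

        // Sketch only: copy a Deployment from a source cluster to a target
        // cluster, dropping server-assigned metadata so the target API
        // server can accept it.
        package main

        import (
            "context"

            appsv1 "k8s.io/api/apps/v1"
            metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
            "k8s.io/client-go/kubernetes"
            "k8s.io/client-go/tools/clientcmd"
        )

        func cloneDeployment(srcKubeconfig, dstKubeconfig, ns, name, newName string) error {
            srcCfg, err := clientcmd.BuildConfigFromFlags("", srcKubeconfig)
            if err != nil {
                return err
            }
            dstCfg, err := clientcmd.BuildConfigFromFlags("", dstKubeconfig)
            if err != nil {
                return err
            }
            src, err := kubernetes.NewForConfig(srcCfg)
            if err != nil {
                return err
            }
            dst, err := kubernetes.NewForConfig(dstCfg)
            if err != nil {
                return err
            }
            ctx := context.Background()
            dep, err := src.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            // Keep the spec and labels; reset everything the target must own.
            dep.ObjectMeta = metav1.ObjectMeta{Name: newName, Namespace: ns, Labels: dep.Labels}
            dep.Status = appsv1.DeploymentStatus{}
            _, err = dst.AppsV1().Deployments(ns).Create(ctx, dep, metav1.CreateOptions{})
            return err
        }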

    Goals

    1. Seamless Multi-Cluster Cloning
      • Clone Kubernetes resources across clusters/projects with one command.
      • Simplifies management, reduces operational effort.

    Resources

    1. Rancher & Kubernetes Docs

      • Rancher API, Cluster Management, Kubernetes client libraries.
    2. Development Tools

      • Kubectl plugin docs, Go programming resources.

    Building and Installing the Plugin

    1. Set Environment Variables: Export the Rancher URL and API token:
       • export RANCHER_URL="https://rancher.example.com"
       • export RANCHER_TOKEN="token-xxxxx:xxxxxxxxxxxxxxxxxxxx"
    2. Build the Plugin: Compile the Go program:
       • go build -o kubectl-clone ./pkg/
    3. Install the Plugin: Move the executable to a directory in your PATH and ensure the file is executable:
       • mv kubectl-clone /usr/local/bin/
       • chmod +x /usr/local/bin/kubectl-clone
    4. Verify the Plugin Installation: Test the plugin by running:
       • kubectl clone --help

    You should see the usage information for the kubectl-clone plugin.

    Usage Examples

    1. Clone a Deployment from One Cluster to Another:
       • kubectl clone --source-cluster c-abc123 --type deployment --name nginx-deployment --target-cluster c-def456 --new-name nginx-deployment-clone
    2. Clone a Service into Another Namespace and Modify Labels:


    Setup Kanidm as OIDC provider on Kubernetes by jkuzilek

    Description

    I am planning to upgrade my homelab Kubernetes cluster to the next level and need an OIDC provider for my services, including K8s itself.

    Goals

    • Successfully configure and deploy Kanidm on homelab cluster
    • Integrate with K8s auth (see the example API server flags after this list)
    • Integrate with other services (Envoy Gateway, Container Registry, future deployment of Forgejo?)
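
    On the K8s auth side, Kubernetes consumes any OIDC issuer through the kube-apiserver OIDC flags. The flags below are standard kube-apiserver options, but the issuer URL shape and claim names are assumptions to verify against the Kanidm documentation:

    • --oidc-issuer-url=https://idm.example.com/oauth2/openid/kubernetes
    • --oidc-client-id=kubernetes
    • --oidc-username-claim=preferred_username
    • --oidc-groups-claim=groups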

    Resources


    A CLI for Harvester by mohamed.belgaied

    Harvester does not officially come with a CLI tool; the user is supposed to interact with Harvester mostly through the UI. Though it is theoretically possible to use kubectl to interact with Harvester, the manipulation of KubeVirt YAML objects is absolutely not user friendly. Inspired by tools like Multipass from Canonical that let you easily and rapidly create one or multiple VMs, I began the development of Harvester CLI. Currently it works, but Harvester CLI needs some love to be up to date with Harvester v1.0.2, along with some bug fixes and improvements.

    Project Description

    Harvester CLI is a command line interface tool written in Go, designed to simplify interfacing with a Harvester cluster as a user. It is especially useful for testing purposes as you can easily and rapidly create VMs in Harvester by providing a simple command such as: harvester vm create my-vm --count 5 to create 5 VMs named my-vm-01 to my-vm-05.

    [asciicast: demo of Harvester CLI]

    Harvester CLI is functional but needs a number of improvements: bringing functionality up to date with Harvester v1.0.2 (some minor issues right now), changing the default behaviour to create an openSUSE VM instead of an Ubuntu VM, fixing some bugs, etc.

    Github Repo for Harvester CLI: https://github.com/belgaied2/harvester-cli

    Done in previous Hackweeks

    • Create a Github actions pipeline to automatically integrate Harvester CLI to Homebrew repositories: DONE
    • Automatically package Harvester CLI for OpenSUSE / Redhat RPMs or DEBs: DONE

    Goal for this Hackweek

    The goal for this Hackweek is to bring Harvester CLI up-to-speed with latest Harvester versions (v1.3.X and v1.4.X), and improve the code quality as well as implement some simple features and bug fixes.

    Some nice additions might be:

    • Improve handling of namespaced objects
    • Add features, such as network management or Load Balancer creation?
    • Add more unit tests and, why not, e2e tests
    • Improve CI
    • Improve the overall code quality
    • Test the program and create issues for it

    Issue list is here: https://github.com/belgaied2/harvester-cli/issues

    Resources

    The project is written in Go and uses client-go, the Kubernetes Go client library, to communicate with the Harvester API (which is in fact Kubernetes); a minimal sketch of this pattern follows the list below. Welcome contributions are:

    • Testing it and creating issues
    • Documentation
    • Go code improvement
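
    Since the Harvester API is plain Kubernetes, a hedged sketch of the client-go pattern mentioned above (not Harvester CLI's actual code; the kubeconfig path and namespace are placeholders) would list KubeVirt VirtualMachine custom resources like this:

        package main

        import (
            "context"
            "fmt"

            metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
            "k8s.io/apimachinery/pkg/runtime/schema"
            "k8s.io/client-go/dynamic"
            "k8s.io/client-go/tools/clientcmd"
        )

        func main() {
            cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/harvester.kubeconfig")
            if err != nil {
                panic(err)
            }
            client, err := dynamic.NewForConfig(cfg)
            if err != nil {
                panic(err)
            }
            // Harvester VMs are KubeVirt VirtualMachine custom resources.
            gvr := schema.GroupVersionResource{Group: "kubevirt.io", Version: "v1", Resource: "virtualmachines"}
            vms, err := client.Resource(gvr).Namespace("default").List(context.Background(), metav1.ListOptions{})
            if err != nil {
                panic(err)
            }
            for _, vm := range vms.Items {
                fmt.Println(vm.GetName())
            }
        }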

    What you might learn

    Harvester CLI might be interesting to you if you want to learn more about:

    • GitHub Actions
    • Harvester as a SUSE Product
    • Go programming language
    • Kubernetes API


    Harvester Packer Plugin by mrohrich

    Description

    Hashicorp Packer is an automation tool that allows automatic customized VM image builds, assuming the user has a virtualization tool at their disposal. To make use of Harvester as such a virtualization tool, a plugin for Packer needs to be written. With this plugin, users could make use of their Harvester cluster to build customized VM images, something they likely want to do if they have a Harvester cluster.

    Goals

    Write a Packer plugin bridging the gap between Harvester and Packer. Users should be able to create customized VM images using Packer and Harvester with no need to utilize another virtualization platform.
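
    A hedged skeleton of such a builder plugin, following the custom-builder pattern from the HashiCorp docs linked below, could look like this in Go. HarvesterBuilder and its stub methods are placeholders for the real logic that would create a Harvester VM, run provisioners, and capture an image:

        package main

        import (
            "context"
            "fmt"
            "os"

            "github.com/hashicorp/hcl/v2/hcldec"
            packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
            "github.com/hashicorp/packer-plugin-sdk/plugin"
        )

        type HarvesterBuilder struct{}

        // ConfigSpec describes the builder's HCL2 configuration (empty stub here).
        func (b *HarvesterBuilder) ConfigSpec() hcldec.ObjectSpec { return hcldec.ObjectSpec{} }

        // Prepare validates the user's configuration.
        func (b *HarvesterBuilder) Prepare(raws ...interface{}) ([]string, []string, error) {
            return nil, nil, nil
        }

        // Run performs the build: create a Harvester VM, provision it, capture an image.
        func (b *HarvesterBuilder) Run(ctx context.Context, ui packersdk.Ui, hook packersdk.Hook) (packersdk.Artifact, error) {
            ui.Say("would create and capture a Harvester VM here")
            return nil, nil
        }

        func main() {
            pps := plugin.NewSet()
            pps.RegisterBuilder(plugin.DEFAULT_NAME, new(HarvesterBuilder))
            if err := pps.Run(); err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
        }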

    Resources

    Hashicorp documentation for building custom plugins for Packer https://developer.hashicorp.com/packer/docs/plugins/creation/custom-builders

    Source repository of the Harvester Packer plugin https://github.com/m-ildefons/harvester-packer-plugin


    SUSE AI Meets the Game Board by moio

    Use tabletopgames.ai’s open source TAG and PyTAG frameworks to apply Statistical Forward Planning and Deep Reinforcement Learning to two board games of our own design. On an all-green, all-open source, all-AWS stack!
    A chameleon playing chess in a train car, as a metaphor of SUSE AI applied to games


    Results: Infrastructure Achievements

    We successfully built and automated a containerized stack to support our AI experiments. This included:

    A screenshot of k9s and nvtop showing PyTAG running in Kubernetes with GPU acceleration

    ./deploy.sh and voilà - Kubernetes running PyTAG (k9s, above) with GPU acceleration (nvtop, below)

    Results: Game Design Insights

    Our project focused on modeling and analyzing two card games of our own design within the TAG framework:

    • Game Modeling: We implemented models for Dario's "Bamboo" and Silvio's "Totoro" and "R3" games, enabling AI agents to play thousands of games ...in minutes!
    • AI-driven optimization: By analyzing statistical data on moves, strategies, and outcomes, we iteratively tweaked the game mechanics and rules to achieve better balance and player engagement.
    • Advanced analytics: Leveraging AI agents with Monte Carlo Tree Search (MCTS) and random action selection, we compared performance metrics to identify optimal strategies and uncover opportunities for game refinement.

    Cards from the three games

    A family picture of our card games in progress. From the top: Bamboo, Totoro, R3

    Results: Learning, Collaboration, and Innovation

    Beyond technical accomplishments, the project showcased innovative approaches to coding, learning, and teamwork:

    • "Trio programming" with AI assistance: Our "trio programming" approach—two developers and GitHub Copilot—was a standout success, especially in handling slightly-repetitive but not-quite-exactly-copypaste tasks. Java as a language tends to be verbose and we found it to be fitting particularly well.
    • AI tools for reporting and documentation: We extensively used AI chatbots to streamline writing and reporting. (Including writing this report! ...but this note was added manually during edit!)
    • GPU compute expertise: Overcoming challenges with CUDA drivers and cloud infrastructure deepened our understanding of GPU-accelerated workloads in the open-source ecosystem.
    • Game design as a learning platform: By blending AI techniques with creative game design, we learned not only about AI strategies but also about making games fun, engaging, and balanced.

    Last but not least we had a lot of fun! ...and this was definitely not a chatbot generated line!

    The Context: AI + Board Games


    Remote control for Adam Audio active monitor speakers by dmach

    Description

    I own a pair of Adam Audio A7V active studio monitor speakers. They have ethernet connectors that allow changing their settings remotely using the A Control software, which runs only on Windows :-( I couldn't find any open source alternative for Linux besides the AES70.js library.

    Goals

    • Create a command-line tool for controlling the speakers.
    • Python is the language of choice.
    • Implement only a simple tool with the desired functionality rather than full coverage of the AES70 standard.

    TODO

    • ✅ discover the device
    • ❌ get device manufacturer and model
    • ✅ get serial number
    • ✅ get description
    • ✅ set description
    • ✅ set mute
    • ✅ set sleep
    • ✅ set input (XLR (balanced), RCA (unbalanced))
    • ✅ set room adaptation
      • bass (1, 0, -1, -2)
      • desk (0, -1, -2)
      • presence (1, 0, -1)
      • treble (1, 0, -1)
    • ✅ set voicing (Pure, UNR, Ext)
    • ❌ the Ext voicing enables the following extended functionality:
      • gain
      • equalizer bands
      • on/off
      • type
      • freq
      • q
      • gain
    • ❌ udev rules to sleep/wakeup the speakers together with the sound card

    Resources

    • https://www.adam-audio.com/en/a-series/a7v/
    • https://www.adam-audio.com/en/technology/a-control-remote-software/
    • https://github.com/DeutscheSoft/AES70.js
    • https://www.aes.org/publications/standards/search.cfm?docID=101 - paid
    • https://www.aes.org/standards/webinars/AESStandardsWebinarSC0212L20220531.pdf
    • https://ocaalliance.github.io/downloads/AES143%20Network%20track%20NA10%20-%20AES70%20Controller.pdf

    Result