A common challenge for OpenStack and Kubernetes deployments is debugging the network when things go awry. The aim of DPHAT is to provide operators of cloud infrastructure with tooling that analyzes the environment and supplies the following:

  • Feedback that the environment is in a healthy operational state
  • Identification of and guidance about where something in the network fabric is broken
  • Guidance on remediation steps
  • A pluggable interface to enable support for various cloud platforms, their respective networking backends, and any hardware devices (e.g. switches/routers) present in the deployment
  • RESTful API, CLI, and UI

This involves:

  • Gathering information from any relevant SDN controller, representing the network topology for the cloud, and developing an algorithm for analyzing the topology
  • Probing VMs and containers via ARP, ICMP (ping), port scans, ofproto traces, etc. to assess forwarding and security policy instantiation (see the sketch after this list)
  • Reading pod / compute node state and identifying missing namespaces, tap devices, iptables chains, etc.
  • Building a database of remediation actions that can be correlated with issues flagged by DPHAT
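
As a rough illustration of the probing step, below is a minimal sketch of a reachability check of the kind DPHAT could run. It is an assumption-laden sketch in Go, not the project's actual design: the endpoint list is hypothetical and only a TCP probe is shown, whereas a real run would also cover ARP, ICMP, and ofproto traces.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probe attempts a TCP connection to each endpoint and records whether
    // the data plane forwards traffic to it. A real tool would also send
    // ARP/ICMP probes and run ofproto traces, per the list above.
    func probe(endpoints []string, timeout time.Duration) map[string]bool {
        results := make(map[string]bool)
        for _, ep := range endpoints {
            conn, err := net.DialTimeout("tcp", ep, timeout)
            results[ep] = err == nil
            if conn != nil {
                conn.Close()
            }
        }
        return results
    }

    func main() {
        // Hypothetical VM/container endpoints to check.
        for ep, ok := range probe([]string{"10.0.0.5:22", "10.0.0.6:80"}, 2*time.Second) {
            fmt.Printf("%-16s reachable=%v\n", ep, ok)
        }
    }

Unreachable endpoints flagged by a check like this would then be correlated with the remediation database mentioned above.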

If you want to help alleviate the headache of debugging networking issues in the cloud, let's work together!

Looking for hackers with the skills:

openstack kubernetes networking sdn openvswitch

This project is part of:

Hack Week 18

Activity

  • almost 6 years ago: nicolasbock started this project.
  • almost 6 years ago: nicolasbock liked this project.
  • almost 6 years ago: rtidwell added keyword "openstack" to this project.
  • almost 6 years ago: rtidwell added keyword "kubernetes" to this project.
  • almost 6 years ago: rtidwell added keyword "networking" to this project.
  • almost 6 years ago: rtidwell added keyword "sdn" to this project.
  • almost 6 years ago: rtidwell added keyword "openvswitch" to this project.
  • almost 6 years ago: rtidwell originated this project.


    Similar Projects

    kubectl clone: Seamlessly Clone Kubernetes Resources Across Multiple Rancher Clusters and Projects by dpunia

    Description

    kubectl clone is a kubectl plugin that empowers users to clone Kubernetes resources across multiple clusters and projects managed by Rancher. It simplifies the process of duplicating resources from one cluster to another or within different namespaces and projects, with optional on-the-fly modifications. This tool enhances multi-cluster resource management, making it invaluable for environments where Rancher orchestrates numerous Kubernetes clusters.
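
    To make the cloning mechanics concrete, here is a minimal sketch built on client-go's dynamic client; it is not the plugin's actual implementation, and the function name, the Deployments-only GVR, and the field handling are illustrative assumptions:

      package clone

      import (
          "context"

          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
          "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
          "k8s.io/apimachinery/pkg/runtime/schema"
          "k8s.io/client-go/dynamic"
          "k8s.io/client-go/rest"
      )

      // cloneDeployment copies a Deployment from a source cluster to a
      // target cluster, renaming it on the way.
      func cloneDeployment(srcCfg, dstCfg *rest.Config, namespace, name, newName string) error {
          gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}

          src, err := dynamic.NewForConfig(srcCfg)
          if err != nil {
              return err
          }
          dst, err := dynamic.NewForConfig(dstCfg)
          if err != nil {
              return err
          }

          obj, err := src.Resource(gvr).Namespace(namespace).Get(context.TODO(), name, metav1.GetOptions{})
          if err != nil {
              return err
          }

          // Strip server-populated fields so the target cluster accepts the object.
          obj.SetResourceVersion("")
          obj.SetUID("")
          obj.SetName(newName)
          unstructured.RemoveNestedField(obj.Object, "status")

          _, err = dst.Resource(gvr).Namespace(namespace).Create(context.TODO(), obj, metav1.CreateOptions{})
          return err
      }

    Presumably the real plugin derives the two rest.Config values from the Rancher API using the cluster IDs passed on the command line.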

    Goals

    1. Seamless Multi-Cluster Cloning
      • Clone Kubernetes resources across clusters/projects with one command.
      • Simplifies management, reduces operational effort.

    Resources

    1. Rancher & Kubernetes Docs

      • Rancher API, Cluster Management, Kubernetes client libraries.
    2. Development Tools

      • Kubectl plugin docs, Go programming resources.

    Building and Installing the Plugin

    1. Set Environment Variables: Export the Rancher URL and API token:
    • export RANCHER_URL="https://rancher.example.com"
    • export RANCHER_TOKEN="token-xxxxx:xxxxxxxxxxxxxxxxxxxx"
    2. Build the Plugin: Compile the Go program:
    • go build -o kubectl-clone ./pkg/
    3. Install the Plugin: Move the executable to a directory in your PATH and ensure it is executable:
    • mv kubectl-clone /usr/local/bin/
    • chmod +x /usr/local/bin/kubectl-clone
    4. Verify the Plugin Installation: Test the plugin by running:
    • kubectl clone --help

    You should see the usage information for the kubectl-clone plugin.

    Usage Examples

    1. Clone a Deployment from One Cluster to Another:
    • kubectl clone --source-cluster c-abc123 --type deployment --name nginx-deployment --target-cluster c-def456 --new-name nginx-deployment-clone
    2. Clone a Service into Another Namespace and Modify Labels:


    Setup Kanidm as OIDC provider on Kubernetes by jkuzilek

    Description

    I am planning to upgrade my homelab Kubernetes cluster to the next level and need an OIDC provider for my services, including K8s itself.

    Goals

    • Successfully configure and deploy Kanidm on homelab cluster
    • Integrate with K8s auth (see the flag sketch after this list)
    • Integrate with other services (Envoy Gateway, Container Registry, future deployment of Forgejo?)
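
    One plausible starting point for the K8s auth goal is the Kubernetes API server's built-in OIDC support. The kube-apiserver flags below are a minimal sketch; the issuer URL and client ID are hypothetical placeholders, not a verified Kanidm configuration:

    • --oidc-issuer-url=https://idm.example.com/oauth2/openid/kubernetes
    • --oidc-client-id=kubernetes
    • --oidc-username-claim=preferred_username
    • --oidc-groups-claim=groups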

    Resources


    Introducing "Bottles": A Proof of Concept for Multi-Version CRD Management in Kubernetes by aruiz

    Description

    As we delve deeper into the complexities of managing multiple CRD versions within a single Kubernetes cluster, I want to introduce "Bottles" - a proof of concept that aims to address these challenges.

    Bottles proposes a novel approach to isolating and deploying different CRD versions in self-contained environments. This would allow for greater flexibility and efficiency in managing diverse workloads.

    Goals

    • Evaluate Feasibility: determine whether this approach is technically viable and identify possible obstacles and limitations.
    • Reuse existing technology: leverage existing products whenever possible, e.g. build on top of Kubewarden as admission controller.
    • Focus on Rancher's use case: the ultimate goal is to be able to use this approach to solve Rancher users' needs.

    Resources

    Core concepts:

    • ConfigMaps: Bottles could be defined and configured using ConfigMaps.
    • Admission Controller: An admission controller will detect "bottled" CRDs being installed and replace the resource name used to store them (see the sketch after this list).
    • Aggregated API Server: By analyzing the author of a request, the aggregated API server will determine the correct bottle and route the request accordingly, making it transparent for the user.
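
    As a rough illustration of the renaming step, here is a minimal sketch of the JSON patch a mutating admission webhook could emit; the function, the bottle-prefix naming scheme, and the use of a plain webhook rather than Kubewarden are illustrative assumptions:

      package bottles

      import (
          "encoding/json"
          "fmt"

          admissionv1 "k8s.io/api/admission/v1"
      )

      // bottleRenamePatch builds a JSON patch that rewrites an incoming CRD's
      // stored name to a per-bottle name so multiple CRD versions can coexist.
      func bottleRenamePatch(req *admissionv1.AdmissionRequest, bottle string) ([]byte, error) {
          patch := []map[string]string{{
              "op":    "replace",
              "path":  "/metadata/name",
              "value": fmt.Sprintf("%s.%s", bottle, req.Name),
          }}
          return json.Marshal(patch)
      }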


    A CLI for Harvester by mohamed.belgaied

    Harvester does not officially come with a CLI tool; the user is supposed to interact with Harvester mostly through the UI. Though it is theoretically possible to use kubectl to interact with Harvester, the manipulation of KubeVirt YAML objects is absolutely not user friendly. Inspired by tools like Multipass from Canonical, which easily and rapidly create one or multiple VMs, I began the development of Harvester CLI. Currently it works, but Harvester CLI needs some love to be up to date with Harvester v1.0.2, along with some bug fixes and improvements.

    Project Description

    Harvester CLI is a command line interface tool written in Go, designed to simplify interfacing with a Harvester cluster as a user. It is especially useful for testing purposes as you can easily and rapidly create VMs in Harvester by providing a simple command such as: harvester vm create my-vm --count 5 to create 5 VMs named my-vm-01 to my-vm-05.

    [asciicast demo]

    Harvester CLI is functional but needs a number of improvements: bringing it up to date with Harvester v1.0.2 (some minor issues right now), changing the default behaviour to create an openSUSE VM instead of an Ubuntu VM, fixing some bugs, etc.

    Github Repo for Harvester CLI: https://github.com/belgaied2/harvester-cli

    Done in previous Hackweeks

    • Create a GitHub Actions pipeline to automatically integrate Harvester CLI into Homebrew repositories: DONE
    • Automatically package Harvester CLI as openSUSE / Red Hat RPMs or DEBs: DONE

    Goal for this Hackweek

    The goal for this Hackweek is to bring Harvester CLI up-to-speed with latest Harvester versions (v1.3.X and v1.4.X), and improve the code quality as well as implement some simple features and bug fixes.

    Some nice additions might be:

    • Improve handling of namespaced objects
    • Add features, such as network management or Load Balancer creation?
    • Add more unit tests and, why not, e2e tests
    • Improve CI
    • Improve the overall code quality
    • Test the program and create issues for it

    Issue list is here: https://github.com/belgaied2/harvester-cli/issues

    Resources

    The project is written in Go and uses client-go, the Kubernetes Go client library, to communicate with the Harvester API (which is in fact Kubernetes); a minimal sketch of such a call follows the list below. Contributions are welcome in the following areas:

    • Testing it and creating issues
    • Documentation
    • Go code improvement
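
    To illustrate how client-go talks to Harvester, here is a minimal sketch that lists VMs, assuming Harvester exposes them as KubeVirt VirtualMachine resources; the kubeconfig path and namespace are placeholders:

      package main

      import (
          "context"
          "fmt"

          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
          "k8s.io/apimachinery/pkg/runtime/schema"
          "k8s.io/client-go/dynamic"
          "k8s.io/client-go/tools/clientcmd"
      )

      func main() {
          // Harvester's API is Kubernetes, so a plain kubeconfig is enough.
          cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/harvester.kubeconfig")
          if err != nil {
              panic(err)
          }
          client := dynamic.NewForConfigOrDie(cfg)

          // Harvester VMs are KubeVirt VirtualMachine resources.
          gvr := schema.GroupVersionResource{Group: "kubevirt.io", Version: "v1", Resource: "virtualmachines"}
          vms, err := client.Resource(gvr).Namespace("default").List(context.TODO(), metav1.ListOptions{})
          if err != nil {
              panic(err)
          }
          for _, vm := range vms.Items {
              fmt.Println(vm.GetName())
          }
      }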

    What you might learn

    Harvester CLI might be interesting to you if you want to learn more about:

    • GitHub Actions
    • Harvester as a SUSE Product
    • Go programming language
    • Kubernetes API


    Integrate Backstage with Rancher Manager by nwmacd

    Description

    Backstage (backstage.io) is an open-source CNCF project that allows you to create your own developer portal. There are many plugins for Backstage.

    This could be a great complement to Rancher Manager.

    Goals

    Learn and experiment with Backstage and look at how it could be integrated with Rancher Manager. The goal is to have some kind of integration completed during this Hack Week.

    Progress

    Screenshot of the home page at the end of Hack Week:

    [screenshot: Backstage home page]

    Day One

    • Got Backstage running locally, understanding configuration with HTTPS.
    • Got Backstage embedded in an IFRAME inside of Rancher
    • Added content into the software catalog (see: https://backstage.io/docs/features/techdocs/getting-started/)
    • Understood more about the entity model

    Day Two

    • Connected Backstage to the Rancher local cluster and configured the Kubernetes plugin.
    • Created a Rancher theme to make the light theme more consistent with Rancher

    [screenshot: Backstage with the Rancher theme]

    Days Three and Four

    • Created two backend plugins for Backstage:

      1. Catalog Entity Provider - this imports users from Rancher into Backstage
      2. Auth Provider - uses the proxied sign-in pattern to check the Rancher session cookie, authenticates the user against Rancher, and then logs them into Backstage by connecting them to the User entity imported by the Catalog Entity Provider plugin.
    • With this in place, you can single sign-on between Rancher and Backstage when Backstage is deployed within Rancher. Note that this currently only works when running locally for development.

    [screenshots]

    Day Five

    • Started to build out a production deployment for all of the above
    • Made some progress, but hit issues with authentication and proxying when running behind the Rancher proxy; this needs further investigation


    Remote control for Adam Audio active monitor speakers by dmach

    Description

    I own a pair of Adam Audio A7V active studio monitor speakers. They have Ethernet connectors that allow changing their settings remotely using the A Control software. From Windows :-( I couldn't find any open source alternative for Linux besides the AES70.js library.

    Goals

    • Create a command-line tool for controlling the speakers.
    • Python is the language of choice.
    • Implement only a simple tool with the desired functionality rather than full coverage of the AES70 standard.

    TODO

    • ✅ discover the device
    • ❌ get device manufacturer and model
    • ✅ get serial number
    • ✅ get description
    • ✅ set description
    • ✅ set mute
    • ✅ set sleep
    • ✅ set input (XLR (balanced), RCA (unbalanced))
    • ✅ set room adaptation
      • bass (1, 0, -1, -2)
      • desk (0, -1, -2)
      • presence (1, 0, -1)
      • treble (1, 0, -1)
    • ✅ set voicing (Pure, UNR, Ext)
    • ❌ the Ext voicing enables the following extended functionality:
      • gain
      • equalizer bands, each with:
        • on/off
        • type
        • freq
        • q
        • gain
    • ❌ udev rules to sleep/wake the speakers together with the sound card

    Resources

    • https://www.adam-audio.com/en/a-series/a7v/
    • https://www.adam-audio.com/en/technology/a-control-remote-software/
    • https://github.com/DeutscheSoft/AES70.js
    • https://www.aes.org/publications/standards/search.cfm?docID=101 - paid
    • https://www.aes.org/standards/webinars/AESStandardsWebinarSC0212L20220531.pdf
    • https://ocaalliance.github.io/downloads/AES143%20Network%20track%20NA10%20-%20AES70%20Controller.pdf

    Result