Since Kubernetes already has a clear path for migrating "in-tree" volume plugins to CSI, I would like to understand the concept of CSI by writing a simple driver for Kubernetes.
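
As a starting point, here is a minimal sketch of the CSI Identity service that every driver has to serve over a Unix socket. It assumes the Go bindings from the container-storage-interface/spec repository and google.golang.org/grpc; the driver name, version, and socket path are placeholders rather than anything defined by this project.

    package main

    import (
        "context"
        "log"
        "net"
        "os"

        "github.com/container-storage-interface/spec/lib/go/csi"
        "google.golang.org/grpc"
    )

    // identityServer implements the mandatory CSI Identity service, which lets
    // kubelet and the CSI sidecars discover the driver's name and readiness.
    type identityServer struct{}

    func (s *identityServer) GetPluginInfo(ctx context.Context, req *csi.GetPluginInfoRequest) (*csi.GetPluginInfoResponse, error) {
        // Placeholder driver name; it must match the CSIDriver object registered in the cluster.
        return &csi.GetPluginInfoResponse{Name: "hello.csi.example.com", VendorVersion: "0.1.0"}, nil
    }

    func (s *identityServer) GetPluginCapabilities(ctx context.Context, req *csi.GetPluginCapabilitiesRequest) (*csi.GetPluginCapabilitiesResponse, error) {
        // An empty capability list is enough for a toy driver; a real driver
        // would advertise CONTROLLER_SERVICE here if it implements one.
        return &csi.GetPluginCapabilitiesResponse{}, nil
    }

    func (s *identityServer) Probe(ctx context.Context, req *csi.ProbeRequest) (*csi.ProbeResponse, error) {
        return &csi.ProbeResponse{}, nil
    }

    func main() {
        // CSI drivers are reached over a Unix domain socket; the path is a placeholder.
        const sock = "/tmp/csi.sock"
        os.Remove(sock)
        lis, err := net.Listen("unix", sock)
        if err != nil {
            log.Fatal(err)
        }
        srv := grpc.NewServer()
        csi.RegisterIdentityServer(srv, &identityServer{})
        log.Printf("CSI identity server listening on %s", sock)
        log.Fatal(srv.Serve(lis))
    }

The Controller and Node services would be registered on the same gRPC server in the same way, and the CSI sidecars (node-driver-registrar, external-provisioner) then talk to the driver through this socket.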

Reference:

Looking for hackers with the skills:

csi storage kubernetes

This project is part of:

Hack Week 19

Activity

  • almost 5 years ago: tbechtold liked this project.
  • almost 5 years ago: pchacin liked this project.
  • almost 5 years ago: chinyahuang added keyword "csi" to this project.
  • almost 5 years ago: chinyahuang added keyword "storage" to this project.
  • almost 5 years ago: chinyahuang added keyword "kubernetes" to this project.
  • almost 5 years ago: chinyahuang started this project.
  • almost 5 years ago: chinyahuang originated this project.


    Similar Projects

    Agama Expert Partitioner by joseivanlopez

    Description

    Agama is a new Linux installer that will very likely be used for SLES 16.

    It offers a UI for configuring the target system (language, patterns, network, etc.). One of the more complex sections is the storage configuration, which is going to be revamped. This project consists of exploring the possibility of having something similar to the YaST Expert Partitioner for Agama.

    Goals

    • Explore different approaches for the storage UI in Agama.


    Install Uyuni on Kubernetes in a cloud-native way by cbosdonnat

    Description

    For now, installing Uyuni on Kubernetes requires running mgradm on a cluster node... which is not what users would do in the Kubernetes world. The idea is to implement an installation based only on Helm charts and probably an operator.
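
    To illustrate the operator part of the idea, below is a minimal controller-runtime reconciler sketch. It watches a ConfigMap purely as a stand-in for a future Uyuni custom resource, and the reconcile body is a stub; nothing here reflects the project's actual design.

      package main

      import (
          "context"

          corev1 "k8s.io/api/core/v1"
          "k8s.io/apimachinery/pkg/runtime"
          ctrl "sigs.k8s.io/controller-runtime"
          "sigs.k8s.io/controller-runtime/pkg/client"
          "sigs.k8s.io/controller-runtime/pkg/log"
      )

      // uyuniReconciler is a stand-in for a future Uyuni operator. A real operator
      // would reconcile a dedicated Uyuni custom resource and drive the Helm-based
      // installation from it; a ConfigMap is used here only to keep the sketch
      // self-contained.
      type uyuniReconciler struct {
          client.Client
      }

      func (r *uyuniReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
          logger := log.FromContext(ctx)

          var cm corev1.ConfigMap
          if err := r.Get(ctx, req.NamespacedName, &cm); err != nil {
              return ctrl.Result{}, client.IgnoreNotFound(err)
          }
          // Placeholder: this is where the operator would render/apply the Uyuni
          // Helm chart (or the resources it templates) based on the desired state.
          logger.Info("reconciling Uyuni configuration", "configmap", req.NamespacedName)
          return ctrl.Result{}, nil
      }

      func main() {
          scheme := runtime.NewScheme()
          _ = corev1.AddToScheme(scheme)

          mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{Scheme: scheme})
          if err != nil {
              panic(err)
          }
          if err := ctrl.NewControllerManagedBy(mgr).
              For(&corev1.ConfigMap{}).
              Complete(&uyuniReconciler{Client: mgr.GetClient()}); err != nil {
              panic(err)
          }
          if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
              panic(err)
          }
      }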

    Goals

    Install Uyuni from the Rancher UI.

    Resources


    Extending KubeVirtBMC's capability by adding Redfish support by zchang

    Description

    In Hack Week 23, we delivered a project called KubeBMC (renamed to KubeVirtBMC now), which brings the good old-fashioned IPMI way of managing virtual machines running on KubeVirt-powered clusters. This opens the possibility of integrating existing bare-metal provisioning solutions like Tinkerbell with virtualized environments. We even received an inquiry about transferring the project to the KubeVirt organization. So, a proposal was filed, which was accepted by the KubeVirt community, and the project was renamed after that. We have many tasks on our to-do list. Some of them are administrative tasks; some are feature-related. One of the most requested features is Redfish support.

    Goals

    Extend the capability of KubeVirtBMC by adding Redfish support. Currently, the virtbmc component only exposes IPMI endpoints. We need to implement another simulator to expose Redfish endpoints, as we did with the IPMI module. We are aiming for a basic set of functionality (a minimal endpoint sketch follows the list below):

    • Power management
    • Boot device selection
    • Virtual media mount (this one is not so basic)
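
    As promised above, here is a sketch of what such a Redfish endpoint could look like using only Go's net/http. The URI layout follows the DMTF Redfish conventions, while the system ID ("vm-0") and the in-memory power state are placeholders for the real KubeVirt wiring.

      package main

      import (
          "encoding/json"
          "log"
          "net/http"
      )

      // A toy Redfish-style endpoint for a single virtual machine. A real virtbmc
      // module would translate the reset action into KubeVirt API calls; here the
      // handler only records the requested power state.
      var powerState = "Off"

      func main() {
          mux := http.NewServeMux()

          // Service root, as defined by the Redfish specification.
          mux.HandleFunc("/redfish/v1/", func(w http.ResponseWriter, r *http.Request) {
              json.NewEncoder(w).Encode(map[string]any{
                  "@odata.id": "/redfish/v1/",
                  "Systems":   map[string]string{"@odata.id": "/redfish/v1/Systems"},
              })
          })

          // A single ComputerSystem resource; "vm-0" is a placeholder ID.
          mux.HandleFunc("/redfish/v1/Systems/vm-0", func(w http.ResponseWriter, r *http.Request) {
              json.NewEncoder(w).Encode(map[string]any{
                  "@odata.id":  "/redfish/v1/Systems/vm-0",
                  "Id":         "vm-0",
                  "PowerState": powerState,
              })
          })

          // Power management via the standard ComputerSystem.Reset action.
          mux.HandleFunc("/redfish/v1/Systems/vm-0/Actions/ComputerSystem.Reset", func(w http.ResponseWriter, r *http.Request) {
              var body struct {
                  ResetType string `json:"ResetType"`
              }
              if err := json.NewDecoder(r.Body).Decode(&body); err != nil {
                  http.Error(w, err.Error(), http.StatusBadRequest)
                  return
              }
              switch body.ResetType {
              case "On":
                  powerState = "On"
              case "ForceOff", "GracefulShutdown":
                  powerState = "Off"
              }
              w.WriteHeader(http.StatusNoContent)
          })

          log.Fatal(http.ListenAndServe(":8000", mux))
      }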

    Resources


    Rancher/k8s Trouble-Maker by tonyhansen

    Project Description

    When studying for my RHCSA, I found trouble-maker, which is a program that breaks a Linux OS and requires you to fix it. I want to create something similar for Rancher/k8s that can allow for troubleshooting an unknown environment.

    Goal for this Hackweek

    • Create a basic framework for creating Rancher/k8s cluster lab environments as needed for the Break/Fix.
    • Create at least 5 modules that can be applied to the cluster and require troubleshooting (a sketch of one possible module follows).
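
    Purely as an illustration, here is a sketch of what one "break" module could look like in Go with client-go: it scales a Deployment to zero replicas and leaves a breadcrumb annotation for the corresponding fix module. The namespace ("lab") and deployment name ("web") are made up for the example and are not part of the project.

      package main

      import (
          "context"
          "fmt"
          "log"
          "path/filepath"

          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
          "k8s.io/client-go/kubernetes"
          "k8s.io/client-go/tools/clientcmd"
          "k8s.io/client-go/util/homedir"
      )

      func main() {
          // Build a client from the local kubeconfig.
          kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
          config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
          if err != nil {
              log.Fatal(err)
          }
          clientset, err := kubernetes.NewForConfig(config)
          if err != nil {
              log.Fatal(err)
          }

          deployments := clientset.AppsV1().Deployments("lab")
          dep, err := deployments.Get(context.TODO(), "web", metav1.GetOptions{})
          if err != nil {
              log.Fatal(err)
          }

          // "Break" the workload: remember the old replica count in an annotation
          // (so a fix module could restore it) and scale to zero.
          orig := int32(1)
          if dep.Spec.Replicas != nil {
              orig = *dep.Spec.Replicas
          }
          if dep.Annotations == nil {
              dep.Annotations = map[string]string{}
          }
          dep.Annotations["trouble-maker/original-replicas"] = fmt.Sprintf("%d", orig)
          zero := int32(0)
          dep.Spec.Replicas = &zero
          if _, err := deployments.Update(context.TODO(), dep, metav1.UpdateOptions{}); err != nil {
              log.Fatal(err)
          }
          log.Println("deployment lab/web scaled to zero; time to troubleshoot")
      }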

    Resources

    • https://github.com/rancher/terraform-provider-rancher2
    • https://github.com/rancher/tf-rancher-up


    Introducing "Bottles": A Proof of Concept for Multi-Version CRD Management in Kubernetes by aruiz

    Description

    As we delve deeper into the complexities of managing multiple CRD versions within a single Kubernetes cluster, I want to introduce "Bottles" - a proof of concept that aims to address these challenges.

    Bottles proposes a novel approach to isolating and deploying different CRD versions in a self-contained environment. This would allow for greater flexibility and efficiency in managing diverse workloads.

    Goals

    • Evaluate Feasibility: determine whether this approach is technically viable and identify possible obstacles and limitations.
    • Reuse existing technology: leverage existing products whenever possible, e.g. build on top of Kubewarden as the admission controller.
    • Focus on Rancher's use case: the ultimate goal is to be able to use this approach to solve Rancher users' needs.

    Resources

    Core concepts:

    • ConfigMaps: Bottles could be defined and configured using ConfigMaps.
    • Admission Controller: An admission controller will detect "bottled" CRDs being installed and replace the resource name used to store them (see the sketch after this list).
    • Aggregated API Server: By analyzing the author of a request, the aggregated API server will determine the correct bottle and route the request accordingly, making it transparent for the user.
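
    As a rough illustration of the admission-controller idea, here is a sketch of a mutating webhook handler in Go that rewrites the stored name of a "bottled" CRD. The label key, the renaming scheme, and the TLS paths are all assumptions made for the example; a real implementation (for instance a Kubewarden policy) would also have to keep the CRD name consistent with its group and plural.

      package main

      import (
          "encoding/json"
          "fmt"
          "net/http"

          admissionv1 "k8s.io/api/admission/v1"
          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      )

      // handleCRD mutates incoming CustomResourceDefinitions whose labels mark them
      // as belonging to a "bottle", prefixing the stored name so several versions
      // can coexist. The renaming scheme is illustrative only.
      func handleCRD(w http.ResponseWriter, r *http.Request) {
          var review admissionv1.AdmissionReview
          if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
              http.Error(w, "invalid admission review", http.StatusBadRequest)
              return
          }

          // Decode just the metadata we need from the raw CRD object.
          var crd struct {
              Metadata metav1.ObjectMeta `json:"metadata"`
          }
          if err := json.Unmarshal(review.Request.Object.Raw, &crd); err != nil {
              http.Error(w, err.Error(), http.StatusBadRequest)
              return
          }

          response := &admissionv1.AdmissionResponse{UID: review.Request.UID, Allowed: true}
          if bottle, ok := crd.Metadata.Labels["bottles.example.com/name"]; ok {
              // JSON patch rewriting the stored resource name, e.g.
              // "widgets.example.com" -> "v2.widgets.example.com" for bottle "v2".
              patch, _ := json.Marshal([]map[string]string{{
                  "op":    "replace",
                  "path":  "/metadata/name",
                  "value": fmt.Sprintf("%s.%s", bottle, crd.Metadata.Name),
              }})
              patchType := admissionv1.PatchTypeJSONPatch
              response.Patch = patch
              response.PatchType = &patchType
          }

          review.Response = response
          json.NewEncoder(w).Encode(review)
      }

      func main() {
          http.HandleFunc("/mutate", handleCRD)
          // Admission webhooks must be served over TLS; the cert paths are placeholders.
          http.ListenAndServeTLS(":8443", "/tls/tls.crt", "/tls/tls.key", nil)
      }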


    kubectl clone: Seamlessly Clone Kubernetes Resources Across Multiple Rancher Clusters and Projects by dpunia

    Description

    kubectl clone is a kubectl plugin that empowers users to clone Kubernetes resources across multiple clusters and projects managed by Rancher. It simplifies the process of duplicating resources from one cluster to another or within different namespaces and projects, with optional on-the-fly modifications. This tool enhances multi-cluster resource management, making it invaluable for environments where Rancher orchestrates numerous Kubernetes clusters.
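
    To make the shape of the plugin concrete, here is a sketch of what its entry point might look like as a cobra command. The flag names mirror the usage examples below and the environment variables described in the build instructions; the actual cloning logic against the Rancher API is left as a stub.

      package main

      import (
          "fmt"
          "os"

          "github.com/spf13/cobra"
      )

      func main() {
          var sourceCluster, targetCluster, resourceType, name, newName string

          cmd := &cobra.Command{
              Use:   "kubectl-clone",
              Short: "Clone Kubernetes resources across Rancher-managed clusters",
              RunE: func(cmd *cobra.Command, args []string) error {
                  rancherURL := os.Getenv("RANCHER_URL")
                  rancherToken := os.Getenv("RANCHER_TOKEN")
                  if rancherURL == "" || rancherToken == "" {
                      return fmt.Errorf("RANCHER_URL and RANCHER_TOKEN must be set")
                  }
                  // A real implementation would fetch the resource from the source
                  // cluster via the Rancher API and recreate it in the target cluster,
                  // applying any requested modifications.
                  fmt.Printf("would clone %s/%s from %s to %s as %s\n",
                      resourceType, name, sourceCluster, targetCluster, newName)
                  return nil
              },
          }

          cmd.Flags().StringVar(&sourceCluster, "source-cluster", "", "Rancher ID of the source cluster")
          cmd.Flags().StringVar(&targetCluster, "target-cluster", "", "Rancher ID of the target cluster")
          cmd.Flags().StringVar(&resourceType, "type", "", "kind of resource to clone, e.g. deployment")
          cmd.Flags().StringVar(&name, "name", "", "name of the resource to clone")
          cmd.Flags().StringVar(&newName, "new-name", "", "name to give the cloned resource")

          if err := cmd.Execute(); err != nil {
              os.Exit(1)
          }
      }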

    Goals

    1. Seamless Multi-Cluster Cloning
      • Clone Kubernetes resources across clusters/projects with one command.
      • Simplifies management, reduces operational effort.

    Resources

    1. Rancher & Kubernetes Docs

      • Rancher API, Cluster Management, Kubernetes client libraries.
    2. Development Tools

      • Kubectl plugin docs, Go programming resources.

    Building and Installing the Plugin

    1. Set Environment Variables: Export the Rancher URL and API token:
    • export RANCHER_URL="https://rancher.example.com"
    • export RANCHER_TOKEN="token-xxxxx:xxxxxxxxxxxxxxxxxxxx"
    2. Build the Plugin: Compile the Go program:
    • go build -o kubectl-clone ./pkg/
    3. Install the Plugin: Move the executable to a directory in your PATH:
    • mv kubectl-clone /usr/local/bin/

    Ensure the file is executable:

    • chmod +x /usr/local/bin/kubectl-clone
    4. Verify the Plugin Installation: Test the plugin by running:
    • kubectl clone --help

    You should see the usage information for the kubectl-clone plugin.

    Usage Examples

    1. Clone a Deployment from One Cluster to Another:
    • kubectl clone --source-cluster c-abc123 --type deployment --name nginx-deployment --target-cluster c-def456 --new-name nginx-deployment-clone
    2. Clone a Service into Another Namespace and Modify Labels: