Description

kubectl clone is a kubectl plugin that clones Kubernetes resources across multiple clusters and projects managed by Rancher. It simplifies duplicating a resource from one cluster to another, or between namespaces and projects, with optional on-the-fly modifications. This makes multi-cluster resource management considerably easier in environments where Rancher orchestrates many Kubernetes clusters.
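
Under the hood, cloning boils down to reading the object from the source cluster, dropping server-managed fields, and recreating it in the target. Here is a minimal sketch of that flow, assuming client-go's dynamic client; the cloneResource helper and its signature are illustrative, not the plugin's actual code:

    package clone

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
    )

    // cloneResource reads one object from the source cluster, strips the
    // fields the target API server must populate itself, and recreates
    // the object in the target cluster and namespace.
    func cloneResource(ctx context.Context, src, dst dynamic.Interface,
        gvr schema.GroupVersionResource, srcNS, dstNS, name, newName string) error {

        obj, err := src.Resource(gvr).Namespace(srcNS).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return fmt.Errorf("fetching %s/%s: %w", srcNS, name, err)
        }

        clone := obj.DeepCopy()
        // Drop fields that belong to the source API server.
        unstructured.RemoveNestedField(clone.Object, "metadata", "resourceVersion")
        unstructured.RemoveNestedField(clone.Object, "metadata", "uid")
        unstructured.RemoveNestedField(clone.Object, "metadata", "creationTimestamp")
        unstructured.RemoveNestedField(clone.Object, "status")
        if newName != "" {
            clone.SetName(newName) // honor --new-name
        }
        clone.SetNamespace(dstNS)

        _, err = dst.Resource(gvr).Namespace(dstNS).Create(ctx, clone, metav1.CreateOptions{})
        return err
    }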

Goals

  1. Seamless Multi-Cluster Cloning
    • Clone Kubernetes resources across clusters/projects with one command.
    • Simplify multi-cluster management and reduce operational effort.

Resources

  1. Rancher & Kubernetes Docs
    • Rancher API, Cluster Management, Kubernetes client libraries.
  2. Development Tools
    • Kubectl plugin docs, Go programming resources.

Building and Installing the Plugin

  1. Set Environment Variables: Export the Rancher URL and API token:
  • export RANCHER_URL="https://rancher.example.com"
  • export RANCHER_TOKEN="token-xxxxx:xxxxxxxxxxxxxxxxxxxx"
  2. Build the Plugin: Compile the Go program:
  • go build -o kubectl-clone ./pkg/
  3. Install the Plugin: Move the executable to a directory in your PATH and ensure it is executable:
  • mv kubectl-clone /usr/local/bin/
  • chmod +x /usr/local/bin/kubectl-clone
  4. Verify the Plugin Installation: Test the plugin by running:
  • kubectl clone --help

You should see the usage information for the kubectl-clone plugin.
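
To reach Rancher, the plugin authenticates with the token exported above. A minimal sketch of that step in Go, assuming Rancher's v3 REST API and its standard /v3/clusters endpoint (the listClusters helper is hypothetical):

    package clone

    import (
        "fmt"
        "net/http"
        "os"
    )

    // listClusters performs an authenticated GET against the Rancher API,
    // using the RANCHER_URL and RANCHER_TOKEN variables exported above.
    func listClusters() (*http.Response, error) {
        url := os.Getenv("RANCHER_URL")
        token := os.Getenv("RANCHER_TOKEN")
        if url == "" || token == "" {
            return nil, fmt.Errorf("RANCHER_URL and RANCHER_TOKEN must be set")
        }

        req, err := http.NewRequest(http.MethodGet, url+"/v3/clusters", nil)
        if err != nil {
            return nil, err
        }
        // Rancher API tokens are sent as bearer tokens.
        req.Header.Set("Authorization", "Bearer "+token)

        return http.DefaultClient.Do(req)
    }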

Usage Examples

  1. Clone a Deployment from One Cluster to Another:
  • kubectl clone --source-cluster c-abc123 --type deployment --name nginx-deployment --target-cluster c-def456 --new-name nginx-deployment-clone
  2. Clone a Service into Another Namespace and Modify Labels:
  • kubectl clone --source-cluster c-abc123 --type service --name my-service --source-namespace default --target-cluster c-def456 --target-namespace staging --modify "metadata.labels.env=staging"
  3. Clone a ConfigMap within the Same Cluster but a Different Project:
  • kubectl clone --source-cluster c-abc123 --source-project p-abc123 --type configmap --name my-config --target-cluster c-abc123 --target-project p-def456 --target-namespace dev
  4. Clone a Secret with a New Name and Modifications:
  • kubectl clone --source-cluster c-abc123 --type secret --name my-secret --target-cluster c-def456 --new-name my-secret-copy --modify "metadata.annotations.description=Cloned Secret"
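
The --modify flag takes path=value expressions like those above. A plausible way to apply them, sketched with apimachinery's unstructured helpers (the applyModify helper is hypothetical; the plugin may parse these expressions differently):

    package clone

    import (
        "fmt"
        "strings"

        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    )

    // applyModify interprets one --modify expression such as
    // "metadata.labels.env=staging" as a dotted field path plus a string
    // value, and sets it on the cloned object before creation.
    func applyModify(obj *unstructured.Unstructured, expr string) error {
        path, value, found := strings.Cut(expr, "=")
        if !found {
            return fmt.Errorf("invalid --modify expression %q, want path=value", expr)
        }
        return unstructured.SetNestedField(obj.Object, value, strings.Split(path, ".")...)
    }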

Git Repository: https://github.com/deepakpunia-suse/kubectl-clone

Looking for hackers with the skills:

kubernetes, golang

This project is part of:

Hack Week 24

Activity

  • 11 months ago: dpunia liked this project.
  • 11 months ago: dpunia added keyword "kubernetes" to this project.
  • 11 months ago: dpunia added keyword "golang" to this project.
  • 11 months ago: dpunia started this project.
  • 11 months ago: dpunia originated this project.

  • Comments

    • dpunia, 11 months ago:

      Project completed; further details: https://github.com/deepakpunia-suse/kubectl-clone

    Similar Projects

    Mammuthus - The NFS-Ganesha inside Kubernetes controller by vcheng

    Description

    As a user-space NFS provider, NFS-Ganesha is widely used by several projects, e.g. Longhorn and Rook. We want to create a Kubernetes controller to make configuring NFS-Ganesha easy. This controller will let users configure NFS-Ganesha through different backends such as VFS and CephFS.

    Goals

    1. Create NFS-Ganesha Package on OBS: nfs-ganesha5, nfs-ganesha6
    2. Create NFS-Ganesha Container Image on OBS: Image
    3. Create a Kubernetes controller for NFS-Ganesha and support the VFS configuration on demand. Mammuthus

    Resources

    NFS-Ganesha


    terraform-provider-feilong by e_bischoff

    Project Description

    People need to test operating systems and applications on the s390 platform.

    Installation from scratch solutions include:

    • just deploy and provision manually (with the help of the ftpboot script, if you are at SUSE)
    • use s3270 terminal emulation (used by openQA people?)
    • use LXC from IBM to start CP commands and analyze the results
    • use zPXE to do some PXE-like booting (used by the orthos team?)
    • use tessia to install from scratch using autoyast
    • use libvirt for s390 to do some nested virtualization on some already deployed z/VM system
    • directly install a Linux kernel on an LPAR and use kvm + libvirt from there

    Deployment from image solutions include:

    • use ICIC web interface (openstack in disguise, contributed by IBM)
    • use ICIC from the openstack terraform provider (used by Rancher QA)
    • use zvm_ansible to control SMAPI
    • connect directly to SMAPI low-level socket interface

    IBM Cloud Infrastructure Center (ICIC) harnesses the Feilong API, but you can use Feilong without installing ICIC, provided you set up a "z/VM cloud connector" in one of your VMs, following this schema.

    What about writing a Terraform Feilong provider, just like we have the Terraform libvirt provider? That would let you call Feilong transparently from your main.tf files to deploy and destroy resources on your system/z.
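
    For illustration, a Terraform provider in Go is mostly a schema plus CRUD hooks. A minimal skeleton using HashiCorp's terraform-plugin-sdk is sketched below; the feilong_guest resource name and its attributes are assumptions for the sketch, not the actual provider's schema:

        package main

        import (
            "context"

            "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
            "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
            "github.com/hashicorp/terraform-plugin-sdk/v2/plugin"
        )

        func main() {
            plugin.Serve(&plugin.ServeOpts{
                ProviderFunc: func() *schema.Provider {
                    return &schema.Provider{
                        ResourcesMap: map[string]*schema.Resource{
                            "feilong_guest": resourceFeilongGuest(),
                        },
                    }
                },
            })
        }

        func resourceFeilongGuest() *schema.Resource {
            return &schema.Resource{
                CreateContext: guestCreate,
                ReadContext:   guestRead,
                DeleteContext: guestDelete,
                Schema: map[string]*schema.Schema{
                    "name":  {Type: schema.TypeString, Required: true, ForceNew: true},
                    "image": {Type: schema.TypeString, Required: true, ForceNew: true},
                },
            }
        }

        // guestCreate would call the Feilong REST API to deploy the guest;
        // here it only records the resource ID.
        func guestCreate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
            d.SetId(d.Get("name").(string))
            return nil
        }

        func guestRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
            return nil
        }

        func guestDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
            d.SetId("") // clearing the ID tells Terraform the resource is gone
            return nil
        }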

    Other Feilong-based solutions include:

    • make libvirt Feilong-aware
    • simply call Feilong from shell scripts with curl
    • use the zvmconnector client Python library from Feilong
    • use the zthin part of Feilong to command SMAPI directly.

    Goal for Hackweek 23

    My final goal is to be able to easily deploy and provision VMs automatically on a z/VM system, in a way that people might enjoy even outside of SUSE.

    My technical preference is to write a terraform provider plugin, as it is the approach that involves the least software components for our deployments, while remaining clean, and compatible with our existing development infrastructure.

    Goals for Hackweek 24

    The Feilong provider works and is used internally by the SUSE Manager team. Let's push it forward!

    Let's add support for Fibre Channel disks and multipath.

    Possible goals for Hackweek 25

    Modernization, maturity, and maintenance.

