Project Description

Cloud Foundry for Kubernetes (cf-for-k8s) blends the popular CF developer API with Kubernetes, Istio, and other open source technologies. The project aims to improve developer productivity for organizations using Kubernetes. cf-for-k8s can be installed atop any conformant Kubernetes environment in minutes. Cloud Foundry is an open-source cloud platform as a service (PaaS) on which developers can build, deploy, run, and scale applications.

Coming from a few years of experience at SAP managing some big CF platforms deployed on VMs, I would like to try out this new architecture on top of Kubernetes. This is a great opportunity to learn more about Rancher products and Kubernetes environments!

Goal for this Hackweek

  • Get to know Rancher products (Rancher, RKE, k3s)
  • Get to know the new architecture of cf-for-k8s
  • Set up a Rancher-managed Kubernetes environment
  • Deploy cf-for-k8s on top of it and run a demo application
  • Contribute to the official documentation in case something is lacking

Resources

I see part of it as self-study on a single dev machine, but of course anyone is welcome to join this DevOps journey!

Looking for hackers with the skills:

rancher cloudfoundry kubernetes

This project is part of:

Hack Week 20

Activity

  • over 3 years ago: markgharvey liked this project.
  • over 3 years ago: mgrifalconi liked this project.
  • over 3 years ago: mgrifalconi added keyword "rancher" to this project.
  • over 3 years ago: mgrifalconi added keyword "cloudfoundry" to this project.
  • over 3 years ago: mgrifalconi added keyword "kubernetes" to this project.
  • over 3 years ago: mgrifalconi originated this project.


    Similar Projects

    Small healthcheck tool for Longhorn by mbrookhuis

    Project Description

    We often have problems (e.g. pods not starting) that are related to PVCs not being available, cluster nodes not all being up, or deployments not running or not running completely. All of this prevents administration activities. Having something that can be run regularly to validate the status of the cluster would be helpful, instead of the many manual tasks needed today.

    As an addition (given enough time), we could add changing reservations, adding new disks, etc. --> This didn't make it in, but the scripts can easily be adapted.

    This tool would decrease troubleshooting time and could be used in automation, without having to give admins rights to the Rancher GUI.

    Goal for this Hackweek

    At the end we should have a small Python tool that does a (very) basic health check on nodes, deployments, and PVCs. The first attempt was to write it in Golang, but that was taking too much time.

    Overview

    This tool will run a simple healthcheck on a Kubernetes cluster. It will perform the following actions:

    • node check: this will check all nodes and display their status and k3s version. If the status of a node is not "Ready" (this is only reported, not acted on), the cluster will be reported as having problems

    • deployment check: this will list all deployments and display the number of expected replicas and the replicas in use. If replicas are unavailable, this will be displayed and the cluster will be reported as having problems.

    • pvc check: this will list all PVCs and display their status and robustness. If the robustness is not "Healthy", the cluster will be reported as having problems.

    If a problem is registered in any of the checks, a warning will state that the cluster is not healthy and the program will exit with 1.

    The script has one mandatory parameter: the kubeconfig of the cluster or of a node of the cluster.

    The code is written for Python 3.11 but will also work on 3.6 (the default with SLES 15.x). A venv is present that contains all needed packages. The script can be run on the cluster itself or on any other Linux server.
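
    As an illustration of the checks described above, here is a minimal sketch of the node and deployment checks using the official kubernetes Python client (the Longhorn robustness check is omitted; it would go through the longhorn.io custom resources). Names and output format are illustrative, not the actual script.

    #!/usr/bin/env python3
    # Minimal sketch of the node and deployment checks (not the actual script).
    import sys
    from kubernetes import client, config

    def main(kubeconfig: str) -> int:
        config.load_kube_config(config_file=kubeconfig)  # the one mandatory parameter
        problems = False

        # Node check: report status and kubelet (k3s) version.
        for node in client.CoreV1Api().list_node().items:
            ready = next((c.status for c in node.status.conditions
                          if c.type == "Ready"), "Unknown")
            print(f"node {node.metadata.name}: Ready={ready} "
                  f"version={node.status.node_info.kubelet_version}")
            problems |= ready != "True"

        # Deployment check: expected replicas vs. available replicas.
        for dep in client.AppsV1Api().list_deployment_for_all_namespaces().items:
            expected = dep.spec.replicas or 0
            available = dep.status.available_replicas or 0
            print(f"deployment {dep.metadata.namespace}/{dep.metadata.name}: "
                  f"{available}/{expected} available")
            problems |= available < expected

        if problems:
            print("WARNING: cluster is not healthy")
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1]))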

    Installation

    To install this project, perform the following steps:

    • Create the directory /opt/k8s-check

    mkdir /opt/k8s-check

    • Copy all the files to this directory and make the script executable:

    chmod +x k8s-check.py


    Introducing "Bottles": A Proof of Concept for Multi-Version CRD Management in Kubernetes by aruiz

    Description

    As we delve deeper into the complexities of managing multiple CRD versions within a single Kubernetes cluster, I want to introduce "Bottles" - a proof of concept that aims to address these challenges.

    Bottles proposes a novel approach to isolating and deploying different CRD versions in self-contained environments. This would allow for greater flexibility and efficiency in managing diverse workloads.

    Goals

    • Evaluate Feasibility: determine if this approach is technically viable and identify possible obstacles and limitations.
    • Reuse existing technology: leverage existing products whenever possible, e.g. build on top of Kubewarden as admission controller.
    • Focus on Rancher's use case: the ultimate goal is to be able to use this approach to solve Rancher users' needs.

    Resources

    Core concepts:

    • ConfigMaps: Bottles could be defined and configured using ConfigMaps.
    • Admission Controller: An admission controller will detect "bottled" CRDs being installed and replace the resource name used to store them (see the sketch after this list).
    • Aggregated API Server: By analyzing the author of a request, the aggregated API server will determine the correct bottle and route the request accordingly, making it transparent for the user.
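
    To make the admission-controller idea concrete, here is a toy Python sketch of the mutation step: it answers an AdmissionReview by patching the resource name of a "bottled" CRD with a bottle identifier. The renaming scheme and how the bottle is determined are assumptions of this sketch, not part of the proof of concept (a real implementation would build on Kubewarden and would also have to handle the group consistently).

    import base64, json

    def mutate_bottled_crd(review: dict, bottle: str) -> dict:
        # Answer a mutating AdmissionReview for a CRD create request.
        request = review["request"]
        original = request["object"]["metadata"]["name"]   # e.g. widgets.example.com
        patch = [{"op": "replace", "path": "/metadata/name",
                  "value": f"{bottle}.{original}"}]        # assumed renaming scheme
        return {
            "apiVersion": "admission.k8s.io/v1",
            "kind": "AdmissionReview",
            "response": {
                "uid": request["uid"],
                "allowed": True,
                "patchType": "JSONPatch",
                "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
            },
        }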


    Enabling Rancher as an OIDC Provider by rcabello

    Description

    Kubernetes supports OpenID Connect (OIDC) natively as an authentication mechanism, enabling token-based user authentication. This can be configured through flags in the Kubernetes API server or by using AuthenticationConfiguration.

    The purpose of this project is to enable Rancher to function as an OIDC provider, allowing Rancher's local cluster to act as an OIDC identity provider for downstream clusters. This setup will allow users to authenticate directly with downstream clusters without relying on Rancher’s proxy and impersonation mechanisms.

    Rancher will continue to support all authentication providers. When a user attempts to log in via the Rancher OIDC provider, they will be redirected to the authentication provider configured in Rancher.

    This approach also facilitates integration with third-party tools (e.g. StackState).
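
    As a rough illustration of the token flow, here is a minimal Python sketch of how a downstream consumer could validate a JWT issued by Rancher acting as an OIDC provider. It assumes PyJWT is available and that Rancher exposes a standard JWKS endpoint; the issuer URL, JWKS path, and audience below are placeholders, not Rancher's actual configuration.

    import jwt                      # PyJWT
    from jwt import PyJWKClient

    ISSUER = "https://rancher.example.com/oidc"            # placeholder issuer
    jwks = PyJWKClient(f"{ISSUER}/.well-known/jwks.json")  # assumed JWKS path

    def validate(token: str) -> dict:
        # Fetch the signing key advertised by the issuer and verify the token.
        key = jwks.get_signing_key_from_jwt(token).key
        return jwt.decode(token, key, algorithms=["RS256"],
                          issuer=ISSUER, audience="kubernetes")  # placeholder audience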

    Goals

    • Implement Rancher as an OIDC provider using the ORY Fosite library, focusing only on the essential functionality required for basic integration.
    • Enable downstream clusters to authenticate using JWT tokens issued by Rancher.
    • Configure StackState to authenticate using Rancher as an OIDC provider.

    Resources

    https://github.com/ory/fosite


    Rancher/k8s Trouble-Maker by tonyhansen

    Project Description

    When studying for my RHCSA, I found trouble-maker, which is a program that breaks a Linux OS and requires you to fix it. I want to create something similar for Rancher/k8s that can allow for troubleshooting an unknown environment.

    Goal for this Hackweek

    Create a basic framework for creating Rancher/k8s cluster lab environments as needed for the break/fix exercises. Create at least 5 modules that can be applied to the cluster and require troubleshooting.
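
    As an illustration of what such a module could look like, here is a toy Python sketch that breaks a deployment by pointing it at a nonexistent image, leaving the learner to diagnose the resulting ImagePullBackOff. The module interface and all names are made up for this sketch.

    from kubernetes import client, config

    def break_deployment(namespace: str = "default", name: str = "demo-app") -> None:
        # Patch the first container to an image that cannot be pulled.
        config.load_kube_config()
        apps = client.AppsV1Api()
        dep = apps.read_namespaced_deployment(name, namespace)
        dep.spec.template.spec.containers[0].image = "registry.invalid/broken:0.0.0"
        apps.patch_namespaced_deployment(name, namespace, dep)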

    Resources

    https://github.com/rancher/terraform-provider-rancher2
    https://github.com/rancher/tf-rancher-up


    CVE portal for SUSE Rancher products by gmacedo

    Description

    Currently it's a bit difficult for users to quickly see the list of CVEs affecting images in Rancher, RKE2, Harvester and Longhorn releases. Users need to individually look for each CVE in the SUSE CVE database page - https://www.suse.com/security/cve/ . This is not optimal, because those CVE pages are hard to read and contain data for all SLE and BCI products too, making it difficult to see only the CVEs affecting, for example, the latest release of Rancher. We understand that certain customers are only looking for CVE data for Rancher and not SLE or BCI.

    Goals

    The objective is to create a simple to read and navigate page that contains only CVE data related to Rancher, RKE2, Harvester and Longhorn, where it's easy to search by a CVE ID, an image name or a release version. The page should also provide the raw data as an exportable CSV file.

    It must be an MVP with the minimal amount of effort/time invested, but still provide great value to our users and save the time the Rancher Security team currently spends manually sharing such data. It might not be long-lived, as it could be replaced in 2-3 years by a better SUSE-wide solution.
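
    As a sketch of the data path, here is a minimal Python example that filters raw CVE records down to the relevant products and writes the exportable CSV described above. The input format (a JSON list with cve/product/release/image/status fields) is an assumption for illustration, not the actual internal data format.

    import csv, json

    PRODUCTS = {"Rancher", "RKE2", "Harvester", "Longhorn"}
    FIELDS = ["cve", "product", "release", "image", "status"]

    def export_csv(raw_path: str, out_path: str) -> None:
        # Keep only records for the four products and emit a flat CSV.
        with open(raw_path) as f:
            records = json.load(f)
        with open(out_path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            writer.writeheader()
            for record in records:
                if record.get("product") in PRODUCTS:
                    writer.writerow({k: record.get(k, "") for k in FIELDS})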

    Resources

    • The page must be simple and easy to read.
    • The UI/UX must be as straightforward as possible with minimal visual noise.
    • The content must be created automatically from the raw data that we already have internally.
    • It must be updated automatically on a daily basis and on ad-hoc runs (when needed).
    • The CVE status must be aligned with VEX.
    • The raw data must be exportable as CSV file.
    • Ideally it will be written in Go or pure Shell script with basic HTML and no external dependencies in CSS or JS.


    Metrics Server viewer for Kubernetes by bkampen

    This project is finished. Please visit the GitHub repo below for the tool.

    Description

    Build a CLI tool that can visualize Kubernetes metrics from metrics-server, so you can watch them without installing Prometheus and Grafana on a cluster.
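
    The data source here is the metrics.k8s.io aggregated API that metrics-server serves. As a rough illustration only (the tool itself targets Go, per the goals below; this is not its code), node usage can be read like this with the kubernetes Python client:

    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()
    # metrics-server exposes node and pod usage under the metrics.k8s.io group.
    metrics = api.list_cluster_custom_object("metrics.k8s.io", "v1beta1", "nodes")
    for item in metrics["items"]:
        usage = item["usage"]
        print(f'{item["metadata"]["name"]}: cpu={usage["cpu"]} memory={usage["memory"]}')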

    Goals

    • Learn more about metrics-server
    • Learn more about the inner workings of Kubernetes.
    • Learn more about Go

    Resources

    https://github.com/bvankampen/metrics-viewer


    Extending KubeVirtBMC's capability by adding Redfish support by zchang

    Description

    In Hack Week 23, we delivered a project called KubeBMC (renamed to KubeVirtBMC now), which brings the good old-fashioned IPMI ways to manage virtual machines running on KubeVirt-powered clusters. This opens the possibility of integrating existing bare-metal provisioning solutions like Tinkerbell with virtualized environments. We even received an inquiry about transferring the project to the KubeVirt organization. So, a proposal was filed, which was accepted by the KubeVirt community, and the project was renamed after that. We have many tasks on our to-do list. Some of them are administrative tasks; some are feature-related. One of the most requested features is Redfish support.

    Goals

    Extend the capability of KubeVirtBMC by adding Redfish support. Currently, the virtbmc component only exposes IPMI endpoints. We need to implement another simulator to expose Redfish endpoints, as we did with the IPMI module. We are aiming for a basic set of functionalities (a toy endpoint sketch follows the list):

    • Power management
    • Boot device selection
    • Virtual media mount (this one is not so basic)
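
    Since Redfish is JSON over HTTP, the simulator boils down to serving the Redfish resource tree. Here is a toy Python sketch of a service root endpoint, purely to illustrate the shape of the API; it is not KubeVirtBMC code, and a real simulator would add the Systems collection, power actions, boot override, and virtual media resources.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedfishHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Serve only the Redfish service root in this toy example.
            if self.path.rstrip("/") == "/redfish/v1":
                body = json.dumps({
                    "@odata.id": "/redfish/v1/",
                    "Id": "RootService",
                    "Systems": {"@odata.id": "/redfish/v1/Systems"},
                }).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8000), RedfishHandler).serve_forever()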

    Resources



    Harvester Packer Plugin by mrohrich

    Description

    Hashicorp Packer is an automation tool that allows automatic customized VM image builds, assuming the user has a virtualization tool at their disposal. To make use of Harvester as such a virtualization tool, a plugin for Packer needs to be written. With this plugin, users could make use of their Harvester cluster to build customized VM images, something they likely want to do if they have a Harvester cluster.

    Goals

    Write a Packer plugin bridging the gap between Harvester and Packer. Users should be able to create customized VM images using Packer and Harvester with no need to utilize another virtualization platform.

    Resources

    Hashicorp documentation for building custom plugins for Packer https://developer.hashicorp.com/packer/docs/plugins/creation/custom-builders

    Source repository of the Harvester Packer plugin https://github.com/m-ildefons/harvester-packer-plugin


    ddflare: (Dynamic)DNS management via Cloudflare API in Kubernetes by fgiudici

    Description

    ddflare is a project started a couple of weeks ago to provide DDNS management using the Cloudflare v4 API: Cloudflare offers management via APIs and access tokens, so it is possible to register a domain and implement a DynDNS client without any external service beyond their API.

    Since ddflare allows setting any IP for any domain name, one could manage multiple A and ALIAS records. Wouldn't it be cool to allow full DNS control from the project and integrate it with your Kubernetes cluster?
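
    The DynDNS update itself is a single authenticated call against the Cloudflare v4 API. Here is a minimal Python sketch of that call (this is not ddflare's actual code; the zone ID, record ID, and token are placeholders):

    import requests

    API = "https://api.cloudflare.com/client/v4"

    def update_record(token: str, zone_id: str, record_id: str,
                      name: str, ip: str) -> None:
        # Overwrite an existing A record with the current public IP.
        resp = requests.put(
            f"{API}/zones/{zone_id}/dns_records/{record_id}",
            headers={"Authorization": f"Bearer {token}"},
            json={"type": "A", "name": name, "content": ip, "ttl": 300},
        )
        resp.raise_for_status()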

    Goals

    Main goals are:

    1. add containerized image for ddflare
    2. extend ddflare to be able to add and remove DNS records (and not just update existing ones)
    3. add documentation, covering also a sample pod deployment for Kubernetes
    4. write a ddflare Kubernetes operator to enable domain management via Kubernetes resources (using kubebuilder)

    Available tasks and improvements are tracked on the ddflare GitHub.

    Resources

    • https://github.com/fgiudici/ddflare
    • https://developers.cloudflare.com/api/
    • https://book.kubebuilder.io