Or: how to replace git push heroku master and cf push with Kubernetes

PaaS has made deploying applications very easy. Kubernetes has made deploying applications very flexible, but not easy. There are efforts to add the "easy" part to Kubernetes, which would make it a good alternative to a PaaS. With so many public cloud Kubernetes offerings available nowadays, it would be nice if one could simply pick their preferred cloud and have an app running in minutes. This Hack Week project will be an exploration of the available tools that can make Kubernetes as friendly as a PaaS for deployment, but also of how much Kubernetes can help during development.
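
To make the contrast concrete, here is a rough sketch of what "easy" looks like on a PaaS today versus the kind of multi-step sequence Kubernetes typically requires when no extra tooling is in place (the image name, registry and manifest files are placeholders):

  # PaaS: one command from source code to running app
  git push heroku master          # or: cf push myapp

  # Kubernetes without helper tooling: several steps, each with its own config
  docker build -t registry.example.com/myapp:v1 .
  docker push registry.example.com/myapp:v1
  kubectl apply -f deployment.yaml -f service.yaml -f ingress.yaml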

An example application will be used to test development and deployment to both staging and production.

Tools available

  • Buildpacks (a sketch combining these with a local cluster follows the list)
  • Knative (Build, Serving, etc.)
  • Local Kubernetes clusters (kind, MicroK8s, etc.)
  • Concourse, Buildkite, drone.io, etc. (for CI)
  • Kubernetes Dashboard (for monitoring, etc.)
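
A minimal sketch of how some of these tools could be combined locally, assuming the pack CLI (Cloud Native Buildpacks), kind and kubectl are installed; the image tag and builder name are only examples:

  # Build an OCI image from the application source without writing a Dockerfile
  pack build myapp:dev --builder paketobuildpacks/builder:base   # builder image is just an example

  # Create a throwaway local cluster and load the image into it
  kind create cluster --name dev
  kind load docker-image myapp:dev --name dev

  # Run the app and expose it inside the cluster
  kubectl create deployment myapp --image=myapp:dev
  kubectl expose deployment myapp --port=8080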

Desired results

At the end of the Hack Week there should be a repository that can be used as a "framework" when starting a project. It should include instructions on how to use the various helper scripts and, ideally, offer a single entry point (such as a CLI, a Makefile, or a bash script).
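
The sketch below shows what such an entry point could feel like in practice; the script name and subcommands are purely hypothetical and only illustrate the intended workflow:

  # Hypothetical usage of the single entry point (all names are placeholders)
  ./kubaas dev                 # run the app locally with fast code reload
  ./kubaas deploy staging      # build, push and deploy to the staging cluster
  ./kubaas deploy production   # same flow against the production cluster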

User stories

  • As a developer with the code of a Ruby application on my local filesystem, I want to be able to run my application without installing any dependencies.
  • As a developer, I want the running (development) instance of my application to be updated very quickly when I change the code (with or without manual intervention).
  • As a developer, I want to be able to deploy my application to a public cloud with one command.
  • As a developer, I want to be able to roll back a deployment.
  • As a developer, I want to be able to run database migrations (a rough kubectl sketch for this and the previous story follows the list).
  • (more might be added later)
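
For the rollback and migration stories, a plain-kubectl sketch, assuming the example application is a Rails-style Ruby app already running as deployment "myapp" with image myapp:dev (all names are placeholders):

  # Roll back the most recent deployment and wait for it to settle
  kubectl rollout undo deployment/myapp
  kubectl rollout status deployment/myapp

  # Run database migrations as a one-off Job using the application image
  kubectl create job db-migrate-$(date +%s) --image=myapp:dev -- bundle exec rake db:migrate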

Looking for hackers with the skills:

kubernetes paas development

This project is part of:

Hack Week 18

Activity

  • over 5 years ago: okurz liked this project.
  • over 5 years ago: DKarakasilis added keyword "paas" to this project.
  • over 5 years ago: DKarakasilis added keyword "development" to this project.
  • over 5 years ago: DKarakasilis added keyword "kubernetes" to this project.
  • over 5 years ago: DKarakasilis started this project.
  • over 5 years ago: DKarakasilis originated this project.

  • Comments

    • DKarakasilis
      over 5 years ago by DKarakasilis

      Created GitHub repository to host results: https://github.com/jimmykarily/kubaas

    • DKarakasilis
      over 5 years ago by DKarakasilis

      Relevant blog post by Troy: https://www.suse.com/c/the-holy-grail-of-paas-on-kubernetes/

    Similar Projects

    Setup Kanidm as OIDC provider on Kubernetes by jkuzilek

    Description

    I am planning to upgrade my homelab Kubernetes cluster to the next level and need an OIDC provider for my services, including K8s itself.

    Goals

    • Successfully configure and deploy Kanidm on homelab cluster
    • Integrate with K8s auth
    • Integrate with other services (Envoy Gateway, Container Registry, future deployment of Forgejo?)

    Resources


    Multi-pod, autoscalable Elixir application in Kubernetes using K8s resources by socon

    Description

    Elixir / Erlang use their own solutions to create clusters whose nodes work together. Kubernetes provides its own orchestration. Due to the nature of the BEAM, it looks like a very promising technology for applications that run in Kubernetes and need to be always on, especially if they are built as web applications using Phoenix.

    Goals

    • Investigate and provide solutions that work in Phoenix LiveView using Kubernetes resources, so a multi-pod application can be used
    • Provide an end-to-end example that creates and deploys a container from source code.

    Resources

    https://github.com/dwyl/phoenix-liveview-counter-tutorial https://github.com/propedeutica/elixir-k8s-counter


    Introducing "Bottles": A Proof of Concept for Multi-Version CRD Management in Kubernetes by aruiz

    Description

    As we delve deeper into the complexities of managing multiple CRD versions within a single Kubernetes cluster, I want to introduce "Bottles" - a proof of concept that aims to address these challenges.

    Bottles proposes a novel approach to isolating and deploying different CRD versions in a self-contained environment. This would allow for greater flexibility and efficiency in managing diverse workloads.

    Goals

    • Evaluate Feasibility: determine whether this approach is technically viable and identify possible obstacles and limitations.
    • Reuse existing technology: leverage existing products whenever possible, e.g. build on top of Kubewarden as the admission controller.
    • Focus on Rancher's use case: the ultimate goal is to be able to use this approach to solve Rancher users' needs.

    Resources

    Core concepts:

    • ConfigMaps: Bottles could be defined and configured using ConfigMaps.
    • Admission Controller: An admission controller will detect "bottled" CRDs being installed and replace the resource name used to store them.
    • Aggregated API Server: By analyzing the author of a request, the aggregated API server will determine the correct bottle and route the request accordingly, making it transparent for the user.


    Harvester Packer Plugin by mrohrich

    Description

    Hashicorp Packer is an automation tool that allows automated, customized VM image builds - assuming the user has a virtualization tool at their disposal. To make use of Harvester as such a virtualization tool, a plugin for Packer needs to be written. With this plugin, users could make use of their Harvester cluster to build customized VM images, something they likely want to do if they have a Harvester cluster.

    Goals

    Write a Packer plugin bridging the gap between Harvester and Packer. Users should be able to create customized VM images using Packer and Harvester with no need to utilize another virtualization platform.

    Resources

    Hashicorp documentation for building custom plugins for Packer https://developer.hashicorp.com/packer/docs/plugins/creation/custom-builders

    Source repository of the Harvester Packer plugin https://github.com/m-ildefons/harvester-packer-plugin


    ClusterOps - Easily install and manage your personal kubernetes cluster by andreabenini

    Description

    ClusterOps is a Kubernetes installer and operator designed to streamline the initial configuration and ongoing maintenance of Kubernetes clusters. The focus of this project is primarily on personal or local installations. However, the goal is to expand its use to encompass all installations of Kubernetes for local development purposes.
    It simplifies cluster management by automating tasks and providing a single user-friendly YAML-based configuration file, config.yml.

    Overview

    • Simplified Configuration: Define your desired cluster state in a simple YAML file, and ClusterOps will handle the rest.
    • Automated Setup: Automates initial cluster configuration, including network settings, storage provisioning, special requirements (for example GPUs) and essential components installation.
    • Ongoing Maintenance: Performs routine maintenance tasks such as upgrades, security updates, and resource monitoring.
    • Extensibility: Easily extend functionality with custom plugins and configurations.
    • Self-Healing: Detects and recovers from common cluster issues, ensuring stability, idempotence and reliability. The same operation can be performed multiple times without changing the result.
    • Discreet: It only touches what it knows about. If you manually configure parts of your Kubernetes cluster and that configuration does not interfere with ClusterOps, you can happily keep working on those parts and use this tool only where it is needed.

    Features

    • Distribution and engine independence. Install your favorite Kubernetes engine with your package manager, execute one script and you'll have a complete working environment at your disposal.
    • Basic config approach. One single config.yml file with configuration requirements (add/remove features): human readable, plain and simple. All fancy configs managed automatically (ingress, balancers, services, proxy, ...).
    • Local Builtin ContainerHub. The default installation provides a fully configured ContainerHub available locally along with the Kubernetes installation. This configuration allows the user to build, upload and deploy custom container images as if they were provided from external sources. Public internet sources are still available, but local development can be kept on this localhost server. The builtin ClusterOps operator will be fetched from this ContainerHub registry too.
    • Kubernetes official dashboard installed as a plugin, others planned too (k9s for example).
    • Kubevirt plugin installed and properly configured. Unleash the power of classic virtualization (KVM+QEMU) on top of Kubernetes and manage your entire system from there, libvirtd and virsh libs are required.
    • One operator to rule them all. The installation script configures your machine automatically during installation and adds one kubernetes operator to manage your local cluster. From there the operator takes care of the cluster on your behalf.
    • Clean installation and removal. Just test it, when you are done just use the same program to uninstall everything without leaving configs (or pods) behind.

    Planned features (Wishlist / TODOs)

    • Containerized Data Importer (CDI). Persistent storage management add-on for Kubernetes to provide a declarative way of building and importing Virtual Machine Disks on PVCs for Kubevirt VMs.


    Uyuni developer-centric documentation by deneb_alpha

    Description

    While we currently have extensive documentation on user-oriented tasks such as adding minions, patching, fine-tuning, etc., there is a notable gap when it comes to centralizing and documenting core functionalities for developers.

    The number of functionalities and side tools we have in Uyuni can be overwhelming. It would be nice to have a centralized place with a descriptive list of the main/core functionalities.

    Goals

    Create, aggregate and review on the Uyuni wiki a set of resources, focused on developers, that also include some known common problems and troubleshooting tips.

    The documentation will be helpful not only for everyone who is trying to learn the functionalities with all their inner processes, like newcomer developers or community enthusiasts, but also for anyone who needs a refresher.

    Resources

    The resources are currently aggregated here: https://github.com/uyuni-project/uyuni/wiki



    Improve Development Environment on Uyuni by mbussolotto

    Description

    Currently, creating a dev environment for Uyuni might be complicated. The steps are:

    • add the correct repo
    • download packages
    • configure your IDE (checkstyle, format rules, sonarlint....)
    • setup debug environment
    • ...

    The current documentation can be improved: some information is hard to find, and some is missing entirely.

    A Dev Container might solve this situation.

    Goals

    Uyuni development in no time:

    • using VSCode:
      • settings.json should contain all settings (for all languages in Uyuni, with all checkstyle rules, etc.)
      • the dev container should contain all dependencies
      • set up the debug environment
    • implement a GitHub Workspace solution
    • re-write documentation

    Lots of pieces are already implemented: we need to connect them in a consistent solution.

    Resources

    • https://github.com/uyuni-project/uyuni/wiki