or: how to replace git push heroku master and cf push with Kubernetes
PaaS has made deployment of applications very easy. Kubernetes has made deployment of applications very flexible, but not easy. There are efforts to add the "easy" part to Kubernetes; that would make Kubernetes a good alternative to PaaS. With so many public cloud Kubernetes offerings nowadays, it would be nice if one could simply pick their preferred cloud and have an app running in minutes. This Hack Week project will be an exploration of the available tools that can make Kubernetes as friendly as a PaaS for deployment, but also of how much Kubernetes can help during development.
An example application will be used to test development and deployment to both staging and production.
Tools available
- Buildpacks
- Knative (Build, Serving, etc.)
- Local kube clusters (kind, microk8s, etc.)
- Concourse, Buildkite, drone.io, etc. (for CI)
- Kubernetes dashboard (for monitoring, etc.)
Desired results
At the end of the Hack Week there should be a repository that can be used as a "framework" when starting a project. It should have instructions on how to use the various helper scripts, and ideally a single entry point (like a CLI, a Makefile, or a bash script).
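As a sketch of what that single entry point could look like, here is a minimal bash version chaining buildpacks and kubectl; the script name, image name, and the k8s/ manifest directory are assumptions for illustration, not decided parts of the project:

#!/usr/bin/env bash
# deploy.sh -- hypothetical single entry point: build the app image with
# Cloud Native Buildpacks, then deploy it to whatever cluster kubectl
# currently points at (a local kind cluster or a public cloud offering).
set -euo pipefail
APP=${1:?usage: deploy.sh <app-name>}
# Build an OCI image from the local source tree, no Dockerfile needed.
pack build "$APP" --builder paketobuildpacks/builder-jammy-base
# Apply the (assumed) manifests and wait for the rollout to finish.
kubectl apply -f k8s/
kubectl rollout status "deployment/$APP"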
User stories
- As a developer having the code of a Ruby application on my local filesystem, I want to be able to run my application without installing any dependencies.
- As a developer, I want the running (development) instance of my application to be updated quickly when I change the code (with or without manual intervention).
- As a developer, I want to be able to deploy my application to a public cloud with one command.
- As a developer, I want to be able to roll back a deployment (see the sketch after this list).
- As a developer, I want to be able to run database migrations.
- (more might be added later)
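The rollback and migration stories above map onto existing kubectl primitives; a minimal sketch, assuming the app runs as a Deployment called myapp and ships a migration Job manifest (both names hypothetical):

# Roll back the deployment to its previous ReplicaSet.
kubectl rollout undo deployment/myapp
# Run database migrations as a one-off Job and wait for it to complete.
kubectl apply -f k8s/migrate-job.yaml
kubectl wait --for=condition=complete job/myapp-migrate --timeout=120s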
This project is part of:
Hack Week 18
Activity
Comments
- over 5 years ago by DKarakasilis: Created GitHub repository to host results: https://github.com/jimmykarily/kubaas
- over 5 years ago by DKarakasilis: Relevant blog post by Troy: https://www.suse.com/c/the-holy-grail-of-paas-on-kubernetes/
Similar Projects
Mammuthus - The NFS-Ganesha inside Kubernetes controller by vcheng
Description
As a user-space NFS provider, NFS-Ganesha is widely used by several projects, e.g. Longhorn and Rook. We want to create a Kubernetes controller to make configuring NFS-Ganesha easy. This controller will let users configure NFS-Ganesha through different backends, such as VFS and CephFS.
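For a feel of the intended user experience, configuring an export could be as simple as applying a custom resource; the NFSGanesha kind, API group, and fields below are a hypothetical sketch of what the controller might watch, not the actual CRD:

# Hypothetical custom resource reconciled by the Mammuthus controller.
kubectl apply -f - <<EOF
apiVersion: nfs.example.com/v1alpha1
kind: NFSGanesha
metadata:
  name: vfs-export
spec:
  backend: VFS        # or CephFS
  exportPath: /srv/nfs
  pseudoPath: /export
EOF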
Goals
- Create NFS-Ganesha Package on OBS: nfs-ganesha5, nfs-ganesha6
- Create NFS-Ganesha Container Image on OBS: Image
- Create a Kubernetes controller for NFS-Ganesha and support the VFS configuration on demand. Mammuthus
Resources
Multi-pod, autoscalable Elixir application in Kubernetes using K8s resources by socon
Description
Elixir / Erlang use their own solutions to create clusters that work together. Kubernetes provides its own orchestration. Due to the nature of the BEAM, it looks like a very promising technology for applications that run in Kubernetes and require to be always on, especially if they are built as web applications using Phoenix.
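One Kubernetes resource commonly used for BEAM clustering (for example by Elixir's libcluster DNS strategy) is a headless Service, which makes cluster DNS return one record per pod so nodes can find their peers; a minimal sketch, with the app label assumed:

# Headless Service: clusterIP None makes cluster DNS return one A record
# per pod, which BEAM node discovery can use to form an Erlang cluster.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: myapp-nodes
spec:
  clusterIP: None
  selector:
    app: myapp
EOF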
Goals
- Investigate and provide solutions that work in Phoenix LiveView using Kubernetes resources, so a multi-pod application can be used
- Provide an end-to-end example that creates and deploys a container from source code.
Resources
https://github.com/dwyl/phoenix-liveview-counter-tutorial
https://github.com/propedeutica/elixir-k8s-counter
Setup Kanidm as OIDC provider on Kubernetes by jkuzilek
Description
I am planning to upgrade my homelab Kubernetes cluster to the next level and need an OIDC provider for my services, including K8s itself.
Goals
- Successfully configure and deploy Kanidm on homelab cluster
- Integrate with K8s auth (see the sketch after this list)
- Integrate with other services (Envoy Gateway, Container Registry, future deployment of Forgejo?)
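Integrating with K8s auth usually comes down to pointing the API server's OIDC flags at the provider; a sketch with a placeholder issuer URL and client ID (Kanidm serves OIDC discovery under /oauth2/openid/<client_id>, but verify against its docs):

# Hypothetical kube-apiserver flags trusting Kanidm as the OIDC issuer;
# on kubeadm clusters these go into the kube-apiserver static pod manifest.
kube-apiserver \
  --oidc-issuer-url=https://idm.example.com/oauth2/openid/kubernetes \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=preferred_username \
  --oidc-groups-claim=groups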
Resources
Harvester Packer Plugin by mrohrich
Description
HashiCorp Packer is an automation tool that allows automatic, customized VM image builds, assuming the user has a virtualization tool at their disposal. To make use of Harvester as such a virtualization tool, a plugin for Packer needs to be written. With this plugin, users could make use of their Harvester cluster to build customized VM images, something they likely want to do if they have a Harvester cluster.
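Once the plugin exists, usage could look like any other Packer builder; the harvester source type and its fields below are a hypothetical sketch of the plugin's interface, not its actual schema:

cat > harvester.pkr.hcl <<'EOF'
# Hypothetical builder block -- the real schema is defined by the plugin.
source "harvester" "opensuse" {
  vm_name      = "image-builder"
  cpu          = 2
  memory       = 4096
  ssh_username = "root"
}
build {
  sources = ["source.harvester.opensuse"]
  provisioner "shell" {
    inline = ["zypper -n install nginx"]
  }
}
EOF
packer build harvester.pkr.hcl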
Goals
Write a Packer plugin bridging the gap between Harvester and Packer. Users should be able to create customized VM images using Packer and Harvester with no need to utilize another virtualization platform.
Resources
HashiCorp documentation for building custom plugins for Packer: https://developer.hashicorp.com/packer/docs/plugins/creation/custom-builders
Source repository of the Harvester Packer plugin: https://github.com/m-ildefons/harvester-packer-plugin
A CLI for Harvester by mohamed.belgaied
Harvester does not officially come with a CLI tool; the user is supposed to interact with Harvester mostly through the UI. Though it is theoretically possible to use kubectl to interact with Harvester, the manipulation of KubeVirt YAML objects is absolutely not user friendly. Inspired by tools like Multipass from Canonical that easily and rapidly create one or multiple VMs, I began the development of Harvester CLI. Currently it works, but Harvester CLI needs some love to be up to date with Harvester v1.0.2 and needs some bug fixes and improvements as well.
Project Description
Harvester CLI is a command line interface tool written in Go, designed to simplify interfacing with a Harvester cluster as a user. It is especially useful for testing purposes as you can easily and rapidly create VMs in Harvester by providing a simple command such as:
harvester vm create my-vm --count 5
to create 5 VMs named my-vm-01 to my-vm-05.
Harvester CLI is functional but needs a number of improvements: bringing functionality up to date with Harvester v1.0.2 (some minor issues right now), modifying the default behaviour to create an openSUSE VM instead of an Ubuntu VM, fixing some bugs, etc.
Github Repo for Harvester CLI: https://github.com/belgaied2/harvester-cli
Done in previous Hackweeks
- Create a GitHub Actions pipeline to automatically integrate Harvester CLI into Homebrew repositories: DONE
- Automatically package Harvester CLI for openSUSE / Red Hat RPMs or DEBs: DONE
Goal for this Hackweek
The goal for this Hackweek is to bring Harvester CLI up to speed with the latest Harvester versions (v1.3.X and v1.4.X), improve the code quality, and implement some simple features and bug fixes.
Some nice additions might be:
- Improve handling of namespaced objects
- Add features, such as network management or Load Balancer creation?
- Add more unit tests and, why not, e2e tests
- Improve CI
- Improve the overall code quality
- Test the program and create issues for it
Issue list is here: https://github.com/belgaied2/harvester-cli/issues
Resources
The project is written in Go, using client-go, the Kubernetes Go client library, to communicate with the Harvester API (which is in fact Kubernetes).
Welcome contributions include:
- Testing it and creating issues
- Documentation
- Go code improvement
What you might learn
Harvester CLI might be interesting to you if you want to learn more about:
- GitHub Actions
- Harvester as a SUSE Product
- Go programming language
- Kubernetes API
ClusterOps - Easily install and manage your personal kubernetes cluster by andreabenini
Description
ClusterOps is a Kubernetes installer and operator designed to streamline the initial configuration and ongoing maintenance of Kubernetes clusters. The focus of this project is primarily on personal or local installations; however, the goal is to expand its use to encompass all installations of Kubernetes for local development purposes.
It simplifies cluster management by automating tasks and providing a single, user-friendly YAML-based configuration file, config.yml.
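The config.yml format is not spelled out here, so the following is only a guess at the shape such a desired-state file could take (all field names are illustrative):

cat > config.yml <<'EOF'
# Hypothetical ClusterOps desired state -- field names are illustrative.
cluster:
  engine: k3s          # whichever engine your package manager installed
features:
  dashboard: true      # kubernetes official dashboard plugin
  kubevirt: true       # KVM+QEMU on top of Kubernetes
  containerhub: true   # local builtin registry
EOF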
Overview
- Simplified Configuration: Define your desired cluster state in a simple YAML file, and ClusterOps will handle the rest.
- Automated Setup: Automates initial cluster configuration, including network settings, storage provisioning, special requirements (for example GPUs) and essential components installation.
- Ongoing Maintenance: Performs routine maintenance tasks such as upgrades, security updates, and resource monitoring.
- Extensibility: Easily extend functionality with custom plugins and configurations.
- Self-Healing: Detects and recovers from common cluster issues, ensuring stability, idempotence and reliability. The same operation can be performed multiple times without changing the result.
- Discreet: It works only on what it knows. If you are manually configuring parts of your Kubernetes cluster and that configuration does not interfere with ClusterOps, you can happily continue to work on those parts and use this tool only for what is needed.
Features
- Distribution and engine independence. Install your favorite Kubernetes engine with your package manager, execute one script, and you'll have a complete working environment at your disposal.
- Basic config approach. One single config.yml file with configuration requirements (add/remove features): human readable, plain and simple. All fancy configs are managed automatically (ingress, balancers, services, proxy, ...).
- Local Builtin ContainerHub. The default installation provides a fully configured ContainerHub available locally along with the Kubernetes installation. This configuration allows the user to build, upload and deploy custom container images as if they were provided from external sources. Internet public sources are still available, but local development can be kept on this localhost server. The builtin ClusterOps operator will be fetched from this ContainerHub registry too.
- Kubernetes official dashboard installed as a plugin, others planned too (k9s for example).
- Kubevirt plugin installed and properly configured. Unleash the power of classic virtualization (KVM+QEMU) on top of Kubernetes and manage your entire system from there, libvirtd and virsh libs are required.
- One operator to rule them all. The installation script configures your machine automatically during installation and adds one kubernetes operator to manage your local cluster. From there the operator takes care of the cluster on your behalf.
- Clean installation and removal. Just test it, when you are done just use the same program to uninstall everything without leaving configs (or pods) behind.
Planned features (Wishlist / TODOs)
- Containerized Data Importer (CDI). A persistent storage management add-on for Kubernetes that provides a declarative way of building and importing Virtual Machine disks on PVCs for KubeVirt VMs.
Uyuni developer-centric documentation by deneb_alpha
Description
While we currently have extensive documentation on user-oriented tasks such as adding minions, patching, fine-tuning, etc., there is a notable gap when it comes to centralizing and documenting core functionalities for developers.
The number of functionalities and side tools we have in Uyuni can be overwhelming. It would be nice to have a centralized place with a descriptive list of the main/core functionalities.
Goals
Create, aggregate, and review on the Uyuni wiki a set of resources focused on developers, including some known common problems and troubleshooting tips.
The documentation will be helpful not only for everyone who is trying to learn the functionalities with all their inner processes, like newcomer developers or community enthusiasts, but also for anyone who needs a refresher.
Resources
The resources are currently aggregated here: https://github.com/uyuni-project/uyuni/wiki
Improve Development Environment on Uyuni by mbussolotto
Description
Currently, creating a dev environment for Uyuni might be complicated. The steps are:
- add the correct repo
- download packages
- configure your IDE (checkstyle, format rules, sonarlint....)
- setup debug environment
- ...
The current doc can be improved: some information is hard to find, and some is completely missing.
Dev Containers might solve this situation.
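A starting point could be a .devcontainer/devcontainer.json that pins the toolchain and editor settings in one place; the base image, settings, and extensions below are placeholders, not the project's actual choices:

mkdir -p .devcontainer
cat > .devcontainer/devcontainer.json <<'EOF'
{
  // Hypothetical dev container -- image and settings are placeholders.
  "name": "uyuni-dev",
  "image": "registry.opensuse.org/opensuse/tumbleweed:latest",
  "customizations": {
    "vscode": {
      "settings": { "editor.formatOnSave": true },
      "extensions": ["redhat.java", "sonarsource.sonarlint-vscode"]
    }
  }
}
EOF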
Goals
Uyuni development in no time:
- using VSCode:
- settings.json should contain all settings (for all languages in Uyuni, with all checkstyle rules, etc.)
- the dev container should contain all dependencies
- set up the debug environment
- implement a GitHub Workspace solution
- re-write documentation
Lots of pieces are already implemented: we need to connect them into a consistent solution.
Resources
- https://github.com/uyuni-project/uyuni/wiki