Description

Elixir and Erlang have their own mechanisms for forming clusters of nodes that work together, while Kubernetes provides its own orchestration. Due to the nature of the BEAM, it looks like a very promising technology for applications that run in Kubernetes and need to be always on, especially if they are built as web applications using Phoenix.
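
Joining the two usually comes down to letting the BEAM nodes discover each other through Kubernetes. The snippet below is a minimal sketch, assuming the libcluster library with its Kubernetes DNS strategy and a headless Service in front of the pods; the application, Service and module names are placeholders, not something defined by this project.

  # config/runtime.exs -- minimal sketch, assuming libcluster and a headless Service.
  # "counter" and "counter-headless" are placeholder names.
  import Config

  config :libcluster,
    topologies: [
      k8s: [
        # Discover peer pods through the DNS records of the headless Service
        strategy: Cluster.Strategy.Kubernetes.DNS,
        config: [
          service: "counter-headless",   # name of the headless Service
          application_name: "counter"    # node basename, e.g. counter@10.42.0.7
        ]
      ]
    ]

In the supervision tree, a {Cluster.Supervisor, [topologies, [name: Counter.ClusterSupervisor]]} child started before the endpoint connects the nodes; the release typically also has to run with name-based distribution so that node names match the pod IPs.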

Goals

  • Investigate and provide solutions that make Phoenix LiveView work with Kubernetes resources, so the application can run across multiple pods (see the sketch after this list)
  • Provide an end-to-end example that builds and deploys a container from source code.
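
As a rough illustration of the multi-pod goal, the LiveView sketch below keeps a counter in sync across pods by broadcasting over Phoenix.PubSub, which spans all clustered nodes. Module, topic and PubSub names are placeholders, and it assumes the clustering setup sketched above; it is not this project's actual code.

  defmodule CounterWeb.CounterLive do
    use CounterWeb, :live_view

    @topic "counter"

    @impl true
    def mount(_params, _session, socket) do
      # Subscribe this LiveView process, whichever pod it happens to run on
      if connected?(socket), do: Phoenix.PubSub.subscribe(Counter.PubSub, @topic)
      {:ok, assign(socket, :count, 0)}
    end

    @impl true
    def handle_event("inc", _params, socket) do
      new_count = socket.assigns.count + 1
      # The broadcast reaches LiveView processes on every node in the cluster
      Phoenix.PubSub.broadcast(Counter.PubSub, @topic, {:count, new_count})
      {:noreply, assign(socket, :count, new_count)}
    end

    @impl true
    def handle_info({:count, count}, socket) do
      {:noreply, assign(socket, :count, count)}
    end

    @impl true
    def render(assigns) do
      ~H"""
      <h1>Count: <%= @count %></h1>
      <button phx-click="inc">+</button>
      """
    end
  end

Note that this keeps the count only in the LiveView processes themselves; a real multi-pod deployment would also want the value in shared storage (a database, or a process reached through a distributed registry) so newly connected clients start from the current count.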

Resources

https://github.com/dwyl/phoenix-liveview-counter-tutorial
https://github.com/propedeutica/elixir-k8s-counter

Looking for hackers with the skills:

elixir elixir-lang kubernetes

This project is part of:

Hack Week 24

Activity

  • 2 months ago: socon started this project.
  • 2 months ago: socon added keyword "elixir" to this project.
  • 2 months ago: socon added keyword "elixir-lang" to this project.
  • 2 months ago: socon added keyword "kubernetes" to this project.
  • 2 months ago: socon originated this project.

  • Comments

    • socon
      2 months ago

      Solution uploaded to the GitHub repo https://github.com/propedeutica/elixir-k8s-counter. Article published with the result: https://medium.com/@chargio/how-to-easily-run-your-elixir-application-in-a-local-kubernetes-using-docker-desktop-f0c1ccfd49e6

    Similar Projects

    Learn how to integrate Elixir and Phoenix Liveview with LLMs by ninopaparo

    Description

    Learn how to integrate Elixir and Phoenix Liveview with LLMs by building an application that can provide answers to user queries based on a corpus of custom-trained data.

    Goals

    Develop an Elixir application via the Phoenix framework that:

    • Employs Retrieval Augmented Generation (RAG) techniques (a rough sketch of the embedding step follows this list).
    • Supports the integration and utilization of various Large Language Models (LLMs).
    • Is designed with extensibility and adaptability in mind to accommodate future enhancements and modifications.
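
    As a hedged sketch of the retrieval side of RAG, the snippet below builds a text-embedding serving with Bumblebee; the model repository is only an example and the vector index lookup is left abstract, so treat it as an assumption rather than this project's actual design.

      # Minimal embedding sketch, assuming the Bumblebee and Nx packages are installed.
      # The model repo is an example; indexing and similarity search are out of scope here.
      repo = {:hf, "sentence-transformers/all-MiniLM-L6-v2"}
      {:ok, model_info} = Bumblebee.load_model(repo)
      {:ok, tokenizer} = Bumblebee.load_tokenizer(repo)

      serving = Bumblebee.Text.text_embedding(model_info, tokenizer)

      # Embed a user query; the resulting vector would be compared against precomputed
      # embeddings of the custom corpus to pick the context passed to the LLM.
      %{embedding: query_vector} = Nx.Serving.run(serving, "How do I reset my password?")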

    Resources

    • https://elixir-lang.org/
    • https://www.phoenixframework.org/
    • https://github.com/elixir-nx/bumblebee
    • https://ollama.com/


    Rancher/k8s Trouble-Maker by tonyhansen

    Project Description

    When studying for my RHCSA, I found trouble-maker, which is a program that breaks a Linux OS and requires you to fix it. I want to create something similar for Rancher/k8s that can allow for troubleshooting an unknown environment.

    Goal for this Hackweek

    • Create a basic framework for creating Rancher/k8s cluster lab environments as needed for the Break/Fix
    • Create at least 5 modules that can be applied to the cluster and require troubleshooting

    Resources

    https://github.com/rancher/terraform-provider-rancher2
    https://github.com/rancher/tf-rancher-up


    Mammuthus - The NFS-Ganesha inside Kubernetes controller by vcheng

    Description

    As a user-space NFS provider, NFS-Ganesha is widely used by several projects, e.g. Longhorn and Rook. We want to create a Kubernetes controller to make configuring NFS-Ganesha easy. This controller will let users configure NFS-Ganesha through different backends such as VFS or CephFS.

    Goals

    1. Create NFS-Ganesha Package on OBS: nfs-ganesha5, nfs-ganesha6
    2. Create NFS-Ganesha Container Image on OBS: Image
    3. Create a Kubernetes controller for NFS-Ganesha and support the VFS configuration on demand. Mammuthus

    Resources

    NFS-Ganesha


    Introducing "Bottles": A Proof of Concept for Multi-Version CRD Management in Kubernetes by aruiz

    Description

    As we delve deeper into the complexities of managing multiple CRD versions within a single Kubernetes cluster, I want to introduce "Bottles" - a proof of concept that aims to address these challenges.

    Bottles proposes a novel approach to isolating and deploying different CRD versions in a self-contained environment. This would allow for greater flexibility and efficiency in managing diverse workloads.

    Goals

    • Evaluate Feasibility: determine if this approach is technically viable and identify possible obstacles and limitations.
    • Reuse existing technology: leverage existing products whenever possible, e.g. build on top of Kubewarden as admission controller.
    • Focus on Rancher's use case: the ultimate goal is to be able to use this approach to solve Rancher users' needs.

    Resources

    Core concepts:

    • ConfigMaps: Bottles could be defined and configured using ConfigMaps.
    • Admission Controller: An admission controller will detect "bottled" CRDs being installed and replace the resource name used to store them.
    • Aggregated API Server: By analyzing the author of a request, the aggregated API server will determine the correct bottle and route the request accordingly, making it transparent for the user.


    ClusterOps - Easily install and manage your personal kubernetes cluster by andreabenini

    Description

    ClusterOps is a Kubernetes installer and operator designed to streamline the initial configuration and ongoing maintenance of kubernetes clusters. The focus of this project is primarily on personal or local installations. However, the goal is to expand its use to encompass all installations of Kubernetes for local development purposes.
    It simplifies cluster management by automating tasks and providing a single user-friendly YAML configuration file, config.yml.

    Overview

    • Simplified Configuration: Define your desired cluster state in a simple YAML file, and ClusterOps will handle the rest.
    • Automated Setup: Automates initial cluster configuration, including network settings, storage provisioning, special requirements (for example GPUs) and essential components installation.
    • Ongoing Maintenance: Performs routine maintenance tasks such as upgrades, security updates, and resource monitoring.
    • Extensibility: Easily extend functionality with custom plugins and configurations.
    • Self-Healing: Detects and recovers from common cluster issues, ensuring stability, idempotence and reliability (the same operation can be performed multiple times without changing the result).
    • Discreet: It works only on what it knows. If you are manually configuring parts of your Kubernetes cluster and that configuration does not interfere with it, you can happily continue to manage those parts yourself and use this tool only for what is needed.

    Features

    • Distribution and engine independence. Install your favorite Kubernetes engine with your package manager, execute one script, and you'll have a complete working environment at your disposal.
    • Basic config approach. One single config.yml file with configuration requirements (add/remove features): human readable, plain and simple. All fancy configs managed automatically (ingress, balancers, services, proxy, ...).
    • Local Builtin ContainerHub. The default installation provides a fully configured ContainerHub available locally alongside the Kubernetes installation. This configuration allows the user to build, upload and deploy custom container images as if they were provided from external sources. Internet public sources are still available, but local development can be kept on this localhost server. The builtin ClusterOps operator will be fetched from this ContainerHub registry too.
    • Kubernetes official dashboard installed as a plugin; others (k9s, for example) are planned too.
    • Kubevirt plugin installed and properly configured. Unleash the power of classic virtualization (KVM+QEMU) on top of Kubernetes and manage your entire system from there; libvirtd and virsh are required.
    • One operator to rule them all. The installation script configures your machine automatically during installation and adds one kubernetes operator to manage your local cluster. From there the operator takes care of the cluster on your behalf.
    • Clean installation and removal. Just test it; when you are done, use the same program to uninstall everything without leaving configs (or pods) behind.

    Planned features (Wishlist / TODOs)

    • Containerized Data Importer (CDI). Persistent storage management add-on for Kubernetes to provide a declarative way of building and importing Virtual Machine Disks on PVCs for


    Metrics Server viewer for Kubernetes by bkampen

    This project is finished; please visit the GitHub repo below for the tool.

    Description

    Build a CLI tool that can visualize Kubernetes metrics from the metrics-server, so you're able to watch them without installing Prometheus and Grafana on a cluster.

    Goals

    • Learn more about metrics-server
    • Learn more about the inner workings of Kubernetes.
    • Learn more about Go

    Resources

    https://github.com/bvankampen/metrics-viewer