Project Description
Currently, the Elemental Operator authenticates the hosts registering for Elemental provisioning via TPM attestation. In particular, the host performs both Enrollment and Attestation in the same round on the very first registration. Further connections to update the host state are possible only if the host can prove its identity via the TPM (the Enrollment done previously is used to perform Attestation against the host).
The only available (unsupported) option to provision nodes without a TPM via the Elemental Operator is TPM emulation: keys derived from a (random) number are used to simulate TPM operations and perform attestation (see https://github.com/rancher/elemental-operator/issues/235).
There are a number of reasons to avoid random-derived-key TPM emulation in the Elemental Operator:
- security is not comparable to that of a real TPM device: in particular, all TPM keys are derived from one single number, and anyone with the same number can impersonate the host (see https://github.com/rancher-sandbox/go-tpm/issues/6)
- in order to allow the host to update its own data (labels), the random number must be derived from a host unique identifier (UID) so that the host can re-identify itself, which makes the whole Attestation useless
Viable alternatives include:
- plain identification (no authentication): just use a host UID for identification, with no authentication. This allows skipping Attestation entirely, providing roughly the same security as the current emulated TPM with a key derived from a host UID.
- split identification and authentication: identify with some UID from the host, and authenticate by generating a random key/password that is stored in the host's permanent storage (see the sketch after this list). This could provide a security level between no auth and TPM-based Attestation.
- fix the random generation of the emulated TPM key (https://github.com/rancher-sandbox/go-tpm/issues/6): generate a new, truly random TPM simulator and save its state in the host's permanent storage before performing Enrollment and Attestation.
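As a rough illustration of alternative 2, the client could generate a random secret once, persist it to the host's permanent storage, and present it alongside its UID on every later registration. Below is a minimal sketch under stated assumptions: the storage path and function names are hypothetical, not taken from the Elemental Operator code.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// hostSecretPath is a hypothetical location on the host's permanent storage.
const hostSecretPath = "/oem/registration/secret"

// loadOrCreateSecret returns the host's registration secret, generating and
// persisting a truly random one on first use. The UID keeps identifying the
// host; this secret only proves that later connections come from the same
// host that enrolled first.
func loadOrCreateSecret() (string, error) {
	data, err := os.ReadFile(hostSecretPath)
	if err == nil {
		return string(data), nil
	}
	if !errors.Is(err, fs.ErrNotExist) {
		return "", err
	}
	buf := make([]byte, 32)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	secret := hex.EncodeToString(buf)
	if err := os.WriteFile(hostSecretPath, []byte(secret), 0o600); err != nil {
		return "", err
	}
	return secret, nil
}

func main() {
	secret, err := loadOrCreateSecret()
	if err != nil {
		panic(err)
	}
	fmt.Println("registration secret ready:", len(secret), "hex chars")
}
```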
Goal for this Hackweek
The overall goal is to review current authentication methods during registration and explore new ones.
The focus for this Hackweek is to extend the Elemental Operator to allow multiple identification/authentication methods: the target MVP is to allow registration via alternative 1 above (plain identification, no authentication).
This project is part of: Hack Week 22
Activity
Comments
about 2 years ago by fgiudici
Feb 3, end of the hackweek:
We have a PR introducing a plain identification way to "authenticate" against the elemental-operator, as described in point 1 above.
Instead of a UUID, since we have had reports that SMBIOS information can be empty on hardware from some vendors, we used the MAC address of the "main" network interface as the unique identifier during registration (that should really be unique... otherwise, well, you have bigger issues than registering).
The "main" network interface is simply the first network interface found in the system with a hardware address and an IP address assigned, as sketched below. Good enough for this PoC, since we expect the ifindex to be lower for physical NICs, so they are checked before any virtual interface.
Some of the lasting value in the work was generalizing the authentication code, especially on the client side (using Golang interfaces), as sketched below.
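A hypothetical shape of that client-side abstraction (names are illustrative, not the ones used in the PR):

```go
package main

import "fmt"

// AuthClient abstracts an identification/authentication method, so the
// registration client stays the same whether it performs TPM attestation
// or plain MAC-based identification.
type AuthClient interface {
	// ID returns the unique identifier presented at registration time.
	ID() (string, error)
	// Proof produces the method-specific authentication material for a
	// registration token: a TPM attestation blob, or simply the ID again
	// for plain identification.
	Proof(token string) (string, error)
}

// plainAuth implements AuthClient with identification only (alternative 1).
type plainAuth struct{ mac string }

func (p plainAuth) ID() (string, error)          { return p.mac, nil }
func (p plainAuth) Proof(string) (string, error) { return p.mac, nil }

func main() {
	var c AuthClient = plainAuth{mac: "aa:bb:cc:dd:ee:ff"}
	id, _ := c.ID()
	proof, _ := c.Proof("registration-token")
	fmt.Println("registering", id, "proof:", proof)
}
```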
Talk is cheap. Show me the code.
Here it is: https://github.com/rancher/elemental-operator/pull/345
Similar Projects
iSCSI integration in Warewulf by ncuralli
Description
This Hackweek project aims to enhance Warewulf’s capabilities by adding iSCSI support, enabling both remote boot and flexible mounting of iSCSI devices within the filesystem. Warewulf, which already handles NFS, DHCP, and iPXE, will be extended to offer iSCSI services as well, centralizing all the services necessary for provisioning and booting cluster nodes.
Goals
- iSCSI Boot Option: Enable nodes to boot directly from iSCSI volumes
- Mounting iSCSI Volumes within the Filesystem: Implement support for mounting iSCSI devices at various points within the filesystem
Resources
https://warewulf.org/
Steps
- [ ] add a generic framework to handle remote resources/filesystems to wwctl
- [ ] add iSCSI handling to wwctl configure
- [ ] add iSCSI to dracut files
- [ ] test it
Install Uyuni on Kubernetes in cloud-native way by cbosdonnat
Description
For now installing Uyuni on Kubernetes requires running mgradm on a cluster node... which is not what users would do in the Kubernetes world. The idea is to implement an installation based only on Helm charts and probably an operator.
Goals
Install Uyuni from the Rancher UI.
Resources
- mgradm code: https://github.com/uyuni-project/uyuni-tools
- Uyuni operator: https://github.com/cbosdo/uyuni-operator
ClusterOps - Easily install and manage your personal kubernetes cluster by andreabenini
Description
ClusterOps is a Kubernetes installer and operator designed to streamline the initial configuration
and ongoing maintenance of kubernetes clusters. The focus of this project is primarily on personal
or local installations. However, the goal is to expand its use to encompass all installations of
Kubernetes for local development purposes.
It simplifies cluster management by automating tasks and providing just one user-friendly YAML-based configuration file, config.yml.
Overview
- Simplified Configuration: Define your desired cluster state in a simple YAML file, and ClusterOps will handle the rest.
- Automated Setup: Automates initial cluster configuration, including network settings, storage provisioning, special requirements (for example GPUs) and essential components installation.
- Ongoing Maintenance: Performs routine maintenance tasks such as upgrades, security updates, and resource monitoring.
- Extensibility: Easily extend functionality with custom plugins and configurations.
- Self-Healing: Detects and recovers from common cluster issues, ensuring stability, idempotence and reliability: the same operation can be performed multiple times without changing the result.
- Discreet: It works only on what it knows. If you are manually configuring parts of your Kubernetes cluster and that configuration does not interfere with ClusterOps, you can happily continue to work on those parts and use this tool only for what is needed.
Features
- Distribution and engine independence. Install your favorite Kubernetes engine with your package manager, execute one script and you'll have a complete working environment at your disposal.
- Basic config approach. One single config.yml file with configuration requirements (add/remove features): human readable, plain and simple. All fancy configs are managed automatically (ingress, balancers, services, proxy, ...).
- Local Builtin ContainerHub. The default installation provides a fully configured ContainerHub available locally along with the Kubernetes installation. This configuration allows the user to build, upload and deploy custom container images as if they were provided from external sources. Internet public sources are still available, but local development can be kept in this localhost server. The builtin ClusterOps operator will be fetched from this ContainerHub registry too.
- Kubernetes official dashboard installed as a plugin, others planned too (k9s for example).
- Kubevirt plugin installed and properly configured. Unleash the power of classic virtualization (KVM+QEMU) on top of Kubernetes and manage your entire system from there; libvirtd and virsh are required.
- One operator to rule them all. The installation script configures your machine automatically during installation and adds one kubernetes operator to manage your local cluster. From there the operator takes care of the cluster on your behalf.
- Clean installation and removal. Just test it, when you are done just use the same program to uninstall everything without leaving configs (or pods) behind.
Planned features (Wishlist / TODOs)
- Containerized Data Importer (CDI). Persistent storage management add-on for Kubernetes providing a declarative way of building and importing Virtual Machine Disks on PVCs for Kubevirt.
Harvester Packer Plugin by mrohrich
Description
Hashicorp Packer is an automation tool that allows automatic customized VM image builds - assuming the user has a virtualization tool at their disposal. To make use of Harvester as such a virtualization tool a plugin for Packer needs to be written. With this plugin users could make use of their Harvester cluster to build customized VM images, something they likely want to do if they have a Harvester cluster.
Goals
Write a Packer plugin bridging the gap between Harvester and Packer. Users should be able to create customized VM images using Packer and Harvester with no need to utilize another virtualization platform.
Resources
Hashicorp documentation for building custom plugins for Packer https://developer.hashicorp.com/packer/docs/plugins/creation/custom-builders
Source repository of the Harvester Packer plugin https://github.com/m-ildefons/harvester-packer-plugin
A CLI for Harvester by mohamed.belgaied
Harvester does not officially come with a CLI tool; the user is supposed to interact with Harvester mostly through the UI. Though it is theoretically possible to use kubectl to interact with Harvester, the manipulation of Kubevirt YAML objects is absolutely not user friendly. Inspired by tools like multipass from Canonical to easily and rapidly create one or multiple VMs, I began the development of Harvester CLI. Currently, it works, but Harvester CLI needs some love to be up-to-date with Harvester v1.0.2 and needs some bug fixes and improvements as well.
Project Description
Harvester CLI is a command line interface tool written in Go, designed to simplify interfacing with a Harvester cluster as a user. It is especially useful for testing purposes as you can easily and rapidly create VMs in Harvester by providing a simple command such as harvester vm create my-vm --count 5 to create 5 VMs named my-vm-01 to my-vm-05.
Harvester CLI is functional but needs a number of improvements: up-to-date functionality with Harvester v1.0.2 (some minor issues right now), modifying the default behaviour to create an openSUSE VM instead of an Ubuntu VM, solving some bugs, etc.
Github Repo for Harvester CLI: https://github.com/belgaied2/harvester-cli
Done in previous Hackweeks
- Create a GitHub Actions pipeline to automatically integrate Harvester CLI into Homebrew repositories: DONE
- Automatically package Harvester CLI for openSUSE / Red Hat RPMs or DEBs: DONE
Goal for this Hackweek
The goal for this Hackweek is to bring Harvester CLI up-to-speed with latest Harvester versions (v1.3.X and v1.4.X), and improve the code quality as well as implement some simple features and bug fixes.
Some nice additions might be:
- Improve handling of namespaced objects
- Add features, such as network management or Load Balancer creation?
- Add more unit tests and, why not, e2e tests
- Improve CI
- Improve the overall code quality
- Test the program and create issues for it
Issue list is here: https://github.com/belgaied2/harvester-cli/issues
Resources
The project is written in Go, using client-go, the Kubernetes Go client library, to communicate with the Harvester API (which is in fact a Kubernetes API); see the sketch below.
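As a rough illustration of that approach (a sketch only, not code from the repo; the namespace and error handling are simplified), listing Harvester VMs amounts to listing KubeVirt VirtualMachine objects through a dynamic Kubernetes client:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a REST config from the local kubeconfig: the Harvester API is
	// a Kubernetes API, so standard client-go tooling applies.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Harvester VMs are backed by KubeVirt's VirtualMachine CRD.
	gvr := schema.GroupVersionResource{
		Group:    "kubevirt.io",
		Version:  "v1",
		Resource: "virtualmachines",
	}
	vms, err := dyn.Resource(gvr).Namespace("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, vm := range vms.Items {
		fmt.Println(vm.GetName())
	}
}
```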
Welcome contributions include:
- Testing it and creating issues
- Documentation
- Go code improvement
What you might learn
Harvester CLI might be interesting to you if you want to learn more about:
- GitHub Actions
- Harvester as a SUSE Product
- Go programming language
- Kubernetes API