Project Description
Make it possible to send notifications about node draining and rebooting (using the kured reboot sentinel).
Goal for this Hackweek
Get familiar with the Spacewalk API and notification-related code. Try to implement a feature to trigger a custom notification from the Spacewalk API.
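As a first experiment, something like the sketch below could exercise the server's XML-RPC API. auth.login and auth.logout are existing Spacewalk/Uyuni API calls; system.sendNotification is hypothetical, a stand-in for the custom notification endpoint this project would add.

```python
# Hedged sketch: trigger a notification via the Spacewalk/Uyuni XML-RPC API.
# auth.login/auth.logout exist today; system.sendNotification is hypothetical,
# standing in for the endpoint this project would implement.
import xmlrpc.client

client = xmlrpc.client.ServerProxy("https://suma.example.com/rpc/api")
key = client.auth.login("admin", "password")
try:
    # What a kured hook could send once the reboot sentinel appears and the
    # node is about to drain:
    client.system.sendNotification(
        key, "node01", "Draining: kured reboot sentinel detected")
finally:
    client.auth.logout(key)
```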
Resources
https://github.com/weaveworks/kured
https://github.com/containrrr/shoutrrr
https://github.com/SUSE/spacewalk
Looking for hackers with the skills:
kubernetes notification suma susemanager rke2 kured spacewalk
This project is part of:
Hack Week 20
Activity
Comments
-
almost 4 years ago by pagarcia | Reply
It would be cool to e.g. receive on MS Teams the notifications Uyuni currently sends via e-mail or the notification "inbox".
Even cooler if you could react to that:
Uyuni: "hey @atighineanu , 23 clients are impacted by CVE-1234-5678"
atighineanu: "patch now"
Uyuni: "10 SLES 12 SP4, 7 SLES 15 SP2, 3 CentOS 7, 2 Ubuntu 18.04 and 1 CentOS 8 have been scheduled to be patched"
Uyuni: "client X, Y and Z running SLES 12 SP4 have been successfully patched. 20 clients in progres" ...
Similar Projects
Install Uyuni on Kubernetes in cloud-native way by cbosdonnat
Description
For now installing Uyuni on Kubernetes requires running mgradm on a cluster node... which is not what users would do in the Kubernetes world. The idea is to implement an installation based only on Helm charts and probably an operator.
Goals
Install Uyuni from Rancher UI.
Resources
- mgradm code: https://github.com/uyuni-project/uyuni-tools
- Uyuni operator: https://github.com/cbosdo/uyuni-operator
Rancher/k8s Trouble-Maker by tonyhansen
Project Description
When studying for my RHCSA, I found trouble-maker, which is a program that breaks a Linux OS and requires you to fix it. I want to create something similar for Rancher/k8s that can allow for troubleshooting an unknown environment.
Goal for this Hackweek
- Create a basic framework for creating Rancher/k8s cluster lab environments as needed for the break/fix exercises
- Create at least 5 modules that can be applied to the cluster and require troubleshooting (see the sketch below for a flavor)
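To give a flavor of what a module could look like, here is a hedged sketch using the kubernetes Python client: it breaks cluster DNS by scaling CoreDNS to zero replicas, leaving the student to find out why lookups fail. The deployment name and namespace are assumptions and differ between distributions (RKE2, for instance, uses its own names).

```python
# Hypothetical break/fix module: scale CoreDNS to 0 so DNS resolution fails.
# Deployment name/namespace are assumptions; adjust for RKE2 etc.
from kubernetes import client, config

def break_dns() -> None:
    config.load_kube_config()  # use the current kubeconfig
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment(
        name="coredns", namespace="kube-system",
        body={"spec": {"replicas": 0}})

if __name__ == "__main__":
    break_dns()
```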
Resources
- https://github.com/rancher/terraform-provider-rancher2
- https://github.com/rancher/tf-rancher-up
ddflare: (Dynamic)DNS management via Cloudflare API in Kubernetes by fgiudici
Description
ddflare is a project started a couple of weeks ago to provide DDNS management using the v4 Cloudflare APIs: Cloudflare offers management via APIs and access tokens, so it is possible to register a domain and implement a DynDNS client without any external service but their API.
Since ddflare allows setting any IP for any domain name, one could manage multiple A and ALIAS domain records. Wouldn't it be cool to allow full DNS control from the project and integrate it with your Kubernetes cluster?
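For reference, a hedged sketch of the kind of call involved: updating an existing A record through the Cloudflare v4 API with an API token. The token, zone ID and record ID are placeholders, and ddflare's actual code paths may differ.

```python
# Sketch: update an A record via the Cloudflare v4 API (token/IDs are
# placeholders; see https://developers.cloudflare.com/api/ for the schema).
import requests

API = "https://api.cloudflare.com/client/v4"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

def update_a_record(zone_id: str, record_id: str, name: str, ip: str) -> None:
    # PUT /zones/{zone_id}/dns_records/{record_id} replaces the record
    resp = requests.put(
        f"{API}/zones/{zone_id}/dns_records/{record_id}",
        headers=HEADERS,
        json={"type": "A", "name": name, "content": ip, "ttl": 300})
    resp.raise_for_status()

update_a_record("ZONE_ID", "RECORD_ID", "home.example.com", "203.0.113.7")
```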
Goals
Main goals are:
- add containerized image for ddflare
- extend ddflare to be able to add and remove DNS records (and not just update existing ones)
- add documentation, covering also a sample pod deployment for Kubernetes
- write a ddflare Kubernetes operator to enable domain management via Kubernetes resources (using kubebuilder)
Available tasks and improvements tracked on ddflare github.
Resources
- https://github.com/fgiudici/ddflare
- https://developers.cloudflare.com/api/
- https://book.kubebuilder.io
ClusterOps - Easily install and manage your personal kubernetes cluster by andreabenini
Description
ClusterOps is a Kubernetes installer and operator designed to streamline the initial configuration
and ongoing maintenance of kubernetes clusters. The focus of this project is primarily on personal
or local installations. However, the goal is to expand its use to encompass all installations of
Kubernetes for local development purposes.
It simplifies cluster management by automating tasks and providing just one user-friendly YAML-based configuration file, config.yml.
Overview
- Simplified Configuration: Define your desired cluster state in a simple YAML file, and ClusterOps will handle the rest.
- Automated Setup: Automates initial cluster configuration, including network settings, storage provisioning, special requirements (for example GPUs) and essential components installation.
- Ongoing Maintenance: Performs routine maintenance tasks such as upgrades, security updates, and resource monitoring.
- Extensibility: Easily extend functionality with custom plugins and configurations.
- Self-Healing: Detects and recovers from common cluster issues, ensuring stability, idempotence and reliability; the same operation can be performed multiple times without changing the result.
- Discreet: It works only on what it knows; if you are manually configuring parts of your kubernetes cluster and that configuration does not interfere with it, you can happily continue working on those parts and use this tool only for what is needed.
Features
- Distribution and engine independence. Install your favorite kubernetes engine with your package
manager, execute one script and you'll have a complete working environment at your disposal.
- Basic config approach. One single config.yml file with configuration requirements (add/remove features): human readable, plain and simple. All fancy configs managed automatically (ingress, balancers, services, proxy, ...).
- Local Builtin ContainerHub. The default installation provides a fully configured ContainerHub available locally along with the kubernetes installation. This configuration allows the user to build, upload and deploy custom container images as if they were provided from external sources. Internet public sources are still available but local development can be kept on this localhost server. The builtin ClusterOps operator will be fetched from this ContainerHub registry too.
- Kubernetes official dashboard installed as a plugin, others planned too (k9s for example).
- Kubevirt plugin installed and properly configured. Unleash the power of classic virtualization (KVM+QEMU) on top of Kubernetes and manage your entire system from there, libvirtd and virsh libs are required.
- One operator to rule them all. The installation script configures your machine automatically during installation and adds one kubernetes operator to manage your local cluster. From there the operator takes care of the cluster on your behalf.
- Clean installation and removal. Just test it, when you are done just use the same program to uninstall everything without leaving configs (or pods) behind.
Planned features (Wishlist / TODOs)
- Containerized Data Importer (CDI). Persistent storage management add-on for Kubernetes to provide a declarative way of building and importing Virtual Machine Disks on PVCs for Kubevirt VMs.
Integrate Backstage with Rancher Manager by nwmacd
Description
Backstage (backstage.io) is an open-source, CNCF project that allows you to create your own developer portal. There are many plugins for Backstage.
This could be a great complement to Rancher Manager.
Goals
Learn and experiment with Backstage and look at how this could be integrated with Rancher Manager. The goal is to have some kind of integration completed in this Hackweek.
Progress
Screenshot of the home page at the end of Hackweek.
Day One
- Got Backstage running locally, understanding configuration with HTTPs.
- Got Backstage embedded in an IFRAME inside of Rancher
- Added content into the software catalog (see: https://backstage.io/docs/features/techdocs/getting-started/)
- Understood more about the entity model
Day Two
- Connected Backstage to the Rancher local cluster and configured the Kubernetes plugin.
- Created Rancher theme to make the light theme more consistent with Rancher
Days Three and Day Four
Created two backend plugins for Backstage:
- Catalog Entity Provider - this imports users from Rancher into Backstage
- Auth Provider - uses the proxied sign-in pattern to check the Rancher session cookie, using that to authenticate the user with Rancher and then log them into Backstage by connecting this to the imported User entity from the catalog entity provider plugin.
With this in place, you can single sign-on between Rancher and Backstage when Backstage is deployed within Rancher. Note this only works when running locally for development at present.
Day Five
- Start to build out a production deployment for all of the above
- Made some progress, but hit issues with the authentication and proxying when running proxied within Rancher, which need further investigation.
New KDE Plasma notification app/applet by apappas
Description
My memory is terrible, so I depend a lot on notifications to carry me through the workday. As a Plasma user I am OK with the current applet, but I don't love it. It is too small for the centrality it has in my day. Also, I dislike that you cannot go back to notifications you have dismissed.
Goals
Develop a Plasma app that:
- must gather notifications without disrupting the existing notification app
- must offer the ability to refer to dismissed/archived/seen notifications up to some defined point in the past
- must allow deletion of notifications
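A Plasma applet would normally be QML/C++, but the core idea of gathering notifications without disrupting the existing applet can be prototyped in a few lines of Python: watch for org.freedesktop.Notifications Notify calls on the session bus and archive them. This is a hedged sketch; on newer D-Bus daemons the eavesdrop match rule may be rejected and a monitoring interface (BecomeMonitor) would be needed instead.

```python
# Prototype sketch: capture desktop notifications by watching Notify calls
# on the session bus. Newer D-Bus may require BecomeMonitor instead of the
# eavesdrop match rule used here.
import dbus
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

def on_message(bus, message):
    if message.get_member() == "Notify":
        # Notify(app_name, replaces_id, app_icon, summary, body, ...)
        args = message.get_args_list()
        print(f"[{args[0]}] {args[3]}: {args[4]}")  # archive instead of print

DBusGMainLoop(set_as_default=True)
session = dbus.SessionBus()
session.add_match_string(
    "type='method_call',interface='org.freedesktop.Notifications',"
    "member='Notify',eavesdrop='true'")
session.add_message_filter(on_message)
GLib.MainLoop().run()
```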
Build Edge Image Builder ISO with SUSE Manager by mweiss2
Description
With SUSE Manager, we can build OS Images using KIWI and container images. As we have Edge Image Builder, we want to see if it is possible to use SUSE Manager to build/customize OS Images by integrating Edge Image Builder as well.
Goals
To make the process easier for customers, a single-build pipeline that automatically adds the combustion and artifact files from the EIB process is desirable.
- Kiwi and EIB need to come from a Git Repository.
- Kiwi and EIB need to be running as containers.
- Configuration options for the images used for Kiwi and EIB build.
- X86 and ARM64 Support.
- SUSE Manager 4.3 and 5.X Support.
- SLES 15 SP6 / SL Micro 6.0 and SL Micro 6.1 Support.
Outcome
- Change the Kiwi build process to use Podman with the Kiwi image registry.suse.com/bci/kiwi:10.1.10
- Change the Edge Image Builder to produce a combustion-only ISO
- Extract the contents and write them to a dedicated /OEM partition that is integrated via Kiwi into the ISO Kiwi creates.
Sources and PRs
- https://github.com/Martin-Weiss/kiwi-image-micro-gpu-60
- https://github.com/suse-edge/edge-image-builder/pull/618
- https://github.com/uyuni-project/uyuni/pull/9507
Improve Development Environment on Uyuni by mbussolotto
Description
Currently, creating a dev environment for Uyuni might be complicated. The steps are:
- add the correct repo
- download packages
- configure your IDE (checkstyle, format rules, sonarlint....)
- setup debug environment
- ...
The current docs can be improved: some information is hard to find, some is completely missing.
Dev Containers might solve this situation.
Goals
Uyuni development in no time:
- using VSCode:
- settings.json should contain all settings (for all languages in Uyuni, with all checkstyle rules etc...)
- the dev container should contain all dependencies
- setup debug environment
- implement a GitHub Workspace solution
- re-write documentation
Lots of pieces are already implemented: we need to connect them in a consistent solution.
Resources
- https://github.com/uyuni-project/uyuni/wiki
Testing and adding GNU/Linux distributions on Uyuni by juliogonzalezgil
Join the Gitter channel! https://gitter.im/uyuni-project/hackweek
Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines. It also manages configuration, can run audits, build image containers, monitor and much more!
Currently there are a few distributions that are completely untested on Uyuni or SUSE Manager (AFAIK), or just not tested for a long time, and it could be interesting to know how hard it would be to work with them and, if possible, fix whatever is broken.
For newcomers, the easiest distributions are those based on DEB or RPM packages. Distributions with other package formats are doable, but will require adapting the Python and Java code to be able to sync and analyze such packages (and if salt does not support those packages, it will need changes as well). So if you want a distribution with other packages, make sure you are comfortable handling such changes.
No developer experience? No worries! We had non-developer contributors in the past, and we are ready to help as long as you are willing to learn. If you don't want to code at all, you can also help us prepare the documentation after someone else has the initial code ready, or you could also help with testing :-)
The idea is testing Salt and Salt-ssh clients, but NOT traditional clients, which are deprecated.
To consider that a distribution has basic support, we should cover at least (points 3-6 are to be tested for both salt minions and salt ssh minions):
- Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
- Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
- Package management (install, remove, update...)
- Patching
- Applying any basic salt state (including a formula)
- Salt remote commands
- Bonus point: Java part for product identification, and monitoring enablement
- Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
- Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
- Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)
If something is breaking: we can try to fix it, but the main idea is to research how well supported it is right now. Beyond that it's up to each project member how much to hack :-)
- If you don't have knowledge about some of the steps: ask the team
- If you still don't know what to do: switch to another distribution and keep testing.
This card is for EVERYONE, not just developers. Seriously! We had people from other teams helping that were not developers, and added support for Debian and new SUSE Linux Enterprise and openSUSE Leap versions :-)
Pending
FUSS
FUSS is a complete GNU/Linux solution (server, client and desktop/standalone) based on Debian for managing an educational network.
https://fuss.bz.it/
Seems to be a Debian 12 derivative, so adding it could be quite easy.
[W] Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
[W] Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding OS to the bootstrap repository creator) --> Working for all 3 options (salt minion UI, salt minion bootstrap script and salt-ssh minion from the UI).
[W] Package management (install, remove, update...) --> Installing a new package works, needs to test the rest.
[I] Patching (if patch information is available, could require writing some code to parse it, but IIRC we have support for Ubuntu already). No patches detected. Do we support patches for Debian at all?
[W] Applying any basic salt state (including a formula)
[W] Salt remote commands
[ ] Bonus point: Java part for product identification, and monitoring enablement
Create SUSE Manager users from ldap/ad groups by mbrookhuis
Description
This tool is used to create users in SUSE Manager Server based on LDAP/AD groups. For each LDAP/AD group a role within SUSE Manager Server is defined. The tool will also check if existing users still have the role they should have and, if not, correct it; likewise, if a user is disabled, it will be enabled again. If a user is not present in the LDAP/AD groups anymore, it will be disabled or deleted, depending on the configuration.
The code is written for Python 3.6 (the default with SLES 15.x) but will also work with newer versions, and works against SUSE Manager 4.3 and 5.x.
Goals
Create a Python and/or Golang utility that will manage users in SUSE Manager based on LDAP/AD group membership. A configuration file defines which roles the members of a group will get.
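The core loop could look roughly like the hedged sketch below: read group members via ldap3, then create users and assign roles through the SUSE Manager XML-RPC API. auth.login, user.listUsers, user.create and user.addRole are documented API calls; the group-to-role mapping, connection details and login derivation are assumptions, not the tool's actual configuration format.

```python
# Hedged sketch of the idea: map LDAP group members to SUSE Manager users.
# GROUP_ROLES, hosts and credentials are placeholders, not the real config.
import xmlrpc.client
from ldap3 import Server, Connection

GROUP_ROLES = {  # assumed mapping: LDAP group DN -> SUSE Manager role
    "cn=suma-admins,ou=groups,dc=example,dc=com": "org_admin",
}

def group_members(conn, group_dn):
    conn.search(group_dn, "(objectClass=*)", attributes=["member"])
    return [str(m) for entry in conn.entries for m in entry.member]

ldap = Connection(Server("ldaps://ldap.example.com"),
                  "cn=reader,dc=example,dc=com", "secret", auto_bind=True)
suma = xmlrpc.client.ServerProxy("https://suma.example.com/rpc/api")
key = suma.auth.login("admin", "password")
try:
    existing = {u["login"] for u in suma.user.listUsers(key)}
    for group_dn, role in GROUP_ROLES.items():
        for member_dn in group_members(ldap, group_dn):
            login = member_dn.split(",")[0].split("=", 1)[1]  # cn=jdoe,... -> jdoe
            if login not in existing:
                # user.create(key, login, password, firstName, lastName, email)
                suma.user.create(key, login, "ChangeMe!", login, login,
                                 f"{login}@example.com")
            suma.user.addRole(key, login, role)
finally:
    suma.auth.logout(key)
```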
Installation
To install this project, perform the following steps:
- Be sure that Python 3.6 is installed, along with the python3-PyYAML module. The ldap3 module is needed as well:
```bash
zypper in python3 python3-PyYAML
pip install ldap3
```
- On the server or PC where it should run, create a directory; on Linux, e.g. /opt/sm-ldap-users.
- Copy all the files to this directory.
- Edit configsm.yaml. All parameters should be entered. Tip: for the LDAP information, it is best to use the same values as for SSSD.
- Be sure that the file sm-ldap-users.py is executable. It would be good to change the owner to root:root so that only root can read and execute it:
```bash
chmod 600 *
chmod 700 sm-ldap-users.py
chown root:root *
```
Usage
This is very simple. Once the configsm.yaml contains the correct information, executing the following will do the magic:
```bash
./sm-ldap-users.py
```
Repository link
https://github.com/mbrookhuis/sm-ldap-users
Saline (state deployment control and monitoring tool for SUSE Manager/Uyuni) by vizhestkov
Project Description
Saline is an addition for salt used in SUSE Manager/Uyuni, aimed at providing better control and visibility of state deployment in large scale environments.
In its current state, the published version can be used only as a Prometheus exporter and is missing some of the key features implemented in the PoC (not published). Right now it can provide metrics related to salt events and the state apply process on the minions, but there is no control over this process implemented yet.
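To illustrate the exporter part, here is a hedged sketch of the general technique: count salt events and expose them on a metrics endpoint. The event source is faked, and Saline's real metric names and its hook into the salt event bus are not shown here.

```python
# Sketch of the exporter idea: count salt events by kind and expose them to
# Prometheus. The event stream is a stand-in; metric names are assumptions.
import time
from prometheus_client import Counter, start_http_server

SALT_EVENTS = Counter("salt_events_total", "Salt events observed", ["kind"])

def fake_event_stream():
    # Stand-in for reading the salt event bus (e.g. via salt's Python API)
    while True:
        yield {"tag": "salt/job/20240101000000000000/ret/minion1"}
        time.sleep(5)

start_http_server(9102)  # Prometheus scrapes http://host:9102/metrics
for event in fake_event_stream():
    kind = event["tag"].split("/")[1]  # "job" in this example
    SALT_EVENTS.labels(kind=kind).inc()
```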
Continue with implementation of the missing features and improve the existing implementation:
- authentication (need to decide how it should or should not be related to salt auth)
- web service providing the control of states deployment
Goal for this Hackweek
Implement missing key features
Implement the tool for state deployment control with CLI
Resources
https://github.com/openSUSE/saline