Since SUSE Manager doesn't scale out, and stacking it into yet another pyramid of susemanagers won't help here, real architectural changes need to be made to achieve true scale-out of this product. This hackweek project is about how to turn SUSE Manager into a cluster.

Areas to be tackled:

  1. Distributed FS for storage
  2. Distributed messaging bus
  3. Distributed KV for metadata
  4. Control Node prototype (this component turns a SUSE Manager into a stateless-ish event-driven node, so that in the future SUSE Manager can be deployed into a container -> Kubernetes)
  5. Cluster Director prototype (this component orchestrates the entire cluster via events)
  6. API Gateway, providing 100% compatible API across all SUSE Manager nodes (this is done, demoed at SUMA Winter Summit 2020)
  7. Slightly modified SUSE Manager peripherals (reposync, DB, etc.).
  8. Client daemon, which is used to bind a single registered system to a cluster node according to the Cluster Director service (on shrink, grow, rebalance and disaster recovery events).

The idea is to solve a set of core problems in turning the technologically outdated Uyuni Server / SUMA into a modern cloud-native cluster node, so that after a reasonable time this could be turned into a real product for SUSE Manager.


Progress

Day One (Monday, 10 Feb)

Storm seems over. Realised this website doesn't allow editing comments :astonished:.

Got an initial Cluster Daemon working that runs a REST API and talks to the distributed KV database.

Got a working Client Daemon for every Client System. So far it can:

  • Pool the Client System to the cluster for staging
  • Talk to Cluster Director (CD) and get the status
  • Switch/reconfigure Salt Minion according to the CD's directives

Got an initial tool (Python) that calls the Cluster API. So far it is too simple to describe.

Got an initial Cluster Director with an OpenAPI spec (Swagger) running.

TODO:

  • [x] Running distributed K/V database
    • [x] Verify it is buildable/package-able
  • [x] Running distributed message bus
    • [x] Verify it is buildable/package-able
  • [x] Manage cluster Zones
  • [x] mgr-clbd-admin tool
    • [x] List nodes
    • [x] List zones
    • [x] Format JSON input
    • [x] Call API via JSON input
  • [x] Add OpenAPI to Cluster Director daemon
    • [x] Swagger UI running
    • [x] APIs are automatically generated (updated Makefile)
    • [x] I can try that in browser
  • [x] Describe all the APIs I've got so far

The day is over. Did many refactorings, found that Gin doesn't parse form data from the request body on DELETE, but solved it. Haven't finished Zones management yet (which is damn easy now, but argh, still!).
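For the record, a minimal sketch of what that workaround boils down to (route and field names are made up for illustration, not the actual mgr-clbd code): net/http only reads the request body into the form for POST, PUT and PATCH, so on DELETE the body has to be parsed explicitly.

    package main

    import (
        "io"
        "net/http"
        "net/url"

        "github.com/gin-gonic/gin"
    )

    func main() {
        r := gin.Default()

        // DELETE /zones with a form-encoded body such as "name=dmz".
        // Gin relies on net/http's ParseForm, which skips the body on DELETE,
        // so read and parse the body manually.
        r.DELETE("/zones", func(c *gin.Context) {
            body, err := io.ReadAll(c.Request.Body)
            if err != nil {
                c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
                return
            }
            form, err := url.ParseQuery(string(body))
            if err != nil {
                c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
                return
            }
            c.JSON(http.StatusOK, gin.H{"deleted": form.Get("name")})
        })

        r.Run(":8080")
    }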


Day Two (Tuesday, 11 Feb)

Turned mgr-clbd-admin into a repo subproject of the Cluster Director daemon. There should be a set of common Jinja-based formatters for those common return types, but that's "bells-n-whistles" I will take care of later. Right now a raw JSON dump is good 'nuff.

Update (15:20): Group photo taken, many refactorings, Zones management done. Time to write the Node Controller.

Done (partially):

  • [*] Node Controller (initial)
    • [x] "Wrap around head" overall code design
    • [x] SSH communication over RSA keypair to the staging Cluster Node
    • [x] Bi-directional pub/sub (initial)
    • [*] Configuration file
    • [x] SSH check remote host
    • [x] SSH disable host verification option
    • [*] Events are emitted from an arbitrary SUSE Manager to the Cluster bus via Node Controller (simple-n-stupid PoC ATM)
    • [*] Commands from the Cluster Director received and mapped to the emitter facility (write one for the XML-RPC APIs)
    • [*] Cluster Node staging
      • [x] Initial overall PoC code
      • [x] Execute nanostates[1]
        • [x] SSH Runner runs a nanostate on a remote machine
        • [x] Refactor SSH Runner output type before it is too late. It is complex enough to be very bad as map[interface{}]interface{}.
        • [x] Implement local runner (for a client on localhost)
      • [*] Write a few nanostate scenarios to complete Node staging:
        • [*] Reset/prepare PostgreSQL database
        • [*] Prepare/mount distributed File System

The NodeController is about to execute nanostates. A nanostate is like "nano-ansible" in your pocket, a fusion of Salt and Ansible ideas into a small package, which is not intended to be as broad a CMS as those two. Essentially, it just runs a series of commands over SSH against a specific release of a supported Cluster Node (AKA SUSE Manager) and does some rudimentary "things" on it once it is installed and set up. These are things like getting the machine-id, backing up some configuration files, performing standard operations on the database, starting/stopping/restarting some services, etc.: everything that can be done via the plain command line and is mostly used for informational purposes.

This alone removes any need for an internal configuration management system just to stage a Cluster Node and add it to the swarm.
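For illustration, a rough sketch of what such an SSH command runner can look like with golang.org/x/crypto/ssh (host, user and function names are made up; the real NodeController code is more involved):

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // RunCommands executes a series of named shell commands on a remote host and
    // returns their outputs keyed by name, roughly what a "shell" nanostate does.
    func RunCommands(addr, user, keyPath string, cmds map[string]string) (map[string]string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the "disable host verification" option
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return nil, err
        }
        defer client.Close()

        out := make(map[string]string)
        for name, cmd := range cmds {
            sess, err := client.NewSession() // one session per command
            if err != nil {
                return nil, err
            }
            buf, err := sess.CombinedOutput(cmd)
            sess.Close()
            if err != nil {
                return nil, fmt.Errorf("%s: %w", name, err)
            }
            out[name] = string(buf)
        }
        return out, nil
    }

    func main() {
        res, err := RunCommands("node1.example.com:22", "root", "/root/.ssh/id_rsa",
            map[string]string{"get-id": "cat /etc/machine-id", "uptime": "uptime"})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(res)
    }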


Day Three (Wednesday, 12 Feb)

An Unwanted Accident... (almost)

While working on the Cluster Node staging code and playing with SSH sessions and channels, I accidentally "rewrote" Salt, combining the best practices of both Salt and Ansible. Of course, it is far, far, far away from what Salt can do today.

Or... is it? Let's see.

So the main plan was to manage cluster nodes and their components and nothing else. For that, nobody needs a full-blown configuration management infrastructure, right? And so it happens that the Cluster has a Cluster Director (which is supposed to scale out on its own), but it is essentially what a Salt Master would be. And, consequently, since it is hooked up to an abstract Message Bus (you can write an adapter connector for Kafka if you need many millions, but so far I am sure NATS will do for millions of #susemanagers), it talks to a Node Controller on each #susemanager node, which is... right, an analogy to a Salt Minion. And here you go: bi-directional pub/sub that can perform something when a particular message arrives.
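As a rough sketch, here is how that bus wiring could look with NATS and the nats.go client (subjects and payloads are invented for illustration, not the actual protocol):

    package main

    import (
        "log"
        "time"

        "github.com/nats-io/nats.go"
    )

    func main() {
        nc, err := nats.Connect(nats.DefaultURL)
        if err != nil {
            log.Fatal(err)
        }
        defer nc.Close()

        // Node Controller side: listen for directives addressed to all nodes
        // and report the outcome back to the Cluster Director.
        nc.Subscribe("cluster.nodes.public", func(m *nats.Msg) {
            log.Printf("directive from Cluster Director: %s", m.Data)
            nc.Publish("cluster.director.events", []byte("node-42: done"))
        })

        // Cluster Director side: collect events coming back from the nodes.
        nc.Subscribe("cluster.director.events", func(m *nats.Msg) {
            log.Printf("event from node: %s", m.Data)
        })

        // Cluster Director sends a directive to everyone.
        nc.Publish("cluster.nodes.public", []byte("refresh-channels"))

        time.Sleep(time.Second) // give the async handlers a moment in this toy example
    }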

Security? Everything is happening over TLS anyway. But then another layer is based on pure OpenSSL: whatever the Cluster Director needs to pass secretly to a specific Cluster Node, it sends to a channel where every message is also encrypted on its own. Each Cluster Node is subscribed to TWO channels:

  1. General public
  2. Private (by key fingerprint)

The Cluster Director sends everything in plain text to the public channel, but secret communication runs over the private channel, encrypted with the public key of the recipient. The complexity of returners, pillars, etc. is no longer needed this way.
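A sketch of that per-message encryption idea for the private channel, here with Go's crypto/rsa and OAEP (the actual prototype's key handling and cipher choice may differ; large payloads would need a hybrid scheme):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/sha256"
        "fmt"
    )

    // encryptForNode encrypts a message so that only the Cluster Node holding
    // the matching private key can read it on its private channel.
    func encryptForNode(nodePub *rsa.PublicKey, msg []byte) ([]byte, error) {
        return rsa.EncryptOAEP(sha256.New(), rand.Reader, nodePub, msg, nil)
    }

    func decryptOnNode(nodePriv *rsa.PrivateKey, ciphertext []byte) ([]byte, error) {
        return rsa.DecryptOAEP(sha256.New(), rand.Reader, nodePriv, ciphertext, nil)
    }

    func main() {
        // Stand-in for the node's keypair; in reality it would come from the node's key material.
        priv, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }

        ct, _ := encryptForNode(&priv.PublicKey, []byte("secret directive for node-42"))
        pt, _ := decryptOnNode(priv, ct)
        fmt.Println(string(pt))
    }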

It didn't take me long to craft a simple architecture that foresees even embedding Lua or Starlight into the state system. In fact, it would be even better than Salt, because one wouldn't have if/else imperative clutter in the declarative state (!). What does a state look like? Currently like this (again, this is a hackweek of one solo "cowboy", not even pre-alpha):

    id: some-test
    description: This is a test state
    state:
      gather-machine-summary:
        - shell:
            - get-id: "cat /etc/machine-id"
            - uptime: "uptime"
            - hostname: "hostname"

The shell is a module I just wrote. It takes a series of commands and runs them, returning something like this:

{ "get-id": "12e43783e54f25bb3f505cfeeff94045", "upteime": "13:54:07 up 18:59, 1 user, load average: 0,10, 0,18, 0,17", "hostname": "rabbit" }

It also happens that all of the above can be run locally or remotely. Or one-to-many remotely over SSH.
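That "local or remote" split can be expressed as a tiny runner interface; here is a hedged sketch (type names are illustrative, not the real runner code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Runner executes one named shell command and returns its output, so a
    // nanostate doesn't care whether it runs locally or over SSH.
    type Runner interface {
        Run(name, cmd string) (string, error)
    }

    // LocalRunner runs commands on localhost.
    type LocalRunner struct{}

    func (LocalRunner) Run(name, cmd string) (string, error) {
        out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
        if err != nil {
            return "", fmt.Errorf("%s: %w", name, err)
        }
        return string(out), nil
    }

    // An SSHRunner would hold an ssh.Client and satisfy the same interface by
    // running the command remotely (see the SSH sketch above).

    func main() {
        var r Runner = LocalRunner{}
        out, err := r.Run("get-id", "cat /etc/machine-id")
        fmt.Println(out, err)
    }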

But... Wait a sec. Only $SHELL? Can it do a bit more than this? Wait-wait. So if I can already run arbitrary stuff (which is what Ansible and Salt do anyway), then what stops me from calling pure Ansible binary modules and just getting access to that pile of crazy modules they've already got working? Nothing! Just scp them over and stockpile them on the client. In fact, just install the entire Ansible and run it there as is. That is the same way Saltsible works, after all.
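As far as I understand the convention, Ansible binary modules receive the path to a JSON args file as their only argument and print a JSON result to stdout, so calling one from Go is roughly this (a hedged sketch; the module path and helper names are made up):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os"
        "os/exec"
    )

    // runBinaryModule executes an already-copied Ansible binary module with the
    // given arguments and returns its decoded JSON response.
    func runBinaryModule(modulePath string, args map[string]interface{}) (map[string]interface{}, error) {
        argsFile, err := os.CreateTemp("", "ansible-args-*.json")
        if err != nil {
            return nil, err
        }
        defer os.Remove(argsFile.Name())

        if err := json.NewEncoder(argsFile).Encode(args); err != nil {
            return nil, err
        }
        argsFile.Close()

        // Binary modules expect the path to the JSON args file as argv[1]
        // and report their result as JSON on stdout.
        out, err := exec.Command(modulePath, argsFile.Name()).Output()
        if err != nil {
            return nil, err
        }

        var result map[string]interface{}
        if err := json.Unmarshal(out, &result); err != nil {
            return nil, err
        }
        return result, nil
    }

    func main() {
        res, err := runBinaryModule("/tmp/helloworld", map[string]interface{}{"name": "Cluster"})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(res)
    }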

Basically, I ended up with a message-driven cluster architecture that happens to be compatible with Ansible modules. Kind of. Not yet completely, but bringing it to 100% compatibility is a no-brainer, it just needs a few more days to get done. Which is not my goal or priority at the moment anyway.

So then imagine that, with Lua or Starlight (a Python dialect) embedded, you could do the above another way:

    import:
      - sshfunctions
    id: some-test
    description: This is a test state
    state:
      gather-machine-summary:
        - shell:
            - {getid()}
            - {getuptime()}
            - {gethostname()}

These would be functions somewhere in a file ssh_functions.lua. The same functions could also do this:

    import:
      - sshfunctions
    id: some-test
    description: This is a test state
    state:
      create-user:
        - shell:
            - {adduser(uid=getsaltpillar("uid"))}

Well, you get the idea. Anyway, I will focus on the unchecked boxes from yesterday and finish at least something that works, and thus won't crank this up to eleven. At least not at the moment.

Update (17:40) Nanostates are happily running the scenarios passed to them on remote machines. Sort of declarative orchestration. Not yet asynchronous. A few minutes left, maybe I will implement the local runner?..

Update (somewhere evening) Refactored the runners and implemented the SSH runner as well as the local runner. Nah, but the possibility of running Ansible modules FOR FREE is still bugging me! Instead of reimplementing PostgreSQL start/stop and distributed file system prepare/mount as shell command lines, how much time would it take to hook up the whole Ansible module system and use it from the nanostates? It is a Hackweek, after all!


Day Four (Thursday, 13 Feb)

What can one do in basically four days, starting from almost nothing? A lot! So far, my leftovers from yesterday:

TODO:

  • [x] Integrate Ansible
    • [x] Runs binary modules
    • [x] Runs Python modules
  • [ ] Write a few nanostate scenarios to complete Node staging:
    • [ ] Reset/prepare PostgreSQL database
    • [ ] Prepare/mount distributed File System
  • [ ] Cluster Node staging
    • [ ] Integrate staging part together with the Node Controller
  • [ ] Node Controller (initial)
    • [x] Configuration file
    • [x] PostgreSQL event emitter
    • [x] Events are emitted from an arbitrary SUSE Manager to the Cluster bus via Node Controller (simple-n-stupid PoC ATM)
    • [ ] Commands from the Cluster Director received and mapped to the emitter facility (write one for the XML-RPC APIs)

OK, well... Ansible would certainly be the right next step to look at, but ATM I'd rather save time and focus on emitting messages from the PostgreSQL database, which sits deep inside SUSE Manager's guts. So toss in a few basic shell commands for Node staging and that's it.

Update (12:00) SCNR...

Took this official Ansible module. Then wrote a nanostate snippet:

    - ansible.helloworld:
        name: "Cluster"

Result:

{ "Module": "ansible.helloworld", "Errcode": 0, "Errmsg": "", "Response": [ { "Host": "localhost", "Response": { "ansible.helloworld": { "Stdout": "", "Stderr": "", "Errmsg": "", "Errcode": 0, "Json": { "changed": false, "failed": false, "msg": "Hello, Cluster!" } } } } ] }

Of course, this inherited Ansible's main illness: dont_run_this_twice.yaml. Calling nanostates "nanostates" is too loud at the moment: they don't check the state, they just fire whatever is in them "into the woods". But the goal of the project is not to write another Configuration Management, n̶e̶i̶t̶h̶e̶r̶ ̶s̶c̶a̶l̶e̶-̶o̶u̶t̶ ̶A̶n̶s̶i̶b̶l̶e̶ (oops, that just happened unplanned), nor to fix Ansible's imperative behaviour and build declarative runners around it (which is not really a problem, BTW).

Oh well. Fun. Now the messaging bus story: Postgres, here I come!

Update (somewhere evening) PostgreSQL is happily spitting out every change to its tables, from wherever it happens in the Uyuni Server. The XML-RPC APIs are very slow, on the other hand. I was exploring ways to implement plugins in Go, so that I don't have to bundle everything into one binary. The gRPC way is the only reliable and nicely decoupled one. The "native" Go plugins are an interesting tech preview and work nicely (as long as the same $GOPATH and the same compiler are used), but sadly they still seem quite far away from production status: plugins are supposed to be written by different vendors, and that does not seem to be supported right now.
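One way to get those table changes out of PostgreSQL is LISTEN/NOTIFY fed by triggers; here is a minimal sketch of the listening side with github.com/lib/pq (the channel name, DSN and trigger are my assumptions, not necessarily what the Node Controller ends up using):

    package main

    import (
        "log"
        "time"

        "github.com/lib/pq"
    )

    func main() {
        dsn := "postgres://spacewalk:spacewalk@localhost/susemanager?sslmode=disable"

        // The listener reconnects on its own; the callback only logs connection events.
        listener := pq.NewListener(dsn, 10*time.Second, time.Minute,
            func(ev pq.ListenerEventType, err error) {
                if err != nil {
                    log.Println("listener event:", ev, err)
                }
            })

        // A trigger on the interesting tables would do something like:
        //   PERFORM pg_notify('uyuni_changes', row_to_json(NEW)::text);
        if err := listener.Listen("uyuni_changes"); err != nil {
            log.Fatal(err)
        }

        for n := range listener.Notify {
            if n == nil {
                continue // a nil notification is delivered after a reconnect
            }
            // Here the Node Controller would turn the payload into a bus message.
            log.Printf("table change: %s", n.Extra)
        }
    }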

Right now I am solving the problem of how the Node Controller will reconcile a network transaction across the entire cluster, making 100% sure that all-or-none of the nodes have been updated. As always, there are several ways of doing it, but I have to find out which one suits best.


The Last Day of The Hackweek (Friday, 14 Feb)

At least it isn't Friday the 13th. Starting from touch main.go, here is what I've got after these days:

Someone Did It

But I chose it and put it together. I chose it because I can also support and bugfix it.

  • Running an equivalent to etcd, which scales out way better than etcd. Check out TiKV. If you know Rust, you will have lots of fun.
  • Running a MySQL compatibility layer on top of it. Performance is about 10x slower than MySQL's InnoDB, but in this case performance isn't an issue at all. What matters is that this thing scales out infinitely; it is just a bit space-hungry. Check out TiDB.
  • Running distributed storage and a mountable filesystem. If the SES guys one day support "SUSE Manager on Ceph nodes", that will be just fantastic. Until then, other solutions: check out SeaweedFS and IPFS. IPFS is already running a Tumbleweed repo at SUSE.
  • Running a message bus that is supposed to scale out the same way Apache Kafka does. The reason not to use Apache Kafka is very trivial: its infrastructure is much harder to maintain. But that alone is not a reason, and hard infrastructure maintenance on its own does not rule Kafka out! You want it? No problemo: just add another adapter for Apache Kafka and swap out the currently used NATS. In fact, NATS perfectly co-exists with Kafka in some infrastructures. Check out NATS.

I Did It

  • A running Client System Daemon (i.e. it "runs on the registered client system") whose main role is to ask the Cluster which node to use, automatically reconfigure the Salt Minion and other configuration, and then re-point the client system to a new Cluster Node (AKA Uyuni Server) if that is needed. It also recovers the client system back to the cluster once its Cluster Node has gone up in smoke.
  • A "one to many and many to one" API Gateway, which allows spacecmd and similar tools to "just work" across multiple nodes. Granted, it wasn't written during this Hackweek, and it is 99.999% compatible (I was too lazy to go back and implement overloaded XML-RPC signatures for REST, and I am returning nil instead of an empty dictionary; probably a bug, but... meh... later). This thing also runs Swagger UI with OpenAPI specs covering the whole XML-RPC API of SUSE Manager.
  • A very basic Cluster Director that can manage zones and add cluster nodes. It also runs an OpenAPI/Swagger UI. Very basic, because it has no features yet. But that doesn't mean it doesn't have a more or less solid architecture.
  • A library that runs Ansible in Salt fashion (via bi-directional pub/sub, which rules out returners/pillars as unnecessary). I am going to use that internally instead of either Salt or Ansible on their own. Again, it is simpler to call an Ansible module reusing the existing scaled-out infrastructure, rather than run and take care of yet more components. And I don't have to maintain Ansible: it is perfectly well tested anyway.
  • A library that resembles SaltSSH by running Ansible modules (both Python and binary). I consider that I've done it because I could (doesn't mean I should).
  • A very unfinished initial Node Controller Daemon, which listens to Uyuni Server events and emits messages to the bus for further operations.

Phew. Not bad for basically four days, I'd say! All that stuff I wrote in Go. I'd say it does make sense to use that language if you don't want to write Java or Python.

What are my nearest plans?

  • Finish the "loop" and have all components running, talking to each other, client nodes are transfered seamlessly.
  • Achieve network transaction on updating Cluster Nodes.
  • Write some Ansible modules, likely in plain old C and Rust, add their caching on the client so it will perform well, add seamless module updates. Generic Ansible doesn't hurt, but I don't need it for Cluster needs.
  • Modular/pluggable system in Go, so this whole project can be adaptable to other products, not just SUSE Manager.

Presentation Slides

I've put together an outline of all that in my Google Drive. Enjoy.

...and stay tuned

Looking for hackers with the skills:

distributedsystems cluster cloud kubernetes golang go rust

This project is part of:

Hack Week 19


  • Comments

    • bmaryniuk
      about 4 years ago by bmaryniuk | Reply

      Day One

      TODO:

      • [x] Running distributed K/V database
        • [x] Verify it is buildable/package-able
      • [x] Running distributed message bus
        • [x] Verify it is buildable/package-able
      • [ ] Manage cluster Zones
      • [ ]

      Summary

      Got running Client Daemon. It can:

      • Talk to Cluster Director (CD) and ask for status
      • Switch/reconfigure Salt Minion according to the CD's directives

    • keichwa
      about 4 years ago by keichwa | Reply

      Yes, and presentation slides!
