Running openATTIC and DeepSea on Multiple Distributions
ABSTRACT
At least a millennium ago, there was this interestingly odd fellow living in a small village in what would one day become the middle of nowhere. No one really knows what this fellow was up to during his days, but we know very well we have zero clue what he was doing during his nights. What is really interesting about this fellow is that he lived in a small village, and the village itself was so remote that it was completely cut off from the world. The village became a self-sustaining cooperative organism, where every single individual contributed some of their time to a broader cause: the survival and advancement of their small village.
Even though records found in a given blog's posts point to all the villagers being bound into a single-consciousness organism at some point, and eventually being destroyed because one of them got the flu, for most of their existence they were individuals with different (and sometimes very specific) skills who tried to live their lives in the most boring way possible -- why is this relevant? We're not sure, but we are being told we don't need an abstract and should instead be quite specific about what we add to the project descriptions, instead of trying to create inspiring (but ultimately pointless) stories about why we need this.
MOTIVATION
In a nutshell, because "this is not NaNoWriMo", we want to have both openATTIC and DeepSea working on multiple distributions. Some would argue they already work on openSUSE and SLE, so multiple distros amiright??, but come on... -_-'
We need oA and DeepSea to be available in multiple distributions because
- we should share the love with the whole wide world; and,
- projects without strong, vibrant communities tend to stagnate, and die horrible deaths.
We don't want that. We want every single person in the world to be using openATTIC to manage and monitor Ceph, regardless of their distro affiliation card.
This will necessarily mean making sure oA and DeepSea work properly on at least CentOS and Ubuntu, as most of the Ceph community orbits around those two distributions - which is not surprising, given that Ceph packages were traditionally built and tested for Ubuntu, and eventually for CentOS.
JOIN US!
We have set up a Trello board [1] with tasks pending and currently being worked on, so feel free to take a look! If you'd like to join us, poke us through the hackweek project and let's try to figure out a way to sync up :)
[1] https://trello.com/b/K49DeC9D/hackweek-nov-2017
This project is part of:
Hack Week 16
Similar Projects
A CLI for Harvester by mohamed.belgaied
Harvester does not officially come with a CLI tool; the user is supposed to interact with Harvester mostly through the UI. Though it is theoretically possible to use kubectl to interact with Harvester, the manipulation of KubeVirt YAML objects is absolutely not user friendly. Inspired by tools like Multipass from Canonical, which easily and rapidly create one or multiple VMs, I began the development of Harvester CLI. Currently it works, but Harvester CLI needs some love to be up to date with Harvester v1.0.2 and needs some bug fixes and improvements as well.
Project Description
Harvester CLI is a command line interface tool written in Go, designed to simplify interfacing with a Harvester cluster as a user. It is especially useful for testing purposes as you can easily and rapidly create VMs in Harvester by providing a simple command such as:
harvester vm create my-vm --count 5
to create 5 VMs named my-vm-01 to my-vm-05.
Harvester CLI is functional but needs a number of improvements: bringing functionality up to date with Harvester v1.0.2 (some minor issues right now), modifying the default behaviour to create an openSUSE VM instead of an Ubuntu VM, solving some bugs, etc.
Github Repo for Harvester CLI: https://github.com/belgaied2/harvester-cli
Done in previous Hackweeks
- Create a Github actions pipeline to automatically integrate Harvester CLI to Homebrew repositories: DONE
- Automatically package Harvester CLI as openSUSE / Red Hat RPMs or DEBs: DONE
Goal for this Hackweek
The goal for this Hackweek is to bring Harvester CLI up-to-speed with latest Harvester versions (v1.3.X and v1.4.X), and improve the code quality as well as implement some simple features and bug fixes.
Some nice additions might be:
- Improve handling of namespaced objects
- Add features, such as network management or Load Balancer creation?
- Add more unit tests and, why not, e2e tests
- Improve CI
- Improve the overall code quality
- Test the program and create issues for it
Issue list is here: https://github.com/belgaied2/harvester-cli/issues
Resources
The project is written in Go and uses client-go, the Kubernetes Go client library, to communicate with the Harvester API (which is in fact a Kubernetes API).
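For illustration only, here is a minimal sketch of how a tool built on client-go can list Harvester's KubeVirt VirtualMachine objects through that Kubernetes API. The kubeconfig path and the "default" namespace are placeholders chosen for this example, not values taken from the Harvester CLI code.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig you would point kubectl at for Harvester
	// (the path is only an example).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/harvester.kubeconfig")
	if err != nil {
		panic(err)
	}

	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Harvester VMs are KubeVirt VirtualMachine custom resources.
	gvr := schema.GroupVersionResource{
		Group:    "kubevirt.io",
		Version:  "v1",
		Resource: "virtualmachines",
	}

	// List VMs in the "default" namespace (namespace chosen for the example).
	vms, err := client.Resource(gvr).Namespace("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, vm := range vms.Items {
		fmt.Println(vm.GetName())
	}
}
```

Harvester CLI wraps this kind of raw API access behind friendlier commands such as the harvester vm create example above.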
Welcome contributions include:
- Testing it and creating issues
- Documentation
- Go code improvement
What you might learn
Harvester CLI might be interesting to you if you want to learn more about:
- GitHub Actions
- Harvester as a SUSE Product
- Go programming language
- Kubernetes API
Switch software-o-o to parse repomd data by hennevogel
Currently software.opensuse.org search is using the OBS binary search for everything, even for packages inside the openSUSE distributions. Let's switch this to use repomd data from download.opensuse.org
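To make "parse repomd data" concrete, here is a rough Go sketch that reads a repository's repodata/repomd.xml and prints the metadata files it references. The Tumbleweed OSS URL is only an example repository, and this sketch only illustrates the data format, not how software-o-o would eventually implement the switch.

```go
package main

import (
	"encoding/xml"
	"fmt"
	"net/http"
)

// Minimal subset of the repomd.xml schema: each <data> entry points to one
// metadata file (primary, filelists, ...) inside the repository.
type repomd struct {
	Data []struct {
		Type     string `xml:"type,attr"`
		Location struct {
			Href string `xml:"href,attr"`
		} `xml:"location"`
	} `xml:"data"`
}

func main() {
	// Example repository; any repo on download.opensuse.org exposes the
	// same repodata/repomd.xml entry point.
	base := "https://download.opensuse.org/tumbleweed/repo/oss/"

	resp, err := http.Get(base + "repodata/repomd.xml")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var md repomd
	if err := xml.NewDecoder(resp.Body).Decode(&md); err != nil {
		panic(err)
	}

	// The "primary" entry lists all packages with names, versions and
	// descriptions, which is the data a package search needs.
	for _, d := range md.Data {
		fmt.Printf("%-12s %s%s\n", d.Type, base, d.Location.Href)
	}
}
```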
Packaging Mu on OBS by joeyli
Description
Packaging Microsoft Mu project
Goals
Packaging Mu RPM on OBS.
Resources
https://microsoft.github.io/mu/
https://github.com/microsoft/mu
https://github.com/microsoft/mu_basecore
https://github.com/microsoft/mu_tiano_platforms
https://github.com/microsoft/mu_tiano_plus
https://github.com/microsoft/mu_plus
Hackweek 22: Look at Microsoft Mu project
https://hackweek.opensuse.org/22/projects/look-at-microsoft-mu-project
https://drive.google.com/file/d/1BT31i7z3qh13adj9pdRz3lTUkqIsXvjY/view?usp=drive_link
Framework laptop integration by nkrapp
Project Description
Although openSUSE does run on the Framework laptops out of the box, there is still room to improve the experience. The ultimate goal is to get openSUSE onto the list of community-supported distros.
Goal for this Hackweek
The goal this year is to at least package all of the software and firmware for accessories like the embedded controller, the Framework 16 inputmodule and other tools. I already made some progress by packaging the inputmodule control software, but the firmware is still missing.
Resources
As I only have a Framework Laptop 16 and not a 13, I'm looking for people with the hardware who can help me test.
Progress:
Update 1:
The project lives under my home for now until I can get an independent project on OBS: Framework Laptop project
Also, the first package is already done: it's the CLI for the led-matrix spacer module on the Framework Laptop 16. I am also testing this myself, but any feedback or questions are welcome.
You can test the package on the Framework 16 by adding this repo and installing the package inputmodule-control
Update 2:
I finished packaging the python cli/gui for the inputmodule. It is using a bit of a hack because one of the dependencies (PySimpleGUI) recently switched to a noncommercial license so I cannot ship it. But now you can actually play the games on the led-matrix (the rust package doesn't include controls for the games). I'm also working on the Framework system tools now, which should be more interesting for Framework 13 users.
You can test the package on the Framework 16 by installing python311-framework16_inputmodule and then running "ledmatrixctl" from the command line.
Update 3:
I packaged the framework_tool, a general application for interacting with the system. You can find some detailed information on what it can do here. On my system everything related to the embedded controller functionality doesn't work though, so some help testing and debugging would be appreciated.
Update 4:
Today I finished the qmk interface, which gives you a CLI (and GUI) to configure your Framework 16 keyboard. Sadly the Python GUI is broken upstream, but I added the qmk_hid package with the CLI, and from my testing it works well.
Final Update:
All the interesting programs are now done, I decided to exclude the firmware for now since upstream also recommends using fwupd to update it. I will hack on more things related to the Framework Laptops in the future so if there are any ideas to improve the experience (or any bugs to report) feel free to message me about it.
As a final summary/help for everyone using a Framework Laptop who wants to use this software:
The source code for all packages can be found in repositories in the Framework organization on Github
All software can be installed from this repo (Tumbleweed)
The available packages are:
framework-inputmodule-control (FW16) - play with the inputmodules on your Framework 16 (b1-display, led-matrix, c1-minimal)
python-framework16_inputmodule (FW16) - same as inputmodule-control but is needed if you want to play and control the built-in games on the led-matrix (call with ledmatrixctl or ledmatrixgui)
framework_tool (FW13 and FW 16) - use to see and configure general things on your framework system. Commands using the embedded controller might not work, it looks like there are some problems with the kernel module used by the EC. Fixing this is out of scope for this hackweek but I am working on it
qmk_hid (FW16) - a cli to configure the FW16 qmk keyboard. Sadly the gui for this is broken upstream so only the cli is usable for now
Update Haskell ecosystem in Tumbleweed to GHC-9.10.x by psimons
Description
We are currently at GHC-9.8.x, which is a bit old. So I'd like to take a shot at the latest version of the compiler, GHC-9.10.x. This is going to be interesting because the new version requires major updates to all kinds of libraries and base packages, which typically means patching lots of packages to make them build again.
Goals
Have working builds of GHC-9.10.x and the required Haskell packages in devel:languages:haskell so that we can compile:
git-annex
pandoc
xmonad
cabal-install
Resources
- https://build.opensuse.org/project/show/devel:languages:haskell/
- https://github.com/opensuse-haskell/configuration/
- #discuss-haskell
- https://www.twitch.tv/peti343
Make more sense of openQA test results using AI by livdywan
Description
AI has the potential to help with something many of us spend a lot of time doing: making sense of openQA logs when a job fails.
User Story
Allison Average has a puzzled look on their face while staring at log files that seem to make little sense. Is this a known issue, something completely new or maybe related to infrastructure changes?
Goals
- Leverage a chat interface to help Allison
- Create a model from scratch based on data from openQA
- Proof of concept for automated analysis of openQA test results
Bonus
- Use AI to suggest solutions to merge conflicts
- This would need a merge conflict editor that can suggest solving the conflict
- Use image recognition for needles
Resources
Timeline
Day 1
- Conversing with open-webui to teach me how to create a model based on openQA test results
- Asking for example code using TensorFlow in Python
- Discussing log files to explore what to analyze
- Drafting a new project called Testimony (based on Implementing a containerized Python action) - the project name was also suggested by the assistant
Day 2
- Using NotebookLM (Gemini) to produce conversational versions of blog posts
- Researching the possibility of creating a project logo with AI
- Asking open-webui and people with prior experience for advice, and conducting a web search
Highlights
- I briefly tested and compared models to see if they would make me more productive. Between llama, gemma and mistral there was no amazing difference in the results for my case.
- Convincing the chat interface to produce code specific to my use case required very explicit instructions.
- Asking for advice on how to use open-webui itself better was frustratingly unfruitful both in trivial and more advanced regards.
- Documentation on the source materials used by LLMs, and tools for this purpose, seems virtually non-existent - specifically whether a logo can be generated based on particular licenses
Outcomes
- Chat-interface-supported development provides good starting points, and open-webui, being open source, is more flexible than Gemini, although currently some fancy features such as grounding and generated podcasts are missing.
- Allison still has to be very experienced with openQA to use a chat interface for test review. Publicly available system prompts would make that easier, though.
Run local LLMs with Ollama and explore possible integrations with Uyuni by PSuarezHernandez
Description
Using Ollama you can easily run different LLM models on your local computer. This project is about exploring Ollama, testing different LLMs and trying to fine-tune them, as well as exploring potential ways of integrating it with Uyuni.
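As a small illustration of what running LLMs locally looks like from a program's point of view, here is a hedged Go sketch that queries a locally running Ollama server through its HTTP API on the default port 11434. The model name is only an example and would need to be pulled beforehand with ollama pull; this is not part of the project plan, just a sketch.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Request body for Ollama's /api/generate endpoint. "llama3.2" is only
	// an example model and must have been pulled beforehand.
	reqBody, _ := json.Marshal(map[string]any{
		"model":  "llama3.2",
		"prompt": "Explain in one sentence what Uyuni is.",
		"stream": false, // single JSON response instead of a token stream
	})

	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(reqBody))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Only the "response" field is of interest here.
	var out struct {
		Response string `json:"response"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out.Response)
}
```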
Goals
- Explore Ollama
- Test different models
- Fine tuning
- Explore possible integration in Uyuni
Resources
- https://ollama.com/
- https://huggingface.co/
- https://apeatling.com/articles/part-2-building-your-training-data-for-fine-tuning/
SUSE AI Meets the Game Board by moio
Use tabletopgames.ai’s open source TAG and PyTAG frameworks to apply Statistical Forward Planning and Deep Reinforcement Learning to two board games of our own design. On an all-green, all-open source, all-AWS stack!
Results: Infrastructure Achievements
We successfully built and automated a containerized stack to support our AI experiments. This included:
- a Fully-Automated, One-Command, GPU-accelerated Kubernetes setup: we created an OpenTofu based script, tofu-tag, to deploy SUSE's RKE2 Kubernetes running on CUDA-enabled nodes in AWS, powered by openSUSE with GPU drivers and gpu-operator
- Containerization of the TAG and PyTAG frameworks: TAG (Tabletop AI Games) and PyTAG were patched for seamless deployment in containerized environments. We automated the container image creation process with GitHub Actions. Our forks (PRs upstream upcoming):
Running ./deploy.sh and voilà: Kubernetes running PyTAG (k9s screenshot above) with GPU acceleration (nvtop screenshot below).
Results: Game Design Insights
Our project focused on modeling and analyzing two card games of our own design within the TAG framework:
- Game Modeling: We implemented models for Dario's "Bamboo" and Silvio's "Totoro" and "R3" games, enabling AI agents to play thousands of games ...in minutes!
- AI-driven optimization: By analyzing statistical data on moves, strategies, and outcomes, we iteratively tweaked the game mechanics and rules to achieve better balance and player engagement.
- Advanced analytics: Leveraging AI agents with Monte Carlo Tree Search (MCTS) and random action selection, we compared performance metrics to identify optimal strategies and uncover opportunities for game refinement.
- more about Bamboo on Dario's site
- more about R3 on Silvio's site (Italian, translation coming)
- more about Totoro on Silvio's site
A family picture of our card games in progress. From the top: Bamboo, Totoro, R3
Results: Learning, Collaboration, and Innovation
Beyond technical accomplishments, the project showcased innovative approaches to coding, learning, and teamwork:
- "Trio programming" with AI assistance: Our "trio programming" approach—two developers and GitHub Copilot—was a standout success, especially in handling slightly-repetitive but not-quite-exactly-copypaste tasks. Java as a language tends to be verbose and we found it to be fitting particularly well.
- AI tools for reporting and documentation: We extensively used AI chatbots to streamline writing and reporting. (Including writing this report! ...but this note was added manually during edit!)
- GPU compute expertise: Overcoming challenges with CUDA drivers and cloud infrastructure deepened our understanding of GPU-accelerated workloads in the open-source ecosystem.
- Game design as a learning platform: By blending AI techniques with creative game design, we learned not only about AI strategies but also about making games fun, engaging, and balanced.
Last but not least we had a lot of fun! ...and this was definitely not a chatbot generated line!
The Context: AI + Board Games
Testing and adding GNU/Linux distributions on Uyuni by juliogonzalezgil
Join the Gitter channel! https://gitter.im/uyuni-project/hackweek
Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines. It also manages configuration, can run audits, build container images, monitor, and much more!
Currently there are a few distributions that are completely untested on Uyuni or SUSE Manager (AFAIK), or have not been tested for a long time. It could be interesting to know how hard it would be to work with them and, if possible, to fix whatever is broken.
For newcomers, the easiest distributions are those based on DEB or RPM packages. Distributions with other package formats are doable, but will require adapting the Python and Java code to be able to sync and analyze such packages (and if Salt does not support those packages, it will need changes as well). So if you want a distribution with another package format, make sure you are comfortable handling such changes.
No developer experience? No worries! We had non-developer contributors in the past, and we are ready to help as long as you are willing to learn. If you don't want to code at all, you can also help us prepare the documentation after someone else has the initial code ready, or you could help with testing :-)
The idea is to test Salt and Salt-SSH clients, but NOT traditional clients, which are deprecated.
To consider that a distribution has basic support, we should cover at least (points 3-6 are to be tested for both salt minions and salt ssh minions):
- Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
- Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
- Package management (install, remove, update...)
- Patching
- Applying any basic salt state (including a formula)
- Salt remote commands
- Bonus point: Java part for product identification, and monitoring enablement
- Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
- Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
- Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)
If something is breaking: we can try to fix it, but the main idea is to research how well supported it is right now. Beyond that, it's up to each project member how much to hack :-)
- If you don't have knowledge about some of the steps: ask the team
- If you still don't know what to do: switch to another distribution and keep testing.
This card is for EVERYONE, not just developers. Seriously! We had people from other teams who were not developers helping in the past, and they added support for Debian and for new SUSE Linux Enterprise and openSUSE Leap versions :-)
Pending
FUSS
FUSS is a complete GNU/Linux solution (server, client and desktop/standalone) based on Debian for managing an educational network.
https://fuss.bz.it/
Seems to be a Debian 12 derivative, so adding it could be quite easy.
- [W] Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
- [W] Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding OS to the bootstrap repository creator) --> Working for all 3 options (salt minion UI, salt minion bootstrap script and salt-ssh minion from the UI).
- [W] Package management (install, remove, update...) --> Installing a new package works, needs to test the rest.
- [I] Patching (if patch information is available, could require writing some code to parse it, but IIRC we have support for Ubuntu already). No patches detected. Do we support patches for Debian at all?
- [W] Applying any basic salt state (including a formula)
- [W] Salt remote commands
- [ ] Bonus point: Java part for product identification, and monitoring enablement
Symbol Relations by hli
Description
There are tools to build function call graphs based on parsing source code, for example cscope.
This project aims to achieve a similar goal by directly parsing the disassembly (i.e. objdump output) of a compiled binary. The assembly code is what the CPU sees, and is therefore more "direct". This may be useful in certain scenarios, such as gdb/crash debugging.
A detailed description and demos can be found in the README file.
Supports x86 for now (because my customers only use x86 machines), but support for other architectures can be added easily.
Tested with python3.6
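To give a flavour of the approach, here is a small standalone sketch (in Go, unlike the actual project, which is Python) showing how caller/callee edges can be extracted from objdump -d output on x86. The regular expressions are deliberate simplifications compared to what a real parser has to handle.

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"regexp"
)

var (
	// Start of a function's disassembly, e.g. "0000000000401136 <main>:".
	funcRe = regexp.MustCompile(`^[0-9a-f]+ <([^>]+)>:$`)
	// A call instruction, e.g. "call   401040 <puts@plt>" (also matches callq).
	callRe = regexp.MustCompile(`\bcall[a-z]*\s+[0-9a-f]+ <([^>+]+)`)
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: callgraph <binary>")
		os.Exit(1)
	}

	// Disassemble the binary; this is the same text objdump -d prints.
	out, err := exec.Command("objdump", "-d", os.Args[1]).Output()
	if err != nil {
		panic(err)
	}

	edges := map[string][]string{} // caller -> callees
	current := ""
	scanner := bufio.NewScanner(bytes.NewReader(out))
	for scanner.Scan() {
		line := scanner.Text()
		if m := funcRe.FindStringSubmatch(line); m != nil {
			current = m[1] // entered a new function
		} else if m := callRe.FindStringSubmatch(line); m != nil && current != "" {
			edges[current] = append(edges[current], m[1])
		}
	}

	for caller, callees := range edges {
		fmt.Println(caller, "->", callees)
	}
}
```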
Goals
Any comments are welcome.
Resources
https://github.com/lhb-cafe/SymbolRelations
symrellib.py: implements the symbol relation graph and the disassembly parser
symrel_tracer*.py: implements tracing (-t option)
symrel.py: "cli parser"