An invention by RBrownSUSE
Right now internal SLE development is still organised & structured around the concept of 'Milestones'. Schedules are defined, deadlines are set, and off we go making Alpha 1, 2, 3, Beta 1, 2, 3, RCs, and so on.
Meanwhile, QA has evolved, and with openQA and other automated tooling we are increasingly testing SLE in a more agile, rolling model, testing every single build as soon as it's produced by OBS, and just paying extra attention to the Milestones with additional manual testing.
But this mismatch is starting to cause some problems. QA are increasingly filing bugs against new builds produced since the latest Milestone, and developers cannot use that Milestone to reproduce them. Developers can get the latest build, but that takes time, and the latest build might have introduced new issues which prohibit testing, because there is no 'release gating' to ensure that the latest SLE builds are at least of a usable level of quality (they normally are, but there are no guarantees).
This is a particular problem during heavy development periods, when the SLE codebase often moves as fast as, or even a little faster than, Tumbleweed.
However, this is a problem any openSUSEr knows well: openSUSE Factory used to be like that, and the solution was Tumbleweed, prohibiting human use of Factory and using openQA not only as an indicator of quality but tying it to an explicit release process that gates repositories and ISOs before they are published as Tumbleweed.
This Hackweek project will experiment with adopting & adapting Tumbleweed-style release tooling and processes to the SLE codebase. If all goes according to plan, we might end up with a constantly usable repository and set of ISOs that QA, developers, and perhaps one day even external beta testers can use as a reliable 'last known good' SLE development version, regardless of the age of the latest SLE Milestone or the status of the very latest SLE builds.
This project is part of:
Hack Week 14
Comments
over 8 years ago by okurz
What was achieved
The TumbleSLE webpage is live on tumblesle.qa.
TumbleSLE follows the motto on our shirts for the current Hack Week, "putting the pieces together". You all certainly know Tumbleweed, the rolling distribution of openSUSE, and you should know SLE, the SUSE Linux Enterprise family. TumbleSLE is, at any time, the latest good build of SLES, selected based on openQA test results.
How does it work
The script tumblesle-release, which governs the automatic release process, is available from okurz/openqa_review, extending my Python package "openqa-review", which is already used to support the daily openQA review of SLE products on o.s.d. The script continuously looks for new builds of SLES and compares them against the last released TumbleSLE version. If a new build is found that does not introduce any regressions (newly failing tests), it is released as the new version of TumbleSLE.
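In pseudocode, the gating works roughly like this; a minimal Python sketch, not the real tumblesle-release (the openQA query parameters follow the public API, while `find_latest_build` and `publish` are hypothetical helpers):

```python
import time

import requests

OPENQA = "http://openqa.suse.de"  # openQA instance to query
GROUP_ID = 25                     # job group of the tested SLES product


def results_by_test(build):
    """Map test name -> result for all finished jobs of one build."""
    jobs = requests.get(f"{OPENQA}/api/v1/jobs",
                        params={"groupid": GROUP_ID, "build": build,
                                "state": "done"}).json()["jobs"]
    return {job["test"]: job["result"] for job in jobs}


def has_regression(old, new):
    """A regression is a test that passed before but fails in the new build."""
    return any(result == "failed" and old.get(test) == "passed"
               for test, result in new.items())


def find_latest_build():
    """Hypothetical helper: newest SLES build with finished openQA jobs."""
    ...


def publish(build):
    """Hypothetical helper: sync repo and ISOs of `build` to the TumbleSLE host."""
    ...


released = "0401"  # last released TumbleSLE build (example value)
while True:
    latest = find_latest_build()
    if latest and latest != released and not has_regression(
            results_by_test(released), results_by_test(latest)):
        publish(latest)
        released = latest
    time.sleep(600)  # check for new builds every ten minutes
```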
Richard set up a beautiful webpage on tumblesle.qa which is updated automatically whenever the corresponding GitLab repo changes.
Technical details of system administration
TumbleSLE is stored and made available on a virtual machine in the QA department named tumblesle.qa.suse.de. The automatic webpage update on changes in the GitLab repo is done by a cronjob of user wwwrun on this machine. The script tumblesle-release runs continuously as user tumblesle; another cronjob of that user makes sure the script is still running and restarts it if not.
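As a rough illustration, the two cronjobs could look like this (the intervals and the `git pull` step are assumptions, not the actual crontab contents):

```
# crontab of user wwwrun: pull the jekyll sources; the repo's post-merge
# hook then rebuilds the webpage
*/5 * * * * git -C /srv/jekyll-source pull --quiet

# crontab of user tumblesle: restart the release script if it is not running
*/15 * * * * pgrep -f tumblesle-release >/dev/null || nohup tumblesle-release &
```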
over 8 years ago by okurz
The jekyll sources are updated by `/srv/jekyll-source/.git/hooks/post-merge` whenever the jekyll sources change. rbrown added reading the build number into the webpage generation process by adding an appropriate template call. I call `tumblesle-release` now from a wrapper script with all the parameters in `/home/tumblesle/bin/tumblesle-release-server-12sp2-x86_64`, which calls `update_jekyll` as a post-release hook.

Content of `/home/tumblesle/bin/tumblesle-release-server-12sp2-x86_64`:

```
flock -n /tmp/tumblesle_release.lock -c "tumblesle-release --openqa-host http://openqa.suse.de --group-id 25 --product 'Server' --src openqa:/var/lib/openqa/factory/ --match '*SP2*Server*x86_64*' --match-hdd 'SLES-12-SP2-x86_64*' -vvvv --post-release-hook /home/tumblesle/bin/update_jekyll" 2>&1 >> /var/log/tumblesle/tumblesle-release.log
```

Content of `/home/tumblesle/bin/update_jekyll`:

```
sudo -u wwwrun /srv/jekyll-source/.git/hooks/post-merge
```

Added with `visudo`:

```
tumblesle ALL=(wwwrun) NOPASSWD: /srv/jekyll-source/.git/hooks/post-merge
```
Now, wouldn't it be better to save all those system setup instructions as Salt states? ;-)
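For example, a minimal sketch of such a Salt state, covering the sudoers rule and the watchdog cronjob (untested; state IDs and file names are made up):

```yaml
tumblesle_sudoers:
  file.managed:
    - name: /etc/sudoers.d/tumblesle
    - contents: 'tumblesle ALL=(wwwrun) NOPASSWD: /srv/jekyll-source/.git/hooks/post-merge'
    - mode: '0440'
    - check_cmd: /usr/sbin/visudo -c -f

tumblesle_watchdog:
  cron.present:
    - name: pgrep -f tumblesle-release >/dev/null || nohup /home/tumblesle/bin/tumblesle-release-server-12sp2-x86_64 &
    - user: tumblesle
    - minute: '*/15'
```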
Similar Projects
Run local LLMs with Ollama and explore possible integrations with Uyuni by PSuarezHernandez
Description
Using Ollama you can easily run different LLM models on your local computer. This project is about exploring Ollama, testing different LLMs and trying to fine-tune them, as well as exploring potential ways of integrating Ollama with Uyuni.
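For example, with the Ollama server running locally and a model pulled (e.g. `ollama pull llama3`), it can be queried over its REST API; a small Python sketch (model and prompt are just examples):

```python
import requests

# Ollama listens on localhost:11434 by default
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Summarize what Uyuni does in one sentence.",
        "stream": False,  # one JSON object instead of a streamed response
    },
)
print(resp.json()["response"])
```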
Goals
- Explore Ollama
- Test different models
- Fine-tune models
- Explore possible integration with Uyuni
Resources
- https://ollama.com/
- https://huggingface.co/
- https://apeatling.com/articles/part-2-building-your-training-data-for-fine-tuning/
SUSE AI Meets the Game Board by moio
Use tabletopgames.ai’s open source TAG and PyTAG frameworks to apply Statistical Forward Planning and Deep Reinforcement Learning to two board games of our own design. On an all-green, all-open source, all-AWS stack!
AI + Board Games
Board games have long been fertile ground for AI innovation, pushing the boundaries of capabilities such as strategy, adaptability, and real-time decision-making - from Deep Blue's chess mastery to AlphaZero’s domination of Go. Games aren’t just fun: they’re complex, dynamic problems that often mirror real-world challenges, making them interesting from an engineering perspective.
As avid board gamers, aspiring board game designers, and engineers with careers in open source infrastructure, we’re excited to dive into the latest AI techniques first-hand.
Our goal is to develop an all-open-source, all-green AWS-based stack powered by some serious hardware to drive our board game experiments forward!
Project Goals
Set Up the Stack:
- Install and configure the TAG and PyTAG frameworks on SUSE Linux Enterprise Base Container Images.
- Integrate with the SUSE AI stack for GPU-accelerated training on AWS.
- Validate a sample GPU-accelerated PyTAG workload on SUSE AI.
- Ensure the setup is entirely repeatable with Terraform and configuration scripts, documenting results along the way.
Design and Implement AI Agents:
- Develop AI agents for the two board games, incorporating Statistical Forward Planning and Deep Reinforcement Learning techniques.
- Fine-tune model parameters to optimize game-playing performance.
- Document the advantages and limitations of each technique.
Test, Analyze, and Refine:
- Conduct AI vs. AI and AI vs. human matches to evaluate agent strategies and performance.
- Record insights, document learning outcomes, and refine models based on real-world gameplay.
Technical Stack
- Frameworks: TAG and PyTAG for AI agent development
- Platform: SUSE AI
- Tools: AWS for high-performance GPU acceleration
Why This Project Matters
This project not only deepens our understanding of AI techniques through hands-on practice but also showcases the power and flexibility of SUSE's open-source infrastructure for supporting high-level AI projects. By building on an all-open-source stack, we aim to create a pathway for other developers and AI enthusiasts to explore, experiment, and deploy their own innovative projects within the open-source space.
Our Motivation
We believe hands-on experimentation is the best teacher.
Combining our engineering backgrounds with our passion for board games, we’ll explore AI in a way that’s both challenging and creatively rewarding. Our ultimate goal? To hack an AI agent that’s as strategic and adaptable as a real human opponent (if not better!) — and to leverage it to design even better games... for humans to play!
Ansible for add-on management by lmanfredi
Description
Machines can contain various combinations of add-ons and are often modified over time.
The list of repositories can change, so I would like to create an automation able to reset the status to a given state, based on metadata available for these machines.
Goals
Create an Ansible automation able to take care of add-on (repo list) configuration, using metadata as the reference.
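A minimal sketch of how that could look, assuming the metadata is exposed as a host variable `addon_repos` (a hypothetical name) and using the `community.general.zypper_repository` module from the resources below:

```yaml
- hosts: all
  tasks:
    - name: Ensure the add-on repositories from the metadata are configured
      community.general.zypper_repository:
        name: "{{ item.name }}"
        repo: "{{ item.url }}"
        state: present
        auto_import_keys: true
      loop: "{{ addon_repos }}"
```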
Resources
- Machines
- Repositories
- Developing modules
- Basic VM Guest management
- Module `zypper_repository_list`
- ansible-collections community.general
Testing and adding GNU/Linux distributions on Uyuni by juliogonzalezgil
Join the Gitter channel! https://gitter.im/uyuni-project/hackweek
Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines. It also manages configuration, can run audits, build image containers, monitor and much more!
Currently there are a few distributions that are completely untested on Uyuni or SUSE Manager (AFAIK), or have not been tested for a long time. It could be interesting to find out how hard it would be to work with them and, if possible, fix whatever is broken.
For newcomers, the easiest distributions are those based on DEB or RPM packages. Distributions with other package formats are doable, but will require adapting the Python and Java code to be able to sync and analyze such packages (and if salt does not support those packages, it will need changes as well). So if you want a distribution with other packages, make sure you are comfortable handling such changes.
No developer experience? No worries! We had non-developer contributors in the past, and we are ready to help as long as you are willing to learn. If you don't want to code at all, you can also help us prepare the documentation after someone else has the initial code ready, or you could also help with testing :-)
The idea is testing Salt and Salt-ssh clients, but NOT traditional clients, which are deprecated.
To consider that a distribution has basic support, we should cover at least (points 3-6 are to be tested for both salt minions and salt ssh minions):
- Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
- Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
- Package management (install, remove, update...)
- Patching
- Applying any basic salt state (including a formula)
- Salt remote commands
- Bonus point: Java part for product identification, and monitoring enablement
- Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
- Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
- Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)
If something is breaking, we can try to fix it, but the main idea is to research how well supported it is right now. Beyond that it's up to each project member how much to hack :-)
- If you don't have knowledge about some of the steps: ask the team
- If you still don't know what to do: switch to another distribution and keep testing.
This card is for EVERYONE, not just developers. Seriously! In the past, people from other teams who were not developers helped out and added support for Debian and for new SUSE Linux Enterprise and openSUSE Leap versions :-)
Pending
FUSS
FUSS is a complete GNU/Linux solution (server, client and desktop/standalone) based on Debian for managing an educational network.
https://fuss.bz.it/
Seems to be a Debian 12 derivative, so adding it could be quite easy.
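For the reposync step that would mean adding sections for FUSS to the spacewalk-common-channels .ini file; a hypothetical sketch, with keys and URLs modelled loosely on the existing Debian-based entries and to be verified against the real file:

```ini
; hypothetical entry, not verified against the real .ini schema
[fuss12-amd64]
label = fuss12-pool-amd64
name = FUSS 12 (amd64)
archs = amd64-deb
checksum = sha256
repo_url = https://archive.fuss.bz.it/dists/bookworm/main/binary-amd64/
```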
- [ ] Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
- [ ] Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
- [ ] Package management (install, remove, update...)
- [ ] Patching (if patch information is available, could require writing some code to parse it, but IIRC we have support for Ubuntu already)
- [ ] Applying any basic salt state (including a formula)
- [ ] Salt remote commands
- [ ] Bonus point: Java part for product identification, and monitoring enablement
Make more sense of openQA test results using AI by livdywan
Description
AI has the potential to help with something many of us spend a lot of time on: making sense of openQA logs when a job fails.
User Story
Allison Average has a puzzled look on their face while staring at log files that seem to make little sense. Is this a known issue, something completely new or maybe related to infrastructure changes?
Goals
- Leverage a chat interface to help Allison
- Create a model from scratch based on data from openQA
- Proof of concept for automated analysis of openQA test results (see the sketch below)
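A proof of concept could start by collecting failed jobs and their failing test modules as raw data for a model; a minimal Python sketch against the public openQA API (host, filters and the exact response fields are assumptions to double-check):

```python
import requests

OPENQA = "https://openqa.opensuse.org"  # public instance as an example

# fetch the 50 most recently finished failed jobs
jobs = requests.get(f"{OPENQA}/api/v1/jobs",
                    params={"result": "failed", "limit": 50}).json()["jobs"]

for job in jobs:
    # each job carries a list of test modules with individual results
    failed = [m["name"] for m in job.get("modules", [])
              if m.get("result") == "failed"]
    print(job["id"], job["test"], failed)
```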
Bonus
- Use AI to suggest solutions to merge conflicts
- This would need a merge conflict editor that can suggest solving the conflict
- Use image recognition for needles
Resources
Timeline
Day 1
- Conversing with open-webui to teach me how to create a model based on openQA test results
- Asking for example code using TensorFlow in Python
- Discussing log files to explore what to analyze
- Drafting a new project called Testimony (based on Implementing a containerized Python action) - the project name was also suggested by the assistant
Day 2
- Using NotebookLM (Gemini) to produce conversational versions of blog posts
- Researching the possibility of creating a project logo with AI
- Asking open-webui, persons with prior experience and conducting a web search for advice
Highlights
- I briefly compared models to see if they would make me more productive. Between llama, gemma and mistral there was no significant difference in the results for my case.
- Convincing the chat interface to produce code specific to my use case required very explicit instructions.
- Asking for advice on how to use open-webui itself better was frustratingly unfruitful both in trivial and more advanced regards.
- Documentation on the source materials used by LLMs, and on tools for inspecting them, seems virtually non-existent - specifically on whether a logo can be generated based on materials under particular licenses
Outcomes
Create object oriented API for perl's YAML::XS module by tinita
Description
YAML::XS is a binding to libyaml; it is already quite old, but still the most popular YAML module for Perl. There are two main issues:
- It uses global package variables to influence behaviour.
- It does not implement the loading of types like numbers and booleans according to the YAML spec (neither 1.1 nor 1.2).
Goals
Create a new, object-oriented interface. Currently YAML::XS exports a flat list of functions.
- The new API will allow creating a YAML::XS object containing configuration that influences the behaviour of loading and dumping.
- It keeps the libyaml parser and emitter structs in memory, so repeated calls can save the creation of those structs.
- It will implement the YAML 1.2 Core Schema by default, so it is compatible with other YAML processors in Perl and in other languages.
- If I have time, I would like to add the merge key (`<<`) feature as an option. We could then use it in openQA as a replacement for YAML::PP, to be faster.
I already created a proof of concept with minimal functionality some weeks before this Hack Week.
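To make the direction more concrete, usage of the new API could look roughly like this (module name, options and methods are assumptions modelled on YAML::PP, not the final interface):

```perl
use strict;
use warnings;
use YAML::XS::OO;    # hypothetical module name

# configuration lives in the object instead of global package variables
my $yp = YAML::XS::OO->new(
    schema => 'Core',    # YAML 1.2 Core Schema by default
    merge  => 1,         # optional support for the `<<` merge key
);

# the object can keep the libyaml parser/emitter structs between calls
my $data = $yp->load_string("foo: [1, true, null]\n");
my $yaml = $yp->dump_string($data);
```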
Resources
- Work is currently happening on the oop branch