Project Description
Legal reviews have been quite a painful part of our development process. The current process in Factory waits for legaldb for a limited amount of time and simply proceeds if the review is not "approved" within a few hours.
Leap currently waits for the legal review to be closed (which may take weeks) or to be manually skipped. We typically contact our legal team with a request to review specific submissions on a weekly basis.
The goal is to improve our best-effort approach to reviews in openSUSE, and ideally "shorten" the legal review time for our packages.
Project OSSelot and related work on legal reviews seem to be funded by donations to OSADL (based in Germany). The project seems to use fossology underneath.
I highly recommend starting by watching the OSSelot videos to get an idea of what their process and results look like. The last one seems to be the closest to what I've seen in the Open Chain webinar.
GitHub repository of curated data
This project has two parts: offloading our legal team where possible, and contributing back.
Goal for this Hackweek
Contributing back the results of our reviews: being a good Open Source Community Citizen and publicly sharing the results of our legal reviews of community packages.
Offloading reviews of our community packages: here I can see an extension of our existing process, and extending our legal bot to talk to fossology/OSSelot.
An example could be: wait for legaldb for n hours (currently 1-2h); if the review is still open, then submit it to OSSelot. I see this as a much better alternative to e.g. lkocman skipping the review and taking the change in when the review has not been closed for days or weeks.
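To make the idea concrete, here is a minimal sketch of that flow in Python. The legaldb and OSSelot endpoints and field names are assumptions for illustration only; the real legal bot and APIs will differ.

```python
# Minimal sketch of the proposed flow: wait a bounded time for legaldb,
# then hand the still-open review over to OSSelot/fossology.
# All endpoints and field names below are hypothetical.
import time
import requests

LEGALDB_URL = "https://legaldb.suse.de"
OSSELOT_SUBMIT_URL = "https://osselot.example.org/submit"  # placeholder

def review_state(package):
    # Hypothetical endpoint returning the review state for a package.
    r = requests.get(f"{LEGALDB_URL}/api/reviews/{package}", timeout=30)
    r.raise_for_status()
    return r.json().get("state", "open")

def escalate_to_osselot(package):
    # Hypothetical hand-off of a still-open review for external curation.
    requests.post(OSSELOT_SUBMIT_URL, json={"package": package}, timeout=30)

def handle_review(package, wait_hours=2, poll_minutes=15):
    deadline = time.time() + wait_hours * 3600
    while time.time() < deadline:
        if review_state(package) != "open":
            return "closed"
        time.sleep(poll_minutes * 60)
    escalate_to_osselot(package)
    return "escalated"
```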
Resources
We could use somebody who has experience with our legal tooling (https://github.com/openSUSE/cavil) and could help us export data from legaldb.suse.de to https://github.com/Open-Source-Compliance/package-analysis/tree/main/analysed-packages
A person who could tweak our existing legal bot to submit requests to fossology/osselot
This project is part of:
Hack Week 22
Activity
Comments
almost 2 years ago by lkocman
An agreed first step from our call with Christopher from our legal team would be to compare our cavil report with the fossology report. Sebastian and Christopher recommended starting by comparing the results for openssl.
I'd recommend filing an OSSelot project issue containing our review data (perhaps stripped of SUSE's risk assessment) and having a discussion about next steps.
Notes:
What's interesting for us is the SPDX license mapping to files; we're still using mappings from before the SPDX era. What's interesting to our SUSE legal team is the criteria for rejection on the OSSelot side. I did ask, and there is no rejection "strategy" or "rules" documented publicly.
I'm not sure if the OSSelot team would be willing to work on our reviews to the level we'd expect (to be clarified, see my note about rejection above), but having these reports public, e.g. in a pull request, opens a way for volunteers with a legal/license background to contribute and offload the SUSE legal team on community reviews.
almost 2 years ago by lkocman
Another action item from Sebastian:
One more thing for Hack Week: you could take a look at a rejected review; maybe there is something they have in their data that matches (search https://legaldb.suse.de/reviews/recent for "unacceptable").
There were 11 rejected reviews in the past 3 months
Similar Projects
Create object oriented API for perl's YAML::XS module, with YAML 1.2 Support by tinita
Description
YAML::XS is a binding to libyaml; it is already quite old, but it is the most popular YAML module for Perl. There are two main issues:
- It uses global package variables to influence behaviour.
- It doesn't implement loading of types like numbers and booleans according to the YAML spec (neither 1.1 nor 1.2).
Goals
Create a new, object-oriented interface. Currently YAML::XS exports a list of functions.
- The new API will allow creating a YAML::XS object containing configuration that influences the behaviour of loading and dumping.
- It keeps the libyaml parser and emitter structs in memory, so repeated calls avoid recreating those structs.
- It will implement the YAML 1.2 Core Schema by default, so it is compatible with other YAML processors in Perl and in other languages.
- If I have time, I would like to add the merge key ("<<") feature as an option. We could then use it in openQA as a faster replacement for YAML::PP.
I already created a proof of concept with minimal functionality some weeks before this Hack Week.
Resources
- Work is currently happening on the oop branch
- Experimental release waiting for user feedback: https://github.com/perlpunk/yaml-libyaml-pm/releases
- Diff
Make more sense of openQA test results using AI by livdywan
Description
AI has the potential to help with something many of us spend a lot of time doing: making sense of openQA logs when a job fails.
User Story
Allison Average has a puzzled look on their face while staring at log files that seem to make little sense. Is this a known issue, something completely new or maybe related to infrastructure changes?
Goals
- Leverage a chat interface to help Allison
- Create a model from scratch based on data from openQA
- Proof of concept for automated analysis of openQA test results
Bonus
- Use AI to suggest solutions to merge conflicts
- This would need a merge conflict editor that can suggest solving the conflict
- Use image recognition for needles
Resources
Timeline
Day 1
- Conversing with open-webui to teach me how to create a model based on openQA test results
- Asking for example code using TensorFlow in Python
- Discussing log files to explore what to analyze
- Drafting a new project called Testimony (based on Implementing a containerized Python action) - the project name was also suggested by the assistant
Day 2
- Using NotebookLM (Gemini) to produce conversational versions of blog posts
- Researching the possibility of creating a project logo with AI
- Asking open-webui, persons with prior experience and conducting a web search for advice
Highlights
- I briefly tested and compared models to see if they would make me more productive. Between llama, gemma and mistral there was no notable difference in the results for my case.
- Convincing the chat interface to produce code specific to my use case required very explicit instructions.
- Asking for advice on how to use open-webui itself better was frustratingly unfruitful both in trivial and more advanced regards.
- Documentation on the source materials used by LLMs, and on tools for this purpose, seems virtually non-existent - specifically on whether a logo can be generated based on particular licenses.
Outcomes
- Chat-interface-supported development provides good starting points, and open-webui, being open source, is more flexible than Gemini, although currently some fancy features such as grounding and generated podcasts are missing.
- Allison still has to be very experienced with openQA to use a chat interface for test review. Publicly available system prompts would make that easier, though.
Saline (state deployment control and monitoring tool for SUSE Manager/Uyuni) by vizhestkov
Project Description
Saline is an addition to Salt, used in SUSE Manager/Uyuni, aimed at providing better control and visibility for state deployment in large-scale environments.
In its current state, the published version can be used only as a Prometheus exporter and is missing some of the key features implemented in the PoC (not published). It can provide metrics related to Salt events and the state apply process on the minions, but there is no control over this process implemented yet.
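For illustration, a minimal sketch of the exporter side in Python, assuming metrics are fed from the Salt master event bus; Saline's real metric names and event handling differ.

```python
# Sketch of a Prometheus exporter counting Salt events.
# The metric name and the event source are assumptions for illustration.
import time
from prometheus_client import Counter, start_http_server

SALT_EVENTS = Counter(
    "salt_events_total",
    "Salt events seen on the master event bus",
    ["tag"],
)

def salt_event_stream():
    # Placeholder for the real event source; Saline reads the Salt master
    # event bus. Here we just emit one fake event so the sketch runs.
    yield {"tag": "salt/job/1/ret/minion1"}

def consume_events(events):
    for event in events:
        SALT_EVENTS.labels(tag=event.get("tag", "unknown")).inc()

if __name__ == "__main__":
    start_http_server(9115)            # metrics served on :9115/metrics
    consume_events(salt_event_stream())
    time.sleep(60)                     # keep serving metrics for a while
```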
Continue implementing the missing features and improve the existing implementation:
- authentication (need to decide how, or whether, it should be related to Salt auth)
- a web service providing control over state deployment
Goal for this Hackweek
Implement missing key features
Implement the tool for state deployment control with CLI
Resources
https://github.com/openSUSE/saline
SUSE AI Meets the Game Board by moio
Use tabletopgames.ai’s open source TAG and PyTAG frameworks to apply Statistical Forward Planning and Deep Reinforcement Learning to two board games of our own design. On an all-green, all-open source, all-AWS stack!
Results: Infrastructure Achievements
We successfully built and automated a containerized stack to support our AI experiments. This included:
- a Fully-Automated, One-Command, GPU-accelerated Kubernetes setup: we created an OpenTofu based script, tofu-tag, to deploy SUSE's RKE2 Kubernetes running on CUDA-enabled nodes in AWS, powered by openSUSE with GPU drivers and gpu-operator
- Containerization of the TAG and PyTAG frameworks: TAG (Tabletop AI Games) and PyTAG were patched for seamless deployment in containerized environments. We automated the container image creation process with GitHub Actions. Our forks (PRs upstream upcoming):
`./deploy.sh` and voilà - Kubernetes running PyTAG (`k9s`, above) with GPU acceleration (`nvtop`, below)
Results: Game Design Insights
Our project focused on modeling and analyzing two card games of our own design within the TAG framework:
- Game Modeling: We implemented models for Dario's "Bamboo" and Silvio's "Totoro" and "R3" games, enabling AI agents to play thousands of games ...in minutes!
- AI-driven optimization: By analyzing statistical data on moves, strategies, and outcomes, we iteratively tweaked the game mechanics and rules to achieve better balance and player engagement.
- Advanced analytics: Leveraging AI agents with Monte Carlo Tree Search (MCTS) and random action selection, we compared performance metrics to identify optimal strategies and uncover opportunities for game refinement.
- more about Bamboo on Dario's site
- more about R3 on Silvio's site (Italian, translation coming)
- more about Totoro on Silvio's site
A family picture of our card games in progress. From the top: Bamboo, Totoro, R3
Results: Learning, Collaboration, and Innovation
Beyond technical accomplishments, the project showcased innovative approaches to coding, learning, and teamwork:
- "Trio programming" with AI assistance: Our "trio programming" approach—two developers and GitHub Copilot—was a standout success, especially in handling slightly-repetitive but not-quite-exactly-copypaste tasks. Java as a language tends to be verbose and we found it to be fitting particularly well.
- AI tools for reporting and documentation: We extensively used AI chatbots to streamline writing and reporting. (Including writing this report! ...but this note was added manually during edit!)
- GPU compute expertise: Overcoming challenges with CUDA drivers and cloud infrastructure deepened our understanding of GPU-accelerated workloads in the open-source ecosystem.
- Game design as a learning platform: By blending AI techniques with creative game design, we learned not only about AI strategies but also about making games fun, engaging, and balanced.
Last but not least we had a lot of fun! ...and this was definitely not a chatbot generated line!
The Context: AI + Board Games
Run local LLMs with Ollama and explore possible integrations with Uyuni by PSuarezHernandez
Description
Using Ollama you can easily run different LLM models on your local computer. This project is about exploring Ollama, testing different LLMs and trying to fine-tune them, as well as exploring potential ways of integrating it with Uyuni.
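As a starting point for the exploration, a minimal sketch that queries a locally running Ollama server over its REST API; the model name is just an example and is assumed to have been pulled already (e.g. with "ollama pull llama3").

```python
# Minimal sketch: ask a locally running Ollama server for a completion.
# Assumes the default endpoint http://localhost:11434.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",                                  # example model
        "prompt": "Summarize what Uyuni does in one sentence.",
        "stream": False,                                    # single JSON reply
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```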
Goals
- Explore Ollama
- Test different models
- Fine tuning
- Explore possible integration in Uyuni
Resources
- https://ollama.com/
- https://huggingface.co/
- https://apeatling.com/articles/part-2-building-your-training-data-for-fine-tuning/
Team Hedgehogs' Data Observability Dashboard by gsamardzhiev
Description
This project aims to develop a comprehensive Data Observability Dashboard that provides insights into key aspects of data quality and reliability. The dashboard will track:
- Data Freshness: Monitor when data was last updated and flag potential delays.
- Data Volume: Track table row counts to detect unexpected surges or drops in data.
- Data Distribution: Analyze data for null values, outliers, and anomalies to ensure accuracy.
- Data Schema: Track schema changes over time to prevent breaking changes.
The dashboard aims to support historical tracking, enabling proactive data management and enhancing data trust across the data function.
Goals
Although the final goal is to create a Power BI dashboard that we are able to monitor, our goals are to 1. create the necessary tables that track the relevant metadata about our current data, and 2. automate the process so it runs in a timely manner.
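As a rough sketch of goal 1, the following assumes a Redshift connection via psycopg2 and hypothetical table and column names; the actual schema and orchestration (e.g. via Airflow) will differ.

```python
# Sketch: collect row-count and last-update metadata for a set of tables
# and append it to a metadata table. All table and column names below
# are hypothetical placeholders.
import datetime
import psycopg2

MONITORED_TABLES = ["sales.orders", "sales.customers"]  # examples

conn = psycopg2.connect(host="redshift.example.com", port=5439,
                        dbname="analytics", user="observer", password="...")
with conn, conn.cursor() as cur:
    for table in MONITORED_TABLES:
        cur.execute(f"SELECT COUNT(*), MAX(updated_at) FROM {table}")
        row_count, last_updated = cur.fetchone()
        cur.execute(
            "INSERT INTO observability.table_metrics "
            "(table_name, row_count, last_updated, captured_at) "
            "VALUES (%s, %s, %s, %s)",
            (table, row_count, last_updated, datetime.datetime.utcnow()),
        )
```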
Resources
AWS Redshift, AWS Glue, Airflow, Python, SQL
Why Hedgehogs?
Because we like them.
Automation of ABI compatibility checks by ateixeira
Description
ABI compatibility checks could be further automated by using the OBS API to download built RPMs and using existing tools to analyze ABI compatibility between the libraries contained in those packages. This project aims to explore these possibilities and figure out a way to make ABI checks as painless and fast as possible for package maintainers.
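To illustrate the intended automation, a sketch that fetches built RPMs with the osc CLI and compares them with libabigail's abipkgdiff; project, repository, and package names are placeholders.

```python
# Sketch: fetch built RPMs from OBS and run an ABI comparison on them.
# Uses the osc CLI and libabigail's abipkgdiff; the project, repository
# and file names below are illustrative placeholders.
import subprocess

def fetch_binaries(project, package, repo, arch="x86_64", dest="rpms"):
    subprocess.run(["osc", "getbinaries", project, package, repo, arch,
                    "--destdir", dest], check=True)
    return dest

def compare_abi(old_rpm, new_rpm):
    # abipkgdiff exits non-zero when it finds ABI differences.
    result = subprocess.run(["abipkgdiff", old_rpm, new_rpm],
                            capture_output=True, text=True)
    return result.returncode, result.stdout

# Example usage (values are illustrative):
# fetch_binaries("openSUSE:Factory", "libfoo", "standard")
# code, report = compare_abi("rpms/libfoo1-1.0.rpm", "rpms/libfoo1-1.1.rpm")
```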
Resources
https://github.com/openSUSE/abi-compliance-checker
https://github.com/lvc/abi-compliance-checker
https://sourceware.org/libabigail/
Learn about OBS and contribute to `kustomize` and `k9s` packages to add ARM arch by dpock
Description
There are already `k9s` and `kustomize` packages in openSUSE today. These could be used as the source for these binaries in our Rancher projects. By using them we would benefit from CVE fixes included in our distribution of the packages that are not included upstream. However, they do not provide arm package builds, which are required.
Goals
- [ ] Update the kustomize package in OBS to use the newest version and send a change request
Resources
- k9s: https://build.opensuse.org/package/show/openSUSE:Factory/k9s
- kustomize: https://build.opensuse.org/package/show/openSUSE:Factory/kustomize
- Learning Docs: https://confluence.suse.com/display/packaging/Training%2C+Talks+and+Videos
New features in openqa-trigger-from-obs for openQA by jlausuch
Description
Implement new features in openqa-trigger-from-obs to make the XML configuration more flexible.
Goals
One of the features to be implemented:
- Possibility to define "VERSION" and "ARCH" variables per flavor instead of globally.
Resources
https://github.com/os-autoinst/openqa-trigger-from-obs
Git CI to automate the creation of product definition by gyribeiro
Description
Automate the creation of product definition
Goals
Create a Git CI that will:
- be triggered automatically once a change (commit) to the package list is made
- run the tool responsible for updating the product definition based on the changes in the package list
- test the updated product definition in OBS
- submit a pull request updating the product definition in the repository
NOTE: this Git CI may also be triggered manually
Resources
- https://docs.gitlab.com/ee/ci/
- https://openbuildservice.org/2021/05/31/scm-integration/
- https://github.com/openSUSE/openSUSE-release-tools
obs-service-vendor_node_modules by cdimonaco
Description
When building a JavaScript package for OBS, one option is to use https://github.com/openSUSE/obs-service-node_modules as a source service to make the project's npm dependencies available for package building.
obs-service-vendor_node_modules aims to be a source service that vendors npm dependencies, installing them with npm install (optionally only production ones) and then creating a tar package of the installed dependencies.
The tar will be used as source in the package building definitions.
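A minimal sketch of how such a service could look, assuming the usual OBS source-service convention of an --outdir parameter; the real service's options and archive naming may differ.

```python
#!/usr/bin/env python3
# Sketch of a source service that vendors npm dependencies:
# run "npm install", then tar up node_modules for use as a package source.
# The --production flag and archive name are illustrative choices.
import argparse
import subprocess
import tarfile

parser = argparse.ArgumentParser()
parser.add_argument("--outdir", required=True)
parser.add_argument("--production", action="store_true")
args = parser.parse_args()

cmd = ["npm", "install"]
if args.production:
    cmd.append("--omit=dev")          # skip devDependencies
subprocess.run(cmd, check=True)

with tarfile.open(f"{args.outdir}/vendor-node-modules.tar.gz", "w:gz") as tar:
    tar.add("node_modules")
```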
Goals
- Create an OBS service package that vendors the npm dependencies as a tar archive.
- Maybe add some macros to unpack the vendor package in the specfiles
Resources