Project Description
Legal reviews have been quite a painful part of our development process. The current Factory workflow waits for legaldb for a limited amount of time and simply proceeds if the review is not "approved" within a few hours.
Leap currently waits for the legal review to be closed (which may take weeks) or to be manually skipped. We typically ask our legal team to review specific requests on a weekly basis.
The goal is to improve our best-effort approach to reviews in openSUSE and ideally shorten the time legal review takes for our packages.
The OSSelot project and related work on legal reviews seem to be funded by donations to OSADL (based in Germany). The project appears to use FOSSology underneath.
I highly recommend starting by watching the OSSelot videos to get an idea of what their process and results look like. The last one seems closest to what I've seen in the OpenChain webinar.
GitHub repository of curated data
This project has two parts. Offloading our legal team, where possible, and contributing back.
Goal for this Hackweek
Contributing back results of our reviews: being a good open-source community citizen and publicly sharing the results of our legal reviews of community packages.
Offloading reviews of our community packages: here I can see extending our existing review process and teaching our legal bot to talk to FOSSology/OSSelot.
An example could be: wait for legaldb for n hours (currently 1-2 h); if the review is still open, submit the package to OSSelot. I see that as a much better alternative to e.g. lkocman skipping the review and taking the change in when the review has not been closed for days or weeks. A rough sketch of that flow is below.
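A minimal sketch of how that escalation could look in the bot, assuming a hypothetical status route on legaldb and a placeholder submission hook for OSSelot/FOSSology; none of the routes or payloads below are the real Cavil or FOSSology APIs:

```python
# Hypothetical escalation flow: wait a limited time for a legaldb (Cavil) verdict,
# then hand the package to OSSelot/FOSSology instead of skipping the review.
import time
import requests

LEGALDB_URL = "https://legaldb.suse.de"        # real instance, but the route below is assumed
OSSELOT_HOOK = "https://example.org/osselot"   # placeholder: no public submission API is implied

def legaldb_state(review_id: str) -> str:
    """Return the review state, e.g. 'acceptable', 'unacceptable' or 'open' (hypothetical route)."""
    resp = requests.get(f"{LEGALDB_URL}/reviews/{review_id}.json", timeout=30)
    resp.raise_for_status()
    return resp.json().get("state", "open")

def escalate_to_osselot(package: str, review_id: str) -> None:
    """Placeholder for filing the package with OSSelot/FOSSology for analysis."""
    requests.post(OSSELOT_HOOK, json={"package": package, "review": review_id}, timeout=30)

def handle_review(package: str, review_id: str, wait_hours: float = 2.0) -> str:
    deadline = time.time() + wait_hours * 3600
    while time.time() < deadline:
        state = legaldb_state(review_id)
        if state != "open":
            return state                       # legal team answered in time
        time.sleep(600)                        # poll every 10 minutes
    escalate_to_osselot(package, review_id)    # instead of skipping the review entirely
    return "escalated"
```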
Resources
We could use somebody who has experience with our legal tooling (https://github.com/openSUSE/cavil) and could help us export data from legaldb.suse.de to https://github.com/Open-Source-Compliance/package-analysis/tree/main/analysed-packages
A person who could tweak our existing legal bot to submit requests to FOSSology/OSSelot
This project is part of:
Hack Week 22
Activity
Comments
almost 2 years ago by lkocman
**An agreed first step from our call with Christopher from our legal team would be to compare our Cavil report with the FOSSology report. Sebastian and Christopher recommended starting by comparing the results for openssl.**
I'd recommend filing an OSSelot project issue containing our review data (perhaps stripped of SUSE's risk assessment) and having a discussion about next steps.
*Notes:*
What's interesting for us is the SPDX license mapping to files; we're still using mappings from before the SPDX era. What's interesting to our SUSE legal team is the criteria for rejection on the OSSelot side. I did ask, and no rejection "strategy" or "rules" are documented publicly.
I'm not sure whether the OSSelot team would be willing to work on our reviews to the level we'd expect (to be clarified; see my note about rejection above), but having these reports public, e.g. in a pull request, opens a way for volunteers with a legal/licensing background to contribute and offload the SUSE legal team on community reviews.
almost 2 years ago by lkocman
Another action item from Sebastian:
One more thing for hack week: you could take a look at a rejected review; maybe there is something they have in their data that matches (search https://legaldb.suse.de/reviews/recent for "unacceptable").
There were 11 rejected reviews in the past 3 months
Similar Projects
Create object oriented API for perl's YAML::XS module, with YAML 1.2 Support by tinita
Description
YAML::XS is a binding to libyaml and already quite old, but it is the most popular YAML module for Perl. There are two main issues:
- It uses global package variables to influence behaviour.
- It doesn't implement the loading of types like numbers and booleans according to the YAML spec (neither 1.1 nor 1.2).
Goals
Create a new, object-oriented interface. Currently YAML::XS exports a list of functions.
- The new API will allow creating a YAML::XS object containing configuration that influences the behaviour of loading and dumping.
- It keeps the libyaml parser and emitter structs in memory, so repeated calls can save the creation of those structs.
- It will by default implement the YAML 1.2 Core Schema, so it is compatible with other YAML processors in Perl and in other languages.
- If I have time, I would like to add the merge key (`<<`) feature as an option. We could then use it in openQA as a faster replacement for YAML::PP.
I already created a proof of concept with minimal functionality some weeks before this Hack Week.
Resources
- Work is currently happening on the oop branch
- Experimental release waiting for user feedback: https://github.com/perlpunk/yaml-libyaml-pm/releases
- Diff
Team Hedgehogs' Data Observability Dashboard by gsamardzhiev
Description
This project aims to develop a comprehensive Data Observability Dashboard that provides insights into key aspects of data quality and reliability. The dashboard will track:
Data Freshness: Monitor when data was last updated and flag potential delays.
Data Volume: Track table row counts to detect unexpected surges or drops in data.
Data Distribution: Analyze data for null values, outliers, and anomalies to ensure accuracy.
Data Schema: Track schema changes over time to prevent breaking changes.
The dashboard aims to provide historical tracking to support proactive data management and enhance data trust across the data function.
Goals
Although the final goal is to create a Power BI dashboard that we are able to monitor, our goals are to:
1. Create the necessary tables that track the relevant metadata about our current data.
2. Automate the process so it runs in a timely manner.
A minimal collection sketch is shown below.
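A minimal sketch of the collection step, assuming Redshift is reachable via psycopg2 and that a metadata table such as observability.table_metrics already exists; the table names and credentials are placeholders, not the team's actual schema:

```python
# Snapshot row counts for a few monitored tables into a metadata table so the
# dashboard can track volume (and, over time, freshness) historically.
import datetime
import psycopg2

TABLES = ["sales.orders", "sales.customers"]   # hypothetical tables to monitor

conn = psycopg2.connect(host="redshift.example.com", port=5439,
                        dbname="analytics", user="observer", password="...")

with conn, conn.cursor() as cur:
    for table in TABLES:
        # Data volume: row count to detect unexpected surges or drops.
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        row_count = cur.fetchone()[0]
        # Record the snapshot; captured_at gives us the historical dimension.
        cur.execute(
            "INSERT INTO observability.table_metrics "
            "(table_name, row_count, captured_at) VALUES (%s, %s, %s)",
            (table, row_count, datetime.datetime.utcnow()),
        )
```

In practice a job like this would be scheduled from Airflow rather than run by hand.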
Resources
AWS Redshift, AWS Glue, Airflow, Python, SQL
Why Hedgehogs?
Because we like them.
Ansible for add-on management by lmanfredi
Description
Machines can contain various combinations of add-ons and are often modified over time.
The list of repos can change, so I would like to create an automation able to reset the status to a given state, based on metadata available for these machines.
Goals
Create an Ansible automation able to take care of add-on (repo list) configuration using metadata as reference
Resources
- Machines
- Repositories
- Developing modules
- Basic VM Guest management
- Module zypper_repository_list
- ansible-collections community.general
Results
Created WIP project Ansible-add-on-openSUSE
Run local LLMs with Ollama and explore possible integrations with Uyuni by PSuarezHernandez
Description
Using Ollama you can easily run different LLM models on your local computer. This project is about exploring Ollama, testing different LLMs and trying to fine-tune them. It also explores potential ways of integrating with Uyuni. A small example of calling a local Ollama server is shown after the goals below.
Goals
- Explore Ollama
- Test different models
- Fine tuning
- Explore possible integration in Uyuni
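For illustration, a quick sketch of talking to a locally running Ollama server from Python; Ollama listens on localhost:11434 by default, and the model name is just an example that must already be pulled (e.g. with `ollama pull llama3`):

```python
# Ask a locally running Ollama model a single question via its REST API.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",                  # example model, must be pulled first
        "prompt": "Summarize what Uyuni is in one sentence.",
        "stream": False,                    # return one JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```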
Resources
- https://ollama.com/
- https://huggingface.co/
- https://apeatling.com/articles/part-2-building-your-training-data-for-fine-tuning/
Make more sense of openQA test results using AI by livdywan
Description
AI has the potential to help with something many of us spend a lot of time on: making sense of openQA logs when a job fails.
User Story
Allison Average has a puzzled look on their face while staring at log files that seem to make little sense. Is this a known issue, something completely new or maybe related to infrastructure changes?
Goals
- Leverage a chat interface to help Allison
- Create a model from scratch based on data from openQA
- Proof of concept for automated analysis of openQA test results (a rough data-fetching sketch follows below)
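As a starting point for that proof of concept, a rough sketch of pulling recent failed jobs from an openQA instance so their details could be fed into a chat interface; the instance URL is an example, and the exact set of supported query parameters may differ:

```python
# Fetch recent failed openQA jobs and print the bits a human (or an LLM prompt)
# would start from: job id, name, result and the recorded reason, if any.
import requests

OPENQA = "https://openqa.opensuse.org"   # example public instance

resp = requests.get(
    f"{OPENQA}/api/v1/jobs",
    params={"state": "done", "result": "failed", "limit": 10},
    timeout=60,
)
resp.raise_for_status()

for job in resp.json().get("jobs", []):
    print(job.get("id"), job.get("name"), job.get("result"), job.get("reason"))
```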
Bonus
- Use AI to suggest solutions to merge conflicts
- This would need a merge conflict editor that can suggest solving the conflict
- Use image recognition for needles
Resources
Timeline
Day 1
- Conversing with open-webui to teach me how to create a model based on openQA test results
- Asking for example code using TensorFlow in Python
- Discussing log files to explore what to analyze
- Drafting a new project called Testimony (based on Implementing a containerized Python action) - the project name was also suggested by the assistant
Day 2
- Using NotebookLM (Gemini) to produce conversational versions of blog posts
- Researching the possibility of creating a project logo with AI
- Asking open-webui, persons with prior experience and conducting a web search for advice
Highlights
- I briefly tested and compared models to see if they would make me more productive. Between llama, gemma and mistral there was no amazing difference in the results for my case.
- Convincing the chat interface to produce code specific to my use case required very explicit instructions.
- Asking for advice on how to use open-webui itself better was frustratingly unfruitful both in trivial and more advanced regards.
- Documentation on the source materials used by LLMs and tools for this purpose seems virtually non-existent - specifically whether a logo can be generated based on particular licenses.
Outcomes
- Chat-interface-supported development provides good starting points, and open-webui, being open source, is more flexible than Gemini, although currently some fancy features such as grounding and generated podcasts are missing.
- Allison still has to be very experienced with openQA to use a chat interface for test review. Publicly available system prompts would make that easier, though.
ClusterOps - Easily install and manage your personal kubernetes cluster by andreabenini
Description
ClusterOps is a Kubernetes installer and operator designed to streamline the initial configuration
and ongoing maintenance of kubernetes clusters. The focus of this project is primarily on personal
or local installations. However, the goal is to expand its use to encompass all installations of
Kubernetes for local development purposes.
It simplifies cluster management by automating tasks and providing just one user-friendly YAML-based configuration file, config.yml.
Overview
- Simplified Configuration: Define your desired cluster state in a simple YAML file, and ClusterOps will handle the rest.
- Automated Setup: Automates initial cluster configuration, including network settings, storage provisioning, special requirements (for example GPUs) and essential components installation.
- Ongoing Maintenance: Performs routine maintenance tasks such as upgrades, security updates, and resource monitoring.
- Extensibility: Easily extend functionality with custom plugins and configurations.
- Self-Healing: Detects and recovers from common cluster issues, ensuring stability, idempotence and reliability. Same operation can be performed multiple times without changing the result.
- Discreet: It works only on what it knows; if you are manually configuring parts of your Kubernetes cluster and that configuration does not interfere with it, you can happily continue to work on those parts and use this tool only for what is needed.
Features
- Distribution and engine independence. Install your favorite kubernetes engine with your package manager, execute one script and you'll have a complete working environment at your disposal.
- Basic config approach. One single config.yml file with configuration requirements (add/remove features): human readable, plain and simple. All fancy configs managed automatically (ingress, balancers, services, proxy, ...).
- Local Builtin ContainerHub. The default installation provides a fully configured ContainerHub available locally along with the kubernetes installation. This configuration allows the user to build, upload and deploy custom container images as if they were provided from external sources. Internet public sources are still available, but local development can be kept on this localhost server. The builtin ClusterOps operator will be fetched from this ContainerHub registry too.
- Kubernetes official dashboard installed as a plugin, others planned too (k9s for example).
- Kubevirt plugin installed and properly configured. Unleash the power of classic virtualization (KVM+QEMU) on top of Kubernetes and manage your entire system from there, libvirtd and virsh libs are required.
- One operator to rule them all. The installation script configures your machine automatically during installation and adds one kubernetes operator to manage your local cluster. From there the operator takes care of the cluster on your behalf.
- Clean installation and removal. Just test it, when you are done just use the same program to uninstall everything without leaving configs (or pods) behind.
Planned features (Wishlist / TODOs)
- Containerized Data Importer (CDI). Persistent storage management add-on for Kubernetes to provide a declarative way of building and importing Virtual Machine Disks on PVCs for Kubevirt VMs.
Implement a full OBS api client in Rust by nbelouin
Description
I recently started to work on tooling for OBS using Rust. To do so, I started a Rust crate to interact with the OBS API, but I only implemented the few routes/resources I needed. What about making it a full-fledged OBS client library?
Goals
- Implement more routes/resources
- Implement a test suite against the actual OBS implementation
- Bonus: Create an osc like cli in Rust using the library
Resources
- https://github.com/suse-edge/obs-tools/tree/main/obs-client
- https://api.opensuse.org/apidocs/
Automation of ABI compatibility checks by ateixeira
Description
ABI compatibility checks could be further automated by using the OBS API to download built RPMs and using existing tools to analyze ABI compatibility between the libraries contained in those packages. This project aims to explore these possibilities and figure out a way to make ABI checks as painless and fast as possible for package maintainers.
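A rough sketch of what such an automation could look like, assuming `osc` is configured for the OBS API and libabigail's `abipkgdiff` is installed; the project, package, repository and architecture names are examples only:

```python
# Download the built RPMs of one package from two projects and compare their ABI.
import glob
import subprocess

OLD = ("openSUSE:Leap:15.5", "libfoo")   # hypothetical baseline project/package
NEW = ("openSUSE:Factory", "libfoo")     # hypothetical target project/package

def fetch_rpms(project: str, package: str, dest: str) -> None:
    # `osc getbinaries` downloads the built binaries for one repository/arch via the OBS API.
    subprocess.run(
        ["osc", "getbinaries", project, package, "standard", "x86_64",
         "--destdir", dest],
        check=True,
    )

fetch_rpms(*OLD, dest="old")
fetch_rpms(*NEW, dest="new")

# Pick the downloaded binary RPMs (real filenames include version and release).
old_rpm = sorted(glob.glob("old/libfoo-*.rpm"))[0]
new_rpm = sorted(glob.glob("new/libfoo-*.rpm"))[0]

# abipkgdiff exits non-zero when it finds ABI differences between the two packages.
result = subprocess.run(["abipkgdiff", old_rpm, new_rpm])
print("ABI compatible" if result.returncode == 0 else "ABI differences found")
```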
Resources
https://github.com/openSUSE/abi-compliance-checker
https://github.com/lvc/abi-compliance-checker
https://sourceware.org/libabigail/
Research openqa-trigger-from-obs and openqa-trigger-from-ibs-plugin by qwang
Description
The openqa-trigger-from-obs project is a framework that OSD uses to automatically sync the defined images and repositories from OBS/IBS to its assets for testing. This framework will very likely be used to synchronize assets to each location's openQA instance, including openqa.qa2.suse.asia. The Beijing local proxy SCC (scc-proxy.suse.asia; although it's not a must for our testing) now rewrites requests to openqa.qa2.suse.asia instead of openqa.suse.de, so the assets/repos should be consistent in format. The Beijing local openQA maintains its own script, but it still needs many manual activities when a new build comes, and it is not consistent with OSD, which will require many test-code changes due to the CC network change.
Goals
Research this framework in case it will be re-used for the Beijing local openQA and will need to be set up and maintained by ourselves.
Resources
https://github.com/os-autoinst/openqa-trigger-from-obs/tree/master
https://gitlab.suse.de/openqa/openqa-trigger-from-ibs-plugin
beijing :rainbow machine
New features in openqa-trigger-from-obs for openQA by jlausuch
Description
Implement new features in openqa-trigger-from-obs to make the XML more flexible.
Goals
One of the features to be implemented:
- Possibility to define "VERSION" and "ARCH" variables per flavor instead of globally.
Resources
https://github.com/os-autoinst/openqa-trigger-from-obs
Learn about OBS and contribute to `kustomize` and `k9s` packages to add ARM arch by dpock
Description
There are already k9s and kustomize packages for openSUSE today. These could be used as the source for these binaries in our Rancher projects. By using them we would benefit from CVE fixes included in our distribution of the packages that are not included upstream. However, they do not provide ARM package builds, which are required.
Goals
- [ ] Update the kustomize package in OBS to use the newest version and send change request
Resources
- k9s: https://build.opensuse.org/package/show/openSUSE:Factory/k9s
- kustomize: https://build.opensuse.org/package/show/openSUSE:Factory/kustomize
- Learning Docs: https://confluence.suse.com/display/packaging/Training%2C+Talks+and+Videos