Introduction:
As a QA automation tester in Product QA for SLES and SUSE-Manager, the systems under test (SUTs) I work with (like SLE-12-SP2 beta) change every day: new packages and patches are merged into SP2, files change, and so on.
Problem: we don't have a tool that gives us metadata about the system, the way machinery does.
machinery inspect SUT
machinery show SUT
Problem: what changed in the system between SLE-12-SP2 BUILD 8000 and BUILD 8400? (Oh, I lost the mail from the release manager!)
machinery compare
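A minimal sketch of how such a build-to-build comparison could be scripted around the machinery CLI; the build hostnames, description names, and the small wrapper function are assumptions for illustration:
```
import subprocess

def scan(host, name):
    # Inspect the SUT and store its system description under the given name
    subprocess.run(["machinery", "inspect", host, "--name", name], check=True)

# Hypothetical hostnames for the two builds under test
scan("sut-build-8000", "SLE-12-SP2-BUILD8000")
scan("sut-build-8400", "SLE-12-SP2-BUILD8400")

# Diff the two stored descriptions
subprocess.run(["machinery", "compare",
                "SLE-12-SP2-BUILD8000", "SLE-12-SP2-BUILD8400"], check=True)
```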
Problem: I found a regression with the systemd test suite on SLEnkins:
the test suite fails on BUILD 7400, but BUILD 7399 is still OK.
What exactly has changed, for the package but also for the whole system? -> Machinery
Problem: as QA I found a bug in NFS and have to report it.
Machinery can help me fill in the bug report, giving me exact information about really different systems (SLES-12-SP1, openSUSE, etc.): what has changed with NFS? Or on the Fedora side?
RESULTS
First I want to thank the machinery team, especially Mauro and Manuel, who supported me. During this Hack Week I integrated machinery for QA automation into the library https://github.com/okirch/susetest and into the SLEnkins automation framework.
This works really nicely for scanning systems under test (SLES, openSUSE).
For QA automation, machinery works well and I achieved what I was expecting! :)
I can scan and compare systems. These could be Fedora, Debian, Arch Linux, whatever, against an openSUSE or a SLES.
From a QA, development, and even release management perspective, this is awesome.
NEW HACK!:
A revolutionary perspective for QA automation testing with Machinery
I'm really glad I can show you this:
https://slenkins.suse.de/jenkins/job/suite-machinery/32/console
In this example, I compare a SLES-12-SP2-LATEST with a build from three or four builds earlier.
The results are amazing.
With machinery I managed to compare different builds of SLES-12-SP2; thanks to the scopes, I can see exactly what has changed and what has not. I can compare a SLE-12-SP2-GNOME with a SLE-12-SP2-Default and track the changes perfectly.
Concrete examples are here :
A scan of a system, with the machinery console log (after the tests are executed): https://slenkins.suse.de/jenkins/view/Test%20suites/job/suite-machinery/13/console
Or with the inspect command redirected to a file.txt in the Jenkins workspace:
```
setup()
machinery_sut = machinery(sut)
try:
    sometest(sut)
finally:
    # Collect and compare the system metadata once the tests have run
    machinery_sut.inspect()
    machinery_sut.show("tests-machinery")
    machinery_sut.compare("SLE-12-SP2-BUILDXXX-GNOME")
```
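For the redirection itself, here is a minimal sketch of the idea using the machinery CLI directly; the report file name is a placeholder:
```
import subprocess

# Save the 'machinery show' report as a text file in the Jenkins workspace
with open("machinery-report.txt", "w") as report:
    subprocess.run(["machinery", "show", "tests-machinery"],
                   stdout=report, check=True)
```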
This project is part of:
Hack Week 14
Comments
over 9 years ago by e_bischoff
For point 2), snapshots would be an alternative. That does not mean using machinery for this is not interesting - on the contrary!
Similar Projects
Song Search with CLAP by gcolangiuli
Description
Contrastive Language-Audio Pretraining (CLAP) is an open-source library that enables the training of a neural network on both audio and text descriptions, making it possible to search for audio using a text input. Several pre-trained models for song search are already available on huggingface.
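A minimal sketch of the core idea, assuming the laion_clap package and a couple of local audio files (the file names and query text are placeholders):
```
import numpy as np
import laion_clap

# Load a pre-trained CLAP model (load_ckpt downloads a default checkpoint)
model = laion_clap.CLAP_Module(enable_fusion=False)
model.load_ckpt()

# Embed songs and a free-text query into the same vector space
songs = ["song_a.wav", "song_b.wav"]
audio_emb = model.get_audio_embedding_from_filelist(x=songs, use_tensor=False)
text_emb = model.get_text_embedding(["a calm acoustic guitar ballad"], use_tensor=False)

# Rank the songs by cosine similarity to the query
scores = (audio_emb @ text_emb.T).squeeze()
scores /= np.linalg.norm(audio_emb, axis=1) * np.linalg.norm(text_emb)
print(songs[int(np.argmax(scores))])
```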
Goals
Evaluate how CLAP can be used for song searching and determine which types of queries yield the best results by developing a Minimum Viable Product (MVP) in Python. Based on the results of this MVP, future steps could include:
- Music Tagging;
- Free text search;
- Integration with an LLM (for example, with MCP or the OpenAI API) for music suggestions based on your own library.
The code for this project will be entirely written using AI to better explore and demonstrate AI capabilities.
Result
In this MVP we implemented:
- Async song analysis with the CLAP model
- Free Text Search of the songs
- Similar song search based on vector representation
- Containerised version with web interface
We also documented what went well and what can be improved in the use of AI.
You can have a look at the result here:
Future work could focus on improving the performance and stability of the analysis.
References
- CLAP: The main model being researched;
- huggingface: Pre-trained models for CLAP;
- Free Music Archive: Creative Commons songs that can be used for testing;
Liz - Prompt autocomplete by ftorchia
Description
Liz is the Rancher AI assistant for cluster operations.
Goals
We want to help users when sending new messages to Liz, by adding an autocomplete feature to complete their requests based on the context.
Example:
- User prompt: "Can you show me the list of p"
- Autocomplete suggestion: "Can you show me the list of p...od in local cluster?"
Example:
- User prompt: "Show me the logs of #rancher-"
- Chat console: It shows a drop-down widget, next to the # character, with the list of available pod names starting with "rancher-".
Technical Overview
- The AI agent should expose a new ws/autocomplete endpoint to proxy autocomplete messages to the LLM (see the sketch after this list).
- The UI extension should be able to display prompt suggestions and allow users to apply the autocomplete to the Prompt via keyboard shortcuts.
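A minimal sketch of what such a proxy endpoint could look like, assuming a FastAPI-based agent; complete_with_llm is a hypothetical stand-in for the actual LLM call:
```
from fastapi import FastAPI, WebSocket

app = FastAPI()

async def complete_with_llm(prefix: str) -> str:
    # Hypothetical: forward the partial prompt to the LLM, return a completion
    return prefix + "od in the local cluster?"

@app.websocket("/ws/autocomplete")
async def autocomplete(ws: WebSocket) -> None:
    await ws.accept()
    while True:
        # Each incoming message is the user's partial prompt
        prefix = await ws.receive_text()
        await ws.send_text(await complete_with_llm(prefix))
```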
Resources
Enhance git-sha-verify: A tool to checkout validated git hashes by gpathak
Description
git-sha-verify is a simple shell utility to verify and check out trusted git commits signed with a GPG key. This tool helps ensure that only authorized and validated commit hashes are checked out from a git repository, supporting better code integrity and security within the workflow (see the sketch below).
Supports:
- Verifying the authenticity of commits signed with a GPG key
- Checking out trusted commits
Ideal for teams and projects where the integrity of git history is crucial.
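The underlying idea can be sketched in a few lines of Python around plain git; the function name and arguments here are illustrative, not the tool's actual interface:
```
import subprocess

def checkout_if_signed(repo, sha):
    # 'git verify-commit' exits non-zero unless the commit has a valid GPG signature
    verified = subprocess.run(
        ["git", "-C", repo, "verify-commit", sha],
        capture_output=True, text=True,
    )
    if verified.returncode != 0:
        return False  # refuse to check out an unverified commit
    subprocess.run(["git", "-C", repo, "checkout", sha], check=True)
    return True
```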
Goals
A minimal Python port of the shell script exists as a pull request.
The goal of this hackweek is to:
- DONE: Add more unit tests
- New and more tests can be added later
- Partially DONE: Make the Python code modular
- DONE: Add code coverage if possible
Resources
- Link to GitHub Repository: https://github.com/openSUSE/git-sha-verify
Improve/rework household chore tracker `chorazon` by gniebler
Description
I wrote a household chore tracker named chorazon, which is meant to be deployed as a web application in the household's local network.
It features the ability to set up different (so far only weekly) schedules per task and per person, where tasks may span several days.
There are "tokens", which can be collected by users. Tasks can (and usually will) have rewards configured, yielding a certain number of tokens. The idea is that tokens can later be redeemed for (surprise) gifts, but this is not implemented yet. (So right now one needs to edit the DB manually to subtract tokens when they're redeemed.)
Days are not rolled over automatically, to allow for task completion control.
We used it in my household for several months, with mixed success. There are many limitations in the system that would warrant a revisit.
It's written using the Pyramid Python framework with URL traversal, ZODB as the data store and Web Components for the frontend.
Goals
- Add admin screens for users, tasks and schedules
- Add models, pages etc. to allow redeeming tokens for gifts/surprises (see the sketch after this list)
- …?
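A minimal sketch of what the redemption logic could look like, assuming a simple token-balance model; the class and field names are illustrative, not chorazon's actual schema:
```
from dataclasses import dataclass

@dataclass
class User:
    name: str
    tokens: int = 0

@dataclass
class Gift:
    name: str
    cost: int

def redeem(user, gift):
    # Subtract the gift's cost from the user's balance, if they can afford it
    if user.tokens < gift.cost:
        return False
    user.tokens -= gift.cost
    return True
```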
Resources
tbd (Gitlab repo)
Improve chore and screen time doc generator script `wochenplaner` by gniebler
Description
I wrote a little Python script to generate PDF docs, which can be used to track daily chore completion and screen time usage for several people, with one page per person/week.
I named this script wochenplaner and have been using it for a few months now.
It needs some improvements and adjustments in how the screen time should be tracked and how chores are displayed.
Goals
- Fix chore field separation lines
- Change the screen time tracking logic from "global" (week-long) to daily subtraction with weekly addition of remainders, which is more intuitive than the current "weekly time budget" method (see the sketch after this list)
- Add logic to fill in chore fields/lines, ideally with pictures, falling back to text.
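A minimal sketch of the remainder logic as I read the goal above; the daily allowance value and function name are assumptions:
```
def weekly_screen_time(daily_allowance, used_per_day):
    # Each day's unused minutes (allowance minus usage) carry over;
    # the weekly total is the sum of these daily remainders.
    carried = 0
    for used in used_per_day:
        carried += max(daily_allowance - used, 0)
    return carried

# Example: 60 minutes per day, one week of actual usage in minutes
print(weekly_screen_time(60, [45, 60, 30, 60, 50, 0, 20]))  # -> 155
```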
Resources
tbd (Gitlab repo)