There is a running gag built into openQA called interactive mode. It goes like this: "if you need the interactive mode, it's broken". The reason: the so-called interactive mode is a collection of hacks that in theory make it possible to update needles in a running test.
In practice it is a UI disaster that almost never works. So the goal of this hackweek project is to get rid of it and instead build a real control channel from the webui into the backend, allowing tests to be written on the fly, including needle creation and updates. Easy as that.
This project is part of: Hack Week 14, Hack Week 16
Comments
about 7 years ago by coolo
Current state of affairs:
To enable interactive mode:
- Javascript of the browser POSTs to /worker//commands of the webui
- The webui does a DBUS call to the WebSocket service
- The WebSocket service will issue a command to the worker through WS
- The worker POSTs to the commands process of isotovideo
- That one will call into the main loop, which then sets the variable
The problem is not the complexity of this in general - the problem is the missing return channel / error handling.
If isotovideo is busy with a long-running command (e.g. type_string $theworld), it won't react to command requests.
Having a direct WS line between the JS and the commands process sounds like the easiest option to handle. It would remove the need to adapt three more places whenever I want to change the workflow.
So the first step: add a way to create this channel, checking permissions in the process.
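To make the idea concrete, here is a minimal sketch of what such a direct channel could look like on the commands side. It is written in Python with the third-party websockets package purely for illustration (the real commands process is Perl); the port, token handling and run_command dispatch are made up:

```python
import asyncio
import json

import websockets  # third-party "websockets" package, recent version assumed

EXPECTED_TOKEN = "token-issued-by-the-webui"  # placeholder for a real per-job secret


async def handle_client(ws):
    # The first message has to authenticate; the webui would hand this token
    # to the JavaScript when the developer session is opened.
    hello = json.loads(await ws.recv())
    if hello.get("token") != EXPECTED_TOKEN:
        await ws.close(code=4401, reason="unauthorized")
        return

    # Every further message is a command, answered on the same socket -
    # exactly the return channel / error reporting the current chain lacks.
    async for raw in ws:
        cmd = json.loads(raw)
        try:
            result = run_command(cmd)  # hypothetical dispatch into the main loop
            await ws.send(json.dumps({"ok": True, "result": result}))
        except Exception as err:
            await ws.send(json.dumps({"ok": False, "error": str(err)}))


def run_command(cmd):
    # Stand-in for calling into isotovideo's main loop.
    return f"executed {cmd.get('cmd')}"


async def main():
    async with websockets.serve(handle_client, "localhost", 20013):  # port made up
        await asyncio.Future()  # run forever


asyncio.run(main())
```

The point of the sketch is the shape of the protocol, not the library: every command gets an immediate ok/error reply on the same connection, so the JS never has to poll through webui, WebSocket service and worker to learn what happened.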
Similar Projects
New features in openqa-trigger-from-obs for openQA by jlausuch
Description
Implement new features in openqa-trigger-from-obs to make the XML more flexible.
Goals
One of the features to be implemented:
- Possibility to define "VERSION" and "ARCH" variables per flavor instead of globally (see the sketch below).
Resources
https://github.com/os-autoinst/openqa-trigger-from-obs
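A rough sketch of the per-flavor override idea in Python; the XML layout shown here is hypothetical and not the actual openqa-trigger-from-obs template format, it only illustrates falling back from flavor-level values to global ones:

```python
import xml.etree.ElementTree as ET

# Hypothetical layout: global version/arch on the root, optional overrides per flavor.
CONFIG = """
<product distri="opensuse" version="Tumbleweed" arch="x86_64">
  <flavor name="DVD"/>
  <flavor name="MicroOS-Image" version="MicroOS" arch="aarch64"/>
</product>
"""

root = ET.fromstring(CONFIG)
for flavor in root.findall("flavor"):
    # Use the flavor's own VERSION/ARCH when present, otherwise the global one.
    version = flavor.get("version", root.get("version"))
    arch = flavor.get("arch", root.get("arch"))
    print(f"{flavor.get('name')}: VERSION={version} ARCH={arch}")
```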
Learn obs/ibs sync tool by xlai
Description
Once images/repos are built in IBS/OBS, there is a tool that syncs them from IBS/OBS to the openQA asset directory and triggers openQA jobs accordingly (the job-triggering part is sketched below).
Goals
Check how the tool is implemented, and become able to add/modify the images/repos we need in the future ourselves.
Resources
- https://github.com/os-autoinst/openqa-trigger-from-obs
- https://gitlab.suse.de/openqa/openqa-trigger-from-ibs-plugin/-/tree/master?ref_type=heads
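Not how the tool itself is implemented, but a minimal sketch of the job-triggering half against openQA's documented /api/v1/isos route; real instances require signed requests with an API key/secret (omitted here), and the instance URL and parameter values are made up:

```python
import requests

OPENQA = "http://openqa.example.local"  # made-up instance URL

# Parameters an ISO post typically carries; values are placeholders.
params = {
    "DISTRI": "opensuse",
    "VERSION": "Tumbleweed",
    "FLAVOR": "DVD",
    "ARCH": "x86_64",
    "ISO": "openSUSE-Tumbleweed-DVD-x86_64-Snapshot20240101-Media.iso",
    "BUILD": "20240101",
}

# A real client signs the request with X-API-Key/X-API-Hash headers;
# openqa-cli or the openQA client libraries handle that for you.
response = requests.post(f"{OPENQA}/api/v1/isos", data=params, timeout=60)
response.raise_for_status()
print(response.json())  # openQA replies with the ids of the scheduled jobs
```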
Hack on isotest-ng - a Rust port of isotovideo (os-autoinst, aka the test runner of openQA) by szarate
Description
Some time ago, I managed to convince ByteOtter to hack something that resembles isotovideo, but in Rust - not because I believe that Perl is dead, but because there are certain limitations in the Perl code (how it was written), and it's always hard to add new functionality when it comes to implementing a new backend or fixing bugs (along with people complaining that Perl is dead and that they don't like it).
In reality, I wanted to see if this could be done, and ByteOtter proved that it could be, while doing an amazing job at hacking a VNC console and helping me understand better what RuPerl needs to work.
I plan to keep working on this for the next few years, and while I don't aim for feature completion or for replacing isotovideo with isotest-ng (name in progress), I do plan to be able to use it on a daily basis, using specialized tooling with interfaces instead of reimplementing everything in the backend.
Todo
- Add `make` targets for testability, e.g. "spawn qemu and type"
- Add an image search matching algorithm (see the sketch after this list)
- Add a Null test distribution provider
- Add a Perl Test Distribution Provider
- Fix unittests https://github.com/os-autoinst/isotest-ng/issues/5
- Research how to add new hypervisors/bare metal to OpenTofu
- Add an interface to openQA cli
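On the image search item referenced above: openQA's needle matching essentially boils down to template matching, so here is a small language-agnostic illustration of that idea in Python with OpenCV, using made-up file names and a placeholder threshold, which could serve as a reference while porting:

```python
import cv2

# Load a screenshot and a needle area (hypothetical file names).
screen = cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE)
needle = cv2.imread("needle_area.png", cv2.IMREAD_GRAYSCALE)

# Slide the needle over the screenshot and compute a similarity map.
result = cv2.matchTemplate(screen, needle, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(result)

# os-autoinst accepts a match above a similarity threshold;
# 0.96 here is just a placeholder value.
if best_score >= 0.96:
    print(f"match at {best_loc} with similarity {best_score:.3f}")
else:
    print(f"no match (best similarity {best_score:.3f})")
```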
Goals
- Implement at least one of the above, prepare proposals for GSoC
- Boot a system via its BMC
Resources
See https://github.com/os-autoinst/isotest-ng
Set up a new openQA on a more powerful server by JNa
Description
- Currently the local openQA storage is insufficient
Goals
- Migrate to a more powerful machine
Resources
- Service Rainbow
Make more sense of openQA test results using AI by livdywan
Description
AI has the potential to help with something many of us spend a lot of time on: making sense of openQA logs when a job fails.
User Story
Allison Average has a puzzled look on their face while staring at log files that seem to make little sense. Is this a known issue, something completely new or maybe related to infrastructure changes?
Goals
- Leverage a chat interface to help Allison
- Create a model from scratch based on data from openQA
- Proof of concept for automated analysis of openQA test results
Bonus
- Use AI to suggest solutions to merge conflicts
- This would need a merge conflict editor that can suggest solving the conflict
- Use image recognition for needles
Resources
Timeline
Day 1
- Conversing with open-webui to teach me how to create a model based on openQA test results
- Asking for example code using TensorFlow in Python (a toy sketch follows after this list)
- Discussing log files to explore what to analyze
- Drafting a new project called Testimony (based on Implementing a containerized Python action) - the project name was also suggested by the assistant
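As a companion to the "example code using TensorFlow in Python" item above, here is a toy sketch of a text classifier over failure-log snippets; the data, labels and architecture are made up for illustration and are not what the assistant produced:

```python
import tensorflow as tf

# Toy training data: log snippets labelled 1 for "known infrastructure issue",
# 0 for "product/test issue" - purely made up for illustration.
snippets = [
    "connection refused while downloading asset",
    "needle 'grub_menu' not matched after 30s",
    "cache service timed out",
    "zypper returned exit code 104",
]
labels = [1, 0, 1, 0]

# Turn raw strings into token ids inside the model itself.
vectorize = tf.keras.layers.TextVectorization(max_tokens=5000, output_sequence_length=32)
vectorize.adapt(snippets)

model = tf.keras.Sequential([
    vectorize,
    tf.keras.layers.Embedding(input_dim=5000, output_dim=16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(tf.constant(snippets), tf.constant(labels), epochs=10, verbose=0)

# Score an unseen snippet: values near 1 suggest an infrastructure issue.
print(model.predict(tf.constant(["asset download timed out"])))
```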
Day 2
- Using NotebookLM (Gemini) to produce conversational versions of blog posts
- Researching the possibility of creating a project logo with AI
- Asking open-webui, persons with prior experience and conducting a web search for advice
Highlights
- I briefly tested and compared models to see if they would make me more productive. Between llama, gemma and mistral there was no striking difference in the results for my case.
- Convincing the chat interface to produce code specific to my use case required very explicit instructions.
- Asking for advice on how to use open-webui itself better was frustratingly unfruitful both in trivial and more advanced regards.
- Documentation on the source materials used by LLMs, and on tools for checking this, seems virtually non-existent - specifically whether a logo can be generated based on material under particular licenses
Outcomes
- Chat-interface-supported development provides good starting points, and open-webui, being open source, is more flexible than Gemini, although it currently lacks some fancy features such as grounding and generated podcasts.
- Allison still has to be very experienced with openQA to use a chat interface for test review. Publicly available system prompts would make that easier, though.