Today OpenQA mostly runs on virtual machines, but it can get really tricky to find bugs triggered by real hardware. There are only a few interfaces required to interact with a machine, though:
1) HDMI
2) USB keyboard
3) CD-ROM
4) Remote Power Switching
For 1), I ordered a few HDMI frame grabbers that will only arrive after Hack Week. 2) and 3) should be possible to implement using the USB gadget support in Linux, which a lot of ARM devices support - I can definitely donate a BeagleBone Black to whoever is interested. Power switching (4) is a solved problem.
With all these bits in place and a bit of plumbing we should be able to run tests on actual hardware, hopefully extending our test coverage to more tricky scenarios.
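To make 2) and 3) a bit more concrete: on a board with USB device/OTG support like the BeagleBone Black, the Linux USB gadget configfs interface is basically all that is needed. Below is only a minimal sketch - the gadget name, vendor/product IDs and the ISO path are placeholders, it assumes the libcomposite module is loaded and it has to run as root - showing how a keyboard function and a read-only mass-storage LUN backed by an ISO could be exposed to the machine under test:

```python
#!/usr/bin/env python3
"""Minimal sketch: expose a USB keyboard + CD-ROM-style mass-storage gadget
via the Linux configfs interface. Assumes a board with USB device/OTG
support (e.g. a BeagleBone Black), the libcomposite module loaded and root
privileges. The gadget name "g1" and the ISO path are placeholders."""
import os

GADGET = "/sys/kernel/config/usb_gadget/g1"            # arbitrary gadget name
ISO = "/var/lib/openqa/share/factory/iso/install.iso"  # placeholder path

# Standard USB boot-keyboard report descriptor (the one from the kernel's
# HID gadget documentation).
KBD_REPORT_DESC = bytes.fromhex(
    "05010906a101050719e029e715002501"
    "75019508810295017508810395057501"
    "05081901290591029501750391039506"
    "7508150025650507190029658100c0"
)

def write(path, value):
    """Write a configfs attribute, creating parent directories as needed."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb" if isinstance(value, bytes) else "w") as f:
        f.write(value)

# Gadget identity (the IDs below are the ones commonly used in examples)
write(f"{GADGET}/idVendor", "0x1d6b")    # Linux Foundation
write(f"{GADGET}/idProduct", "0x0104")   # Multifunction composite gadget
write(f"{GADGET}/strings/0x409/manufacturer", "openQA")
write(f"{GADGET}/strings/0x409/product", "openQA hardware worker")
write(f"{GADGET}/configs/c.1/strings/0x409/configuration", "keyboard + cdrom")

# HID keyboard function
write(f"{GADGET}/functions/hid.usb0/protocol", "1")
write(f"{GADGET}/functions/hid.usb0/subclass", "1")
write(f"{GADGET}/functions/hid.usb0/report_length", "8")
write(f"{GADGET}/functions/hid.usb0/report_desc", KBD_REPORT_DESC)

# Mass-storage function backed by an ISO, presented as a read-only CD-ROM
write(f"{GADGET}/functions/mass_storage.usb0/lun.0/cdrom", "1")
write(f"{GADGET}/functions/mass_storage.usb0/lun.0/ro", "1")
write(f"{GADGET}/functions/mass_storage.usb0/lun.0/file", ISO)

# Bind both functions into configuration c.1
for fn in ("hid.usb0", "mass_storage.usb0"):
    link = f"{GADGET}/configs/c.1/{fn}"
    if not os.path.exists(link):
        os.symlink(f"{GADGET}/functions/{fn}", link)

# Attach the gadget to the first available USB device controller
write(f"{GADGET}/UDC", os.listdir("/sys/class/udc")[0])
```

Tearing the gadget down again is just the reverse: write an empty string to UDC, then remove the symlinks and the directories.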
Succeeded in emulating USB mass storage, a USB keyboard and a serial port. Also made an RPC interface for it, so you can use it from code running on any machine. See the demo video.
The code is in a GitHub branch.
This allows booting any machine off a virtual USB CD-ROM and automating it with keystrokes, as sketched below.
Still to do: emulate a tablet with absolute pointer coordinates and capture screen output.
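For the keystroke part: once the HID function is bound, the worker side gets a character device (typically /dev/hidg0), and "typing" is just writing 8-byte boot-keyboard reports to it. A rough sketch, assuming the gadget from above and leaving the RPC layer out:

```python
#!/usr/bin/env python3
"""Sketch: send single keystrokes through a HID keyboard gadget.
Assumes the gadget set up above is bound and shows up as /dev/hidg0."""
import time

HIDG = "/dev/hidg0"  # character device created by the hid gadget function

def press(keycode, modifiers=0):
    """Send one USB boot-keyboard report (key down), then release all keys.
    Report layout: [modifiers, reserved, key1..key6]."""
    with open(HIDG, "wb", buffering=0) as dev:
        dev.write(bytes([modifiers, 0, keycode, 0, 0, 0, 0, 0]))  # key down
        time.sleep(0.02)
        dev.write(bytes(8))                                        # all keys up

# USB HID usage IDs (see the HID usage tables): 0x28 = Enter, 0x04 = 'a'
press(0x28)  # hit Enter, e.g. to accept a boot menu entry
press(0x04)  # type the letter 'a'
```

The RPC mentioned above would then essentially wrap calls like this so that keystrokes can be sent from whatever machine runs the test code.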
This project is part of:
Hack Week 12
Comments
over 9 years ago by ancorgs
I guess you are aware of the IPMI and KVM2USB openQA backends and the discussions about real hardware support on the openQA mailing list (sorry, I don't know where to find the archives) that led to trying IPMI as a first approach.
over 9 years ago by algraf
I just realized that we don't need to jump through hoops with the STM32 board or other cruftiness - instead we can just use the USB gadget support in Linux! So I guess the big task here would be to write a driver for the frame grabber, but that one will still take a few weeks to arrive :(.
That means for now, the main goal of this project would be to script up working USB HID and mass storage emulation by leveraging the already existing Linux infrastructure. Then add some plumbing to hook it up to OpenQA. Then wait until the HDMI grabber arrives ;).
over 9 years ago by bmwiedemann
Btw: from my experience with the kvm2usb, most hardware-specific bugs found are in the graphics drivers (apart from things like having 2 Ethernet ports with one being unconnected, which can be emulated in KVM).
Similar Projects
Make more sense of openQA test results using AI by livdywan
Description
AI has the potential to help with something many of us spend a lot of time doing: making sense of openQA logs when a job fails.
User Story
Allison Average has a puzzled look on their face while staring at log files that seem to make little sense. Is this a known issue, something completely new or maybe related to infrastructure changes?
Goals
- Leverage a chat interface to help Allison
- Create a model from scratch based on data from openQA
- Proof of concept for automated analysis of openQA test results
Bonus
- Use AI to suggest solutions to merge conflicts
- This would need a merge conflict editor that can suggest solving the conflict
- Use image recognition for needles
Resources
Timeline
Day 1
- Conversing with open-webui to teach me how to create a model based on openQA test results
- Asking for example code using TensorFlow in Python
- Discussing log files to explore what to analyze
- Drafting a new project called Testimony (based on Implementing a containerized Python action) - the project name was also suggested by the assistant
Day 2
- Using NotebookLM (Gemini) to produce conversational versions of blog posts
- Researching the possibility of creating a project logo with AI
- Asking open-webui and people with prior experience for advice, and conducting a web search
Highlights
- I briefly tested and compared models to see if they would make me more productive. Between llama, gemma and mistral there was no striking difference in the results for my case.
- Convincing the chat interface to produce code specific to my use case required very explicit instructions.
- Asking for advice on how to use open-webui itself better was frustratingly unfruitful, both for trivial and for more advanced questions.
- Documentation on the source materials used by LLMs, and tools for checking this, seems virtually non-existent - specifically whether a logo can be generated based on material under particular licenses
Outcomes
- Chat-interface-supported development provides good starting points, and open-webui, being open source, is more flexible than Gemini, although it is currently missing some fancy features such as grounding and generated podcasts.
- Allison still has to be very experienced with openQA to use a chat interface for test review. Publicly available system prompts would make that easier, though.
Learn obs/ibs sync tool by xlai
Description
Once images/repos are built in IBS/OBS, there is a tool to sync them from IBS/OBS to the openQA asset directory and trigger openQA jobs accordingly.
Goals
Check how the tool is implemented, and become capable of adding/modifying the images/repos we need in the future by ourselves.
Resources
- https://github.com/os-autoinst/openqa-trigger-from-obs
- https://gitlab.suse.de/openqa/openqa-trigger-from-ibs-plugin/-/tree/master?ref_type=heads
Set up a new openQA on a more powerful server by JNa
Description
- Currently the local openQA storage is insufficient
Goals
- Migrate to a more powerful machine
Resources
- Service Rainbow
OpenQA Golang API client by hilchev
Description
I would like to make a simple CLI tool to communicate with the OpenQA API (a rough sketch of the kind of request involved follows below)
Goals
- OpenQA has a ton of information that is hard to get via the UI. A tool like this would make my life easier :)
- Would potentially make it easier in the future to make UI changes without Perl.
- Improve my Golang skills
Resources
- https://go.dev/doc/
- https://openqa.opensuse.org/api
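To illustrate what such a client ends up doing, here is a quick sketch of the request involved - in Python only for brevity, the actual tool would of course be written in Go. The endpoint and query parameter are assumptions based on the API documentation linked above; read access on the public instance should not require an API key:

```python
#!/usr/bin/env python3
"""Sketch: list a few recent jobs from the public openQA instance.
The "limit" parameter is an assumption taken from the API documentation."""
import requests

resp = requests.get(
    "https://openqa.opensuse.org/api/v1/jobs",
    params={"limit": 5},
    timeout=30,
)
resp.raise_for_status()

# The job list comes back as JSON under a "jobs" key
for job in resp.json().get("jobs", []):
    print(job["id"], job.get("result"), job.get("name"))
```

The same route accepts further filters (e.g. by state or result), which is where a CLI tool could add convenient flags on top.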
New features in openqa-trigger-from-obs for openQA by jlausuch
Description
Implement new features in openqa-trigger-from-obs to make the XML configuration more flexible.
Goals
One of the features to be implemented:
- Possibility to define "VERSION" and "ARCH" variables per flavor instead of globally.
Resources
- https://github.com/os-autoinst/openqa-trigger-from-obs