Project Description

Everything we do at SUSE requires a certain amount of energy. This energy has a cost and also causes a certain amount of CO2 emissions. In particular, as the Kernel QA team, we run kernel testing quite often, and part of that energy consumption could be saved by optimizing LTP test execution.

In this project we use a new parallel execution implementation for LTP to show how the software development process can save energy and reduce CO2 emissions inside a software company.

Goal for this Hackweek

We want to answer the following questions:

  • How many tests can run in parallel?
  • How much energy do we save per LTP execution in a virtualized system such as openQA?
  • Can we improve the parallelization model to save more energy?

Resources

  • runltp-ng: https://github.com/linux-test-project/runltp-ng/
  • runltp-ng with parallelization support: https://github.com/acerv/runltp-ng/tree/parallel_coroutines

Jan 31

I had some issues with runltp-ng parallel execution, due to the choice of moving the UI thread into the coroutines thread: with the previous code, tests took 30% longer to complete, but now the UI runs in its own thread again. I also created a script to check how many tests can run in parallel in each testing suite.

```
Suite: can Total tests: 3 Parallelizable tests: 2
Suite: cap_bounds Total tests: 1 Parallelizable tests: 0
Suite: commands Total tests: 37 Parallelizable tests: 0
Suite: connectors Total tests: 1 Parallelizable tests: 0
Suite: containers Total tests: 86 Parallelizable tests: 0
Suite: controllers Total tests: 346 Parallelizable tests: 1
Suite: cpuhotplug Total tests: 6 Parallelizable tests: 0
Suite: crashme Total tests: 4 Parallelizable tests: 0
Suite: crypto Total tests: 10 Parallelizable tests: 6
Suite: cve Total tests: 77 Parallelizable tests: 5
Suite: dio Total tests: 30 Parallelizable tests: 0
Suite: dmathreaddiotest Total tests: 7 Parallelizable tests: 0
Suite: fcntl-locktests Total tests: 1 Parallelizable tests: 0
Suite: filecaps Total tests: 1 Parallelizable tests: 0
Suite: fs Total tests: 68 Parallelizable tests: 0
Suite: fs_bind Total tests: 95 Parallelizable tests: 0
Suite: fspermssimple Total tests: 18 Parallelizable tests: 0
Suite: fs_readonly Total tests: 55 Parallelizable tests: 0
Suite: fsx Total tests: 1 Parallelizable tests: 0
Suite: hugetlb Total tests: 50 Parallelizable tests: 0
Suite: hyperthreading Total tests: 2 Parallelizable tests: 0
Suite: ima Total tests: 9 Parallelizable tests: 0
Suite: input Total tests: 6 Parallelizable tests: 0
Suite: io Total tests: 2 Parallelizable tests: 1
Suite: ipc Total tests: 8 Parallelizable tests: 0
Suite: irq Total tests: 1 Parallelizable tests: 1
Suite: kernel_misc Total tests: 16 Parallelizable tests: 0
Suite: kvm Total tests: 1 Parallelizable tests: 0
Suite: ltp-aio-stress Total tests: 54 Parallelizable tests: 0
Suite: ltp-aiodio.part1 Total tests: 140 Parallelizable tests: 0
Suite: ltp-aiodio.part2 Total tests: 83 Parallelizable tests: 0
Suite: ltp-aiodio.part3 Total tests: 48 Parallelizable tests: 0
Suite: ltp-aiodio.part4 Total tests: 57 Parallelizable tests: 0
Suite: math Total tests: 10 Parallelizable tests: 0
Suite: mm Total tests: 75 Parallelizable tests: 2
Suite: net.features Total tests: 62 Parallelizable tests: 0
Suite: net.ipv6 Total tests: 11 Parallelizable tests: 0
Suite: net.ipv6_lib Total tests: 6 Parallelizable tests: 2
Suite: net.multicast Total tests: 4 Parallelizable tests: 0
Suite: net.nfs Total tests: 84 Parallelizable tests: 0
Suite: net.rpc_tests Total tests: 51 Parallelizable tests: 0
Suite: net.sctp Total tests: 41 Parallelizable tests: 0
Suite: net.tcp_cmds Total tests: 21 Parallelizable tests: 0
Suite: net.tirpc_tests Total tests: 41 Parallelizable tests: 0
Suite: net_stress.appl Total tests: 10 Parallelizable tests: 0
Suite: netstress.brokenip Total tests: 11 Parallelizable tests: 0
Suite: net_stress.interface Total tests: 25 Parallelizable tests: 0
Suite: netstress.ipsecdccp Total tests: 104 Parallelizable tests: 0
Suite: netstress.ipsecicmp Total tests: 86 Parallelizable tests: 0
Suite: netstress.ipsecsctp Total tests: 104 Parallelizable tests: 0
Suite: netstress.ipsectcp Total tests: 104 Parallelizable tests: 0
Suite: netstress.ipsecudp Total tests: 106 Parallelizable tests: 0
Suite: net_stress.multicast Total tests: 24 Parallelizable tests: 0
Suite: net_stress.route Total tests: 14 Parallelizable tests: 0
Suite: nptl Total tests: 1 Parallelizable tests: 0
Suite: numa Total tests: 20 Parallelizable tests: 2
Suite: powermanagementtests Total tests: 5 Parallelizable tests: 0
Suite: powermanagementtests_exclusive Total tests: 5 Parallelizable tests: 0
Suite: pty Total tests: 9 Parallelizable tests: 1
Suite: s390x_tests Total tests: 1 Parallelizable tests: 0
Suite: sched Total tests: 11 Parallelizable tests: 0
Suite: scsi_debug.part1 Total tests: 140 Parallelizable tests: 0
Suite: securebits Total tests: 3 Parallelizable tests: 0
Suite: smack Total tests: 10 Parallelizable tests: 0
Suite: smoketest Total tests: 15 Parallelizable tests: 5
Suite: staging Total tests: 1 Parallelizable tests: 0
Suite: syscalls Total tests: 1384 Parallelizable tests: 526
Suite: syscalls-ipc Total tests: 61 Parallelizable tests: 26
Suite: tpm_tools Total tests: 12 Parallelizable tests: 0
Suite: tracing Total tests: 9 Parallelizable tests: 0
Suite: uevent Total tests: 3 Parallelizable tests: 0
Suite: watchqueue Total tests: 9 Parallelizable tests: 9

Total tests: 4017 Parallelizable tests: 589

14.66% of the tests are parallelizable
```
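A minimal sketch of the idea behind such a script, in Python, assuming a standard LTP installation under /opt/ltp with docparse metadata in metadata/ltp.json, and treating a few metadata flags as "not parallelizable" (the real rule in the parallel runltp-ng branch is more elaborate):

```python
#!/usr/bin/env python3
# Rough per-suite count of parallelizable LTP tests.
# Assumptions: LTP installed in /opt/ltp, docparse metadata available as
# metadata/ltp.json with a "tests" dictionary, and a test considered
# parallelizable when none of BLOCKING_FLAGS is set. This is only a sketch,
# not the rule actually used by runltp-ng.
import json
import os

LTP_DIR = "/opt/ltp"
RUNTEST_DIR = os.path.join(LTP_DIR, "runtest")
BLOCKING_FLAGS = ("needs_root", "needs_device", "needs_kconfigs", "save_restore")

with open(os.path.join(LTP_DIR, "metadata", "ltp.json")) as fh:
    tests_meta = json.load(fh).get("tests", {})

def parallelizable(binary):
    meta = tests_meta.get(binary)
    if meta is None:
        return False  # no metadata (e.g. old API test): assume serial only
    return not any(meta.get(flag) for flag in BLOCKING_FLAGS)

grand_total = grand_par = 0
for suite in sorted(os.listdir(RUNTEST_DIR)):
    total = par = 0
    with open(os.path.join(RUNTEST_DIR, suite)) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            fields = line.split(maxsplit=1)  # "<test name> <command line>"
            total += 1
            par += parallelizable(fields[-1].split()[0])
    grand_total += total
    grand_par += par
    print(f"Suite: {suite} Total tests: {total} Parallelizable tests: {par}")

print(f"\nTotal tests: {grand_total} Parallelizable tests: {grand_par}")
print(f"{100.0 * grand_par / grand_total:.2f}% of the tests are parallelizable")
```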

Feb 1

Added a new runltp-ng option, --force-parallel, to force parallelization even for tests that don't declare support for it, but using it causes application crashes, especially in the most important suites such as syscalls or syscalls-ipc, so it's not a good idea to use it. I also ran a few suites and collected the time needed to complete them. It seems the current rule selecting tests for parallel execution is not smart enough: most of the selected tests finish in a second or less. This is reflected in the time results, where important testing suites such as syscalls finish only a few minutes earlier than the normal execution. We can probably do better by optimizing the rule, which is currently implemented here (see the sketch after the timing data below).

```
Qemu: Distro: Tumbleweed, Kernel: 6.1.8-1-default, SMP: 16, RAM: 2GB

syscalls: tests: 1384, parallel: 526 (38% of the tests)

    16 workers: 31m 54s
    1 worker:   36m 18s

syscalls-ipc: tests: 61, parallel: 26 (42.62% of the tests)

    16 workers: 2m 4s
    1 worker:   2m 7s

mm: tests: 75, parallel: 2 (2.67% of the tests)

    16 workers: 8m 2s
    1 worker:   8m 10s

cve: tests: 77, parallel: 5 (6.49% of the tests)

    16 workers: 29m 53s
    1 worker:   29m 57s
```
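One way to see why the savings are so small, and to judge a change to the selection rule before spending a full run on it, is to estimate the best-case wall-time saving from per-test runtimes (declared in the metadata or collected from a serial run). A small sketch with made-up numbers, not taken from runltp-ng:

```python
# Best-case saving of a parallel run versus a serial run: the serial part
# still runs sequentially, while the parallelizable part is ideally divided
# among the workers. Contention (I/O, locks) is ignored, so this is an
# upper bound only.
def estimated_saving(runtimes, parallel, workers):
    serial_part = sum(t for name, t in runtimes.items() if name not in parallel)
    parallel_part = sum(t for name, t in runtimes.items() if name in parallel)
    serial_total = serial_part + parallel_part
    parallel_total = serial_part + parallel_part / workers
    return serial_total - parallel_total

# Example: if the parallelizable tests account for only ~10 minutes of a
# ~30 minute serial run, 16 workers can save at most ~9.4 minutes.
print(estimated_saving({"a": 300.0, "b": 300.0, "c": 1200.0}, {"a", "b"}, 16))
```

Since most of the currently selected tests finish in a second or less, the parallel part is tiny, which matches the few-minutes difference measured above.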

02-03 Feb

I focused on the syscalls testing suite, since it's the most important suite that can be easily parallelized. All power consumption measurements were taken with the powerstat -a -R -d 0 1 3600 command, collecting data from the start of the testing suite execution until the end. All stats were taken on my own laptop, since I wasn't able to access openQA workers physically; to improve the measurements it would be better to use an external power-measuring device. All tests ran inside a Qemu instance. According to openQA stats, syscalls was executed 35 times in the last month (Jan 2023), so we take this value into account.
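powerstat reports the average power draw in watts over the sampling window; the Wh figures in the Data section below are assumed to be that average multiplied by the run duration (the ~13.9 W value is back-calculated from those numbers, not a separate measurement):

```python
# Convert an average power reading (W) over a run of given length (s) to Wh.
# Assumption: the energy figures below were obtained this way from powerstat.
def energy_wh(avg_power_w, duration_s):
    return avg_power_w * duration_s / 3600.0

print(round(energy_wh(13.9, 2337), 1))  # ~9 Wh, the "normal execution" figure
```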

Environment

```
Laptop:
    Model: Lenovo T14s Gen 1
    CPU: AMD Ryzen 7 PRO 4750U
    Memory: 16GB DDR4
    Hard disk: NVMe SSD

Qemu:
    CPUs: 16
    RAM:  4096MB
```

Data

  • CO2 emissions per kWh -> W = 0.244 kg CO2/kWh (5% uncertainty)
  • Average idle consumption -> I = 2.50 W
  • Energy cost in Germany -> P = 0.534 $/kWh
  • syscalls executions per month -> R = 35

Normal execution

  • execution time: T1 = 38m 57s = 2337 s
  • energy consumption: E1 = 9 Wh
  • monthly consumption: C1 = 35 * 9 Wh = 0.315 kWh

Parallel execution (16 workers)

  • execution time: T2 = 35m 22s = 2122 s (about 10% less)
  • energy consumption: E2 = 10 Wh
  • monthly consumption: C2 = 35 * 10 Wh = 0.350 kWh

Results

As we can see, there is a difference between the parallel and the normal execution, but it is so small that it won't particularly affect CO2 emissions or costs. In particular, in one year we have:

  • yearly difference: D = (C2 - C1) * 12 = (0.350 - 0.315) * 12 = +0.42 kWh
  • extra cost: C = D * P = 0.42 * 0.534 = +0.224 $
  • extra emissions: CO2 = D * W = 0.42 * 0.244 = +0.102 kg

Considering that servers might consume more energy during the execution than my laptop, the real values might be bigger, but they would still be pretty small. The reason the parallel run needs slightly more energy is that it draws more power while many tests run at the same time.

Optimizations

In the end, the impact on costs and emissions is small, but over a year the impact on time can still be significant. We can release openQA workers faster and complete other jobs a bit sooner, which of course has an impact on production, energy consumption and emissions. Based on our data, in one year we would save:

(T1 - T2) * R * 12 = (2337 - 2122) s * 35 * 12 ≈ 25 hours
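For reference, the yearly figures above can be reproduced with a few lines of Python (all values are taken from the measurements in this section):

```python
W = 0.244            # kg CO2 per kWh (5% uncertainty)
P = 0.534            # energy cost in Germany, $/kWh
R = 35               # syscalls executions per month

T1, E1 = 2337, 9.0   # normal execution: duration (s), energy (Wh)
T2, E2 = 2122, 10.0  # parallel execution, 16 workers: duration (s), energy (Wh)

diff_kwh = (E2 - E1) * R * 12 / 1000.0  # +0.42 kWh/year (parallel uses more)
print(f"energy diff:     {diff_kwh:+.2f} kWh/year")
print(f"extra cost:      {diff_kwh * P:+.3f} $/year")       # about +0.22 $
print(f"extra emissions: {diff_kwh * W:+.3f} kg CO2/year")  # about +0.10 kg
print(f"time saved:      {(T1 - T2) * R * 12 / 3600.0:.1f} hours/year")  # ~25 h
```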

If we can introduce a smarter rule to select tests that can run in parallel, the amount of time saved per year might increase significantly. Also, 332 syscalls tests (about 24%) still use the old LTP API and therefore can't run in parallel at the moment.

