Although the availability of computer games on Linux has improved a lot, there is still much more potential for openSUSE to fire them up.

This project is about improving the usability of openSUSE for gaming and about showing appreciation for the gamers who run openSUSE as their primary OS. The final goal of a series of improvements is that every gamer can play flawlessly, without a single issue.

The recommended steps to achieve this:

  1. Find out how to run graphical tests with hardware acceleration on bare metal machines.

  2. Establish the preconditions in openQA where they are lacking.

  3. Write tests for games from the official repos which simply check whether the game starts (see the sketch after this list).

  4. Investigate how to run game tests on platforms like Steam without leaking login credentials.

  5. Extend the testing to modern AAA games on these platforms.

  6. Find out how to implement real-time needle matching to verify the render output.
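As an illustration of step 3, here is a minimal, hypothetical smoke test in plain shell that only checks whether a game starts and stays alive for a few seconds. The game name (0 A.D.'s binary 0ad), the timings and the assumption of a running graphical session are placeholders for illustration; in openQA the same idea would be expressed as a test module that starts the program and matches a needle.

      #!/bin/bash
      # Hypothetical "does it start?" check; game name and timings are placeholders.
      # Assumes a running X/Wayland session so the game can open a window.
      game="0ad"                      # assumed binary of a game from the official repos
      timeout 30 "$game" &            # launch the game, hard-stop it after 30 s at the latest
      pid=$!
      sleep 10                        # give it time to initialize
      if kill -0 "$pid" 2>/dev/null; then
          echo "PASS: $game is still running after 10 s"
          kill "$pid"                 # end the test run
      else
          echo "FAIL: $game exited early"
          exit 1
      fi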

How does this help?

  • We get information about which libraries are necessary to run these games and can decide how to improve their availability.

  • We get regression checks that tell us whether a game still starts.

  • Depending on how far this project evolves, we might be able to check for performance regressions as well.


Test machine:

  • CPU & IGP: AMD Ryzen 5 2400G (Vega11)

  • Mainboard: ASRock AB350 Pro4, BIOS P4.70

  • Memory: 16 GiB DDR4

  • GPU: AMD Radeon R9 290X


Hackweek 17 - Results: Step 1 ("Find out how to run graphical tests with hardware acceleration on bare metal machines") - Done.

Documentation: GPU-Passthrough (for this particular machine)

  1. Plug one monitor into the IGP output to run the VM host and another monitor into the dGPU output to display the VM.

  2. Enable AMD-Vi and IOMMU in the mainboard's UEFI. (In this case: Advanced > CPU Configuration > SVM Mode: enabled, and Advanced > North Bridge Configuration > IOMMU: enabled.)

  3. Boot, install Tumbleweed and check whether IOMMU is running: dmesg | grep -i "\(iommu\|amd-vi\)"

  4. Follow steps 3 to 6 of the guide "VGA PCI Passthrough guide on openSuSE Leap 42.2" on the openSUSE Forums. For step 5, use kvm_amd instead of kvm_intel in /etc/dracut.conf.d/gpu-passthrough.conf (a rough sketch of this file is shown after this list). The remaining steps of the linked guide might also work on other systems, but here they only led to a black screen in the VM.

  5. Once the GPU is using vfio-pci as its driver on the VM host after the reboot, create a new virtual disk: qemu-img create -f qcow2 vm_hdd.qcow2 200G

  6. Use the script from step 3 of the "VGA PCI Passthrough guide on openSuSE Leap 42.2" (openSUSE Forums) again to get the correct device number; it is located directly after its IOMMU group number at the beginning of the line (a similar script is sketched after this list).

  7. Get the paths of the evdev devices: ls /dev/input/by-id/ | grep "-event-"

  8. Switch to root and use the following command to run the qemu VM (adapt it to other hardware/addresses etc.): qemu-system-x86_64 -enable-kvm -m 12000 -cpu host,kvm=off -smp 8,sockets=1,cores=4,threads=2 -device vfio-pci,host=10:00.0,x-vga=on -device vfio-pci,host=10:00.1 -vga none -hda /home/user/vm_hdd.qcow2 -object input-linux,id=kbd1,evdev=/dev/input/by-id/usb-DELL_Dell_USB_Entry_Keyboard-event-kbd,grab_all=on,repeat=on -object input-linux,id=mouse1,evdev=/dev/input/by-id/usb-PixArt_Dell_MS116_USB_Optical_Mouse-event-mouse -nic user,model=virtio-net-pci. The specified input devices can be switched between the VM host and the VM guest by pressing both Ctrl keys on the keyboard. For the Linux installation on first boot, append -cdrom /path/to/tumbleweed.iso as an additional option.

  9. (After the installation, in the VM press e in GRUB 2 and add the kernel boot parameter nomodeset to keep the amdgpu and radeon modules from taking over the display and to boot in software rendering mode. Copy the necessary firmware from the Radeon ucode folder on freedesktop.org to /lib/firmware/amdgpu. Also append modprobe.blacklist=radeon and amdgpu.cik_support=1 to the kernel boot line in /etc/default/grub and run sudo grub2-mkconfig -o /boot/grub2/grub.cfg. This step was only necessary because of current issues with the R9 290X GPU and openSUSE.)
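For reference, here is a rough, hedged sketch of what the dracut configuration from step 4 and the accompanying modprobe options might look like. The exact contents come from the linked forum guide and depend on the hardware; the vfio-pci IDs are placeholders that have to be taken from lspci -nn:

      # /etc/dracut.conf.d/gpu-passthrough.conf (sketch, module list may need adjusting)
      add_drivers+=" vfio vfio_iommu_type1 vfio_pci kvm kvm_amd "

      # /etc/modprobe.d/vfio.conf (sketch; replace the placeholders with the IDs from lspci -nn)
      options vfio-pci ids=<gpu-id>,<audio-id>

After changing these files, regenerate the initrd with sudo dracut -f and reboot. A script similar to the one from step 3 of the linked guide (referenced again in step 6), listing every PCI device together with its IOMMU group, could look like this common variant:

      #!/bin/bash
      # Print each PCI device preceded by its IOMMU group number
      for d in /sys/kernel/iommu_groups/*/devices/*; do
          g=${d#*/iommu_groups/}; g=${g%%/*}        # group number taken from the sysfs path
          printf 'IOMMU group %s: %s\n' "$g" "$(lspci -nns "${d##*/}")"
      done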

After all of the above has been done, dmesg | grep -i amdgpu should show only positive output (no errors).

Documentation: Hardware-accelerated screen capturing in the VM (for this particular machine)

  1. Download ffmpeg from Git (git clone git://source.ffmpeg.org/ffmpeg.git && cd ffmpeg) and install any missing build requirements (a hedged package suggestion follows after this list).

  2. Configure with the parameters needed to enable hardware encoding: ./configure --enable-libdrm --enable-vaapi --enable-encoder=h264_vaapi. Then run make and sudo make install.

  3. Capture a test video, test.mp4, of the screen: LIBVA_DRIVER_NAME=radeonsi ./ffmpeg -framerate 60 -f kmsgrab -i - -init_hw_device vaapi=v:/dev/dri/card0 -filter_hw_device v -filter:v hwmap,scale_vaapi=w=1920:h=1080:format=nv12 -c:v h264_vaapi -profile:v constrained_baseline -level:v 3.1 -b:v 20000k test.mp4
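Regarding the build requirements in step 1: the exact package set was not recorded, but on Tumbleweed something along the following lines should cover a VA-API/libdrm-enabled build. The package names are an assumption; ./configure will report whatever is still missing.

      # Hypothetical build dependencies for the ffmpeg build above (adjust as configure complains)
      sudo zypper in git gcc make nasm pkg-config libdrm-devel libva-devel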

Final thoughts for Hackweek 17

It is definitely possible to enable openQA to run graphical, hardware-accelerated tests. Forwarding the screen output over the network would also enable the testing of bare-metal machines. For virtual machines there is additionally the option of mapping the memory to the VM host, for example to display the output in a window. How exactly this can be achieved still has to be determined.
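As a rough, untested sketch of how this forwarding could look, the capture command from the documentation above could write an MPEG-TS stream to a UDP sink on the VM host instead of a file, and the host could display it with ffplay. The host address and port are placeholders:

      # On the VM guest (sketch): stream the hardware-encoded capture to the VM host
      LIBVA_DRIVER_NAME=radeonsi ./ffmpeg -framerate 60 -f kmsgrab -i - \
          -init_hw_device vaapi=v:/dev/dri/card0 -filter_hw_device v \
          -filter:v hwmap,scale_vaapi=w=1920:h=1080:format=nv12 \
          -c:v h264_vaapi -b:v 20000k -f mpegts udp://<vm-host-ip>:5000

      # On the VM host (sketch): pick up and display the stream
      ffplay udp://0.0.0.0:5000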

I will take a closer look at this; for the next Hackweek the goal should be the implementation.


This project is part of:

Hack Week 17

Activity

  • over 6 years ago: pgeorgiadis liked this project.
  • over 6 years ago: maritawerner liked this project.
  • over 6 years ago: suntorytimed liked this project.
  • over 6 years ago: SLindoMansilla liked this project.
  • over 6 years ago: SLindoMansilla joined this project.
  • over 6 years ago: okurz liked this project.
  • over 6 years ago: dfaggioli liked this project.
  • over 6 years ago: maxmaher joined this project.
  • over 6 years ago: clanig started this project.
  • over 6 years ago: clanig originated this project.

  • Comments

    • okurz, over 6 years ago

      Cool idea. As a starting point, we have a steam test module in the scenario https://openqa.opensuse.org/tests/latest?test=extratestson_gnome#previous for example see https://openqa.opensuse.org/tests/702007#step/steam/29

    • clanig, over 6 years ago

      @okurz Thank you for the links.

      I will bring my future HTPC to the office, so unfortunately I can't leave it there after Hackweek ;) It has two AMD GPUs, a Vega IGP and a Sea Islands GPU. Both are supported by the AMDGPU kernel module and by either the RadeonSI OpenGL driver or the RADV/AMDVLK Vulkan drivers. It would likely be best to pass the big graphics card to the guest via IOMMU while the IGP runs the VM host. (But I haven't done this before and really hope that IOMMU support will work for this combination.) Perhaps there are even better options, but this scenario is very close to real setups.

    • clanig, over 6 years ago

      I have set up GPU-Passthrough for the Sea Islands GPU successfully. The AMDGPU kernel module was in use and 0 A.D. ran at 40-60 FPS at the highest detail level and 1080p resolution in the Tumbleweed VM. Using EVDEV passthrough, I only had to press both Ctrl keys to switch my controls from host to guest and vice versa.

      @SLindoMansilla told me about a chat with @bmwiedemann who mentioned that we don't get the graphics output for openQA when it is rendered with hardware acceleration on the GPU. But he thinks there should be a way to redirect it.

      A possible but pretty expensive solution might be to grab the output with a video capture card; that way the system under test would be left untouched. After chatting with several people, one might also consider letting the SUT take screenshots and send them outside. That would be massively cheaper and more flexible. The downside is that the SUT would do some of the work of testing itself. Since I consider this the easiest solution, I am currently investigating how it could be done and whether there are other solutions we haven't thought of yet.
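      To illustrate the screenshot idea (purely a sketch, nothing that exists in openQA today; the output path and target host are placeholders, and depending on the framebuffer an extra mapping step to VA-API may be needed), a single frame could be grabbed on the SUT with kmsgrab and copied out over the network:

          # On the SUT (sketch, kmsgrab needs root/CAP_SYS_ADMIN): grab one frame and send it out
          ffmpeg -f kmsgrab -i - -vf 'hwdownload,format=bgr0' -frames:v 1 /tmp/screen.png
          scp /tmp/screen.png user@<openqa-worker>:/tmp/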

    • pgeorgiadis, over 6 years ago

      FYI I just tried to install Steam in TW and the client was failing to start. I fixed it by doing:

      zypper in libXtst6-32bit libva2-32bit libvdpau1-32bit libva-x11-2-32bit

      sudo ln -s /usr/lib/libva.so /usr/lib/libva.so.1

      sudo ln -s /usr/lib/libva-x11.so /usr/lib/libva-x11.so.1

      steam # now it works

    • clanig, over 6 years ago

      @pgeorgiadis Thank you! Unfortunately I don't think I can get that far during this Hackweek.

      I have done some thinking about how to get the screen output to the openQA server. In my opinion the best option is to provide a video stream of the screen in the virtual machine to the openQA server. To achieve this, the Mesa video acceleration API (VA-API) together with ffmpeg could be used to compress the screen output with a contemporary codec like MPEG-4 (unfortunately VP9 hardware encoding is not supported by any GPU on Linux yet). The performance impact would be minimal.

      A virtual serial connection would still be necessary to install the machine and set up the video stream.

      I am going to run tests on providing a video stream and find out what we currently have and what is still necessary to implement it.

    • clanig, over 6 years ago

      Today I have done some tests with ffmpeg. The version from the repositories does not have hardware encoding support. For this reason I had to compile it with --enable-libdrm, --enable-vaapi, --enable-encoder=h264_vaapi.

      I was able to capture the screen with good image quality and almost no performance impact. Using kmsgrab I hoped to be able to capture everything, from the ttys over X and the login screen to Wayland, because kmsgrab is (in theory) not tied to a display server. Unfortunately I was only able to capture the ttys (with a bit of buggy rendering while switching between them) and Wayland. I haven't tried to capture X yet, but my attempts to capture the screen during the transitions from X to Wayland, or from X/Wayland to a tty and so on, were very unsuccessful: either ffmpeg crashed, or the user session became corrupted and Wayland refused to start again until ffmpeg had been exited...

      But I expect kmsgrab to work fine with X as well, so it is at least a reasonable option to cover both after the user login. With autologin enabled, one could boot the machine "blindly" and capture the screen after some delay.

      Tomorrow I will try to provide the captured screen via network and try to access it from the VM host.

    • clanig, over 6 years ago

      Update: I have contacted ffmpeg users on IRC who told me that kmsgrab should allow switching between X, Wayland and the ttys without issues. I suspect that the problems might have to do with the fact that I used hardware-accelerated encoding.

      But since I am unsatisfied with the idea of sending the captured screen as an encoded video, I looked a bit closer at the kernel to find out where ffmpeg grabs the images from. It turns out that it uses the DMA buffer sharing mechanism in the kernel; Rob Clark gave a presentation about it, "DMA Buffer Sharing Framework: An Introduction".

      When I asked on IRC about the possibility of exposing it to the VM host, Daniel Vetter mentioned a patch series by Oleksandr Andrushchenko, who provided me with a link to his patches: "xen: dma-buf support for grant device" (DRI Devel, Patchwork).

      Unfortunately (with regard to this project) it is for Xen and hasn't been merged yet. But if it were ported to KVM in the right way, it should allow us to access the DMA buffer directly on the host of the virtual machine and to view everything that is displayed by the GPU. Daniel also mentioned UDMABUF as a related project.
