(was: Create a DRM driver for Matrox G200)
Even after 20 years, the Matrox G200 series is still an excellent 2D graphics card. Unfortunately, there are only an fbdev driver and a user-space driver, and both are obsolete, as modern Linux uses the DRM framework for managing graphics cards. There is already a DRM driver for the G200 server series, but it is under-maintained and doesn't work with desktop chips.
I intend to work on a DRM driver for the G200 during the hackweek. Let's see how far one can get within a few days. :)
This project is part of:
- Hack Week 17
- Hack Week 21
Activity
Comments
almost 7 years ago by tdz
Oh, interesting! I found this: http://www.fujitsu.com/de/products/computing/servers/primergy/os/linux/suse/ and it specifically mentions Primergy. Do we have one of these devices around for testing?
My plan is to start with desktop cards (because I can do that locally) and at some point merge support for the server chips. The differences are minor. As I mentioned earlier, the current server-chipset driver is under-maintained and not up to today's DRM standards. Having desktop support should also help to keep this maintained for the longer term.
almost 7 years ago by tdz
The state of the driver after day 2 is at
https://gitlab.suse.de/tdz/linux/tree/mga-kms-day2
I've added code for computing a mode's required memory bandwidth and VCLK (actually the Pixel PLL configuration). This is part of the check phase of applying a mode. The commit phase is next. Once that works, a lot of clean-up will have to be done.
about 3 years ago by tdz
I think it's time to revive this hackweek project with a slightly different spin.
Egbert's patches for the desktop G200 landed in the kernel's DRM driver for the server G200 a few releases ago. But there's more Matrox desktop hardware that can be supported. I have some half-finished patches for the G400, etc., that I wanted to get finished.
almost 3 years ago by tdz
Day 1: The current kernel driver for Matrox supports the various flavors of the G200 chipset. The overall modesetting pipeline is the same for all Matrox cards, but each version's hardware has its own peculiarities. Therefore, I studied the old userspace driver to understand how it sets up the hardware for the G400.
almost 3 years ago by tdz
Day 3: I got the G400 working with the mgag200 kernel driver. I took the driver's existing G200 code and adapted it with parameters for the G400. The parameters come from the X11 userspace driver. In the afternoon, I started working on G450 support. The G450 and G550 use a different algorithm for programming the PLL. I'll have to port the existing code from one of the other Matrox drivers into mgag200.
almost 3 years ago by tdz
Day 4: I got the Matrox G450 working.
As I mentioned, the PLL setup algorithm is different from previous cards. The PLL produces an output frequency from a fixed input frequency plus a few circuits that modify it. These circuits apply divider or multiplier operations to the input frequency in a predefined way. The result is not a 100% match, but usually close enough. Drivers typically pick the setting that results in the least difference from the target frequency. (That's why 60 Hz displays usually run at ~59.xx Hz.)
The existing Matrox G450 code is different in that it computes all possible combinations of PLL settings that produce the target frequency and then applies them one by one until the graphics card reports success. Taking this code from the existing fbdev driver requires quite a bit of refactoring to fit it into DRM's atomic modesetting scheme.
almost 3 years ago by tdz
Day 5: I worked on cleaning up the G450 code. As I mentioned, the PLL setup algorithm is much more elaborate than for the other models. Integrating this into DRM patterns requires several refactor-debug cycles.
Overall, I made good progress with the Matrox cards. I have added support for the G400, G400 MAX and the G450. The one left is the G550. Looking at other existing Matrox drivers, it seems very similar to the G450, so it should be relatively easy to support after the G450 code has fallen into place.
Maybe I'll take the time to finish this and submit the code for upstream inclusion.
Similar Projects
early stage kdump support by mbrugger
Project Description
When we experience an early boot crash, we are not able to analyze the kernel dump, as user space wasn't able to load the crash system. The idea is to compile the crash system into the host kernel (think of an initramfs) so that we can create a kernel dump really early in the boot process.
Goal for the Hackweeks
- Investigate if this is possible and the implications it would have (done in HW21)
- Hack up a PoC (done in HW22 and HW23)
- Prepare RFC series (given it's only one week, we are entering wishful-thinking territory here).
update HW23
- I was able to include the crash kernel into the kernel Image.
- I'll need to find a way to load that from init/main.c:start_kernel(), probably after kcsan_init()
- A workaround for a smoke test was to hack the kexec_file_load() system call, which has two problems:
  - My initramfs in the production kernel does not have a new enough kexec version; that's not a blocker, but it's where the week ended
  - As the crash kernel is part of init.data, it will already be stale once I can call kexec_file_load() from user space.
The solution is probably to rewrite the PoC so that the invocation can be done from init.text (that's my theory), but I'm not sure if I can reuse the kernel's kexec infrastructure from there, which I rely on heavily.
update HW24
- Day 1
  - rebased on v6.12 with no problems other than me breaking the config
  - setting up a new compilation and qemu/virtme environment
  - getting desperate as nothing works that used to work
- Day 2
  - getting to call the invocation of loading the early kernel from __init, after kcsan_init()
Day 3
- fix problem of memdup not being able to alloc so much memory... use 64K page sizes for now
- code refactoring
- I'm now able to load the crash kernel
- When using virtme I can boot into the crash kernel, although it doesn't boot completely (major milestone!); it crashes in elfcorehdr_read_notes()
Day 4
- crash system crashes (no pun intended) in copy_old_mempage(); will need to understand elfcorehdr...
- call path: vmcore_init() -> parse_crash_elf_headers() -> elfcorehdr_read() -> read_from_oldmem() -> copy_oldmem_page() -> copy_to_iter()
Day 5
- hacking arch/arm64/kernel/crash_dump.c:copy_old_mempage() to see if the crash system really starts. It does.
- fun fact: retested with more reserved memory and with UEFI FW; the host kernel crashes in init but directly starts the crash kernel, so it works (somehow) \o/
TODOs
- fix elfcorehdr so that we actually can make use of all this...
- test where in the boot __init() chain we can/should call kexec_early_dump()