Project Description
When we experience an early boot crash, we are not able to analyze the kernel dump, because user-space hasn't come up far enough to load the crash system. The idea is to compile the crash system into the host kernel (think of initramfs) so that we can create a kernel dump really early in the boot process.
Goal for the Hackweeks
- Investigate whether this is possible and what implications it would have (done in HW21)
- Hack up a PoC (done in HW22 and HW23)
- Prepare an RFC series (given it's only one week, we are entering wishful-thinking territory here).
update HW23
- I was able to include the crash kernel in the kernel Image.
- I'll need to find a way to load that from init/main.c:start_kernel(), probably after kcsan_init().
- A workaround for a smoke test was to hack the kexec_file_load() system call, which has two problems:
  - My initramfs in the production kernel does not have a new enough kexec version; that's not a blocker, but it's where the week ended.
  - As the crash kernel is part of init.data, it will already be stale once I can call kexec_file_load() from user-space.
The solution is probably to rewrite the PoC so that the invocation can be done from init.text (that's my theory), but I'm not sure if I can reuse the in-kernel kexec infrastructure, which I rely on heavily, from there.
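To make that concrete, here is a hypothetical sketch of where the invocation could sit in init/main.c:start_kernel(). kexec_early_dump() is only the working name from the TODO list below, and CONFIG_EARLY_KDUMP is an assumed Kconfig symbol; neither exists upstream.

```c
/* init/main.c -- hypothetical placement, not upstream code */

	/* ... earlier start_kernel() initialization ... */
	kcsan_init();

#ifdef CONFIG_EARLY_KDUMP	/* assumed Kconfig symbol for the PoC */
	/*
	 * Load the crash kernel that is compiled into the host Image.
	 * Calling this from __init code (init.text) means the embedded
	 * blob in init.data is still valid, unlike with the late
	 * kexec_file_load() smoke test described above.
	 */
	kexec_early_dump();
#endif
```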
update HW24
Day 1
- rebased on v6.12 with no problems other than me breaking the config
- setting up a new compilation and qemu/virtme environment
- getting desperate as nothing that used to work works anymore
Day 2
- managed to invoke the loading of the early kernel from __init, after kcsan_init()
Day 3
- fix the problem of memdup() not being able to allocate that much memory... use 64K page size for now
- code refactoring
- I'm now able to load the crash kernel
- When using virtme I can boot into the crash kernel, although it doesn't boot completely (major milestone!); it crashes in elfcorehdr_read_notes()
Day 4
- the crash system crashes (no pun intended) in copy_oldmem_page() (link); will need to understand elfcorehdr...
- call path: vmcore_init() -> parse_crash_elf_headers() -> elfcorehdr_read() -> read_from_oldmem() -> copy_oldmem_page() -> copy_to_iter()
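For context, this is roughly what copy_oldmem_page() looks like in current mainline arm64 (simplified; details vary between kernel versions): it remaps one page of the old kernel's memory and hands it to copy_to_iter(), which matches the tail of the call path above.

```c
/* arch/arm64/kernel/crash_dump.c, simplified from mainline */
ssize_t copy_oldmem_page(struct iov_iter *iter, unsigned long pfn,
			 size_t csize, unsigned long offset)
{
	void *vaddr;

	if (!csize)
		return 0;

	/* map one page of the crashed (old) kernel's memory ... */
	vaddr = memremap(__pfn_to_phys(pfn), PAGE_SIZE, MEMREMAP_WB);
	if (!vaddr)
		return -ENOMEM;

	/* ... and copy the requested slice into the reader's iov_iter */
	csize = copy_to_iter(vaddr + offset, csize, iter);

	memunmap(vaddr);
	return csize;
}
```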
Day 5
- hacking arch/arm64/kernel/crash_dump.c:copy_oldmem_page() to see if the crash system really starts. It does.
- fun fact: retested with more reserved memory and with UEFI FW; the host kernel crashes in init but directly starts the crash kernel, so it works (somehow) \o/
TODOs
- fix elfcorehdr so that we can actually make use of all this...
- test where in the boot __init() chain we can/should call kexec_early_dump()
- do we really need memdup() or can we use the compiled-in kernel for creating the segments? (see the sketch below)
- refactor and rename everything (Kconfig menu shows up in the wrong place, Kconfig entry needs to go somewhere else, ekdump vs early_dump vs early_kdump)
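On the memdup() question, one possible direction, purely as a sketch: if the embedded crash kernel is exposed via linker symbols, the kexec segment could point at the compiled-in blob directly instead of a memdup()'d copy. The __early_crash_kernel_* symbols and the surrounding function are made up for illustration, and this assumes the load runs early enough that .init.data is still mapped.

```c
#include <linux/init.h>
#include <linux/ioport.h>
#include <linux/kexec.h>
#include <linux/sizes.h>

/* hypothetical symbols around the blob, pulled in e.g. via .incbin */
extern const char __early_crash_kernel_start[];
extern const char __early_crash_kernel_end[];

static int __init early_kdump_add_kernel_segment(struct kimage *image)
{
	struct kexec_buf kbuf = {
		.image     = image,
		/* point the segment at the compiled-in blob; only valid
		 * while the .init sections are still mapped */
		.buffer    = (void *)__early_crash_kernel_start,
		.bufsz     = __early_crash_kernel_end - __early_crash_kernel_start,
		.memsz     = __early_crash_kernel_end - __early_crash_kernel_start,
		.mem       = KEXEC_BUF_MEM_UNKNOWN,
		.buf_align = SZ_2M,
		.buf_min   = crashk_res.start,
		.buf_max   = crashk_res.end,
		.top_down  = false,
	};

	/* let the regular kexec_file infrastructure place the segment in
	 * the reserved crash memory */
	return kexec_add_buffer(&kbuf);
}
```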
Resources
This project is part of:
Hack Week 21 Hack Week 22 Hack Week 23 Hack Week 24
Activity
Comments
- 5 months ago by ptesarik
FWIW I was contemplating a similar scheme back in 2016. My idea was to load:
1. kdump kernel
2. kdump initrd
3. production kernel
4. production initrd
Then boot into the kdump kernel, update memory maps and kexec to the production kernel. When the production kernel crashes, pass control back to the kdump kernel. For the return to the kdump kernel, I was looking at the KEXEC_PRESERVE_CONTEXT flag, but in the end I doubt it's really helpful without further modifications to the production kernel. At this point, it's probably easier to boot the production kernel first and set up an initial crash kernel at early boot. Good luck!
- 20 days ago by mbrugger
[ 0.221029] Unable to handle kernel level 3 address size fault at virtual address ffff800080aa0000
[ 0.222848] Mem abort info:
[ 0.223419] ESR = 0x0000000096000003
[ 0.224187] EC = 0x25: DABT (current EL), IL = 32 bits
[ 0.225300] SET = 0, FnV = 0
[ 0.225933] EA = 0, S1PTW = 0
[ 0.226587] FSC = 0x03: level 3 address size fault
[ 0.227600] Data abort info:
[ 0.228198] ISV = 0, ISS = 0x00000003, ISS2 = 0x00000000
[ 0.229351] CM = 0, WnR = 0, TnD = 0, TagAccess = 0
[ 0.230385] GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
[ 0.231466] swapper pgtable: 64k pages, 48-bit VAs, pgdp=000000005d6b0000
[ 0.232850] [ffff800080aa0000] pgd=100000005ef80003, p4d=100000005ef80003, pud=100000005ef80003, pmd=100000005ef90003, pte=00681591c0000f03
[ 0.235548] Internal error: Oops: 0000000096000003 [#1] PREEMPT SMP
[ 0.236828] Modules linked in:
[ 0.237504] CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.12.0-dirty #9
[ 0.239037] Hardware name: linux,dummy-virt (DT)
[ 0.240047] pstate: a0400005 (NzCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 0.241563] pc : __memcpy+0x110/0x230
[ 0.242374] lr : _copy_to_iter+0x374/0x670
[ 0.243276] sp : ffff8000800efa20
[ 0.244009] x29: ffff8000800efa70 x28: 0000000000000000 x27: ffff800080aa0000
[ 0.245577] x26: ffff8000800efba0 x25: 00000000000001a8 x24: ffffd7d750365000
[ 0.247124] x23: ffff8000800efbb0 x22: 0000000000000000 x21: ffff8000800efba0
[ 0.248672] x20: 00000000000001a8 x19: 0000000000000000 x18: ffffffffffffffff
[ 0.250234] x17: 0000000087130253 x16: 00000000e547dfaa x15: 0720072007200720
[ 0.251792] x14: ffffa2fe3f221a00 x13: ffffd7d750365fb8 x12: ffffd7d75162c9c8
[ 0.253349] x11: ffffd7d75169ca30 x10: ffffd7d7516849f0 x9 : ffffd7d751684a48
[ 0.254900] x8 : 0000000000017fe8 x7 : c0000000ffffefff x6 : 0000000000000001
[ 0.256457] x5 : ffff22febfcc1ba8 x4 : ffff800080aa01a8 x3 : 00000000ffffefff
[ 0.258013] x2 : 00000000000001a8 x1 : ffff800080aa0000 x0 : ffff22febfcc1a00
[ 0.259564] Call trace:
[ 0.260109] __memcpy+0x110/0x230
[ 0.260842] copy_oldmem_page+0xc8/0x110
[ 0.261713] read_from_oldmem+0x1bc/0x268
[ 0.262595] elfcorehdr_read_notes+0x9c/0xd0
[ 0.263536] merge_note_headers_elf64.constprop.15+0x110/0x3b0
[ 0.264813] vmcore_init+0x298/0x794
[ 0.265612] do_one_initcall+0x64/0x1e8
[ 0.266455] kernel_init_freeable+0x238/0x288
Similar Projects
Linux on Cavium CN23XX cards by tsbogend
Before Cavium switched to ARM64 CPUs they developed quite powerful MIPS based SOCs. The current upstream Linux kernel already supports some Octeon SOCs, but not the latest versions. The goal of this Hack Week project is to use the latest Cavium SDK to update the Linux kernel code to get it running on CN23XX network cards.
Kill DMA and DMA32 memory zones by ptesarik
Description
Provide a better allocator for DMA-capable buffers, making the DMA and DMA32 zones obsolete.
Goals
Make a PoC kernel which can boot an x86 VM and a Raspberry Pi (because early RPi4 boards have some of the weirdest DMA constraints).
Resources
- LPC2024 talk:
- video:
RISC-V emulator in GLSL capable of running Linux by favogt
Description
There are already numerous ways to run Linux and some programs through emulation in a web browser (e.g. x86 and riscv64 on https://bellard.org/jslinux/), but none use WebGL/WebGPU to run the emulation on the GPU.
I already made a PoC of an AArch64 (64-bit Arm) emulator in OpenCL, which is unfortunately hindered by a multitude of OpenCL compiler bugs on all platforms (Intel with beignet or the new compute runtime, and AMD with Mesa Clover and rusticl). With GLSL being more widespread and thus less broken than OpenCL, and with the less complex implementation requirements of RV32 (especially 32-bit instead of 64-bit integers), that should not be a major problem anymore.
Goals
Write a RISC-V system emulator in GLSL that is capable of booting Linux and running some userspace programs interactively. Ideally it is small enough to work on online test platforms like Shaderoo, with a custom texture that contains bootstrap code, kernel and initrd.
Minimum:
riscv32 without FPU (RV32 IMA) and MMU (µClinux), running Linux in M-mode and userspace in U-mode.
Stretch goals:
FPU support, S-Mode support with MMU, SMP. Custom web frontend with more possibilities for I/O (disk image, network?).
Resources
RISC-V ISA Specifications
Shaderoo
OpenGL 4.5 Quick Reference Card
Result as of Hackweek 2024
WebGL turned out to be insufficient: it only supports OpenGL ES 3.0, but imageLoad/imageStore needs ES 3.1. So we switched directions and had to write a native C++ host for the shaders.
As of Hackweek Friday, the kernel attempts to boot and outputs messages, but panics due to missing memory regions.
Since then, some bugs were fixed and enough hardware emulation implemented, so that now Linux boots with framebuffer support and it's possible to log in and run programs!
The repo with a demo video is available at https://github.com/Vogtinator/risky-v
FizzBuzz OS by mssola
Project Description
FizzBuzz OS (or just fbos) is an idea I've had in order to better grasp the fundamentals of the low level of a RISC-V machine. In practice, I'd like to build a small Operating System kernel that is able to launch three processes: one that simply prints "Fizz", another that prints "Buzz", and a third that prints "FizzBuzz". These processes are unaware of each other and it's up to the kernel to schedule them by using the timer interrupts as given by OpenSBI (fizz on % 3 seconds, buzz on % 5 seconds, and fizzbuzz on % 15 seconds).
This kernel provides just one system call, write, which allows any program to pass a string to be written to stdout.
This project is free software and you can find it here.
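As a minimal illustration of the scheduling rule described above (purely a sketch; the names are not taken from the fbos sources), the kernel only needs to derive, from the seconds elapsed since boot, which of the three processes the timer interrupt should switch to:

```c
/* Illustrative only: map elapsed seconds to the process that should run. */
enum fbos_task { TASK_IDLE, TASK_FIZZ, TASK_BUZZ, TASK_FIZZBUZZ };

static enum fbos_task task_for_second(unsigned long secs)
{
	if (secs == 0)
		return TASK_IDLE;
	if (secs % 15 == 0)
		return TASK_FIZZBUZZ;	/* process printing "FizzBuzz" */
	if (secs % 5 == 0)
		return TASK_BUZZ;	/* process printing "Buzz" */
	if (secs % 3 == 0)
		return TASK_FIZZ;	/* process printing "Fizz" */
	return TASK_IDLE;
}
```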
Goal for this Hackweek
- Better understand the RISC-V SBI interface.
- Better understand RISC-V in privileged mode.
- Have fun.
Resources
Results
The project was a resounding success. Lots of learning, and the initial target was met.
Create a DRM driver for VGA video cards by tdz
Yes, those VGA video cards. The goal of this project is to implement a DRM graphics driver for such devices. While actual hardware is hard to obtain or even run today, qemu emulates VGA output.
VGA has a number of limitations, which make this project interesting.
- There are only 640x480 pixels (or fewer) on the screen. That resolution is also a soft lower limit imposed by DRM. It's mostly a problem for desktop environments though.
- Desktop environments assume 16 million colors, but there are only 16 colors with VGA. VGA's 256 color palette is not available at 640x480. We can choose those 16 colors freely. The interesting part is how to choose them. We have to build a palette for the displayed frame and map each color to one of the palette's 16 entries. This is called dithering, and VGA's limitations are a good opportunity to learn about dithering algorithms.
- VGA has an interesting memory layout. Most graphics devices use linear framebuffers, which store the pixels byte by byte. VGA uses 4 bitplanes instead. Plane 0 holds all bits 0 of all pixels. Plane 1 holds all bits 1 of all pixels, and so on.
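To illustrate that layout (this is not code from the driver, just a sketch), converting a linear 16-color buffer with one pixel per byte into the four planes could look like this:

```c
#include <stddef.h>
#include <string.h>

/*
 * Sketch: split a linear buffer of 4-bit pixels (one pixel per byte,
 * values 0..15) into VGA's four bitplanes.  Plane p receives bit p of
 * every pixel, packed 8 pixels per byte, leftmost pixel in the MSB.
 */
static void linear_to_planes(const unsigned char *pixels, size_t n_pixels,
			     unsigned char *planes[4], size_t plane_len)
{
	memset(planes[0], 0, plane_len);
	memset(planes[1], 0, plane_len);
	memset(planes[2], 0, plane_len);
	memset(planes[3], 0, plane_len);

	for (size_t i = 0; i < n_pixels; i++) {
		size_t byte = i / 8;
		unsigned int bit = 7 - (i % 8);

		for (unsigned int p = 0; p < 4; p++)
			if (pixels[i] & (1u << p))
				planes[p][byte] |= 1u << bit;
	}
}
```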
The driver will probably not be useful to many people. But, if finished, it can serve as a test environment for low-level hardware. There's some interest in supporting old Amiga and Atari framebuffers in DRM. Those systems have limitations similar to VGA's, but are harder to obtain and test with. With qemu, the VGA driver could fill this gap.
Apart from the Wikipedia entry, good resources on VGA are at osdev.net and FreeVGA.