Description
A couple of weeks ago, I found this blog post by Gustavo Silva, a Linux Kernel contributor.
I have always wanted to get back into hacking on the Linux Kernel, so I asked for access to the Coverity Scan dashboard and I want to contribute to the Linux Kernel by fixing some minor issues.
I also want to set up a Linux Kernel fuzzing lab using qemu and syzkaller.
Goals
- Fix at least 2 security bugs
- Create the fuzzing lab and have it running
The story so far
- Day 1: set up a virtual machine for kernel development using Tumbleweed. Read a lot of documentation and got familiar with the Coverity dashboard and with the procedures for submitting a kernel patch.
- Day 2: I read a lot more documentation and triaged some findings on the Coverity SAST dashboard. I can confirm that SAST tools are great false-positive generators, even for low-hanging fruit.
- Day 3: worked on trivial changes after reading this blog post: https://www.toblux.com/posts/2024/02/linux-kernel-patches.html. I still need to get familiar with the patch preparation and submission process.
- First trivial patch sent: use the str_true_false() macro instead of hard-coded strings in a staging driver for an LCD display (a sketch of this kind of change appears under "The patches" below)
- Fix for a dereference-before-NULL-check issue found by Coverity (CID 1601566): https://scan7.scan.coverity.com/#/project-view/52110/11354?selectedIssue=1601566
- Day 4: Triaging more issues found by Coverity.
- The patch for CID 1601566 was refused. The check against the NULL pointer was pointless, so I prepared a version 2 of the patch removing the check (the pattern is sketched right after this list).
- Fixed another dereference-before-NULL-check issue, in the iwl_mvm_parse_wowlan_info_notif() routine (CID 1601547). A fix for this one had already been submitted by another kernel hacker :(
- Day 5: wrapping up. I had to do some minor rework on the patch for CID 1601566. I also found a stalker bothering me in private emails; people I had interacted with advised me that he is a well-known nuisance. Markus Elfring, for the record.
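For readers unfamiliar with this class of Coverity finding, here is a minimal sketch of the "dereference before NULL check" pattern and its two usual fixes. The struct and function names are hypothetical stand-ins, not the actual code behind CID 1601566 or 1601547.

```c
#include <errno.h>

/* Hypothetical device structure, only for illustration. */
struct foo_device {
	void *regs;
};

int foo_hw_init(void *regs)
{
	return regs ? 0 : -EIO;
}

/* Buggy shape: dev is dereferenced before it is checked against NULL. */
int foo_start(struct foo_device *dev)
{
	int ret = foo_hw_init(dev->regs);	/* dereference happens here... */

	if (!dev)				/* ...so this check is dead code,
						 * or it hides a real bug */
		return -EINVAL;

	return ret;
}

/* Fixed shape: check before the first dereference. If the caller can never
 * pass NULL (as argued in v2 of the patch mentioned above), drop the check
 * entirely instead. */
int foo_start_fixed(struct foo_device *dev)
{
	if (!dev)
		return -EINVAL;

	return foo_hw_init(dev->regs);
}
```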
Wrapping up: being back to kernel hacking is amazing and I don't want to stop. My battery pack is completely drained, but changing scope gave me a great boost, and I really want to keep feeling this energy instead of doing a single task for months.
I failed to set up the fuzzing lab, and I was too optimistic about the patch submission process.
The patches
First result
The patch is included in the Linux Kernel.
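For illustration, here is a minimal sketch of the kind of trivial cleanup mentioned on Day 3: replacing hard-coded "true"/"false" strings with the kernel's str_true_false() helper. The dev_dbg() call, the dev and enabled variables and the message are hypothetical; only the shape of the change matters.

```c
/* Before: the string literals are duplicated in the driver. */
dev_dbg(dev, "display enabled: %s\n", enabled ? "true" : "false");

/* After: use the shared helper from the kernel's string-choice helpers. */
dev_dbg(dev, "display enabled: %s\n", str_true_false(enabled));
```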
Resources
The series of blog posts by Gustavo Silva that inspired this project.
An email with some quick "where to start" instructions
The patchset philosophy
This project is part of:
Hack Week 24
Similar Projects
VulnHeap by r1chard-lyu
Description
The VulnHeap project is dedicated to the in-depth analysis and exploitation of vulnerabilities within heap memory management. It focuses on understanding the intricate workflow of heap allocation, chunk structures, and bin management, which are essential to identifying and mitigating security risks.
Goals
- Familiarize with the heap
- Heap workflow
- Chunk and bin structure
- Vulnerabilities (a minimal example follows after this list):
- Use after free (UAF)
- Heap overflow
- Double free
- Use Docker to create a vulnerable environment and apply techniques to exploit it
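As a minimal, self-contained illustration of one of the vulnerability classes listed above (use-after-free), the snippet below frees a chunk and keeps dereferencing the stale pointer; on a typical glibc system a same-sized allocation is likely to reuse that chunk, which is what makes the bug exploitable. This is a generic demo, not code from the project.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	char *note = malloc(32);
	if (!note)
		return 1;

	strcpy(note, "original note");
	free(note);			/* chunk goes back to the tcache/bin */

	/* Use-after-free: "note" is stale, but a new allocation of the same
	 * size will likely get the very same chunk back, so whatever is
	 * written through "other" becomes visible through "note" as well. */
	char *other = malloc(32);
	if (!other)
		return 1;
	strcpy(other, "attacker-controlled");

	printf("note now reads: %s\n", note);	/* undefined behaviour */

	free(other);
	return 0;
}
```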
Resources
- https://heap-exploitation.dhavalkapil.com/diving_into_glibc_heap
- https://raw.githubusercontent.com/cloudburst/libheap/master/heap.png
- https://github.com/shellphish/how2heap?tab=readme-ov-file
Explore simple and distro-independent declarative Linux starting on Tumbleweed or Arch Linux by janvhs
Description
Inspired by mkosi, the idea is to experiment with a declarative approach to defining Linux systems. A lot of tools already make it possible to manage a system's infrastructure using description files rather than manual invocation. Examples of this are systemd presets for managing enabled services, or the /etc/fstab file for describing how partitions should be mounted.
If we took inspiration from openSUSE MicroOS and its handling of the /etc/ directory, we could theoretically use systemd-sysupdate to swap out the /usr/ partition and create an A/B boot scheme, where the /usr/ partition is always freshly built according to a central system description. In the best case it would still be possible to utilise snapshots, but an A/B root scheme would be sufficient for the beginning. This way you could get the benefit of NixOS's declarative system definition, but still use the distro's package repositories and not have to deal with the overhead of Flakes or the Nix language.
Goals
- A simple and understandable system
- Check fitness of mkosi or write a simple extensible image builder tool for it
- Create a declarative system specification
- Create a system with swappable /usr/ partition
- Create an A/B root scheme
- Swap to the new system without reboot (kexec?)
Resources
- Ideas that have been floating around in my head for a while
- https://0pointer.net/blog/fitting-everything-together.html
- GNOME OS
- MicroOS
- systemd mkosi
- Vanilla OS
Testing and adding GNU/Linux distributions on Uyuni by juliogonzalezgil
Join the Gitter channel! https://gitter.im/uyuni-project/hackweek
Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines. It also manages configuration, can run audits, build image containers, monitor and much more!
Currently there are a few distributions that are completely untested on Uyuni or SUSE Manager (AFAIK), or that have not been tested for a long time. It could be interesting to find out how hard it would be to work with them and, if possible, to fix whatever is broken.
For newcomers, the easiest distributions are those based on DEB or RPM packages. Distributions with other package formats are doable, but will require adapting the Python and Java code to be able to sync and analyze such packages (and if salt does not support those packages, it will need changes as well). So if you want a distribution with other packages, make sure you are comfortable handling such changes.
No developer experience? No worries! We had non-developer contributors in the past, and we are ready to help as long as you are willing to learn. If you don't want to code at all, you can also help us prepare the documentation after someone else has the initial code ready, or you could also help with testing :-)
The idea is testing Salt and Salt-ssh clients, but NOT traditional clients, which are deprecated.
To consider that a distribution has basic support, we should cover at least (points 3-6 are to be tested for both salt minions and salt ssh minions):
- Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
- Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
- Package management (install, remove, update...)
- Patching
- Applying any basic salt state (including a formula)
- Salt remote commands
- Bonus point: Java part for product identification, and monitoring enablement
- Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
- Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
- Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)
If something is breaking: we can try to fix it, but the main idea is to research how well supported it is right now. Beyond that, it's up to each project member how much to hack :-)
- If you don't have knowledge about some of the steps: ask the team
- If you still don't know what to do: switch to another distribution and keep testing.
This card is for EVERYONE, not just developers. Seriously! We had people from other teams who were not developers helping, and they added support for Debian and for new SUSE Linux Enterprise and openSUSE Leap versions :-)
Pending
FUSS
FUSS is a complete GNU/Linux solution (server, client and desktop/standalone) based on Debian for managing an educational network.
https://fuss.bz.it/
Seems to be a Debian 12 derivative, so adding it could be quite easy.
- [W] Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
- [W] Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding OS to the bootstrap repository creator) --> Working for all 3 options (salt minion UI, salt minion bootstrap script and salt-ssh minion from the UI).
- [W] Package management (install, remove, update...) --> Installing a new package works, needs to test the rest.
- [I] Patching (if patch information is available, could require writing some code to parse it, but IIRC we have support for Ubuntu already). No patches detected. Do we support patches for Debian at all?
- [W] Applying any basic salt state (including a formula)
- [W] Salt remote commands
- [ ] Bonus point: Java part for product identification, and monitoring enablement
toptop - a top clone written in Go by dshah
Description
toptop is a clone of Linux's top CLI tool, but written in Go.
Goals
Learn more about Go (mainly bubbletea) and Linux
Resources
RISC-V emulator in GLSL capable of running Linux by favogt
Description
There are already numerous ways to run Linux and some programs through emulation in a web browser (e.g. x86 and riscv64 on https://bellard.org/jslinux/), but none use WebGL/WebGPU to run the emulation on the GPU.
I already made a PoC of an AArch64 (64-bit Arm) emulator in OpenCL, which is unfortunately hindered by a multitude of OpenCL compiler bugs on all platforms (Intel with beignet or the new compute runtime, and AMD with Mesa Clover and rusticl). With GLSL being more widespread and thus less broken than OpenCL, and with the less complex implementation requirements of RV32 (especially 32-bit integers instead of 64-bit), that should not be a major problem anymore.
Goals
Write a RISC-V system emulator in GLSL that is capable of booting Linux and running some userspace programs interactively. Ideally it is small enough to work on online test platforms like Shaderoo, with a custom texture that contains bootstrap code, kernel and initrd.
Minimum:
riscv32 without FPU (RV32 IMA) and MMU (µClinux), running Linux in M-mode and userspace in U-mode.
Stretch goals:
FPU support, S-Mode support with MMU, SMP. Custom web frontend with more possibilities for I/O (disk image, network?).
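To make the minimum goal more concrete, here is a rough sketch of what one step of such an emulator's inner loop looks like, written in C rather than GLSL and handling only two RV32I instructions (ADDI and ADD). The hart struct and the flat, word-addressed RAM starting at address 0 are assumptions for the sketch; loads, stores, CSRs, traps and the A/M extensions are omitted.

```c
#include <stdint.h>

/* Minimal RV32 hart state: 32 integer registers plus the program counter. */
struct hart {
	uint32_t x[32];   /* x0 is hard-wired to zero */
	uint32_t pc;
};

/* Execute one instruction from a flat, word-addressed RAM at address 0. */
static void step(struct hart *h, const uint32_t *mem)
{
	uint32_t insn   = mem[h->pc / 4];
	uint32_t opcode = insn & 0x7f;
	uint32_t rd     = (insn >> 7)  & 0x1f;
	uint32_t rs1    = (insn >> 15) & 0x1f;
	uint32_t rs2    = (insn >> 20) & 0x1f;
	int32_t  imm    = (int32_t)insn >> 20;   /* sign-extended I-type immediate */

	switch (opcode) {
	case 0x13:                               /* OP-IMM */
		if (((insn >> 12) & 0x7) == 0)   /* ADDI */
			h->x[rd] = h->x[rs1] + (uint32_t)imm;
		break;
	case 0x33:                               /* OP */
		if (((insn >> 12) & 0x7) == 0 && (insn >> 25) == 0)   /* ADD */
			h->x[rd] = h->x[rs1] + h->x[rs2];
		break;
	default:
		/* Everything else is omitted from this sketch. */
		break;
	}

	h->x[0] = 0;                             /* keep x0 reading as zero */
	h->pc += 4;                              /* no branches/jumps handled here */
}
```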
Resources
RISC-V ISA Specifications
Shaderoo
OpenGL 4.5 Quick Reference Card
Result as of Hackweek 2024
WebGL turned out to be insufficient: it only supports OpenGL ES 3.0, but imageLoad/imageStore needs ES 3.1. So we switched directions and had to write a native C++ host for the shaders.
As of Hackweek Friday, the kernel attempts to boot and outputs messages, but panics due to missing memory regions.
Since then, some bugs were fixed and enough hardware emulation implemented, so that now Linux boots with framebuffer support and it's possible to log in and run programs!
The repo with a demo video is available at https://github.com/Vogtinator/risky-v
Hacking on sched_ext by flonnegren
Description
Sched_ext upstream has some interesting issues open for grabs.
Goals
Send patches to sched_ext upstream
Also set up perfetto to trace some of the example schedulers.
Resources
https://github.com/sched-ext/scx
Create a DRM driver for VGA video cards by tdz
Yes, those VGA video cards. The goal of this project is to implement a DRM graphics driver for such devices. While actual hardware is hard to obtain or even run today, qemu emulates VGA output.
VGA has a number of limitations, which make this project interesting.
- There are only 640x480 pixels (or less) on the screen. That resolution is also a soft lower limit imposed by DRM. It's mostly a problem for desktop environments though.
- Desktop environments assume 16 million colors, but there are only 16 colors with VGA. VGA's 256 color palette is not available at 640x480. We can choose those 16 colors freely. The interesting part is how to choose them. We have to build a palette for the displayed frame and map each color to one of the palette's 16 entries. This is called dithering, and VGA's limitations are a good opportunity to learn about dithering algorithms.
- VGA has an interesting memory layout. Most graphics devices use linear framebuffers, which store the pixels byte by byte. VGA uses 4 bitplanes instead. Plane 0 holds all bits 0 of all pixels. Plane 1 holds all bits 1 of all pixels, and so on.
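To make the planar layout concrete, here is a small sketch (plain C, not DRM driver code) of how a linear buffer of 4-bit palette indices could be repacked into VGA's four bitplanes. The helper name and buffer layout are hypothetical.

```c
#include <stdint.h>
#include <string.h>

#define VGA_WIDTH   640
#define VGA_HEIGHT  480
#define VGA_PLANES  4
/* Each plane holds one bit per pixel, so 8 pixels fit in one byte. */
#define PLANE_SIZE  (VGA_WIDTH * VGA_HEIGHT / 8)

/*
 * Repack a linear buffer of 4-bit palette indices (one pixel per byte,
 * values 0-15) into VGA's four bitplanes: plane p receives bit p of
 * every pixel.
 */
static void linear_to_planar(const uint8_t *pixels,
			     uint8_t planes[VGA_PLANES][PLANE_SIZE])
{
	memset(planes, 0, (size_t)VGA_PLANES * PLANE_SIZE);

	for (unsigned int i = 0; i < VGA_WIDTH * VGA_HEIGHT; i++) {
		unsigned int byte = i / 8;
		unsigned int bit  = 7 - (i % 8);   /* leftmost pixel = MSB */

		for (unsigned int p = 0; p < VGA_PLANES; p++) {
			if (pixels[i] & (1u << p))
				planes[p][byte] |= (uint8_t)(1u << bit);
		}
	}
}
```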
The driver will probably not be useful to many people. But, if finished, it can serve as a test environment for low-level hardware. There's some interest in supporting old Amiga and Atari framebuffers in DRM. Those systems have similar limitations to VGA, but are harder to obtain and test with. With qemu, the VGA driver could fill this gap.
Apart from the Wikipedia entry, good resources on VGA are at osdev.net and FreeVGA.
Model checking the BPF verifier by shunghsiyu
Project Description
The BPF verifier plays a crucial role in securing the system (though less so now that unprivileged BPF is disabled by default in both upstream and SLES), and bugs in the verifier have led to privilege escalation vulnerabilities in the past (e.g. CVE-2021-3490).
One way to check whether the verifier has bugs is to use model checking (a formal verification technique): in other words, build an abstract model of how the verifier operates, and then see whether certain conditions can occur (e.g. an incorrect calculation during value tracking of registers) by giving both the model and the condition to a solver.
For the solver I will be using the Z3 SMT solver, since it provides a Python binding that's relatively easy to use.
Goal for this Hackweek
Learn how to use the Z3 Python binding (i.e. Z3Py) to build a model of (part of) the BPF verifier, probably the part related to value tracking using tristate numbers (aka tnum), and then check that the algorithm works as intended.
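This is not the Z3 model itself, but a small C illustration of the property such a model would check: for tnum addition, every concrete sum must stay inside the abstract result. The struct mirrors the kernel's struct tnum (kernel/bpf/tnum.c) and the addition is adapted from its tnum_add(); the brute force is truncated to 4-bit words purely to keep it fast, whereas the Z3 model would cover the full 64-bit semantics symbolically.

```c
#include <stdint.h>
#include <stdio.h>

/* Tristate number: bit i is known iff mask bit i is 0, and then equals
 * bit i of value. Mirrors struct tnum from kernel/bpf/tnum.c. */
struct tnum { uint8_t value; uint8_t mask; };

#define BITS 0xfu	/* work on 4-bit words to keep the search tiny */

/* A concrete x is represented by t iff all of x's known bits match t.value. */
static int tnum_contains(struct tnum t, uint8_t x)
{
	return ((x & ~t.mask) & BITS) == t.value;
}

/* Addition on tnums, adapted from the kernel's tnum_add(), truncated to 4 bits. */
static struct tnum tnum_add(struct tnum a, struct tnum b)
{
	uint8_t sm = (a.mask + b.mask) & BITS;
	uint8_t sv = (a.value + b.value) & BITS;
	uint8_t sigma = (sm + sv) & BITS;
	uint8_t chi = sigma ^ sv;
	uint8_t mu = (chi | a.mask | b.mask) & BITS;
	struct tnum r = { (uint8_t)(sv & ~mu & BITS), mu };
	return r;
}

int main(void)
{
	/* Soundness: every concrete x in a and y in b must give an x + y
	 * contained in tnum_add(a, b). Checked exhaustively on 4 bits. */
	for (unsigned av = 0; av <= BITS; av++)
	for (unsigned am = 0; am <= BITS; am++) {
		if (av & am) continue;		/* known bits live in value only */
		for (unsigned bv = 0; bv <= BITS; bv++)
		for (unsigned bm = 0; bm <= BITS; bm++) {
			if (bv & bm) continue;
			struct tnum a = { (uint8_t)av, (uint8_t)am };
			struct tnum b = { (uint8_t)bv, (uint8_t)bm };
			struct tnum s = tnum_add(a, b);

			for (unsigned x = 0; x <= BITS; x++) {
				if (!tnum_contains(a, (uint8_t)x)) continue;
				for (unsigned y = 0; y <= BITS; y++) {
					if (!tnum_contains(b, (uint8_t)y)) continue;
					if (!tnum_contains(s, (uint8_t)((x + y) & BITS))) {
						printf("unsound: a=(%u,%u) b=(%u,%u)\n",
						       av, am, bv, bm);
						return 1;
					}
				}
			}
		}
	}
	puts("tnum_add is sound on 4-bit words");
	return 0;
}
```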
Resources
- Formal Methods for the Informal Engineer: Tutorial #1 - The Z3 Theorem Prover and its accompanying notebook are a great introduction to Z3
- Has a section specifically on model checking
- Software Verification and Analysis Using Z3 a great example of using Z3 for model checking
- Sound, Precise, and Fast Abstract Interpretation with Tristate Numbers - existing work that uses formal verification to prove that the multiplication helper used for value tracking works as intended
- [PATCH v5 net-next 00/12] bpf: rewrite value tracking in verifier - initial patch set that adds tristate number to the verifier
Modularization and Modernization of cifs.ko for Enhanced SMB Protocol Support by hcarvalho
Creator:
Enzo Matsumiya ematsumiya@suse.de @ SUSE Samba team
Members:
Henrique Carvalho henrique.carvalho@suse.com @ SUSE Samba team
Description
Split cifs.ko into two separate modules: one for SMB 1.0 and 2.0.x, and another for SMB 2.1, 3.0, and 3.1.1.
Goals
Primary
Start phasing out/deprecating older SMB versions
Secondary
- Clean up of the code (with focus on the newer versions)
- Update cifs-utils
- Update documentation
- Improve backport workflow (see below)
Technical details
Ideas for the implementation.
- fs/smb/client/{old,new}.c to generate the respective modules
- Maybe don't create separate folders? (re-evaluate as things progress!)
- Remove server->{ops,vals} if possible
- Clean up fs_context.* -- merge duplicate options into one, handle them in userspace utils
- Reduce code in smb2pdu.c -- tons of functions with very similar init/setup -> send/recv -> handle/free flow
- Restructure multichannel
- Treat initial connection as "channel 0" regardless of multichannel enabled/negotiated status, proceed with extra channels accordingly
- Extra channels just point to "channel 0" as the primary server, no need to allocate an extra TCPServerInfo for each one
- Authentication mechanisms
- Modernize algorithms (references: himmelblau, IAKERB/Local KDC, SCRAM, oauth2 (Azure), etc.)
OIDC Loginproxy by toe
Description
Reverse proxies can be a useful option to separate authentication logic from application logic. SUSE and openSUSE use "loginproxies" as an authentication layer in front of several services.
Currently, loginproxies exist which support LDAP authentication or SAML authentication.
Goals
The goal of this Hack Week project is to create another loginproxy which supports OpenID Connect authentication and can then act as a drop-in replacement for the existing LDAP or SAML loginproxies.
Testing is intended to focus on the integration with OIDC IDPs from Okta, KanIDM and Authentik.
Resources
CVE portal for SUSE Rancher products by gmacedo
Description
Currently it's a bit difficult for users to quickly see the list of CVEs affecting images in Rancher, RKE2, Harvester and Longhorn releases. Users need to individually look for each CVE in the SUSE CVE database page - https://www.suse.com/security/cve/ . This is not optimal, because those CVE pages are a bit hard to read and contain data for all SLE and BCI products too, making it difficult to easily see only the CVEs affecting the latest release of Rancher, for example. We understand that certain customers are only looking for CVE data for Rancher and not SLE or BCI.
Goals
The objective is to create a simple to read and navigate page that contains only CVE data related to Rancher, RKE2, Harvester and Longhorn, where it's easy to search by a CVE ID, an image name or a release version. The page should also provide the raw data as an exportable CSV file.
It must be an MVP with the minimal amount of effort/time invested, but still providing great value to our users and saving the time that the Rancher Security team currently wastes by manually sharing such data. It might not be long-lived, as it can be replaced in 2-3 years with a better SUSE-wide solution.
Resources
- The page must be simple and easy to read.
- The UI/UX must be as straightforward as possible with minimal visual noise.
- The content must be created automatically from the raw data that we already have internally.
- It must be updated automatically on a daily basis and on ad-hoc runs (when needed).
- The CVE status must be aligned with VEX.
- The raw data must be exportable as CSV file.
- Ideally it will be written in Go or pure Shell script with basic HTML and no external dependencies in CSS or JS.
Kanidm: A safe and modern IDM system by firstyear
Kanidm is an IDM system written in Rust for modern systems authentication. The github repo has a detailed "getting started" on the readme.
In addition, Kanidm has spawned a number of adjacent projects in the Rust ecosystem such as LDAP, Kerberos, Webauthn, and cryptography libraries.
In this hack week, we'll be working on Quokca, a certificate authority that supports PKCS11/TPM storage of keys, issuance of PIV certificates, and ACME without the feature gatekeeping implemented by other CAs like smallstep.
For anyone who wants to participate in Kanidm, we have documentation and developer guides which can help.
I'm happy to help and share more, so please get in touch!
Bot to identify reserved data leak in local files or when publishing on remote repository by mdati
Description
The scope here is to prevent reserved or generally "unwanted" data from being pushed and saved to a public repository, e.g. on GitHub, causing disclosure or leaking of confidential information.
The definition of reserved or "unwanted" data may vary depending on the context: sometimes secret keys or passwords are stored in data or configuration files, or hardcoded in source code, and depending on the scope of the archive and the required level of security this can be intended and permitted, or not at all.
The main targets here are secrets such as registration keys or passwords, to be detected and managed locally or in a CI pipeline.
Goals
Detection:
- Local detection: detect secret words present in local files;
- Remote detection: detect secrets in files, in pipelines, that are about to be transferred to a remote repository, e.g. via git push;
Reporting:
- report the result of the detection on stderr and/or in log files, taking care to exclude the secret values themselves.
Action:
- Manage the detection by either deleting or masking the impacted code, deleting/moving the file itself, or simply reporting it.
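The project repository linked in the Resources below is the reference implementation; purely as an illustration of the detection and reporting idea, here is a small stand-alone C sketch using POSIX regular expressions. The patterns are hypothetical examples, the secret values are never echoed, and the non-zero exit status is what a git pre-push hook could use to block the push.

```c
#include <regex.h>
#include <stdio.h>

#define NPATTERNS 3

/* Hypothetical example patterns; the real project keeps its patterns in a
 * separate module so they can be updated or customized on demand. */
static const char *patterns[NPATTERNS] = {
	"password[[:space:]]*=",
	"api[_-]?key[[:space:]]*=",
	"BEGIN (RSA|OPENSSH) PRIVATE KEY",
};

/* Scan one file line by line; report file and line number on stderr but
 * never print the matched secret value itself. Returns the number of hits. */
static int scan_file(const char *path, regex_t *res)
{
	FILE *f = fopen(path, "r");
	if (!f) {
		perror(path);
		return 0;
	}

	char line[4096];
	int hits = 0;

	for (int lineno = 1; fgets(line, sizeof(line), f); lineno++)
		for (int i = 0; i < NPATTERNS; i++)
			if (regexec(&res[i], line, 0, NULL, 0) == 0) {
				fprintf(stderr, "%s:%d: possible secret (pattern %d)\n",
					path, lineno, i);
				hits++;
			}

	fclose(f);
	return hits;
}

int main(int argc, char **argv)
{
	regex_t res[NPATTERNS];
	int hits = 0;

	for (int i = 0; i < NPATTERNS; i++)
		if (regcomp(&res[i], patterns[i], REG_EXTENDED | REG_ICASE | REG_NOSUB))
			return 2;

	for (int i = 1; i < argc; i++)
		hits += scan_file(argv[i], res);

	/* A non-zero exit status lets a git pre-push hook block the push. */
	return hits ? 1 : 0;
}
```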
Resources
- Project repository, published on Github (link): m-dati/hkwk24;
- Reference folder: hkwk24/chksecret;
- First pull request (link): PR#1;
- Second PR, for improvements: PR#2;
- README.md and TESTS.md documentation files available in the repo root;
- Test subproject repository, for testing CI on push [TBD].
Notes
We use some example secret-word patterns here, which can still be improved.
The various patterns matching the reserved words to detect are kept in a separate module, so they can be updated or customized on demand.
[Legend: TBD = to be done]