Problem statement
Once a kernel is built, a developer or janitor may want to boot it for various reasons: performing a simple boot test, running tests and workloads from user space, or simply playing around in a shell. However, to our knowledge no easy-to-use, descriptive tool exists for those tasks.
We talked to kernel developers and were told to have a look at the following resources:
The approach
We plan to address this issue in the upcoming Hackweek. Our idea is to leverage LinuxKit as a driver to boot a given kernel image in different environments (QEMU, Hyper-V, VMware and public clouds). As LinuxKit is container-based, it is trivial to boot the kernel with rootfs images from all kinds of distributions, and it is easy to create custom rootfs images.
The tool we seek to implement should wrap everything up into something useful for developers and CI systems to use from the command line as well as from configuration files. The benefits of using a container-based infrastructure include:
Reproducibility: We bundle kernel images together with the desired rootfs and store them for a given amount of time. Re-running and re-creating those images becomes trivial.
Declarative approach: All steps to create the desired image are baked into configuration files. The benefits are again reproducibility and documentation.
Flexibility: In theory, we can bundle any kernel image with any rootfs and add as many files, binaries and directories on top as we please. Supporting different kinds of environments, including public clouds, makes the tool attractive for a broader audience.
Looking for hackers with the skills:
This project is part of:
Hack Week 16
Similar Projects
toptop - a top clone written in Go by dshah
Description
toptop is a clone of Linux's top CLI tool, but written in Go.
Goals
Learn more about Go (mainly bubbletea) and Linux
Resources
Contributing to Linux Kernel security by pperego
Description
A couple of weeks ago, I found this blog post by Gustavo Silva, a Linux Kernel contributor.
I have always wanted to get back into hacking the Linux Kernel, so I asked for access to the Coverity Scan dashboard, and I want to contribute to the Linux Kernel by fixing some minor issues.
I also want to create a Linux Kernel fuzzing lab using QEMU and syzkaller.
Goals
- Fix at least 2 security bugs
- Create the fuzzing lab and have it running
The story so far
- Day 1: setting up a virtual machine for kernel development using Tumbleweed. Reading a lot of documentation, getting familiar with the Coverity dashboard and with the procedures to submit a kernel patch
- Day 2: I read a lot of documentation and triaged some findings on the Coverity SAST dashboard. I can confirm that SAST tools are great false-positive generators, even for low-hanging fruit.
- Day 3: Working on trivial changes after I read this blog post: https://www.toblux.com/posts/2024/02/linux-kernel-patches.html. I still have to get familiar with the patch preparation and submission process.
  - First trivial patch sent: using the str_true_false() macro instead of hard-coded strings in a staging driver for an LCD display (see the illustrative snippet under "The patches" below)
  - Fix for a dereference-before-NULL-check issue discovered by Coverity (CID 1601566): https://scan7.scan.coverity.com/#/project-view/52110/11354?selectedIssue=1601566
- Day 4: Triaging more issues found by Coverity.
- The patch for CID 1601566 was refused. The check against the NULL pointer was pointless so I prepared a version 2 of the patch removing the check.
- Fixed another dereference before NULL check in the iwl_mvm_parse_wowlan_info_notif() routine (CID 1601547). This one had already been submitted by another kernel hacker :(
- Day 5: Wrapping up. I had to do some minor rework on the patch for CID 1601566. I also got a stalker bothering me in private emails; people I interacted with advised me that he is a well-known nuisance. Markus Elfring, for the record.
Wrapping up: being back to kernel hacking is amazing and I don't want to stop. My battery pack is completely drained, but changing scope gave me a great twist, and I really want to keep feeling this energy instead of doing a single task for months.
I failed to set up the fuzzing lab, and I was too optimistic about the patch submission process.
The patches
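The actual patches are not reproduced here. As a rough, hypothetical illustration only (none of this is the real driver code), the two recurring patterns from the week are: replacing hard-coded "true"/"false" strings with the kernel's str_true_false() helper from <linux/string_choices.h>, and spotting a NULL check that happens only after the pointer has already been dereferenced.

```c
/* Hypothetical illustration only -- not the submitted patches. */
#include <linux/string_choices.h>	/* str_true_false() */
#include <linux/seq_file.h>

/* Pattern 1: use str_true_false() instead of hard-coded strings. */
static void lcd_report_state(struct seq_file *m, bool enabled)
{
	/* Before: seq_printf(m, "backlight: %s\n", enabled ? "true" : "false"); */
	seq_printf(m, "backlight: %s\n", str_true_false(enabled));
}

/* Pattern 2: a dereference before a NULL check, as flagged by Coverity. */
static int parse_notif(struct notif *n)	/* struct notif is made up for the example */
{
	int len = n->len;	/* dereference happens here ... */

	if (!n)			/* ... so this check is useless (or the dereference
				 * has to move below it) */
		return -EINVAL;

	return len;
}
```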
Testing and adding GNU/Linux distributions on Uyuni by juliogonzalezgil
Join the Gitter channel! https://gitter.im/uyuni-project/hackweek
Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines. It also manages configuration, can run audits, build image containers, monitor and much more!
Currently there are a few distributions that are completely untested on Uyuni or SUSE Manager (AFAIK), or that have not been tested for a long time. It could be interesting to find out how hard it would be to work with them and, if possible, fix whatever is broken.
For newcomers, the easiest distributions are those based on DEB or RPM packages. Distributions with other package formats are doable, but will require adapting the Python and Java code to be able to sync and analyze such packages (and if salt does not support those packages, it will need changes as well). So if you want a distribution with other packages, make sure you are comfortable handling such changes.
No developer experience? No worries! We had non-developer contributors in the past, and we are ready to help as long as you are willing to learn. If you don't want to code at all, you can also help us prepare the documentation after someone else has the initial code ready, or you could also help with testing :-)
The idea is to test Salt and Salt-ssh clients, but NOT traditional clients, which are deprecated.
To consider that a distribution has basic support, we should cover at least (points 3-6 are to be tested for both salt minions and salt ssh minions):
- Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
- Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding OS to the bootstrap repository creator)
- Package management (install, remove, update...)
- Patching
- Applying any basic salt state (including a formula)
- Salt remote commands
- Bonus point: Java part for product identification, and monitoring enablement
- Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
- Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
- Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)
If something is breaking: we can try to fix it, but the main idea is to research how well supported it is right now. Beyond that it's up to each project member how much to hack :-)
- If you don't have knowledge about some of the steps: ask the team
- If you still don't know what to do: switch to another distribution and keep testing.
This card is for EVERYONE, not just developers. Seriously! We had people from other teams helping that were not developers, and added support for Debian and new SUSE Linux Enterprise and openSUSE Leap versions :-)
Pending
FUSS
FUSS is a complete GNU/Linux solution (server, client and desktop/standalone) based on Debian for managing an educational network.
https://fuss.bz.it/
Seems to be a Debian 12 derivative, so adding it could be quite easy.
- [W] Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
- [W] Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding OS to the bootstrap repository creator) --> Working for all 3 options (salt minion UI, salt minion bootstrap script and salt-ssh minion from the UI).
- [W] Package management (install, remove, update...) --> Installing a new package works, needs to test the rest.
- [I] Patching (if patch information is available, could require writing some code to parse it, but IIRC we have support for Ubuntu already). No patches detected. Do we support patches for Debian at all?
- [W] Applying any basic salt state (including a formula)
- [W] Salt remote commands
- [ ] Bonus point: Java part for product identification, and monitoring enablement
Explore simple and distro-independent declarative Linux starting on Tumbleweed or Arch Linux by janvhs
Description
Inspired by mkosi, the idea is to experiment with a declarative approach to defining Linux systems. A lot of tools already make it possible to manage a system's infrastructure using description files rather than manual invocation. Examples of this are systemd presets for managing enabled services, or the /etc/fstab file for describing how partitions should be mounted.
If we took inspiration from openSUSE MicroOS and its handling of the /etc/ directory, we could theoretically use systemd-sysupdate to swap out the /usr/ partition and create an A/B boot scheme, where the /usr/ partition is always freshly built according to a central system description. In the best case it would still be possible to utilise snapshots, but an A/B root scheme would be sufficient for the beginning. This way you could get the benefit of NixOS's declarative system definition, but still use the distro's package repositories and not have to deal with the overhead of Flakes or the Nix language.
Goals
- A simple and understandable system
- Check fitness of mkosi or write a simple extensible image builder tool for it
- Create a declarative system specification
- Create a system with swappable /usr/ partition
- Create an A/B root scheme
- Swap to the new system without reboot (kexec?)
Resources
- Ideas that have been floating around in my head for a while
- https://0pointer.net/blog/fitting-everything-together.html
- GNOME OS
- MicroOS
- systemd mkosi
- Vanilla OS
VulnHeap by r1chard-lyu
Description
The VulnHeap project is dedicated to the in-depth analysis and exploitation of vulnerabilities within heap memory management. It focuses on understanding the intricate workflow of heap allocation, chunk structures, and bin management, which are essential to identifying and mitigating security risks.
Goals
- Familiarize with the heap
  - Heap workflow
  - Chunk and bin structure
- Vulnerabilities
  - Use after free (UAF)
  - Heap overflow
  - Double free
- Use Docker to create a vulnerable environment and apply techniques to exploit it
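As a minimal, self-contained sketch of the first vulnerability class listed above (use after free), assuming a glibc-style allocator; the exact chunk-reuse behaviour depends on the malloc implementation and version:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	char *msg = malloc(32);
	if (!msg)
		return 1;
	strcpy(msg, "hello");

	free(msg);                    /* chunk goes back to the allocator (e.g. tcache) */

	char *attacker = malloc(32);  /* same size: likely reuses the freed chunk */
	if (!attacker)
		return 1;
	strcpy(attacker, "overwritten");

	printf("%s\n", msg);          /* use after free: the dangling pointer is read */
	free(attacker);
	return 0;
}
```

In a real target the reallocated chunk would hold attacker-controlled data such as a function pointer, which is what makes the dangling access exploitable.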
Resources
- https://heap-exploitation.dhavalkapil.com/divingintoglibc_heap
- https://raw.githubusercontent.com/cloudburst/libheap/master/heap.png
- https://github.com/shellphish/how2heap?tab=readme-ov-file
Modularization and Modernization of cifs.ko for Enhanced SMB Protocol Support by hcarvalho
Creator:
Enzo Matsumiya ematsumiya@suse.de @ SUSE Samba team
Members:
Henrique Carvalho henrique.carvalho@suse.com @ SUSE Samba team
Description
Split cifs.ko into two separate modules: one for SMB 1.0 and 2.0.x, and another for SMB 2.1, 3.0, and 3.1.1.
Goals
Primary
Start phasing out/deprecation of older SMB versions
Secondary
- Clean up of the code (with focus on the newer versions)
- Update cifs-utils
- Update documentation
- Improve backport workflow (see below)
Technical details
Ideas for the implementation.
- fs/smb/client/{old,new}.c to generate the respective modules
- Maybe don't create separate folders? (re-evaluate as things progress!)
- Remove server->{ops,vals} if possible
- Clean up fs_context.* -- merge duplicate options into one, handle them in userspace utils
- Reduce code in smb2pdu.c -- tons of functions with very similar init/setup -> send/recv -> handle/free flow
- Restructure multichannel
- Treat initial connection as "channel 0" regardless of multichannel enabled/negotiated status, proceed with extra channels accordingly
- Extra channels just point to "channel 0" as the primary server, no need to allocate an extra TCP_Server_Info for each one
- Authentication mechanisms
- Modernize algorithms (references: himmelblau, IAKERB/Local KDC, SCRAM, oauth2 (Azure), etc.)
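As a very rough sketch of the split (hypothetical file and function names; the real cifs.ko internals such as filesystem registration and dialect ops are omitted), each of fs/smb/client/{old,new}.c would become its own module entry point:

```c
/* fs/smb/client/new.c -- hypothetical sketch, not actual cifs code. */
#include <linux/module.h>
#include <linux/init.h>

static int __init smb_new_init(void)
{
	/* register_filesystem(), SMB 2.1/3.0/3.1.1 dialect ops, etc. would go here */
	pr_info("smb_new: SMB 2.1+ client loaded\n");
	return 0;
}

static void __exit smb_new_exit(void)
{
	pr_info("smb_new: SMB 2.1+ client unloaded\n");
}

module_init(smb_new_init);
module_exit(smb_new_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SMB 2.1/3.0/3.1.1 client (illustrative sketch)");
```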
FizzBuzz OS by mssola
Project Description
FizzBuzz OS (or just fbos) is an idea I've had in order to better grasp the low-level fundamentals of a RISC-V machine. In practice, I'd like to build a small Operating System kernel that is able to launch three processes: one that simply prints "Fizz", another that prints "Buzz", and a third that prints "FizzBuzz". These processes are unaware of each other and it's up to the kernel to schedule them using the timer interrupts provided by openSBI (fizz on % 3 seconds, buzz on % 5 seconds, and fizzbuzz on % 15 seconds).
This kernel provides just one system call, write, which allows any program to pass the string to be written to stdout.
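A hypothetical sketch of the scheduling decision described above (not the actual fbos code): on each SBI timer interrupt the kernel checks the elapsed seconds and picks which of the three processes, if any, to run.

```c
/* Hypothetical sketch -- not the actual fbos implementation. */
#include <stdint.h>

enum proc_id { PROC_NONE, PROC_FIZZ, PROC_BUZZ, PROC_FIZZBUZZ };

/* Called from the timer-interrupt handler with the seconds elapsed since boot. */
static enum proc_id pick_process(uint64_t seconds)
{
	if (seconds == 0)
		return PROC_NONE;
	if (seconds % 15 == 0)
		return PROC_FIZZBUZZ;  /* its only output goes through the write syscall */
	if (seconds % 5 == 0)
		return PROC_BUZZ;
	if (seconds % 3 == 0)
		return PROC_FIZZ;
	return PROC_NONE;              /* nothing to schedule on this tick */
}
```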
This project is free software and you can find it here.
Goal for this Hackweek
- Better understand the RISC-V SBI interface.
- Better understand RISC-V in privileged mode.
- Have fun.
Resources
Results
The project was a resounding success: lots of learning, and the initial target was met.
Improve various phones kernel mainline support (Qualcomm, Exynos, MediaTek) by pvorel
Similar to previous hackweeks (https://hackweek.opensuse.org/projects/improve-qualcomm-soc-msm8994-slash-msm8992-kernel-mainline-support, https://hackweek.opensuse.org/projects/test-mainline-kernel-on-an-older-qualcomm-soc-msm89xx-explore-mainline-kernel-qualcomm-mainlining), the goal is to improve mainline kernel support for various phones.
Result
In the end I concentrated again on msm8994:
- 507aae9a3549c ("arm64: dts: qcom: msm8994-angler: Enable power key, volume up/down") (will be in kernel 6.14)
- Testing of c910544d22347 ("arm64: dts: qcom: msm8994: Describe USB interrupts") (will be in kernel 6.14)
- WIP USB support for msm8994
Create DRM drivers for VESA and EFI framebuffers by tdz
Description
We already have simpledrm for firmware framebuffers, but the driver was originally written for ARM boards, not PCs, and it is already overloaded with code to support both use cases. At the same time it is missing features that would be possible with VESA and EFI, such as palette modes or EDID support. We should have dedicated DRM drivers for the VESA and EFI interfaces. The infrastructure already exists, and initial drivers can be forked from simpledrm.
Goals
- Initially, a bare driver for VESA or EFI should be created. It can take functionality from simpledrm.
- Then we can begin to add additional features. The boot loader can provide EDID data. With VGA hardware, VESA can support paletted modes or color management. Example code exists in vesafb.
early stage kdump support by mbrugger
Project Description
When we experience an early boot crash, we are not able to analyze the kernel dump, as user space wasn't yet able to load the crash system. The idea is to compile the crash system into the host kernel (think of an initramfs) so that we can create a kernel dump really early in the boot process.
Goal for the Hackweeks
- Investigate if this is possible and the implications it would have (done in HW21)
- Hack up a PoC (done in HW22 and HW23)
- Prepare RFC series (given it's only one week, we are entering wishful-thinking territory here).
update HW23
- I was able to include the crash kernel into the kernel Image.
- I'll need to find a way to load that from init/main.c:start_kernel(), probably after kcsan_init()
- A workaround for a smoke test was to hack the kexec_file_load() system call, which has two problems:
  - My initramfs in the production kernel does not have a new enough kexec version; that's not a blocker, but that's where the week ended
  - As the crash kernel is part of init.data, it will already be stale once I can call kexec_file_load() from user space.
The solution is probably to rewrite the PoC so that the invocation can be done from init.text (that's my theory), but I'm not sure if I can reuse the kexec infrastructure in the kernel from there, which I rely on heavily.
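A sketch of the idea, assuming the names used above (start_kernel(), kcsan_init(), and the kexec_early_dump() entry point mentioned in the TODOs below); this is not the actual PoC code:

```c
/* init/main.c -- hypothetical sketch of where the early loading would hook in. */
asmlinkage __visible void __init start_kernel(void)
{
	/* ... existing early initialization ... */

	kcsan_init();

	/*
	 * Load the crash kernel that was linked into the host kernel image,
	 * so a dump can be captured long before user space (and kexec-tools)
	 * is available.
	 */
	kexec_early_dump();

	/* ... rest of start_kernel(), eventually spawning init ... */
}
```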
update HW24
- Day 1
  - rebased on v6.12 with no problems other than me breaking the config
  - setting up a new compilation and qemu/virtme env
  - getting desperate as nothing works that used to work
- Day 2
  - getting to call the invocation of loading the early kernel from __init after kcsan_init()
- Day 3
  - fix problem of memdup not being able to alloc so much memory... use 64K page sizes for now
  - code refactoring
  - I'm now able to load the crash kernel
  - When using virtme I can boot into the crash kernel, although it doesn't boot completely (major milestone!); crash in elfcorehdr_read_notes()
- Day 4
  - crash system crashes (no pun intended) in copy_oldmem_page(); will need to understand elfcorehdr...
  - call path: vmcore_init() -> parse_crash_elf_headers() -> elfcorehdr_read() -> read_from_oldmem() -> copy_oldmem_page() -> copy_to_iter()
- Day 5
  - hacking arch/arm64/kernel/crash_dump.c:copy_oldmem_page() to see if the crash system really starts. It does.
  - fun fact: retested with more reserved memory and with UEFI FW, host kernel crashes in init but directly starts the crash kernel, so it works (somehow) \o/
TODOs
- fix elfcorehdr so that we actually can make use of all this...
- test where in the boot __init() chain we can/should call kexec_early_dump()
Yearly Quality Engineering Ask me Anything - AMA for not-engineering by szarate
Goal
Get a closer look at how developers work on the Engineering team (R & D) of SUSE, and close the collaboration gap between GSI and Engineering
Why?
Santiago can go over different development workflows, and can do a deep dive into how Quality Engineering works (think of my QE team, the advocates for your customers). The idea of this session is to help open the doors to opportunities for collaboration and broaden our understanding of SUSE as a whole.
Objectives
- Give $audience a small window into how some things at engineering are done, getting some questions answered either on the spot or within days
- Give Santiago Zarate from Quality Engineering a look into how $audience sees the engineering departments, and explore possibilities for further collaboration
How?
By running an "Ask me Anything" session, which is a format of a kind of open Q & A session, where participants ask the host multiple questions.
How to make it happen?
I'm happy to join a call, or we can do it async (online/in person is more fun). Ping me over email or Slack and let's make the magic happen! It doesn't need to be during hackweek, but we gotta kickstart the idea during hackweek ;)
Rules
The rules are simple: the more questions, the more fun it will be. While this will only be a window into engineering, it can also help all of us get to a similar level of understanding of the processes behind our respective areas of the organization.
Dynamics
The host will monitor the questions on some pre-agreed page and try to answer them to the best of their knowledge. If a question is too difficult or the host doesn't have the answer, he will do his best to provide one at a later date.
Attendees are encouraged to add questions beforehand; in case there aren't any, we will look at how Quality Engineering tests new products or performs regression testing.
Agenda
- Introduction of Santiago Zarate, Product Owner of Quality Engineering Core team
- Introduction of the Group/Team/Persons interested
- Ice breaker
- AMA time! Add your questions $PAGE
- Looking at QE workflows:
  - How a maintenance update is tested before being released to our customers
  - How products in development are tested before being made generally available
- Engineering Opportunity Board
Automated Test Report reviewer by oscar-barrios
Description
In the SUMA/Uyuni team we spend a lot of time reviewing test reports: analyzing each failing test case, checking whether the test is flaky, checking logs, etc.
Goals
Speed up the review by automating some parts with AI, so that we can consume a summary of the report that is meaningful for the reviewer.
Resources
No idea about the resources yet, but we will make use of:
- HTML/JSON Report (text + screenshots)
- The Test Suite Status GitHub board (via API)
- The environment tested (via SSH)
- The test framework code (via files)
Drag Race - comparative performance testing for pull requests by balanza
Description
«Sophia, a backend developer, submitted a pull request with optimizations for a critical database query. Once she pushed her code, an automated load test ran, comparing her query against the main branch. Moments later, she saw a new comment automatically added to her PR: the comparison results showed reduced execution time and improved efficiency. Smiling, Sophia messaged her team, “Performance gains confirmed!”»
Goals
- To have a convenient and ergonomic framework to describe test scenarios, including environment and seed
- To compare results from different tests
- To have a GitHub Action that executes such tests in a CI environment
Resources
The MVP will be built on top of Preevy and K6.
Hack on isotest-ng - a rust port of isotovideo (os-autoinst aka testrunner of openQA) by szarate
Description
Some time ago, I managed to convince ByteOtter to hack something that resembles isotovideo, but in Rust; not because I believe that Perl is dead, but because there are certain limitations in the Perl code (how it was written), and it's always hard to add new functionality when it's about implementing a new backend or fixing bugs (along with people complaining that Perl is dead and that they don't like it).
In reality, I wanted to see if this could be done, and ByteOtter proved that it could be, while doing an amazing job at hacking a vnc console, and helping me understand better what RuPerl needs to work.
I plan to keep working on this for the next few years, and while I don't aim for feature completion or replacing isotovideo with isotest-ng (name in progress), I do plan to be able to use it on a daily basis, using specialized tooling with interfaces instead of reimplementing everything in the backend.
Todo
- Add make targets for testability, e.g. "spawn qemu and type"
- Add image search matching algorithm
- Add a Null test distribution provider
- Add a Perl Test Distribution Provider
- Fix unittests https://github.com/os-autoinst/isotest-ng/issues/5
- Research OpenTofu: how to add new hypervisors/bare metal to OpenTofu
- Add an interface to openQA cli
Goals
- Implement at least one of the above, prepare proposals for GSoC
- Boot a system via its BMC
Resources
See https://github.com/os-autoinst/isotest-ng
Technical talks at universities by agamez
Description
This project aims to empower the next generation of tech professionals by offering hands-on workshops on containerization and Kubernetes, with a strong focus on open-source technologies. By providing practical experience with these cutting-edge tools and fostering a deep understanding of open-source principles, we aim to bridge the gap between academia and industry.
For now, the scope is limited to Spanish universities, since we already have the contacts and have started some conversations.
Goals
- Technical Skill Development: equip students with the fundamental knowledge and skills to build, deploy, and manage containerized applications using open-source tools like Kubernetes.
- Open-Source Mindset: foster a passion for open-source software, encouraging students to contribute to open-source projects and collaborate with the global developer community.
- Career Readiness: prepare students for industry-relevant roles by exposing them to real-world use cases, best practices, and open-source in companies.
Resources
- Instructors: experienced open-source professionals with deep knowledge of containerization and Kubernetes.
- SUSE Expertise: leverage SUSE's expertise in open-source technologies to provide insights into industry trends and best practices.
Improve Development Environment on Uyuni by mbussolotto
Description
Currently, creating a dev environment for Uyuni might be complicated. The steps are:
- add the correct repo
- download packages
- configure your IDE (checkstyle, format rules, sonarlint....)
- setup debug environment
- ...
The current doc can be improved: some information is hard to find, some is completely missing.
Dev Containers might solve this situation.
Goals
Uyuni development in no time:
- using VSCode:
- settings.json should contain all settings (for all languages in Uyuni, with all checkstyle rules etc...)
- dev container should contain all dependencies
- setup debug environment
- implement a GitHub Workspace solution
- re-write documentation
Lots of pieces are already implemented: we need to connect them into a consistent solution.
Resources
- https://github.com/uyuni-project/uyuni/wiki
ADS-B receiver with MicroOS by epaolantonio
I would like to put one of my spare Raspberry Pis to good use, and what better way than to see what flies above my head at any time?
There are various ready-to-use distros already set up to provide feeder data to platforms like Flightradar24, ADS-B Exchange, FlightAware etc. The goal here would be to do it using MicroOS as a base, with containerized decoding of ADS-B data (via tools like dump1090) and a web frontend (tar1090).
Goals
- Create a working receiver using MicroOS as a base, and containers based on Tumbleweed
- Make it easy to install
- Optimize for maximum laziness (i.e. it should take care of itself with minimum intervention)
Resources
- 1x Small Board Computer capable of running MicroOS
- 1x RTL2832U DVB-T dongle
- 1x MicroSD card
- https://github.com/antirez/dump1090
- https://github.com/flightaware/dump1090 (dump1090 fork by FlightAware)
- https://github.com/wiedehopf/tar1090
Project status (2024-11-22)
So I'd say that I'm pretty satisfied with how it turned out. I've packaged readsb (as a replacement for dump1090), tar1090, tar1090-db and mlat-client (not used yet).
Current status:
- Able to set up a working receiver using combustion+ignition (web app based on Fuel Ignition)
- Able to feed data to various services using the Beast protocol (Airplanes.live, ADSB.fi, ADSB.lol, ADSBExchange.com, Flyitalyadsb.com, Planespotters.net)
- Able to feed to Flightradar24 (initial-setup available but NOT tested! I've only tested using a key I already had)
- Local web interface (tar1090) to easily visualize the results
- Cockpit pre-configured to ease maintenance
What's missing:
- MLAT (Multilateration) support. I've packaged mlat-client already, but I have to wire it up
- FlightAware support
Give it a go at https://g7.github.io/adsbreceiver/ !
Project links
- https://g7.github.io/adsbreceiver/
- https://github.com/g7/adsbreceiver
- https://build.opensuse.org/project/show/home:epaolantonio:adsbreceiver
Port the classic browser game HackTheNet to PHP 8 by dgedon
Description
The classic browser game HackTheNet from 2004 still runs on PHP 4/5 and MySQL 5 and needs a port to PHP 8 and e.g. MariaDB.
Goals
- Port the game to PHP 8 and MariaDB 11
- Create a container where the game server can simply be started/stopped
Resources
- https://github.com/nodeg/hackthenet
SUSE AI Meets the Game Board by moio
Use tabletopgames.ai’s open source TAG and PyTAG frameworks to apply Statistical Forward Planning and Deep Reinforcement Learning to two board games of our own design. On an all-green, all-open source, all-AWS stack!
Results: Infrastructure Achievements
We successfully built and automated a containerized stack to support our AI experiments. This included:
- a Fully-Automated, One-Command, GPU-accelerated Kubernetes setup: we created an OpenTofu-based script, tofu-tag, to deploy SUSE's RKE2 Kubernetes running on CUDA-enabled nodes in AWS, powered by openSUSE with GPU drivers and gpu-operator
- Containerization of the TAG and PyTAG frameworks: TAG (Tabletop AI Games) and PyTAG were patched for seamless deployment in containerized environments. We automated the container image creation process with GitHub Actions. Our forks (PRs upstream upcoming):
- ./deploy.sh and voilà - Kubernetes running PyTAG (k9s, above) with GPU acceleration (nvtop, below)
Results: Game Design Insights
Our project focused on modeling and analyzing two card games of our own design within the TAG framework:
- Game Modeling: We implemented models for Dario's "Bamboo" and Silvio's "Totoro" and "R3" games, enabling AI agents to play thousands of games ...in minutes!
- AI-driven optimization: By analyzing statistical data on moves, strategies, and outcomes, we iteratively tweaked the game mechanics and rules to achieve better balance and player engagement.
- Advanced analytics: Leveraging AI agents with Monte Carlo Tree Search (MCTS) and random action selection, we compared performance metrics to identify optimal strategies and uncover opportunities for game refinement.
- more about Bamboo on Dario's site
- more about R3 on Silvio's site (Italian, translation coming)
- more about Totoro on Silvio's site
A family picture of our card games in progress. From the top: Bamboo, Totoro, R3
Results: Learning, Collaboration, and Innovation
Beyond technical accomplishments, the project showcased innovative approaches to coding, learning, and teamwork:
- "Trio programming" with AI assistance: Our "trio programming" approach—two developers and GitHub Copilot—was a standout success, especially in handling slightly-repetitive but not-quite-exactly-copypaste tasks. Java as a language tends to be verbose and we found it to be fitting particularly well.
- AI tools for reporting and documentation: We extensively used AI chatbots to streamline writing and reporting. (Including writing this report! ...but this note was added manually during edit!)
- GPU compute expertise: Overcoming challenges with CUDA drivers and cloud infrastructure deepened our understanding of GPU-accelerated workloads in the open-source ecosystem.
- Game design as a learning platform: By blending AI techniques with creative game design, we learned not only about AI strategies but also about making games fun, engaging, and balanced.
Last but not least we had a lot of fun! ...and this was definitely not a chatbot generated line!
The Context: AI + Board Games