Avahi Integration and Network Connection
Project Description
This project aims to integrate Avahi service on a Linux server to manage and control network connections between mobile devices and televisions. The goal is to establish a 1-to-1 connection between specific mobile devices on the 192.168.10.x network and TVs on the 192.168.20.x network, acting as a controlled intermediary.
Why Start This Project?
The project is initiated to address the challenges of uncontrolled network connections facilitated by Avahi. By creating a custom C/C++ program, we aim to restrict and secure connections between mobile and TV networks, allowing specific devices to connect only to designated TVs.
Goal for this Hackweek
The primary objective for this Hackweek is to develop and implement the C/C++ program that will interface with Avahi, handling mDNS queries and unicast responses. We intend to demonstrate the controlled forwarding of packets between networks, achieving the goal of establishing a secure 1-to-1 connection.
To be more specific, for this Hackweek we want to:
- successfully install Avahi on the system
- check the installed headers under `/usr/include`: `avahi-client/`, `avahi-common/`, `avahi-compat-libdns_sd/`, `avahi-core/`, `avahi-libevent/`
- install the development package with `sudo zypper install avahi-compat-libdns_sd-devel`
- compile and run the example with `gcc -o avahi_example avahi_example.c -lavahi-client -lavahi-common`, then `./avahi_example`
- finally, verify the result with `avahi-browse -a`
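For orientation, here is a minimal sketch of what such an `avahi_example.c` could look like, using the avahi-client browsing API; the service type `_googlecast._tcp` is only an example (it is the type that shows up in the test output further down this page):

```c
/* Minimal sketch of a service browser using avahi-client.
 * Compile: gcc -o avahi_example avahi_example.c -lavahi-client -lavahi-common */
#include <stdio.h>
#include <avahi-client/client.h>
#include <avahi-client/lookup.h>
#include <avahi-common/simple-watch.h>
#include <avahi-common/error.h>

static AvahiSimplePoll *poll_loop = NULL;

static void client_callback(AvahiClient *c, AvahiClientState state, void *userdata) {
    if (state == AVAHI_CLIENT_FAILURE)
        avahi_simple_poll_quit(poll_loop);
}

/* Called for every service of the browsed type that appears or goes away. */
static void browse_callback(AvahiServiceBrowser *b, AvahiIfIndex interface,
                            AvahiProtocol protocol, AvahiBrowserEvent event,
                            const char *name, const char *type,
                            const char *domain, AvahiLookupResultFlags flags,
                            void *userdata) {
    if (event == AVAHI_BROWSER_NEW)
        printf("Found service '%s' of type '%s' in domain '%s'\n", name, type, domain);
    else if (event == AVAHI_BROWSER_FAILURE)
        avahi_simple_poll_quit(poll_loop);
}

int main(void) {
    int error;
    poll_loop = avahi_simple_poll_new();
    AvahiClient *client = avahi_client_new(avahi_simple_poll_get(poll_loop),
                                           0, client_callback, NULL, &error);
    if (!client) {
        fprintf(stderr, "avahi_client_new failed: %s\n", avahi_strerror(error));
        return 1;
    }
    AvahiServiceBrowser *browser = avahi_service_browser_new(
        client, AVAHI_IF_UNSPEC, AVAHI_PROTO_UNSPEC,
        "_googlecast._tcp", NULL, 0, browse_callback, NULL);
    if (!browser) {
        fprintf(stderr, "failed to create service browser\n");
        return 1;
    }
    avahi_simple_poll_loop(poll_loop); /* runs until a failure quits the loop */
    avahi_service_browser_free(browser);
    avahi_client_free(client);
    avahi_simple_poll_free(poll_loop);
    return 0;
}
```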
Resources
Other Resources
Keywords
- Avahi
- Network Connection
- mDNS
- Linux
- C/C++ Programming
- Hackweek
- Network Security
- Integration
This project is part of:
Hack Week 23
Activity
Comments
about 1 year ago by vojha
Scenario Overview
This scenario involves establishing a 1-to-1 connection between mobile devices (192.168.10.x network) and televisions (192.168.20.x network) through a Linux server acting as an intermediary. The goal is to allow specific mobile devices to discover and connect only to certain TVs, creating a controlled connection process between networks.
Problem and Objective
Avahi Disabled Scenario
When Avahi is disabled:
- Mobile devices on the 192.168.10.x network cannot discover any TVs on the 192.168.20.x network.
Avahi Enabled Scenario
When Avahi is enabled:
- All mobile devices can discover all TVs, allowing connections between any mobile and TV on the network.
Proposed Solution and Process
Create a C/C++ Program
The program will interface with Avahi, handling the detection and communication between mobile and TV networks. It identifies the source (mobile) and target (TV) IP addresses and manages mDNS queries.
Purpose of the C/C++ Program
When Avahi is disabled, the program will handle and manipulate mDNS queries, allowing a 1-to-1 connection. The program ensures controlled forwarding of packets between networks, establishing specific connections between mobile devices and TVs.
Steps to Implement
- Install and start the Avahi service.
- Create the C/C++ program to handle mDNS queries and unicast responses between the mobile and TV networks. The program will act as a proxy server, forwarding and restricting packets between the specified mobile and TV (a rough sketch of this core follows).
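The forwarding core of such a proxy could be sketched like this. This is a hedged illustration only, not the project's actual code: real mDNS proxying also needs per-interface handling and response rewriting, and the two IP addresses are placeholders invented for the example:

```c
/* Sketch of the forwarding core: join the mDNS multicast group, accept
 * queries only from one allowed mobile address, and relay them as unicast
 * to one designated TV. ALLOWED_MOBILE and TARGET_TV are placeholders. */
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define MDNS_PORT 5353
#define MDNS_GROUP "224.0.0.251"
#define ALLOWED_MOBILE "192.168.10.23" /* placeholder */
#define TARGET_TV      "192.168.20.42" /* placeholder */

int main(void) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    int yes = 1;
    setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(MDNS_PORT);
    bind(sock, (struct sockaddr *)&addr, sizeof(addr));

    /* Join the mDNS multicast group to receive the mobiles' queries. */
    struct ip_mreq mreq = {0};
    mreq.imr_multiaddr.s_addr = inet_addr(MDNS_GROUP);
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    struct sockaddr_in tv = {0};
    tv.sin_family = AF_INET;
    tv.sin_addr.s_addr = inet_addr(TARGET_TV);
    tv.sin_port = htons(MDNS_PORT);

    char buf[1500];
    for (;;) {
        struct sockaddr_in src;
        socklen_t len = sizeof(src);
        ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                             (struct sockaddr *)&src, &len);
        if (n <= 0)
            continue;
        /* Restrict: only forward traffic from the one allowed mobile. */
        if (strcmp(inet_ntoa(src.sin_addr), ALLOWED_MOBILE) != 0)
            continue;
        /* Relay the query as unicast to the designated TV. */
        sendto(sock, buf, (size_t)n, 0, (struct sockaddr *)&tv, sizeof(tv));
    }
    close(sock); /* not reached */
    return 0;
}
```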
Testing Procedure
Validate the C program's functionality by sending mDNS queries from mobile devices and forwarding them to the TV network. Test and confirm that the forwarded responses are unicast and specific to the intended mobile device.
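For testing without a real phone, a hand-rolled query can stand in for the mobile device. The following hypothetical test sender builds one mDNS PTR query for `_googlecast._tcp.local` in DNS wire format and sends it to the multicast group:

```c
/* Sketch of a test client: send one mDNS PTR query for
 * _googlecast._tcp.local, as a stand-in for a query from a mobile device. */
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void) {
    /* DNS header: ID 0, flags 0, QDCOUNT 1 (all fields big-endian). */
    unsigned char pkt[512] = {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0};
    size_t off = 12;
    const char *labels[] = {"_googlecast", "_tcp", "local", NULL};
    for (int i = 0; labels[i]; i++) {       /* QNAME as length-prefixed labels */
        size_t l = strlen(labels[i]);
        pkt[off++] = (unsigned char)l;
        memcpy(pkt + off, labels[i], l);
        off += l;
    }
    pkt[off++] = 0;                          /* root label terminator */
    pkt[off++] = 0; pkt[off++] = 12;         /* QTYPE = PTR (12) */
    pkt[off++] = 0; pkt[off++] = 1;          /* QCLASS = IN (1) */

    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_addr.s_addr = inet_addr("224.0.0.251"); /* mDNS multicast group */
    dst.sin_port = htons(5353);
    sendto(sock, pkt, off, 0, (struct sockaddr *)&dst, sizeof(dst));
    close(sock);
    return 0;
}
```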
Desired Outcomes & Future Works
The desired outcome is to establish controlled 1-to-1 connections between specific mobile devices and TVs without Avahi automatically forwarding mDNS packets. This will ensure a restricted and secure network connection, allowing specific mobile devices to connect only to certain TVs.
Note: Ensure that Avahi doesn't automatically forward packets unless the integration program is actively running.
Result and Outcome
I was able to run a simple service publication.
Service Publication
Publishes a service named "MyService" of type `_example._tcp` on all network interfaces and protocols (IPv4 and IPv6) on port 12345. This service can be discovered on the local network.
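A minimal sketch of that publication using the avahi-client entry-group API could look like the following (the actual program may differ in details):

```c
/* Sketch of the service publication described above, built on the
 * avahi-client entry-group API. */
#include <stdio.h>
#include <avahi-client/client.h>
#include <avahi-client/publish.h>
#include <avahi-common/simple-watch.h>
#include <avahi-common/error.h>

static AvahiSimplePoll *poll_loop = NULL;

static void entry_group_callback(AvahiEntryGroup *g, AvahiEntryGroupState state,
                                 void *userdata) {
    if (state == AVAHI_ENTRY_GROUP_ESTABLISHED)
        printf("Service 'MyService' successfully published.\n");
}

static void client_callback(AvahiClient *c, AvahiClientState state, void *userdata) {
    if (state != AVAHI_CLIENT_S_RUNNING)
        return; /* wait until the daemon connection is up */
    AvahiEntryGroup *group = avahi_entry_group_new(c, entry_group_callback, NULL);
    if (!group)
        return;
    /* All interfaces, IPv4 and IPv6, port 12345, no TXT records. */
    avahi_entry_group_add_service(group, AVAHI_IF_UNSPEC, AVAHI_PROTO_UNSPEC,
                                  0, "MyService", "_example._tcp",
                                  NULL, NULL, 12345, NULL);
    avahi_entry_group_commit(group);
}

int main(void) {
    int error;
    poll_loop = avahi_simple_poll_new();
    AvahiClient *client = avahi_client_new(avahi_simple_poll_get(poll_loop),
                                           0, client_callback, NULL, &error);
    if (!client) {
        fprintf(stderr, "avahi_client_new failed: %s\n", avahi_strerror(error));
        return 1;
    }
    avahi_simple_poll_loop(poll_loop);
    avahi_client_free(client);
    avahi_simple_poll_free(poll_loop);
    return 0;
}
```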
Output
The output indicates the successful publication of the service. When the service is successfully published, the `entry_group_callback` function is called and it prints: "Service 'MyService' successfully published." To run the C program, use the following example:
```bash
gcc -o avahi_integration avahi_integration.c -lavahi-client -lavahi-common
./avahi_integration
```

There are .sh files to install Avahi on Ubuntu and to stop and start the daemon services, which makes life easier when testing different scenarios.
Output: the smart_tv was found on eno2.
```bash
$ ./avahi_integration
Resolved service: Smart-2K-ATV4-a56831aaef2a1db09c3fd6b56ff217fa, _googlecast._tcp, local, a56831aa-ef2a-1db0-9c3f-d6b56ff217fa.local:8009
Resolved service: Smart-2K-ATV4-a56831aaef2a1db09c3fd6b56ff217fa, _googlecast._tcp, local, a56831aa-ef2a-1db0-9c3f-d6b56ff217fa.local:8009
^C
```

In another tab:
```bash
avahi-browse -a
```

Output:
```bash
eno2 IPv6 Smart-2K-ATV4-a56831aaef2a1db09c3fd6b56ff217fa _googlecast._tcp local
eno2 IPv4 Smart-2K-ATV4-a56831aaef2a1db09c3fd6b56ff217fa _googlecast._tcp local
eno1 IPv4 dap2230 Web Site local
eno2 IPv6 Smart-2K-ATV4-a56831aaef2a1db09c3fd6b56ff217fa _googlecast._tcp local
eno2 IPv6 Smart-2K-ATV4-a56831aaef2a1db09c3fd6b56ff217fa _googlecast._tcp local
eno2 IPv4 Smart-2K-ATV4-a56831aaef2a1db09c3fd6b56ff217fa _googlecast._tcp local
Client failure, exiting: Daemon connection failed.
```

The failure message appeared when we stopped the Avahi daemon using `sudo systemctl stop avahi-daemon`. Note that the `./avahi_integration` ELF file was running in another tab, and nothing happened.
about 1 year ago by vojha
Avahi Integration and Network Connection README
What is Avahi?
Avahi is a system that facilitates service discovery on a local network. It implements Zeroconf networking specifications, including mDNS (multicast Domain Name System) and DNS-SD (DNS-based Service Discovery). Avahi enables devices to automatically discover and utilize services on the local network without requiring manual configuration.
For more information, visit the Avahi website.
Running on openSUSE Tumbleweed
The Avahi files were successfully installed on the system, and the following steps were performed:
The Avahi headers are present under `/usr/include` (tab-completing `avahi-` lists them):

```bash
cd /usr/include/avahi-<Tab>
avahi-client/  avahi-common/  avahi-compat-libdns_sd/  avahi-core/  avahi-libevent/
```

To compile and run the example program:

```bash
gcc -o avahi_example avahi_example.c -lavahi-client -lavahi-common
./avahi_example
```

To browse services:

```bash
avahi-browse -a
```
about 1 year ago by vojha
All the programs and shell scripts
Everything is posted on GitHub; feel free to check the code and implement the same at your end. See you in the next Hackweek!
https://github.com/varunkojha/avahi-c
Similar Projects
Add a machine-readable output to dmidecode by jdelvare
Description
There have been repeated requests for a machine-friendly dmidecode output over the last decade. During Hack Week 19, 5 years ago, I prepared the code to support alternative output formats, but didn't have the time to go further. Last year, Jiri Hnidek from Red Hat posted a proof-of-concept implementation to add JSON output support. This is a fairly large pull request which needs to be carefully reviewed and tested.
Goals
Review Jiri's work and provide constructive feedback. Merge the code if acceptable. Evaluate the costs and benefits of using a library such as json-c.
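For a feel of the json-c side of that evaluation, here is a tiny illustrative sketch of emitting one DMI-style record as JSON; the field names are invented for the example and are not dmidecode's actual data model:

```c
/* Illustrative json-c usage: emit one DMI-style record as JSON.
 * Field names are examples only, not dmidecode's real structures. */
#include <stdio.h>
#include <json-c/json.h>

int main(void) {
    struct json_object *record = json_object_new_object();
    json_object_object_add(record, "type", json_object_new_int(0));
    json_object_object_add(record, "description",
                           json_object_new_string("BIOS Information"));
    json_object_object_add(record, "vendor",
                           json_object_new_string("ExampleVendor"));

    printf("%s\n", json_object_to_json_string_ext(record, JSON_C_TO_STRING_PRETTY));
    json_object_put(record); /* drop our reference, freeing the tree */
    return 0;
}
```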
FastFileCheck work by pstivanin
Description
FastFileCheck is a high-performance, multithreaded file integrity checker for Linux. Designed for speed and efficiency, it utilizes parallel processing and a lightweight database to quickly hash and verify large volumes of files, ensuring their integrity over time.
https://github.com/paolostivanin/FastFileCheck
Goals
- Release v1.0.0
Design overview:
- Main thread (producer): traverses directories and feeds the queue (one thread is more than enough for most use cases)
- Dedicated consumer thread: manages queue and distributes work to threadpool
- Worker threads: compute hashes in parallel
This separation of concerns is efficient because:
- Directory traversal is I/O bound and works well in a single thread
- Queue management is centralized, preventing race conditions
- Hash computation is CPU-intensive and properly parallelized
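A condensed pthreads sketch of that separation might look as follows; the bounded queue, the FNV-1a hash, and taking file names from argv (instead of directory traversal) are simplifications for illustration, not FastFileCheck's real implementation:

```c
/* Condensed producer/consumer sketch: main thread feeds a bounded queue,
 * worker threads drain it and hash files in parallel. */
#include <pthread.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define QUEUE_CAP 64
#define WORKERS 4

static char queue[QUEUE_CAP][256];
static int head, tail, count, done;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_full = PTHREAD_COND_INITIALIZER;

static void enqueue(const char *path) {
    pthread_mutex_lock(&lock);
    while (count == QUEUE_CAP)              /* block while the queue is full */
        pthread_cond_wait(&not_full, &lock);
    strncpy(queue[tail], path, sizeof(queue[tail]) - 1);
    tail = (tail + 1) % QUEUE_CAP;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

static uint64_t fnv1a(FILE *f) {            /* stand-in for the real hash */
    uint64_t h = 1469598103934665603ULL;
    int c;
    while ((c = fgetc(f)) != EOF)
        h = (h ^ (uint64_t)c) * 1099511628211ULL;
    return h;
}

static void *worker(void *arg) {
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0 && !done)
            pthread_cond_wait(&not_empty, &lock);
        if (count == 0 && done) {           /* producer finished, queue drained */
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        char path[256];
        strcpy(path, queue[head]);
        head = (head + 1) % QUEUE_CAP;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);

        FILE *f = fopen(path, "rb");
        if (f) {
            printf("%016llx  %s\n", (unsigned long long)fnv1a(f), path);
            fclose(f);
        }
    }
}

int main(int argc, char **argv) {
    pthread_t threads[WORKERS];
    for (int i = 0; i < WORKERS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 1; i < argc; i++)          /* "traversal": files from argv */
        enqueue(argv[i]);
    pthread_mutex_lock(&lock);
    done = 1;
    pthread_cond_broadcast(&not_empty);
    pthread_mutex_unlock(&lock);
    for (int i = 0; i < WORKERS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}
```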
FizzBuzz OS by mssola
Project Description
FizzBuzz OS (or just `fbos`) is an idea I've had in order to better grasp the fundamentals of the low level of a RISC-V machine. In practice, I'd like to build a small Operating System kernel that is able to launch three processes: one that simply prints "Fizz", another that prints "Buzz", and the third which prints "FizzBuzz". These processes are unaware of each other and it's up to the kernel to schedule them by using the timer interrupts as given on openSBI (fizz on % 3 seconds, buzz on % 5 seconds, and fizzbuzz on % 15 seconds).
This kernel provides just one system call, `write`, which allows any program to pass the string to be written into stdout.
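As a plain userspace illustration of that scheduling policy (not fbos kernel code), the tick-to-process mapping is essentially:

```c
/* Userspace sketch of the scheduling policy described above: which of the
 * three processes would be woken at a given timer tick (in seconds). */
#include <stdio.h>

int main(void) {
    for (unsigned t = 1; t <= 15; t++) {
        if (t % 15 == 0)
            printf("t=%2u: schedule FizzBuzz process\n", t);
        else if (t % 5 == 0)
            printf("t=%2u: schedule Buzz process\n", t);
        else if (t % 3 == 0)
            printf("t=%2u: schedule Fizz process\n", t);
    }
    return 0;
}
```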
This project is free software and you can find it here.
Goal for this Hackweek
- Better understand the RISC-V SBI interface.
- Better understand RISC-V in privileged mode.
- Have fun.
Resources
Results
The project was a resounding success: lots of learning, and the initial target was met.
ESETv2 Emulator / interpreter by m.crivellari
Description
ESETv2 is an intriguing challenge developed by ESET, available on their website under the "Challenge" menu.
The challenge involves an "assembly-like" language and a Python compiler that generates `.evm` binary files.
This is an example using one of their samples (it prints N Fibonacci numbers):
```
.dataSize 0
.code
loadConst 0, r1 # first
loadConst 1, r2 # second
loadConst 1, r14 # loop helper
consoleRead r3
loop:
jumpEqual end, r3, r15
add r1, r2, r4
mov r2, r1
mov r4, r2
consoleWrite r1
sub r3, r14, r3
jump loop
end:
hlt
```
This language also supports multi-threading. It includes instructions such as `createThread` to start a new thread, `joinThread` to wait until a thread completes, and `lock`/`unlock` to facilitate synchronization between threads.
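For a taste of what the interpreter core might look like, here is a hypothetical register-machine dispatch loop; the `Insn` struct and the opcode numbering are invented for the example and do not match the real `.evm` encoding:

```c
/* Sketch of a register-machine dispatch loop for an ESETv2-style
 * interpreter. Opcode encoding is hypothetical. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

enum { OP_LOADCONST, OP_ADD, OP_MOV, OP_CONSOLEWRITE, OP_HLT };

typedef struct { uint8_t op, a, b, c; int64_t imm; } Insn;

static void run(const Insn *code) {
    int64_t r[16] = {0}; /* r1..r15 as in the sample above */
    for (size_t pc = 0;; pc++) {
        const Insn *i = &code[pc];
        switch (i->op) {
        case OP_LOADCONST:    r[i->a] = i->imm;                     break;
        case OP_ADD:          r[i->c] = r[i->a] + r[i->b];          break;
        case OP_MOV:          r[i->b] = r[i->a];                    break;
        case OP_CONSOLEWRITE: printf("%lld\n", (long long)r[i->a]); break;
        case OP_HLT:          return;
        }
    }
}

int main(void) {
    /* Equivalent of: loadConst 7, r1; consoleWrite r1; hlt */
    Insn prog[] = {
        { OP_LOADCONST, 1, 0, 0, 7 },
        { OP_CONSOLEWRITE, 1, 0, 0, 0 },
        { OP_HLT, 0, 0, 0, 0 },
    };
    run(prog);
    return 0;
}
```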
Goals
- create a full interpreter able to run all the available samples provided by ESET.
- improve / optimize memory usage (e.g. using bitfields where needed, as well as avoiding unnecessary memory allocations)
Resources
- Challenge URL: https://join.eset.com/en/challenges/core-software-engineer
- My github project: https://github.com/DispatchCode/eset_vm2 (not 100% complete)
Achievements
The project is still not complete. I added the lock/unlock instruction implementation, but further debugging is needed; there is a bug somewhere. The code currently works for almost all the examples in the samples folder. One of them is not yet runnable (due to a missing "write" opcode implementation), and another one triggers the bug, which has not been investigated yet.
New KDE Plasma notification app/applet by apappas
Description
My memory is terrible, so I depend a lot on notifications to carry me through the workday. As a Plasma user I am OK with the current applet, but I don't love it. It is too small for the centrality it has in my day. I also dislike that you cannot go back to notifications you have dismissed.
Goals
Develop a Plasma app that:
- must gather notifications without disrupting the existing notification app
- must offer the ability to refer to dismissed/archived/seen notifications up to some defined point in the past
- must allow deletion of notifications
RISC-V emulator in GLSL capable of running Linux by favogt
Description
There are already numerous ways to run Linux and some programs through emulation in a web browser (e.g. x86 and riscv64 on https://bellard.org/jslinux/), but none use WebGL/WebGPU to run the emulation on the GPU.
I already made a PoC of an AArch64 (64-bit Arm) emulator in OpenCL which is unfortunately hindered by a multitude of OpenCL compiler bugs on all platforms (Intel with beignet or the new compute runtime and AMD with Mesa Clover and rusticl). With more widespread and thus less broken GLSL vs. OpenCL and the less complex implementation requirements for RV32 (especially 32bit integers instead of 64bit), that should not be a major problem anymore.
Goals
Write a RISC-V system emulator in GLSL that is capable of booting Linux and running some userspace programs interactively. Ideally it is small enough to work on online test platforms like Shaderoo with a custom texture that contains bootstrap code, kernel and initrd.
Minimum:
riscv32 without FPU (RV32 IMA) and MMU (µClinux), running Linux in M-mode and userspace in U-mode.
Stretch goals:
FPU support, S-Mode support with MMU, SMP. Custom web frontend with more possibilities for I/O (disk image, network?).
Resources
RISC-V ISA Specifications
Shaderoo
OpenGL 4.5 Quick Reference Card
Result as of Hackweek 2024
WebGL turned out to be insufficient: it only supports OpenGL ES 3.0, but imageLoad/imageStore needs ES 3.1. So we switched directions and had to write a native C++ host for the shaders.
As of Hackweek Friday, the kernel attempts to boot and outputs messages, but panics due to missing memory regions.
Since then, some bugs were fixed and enough hardware emulation implemented, so that now Linux boots with framebuffer support and it's possible to log in and run programs!
The repo with a demo video is available at https://github.com/Vogtinator/risky-v
Port some classic game to Linux by MDoucha
Let's pick some old classic game, reverse engineer the data formats and game rules and write an open source engine for it from scratch. Some games from 1990s are simple enough that we could have a playable prototype by the end of the week.
Write which games you'd like to hack on in the comments. Don't forget to check e.g. Open Source Game Clones, GitHub and SourceForge to see whether the game has already been ported.
Hack Week 24 - Master of Orion II: Battle at Antares & Chaos Overlords
Work on Master of Orion II continues but we can hack more than one game. Chaos Overlords is a dystopian, lighthearted, cyberpunk turn-based strategy game originally released in 1996 for Windows 95 and Mac OS. The player takes on the role of a Chaos Overlord, attempting to control a city. Gameplay involves hiring mercenary gangs and deploying them on an 8-by-8 grid of city sectors to generate income, occupy sectors and take over the city.
How to ~~install & play~~ observe the decompilation progress:
- Clone the Git repository
- A playable reimplementation does not exist yet, but when it does, it will be linked in the repository mentioned above.
Further work needed:
- Analyze the remaining unknown data structures, most of which are related to the AI.
- Decompile the AI completely. The strong AI is part of the appeal of the game. It cannot be left out.
- Reimplement the game.
Hack Week 20, 21, 22 & 23 - Master of Orion II: Battle at Antares
Master of Orion II is one of the greatest turn-based 4X games of the 1990s. Explore the galaxy, colonize planets, research new technologies, fight space monsters and alien empires and in the end, become the ruler of the galaxy one way or another.
How to install & play:
- Clone the Git repository
- Run `./bootstrap; ./configure; make && make install`
- Copy all *.LBX files from the original Master of Orion II to the installation data directory (`/usr/local/share/openorion2` by default)
- Run `openorion2`
Further work needed:
- Analyze the rest of the original savegame format and a few remaining data files.
- Implement most of the game. The open source engine currently supports only loading saved games from the original version and viewing the galaxy map, fleet management and list of known planets.
Hack Week 19 - Signus: The Artifact Wars
Signus is a Czech turn-based strategy game similar to Panzer General or Battle Isle series. Originally published in 1998 and open-sourced by the original developers in 2003.
How to install & play:
- Clone the Git repository
- Run `./bootstrap; ./configure; make && make install` in both `signus` and `signus-data` directories.
- Run `signus`
Further work needed:
- Create openSUSE package
- Implement full support for original game data (the open source version uses slightly different data file contents but original game data can be converted using a script).
YQPkg - Bringing the Single Package Selection Back to Life by shundhammer
tl;dr
Rip out the high-level YQPackageSelector widget from YaST and make it a standalone Qt program without any YaST dependencies.
See section "Result" at the bottom for the current status after the hack week.
The Past and the Present
We used to have and still have a powerful software selection with the YaST sw_single module (and the YaST patterns counterpart): You can select software down to the package level, you can easily select one of many available package versions, you can select entire patterns - or just view them and pick individual packages from patterns.
You can search packages based on name, description, "requires" or "provides" level, and many more things.
The Future
YaST is on its way out, to be replaced by the new Agama installer and Cockpit for system administration. Those tools can do many things, but fine-grained package selection is not among them. And there are also no other Open Source tools available for that purpose that even come close to the YaST package selection.
Many aspects of YaST have become obsolete over the years; many subsystems now come with a good default configuration, or they can configure themselves automatically. Just think about sound or X11 configuration; when did you last need to touch them?
For others, the desktops bring their own tools (e.g. printers), or there are FOSS configuration tools (NetworkManager, BlueMan). Most YaST modules are no longer needed, and for many others there is a replacement in tools like Cockpit.
But no longer having a powerful fine-grained package selection like in YaST sw_single will hurt. Big time. At least until there is an adequate replacement, many users will want to keep it.
The Idea
YaST sw_single always revolved around a powerful high-level widget on the abstract UI level. Libyui has low-level widgets like YPushButton, YCheckBox, YInputField, more advanced ones like YTable, YTree; and some few very high-level ones like YPackageSelector and YPatternSelector that do the whole package selection thing alone, working just on the libzypp level and changing the status of packages or patterns there.
For the YaST Qt UI, the YQPackageSelector / YQPatternSelector widgets work purely on the Qt and libzypp level; no other YaST infrastructure involved, in particular no Ruby (or formerly YCP) interpreter, no libyui-level widgets, no bindings between Qt / C++ and Ruby / YaST-core, nothing. So it's not too hard to rip all that part out of YaST and create a standalone program from it.
For the NCurses UI, the NCPackageSelector / NCPatternSelector create a lot of libyui widgets (inheriting YWidget / NCWidget) and use a lot of libyui calls to glue them together; and all that of course still needs a lot of YaST / libyui / libyui-ncurses infrastructure. So NCurses is out of scope here.
Preparatory Work: Initializing the Package Subsystem
To see if this is feasible at all, the existing UI examples needed some fixing to check what is needed on that level. That was the make-or-break decision: Would it be realistically possible to set up the needed environment in libzypp (without being stranded in the middle of that task alone at the end of the hack week)?
Yes, it is: That part is already working:
https://github.com/yast/yast-ycp-ui-bindings/pull/71
Go there for a screenshot
That's already halfway there.
The complete Ruby code of this example is here. The real thing will be pure C++ without any YaST dependencies.
The Plan
VulnHeap by r1chard-lyu
Description
The VulnHeap project is dedicated to the in-depth analysis and exploitation of vulnerabilities within heap memory management. It focuses on understanding the intricate workflow of heap allocation, chunk structures, and bin management, which are essential to identifying and mitigating security risks.
Goals
- Familiarize with the heap
  - Heap workflow
  - Chunk and bin structure
  - Vulnerabilities
- Vulnerability types
  - Use after free (UAF)
  - Heap overflow
  - Double free
- Use Docker to create a vulnerable environment and apply techniques to exploit it (a minimal demonstration follows)
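As a concrete starting point for such a lab, a use-after-free fits in a few lines; building with `-fsanitize=address` makes the bug report visible at runtime:

```c
/* Minimal use-after-free demonstration for the lab environment.
 * Compile with: gcc -fsanitize=address -g uaf.c */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *secret = malloc(32);
    strcpy(secret, "admin=0");
    free(secret);                 /* chunk goes back to the allocator */

    char *attacker = malloc(32);  /* same size: often reuses the freed chunk */
    strcpy(attacker, "admin=1");

    /* BUG: dangling pointer read; may now alias the attacker's data. */
    printf("via stale pointer: %s\n", secret);
    free(attacker);
    return 0;
}
```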
Resources
- https://heap-exploitation.dhavalkapil.com/divingintoglibc_heap
- https://raw.githubusercontent.com/cloudburst/libheap/master/heap.png
- https://github.com/shellphish/how2heap?tab=readme-ov-file
toptop - a top clone written in Go by dshah
Description
`toptop` is a clone of Linux's `top` CLI tool, but written in Go.
Goals
Learn more about Go (mainly bubbletea) and Linux
Resources
Contributing to Linux Kernel security by pperego
Description
A couple of weeks ago, I found this blog post by Gustavo Silva, a Linux Kernel contributor.
I have always wanted to get back into hacking on the Linux Kernel, so I asked for access to the Coverity Scan dashboard, and I want to contribute to the Linux Kernel by fixing some minor issues.
I also want to create a Linux Kernel fuzzing lab using QEMU and syzkaller.
Goals
- Fix at least 2 security bugs
- Create the fuzzing lab and have it running
The story so far
- Day 1: setting up a virtual machine for kernel development using Tumbleweed. Reading a lot of documentation, getting familiar with the Coverity dashboard and with the procedures to submit a kernel patch.
- Day 2: I read a lot of documentation and triaged some findings on the Coverity SAST dashboard. I can confirm that SAST tools are great false-positive generators, even for low-hanging fruit.
- Day 3: Working on trivial changes after I read this blog post: https://www.toblux.com/posts/2024/02/linux-kernel-patches.html. I still need to get comfortable with the patch preparation and submission process.
- First trivial patch sent: using the str_true_false() macro instead of hard-coded strings in a staging driver for an LCD display
- Fix for a dereference before null check issue discovered by Coverity (CID 1601566) https://scan7.scan.coverity.com/#/project-view/52110/11354?selectedIssue=1601566
- Day 4: Triaging more issues found by Coverity.
- The patch for CID 1601566 was refused. The check against the NULL pointer was pointless so I prepared a version 2 of the patch removing the check.
- Fixed another dereference before NULL check in the iwl_mvm_parse_wowlan_info_notif() routine (CID 1601547). This one was already submitted by another kernel hacker :(
- Day 5: Wrapping up. I had to do some minor rework on the patch for CID 1601566. I found a stalker bothering me in private emails; people I interacted with advised me that he is a well-known nuisance. Markus Elfring, for the record.
Wrapping up: being back to kernel hacking is amazing and I don't want to stop. My battery pack is completely drained, but changing scope gave me a great twist, and I really want to keep feeling this energy rather than doing a single task for months.
I failed at setting up the fuzzing lab, as I was too optimistic about the patch submission process.
The patches
Explore simple and distro-independent declarative Linux starting on Tumbleweed or Arch Linux by janvhs
Description
Inspired by mkosi, the idea is to experiment with a declarative approach to defining Linux systems. A lot of tools already make it possible to manage the system's infrastructure by using description files rather than manual invocation. Examples of this are systemd presets for managing enabled services, or the `/etc/fstab` file for describing how partitions should be mounted.
If we took inspiration from openSUSE MicroOS and their handling of the `/etc/` directory, we could theoretically use `systemd-sysupdate` to swap out the `/usr/` partition and create an A/B boot scheme, where the `/usr/` partition is always freshly built according to a central system description. In the best case it would still be possible to utilise snapshots, but an A/B root scheme would be sufficient for the beginning. This way you could get the benefit of NixOS's declarative system definition, but still use the distro's package repositories and not have to deal with the overhead of Flakes or the Nix language.
Goals
- A simple and understandable system
- Check fitness of `mkosi` or write a simple extensible image builder tool for it
- Create a declarative system specification
- Create a system with a swappable `/usr/` partition
- Create an A/B root scheme
- Swap to the new system without reboot (kexec?)
Resources
- Ideas that have been floating around in my head for a while
- https://0pointer.net/blog/fitting-everything-together.html
- GNOME OS
- MicroOS
- systemd mkosi
- Vanilla OS
Testing and adding GNU/Linux distributions on Uyuni by juliogonzalezgil
Join the Gitter channel! https://gitter.im/uyuni-project/hackweek
Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines. It also manages configuration, can run audits, build image containers, monitor and much more!
Currently there are a few distributions that are completely untested on Uyuni or SUSE Manager (AFAIK), or have not been tested for a long time. It could be interesting to find out how hard it would be to work with them and, if possible, fix whatever is broken.
For newcomers, the easiest distributions are those based on DEB or RPM packages. Distributions with other package formats are doable, but will require adapting the Python and Java code to be able to sync and analyze such packages (and if salt does not support those packages, it will need changes as well). So if you want a distribution with other packages, make sure you are comfortable handling such changes.
No developer experience? No worries! We had non-developer contributors in the past, and we are ready to help as long as you are willing to learn. If you don't want to code at all, you can also help us prepare the documentation after someone else has the initial code ready, or you could also help with testing :-)
The idea is testing Salt and Salt-ssh clients, but NOT traditional clients, which are deprecated.
To consider that a distribution has basic support, we should cover at least (points 3-6 are to be tested for both salt minions and salt ssh minions):
- Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
- Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding OS to the bootstrap repository creator)
- Package management (install, remove, update...)
- Patching
- Applying any basic salt state (including a formula)
- Salt remote commands
- Bonus point: Java part for product identification, and monitoring enablement
- Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
- Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
- Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)
If something is breaking: we can try to fix it, but the main idea is to research how well supported it is right now. Beyond that it's up to each project member how much to hack :-)
- If you don't have knowledge about some of the steps: ask the team
- If you still don't know what to do: switch to another distribution and keep testing.
This card is for EVERYONE, not just developers. Seriously! We had people from other teams helping that were not developers, and added support for Debian and new SUSE Linux Enterprise and openSUSE Leap versions :-)
Pending
FUSS
FUSS is a complete GNU/Linux solution (server, client and desktop/standalone) based on Debian for managing an educational network.
https://fuss.bz.it/
Seems to be a Debian 12 derivative, so adding it could be quite easy.
- [W] Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
- [W] Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding OS to the bootstrap repository creator) --> Working for all 3 options (salt minion UI, salt minion bootstrap script and salt-ssh minion from the UI).
- [W] Package management (install, remove, update...) --> Installing a new package works, needs to test the rest.
- [I] Patching (if patch information is available, could require writing some code to parse it, but IIRC we have support for Ubuntu already) --> No patches detected. Do we support patches for Debian at all?
- [W] Applying any basic salt state (including a formula)
- [W] Salt remote commands
- [ ] Bonus point: Java part for product identification, and monitoring enablement
Update Haskell ecosystem in Tumbleweed to GHC-9.10.x by psimons
Description
We are currently at GHC-9.8.x, which is a bit old, so I'd like to take a shot at the latest version of the compiler, GHC-9.10.x. This is going to be interesting because the new version requires major updates to all kinds of libraries and base packages, which typically means patching lots of packages to make them build again.
Goals
Have working builds of GHC-9.10.x and the required Haskell packages in `devel:languages:haskell` so that we can compile:
git-annex
pandoc
xmonad
cabal-install
Resources
- https://build.opensuse.org/project/show/devel:languages:haskell/
- https://github.com/opensuse-haskell/configuration/
- #discuss-haskell
- https://www.twitch.tv/peti343
Tumbleweed on Mars-CM (RISC-V board) by ph03nix
RISC-V is awesome, Tumbleweed is awesome, chocolate cake is awesome. I'm planning to combine all of them in one project.
Project Description
I recently purchased a MILK-V Mars CM and already managed to set it up using Debian Linux. My project for this Hackweek is to see how far I can get running Tumbleweed on this compute module board.
Goal for this Hackweek
- Run Tumbleweed on the Compute Module
Resources
- http://milkv.io/mars-cm
- https://en.opensuse.org/HCL:VisionFive2
SUSE AI Meets the Game Board by moio
Use tabletopgames.ai’s open source TAG and PyTAG frameworks to apply Statistical Forward Planning and Deep Reinforcement Learning to two board games of our own design. On an all-green, all-open source, all-AWS stack!
Results: Infrastructure Achievements
We successfully built and automated a containerized stack to support our AI experiments. This included:
- a Fully-Automated, One-Command, GPU-accelerated Kubernetes setup: we created an OpenTofu based script, tofu-tag, to deploy SUSE's RKE2 Kubernetes running on CUDA-enabled nodes in AWS, powered by openSUSE with GPU drivers and gpu-operator
- Containerization of the TAG and PyTAG frameworks: TAG (Tabletop AI Games) and PyTAG were patched for seamless deployment in containerized environments. We automated the container image creation process with GitHub Actions. Our forks (PRs upstream upcoming):
Run `./deploy.sh` and voilà: Kubernetes running PyTAG (`k9s`, above) with GPU acceleration (`nvtop`, below)
Results: Game Design Insights
Our project focused on modeling and analyzing two card games of our own design within the TAG framework:
- Game Modeling: We implemented models for Dario's "Bamboo" and Silvio's "Totoro" and "R3" games, enabling AI agents to play thousands of games ...in minutes!
- AI-driven optimization: By analyzing statistical data on moves, strategies, and outcomes, we iteratively tweaked the game mechanics and rules to achieve better balance and player engagement.
- Advanced analytics: Leveraging AI agents with Monte Carlo Tree Search (MCTS) and random action selection, we compared performance metrics to identify optimal strategies and uncover opportunities for game refinement.
- more about Bamboo on Dario's site
- more about R3 on Silvio's site (Italian, translation coming)
- more about Totoro on Silvio's site
A family picture of our card games in progress. From the top: Bamboo, Totoro, R3
Results: Learning, Collaboration, and Innovation
Beyond technical accomplishments, the project showcased innovative approaches to coding, learning, and teamwork:
- "Trio programming" with AI assistance: Our "trio programming" approach—two developers and GitHub Copilot—was a standout success, especially in handling slightly-repetitive but not-quite-exactly-copypaste tasks. Java as a language tends to be verbose and we found it to be fitting particularly well.
- AI tools for reporting and documentation: We extensively used AI chatbots to streamline writing and reporting. (Including writing this report! ...but this note was added manually during edit!)
- GPU compute expertise: Overcoming challenges with CUDA drivers and cloud infrastructure deepened our understanding of GPU-accelerated workloads in the open-source ecosystem.
- Game design as a learning platform: By blending AI techniques with creative game design, we learned not only about AI strategies but also about making games fun, engaging, and balanced.
Last but not least we had a lot of fun! ...and this was definitely not a chatbot generated line!
The Context: AI + Board Games