A quantum physics effect to teach, a puzzle to build, a problem to solve, a tool to learn!
Polarizing filters are plastic films that only let through light polarized at a particular angle. Stacking two at 90 degrees blocks light completely.

Very counterintuitively, inserting a third filter between two filters at 90 degrees allows some light to shine through!
This interesting effect can only be explained with quantum physics, as brilliantly shown in this 3Blue1Brown video.
Polarizing filter films are cheap... So I wanted to create a cardboard toy to demo this effect in a surprising way to my kids!
A puzzle to build
The idea is to build a puzzle around this weird effect.
I want to build a cardboard octagon with many "windows" (holes), each window covered with one polarizing filter at a certain angle, like this:

Stacking multiple such octagons on top of one another will block light in some combination of filters and not others, depending on the individual filter angles. Moreover, rotating octagons in the stack will make the "displayed pattern" change!

A problem to solve
Once a set of "patterns" to display is decided, is it possible to write a program that determines, for each "window" in each octagon, an assignment of filter angles able to produce them all?
In principle, yes! In practice, there's an explosion in the number of possible combinations! E.g. 8 angles × 10 windows × 8 slices × 5 octagons × 8⁴ rotation combinations × 5! orderings × 5 upside-down flips is about 8 billion.
...a bit too much for simple for loops! I need a smarter approach.
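For scale, the back-of-the-envelope count above (all numbers from the example, purely illustrative) fits in a few lines of Python:

```python
from math import factorial

# Brute-force search space from the example above (illustrative numbers).
combinations = 8 * 10 * 8 * 5      # angles × windows × slices × octagons
combinations *= 8 ** 4             # rotation combinations
combinations *= factorial(5)       # stack orderings
combinations *= 5                  # upside-down flips
print(f"{combinations:,}")         # 7,864,320,000 - about 8 billion
```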
A tool to learn

Google OR-Tools CP-SAT is a powerful constraint programming solver. It can quickly find solutions to huge combinatorial problems - where one has to find a valid assignment to thousands of variables under thousands of constraints, within billions of possible combinations (not all of which are valid or optimal)!
Solvers are applicable to many problems and are not new in SUSE's tradition - e.g. the zypper package manager uses libsolv to compute valid package dependency combinations, and Uyuni uses Optaplanner to compute valid subscription assignments.
CP-SAT is open source, very efficient (actually close to the state of the art in the field) and easily scriptable from Python... a very interesting target to experiment with!
Now I have an excuse to play with this!
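As a first taste, here is a minimal CP-SAT model in Python - a hello-world sketch, not the puzzle itself: two filter-angle variables (in 22.5° steps) constrained to be 90° apart.

```python
from ortools.sat.python import cp_model

# Toy model: pick two filter angles (in 22.5° steps, 0..7) that are 90° apart.
model = cp_model.CpModel()
a = model.NewIntVar(0, 7, "angle_a")
b = model.NewIntVar(0, 7, "angle_b")
diff = model.NewIntVar(0, 7, "diff")
model.AddAbsEquality(diff, a - b)  # diff == |a - b|
model.Add(diff == 4)               # 4 steps == 90°

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print(solver.Value(a), solver.Value(b))
```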
Scope of HackWeek
Find a combination that works for a decent example, and actually cut it in cardboard and filters to try it out!
https://github.com/moio/octaopticon
This project is part of:
Hack Week 23
Comments
about 2 years ago by moio
Day 1 diary - the physical prototyping day
Spent a bit of time producing good SVGs with Python, then printed them and tried to find dimensions that worked (one big and one small, for testing).
After a few iterations I decided to go with octagonal stars rather than plain octagons:
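(The generation itself boils down to something like this - a rough sketch, not the actual script; the star proportions are made-up placeholders:)

```python
import math

# Rough sketch: an 8-pointed star outline as SVG (dimensions are made up).
cx, cy, r_outer, r_inner = 100, 100, 90, 70
points = []
for i in range(16):  # alternate outer/inner radius to get the star shape
    r = r_outer if i % 2 == 0 else r_inner
    angle = math.pi * i / 8
    points.append(f"{cx + r * math.cos(angle):.1f},{cy + r * math.sin(angle):.1f}")

svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">'
    f'<polygon points="{" ".join(points)}" fill="none" stroke="black"/>'
    "</svg>"
)
with open("star.svg", "w") as f:
    f.write(svg)
```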

Then literally hammered out holes with a 10mm punch! Worked beautifully.

Then, cut and tested positioning of filter film:

All seems good from the physical realm so far.
Next up: coding to determine per-hole filter positioning!
about 2 years ago by moio
Day 2 diary: mostly coding
CP-SAT
Learnt a lot about CP-SAT and evolved some code I had lying around to handle:
- a variable number of "pizzas" ("stars with filter windows")
- a variable number of "slices" ("sectors" of stars)
- a variable number of "windows" per "slice"
- a variable number of "angles" filters can be glued on
- a variable number of "images"
The difficult part today was the reordering of "pizzas" in the "pizza stack". Allowing that makes more combinations possible, but the indirection has to be dealt with in code.
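For the record, here is roughly how I'd expect that indirection to look in CP-SAT (a sketch with toy data, not the actual project code): position variables plus AddElement to dereference which pizza sits at each level of the stack.

```python
from ortools.sat.python import cp_model

# Sketch of the indirection: 3 pizzas, each with one toy "filter angle";
# position variables decide which pizza sits at each level (made-up data).
pizza_angles = [0, 4, 2]

model = cp_model.CpModel()
pos = [model.NewIntVar(0, 2, f"pos_{i}") for i in range(3)]
model.AddAllDifferent(pos)  # every pizza used exactly once in the stack

# angle_at_level[i] == pizza_angles[pos[i]] -- the extra dereference step.
angle_at_level = [model.NewIntVar(0, 7, f"angle_{i}") for i in range(3)]
for i in range(3):
    model.AddElement(pos[i], pizza_angles, angle_at_level[i])
```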
Testing
The good part about this problem is that tests can trivially be randomized, so it's easy to see if produced solutions work or not.
The bad part is that not all randomized problem instances have a solution. For those that do not, CP-SAT will happily burn CPUs for hours. I added a pretty arbitrary time limit.
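Setting the limit is a one-liner on the solver (the value is arbitrary, as said; the empty model here is a stand-in for the real puzzle model):

```python
from ortools.sat.python import cp_model

model = cp_model.CpModel()  # stand-in for the real puzzle model
solver = cp_model.CpSolver()
solver.parameters.max_time_in_seconds = 60.0  # arbitrary cut-off
status = solver.Solve(model)
if status == cp_model.UNKNOWN:
    print("gave up: hit the time limit before proving (un)satisfiability")
```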
ChatGPT
I used ChatGPT for the scaffolding work - and was quite happy with it:
> Set up a new Python 3.9 based project according to current best practices.
>
> The project must use the ortools library from Google (note that it is a wrapper around a C++ library)
>
> Include support for: linting, dependency management, github codespace, tests, a Dockerfile, github actions on push and PR including tests and lint, github actions for release of source archive and docker container on ghcr.io
>
> Also include a scaffolded README and LICENSE (AGPL)
>
> The project must compile and work cross platform, including Linux x86 and Mac arm.
>
> Explain every file created step by step and why
Not a perfect result, but a good one to learn from - faster than stitching together 10 blog posts (for someone not working daily in that ecosystem).
about 2 years ago by moio
Day 3 diary: 3 failures, 1 success
Failure 1: adding the possibility of re-ordering the stack
I thought that allowing pizzas in the stack to be re-ordered could help with storing more "images" - turns out that is not the case. On a large set of pseudorandom tests, only an extra 4 out of 186 could be solved by changing the order. Not worth it, commit reverted.
Failure 2: going from a SAT problem to an optimization problem
CP-SAT has the cool ability to accept an objective function to minimize or maximize - making it simple to reformulate a satisfiability problem as an optimization one. I tried this approach to make the assignments more flexible but failed: I could not find a good way to mix it with the Automaton constraints I am using to simulate light traveling through a series of filters. Path abandoned for now.
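To give an idea of the Automaton approach, here is a heavily simplified sketch (not the project's real model - real filters attenuate by cos² of the angle difference; here light is simply "on" unless two consecutive filters are 90° apart): states remember the previous filter's angle, with an absorbing "blocked" state.

```python
from ortools.sat.python import cp_model

# Simplified sketch: angles in 22.5° steps (0..7); light is "blocked" iff two
# consecutive filters differ by exactly 4 steps (90°). States 0..7 remember
# the previous filter's angle; BLOCKED is absorbing; START handles filter 0.
BLOCKED, START = 8, 9
transitions = [(START, a, a) for a in range(8)]
for prev in range(8):
    for nxt in range(8):
        transitions.append((prev, nxt, BLOCKED if (prev - nxt) % 8 == 4 else nxt))
    transitions.append((BLOCKED, prev, BLOCKED))  # once blocked, always blocked

model = cp_model.CpModel()
angles = [model.NewIntVar(0, 7, f"filter_{i}") for i in range(4)]  # 4 stacked filters
model.AddAutomaton(angles, START, list(range(8)), transitions)  # light must pass

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print([solver.Value(a) for a in angles])
```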
Failure 3: allowing brighter-than-specified pixels
This seemed an easy way to enlarge the solution space - interestingly, almost no effect was visible in tests. Sticking with the simpler approach (matching pixel values exactly) for now.
Success! First small four-pizza prototype works!
I am happy to report that after some serious hammering and cutting...
...and serious gluing of filter films...

...I've got a nice filter set! Notice how filtering of monitor light (which is polarized) changes with rotation!

Now I made four pizzas...

And, in the right order, they will display a programmed X pattern!

I was able to "store" 7 patterns in the four pizzas (a "Y", a "q", the "X" above, an "o", an "I", a "c" and a "K").
Next step: the bigger brother pizza with bigger patterns!
about 2 years ago by moio
Day 4 diary: scale up!
Today I dealt with the bigger version of the puzzle. Software scaled just fine!
On the hardware side I was lucky enough to get help from my son across all phases!

I am really happy with the result, here they are in all their whiteness:

What message did we hide in there? Stay tuned tomorrow for the last demo!
PS. Thanks to colleague AR for the tip about having kids do some of the job - that worked great!
about 2 years ago by moio
Day 5 diary: it's a wrap!
Today I created a video to explain progress and results, enjoy!
The tricky part was getting the light right - so that the effect was clearly visible on video. I ended up with an inverted laptop screen covered with an opaque film - otherwise the light comes out polarized and all behavior is totally different!
Similar Projects
Liz - Prompt autocomplete by ftorchia
Description
Liz is the Rancher AI assistant for cluster operations.
Goals
We want to help users when sending new messages to Liz, by adding an autocomplete feature to complete their requests based on the context.
Example:
- User prompt: "Can you show me the list of p"
- Autocomplete suggestion: "Can you show me the list of p...od in local cluster?"
Example:
- User prompt: "Show me the logs of #rancher-"
- Chat console: It shows a drop-down widget, next to the # character, with the list of available pod names starting with "rancher-".
Technical Overview
- The AI agent should expose a new ws/autocomplete endpoint to proxy autocomplete messages to the LLM.
- The UI extension should be able to display prompt suggestions and allow users to apply the autocomplete to the Prompt via keyboard shortcuts.
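A minimal sketch of what the ws/autocomplete endpoint could look like (none of this is the actual Liz code: FastAPI and the complete() helper are stand-ins for the real agent and LLM call):

```python
from fastapi import FastAPI, WebSocket

app = FastAPI()

async def complete(prefix: str) -> str:
    # Placeholder for the real LLM call that continues the user's prompt.
    return prefix + "od in local cluster?"

@app.websocket("/ws/autocomplete")
async def autocomplete(ws: WebSocket):
    await ws.accept()
    while True:
        prefix = await ws.receive_text()            # partial prompt typed so far
        await ws.send_text(await complete(prefix))  # proxied suggestion
```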
Resources
Improvements to osc (especially with regards to the Git workflow) by mcepl
Description
There is plenty of hacking on osc where we could spend some fun time. I would like to see a solution for https://github.com/openSUSE/osc/issues/2006 (which is sufficiently non-serious that it could be part of a HackWeek project).
Collection and organisation of information about Bulgarian schools by iivanov
Description
To achieve this it will be necessary to:
- Collect/download raw data from various governmental and non-governmental organizations
- Clean up the raw data and organise it in some kind of database
- Create a tool to make queries easy
- Or perhaps dump all data into an AI and ask questions in natural language
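Once the data is in SQL tables, the query tool can start small; for example (table and column names below are hypothetical, just to illustrate the kind of query to make easy):

```python
import sqlite3

# Hypothetical schema: schools(name, town, region), scores(school, year, exam_score).
conn = sqlite3.connect("schools.db")
rows = conn.execute(
    """SELECT s.name, s.town, AVG(sc.exam_score) AS avg_score
       FROM schools s JOIN scores sc ON sc.school = s.name
       WHERE sc.year >= 2020
       GROUP BY s.name, s.town
       ORDER BY avg_score DESC LIMIT 10"""
).fetchall()
for name, town, avg_score in rows:
    print(f"{name} ({town}): {avg_score:.1f}")
```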
Goals
By selecting a particular school, information like this will be provided:
- School scores on national exams.
- School scores from the external evaluations exams.
- School town, municipality and region.
- Employment rate in a town or municipality.
- Average health of the population in the region.
Resources
Some of these are available only in Bulgarian.
- https://danybon.com/klasazia
- https://nvoresults.com/index.html
- https://ri.mon.bg/active-institutions
- https://www.nsi.bg/nrnm/ekatte/archive
Results
- Information about all Bulgarian schools with their scores over recent years, cleaned and organised into SQL tables
- Information about all Bulgarian villages, cities, municipalities and districts, cleaned and organised into SQL tables
- Census information about all Bulgarian villages and cities since the beginning of this century, cleaned and organised into SQL tables
- Religion and ethnicity information about all Bulgarian municipalities, cleaned and organised into SQL tables
- Data successfully loaded into a locally running Ollama with the help of Vanna.AI
- Seems to be usable.
TODO
- Add more statistical information about municipalities and ....
Code and data
Testing and adding GNU/Linux distributions on Uyuni by juliogonzalezgil
Join the Gitter channel! https://gitter.im/uyuni-project/hackweek
Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines. It also manages configuration, can run audits, build image containers, monitor and much more!
Currently there are a few distributions that are completely untested on Uyuni or SUSE Manager (AFAIK), or just not tested for a long time; it could be interesting to know how hard working with them would be and, if possible, fix whatever is broken.
For newcomers, the easiest distributions are those based on DEB or RPM packages. Distributions with other package formats are doable, but will require adapting the Python and Java code to be able to sync and analyze such packages (and if salt does not support those packages, it will need changes as well). So if you want a distribution with other packages, make sure you are comfortable handling such changes.
No developer experience? No worries! We had non-developer contributors in the past, and we are ready to help as long as you are willing to learn. If you don't want to code at all, you can also help us prepare the documentation after someone else has the initial code ready, or you could also help with testing :-)
The idea is to test Salt clients (including bootstrapping with the bootstrap script) and Salt-ssh clients.
To consider that a distribution has basic support, we should cover at least (points 3-6 are to be tested for both salt minions and salt ssh minions):
- Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
- Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
- Package management (install, remove, update...)
- Patching
- Applying any basic salt state (including a formula)
- Salt remote commands
- Bonus point: Java part for product identification, and monitoring enablement
- Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
- Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
- Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)
If something is breaking, we can try to fix it, but the main idea is to research how well supported things are right now. Beyond that it's up to each project member how much to hack :-)
- If you don't have knowledge about some of the steps: ask the team
- If you still don't know what to do: switch to another distribution and keep testing.
This card is for EVERYONE, not just developers. Seriously! We had people from other teams helping who were not developers, and they added support for Debian and new SUSE Linux Enterprise and openSUSE Leap versions :-)
In progress/done for Hack Week 25
Guide
We started writing a Guide: Adding a new client GNU Linux distribution to Uyuni at https://github.com/uyuni-project/uyuni/wiki/Guide:-Adding-a-new-client-GNU-Linux-distribution-to-Uyuni, to make things easier for everyone, especially those not too familiar with Uyuni or not technical.
openSUSE Leap 16.0
The distribution we will all love!
https://en.opensuse.org/openSUSE:Roadmap#DRAFTScheduleforLeap16.0
Current Status: We started last year; it's complete now for Hack Week 25! :-D
- [W] Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file). NOTE: Done; client tools for SLMicro6 are used, as those for SLE16.0/openSUSE Leap 16.0 are not available yet
- [W] Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
- [W] Package management (install, remove, update...). Works, even reboot requirement detection
Improve/rework household chore tracker `chorazon` by gniebler
Description
I wrote a household chore tracker named chorazon, which is meant to be deployed as a web application in the household's local network.
It features the ability to set up different (so far only weekly) schedules per task and per person, where tasks may span several days.
There are "tokens", which can be collected by users. Tasks can (and usually will) have rewards configured where they yield a certain amount of tokens. The idea is that they can later be redeemed for (surprise) gifts, but this is not implemented yet. (So right now one needs to edit the DB manually to subtract tokens when they're redeemed.)
Days are not rolled over automatically, to allow for task completion control.
We used it in my household for several months, with mixed success. There are many limitations in the system that would warrant a revisit.
It's written using the Pyramid Python framework with URL traversal, ZODB as the data store and Web Components for the frontend.
Goals
- Add admin screens for users, tasks and schedules
- Add models, pages etc. to allow redeeming tokens for gifts/surprises
- …?
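A rough sketch of what a redeemable-gift model could look like in ZODB (class, field and attribute names are invented for illustration, not taken from chorazon):

```python
import persistent

class Gift(persistent.Persistent):
    """Hypothetical model: a surprise that can be redeemed for tokens."""
    def __init__(self, name: str, cost: int):
        self.name = name
        self.cost = cost  # token price

def redeem(user, gift: Gift) -> None:
    # Assumes a hypothetical `user.tokens` integer attribute.
    if user.tokens < gift.cost:
        raise ValueError("not enough tokens")
    user.tokens -= gift.cost  # persisted on transaction commit
```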
Resources
tbd (Gitlab repo)
Improve chore and screen time doc generator script `wochenplaner` by gniebler
Description
I wrote a little Python script to generate PDF docs, which can be used to track daily chore completion and screen time usage for several people, with one page per person/week.
I named this script wochenplaner and have been using it for a few months now.
It needs some improvements and adjustments in how the screen time should be tracked and how chores are displayed.
Goals
- Fix chore field separation lines
- Change screen time tracking logic from "global" (week-long) to daily subtraction and weekly addition of remainders (more intuitive than the current "weekly time budget" method)
- Add logic to fill in chore fields/lines, ideally with pictures, falling back to text.
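The proposed screen-time logic from the list above could look roughly like this (a sketch; the budget value and function names are illustrative only):

```python
DAILY_BUDGET = 60  # minutes per day, illustrative value

def weekly_leftover(daily_usage_minutes):
    """Subtract each day's usage from a daily budget; unused minutes
    accumulate into a weekly pool instead of one global weekly budget."""
    pool = 0
    for used in daily_usage_minutes:
        pool += max(DAILY_BUDGET - used, 0)
    return pool

print(weekly_leftover([45, 60, 30, 60, 50, 0, 60]))  # 115 minutes left this week
```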
Resources
tbd (Gitlab repo)
mgr-ansible-ssh - Intelligent, Lightweight CLI for Distributed Remote Execution by deve5h
Description
By the end of Hack Week, the target will be to deliver a minimal functional version 1 (MVP) of a custom command-line tool named mgr-ansible-ssh (a unified wrapper for BOTH ad-hoc shell & playbooks) that allows operators to:
- Execute arbitrary shell commands on thousands of remote machines simultaneously using Ansible Runner, with artifacts saved locally.
- Pass runtime options such as inventory file, remote command string/ playbook execution, parallel forks, limits, dry-run mode, or no-std-ansible-output.
- Leverage existing SSH trust relationships without additional setup.
- Provide a clean, intuitive CLI interface with --help for ease of use. It should provide consistent UX & CI-friendly interface.
- Establish a foundation that can later be extended with advanced features such as logging, grouping, interactive shell mode, safe-command checks, and parallel execution tuning.
The MVP should enable day-to-day operations to efficiently target thousands of machines with a single, consistent interface.
Goals
Primary Goals (MVP):
Build a functional CLI tool (mgr-ansible-ssh) capable of executing shell commands on multiple remote hosts using Ansible Runner. Test the tool across a large distributed environment (1000+ machines) to validate its performance and reliability.
Looking forward to significantly reducing the zypper deployment time across all 351 RMT VM servers in our MLM cluster by eliminating the dependency on the taskomatic service, bringing execution down to a fraction of the current duration. The tool should also support multiple runtime flags, such as:
mgr-ansible-ssh: Remote command execution wrapper using Ansible Runner
Usage: mgr-ansible-ssh [--help] [--version] [--inventory INVENTORY]
[--run RUN] [--playbook PLAYBOOK] [--limit LIMIT]
[--forks FORKS] [--dry-run] [--no-ansible-output]
Required Arguments
--inventory, -i Path to Ansible inventory file to use
Any One of the Arguments Is Required
--run, -r Execute the specified shell command on target hosts
--playbook, -p Execute the specified Ansible playbook on target hosts
Optional Arguments
--help, -h Show the help message and exit
--version, -v Show the version and exit
--limit, -l Limit execution to specific hosts or groups
--forks, -f Number of parallel Ansible forks
--dry-run Run in Ansible check mode (requires -p or --playbook)
--no-ansible-output Suppress Ansible stdout output
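The core of such a wrapper can stay quite small; here is a sketch of how the ad-hoc and playbook paths might call Ansible Runner (the ansible_runner.run keyword arguments are the library's real API; the flag names mirror the help above, everything else is illustrative):

```python
import argparse
import ansible_runner

parser = argparse.ArgumentParser(prog="mgr-ansible-ssh")
parser.add_argument("--inventory", "-i", required=True)
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument("--run", "-r")
group.add_argument("--playbook", "-p")
parser.add_argument("--forks", "-f", type=int, default=50)
parser.add_argument("--limit", "-l")
args = parser.parse_args()

if args.run:  # ad-hoc shell command on all targeted hosts
    result = ansible_runner.run(
        private_data_dir=".", host_pattern=args.limit or "all",
        module="shell", module_args=args.run,
        inventory=args.inventory, forks=args.forks,
    )
else:  # full playbook execution
    result = ansible_runner.run(
        private_data_dir=".", playbook=args.playbook,
        inventory=args.inventory, forks=args.forks, limit=args.limit,
    )
print(result.status)  # artifacts land under ./artifacts/ by default
```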
Secondary/Stretch Goals (if time permits):
- Add pretty output formatting (success/failure summary per host).
- Implement basic logging of executed commands and results.
- Introduce safety checks for risky commands (shutdown, rm -rf, etc.).
- Package the tool so it can be installed with pip or stored internally.
Resources
Collaboration is welcome from anyone interested in CLI tooling, automation, or distributed systems. Skills that would be particularly valuable include:
- Python especially around CLI dev (argparse, click, rich)
openQA log viewer by mpagot
Description
*** Warning: Are You at Risk for VOMIT? ***
Do you find yourself staring at a screen, your eyes glossing over as thousands of lines of text scroll by? Do you feel a wave of text-based nausea when someone asks you to "just check the logs"?
You may be suffering from VOMIT (Verbose Output Mental Irritation Toxicity).
This dangerous, work-induced ailment is triggered by exposure to an overwhelming quantity of log data, especially from parallel systems. The human brain, not designed to mentally process 12 simultaneous autoinst-log.txt files, enters a state of toxic shock. It rejects the "Verbose Output," making it impossible to find the one critical error line buried in a 50,000-line sea of "INFO: doing a thing."
Before you're forced to rm -rf /var/log in a fit of desperation, we present the digital antacid.
No panic: we have The openQA Log Visualizer
This is the UI antidote for handling toxic log environments. It bravely dives into the chaotic, multi-machine mess of your openQA test runs, finds all the related, verbose logs, and force-feeds them into a parser.
Goals
Work on the existing POC openqa-log-visualizer, focusing on a few specific tasks:
- add support for more types of logs
- extend the configuration file syntax beyond the current one
- work on log parsing performance
Find some beta-testers and collect feedback and ideas about features.
If time allows, evaluate other UI frameworks and solutions (something simpler to distribute and run, maybe lower level to gain performance).
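One cheap building block for the parallel-log problem (a sketch, not the POC's actual code; it assumes each line starts with a sortable timestamp and each log is already chronological): merge the per-machine logs by timestamp so related lines end up adjacent.

```python
import heapq

def tagged_lines(path):
    # Assumption: each line starts with a sortable timestamp like "12:34:56.789".
    with open(path) as f:
        for line in f:
            timestamp = line.split(" ", 1)[0]
            yield timestamp, path, line

def merge_logs(paths):
    # heapq.merge keeps the combined stream sorted by (timestamp, path),
    # provided each individual log is already in chronological order.
    for _, path, line in heapq.merge(*(tagged_lines(p) for p in paths)):
        print(f"{path}: {line}", end="")

merge_logs(["machine1/autoinst-log.txt", "machine2/autoinst-log.txt"])
```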
Resources
MCP Server for SCC by digitaltomm
Description
Provide an MCP Server implementation for customers to access data on scc.suse.com via MCP protocol. The core benefit of this MCP interface is that it has direct (read) access to customer data in SCC, so the AI agent gets enhanced knowledge about individual customer data, like subscriptions, orders and registered systems.
Architecture

Goals
We want to demonstrate a proof of concept to connect to the SCC MCP server with any AI agent, for example gemini-cli or codex, enabling the user to ask questions regarding their SCC inventory.
For this Hackweek, we target that users get proper responses to these example questions:
- Which of my currently active systems are running products that are out of support?
- Do I have ready to use registration codes for SLES?
- What are the latest 5 released patches for SLES 15 SP6? Output as a list with release date, patch name, affected package names and fixed CVEs.
- Which versions of kernel-default are available on SLES 15 SP6?
Technical Notes
Similar to the organization APIs, this can expose to customers data about their subscriptions, orders, systems and products. Authentication should be done by organization credentials, similar to what needs to be provided to RMT/MLM. Customers can connect to the SCC MCP server from their own MCP-compatible client and Large Language Model (LLM), so no third party is involved.
Milestones
- [x] Basic MCP API setup
- MCP endpoints:
  - [x] Products / Repositories
  - [x] Subscriptions / Orders
  - [x] Systems
  - [x] Packages
- [x] Document usage with Gemini CLI, Codex
Resources
Gemini CLI setup:
~/.gemini/settings.json:
"what is it" file and directory analysis via MCP and local LLM, for console and KDE by rsimai
Description
Users sometimes wonder what files or directories they find on their local PC are good for. If they can't determine that from the filename or metadata, there should be an easy way to quickly analyze the content and at least guess the meaning. An LLM could help with that, through the use of a filesystem MCP and to-text converters for typical file types. Ideally this is integrated into the desktop environment but works as well from a console. All data is processed locally or "on premise"; no artifacts remain or leave the system.
Goals
- The user can run a command from the console, to check on a file or directory
- The filemanager contains the "analyze" feature within the context menu
- The local LLM could serve for other use cases where privacy matters
TBD
- Find or write capable one-shot and interactive MCP client
- Find or write simple+secure file access MCP server
- Create local LLM service with appropriate footprint, containerized
- Shell command with options
- KDE integration (Dolphin)
- Package
- Document
Resources
Song Search with CLAP by gcolangiuli
Description
Contrastive Language-Audio Pretraining (CLAP) is an open-source library that enables the training of a neural network on both Audio and Text descriptions, making it possible to search for Audio using a Text input. Several pre-trained models for song search are already available on huggingface
Goals
Evaluate how CLAP can be used for song searching and determine which types of queries yield the best results by developing a Minimum Viable Product (MVP) in Python. Based on the results of this MVP, future steps could include:
- Music Tagging;
- Free text search;
- Integration with an LLM (for example, with MCP or the OpenAI API) for music suggestions based on your own library.
The code for this project will be entirely written using AI to better explore and demonstrate AI capabilities.
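For reference, a minimal text-to-audio similarity check with a pre-trained CLAP model from huggingface might look like this (a sketch only: the checkpoint name is one example, and the random waveform is a placeholder for a real song):

```python
import numpy as np
from transformers import ClapModel, ClapProcessor

model = ClapModel.from_pretrained("laion/clap-htsat-unfused")      # example checkpoint
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

audio = np.random.randn(48_000 * 10)  # placeholder: 10 s waveform at 48 kHz
inputs = processor(
    text=["a sad piano ballad", "fast electronic dance music"],
    audios=[audio], sampling_rate=48_000, return_tensors="pt",
)
outputs = model(**inputs)
print(outputs.logits_per_audio)  # similarity of the clip to each text query
```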
Result
In this MVP we implemented:
- Async Song Analysis with Clap model
- Free Text Search of the songs
- Similar song search based on vector representation
- Containerised version with web interface
We also documented what went well and what can be improved in the use of AI.
You can have a look at the result here:
Future work could focus on performance improvements and the stability of the analysis.
References
- CLAP: The main model being researched;
- huggingface: Pre-trained models for CLAP;
- Free Music Archive: Creative Commons songs that can be used for testing;
SUSE Observability MCP server by drutigliano
Description
The idea is to implement the SUSE Observability Model Context Protocol (MCP) Server as a specialized, middle-tier API designed to translate the complex, high-cardinality observability data from StackState (topology, metrics, and events) into highly structured, contextually rich, and LLM-ready snippets.
This MCP Server abstracts the StackState APIs. Its primary function is to serve as a Tool/Function Calling target for AI agents. When an AI receives an alert or a user query (e.g., "What caused the outage?"), the AI calls an MCP Server endpoint. The server then fetches the relevant operational facts, summarizes them, normalizes technical identifiers (like URNs and raw metric names) into natural language concepts, and returns a concise JSON or YAML payload. This payload is then injected directly into the LLM's prompt, ensuring the final diagnosis or action is grounded in real-time, accurate SUSE Observability data, effectively minimizing hallucinations.
Goals
- Grounding AI Responses: Ensure that all AI diagnoses, root cause analyses, and action recommendations are strictly based on verifiable, real-time data retrieved from the SUSE Observability StackState platform.
- Simplifying Data Access: Abstract the complexity of StackState's native APIs (e.g., Time Travel, 4T Data Model) into simple, semantic functions that can be easily invoked by LLM tool-calling mechanisms.
- Data Normalization: Convert complex, technical identifiers (like component URNs, raw metric names, and proprietary health states) into standardized, natural language terms that an LLM can easily reason over.
- Enabling Automated Remediation: Define clear, action-oriented MCP endpoints (e.g., execute_runbook) that allow the AI agent to initiate automated operational workflows (e.g., restarts, scaling) after a diagnosis, closing the loop on observability.
Hackweek STEP
- Create a functional MCP endpoint exposing one (or more) tool(s) to answer queries like "What is the health of service X?" by fetching, normalizing, and returning live StackState data in an LLM-ready format.
Scope
- Implement read-only MCP server that can:
- Connect to a live SUSE Observability instance and authenticate (with API token)
- Use tools to fetch data for a specific component URN (e.g., current health state, metrics, possibly topology neighbors, ...).
- Normalize response fields (e.g., URN to "Service Name," health state DEVIATING to "Unhealthy", raw metrics).
- Return the data as a structured JSON payload compliant with the MCP specification.
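The normalization step itself can be almost trivial; a sketch (the DEVIATING-to-"Unhealthy" mapping is from the scope above, but field names and the URN layout are invented for illustration, not the real StackState schema):

```python
# Hypothetical normalization of a StackState-style component payload.
HEALTH_LABELS = {"CLEAR": "Healthy", "DEVIATING": "Unhealthy", "CRITICAL": "Critical"}

def normalize(component: dict) -> dict:
    urn = component["urn"]  # e.g. "urn:component/service-checkout-7f9c"
    return {
        "service_name": urn.rsplit("/", 1)[-1],
        "health": HEALTH_LABELS.get(component["healthState"], "Unknown"),
    }

print(normalize({"urn": "urn:component/service-checkout-7f9c",
                 "healthState": "DEVIATING"}))
# {'service_name': 'service-checkout-7f9c', 'health': 'Unhealthy'}
```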
Deliverables
- MCP Server v0.1: a running Golang MCP server with at least one tool.
- A README.md and a test script (e.g., curl commands or a simple notebook) showing how an AI agent would call the endpoint and the resulting JSON payload.
Outcome
A functional and testable API endpoint that proves the core concept: translating complex StackState data into a simple, LLM-ready format. This provides the foundation for developing AI-driven diagnostics and automated remediation.
Resources
- https://www.honeycomb.io/blog/its-the-end-of-observability-as-we-know-it-and-i-feel-fine
- https://www.datadoghq.com/blog/datadog-remote-mcp-server
- https://modelcontextprotocol.io/specification/2025-06-18/index
- https://modelcontextprotocol.io/docs/develop/build-server
Basic implementation
- https://github.com/drutigliano19/suse-observability-mcp-server
Results
Successfully developed and delivered a fully functional SUSE Observability MCP Server that bridges language models with SUSE Observability's operational data. This project demonstrates how AI agents can perform intelligent troubleshooting and root cause analysis using structured access to real-time infrastructure data.
Example execution
Sim racing track database by avicenzi
Description
Do you wonder which tracks are available in each sim racing game? Wonder no more.
Goals
Create a simple website that includes details about sim racing games.
The website should be static and built with Alpine.JS and TailwindCSS. Data should be consumed from JSON, easily done with Alpine.JS.
The main goal is to gather track information, because tracks vary by game. Older games might have older layouts, and newer games might have up-to-date layouts. Some games include historical layouts, some are laser scanned. Many tracks are available as DLCs.
Initially include official tracks from:
- ACC
- iRacing
- PC2
- LMU
- Raceroom
- Rennsport
These games have a short list of tracks and DLCs.
Resources
The hardest part is collecting information about tracks in each game. Active games usually have information on their website or even on Steam. Older games might be on Fandom or a Wiki. Real track information can be extracted from Wikipedia or the track website.
Port some classic game to Linux by MDoucha
Let's pick some old classic game, reverse engineer the data formats and game rules, and write an open source engine for it from scratch. Some games from the 1990s are simple enough that we could have a playable prototype by the end of the week.
Write which games you'd like to hack on in the comments. Don't forget to check e.g. on Open Source Game Clones, Github and SourceForge whether the game is ported already.
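For the reverse-engineering part, most of the work starts with something as simple as this first probe (a generic sketch: the filename and header layout are hypothetical, not the actual LBX or Signus formats):

```python
import struct

# Generic first probe at an unknown binary format (field layout is a guess).
with open("SOMEFILE.DAT", "rb") as f:
    data = f.read()

# Try a little-endian header: a 16-bit entry count followed by 32-bit offsets.
(count,) = struct.unpack_from("<H", data, 0)
offsets = struct.unpack_from(f"<{count}I", data, 2)
print(f"{count} entries, first offsets: {offsets[:5]}")
```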
Hack Week 25 - Master of Orion II: Battle at Antares
Work on Master of Orion II continued with Tech Review and Colony list screens.
Hack Week 24 - Master of Orion II: Battle at Antares & Chaos Overlords
Work on Master of Orion II continues but we can hack more than one game. Chaos Overlords is a dystopian, lighthearted, cyberpunk turn-based strategy game originally released in 1996 for Windows 95 and Mac OS. The player takes on the role of a Chaos Overlord, attempting to control a city. Gameplay involves hiring mercenary gangs and deploying them on an 8-by-8 grid of city sectors to generate income, occupy sectors and take over the city.
How to ~~install & play~~ observe the decompilation progress:
- Clone the Git repository
- A playable reimplementation does not exist yet, but when it does, it will be linked in the repository mentioned above.
Further work needed:
- Analyze the remaining unknown data structures, most of which are related to the AI.
- Decompile the AI completely. The strong AI is part of the appeal of the game. It cannot be left out.
- Reimplement the game.
Hack Week 20, 21, 22 & 23 - Master of Orion II: Battle at Antares
Master of Orion II is one of the greatest turn-based 4X games of the 1990s. Explore the galaxy, colonize planets, research new technologies, fight space monsters and alien empires and in the end, become the ruler of the galaxy one way or another.
How to install & play:
- Clone the Git repository
- Run `./bootstrap; ./configure; make && make install`
- Copy all *.LBX files from the original Master of Orion II to the installation data directory (`/usr/local/share/openorion2` by default)
- Run `openorion2`
Further work needed:
- Analyze the rest of the original savegame format and a few remaining data files.
- Implement most of the game. The open source engine currently supports only loading saved games from the original version and viewing the galaxy map, fleet management and list of known planets.
Hack Week 19 - Signus: The Artifact Wars
Signus is a Czech turn-based strategy game similar to Panzer General or Battle Isle series. Originally published in 1998 and open-sourced by the original developers in 2003.
How to install & play:
- Clone the Git repository
- Run `./bootstrap; ./configure; make && make install` in both `signus` and `signus-data` directories.
- Run `signus`
Further work needed:
Advent of Code: The Diaries by amanzini
Description
It was the Night Before Compile Time ...
Hackweek 25 (December 1-5) perfectly coincides with the first five days of Advent of Code 2025. This project will leverage this overlap to participate in the event in real-time.
To add a layer of challenge and exploration (in the true spirit of Hackweek), the puzzles will be solved using a non-mainstream, modern language like Ruby, D, Crystal, Gleam or Zig.
The primary project intent is not simply to solve the puzzles, but to exercise result sharing and documentation. I'd create a public-facing repository documenting the process. This involves treating each day's puzzle as a mini-project: solving it, then documenting the solution with detailed write-ups, analysis of the language's performance and ergonomics, and visualizations.
|
\ ' /
-- (*) --
>*<
>0<@<
>>>@<<*
>@>*<0<<<
>*>>@<<<@<<
>@>>0<<<*<<@<
>*>>0<<@<<<@<<<
>@>>*<<@<>*<<0<*<
\*/ >0>>*<<@<>0><<*<@<<
___\\U//___ >*>>@><0<<*>>@><*<0<<
|\\ | | \\| >@>>0<*<0>>@<<0<<<*<@<<
| \\| | _(UU)_ >((*))_>0><*<0><@<<<0<*<
|\ \| || / //||.*.*.*.|>>@<<*<<@>><0<<<
|\\_|_|&&_// ||*.*.*.*|_\\db//_
""""|'.'.'.|~~|.*.*.*| ____|_
|'.'.'.| ^^^^^^|____|>>>>>>|
~~~~~~~~ '""""`------'
------------------------------------------------
This ASCII pic can be found at
https://asciiart.website/art/1831
Goals
Code, Docs, and Memes: An AoC Story
Have fun!
Involve more people, play together
Solve Days 1-5: Successfully solve both parts of the Advent of Code 2025 puzzles for Days 1-5 using the chosen non-mainstream language.
Daily Documentation & Language Review: Publish a detailed write-up for each day. This documentation will include the solution analysis, the chosen algorithm, and specific commentary on the language's ergonomics, performance, and standard library for the given task.
