Description

Continue the work on kqlite (lightweight remote SQLite with high availability and automatic failover).
It's a solution for applications that require high availability but don't need all the features of a complete RDBMS, and whose use case fits SQLite.
kqlite can also be used as a lightweight storage backend for Kubernetes (https://docs.k3s.io/datastore) and the Edge, allowing an HA setup with only two nodes.

Goals

Push kqlite to a beta version.
Make kqlite usable as a library for Go programs.

Resources

https://github.com/kqlite/kqlite

Looking for hackers with the skills:

database sqlite cluster

This project is part of:

Hack Week 25, Hack Week 24

Activity

  • 18 days ago: epenchev added keyword "database" to this project.
  • 18 days ago: epenchev added keyword "sqlite" to this project.
  • 18 days ago: epenchev added keyword "cluster" to this project.
  • about 1 month ago: epenchev liked this project.
  • about 1 month ago: epenchev started this project.
  • about 1 month ago: epenchev originated this project.


    Similar Projects

    Sim racing track database by avicenzi

    Description

    Do you wonder which tracks are available in each sim racing game? Wonder no more.

    Goals

    Create a simple website that includes details about sim racing games.

    The website should be static and built with Alpine.JS and TailwindCSS. Data should be consumed from JSON, easily done with Alpine.JS.

    The main goal is to gather track information, because tracks vary by game. Older games might have older layouts, and newer games might have up-to-date layouts. Some games include historical layouts, some are laser scanned. Many tracks are available as DLCs.

    Initially include official tracks from:

    • ACC
    • iRacing
    • PC2
    • LMU
    • Raceroom
    • Rennsport

    These games have a short list of tracks and DLCs.

    Resources

    The hardest part is collecting information about tracks in each game. Active games usually have information on their website or even on Steam. Older games might be on Fandom or a Wiki. Real track information can be extracted from Wikipedia or the track website.


    Uyuni read-only replica by cbosdonnat

    Description

    For now, there is no possible HA setup for Uyuni. The idea is to explore setting up a read-only shadow instance of an Uyuni server and making it as useful as possible.

    Possible things to look at:

    • Live sync of the database, probably using the WAL. Some of the tables may have to be skipped, or some features disabled on the RO instance (Taskomatic, PXT sessions…)
    • Can we use a load balancer that routes read-only queries to either instance and read-write queries to the RW one? For example, packages or PXE data can be served by both, as can API GET requests. The rest would be RW.
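    The routing idea in the second bullet can be sketched as a simple statement classifier. This is only an illustration: the backend labels are hypothetical, and a real deployment would put this logic in a proxy or load balancer in front of the two instances.

```python
# Sketch of read/write SQL routing: send pure reads to either instance,
# everything else to the RW primary. Backend labels are hypothetical.

READ_ONLY_PREFIXES = ("select", "show", "explain")

def route(statement: str) -> str:
    """Return which backend class should handle the statement."""
    words = statement.lstrip().split(None, 1)
    first = words[0].lower() if words else ""
    if first in READ_ONLY_PREFIXES:
        return "ro-or-rw"    # safe for the read-only replica too
    return "rw-only"         # writes must go to the primary

print(route("SELECT id FROM rhnPackage"))        # ro-or-rw
print(route("UPDATE rhnServer SET name = 'x'"))  # rw-only
```

    A production classifier would need to handle CTEs, functions with side effects, and session state, which is why existing SQL-aware proxies are the more realistic option.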

    Goals

    • Prepare a document explaining how to do it.
    • A PR with the needed code changes to support it.


    GRIT: GRaphs In Time by fvanlankvelt

    Description

    The current implementation of the Time-Travelling Topology database, StackGraph, has served SUSE Observability well over the years. But it is dependent on a number of complex components - Zookeeper, HDFS, HBase, Tephra. These lead to a large number of failure scenarios and parameters to tweak for optimal performance.

    The goal of this project is to take the high-level requirements (time-travelling topology, querying over time, transactional changes to topology, scalability) and design/prototype key components, to see where they would lead us if we were to start from scratch today.

    An example would be to use RocksDB to persist topology history. Its user-defined timestamps seem to match time-travelling well, and it has transaction support with fine-grained conflict detection.

    Goals

    Determine the feasibility of implementing the model on a whole new architecture. See how to model the graph and its history such that updates and querying are performant and transactional conflicts are minimized. Build a prototype to validate the model.

    Resources

    Backend developers, preferably experienced in distributed systems. Programming language: Scala 3, with some C++ for low-level parts.

    Progress

    The project has started on GitHub as GRaphs In Time, a C++ project that

    • embeds RocksDB for persistence,
    • uses (nu)Raft for replication/consensus,
    • supports large transactions, with
    • SNAPSHOT isolation
    • time-travel
    • graph operations (add/remove vertex/edge, indexes)
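    The time-travel idea behind RocksDB's user-defined timestamps can be illustrated with a tiny in-memory sketch (Python here for brevity; GRIT itself is C++): each key keeps its versions sorted by timestamp, and a read at time t binary-searches for the latest version written at or before t.

```python
import bisect

# Minimal MVCC-style time-travel sketch: per-key version lists ordered
# by timestamp; get(key, t) returns the latest value at or before t.

class VersionedStore:
    def __init__(self):
        self.versions = {}   # key -> ([timestamps], [values]), sorted by ts

    def put(self, key, ts, value):
        ts_list, vals = self.versions.setdefault(key, ([], []))
        i = bisect.bisect(ts_list, ts)
        ts_list.insert(i, ts)
        vals.insert(i, value)

    def get(self, key, ts):
        """Return the latest value written at or before ts, or None."""
        ts_list, vals = self.versions.get(key, ([], []))
        i = bisect.bisect_right(ts_list, ts)
        return vals[i - 1] if i else None

g = VersionedStore()
g.put("edge:a-b", 10, "present")
g.put("edge:a-b", 20, "removed")
print(g.get("edge:a-b", 15))  # present
print(g.get("edge:a-b", 25))  # removed
print(g.get("edge:a-b", 5))   # None
```

    RocksDB keeps this ordering on disk natively when user-defined timestamps are enabled, which is why it looks like a good fit for topology history.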


    Casky – Lightweight C Key-Value Engine with Crash Recovery by pperego

    Description

    Casky is a lightweight, crash-safe key-value store written in C, designed for fast storage and retrieval of data with a minimal footprint. Built using Test-Driven Development (TDD), Casky ensures reliability while keeping the codebase clean and maintainable. It is inspired by Bitcask and aims to provide a simple, embeddable storage engine that can be integrated into microservices, IoT devices, and other C-based applications.

    Objectives:

    • Implement a minimal key-value store with append-only file storage.
    • Support crash-safe persistence and recovery.
    • Expose a simple public API: store(key, value), load(key), delete(key).
    • Follow TDD methodology for robust and testable code.
    • Provide a foundation for future extensions, such as in-memory caching, compaction, and eventual integration with vector-based databases like PixelDB.
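    The Bitcask-style design behind the first three objectives can be sketched in a few lines: an append-only log file plus an in-memory key directory ("keydir") mapping each key to the offset of its latest value. Casky itself is written in C; this Python sketch uses an invented record format, not Casky's actual on-disk layout.

```python
import os
import tempfile

# Bitcask-style sketch: append-only log + in-memory keydir.
# Record format (illustrative): "key length\n" header, then value bytes
# and a newline; a negative length is a tombstone marking a delete.

class TinyCask:
    def __init__(self, path):
        self.path = path
        self.keydir = {}                 # key -> (offset, length)
        if os.path.exists(path):
            self._rebuild()              # crash recovery: replay the log
        self.f = open(path, "ab")

    def _rebuild(self):
        with open(self.path, "rb") as f:
            line = f.readline()
            while line:
                key, vlen = line.decode().rstrip("\n").rsplit(" ", 1)
                vlen = int(vlen)
                if vlen < 0:             # tombstone: key was deleted
                    self.keydir.pop(key, None)
                else:
                    off = f.tell()
                    f.read(vlen + 1)     # skip value and trailing newline
                    self.keydir[key] = (off, vlen)
                line = f.readline()

    def store(self, key, value):
        data = value.encode()
        self.f.write(f"{key} {len(data)}\n".encode())
        off = self.f.tell()              # offset where the value starts
        self.f.write(data + b"\n")
        self.f.flush()
        os.fsync(self.f.fileno())        # durable before acknowledging
        self.keydir[key] = (off, len(data))

    def load(self, key):
        if key not in self.keydir:
            return None
        off, length = self.keydir[key]
        with open(self.path, "rb") as f:
            f.seek(off)
            return f.read(length).decode()

    def delete(self, key):
        self.f.write(f"{key} -1\n".encode())   # append a tombstone
        self.f.flush()
        self.keydir.pop(key, None)

# Quick demo: state survives a "crash" (reopen) because the log is replayed.
path = os.path.join(tempfile.mkdtemp(), "tiny.log")
db = TinyCask(path)
db.store("lang", "C")
db.store("lang", "C99")
print(db.load("lang"))                   # C99
db.f.close()
print(TinyCask(path).load("lang"))       # C99
```

    Old records are never rewritten in place, which is what makes crash recovery a simple replay; compaction (one of the bonus goals) would later fold superseded records away.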

    Why This Project is Interesting:

    Casky combines low-level C programming with modern database concepts, making it an ideal playground to explore storage engines, crash safety, and performance optimization. It’s small enough to complete during Hackweek, yet it provides a solid base for future experiments and more complex projects.

    Goals

    • Working prototype with append-only storage and memtable.
    • TDD test suite covering core functionality and recovery.
    • Demonstration of basic operations: insert, load, delete.
    • Optional bonus: LRU caching, file compaction, performance benchmarks.

    Future Directions:

    After Hackweek, Casky can evolve into a backend engine for projects like PixelDB, supporting vector storage and approximate nearest neighbor search, combining low-level performance with cutting-edge AI retrieval applications.

    Resources

    • The Bitcask paper: https://riak.com/assets/bitcask-intro.pdf
    • The Casky repository: https://github.com/thesp0nge/casky

    Day 1

    [0.10.0] - 2025-12-01

    Added

    • Core in-memory KeyDir and EntryNode structures
    • API functions: casky_open, casky_close, casky_put, casky_get, casky_delete
    • Hash function: casky_djb2_hash_xor
    • Error handling via casky_errno
    • Unit tests for all APIs using standard asserts
    • Test cleanup of temporary files

    Changed

    • None (first MVP)

    Fixed

    • None (first MVP)

    Day 2

    [0.20.0] - 2025-12-02


    Collection and organisation of information about Bulgarian schools by iivanov

    Description

    To achieve this, it will be necessary to:

    • Collect/download raw data from various governmental and non-governmental organizations.
    • Clean up the raw data and organise it in some kind of database.
    • Create a tool to make queries easy.
    • Or perhaps dump all the data into an AI and ask questions in natural language.

    Goals

    By selecting a particular school, information like this will be provided:

    • School scores on national exams.
    • School scores from the external evaluations exams.
    • School town, municipality and region.
    • Employment rate in a town or municipality.
    • Average health of the population in the region.
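    Once the data is organised into SQL tables, the per-school view described above reduces to a join. A minimal sketch using Python's built-in sqlite3, with invented table and column names and toy data:

```python
import sqlite3

# Hypothetical schema sketch: school scores joined with town-level data.
# All names and values here are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE schools (id INTEGER PRIMARY KEY, name TEXT, town TEXT);
CREATE TABLE exam_scores (school_id INTEGER, year INTEGER, score REAL);
CREATE TABLE towns (name TEXT PRIMARY KEY, municipality TEXT,
                    employment_rate REAL);
INSERT INTO schools VALUES (1, 'SMG', 'Sofia');
INSERT INTO exam_scores VALUES (1, 2024, 5.7);
INSERT INTO towns VALUES ('Sofia', 'Stolichna', 0.62);
""")

# One row per (school, year) with the town context attached.
row = conn.execute("""
    SELECT s.name, e.year, e.score, t.municipality, t.employment_rate
    FROM schools s
    JOIN exam_scores e ON e.school_id = s.id
    JOIN towns t ON t.name = s.town
""").fetchone()
print(row)  # ('SMG', 2024, 5.7, 'Stolichna', 0.62)
```

    The same tables are also what a text-to-SQL layer like Vanna.AI would introspect to answer natural-language questions.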

    Resources

    Some of these are available only in Bulgarian.

    • https://danybon.com/klasazia
    • https://nvoresults.com/index.html
    • https://ri.mon.bg/active-institutions
    • https://www.nsi.bg/nrnm/ekatte/archive

    Results

    • Information about all Bulgarian schools, with their scores in recent years, cleaned and organised into SQL tables.
    • Information about all Bulgarian villages, cities, municipalities and districts cleaned and organised into SQL tables.
    • Census information for all Bulgarian villages and cities since the beginning of this century, cleaned and organised into SQL tables.
    • Religion and ethnicity information for all Bulgarian municipalities, cleaned and organised into SQL tables.
    • Data successfully loaded into a locally running Ollama with the help of Vanna.AI.
    • Seems to be usable.

    TODO

    • Add more statistical information about municipalities and ....

    Code and data


    Kudos aka openSUSE Recognition Platform by lkocman

    Description

    Relevant blog post at news-o-o

    I started the Kudos application shortly after Leap 16.0 to create a simple, friendly way to recognize people for their work and contributions to openSUSE. There's so much more to our community than just submitting requests in OBS or Gitea: we have translations (not only in Weblate), wiki edits, forum and social media moderation, infrastructure maintenance, booth participation, talks, manual testing, openQA test suites, and more!

    Goals

    • Kudos under github.com/openSUSE/kudos, with build previews (e.g. via Netlify)

    • Have a kudos.opensuse.org instance running in production

    • Build an easy-to-contribute recognition platform for the openSUSE community: a place where everyone can send and receive appreciation for their work, across all areas of contribution.

    • In the future, we could even explore reward options such as vouchers for t-shirts or other community swag, small tokens of appreciation to make recognition more tangible.

    Resources

    (Do not create new badge requests during hackweek, unless you'll make the badge during hackweek)


    Hacking a SUSE MLS 7.9 Cluster by roseswe

    Description

    SUSE MLS (Multi-Linux Support) is a subscription through which SUSE provides technical support and updates for Red Hat Enterprise Linux (RHEL) and CentOS servers.

    The most significant operational difference between SUSE MLS 7 and the standard SUSE Linux Enterprise Server High Availability Extension (SLES HAE) lies in the administrative toolchain. While both distributions rely on the same underlying Pacemaker resource manager and Corosync messaging layer, MLS 7 preserves the native Red Hat Enterprise Linux 7 user space. Consequently, MLS 7 administrators must utilize the Pacemaker Configuration System (pcs), a monolithic and imperative tool. The pcs utility abstracts the entire stack, controlling Corosync networking, cluster bootstrapping, and resource management through single-line commands that automatically generate the necessary configuration files.

    In contrast, SLES HAE employs the Cluster Resource Management Shell (crmsh). The crm utility operates as a declarative shell that focuses primarily on the Cluster Information Base (CIB). Unlike the command-driven nature of pcs, crmsh allows administrators to enter a configuration context to define the desired state of the cluster using syntax that maps closely to the underlying XML structure. This makes SLES HAE more flexible for complex edits, but it requires a different syntax knowledge base compared to the rigid, command-execution workflow of MLS 7.

    The scope here is MLS 7.9.

    Goals

    • Get more familiar with the MLS 7.9 HA toolchain, graphical user interface, and daemons
    • Create a two node MLS cluster with SBD
    • Check different use cases
    • Create a "SUSE Best Practices" presentation slide set suitable for Consulting Customers

    Resources

    • You need MLS 7.9 (Qcow2) installed, plus a subscription
    • A KVM server with 2 VMs and 2 SBD devices
    • RHEL 7 and HA skills