I would like to continue working on a web UI for the Docker registry. I know we already have Portus, which is based on Ruby on Rails, but I would prefer a project based on Go with a single-page application frontend built with ReactJS. Because of the single-page application we are forced to write proper APIs that get consumed by the JavaScript application; besides that, I also want to add a CLI client for managing the system.
You can find the project at https://github.com/harborapp.
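Since the single-page frontend would talk to the backend only through JSON APIs, here is a minimal sketch of what such an endpoint could look like in Go (the route and payload shape are assumptions for illustration, not the project's actual API):

```go
// Minimal sketch of a JSON API endpoint the ReactJS frontend could consume.
// The /api/repos route and the repository fields are assumptions.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type repository struct {
	Name string `json:"name"`
	Tags int    `json:"tags"`
}

func listRepositories(w http.ResponseWriter, r *http.Request) {
	// In the real service this data would come from the Docker registry.
	repos := []repository{{Name: "library/alpine", Tags: 12}}
	w.Header().Set("Content-Type", "application/json")
	if err := json.NewEncoder(w).Encode(repos); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

func main() {
	http.HandleFunc("/api/repos", listRepositories)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The same endpoints would then also back the planned CLI client, so both frontends stay on one API surface.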
This project is part of: Hack Week 14
Similar Projects
Migrate from Docker to Podman by tjyrinki_suse
Description
I'd like to continue my former work on containerizing several domains on a single server by switching from Docker containers to Podman containers. That will also require an OS upgrade, as Podman is not available in the old server version.
Goals
- Update OS.
- Migrate from Docker to Podman.
- Keep everything functional, including the additional Docker container that was added in the meantime and is already in use.
- Keep everything at least as secure as it is currently. One of the reasons for having the containers is to isolate risks related to services open to the public Internet.
- Try to enable the Podman use in production.
- At minimum, learn about all of these topics.
- Optionally, improve Ansible side of things as well...
Resources
A search engine is one's friend. Migrating from Docker to Podman, and from docker-compose to podman-compose.
toptop - a top clone written in Go by dshah
Description
toptop is a clone of Linux's top CLI tool, but written in Go.
Goals
Learn more about Go (mainly bubbletea) and Linux
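As a rough idea of what a bubbletea-based top clone can start from, here is a minimal sketch that re-reads /proc/loadavg once per second; it is hypothetical code, not the actual toptop implementation:

```go
// Minimal bubbletea program: refresh /proc/loadavg every second, quit on 'q'.
package main

import (
	"fmt"
	"os"
	"time"

	tea "github.com/charmbracelet/bubbletea"
)

type tickMsg time.Time

type model struct {
	loadavg string
}

// tick schedules the next refresh one second from now.
func tick() tea.Cmd {
	return tea.Tick(time.Second, func(t time.Time) tea.Msg { return tickMsg(t) })
}

func (m model) Init() tea.Cmd { return tick() }

func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
	switch msg := msg.(type) {
	case tea.KeyMsg:
		if msg.String() == "q" { // quit on 'q', like top
			return m, tea.Quit
		}
	case tickMsg:
		if data, err := os.ReadFile("/proc/loadavg"); err == nil {
			m.loadavg = string(data)
		}
		return m, tick()
	}
	return m, nil
}

func (m model) View() string {
	return fmt.Sprintf("load average: %s\npress q to quit\n", m.loadavg)
}

func main() {
	if _, err := tea.NewProgram(model{}).Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```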
Resources
Cluster API Add-on Provider for Kubewarden by csalas
Description
Can we integrate Kubewarden with Cluster API provisioning?
Cluster API is a Kubernetes project focused on providing declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters. TL;DR: CAPI lets you define Kubernetes clusters in plain YAML, and CAPI providers (infrastructure, control plane/bootstrap, etc.) manage provisioning and configuration for you.
What if we could create an add-on provider that automatically installs Kubewarden and deploys Policy Servers to CAPI clusters?
Goals
- As a user I'd like to set a cluster (or list of clusters) and have the provider install Kubewarden for me.
- As a user I'd like to set what policies must be enforced for a cluster (or list of clusters).
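As a sketch only, such an add-on provider could be built as a controller-runtime reconciler that watches CAPI Cluster objects; the type names and the (omitted) installation step below are assumptions, not an existing provider:

```go
// Hypothetical skeleton of a CAPI add-on provider reconciler: watch Cluster
// objects and (not shown) install Kubewarden into each workload cluster.
package controllers

import (
	"context"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// KubewardenInstallReconciler reacts to CAPI Cluster events.
type KubewardenInstallReconciler struct {
	client.Client
}

func (r *KubewardenInstallReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var cluster clusterv1.Cluster
	if err := r.Get(ctx, req.NamespacedName, &cluster); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// Wait until the workload cluster's control plane is reachable.
	if !cluster.Status.ControlPlaneReady {
		return ctrl.Result{Requeue: true}, nil
	}
	// TODO (assumption): fetch the workload cluster kubeconfig and install
	// kubewarden-controller plus a PolicyServer, e.g. via Helm releases.
	return ctrl.Result{}, nil
}

func (r *KubewardenInstallReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&clusterv1.Cluster{}).
		Complete(r)
}
```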
Resources
- Cluster API: https://cluster-api.sigs.k8s.io/
- Kubewarden: https://docs.kubewarden.io/
FamilyTrip Planner: A Personalized Travel Planning Platform for Families by pherranz
Description
FamilyTrip Planner is an innovative travel planning application designed to optimize travel experiences for families with children. By integrating APIs for flights, accommodations, and local activities, the app generates complete itineraries tailored to each family’s unique interests and needs. Recommendations are based on customizable parameters such as destination, trip duration, children’s ages, and personal preferences. FamilyTrip Planner not only simplifies the travel planning process but also offers a comprehensive, personalized experience for families.
Goals
This project aims to:
- Create a user-friendly platform that assists families in planning complete trips, from flight and accommodation options to recommended family-friendly activities.
- Provide intelligent, personalized travel itineraries using artificial intelligence to enhance travel enjoyment and minimize time and cost.
- Serve as an educational project for exploring Go programming and artificial intelligence, with the goal of building proficiency in both.
Resources
To develop FamilyTrip Planner, the project will leverage:
- APIs such as Skyscanner, Google Places, and TripAdvisor to source real-time information on flights, accommodations, and activities.
- The Go programming language to manage data integration, API connections, and backend development.
- Basic machine learning libraries to implement AI-driven itinerary suggestions tailored to family needs and preferences.
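Purely as an illustration of how these pieces could fit together in Go, here is a hypothetical model of the trip parameters and itinerary; all names are invented, not the project's actual code:

```go
// Hypothetical data model and planning entry point for FamilyTrip Planner.
package planner

import (
	"context"
	"time"
)

// TripRequest captures the customizable parameters described above.
type TripRequest struct {
	Destination  string
	Start, End   time.Time
	ChildrenAges []int
	Preferences  []string // e.g. "museums", "beaches"
}

// ItineraryItem is one scheduled flight, stay, or activity.
type ItineraryItem struct {
	Day         int
	Title       string
	Category    string // "flight", "accommodation", "activity"
	Description string
}

// FlightAPI is a placeholder for an external provider such as Skyscanner.
type FlightAPI interface {
	Search(ctx context.Context, dest string, start, end time.Time) ([]ItineraryItem, error)
}

// BuildItinerary merges provider results into one plan.
func BuildItinerary(ctx context.Context, req TripRequest, flights FlightAPI) ([]ItineraryItem, error) {
	items, err := flights.Search(ctx, req.Destination, req.Start, req.End)
	if err != nil {
		return nil, err
	}
	// TODO (assumption): add accommodation and activity lookups, then rank
	// by preferences and children's ages (the AI-driven part in the goals).
	return items, nil
}
```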
Jenny Static Site Generator by adam.pickering
Description
For my personal site I have been using Hugo. It works, but I am not satisfied: every time I want to make a change (which is infrequent) I have to read through the documentation again to understand how Hugo works. I don't find the documentation easy to use, and the repository structure that Hugo requires is unintuitive and more complex than what I need. So I have decided to write my own simple static site generator in Go. It is named Jenny, after my wife.
Goals
- Pages can be written in markdown (which is automatically converted to HTML), but other file types are also allowed
- Easy to understand and use
- Intuitive, simple design
- Clear documentation
- Hot reloading
- Binaries provided for download
- Future maintenance is easy
- Automated releases
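For illustration, the core "markdown file in, HTML file out" step could look roughly like this in Go; the goldmark library is assumed here as one common choice, not necessarily what Jenny uses:

```go
// Convert a single .md file into an .html file under outDir.
package main

import (
	"bytes"
	"os"
	"path/filepath"
	"strings"

	"github.com/yuin/goldmark"
)

func renderPage(srcPath, outDir string) error {
	src, err := os.ReadFile(srcPath)
	if err != nil {
		return err
	}
	var buf bytes.Buffer
	// goldmark.Convert renders CommonMark markdown to HTML.
	if err := goldmark.Convert(src, &buf); err != nil {
		return err
	}
	name := strings.TrimSuffix(filepath.Base(srcPath), ".md") + ".html"
	return os.WriteFile(filepath.Join(outDir, name), buf.Bytes(), 0o644)
}
```

Non-markdown files would simply be copied through unchanged, which keeps the "other file types are also allowed" goal trivial.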
Resources
https://github.com/adamkpickering/jenny
A CLI for Harvester by mohamed.belgaied
Harvester does not officially come with a CLI tool; the user is supposed to interact with Harvester mostly through the UI. Though it is theoretically possible to use kubectl to interact with Harvester, the manipulation of KubeVirt YAML objects is absolutely not user friendly. Inspired by tools like Multipass from Canonical that easily and rapidly create one or multiple VMs, I began the development of Harvester CLI. Currently it works, but Harvester CLI needs some love to be up to date with Harvester v1.0.2, and it needs some bug fixes and improvements as well.
Project Description
Harvester CLI is a command line interface tool written in Go, designed to simplify interfacing with a Harvester cluster as a user. It is especially useful for testing purposes as you can easily and rapidly create VMs in Harvester by providing a simple command such as:
harvester vm create my-vm --count 5
to create 5 VMs named my-vm-01 to my-vm-05.
Harvester CLI is functional but needs a number of improvements: bringing functionality up to date with Harvester v1.0.2 (some minor issues right now), changing the default behaviour to create an openSUSE VM instead of an Ubuntu VM, fixing some bugs, etc.
GitHub repo for Harvester CLI: https://github.com/belgaied2/harvester-cli
Done in previous Hackweeks
- Create a GitHub Actions pipeline to automatically integrate Harvester CLI into Homebrew repositories: DONE
- Automatically package Harvester CLI for openSUSE / Red Hat RPMs or DEBs: DONE
Goal for this Hackweek
The goal for this Hackweek is to bring Harvester CLI up to speed with the latest Harvester versions (v1.3.X and v1.4.X), improve the code quality, and implement some simple features and bug fixes.
Some nice additions might be:
- Improve handling of namespaced objects
- Add features, such as network management or Load Balancer creation?
- Add more unit tests and, why not, e2e tests
- Improve CI
- Improve the overall code quality
- Test the program and create issues for it
Issue list is here: https://github.com/belgaied2/harvester-cli/issues
Resources
The project is written in Go and uses client-go, the Kubernetes Go client library, to communicate with the Harvester API (which is in fact Kubernetes).
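For illustration, since the Harvester API is Kubernetes, a CLI like this can list KubeVirt VirtualMachine objects with client-go's dynamic client roughly as follows; the kubeconfig path and namespace are placeholders, not Harvester CLI's actual code:

```go
// List KubeVirt VirtualMachine objects in a Harvester cluster via client-go.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path: the CLI would load the Harvester cluster kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/harvester.kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	vmGVR := schema.GroupVersionResource{Group: "kubevirt.io", Version: "v1", Resource: "virtualmachines"}
	vms, err := client.Resource(vmGVR).Namespace("default").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, vm := range vms.Items {
		fmt.Println(vm.GetName())
	}
}
```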
Welcome contributions include:
- Testing it and creating issues
- Documentation
- Go code improvement
What you might learn
Harvester CLI might be interesting to you if you want to learn more about:
- GitHub Actions
- Harvester as a SUSE Product
- Go programming language
- Kubernetes API
Install Uyuni on Kubernetes in cloud-native way by cbosdonnat
Description
For now, installing Uyuni on Kubernetes requires running mgradm on a cluster node... which is not what users would do in the Kubernetes world. The idea is to implement an installation based only on Helm charts and probably an operator.
Goals
Install Uyuni from Rancher UI.
Resources
- mgradm code: https://github.com/uyuni-project/uyuni-tools
- Uyuni operator: https://github.com/cbosdo/uyuni-operator
Automate PR process by idplscalabrini
Description
This project aims to streamline and enhance the PR review process by adding automation that identifies issues such as missing comments or sensitive information (e.g. credentials) in PRs. By leveraging GitHub Actions and Go hooks we can focus more on high-level reviews.
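As a toy example of the kind of check such a hook could run, here is a small Go sketch that flags strings looking like hard-coded credentials in the files passed to it; the pattern and invocation are illustrative assumptions, not an existing tool:

```go
// Scan the given files for strings that look like hard-coded credentials
// and exit non-zero if any are found, so a CI job or hook can fail the PR.
package main

import (
	"fmt"
	"os"
	"regexp"
)

// Illustrative pattern: catches things like password = "hunter2" or api_key: "...".
var secretPattern = regexp.MustCompile(`(?i)(api[_-]?key|password|secret)\s*[:=]\s*['"][^'"]+['"]`)

func main() {
	exitCode := 0
	for _, path := range os.Args[1:] { // e.g. the files changed in the PR
		data, err := os.ReadFile(path)
		if err != nil {
			continue
		}
		if loc := secretPattern.FindIndex(data); loc != nil {
			fmt.Printf("%s: possible hard-coded credential near byte %d\n", path, loc[0])
			exitCode = 1
		}
	}
	os.Exit(exitCode)
}
```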
Goals
- Automate lints and code validations in GitHub Actions
- Automate code validation on hook
- Implement a bot to pre-review the PRs
Resources
Golang hooks and Github actions
Dartboard TUI by IValentin
Description
Our scalability and performance testing Swiss-army-knife tool, Dartboard, is a major WIP, so why not add more scope creep? Dartboard is a CLI tool which enables users to:
- Define a "Dart" config file as YAML which defines the various components to be created/setup when Dartboard runs its commands
- Spin up infrastructure utilizing opentofu/terraform providers
- Setup K3s or RKE2 clusters on the newly created infrastructure
- Deploy Rancher (with or without a downstream cluster) and rancher-monitoring (Grafana + Prometheus)
- Create resources in-bulk within the newly created Rancher cluster (ConfigMaps, Secrets, Users, Roles, etc.)
- Run various performance and scalability tests via k6
- Export/Import various tracked metrics (WIP)
Given all these features (and the features to come), it can be difficult to onboard and transfer knowledge of the tool. With a TUI, Dartboard's usage complexity can be greatly reduced!
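As an illustration of the "Dart config as YAML" idea, a hypothetical Go mapping could look like the sketch below; the field names are invented, not Dartboard's actual schema:

```go
// Load a hypothetical Dart YAML config into Go structs.
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// Dart describes the components created when Dartboard runs its commands.
type Dart struct {
	ClusterDistro string `yaml:"cluster_distro"` // e.g. "k3s" or "rke2"
	NodeCount     int    `yaml:"node_count"`
	Rancher       struct {
		Version    string `yaml:"version"`
		Monitoring bool   `yaml:"monitoring"` // rancher-monitoring (Grafana + Prometheus)
	} `yaml:"rancher"`
}

func main() {
	data, err := os.ReadFile("dart.yaml")
	if err != nil {
		panic(err)
	}
	var d Dart
	if err := yaml.Unmarshal(data, &d); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", d)
}
```

A TUI workflow for generating a Dart file would essentially be a guided editor over a structure like this.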
Goals
- Create a TUI for Dartboard's "subcommands"
- Gain more familiarity with Dartboard and create a more user-friendly interface to enable others to use it
- Stretch: Create a TUI workflow for generating a Dart file
Resources
https://github.com/charmbracelet/bubbletea
Contribute to terraform-provider-libvirt by pinvernizzi
Description
The SUSE Manager (SUMA) teams' main tool for infrastructure automation, Sumaform, largely relies on terraform-provider-libvirt. That provider is also widely used by other teams, both inside and outside SUSE.
It would be good to help the maintainers of this project and give back to the community around it, after all the amazing work that has been already done.
If you're interested in any of infrastructure automation, Terraform, virtualization, tooling development, or Go (...), it is also a good chance to learn a bit about them all by getting your hands on an interesting, complex, real-use-case project.
Goals
- Get more familiar with Terraform provider development and libvirt bindings in Go
- Solve some issues and/or implement some features
- Get in touch with the community around the project
Resources
- CONTRIBUTING readme
- Go libvirt library in use by the project
- Terraform plugin development
- "Good first issue" list
Explore simple and distro-independent declarative Linux starting on Tumbleweed or Arch Linux by janvhs
Description
Inspired by mkosi, the idea is to experiment with a declarative approach to defining Linux systems. A lot of tools already make it possible to manage a system's infrastructure using description files rather than manual invocation. Examples of this are systemd presets for managing enabled services, or the /etc/fstab file for describing how partitions should be mounted.
If we took inspiration from openSUSE MicroOS and its handling of the /etc/ directory, we could theoretically use systemd-sysupdate to swap out the /usr/ partition and create an A/B boot scheme, where the /usr/ partition is always freshly built according to a central system description. In the best case it would be possible to still utilise snapshots, but an A/B root scheme would be sufficient for the beginning. This way you could get the benefit of NixOS's declarative system definition, but still use the distro's package repositories and not have to deal with the overhead of Flakes or the Nix language.
Goals
- A simple and understandable system
- Check fitness of mkosi or write a simple, extensible image builder tool for it
- Create a declarative system specification
- Create a system with a swappable /usr/ partition
- Create an A/B root scheme
- Swap to the new system without reboot (kexec?)
Resources
- Ideas that have been floating around in my head for a while
- https://0pointer.net/blog/fitting-everything-together.html
- GNOME OS
- MicroOS
- systemd mkosi
- Vanilla OS
Testing and adding GNU/Linux distributions on Uyuni by juliogonzalezgil
Join the Gitter channel! https://gitter.im/uyuni-project/hackweek
Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines. It also manages configuration, can run audits, build container images, monitor, and much more!
Currently there are a few distributions that are completely untested on Uyuni or SUSE Manager (AFAIK), or that have not been tested for a long time. It could be interesting to find out how hard it would be to work with them and, if possible, fix whatever is broken.
For newcomers, the easiest distributions are those based on DEB or RPM packages. Distributions with other package formats are doable, but will require adapting the Python and Java code to be able to sync and analyze such packages (and if salt does not support those packages, it will need changes as well). So if you want a distribution with other packages, make sure you are comfortable handling such changes.
No developer experience? No worries! We had non-developer contributors in the past, and we are ready to help as long as you are willing to learn. If you don't want to code at all, you can also help us prepare the documentation after someone else has the initial code ready, or you could also help with testing :-)
The idea is to test Salt and Salt-SSH clients, but NOT traditional clients, which are deprecated.
To consider that a distribution has basic support, we should cover at least (points 3-6 are to be tested for both salt minions and salt ssh minions):
- Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
- Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
- Package management (install, remove, update...)
- Patching
- Applying any basic salt state (including a formula)
- Salt remote commands
- Bonus point: Java part for product identification, and monitoring enablement
- Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
- Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
- Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)
If something is breaking, we can try to fix it, but the main idea is to research how well supported it is right now. Beyond that, it's up to each project member how much to hack :-)
- If you don't have knowledge about some of the steps: ask the team
- If you still don't know what to do: switch to another distribution and keep testing.
This card is for EVERYONE, not just developers. Seriously! We had people from other teams who were not developers helping, and they added support for Debian and new SUSE Linux Enterprise and openSUSE Leap versions :-)
Pending
FUSS
FUSS is a complete GNU/Linux solution (server, client and desktop/standalone) based on Debian for managing an educational network.
https://fuss.bz.it/
Seems to be a Debian 12 derivative, so adding it could be quite easy.
- [W] Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
- [W] Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator) --> Working for all 3 options (salt minion UI, salt minion bootstrap script and salt-ssh minion from the UI).
- [W] Package management (install, remove, update...) --> Installing a new package works, needs to test the rest.
- [I] Patching (if patch information is available, could require writing some code to parse it, but IIRC we have support for Ubuntu already). No patches detected. Do we support patches for Debian at all?
- [W] Applying any basic salt state (including a formula)
- [W] Salt remote commands
- [ ] Bonus point: Java part for product identification, and monitoring enablement
Switch software-o-o to parse repomd data by hennevogel
Currently software.opensuse.org search is using the OBS binary search for everything, even for packages inside the openSUSE distributions. Let's switch this to use repomd data from download.opensuse.org
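For illustration (in Go rather than software-o-o's own codebase), reading a repository's repomd.xml to find the primary package metadata, which is the kind of data the search would consume instead of the OBS binary search, could look roughly like this:

```go
// Fetch repomd.xml from a repomd-style repository and print the location of
// the primary package metadata file. The base URL is only an example.
package main

import (
	"encoding/xml"
	"fmt"
	"net/http"
)

type repomd struct {
	Data []struct {
		Type     string `xml:"type,attr"`
		Location struct {
			Href string `xml:"href,attr"`
		} `xml:"location"`
	} `xml:"data"`
}

func main() {
	base := "https://download.opensuse.org/tumbleweed/repo/oss/"
	resp, err := http.Get(base + "repodata/repomd.xml")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var md repomd
	if err := xml.NewDecoder(resp.Body).Decode(&md); err != nil {
		panic(err)
	}
	for _, d := range md.Data {
		if d.Type == "primary" {
			fmt.Println("primary package metadata:", base+d.Location.Href)
		}
	}
}
```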