Motivation: You know a particular function name and would like to know which package(s) it comes from.

Approximation: It is possible to search code on GitHub and hope that the upstream repo is not too far from what we ship in our distro.

We already have all packages stored (and versioned) in OBS; however, OBS cannot search the package contents. Debian Code Search exists and fulfills the primary motivation, but only for Debian.

Goal: set up an internally available Code Search instance for the SLE codebase and hook it up so that it stays up to date with IBS updates.

**Stretch goal:** a second instance for openSUSE (assuming the first one is easier to coordinate within the Hackweek timeframe).

**Stretch goal:** make it maintenance-free (so it survives even without much care).
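
To make the goal above a bit more concrete, here is a minimal sketch in Go (matching the project keywords) of one possible "stay up to date" hook: poll the OBS/IBS source API and flag any package whose srcmd5 differs from the last indexed state. The endpoints follow the public OBS source API, but the project name, credential handling and the hand-off to the actual code-search indexer are placeholder assumptions, not an existing tool.

```go
// Hypothetical sketch: poll the OBS/IBS source API for package revisions and
// queue changed packages for re-indexing. Project name, credentials and the
// indexer hand-off are placeholders.
package main

import (
	"encoding/xml"
	"fmt"
	"log"
	"net/http"
	"os"
)

type dirListing struct {
	Entries []struct {
		Name string `xml:"name,attr"`
	} `xml:"entry"`
}

type pkgDir struct {
	SrcMD5 string `xml:"srcmd5,attr"`
}

func obsGET(url string, out interface{}) error {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return err
	}
	req.SetBasicAuth(os.Getenv("OBS_USER"), os.Getenv("OBS_PASS"))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	return xml.NewDecoder(resp.Body).Decode(out)
}

func main() {
	api := "https://api.opensuse.org" // or the internal IBS API
	project := "openSUSE:Factory"     // placeholder project

	var pkgs dirListing
	if err := obsGET(fmt.Sprintf("%s/source/%s", api, project), &pkgs); err != nil {
		log.Fatal(err)
	}

	// Previously indexed srcmd5 per package; a real setup would persist this.
	seen := map[string]string{}
	for _, e := range pkgs.Entries {
		var d pkgDir
		if err := obsGET(fmt.Sprintf("%s/source/%s/%s", api, project, e.Name), &d); err != nil {
			log.Printf("skip %s: %v", e.Name, err)
			continue
		}
		if seen[e.Name] != d.SrcMD5 {
			// Hypothetical hook: fetch and unpack the sources, then hand them
			// to the code-search indexer for this package.
			fmt.Printf("re-index %s (srcmd5 %s)\n", e.Name, d.SrcMD5)
			seen[e.Name] = d.SrcMD5
		}
	}
}
```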

Related

SLE kernel source browser

Looking for hackers with the skills:

indexer infra obs go

This project is part of:

Hack Week 18

Activity

  • almost 6 years ago: barendartchuk liked this project.
  • almost 6 years ago: ematsumiya liked this project.
  • almost 6 years ago: jbrielmaier liked this project.
  • almost 6 years ago: mkoutny added keyword "go" to this project.
  • almost 6 years ago: mkoutny added keyword "obs" to this project.
  • almost 6 years ago: mkoutny added keyword "indexer" to this project.
  • almost 6 years ago: mkoutny added keyword "infra" to this project.
  • almost 6 years ago: mkoutny originated this project.

  • Comments

    • mkraus, over 5 years ago:

      FYI, there used to be a code-search.suse.de (source code). From my understanding it had some issues though, nobody was working on it anymore, and so it finally got shut down this year. It still might be useful as a basis though, or at least Victor might have some insights on pitfalls to avoid this time.

    • mkoutny, about 4 years ago:

      TIL about https://code.opensuse.org/

      • git export of OBS packages, i.e. it tracks metafiles
      • it seems not all TW packages are there (e.g. missing systemd)
      • experimental project from bwiedemann

    Similar Projects

    Fix RSpec tests in order to replace the ruby-ldap rubygem in OBS by enavarro_suse

    Description

    "LDAP mode is not official supported by OBS!". See: config/options.yml.example#L100-L102

    However, there is an RSpec file which tests LDAP mode in OBS. These tests use the ruby-ldap rubygem, mocking the results returned by an LDAP server.

    The ruby-ldap rubygem seems to be no longer maintained, and it also prevents updating to a more recent Ruby version. A good alternative is to replace it with the net-ldap rubygem.

    Before replacing the ruby-ldap rubygem, we should modify the tests so they don't mock the responses of an LDAP server but instead run against a real LDAP server.

    Goals

    Goals of this project:

    • Modify the RSpec tests and run them against a real LDAP server
    • Replace the ruby-ldap rubygem with the net-ldap rubygem

    Achieving the above mentioned goals will:

    • Permit upgrading OBS from Ruby 3.1 to Ruby 3.2
    • Take a step towards officially supporting LDAP in OBS.

    Resources


    Git CI to automate the creation of product definition by gyribeiro

    Description

    Automate the creation of product definition

    Goals

    Create a Git CI that will:

    • be triggered automatically whenever the package list changes (a commit is made)
    • run the tool responsible for updating the product definition based on the changes in the package list
    • test the updated product definition in OBS
    • submit a pull request updating the product definition in the repository

    NOTE: this Git CI may also be triggered manually

    Resources

    • https://docs.gitlab.com/ee/ci/
    • https://openbuildservice.org/2021/05/31/scm-integration/
    • https://github.com/openSUSE/openSUSE-release-tools


    Explore the integration between OBS and GitHub by pdostal

    Project Description

    The goals:

    1. When a GitHub pull request is created or modified, the OBS project will be forked and the build results reported back to GitHub.
    2. When a new version of the GitHub project is published, OBS will re-download the sources and rebuild the project.
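
    As a hedged illustration of goal 2, the sketch below receives a GitHub release webhook and asks OBS to re-run the source services of a package (POST /source/<project>/<package>?cmd=runservice), which lets services such as tar_scm re-download the sources and trigger a rebuild. The OBS project/package mapping, credentials and port are placeholder assumptions, not part of the project.

```go
// Hypothetical sketch for goal 2: a tiny webhook receiver that, on a GitHub
// "release" event, asks OBS to re-run the source services of a package.
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func triggerRunService(project, pkg string) error {
	url := fmt.Sprintf("https://api.opensuse.org/source/%s/%s?cmd=runservice", project, pkg)
	req, err := http.NewRequest("POST", url, nil)
	if err != nil {
		return err
	}
	req.SetBasicAuth(os.Getenv("OBS_USER"), os.Getenv("OBS_PASS"))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("OBS answered %s", resp.Status)
	}
	return nil
}

func main() {
	http.HandleFunc("/webhook", func(w http.ResponseWriter, r *http.Request) {
		// GitHub sends the event name in this header; only releases matter here.
		if r.Header.Get("X-GitHub-Event") != "release" {
			w.WriteHeader(http.StatusNoContent)
			return
		}
		// Placeholder mapping from the GitHub repository to an OBS project/package.
		if err := triggerRunService("home:someuser:github", "someproject"); err != nil {
			log.Printf("runservice failed: %v", err)
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		w.WriteHeader(http.StatusAccepted)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```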

    Goal for this Hackweek

    Do as much as possible, blog about it, and maybe use it in another existing project.

    Resources


    Testing and adding GNU/Linux distributions on Uyuni by juliogonzalezgil

    Join the Gitter channel! https://gitter.im/uyuni-project/hackweek

    Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines. It also manages configuration, can run audits, build image containers, monitor and much more!

    Currently there are a few distributions that are completely untested on Uyuni or SUSE Manager (AFAIK), or that have not been tested for a long time. It would be interesting to find out how hard it is to work with them and, if possible, fix whatever is broken.

    For newcomers, the easiest distributions are those based on DEB or RPM packages. Distributions with other package formats are doable, but will require adapting the Python and Java code to be able to sync and analyze such packages (and if salt does not support those packages, it will need changes as well). So if you want a distribution with other packages, make sure you are comfortable handling such changes.

    No developer experience? No worries! We had non-developer contributors in the past, and we are ready to help as long as you are willing to learn. If you don't want to code at all, you can also help us prepare the documentation after someone else has the initial code ready, or you could also help with testing :-)

    The idea is testing Salt and Salt-ssh clients, but NOT traditional clients, which are deprecated.

    For a distribution to be considered to have basic support, we should cover at least the following (points 3-6 are to be tested for both salt minions and salt-ssh minions):

    1. Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
    2. Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
    3. Package management (install, remove, update...)
    4. Patching
    5. Applying any basic salt state (including a formula)
    6. Salt remote commands
    7. Bonus point: Java part for product identification, and monitoring enablement
    8. Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
    9. Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
    10. Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)

    If something is breaking: we can try to fix it, but the main idea is to research how well supported it is right now. Beyond that it's up to each project member how much to hack :-)

    • If you don't have knowledge about some of the steps: ask the team
    • If you still don't know what to do: switch to another distribution and keep testing.

    This card is for EVERYONE, not just developers. Seriously! We had people from other teams helping who were not developers, and they added support for Debian and new SUSE Linux Enterprise and openSUSE Leap versions :-)

    Pending

    FUSS

    FUSS is a complete GNU/Linux solution (server, client and desktop/standalone) based on Debian for managing an educational network.

    https://fuss.bz.it/

    Seems to be a Debian 12 derivative, so adding it could be quite easy.

    • [W] Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
    • [W] Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding OS to the bootstrap repository creator) --> Working for all 3 options (salt minion UI, salt minion bootstrap script and salt-ssh minion from the UI).
    • [W] Package management (install, remove, update...) --> Installing a new package works; the rest still needs testing.
    • [I] Patching (if patch information is available, could require writing some code to parse it, but IIRC we have support for Ubuntu already). No patches detected. Do we support patches for Debian at all?
    • [W] Applying any basic salt state (including a formula)
    • [W] Salt remote commands
    • [ ] Bonus point: Java part for product identification, and monitoring enablement


    Learn about OBS and contribute to `kustomize` and `k9s` packages to add ARM arch by dpock

    Description

    There are already k9s and kustomize packages that exist for openSUSE today. These could be used as the source for these binaries in our Rancher projects. By using them we would benefit from CVE fixes included in our distribution of the packages that are not included upstream. However, they do not provide ARM package builds, which are required.

    Goals

    • [ ] Update the kustomize package in OBS to use the newest version and send change request

    Resources

    • k9s: https://build.opensuse.org/package/show/openSUSE:Factory/k9s
    • kustomize: https://build.opensuse.org/package/show/openSUSE:Factory/kustomize
    • Learning Docs: https://confluence.suse.com/display/packaging/Training%2C+Talks+and+Videos


    Cluster API Provider for Harvester by rcase

    Project Description

    The Cluster API "infrastructure provider" for Harvester, also named CAPHV, makes it possible to use Harvester with Cluster API. This enables people and organisations to create Kubernetes clusters running on VMs created by Harvester using a declarative spec.

    The project has been bootstrapped in HackWeek 23, and its code is available here.

    Work done in HackWeek 2023

    • Have an early working version of the provider available on Rancher Sandbox: DONE
    • Demonstrated the created cluster can be imported using Rancher Turtles: DONE
    • Stretch goal - demonstrate using the new provider with CAPRKE2: DONE and the templates are available on the repo

    Goals for HackWeek 2024

    • Add support for ClusterClass
    • Add e2e testing
    • Add more Unit Tests
    • Improve Status Conditions to reflect current state of Infrastructure
    • Improve CI (some bugs for release creation)
    • Testing with newer Harvester versions (v1.3.X and v1.4.X)
    • Due to the length and complexity of the templates, maybe package some of them as Helm Charts.
    • Other improvement suggestions are welcome!

    DONE in HackWeek 24:

    Thanks to @isim and Dominic Giebert for their contributions!

    Resources

    Looking for help from anyone interested in Cluster API (CAPI) or who wants to learn more about Harvester.

    This will be an infrastructure provider for Cluster API. Some background reading for the CAPI aspect:


    ddflare: (Dynamic)DNS management via Cloudflare API in Kubernetes by fgiudici

    Description

    ddflare is a project started a couple of weeks ago to provide DDNS management using the v4 Cloudflare APIs: Cloudflare offers management via APIs and access tokens, so it is possible to register a domain and implement a DynDNS client without any external service other than their API.

    Since ddflare allows setting any IP for any domain name, one could manage multiple A and ALIAS domain records. Wouldn't it be cool to allow full DNS control from the project and integrate it with your Kubernetes cluster?
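
    As a minimal sketch of the kind of call described above, the snippet below updates an existing A record through the Cloudflare v4 API using an access token. The zone ID, record ID, hostname and token are placeholders, and ddflare itself may structure this differently.

```go
// Update an existing A record via the Cloudflare v4 API using a bearer token.
// Zone ID, record ID, hostname and token are placeholders.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
)

type dnsRecord struct {
	Type    string `json:"type"`
	Name    string `json:"name"`
	Content string `json:"content"`
	TTL     int    `json:"ttl"`
	Proxied bool   `json:"proxied"`
}

func updateRecord(zoneID, recordID string, rec dnsRecord) error {
	body, err := json.Marshal(rec)
	if err != nil {
		return err
	}
	url := fmt.Sprintf("https://api.cloudflare.com/client/v4/zones/%s/dns_records/%s", zoneID, recordID)
	req, err := http.NewRequest("PUT", url, bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("CLOUDFLARE_API_TOKEN"))
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("Cloudflare answered %s", resp.Status)
	}
	return nil
}

func main() {
	// Point host.example.com at the current public IP (placeholder values).
	err := updateRecord("ZONE_ID", "RECORD_ID", dnsRecord{
		Type: "A", Name: "host.example.com", Content: "203.0.113.7", TTL: 120,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("record updated")
}
```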

    Goals

    Main goals are:

    1. add containerized image for ddflare
    2. extend ddflare to be able to add and remove DNS records (and not just update existing ones)
    3. add documentation, covering also a sample pod deployment for Kubernetes
    4. write a ddflare Kubernetes operator to enable domain management via Kubernetes resources (using kubebuilder)

    Available tasks and improvements are tracked on the ddflare GitHub.

    Resources

    • https://github.com/fgiudici/ddflare
    • https://developers.cloudflare.com/api/
    • https://book.kubebuilder.io


    A CLI for Harvester by mohamed.belgaied

    Harvester does not officially come with a CLI tool; the user is supposed to interact with Harvester mostly through the UI. Though it is theoretically possible to use kubectl to interact with Harvester, the manipulation of KubeVirt YAML objects is absolutely not user friendly. Inspired by tools like multipass from Canonical that easily and rapidly create one or multiple VMs, I began the development of Harvester CLI. Currently it works, but Harvester CLI needs some love to be up-to-date with Harvester v1.0.2 and needs some bug fixes and improvements as well.

    Project Description

    Harvester CLI is a command line interface tool written in Go, designed to simplify interfacing with a Harvester cluster as a user. It is especially useful for testing purposes as you can easily and rapidly create VMs in Harvester by providing a simple command such as: harvester vm create my-vm --count 5 to create 5 VMs named my-vm-01 to my-vm-05.

    [asciicast demo of Harvester CLI]

    Harvester CLI is functional but needs a number of improvements: up-to-date functionality with Harvester v1.0.2 (some minor issues right now), modifying the default behaviour to create an openSUSE VM instead of an Ubuntu VM, solving some bugs, etc.

    Github Repo for Harvester CLI: https://github.com/belgaied2/harvester-cli

    Done in previous Hackweeks

    • Create a GitHub Actions pipeline to automatically integrate Harvester CLI into Homebrew repositories: DONE
    • Automatically package Harvester CLI as openSUSE / Red Hat RPMs or DEBs: DONE

    Goal for this Hackweek

    The goal for this Hackweek is to bring Harvester CLI up-to-speed with latest Harvester versions (v1.3.X and v1.4.X), and improve the code quality as well as implement some simple features and bug fixes.

    Some nice additions might be:

    • Improve handling of namespaced objects
    • Add features, such as network management or Load Balancer creation?
    • Add more unit tests and, why not, e2e tests
    • Improve CI
    • Improve the overall code quality
    • Test the program and create issues for it

    Issue list is here: https://github.com/belgaied2/harvester-cli/issues

    Resources

    The project is written in Go and uses client-go, the Kubernetes Go client library, to communicate with the Harvester API (which is in fact Kubernetes); a minimal client sketch follows the list below. Welcome contributions are:

    • Testing it and creating issues
    • Documentation
    • Go code improvement
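
    To give an idea of what talking to the Harvester API looks like, here is a minimal sketch: Harvester VMs are backed by KubeVirt VirtualMachine objects, so a plain Kubernetes dynamic client built from a Harvester kubeconfig can list them. The kubeconfig path and namespace are assumptions; Harvester CLI has its own, more complete client code.

```go
// List Harvester VMs (KubeVirt VirtualMachine objects) with a dynamic client.
// Kubeconfig path and namespace are placeholders.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the (Harvester) kubeconfig, ~/.kube/config by default.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// KubeVirt VirtualMachine resources back Harvester VMs.
	gvr := schema.GroupVersionResource{Group: "kubevirt.io", Version: "v1", Resource: "virtualmachines"}
	vms, err := client.Resource(gvr).Namespace("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, vm := range vms.Items {
		fmt.Println(vm.GetName())
	}
}
```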

    What you might learn

    Harvester CLI might be interesting to you if you want to learn more about:

    • GitHub Actions
    • Harvester as a SUSE Product
    • Go programming language
    • Kubernetes API


    Cluster API Add-on Provider for Kubewarden by csalas

    Description

    Can we integrate Kubewarden with Cluster API provisioning?

    Cluster API is a Kubernetes project focused on providing declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters. TL;DR: CAPI lets you define Kubernetes clusters in plain YAML, and CAPI providers (infrastructure, control plane/bootstrap, etc.) manage provisioning and configuration for you.

    What if we could create an add-on provider that automatically installs Kubewarden and deploys Policy Servers to CAPI clusters?

    Goals

    • As a user I'd like to set a cluster (or list of clusters) and have the provider install Kubewarden for me.
    • As a user I'd like to set what policies must be enforced for a cluster (or list of clusters).

    Resources

    • Cluster API: https://cluster-api.sigs.k8s.io/
    • Kubewarden: https://docs.kubewarden.io/


    Contribute to terraform-provider-libvirt by pinvernizzi

    Description

    The SUSE Manager (SUMA) teams' main tool for infrastructure automation, Sumaform, largely relies on terraform-provider-libvirt. That provider is also widely used by other teams, both inside and outside SUSE.

    It would be good to help the maintainers of this project and give back to the community around it, after all the amazing work that has been already done.

    If you're interested in any of infrastructure automation, Terraform, virtualization, tooling development, or Go (...), it is also a good chance to learn a bit about them all by putting your hands on an interesting, complex, real-use-case project.

    Goals

    • Get more familiar with Terraform provider development and libvirt bindings in Go
    • Solve some issues and/or implement some features
    • Get in touch with the community around the project

    Resources