I would like to change the way "quilt setup" is implemented.

At the moment, we call rpmbuild and intercept the calls to tar and patch in order to record the location where archives are extracted and the order and options of the patches which apply to them. Then we replay that record to create our own quilt-compatible source tree.
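The record-and-replay mechanism can be pictured roughly like this (a simplified illustration only, not quilt's actual code; the wrapper directory, the `RECORD_FILE` variable and the record format are all made up for the sketch):

```shell
# Sketch of the record-and-replay approach: a wrapper for "patch" is put
# first in PATH so that rpmbuild's %prep calls go through it. The wrapper
# records where and how each patch is applied, then hands over to the
# real tool. (Illustrative only; quilt's real wrappers differ.)
wrapdir=$(mktemp -d)
RECORD_FILE=$(mktemp)
export RECORD_FILE

cat > "$wrapdir/patch" <<'EOF'
#!/bin/sh
# Record the working directory and the options for later replay.
echo "patch $PWD $*" >> "$RECORD_FILE"
exec /usr/bin/patch "$@"
EOF
chmod +x "$wrapdir/patch"

# rpmbuild would then run %prep with the wrappers in front of PATH:
# PATH="$wrapdir:$PATH" rpmbuild -bp some-package.spec
# ...and quilt later replays $RECORD_FILE to build its own source tree.
```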

While this works well enough in simple cases, there are two drawbacks:

  • We duplicate archive extraction and patch application, which incurs a performance penalty.
  • We miss extra commands from the spec file, so the source tree we produce is not exactly what is specified in the spec file.

What I would like to do is keep intercepting calls to patch, but instead of recording them for later use, I'd like to replace them on the fly with "quilt import" and "quilt push". That way rpmbuild will create a quilt-compatible source tree, which we can simply copy over to the final location.
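The on-the-fly variant could look something like this (again only a sketch with invented names; in particular, `QUILT_PATCH_FILE` is a hypothetical variable the driver would need to export so the wrapper knows which patch is being applied):

```shell
# On-the-fly variant: instead of recording the call, the "patch" wrapper
# registers the patch with quilt right away and lets quilt apply it, so
# rpmbuild itself produces a quilt-compatible tree. (Sketch only; the
# wrapper is written here but never executed.)
wrapdir=$(mktemp -d)

cat > "$wrapdir/patch" <<'EOF'
#!/bin/sh
# QUILT_PATCH_FILE would be exported by the setup driver and point at
# the patch file currently being fed to us by rpmbuild.
quilt import "$QUILT_PATCH_FILE"  # add it to the series
quilt push                        # apply it, with state tracked in .pc/
EOF
chmod +x "$wrapdir/patch"

# PATH="$wrapdir:$PATH" rpmbuild -bp some-package.spec
```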

There will certainly be a few issues to solve along the way, but I can't foresee any blocker. I hope I'm not overlooking a big, obvious and unsolvable problem. If we can get there, then I expect "quilt setup" to be both better and faster.

Looking for hackers with the skills:

bash rpmbuild

This project is part of:

Hack Week 11

Activity

  • about 10 years ago: jdelvare removed keyword perl from this project.
  • about 10 years ago: rwill liked this project.
  • about 10 years ago: cxiong liked this project.
  • about 10 years ago: sleep_walker liked this project.
  • about 10 years ago: matejcik joined this project.
  • about 10 years ago: matejcik liked this project.
  • about 10 years ago: herbert0890 liked this project.
  • about 10 years ago: puzel liked this project.
  • about 10 years ago: sndirsch liked this project.
  • about 10 years ago: jdelvare added keyword "rpmbuild" to this project.
  • about 10 years ago: jdelvare liked this project.
  • about 10 years ago: joeyli liked this project.
  • about 10 years ago: bmwiedemann liked this project.
  • about 10 years ago: jdelvare started this project.
  • about 10 years ago: jdelvare added keyword "bash" to this project.
  • about 10 years ago: jdelvare added keyword "perl" to this project.
  • about 10 years ago: jdelvare originated this project.

  • Comments

    • matejcik
      about 10 years ago by matejcik | Reply

      I was thinking of implementing quilt-like functionality on top of git. This might help my goals as well, so for now I'm joining your project ;)

      • jdelvare
        about 10 years ago by jdelvare | Reply

        Jan, at least two quilt-like git-based implementations exist, named stgit and guilt. You should probably give them a try if you are interested in this topic.

    • rwill
      about 10 years ago by rwill | Reply

      If you need a "non-trivial" test object, you may want to consider 'grub2', which usually fails to 'setup' due to '%if ! 0%{?efi}' or some such... (c:

      • jdelvare
        about 10 years ago by jdelvare | Reply

        Thanks for the pointer, Raymund. I'll make sure to test my new code on the grub2 spec file.

      • jdelvare
        about 10 years ago by jdelvare | Reply

        The problem with the grub2 spec file is that it uses a syntax which older versions of rpmbuild do not recognize as valid. This has nothing to do with "quilt setup". But it's easy enough to fix the spec file itself as far as I can see, so let's just do that.

    • jdelvare
      about 10 years ago by jdelvare | Reply

      The new backend for "quilt setup" is working, and the performance improvement is very nice. For example, the kernel-default package (which was the slowest one I was aware of) took 1 min 48 s to setup before, while with the new --fast option, it only takes 25 s. So I think the project can be called a success :-)

      There are a few technical implementation details which still need to be sorted out, I'll discuss that with upstream. There are some possible improvements left on my to-do list as well.

      It took a few tries to get things right. For example, I originally wanted to get rid of the md5sum step altogether, before I remembered that patches are typically passed to us through stdin, so we still need a checksum of all patch files in order to resolve the contents to a filename. I was still able to save time by skipping md5sum on archive files.
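The stdin-to-filename resolution described above can be illustrated like this (a sketch with made-up paths and a fake patch body; the real code differs):

```shell
# Patches arrive on stdin, so the wrapper cannot see the file name.
# Hashing all candidate patch files once, then hashing the stdin
# content, lets us map the content back to a file in the source dir.
srcdir=$(mktemp -d)
printf 'fake patch body\n' > "$srcdir/foo.patch"

sums=$(mktemp)
md5sum "$srcdir"/*.patch > "$sums"          # computed once, up front

# Inside the wrapper: capture stdin, hash it, look the hash up.
tmp=$(mktemp)
printf 'fake patch body\n' | cat > "$tmp"   # stands in for: cat > "$tmp"
sum=$(md5sum < "$tmp" | cut -d' ' -f1)
file=$(awk -v s="$sum" '$1 == s { print $2 }' "$sums")
echo "$file"                                # resolved file name
```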

      Another problem I hit is that my original implementation used "quilt import" to add each patch to the series file. This created a copy of each patch, instead of linking back to the original files in the source directory. That made it impossible to refresh the patches directly as "quilt setup" currently allows. I first worked around that by deleting the copies and linking back afterward, but that was both inefficient and fragile. The proper fix was to manually add the file to the series file instead of calling "quilt import". That required some work so that all the paths were correct both when rpmbuild is applying the patches and later when the working tree is available to the user. The trickiest part was to make it all work also when options -d and/or --sourcedir are used.
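Appending to the series file by hand, rather than importing, keeps the entry pointing at the original patch, which is the core of the fix described above (minimal sketch with invented paths; the real path juggling for -d/--sourcedir is more involved):

```shell
# "quilt import source/foo.patch" would copy the file under patches/.
# Writing the series entry directly instead keeps a single copy, so a
# later "quilt refresh" updates the original file in source/.
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p source patches
printf '' > source/foo.patch

echo "../source/foo.patch -p1" >> patches/series
```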

Finally, contrary to my original intention, "quilt push" is not called on every patch. Calling it per patch turned out to be a bad idea from a performance point of view, because individual calls to "quilt push" are much slower than a single call to "quilt push -a". So it is still up to the user to apply the patches with "quilt push", same as before.

      Two user-visible differences exist between the original implementation of "quilt setup" and the new, faster one:

• Headers in the generated series file are incomplete. This prevents reusing the series file for a future call to "quilt setup". Most users probably don't care about this feature, though, so they won't notice. It could be fixed, but at the cost of part of the performance gain, so I don't want to do it by default. Maybe this can be implemented as an option later, but that only really makes sense if we get rid of the original implementation of "quilt setup" (which my patches do not do).
• Patch failures are no longer reported; the user will see them on "quilt push". A great side effect of this is that all patches are added to the series even if one fails to apply in the middle. So I consider this change a feature, as it avoids the "quilt setup" / "rm -rf" cycle the user had to go through before when some patches did not apply.

Also a nice side effect of this project is that I came up with a few bug fixes, cleanups, and performance improvement patches for quilt along the way. Six of them are already upstream and six more have been posted for review.

    • jdelvare
      about 10 years ago by jdelvare | Reply

      The result of my work is packaged at: https://build.opensuse.org/package/show/home:jdelvare:branches:devel:tools:scm/quilt

Just pass --fast to "quilt setup" to use the new code, and enjoy the performance boost. If anything doesn't work as expected, please report it to me.

Note: for complex packages such as the kernel (or any package that includes patches inside archives), you will want to use option -d as well, otherwise some of the performance gain is lost. This is a known bug; I have an idea of how it can be fixed, but I have not had the time to implement it yet.

      • jdelvare
        about 10 years ago by jdelvare | Reply

        And now as a clickable link: home:jdelvare:branches:devel:tools:scm > quilt

    • jdelvare
      about 10 years ago by jdelvare | Reply

      Patch/RFC was finally posted to the quilt-dev list.

    • jdelvare
      about 10 years ago by jdelvare | Reply

      The quilt setup "fast mode" feature has been committed upstream.

      • jdelvare
        over 9 years ago by jdelvare | Reply

        I have just committed the final pieces of "quilt setup" optimization upstream. It will be faster than ever in v0.65.

    Similar Projects

    Testing and adding GNU/Linux distributions on Uyuni by juliogonzalezgil

    Join the Gitter channel! https://gitter.im/uyuni-project/hackweek

    Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines. It also manages configuration, can run audits, build image containers, monitor and much more!

Currently there are a few distributions that are completely untested on Uyuni or SUSE Manager (AFAIK), or that have not been tested for a long time. It would be interesting to find out how hard it is to work with them and, if possible, fix whatever is broken.

For newcomers, the easiest distributions are those based on DEB or RPM packages. Distributions with other package formats are doable, but will require adapting the Python and Java code to be able to sync and analyze such packages (and if Salt does not support those packages, it will need changes as well). So if you want a distribution with another package format, make sure you are comfortable handling such changes.

No developer experience? No worries! We have had non-developer contributors in the past, and we are ready to help as long as you are willing to learn. If you don't want to code at all, you can also help us prepare the documentation after someone else has the initial code ready, or you could help with testing :-)

The idea is to test Salt and Salt-SSH clients, but NOT traditional clients, which are deprecated.

    To consider that a distribution has basic support, we should cover at least (points 3-6 are to be tested for both salt minions and salt ssh minions):

    1. Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
2. Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
    3. Package management (install, remove, update...)
    4. Patching
    5. Applying any basic salt state (including a formula)
    6. Salt remote commands
    7. Bonus point: Java part for product identification, and monitoring enablement
    8. Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
    9. Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
    10. Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)

If something is breaking: we can try to fix it, but the main idea is to research how well supported it is right now. Beyond that, it's up to each project member how much to hack :-)

    • If you don't have knowledge about some of the steps: ask the team
    • If you still don't know what to do: switch to another distribution and keep testing.

This card is for EVERYONE, not just developers. Seriously! We have had non-developer helpers from other teams in the past, and they added support for Debian and for new SUSE Linux Enterprise and openSUSE Leap versions :-)

    Pending

    FUSS

    FUSS is a complete GNU/Linux solution (server, client and desktop/standalone) based on Debian for managing an educational network.

    https://fuss.bz.it/

    Seems to be a Debian 12 derivative, so adding it could be quite easy.

    • [W] Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
    • [W] Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding OS to the bootstrap repository creator) --> Working for all 3 options (salt minion UI, salt minion bootstrap script and salt-ssh minion from the UI).
    • [W] Package management (install, remove, update...) --> Installing a new package works; the rest still needs to be tested.
    • [I] Patching (if patch information is available, could require writing some code to parse it, but IIRC we have support for Ubuntu already). No patches detected. Do we support patches for Debian at all?
    • [W] Applying any basic salt state (including a formula)
    • [W] Salt remote commands
    • [ ] Bonus point: Java part for product identification, and monitoring enablement


    A CLI for Harvester by mohamed.belgaied

    Harvester does not officially come with a CLI tool; the user is supposed to interact with Harvester mostly through the UI. Though it is theoretically possible to use kubectl to interact with Harvester, the manipulation of KubeVirt YAML objects is absolutely not user friendly. Inspired by tools like Multipass from Canonical to easily and rapidly create one or multiple VMs, I began the development of Harvester CLI. Currently it works, but Harvester CLI needs some love to be up to date with Harvester v1.0.2, and needs some bug fixes and improvements as well.

    Project Description

    Harvester CLI is a command line interface tool written in Go, designed to simplify interfacing with a Harvester cluster as a user. It is especially useful for testing purposes as you can easily and rapidly create VMs in Harvester by providing a simple command such as: harvester vm create my-vm --count 5 to create 5 VMs named my-vm-01 to my-vm-05.

    [asciicast demo recording]

    Harvester CLI is functional but needs a number of improvements: up-to-date functionality with Harvester v1.0.2 (some minor issues right now), modifying the default behaviour to create an opensuse VM instead of an ubuntu VM, solve some bugs, etc.

    Github Repo for Harvester CLI: https://github.com/belgaied2/harvester-cli

    Done in previous Hackweeks

    • Create a Github actions pipeline to automatically integrate Harvester CLI to Homebrew repositories: DONE
    • Automatically package Harvester CLI for OpenSUSE / Redhat RPMs or DEBs: DONE

    Goal for this Hackweek

    The goal for this Hackweek is to bring Harvester CLI up-to-speed with latest Harvester versions (v1.3.X and v1.4.X), and improve the code quality as well as implement some simple features and bug fixes.

    Some nice additions might be:

    • Improve handling of namespaced objects
    • Add features, such as network management or Load Balancer creation
    • Add more unit tests and, why not, e2e tests
    • Improve CI
    • Improve the overall code quality
    • Test the program and create issues for it

    Issue list is here: https://github.com/belgaied2/harvester-cli/issues

    Resources

    The project is written in Go, using client-go, the Kubernetes Go client library, to communicate with the Harvester API (which is in fact Kubernetes). Contributions are welcome in:

    • Testing it and creating issues
    • Documentation
    • Go code improvement

    What you might learn

    Harvester CLI might be interesting to you if you want to learn more about:

    • GitHub Actions
    • Harvester as a SUSE Product
    • Go programming language
    • Kubernetes API