The Docker way of updating containers is to build a new image with the updated binaries and files, which creates a security concern: containers only pick up security fixes when their image is rebuilt.

With Docker you no longer run "zypper update" inside the container; instead, you update the whole image in the image registry (Docker Hub, if we are talking about a public registry), pull the updated image from there, stop the outdated containers, and replace them by starting new containers based on the new image.
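That replacement cycle can be sketched in shell as follows; the image and container names are placeholders, and the helper only prints each command so the sequence can be reviewed before running it for real:

```shell
#!/bin/sh
# Sketch of the pull/stop/replace cycle (placeholder names).
# "run" only prints the commands; drop the echo to execute them.
run() { echo "+ $*"; }

IMAGE=docker.io/example/app:latest
NAME=app

run docker pull "$IMAGE"                    # fetch the updated image
run docker stop "$NAME"                     # stop the outdated container
run docker rm "$NAME"                       # remove it
run docker run -d --name "$NAME" "$IMAGE"   # start a replacement
```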

This process breaks our current security update workflow, which is based on running "zypper update" on the host or, in this case, inside the container.

Thus, what we need is a way to update the images in the registry when there are new RPM updates.

When we talk about updating RPMs, we have to distinguish two cases:

  • The RPM is in the base image
  • The RPM is in a layer above the image

The idea of this project is to use the "Remote Build Trigger" feature of the public registry Docker Hub [1] to trigger automatic builds of the images that need to be rebuilt.

[1] https://docs.docker.com/docker-hub/builds/
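Firing such a trigger is a single HTTP POST. A sketch, with a placeholder user, repository, and token (the real token comes from the repository's "Build Triggers" settings page; see [1] for the exact payload):

```shell
#!/bin/sh
# Build-trigger sketch; user, repo and token are placeholders.
DOCKER_HUB_USER=jordimassaguerpla
REPO=myimage
TOKEN=00000000-aaaa-bbbb-cccc-000000000000
TRIGGER_URL="https://registry.hub.docker.com/u/$DOCKER_HUB_USER/$REPO/trigger/$TOKEN/"

# POSTing to the trigger URL asks Docker Hub to rebuild the image;
# echoed here so the request can be reviewed before sending it.
echo curl -s -X POST -H 'Content-Type: application/json' \
     --data '{"build": true}' "$TRIGGER_URL"
```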

Looking for hackers with the skills:

docker security

This project is part of:

Hack Week 12

Activity

  • over 9 years ago: kpimenov liked this project.
  • over 9 years ago: jordimassaguerpla added keyword "docker" to this project.
  • over 9 years ago: jordimassaguerpla added keyword "security" to this project.
  • over 9 years ago: jordimassaguerpla started this project.
  • over 9 years ago: jordimassaguerpla originated this project.

  • Comments

    • jordimassaguerpla
      over 9 years ago by jordimassaguerpla | Reply

      The design on how to solve this is:

      1- add a layer in the docker image which updates the packages, i.e. a "zypper up" command in the Dockerfile.

      2- get a list of RPMs installed on the docker image

      3- get metadata from the update repo

      4- based on the information from 2 and 3, decide to trigger a rebuild
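A rough shell sketch of steps 2 and 4 (the image name, the version format, and the comparison are illustrative; fetching and parsing the update-repo metadata from step 3 is left out):

```shell
#!/bin/sh
# Illustrative sketch of steps 2 and 4; IMAGE is a placeholder.
IMAGE=opensuse/example

# Step 2: name/version pairs of the RPMs installed in the image.
installed_rpms() {
    docker run --rm "$IMAGE" rpm -qa --qf '%{NAME} %{VERSION}-%{RELEASE}\n'
}

# Step 4: read "name version" pairs on stdin and compare them against
# the repo list passed in $1; succeed (rebuild needed) on any mismatch.
needs_rebuild() {
    repo_list=$1
    while read -r name ver; do
        repo_ver=$(printf '%s\n' "$repo_list" | awk -v n="$name" '$1 == n { print $2 }')
        if [ -n "$repo_ver" ] && [ "$repo_ver" != "$ver" ]; then
            return 0
        fi
    done
    return 1
}

# Usage idea: installed_rpms | needs_rebuild "$repo_list" && trigger a rebuild
```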

    • jordimassaguerpla
      over 9 years ago by jordimassaguerpla | Reply

      I'll start by doing 1 and 2 manually, that is, editing the Dockerfile of an example image and running "rpm -qa" on it. That way I can focus on 3 and 4.

    • jordimassaguerpla
      over 9 years ago by jordimassaguerpla | Reply

      after talking to Flavio, I'll redesign it as:

      1- add a layer in docker image which applies patches: "zypper ref && zypper patch"

      2- run "docker run --rm IMAGE zypper list-patches"

      3- if 2 returns a list of patches, trigger a rebuild
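These steps could be sketched like this; the image name and trigger URL are placeholders, and the "needed" status column assumed in the zypper output varies between zypper versions:

```shell
#!/bin/sh
# Sketch of the redesigned flow; IMAGE and TRIGGER_URL are placeholders.
IMAGE=opensuse/example
TRIGGER_URL=https://registry.hub.docker.com/u/user/repo/trigger/TOKEN/

# "zypper list-patches" prints one table row per pending patch; count
# the rows whose status column says "needed" (layout varies by version).
count_patches() { grep -c '| needed' || true; }

# Step 2 pipes the zypper output into the counter, step 3 fires the
# trigger only when something is pending:
#   n=$(docker run --rm "$IMAGE" zypper --non-interactive list-patches | count_patches)
#   [ "$n" -gt 0 ] && curl -X POST --data '{"build": true}' "$TRIGGER_URL"
```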

    • lnussel
      over 9 years ago by lnussel | Reply

      can't you build the docker image using kiwi in OBS? OBS will automatically trigger a rebuild if the packages the image depends on have changed.

    • jordimassaguerpla
      over 9 years ago by jordimassaguerpla | Reply

      Yes I could do that but that wouldn't update existing images in docker hub.

    • jordimassaguerpla
      over 9 years ago by jordimassaguerpla | Reply

      https://github.com/jordimassaguerpla/dc-update

    • jordimassaguerpla
      over 9 years ago by jordimassaguerpla | Reply

      dc-update is a shell script that pulls an image from a registry (e.g. Docker Hub), checks whether there are pending updates, and if so adds a layer with the updated packages (by running "zypper patch" and committing afterwards), and finally pushes the image back to the registry.

      It also works with Fedora.
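From that description, the flow could be outlined like this (names are placeholders and this is not the actual dc-update code; the Fedora case would swap zypper for yum):

```shell
#!/bin/sh
# Outline of the dc-update flow as described above (placeholders only).
dc_update_plan() {
    image=$1
    cat <<EOF
docker pull $image
docker run --name dc-update-tmp $image zypper --non-interactive patch
docker commit dc-update-tmp $image
docker rm dc-update-tmp
docker push $image
EOF
}

# Print the plan for an example image.
dc_update_plan docker.io/user/image:latest
```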

    • jordimassaguerpla
      over 9 years ago by jordimassaguerpla | Reply

      Let's work on a web interface now :-) !

    • jordimassaguerpla
      over 9 years ago by jordimassaguerpla | Reply

      Web interface:

      https://github.com/jordimassaguerpla/dc-update-web

      It does not yet have multiuser support, and it requires the dockercfg file to be in the home directory of the user running the web application. Adding multiuser support will be the next step.

    • jordimassaguerpla
      over 9 years ago by jordimassaguerpla | Reply

      It requires dc-update to be in the path. I am going to create an RPM for dc-update.

    • jordimassaguerpla
      over 9 years ago by jordimassaguerpla | Reply

      It now has support for multiple users. Authentication is via GitHub. RPM for dc-update:

      http://download.opensuse.org/repositories/home:/jordimassaguerpla:/dc-update

    • jordimassaguerpla
      over 9 years ago by jordimassaguerpla | Reply

      and here is a screenshot:

      https://github.com/jordimassaguerpla/dc-update-web/blob/master/screenshots/hackweek12.png

    • jordimassaguerpla
      over 9 years ago by jordimassaguerpla | Reply

      I presented the project at the Docker meetup group in Barcelona last Wednesday. People found it interesting, and pointed out a problem: if the image was built from a Dockerfile and you rebuild it from that Dockerfile again after updating it with the dc-update script, the layer added by the script is lost.

      This script can therefore be used for updating base images that won't get new code (and thus no rebuilds), or it could be changed to trigger a rebuild based on the Dockerfile instead of adding a layer with "zypper/yum update".

      Anyway, I got to practice with Docker and Rails and learned a lot, so it has been worth doing this project :-) .

    Similar Projects

    Migrate from Docker to Podman by tjyrinki_suse

    Description

    I'd like to continue my former work on containerization of several domains on a single server by switching from Docker containers to Podman containers. That will also require an OS upgrade, since Podman is not available in that old server version.

    Goals

    • Update OS.
    • Migrate from Docker to Podman.
    • Keep everything functional, including the existing "meanwhile done" additional Docker container that is already in actual use.
    • Keep everything at least as secure as it is now. One of the reasons for having the containers is to isolate risks related to services open to the public Internet.
    • Try to enable the Podman use in production.
    • At minimum, learn about all of these topics.
    • Optionally, improve Ansible side of things as well...

    Resources

    A search engine is one's friend. Migrating from Docker to Podman, and from docker-compose to podman-compose.
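Since Podman's CLI is largely Docker-compatible, most commands translate by swapping the binary name. A tiny illustrative helper, not part of the project (volumes, networks, and rootless mode still need manual review):

```shell
#!/bin/sh
# Preview a Docker command's Podman equivalent (illustrative only).
to_podman() {
    printf '%s\n' "$1" \
        | sed -e 's/^docker-compose/podman-compose/' -e 's/^docker/podman/'
}
```

During a migration, many setups simply `alias docker=podman` until all scripts are converted.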


    OIDC Loginproxy by toe

    Description

    Reverse proxies can be a useful option to separate authentication logic from application logic. SUSE and openSUSE use "loginproxies" as an authentication layer in front of several services.

    Currently, loginproxies exist which support LDAP authentication or SAML authentication.

    Goals

    The goal of this Hack Week project is to create another loginproxy which supports OpenID Connect authentication and can then act as a drop-in replacement for the existing LDAP or SAML loginproxies.

    Testing is intended to focus on the integration with OIDC IDPs from Okta, KanIDM and Authentik.

    Resources


    VulnHeap by r1chard-lyu

    Description

    The VulnHeap project is dedicated to the in-depth analysis and exploitation of vulnerabilities within heap memory management. It focuses on understanding the intricate workflow of heap allocation, chunk structures, and bin management, which are essential to identifying and mitigating security risks.

    Goals

    • Familiarize with heap
      • Heap workflow
      • Chunk and bin structure
      • Vulnerabilities
    • Vulnerability
      • Use after free (UAF)
      • Heap overflow
      • Double free
    • Use Docker to create a vulnerable environment and apply techniques to exploit it
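For the last goal, one possible way to start a disposable sandbox with Docker; the image and flags are illustrative (SYS_PTRACE and a relaxed seccomp profile are typically needed to run gdb inside the container):

```shell
#!/bin/sh
# Illustrative sandbox command; echoed so it can be reviewed first.
SANDBOX_CMD="docker run -it --rm --cap-add=SYS_PTRACE --security-opt seccomp=unconfined ubuntu:22.04 bash"
echo "$SANDBOX_CMD"
# Inside the container: install gcc and gdb, clone how2heap, build the demos.
```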

    Resources

    • https://heap-exploitation.dhavalkapil.com/divingintoglibc_heap
    • https://raw.githubusercontent.com/cloudburst/libheap/master/heap.png
    • https://github.com/shellphish/how2heap?tab=readme-ov-file


    Model checking the BPF verifier by shunghsiyu

    Project Description

    The BPF verifier plays a crucial role in securing the system (though less so now that unprivileged BPF is disabled by default in both upstream and SLES), and bugs in the verifier have led to privilege escalation vulnerabilities in the past (e.g. CVE-2021-3490).

    One way to check whether the verifier has bugs is to use model checking (a formal verification technique): in other words, build an abstract model of how the verifier operates, and then see whether a certain condition can occur (e.g. an incorrect calculation during value tracking of registers) by giving both the model and the condition to a solver.

    For the solver I will be using the Z3 SMT solver, since it provides a Python binding that is relatively easy to use.

    Goal for this Hackweek

    Learn how to use the Z3 Python binding (i.e. Z3Py) to build a model of (part of) the BPF verifier, probably the part related to value tracking using tristate numbers (aka tnums), and then check that the algorithm works as intended.

    Resources


    Kanidm: A safe and modern IDM system by firstyear

    Kanidm is an IDM system written in Rust for modern systems authentication. The github repo has a detailed "getting started" on the readme.

    Kanidm Github

    In addition, Kanidm has spawned a number of adjacent projects in the Rust ecosystem, such as LDAP, Kerberos, Webauthn, and cryptography libraries.

    In this hack week, we'll be working on Quokca, a certificate authority that supports PKCS11/TPM storage of keys, issuance of PIV certificates, and ACME, without the feature gatekeeping implemented by other CAs like smallstep.

    For anyone who wants to participate in Kanidm, we have documentation and developer guides which can help.

    I'm happy to help and share more, so please get in touch!


    Bot to identify reserved data leak in local files or when publishing on remote repository by mdati

    Description

    The scope here is to prevent reserved or generally "unwanted" data from being pushed and saved to a public repository, e.g. on GitHub, which would cause disclosure or leaking of reserved information.

    The above definition of reserved or "unwanted" data may vary depending on the context: sometimes secret keys or passwords are stored in data or configuration files, or hardcoded in source code, and depending on the scope of the archive and the level of security this may be acceptable or not.

    The main targets here are registration keys and passwords, to be detected and managed locally or in a CI pipeline.

    Goals

    • Detection:

      • Local detection: detect secret words present in local files;
      • Remote detection: detect secrets in files that are about to be transferred to a remote repository, e.g. via git push in a pipeline;
    • Reporting:

      • report the result of the detection on stderr and/or in log files, excluding the secret values from the notice.
    • Action:

      • Manage the detection by deleting or masking the impacted code, deleting/moving the file itself, or simply notifying about it.
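A minimal sketch of the local-detection and masked-reporting steps (the pattern list and the masking are illustrative; the project keeps its patterns in a separate, customizable module):

```shell
#!/bin/sh
# Scan files for secret-looking assignments and print "file:line:key = ****"
# with the secret value masked (illustrative pattern list).
SECRET_RE='(password|passwd|secret|api[_-]?key|token)[[:space:]]*[:=]'

scan() {
    grep -HnEi "$SECRET_RE" "$@" 2>/dev/null \
        | sed -E "s/($SECRET_RE).*/\1 ****/I"
}
```

Run as a git pre-push hook, a non-empty `scan` result would block the push.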

    Resources

    • Project repository, published on Github (link): m-dati/hkwk24;
    • Reference folder: hkwk24/chksecret;
    • First pull request (link): PR#1;
    • Second PR, for improvements: PR#2;
    • README.md and TESTS.md documentation files available in the repo root;
    • Test subproject repository, for testing CI on push [TBD].

    Notes

    We use some example secret words here, which can still be improved.
    The patterns that match the desired reserved words are kept in a separate module, so they can be updated or customized on demand.

    [Legend: TBD = to be done]