The Docker way of updating containers is to build a new image with the updated binaries and files, which creates a security concern.
The Docker way is no longer to run "zypper update" inside the container, but to update the whole image in the image registry (Docker Hub, if we are talking about a public registry), pull the updated image from there, stop the outdated containers, and replace them by starting new containers based on the new image.
This process breaks our current security update workflow, which is based on running "zypper update" on the host, or in this case, in the container.
Thus, what we need is a way to update the images in the registry when there are new RPM updates.
When we talk about updating RPMs, we have to distinguish two cases:
- The RPM is in the base image
- The RPM is in a layer above the image
The idea of the project is to make use of the "Remote Build Trigger" feature in the public registry "Docker Hub" [1] to trigger automatic builds of containers which need to be rebuilt.
[1] https://docs.docker.com/docker-hub/builds/
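For reference, a Remote Build Trigger of that era was invoked with a plain HTTP POST to a per-repository trigger URL. A minimal sketch, where the user, repository name, and token are placeholders for your own repository:

```shell
#!/bin/sh
# Kick off a Docker Hub automated build via its Remote Build Trigger URL.
# USER, REPO and TRIGGER_TOKEN are placeholders, not real values.
USER="myuser"
REPO="myimage"
TRIGGER_TOKEN="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
TRIGGER_URL="https://registry.hub.docker.com/u/${USER}/${REPO}/trigger/${TRIGGER_TOKEN}/"

# POST a small JSON body to request a rebuild of the image.
curl -s -X POST -H "Content-Type: application/json" \
     --data '{"build": true}' "$TRIGGER_URL"
```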
This project is part of:
Hack Week 12
Comments
-
over 9 years ago by jordimassaguerpla
The design on how to solve this is:
1- add a layer in the docker image which updates the packages, i.e. a "zypper up" command in the Dockerfile.
2- get a list of RPMs installed on the docker image
3- get metadata from the update repo
4- based on the information from 2 and 3, decide to trigger a rebuild
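A rough sketch of how steps 2-4 could look in shell; the image name, repository URL, and the "metadata changed" heuristic are illustrative assumptions, not the project's actual implementation:

```shell
#!/bin/sh
# Sketch of design steps 2-4; IMAGE and REPO_URL are placeholders.
IMAGE="opensuse/example"
REPO_URL="http://download.opensuse.org/update/leap/15.6/oss"

# 2) list the RPMs installed inside the image
docker run --rm "$IMAGE" rpm -qa --qf '%{NAME} %{VERSION}-%{RELEASE}\n' \
    | sort > installed.txt

# 3) fetch the update repository's metadata index
curl -s "$REPO_URL/repodata/repomd.xml" > repomd.xml

# 4) decide: if the metadata changed since the last check, some installed
#    package may have an update, so a rebuild should be considered
if ! cmp -s repomd.xml repomd.xml.last; then
    echo "repository metadata changed, checking installed.txt for updates"
    cp repomd.xml repomd.xml.last
fi
```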
-
over 9 years ago by jordimassaguerpla
I'll start by doing 1 and 2 manually, that is, editing the Dockerfile of an example image and running "rpm -qa" on it. That way I can focus on 3 and 4.
-
over 9 years ago by jordimassaguerpla
After talking to Flavio, I'll redesign it as:
1- add a layer in the docker image which applies patches: "zypper ref && zypper patch"
2- run "docker run --rm IMAGE zypper list-patches"
3- if 2 returns a list of patches, trigger a rebuild
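These three steps can be strung together in a few lines of shell. The image name is a placeholder, and checking for the word "needed" in the output is an assumption about zypper's human-readable patch table:

```shell
#!/bin/sh
# 1) The image's Dockerfile is assumed to end with:
#      RUN zypper ref && zypper patch
# 2) Ask a throwaway container whether any patches are still pending.
IMAGE="opensuse/example"   # placeholder

pending=$(docker run --rm "$IMAGE" zypper -q list-patches)

# 3) Any "needed" row in the patch table means the image is outdated
#    and a rebuild should be triggered.
if echo "$pending" | grep -q 'needed'; then
    echo "patches pending for $IMAGE, triggering rebuild"
else
    echo "$IMAGE is up to date"
fi
```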
-
over 9 years ago by jordimassaguerpla
Yes I could do that but that wouldn't update existing images in docker hub.
-
over 9 years ago by jordimassaguerpla
dc-update is a shell script that pulls an image from a registry (e.g. Docker Hub), checks whether there are pending updates, and if so, adds a layer with the updated packages (by running "zypper patch" and committing afterwards), and finally pushes the image back to the registry.
It also works with Fedora.
-
over 9 years ago by jordimassaguerpla
Web interface:
https://github.com/jordimassaguerpla/dc-update-web
It does not yet have multiuser support, and it requires the dockercfg file to be in the home directory of the user running the web app. Adding multiuser support will be the next step.
-
over 9 years ago by jordimassaguerpla
It requires dc-update to be in the PATH. I am going to create an RPM for dc-update.
-
over 9 years ago by jordimassaguerpla
It now has support for multiple users. Authentication is done via GitHub. RPM for dc-update:
http://download.opensuse.org/repositories/home:/jordimassaguerpla:/dc-update
-
over 9 years ago by jordimassaguerpla
And here is a screenshot:
https://github.com/jordimassaguerpla/dc-update-web/blob/master/screenshots/hackweek12.png
-
over 9 years ago by jordimassaguerpla
I presented the project at the Docker meetup group in Barcelona last Wednesday. People found it interesting, and pointed out a problem if the image had been built from a Dockerfile and, after updating it with the dc-update script, you rebuild it from that Dockerfile again.
This script can be used for updating base images which won't get new code, and thus no rebuilds; or it could be changed to trigger a rebuild based on the Dockerfile instead of adding a layer with "zypper/yum update".
In any case, I got to practice with Docker and Rails and have learned a lot, so this project has been worth doing :-) .
Similar Projects
Migrate from Docker to Podman by tjyrinki_suse
Description
I'd like to continue my former work on containerizing several domains on a single server by moving from Docker containers to Podman containers. That will also require an OS upgrade, as Podman is not available in that old server version.
Goals
- Update OS.
- Migrate from Docker to Podman.
- Keep everything functional, including the existing "meanwhile done" additional Docker container that is actually being used already.
- Keep everything at least as secure as it is currently. One of the reasons for having the containers is to isolate risks related to services open to the public Internet.
- Try to enable the Podman use in production.
- At minimum, learn about all of these topics.
- Optionally, improve Ansible side of things as well...
Resources
A search engine is one's friend. Migrating from Docker to Podman, and from docker-compose to podman-compose.
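The mechanical part of moving one container over can be sketched in a few commands. This is a minimal sketch assuming rootful containers and a simple single-container service; the container name, image, and port mapping are placeholders:

```shell
#!/bin/sh
# Sketch: move one container from Docker to Podman.
# NAME, IMAGE and the port mapping are placeholders for the real service.
NAME="mycontainer"
IMAGE="docker.io/library/nginx:stable"

# Stop and remove the Docker-managed container.
docker stop "$NAME" && docker rm "$NAME"

# Podman pulls from the same registries and accepts the same CLI flags
# for common cases, so the run line usually carries over unchanged.
podman pull "$IMAGE"
podman run -d --name "$NAME" -p 8080:80 "$IMAGE"

# Optionally generate a systemd unit so the service survives reboots.
podman generate systemd --new --name "$NAME" \
    > "/etc/systemd/system/${NAME}.service"
systemctl daemon-reload && systemctl enable --now "${NAME}.service"
```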
OIDC Loginproxy by toe
Description
Reverse proxies can be a useful option to separate authentication logic from application logic. SUSE and openSUSE use "loginproxies" as an authentication layer in front of several services.
Currently, loginproxies exist which support LDAP authentication or SAML authentication.
Goals
The goal of this Hack Week project is to create another loginproxy, supporting OpenID Connect authentication, which can then act as a drop-in replacement for the existing LDAP or SAML loginproxies.
Testing is intended to focus on the integration with OIDC IDPs from Okta, KanIDM and Authentik.
Resources
Contributing to Linux Kernel security by pperego
Description
A couple of weeks ago, I found this blog post by Gustavo Silva, a Linux Kernel contributor.
I have always wanted to get back into hacking on the Linux kernel, so I asked for access to the Coverity Scan dashboard, and I want to contribute to the kernel by fixing some minor issues.
I also want to create a Linux kernel fuzzing lab using QEMU and syzkaller.
Goals
- Fix at least 2 security bugs
- Create the fuzzing lab and have it running
The story so far
- Day 1: setting up a virtual machine for kernel development using Tumbleweed. Reading a lot of documentation, getting familiar with the Coverity dashboard and with the procedures for submitting a kernel patch.
- Day 2: I read a lot of documentation and triaged some findings on the Coverity SAST dashboard. I can confirm that SAST tools are great false-positive generators, even for low-hanging fruit.
- Day 3: Working on trivial changes after reading this blog post: https://www.toblux.com/posts/2024/02/linux-kernel-patches.html. I still need to get comfortable with the patch preparation and submission process.
- First trivial patch sent: using the str_true_false() macro instead of hard-coded strings in a staging driver for an LCD display
- Fix for a dereference before null check issue discovered by Coverity (CID 1601566) https://scan7.scan.coverity.com/#/project-view/52110/11354?selectedIssue=1601566
- Day 4: Triaging more issues found by Coverity.
- The patch for CID 1601566 was refused. The check against the NULL pointer was pointless so I prepared a version 2 of the patch removing the check.
- Fixed another dereference-before-NULL-check in the iwl_mvm_parse_wowlan_info_notif() routine (CID 1601547). This one had already been submitted by another kernel hacker :(
- Day 5: Wrapping up. I had to do some minor rework on the patch for CID 1601566. I found a stalker bothering me in private emails, and people I interacted with advised me that he is a well-known nuisance. Markus Elfring, for the record.
Wrapping up: being back doing kernel hacking is amazing and I don't want to stop. My battery pack is completely drained, but the change of scope gave me a great twist, and I really want to keep this energy rather than doing a single task for months.
I did not manage to set up the fuzzing lab, and I was too optimistic about the patch submission process.
The patches
CVE portal for SUSE Rancher products by gmacedo
Description
Currently it is a bit difficult for users to quickly see the list of CVEs affecting images in Rancher, RKE2, Harvester and Longhorn releases. Users need to look for each CVE individually in the SUSE CVE database page, https://www.suse.com/security/cve/ . This is not optimal, because those CVE pages are a bit hard to read and contain data for all SLE and BCI products too, making it difficult to see only the CVEs affecting, for example, the latest release of Rancher. We understand that certain customers are only looking for CVE data for Rancher, not for SLE or BCI.
Goals
The objective is to create a simple to read and navigate page that contains only CVE data related to Rancher, RKE2, Harvester and Longhorn, where it's easy to search by a CVE ID, an image name or a release version. The page should also provide the raw data as an exportable CSV file.
It must be an MVP with a minimal amount of effort/time invested, while still providing great value to our users and saving the time the Rancher Security team currently spends manually sharing such data. It might not be long-lived, as it may be replaced in 2-3 years by a better SUSE-wide solution.
Resources
- The page must be simple and easy to read.
- The UI/UX must be as straightforward as possible with minimal visual noise.
- The content must be created automatically from the raw data that we already have internally.
- It must be updated automatically on a daily basis and on ad-hoc runs (when needed).
- The CVE status must be aligned with VEX.
- The raw data must be exportable as CSV file.
- Ideally it will be written in Go or pure Shell script with basic HTML and no external dependencies in CSS or JS.
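To give a feel for the "pure shell script with basic HTML" option above, the rendering step could be as small as the following sketch. The CSV column layout (cve_id, image, release, severity) and file names are assumptions for illustration:

```shell
#!/bin/sh
# Render a CVE CSV (cve_id,image,release,severity) as a minimal HTML table.
# The column layout and file names are assumptions, not the real data format.
csv_to_html() {
    echo "<table>"
    echo "<tr><th>CVE</th><th>Image</th><th>Release</th><th>Severity</th></tr>"
    # Skip the CSV header line, then emit one table row per record.
    tail -n +2 "$1" | while IFS=, read -r cve image release severity; do
        echo "<tr><td>$cve</td><td>$image</td><td>$release</td><td>$severity</td></tr>"
    done
    echo "</table>"
}

csv_to_html cves.csv > cves.html
```

A daily cron job regenerating the page from the exported CSV would cover the "updated automatically" requirement.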
Bot to identify reserved data leak in local files or when publishing on remote repository by mdati
Description
The scope here is to prevent reserved, or generally "unwanted", data from being pushed and saved to a public repository, e.g. on GitHub, causing disclosure or leaking of reserved information.
The definition of reserved or "unwanted" varies with the context: sometimes secret keys or passwords are stored in data or configuration files, or hardcoded in source code, and depending on the scope of the archive and the level of security, that may or may not be acceptable.
As the main target here, the secrets will be registration keys or passwords, to be detected and managed locally or in a CI pipeline.
Goals
Detection:
- Local detection: detect secret words present in local files;
- Remote detection: detect secrets in files, in pipelines, that are about to be transferred to a remote repository, i.e. via "git push";
Reporting:
- report the result of the detection on stderr and/or in log files, taking care to exclude the secret values.
Action:
- Manage the detection by deleting or masking the impacted code, deleting/moving the file itself, or simply notifying about it.
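One way to prototype the detection and reporting steps is a grep-based git hook with the patterns kept in a separate file. The pattern file name and the example patterns below are illustrative, not the project's actual implementation:

```shell
#!/bin/sh
# Sketch of the detection step as a git hook; the pattern file name and
# the patterns themselves are examples only.
PATTERNS=".secret-patterns"    # one extended regex per line, e.g.:
                               #   password[[:space:]]*=
                               #   registration[_-]key

hits=0
for f in $(git diff --cached --name-only --diff-filter=ACM); do
    if grep -q -i -E -f "$PATTERNS" "$f"; then
        # Report only file names and line numbers to stderr,
        # never the secret values themselves.
        grep -n -i -E -f "$PATTERNS" "$f" | cut -d: -f1 \
            | sed "s|^|$f:|" >&2
        hits=1
    fi
done

# A non-zero exit aborts the commit/push until the findings are handled.
exit $hits
```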
Resources
- Project repository, published on GitHub: m-dati/hkwk24;
- Reference folder: hkwk24/chksecret;
- First pull request: PR#1;
- Second PR, for improvements: PR#2;
- README.md and TESTS.md documentation files available in the repo root;
- Test subproject repository, for testing CI on push [TBD].
Notes
We use here some example patterns for secret words, which can still be improved.
The various patterns matching the desired reserved words are kept in a separate module, so they can be updated or customized on demand.
[Legend: TBD = to be done]