Minimal objective

This is what we consider to be the minimum result we can achieve by the end of the hackweek.

User story #1: automatic dockerization

As a developer I write an application and ship it within a Docker image. The Docker image is built from scratch using a Dockerfile.

As a developer I want to Dockerize my application whenever I produce a new version of it.

Proposed solution

The developer must define the Docker image on Portus as an image built from a Dockerfile and provide the URL of the Git repository containing the Dockerfile.

The developer can trigger a build of the Docker image by simply calling a public endpoint on Portus.

Portus will check out the sources of the Dockerfile, build the image, and push it to the registry.
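
As an illustration, the public trigger endpoint could be a small Rails route and controller inside Portus. All names below (route, model, client) are made up for this sketch and are not existing Portus code; the `BuildService` client is sketched in the next section.

```ruby
# config/routes.rb -- hypothetical route for triggering builds
resources :dockerfile_images, only: [] do
  member { post :build }
end

# app/controllers/dockerfile_images_controller.rb -- minimal sketch
class DockerfileImagesController < ApplicationController
  # POST /dockerfile_images/:id/build
  def build
    image = DockerfileImage.find(params[:id])      # hypothetical model
    BuildService.trigger(git_url: image.git_url,   # hypothetical client, sketched below
                         name:    image.repository_name)
    head :accepted
  end
end
```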

Changes required

Changes required by Portus:

  • Introduce the concept of Docker images built from a Dockerfile.
  • Allow the customer to specify the credentials to use when checking out from Git.
  • Add a public API to Portus that can be used to trigger builds (sketched above).
  • Add a “Build service” backend to Portus (a client sketch follows this list):
    • Trigger builds
    • Retrieve the status of the build process
    • Retrieve build logs
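
The “Build service” API itself is not defined yet; as a sketch, Portus could talk to it through a tiny HTTP client like the one below. The endpoint paths, payload fields and environment variable are assumptions, not a defined interface.

```ruby
require "json"
require "net/http"

# Hypothetical client living inside Portus.
class BuildService
  BASE = ENV.fetch("PORTUS_BUILD_SERVICE_URL", "http://build-service.local")

  # Trigger a new build of a Dockerfile-based image.
  def self.trigger(git_url:, name:)
    uri = URI("#{BASE}/builds")
    Net::HTTP.post(uri, { git_url: git_url, name: name }.to_json,
                   "Content-Type" => "application/json")
  end

  # Retrieve the status of a build (assuming a JSON response).
  def self.status(build_id)
    JSON.parse(Net::HTTP.get(URI("#{BASE}/builds/#{build_id}")))
  end

  # Retrieve the raw build logs.
  def self.logs(build_id)
    Net::HTTP.get(URI("#{BASE}/builds/#{build_id}/logs"))
  end
end
```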

The “Build service” is a totally new component. We want to use a micro-services architecture for several reasons:

  • Portus has a precise scope and mission. The mere act of building images does not belong there.
  • We can keep shipping Portus to enterprise customers without having to maintain the building code. The “Build service” can be delivered once we are confident about it / get the approval from PM.
  • We have to eat our own dog food. The whole Docker ecosystem is about micro-services; we have to mature this experience within SUSE in order to help our customers.

As a first step the “Build service” component is going to be dead simple. We don’t want to reinvent a scheduler/orchestrator. We can delegate these tasks to the orchestration solution we are going to deliver to customers.

Right now the “Build service” is going to be deployed as a single service. The service will expose a public API that will be consumed by Portus.

A build is going to be composed of the following steps (sketched right after the list):

  1. Checkout the source code from Git.
  2. Perform a simple “docker build”.
  3. Perform a simple “docker push”.
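
A minimal sketch of these three steps, assuming they are plain CLI calls driven from Ruby. The `docker_env` hash holds the DOCKER_HOST / TLS variables pointing at the daemon that should run the build; it is filled in by the worker sketched further below.

```ruby
require "tmpdir"

def run_build(git_url, image_name, docker_env = {})
  Dir.mktmpdir("build") do |checkout|
    # 1. Checkout the source code from Git.
    system("git", "clone", "--depth", "1", git_url, checkout) or raise "git clone failed"
    # 2. Perform a simple "docker build".
    system(docker_env, "docker", "build", "-t", image_name, checkout) or raise "docker build failed"
    # 3. Perform a simple "docker push".
    system(docker_env, "docker", "push", image_name) or raise "docker push failed"
  end
  # Dir.mktmpdir removes the temporary checkout when the block returns.
end
```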

The builds are going to take place in brand new virtual machines because:

  • In the long run the Docker build host is going to require maintenance: unused layers referenced by images and containers will need to be pruned to avoid running out of disk space, and failed builds will leave broken layers around.
  • Security: a build could run malicious code taken from the internet and compromise the build host. A user could also accidentally affect other builds (e.g. by using a lot of hardware resources).

To build a Docker image we just need a machine running the Docker daemon. We don’t need to SSH into the machine; we just need the Docker daemon to listen on a TCP port.

All of these steps (starting a brand new VM, installing Docker, configuring the Docker daemon to listen over TCP in a secure manner) are already addressed by the Docker Machine project.

Hence, at this stage, the Build service should perform the following operations (a dispatcher sketch follows the list):

  • Spawn a fixed set of worker threads
  • Listen for build requests and send them to the first available worker
  • Queue the build requests to a local database when all the workers are busy
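
A minimal sketch of the dispatcher, using an in-memory Queue for brevity; the real service would also persist queued requests to a local database so they survive restarts. `perform_build` is the worker routine sketched further below.

```ruby
WORKER_COUNT = 4                       # fixed set of worker threads
requests = Queue.new                   # thread-safe FIFO from Ruby's stdlib

workers = Array.new(WORKER_COUNT) do
  Thread.new do
    loop do
      request = requests.pop           # blocks until a build request arrives
      perform_build(request)           # worker routine, sketched below
    end
  end
end

# The public API of the service would push incoming build requests into the
# same queue, for example:
requests << { git_url: "https://git.example.com/app.git",
              image_name: "registry.example.com/team/app:latest" }

workers.each(&:join)                   # keep the service running
```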

The workers would perform the following operations (sketched after the list):

  • Spawn a new Docker host using docker-machine
  • Checkout the source code of the project to a temporary location
  • Trigger a “docker build” against the Docker daemon listening on the remote host
  • Trigger a “docker push” from the Docker daemon
  • Destroy the Docker host using docker-machine
  • Wipe the temporary directory containing the git checkout
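
A rough sketch of one worker iteration, assuming docker-machine and the docker CLI are on the PATH; the machine name and driver are illustrative only. It reuses the `run_build` helper sketched earlier, pointing it at the freshly created host.

```ruby
require "securerandom"

def perform_build(request)
  machine = "builder-#{SecureRandom.hex(4)}"

  # Spawn a new Docker host (the driver and its options depend on the backend).
  system("docker-machine", "create", "--driver", "openstack", machine) or
    raise "could not create the build host"

  # `docker-machine env` prints the DOCKER_* variables (daemon URL, TLS certs)
  # needed to talk to the remote daemon; parse them into a Hash.
  env = `docker-machine env --shell sh #{machine}`
          .scan(/export (\w+)="(.*)"/).to_h

  # Checkout, build and push against the remote daemon (see run_build above).
  run_build(request[:git_url], request[:image_name], env)
ensure
  # Destroy the Docker host; run_build already wiped the temporary checkout.
  system("docker-machine", "rm", "-y", machine) if machine
end
```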

Advantages:

  • Docker Machine supports lots of backends (OpenStack, EC2, Digital Ocean,...)
  • The Docker daemon and client have been designed to work fine even when they are not on the same host
  • We can even have a simple bash script executed by the worker. In the future we can use libmachine and the dockerclient libraries to do everything.

Disadvantages:

  • Provisioning is slow

Nice to have features

These are features that could be achieved during the hackweek.

User story #2: dynamic inheritance

As a developer I base my Docker image on an existing one. That’s how the Docker build system works.

As a developer I want to be notified whenever a new version of the base image I used is released.

As a developer I would like to see a notification on Portus whenever I visit a repository containing an image that is based on an outdated one.

As a developer I would like my Docker image to be automatically rebuilt whenever a new version of its base image is released.

Proposed solution

All Docker images are made of layers. Layers can be shared between different Docker images.

All the layers can be organized into a graph to get a complete overview of the relations between the Docker images.

Portus should be extended to be aware of the relations between the different images stored in the registry.

Whenever Portus receives a push notification from the registry it will (see the sketch after the list):

  • Find the previous version of this image
  • Find all the images based on the old image
  • Send a mail to all the users that have write access to the derived images (team contributors or owners)
  • Trigger a build of all the images that are built from a Dockerfile and are based on the older version of the image
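
A rough sketch of that flow, assuming the layer models described in the next section and taking the previous version of the image as an already resolved argument. Every model, column and helper name here (the `outdated` flag, `contributors`, `dockerfile?`, `OutdatedImageMailer`, ...) is illustrative, not existing Portus code.

```ruby
def handle_push_notification(new_tag, previous_tag)
  return unless previous_tag

  # Images that still reference layers of the previous version are outdated.
  outdated = Tag.joins(:layers)
                .where(layers: { id: previous_tag.layer_ids })
                .where.not(id: [new_tag.id, previous_tag.id])
                .distinct

  outdated.each do |child|
    child.update(outdated: true)                   # flag it for the UI
    child.repository.contributors.each do |user|   # owners + contributors
      OutdatedImageMailer.notify(user, child).deliver_later
    end
    # Rebuild children that Portus knows are built from a Dockerfile.
    if child.repository.dockerfile?
      BuildService.trigger(git_url: child.repository.git_url,
                           name:    child.repository.full_name)
    end
  end
end
```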

Changes required

These changes are only going to affect Portus.

Database changes (a schema sketch follows the list):

  • Add new models to the database representing the layers
  • Create a relationship between the Tag model and the Layers
  • Retrieve the information about the layers composing a Docker image from the registry (this is done using the manifest API of the registry)
  • Populate the database accordingly
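
A rough sketch of what these schema changes could look like, assuming a Rails migration plus a join table between tags and layers; every model and column name is a guess, not existing Portus code. The `digest` values would be populated from the manifests returned by the registry's manifest API.

```ruby
# Hypothetical migration
class AddLayers < ActiveRecord::Migration
  def change
    create_table :layers do |t|
      t.string :digest, null: false      # layer digest taken from the manifest
      t.timestamps null: false
    end
    add_index :layers, :digest, unique: true

    # Join table: which layers a given tag (image version) is composed of.
    create_table :layers_tags, id: false do |t|
      t.references :layer, index: true, null: false
      t.references :tag,   index: true, null: false
    end
  end
end

# app/models/layer.rb
class Layer < ActiveRecord::Base
  has_and_belongs_to_many :tags
end

# app/models/tag.rb -- existing model, new association
class Tag < ActiveRecord::Base
  has_and_belongs_to_many :layers
end
```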

We can use one of the following gems to store the graph in our database (an example with ancestry follows the list):

  • https://github.com/collectiveidea/awesome_nested_set
  • https://github.com/mceachen/closure_tree
  • https://github.com/stefankroes/ancestry
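
For instance, with the ancestry gem (the third option above) the parent/child relations between layers could be stored like this. This is only a sketch, assuming Docker layers form a tree (each layer has a single parent), which is what ancestry models.

```ruby
# Migration: ancestry stores the path of ancestor ids in a string column.
class AddAncestryToLayers < ActiveRecord::Migration
  def change
    add_column :layers, :ancestry, :string
    add_index  :layers, :ancestry
  end
end

# app/models/layer.rb
class Layer < ActiveRecord::Base
  has_ancestry
end

# Wiring up and querying the graph:
base  = Layer.create!(digest: "sha256:base-layer-digest")
child = Layer.create!(digest: "sha256:child-layer-digest", parent: base)

child.ancestors    # => [base]
base.descendants   # => every layer built on top of `base`
```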

Other changes:

  • React to “push” notification: find and flag outdated children
  • Update the UI when an outdated image is visited
  • Send emails to the users who can update the image (owners/contributors of the namespace containing the Docker image)
  • Trigger a rebuild of all the outdated children that are built from a known Dockerfile

Advantages:

  • Make users aware of outdated images. This will help them to have more secure images.
  • Trigger chain reactions of updates: an outdated child is automatically rebuilt -> the image is pushed to the registry -> Portus is notified -> the outdated children of the just-rebuilt image are looked up -> other automated builds are triggered

Looking for hackers with the skills:

docker portus

This project is part of:

Hack Week 13

