
It is relatively simple to build a first image for your application’s container. The hard part is designing a reasonable “docker build” arrangement that can maintain all your applications over time.

Current Conditions

The application image has to be built by a Jenkins job and stored in Harbor.

A Docker project has to be a single Git repository, and the Dockerfile has to be a file in the root of the working directory obtained by checking out a branch of that repository.

Harbor regularly downloads a new list of security issues and quarantines images that have serious vulnerabilities. Currently, projects are defined so that images with a High severity problem cannot be pulled, either to be run or to be fixed.

Base Images

Each line in a Dockerfile that changes something generates a layer. The content of the layer is the cumulative result of applying all changes from the bottom layer up to the top. Each layer is identified by a unique SHA256 hash of its content, although Docker normally only shows the first 12 hex characters of the hash.
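The 12-character short form is just a truncation of the full digest, which can be sketched with ordinary shell tools (the content string here is arbitrary, only for illustration):

```shell
# Compute a SHA256 digest the way Docker hashes content, then truncate it
# to the 12-hex-character short form that Docker displays
full=$(echo "example layer content" | sha256sum | cut -d' ' -f1)
short=$(echo "$full" | cut -c1-12)
echo "$short"
```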

When you finish processing the Dockerfile, the last layer generated is also called an image. In addition to its obscure hash name, it is common to create an alias (“tag”) that creates a friendly name that is easier to use. When you build a new image using the same Dockerfile you may reassign the same alias/tag to the new image. The old image remains in the Docker cache and may remain in the network image repository server, but now it is only known by its original unique content hash.

You can display all the layers in an image with the “docker history” command, which shows the hash of each layer and the Dockerfile line that created it.

The bottom layer will typically ADD a special file containing a root file system of some Linux distribution created by some vendor (Ubuntu, Debian, Alpine, etc.) and some environment variables and parameters telling Docker how to run that system. Yale doesn’t build Linux images from scratch, so we must start any of our images by referencing one of the Docker Hub images that in turn includes one of these starting Linux distribution image files.

For example, the Docker Hub image named “ubuntu:latest” (as of 3/16/2022) is

>docker pull ubuntu:latest
latest: Pulling from library/ubuntu
7c3b88808835: Pull complete
Digest: sha256:8ae9bafbb64f63a50caab98fd3a5e37b3eb837a3e0780b78e5218e63193961f9
>docker history ubuntu:latest
IMAGE          CREATED       CREATED BY                                      SIZE      COMMENT
2b4cba85892a   13 days ago   /bin/sh -c #(nop)  CMD ["bash"]                 0B
<missing>      13 days ago   /bin/sh -c #(nop) ADD file:8a50ad78a668527e9…   72.8MB

The second line of the history output shows that the special file from Canonical has a SHA256 hash beginning with 8a50ad78a668527e9… and was 72.8 MB. This was turned into a Docker Hub image by adding a line telling Docker that it can run the image by starting the bash program.

We see that the top layer has a hash beginning with 2b4cba85892a. That layer hash also becomes the unique hash identifier of the image, but to make the image easier to reference, the friendly alias “ubuntu:latest” is also assigned (temporarily) to this image. Next week there may be a new updated image, and the “ubuntu:latest” alias will point to the new image, while the 2b4cba85892a identifier will continue to point to this older image.

>docker image ls ubuntu
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
ubuntu       21.10     305cdd15bb4f   13 days ago    77.4MB
ubuntu       latest    2b4cba85892a   13 days ago    72.8MB
ubuntu       <none>    64c59b1065b1   2 months ago   77.4MB
ubuntu       <none>    d13c942271d6   2 months ago   72.8MB

While the “latest” image is 2b4cba85892a, it replaces an older image stored 2 months ago that had a content hash of d13c942271d6. The old image remains stored in the system in case it was used to build other images to run applications. When all such applications have been updated to use the latest starting image, then the old image can be deleted.

Name and Content

A Dockerfile can contain ADD and COPY statements that copy files in the project directory to some location in the image. Docker creates a hash of the contents of that file and remembers it. Docker does not care about the name of the file, or its path location, or its date and attributes. If you change the Dockerfile to reference a file with a different name and location but the same contents, then because the content hash has not changed, Docker regards the ADD or COPY statement as unchanged.

A Dockerfile may contain the line:

COPY one.txt /var/tmp

but what shows up in “docker history” is
COPY file:cd49fd6bf375bcab4d23bcc5aa3a426ac923270c7991e1f201515406c3e11e9f in /var/tmp

Note that although the contents of the file have not changed, copying it to an image may give the destination file today’s date. Because the date is part of the file system, this changes the hash of the layer and of the subsequent new image.
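The content-only comparison behind COPY can be demonstrated outside Docker: two files with different names but identical contents produce the same digest, so a COPY referencing either one is “unchanged” as far as Docker is concerned:

```shell
# Same contents under two different names
echo "hello" > one.txt
echo "hello" > two.txt

# Both digests are identical, so swapping one name for the other in a
# COPY statement does not change what Docker records for the layer
sha256sum one.txt two.txt
```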

How Specific?

The best practice for production applications is to carefully control when they change and what changes are made to them.

However, going back 50 years to mainframe computers, it has always been necessary for system administrators to put monthly maintenance on the operating systems on which applications run. You cannot afford to run systems with known vulnerabilities because you are not ready to “change” a running production application.

Of course, if we change the application itself we must do appropriate testing. It is also necessary to test when we upgrade versions of the OS, Java, Tomcat, database, or other key components. However, if all we do is patch bugs in the system or its libraries, then such maintenance can be more routine.

How do we translate these considerations to the maintenance of images?

There is no simple answer. Red Hat doesn’t contribute to Docker Hub, but maintains its own OpenShift container system. Open source vendors provide maintenance, but that is not the same thing as a paid subscription from a production-oriented subsidiary of IBM.

Everyone knows you don’t base an application on a “latest” tag. If you select “9.0-jdk11-openjdk-bullseye” as your base, you know future images will get Debian 11.2 (bullseye) and the most recent minor release of OpenJDK 11 and Tomcat 9.0. You may be implicitly upgraded from Tomcat 9.0.59 to 9.0.60, but that upgrade will fix bugs and may address security vulnerabilities.
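The tradeoff can be sketched in a Dockerfile; the floating tag below is the one discussed above, while the fully pinned tag is illustrative (check Docker Hub for tags that actually exist):

```dockerfile
# Floating within the major lines: each "--pull" build picks up the latest
# Tomcat 9.0.x and OpenJDK 11 minor releases on Debian 11 (bullseye)
FROM tomcat:9.0-jdk11-openjdk-bullseye

# Fully pinned (illustrative): the base never changes, so builds are fully
# reproducible, but the image also never receives security patches
# FROM tomcat:9.0.59-jdk11-openjdk-bullseye
```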

Using a more specific tag may prevent critical patches. Using a less specific tag will eventually upgrade you to other versions of Debian, Java, or Tomcat when someone changes the defaults.

Which Distribution?

Alpine is the leanest distribution, but the tomcat-alpine image is no longer being maintained. You can use it, but then you have to apply all the maintenance yourself through several years of releases.

In previous years Ubuntu was updated more quickly than Debian. Now, however, Debian has a special source of updates for security patches and makes them available immediately after a vulnerability is announced.

So now this is mostly a matter of personal choice.

FROM Behavior (--pull)

In each section of a Dockerfile, the FROM statement chooses a base image.

The work of a “docker build” command is performed by a “builder” component in the Docker Engine. By default, the builder uses a conservative behavior: it saves and reuses the first image it encounters that matches the alias tag name on a FROM statement, for every later Dockerfile with that same alias.

Specifically, if you process a Dockerfile with a “FROM tomcat:9.0-jdk11-openjdk-bullseye”, and that tag name has not been previously encountered, then the Docker Engine will download the image that at this moment is associated with that alias, will save it in its cache, and from now on will by default reuse that image for all subsequent Dockerfiles that have the same alias in their FROM statement. Meanwhile, tomorrow Docker Hub may get a new image with the same alias but with additional security patches applied.

To avoid this problem, the Yale Jenkins jobs typically add the “--pull” parameter to their “docker build” command. This parameter tells Docker to check whether a newer image is associated with the alias on the FROM command and, if so, to download it.

You probably want to use “docker build --pull” in your sandbox development.

Harbor CVEs

Harbor regularly downloads a database of reported vulnerabilities. It scans images to see if they contain any of these vulnerabilities. High severity problems need to be fixed.

We have found that Ubuntu makes fixes to critical problems available as soon as they are announced. It is not always possible to patch Debian or Alpine.

Once the vendor package libraries contain a fix, problems can be corrected by doing a simple

apt update
apt upgrade -y

If you run the commands without specific package names, they patch everything. Alternatively, someone could try to build a list of just the packages needed to satisfy Harbor, but that is a massive amount of work. Since this list is not specific to IAM, and not even specific to Yale, but is potentially different for different versions of the base OS, creating such a list is clearly something that “somebody else should do”, if only there were someone doing it.

At the Bottom and the Top

No matter what you decide, choosing a specific base image (impish-20220301 or focal-20220302 or impish next month) implies that a specific set of patches have been applied to the original release for you by the vendor. On top of that “level set” you can choose to immediately add packages with an “apt upgrade” before you add software and applications.

However, if you have an application that has been working fine for months and Harbor reports a single critical vulnerability that can be fixed by upgrading a single package, you can do that “apt upgrade” on either the top layer (where almost nothing happens and it builds instantly) or at the bottom layer where you then have to wait minutes for all your software to reinstall on top of the now modified base.
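For example, a single emergency fix can be applied at the top of an existing Dockerfile, where every earlier layer is already cached (the application lines here are placeholders):

```dockerfile
FROM tomcat:9.0-jdk11-openjdk-bullseye
# ... install software, configuration, and the application (all cached) ...
COPY app.war /usr/local/tomcat/webapps/

# Patch at the top layer: the build completes almost instantly because
# every layer above this line is reused from the cache
RUN apt update && apt upgrade -y
```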

The Build Cache

New images are built in the Docker Engine in a specific environment which I will call the Builder. There are two versions of the Builder: the traditional Docker builder and a newer option called BuildKit. Whichever builder you choose, it shares the Image Cache with the hash names and tags described above.

I have already noted one Builder feature that interacts with the Image Cache: when Jenkins does a “docker build --pull”, the Builder downloads any new version of the image whose tag/alias is specified in the FROM statement of the Dockerfile.

There is an entirely separate problem caused by the Builder’s desire to optimize its performance. It keeps a “statement cache” history of layers generated by processing statements in all the Dockerfiles it has previously processed. As long as the statements in the current Dockerfile duplicate the beginning of some other Dockerfile it already processed, then it can avoid doing any work and simply reuse the saved layers generated when it processed the same sequence of statements in the previous Dockerfile. Specifically:

  1. When it processes the FROM statement, if “--pull” was not specified and a previous Dockerfile specified the same image tag name, then it has a previously loaded image in the Cache that it can reuse. Only if “--pull” is specified does it look for a new copy of the image on the original source registry.

  2. If it is processing an ADD or COPY statement, it creates a hash from the contents of the source file and compares it to previously saved hash values of identical ADD or COPY statements that appeared in other Dockerfiles that started with the exact same sequence of statements up to this line. If the content of the file changed, then the new data has to be copied and now we have a new layer and can stop optimizing.

  3. For all other statements, if the next statement matches an identical next statement in a previously processed Dockerfile, then we reuse the layer generated by that previous Dockerfile. If this statement is new, then we have a new layer and stop optimizing.

To see why this is useful, consider building a generic Java Web application on top of the current version of Java 11 and Tomcat 9.0. If you have to start with a Docker Hub image, then every such application begins with what should be identical statements: FROM ubuntu:xxx, patch it, install Java, install Tomcat, install other packages that everyone needs, add common Yale configuration data. Only after all this is done are there a few lines at the end of the Dockerfile to install Netid, or ODMT, or YBATU, or something else.

If we used Docker the way it is supposed to work, we would create our own base image with all this stuff already done. Then the application Dockerfile could have a FROM statement that names our custom Yale Tomcat image and just does the application-specific stuff. However, with this Builder optimization, every application Dockerfile could begin with the same 100 statements, and the Builder would recognize that all these Dockerfiles start the same way, essentially skip over all the duplicate statements, and just get to the different statements at the end of the file.

So if Yale Management doesn’t like us to create our own base images, we can do essentially the same thing if we can design a way to manufacture Dockerfiles by appending a few application specific lines on the end of some identical block of boilerplate code. Since there is nothing in Git or Docker that does this, we could add a step to the Jenkins pipeline to assemble the Dockerfile after the project is checked out from Git and before it is passed to “docker build”.
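A sketch of such a pipeline step, assuming hypothetical file names (boilerplate.docker holding the shared statements, app.docker holding the application-specific tail; neither is an existing convention):

```shell
# Shared boilerplate kept identical across all projects, so the Builder's
# statement cache skips over these statements on every build
cat > boilerplate.docker <<'EOF'
FROM tomcat:9.0-jdk11-openjdk-bullseye
RUN apt update && apt upgrade -y
EOF

# Application-specific tail checked out from the project's Git repository
cat > app.docker <<'EOF'
COPY app.war /usr/local/tomcat/webapps/
EOF

# Assemble the real Dockerfile just before "docker build" runs
cat boilerplate.docker app.docker > Dockerfile
cat Dockerfile
```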

Except for the “RUN apt update && apt upgrade -y”.

If this is done in a Dockerfile that is subject to optimization, the Builder sees that this line is identical to the same line that was processed in a Dockerfile that began with the same statements and was run a month ago, or a year ago. Since the line is identical, it doesn’t actually run it and reuses the same layer it previously generated. This fails to put on any new patches.

There is no formal way to stop optimization, but the solution is obvious. Have Jenkins put today’s date in a file. COPY the file to the image. Then “RUN apt update && apt upgrade -y”. The content of this file changes each day. The processing of COPY checks the content and sees it has changed and stops optimizing the Dockerfile. Then the next statement always runs and patches the image.
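Putting the pieces together, the end of such a Dockerfile might look like this (the file name build-date.txt is an assumption, not an existing convention; Jenkins would run something like “date > build-date.txt” before “docker build”):

```dockerfile
# ... boilerplate and application statements above stay cached ...

# Jenkins rewrites this file with today's date before every build, so its
# content hash changes daily and optimization stops at this COPY
COPY build-date.txt /var/tmp/build-date.txt

# Because the previous layer is always new, this RUN always executes
RUN apt update && apt upgrade -y
```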

However, if you do this at the beginning of the Dockerfile you lose all optimization, so it is better to arrange for this to be at the end of the Dockerfile.
