...

When Docker finishes processing the Dockerfile, the last layer generated is also called an image. In addition to its obscure hash name, it is common to assign an alias (“tag”) that gives the image a friendlier name. When you build a new image using the same Dockerfile again, you may reassign the same alias/tag to the new image. The old image remains in the Docker cache, and may remain in the network image repository server, but it is now known only by its original unique content hash.

You can display all the layers in an image with the “docker history” command, which shows the hash of each layer and the Dockerfile line that created it.

The bottom layer will typically ADD a special file containing a root file system of some Linux distribution created by a vendor (Ubuntu, Debian, Alpine, etc.), along with some environment variables and parameters telling Docker how to run that system. Yale doesn’t build Linux images from scratch, so we must start any of our images by referencing one of the Docker Hub images that in turn includes one of these starting Linux distribution image files.

For example, the Docker Hub image named “ubuntu:latest” (as of 3/16/2022) is

...

On line 8 we learn that the special file someone got from Canonical has a SHA256 hash beginning with 8a50ad78a668527e9… and was 72.8 MB. This was turned into a Docker Hub image by adding a line telling Docker that it can run the image by starting the bash program.

We see that the top layer has a hash beginning with 2b4cba85892a. That layer hash also becomes the unique hash identifier of the image, but to make the image easier to reference, the alias tag “ubuntu:latest” is also assigned (temporarily) to it. Next week there may be a new “latest” image with different contents and a new hash, and the “ubuntu:latest” alias will point to the new image, while the 2b4cba85892a identifier will continue to point to this older image.

Code Block
>docker image ls ubuntu
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
ubuntu       21.10     305cdd15bb4f   13 days ago    77.4MB
ubuntu       latest    2b4cba85892a   13 days ago    72.8MB
ubuntu       <none>    64c59b1065b1   2 months ago   77.4MB
ubuntu       <none>    d13c942271d6   2 months ago   72.8MB

...

A Dockerfile can contain ADD and COPY statements that copy files from the project directory to some location in the image. Docker creates a hash of the contents of each source file and remembers it. Docker does not care about the name of the file, its path location, or its date and attributes. If you change the Dockerfile to reference a file with a different name and location but the same contents, then because the content has not changed, Docker regards the ADD or COPY statement as unchanged.

A Dockerfile may contain the line:

...

but what shows up in “docker history” is
COPY file:cd49fd6bf375bcab4d23bcc5aa3a426ac923270c7991e1f201515406c3e11e9f in /var/tmp

In general, Docker doesn’t care about the source of data in a layer. You can change the source to a different file or URL, but if it has the same content there really hasn’t been a change.
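The content-addressed behavior described above can be illustrated outside Docker with an ordinary checksum tool. In this sketch, the file names and contents are made up for the demonstration:

```shell
# Two files with different names and paths but identical contents
mkdir -p /tmp/hashdemo/a /tmp/hashdemo/b
printf 'server.port=8080\n' > /tmp/hashdemo/a/app.conf
printf 'server.port=8080\n' > /tmp/hashdemo/b/renamed.conf

# Identical content hashes: Docker would treat a COPY of either file
# as the same unchanged statement, regardless of its name or path
sha256sum /tmp/hashdemo/a/app.conf /tmp/hashdemo/b/renamed.conf
```

Both lines of output show the same hash, which is why switching the COPY source to the renamed file is still a cache hit.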

Except for the date. Although the contents of the file have not changed, copying it into an image gives the destination file the current timestamp. Because the timestamp is part of the file system, it changes the hash of the layer and of the subsequent new image. So even if you run the same Dockerfile twice and copy the same data, the generated layer will generally have a different hash on each run.
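The timestamp effect can be sketched with tar, which, like a Docker layer, records file metadata alongside file content (GNU tar’s --mtime option is used here to simulate copying the same file at two different times):

```shell
cd "$(mktemp -d)"
printf 'unchanged content\n' > payload.txt

# Archive the identical file with two different timestamps
h1=$(tar --mtime='2022-01-01 00:00:00' -cf - payload.txt | sha256sum)
h2=$(tar --mtime='2022-02-01 00:00:00' -cf - payload.txt | sha256sum)

# The content is identical, but the metadata differs, so the hashes differ,
# just as a re-copied file's new timestamp changes a layer's content hash
echo "$h1"
echo "$h2"
```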

How Specific?

The best practice for production applications is to carefully control when they change and what changes are made to them. However, at Yale production applications are loaded onto VMs that get updated once a month from Red Hat libraries. We trust Red Hat to update its system carefully. Fifty years ago, applications ran on IBM mainframes that were updated once a month by a similar process. Today, applications that run on a Windows system get monthly Patch Tuesday updates, and application developers don’t track the individual bug fixes.

However, we have to be more careful about the version of the OS we are running (Ubuntu 20.04 or 22.04), the version of Java we are running (Java 8 or 11), and the version of components like Tomcat we are running (Tomcat 8.5, 9, or 10). Upgrades to new versions can change behavior and cause problems for applications.

Generally these principles are already baked into the standard tag names assigned to images in Docker Hub. If you look at the standard images offered that include Debian, Java, and Tomcat, you will find a page that lists all the tags given to a specific supported image. For example:

  • 9.0.60-jdk11-openjdk-bullseye, 9.0-jdk11-openjdk-bullseye, 9-jdk11-openjdk-bullseye, 9.0.60-jdk11-openjdk, 9.0-jdk11-openjdk, 9-jdk11-openjdk, 9.0.60-jdk11, 9.0-jdk11, 9-jdk11, 9.0.60, 9.0, 9

This means that if you just ask for “tomcat:9” you get the image that is specifically tomcat:9.0.60-jdk11-openjdk-bullseye (Tomcat 9.0.60 on top of OpenJDK 11, running on the “bullseye” release of Debian, 11.2).

If you are starting to develop a new application and know you want to use Java 11 and Tomcat 9, this is the default image for those choices (because it has the aliases “9” and “9-jdk11”). However, once you put an application into production, you don’t want things to change unnecessarily, so you might use the more specific alias “9.0.60-jdk11-openjdk-bullseye” and then change that tag only when there is a reported security problem with Tomcat 9.0.60 that is fixed by a later version.
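As a sketch, the two strategies look like this in a Dockerfile (tag names taken from the list above):

```dockerfile
# Development: track the current default for Tomcat 9 on Java 11;
# this floats to newer 9.0.x and OpenJDK 11.x builds over time
FROM tomcat:9-jdk11

# Production alternative: pin the fully specific tag and change it
# deliberately when a fix is needed
# FROM tomcat:9.0.60-jdk11-openjdk-bullseye
```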

If you come back and rebuild this application after a few months, you may find that the new image has bug fixes to OpenJDK 11 and Debian bullseye. You may choose to just accept such fixes in the same way that you accept the monthly RHEL or Windows maintenance applied to an application running on a VM.

Alternately, you may decide to control every single change made to a production application, but that may be an unreasonable burden.

However, going back 50 years to mainframe computers, it has always been necessary for system administrators to put monthly maintenance on the operating systems on which applications run. You cannot afford to run systems with known vulnerabilities because you are not ready to “change” a running production application.

Of course, if we change the application itself we must do appropriate testing. It is also necessary to test when we upgrade versions of the OS, Java, Tomcat, database, or other key components. However, if all we do is patch bugs in the system or libraries, such maintenance can be more routine.

How do we translate these considerations to the maintenance of images?

There is no simple answer. Red Hat doesn’t contribute to Docker Hub, but maintains its own OpenShift container system. Open source vendors provide maintenance, but that is not the same thing as a subscription to a production oriented subsidiary of IBM.

Everyone knows you don’t base an application on a “latest” tag. If you select “9.0-jdk11-openjdk-bullseye” as your base, you know future images will get Debian 11.2 (bullseye) and the most recent minor release of OpenJDK 11 and Tomcat 9.0. You may be implicitly upgraded from Tomcat 9.0.59 to 9.0.60, but that upgrade will fix bugs and may address security vulnerabilities.

Using a more specific tag may prevent critical patches. Using a less specific tag will eventually upgrade you to other versions of Debian, Java, or Tomcat when someone changes the defaults.

Which Distribution?

Alpine is the leanest distribution, but the tomcat-alpine image is no longer being maintained. You can use it, but then you have to put all the maintenance on it through several years and releases.

In previous years, Ubuntu was updated more quickly than Debian. However, in the last few years there has been an intense focus on reported vulnerabilities and security patches. In response, Debian created a separate repository for fixes to reported vulnerabilities, and it deploys patches there as soon as possible. This does not, however, extend to non-security bugs, where Ubuntu may still be quicker to make fixes available.

If you want a Docker Hub standard image with Java and Tomcat pre-installed, you can choose between Debian (“bullseye”) and Ubuntu (“focal”, or 20.04, the most recent LTS release, soon to be replaced by 22.04).

Any additional comments from tracking Docker Hub releases and their ability to patch vulnerabilities will be added here as we gather experience.

FROM Behavior

Docker has a special conservative behavior when processing an ambiguous tag in the FROM statement, which is the first statement in a Dockerfile and sets the “base image” for this build. The first time you build a Dockerfile that begins with the statement

FROM ubuntu:latest

The Docker Engine doing the build downloads the image associated with this tag from Docker Hub and associates the name “ubuntu:latest” with that image for all subsequent Dockerfile image builds until you specifically replace it. For a desktop build, you add the parameter “--pull” to a “docker build” command to tell Docker to check for a newer image currently associated with this tag name, download it, and make it the new “ubuntu:latest” for subsequent builds.

Yale’s Jenkins build process does not use “--pull” exactly, but accomplishes the same thing with a different technique. So when you build an image with Jenkins, you get the current image associated with a tag, and generally you want to add “--pull” to your desktop build to match. If it is really important to start with a very old base image, use the 12-character unique hash name to be sure you get what you want.

FROM Behavior (--pull)

In each section of a Dockerfile, the FROM statement chooses a base image.

The work of a “docker build” command is performed in a “builder” component in the Docker Engine. By default, the builders use a special conservative behavior: they save the first image downloaded for a given alias tag name and reuse it for every later FROM statement with that alias.

Specifically, if you process a Dockerfile with a “FROM tomcat:9.0-jdk11-openjdk-bullseye”, and that tag name has not been previously encountered, then the Docker Engine will download the image that at this moment is associated with that alias, will save it in its cache, and from now on will by default reuse that image for all subsequent Dockerfiles that have the same alias in their FROM statement.

To avoid this, the “docker build” command used by Yale Jenkins image build jobs typically specifies the “--pull” parameter. This causes Docker to check the source network repository for an image newer than the one stored in the local Engine cache. If one is available, the newer image is downloaded and used.

Since this is the normal behavior of Jenkins, a developer should also specify “--pull” on any “docker build” command used in a development sandbox. That way the image used in the sandbox for unit testing will be as close as possible to the one built by Jenkins and used in the Yale cluster for final testing and production deployment.

However, you should understand that running the same image build using the same Dockerfile a second time may pull a newer base image with additional maintenance installed if, by luck, maintenance was done to that image alias in Docker Hub.

Harbor CVEs

Harbor regularly downloads a database of reported vulnerabilities. It scans images to see if they contain any of these vulnerabilities. High severity problems need to be fixed.

It used to be that Ubuntu made fixes to critical problems available as soon as they were announced, while it was not always possible to patch Debian or Alpine.

Once the vendor package libraries contain a fix, problems can be corrected by doing a simple

apt update
apt upgrade -y

If you run the commands without specific package names, the commands patch everything. Alternately, someone could try to build a list of just the packages needed to satisfy Harbor, but that is a massive amount of work. Since this list is not specific to IAM, and not even specific to Yale, but is potentially different for different versions of the base OS, creating such a list is clearly something that “somebody else should do” if only there were someone doing it.

At the Bottom and the Top

No matter what you decide, choosing a specific base image (impish-20220301 or focal-20220302 or impish next month) implies that a specific set of patches have been applied to the original release for you by the vendor. On top of that “level set” you can choose to immediately add packages with an “apt upgrade” before you add software and applications.

However, if you have an application that has been working fine for months and Harbor reports a single critical vulnerability that can be fixed by upgrading a single package, you can do that “apt upgrade” on either the top layer (where almost nothing happens and it builds instantly) or at the bottom layer where you then have to wait minutes for all your software to reinstall on top of the now modified base.
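A sketch of the fast “top layer” fix, assuming a hypothetical application image name:

```dockerfile
# Start from the existing, already-built application image
# (image name is illustrative)
FROM yale/some-app:1.0

# Apply only the vendor patches; every layer below this one is reused
# from the cache, so the build completes almost instantly
RUN apt update && apt upgrade -y
```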

The Build Cache

New images are built in the Docker Engine in a specific environment which I will call the Builder. There are two versions of the Builder: traditional Docker and a newer option called BuildKit. Whichever builder you choose, it shares the Image Cache of hash names and tags described above.

I have already noted one Builder feature that interacts with the Image Cache. If Jenkins were to ever do a “docker build --pull” it would download any new version of the image whose tag/alias is specified in the FROM statement of the Dockerfile.

There is an entirely separate problem caused by the Builder’s desire to optimize its performance. It keeps a “statement cache” history of layers generated by processing statements in all the Dockerfiles it has previously processed. As long as the statements in the current Dockerfile duplicate the beginning of some other Dockerfile it has already processed, it can avoid doing any work and simply reuse the saved layer generated when it processed the same sequence of statements in the previous Dockerfile. Specifically:

  1. When it processes the FROM statement, if “--pull” was not specified and a previous Dockerfile specified the same image tag name, then it has a previously loaded image in the Cache that it can reuse. Only if “--pull” is specified does it look for a new copy of the image on the original source registry.

  2. If it is processing an ADD or COPY statement, it creates a hash from the contents of the source file and compares it to previously saved hash values of identical ADD or COPY statements that appeared in other Dockerfiles that started with the exact same sequence of statements up to this line. If the content of the file changed, then the new data has to be copied and now we have a new layer and can stop optimizing.

  3. For all other statements, if the next statement matches an identical next statement in a previously processed Dockerfile, then we reuse the layer generated by that previous Dockerfile. If this statement is new, then we have a new layer and stop optimizing.

To see why this is useful, consider building a generic Java Web application on top of the current version of Java 11 and Tomcat 9. If you have to start with a Docker Hub image, then every such application begins with what should be identical statements: FROM ubuntu:xxx, patch it, install Java, install Tomcat, install other packages that everyone needs, and add common Yale configuration data. Only after all this is done are there a few lines at the end of the Dockerfile to install Netid, or ODMT, or YBATU, or something else.

If we used Docker the way it is supposed to work, we would create our own base image with all this stuff already done. Then the application Dockerfile could have a FROM statement that names our custom Yale Tomcat image and just does the application-specific work. However, with this Builder optimization, every application Dockerfile can begin with the same 100 statements, and the Builder will recognize that all these Dockerfiles start the same way, essentially skip over the duplicate statements, and only do real work on the different statements at the end of the file.

So if Yale Management doesn’t like us to create our own base images, we can do essentially the same thing if we can design a way to manufacture Dockerfiles by appending a few application specific lines on the end of some identical block of boilerplate code. Since there is nothing in Git or Docker that does this, we could add a step to the Jenkins pipeline to assemble the Dockerfile after the project is checked out from Git and before it is passed to “docker build”.
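Such a pre-build assembly step could be as simple as concatenating a shared boilerplate file with an application-specific tail. This sketch uses hypothetical file names and a deliberately tiny boilerplate:

```shell
cd "$(mktemp -d)"

# Shared boilerplate: base image, patching, Java/Tomcat installs,
# common Yale configuration (abbreviated here to two lines)
printf 'FROM ubuntu:20.04\nRUN apt update && apt upgrade -y\n' > Dockerfile.boilerplate

# Application-specific tail, kept in the application's Git project
printf 'COPY app.war /usr/local/tomcat/webapps/\n' > Dockerfile.app

# Assemble the Dockerfile that would be passed to "docker build"
cat Dockerfile.boilerplate Dockerfile.app > Dockerfile
cat Dockerfile
```

Because every assembled Dockerfile begins with the identical boilerplate, the Builder’s statement cache skips straight to the application-specific lines.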

Except for the “RUN apt update && apt upgrade -y”.

If this is done in a Dockerfile that is subject to optimization, the Builder sees that this line is identical to the same line that was processed in a Dockerfile that began with the same statements and was run a month ago, or a year ago. Since the line is identical, it doesn’t actually run it and reuses the same layer it previously generated. This fails to put on any new patches.

There is no formal way to stop optimization, but the solution is obvious. Have Jenkins put today’s date in a file. COPY the file to the image. Then “RUN apt update && apt upgrade -y”. The content of this file changes each day. The processing of COPY checks the content and sees it has changed and stops optimizing the Dockerfile. Then the next statement always runs and patches the image.
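A sketch of the trick: Jenkins (or a wrapper script) writes today’s date to a file before the build, and the Dockerfile copies that file just before the upgrade step (the file name is illustrative):

```shell
cd "$(mktemp -d)"

# Before "docker build": record today's date; the file content changes daily
date +%Y-%m-%d > build-date.txt

# The Dockerfile would then contain, just before the upgrade:
#   COPY build-date.txt /tmp/build-date.txt
#   RUN apt update && apt upgrade -y
# The changed file content breaks the statement cache at the COPY,
# so the RUN that follows is always re-executed.
cat build-date.txt
```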

However, if you do this at the beginning of the Dockerfile you lose all optimization. So it is better to arrange for this step to be near the end of the Dockerfile.

Since the base image may have been built before the most recent problems were reported, it is a good idea to include a

RUN apt update && apt upgrade -y

in your Dockerfile. To make sure this works, you need to understand the Build Cache.

Build Cache

As was previously mentioned, each line in a Dockerfile generates a new layer stored in the Engine Cache and identified by a hash of the layer contents.

By default, the Builder optimizes its performance by saving each line of a Dockerfile it has processed, linked to the hash of the layer that line generated. In subsequent image builds it compares lines from the new Dockerfile to the saved lines from previous builds. If the current Dockerfile begins with a sequence of statements that match statements in a previously processed Dockerfile, the Builder does not rerun the processing but simply reuses the layer generated by each statement in the previous build.

We have already noted that there is a special rule for the base image in the FROM statement. Once a base image is loaded, it is reused in all subsequent builds until the “--pull” parameter is specified in the “docker build” command.

When an ADD or COPY statement is encountered, the Builder obtains a hash of the content of source files. Although the name of the source file may be the same in the Dockerfile, it does not regard the statements as being identical if the content of the source file is different from the content in the previous build.

However, this does not solve the problem when a RUN statement explicitly or implicitly references source files in some network server. In particular:

RUN apt update && apt upgrade -y

implicitly references the current packages provided by the image vendor. These packages will change whenever there is a fix to an important bug, but there is no way for the Builder to determine if new packages are available.

To make sure that the latest data is used to build images, Jenkins also specifies the “--no-cache” parameter on a “docker build”. This parameter disables the Builder optimization and forces every image build to re-execute all the statements in the Dockerfile.

Again, since this is the default in Jenkins processing, it should be specified explicitly in the “docker build” command you run in your development sandbox to build images for initial testing.

Although this parameter forces each Dockerfile command to be run again, the command may generate the exact same layer (for example, if no new packages were added, the “apt upgrade” reapplies the same packages and builds the same layer as the previous build). In that case the new layer has the same content hash, and the Engine layer cache will already have a copy to reuse. The same applies when you “docker push” an image you built to an image registry like Harbor: even though you reran each line in the Dockerfile, if a generated layer has the same contents and hash as a previously generated layer, Docker will discover this and report that the layer already exists and did not have to be pushed to the destination registry.