...
The Jenkins job builds an image using just the Dockerfile with no custom parameters. Therefore, the Dockerfile all by itself must build an image for the Yale environment, while files in this project override build args and provide additional tags only on the developer’s desktop.
The exact method of doing the equivalent of a “docker build” on the desktop is defined by the developer in a profile. It is possible to use a local docker command, or a remote ssh docker command, or a substitute for Docker like Podman.
Each user provides a profile that defines the Sandbox environment (the local substitute for Artifactory and Harbor for example) and the personal choices (the use of a local docker, remote docker, or substitute).
A script file is included in every project and is run only on the developer desktop to do that type of build. The project specific script file contains project specific parameters, while the generic code shared by all projects built by this developer is in the developer’s profile script.
The developer may run a copy of Harbor on his machine, or may choose to scan images and share images across VMs using a different method. All images are tagged on the developer machine with the same prefixes they will have in the Yale Harbor server.
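To make this concrete, a profile might begin with a few variables describing the Sandbox environment and the personal build choice. The $SANDBOX_ARTIFACT_SERVER and $SANDBOX_IMAGE_REGISTRY names appear later on this page; $SANDBOX_BUILD_COMMAND is purely an illustration of how the personal choice could be expressed:

# Hypothetical excerpt from $HOME\SandboxProfile.ps1 -- values are examples only
$SANDBOX_ARTIFACT_SERVER = "http://repository.yale.sandbox/artifactory"   # local substitute for Artifactory
$SANDBOX_IMAGE_REGISTRY  = "harbor.sandbox.local"                          # local substitute for Harbor
$SANDBOX_BUILD_COMMAND   = "docker"                                        # or "wsl docker", "ssh userid@vm docker", "podman", ...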
General Approach

The project’s script is the place to define any project specific overrides. For example, the Dockerfile may contain a reference to a Released artifact version such as 1.0.19 while the script may contain the developer’s override version 1.0.20-SANDBOX.
Whenever there is a value in the Dockerfile that should be different in the Sandbox from the Jenkins build value, create an ARG and set the default to match the value in Jenkins. Then you can override this ARG value by adding a --build-arg parameter to the “docker build”.
Examples:
ARG ARTIFACT_SERVER=https://repository.its.yale.edu/artifactory/libs-releases-local
ARG YaleArtifactVersion=1.0.48
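For example, a desktop build could override the second ARG above roughly like this (the -SNAPSHOT value and the image tag are placeholders matching the build.json example below):

docker build `
    --build-arg YaleArtifactVersion=1.0.48-SNAPSHOT `
    -t iiq:8.1p1-yale-1.0.48 .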
Overrides for ARG values and other properties
Putting the project specific --build-arg overrides in the script would make every script file unnecessarily unique. Instead, the override values are stored in a build.json file in the project that is only used on the developer desktop. Code in the profile script reads this file and automatically generates the override parameters.
{
    "new_image_tag": "iiq:8.1p1-yale-1.0.48",
    "build_arg": {
        "YaleArtifactVersion": "1.0.48-SNAPSHOT",
        "SailpointVersion": "8.1p1"
    }
}
The only parameter required in the build.json file is “new_image_tag”. The tag is unique to each project and cannot be specified in the Dockerfile (it must be a -t parameter on the “docker build” command); in Jenkins it is generated from the job parameters, so on the developer’s desktop this file is the only place it can come from.
Note that the ARG YaleArtifactVersion default is a “production” version number (after a Perform Release has been done), while the override value in the JSON is the -SNAPSHOT version number used while developing before you generate a Release.
A best practice is that when an image contains a Yale Artifact, the Artifact Version number is in the image tag, so it is easy to tell what version of the application was put into the image.
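The code in the profile that reads build.json does not need to be elaborate. A minimal sketch, assuming the JSON layout shown above (the variable names here are only illustrative):

# Read the project's build.json and turn each build_arg entry into a --build-arg parameter
$buildConfig = Get-Content .\build.json -Raw | ConvertFrom-Json
$image_tag   = $buildConfig.new_image_tag
$extra_args  = @()
foreach ($prop in $buildConfig.build_arg.PSObject.Properties) {
    $extra_args += "--build-arg $($prop.Name)=$($prop.Value)"
}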
The ARG ARTIFACT_SERVER would typically not be in the build.json file because it is a permanent part of the developer desktop configuration. If the user provides a local artifact server URL, it will be set in the user’s $HOME\sandboxProfile.ps1 file:
$SANDBOX_ARTIFACT_SERVER="http://repository.yale.sandbox/artifactory"
The profile will then add it to the list of --build-arg parameters passed to the “docker build” command:
$build_args=[System.Collections.ArrayList]@( `
    "--build-arg ARTIFACT_SERVER=$SANDBOX_ARTIFACT_SERVER", `
    "--build-arg IMAGE_REGISTRY=$SANDBOX_IMAGE_REGISTRY" `
)
The convention is for the profile to expose a build-image command and to have the individual build.ps1 file in each project call that function to do the “docker build” or its equivalent. There are alternate approaches: “wsl docker build” (run it in WSL), “ssh userid@vm docker build” (run it on a VM), “multipass exec vm docker build” (a special version of ssh for Multipass), “wsl podman build” (run the Podman replacement for Docker in WSL), or “nerdctl build” (Rancher Desktop). By putting the actual command in the profile script, the user can choose which technology he prefers to use.
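As a rough sketch of that convention, the build-image function in the profile could combine the profile-wide $build_args with the project values read from build.json and hand the result to whichever command the developer prefers. The $extra_args, $image_tag, and $SANDBOX_BUILD_COMMAND names here are the illustrative ones from the earlier sketches, not a required interface:

function build-image {
    # Combine the profile-wide overrides with the project overrides from build.json
    $allArgs = (@($build_args) + $extra_args) -join " "
    # $SANDBOX_BUILD_COMMAND is the personal choice: "docker", "wsl docker",
    # "ssh userid@vm docker", "podman", "nerdctl", ...
    Invoke-Expression "$SANDBOX_BUILD_COMMAND build --no-cache --pull $allArgs -t $image_tag ."
}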
The project specific build.ps1 script defines variables, includes the Profile script, and then calls the build-image command defined by the profile script. Most build.ps1 files are minimal and identical, sharing this common content:
if (test-path $HOME\SandboxProfile.ps1) {
    . $HOME\SandboxProfile.ps1
    build-image
} else {
    Write-Host "You have no $HOME/SandboxProfile.ps1 file. Get it from https://git.yale.edu/gilbert/SandboxImageBuild"
}
Remember, the build.ps1 and build.json files are checked into the Git project but are not used by the Jenkins build. While they contain information for you, they are mostly intended to help the next developer who works on this project, providing an example of things that they may want to customize.
Parameters for the Build
Most parameters like --build-arg and -t are described in the Docker documentation and have obvious uses. Two parameters are always supplied by Jenkins and turn out to be very useful for developers as well. They are more obscure and should be explained:
--no-cache disables a Docker optimization that caches the result of each line in the Dockerfile and, in a subsequent run of the same file, reuses the cached result instead of re-executing the statement. This can occasionally be helpful if you have a very long running statement that never changes, but in practice it is often a bad optimization. Some statements reference things on the network that change over time, so even though the statement itself has not changed, the result would now be different. An example almost everyone uses is:

RUN apt update && apt upgrade -y

This is intended to install all of the very latest fixes, but if the line was run and cached a month ago then Docker will reuse that result, which omits fixes released in the last month. There are less obvious examples as well; Dockerfile line caching produces unexpected results for unsuspecting users, and --no-cache is the better default.
--pull is a special version of --no-cache that applies to the image in the FROM statement. By default, Docker does not check whether there is a newer version of the image in the source repository (DockerHub, for example) when you already have an image downloaded with the same tag. This is fine if the tag refers to a specific, unchanging version. However, when you are referencing a “latest” image, you may want to replace an old base image with an updated version that has bugs fixed, and you need to suppress the unwelcome Docker optimization by forcing it to --pull a newer image if one is available.
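For a developer who follows the Jenkins defaults, the two flags simply go on the front of the command, roughly like this (the image name is illustrative):

docker build --no-cache --pull -t iiq:8.1p1-yale-1.0.48 .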
A DockerHub image like “tomcat:latest” is periodically updated with the latest version and fixes. Nobody would use that tag directly, but more specific tags that pin jdk11 and Tomcat 9 are still automatically updated with the latest release of Tomcat 9 and the latest fixes to jdk11. When you use a base image name that is periodically updated and you want the most recent version, you have to specify --pull on the docker build, or Docker will reuse the previously pulled image that had the same tag name.

Base Image Choice

The most common choice for Docker Hub images is Debian. When it became common to scan images for vulnerabilities, Debian was initially slow to respond. Today, however, they have a special package source URL for vulnerability fixes and make a fix available as soon as a vulnerability is announced. This is, therefore, the best choice when looking for a clean scan.
If you look at all the Tomcat tags in DockerHub, you will find versions based on several distributions. Currently Debian is the simplest distribution that remains current and has fixes for all reported vulnerabilities.