Each subproject creates a JAR or WAR file. The first project is cas-server-core; it builds a JAR file containing roughly 95% of all the CAS code. It has to be built first because all the other projects depend on it. After that, there are projects to create optional components that you may or may not choose to use.

Building the WAR

The CAS WAR that you actually run in the Web server is built in two steps in two Maven projects.

Apereo distributes a project called cas-server-webapp to create an initial prototype WAR file. The WAR it creates is not particularly useful, but it contains at least a starter version of all the Spring XML used to configure CAS and many CSS, JSP, and HTML pages. It also contains a WEB-INF/lib with the JAR libraries needed by a basic CAS system.

Although you can modify the cas-server-webapp project directly, this results in a directory with a mixture of Apereo files and Yale files, and the next time you get a new CAS release from Apereo you have to sift through them to find out which ones have been changed by Apereo.

Apereo recommends using the WAR Overlay feature of Maven. Yale creates a second WAR project called cas-server-yale-webapp. Instead of copying all the files from the Apereo project, the WAR Overlay project contains only the files that Yale has changed or added. Generally the WAR Overlay includes:

  • Yale Java source for Spring Beans that are slightly modified versions of standard Apereo beans.
  • Yale "branded" look and feel (CSS, JSP, HTML).
  • A pom.xml file with additional dependencies beyond just the cas-server-core that was included in the template WAR built by cas-server-webapp. For example, Yale depends on cas-server-support-ldap and cas-server-integration-ehcache. We don't modify them, but since they are optional modules they were not included in the template WAR. Adding them to the dependencies in the Overlay project POM adds them to our WEB-INF/lib.

The WAR Overlay project references the template WAR, and at the end of processing the files that Yale added or changed are "overlaid" on top of the template to build the cas.war file that is actually deployed in production. Modified files are replaced. New files are added.
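
As a minimal sketch of what this looks like in the Overlay project's pom.xml (coordinates, versions, and scopes here are illustrative, not copied from the actual Yale POM), the template WAR is declared as the first dependency and the optional Apereo modules are added as ordinary JAR dependencies:

Code Block
  <!-- Illustrative WAR Overlay POM fragment; versions and scopes are placeholders -->
  <packaging>war</packaging>
  <dependencies>
    <!-- The Apereo template WAR; a dependency of type "war" makes this an overlay project -->
    <dependency>
      <groupId>org.jasig.cas</groupId>
      <artifactId>cas-server-webapp</artifactId>
      <version>3.5.2.1</version>
      <type>war</type>
      <scope>runtime</scope>
    </dependency>
    <!-- Optional Apereo modules Yale uses; these JARs end up in WEB-INF/lib -->
    <dependency>
      <groupId>org.jasig.cas</groupId>
      <artifactId>cas-server-support-ldap</artifactId>
      <version>3.5.2.1</version>
      <scope>runtime</scope>
    </dependency>
    <dependency>
      <groupId>org.jasig.cas</groupId>
      <artifactId>cas-server-integration-ehcache</artifactId>
      <version>3.5.2.1</version>
      <scope>runtime</scope>
    </dependency>
  </dependencies>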

At Yale, when you run the Jenkins Build job for CAS, the result of that job is to create a cas.war file and store it in the Yale Artifactory server with a version number and some hierarchical naming. For example, Version 1.1.1 of the Yale CAS Server is stored as https://repository.its.yale.edu/maven2/libs-releases-local/edu/yale/its/cas/cas-server-war/1.1.1/cas-server-war-1.1.1.war.

One or Two Projects?

If you were never ever going to change any Apereo source, then it would make sense to have two separate projects in your Eclipse workspace. One project would be the absolutely vanilla CAS source you downloaded from Apereo, and the other project would be the additional Yale modules and the WAR Overlay project. The Yale project would depend on the Apereo artifacts, but the only reason for having the source is so you can see methods that you step into while debugging your own code.

However, Apereo code can have problems. Sometimes it is a bug. Sometimes it is just a difference of opinion between the Yale way of doing things and the Apereo way. Sometimes you want to add in specific support for features that Apereo is working on but has not yet finished.

  • Apereo defaults all registered services to be able to Proxy, but it seems better if the default is that they cannot Proxy unless you explicitly authorize it. This involves changing the default value for allowedToProxy in AbstractRegisteredService.java from "true" to "false".
  • AbstractTicket is a Serializable class, but the Apereo source omits a serialVersionUID. As a result, every time you deploy a new version of cas-server-core you have to "cold start" CAS by wiping out all the current tickets. You should only have to do that if you actually changed the Ticket classes, not just because you recompiled the library to change something that has nothing to do with Tickets.
  • Yale does not use Single Sign-Out, but code in TicketGrantingTicketImpl added to support that option has a very small but non-zero chance of throwing a ConcurrentAccessException and possibly screwing CAS up. Since we don't need it, Yale might just comment out the statement that causes the problem.

Each of these is a one-line change, and only the last is important. If CAS were not open source we would probably just live with the old code, but we have the chance to fix it.

Once you start to consider changing Apereo code, then it is inconvenient to maintain two separate projects. For one thing, you need two separate Jenkins build jobs, one for each project. Then you have release numbering issues.

The alternative is to have a single project that combines both the Apereo source for a CAS release (example: 3.5.2.1) and the Yale code that has been built and tested for that CAS release (separately designated Version 1.1.x). Most of the time the Apereo source just sits there and is not used and is not compiled because it is not changed. If you need to make a modification, it is immediately available.

The single project may be confusing, unless you have read this document. There is a large block of source checked into Subversion that appears in your Eclipse workspace, except that none of it is actually used. Why is it there at all? If the Apereo source is not going to be changed, shouldn't it be separated out for clarity?

There are advantages to the Two Project approach, and advantages to the One Project approach. Currently we use the single project structure, but if you feel really strongly about it you can separate the code in some future release.

The Parent pom.xml

The top level directory is the "parent" project. Typically it is named "cas-server", although this is simply a choice you make when you check the source out from SVN. The only things in the parent project are its pom.xml file and a few README-type files.

However, one of the more complicated problems when Yale migrates from one release of CAS to another is to reconcile our changes to the parent pom.xml with any Apereo changes. This is not the same type of code migration that occurs if you have modifications to some HTML or Java source file.

All the CAS code, configuration files, and HTML do the same thing at Yale that they do everywhere else and that Apereo designed them to do. So our version is fairly close to the original.

However, the top level pom.xml file is not just about compiling the source and building the template WAR file (which is the part we need). Apereo is in the business of maintaining an Open Source Project that is distributed widely on the Internet. A large part of their concern is to make sure that all the legal boilerplate is properly maintained. Do the files all have a proper copyright notice? Are all the licenses to the dependency files properly listed? Maven has some impressive support for doing all this open source release management, but unless that is something you specifically have learned, setting up the legal boilerplate is a daunting task.

Yale is a CAS customer. We may have our own files, but unless and until we contribute them back to Apereo we do not care if a file that nobody sees outside Yale has a proper open source copyright notice and license declaration. If we try to use the Apereo top level pom.xml file as distributed, then every time we try to cut a Yale release we get error messages for all the boilerplate that is missing from our own files. So we remove all that stuff and create our own top level pom.xml.

Given that we have to make substantial changes, we might accept that this pom.xml is a Yale file and no longer a modified version of the original Apereo project. Then for sanity, and because our release process requires it, we change it to reflect the rest of the Yale environment:

  • Version - There are two version numbers, one for unmodified Apereo code corresponding to a CAS release (3.5.2.1) and one representing the internal Yale Release of the CAS artifact that we put into production (1.1.x or 1.1.x-SNAPSHOT). The Yale top level project and all the Yale source use the Yale version ID. If we modify an Apereo project (say we make a change to cas-server-core) then we also change the version number of that JAR file from the Apereo to the corresponding Yale ID numbering.
  • SCM - defines the Yale Subversion server where Yale maintains its production source.
  • Repositories - defines Yale's Artifactory, but merged with other sources for artifacts from the Apereo pom.
  • pluginManagement - Maven is a modular system of extensible functions implemented by optional "plugin" modules. The first step for Yale was to ditch the "notice" and "license" plugins that manage the legal boilerplate. Then we make a few parameter changes to a few other plugins. For example, when you are done working on Version 1.1.1-SNAPSHOT and want to create the official 1.1.1 Yale version of CAS and then reset the project to begin work on 1.1.2-SNAPSHOT, you call the Yale Jenkins Build job "Perform Maven Release" operation, which in turn uses the maven-release-plugin. At Yale we find that running the JUnit tests during this process is not only slow but a bad idea, so we change the configuration of that plugin in the parent pom (a sketch follows this list).
  • dependencyManagement - In any official CAS Release, all of the code is coordinated so all modules and options use the same versions of all the same libraries that will be distributed in the WAR and made available at runtime. Unfortunately, when Yale develops its own code it may need a later version of the same library. For example, Apereo is happy with commons-io Version 2.0, but Yale needs Version 2.4. Apache is smart enough to make the 2.4 library backward compatible with programs written for 2.0, but swapping in a newer version of a library is tricky. The first step is that our top level pom.xml declares the 2.4 version to be the one we want so that all our projects use the same version. This also ensures that the 2.4 version of the commons-io JAR will be merged into the WAR during the Overlay processing.
  • However, there is a final processing step that has to be done in the WAR Overlay project and is related to but not directly specified in the parent pom.xml file. Because we are using some unmodified CAS 3.5.2.1 modules (particularly cas-server-webapp) and then we merge in new libraries from the WAR Overlay project, the resulting cas.war file would have both commons-io-2.0.jar and commons-io-2.4.jar in WEB-INF/lib (2.0 from vanilla cas-server-webapp and 2.4 from our project). So we have to add an exclude statement in the configuration of maven-war-plugin in the WAR Overlay project to remove the unwanted commons-io-2.0.jar file. Generally speaking, we remove any file that the build process would normally include that causes problems under JBoss, and any older copy when multiple versions of the same library are present and we only want to keep the latest version.
  • modules - The CAS Project (that is the source checked into SVN that also is found in the Eclipse workspace) contains all the projects distributed by Apereo in a CAS release. It is just that we do not use most of them and we try to avoid changing them, so we don't need to compile or build them. That is accomplished by deleting (or commenting out) all the <module> statements in the Apereo parent pom.xml that reference unmodified Apereo source projects, then adding <module> statements for all the Yale projects. Even though the source and the project are physically present as subdirectories, if the parent pom.xml doesn't refer to a subdirectory in a <module> statement then Maven ignores that directory during the build. On the other hand, if we decide to make a change to an Apereo project then we add back in or uncomment the <module> statement, but with Yale changes we also alter the version ID in the pom.xml for that subdirectory from Apereo "3.5.2.1" to Yale "1.1.x-SNAPSHOT".
  • Parameters - At the end of the POM there are parameters that specify the version numbers of dependency libraries and Java.
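
As promised above, here is a rough sketch of the pluginManagement change (the configuration values are representative, not copied from the actual Yale parent POM). It tells the maven-release-plugin used by the Jenkins "Perform Maven Release" operation to skip the JUnit tests:

Code Block
  <!-- Illustrative pluginManagement fragment in the Yale parent pom.xml -->
  <build>
    <pluginManagement>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-release-plugin</artifactId>
          <configuration>
            <!-- Pass -DskipTests to the forked build so the JUnit tests do not run -->
            <arguments>-DskipTests</arguments>
          </configuration>
        </plugin>
      </plugins>
    </pluginManagement>
  </build>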

Subversion

Because CAS is maintained by Apereo in Git, and because of the way we structure and maintain the project, if you really wanted to track the history of CAS modules properly Git would be a much better choice. However, the Yale Build process is based on Subversion and that is what we have to maintain.

So in any given cycle of CAS development, you begin by obtaining a copy of the current CAS release from Apereo. You then replace the top level pom.xml with a new one created by merging any changes made by Apereo since the last release with the prior Yale top level pom.xml (and this has to be done manually by someone who understands Maven). Then you merge in the Yale added subdirectories.

This becomes a new project in the /cas area of Yale Subversion. Because it is nominally new, you start without history. If you want to see the history for Apereo code, use Git. If you want to see the actual history of Yale modules, find the Subversion directory of previous CAS development projects. Generally there are several years of stable operation between CAS development projects, and if the thing has been working for years you are probably no longer interested in the "ancient history" of a module.

The ReleaseNotes.html file in the root of the cas-server project will identify the current state of new directories and any modified files. This should be a descriptive match for the physical code change history in SVN.

The Jenkins Build and Install Jobs

In order to put CAS into production, you have to conform to the Yale Jenkins process.

There is a Jenkins Build job. The developer normally runs it to compile the current SVN trunk to produce a Version 1.1.x-SNAPSHOT of the cas.war artifact, which is stored in the Artifactory server. The convention of a Jenkins Build job is that it checks out a copy of the entire project from SVN to a Jenkins managed server and then it does a "mvn clean install" on the top level project (the parent pom.xml). Maven will in turn do a clean install on each of the subprojects referenced by <module> statements to create updated artifacts, of which the only important one is the cas.war file.

There is a Jenkins Install job. It also checks out the installer project from SVN. That project also has a Maven POM, but the Yale convention is that it runs an Ant build.xml script to download a copy of the cas.war file created in a previous Build step and copy it over to the JBoss deploy directory. As files are copied some edits are made "on the fly" to insert parameter values for the userids and passwords used to access databases and AD, or to configure special options in Spring.

The job of the developer is to commit changes to SVN that make the Build and Install job work properly to create the CAS instance running in DEV, TEST, and PROD. In order to make those changes, you need a desktop Sandbox environment that can not only compile, test, and debug CAS but can also prototype the Jenkins Build and Install process (without using Jenkins of course).

Development

CAS runs in production on Red Hat Enterprise Linux, but you can do development with Eclipse and JBoss running on Windows or any other OS. If you plan to work on SPNEGO, however, you should use a Linux development machine or VM because SPNEGO behavior depends on the native GSSAPI stack in the OS and that is different in Windows from the version of the stack in Linux.

Yale uses Eclipse as the Java IDE. If you do not already have Eclipse installed, start with the J2EE download package of the current release from eclipse.org. Newer Eclipse releases bundle features that used to be optional add-ons. Anything that does not come preinstalled can be added using the Help - Install New Software ... menu (if the option is an Eclipse project) or the Help - Eclipse Marketplace menu (Eclipse and third party tools, but check carefully because the Marketplace lists versions for every different release of Eclipse). You need:

  • Maven support (M2E) which appears to be standard in the latest releases.
  • Subversion support (Subversive) which is not automatically included because there were several competing projects. The first time you open a Subversive view you will get a popup offering a list of third party (Polarion) connectors; choose the latest version of SVNKit.
  • JBoss Tools (at least the JBossAS manager) which is listed in Marketplace because it comes from Red Hat.
  • AJDT, the Eclipse support for AspectJ which the JASIG CAS source uses (starting with the Luna release, Eclipse realizes that CAS uses AspectJ and automatically installs this component when it processes the CAS project if you have not preinstalled it).
  • The standard Maven support comes with a version of Maven 3 built into Eclipse. That is exactly right for building the CAS executable, but it is a Yale convention that the Install job that copies the WAR file over to JBoss has to run under Maven 2. So you need to download the last Maven 2.2.1 from apache.org and unzip it to a directory somewhere on your disk. Later on you will define this directory to Eclipse as an optional "Maven Installation" it can use to run jobs configured to use Maven 2 instead of the default 3.

Check Out and Build the Project

Open the SVN Repository Exploring "perspective" (page) of Eclipse and define the repository url https://svn.its.yale.edu/repos/cas. This directory contains various cas clients and servers. Find the trunk of the current cas-server project and check it out.

The cas-server project is a Maven "parent" project with subprojects to build JARs and WARs in subdirectories. The Check Out As ... wizard normally assumes you are checking out a single project with a single result, so it will not configure this project properly on its own. Just check it out as a directory with no particular type, or you can try to configure it as a Maven project.

Return to the J2EE perspective, right click the new cas-server project directory, and choose Import - Maven - Existing Maven Projects. This is the point where the M2E Eclipse support for Maven discovers the parent and subdirectory structure. It reads through the parent POM to find the "modules", then scans the subdirectories for POM files that configure the subprojects. Then it presents a new dialog listing the projects it has found. Generally it has already found and configured the parent project, so only the subprojects will be checked.

If you are working on a CAS release that has already been configured, you will only see the subprojects that some previous Yale programmer decided were of interest at Yale. If you are working on a new release of CAS with a vanilla POM you may see a list of all the projects, and this may be a good time to deselect the CAS optional projects that Yale does not use.

Now sit back while the M2E logic tries to scan the POMs of all the subprojects and downloads the dependency JAR files from the internet. If you get a missing dependency for a Yale JAR file, then the $HOME/.m2/settings.xml file does not point to the Artifactory server where Yale stores its JAR files. You can ignore error messages about XML file syntax errors. In the Luna version of Eclipse, the M2E support discovers that CAS uses AspectJ and offers to install support for AspectJ in Eclipse.

A Maven project has standard source directories (src/main/java) used to build the output JAR or WAR artifact. A Maven POM has "dependency" declarations it uses to download JAR files (to your local Maven repository) and build a "classpath" used to find referenced classes at compile time. Eclipse has a "Build Path" (physically the .classpath file) that defines source directories, the location where the output class files are stored, and the libraries that have to be used to compile the source. M2E runs through the POM files in each subproject and creates an Eclipse project with an Eclipse Build Path that tells Eclipse to do exactly the same thing Maven would do to the same project. Eclipse will compile the same source in the same source directory and produce the same output to the same output directory using the same JAR libraries to run the compiler.

It is important (because it answers questions later on) that you understand that Eclipse is being configured to do MOSTLY the same thing that Maven would do, but Eclipse does it without running Maven itself. The MOSTLY qualification is that M2E sets default values for the enormous number of options that an Eclipse project has (what error messages to display or hide, syntax coloring, everything else you can set if you right click the project and choose Preferences from the menu). After M2E creates an initial default batch of project settings, you are free to change them and ask Eclipse to do things slightly differently without breaking the development process.

When the import finishes there may be several types of error messages that don't really matter:

  • JSP, HTML, XML, XHTML, and JavaScript files may report errors because Eclipse is trying to validate syntax without having everything it needs to do the validation properly, or because some files are fragmentary and have to be assembled from pieces before they are well formed.
  • The M2E default is to configure the Eclipse project to use Java 1.6 both as a language level for the compiler and as a JRE library default. If you are running on a machine with 1.7 or 1.8, you will get warning messages that the project is set up for 1.6 but there is no actual 1.6 known to Eclipse. You can ignore this message, or correct the default project settings.

Before the import, there was only the "cas-server" project (and the installer). The import creates an Eclipse "shadow project" (my name) for every subdirectory under cas-server that is needed. These shadow projects are Eclipse entities built from the Maven POM files. They contain Eclipse configuration and option elements (what Eclipse thinks of as the project and Build Path). What these shadow projects don't have are real source files. They contain configuration pointers to the real source files in the subdirectories of the cas-server project. As you open and look at these projects, they will appear to have source. However, this source will be structured as Java Packages and Java Classes and Java Resources. This is a logical representation in Java and WAR terms of the actual source files that remain as subdirectories of the cas-server directory.

In a few cases, such as the Search option on the menu, you may look for files with certain text or attributes and the results of the search will display the same file twice, once as a physical text file in the cas-server directory tree and once as a Java Class source in the corresponding Eclipse project. Just realize that these are two different views of the same file, and changes you make to either view are also changes to the other view.

Interactive and Batch Modes

Eclipse tries to make J2EE development simpler than is reasonably possible. Eclipse is almost successful, close enough so that it is quite usable as long as you realize what cannot possibly be made to work exactly right and solve that problem with another tool.

The Maven POM contains declarations that drive a Java compiler running under Maven to compile all the source, and then it builds JAR, WAR, and EAR files. The M2E support for Maven inside Eclipse tries to understand the POM file and to create a set of Eclipse project configurations, including Build Path references to dependent libraries in the Maven local repository, to compile the source in the same way. This part works, so Eclipse can be a good source editor including autocomplete hints and the ability to find the source of classes you are using or all references to a method you are coding.

It is in the running and debugging of applications under JBoss or Tomcat that Eclipse may attempt to be too simple.

J2EE servers have a "deployment" directory where you put the WAR file that contains the Java classes and HTML/XML/JSP/CSS files. You can deploy the application as a single ZIP file, or you can deploy it as a directory of files.

Most Java application servers have a Hot Deploy capability that allows you to replace a single HTML or CSS file in the (non-ZIP, directory of files) application and the server will notice the new file with its new change date and immediately start to use it. This can even work for some individual class files. However, Hot Deploy behavior is random and error prone if you attempt to change a number of files at the same time because the application server can notice some changes in the middle of the update and start to act on an incomplete set of updates.

Eclipse likes the idea of a simple change to a single HTML, CSS, or even a simple Java source file taking effect immediately after you save the file in the editor. However, since it knows that Hot Deploy doesn't actually work if you physically copy a batch of files over to the Tomcat or JBoss deploy directory, the Eclipse J2EE runtime support has traditionally attempted to do something much more sophisticated. It "hacks" the normal Tomcat configuration file to insert a special Eclipse module that creates (to Tomcat) a "virtual" WAR file that does not actually exist in the Tomcat deploy directory but which is synthetically emulated by Eclipse using the source and class files that it has in the projects in its workspace.

This is a little better than Hot Deploy, because Eclipse can simulate swapping the entire set of changed files in a single transaction, but in the long run it doesn't work either. CAS, for example, is a big Spring application with a bunch of preloaded Spring Beans created in memory when the application starts up. The only way to change any of these beans is to bring the entire application down and restart it.

So successful CAS development requires you, even if all you want to do is change the wording on an HTML file, to stop the application server, run a real "mvn install" on the cas-server project to build a new WAR file, run a "mvn install" on the Installer project to copy it over to the application server deploy directory, and then restart the application server. The Eclipse capability to make some changes take effect immediately on a running server is simply too unreliable, and because of that we cannot use the entire "interactive" Eclipse application build strategy but must instead switch to a Maven "batch" artifact building and deployment process.

The advantage of the "batch" approach is that you are guaranteed to get exactly the same behavior that you will get later on when you run the Yale Jenkins jobs to compile and install the application onto the test or production servers. Eclipse does a pretty good job of doing almost exactly the same thing that Maven will do, but it is not worth several days of debugging to finally discover that almost the same thing is not quite as good as exactly the same thing.

Since the Eclipse project was created from the Maven POM, it is configured to compile all the same source that Maven would compile and to put the resulting class files in the same output directory. Because Eclipse compiles the source when the project is first built, and then recompiles source after every edit, whenever you run the "mvn install" job Maven finds that the class files are up to date and does not recompile the source. If it is important to have Maven recompile source itself, then you have to do a "mvn clean install" step. Also, neither Maven nor Eclipse does a good job cleaning up class files after a source file is deleted, so when you make such a change you should "clean" the project to force a recompile of the remaining files.

Running Maven Jobs Under Eclipse

At any time you can right click a POM file and choose "Run as ..." and select a goal (clean, compile, install). Rather than taking all the default configuration parameters, or entering them each time you run Maven, you can build a Run Configuration. Basically, this is a simple job with one step that runs Maven. The Run Configuration panel allows you to select every possible option, including the Java Runtime you want to use to run Maven, and a specific Maven installation so you can run some things under Maven 2 and some under Maven 3, plus the standard Maven options to not run tests or to print verbose messages. Since the Jenkins jobs each run one Maven POM, in the Sandbox you can build two Maven Run Configurations that duplicate the function of each Jenkins job.

First, let's review what the Jenkins jobs do.

  • The "Trunk Build" job checks the source project out of subversion. In this case, it is a "parent" project with subdirectories that all get checked out. It then runs a "mvn install" on the parent. The parent POM contains "module" XML elements indicating which subdirectory projects are to be run. Each subdirectory project generates a JAR or WAR artifact. The Jenkins Trunk Build installs these artifacts on the Artifactory server replacing any prior artifact with the same name and version number.
  • The Jenkins Install job checks the source of the Install project out of SVN. By Yale convention, each Install project is an Ant build.xml script and a package of parameters in an install.properties file created from the Parameters entered or defaulted by the person running the Jenkins job. Minimally the Install job downloads the artifact from Artifactory and copies it to the JBoss deploy directory, although typically the artifact will be unzipped, variables will be located and replaced by the values in the properties file, and then the file will be rezipped and deployed.

In the Sandbox you already have a copy of the CAS source files checked out in your Eclipse project in your workspace, and you can also check out a copy of the Installer project. So there is no need to access SVN. Similarly, in the Sandbox we do not want to mess up Artifactory, so the local Maven repository (which defaults to a subdirectory tree under the .m2 directory in your HOME folder) holds the generated artifacts.

With these changes, a Build project compiles the source (in the Eclipse workspace) and generates artifacts in the .m2 Maven local repository. The Install project runs the Ant script with a special Sandbox version of the install.properties to copy the artifact from the .m2 local repository to the Sandbox JBoss Deploy directory (with property edits).

There is one last step. The Jenkins install job stops and starts JBoss. In the Sandbox you will manually start and stop the JBoss server using the JBoss AS Eclipse plugin. This plugin generates icons in the Eclipse toolbar to Start, Stop, and Debug JBoss under Eclipse. You Stop JBoss before running the Install job and then start it (typically using the Debug version of start) after the Install completes.

Now to the mechanics of defining a Run Configuration. You can view them by selecting the Run - Run Configurations menu option. Normally you think of using Run configurations to run an application, but once you install M2E there is also a Maven Build section. Here you can choose a particular Maven runtime directory, a set of goals ("clean install" for example), and some options to run Maven in a project directory. The recommendation is that on your sandbox machine you create two Maven Run Configurations. The configuration labelled "CAS Build" runs a Maven "install" or "clean install" goal on the parent POM of the CAS source project. The configuration labelled "CAS Install" runs Maven on the POM of the Installer project (and it has to run Maven 2 because this is still the standard for Installer jobs).

CAS Source Project Structure

In Maven, a POM with a packaging type of "pom" contains only documentation, parameters, and dependency configuration info. It is used to configure other projects, but generates no artifact. The parent "cas-server" directory is a "pom" project. All the subdirectories point to this parent project, so when the parent sets the Log4J version to 1.2.15 then all the subprojects inherit that version number from the parent and every part of CAS uses the same Log4J library to compile and at runtime.
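
A minimal sketch of how that inheritance works (the coordinates are illustrative): the parent pins the version once in dependencyManagement, and a subproject declares the dependency without a version so the parent's choice is used.

Code Block
  <!-- In the parent POM (packaging "pom"): pin the version once -->
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.15</version>
      </dependency>
    </dependencies>
  </dependencyManagement>

  <!-- In any subproject POM: no <version> element, so the parent's 1.2.15 is inherited -->
  <dependencies>
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
    </dependency>
  </dependencies>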

A "jar" Maven project compiles Java source and combines the resulting *.class files and any resource data files into a zip file with an extension of ".jar". This is an artifact that is stored in the repository and is used by other projects. The most important CAS subproject is "cas-server-core", which compiles most of the CAS source and puts it in a very large JAR file. The other CAS subprojects compile optional JAR files that you may or may not need depending on which configuration options you use. Any JAR file created by one of these subprojects will end up in the WEB-INF/lib directory of the CAS WAR file.

A type "war" Maven project builds a WAR file from three sources:

  1. It can have some Java source in src/main/java, which is compiled and the resulting class files are stored in the WEB-INF/classes directory of the WAR.
  2. The files in the src/main/webapp resource directory contain the HTML, JSP, XML, CSS, and other Web data files. They are simply copied to the WAR.
  3. Any JAR files that are listed as a runtime dependency not provided by the container in the WAR project POM are copied from the Maven repository to the WEB-INF/lib directory.

Then the resulting files are zipped up to create a WAR artifact which is stored in the Maven repository under the version number specified in the POM.
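
A skeletal type "war" POM illustrating the pattern (artifact names and versions are placeholders, not taken from the CAS source):

Code Block
  <!-- Skeleton of a type "war" project; names and versions are placeholders -->
  <artifactId>example-webapp</artifactId>
  <packaging>war</packaging>
  <dependencies>
    <!-- A runtime dependency: Maven copies this JAR into WEB-INF/lib -->
    <dependency>
      <groupId>commons-io</groupId>
      <artifactId>commons-io</artifactId>
      <version>2.4</version>
      <scope>runtime</scope>
    </dependency>
    <!-- A "provided" dependency: available to compile against, but not copied into the WAR -->
    <dependency>
      <groupId>javax.servlet</groupId>
      <artifactId>servlet-api</artifactId>
      <version>2.5</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>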

The "cas-server-webapp" project is a standard JASIG project that builds a WAR file, but that is not the WAR file we deploy.

WAR Overlay

Maven has a second mechanism for building WAR files. This is called a "WAR Overlay". It is triggered when the POM indicates that the project builds a WAR artifact, but the project also contains a WAR artifact as its first dependency. A WAR cannot contain a second WAR file, so when Maven sees this it decides that the current project is intended to update the contents of the WAR-dependency rather than building a new WAR from scratch.

A WAR Overlay project can have Java source in src/main/java, in which case the source will be compiled and will be added to or replace identically named *.class files in the old WAR-dependency.

Mostly, the WAR Overlay replaces or adds files from src/main/webapp. This allows Yale to customize the CSS files or create special wording in the JSP and HTML pages.

JAR files listed as a runtime dependency of the WAR Overlay project are added to the WEB-INF/lib directory along with all the other JAR files in the original WAR. In this case, however, replacing doesn't make any sense. A JAR file with the same name has the same Version number, so it would be a copy of the exact same file that is already there. Because WAR Overlay only replaces identically named files, you really do not want an artifact with one Version number in the WAR Overlay project and a different Version number in the old WAR: the two copies cannot be merged, both end up in WEB-INF/lib, and the result is garbage.

JASIG recommends using the WAR Overlay mechanism, and the Yale CAS customizations follow that rule. In the JASIG distributed source the WAR Overlay project subdirectory is named cas-server-uber-webapp, but Yale's version of this project is named cas-server-yale-webapp. This project produces the artifact named edu.yale.its.cas.cas-server-war with the version number that we track in Jenkins and Subversion as the actual CAS Server executable.

CAS has traditionally been created using the "WAR Overlay" technique of Maven. First, the cas-server-webapp directory builds a conventional WAR from its own files and dependencies. This is a generic starting point WAR file with all the basic JAR files and XML configuration. Then a second type "war" project is run that (in the standard JASIG CAS distribution) is called cas-server-uber-webapp. What makes it different from a standard type "war" Maven project is that the first dependency in this project is the cas-server-webapp.war file built by the previous step.

A "WAR Overlay" project depends on an initial vanilla prototype WAR file as its starting point. Everything in that WAR will be copied to a new WAR file with a new name and version, unless a replacement file with the same name is found in the WAR Overlay. When Yale or any other CAS customer is building their own configuration of CAS, the WAR Overlay project directory provides a convenient place to store the Yale configuration changes that replace the prototype files of the same name in the previously run generic cas-server-webapp type "war" project.

If you made no changes at all to JASIG code, the WAR Overlay project is actually the only thing you would need to check out and edit. Yale would never need to locally compile any CAS source, because the JASIG repository version of the cas-server-webapp.war artifact for that release version number would be perfectly good input to the WAR Overlay process. However, at this time we include all the JASIG source and recompile the JASIG code every time we build the CAS source trunk. While that is a small waste of time, it prepares the structure so we can fix JASIG bugs if they become a problem.

The "cas-server-yale-webapp" project is the WAR Overlay that builds the artifact we actually deploy into production.

Source Build

CAS is distributed as a cas-server directory with a bunch of subdirectories and a POM file. The subdirectories are JAR and WAR projects. The POM file contains a certain amount of boilerplate for JASIG and the CAS project, plus two things that are important. The parameters and dependency management section ensure that all elements of CAS are compiled or built with the same version of library modules (quartz for timer scheduling, hibernate for object to database mapping, etc.). Then a list of <module> statements indicate which subdirectory projects are to be run, and the order in which they are run, when Maven processes the parent POM. The possible modules include:

  • cas-server-core - This is the single really big project that contains most of the CAS Server code and builds the big JAR library artifact. It is built first, and all the other projects depend on it and the classes in the JAR file.
  • cas-server-webapp - This is a dummy WAR file that contains the vanilla version of the JSP, CSS, HTML, and the XML files that configure Spring and everything else. The dependencies in this project load the basic required JAR libraries into WEB-INF/lib. This WAR is not intended to install or run on its own. It is the input to the second step WAR overlay where additional library jar files will be added and some of these initial files will be replaced with Yale customizations.
  • Additional optional extension subprojects of CAS supporting specific features. For example, the cas-server-support-ldap module is needed if you are going to authenticate Netid passwords to AD using LDAP protocol (you can authenticate to AD using Kerberos, in which case you don't need this module). A JASIG standard distribution includes all possible subprojects and the parent cas-server POM file builds all of them. To save development time, you can comment out the subproject <module> statement in the parent POM of modules you don't use. Disk space is cheap, and deleting the subdirectory of modules you don't use may be confusing. 
  • Yale CAS extension JAR projects. For example, the cas-server-expired-password-change module is the Yale version of what CAS subsequently added as a feature called "LPPE". Yale continues to use its own code because the Yale Netid system involves databases and services beyond simply the Active Directory component.
  • cas-server-yale-webapp - Yale customization is (unless it is impossible) stored in the WAR Overlay project directory. If we need Java code, we put this source here and it is compiled and ends up in WEB-INF/classes. We replace some standard JSP and CSS files to get the Yale look and feel and wording. The XML files that configure Spring and therefore CAS behavior are here. This project must always be the last module listed in the cas-server parent POM so it is built last. The cas-server-webapp project is a dependency, so that is the base WAR file that will be updated. This project replaces JSP, CSS, and XML files in the base WAR with identically named files from this project. It adds new files and library jars (from the dependency list of this project).

Yale will have modified the POM file in the cas-server parent project to comment out "module" references to optional artifacts that we are not using in the current CAS project. It makes no sense to build cas-server-support-radius if you are not using it.
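
The effect in the parent pom.xml looks roughly like this (an illustrative sketch; the exact set of modules varies by release):

Code Block
  <!-- Illustrative <modules> section of the cas-server parent POM -->
  <modules>
    <module>cas-server-core</module>
    <!-- Optional Apereo modules we do not use are commented out:
    <module>cas-server-support-radius</module>
    -->
    <module>cas-server-support-ldap</module>
    <module>cas-server-integration-ehcache</module>
    <module>cas-server-webapp</module>
    <module>cas-server-expired-password-change</module>
    <!-- The Yale WAR Overlay must always be the last module built -->
    <module>cas-server-yale-webapp</module>
  </modules>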

It is possible for Yale to also delete the subproject subdirectory from the instance of cas-server source checked into SVN. If we are not building or using the module, we don't need to maintain it in the project source. However, this makes relatively little difference. In CAS 3.5.2 the entire JASIG source is 1628 files or 10 megabytes of storage, while the optional projects we don't use account for only 210 files or 500K. So deleting the unused optional source modules saves very little time for every Trunk Build checkout and since we don't compile them, there is no processing time cost.

The Road Not Taken

The reference from a child project to its parent uses an artifact name and version, and the parent is found in the local Maven repository rather than the parent directory. So storing the subprojects as subdirectories under the cas-server parent is not a physical requirement. It does make things simpler.

It would have been possible to separate out the JASIG code in one directory of SVN, and then put the Yale code in another directory. This would slightly simplify the documentation because you would know what is Yale and what is JASIG. But that would only be a slight improvement.

We still have to decide how to handle bugs or problems in the JASIG source. When an individual file has been changed in the JASIG source because that is the only place it can be changed, then that file and the project directory that contains it is no longer vanilla JASIG code. At that point there has to be an entry in the Release Notes to show that the file has been changed, and now separate directories do not simplify the documentation.

This also means that there have to be two Jenkins Build jobs, one for the JASIG part of the source and one for the Yale part of the source. Remember, the Jenkins job structure at Yale assumes that a Build job checks out one directory and all its subdirectories.

Therefore, we compared the advantages and disadvantages, flipped a coin, and decided to check into SVN a directory consisting of JASIG source merged with Yale created project subdirectories. We do not claim that this is the right way to do things, but neither is it obviously wrong.

The POM in a vanilla JASIG component subproject will have a Version number that corresponds to CAS releases (3.5.2). The POM in a Yale customization subcomponent will have a Version number that corresponds to the Yale CAS installation "Release" number for Jenkins tracking (1.1.0-SNAPSHOT). The artifact name and version number in the WAR Overlay project must match the artifact name and version number of the Jenkins Install job parameters.

 

Standalone.xml

In theory, the developer of a project does not have passwords for the production databases, so runtime parameters are provided from a source outside the WAR. A common Yale convention was to build XML files defining each database as a JBoss Datasource and creating a service that stuffed a group of parameters into the JBoss JNDI space where they could be read at runtime using the built in java.naming classes. The Install job used to build these XML files.

However, the JBoss EAP 6.1 rewrite has changed the JBoss best practice. Now both Datasources and JNDI values are stored in the main "subsystem" configuration file. In the sandbox, this is the standalone.xml file in the /config subdirectory of the JBoss server. You need to get a copy of an appropriate configuration file for your JBoss and install it yourself. Production services will be responsible for providing the file in DEV, TEST, and PROD, but you need to notify them if you add something during sandbox development so they can also add it to the versions they manage.
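
For example, the relevant pieces of standalone.xml look roughly like the sketch below. This is only an illustration: the JNDI names, URLs, credentials, and subsystem namespace versions are made up and vary by EAP release; the real entries come from Production Services.

Code Block
  <!-- Sketch only; names, values, and namespace versions are illustrative -->
  <subsystem xmlns="urn:jboss:domain:datasources:1.1">
    <datasources>
      <datasource jndi-name="java:/jdbc/CasDataSource" pool-name="CasDS" enabled="true">
        <connection-url>jdbc:oracle:thin:@dbhost.example.yale.edu:1521:CAS</connection-url>
        <driver>oracle</driver>  <!-- refers to a driver defined in the <drivers> section -->
        <security>
          <user-name>casuser</user-name>
          <password>changeit</password>
        </security>
      </datasource>
    </datasources>
  </subsystem>

  <subsystem xmlns="urn:jboss:domain:naming:1.4">
    <bindings>
      <!-- A runtime parameter CAS can read back through JNDI (java.naming) -->
      <simple name="java:global/cas/ldapUserName" value="cas-service-account"/>
    </bindings>
  </subsystem>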

Best Practice

Check out from SVN the current version of the cas-server source. Also check out the Installer project directory. Get a copy of install.properties for the Installer project from someone who has one, and make sure the jboss.deploy.dir points to the location where you put JBoss on your sandbox.

If it is at all possible, make changes only to the WAR Overlay project subdirectory.

M2E generates the .settings directory and the .project and .classpath Eclipse project files, but we generally do not check them into SVN because they may be different depending on which Eclipse optional plugins you have installed and what release of Eclipse you are running. Maven generates the target directory with all the generated class files and artifact structure, but that also should never be checked into SVN. To avoid having these directories even show up when you are synchronizing your edits with the SVN trunk, go to the Eclipse Menu for Window - Preferences - Team - Ignored Resources and Add Pattern for "target", ".settings", ".classpath", and ".project" (omitting the quotes of course).

CAS is vastly over engineered. An effort was made to deliver a product that anyone can configure and customize using only XML files. No Java programming is required. So if you are a Java programmer, you already have an advantage over the target audience.

CAS has a set of key interfaces which are then implemented with one or more Java classes. The choice of which Java class you will use as an AuthenticationManager, AuthenticationHandler, TicketCache, or Web Flow state is made when the fully qualified name of the class is specified in one of the XML bean configuration statements.

Sometimes you need behavior that is just slightly different from the standard JASIG behavior. Make a reasonable effort to see if you can find an alternate implementation class or an alternate configuration of the standard class that has the behavior you want. If not, then instead of modifying the org.jasig.cas code in the JASIG source, see if this is one of the "bean" classes named in the XML. If so, then just make your own copy of the original source in the WAR Overlay project source area and rename its package to an edu.yale.its.tp.cas name. Change it to do what you want, and then change the fully qualified name in the XML to reference the new name of the modified class you have put in the Yale customization project.
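
The XML change itself is a one-line edit. As an illustration (the bean id and class names here are made up, not the actual CAS configuration):

Code Block
  <!-- Before: the Spring XML names the standard JASIG implementation -->
  <bean id="authenticationHandler"
        class="org.jasig.cas.authentication.handler.support.SomeAuthenticationHandler"/>

  <!-- After: the same bean id now points at the modified Yale copy in WEB-INF/classes -->
  <bean id="authenticationHandler"
        class="edu.yale.its.tp.cas.authentication.handler.SomeAuthenticationHandler"/>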

Of course, JASIG may change the source of its original class in some future release. However, classes that implement standard interfaces do not really need to pick up every JASIG change. If they fix a bug, then you want to copy over the new code. If they simply add a new feature or change the behavior of something that you are not using, then there is no need to update your modified copy of the original JASIG code. Essentially, the modified code has become 100% Yale stuff no matter who originally authored it and if it continues to do what you need then there is no need to change it in future releases, unless the interface changes.

Most strategic changes in behavior can be coded in Java source compiled as part of the WAR Overlay project. What cannot be changed in the WAR Overlay project? If a class is not named in the XML configuration files, but instead is imported into some other Java source file, then you cannot in general replace it with new code in the WEB-INF/classes directory.

This is a consequence of some fine print in the Java implementation of ClassLoaders. Since this is computer logic stuff, I can boil it down to some simple logic, but it has to be expressed as a set of rules:

Every class is loaded by a ClassLoader.

In JBoss, there is one ClassLoader for WEB-INF/classes and a second ClassLoader for WEB-INF/lib. Then there are ClassLoaders for EARs and for the JBoss module JAR files, but they aren't important here.

When the XML references a class by name, JBoss searches first through the classes compiled in the WAR Overlay project and stored in the WEB-INF/classes directory, and then it searches through WEB-INF/lib. So it finds the WAR Overlay source first and uses it.

However, when an already loaded class references another class with an import statement, then Java searches for the new class by first using the ClassLoader that loaded the class that is looking for it. This means that any class loaded from WEB-INF/lib will search for all of its imported classes in WEB-INF/lib first, and only later search WEB-INF/classes.

Therefore, you can override XML-named classes using source from the WAR Overlay project, but the other classes that are imported into JASIG source files probably have to be edited and changed in their original JASIG project. If you have to make a change, identify the modified JASIG source in the ReleaseNotes, and if it is a bug fix, submit it to JASIG to be fixed in future releases.

With CAS 3, Yale started with the Apereo CAS source directory tree and added new subproject subdirectories for the Yale code.

Starting with CAS 4, Yale creates a separate Yale-CAS project directory containing only our subprojects, but we copy the Apereo parent pom.xml file from the Apereo source distribution and modify it so it becomes a Yale specific file.

Building the WAR

Before you build the WAR, you have to build all the JAR files it will contain. Apereo JAR files can be downloaded by Maven from the Internet so they do not have to be rebuilt. Yale JAR files, and any Apereo JAR files that Yale decides to modify, have to be locally compiled and stored. Yale's new and modified artifacts always have a Yale-specific Version ID that distinguishes them from the Apereo CAS artifacts.

The CAS WAR that you actually run in the Web server is built in two steps in two Maven projects.

Apereo distributes a project called cas-server-webapp to create an initial template WAR file. This WAR is not particularly useful, but it contains at least a starter version of all the Spring XML used to configure CAS and many CSS, JSP, and HTML pages. It also contains a WEB-INF/lib with the basic JAR libraries needed by a CAS system. This unmodified Apereo WAR file is a "template" that Yale modifies or updates to create our final WAR.

Although you can modify the cas-server-webapp project directly, this results in a directory with a mixture of Apereo files and Yale files, and the next time you get a new CAS release from Apereo you have to sift through them to find out which ones have been changed by Apereo.

Apereo recommends using the WAR Overlay feature of Maven. Yale creates a second WAR project called cas-server-yale-webapp that contains only the files that Yale has changed or added. Generally the WAR Overlay includes:

  • Yale Java source for Spring Beans that are slightly modified versions of standard Apereo beans.
  • Yale "branded" look and feel (CSS, JSP, HTML).
  • Spring XML to configure CAS options that Yale has selected and a few Yale additions.
  • A pom.xml file with dependency statements for Apereo optional and Yale additional JAR files referenced by the Spring XML that will be added to WEB-INF/lib.

A normal WAR project simply takes all the files in the project and uses them to build a WAR. A WAR Overlay project, however, starts with a template WAR that has already been built. Maven knows that it is working with a WAR Overlay project when it discovers that the first <dependency> in the POM is a file of type "war". Since one WAR cannot contain another WAR inside it, Maven knows that it is supposed to start with the template (dependency) WAR and update it by replacing or adding files from this project to build the output WAR file.

The Template WAR built by cas-server-webapp contains most of the Spring XML configuration, an initial set of CSS, HTML, and JSP files, and almost all the JAR files needed by CAS in its WEB-INF/lib. It also contains the cas-server-core JAR, which has about 95% of all the CAS code.

The Yale WAR Overlay project replaces the CSS, JS, HTML, and JSP files associated with the CAS login function to provide the Yale "look and feel" of the CAS login page that everyone expects. It also replaces some of the subsequent confirmation and error pages so they have the same look.

Yale has selected several CAS optional modules. We use cas-server-support-ldap because we authenticate user passwords to AD using LDAP protocol, and cas-server-integration-ehcache because we use ehcache to replicate tickets between CAS servers. These optional JAR files will have been built by Apereo and are available as Maven artifacts, but they were not included in the cas-server-webapp template because they are optional. So the Yale WAR Overlay has to add these JAR files to its list of dependencies and configure these options in new or modified Spring XML files. We must also include the JAR files built by the two Yale projects.

While the WAR Overlay process provides a way to update, replace, or add files, it does not provide any way to delete files you no longer want. However, Maven builds a WAR using the maven-war-plugin, which can be configured in the POM file. One configuration option is <packagingExcludes>, which lists files (or wildcard patterns) in the Template WAR that are to be removed instead of being copied to the output WAR. There are two cases where this is used:

  • Some JAR files used by the cas-server-webapp project to build a WAR that will run in Tomcat are inappropriate for JBoss. JBoss provides its own versions of some JAR files, so the JAR file provided by the application should be deleted.
  • If you build everything with a single Maven environment then Maven will ensure that the latest required version of any given JAR file is used by all modules. However, the cas-server-webapp WAR file is built by Apereo with the latest version of all the libraries it uses. Yale has a few of its own projects that it builds in its own Maven environment, and every so often Yale will use a later version of a JAR file than Apereo required. So if cas-server-webapp had a dependency on 1.2.3 of some JAR file and Yale depends on 1.2.4, then the normal WAR Overlay process would copy both versions of the JAR file (1.2.3 and also 1.2.4) to the output WEB-INF/lib. So we add a packagingExcludes statement in the WAR Overlay POM (as sketched after this list) to delete the old version (1.2.3), so that the WAR we run at Yale contains only the version of the library we want to use.
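
As promised above, a sketch of the maven-war-plugin configuration in the WAR Overlay POM (the excluded file names are examples, not the actual Yale exclude list):

Code Block
  <!-- Illustrative maven-war-plugin configuration in the WAR Overlay POM -->
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-war-plugin</artifactId>
    <configuration>
      <!-- Drop the older duplicate library and a JAR that conflicts with JBoss -->
      <packagingExcludes>
        WEB-INF/lib/commons-io-2.0.jar,
        WEB-INF/lib/xml-apis-*.jar
      </packagingExcludes>
    </configuration>
  </plugin>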

Version Numbers and Project Structure

The first Apereo version of CAS was 3.0.0 and now there are CAS 4.0.x versions. CAS 1 and CAS 2 were written at Yale and are no longer meaningful, but this means that the Maven version numbers starting 1.x and 2.x are free and will never be used by Apereo. So Yale internally is reusing the 1.x Version numbers for Yale modules and for Yale modified versions of Apereo modules. Since Yale only periodically updates its CAS version, we skip over a lot of Apereo version numbers:

Apereo Version   Yale Version
3.4.x            1.0.x
3.5.x            1.1.x
4.0.x            1.2.x

 

This becomes important (and confusing) when Yale has to make a change to an Apereo JAR file to fix a problem we cannot ignore and cannot wait for a new Apereo release. In the Maven repositories there will be a vanilla Apereo artifact ending in *-4.0.2.jar and Yale will then add its own artifact named *-1.2.0.jar. If the artifact is one of the optional libraries, then we will simply add the 1.2.0 dependency to the WAR Overlay project, but if this is cas-server-core or anything else in the template WAR built by the cas-server-webapp project then we also have to add the -4.0.2.jar file to the Exclude list of the Maven WAR plugin configuration in the WAR Overlay POM file.
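
Putting those two pieces together, a hedged sketch of what the WAR Overlay POM would contain in that case (coordinates follow the numbering described above, but are not copied from an actual Yale POM): a dependency on the Yale build of the JAR, plus an exclude for the vanilla copy that arrives inside the template WAR.

Code Block
  <!-- Depend on the Yale-modified build of the library (illustrative coordinates) -->
  <dependency>
    <groupId>org.jasig.cas</groupId>
    <artifactId>cas-server-core</artifactId>
    <version>1.2.0</version>
    <scope>runtime</scope>
  </dependency>

  <!-- ...and exclude the unmodified copy that the cas-server-webapp template WAR brings along -->
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-war-plugin</artifactId>
    <configuration>
      <packagingExcludes>WEB-INF/lib/cas-server-core-4.0.2.jar</packagingExcludes>
    </configuration>
  </plugin>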

Although there may be additional code in various stages of development, a CAS release at Yale only depends on three Yale projects:

  • cas-server-expired-password-change (to prompt the user to change a password that is a year old)
  • cas-server-support-CushyTicketRegistry (only the configuration module is currently used to configure ehcache)
  • cas-server-yale-webapp (the WAR Overlay with all the Yale configuration and HTML changes)

In CAS 3 development Yale created a source project that included the standard Apereo source along with the three additional Yale projects in a single directory. The advantage of this project structure is that Yale was recompiling all the source and was building at least formal artifacts with the Yale version numbers for all the Apereo code. If Yale ever wanted to change an Apereo module, all it had to do was to make a change in the Apereo source and then change the dependency version number for that artifact in the WAR Overlay from the vanilla 3.5.x to the Yale 1.1.x. It made modifications to Apereo easy, but it created a lot of work when you upgraded to a new CAS release where you ended up changing the project structure of a bunch of Apereo modules you were never really going to change.

So in CAS 4 we changed the Yale project structure to a much simpler and cleaner approach. All the vanilla Apereo code is removed initially. If there are modifications we need to make to an Apereo module, that work can be done later when it is actually needed. The Yale project then consists of only the three Yale-added projects as subdirectories of a parent CAS project, plus a parent POM as the only file in that parent directory.

The Yale parent POM is a modified version of the parent POM in the vanilla Apereo CAS 4.0.2 source directory. First, we change the version number from 4.0.2 to Yale's 1.2.0, because that is the default version of any Yale-written or Yale-modified code. Then we comment out all the processing that Apereo has added to Maven to do open source project housekeeping. When you build an open source project for general distribution, there are Maven plugins that make sure your distribution lists the open source licenses of all the dependency libraries you use, plugins that complain about style (missing JavaDoc comments, for example), and plugins that do special release packaging. Yale is a customer and our projects only have to run at Yale, so we don't need all those checks.

We should note that there are two distinct meanings of the term "parent" in a Parent POM.

The parent directory is a Maven project whose POM file has a packaging type of "pom". For example, the Apereo parent POM for CAS 4.0.0 begins with the following declaration:

Code Block
  <groupId>org.jasig.cas</groupId>
  <artifactId>cas-server</artifactId>
  <packaging>pom</packaging>
  <version>4.0.0</version>

This file is processed when anyone runs a Maven "mvn install" operation on the parent directory. The important part of the Parent Directory POM is that it contains a <module> statement (for example, "<module>cas-server-core</module>") for every subdirectory that builds an artifact. Maven processes the <module> statements in the order in which they appear, so the file can be arranged so that a JAR file is built and stored in the Maven repository before any subsequent module that depends on it is compiled, as in the sketch below.
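As a sketch (the real Apereo POM lists many more modules than shown here), the relevant section looks like this, with cas-server-core built before the modules that depend on it:

Code Block
  <modules>
    <module>cas-server-core</module>
    <module>cas-server-support-ldap</module>
    <module>cas-server-integration-ehcache</module>
    <module>cas-server-webapp</module>
  </modules>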

There is, however, a second meaning of "parent" that relates to the projects in the subdirectories. Each subproject POM file contains a <parent> reference:

Code Block
  <parent>
    <artifactId>cas-server</artifactId>
    <groupId>org.jasig.cas</groupId>
    <version>4.0.0</version>
  </parent>

The <parent> POM contains DependencyManagement statements, Maven plugin configuration, and <properties> definitions that are shared by all the subprojects that reference it. This ensures, for example, that all the subdirectory projects are compiled with the same version of the Apache commons-io JAR library.
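For example (the commons-io version shown here is illustrative, not necessarily the one Apereo pins), the parent POM declares the version once and the subprojects can then reference the dependency without repeating it:

Code Block
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>commons-io</groupId>
        <artifactId>commons-io</artifactId>
        <version>2.4</version>
      </dependency>
    </dependencies>
  </dependencyManagement>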

It is a common Maven convention that the <parent> POM is also the Parent Directory POM. That is how Apereo organizes CAS and how Yale organizes its CAS additions project, but it is not a requirement. Technically, the parent simply has to be a project with "<packaging>pom</packaging>" that has already been processed by Maven and stored in the local Maven repository before it is referenced in a <parent> statement of a subproject POM. When the <parent> POM is also the Parent Directory POM and Maven is invoked to build the parent directory, the Parent Directory POM is by definition processed first and stored in the Maven repository before any of the subdirectory projects are built. Shibboleth 3 is an example of another approach: there the <parent> POM lives in a subdirectory of its own, but it is listed as the first <module> in the Parent Directory POM and is therefore built and stored in the local Maven repository before any of the subprojects that reference it.

The vanilla Apereo CAS source project has its own Parent Directory POM that establishes the global defaults and builds all the Apereo artifacts at version 4.0.2. You can recompile them in your sandbox, or you can simply let Maven download the artifacts from the Yale Artifactory network server or from the Internet. The Yale CAS source project has its modified version of the Apereo Parent Directory POM, with all the open source project boilerplate processing commented out and maybe a few version numbers incremented (for example, Yale changes the Java compiler version from 1.6 to 1.7 for its own code). The WAR Overlay (cas-server-yale-webapp) project can get the template WAR from an instance previously compiled on the sandbox machine and stored in the local Maven repository, or it can get the standard version of the template WAR compiled by Apereo and stored on the Internet servers. Either way the template WAR is the same file (except for timestamps) and the WAR Overlay processing produces the same result; see the sketch below for how the overlay names that template.
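A sketch of how the WAR Overlay POM declares the template WAR (Maven resolves this dependency from the local .m2 repository if a sandbox build put it there, otherwise from Artifactory or the Internet); the version shown assumes the vanilla Apereo template is used:

Code Block
  <dependency>
    <groupId>org.jasig.cas</groupId>
    <artifactId>cas-server-webapp</artifactId>
    <version>4.0.2</version>
    <type>war</type>
    <scope>runtime</scope>
  </dependency>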

In CAS 3.5.x (Yale version 1.1.x) the Yale CAS source project contains the three Yale subprojects, but it also contains a number of vanilla Apereo source subprojects. Some of them build Yale versions of Apereo modules (that is, a cas-server-core-1.1.1.jar file), but just because we build a Yale version of the artifact doesn't mean that it is ever used or deployed anywhere. If you look at the WAR Overlay project (cas-server-yale-webapp) you may find that the <dependency> statement in that POM references the vanilla Apereo version number (3.5.2.1), so the Yale version of the artifact we have just compiled is stored but ignored. If you are debugging a problem and need to add trace statements to the vanilla Apereo code on the Sandbox, you can temporarily change the version number in the WAR Overlay to use your modified code; after the problem has been resolved you can revert the WAR Overlay POM to its standard settings and go back to using vanilla Apereo code.

Because the CAS 4.0.x (Yale version 1.2.x) project does not normally contain a copy of any Apereo project, making temporary changes for sandbox debugging takes considerably more work. You have to build a custom version of the Apereo JAR file, and that means making all the necessary changes to the project and subproject directories to make it work. If you do this, it is important NEVER to compile a modified version of the Apereo source and store it in the local Maven repository under the vanilla Apereo version number. Once you have stored your own modified version of cas-server-core-4.0.2.jar, it will (without some custom cleanup on your part) shadow the real vanilla version of the module, and you will always get the modified version on that Sandbox machine. So if you decide to do some work on a previously vanilla Apereo project, the first thing to do is to change the version number in the project POM, as in the sketch below.
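A sketch of the top of such a modified subproject POM (assuming the project still references the vanilla Apereo parent, and using cas-server-core only as the example module):

Code Block
  <parent>
    <artifactId>cas-server</artifactId>
    <groupId>org.jasig.cas</groupId>
    <version>4.0.2</version>
  </parent>
  <artifactId>cas-server-core</artifactId>
  <!-- give the modified build a Yale version number so the vanilla
       4.0.2 artifact in the local Maven repository is never overwritten -->
  <version>1.2.0</version>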

Why Not Vanilla?

Apereo code contains some bugs or sloppy coding. Yale has found them and reported them informally, but there has not been much interest in fixing them. Some examples of differences of opinion or bugs:

  • The "service=" parameter value on the CAS login has to exactly match the value of the same parameter on the validate, serviceValidate, or samlValidate request. There is a difference of option about how carefully they have to match. The JASIG code matched the entire string excluding JSESSIONID if it was present. Yale believes that the entire query string (everything after the "?") should be excluded, and maybe the match should stop with context (https://servername/context/stuff-to-be-ingnored). This changes the substrings used in the equals() test.
  • In the Login WebFlow logic Apereo saves the generated TGTID in a "request" scoped variable between the userid/password form and the CASTGC cookie generation, but when we insert the Duo second login screen all "request" scoped variables are lost. We had to copy it to "flow" scope during the display of the Duo form.
  • When you are having a CAS problem, you may want to insert additional logging. For example, if the validate request is failing you may want to print out the exact service= strings being compared and the point at which they differ.
  • Apereo defaults all registered services to be able to Proxy, but it seems better if the default is that they cannot Proxy unless you explicitly authorize it. This involves changing the default value for allowedToProxy in AbstractRegisteredService.java from "true" to "false".
  • AbstractTicket is a Serializable class, but the Apereo source forgot to give it a serialVersionUID. As a result, every time you deploy a new version of cas-server-core you have to "cold start" CAS by wiping out all the current tickets. You should only have to do that if you actually changed the Ticket classes, not just because you recompiled the library to change something that has nothing to do with tickets.
  • Yale does not use Single Sign-Out, but code in TicketGrantingTicketImpl added to support that option has a very small but non-zero chance of throwing a ConcurrentAccessException and possibly screwing CAS up. Since we don't need it, Yale might just comment out the statement that causes the problem.

Each of these is a one-line change; only the first is user visible, and only the last is important for reliability. If CAS were not open source we would probably just live with the old code, but we have the chance to fix or change it.

Whether we use vanilla Apereo code or make Yale modifications depends on Yale requirements, staffing, and management. There is a cost and commitment to maintaining a modification, but there is also a problem if some application does not work correctly or if some Yale need is not being properly addressed.

Subversion

If you check out a project from Subversion, then you rename a file or copy it to another directory, and then commit the changes, Subversion doesn't realize that the file has been renamed or moved. It sees that the original file name is no longer where it once was, and that a new file name has appeared in the directory, so it treats this as a delete and a create. All the file history is lost.

The command-line Subversion client has commands (svn copy, svn move) that copy or move a file and link its old history to its new location. So if history matters, there are some things you should do with Subversion outside the Eclipse environment. This is easy to learn in any Linux Sandbox development environment.

During Sandbox development your compiled JAR and WAR artifacts are stored in the local Maven repository on the Sandbox machine. However, the result of Sandbox development is to make and commit source changes to the Subversion copy of the project. When you move to DEV (and then TEST and PROD) the Jenkins Build process will check out and compile the project, build new copies of the artifacts on the Jenkins managed machines, and store them in the Artifactory network server.

An Eclipse project is built when you check a Maven project out of Subversion into the Eclipse Workspace. Generally a Sandbox instance will already contain an Eclipse Workspace directory, the workspace will contain a CAS project, and that project will contain source from some earlier round of CAS development. The first step when you begin work on CAS is to update the source in the workspace project with the latest version from Subversion.

There is a small amount of customization that has to be done manually after the project is checked out of Subversion into an Eclipse Workspace. For example, every Maven J2EE project seems to default to Java 1.6 while Yale CAS modules are written to use Java 1.7. So you have to manually go in and change the project defaults and the generated library section of the Eclipse Build configuration. Updating the contents of an existing Eclipse project preserves this customization, while checking out a complete new Maven project from Subversion means you will have to redo the small amount of post checkout project cleanup.

The Jenkins Build and Install Jobs

In order to put CAS into production, you have to conform to the Yale Jenkins process.

First the developer creates code on a desktop sandbox environment and runs basic tests. When the code is working, it is checked into Subversion.

The developer then runs the Jenkins Build job for that project. In "Trunk Build" processing (the default) the Build job first checks out a copy of the current SVN trunk onto a Jenkins managed Build machine. It then runs the top level parent Maven pom.xml job, which in turn runs all the subprojects to compile Java source and build the 1.1.x-SNAPSHOT versions of all the JAR files and of the final WAR files. At the end of the job, these files are all stored in Artifactory. The Trunk Build can be run again and again to replace the SNAPSHOT files until Integration Testing on the DEV machines is successful.

There are Jenkins "Install" jobs for DEV, TEST, and PROD. Each checks out a copy of the installer directory stored in SVN next to, but separate from, the cas-server source project. The install job also runs a top level pom.xml file (in the installer directory), although that Maven project just runs an Ant build.xml script to download from Artifactory the specific version of the CAS WAR file built by the previous Build job. The Ant script copies (and typically unzips) this WAR file to the JBoss application deploy directory. As text and XML files are copied, Ant makes some edits "on the fly" to insert parameter values for the userids and passwords used to access databases and AD, and to configure special options in Spring.

After DEV testing is complete, but before Installing to TEST or PROD, the Jenkins Build job is run a second time to Perform a Maven Release. Jenkins checks out the source project from SVN, but this time it changes the version ID in all the project pom.xml files to drop the "-SNAPSHOT" suffix. So if you were working on "1.1.2-SNAPSHOT" this momentarily creates version "1.1.2" files. Those files are checked into SVN as a Tag, and they are also compiled to produce the "1.1.2" version of the WAR file which is stored in Artifactory. This becomes the official "1.1.2" Release of Yale CAS. Then Maven changes all the Version ID strings in the pom.xml files a second time to increment the minor Version number and re-add the suffix, so that when the developer updates the pom.xml files in his Eclipse workspace he begins work on "1.1.3-SNAPSHOT".

Development

Up to this point we have discussed where the files are stored and how the releases are built, but nothing has been said about editing files and writing code. You do that on your desktop or laptop computer.

CAS development requires Java, JBoss, Maven, and Eclipse. The last three tools run under Java, and Java is designed to be platform independent, so you can do development under Windows or Mac OSX if you prefer. There are a few hours of setup time getting the right versions of everything set up on your computer (particularly adding the right options to Eclipse). This produces a sandbox machine.

When we talk about the CAS Development Sandbox, however, this is a VM created to run under Oracle's open source VM host called VirtualBox. You can run VirtualBox on Windows or Mac, and in the VM Java, JBoss, Maven, and Eclipse are all set up to work on CAS. This is very helpful for testing, particularly if you need to test communication between CAS machines in a cluster. However, the responsiveness of a VM running on a desktop is not as good as native applications running on the real OS.

This section describes how to set up a sandbox. It can be a guide for updating the CAS Development Sandbox VM when you want to move to Java 8 or 9, JBoss Wildfly, and Eclipse Mars (4.5), or it can explain how to configure current versions of everything on your native desktop OS.

Eclipse

Yale uses Eclipse as the Java IDE. If you prefer a different IDE (Netbeans, IntelliJ, ...) the only absolute requirements are the ability to check projects in and out of SVN, the ability to build Maven projects, and the ability to debug code in JBoss. However, it would be up to you to adapt the following Eclipse instructions.

Start with the Eclipse for J2EE download package of the current release from eclipse.org. This contains Eclipse support to edit Java, XML, and HTML source and to import and automatically configure Maven projects that build JAR or WAR files. Additional capability can be added in two ways. Features from an already known source (including any eclipse.org features) can be added from the Help - Install New Software menu. Eclipse also has a general source for third party add-on features at Help - Eclipse Marketplace. The Marketplace is easier to use, but you need to read the descriptions carefully to make sure the item you plan to install is the right version for the release of Eclipse you are running.

Eclipse needs:

  • Maven support (called "M2E"), which has become standard in modern Eclipse releases; we mention it here because it is a complex package with functions that will be described later in some detail.
  • Subversion support ("Subversive") which was developed by a third party named Polarion but was then contributed to eclipse.org. It can be installed from Add Software because it is now owned by the Eclipse project.
  • However, while the Subversive code understands basic SVN concepts, the actual code to communicate over the network to the SVN server is still a third party addition from Polarion that you will be prompted to select the first time you try to use any SVN function in Eclipse. Choose the 100% Java library called SVNKit (use the latest version number in the menu).
  • Add JBoss Tools from the Marketplace. Do not select the full JBoss branded replacement for the entire Eclipse program, just add the Tools part and make sure that you choose the one that corresponds to your current Eclipse (Luna for example). You do not have to install all the tools, but it is simpler to simply hit OK and accept the entire package.
  • Eclipse has optional AspectJ support, and CAS has some AspectJ components. It used to be necessary to install this option manually, but starting with Luna you will get a popup dialog inviting you to add Eclipse AspectJ support when Eclipse encounters AspectJ as you import the CAS project.

In addition to Eclipse extensions, Eclipse can be made aware of important external resources.

  • In Window - Preferences - Java - Installed JREs you can configure more than one instance of Java installed on your machine. By default Oracle Java tends to run the highest version number, but CAS is distributed to run on 1.6 and at Yale it runs on 1.7. If you happen to have 1.8 installed on your machine for other purposes, that is the version Eclipse will discover when you first install it, so you should also configure the other versions you are going to use to test applications. Install and configure a full JDK because Maven needs one to run.
  • Eclipse comes with a current version of Maven 3 built in. Unfortunately, the Yale Jenkins Install jobs run on Maven 2.2.1 and that is not fully compatible with current Maven 3. So you need to unzip a copy of Maven 2.2.1 somewhere on your system and add the location of this directory to Eclipse through Window - Preferences - Maven - Installations.
  • Eclipse needs to know where your JBoss server is in order to start it. This gets a bit tricky because the original Eclipse for J2EE code from eclipse.org that you started with has some support for JBoss servers, but the JBoss Tools that you just added has better support. It turns out to be better to let JBoss Tools "discover" the JBoss directory and autoconfigure everything for you. Go to Window - Preferences - JBoss Tools - JBoss Runtime Detection. Click Add and type the directory one level up from the root of the JBoss Server (if JBoss is in c:\appservers\jboss-eap-6.2 then "Add" c:\appservers). Then click Search ... and all the application servers in that directory will be found. If you do not already have JBoss downloaded and installed, "Add" the directory where you want to install it, click Download ..., and select the version of JBoss from the list.

Check Out the Maven Project

If you are working with the current CAS release, you check it out from the Yale SVN server. If you are going to start work on a new CAS release you still have to check out the old release, but then you have to also import the new CAS release distribution from Apereo and merge the two.

Open the SVN Repository Exploring "perspective" (page) of Eclipse and define the repository url https://svn.its.yale.edu/repos/cas. This directory contains various cas clients and servers. Find the trunk of the current cas-server project and check it out as a simple directory. Do not use the Eclipse Wizard to create a particular type of Eclipse project. Just create a new generic Eclipse project.

CAS is stored in SVN as a Maven project. This means it has a pom.xml file in its top level directory.

Eclipse has its own project structure. An Eclipse project has a .project and .classpath file and a .settings directory in its top level directory.

M2E is the name of the Eclipse support for Maven. M2E is able to read through the pom.xml file and to generate the .project and .classpath files and the .settings directory that contains what Eclipse needs in order to correctly display and build the Maven project. If in Eclipse Project Explorer you right click on a project and choose Configure from the menu, you can configure that single project to be a Maven project. However, since CAS is a parent project with subprojects, you need to use Import to get all of them.

Return to the J2EE perspective, right click the new cas-server project directory, and choose Import - Maven - Existing Maven Projects. This is the point where the M2E Eclipse support for Maven discovers the parent and subdirectory structure. It reads through the parent POM to find the "modules", then scans the subdirectories for POM files that configure the subprojects. Then it presents a new dialog listing the projects it has found. Generally it has already found and configured the parent project, so only the subprojects need to be checked.

M2E will only display subprojects that were mentioned in a <module> statement of the top level parent pom.xml file. However, you do not need to click the checkboxes to select all of them to be turned into Eclipse projects. You only need to select the projects you are working on. If you leave some out, you can always repeat the Import Existing Maven Projects step and add more.

Now M2E does some serious work, and you have to give it time to finish. It processes the pom.xml file in each subproject to decide whether it builds a JAR or a WAR. It configures the project to compile the Java source. It reads through the dependency list in the POM and downloads to your local .m2 repository all the JAR libraries on which the CAS project depends. Then it compiles all the source. When it encounters the AspectJ components it will invite you to add AJDT support to Eclipse.

If you have not properly configured the Yale Artifactory server in your .m2/settings.xml file, you may get a message about a missing Yale dependency JAR file. Ignore error messages about bad XML syntax. You may also be told that the project is configured for Java 1.6 but only a 1.7 runtime is available. These are all unimportant issues.

M2E understands Maven projects, but it is distinct from real Maven. Real Maven (let's call it "Batch Maven" because you run it from the command line) is an extensible system of optional plugin modules that can be expanded to provide all sorts of special processing. M2E is an Eclipse component that can read pom.xml files and configure Eclipse to do approximately the same thing that Maven would do to compile the source and build the artifact.

Eclipse can compile Java and AspectJ source. It can resolve references at compile time to JAR files downloaded and stored in the Maven local (.m2) repository. It can merge compiled *.class files with XML and properties files to build the JAR or WAR.

There are some things that M2E and Eclipse cannot do on their own, like compiling WSDL files to generate Java proxy source for remote Web Services. To get those extra steps you have to run Real batch Maven. M2E can run real Maven in batch mode under Eclipse, but you have to do this manually yourself. During the automatic project import step M2E mostly gets the Java and WAR parts right, and fortunately that is all that CAS requires.

M2E knows what it knows, and it knows what it doesn't understand. It reports as an unresolved problem any configuration in any pom.xml file of a Maven plugin that it doesn't fully support. Mostly these are messages like "Plugin execution is not covered by lifecycle configuration ..." This is M2E noting that there is some sort of extra processing that Maven does in batch but that the M2E support cannot exactly duplicate in its Eclipse configuration. For CAS this does not matter, because we will be using Real Batch Maven to generate all the JAR and WAR files that go into JBoss. We only need Eclipse to be configured properly so that the Java IDE functions like autocomplete and autofix and Open Declaration and Open Type Hierarchy work, and M2E gets that part right.

When M2E has completed its Import function, there are now two distinct types of projects for every project directory.

  1. There is still a Real Batch Maven project represented by the pom.xml. This will run and generate the 100% exactly correct result when you run a "mvn clean install" either at the command line or from within Eclipse.
  2. There is also an Eclipse project represented by the .project, .classpath, and .settings generated by M2E. This is good enough to compile all the Java source, but it is not able to do all the optional or special processing that was configured in the pom.xml. It produces perfectly acceptable *.class files and puts them in the correct working directory, and there is no reason why Maven would have to recompile that source and create new *.class files of its own. However, Eclipse would not be reliable if you tried to use it to build a JAR or WAR file, because it would leave out some of the special processing. For that you need to run Batch Maven.

Interactive and Batch Modes

We have already discussed why it is important to run the batch "mvn clean install" command to build artifacts. However, there is a second, subtle interactive-versus-batch distinction that you need to understand for testing and debugging.

Eclipse has special support to make simple J2EE development as easy as possible. This is useful for simple applications where the developer spends a long time editing JSP, HTML, and CSS files or straightforward Java source.

Since the start of J2EE, application servers have had a "hot deploy" mode of operation where they notice when the timestamp of an HTML or CSS file changes and use the new version of the file as soon as it is stored in the application server deploy directory. However, Eclipse is an IDE, and when you save a new copy of an HTML or CSS file you are putting it in the Eclipse workspace, not in the c:\appservers\jboss-eap-6.2\standalone\deployments\cas.war directory. Eclipse could have an option to immediately copy changed files over to the deployment directory, but it tries instead to do something much more clever.

The J2EE support in Eclipse has a trick for bringing up widely known Java application servers (Tomcat, JBoss) in a special Eclipse managed configuration mode. These servers have always had the ability (required by Linux convention) to put their configuration files over in one directory tree, their libraries in another tree, their log files somewhere else, and their applications wherever the system administrator wants to configure them. Eclipse can run one of these servers overriding its normal configuration directory with one that Eclipse has hacked.

The Eclipse trick is to configure the application server to believe that the WAR file it is dealing with is the Eclipse project in the Eclipse workspace. In some cases Eclipse configures a special Eclipse-Tomcat JAR library with a replacement for the usual Tomcat classes that read files from normal WAR files. Eclipse creates a "virtual" WAR that comes from its workspace. If you are running the application server in this mode, then when you save an HTML file in the Eclipse workspace it doesn't have to be copied anywhere else. It has been hot deployed to the application server automatically.

This trick cannot work if your application requires any special Maven processing to build the WAR file. The CAS WAR has to be built by Real Batch Maven, and Yale conventions further require last minute parameter substitution with the Install job Ant script. So CAS in general and particularly CAS at Yale cannot use the oversimplified application debugging provided by Eclipse.

You have to run Real Batch Maven under Eclipse to do the same work as the Jenkins Build job, then you need to run it again to do the same work as the Jenkins Install job. That produces a cas.war directory in the JBoss deploy directory on the sandbox machine. Then you want to start JBoss. This is where the JBoss Tools add-on to Eclipse is helpful. It can start and stop, with or without debugging, an ordinary JBoss server installed outside Eclipse. The server runs with its normal configuration from its normal configuration directory. The only change is that you manage it from Eclipse instead of from the command line, and you can set breakpoints with Eclipse debugging without the extra step of attaching Eclipse to a running process.

Running Maven Jobs Under Eclipse

If you have not already done so, go to SVN Repository Exploring, connect to the /repos/cas SVN repository, and check out the CAS Installer job that corresponds to the cas-server project you are working with.

If you have a Maven project in Eclipse, then if you right click on the pom.xml file in the root directory of the project you are presented with a set of Maven batch operations ("mvn clean", "mvn install", etc.). Running Maven this way from Eclipse requires you to take all the defaults.

The alternative is to configure a Run Configuration. Choose Run - Run Configurations from the menu. On the left side of the dialog, select "m2 Maven Build", then select New. Give the configuration a name like "CAS Build" or "CAS Install". For the Base directory click Browse Workspace and choose the cas-server project (for Build) or the cas-installer project (for Install). The Goals for each should be "install" or, to be safer, "clean install".

At the bottom of the dialog, you can choose a Maven Runtime from the configured versions of Maven Eclipse knows about. You can use the Embedded Maven 3 for the build, but you need to use a configured external version of Maven 2.2.1 for the installer project. You can also use the JRE tab to select a specific version of Java.

Conceptually, the CAS Build and CAS Install Run Configurations are the sandbox version of the Jenkins Build and Install jobs.

The Build job has no parameters. The Install job, however, requires that you add an install.properties file containing sandbox versions of the parameters that an operator would enter running the real Jenkins install job. We do not check this file in, so you need to get it from another developer, or a local shared server disk. With passwords removed, it will look something like:

target.environment=DEV
jboss.deploy.dir=/c:/appservers/jboss-eap-6.2/standalone/deployments/
cas.server.version=1.1.2-SNAPSHOT
acs.pwd=xxxx
ad.server.admin.userPwd=xxxx
ad.server.admin.userDn=CN=somenetid,CN=users,DC=yu,DC=yale,DC=net
cas.cookie.secure=false
cas.ticketRegistry.xml=ticketRegistryEhcache
cas.clear.log=true

The target.environment property selects a secondary parameter file with environment specific parameters; the value DEV adds the install-DEV.properties file values. The jboss.deploy.dir is the JBoss directory into which the cas.war is copied. The cas.server.version should correspond to the Maven version ID of the artifacts created in the Build job. The acs.pwd value is the password of the yu_shib userid in the ACSx Oracle database; if it is not a real password you will get some error messages and CAS will not check for expired passwords. The ad.server.admin parameters must be a netid and password (we recommend a dependent netid) that simply connects you to the AD you are using, because the AD requires some login before you can use it; this login isn't really an admin and doesn't need any special privileges. If you specify cas.cookie.secure=false then you can test CAS with an ordinary http:// connection to port 8080 and you don't need SSL or a certificate on your sandbox.

The real Jenkins Build job downloads source from SVN, but you already have your source in your Eclipse workspace. Your CAS Build Run Configuration runs the top level parent Maven project, which in turn runs all the subprojects and builds the JAR and WAR artifacts. However, unlike the real Jenkins Build job, your CAS Build stops when it has deposited these artifacts in the local .m2 Maven repository on your sandbox machine. Nothing is changed on the Artifactory server.

The CAS Install Run Configuration similarly uses the artifact in the .m2 local Maven repository. It explodes the WAR file in the JBoss deploy directory and inserts parameters from the properties files.

There is one last step. The Jenkins install job stops and starts JBoss. In the Sandbox you will manually start and stop the JBoss server using the JBoss Tools Start and Stop toolbar icons.

Best Practice

Where possible, add Java code to the WAR Overlay project using an edu.yale.its.* package name, and reference the class name in the Spring XML bean definitions (see the sketch below). If you are going to modify Apereo code that generates a Spring Bean, copy that source to the Overlay project and rename it. This is better than changing the Apereo code in cas-server-core. CAS interfaces are stable, so this code migrates from one release to the next.
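A sketch of what that Spring XML might look like (the bean id and class name here are hypothetical; the point is only that the bean definition points at a class in the Overlay project rather than a class in cas-server-core):

Code Block
  <!-- same bean id the Apereo configuration used, but the implementing
       class now lives in the Yale WAR Overlay project -->
  <bean id="authenticationHandler"
        class="edu.yale.its.cas.authentication.YaleAuthenticationHandler" />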

Learn the important CAS interfaces. Generally speaking, you can do most customizations at the AuthenticationManager, AuthenticationHandler, CredentialsToPrincipalResolver, and TicketRegistry interfaces, or by building a Spring Web Flow bean.

However, you cannot change an internal class in cas-server-core (that is, a class used directly by other classes in the same project) without actually editing that project. Moving that class to the WAR Overlay project will not work because other projects cannot override internal classes.

The Debug Cycle

Make changes to the files and save them.

...

Do not commit the generated "target" subdirectories to SVN. They contain temporary files that should not be saved.

Do not commit the install.properties file you put in the install project.

Once you are ready to proceed to the next phase, run a Jenkins Trunk Build and do a DEV Install.

...