CAS was created at Yale, and Versions 1 and 2 of the CAS code were written at Yale. Version 3 was completely rewritten and has been managed by a group of universities called "JA-SIG". Recently JA-SIG merged with the Sakai Foundation and was renamed "Apereo". In Yale CAS documentation, any reference to jasig.org should be understood to now reference apereo.org.
The CAS Project Directory Structure
Any given release of the CAS Server can be downloaded as a zip file from Apereo or it can be checked out from the Git server used by CAS developers as documented at the Apereo Web site. The release source is a Maven project, specifically a "parent" project with subdirectories that contain Maven subprojects. This is a common Maven way to package multiple projects that have to be compiled and built in a particular order.
The outer directory contains the parent or master pom.xml that defines parameters shared by all the subprojects. It also contains a list of <module> statements that identify the subprojects to be built in the order in which they should run.
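The shape of that parent POM can be sketched as follows (the module list here is illustrative, not the complete Apereo list):

```xml
<!-- Illustrative fragment of the parent pom.xml; the real file carries
     many more properties and plugin settings. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.jasig.cas</groupId>
  <artifactId>cas-server</artifactId>
  <version>3.5.2.1</version>
  <packaging>pom</packaging>

  <!-- Subprojects are built in the order listed; cas-server-core
       comes first because everything else depends on it. -->
  <modules>
    <module>cas-server-core</module>
    <module>cas-server-support-ldap</module>
    <module>cas-server-webapp</module>
  </modules>
</project>
```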
Each subproject creates a JAR or WAR file. The first project is cas-server-core and it contains about 95% or more of all the CAS code. It has to be built first because all the other projects depend on it. After that, there are projects to create optional components that you may or may not choose to use.
Building the WAR
The CAS WAR that you actually run in the Web server is built in two steps in two Maven projects.
Apereo distributes a project called cas-server-webapp to create an initial prototype WAR file. The WAR it creates is not particularly useful, but it contains at least a starter version of all the Spring XML used to configure CAS and many CSS, JSP, and HTML pages. It also contains a WEB-INF/lib with the JAR libraries needed by a basic CAS system.
Although you can modify the cas-server-webapp project directly, this results in a directory with a mixture of Apereo files and Yale files, and the next time you get a new CAS release from Apereo you have to sift through them to figure out which files Yale changed and must be carried forward.
Apereo recommends using the WAR Overlay feature of Maven. Yale creates a second WAR project called cas-server-yale-webapp. Instead of copying all the files from the Apereo project, the WAR Overlay project contains only the files that Yale has changed or added. Generally the WAR Overlay includes:
- Yale Java source for Spring Beans that are slightly modified versions of standard Apereo beans.
- Yale "branded" look and feel (CSS, JSP, HTML).
- A pom.xml file with additional dependencies beyond just the cas-server-core that was included in the template WAR built by cas-server-webapp. For example, Yale depends on cas-server-support-ldap and cas-server-integration-ehcache. We don't modify them, but since they are optional modules they were not included in the template WAR. Adding them to the dependencies in the Overlay project POM adds them to our WEB-INF/lib.
The WAR Overlay project references the template WAR, and at the end of processing the files that Yale added or changed are "overlaid" on top of the template to build the cas.war file that is actually deployed in production. Modified files replace the originals; new files are added.
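A sketch of what the overlay project's pom.xml declares (coordinates and version numbers illustrative; the overlay-triggering detail is the dependency of type "war"):

```xml
<!-- Illustrative WAR Overlay fragment. Declaring the Apereo template WAR
     as a dependency of type "war" is what triggers overlay processing. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>edu.yale.its.cas</groupId>
  <artifactId>cas-server-war</artifactId>
  <version>1.1.1</version>
  <packaging>war</packaging>

  <dependencies>
    <!-- The template WAR to overlay. -->
    <dependency>
      <groupId>org.jasig.cas</groupId>
      <artifactId>cas-server-webapp</artifactId>
      <version>3.5.2.1</version>
      <type>war</type>
    </dependency>
    <!-- Optional modules Yale uses; their JARs land in WEB-INF/lib. -->
    <dependency>
      <groupId>org.jasig.cas</groupId>
      <artifactId>cas-server-support-ldap</artifactId>
      <version>3.5.2.1</version>
    </dependency>
    <dependency>
      <groupId>org.jasig.cas</groupId>
      <artifactId>cas-server-integration-ehcache</artifactId>
      <version>3.5.2.1</version>
    </dependency>
  </dependencies>
</project>
```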
At Yale, when you run the Jenkins Build job for CAS, the result of that job is to create a cas.war file and store it in the Yale Artifactory server with a version number and some hierarchical naming. For example, Version 1.1.1 of the Yale CAS Server is stored as https://repository.its.yale.edu/maven2/libs-releases-local/edu/yale/its/cas/cas-server-war/1.1.1/cas-server-war-1.1.1.war.
One or Two Projects?
If you were never ever going to change any Apereo source, then it would make sense to have two separate projects in your Eclipse workspace. One project would be the absolutely vanilla CAS source you downloaded from Apereo, and the other project would be the additional Yale modules and the WAR Overlay project. The Yale project would depend on the Apereo artifacts, but the only reason for having the source is so you can see methods that you step into while debugging your own code.
However, Apereo code can have problems. Sometimes it is a bug. Sometimes it is just a difference of opinion between the Yale way of doing things and the Apereo way. Sometimes you want to add in specific support for features that Apereo is working on but has not yet finished.
- Apereo defaults all registered services to be able to Proxy, but it seems better if the default is that they cannot Proxy unless you explicitly authorize it. This involves changing the default value for allowedToProxy in AbstractRegisteredService.java from "true" to "false".
- AbstractTicket is a Serializable class, but the Apereo source forgot to give it a serialVersionUID. As a result, every time you deploy a new version of cas-server-core you have to "cold start" CAS by wiping out all the current tickets. You should only have to do that if you have actually changed the Ticket classes, not just because you recompiled the library to change something that has nothing to do with Tickets.
- Yale does not use Single Sign-Out, but code added to TicketGrantingTicketImpl to support that option has a very small but non-zero chance of throwing a ConcurrentModificationException and destabilizing CAS. Since we don't need it, Yale might just comment out the statement that causes the problem.
Each of these is a one-line change, and only the last is important. If CAS were not open source we would probably just live with the old code, but we have the chance to fix it.
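The serialVersionUID fix, for example, is a single declaration. This sketch uses a hypothetical stand-in class (the real change goes in AbstractTicket) to show the idea:

```java
import java.io.ObjectStreamClass;
import java.io.Serializable;

// Hypothetical stand-in for AbstractTicket. Without an explicit
// serialVersionUID, the JVM computes one from the compiled class, so
// merely rebuilding cas-server-core invalidates every serialized ticket.
class TicketSketch implements Serializable {
    // Pinning the UID means old tickets stay readable across rebuilds
    // unless the class itself actually changes incompatibly.
    private static final long serialVersionUID = 1L;

    public static void main(String[] args) {
        long uid = ObjectStreamClass.lookup(TicketSketch.class).getSerialVersionUID();
        System.out.println(uid); // prints the pinned value, 1
    }
}
```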
Once you start to consider changing Apereo code, then it is inconvenient to maintain two separate projects. For one thing, you need two separate Jenkins build jobs, one for each project. Then you have release numbering issues.
The alternative is to have a single project that combines both the Apereo source for a CAS release (example: 3.5.2.1) and the Yale code that has been built and tested for that CAS release (separately designated Version 1.1.x). Most of the time the Apereo source just sits there and is not used and is not compiled because it is not changed. If you need to make a modification, it is immediately available.
The single project may be confusing, unless you have read this document. There is a large block of source checked into Subversion that appears in your Eclipse workspace, except that none of it is actually used. Why is it there at all? If the Apereo source is not going to be changed, shouldn't it be separated out for clarity?
There are advantages to the Two Project approach, and advantages to the One Project approach. Currently we use the single project structure, but if you feel really strongly about it you can separate the code in some future release.
The Parent pom.xml
The top level directory is the "parent" project. Typically it is named "cas-server", although this is simply a choice you make when you check the source out of SVN. The only things in the parent project are its pom.xml file and a few README-type files.
However, one of the more complicated problems when Yale migrates from one release of CAS to another is to reconcile our changes to the parent pom.xml with any Apereo changes. This is not the same type of code migration that occurs if you have modifications to some HTML or Java source file.
All the CAS code, configuration files, and HTML do the same thing at Yale that they do everywhere else and that Apereo designed them to do. So our version is fairly close to the original.
However, the top level pom.xml file is not just about compiling the source and building the template WAR file (which is the part we need). Apereo is in the business of maintaining an Open Source Project that is distributed widely on the Internet. A large part of their concern is to make sure that all the legal boilerplate is properly maintained. Do the files all have a proper copyright notice? Are all the licenses to the dependency files properly listed? Maven has some impressive support for doing all this open source release management, but unless that is something you specifically have learned, setting up the legal boilerplate is a daunting task.
Yale is a CAS customer. We may have our own files, but unless and until we contribute them back to Apereo we do not care if a file that nobody sees outside Yale has a proper open source copyright notice and license declaration. If we try to use the Apereo top level pom.xml file as distributed, then every time we try to cut a Yale release we get error messages for all the boilerplate that is missing from our own files. So we remove all that stuff and create our own top level pom.xml.
Given that we have to make substantial changes, we might accept that this pom.xml is a Yale file and no longer a modified version of the original Apereo project. Then for sanity, and because our release process requires it, we change it to reflect the rest of the Yale environment:
- Version - There are two version numbers, one for unmodified Apereo code corresponding to a CAS release (3.5.2.1) and one representing the internal Yale Release of the CAS artifact that we put into production (1.1.x or 1.1.x-SNAPSHOT). The Yale top level project and all the Yale source use the Yale version ID. If we modify an Apereo project (say we make a change to cas-server-core) then we also change the version number of that JAR file from the Apereo to the corresponding Yale ID numbering.
- SCM - defines the Yale Subversion server where Yale maintains its production source.
- Repositories - defines Yale's Artifactory, but merged with other sources for artifacts from the Apereo pom.
- pluginManagement - Maven is a modular system of extensible functions implemented by optional "plugin" modules. The first step for Yale was to ditch the "notice" and "license" plugins that manage the legal boilerplate. Then we make parameter changes to a few other plugins. For example, when you are done working on Version 1.1.1-SNAPSHOT and want to create the official 1.1.1 Yale version of CAS and then reset the project to begin work on 1.1.2-SNAPSHOT, you run the "Perform Maven Release" operation of the Yale Jenkins Build job, which in turn uses the maven-release-plugin. At Yale we find that running the JUnit tests during this process is not only slow but a bad idea, so we change the configuration of that plugin in the parent POM.
- dependencyManagement - In any official CAS release, all of the code is coordinated so that all modules and options use the same versions of the same libraries that will be distributed in the WAR and made available at runtime. Unfortunately, when Yale develops its own code it may need a later version of one of those libraries. For example, Apereo is happy with commons-io Version 2.0, but Yale needs Version 2.4. Apache makes the 2.4 library backward compatible with programs written for 2.0, but swapping in a newer version of a library is tricky. The first step is that our top level pom.xml declares the 2.4 version to be the one we want, so that all our projects use the same version. This also ensures that the 2.4 version of the commons-io JAR will be merged into the WAR during the Overlay processing.
- However, there is a final processing step that has to be done in the WAR Overlay project and is related to, but not directly specified in, the parent pom.xml file. Because we use some unmodified CAS 3.5.2.1 modules (particularly cas-server-webapp) and then merge in new libraries from the WAR Overlay project, the resulting cas.war file would have both commons-io-2.0.jar and commons-io-2.4.jar in WEB-INF/lib (2.0 from vanilla cas-server-webapp and 2.4 from our project). So we have to add an exclude statement to the configuration of maven-war-plugin in the WAR Overlay project to remove the unwanted commons-io-2.0.jar file. Generally speaking, we remove any file that the build process would normally include that is bad for JBoss, or where there are multiple versions of the same code and we only want to keep the latest version.
- modules - The CAS Project (that is the source checked into SVN that also is found in the Eclipse workspace) contains all the projects distributed by Apereo in a CAS release. It is just that we do not use most of them and we try to avoid changing them, so we don't need to compile or build them. That is accomplished by deleting (or commenting out) all the <module> statements in the Apereo parent pom.xml that reference unmodified Apereo source projects, then adding <module> statements for all the Yale projects. Even though the source and the project are physically present as subdirectories, if the parent pom.xml doesn't refer to a subdirectory in a <module> statement then Maven ignores that directory during the build. On the other hand, if we decide to make a change to an Apereo project then we add back in or uncomment the <module> statement, but with Yale changes we also alter the version ID in the pom.xml for that subdirectory from Apereo "3.5.2.1" to Yale "1.1.x-SNAPSHOT".
- Parameters - At the end of the POM there are parameters that specify the version numbers of dependency libraries and Java.
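The exclude statement for the duplicate commons-io JAR lives in the WAR Overlay project's maven-war-plugin configuration. A hedged sketch (the exact exclude list varies by release):

```xml
<!-- Illustrative maven-war-plugin configuration in the WAR Overlay POM.
     Excludes strip files inherited from the template WAR that would
     otherwise duplicate newer libraries merged in by the overlay. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-war-plugin</artifactId>
  <configuration>
    <overlays>
      <overlay>
        <groupId>org.jasig.cas</groupId>
        <artifactId>cas-server-webapp</artifactId>
        <!-- Drop the old copy; the overlay supplies commons-io-2.4.jar. -->
        <excludes>
          <exclude>WEB-INF/lib/commons-io-2.0.jar</exclude>
        </excludes>
      </overlay>
    </overlays>
  </configuration>
</plugin>
```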
Subversion
Because CAS is maintained by Apereo in Git, and because of the way we structure and maintain the project, if you really wanted to track the history of CAS modules properly Git would be a much better choice. However, the Yale Build process is based on Subversion and that is what we have to maintain.
So in any given cycle of CAS development, you begin by obtaining a copy of the current CAS release from Apereo. You then replace the top level pom.xml with a new one created by merging any changes made by Apereo since the last release with the prior Yale top level pom.xml (and this has to be done manually by someone who understands Maven). Then you merge in the Yale added subdirectories.
This becomes a new project in the /cas area of Yale Subversion. Because it is nominally new, you start without history. If you want to see the history of Apereo code, use Git. If you want to see the actual history of Yale modules, find the Subversion directory of previous CAS development projects. Generally there are several years of stable operation between CAS development projects, and if the thing has been working for years you are probably no longer interested in the "ancient history" of a module.
The ReleaseNotes.html file in the root of the cas-server project will identify the current state of new directories and any modified files. This should be a descriptive match for the physical code change history in SVN.
The Jenkins Build and Install Jobs
In order to put CAS into production, you have to conform to the Yale Jenkins process.
There is a Jenkins Build job. The developer normally runs it to compile the current SVN trunk and produce a Version 1.1.x-SNAPSHOT of the cas.war artifact, which is stored in the Artifactory server. The convention of a Jenkins Build job is that it checks out a copy of the entire project from SVN to a Jenkins managed server and then does a "mvn clean install" on the top level project (the parent pom.xml). Maven will in turn do a clean install on each of the subprojects referenced by <module> statements to create updated artifacts, of which the only important one is the cas.war file.
There is a Jenkins Install job. It also checks out the installer project from SVN. That project also has a Maven POM, but the Yale convention is that it runs an Ant build.xml script to download a copy of the cas.war file created in a previous Build step and copy it over to the JBoss deploy directory. As files are copied, some edits are made "on the fly" to insert parameter values for the userids and passwords used to access databases or AD, or to configure special options in Spring.
The job of the developer is to commit changes to SVN that make the Build and Install job work properly to create the CAS instance running in DEV, TEST, and PROD. In order to make those changes, you need a desktop Sandbox environment that can not only compile, test, and debug CAS but can also prototype the Jenkins Build and Install process (without using Jenkins of course).
Development
CAS runs in production on Red Hat Enterprise Linux, but you can do development with Eclipse and JBoss running on Windows or any other OS. If you plan to work on SPNEGO, however, you should use a Linux development machine or VM because SPNEGO behavior depends on the native GSSAPI stack in the OS and that is different in Windows from the version of the stack in Linux.
Yale uses Eclipse as the Java IDE. If you do not already have Eclipse installed, start with the J2EE download package of the current release from eclipse.org. Later Eclipse releases preinstall features that used to be optional. Anything that does not come preinstalled can be added using the Help - Install New Software ... menu (if the option is an Eclipse project) or the Help - Eclipse Marketplace menu (Eclipse and third party tools, but check carefully because the Marketplace lists versions for every different release of Eclipse). You need:
- Maven support (M2E) which appears to be standard in the latest releases.
- Subversion support (Subversive), which is not automatically included because there were several competing projects. The first time you open a Subversive window you will get a popup to install a third party (Polarion) connector from a list; you should choose the latest version of SVNKit.
- JBoss Tools (at least the JBossAS manager) which is listed in Marketplace because it comes from Red Hat.
- AJDT, the Eclipse support for AspectJ which the JASIG CAS source uses (starting with the Luna release, Eclipse realizes that CAS uses AspectJ and automatically installs this component when it processes the CAS project if you have not preinstalled it).
- The standard Maven support comes with a version of Maven 3 built into Eclipse. That is exactly right for building the CAS executable, but it is a Yale convention that the Install job that copies the WAR file over to JBoss has to run under Maven 2. So you need to download the last Maven 2.2.1 release from apache.org and unzip it to a directory somewhere on your disk. Later on you will define this directory to Eclipse as an optional "Maven Installation" it can use to run jobs configured to use Maven 2 instead of the default Maven 3.
Check Out and Build the Project
Open the SVN Repository Exploring "perspective" (page) of Eclipse and define the repository url https://svn.its.yale.edu/repos/cas. This directory contains various cas clients and servers. Find the trunk of the current cas-server project and check it out.
The cas-server project is a Maven "parent" project with subprojects that build JARs and WARs in subdirectories. The Check Out As ... wizard normally assumes you are checking out a single project with a single result, so it will not configure this project properly on its own. Just check it out as a directory with no particular type, or you can try to configure it as a Maven project.
Return to the J2EE perspective, right click the new cas-server project directory, and choose Import - Maven - Existing Maven Projects. This is the point where the M2E Eclipse support for Maven discovers the parent and subdirectory structure. It reads through the parent POM to find the "modules", then scans the subdirectories for POM files that configure the subprojects. Then it presents a new dialog listing the projects it has found. Generally it has already found and configured the parent project, so only the subprojects will be checked.
If you are working on a CAS release that has already been configured, you will only see the subprojects that some previous Yale programmer decided were of interest at Yale. If you are working on a new release of CAS with a vanilla POM you may see a list of all the projects, and this may be a good time to deselect the CAS optional projects that Yale does not use.
Now sit back while the M2E logic tries to scan the POMs of all the subprojects and downloads the dependency JAR files from the internet. If you get a missing dependency for a Yale JAR file, then the $HOME/.m2/settings.xml file does not point to the Artifactory server where Yale stores its JAR files. You can ignore error messages about XML file syntax errors. In the Luna version of Eclipse, the M2E support discovers that CAS uses AspectJ and offers to install support for AspectJ in Eclipse.
A Maven project has standard source directories (src/main/java) used to build the output JAR or WAR artifact. A Maven POM has "dependency" declarations it uses to download JAR files (to your local Maven repository) and build a "classpath" used to find referenced classes at compile time. Eclipse has a "Build Path" (physically the .classpath file) that defines source directories, the location where the output class files are stored, and the libraries that have to be used to compile the source. M2E runs through the POM files in each subproject and creates an Eclipse project with an Eclipse Build Path that tells Eclipse to do exactly the same thing Maven would do to the same project. Eclipse will compile the same source in the same source directory and produce the same output to the same output directory using the same JAR libraries to run the compiler. It is important (because it answers questions later on) that you understand that Eclipse is being configured to do MOSTLY the same thing that Maven would do, but Eclipse does it without running Maven itself. The MOSTLY qualification is that M2E sets default values for the enormous number of options that an Eclipse project has (what error messages to display or hide, syntax coloring, everything else you can set if you right click the project and choose Preferences from the menu). After M2E creates an initial default batch of project settings, you are free to change them and ask Eclipse to do things slightly differently without breaking the development process.
At the end there should be several types of error messages that don't really matter:
- JSP, HTML, XML, XHTML, and JavaScript files may report errors because Eclipse is trying to validate syntax without having everything it needs to do the validation properly, or because some files are fragmentary and have to be assembled from pieces before they are well formed.
- The M2E default is to configure the Eclipse project to use Java 1.6 both as a language level for the compiler and as a JRE library default. If you are running on a machine with 1.7 or 1.8, you will get warning messages that the project is set up for 1.6 but there is no actual 1.6 known to Eclipse. You can ignore this message, or correct the default project settings.
Before the import, there was only the "cas-server" project (and the installer). The import creates an Eclipse "shadow project" (my name) for every subdirectory under cas-server that is needed. These shadow projects are Eclipse entities built from the Maven POM files. They contain Eclipse configuration and option elements (what Eclipse thinks of as the project and Build Path). What these shadow projects don't have are real source files. They contain configuration pointers to the real source files in the subdirectories of the cas-server project. As you open and look at these projects, they will appear to have source. However, this source will be structured as Java Packages, Java Classes, and Java Resources. This is a logical representation, in Java and WAR terms, of the actual source files that remain as subdirectories of the cas-server directory.
In a few cases, such as the Search option on the menu, you may look for files with certain text or attributes and the results of the search will display the same file twice, once as a physical text file in the cas-server directory tree and once as a Java Class source in the corresponding Eclipse project. Just realize that these are two different views of the same file, and changes you make to either view are also changes to the other view.
Interactive and Batch Modes
Eclipse tries to make J2EE development simpler than is reasonably possible. Eclipse is almost successful, close enough so that it is quite usable as long as you realize what cannot possibly be made to work exactly right and solve that problem with another tool.
The Maven POM contains declarations that drive a Java compiler running under Maven to compile all the source, and then it builds JAR, WAR, and EAR files. The M2E support for Maven inside Eclipse tries to understand the POM file and to create a set of Eclipse project configurations, including Build Path references to dependent libraries in the Maven local repository, to compile the source in the same way. This part works, so Eclipse can be a good source editor including autocomplete hints and the ability to find the source of classes you are using or all references to a method you are coding.
It is in the running and debugging of applications under JBoss or Tomcat that Eclipse may attempt to be too simple.
J2EE servers have a "deployment" directory where you put the WAR file that contains the Java classes and HTML/XML/JSP/CSS files. You can deploy the application as a single ZIP file, or you can deploy it as a directory of files.
Most Java application servers have a Hot Deploy capability that allows you to replace a single HTML or CSS file in the (non-ZIP, directory of files) application, and the server will notice the new file with its new change date and immediately start to use it. This can even work for some individual class files. However, Hot Deploy behavior is random and error prone if you attempt to change a number of files at the same time, because the application server can notice some changes in the middle of the update and start to act on an incomplete set of updates.
Eclipse likes the idea of a simple change to a single HTML, CSS, or even a simple Java source file taking effect immediately after you save the file in the editor. However, since it knows that Hot Deploy doesn't actually work if you physically copy a batch of files over to the Tomcat or JBoss deploy directory, the Eclipse J2EE runtime support has traditionally attempted to do something much more sophisticated. It "hacks" the normal Tomcat configuration file to insert a special Eclipse module that creates (to Tomcat) a "virtual" WAR file that does not actually exist in the Tomcat deploy directory but which is synthetically emulated by Eclipse using the source and class files that it has in the projects in its workspace.
This is a little better than Hot Deploy, because Eclipse can simulate swapping the entire set of changed files in a single transaction, but in the long run it doesn't work either. CAS, for example, is a big Spring application with a bunch of preloaded Spring Beans created in memory when the application starts up. The only way to change any of these beans is to bring the entire application down and restart it.
So successful CAS development requires you, even if all you want to do is change the wording on an HTML file, to stop the application server, run a real "mvn install" on the cas-server project to build a new WAR file, run a "mvn install" on the Installer project to copy it over to the application server deploy directory, and then restart the application server. The Eclipse capability to make some changes take effect immediately on a running server is simply too unreliable, and because of that we cannot use the entire "interactive" Eclipse application build strategy but must instead switch to a Maven "batch" artifact building and deployment process.
The advantage of the "batch" approach is that you are guaranteed to get a 100% exact duplicate behavior as you will get later on when you run the Yale Jenkins jobs to compile and install the application onto the test or production servers. Eclipse does a pretty good job of doing almost exactly the same thing that Maven will do, but it is not worth several days of debugging to finally discover that almost the same thing is not quite as good as exactly the same thing.
Since the Eclipse project was created from the Maven POM, it is configured to compile all the same source that Maven would compile and to put the resulting class files in the same output directory. Because Eclipse compiles the source when the project is first built, and then recompiles source after every edit, whenever you run the "mvn install" job Maven finds that the class files are up to date and does not recompile the source. If it is important to have Maven recompile source itself, then you have to do a "mvn clean install" step. Also, neither Maven nor Eclipse does a good job cleaning up class files after a source file is deleted, so when you make such a change you should "clean" the project to force a recompile of the remaining files.
Running Maven Jobs Under Eclipse
At any time you can right click a POM file and choose "Run as ..." and select a goal (clean, compile, install). Rather than taking all the default configuration parameters, or entering them each time you run Maven, you can build a Run Configuration. Basically, this is a simple job with one step that runs Maven. The Run Configuration panel allows you to select every possible option, including the Java Runtime you want to use to run Maven, and a specific Maven installation so you can run some things under Maven 2 and some under Maven 3, plus the standard Maven options to not run tests or to print verbose messages. Since the Jenkins jobs each run one Maven POM, in the Sandbox you can build two Maven Run Configurations that duplicate the function of each Jenkins job.
First, let's review what the Jenkins jobs do.
- The "Trunk Build" job checks the source project out of subversion. In this case, it is a "parent" project with subdirectories that all get checked out. It then runs a "mvn install" on the parent. The parent POM contains "module" XML elements indicating which subdirectory projects are to be run. Each subdirectory project generates a JAR or WAR artifact. The Jenkins Trunk Build installs these artifacts on the Artifactory server replacing any prior artifact with the same name and version number.
- The Jenkins Install job checks the source of the Install project out of SVN. By Yale convention, each Install project is an Ant build.xml script and a package of parameters in an install.properties file created from the Parameters entered or defaulted by the person running the Jenkins job. Minimally the Install job downloads the artifact from Artifactory and copies it to the JBoss deploy directory, although typically the artifact will be unzipped, variables will be located and replaced by the values in the properties file, and then the file will be rezipped and deployed.
In the Sandbox you already have a copy of the CAS source files checked out in your Eclipse project in your workspace, and you can also check out a copy of the Installer project. So there is no need to access SVN. Similarly, in the Sandbox we do not want to mess up Artifactory, so the local Maven repository (which defaults to a subdirectory tree under the .m2 directory in your HOME folder) holds the generated artifacts.
With these changes, a Build project compiles the source (in the Eclipse workspace) and generates artifacts in the .m2 Maven local repository. The Install project runs the Ant script with a special Sandbox version of install.properties to copy the artifact from the .m2 local repository to the Sandbox JBoss deploy directory (with property edits).
There is one last step. The Jenkins install job stops and starts JBoss. In the Sandbox you will manually start and stop the JBoss server using the JBoss AS Eclipse plugin. This plugin generates icons in the Eclipse toolbar to Start, Stop, and Debug JBoss under Eclipse. You Stop JBoss before running the Install job and then start it (typically using the Debug version of start) after the Install completes.
Now to the mechanics of defining a Run Configuration. You can view them by selecting the Run - Run Configurations menu option. Normally you think of using Run configurations to run an application, but once you install M2E there is also a Maven Build section. Here you can choose a particular Maven runtime directory, a set of goals ("clean install" for example), and some options to run Maven in a project directory. The recommendation is that on your sandbox machine you create two Maven Run Configurations. The configuration labelled "CAS Build" runs a Maven "install" or "clean install" goal on the parent POM of the CAS source project. The configuration labelled "CAS Install" runs Maven on the POM of the Installer project (and it has to run Maven 2 because this is still the standard for Installer jobs).
CAS Source Project Structure
In Maven, a POM with a packaging type of "pom" contains only documentation, parameters, and dependency configuration info. It is used to configure other projects, but generates no artifact. The parent "cas-server" directory is a "pom" project. All the subdirectories point to this parent project, so when the parent sets the Log4J version to 1.2.15 then all the subprojects inherit that version number from the parent and every part of CAS uses the same Log4J library to compile and at runtime.
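As a sketch (the coordinates here are illustrative, though the text above names the Log4J 1.2.15 example), the parent "pom" project pins shared library versions in a dependencyManagement section that every subproject inherits:

```xml
<!-- Sketch of the parent "pom"-packaging project; coordinates are illustrative -->
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.jasig.cas</groupId>
  <artifactId>cas-server</artifactId>
  <version>3.5.2</version>
  <packaging>pom</packaging>

  <!-- Any subproject that declares log4j as a dependency inherits this version -->
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.15</version>
      </dependency>
    </dependencies>
  </dependencyManagement>
</project>
```

A subproject then lists log4j as a dependency without a &lt;version&gt; element and gets 1.2.15 automatically.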
A "jar" Maven project compiles Java source and combines the resulting *.class files and any resource data files into a zip file with an extension of ".jar". This is an artifact that is stored in the repository and is used by other projects. The most important CAS subproject is "cas-server-core", which compiles most of the CAS source and puts it in a very large JAR file. The other CAS subprojects compile optional JAR files that you may or may not need depending on which configuration options you use. Any JAR file created by one of these subprojects will end up in the WEB-INF/lib directory of the CAS WAR file.
A type "war" Maven project builds a WAR file from three sources:
- It can have some Java source in src/main/java, which is compiled and the resulting class files are stored in the WEB-INF/classes directory of the WAR.
- The files in the src/main/webapp resource directory contain the HTML, JSP, XML, CSS, and other Web data files. They are simply copied to the WAR.
- Any JAR files that are listed as a runtime dependency not provided by the container in the WAR project POM are copied from the Maven repository to the WEB-INF/lib directory.
Then the resulting files are zipped up to create a WAR artifact which is stored in the Maven repository under the version number specified in the POM.
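A minimal sketch of a type "war" POM showing how dependency scope controls what lands in WEB-INF/lib (artifact names come from the text; the servlet-api entry is an illustrative example of a container-provided JAR):

```xml
<!-- Sketch of a type "war" project POM -->
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.jasig.cas</groupId>
  <artifactId>cas-server-webapp</artifactId>
  <version>3.5.2</version>
  <packaging>war</packaging>

  <dependencies>
    <!-- default (compile/runtime) scope: copied into WEB-INF/lib -->
    <dependency>
      <groupId>org.jasig.cas</groupId>
      <artifactId>cas-server-core</artifactId>
      <version>3.5.2</version>
    </dependency>
    <!-- "provided" scope: supplied by the container, kept out of the WAR -->
    <dependency>
      <groupId>javax.servlet</groupId>
      <artifactId>servlet-api</artifactId>
      <version>2.5</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>
</project>
```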
The "cas-server-webapp" project is a standard JASIG project that builds a WAR file, but that is not the WAR file we deploy.
WAR Overlay
Maven has a second mechanism for building WAR files. This is called a "WAR Overlay". It is triggered when the POM indicates that the project builds a WAR artifact, but the project also contains a WAR artifact as its first dependency. A WAR cannot contain a second WAR file, so when Maven sees this it decides that the current project is intended to update the contents of the WAR-dependency rather than building a new WAR from scratch.
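As a sketch (version number illustrative), the dependency that triggers overlay processing looks like this in the overlay project's POM; the &lt;type&gt;war&lt;/type&gt; element is what tells Maven to update an existing WAR rather than build one from scratch:

```xml
<!-- In the WAR Overlay project's POM: the prototype WAR as the first dependency -->
<dependency>
  <groupId>org.jasig.cas</groupId>
  <artifactId>cas-server-webapp</artifactId>
  <version>3.5.2</version>
  <type>war</type>
  <scope>runtime</scope>
</dependency>
```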
A WAR Overlay project can have Java source in src/main/java, in which case the source will be compiled and will be added to or replace identically named *.class files in the old WAR-dependency.
Mostly, the WAR Overlay replaces or adds files from src/main/webapp. This allows Yale to customize the CSS files or create special wording in the JSP and HTML pages.
JAR files listed as a runtime dependency of the WAR Overlay project are added to the WEB-INF/lib directory along with all the other JAR files in the original WAR. In this case, however, replacing doesn't make any sense. Because a JAR file's name includes its version number, an identically named JAR is a copy of the exact same file that is already there. And because the WAR Overlay only replaces identically named files, you really do not want an artifact with one version number in the WAR Overlay project and a different version number in the old WAR: the two copies cannot be merged and the result will be garbage.
JASIG recommends using the WAR Overlay mechanism, and the Yale CAS customizations follow that rule. In the JASIG-distributed source the WAR Overlay project subdirectory is named cas-server-uber-webapp, but Yale's version of this project is named cas-server-yale-webapp. This project produces the artifact named edu.yale.its.cas.cas-server-war with the version number that we track in Jenkins and Subversion as the actual CAS Server executable.
CAS has traditionally been created using the "WAR Overlay" technique of Maven. First, the cas-server-webapp directory builds a conventional WAR from its own files and dependencies. This is a generic starting-point WAR file with all the basic JAR files and XML configuration. Then a second type "war" project is run that (in the standard JASIG CAS distribution) is called cas-server-uber-webapp. What makes it different from a standard type "war" Maven project is that the first dependency in this project is the cas-server-webapp.war file built by the previous step.
A "WAR Overlay" project depends on an initial vanilla prototype WAR file as its starting point. Everything in that WAR will be copied to a new WAR file with a new name and version, unless a replacement file with the same name is found in the WAR Overlay. When Yale or any other CAS customer is building their own configuration of CAS, the WAR Overlay project directory provides a convenient place to store the Yale configuration changes that replace the prototype files of the same name in the previously run generic cas-server-webapp type "war" project.
If you made no changes at all to JASIG code, the WAR Overlay project is actually the only thing you would need to check out and edit. Yale would never need to locally compile any CAS source, because the JASIG repository version of the cas-server-webapp.war artifact for that release version number would be perfectly good input to the WAR Overlay process. However, at this time we include all the JASIG source and recompile the JASIG code every time we build the CAS source trunk. While that is a small waste of time, it keeps the structure in place so we can fix JASIG bugs if they become a problem.
The "cas-server-yale-webapp" project is the WAR Overlay that builds the artifact we actually deploy into production.
Source Build
CAS is distributed as a cas-server directory with a bunch of subdirectories and a POM file. The subdirectories are JAR and WAR projects. The POM file contains a certain amount of boilerplate for JASIG and the CAS project, plus two things that are important. The parameters and dependency management section ensure that all elements of CAS are compiled or built with the same version of library modules (quartz for timer scheduling, hibernate for object to database mapping, etc.). Then a list of <module> statements indicate which subdirectory projects are to be run, and the order in which they are run, when Maven processes the parent POM. The possible modules include:
- cas-server-core - This is the single really big project that contains most of the CAS Server code and builds the big JAR library artifact. It is built first because all the other projects depend on it and on the classes in its JAR file.
- cas-server-webapp - This project builds a prototype WAR file that contains the vanilla version of the JSP, CSS, HTML, and the XML files that configure Spring and everything else. The dependencies in this project load the basic required JAR libraries into WEB-INF/lib. This WAR is not intended to be installed or run on its own. It is the input to the second-step WAR Overlay, where additional library JAR files will be added and some of these initial files will be replaced with Yale customizations.
- Additional optional extension subprojects of CAS supporting specific features. For example, the cas-server-support-ldap module is needed if you are going to authenticate Netid passwords to AD using LDAP protocol (you can authenticate to AD using Kerberos, in which case you don't need this module). A JASIG standard distribution includes all possible subprojects and the parent cas-server POM file builds all of them. To save development time, you can comment out the subproject <module> statement in the parent POM of modules you don't use. Disk space is cheap, and deleting the subdirectory of modules you don't use may be confusing.
- Yale CAS extension JAR projects. For example, the cas-server-expired-password-change module is the Yale version of what CAS subsequently added as a feature called "LPPE". Yale continues to use its own code because the Yale Netid system involves databases and services beyond simply the Active Directory component.
- cas-server-yale-webapp - Yale customization is (unless it is impossible) stored in the WAR Overlay project directory. If we need Java code, we put this source here and it is compiled and ends up in WEB-INF/classes. We replace some standard JSP and CSS files to get the Yale look and feel and wording. The XML files that configure Spring and therefore CAS behavior are here. This project must always be the last module listed in the cas-server parent POM so it is built last. The cas-server-webapp project is a dependency, so that is the base WAR file that will be updated. This project replaces JSP, CSS, and XML files in the base WAR with identically named files from this project. It adds new files and library jars (from the dependency list of this project).
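The module list described above can be sketched as follows (module names other than those mentioned in the text are illustrative, and the exact set varies by release):

```xml
<!-- Sketch of the <modules> section of the cas-server parent POM -->
<modules>
  <module>cas-server-core</module>              <!-- built first; everything depends on it -->
  <module>cas-server-webapp</module>            <!-- the prototype WAR -->
  <module>cas-server-support-ldap</module>      <!-- optional: LDAP authentication to AD -->
  <!-- <module>cas-server-support-radius</module>  commented out: not used at Yale -->
  <module>cas-server-expired-password-change</module>  <!-- Yale extension JAR -->
  <module>cas-server-yale-webapp</module>       <!-- WAR Overlay; must be last -->
</modules>
```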
Yale will have modified the POM file in the cas-server parent project to comment out "module" references to optional artifacts that we are not using in the current CAS project. It makes no sense to build cas-server-support-radius if you are not using it.
It is possible for Yale to also delete the subproject subdirectory from the instance of cas-server source checked into SVN. If we are not building or using the module, we don't need to maintain it in the project source. However, this makes relatively little difference. In CAS 3.5.2 the entire JASIG source is 1628 files or 10 megabytes of storage, while the optional projects we don't use account for only 210 files or 500K. So deleting the unused optional source modules saves very little time for every Trunk Build checkout and since we don't compile them, there is no processing time cost.
The Road Not Taken
The reference from a child project to its parent uses an artifact name and version, and the parent is found in the local Maven repository rather than the parent directory. So storing the subprojects as subdirectories under the cas-server parent is not a physical requirement. It does make things simpler.
It would have been possible to separate out the JASIG code in one directory of SVN, and then put the Yale code in another directory. This would slightly simplify the documentation because you would know what is Yale and what is JASIG, but that would be only a slight improvement.
We still have to decide how to handle bugs or problems in the JASIG source. When an individual file has been changed in the JASIG source because that is the only place it can be changed, then that file and the project directory that contains it is no longer vanilla JASIG code. At that point there has to be an entry in the Release Notes to show that the file has been changed, and now separate directories do not simplify the documentation.
This also means that there have to be two Jenkins Build jobs, one for the JASIG part of the source and one for the Yale part of the source. Remember, the Jenkins job structure at Yale assumes that a Build job checks out one directory and all its subdirectories.
Therefore, we compared the advantages and disadvantages, flipped a coin, and decided to check into SVN a directory consisting of JASIG source merged with Yale created project subdirectories. We do not claim that this is the right way to do things, but neither is it obviously wrong.
The POM in a vanilla JASIG component subproject will have a Version number that corresponds to CAS releases (3.5.2). The POM in a Yale customization subcomponent will have a Version number that corresponds to the Yale CAS installation "Release" number for Jenkins tracking (1.1.0-SNAPSHOT). The artifact name and version number in the WAR Overlay project must match the artifact name and version number of the Jenkins Install job parameters.
Standalone.xml
In theory, the developer of a project does not have passwords for the production databases, so runtime parameters are provided from a source outside the WAR. A common Yale convention was to build XML files defining each database as a JBoss Datasource and creating a service that stuffed a group of parameters into the JBoss JNDI space, where they could be read at runtime using the built-in java.naming classes. The Install job used to build these XML files.
However, the JBoss EAP 6.1 rewrite has changed the JBoss best practice. Now both Datasources and JNDI values are stored in the main "subsystem" configuration file. In the sandbox, this is the standalone.xml file in the /config subdirectory of the JBoss server. You need to get a copy of an appropriate configuration file for your JBoss and install it yourself. Production services will be responsible for providing the file in DEV, TEST, and PROD, but you need to notify them if you add something during sandbox development so they can also add it to the versions they manage.
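As a rough sketch (names, URLs, the password, and the schema version numbers are all illustrative and vary by JBoss release), a Datasource and a JNDI value in standalone.xml look something like this:

```xml
<!-- Illustrative standalone.xml fragments; adjust namespaces to your JBoss release -->
<subsystem xmlns="urn:jboss:domain:datasources:1.1">
  <datasources>
    <datasource jndi-name="java:/casDataSource" pool-name="casDS">
      <connection-url>jdbc:oracle:thin:@dbhost.example.yale.edu:1521:CASDB</connection-url>
      <driver>oracle</driver>
      <security>
        <user-name>casuser</user-name>
        <password>changeit</password>
      </security>
    </datasource>
  </datasources>
</subsystem>

<subsystem xmlns="urn:jboss:domain:naming:1.2">
  <bindings>
    <!-- readable at runtime through the java.naming (JNDI) classes -->
    <simple name="java:global/cas/some.parameter" value="some-value"/>
  </bindings>
</subsystem>
```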
Best Practice
Check out from SVN the current version of the cas-server source. Also check out the Installer project directory. Get a copy of install.properties for the Installer project from someone who has one, and make sure the jboss.deploy.dir points to the location where you put JBoss on your sandbox.
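A hypothetical install.properties fragment for orientation (the property name jboss.deploy.dir comes from the text; the path and any other properties are examples only — copy the real file from someone who has one, as described above):

```properties
# Hypothetical sketch; the real property names come from the Installer project
jboss.deploy.dir=C:/jboss-eap-6.1/standalone/deployments
```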
If it is at all possible, make changes only to the WAR Overlay project subdirectory.
M2E generates the .settings directory and the .project and .classpath Eclipse project files, but we generally do not check them into SVN because they may be different depending on which Eclipse optional plugins you have installed and what release of Eclipse you are running. Maven generates the target directory with all the generated class files and artifact structure, but that also should never be checked into SVN. To avoid having these directories even show up when you are synchronizing your edits with the SVN trunk, go to the Eclipse Menu for Window - Preferences - Team - Ignored Resources and Add Pattern for "target", ".settings", ".classpath", and ".project" (omitting the quotes of course).
CAS is vastly over-engineered. An effort was made to deliver a product that anyone can configure and customize using only XML files; no Java programming is required. So if you are a Java programmer, you already have an advantage over the target audience.
CAS has a set of key interfaces, each implemented by one or more Java classes. The choice of which Java class you will use as an AuthenticationManager, AuthenticationHandler, TicketCache, or Web Flow state is made when the fully qualified name of the class is specified in one of the XML bean configuration statements.
Sometimes you need behavior that is just slightly different from the standard JASIG behavior. Make a reasonable effort to see if you can find an alternate implementation class, or an alternate configuration of the standard class, that has the behavior you want. If not, then instead of modifying the org.jasig.cas code in the JASIG source, see if this is one of the "bean" classes named in the XML. If so, then just make your own copy of the original source in the WAR Overlay project source area and rename its package to an edu.yale.its.tp.cas name. Change it to do what you want, and then change the fully qualified name in the XML to reference the new name of the modified class you have put in the Yale customization project.
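For example (the bean id and both class names here are made up for illustration), the swap is a one-line change in the Spring bean XML:

```xml
<!-- Before: the XML names the stock JASIG implementation -->
<bean id="passwordEncoder"
      class="org.jasig.cas.authentication.handler.SomePasswordEncoder" />

<!-- After: the same bean id points at the modified Yale copy in WEB-INF/classes -->
<bean id="passwordEncoder"
      class="edu.yale.its.tp.cas.authentication.handler.SomePasswordEncoder" />
```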
Of course, JASIG may change the source of its original class in some future release. However, classes that implement standard interfaces do not really need to pick up every JASIG change. If they fix a bug, then you want to copy over the new code. If they simply add a new feature or change the behavior of something that you are not using, then there is no need to update your modified copy of the original JASIG code. Essentially, the modified code has become 100% Yale stuff no matter who originally authored it and if it continues to do what you need then there is no need to change it in future releases, unless the interface changes.
Most strategic changes in behavior can be coded in Java source compiled as part of the WAR Overlay project. What cannot be changed in the WAR Overlay project? If a class is not named in the XML configuration files, but instead is imported into some other Java source file, then you cannot in general replace it with new code in the WEB-INF/classes directory.
This is a consequence of some fine print in the Java implementation of ClassLoaders. Since this is computer logic stuff, I can boil it down to some simple logic, but it has to be expressed as a set of rules:
- Every class is loaded by a ClassLoader.
- In JBoss, there is one ClassLoader for WEB-INF/classes and a second ClassLoader for WEB-INF/lib. Then there are ClassLoaders for EARs and for the JBoss module JAR files, but they aren't important here.
- When the XML references a class by name, JBoss searches first through the classes compiled in the WAR Overlay project and stored in the WEB-INF/classes directory, and then it searches through WEB-INF/lib. So it finds the WAR Overlay class first and uses it.
- However, when an already loaded class references another class with an import statement, Java searches for the new class by first using the ClassLoader that loaded the class that is looking for it. This means that any class loaded from WEB-INF/lib will search for all of its imported names in WEB-INF/lib first, and only later search WEB-INF/classes.
Therefore, you can override XML-named classes using source from the WAR Overlay project, but the other classes that are imported into JASIG source files probably have to be edited and changed in their original JASIG project. If you have to make a change, identify the modified JASIG source in the Release Notes, and if it is a bug fix, submit it to JASIG to be fixed in future releases.
The Debug Cycle
Make changes to the files and save them.
Once you run the CAS Build and CAS Install Run Configuration once, they appear in the recently used Run Configurations from the Run pulldown in the Eclipse toolbar. There is a Run icon (a right pointing triangle on a green circle) and following it there is a downward pointing "pulldown menu" icon that shows the list of recently run configurations.
Once you install JBoss Tools, there is a separate JBoss Run icon (a green right arrow) farther over to the right on the toolbar. However, during testing you probably do not want to run JBoss in normal mode but instead want to press the JBoss Debug button (the icon of a bug that follows the JBoss Run arrow on the toolbar).
So the normal cycle is to stop the JBoss server, edit and save the source, then run the CAS Build Maven job, then the CAS Install Maven job, and then restart JBoss (in debug mode).
After The Sandbox
Changed files must eventually be Committed to the SVN trunk.
Do not commit the generated "target" subdirectories to SVN. They contain temporary files that should not be saved.
Once you are ready to proceed to the next phase, run a Jenkins Trunk Build and do a DEV Install.