CAS was created at Yale, and Versions 1 and 2 of the CAS code were written at Yale. Version 3 was completely rewritten and has been managed by a consortium of universities called "JA-SIG". Recently JA-SIG merged with the Sakai Foundation, and the combined organization was renamed "Apereo". In Yale CAS documentation, any reference to jasig.org should be understood to now reference apereo.org.
The CAS Project Directory Structure
Any given release of the CAS Server can be downloaded as a zip file from Apereo, or it can be checked out from the Git server used by CAS developers as documented at the Apereo Web site. Yale takes this source tree, adds subdirectories containing the Yale customizations, and then checks it into Yale's Subversion server where the Yale standard Build process begins. Depending on context, the CAS "project" could mean the original Apereo distribution or the modified source tree stored in SVN.
The CAS Project is structured as a "parent Maven project with subprojects". This is a common Maven way to package multiple projects that have to be compiled and built in a particular order. The outer directory contains the parent or master POM that establishes parameters shared by all the subprojects and a list of the subprojects to be built in the order in which they should run.
Each subproject creates a JAR or WAR file. The first project is cas-server-core and it contains about 95% or more of all the CAS code. It has to be built first because all the other projects depend on it. After that, there are projects to create optional components that you may or may not choose to use.
Building the WAR
The CAS WAR that you actually run in the Web server is built in two steps in two Maven projects.
Apereo distributes a project called cas-server-webapp to create an initial prototype WAR file. The WAR it creates is not particularly useful, but it contains at least a starter version of all the Spring XML used to configure CAS and many CSS, JSP, and HTML pages. It also contains a WEB-INF/lib with the basic JAR libraries needed by a basic CAS system.
Although you can modify the cas-server-webapp project directly, Apereo recommends using the WAR Overlay feature of Maven. This simplifies the common case where the only changes you are making to CAS are to replace specific JSP and HTML pages with your own wording, CSS files with your own look and feel, or Spring XML beans configured differently. In a WAR Overlay project you reference a "template" WAR containing a starting set of files (the WAR created by cas-server-webapp in this case), and then you include in your Overlay project only the things you want to change or add. Maven copies the template WAR to the output artifact, but it replaces any file in the template that has the same name as a file in the Overlay project, and it adds any file in the Overlay project that doesn't exist in the template. Therefore the Overlay project contains only the Yale changes and doesn't have to include all the unchanged vanilla Apereo files.
If you use any of the optional modules (ehcache, ldap, oauth, etc.) then you configure them in the Overlay project in modified Spring XML files and include them as dependencies in the Overlay project POM. Since you cannot include a JAR library until after it has been created, the Overlay project should be the last project executed, except possibly for projects that generate Javadoc or other documentation.
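As a rough sketch of what such an Overlay POM looks like (the coordinates, versions, and the choice of cas-server-support-ldap below are illustrative, not a copy of the actual Yale POM):

    <project>
      <modelVersion>4.0.0</modelVersion>
      <groupId>edu.yale.its.cas</groupId>
      <artifactId>cas-server-war</artifactId>
      <version>1.1.1-SNAPSHOT</version>
      <packaging>war</packaging>

      <dependencies>
        <!-- The template WAR: everything in it is copied to the output WAR
             unless this project contains a file with the same name -->
        <dependency>
          <groupId>org.jasig.cas</groupId>
          <artifactId>cas-server-webapp</artifactId>
          <version>3.5.2.1</version>
          <type>war</type>
        </dependency>
        <!-- Optional CAS modules used at Yale are added as ordinary JAR
             dependencies and end up in WEB-INF/lib -->
        <dependency>
          <groupId>org.jasig.cas</groupId>
          <artifactId>cas-server-support-ldap</artifactId>
          <version>3.5.2.1</version>
          <scope>runtime</scope>
        </dependency>
      </dependencies>
    </project>

Files placed under src/main/webapp and src/main/java in this project then replace or add to what came from the template WAR.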
The Yale Overlay project is in a directory called cas-server-yale-webapp. However, Eclipse frequently displays the project under its internal Maven artifact name, so it often appears as cas-server-war.
At Yale, when you run the Jenkins Build job for CAS, the result is a cas.war file stored in the Yale Artifactory server with a version number and some hierarchical naming. An example is Version 1.1.1 of the Yale CAS Server stored as https://repository.its.yale.edu/maven2/libs-releases-local/edu/yale/its/cas/cas-server-war/1.1.1/cas-server-war-1.1.1.war. However, you cannot build the WAR until you have built everything it depends on, so the Jenkins Build job runs the parent Maven project in the top level directory, which builds all the prerequisite projects first and the WAR file last.
The Parent Project
Whenever Yale decides to upgrade from one release of CAS to another, a major part of the work is to adapt the old Yale parent project (the pom.xml file in the root of the CAS Project directory) to the new release. This is not a case where you are simply modifying some Apereo file with Yale changes. The parent project reflects the thing you are building.
Apereo is building a template version of CAS that doesn't do anything particularly interesting but at least creates vanilla versions of all the JAR and WAR files. The most important thing is that these files conform to the rules for Open Source projects distributed publicly. They have to have the right copyright statements and the right license boilerplate.
Yale is building a version of CAS that has to satisfy Yale internal needs. It doesn't care about copyright statements or licenses because we are not giving this code to anyone else. We can strip out the lists of developers and contributors, not because we don't appreciate their contributions, but because they are not particularly relevant to doing the Yale customizations.
These are really two quite different objectives, and they have a fairly large impact on the structure of the parent POM. As a result, you have to merge elements from the old Yale parent and the new Apereo parent. The rest of this section is a tour of the Yale version of the parent POM.
- Version - The Apereo parent will have the version number of the CAS release (say 3.5.2.1). The Yale parent will have the version that Yale internally assigns to its CAS artifact (1.1.1-SNAPSHOT for example). The general rule is that Apereo projects and artifacts retain their old version number unless they are changed, but if Yale has to make even a single line of modification to a project then it becomes a Yale project and is renumbered into the Yale development cycle. Fortunately, CAS started at JASIG with 3.x so we can number anything 1.x without conflict.
- SCM - defines the Yale Subversion server where Yale maintains its production source and not the Apereo Git server used to develop new releases.
- Repositories - defines Yale's Artifactory, but merged with other sources for artifacts from the Apereo pom.
- pluginManagement - Maven is a modular system with extensible functions provided by plugin modules. When a project requires special configuration or processing, it can add parameters to the configuration of a particular plugin in this section of the POM file. For the most part Yale copies this section from the Apereo POM file, but specific parameters have to be changed as a secondary result of Yale decisions (see the sketch after this list). For example, because we choose to run CAS under Java 1.7 instead of 1.6, the version of the aspectj-maven-plugin has to be increased from "1.4" as distributed by Apereo to "1.5". Another example: the Yale Jenkins Build job operation "Perform Release" takes version "1.1.1-SNAPSHOT" source, creates a "1.1.1" version tag and artifact, and then leaves SVN set to develop "1.1.2-SNAPSHOT". Running unit tests in the middle of that operation is not helpful, so at Yale it is disabled by adding <arguments>-Dmaven.test.skip=true</arguments> to the configuration of the maven-release-plugin.
- dependencyManagement - All the Apereo code is coordinated to use the same release of every dependency JAR file. However, Apereo does not bother to specify explicit versions of secondary dependencies (that is, JAR files required by a library that CAS uses). When Yale adds its own code to the mix, we may depend on a particular version of a library where the default behavior of Maven previously selected a different version of the same library. For example, in CAS 3.5.2.1, cas-server-core depends on a library that itself depends on commons-io, and from the chain of POM files Maven selects commons-io Version 2.0, while Yale code requires Version 2.4. When two different subprojects independently resolve to different versions of the same dependency, the parent POM has to decide which wins. Basically this section should contain the latest version selected by either Apereo or Yale (and of course nobody should use a library whose developers are dumb enough not to maintain reasonable backward compatibility).
- modules - Yale wants to use vanilla code distributed by Apereo. If we have no changes to distributed CAS code, then there is no reason to recompile it. So initially Yale comments out all the modules that Apereo wrote and adds uncommented module statements for the Yale developed projects; the Build process then compiles and builds only the Yale changes. However, if it is necessary to change Apereo code, then the changed module has to be uncommented so it can be rebuilt. Also, when you check out code into a new Eclipse directory you may temporarily uncomment the module statement for an important project and tell Eclipse to generate an Eclipse project to build that module, then revert back to the version of the POM where the module is commented out. Letting Eclipse configure the project and compile the code once gives you all the Eclipse IDE support for the classes and methods built by that module, and it lets you trace into the module source during debugging.
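To make the preceding list concrete, here is an abbreviated sketch of the kinds of elements the merged Yale parent POM ends up with. The element names are standard Maven; the plugins and modules shown are only the examples mentioned above, not the complete file.

    <build>
      <pluginManagement>
        <plugins>
          <!-- raised from 1.4 to 1.5 because Yale builds with Java 1.7 -->
          <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>aspectj-maven-plugin</artifactId>
            <version>1.5</version>
          </plugin>
          <!-- skip unit tests during the Jenkins "Perform Release" operation -->
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-release-plugin</artifactId>
            <configuration>
              <arguments>-Dmaven.test.skip=true</arguments>
            </configuration>
          </plugin>
        </plugins>
      </pluginManagement>
    </build>

    <dependencyManagement>
      <dependencies>
        <!-- Yale code needs 2.4; vanilla dependency resolution picked 2.0 -->
        <dependency>
          <groupId>commons-io</groupId>
          <artifactId>commons-io</artifactId>
          <version>2.4</version>
        </dependency>
      </dependencies>
    </dependencyManagement>

    <modules>
      <!-- vanilla Apereo modules stay commented out unless one must be rebuilt -->
      <!-- <module>cas-server-core</module> -->
      <module>cas-server-expired-password-change</module>
      <!-- the WAR Overlay project is always the last module -->
      <module>cas-server-yale-webapp</module>
    </modules>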
Yale adds several new Yale specific subprojects to the vanilla Apereo Maven source project directory tree. Some of the projects create JAR files that add specific functions to CAS. The final project builds the actual WAR file that is deployed to JBoss to run the CAS Server at Yale. No changes are made to the Apereo code unless there is a bug that needs to be fixed immediately.
It is a bit strange to check a large block of vendor source into a Yale Subversion directory that is supposed to contain Yale customizations. If there is a better way to handle this code, we can certainly adopt it. However, the Apereo CAS project uses Git as a source control system, while the Yale project management conventions still use Subversion. So to get Apereo code into the Yale Subversion system you have to migrate it across source management systems and probably lose the history data during the migration. Since we don't actually change the vanilla code, this is not a problem. If you want to track the history of a vanilla Apereo source file, use a Git client and get the history from it. The Yale Subversion directory is simply a copy of the most convenient Eclipse workspace project that one can use to do CAS development based on a particular release of CAS from Apereo.
A copy of the CAS Server source with the additional Yale projects is always checked into Yale Subversion as https://svn.its.yale.edu/repos/cas. If you are making changes to the current release, then everything you need is there. If you intend to upgrade to a new release of the Apereo code, then you obtain a new copy of the source distribution for that release (either download the zip file or check it out from Git) and then copy the Yale added subdirectories from the old release project to the new release project and change the pom.xml file in the new parent to execute the projects you want to build. Copying the Yale files from one project to another will lose Yale history, but Yale doesn't change CAS code except every four years or so. If you have been running a version of the code for 4 years without modifications, then the day to day history of the old development project is no longer important.
Some might suggest that the Apereo code be kept in one project tree and the Yale code be kept in a different tree. However, the original Apereo project was designed to work with a single parent and subdirectories, and we decided not to try to change that design. Besides, if all the source from Apereo and Yale is available in your Eclipse project workspace, then all the tools to search for references to a field, or to step into a subroutine during debugging, automatically work correctly. It is not hard to remember not to change non-Yale code without a good reason, and to document any such change so people know it is an issue when we move to a new release of the Apereo base.
The ReleaseNotes.html file in the root of the cas-server project will identify the current state of new directories and any modified files. This should be a descriptive match for the physical code change history in SVN.
CAS runs in production on Red Hat Enterprise Linux, but you can do development with Eclipse and JBoss running on Windows or any other OS. If you plan to work on SPNEGO, however, you should use a Linux development machine or VM because SPNEGO behavior depends on the native GSSAPI stack in the OS and that is different in Windows from the version of the stack in Linux.
Yale uses Eclipse as the Java IDE. If you do not already have Eclipse installed, start with the J2EE download package of the current release from eclipse.org. Later Eclipse releases bundle features that were previously optional add-ons. Anything that does not come preinstalled can be added using the Help - Install New Software ... menu (if the option is an Eclipse project) or the Help - Eclipse Marketplace menu (Eclipse and third party tools, but check carefully because the Marketplace lists versions for every different release of Eclipse). You need:
- Maven support (M2E) which appears to be standard in the latest releases.
- Subversion support (Subversive), which is not automatically included because there were several competing projects. The first time you open a Subversive window you will get a popup to install a third party (Polarion) connector from a list; choose the latest version of SVNKit.
- JBoss Tools (at least the JBossAS manager) which is listed in Marketplace because it comes from Red Hat.
- AJDT, the Eclipse support for AspectJ which the JASIG CAS source uses (starting with the Luna release, Eclipse realizes that CAS uses AspectJ and automatically installs this component when it processes the CAS project if you have not preinstalled it).
- The standard Maven support comes with a version of Maven 3 built into Eclipse. That is exactly right for building the CAS executable, but it is a Yale convention that the Install job that copies the WAR file over to JBoss has to run under Maven 2. So you need to download the last Maven 2 release (2.2.1) from apache.org and unzip it to a directory somewhere on your disk. Later on you will define this directory to Eclipse as an additional "Maven Installation" it can use to run jobs configured to use Maven 2 instead of the default 3.
Check Out and Build the Project
Open the SVN Repository Exploring "perspective" (page) of Eclipse and define the repository url https://svn.its.yale.edu/repos/cas. This directory contains various cas clients and servers. Find the trunk of the current cas-server project and check it out.
The cas-server project is a Maven "parent" project with subprojects that build JARs and WARs in subdirectories. The Check Out As ... wizard normally assumes you are checking out a single project with a single result, so it will not configure this project properly on its own. Just check it out as a directory with no particular type, or you can try to configure it as a Maven project.
Return to the J2EE perspective, right click the new cas-server project directory, and choose Import - Maven - Existing Maven Projects. This is the point where the M2E Eclipse support for Maven discovers the parent and subdirectory structure. It reads through the parent POM to find the "modules", then scans the subdirectories for POM files that configure the subprojects. Then it presents a new dialog listing the projects it has found. Generally it has already found and configured the parent project, so only the subprojects will be checked.
If you are working on a CAS release that has already been configured, you will only see the subprojects that some previous Yale programmer decided were of interest at Yale. If you are working on a new release of CAS with a vanilla POM you may see a list of all the projects, and this may be a good time to deselect the CAS optional projects that Yale does not use.
Now sit back while the M2E logic tries to scan the POMs of all the subprojects and downloads the dependency JAR files from the internet. If you get a missing dependency for a Yale JAR file, then the $HOME/.m2/settings.xml file does not point to the Artifactory server where Yale stores its JAR files. You can ignore error messages about XML file syntax errors. In the Luna version of Eclipse, the M2E support discovers that CAS uses AspectJ and offers to install support for AspectJ in Eclipse.
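A minimal settings.xml profile that points Maven at an internal repository looks something like the sketch below. The profile and repository ids are made up, and the URL is only inferred from the Artifactory path shown earlier; take the real values from whoever manages the Yale Artifactory server.

    <settings>
      <profiles>
        <profile>
          <id>yale</id>
          <repositories>
            <repository>
              <id>yale-artifactory</id>
              <url>https://repository.its.yale.edu/maven2/libs-releases-local</url>
            </repository>
          </repositories>
        </profile>
      </profiles>
      <activeProfiles>
        <activeProfile>yale</activeProfile>
      </activeProfiles>
    </settings>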
A Maven project has standard source directories (src/main/java) used to build the output JAR or WAR artifact. A Maven POM has "dependency" declarations it uses to download JAR files (to your local Maven repository) and build a "classpath" used to find referenced classes at compile time. Eclipse has a "Build Path" (physically the .classpath file) that defines source directories, the location where the output class files are stored, and the libraries that have to be used to compile the source. M2E runs through the POM files in each subproject and creates an Eclipse project with an Eclipse Build Path that tells Eclipse to do exactly the same thing Maven would do with the same project. Eclipse will compile the same source in the same source directory and produce the same output to the same output directory using the same JAR libraries to run the compiler.
It is important (because it answers questions later on) that you understand that Eclipse is being configured to do MOSTLY the same thing that Maven would do, but Eclipse does it without running Maven itself. The MOSTLY qualification is that M2E sets default values for the enormous number of options that an Eclipse project has (which error messages to display or hide, syntax coloring, everything else you can set if you right click the project and choose Preferences from the menu). After M2E creates an initial default batch of project settings, you are free to change them and ask Eclipse to do things slightly differently without breaking the development process.
At the end there should be several types of error messages that don't really matter:
- JSP, HTML, XML, XHTML, and JavaScript files may report errors because Eclipse is trying to validate syntax without having everything it needs to do the validation properly, or because some files are fragmentary and have to be assembled from pieces before they are well formed.
- The M2E default is to configure the Eclipse project to use Java 1.6 both as a language level for the compiler and as a JRE library default. If you are running on a machine with 1.7 or 1.8, you will get warning messages that the project is set up for 1.6 but there is no actual 1.6 known to Eclipse. You can ignore this message, or correct the default project settings.
Before the import, there was only the "cas-server" project (and the installer). The import creates an Eclipse "shadow project" (my name) for every subdirectory under cas-server that is needed. These shadow projects are Eclipse entities built from the Maven POM files. They contain Eclipse configuration and option elements (what Eclipse thinks of as the project and Build Path). What these shadow projects don't have are real source files. They contain configuration pointers to the real source files in the subdirectories of the cas-server project. As you open and look at these projects, they will appear to have source. However, this source will be structured as Java Packages and Java Classes and Java Resources. This is a logical representation, in Java and WAR terms, of the actual source files that remain in subdirectories of the cas-server directory.
In a few cases, such as the Search option on the menu, you may look for files with certain text or attributes and the results of the search will display the same file twice, once as a physical text file in the cas-server directory tree and once as a Java Class source in the corresponding Eclipse project. Just realize that these are two different views of the same file, and changes you make to either view are also changes to the other view.
Interactive and Batch Modes
Eclipse tries to make J2EE development simpler than is reasonably possible. Eclipse is almost successful, close enough so that it is quite usable as long as you realize what cannot possibly be made to work exactly right and solve that problem with another tool.
The Maven POM contains declarations that drive a Java compiler running under Maven to compile all the source, and then it builds JAR, WAR, and EAR files. The M2E support for Maven inside Eclipse tries to understand the POM file and to create a set of Eclipse project configurations, including Build Path references to dependent libraries in the Maven local repository, to compile the source in the same way. This part works, so Eclipse can be a good source editor including autocomplete hints and the ability to find the source of classes you are using or all references to a method you are coding.
It is in the running and debugging of applications under JBoss or Tomcat that Eclipse may attempt to be too simple.
J2EE servers have a "deployment" directory where you put the WAR file that contains the Java classes and HTML/XML/JSP/CSS files. You can deploy the application as a single ZIP file, or you can deploy it as a directory of files.
Most Java application servers have a Hot Deploy capability that allows you to replace a single HTML or CSS file in the (non-ZIP, directory of files) application, and the server will notice the new file with its new change date and immediately start to use it. This can even work for some individual class files. However, Hot Deploy behavior is random and error prone if you attempt to change a number of files at the same time, because the application server can notice some changes in the middle of the update and start to act on an incomplete set of updates.
Eclipse likes the idea of a simple change to a single HTML, CSS, or even a simple Java source file taking effect immediately after you save the file in the editor. However, since it knows that Hot Deploy doesn't actually work if you physically copy a batch of files over to the Tomcat or JBoss deploy directory, the Eclipse J2EE runtime support has traditionally attempted to do something much more sophisticated. It "hacks" the normal Tomcat configuration file to insert a special Eclipse module that creates (to Tomcat) a "virtual" WAR file that does not actually exist in the Tomcat deploy directory but which is synthetically emulated by Eclipse using the source and class files that it has in the projects in its workspace.
This is a little better than Hot Deploy, because Eclipse can simulate swapping the entire set of changed files in a single transaction, but in the long run it doesn't work either. CAS, for example, is a big Spring application with a bunch of preloaded Spring Beans created in memory when the application starts up. The only way to change any of these beans is to bring the entire application down and restart it.
So successful CAS development requires you, even if all you want to do is change the wording on an HTML file, to stop the application server, run a real "mvn install" on the cas-server project to build a new WAR file, run a "mvn install" on the Installer project to copy it over to the application server deploy directory, and then restart the application server. The Eclipse capability to make some changes take effect immediately on a running server is simply too unreliable, and because of that we cannot use the entire "interactive" Eclipse application build strategy but must instead switch to a Maven "batch" artifact building and deployment process.
The advantage of the "batch" approach is that you are guaranteed to get exactly the same behavior you will get later on when you run the Yale Jenkins jobs to compile and install the application onto the test or production servers. Eclipse does a pretty good job of doing almost exactly the same thing that Maven will do, but it is not worth several days of debugging to finally discover that almost the same thing is not quite as good as exactly the same thing.
Since the Eclipse project was created from the Maven POM, it is configured to compile all the same source that Maven would compile and to put the resulting class files in the same output directory. Because Eclipse compiles the source when the project is first built, and then recompiles source after every edit, whenever you run the "mvn install" job Maven finds that the class files are up to date and does not recompile the source. If it is important to have Maven recompile source itself, then you have to do a "mvn clean install" step. Also, neither Maven nor Eclipse does a good job cleaning up class files after a source file is deleted, so when you make such a change you should "clean" the project to force a recompile of the remaining files.
Running Maven Jobs Under Eclipse
At any time you can right click a POM file and choose "Run as ..." and select a goal (clean, compile, install). Rather than taking all the default configuration parameters, or entering them each time you run Maven, you can build a Run Configuration. Basically, this is a simple job with one step that runs Maven. The Run Configuration panel allows you to select every possible option, including the Java Runtime you want to use to run Maven, and a specific Maven installation so you can run some things under Maven 2 and some under Maven 3, plus the standard Maven options to not run tests or to print verbose messages. Since the Jenkins jobs each run one Maven POM, in the Sandbox you can build two Maven Run Configurations that duplicate the function of each Jenkins job.
First, let's review what the Jenkins jobs do.
- The "Trunk Build" job checks the source project out of subversion. In this case, it is a "parent" project with subdirectories that all get checked out. It then runs a "mvn install" on the parent. The parent POM contains "module" XML elements indicating which subdirectory projects are to be run. Each subdirectory project generates a JAR or WAR artifact. The Jenkins Trunk Build installs these artifacts on the Artifactory server replacing any prior artifact with the same name and version number.
- The Jenkins Install job checks the source of the Install project out of SVN. By Yale convention, each Install project is an Ant build.xml script and a package of parameters in an install.properties file created from the Parameters entered or defaulted by the person running the Jenkins job. Minimally the Install job downloads the artifact from Artifactory and copies it to the JBoss deploy directory, although typically the artifact will be unzipped, variables will be located and replaced by the values in the properties file, and then the file will be rezipped and deployed.
In the Sandbox you already have a copy of the CAS source files checked out in your Eclipse project in your workspace, and you can also check out a copy of the Installer project. So there is no need to access SVN. Similarly, in the Sandbox we do not want to mess up Artifactory, so the local Maven repository (which defaults to a subdirectory tree under the .m2 directory in your HOME folder) holds the generated artifacts.
With these changes, a Build project compiles the source (in the Eclipse workspace) and generates artifacts in the .m2 Maven local repository. The Install project runs the Ant script with a special Sandbox version of the install.properties to copy the artifact from the .m2 local repository to the Sandbox JBoss deploy directory (with property edits).
There is one last step. The Jenkins Install job stops and starts JBoss. In the Sandbox you will manually start and stop the JBoss server using the JBoss AS Eclipse plugin. This plugin adds icons to the Eclipse toolbar to Start, Stop, and Debug JBoss under Eclipse. You stop JBoss before running the Install job and then start it (typically using the Debug version of start) after the Install completes.
Now to the mechanics of defining a Run Configuration. You can view them by selecting the Run - Run Configurations menu option. Normally you think of using Run configurations to run an application, but once you install M2E there is also a Maven Build section. Here you can choose a particular Maven runtime directory, a set of goals ("clean install" for example), and some options to run Maven in a project directory. The recommendation is that on your sandbox machine you create two Maven Run Configurations. The configuration labelled "CAS Build" runs a Maven "install" or "clean install" goal on the parent POM of the CAS source project. The configuration labelled "CAS Install" runs Maven on the POM of the Installer project (and it has to run Maven 2 because this is still the standard for Installer jobs).
CAS Source Project Structure
In Maven, a POM with a packaging type of "pom" contains only documentation, parameters, and dependency configuration info. It is used to configure other projects, but generates no artifact. The parent "cas-server" directory is a "pom" project. All the subdirectories point to this parent project, so when the parent sets the Log4J version to 1.2.15 then all the subprojects inherit that version number from the parent and every part of CAS uses the same Log4J library to compile and at runtime.
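As a sketch of how that inheritance works (whether the real CAS parent uses a property, a literal version, or a slightly different structure is a detail that may differ):

    <!-- in the parent "pom" project -->
    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>log4j</groupId>
          <artifactId>log4j</artifactId>
          <version>1.2.15</version>
        </dependency>
      </dependencies>
    </dependencyManagement>

    <!-- in any subproject: no version element needed, the parent's choice applies -->
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
    </dependency>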
A "jar" Maven project compiles Java source and combines the resulting *.class files and any resource data files into a zip file with an extension of ".jar". This is an artifact that is stored in the repository and is used by other projects. The most important CAS subproject is "cas-server-core", which compiles most of the CAS source and puts it in a very large JAR file. The other CAS subprojects compile optional JAR files that you may or may not need depending on which configuration options you use. Any JAR file created by one of these subprojects will end up in the WEB-INF/lib directory of the CAS WAR file.
A type "war" Maven project builds a WAR file from three sources:
- It can have some Java source in src/main/java, which is compiled and the resulting class files are stored in the WEB-INF/classes directory of the WAR.
- The files in the src/main/webapp resource directory contain the HTML, JSP, XML, CSS, and other Web data files. They are simply copied to the WAR.
- Any JAR files that are listed as a runtime dependency not provided by the container in the WAR project POM are copied from the Maven repository to the WEB-INF/lib directory.
Then the resulting files are zipped up to create a WAR artifact which is stored in the Maven repository under the version number specified in the POM.
The "cas-server-webapp" project is a standard JASIG project that builds a WAR file, but that is not the WAR file we deploy.
WAR Overlay
Maven has a second mechanism for building WAR files. This is called a "WAR Overlay". It is triggered when the POM indicates that the project builds a WAR artifact, but the project also contains a WAR artifact as its first dependency. A WAR cannot contain a second WAR file, so when Maven sees this it decides that the current project is intended to update the contents of the WAR-dependency rather than building a new WAR from scratch.
A WAR Overlay project can have Java source in src/main/java, in which case the source will be compiled and will be added to or replace identically named *.class files in the old WAR-dependency.
Mostly, the WAR Overlay replaces or adds files from src/main/webapp. This allows Yale to customize the CSS files or create special wording in the JSP and HTML pages.
JAR files listed as a runtime dependency of the Web Overlay project are added to the WEB-INF/lib directory along with all the other JAR files in the original WAR. In this case, however, replacing doesn't make any sense. A JAR file with the same name has the same Version number, so it will be a copy of the exact same file that is already there. Because WAR Overlay only replaces identically named files, you really do not want to have an artifact with one Version number in the WAR Overlay project and a different Version number in the old WAR because they cannot be merged and the result will be garbage.
JASIG recommends using the WAR Overlay mechanism, and the Yale CAS customizations follow that rule. In the JASIG distributed source the WAR Overlay project subdirectory is named cas-server-uber-webapp, but Yale's version of this project is named cas-server-yale-webapp. This project produces the artifact named edu.yale.its.cas.cas-server-war, with the version number that we track in Jenkins and Subversion as the actual CAS Server executable.
CAS has traditionally been created using the "WAR Overlay" technique of Maven. First, the cas-server-webapp directory builds a conventional WAR from its own files and dependencies. This is a generic starting point WAR file with all the basic JAR files and XML configuration. Then a second type "war" project is run that (in the standard JASIG CAS distribution) is called cas-server-uber-webapp. What makes it different from a standard type "war" Maven project is that the first dependency in this project is the cas-server-webapp.war file built by the previous step.
A "WAR Overlay" project depends on an initial vanilla prototype WAR file as its starting point. Everything in that WAR will be copied to a new WAR file with a new name and version, unless a replacement file with the same name is found in the WAR Overlay. When Yale or any other CAS customer is building their own configuration of CAS, the WAR Overlay project directory provides a convenient place to store the Yale configuration changes that replace the prototype files of the same name in the previously run generic cas-server-webapp type "war" project.
If you made no changes at all to JASIG code, the WAR Overlay project is actually the only thing you would need to check out and edit. Yale would never need to locally compile any CAS source, because the JASIG repository version of the cas-server-webapp.war artifact for that release version number would be perfectly good input to the WAR Overlay process. However, at this time we include all the JASIG source and recompile the JASIG code every time we build the CAS source trunk. If that is a small waste of time, it prepares the structure so we can fix JASIG bugs if they become a problem.
The "cas-server-yale-webapp" project is the WAR Overlay that builds the artifact we actually deploy into production.
Source Build
CAS is distributed as a cas-server directory with a bunch of subdirectories and a POM file. The subdirectories are JAR and WAR projects. The POM file contains a certain amount of boilerplate for JASIG and the CAS project, plus two things that are important. The parameters and dependency management section ensure that all elements of CAS are compiled or built with the same version of library modules (quartz for timer scheduling, hibernate for object to database mapping, etc.). Then a list of <module> statements indicate which subdirectory projects are to be run, and the order in which they are run, when Maven processes the parent POM. The possible modules include:
- cas-server-core - This is the single really big project that contains most of the CAS Server code and builds the big JAR library artifact. It is built first, and all the other projects depend on it and the classes in its JAR file.
- cas-server-webapp - This project builds a prototype WAR that contains the vanilla version of the JSP, CSS, HTML, and the XML files that configure Spring and everything else. The dependencies in this project load the basic required JAR libraries into WEB-INF/lib. This WAR is not intended to be installed or run on its own. It is the input to the second step WAR Overlay, where additional library JAR files will be added and some of these initial files will be replaced with Yale customizations.
- Additional optional extension subprojects of CAS supporting specific features. For example, the cas-server-support-ldap module is needed if you are going to authenticate Netid passwords to AD using LDAP protocol (you can authenticate to AD using Kerberos, in which case you don't need this module). A JASIG standard distribution includes all possible subprojects and the parent cas-server POM file builds all of them. To save development time, you can comment out the subproject <module> statement in the parent POM of modules you don't use. Disk space is cheap, and deleting the subdirectory of modules you don't use may be confusing.
- Yale CAS extension JAR projects. For example, the cas-server-expired-password-change module is the Yale version of what CAS subsequently added as a feature called "LPPE". Yale continues to use its own code because the Yale Netid system involves databases and services beyond simply the Active Directory component.
- cas-server-yale-webapp - Yale customization is (unless it is impossible) stored in the WAR Overlay project directory. If we need Java code, we put this source here and it is compiled and ends up in WEB-INF/classes. We replace some standard JSP and CSS files to get the Yale look and feel and wording. The XML files that configure Spring and therefore CAS behavior are here. This project must always be the last module listed in the cas-server parent POM so it is built last. The cas-server-webapp project is a dependency, so that is the base WAR file that will be updated. This project replaces JSP, CSS, and XML files in the base WAR with identically named files from this project. It adds new files and library jars (from the dependency list of this project).
Yale will have modified the POM file in the cas-server parent project to comment out "module" references to optional artifacts that we are not using in the current CAS project. It makes no sense to build cas-server-support-radius if you are not using it.
It is possible for Yale to also delete the subproject subdirectory from the instance of cas-server source checked into SVN. If we are not building or using the module, we don't need to maintain it in the project source. However, this makes relatively little difference. In CAS 3.5.2 the entire JASIG source is 1628 files or 10 megabytes of storage, while the optional projects we don't use account for only 210 files or 500K. So deleting the unused optional source modules saves very little time for every Trunk Build checkout and since we don't compile them, there is no processing time cost.
The Road Not Taken
The reference from a child project to its parent uses an artifact name and version, and the parent is found in the local Maven repository rather than the parent directory. So storing the subprojects as subdirectories under the cas-server parent is not a physical requirement. It does make things simpler.
It would have been possible to separate out the JASIG code in one directory of SVN, and then put the Yale code in another directory. This would slightly simplify the documentation because you would know what is Yale and what is JASIG. But that would only be a slight improvement.
We still have to decide how to handle bugs or problems in the JASIG source. When an individual file has been changed in the JASIG source because that is the only place it can be changed, then that file and the project directory that contains it is no longer vanilla JASIG code. At that point there has to be an entry in the Release Notes to show that the file has been changed, and now separate directories do not simplify the documentation.
This also means that there have to be two Jenkins Build jobs, one for the JASIG part of the source and one for the Yale part of the source. Remember, the Jenkins job structure at Yale assumes that a Build job checks out one directory and all its subdirectories.
Therefore, we compared the advantages and disadvantages, flipped a coin, and decided to check into SVN a directory consisting of JASIG source merged with Yale created project subdirectories. We do not claim that this is the right way to do things, but neither is it obviously wrong.
The POM in a vanilla JASIG component subproject will have a Version number that corresponds to CAS releases (3.5.2). The POM in a Yale customization subcomponent will have a Version number that corresponds to the Yale CAS installation "Release" number for Jenkins tracking (1.1.0-SNAPSHOT). The artifact name and version number in the WAR Overlay project must match the artifact name and version number of the Jenkins Install job parameters.
Standalone.xml
In theory, the developer of a project does not have passwords for the production databases, so runtime parameters are provided from a source outside the WAR. A common Yale convention was to build XML files defining each database as a JBoss Datasource and creating a service that stuffed a group of parameters into the JBoss JNDI space where they could be read at runtime using the built in java.naming classes. The Install job used to build these XML files.
However, the JBoss EAP 6.1 rewrite has changed the JBoss best practice. Now both Datasources and JNDI values are stored in the main "subsystem" configuration file. In the sandbox, this is the standalone.xml file in the standalone/configuration subdirectory of the JBoss server. You need to get a copy of an appropriate configuration file for your JBoss and install it yourself. Production services will be responsible for providing the file in DEV, TEST, and PROD, but you need to notify them if you add something during sandbox development so they can also add it to the versions they manage.
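For illustration only, a Datasource and a JNDI value in an EAP 6 style standalone.xml look roughly like the fragment below. The datasource name, connection URL, credentials, and JNDI binding are made-up examples, and the subsystem xmlns attributes (which vary by JBoss release) are omitted; copy the real entries from the configuration file that production services provides.

    <subsystem> <!-- datasources subsystem -->
      <datasources>
        <!-- hypothetical datasource; the "oracle" driver must also be defined in <drivers> -->
        <datasource jndi-name="java:/jdbc/CasDataSource" pool-name="CasDataSource" enabled="true">
          <connection-url>jdbc:oracle:thin:@dbhost.example.yale.edu:1521:CASDB</connection-url>
          <driver>oracle</driver>
          <security>
            <user-name>cas_user</user-name>
            <password>not-the-real-password</password>
          </security>
        </datasource>
      </datasources>
    </subsystem>

    <subsystem> <!-- naming subsystem -->
      <bindings>
        <!-- hypothetical runtime parameter, readable at runtime through java.naming lookups -->
        <simple name="java:global/cas/exampleParameter" value="example-value"/>
      </bindings>
    </subsystem>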
Best Practice
Check out from SVN the current version of the cas-server source. Also check out the Installer project directory. Get a copy of install.properties for the Installer project from someone who has one, and make sure the jboss.deploy.dir points to the location where you put JBoss on your sandbox.
If it is at all possible, make changes only to the WAR Overlay project subdirectory.
M2E generates the .settings directory and the .project and .classpath Eclipse project files, but we generally do not check them into SVN because they may be different depending on which Eclipse optional plugins you have installed and what release of Eclipse you are running. Maven generates the target directory with all the generated class files and artifact structure, but that also should never be checked into SVN. To avoid having these directories even show up when you are synchronizing your edits with the SVN trunk, go to the Eclipse Menu for Window - Preferences - Team - Ignored Resources and Add Pattern for "target", ".settings", ".classpath", and ".project" (omitting the quotes of course).
CAS is vastly over engineered. An effort was made to deliver a product that anyone can configure and customize using only XML files. No Java programming is required. So if you are a Java programmer, you already have an advantage over the target audience.
CAS has a set of key interfaces which are then implemented by one or more Java classes. The choice of which Java class you will use as an AuthenticationManager, AuthenticationHandler, TicketCache, or Web Flow state is made when the fully qualified name of the class is specified in one of the XML bean configuration statements.
Sometimes you need behavior that is just slightly different from the standard JASIG behavior. Make a reasonable effort to see if you can find an alternate implementation class, or an alternate configuration of the standard class, that has the behavior you want. If not, then instead of modifying the org.jasig.cas code in the JASIG source, see if this is one of the "bean" classes named in the XML. If so, then just make your own copy of the original source in the WAR Overlay project source area and rename its package to an edu.yale.its.tp.cas name. Change it to do what you want, and then change the fully qualified name in the XML to reference the new name of the modified class you have put in the Yale customization project.
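As an illustration of the XML change (the original class shown is one of the standard CAS 3.x bean definitions, but the Yale class name here is hypothetical):

    <!-- vanilla configuration in the template WAR -->
    <bean id="authenticationManager"
          class="org.jasig.cas.authentication.AuthenticationManagerImpl">
      ...
    </bean>

    <!-- Yale override: same bean id, pointing at the copied and modified class -->
    <bean id="authenticationManager"
          class="edu.yale.its.tp.cas.authentication.YaleAuthenticationManagerImpl">
      ...
    </bean>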
Of course, JASIG may change the source of its original class in some future release. However, classes that implement standard interfaces do not really need to pick up every JASIG change. If they fix a bug, then you want to copy over the new code. If they simply add a new feature or change the behavior of something that you are not using, then there is no need to update your modified copy of the original JASIG code. Essentially, the modified code has become 100% Yale stuff no matter who originally authored it and if it continues to do what you need then there is no need to change it in future releases, unless the interface changes.
Most strategic changes in behavior can be coded in Java source compiled as part of the WAR Overlay project. What cannot be changed in the WAR Overlay project? If a class is not named in the XML configuration files, but instead is imported into some other Java source file, then you cannot in general replace it with new code in the WEB-INF/classes directory.
This is a consequence of some fine print in the Java implementation of ClassLoaders. It can be boiled down to a few simple rules:
- Every class is loaded by a ClassLoader.
- In JBoss, there is one ClassLoader for WEB-INF/classes and a second ClassLoader for WEB-INF/lib. Then there are ClassLoaders for EARs and for the JBoss module JAR files, but they aren't important here.
- When the XML references a class by name, JBoss searches first through the classes compiled in the WAR Overlay project and stored in the WEB-INF/classes directory, and then it searches through WEB-INF/lib. So it finds the WAR Overlay source first and uses it.
- However, when an already loaded class references another class with an import statement, then Java searches for the new class by first using the ClassLoader that loaded the class that is looking for it. This means that any class loaded from WEB-INF/lib will search for all of its import names in WEB-INF/lib first, and only later search WEB-INF/classes.
Therefore, you can override XML-named classes using source from the WAR Overlay project, but the other classes that are imported into JASIG source files generally have to be edited and changed in their original JASIG project. If you have to make such a change, identify the modified JASIG source in the ReleaseNotes, and if it is a bug fix, submit it to JASIG so it can be fixed in future releases.
The Debug Cycle
Make changes to the files and save them.
Once you run the CAS Build and CAS Install Run Configuration once, they appear in the recently used Run Configurations from the Run pulldown in the Eclipse toolbar. There is a Run icon (a right pointing triangle on a green circle) and following it there is a downward pointing "pulldown menu" icon that shows the list of recently run configurations.
Once you install JBoss Tools, there is a separate JBoss Run button (a green right arrow) farther over to the right on the toolbar. However, during testing you probably do not want to run JBoss in normal mode but instead want to press the JBoss Debug button (the icon of a bug that follows the JBoss Run arrow on the toolbar).
So the normal cycle is to stop the JBoss Server, edit and save the source, then run the CAS Build Maven job, then the CAS Install Maven job, and then restart JBoss (in debug mode).
After The Sandbox
Changed files must eventually be Committed to the SVN trunk.
Do not commit the generated "target" subdirectories to SVN. They contain temporary files that should not be saved.
Once you are ready to proceed to the next phase, run a Jenkins Trunk Build and do a DEV Install.