CAS Development Conventions at Yale
CAS was created at Yale, and Versions 1 and 2 of the CAS code were written at Yale. Version 3 was completely rewritten and has been managed by a group of universities called "JA-SIG". Recently JA-SIG merged with Sakai and was renamed "Apereo". In Yale CAS documentation, any reference to jasig.org should be understood to now reference apereo.org.
The CAS Project Directory Structure
Any given release of the CAS Server can be downloaded as a zip file from Apereo or it can be checked out from the Git server used by CAS developers as documented at the Apereo Web site. The release source is a Maven project, specifically a "parent" project with subdirectories that contain Maven subprojects. This is a common Maven way to package multiple projects that have to be compiled and built in a particular order.
The outer directory contains the parent or master pom.xml that defines parameters shared by all the subprojects. It also contains a list of <module> statements that identify the subprojects, in the order in which they should be built.
Each subproject creates a JAR or WAR file. The first project is cas-server-core and it builds a JAR file containing about 95% or more of all the CAS code. It has to be built first because all the other projects depend on it. After that, there are projects to create optional components that you may or may not choose to use.
With CAS 3, Yale started with the Apereo CAS source directory tree and added new subproject subdirectories for the Yale-written code.
Starting with CAS 4, Yale creates a separate Yale-CAS project directory containing only our subprojects, but we copy the Apereo parent pom.xml file from the distributed Apereo Source distribution and modify it so it becomes a Yale specific file.
Building the WAR
Before you build the WAR, you have to build all the JAR files it will contain. Apereo JAR files can be downloaded by Maven from the Internet so they do not have to be rebuilt. Yale JAR files, and any Apereo JAR files that Yale decides to modify, have to be locally compiled and stored. Yale's new and modified artifacts always have a Yale-specific Version ID that distinguishes them from the Apereo CAS artifacts.
The CAS WAR that you actually run in the Web server is built in two steps in two Maven projects.
Apereo distributes a project called cas-server-webapp to create an initial template WAR file. This WAR is not particularly useful, but it contains at least a starter version of all the Spring XML used to configure CAS and many CSS, JSP, and HTML pages. It also contains a WEB-INF/lib with the basic JAR libraries needed by a CAS system. This unmodified Apereo WAR file is a "template" that Yale modifies or updates to create our final WAR.
Although you can modify the cas-server-webapp project directly, this results in a directory with a mixture of Apereo files and Yale files, and the next time you get a new CAS release from Apereo you have to sift through them to find out which ones have been changed by Apereo.
Apereo recommends using the WAR Overlay feature of Maven. Yale creates a second WAR project called cas-server-yale-webapp that contains only the files that Yale has changed or added. Generally the WAR Overlay includes:
- Yale Java source for Spring Beans that are slightly modified versions of standard Apereo beans.
- Yale "branded" look and feel (CSS, JSP, HTML).
- Spring XML to configure CAS options that Yale has selected and a few Yale additions.
- A pom.xml file with dependency statements for Apereo optional and Yale additional JAR files referenced by the Spring XML that will be added to WEB-INF/lib.
A normal WAR project simply takes all the files in the project and uses them to build a WAR. A WAR Overlay project, however, starts with a template WAR that has already been built. Maven knows that it is working with a WAR Overlay project when it discovers that the first <dependency> in the POM is a file of type "war". Since one WAR cannot contain another WAR inside it, Maven knows that it is supposed to start with the template (dependency) WAR and update it by replacing or adding files from this project to build the output WAR file.
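As a rough sketch (the version number is illustrative), the start of the dependency list in the Yale WAR Overlay POM looks something like this, with the Apereo template WAR declared first and with type "war":

    <dependencies>
        <!-- The Apereo template WAR; the "war" type marks this project as a WAR Overlay -->
        <dependency>
            <groupId>org.jasig.cas</groupId>
            <artifactId>cas-server-webapp</artifactId>
            <version>4.0.2</version>
            <type>war</type>
            <scope>runtime</scope>
        </dependency>
        <!-- ordinary JAR dependencies follow -->
    </dependencies>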
The Template WAR built by cas-server-webapp contains most of the Spring XML configuration, an initial set of CSS, HTML, and JSP files, and almost all the JAR files needed by CAS in its WEB-INF/lib. It also contains the cas-server-core JAR, which has 95% of all the CAS code.
The Yale WAR Overlay project replaces the CSS, JS, HTML, and JSP files associated with the CAS login function to provide the Yale "look and feel" of the CAS login page that everyone expects. It also replaces some of the subsequent confirmation and error pages so they have the same look.
Yale has selected several CAS optional modules. We use cas-server-support-ldap because we authenticate user passwords to AD using the LDAP protocol, and cas-server-integration-ehcache because we use ehcache to replicate tickets between CAS servers. These optional JAR files will have been built by Apereo and are available as Maven artifacts, but they were not included in the cas-server-webapp template because they are optional. So the Yale WAR Overlay has to add these JAR files to its list of dependencies and configure these options in new or modified Spring XML files. We also must include the JAR files built by the two Yale projects.
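A hedged sketch of the corresponding additions to the WAR Overlay dependency list (the Apereo version numbers are illustrative, and the Yale groupId shown is only an assumption for illustration):

    <!-- Optional Apereo modules selected by Yale -->
    <dependency>
        <groupId>org.jasig.cas</groupId>
        <artifactId>cas-server-support-ldap</artifactId>
        <version>4.0.2</version>
    </dependency>
    <dependency>
        <groupId>org.jasig.cas</groupId>
        <artifactId>cas-server-integration-ehcache</artifactId>
        <version>4.0.2</version>
    </dependency>
    <!-- A Yale-built JAR (groupId assumed for illustration) -->
    <dependency>
        <groupId>edu.yale.its.cas</groupId>
        <artifactId>cas-server-expired-password-change</artifactId>
        <version>1.2.0</version>
    </dependency>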
While the WAR Overlay process provides a way to update, replace, or add files, it does not provide any way to delete files you no longer want. However, Maven builds a WAR using the maven-war-plugin, which can be configured in the POM file. One configuration option is the <packagingExcludes> list, which names files (or wildcard patterns of files) in the Template WAR that are to be removed rather than copied to the output WAR. There are two cases where this is used (a configuration sketch follows the list):
- Some JAR files used by the cas-server-webapp project to build a WAR that will run in Tomcat are inappropriate for JBoss. JBoss provides its own versions of some JAR files, so the JAR file provided by the application should be deleted.
- If you build everything with a single Maven environment then Maven will ensure that the latest required version of any given JAR file is used by all modules. However, the cas-server-webapp WAR file is built by Apereo with the latest version of all the libraries it uses. Yale has a few of its own projects that it builds in its own Maven environment, and every so often Yale will use a later version of a JAR file than Apereo required. So if cas-server-webapp had a dependency on version 1.2.3 of some JAR file and Yale depends on 1.2.4, the normal WAR Overlay process would copy both versions of the JAR file (1.2.3 and also 1.2.4) to the output WEB-INF/lib. So we add a packagingExcludes statement in the WAR Overlay POM to delete the old version (1.2.3), making sure the WAR we run at Yale contains only the version of the library we want to use.
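A minimal sketch of the maven-war-plugin configuration in the WAR Overlay POM; the JAR names here are placeholders, not the actual files Yale excludes:

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-war-plugin</artifactId>
        <configuration>
            <!-- files in the Template WAR that should NOT be copied to the output WAR (placeholder names) -->
            <packagingExcludes>WEB-INF/lib/some-library-1.2.3.jar,WEB-INF/lib/jboss-provided-*.jar</packagingExcludes>
        </configuration>
    </plugin>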
Version Numbers and Project Structure
The first Apereo version of CAS was 3.0.0 and now there are CAS 4.0.x versions. CAS 1 and CAS 2 were written at Yale and are no longer meaningful, but this means that the Maven version numbers starting 1.x and 2.x are free and will never be used by Apereo. So Yale internally is reusing the 1.x Version numbers for Yale modules and for Yale modified versions of Apereo modules. Since Yale only periodically updates its CAS version, we skip over a lot of Apereo version numbers:
| Apereo Version | Yale Version |
|---|---|
| 3.4.x | 1.0.x |
| 3.5.x | 1.1.x |
| 4.0.x | 1.2.x |
This becomes important (and confusing) when Yale has to make a change to an Apereo JAR file to fix a problem we cannot ignore and cannot wait for a new Apereo release. In the Maven repositories there will be a vanilla Apereo artifact ending in *-4.0.2.jar and Yale will then add its own artifact named *-1.2.0.jar. If the artifact is one of the optional libraries, then we will simply add the 1.2.0 dependency to the WAR Overlay project, but if this is cas-server-core or anything else in the template WAR built by the cas-server-webapp project then we also have to add the -4.0.2.jar file to the Exclude list of the Maven WAR plugin configuration in the WAR Overlay POM file.
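To make this concrete, a sketch (illustrative only) of the two WAR Overlay POM changes when Yale patches cas-server-core:

    <!-- 1. Depend on the Yale-patched core JAR -->
    <dependency>
        <groupId>org.jasig.cas</groupId>
        <artifactId>cas-server-core</artifactId>
        <version>1.2.0</version>
    </dependency>

    <!-- 2. In the maven-war-plugin configuration, exclude the vanilla copy
         that arrives inside the template WAR -->
    <packagingExcludes>WEB-INF/lib/cas-server-core-4.0.2.jar</packagingExcludes>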
Although there may be additional code in various stages of development, a CAS release at Yale only depends on three Yale projects:
- cas-server-expired-password-change (to prompt the user to change a password that is a year old)
- cas-server-support-CushyTicketRegistry (only the configuration module is currently used to configure ehcache)
- cas-server-yale-webapp (the WAR Overlay with all the Yale configuration and HTML changes)
In CAS 3 development Yale created a source project that included the standard Apereo source along with the three additional Yale projects in a single directory. The advantage of this project structure was that Yale recompiled all the source and built at least formal artifacts, with Yale version numbers, for all the Apereo code. If Yale ever wanted to change an Apereo module, all it had to do was make the change in the Apereo source and then change the dependency version number for that artifact in the WAR Overlay from the vanilla 3.5.x to the Yale 1.1.x. This made modifications to Apereo easy, but it created a lot of work at each upgrade to a new CAS release, when you ended up reworking the project structure of a bunch of Apereo modules you were never really going to change.
So in CAS 4 we changed the Yale project structure to a much simpler and cleaner approach. All the vanilla Apereo code is initially omitted. If there are modifications we need to make to an Apereo module, that becomes work we can do later, when it is needed. The Yale project then consists of only the three Yale-added projects as subdirectories of a parent CAS project, plus a parent POM as the only file in that parent directory.
The Yale parent POM is a modified version of the parent POM in the vanilla Apereo CAS 4.0.2 source directory. First, we change the version number from 4.0.2 to Yale's 1.2.0 because that is the default version of any Yale-written or Yale-modified code. Then we comment out all the processing that Apereo has added to Maven to do open source project housekeeping. When you build an open source project for general distribution, there are Maven plugins that make sure your files list the open source licenses of all the open source dependency libraries you use, plugins that complain about style (missing JavaDoc comments, for example), and plugins that do special release packaging. Yale is a customer and our projects only have to run at Yale, so we don't need all those checks.
We should note that there are two distinct meanings to the term "parent" in a Parent POM.
The parent directory is a Maven project with a POM file with a packaging type of "pom". For example, the Apereo parent POM for CAS 4.0.0 begins with the following declaration:
    <groupId>org.jasig.cas</groupId>
    <artifactId>cas-server</artifactId>
    <packaging>pom</packaging>
    <version>4.0.0</version>
This file is processed when anyone does a Maven "mvn install" operation on the parent directory. In this case, the important part of the Parent Directory POM is that it contains "<module>cas-server-core</module>" statements for every subdirectory that builds an artifact. Maven processes each <module> statement in the order they appear in the Parent Directory POM file, so this file can be written so that a JAR file is built and stored in the Maven repository before a subsequent module is compiled that depends on that JAR file.
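For example, an abbreviated sketch of the <modules> list (the real list is much longer), with cas-server-core first so that it is built before anything that depends on it:

    <modules>
        <module>cas-server-core</module>
        <module>cas-server-support-ldap</module>
        <module>cas-server-integration-ehcache</module>
        <module>cas-server-webapp</module>
    </modules>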
There is, however, a second meaning of "parent" that relates to the projects in the subdirectories. Each subproject POM file contains a <parent> reference:
    <parent>
        <artifactId>cas-server</artifactId>
        <groupId>org.jasig.cas</groupId>
        <version>4.0.0</version>
    </parent>
The <parent> POM contains <dependencyManagement> statements, Maven plugin configuration, and property definitions that are shared by all the subprojects that reference it. This ensures that, for example, all the subdirectory projects are compiled against the same version of the Apache commons-io JAR library.
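A minimal sketch of such a <dependencyManagement> block, using commons-io as in the example above (the version number is illustrative):

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>commons-io</groupId>
                <artifactId>commons-io</artifactId>
                <version>2.4</version>
            </dependency>
        </dependencies>
    </dependencyManagement>

Subprojects then declare the commons-io dependency without a version and inherit the one managed by the parent.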
It is a common Maven convention that the <parent> POM is also the Parent Directory POM. That is the way Apereo organizes CAS and the way Yale organizes its CAS additions project, but it is not a requirement. Technically, the parent simply has to be a project of type "<packaging>pom</packaging>" that has already been processed by Maven and stored in the local Maven repository before it is referenced in a <parent> statement of a subproject POM. When the <parent> POM is also the Parent Directory POM and Maven is invoked to build the parent directory, then by definition the Parent Directory POM is processed first and stored in the Maven repository before any of the subdirectory projects are built. However, Shibboleth 3 is an example of another approach, where the <parent> POM is itself a subdirectory project referenced by the first <module> statement of the Parent Directory POM, and is therefore built and stored in the local Maven repository before any of the subprojects that reference it.
The vanilla Apereo CAS source project has its Parent Directory POM that establishes the global defaults and builds all the Apereo artifacts at version 4.0.2. You can recompile them in your sandbox or you can simply let Maven download the artifacts from the Yale Artifactory network server or from the Internet. Then the Yale CAS source project has its modified version of the Apereo Parent Directory POM with all the open source project boilerplate processing commented out and maybe a few version numbers incremented (for example, Yale changes the Java compiler version from 1.6 to 1.7 for its own coding). The WAR Overlay (cas-server-yale-webapp) project can get the template WAR from an instance previously compiled on the sandbox machine and stored in the local Maven repository, or it can get the standard version of the template WAR compiled by Apereo and stored on the Internet servers. Either way the template WAR is the same file (except for timestamps), and the WAR Overlay processing produces the same result.
In CAS 3.5.x (Yale version 1.1.x) the Yale CAS source project contains the three Yale subprojects, but it also contains a bunch of vanilla Apereo source subprojects. Some of them build Yale versions of Apereo modules (that is, a cas-server-core-1.1.1.jar file), but just because we build a Yale version of the artifact doesn't mean that it is ever used or deployed anywhere. If you look at the WAR Overlay project (cas-server-yale-webapp) you may find that the <dependency> statement in that POM references the vanilla Apereo Version number (3.5.2.1), so the Yale version of the artifact we have just compiled is stored but ignored. If you are debugging a problem and need to add trace statements to the vanilla Apereo code on the Sandbox, you can temporarily change the Version number in the WAR Overlay to use your modified code; after the problem has been resolved you can revert the WAR Overlay POM to its standard settings and go back to using vanilla Apereo code.
Because the CAS 4.0.x (Yale version 1.2.x) project does not normally contain a copy of any Apereo project, making temporary changes for sandbox debugging is a much larger amount of work. You have to build a custom version of the Apereo JAR file, and that means making all the necessary changes to the project and subproject directories to make it work. It is important if you do this to NEVER compile a modified version of the Apereo source and store it in the local Maven repository under the vanilla Apereo version number. Once you have stored your own modified version of cas-server-core-4.0.2.jar then without some custom cleanup on your part it will override the real vanilla version of the module and you will always get the modified version on that Sandbox machine. So if you decide to do some work on a previously vanilla Apereo project, the first thing to do is to change the Version number in the project POM.
Why Not Vanilla?
Apereo code contains some bugs or sloppy coding. Yale has found them and reported them informally, but there has not been much interest in fixing them. Some examples of differences of opinion or bugs:
- The "service=" parameter value on the CAS login has to exactly match the value of the same parameter on the validate, serviceValidate, or samlValidate request. There is a difference of option about how carefully they have to match. The JASIG code matched the entire string excluding JSESSIONID if it was present. Yale believes that the entire query string (everything after the "?") should be excluded, and maybe the match should stop with context (https://servername/context/stuff-to-be-ingnored). This changes the substrings used in the equals() test.
- In the Login WebFlow logic Apereo saves the generated TGTID in a "request" scoped variable between the userid/password form and the CASTGC cookie generation, but when we insert the Duo second login screen all "request" scoped variables are lost. We had to copy it to "flow" scope during the display of the Duo form.
- When you are having a CAS problem, you may want to insert additional logging. For example, if the validate request is failing you may want to print out the exact service= strings being compared and the point at which they differ.
- Apereo defaults all registered services to be able to Proxy, but it seems better if the default is that they cannot Proxy unless you explicitly authorize it. This involves changing the default value for allowedToProxy in AbstractRegisteredService.java from "true" to "false".
- AbstractTicket is a Serializable class, but the Apereo source forgot to give it a serialVersionUID. As a result, every time you deploy a new version of cas-server-core you have to "cold start" CAS by wiping out all the current tickets. You should only have to do that if you actually have changed the Ticket classes, not just because you recompiled the library to change something that has nothing to do with Tickets.
- Yale does not use Single Sign-Out, but code in TicketGrantingTicketImpl added to support that option has a very small but non-zero chance of throwing a ConcurrentAccessException and possibly screwing CAS up. Since we don't need it, Yale might just comment out the statement that causes the problem.
Each of these is a one line change; only the first is user visible and only the last is important for reliability. If CAS were not open source we would probably just live with the old code, but we have the chance to fix or change it.
Whether we use vanilla Apereo code or make Yale modifications depends on Yale requirements, staffing, and management. There is a cost and commitment to maintaining a modification, but there is also a problem if some application does not work correctly or if some Yale need is not being properly addressed.
Subversion
If you check out a project from Subversion, then you rename a file or copy it to another directory, and then commit the changes, Subversion doesn't realize that the file has been renamed or moved. It sees that the original file name is no longer where it once was, and that a new file name has appeared in the directory, so it treats this as a delete and a create. All the file history is lost.
The command level Subversion client has commands that copy a file and link its old history to its new location. So if it matters, there are some things you should do with Subversion outside the Eclipse environment. This is easy to learn with any Linux Sandbox development environment.
During Sandbox development your complied JAR and WAR artifacts are stored in the local Maven repository on the Sandbox machine. However, the result of Sandbox development is to make and commit source changes to the Subversion version of the project. When you move to DEV (and then TEST and PROD) the Jenkins Build process will check out and compile the project and build new copies of the artifacts on the Jenkins managed machines and store them in the Artifactory network server.
An Eclipse project is built when you check a Maven project out of Subversion into the Eclipse Workspace. Generally a Sandbox instance will contain an Eclipse Workspace directory, it will contain a CAS project, and that project will contain source from some earlier version of CAS development. The first step when you begin work on CAS is to update the source in the workspace project with the latest version from Subversion.
There is a small amount of customization that has to be done manually after the project is checked out of Subversion into an Eclipse Workspace. For example, every Maven J2EE project seems to default to Java 1.6 while Yale CAS modules are written to use Java 1.7. So you have to manually go in and change the project defaults and the generated library section of the Eclipse Build configuration. Updating the contents of an existing Eclipse project preserves this customization, while checking out a complete new Maven project from Subversion means you will have to redo the small amount of post checkout project cleanup.
The Jenkins Build and Install Jobs
In order to put CAS into production, you have to conform to the Yale Jenkins process.
First the developer creates code on a desktop sandbox environment and runs basic tests. When the code is working, it is checked into Subversion.
The developer then runs the Jenkins Build job for that project. In "Trunk Build" processing (the default) the Build job first checks out a copy of the current SVN trunk onto a Jenkins managed Build machine. It then runs the top level parent Maven pom.xml job, which in turn runs all the subprojects to compile Java source and build the 1.1.x-SNAPSHOT versions of all the JAR files and of the final WAR files. At the end of the job, these files are all stored in Artifactory. The Trunk Build can be run again and again to replace the SNAPSHOT files until Integration Testing on the DEV machines is successful.
There are Jenkins "Install" jobs for DEV, TEST, and PROD. Each checks out a copy of the installer directory stored in SVN next to but separate from the cas-server source project. The install job also runs a top level (in the installer directory) pom.xml file, although that Maven project just runs an Ant build.xml script to download from Artifactory the specific version of the CAS WAR file built by the previous Build job. The Ant script copies (and typically unzips) this WAR file to the JBoss application deploy directory. As text and XML files are copied, Ant makes some edits "on the fly" to insert parameter values for the names of userids or password used to access databases, AD, or to configure special options in Spring.
After DEV testing is complete, but before Installing to TEST or PROD, the Jenkins Build job is run a second time to Perform a Maven Release. Jenkins checks out the source project from SVN, but this time it changes the version ID in all the project pom.xml files to drop the "-SNAPSHOT" suffix. So if you were working on "1.1.2-SNAPSHOT" this momentarily creates version "1.1.2" files. Those files are checked into SVN as a Tag, and they are also compiled to produce the "1.1.2" version of the WAR file which is stored in Artifactory. This becomes the official "1.1.2" Release of Yale CAS. Then Maven changes all the Version ID strings in the pom.xml files a second time to increment the minor Version number and re-add the suffix, so that when the developer updates the pom.xml files in his Eclipse workspace he begins work on "1.1.3-SNAPSHOT".
Development
Up to this point we have discussed where the files are stored and how the releases are built, but nothing has been said about editing files and writing code. You do that on your desktop/laptop computer.
CAS development requires Java, JBoss, Maven, and Eclipse. The last three tools run under Java, and Java is designed to be platform independent, so you can do development under Windows or Mac OSX if you prefer. There are a few hours of setup time getting the right versions of everything set up on your computer (particularly adding the right options to Eclipse). This produces a sandbox machine.
When we talk about the CAS Development Sandbox, however, this is a VM created to run under Oracle's open source VM host called VirtualBox. You can run VirtualBox on Windows or Mac, and in the VM Java, JBoss, Maven, and Eclipse are all set up to work on CAS. This is very helpful for testing, particularly if you need to test communication between CAS machines in a cluster. However, the responsiveness of a VM running on a desktop is not as good as native applications running on the real OS.
This section describes how to set up a sandbox. It can be a guide for updating the CAS Development Sandbox VM when you want to move to Java 8 or 9, JBoss Wildfly, and Eclipse Mars (4.5), or it can explain how to configure current versions of everything on your native desktop OS.
Eclipse
Yale uses Eclipse as the Java IDE. If you prefer a different IDE (Netbeans, IntelliJ, ...) the only absolute requirements are the ability to check projects in and out of SVN, the ability to build Maven projects, and the ability to debug code in JBoss. However, it would be up to you to adapt the following Eclipse instructions.
Start with the Eclipse for J2EE download package of the current release from eclipse.org. This contains Eclipse support to edit Java, XML, and HTML source and to import and automatically configure Maven projects that build JAR or WAR files. Additional capability can be added in two ways. Features from an already known source (including any eclipse.org features) can be added from the Help - Install New Software menu. Eclipse also has a general source for third-party add-on features at Help - Eclipse Marketplace. The Marketplace is easier to use, but you need to read the descriptions carefully to make sure the item you plan to install is the right version for the release of Eclipse you are running.
Eclipse needs:
- Maven support (called "M2E"), which has become standard in modern Eclipse releases; we mention it here because it is a complex package with functions that will be described later in some detail.
- Subversion support ("Subversive") which was developed by a third party named Polarion but was then contributed to eclipse.org. It can be installed from Add Software because it is now owned by the Eclipse project.
- However, while the Subversive code understands basic SVN concepts, the actual code to communicate over the network to the SVN server is still a third party addition from Polarion that you will be prompted to select the first time you try to use any SVN function in Eclipse. Choose the 100% Java library called SVNKit (use the latest version number in the menu).
- Add JBoss Tools from the Marketplace. Do not select the full JBoss branded replacement for the entire Eclipse program, just add the Tools part and make sure that you choose the one that corresponds to your current Eclipse (Luna for example). You do not have to install all the tools, but it is simpler to simply hit OK and accept the entire package.
- Eclipse has optional AspectJ support, and CAS has some AspectJ components. It used to be necessary to install this option manually, but starting with Luna you will get a popup dialog inviting you to add Eclipse AspectJ support when it is encountered as you import the CAS project.
In addition to Eclipse extensions, Eclipse can be made aware of important external resources.
- In Window - Preferences - Java - Installed JREs you can configure more than one instance of Java installed on your machine. By default Oracle Java tends to run the highest version number, but CAS is distributed to run on 1.6 and at Yale it runs on 1.7. If you happen to have 1.8 installed on your machine for other purposes, that is the version Eclipse will discover when you first install it, and you should configure the other versions that you are going to use to test applications. Install and configure a full JDK because Maven needs it to run.
- Eclipse comes with a current version of Maven 3 built in. Unfortunately, the Yale Jenkins Install jobs run on Maven 2.2.1 and that is not fully compatible with current Maven 3. So you need to unzip a copy of Maven 2.2.1 somewhere on your system and add the location of this directory to Eclipse through Window - Preferences - Maven - Installations.
- Eclipse needs to know where your JBoss server is in order to start it. This gets to be a bit tricky because the original Eclipse for J2EE code from eclipse.org that you started with has some support for JBoss servers, but the JBoss Tools that you just added has better support. It turns out to be better to let JBoss Tools "discover" the JBoss directory and autoconfigure everything for you. Go to Window - Preferences - JBoss Tools - JBoss Runtime Detection. Click Add and type the directory one up from the root of the JBoss Server (if JBoss is in c:\appservers\jboss-eap-6.2 then "Add" c:\appservers). Then click Search ... and all the application servers in that directory will be found. If you do not already have JBoss downloaded and installed, "Add" the directory where you want to install it and click Download ..., then select the version of JBoss from the list.
Check Out the Maven Project
If you are working with the current CAS release, you check it out from the Yale SVN server. If you are going to start work on a new CAS release you still have to check out the old release, but then you have to also import the new CAS release distribution from Apereo and merge the two.
Open the SVN Repository Exploring "perspective" (page) of Eclipse and define the repository url https://svn.its.yale.edu/repos/cas. This directory contains various cas clients and servers. Find the trunk of the current cas-server project and check it out as a simple directory. Do not use the Eclipse Wizard to create a particular type of Eclipse project. Just create a new generic Eclipse project.
CAS is stored in SVN as a Maven project. This means it has a pom.xml file in its top level directory.
Eclipse has its own project structure. An Eclipse project has a .project and .classpath file and a .settings directory in its top level directory.
M2E is the name of the Eclipse support for Maven. M2E is able to read through the pom.xml file and to generate the .project and .classpath files and the .settings directory that contains what Eclipse needs in order to correctly display and build the Maven project. If in Eclipse Project Explorer you right click on a project and choose Configure from the menu, you can configure that single project to be a Maven project. However, since CAS is a parent project with subprojects, you need to use Import to get all of them.
Return to the J2EE perspective, right click the new cas-server project directory, and choose Import - Maven - Existing Maven Projects. This is the point where the M2E Eclipse support for Maven discovers the parent and subdirectory structure. It reads through the parent POM to find the "modules", then scans the subdirectories for POM files that configure the subprojects. Then it presents a new dialog listing the projects it has found. Generally it has already found and configured the parent project, so only the subprojects need to be checked.
M2E will only display subprojects that were mentioned in a <module> statement of the top level parent pom.xml file. However, you do not need to click the checkboxes to select all of them to be turned into Eclipse projects. You only need to select the projects you are working on. If you leave some out, you can always repeat the Import Existing Maven Projects step and add more.
Now M2E does some serious work, and you have to give it time to do everything. It processes the pom.xml file in a subproject to decide if it builds a JAR or a WAR. It configures the project to compile the Java source. It reads through the dependency list in the POM and downloads to your local .m2 repository file all the JAR libraries on which the CAS project depends. Then it compiles all the source. When it encounters AspectJ stuff it will invite you to add AJDT support to Eclipse.
If you have not properly configured the Yale Artifactory server in your .m2/settings.xml file, then you may get a message about a missing Yale dependency JAR file. Ignore error messages about bad XML syntax. You may also be told that the project is configured for Java 1.6 but there is only a 1.7 runtime available. These are all unimportant issues.
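For reference, a minimal sketch of a ~/.m2/settings.xml profile that points Maven at an internal repository; the repository id and URL here are placeholders, not the actual Yale Artifactory address:

    <settings>
        <profiles>
            <profile>
                <id>yale</id>
                <repositories>
                    <repository>
                        <id>yale-artifactory</id>
                        <!-- placeholder URL; substitute the real Yale Artifactory address -->
                        <url>https://artifactory.example.yale.edu/repo</url>
                    </repository>
                </repositories>
            </profile>
        </profiles>
        <activeProfiles>
            <activeProfile>yale</activeProfile>
        </activeProfiles>
    </settings>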
M2E understands Maven projects, but it is distinct from real Maven. Real Maven (let's call it "Batch Maven" because you run it from the command line) is an extensible system of optional plugin modules that can be expanded to provide all sorts of special processing. M2E is an Eclipse component that can read pom.xml files and configure Eclipse to do approximately the same thing that Maven would do to compile the source and build the artifact.
Eclipse can compile Java and AspectJ source. It can resolve references at compile time to JAR files downloaded and stored in the Maven local (.m2) repository. It can merge compiled *.class files with XML and properties files to build the JAR or WAR.
There are some things that M2E and Eclipse cannot do on their own, like compiling WSDL files to generate Java proxy source for remote Web Services. To get those extra steps you have to run Real batch Maven. M2E can run real Maven in batch mode under Eclipse, but you have to do this manually yourself. During the automatic project import step M2E mostly gets the Java and WAR parts right, and fortunately that is all that CAS requires.
M2E knows what it knows, and it knows what it doesn't understand. It reports as an unresolved problem any configuration in any pom.xml file of a Maven plugin that it doesn't fully support. Mostly these are messages like "Plugin execution is not covered by lifecycle configuration ..." This is M2E noting that there is some sort of extra processing that Maven does in batch but that the M2E support cannot exactly duplicate in its Eclipse configuration. For CAS this does not matter, because we will be using Real Batch Maven to generate all the JAR and WAR files that go into JBoss. We only need Eclipse to be configured properly so that the Java IDE functions like autocomplete and autofix and Open Declaration and Open Type Hierarchy work, and M2E gets that part right.
When M2E has completed its Import function, there are now two distinct types of projects for every project directory.
- There is still a Real Batch Maven project represented by the pom.xml. This will run and generate the 100% exactly correct result when you run a "mvn clean install" either at the command line or from within Eclipse.
- There is also an Eclipse project represented by the .project, .classpath, and .settings generated by M2E. This is good enough to compile all the Java source, but it is not able to do all the optional or special processing that was configured in the pom.xml. It produces perfectly acceptable *.class files and puts them in the correct working directory, and there is no reason why Maven would have to recompile that source and create new *.class files of its own. However, Eclipse would not be a reliable source if you tried to use it to build a JAR or WAR file because it would leave out some of the special processing. For that you need to run batch mvn.
Interactive and Batch Modes
We have already discussed why it is important to run the batch "mvn clean install" command to build artifacts. However, there is a second, subtler interactive-versus-batch distinction that you need to understand for testing and debugging.
Eclipse has special support to make simple J2EE development as easy as possible. This is useful for simple applications where the developer spends a long time editing JSP, HTML, and CSS files or simple Java source.
Since the start of J2EE, application servers have had a "hot deploy" mode of operation where they notice when the timestamp of an HTML or CSS file changes and use the new version of the file as soon as it is stored in the application server deploy directory. However, Eclipse is an IDE, and when you save a new copy of an HTML or CSS file you are putting it in the Eclipse workspace, not in the c:\appservers\jboss-eap-6.2\standalone\deployments\cas.war directory. Eclipse could have an option to immediately copy changed files over to the deployment directory, but it tries instead to do something much more clever.
The J2EE support in Eclipse has a trick for bringing up widely known Java application servers (Tomcat, JBoss) in a special Eclipse managed configuration mode. These servers have always had the ability (required by Linux convention) to put their configuration files over in one directory tree, their libraries in another tree, their log files somewhere else, and their applications wherever the system administrator wants to configure them. Eclipse can run one of these servers overriding its normal configuration directory with one that Eclipse has hacked.
The Eclipse trick is to configure the application server to believe that the WAR file it is dealing with is the Eclipse project in the Eclipse workspace. In some cases Eclipse configures a special Eclipse-Tomcat JAR library with a replacement for the usual Tomcat classes that read files from normal WAR files. Eclipse creates a "virtual" WAR that comes from its workspace. If you are running the application server in this mode, then when you save an HTML file in the Eclipse workspace, it doesn't have to be copied anywhere else; it has been hot deployed to the application server automatically.
This trick cannot work if your application requires any special Maven processing to build the WAR file. The CAS WAR has to be built by Real Batch Maven, and Yale conventions further require last minute parameter substitution with the Install job Ant script. So CAS in general and particularly CAS at Yale cannot use the oversimplified application debugging provided by Eclipse.
You have to run Real Batch Maven under Eclipse to do the same work as the Jenkins Build job, then you need to run it again to do the same work as the Jenkins Install job. That produces a cas.war directory in the JBoss deploy directory on the sandbox machine. Then you want to start JBoss. This is where the JBoss Tools add-on to Eclipse is helpful. It can start and stop, with or without debugging, an ordinary JBoss server installed outside Eclipse. The server runs with its normal configuration from its normal configuration directory. The only change is that you manage it from Eclipse instead of from the command line, and you can set breakpoints with Eclipse debugging without the extra step of attaching Eclipse to a running process.
Running Maven Jobs Under Eclipse
If you have not already done so, go to SVN Repository Exploring, connect to the /repos/cas SVN repository, and check out the CAS Installer job that corresponds to the cas-server project you are working with.
If you have a Maven project in Eclipse, then if you right click on the pom.xml file in the root directory of the project you are presented with a set of Maven batch operations ("mvn clean", "mvn install", etc.). Running Maven this way from Eclipse requires you to take all the defaults.
The alternative is to configure a Run Configuration. Choose Run - Run Configurations from the menu. On the left side of the dialog, select "m2 Maven Build", then select New. Give the configuration a name like "CAS Build" or "CAS Install". For the Base directory click Browse Workspace and choose the cas-server project (for Build) or the cas-installer project (for Install). The Goals for each should be "install" or, to be safer, "clean install".
At the bottom of the dialog, you can choose a Maven Runtime from the configured versions of Maven Eclipse knows about. You can use the Embedded Maven 3 for the build, but you need to use a configured external version of Maven 2.2.1 for the installer project. You can also use the JRE tab to select a specific version of Java.
Conceptually, the CAS Build and CAS Install Run Configurations are the sandbox version of the Jenkins Build and Install jobs.
The Build job has no parameters. The Install job, however, requires that you add an install.properties file containing sandbox versions of the parameters that an operator would enter running the real Jenkins install job. We do not check this file in, so you need to get it from another developer, or a local shared server disk. With passwords removed, it will look something like:
target.environment=DEV
jboss.deploy.dir=/c:/appservers/jboss-eap-6.2/standalone/deployments/
cas.server.version=1.1.2-SNAPSHOT
acs.pwd=xxxx
ad.server.admin.userPwd=xxxx
ad.server.admin.userDn=CN=somenetid,CN=users,DC=yu,DC=yale,DC=net
cas.cookie.secure=false
cas.ticketRegistry.xml=ticketRegistryEhcache
cas.clear.log=true
target.environment selects a secondary parameter file with environment-specific parameters; the value DEV adds the install-DEV.properties file values. The jboss.deploy.dir is the JBoss directory into which the cas.war is copied. The cas.server.version should correspond to the Maven version ID of the artifacts created in the Build job. acs.pwd is the password of the yu_shib userid in the ACSx Oracle database; if you give it a meaningless value you will get some error messages and CAS will simply not check for expired passwords. The ad.server.admin parameters must represent a netid and password (we recommend using a dependent netid) that simply connects you to the AD you are using, because the AD requires some login before you can use it; the login isn't really an admin and doesn't need any special privileges. If you specify cas.cookie.secure=false then you can test CAS with an ordinary http:// connection to port 8080 and you don't need to use SSL or a certificate on your sandbox.
The real Jenkins Build job downloads source from SVN, but you already have your source in your Eclipse workspace. Your CAS Build Run Configuration runs the top level parent Maven project, which in turn runs all the subprojects and builds the JAR and WAR artifacts. However, unlike the real Jenkins Build job, your CAS Build stops when it has deposited these artifacts in the local .m2 Maven repository on your sandbox machine. Nothing is changed on the Artifactory server.
The CAS Install Run Configuration similarly uses the artifact in the .m2 local Maven repository. It explodes the WAR file in the JBoss deploy directory and inserts parameters from the properties files.
There is one last step. The Jenkins Install job stops and starts JBoss. In the Sandbox you will manually start and stop the JBoss server using the JBoss Tools Start and Stop toolbar icons.
Best Practice
Where possible, add Java code to the WAR Overlay project under an edu.yale.its.* package name, and reference the class name in Spring XML bean definitions. If you are going to modify Apereo code that generates a Spring Bean, copy that source to the Overlay project and rename it. This is better than changing the Apereo code in cas-server-core. CAS interfaces are stable, so code migrates from one release to the next.
Learn the important CAS interfaces. Generally speaking, you can do most customizations at the AuthenticationManager, AuthenticationHandler, CredentialsToPrincipalResolver, and TicketRegistry interfaces, or by building a Spring Web Flow bean.
However, you cannot change an internal class in cas-server-core (that is, a class used directly by other classes in the same project) without actually editing that project. Moving that class to the WAR Overlay project will not work because other projects cannot override internal classes.
The Debug Cycle
Make changes to the files and save them.
Once you run the CAS Build and CAS Install Run Configuration once, they appear in the recently used Run Configurations from the Run pulldown in the Eclipse toolbar. There is a Run icon (a right pointing triangle on a green circle) and following it there is a downward pointing "pulldown menu" icon that shows the list of recently run configurations.
Once you install JBoss Tools, there is a separate JBoss Run button (a green right arrow) farther over to the right on the toolbar. However, during testing you probably do not want to run JBoss in normal mode; instead, press the JBoss Debug button (the icon of a bug that follows the JBoss Run arrow on the toolbar).
So the normal cycle is to stop the JBoss Server, edit and save the source, then run the CAS Build Maven job, then the CAS Install Maven job, and then restart JBoss (in debug mode).
After The Sandbox
Changed files must eventually be Committed to the SVN trunk.
Do not commit the generated "target" subdirectories to SVN. They contain temporary files that should not be saved.
Do not commit the install.properties file you put in the install project.
Once you are ready to proceed to the next phase, run a Jenkins Trunk Build and do a DEV Install.