In 2010 Yale upgraded to CAS 3.4.2 and implemented "High Availability" CAS clustering using the JBoss Cache option (because Yale Production Services had standardized on JBoss for "clustering"). Unfortunately, the mechanism designed to improve CAS reliability ended up as the cause of most CAS failures. If you insist that Service Tickets be replicated through the cluster, so that any CAS node can validate any Service Ticket, then replication has to complete before the ST can be passed back to the user. But if CAS has to wait for cache activity, then network problems or some sickness on one of the CAS nodes propagates back to all the nodes and CAS stops working. We considered changing to another option, but none of the alternatives has a spotless reputation for reliability.
There is much to be said for off the shelf (COTS) software. After all, if something is widely used and written to handle much more complicated problems, then it should easily handle CAS. Unfortunately, all these packages are designed to support application level software, while at Yale CAS is a Tier 0 system component (in Disaster Recovery planning) that has to come back up first, with as few dependencies as possible. Application software is not written to system specifications.
So CushyTicketRegistry was written to solve the CAS Ticket problem and pretty much nothing else. It does not require a database, or any additional complex network configuration with multicast addresses and timeouts. It depends on the observed behavior that CAS is actually a fairly small component with limited hardware demands so that a slightly less "efficient" but rock solid and dead simple approach can be used to solve the problem.
Rather than trying to move tickets in memory to network message queues and multicast protocols, Cushy periodically uses the standard Java writeObject statement to write a copy of the entire ticket cache to disk. At Yale we have fewer than 20,000 tickets at any time, and this operation takes less than a second of elapsed time (on one core) and uses about 3.2MB of disk. So you can certainly do it once every few minutes. In between full checkpoints, every few seconds we write a file of changes since the last full backup.
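The core of that idea is nothing more than standard Java serialization. The sketch below is illustrative rather than actual Cushy code (the class name, the placeholder ticket object, and the file name are invented), but it is essentially all the technology involved in writing and restoring a checkpoint:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.concurrent.ConcurrentHashMap;

public class CheckpointSketch {
    // Stand-in for the real registry map; real entries are CAS Ticket objects.
    static ConcurrentHashMap<String, Serializable> cache = new ConcurrentHashMap<>();

    public static void main(String[] args) throws Exception {
        cache.put("TGT-1-example-node01", "placeholder ticket");

        // Take a point-in-time list of the tickets and write it to disk in one operation.
        ArrayList<Serializable> tickets = new ArrayList<>(cache.values());
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("node01-checkpoint"))) {
            out.writeObject(tickets);
        }

        // Later (after a reboot, or on another node) read the same list back.
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("node01-checkpoint"))) {
            @SuppressWarnings("unchecked")
            ArrayList<Serializable> restored = (ArrayList<Serializable>) in.readObject();
            System.out.println("Restored " + restored.size() + " tickets");
        }
    }
}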
Now you have to replicate the files to the other CAS nodes, but CAS runs on a Web server and they are pretty good about transferring files over the network. The data has to be secure, but we use HTTPS for the rest of the datastream and it will work just as well with the ticket backup file. The CAS servers have to authenticate themselves, so Cushy uses the same combination of Service Tickets and server certificates (as in the Proxy Callback) that CAS already uses for server authentication. In short, CAS already had all the problems solved and just needed to reapply existing solutions to its own housekeeping problem.
The "off the shelf" object replication technologies are all enormous black boxes with lots of complex code that no CAS programmer has ever read. As a result, CAS is not written correctly to use most of these packages. They have restrictions and CAS doesn't conform to those restrictions. The chance of failure is small, but if you run 24x7 it will eventually trigger a problem.
Cushy is a small amount of source and any Java programmer should be able to understand it. This means it can be customized to handle any special CAS requirements, which is something you cannot do with generic object cache libraries. It can also be customized to address local network configurations, failure patterns, disaster recovery, or availability profiles.
Executive Summary
This is a quick introduction for those in a hurry.
CAS is a Single SignOn solution. Internally, it creates a set of objects called Tickets. There is a ticket for every logged on user, and short term Service Tickets that exist while a user is being authenticated to an application. The Business Layer of CAS creates tickets by, for example, validating your userid and password in a back end system like Active Directory. The tickets are stored in a plug in component called a Ticket Registry.
For a single CAS server, the Ticket Registry is just an in-memory table of tickets (a Java "Map" object) keyed by the ticket ID string. When more than one CAS server is combined to form a cluster, an administrator chooses one of several optional Ticket Registry solutions that allow the CAS servers to share the tickets.
One clustering option is to use JPA, the standard Java service to map objects to tables in a relational database. All the CAS servers share a database, which means that any CAS node can fail but the database has to stay up all the time or CAS stops working. Other solutions use generic object "caching" solutions (Ehcache, JBoss Cache, Memcached) where CAS puts the tickets into what appears to be a common container of Java objects and, under the covers, the cache technology ensures that the tickets are copied to all the other nodes.
JPA makes CAS dependent on a database. It doesn't really use the database for any real SQL stuff, so you could use almost any database system. However, the database is a single point of failure, so you need it to be reliable. If you already have a 24x7x365 database managed by professionals who can guarantee availability, this is a good solution. If not, then this is an insurmountable prerequisite for bringing up an application like CAS that doesn't really need a database.
The various cache (in memory object replication) solutions should also work. Unfortunately, they have massively complex configuration parameters with multicast network addresses and timeouts. They also tend to be better at detecting a node that is dead and does not respond than they are at dealing with nodes that are sick and accept a message but then never really get to processing it and responding. They operate entirely in memory, so at least one node has to remain up while the others reboot in order to maintain the content of the cache. While node failure is well defined, the status of objects is ambiguous if the network is divided into two segments by a linkage failure, the two segments operate independently for a while, and then the connection is reestablished.
Since Cushy is specifically designed to handle the CAS Ticket problem, you will not understand it without a more detailed discussion of CAS Ticket connections and relationships. There are some specific CAS design problems that cannot be solved at the TicketRegistry layer. Cushy doesn't fix them, but neither do any of the cache solutions. This document will identify them and suggest how to fix them elsewhere in the CAS code.
Cushy is a cute word that roughly stands for "Clustering Using Serialization to disk and Https transmission of files between servers, written by Yale".
The name explains what it does. Java has a built-in operation called writeObject that writes a binary version of Java objects to disk. If you use it on a complex object, like a list of all the tickets in the Registry, then it creates a disk file with all the tickets in the list. Later on you use readObject to turn the disk file back into a copy of the original list. Java calls this mechanism "Serialization". Using just one statement and letting Java do all the work and handle all the complexity makes this easy.
The other mechanisms (JPA or the cache technologies) operate on single tickets. They write individual tickets to the database or replicate them across the network. Obviously this is vastly more efficient than periodically copying all the tickets to disk. Except that at Yale (a typical medium sized university), the entire Registry of tickets can be written to a disk file in 1 second and it produces a file about 3 megabytes in size. That is a trivial use of modern multi-core server hardware, and copying 3 megabytes of data over the network every 5 minutes, or even every minute, is a trivial use of network bandwidth. So Cushy is less efficient, but in a way that is predictable and insignificant, in exchange for code that is simple and easy to completely understand.
Once the tickets are a file on disk, the Web server provides an obvious way (HTTPS GET) to transfer them from one server to another. Instead of using complex multicast sockets with complex error recovery, you are using a simple technology everyone understands to accomplish a trivial function.
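Fetching the file from another node is equally plain. Something along these lines is all that is needed (the /cas/cache/ path and the file name are illustrative, not the exact Cushy URLs, and the real exchange also presents the Service Ticket "password" mentioned earlier):

import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import javax.net.ssl.HttpsURLConnection;

public class FetchCheckpointSketch {
    // Pull a checkpoint or incremental file from another CAS node with a plain HTTPS GET
    // and drop it in the local work directory.
    static void fetch(String nodeUrl, String fileName, Path workDir) throws Exception {
        URL url = new URL(nodeUrl + "cache/" + fileName);   // e.g. https://host:8443/cas/cache/node02
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        try (InputStream in = conn.getInputStream()) {
            Files.copy(in, workDir.resolve(fileName), StandardCopyOption.REPLACE_EXISTING);
        } finally {
            conn.disconnect();
        }
    }
}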
Cache solutions go memory to memory. Adding an intermediate disk file wasn't an obvious step, but once you think of it, it has some added benefits. If you reboot the CAS server, the local disk file allows CAS to immediately restore the tickets, and therefore its state, from before the reboot. Serializing the tickets to disk will work no matter how badly the network or other nodes are damaged, and it is the only step that involves the existing CAS code. Although the second step, transferring the file from one server to another, is accomplished with new code that runs in the CAS Web application, it does not touch a single existing CAS object or class. So whatever unexpected problems the network might create, they affect only the independent file transfer logic, leaving normal CAS function untouched. And while the cache solutions require complex logic to reconcile caches on different machines after communication between nodes is restored, Cushy retransmits the entire set of tickets every few minutes (a configurable interval), after which everyone is guaranteed to be back in synchronization.
Cushy is based on four basic design principles:
- CAS is very important, but it is small and cheap to run.
- Emphasize simplicity over efficiency as long as the cost to run remains trivial.
- Assume the network front end is programmable.
- Trying for perfection is the source of most total system failures. Allow one or two users to get a temporary error message when a CAS server fails.
How it works
Cushy is simple enough it can be explained to anyone, but if you are in a rush you can stop here.
Back in the 1960's a "checkpoint" was a copy of the important information from a program written on disk so if the computer crashed the program could start back at almost the point it left off. If a CAS server saves its tickets to a disk file, reboots, and then reads the tickets from the file back into memory, it is back to the same state it had before rebooting. If you transfer the file to another computer and bring CAS up on that machine, you have moved the CAS server from one machine to another. Java writeObject and readObject guarantee the state and data are completely saved and restored.
JPA and the cache technologies try to maintain the image of a single big common bucket of shared tickets. This is a very simple view, but it is very hard to maintain and rather fragile. Cushy maintains a separate TicketRegistry for each CAS server, but replicates a copy of each TicketRegistry to all the other servers in the cluster.
Given the small cost of making a complete checkpoint, you could configure Cushy to generate one every 10 seconds and run the cluster on full checkpoints. It is probably inefficient, but using 1 second of one core and transmitting 3 megabytes of data to each node every 10 seconds is not a big deal on modern multi-core servers. This was the first Cushy code milestone and it lasted for about a day.
The next milestone (a day later) was to add an "incremental" file that contains all the tickets added or ticket ids of tickets deleted since the last full checkpoint. Incrementals are designed so that they grow between full checkpoints, they are cumulative, and you can always apply the last incremental you got without worrying about any previous incrementals. Again, slightly inefficient, but trivially so, and emphasize simplicity.
CAS already ran the RegistryCleaner off a timer configured in Spring XML to call it every so often. Cushy adds a second timer to the same configuration file to signal the TicketRegistry frequently. For this example, say it makes the call every 10 seconds. Then every 10 seconds Cushy generates an incremental file, and then it checks all the other nodes to get their most recent incremental file. Separately, Cushy is configured with the time between checkpoints (say every 5 minutes), so when it has been long enough that a new full checkpoint is due, it creates a full checkpoint instead of an incremental.
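In pseudo-Java, the timer callback amounts to the logic below (the method names and intervals here are illustrative, not the exact Cushy signatures):

public class ReplicationTimerSketch {
    private long lastCheckpoint = 0;
    private long checkpointIntervalMillis = 5 * 60 * 1000;   // full checkpoint every 5 minutes

    // Quartz calls this every few seconds (every 10 in this example).
    public void timerDriven() {
        long now = System.currentTimeMillis();
        if (now - lastCheckpoint >= checkpointIntervalMillis) {
            checkpoint();                  // write every current ticket to disk, notify other nodes
            lastCheckpoint = now;
        } else {
            writeIncremental();            // write only the adds/deletes since the last checkpoint
        }
        readOtherNodesIncrementals();      // pick up the latest file from each of the other nodes
    }

    private void checkpoint() { }
    private void writeIncremental() { }
    private void readOtherNodesIncrementals() { }
}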
Each incremental has a small number of new Login (TGT) tickets and maybe a few unclaimed service tickets. However, because we do not know whether any previous incremental was or was not processed, it is necessary to transmit the list of every ticket that was deleted since the last full checkpoint, and that will contain the ID of lots of Service Tickets that were created, validated, and deleted within a few milliseconds. That list is going to grow, and its size is limited by the fact that we can start over again after each full checkpoint.
A Service Ticket is created and then is immediately validated and deleted. Trying to replicate Service Tickets to the other nodes before the validation request comes in is an enormous problem that screws up the configuration and timing parameters for all the other Ticket Registry solutions. Cushy doesn't try to do replication at this speed. Instead, it has CAS configuration elements that ensure that each Ticket ID contains an identifier of the node that created it, and it depends on a front end smart enough to route any ticket validation request to the node that created the ticket and already has it in memory. Then replication is only needed for crash recovery.
Note: If the front end is not fully programmable it is a small programming exercise to be considered in Cushy 2.0 to forward the validation request from any CAS node to the node that owns the ticket and then pass back the results of the validation to the app.
Ticket Names
As with everything else, CAS has a Spring bean configuration file (uniqueIdGenerators.xml) to configure how ticket ids are generated. If you accept the defaults, then tickets have the following format:
type - num - random - nodename
where type is "TGT" or "ST", num is a ticket sequence number, random is a large random string like "dmKAsulC6kggRBLyKgVnLcGfyDhNc5DdGKT", and the suffix at the end of the ticket is identified as a nodename.
In vanilla CAS the nodename typically comes from the cas.properties file, and even when they are running a real cluster many CAS sites leave the "nodename" suffix on the ticket id at its default value of "-CAS". Cushy requires every node in the cluster to have a unique name. It adds a smarter configuration bean, described below, and enforces the rule that the end of the ticket really identifies the node that created it and therefore owns it.
How it Fails (Nicely)
The Primary + Warm Spare Cluster
One common cluster model is to have a single master CAS server that normally handles all the requests, and a normally idle backup server (a "warm spare") that does nothing until the master goes down. Then the backup server handles requests while the master is down.
During normal processing the master server is generating tickets, creating checkpoints and incrementals, and sending them to the backup server. The backup server is generating empty checkpoints with no tickets because it has not yet received a request.
Then the master is shut down or crashes. The backup server has a copy in memory of all the tickets generated by the master, except for the last few seconds before the crash. It can handle new logins and it can issue Service Tickets against logins previously processed by the master, using its copy of the master's registry.
Now the master comes back up and, for this example, let us assume that it resumes its role as master (there are configurations where the backup becomes the new master and so when the old master comes back it becomes the new backup. This is actually easier for Cushy).
The master restores from disk a copy of its old registry and over the network it fetches a copy of the registry from the backup. It now has access to all the login or proxy tickets created by the backup while it was down, and it can issue Service Tickets based on those logins.
However, the failure has left some minor issues that are not important enough to be problems. Because each server is the owner of its own tickets and registry, each has Read-Only access to the tickets of the other server. (Strictly speaking that is not true. You can temporarily change tickets in your copy of the other node's registry, but when the other node comes back up and generates its first checkpoint, whatever changes you made will be replaced by a copy of the old unmodified ticket). So the master is unaware of CAS logouts that occurred while it was down and although it can process a logout for a user that logged into the backup while it was down, it really has no way to actually delete the login ticket. Since no browser has the TGT ID in a cookie any more, nobody will actually be able to use the zombie TGT, but the ticket is going to sit around in memory until it times out.
There are a few more consequences to Single SignOut that will be explained in the next section.
A Smart Front End
A programmable front end is configured to send Validate requests to the CAS server that generated the Service Ticket, /proxy requests to the CAS server that generated the PGT, other requests of logged on users to the CAS server they logged into, and login requests based on standard load balancing or similar configurations. Each ticket has a suffix that indicates which CAS server node generated it.
- If the URL "path" is a validate request (/cas/validate, /cas/serviceValidate, etc.) then route to the node indicated by the suffix on the value of the ticket= parameter.
- If the URL is a /proxy request, route to the node indicated by the suffix of the pgt= parameter.
- If the request has a CASTGC cookie, then route to the node indicated by the suffix of the TGT that is the cookie's value.
- Otherwise, or if the node selected by 1-3 is down, choose a CAS node using whatever round robin or master-backup algorithm previously configured.
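Expressed as code rather than F5 iRule syntax, the routing decision looks roughly like this (the helper names are invented for illustration; in production this logic lives in the front end, not in CAS):

public class FrontEndRoutingSketch {
    // Choose a CAS node for a request using the ticket suffix, falling back to load balancing.
    static String chooseNode(String path, String ticketParam, String pgtParam, String tgtCookie) {
        String node = null;
        if (path.endsWith("/validate") || path.contains("Validate")) {
            node = nodeFromSuffix(ticketParam);     // rule 1: validation follows the Service Ticket
        } else if (path.endsWith("/proxy")) {
            node = nodeFromSuffix(pgtParam);        // rule 2: /proxy follows the PGT
        } else if (tgtCookie != null) {
            node = nodeFromSuffix(tgtCookie);       // rule 3: logged on users follow their TGT
        }
        if (node == null || !isUp(node)) {
            node = nextAvailableNode();             // rule 4: normal round robin or master-backup choice
        }
        return node;
    }

    // The suffix (everything after the last '-') identifies the node that created the ticket.
    static String nodeFromSuffix(String ticket) {
        return (ticket == null) ? null : ticket.substring(ticket.lastIndexOf('-') + 1);
    }

    static boolean isUp(String node) { return true; }            // placeholder
    static String nextAvailableNode() { return "cas-node-01"; }  // placeholder
}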
So normally all requests go to the machine that created and therefore owns the ticket, no matter what type of ticket it is. When a CAS server fails, requests for its tickets are assigned to one of the other servers. Most of the time the CAS server recognizes this as a ticket from another node and looks in the current shadow copy of that node's ticket registry.
As in the previous example, a node may not have a copy of tickets issued in the last few seconds, so one or two users may see an error.
If someone logged into the failed node needs a Service Ticket, the request is routed to one of the surviving nodes, which creates a Service Ticket (in its own Ticket Registry, with its own node suffix, which it will own) chained to the copy of the original Login Ticket in the appropriate shadow Ticket Registry. When that ticket is validated, the front end routes the validation request, based on the suffix, to this node, which returns the Netid from the Login Ticket in the shadow registry.
Again, the rule that each node owns its own registry and all the tickets it created and the other nodes can't successfully change those tickets has certain consequences.
- If you use Single SignOff, then the Login Ticket maintains a table of Services to which you have logged in. When you log out, or when your Login Ticket times out in the middle of the night, CAS calls each Service on a published URL with the Service Ticket ID you used to log in, so the application can log you off if it has not already done so. In failover mode a backup server can issue Service Tickets for a failed node's TGT, but it cannot successfully update the Service table in that TGT, because when the failed node comes back up it will restore the old Service table along with the old TGT.
- If the user logs out and the Services are notified by the backup CAS server, and then the node that owned the TGT is restored along with the now undead copy of the obsolete TGT, then in the middle of the night that restored TGT will timeout and the Services will all be notified of the logoff a second time. It seems unlikely that anyone would ever write a service logout so badly that a second logoff would be a problem. Mostly it will be ignored.
You have probably guessed by now that Yale does not use Single SignOut, and if we ever enabled it we would only indicate that it is supported on a "best effort" basis.
CAS Cluster
In this document a CAS "cluster" is just a bunch of CAS server instances that are configured to know about each other. The term "cluster" does not imply that the Web servers are clustered in the sense that they share Session information. Nor does it depend on any other type of communication between machines. In fact, a CAS cluster could be created from a CAS running under Tomcat and one running under JBoss.
To the outside world, the cluster typically shares a common virtual URL simulated by the Front End device. At Yale, CAS is "https://secure.its.yale.edu/cas" to all the users and applications. The "secure.its.yale.edu" DNS name is associated with an IP address managed by the BIG-IP F5 device. It terminates the SSL, then examines requests and based on programming called iRules it forwards requests to any of the configured CAS virtual machines.
Each virtual machine has a native DNS name and URL. It is these "native" URLs that define the cluster because each CAS VM has to use the native URL to talk to another CAS VM. At Yale those URLs follow a pattern of "https://vm-foodevapp-01.web.yale.internal:8080/cas".
Internally, Cushy configuration takes a list of URLs and generates a cluster definition with three pieces of data for each cluster member: a nodename like "vmfoodevapp01" (the first element of the DNS name with dashes removed), the URL, and the ticket suffix that identifies that node (at Yale the F5 likes the ticket suffix to be an MD5 hash of the DNS name).
Sticky Browser Sessions
An F5 can be configured to have "sticky" connections between a client and a server. The first time the browser connects to a service name it is assigned any available backend server. For the next few minutes, however, subsequent requests to the same service go back to whichever server the F5 assigned to handle the first request.
Intelligent routing is based on tickets that exist only after you have logged in. CAS was designed (for better or worse) to use Spring Webflow which keeps information in the Session object during the login process. For Webflow to work, one of two things must happen:
- The browser has to POST the Userid/Password form back to the CAS server that sent it the form (which means the front end has to use sticky sessions based on IP address or JSESSIONID value).
- You have to use real Web Server clustering so the Web Servers all exchange Session objects based on JSESSIONID.
Option 2 is a fairly complex process of container configuration, unless you have already solved this problem and routinely generate JBoss cluster VMs using some canned script. Sticky sessions in the front end are somewhat easier to configure, and obviously they are less complicated than routing requests by parsing the ticket ID string.
Yale made a minor change to the CAS Webflow to store extra data in hidden fields of the login form, and an additional check so that if the form POSTs back to a different server, that server can handle the rest of the login without requiring Session data.
What is a Ticket Registry
This is a rather detailed description of one CAS component, but it does not assume any prior knowledge.
CAS provides a Single SignOn function. It acts as a system component, but internally it is structured like most other Web applications. Internally it creates, validates, and deletes objects called Tickets. The Ticket Registry is the component that holds the tickets while CAS is running.
When the user logs in, CAS creates a ticket that the user can use to create other tickets (a Ticket Granting Ticket or TGT, although a more friendly name for it is the "Login Ticket"). Then when someone previously logged in uses CAS to authenticate to another Web application, CAS creates a Service Ticket (ST).
Web applications are traditionally defined in three layers. The User Interface generates the Web pages, displays data, and processes user input. The Business Logic validates requests, verifies inventory, approves the credit card, and so on. The backend "persistence" layer talks to a database. CAS doesn't sell anything, but it has roughly the same three layers.
The CAS User Interface uses Spring MVC and Spring Web Flow to log a user on and to process requests from other Web applications. The Business Logic validates the userid and password (typically against an Active Directory), and it creates and deletes the tickets. CAS tickets, however, typically remain in memory and do not need to be written to a database or disk file. Nevertheless, the Ticket Registry is positioned logically where the database interface would be in any other application program, and sometimes CAS actually uses a database.
CAS was written to use the Spring Java Framework to configure its options. CAS requires some object that implements the TicketRegistry function. JASIG CAS provides at least five alternative Ticket Registries. You pick one and then insert its name (and configure its parameters) using a documented Spring XML file which not surprisingly is named "ticketRegistry.xml". Given this modular plug-in design, Cushy is just one more option you can optionally configure with this file.
When you have a regular Web application that sells things, the objects in the application (products, inventory, orders) would be stored in a database and the most modern way to do this is with JPA. To support the JASIG JPA Ticket Registry, all the Java source for tickets and things that tickets contain or point to are annotated with references to database tables and the names and data types of the columns in the table that each data field maps to. If you don't use the JPA Ticket Registry these annotations are ignored. JPA uses the annotations to generate and then weave into these objects invisible support code to detect when something has changed and track connections from one object to the next.
The "cache" versions (Ehcache, JBoss Cache, Memcached) of JASIG TicketRegistry modules have no annotations and few expectaions. They use ordinary objects (sometimes call Plain Old Java Objects or POJOs). They require the objects to be serializable because, like Cushy, they use the Java writeObject statement to turn any object to a stream of bytes that can be held in memory, stored on disk, or sent over the network.
CAS tickets are all serializable, but they are not designed to be very nice about it. This is the "dirty secret" of CAS. It has always expected tickets to be serialized, but it breaks some of the rules and, as a result, can generate failures. They don't happen often, but CAS runs 24x7 and anything that can go wrong will go wrong. With one of the caching solutions, when it goes wrong it is deep inside a huge black box of "off the shelf" code that may or may not recover from the error.
The purpose of this section is to describe in more detail than you find in other CAS documentation just what is going on here, how Cushy avoids problems, and how Cushy would recover even if something went wrong.
In simple terms, the Login ticket (the TGT) "contains" your Netid (username, principal, whatever you call it). In more detail the TGT points to an Authentication object that points to a Principal object that contains the Netid. Currently when a user logs on the TGT, Netid, and any attributes are all determined once and that part of the TGT never changes. In the future, CAS may add higher levels of authentication (secondary "factors") and that might change the important part of the TGT, but that is not a problem now.
However, if you use Single SignOut then CAS also maintains a "services" table in the TGT that associates previously used Service Ticket ID strings with references to Service objects containing the URL that CAS should call to notify a service that a user previously authenticated by CAS has logged out. The services table changes through the day as users log in to applications.
CAS also generates Service Tickets. However, the ST is used and discarded in a few milliseconds during normal use, or if it is never claimed it times out after a default period of 10 seconds. When the ST is validated by the application, CAS returns the Netid, but CAS does not store the Netid in the ST. Instead, it points the ST to the TGT and the TGT "contains" the Netid. When the application validates the ST, CAS goes from the ST to the TGT, gets the Netid, deletes the ST, and returns the Netid to the application.
So the ST is around for such a short period of time that you would not think it has an important effect on the structure of the Ticket Registry. There are, however, two impacts:
- First, whenever you ask Java writeObject to serialize an object to bytes, Java not only turns that object into bytes but it also makes a copy of any other object it points to. Cushy, Ehcache, JBoss Cache, and Memcached all serialize objects, but only here will you find anyone explaining what that means. When you think you are serializing an ST what you are really getting is an ST, the TGT it points to, the Authentication and Principal objects the TGT points to, and then the Service objects for all the services that the TGT is remembering for Single SignOut. In reality, the only thing the ST needs is the Netid, but because CAS is designed with many layers of abstraction you get this entire mess whether you like it or not.
- If you do not assume that the Front End is smart enough to route validation requests to the right host, then there is a race condition between the cache based ticket replication systems copying the ST to the other nodes and the possibility that the front end will route the ST validation request to one of those other nodes. The only way to make sure this never happens is to configure the cache replication system to copy the ST to all the other nodes before returning to the CAS Business Layer to confirm the ST is stored. That makes the network I/O synchronous, so if replication fails, CAS stops running as a result.
A special kind of Service is allowed to "Proxy", to act on behalf of the user. Such a service gets its own Proxy Granting Ticket (PGT) which acts like a TGT in the sense that it generates Service Tickets and the ST points back to it. However, a PGT does not "contain" the Netid. Rather the PGT points to the TGT which does contain the Netid.
When Cushy does a full checkpoint of all the tickets, it doesn't matter how the tickets are chained together. Under the covers of the writeObject statement, Java does all the work of following the chains and understanding the structure, then it writes out a blob of bytes that will recreate the exact same structure when you read it back in.
The caching solutions never serialize the entire Registry. They write single tickets one at a time, except that as we have seen, a single ST or PGT points to a TGT that points to a lot of junk and all that gets written out every time you think you are serializing a "single ticket".
When Cushy generates an incremental file between full checkpoints, then all the added Tickets in the incremental file are individually serialized, producing the same result as the caching solutions. With Cushy, however, every 5 minutes the full checkpoint comes along and cleans it all up.
The reason why CAS can tolerate this sloppy serialization is that it doesn't affect the Business Logic. Suppose an ST is serialized on one node and is sent to another node where it is validated. Validation follows the chain from the ST to the TGT and then gets the Netid (and maybe the attributes). The result is the same whether you obtain the Netid from the "real" TGT or a copy of the real TGT made a few seconds ago. Once the ST is validated it is deleted, and that also discards all the other objects chained off the ST by the caching mechanism. If it isn't validated, then the ST times out and is deleted anyway.
If you have a PGT that points to a TGT, and if the PGT is serialized and copied to another node, and if after it is copied the TGT is changed (which cannot happen today but might be something CAS does in a future release with multifactor support), then the copy of the PGT points to the old copy of the TGT with the old info while the original PGT points to the original TGT with the new data. This problem would have to be solved before you introduce any new CAS features that meaningfully change the TGT.
Cushy solves this currently non-existent problem every time it does a full checkpoint. Between checkpoints, only for the tickets added since the last checkpoint, Cushy creates copies of TGTs from the individually serialized STs and PGTs just like the caching systems. It creates a lot fewer of them and they last only a few minutes.
Now for the real problem that CAS has not solved.
When you serialize a collection, Java must internally obtain an "iterator" and step one by one through the objects in the collection. An iterator knows how to find the next or previous object in the collection. However, the iterator can break if while it is dealing with one element in the collection another thread is adding a new element to the collection "between" the object that serialization is currently processing and the object that the iterator expects to be next. When this happens, serialization stops and throws an error exception.
So if you are going to use a serialization based replication mechanism (like Ehcache, JBoss Cache, or Memcached) then it is a really, really bad idea to have a non-threadsafe collection in your tickets, such as the services table in the TGT used for Single SignOut. Collisions don't happen all that often, but as it turns out a very common user behavior can make them much more likely.
Someone presses the "Open All In Tabs" button of the browser to create several tabs simultaneously. Two tabs reference CAS aware applications that redirect the browser to CAS. The user is already logged on, so each tab only needs a Service Ticket. The problem is that both Service Tickets point to the same TGT, and both go into the services table for Single SignOut, and the first one to get generated can start to be serialized while the second one is about to add its new entry in the services table.
Yale does not use Single SignOut, so we simply disabled the services table. If you want to solve this problem then at least Cushy gives you access to all the code, so you can come up with a solution if you understand Java threading.
Usage Pattern
Users start logging into CAS at the start of the business day. The number of TGTs begins to grow.
Users seldom log out of CAS, so TGTs typically time out instead of being explicitly deleted.
Users abandon a TGT when they close the browser. They then get a new TGT and cookie when they open a new browser window.
Therefore, the number of TGTs can be much larger than the number of real CAS users. It is a count of browser windows and not of people or machines.
At Yale around 3 PM a typical set of statistics is:
Unexpired-TGTs: 13821 Unexpired-STs: 12 Expired TGTs: 30 Expired STs: 11
So you see that a Ticket Registry is overwhelmingly a place to keep TGTs (in this statistic TGTs and PGTs are combined).
Over night the TGTs from earlier in the day time out and the Registry Cleaner deletes them.
So generally the pattern is a slow growth of TGTs while people are using the network application, followed by a slow reduction of tickets while they are asleep, with a minimum probably reached each morning before 8 AM.
If you display CAS statistics periodically during the day you will see a regular pattern and a typical maximum number of tickets in use "late in the day".
Translated to Cushy, the cost of the full checkpoint and the size of the checkpoint file grow over time along with the number of active tickets, and then the file shrinks over night. During any period of intense login activity the incremental file may be unusually large. If you had a long time between checkpoints, then around the daily minimum (8 AM) you could get an incremental file bigger than the checkpoint.
Some Metrics
At Yale there are typically more than 10,000 and fewer than 20,000 Login tickets. Because Service Tickets expire when validated and after a short timeout, there are only several dozen unexpired Service Tickets at any given time.
Java can serialize a collection of 20,000 Login tickets to disk in less than a second (one core of a Sandy Bridge processor). Cushy has to block normal CAS processing just long enough to get a list of references to all the tickets; all the rest of the work occurs on a separate thread and does not interfere with CAS processing.
Of course, Cushy also has to deserialize tickets from the other nodes. However, remember that if you are currently using any other Ticket Registry, the number of tickets reported on the statistics page is the total combined across all nodes, while Cushy serializes only the tickets that the current node owns and deserializes the tickets of the other nodes. So you can generally apply the "20K tickets = 1 second" rule of thumb. Serializing 200,000 tickets has been measured to take 9 seconds (so it scales as expected). If you convert a common pool of 20K tickets to Cushy, then in a load balanced cluster each node serializes the 10K tickets it owns and deserializes the 10K tickets from the other node, while in a master-backup configuration the master serializes 20K and deserializes 0 and the backup serializes 0 and deserializes 20K. You come to the same number no matter how you slice it.
Incrementals are trivial (.1 to .2 seconds).
Configuration
In CAS the TicketRegistry is configured using the WEB-INF/spring-configuration/ticketRegistry.xml file. It has two sections.
First, a bean with id="ticketRegistry" is configured selecting the class name of one of the optional TicketRegistry implementations (JBoss Cache, Ehcache, ...). To use Cushy you configure the CushyTicketRegistry class. The rest of the bean definition provides property values that configure that particular type of registry.
Then at the end there are a group of bean definitions that set up periodic timer driven operations using the Spring support for the Quartz timer library. Normally these beans set up the RegistryCleaner to wake up periodically and remove all the expired tickets from the Registry.
Cushy adds a new bean at the beginning. This is an optional bean for class CushyClusterConfiguration that uses some static configuration information and runtime Java logic to find the IP addresses and hostname of the current computer to select a specific cluster configuration and generate property values that can be passed on to the CushyTicketRegistry bean. If this class does not do what you want, you can alter it, replace it, or just generate static configuration for the CushyTicketRegistry bean.
The Cluster
We prefer a single "cas.war" artifact that works everywhere. It has to work on standalone or clustered environments, in a desktop sandbox with or without virtual machines, but also in official DEV (development), TEST, and PROD (production) servers.
There are techniques (Ant, Maven) to "filter" a WAR file replacing one string of text with another as it is deployed to a particular host. While that works for individual parameters like "nodeName", the techniques that are available make it hard to substitute a variable number of elements, and some locations have one CAS node in development, two CAS nodes in test, and three CAS nodes in production.
Then when we went to Production Services to actually deploy the code, they said that they did not want to edit configuration files. They wanted a system where the same WAR is deployed anywhere, and when it starts up it looks at the machine it is on, decides that this is a TEST machine (because it has "tst" in the hostname), and so it automatically generates the configuration of the TEST cluster.
At this point you should have figured out that it would be magical if anyone could write a class that reads your mind and figures out what type of cluster you want. However, it did seem reasonable to write a class that could handle most configurations out of the box and was small enough and simple enough that you could add any custom logic yourself.
The class is CushyClusterConfiguration and it is separate from CushyTicketRegistry to isolate its entirely optional convenience features and make it possible to jiggle the configuration logic without touching the actual TicketRegistry. It has two configuration strategies:
First, you can configure a sequence of clusters (desktop sandbox, and machine room development, test, and production) by providing for each cluster a list of the machine specific raw URLs used to reach CAS (from other machines also behind the machine room firewall). CushyClusterConfiguration looks up all the IP addresses of the current machine, then looks up the addresses associated with the servers in each URL in each cluster. It chooses the first cluster that it is in (that contains a URL that resolves to an address of the current machine).
Second, if none of the configured clusters contains the current machine, or if no configuration is provided, then Cushy uses the HOSTNAME and some Java code to automatically configure the cluster. At this point we expect you to provide some programming, unless you can use the Yale solution off the shelf.
At Yale we know that CAS is a relatively small application with limited requirements, and that any modern multi-core server can certainly handle all the CAS activity of the university (or even of a much larger university). So we always create clusters with only two nodes, and the other node is just for recovery from a serious failure (and ideally the other node is in another machine room far enough away to be outside the blast radius).
In any given cluster, the hostname of both machines is identical except for a suffix that is either the three characters "-01" or "-02". So by finding the current HOSTNAME it can say that if this machine has "-01" in its name, the other machine in the cluster is "-02", or the reverse.
Configuration By File
You can define the CushyClusterConfiguration bean with or without a "clusterDefinition" property. If you provide the property, it is a List of Lists of Strings:
<bean id="clusterConfiguration" class="edu.yale.its.tp.cas.util.CushyClusterConfiguration"
p:md5Suffix="yes" >
<property name="clusterDefinition">
<list>
<!-- Desktop Sandbox cluster -->
<list>
<value>http://foo.yu.yale.edu:8080/cas/</value>
<value>http://bar.yu.yale.edu:8080/cas/</value>
</list>
<!-- Development cluster -->
<list>
<value>https://casdev1.yale.edu:8443/cas/</value>
<value>https://casdev2.yale.edu:8443/cas/</value>
</list>
...
</list>
</property>
</bean>
In Spring, the <value> tag generates a String, so this is what Java calls a List<List<String>> (List of Lists of Strings). As noted, the top List has two elements. The first element is a List with two Strings for the machines foo and bar. The second element is another List with two Strings for casdev1 and casdev2.
There is no good way to determine all the DNS names that may resolve to an address on this server. However, it is relatively easy in Java to find all the IP addresses of all the LAN interfaces on the current machine. This list may be longer than you think. Each LAN adapter can have IPv4 and IPv6 addresses, and then there can be multiple real LANs and a bunch of virtual LAN adapters for VMWare or Virtualbox VMs you host or tunnels to VPN connections. Of course, there is always the loopback address.
So CushyClusterConfiguration goes to the first cluster (foo and bar). It does a name lookup (in DNS and in the local etc/hosts file) for each server name (foo.yu.yale.edu and bar.yu.yale.edu). Each lookup returns a list of IP addresses associated with that name.
CushyClusterConfiguration selects the first cluster and first host computer whose name resolves to an IP address that is also an address on one of the interfaces of the current computer. The DNS lookup of foo.yu.yale.edu returns a bunch of IP addresses. If any of those addresses is also an address assigned to any real or virtual LAN on the current machine, then that is the cluster host name and that is the cluster to use. If not, then try again in the next cluster.
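Reduced to its essentials, the selection loop looks like this (a condensed sketch; the real CushyClusterConfiguration also derives node names, ticket suffixes, and URLs for the other members):

import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.URL;
import java.net.UnknownHostException;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ClusterMatchSketch {
    // Return the first configured cluster containing a URL whose host name resolves to an
    // IP address that is also assigned to one of this machine's LAN interfaces.
    static List<String> findMyCluster(List<List<String>> clusterDefinition) throws Exception {
        Set<InetAddress> localAddresses = new HashSet<>();
        for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            localAddresses.addAll(Collections.list(nic.getInetAddresses()));
        }
        for (List<String> cluster : clusterDefinition) {
            for (String casUrl : cluster) {
                String host = new URL(casUrl).getHost();
                InetAddress[] resolved;
                try {
                    resolved = InetAddress.getAllByName(host);
                } catch (UnknownHostException e) {
                    continue;                       // skip names that do not resolve here
                }
                for (InetAddress address : resolved) {
                    if (localAddresses.contains(address)) {
                        return cluster;             // this machine is a member of this cluster
                    }
                }
            }
        }
        return null;                                // no match: fall through to autoconfiguration
    }
}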
CushyClusterConfiguration can determine if it is running in the sandbox on the desktop, or if it is running in the development, test, production, disaster recovery, or any other cluster definition. The only requirement is that IP addresses be distinct across servers and clusters.
Restrictions (if you use a single WAR file with a single global configuration):
It is not generally possible to determine the port numbers that a J2EE Web Server is using. So it is not possible to make distinctions based only on port number. CushyClusterConfiguration requires a difference in IP addresses. So if you want to emulate a cluster on a single machine, use VirtualBox to create VMs and don't think you can run two Tomcats on different ports.
(This does not apply to Unit Testing, because Unit Testing does not use a regular WAR and is not constrained to a single configuration file. If you look at the unit tests you can see examples where there are two instances of CushyTicketRegistry configured with two instances of CushyClusterConfiguration with two cluster configuration files. In fact, it can be a useful trick that the code stops at the first match. If you edit the etc/hosts file to create a bunch of dummy hostnames all mapped on this computer to the loopback address (127.0.0.1), then those names will always match the current computer and Cushy will stop when it encounters the first such name. The trick then is to create for the two test instances of Cushy two configuration files (localhost1,localhost2 and localhost2,localhost1). Fed the first configuration, that test instance of Cushy will match the first name (localhost1) and will expect the cluster to also have the other name (localhost2). Fed the second configuration the other test class will stop at localhost2 (which is first in that file) and then assume the cluster also contains localhost1.)
Any automatic configuration mechanism can get screwed up by mistakes made by system administrators. In this case, it is a little easier to mess things up in Windows. You may have already noticed this if your Windows machine hosts VMs or if your home computer is a member of your Active Directory at work (through VPNs, for example). At least you would see it if you do "nslookup" to see what DNS thinks of your machine. Windows has Dynamic DNS support and it is enabled by default on each new LAN adapter. After a virtual LAN adapter has been configured you can go to its adapter configuration, select IPv4, click Advanced, select the DNS tab, and turn off the checkbox labelled "Register this connection's addresses in DNS". If you don't do this (and how many people even think to do this), then the private IP address assigned to your computer on the virtual LAN (or the home network address assigned to your computer when it has a VPN tunnel to work) gets registered in the AD DNS server. When you look up your machine in DNS you get the IP address you expected, and then an additional address of the form 192.168.1.? which is either the address of your machine on your home LAN or its address on the private virtual LAN that connects it to VMs it hosts.
Generally the extra address doesn't matter. A problem only arises when another computer that is also on a home or virtual network with its own 192.168.1.* addresses looks up the DNS name of a computer, gets back a list of addresses, and for whatever reason decides that that other computer is also on its home or virtual LAN instead of using the real public address that can actually get to the machine.
CushyClusterConfiguration is going to notice all the addresses on the machine and all the addresses registered to DNS, and it may misidentify the cluster if these spurious internal private addresses are being used on more than one sandbox or machine room CAS computer. It is a design objective of continuing Cushy development to refine this configuration process so you cannot get messed up when a USB device you plug into your computer generates a USB LAN with a 192.168.153.4 address for your computer, but to do this in a way that preserves your ability to configure a couple of VM guests on your desktop for CAS testing.
Note also that the Unit Test cases sometimes exploit this by defining dummy hostnames that resolve to the loopback address and therefore are immediately matched on any computer.
In practice you will have a sandbox you created and some machine room VMs that were professionally configured and do not have strange or unexpected IP addresses, and you can configure all the hostnames in a configuration file and Cushy will select the right cluster and configure itself the way you expect.
Autoconfigure
At Yale the names of DEV, TEST, and PROD machines follow a predictable pattern, and CAS clusters have only two machines. So production services asked that CAS automatically configure itself based on those conventions. If you have similar conventions and any Java coding expertise you can modify the autoconfiguration logic at the end of CushyClusterConfiguration Java source.
CAS is a relatively simple program with low resource utilization that can run on very large servers. There is no need to spread the load across multiple servers, so the only reason for clustering is error recovery. At Yale a single additional machine is regarded as providing enough recovery.
At Yale, the two servers in any cluster have DNS names that end in "-01" or "-02". Therefore, Cushy autoconfigure gets the HOSTNAME of the current machine, looks for a "-01" or "-02" in the name, and when it matches creates a cluster with the current machine and one additional machine with the same name but substituting "-01" for "-02" or the reverse.
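A sketch of that convention (the method name is invented, and the real code goes on to build the partner's URL and ticket suffix):

public class AutoConfigureSketch {
    // Derive the partner CAS node's hostname by swapping the "-01"/"-02" suffix in the
    // current machine's HOSTNAME, following the Yale naming convention.
    static String partnerHostname(String hostname) {
        if (hostname.contains("-01")) {
            return hostname.replace("-01", "-02");
        }
        if (hostname.contains("-02")) {
            return hostname.replace("-02", "-01");
        }
        return null;   // no "-01" or "-02": fall back to a standalone configuration
    }
}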
Standalone
If no configured cluster matches the current machine IP addresses and the machine does not autoconfigure (because the HOSTNAME does not have "-01" or "-02"), then Cushy configures a single standalone server with no cluster.
Even without a cluster, Cushy still checkpoints the ticket cache to disk and restores the tickets across a reboot. So it provides a useful function in a single machine configuration that is otherwise only available with JPA and a database.
This is all Optional
Although CushyClusterConfiguration makes most configuration problems simple and automatic, if it does the wrong thing and you don't want to change the code you can ignore it entirely. As will be shown in the next section, there are three properties (a string and two Properties tables) that are input to the CushyTicketRegistry bean. The whole purpose of CushyClusterConfiguration is to generate values for these three parameters. If you don't like it, you can use Spring to generate static values for these parameters and you don't even have to use the clusterConfiguration bean.
Other Parameters
Typically in the ticketRegistry.xml Spring configuration file you configure CushyClusterConfiguration as a bean with id="clusterConfiguration" first, and then configure the usual id="ticketRegistry" using CushyTicketRegistry. The clusterConfiguration bean exports some properties that are used (through Spring EL) to configure the Registry bean.
<bean id="ticketRegistry" class="edu.yale.cas.ticket.registry.CushyTicketRegistry"
p:serviceTicketIdGenerator-ref="serviceTicketUniqueIdGenerator"
p:checkpointInterval="300"
p:cacheDirectory= "#{systemProperties['jboss.server.data.dir']}/cas"
p:nodeName= "#{clusterConfiguration.getNodeName()}"
p:nodeNameToUrl= "#{clusterConfiguration.getNodeNameToUrl()}"
p:suffixToNodeName="#{clusterConfiguration.getSuffixToNodeName()}" />
The nodeName, nodeNameToUrl, and suffixToNodeName parameters link back to properties generated as a result of the logic in the CushyClusterConfiguration bean.
The cacheDirectory is a work directory on disk to which CAS has read/write privileges. The default is "/var/cache/cas", which is Unix syntax but can be created as a directory structure on Windows. In this example we use the Java system property for the JBoss /data subdirectory when running CAS on JBoss.
The checkpointInterval is the time in seconds between successive full checkpoints. Between checkpoints, incremental files will be generated.
CushyClusterConfiguration exposes an md5Suffix="yes" parameter which causes it to generate a ticketSuffix that is the MD5 hash of the computer's hostname instead of using the nodename as a suffix. The F5 likes to refer to computers by their MD5 hash, and using that as the ticket suffix simplifies the F5 configuration even though it makes the ticket longer.
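Generating such a suffix is one call to the standard MessageDigest class. The sketch below hex-encodes the hash; the exact encoding the F5 expects is a local convention, so treat this as illustrative:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class Md5SuffixSketch {
    // Turn a node's DNS name into an MD5-based suffix to append to ticket ids.
    static String md5Suffix(String hostname) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(hostname.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();   // 32 hex characters identifying the node
    }
}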
How Often?
"Quartz" is the standard Java library for timer driven events. There are various ways to use Quartz, including annotations in modern containers, but JASIG CAS uses a Spring Bean interface to Quartz where parameters are specified in XML. All the standard JASIG TicketRegistry configurations have contained a Spring Bean configuration that drives the RegistryCleaner to run and delete expired tickets every so often. CushyTicketRegistry requires a second Quartz timer configured in the same file to call a method that replicates tickets. The interval configured in the Quartz part of the XML sets a base timer that determines the frequency of the incremental updates (typically every 5-15 seconds). A second parameter to the CushyTicketRegistry class sets a much longer period between full checkpoints of all the tickets in the registry (typically every 5-10 minutes).
A full checkpoint contains all the tickets. If the cache contains 20,000 tickets, it takes about a second to checkpoint, generates a 3.2 megabyte file, and then has to be copied across the network to the other nodes. An incremental file contains only the tickets that were added or deleted since the last full checkpoint. It typically takes a tenth of a second and uses very little disk space or network. However, after a number of incrementals it is a good idea to do a fresh checkpoint just to clean things up. You set the parameters to optimize your CAS environment, although either operation has so little overhead that it should not be a big deal.
Based on the usage pattern, at 8:00 AM the ticket registry is mostly empty and full checkpoints take no time. Late in the afternoon the registry reaches its maximum size and the difference between incrementals and full checkpoints is at its greatest.
Although Cushy uses the term "incremental", the actual algorithm is a differential between the current cache and the last full checkpoint. So between full checkpoints, the incremental file grows as it accumulates all the changes. Since it also includes a list of all the Service Ticket IDs that were deleted (just to be absolutely sure things are correct), if you made the period between full checkpoints unusually long the incremental file could become larger than the checkpoint, and since it is transferred so frequently that would hurt performance far more than simply setting the checkpoint period to a reasonable number.
Nodes notify each other of a full checkpoint. Incrementals occur so frequently that it would be inefficient to send messages around. A node picks up the other incrementals from the other nodes each time it generates its own incremental.
CushyTicketRegistry (the code)
CushyTicketRegistry is a medium sized Java class that does all the work. It began with the standard JASIG DefaultTicketRegistry code that stores the tickets in memory (in a ConcurrentHashMap). Then on top of that base, it adds code to serialize tickets to disk and to transfer the disk files between nodes using HTTP.
Unlike the JASIG TicketRegistry implementations, CushyTicketRegistry does not create a single big cache of tickets lumped together from all the nodes. Each node "owns" the tickets it creates, and the other nodes keep only a read-only copy of them.
The Spring XML configuration creates what is called the Primary instance of the CushyTicketRegistry class. This object is the TicketRegistry as far as the rest of CAS is concerned and it implements the TicketRegistry interface. From the properties provided by Spring from the CushyClusterConfiguration, the Primary object determines the other nodes in the cluster and it creates an additional Secondary object instance of the CushyTicketRegistry class for each other node.
Tickets created by CAS on this node are stored in the Primary object which periodically checkpoints to disk, and more frequently writes the incremental changes file to disk. It then notifies the other nodes when it has a new checkpoint to pick up. The Secondary objects keep a Read-Only copy of the tickets on the other nodes in memory in case that node fails.
Methods and Fields
In addition to the ConcurrentHashMap named "cache" that CushyTicketRegistry borrowed from the JASIG DefaultTicketRegistry code to index all the tickets by their ID string, CushyTicketRegistry adds two collections:
- addedTickets - a reference to the tickets that were added to the registry since the last full ticket backup to disk.
- deletedTickets - a collection of ticketids for the tickets that were deleted.
These two collections are maintained by the implementations of the addTicket and deleteTicket methods of the TicketRegistry interface.
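A minimal sketch of how addTicket() and deleteTicket() can maintain these two collections alongside the main map. The class layout and field names here are illustrative rather than a copy of the Cushy source; only the idea (append to a change list under a short lock, clear both lists at each full checkpoint) is taken from the description above.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative skeleton; the real class also implements the rest of the
    // JASIG TicketRegistry interface plus the checkpoint/incremental logic.
    public class TicketRegistrySketch {

        // Stand-in Ticket type so the sketch compiles on its own.
        public static class Ticket implements java.io.Serializable {
            private final String id;
            public Ticket(String id) { this.id = id; }
            public String getId() { return id; }
        }

        // Main index of tickets by ID, as in DefaultTicketRegistry.
        private final ConcurrentHashMap<String, Ticket> cache = new ConcurrentHashMap<>();

        // Changes since the last full checkpoint.
        private final List<Ticket> addedTickets = new ArrayList<>();
        private final List<String> deletedTickets = new ArrayList<>();

        public void addTicket(Ticket ticket) {
            cache.put(ticket.getId(), ticket);
            // Lock only long enough to append to the incremental collection.
            synchronized (this) {
                addedTickets.add(ticket);
            }
        }

        public boolean deleteTicket(String ticketId) {
            Ticket removed = cache.remove(ticketId);
            synchronized (this) {
                deletedTickets.add(ticketId);
            }
            return removed != null;
        }

        // Called at each full checkpoint: the accumulated changes are reset
        // because the next incremental is a differential against the new checkpoint.
        public synchronized void clearIncrementalCollections() {
            addedTickets.clear();
            deletedTickets.clear();
        }
    }

checkpoint() can then copy and clear both lists while holding the same short lock, which gives it the point-in-time copy of the changes that the incremental files are built from.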
This class has three constructors.
- The constructor without arguments is used by Spring XML configuration of the class and generates the Primary object that holds the local tickets created by CAS on this node. There is limited initialization that can be done in the constructor, so most of the work is in the afterPropertiesSet() method called by Spring when it completes its XML configuration of the object.
- The constructor with nodename and url parameters is used by the Primary object to create Secondary objects for other nodes in the cluster configuration.
- The constructor with a bunch of arguments is used by Unit Tests.
The following significant methods are added to the CushyTicketRegistry class:
- checkpoint() - Called from the periodic Quartz thread. Serializes all tickets in the Registry to the nodename file in the work directory on disk. Makes a point-in-time thread safe copy of references to all the current tickets in "cache" and clears the added and deleted ticket collections. Builds an ArrayList of the non-expired tickets. Serializes the ArrayList (and therefore all the non-expired tickets) to /var/cache/cas/CASVM1. Generates a Service Ticket ID that will act as a password until the next checkpoint call. Notifies the other nodes, in this example by calling the /cas/cluster/notify service of CASVM2 and passing the password ticket ID.
- restore() - Empties the current cache and de-serializes the /var/cache/cas/nodename file to a list of tickets, then adds all the unexpired tickets in the list to rebuild the cache. Typically this only happens once, on the primary object at CAS startup, where the previous checkpoint of the local cache is reloaded from disk to restore this node to the state it was in at the last shutdown. However, secondary caches (of CASVM2 in this example) are reloaded any time a /cas/cluster/notify call arrives from CASVM2 announcing that it has taken a new checkpoint.
- writeIncremental() - Called by the quartz thread between checkpoints. Serializes point in time thread safe copies of the addedTickets and deletedTickets collections to create the nodename-incremental file in the work directory.
- readIncremental() - De-serialize two collections from the nodename-incremental file in the work directory. Apply one collection to add tickets to the current cache collection and then apply the second collection to delete tickets. After the update, the cache contains all the non-expired tickets from the other node at the point the incremental file was created.
- readRemoteCache() - Generates an https: request to read the nodename or nodename-incremental file from another node and stores it in the work directory.
- notifyNodes() - Calls the /cas/cluster/notify restful service on each other node after a call to checkpoint() generates a full backup. Passes the generated dummy ServiceTicketId to the node, which acts as a password in any subsequent readRemoteCache() call.
- processNotify() - called from the Spring MVC layer when the message from a notifyNodes() call arrives at the other node.
- timerDriven() - called from Quartz every so often (say every 10 seconds) to generate incrementals and, periodically, a full checkpoint. It also reads the current incremental from each of the other nodes. (A sketch of this control flow follows the list.)
- destroy() - called by Java when CAS is shutting down. Writes a final checkpoint file that can be used after restart to reload all the tickets to their status at shutdown.
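The sketch promised above shows the overall control flow of the timer callback. The counter arithmetic and helper names (cyclesPerCheckpoint, otherNodeNames) are hypothetical; only the sequence of operations follows the method descriptions in the list.

    // Sketch of the timer callback's control flow; the method names match the
    // list above, but the counter and helper signatures are illustrative.
    public class TimerDrivenSketch {

        private int cycleCount = 0;

        // e.g. a 10-second Quartz interval and a 300-second checkpoint period
        // would give 30 cycles per full checkpoint.
        private final int cyclesPerCheckpoint = 30;

        public void timerDriven() {
            cycleCount++;
            if (cycleCount >= cyclesPerCheckpoint) {
                cycleCount = 0;
                checkpoint();        // full backup of all tickets, then notifyNodes()
                return;              // the Notify tells the other nodes to come and get it
            }
            writeIncremental();      // differential since the last full checkpoint
            // Each incremental cycle also pulls the latest incremental file from
            // every other node and applies it to that node's Secondary registry.
            for (String node : otherNodeNames()) {
                readRemoteCache(node);   // fetch nodename-incremental over https
                readIncremental(node);   // apply adds, then deletes, to the shadow copy
            }
        }

        // Stubs so the sketch stands alone.
        private void checkpoint() {}
        private void writeIncremental() {}
        private void readRemoteCache(String node) {}
        private void readIncremental(String node) {}
        private String[] otherNodeNames() { return new String[] { "CASVM2" }; }
    }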
Unlike conventional JASIG Cache mechanisms, the CushyTicketRegistry does not combine tickets from all the nodes. It maintains shadow copies of the individual ticket caches from other nodes. If a node goes down, then the F5 starts routing requests for that node to the other nodes that are still up. The other nodes can recognize that these requests are "foreign" (for tickets issued by another node and therefore in the shadow copy of that node's tickets) and they can handle such requests temporarily until the other node is brought back up.
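Because every ticket ID ends with the name of the node that issued it, recognizing a "foreign" request and finding the matching shadow registry is just a suffix comparison. A minimal sketch, with invented field and method names and an illustrative ID format:

    import java.util.HashMap;
    import java.util.Map;

    // Sketch only: how a node could decide which registry "owns" a ticket by
    // looking at the node-name suffix on the ticket ID.
    public class TicketRoutingSketch {

        interface Registry { /* stand-in for a CushyTicketRegistry instance */ }

        private final String localNodeName;
        private final Registry primary;
        private final Map<String, Registry> secondaries = new HashMap<>(); // nodename -> shadow registry

        public TicketRoutingSketch(String localNodeName, Registry primary) {
            this.localNodeName = localNodeName;
            this.primary = primary;
        }

        public Registry registryFor(String ticketId) {
            // Ticket IDs end with the issuing node's name, e.g. "ST-42-xxxx-CASVM2".
            if (ticketId.endsWith(localNodeName)) {
                return primary;                 // a ticket this node issued itself
            }
            for (Map.Entry<String, Registry> entry : secondaries.entrySet()) {
                if (ticketId.endsWith(entry.getKey())) {
                    return entry.getValue();    // read-only shadow of another node
                }
            }
            return primary;                     // default: treat as local
        }
    }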
Flow
During normal CAS processing, the addTicket() and deleteTicket() methods lock the registry just long enough to add an item to the end of one of the two incremental collections. Cushy uses locks only for very simple updates and copies, so it cannot deadlock and performance should not be affected. This is the only part of Cushy that runs under the normal CAS HTTP request processing.
Quartz maintains a pool of threads independent of the threads used by JBoss or Tomcat to handle HTTP requests. Periodically a timer event is triggered, Quartz assigns a thread from the pool to handle it, the thread calls the timerDriven() method of the primary CushyTicketRegistry object, and for the purpose of this example, let us assume that it is time for a new full checkpoint.
Java provides a built-in class called ConcurrentHashMap that allows a collection of Tickets to be shared safely by request threads. The JASIG DefaultTicketRegistry uses this class, and Cushy adopts the same design. The map exposes a view of all the Ticket objects that can be walked safely while other threads continue to add and delete tickets. Cushy uses this to obtain its own private list of all the Tickets that it can checkpoint without affecting any other thread doing normal CAS business.
The collection returned by ConcurrentHashMap is not serializable, so Cushy has to copy Tickets from it to a more standard collection, and it uses this opportunity to exclude expired tickets. Then it uses a single Java writeObject statement to write the List and a copy of all the Ticket objects to a checkpoint file on disk. Internally Java does all the hard work of figuring out which objects point to other objects so that it writes only one copy of each unique object. When it returns, Cushy just has to close the file.
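A minimal sketch of that checkpoint step, assuming a Serializable Ticket type with an isExpired() test (which the CAS Ticket interface provides); the class name and file path handling are illustrative:

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.concurrent.ConcurrentHashMap;

    public class CheckpointSketch {

        // Stand-in for the CAS Ticket interface, which is Serializable and
        // exposes isExpired().
        public interface Ticket extends Serializable {
            boolean isExpired();
        }

        private final ConcurrentHashMap<String, Ticket> cache = new ConcurrentHashMap<>();

        public void checkpoint(String path) throws IOException {
            // values() can be walked safely while request threads keep adding
            // and removing tickets; copy the live tickets to a serializable List.
            ArrayList<Ticket> live = new ArrayList<>();
            for (Ticket ticket : cache.values()) {
                if (!ticket.isExpired()) {
                    live.add(ticket);
                }
            }
            // One writeObject call serializes the list and every object it
            // references, writing each unique object exactly once.
            try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(path))) {
                out.writeObject(live);
            }
        }
    }

At the scale described above (roughly 20,000 tickets) this copy-and-writeObject pass takes about a second and produces a file of about 3.2MB.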
Between checkpoints the same logic applies, only instead of writing the complete set of Tickets, Cushy serializes just the addedTickets collection and the deleted Ticket IDs to the disk file.
After writing a full checkpoint, Cushy generates a new dummyServiceTicket ID string and issues a Notify (calls the /cluster/notify URL of CAS on all the other nodes of the cluster) passing the dummyServiceTicket string so the other nodes can use it as a password to access the checkpoint and incremental files over the Web.
On the other nodes, the Notify request arrives through HTTP like any other CAS request (like an ST validate request). Spring routes the /cluster/notify suffix to the small Cushy CacheNotifyController Java class. We want all the other nodes to get a copy of the new full checkpoint file as soon as possible, and there are two strategies to accomplish this.
Cushy does not expect a meaningful return from the /cluster/notify HTTP request. The purpose is just to trigger action on the other node, and the response is empty. Therefore, one simple strategy is to set a short Read Timeout on the HTTP request. The other node receives the Notify and begins to read the checkpoint file. Meanwhile, the node doing the Notify times out without having received a response, and goes on to Notify the next node in the cluster. Eventually, when the checkpoint file has been fetched and restored to memory, the Notify logic returns to the CacheNotifyController bean, which then tries to generate an empty reply but discovers that the client node is no longer waiting for one. Things may end with a few sloppy exceptions, but the code expects and ignores them.
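A sketch of this first strategy. The URL shape follows the /cluster/notify convention described here; the timeout values and class name are invented, and the catch blocks simply swallow the expected timeout:

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.SocketTimeoutException;
    import java.net.URL;

    public class NotifySketch {

        // Call /cas/cluster/notify on one other node, passing the generated dummy
        // Service Ticket ID that acts as a password until the next checkpoint.
        public void notifyNode(String nodeUrl, String dummyServiceTicketId) {
            try {
                URL url = new URL(nodeUrl + "/cluster/notify?ticket=" + dummyServiceTicketId);
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setConnectTimeout(5000);
                // Deliberately short: we do not need the (empty) response, we just
                // want the other node to start fetching the new checkpoint file.
                conn.setReadTimeout(1000);
                conn.getResponseCode();      // fires the request and waits briefly
            } catch (SocketTimeoutException expected) {
                // Expected: the other node is busy restoring the checkpoint and has
                // not answered yet. Move on to the next node in the cluster.
            } catch (IOException e) {
                // A node that is down will land here; that is tolerated too.
            }
        }
    }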
The other approach has the Notify request on the receiving node wake up a thread in the Secondary CushyTicketRegistry object corresponding to the node that sent the Notify. That thread can fetch the checkpoint file and restore the tickets to memory. Meanwhile, the CacheNotifyController returns immediately and sends the empty response back to the notifying node. Nothing times out and no exceptions are generated, but now you have to use threading, which is heavier-duty technology than Web applications generally prefer to use.
There is no notify for an incremental file. The nodes do not synchronize incrementals (too much overhead). So when the timerDriven() method is called between checkpoints, it writes an incremental file for the current node and then checks each Secondary object and attempts to read an incremental file from each other node in the cluster.
There is a race condition between one node taking a full checkpoint and another node trying to read an incremental. A new checkpoint deletes the previous incremental file. As each of the other nodes receives a Notify from this node it realizes that there is a new checkpoint and no incremental, so a flag gets set and no incremental is read on the next timer cycle. However, after the checkpoint is generated and before the Notify is sent there is an opportunity for the other node to wake up, ask for the incremental file, and get back an HTTP status of FILE_NOT_FOUND.
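Treating that FILE_NOT_FOUND as a normal, harmless outcome keeps the race benign. A sketch of the fetch with that tolerance built in; the URL, parameter names, and file naming are illustrative:

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class FetchIncrementalSketch {

        // Fetch another node's incremental file into the work directory.
        // nodeUrl is something like "https://servername/cas".
        public void fetchIncremental(String nodeUrl, String nodeName,
                                     String ticketId, String workDir) throws IOException {
            URL url = new URL(nodeUrl + "/cluster/getIncremental?ticket=" + ticketId);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            if (conn.getResponseCode() == HttpURLConnection.HTTP_NOT_FOUND) {
                // The other node just took a fresh checkpoint and deleted its old
                // incremental. Harmless: skip this cycle, a Notify will arrive shortly.
                return;
            }
            try (InputStream in = conn.getInputStream()) {
                Files.copy(in, Paths.get(workDir, nodeName + "-incremental"),
                           StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }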
Security
The collection of tickets contains sensitive data. With access to the TGT ID values, a remote user could impersonate anyone currently logged in to CAS. So when checkpoint and incremental files are transferred between nodes of the cluster, we need to be sure the data is encrypted and goes only to the intended CAS servers.
There are sophisticated solutions based on Kerberos or GSSAPI. However, they add considerable new complexity to the code. At the same time, we do not want to introduce anything substantially new because then it has to pass a new security review. So CushyTicketRegistry approaches security by using the existing technology CAS already uses, just applied in a new way.
CAS is based on SSL and uses the X.509 Certificate of the CAS server to verify the identity of machines. If that is good enough to identify a CAS server to the client and to the application that uses CAS, then it should be good enough to identify one CAS server to another.
CAS uses the Service Ticket as a one time randomly generated temporary password. It is large enough that you cannot guess it nor can you brute force match it in the short period of time it remains valid before it times out. The ticket is added onto the end of a URL with the "ticket=..." parameter, and the URL and all the other data in the exchange is encrypted with SSL.
Now apply the same design to CushyTicketRegistry.
Each time a node generates a new full checkpoint file it uses the standard Service Ticket ID generation code to generate a new Service Ticket ID. This ticket id serves in place of a password to fetch files from that node until the next full checkpoint. When a node generates a checkpoint it calls the "https://servername/cas/cluster/notify?ticket=..." URL on the other nodes in the cluster passing this generated dummy Service Ticket ID. SSL validates the X.509 Certificate on the other CAS server before it lets this request pass through, so the ticketid is encrypted and can only go to the real named server at the URL configured to CAS when it starts up.
When a node gets a /cluster/notify request from another node, it responds with an "https://servername/cas/cluster/getCheckpoint?ticket=..." request to obtain a copy of the newly generated full checkpoint file. Again, SSL encrypts the data and the other node's X.509 certificate validates its identity. If the other node sends the data as requested, then the Service Ticket ID sent in the Notify is valid and it is stored in the secondary CushyTicketRegistry object associated with that node. Between checkpoints the same ticket ID is used as a password to fetch incremental files, but when the next checkpoint is generated there is a new Notify with a new ticket ID and the old one is no longer valid. There is not enough time to brute force the ticket ID before it expires and you have to start over.
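On the node that owns the files, the check amounts to comparing the ticket parameter on an incoming getCheckpoint or getIncremental request against the dummy Service Ticket ID generated with the most recent checkpoint. A sketch with invented names (the constant-time comparison is an extra precaution, not something the description above requires):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class CheckpointPasswordSketch {

        // The dummy Service Ticket ID generated with the last full checkpoint.
        // In the real code this comes from the standard CAS ticket ID generator.
        private volatile String currentCheckpointTicketId;

        public void newCheckpointTaken(String generatedTicketId) {
            currentCheckpointTicketId = generatedTicketId;   // the old value is now invalid
        }

        // Called when another node asks for the checkpoint or incremental file,
        // presenting ?ticket=... from the most recent Notify it received.
        public boolean isAuthorized(String presentedTicketId) {
            String expected = currentCheckpointTicketId;
            if (expected == null || presentedTicketId == null) {
                return false;
            }
            return MessageDigest.isEqual(
                    expected.getBytes(StandardCharsets.UTF_8),
                    presentedTicketId.getBytes(StandardCharsets.UTF_8));
        }
    }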
Behavior
Normal Operation
A CAS node starts up. The Spring configuration loads the primary CushyTicketRegistry object, and it creates secondary objects for all the other configured nodes. Each object is configured with a node name, and secondary objects are also configured with the other node's external URL.
If there is a checkpoint file and perhaps an incremental file for any node in the work directory then the primary and secondary objects will use these files to restore at least the unexpired tickets from the previous time the node was up. This is called a "warm start" and it makes sense if CAS has not been down for long and when you are restarting the same version of CAS.
However, there may be times when you want CAS to start with an empty ticket registry, or when you are upgrading from one version of CAS to another and the Ticket objects may not be compatible. When this is true, any files in the work directory should be deleted before restarting CAS. This is a "cold start". When the CushyTicketRegistry discovers that it has no prior checkpoint file it enters the "Cold Start Quiet Period". For 10 minutes (you can change this in the source) a node will not communicate with any other node in the cluster. It will not send or process notifications and it will not read or return checkpoint or incremental files. This gives machine room operators time to shut down all the CAS servers, delete the files, replace the CAS WAR, and start a new version of CAS with a clean slate. If operations cannot complete this process within the Quiet Period then CAS will continue to function, but it may log I/O error messages from the readObject statement if a node tries to restore a checkpoint or incremental file that contains incompatible versions of Ticket objects created by a different version of the CAS code. As soon as all the nodes have been migrated to the new code the error messages go away and Cushy will not have been affected by them.
While each node has a copy of its own files, all the other nodes in the cluster have replicated copies of the same files. So if a node fails hard and you lose the disk with the work directory, you can recover the files for the failed node from any other running CAS node in the cluster. Unlike the ehcache or memcached systems where the cache is automatically populated over the network when any node comes up, copying files from one CAS node to another is not an automatic feature. You have to do it manually or else automate it with scripts you write based on your own network configuration.
Remember, every CAS node owns its own Registry and every other CAS node accepts whatever a node says about itself. So if you bring up a node with an empty work directory, then it creates a Registry without tickets, and then it will shortly send an empty checkpoint file to all the other nodes where they will replace any old file with the new empty file and empty their secondary Registry objects. So if you want a warm start, you need to make sure the work directory is populated before you start a CAS node or you will lose all copies of its previous tickets.
If you intend a cold start, it is best to shut down all CAS nodes, empty their work directories, and then bring them back up. You can cold start one CAS node at a time, but it may be confusing if some nodes have no tickets while at the same time other nodes are running with their old ticket population.
During normal processing CAS creates and deletes tickets. It is up to the front end (the F5) to route browser requests to the node to which the user logged in, and to route validation requests to the node that generated the ticket.
Node Failure
Detecting a node failure is the job of the front end. CAS discovers a failure when a CAS node receives a request that should have been routed to another node. CAS needs no logic to probe the cluster to determine which nodes are up or down. If a node is down then the /cluster/verify and /cluster/getIncremental requests sent to it will time out, but CAS simply waits out the timeout and makes the next request on the following cycle until eventually the node comes back up.
During failure, the most common event is that a browser or Proxy that logged on to another node makes a request to a randomly assigned CAS node to generate a new Service Ticket against the existing login.
Had we been using a JASIG TicketRegistry, then all the tickets from all the nodes would have been stored in one great big virtual bucket. Any node could then find the TGT and issue an ST, so the Business Logic layer does not know or care which node issued a Ticket when it creates new tickets or validates existing ones. Furthermore, when using Ehcache, JBoss Cache, or Memcached, the tickets replicated to another node by serialization may be chained to their own private copy of the TGT that was sent from the other node by the Java serialization mechanism. So CAS doesn't really look too carefully at the source of the objects it processes.
The big difference with CushyTicketRegistry is that it keeps a separate Registry object for each node, and it treats the secondary Registries as read-only, at least from the business logic layer of this node. When another node has failed and the business logic layer calls the Registry to find a TGT, that TGT will be found in one of the secondary Registries. That is the only hint we have that there has been a node failure.
We still preserve the rule that new tickets are created in the primary (local) registry and are identified with a string that ends with this node's name. That part happens automatically in the business logic when it calls the locally configured Service Ticket unique ID generator, and when it calls addTicket() on the primary object.
In node failure mode, however, the new Service Ticket will have a Granting Ticket field that points to the TGT in the secondary object, that is in the Registry holding a copy of the tickets of the failed node.
If you have been paying attention, you will realize this is no big deal. Any serialized Service Ticket transmitted to another node with one of the JASIG Registry solutions will also be stored on the other node with its own private pointer to a copy of the original TGT, and the fact that the Granting Ticket field in the ST object points to an odd ticket that isn't really "in" the cache has never been a problem. The ST will still validate normally.
Of course, there is a race condition if, after an ST is issued for a TGT in a secondary Registry, the previously failed node starts back up, sends a Notify, and the secondary Registry gets refreshed with a new batch of tickets before the ST is validated. Java will recognize that while the old TGT is no longer in the ConcurrentHashMap of the secondary Registry, the ST still has a valid reference to it. A particularly aggressive cycle of Garbage Collection might delete all the other tickets from the snapshot of the old registry object, but it will leave that one TGT around as long as the ST points to it. When the ST is validated and deleted, the old copy of the TGT is released and can be destroyed when Java gets around to it. Again, an ST pointing to a TGT that is no longer in any Registry HashMap is normal behavior for JASIG replication, so it poses no problem here either.
The most interesting behavior occurs when a TGT in the secondary Registry of a failed node is used to create a Proxy Granting Ticket. Then the Proxy Granting Ticket is issued by and belongs to the node that created it, and the proxying application communicates with that node to get Service Tickets.
The thing that really changes is the handling of CAS Logoff. Fortunately, in normal practice nobody ever logs off of CAS. They just close their browser and let the TGT timeout. However, if someone were to call /cas/logoff during a node failure when they were logged in to another node, then the Business Logic layer will delete the CAS Cookie and report a successful logoff, but we cannot guarantee that CAS will do everything perfectly in the way it would have operated without the node failure.
Sorry, but if node failure screws up the complete correct processing of Single SignOut, that is simply a problem you will have to accept. Unless a node stays up to control its TGT and correctly fill in the Service collection with all the Services the user logged into, a decentralized recovery system like this cannot globally manage those services.
There is another problem that probably doesn't matter but which should be mentioned. If a node tries to handle Single SignOut on its own during a node failure, and then the failed node comes back, the failed node will restore the TGT that the user just logged out of. Many hours later that TGT will time out, and the original node will then try to notify all the Services that a logout has occurred. So services may get two logout messages from CAS for the same login. It is almost impossible to imagine a service that will be bothered by this behavior.
Node Recovery
After a failure, the node comes back up and restores its registry from the files in the work directory.
At some point the front end notices the node is back and starts routing requests to it based on the node name in the suffix of CAS Cookies. The node picks up where it left off. It does not know, and cannot learn, about any Service Tickets issued on behalf of its logged-in users by other nodes during the failure. It does not know about users who logged out of CAS during the failure.
Cushy defines its support of Single SignOut to be a "best effort" that does not guarantee perfect behavior across a node failure.