  • Recover tickets after reboot without JPA, or a separate server, or a cluster (works on a standalone server)
  • Recover tickets after a crash, except for the last few seconds of activity that did not get to disk.
  • No dependency on any large external library. Pure Java using only the standard Java SE runtime and some Apache commons stuff.
  • All source in one class. A Java programmer can read it and understand it.
  • Can also be used to cluster CAS servers
  • Cannot crash CAS ever, no matter what is wrong with the network or other servers.
  • A completely different and simpler approach to the TicketRegistry. Easier to work with and extend.
  • Uses more CPU and network I/O than other TicketRegistry solutions, but it has a constant, predictable overhead that you can measure and verify is trivial.

CAS is a Single SignOn solution. Internally the function of CAS is to create, update, and delete a set of objects it calls "Tickets" (a word borrowed from Kerberos). A Logon Ticket (TGT) object is created to hold the Netid when a user logs on to CAS. A partially random string is generated to be the logon ticket-id; it is sent back to the browser as a cookie and is also used as a "key" to locate the logon ticket object in a table. Similarly, CAS creates Service Tickets (ST) to identify a user to an application that uses CAS authentication.
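The ticket table described above can be sketched in a few lines. This is a minimal illustration, not CAS source: the class and method names are invented, and a String stands in for the real Ticket object.

```java
import java.security.SecureRandom;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the core idea: tickets live in an in-memory table keyed by a
// partially random ticket-id string (hypothetical names, not CAS source).
public class TicketTableSketch {
    static final Map<String, String> registry = new ConcurrentHashMap<>();
    static final SecureRandom random = new SecureRandom();

    // Create a "logon ticket" holding the Netid; return its generated id.
    static String addTicketGrantingTicket(String netid) {
        String id = "TGT-" + Math.abs(random.nextLong());
        registry.put(id, netid); // the id is also sent to the browser as a cookie
        return id;
    }

    // Later requests present the id (from the CASTGC cookie) as the lookup key.
    static String getNetid(String ticketId) {
        return registry.get(ticketId);
    }

    public static void main(String[] args) {
        String tgtId = addTicketGrantingTicket("jsmith");
        System.out.println(getNetid(tgtId)); // prints jsmith
    }
}
```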

...

Four years ago Yale implemented a "High Availability" CAS cluster using JBoss Cache to replicate tickets. After that, the only CAS crashes were caused by failures of JBoss Cache. Red Hat failed to diagnose or fix the problem. As we tried to diagnose the problem ourselves we discovered both bugs and design problems in the structure of Ticket objects and the use of the TicketRegistry solutions that contributed to the failure. We considered replacing JBoss Cache with Ehcache, but there is a more fundamental problem here. It should not be possible for any failure of the data replication mechanism to crash all of the CAS servers at once. Another choice of cache might be more reliable, but it would suffer from the same fundamental structural problem.

All of the previous CAS cluster solutions create a common pool of tickets shared by all of the cluster members. They are designed and configured so that the Front End can distribute requests in a round-robin approach and any server can handle any request. However, once the Service Ticket is returned by one server, the request to validate the ST comes back in milliseconds. So JPA must write the ST to the database, and Ehcache must synchronously replicate the ST to all the other servers, before the ST ID is passed back to the browser. Synchronous replication was the option that exposed CAS to crashing if the replication system had problems, and it imposed a severe performance constraint, requiring all the CAS servers to be connected by very high speed networking.

Disaster recovery and very high availability suggest that at least one CAS server should be kept at a distance, independent of the machine room, its power supply, and support systems. So there is tension between performance considerations, which keep servers close together, and recovery considerations, which keep them far apart.

CAS was designed with the ability to add a node identifier at the end of every generated ticketid. This capability is not widely used, because it has no particular purpose and is rather difficult to configure. CushyClusterConfiguration makes it easy to configure, and modern Front End programming or the CushyFrontEndFilter use this capability to improve CAS performance and increase reliability.

Ten years ago, when CAS was being designed, the Front End that distributed requests to members of the cluster was typically an ordinary computer running simple software. Today networks have become vastly more sophisticated, and Front End devices are specialized machines with powerful software. They are designed to detect and fend off Denial of Service Attacks. They improve application performance by offloading SSL/TLS processing. They can do "deep packet inspection" to understand the traffic passing through and route requests to the most appropriate server (called "Layer 5-7 Routing" because requests are routed based on higher level protocols rather than just IP address or TCP session). Although this new hardware is widely deployed, CAS clustering has not changed and has no explicit option to take advantage of it.

Front End devices know many protocols and a few common server conventions. For everything else they expose a simple programming language. While CAS performs a Single Sign On function, the logic is actually designed to create, read, update, and delete tickets. The ticketid is the center of each CAS operation. In different requests there are only three places to find the ticketid that defines this operation:

  1. In the ticket= parameter at the end of the URL for validation requests.
  2. In the pgt= parameter for a proxy request.
  3. In the CASTGC Cookie for browser requests.

Programming the Front End to know that "/validate", "/serviceValidate", and two other strings in the URL path mean case 1, that "/proxy" means case 2, and that everything else is case 3 is pretty simple. If you cannot program your Front End, then CushyFrontEndFilter does the same job in Java, although this will occasionally add an extra network hop.

Having been burned by software so complicated that the configuration files were almost impossible to understand, Cushy was developed to accomplish the same thing in a way so simple it could not possibly fail.

The existing CAS TicketRegistry solutions must be configured to replicate tickets to the other nodes and to wait for this activity to complete, so that any node can validate a Service Ticket that was just generated a few milliseconds ago. Waiting for the replication to complete is what makes CAS vulnerable to a crash if the replication begins but never completes. Synchronous ticket replication is a standard service provided by JBoss Cache and Ehcache, but is it the right way to solve the Service Ticket validation problem? A few minutes spent crunching the math suggested there was a better way.

It is easier and more efficient to send the request to the node that already has the ticket and can process it rather than struggling to get the ticket to every other node in advance of the next request.

In the current TicketRegistry implementations, any request in a cluster to create a Service Ticket must replicate the service ticket to at least one other computer (the database server in JPA, one or more nodes using Ehcache or any other ticket replication mechanism) before the Service Ticket ID is returned to the browser. This ensures that the Service Ticket can be validated by any node to which the application's validation request is directed. After validation, there is a second network transaction to delete the ticket. So every ST involves two backend synchronous operations.

However, it has always been part of CAS that every ticketid has a suffix that, at least on paper, can contain the node name of the CAS server that created the ticket. Using this feature in practice requires some node configuration methodology. Once this is done, any validate request (for example, any call to /cas/serviceValidate) contains a ticket= parameter in the query string part of the URL, and the end of the value of that parameter designates the node that created the ticket. Today you can program most modern network front end devices to extract this information from the request and route the validate request to the node that created the ticket and is guaranteed to have it in memory. If you cannot program your front end device, or if you cannot convince your network administrators to do the work for you, then CushyFrontEndFilter accomplishes the same thing by scanning requests as they arrive at a CAS server and forwarding requests like validation to the server that created the ticket. If you have two servers and requests are randomly assigned to them, then 50% of the time the request goes to the right server and there is no network transaction, and 50% of the time the request has to be forwarded by the Filter to the other server, which validates the ST, deletes it, and returns the response. So with the Filter you expect, on average, one network transaction half the time instead of, with current JPA or cache technology, two network transactions every time. When the number of nodes in the cluster is more than 2, the Filter works even better.
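The suffix extraction described above is trivial to code. The sketch below assumes, as this page describes, that the node name is configured as the final dash-separated piece of the ticket id (e.g. "ST-42-aBcDeF-casnode2"); the real id layout depends on how uniqueIdGenerators.xml is configured, and the method name here is illustrative.

```java
// Sketch: pull the node-identifying suffix off the end of a ticket id,
// assuming the suffix follows the last '-' (a configuration assumption).
public class TicketSuffix {
    static String nodeFor(String ticketId) {
        int dash = ticketId.lastIndexOf('-');
        return (dash < 0) ? null : ticketId.substring(dash + 1);
    }

    public static void main(String[] args) {
        System.out.println(nodeFor("ST-42-aBcDeF-casnode2")); // prints casnode2
    }
}
```

A front end iRule or the CushyFrontEndFilter does essentially this, then routes the request to the named node.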

CushyFrontEndFilter works with Ehcache or CushyTicketRegistry. When added to Ehcache you can change the cache configuration so that the Service Ticket cache does not use synchronous replication, or even better you can turn off replication entirely for the Service Ticket cache because every 10 seconds a Service Ticket is either used and discarded or else times out, so it makes no sense to replicate them at all if the front end or filter routes requests properly.

However, once you come up with the idea of using front end routing to avoid synchronous ticket replication (which was the source of crashes in JBoss Cache at Yale), some new, more radical changes to the TicketRegistry become possible. In addition to the various validate requests, you can route the /proxy request to the node that owns the Proxy Granting Ticket, and you can route new Service Ticket requests to the node that issued the Ticket Granting Ticket (based on the suffix of the CASTGC cookie). Now a basic principle of all the existing ticket registry designs is no longer necessary. CAS Ticket objects do not have to be stored in what appears to be a common shared pool. Tickets can be segregated into separate collections based on the identity of the node that created and "owns" the ticket.

"Cushy" stands for "Clustering Using Serialization to disk and Https transmission of files between servers, written by Yale". This summarizes what it is and how it works.

For objects to be replicated from one node to another, libraries use the Java writeObject statement to "serialize" the object to a stream of bytes that can be transmitted over the network and then restored in the receiving JVM. Ehcache and JBoss Cache use writeObject on individual tickets (although it turns out they also end up serializing copies of all the other objects the ticket points to, including the TGT when attempting to replicate an ST). However, writeObject can operate just as well on the entire contents of the TicketRegistry. Making a "checkpoint" copy of the entire collection of tickets to disk (at shutdown, for example) and then restoring this collection (after a restart) is very simple to code. Since Java does all the work, it is guaranteed to behave correctly. It is a useful additional function. However, you can be more aggressive in the use of this approach, and that suggests the design of an entirely different type of TicketRegistry.

Start with the DefaultTicketRegistry source that CAS uses to hold tickets in memory on a single CAS standalone server. Then add the writeObject statement (surrounded by the code to open and close the file) to create a checkpoint copy of all the tickets, and a corresponding readObject and surrounding code to restore the tickets to memory. The first thought was to do the writeObject to a network socket, because that was what all the other TicketRegistry implementations were doing. Then it became clear that it was simpler, and more generally useful, and a safer design, if the data was first written to a local disk file. The disk file could then optionally be transmitted over the network in a completely independent operation. Going first to disk created code that was useful for both standalone and clustered CAS servers, and it guaranteed that the network operations were completely separated from the Ticket objects and therefore the basic CAS function.
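The checkpoint idea reduces to one writeObject call and one readObject call. The sketch below is not the Cushy source: the class name is invented and Strings stand in for real CAS Ticket objects, but the serialization pattern is exactly the one described above.

```java
import java.io.*;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the Cushy checkpoint idea: one writeObject saves the whole
// ticket table to a local file; one readObject restores it after a restart.
public class CheckpointSketch {
    static void checkpoint(ConcurrentHashMap<String, String> tickets, File file)
            throws IOException {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(tickets); // Java serializes the entire collection
        }
    }

    @SuppressWarnings("unchecked")
    static ConcurrentHashMap<String, String> restore(File file)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                 new ObjectInputStream(new FileInputStream(file))) {
            return (ConcurrentHashMap<String, String>) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        ConcurrentHashMap<String, String> tickets = new ConcurrentHashMap<>();
        tickets.put("TGT-1-casnode1", "jsmith");
        File f = File.createTempFile("cushy", ".ser");
        checkpoint(tickets, f);
        System.out.println(restore(f).get("TGT-1-casnode1")); // prints jsmith
    }
}
```

Because the file is written first and shipped later, the network step is completely independent of the ticket objects, as the paragraph above notes.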

...

The number of tickets CAS holds grows during the day and shrinks over night. At Yale there are fewer than 20,000 ticket objects in CAS memory, and Cushy can write all those tickets to disk in less than a second, generating a file around 3 megabytes in size. Other numbers of tickets scale proportionately (you can run a JUnit test and generate your own numbers). This is such a small amount of overhead that Cushy can be proactive.

CAS is a very important application, but on modern hardware it is awfully small and cheap to run. Since it was first developed there have been at least 5 generations of new chip technology that now run what was never a big application to begin with.

So to take the next logical step, start with the previous ticketRegistry.xml configuration and duplicate the XML elements that currently call a function in the RegistryCleaner every few minutes. In the new copy of the XML elements, call the "timerDriven" function in the (Cushy)ticketRegistry bean every few minutes. Now Cushy will not wait for shutdown but will back up the ticket objects regularly just in case the CAS machine crashes without shutting down normally. When CAS restarts after a crash, it can load a fairly current copy of the ticket objects which will satisfy the 99.9% of the users who did not login in the last minutes before the crash.
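One way to express that wiring is sketched below. This is illustrative, not the shipped ticketRegistry.xml: older CAS releases used Quartz-style timer beans for the RegistryCleaner, the bean id and the 5-minute interval here are assumptions, and only the "timerDriven" method name comes from this page.

```xml
<!-- Hypothetical sketch using the Spring task namespace: call the Cushy
     registry's timerDriven method every 5 minutes, mirroring the existing
     RegistryCleaner timer configuration. Bean id and interval are illustrative. -->
<task:scheduled-tasks>
    <task:scheduled ref="ticketRegistry" method="timerDriven" fixed-delay="300000"/>
</task:scheduled-tasks>
```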

...

It seems to be more reliable to configure each node to know the name and URL of all the other machines in the same cluster. However, a node specific configuration file on each machine is difficult to maintain and install. You do not want to change the CAS WAR file when you distribute it to each machine, and Production Services wants to churn out identical server VMs with minimal differences.

In the 1980's before the internet, 500 universities worldwide were connected by BITNET. The technology required a specific local configuration file for each campus, but maintaining 500 different configurations was impossible. So they created a single global file that defined the entire network from no specific point of view, and a utility program that, given the identity of a campus somewhere in the network, could translate that global file to the configuration data that campus needed to install to participate in the network. CushyClusterConfiguration does the same thing for your global definition of many CAS clusters.

CushyClusterConfiguration (CCC) provides an alternative approach to cluster configuration, and while it was originally designed for CushyTicketRegistry it also works for Ehcache. Instead of defining the point of view of each individual machine, the administrator defines all of the CAS servers in all of the clusters in the organization: Production, Functional Test, Load Test, Integration Test, down to the developers' desktop or laptop "Sandbox" machines.

CCC is a Spring Bean that is specified in the CAS Spring XML. It only has a function during initialization. It reads in the complete set of clusters, uses DNS (or the hosts file) to obtain information about each CAS machine referenced in the configuration, uses Java to determine the IP addresses assigned to the current machine, and then tries to match one of the configured machines to the current computer. When it finds a match, that configuration defines this CAS, and the other machines in the same cluster definition can be used to automatically configure Ehcache or CushyTicketRegistry.

CCC exports the information it has gathered and the decisions it has made by defining a number of properties that can be referenced using the "Spring EL" language in the configuration of properties and constructor arguments for other Beans. This obviously includes the TicketRegistry, but the ticketSuffix property can also be used to define a node specific value at the end of the unique ticketids generated by beans configured by the uniqueIdGenerators.xml file.
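A sketch of such a reference is below. The "clusterConfiguration" bean id is an assumption, and the constructor arguments follow the CAS 3 uniqueIdGenerators.xml pattern; only the ticketSuffix property name comes from this page.

```xml
<!-- Illustrative: feed the CCC-computed node suffix into a ticket id
     generator via Spring EL. Bean ids here are assumptions. -->
<bean id="ticketGrantingTicketUniqueIdGenerator"
      class="org.jasig.cas.util.DefaultUniqueTicketIdGenerator">
    <constructor-arg index="0" value="50"/>
    <constructor-arg index="1" value="#{clusterConfiguration.ticketSuffix}"/>
</bean>
```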

There is a separate page to explain the design and syntax of CCC.

Front End or CushyFrontEndFilter

If the Front End can be programmed to understand CAS protocol, to locate the ticketid, to extract the node identifying suffix from the ticketid, and to route requests to the CAS server that generated the ticket, then CAS does not have to wait for each Service Ticket ID to be replicated around the cluster. This is much simpler and more efficient, and the Cushy design started by assuming that everyone would see that this is an obviously better idea.

Unfortunately, it became clear that people in authority frequently had a narrow view of what the Front End should do, and that was frequently limited to the set of things the vendor pre-programmed into the device. Furthermore, there was some reluctance to depend on the correct functioning of something new no matter how simple it might be.

So with another couple of days' programming (much of it spent understanding the multithreaded SSL session pooling support in the latest Apache HttpClient code), CushyFrontEndFilter was created. The idea here was to code in Java the exact same function that was better performed by an iRule in the BIG-IP F5 device, so that someone would be able to run all the Cushy programs even if he was not allowed to change his own F5.

Front End devices know many protocols and a few common server conventions. For everything else they expose a simple programming language. The Filter contains the same logic written in Java.

We begin by assuming that the CAS cluster has been configured by CushyClusterConfiguration or its equivalent, and that one part of configuring the cluster was to create a unique ticket suffix for every node and feed that value to the beans configured in the uniqueIdGenerators.xml file.

After login, the other CAS requests all operate on tickets. They generate Service Tickets and Proxy Granting Tickets, validate tickets, and so on. The first step is to find the ticket that is important to this request. There are only three places to find the ticketid that defines an operation:

  1. In the ticket= parameter at the end of the URL for validation requests.
  2. In the pgt= parameter for a proxy request.
  3. In the CASTGC Cookie for browser requests.

A validate request is identified by having a particular "servletPath" value ("/validate", "/serviceValidate", "/proxyValidate", "/samlValidate"). The Proxy request has a different path ("/proxy"). Service Ticket create requests come from a browser that has a CASTGC cookie. If none of the servletPath values match and there is no cookie, then this request is not related to a particular ticket and can be handled by any CAS server.

If you program this into the Front End, then the request goes directly to the right server without any additional overhead. With only the Filter, a request goes to some randomly chosen CAS Server which may have to forward the request to another server, forward back the response, and handle failure if the preferred server goes down.
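The classification logic the Filter performs can be sketched in a few lines of Java. The method and class names below are illustrative, not the actual CushyFrontEndFilter API.

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the routing decision described above (hypothetical names).
public class RouteSketch {
    static final List<String> VALIDATE_PATHS = Arrays.asList(
        "/validate", "/serviceValidate", "/proxyValidate", "/samlValidate");

    // Returns which parameter or cookie holds the ticket id for this request,
    // or null when any CAS server can handle it (e.g. an initial login).
    static String ticketSource(String servletPath, boolean hasCastgcCookie) {
        if (VALIDATE_PATHS.contains(servletPath)) return "ticket"; // case 1
        if ("/proxy".equals(servletPath))         return "pgt";    // case 2
        if (hasCastgcCookie)                      return "CASTGC"; // case 3
        return null;
    }

    public static void main(String[] args) {
        System.out.println(ticketSource("/serviceValidate", false)); // prints ticket
        System.out.println(ticketSource("/login", false));           // prints null
    }
}
```

Once the ticket id is located, the node suffix at its end determines the destination server.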

There is a separate page to describe Front End programming for CAS.

CushyTicketRegistry and a CAS Cluster

...

Notify is only done every few minutes when there is a new checkpoint. Incrementals are generated all the time, but they are not announced. Each server is free to poll the other servers periodically to fetch the most recent incremental with the /cas/cluster/getIncremental request (adding the dummyServiceTicketId to prove it is authorized to read the data).

CAS is a high security application, and it always has been. The best way to avoid introducing a security problem is to model the design of each new feature on something CAS already does, and then just do it the same way.

Since these node to node communication calls are modeled on existing CAS Service Ticket validation and Proxy Callback requests, they are configured into CAS in the same place (in the Spring MVC configuration, details provided below).

Note: Yes, this sort of thing can be done with GSSAPI, but after looking into configuring Certificates or adding Kerberos, it made sense to keep it simple and stick with the solutions that CAS was already using to solve the same sort of problems in other contexts.

Are You Prepared?

Everything that can go wrong will go wrong. We plan for hardware and software failure, network failure, and disaster recovery. To do this we need to know how things will fail and how they will recover from each type of problem.

...

CushyClusterConfiguration will configure either EhcacheTicketRegistry or CushyTicketRegistry, so neither is easier to configure than the other.

Although the default configuration of Ehcache uses synchronous replication for Service Tickets, if you program the Front End (or add the CushyFrontEndFilter) to a CAS using Ehcache in the same way described for CushyTicketRegistry, then ST validation requests will go to the CAS server that created the ST, and you can use the same lazy asynchronous replication for Service Tickets that Ehcache is normally configured to use for Logon Tickets (TGTs).


CushyFrontEndFilter works for both Ehcache and CushyTicketRegistry, so any benefits there can apply equally to both systems if you reconfigure Ehcache to exploit them.

With Front End support, every 10 seconds or so Ehcache replicates all the tickets that have changed in the last 10 seconds, while Cushy transmits a file with all of the ticket changes since the last full checkpoint. Then every few minutes Cushy generates a full checkpoint that Ehcache does not use. So Ehcache transmits a lot less data. However, the cost of transmitting the extra data is so low that this may not matter if Cushy provides extra function. Ehcache is a closed system that operates inside the CAS servers and exposes no external features, while Cushy generates checkpoint and incremental files that exist outside CAS.

Ehcache uses RMI and does not seem to have any security, so it depends on the network Firewall and the good behavior of other computers in the machine room. Cushy encrypts data and verifies the identity of machines, so it cannot be attacked even from inside the Firewall.

Cushy generates regular files on disk that can be copied using any standard commands, scripts, or utilities. This provides new disaster recovery options.

Ehcache is designed to be a "cache". That is, it is designed to be a high speed, in-memory or local disk, copy of some data whose authoritative source lives on some server. That is why it has so much configuration for "LRU" and object eviction: it assumes that lost objects can be reloaded from persistent storage. You can use it as a replicated in-memory table, but if you read the documentation you will understand that that is not its original design. Cushy is specifically designed to be a CAS TicketRegistry. That is the only thing it does, and it is very carefully designed to do that job correctly.

Cushy models its design on two 40 year old concepts. A common strategy for backing disks up to tape was to do a full backup of all the files once a week, and then during the week to do an incremental backup of the files changed since the last backup. The term "checkpoint" derives from a disk file into which an application saved all its important data periodically so it could restore that data and pick up where it left off after a system crash. These strategies work because they are too simple to fail. More sophisticated algorithms may accomplish the same result with less processing and I/O, but the more complex the logic, the more vulnerable you become if a software, hardware, or network failure occurs in a way that the complex, sophisticated software did not anticipate.

Ehcache is a large library of complex code designed to merge changes to shared data across multiple hosts. Cushy is a single source file of pure Java written to be easily understood.

Basic Principles

...

Java written to be easily understood.

Replicating the entire TicketRegistry instead of just replicating individual tickets is less efficient. The amount of overhead is predictable and you can verify that the extra overhead is trivial. However, remember this is simply the original Cushy 1.0 design which was written to prove a point and is aggressively "in your face" pushing the idea of "simplicity over efficiency". After we nail down all the loose ends, it is possible to add a bit of extra optimization to get arbitrarily close to Ehcache in terms of efficiency.

Ticket Chains (and Test Cases)

...

What Cushy Does at Failure

It is not necessary to explain how Cushy runs normally. It is based on DefaultTicketRegistry. It stores the tickets in a table in memory. If you have a cluster, each node in the cluster operates as if it were a standalone server and depends on the Front End to route requests to the node that can handle them.

Separately from the CAS function, Cushy periodically writes some files to a directory on disk. They are ordinary files. They are protected with ordinary operating system security.

In a cluster, the files can be written to a shared disk, or they can be copied to a shared location or from node to node by an independent program that has access to the directories. Or, Cushy will replicate the files itself using HTTPS GET requests.

A failure is detected when a request is routed by the Front End to a node other than the node that created the ticket.

Because CAS is a relatively small application that can easily run on a single machine, a "cluster" can be configured in either of two ways:

  • A Primary server gets all the requests until it fails. Then a Backup "warm spare" server gets requests. If the Primary comes back up relatively quickly, then Cushy will work best if the Front End resumes routing all requests to the Primary as soon as it becomes available again.
  • Users are assigned to login to a CAS Server on a round-robin or load balanced basis. After a user logs in, the suffix on the login, proxy, or service tickets in the URL or headers of an HTTP request route the request to that server. 

Each CAS server in the cluster has a shadow object representing the TicketRegistry of each of the other nodes. In normal operation the CAS nodes exchange checkpoint and incremental files but they do not restore objects from those files to memory. This is called "Tickets On Request". The first time a request arrives for a ticket owned by another node, the getTicket request restores tickets into memory from the files for that node.
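"Tickets On Request" is a simple lazy-load pattern. The sketch below is illustrative, not the Cushy source: the class and method names are invented, and the Supplier stands in for deserializing the other node's checkpoint and incremental files.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of "Tickets On Request": the shadow registry for another node stays
// empty until the first request for one of that node's tickets arrives.
public class ShadowRegistrySketch {
    private Map<String, String> tickets = null; // not yet restored
    private final java.util.function.Supplier<Map<String, String>> loader;

    ShadowRegistrySketch(java.util.function.Supplier<Map<String, String>> loader) {
        this.loader = loader; // stands in for reading the node's files
    }

    synchronized String getTicket(String id) {
        if (tickets == null) {
            tickets = loader.get(); // first request: restore from the files
        }
        return tickets.get(id);
    }

    public static void main(String[] args) {
        ShadowRegistrySketch shadow = new ShadowRegistrySketch(() -> {
            Map<String, String> m = new ConcurrentHashMap<>();
            m.put("TGT-9-casnode1", "jsmith");
            return m;
        });
        System.out.println(shadow.getTicket("TGT-9-casnode1")); // prints jsmith
    }
}
```

In normal operation the loader is never invoked, so exchanging files costs nothing beyond the file transfer itself.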

However, every new ticket Cushy creates belongs to the node that created it. During a node failure, the new Service Tickets or Proxy Granting Tickets created for users logged into the failed node are created by and belong to the backup node. They each get a ticket ID that has the suffix of the backup node. They live forever in the Ticket Registry of the backup node. They just happen to be associated with and point to a TGT in the shadow registry on the backup node associated with the failed login node.

So while the failed node is down, and even after it comes back up again, requests associated with tickets created by the backup node are routed to the backup node by the Front End. However, after the failed node comes back, new requests for new tickets associated with the login TGT go back to being processed by the original node. While other TicketRegistry solutions combine tickets from all the nodes, a Cushy cluster operates as a group of standalone CAS servers. The Front End or the Filter routes requests to the server that can handle them. So when everything is running fine, the TicketRegistry that CAS uses is basically the same as the DefaultTicketRegistry module that works on standalone servers.

So the interesting things occur when one server goes down or when network connectivity is lost between the Front End and a node, or between one node and another.

If a node fails, or the Front End cannot get to it and thinks it has failed, then requests start to arrive at CAS nodes for tickets that they do not own and did not create. File sharing or replication gives every node a copy of the most recent checkpoint and incremental file from that node, but normally the strategy of "Tickets on Request" does not open or process the files until they are needed. So the first request restores all the tickets for the other node to memory under the Secondary TicketRegistry object created at initialization to represent the failed node.

Since the rule is that the other node "owns" its own tickets, you cannot make any permanent changes to the tickets in the Secondary Registry. These tickets will be passed back as needed to the CAS Business Logic layer, and it will make changes as part of its normal processing thinking that the changes it makes are meaningful. In reality, when the other node comes back it will reload its tickets from the point of failure and that will be the authoritative collection representing the state of those tickets. In practice this doesn't actually matter.

If CAS on this node creates a new Service Ticket or Proxy Granting Ticket related to a Login TGT created originally by the other node, then the new Ticket belongs to the node that created it, and that node's identifier is added to the end of the ticket ID. So the new ST is owned by and is validated by this node, even though the Login TGT used to create it comes from the Secondary Registry of the failed node.

Service Tickets are created and then in a few milliseconds they are deleted when the application validates them or they time out after a few seconds or minutes. They do not exist long enough to raise any issues.

Proxy Granting Tickets, however, can remain around for hours. So the one long term consequence of a failure is that the login TGT can be on one server, but a PGT can be on a different server that created it while the login server was temporarily unavailable. The PGT ends up with its own private copy of the TGT, frozen in time at the moment the PGT was created. Remember, this is normal behavior for all existing TicketRegistry solutions, and none of the other TicketRegistry options will ever "fix" this situation. At least Cushy is aware of the problem, and with a few fixes to the Ticket classes Cushy 2.0 might be able to do better.

There is also an issue with Single Sign Out. If a user logs out during a failure of his login server, then a backup server processes the Single Log Out normally. Then when the login server is restored to operation, the Login TGT is restored from the checkpoint file into memory. Of course, no browser now has a Cookie pointing to that ticket, so it sits unused all day and then in the evening it times out, a second Single Sign Out process is triggered, and all the applications that previously were told the user logged out are contacted a second time with the same logout information. It is almost unimaginable that any application would be written so badly that it would care about this, but it should be mentioned.

While the login server is down, new Service Tickets can be issued, but they cannot be meaningfully added to the "services" table in the TGT that drives Single Sign Out. After the login server is restored, if the user logs out to CAS the only applications that will be notified of the logout will be applications that received their Service Tickets from the logon server. Cushy regards Single Sign Out as a "best effort" service and cannot at this time guarantee processing for ST's issued during a node or network failure.

Again, Cushy 2.0 may address this problem.

Cushy CAS Cluster

In this document a CAS "cluster" is just a bunch of CAS server instances that are configured to know about each other. The term "cluster" does not imply that the Web servers are clustered in the sense that they share Session objects (JBoss "clustering"). Nor does it depend on any other type of communication between machines. In fact, a Cushy CAS cluster could be created from a CAS running under Tomcat on Windows and one running under JBoss on Linux.

To the outside world, the cluster typically shares a common virtual URL simulated by the Front End device. At Yale, CAS is "https://secure.its.yale.edu/cas" to all the users and applications. The "secure.its.yale.edu" DNS name is associated with an IP address managed by the BIG-IP F5 device. It holds the certificate, terminates the SSL, then examines requests and based on programming called iRules it forwards requests to any of the configured CAS virtual machines.

Each virtual machine has a native DNS name and URL. It is these "native" URLs that define the cluster because each CAS VM has to use the native URL to talk to another CAS VM. At Yale those URLs follow a pattern of "https://vm-foodevapp-01.web.yale.internal:8443/cas". 

Internally, Cushy configuration takes a list of URLs and generates a cluster definition with three pieces of data for each cluster member: a nodename like "vmfoodevapp01" (the first element of the DNS name with dashes removed), the URL, and the ticket suffix that identifies that node (the F5 prefers the ticket suffix to be an MD5 hash of the IP address of the VM).
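Deriving the F5-friendly ticket suffix can be sketched with the standard Java MessageDigest API. This is a minimal sketch under the assumption that the suffix is the lowercase hex MD5 digest of the VM's IP address string; the class and method names are illustrative, not Cushy's actual code.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.nio.charset.StandardCharsets;

// Sketch: compute a node's ticket suffix as the hex MD5 hash of its
// IP address, which is the form the F5 prefers. Names are illustrative.
public class TicketSuffix {

    public static String md5Suffix(String ipAddress) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(ipAddress.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b)); // two lowercase hex chars per byte
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 not available", e); // never happens on Java SE
        }
    }
}
```

The resulting 32-character hex string is stable for a given VM, so the F5 can route a request to the node whose suffix appears in the ticket ID.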

There are a few types of network failure that work differently from node failure.

If one CAS node is unable to connect to another CAS node for a while, even though the other node is up, then it marks the other node as being "unhealthy" and waits patiently for the other node to send a /cluster/notify. The other node will send a Notify every time it generates a new Checkpoint, and when one of those Notify messages gets through then the two nodes will reestablish communication.

If the Front End is unable to get to a CAS Node, but the other server can get to it, then what happens next depends on whether the CushyFrontEndFilter is also installed. Having both the programmed Front End and also the Filter is a bit like suspenders and a belt, but if the Front End is doing its job then the Filter has nothing to do. However, in this particular case the Filter will see a request for a ticket owned by another node and will attempt to forward it to the node indicated in the request. If it succeeds then CAS has automatically routed traffic around the point of failure. However, remember that if the node actually goes down then there will be two connect timeout delays, one where the Front End determines the node is down and then a second where the Filter verifies that it is down.

Without the Filter then the current node receives a request for a ticket it does not own, loads tickets into its Secondary Registry for that node, and processes the request. What is different is that if the node is really up and the two nodes can connect, then this CAS node will continue to receive Notify requests and new checkpoint and incremental files from the other node even as it is also processing requests for that node sent to it by the Front End. Cushy is designed to handle this situation (because even in a normal failure the other node can come up just as you are in the middle of handling a request for it).

Configuration

In CAS the TicketRegistry is configured using the WEB-INF/spring-configuration/ticketRegistry.xml file.

...

Then add a second timer driven operation to the end of the file to call the "timerDriven" method of the CushyTicketRegistry object on a regular basis (say once every 10 seconds) to trigger writing the checkpoint and incremental files.
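A timer definition of that sort might look like the following Spring fragment. This is a sketch only: the bean id, the package name of the registry class, and the use of the Spring task namespace are assumptions; only the method name "timerDriven" and the 10-second interval come from the text above.

```xml
<!-- Illustrative sketch, not the exact Cushy configuration.
     Calls timerDriven() on the ticketRegistry bean every 10 seconds
     to trigger writing the checkpoint and incremental files. -->
<task:scheduler id="ticketRegistryScheduler"/>
<task:scheduled-tasks scheduler="ticketRegistryScheduler">
    <task:scheduled ref="ticketRegistry" method="timerDriven" fixed-delay="10000"/>
</task:scheduled-tasks>
```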

There is a separate page that describes CushyClusterConfiguration in detail.

 

You Can Configure Manually

Although CushyClusterConfiguration makes most configuration problems simple and automatic, if it does the wrong thing and you don't want to change the code you can ignore it entirely. As will be shown in the next section, there are three properties (a string and two Properties tables) that are input to the CushyTicketRegistry bean. The whole purpose of CushyClusterConfiguration is to generate values for these three parameters. If you don't like what it generates, you can use Spring to supply static values for these parameters, and you don't even have to use the clusterConfiguration bean.


Other Parameters

Typically in the ticketRegistry.xml Spring configuration file you configure CushyClusterConfiguration as a bean with id="clusterConfiguration" first, and then configure the usual id="ticketRegistry" using CushyTicketRegistry. The clusterConfiguration bean exports some properties that are used (through Spring EL) to configure the Registry bean.
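The wiring might look like the following fragment. This is a hedged sketch: the property names on both beans and the package names are assumptions chosen to match the "one string and two Properties tables" description, not the actual Cushy source.

```xml
<!-- Illustrative sketch only: property and package names are assumed. -->
<bean id="clusterConfiguration"
      class="org.jasig.cas.ticket.registry.CushyClusterConfiguration"/>

<!-- The registry pulls its three inputs from clusterConfiguration
     through Spring EL; static values could be substituted here instead. -->
<bean id="ticketRegistry"
      class="org.jasig.cas.ticket.registry.CushyTicketRegistry">
    <property name="nodeName"         value="#{clusterConfiguration.nodeName}"/>
    <property name="nodeNameToUrl"    value="#{clusterConfiguration.nodeNameToUrl}"/>
    <property name="suffixToNodeName" value="#{clusterConfiguration.suffixToNodeName}"/>
</bean>
```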

...