...

Proxy Granting Tickets, however, can remain around for hours. So the one long-term consequence of a failure is that the login TGT can be on one server while a PGT created during the outage lives on a different server. This requires some thought, but you should quickly realize that everything works correctly today. In future CAS releases there will be an issue if a user adds additional credentials (factors of authentication) to an existing login after a PGT is created. Without a failure, the PGT sees the new credentials immediately. With current Cushy logic, the PGT on the backup server is bound to a point-in-time snapshot of the original TGT and will not see the additional credentials. Remember, this only occurs after a CAS failure, and it only affects the users who got a Proxy ticket during the failure. It can be "corrected" if the end user logs out and then logs back into the middleware server. Cushy 2.0 will consider addressing this problem automatically. The PGT ends up with its own private copy of the TGT, frozen in time at the moment the PGT was created. Remember, this is normal behavior for all existing TicketRegistry solutions, and none of the other TicketRegistry options will ever "fix" this situation. At least Cushy is aware of the problem, and with a few fixes to the Ticket classes Cushy 2.0 might be able to do better.
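
The "frozen in time" behavior falls directly out of ordinary Java serialization: deserializing a checkpoint produces an independent copy of the object graph, so later changes to the live TGT are invisible to the copy. A minimal sketch of the semantics (the Tgt class here is a hypothetical stand-in, not a real CAS class):

    import java.io.*;
    import java.util.*;

    public class FrozenSnapshotDemo {
        // Hypothetical stand-in for a TGT: just a list of credential factors.
        static class Tgt implements Serializable {
            final List<String> factors = new ArrayList<String>();
        }

        // Serialize and immediately deserialize, as a checkpoint restore would.
        static Tgt roundTrip(Tgt live) throws IOException, ClassNotFoundException {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(buf);
            out.writeObject(live);
            out.close();
            ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(buf.toByteArray()));
            return (Tgt) in.readObject();
        }

        public static void main(String[] args) throws Exception {
            Tgt live = new Tgt();
            live.factors.add("password");

            Tgt snapshot = roundTrip(live); // the copy a PGT would be bound to

            live.factors.add("otp");        // factor added after the snapshot

            System.out.println(live.factors);     // [password, otp]
            System.out.println(snapshot.factors); // [password] - frozen copy
        }
    }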

There is also an issue with Single Sign Out. If a user logs out while his login server is down, a backup server processes the Single Log Out normally. Then when the login server is restored to operation, the login TGT is restored from the checkpoint file into memory. Of course, no browser now has a cookie pointing to that ticket, so it sits unused all day until it times out in the evening, at which point a second Single Sign Out process is triggered and all the applications that were previously told the user logged out are now contacted a second time with the same logout information. It is almost unimaginable that any application would be written so badly that it would care about this, but it should be mentioned.

While the login server is down, new Service Tickets can still be issued, but they cannot be meaningfully added to the "services" table in the TGT of the machine that is down. When that machine comes back up, it resumes control of the logged-in user's old TGT, and when the user logs off, Single Sign Out processing covers only the services that machine knows about, omitting services to which the user connected while the TGT's owner was down. Cushy provides a "best effort" Single Sign Out experience, and Cushy 1.0 cannot do better than this.

A few types of network failure behave differently from node failure.

If one CAS node is unable to connect to another node for a while, even though that node is up, it marks the other node as "unhealthy" and waits patiently for it to send a /cluster/notify. The other node sends a Notify every time it generates a new Checkpoint, and when one of those Notify messages gets through, the two nodes reestablish communication.
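
The bookkeeping on each node can be pictured as the following hypothetical sketch (class and method names are illustrative, not the actual Cushy source): a failed connection marks the peer unhealthy and suppresses further attempts, and the first incoming notify clears the flag.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch of per-peer health tracking, not Cushy code.
    public class PeerHealth {
        private final Map<String, Boolean> healthy = new ConcurrentHashMap<String, Boolean>();

        // Called when an attempt to contact a peer fails.
        public void markUnhealthy(String node) {
            healthy.put(node, Boolean.FALSE);
        }

        // While a peer is marked unhealthy, stop polling it and simply
        // wait for it to call our /cluster/notify endpoint.
        public boolean shouldContact(String node) {
            return healthy.getOrDefault(node, Boolean.TRUE);
        }

        // The peer sends a Notify each time it writes a new Checkpoint;
        // the first one that arrives reestablishes two-way communication.
        public void onNotifyReceived(String node) {
            healthy.put(node, Boolean.TRUE);
        }
    }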

If the Front End is unable to reach a CAS node but another server can, what happens next depends on whether CushyFrontEndFilter is also installed. Having both the programmed Front End and the Filter is a bit like wearing both a belt and suspenders: if the Front End is doing its job, the Filter has nothing to do. In this particular case, however, the Filter will see a request for a ticket owned by another node and will attempt to forward it to the node indicated in the request. If it succeeds, CAS has automatically routed traffic around the point of failure. Remember, though, that if the node actually goes down there will be two connect-timeout delays: one while the Front End determines the node is down, and a second while the Filter verifies that it is down.
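
The routing decision described above might be sketched like this (the suffix-parsing and forwarding helpers are hypothetical placeholders; only the decision logic from the text is represented):

    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletRequest;

    // Illustrative sketch only, not the real CushyFrontEndFilter source.
    public class FrontEndFilterSketch implements Filter {
        private String localNodeSuffix; // configured per node

        public void init(FilterConfig config) {
            localNodeSuffix = config.getInitParameter("nodeSuffix");
        }

        public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest http = (HttpServletRequest) req;
            String ticket = http.getParameter("ticket");
            String owner = ownerSuffixOf(ticket); // hypothetical: parse owning node

            if (owner == null || owner.equals(localNodeSuffix)) {
                chain.doFilter(req, resp);        // our own ticket: handle locally
            } else if (tryForward(owner, http, resp)) {
                return;                           // routed around the point of failure
            } else {
                chain.doFilter(req, resp);        // owner really down: handle locally
            }
        }

        private String ownerSuffixOf(String ticket) { /* hypothetical */ return null; }

        private boolean tryForward(String node, HttpServletRequest req, ServletResponse resp) {
            /* hypothetical HTTP proxying to the owning node */ return false;
        }

        public void destroy() { }
    }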

Without the Filter, the current node receives a request for a ticket it does not own, loads that node's tickets into its Secondary Registry, and processes the request. What is different here is that if the other node is really up and the two nodes can connect, this CAS node continues to receive Notify requests and new checkpoint and incremental files from the other node even as it processes the requests the Front End sends it on that node's behalf. Cushy is designed to handle this situation (even in a normal failure, the other node can come back up just as you are in the middle of handling a request for it).
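
The fallback path without the Filter amounts to a lookup that falls through to a per-node secondary map, along these lines (again a hypothetical sketch, not Cushy source):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch of primary/secondary registry lookup.
    public class RegistrySketch {
        private final Map<String, Object> primary = new ConcurrentHashMap<String, Object>();

        // One secondary map per peer node, rebuilt from that node's
        // checkpoint and incremental files whenever they arrive.
        private final Map<String, Map<String, Object>> secondaries =
                new ConcurrentHashMap<String, Map<String, Object>>();

        public Object getTicket(String id, String owningNode, String localNode) {
            if (owningNode.equals(localNode)) {
                return primary.get(id);
            }
            // Request for a ticket we do not own: answer from the copy of
            // the owning node's registry restored from its files. New files
            // can keep arriving from that node even while we answer for it.
            Map<String, Object> shadow = secondaries.get(owningNode);
            return shadow == null ? null : shadow.get(id);
        }
    }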

Cushy CAS Cluster

In this document a CAS "cluster" is just a set of CAS server instances configured to know about each other. The term does not imply that the Web servers are clustered in the sense that they share Session objects (JBoss "clustering"), nor does it depend on any other type of communication between machines. In fact, a Cushy CAS cluster could be created from a CAS running under Tomcat on Windows and one running under JBoss on Linux.
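
Concretely, the cluster definition reduces to each node being configured with the names and URLs of all the nodes, which might look as simple as this (hypothetical sketch; the host names are invented):

    import java.util.Arrays;
    import java.util.List;

    // Hypothetical sketch: a Cushy "cluster" is just each node holding a
    // list of node URLs; the servers need not share anything else.
    public class ClusterConfigSketch {
        static class Node {
            final String name, url;
            Node(String name, String url) { this.name = name; this.url = url; }
        }

        static final List<Node> CLUSTER = Arrays.asList(
            new Node("casvm01", "https://casvm01.example.edu/cas/"), // Tomcat on Windows
            new Node("casvm02", "https://casvm02.example.edu/cas/")  // JBoss on Linux
        );
    }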

...