...

Four years ago, Yale implemented a "High Availability" CAS cluster using JBoss Cache to replicate tickets. After that, the only CAS crashes were caused by failures of JBoss Cache itself, and Red Hat was unable to diagnose or fix the problem. We considered replacing JBoss Cache with Ehcache, but there is a more fundamental issue: no failure of the data replication mechanism should be able to crash all of the CAS servers at once. Another choice of cache might be more reliable, but it would suffer from the same structural weakness.

Having been burned by software so complicated that its configuration files were almost impossible to understand, Yale developed Cushy to accomplish the same thing in a way so simple it could not possibly fail.

The existing CAS TicketRegistry solutions must be configured to replicate tickets to the other nodes and to wait for that replication to complete, so that any node can validate a Service Ticket generated just a few milliseconds earlier. Waiting for replication to complete is what makes CAS vulnerable, but it seemed like an obvious approach because Ehcache and JBoss Cache promised it would work. Once it didn't, it was natural to at least ask whether there was another way to do this. If there was, then the entire TicketRegistry strategy could be reconsidered.
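
To make that coupling concrete, the sketch below shows the "replicate and wait" pattern in simplified Java. The ReplicatingTicketRegistry and PeerNode types are invented here for illustration; they are not the actual CAS TicketRegistry or Ehcache/JBoss Cache APIs, but they show why a hung replication layer stalls ticket creation on every node at once.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Illustrative sketch only: these names do not come from CAS or any cache library.
public class ReplicatingTicketRegistry {

    /** A peer CAS node that must acknowledge every ticket before we proceed. */
    public interface PeerNode {
        void replicate(String ticketId, String ticket, long timeout, TimeUnit unit)
                throws TimeoutException, InterruptedException;
    }

    private final Map<String, String> localTickets = new ConcurrentHashMap<>();
    private final List<PeerNode> peers;

    public ReplicatingTicketRegistry(List<PeerNode> peers) {
        this.peers = peers;
    }

    /**
     * Store a ticket and block until every peer has a copy, so that a
     * Service Ticket issued here can be validated milliseconds later on
     * any other node. This is the step that couples the nodes together:
     * if the replication layer hangs, every addTicket call on every node
     * is stuck in this loop, and logins stall cluster-wide.
     */
    public void addTicket(String ticketId, String ticket)
            throws TimeoutException, InterruptedException {
        localTickets.put(ticketId, ticket);
        for (PeerNode peer : peers) {
            peer.replicate(ticketId, ticket, 5, TimeUnit.SECONDS); // synchronous wait
        }
    }

    public String getTicket(String ticketId) {
        return localTickets.get(ticketId);
    }
}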

...