Friday, 9 August 2013

Designing an efficient and scalable cache

Nearly all applications use some form of caching. Reusing the results of previous computation can decrease latency and increase throughput at the cost of some additional memory. Caching usually looks simpler than it is: most naive cache implementations are designed with a single-threaded application in mind, and that design often becomes a scalability bottleneck as the application grows. Here I will design an efficient and scalable cache for a computationally expensive function, using Java for the examples. The code snippets are not complete; they are only meant to illustrate the concepts.

Let's start with a naive cache implementation and improve it step by step by finding its faults. Whenever we think of a cache, HashMap comes to mind, so let's start there. Have a look at the following code snippet:
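The original snippet did not survive here, so the following is a sketch of the two starting pieces the text describes: a Computable interface and an ExpensiveFunction whose body (a sleep plus a BigInteger parse) is only a stand-in for any long-running computation.

```java
import java.math.BigInteger;

// Computable declares a function from an argument of type A to a result of type V.
interface Computable<A, V> {
    V compute(A arg) throws InterruptedException;
}

// A deliberately slow implementation. The body is a stand-in: imagine any
// CPU-heavy computation here.
class ExpensiveFunction implements Computable<String, BigInteger> {
    public BigInteger compute(String arg) throws InterruptedException {
        Thread.sleep(200); // simulate a lengthy computation
        return new BigInteger(arg);
    }
}
```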

Computable declares a function with an argument of type A and a return type V. ExpensiveFunction implements this interface and takes a long time to compute its result. To build a naive cache, we add a wrapper that records the results of previous computations and encapsulates the caching process; this technique is called memoization. CacheImpl1 is a first attempt, backed by a HashMap. Its compute method first checks whether the result has already been computed: if so, it returns the cached value; if not, it computes the result and stores it in the cache for future use.
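A sketch of what CacheImpl1 might look like, following the description above (the Computable interface is repeated so the snippet stands alone):

```java
import java.util.HashMap;
import java.util.Map;

interface Computable<A, V> { V compute(A arg) throws InterruptedException; }

// First attempt: a memoizing wrapper backed by a plain HashMap.
class CacheImpl1<A, V> implements Computable<A, V> {
    private final Map<A, V> cache = new HashMap<A, V>();
    private final Computable<A, V> c;

    CacheImpl1(Computable<A, V> c) { this.c = c; }

    // HashMap is not thread-safe, so the whole method is synchronized.
    public synchronized V compute(A arg) throws InterruptedException {
        V result = cache.get(arg);
        if (result == null) {          // not cached yet
            result = c.compute(arg);   // the expensive call
            cache.put(arg, result);    // remember it for next time
        }
        return result;
    }
}
```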

As we know, HashMap is not thread-safe, so to make the program thread-safe, CacheImpl1 synchronizes the entire compute method. This ensures thread safety, but it creates a performance bottleneck: only one thread can execute compute at a time. If one thread is busy running the expensive function, no other thread can call compute, even for a value that has already been cached. With several threads blocked waiting, compute may actually take longer than it would without memoization, which is exactly the opposite of what we want from a cache.

Let's improve it a little by using ConcurrentHashMap, which is thread-safe, so we no longer need to synchronize the entire method. CacheImpl2 in the snippet below follows this approach, but it too has a weakness: two threads calling compute may end up computing the same value. Can you see how? That defeats the whole point of caching, which is precisely to prevent the same value from being computed multiple times.
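A sketch of CacheImpl2, with the race window called out in a comment (again, the details are my reconstruction of the missing snippet):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface Computable<A, V> { V compute(A arg) throws InterruptedException; }

// Second attempt: ConcurrentHashMap is thread-safe, so compute itself
// no longer needs to be synchronized.
class CacheImpl2<A, V> implements Computable<A, V> {
    private final Map<A, V> cache = new ConcurrentHashMap<A, V>();
    private final Computable<A, V> c;

    CacheImpl2(Computable<A, V> c) { this.c = c; }

    public V compute(A arg) throws InterruptedException {
        V result = cache.get(arg);
        if (result == null) {
            // Window of vulnerability: another thread may already be running
            // this same computation, but we have no way of knowing, so both
            // threads do the expensive work.
            result = c.compute(arg);
            cache.put(arg, result);
        }
        return result;
    }
}
```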

The problem with CacheImpl2 is that if one thread starts an expensive computation, other threads have no way of knowing that a computation is already in progress and may start the same one. The FutureTask class in Java is handy here, and it is worth going through its documentation first. FutureTask represents a computation: it provides methods to start and cancel it, to query whether it is complete, and to retrieve its result. FutureTask.get() is the answer to our requirement: it returns the result if it is available, and otherwise blocks until the result has been computed and then returns it. CacheImpl3 in the snippet below uses FutureTask.
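A sketch of CacheImpl3: the map now stores Future&lt;V&gt; rather than V, so a thread that arrives while a value is being computed simply blocks on get() instead of starting a duplicate computation.

```java
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

interface Computable<A, V> { V compute(A arg) throws InterruptedException; }

// Third attempt: cache a Future<V> instead of V, so in-progress
// computations are visible to other threads.
class CacheImpl3<A, V> implements Computable<A, V> {
    private final Map<A, Future<V>> cache = new ConcurrentHashMap<A, Future<V>>();
    private final Computable<A, V> c;

    CacheImpl3(Computable<A, V> c) { this.c = c; }

    public V compute(final A arg) throws InterruptedException {
        Future<V> f = cache.get(arg);
        if (f == null) {                       // check-then-act: still not atomic
            FutureTask<V> ft = new FutureTask<V>(new Callable<V>() {
                public V call() throws InterruptedException {
                    return c.compute(arg);
                }
            });
            f = ft;
            cache.put(arg, ft);
            ft.run();                          // the computation happens here
        }
        try {
            return f.get();                    // blocks until the result is ready
        } catch (ExecutionException e) {
            throw new RuntimeException(e.getCause());
        }
    }
}
```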
The CacheImpl3 implementation above looks close to perfect, but it still has a flaw: there is a window in which two threads can compute the same value. The culprit is the if block in the compute method, which is a non-atomic check-then-act; with a little thought you should be able to construct the interleaving. How do we fix it? The putIfAbsent method of ConcurrentHashMap is the answer. As the name suggests, it atomically inserts a value only if the key is not already present; if you are still not sure, go through the documentation. There are also some corner cases left. What if the computation is cancelled, or fails? Future attempts to compute the result would then fail as well, because the dead Future stays in the map. To counter this, the cache should remove the Future from the map in those cases. Here is the final implementation of the cache:
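A sketch of the final version. The class name Memoizer is my choice, as the original snippet is missing; the structure follows the fixes described above: putIfAbsent closes the check-then-act race, and a cancelled or failed Future is removed so a later call can retry.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.CancellationException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

interface Computable<A, V> { V compute(A arg) throws InterruptedException; }

// Final version: atomic insertion via putIfAbsent, plus cleanup of
// cancelled or failed computations.
class Memoizer<A, V> implements Computable<A, V> {
    private final ConcurrentMap<A, Future<V>> cache =
            new ConcurrentHashMap<A, Future<V>>();
    private final Computable<A, V> c;

    Memoizer(Computable<A, V> c) { this.c = c; }

    public V compute(final A arg) throws InterruptedException {
        while (true) {
            Future<V> f = cache.get(arg);
            if (f == null) {
                FutureTask<V> ft = new FutureTask<V>(new Callable<V>() {
                    public V call() throws InterruptedException {
                        return c.compute(arg);
                    }
                });
                // Atomic: only the thread whose putIfAbsent returns null
                // actually runs the computation.
                f = cache.putIfAbsent(arg, ft);
                if (f == null) {
                    f = ft;
                    ft.run();
                }
            }
            try {
                return f.get();
            } catch (CancellationException e) {
                cache.remove(arg, f);          // drop cancelled computation, retry
            } catch (ExecutionException e) {
                cache.remove(arg, f);          // don't cache failures
                throw new RuntimeException(e.getCause());
            }
        }
    }
}
```

Note the use of the two-argument remove(arg, f), which only removes the entry if it still maps to this particular failed Future, so a fresh computation started by another thread is not discarded by mistake.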
