Cache Considerations for Web Applications

27 Sep 2012

When it comes to caching in web applications, I have always done it within the service that accesses the data repositories. A request comes in and is passed to the service class, which checks the cache for the item. If it exists, it is returned; if it doesn't, a call is made to the data layer (which will then hit SQL Server, for example) and the result is put into the cache with a sliding expiry. The result is then returned from the service and shown to the user.
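
As a rough sketch of that pattern (the ProductService, IProductRepository and Product types, the "product:" key scheme and the 20-minute expiry are all my own illustrative choices, not anything prescribed), using MemoryCache from System.Runtime.Caching:

    using System;
    using System.Runtime.Caching;

    public class ProductService
    {
        // Hypothetical repository standing in for the data layer (SQL Server behind it, for example).
        private readonly IProductRepository _repository;
        private readonly MemoryCache _cache = MemoryCache.Default;

        public ProductService(IProductRepository repository)
        {
            _repository = repository;
        }

        public Product GetProduct(int id)
        {
            string key = "product:" + id;

            // Return straight from the cache if the item is there.
            var cached = _cache.Get(key) as Product;
            if (cached != null)
                return cached;

            // Cache miss: hit the data layer and cache the result with a sliding expiry.
            var product = _repository.GetById(id);
            if (product != null)
            {
                _cache.Set(key, product, new CacheItemPolicy
                {
                    SlidingExpiration = TimeSpan.FromMinutes(20) // illustrative value
                });
            }
            return product;
        }
    }

    public interface IProductRepository { Product GetById(int id); }
    public class Product { public int Id; public string Name; }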

With this approach the cache is populated as users browse the site, and it is kept up to date by the sliding expiration: an item that isn't accessed for a while drops out of the cache, and the next request for it populates it again.

This method goes hand in hand with a Clear Cache option: a back-end feature that lets a user force the cache to be cleared when data has been updated.
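
A minimal sketch of that back-end option, assuming the same MemoryCache.Default instance as above (the CacheAdmin name is just mine):

    using System.Linq;
    using System.Runtime.Caching;

    public static class CacheAdmin
    {
        // Back-end "Clear Cache" action: remove every entry so the next requests repopulate the cache.
        public static void ClearAll()
        {
            var cache = MemoryCache.Default;
            foreach (var key in cache.Select(item => item.Key).ToList())
            {
                cache.Remove(key);
            }
        }
    }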

I have always used this method of caching because it's easy to implement, requires no "initialisation", and popular content ends up in the cache fairly quickly. It also means that for development the cache can be switched off and content is served straight from the data layer.

The downside to this method is that while one request is hitting the data layer to fetch the data, all the other requests for that item pass straight through the cache layer and hit the data layer too. So once the cache is cleared or an item expires, the data layer gets a brief hammering while the cache is built back up.

One option to help with this is to make the expiry on content permanent and make the clear cache option selective. The first request caches the item, and it is only lost from the cache if the site/app pool resets or a user chooses to delete that specific object. This method still suffers a brief data layer saturation when an item is cleared from the cache, but it happens less often.
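
Under that variation the insert uses no expiration at all and the clear option takes a single key; a minimal sketch (PermanentCache is my own name for it) might look like:

    using System.Runtime.Caching;

    public static class PermanentCache
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;

        // Cache the item with no sliding or absolute expiration; it stays until the
        // app pool recycles or someone removes it explicitly.
        public static void Put(string key, object value)
        {
            Cache.Set(key, value, new CacheItemPolicy
            {
                AbsoluteExpiration = ObjectCache.InfiniteAbsoluteExpiration,
                SlidingExpiration = ObjectCache.NoSlidingExpiration
            });
        }

        // Selective clear cache: drop just the one object that has been updated.
        public static void Clear(string key)
        {
            Cache.Remove(key);
        }
    }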

Another way of curing this is to put everything into the cache with a version number as part of the key, and make that version number a shared variable. When a clear cache is issued, instead of deleting from the cache, the application can add all the current data into the cache under the next version number and then, once complete, switch the shared version variable over. Users will then start getting the latest data from the cache, and the old version's data can be cleared or expired gracefully.
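
A rough sketch of the versioned-key idea (the class and member names are mine, and the refresh here is simply handed a dictionary of freshly loaded data rather than going to the data layer itself):

    using System;
    using System.Collections.Generic;
    using System.Runtime.Caching;
    using System.Threading;

    public class VersionedCache
    {
        private readonly MemoryCache _cache = MemoryCache.Default;

        // Shared version number; readers build their keys from whatever it currently is.
        private int _version = 1;

        public string KeyFor(string name)
        {
            return "v" + Thread.VolatileRead(ref _version) + ":" + name;
        }

        public object Get(string name)
        {
            return _cache.Get(KeyFor(name));
        }

        // "Clear cache": write everything again under the next version number, then flip
        // the shared version so readers start seeing the new data. Old entries expire off.
        public void Refresh(IDictionary<string, object> freshData)
        {
            int next = _version + 1;
            foreach (var pair in freshData)
            {
                _cache.Set("v" + next + ":" + pair.Key, pair.Value,
                    new CacheItemPolicy { SlidingExpiration = TimeSpan.FromMinutes(20) });
            }
            Interlocked.Exchange(ref _version, next); // publish the new version atomically
        }
    }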

The problem here is that the data layer will still be saturated, but at least in a single, controlled manner (rather than by lots of user requests). It could also take some time if the data to be cached is large, and the data might need to be updated sooner than that. One final issue is that this initialisation of the cache creates a dependency: the initialisation function may need updating every time something changes with the data.

A final way, which I have yet to implement but like the sound of, is to use a cache layer that user requests don't pass through to the data layer. With the above options some requests go through the cache layer to get the data they need if it isn't in the cache, but with this final option, if it isn't in the cache the data doesn't exist. That's OK though, because this is a controlled cache; it can even be external to the application - offloaded to another server, as in memcached implementations.

This option relies on an initialisation method as well, but once the cache is set up it can be replicated onto dedicated servers if necessary and/or stored permanently rather than just in RAM. This is the option high-performance sites may need to use to keep activity off the data layer. In the simplest implementation, the cache is populated in Application_Start and all requests then hit the cache only.
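
A minimal sketch of that simplest form, assuming an ASP.NET Global.asax, a hypothetical ProductRepository with a LoadAllProducts call, and the Product type from the first sketch:

    using System;
    using System.Collections.Generic;
    using System.Runtime.Caching;
    using System.Web;

    public class Global : HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            // One controlled pass over the data layer at start-up; user requests never trigger this.
            var repository = new ProductRepository();
            foreach (var product in repository.LoadAllProducts())
            {
                MemoryCache.Default.Set("product:" + product.Id, product,
                    new CacheItemPolicy { Priority = CacheItemPriority.NotRemovable });
            }
        }
    }

    public class ProductService
    {
        // Read side: cache only. If it isn't in the cache, it doesn't exist.
        public Product GetProduct(int id)
        {
            return MemoryCache.Default.Get("product:" + id) as Product;
        }
    }

    // Hypothetical stand-in for the real data layer.
    public class ProductRepository
    {
        public IEnumerable<Product> LoadAllProducts()
        {
            yield break; // the real implementation would query SQL Server here
        }
    }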

If any data changes, it needs to be pushed into the cache - as if requests are hitting a wall and the data is being pushed into that wall from the other side. This requires selective publishing of changes to the cache whenever a change is made, but it means data layer access is controlled completely by application code and user requests only ever hit the cache.
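
The "push from the other side of the wall" part then becomes a small publisher that the back-end calls whenever it writes to the data layer; a sketch (CachePublisher is my own name, reusing the Product type and key scheme above):

    using System.Runtime.Caching;

    public static class CachePublisher
    {
        // Called by the code that has just written to the data layer; user requests
        // never reach this path, they only ever read from the cache.
        public static void Publish(Product product)
        {
            MemoryCache.Default.Set("product:" + product.Id, product,
                new CacheItemPolicy { Priority = CacheItemPriority.NotRemovable });
        }

        public static void Unpublish(int productId)
        {
            MemoryCache.Default.Remove("product:" + productId);
        }
    }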

A very simple way of implementing this final option is to repoint the data layer functions at a cache application rather than SQL Server (for example). The app can then carry on with its internal RAM cache, hitting that "data layer" as necessary. Any changes made in SQL Server (for example) can then be pushed into the cache application as needed, and commands sent to the site to "clear RAM cache" so that the cache application layer is hit again.
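
One way to picture that swap, sketched with a hypothetical IProductDataSource abstraction and a made-up ICacheAppClient standing in for whatever client library the external cache application provides (no real memcached API is shown here):

    // Hypothetical data-source abstraction the service layer already programs against.
    public interface IProductDataSource
    {
        Product GetById(int id);
    }

    // Original implementation: goes to SQL Server.
    public class SqlProductDataSource : IProductDataSource
    {
        public Product GetById(int id)
        {
            // ... ADO.NET / ORM call to SQL Server would go here ...
            throw new System.NotImplementedException();
        }
    }

    // Swapped-in implementation: asks the external cache application instead.
    public class CacheAppProductDataSource : IProductDataSource
    {
        private readonly ICacheAppClient _client;

        public CacheAppProductDataSource(ICacheAppClient client)
        {
            _client = client;
        }

        public Product GetById(int id)
        {
            return _client.Get<Product>("product:" + id);
        }
    }

    // Hypothetical client for the cache application (a memcached-style store, for example).
    public interface ICacheAppClient
    {
        T Get<T>(string key) where T : class;
    }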

Posted by makit
Last revised 27 Sep 2012 10:08 PM
