Caching is not what you think it is

Caching to solve performance problems? It may not be the best idea

Davide Mauri
4 min read · Jul 24, 2017

Let’s say it immediately and clearly: caching should be used to reduce the cost of achieving performance and scalability, and NOT to solve performance and scalability problems.

We’ve all been there

Many times I’ve seen developers use caching simply as a way to hide performance and scalability problems. Database queries are slow? Cache the results in a middle tier or a specialized microservice. API calls are slow? Cache the results on the client.

Guess what? This approach has two major flaws. The first is that sooner or later performance and scalability will become a problem again, and, following the same line of thought, this will lead to the decision to cache even more. And this, in turn, leads to the second major flaw: code complexity will grow until maintaining and evolving your code becomes practically impossible.

This happens because the complexity of the code that manages caching grows exponentially. At the beginning, caching is really easy. You store a value and you get that value back when you need it. Evicting stale data from the cache is quite easy, since object dependencies are trivial or even non-existent.
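To make that starting point concrete, here is a minimal sketch of the classic cache-aside pattern using a plain in-process dictionary. All the names here (query_database, get_user) are made up for illustration, not taken from any specific library:

    # A naive in-process cache: the "easy" starting point.
    _cache = {}

    def query_database(user_id):
        # Stand-in for the real (slow) database query.
        return {"id": user_id, "name": "user-" + user_id}

    def get_user(user_id):
        # Cache-aside: look in the cache first, fall back to the database.
        cached = _cache.get(user_id)
        if cached is None:
            cached = query_database(user_id)
            _cache[user_id] = cached
        return cached

So far, so simple: one key, one value, no dependencies.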

But then you need to improve performance and scalability even more, and more things are put into the cache. Suddenly, a totally overlooked problem becomes pressing. How do you know when something cached needs to be updated to keep it from becoming stale? Who is in charge of deciding when to update the cached data? And when should that decision happen? Is it up to those who read and write the data to kick off the update process, or must something work behind the scenes to automatically refresh cached data when needed?
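Continuing the sketch above, here is what invalidation starts to look like once cached objects depend on each other. Again, every name (write_database, the key formats) is hypothetical:

    def write_database(user_id, fields):
        # Stand-in for the real database write.
        pass

    def update_user(user_id, fields):
        write_database(user_id, fields)
        # Every write path now has to know which cached keys depend on
        # the data it touches, and keep that list in sync forever:
        _cache.pop(user_id, None)                               # the user itself
        _cache.pop("team:" + str(fields.get("team_id")), None)  # the roster listing them
        _cache.pop("report:active_users", None)                 # aggregates built on top
        # Miss one dependency and you serve stale data; evict one
        # too many and you throw away perfectly good cache entries.

Nothing enforces that list of dependent keys: it lives only in the developers’ heads, and it has to be kept in sync as the data model evolves.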

There are no simple answers here, nor universally correct ones, since the right answer depends strongly on your scenario.

Now, add concurrency to the problem and you’re done. How do you deal with concurrent access? I’m sure your caching engine already provides some sort of protection against concurrent writes, and it may even support transactions to a certain degree, but this just adds to the solution’s complexity, since setting and getting a value from the cache is not as simple as a plain get and set anymore.
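For example, if two requests miss the cache at the same moment, both will hit the database and both will write the cache (the "cache stampede" problem). A common mitigation, sketched here with a single coarse lock purely for illustration, is to re-check the cache after acquiring the lock:

    import threading

    _cache_lock = threading.Lock()

    def get_user_safe(user_id):
        cached = _cache.get(user_id)
        if cached is not None:
            return cached
        with _cache_lock:
            # Re-check: another thread may have filled the entry
            # while we were waiting for the lock.
            cached = _cache.get(user_id)
            if cached is None:
                cached = query_database(user_id)
                _cache[user_id] = cached
            return cached

A single global lock serializes every cache miss; a real implementation would need per-key locking, which is exactly the kind of creeping complexity this article is warning about.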

Not to mention that, if not engineered really well, your code becomes a minefield of "if value is null then…" checks since, by definition, a cache is volatile, and you have to be prepared for the situation where the value you’re looking for is not cached anymore.
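It gets subtler than a simple null check, too: if null is a legitimate value to cache, you can no longer use it to mean "not in cache". One common workaround, sketched here with a made-up load_setting helper, is a sentinel value:

    _MISS = object()  # sentinel: tells "not cached" apart from "cached None"

    def load_setting(key):
        # Stand-in for the real lookup; may legitimately return None.
        return None

    def get_setting(key):
        value = _cache.get(key, _MISS)
        if value is _MISS:
            # Not cached: maybe evicted, maybe expired, maybe never loaded.
            value = load_setting(key)
            _cache[key] = value
        # Callers still have to handle None, because None can be a real value.
        return value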

Of course, you can adopt a warm-cache approach, where everything is loaded into the cache at startup and never removed unless explicitly evicted. But again: more complexity, and a higher risk of stale data.
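A warm cache looks something like this (query_all_users is, once more, a hypothetical bulk loader):

    def query_all_users():
        # Stand-in for a bulk load from the database.
        return [{"id": "1", "name": "user-1"}, {"id": "2", "name": "user-2"}]

    def warm_cache():
        # Load everything up front so reads never miss... in theory.
        for user in query_all_users():
            _cache[user["id"]] = user
        # The price: slower startup, more memory, and every row that
        # changes after this point is stale until explicitly refreshed.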

What about adding a second level of caching, maybe leveraging browser memory? Things get worse, since every problem described so far simply gets doubled.
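To see why, picture two levels: a small, fast local one in front of a shared one. The dictionaries below stand in for, say, per-process memory and something like Redis:

    _l1 = {}  # local level: per-process memory, or browser storage
    _l2 = {}  # shared level: stand-in for something like Redis

    def get_two_level(key):
        value = _l1.get(key)
        if value is None:
            value = _l2.get(key)
            if value is None:
                value = query_database(key)
                _l2[key] = value
            _l1[key] = value
        return value

    # Invalidation now has to reach BOTH levels, and the local level
    # lives in every process (or every browser), so "delete the key"
    # is no longer a single call anywhere.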

You’re now at the end of the road, with a solution as complex as the worst spaghetti code you’ve ever seen, riddled with invisible dependencies between cached objects, and the bad news is that you have to maintain it. Maybe it works, but its maintenance costs will sooner or later become unsustainable.

By then you’ll know that caching is not the problem: cache invalidation is (https://martinfowler.com/bliki/TwoHardThings.html). But it will be too late, unfortunately.

There are only two hard things in Computer Science: cache invalidation and naming things.

— Phil Karlton

How to avoid being there again

Each time you feel the urge to cache something, ask yourself whether that need derives from a lack of knowledge (there is no shame in admitting this: we work in teams precisely because no one can know everything!) or whether you are actually hitting the limits of your resources.

If you have a performance problem, first and foremost, find where the bottleneck is and solve it; don’t hide it under the caching carpet!

Adding caching logic is an architectural change that needs to be carefully evaluated, for the reasons described above. Before committing to it, ask yourself:

  • Will caching help me sustain more concurrent requests without making the solution cost proportionally more?
  • Will caching help me reduce costs related to performance?
  • Will caching help me reduce costs related to scalability?
  • How will stale data impact the user experience?
  • How should the cache be invalidated? Can one general rule apply, or does each object need its own invalidation logic?
  • Are my caching needs trivial? Will this change in the foreseeable future?
  • Where should caching be done? And who should do it?
  • What are the options? Should you use Redis? Local Storage? Maybe the database you’re already using is fine. (Remember: the goal is the solution with the best performance/cost ratio, on average.)

Only after you have answered these questions, and evaluated the answers, should you start thinking about implementing a cache. Otherwise, you’re just at the beginning of your worst nightmare.


Davide Mauri

Data Geek, Storyteller, Developer at heart, now infiltrated into the Azure SQL product group to make sure developers’ voices are heard loud and clear. Heavy Metal fan.