Distributed Caching using Hazelcast

In this post we will look at why applications need caching and then explain Distributed Caching using Hazelcast.

Traditional Application

Traditionally, applications were designed with an application layer and a database layer, with the role of persisting and providing access to the system’s data falling to a relational database. The relational database usually had a certain degree of replication, mainly from a resiliency perspective, i.e. as a failover mechanism that switches to a backup/slave database in case the primary database goes down.

As the load on the application increases, one way to scale is to replicate the application layer, balancing the load across the replicas and scaling linearly. However, this approach only works up to a point, after which the database becomes the bottleneck. We cannot scale the database layer the way we scaled the application layer, because we would lose the ACID properties as soon as we did.

                 Traditional Application

Replication-based Caching

In order to overcome the above issue, we can go for replication-based caching, wherein each application node has its own cache instance and all the cache instances hold replicated data, i.e. the same data is present on every cache instance. The replication is handled via a JGroups- or JMS-based approach: whenever a cache entry is added on one cache instance, the same entry is added on the other cache nodes as well, and the same applies to updates and deletes.

Since the cache instance lives in the application layer, this reduces database calls and provides faster access to cached data.
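
To make the replication model concrete, below is a minimal, hypothetical Java sketch of such a cache: every local write is broadcast to all peer nodes, so each node ends up holding the full data set. The Replicator interface stands in for the JGroups/JMS transport; all names here are illustrative, not a real library API.

// Hypothetical sketch of replication-based caching (not a real library API):
// every local write is broadcast to all peers, so each node ends up holding
// the full data set in its own memory.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for the JGroups/JMS broadcast channel.
interface Replicator<K, V> {
    void broadcastPut(K key, V value);
    void broadcastRemove(K key);
}

public class ReplicatedCache<K, V> {

    private final Map<K, V> local = new ConcurrentHashMap<>();
    private final Replicator<K, V> replicator;

    public ReplicatedCache(Replicator<K, V> replicator) {
        this.replicator = replicator;
    }

    public void put(K key, V value) {
        local.put(key, value);                // update this node
        replicator.broadcastPut(key, value);  // replicate to every other node
    }

    public V get(K key) {
        return local.get(key);                // always served from local memory
    }

    public void remove(K key) {
        local.remove(key);
        replicator.broadcastRemove(key);
    }

    // Called by the transport when a peer broadcasts a change.
    public void onRemotePut(K key, V value) { local.put(key, value); }
    public void onRemoteRemove(K key)       { local.remove(key); }
}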

The above architecture also introduces other problems, such as how the data in the cache is kept in sync with the data in the underlying database. If the cache syncs with the database too often, it ends up querying the database frequently; if it does not sync often enough, it may serve stale data to the end user.

Caching systems typically operate in one of the modes below.

  • Time-bound cache: This holds entries for a defined period (time-to-live, popularly abbreviated as TTL); a minimal sketch of this mode follows the list
  • Write-through cache: This holds entries until they are invalidated by subsequent updates
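
Below is a minimal sketch of the first mode, a time-bound (TTL) cache, assuming a simple in-process map; the class and method names are illustrative. Entries older than the TTL are treated as missing, so the caller falls back to the database and repopulates the cache.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TtlCache<K, V> {

    // Each entry remembers when it was written.
    private record Entry<T>(T value, long writtenAtMillis) {}

    private final Map<K, Entry<V>> store = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    public void put(K key, V value) {
        store.put(key, new Entry<>(value, System.currentTimeMillis()));
    }

    // Returns null when the entry is missing or older than the TTL; the caller
    // then falls back to the database and repopulates the cache.
    public V get(K key) {
        Entry<V> entry = store.get(key);
        if (entry == null
                || System.currentTimeMillis() - entry.writtenAtMillis() > ttlMillis) {
            store.remove(key);  // lazily evict the expired entry
            return null;
        }
        return entry.value();
    }
}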

Below are some of the disadvantages of this approach.

Disadvantages
  • The above approach works well for a few nodes, but as the application scales to many nodes (say more than 50), replicating every change on one cache node to all the other nodes becomes a significant overhead.
  • The maximum amount of data that can be cached is limited to the size of a single cache node. For example, say we have 25 application nodes, each with a 3 GB cache. The maximum amount of data that can be cached is 3 GB, whereas the total memory used for caching is 25 × 3 = 75 GB, i.e. we cache 3 GB of data using 75 GB of capacity.
  • The JGroups/JMS layer becomes a single point of failure as well as an overhead, since it is this component’s responsibility to maintain replication across the nodes.
  • The same data is replicated across all the cache nodes, whether there are 2 nodes or 2000 nodes. This scale of replication is not at all required in the real world.

               Replication-based caching

Distributed Caching

In order to overcome the above disadvantages of replication-based caching, we can go for distributed caching. This approach is best suited for applications that need to scale linearly to millions of users and thousands of concurrent requests. In this approach we have dedicated distributed cache servers, or the cache servers can also run as part of the application servers.

Hazelcast is a radical new approach to data that was designed from the ground up around distribution. It embraces a new, scalable way of thinking in that data should be shared for resilience and performance, while allowing us to configure the trade-offs surrounding consistency as the data requirements dictate.

Below are some of the features of a Hazelcast cluster.

  • Hazelcast is masterless by nature.
  • Each node is configured to be functionally the same and operates in a peer-to-peer manner.
  • The oldest node in the cluster is the de facto leader. This node manages the membership by automatically making decisions as to which node is responsible for which data.
  • Hazelcast makes it incredibly simple to get up and running, as the system is self-discovering, self-clustering, and works straight out of the box.
  • Hazelcast keeps the entire data set in memory, which makes it incredibly fast (a minimal quick-start sketch follows this list).
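
The quick-start sketch below shows how simple this is in practice: it starts an embedded Hazelcast member and uses a distributed map. With the default discovery mechanism, running this class on several machines lets the instances find each other and form a cluster automatically. The map name "user-cache" is just an example, and the imports shown follow recent Hazelcast versions (4.x/5.x); in 3.x the IMap class lives in com.hazelcast.core.

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class HazelcastQuickStart {
    public static void main(String[] args) {
        // Starts an embedded member; it discovers and joins any existing cluster.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // A distributed map: entries are partitioned across the cluster members,
        // with backup copies held on other nodes.
        IMap<String, String> cache = hz.getMap("user-cache");
        cache.put("42", "Alice");
        System.out.println(cache.get("42"));

        hz.shutdown();
    }
}

An entry put into the map on one member can be read from any other member of the cluster; the application code stays the same no matter how many nodes are running.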

     Distributed caching with Hazelcast

The cache data is replicated to a maximum of 3 nodes only, which allows us to cache far more data than with replicated caching. For example, with 100 nodes, each providing 3 GB of memory for the cache, we can cache about 100 GB of data. If a node goes down, the application is not affected, as the data is also replicated on other nodes. Whenever a new node joins the cache cluster, the data is repartitioned onto the new node, and the same happens when an existing node is removed from the cluster.
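
As a sketch of how this replication factor can be tuned, the snippet below configures one Hazelcast map programmatically: backup-count controls how many extra copies of each partition are kept on other members, and time-to-live-seconds expires entries after a fixed period. The map name and the chosen values are only examples.

import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class CacheClusterConfig {
    public static void main(String[] args) {
        Config config = new Config();

        MapConfig mapConfig = new MapConfig("user-cache");
        mapConfig.setBackupCount(2);          // 1 owner copy + 2 backup copies per entry
        mapConfig.setTimeToLiveSeconds(300);  // entries expire after 5 minutes
        config.addMapConfig(mapConfig);

        // Start a member with this configuration; all members should share it.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
    }
}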

A distributed cache is by far the most powerful as it can scale up in response to changes to the application’s needs.

The diagram below shows a distributed Hazelcast cluster with 3 nodes and a replication factor of 2. The data is split into a number of partitions, and each partition slice is owned by one node and backed up on another; the interactions look like the following figure.

For data belonging to Partition 1, our application has to communicate with Node 1; for data belonging to Partition 2, with Node 2; and so on. The slicing of the data into partitions is dynamic, and in practice there are typically many more partitions than nodes, so each node owns a number of different partitions and holds backups for a number of others. As mentioned before, this is an internal operational detail and our application does not need to know it.

Distributed cache partitions in Hazelcast
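
For illustration, the sketch below uses Hazelcast’s partition service to look up which member currently owns the partition a given key hashes to; applications normally never need to do this, since requests are routed to the owning member automatically. The imports follow recent Hazelcast versions (4.x/5.x); in 3.x these classes live in com.hazelcast.core.

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.partition.Partition;
import com.hazelcast.partition.PartitionService;

public class PartitionLookup {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        PartitionService partitionService = hz.getPartitionService();
        Partition partition = partitionService.getPartition("some-key");

        // The member that currently owns the partition this key hashes to.
        System.out.println("Partition id: " + partition.getPartitionId());
        System.out.println("Owner member: " + partition.getOwner());

        hz.shutdown();
    }
}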

 

I hope this has been useful for you and I’d like to thank you for reading. If you like this article, please leave a helpful comment and share it with your friends.
