Getting Started with Hazelcast
In this post we shall see how to get started with Hazelcast and get an overview of its features before going deeper into the advanced ones.
In the previous posts we learnt how to install Hazelcast, use it, and access it from Java.
Ways to start a new Hazelcast Instance.
- Shell Script:
WM-C02RJ2EKG8WP:bin masampat$ cd $HAZELCAST_HOME/bin
WM-C02RJ2EKG8WP:bin masampat$ ./start.sh
JAVA_HOME found at /Library/Java/JavaVirtualMachines/jdk1.8.0_74.jdk/Contents/Home
Path to Java : /Library/Java/JavaVirtualMachines/jdk1.8.0_74.jdk/Contents/Home/bin/java
########################################
# RUN_JAVA=/Library/Java/JavaVirtualMachines/jdk1.8.0_74.jdk/Contents/Home/bin/java
# JAVA_OPTS=
# starting now....
########################################
INFO: [192.168.1.104]:5701 [dev] [3.8.3]
Members [1] {
	Member [192.168.1.104]:5701 - b00cb13c-7803-43ff-b1d9-f53a43e8864b this
}
Jul 18, 2017 8:30:03 PM com.hazelcast.core.LifecycleService
INFO: [192.168.1.104]:5701 [dev] [3.8.3] [192.168.1.104]:5701 is STARTED
- Jar File:
WM-C02RJ2EKG8WP:bin masampat$ cd $HAZELCAST_HOME/lib
WM-C02RJ2EKG8WP:lib masampat$ java -cp hazelcast-3.8.3.jar com.hazelcast.console.ConsoleApp
Jul 18, 2017 9:02:38 PM com.hazelcast.config.FileSystemXmlConfig
INFO: Configuring Hazelcast from '/Users/masampat/next-gen/hazelcast-3.8.3/lib/hazelcast.xml'.
Jul 18, 2017 9:02:38 PM com.hazelcast.instance.DefaultAddressPicker
INFO: [LOCAL] [dev] [3.8.3] Prefer IPv4 stack is true.
Jul 18, 2017 9:02:38 PM com.hazelcast.instance.DefaultAddressPicker
INFO: [LOCAL] [dev] [3.8.3] Picked [192.168.1.104]:5702, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5702], bind any local is true
Jul 18, 2017 9:02:38 PM com.hazelcast.system
INFO: [192.168.1.104]:5702 [dev] [3.8.3] Hazelcast 3.8.3 (20170704 - 10e1449) starting at [192.168.1.104]:5702
Jul 18, 2017 9:02:38 PM com.hazelcast.system
INFO: [192.168.1.104]:5702 [dev] [3.8.3] Copyright (c) 2008-2016, Hazelcast, Inc. All Rights Reserved.
Jul 18, 2017 9:02:38 PM com.hazelcast.system
INFO: [192.168.1.104]:5702 [dev] [3.8.3] Configured Hazelcast Serialization version : 1
Jul 18, 2017 9:02:38 PM com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulator
INFO: [192.168.1.104]:5702 [dev] [3.8.3] Backpressure is disabled
Jul 18, 2017 9:02:39 PM com.hazelcast.instance.Node
INFO: [192.168.1.104]:5702 [dev] [3.8.3] Creating MulticastJoiner
Jul 18, 2017 9:02:39 PM com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl
INFO: [192.168.1.104]:5702 [dev] [3.8.3] Starting 8 partition threads
Jul 18, 2017 9:02:39 PM com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl
INFO: [192.168.1.104]:5702 [dev] [3.8.3] Starting 5 generic threads (1 dedicated for priority tasks)
Jul 18, 2017 9:02:39 PM com.hazelcast.core.LifecycleService
INFO: [192.168.1.104]:5702 [dev] [3.8.3] [192.168.1.104]:5702 is STARTING
Jul 18, 2017 9:02:39 PM com.hazelcast.internal.cluster.impl.MulticastJoiner
INFO: [192.168.1.104]:5702 [dev] [3.8.3] Trying to join to discovered node: [192.168.1.104]:5701
Jul 18, 2017 9:02:39 PM com.hazelcast.nio.tcp.InitConnectionTask
INFO: [192.168.1.104]:5702 [dev] [3.8.3] Connecting to /192.168.1.104:5701, timeout: 0, bind-any: true
Jul 18, 2017 9:02:39 PM com.hazelcast.nio.tcp.TcpIpConnectionManager
INFO: [192.168.1.104]:5702 [dev] [3.8.3] Established socket connection between /192.168.1.104:55661 and /192.168.1.104:5701
Jul 18, 2017 9:02:45 PM com.hazelcast.system
INFO: [192.168.1.104]:5702 [dev] [3.8.3] Cluster version set to 3.8
Jul 18, 2017 9:02:45 PM com.hazelcast.internal.cluster.ClusterService
INFO: [192.168.1.104]:5702 [dev] [3.8.3]
Members [2] {
	Member [192.168.1.104]:5701 - b00cb13c-7803-43ff-b1d9-f53a43e8864b
	Member [192.168.1.104]:5702 - 58ead2f8-9a0b-46bf-8b71-d249d629145f this
}
Jul 18, 2017 9:02:47 PM com.hazelcast.core.LifecycleService
INFO: [192.168.1.104]:5702 [dev] [3.8.3] [192.168.1.104]:5702 is STARTED
hazelcast[default] >
- Java Program
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class HazelCastServer {

    public static void main(String[] args) {
        // Create a default configuration and start a new cluster member in this JVM.
        Config cfg = new Config();
        HazelcastInstance instance = Hazelcast.newHazelcastInstance(cfg);
    }
}
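Once the member is up, the same HazelcastInstance handle can be used to work with distributed structures and to inspect the cluster. Below is a minimal sketch under that assumption; the map name "cities" and the sample values are illustrative, not part of the original program.

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.core.Member;

public class HazelCastMapExample {

    public static void main(String[] args) {
        // Start (or join) a cluster member with the default configuration.
        HazelcastInstance instance = Hazelcast.newHazelcastInstance(new Config());

        // "cities" is an arbitrary example name; the distributed map is created lazily on first use.
        IMap<Integer, String> cities = instance.getMap("cities");
        cities.put(1, "Bangalore");
        System.out.println("Value for key 1: " + cities.get(1));

        // List the members currently in the cluster.
        for (Member member : instance.getCluster().getMembers()) {
            System.out.println("Member: " + member.getAddress());
        }

        // Shut down this member when done.
        instance.shutdown();
    }
}

If another member started via start.sh or the ConsoleApp is already running on the same network, this program joins it and the member list printed at the end grows accordingly.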
When we type "help" at the hazelcast[default] > prompt of the console application, the following output is displayed; it gives a brief overview of the operations supported by Hazelcast.

hazelcast[default] > help
Commands:
-- General commands
echo true|false //turns on/off echo of commands (default false)
silent true|false //turns on/off silent of command output (default false)
#<number> <command> //repeats <number> time <command>, replace $i in <command> with current iteration (0..<number-1>)
&<number> <command> //forks <number> threads to execute <command>, replace $t in <command> with current thread number (0..<number-1>
When using #x or &x, is is advised to use silent true as well.
When using &x with m.putmany and m.removemany, each thread will get a different share of keys unless a start key index is specified
jvm //displays info about the runtime
who //displays info about the cluster
whoami //displays info about this cluster member
ns <string> //switch the namespace for using the distributed queue/map/set/list <string> (defaults to "default"
@<file> //executes the given <file> script. Use '//' for comments in the script

-- Queue commands
q.offer <string> //adds a string object to the queue
q.poll //takes an object from the queue
q.offermany <number> [<size>] //adds indicated number of string objects to the queue ('obj<i>' or byte[<size>])
q.pollmany <number> //takes indicated number of objects from the queue
q.iterator [remove] //iterates the queue, remove if specified
q.size //size of the queue
q.clear //clears the queue

-- Set commands
s.add <string> //adds a string object to the set
s.remove <string> //removes the string object from the set
s.addmany <number> //adds indicated number of string objects to the set ('obj<i>')
s.removemany <number> //takes indicated number of objects from the set
s.iterator [remove] //iterates the set, removes if specified
s.size //size of the set
s.clear //clears the set

-- Lock commands
lock <key> //same as Hazelcast.getLock(key).lock()
tryLock <key> //same as Hazelcast.getLock(key).tryLock()
tryLock <key> <time> //same as tryLock <key> with timeout in seconds
unlock <key> //same as Hazelcast.getLock(key).unlock()

-- Map commands
m.put <key> <value> //puts an entry to the map
m.remove <key> //removes the entry of given key from the map
m.get <key> //returns the value of given key from the map
m.putmany <number> [<size>] [<index>] //puts indicated number of entries to the map ('key<i>':byte[<size>], <index>+(0..<number>)
m.removemany <number> [<index>] //removes indicated number of entries from the map ('key<i>', <index>+(0..<number>)
When using &x with m.putmany and m.removemany, each thread will get a different share of keys unless a start key <index> is specified
m.keys //iterates the keys of the map
m.values //iterates the values of the map
m.entries //iterates the entries of the map
m.iterator [remove] //iterates the keys of the map, remove if specified
m.size //size of the map
m.localSize //local size of the map
m.clear //clears the map
m.destroy //destroys the map
m.lock <key> //locks the key
m.tryLock <key> //tries to lock the key and returns immediately
m.tryLock <key> <time> //tries to lock the key within given seconds
m.unlock <key> //unlocks the key
m.stats //shows the local stats of the map

-- MultiMap commands
mm.put <key> <value> //puts an entry to the multimap
mm.get <key> //returns the value of given key from the multimap
mm.remove <key> //removes the entry of given key from the multimap
mm.size //size of the multimap
mm.clear //clears the multimap
mm.destroy //destroys the multimap
mm.iterator [remove] //iterates the keys of the multimap, remove if specified
mm.keys //iterates the keys of the multimap
mm.values //iterates the values of the multimap
mm.entries //iterates the entries of the multimap
mm.lock <key> //locks the key
mm.tryLock <key> //tries to lock the key and returns immediately
mm.tryLock <key> <time> //tries to lock the key within given seconds
mm.unlock <key> //unlocks the key
mm.stats //shows the local stats of the multimap

-- List commands:
l.add <string>
l.add <index> <string>
l.contains <string>
l.remove <string>
l.remove <index>
l.set <index> <string>
l.iterator [remove]
l.size
l.clear

-- IAtomicLong commands:
a.get
a.set <long>
a.inc
a.dec

-- Executor Service commands:
execute <echo-input> //executes an echo task on random member
executeOnKey <echo-input> <key> //executes an echo task on the member that owns the given key
executeOnMember <echo-input> <memberIndex> //executes an echo task on the member with given index
executeOnMembers <echo-input> //executes an echo task on all of the members
e<threadcount>.simulateLoad <task-count> <delaySeconds> //simulates load on executor with given number of thread (e1..e16)

hazelcast[default] >
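These console commands map directly onto the Java API. As a rough illustration, the q.offer/q.poll and lock/unlock commands above correspond to the following sketch; the queue namespace "default" matches the console default, while the lock name "myLock" is just an assumed example.

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ILock;
import com.hazelcast.core.IQueue;

public class ConsoleEquivalents {

    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();

        // q.offer <string> / q.poll
        IQueue<String> queue = instance.getQueue("default");
        queue.offer("obj1");
        System.out.println("Polled: " + queue.poll());

        // lock <key> / unlock <key>
        ILock lock = instance.getLock("myLock");
        lock.lock();
        try {
            // critical section guarded cluster-wide
        } finally {
            lock.unlock();
        }

        instance.shutdown();
    }
}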
Data structures supported by Hazelcast
Hazelcast is much more powerful than a pure cache. It is an in-memory data grid that supports a number of distributed collections, processors, and features. We can load data from various sources into different structures, send messages across the cluster, perform analytical processing on the stored data, take out locks to guard against concurrent activity, and listen to what is going on inside the cluster. Most of these implementations correspond to a standard Java collection. A short Java sketch after the following list shows a few of them in use.
- Standard utility collections:
- Map: Key-value pairs
- List: A collection of objects
- Set: Non-duplicated collection
- Queue: Offer/poll FIFO collection
- Specialized collections:
- Multi-Map: Key–collection pairs
- Lock: Cluster-wide mutex
- Topic: Publish and subscribe messaging
- Concurrency utilities:
- AtomicNumber (exposed as IAtomicLong in Hazelcast 3.x): Cluster-wide atomic counter
- IdGenerator: Cluster-wide unique identifier generation
- Semaphore: Concurrency limitation
- CountdownLatch: Concurrent activity gatekeeping
- Listeners: Notify the application as events happen across the cluster
- Distributed Executor Service
- MapReduce Functionality
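To make the list above more concrete, here is a small sketch touching a topic, an atomic counter, and the distributed executor service. The names "news", "counter", and the EchoTask class are illustrative assumptions, not part of any earlier example.

import java.io.Serializable;
import java.util.concurrent.Callable;
import java.util.concurrent.Future;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IAtomicLong;
import com.hazelcast.core.IExecutorService;
import com.hazelcast.core.ITopic;
import com.hazelcast.core.Message;
import com.hazelcast.core.MessageListener;

public class DistributedStructuresExample {

    // A task submitted to the distributed executor must be serializable.
    static class EchoTask implements Callable<String>, Serializable {
        public String call() {
            return "Hello from " + Thread.currentThread().getName();
        }
    }

    public static void main(String[] args) throws Exception {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();

        // Topic: publish/subscribe messaging across the cluster.
        ITopic<String> topic = instance.getTopic("news");
        topic.addMessageListener(new MessageListener<String>() {
            public void onMessage(Message<String> message) {
                System.out.println("Received: " + message.getMessageObject());
            }
        });
        topic.publish("Hazelcast cluster is up");

        // IAtomicLong: cluster-wide atomic counter.
        IAtomicLong counter = instance.getAtomicLong("counter");
        System.out.println("Counter: " + counter.incrementAndGet());

        // Distributed executor service: run a task somewhere in the cluster.
        IExecutorService executor = instance.getExecutorService("default");
        Future<String> result = executor.submit(new EchoTask());
        System.out.println(result.get());

        // Topic delivery is asynchronous; give the listener a moment before shutting down.
        Thread.sleep(1000);
        instance.shutdown();
    }
}

Run the same program on two machines (or twice on one machine) and the topic message published by one member is received by the listeners on both, since all of these structures are shared across the cluster.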