Is it possible to sync custom plugins in a cluster?

Is it possible to sync custom plugins in a cluster with Hazelcast, or only the Openfire core?

Does anyone have some code snippets? Or a simple tutorial for coding with Hazelcast?

Is it possible to get access to the Hazelcast object in a custom plugin? I don't want to create a second cluster within Openfire just to synchronize my plugin.
Or how do I use the ClusterManager?

In detail:
I have a cache which I want to replicate to all cluster nodes to avoid a lot of database queries.


Figured it out myself…

  • Use org.jivesoftware.util.cache.Cache as the cache:

    Cache<String, String> cache = CacheFactory.createCache("MY_CUSTOM_CACHE");
    Cache is a Map, so it could also be Cache<String, OtherClass> etc.

  • Create a ClusterTask implementation…

import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import java.util.HashMap;
import java.util.Map;

import org.jivesoftware.openfire.cluster.ClusterManager;
import org.jivesoftware.util.cache.Cache;
import org.jivesoftware.util.cache.CacheFactory;
import org.jivesoftware.util.cache.ClusterTask;
import org.jivesoftware.util.cache.ExternalizableUtil;

public class MyClusterTask implements ClusterTask<Void> {

	private Cache<String, String> localCache;
	private Map<String, String> remoteCache = new HashMap<>();

	public MyClusterTask() {
		// no-arg constructor: used when the task is deserialized on a remote node
		this.localCache = CacheFactory.createCache("MY_CUSTOM_CACHE");
	}

	public MyClusterTask(Cache<String, String> cache) {
		this.localCache = cache;
	}

	public void run() {
		// called on the remote node(s) after readExternal()
		execute(new Runnable() {
			public void run() {
				// maybe clear the localCache before merging, or just put and replace!?
				// merge remoteCache into localCache here
				for (String key : remoteCache.keySet()) {
					String val = remoteCache.get(key);
					localCache.put(key, val);
				}
			}
		});
	}

	protected void execute(Runnable runnable) {
		boolean clusterStarting = ClusterManager.isClusteringStarting();
		try {
			runnable.run();
		} catch (IllegalArgumentException e) {
			if (clusterStarting) {
				// entry not found, but we are still joining the cluster
				// Log.debug("Still joining cluster...");
			} else {
				// task failed
				// Log.error(e.getMessage(), e);
			}
		}
	}

	public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
		int size = ExternalizableUtil.getInstance().readInt(in);
		for (int n = 0; n < size; n++) {
			String key = ExternalizableUtil.getInstance().readSafeUTF(in);
			String val = ExternalizableUtil.getInstance().readSafeUTF(in);
			remoteCache.put(key, val);
		}
	}

	public void writeExternal(ObjectOutput out) throws IOException {
		int size = localCache.size();
		ExternalizableUtil.getInstance().writeInt(out, size); // how many entries to cluster
		for (String key : localCache.keySet()) {
			String val = localCache.get(key);
			ExternalizableUtil.getInstance().writeSafeUTF(out, key);
			ExternalizableUtil.getInstance().writeSafeUTF(out, val);
		}
	}

	public Void getResult() {
		return null;
	}
}

  • Start a ClusterTask:
    CacheFactory.doClusterTask(new MyClusterTask(cache)); // cache from step 1
    // writeExternal() of MyClusterTask is called on the local machine first,
    // then readExternal() is fired on the remote machine(s),
    // and finally run() is executed there, starting a new thread to do the job…
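The write-then-read ordering above can be tried outside Openfire. The following is a rough, self-contained sketch of the same size-then-key/value contract; it substitutes plain DataOutputStream/DataInputStream for ExternalizableUtil (an assumption for illustration only, since ExternalizableUtil needs a running Openfire):

```java
import java.io.*;
import java.util.*;

// Hypothetical stand-in: simulates the writeExternal()/readExternal()
// ordering contract (write a count, then that many key/value pairs).
public class CacheSerializationDemo {

    static void writeEntries(Map<String, String> local, DataOutput out) throws IOException {
        out.writeInt(local.size());                  // how many entries follow
        for (Map.Entry<String, String> e : local.entrySet()) {
            out.writeUTF(e.getKey());
            out.writeUTF(e.getValue());
        }
    }

    static Map<String, String> readEntries(DataInput in) throws IOException {
        Map<String, String> remote = new HashMap<>();
        int size = in.readInt();                     // must read in the same order as written
        for (int n = 0; n < size; n++) {
            String key = in.readUTF();
            String val = in.readUTF();
            remote.put(key, val);
        }
        return remote;
    }

    public static void main(String[] args) throws IOException {
        Map<String, String> local = new LinkedHashMap<>();
        local.put("user1", "online");
        local.put("user2", "away");

        // "writeExternal" on the sending node
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        writeEntries(local, new DataOutputStream(buf));

        // "readExternal" on a receiving node
        Map<String, String> remote = readEntries(
            new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));

        System.out.println(remote.equals(local));    // prints: true
    }
}
```

The key point the sketch illustrates: readExternal() must consume fields in exactly the order writeExternal() produced them, which is why the count is written first.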


Keep in mind that the Cache is automatically shared across the cluster; there’s no need to write a ClusterTask to update it (I’m not sure what the intent of MyClusterTask is).

Also check that the defaults of the Cache meet your requirements - both retention time and total size can be configured to suit your needs - though changes only take effect after a restart.
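For reference, Openfire caches are tuned through system properties keyed on the cache name. Assuming the cache created above is named MY_CUSTOM_CACHE, properties along these lines (the values shown are illustrative, not recommendations) would override the defaults:

```properties
# maximum cache size in bytes (illustrative: 1 MB)
cache.MY_CUSTOM_CACHE.size=1048576
# maximum lifetime of an entry in milliseconds (illustrative: 1 hour)
cache.MY_CUSTOM_CACHE.maxLifetime=3600000
```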



Oh, I did not know that the cache is automatically clustered… Is it clustered immediately? Or is there some delay between the local update and the push? MyClusterTask was just an example.

Yes, the Cache is "immediate" - or as good as. A ClusterTask only needs to be used if you need to perform a specific action on the remote node(s).