Clustering Plugin - JGroups/ JBossCache

Have at it! It’s still largely non-functional at the moment, as I’ve been diverted to a few other tasks. So please, raise issues on the GitHub page and fork it to make your own improvements.

Note that the project uses Maven and the Openfire Maven plugin to do the build:

(yes, you have to build and install the plugin manually… You’ll get over it.)

Please use this thread for discussion of what’s broken, etc. Any formal requests/bugs/fixes can go through the project page on GitHub. Thanks!

This plugin is starting to pick up a little more momentum as my time has freed up from other projects and I have been able to focus more time on it. Clustering works in a limited fashion currently: nodes are able to join and leave the cluster, and clients can send messages to clients on other nodes in the cluster.

I know there has been a lot of talk in the community about wanting a clustering plugin; well, here is your chance to have it. Get the code base and start contributing. The plugin is moving in the right direction, but we could use some help.

I just updated my Git repository 20 minutes ago and added some issues that need to be worked on.

You can get the code here:

Go code!


Work continues on this plugin. As of the latest check-in (find it here: ), messages can be sent between clients on different nodes and presence works correctly. I’m narrowing down the defects in the code and hoping to complete some additional cleanup this week. I would be interested to see what kind of results other people in the community get, and what kind of defects people may find in the code.

Getting closer…

Yet another shameless plug for some help.

With the latest push, presence is working better and caching has been fixed to perform much better than before. Work continues on this plugin, and I will attempt to do some load testing against our cluster tomorrow.

Still, a lot of work remains to be done should anyone want to help. Open issues can be found on the project homepage:

Latest revision can be found here:



I’ve been doing a lot of testing and optimization since my last post, and with the current implementation we have been able to connect over 63,000 clients on a 3-node cluster. Work is coming along nicely, and the implementation should improve further now that the Ignite code has been released.

However there is still a lot of work remaining to be completed. The latest branch I’ve been using for optimization can be found here:

If you would like to use this plugin feel free to contribute! Send me a PM here or respond to this thread with any questions you may have as to what you could do to help, or how to setup and test the plugin.



Hi, we are very interested in trying this out. Could you tell us what license this code comes under (Apache please :slight_smile: )?

Would you be able to bundle the license into your source download file?

Does this work with Openfire 3.6.4 or later? Also, just curious, have you tried this for pubsub scalability?

The plugin in its current state does work; however, it needs additional work to really get it into a “production ready” state. We have moved away from Openfire for our XMPP server implementation, so work on this has stalled for the time being. However, should someone want to start work on this plugin again, I can provide assistance and some support to help move the project along. I may also be able to offer a handful of hours of coding time a week.

What follows is the last status update on the plugin that I wrote for our team. For what it’s worth, I think this can be a viable alternative to the hackish approach to clustering with Coherence, though I do believe this project could benefit greatly from merging this code with the clustering code from Openfire trunk. We very much walked into implementing this plugin blindly and felt our way through how it was supposed to work. Once Gato checked in the Coherence code there were a lot of “Ah-ha!” moments.

Openfire clustering status

Over the past few weeks a lot of defect fixing and optimization has occurred on the plug-in, with load testing happening alongside the optimization for the past 2 weeks. The plug-in has gone from being unable to log in more than 1,000 users per machine to peaking at 17,000 users logged in and sending messages to each other. I believe that with further tuning the JBoss clustering implementation could be made faster, more reliable, and production ready.

What follows is an outline of what has already been done to optimize and test the cluster as well as a roadmap for the optimizations that are still needed and the challenges facing the implementation.

The current testing environment consists of 5 VMs. Each one is a dual-core Xeon processor running at 2332.512 MHz with 2 GB of RAM. 3 of these machines are running the Openfire cluster, and the remaining 2 are being used to run load against the 3-node Openfire cluster.

The Openfire JVM is being run with the following options: -Xms2048m -Xmx2048m -XX:NewSize=768M -XX:MaxNewSize=768M -XX:MaxTenuringThreshold=4 -XX:SurvivorRatio=6 -XX:+ScavengeBeforeFullGC -XX:PermSize=256M -XX:MaxPermSize=256M -XX:+HandlePromotionFailure -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ParallelGCThreads=3 -XX:+CMSParallelRemarkEnabled -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -verbosegc -XX:+DisableExplicitGC -XX:+PrintTenuringDistribution -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/openfire/gc.dat

There are also a few machine settings that must be changed for optimal cluster performance. More info is available here:

Clustering is being tested using a program called Tsung. The Tsung script connects clients to the cluster every 0.01 seconds. Each client then retrieves a 30-user roster, sends a message to an online user, sends a message to an offline user, then disconnects.
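For reference, the arrival rate described above maps to a Tsung load phase like the following; this is only a hedged sketch (the phase duration is an assumption, and the actual test script is not reproduced here):

```xml
<!-- Sketch of a Tsung load phase: one new client every 0.01 s.
     Phase number and duration are illustrative, not the actual script. -->
<load>
  <arrivalphase phase="1" duration="10" unit="minute">
    <users interarrival="0.01" unit="second"/>
  </arrivalphase>
</load>
```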

Using this load test, logins have reached a peak of 17,000 logged-in users per node on the cluster. At this point memory and CPU usage on each node begin to max out the system resources, and response time for connecting new users and messaging between clients slows considerably.

The biggest single reason for the performance increase has been changing the replication mode from synchronous to asynchronous. Allowing nodes in the cluster to avoid waiting for a message response from all other nodes has allowed us to scale from the few thousand users we were initially able to connect into the tens of thousands. This did, however, present other issues: for example, if a task for a newly logged-in user on node A arrives at node B before the cache has replicated, the task will fail. This may lead to messages being missed or incorrect presence broadcasts.

To solve this issue temporarily, a task may attempt to execute a few times before failing. Currently a packet routing task will attempt to execute 5 times in 500 ms before failing. So far the only time this has been identified to happen is just after a user logs in and their session information has yet to replicate to all caches. To scale better, a new solution may need to be found for this use case.
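A minimal sketch of the retry approach described above; the class and method names here are illustrative, not the plugin’s actual API:

```java
import java.util.concurrent.Callable;

// Sketch: retry a cluster task a few times before giving up, to paper
// over the window where a new session has not yet replicated. 5 attempts
// with a 100 ms pause roughly matches "5 times in 500 ms" above.
public class RetryingTaskRunner {
    static final int MAX_ATTEMPTS = 5;
    static final long PAUSE_MS = 100;

    // Returns true if the task eventually succeeds, false after exhausting retries.
    public static boolean runWithRetry(Callable<Boolean> task) throws Exception {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            if (task.call()) {
                return true;
            }
            if (attempt < MAX_ATTEMPTS) {
                Thread.sleep(PAUSE_MS); // give replication a chance to catch up
            }
        }
        return false; // caller decides whether to drop or bounce the packet
    }
}
```

A fixed retry budget keeps a packet from blocking a worker thread indefinitely; the trade-off, as noted above, is that this only masks the replication race rather than removing it.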

Other optimizations that have gone into the system consist of reducing the size of objects passed between cluster nodes. One example of this is the ClusterSession and ClusteredClientSession classes, which were serializing most of their fields when typically the only thing needed is the JID. These fields have been replaced with tasks that can be executed to retrieve the information if needed. The tasks are rarely called, and by using them in place of the fields, messaging between nodes on the cluster has improved significantly.
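The idea can be sketched like this; `RemoteSessionProxy` and the `Supplier` standing in for a cluster task are hypothetical names, not the plugin’s real classes:

```java
import java.util.function.Supplier;

// Sketch: a clustered session reference that serializes only the JID.
// Any other attribute is fetched on demand by running a task on the
// node that owns the real session (modeled here as a Supplier).
public class RemoteSessionProxy {
    private final String jid;                      // the only data shipped between nodes
    private final Supplier<String> addressFetcher; // stand-in for a remote cluster task

    public RemoteSessionProxy(String jid, Supplier<String> addressFetcher) {
        this.jid = jid;
        this.addressFetcher = addressFetcher;
    }

    // Cheap: answered locally, no cluster traffic.
    public String getJid() {
        return jid;
    }

    // Rare: triggers the remote task only when the caller actually needs it.
    public String getHostAddress() {
        return addressFetcher.get();
    }
}
```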

Openfire also has an internal cache locking mechanism. The initial implementation of this was causing thread deadlocking and slowdown issues. Since the switch to asynchronous replication, the locking mechanism has been moved out of Openfire and JBoss Cache has been handling the locking itself. This has eliminated the deadlocking and thread-slowdown issues that were being seen in the cluster and has greatly increased the speed of user login, presence updates, and messaging between nodes.

To get the JBossCache clustering plugin to a production-ready state, there will need to be improvements in the program’s memory usage, in how quickly tasks are sent over the wire and processed, and in how many users can log into a single node. Below I have outlined the challenges ahead of this implementation, and some of the work items that need to be done to deliver a highly reliable, scalable clustering plugin for Openfire.


Challenge: Memory Usage
Currently memory usage can get very high, and stay very high, because all cache information from each node in the cluster is distributed to every node in the cluster. This can cause memory usage to skyrocket, and with no eviction policy or caps currently in place, it could become a real issue as the number of users and nodes in the cluster grows.

Implement an eviction policy. Some caches need to be able to grow without limits, such as the user session cache; however, other caches can be capped and evicted incrementally with minimal impact on the speed of the cluster. Also, some caches may be converted to strictly local caches, as their information is only needed at the point of origin of the user session.
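As a hedged sketch only, a capped region in a JBossCache 3.x configuration might look like the following; the wake-up interval and limits are illustrative, not the plugin’s actual settings:

```xml
<!-- Illustrative: LRU eviction with a node cap and idle timeout.
     Caches that must grow without limits would simply omit the cap. -->
<eviction wakeUpInterval="5000">
  <default algorithmClass="org.jboss.cache.eviction.LRUAlgorithm">
    <property name="maxNodes" value="10000"/>
    <property name="timeToLive" value="600000"/>
  </default>
</eviction>
```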

In the case of ever-growing user counts and cluster nodes, some caches may need to be offloaded into a store that is not a replicated JBoss Cache. Perhaps something such as a memcached cluster could serve as the back-end store for user sessions in the future, should the cluster grow to such a size.

Something that would also help for both testing and implementation would be to move cluster nodes onto more powerful hardware. Processor speed and RAM are the two resources currently keeping the cluster nodes from hosting more users.

Challenge: Cluster Speed
Currently under high load it may take a few seconds for a user to login or perform an action (retrieve presence, send a message, obtain a roster, etc…). Some of these delays are transparent, some of them are not, but in either case the system needs to gain some efficiency to be able to scale up to more simultaneous users.

Currently the marshalling of tasks sent over the wire is inefficient: tasks are written using the built-in Java serialization and passed over the wire. Many tasks are executed each second between nodes, so picking a more efficient protocol to encode these messages should be a high priority. Using a wire format such as Thrift or Protocol Buffers would allow for much smaller messages as well as faster marshalling and unmarshalling of tasks.
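The overhead being paid today is easy to demonstrate: default Java serialization ships class metadata with every object, so even a tiny task is many times larger on the wire than its useful payload. The sketch below uses a hypothetical two-field task, not one of the plugin’s real task classes:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Demonstrates why built-in Java serialization is a poor wire format:
// the serialized form of a tiny task dwarfs its useful payload.
public class SerializationOverheadDemo {
    // Hypothetical task carrying 19 characters of real data.
    public static class RoutePacketTask implements Serializable {
        public final String toJid;
        public final String body;
        public RoutePacketTask(String toJid, String body) {
            this.toJid = toJid;
            this.body = body;
        }
    }

    public static byte[] javaSerialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        RoutePacketTask task = new RoutePacketTask("alice@example.com", "hi");
        int payload = task.toJid.length() + task.body.length();
        int wire = javaSerialize(task).length; // includes class descriptors, UIDs, field names
        System.out.println("payload bytes: " + payload + ", wire bytes: " + wire);
    }
}
```

A schema-based format such as Protocol Buffers sends only tagged field values, which is why it shrinks messages and speeds up marshalling.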

Along the same line would be to replace the ExternalizableUtilStrategy with something more efficient. A more efficient strategy for serializing objects would go a long way towards increasing the speed of cluster nodes. Perhaps this too could be replaced with something like Thrift.

The ProcessPacketTask could be more efficient as well. Under the current implementation, when the ProcessPacketTask is run on the target server, the entire processing stack is called again, even though the packet has already been partially processed on the server that originated the task. This causes repeat calls to the originating node for JID information. We could eliminate these unnecessary calls by including the JID information directly in the task.

Challenge: Implementation incomplete
Currently the clustering interface is not yet fully implemented. A majority of the functionality is working (users can send messages, retrieve rosters, and get presence information), but there are still many parts of the interface that are not finished. Things still unimplemented include finding the ClusterComponentSession, ConnectionMultiplexerSession, IncomingServerSession, and ClusterOutgoingSession.

Find the unfinished portions of the plugin and finish them!

The following list of tasks must be completed for the JBossClustering plugin to become production worthy:
-Implement cache eviction for JBossCache
-Split out caches that should be local from replicated caches
-Convert the wire format of tasks to a more efficient format (Thrift, protobuf?)
-Explore the possibility of using a more efficient wire protocol for task return objects (Thrift, protobuf?)
-Fix inefficiencies in the task processing code
-Implement missing pieces of the plugin.

I should also answer your other questions. We just put the code out there hoping to gain some community support, which never happened. We had planned on using whatever license Openfire uses.

We really didn’t get as far as to start testing pub/sub with it. As you can see from the last post there are still some issues that needed to be worked out and I didn’t want to put the cart before the horse.

I was also developing against the trunk, so it should work with 3.6.4.


Great! I downloaded the code and started installation. Would it be possible for you to bundle Openfire’s current license (I believe it’s Apache) into your downloadable source file? Our company has very strict policies on developing against non-Apache-licensed code. This would help us a lot to move forward. Thanks!

I am able to start two Openfire nodes on different hosts connected to the same DB. I use the same xmpp.domain. I dropped the clustering plugin into both and restarted them. I followed the build, deployment, and setup instructions. The only step I wasn’t sure about was the JBoss and JGroups settings:

com.enernoc.clustering.jgroups.config – JGroups configuration file URL

com.enernoc.clustering.cache.config – JBossCache configuration file URL

Note that these are configured to use a “local” clustering mode by default, which attempts to allow multiple servers to run on the same machine… Assuming I got the configuration right.

When I start the Openfire nodes, I see each of them recognize only its own node in the cluster. I am pretty sure my cache is local. Do you have any steps on what I need to do to turn on distribution? I looked through the links and tried a few things with no luck. How do I get a URL for the config files? I found some XML files in the plugins/clustering/classes folder and have been trying to modify them. Any help will be very useful.



So for this property:

com.enernoc.clustering.jgroups.config – JGroups configuration file URL

Use udp.xml


com.enernoc.clustering.cache.config – JBossCache configuration file URL

Use cache.xml

Now, when you deploy the plugin to these machines, you will need to edit udp.xml and udp2.xml to reflect the IP address of the local machine. You will need to edit the following properties in both files:



The bind.address should be the local IP address of the box in both udp.xml and udp2.xml.

The mcast_addr should be the same in each server’s udp.xml and udp2.xml.

The mcast_port should be the same in each server’s udp.xml and udp2.xml, but you will need to use a different port in each configuration (i.e., udp.xml and udp2.xml use different ports).
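Putting the three settings together, the UDP element in each file looks roughly like this; the addresses and ports are examples only, and attribute names may differ slightly between JGroups versions:

```xml
<!-- udp.xml on one server; udp2.xml would use the same mcast_addr but a
     different mcast_port, and bind_addr always points at the local box. -->
<config>
  <UDP bind_addr="192.168.1.10"
       mcast_addr="228.10.10.10"
       mcast_port="45588"/>
  <!-- remaining protocol stack (PING, FD, NAKACK, ...) omitted -->
</config>
```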

What I used to do was have a batch script on each server that would remove the old plugin, copy in the new plugin, expand it, and run some sed commands against the files to set them up. I’ll see if I can dig it up, or you may want to write something similar for your setup. It saves a lot of time if you are fixing defects and need to redeploy to the cluster often.



Finding the bind IP address is something that could be done in the Java code and applied here, so you don’t have to edit the config file on each machine before starting the server; it’s just something we haven’t done yet.
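A minimal sketch of how that auto-detection could look, assuming a simple “first non-loopback IPv4 address” policy (multi-homed machines would need something smarter):

```java
import java.net.Inet4Address;
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Enumeration;

// Sketch: pick a bind address automatically instead of editing udp.xml
// on every machine. Returns the first non-loopback IPv4 address, or the
// loopback address if no external interface is up.
public class BindAddressFinder {
    public static String findBindAddress() throws SocketException {
        Enumeration<NetworkInterface> nics = NetworkInterface.getNetworkInterfaces();
        while (nics.hasMoreElements()) {
            Enumeration<InetAddress> addrs = nics.nextElement().getInetAddresses();
            while (addrs.hasMoreElements()) {
                InetAddress addr = addrs.nextElement();
                if (addr instanceof Inet4Address && !addr.isLoopbackAddress()) {
                    return addr.getHostAddress();
                }
            }
        }
        return "127.0.0.1"; // fallback: nothing better available
    }
}
```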

We have moved away from openfire for our xmpp server implementation so work on this has stalled for the time being.

Which XMPP server are you now using?

Finally it worked! I didn’t set the cache.config and jgroups.config properties; it picked up the correct files by default. My issue was that the nodes were not in the same subnet and multicast was not working correctly. Now I can see both nodes in the Openfire admin page. Thanks a lot! Will proceed with further testing on pubsub.



For what it’s worth, if anyone needs to set this up and cannot use multicast, JGroups can also be configured to use TCP, though I have not done any testing with that yet.
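For anyone exploring that route, a JGroups stack can replace multicast discovery with a static member list via TCPPING; the fragment below is a hedged sketch with example hosts and ports, and the rest of the protocol stack omitted:

```xml
<!-- Sketch: TCP transport with explicitly listed peers instead of multicast -->
<config>
  <TCP bind_port="7800"/>
  <TCPPING initial_hosts="192.168.1.10[7800],192.168.1.11[7800]"
           port_range="1"
           timeout="3000"
           num_initial_members="2"/>
  <!-- remaining protocol stack omitted -->
</config>
```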

We tried pubsub and found that with more than one node, pubsub in the cluster does not work correctly. Essentially ‘pubsub.domain’ is not recognized and we get remote-server-not-found (404). When we send an IQ with disco#items, we don’t get pubsub.domain if there is more than one node in the system. With one node and clustering on, it works fine.

We would like to identify the issue and see what a fix could be. Unfortunately, our team won’t let us continue any more evaluation/bug fixing on open-source products unless we have a license we can use later.

Thanks Brian for all your help. If you do plan on appending a license to your source, I will be happy to see if I can find where the problem is with pubsub.




I have added a license file and the header to all .java files in the project. Tom is going to merge them into the project master and they are currently available in my branch here:

We did no work or testing specifically related to pub/sub for the plugin. We first wanted to get the basics working before we moved to items such as this.

That said, it should be easy enough to add a pubsub task into the code and have it fire across nodes. Looking at the Ignite implementation may give you a good starting point for this.


Thanks! This is very useful. I hope our discussion here will be helpful to anyone trying this software in the future.


Hi macdiesel,

What is the state of this initiative now? Has it undergone any development since 2009?

Has it reached a production-ready state, and are people still active on this initiative?

Your reply will be very helpful, Thanks!

We have not been working on it, but the code is still available here:

We switched to ejabberd for our XMPP server.

I saw that there was a GSoC project for open-source Openfire clustering, but I am unsure if it got picked up.


I’m using a WebSphere 6.1 cluster made up of 2 nodes, each node on a different server (node1 on server1, node2 on server2). I’m using the JBossCache API 3.0.3.GA as the caching layer of my web application.

I have two different cache instances: the first for user layouts on the file system, the second for user data in memory. All works fine when I start up the cluster; data sharing between nodes for the two instances is great (in UDP and also in TCP mode). The layout cache is bound to port 7800 on the two servers/nodes, and the user data cache is bound to port 7801. Here is my TCP configuration for just one of the two cache instances on the two nodes/servers of my cluster (all the files are attached)…

transportes especiales


Hi Transp,

Can I ask how many concurrent your cluster can handle?