
Java memory and number of users in openfire

Hi, I know there's not going to be a 'right' answer to this question, but thought I'd ask anyway. Are there any guidelines as to what the Openfire Java memory should be set to for a given number of user accounts (just registered accounts, not concurrent users)? I know this is quite subjective, as it will depend on the number of groups and 1001 other factors. But our server has half a gig set for Java memory, and with 16,000+ registered users we're getting over 90% memory usage - I've seen it at nearly 98% - so not good!

Would just increasing the memory solve this, or could there be something else going on to drive memory usage up like this? I guess I'm interested in how our figures/experiences compare to other people's Openfire servers.

Cheers for any help/advice


Hi Alex,

for Wildfire I always allowed 1 MB per user, since it used one thread for every connection.

With Openfire and NIO support this should no longer be the case, especially if you take a look at http://www.igniterealtime.org/about/OpenfireScalability.pdf

You could take a look at http://wiki.igniterealtime.org/display/WILDFIRE/HowtoconfigureWildfire%27scaches but I guess it will not really help you.
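For what it's worth, that wiki page boils down to setting per-cache system properties in the Admin Console. The exact cache names differ between versions, so treat the names below as illustrative examples rather than a definitive list - check the wiki page against your own server's cache summary:

```
# Hypothetical example - set via Admin Console -> System Properties.
# Sizes are in bytes; -1 usually means unlimited.
cache.username2roster.size=10485760
cache.group.size=5242880
```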

Increasing the Xmx value will of course help you with the heap memory. But depending on your operating system you may hit other limits, such as the addressable memory available to 32-bit processes.
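For reference, the heap is controlled by the usual JVM flags; where you put them depends on how Openfire was installed. On an RPM install it is typically something like the sysconfig file below (path and variable name are assumptions - check your own init script or use bin/openfire.vmoptions on other installs):

```
# /etc/sysconfig/openfire (assumed location for RPM installs)
OPENFIRE_OPTS="-Xms1024m -Xmx1024m"
```

Setting -Xms equal to -Xmx avoids heap-resizing pauses at the cost of committing the memory up front.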

Do you run any plugins or is it the core server which uses so much memory?


Is that 16k concurrent users? If not, how many concurrent?

Hey Alex,

Besides LG's comment, the other thing that could take up a lot of memory is a high incoming packet rate combined with a low packet-processing rate. In that case packets queue up in memory, consuming more server memory. The only way to really know what is going on on your server is to take a few heap dumps of the JVM and analyze them.

To learn how to get heap dumps follow these links:

Heap dumps are back with a vengeance

Heap Dump Snapshots
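Besides the external tools those articles describe, a dump can also be triggered from inside the JVM via the HotSpot diagnostic MBean. This is a sketch of that approach - it assumes a Sun/HotSpot JVM, since the MBean is HotSpot-specific, and the class name `HeapDumper` is just for illustration:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.File;
import java.lang.management.ManagementFactory;

public class HeapDumper {

    // Writes a heap dump of the running JVM to the given .hprof file.
    // Only works on HotSpot JVMs, where the diagnostic MBean is registered.
    public static void dump(String path, boolean liveObjectsOnly) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // liveObjectsOnly = true forces a GC first and dumps only reachable objects
        bean.dumpHeap(path, liveObjectsOnly);
    }

    public static void main(String[] args) throws Exception {
        File out = new File("openfire-heap.hprof");
        if (out.exists()) {
            out.delete(); // dumpHeap refuses to overwrite an existing file
        }
        dump(out.getPath(), true);
        System.out.println("Wrote " + out.length() + " bytes to " + out);
    }
}
```

The resulting .hprof file can then be loaded into jhat or a similar analyzer as described in the links above.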


– Gato

Our total number of registered users is quite a bit higher than that. We run with one gig of memory (-Xms1024m -Xmx1024m). We track heap usage through Zabbix; it typically shows an old-generation utilization between 25 and 50% (last week's graph is attached to this post).
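If you don't have a monitoring agent handy, the same heap figure can be read straight from the JVM's own management beans. A minimal sketch (class and method names are just for illustration):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapStats {

    // Returns current heap utilization as a percentage of the configured
    // maximum (-Xmx) - the same figure a monitoring agent would graph.
    public static double heapUsedPercent() {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        // Note: getMax() can be -1 if no maximum is defined; on HotSpot
        // the heap maximum is normally set, so this is not handled here.
        return 100.0 * heap.getUsed() / heap.getMax();
    }

    public static void main(String[] args) {
        System.out.printf("heap used: %.1f%%%n", heapUsedPercent());
    }
}
```

Exposed over JMX, this is also what tools like Zabbix or jconsole poll remotely.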

We have noticed that, as Gato suggests, having enough CPU power, as well as beefy, dedicated hardware for the database backend, brings a lot of stability to Openfire's resource usage.

One of our admins has expressed concerns in the past about growing Openfire's heap. His experiences with very large heap sizes aren't that good: typically, huge garbage collection sweeps will mess up your (used-to-be-realtime) server.

Thanks for the replies - all useful stuff. For info, we've probably only got a couple of hundred concurrent users, though all users have quite large rosters (up to around 1000 contacts - they're auto-generated), and we do have a couple of plugins that have been added.

Will have a ‘play’ with the settings (and the cache sizes too) to see what makes any difference.



Hi Guus,

with the current garbage collectors I don't think that a 4 GB Java heap is a problem if you are running a 4-CPU server.

I've seen GC cycles of 40 ms that freed 700 MB of a 1.5 GB heap on AIX with IBM's JRE - and that JRE is said not to be the fastest, although the RISC CPU cores do a much better job than the x86 cores.
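To see whether GC pauses are actually a problem on a given server, it is usually enough to log them. These are the HotSpot flags of that era (flag names vary by JVM vendor and version, so verify against your own JVM's documentation):

```
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log
```

The resulting log shows, per collection, how long the pause was and how much of each generation was freed - exactly the numbers quoted above.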


reference: http://java.sun.com/performance/reference/whitepapers/6_performance.html#2.2.2

Hi Gato,

do you still have the tuning guide and the stats plugin for the NIO queues or is this no longer necessary?

One should use a very fast disk or memory stick to write dumps, as this is really an I/O problem. I have no idea whether the Sun dumps are compressed; if not, one may expect them to take a few minutes (depending on the heap size) to be written.