We’re using Openfire with up to 400 concurrent users and are suffering from memory and CPU issues with it (see attached screenshot for the heap memory usage).
Our setup is as follows:
PEP Disabled
Only MUC is needed
PrivacyLists used
ClientControl Plugin for bookmarks enabled
Raptor Plugin enabled for flood protection
HTTP-Bind disabled, as we’re using punjab instead because it works better for us
Users connect with Candy, our open-source AJAX client
Openfire running on a Java 6 JVM
Openfire attached to a MySQL database for authentication and general storage
Message archiving enabled
We increased the maximum heap to 1024 MB (-Xms512m -Xmx1024m), but that didn’t really help: it just uses more memory and fails later.
Openfire runs on a server with a dual-core 1.86 GHz Xeon and 2 GB of memory.
As I saw on the JVM Settings page, there are more options we could configure, but I don’t really know what the best thing to do is.
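For reference, on a Linux install these flags usually go into the Openfire startup configuration. The file path and the extra flags below are assumptions for illustration, not a tested recommendation for this workload:

```shell
# /etc/sysconfig/openfire (RPM installs) or the bin/openfire launch script,
# depending on how Openfire was installed -- the path is an assumption.
# -Xms/-Xmx set the heap bounds; the GC and heap-dump flags are illustrative
# options available on a Java 6 JVM, not values tuned for this server.
OPENFIRE_OPTS="-Xms512m -Xmx1024m \
  -XX:+UseConcMarkSweepGC \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/tmp"
```

-XX:+HeapDumpOnOutOfMemoryError is especially useful here: when the heap fills up again, the JVM writes a .hprof file you can inspect afterwards instead of having to catch the failure live.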
Does anyone know what we could do to improve this?
I don’t really want to switch to another Jabber server, but I’m considering it because I’ve heard good things about ejabberd (though its documentation is awful).
Using punjab cut down memory usage significantly for me, since HTTP sessions are then no longer leaked inside Openfire, but you’re already doing that. Do you know how to get a heap dump from the running Openfire process?
I’m having a very similar issue, but on a much larger scale (trying to push 20k users per box) via Coherence clustering. Would using punjab defeat the purpose of using BOSH to cycle requests across boxes in the cluster, since it ties connections to a given box?
Yeah, punjab already helped a lot (especially with the connection drops we had), but memory is still an issue.
I don’t know how to get a heap dump, but on the JVM Settings page the author mentioned VisualVM. Would that be the way to go?
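VisualVM works for this; you can also take a dump from the command line with the JDK tools (jps and jmap ship with the JDK). The PID and output path below are placeholders:

```shell
# List running JVMs to find the Openfire PID
jps -l

# Write a binary heap dump of the live process (replace 12345 with the PID)
jmap -dump:live,format=b,file=/tmp/openfire-heap.hprof 12345
```

The resulting .hprof file can then be opened in VisualVM or Eclipse MAT to see which classes are holding on to the heap.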
Regarding disabling the Raptor plugin: we’ll test over the weekend how Openfire performs without it. I think it could be the problem. The sad thing is that it’s actually a great plugin, and I need something like it.