Cache implementation of Openfire - Is it LRU?

We are running a production app on Openfire and unfortunately it is not scaling well. We have added the Hazelcast clustering plugin and tuned ulimit and other parameters.

Hardware: Ubuntu Linux, 8 cores, 16 GB RAM.

Java heap: 4 GB

Openfire version: 3.9.3

Error: Cache Roster was full, shrunk to 90% in 10 ms.
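(For context: this looks like the cache-culling message from Openfire's in-memory DefaultCache. From memory of the 3.9.x sources, the log line reads something like

    Cache Roster was full, shrinked to 90% in 10ms

meaning the cache hit its configured byte limit and evicted entries until it was back at roughly 90% of capacity.)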

Solutions tried:

Solution A:

Step 1: increased the cache size from 0.5 MB (the default) to 250 MB.

Realised this was still too small for the default 6-hour cache max lifetime.

Step 2: reduced the max lifetime from 6 hours to 30 minutes.

End result: the server works, but Java memory usage sits at almost 95% until GC kicks in or the cache's 30-minute lifetime expires.
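For reference, these limits are set through Openfire system properties (Admin Console > Server > System Properties). For the roster cache I believe the keys follow the usual cache.<name>.size / cache.<name>.maxLifetime pattern with the internal name username2roster (the display-name-to-property mapping lives in CacheFactory, so verify for 3.9.3):

    cache.username2roster.size = 262144000        # 250 MB, in bytes
    cache.username2roster.maxLifetime = 1800000   # 30 minutes, in milliseconds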

Solution B:

Step 1: increased the cache size from 0.5 MB (the default) to -1 (unlimited).

Realised that with no size cap, entries simply accumulate for the full default 6-hour max lifetime.

Step 2: reduced the max lifetime from 6 hours to 30 minutes.

End result: same as Solution A; the server works, but Java memory usage sits at almost 95% until GC kicks in or the 30-minute lifetime expires.
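Before tuning limits further, it can help to see which caches actually consume the heap. Here is a small sketch against Openfire's CacheFactory/Cache API (method names as I remember them from 3.9.x; verify against your sources) that could run from a plugin:

    import org.jivesoftware.util.cache.Cache;
    import org.jivesoftware.util.cache.CacheFactory;

    public class CacheUsageDump {

        // Print current usage vs. configured maximum for every cache.
        // getCacheSize()/getMaxCacheSize() report sizes in bytes.
        public static void dump() {
            for (Cache cache : CacheFactory.getAllCaches()) {
                System.out.printf("%s: %,d of %,d bytes, maxLifetime=%d ms%n",
                        cache.getName(),
                        cache.getCacheSize(),
                        cache.getMaxCacheSize(),
                        cache.getMaxLifetime());
            }
        }
    }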

Possible solution:

I could increase the heap from 4 GB to 6 GB, but that gives no guarantee against running out of memory: with an unlimited cache size, an OutOfMemoryError is always possible.
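For completeness, the heap bump itself would be a JVM option in the Openfire startup configuration; the exact file depends on how Openfire was installed (e.g. /etc/sysconfig/openfire on RPM-based systems):

    OPENFIRE_OPTS="-Xms6g -Xmx6g"

Setting -Xms equal to -Xmx avoids heap resizing, but as said, with an unbounded cache a bigger heap only postpones the OutOfMemoryError.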

I am curious about the cache implementation in Openfire. Why does Openfire use this time bound? Can't it use LRU and evict the least-recently-used entries?
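For illustration, this is the sort of LRU culling I mean. A minimal sketch of my own (not Openfire's code, and counting entries where Openfire sizes its caches in bytes):

    import java.util.Iterator;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Sketch of an LRU cache that, once over its limit, evicts
    // least-recently-used entries until it is back at 90% of capacity,
    // i.e. the behaviour the "shrunk to 90%" warning describes.
    public class ShrinkingLruCache<K, V> {

        private final int maxEntries;
        // accessOrder=true: iteration order is least-recently-used first.
        private final Map<K, V> map = new LinkedHashMap<>(16, 0.75f, true);

        public ShrinkingLruCache(int maxEntries) {
            this.maxEntries = maxEntries;
        }

        public synchronized void put(K key, V value) {
            map.put(key, value);
            if (map.size() > maxEntries) {
                long start = System.currentTimeMillis();
                int target = (int) (maxEntries * 0.90); // cull down to 90%
                Iterator<Map.Entry<K, V>> it = map.entrySet().iterator();
                while (map.size() > target && it.hasNext()) {
                    it.next();
                    it.remove(); // drops the least-recently-used entry
                }
                System.out.println("Cache was full, shrunk to 90% in "
                        + (System.currentTimeMillis() - start) + "ms");
            }
        }

        public synchronized V get(K key) {
            return map.get(key); // a hit refreshes the entry's LRU position
        }
    }

With eviction tied purely to memory pressure like this, no wall-clock expiry would be needed.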

I don't see how we can scale Openfire to a million active users…

Can anyone comment on Openfire's scalability and caching strategy?

Openfire uses LRU, as far as I know. You can ignore those warning/info messages; arguably Openfire should log them at debug level.

For best performance one wants to keep everything in memory.
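As a rough back-of-envelope, assuming (purely for illustration) ~10 KB of cached roster data per user:

    1,000,000 users × ~10 KB ≈ 10 GB for the roster cache alone

so at that scale the working set likely has to be partitioned across a Hazelcast cluster rather than held on a single 16 GB node.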