Memory leak

Hello.

I'm experiencing a memory leak on my Wildfire instance. It runs standalone with its embedded Jetty instance, I've added -Xms64m -Xmx64m to the command line, and all the plugins are enabled. I don't have many users (5), and I'm using the pyMSNt and pyICQ transports.

The heap grows up to the maximum, but the GC no longer frees any memory. When I try to open the admin page, I get a 500 error mentioning the Java heap.

Here is a graph of memory after each full GC: http://zorel.org/static/gc.gif (covering approximately 6 hours of running). There was no OOM error in nohup.out.

Does anyone see the same behaviour? What information could I provide to help determine the origin?
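If it would help, I can restart with the Sun JVM's GC logging switched on and post the output; something like the following added to the same command line as the -Xms/-Xmx options (the log path here is only an example):

-verbose:gc -XX:+PrintGCDetails -Xloggc:/var/log/wildfire-gc.log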

Wildfire 2.4.4

1.5.0_05 Sun Microsystems Inc. – Java HotSpot™ Server VM

Linux Debian Sarge

Regards.

I have the same problem. Only a few users (15-20), and I have to restart the server every couple of days due to memory leaks. Symptoms: server connections fail, no one can re-login, and the admin page gives the following:

Exception:

java.lang.OutOfMemoryError: Java heap space

I didn't have this problem with the old Jive Messenger, but since installing the “Wildfire” version something is definitely leaking.

I'm happy to see I'm not alone.

Hi,

a normal javacore would be helpful. Maybe you can take one in the early morning when almost no one is online, so there should be only a few threads in it. If Wildfire does not terminate its threads properly, you should see 200+ threads.
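With the Sun JVM a javacore (thread dump) can be triggered by sending SIGQUIT to the Wildfire process; since you start it with nohup, the dump should end up in nohup.out. Roughly (the pid is of course just a placeholder):

kill -QUIT <wildfire-pid>

JDK 5 on Linux should also ship the jstack tool, so "jstack <wildfire-pid>" ought to print the same thread stacks.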

Sun's Troubleshooting and Diagnostic Guide for Java (http://java.sun.com/j2se/1.5/pdf/jdk50_ts_guide.pdf) describes how to generate dump files (section 2.1.2.2, Using HAT). I don't know how much performance decreases when the profiling mode is used, but maybe you can run Wildfire for a day with these parameters and create and analyze some heap dumps.
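If I read the guide correctly, the heap dump that HAT needs is produced by starting Wildfire's JVM with the hprof agent in binary format, roughly like this (the file name is only an example):

-agentlib:hprof=heap=dump,format=b,file=wildfire.hprof

A dump can then be forced with the same SIGQUIT signal as for the javacore, and HAT is pointed at the resulting file afterwards.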

LG

Hi.

Thanks for this really interesting document, I didn't know about it.

What exactly do you need? If I run the JVM with this hprof option and take 3 or 4 dumps at the beginning and 3 or 4 when the problem arises, is that OK?

Regards.

Hi,

one or two dumps should be enough. Each one will take some time to be written, so all connected users will experience a small lag. I haven't yet looked at the tools available to analyze these dumps … the tools I have are great for dumps from IBM's JRE, but Wildfire does not run with it.
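For the binary hprof files, the HAT tool from the guide should be usable: as far as I understand it is started with the dump file as its argument (e.g. "hat wildfire.hprof") and then serves the analysis on a local HTTP port (7000 by default, if I remember correctly). I have not tried it against a Wildfire dump yet.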

LG

I'm having memory leak issues of some sort, but may not be getting this exact error. Running Wildfire 2.4.3, the Java memory usage is maxed out. The service has been running for about 8 days, has the ICQ, MSN and AOL Python transports attached, and 1 user: me.

I can no longer get to the Wildfire console, and the MSN transport has now failed. When I could last access the console (about 4 hours ago), the Java memory usage was between 96% and 99%.

Do you want everyone with memory problems to create these “dumps”, or will you be able to figure it out from one person's dump?

Edit: Forgot to mention that I'm running this on a Red Hat 7.2 box, using the Java that comes with the RPM and an external MySQL server (running on the same box).

Hi Rob,

I think one dump where the cause of the out-of-memory error is clearly visible should be fine. It will give ideas about where similar code is used, so more than one source of out-of-memory errors can be eliminated.

LG

Hey guys,

Thanks for the bug report. I created JM-558 for this problem and checked in a fix. You may want to try again with the next nightly build. I'm now profiling other parts of the server to confirm that there are no more leaks.

Thanks,

– Gato