Possible memory leak in 2.5.0

I have a test server installed with just one client logged in. I have been monitoring it with the V2Status plugin and have watched the memory usage climb daily then crash and climb again.

I have the .csv file if anyone wants to see it.

jason

Hi Jason,

There was at least one memory leak in 2.5.0 (JM-573), but it should not be related to what you are seeing. Maybe you are seeing normal behavior: the JVM allocates memory until it reaches its current max memory value (somewhere between Xms and Xmx), and then the garbage collector frees unused objects.
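If you want to watch that sawtooth from outside the V2Status plugin, a minimal sketch like the following (just an illustration, nothing server-specific) prints the used, committed and max heap once a minute; the used value should climb steadily and then drop at every GC:

public class HeapWatcher {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        while (true) {
            long usedMb      = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            long committedMb = rt.totalMemory() / (1024 * 1024); // heap currently allocated by the JVM
            long maxMb       = rt.maxMemory() / (1024 * 1024);   // the Xmx ceiling
            System.out.println("used=" + usedMb + " MB, committed=" + committedMb
                    + " MB, max=" + maxMb + " MB");
            Thread.sleep(60 * 1000); // sample once a minute
        }
    }
}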

I assume that “crash” does not mean that the server crashes, but that the memory use drops from 16 MB to 9 MB instantly?

LG

I assume that “crash” does not mean that the server crashes, but that the memory use drops from 16 MB to 9 MB instantly?

Yes, this is exactly what I am seeing on the test server. The memory usage climbs steadily to about 100 MB, then drops to about 30 MB, then starts over again. This appears to happen on a 12-hour cycle.

So what you are saying is that this is a bug in 2.5.0?

On my production server I see a steady stair-step climb in memory usage until the server stops responding on ports 5222/5223 and a restart is required. I have not seen it drop the memory usage on its own like I see on the test server.

thanks

jason

LG, you stated that this is normal behavior. I did not see that on 2.4.3; that release stayed within about a 75 MB range, whether under peak load or off-peak.

Or is this normal for 2.5.0?

thanks again,

jason

Hi,

What you see on your production server could be related to the JM issue, which is fixed in 2.5.1.

May I ask which Xms and Xmx settings your test and production servers use?

Maybe you can start your test server with -Xms16m and -Xmx48m; then you'd see more GCs, maybe every four hours. It's the connection pool, the heartbeat and the monitoring tool that cause the memory usage to increase so slowly.
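If you are ever unsure which values a running server was actually started with, a small check via the standard management API (a sketch, nothing server-specific assumed) lists the -X flags the JVM received:

import java.lang.management.ManagementFactory;

public class ShowJvmArgs {
    public static void main(String[] args) {
        // Prints the arguments the JVM was started with, e.g. -Xms16m, -Xmx48m, -Xss128k.
        for (String arg : ManagementFactory.getRuntimeMXBean().getInputArguments()) {
            System.out.println(arg);
        }
    }
}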

LG

What you see on your production server could be related to the JM issue, which is fixed in 2.5.1.

May I ask which Xms and Xmx settings your test and production servers use?

I just upgraded to 2.5.1 about an hour ago so I will be watching it closely.

My test server initializes with the following settings:

Xms128m

Xmx128m

Xss128k

jason

Sorry I missed the production server part.

Production server:

Xms384m

Xmx384m

Xss128k

jason

Hi,

With such settings the JVM will never increase its own max memory size, so it's no wonder that you see the 100 MB -> 30 MB GCs. This usually leads to very long GCs (compared to GCs where only 10 MB must be scanned and freed).

Maybe you can set Xms for your production server to 200 MB; then you should be able to notice how often the JVM increases its internal max value. Adding some GC log options as described at http://www.tagtraum.com/gcviewer.html (-Xloggc:<file> -XX:+PrintGCDetails) would do no harm.
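If you would rather watch the heap growth directly instead of (or in addition to) parsing the GC log, a rough sketch like this reports whenever the committed heap size changes; with Xms equal to Xmx it will never change, while with Xms below Xmx you will see it step up toward the maximum:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapGrowthWatcher {
    public static void main(String[] args) throws InterruptedException {
        long lastCommittedMb = -1;
        while (true) {
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            long committedMb = heap.getCommitted() / (1024 * 1024);
            if (committedMb != lastCommittedMb) {
                // The committed size only changes when the JVM resizes the heap
                // somewhere between Xms and Xmx.
                System.out.println("heap committed: " + committedMb + " MB (max "
                        + heap.getMax() / (1024 * 1024) + " MB)");
                lastCommittedMb = committedMb;
            }
            Thread.sleep(60 * 1000);
        }
    }
}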

LG

I will do that and see what happens, thanks again.

jason