Openfire & Empathy bug?

Hello everyone,

I saw a post regarding the memory leak problem, but my problem is a little different:

The moment I log in with the new Ubuntu 9.10 client “Empathy”, the server’s CPU goes up to 99% and everything grinds to a halt.

Is anyone experiencing the same problem? Any workaround?

I tried to restrict clients in the “permitted clients” section, but Empathy seems to be unaffected by it and keeps connecting.

Any idea how I could block Empathy?

Thank you

Hi,

Can you get a few thread dumps a few seconds apart and paste them here? Use kill -3 [process_id] to get the thread dumps. The information will be logged to stdout. Get the dumps while the CPU is high (e.g. 99%). That information will tell us what the server is doing. The server should work fine no matter which client you are using.
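Something like this should do it, just as a sketch (get the Openfire JVM pid first with ps -ef | grep -i java, and adjust the pause as you like):

for i in 1 2 3; do kill -3 [process_id]; sleep 5; done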

Tks,

– Gato

I did kill -3, but nothing came out on stdout. However, in error.log I found this error:

java.lang.OutOfMemoryError: Java heap space

So I think it’s the same “memory leak” problem, which pushes the CPU up to 99%…

Could you explain why I can’t forbid Empathy from connecting even if I restrict the permitted clients?

Thanks

servizi_it wrote:

Could you explain why I can’t forbid Empathy from connecting even if I restrict the permitted clients?

Probably because Empathy is not listed in this plugin. The filter is very simplistic and very easy to override; you just have to change your client’s ID. The Client Control plugin’s filter wasn’t intended to provide strong control. It can only help against novice users.

Hi Gato,

I have one thread dump, collected when JVM memory is at 99.8 percent. I’ve posted the details in the following thread, including server config and scenario:

http://www.igniterealtime.org/community/message/198157?tstart=0#198157

It’s actually very, very easy to force the JVM to run out of memory using Empathy with even one user, once one knows what to do.

Openfire 3.6.4

JVM memory status: 252.80 MB of 253.19 MB (99.8%) used

ps -ef | grep -i java

daemon 30532 1 14 19:05 pts/0 00:01:33 /usr/lib/jvm/java-1.6.0-sun-1.6.0.u7/jre/bin/java -server -Xms128m -Xmx256m -DopenfireHome=/opt/openfire -Dopenfire.lib.dir=/opt/openfire/lib -classpath /opt/openfire/lib/startup.jar -jar /opt/openfire/lib/startup.jar

kill -3 30532

cd /opt/openfire/logs/

more nohup.out

The output is in the attached file.

Cheers,

Dave
nohup-out-20091117.txt.zip (5706 Bytes)

This bug is being tracked as OF-82. Another community discussion related to the same issue can be found at http://www.igniterealtime.org/community/message/198152#198152

Hi Davenz,

The CPU is hitting ~100% due to the OutOfMemory (OOM) problem. The thread dump shows a server with no activity at the moment of the snapshot. What we need to see now is a heap dump of the JVM. There is an easy way to get one when the JVM runs OOM: add -XX:+HeapDumpOnOutOfMemoryError to the command line when starting up the server.
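For example, based on the java command line from your ps output (adjust the paths for your install), the flag just goes in with the other JVM options; you can optionally also add -XX:HeapDumpPath=/some/dir to choose where the .hprof file gets written:

/usr/lib/jvm/java-1.6.0-sun-1.6.0.u7/jre/bin/java -server -Xms128m -Xmx256m -XX:+HeapDumpOnOutOfMemoryError -DopenfireHome=/opt/openfire -Dopenfire.lib.dir=/opt/openfire/lib -classpath /opt/openfire/lib/startup.jar -jar /opt/openfire/lib/startup.jar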

Since memory leaks are just like any other leak, i.e. they happen (slowly or quickly) over seconds, minutes, hours or days, we can also take heap dumps every couple of hours (for example) and compare them. If we see that a given object is accumulating during that time, then we have found the source of the leak without having to wait for the OOM.

In summary, if your OOM problem happens every X days then you might want to create manual heap dumps every hour and post them here so we can analyze them. If the problem is easy to reproduce then it is better to just wait for the OOM, since our analysis will be easier and more certain. To create manual heap dumps you need to run from the command line: jmap -dump:live,format=b,file=[name_of_output_file] [process_id].
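If you want to automate that, a quick shell loop is enough; this is only a sketch (the pid, output path and one-hour interval are examples, and jmap should come from the same JDK that runs Openfire):

PID=[process_id]   # the Openfire JVM pid from ps -ef | grep -i java

while true; do jmap -dump:live,format=b,file=/tmp/openfire-$(date +%Y%m%d-%H%M).hprof $PID; sleep 3600; done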

Tks,

– Gato

Sorry about the delay; once I have a minute I will definitely recreate this on an OpenSolaris system for analysis!

Cheers,

Dave