Increased memory usage

Hey folks, I don't recall if I've posted this or not: GATE-264

Regardless, please don't be afraid to keep posting your experiences and such in this regard. The more data I have the better. I'm almost positive it's directly related to the new contact manager code I implemented. I don't know -why- it's broken, but hey, working on that. =)

jadestorm wrote:

Well I mean, it's a large plugin, so it's bound to cause a bit of an increase. But in theory I would think it would stay relatively stable after an initial increase. What does it tend to be if you are running 1.0.2?

With 1.0.2 it stabilizes around 22-24 MB w/AIM and MSN in use, up to 3 users w/7 total sessions.

Just for grins I cranked up the Yahoo! and ICQ gateways as well. This seems to help trigger the problem more quickly; after 48 hrs (as of yesterday afternoon, 3:00 p.m. local time) I'd gone up to 48 MB in use. At this moment it's 86 MB and climbing slowly. I have disabled the Yahoo! gateway, but that hasn't helped.

Unfortunately, sending a SIGQUIT to the Java process doesn't seem to log anything, anywhere (checked in the stderr logs, system log, Openfire logs, etc.). I'm sure you're correct that it's going somewhere, but I haven't found it yet.
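(For reference, what I'm doing is just kill -QUIT <openfire java pid> from the shell. My understanding is that the thread dump goes to the JVM's standard output rather than to any log file, so it should land wherever the init script sends stdout - exactly which file that is will depend on how your install launches Java.)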

Thanks! And as for trunk, I certainly don't “condone” using trunk. ;D I often commit things that don't work yet. So “fair warning”.

Yeah, I understand… I'm not doing it blindly, and I'm familiar enough with revision control in general (and Subversion in particular) to be able to move around and get any particular changeset I need. Thanks for being explicit about your intentions, though.

ICQ, MSN & GTalk in use, and the server made it through the night with no great increase in memory.

Currently memory is cycling between 36 and 80 MB (consistently, over a 60-second period).

It's early, so we're only at 70 concurrent users, but so far, so good.

D

Well, that's generally some goodish news on a very bad day =) (I'm stranded in Dallas, Texas…)

Sorry to hear that - according to our people in the Texas remote office, the weather's been dreadful out there.

So, 2 1/2 days of uptime and memory usage is up to 120-160 MB (currently about 100 users, peaking yesterday in the region of 200).

Memory has increased steadily, so there may be an issue, but it's not particularly severe in our environment.

It doesn't seem to depend on the number of users. I only had 2 users and the memory usage jumped up to 125 MB.

Well, I just don't understand it. I was going to post to agree with you - my memory utilisation was up at about 200 MB and seemingly increasing daily. Then, about a minute ago, it dumped 100 MB, and is now cycling between 50 and 90 MB.

The memory cycle is also really odd. Does anyone have an explanation for it, or see the same thing? i.e. memory increases a couple of MB a second for about 30 seconds, then returns to the start value and repeats!
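(My best guess is that this is just normal garbage collection: the heap fills up for ~30 seconds, a collection runs, and usage drops back to the baseline. If anyone wants to confirm that on their own box - assuming a Sun HotSpot JVM - starting Java with something like

-verbose:gc -XX:+PrintGCDetails -Xloggc:gc.log

should show whether the sudden drops line up with GC runs. Those are standard HotSpot options, nothing Openfire-specific.)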

I still haven't had a chance to look into it. On a bright note, I did successfully get home today! Yay!

I'm not always great at tracking down memory issues, so bear with me. =) At least this one has a pretty clear (to me) path as to where the likely culprit is =D

I'm experiencing the same problem. I've generated an object histogram showing the objects that are using the most memory:

Object Histogram:

Size Count Class description


159406656 4981458 java.util.concurrent.locks.AbstractQueuedSynchronizer$Node

11872824 12402 int[]

10069064 129352 char[]

6605472 48538 * ConstMethodKlass

6314408 12777 byte[]

3498928 48538 * MethodKlass

3470888 82585 * SymbolKlass

3022512 125938 java.lang.String

2783496 4617 * ConstantPoolKlass

1783208 4617 * InstanceKlassKlass

1685096 16474 java.util.HashMap$Entry[]

1588832 4028 * ConstantPoolCacheKlass

1295424 40482 java.util.concurrent.ConcurrentHashMap$Segment

1290504 23839 java.lang.Object[]

1040520 43355 java.util.HashMap$Entry

973488 40562 java.util.concurrent.locks.ReentrantLock$NonfairSync

823896 3007 * MethodDataKlass

702720 40483 java.util.concurrent.ConcurrentHashMap$HashEntry[]

639000 15975 java.util.HashMap

476352 4962 java.lang.Class

476128 14879 org.xmpp.packet.JID

420416 13138 java.util.LinkedHashMap$Entry

400920 6370 short[]

388608 16192 java.util.ArrayList

My good buddies over at Jive pointed me at a tool that should help track down the leaks. I noticed the same thing you did though, that one concurrency-related object just going batshit insane. =/ (though oddly enough I only noticed it on my G5-based PowerMac, not my Intel-based MacBook Pro)
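For anyone wondering what that class even is: AbstractQueuedSynchronizer is the plumbing behind the java.util.concurrent locks (ReentrantLock, Semaphore, CountDownLatch and friends), and each $Node is an entry in a lock's internal queue of waiting threads. Here's a throwaway sketch - nothing to do with the gateway code, purely an illustration - that makes a batch of them show up under jmap -histo:

import java.util.concurrent.locks.ReentrantLock;

public class AqsNodeDemo {
    public static void main(String[] args) throws Exception {
        final ReentrantLock lock = new ReentrantLock();
        lock.lock(); // main holds the lock and never releases it
        for (int i = 0; i < 500; i++) {
            new Thread(new Runnable() {
                public void run() {
                    // each blocked thread adds one AbstractQueuedSynchronizer$Node
                    // to the lock's internal wait queue
                    lock.lock();
                }
            }).start();
        }
        // keep the JVM alive so jmap -histo <pid> can be run against it
        Thread.sleep(Long.MAX_VALUE);
    }
}

Run that, point jmap -histo at its pid, and you'll see roughly 500 of those nodes. So nearly five million of them in the histogram above suggests something is queueing up (or hanging onto) an awful lot of waiters on some lock or condition.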

Apparently I still have a memory issue with beta 3. Memory usage increases more slowly, but there still seems to be something. It takes about 2 to 3 days for my server to lock up.

Any sort of information about what's causing it? I don't really know what to tell you to look for, since you probably don't own a copy of JProfiler. I'm not really sure how else to tell what's using up so much memory. I can't get any memory leaks to occur anymore.
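One thing that would help a lot, assuming you're on a reasonably recent Sun JVM (1.5.0_07 or later, or 1.6): start the server with

-XX:+HeapDumpOnOutOfMemoryError

so that the next time the heap blows up it writes out an .hprof file automatically, which can then be poked at with jhat (JDK 6) or a profiler. That's a standard HotSpot flag, not anything I've wired into Openfire - where you add JVM options depends on how you launch it.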

Sorry, I have no idea. I'm no Java developer. I only know that after about 2-3 days the used space goes up to 125 MB. These are the errors I have from that time; there are more, but they all say the same thing:

2007.07.08 16:21:16 org.jivesoftware.openfire.gateway.protocols.oscar.BaseFlapConnection$3.handleException(BaseFlapConnection.java:62) ERRTYPE_SNAC_PACKET_LISTENER FLAP ERROR: Java heap space org.jivesoftware.openfire.gateway.protocols.oscar.BaseFlapConnection$4@4604764b

java.lang.OutOfMemoryError: Java heap space

2007.07.08 16:22:28 org.jivesoftware.openfire.container.PluginManager$PluginMonitor.run(PluginManager.java:921)

java.lang.OutOfMemoryError: Java heap space

2007.07.08 16:29:52 org.jivesoftware.openfire.nio.ConnectionHandler.exceptionCaught(ConnectionHandler.java:109)

java.lang.OutOfMemoryError: Java heap space

2007.07.08 16:45:15 org.jivesoftware.openfire.container.PluginManager$PluginMonitor.run(PluginManager.java:921)

java.lang.OutOfMemoryError: Java heap space

Hi.

Here is the memory usage histogram in my installation:

num #instances #bytes class name


1: 261503 24297344 [C

2: 22394 21652000 [B

3: 102046 8972464 [Ljava.util.HashMap$Entry;

4: 295230 7085520 java.lang.String

5: 50706 6650480 <constMethodKlass>

6: 11083 6168504 [I

7: 140082 4482624 java.util.concurrent.ConcurrentHashMap$Segment

8: 50706 4061944 <methodKlass>

9: 85263 3624160 <symbolKlass>

10: 90351 3614040 java.util.HashMap

11: 141106 3386544 java.util.concurrent.locks.ReentrantLock$NonfairSync

12: 137252 3294048 java.util.HashMap$Entry

13: 4800 2869288 <constantPoolKlass>

14: 42957 2456224 [Ljava.lang.Object;

15: 140253 2383752 [Ljava.util.concurrent.ConcurrentHashMap$HashEntry;

16: 4800 1931504 <instanceKlassKlass>

17: 37048 1778304 org.openymsg.network.YahooUser

18: 4196 1655664 <constantPoolCacheKlass>

19: 47304 1513728 java.util.LinkedHashMap$Entry

20: 69887 1118192 java.util.HashSet

21: 44169 1060056 java.util.Hashtable$Entry

22: 31663 1013216 org.xmpp.packet.JID

23: 35600 854400 java.util.ArrayList

24: 2854 837720 <methodDataKlass>

25: 10936 787392 net.sf.jml.impl.MsnContactImpl

26: 30713 737112 java.util.concurrent.ConcurrentHashMap$HashEntry

27: 8757 700432 [Ljava.util.concurrent.ConcurrentHashMap$Segment;

28: 14004 672192 org.jivesoftware.openfire.roster.RosterItem

29: 25849 620376 java.util.LinkedList$Entry

30: 15335 613400 java.util.WeakHashMap$Entry

31: 11501 552048 java.util.LinkedHashMap

32: 5154 494784 java.lang.Class

33: 2357 450000 [Ljava.util.Hashtable$Entry;

34: 26845 429520 java.util.HashMap$KeySet

35: 6695 421752 [S

36: 8029 385392 java.nio.HeapByteBuffer

37: 23585 377360 java.util.concurrent.ConcurrentLinkedQueue$Node

38: 7851 376848 org.jivesoftware.openfire.gateway.protocols.msn.MSNBuddy

39: 8757 350280 java.util.concurrent.ConcurrentHashMap

40: 7325 330160 [[I

41: 12671 304104 java.util.LinkedList

42: 6070 291360 net.kano.joscar.ssiitem.BuddyItem

43: 5952 285696 org.jivesoftware.openfire.gateway.protocols.oscar.OSCARBuddy

44: 15506 248096 java.util.concurrent.LinkedBlockingQueue$Node

45: 2033 243960 java.net.SocksSocketImpl

46: 10095 242280 org.jaxen.util.IdentityHashMap$Entry

47: 7298 233536 java.util.concurrent.locks.AbstractQueuedSynchronizer$Node

48: 276 199488 [Lorg.jaxen.util.IdentityHashMap$Entry;

49: 11299 180784 net.sf.jml.MsnClientId

50: 11269 180304 java.util.LinkedHashSet

juancarlos, how are you getting that information? I'd like to be able to relay how to do that to others, if you don't mind =)

As for that particular dump, wth is a [C ??? =) And why are there so many of them?

You can get the heap histogram through jmap which is part of the JDK:

jmap -histo <pid of openfire jvm process>
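If you don't know the pid offhand, jps (also part of the JDK) will list the running Java processes; something like

jps -l | grep -i openfire

usually turns it up (the grep pattern is just a guess at how the main class shows up on your box - adjust as needed).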

[C means a one-dimensional array of characters.

The primitive types are represented by one of the following characters:

B = byte

C = char

D = double

F = float

I = int

J = long

S = short

Z = boolean

Class and interface types are represented by the fully qualified name, with an 'L' prefix and a ';' suffix, and arrays are prefixed by one '[' per dimension, so '[[I' is a two-dimensional array of ints.

see http://java.sun.com/docs/books/jvms/second_edition/html/ClassFile.doc.html#14757

Hi jadestorm.

Of course, I got it using standard Java 1.6 debugging tools. In this case, to get a memory usage histogram, you can use jmap -histo <PID> (assuming that you are running Openfire on a UNIX-like OS).
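A variant that may be worth knowing about (assuming the Java 6 jmap): jmap -histo:live <PID> forces a full GC before taking the histogram, so it only counts objects that are still reachable - handy for separating a genuine leak from garbage that simply hasn't been collected yet.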

Here is more info about some useful tools:

http://java.sun.com/javase/6/webnotes/trouble/other/matrix6-Unix.html

http://java.sun.com/developer/technicalArticles/J2SE/monitoring/

I really have no clue about these classes named [C, [B… and others like that. I just posted them hoping it could help. I'll continue investigating this.

Thanks srt and juancarlos! That's very handy! I just noticed that the bundled JRE doesn't include that command… so it appears to be a JDK vs JRE thing. Oddly enough, I tried to use a JDK installed separately from the JVM I'm running to jmap -histo my process id, and it complained that it couldn't access the JVM. =/ Either way, I'm not running into the leak whatsoever, so it doesn't really matter if I can do it at the moment.
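(From what I've read, jmap generally has to run as the same user that owns the target JVM, and ideally come from the same JDK version the process was started with; on Linux something along the lines of

sudo -u <openfire user> /path/to/jdk1.6/bin/jmap -histo <pid>

- substituting whatever user and JDK path your install actually uses - tends to get past that "can't access the JVM" error. Just an educated guess on my part, though.)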

We have about 300 concurrent users on weekdays and 160 on weekends here. The server was pretty solid (up for weeks/months) from 2.5.* through 3.3.0, but lately I have to restart it almost nightly.

Our memory usage stays around 60-80 MB 20 minutes after it's started up with full load; within a few hours we're up to 256+ MB, and at the end of the night, close to 512 MB. At midnight most users sign off, leaving about 70 idlers, but the memory stays allocated until the morning after (so lately I just kick it to have a fresh morning).

Like I mentioned, we have about 300 users and don't have any message logging to DB or files, though the content checker is looking for “http://”. We also use every plugin currently available (maybe not use, but they are all installed, sans non-free stuff). We use AIM, MSN, Yahoo and ICQ… 90% of users have MSN, 10% ICQ/Yahoo, and 1 AIM user…

Edit:

A few more details: we're running JRE (HotSpot) v 1.5.0; I'll be upgrading it to 1.6.x tonight.

Every MSN user at work has an average of 50 contacts in their lists; they are all adolescents just out of post-sec schooling. I'm sure this can cause additional load per user.

And finally, we're running v 1.1.0 Beta 3 of the IM Gateway.

Josh

What version of the im gateway plugin are you using?

There are actually some known memory leaks in Openfire itself that I noticed Gato fixed recently in trunk, so they'll be fixed in 3.4.0. (I think there was a 3.3.2 release that included fixes for some big memory leak issues.)
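In the meantime, if the box has RAM to spare, you can at least buy more time between restarts by raising the heap ceiling - the standard JVM flags are along the lines of

-Xms128m -Xmx512m

passed to the Java process that runs Openfire (exactly where you set those depends on how your install launches it - init script, wrapper config, etc.). That's a workaround, not a fix, obviously.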