Possible resource leak?

Personally I haven’t had a recurrence of this issue since my last post in this thread… so perhaps it was a “fluke”. Vinh, I’m wondering if your codebase hasn’t pulled the Smack 3.3.1 lib into its build/lib/dist/ directory yet?

I’ve also noticed in Task Manager that I don’t have a Spark.exe process, only a javaw.exe that is running at around 150 MB now. In wroot’s version, there was only the Spark.exe process. Are you seeing the same thing with your custom code, Jason?

Something to add is that I’m using IzPack to create the installer, but I doubt that could be causing the issues. I tried patching it, but maybe I should start from scratch, do a new SVN checkout, and then apply my code changes on top of that. Any other suggestions, or maybe your steps, would help. Thanks!

Hmm, ya, I’ve seen the Spark.exe process disappear and become a javaw.exe process a few times. I’m not 100% sure what causes that, but I think it has to do with restarting/logging out of Spark: the program basically re-launches itself before the original instance dies off, but the original was launched by install4j (or whatever program built the installer), so the re-launch may not use the same mechanism to set its own Windows process name. Just my guess…

My company has been experimenting with Fastpath integration, so I may have restarted my Spark client at some point, but I don’t think I’ve had a relapse of the memory leak issue since I rebuilt from source.

@Vinh – to ease the workload a little of having to get fresh source from the SVN repo, you can try creating a patch file from your custom version, then applying that patch to the freshly downloaded source.

If you have TortoiseSVN, it’s as easy as right-clicking the src directory inside your Spark SVN working copy and clicking Create Patch. Checkmark the files you want included in the patch, then save it. Right-click the patch file, click Apply, then browse to the src directory of the fresh SVN checkout and apply it.

This is what I ended up doing, so I now have a “company patch” that I can apply to fresh source whenever needed, since we don’t use SVN over here.

Btw, even using this new build with the updated Smack, we still have occasional lock-ups on old machines with low resources, and that’s without even running for a whole day.

Hmm, that’s odd. I only have a couple of machines using the latest SVN trunk; all others are using source from the trunk pulled a few months back, with the Smack 3.2.x jar replacing the 3.3.x from that time… and those machines have reported no performance issues (most are pretty low resource, around 2 GB at most). When we experienced the problem, it was on systems with plenty of resources too, such as mine with 8 GB of RAM…

Can anyone profile the process to see where the memory is going?

I have been monitoring Spark for a few weeks now and I have identified three places with memory leak issues:

  1. GroupChatParticipantList, which implements ChatRoomListener.

The problem here is that if you have 5 group chat rooms open, you will have 5 listener instances, and when someone joins one of the rooms, all 5 listeners will be notified. The fix is to keep only one ChatRoomListener instance; every ChatRoomListener method takes the group chat instance as a parameter, so you know which room the notification is for (see the sketch after this list).

  2. JPanelRenderer - used to render ContactList items. For every offline/online presence change, a contact item gets added/removed as it moves between online and offline, and an instance is kept in the renderer and accumulates over time. For example, I ran Spark for a week and ended up with 140 MB of memory held by ~2000 instances of ContactItem. The fix is for the renderer to always return one reused instance each time it paints, configured according to the ContactItem status (online, offline, etc.).

  3. In ContactList.java, the changeOfflineToOnline method creates a new Timer every time it runs; within 24 hours this causes a thread lock, which shows up in the thread dump file. The fix is to replace the Timer with a Swing task via SwingUtilities.invokeLater(new Runnable() { … }).
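For the first item, here is a minimal sketch of the single-listener idea, assuming Spark’s ChatManager / ChatRoomListenerAdapter API (class and method names are written from memory and may differ slightly between Spark versions). It is only meant to illustrate the pattern, not the exact committed fix:

import org.jivesoftware.spark.SparkManager;
import org.jivesoftware.spark.ui.ChatRoom;
import org.jivesoftware.spark.ui.ChatRoomListenerAdapter;

public class SharedGroupChatListener extends ChatRoomListenerAdapter {

    public static void install() {
        // Register exactly one instance for the whole client, instead of one
        // listener per GroupChatParticipantList, so 5 open rooms no longer
        // mean 5 listeners all firing for every join.
        SparkManager.getChatManager().addChatRoomListener(new SharedGroupChatListener());
    }

    @Override
    public void chatRoomOpened(ChatRoom room) {
        // The room that triggered the event is passed in, so the single
        // listener always knows which room to update.
        System.out.println("Room opened: " + room.getRoomname());
    }

    @Override
    public void chatRoomClosed(ChatRoom room) {
        // Drop any per-room state (participant list models, etc.) here so
        // closed rooms cannot be retained.
        System.out.println("Room closed: " + room.getRoomname());
    }
}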

I have tested all these fixes and Spark looks better now. I will create a ticket and commit the code.

NOTE - These issues are not related to the latest Spark trunk code; they have been in Spark for a long time, probably in 2.6.3 too.

Great. Post the ticket link here when it’s filed. A side note: I have assigned one ticket with a broadcast patch. Could you commit that patch? I’ve tested it.

I will commit it.

here is the ticket regarding resource leaks: http://issues.igniterealtime.org/browse/SPARK-1558

I committed the code under ticket SPARK-1558.

Please rerun your tests and let us know if it improves the memory footprint at your end. Thanks!

I’ll take a profile of my Spark running today under normal use, and tomorrow I’ll build a new in-house version using this patch and then re-profile for a few days. I’ll post back my findings.

How is your profiling going, Jason? I’ve been using the latest build for 3 days. No side effects (other than the disappeared offline bubble icons, which has already been fixed in a more recent build). I’m willing to close this ticket.

I have a 3-day profile under “normal” use of my old build, taken and snapshotted to a file (so it can be imported into another JProfiler). I have not been able to push out a new build internally yet, so I still need to profile the latest changes. One of our internal business apps started acting up this week and duplicated a bunch of orders (!!!), so that has absorbed a bit of my time.
running_spark.jps.zip (1515796 Bytes)

There is another leak I found.

If you keep Spark running with multiple group chat rooms open, then when it reconnects, new group chat room instances are created without the existing ones really being closed.

So if you keep 5 group chats open and your network goes down 10 times, you will end up with 50 group chat instances.

The culprit is here: /src/java/org/jivesoftware/sparkimpl/plugin/transcripts/ChatTranscriptPlugin.java

public void persistChatRoom(final ChatRoom room) {
    …
    lastMessage.put(room, message);
    …
}

The rooms are kept in: private HashMap<ChatRoom, Message> lastMessage = new HashMap<ChatRoom, Message>();

There is no proper .equals() in the group chat room class, so every time a room is closed, that object instance is kept in the map.

I haven’t fully figured out why that map exists, but it looks related to client-side history messages. I will reopen the SPARK-1558 ticket to fix this.
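A rough sketch of one way to stop that map from growing: key the cache by the room’s JID string instead of the ChatRoom instance, and evict the entry when the room closes. The field and method names mirror the snippet above; getRoomname() is assumed from Spark’s ChatRoom API and may differ by version, so treat this as an illustration rather than the actual fix:

import java.util.HashMap;
import java.util.Map;

import org.jivesoftware.smack.packet.Message;
import org.jivesoftware.spark.ui.ChatRoom;

public class LastMessageCache {

    // Keyed by the room JID instead of the ChatRoom object, so a re-created
    // room after a reconnect overwrites the old entry instead of adding one.
    private final Map<String, Message> lastMessage = new HashMap<String, Message>();

    public void persistChatRoom(ChatRoom room, Message message) {
        lastMessage.put(room.getRoomname(), message);
    }

    public void chatRoomClosed(ChatRoom room) {
        // Evict on close, so 10 reconnects with 5 open rooms no longer leave
        // 50 stale ChatRoom instances reachable from this map.
        lastMessage.remove(room.getRoomname());
    }
}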

One of the leaks - the one with ContactItem instances accumulating - is also due to a significant leak related to CellRendererPane in JDK 1.6. Check this: http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6542440

This is fixed in JDK 7, so I recommend that everyone use JDK 7. I don’t know if Spark is officially built in Bamboo with JDK 7, but I recommend using only JDK 7 to build. I did that locally with JDK 7 and there is no more CellRendererPane leak with the most recent Spark code.

I have noticed some issues recently. I hadn’t seen this with the 618 build, I think, though I have seen it happening to other users on 618. Now I’m on the 622 build, and today on 623, and I have seen this: Spark hangs and shows black rectangles instead of windows. Sometimes it recovers, but usually it becomes unresponsive and you have to kill it. I definitely haven’t seen such issues on the 610 build.

I have not been able to reproduce this behavior so far. I will keep looking. In the meantime, please send me some logs when it happens again. Thanks.

Spark is being built with Java 7 on the CI now, and the next release is planned to use Java 7 by default. There were several other Java 6 problems that affected Spark, such as AWT bugs (window focus stealing, tray icon disappearing, etc.). I’m sure there are more reasons for the switch too.

Our only use of group rooms over here is via the Fastpath plugin, and our CS reps are trained to close them after a conversation ends. So I will go ahead and build a new internal Spark today and get it deployed, so I can get a 3-day profile to compare with my previous one.

I think I introduced this by adding a sleep inside an invokeLater task. I read a bit about Swing UI concurrency, and the recommendation is to use javax.swing.Timer when a delayed task needs to be executed. It has the advantage of not requiring an invokeLater call: it can execute a task after a given delay, it runs on the Swing UI thread, and it does not create a new thread (unlike java.util.Timer), so task accumulation is avoided gracefully.
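As a small illustration of that pattern (not Spark code, just a generic helper with hypothetical names), a one-shot javax.swing.Timer fires its listener on the Swing thread after the delay, so there is no sleep on the EDT and no extra java.util.Timer thread per call:

import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.Timer;

public class DelayedUiTask {

    // Run the given task on the Swing UI thread after delayMillis, without
    // blocking the EDT and without creating a new background thread.
    public static void runLater(int delayMillis, final Runnable task) {
        Timer timer = new Timer(delayMillis, new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                task.run(); // already on the EDT, no invokeLater needed
            }
        });
        timer.setRepeats(false); // fire once, then let the timer be collected
        timer.start();
    }
}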