Spark doesn’t close when the computer is shut down

We use Spark versions 2.5.8 and 2.6.0, and the (roughly 35) computers run XP, Vista and 7. Java is now 6u26, but the problems already started with 6u25. The server is CentOS 5.4 with Openfire 3.6.4.

When some users select Shutdown on their XP computers, the shutdown process hangs and displays the “End now” dialog.
On Vista and 7 the same thing happens, but instead it says “Waiting for Spark” and “This program is preventing Windows…”.

So this happens on all operating systems, and with different Java and Spark versions. I cannot see anything else the problem computers have in common, and the same goes for the users. It seems completely random, although once the problem appears it stays. Only one user says the problem went away by itself, and I don’t really trust that statement (I guess he just used different computers).

I have cleared the Java cache and tried to clear up some other temp stuff, but it made no difference. I have also increased the time Windows waits for software to close by setting the HungAppTimeout registry value to 12 seconds.
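For anyone who wants to try the same tweak: HungAppTimeout is a per-user string value under HKEY_CURRENT_USER\Control Panel\Desktop, measured in milliseconds (the default is 5000). A .reg file along these lines should set it to 12 seconds; take this as a sketch and double-check it against your own setup before deploying it:

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Control Panel\Desktop]
"HungAppTimeout"="12000"

The change takes effect for that user at the next logon.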

Of these thirty-five computers I’d say that half have the problem and half don’t.
Does anyone recognize this? Does anyone have a suggestion as to what I might try next?

Haven’t seen such behavior. Maybe some other software is causing this? Like an antivirus solution?

I’ve looked into that, but if the user first closes Spark and then selects Windows Shutdown, it works fine. Spark also closes just fine if the user right-clicks the notification icon and selects Exit; the Spark process in Task Manager disappears a second after the icon or window does. I’ve only tested this on a few of the computers though, so I don’t know if it’s that fast on all of them.

Found another thing yesterday. On a couple of the problem computers the logs\errors.log file was enormous: the biggest one I’ve found so far is 690 MB, and most of the larger ones were around fifty megabytes. Sadly this was not the case on all clients, so it’s not the answer to our problem; most of the files are around five or ten KB, i.e. they have one or two error entries. Early this morning I set up a Group Policy preference that replaced all the errors.log files.

Hopefully this can fix the problem on a couple of the computers.

The entries in errors.log were all from a while ago, not from the last couple of days; the latest errors on the clients are from around the end of May. So no help there, I’m afraid.

Does anyone know of other files that might grow like this?

Or perhaps there are some cache or temp files that can be cleared on the clients?

Temp files are stored in Local Settings\Temp on Windows: a hsperfdata_username folder with a random file inside, and a jna_numbersequence.dll file. Those files disappear after you close Spark.

You can try starting with a fresh profile on the problematic machines, i.e. removing the user\appdata\spark folder and then starting Spark and entering the login information again. History will be lost, but you can save the folder somewhere while you are testing and copy it into the new profile later.
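On Vista/7 the profile normally lives at %APPDATA%\Spark (which also resolves correctly under XP’s Documents and Settings layout). A rough batch sketch of the backup-and-reset idea, assuming Spark is not running and the default profile location; the backup path is just an example:

@echo off
rem Back up the current Spark profile, then remove it so Spark starts fresh.
set PROFILE=%APPDATA%\Spark
set BACKUP=%USERPROFILE%\Desktop\Spark-profile-backup

if exist "%PROFILE%" (
    xcopy "%PROFILE%" "%BACKUP%" /E /I /H /Y
    rmdir /S /Q "%PROFILE%"
    echo Profile backed up to "%BACKUP%" and removed.
) else (
    echo No Spark profile found at "%PROFILE%".
)

To restore the history later, copy the spark\user\<account>\transcripts folder from the backup into the new profile.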

I’m still testing things, like starting with a fresh, empty profile. The first shutdown after clearing the profile works fine: Spark closes and I get no “This program is preventing Windows…” and no “End now”. But on the second shutdown the errors are back.

Is there something to be done on the server? Some temp files to clear? I’m not good with CentOS or Openfire, I’m afraid. I should mention that the whole server restarts automatically every night.

Maybe it’s time to upgrade to Openfire 3.7.0? Has anyone seen a good guide for doing this? Something a newbie like me can easily follow?

I think the server has nothing to do with this.

How to upgrade Openfire depends on how it was installed: with an installer, or just unpacked. In the first case you just run the newest installer and it should upgrade the existing installation. In the second case you unpack the new version over the old installation. In both cases you should stop Openfire and make a backup before doing this.
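For the unpacked (tar.gz) case on a CentOS box, the procedure roughly looks like the following. This is only a sketch: the /opt/openfire location, the init script name and the archive filename are assumptions, so adjust them to match your installation:

# Stop the server first (the init script name may differ on your system)
/etc/init.d/openfire stop

# Back up the whole installation, including the embedded DB and config
tar czf /root/openfire-backup.tar.gz -C /opt openfire

# Unpack the new release over the old installation
tar xzf openfire_3_7_0.tar.gz -C /opt

# Start it again and check the admin console afterwards
/etc/init.d/openfire start

If anything goes wrong, restoring is just extracting the backup archive back into place.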

This entry is what’s filling up errors.log: exactly the same entry over and over again. Is this a real error? The users say nothing’s wrong (except for the shutdown error, of course).

If it’s not a real error, how do I stop it from filling up the errors.log file?

2011-jun-21 16:24:49 org.jivesoftware.spark.util.log.Log error
ALLVARLIG: (same as SEVERE I guess, but in Swedish here)
com.jniwrapper.win32.LastErrorException: The operation completed successfully.
at com.jniwrapper.win32.gdi.WindowDC.&lt;init&gt;(
at com.jniwrapper.win32.gdi.DDBitmap.&lt;init&gt;(
at com.jniwrapper.win32.gdi.Icon.load(
at com.jniwrapper.win32.gdi.Icon.&lt;init&gt;(
at com.jivesoftware.plugin.jniwrapper.WindowsSystemTray.setTrayIcon(WindowsSystemT
at com.jivesoftware.plugin.jniwrapper.WindowsSystemTray.changePresence(WindowsSyst
at com.jivesoftware.plugin.jniwrapper.WindowsSystemTray$3.presenceChanged(WindowsS
at org.jivesoftware.spark.SessionManager.changePresence(
at com.jivesoftware.plugin.jniwrapper.SystemIdleListener$ java:91)
at java.util.TimerThread.mainLoop(Unknown Source)
at Source)

Well, I have an (almost) clean computer and installed 2.6.0_online. I never connected to the server or entered anything into any fields in Spark, and I still got the same error when I selected Log off for the user (same with Shutdown, of course). The user has domain admin rights and the computer is Windows 7 64-bit. And as I said, the computer is clean; it hadn’t been running for more than a day.

I don’t like this! What the h__l is going on?!

All right, I’m better now, thanks for listening guys.

I upgraded to 2.6.0 a few weeks back and did a total wipe of the previous install as well as all the profile information; we’re running Openfire 3.7.0. I have started to notice this problem as well. Just today I logged a user off his system and was presented with the hung-app window asking me if I wanted to end now. These are all Win XP SP3 machines. I was going to move to 2.6.2 today, but ran into a problem with the new username/password encryption configuration. Oh well, I will wait and see what happens with 2.6.0 and this issue, unless I get 2.6.2 worked out.


Sorry, but it’s the same thing with 2.6.2 for us here. Tested on a couple of Win 7 computers and, like clockwork, the problem started on the second shutdown/logoff.

I did some investigating of the network traffic, but the computer doesn’t seem to talk to anything on the Internet or to the Openfire server (as previously confirmed). It’s definitely something local on the client.

The sad thing is that I have no tools to run on the computer to see what’s happening. I’m thinking of things like Sysinternals Process Explorer, but those tools all close immediately when shutdown is selected; at least I haven’t gotten them to register anything before they were closed.

Has anyone got suggestions for tools I should try? Maybe something that runs remotely and can still register processes, registry changes and file access?

Maybe you can use Process Monitor. It can be set to log from boot, so maybe it will capture something during shutdown too.

I fired up Process Monitor (a beautiful tool; I don’t know why I didn’t try it from the start) and found a very strange thing. Spark constantly (three or four times per second) does the following:

Process: Spark.exe
Operation: CreateFile
Path: C:\Users\BAP\AppData\Local\Temp\e4j_p5164.tmp
Detail: Desired Access: Read Attributes, Disposition: Open, Options: Open For Backup, Open Reparse Point, Attributes: n/a, ShareMode: Read, Write, Delete, AllocationSize: n/a

This happens during shutdown/logoff too, and it’s the only thing Spark does then. It actually goes on the whole time Spark is running: it constantly tries to read a file named e4j_p???.tmp which is not there.

I put a txt file with that name into Temp, and after Spark had done a couple of ReadFile and Query* operations on it, it got deleted. I have a PML file of this if anyone wants to have a look at it.

Can anyone explain what this is? It has to be what stops Spark from closing, right?

This seems to come from exe4j, which is the executable wrapper used for Spark.

I see these same errors with the exe4j temp file. I tried several different versions of the JRE to see if that made a difference; it did not.

Setup details: Win 7 64-bit, Spark 2.5.8, tried JRE 6u4, 6u10, 6u24, 6u26.

The temp folder C:\Users\username\AppData\Local\Temp\ is definitely writable; every other process seems to be using it without problems.

Well it’s been a month and as far as I can find there’s still no solution posted in the forum. Or have I missed it?

I’ve since upgraded to 2.6.3, same thing.

Anyway, here are four log entries from Process Monitor, all within the same second. This actually shows up about three or four times per second; not a lot, but still enough to want to get rid of it. Plus that pesky little detail: our computers don’t shut down overnight!

10:27:42,0847170 Spark.exe CreateFile C:\Users\xyz\AppData\Local\Temp\e4j_p3340.tmp NAME NOT FOUND Desired Access: Read Attributes

10:27:42,3847358 Spark.exe CreateFile C:\Users\xyz\AppData\Local\Temp\e4j_p3340.tmp NAME NOT FOUND Desired Access: Read Attributes

10:27:42,6847530 Spark.exe CreateFile C:\Users\xyz\AppData\Local\Temp\e4j_p3340.tmp NAME NOT FOUND Desired Access: Read Attributes

10:27:42,9847718 Spark.exe CreateFile C:\Users\xyz\AppData\Local\Temp\e4j_p3340.tmp NAME NOT FOUND Desired Access: Read Attributes

If this is related to the exe4j wrapper, then we probably can’t do much. It also isn’t reproducible on all systems (not on mine, at least), so it is hard to pinpoint the actual cause. I wonder if using a different installer would change anything, but we don’t have IzPack-based installers and I don’t really know how to build one. You can try compiling Spark from source, running that version on an affected PC and seeing whether it exits OK. This won’t solve your problem, but it may help to narrow down the issue.

We’re having the same issue, and for what it’s worth I’ve found a semi-workaround.

What I tried, after reading this thread, was to start Spark not with the exe file but via startup.jar. This solves the problem on my W7-64 machine, but unfortunately not on our XP clients, where it is more of an issue.

I suspect the lockup behaviour is related to Java (we’re on u26), but unfortunately I can’t test different Java versions right now.

If anyone wants to try bypassing the .exe, put the following in a .bat file, adjusting the paths as needed.

@echo off
rem Launch Spark via javaw instead of the exe4j-wrapped Spark.exe.
cd /d "C:\Program Files (x86)\Spark\lib"
rem Note: the first quoted argument to "start" is the window title,
rem so an empty "" is needed before the program path.
start "" "C:\Program Files (x86)\Java\jre6\bin\javaw.exe" -jar startup.jar


We ended up with a hands-on resolution.

This thread helped me pin down that the problem had something to do with the exe4j executable wrapper (I’ve no idea what that means exactly), so we reinstalled all the troubled clients. This time, instead of using spark_2_6_3_online.exe, we used spark_2_6_3.exe. No user reports having the problem anymore, which is enough for me for now.