I noticed this with the recent 3.10 builds (the RC and maybe some earlier build, but not with the older builds). Maybe nothing critical, but as I use Openfire with the launcher in my testing environment, it now takes much, much longer for Openfire to stop or quit when you press Stop/Quit in the launcher. Once I also experienced an issue where, after I had stopped Openfire this way and pressed Start again, it didn't load normally and clients were unable to log in or were throwing SSL errors, so I had to kill it and start it again. I haven't tried with the service yet.
I’m experiencing the same thing with the service on 3.10. Stopping the service appears to hang, but clients are disconnected. The service doesn’t stop and has to be killed in Task Manager.
Could you provide a Java thread dump of Openfire during the shutdown? That should tell us what it is waiting on.
What’s the easiest way to do this on Windows? I wonder whether Linux users experience this too, or if this is an issue with the Windows executable. I have just restarted my 3.9.3 service and it took 5 seconds to stop and 2 to start.
To create a Java stack trace on Windows, press CTRL-BREAK (which probably requires an active console window of some sort). You can also use the jstack executable that ships with the JVM.
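For reference, the jstack route looks roughly like this from a command prompt (the output file name is just an example):

jps
jstack -l <pid> > openfire-threads.txt

jps lists the PIDs of the running Java processes; pass the one you’re interested in to jstack and it will print the full thread dump, which you can redirect to a file as shown.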
Plugins can greatly affect the shutdown routine, as they’re shut down in sequence. If I were a betting man…
Well, I have the same plugins I had with the older 3.10 builds.
OK, on Windows, in a cmd window, first run jps to get the PIDs of the Java processes. Then run jstack -l pid (that’s a lowercase L). I have generated a bunch of stacks. On Windows there are two Java processes: openfire (the launcher) and openfired. Maybe it’s the openfire one causing problems. I was able to get two stacks while stopping the launcher. Also, it stops quickly if no Spark client is connected to the server. If I log in with Spark, it takes longer to stop. Not sure whether this is caused by Spark itself, the user, or their roster.
jstack.zip (8756 Bytes)
I can’t find an immediate culprit here. The openfire process is waiting up to 10 seconds for the openfired process to end. The stacktrace that we’re after is probably in the other process.
// attempt to perform a graceful shutdown by sending
// an "exit" command to the process (via stdin)
Writer out = new OutputStreamWriter(
        new BufferedOutputStream(openfired.getOutputStream()));
out.write("exit\n");
out.close();

final Thread waiting = Thread.currentThread();
Thread waiter = new Thread() {
    public void run() {
        try {
            // wait for the openfire server to stop
            openfired.waitFor();
            // openfired has exited: wake the sleeping launcher thread early
            waiting.interrupt();
        }
        catch (InterruptedException ie) { /* ignore */ }
    }
};
waiter.start();

try {
    // wait for a maximum of ten seconds
    Thread.sleep(10000);
    // timeout reached without openfired exiting: stop the waiter and kill the process
    waiter.interrupt();
    openfired.destroy();
}
catch (InterruptedException ie) {
    // interrupted by the waiter: openfired exited before the timeout
}
cardLayout.show(cardPanel, "main");
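As an aside, on Java 8 or later the same bounded wait could be written without the helper thread by using Process.waitFor(timeout, unit). A minimal sketch, assuming the same openfired Process handle (not the actual launcher code):

// minimal sketch, assuming Java 8+; needs java.io.* and java.util.concurrent.TimeUnit
try {
    // ask the server to exit gracefully via stdin
    Writer out = new OutputStreamWriter(
            new BufferedOutputStream(openfired.getOutputStream()));
    out.write("exit\n");
    out.close();

    // block for at most ten seconds while openfired shuts down
    if (!openfired.waitFor(10, TimeUnit.SECONDS)) {
        // still running after ten seconds: kill it
        openfired.destroy();
    }
}
catch (IOException | InterruptedException e) { /* ignore */ }

Either way, the launcher only accounts for the ten-second cap; the actual delay is inside openfired.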
Not the user. I logged in with another client using the same user and everything is fine. So it is Spark. Maybe the server is waiting for some packet from Spark. I made 3 stacks of the openfired process while it was stopping.
openfired stopping.zip (15288 Bytes)
It seems that downgrading MINA solved this issue too. With 3.10.2 the Openfire launcher exits as fast as it did with 3.9.3.