Very long startup time

Hi,

it seems that Spark 1.1 needs much longer than 1.0.4 to display the “Login Screen”. 1.0.4 did not add an icon to the system tray before login, while 1.1.0 does this even before displaying the “Login Screen”.

After login it also takes ages to load everything: the buddies are all displayed as offline at first, and only after some seconds are the users who are online displayed as online.

The menu is incomplete at first and takes even longer to become fully available (especially Spark > Plugins, which takes very long).

It seems that Spark’s internal workflows were completely redesigned without testing on normal PCs. I can’t afford a 3 GHz dual-core CPU.

Am I the only one to complain?

LG

LG,

Hmm, strange. Pretty much every Spark operation for me is a lot faster under 1.1 than it was under 1.0. What are the specs on the machine you’re using, so that we can try to duplicate what you’re seeing?

Regards,

Matt

Hi Matt,

it’s an AMD 900 MHz with 256 MB RAM, so one could say it’s rather old. For accessing the internet it’s fine (except for some Flash web pages).

The Task Manager has a column named “CPU Time”. The time to show the “Login Screen” with Spark 1.0.4 vs. 1.1.0 is 3 vs. 4 CPU seconds (auto-login disabled). After login it takes both versions about 7 CPU seconds to load. These values were measured while the PC was idle and had the files cached; they are usually slightly higher, and that is where Spark 1.1 makes the bad difference.

These values are very close, so with a faster PC you may not be able to see a difference.

So I reactivated my Spark.cmd script - it takes 2 CPU seconds to show the Login Screen plus 6 CPU seconds to load Spark 1.1. It uses these very strict parameters “-Xms16m -Xmx16m -XX:NewSize=4m -XX:MaxNewSize=4m -XX:PermSize=10m -XX:MaxPermSize=10m” - maybe you can add Xms, NewSize and PermSize to Spark.exe? Or does it support a vmoptions file like Wildfire, so one can set them there?
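
For reference, here is a minimal sketch of such a launcher script - the startup.jar name and the lib\ layout are assumptions of mine, only the JVM flags are the ones quoted above:

code
@echo off
rem Minimal Spark.cmd sketch. The startup.jar name and the relative
rem lib\ layout are assumptions - adjust them to your installation.
rem The JVM flags are exactly the strict values quoted above.
set SPARK_LIB=%~dp0lib
javaw -Xms16m -Xmx16m ^
  -XX:NewSize=4m -XX:MaxNewSize=4m ^
  -XX:PermSize=10m -XX:MaxPermSize=10m ^
  -jar "%SPARK_LIB%\startup.jar"
/code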

But even with this script the Contact List gets rendered three times (within a second, though):

1st time: Offline Group/No contacts available

2nd time: Offline Group/all contacts

3rd time: Online Groups/contacts and Offline Group/offline contacts

LG

Hi Matt,

very strict parameters “-Xms16m -Xmx16m -XX:NewSize=4m -XX:MaxNewSize=4m -XX:PermSize=10m -XX:MaxPermSize=10m” - maybe you can add Xms, NewSize and PermSize to Spark.exe?

Spark 1.1.1 has the same slow startup as 1.1.0. Is it possible to add these parameters to the Spark.exe Java starter?

If you could find the time to take a look at jvmstat / visualGC and run it alongside Spark, you would see that these minimum values are not too high.
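
For example, with the JDK’s bin directory on the PATH and the jvmstat tools installed, something like this shows the heap generations live (1234 stands for whatever pid jps prints for Spark):

code
rem List the running JVMs to find Spark's process id.
jps -l
rem Sample the GC/heap utilization of that pid every second
rem (replace 1234 with the pid printed by jps).
jstat -gcutil 1234 1000
rem Or watch the generations graphically with jvmstat's visualgc:
visualgc 1234
/code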

LG

Hi Matt,

Is it possible to add these parameters to the Spark.exe Java starter?

LG

I’ll pass the values on to Derek. However, it typically seems best to let Java make its own default decisions about heap allocation; then there can be special cases for individual machines. I’m fairly sure that Spark does support a vmoptions file like Wildfire does. Have you tried that out? Also, it won’t be long before Java 6 is out, which has many more optimizations for client-side performance, if I remember correctly.

Regards,

Matt

Hi Matt,

this reply will be somewhat off-topic:

You are a Java developer, aren’t you? Do you really believe that the JVM knows better than the developer how much initial and maximum memory it needs?

As you mentioned the vmoptions file, I see that you are not ignoring the many parameters Sun supplies for the JVM.

Is it really so hard to use a profiler or jvmstat and take a look at the memory usage after startup? Maybe you can use a Spark installation with some rosters and contacts to make sure that you match production instead of laboratory environments.
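
For example, a single snapshot like this (pid again from jps) shows how much of each generation is actually committed compared to the configured maximum:

code
rem Print the capacities of the New, Old and Perm generations for
rem Spark's JVM (replace 1234 with the real pid from jps).
jstat -gccapacity 1234
/code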

Apparently it is too hard: I have dealt with developers who claimed that their application needed 8 GB of memory, and now the application uses 500 MB and runs fine.

LG

PS: I’ll try the bin/spark.exe.vmoptions file.

Do you really believe that the JVM knows better than the developer how much initial memory and the maximum memory it needs?

With each release of Java, the JVM gets better and better at just “doing the right thing”. Still, there are definitely cases where it makes sense to set the memory explicitly.

PS: I’ll try the bin/spark.exe.vmoptions file.

Let us know how it goes. Real-world feedback on memory settings that work will always be valuable.

Regards,

Matt

Hi Matt,

it seems that the vmoptions file must be put in the same directory as Spark.exe. I used the values mentioned above, and I’m well aware that I’m not far away from OutOfMemory errors if I install any plugin or open chat windows with too many emoticons.

If one wants to use this file, be aware that it writes another log file (gc.log), and if you don’t adjust the values you may hit OutOfMemory errors; they will be written to that log file.

spark.exe.vmoptions:

code
-Xms16m
-Xmx16m
-XX:NewSize=4m
-XX:MaxNewSize=4m
-XX:PermSize=10m
-XX:MaxPermSize=10m
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-Xloggc:logs/gc.log
/code

LG

edited: just a ping for Matt to look at this. And one note: 10 MB PermSize is too little for Spark 1.1.3.