Linux Spark 1.1.3 won't log in

So, I tried 1.1.3 today, and Spark won't log in. The login window comes up, I put in all my info, then the window closes and the app quits. This is the output on the terminal:

jay@kirk:/scratch/Spark$ ./Spark

ls: /scratch/Spark/lib/windows: No such file or directory

Error: Couldn't find per display information


I tried using my own JRE (the 1.6 beta) but got the same results. Any idea why it's looking for windows? Is that maybe related? I am connecting to a Wildfire 2.5.1 server.

Hi Slushpuppie,

I know that Spark has difficulty running with the Mustang beta. Did you try running with JDK 1.5 or 1.4? Of course, we are working on getting Spark running perfectly with Mustang, probably in the 1.2 timeframe.

Cheers,

Derek

The Linux client uses the JRE packaged with it by default, which appears to be 1.5.0_06-b05 with Spark 1.1.1. 1.1.3 didn't include a JRE, but since I just upgraded, the old one was left in place. Shouldn't it be using that? My system JRE is 1.6.0-beta-b59g, which I'm not surprised Spark (either 1.1.1 or 1.1.3) won't work with.

As I play with it more, I'm seeing that 1.1.3 is not using the built-in JRE. In fact, short of modifying the startup script, it doesn't seem to honor the JAVA_HOME environment variable if you have java anywhere in your path or in a “common” location.

So, I started reading through the script to see what was going on. It looks for the JRE in this order, stopping as soon as it finds one:

1. INSTALL4J_JAVA_HOME_OVERRIDE environment variable

2. $app_home/.install4j/pref_jre.cfg config file

3. which java 2> /dev/null

4. search $common_jvm_locations

5. JAVA_HOME environment variable

6. JDK_HOME environment variable

7. INSTALL4J_JAVA_HOME environment variable

In 1.1.1, before all of these it checks $app_home/jre, but 1.1.3 doesn't. I think the logic here is a bit wrong. When I specify JAVA_HOME=/blah I expect Spark to use that, so it really should go further up the list. Same deal with JDK_HOME. The which java and common-location searches should be a last-ditch effort, not the first thing tried. The INSTALL4J stuff seems right: the OVERRIDE first and the regular one last.
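
To make that concrete, here's a rough sketch of the order I'd expect the lookup to follow. This is not the actual install4j-generated script, just pseudo-shell reusing the same variable names; the find_jre helper and its echo-based return convention are only for illustration:

# Sketch of a JRE lookup order that honours explicit settings first.
# Not the real install4j script; find_jre is hypothetical.
find_jre() {
    # Explicit overrides win outright.
    test -n "$INSTALL4J_JAVA_HOME_OVERRIDE" && { echo "$INSTALL4J_JAVA_HOME_OVERRIDE"; return; }
    test -f "$app_home/.install4j/pref_jre.cfg" && { cat "$app_home/.install4j/pref_jre.cfg"; return; }

    # The JRE shipped with Spark -- the 1.1.1 behaviour that 1.1.3 dropped.
    test -d "$app_home/jre" && { echo "$app_home/jre"; return; }

    # What the user explicitly asked for.
    test -n "$JAVA_HOME" && { echo "$JAVA_HOME"; return; }
    test -n "$JDK_HOME" && { echo "$JDK_HOME"; return; }

    # Only now start guessing.
    java_bin=$(which java 2> /dev/null)
    test -n "$java_bin" && { dirname "$(dirname "$java_bin")"; return; }

    for dir in $common_jvm_locations; do
        test -d "$dir" && { echo "$dir"; return; }
    done

    test -n "$INSTALL4J_JAVA_HOME" && { echo "$INSTALL4J_JAVA_HOME"; return; }
}

app_java_home=$(find_jre)
test -z "$app_java_home" && { echo "No suitable JVM found" >&2; exit 1; }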

While reading through the script, something else occurred to me. There is a version check to make sure the JRE isn't newer than 1.5, so why did it pass with 1.6? Well, it turns out the script writes the version of each JRE it finds to .install4j in your home directory the first time it sees it. The problem is, I upgraded Java after the first time this ran, and now Spark thinks the JRE is an older version than it really is. I don't think you really save anything by not checking the version every time (it's only done once at startup anyway), so it might be worth skipping that caching altogether so you can handle people upgrading Java.
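
For what it's worth, checking the version on every launch is cheap. Something along these lines would avoid the stale cache entirely; it's just a sketch, assuming the accepted range is 1.4/1.5 (as the current check seems to enforce) and using the $app_java_home picked above:

# Check the JRE version on every launch instead of trusting a value
# cached in $HOME/.install4j. Sketch only; assumes 1.4-1.5 is the range.
version=$("$app_java_home/bin/java" -version 2>&1 | head -n 1 | sed 's/.*"\(.*\)".*/\1/')

case "$version" in
    1.4.* | 1.5.*)
        : # acceptable, carry on
        ;;
    *)
        echo "Java $version found at $app_java_home, but Spark needs 1.4 or 1.5" >&2
        exit 1
        ;;
esac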

So, in the end I got it working with a bit of hacking. Perhaps we can consider this thread a feature request now?
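
For anyone else who hits this before a fix ships, the lookup order above suggests a couple of less invasive workarounds. Paths here are from my setup (I'm assuming $app_home is /scratch/Spark), and the idea that pref_jre.cfg just holds a path to the preferred JRE is my reading of the script, so double-check against your copy:

# Point Spark back at the JRE it shipped with (left over from 1.1.1 in my case).
echo "/scratch/Spark/jre" > /scratch/Spark/.install4j/pref_jre.cfg

# Or force it for a single launch via the override variable:
INSTALL4J_JAVA_HOME_OVERRIDE=/scratch/Spark/jre ./Spark

# And if a Java upgrade left a stale version cached under $HOME/.install4j,
# clearing that cache makes the script re-detect the JRE version.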