As I play with it more, I'm seeing that with 1.1.3, it's not using the built-in JRE. In fact, short of modifying the startup script, it doesn't seem to honor the JAVA_HOME environment variable if you have java anywhere in your PATH or in a “common” location.
So, I started reading through the script to see what's going on. It seems to look for the JRE in this order, stopping as soon as it finds one:
1. INSTALL4J_JAVA_HOME_OVERRIDE environment variable
2. $app_home/.install4j/pref_jre.cfg config file
3. which java 2> /dev/null
4. JAVA_HOME environment variable
5. JDK_HOME environment variable
6. INSTALL4J_JAVA_HOME environment variable
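If I'm reading the script right, the lookup amounts to something like the following. This is a paraphrased sketch, not the actual install4j launcher code; `app_home` stands in for whatever install-directory variable the real script sets earlier:

```shell
# Paraphrased sketch of the 1.1.3 JRE lookup order -- not the actual
# install4j launcher script.
find_jre() {
  # 1. Explicit override always wins
  if [ -n "$INSTALL4J_JAVA_HOME_OVERRIDE" ]; then
    echo "$INSTALL4J_JAVA_HOME_OVERRIDE"; return
  fi
  # 2. Per-install preference file
  if [ -f "$app_home/.install4j/pref_jre.cfg" ]; then
    cat "$app_home/.install4j/pref_jre.cfg"; return
  fi
  # 3. Whatever java happens to be on the PATH -- this is the step
  #    that shadows JAVA_HOME
  path_java=$(which java 2> /dev/null)
  if [ -n "$path_java" ]; then
    echo "${path_java%/bin/java}"; return
  fi
  # 4-6. Environment variables, only consulted if nothing above matched
  for candidate in "$JAVA_HOME" "$JDK_HOME" "$INSTALL4J_JAVA_HOME"; do
    if [ -n "$candidate" ]; then
      echo "$candidate"; return
    fi
  done
}
```

Which, if right, means the only reliable ways to pick a JRE right now are pref_jre.cfg or the OVERRIDE variable; everything else loses to whatever java is on the PATH.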
In 1.1.1 it checked $app_home/jre before all of these, but 1.1.3 doesn't. I think the logic here is a bit wrong. When I specify JAVA_HOME=/blah I expect java to use that, so it really should go further up the list. Same deal with JDK_HOME. Running which java and searching common paths should be a last-ditch effort, not the first thing tried. The INSTALL4J variables seem right: the OVERRIDE one first and the regular one last.
While reading through the script, another thing occurred to me. There is a version check to make sure the JRE isn't newer than 1.5, so why did it pass with 1.6? Well, it seems the script writes the version of each JRE it finds into .install4j in your home directory the first time it sees it. The problem is, I upgraded Java after the first time this was run, and now Spark thinks it's an older version than it really is. I don't think you really save anything by caching the version check (it's done only once at startup anyway), so it might be worth skipping that caching altogether so you can handle people upgrading Java.
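Parsing the version fresh at every launch seems cheap enough. Here's a rough sketch of what I mean, assuming the first line of java -version output looks like java version "1.6.0_02" (the function name and parsing are mine, not the script's):

```shell
# Sketch: range-check the JRE version on every launch instead of
# trusting a value cached under ~/.install4j. The "not newer than 1.5"
# ceiling matches the check described above; the version-line format
# is an assumption.
jre_ok() {
  # $1 is the first line of `java -version` output,
  # e.g.:  java version "1.6.0_02"
  minor=$(echo "$1" | sed 's/.*"1\.\([0-9]*\).*/\1/')
  [ "$minor" -le 5 ]
}
```

The cost is one extra fork of java per startup, and in exchange an upgraded (or downgraded) JRE gets noticed immediately instead of silently using a stale cached version.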
So, in the end I got it working with a bit of hacking. Perhaps we can consider this thread a feature request now?