Guide for Translators

Hi,

I can't find any summarized information about how to make translations for Spark, although there are a lot of questions about it in the forum. We need a wiki topic for that. What I have found is that the translation should be converted to ASCII format. So, is this up to the translators to do, or can they just send the translated properties file to Derek and he will convert and include it?

Hi Oleg,

I did wonder about a Wiki approach some weeks ago, but as Jive Software is busy it seems that no one is setting up something like this.

http://www.jivesoftware.org/fisheye/viewrep/svn-org/spark/trunk/src/resources/i18n should be the right address to fetch the English i18n file, so one can modify it and send it to Derek (or post it in the Wiki where no one will see it).

Issue SPARK-375 will allow one to select the language instead of using the information the operating system returns.

LG

It's actually easy to copy lib\spark.jar to a new directory, extract it there, modify i18n\spark_i18n.properties and then select all files and create a new smack.jar file.

What's really bad is that the file has no order. So one must guess where each property is used and whether “Password” on the splash screen uses the same key as “Password” on the “Create New Account” page or a different one. If one wants to add “&” in front of some characters, one should make sure that every dialog uses usable “&” shortcuts.

The German i18n file is really bad: “&Proxy, &Protkoll, &Port, &Passwort” in one dialogue (Settings: Proxy), but the English one (splash screen) is not better with “&Server, &Save Password” and “&Auto Login, &Accounts”. The Spanish one uses “description = Descripci\u00f3n” - \u00f3 is an encoded Unicode escape inside a UTF-8 encoded file …

LG

Added: SPARK-284 describes these problems with duplicate “&” entries, and also that “Con&tacts” does not work. This is a problem for every lower-case &-character: “User&name” will not work while “User&Name” will.

And also one question: which codepage should the translated .properties file be in? How can I translate this for Cyrillic-based languages?

Hi,

use ASCII as encoding and \u for unicode characters.
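A minimal sketch of what that looks like in practice (the key name here is made up for illustration): in an ASCII properties file, non-ASCII characters are written as \uNNNN escapes, and java.util.Properties decodes them back when the file is loaded.

```java
import java.io.StringReader;
import java.util.Properties;

public class EscapeDemo {
    public static void main(String[] args) throws Exception {
        // Cyrillic "Привет", escaped as plain-ASCII \uNNNN sequences
        String line = "label.hello = \\u041f\\u0440\\u0438\\u0432\\u0435\\u0442";
        Properties p = new Properties();
        p.load(new StringReader(line));
        // Properties decodes the escapes back to real Cyrillic characters
        System.out.println(p.getProperty("label.hello")); // prints: Привет
    }
}
```

So a translated file can carry any script at all, as long as it is stored as escaped ASCII.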

LG

The original post which will lead you in the wrong direction:

what about UTF-8 instead of using a code page? Notepad, Notepad++, vi and many more editors support UTF-8.

So you can just add “foo=???” - if you save it as UTF-8, everyone with an installed Cyrillic font can view it without problems.

Hi all,

I have no experience managing jar files. Is it possible to uncompress the spark.jar file with WinZip, edit the spark_i18n_es.properties file and change, for example, the following line:

title.set.status.message = Setear Mensaje de Estado

for

title.set.status.message = Definir Mensaje de Estado

then save spark_i18n_es.properties, and build a new spark.jar file?

How can I build a new spark.jar file? WinZip?

Thanks in advance,

JJ.-

Hi JJ,

do everything with WinZip if you like it best. I may quote myself: “It's actually easy to copy lib\spark.jar to a new directory, extract it there, modify i18n\spark_i18n.properties and then select all files and create a new smack.jar file.”

A short tutorial:

Make a backup of spark.jar; you'll need it if you damage spark.jar.

Copy spark.jar to C:\tmp\lang\

Select "Extract Here"
Modify the properties file in folder i18n
?? One may also be able to create a new one for another language, but I don't know whether Spark will accept it ??
Select all files except spark.jar
Select “Compress to ZIP + Options” and set the filename to C:\tmp\lang\spark.jar.

Copy C:\tmp\lang\spark.jar to your Spark/lib directory.

LG
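For anyone without WinZip, the repacking step above can also be scripted. This is a rough sketch using java.util.zip (a jar is just a zip file); the class name and the default C:\tmp\lang path are illustrative, and the key point is that entry names must be relative to the extracted content, not include a top-level folder:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class RepackJar {
    // recursively add all files under 'dir' to the zip,
    // with entry names relative to 'root'
    static void addFiles(ZipOutputStream zos, File root, File dir) throws Exception {
        for (File f : dir.listFiles()) {
            if (f.isDirectory()) {
                addFiles(zos, root, f);
            } else if (!f.getName().endsWith(".jar")) { // skip the jar we are writing
                String name = root.toURI().relativize(f.toURI()).getPath();
                zos.putNextEntry(new ZipEntry(name));
                FileInputStream in = new FileInputStream(f);
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) > 0) zos.write(buf, 0, n);
                in.close();
                zos.closeEntry();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        File root = new File(args.length > 0 ? args[0] : "C:\\tmp\\lang");
        ZipOutputStream zos = new ZipOutputStream(
                new FileOutputStream(new File(root, "spark.jar")));
        addFiles(zos, root, root);
        zos.close();
    }
}
```

Run it with the extraction directory as argument; the resulting spark.jar should then contain i18n/…, org/…, and so on at the top level, just like the original.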

I have followed the procedure with the spark.jar file, but I couldn't find any properties file inside smack.jar.

After copying the new spark.jar file to the lib directory and restarting Spark, no results…

Is it necessary to clear any cache or something like that?

Thanks a lot,

JJ.-

Sorry!

I'll fix my post above. It's in spark.jar and not in smack.jar.

LG

it2000 wrote:

So you can just add “foo=???” - if you save it as UTF-8 every one with an installed Cyrillic font can view it without problems.

But not with Spark. I have done all this and I don't see Cyrillic chars. I think this must be because of that conversion to ASCII.

Hi Oleg,

you're right.

The file is indeed not UTF-8 encoded, and if one tries this then “äöü” will look in Spark like “Ã¤Ã¶Ã¼”.

/me slaps Derek around with a big trout.

LG

… not a Derek but a Java issue, as Java expects ASCII property files and not UTF-8. So we're back in 1739 and need to convert UTF-8 or locally encoded files to ASCII with \u sequences.
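The effect is easy to reproduce in a few lines (the key name here is illustrative): Properties.load(InputStream) is documented to read the bytes as ISO 8859-1, so each two-byte UTF-8 character comes out as two Latin-1 characters.

```java
import java.io.ByteArrayInputStream;
import java.util.Properties;

public class MojibakeDemo {
    public static void main(String[] args) throws Exception {
        // "ö" (U+00F6) encoded as UTF-8 is the two bytes 0xC3 0xB6
        byte[] utf8 = "label.test = \u00f6".getBytes("UTF-8");
        Properties p = new Properties();
        // load(InputStream) interprets the bytes as ISO 8859-1
        p.load(new ByteArrayInputStream(utf8));
        // the two UTF-8 bytes decode to the two Latin-1 chars "Ã¶"
        System.out.println(p.getProperty("label.test")); // prints: Ã¶
    }
}
```

This is exactly why a UTF-8 saved translation shows up garbled in Spark, and why the escaped-ASCII conversion is needed.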

Any idea why I can't see any change in the Spark interface after editing the properties file?

This is the procedure I followed:

Make a backup of spark.jar

Copy spark.jar to C:\tmp\lang\

Select “Extract Here”

Modify the properties file in folder i18n (spanish file)

I changed this line

title.set.status.message = Setear Mensaje de Estado

for the following one:

title.set.status.message = Definir Mensaje de Estado

Select all files except spark.jar

Select “Compress to ZIP + Options” and set the filename to C:\tmp\lang\spark.jar.

Copy C:\tmp\lang\spark.jar to your Spark/lib directory.

start Spark…

and no changes in the menu…

Hi,

may someone please test whether this code (a JDK is needed to compile it) works to convert ANSI or UTF-8 encoded files to Java i18n property files?

For me it looks fine for European, Cyrillic and Arabic characters.

LG

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;

public class Convert
{
     public static void main(String[] args) throws Exception
     {
          boolean firstchar = true;
          String infile = System.getProperty("in");
          String outfile = System.getProperty("out");
          String encoding = System.getProperty("en");
          String noEscape = System.getProperty("ne", "");
          /*
           * non-ASCII characters (more than 7 bit) will be escaped
           * as \\uNNNN with NNNN as the unicode sequence
           */
          boolean escape_output = true;
          if ((infile == null) || (outfile == null))
          {
               System.out.println("Converts local or UTF-8 encoded files to ASCII files,");
               System.out.println("non-ASCII characters will be escaped as unicode with \\uNNNN.");
               System.out.println("Use it to generate 'app_i18n[_xx[_XX]].properties' files.");
               System.out.println("\nUsage: java [-Den=UTF-8] -Din=foo -Dout=bar [-Dne=true] Convert");
               System.out.println("  Default read encoding: -Den=" + System.getProperty("file.encoding"));
               System.out.println("  Write encoding: UTF-8");
               System.out.println("    Non-ASCII characters will be escaped, set -Dne=true to disable this.");
               System.out.println("\nExamples:");
               System.out.println("Convert ANSI to UTF-8: java -Din=foo.txt -Dout=foo.utf8 -Dne=true Convert");
               System.out.println("Convert UTF-8 to escaped ASCII: java -Den=UTF-8 -Din=foo.utf8 -Dout=foo.ascii Convert");
               System.exit(1);
          }
          if (encoding == null)
          {
               encoding = System.getProperty("file.encoding");
          }
          if (noEscape.equals("true"))
          {
               escape_output = false;
          }

          FileInputStream fis = new FileInputStream(infile);
          InputStreamReader isr = new InputStreamReader(fis, encoding);
          BufferedReader br = new BufferedReader(isr);
          System.out.println("INFO: Using " + isr.getEncoding() + " encoding for " + infile);

          FileOutputStream fos = new FileOutputStream(outfile);
          OutputStreamWriter osw = new OutputStreamWriter(fos, "UTF-8");
          BufferedWriter bw = new BufferedWriter(osw);
          System.out.println("INFO: Using " + osw.getEncoding() + " encoding for " + outfile);

          String thisLine;
          while ((thisLine = br.readLine()) != null)
          {
               if (escape_output)
               {
                    for (char c : thisLine.toCharArray())
                    {
                         if (firstchar)
                         {
                              firstchar = false;
                              // UTF-8 BOM = EFBBBF, will be converted to UTF-16, big-endian
                              // UTF-16, big-endian BOM = FEFF
                              // BOM = Byte Order Mark
                              if (c == 65279)
                              {
                                   System.out.println("INFO: Skipping Windows UTF-8 BOM (first character)");
                                   continue;
                              }
                         }
                         if (c < 128) // ASCII range is 0-127
                         {
                              bw.write(c);
                         }
                         else
                         {
                              String uc = "0000" + Integer.toString(c, 16);
                              uc = uc.substring(uc.length() - 4);
                              bw.write("\\u" + uc);
                         }
                    }
               }
               else
               {
                    bw.write(thisLine);
               }
               bw.write(System.getProperty("line.separator"));
          }
          bw.flush();

          // close output file
          bw.close();
          osw.close();
          fos.close();

          // close input file
          br.close();
          isr.close();
          fis.close();
     }
}

jjalonso wrote:

Select all files except spark.jar

Select “Compress to ZIP + Options” and set the filename to C:\tmp\lang\spark.jar.

Copy C:\tmp\lang\spark.jar to your Spark/lib directory.

I don't know, maybe it just didn't overwrite the existing spark.jar. Try deleting it from that \lang\ folder after extracting.

I think the problem is that I had renamed the old file to spark1.jar in the /lib directory, and Spark is, I don't know why, reading this file.

If I rename the old file to spark1.jar.old and then start Spark, I get this error:

java.lang.ClassNotFoundException: org.jivesoftware.Spark
    at java.net.URLClassLoader$1.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    at com.exe4j.runtime.LauncherEngine.launch(Unknown Source)
    at com.exe4j.runtime.WinLauncher.main(Unknown Source)

Why is my new spark.jar file not working?

Thanks in advance,

JJ.-

Hi JJ,

is the size of the spark.jar files nearly the same?

Are the folders in the spark.jar file the same?

LG

OK!!!

The problem was in building the new jar file. My fault, sorry!!! When building the new jar file, I was selecting the entire directory (spark) instead of the content of that directory.

Comparing the content of the good spark.jar and the new one, I realized that the problem was here…

Thanks a lot, LG

Best regards,

JJ.-

I've translated Spark into Polish, and it actually works for me. Is there any chance that my translation will be added into future releases of Spark? Where should I send the translated file?

Regards,

Krzysztof

Message was edited by: kris23

You can email the file to Derek (derek@jivesoftware.com).

I did that some time ago. Unfortunately, no reply.