Openfire.log file has grown big

So, as we already discussed a bit in chat yesterday: it seems that after upgrading to 3.5.1 (or maybe 3.5.0) and installing the Monitoring Service and Fastpath plugins (archiving is completely disabled), my openfire.log file in the embedded-db dir has grown a lot. But it's constantly changing. One day it's 180 MB, and today it's 69 MB. It's hard to look at what's inside on my weak PC, so I've only managed to take a screenshot of the first lines. It seems there is a lot of similar info.

I would be quite happy if my openfire.log were only 180 MB.

Currently my openfire.log is about 12 GB, and it's still getting bigger.

That’s horrible… But for me this started with the 3.5.x versions. I checked my openfire.log at home (test server); it was a few bytes. After a server start (just a few test users, who are offline now) it had grown to 10 MB!!! And can anyone explain to me what this “7ff8000000000000” is for? Because 99% of openfire.log is filled with that string.

So, can anyone from the developers explain what this “7ff8000000000000” is for? Btw, yesterday when I was doing a backup at the end of the work day, this file was only 7 MB. Very dynamic.

Hi Oleg,

to me the entries in your file look a lot like RRD data.

I did run the Enterprise plugin. It collects RRD data every minute and writes it to the database. One must always write the whole RRD data set, not just the new value, so it's a lot of data. I complained about this some months ago: when running Oracle or a similar database in archive mode, the archive logs must be backed up quite often. Since one needs them for a complete recovery after a crash, this gets quite expensive, as archive log backups must be kept until the next full backup.
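This also answers the “7ff8000000000000” question, I think: that is the canonical IEEE 754 bit pattern of a double NaN, which RRD-style tools typically use to represent UNKNOWN samples. An RRD archive with mostly-empty slots, serialized as raw doubles, would therefore be full of exactly that string. A minimal Java check (class and method names are mine, just for illustration):

```java
// Shows that "7ff8000000000000" is the canonical NaN bit pattern for a double.
public class NanBits {
    public static String nanHex() {
        // doubleToLongBits canonicalizes every NaN to 0x7ff8000000000000L
        return Long.toHexString(Double.doubleToLongBits(Double.NaN));
    }

    public static void main(String[] args) {
        System.out.println(nanHex()); // prints 7ff8000000000000
    }
}
```

So the file is not corrupted; those bytes are simply "no value recorded" markers for statistics slots that have not been filled yet.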

Your embedded-db/ directory contains an entry “hsqldb.log_size=200”. This means that openfire.log can reach 200 MB; then a checkpoint occurs and the file is merged into openfire.script. You can change this value to 20, so checkpoints occur more often and complete faster. However, backups may get corrupted more easily.
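As a concrete sketch (file name assumed from the standard embedded-db layout, where the database is named `openfire`), the setting lives in `embedded-db/openfire.properties`:

```
# embedded-db/openfire.properties (assumed standard location)
# Checkpoint and merge into openfire.script once openfire.log reaches 20 MB
# instead of the default 200 MB
hsqldb.log_size=20
```

Note that HSQLDB rewrites this file itself, so it's safest to stop Openfire before editing it.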

In my opinion, one should always use local RRD files to store statistics data, even if Openfire runs in a cluster. So one would have RRD files on every instance within the cluster. If a cluster member fails, it will of course not have all the data, and some gaps will occur in its RRD file.

Or at least one should have the option to decide where the RRD data is stored:

a) local filesystem

b) Openfire database

c) other database (likely not running in archive mode)


PS: One should probably create an issue, as it seems this affects not only me after all.

Hi Cyberlink,

as you are using MS SQL, I have no idea where one would tune it. To me it looks similar to an Oracle archive log, and I wonder why MS SQL does not rotate it every n MB.


added: See for a script. Anyhow, I wonder why you use MS SQL if you have little or no experience with it.