Cannot increase Java memory on Openfire 5.0.1 / Debian 12

I am trying to increase the Java memory on Openfire 5.0.1 / Debian 12, but cannot seem to get it working.

I created a file called /usr/share/openfire/bin/openfire.vmoptions with the following contents:

cat /usr/share/openfire/bin/openfire.vmoptions
-Xms1024m
-Xmx2048m

Yet I still only see 980 MB assigned to Java in Openfire.

I also added the line OPENFIRE_OPTS="-Xms1024m -Xmx2048m" to /etc/init.d/openfire, but still no luck.

Any suggestions for Debian 12?

You need to change DAEMON_OPTS in /etc/default/openfire.

Use sudo -e /etc/default/openfire
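For example, the relevant line could end up looking like this (a sketch; double-check the exact variable name against the comments in your own copy of the file):

DAEMON_OPTS="-Xms1024m -Xmx2048m"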

See the documentation.


Please note that Openfire on Debian has migrated from the /etc/init.d/openfire script to a systemd unit.

So you should use the systemctl restart openfire command instead of calling /etc/init.d/openfire restart.
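For example:

sudo systemctl restart openfire
sudo systemctl status openfire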

You need to change DAEMON_OPTS in /etc/default/openfire:

sudo -e /etc/default/openfire

I’ll add notes on this to https://download.igniterealtime.org/openfire/docs/latest/documentation/install-guide.html

Thanks. That worked.

P.S. Your URL to the documentation is broken.

The new v5.0.2 has been released, so please try to upgrade; it has some minor fixes for systemd.

The documentation link has also been updated; see the Custom Properties section.

I’m using version 5.0.2 on Oracle Linux 9 and I can’t find the DAEMON_OPTS setting anywhere in the /opt/openfire/ folder. Can you help me? Java’s memory usage is increasing significantly.

If you installed via RPM, you can customize this by editing /etc/sysconfig/openfire and looking at the OPENFIRE_OPTS option.

From the Install Guide
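In practice that line could look something like this (a sketch; verify the variable name against the comments in your own /etc/sysconfig/openfire):

OPENFIRE_OPTS="-Xms1024m -Xmx2048m"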

I changed it as indicated:

OPENFIRE_OPTS="Xms1024m -Xmx2048m"

However, when I start the service, it displays the message below:
Job for openfire.service failed because the service did not take the steps required by its unit configuration.
See "systemctl status openfire.service" and "journalctl -xeu openfire.service" for details.

Hi Ramonn! I’m sorry to hear that it’s not working for you.

Where you wrote:

OPENFIRE_OPTS="Xms1024m -Xmx2048m"

you may have made a mistake. I believe that both arguments need to start with a - symbol, like this:

OPENFIRE_OPTS="-Xms1024m -Xmx2048m"

If that doesn’t fix the issue, then an obvious next step is to investigate the output of:

systemctl status openfire.service

and:

journalctl -xeu openfire.service

as suggested by that error message. Did you try that?

That was exactly what I hadn’t noticed. The service starts now, but the output still shows an error:

journalctl -xeu openfire.service
Nov 19 14:20:13 vmspark systemd[1]: openfire.service: Main process exited, code=exited, status=143/n/a
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://support.oracle.com
░░
░░ An ExecStart= process belonging to unit openfire.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 143.
Nov 19 14:20:13 vmspark systemd[1]: openfire.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://support.oracle.com
░░
░░ The unit openfire.service has entered the 'failed' state with result 'exit-code'.
Nov 19 14:20:13 vmspark systemd[1]: Stopped SYSV: Openfire is an XMPP server..
░░ Subject: Unit openfire.service has completed shutdown
░░ Defined-By: systemd
░░ Support: https://support.oracle.com
░░
░░ The openfire.service unit has completed shutdown.
Nov 19 14:20:13 vmspark systemd[1]: Starting SYSV: Openfire is an XMPP server...
░░ Subject: Unit openfire.service being started
░░ Defined-By: systemd
░░ Support: https://support.oracle.com
░░
░░ The unit openfire.service is being started.

Nov 19 14:20:13 vmspark su[6350]: (to daemon) root on none
Nov 19 14:20:13 vmspark su[6350]: pam_unix(su:session): session opened for user daemon(uid=2) by (uid=0)
Nov 19 14:20:13 vmspark su[6350]: pam_unix(su:session): session closed for user daemon
Nov 19 14:20:13 vmspark openfire[6341]: Starting openfire:
Nov 19 14:20:14 vmspark systemd[1]: Started SYSV: Openfire is an XMPP server..
░░ Subject: Unit openfire.service has completed initialization
░░ Defined-By: systemd
░░ Support: https://support.oracle.com
░░
░░ The openfire.service unit has completed initialization.

░░
░░ The start-up result is done.

/etc/sysconfig/openfire

OPENFIRE_OPTS="-Xms1024m -Xmx2048m"

# If you wish to override the auto-detected JAVA_HOME variable, uncomment
# and change the following line.
JAVA_HOME="/usr/lib/jvm/java-17-openjdk-17.0.16.0.8-2.0.1.el9.x86_64"

Hmm, that output is not particularly helpful.

Try looking at the logs from Openfire. You should find them in a directory called logs that is a subdirectory of the directory where you installed Openfire (it most likely is /opt/openfire/logs/).

You could try to remove everything from that logs directory (to get rid of old data), then try to restart Openfire, and then inspect the files that are generated in that directory. If there is a file named nohup.out, then that’s of particular interest (but also look in any other files)!
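A minimal sketch of that procedure, assuming the default /opt/openfire installation path (adjust it if you installed elsewhere):

sudo systemctl stop openfire
sudo rm /opt/openfire/logs/*
sudo systemctl start openfire
cat /opt/openfire/logs/nohup.out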

Despite all the changes, the CPU and memory problems persist, even with only one person connected.

nohup.out

Finished processing all plugins.
An exception was thrown when one of the pluginManagerListeners was notified of a 'monitored' event!
Exception in thread "timer-monitoring" java.lang.OutOfMemoryError: Java heap space
Exception in thread "pool-fastpath7" java.lang.OutOfMemoryError: Java heap space
An unexpected exception occurred:
Exception in thread "jetty-immediate-executor" java.lang.OutOfMemoryError: Java heap space
Exception in thread "timer-fastpath" java.lang.OutOfMemoryError: Java heap space
Exception in thread "archive-service-worker-3" java.lang.OutOfMemoryError: Java heap space
Exception in thread "saxReaderUtil-8" java.lang.OutOfMemoryError: Java heap space

The server has 12 GB of memory and 2 processors.

From this log, it seems that the problem occurs after plugins are loaded. Maybe the problem is caused by a plugin. Try removing plugins, to see if you can identify one specific plugin that is the source of the problem.

I removed some plugins, but the problem persists; the following message appears as soon as I restart the server.

Initialized plugin 'admin'.
Successfully loaded plugin 'admin'.
Loading plugin 'clientcontrol'...
Initialized plugin 'clientcontrol'.
Successfully loaded plugin 'clientcontrol-2.1.10'.
Loading plugin 'emaillistener'...
Initialized plugin 'emaillistener'.
Successfully loaded plugin 'emaillistener-1.2.1'.
Loading plugin 'dbaccess'...
Initialized plugin 'dbaccess'.
Successfully loaded plugin 'dbaccess-1.3.0'.
Loading plugin 'broadcast'...
Initialized plugin 'broadcast'.
Successfully loaded plugin 'broadcast-1.9.3'.
Loading plugin 'monitoring'...
Initialized plugin 'monitoring'.
Successfully loaded plugin 'monitoring-2.7.0'.
Loading plugin 'justmarried'...
Initialized plugin 'justmarried'.
Successfully loaded plugin 'justmarried-1.3.0'.
Loading plugin 'search'...
Initialized plugin 'search'.
Successfully loaded plugin 'search-1.7.5'.
Loading plugin 'contentfilter'...
Initialized plugin 'contentfilter'.
Successfully loaded plugin 'contentfilter-1.9.0'.
Loading plugin 'userimportexport'...
Initialized plugin 'userimportexport'.
Successfully loaded plugin 'userimportexport-2.8.0'.
Loading plugin 'packetfilter'...
Initialized plugin 'packetfilter'.
Successfully loaded plugin 'packetfilter-3.3.2'.
Loading plugin 'motd'...
Initialized plugin 'motd'.
Successfully loaded plugin 'motd-1.3.0'.
Loading plugin 'xmldebugger'...
Initialized plugin 'xmldebugger'.
Successfully loaded plugin 'xmldebugger-1.8.0'.
Finished processing all plugins.
An exception was thrown when one of the pluginManagerListeners was notified of a 'monitored' event!

Although it proves nothing, it is suspicious that the problem is logged as a side-effect of loading plugins. Please remove all plugins to see if any of them causes this.

I tried everything. Removing the plugins worked and all the errors stopped, but as soon as one person connects with Spark, it freezes the Openfire web server.

ERROR StatusConsoleListener Caught exception executing shutdown hook null
java.lang.OutOfMemoryError: Java heap space
at java.base/java.util.Arrays.copyOf(Arrays.java:3481)
at java.base/java.util.concurrent.ConcurrentHashMap$CollectionView.toArray(ConcurrentHashMap.java:4471)
at java.base/java.util.ArrayList.addAll(ArrayList.java:670)
at org.apache.logging.log4j.spi.LoggerRegistry.getLoggers(LoggerRegistry.java:134)
at org.apache.logging.log4j.spi.LoggerRegistry.getLoggers(LoggerRegistry.java:129)
at org.apache.logging.log4j.core.LoggerContext.updateLoggers(LoggerContext.java:739)
at org.apache.logging.log4j.core.LoggerContext.updateLoggers(LoggerContext.java:729)
at org.apache.logging.log4j.core.LoggerContext.stop(LoggerContext.java:375)
at org.apache.logging.log4j.core.LoggerContext$1.run(LoggerContext.java:305)
at org.apache.logging.log4j.core.util.DefaultShutdownCallbackRegistry$RegisteredCancellable.run(DefaultShutdownCallbackRegistry.java:119)
at org.apache.logging.log4j.core.util.DefaultShutdownCallbackRegistry.run(DefaultShutdownCallbackRegistry.java:78)
at java.base/java.lang.Thread.run(Thread.java:840)

I’m sorry that you have to go through this. I think it’s time that we bring in the expert tools to diagnose this.

What you’ll need to do is create a Java heap dump (not a thread dump) of Openfire when it is running out of memory. There are various ways to do that. Many of them are documented here: https://www.baeldung.com/java-heap-dump-capture

When we have a heap dump, we can use memory analyzers to find out what is occupying all that memory.
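For example, a minimal sketch using jmap, which ships with the JDK (the dump path and the pgrep-based pid lookup are assumptions; adjust them for your system):

jmap -dump:live,format=b,file=/tmp/openfire-heap.hprof $(pgrep -f openfire)

Alternatively, you can make the JVM write a dump automatically whenever it runs out of memory by adding these standard HotSpot flags to the OPENFIRE_OPTS line from earlier in this thread:

OPENFIRE_OPTS="-Xms1024m -Xmx2048m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp"

The resulting .hprof file can then be opened in a memory analyzer such as Eclipse MAT.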

@Ramonn maybe you can temporarily increase -Xmx to 15 GB and then try again? You may also try disabling some plugins, like the Packet Filter or the XML Debugger: they shouldn’t affect users.
Once Openfire starts and works, you may then try reducing the memory limit back down to 2 GB.
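For example, as a temporary test in the /etc/sysconfig/openfire file shown earlier (15g is just a generous ceiling for the experiment; since the machine reportedly has 12 GB of RAM, you may want to stay somewhat below that):

OPENFIRE_OPTS="-Xms1024m -Xmx15g"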

As a general tip to improve performance, you may try a newer Java, such as Java 21 or later, if it is available in your Linux distro. On Debian 12 it may be missing, but you can use extrepo to install it from zulu-openjdk or something else.
This year Debian had a new stable release, Debian 13, which should have newer OpenJDK packages.
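A rough sketch of what that could look like on Debian 12 (the repository and package names here are assumptions based on the suggestion above; verify them with extrepo search zulu before enabling anything):

sudo apt install extrepo
sudo extrepo enable zulu-openjdk
sudo apt update
sudo apt install zulu21-jdk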

Also, please ensure that you are not using the embedded database (HSQLDB), but a proper database server like PostgreSQL, or at least MySQL/MariaDB.
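One quick way to check which database Openfire is configured against (a sketch assuming the default /opt/openfire path; the embedded database shows up as EmbeddedConnectionProvider):

grep -Ei 'connectionprovider|serverurl' /opt/openfire/conf/openfire.xml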