Abnormal Memory Usage

Memory usage on our servers has been quite high recently. I remember a few months ago adding the ".vmoptions" file to my bin directory to define the amount of RAM I would like Java to use, but it's not there anymore. Is the bin directory removed when performing an upgrade? I have heard it may also be plug-ins causing the excessive memory usage, so I removed all the plug-ins we don't use anymore and restarted the service. Below are a few lines from the debug log. Does anyone know what they mean, or how to fix them?
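For reference, that file contained nothing more than JVM flags, one per line; the exact values below are from memory, so treat them as approximate rather than what is actually needed:

-Xms512m
-Xmx1024m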

It may be important to note that this install is clustered using Heartbeat and MySQL master-master replication.

Server Properties

Server Uptime: 129 days, 16 hours, 32 minutes – started Dec 31, 2009 4:01:55 PM
Version: Openfire 3.6.4
Server Directory: /opt/openfire
Server Name: jabber1.xxx.com

Environment

Java Version: 1.6.0_03 Sun Microsystems Inc. – Java HotSpot™ Server VM
Appserver: jetty-6.1.x
Host Name: chat1.xxx.com
OS / Hardware: Linux / i386
Locale / Timezone: en / Eastern Standard Time (-5 GMT)

Java Memory

944.23 MB of 1014.81 MB (93.0%) used

DEBUG:

2010.05.10 09:16:04 ConnectionHandler:
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(Unknown Source)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(Unknown Source)
at sun.nio.ch.IOUtil.read(Unknown Source)
at sun.nio.ch.SocketChannelImpl.read(Unknown Source)
at org.apache.mina.transport.socket.nio.SocketIoProcessor.read(SocketIoProcessor.java:218)
at org.apache.mina.transport.socket.nio.SocketIoProcessor.process(SocketIoProcessor.java:198)
at org.apache.mina.transport.socket.nio.SocketIoProcessor.access$400(SocketIoProcessor.java:45)
at org.apache.mina.transport.socket.nio.SocketIoProcessor$Worker.run(SocketIoProcessor.java:485)
at org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:51)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)

2010.05.10 09:16:05 Stat: muc_occupants. Last sample: 1273497300. New sample: 1273497360
2010.05.10 09:16:05 42132160 (01/12/00) - Connection #1413 tested: OK
2010.05.10 09:16:05 42132161 (01/12/00) - Connection #1413 tested: OK
2010.05.10 09:16:18 NIOConnection: startTLS: using c2s

2010.05.10 09:16:19 AuthorizationManager: Trying Default Mapping.map(user.name)
2010.05.10 09:16:19 DefaultAuthorizationMapping: No realm found
2010.05.10 09:16:19 AuthorizationManager: Trying Default Policy.authorize(user.name, user.name)
2010.05.10 09:16:19 DefaultAuthorizationPolicy: Checking authenID realm

2010.05.10 09:16:19 42132260 (01/12/00) - #1414 registered a statement as closed which wasn’t known to be open. This could happen if you close a statement twice.
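If I'm reading that last warning right, the connection pool is just complaining about a statement being closed twice, something like the pattern below (my own illustration of a double close, not actual Openfire code):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class DoubleCloseExample {
    // Close the statement once on the normal path and again in finally.
    // A pooling wrapper that tracks open statements can log the second
    // close as "registered a statement as closed which wasn't known to be open".
    static void runQuery(Connection con) throws SQLException {
        Statement stmt = con.createStatement();
        try {
            stmt.execute("SELECT 1");
            stmt.close(); // first close
        } finally {
            stmt.close(); // second close is what the pool would flag
        }
    }
}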

Thanks for your help,

Wesley

Hi Wesley,

Did you read the announcement at http://www.igniterealtime.org/community/index.jspa ?

Anyhow, with an uptime of 130 days, you may consider simply restarting it.
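If you installed from the RPM or Debian package, a plain restart would look something like the line below (assuming the stock init script; since you are clustered with Heartbeat, you may prefer to restart the resource through Heartbeat instead so it does not fail over):

/etc/init.d/openfire restart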

LG