In comparison, this doesn’t take any notable memory resources (on a WildFly app server running a bunch of different applications with a heap of about 30 GB). But it draws attention in the common monitoring, because it “needlessly” holds file handles, on the order of a few thousand, between the large FGC intervals (on the order of days), exceeding the number of “needed” file handles by a large factor. Because the growth is slow, an additional cron-driven watchdog simply counts the file handles of the JVM process and triggers a FGC above a certain “80%” limit. (There is another main watchdog that commands graceful or forced restarts at the “95%” level for all common resources.)
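For illustration, a minimal sketch of such a watchdog (the user name and PID discovery via pgrep are assumptions for my setup; jcmd GC.run requests a System.gc(), which the JVM honors unless -XX:+DisableExplicitGC is set, and the script must run as the JVM’s user or root to read /proc/<pid>/fd):

  #!/bin/sh
  # Sketch of the cron-driven watchdog: count the JVM's open file
  # handles and request a full GC above 80% of the soft limit.
  PID=$(pgrep -u openfire -f java | head -n 1)   # adjust user/match to your setup
  [ -n "$PID" ] || exit 0

  OPEN=$(ls "/proc/$PID/fd" | wc -l)
  SOFT=$(awk '/^Max open files/ {print $4}' "/proc/$PID/limits")

  if [ "$OPEN" -gt $((SOFT * 80 / 100)) ]; then
      # GC.run triggers System.gc() in the target JVM.
      jcmd "$PID" GC.run
  fi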
I’m using OpenFire to run OpenMeet (now aka Pàdé), so the number of XMPP users is only on the order of a hundred. OpenMeet uses two additional “external” JVMs to run the Jitsi components.
My current set of options for the OpenFire JVM (running in an LXC container with 20 cores assigned to it) is:
-server -XX:+UseG1GC -Xms128m -Xmx256m -XX:MaxMetaspaceSize=128M -XX:MaxDirectMemorySize=512M -Djdk.nio.maxCachedBufferSize=262144
-XX:MaxGCPauseMillis=50 -XX:ConcGCThreads=5 -XX:+ParallelRefProcEnabled -XX:ParallelGCThreads=5 -XX:ActiveProcessorCount=20 -XX:+UseStringDeduplication -XX:+PrintGCApplicationStoppedTime
-Djava.io.tmpdir=/var/tmp -Djava.net.preferIPv4Stack=true -Dsun.net.inetaddr.ttl=60
-Dcom.sun.management.jmxremote.port=##### -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=true -Dcom.sun.management.jmxremote.access.file=... -Dcom.sun.management.jmxremote.password.file=...
I also take care to set MALLOC_ARENA_MAX=1 in the process environment for the JVMs, to reduce the glibc malloc footprint for direct memory.
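A sketch of one way to apply it, assuming the JVM is launched through a wrapper script ($JVM_OPTS and the launch command are placeholders; under systemd an Environment=MALLOC_ARENA_MAX=1 line in the unit achieves the same):

  # Hypothetical launcher fragment: limit glibc to a single malloc arena
  # (the 64-bit default is 8 arenas per core), so native allocations for
  # direct buffers don't fan out across per-thread arenas.
  export MALLOC_ARENA_MAX=1
  exec java $JVM_OPTS -jar startup.jar   # placeholder launch command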