I’ve been testing the performance of Openfire 3.5.2 on my test machine recently.
I read a lot of posts here, especially those answered by Gato. Gato claims in several posts that he
has reached 250K concurrent users in a single JVM on a 64-bit OS with 2GB RAM allocated to the JVM.
I’m curious about Gato’s test and results in that post.
I can’t reach even 1/20 of the performance Gato achieved.
My test machine is an AMD64 4000+ 2GHz dual-core CPU with 3GB physical memory (2GB allocated to the JVM), running 64-bit CentOS 5.2,
with no connection manager.
The database is PostgreSQL 8.3, installed on another machine.
The “nofile” parameter in limits.conf has been raised to 22000 to avoid “too many open files” errors.
The JVM parameters are “-Xms1024m -Xmx2048m -server”.
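As a sanity check, the heap limit the JVM actually picked up can be verified from inside the process (a minimal sketch; the reported max is typically slightly below the -Xmx value because of how the JVM partitions the heap):

```java
// Prints the heap limits the running JVM actually applied,
// useful for confirming that -Xms/-Xmx took effect.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("max heap (MB):   " + rt.maxMemory() / (1024 * 1024));
        System.out.println("total heap (MB): " + rt.totalMemory() / (1024 * 1024));
        System.out.println("free heap (MB):  " + rt.freeMemory() / (1024 * 1024));
    }
}
```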
My goal is 100K registered users in the DB, with 30K-60K of them online and active.
I use Tsung for load testing.
Test 1: 20K users in the DB, each with 19 friends, no groups. 6K users log in one by one, one every 0.01s. Each user fetches its roster after login, then
sends 1 presence per second. The test lasts 30 minutes. I ran this case several times and succeeded only once; all the other runs ended with an OutOfMemoryError.
Test 2: 20K users in the DB, each with 19 friends, no groups. 20K users log in one by one, one every 0.01s. Each user fetches its roster after login, then
sends 1 presence every 5 seconds. The test failed at the 48th minute with an OutOfMemoryError.
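For what it’s worth, here is the back-of-the-envelope arithmetic for the second scenario (assuming all 20K users stay connected and every presence fans out to all 19 roster contacts):

```java
// Rough load estimate for the second test scenario:
// 20K users logging in one every 0.01s, then 1 presence per 5s each.
public class LoadEstimate {
    public static void main(String[] args) {
        int users = 20_000;
        double loginIntervalSec = 0.01;      // one login every 10 ms
        double presencePeriodSec = 5.0;      // each user sends 1 presence / 5 s
        int friends = 19;                    // roster size -> presence fan-out

        double loginRampSec   = users * loginIntervalSec;    // time to log everyone in
        double presenceInRate = users / presencePeriodSec;   // inbound presences per second
        double fanOutRate     = presenceInRate * friends;    // outbound stanzas per second

        System.out.printf("ramp-up: %.0f s%n", loginRampSec);
        System.out.printf("inbound presence: %.0f/s%n", presenceInRate);
        System.out.printf("outbound fan-out: %.0f/s%n", fanOutRate);
    }
}
```

So even after the 200-second ramp-up, the server is routing tens of thousands of stanzas per second, which may explain why the heap fills up.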
Is my load too heavy?
I’ve also read a PDF document from Openfire, “Openfire Scalability Enhancements”.
The doc mentions “extremely active users” and “users with low levels of activity”. Can someone define what counts as extremely active
and what counts as low activity?
I’ve also used JProfiler to look for the cause of the memory leak. I found that under heavy load, byte objects grow to 1.2GB of RAM and never
get a chance to be GCed, and then the OutOfMemoryError is thrown; under light load, the byte objects are collected every few seconds.
I have no idea why.
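In case it helps anyone reproduce this, heap usage can also be sampled in-process without a profiler (a minimal sketch using the standard java.lang.management API; adding -XX:+HeapDumpOnOutOfMemoryError to the JVM flags is another option, to get a dump at the exact moment of failure):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Periodically logs heap usage; run alongside the load test to see
// whether used heap climbs steadily (leak) or saw-tooths (healthy GC).
public class HeapMonitor {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        for (int i = 0; i < 5; i++) {        // sample a few times
            MemoryUsage heap = mem.getHeapMemoryUsage();
            System.out.printf("used=%dMB committed=%dMB max=%dMB%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
            Thread.sleep(1000);              // one sample per second
        }
    }
}
```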
Please give me some advice here.