I have a test case that logs lots of users (about 20k to 50k) in to the Wildfire server through the Connection Manager.
The env is:
server1: Wildfire - Java 5 / Linux / MySQL 5.0.22 InnoDB
server2: my test code + Connection Manager (in the same VM) - Java 5 / Linux
LAN: 1G Ethernet
My login program is very simple: it just sets the presence status after login.
I've created a thread pool (Java 5.0's thread pool) that runs 50 login threads, to prevent Wildfire from getting too many requests at the same time.
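To show the shape of the test driver, here is a minimal sketch of that kind of pool. The `loginOneUser` method is a hypothetical stand-in for my actual XMPP connect/auth/presence code, and the user count is just an example:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class LoginLoadTest {
    private static final AtomicInteger loggedIn = new AtomicInteger();

    // Hypothetical stand-in for the real login code:
    // connect, authenticate, then send <presence/>.
    static void loginOneUser(int userId) {
        loggedIn.incrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        int totalUsers = 20000;
        // Fixed pool of 50 threads, so at most 50 logins hit the server at once.
        ExecutorService pool = Executors.newFixedThreadPool(50);
        for (int i = 0; i < totalUsers; i++) {
            final int userId = i;
            pool.execute(new Runnable() {   // Java 5: no lambdas yet
                public void run() { loginOneUser(userId); }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
        System.out.println("logged in: " + loggedIn.get());
    }
}
```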
My problem is:
The login speed gets slower and slower, from 200 users/sec down to 100/sec, then 50/sec, and eventually 1 user/sec (by that point about 30k users had logged in).
The problem is unlikely to be in my code or the Connection Manager:
I have checked all the log files for Wildfire and the Connection Manager and found no error packets.
And every logged-in user can be found in Wildfire's admin console.
After login, my send/recv message program runs very well. The chat program starts sending message packets back and forth, and the send/recv rate holds at 5,000 to 8,000 packets/second even after running for 30 minutes.
I've profiled the client code and the Connection Manager with JProfiler and didn't find any problems.
The Wildfire and Connection Manager code was checked out from SVN about two weeks ago.
At first I thought the database could be a bottleneck. I checked MySQL's log and found that every time a user logs in, Wildfire writes an update to jiveUserProp.
I commented out that code in Wildfire and ran the test again, but the problem remained.
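For anyone who wants to rule out the database the same way: this is roughly how I watched for slow queries on MySQL 5.0 (option names as I understand them for that version; the log path is just an example):

```shell
# my.cnf, [mysqld] section -- enable the slow query log on MySQL 5.0:
#   log-slow-queries = /var/log/mysql/slow.log
#   long_query_time  = 1

# Then watch the slow log while the login test runs:
tail -f /var/log/mysql/slow.log
```

If nothing shows up there while the login rate is dropping, the bottleneck is probably not in MySQL.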
So I think maybe something in Wildfire is blocking the logins, perhaps a synchronization or deadlock problem? Can anyone familiar with Wildfire's code give me some tips? Multithreaded code is very difficult to debug, and it's hard for me to find the problem myself.
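One thing I plan to try next, in case it helps: since Java 5, the JVM can report monitor deadlocks itself through `java.lang.management`. A minimal sketch of that check (it would need to run inside, or be attached via JMX to, the Wildfire JVM to see Wildfire's threads):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockCheck {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // Returns the ids of threads deadlocked on object monitors,
        // or null if there is no such deadlock.
        long[] ids = mx.findMonitorDeadlockedThreads();
        if (ids == null) {
            System.out.println("no monitor deadlock detected");
        } else {
            for (ThreadInfo ti : mx.getThreadInfo(ids)) {
                System.out.println(ti.getThreadName()
                        + " blocked on " + ti.getLockName()
                        + " held by " + ti.getLockOwnerName());
            }
        }
    }
}
```

On Linux you can also just send `kill -3 <pid>` to the Wildfire process while the login rate is dropping and read the full thread dump from its stdout; that shows where every thread is blocked even without a true deadlock.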