Can the connection manager support more than 5000 concurrent users on one port?

Hello everyone:

The clients use the XMPP protocol to communicate with the Wildfire server on port 5222 (or 5223). This is TCP/IP programming (not UDP). So if there are more than 5,000 concurrent users connecting to the connection manager, can the CM handle them with only one port? Are there any hardware requirements?

Thanks for your opinions!


5000 sounds like a "magic number" to me.

I reached 15,000 users connected on a single connection manager, but I had to tune the Java native thread stack size (-Xss) and play with the maximum number of file descriptors available per process (ulimit on Linux). The single listening port is not the limit, by the way: each TCP connection is identified by the combination of client and server addresses and ports, so one listening port can carry many connections. Memory, threads, and file descriptors are the real constraints.
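As a sketch of the two knobs mentioned above on Linux (the -Xss value comes from this thread; the descriptor limit, heap sizes, and jar name are illustrative assumptions, not the poster's exact settings):

```shell
# Raise the per-process limit on open file descriptors for this shell;
# each client connection to the CM consumes one descriptor.
# To make it permanent, configure /etc/security/limits.conf instead.
ulimit -n 16384

# Start the CM's JVM with a smaller per-thread native stack (-Xss),
# so more connection-handling threads fit in the process address space.
# Heap sizes (-Xms/-Xmx) and the jar name are illustrative.
java -Xms420m -Xmx420m -Xss80k -jar connection_manager.jar
```

Shrinking -Xss trades per-thread stack headroom for a higher thread count, which is why it matters when each connection gets its own thread.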

Can you give us some details?



I run the connection manager on a PC (only 512 MB of memory, 2.6 GHz CPU) running Windows XP. The Wildfire server and the MySQL database run on another XP machine.

I modified cmanager.bat and set the JVM memory as: REM # SET JVM_SETTINGS = "-Xms300m -Xmx480m". The CM could only support 800 concurrent users. (I gradually started 1000 threads on each of two other machines; each thread sent a request to the CM every second after it was started. When both machines had started more than 450 threads, the CM crashed with "java.lang.OutOfMemoryError: Java heap space".)

My system has this goal: one client (named CLIENT-A) is permanently connected to the Wildfire server. Other clients (maybe more than 5,000) connect to the connection manager at any time they like. They want to get some data from CLIENT-A, so they are always sending requests to it. Meanwhile, these normal clients seldom talk to each other.

How can I improve the performance of the CM and WF? Do I need to run the CM on a Linux system? What advice would you give me?

Thank you!

Message was edited by: handsomeli


Just to be sure: if you leave REM in front of your line, it is treated as a comment, so this variable will not be set.

Remove the REM #. I don't know if it will work better, but at least your parameters will be used.
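With the REM removed, the relevant line in cmanager.bat would read something like this (a sketch based on the values quoted above; check your own cmanager.bat for the exact variable name):

```shell
SET JVM_SETTINGS=-Xms300m -Xmx480m
```

Note that batch is sensitive here: spaces around the = and surrounding quotes become part of the variable's value, so keep the assignment tight as shown.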

When I ran tests with the CM, I got either "Java heap space" problems or "unable to create new native thread" errors.

We use Linux, but for your need of around 5000 users I don't know if you even need a CM, in fact. If you do plan to use one, I don't think there is any problem with Windows.

I will run a brief test with 5000 connections on a single CM with -Xmx480M and tell you what I get.

I ran the test with JVM_SETTINGS="-Xms420M -Xmx420M -Xss80K" and reached 5200 users.

Be sure to run your test clients on a machine separate from the CM, and it should work with 5000 users.

Hope it helps

Interesting; we are fighting similar problems with Java VM tuning at our site. We don't use a CM, but when we started to reach 5000 concurrent users, strange things began happening. I agree with the ulimit -n (per-process file descriptors) suggestion; we bumped ours from 8192 to 16384 (we have a big Linux box with 8 GB of RAM).

However, we have seen various Java HotSpot compiler errors of the following form (which of course causes the server to crash):

Exception java.lang.OutOfMemoryError: requested 2048000 bytes for GrET* in /BUILD_AREA/jdk1.5.0_06/hotspot/src/share/vm/utilities/growableArray.cpp. Out of swap space?

Exception in thread "CompilerThread0" java.lang.OutOfMemoryError: requested 1053808 bytes for Chunk::new. Out of swap space?

Now, I'm 99% sure that our system isn't running out of swap space (I checked), so a little Googling led to a suggestion to tune the VM by increasing the "permanent generation". Apparently, this is where the VM keeps "reflective" data, pointers to class objects on the heap, etc. I've already increased our heap to a 2 GB max, and we are barely touching that, but by default, when using the "-server" flag to the VM, the permanent generation is only 64 MB.

I've put the following flags into our wildfire.vmoptions file to pass to the VM when it starts (though I haven't restarted since I put them there, as I'm afraid to touch that now). Just the PermSize flags are new, BTW; we've been running with the stack size and max heap flags for a long time:
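(The flag values themselves did not survive the forum formatting. Purely as an illustration of the kind of wildfire.vmoptions content being described, with one option per line; all values here are assumptions, not the poster's actual settings:)

```shell
-Xmx2048m
-Xss256k
-XX:PermSize=128m
-XX:MaxPermSize=256m
```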





Our production server takes almost an hour and a half to fully stabilize when 5000 people try to have their clients reconnect at once. Frankly, this is a huge problem that the Wildfire folks still haven't solved, and it makes restarts very painful for us.

Jive Software folks: numerous people have been asking for a "tuning guide" for this sort of thing for a long time. Do you have anything like that? Also, specifically around increasing the "permanent generation", what are your thoughts? If the heap is holding 5000 connection objects (plus who knows what else, since in our case we also run about 130 conference rooms), could the permanent generation be running out of memory?

I know in the past you have said that tuning the VM isn't in your purview as developers of the software, but IMHO, at larger installations, tuning the VM goes hand in hand with the stability of Wildfire, and this needs to be addressed. Thanks.

-Guy Martin

Your information is interesting. I have no experience from real situations yet; I simulate users with Tsung.

I reached around 30,000 users with 3 CMs.

But this was just a simulation. I had a user arrival rate of around 100 per second and used rosters with 50 items. If I increase the roster sizes, it is harder to keep this rate: I can hold 100 per second up to 150 items, but if I try 300 items the system is generally saturated. I would say there are different problems: if you use shared groups, you need less database access but more CPU to generate rosters from the groups. At the opposite extreme, if you use plain rosters, the system needs less CPU on WF but the database is the bottleneck.

It is not easy to point out where the bottleneck is. I am running tests to evaluate the best configuration.

Currently I can say that the following configuration enables simulating 30,000 users:

3 CMs (Xeon, 2 GB RAM) -Xmx1G -Xms1G -Xss80K -> capacity 15,000 users

1 WF (bi-Xeon, 4 GB) -server -Xmx2/3G.

We use a WAR version of WF embedded in a Tomcat server.

It works pretty well. But I agree that reconnecting 5000 users simultaneously can be a pain, especially if rosters are large. I haven't run tests with conference rooms yet; that is planned but not yet performed.

I remember having had your memory problem. I'll have a look tomorrow to see if I can find what I did. Anyway, perhaps try a combination of -server, -Xmx, and -Xms. I also tried using the aggressive heap policy.
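For reference, the "aggressive heap policy" mentioned here is the -XX:+AggressiveHeap flag. A sketch of the two alternatives (heap sizes and jar name are illustrative; note that +AggressiveHeap sizes the heap itself from the machine's resources, so it is normally used instead of explicit -Xms/-Xmx rather than alongside them):

```shell
# Alternative 1: explicit heap sizing, as discussed in this thread
java -server -Xms1g -Xmx1g -Xss80k -jar connection_manager.jar

# Alternative 2: let the JVM choose heap sizes from available memory/CPUs
java -server -XX:+AggressiveHeap -jar connection_manager.jar
```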

Just a remark, which you are probably already aware of, but I think there is a difference between heap memory and swap memory. More on this tomorrow.

Good evening... personally, it is time for me to go to bed.

So, Jive Software folks: I've heard numbers bandied about that a single (non-CM) instance of Wildfire can handle 10k concurrent users. Is that still the contention?

Based on what we are seeing, I'm not sure about that. We crashed again today (twice), even after I'd adjusted the PermSize as mentioned above. We reached 5020 online users today, but it was very dicey at the time... It was the CompilerThread1 exception again; it looks like we are triggering a Java HotSpot issue, and I'm not sure how to fix it.

So now I'm at a crossroads: I don't currently have the infrastructure to support a load-balancing DNS round-robin setup that can feed multiple CMs, but I can't keep having our server crash every day.

I'll repeat my oft-asked question to the Jive Software team: are there any configuration/tuning guidelines for JVM parameters? At least something that could "get us over the hump" until I can figure out how to add more capability into our infrastructure to support these multiple-CM concepts, or...?