Scalability of Openfire

Is it possible to scale the throughput of Openfire by having multiple servers supporting the same user domain?

One user domain: mydomain.com

Many servers/hosts/hardware: 1.mydomain.com, 2.mydomain.com, ..., N.mydomain.com

Can you make it fully transparent for users? That is, I talk to "chubby@mydomain.com" no matter which host my or his TCP connection goes to.

Any hints on using LDAP or MySQL as a shared user database?

What other load-balancing features are available if you anticipate a very large amount of data being transferred between Openfire clients?

Thanks

Peter

Hey Peter,

The only way to host the same domain across several machines is to use clustering. We will start working on clustering soon (within a few weeks). Clustering is an enterprise feature for mission-critical applications. Anyway, what load are you expecting at a single point in time? Clustering provides fail-over (fault tolerance) as well as horizontal scalability. If you are only interested in scalability and not fail-over, then the current architecture (with or without connection managers) may still be enough for you.

Regards,

– Gato

What you're asking for with multiple Jabber servers supporting the same domain is essentially clustering, and the answer is currently no: Openfire doesn't support clustering yet.

However, Openfire does support connection managers. A connection manager is essentially a multiplexer in front of Openfire: it combines client requests and passes them on to the server as one big stream rather than many little ones. The result is fewer connections to the actual IM server, and thus less memory and fewer threads used on the IM server. The connection manager application itself isn't doing much processing work, so its own utilization is rather low as well.
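
For what it's worth, here is a minimal sketch of what connecting through a connection manager looks like from the client side (Smack API; the host names are placeholders): the JID domain stays mydomain.com, only the TCP host the client points at changes.

import org.jivesoftware.smack.ConnectionConfiguration;
import org.jivesoftware.smack.XMPPConnection;
import org.jivesoftware.smack.XMPPException;
import org.jivesoftware.smack.packet.Presence;

public class CmConnectSketch {
    public static void main(String[] args) throws XMPPException {
        // The TCP connection goes to a connection manager host (placeholder name),
        // but the XMPP service name, and therefore every JID, stays mydomain.com.
        ConnectionConfiguration config =
                new ConnectionConfiguration("cm1.mydomain.com", 5222, "mydomain.com");
        XMPPConnection connection = new XMPPConnection(config);
        connection.connect();
        connection.login("peter", "secret");
        connection.sendPacket(new Presence(Presence.Type.available));
    }
}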

I thought I remembered seeing somewhere a figure of 1 connection manager per 10K users, but I can't seem to find the reference anywhere. From an availability and scalability perspective, you could put two connection managers behind a load balancer and have your clients connect to a connection manager in a round-robin or least-used fashion, and Openfire should scale nicely. As needed, add more CMs and you're golden.
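
If a hardware load balancer is not an option, a rough client-side equivalent is to shuffle the list of connection manager hosts and try them in order until one answers. A quick sketch (Smack again; host names are placeholders):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import org.jivesoftware.smack.ConnectionConfiguration;
import org.jivesoftware.smack.XMPPConnection;
import org.jivesoftware.smack.XMPPException;

public class CmFailoverSketch {
    public static XMPPConnection connectToAnyCm(String user, String pass) throws XMPPException {
        // Placeholder connection manager hosts; shuffling spreads clients roughly evenly.
        List<String> cmHosts = new ArrayList<String>(
                Arrays.asList("cm1.mydomain.com", "cm2.mydomain.com"));
        Collections.shuffle(cmHosts);
        XMPPException lastFailure = null;
        for (String host : cmHosts) {
            try {
                XMPPConnection conn =
                        new XMPPConnection(new ConnectionConfiguration(host, 5222, "mydomain.com"));
                conn.connect();
                conn.login(user, pass);
                return conn;
            } catch (XMPPException e) {
                lastFailure = e; // this CM is down or full, try the next one
            }
        }
        throw lastFailure;
    }
}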

Iota

Hey Iota,

Iota wrote:

I thought I remembered seeing somewhere a figure of 1 connection manager per 10K users, but I can't seem to find the reference anywhere.

That value was mentioned in these forums for Wildfire 3.1.* or older. As of Wildfire 3.2.* the number has gone up quite a bit; in our local tests we saw numbers around 50K users per JVM.

From an availability and scalability perspective, you could put two connection managers behind a load balancer and have your clients connect to a connection manager in a round-robin or least-used fashion, and Openfire should scale nicely. As needed, add more CMs and you're golden.

If he is interested in having no single point of failure, then he would still have one in the single Wildfire server. Connection managers provide an ad-hoc availability solution, but they are not a complete fault-tolerance solution, since if the server goes down the entire application won't be available. Using connection managers is a good solution for scalability and some partial availability, but not a true high-availability solution.

Regards,

– Gato

Gato, how much memory are you allocating to the JVM to get 50k users? We seem to be having trouble getting much more than about 1500 or so…

Hey BenV,

We allocated 2GB and the server was consuming approximately 1.1GB. Note, however, that if your incoming rate is greater than your processing rate, or your outgoing rate is greater than the speed at which packets can actually be delivered, then the server will start queuing things up and may eventually run out of memory. We will be adding throttling capability to the server to ensure that even under really heavy load you don't run out of memory.
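
In the meantime, one rough way to keep an eye on that symptom (just an illustration of the idea, not a feature of the server) is to poll the JVM heap and warn before it fills up:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapWatchSketch {
    public static void main(String[] args) throws InterruptedException {
        // Run inside the server JVM (e.g. from a plugin), or read the same
        // MemoryMXBean remotely over JMX; -Xmx must be set for getMax() to be meaningful.
        while (true) {
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            double usedFraction = (double) heap.getUsed() / heap.getMax();
            System.out.printf("heap: %d MB of %d MB (%.0f%%)%n",
                    heap.getUsed() >> 20, heap.getMax() >> 20, usedFraction * 100);
            if (usedFraction > 0.9) {
                System.err.println("WARNING: heap nearly full - queues may be backing up");
            }
            Thread.sleep(60000);
        }
    }
}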

Regards,

– Gato

Does the clustering you are putting into place take care of the database clustering as well?

Regards,

Anthony

Hello!

The reason for the question is load-balancing and not fail-over.

Since we piggyback our own data on XMPP messages (as an extension), we anticipate high bandwidth usage between some clients. We just want to find out whether there is a way to scale the architecture hardware-wise if (or when…) this becomes a problem.
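
For context, the piggybacking is simply a custom packet extension attached to each message, roughly like this (Smack API; the element name and namespace below are placeholders, not our real ones):

import org.jivesoftware.smack.packet.DefaultPacketExtension;
import org.jivesoftware.smack.packet.Message;

public class PayloadSketch {
    public static Message buildMessage(String toJid, String payloadChunk) {
        Message msg = new Message(toJid, Message.Type.chat);
        msg.setBody("ping");
        // Placeholder element name and namespace for the piggybacked data.
        DefaultPacketExtension ext =
                new DefaultPacketExtension("data", "urn:example:mydomain:payload");
        ext.setValue("chunk", payloadChunk);
        msg.addExtension(ext);
        return msg;
    }
}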

I know you have been testing with up to 50K users per server (with connection managers), but I assume that the actual data sent between the clients was fairly small? What if each client sends packets of 1K-2K in size, maybe up to 1 packet per second? How many clients would the server support then? This is the real source of my question.
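
To put rough numbers on it (my own back-of-the-envelope arithmetic, not a measurement):

public class BandwidthEstimate {
    public static void main(String[] args) {
        int clients = 50000;         // the 50K-per-JVM figure mentioned above
        int bytesPerPacket = 1536;   // mid-point of the 1K-2K range
        double packetsPerSecond = 1.0;

        double inboundBytesPerSec = clients * bytesPerPacket * packetsPerSecond;
        // Each packet is also delivered to a recipient, so roughly double it on the way out.
        double totalBytesPerSec = inboundBytesPerSec * 2;

        System.out.printf("inbound:  ~%.0f MB/s%n", inboundBytesPerSec / (1024 * 1024));
        System.out.printf("in + out: ~%.0f MB/s (~%.0f Mbit/s)%n",
                totalBytesPerSec / (1024 * 1024), totalBytesPerSec * 8 / 1e6);
    }
}

That works out to something on the order of a gigabit per second of message traffic through a single server, before counting any processing overhead, which is why I am asking.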

Thanks for any info,

Peter

I saw a comment about adding throttling. When will this be added? (or is it already there?)

– Mick

To solve purely the scalability issue, I tried using a different domain name for each physical server.

All the servers are connected together through s2s.

Of course, my usage may be different. For example, I write my own client API to hide the domain-name part of the ID.
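
Concretely, the client API just maps a plain user name onto one of the per-server domains, something like this (the domain list and the hash-based choice are only a sketch; it could equally be a directory lookup):

public class JidSharder {
    // Placeholder per-server domains, each hosted by its own Openfire instance and linked via s2s.
    private static final String[] DOMAINS = {
        "1.mydomain.com", "2.mydomain.com", "3.mydomain.com"
    };

    /** Callers pass a bare user name such as "chubby"; the shard is chosen deterministically. */
    public static String toFullJid(String username) {
        int shard = (username.hashCode() & 0x7fffffff) % DOMAINS.length;
        return username + "@" + DOMAINS[shard];
    }
}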

Is this a proper way to provide scalability?

Regards,

Peter

Did you use the same database for your set-up?

If so, did it work smoothly with Openfire starting up and accessing the tables? Any conflicts?