Number of threads per connection when using Smack 3.3.1?

We are using Openfire 3.8.0 and Smack 3.3.1 (Linux, CentOS 6.3). With ~750 users connected to Openfire through JBoss via Smack, we are seeing ~2,251 Smack-related threads in JBoss (750 * 3 + 1). We are using Hazelcast for clustering, and there are 3 nodes in the cluster.

Should I be concerned about the number of threads being used in this scenario?

Also, one-third of the threads are RUNNABLE, two-thirds are WAITING, and the one extra thread is in the TIMED_WAITING state.

TIA for any information/thoughts on this!

Mike

Well, there is a reader thread and a writer thread per connection; at the moment I can't think of where the third one comes from. There is then one extra thread (not connection based) used by the keepalive manager.

You are seeing 3 threads per connection plus 1 extra (the keepalive).

I can’t tell you if you should be concerned or not, as that is much more an issue with your server than anything else. I would expect that these threads will be idle most of the time.

Unfortunately, Smack wasn’t really designed with this scenario in mind so it doesn’t share resources like threads across connections. It should still work fine, as long as the number of threads isn’t a problem on your hardware.

Update: the third thread is also created by the reader, for processing packet listeners.
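
If you want to verify the count yourself, a quick snippet along these lines will list the live threads and their states. I'm assuming here that the per-connection threads carry "Smack" in their names (which is what your own thread dumps seem to suggest); adjust the filter if yours are named differently, and in your app server you'd run this from a servlet or JMX bean rather than a main method.

```java
import java.util.Map;

public class SmackThreadCount {
    public static void main(String[] args) {
        int smackThreads = 0;
        // Walk all live threads and report the Smack-related ones by name and state.
        for (Map.Entry<Thread, StackTraceElement[]> entry : Thread.getAllStackTraces().entrySet()) {
            Thread t = entry.getKey();
            if (t.getName().contains("Smack")) {
                smackThreads++;
                System.out.println(t.getName() + " -> " + t.getState());
            }
        }
        // With N open connections you would expect roughly 3 * N + 1 threads here
        // (reader, writer, and listener processor per connection, plus the keepalive).
        System.out.println("Smack-related threads: " + smackThreads);
    }
}
```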

rcollier:

First, thank you for your very helpful response! The previous version of Smack that we were using (3.2.1) did not use 3 threads per connection (more like 1 per connection), so seeing the thread count triple was disconcerting.

Second, we are providing indirect browser-based access to Openfire through JBoss. Are there alternative implementations/designs that I should consider? (We do not want to provide direct access to Openfire - it will still have to be browser-based and indirect.)

TIA,

Mike

That is odd; the older version actually has more threads, 4 per connection, since it has a keepalive thread per connection. As I mentioned, there is a single shared keepalive thread as of 3.3.

Sorry, I didn't mean to mislead you, but I'm sure my original question and follow-up were confusing.

I just reviewed the code, and we only use Smack for certain admin-related functions (e.g., creating accounts, updating accounts, and presence).

The browser-to-app-server communication is handled by the JHBServlet, which uses raw sockets to connect to Openfire. I would like to modify the JHBServlet to use Smack (rather than sockets), roughly as sketched below. However, I am also wondering whether there are better ways of exposing Openfire indirectly to browser-based clients through our app server.
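
For reference, the direction I'm considering looks something like this: a short-lived Smack connection per admin operation, with everything torn down in a finally block so the per-connection threads don't linger. The host name and credentials are placeholders, and I haven't run this against our actual setup yet.

```java
import org.jivesoftware.smack.ConnectionConfiguration;
import org.jivesoftware.smack.XMPPConnection;
import org.jivesoftware.smack.XMPPException;
import org.jivesoftware.smack.packet.Presence;

public class AdminTasks {

    public static void createUser(String username, String password) throws XMPPException {
        // Placeholder host/port for our Openfire cluster.
        ConnectionConfiguration config = new ConnectionConfiguration("openfire.example.com", 5222);
        XMPPConnection connection = new XMPPConnection(config);
        try {
            connection.connect();

            // Create the account via Smack's AccountManager (in-band registration);
            // this requires registration to be enabled on the Openfire side.
            connection.getAccountManager().createAccount(username, password);

            // Optionally log in as the new user and announce availability.
            connection.login(username, password);
            connection.sendPacket(new Presence(Presence.Type.available));
        } finally {
            // Disconnecting releases the per-connection reader/writer/listener threads.
            connection.disconnect();
        }
    }
}
```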

TIA,

Mike

After posting the above, I am even more concerned about the number of Smack-related threads on our app servers using Smack 3.3.1.

We only use Smack in a few admin-related scenarios (add/update account, userExists), and yet we have a very large number of Smack-related threads (two-thirds of which are not in a RUNNABLE state).
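
For the record, the breakdown I'm quoting comes from a quick ThreadMXBean dump along these lines (nothing Smack-specific, just a count of all JVM threads per state):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.EnumMap;
import java.util.Map;

public class ThreadStateBreakdown {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        Map<Thread.State, Integer> byState = new EnumMap<Thread.State, Integer>(Thread.State.class);
        for (ThreadInfo info : mx.getThreadInfo(mx.getAllThreadIds())) {
            if (info == null) {
                continue; // a thread may have died since the ID snapshot was taken
            }
            Thread.State state = info.getThreadState();
            Integer count = byState.get(state);
            byState.put(state, count == null ? 1 : count + 1);
        }
        // On our app servers the bulk of the threads show up as WAITING.
        System.out.println(byState);
    }
}
```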

I plan to downgrade back to 3.2.1 until I can figure out what’s going on with the current release.

Thanks,

Mike

Are you actually experiencing problems due to the high thread count? ("thread count" <---- makes me think of buying new bed linens!)

Or is it just a concern because it's not what you noticed before?

Modern hardware and operating systems are capable of handling several hundred thousand threads at a time. It's common for a process or program to consume a large number of threads during its runtime, and since Openfire is a server, that "runtime" is however long your server is running.

I can say that I've met others on the forums running installations easily twice the size of yours without problems, other than the normal large-scale scaling issues such as memory, etc., which are generally fixed by throwing hardware at the problem: more RAM, or clustering with other servers.

Anyway, to bring us back around: what is the actual problem you are experiencing?

Hi Jason, thanks for responding!

The Openfire servers (we have three, and they are clustered) are running just fine.

The application servers (running JBoss 7.2.x with Oracle JRE 7) are experiencing an ever-growing number of Smack-related threads, two-thirds of which are not in a RUNNABLE state. Yesterday morning, with approximately 5,000 total threads in the JVM, an app server threw an OutOfMemoryError (and rolled over and died).
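
One stopgap I'm considering (purely a sketch at this point; the "xmppConnection" session attribute name is made up, since I haven't dug into how JHBServlet stores its state) is tying each Smack connection to the HTTP session, so its threads are released when the session goes away:

```java
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;

import org.jivesoftware.smack.XMPPConnection;

public class XmppSessionCleanupListener implements HttpSessionListener {

    public void sessionCreated(HttpSessionEvent event) {
        // Nothing to do on creation.
    }

    public void sessionDestroyed(HttpSessionEvent event) {
        Object attribute = event.getSession().getAttribute("xmppConnection");
        if (attribute instanceof XMPPConnection) {
            // Disconnecting lets Smack shut down the reader, writer, and
            // listener-processor threads tied to this connection.
            ((XMPPConnection) attribute).disconnect();
        }
    }
}
```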

Thanks,

Mike

And just checking, but are you certain you are using Smack 3.3.1, not Smack 3.3.0?

3.3.0 had a resource leak problem that was fixed in 3.3.1… I'm wondering if that may be what you are experiencing?

The vast majority of threads would be expected to be in a WAITING state, since they are waiting for incoming and outgoing packets. With a low amount of activity, the threads should be idle, waiting for something to process.

Not sure if this is relevant, but Spark uses Smack 3.3.1 and doesn't experience this issue. Here's a screenshot from a Spark instance that's been running for a few days now without a restart. I just fired up the profiler, so it only shows the past few minutes… but as you can see, there's only a "normal" number of threads in use, as expected. Also, as rcollier noted, most are in the WAITING state, since they are waiting for events and so on.

EDIT: To avoid confusion, I should say "my [custom] Spark uses Smack 3.3.1."

That is only a single connection, though. The OP has hundreds of connections from a web server, so he would see 3 of the Smack threads you show for each connection, plus the keepalive one.

With the exception of the keepalive, though, the code affecting the number of threads hasn't changed, so I don't see how it was better prior to version 3.3. If anything, it should be worse.