Wildfire and handling of external components

Hi there,

I'm using a modified Smack library to build an external component, a transport. The library was first mentioned here:

It connects, subscribes, and sends stuff back and forth, but I've found some quirky issues which may be related to Wildfire. After a while (an undefined period), the component no longer receives things like messages or presence stanzas from the server. Messages and presence stanzas sent from the legacy network still arrive fine, and Wildfire says the component session still exists (otherwise legacy-to-XMPP traffic couldn't work). I have not been able to find a cause for this in the modified Smack (it's still closed source), but I have determined that its processPacket method is not being called. Any ideas?

Another problem I'm having is related to broadcasting presence information. To send presence info for a given legacy user (to whom someone has subscribed), I have tried:

  • Sending without a 'to' (doesn't arrive anywhere);

  • Sending to a bare JID (arrives only at the highest-priority logged-in session; it should arrive everywhere);

  • Sending to all full JIDs I know (all stanzas still arrive at the highest-priority session).

The latter two cases look like incorrect handling of presence stanzas, if I've read the RFCs right. Any help on how to do this?
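For concreteness, here is roughly what the transport is emitting, reduced to a minimal sketch. The JIDs are placeholders, and I'm building raw strings here instead of using the library's Presence class, just to show the addressing variants I tried:

```java
// Minimal sketch of the presence stanzas the transport sends.
// JIDs are placeholders; real code would use the component
// library's Presence packet class rather than raw strings.
public class PresenceStanza {

    // Build a bare presence stanza; 'to' may be null (attempt 1).
    public static String presence(String from, String to) {
        StringBuilder sb = new StringBuilder();
        sb.append("<presence from=\"").append(from).append("\"");
        if (to != null) {
            sb.append(" to=\"").append(to).append("\"");
        }
        return sb.append("/>").toString();
    }

    public static void main(String[] args) {
        // Attempt 1: no 'to' at all (the stanza disappears).
        System.out.println(presence("legacyuser@legacy.network.tld", null));
        // Attempt 2: 'to' set to the subscriber's bare JID
        // (should reach every resource, but only one gets it).
        System.out.println(presence("legacyuser@legacy.network.tld",
                                    "user@xmpp.tld"));
    }
}
```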

I'll rephrase my most important question:

The question is: how can my component legacy.network.tld send presence for "legacyuser@legacy.network.tld" so that it arrives at all resources of the subscribed users? Things I've tried:

  • Send presence without a 'to', and with a 'from' set to the JID representation of the legacy user. This doesn't work.

  • Send presence with the same 'from', and a 'to' set to the bare JID of the person who should receive it.

Now, it seems to me one of these should work, but the first does nothing (the packet disappears), and the second only arrives at the highest-priority resource. This looks like a bug of some sort.

I tried sending to a directed (full-JID) resource, but the resource is ignored and the stanza still goes to the highest-priority resource. That doesn't seem logical either.

Anyone have any idea how I might solve this?
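To make the behaviour I expect concrete: as I read the RFCs, presence addressed to a bare JID should be delivered to every available resource, not just the one with the highest priority. Here is a toy in-memory model of the two behaviours, the correct fan-out versus what I am actually seeing (all class and method names are mine, not Wildfire's):

```java
import java.util.*;

// Toy model of bare-JID presence routing: one user with several
// logged-in resources. Per the RFCs, presence addressed to a bare
// JID should go to ALL available resources.
public class PresenceRouter {

    // resource name -> session priority
    private final Map<String, Integer> sessions = new HashMap<>();

    public void addSession(String resource, int priority) {
        sessions.put(resource, priority);
    }

    // Expected behaviour: every available resource gets a copy.
    public List<String> routeToBareJid() {
        return new ArrayList<>(sessions.keySet());
    }

    // The behaviour I am observing: only the top-priority session.
    public List<String> routeHighestPriorityOnly() {
        return sessions.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(e -> List.of(e.getKey()))
                .orElse(List.of());
    }
}
```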

So I dug some further, and found this section in Wildfire's source:

[[ file: org/jivesoftware/wildfire/handler/PresenceUpdateHandler.java:269 and further ]]

    // Foreign updates will do a reverse lookup of entries in rosters
    // on the server
    Log.warn("Presence requested from server "
            + localServer.getServerInfo().getName()
            + " by unknown user: " + update.getFrom());
    /*
    Connection con = null;
    PreparedStatement pstmt = null;
    try {
        pstmt = con.prepareStatement(GET_ROSTER_SUBS);
        pstmt.setString(1, update.getSender().toBareString().toLowerCase());
        ResultSet rs = pstmt.executeQuery();
        while (rs.next()) {
            long userID = rs.getLong(1);
            try {
                User user = server.getUserManager().getUser(userID);
                // ...
            }
            catch (UserNotFoundException e) { }
            catch (UnauthorizedException e) { }
        }
    }
    catch (SQLException e) { }
    */
It seems that the code that does exactly what I want, namely sending directed presence for non-local users (i.e. users subscribed to through a transport), has been commented out. This also means the comment "Foreign updates will do a reverse lookup of entries in rosters on the server" is no longer accurate, and I can't get this to work. This breaks the ability of legacy gateways to send presence updates in an XMPP-standard manner (that is, to all users who have that legacy user on their roster). Mind you, I'd rather be able to send it directed to a given bare JID, which would then route the presence update to all online resources.
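For what it's worth, this is roughly what I would expect that commented-out block to do, reduced to a toy in-memory model: find every local user whose roster contains the legacy JID, so a copy of the presence can be handed to each of them. All names here are my own; the real code does the lookup with the GET_ROSTER_SUBS SQL query against the roster table:

```java
import java.util.*;

// Toy sketch of the reverse roster lookup that the commented-out
// Wildfire code hints at. Class and method names are illustrative;
// Wildfire's real code queries the roster table via SQL.
public class ReverseRosterLookup {

    // bare legacy JID (lowercased) -> local users who have it on their roster
    private final Map<String, Set<String>> subscribers = new HashMap<>();

    public void addSubscription(String legacyJid, String localUser) {
        subscribers.computeIfAbsent(legacyJid.toLowerCase(),
                                    k -> new HashSet<>())
                   .add(localUser);
    }

    // Every local user who should receive presence from legacyJid.
    public Set<String> lookup(String legacyJid) {
        return subscribers.getOrDefault(legacyJid.toLowerCase(),
                                        Collections.emptySet());
    }
}
```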

Why was this commented out, and what's the best way to get transports sending presence?

Hey Frank,

I have just talked to Gato about this, and he is looking into it from both the protocol perspective and the technical Wildfire perspective. We'll get back to you soon about it.



Hey Frank,

An update from Gato: this does appear to be a bug. I have created an issue to track it, and we are working on a fix:




Hi Alex,

Thanks for the response; glad to see it's going to be taken care of. This causes problems with external, server-to-server (e.g. Google Talk) connected networks as well, so it'll be nice to see this fixed.

I see it's been marked as Critical and scheduled for 3.0.0. Do you see any chance of this making it into the 2.6 branch?



Hey Frank,

Gato has checked in a fix under a different, more encompassing issue we were having with presence handling:


The fix will be available for testing in the latest nightly builds.

Currently, with a major release we do not backport fixes to previous versions, mainly because we haven't seen a large demand for it. If in the future more people ask for this, we will definitely do it.

If you want to apply the fix yourself, you can check out the 2.6.2 branch from SVN and patch in the files checked in against the JM-735 issue. If you need any more help with this, let me know.



Hmm. Two questions related to this: when is the 3.0 branch expected to stabilize a bit, and/or are new versions expected in the 2.6 branch? If we patch 2.6.2 and later want to update to, say, 2.6.3, we'd have to re-sync our patched 2.6.2. That's going to be messy.

The bug Frank describes is quite severe if you're working within a community in which users are expected to have more than one resource available at a time. I'd rather not have to work with a local, patched 2.6 branch, but I really don't want to wait until 3.0 stabilizes if that's going to take more than a couple of days (as I guess it will).

Could you reconsider applying the 3.0 fix to the 2.6 branch?


Wildfire 3.0 is coming out tomorrow, so we're deep in the release process. We expect 3.0.x to become a stable release very quickly, as most of the changes for 3.0 involve either:

  1. Bug fixes

  2. Major new features that are somewhat isolated in the code that they affect, such as the plugin downloading framework.

After we get through the 3.0 release, we'd be happy to evaluate a patch to 2.6.x. I'm definitely not opposed to it; it's just a matter of time at the moment.

Best Regards,


Thanks, Matt. In the meantime we'll give 3.0 a go in our development domain.