
How to restrict external components to local openfire users

is there a way to restrict the usage of external components to local users of the openfire server (to which the external components are connected)?

there are external components (eg jmc, pyirct) that i do not want external users to use (ie i don’t want to be a spam or botnet relay).

i’m trying to migrate from jabberd 1.4 to openfire. with jabberd 1.4 i would just set the jid of the component to a non-dns-resolvable name (“jmc.localhost”): local users could use it (because the jabber server knew how to route to it over the established session without resorting to dns resolution), but external users could not (because their jabber server could not route to it without a dns-resolvable name). this approach does not seem possible with openfire: although the component sets the “to” attribute to “jmc.localhost”, openfire responds with a “from” attribute of “jmc.localhost.example.com” (assuming the openfire server is configured as “example.com”). i’ve looked over XEP-0114 and i can’t tell if this behavior is required (ie “The domain identifier portion of the JID contained in the ‘from’ attribute MUST match the hostname of the component.”).
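for context, the XEP-0114 connection the component makes looks roughly like this sketch (the stream id and secret below are placeholders; only the stream header shape and the SHA-1 handshake rule come from the spec):

```python
import hashlib

def component_stream_header(component_jid: str) -> str:
    """opening stream header an external component sends to the server;
    the component's jid ("jmc.localhost" in my case) goes in the 'to' attribute"""
    return (
        "<stream:stream xmlns='jabber:component:accept' "
        "xmlns:stream='http://etherx.jabber.org/streams' "
        f"to='{component_jid}'>"
    )

def handshake_element(stream_id: str, shared_secret: str) -> str:
    """per XEP-0114, the handshake value is SHA-1(stream id + secret), hex-encoded"""
    digest = hashlib.sha1((stream_id + shared_secret).encode("utf-8")).hexdigest()
    return f"<handshake>{digest}</handshake>"

print(component_stream_header("jmc.localhost"))
print(handshake_element("3BF96D32", "secret"))
```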

the problem is compounded by the fact that i’m currently stuck with wildcard subdomains (*.example.com) resolving to the same ip address as “example.com”, so any subdomain resolves to the “correct” dns-resolvable name.

is there a server property that would disable external users from communicating with external components?

the im gateway plugin doesn’t need this functionality/feature/work-around because it is only available to local users due to its integration with the server (i presume for storing user information in the server’s database). some external components implement ACL functionality, but some don’t.

I’m not aware of a specific setting in stock Openfire to prevent access to external components. When I was writing pyaimt and pyicqt, I had to do it myself in my own code. =) (not sure I ever got around to it either)

HOWEVER! I might have a solution for you. Take a look at Nate’s awesome Packet Filter plugin:

http://www.igniterealtime.org/community/community/plugins/packetfilter

You want version 2.0:

http://www.igniterealtime.org/community/docs/DOC-1370

Using it, you could set up two rules for each component jid:

  1. Pass, Packet Type: all, From: Other: *@myserver.org, To: Other: irc.myserver.org

  2. Reject, Packet Type: all, From: All, To: Other: irc.myserver.org

Repeat for each component you want to lock down.
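Conceptually, first-match evaluation of those two rules works something like this rough sketch (not the plugin’s actual implementation; the patterns are just the two rules above):

```python
import fnmatch

# rules are checked top to bottom; the first match wins, so the
# "pass local users" rule must come before the catch-all reject
RULES = [
    ("pass",   "*@myserver.org", "irc.myserver.org"),
    ("reject", "*",              "irc.myserver.org"),
]

def bare_jid(jid: str) -> str:
    # drop any resource part: "user@host/res" -> "user@host"
    return jid.split("/", 1)[0]

def decide(frm: str, to: str) -> str:
    for action, from_pat, to_pat in RULES:
        if (fnmatch.fnmatch(bare_jid(frm), from_pat)
                and fnmatch.fnmatch(bare_jid(to), to_pat)):
            return action
    return "pass"  # packets matching no rule fall through

print(decide("alice@myserver.org/home", "irc.myserver.org"))  # local user passes
print(decide("mallory@evil.example", "irc.myserver.org"))     # remote user rejected
```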

this is exactly what i was looking for! (actually, i was expecting a per-component checkbox/property, but this is much more powerful/reusable/flexible).

i’ll reply to this thread reporting my experience with the plugin after i have a chance to test it (by monday, hopefully).

Let me know if you run into any troubles getting this set up.

-Nate

no troubles whatsoever and it solved my problem perfectly.

well, a minor speedbump: in addition to the rules suggested by jadestorm, i had to allow component.example.com to speak to “all” (as that was easier than creating a “pass” rule for “component” to speak to each external component). that was easy to spot in the logs, since i turned logging on for all rules until i had tested them all.

so with two external components (email & irc) i had the following rules:

i tried *.example.com (in the third rule, though it would have been beneficial to combine the first two into "*@example.com -> *.example.com"), but it didn’t work. "*@example.com" is called out in the readme, but i thought i would try a different wildcard form (subdomain instead of username), and i guess it’s not supported. (bug or feature request?)

also being an engineer (by trade, xmpp server admin by hobby) i have this nagging desire to be more specific than “any packet type” for all my rules. i was hoping that turning on logging would show what packet type was triggering the rules, so i could tighten them (and learn/reinforce/memorize the packet type names in the process), but the log output didn’t report that detail. (feature request?)
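(for my own reference while tightening the rules: the packet types correspond to xmpp’s three stanza kinds — message, presence, iq. a trivial sketch of pulling the type out of a raw stanza, the way i’d want the log to report it:)

```python
import xml.etree.ElementTree as ET

def stanza_type(raw: str) -> str:
    # element name of the stanza, with any xml namespace prefix stripped
    return ET.fromstring(raw).tag.split("}")[-1]

print(stanza_type("<message to='irc.example.com'><body>hi</body></message>"))  # message
print(stanza_type("<presence/>"))                                              # presence
print(stanza_type("<iq type='get' id='1'/>"))                                  # iq
```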

would being more detailed (specifying the packet type, creating a specific rule for component to talk to each external component, avoiding wildcards, etc) make the rules any faster, or is the difference negligible (not that i have a latency or throughput requirement i’m failing, but just curious)?

thanks for the pointer, jadestorm, and for the packet filter plugin, nate.

Glad it worked out.

I haven’t really built out the syntax for specifying free form JIDs yet. It would make sense to be able to do *domain or *@domain. I’ll add it in the next release.

A good way to write rules, I’ve found, is to mimic the behavior you are trying to disable with Spark’s debug window running. The window will show you packets in realtime; from there you can see the packet type and to/from addresses.

Wildcarding should be faster than specifying a specific type. When you are blocking “any” packet, the packet filter doesn’t need to waste cycles seeing what type of packet it is; it is just dropped. Same thing for wildcard domains.
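That reasoning boils down to short-circuiting, roughly like this (illustrative only, not the plugin’s code):

```python
def rule_matches_type(rule_type: str, packet_type: str) -> bool:
    # "any" short-circuits: the left side of the `or` is true,
    # so the packet's actual type is never even compared
    return rule_type == "any" or rule_type == packet_type

print(rule_matches_type("any", "message"))       # True
print(rule_matches_type("presence", "message"))  # False
```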

-Nate

It would make sense to be able to do *domain or *@domain.

i thought about that when creating the rules: as long as "*example.com" is interpreted as "*.example.com or example.com" (eg “a subdomain or the domain itself”); otherwise “subvert-example.com” would match (using standard globbing semantics), though it’s probably not what the rule creator intended, as it’s another domain entirely.
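(what i mean, as a sketch — a hypothetical helper, not the plugin’s code: the domain itself or any of its subdomains should match, but not an unrelated domain that merely ends in the same string:)

```python
def domain_matches(pattern: str, domain: str) -> bool:
    if not pattern.startswith("*"):
        return domain == pattern
    # "*example.com" and "*.example.com" both reduce to base "example.com"
    base = pattern[1:].lstrip(".")
    return domain == base or domain.endswith("." + base)

print(domain_matches("*example.com", "example.com"))          # True
print(domain_matches("*example.com", "irc.example.com"))      # True
print(domain_matches("*example.com", "subvert-example.com"))  # False
```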

A good way to write rules, I’ve found, is to mimic the behavior you are trying to disable with Spark’s debug window running.

thanks for the suggestion. i haven’t messed with the spark client (psi has fulfilled all my requirements), but this might justify it. but this would only show the packets to/from the client, right? it wouldn’t show packets between component.example.com and the components?

Wildcarding should be faster than specifying a specific type.

Same thing for wild card domains.

good to know (as it is easier to ask you than analyze the source code or run benchmarks).

again, thanks for the packet filter plugin!