Spark and Multiple Connections to a Single Wildfire Server

First off, great products. Working wonderfully here.

I ran into a problem using a Wildfire server behind two internet connections. I have my DNS set up correctly (two IPs behind one DNS name), but the round-robin system isn’t switching IPs fast enough, or something along those lines. So my question is:

Is there a way to put multiple servers into Spark (like a prioritized list) to connect to my Wildfire server?

Thanks in advance for your input.

There is no such option in Spark.

What is it you are trying to accomplish? Is it just that your office (or wherever) has multiple WAN connections for a redundant HA setup? If so, this usually should be abstracted away from your servers…

Depending on your setup, the problem may actually be the configuration of your firewall(s)/gateway(s). If a connection request comes into your network on one public IP address (one of your WAN connections) but gets routed back out the other WAN connection, most programs will freak out and not work as expected, because a different server appears to be responding to their request.

When I worked with a multi-WAN setup at a company, we used pfSense’s CARP and other features to set up “sticky connections,” so that any request would be responded to from the same WAN connection it came in on.

Jason,

I completely agree with you, if that were the case. I thought maybe it was an issue where connection requests were coming down one pipe and then going back up the other, but I have the correct dynamic NAT set up in our firewall to accommodate the right IP/port routing. I was able to test around this by creating separate DNS entries for each pipe and testing them in Spark. Everything works fine if I use a dedicated DNS name pointing to one IP, but when I use the single DNS name pointing to both IPs and one of those IPs goes down, Spark just doesn’t seem to fail over to the other one.

I agree that this could probably be resolved with some DNS prioritization, but I was hoping Spark had a connection mechanism built in that would try a list of server names and simply move on to the next one if a connection attempt failed (something like the sketch below). My experience with DNS round-robin processing has never really been great.
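To illustrate what I mean, here is a rough sketch in plain Java of the behavior I’m after (not actual Spark or Smack code; `chat.example.com`, port 5222, and the timeout are just placeholders): resolve every IP behind the single DNS name and try each address in turn, only giving up when all of them are unreachable.

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class RoundRobinFailover {

    /**
     * Resolve every A record behind a single DNS name and try each
     * address in turn, returning the first socket that connects.
     */
    public static Socket connectWithFailover(String dnsName, int port, int timeoutMs)
            throws Exception {
        InetAddress[] addresses = InetAddress.getAllByName(dnsName); // all round-robin IPs
        Exception lastFailure = null;
        for (InetAddress address : addresses) {
            Socket socket = new Socket();
            try {
                socket.connect(new InetSocketAddress(address, port), timeoutMs);
                return socket; // first address that answers wins
            } catch (Exception e) {
                socket.close();
                lastFailure = e; // remember why it failed, then move on to the next IP
            }
        }
        throw new Exception("All addresses for " + dnsName + " failed", lastFailure);
    }

    public static void main(String[] args) throws Exception {
        // chat.example.com and 5222 are placeholders for the real server name and XMPP port
        Socket s = connectWithFailover("chat.example.com", 5222, 5000);
        System.out.println("Connected to " + s.getInetAddress());
        s.close();
    }
}
```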

Any other ideas would be great… maybe this would be a nice feature in the future for Spark. I don’t think it would be tough to implement.

Hmm, from the sound of it, you have got things configured OK (of course I say that without seeing it myself! lol).

Given that everything else works as it should and there are no other underlying issues, it comes down to the fact that Spark simply doesn’t have this ability (yet). I think what you are after is more along the lines of a connection pool of sorts: if one address in the pool fails to resolve, times out, or otherwise errors, try the next address, and so on (see the sketch below). This could probably be implemented in Spark as a plugin, with no changes needed on the Openfire side of things. If you have a little Java know-how (or some spare time), maybe take a crack at it.
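To make the pool idea a little more concrete, here is a rough sketch in plain Java of what such a plugin could do (not Spark or Smack API; the `wan1`/`wan2` host names and timeout are made up): keep a prioritized list of host:port entries and walk down it until one accepts a connection.

```java
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.Arrays;
import java.util.List;

/**
 * Sketch of a prioritized "connection pool": walk an ordered list of
 * server endpoints and hand back the first one that accepts a TCP
 * connection within the timeout.
 */
public class PrioritizedServerList {

    private final List<InetSocketAddress> servers;
    private final int timeoutMs;

    public PrioritizedServerList(List<InetSocketAddress> servers, int timeoutMs) {
        this.servers = servers;
        this.timeoutMs = timeoutMs;
    }

    /** Returns the first endpoint that is reachable, or null if none are. */
    public InetSocketAddress firstReachable() {
        for (InetSocketAddress server : servers) {
            try (Socket probe = new Socket()) {
                probe.connect(server, timeoutMs);
                return server; // highest-priority server that answered
            } catch (Exception ignored) {
                // unreachable or timed out -- fall through to the next entry
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // wan1/wan2 host names are placeholders for the two per-pipe DNS entries
        PrioritizedServerList pool = new PrioritizedServerList(Arrays.asList(
                new InetSocketAddress("wan1.example.com", 5222),
                new InetSocketAddress("wan2.example.com", 5222)), 5000);
        System.out.println("Would connect to: " + pool.firstReachable());
    }
}
```

A plugin along these lines would presumably hand the winning host to the normal login routine instead of relying on whichever IP the round-robin lookup happens to return first.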

Jason,

That would be a great idea; unfortunately, I have little Java coding experience. It would probably take me far too many man-hours, whereas someone with the right skill set could whip it up in a few. I wonder if I should just submit it as a recommendation/suggestion. Appreciate all your help anyhow.

OK, I filed this as a Feature Request and suggested it be a plugin (Sparkplug) instead of core Spark functionality; that way, users who don’t have this situation don’t need to bother configuring it.

SPARK-1556

At the moment there isn’t a ton of new development happening with Spark, as our resources are spread thin and contributors are very busy people! But hopefully someone will pick this ticket up and run with it!
