Already started a little

Hi,

I've already started working a little on this project for my exam, following Matt's idea.

Well, I started looking into how to design a routing protocol, so I searched around a bit on the internet and found the jabberd2 routing protocol:

http://jabberd.jabberstudio.org/dev/docs/component.shtml

I talked with rod, who wrote it, and he told me it's really specific to each server's (software) architecture.

I also talked with Artur, who develops the Tigase Jabber server and wants to build something similar, so he has good reasons to work on this. We found some things that aren't needed, and some things that are not implemented yet.

But this unofficial JEP is not only for C2S connections; it can be used for every kind of connection a server can accept.

I see Pampero as being only for C2S, because that is where most of the server's CPU load comes from. However, I'm open to anything if we can make a really good JEP!

That's all from me for the moment.

Glad to have you here! One thing I've never understood from that proto-JEP is what the purpose of the route tag is. I suppose it's because (from the JEP):

"The router will treat the payload as an opaque XML chunk. All routing decisions are based solely on the route 'to' attribute."

Anyway, it makes more sense to me for the component to do a special bind to the main domain of the server and then work transparently after that (no route tags needed).
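
Just to make that "opaque chunk" idea concrete, here is a rough sketch of what wrapping a stanza in a route element could look like. It assumes dom4j (which the server already uses for packets); the class and method names are purely hypothetical:

    import org.dom4j.DocumentHelper;
    import org.dom4j.Element;

    public class RouteWrapper {

        // Wrap an already-parsed stanza in a <route/> element. Per the proto-JEP,
        // the router only looks at the route "to" attribute and treats the
        // payload as an opaque XML chunk.
        public static Element wrap(Element stanza, String to, String from) {
            Element route = DocumentHelper.createElement("route");
            route.addAttribute("to", to);
            route.addAttribute("from", from);
            route.add(stanza.createCopy());
            return route;
        }
    }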

Have you guys discussed what you see as the best approach so far?

Regards,

Matt

Well, at first I was thinking along almost the same lines as you,

but with something like this after the TLS/SASL negotiation, as documented by JEP 114:

where the id represents a connection id, used to easily dispatch each packet to the right client, and to let the server dispatch to the internal object (for object-oriented servers) that represents each user.

This is the simplest method.
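
As a rough sketch of that dispatch (all names here are hypothetical, just to show the idea of routing on a connection id rather than on a JID):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class ConnectionIdDispatcher {

        // Stand-in for whatever internal object represents a connected user.
        public interface ClientSession {
            void deliver(String stanzaXml);
        }

        // connection id -> the object representing that user's session
        private final Map<Long, ClientSession> sessions = new ConcurrentHashMap<Long, ClientSession>();

        public void register(long connectionId, ClientSession session) {
            sessions.put(connectionId, session);
        }

        public void unregister(long connectionId) {
            sessions.remove(connectionId);
        }

        // Dispatch a stanza received from the connection manager, using only the id.
        public void dispatch(long connectionId, String stanzaXml) {
            ClientSession session = sessions.get(connectionId);
            if (session != null) {
                session.deliver(stanzaXml);
            }
            // else: the session is gone; an error would have to be routed back to the CM
        }
    }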

But the protocol used by jabberd2 has some good features, like the management of the different services. And it could be extended to reduce CPU load further, for example by handling service discovery, broadcasting, or other things that don't need authentication.

Using the protocol discussed first would not be a problem for me development-wise, but I think we have to talk a little about the goal of the project, the protocol, and what is really needed…

The forum is open.

Glad to see we're not alone.

One thing that strikes me as problematic is the use of from and to. If you're concentrating connections then the from doesn't make much sense, because it's no longer the client but the intermediary. The to works as long as there is one server (not a couple in a cluster).

I think this should be changed to a session id, with the connection manager maintaining the to/from pair in memory (or replicated to another node). It also provides a way for nodes and servers to communicate that is optimized for clustering and failover. The connection manager can determine the server it wants to go to regardless of what is in the "to" field. Different port numbers or clustering schemes would make the "to" field a moniker for the messenger cluster.

client --> connection manager 1 (wrap in client
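
Something like this, as a very rough sketch (names made up), is how I imagine the connection manager side of it: the real to/from pair lives in a table keyed by session id, and that table, not the stanza's "to" attribute, decides which node gets the packet:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class CmSessionTable {

        static final class Entry {
            final String clientJid;   // the real "from": the client behind this session
            final String serverNode;  // which core server / cluster node owns it
            Entry(String clientJid, String serverNode) {
                this.clientJid = clientJid;
                this.serverNode = serverNode;
            }
        }

        // session id -> to/from pair, kept in memory (and replicable to another node)
        private final Map<String, Entry> entries = new ConcurrentHashMap<String, Entry>();

        public void bind(String sessionId, String clientJid, String serverNode) {
            entries.put(sessionId, new Entry(clientJid, serverNode));
        }

        // The routing decision comes from the table, regardless of the stanza's "to".
        public String nodeFor(String sessionId) {
            Entry e = entries.get(sessionId);
            return e == null ? null : e.serverNode;
        }
    }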

Noah

In this protocol, the from is used to know which "service" sent the packet.

However it could also be used to easily identify the user, as in "the packet is from someone@someserver.org/someresource", so the connection manager could perhaps find it more easily. But for security reasons I'd prefer to have only a connection id, so that if someone manages to intercept and decrypt the traffic between the connection manager and the server, he still won't know who is talking.

However, that may not suit everybody; it depends how you prefer to work: with an id or with a JID?

Personally, I see the negotiation between the server and the manager going like this:

// Here the TLS/SASL negotiation

CM: Hi, I'm a connection manager.

S: Oh really? Can I see your passport?

CM: Yes, here it is.

S: Good, from now on we talk encrypted.

// Now during the stream

CM: Hey! I have a new connection from a user.

S: OK, I give you this id: 32423972 (for example).

CM: The connection with id 32423972 said this:

" tag, and for the differents errors which are needed, if a message can’'t be delivered.

I think we're on the same page. The proto-JEP uses a from/to. What I/we're proposing is a connection id to be assigned in lieu of that information. Essentially

Is it required that the server originate that connection id?

Noah

I think it's better that way, because the server must be the manager of everything; it must keep control of all the elements.

Here's an example: imagine there are 2 or 3 connection managers. How can we know that two connection managers haven't given the same connection id to two different connections? We could check with some tests, but that costs a bit more CPU; whereas if we generate these numbers on the server, from a timestamp or something similar, we get unique numbers.

Making sure that two connection managers aren't using the same id can be done two ways:

  1. The server can assign a unique id prefix to each connection manager, so that 1234_456 is from connection manager 1234 and 7890_456 is from connection manager 7890.

  2. Or the server can simply keep track of which id belongs to which connection manager on its own.

Yes, or a third way:

where cmid is the connection manager id

and connid the connection id…
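
A small sketch of that third way, with hypothetical names (the server hands each connection manager its cmid during the handshake, and the CM then numbers its own connections):

    import java.util.concurrent.atomic.AtomicLong;

    public class ConnectionIdFactory {

        private final String cmid;                 // assigned by the server, e.g. "1234"
        private final AtomicLong connid = new AtomicLong();

        public ConnectionIdFactory(String cmid) {
            this.cmid = cmid;
        }

        // e.g. "1234_456": unique across all CMs without any per-packet coordination
        public String nextConnectionId() {
            return cmid + "_" + connid.incrementAndGet();
        }
    }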

At the end of the day, aren't we just talking about a simple hashmap lookup? We'll have the user's JID, and it should be a very fast operation to look up which connection manager their session lives on. How does the route packet really save us effort?
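
In other words, something along these lines (a rough sketch, names hypothetical):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class SessionRoutingTable {

        // full JID -> the connection manager currently holding that user's session
        private final Map<String, String> cmByJid = new ConcurrentHashMap<String, String>();

        public void sessionOpened(String fullJid, String connectionManagerId) {
            cmByJid.put(fullJid, connectionManagerId);
        }

        public void sessionClosed(String fullJid) {
            cmByJid.remove(fullJid);
        }

        // One hash lookup per outgoing packet; returns null if the user is offline.
        public String connectionManagerFor(String fullJid) {
            return cmByJid.get(fullJid);
        }
    }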

Because of the way JM is implemented, we'll have to fully parse the incoming XML every time (so that packet interceptors, etc., will be executed). Therefore, it won't be possible to do optimizations like sending a message packet from one connection manager to another without ever parsing what's inside the route element.

-Matt

Do you really think we'll have to parse each packet?

"Do you really think we'll have to parse each packet?"

I don't really see a way around it. I've actually been looking into whether we could decrease the burden of XML parsing on the core server. One option may be Fast Infoset:

or AgileDelta Efficient XML:

http://www.agiledelta.com/products.htm#Efficient%20XML%20Encoding

-Matt

I was thinking of FastInfoset between the CM and the server, reducing the bandwidth requirements and the memory footprint on the CM. I haven't given much thought to the server, but any optimizations we make in the CM will probably be candidate upgrades for the server. Once the smoke has cleared, of course.

Noah

I pictured the connection manager as a blind man: it takes the received packet, sets the route tag and sends it on to the core server.

Then the core server looks inside the route tag and sees which packet it needs to handle.

I think you guys may be thinking about this problem in the wrong direction. It's very easy to scale by adding more connection managers but very, very hard to scale by adding more core routers (we won't even have clustering for some time). Therefore, the more work the connection manager can do, the better. That means it should verify the XML as much as possible and do anything else it can to lighten the load on the core server.

-Matt

Yes, I'm not against this idea; it then depends on what the connection manager will do with the packet after reading it, because if it doesn't do anything with it, why read it at all?

Agree with Matt, so long as the core server can still function standalone. I am sure that is his intent.

So in fact what we are talking about is duplicating some of the core server functionality in the connection manager.

I was interested to hear that you are thinking about using a different on-the-wire format for connection manager <-> core server comms. Before we settle on a particular format, I think it would be prudent to isolate this as much as possible and make it pluggable; that way we can start with the existing format and also have a look at the basic zip format. Someone has already written the zip format and is waiting for feature negotiation to be added to the core server. Perhaps feature negotiation is a way to resolve the format at run time; this would be a very powerful deployment feature, combined with pluggable IO techniques (à la EmberIO).

Feature negotiation should be part of the CM<->Server protocol.
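
As a sketch of what "pluggable" could mean concretely (interface and names hypothetical), the format agreed on during feature negotiation would simply select one of these implementations:

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    // Hypothetical plug point for the CM <-> core server wire format.
    // Implementations: plain XML, zip-compressed XML, FastInfoset, ...
    public interface WireFormat {

        // Name advertised during feature negotiation, e.g. "xml", "zip", "fastinfoset".
        String name();

        // Serialize one stanza (kept as an XML string here to stay format-agnostic).
        void writeStanza(String stanzaXml, OutputStream out) throws IOException;

        // Read and decode the next stanza from the peer.
        String readStanza(InputStream in) throws IOException;
    }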

I've been mulling over the on-the-wire format mostly because I was trying to determine an efficient way to store each connection buffer on the CM in memory. I'm not sure how much memory is required per connection, but the one thing that most c10k servers don't do is buffer much. All non-Java implementations (read: C/C++ with direct access to the system APIs, AIO or Win32 IOCP) get to rely on the system telling them when socket and file IO are ready to go, then they shuffle the bytes between the disk/socket using a capped buffer.

I think (again, feedback is welcome) that the CM needs to store at least the complete stanza in memory before delivering it to the server… or at least enough to determine routing or priority, and then deliver. If so, then an efficient means of storing the XML in memory is required to scale. I'd prefer to investigate FastInfoset first (obviously once the regular XML is working… very important to baseline) and the other later. Obviously there's no point in losing the encoding benefit when you transmit it across the wire.

Are you referring to EmberIO's ability to switch from single-threaded sockets to NIO if load demands it? That is very cool and something to investigate as a feature.

Noah

Yeah, there was some discussion a while back regarding NIO, and EmberIO was mentioned. I was thinking more along the lines of allowing a deployment configuration which could specify a socket per connection (standard IO) or a configurable socket pool (NIO). This would allow the admin to fine-tune his configuration.

As for dynamically switching between the two, I hadn't thought of it. Might be a bit dangerous! It would assume a lot about the environment, but it could be another useful feature.

Conor.

I think what Mike was after in EmberIO was that for a low volume of connections, say <50, NIO was overkill, so EmberIO would use the single-threaded model below that threshold. Beyond that threshold, it would switch to NIO.

Regardless, there are already lessons learned from these projects that should be incorporated where it makes sense. One thing Mike talked about was that most implementations call read() once before returning the selectable key back to the selector, and this was causing most NIO code to perform horribly. What he found is that Sun's JVM would return 1 byte, and on the next read() would return a whole heap of bytes to process. Mike's suggestion: try calling read more than once per select.

It all sounds theoretical, until I was helping Gato with the 2.3.0 TLS code base. The exact scenario Mike was talking about, with regard to reading once and returning to select, was happening: watching the debugger you'd see read() return 1, 1, 177 and so on. Just using NIO won't necessarily get you the glories of scalability.
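
For reference, a minimal sketch of the "read more than once per select" idea (nothing server-specific, just plain NIO; the class and method names are made up):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;

    public class DrainingReader {

        // Keep calling read() until the socket is drained (or the buffer is full),
        // instead of reading once and going straight back to the selector. This is
        // what avoids the 1, 1, 177... pattern costing one select() per few bytes.
        static int readAvailable(SocketChannel channel, ByteBuffer buffer) throws IOException {
            int total = 0;
            while (buffer.hasRemaining()) {
                int n = channel.read(buffer);
                if (n < 0) {
                    // peer closed the connection; report EOF only if nothing was read
                    return total == 0 ? -1 : total;
                }
                if (n == 0) {
                    break; // nothing more available right now, back to the selector
                }
                total += n;
            }
            return total;
        }
    }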

Noah