I am using Wildfire 3.0 and testing it with stream compression enabled. When the client sends a fairly large packet (> 8 KB), the server only reads part of it; the rest is returned to the upper layer together with the next packet. The flush mode of the ZInputStream is JZlib.Z_FULL_FLUSH. Has anyone seen this problem?
I send some large compressed packets to Wildfire. The server uses ZInputStream as its InputStream. At first the small packets are read correctly, but a large packet is divided into two parts: the first part is read correctly and the rest is not. When I then send a small packet to the server, the rest of the large packet and the small packet are read out together.
The server doesn't lose data, but the data is delayed by ZInputStream. I suspect it is because of the ZInputStream's flush mode, but I tried both Z_FULL_FLUSH and Z_PARTIAL_FLUSH and the behavior is the same.
Perhaps what happened to me is relevant to this issue:
I was working on my own Jabber client (not using Smack), and everything worked fine until I added JZlib compression. Messages DO get sent and received, but some messages do not seem to be flushed from the stream.
E.g.
User A sends msg "1", User B doesn't receive anything.
User A sends msg "2", User B receives "1".
User A sends msg "3", User B receives "2" and "3".
I tried using all the different flush modes in JZlib streams, but the problem either persisted or got worse.
Finally I downloaded Wildfire's source code, changed all "setFlushMode(JZlib.Z_PARTIAL_FLUSH)" to "setFlushMode(JZlib.Z_FULL_FLUSH)", and the problem disappeared.
Am I doing something wrong here? Any advice is appreciated.
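The "one message behind" behavior above can be reproduced with the JDK's own java.util.zip classes, which mirror zlib's flush semantics (JZlib is a Java port of zlib). This is only an illustrative sketch, not Wildfire code: the class name FlushDemo and the sample stanza are made up, and the JDK exposes SYNC_FLUSH/FULL_FLUSH rather than JZlib's Z_PARTIAL_FLUSH constant. It shows that without a flush, the compressed bytes for a small message sit in the deflater's internal buffer and the peer decodes nothing until later data pushes them out:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class FlushDemo {
    public static void main(String[] args) throws DataFormatException {
        byte[] msg = "<message to='b@example.com'><body>1</body></message>"
                .getBytes(StandardCharsets.UTF_8);
        Deflater def = new Deflater();
        Inflater inf = new Inflater();
        byte[] comp = new byte[512];
        byte[] out = new byte[512];

        // 1) Compress with NO_FLUSH: almost all compressed bytes stay
        //    buffered inside the deflater, so the receiving inflater
        //    cannot reconstruct the message yet.
        def.setInput(msg);
        int n = def.deflate(comp, 0, comp.length, Deflater.NO_FLUSH);
        inf.setInput(comp, 0, n);
        int decoded = inf.inflate(out);
        System.out.println("before flush: " + decoded + " bytes decoded");

        // 2) SYNC_FLUSH forces the deflater to emit everything buffered,
        //    terminated on a byte boundary, so the inflater can decode
        //    the whole message immediately instead of one message late.
        int m = def.deflate(comp, 0, comp.length, Deflater.SYNC_FLUSH);
        inf.setInput(comp, 0, m);
        decoded += inf.inflate(out, decoded, out.length - decoded);
        System.out.println("after flush:  "
                + new String(out, 0, decoded, StandardCharsets.UTF_8));

        def.end();
        inf.end();
    }
}
```

Whichever flush constant is used, the key point is that the writing side must issue *some* flush after each complete stanza; the delayed-message symptom appears whenever compressed output is left sitting in the deflater's buffer.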
It has been a while since I implemented stream compression. I remember having played with both options and for some reason deciding to go with the PARTIAL mode. Anyway, you should know that we are now evaluating MINA as our networking-layer framework, and based on what I read it only supports the PARTIAL mode. Given that, I would suggest changing your client to use PARTIAL mode. FYI, that is what our Smack (and Spark) clients do.