
PoC for DoS

The attached source code is a mess, as I had a lot of trouble creating zlib-compressed data. But finally it did work, and my Openfire process reported at least a GC-overhead OOM. Even though the code triggers this error, I reviewed it again and it seems to use zlib completely wrong …

Even the wrongly compressed packets (gzip, deflate) caused high CPU usage, so one should also fix this and close connections if decompression fails.

The real fix, announcing the availability of compression only after authentication, is still missing. I added some comments/fixes at the end of OpenfireDOS.java to make sure that compression fails if the session is not authenticated.

One other evil thing is that the warn log explodes when it logs:
2014.02.27 21:26:44 org.jivesoftware.openfire.nio.ConnectionHandler - Closing session due to exception: (SOCKET, R: /127.0.0.1:53494, L: /127.0.0.1:5222, S: 0.0.0.0/0.0.0.0:5222) org.apache.mina.filter.codec.ProtocolDecoderException: java.lang.Exception: Stopped parsing never ending stanza (Hexdump: 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 [and so on]

Updated src - one may detect even more bugs with it.

One big issue is in XMLLightweightParser.read(): we throw an exception if the buffer limit is reached, but due to the asynchronous handling read() is called again and again, causing an exception storm. And this storm seems to be the root cause of the memory (performance) issues.

I did test on localhost, so I had enough bandwidth available. My fix:

boolean bufferOverflow = false;

public void read(ByteBuffer byteBuffer) throws Exception {
     if (bufferOverflow) {
          // A previous call already overflowed: discard all further data
          // instead of throwing another exception for the same stream.
          byteBuffer.position(byteBuffer.limit()); // fake "read all data"
          return;
     }
...
     if (buffer.length() > maxBufferSize) {
          // Free the accumulated data and remember the overflow, so that
          // subsequent read() calls return immediately (see above).
          buffer.delete(0, buffer.length());
          buffer.trimToSize();
          bufferOverflow = true;
          throw new Exception("Stopped parsing never ending stanza");
     }
...
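
The idea can be sketched in a self-contained form. Here plain java.nio.ByteBuffer and a StringBuilder stand in for MINA's buffer and the parser's internal state; the class name and maxBufferSize are stand-ins, not the real XMLLightweightParser:

```java
import java.nio.ByteBuffer;

// Minimal, self-contained sketch of the overflow-flag pattern; the real
// parser is org.jivesoftware.openfire.nio.XMLLightweightParser, this class
// only models the "throw once, then silently drain" behaviour.
class OverflowGuardParser {
    private final StringBuilder buffer = new StringBuilder();
    private final int maxBufferSize;
    private boolean bufferOverflow = false;

    OverflowGuardParser(int maxBufferSize) {
        this.maxBufferSize = maxBufferSize;
    }

    public void read(ByteBuffer byteBuffer) throws Exception {
        if (bufferOverflow) {
            // A previous call already overflowed: drain the incoming data
            // instead of throwing (and logging) yet another huge exception.
            byteBuffer.position(byteBuffer.limit());
            return;
        }
        while (byteBuffer.hasRemaining()) {
            buffer.append((char) byteBuffer.get());
        }
        if (buffer.length() > maxBufferSize) {
            buffer.delete(0, buffer.length()); // free the accumulated data
            buffer.trimToSize();
            bufferOverflow = true;             // drain from now on
            throw new Exception("Stopped parsing never ending stanza");
        }
    }

    boolean isOverflowed() {
        return bufferOverflow;
    }
}
```

With this, only the first oversized read() throws; every later read() returns immediately with the buffer fully consumed, so the exception storm stops.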

I’m still working on the test case; the current state is attached here. Is there anyone we want to invite to this private group, like csh/dele/?

openfire.zip contains the three modified classes; the changes are explained here.

XMLLightweightParser.java

As I understand the code, an exception will always become a ProtocolDecoderException, so the session will be closed.

The parser now tracks whether an exception was thrown; if that is the case, no further data will be processed.

The TCP buffers should not be the problem, but the internal MINA buffers seem to contain too much data.

Other “fixes” which do not solve the problem, but which may still make sense to add to Openfire:

StanzaHandler.java - compressClient()

If an error occurs, the connection is closed. This matches the Log.warn("Closing connection") messages.

The code to disable compression for unauthenticated clients is commented out, as it is not really needed.

ConnectionHandler.java - exceptionCaught()

The only active change is to limit the log message from 1 MB to 256 characters (the “cause” can be 1 MB long).
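
A minimal sketch of such a cap (the 256-character limit comes from the description above; the helper itself is hypothetical, not the committed code):

```java
// Hypothetical helper: cap a throwable's message before it hits the warn
// log, so a 1 MB ProtocolDecoderException hexdump becomes 256 characters.
final class LogTruncation {
    static final int MAX_LOG_LENGTH = 256;

    static String truncate(String message) {
        if (message == null) {
            return "";
        }
        if (message.length() <= MAX_LOG_LENGTH) {
            return message;
        }
        // Keep only the first MAX_LOG_LENGTH characters and mark the cut.
        return message.substring(0, MAX_LOG_LENGTH) + "... (truncated)";
    }
}
```

The warn call would then log truncate(cause.getMessage()) instead of the raw cause, so the disk is no longer flooded by hexdumps.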

I would simplify the exceptionCaught() method, but there is no need to do it right now.

XMLLightweightParser.java committed, so the nightly builds will have the fix.

The code to disable compression for unauthenticated clients is commented out, as it is not really needed.

Could you elaborate on that? Disabling compression for unauthenticated clients seems to be the first line of defense for such cases.

I ran the tests (src.zip) and they showed that the main problem is that the XMLLightweightParser throws huge exceptions. If the stream is too long, it throws one exception, throw new Exception("Stopped parsing never ending stanza");, which contains the whole stream - 1 MB up to ??? MB huge. Processing this exception takes some time (e.g. 20 ms).

As there is more data available, read() will be called again, throwing another "Stopped parsing never ending stanza" exception. Processing it also takes some time, and depending on the available memory it may take longer.

read() will be called some more times.

Finally the socket is closed, but MINA has already decompressed the zip bomb, and the memory needed to process the exceptions or the huge byte buffer causes OOM errors.

So I fixed this.

Now only one (huge) exception is thrown per client, and afterwards the buffer is read/purged at lightning speed. With slow disk IO, logging the whole exception may still be an issue.

The code to disable compression for unauthenticated clients is commented out, as it is not really needed.

With the fix above it does not matter whether the client is authenticated or not.

Requiring auth to use compression on public servers with in-band account creation only makes it a little harder to exploit the zip bomb.

If the stream is too long, it throws one exception, throw new Exception("Stopped parsing never ending stanza");, which contains the whole stream - 1 MB up to ??? MB huge.
Where does this Exception contain the whole stream? Am I missing something?

With the fix above it does not matter whether the client is authenticated or not.

Requiring auth to use compression on public servers with in-band account creation only makes it a little harder to exploit the zip bomb.

But it prevents zip bombs on c2s connections for servers with in-band account creation disabled. Compression should only be possible after the client has authenticated.

“Where does this Exception contain the whole stream? Am I missing something?”

Look at http://issues.igniterealtime.org/browse/OF-455 for example.

The exception indeed looks harmless, but since MINA is involved it is not. I did not trace where the exception is thrown, but it looks like MINA catches it and throws a new ProtocolDecoderException which contains the whole byte buffer.

Then the ConnectionHandler logs it:

… if (cause instanceof ProtocolDecoderException) { Log.warn("Closing session due to exception: " + session, cause); …

As it goes into the warn log, it will always be written to disk. You may want to try one of the examples in src.zip and watch your warn.log “explode”.

“Compression should only be possible after the client has authenticated.”

I agree. Anyhow, this can also be fixed later with a proper fix which offers compression only after auth.
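
Such a fix could gate compression on the session state. A minimal sketch with a stand-in status enum (Openfire's real Session class has its own status constants; the names here are only for illustration):

```java
// Stand-in for the session lifecycle; Openfire's real session API differs,
// this enum exists only to illustrate the policy.
enum SessionStatus { CONNECTED, AUTHENTICATED, CLOSED }

final class CompressionPolicy {
    // Offer/accept stream compression only once the client has authenticated,
    // so anonymous connections cannot feed the server a zip bomb.
    static boolean compressionAllowed(SessionStatus status) {
        return status == SessionStatus.AUTHENTICATED;
    }
}
```

compressClient() would consult this check and fail the negotiation (or not advertise the feature at all) for sessions that have not yet authenticated.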

LG is fully right: the Exception that our filter throws gets caught by MINA, which wraps it in a ProtocolDecoderException with a hexdump of the byte buffer. That’s all done in this class: https://svn.apache.org/repos/asf/mina/mina/branches/1.1/core/src/main/java/org/apache/mina/filter/codec/ProtocolCodecFilter.java

There is a simple but nasty workaround for this: instead of throwing a plain Exception, the parser can throw a ProtocolDecoderException. MINA checks for this and won’t wrap it.

The obvious downside is that this ties our logic to MINA’s, while there is no need to. On the other hand, the parser already has a dependency on MINA (it uses its buffer implementation).
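
The wrap-or-not decision can be illustrated with a stand-in exception class (MINA's real class is org.apache.mina.filter.codec.ProtocolDecoderException; DecoderException and wrapLikeMina here are stand-ins, a simplified model rather than MINA's actual code):

```java
// Stand-in for org.apache.mina.filter.codec.ProtocolDecoderException.
class DecoderException extends Exception {
    DecoderException(String message) {
        super(message);
    }
}

final class WrapSketch {
    // Simplified model of the codec filter's behaviour: a plain Exception
    // from the decoder gets wrapped together with a hexdump of the buffer,
    // while a DecoderException is passed through untouched. Throwing the
    // latter from the parser therefore avoids the 1 MB hexdump in the log.
    static Exception wrapLikeMina(Exception fromDecoder, String hexdump) {
        if (fromDecoder instanceof DecoderException) {
            return fromDecoder; // no re-wrap, no hexdump attached
        }
        return new DecoderException(
                fromDecoder.getMessage() + " (Hexdump: " + hexdump + ")");
    }
}
```

This is why the "never ending stanza" exception looks harmless at the throw site but arrives at the ConnectionHandler carrying the whole buffer.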