
Is it possible for queued MUC messages to become permanently stuck?

One of our ops team noticed a lot of stack traces in the Openfire error log starting:

2010.09.14 02:45:25 [org.jivesoftware.openfire.muc.spi.MUCPersistenceManager.saveConversationLogEntry(MUCPersistenceManager.java:991)] Error saving conversation log entry

com.mysql.jdbc.MysqlDataTruncation: Data truncation: Data too long for column 'subject' at row 1

at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2983)

at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1631)

at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1723)

at com.mysql.jdbc.Connection.execSQL(Connection.java:3283)

at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1332)

at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:1604)

at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:1519)

at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:1504)

at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

at java.lang.reflect.Method.invoke(Method.java:597)

at org.logicalcobwebs.proxool.ProxyStatement.invoke(ProxyStatement.java:100)

at org.logicalcobwebs.proxool.ProxyStatement.intercept(ProxyStatement.java:57)

at $java.sql.PreparedStatement$$EnhancerByProxool$$2d6807aa.executeUpdate()

at org.jivesoftware.openfire.muc.spi.MUCPersistenceManager.saveConversationLogEntry(MUCPersistenceManager.java:987)

at org.jivesoftware.openfire.muc.spi.MultiUserChatServiceImpl.logConversation(MultiUserChatServiceImpl.java:415)

at org.jivesoftware.openfire.muc.spi.MultiUserChatServiceImpl.access$100(MultiUserChatServiceImpl.java:64)

at org.jivesoftware.openfire.muc.spi.MultiUserChatServiceImpl$LogConversationTask.run(MultiUserChatServiceImpl.java:401)

at java.util.TimerThread.mainLoop(Timer.java:512)

at java.util.TimerThread.run(Timer.java:462)

I believe I understand how the MUC log works (for rooms with logging enabled): a flush interval and a batch size are configured, and at each interval up to one batch of queued log entries is committed to the database. But what happens if an entry has a subject that is over the max column size? Does that mean the batch will always fail (until the offending message(s) are cleared from the queue via a restart)?
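For anyone wanting to experiment, the flush interval and batch size are configurable server properties. From memory, the 3.6.x defaults look roughly like this (the property names are an assumption on my part; verify them against your own admin console before relying on them):

```
# Assumed property names and default values; check your install.
xmpp.muc.tasks.log.timeout=300000
xmpp.muc.tasks.log.batchsize=50
```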

We can reproduce this: if you set a massive room topic, you start to see errors in your log file as Openfire tries and fails to write it to the DB (oddly, as a MUC message; I haven't dug into the code to find out why, but presumably because it's a subject change sent to the room).

I've just done some more investigation of this and it's not so bad, just annoying.

Steps to reproduce:

  1. Start Openfire (tested with 3.6.4)
  2. Create a MUC room and ensure it is set to be logged. Also ensure the service is set to log at regular intervals
  3. Send a message using an IM client (I used Psi, but any will do) with a subject node following the body node (this appears to be valid XMPP). Depending on your client, you'll probably need to send the raw XML yourself.
  4. Tail your error log and watch it grow with SQL exceptions as Openfire tries to write that line to the conversation log every single time the flush interval comes up. It seems that any messages you send afterwards do get persisted though.
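For step 3, a raw stanza along these lines triggers it (the room address and body here are placeholders; the subject just needs to exceed the column width):

```xml
<message to='testroom@conference.example.com' type='groupchat'>
  <body>any body text</body>
  <subject>AAAA... (repeat until longer than the subject column) ...AAAA</subject>
</message>
```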

I don't know how important people feel this is; it's just an annoyance for our ops team, who like nice clean empty error logs. I guess a solution would be to truncate the subject deep down in MUCPersistenceManager, possibly as the first thing done before saveConversationLogEntry is called.
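As a sketch of that truncation idea (the class and method names here are hypothetical, not actual Openfire code, and the 255-character limit is an assumption based on the stock ofMucConversationLog schema — check your own DDL):

```java
// Hypothetical sketch of the proposed fix: clamp the subject to the column
// width before it ever reaches saveConversationLogEntry, so one oversized
// subject can't poison the whole batch.
public class SubjectTruncator {

    // Assumed width of ofMucConversationLog.subject (VARCHAR(255) in the
    // default Openfire schema, as far as I can tell).
    static final int MAX_SUBJECT_LENGTH = 255;

    /** Returns the subject unchanged if it fits, otherwise cut to the limit. */
    static String truncateSubject(String subject) {
        if (subject == null || subject.length() <= MAX_SUBJECT_LENGTH) {
            return subject;
        }
        return subject.substring(0, MAX_SUBJECT_LENGTH);
    }
}
```

Silently truncating loses data, of course, but that seems preferable to the insert failing forever on every flush.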

Does a short subject attached to a message create these errors, or only a very long subject?

Wonder if you could test this with 3.7.0 Beta.

It needs to be longer than the subject column in the ofMucConversationLog table, as this is what causes the SQL exception.


Much appreciated

Can you confirm on the ticket whether it is still occurring with version 3.9.1?