Spark crashes on roster add

We are running Openfire 3.3.2 and Spark 2.5.x. The issue occurs when users try to add each other from the search screen (right-click on a user and choose 'Add as Contact'). Spark crashes immediately.

Spark log says:

Sep 28, 2007 11:04:41 AM org.jivesoftware.spark.util.log.Log error
SEVERE: Unable to add new entry oas@chat.portseattle.org
No response from the server.:
	at org.jivesoftware.smack.Roster.createEntry(Roster.java:249)
	at org.jivesoftware.spark.ui.RosterDialog.addEntry(RosterDialog.java:447)
	at org.jivesoftware.spark.ui.RosterDialog$4.construct(RosterDialog.java:412)
	at org.jivesoftware.spark.util.SwingWorker$2.run(SwingWorker.java:129)
	at java.lang.Thread.run(Unknown Source)

Openfire server error log says:

2007.09.28 11:03:18 org.jivesoftware.openfire.handler.IQRosterHandler.handleIQ(IQRosterHandler.java:128) Internal server error

org.jivesoftware.openfire.user.UserAlreadyExistsException: oas@chat.portseattle.org
	at org.jivesoftware.openfire.roster.RosterItemProvider.createItem(RosterItemProvider.java:108)
	at org.jivesoftware.openfire.roster.Roster.provideRosterItem(Roster.java:334)
	at org.jivesoftware.openfire.roster.Roster.provideRosterItem(Roster.java:295)
	at org.jivesoftware.openfire.roster.Roster.createRosterItem(Roster.java:280)
	at org.jivesoftware.openfire.handler.IQRosterHandler.manageRoster(IQRosterHandler.java:226)
	at org.jivesoftware.openfire.handler.IQRosterHandler.handleIQ(IQRosterHandler.java:105)
	at org.jivesoftware.openfire.handler.IQHandler.process(IQHandler.java:48)
	at org.jivesoftware.openfire.IQRouter.handle(IQRouter.java:300)
	at org.jivesoftware.openfire.IQRouter.route(IQRouter.java:104)
	at org.jivesoftware.openfire.spi.PacketRouterImpl.route(PacketRouterImpl.java:67)
	at org.jivesoftware.openfire.net.StanzaHandler.processIQ(StanzaHandler.java:289)
	at org.jivesoftware.openfire.net.ClientStanzaHandler.processIQ(ClientStanzaHandler.java:79)
	at org.jivesoftware.openfire.net.StanzaHandler.process(StanzaHandler.java:254)
	at org.jivesoftware.openfire.net.StanzaHandler.process(StanzaHandler.java:153)
	at org.jivesoftware.openfire.nio.ConnectionHandler.messageReceived(ConnectionHandler.java:132)
	at org.apache.mina.common.support.AbstractIoFilterChain$TailFilter.messageReceived(AbstractIoFilterChain.java:703)
	at org.apache.mina.common.support.AbstractIoFilterChain.callNextMessageReceived(AbstractIoFilterChain.java:362)
	at org.apache.mina.common.support.AbstractIoFilterChain.access$1100(AbstractIoFilterChain.java:54)
	at org.apache.mina.common.support.AbstractIoFilterChain$EntryImpl$1.messageReceived(AbstractIoFilterChain.java:800)
	at org.apache.mina.filter.codec.support.SimpleProtocolDecoderOutput.flush(SimpleProtocolDecoderOutput.java:62)
	at org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:200)
	at org.apache.mina.common.support.AbstractIoFilterChain.callNextMessageReceived(AbstractIoFilterChain.java:362)
	at org.apache.mina.common.support.AbstractIoFilterChain.access$1100(AbstractIoFilterChain.java:54)
	at org.apache.mina.common.support.AbstractIoFilterChain$EntryImpl$1.messageReceived(AbstractIoFilterChain.java:800)
	at org.apache.mina.filter.executor.ExecutorFilter.processEvent(ExecutorFilter.java:266)
	at org.apache.mina.filter.executor.ExecutorFilter$ProcessEventsRunnable.run(ExecutorFilter.java:326)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.lang.Thread.run(Unknown Source)

But before reproducing the bug I verified that both users' rosters were empty of each other!

mysql> select * from jiveRoster where username='oas' and jid like 'mf1@%';
Empty set (0.00 sec)

mysql> select * from jiveRoster where username='mf1' and jid like 'oas@%';
Empty set (0.00 sec)

So is there another table in the openfire database which might have a stale/orphaned entry to clear out, or is this strictly a client-side issue?

Answering my own question: I discovered orphaned records in the jiveRosterGroups table.
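For anyone hitting the same thing, a query along these lines should surface the orphans. This assumes the stock Openfire schema, where jiveRosterGroups references jiveRoster through a rosterID column; verify the column names against your own database before running anything destructive:

-- List jiveRosterGroups rows whose rosterID has no matching jiveRoster row
SELECT g.*
FROM jiveRosterGroups g
LEFT JOIN jiveRoster r ON g.rosterID = r.rosterID
WHERE r.rosterID IS NULL;

-- After reviewing the result (and taking a backup), delete the orphans
DELETE g
FROM jiveRosterGroups g
LEFT JOIN jiveRoster r ON g.rosterID = r.rosterID
WHERE r.rosterID IS NULL;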

Unfortunately, neither Openfire nor Spark reflected the nature of the error in its logs or debug output. I was only able to figure this out from a tcpdump capture showing the SQL failure response from MySQL:

Error message: Duplicate entry '414' for key 1

Perhaps the code might be more robust, or otherwise improved, if REPLACE INTO were used.
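To illustrate what I mean, a REPLACE INTO variant of the failing insert would look roughly like this (the column list and values are illustrative, not copied from the Openfire source):

-- Overwrites any existing row with the same key instead of raising
-- a duplicate-key error
REPLACE INTO jiveRosterGroups (rosterID, rank, groupName)
VALUES (414, 0, 'Friends');

That said, REPLACE INTO is MySQL-specific (it deletes the conflicting row and inserts a new one), so a more portable fix might be for Openfire to catch the duplicate-key error, or to clean up jiveRosterGroups rows whenever the parent jiveRoster row is deleted.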