
Repeated duplicate key error in ofpubsubitem_pk

We are getting batches of 40-50 very rapid errors in our PostgreSQL logs from Openfire:

“repeated duplicate key error in ofpubsubitem_pk”

These batches occur every few minutes. I'm not sure when they started, so I can't point to a revision. Restarting the Openfire service does not stop the problem, and neither does deleting the (few) records in the ofpubsubitem table. Any ideas, anyone?

I have run into a similar error in the past. Are you generating your own IDs for your items?

In my case we did this with long IDs, and the primary key field is set to (I believe) 100 characters. So if your keys are longer and the prefix is not unique, the ID is truncated on insert and you end up with duplicate IDs in the database.

To fix it, we simply updated the column to be double its original length (to something we knew would be long enough).
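For reference, on PostgreSQL the widening can be done with a simple ALTER TABLE. The table and column names below are what I recall from the Openfire schema (the item ID column on ofpubsubitem), and 200 is just an example length, so check both against your own database before running anything:

```sql
-- Widen the item-ID column so long publisher-supplied IDs
-- are no longer truncated to the old 100-char limit.
-- Column name and the 200-char length are assumptions; verify
-- against your schema and your longest expected IDs first.
ALTER TABLE ofpubsubitem ALTER COLUMN id TYPE VARCHAR(200);
```

If the column is part of the primary key (as it is here), PostgreSQL will carry the constraint over unchanged; no need to drop and recreate it.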

Hope this helps!

Thanks for that - I have implemented and will monitor.

We are still receiving the error at the moment, in batches of around 50 messages every two minutes. We've changed the three key fields on the ofpubsubitem table to 200 chars each, double the original size. We are not generating our own keys; I have no idea what the table is for, nor where the messages going into it come from. We just use Miranda and Pidgin clients against it.

Any more ideas anyone?

Did you look at the keys being stored? How long are they?

You said you aren't setting the IDs yourself, but you also don't know where the messages are coming from. What I meant by "setting the ID yourself" is: is the publisher setting the ID in the published item, or is it being set by the server? If the publisher sets it, there is nothing in the spec to limit how long that ID can be, yet it is truncated when inserted into the database, so whenever the ID length exceeds the column size this problem can occur. Check whether the keys being inserted, especially the ones giving you the error, exceed the column size.
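A quick way to check, assuming the item-ID column is named `id` as in the stock Openfire schema (verify against your own database), is to compare the longest stored key against the declared column size:

```sql
-- Longest item ID currently stored in the table.
SELECT max(length(id)) AS longest_stored_id
FROM ofpubsubitem;

-- Declared maximum length of the id column, from the catalog.
SELECT character_maximum_length
FROM information_schema.columns
WHERE table_name = 'ofpubsubitem'
  AND column_name = 'id';
```

Keep in mind this only shows the rows that made it in; the IDs causing the duplicate-key errors never get stored, so they could still be longer than anything you see here.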

I did look at the data being stored before, and the values were all shorter than the original field size of 100 chars. Since I can't see the values in the failed inserts, however, I have no idea of their length. I also cannot tell whether they are being set by the server or the publisher. Since we have a tiny environment it should be easy to control, though. Any idea how to find out?
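One way to see the values in the failing INSERTs is to turn up PostgreSQL's statement logging temporarily; the error will then be logged together with the full offending statement, so you can see exactly what ID Openfire tried to insert (and whether it looks server-generated or publisher-supplied). This is plain PostgreSQL configuration, nothing Openfire-specific; on versions without ALTER SYSTEM, set the same parameters in postgresql.conf and reload instead:

```sql
-- Log every statement (verbose; revert once you've captured the error).
ALTER SYSTEM SET log_statement = 'all';
-- Log the full statement text alongside any error message.
ALTER SYSTEM SET log_min_error_statement = 'error';
-- Apply without a server restart.
SELECT pg_reload_conf();
```

Once you've caught a few of the failing batches in the log, set log_statement back to its previous value, since 'all' can generate a lot of output even on a small server.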