
Truncate dbo.ofPubsubItem table?

Are there any guidelines or tools for truncating this table? Our deployment serves about 12K users, and this table is around 65GB across only 3,500 or so rows. That constitutes over half our database volume.

I would love to truncate it, but I can’t find any guidelines. Any help would be appreciated.

Actually, the table is 98GB and has only 3,100 rows. The ‘payload’ column is ‘ntext’. I assume these are images, but I don’t know whether they are archives of sent images or something else.
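Before deciding what to truncate, it may help to see what is actually eating the space. A sketch of the kind of query I'd start with, assuming the column names from the standard Openfire schema (serviceID, nodeID, id, jid, payload) — DATALENGTH works on ntext:

```sql
-- Largest individual payloads first
SELECT TOP 20 serviceID, nodeID, id, jid,
       DATALENGTH(payload) AS payload_bytes
FROM dbo.ofPubsubItem
ORDER BY DATALENGTH(payload) DESC;

-- Total payload volume held in the table, in MB
SELECT COUNT(*) AS row_count,
       SUM(CAST(DATALENGTH(payload) AS bigint)) / 1048576 AS payload_mb
FROM dbo.ofPubsubItem;
```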

I assume then that your node(s) are configured to be persistent. If it is storing too much information, you can either make the node non-persistent or limit the number of items retained by node configuration.

I’m just the DBA here… but neither I nor my colleagues were aware of any such service. I can’t see, in Spark or other clients, any functionality that appears PubSub-ish other than maybe conference rooms, so I’m at a loss as to how this is being used. Apparently our userbase is aware of some pubsub functionality, though.

Hi,

as a DBA you may simply dump the table and look at the nodeID and JID columns. How many different JIDs and nodeIDs are in the table?

The ofPubsubNode table contains the nodeID, creationDate and creator, so it should be easy to identify the users who are consuming so much space.
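The suggestion above can be sketched as a single grouping query; the join keys (serviceID plus nodeID) are my assumption based on the standard Openfire schema:

```sql
-- Which nodes (and their creators) account for the stored payload bytes?
SELECT i.nodeID, n.creator,
       COUNT(*) AS item_count,
       SUM(CAST(DATALENGTH(i.payload) AS bigint)) / 1048576 AS payload_mb
FROM dbo.ofPubsubItem i
LEFT JOIN dbo.ofPubsubNode n
       ON n.serviceID = i.serviceID
      AND n.nodeID = i.nodeID
GROUP BY i.nodeID, n.creator
ORDER BY payload_mb DESC;
```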

LG

I have already looked at the table. Unfortunately there is nothing in the table that indicates what is consuming the space. I tried dropping some rows on a test instance and it had no effect whatsoever.

Can anyone out there tell me what this table is for?

(I work with eoconnor)

So, here’s what I think may be going on, and a little history… maybe some of you can confirm whether this would cause problems?

We had been running Openfire for several years on a single instance with the embedded DB. Due to memory issues (the JVM would run constantly at about 90-95% memory utilization and would start to fail during peak login times), we decided to move the embedded DB out of the JVM and into a standard MSSQL environment. Because there was no documented method for this DB migration (I found bits and pieces on these discussion boards, but no surefire way of doing it), I went ahead and wrote a script to import the data from the openfire.script file into a prepared (empty) database on the SQL server. This went off without a hitch at the time… we pointed Openfire at the MSSQL db and fired it up. All seemed well.

Well… what I’ve found is that when importing the data, some columns that had double quotes ( " ) ended up with them doubled ( "" ). One such column was the “payload” column. New items were being added to the DB with correctly formatted payloads (with " ), but the old data imported from the HSQLDB was still hanging around. Today, in a last-ditch effort to stop this table growing by 10-15GB per day, we removed all the old records that had the doubled-up double quotes ( "" ). Since that update, the table has seemed stable and is no longer growing. (THIS IS GOOD!)
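For reference, a sketch of the kind of cleanup just described — the cast to nvarchar(max) and the LIKE pattern are my assumptions, so test on a copy first:

```sql
-- Find imported rows whose payload still contains doubled-up double quotes ("")
SELECT serviceID, nodeID, id
FROM dbo.ofPubsubItem
WHERE CAST(payload AS nvarchar(max)) LIKE '%""%';

-- Once the hits are verified, the same predicate can drive the delete
DELETE FROM dbo.ofPubsubItem
WHERE CAST(payload AS nvarchar(max)) LIKE '%""%';
```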

But… researching (thanks eoconnor!) the ntext datatype that the payload column uses, we’re not sure how to successfully purge whatever data may have been linked (the data is stored in LOB structures rather than directly in the payload column?). Now that we’ve fixed the issue that seemed to be causing the data bloat, we need to cut out the erroneous bloat. The data in this table (ofPubsubItem) seems to be just avatar pictures, the music users are currently listening to in iTunes, etc. I can’t tell if there’s any other critical data in there too… but if not, I’m thinking: stop Openfire, drop that table, and recreate it from scratch. Would this cause any problems that you can think of?
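If a full wipe turns out to be acceptable, a drop/recreate shouldn't be necessary: TRUNCATE deallocates the LOB pages along with the rows. A sketch, to be run with Openfire stopped — the file name in SHRINKFILE is a placeholder for your data file's actual logical name:

```sql
-- With Openfire stopped: empty the table; TRUNCATE also deallocates
-- the LOB pages backing the ntext payload column
TRUNCATE TABLE dbo.ofPubsubItem;

-- Then shrink the data file if the disk space needs to be returned to the OS;
-- 'OpenfireDb_Data' is a placeholder for the logical file name
DBCC SHRINKFILE ('OpenfireDb_Data');
```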

I’ve found data in several other tables (ofPrivate, ofOffline, ofPrivacyList, ofPresence) that also have the doubled-up double quotes, and I’ve tested removing those rows on a test server; it doesn’t seem to cause any problems. (I’m actually seeing fewer cryptic errors in the error.log now, too.)

Sorry for the long post… there’s just a lot of weirdness going on, so it takes a bit to explain.

Thanks!

  • Jeremy.