I'm running a clustered scenario with the Hazelcast plugin. I'm interested in archiving messages and, to accomplish this, I used the Monitoring plugin. Looking at the code, I noticed that only one instance in the cluster (the senior cluster member) writes directly to the DB. All the other instances trigger events that the senior cluster member receives and serves. Read operations, instead, are distributed across all the servers in the cluster. I think this software architecture limits the maximum achievable write throughput to the DB. Can anyone explain why the Monitoring plugin works this way?
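To make sure I'm describing the pattern correctly, here is a minimal sketch of what I understand the write path to be. This is my own simplified illustration, not actual Monitoring plugin code: the class and method names (`Member`, `archive`) are hypothetical, and the in-memory list stands in for the real DB.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the described architecture: only the senior cluster member
// writes to the DB; every other member forwards its write as an event.
class ClusterWriteSketch {
    static final List<String> db = new ArrayList<>(); // stand-in for the real DB

    static class Member {
        final boolean senior;
        final Member seniorRef; // reference to the senior member (null if we are senior)

        Member(boolean senior, Member seniorRef) {
            this.senior = senior;
            this.seniorRef = seniorRef;
        }

        void archive(String msg) {
            if (senior) {
                db.add(msg);            // direct DB write, serialized on one node
            } else {
                seniorRef.archive(msg); // event forwarded to the senior member
            }
        }
    }

    public static void main(String[] args) {
        Member senior = new Member(true, null);
        Member junior = new Member(false, senior);
        junior.archive("m1"); // routed through the senior member
        senior.archive("m2"); // written directly
        System.out.println(db.size()); // both writes end up on the senior member
    }
}
```

If this sketch is accurate, every insert funnels through a single node, which is why I suspect the write throughput is capped by that one member's DB connection.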
Does anyone know what the throughput of a single Openfire server to the DB is, in terms of inserts per second?
Thanks a lot.