chp
October 24, 2019, 3:40pm
Hi,
I found two bugs in the implementation of Openfire's cache configuration:
1.
On these lines:
{
return;
}
if ( property.endsWith( ".size" ) )
{
final Long size = getMaxCacheSize( cache.getName() );
cache.setMaxCacheSize( size < Integer.MAX_VALUE ? size.intValue() : Integer.MAX_VALUE );
}
if ( property.endsWith( ".maxLifeTime" ) )
{
final Long lifetime = getMaxCacheLifetime( cache.getName() );
cache.setMaxLifetime( lifetime );
}
// Note that changes to 'min' and 'type' cannot be applied runtime - a restart is required for those.
}
@Override
public void propertyDeleted( String property, Map<String, Object> params )
The PropertyEventListener checks for ".maxLifeTime" instead of ".maxLifetime". That's why lifetime settings only take effect after a restart.
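To illustrate the typo: Java's String.endsWith is case-sensitive, so the listener's check with a capital "T" can never match the actual property suffix. The class and method below are a hypothetical sketch, not the actual Openfire code:

```java
// Sketch of the corrected suffix check. Openfire cache properties use
// the form "cache.<name>.maxLifetime" (lowercase 't' in "time").
public class CachePropertySuffix {

    static boolean isMaxLifetimeProperty(String property) {
        // The buggy check used ".maxLifeTime" (capital 'T'), which is
        // never matched by the real property name, so the listener
        // silently skipped applying the new lifetime at runtime.
        return property.endsWith(".maxLifetime");
    }

    public static void main(String[] args) {
        // The real property name matches the corrected check...
        System.out.println(isMaxLifetimeProperty("cache.users.maxLifetime"));
        // ...but would never have matched the buggy one:
        System.out.println("cache.users.maxLifetime".endsWith(".maxLifeTime"));
    }
}
```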
2.
On these lines:
*/
void setName(String name);
/**
* Returns the maximum size of the cache in bytes. If the cache grows larger
* than the max size, the least frequently used items will be removed. If
* the max cache size is set to -1, there is no size limit.
*
* @return the maximum size of the cache in bytes.
*/
long getMaxCacheSize();
/**
* Sets the maximum size of the cache in bytes. If the cache grows larger
* than the max size, the least frequently used items will be removed. If
* the max cache size is set to -1, there is no size limit.
*
*<p><strong>Note:</strong> If using the Hazelcast clustering plugin, this will not take
* effect until the next time the cache is created</p>
*
* @param maxSize the maximum size of the cache in bytes.
The Cache interface declares getMaxCacheSize() as returning a long.
On these lines:
/**
* Sets the maximum size of the cache in bytes. If the cache grows larger
* than the max size, the least frequently used items will be removed. If
* the max cache size is set to -1, there is no size limit.
*
*<p><strong>Note:</strong> If using the Hazelcast clustering plugin, this will not take
* effect until the next time the cache is created</p>
*
* @param maxSize the maximum size of the cache in bytes.
*/
void setMaxCacheSize(int maxSize);
/**
* Returns the maximum number of milliseconds that any object can live
* in cache. Once the specified number of milliseconds passes, the object
* will be automatically expired from cache. If the max lifetime is set
* to -1, then objects never expire.
*
* @return the maximum number of milliseconds before objects are expired.
*/
long getMaxLifetime();
However, the same interface declares setMaxCacheSize(int maxSize), so the value is set as an int.
Also, the documentation of setMaxCacheSize (https://github.com/igniterealtime/Openfire/blob/master/xmppserver/src/main/java/org/jivesoftware/util/cache/Cache.java#L77 ) states that the value is in bytes,
while the Hazelcast plugin uses the value as megabytes (see USED_HEAP_SIZE in https://docs.hazelcast.org/docs/3.12/javadoc/ ):
https://github.com/igniterealtime/openfire-hazelcast-plugin/blob/master/src/java/org/jivesoftware/openfire/plugin/util/cache/ClusteredCacheFactory.java#L246
That means that in non-clustered environments the cache size is interpreted in bytes, while in clustered environments (with Hazelcast) it is interpreted in megabytes.
My recommended solution would be for setMaxCacheSize to use a long, and for the Hazelcast plugin to convert the value from bytes to megabytes.
I can create a pull request for it.
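The conversion could look something like the sketch below. The class and method names are hypothetical (this is not the actual plugin code); it only illustrates turning the byte-based value from the Cache interface into the megabyte unit that Hazelcast's USED_HEAP_SIZE policy expects:

```java
// Hypothetical sketch: convert a cache size in bytes (as documented in
// Openfire's Cache interface) to megabytes (as expected by Hazelcast's
// USED_HEAP_SIZE max-size policy).
public class CacheSizeConversion {

    private static final long MEGABYTE = 1024L * 1024L;

    static int bytesToMegabytes(long maxSizeBytes) {
        if (maxSizeBytes == -1) {
            // In the Cache interface, -1 means "no size limit".
            return Integer.MAX_VALUE;
        }
        // Round up, so that a small non-zero byte limit never becomes 0 MB.
        long megabytes = maxSizeBytes / MEGABYTE;
        if (maxSizeBytes % MEGABYTE != 0) {
            megabytes++;
        }
        return megabytes < Integer.MAX_VALUE ? (int) megabytes : Integer.MAX_VALUE;
    }

    public static void main(String[] args) {
        System.out.println(bytesToMegabytes(10L * 1024 * 1024)); // exactly 10 MB
        System.out.println(bytesToMegabytes(1));                 // rounds up to 1 MB
        System.out.println(bytesToMegabytes(-1));                // unlimited
    }
}
```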
Best Regards
chp
wroot
October 24, 2019, 4:28pm
It seems that you are savvy with GitHub. You can suggest a PR to fix that.
speedy
October 25, 2019, 12:51am
@chp
Thanks for the contribution! Please feel free to submit a PR on GitHub. We welcome any improvements and/or bug fixes. @guus is currently our project lead, so feel free to reach out to him if you have any questions. He and others can usually be found in our chat: open_chat@conference.igniterealtime.org
Thanks again!
guus
October 31, 2019, 11:12am
Thanks for reporting this! The typo (the camel-case inconsistency in the property suffix) will be fixed in Openfire 4.4.3.
guus
November 21, 2019, 2:04pm
The bug where an inconsistent data type is used to represent cache sizes will be fixed in Openfire 4.5.0.