Compatibility between versions

What's the story on compatibility between versions? We wrote a plugin for 2.0.4, and it works fine. However, when we upgrade to 2.0.8, we get a java.lang.IncompatibleClassChangeError on this method call:

ProviderManager.addExtensionProvider(
        NodeIpRequestExtension.ELEMENTNAME,
        NodeIpRequestExtension.NAMESPACE,
        NodeIpRequestExtension.class);

NodeIpRequestExtension extends org.jivesoftware.smack.packet.PacketExtension.

I suspect 2.0.8 uses a newer version of Smack.
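If the Smack bundled with 2.0.8 moved ProviderManager to a singleton (as later Smack releases did), that alone would produce this error, since a call compiled against the old static method no longer resolves. The registration would then have to go through the instance, roughly like this (just a guess, not verified against the exact Smack build shipped with 2.0.8):

// Newer, singleton-style registration (sketch); a call compiled against
// the old static method fails with IncompatibleClassChangeError once the
// method is no longer static.
ProviderManager.getInstance().addExtensionProvider(
        NodeIpRequestExtension.ELEMENTNAME,
        NodeIpRequestExtension.NAMESPACE,
        NodeIpRequestExtension.class);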

Should plugin vendors expect to have to create a new release for each version of Spark? Are compatibility issues documented in the Spark release notes somewhere?

Thanks


That's actually a great question. Although we really hesitate to upgrade or change our APIs, it sometimes becomes a necessity (this time it was for the reconnection logic in Smack). Also, please take into account that Spark is very, very young, and changes and modifications are made at a very quick pace, along with numerous changes in Smack for stability. I would say that as time goes on the changes will decrease, and yes, it's a good idea for us to note compatibility-related changes in our release notes.

Cheers,

Derek

Hey Derek,

What do you think about the idea of adding an element such as maxSparkVersion to the plugin.xml file? That way developers could specify the range of Spark versions that they know their plugins work with, which would prevent end users from installing plugins that are incompatible or haven't been tested with newer releases of Spark.

Cheers,

Ryan

Hi Ryan,

While I like the idea of a maxVersion, I'd like to see min-/maxApiVersion in the plugins. A plugin should be able to query Spark's API version so that it can support multiple API versions.

LG

Hi LG,

Plugins can query Spark to find out what version it is; the problem is that a plugin has to be loaded first, so if it fails to load, then it won't be able to find out whether it is compatible with the version of Spark it's attempting to run in. So it's a bit of a catch-22. Part of the idea behind having a maxSparkVersion is that Spark would be able to “protect” itself from loading a plugin that will fail to load.
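For example, a pre-load check on the Spark side could look something like this (the maxSparkVersion element and this helper are hypothetical, purely to illustrate the idea):

// Hypothetical pre-load check inside Spark's plugin manager (sketch only);
// maxSparkVersion would come from the plugin's plugin.xml.
boolean shouldLoad(String maxSparkVersion, String runningSparkVersion) {
    String[] max = maxSparkVersion.split("\\.");
    String[] run = runningSparkVersion.split("\\.");
    int length = Math.max(max.length, run.length);
    for (int i = 0; i < length; i++) {
        int m = i < max.length ? Integer.parseInt(max[i]) : 0;
        int r = i < run.length ? Integer.parseInt(run[i]) : 0;
        if (r != m) {
            // Load only if the running Spark is not newer than the declared maximum.
            return r < m;
        }
    }
    return true; // versions are identical
}

So, for instance, shouldLoad("2.0.8", "2.5.0") would return false and the plugin would simply be skipped instead of blowing up later.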

Cheers,

Ryan

Hi Ryan,

A generic initializer class could work with every Spark version and decide whether it will initialize the rest of the plugin or not.
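Something along these lines, for example (all class and method names here are placeholders, not actual Spark API):

// Thin bootstrap plugin that only touches stable API and loads the real
// implementation reflectively, so an incompatible Spark/Smack just leaves
// the plugin dormant instead of breaking the client.
public class MyPluginBootstrap implements Plugin {

    private Plugin delegate; // the real implementation, loaded lazily

    public void initialize() {
        try {
            delegate = (Plugin) Class.forName("com.example.MyPluginImpl")
                    .newInstance();
            delegate.initialize();
        } catch (Throwable t) {
            // ClassNotFoundException, LinkageError, etc.: log and give up quietly.
            System.err.println("MyPlugin disabled, incompatible environment: " + t);
        }
    }

    public void shutdown() {
        if (delegate != null) {
            delegate.shutdown();
        }
    }
}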

Even if Spark needs to check plugin.xml, it should check for an API version; otherwise one would need to update the plugins quite often.

LG

Hey LG,

Those sound like interesting ideas; I was just trying to be more pragmatic and offer a solution that wouldn't require semi-major alterations to both the Spark plugin manager and all the existing plugins themselves.

In either case, with the exception of package name changes, the plugin API has remained generally unchanged from 1.0.x through 2.0.x, and for the most part through 2.x.x. I suspect things will continue to settle down as Spark matures, so maybe the whole discussion is moot.

Cheers,

Ryan

Although the plugin API might have mostly remained static, the version of Smack used has not, and it has no less of an impact on plugin compatibility (apparently).

Also, I suspect plugin vendors will continue to want their plugins to fail gracefully if they are incompatible regardless of how infrequent compatibility changes become.
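For example, the registration from the opening post could be wrapped so that an incompatible Smack merely disables that one feature (a sketch; it assumes the error surfaces at the call site rather than during class loading):

try {
    ProviderManager.addExtensionProvider(
            NodeIpRequestExtension.ELEMENTNAME,
            NodeIpRequestExtension.NAMESPACE,
            NodeIpRequestExtension.class);
} catch (LinkageError e) {
    // e.g. IncompatibleClassChangeError from a changed Smack API
    System.err.println("NodeIp extension disabled: " + e);
}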

Perhaps Spark could provide a separate version number for the plugin API that plugins could obtain before initializing. This version number would increment only on an incompatible change to the plugin API (including Smack).
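For instance (the PluginApi class and its VERSION constant are hypothetical, purely to illustrate):

// Hypothetical constant exposed by Spark, bumped only on an incompatible
// change to the plugin API or to the bundled Smack.
public final class PluginApi {
    public static final int VERSION = 3;
}

// In a plugin built against revision 3, before doing anything version-sensitive:
public void initialize() {
    if (PluginApi.VERSION != 3) {
        return; // unknown API revision; stay dormant
    }
    // ... normal setup ...
}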

How is this problem handled in other plugin architectures, e.g., Eclipse, NetBeans?

Hi mrpantsuit,

Although the plugin API might have mostly remained static, the version of Smack used has not, and it has no less of an impact on plugin compatibility (apparently).

That's a good point. Not only can changes be made to Spark/the plugin API, but to Smack as well.

Also, I suspect plugin vendors will continue to want their plugins to fail gracefully if they are incompatible regardless of how infrequent compatibility changes become.

Very true.

Perhaps Spark could provide a separate version number for the plugin API that plugins could obtain before initializing. This version number would increment only on an incompatible change to the plugin API (including Smack).

I'm not sure, in the case of Spark/Smack, how necessary it is to track both versions. Maybe I'm not understanding what you wrote, but I can't think of a situation where a new version of Smack would be included with Spark that wouldn't cause the version of Spark to change.

How is this problem handled in other plugin architectures, e.g., Eclipse, NetBeans?

Good question. A quick Google turned up this document that explains how it's handled in Eclipse. In your plugin's manifest file you specify the version(s) of Eclipse and/or other plugins that your plugin needs, so it's similar to what I was proposing with the maxSparkVersion.

Cheers,

Ryan

Regarding a separate plugin API version number: I was thinking that for new Spark releases that do NOT include plugin API changes, the maxSparkVersion strategy would erroneously report incompatibility, whereas a maxPluginApiVersion strategy would allow plugins to keep working until plugin compatibility actually changes.