[infinispan-dev] Stream encoding of Flags and future compatibility
Galder Zamarreño
galder at redhat.com
Fri May 18 03:35:01 EDT 2012
On May 16, 2012, at 5:45 PM, Dan Berindei wrote:
> On Wed, May 16, 2012 at 1:24 PM, Galder Zamarreño <galder at redhat.com> wrote:
>>
>>
>> On May 15, 2012, at 10:52 AM, Dan Berindei wrote:
>>
>>> On Tue, May 15, 2012 at 10:44 AM, Galder Zamarreño <galder at redhat.com> wrote:
>>>>
>>>> On May 11, 2012, at 11:30 PM, Dan Berindei wrote:
>>>>
>>>>> On Fri, May 11, 2012 at 7:23 PM, Sanne Grinovero <sanne at infinispan.org> wrote:
>>>>>> On 11 May 2012 16:37, Galder Zamarreño <galder at redhat.com> wrote:
>>>>>>> I quickly tried this and it caused no issues:
>>>>>>> https://github.com/galderz/infinispan/commit/7718926e5a4a6763506250362d7bd5cbdccd2931
>>>>>>
>>>>>> Looks good! I'm sure this doesn't solve all future migration problems,
>>>>>> but if we could keep these kinds of tricks around it should improve
>>>>>> the odds.
>>>>>> IMHO, this is a kind of sensitivity that we should apply across all
>>>>>> areas (not just flags).
>>>>>>
>>>>>
>>>>> Looks interesting, but then you have the opposite problem: not all new
>>>>> flags can be ignored, so you need a way to specify that a new flag is
>>>>> "required". E.g. if we had just added a ZERO_LOCK_ACQUISITION_TIMEOUT
>>>>> flag then the client would be expecting spurious failures, but not
>>>>> extra long delays.
>>>>
>>>> Hmmm, I disagree. If you're adding a new flag, say in 5.2, and you expect a node that runs 5.0 to deal with it properly, really, what you need to be doing is implementing that flag in 5.0.
>>>>
>>>
>>> Well, 5.0 is already out there, so modifying it is not an option.
>>
>> You can always release a 5.0.x :), and I'm pretty sure we might have to do some micro releases to make rolling upgrades work.
>>
>
> Well, we already have 5.0.1.Final, and I'm pretty sure it's
> incompatible with 5.0.0.Final :)
> But yes, I agree that we could release a version that handles a new
> flag correctly yet never sends it to other nodes.
>
>
>>> What we can do is ensure that the clients see the incompatibility in
>>> their testing environment and don't use two versions in production
>>> without being aware of the problem.
>>>
>>>> We want:
>>>> - if an old client encounters a new/unknown marshalled value, to not blow up but just log a WARN.
>>>> - if an old client is expected to react to a new/unknown marshalled value the way new versions do, to actually implement that behaviour.
>>>>
>>>> We don't want:
>>>> - old clients to 'blow up in flames' when they encounter new/unknown options, since this causes problems with potential rolling upgrades (a sketch of the tolerant behaviour follows below).
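A minimal sketch of that tolerant behaviour (made-up names and a trimmed enum, not our real Flag.Externalizer): unknown ordinals coming from a newer node get a WARN and are skipped instead of aborting the whole command.

import java.io.IOException;
import java.io.ObjectInput;
import java.util.EnumSet;
import java.util.Set;

// Hypothetical reader illustrating the "log a WARN and carry on" policy:
// flag ordinals written by a newer version are skipped instead of
// blowing up the whole command.
public class TolerantFlagReader {

   // Trimmed, illustrative subset of the Flag enum.
   enum Flag { ZERO_LOCK_ACQUISITION_TIMEOUT, CACHE_MODE_LOCAL, SKIP_LOCKING }

   public Set<Flag> readFlags(ObjectInput in) throws IOException {
      int count = in.readUnsignedByte();      // number of flags on the wire
      Set<Flag> flags = EnumSet.noneOf(Flag.class);
      Flag[] known = Flag.values();
      for (int i = 0; i < count; i++) {
         int ordinal = in.readUnsignedByte(); // one byte per flag in this sketch
         if (ordinal < known.length) {
            flags.add(known[ordinal]);
         } else {
            // A newer node sent a flag this version does not know: warn and ignore.
            System.err.printf("WARN: ignoring unknown flag ordinal %d%n", ordinal);
         }
      }
      return flags;
   }
}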
>>>>
>>>
>>> I think there are situations where two versions really are
>>> incompatible and we really should "blow up in flames".
>>
>> Any blowing up in flames would stop rolling upgrades from working, so find me a real example of this first and try to understand how a rolling upgrade would work in that scenario...
>>
>
> I think there are cases where upgrading nodes one by one (a "true"
> rolling upgrade) does not make sense, because the Infinispan versions
> are just too different to work together in the same cluster.
>
> But there are probably many other places where we'd be blowing up in
> flames in such cases, so it's not worth adding complications to the
> flags serialization format just for that.
>
>
>>
>>> I'm not saying that's justified in all cases or even in the majority
>>> of cases, but I'm pretty sure it's not going to be 0% either.
>>>
>>>>>
>>>>>> On a totally different page, why are we serializing Flags one by one?
>>>>>> We mostly need to serialize EnumSets, right?
>>>>>> An EnumSet can be encoded using the bits of a couple of bytes.
>>>>>> Three bytes should be enough for all our needs... we could even be
>>>>>> clever and reserve a special Externalizer-ID for the empty set, to
>>>>>> avoid 3 bytes where none are needed.
>>>>>> Currently, by contrast, we need an integer (4 bytes) to encode the
>>>>>> header for "EnumSet", plus (4-byte header + 1-byte value) per flag -> a lot.
>>>>>>
>>>>>
>>>>> RiverMarshaller already has an optimization for the empty set:
>>>>> https://github.com/dmlloyd/jboss-marshalling/blob/master/river/src/main/java/org/jboss/marshalling/river/RiverMarshaller.java#L613
>>>>>
>>>>> I'm not sure why it doesn't encode each element as a bit; it might be
>>>>> to keep wire compatibility when the order of values in an enum
>>>>> changes.
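To illustrate that compatibility concern (made-up enum versions, not our real Flag): if the bit positions come from ordinals, inserting a constant shifts every later bit, so two versions silently decode the same bytes differently.

// Illustrative only: ordinal-based bit positions break when a constant is
// inserted into the middle of the enum in a later version.
enum FlagsV50 { SKIP_LOCKING, CACHE_MODE_LOCAL }                    // ordinals 0, 1
enum FlagsV52 { SKIP_LOCKING, FORCE_SYNCHRONOUS, CACHE_MODE_LOCAL } // ordinals 0, 1, 2

public class OrdinalShiftDemo {
   public static void main(String[] args) {
      int wire = 1 << FlagsV52.CACHE_MODE_LOCAL.ordinal(); // 5.2 writes bit 2
      // A 5.0 reader checks bit 1 for CACHE_MODE_LOCAL and sees nothing set:
      boolean seenBy50 = (wire & (1 << FlagsV50.CACHE_MODE_LOCAL.ordinal())) != 0;
      System.out.println("5.0 sees CACHE_MODE_LOCAL: " + seenBy50); // prints false
   }
}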
>>>>
>>>> David?
>>>>
>>>>> However, because there is only one EnumSet for all Enum types, a
>>>>> hypothetical EnumSetExternalizer also needs to write the name of the
>>>>> enum class - if we wanted to serialize EnumSet<Flag> in 2 bytes then
>>>>> we'd need to make the transformation in ReplicableCommandExternalizer.
>>>>
>>>> Not necessarily. I think we should do what Sanne suggests but manually in the Flag.Externalizer class, since that's tied to the Flag type.
>>>>
>>>> Within it, we can replicate what an enum set does for marshalling. We already have such code in the Hot Rod server/client (that's how we handle flags there - I completely forgot about it when I wrote Flag.Externalizer), so it shouldn't be a biggie.
>>>>
>>>
>>> The call stack looks like this: ReplicableCommandExternalizer ->
>>> EnumSet externalizer (in RiverMarshaller) -> Flag.Externalizer.
>>> You can't change how the EnumSet is serialized in Flag.Externalizer;
>>> you have to modify either the EnumSet externalizer (e.g. by writing a
>>> new FlagSet class) or ReplicableCommandExternalizer.
>>
>> Writing a new FlagSet class is my favourite. Easy to do and easy to register in the ext table.
>>
>
> I think I prefer writing the flags explicitly in the
> ReplicableCommandExternalizer, to avoid creating lots of extra object
> instances.
^ That could work. I added a note to ISPN-2044. Thanks.
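A rough sketch of what writing the flags explicitly in the command externalizer could look like (the class, method and enum below are stand-ins, not the real ReplicableCommandExternalizer):

import java.io.IOException;
import java.io.ObjectOutput;
import java.util.Set;

// Hypothetical stand-in: write the flag bits straight into the command
// header, bypassing the generic EnumSet path and avoiding a wrapper object.
public class CommandHeaderSketch {

   // Trimmed, illustrative subset of the Flag enum.
   enum Flag { CACHE_MODE_LOCAL, ZERO_LOCK_ACQUISITION_TIMEOUT, SKIP_LOCKING }

   void writeCommandHeader(ObjectOutput out, byte commandId, Set<Flag> flags) throws IOException {
      out.writeByte(commandId);
      int bits = 0;
      for (Flag f : flags)
         bits |= 1 << f.ordinal();
      // Three bytes cover the whole flag set without allocating anything extra.
      out.writeByte(bits & 0xFF);
      out.writeByte((bits >>> 8) & 0xFF);
      out.writeByte((bits >>> 16) & 0xFF);
   }
}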
>
>>
>>>
>>> The HotRod flags are serialized directly in Codec10.writeHeader, which
>>> is the equivalent of ReplicableCommandExternalizer.writeCommandHeader.
>>>
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache