[JBoss JIRA] (ISPN-2145) No descriptions for invalid jgroups configuration files
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-2145?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-2145:
------------------------------
Fix Version/s: 7.1.0.Beta1
(was: 7.1.0.Alpha1)
> No descriptions for invalid jgroups configuration files
> -------------------------------------------------------
>
> Key: ISPN-2145
> URL: https://issues.jboss.org/browse/ISPN-2145
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 5.1.2.FINAL, 7.0.0.CR1
> Environment: Any
> Reporter: Dmitry Udalov
> Assignee: Tristan Tarrant
> Fix For: 7.1.0.Beta1
>
>
> The log gives no description of what is wrong with an invalid JGroups configuration file. Shuffling elements of the file (why not!) makes it invalid, but the log files only report that an error occurred, and you have to attach a debugger to find out what the actual problem is. It would be easier if JGroupsTransport also reported the exception, not just a generic message, in blocks like:
> } catch (Exception e) {
>     log.errorCreatingChannelFromConfigFile(cfg);
>     throw new CacheException(e);
> }
> As a result the log file contains a lot of generic messages that never explain the underlying problem; the swallowed exception, which in my case would have been quite helpful, was:
> java.lang.Exception: events [GET_DIGEST SET_DIGEST FIND_INITIAL_MBRS FIND_ALL_VIEWS ] are required by GMS, but not provided by any of the protocols below it
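> A minimal sketch of what is being asked for, assuming the logger message were given an overload that accepts the cause (the two-argument form below is hypothetical, not the current Log API):
> } catch (Exception e) {
>     // pass the cause along so the log explains why the configuration file is invalid,
>     // instead of only stating that channel creation failed
>     log.errorCreatingChannelFromConfigFile(cfg, e); // hypothetical overload taking the Throwable
>     throw new CacheException(e);
> }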
--
This message was sent by Atlassian JIRA
(v6.3.8#6338)
10 years, 4 months
[JBoss JIRA] (ISPN-3244) TopologyAwareSyncConsistentHashFactory should limit the number of segments per node
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-3244?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-3244:
------------------------------
Fix Version/s: 7.1.0.Beta1
(was: 7.1.0.Alpha1)
> TopologyAwareSyncConsistentHashFactory should limit the number of segments per node
> -----------------------------------------------------------------------------------
>
> Key: ISPN-3244
> URL: https://issues.jboss.org/browse/ISPN-3244
> Project: Infinispan
> Issue Type: Bug
> Components: State Transfer
> Affects Versions: 5.2.6.Final, 5.3.0.CR2
> Reporter: Dan Berindei
> Fix For: 7.1.0.Beta1
>
>
> Let's say we have a cluster with 5 nodes: A(r1), B(r2), C(r2), D(r3), E(r3)
> TopologyAwareSyncConsistentHashFactory spreads the segments equally across the racks, meaning A (alone on rack r1) ends up owning twice as many segments as each of the other nodes.
> TopologyAwareConsistentHashFactory, by contrast, limits the maximum number of segments per node, so that A owns just as many segments as the other nodes. There is one caveat: the number of racks must be greater than numOwners, otherwise each rack must hold (at least) one copy of all the data.
> TopologyAwareSyncConsistentHashFactory is somewhat random, so we can't distribute the data perfectly, but we can limit the number of segments on each node to something like 1.5x the average number of segments.
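> A rough sketch of how that cap works out (the segment count and numOwners below are made-up example values, not taken from this issue):
> int numSegments = 60;   // hypothetical total number of segments
> int numOwners   = 2;    // hypothetical number of owners per segment
> int numNodes    = 5;    // A, B, C, D, E from the example above
> // copies each node would own in a perfectly even distribution
> int average = numSegments * numOwners / numNodes;          // 24
> // proposed cap: ~1.5x the average, so A cannot be handed twice the load
> int maxSegmentsPerNode = (int) Math.ceil(average * 1.5);   // 36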
--
This message was sent by Atlassian JIRA
(v6.3.8#6338)
10 years, 4 months
[JBoss JIRA] (ISPN-3273) Dist L1 owners that aren't primary don't respect assumeOriginKeptEntryInL1
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-3273?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-3273:
------------------------------
Fix Version/s: 7.1.0.Beta1
(was: 7.1.0.Alpha1)
> Dist L1 owners that aren't primary don't respect assumeOriginKeptEntryInL1
> --------------------------------------------------------------------------
>
> Key: ISPN-3273
> URL: https://issues.jboss.org/browse/ISPN-3273
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Reporter: William Burns
> Assignee: William Burns
> Fix For: 7.1.0.Beta1
>
> Attachments: DistSyncFuncTest.java
>
>
> When a write operation triggers an L1 invalidation, there is a boolean flag, assumeOriginKeptEntryInL1, which tells the owner not to send an invalidation to the node that originated the update. This works fine for the primary owner; however, any additional backup owners think the origin is the primary owner, and as such they may send invalidations to the real origin.
> -This affects both tx and non-tx caches. Sync tx caches don't see the problem since locking prevents the invalidation, but it still causes an unneeded network round trip, which can add delay.-
> Actually this only affects non-tx caches, as tx caches send the prepare/commit directly to the owner(s) instead of having it relayed.
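> A hypothetical sketch of the fix on the backup-owner side (the helper names below are made up for illustration; only the idea of also excluding the write's real origin comes from this issue):
> // nodes that have requested this key into their L1
> Collection<Address> targets = new HashSet<>(l1RequestorsOf(key));  // hypothetical helper
> if (assumeOriginKeptEntryInL1) {
>     // backup owners must skip the node that actually issued the write,
>     // not just the primary owner that relayed the command to them
>     targets.remove(realOriginOf(command));                          // hypothetical helper
> }
> sendL1Invalidation(key, targets);                                   // hypothetical helper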
--
This message was sent by Atlassian JIRA
(v6.3.8#6338)
10 years, 4 months