[JBoss JIRA] (ISPN-4654) AND over range queries does not work (indexless query)
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-4654?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-4654:
-----------------------------------------------
Dave Stahl <dstahl@redhat.com> changed the Status of [bug 1139650|https://bugzilla.redhat.com/show_bug.cgi?id=1139650] from VERIFIED to CLOSED
> AND over range queries does not work (indexless query)
> ------------------------------------------------------
>
> Key: ISPN-4654
> URL: https://issues.jboss.org/browse/ISPN-4654
> Project: Infinispan
> Issue Type: Bug
> Components: Embedded Querying, Remote Querying
> Affects Versions: 6.0.2.Final, 7.0.0.Beta1
> Reporter: Radim Vansa
> Assignee: Adrian Nistor
> Fix For: 7.0.0.Beta2
>
>
> Check this in QueryDslConditionsTest:
> {code}
> public void testAnd5() throws Exception {
>    QueryFactory qf = getQueryFactory();
>    // range queries use different code
>    Query q = qf.from(getModelFactory().getUserImplClass())
>          .having("id").lt(1000)
>          .and().having("age").lt(1000)
>          .toBuilder().build();
>    List<User> list = q.list();
>    assertEquals(3, list.size());
> }
> {code}
> The problem is that one of the predicate subscriptions gets suspended, so the second {{lt}} predicate never fires its update and, consequently, the AND condition is never re-evaluated.
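> To make the expected semantics concrete, here is a minimal hypothetical model in plain Java (not the actual objectfilter internals): both range predicates must keep firing so the AND node can be re-evaluated whenever either side changes.
> {code}
> import java.util.function.Predicate;
>
> class AndOverRangesSketch {
>    static Predicate<int[]> and(Predicate<int[]> left, Predicate<int[]> right) {
>       // Evaluate both sides unconditionally; "suspending" either predicate
>       // subscription would leave the AND node with a stale input.
>       return values -> left.test(values) & right.test(values);
>    }
>
>    public static void main(String[] args) {
>       Predicate<int[]> idLt1000 = v -> v[0] < 1000;   // having("id").lt(1000)
>       Predicate<int[]> ageLt1000 = v -> v[1] < 1000;  // having("age").lt(1000)
>       Predicate<int[]> query = and(idLt1000, ageLt1000);
>       System.out.println(query.test(new int[]{1, 30})); // true: both ranges match
>    }
> }
> {code}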
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (ISPN-4710) DistributedSegmentReadLocker should be allowed to skip ReadLocks on small files
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-4710?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-4710:
-----------------------------------------------
Dave Stahl <dstahl@redhat.com> changed the Status of [bug 1166865|https://bugzilla.redhat.com/show_bug.cgi?id=1166865] from VERIFIED to CLOSED
> DistributedSegmentReadLocker should be allowed to skip ReadLocks on small files
> -------------------------------------------------------------------------------
>
> Key: ISPN-4710
> URL: https://issues.jboss.org/browse/ISPN-4710
> Project: Infinispan
> Issue Type: Enhancement
> Components: Lucene Directory
> Reporter: Sanne Grinovero
> Assignee: Gustavo Fernandes
> Fix For: 7.0.0.CR1
>
>
> Both of these methods:
> - {{org.infinispan.lucene.readlocks.DistributedSegmentReadLocker.deleteOrReleaseReadLock(String)}}
> - {{org.infinispan.lucene.readlocks.DistributedSegmentReadLocker.realFileDelete(FileReadLockKey, AdvancedCache<Object, Integer>, AdvancedCache<?, ?>, AdvancedCache<?, ?>, boolean)}}
> perform a lot of unnecessary operations - potentially on synchronous clustered caches - even though we know in advance that files which are not chunked into smaller pieces don't need a read lock (chunking also affects how deletion has to be performed).
> The determining factor between the two styles is defined in:
> {{org.infinispan.lucene.impl.DirectoryLuceneV4.openInput(String, IOContext)}}
> {code}
> @Override
> public IndexInput openInput(final String name, final IOContext context) throws IOException {
>    final IndexInputContext indexInputContext = impl.openInput(name);
>    if (indexInputContext.readLocks == null) {
>       return new SingleChunkIndexInput(indexInputContext);
>    }
>    else {
>       return new InfinispanIndexInput(indexInputContext);
>    }
> }
> {code}
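> A hedged sketch of the suggested short-circuit (hypothetical names, not the actual Infinispan code): files stored as a single chunk never needed a read lock in the first place, so the clustered bookkeeping can be skipped entirely.
> {code}
> import java.util.Map;
> import java.util.concurrent.ConcurrentHashMap;
>
> class SkipReadLockSketch {
>    private final Map<String, byte[]> chunksCache = new ConcurrentHashMap<>(); // stand-in for the chunk cache
>
>    void deleteOrReleaseReadLock(String fileName, boolean isSingleChunk) {
>       if (isSingleChunk) {
>          // No concurrent reader can be holding other chunks of this file:
>          // delete the single entry directly, with no read-lock round trips.
>          chunksCache.remove(fileName);
>          return;
>       }
>       releaseReadLockAndMaybeDelete(fileName); // existing chunked-file protocol
>    }
>
>    private void releaseReadLockAndMaybeDelete(String fileName) { /* delegate to the distributed read-lock logic */ }
> }
> {code}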
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (ISPN-4497) Race condition in LocalLockMergingSegmentReadLocker results in file content being deleted
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-4497?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-4497:
-----------------------------------------------
Dave Stahl <dstahl@redhat.com> changed the Status of [bug 1166865|https://bugzilla.redhat.com/show_bug.cgi?id=1166865] from VERIFIED to CLOSED
> Race condition in LocalLockMergingSegmentReadLocker results in file content being deleted
> -----------------------------------------------------------------------------------------
>
> Key: ISPN-4497
> URL: https://issues.jboss.org/browse/ISPN-4497
> Project: Infinispan
> Issue Type: Bug
> Components: Lucene Directory
> Affects Versions: 5.2.6.Final
> Reporter: Anuj Shah
> Assignee: Sanne Grinovero
> Fix For: 7.0.0.Alpha5
>
>
> There is a race condition in LocalLockMergingSegmentReadLocker which can lead to extra delete calls on the underlying DistributedSegmentReadLocker, which in turn results in the file being removed from the caches.
> This happens with three or more threads acquiring and releasing locks on the same file simultaneously:
> # Thread 1 (T1) acquires a lock and creates a {{LocalReadLock}}, call it L1 - the underlying lock is acquired
> # T2 starts to acquire and holds a reference to L1
> # T3 starts to acquire and holds a reference to L1
> # T1 releases - L1 at this stage has a value of only 1 - so the underlying lock is released, and L1 is removed from the map
> # T2 continues - finds L1 with value 0 and acquires the underlying lock
> # T3 continues - increments L1 value to 2
> # T2 releases - creates a new {{LocalReadLock}}, L2 - this has zero value so the underlying lock is released, and L2 is removed from the map
> # T3 releases - creates a new {{LocalReadLock}}, L3 - this has zero value so the underlying lock is released, and L3 is removed from the map
> # The final step triggers a real file delete, as the underlying lock is released one too many times (a sketch of an atomic alternative follows below)
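> A hedged sketch of one way to close this race (illustrative only, not the actual fix): make the local count transitions atomic per file name, so that a releasing thread can never observe a {{LocalReadLock}} that a concurrent acquirer is about to increment.
> {code}
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.atomic.AtomicInteger;
>
> class LocalLockMergingSketch {
>    private final ConcurrentHashMap<String, AtomicInteger> localLocks = new ConcurrentHashMap<>();
>
>    void acquireReadLock(String fileName) {
>       // compute() runs atomically per key: a removed entry can never be
>       // resurrected with a stale count by a thread holding an old reference.
>       localLocks.compute(fileName, (name, count) -> {
>          if (count == null) {
>             acquireUnderlyingLock(name); // first local reader takes the real lock
>             return new AtomicInteger(1);
>          }
>          count.incrementAndGet();
>          return count;
>       });
>    }
>
>    void releaseReadLock(String fileName) {
>       localLocks.compute(fileName, (name, count) -> {
>          if (count == null || count.decrementAndGet() > 0) {
>             return count; // still referenced locally (or already gone)
>          }
>          releaseUnderlyingLock(name); // last local reader releases the real lock
>          return null;                 // atomically removes the mapping
>       });
>    }
>
>    private void acquireUnderlyingLock(String name) { /* delegate to DistributedSegmentReadLocker */ }
>    private void releaseUnderlyingLock(String name) { /* delegate to DistributedSegmentReadLocker */ }
> }
> {code}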
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (ISPN-4776) The topology id for the merged cache topology is not always bigger than all the partition topology ids
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-4776?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-4776:
-----------------------------------------------
Dave Stahl <dstahl@redhat.com> changed the Status of [bug 1155611|https://bugzilla.redhat.com/show_bug.cgi?id=1155611] from VERIFIED to CLOSED
> The topology id for the merged cache topology is not always bigger than all the partition topology ids
> ------------------------------------------------------------------------------------------------------
>
> Key: ISPN-4776
> URL: https://issues.jboss.org/browse/ISPN-4776
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 7.0.0.Beta2
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Blocker
> Labels: testsuite_stability
> Fix For: 7.0.0.CR2, 7.0.0.Final
>
>
> With the ISPN-4574 fix, I changed the merge algorithm to pick the partition with the most members (both in the _stable_ topology and in the _current_ topology) instead of the partition with the highest topology id.
> However, the biggest partition is not necessarily the one with the highest topology id, so it's possible that some nodes will ignore the merged topology because they already have a higher topology installed. This happened once in ClusterTopologyManagerTest.testClusterRecoveryAfterThreeWaySplit:
> {noformat}
> 00:24:59,286 DEBUG (transport-thread-NodeL-p33097-t6:) [ClusterCacheStatus] Recovered 3 partition(s) for cache cache: [CacheTopology{id=8, rebalanceId=3, currentCH=DefaultConsistentHash{ns = 60, owners = (1)[, NodeL-25322: 60+0]}, pendingCH=null, unionCH=null}, CacheTopology{id=6, rebalanceId=3, currentCH=DefaultConsistentHash{ns = 60, owners = (2)[, NodeL-25322: 30+10, NodeN-6727: 30+10]}, pendingCH=DefaultConsistentHash{ns = 60, owners = (2)[, NodeL-25322: 30+30, NodeN-6727: 30+30]}, unionCH=null}, CacheTopology{id=5, rebalanceId=2, currentCH=DefaultConsistentHash{ns = 60, owners = (1)[, NodeM-12972: 60+0]}, pendingCH=null, unionCH=null}]
> 00:24:59,287 DEBUG (transport-thread-NodeL-p33097-t6:) [ClusterCacheStatus] Updating topologies after merge for cache cache, current topology = CacheTopology{id=5, rebalanceId=2, currentCH=DefaultConsistentHash{ns = 60, owners = (1)[, NodeM-12972: 60+0]}, pendingCH=null, unionCH=null}, stable topology = CacheTopology{id=4, rebalanceId=2, currentCH=DefaultConsistentHash{ns = 60, owners = (3)[, NodeL-25322: 20+20, NodeM-12972: 20+20, NodeN-6727: 20+20]}, pendingCH=null, unionCH=null}, availability mode = null
> 00:24:59,287 DEBUG (transport-thread-NodeL-p33097-t6:) [ClusterTopologyManagerImpl] Updating cluster-wide current topology for cache cache, topology = CacheTopology{id=5, rebalanceId=2, currentCH=DefaultConsistentHash{ns = 60, owners = (1)[, NodeM-12972: 60+0]}, pendingCH=null, unionCH=null}, availability mode = null
> 00:24:59,288 TRACE (transport-thread-NodeL-p33097-t3:) [LocalTopologyManagerImpl] Ignoring consistent hash update for cache cache, current topology is 8: CacheTopology{id=5, rebalanceId=2, currentCH=DefaultConsistentHash{ns = 60, owners = (1)[, NodeM-12972: 60+0]}, pendingCH=null, unionCH=null}
> {noformat}
> Failure logs here: http://ci.infinispan.org/viewLog.html?buildId=12364&buildTypeId=Infinispa...
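> For illustration, a minimal sketch of the obvious remedy (an assumption on my part, not taken from the actual patch): derive the merged topology id from the maximum of all recovered partition ids, so that every node accepts the merged topology as newer.
> {code}
> import java.util.List;
>
> class MergedTopologyIdSketch {
>    // The merged id must dominate every partition's topology id,
>    // otherwise nodes with a higher installed topology ignore the update.
>    static int mergedTopologyId(List<Integer> partitionTopologyIds) {
>       int maxId = 0;
>       for (int id : partitionTopologyIds) {
>          maxId = Math.max(maxId, id);
>       }
>       return maxId + 1;
>    }
>
>    public static void main(String[] args) {
>       // For the log above: partitions had ids 8, 6 and 5, so the merged
>       // topology would get id 9 and NodeL (at id 8) would accept it.
>       System.out.println(mergedTopologyId(List.of(8, 6, 5))); // prints 9
>    }
> }
> {code}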
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (ISPN-5060) PartitionHandling: remove unavailable mode
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-5060?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-5060:
-----------------------------------------------
Dave Stahl <dstahl@redhat.com> changed the Status of [bug 1179926|https://bugzilla.redhat.com/show_bug.cgi?id=1179926] from VERIFIED to CLOSED
> PartitionHandling: remove unavailable mode
> ------------------------------------------
>
> Key: ISPN-5060
> URL: https://issues.jboss.org/browse/ISPN-5060
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core
> Affects Versions: 7.0.2.Final, 7.1.0.Alpha1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Fix For: 7.1.0.Beta1, 7.1.0.Final
>
>
> The Unavailable mode name is misleading, because some keys are available, just like in Degraded mode.
> The only difference between Degraded and Unavailable is that with Degraded the cluster might recover without manual intervention. The administrator still has to know a lot more details in order to decide whether manual intervention is needed or not. So it would be less confusing if gracefully shutting down {{numOwners}} nodes in quick succession would leave the cache in Degraded mode instead of Unavailable.
> Instead of removing the Unavailable mode completely, we could also change it to deny access to all the keys and allow the administrator to use it. E.g. if we had an operation to dump the cache into a shared store and another to load the cache from a shared store, the administrator could force the cache into Unavailable mode while dumping/loading the cache.
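> To illustrate the proposal (a hypothetical sketch; these enum values and this check are not the real Infinispan API): Degraded keeps keys with a complete owner set readable, while an admin-forced Unavailable mode would deny every key, e.g. while dumping or loading the cache.
> {code}
> // Hypothetical availability semantics, for illustration only.
> enum AvailabilityMode { AVAILABLE, DEGRADED, UNAVAILABLE }
>
> class AvailabilityCheckSketch {
>    static boolean isReadable(AvailabilityMode mode, boolean allOwnersPresent) {
>       switch (mode) {
>          case AVAILABLE:   return true;
>          case DEGRADED:    return allOwnersPresent; // some keys stay readable
>          case UNAVAILABLE: return false;            // admin-forced: deny all keys
>          default:          return false;
>       }
>    }
> }
> {code}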
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)