[JBoss JIRA] (ISPN-2255) FineGrainedAtomicMap missing key, value pairs in some cluster nodes in distributed mode, embedded infinispan cache
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-2255?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-2255:
-----------------------------------------------
Tristan Tarrant <ttarrant(a)redhat.com> changed the Status of [bug 1042861|https://bugzilla.redhat.com/show_bug.cgi?id=1042861] from MODIFIED to ON_QA
> FineGrainedAtomicMap missing key,value pairs in some cluster nodes in distributed mode, embedded infinispan cache
> -----------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-2255
> URL: https://issues.jboss.org/browse/ISPN-2255
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 5.1.5.FINAL
> Environment: Details about cluster:
> 1. Infinispan 5.1.5
> 2. java 7u5 64bit
> 3. linux 64bit (two nodes debian and one node ubuntu)
> 4. tomcat 7.0.28
> 5. jbossTM 4.16.0 JTA only
> Tomcat integration code is here:
> https://github.com/zvrablik/tomcatInfinispanSessionManager/tree/master/to...
> 6. all machines run on real hardware (no virtual machines); date and time are synchronized and shouldn't vary by more than 1 second
> Reporter: Zdenek Henek
> Assignee: Pedro Ruivo
> Priority: Critical
> Labels: 621
> Fix For: 7.0.0.Alpha1, 7.0.0.Final
>
> Attachments: fgamissueTest2.zip, jgroupsConfig.xml, sessionInfinispanConfigtestLB.xml
>
>
> I am using FineGrainedAtomicMap to store data in a clustered distributed cache. When clustering is set to replicated mode, this doesn't cause any issues, or at least I haven't run
> into any. When I switch to distributed mode, some key,value pairs are missing on some nodes. This happens randomly. numOwners is set to two and I have a cluster of three nodes.
> More details:
> https://community.jboss.org/thread/203420?tstart=60
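> To illustrate the access pattern involved, here is a minimal sketch (cache name and session key are illustrative; the config file name comes from the attachments above, and the cache is assumed transactional, as in the reported JTA setup):
> {code}
> import org.infinispan.Cache;
> import org.infinispan.atomic.AtomicMapLookup;
> import org.infinispan.manager.DefaultCacheManager;
> import org.infinispan.manager.EmbeddedCacheManager;
>
> import java.util.Map;
>
> public class FgamExample {
>     public static void main(String[] args) throws Exception {
>         // Distributed cache with numOwners="2" declared in the XML config
>         EmbeddedCacheManager manager = new DefaultCacheManager("sessionInfinispanConfigtestLB.xml");
>         Cache<String, Object> cache = manager.getCache("sessionCache");
>
>         // One fine-grained atomic map per session; each key,value pair in
>         // the map should reach numOwners nodes independently
>         Map<String, Object> attributes = AtomicMapLookup.getFineGrainedAtomicMap(cache, "session-42");
>         attributes.put("user", "alice");
>
>         // Reported symptom: on some of the three nodes, reads of the same
>         // fine-grained map randomly miss pairs written elsewhere
>         manager.stop();
>     }
> }
> {code}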
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3871) Server testsuite fixes
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3871?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-3871:
-----------------------------------------------
Tristan Tarrant <ttarrant(a)redhat.com> changed the Status of [bug 1073612|https://bugzilla.redhat.com/show_bug.cgi?id=1073612] from MODIFIED to ON_QA
> Server testsuite fixes
> ----------------------
>
> Key: ISPN-3871
> URL: https://issues.jboss.org/browse/ISPN-3871
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Reporter: Jakub Markos
> Assignee: Jakub Markos
> Fix For: 7.0.0.Alpha1, 7.0.0.Final
>
>
> While running the server testsuite on various OS configurations, several test issues were found:
> - suppress-state-transfer tests on Windows were failing because the jenkins+windows+cygwin combination is slow: we do something like "kill node -> verify that views are updated", and during the verification the update is still pending; fixed with sleeps after the kill (see the sketch after this list)
> - HotRod protocol version bump
> - running the testsuite with a specific zip distribution on Windows
> - xsite config example test fix for IBM Java
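> A minimal sketch of that sleep-after-kill pattern; NodeController and ClusterView are hypothetical stand-ins for the testsuite's real helpers:
> {code}
> interface NodeController { void kill(); }
> interface ClusterView { int memberCount(); }
>
> class ViewAfterKill {
>     // After killing a node, wait before verifying the view: on a slow CI
>     // (jenkins+windows+cygwin) the view update is otherwise still pending.
>     static void killAndVerify(NodeController node, ClusterView view) throws InterruptedException {
>         node.kill();
>         Thread.sleep(10000L);           // settle time; tune for the environment
>         if (view.memberCount() != 2) {  // expected size after one node leaves
>             throw new AssertionError("cluster view not updated");
>         }
>     }
> }
> {code}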
[JBoss JIRA] (ISPN-3947) HotRod client keeps trying to recover connections to a failed cluster
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3947?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-3947:
-----------------------------------------------
Tristan Tarrant <ttarrant(a)redhat.com> changed the Status of [bug 1058887|https://bugzilla.redhat.com/show_bug.cgi?id=1058887] from MODIFIED to ON_QA
> HotRod client keeps trying to recover connections to a failed cluster
> ---------------------------------------------------------------------
>
> Key: ISPN-3947
> URL: https://issues.jboss.org/browse/ISPN-3947
> Project: Infinispan
> Issue Type: Feature Request
> Components: Remote Protocols
> Affects Versions: 6.0.1.Final, 7.0.0.Alpha1
> Reporter: Wolf-Dieter Fink
> Assignee: Galder Zamarreño
> Priority: Critical
> Labels: 621, hotrod, hotrod-java-client
> Fix For: 7.0.0.Alpha1, 7.0.0.Final
>
>
> If an infinispan-server cluster is no longer reachable for some reason, e.g. a network disconnect, the HotRod client tries to re-establish the lost connections.
> The client library retries according to a fixed calculation: the maximum number of connections from the pool, or 10, multiplied by the number of available servers.
> This can take a very long time before the application can continue and react, as the client waits for the read or connect timeout on each attempt.
> To improve this behaviour there should be a configurable limit on retries per server and/or an overall timeout.
> This would give the application the chance to handle a remote-cache failure and reply to the user instead of hanging for minutes (with the default settings).
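> Until such a setting exists, an application can impose its own deadline around a remote operation; a minimal sketch (the wrapper and its names are hypothetical, not part of the client API):
> {code}
> import org.infinispan.client.hotrod.RemoteCache;
>
> import java.util.concurrent.Callable;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.Future;
> import java.util.concurrent.TimeUnit;
> import java.util.concurrent.TimeoutException;
>
> public class BoundedRemoteGet {
>     private static final ExecutorService POOL = Executors.newCachedThreadPool();
>
>     // Surfaces a cluster-wide outage after timeoutMs instead of after the
>     // client has walked through the full retry calculation described above.
>     public static <K, V> V getWithDeadline(final RemoteCache<K, V> cache, final K key, long timeoutMs) throws Exception {
>         Future<V> future = POOL.submit(new Callable<V>() {
>             @Override
>             public V call() {
>                 return cache.get(key);
>             }
>         });
>         try {
>             return future.get(timeoutMs, TimeUnit.MILLISECONDS);
>         } catch (TimeoutException e) {
>             future.cancel(true); // best effort; the client's own retries may continue
>             throw e;
>         }
>     }
> }
> {code}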
[JBoss JIRA] (ISPN-4053) WriteSkew not throwing an exception for AtomicMap in REPL and DIST mode
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-4053?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-4053:
-----------------------------------------------
Tristan Tarrant <ttarrant(a)redhat.com> changed the Status of [bug 1047908|https://bugzilla.redhat.com/show_bug.cgi?id=1047908] from MODIFIED to ON_QA
> WriteSkew not throwing an exception for AtomicMap in REPL and DIST mode
> -----------------------------------------------------------------------
>
> Key: ISPN-4053
> URL: https://issues.jboss.org/browse/ISPN-4053
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 6.0.1.Final
> Reporter: Pedro Ruivo
> Assignee: Pedro Ruivo
> Labels: 621
> Fix For: 7.0.0.Final, 7.0.0.Alpha2
>
>
> The writeSkewCheck flag does not work correctly for AtomicMap. When Infinispan runs with the REPEATABLE_READ isolation level for transactions, the writeSkewCheck flag is enabled, and two concurrent transactions read/write data in the same AtomicMap, WriteSkewException is never thrown when committing either of the transactions.
> Consequently, the data stored in the AtomicMap might be changed by another transaction, and the current transaction will not detect this change during commit.
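> A minimal sketch of the interleaving (cache and key names are illustrative; assumes a transactional cache configured with REPEATABLE_READ and writeSkewCheck enabled):
> {code}
> import org.infinispan.Cache;
> import org.infinispan.atomic.AtomicMapLookup;
>
> import javax.transaction.Transaction;
> import javax.transaction.TransactionManager;
> import java.util.Map;
>
> public class WriteSkewScenario {
>     static void demonstrate(Cache<String, Object> cache) throws Exception {
>         TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
>
>         tm.begin();                                               // tx1
>         Map<String, String> map = AtomicMapLookup.getAtomicMap(cache, "shared");
>         map.get("counter");                                       // tx1 reads
>         Transaction tx1 = tm.suspend();
>
>         tm.begin();                                               // tx2, concurrent
>         AtomicMapLookup.getAtomicMap(cache, "shared").put("counter", "tx2");
>         tm.commit();                                              // tx2 commits its write
>
>         tm.resume(tx1);
>         map.put("counter", "tx1");                                // tx1 writes over a stale read
>         tm.commit();                // expected: WriteSkewException; actual: commit succeeds
>     }
> }
> {code}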
[JBoss JIRA] (ISPN-3942) HotRod client keeps trying to recover a connection to a cluster member after shutdown
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3942?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-3942:
-----------------------------------------------
Tristan Tarrant <ttarrant(a)redhat.com> changed the Status of [bug 1059277|https://bugzilla.redhat.com/show_bug.cgi?id=1059277] from MODIFIED to ON_QA
> HotRod client keeps trying to recover a connection to a cluster member after shutdown
> -------------------------------------------------------------------------------------
>
> Key: ISPN-3942
> URL: https://issues.jboss.org/browse/ISPN-3942
> Project: Infinispan
> Issue Type: Bug
> Components: Remote Protocols
> Affects Versions: 6.0.1.Final, 7.0.0.Alpha1
> Environment: Infinispan server with HotRod client in server-client mode and replicated caches.
> Reporter: Wolf-Dieter Fink
> Assignee: Pedro Ruivo
> Priority: Critical
> Labels: clustered, hotrod, hotrod-java-client
> Fix For: 7.0.0.Final, 7.0.0.Alpha2
>
> Attachments: hotrod-client.log
>
>
> If a HotRod client is connected to a server cluster and one of the servers performs a clean shutdown, the client keeps trying to reconnect to that server even after the cluster view has changed.
> There is only a replicated cache configured.
> The server log file shows that the new cluster view is established:
> 16:00:22,290 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-3,shared=udp) ISPN000094: Received new cluster view: [node1/clustered|2] (1) [node1/clustered]
> There is no failure from the application perspective, but there is always a retry whenever the RoundRobin strategy returns the unavailable server.
> This might end in a performance or failure issue with a larger cluster where part of it goes out of service for some reason.
[JBoss JIRA] (ISPN-3868) Deadlock in RemoteCache getAsync
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3868?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-3868:
-----------------------------------------------
Tristan Tarrant <ttarrant(a)redhat.com> changed the Status of [bug 1073331|https://bugzilla.redhat.com/show_bug.cgi?id=1073331] from MODIFIED to ON_QA
> Deadlock in RemoteCache getAsync
> --------------------------------
>
> Key: ISPN-3868
> URL: https://issues.jboss.org/browse/ISPN-3868
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 6.0.0.Final
> Environment: RemoteCache component of 6.0.0.Final
> Reporter: Alexander Furer
> Assignee: Mircea Markus
> Labels: remote
> Fix For: 7.0.0.Alpha1, 7.0.0.Final
>
>
> Here is the implementation of RemoteCache.getAsync():
> {code}
> public NotifyingFuture<V> getAsync(final K key) {
> assertRemoteCacheManagerIsStarted();
> final NotifyingFutureImpl<V> result = new NotifyingFutureImpl<V>();
> Future<V> future = executorService.submit(new Callable<V>() {
> @Override
> public V call() throws Exception {
> V toReturn = get(key);
> result.notifyFutureCompletion();
> return toReturn;
> }
> });
> result.setExecuting(future);
> return result;
> }
> {code}
> There are 2 problems here:
> 1. The Callable's call method might be invoked BEFORE the calling client has had a chance to add a listener (i.e. getAsync has not returned yet); in that case the listener's futureDone method will never be called.
> 2. Even if case #1 does not happen and notifyFutureCompletion is called on the listener, the future is not resolved yet: "call" has not returned, which is why the future passed to the listener is unresolved, and calling future.get from the listener blocks forever.
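> One way to close both races is to publish the result before firing listeners and to fire immediately for listeners that register after completion; a self-contained sketch with illustrative types (not the actual fix that shipped):
> {code}
> import java.util.ArrayList;
> import java.util.List;
> import java.util.concurrent.Callable;
> import java.util.concurrent.FutureTask;
>
> class ListenableFutureTask<V> extends FutureTask<V> {
>     interface Listener<V> { void futureDone(ListenableFutureTask<V> f); }
>
>     private final List<Listener<V>> listeners = new ArrayList<Listener<V>>();
>     private boolean completed;
>
>     ListenableFutureTask(Callable<V> callable) { super(callable); }
>
>     // A listener registered after completion is notified immediately,
>     // which closes problem #1.
>     public synchronized void attachListener(Listener<V> l) {
>         if (completed) { l.futureDone(this); } else { listeners.add(l); }
>     }
>
>     // FutureTask calls done() only after the outcome is stored, so a
>     // listener calling get() can no longer block, which closes problem #2.
>     @Override
>     protected synchronized void done() {
>         completed = true;
>         for (Listener<V> l : listeners) { l.futureDone(this); }
>     }
> }
> {code}
> getAsync could then submit such a task to the executor and return it, keeping the method's shape while removing both races.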
[JBoss JIRA] (ISPN-3957) Preload with async cache store is not efficient
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3957?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-3957:
-----------------------------------------------
Tristan Tarrant <ttarrant(a)redhat.com> changed the Status of [bug 1059489|https://bugzilla.redhat.com/show_bug.cgi?id=1059489] from MODIFIED to ON_QA
> Preload with async cache store is not efficient
> -----------------------------------------------
>
> Key: ISPN-3957
> URL: https://issues.jboss.org/browse/ISPN-3957
> Project: Infinispan
> Issue Type: Enhancement
> Components: Loaders and Stores
> Affects Versions: 5.2.7.Final
> Reporter: Mircea Markus
> Assignee: Dan Berindei
> Labels: 621
> Fix For: 5.2.8.Final, 7.0.0.Alpha1, 7.0.0.Final
>
>
> Configuring an AdvancedCacheLoader with preload=true and async=true causes it to load the entries from the store in a loop, each entry being fetched through a separate store read.
> This is caused by the way loadAll is implemented in the async loader: in order to enforce consistency with whatever is in memory, it does some special handling. However, this advanced async-loader logic is not needed during the initial preload, as the async cache loader is still empty.
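> Roughly the configuration that triggers this, as a sketch (assuming the Infinispan 5.x schema; the store class and location are illustrative):
> {code}
> <namedCache name="preloadedCache">
>    <loaders passivation="false" shared="false" preload="true">
>       <loader class="org.infinispan.loaders.file.FileCacheStore"
>               fetchPersistentState="false" purgeOnStartup="false">
>          <properties>
>             <property name="location" value="/data/store"/>
>          </properties>
>          <!-- preload above plus async here hits the per-entry
>               loadAll path described in this issue -->
>          <async enabled="true"/>
>       </loader>
>    </loaders>
> </namedCache>
> {code}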
[JBoss JIRA] (ISPN-4062) Infinispan directory server module is missing some dependencies
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-4062?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-4062:
-----------------------------------------------
Tristan Tarrant <ttarrant(a)redhat.com> changed the Status of [bug 1073086|https://bugzilla.redhat.com/show_bug.cgi?id=1073086] from MODIFIED to ON_QA
> Infinispan directory server module is missing some dependencies
> ---------------------------------------------------------------
>
> Key: ISPN-4062
> URL: https://issues.jboss.org/browse/ISPN-4062
> Project: Infinispan
> Issue Type: Bug
> Components: Server
> Affects Versions: 6.0.0.Final
> Reporter: Adrian Nistor
> Assignee: Adrian Nistor
> Labels: 6.3remotequeries, 621
> Fix For: 7.0.0.Final, 7.0.0.Alpha2
>
>
> Configuring an indexed cache with the 'infinispan' provider will result in an exception at server startup due to missing dependencies in module.xml:
> {code}
> 19:01:59,862 INFO [org.jboss.as.clustering.infinispan] (CacheStartThread,null,LuceneIndexesData) JBAS010281: Started LuceneIndexesData cache from local container
> 19:01:59,868 INFO [org.infinispan.jmx.CacheJmxRegistration] (CacheStartThread,null,LuceneIndexesLocking) ISPN000031: MBeans were successfully registered to the platform MBean server.
> 19:01:59,868 INFO [org.jboss.as.clustering.infinispan] (CacheStartThread,null,LuceneIndexesLocking) JBAS010281: Started LuceneIndexesLocking cache from local container
> 19:01:59,883 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-6) MSC00001: Failed to start service jboss.infinispan.local.addressbook: org.jboss.msc.service.StartException in service jboss.infinispan.local.addressbook: Failed to start service
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1767) [jboss-msc-1.0.4.GA.jar:1.0.4.GA]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_40]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_40]
> at java.lang.Thread.run(Thread.java:724) [rt.jar:1.7.0_40]
> Caused by: java.lang.NoClassDefFoundError: javax/management/MBeanRegistrationException
> at java.lang.Class.forName0(Native Method) [rt.jar:1.7.0_40]
> at java.lang.Class.forName(Class.java:270) [rt.jar:1.7.0_40]
> at org.jboss.logging.Logger.getMessageLogger(Logger.java:2247)
> at org.jboss.logging.Logger.getMessageLogger(Logger.java:2211)
> at org.infinispan.util.logging.LogFactory.getLog(LogFactory.java:19)
> at org.infinispan.lucene.impl.DirectoryBuilderImpl.<clinit>(DirectoryBuilderImpl.java:23)
> at org.infinispan.lucene.directory.DirectoryBuilder.newDirectoryInstance(DirectoryBuilder.java:25)
> at org.hibernate.search.infinispan.impl.InfinispanDirectoryProvider.start(InfinispanDirectoryProvider.java:103)
> at org.hibernate.search.indexes.impl.DirectoryBasedIndexManager.initialize(DirectoryBasedIndexManager.java:103)
> at org.hibernate.search.indexes.impl.IndexManagerHolder.createIndexManager(IndexManagerHolder.java:261)
> at org.hibernate.search.indexes.impl.IndexManagerHolder.createIndexManager(IndexManagerHolder.java:528)
> at org.hibernate.search.indexes.impl.IndexManagerHolder.createIndexManagers(IndexManagerHolder.java:495)
> at org.hibernate.search.indexes.impl.IndexManagerHolder.buildEntityIndexBinding(IndexManagerHolder.java:104)
> at org.hibernate.search.spi.SearchFactoryBuilder.initDocumentBuilders(SearchFactoryBuilder.java:363)
> at org.hibernate.search.spi.SearchFactoryBuilder.buildNewSearchFactory(SearchFactoryBuilder.java:219)
> at org.hibernate.search.spi.SearchFactoryBuilder.buildSearchFactory(SearchFactoryBuilder.java:143)
> at org.infinispan.query.impl.LifecycleManager.getSearchFactory(LifecycleManager.java:213)
> at org.infinispan.query.impl.LifecycleManager.cacheStarting(LifecycleManager.java:73)
> at org.infinispan.factories.ComponentRegistry.notifyCacheStarting(ComponentRegistry.java:228)
> at org.infinispan.factories.ComponentRegistry.start(ComponentRegistry.java:214)
> at org.infinispan.CacheImpl.start(CacheImpl.java:674)
> at org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:553)
> at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:516)
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:398)
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:412)
> at org.jboss.as.clustering.infinispan.DefaultEmbeddedCacheManager.getCache(DefaultEmbeddedCacheManager.java:89)
> at org.jboss.as.clustering.infinispan.DefaultEmbeddedCacheManager.getCache(DefaultEmbeddedCacheManager.java:80)
> at org.jboss.as.clustering.infinispan.subsystem.CacheService.start(CacheService.java:78)
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811) [jboss-msc-1.0.4.GA.jar:1.0.4.GA]
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746) [jboss-msc-1.0.4.GA.jar:1.0.4.GA]
> ... 3 more
> Caused by: java.lang.ClassNotFoundException: javax.management.MBeanRegistrationException from [Module "org.infinispan.lucene:main" from local module loader @136d1b4 (finder: local module finder @19dc4 (roots: /home/adrian/work/cpp-client/infinispan-server-7.0.0-SNAPSHOT/modules,/home/adrian/work/cpp-client/infinispan-server-7.0.0-SNAPSHOT/modules/system/layers/base))]
> at org.jboss.modules.ModuleClassLoader.findClass(ModuleClassLoader.java:190) [jboss-modules.jar:1.2.0.CR1]
> at org.jboss.modules.ConcurrentClassLoader.performLoadClassUnchecked(ConcurrentClassLoader.java:468) [jboss-modules.jar:1.2.0.CR1]
> at org.jboss.modules.ConcurrentClassLoader.performLoadClassChecked(ConcurrentClassLoader.java:456) [jboss-modules.jar:1.2.0.CR1]
> at org.jboss.modules.ConcurrentClassLoader.performLoadClass(ConcurrentClassLoader.java:398) [jboss-modules.jar:1.2.0.CR1]
> at org.jboss.modules.ConcurrentClassLoader.loadClass(ConcurrentClassLoader.java:120) [jboss-modules.jar:1.2.0.CR1]
> ... 33 more
> {code}
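> The direction of the fix, sketched: the org.infinispan.lucene module.xml needs a dependency that brings in javax.management (resource path and dependency list are illustrative):
> {code}
> <module xmlns="urn:jboss:module:1.1" name="org.infinispan.lucene">
>     <resources>
>         <resource-root path="infinispan-lucene-directory-7.0.0-SNAPSHOT.jar"/>
>     </resources>
>     <dependencies>
>         <!-- javax.api provides javax.management, whose absence causes the
>              NoClassDefFoundError in the stack trace above -->
>         <module name="javax.api"/>
>         <module name="org.infinispan"/>
>     </dependencies>
> </module>
> {code}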