[JBoss JIRA] (ISPN-5193) infinispan-embedded and infinispan-cli-interpreter don't work together
by Michal Karm Babacek (JIRA)
[ https://issues.jboss.org/browse/ISPN-5193?page=com.atlassian.jira.plugin.... ]
Michal Karm Babacek reassigned ISPN-5193:
-----------------------------------------
Assignee: Sebastian Łaskawiec
> infinispan-embedded and infinispan-cli-interpreter don't work together
> ----------------------------------------------------------------------
>
> Key: ISPN-5193
> URL: https://issues.jboss.org/browse/ISPN-5193
> Project: Infinispan
> Issue Type: Bug
> Components: CLI
> Affects Versions: 7.1.0.CR2, 8.0.0.Beta2
> Reporter: Jakub Markos
> Assignee: Sebastian Łaskawiec
>
> Creating a java application (no container) with both infinispan-embedded and infinispan-cli-interpreter dependencies results in this error when starting a cache manager:
> {code}Exception in thread "main" java.util.ServiceConfigurationError: org.infinispan.lifecycle.ModuleLifecycle: Provider org.infinispan.cli.interpreter.LifecycleCallbacks could not be instantiated: java.lang.ExceptionInInitializerError
> at java.util.ServiceLoader.fail(ServiceLoader.java:224)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:181)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:377)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
> at org.infinispan.commons.util.ServiceFinder.addServices(ServiceFinder.java:60)
> at org.infinispan.commons.util.ServiceFinder.load(ServiceFinder.java:42)
> at org.infinispan.util.ModuleProperties.resolveModuleLifecycles(ModuleProperties.java:41)
> at org.infinispan.factories.GlobalComponentRegistry.<init>(GlobalComponentRegistry.java:94)
> at org.infinispan.manager.DefaultCacheManager.<init>(DefaultCacheManager.java:292)
> at org.infinispan.manager.DefaultCacheManager.<init>(DefaultCacheManager.java:271)
> at org.infinispan.manager.DefaultCacheManager.<init>(DefaultCacheManager.java:244)
> at org.infinispan.manager.DefaultCacheManager.<init>(DefaultCacheManager.java:231)
> at Main64.main(Main64.java:17)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
> Caused by: java.lang.ExceptionInInitializerError
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at java.lang.Class.newInstance(Class.java:374)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:373)
> ... 15 more
> Caused by: java.lang.IllegalArgumentException: Logger implementation class org.infinispan.cli.interpreter.logging.Log_$logger has no matching constructor
> at infinispan.org.jboss.logging.Logger.getMessageLogger(Logger.java:2255)
> at infinispan.org.jboss.logging.Logger.getMessageLogger(Logger.java:2211)
> at org.infinispan.util.logging.LogFactory.getLog(LogFactory.java:21)
> at org.infinispan.cli.interpreter.LifecycleCallbacks.<clinit>(LifecycleCallbacks.java:20)
> ... 21 more
> {code}
> Tried with 7.1.0.CR2, config:
> {code}
> <?xml version="1.0" encoding="UTF-8"?>
> <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>             xmlns="urn:infinispan:config:7.1">
>     <cache-container default-cache="localcache">
>         <local-cache name="localcache"/>
>     </cache-container>
> </infinispan>
> {code}
> application:
> {code}
> public static void main(String[] args) throws Exception {
>     EmbeddedCacheManager manager = new DefaultCacheManager("config.xml");
>     Cache defaultCache = manager.getCache("localcache");
>     for (int i = 0; i < 10; i++) {
>         defaultCache.put("key" + i, "value" + i);
>     }
>     Thread.sleep(5000000);
> }
> {code}
> Using infinispan-core dependency instead of infinispan-embedded works.
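For context, the failure visible in the trace appears to be a shading mismatch: infinispan-embedded relocates jboss-logging to {{infinispan.org.jboss.logging}}, while the {{Log_$logger}} class shipped with infinispan-cli-interpreter was generated against the unrelocated {{Logger}} type. A hypothetical, self-contained sketch of that constructor-lookup failure (none of these class names are the real ones):

```java
import java.lang.reflect.Constructor;

// Hypothetical stand-ins for the relocated and original logger types.
public class ShadeMismatchDemo {
    static class OriginalLogger {}   // plays org.jboss.logging.Logger
    static class RelocatedLogger {}  // plays infinispan.org.jboss.logging.Logger

    // Generated message logger, compiled against the original Logger type.
    static class GeneratedLogger {
        GeneratedLogger(OriginalLogger delegate) {}
    }

    // The shaded jboss-logging looks up a constructor taking *its* Logger type.
    static String lookup() {
        try {
            Constructor<?> c =
                GeneratedLogger.class.getDeclaredConstructor(RelocatedLogger.class);
            return "found " + c.getName();
        } catch (NoSuchMethodException e) {
            // Surfaces upstream as "has no matching constructor".
            return "no matching constructor";
        }
    }

    public static void main(String[] args) {
        System.out.println(lookup());
    }
}
```

This would explain why swapping in plain infinispan-core (no relocation) works.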
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
9 years, 7 months
[JBoss JIRA] (ISPN-6981) AffinityIndexManager fails to index documents in async mode
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-6981?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes commented on ISPN-6981:
-----------------------------------------
Digging a bit more into the {{This lock is no longer being held}} error, it seems to be a visibility issue with the LockCache:
1) Index {{entity.235}} is started on {{NodeB}}; it acquires and releases the lock, a common practice when initializing the Infinispan Directory:
{code}
21:04:15,598 DEBUG (pool-3-thread-2:[]) [WorkspaceHolder] HSEARCH000236: Backend for index 'entity.235' started: using asynchronous backend with periodic commits.
21:04:15,608 TRACE (pool-3-thread-2:[]) [BaseLuceneLock] Lock: write.lock acquired for index: entity.235 from AffinityTopologyChangeAsyncTest-NodeB-33534
21:04:15,628 TRACE (pool-3-thread-2:[]) [BaseLuceneLock] Lock: write.lock removed for index: entity.235 from AffinityTopologyChangeAsyncTest-NodeB-33534 (was AffinityTopologyChangeAsyncTest-NodeB-33534)
{code}
2) An indexing work item arrives on {{NodeB}}, and it re-acquires the lock:
{code}
21:04:15,632 TRACE (Hibernate Search: Index updates queue processor for index entity.235-1:[]) [BaseLuceneLock] Lock: write.lock acquired for index: entity.235 from AffinityTopologyChangeAsyncTest-NodeB-33534
21:04:15,635 TRACE (Hibernate Search: Index updates queue processor for index entity.235-1:[]) [InfinispanIndexOutput] Opened new IndexOutput for file:_0.fdt in index: entity.235
{code}
3) Due to topology changes, {{NodeB}} loses ownership of segment 235, and thus of index {{entity.235}}. It starts flushing the index as part of the close:
{code}
21:04:16,479 DEBUG (transport-thread-AffinityTopologyChangeAsyncTest-NodeB-p12-t3:[]) [AffinityIndexManager] Topology changed notification for entity.235
21:04:16,480 DEBUG (transport-thread-AffinityTopologyChangeAsyncTest-NodeB-p12-t3:[]) [AffinityIndexManager] Ownership lost to 'AffinityTopologyChangeAsyncTest-NodeD-30445', closing index manager
21:04:16,480 DEBUG (transport-thread-AffinityTopologyChangeAsyncTest-NodeB-p12-t3:[]) [AffinityIndexManager] Flushing directory provider
21:04:16,489 DEBUG (transport-thread-AffinityTopologyChangeAsyncTest-NodeB-p12-t3:[]) [AbstractWorkspaceImpl] HSEARCH000304: Closing index writer for IndexManager 'entity.235'
21:04:17,542 DEBUG (transport-thread-AffinityTopologyChangeAsyncTest-NodeB-p12-t3:[]) [DirectoryImplementor] Removed file: segments_1 from index: entity.235 from AffinityTopologyChangeAsyncTest-NodeB-33534
21:04:17,543 TRACE (transport-thread-AffinityTopologyChangeAsyncTest-NodeB-p12-t3:[]) [InfinispanIndexOutput] Refreshed file listing view
{code}
4) While the flush is happening, the new owner of segment 235, {{NodeD}}, *for an unknown reason acquires the lock* for {{entity.235}} when it shouldn't, since in theory {{NodeB}} still holds the lock.
{code}
21:04:17,835 DEBUG (remote-thread-AffinityTopologyChangeAsyncTest-NodeD-p26-t5:[]) [AffinityIndexManager] *Lock holder for 235 is null*
21:04:17,843 TRACE (Hibernate Search: Index updates queue processor for index entity.235-1:[]) [BaseLuceneLock] Lock: write.lock acquired for index: entity.235 from AffinityTopologyChangeAsyncTest-NodeD-30445
21:04:17,849 TRACE (Hibernate Search: Index updates queue processor for index entity.235-1:[]) [InfinispanIndexOutput] Opened new IndexOutput for file:_1.fdt in index: entity.235
{code}
5) When {{NodeB}} finishes the flush started in 3), it removes the lock from the index, which by then was strangely held by {{NodeD}}:
{code}
21:04:18,491 TRACE (transport-thread-AffinityTopologyChangeAsyncTest-NodeB-p12-t3:[]) [BaseLuceneLock] Lock: write.lock removed for index: entity.235 from AffinityTopologyChangeAsyncTest-NodeB-33534 (was AffinityTopologyChangeAsyncTest-NodeD-30445)
21:04:18,491 TRACE (transport-thread-AffinityTopologyChangeAsyncTest-NodeB-p12-t3:[]) [IndexWriterHolder] IndexWriter closed
{code}
6) The Commit Scheduler on {{NodeD}} starts to fail, since it no longer holds the lock.
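The race in steps 3) to 6) boils down to a non-atomic check-then-act on the lock cache plus an unconditional release. A minimal sketch of both variants, using a plain {{ConcurrentHashMap}} as a hypothetical stand-in for the real LockCache:

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the suspected race: a non-atomic check-then-act lets NodeD see a
// null holder while NodeB is still flushing, and NodeB's unconditional
// release then deletes a lock that NodeD holds. Not the real LockCache API.
public class LockRaceSketch {
    private final ConcurrentHashMap<String, String> lockCache = new ConcurrentHashMap<>();

    // Racy variant: holder lookup and write are two separate cache operations.
    boolean acquireRacy(String index, String node) {
        if (lockCache.get(index) == null) { // "Lock holder for 235 is null"
            lockCache.put(index, node);     // ...and the lock is stolen
            return true;
        }
        return false;
    }

    // Atomic variant: putIfAbsent makes the check and the write a single step.
    boolean acquireAtomic(String index, String node) {
        return lockCache.putIfAbsent(index, node) == null;
    }

    // Unconditional removal (step 5): deletes the lock even if the current
    // holder is another node.
    void releaseUnconditional(String index) {
        lockCache.remove(index);
    }

    // Conditional removal: only succeeds if this node still holds the lock.
    boolean releaseConditional(String index, String node) {
        return lockCache.remove(index, node);
    }
}
```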
> AffinityIndexManager fails to index documents in async mode
> -----------------------------------------------------------
>
> Key: ISPN-6981
> URL: https://issues.jboss.org/browse/ISPN-6981
> Project: Infinispan
> Issue Type: Bug
> Components: Embedded Querying
> Affects Versions: 9.0.0.Alpha4
> Reporter: Gustavo Fernandes
> Assignee: Gustavo Fernandes
>
> During topology changes in async mode ("<index>.worker.execution" set to "async"), the {{AffinityIndexManager}} sometimes fails to index entries. Some of the errors thrown:
> {noformat}
> 19:06:04,638 ERROR (Hibernate Search: Index updates queue processor for index entity.5-1) [LogErrorHandler] HSEARCH000058: Exception occurred
> org.apache.lucene.store.LockObtainFailedException: lock instance already assigned
> {noformat}
> {noformat}
> 18:53:59,553 ERROR (Hibernate Search: Index updates queue processor for index entity.2-1) [LuceneBackendQueueTask] HSEARCH000073: Error in
> backend
> org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
> at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:720)
> at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:734)
> {noformat}
> {noformat}
> 18:55:31,328 ERROR (Hibernate Search: Commit Scheduler for index entity.143-1) [AffinityErrorHandler] HSEARCH000117: IOException
> on the IndexWriter
> java.io.IOException: This lock is no longer being held
> at org.infinispan.lucene.impl.BaseLuceneLock.ensureValid(BaseLuceneLock.java:89)
> at org.apache.lucene.store.LockValidatingDirectoryWrapper.createOutput(LockValidatingDirectoryWrapper.java:43)
> at org.apache.lucene.store.TrackingDirectoryWrapper.createOutput(TrackingDirectoryWrapper.java:43)
> at org.apache.lucene.codecs.lucene50.Lucene50PostingsWriter.<init>(Lucene50PostingsWriter.java:105)
> {noformat}
[JBoss JIRA] (ISPN-6981) AffinityIndexManager fails to index documents in async mode
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-6981?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes updated ISPN-6981:
------------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/4553
[JBoss JIRA] (ISPN-6981) AffinityIndexManager fails to index documents in async mode
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-6981?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes updated ISPN-6981:
------------------------------------
Summary: AffinityIndexManager fails to index documents in async mode (was: AffinityIndexManager sometimes fails to index document in async mode)
[JBoss JIRA] (ISPN-6919) Improve non-tx writes (triangle)
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-6919?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-6919:
------------------------------------
[~ksobolew] I'm afraid the functional API's commands can't run without exclusive access to the entries they modify, either.
[~pruivo] numOwners == 1 should really be described separately. If we exclude that, the first case becomes "originator != primary"
What happens when the backups install a new topology in which they're no longer owners, or if the backups crash?
Also, why do we need the replication to the backups to be ordered? That won't help if the primary owner crashes, and another node becomes the primary owner.
More generally, we should really have some diagrams to discuss how the triangle is supposed to handle the various corner cases that appear when a node joins/leaves/crashes, or when there's a split+merge. I'm not saying this approach won't work with topology changes, it's just that I have no idea how it's supposed to work.
> Improve non-tx writes (triangle)
> --------------------------------
>
> Key: ISPN-6919
> URL: https://issues.jboss.org/browse/ISPN-6919
> Project: Infinispan
> Issue Type: Feature Request
> Components: Core
> Reporter: Pedro Ruivo
> Assignee: Pedro Ruivo
>
> The current algorithm sends 4 messages over the network (worst case), happening sequentially:
> 1: originator => primary owner
> 2: primary owner => backups
> 3: backups => primary owner (ack)
> 4: primary owner => originator (reply)
> The algorithm can be improved to the following:
> 1: originator => primary owner
> 2: primary owner => backups & primary owner => originator (parallel)
> 3: backups => originator & backups => primary owners (acks)
> The main flow would be: originator => primary => backups => originator (<= there is the triangle :) )
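The hop counts above can be checked with a tiny sketch (illustrative hop names only, not Infinispan APIs):

```java
import java.util.List;

// Minimal comparison of the critical paths (sequential hops) of the two
// flows described above; names are illustrative, not Infinispan classes.
public class TriangleHops {
    // Current algorithm: every message waits for the previous one.
    static final List<String> CURRENT = List.of(
            "originator->primary", "primary->backup",
            "backup->primary (ack)", "primary->originator (reply)");

    // Triangle: the originator hears the ack from the backups directly,
    // so only three hops sit on the critical path.
    static final List<String> TRIANGLE = List.of(
            "originator->primary", "primary->backup",
            "backup->originator (ack)");

    static int criticalPath(List<String> hops) {
        return hops.size(); // each hop is sequential in the worst case
    }

    public static void main(String[] args) {
        System.out.println(criticalPath(CURRENT) + " vs " + criticalPath(TRIANGLE));
        // prints "4 vs 3"
    }
}
```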
[JBoss JIRA] (ISPN-6919) Improve non-tx writes (triangle)
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-6919?page=com.atlassian.jira.plugin.... ]
Dan Berindei edited comment on ISPN-6919 at 9/7/16 10:58 AM:
-------------------------------------------------------------
[~ksobolew] I'm afraid the functional API's commands can't run without exclusive access to the entries they modify, either.
[~pruivo] numOwners == 1 should really be described separately. If we exclude that, the first case becomes "originator != primary"
What happens when the backups install a new topology in which they're no longer owners, or if the backups crash?
Also, why do we need the replication to the backups to be ordered? That won't help if the primary owner crashes, and another node becomes the primary owner.
More generally, we should really have some diagrams to discuss how the triangle is supposed to handle the various corner cases that appear when a node joins/leaves/crashes, or when there's a split+merge. I'm not saying this approach won't work with topology changes, it's just that I have no idea how it's supposed to work.