[JBoss JIRA] (ISPN-8980) High concurrency : Infinispan Directory Provider: Lucene : Error loading metadata for index file
by Debashish Bharali (JIRA)
[ https://issues.jboss.org/browse/ISPN-8980?page=com.atlassian.jira.plugin.... ]
Debashish Bharali commented on ISPN-8980:
-----------------------------------------
[~gustavonalle] We were unable to produce trace logs for Infinispan and Apache Lucene; we were only able to produce a trace for Hibernate Search.
I am currently trying again to generate the Infinispan and Apache Lucene trace logs.
Could you suggest which operations should be performed for the step below:
*Force the index to be read on the second node: do a MatchAllQuery and iterate through all results, or a series of queries that together will match all data in the index.*
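For reference, a minimal sketch of that step using the Hibernate Search 5.7 API, assuming an open Hibernate ORM Session on the joining node and reusing the GlobalCustomer entity from the stack trace below; the paging size is illustrative:
{code:java}
import java.util.List;

import org.apache.lucene.search.MatchAllDocsQuery;
import org.hibernate.Session;
import org.hibernate.search.FullTextQuery;
import org.hibernate.search.FullTextSession;
import org.hibernate.search.Search;

import com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer;

public class ForceIndexRead {

    // Runs a MatchAllDocsQuery against the GlobalCustomer index and pages through
    // every hit, which forces the joining node to read all index segments.
    public static void forceFullIndexRead(Session session) {
        FullTextSession fullTextSession = Search.getFullTextSession(session);
        int pageSize = 500; // illustrative page size
        for (int first = 0; ; first += pageSize) {
            FullTextQuery query = fullTextSession
                    .createFullTextQuery(new MatchAllDocsQuery(), GlobalCustomer.class)
                    .setFirstResult(first)
                    .setMaxResults(pageSize);
            List<?> page = query.list();
            if (page.isEmpty()) {
                break;
            }
        }
    }
}
{code}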
> High concurrency : Infinispan Directory Provider: Lucene : Error loading metadata for index file
> ------------------------------------------------------------------------------------------------
>
> Key: ISPN-8980
> URL: https://issues.jboss.org/browse/ISPN-8980
> Project: Infinispan
> Issue Type: Bug
> Components: Lucene Directory
> Affects Versions: 8.2.5.Final
> Reporter: Debashish Bharali
> Assignee: Gustavo Fernandes
> Priority: Critical
> Attachments: SysOutLogs.txt, neutrino-hibernate-search-worker-jgroups.xml, neutrino-hibernatesearch-infinispan.xml
>
>
> Under high concurrency, we are getting *{color:red}'Error loading metadata for index file'{color}* even in a *{color:red}Non-Clustered{color}* env.
> *Hibernate Search Indexes (Lucene Indexes) - 5.7.0.Final*
> *Infinispan - 8.2.5.Final*
> *infinispan-directory-provider-8.2.5.Final*
> *jgroups-3.6.7.Final*
> *Worker Backend : JGroups*
> *Worker Execution: Sync*
> *write_metadata_async: false (implicitly)*
> *Note:* Currently we are on a non-clustered env. We are moving to a clustered env within a few days.
> On analyzing the code and adding some additional SYSOUT loggers to the FileListOperations and DirectoryImplementor classes, we have established the following points:
> # This is happening during high concurrency on a non-clustered env.
> # One thread, *'T1'*, is deleting a segment and its name *'SEG1'* from the *'FileListCacheKey'* list stored in the *MetadataCache*.
> # Concurrently, another thread, *'T2'*, is looping through a 'copy list' of the FileList for *FileListCacheKey*, obtained from the MetadataCache via the toArray method of *FileListOperations*, while T1 keeps modifying the corresponding original list.
> # *'T2'* calls the openInput method for each segment name, fetching the corresponding metadata segment from the *MetadataCache*.
> # However, for *'T2'*, the *'copy list'* still contains the name of segment *'SEG1'*.
> # So while looping through the list, *'T2'* tries to get the segment for name *'SEG1'* from the MetadataCache.
> # But at this instant, the *segment* corresponding to *'SEG1'* has already been removed from the *MetadataCache* by *'T1'*.
> # This results in *'java.io.FileNotFoundException: Error loading metadata for index file'* for segment name *'SEG1'*.
> # As mentioned earlier, this happens more often during high concurrency (see the minimal sketch at the end of this message).
> *{color:red}On a standalone server (non-clustered), we are getting the below error intermittently:{color}*
> Full Stack trace:
> 2018-03-19 17:29:11,938 ERROR [Hibernate Search sync consumer thread for index com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer] o.h.s.e.i.LogErrorHandler [LogErrorHandler.java:69]
> *{color:red}HSEARCH000058: Exception occurred java.io.FileNotFoundException: Error loading metadata for index file{color}*: M|segments_w6|com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer|-1
> Primary Failure:
> Entity com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer Id 1649990024999813056 Work Type org.hibernate.search.backend.AddLuceneWork
> java.io.FileNotFoundException: Error loading metadata for index file: M|segments_w6|com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer|-1
> at org.infinispan.lucene.impl.DirectoryImplementor.openInput(DirectoryImplementor.java:138) ~[infinispan-lucene-directory-8.2.5.Final.jar:8.2.5.Final]
> at org.infinispan.lucene.impl.DirectoryLucene.openInput(DirectoryLucene.java:102) ~[infinispan-lucene-directory-8.2.5.Final.jar:8.2.5.Final]
> at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:109) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:294) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.IndexFileDeleter.<init>(IndexFileDeleter.java:171) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:949) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:126) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:92) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.AbstractCommitPolicy.getIndexWriter(AbstractCommitPolicy.java:33) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SharedIndexCommitPolicy.getIndexWriter(SharedIndexCommitPolicy.java:77) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SharedIndexWorkspaceImpl.getIndexWriter(SharedIndexWorkspaceImpl.java:36) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriterDelegate(AbstractWorkspaceImpl.java:203) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:81) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:46) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.applyChangesets(SyncWorkProcessor.java:165) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.run(SyncWorkProcessor.java:151) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at java.lang.Thread.run(Thread.java:785) [na:1.8.0-internal]
> *As per our understanding, this issue should not occur in a {color:red}'non-clustered'{color} env. It should also not arise when worker execution is {color:red}'sync'{color}.*
> *We have debugged the code and confirmed that the value of {color:red}'write_metadata_async'{color} is 'false' (as expected).*
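The interleaving described in points 2-8 above can be reproduced in isolation. Below is a minimal, standalone sketch that uses plain Java collections as stand-ins for the Infinispan MetadataCache and the FileListCacheKey file list; names and timings are illustrative, not the actual Infinispan implementation:
{code:java}
import java.io.FileNotFoundException;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class MetadataRaceSketch {

    // Stand-ins for the MetadataCache and the file list held under FileListCacheKey.
    static final Map<String, String> metadataCache = new ConcurrentHashMap<>();
    static final Set<String> fileList = ConcurrentHashMap.newKeySet();

    public static void main(String[] args) throws InterruptedException {
        metadataCache.put("SEG1", "segment metadata");
        fileList.add("SEG1");

        Thread t2 = new Thread(() -> {
            // T2 snapshots the file list (as FileListOperations#toArray would) ...
            Object[] copy = fileList.toArray();
            sleep(50); // ... and T1 deletes SEG1 before T2 reaches it
            for (Object name : copy) {
                if (metadataCache.get(name) == null) {
                    // Corresponds to DirectoryImplementor.openInput failing with
                    // "Error loading metadata for index file: SEG1"
                    System.out.println(new FileNotFoundException(
                            "Error loading metadata for index file: " + name));
                }
            }
        });

        Thread t1 = new Thread(() -> {
            // T1 deletes the segment and removes its name from the file list.
            fileList.remove("SEG1");
            metadataCache.remove("SEG1");
        });

        t2.start();
        sleep(10);
        t1.start();
        t1.join();
        t2.join();
    }

    private static void sleep(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
{code}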
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-7420) Hot Rod enhancements for transcoding
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-7420?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes commented on ISPN-7420:
-----------------------------------------
Some comments:
| Once the first cache write has determined the key+value types, the clients do not need to send them again. If the client sends different types for the same cache, it should result in either the server ignoring it or an error (the former is preferable). To avoid sending unnecessary data, advanced clients could cache the key+value type for a given cache after the first write request and then not send it again.
This would require a sort of Session in the server, keeping state per client, which makes the implementation more complex. Also, "first write" is not as trivial as it seems: writes can come from several sources, such as state transfer, replication, REST, scripts, etc. Coordinating those would also increase complexity, since different clients can compete for the "first write", each with different MediaTypes. I'd rather not go down that route.
Furthermore, I'd prefer to allow users to choose the media type on a per-request basis. Since the MediaType is configured in the server, whatever data format is used by the client can be converted to the storage format.
What I propose is an approach similar to the REST client: the user can optionally configure the MediaType on the server caches, and in every Hot Rod request the client can send the MIME type for keys and values. This would allow, for example, sending JSON and reading back Protostream in two different requests. Since the MIME type is sent with every request, no per-client state is maintained in the server and there is no need to identify the first write. Regarding the wire format, the most common MediaTypes (covering 99% of use cases) can be serialized efficiently by using a fixed-id table.
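To make that concrete, here is a hypothetical client-side sketch of the per-request approach. The DataFormat builder, the withDataFormat method, and the MediaType constants below are illustrative of the proposal only, not an existing Hot Rod client API:
{code:java}
import java.nio.charset.StandardCharsets;

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class PerRequestMediaTypeSketch {

    // Hypothetical API: DataFormat, withDataFormat and the MediaType constants are
    // placeholders sketching the proposal, not real types in the current client.
    public static void example(RemoteCacheManager remoteCacheManager) {
        RemoteCache<String, byte[]> cache = remoteCacheManager.getCache("transcoded");

        // Write JSON: the request carries the key/value MediaTypes so the server can
        // transcode to whatever storage format the cache is configured with.
        RemoteCache<String, byte[]> jsonView = cache.withDataFormat(
                DataFormat.builder()
                        .keyType(MediaType.TEXT_PLAIN)
                        .valueType(MediaType.APPLICATION_JSON)
                        .build());
        jsonView.put("user-1", "{\"name\":\"Alice\"}".getBytes(StandardCharsets.UTF_8));

        // Read the same entry back as Protostream in a separate request by declaring a
        // different value MediaType; no per-client state is kept on the server.
        RemoteCache<String, byte[]> protostreamView = cache.withDataFormat(
                DataFormat.builder()
                        .keyType(MediaType.TEXT_PLAIN)
                        .valueType(MediaType.APPLICATION_PROTOSTREAM)
                        .build());
        byte[] protostreamBytes = protostreamView.get("user-1");
    }
}
{code}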
> Hot Rod enhancements for transcoding
> ------------------------------------
>
> Key: ISPN-7420
> URL: https://issues.jboss.org/browse/ISPN-7420
> Project: Infinispan
> Issue Type: Feature Request
> Reporter: Galder Zamarreño
> Assignee: Gustavo Fernandes
>
> Several enhancements will need to be made to the Hot Rod protocol to work with transcoding:
> h3. Cache Writes (key + value)
> * Cache write operations that include values should have an optional parameter to be able to define the MIME type of the key and value that is being written. When the optional parameter is sent to the server, it will enable the server to implicitly discover what the types of the key+value are.
> * Once the first cache write has determined the key+value types, the clients do not need to send them again. If the client sends different types for the same cache, it should result in either the server ignoring it or an error (the former is preferable).
> * To avoid sending unnecessary data, advanced clients could cache the key+value type for a given cache after the first write request and then not send it again.
> h3. Cache reads
> * Any operation that involves retrieving data should optionally take the type that the value should be transcoded to when returning it back to the client. This enables data to be read in different formats.
> * Within these operations, write operations that return previous values should be included.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8980) High concurrency : Infinispan Directory Provider: Lucene : Error loading metadata for index file
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-8980?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes commented on ISPN-8980:
-----------------------------------------
[~debashish.bharali] I would expect the .dat files to be the same size after a node joins the cluster, since they use REPL caches. I've already asked a couple of times, but to fully understand what is going on, TRACE logs are vital. Can you produce a TRACE log of the first node and the TRACE logs of the joining node?
Also, if what you are saying is true (data not being replicated to cache stores), it should be easy to reproduce the issue without high load, large amounts of data, or waiting for it to happen:
1. Load data on the first node
2. Add a second node to the cluster
3. Force the index to be read on the second node: do a MatchAllQuery and iterate through all results, or a series of queries that together will match all data in the index.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9053) Counter failures in Hot Rod client
by Pedro Ruivo (JIRA)
Pedro Ruivo created ISPN-9053:
---------------------------------
Summary: Counter failures in Hot Rod client
Key: ISPN-9053
URL: https://issues.jboss.org/browse/ISPN-9053
Project: Infinispan
Issue Type: Bug
Reporter: Pedro Ruivo
Assignee: Pedro Ruivo
Since the client was moved to an async approach, the following errors are causing failures in StrongCounterAPI and WeakCounterAPI, mainly related to listeners (a minimal usage sketch follows the failure list below):
* Wrong message received while expecting a CounterEvent
{noformat}
ISPN004043: Unrecoverable error reading event from server 127.0.0.1/127.0.0.1:43903, exiting listener [B0x5C39BFF6B7ED724E..[16]
io.netty.handler.codec.DecoderException: java.lang.AssertionError
...
Caused by: java.lang.AssertionError
at org.infinispan.client.hotrod.impl.protocol.Codec20.readAndValidateHeader(Codec20.java:495) ~[classes/:?]
at org.infinispan.client.hotrod.impl.protocol.Codec20.readCounterEvent(Codec20.java:226) ~[classes/:?]
at org.infinispan.client.hotrod.event.impl.CounterEventDispatcher.readEvent(CounterEventDispatcher.java:36) ~[classes/:?]
at org.infinispan.client.hotrod.event.impl.CounterEventDispatcher.readEvent(CounterEventDispatcher.java:17) ~[classes/:?]
at org.infinispan.client.hotrod.event.impl.EventDispatcher.decode(EventDispatcher.java:44) ~[classes/:?]
at org.infinispan.client.hotrod.impl.transport.netty.HintedReplayingDecoder.callDecode(HintedReplayingDecoder.java:98) ~[classes/:?]
... 19 more
{noformat}
* Unknown disconnects
{noformat}
(HotRod-StrongCounterAPITest-ServerWorker-345-2:[]) [HotRodExceptionHandler] Exception caught
io.netty.channel.unix.Errors$NativeIoException: syscall:read(..) failed: Connection reset by peer
at io.netty.channel.unix.FileDescriptor.readAddress(..)(Unknown Source) ~[netty-transport-native-unix-common-4.1.22.Final.jar:4.1.22.Final]
{noformat}
* Other possible related failures:
{noformat}
Caused by: java.lang.AssertionError
at org.infinispan.client.hotrod.impl.transport.netty.ChannelPool.release(ChannelPool.java:170)
at org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory.releaseChannel(ChannelFactory.java:307)
at org.infinispan.client.hotrod.impl.operations.HotRodOperation.releaseChannel(HotRodOperation.java:92)
at org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation.invoke(RetryOnFailureOperation.java:73)
at org.infinispan.client.hotrod.impl.transport.netty.ChannelPool.activateChannel(ChannelPool.java:217)
at org.infinispan.client.hotrod.impl.transport.netty.ChannelPool.acquire(ChannelPool.java:86)
at org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory.fetchChannelAndInvoke(ChannelFactory.java:257)
at org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory.fetchChannelAndInvoke(ChannelFactory.java:252)
at org.infinispan.client.hotrod.counter.operation.BaseCounterOperation.fetchChannelAndInvoke(BaseCounterOperation.java:82)
at org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation.execute(RetryOnFailureOperation.java:58)
at org.infinispan.client.hotrod.counter.impl.StrongCounterImpl.compareAndSwap(StrongCounterImpl.java:36)
at org.infinispan.counter.api.StrongCounter.compareAndSet(StrongCounter.java:83)
{noformat}
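For context, a minimal sketch of the listener and compare-and-set usage that these tests exercise, assuming a Hot Rod server listening on 127.0.0.1:11222 and a counter name chosen purely for illustration:
{code:java}
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.RemoteCounterManagerFactory;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.counter.api.CounterConfiguration;
import org.infinispan.counter.api.CounterManager;
import org.infinispan.counter.api.CounterType;
import org.infinispan.counter.api.StrongCounter;

public class CounterListenerSketch {

    public static void main(String[] args) throws Exception {
        // Server address is illustrative.
        RemoteCacheManager remoteCacheManager = new RemoteCacheManager(
                new ConfigurationBuilder().addServer().host("127.0.0.1").port(11222).build());
        CounterManager counterManager = RemoteCounterManagerFactory.asCounterManager(remoteCacheManager);

        counterManager.defineCounter("sequence",
                CounterConfiguration.builder(CounterType.UNBOUNDED_STRONG).initialValue(0).build());
        StrongCounter counter = counterManager.getStrongCounter("sequence");

        // Every update is pushed back to the client as a CounterEvent; this is the
        // path where Codec20.readCounterEvent hits the AssertionError above.
        counter.addListener(event ->
                System.out.println("old=" + event.getOldValue() + " new=" + event.getNewValue()));

        counter.incrementAndGet().get();      // async API since the Netty rewrite
        counter.compareAndSet(1, 10).get();   // the call at the bottom of the last stack trace

        remoteCacheManager.stop();
    }
}
{code}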
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9045) Failure to change 'protocolVersion' in RemoteStoreConfiguration
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/ISPN-9045?page=com.atlassian.jira.plugin.... ]
Radoslav Husar updated ISPN-9045:
---------------------------------
Status: Open (was: New)
> Failure to change 'protocolVersion' in RemoteStoreConfiguration
> ---------------------------------------------------------------
>
> Key: ISPN-9045
> URL: https://issues.jboss.org/browse/ISPN-9045
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 9.2.0.Final, 9.2.1.Final
> Reporter: Radoslav Husar
> Assignee: Radoslav Husar
>
> {noformat}
> 14:51:28,266 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-1) MSC000001: Failed to start service org.wildfly.clustering.infinispan.cache.store.web.dist: org.jboss.msc.service.StartException in service org.wildfly.clustering.infinispan.cache.store.web.dist: Failed to start service
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.execute(ServiceControllerImpl.java:1706)
> at org.jboss.msc.service.ServiceControllerImpl$ControllerTask.run(ServiceControllerImpl.java:1540)
> at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
> at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1985)
> at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1487)
> at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1364)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: No enum constant org.infinispan.client.hotrod.ProtocolVersion.2.5
> at java.lang.Enum.valueOf(Enum.java:238)
> at org.infinispan.configuration.parsing.XmlConfigHelper.valueConverter(XmlConfigHelper.java:406)
> at org.infinispan.configuration.parsing.XmlConfigHelper.setAttributes(XmlConfigHelper.java:418)
> at org.infinispan.configuration.cache.AbstractStoreConfigurationBuilder.withProperties(AbstractStoreConfigurationBuilder.java:149)
> at org.jboss.as.clustering.infinispan.subsystem.StoreBuilder.getValue(StoreBuilder.java:103)
> at org.jboss.as.clustering.infinispan.subsystem.StoreBuilder.getValue(StoreBuilder.java:54)
> at org.jboss.msc.service.ValueService.start(ValueService.java:49)
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1714)
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.execute(ServiceControllerImpl.java:1693)
> ... 6 more
> {noformat}
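The root-cause line suggests the configured value "2.5" is passed straight to Enum.valueOf, which matches enum constant names rather than display strings. A self-contained illustration with a toy enum; the constant name below only mimics what org.infinispan.client.hotrod.ProtocolVersion presumably uses and is an assumption:
{code:java}
public class EnumValueOfSketch {

    // Toy stand-in for the real ProtocolVersion enum; the constant name is an assumption.
    enum Version {
        PROTOCOL_VERSION_25("2.5");

        private final String text;

        Version(String text) {
            this.text = text;
        }

        @Override
        public String toString() {
            return text;
        }
    }

    public static void main(String[] args) {
        // Works: valueOf matches the constant name.
        System.out.println(Version.valueOf("PROTOCOL_VERSION_25")); // prints "2.5"

        // Fails like the trace above: "2.5" is the display string, not a constant name.
        System.out.println(Version.valueOf("2.5")); // IllegalArgumentException: No enum constant ...Version.2.5
    }
}
{code}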
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9045) Failure to change 'protocolVersion' in RemoteStoreConfiguration
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/ISPN-9045?page=com.atlassian.jira.plugin.... ]
Work on ISPN-9045 started by Radoslav Husar.
--------------------------------------------
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)