[JBoss JIRA] (ISPN-4605) Race condition during Marshalling of Lucene Directory components would trigger nonsense during unmarshalling
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-4605?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero updated ISPN-4605:
----------------------------------
Summary: Race condition during Marshalling of Lucene Directory components would trigger nonsense during unmarshalling (was: Race condition during Marshalling of Lucene Directory components would trigger nonsense reads)
> Race condition during Marshalling of Lucene Directory components would trigger nonsense during unmarshalling
> ------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-4605
> URL: https://issues.jboss.org/browse/ISPN-4605
> Project: Infinispan
> Issue Type: Bug
> Security Level: Public(Everyone can see)
> Components: Lucene Directory
> Affects Versions: 7.0.0.Alpha5
> Reporter: Sanne Grinovero
> Assignee: Sanne Grinovero
> Fix For: 7.0.0.Beta1
>
>
> Some components in the Lucene Directory are highly concurrent; when these are marshalled, we need to ensure that a consistent snapshot is written to the byte stream.
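The race can be pictured with a minimal sketch. All class and field names below are hypothetical illustrations, not Infinispan API: if the element count and the elements are read from a concurrently-mutated structure at different moments, the unmarshaller can read a count that no longer matches the data that follows. Taking the snapshot under the same lock that guards mutation keeps the written stream self-consistent.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Arrays;

// Hypothetical sketch of a concurrently-mutated component that must be
// marshalled consistently. The snapshot is taken atomically, so the length
// prefix always agrees with the element data written after it.
final class FileListSnapshot {
    private final Object lock = new Object();
    private String[] fileNames = new String[0];

    void add(String name) {
        synchronized (lock) {
            String[] next = Arrays.copyOf(fileNames, fileNames.length + 1);
            next[next.length - 1] = name;
            fileNames = next;
        }
    }

    // Marshalling: capture the snapshot under the lock, then write it out.
    byte[] marshal() {
        String[] snapshot;
        synchronized (lock) {
            snapshot = fileNames; // consistent view: count and contents agree
        }
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeInt(snapshot.length);
            for (String name : snapshot) {
                out.writeUTF(name);
            }
        } catch (IOException e) {
            throw new AssertionError(e); // in-memory stream: cannot actually fail
        }
        return bytes.toByteArray();
    }
}
```

Without the lock around the snapshot, a writer interleaved between `writeInt` and the element loop could produce exactly the kind of stream the unmarshaller reads as nonsense.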
--
This message was sent by Atlassian JIRA
(v6.2.6#6264)
11 years, 8 months
[JBoss JIRA] (ISPN-2958) Lucene Directory Read past EOF
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-2958?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero reassigned ISPN-2958:
-------------------------------------
Assignee: Sanne Grinovero (was: Pedro Ruivo)
> Lucene Directory Read past EOF
> ------------------------------
>
> Key: ISPN-2958
> URL: https://issues.jboss.org/browse/ISPN-2958
> Project: Infinispan
> Issue Type: Bug
> Security Level: Public(Everyone can see)
> Components: Lucene Directory
> Affects Versions: 5.2.1.Final
> Reporter: Clement Pang
> Assignee: Sanne Grinovero
>
> This seems to be happening rather deterministically.
> Infinispan configuration (in JBoss EAP 6.1.0.Alpha):
> {code}
> <cache-container name="lucene">
> <local-cache name="dshell-index-data" start="EAGER">
> <eviction strategy="LIRS" max-entries="50000"/>
> <file-store path="lucene" passivation="true" purge="false"/>
> </local-cache>
> <local-cache name="dshell-index-metadata" start="EAGER">
> <file-store path="lucene" passivation="true" purge="false"/>
> </local-cache>
> <local-cache name="dshell-index-lock" start="EAGER">
> <file-store path="lucene" passivation="true" purge="false"/>
> </local-cache>
> </cache-container>
> {code}
> Upon shutting down the server and confirming that passivation did indeed write the data to disk, the subsequent start-up would fail right away with:
> {code}
> Caused by: org.hibernate.search.SearchException: Could not initialize index
> at org.hibernate.search.store.impl.DirectoryProviderHelper.initializeIndexIfNeeded(DirectoryProviderHelper.java:162)
> at org.hibernate.search.infinispan.impl.InfinispanDirectoryProvider.start(InfinispanDirectoryProvider.java:103)
> at org.hibernate.search.indexes.impl.DirectoryBasedIndexManager.initialize(DirectoryBasedIndexManager.java:104)
> at org.hibernate.search.indexes.impl.IndexManagerHolder.createIndexManager(IndexManagerHolder.java:227)
> ... 64 more
> Caused by: java.io.IOException: Read past EOF
> at org.infinispan.lucene.SingleChunkIndexInput.readByte(SingleChunkIndexInput.java:77)
> at org.apache.lucene.store.ChecksumIndexInput.readByte(ChecksumIndexInput.java:41)
> at org.apache.lucene.store.DataInput.readInt(DataInput.java:86)
> at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:272)
> at org.apache.lucene.index.IndexFileDeleter.<init>(IndexFileDeleter.java:182)
> at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1168)
> at org.hibernate.search.store.impl.DirectoryProviderHelper.initializeIndexIfNeeded(DirectoryProviderHelper.java:157)
> ... 67 more
> {code}
--
[JBoss JIRA] (ISPN-2958) Lucene Directory Read past EOF
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-2958?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero resolved ISPN-2958.
-----------------------------------
Labels: (was: 64QueryBlockers retest stable_embedded_query)
Resolution: Incomplete Description
> Lucene Directory Read past EOF
> ------------------------------
>
> Key: ISPN-2958
> URL: https://issues.jboss.org/browse/ISPN-2958
> Project: Infinispan
> Issue Type: Bug
> Security Level: Public(Everyone can see)
> Components: Lucene Directory
> Affects Versions: 5.2.1.Final
> Reporter: Clement Pang
> Assignee: Sanne Grinovero
>
--
[JBoss JIRA] (ISPN-2958) Lucene Directory Read past EOF
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-2958?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero updated ISPN-2958:
----------------------------------
Fix Version/s: (was: 7.0.0.Beta1)
> Lucene Directory Read past EOF
> ------------------------------
>
> Key: ISPN-2958
> URL: https://issues.jboss.org/browse/ISPN-2958
> Project: Infinispan
> Issue Type: Bug
> Security Level: Public(Everyone can see)
> Components: Lucene Directory
> Affects Versions: 5.2.1.Final
> Reporter: Clement Pang
> Assignee: Pedro Ruivo
>
--
[JBoss JIRA] (ISPN-1568) Clustered Query fail when hibernate search not fully initialized
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-1568?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero updated ISPN-1568:
----------------------------------
Assignee: Gustavo Fernandes (was: Adrian Nistor)
> Clustered Query fail when hibernate search not fully initialized
> ----------------------------------------------------------------
>
> Key: ISPN-1568
> URL: https://issues.jboss.org/browse/ISPN-1568
> Project: Infinispan
> Issue Type: Bug
> Security Level: Public(Everyone can see)
> Components: Core, Embedded Querying
> Affects Versions: 5.1.0.BETA5
> Reporter: Mathieu Lachance
> Assignee: Gustavo Fernandes
>
> Hi,
> I'm running into this issue when doing a clustered query in distribution mode:
> org.infinispan.CacheException: org.hibernate.search.SearchException: Not a mapped entity (don't forget to add @Indexed): class com.XXX.Client
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:166)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:181)
> at org.infinispan.query.clustered.ClusteredQueryInvoker.broadcast(ClusteredQueryInvoker.java:113)
> at org.infinispan.query.clustered.ClusteredCacheQueryImpl.broadcastQuery(ClusteredCacheQueryImpl.java:115)
> at org.infinispan.query.clustered.ClusteredCacheQueryImpl.iterator(ClusteredCacheQueryImpl.java:90)
> at org.infinispan.query.impl.CacheQueryImpl.iterator(CacheQueryImpl.java:129)
> at org.infinispan.query.clustered.ClusteredCacheQueryImpl.list(ClusteredCacheQueryImpl.java:133)
> at com.XXX.DistributedCache.cacheQueryList(DistributedCache.java:313)
> at com.XXX.DistributedCache.cacheQueryList(DistributedCache.java:274)
> at com.XXX.ClientCache.getClientsByServerId(ClientCache.java:127)
> at com.XXX.ClientManager.getClientsByServerId(ClientManager.java:157)
> at com.XXX$PingClient.run(PlayerBll.java:890)
> at java.util.TimerThread.mainLoop(Timer.java:512)
> at java.util.TimerThread.run(Timer.java:462)
> Caused by: org.hibernate.search.SearchException: Not a mapped entity (don't forget to add @Indexed): class com.XXX.Client
> at org.hibernate.search.query.engine.impl.HSQueryImpl.buildSearcher(HSQueryImpl.java:549)
> at org.hibernate.search.query.engine.impl.HSQueryImpl.buildSearcher(HSQueryImpl.java:493)
> at org.hibernate.search.query.engine.impl.HSQueryImpl.queryDocumentExtractor(HSQueryImpl.java:292)
> at org.infinispan.query.clustered.commandworkers.CQCreateEagerQuery.perform(CQCreateEagerQuery.java:44)
> at org.infinispan.query.clustered.ClusteredQueryCommand.perform(ClusteredQueryCommand.java:135)
> at org.infinispan.query.clustered.ClusteredQueryCommand.perform(ClusteredQueryCommand.java:129)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:170)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithWaitForBlocks(InboundInvocationHandlerImpl.java:179)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithRetry(InboundInvocationHandlerImpl.java:208)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:156)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:162)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:141)
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:447)
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:354)
> at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:230)
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:556)
> at org.jgroups.JChannel.up(JChannel.java:716)
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026)
> at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
> at org.jgroups.protocols.pbcast.GMS.up(GMS.java:881)
> at org.jgroups.protocols.pbcast.StreamingStateTransfer.up(StreamingStateTransfer.java:262)
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244)
> at org.jgroups.protocols.UNICAST.up(UNICAST.java:332)
> at org.jgroups.protocols.pbcast.NAKACK.handleMessage(NAKACK.java:700)
> at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:561)
> at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:140)
> at org.jgroups.protocols.FD.up(FD.java:273)
> at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:284)
> at org.jgroups.protocols.MERGE2.up(MERGE2.java:205)
> at org.jgroups.protocols.Discovery.up(Discovery.java:354)
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1174)
> at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1709)
> at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1691)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Using the following cache configuration:
> <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
> xsi:schemaLocation="urn:infinispan:config:5.1 http://www.infinispan.org/schemas/infinispan-config-5.1.xsd"
> xmlns="urn:infinispan:config:5.1">
> <global>
> <transport clusterName="XXX-cluster" machineId="XXX" siteId="XXX" rackId="XXX" distributedSyncTimeout="15000">
> <properties>
> <property name="configurationFile" value="jgroups-jdbc-ping.xml" />
> </properties>
> </transport>
> </global>
> <default>
> <transaction
> cacheStopTimeout="30000"
> transactionManagerLookupClass="org.infinispan.transaction.lookup.DummyTransactionManagerLookup"
> lockingMode="PESSIMISTIC"
> useSynchronization="true"
> transactionMode="TRANSACTIONAL"
> syncCommitPhase="true"
> syncRollbackPhase="false"
> >
> <recovery enabled="false" />
> </transaction>
> <clustering mode="local" />
> <indexing enabled="true" indexLocalOnly="true">
> <properties>
> <property name="hibernate.search.default.directory_provider" value="ram" />
> </properties>
> </indexing>
> </default>
> <namedCache name="XXX-Client">
> <transaction
> cacheStopTimeout="30000"
> transactionManagerLookupClass="org.infinispan.transaction.lookup.DummyTransactionManagerLookup"
> lockingMode="PESSIMISTIC"
> useSynchronization="true"
> transactionMode="TRANSACTIONAL"
> syncCommitPhase="true"
> syncRollbackPhase="false"
> >
> <recovery enabled="false" />
> </transaction>
> <invocationBatching enabled="false" />
> <loaders passivation="false" />
> <clustering mode="distribution" >
> <sync replTimeout="15000" />
> <stateRetrieval
> timeout="240000"
> retryWaitTimeIncreaseFactor="2"
> numRetries="5"
> maxNonProgressingLogWrites="100"
>
> fetchInMemoryState="false"
> logFlushTimeout="60000"
> alwaysProvideInMemoryState="false"
> />
> </clustering>
> <storeAsBinary enabled="false" storeValuesAsBinary="true" storeKeysAsBinary="true" />
> <deadlockDetection enabled="true" spinDuration="100" />
> <eviction strategy="NONE" threadPolicy="PIGGYBACK" maxEntries="-1" />
> <jmxStatistics enabled="true" />
> <locking writeSkewCheck="false" lockAcquisitionTimeout="10000" isolationLevel="READ_COMMITTED" useLockStriping="false" concurrencyLevel="32" />
> <expiration wakeUpInterval="60000" lifespan="-1" maxIdle="3000000" />
> </namedCache>
> </infinispan>
> and the following JGroups configuration:
> <config xmlns="urn:org:jgroups"
> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
> xsi:schemaLocation="urn:org:jgroups file:schema/JGroups-3.0.xsd">
> <TCP
> bind_port="7800"
> loopback="true"
> port_range="30"
> recv_buf_size="20000000"
> send_buf_size="640000"
> discard_incompatible_packets="true"
> max_bundle_size="64000"
> max_bundle_timeout="30"
> enable_bundling="true"
> use_send_queues="true"
> sock_conn_timeout="300"
> enable_diagnostics="false"
> thread_pool.enabled="true"
> thread_pool.min_threads="2"
> thread_pool.max_threads="30"
> thread_pool.keep_alive_time="5000"
> thread_pool.queue_enabled="false"
> thread_pool.queue_max_size="100"
> thread_pool.rejection_policy="Discard"
> oob_thread_pool.enabled="true"
> oob_thread_pool.min_threads="2"
> oob_thread_pool.max_threads="30"
> oob_thread_pool.keep_alive_time="5000"
> oob_thread_pool.queue_enabled="false"
> oob_thread_pool.queue_max_size="100"
> oob_thread_pool.rejection_policy="Discard"
> />
> <JDBC_PING
> connection_url="jdbc:jtds:sqlserver://XXX;databaseName=XXX"
> connection_username="XXX"
> connection_password="XXX"
> connection_driver="net.sourceforge.jtds.jdbcx.JtdsDataSource"
> initialize_sql=""
> />
> <MERGE2 max_interval="30000"
> min_interval="10000"/>
> <FD_SOCK/>
> <FD timeout="3000" max_tries="3"/>
> <VERIFY_SUSPECT timeout="1500"/>
> <pbcast.NAKACK
> use_mcast_xmit="false"
> retransmit_timeout="300,600,1200,2400,4800"
> discard_delivered_msgs="false"/>
> <UNICAST timeout="300,600,1200"/>
> <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
> max_bytes="400000"/>
> <pbcast.STATE />
> <pbcast.GMS print_local_addr="false" join_timeout="7000" view_bundling="true"/>
> <UFC max_credits="2000000" min_threshold="0.10"/>
> <MFC max_credits="2000000" min_threshold="0.10"/>
> <FRAG2 frag_size="60000"/>
> </config>
> Though my entity is well annotated.
> Here are the steps to reproduce:
> 1. Boot node A completely.
> 2. Boot node B, make all caches start (DefaultCacheManager::startCaches(...)), then breakpoint just after.
> 3. On node A, do a clustered query.
> 4. Node A fails because node B has not been fully initialized.
> Here's how I do my query:
> {code}
> private CacheQuery getClusteredNonClusteredQuery(Query query)
> {
>     CacheQuery cacheQuery;
>     if (useClusteredQuery)
>     {
>         cacheQuery = searchManager.getClusteredQuery(query, cacheValueClass);
>     }
>     else
>     {
>         cacheQuery = searchManager.getQuery(query, cacheValueClass);
>     }
>     return cacheQuery;
> }
> {code}
> I've also tried without supplying any "cacheValueClass", without success.
> One ugly "workaround" I've found is to force, as early as possible in the application, the local insertion and removal of a dummy key and value, so as to force initialization of the search manager:
> {code}
> cache.getAdvancedCache().withFlags(Flag.CACHE_MODE_LOCAL).put("XXX", new Client("XXX"));
> cache.getAdvancedCache().withFlags(Flag.CACHE_MODE_LOCAL).remove("XXX");
> {code}
> Though this technique still won't guarantee that no clustered query happens before then.
> I think this might also be related to ISPN-627 (Provision to get Cache from CacheManager).
> Any idea or workaround? Do you think just adding a try/catch and returning an empty list could "fix" the problem?
> EDIT: I've added a try/catch inside org.infinispan.query.clustered.commandworkers.CQCreateEagerQuery::perform():
> {code}
> @Override
> public QueryResponse perform() {
>     query.afterDeserialise((SearchFactoryImplementor) getSearchFactory());
>     try
>     {
>         DocumentExtractor extractor = query.queryDocumentExtractor();
>         int resultSize = query.queryResultSize();
>
>         ISPNEagerTopDocs eagerTopDocs = collectKeys(extractor);
>
>         QueryResponse queryResponse = new QueryResponse(eagerTopDocs,
>                 getQueryBox().getMyId(), resultSize);
>         queryResponse.setAddress(cache.getAdvancedCache().getRpcManager()
>                 .getAddress());
>         return queryResponse;
>     }
>     catch (SearchException e)
>     {
>         QueryResponse queryResponse = new QueryResponse(new ISPNEagerTopDocs(), getQueryBox().getMyId(), 0);
>         queryResponse.setAddress(cache.getAdvancedCache().getRpcManager().getAddress());
>         return queryResponse;
>     }
> }
> {code}
> and added a default constructor to org.infinispan.query.clustered.ISPNEagerTopDocs:
> {code}
> public ISPNEagerTopDocs()
> {
>     super(0, new ScoreDoc[0], 0);
>     this.keys = new Object[0];
> }
> {code}
> Swallowing the exception appears to be a "good" workaround.
> Thanks a lot,
--
[JBoss JIRA] (ISPN-3835) Index Update command is processed before the registry listener is triggered
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-3835?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero updated ISPN-3835:
----------------------------------
Assignee: Gustavo Fernandes (was: Sanne Grinovero)
> Index Update command is processed before the registry listener is triggered
> ---------------------------------------------------------------------------
>
> Key: ISPN-3835
> URL: https://issues.jboss.org/browse/ISPN-3835
> Project: Infinispan
> Issue Type: Bug
> Security Level: Public(Everyone can see)
> Components: Embedded Querying
> Affects Versions: 6.0.0.Final
> Reporter: Sanne Grinovero
> Assignee: Gustavo Fernandes
> Priority: Critical
> Labels: 64QueryBlockers
> Fix For: 7.0.0.Beta1
>
>
> When using the InfinispanIndexManager backend the master node might receive an index update command about an index which it hasn't defined yet.
> Index definitions are triggered by the type registry, which in turn is driven by the ClusterRegistry and an event listener on the ClusterRegistry. It looks like slaves are sending update requests before the master has processed the configuration event.
> This leads to index update commands being lost (with a stack trace logged).
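One way to picture a fix for this ordering problem is to buffer update commands for not-yet-defined indexes until the registry listener fires, instead of dropping them. This is only an illustrative sketch under that assumption; all names below are invented, and this is not the actual Infinispan change.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

// Hypothetical master-side router: updates for indexes the master has not
// yet defined are queued, and replayed once the registry listener reports
// the index definition as processed.
final class DeferringIndexRouter {
    private final Set<String> knownIndexes = new HashSet<>();
    private final Map<String, Queue<byte[]>> pending = new HashMap<>();
    private final Map<String, Integer> applied = new HashMap<>();

    // Called when an index update command arrives from a slave.
    synchronized void onUpdateCommand(String index, byte[] payload) {
        if (knownIndexes.contains(index)) {
            apply(index, payload);
        } else {
            pending.computeIfAbsent(index, k -> new ArrayDeque<>()).add(payload);
        }
    }

    // Called by the registry listener once the index definition is processed.
    synchronized void onIndexDefined(String index) {
        knownIndexes.add(index);
        Queue<byte[]> queue = pending.remove(index);
        if (queue != null) {
            for (byte[] payload : queue) {
                apply(index, payload);
            }
        }
    }

    private void apply(String index, byte[] payload) {
        applied.merge(index, 1, Integer::sum); // stand-in for the real index write
    }

    synchronized int appliedCount(String index) {
        return applied.getOrDefault(index, 0);
    }
}
```

The point of the sketch is only that the early command survives the race: it is applied late rather than lost with a stack trace.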
--
[JBoss JIRA] (ISPN-4602) Verify EntryIterator works with MarshalledValues
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-4602?page=com.atlassian.jira.plugin.... ]
William Burns updated ISPN-4602:
--------------------------------
Fix Version/s: 7.0.0.Beta1
> Verify EntryIterator works with MarshalledValues
> ------------------------------------------------
>
> Key: ISPN-4602
> URL: https://issues.jboss.org/browse/ISPN-4602
> Project: Infinispan
> Issue Type: Bug
> Security Level: Public(Everyone can see)
> Components: Marshalling
> Affects Versions: 7.0.0.Alpha5
> Reporter: William Burns
> Assignee: William Burns
> Fix For: 7.0.0.Beta1
>
>
> The EntryIterator currently doesn't deserialize MarshalledValues as needed, which causes filter failures and incorrect values to be returned.
> This also means each key/value pair would need to be deserialized when the filter is applied, which will be slower and should be noted in the documentation; but should the results still be sent across as MarshalledValues? The only other alternative is to use some sort of proxy for each object to force lazy deserialization when the filter references a field, but this seems like overkill.
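The first point can be sketched generically. Here the wrapper type `W` stands in for a MarshalledValue and `unwrap` for its deserialization; none of these names are the real Infinispan API. The iterator must unwrap each stored entry back into the user's object before the user-supplied filter sees it, otherwise the filter tests the wrapper instead of the value.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

// Hypothetical illustration of the deserialize-before-filter order:
// each wrapper is unwrapped (deserialized) first, and only then handed
// to the filter, at the cost of one deserialization per entry.
final class UnwrappingIterator {
    static <W, V> List<V> filterUnwrapped(List<W> stored,
                                          Function<W, V> unwrap,
                                          Predicate<V> filter) {
        List<V> out = new ArrayList<>();
        for (W wrapper : stored) {
            V value = unwrap.apply(wrapper); // per-entry deserialization: slower, but correct
            if (filter.test(value)) {
                out.add(value);
            }
        }
        return out;
    }
}
```

With strings standing in for serialized bytes, `filterUnwrapped(List.of("1", "2", "3"), Integer::parseInt, v -> v > 1)` yields the unwrapped values `[2, 3]`; filtering on the wrappers themselves would compare the wrong objects.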
--