[JBoss JIRA] (ISPN-5370) Make clear() non-transactional and lock free
by Pedro Ruivo (JIRA)
Pedro Ruivo created ISPN-5370:
---------------------------------
Summary: Make clear() non-transactional and lock free
Key: ISPN-5370
URL: https://issues.jboss.org/browse/ISPN-5370
Project: Infinispan
Issue Type: Enhancement
Components: Core
Reporter: Pedro Ruivo
Assignee: Pedro Ruivo
New semantics for clear():
* assumes no concurrent operations while the command is in progress
* lock free (avoids any issues with stop-the-world pauses or deadlocks)
* non-transactional (it does not interact with the current transaction or with other running transactions)
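As an illustration, the intended semantics can be sketched with a toy container (plain Java, not the Infinispan implementation): clear() swaps the backing map in a single atomic step instead of acquiring per-key locks or joining a transaction.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicReference;

// Toy sketch of a lock-free, non-transactional clear(): the whole backing
// container is replaced in one atomic reference swap, so no per-key locks
// are taken and no transaction context is consulted.
public class LockFreeClearContainer {
    private final AtomicReference<ConcurrentHashMap<String, String>> data =
            new AtomicReference<>(new ConcurrentHashMap<>());

    public void put(String key, String value) {
        data.get().put(key, value);
    }

    public String get(String key) {
        return data.get().get(key);
    }

    // Concurrent operations racing with clear() may land in either the old
    // or the new container - matching the "no concurrent operation" assumption.
    public void clear() {
        data.set(new ConcurrentHashMap<>());
    }
}
```

Note the trade-off the issue states explicitly: correctness under concurrent writes is not guaranteed, which is the price for avoiding locks and transactional bookkeeping.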
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (ISPN-5174) Transaction cannot be recommitted after ownership changes
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-5174?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-5174:
-----------------------------------------------
Sebastian Łaskawiec <slaskawi(a)redhat.com> changed the Status of [bug 1207080|https://bugzilla.redhat.com/show_bug.cgi?id=1207080] from POST to CLOSED
> Transaction cannot be recommitted after ownership changes
> ---------------------------------------------------------
>
> Key: ISPN-5174
> URL: https://issues.jboss.org/browse/ISPN-5174
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 7.1.0.CR2, 7.1.1.Final
> Reporter: Radim Vansa
> Assignee: Dan Berindei
> Priority: Critical
> Fix For: 7.2.0.Beta2, 7.2.0.Final
>
>
> Once a transaction is completed, it cannot be committed again. If it should commit more keys because the node has become an owner of new keys modified in this transaction, the further commit is simply ignored.
> There is a race with state transfer, which can bring in an old value (via a StateResponseCommand sent before the entry is committed), while the value is not set by the ongoing transaction either.
> This results in a stale value stored on one node.
> In my case, the problematic part is transaction <edg-perf01-62141>:15066 (consisting of 10 modifications), which was prepared and committed on edg-perf04 in topology 25. Before the originator finishes, the topology changes and edg-perf04 requests the ongoing transactions:
> {code}
> 11:06:11,369 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (transport-thread-17) Replication task sending StateRequestCommand{cache=testCache, origin=edg-perf04-35097, type=GET_TRANSACTIONS, topologyId=28, segments=[275, 1, 278, 9, 282, 286, 17, 259, 25, 267, 171, 169, 33, 306, 175, 173, 310, 172, 314, 41, 167, 165, 318, 187, 290, 49, 185, 191, 294, 189, 179, 298, 57, 177, 183, 302, 181, 343, 205, 201, 338, 203, 336, 351, 197, 349, 199, 347, 193, 345, 195, 326, 85, 87, 322, 93, 332, 95, 330, 89, 91, 103, 101, 99, 506, 97, 105, 357, 359, 353, 355, 361]} to single recipient edg-perf01-62141 with response mode GET_ALL
> 11:06:11,495 DEBUG [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread-17) Applying 6 transactions for cache testCache transferred from node edg-perf01-62141
> {code}
> However, I don't see how these transactions are applied, since the PrepareCommand is not created again - from the code I only see that backup locks are added. I am not sure the transaction is registered at all, since it was already completed on this node (although at that time the node did not own key_00000000000002EB).
> After the originator stores the entry, it sends one more CommitCommand with topology 28:
> {code}
> 11:06:11,619 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (DefaultStressor-2) Replication task sending CommitCommand {gtx=GlobalTransaction:<edg-perf01-62141>:15066:local, cacheName='testCache', topologyId=28} to addresses [edg-perf03-20530, edg-perf04-35097] with response mode GET_ALL
> {code}
> edg-perf04 receives several CommitCommands (both from the originator and forwarded ones), but all of them are ignored because the transaction is completed.
> I don't see the logs where the state transfer chunk is assembled, but it is probably assembled before the entry is stored on the originator, since the state transfer contains the old entry:
> {code}
> 11:06:13,449 TRACE [org.infinispan.statetransfer.StateConsumerImpl] (remote-thread-91) Received chunk with keys [key_000000000000065B, key_00000000000006BE, key_FFFFFFFFFFFFE62F, key_0000000000001F42, key_000000000000027B, key_000000000000159D, key_00000000000002EB, key_00000000000002BB] for segment 343 of cache testCache from node edg-perf01-62141
> 11:06:13,454 TRACE [org.infinispan.container.DefaultDataContainer] (remote-thread-91) Store ImmortalCacheEntry{key=key_00000000000002EB, value=[2 #7: 366, 544, 576, 804, 1061, 1181, 1290, ]} in container
> {code}
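The ignored-commit behaviour described above can be modelled with a minimal toy sketch (class and method names are hypothetical, not Infinispan code): once a transaction id lands in the completed set, a later commit carrying a newly-owned key is dropped, leaving that key with its stale state-transferred value.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of the bug: a node that already marked a transaction as
// completed silently ignores a second CommitCommand for the same
// transaction, even when that commit now covers newly-owned keys.
public class CommitTracker {
    private final Set<String> completed = new HashSet<>();
    private final Map<String, String> store = new HashMap<>();

    public void commit(String txId, Map<String, String> writes) {
        if (completed.contains(txId)) {
            return; // re-commit after ownership change is dropped here
        }
        store.putAll(writes);
        completed.add(txId);
    }

    public String get(String key) {
        return store.get(key);
    }
}
```

In this model, the second commit's keys never reach the store, mirroring how key_00000000000002EB keeps the old value delivered by state transfer.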
[JBoss JIRA] (ISPN-5368) Out of order events produced when using the MassIndexer with async backend
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-5368?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes updated ISPN-5368:
------------------------------------
Description:
When using an async indexing backend on DIST caches with a shared index (InfinispanIndexManager), the MassIndexer fails to re-index all the entries if it is run from a node that is not the indexing master.
Normally the operation sequence of the MassIndexer in the above configuration, for a two node cluster is:
* Purge the index
* Send index job to node A and to node B
* Flush
Given the backend is async, all index commands are sent to the master asynchronously over RPC, so a reorder can occur and produce, for example:
* Send index job to node A
* Purge
* Send index job to node B
* Flush
Causing previously re-indexed entries to be wiped
was:
When using async indexing backend on DIST caches with shared index (InfinispanIndexManager), the MassIndexer fails intermittently to re-index all the entries.
Normally the operation sequence of the MassIndexer in the above configuration, for a two node cluster is:
* Purge the index
* Send index job to node A and to node B
* Flush
Given the backend is async, all commands are sent asynchronously over RPC, so a reorder can occur and produce, for example:
* Send index job to node A
* Purge
* Send index job to node B
* Flush
Causing previously re-indexed entries to be wiped
> Out of order events produced when using the MassIndexer with async backend
> --------------------------------------------------------------------------
>
> Key: ISPN-5368
> URL: https://issues.jboss.org/browse/ISPN-5368
> Project: Infinispan
> Issue Type: Bug
> Reporter: Gustavo Fernandes
> Assignee: Gustavo Fernandes
>
> When using an async indexing backend on DIST caches with a shared index (InfinispanIndexManager), the MassIndexer fails to re-index all the entries if it is run from a node that is not the indexing master.
> Normally the operation sequence of the MassIndexer in the above configuration, for a two node cluster is:
> * Purge the index
> * Send index job to node A and to node B
> * Flush
> Given the backend is async, all index commands are sent to the master asynchronously over RPC, so a reorder can occur and produce, for example:
> * Send index job to node A
> * Purge
> * Send index job to node B
> * Flush
> Causing previously re-indexed entries to be wiped
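The reordered sequence from the description can be modelled with a small sketch (a toy model, not the indexing backend): when the index job from node A arrives before the purge, A's entries are wiped from the final index while B's survive.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy model of the out-of-order delivery: the four operations arrive at
// the indexing master as (index A, purge, index B, flush) instead of
// (purge, index A, index B, flush).
public class ReorderDemo {
    public static Set<String> applyReordered(List<String> entriesA, List<String> entriesB) {
        Set<String> index = new HashSet<>();
        index.addAll(entriesA); // index job from node A (delivered early)
        index.clear();          // purge (delivered late) wipes A's work
        index.addAll(entriesB); // index job from node B
        return index;           // flush: only B's entries remain
    }
}
```

The model makes the symptom concrete: the resulting index is missing exactly the entries indexed before the late-arriving purge.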
[JBoss JIRA] (ISPN-5009) Upgrade server base to WildFly 9.0.0.Beta2
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-5009?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-5009:
-----------------------------------
Summary: Upgrade server base to WildFly 9.0.0.Beta2 (was: Upgrade server base to WildFly 9.0.0.Beta1)
Description: Originally the plan was to upgrade to WildFly 8.2.0.Final, but some issues were found. The upgrade is still needed, but now targets 9.0.0.Beta2, which solves those issues. (was: Originally the plan was to upgrade to WildFly 8.2.0.Final, but some issues were found. The upgrade is still needed, but now targets 9.0.0.Beta1, which solves those issues.)
> Upgrade server base to WildFly 9.0.0.Beta2
> ------------------------------------------
>
> Key: ISPN-5009
> URL: https://issues.jboss.org/browse/ISPN-5009
> Project: Infinispan
> Issue Type: Component Upgrade
> Components: Build process, Server
> Affects Versions: 7.0.2.Final
> Reporter: Tristan Tarrant
> Assignee: Galder Zamarreño
> Fix For: 7.2.0.CR1, 7.2.0.Final
>
>
> Originally the plan was to upgrade to WildFly 8.2.0.Final, but some issues were found. The upgrade is still needed, but now targets 9.0.0.Beta2, which solves those issues.
[JBoss JIRA] (ISPN-5009) Upgrade server base to WildFly 9.0.0.Beta1
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-5009?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-5009:
-----------------------------------
Summary: Upgrade server base to WildFly 9.0.0.Beta1 (was: Upgrade server base to WildFly 8.2)
Fix Version/s: 7.2.0.Final
Description: Originally the plan was to upgrade to WildFly 8.2.0.Final, but some issues were found. The upgrade is still needed, but now targets 9.0.0.Beta1, which solves those issues.
> Upgrade server base to WildFly 9.0.0.Beta1
> ------------------------------------------
>
> Key: ISPN-5009
> URL: https://issues.jboss.org/browse/ISPN-5009
> Project: Infinispan
> Issue Type: Component Upgrade
> Components: Build process, Server
> Affects Versions: 7.0.2.Final
> Reporter: Tristan Tarrant
> Assignee: Galder Zamarreño
> Fix For: 7.2.0.CR1, 7.2.0.Final
>
>
> Originally the plan was to upgrade to WildFly 8.2.0.Final, but some issues were found. The upgrade is still needed, but now targets 9.0.0.Beta1, which solves those issues.
[JBoss JIRA] (ISPN-5131) Deploy custom cache store to Infinispan Server
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-5131?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-5131:
-----------------------------------------------
Tomas Sykora <tsykora(a)redhat.com> changed the Status of [bug 1186857|https://bugzilla.redhat.com/show_bug.cgi?id=1186857] from ON_QA to VERIFIED
> Deploy custom cache store to Infinispan Server
> ----------------------------------------------
>
> Key: ISPN-5131
> URL: https://issues.jboss.org/browse/ISPN-5131
> Project: Infinispan
> Issue Type: Feature Request
> Components: Loaders and Stores, Server
> Reporter: Tristan Tarrant
> Assignee: Sebastian Łaskawiec
> Fix For: 7.2.0.Final
>
>
> h2. Overview
> Support the deployment and configuration of a custom cache store.
> h2. Client Perspective
> In order to create a deployable cache store, the client will have to implement {{AdvancedLoadWriteStoreFactory}} (which will contain factory methods for creating {{AdvancedLoadWriteStore}} instances). Next, the factory will have to be annotated with {{@NamedFactory}} and packaged into a jar together with a proper {{META-INF/services/org.infinispan.persistence.AdvancedLoadWriteStoreFactory}} entry. The last step is to deploy the jar into the Hot Rod server.
> h2. Implementation overview
> The implementation is based on Deployable Filters and Converters.
> Currently all writers and loaders are instantiated in {{PersistenceManagerImpl#createLoadersAndWriters}}. This implementation will be modified to use a {{CacheStoreFactoryRegistry}}, which will contain a list of {{CacheStoreFactories}}. One of the factories will be added by default - the local one (which will use the same mechanism as we do now - {{Util.getInstance(classAnnotation)}}). Other {{CacheStoreFactories}} will be added after deployment scanning.
> h2. Implementation doubts and questions:
> * Should we expose a factory only for {{AdvancedLoadWriteStore}}, or should we also include {{ExternalStore}} (or even separate factories for {{CacheLoader}} and {{CacheWriter}})?
> ** YES, we should expose all of them.
> * How to ensure that deployment scanning has finished before instantiating {{AdvancedLoadWriteStore}}?
> ** Using {{org.infinispan.server.endpoint.subsystem.EndpointSubsystemAdd}}