[JBoss JIRA] (ISPN-1740) Refactor internal classes and SPIs to use new Configuration beans
by Manik Surtani (JIRA)
Manik Surtani created ISPN-1740:
-----------------------------------
Summary: Refactor internal classes and SPIs to use new Configuration beans
Key: ISPN-1740
URL: https://issues.jboss.org/browse/ISPN-1740
Project: Infinispan
Issue Type: Enhancement
Components: Configuration
Affects Versions: 5.1.0.FINAL
Reporter: Manik Surtani
Assignee: Vladimir Blagojevic
Fix For: 5.2.0.FINAL
The current programmatic configuration makes use of the old 5.0.x config beans internally (via injection), as well as in unit tests and in SPIs (CacheStore, CommandInterceptor, etc.).
We need to refactor these SPIs and internal code to use the new post-5.1 config beans.
However, the public API (DefaultCacheManager) should still support the old Configuration beans. To do this, we'd need to write something like the reverse of the LegacyConfigurationAdapter: the LegacyConfigurationAdapter takes a 5.1 Configuration and creates a 5.0 Configuration, and we need the opposite direction once the internals start using the new 5.1 Configuration.
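A minimal sketch of what such a reverse adapter could look like. The classes below are stand-ins invented for illustration (the real 5.0 and 5.1 Infinispan beans have many more properties); only the adapter pattern itself is the point:

```java
// Minimal stand-in for the old 5.0-style JavaBean config (illustrative only).
class LegacyConfiguration {
    private long lockAcquisitionTimeout = 10_000;
    public long getLockAcquisitionTimeout() { return lockAcquisitionTimeout; }
    public void setLockAcquisitionTimeout(long t) { lockAcquisitionTimeout = t; }
}

// Minimal stand-in for the new immutable 5.1-style config bean.
class Configuration {
    final long lockAcquisitionTimeout;
    Configuration(long lockAcquisitionTimeout) { this.lockAcquisitionTimeout = lockAcquisitionTimeout; }
}

// Minimal stand-in for the new fluent builder.
class ConfigurationBuilder {
    private long lockAcquisitionTimeout = 10_000;
    ConfigurationBuilder lockAcquisitionTimeout(long t) { this.lockAcquisitionTimeout = t; return this; }
    Configuration build() { return new Configuration(lockAcquisitionTimeout); }
}

// The "reverse" of LegacyConfigurationAdapter: old bean in, new bean out.
class ReverseLegacyConfigurationAdapter {
    static Configuration adapt(LegacyConfiguration legacy) {
        return new ConfigurationBuilder()
                .lockAcquisitionTimeout(legacy.getLockAcquisitionTimeout())
                .build();
    }
}
```

With an adapter like this in place, DefaultCacheManager can keep accepting the old bean and simply convert it at the boundary before handing it to the internals.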
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.jboss.org/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] Created: (ISPN-999) Support eventual consistency
by Manik Surtani (JIRA)
Support eventual consistency
----------------------------
Key: ISPN-999
URL: https://issues.jboss.org/browse/ISPN-999
Project: Infinispan
Issue Type: Feature Request
Components: Distributed Cache, Locking and Concurrency
Reporter: Manik Surtani
Assignee: Manik Surtani
Fix For: 5.1.0.BETA1, 5.1.0.Final
Essentially, it is about supporting eventual consistency in Infinispan. Currently Infinispan is strongly consistent when using synchronous distribution mode: each data owner receives updates synchronously, so anyone anywhere on the cluster doing a GET will see the correct value. The only exception is during a rehash (when a node joins or leaves), where consistency is eventual, since the GET may reach a new joiner that has not yet applied the state it receives from its neighbours. However, this is hidden from users: the GET is sent to >1 data owner, and if an UnsureResponse is received (a new joiner responds this way when it hasn't finished applying state), the caller thread waits for more definite responses.
However, there is a use case for being eventually consistent as well, the main benefits being speed and partition tolerance. E.g., if we use distribution in asynchronous mode, writes become *much* faster. However, anyone anywhere doing a GET will have to perform the GET on all data owners and compare the versions of the data received to determine which is the latest - and, if there is a conflict, pass all the values back to the user.
So in terms of design, what I have in mind is:
* All cache entries are versioned using vector clocks. One vector clock per node.
* When a node performs a GET, the GET is sent to all data owners (concurrently), and the value + version is retrieved from each.
* If the versions are all the same (or they can be "fast forwarded"), the value is returned
* Otherwise, all potential values and their versions are returned
* A resolve() API should be provided where application code may provide a "hint" as to which version should be "correct" - which will cause an update.
* In terms of implementation, this will touch the DistributionInterceptor, InternalCacheEntry and relevant factories, some config code (since this consistency model should be configurable), and a new public interface.
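The version comparison in the design above could be sketched roughly like this. This is a minimal, self-contained illustration of vector-clock comparison, not Infinispan code; the class and method names are invented for the example:

```java
import java.util.HashMap;
import java.util.Map;

// One logical clock entry per node, as the design above suggests.
class VectorClock {
    final Map<String, Long> counters = new HashMap<>();

    // Record a local write on the given node.
    void increment(String node) {
        counters.merge(node, 1L, Long::sum);
    }

    // True if this clock can be "fast forwarded" to other: every counter
    // here is <= the corresponding counter in other.
    boolean happenedBefore(VectorClock other) {
        for (Map.Entry<String, Long> e : counters.entrySet()) {
            if (e.getValue() > other.counters.getOrDefault(e.getKey(), 0L)) return false;
        }
        return true;
    }

    // Neither clock dominates the other: the versions conflict, and all
    // candidate values would be handed back for resolve() to pick a winner.
    boolean conflictsWith(VectorClock other) {
        return !this.happenedBefore(other) && !other.happenedBefore(this);
    }
}
```

On a GET, the caller would collect (value, clock) pairs from all owners: if one clock dominates the rest, that value wins; otherwise all conflicting values are returned to the application.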
[JBoss JIRA] Created: (ISPN-1394) Investigate possibility of doing manual rehashing
by Galder Zamarreño (JIRA)
Investigate possibility of doing manual rehashing
-------------------------------------------------
Key: ISPN-1394
URL: https://issues.jboss.org/browse/ISPN-1394
Project: Infinispan
Issue Type: Feature Request
Components: Distributed Cache
Reporter: Galder Zamarreño
Assignee: Manik Surtani
Fix For: 5.2.0.FINAL
Investigate the possibility of being able to do manual rehashing:
- Approach used by Dynamo (and Cassandra)
- If you're adding 100 nodes, using manual rehashing could reduce traffic and make it more predictable
- Could be called via JMX
- But removing 10 nodes could be problematic, unless the number of owners is 11 or higher, which guarantees that at least one copy of the data remains
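A rough sketch of what a JMX-triggered rehash hook might look like. The MBean name and operations below are invented for illustration and are not Infinispan's actual management API:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Hypothetical management interface for triggering a rehash manually
// (standard MBean naming: interface = class name + "MBean").
interface RehashControlMBean {
    void triggerRehash();
    boolean isRehashInProgress();
}

class RehashControl implements RehashControlMBean {
    private volatile boolean inProgress;

    // In a real implementation this would kick off state transfer.
    public void triggerRehash() { inProgress = true; }

    public boolean isRehashInProgress() { return inProgress; }

    // Expose the control bean on the platform MBean server so it can be
    // invoked from JMX tooling (jconsole, etc.).
    static RehashControl register() throws Exception {
        RehashControl control = new RehashControl();
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(control, new ObjectName("org.infinispan:type=RehashControl"));
        return control;
    }
}
```

An operator could then batch the join of many nodes and invoke triggerRehash() once, instead of paying for a state-transfer round per topology change.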
[JBoss JIRA] Created: (ISPN-1147) Programmatically creating cache should be automatically reflected throughout the cluster
by Randall Hauch (JIRA)
Programmatically creating cache should be automatically reflected throughout the cluster
----------------------------------------------------------------------------------------
Key: ISPN-1147
URL: https://issues.jboss.org/browse/ISPN-1147
Project: Infinispan
Issue Type: Feature Request
Components: Distributed Cache, State transfer
Affects Versions: 4.2.1.FINAL
Reporter: Randall Hauch
Assignee: Manik Surtani
Consider a symmetric cluster of two nodes (N1 and N2), each with a single cache (C1). Currently, a new cache (C2) must be programmatically created _on all nodes in the cluster_ (if REPL, or on all appropriate nodes if DIST) _before_ that new cache can even be _used_.
Ideally, when a new cache (C2) is created on one node (N1), Infinispan should automatically propagate that creation to the appropriate nodes in the cluster so that the new cache can be used immediately.
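The requested behaviour can be pictured with a toy simulation. The classes and method names below are invented for illustration (they are not Infinispan API); the point is that one call on one node should make the definition visible cluster-wide:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy model of a cluster node holding a set of defined cache names.
class ClusterNode {
    final String name;
    final Set<String> caches = new HashSet<>();
    final List<ClusterNode> peers = new ArrayList<>();

    ClusterNode(String name) { this.name = name; }

    // Desired semantics: define locally, then propagate the definition
    // to every peer so the cache is immediately usable cluster-wide.
    void defineCacheClusterWide(String cacheName) {
        caches.add(cacheName);
        for (ClusterNode peer : peers) peer.caches.add(cacheName);
    }
}
```

Today the equivalent effect requires defining the cache configuration on each node separately before any node can start the cache; the feature request is for Infinispan to do the propagation step itself.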