[JBoss JIRA] (ISPN-3087) CLI HELP of some commands should to be changed
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3087?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-3087:
-----------------------------------------------
Vitalii Chepeliuk <vchepeli(a)redhat.com> changed the Status of [bug 961317|https://bugzilla.redhat.com/show_bug.cgi?id=961317] from ON_QA to ASSIGNED
> CLI HELP of some commands should to be changed
> ----------------------------------------------
>
> Key: ISPN-3087
> URL: https://issues.jboss.org/browse/ISPN-3087
> Project: Infinispan
> Issue Type: Bug
> Components: CLI
> Affects Versions: 5.2.4.Final
> Environment: ALL
> Reporter: Vitalii Chepeliuk
> Assignee: Tristan Tarrant
> Labels: cli
> Fix For: 5.3.0.CR2
>
>
> Description of problem:
> Grammar mistakes in the help text of the begin, connect, container, locate, put, replace, stats, and upgrade commands should be fixed. The wrong words are wrapped in <<wrong word>> markers.
> BEGIN COMMAND--------------------------------------------------------------------
> SYNOPSIS
> begin [cachename]
>
> DESCRIPTION
> Starts a transaction
>
> ARGUMENTS
> cachename
> (optional) the name of the cache on which to start the transaction. The currently selected cache will be used if this argument is <<missingThe>> cache must be transactional for
> this command to work
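> For illustration, a minimal transactional sequence on a hypothetical transactional cache named mycache might look like this (commit is a separate CLI command, not part of this help excerpt):
> {code}
> begin mycache
> put k1 v1
> commit
> {code}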
> CONNECT COMMAND------------------------------------------------------------------
> SYNOPSIS
> connect protocol://[user[:password]@]host][:port][/container[/cache]]
>
> DESCRIPTION
> Connects to an Infinispan instance using the specified protocol, host and port and with the supplied credentials.
>
> ARGUMENTS
> protocol
> currently only the jmx and the remoting (JMX over JBoss Remoting) protocols are supported. The jmxprotocol should be used to connect to directly over the standard JMX
> protocol, whereas <<theremotingprotocol>> should be used to connect to an Infinispan instance managed within an AS/EAP/JDG-style container.
> user (optional)
> The username to use when connecting if the server requires credentials
> password (optional)
> The password to use when connecting if the server requires credentials. When omitted, the password will be asked for interactively
> host
> the host name or IP address where the Infinispan instance is running
> port
> the port to connect to. For <<theremotingprotocol>> this defaults to 9999
> container (optional)
> the cache container to connect to by default. If unspecified, the first cache container will be selected
> cache (optional)
> the cache to connect to by default. If unspecified, no cache will be selected
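> A couple of illustrative invocations (host, port, container and cache names are placeholders, not taken from the original report):
> {code}
> connect jmx://localhost:12000
> connect remoting://admin:secret@localhost:9999/local/default
> {code}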
> CONTAINER COMMAND---------------------------------------------------------------
> SYNOPSIS
> container [containername]
>
> DESCRIPTION
> Shows the available containers or selects a container to be used as default for CLI operations
>
> ARGUMENTS
> <<cachename>>
> (optional) the name of the container to set as default for the following operations
> LOCATE COMMAND------------------------------------------------------------------
> SYNOPSIS
> locate [--codec=codec] [cache.]key
>
> DESCRIPTION
> Shows the addresses of the owners in the cluster of the entry associated with the specified key. This command <<onlyworks>> for distributed caches
>
> ARGUMENTS
> cache
> (optional) the name of the cache to use. If not specified, the currently selected cache will be used. See the cache command
> key the key of the entry for which to show the address--codec=codec option has been specified then the key will be encoded using the specified codec, otherwise the default
> session codec will be used. See <<theencoding>> command for more information
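> Illustrative invocations (cache and key names are placeholders):
> {code}
> locate key001
> locate mycache.key001
> {code}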
> PUT COMMAND---------------------------------------------------------------------
> SYNOPSIS
> put [--codec=codec] [--ifabsent] [cache.]key value [expires expiration [maxidle idletime]]
>
> DESCRIPTION
> Associates the specified value with the specified key in this cache. If the cache previously contained a mapping for the key, the old value is replaced by the specified value.
> Optionally allows setting of a lifespan and a maximum idle time.
>
> ARGUMENTS
> cache
> the name of the cache where the key/value pair will be stored. If omitted uses the currently selected cache (see the <<cachecommand>>)
> key the key which identifies the element in the cache
> value
> the value to store in the cache associated with the keyIf the --codec=<<codecoption>> has been specified then the key and value will be encoded using the specified codec,
> otherwise the default session codec will be used. See <<theencodingcommand>> for more information
> expiration
> an optional expiration timeout (using the time value notation described below)
> idletime
> an optional idle timeout (using the time value notation described below)
>
> DATA TYPES
> The CLI understands the following types:
> string
> strings can either be quoted between single (') or double (") quotes, or left unquoted. In this case it must not contain spaces, punctuation and cannot begin with a number
> e.g. 'a string', key001
> int an integer is identified by a sequence of decimal digits, e.g. 256
> long
> a long is identified by a sequence of decimal digits suffixed by 'l', e.g. 1000l
> double
> a double precision number is identified by a floating point number(with optional exponent part) and an optional 'd' suffix, e.g.3.14
> float
> a single precision number is identified by a floating point number(with optional exponent part) and an 'f' suffix, e.g. 10.3f
> boolean
> a boolean is represented either by the keywords true and false
> UUID
> a UUID is represented by its canonical form XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
> JSON
> serialized Java classes can be represented using JSON notation, e.g. {"package.MyClass":{"i":5,"x":null,"b":true}}. Please note that the specified class must be available to
> the CacheManager's class loader.
>
> TIME VALUES
> A time value is an integer number followed by time unit suffix: days (d), hours (h), minutes (m), seconds (s), milliseconds (ms)
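> Tying the arguments, data types and time values together, a hypothetical session (cache, key and class names are made up for illustration):
> {code}
> put mycache.key001 'a string' expires 2d maxidle 1h
> put --ifabsent counter 256l
> put user42 {"org.example.User":{"id":42,"active":true}}
> {code}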
> REPLACE COMMAND-----------------------------------------------------------------
> See put command above
> UPGRADE COMMAND-----------------------------------------------------------------
> SYNOPSIS
> upgrade [--dumpkeys | --synchronize=migrator | --disconnectsource=migrator] [cachename | --all]
>
> DESCRIPTION
> This command performs operations used during the rolling upgrade procedure.
>
> ARGUMENTS
> --dumpkeys
> Performs the dump of all the keys in the cache to a known entry. It must be performed on the "source" cluster so that the "target" cluster can fetch the entire keyset
> efficiently to complete the synchronization operation
> --synchronize=migrator
> Performs the synchronization of all data from the "source" cluster to the "target" cluster using the specified migrator. It must be performed on the "target" cluster after
> the --dumpkeys operation has been performed on the "source" cluster. The only migrator currently available is hotrod which migrates entries between caches exposed via the
> HotRod remoting protocol.
> --disconnectsource=migrator
> Disconnects the "target" cluster from the "source" cluster. This is performed in a migrator-specific way. After this operation has been performed the "source" cluster can be
> switched off
> --all
> Specifies that the requested operation should be performed on all caches in the currently selected container
> cachename
> (optional) the name of the cache on which to invoke the specified upgrade command. If unspecified, the currently selected cache will be used. See also the --all switch above
>
> USAGE
> In order to perform a rolling upgrade of a HotRod cluster, the following steps must be taken
> 1. Configure and start a new cluster with a RemoteCacheStore pointing to the old cluster and the
> hotRodWrapping flag enabled
> 2. Configure all clients so that they will connect to the new cluster
> <<some space alignment needed here on 3. >>
> 3. Invoke the
> upgrade --dumpkeys command on the old cluster for all of the caches that need to be migrated
> 4. Invoke the
> upgrade --synchronize=hotrod command on the new cluster to ensure that all data is migrated from the old cluster to the new one
> 5. Invoke the
> upgrade --disconnectsource=hotrod command on the new cluster to disable the RemoteCacheStore used to migrate the data
> 6. Switch off the old cluster
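> Putting the steps together, a sketch of the corresponding CLI invocations (the cache name mycache is a placeholder). On the old (source) cluster:
> {code}
> upgrade --dumpkeys mycache
> {code}
> then on the new (target) cluster:
> {code}
> upgrade --synchronize=hotrod mycache
> upgrade --disconnectsource=hotrod mycache
> {code}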
>
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3234) Upgrade to JCache 0.8
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-3234?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-3234:
-----------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> Upgrade to JCache 0.8
> ---------------------
>
> Key: ISPN-3234
> URL: https://issues.jboss.org/browse/ISPN-3234
> Project: Infinispan
> Issue Type: Component Upgrade
> Components: JCache
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Fix For: 6.0.0.Alpha2, 6.0.0.Final
>
>
> When the next JCache version is released, upgrade and re-enable the following TCK tests, which were disabled in ISPN-3213:
> {code}org.jsr107.tck.CacheStatisticsTest#testCacheStatistics
> org.jsr107.tck.CacheStatisticsTest#testCacheStatisticsInvokeEntryProcessorGet
> org.jsr107.tck.CacheStatisticsTest#testCacheStatisticsInvokeEntryProcessorCreate
> org.jsr107.tck.CacheStatisticsTest#testCacheStatisticsInvokeEntryProcessorRemove
> org.jsr107.tck.CacheStatisticsTest#testIterateAndRemove
> {code}
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-1586) inconsistent cache data in replication cluster with local (not shared) cache store
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-1586?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-1586:
--------------------------------
Fix Version/s: (was: 6.0.0.Final)
> inconsistent cache data in replication cluster with local (not shared) cache store
> ----------------------------------------------------------------------------------
>
> Key: ISPN-1586
> URL: https://issues.jboss.org/browse/ISPN-1586
> Project: Infinispan
> Issue Type: Bug
> Components: Core API
> Affects Versions: 5.0.0.FINAL, 5.1.0.CR1
> Environment: ISPN 5.0.0.Final and ISPN 5.1 snapshot
> Java 1.7
> Linux CentOS
> Reporter: dex chen
> Assignee: Dan Berindei
> Priority: Critical
>
> I reran my test (an embedded ISPN cluster) with ISPN 5.0.0.Final and 5.1 snapshot code.
> It is configured for "replication", using a local cache store, with preload=true and purgeOnStartup=false (see the full config below).
> I get inconsistent data among the nodes in the following scenario:
> 1) start 2 node cluster
> 2) after the cluster is formed, add some data to the cache
> k1-->v1
> k2-->v2
> I will see the data replication working perfectly at this point.
> 3) bring node 2 down
> 4) delete entry k1-->v1 through node1
> Note: at this point, the local (persistent) cache store on node2 still has 2 entries.
> 5) start node2 and wait for it to join the cluster
> 6) after state merging, you will see that node1 has 1 entry and node2 has 2 entries.
> I am expecting that the data should be consistent across the cluster.
> Here is the infinispan config:
> {code:xml}
> <infinispan
> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
> xsi:schemaLocation="urn:infinispan:config:5.0 http://www.infinispan.org/schemas/infinispan-config-5.0.xsd"
> xmlns="urn:infinispan:config:5.0">
> <global>
> <transport clusterName="demoCluster"
> machineId="node1"
> rackId="r1" nodeName="dexlaptop"
> >
> <properties>
> <property name="configurationFile" value="./jgroups-tcp.xml" />
> </properties>
> </transport>
> <globalJmxStatistics enabled="true"/>
> </global>
> <default>
> <locking
> isolationLevel="READ_COMMITTED"
> lockAcquisitionTimeout="20000"
> writeSkewCheck="false"
> concurrencyLevel="5000"
> useLockStriping="false"
> />
> <jmxStatistics enabled="true"/>
> <clustering mode="replication">
> <stateRetrieval
> timeout="240000"
> fetchInMemoryState="true"
> alwaysProvideInMemoryState="false"
> />
> <!--
> Network calls are synchronous.
> -->
> <sync replTimeout="20000"/>
> </clustering>
> <loaders
> passivation="false"
> shared="false"
> preload="true">
> <loader
> class="org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore"
> fetchPersistentState="true"
> purgeOnStartup="false">
> <!-- set to true for not first node in the cluster in testing/demo -->
> <properties>
> <property name="stringsTableNamePrefix" value="ISPN_STRING_TABLE"/>
> <property name="idColumnName" value="ID_COLUMN"/>
> <property name="dataColumnName" value="DATA_COLUMN"/>
> <property name="timestampColumnName" value="TIMESTAMP_COLUMN"/>
> <property name="timestampColumnType" value="BIGINT"/>
> <property name="connectionFactoryClass" value="org.infinispan.loaders.jdbc.connectionfactory.PooledConnectionFactory"/>
> <property name="connectionUrl" value="jdbc:h2:file:/var/tmp/h2cachestore;DB_CLOSE_DELAY=-1"/>
> <property name="userName" value="sa"/>
> <property name="driverClass" value="org.h2.Driver"/>
> <property name="idColumnType" value="VARCHAR(255)"/>
> <property name="dataColumnType" value="BINARY"/>
> <property name="dropTableOnExit" value="false"/>
> <property name="createTableOnStart" value="true"/>
> </properties>
> <!--
> <async enabled="false" />
> -->
> </loader>
> </loaders>
> </default>
> </infinispan>
> {code}
> Basically, the current ISPN state transfer implementation results in data inconsistency among nodes when running in replication mode with each node having a local cache store.
> I found that BaseStateTransferManagerImpl's applyState() does not remove stale data from the local cache store, which results in inconsistent data when a node joins the cluster:
> Here is a code snippet of applyState():
> {code:java}
> public void applyState(Collection<InternalCacheEntry> state,
> Address sender, int viewId) throws InterruptedException {
> .....
>
> for (InternalCacheEntry e : state) {
> InvocationContext ctx = icc.createInvocationContext(false, 1);
> // locking not necessary as during rehashing we block all transactions
> ctx.setFlags(CACHE_MODE_LOCAL, SKIP_CACHE_LOAD, SKIP_REMOTE_LOOKUP, SKIP_SHARED_CACHE_STORE, SKIP_LOCKING,
> SKIP_OWNERSHIP_CHECK);
> try {
> PutKeyValueCommand put = cf.buildPutKeyValueCommand(e.getKey(), e.getValue(), e.getLifespan(), e.getMaxIdle(), ctx.getFlags());
> interceptorChain.invoke(ctx, put);
> } catch (Exception ee) {
> log.problemApplyingStateForKey(ee.getMessage(), e.getKey());
> }
> }
>
> ...
> }
> {code}
> As we can see, the code basically tries to add all data entries received from the cluster (the other node). Hence, it does not know that entries which still exist in its local cache store were previously deleted from the cluster. This is exactly my test case (my configuration is that each node has its own cache store and runs in replication mode).
> To fix this, we need to delete any entries from the local cache/cache store which no longer exist in the new state.
> I modified the above method by adding the following code before the put loop, and it fixed the problem in my configuration:
> {code:java}
> // Remove entries which no longer exist in the new state from the local cache/cache store
> for (InternalCacheEntry ie: dataContainer.entrySet()) {
>
> if (!state.contains(ie)) {
> log.debug("Try to delete local store entry no loger exists in the new state: " + ie.getKey());
> InvocationContext ctx = icc.createInvocationContext(false, 1);
> // locking not necessary as during rehashing we block all transactions
> ctx.setFlags(CACHE_MODE_LOCAL, SKIP_CACHE_LOAD, SKIP_REMOTE_LOOKUP, SKIP_SHARED_CACHE_STORE, SKIP_LOCKING,
> SKIP_OWNERSHIP_CHECK);
> try {
> RemoveCommand remove = cf.buildRemoveCommand(ie.getKey(), ie.getValue(), ctx.getFlags());
> interceptorChain.invoke(ctx, remove);
> dataContainer.remove(ie.getKey());
> } catch (Exception ee) {
> log.error("failed to delete local store entry", ee);
> }
> }
> }
> ...
> {code}
> Obviously, the above "fix" is based on the assumption/configuration that the dataContainer holds all local entries, i.e., preload=true and no eviction, in replication mode.
> The real fix, I think, is to delegate a syncState(state) operation to the cache store implementation, where we can check the configuration and do the right thing.
> For example, in the cache store implementation, we can calculate the changes based on the local data and the new state, and apply the changes there.
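> As a rough illustration of that idea (the interface and method below are invented for discussion, not existing Infinispan API):
> {code:java}
> import java.util.Collection;
>
> import org.infinispan.container.entries.InternalCacheEntry;
>
> // Hypothetical contract: the store reconciles its contents against the incoming state,
> // removing any locally persisted entries that are not part of it.
> public interface StateAwareCacheStore {
>    void syncState(Collection<InternalCacheEntry> state);
> }
> {code}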
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-1586) inconsistent cache data in replication cluster with local (not shared) cache store
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-1586?page=com.atlassian.jira.plugin.... ]
Mircea Markus resolved ISPN-1586.
---------------------------------
Resolution: Rejected
This is an architectural limitation that will be addressed within the scope of ISPN-3351.
> inconsistent cache data in replication cluster with local (not shared) cache store
> ----------------------------------------------------------------------------------
>
> Key: ISPN-1586
> URL: https://issues.jboss.org/browse/ISPN-1586
> Project: Infinispan
> Issue Type: Bug
> Components: Core API
> Affects Versions: 5.0.0.FINAL, 5.1.0.CR1
> Environment: ISPN 5.0.0.Final and ISPN 5.1 snapshot
> Java 1.7
> Linux CentOS
> Reporter: dex chen
> Assignee: Dan Berindei
> Priority: Critical
> Fix For: 6.0.0.Final
>
>
> I reran my test (an embedded ISPN cluster) with ISPN 5.0.0.Final and 5.1 snapshot code.
> It is configured for "replication", using a local cache store, with preload=true and purgeOnStartup=false (see the full config below).
> I get inconsistent data among the nodes in the following scenario:
> 1) start 2 node cluster
> 2) after the cluster is formed, add some data to the cache
> k1-->v1
> k2-->v2
> I will see the data replication working perfectly at this point.
> 3) bring node 2 down
> 4) delete entry k1-->v1 through node1
> Note: at this point, the local (persistent) cache store on node2 still has 2 entries.
> 5) start node2 and wait for it to join the cluster
> 6) after state merging, you will see that node1 has 1 entry and node2 has 2 entries.
> I am expecting that the data should be consistent across the cluster.
> Here is the infinispan config:
> {code:xml}
> <infinispan
> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
> xsi:schemaLocation="urn:infinispan:config:5.0 http://www.infinispan.org/schemas/infinispan-config-5.0.xsd"
> xmlns="urn:infinispan:config:5.0">
> <global>
> <transport clusterName="demoCluster"
> machineId="node1"
> rackId="r1" nodeName="dexlaptop"
> >
> <properties>
> <property name="configurationFile" value="./jgroups-tcp.xml" />
> </properties>
> </transport>
> <globalJmxStatistics enabled="true"/>
> </global>
> <default>
> <locking
> isolationLevel="READ_COMMITTED"
> lockAcquisitionTimeout="20000"
> writeSkewCheck="false"
> concurrencyLevel="5000"
> useLockStriping="false"
> />
> <jmxStatistics enabled="true"/>
> <clustering mode="replication">
> <stateRetrieval
> timeout="240000"
> fetchInMemoryState="true"
> alwaysProvideInMemoryState="false"
> />
> <!--
> Network calls are synchronous.
> -->
> <sync replTimeout="20000"/>
> </clustering>
> <loaders
> passivation="false"
> shared="false"
> preload="true">
> <loader
> class="org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore"
> fetchPersistentState="true"
> purgeOnStartup="false">
> <!-- set to true for not first node in the cluster in testing/demo -->
> <properties>
> <property name="stringsTableNamePrefix" value="ISPN_STRING_TABLE"/>
> <property name="idColumnName" value="ID_COLUMN"/>
> <property name="dataColumnName" value="DATA_COLUMN"/>
> <property name="timestampColumnName" value="TIMESTAMP_COLUMN"/>
> <property name="timestampColumnType" value="BIGINT"/>
> <property name="connectionFactoryClass" value="org.infinispan.loaders.jdbc.connectionfactory.PooledConnectionFactory"/>
> <property name="connectionUrl" value="jdbc:h2:file:/var/tmp/h2cachestore;DB_CLOSE_DELAY=-1"/>
> <property name="userName" value="sa"/>
> <property name="driverClass" value="org.h2.Driver"/>
> <property name="idColumnType" value="VARCHAR(255)"/>
> <property name="dataColumnType" value="BINARY"/>
> <property name="dropTableOnExit" value="false"/>
> <property name="createTableOnStart" value="true"/>
> </properties>
> <!--
> <async enabled="false" />
> -->
> </loader>
> </loaders>
> </default>
> </infinispan>
> {code}
> Basically, the current ISPN state transfer implementation results in data inconsistency among nodes when running in replication mode with each node having a local cache store.
> I found that BaseStateTransferManagerImpl's applyState() does not remove stale data from the local cache store, which results in inconsistent data when a node joins the cluster:
> Here is a code snippet of applyState():
> {code:java}
> public void applyState(Collection<InternalCacheEntry> state,
> Address sender, int viewId) throws InterruptedException {
> .....
>
> for (InternalCacheEntry e : state) {
> InvocationContext ctx = icc.createInvocationContext(false, 1);
> // locking not necessary as during rehashing we block all transactions
> ctx.setFlags(CACHE_MODE_LOCAL, SKIP_CACHE_LOAD, SKIP_REMOTE_LOOKUP, SKIP_SHARED_CACHE_STORE, SKIP_LOCKING,
> SKIP_OWNERSHIP_CHECK);
> try {
> PutKeyValueCommand put = cf.buildPutKeyValueCommand(e.getKey(), e.getValue(), e.getLifespan(), e.getMaxIdle(), ctx.getFlags());
> interceptorChain.invoke(ctx, put);
> } catch (Exception ee) {
> log.problemApplyingStateForKey(ee.getMessage(), e.getKey());
> }
> }
>
> ...
> }
> {code}
> As we can see, the code basically tries to add all data entries received from the cluster (the other node). Hence, it does not know that entries which still exist in its local cache store were previously deleted from the cluster. This is exactly my test case (my configuration is that each node has its own cache store and runs in replication mode).
> To fix this, we need to delete any entries from the local cache/cache store which no longer exist in the new state.
> I modified the above method by adding the following code before the put loop, and it fixed the problem in my configuration:
> {code:java}
> // Remove entries which no longer exist in the new state from the local cache/cache store
> for (InternalCacheEntry ie: dataContainer.entrySet()) {
>
> if (!state.contains(ie)) {
> log.debug("Try to delete local store entry no loger exists in the new state: " + ie.getKey());
> InvocationContext ctx = icc.createInvocationContext(false, 1);
> // locking not necessary as during rehashing we block all transactions
> ctx.setFlags(CACHE_MODE_LOCAL, SKIP_CACHE_LOAD, SKIP_REMOTE_LOOKUP, SKIP_SHARED_CACHE_STORE, SKIP_LOCKING,
> SKIP_OWNERSHIP_CHECK);
> try {
> RemoveCommand remove = cf.buildRemoveCommand(ie.getKey(), ie.getValue(), ctx.getFlags());
> interceptorChain.invoke(ctx, remove);
> dataContainer.remove(ie.getKey());
> } catch (Exception ee) {
> log.error("failed to delete local store entry", ee);
> }
> }
> }
> ...
> {code}
> Obviously, the above "fix" is based on the assumption/configuration that the dataContainer holds all local entries, i.e., preload=true and no eviction, in replication mode.
> The real fix, I think, is to delegate a syncState(state) operation to the cache store implementation, where we can check the configuration and do the right thing.
> For example, in the cache store implementation, we can calculate the changes based on the local data and the new state, and apply the changes there.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3318) Migrate data from one cache store to another
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3318?page=com.atlassian.jira.plugin.... ]
Mircea Markus commented on ISPN-3318:
-------------------------------------
We need to keep the old file cache store around for backward compatibility (a JDG requirement).
We'd also need a migration tool for migrating other cache stores (e.g. the LevelDB cache store used by certain users), so +1 for the migration tool.
> Migrate data from one cache store to another
> --------------------------------------------
>
> Key: ISPN-3318
> URL: https://issues.jboss.org/browse/ISPN-3318
> Project: Infinispan
> Issue Type: Task
> Components: Loaders and Stores
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Fix For: 6.0.0.Final
>
>
> Find a generic way to transfer data from one cache store to another, which could involve different Infinispan versions. This is handy for migrating users of the file cache store to the single file cache store (ISPN-2806).
> Ideally, this should be added as a recipe for rolling upgrades.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3346) org.infinispan.rest.MimeMetadata not serializable
by Martin Gencur (JIRA)
[ https://issues.jboss.org/browse/ISPN-3346?page=com.atlassian.jira.plugin.... ]
Martin Gencur commented on ISPN-3346:
-------------------------------------
This happens not only in stress tests but also in functional tests.
> org.infinispan.rest.MimeMetadata not serializable
> -------------------------------------------------
>
> Key: ISPN-3346
> URL: https://issues.jboss.org/browse/ISPN-3346
> Project: Infinispan
> Issue Type: Bug
> Components: RPC, Server
> Affects Versions: 6.0.0.Alpha1
> Reporter: Michal Linhard
> Assignee: Galder Zamarreño
> Priority: Critical
> Fix For: 6.0.0.Alpha2
>
>
> The following exception occurs in a REST client stress test with JDG 6.2.0.DR1:
> (4nodes, dist sync, 2owners)
> {code}
> org.jboss.resteasy.spi.UnhandledException: org.infinispan.commons.CacheException: java.lang.RuntimeException: Failure to marshal argument(s)
> at org.jboss.resteasy.core.SynchronousDispatcher.handleApplicationException(SynchronousDispatcher.java:365) [resteasy-jaxrs-2.3.6.Final-redhat-1.jar:2.3.6.Final-redhat-1]
> at org.jboss.resteasy.core.SynchronousDispatcher.handleException(SynchronousDispatcher.java:233) [resteasy-jaxrs-2.3.6.Final-redhat-1.jar:2.3.6.Final-redhat-1]
> at org.jboss.resteasy.core.SynchronousDispatcher.handleInvokerException(SynchronousDispatcher.java:209) [resteasy-jaxrs-2.3.6.Final-redhat-1.jar:2.3.6.Final-redhat-1]
> at org.jboss.resteasy.core.SynchronousDispatcher.getResponse(SynchronousDispatcher.java:557) [resteasy-jaxrs-2.3.6.Final-redhat-1.jar:2.3.6.Final-redhat-1]
> at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:524) [resteasy-jaxrs-2.3.6.Final-redhat-1.jar:2.3.6.Final-redhat-1]
> at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:126) [resteasy-jaxrs-2.3.6.Final-redhat-1.jar:2.3.6.Final-redhat-1]
> at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:208) [resteasy-jaxrs-2.3.6.Final-redhat-1.jar:2.3.6.Final-redhat-1]
> at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:55) [resteasy-jaxrs-2.3.6.Final-redhat-1.jar:2.3.6.Final-redhat-1]
> at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:50) [resteasy-jaxrs-2.3.6.Final-redhat-1.jar:2.3.6.Final-redhat-1]
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:847) [jboss-servlet-api_3.0_spec-1.0.2.Final-redhat-1.jar:1.0.2.Final-redhat-1]
> at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:295) [jbossweb-7.2.0.Final-redhat-1.jar:7.2.0.Final-redhat-1]
> at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214) [jbossweb-7.2.0.Final-redhat-1.jar:7.2.0.Final-redhat-1]
> at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230) [jbossweb-7.2.0.Final-redhat-1.jar:7.2.0.Final-redhat-1]
> at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:149) [jbossweb-7.2.0.Final-redhat-1.jar:7.2.0.Final-redhat-1]
> at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:145) [jbossweb-7.2.0.Final-redhat-1.jar:7.2.0.Final-redhat-1]
> at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:97) [jbossweb-7.2.0.Final-redhat-1.jar:7.2.0.Final-redhat-1]
> at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:102) [jbossweb-7.2.0.Final-redhat-1.jar:7.2.0.Final-redhat-1]
> at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:336) [jbossweb-7.2.0.Final-redhat-1.jar:7.2.0.Final-redhat-1]
> at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:856) [jbossweb-7.2.0.Final-redhat-1.jar:7.2.0.Final-redhat-1]
> at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:653) [jbossweb-7.2.0.Final-redhat-1.jar:7.2.0.Final-redhat-1]
> at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:920) [jbossweb-7.2.0.Final-redhat-1.jar:7.2.0.Final-redhat-1]
> at java.lang.Thread.run(Thread.java:724) [rt.jar:1.7.0_25]
> Caused by: org.infinispan.commons.CacheException: java.lang.RuntimeException: Failure to marshal argument(s)
> at org.infinispan.commons.util.Util.rewrapAsCacheException(Util.java:566) [infinispan-commons-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:176) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:508) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:280) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.interceptors.distribution.BaseDistributionInterceptor.handleNonTxWriteCommand(BaseDistributionInterceptor.java:140) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.interceptors.distribution.NonTxDistributionInterceptor.visitPutKeyValueCommand(NonTxDistributionInterceptor.java:72) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:62) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.interceptors.EntryWrappingInterceptor.invokeNextAndApplyChanges(EntryWrappingInterceptor.java:278) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.interceptors.EntryWrappingInterceptor.setSkipRemoteGetsAndInvokeNextForDataCommand(EntryWrappingInterceptor.java:330) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.interceptors.EntryWrappingInterceptor.visitPutKeyValueCommand(EntryWrappingInterceptor.java:143) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:62) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.visitPutKeyValueCommand(AbstractLockingInterceptor.java:45) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:62) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:32) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:62) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:32) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:62) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.statetransfer.StateTransferInterceptor.handleTopologyAffectedCommand(StateTransferInterceptor.java:192) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.statetransfer.StateTransferInterceptor.handleWriteCommand(StateTransferInterceptor.java:170) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.statetransfer.StateTransferInterceptor.visitPutKeyValueCommand(StateTransferInterceptor.java:112) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:62) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.interceptors.CacheMgmtInterceptor.visitPutKeyValueCommand(CacheMgmtInterceptor.java:138) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:62) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:106) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:70) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:32) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:62) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:321) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1317) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.CacheImpl.putInternal(CacheImpl.java:878) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.CacheImpl.put(CacheImpl.java:870) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.CacheImpl.put(CacheImpl.java:1370) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.AbstractDelegatingAdvancedCache.put(AbstractDelegatingAdvancedCache.java:186) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.rest.Server.putOrReplace(Server.scala:343) [infinispan-server-rest-6.0.0.Alpha1-redhat-1-classes.jar:]
> at org.infinispan.rest.Server.org$infinispan$rest$Server$$putInCache(Server.scala:313) [infinispan-server-rest-6.0.0.Alpha1-redhat-1-classes.jar:]
> at org.infinispan.rest.Server$$anonfun$putEntry$1.apply(Server.scala:301) [infinispan-server-rest-6.0.0.Alpha1-redhat-1-classes.jar:]
> at org.infinispan.rest.Server$$anonfun$putEntry$1.apply(Server.scala:277) [infinispan-server-rest-6.0.0.Alpha1-redhat-1-classes.jar:]
> at org.infinispan.rest.Server.protectCacheNotFound(Server.scala:420) [infinispan-server-rest-6.0.0.Alpha1-redhat-1-classes.jar:]
> at org.infinispan.rest.Server.putEntry(Server.scala:277) [infinispan-server-rest-6.0.0.Alpha1-redhat-1-classes.jar:]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.7.0_25]
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) [rt.jar:1.7.0_25]
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_25]
> at java.lang.reflect.Method.invoke(Method.java:606) [rt.jar:1.7.0_25]
> at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:167) [resteasy-jaxrs-2.3.6.Final-redhat-1.jar:2.3.6.Final-redhat-1]
> at org.jboss.resteasy.core.ResourceMethod.invokeOnTarget(ResourceMethod.java:269) [resteasy-jaxrs-2.3.6.Final-redhat-1.jar:2.3.6.Final-redhat-1]
> at org.jboss.resteasy.core.ResourceMethod.invoke(ResourceMethod.java:227) [resteasy-jaxrs-2.3.6.Final-redhat-1.jar:2.3.6.Final-redhat-1]
> at org.jboss.resteasy.core.ResourceMethod.invoke(ResourceMethod.java:216) [resteasy-jaxrs-2.3.6.Final-redhat-1.jar:2.3.6.Final-redhat-1]
> at org.jboss.resteasy.core.SynchronousDispatcher.getResponse(SynchronousDispatcher.java:542) [resteasy-jaxrs-2.3.6.Final-redhat-1.jar:2.3.6.Final-redhat-1]
> ... 18 more
> Caused by: java.lang.RuntimeException: Failure to marshal argument(s)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.marshallCall(CommandAwareRpcDispatcher.java:333) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:352) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:167) [infinispan-core-6.0.0.Alpha1-redhat-1.jar:6.0.0.Alpha1-redhat-1]
> ... 73 more
> Caused by: org.infinispan.commons.marshall.NotSerializableException: org.infinispan.rest.MimeMetadata
> Caused by: an exception which occurred:
> in object org.infinispan.rest.MimeMetadata@30b78e91
> in object org.infinispan.commands.write.PutKeyValueCommand@4c7c7218
> in object org.infinispan.commands.remote.SingleRpcCommand@9717ae08
> {code}
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3348) Unable to read entries from SingleFileCacheStore after server restart
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3348?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-3348:
--------------------------------
Fix Version/s: 6.0.0.Alpha3
> Unable to read entries from SingleFileCacheStore after server restart
> ---------------------------------------------------------------------
>
> Key: ISPN-3348
> URL: https://issues.jboss.org/browse/ISPN-3348
> Project: Infinispan
> Issue Type: Bug
> Components: Server
> Affects Versions: 6.0.0.Alpha1
> Reporter: Martin Gencur
> Assignee: Galder Zamarreño
> Fix For: 6.0.0.Alpha3
>
>
> Using the following configuration:
> {code:xml}
> <subsystem xmlns="urn:infinispan:server:core:5.3" default-cache-container="default">
> <cache-container name="default" default-cache="default" listener-executor="infinispan-listener" eviction-executor="infinispan-eviction" replication-queue-executor="infinispan-repl-queue">
> <local-cache name="default" start="EAGER">
> <locking isolation="NONE" acquire-timeout="30000" concurrency-level="1000" striping="false"/>
> <transaction mode="NONE"/>
> <store class="org.infinispan.loaders.file.SingleFileCacheStore" passivation="false" preload="false" purge="false">
> <property name="location">${java.io.tmpdir}/single-file-cache-store</property>
> <property name="maxEntries">2</property>
> </store>
> </local-cache>
> </cache-container>
> </subsystem>
> {code}
> ...it is not possible to retrieve the cache entry from the cache store after server restart, as demonstrated by the following test:
> {code:java}
> RemoteCache<String, String> rc = rcm.getCache();
> rc.clear();
> assertNull(rc.get("k1"));
> rc.put("k1", "v1");
> rc.put("k2", "v2");
> rc.put("k3", "v3");
> assertEquals("v1", rc.get("k1"));
> assertEquals("v2", rc.get("k2"));
> assertEquals("v3", rc.get("k3"));
> controller.kill(CONTAINER);
> controller.start(CONTAINER);
> assertEquals("v2", rc.get("k2"));
> //^^^fails here - unable to find k2
> assertEquals("v3", rc.get("k3"));
> assertNull(rc.get("k1")); //maxEntries was 2, this entry should be lost as the oldest entries are removed
> controller.stop(CONTAINER);
> {code}
> Note that preload is set to false for this test. When I set it to true, the test passes.
> I tried to modify BaseCacheStoreFunctionalTest#testPreloadAndExpiry, disabling "preload" and retrieving the cache entries after server restart by calling cache.get() (not from the DataContainer), and the test passed (I ran SingleFileCacheStoreFunctionalTest, which extends BaseCacheStoreFunctionalTest). So it seems the bug is related to the server distribution.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira