[JBoss JIRA] (ISPN-6404) Add missing schemas into docs/schemas for ISPN
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-6404?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-6404:
----------------------------------
Description:
The ISPN server is missing some important schemas. We need to add them, since they are referenced in the configuration.
We need the following:
jboss-as-jgroups_1_0.xsd
jboss-as-jgroups_1_1.xsd
jboss-as-jgroups_1_2.xsd
jboss-as-jgroups_2_0.xsd
jboss-as-jgroups_3_0.xsd
jboss-infinispan-endpoint_5_2.xsd
jboss-infinispan-endpoint_6_0.xsd
jboss-infinispan-endpoint_7_0.xsd
jboss-infinispan-endpoint_7_2.xsd
jboss-infinispan-endpoint_8_0.xsd
jboss-infinispan-jgroups_7_0.xsd
jboss-infinispan-jgroups_8_0.xsd
jboss-infinispan-core_6_0.xsd
jboss-infinispan-core_7_1.xsd
jboss-infinispan-core_8_0.xsd
jboss-infinispan-core_8_2.xsd
jboss-infinispan-core_7_0.xsd
jboss-infinispan-core_7_2.xsd
jboss-infinispan-core_8_1.xsd
jboss-infinispan-core_9_0.xsd
jboss-as-cli_1_0.xsd
jboss-as-cli_1_1.xsd
jboss-as-cli_1_2.xsd
jboss-as-cli_2_0.xsd
jboss-as-config_1_0.xsd
jboss-as-config_1_1.xsd
jboss-as-config_1_2.xsd
jboss-as-config_1_3.xsd
jboss-as-config_1_4.xsd
jboss-as-config_1_5.xsd
jboss-as-config_2_0.xsd
jboss-as-config_2_1.xsd
jboss-as-datasources_1_0.xsd
jboss-as-datasources_1_1.xsd
jboss-as-datasources_1_2.xsd
jboss-as-jmx_1_0.xsd
jboss-as-jmx_1_1.xsd
jboss-as-jmx_1_2.xsd
jboss-as-jmx_1_3.xsd
jboss-as-logging_1_0.xsd
jboss-as-logging_1_1.xsd
jboss-as-logging_1_2.xsd
jboss-as-logging_1_3.xsd
jboss-as-logging_2_0.xsd
jboss-as-txn_1_0.xsd
jboss-as-txn_1_1.xsd
jboss-as-txn_1_2.xsd
jboss-as-txn_1_3.xsd
jboss-as-txn_1_4.xsd
jboss-as-txn_1_5.xsd
jboss-as-txn_2_0.xsd
jboss-as-xts_1_0.xsd
jboss-as-xts_2_0.xsd
wildfly-cli_3_0.xsd
wildfly-config_3_0.xsd
wildfly-config_4_0.xsd
module-1_0.xsd
module-1_1.xsd
module-1_2.xsd
module-1_3.xsd
module-1_5.xsd
jboss-as-deployment-scanner_1_0.xsd
jboss-as-deployment-scanner_1_1.xsd
jboss-as-deployment-scanner_2_0.xsd
wildfly-txn_3_0.xsd
wildfly-datasources_2_0.xsd
wildfly-datasources_3_0.xsd
wildfly-datasources_4_0.xsd
was: ISPN server is missing some important schemas (like {{jboss-as-infinispan_2_0.xsd}}). We need to add them since they are mentioned in the configuration.
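For context on why these files matter: the server configuration declares the namespaces these XSDs define, so any schema-aware tooling needs them on disk. Below is a minimal validation sketch using only JDK APIs; the docs/schemas path, the chosen schema versions, and the configuration file name are illustrative assumptions, not part of this issue.
{code:java}
import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.Source;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class SchemaCheck {
    public static void main(String[] args) throws Exception {
        File dir = new File("docs/schemas"); // where this issue wants the XSDs shipped
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);

        // A server configuration mixes several namespaces, so the root schema
        // and every referenced subsystem schema must all be available.
        // The exact versions below are illustrative.
        Source[] schemas = {
            new StreamSource(new File(dir, "jboss-as-config_1_5.xsd")),
            new StreamSource(new File(dir, "jboss-as-jgroups_3_0.xsd")),
            new StreamSource(new File(dir, "jboss-infinispan-core_8_2.xsd")),
            new StreamSource(new File(dir, "jboss-infinispan-endpoint_8_0.xsd")),
        };
        Schema schema = factory.newSchema(schemas);

        // Fails (or has nothing to check) when the XSDs are missing from the distribution.
        Validator validator = schema.newValidator();
        validator.validate(new StreamSource(new File("standalone/configuration/clustered.xml")));
        System.out.println("configuration validates against the bundled schemas");
    }
}
{code}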
> Add missing schemas into docs/schemas for ISPN
> ----------------------------------------------
>
> Key: ISPN-6404
> URL: https://issues.jboss.org/browse/ISPN-6404
> Project: Infinispan
> Issue Type: Enhancement
> Components: Build process, Server
> Reporter: Sebastian Łaskawiec
> Assignee: Sebastian Łaskawiec
>
> ISPN server is missing some important schemas. We need to add them since they are mentioned in the configuration.
[JBoss JIRA] (ISPN-6404) Add missing schemas into docs/schemas for ISPN
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6404?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec updated ISPN-6404:
--------------------------------------
Description: ISPN server is missing some important schemas (like {{jboss-as-infinispan_2_0.xsd}}). We need to add them since they are mentioned in the configuration. (was: ISPN server is missing some important schemas (like {{jboss-as-config_4_0.xsd}}). We need to add them since they are mentioned in the configuration.)
> Add missing schemas into docs/schemas for ISPN
> ----------------------------------------------
>
> Key: ISPN-6404
> URL: https://issues.jboss.org/browse/ISPN-6404
> Project: Infinispan
> Issue Type: Enhancement
> Components: Build process, Server
> Reporter: Sebastian Łaskawiec
> Assignee: Sebastian Łaskawiec
>
> ISPN server is missing some important schemas (like {{jboss-as-infinispan_2_0.xsd}}). We need to add them since they are mentioned in the configuration.
[JBoss JIRA] (ISPN-6443) Add ability to cancel an event in a listener
by Vincent Massol (JIRA)
[ https://issues.jboss.org/browse/ISPN-6443?page=com.atlassian.jira.plugin.... ]
Vincent Massol updated ISPN-6443:
---------------------------------
Description:
My use case is the following:
* I have some lengthy computation whose result I cache
* I don't want users to have to wait for the computation to finish
One idea is:
* Upon expiration (or eviction), I'd like to put the entry back in the cache and start a thread to perform a recomputation (which would put the recomputed value in the cache)
* The issue is that the cache holds a lock on the entry, so I can't put it back in the cache.
Thus one solution is to be able to cancel the expiration/eviction events (and thus not have to put the entry back in the cache).
Of course I'm very open to hearing other solutions to this! :) Thanks a lot
was:
My use case is the following:
* I have some lengthy computation whose result I cache
* I don't want users to have to wait for the computation to finish
One idea is:
* Upon expiration (or eviction), I'd like to put the entry back in the cache and start a thread to perform a recomputation (which would put the recomputed value in the cache)
* The issue is that the cache holds a lock on the entry, so I can't put it back in the cache.
Thus one solution is to be able to cancel the expiration/eviction events (and thus not have to put the entry back in the cache).
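For concreteness, here is a minimal sketch of the refresh-on-expiration idea from the description, assuming an embedded cache and Infinispan's {{@CacheEntryExpired}} listener; {{recompute()}} and the executor wiring are hypothetical stand-ins for the reporter's lengthy computation. The lock described above is exactly why the put happens on another thread instead of inside the callback.
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.infinispan.Cache;
import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachelistener.annotation.CacheEntryExpired;
import org.infinispan.notifications.cachelistener.event.CacheEntryExpiredEvent;

// Sketch of the workaround discussed above: on expiration, trigger an
// asynchronous recomputation and re-put, so callers never block on it.
@Listener
public class RefreshOnExpiration {

    private final ExecutorService refresher = Executors.newSingleThreadExecutor();

    @CacheEntryExpired
    public void onExpired(CacheEntryExpiredEvent<String, String> event) {
        String key = event.getKey();
        Cache<String, String> cache = event.getCache();
        // The entry is locked while this callback runs, which is the problem
        // the issue describes: a put() here would contend on that lock.
        // Deferring to another thread sidesteps it, at the cost of a window
        // where the entry is absent and callers may recompute anyway.
        refresher.submit(() -> cache.put(key, recompute(key)));
    }

    private String recompute(String key) {
        return "expensive-result-for-" + key; // placeholder for the real computation
    }
}
{code}
The listener would be registered with {{cache.addListener(new RefreshOnExpiration())}}; being able to cancel the event, as this issue requests, would remove the absent-entry window entirely.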
> Add ability to cancel an event in a listener
> --------------------------------------------
>
> Key: ISPN-6443
> URL: https://issues.jboss.org/browse/ISPN-6443
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core, Expiration
> Affects Versions: 8.2.0.Final
> Reporter: Vincent Massol
[JBoss JIRA] (ISPN-6443) Add ability to cancel an event in a listener
by Vincent Massol (JIRA)
[ https://issues.jboss.org/browse/ISPN-6443?page=com.atlassian.jira.plugin.... ]
Vincent Massol updated ISPN-6443:
---------------------------------
Description:
My use case is the following:
* I have some lengthy computation whose result I cache
* I don't want users to have to wait for the computation to finish
One idea is:
* Upon expiration (or eviction), I'd like to put the entry back in the cache and start a thread to perform a recomputation (which would put the recomputed value in the cache)
* The issue is that the cache holds a lock on the entry, so I can't put it back in the cache.
Thus one solution is to be able to cancel the expiration/eviction events (and thus not have to put the entry back in the cache).
was:
My use case is the following:
* I have some lengthy computation whose result I cache
* I don't want users to have to wait for the computation to finish
* Thus, upon expiration, I'd like to put the entry back in the cache and start a thread to perform a recomputation (which would put the recomputed value in the cache)
* The issue is that the cache holds a lock on the entry, so I can't put it back in the cache.
Thus one solution is to be able to cancel the expiration event (and thus not have to put the entry back in the cache).
> Add ability to cancel an event in a listener
> --------------------------------------------
>
> Key: ISPN-6443
> URL: https://issues.jboss.org/browse/ISPN-6443
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core, Expiration
> Affects Versions: 8.2.0.Final
> Reporter: Vincent Massol
[JBoss JIRA] (ISPN-6443) Add ability to cancel an event in a listener
by Vincent Massol (JIRA)
[ https://issues.jboss.org/browse/ISPN-6443?page=com.atlassian.jira.plugin.... ]
Vincent Massol commented on ISPN-6443:
--------------------------------------
See also https://developer.jboss.org/thread/268862
> Add ability to cancel an event in a listener
> --------------------------------------------
>
> Key: ISPN-6443
> URL: https://issues.jboss.org/browse/ISPN-6443
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core, Expiration
> Affects Versions: 8.2.0.Final
> Reporter: Vincent Massol
>
> My use case is the following:
> * I have some lengthy computation whose result I cache
> * I don't want users to have to wait for the computation to finish
> * Thus, upon expiration, I'd like to put the entry back in the cache and start a thread to perform a recomputation (which would put the recomputed value in the cache)
> * The issue is that the cache holds a lock on the entry, so I can't put it back in the cache.
> Thus one solution is to be able to cancel the expiration event (and thus not have to put the entry back in the cache).
[JBoss JIRA] (ISPN-6443) Add ability to cancel an event in a listener
by Vincent Massol (JIRA)
Vincent Massol created ISPN-6443:
------------------------------------
Summary: Add ability to cancel an event in a listener
Key: ISPN-6443
URL: https://issues.jboss.org/browse/ISPN-6443
Project: Infinispan
Issue Type: Enhancement
Components: Core, Expiration
Affects Versions: 8.2.0.Final
Reporter: Vincent Massol
My use case is the following:
* I have some lengthy computation whose result I cache
* I don't want users to have to wait for the computation to finish
* Thus, upon expiration, I'd like to put the entry back in the cache and start a thread to perform a recomputation (which would put the recomputed value in the cache)
* The issue is that the cache holds a lock on the entry, so I can't put it back in the cache.
Thus one solution is to be able to cancel the expiration event (and thus not have to put the entry back in the cache).
[JBoss JIRA] (ISPN-6402) Default GMS.join_timeout is too long
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-6402?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-6402:
-------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/4188
> Default GMS.join_timeout is too long
> ------------------------------------
>
> Key: ISPN-6402
> URL: https://issues.jboss.org/browse/ISPN-6402
> Project: Infinispan
> Issue Type: Task
> Components: Core, Server, Test Suite - Server
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Minor
>
> {{GMS.join_timeout}} is used by JGroups for several purposes:
> # Wait for {{FIND_INITIAL_MBRS}} responses. If other nodes are running, but they don't answer within {{join_timeout}} ms, the node will start a new partition by itself.
> # If no other nodes are running when the request is sent, but another node starts and sends its own discovery request within {{join_timeout}}, the initial cluster view will contain both nodes, but this isn't really useful in Infinispan (we have {{gcb.transport().initialClusterSize()}} instead).
> # Once a coordinator is located, the node sends a join request and waits for a response for {{join_timeout}} ms. After a timeout, the node re-sends the join request (up to a maximum of {{max_join_attempts}}, which defaults to 10).
> The default {{GMS.join_timeout}} in Infinispan is 15000, vs. 2000 in JGroups (actually 3000 in {{GMS}} itself, but 2000 in the example configurations).
> The higher timeout will only help us when a node is running, but it's inaccessible (e.g. because of a long GC) at the exact time a node is joining. I'd argue that applications that can tolerate multi-second pauses would be better served by {{gcb.transport().initialClusterSize(2)}} and/or an external discovery mechanism (e.g. {{FILE_PING}}, or something based on the WildFly domain controller). For most applications, the current default means just a 15s delay every time the cluster is (re)started.
> In particular, because our integration tests use the default configuration, it means a delay of 15s for every test that starts a cluster.
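As a concrete reference for the alternative mentioned above, here is a minimal sketch of waiting for an expected cluster size via the programmatic API. The member count of 2 and the 30-second timeout are illustrative values, and {{initialClusterTimeout}} is included on the assumption that it accompanies {{initialClusterSize}} in this builder.
{code:java}
import java.util.concurrent.TimeUnit;

import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class InitialClusterSizeExample {
    public static void main(String[] args) {
        GlobalConfigurationBuilder gcb = GlobalConfigurationBuilder.defaultClusteredBuilder();
        // Instead of relying on a long GMS.join_timeout, wait explicitly until
        // the cluster has the expected number of members before starting.
        // A single-node run will block and then fail once the timeout elapses.
        gcb.transport()
           .initialClusterSize(2)                        // illustrative value
           .initialClusterTimeout(30, TimeUnit.SECONDS); // give up after 30s
        DefaultCacheManager manager = new DefaultCacheManager(gcb.build());
        System.out.println("members: " + manager.getMembers());
        manager.stop();
    }
}
{code}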
[JBoss JIRA] (ISPN-6437) InfinispanLock.LockPlaceHolder sometimes doesn't invoke its listeners
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-6437?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-6437:
-------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> InfinispanLock.LockPlaceHolder sometimes doesn't invoke its listeners
> ---------------------------------------------------------------------
>
> Key: ISPN-6437
> URL: https://issues.jboss.org/browse/ISPN-6437
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Test Suite - Core
> Affects Versions: 8.2.0.Final
> Reporter: Dan Berindei
> Assignee: Pedro Ruivo
> Labels: testsuite_stability
> Fix For: 9.0.0.Alpha1
>
> Attachments: LocalTopKeyTest.log.gz, LocalTopKeyTest_pr_pruivo_t_6437_refactor_20160328.log
>
>
> When {{InfinispanLock.LockPlaceHolder.lock()}} times out in the {{await()}} call, it doesn't CAS the state and it doesn't run the listeners.
> Listeners are used by the extended statistics module, and missed invocations cause random failures in {{LocalTopKeyTest}}:
> {noformat}
> 17:44:50,412 TRACE (testng-LocalTopKeyTest:[___defaultcache]) [DefaultLockManager] Lock key=key for owner=GlobalTransaction:<null>:34:local. timeout=100 (MILLISECONDS)
> 17:44:50,412 TRACE (testng-LocalTopKeyTest:[___defaultcache]) [InfinispanLock] Acquire lock for GlobalTransaction:<null>:34:local. Timeout=100 (MILLISECONDS)
> 17:44:50,412 TRACE (testng-LocalTopKeyTest:[___defaultcache]) [InfinispanLock] Created a new one: LockPlaceHolder{lockState=WAITING, owner=GlobalTransaction:<null>:34:local}
> 17:44:50,412 TRACE (testng-LocalTopKeyTest:[___defaultcache]) [InfinispanLock] Try acquire. Next in queue=LockPlaceHolder{lockState=WAITING, owner=GlobalTransaction:<null>:34:local}. Current=LockPlaceHolder{lockState=ACQUIRED, owner=GlobalTransaction:<null>:33:local}
> 17:44:50,412 TRACE (testng-LocalTopKeyTest:[___defaultcache]) [InfinispanLock] Unable to acquire. Lock is held.
> 17:44:50,515 ERROR (testng-LocalTopKeyTest:[___defaultcache]) [InvocationContextInterceptor] ISPN000136: Error executing command VersionedPrepareCommand, writing keys [key]
> org.infinispan.util.concurrent.TimeoutException: ISPN000299: Unable to acquire lock after 100 milliseconds for key key and requestor GlobalTransaction:<null>:34:local. Lock is held by GlobalTransaction:<null>:33:local
> at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.lock(DefaultLockManager.java:236) ~[infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.lockAllAndRecord(AbstractLockingInterceptor.java:200) ~[infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.interceptors.locking.AbstractTxLockingInterceptor.checkPendingAndLockAllKeys(AbstractTxLockingInterceptor.java:200) ~[infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.interceptors.locking.AbstractTxLockingInterceptor.lockAllOrRegisterBackupLock(AbstractTxLockingInterceptor.java:166) ~[infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.interceptors.locking.OptimisticLockingInterceptor.visitPrepareCommand(OptimisticLockingInterceptor.java:70) ~[infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:176) ~[infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99) ~[infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.interceptors.TxInterceptor.invokeNextInterceptorAndVerifyTransaction(TxInterceptor.java:153) ~[infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.interceptors.TxInterceptor.visitPrepareCommand(TxInterceptor.java:140) ~[infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:176) ~[infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99) ~[infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.stats.topK.LocalTopKeyTest$PrepareCommandBlocker.visitPrepareCommand(LocalTopKeyTest.java:229) ~[test-classes/:?]
> at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:176) ~[infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99) ~[infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.stats.topK.CacheUsageInterceptor.visitPrepareCommand(CacheUsageInterceptor.java:78) ~[classes/:?]
> at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:176) ~[infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99) ~[infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:113) ~[infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.commands.AbstractVisitor.visitPrepareCommand(AbstractVisitor.java:112) ~[infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:176) ~[infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99) ~[infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:110) [infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:79) [infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.commands.AbstractVisitor.visitPrepareCommand(AbstractVisitor.java:112) [infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:176) [infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:335) [infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.transaction.impl.TransactionCoordinator.prepare(TransactionCoordinator.java:121) [infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.transaction.impl.TransactionCoordinator.prepare(TransactionCoordinator.java:104) [infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.transaction.xa.TransactionXaAdapter.commit(TransactionXaAdapter.java:111) [infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at com.arjuna.ats.internal.jta.resources.arjunacore.XAResourceRecord.topLevelOnePhaseCommit(XAResourceRecord.java:698) [narayana-jta-5.0.4.Final.jar:5.0.4.Final (revision: b4060)]
> at com.arjuna.ats.arjuna.coordinator.BasicAction.onePhaseCommit(BasicAction.java:2364) [narayana-jta-5.0.4.Final.jar:5.0.4.Final (revision: b4060)]
> at com.arjuna.ats.arjuna.coordinator.BasicAction.End(BasicAction.java:1518) [narayana-jta-5.0.4.Final.jar:5.0.4.Final (revision: b4060)]
> at com.arjuna.ats.arjuna.coordinator.TwoPhaseCoordinator.end(TwoPhaseCoordinator.java:96) [narayana-jta-5.0.4.Final.jar:5.0.4.Final (revision: b4060)]
> at com.arjuna.ats.arjuna.AtomicAction.commit(AtomicAction.java:162) [narayana-jta-5.0.4.Final.jar:5.0.4.Final (revision: b4060)]
> at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.commitAndDisassociate(TransactionImple.java:1200) [narayana-jta-5.0.4.Final.jar:5.0.4.Final (revision: b4060)]
> at com.arjuna.ats.internal.jta.transaction.arjunacore.BaseTransaction.commit(BaseTransaction.java:126) [narayana-jta-5.0.4.Final.jar:5.0.4.Final (revision: b4060)]
> at org.infinispan.cache.impl.CacheImpl.tryCommit(CacheImpl.java:1679) [infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.cache.impl.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1636) [infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.cache.impl.CacheImpl.putInternal(CacheImpl.java:1163) [infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:1153) [infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:1699) [infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:241) [infinispan-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> at org.infinispan.stats.topK.LocalTopKeyTest.testLockFailed(LocalTopKeyTest.java:84) [test-classes/:?]
> 17:44:50,581 ERROR (testng-LocalTopKeyTest:[]) [UnitTestTestNGListener] Test testLockFailed(org.infinispan.stats.topK.LocalTopKeyTest) failed.
> java.lang.AssertionError: Wrong number of locked keys expected [2] but found [1]
> at org.testng.Assert.fail(Assert.java:94) ~[testng-6.8.8.jar:?]
> at org.testng.Assert.failNotEquals(Assert.java:494) ~[testng-6.8.8.jar:?]
> at org.testng.Assert.assertEquals(Assert.java:123) ~[testng-6.8.8.jar:?]
> at org.testng.Assert.assertEquals(Assert.java:265) ~[testng-6.8.8.jar:?]
> at org.infinispan.stats.topK.LocalTopKeyTest.assertTopKeyLocked(LocalTopKeyTest.java:188) ~[test-classes/:?]
> at org.infinispan.stats.topK.LocalTopKeyTest.assertLockInformation(LocalTopKeyTest.java:204) ~[test-classes/:?]
> at org.infinispan.stats.topK.LocalTopKeyTest.testLockFailed(LocalTopKeyTest.java:96) ~[test-classes/:?]
> {noformat}
> {{Log.unableToAcquireLock}} doesn't log the cause exception, but it seems unlikely that the timeout came from somewhere else.
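The shape of the fix is that every exit path from {{lock()}}, including the timeout path, must go through the same CAS-then-notify sequence. A minimal, self-contained sketch of that pattern follows, with hypothetical names; it is not the actual {{InfinispanLock}} code.
{code:java}
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the pattern behind the fix (hypothetical names): the timeout
// path performs the same CAS + listener notification as the acquire path,
// so listeners (e.g. the extended statistics module) always run.
public class LockPlaceHolderSketch {
    enum State { WAITING, ACQUIRED, TIMED_OUT }

    private final AtomicReference<State> state = new AtomicReference<>(State.WAITING);
    private final CountDownLatch acquired = new CountDownLatch(1);
    private final List<Runnable> listeners = new CopyOnWriteArrayList<>();

    public void addListener(Runnable listener) {
        listeners.add(listener);
    }

    /** Called by the owning lock queue when this placeholder reaches the front. */
    public void grant() {
        if (state.compareAndSet(State.WAITING, State.ACQUIRED)) {
            notifyListeners();
            acquired.countDown();
        }
    }

    public void lock(long timeout, TimeUnit unit) throws InterruptedException, TimeoutException {
        if (!acquired.await(timeout, unit)) {
            // The bug: returning here without CASing the state or running the
            // listeners. Doing both keeps every observer consistent.
            if (state.compareAndSet(State.WAITING, State.TIMED_OUT)) {
                notifyListeners();
                throw new TimeoutException("lock not acquired within timeout");
            }
            // Otherwise grant() won the race just after the wait expired:
            // the lock is actually ours, so fall through as acquired.
        }
    }

    private void notifyListeners() {
        listeners.forEach(Runnable::run);
    }
}
{code}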
[JBoss JIRA] (ISPN-6442) NullPointerException in HotRodDecoder.channelActive
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-6442?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-6442:
-------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/4187
> NullPointerException in HotRodDecoder.channelActive
> ---------------------------------------------------
>
> Key: ISPN-6442
> URL: https://issues.jboss.org/browse/ISPN-6442
> Project: Infinispan
> Issue Type: Bug
> Components: Server, Test Suite - Server
> Affects Versions: 8.2.0.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Labels: testsuite_stability
> Fix For: 9.0.0.Alpha1
>
>
> {{HotRodServer.startInternal}} first starts the Netty transport (with {{super.startInternal()}}) and only then initializes the {{clientListenerRegistry}} field. That means the server can accept a request before {{clientListenerRegistry}} is initialized, causing an NPE in {{HotRodDecoder.channelActive()}}.
> Visible as random failures in {{DistTopologyChangeUnderLoadSingleOwnerTest.testRestartServerWhilePutting}}:
> {noformat}
> 00:10:54,718 ERROR (HotRodServerWorker-408-1) [CacheDecodeContext] ISPN005009: Unexpected error before any request parameters read java.lang.NullPointerException
> at org.infinispan.server.hotrod.HotRodDecoder.channelActive(HotRodDecoder.scala:284)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:183)
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelActive(AbstractChannelHandlerContext.java:169)
> at io.netty.channel.DefaultChannelPipeline.fireChannelActive(DefaultChannelPipeline.java:817)
> at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:453)
> at io.netty.channel.AbstractChannel$AbstractUnsafe.access$100(AbstractChannel.java:377)
> at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:423)
> at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
> java.util.concurrent.ExecutionException: org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=46498 returned server error (status=0x85): java.lang.NullPointerException
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at org.infinispan.client.hotrod.DistTopologyChangeUnderLoadSingleOwnerTest.testRestartServerWhilePutting(DistTopologyChangeUnderLoadSingleOwnerTest.java:64)
> Caused by: org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=46498 returned server error (status=0x85): java.lang.NullPointerException
> at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343)
> at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132)
> at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118)
> at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56)
> at org.infinispan.client.hotrod.impl.operations.AbstractKeyValueOperation.sendPutOperation(AbstractKeyValueOperation.java:56)
> at org.infinispan.client.hotrod.impl.operations.PutOperation.executeOperation(PutOperation.java:32)
> at org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation.execute(RetryOnFailureOperation.java:54)
> at org.infinispan.client.hotrod.impl.RemoteCacheImpl.put(RemoteCacheImpl.java:268)
> at org.infinispan.client.hotrod.impl.RemoteCacheSupport.put(RemoteCacheSupport.java:79)
> at org.infinispan.client.hotrod.DistTopologyChangeUnderLoadSingleOwnerTest$PutHammer.call(DistTopologyChangeUnderLoadSingleOwnerTest.java:76)
> at org.infinispan.client.hotrod.DistTopologyChangeUnderLoadSingleOwnerTest$PutHammer.call(DistTopologyChangeUnderLoadSingleOwnerTest.java:67)
> at org.infinispan.test.AbstractInfinispanTest$LoggingCallable.call(AbstractInfinispanTest.java:478)
> ... 4 more
> {noformat}
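This is a start-ordering race, and the straightforward shape of a fix is to publish every field a handler can touch before the transport starts accepting connections. A minimal sketch with hypothetical names follows (the real {{HotRodServer}} and {{HotRodDecoder}} are Scala classes); it is the ordering, not the types, that matters.
{code:java}
// Sketch of the ordering fix (hypothetical names): any field that a handler
// callback such as channelActive() may dereference must be assigned before
// the transport binds its socket and begins accepting connections.
public class ServerStartOrdering {

    static class ClientListenerRegistry { /* details elided */ }

    static class Transport {
        void start() {
            // binds the port; from here on, channelActive() can fire at any time
        }
    }

    private volatile ClientListenerRegistry clientListenerRegistry;
    private final Transport transport = new Transport();

    // Buggy shape: a client connecting between the two statements observes
    // clientListenerRegistry == null, which is the NPE in the report above.
    void startInternalBuggy() {
        transport.start();
        clientListenerRegistry = new ClientListenerRegistry();
    }

    // Fixed shape: initialize dependencies first, then open the door.
    void startInternalFixed() {
        clientListenerRegistry = new ClientListenerRegistry();
        transport.start();
    }
}
{code}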