[JBoss JIRA] (ISPN-6968) Clarify Query DSL error message regarding number format
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-6968?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-6968:
--------------------------------
Affects Version/s: 8.0.0.Final
> Clarify Query DSL error message regarding number format
> -------------------------------------------------------
>
> Key: ISPN-6968
> URL: https://issues.jboss.org/browse/ISPN-6968
> Project: Infinispan
> Issue Type: Enhancement
> Affects Versions: 8.0.0.Final
> Reporter: Adrian Nistor
> Assignee: Adrian Nistor
> Fix For: 9.0.0.Beta1, 9.0.0.Final
>
>
> The query:
> {code}
> Query query = queryFactory.from(User.class)
> .select(property("name"), count("age"))
> .having("age").gte(2.3)
> .toBuilder().groupBy("name")
> .build();
> List<Object[]> list = query.list();
> {code}
> will result in an error:
> {code}
> org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=13 returned server error (status=0x85): org.hibernate.hql.ParsingException: ISPN028505: Invalid numeric literal '2.3'
> at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343)
> {code}
> This is a bit confusing because it does not clearly state that an integer is expected.
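> For illustration, a sketch of a working variant, assuming the {{age}} property of {{User}} is declared as an integer type (which is why the decimal literal 2.3 is rejected): the literal passed to {{gte}} must match the property type.
> {code}
> // Sketch only: assumes User.age is an integer field, so an integer
> // literal is used in the comparison instead of the double 2.3.
> Query query = queryFactory.from(User.class)
>       .select(property("name"), count("age"))
>       .having("age").gte(2)
>       .toBuilder().groupBy("name")
>       .build();
> List<Object[]> list = query.list();
> {code}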
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6968) Clarify Query DSL error message regarding number format
by Adrian Nistor (JIRA)
Adrian Nistor created ISPN-6968:
-----------------------------------
Summary: Clarify Query DSL error message regarding number format
Key: ISPN-6968
URL: https://issues.jboss.org/browse/ISPN-6968
Project: Infinispan
Issue Type: Enhancement
Reporter: Adrian Nistor
Assignee: Adrian Nistor
Fix For: 9.0.0.Beta1, 9.0.0.Final
The query:
{code}
Query query = queryFactory.from(User.class)
.select(property("name"), count("age"))
.having("age").gte(2.3)
.toBuilder().groupBy("name")
.build();
List<Object[]> list = query.list();
{code}
will result in an error:
{code}
org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=13 returned server error (status=0x85): org.hibernate.hql.ParsingException: ISPN028505: Invalid numeric literal '2.3'
at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343)
{code}
This is a bit confusing because it does not clearly state that an integer is expected.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6968) Clarify Query DSL error message regarding number format
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-6968?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-6968:
--------------------------------
Status: Open (was: New)
> Clarify Query DSL error message regarding number format
> -------------------------------------------------------
>
> Key: ISPN-6968
> URL: https://issues.jboss.org/browse/ISPN-6968
> Project: Infinispan
> Issue Type: Enhancement
> Affects Versions: 8.0.0.Final
> Reporter: Adrian Nistor
> Assignee: Adrian Nistor
> Fix For: 9.0.0.Beta1, 9.0.0.Final
>
>
> The query:
> {code}
> Query query = queryFactory.from(User.class)
> .select(property("name"), count("age"))
> .having("age").gte(2.3)
> .toBuilder().groupBy("name")
> .build();
> List<Object[]> list = query.list();
> {code}
> will result in an error:
> {code}
> org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=13 returned server error (status=0x85): org.hibernate.hql.ParsingException: ISPN028505: Invalid numeric literal '2.3'
> at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343)
> {code}
> This is a bit confusing because it does not clearly state that an integer is expected.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6795) ClusteredGetResponseValidityFilter swallows exceptions
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-6795?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-6795:
------------------------------------
For the record, remote gets were never intended to report exceptions from the other side. The idea was that if there's an exception from one target, we'd still want to wait for a response from the others, and if none of the targets return a valid response, `invokeRemotely` would throw an `RpcException`. So this is really an enhancement request, not a bug.
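For illustration, the filter here is a JGroups {{RspFilter}}; a minimal sketch of the behaviour described above (an exceptional response is simply not acceptable, so the dispatcher keeps waiting for the remaining targets) might look like the following. This is an illustration of the idea, not the actual ClusteredGetResponseValidityFilter implementation:
{code}
import org.jgroups.Address;
import org.jgroups.blocks.RspFilter;

// Illustrative sketch only: reject exceptional responses so the dispatcher
// keeps waiting for the remaining targets, and stop waiting as soon as one
// valid response has been accepted.
public class FirstValidResponseFilter implements RspFilter {

   private volatile boolean validResponseReceived;

   @Override
   public boolean isAcceptable(Object response, Address sender) {
      // An exception from one target does not count as a valid response.
      boolean valid = response != null && !(response instanceof Throwable);
      if (valid) {
         validResponseReceived = true;
      }
      return valid;
   }

   @Override
   public boolean needMoreResponses() {
      // Keep collecting until a valid response arrives; if none ever does,
      // the caller times out and invokeRemotely throws RpcException.
      return !validResponseReceived;
   }
}
{code}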
> ClusteredGetResponseValidityFilter swallows exceptions
> ------------------------------------------------------
>
> Key: ISPN-6795
> URL: https://issues.jboss.org/browse/ISPN-6795
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.0.0.Alpha2
> Reporter: Radim Vansa
> Priority: Minor
>
> ClusteredGetResponseValidityFilter does not accept exceptional responses. Therefore, if there's an exception on the remote side while executing ClusteredGetCommand, it's swallowed and a TimeoutException is thrown instead.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6940) Unavailable servers with Replication timeout exception
by Paul Ferraro (JIRA)
[ https://issues.jboss.org/browse/ISPN-6940?page=com.atlassian.jira.plugin.... ]
Paul Ferraro commented on ISPN-6940:
------------------------------------
[~bsikora] I'm not on the Infinispan team, so you shouldn't assign this to me. It's always best to let JIRA auto-assign; the component lead is responsible for triage/prioritization/delegation of issues. Looking at this issue, I'm wondering if you meant to file this against WFLY instead? If there is indeed an underlying ISPN issue, I'll file a corresponding ISPN JIRA.
> Unavailable servers with Replication timeout exception
> ------------------------------------------------------
>
> Key: ISPN-6940
> URL: https://issues.jboss.org/browse/ISPN-6940
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 8.2.3.Final
> Reporter: Bogdan Sikora
> Priority: Critical
> Attachments: clusterbench.war
>
>
> Exception in the log after every request:
> {noformat}
> ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (default task-1) ISPN000136: Error executing command GetKeyValueCommand, writing keys []: org.infinispan.util.concurrent.TimeoutException: Replication timeout for jboss-eap-7.1-1
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.checkRsp(JGroupsTransport.java:801)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$invokeRemotelyAsync$1(JGroupsTransport.java:642)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.staggeredProcessNext(CommandAwareRpcDispatcher.java:375)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.lambda$processCallsStaggered$3(CommandAwareRpcDispatcher.java:357)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> 09:46:25,427 ERROR [io.undertow.request] (default task-1) UT005023: Exception handling request to /clusterbench/jvmroute: org.infinispan.util.concurrent.TimeoutException: Replication timeout for jboss-eap-7.1-1
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.checkRsp(JGroupsTransport.java:801)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$invokeRemotelyAsync$1(JGroupsTransport.java:642)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.staggeredProcessNext(CommandAwareRpcDispatcher.java:375)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.lambda$processCallsStaggered$3(CommandAwareRpcDispatcher.java:357)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> But it never disabled the whole server: for every request, EAP generates this pretty page with the replication timeout exception
> {noformat}
> <html><head><title>ERROR</title> [inline CSS and base64-encoded icon omitted] </head><body>
> Error processing request
> Context Path: /clusterbench
> Servlet Path: /jvmroute
> Path Info: null
> Query String: null
> Stack Trace:
> org.infinispan.util.concurrent.TimeoutException: Replication timeout for jboss-eap-7.1-1
> [stack trace identical to the one above, from JGroupsTransport.checkRsp(JGroupsTransport.java:801) down to java.lang.Thread.run(Thread.java:745)]
> </body></html>
> {noformat}
> Reproducing:
> # Unzip jboss-eap-7.1.0.DR2.zip
> # Copy it recursively (cp -R) so you have 3 jboss-eap folders (jboss-eap-7.1, jboss-eap-7.1-2, jboss-eap-7.1-3)
> # Open the configuration file standalone-ha.xml
> # Add an offset to the server ports (management-http, management-https, ajp, http, https) to avoid conflicts (see the example after this list)
> # Add system properties with the jvmRoute/node names you want, e.g.:
> {noformat}
> <system-properties>
> <property name="jboss.mod_cluster.jvmRoute" value="jboss-eap-7.1-3"/>
> <property name="jboss.node.name" value="jboss-eap-7.1-3"/>
> </system-properties>
> {noformat}
> # You should probably change the bind address from localhost to a real IP to keep the firewall from interfering
> # Copy clusterbench.war to standalone/deployments
> # Start the servers with standalone-ha.xml
> # curl {{${YOUR_SET_IP_ADDRESS}:${SERVER_PORT}/clusterbench/jvmroute}}
> # One of the servers should return 500; if not, restart one of them
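> As an illustration of step 4, assuming the default {{standard-sockets}} socket-binding-group of standalone-ha.xml, all server ports can be shifted at once via the port-offset attribute (the value 100 is just an example):
> {noformat}
> <socket-binding-group name="standard-sockets" default-interface="public" port-offset="100">
> {noformat}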
> In my case the third server was unable to respond; when I turned off the second one, the third came back to life. After restarting the second one, the third stopped responding again. Sometimes more than one server stops responding; I have seen a scenario where the whole cluster stopped responding.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6967) Add a possibility to manually start the caches after a graceful shutdown if nodes are missing
by Wolf-Dieter Fink (JIRA)
Wolf-Dieter Fink created ISPN-6967:
--------------------------------------
Summary: Add a possibility to manually start the caches after a graceful shutdown if nodes are missing
Key: ISPN-6967
URL: https://issues.jboss.org/browse/ISPN-6967
Project: Infinispan
Issue Type: Feature Request
Reporter: Wolf-Dieter Fink
If a cluster of ISPN nodes was shut down in a graceful manner, it will wait for all nodes when restarted.
Assuming that one or more nodes are no longer available, it should be possible to enable the caches without having all nodes, accepting that data might be lost if >= numOwners nodes are missing.
It is possible to restart the cluster by removing the 'graceful' information (unique node IDs) from the filesystem of each node. But in this case more data is lost, because the data of the first node will be rebalanced and shared across the whole cluster after restart.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6966) Event logger errors in preloading events ignored
by Galder Zamarreño (JIRA)
Galder Zamarreño created ISPN-6966:
--------------------------------------
Summary: Event logger errors in preloading events ignored
Key: ISPN-6966
URL: https://issues.jboss.org/browse/ISPN-6966
Project: Infinispan
Issue Type: Bug
Affects Versions: 8.2.4.Final, 9.0.0.Alpha4
Reporter: Galder Zamarreño
If any errors arise from event logger calls in org.infinispan.topology.ClusterTopologyManagerImpl$ClusterViewListener, these are ignored and simply logged to the standard output, e.g.
{code}
08:09:12,228 ERROR [org.infinispan.notifications.cachemanagerlistener.CacheManagerNotifierImpl] (Incoming-2,node1) ISPN000405: Caught exception while invoking a cache manager listener!: org.infinispan.commons.CacheListenerException: ISPN000280: Caught exception [org.infinispan.persistence.spi.PersistenceException] while invoking method [public void org.infinispan.topology.ClusterTopologyManagerImpl$ClusterViewListener.handleViewChange(org.infinispan.notifications.cachemanagerlistener.event.ViewChangedEvent)] on listener instance: org.infinispan.topology.ClusterTopologyManagerImpl$ClusterViewListener@79b19ac9
at org.infinispan.notifications.impl.AbstractListenerImpl$ListenerInvocationImpl$1.run(AbstractListenerImpl.java:302)
at org.infinispan.util.concurrent.WithinThreadExecutor.execute(WithinThreadExecutor.java:20)
at org.infinispan.notifications.impl.AbstractListenerImpl$ListenerInvocationImpl.invoke(AbstractListenerImpl.java:320)
at org.infinispan.notifications.cachemanagerlistener.CacheManagerNotifierImpl.invokeListener(CacheManagerNotifierImpl.java:132)
at org.infinispan.notifications.cachemanagerlistener.CacheManagerNotifierImpl.notifyViewChange(CacheManagerNotifierImpl.java:88)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport$NotifyViewChange.emitNotification(JGroupsTransport.java:841)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.viewAccepted(JGroupsTransport.java:917)
at org.jgroups.blocks.MessageDispatcher.handleUpEvent(MessageDispatcher.java:618)
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:666)
at org.jgroups.JChannel.up(JChannel.java:738)
at org.jgroups.fork.ForkProtocolStack.up(ForkProtocolStack.java:124)
at org.jgroups.stack.Protocol.up(Protocol.java:380)
at org.jgroups.protocols.FORK.up(FORK.java:118)
at org.jgroups.protocols.FRAG2.up(FRAG2.java:165)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:390)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:390)
at org.jgroups.protocols.pbcast.GMS.installView(GMS.java:730)
at org.jgroups.protocols.pbcast.ParticipantGmsImpl.handleViewChange(ParticipantGmsImpl.java:140)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:917)
at org.jgroups.stack.Protocol.up(Protocol.java:418)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:294)
at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:480)
at org.jgroups.protocols.pbcast.NAKACK2.deliverBatch(NAKACK2.java:987)
at org.jgroups.protocols.pbcast.NAKACK2.removeAndPassUp(NAKACK2.java:917)
at org.jgroups.protocols.pbcast.NAKACK2.handleMessage(NAKACK2.java:851)
at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:611)
at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:155)
at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:200)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:310)
at org.jgroups.protocols.MERGE3.up(MERGE3.java:285)
at org.jgroups.protocols.Discovery.up(Discovery.java:296)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1601)
at org.jgroups.protocols.TP$SingleMessageHandler.run(TP.java:1817)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.infinispan.persistence.spi.PersistenceException: Execution exception!
at org.infinispan.persistence.file.SingleFileStore.process(SingleFileStore.java:549)
at org.infinispan.persistence.manager.PersistenceManagerImpl.preload(PersistenceManagerImpl.java:257)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:168)
at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:867)
at org.infinispan.factories.AbstractComponentRegistry.invokeStartMethods(AbstractComponentRegistry.java:633)
at org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:622)
at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:547)
at org.infinispan.factories.ComponentRegistry.start(ComponentRegistry.java:231)
at org.infinispan.cache.impl.CacheImpl.start(CacheImpl.java:808)
at org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:637)
at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:588)
at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:453)
at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:439)
at org.infinispan.server.eventlogger.ServerEventLogger.getEventCache(ServerEventLogger.java:53)
at org.infinispan.server.eventlogger.ServerEventLogger.eventLog(ServerEventLogger.java:69)
at org.infinispan.server.eventlogger.ServerEventLogger.log(ServerEventLogger.java:61)
at org.infinispan.util.logging.events.EventLogger.info(EventLogger.java:40)
at org.infinispan.topology.ClusterTopologyManagerImpl.lambda$logNodeLeft$464(ClusterTopologyManagerImpl.java:756)
at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
at org.infinispan.topology.ClusterTopologyManagerImpl.logNodeLeft(ClusterTopologyManagerImpl.java:756)
at org.infinispan.topology.ClusterTopologyManagerImpl.access$400(ClusterTopologyManagerImpl.java:76)
at org.infinispan.topology.ClusterTopologyManagerImpl$ClusterViewListener.handleViewChange(ClusterTopologyManagerImpl.java:720)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.infinispan.notifications.impl.AbstractListenerImpl$ListenerInvocationImpl$1.run(AbstractListenerImpl.java:297)
... 35 more
Caused by: java.util.concurrent.ExecutionException: org.infinispan.persistence.spi.PersistenceException: org.infinispan.commons.CacheException: Error reading from input to find externalizer
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.infinispan.executors.ExecutorAllCompletionService.pollUntilEmpty(ExecutorAllCompletionService.java:48)
at org.infinispan.executors.ExecutorAllCompletionService.submit(ExecutorAllCompletionService.java:32)
at org.infinispan.persistence.file.SingleFileStore.process(SingleFileStore.java:531)
... 73 more
Caused by: org.infinispan.persistence.spi.PersistenceException: org.infinispan.commons.CacheException: Error reading from input to find externalizer
at org.infinispan.marshall.core.MarshalledEntryImpl.unmarshall(MarshalledEntryImpl.java:116)
at org.infinispan.marshall.core.MarshalledEntryImpl.getValue(MarshalledEntryImpl.java:61)
at org.infinispan.persistence.manager.PersistenceManagerImpl$1.processEntry(PersistenceManagerImpl.java:265)
at org.infinispan.persistence.file.SingleFileStore$2.call(SingleFileStore.java:537)
at org.infinispan.persistence.file.SingleFileStore$2.call(SingleFileStore.java:531)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.infinispan.util.concurrent.WithinThreadExecutor.execute(WithinThreadExecutor.java:20)
at java.util.concurrent.ExecutorCompletionService.submit(ExecutorCompletionService.java:181)
at org.infinispan.executors.ExecutorAllCompletionService.submit(ExecutorAllCompletionService.java:31)
... 74 more
Caused by: org.infinispan.commons.CacheException: Error reading from input to find externalizer
at org.infinispan.marshall.core.internal.InternalExternalizerTable.findReadExternalizer(InternalExternalizerTable.java:281)
at org.infinispan.marshall.core.internal.BytesObjectInput.readObject(BytesObjectInput.java:31)
at org.infinispan.server.eventlogger.ServerEventImpl$Externalizer.readObject(ServerEventImpl.java:128)
at org.infinispan.server.eventlogger.ServerEventImpl$Externalizer.readObject(ServerEventImpl.java:110)
at org.infinispan.marshall.core.internal.InternalExternalizerTable$ForeignExternalizerAdapter.readObject(InternalExternalizerTable.java:495)
at org.infinispan.marshall.core.internal.InternalMarshaller.objectFromObjectInput(InternalMarshaller.java:90)
at org.infinispan.marshall.core.internal.InternalMarshaller.objectFromByteBuffer(InternalMarshaller.java:127)
at org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:99)
at org.infinispan.marshall.core.MarshalledEntryImpl.unmarshall(MarshalledEntryImpl.java:114)
... 84 more
Caused by: org.infinispan.commons.CacheException: Unknown externalizer type: 115
at org.infinispan.marshall.core.internal.InternalExternalizerTable.findReadExternalizer(InternalExternalizerTable.java:242)
... 92 more
{code}
This is a general problem with listeners, but the question here is whether the Cache/CacheManager should fail to start in such a situation. Right now that causes no issues.
By the way, the actual error was introduced by my changes for ISPN-6906 and has now been fixed.
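For illustration, a minimal sketch of the listener failure path; the listener below is hypothetical and only simulates the problem (the annotations are the regular Infinispan listener API):
{code}
import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachemanagerlistener.annotation.ViewChanged;
import org.infinispan.notifications.cachemanagerlistener.event.ViewChangedEvent;

// Hypothetical listener simulating the failure path: the exception thrown
// here is caught by the cache manager notifier and logged as ISPN000405
// instead of being propagated to whatever triggered the view change, so it
// never fails the Cache/CacheManager start.
@Listener
public class FailingViewListener {

   @ViewChanged
   public void handleViewChange(ViewChangedEvent event) {
      throw new IllegalStateException("simulated listener failure");
   }
}
{code}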
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)