[JBoss JIRA] (ISPN-6749) Disable SiteProviderTopologyChangeTest.testXSiteSTDuringSiteMasterLeave
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-6749?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-6749:
------------------------------
Component/s: Test Suite - Core
> Disable SiteProviderTopologyChangeTest.testXSiteSTDuringSiteMasterLeave
> -----------------------------------------------------------------------
>
> Key: ISPN-6749
> URL: https://issues.jboss.org/browse/ISPN-6749
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Reporter: Pedro Ruivo
> Assignee: Pedro Ruivo
>
> Two sites: LON (nodes A, B, C) and NYC (nodes D, E, F). Nodes A and D are the site masters of LON and NYC, respectively.
> Site LON is about to push state to site NYC. The push-state request is initiated on node B while node A is leaving the cluster. To start the push, node B sends a START_RECEIVE command to the NYC site:
> {noformat}
> About to send to backups [NYC (sync, timeout=2000)], command XSiteStateTransferControlCommand{control=START_RECEIVE, siteName='null', statusOk=false, cacheName='___defaultcache'}
> SiteProviderTopologyChangeTest-NodeB-48243: invoking unicast RPC [req-id=98] on SiteMaster(NYC)
> SiteProviderTopologyChangeTest-NodeB-48243: forwarding message to final destination SiteMaster(NYC) to the current coordinator
> SiteProviderTopologyChangeTest-NodeB-48243: sending msg to SiteProviderTopologyChangeTest-NodeA-41240, src=SiteProviderTopologyChangeTest-NodeB-48243, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=98, rsp_expected=true, RELAY2: DATA [dest=SiteMaster(NYC), sender=SiteProviderTopologyChangeTest-NodeB-48243:LON], UNICAST3: DATA, seqno=32, TP: [cluster_name=ISPN(SITE LON)]
> {noformat}
> The message is forwarded to node A (site master of LON), which sends it to node D (site master of NYC):
> {noformat}
> SiteProviderTopologyChangeTest-NodeA-41240: received [dst: SiteProviderTopologyChangeTest-NodeA-41240, src: SiteProviderTopologyChangeTest-NodeB-48243 (4 headers), size=29 bytes, flags=OOB|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=98, rsp_expected=true, RELAY2: DATA [dest=SiteMaster(NYC), sender=SiteProviderTopologyChangeTest-NodeB-48243:LON], UNICAST3: DATA, seqno=32, TP: [cluster_name=ISPN(SITE LON)]
> _SiteProviderTopologyChangeTest-NodeA-41240:LON: sending msg to _SiteProviderTopologyChangeTest-NodeD-50088:NYC, src=_SiteProviderTopologyChangeTest-NodeA-41240:LON, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=98, rsp_expected=true, RELAY2: DATA [dest=SiteMaster(NYC), sender=SiteProviderTopologyChangeTest-NodeB-48243:LON], UNICAST3: DATA, seqno=2, conn_id=1, TP: [cluster_name=global]
> {noformat}
> The response is sent back from node D to node A, which forwards it to node B:
> {noformat}
> _SiteProviderTopologyChangeTest-NodeA-41240:LON: received [dst: _SiteProviderTopologyChangeTest-NodeA-41240:LON, src: _SiteProviderTopologyChangeTest-NodeD-50088:NYC (4 headers), size=4 bytes, flags=OOB|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=98, rsp_expected=true, RELAY2: DATA [dest=SiteProviderTopologyChangeTest-NodeB-48243:LON, sender=SiteMaster(NYC)], UNICAST3: DATA, seqno=3, TP: [cluster_name=global]
> SiteProviderTopologyChangeTest-NodeA-41240: sending msg to SiteProviderTopologyChangeTest-NodeB-48243, src=SiteProviderTopologyChangeTest-NodeA-41240, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=98, rsp_expected=true, RELAY2: DATA [dest=SiteProviderTopologyChangeTest-NodeB-48243:LON, sender=SiteMaster(NYC)], UNICAST3: DATA, seqno=30, conn_id=1, TP: [cluster_name=ISPN(SITE LON)]
> {noformat}
> However, since node A is shutting down, the response never reaches node B, which ends up throwing a TimeoutException:
> {noformat}
> SiteProviderTopologyChangeTest-NodeA-41240: sending 1 msgs (218 bytes (0.70% of max_bundle_size) to 1 dests(s): [ISPN(SITE LON):SiteProviderTopologyChangeTest-NodeB-48243]
> 127.0.0.1:7900: server is not running, discarding message to 127.0.0.1:7901
> {noformat}
> The test will be disabled because:
> * The state push is triggered manually, so it can simply be re-triggered if an exception occurs
> * Fixing this would require UNICAST/NAKACK-style reliability between sites (i.e. changes to JGroups)
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6749) Disable SiteProviderTopologyChangeTest.testXSiteSTDuringSiteMasterLeave
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-6749?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-6749:
------------------------------
Status: Open (was: New)
> Disable SiteProviderTopologyChangeTest.testXSiteSTDuringSiteMasterLeave
> -----------------------------------------------------------------------
>
> Key: ISPN-6749
> URL: https://issues.jboss.org/browse/ISPN-6749
> Project: Infinispan
> Issue Type: Bug
> Reporter: Pedro Ruivo
> Assignee: Pedro Ruivo
>
> Two sites: LON (nodes A, B, C) and NYC (nodes D, E, F). Nodes A and D are the site masters of LON and NYC, respectively.
> Site LON is about to push state to site NYC. The push-state request is initiated on node B while node A is leaving the cluster. To start the push, node B sends a START_RECEIVE command to the NYC site:
> {noformat}
> About to send to backups [NYC (sync, timeout=2000)], command XSiteStateTransferControlCommand{control=START_RECEIVE, siteName='null', statusOk=false, cacheName='___defaultcache'}
> SiteProviderTopologyChangeTest-NodeB-48243: invoking unicast RPC [req-id=98] on SiteMaster(NYC)
> SiteProviderTopologyChangeTest-NodeB-48243: forwarding message to final destination SiteMaster(NYC) to the current coordinator
> SiteProviderTopologyChangeTest-NodeB-48243: sending msg to SiteProviderTopologyChangeTest-NodeA-41240, src=SiteProviderTopologyChangeTest-NodeB-48243, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=98, rsp_expected=true, RELAY2: DATA [dest=SiteMaster(NYC), sender=SiteProviderTopologyChangeTest-NodeB-48243:LON], UNICAST3: DATA, seqno=32, TP: [cluster_name=ISPN(SITE LON)]
> {noformat}
> The message is forwarded to node A (site master of LON), which sends it to node D (site master of NYC):
> {noformat}
> SiteProviderTopologyChangeTest-NodeA-41240: received [dst: SiteProviderTopologyChangeTest-NodeA-41240, src: SiteProviderTopologyChangeTest-NodeB-48243 (4 headers), size=29 bytes, flags=OOB|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=98, rsp_expected=true, RELAY2: DATA [dest=SiteMaster(NYC), sender=SiteProviderTopologyChangeTest-NodeB-48243:LON], UNICAST3: DATA, seqno=32, TP: [cluster_name=ISPN(SITE LON)]
> _SiteProviderTopologyChangeTest-NodeA-41240:LON: sending msg to _SiteProviderTopologyChangeTest-NodeD-50088:NYC, src=_SiteProviderTopologyChangeTest-NodeA-41240:LON, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=98, rsp_expected=true, RELAY2: DATA [dest=SiteMaster(NYC), sender=SiteProviderTopologyChangeTest-NodeB-48243:LON], UNICAST3: DATA, seqno=2, conn_id=1, TP: [cluster_name=global]
> {noformat}
> The response is sent back from node D to node A, which forwards it to node B:
> {noformat}
> _SiteProviderTopologyChangeTest-NodeA-41240:LON: received [dst: _SiteProviderTopologyChangeTest-NodeA-41240:LON, src: _SiteProviderTopologyChangeTest-NodeD-50088:NYC (4 headers), size=4 bytes, flags=OOB|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=98, rsp_expected=true, RELAY2: DATA [dest=SiteProviderTopologyChangeTest-NodeB-48243:LON, sender=SiteMaster(NYC)], UNICAST3: DATA, seqno=3, TP: [cluster_name=global]
> SiteProviderTopologyChangeTest-NodeA-41240: sending msg to SiteProviderTopologyChangeTest-NodeB-48243, src=SiteProviderTopologyChangeTest-NodeA-41240, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=98, rsp_expected=true, RELAY2: DATA [dest=SiteProviderTopologyChangeTest-NodeB-48243:LON, sender=SiteMaster(NYC)], UNICAST3: DATA, seqno=30, conn_id=1, TP: [cluster_name=ISPN(SITE LON)]
> {noformat}
> However, since node A is shutting down, the response never reaches node B, which ends up throwing a TimeoutException:
> {noformat}
> SiteProviderTopologyChangeTest-NodeA-41240: sending 1 msgs (218 bytes (0.70% of max_bundle_size) to 1 dests(s): [ISPN(SITE LON):SiteProviderTopologyChangeTest-NodeB-48243]
> 127.0.0.1:7900: server is not running, discarding message to 127.0.0.1:7901
> {noformat}
> The test will be disabled because:
> * The state push is triggered manually, so it can simply be re-triggered if an exception occurs
> * Fixing this would require UNICAST/NAKACK-style reliability between sites (i.e. changes to JGroups)
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (HRJS-17) Failover not happening
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/HRJS-17?page=com.atlassian.jira.plugin.sy... ]
Galder Zamarreño resolved HRJS-17.
----------------------------------
Resolution: Done
> Failover not happening
> ----------------------
>
> Key: HRJS-17
> URL: https://issues.jboss.org/browse/HRJS-17
> Project: Infinispan Javascript client
> Issue Type: Bug
> Affects Versions: 0.2.0
> Reporter: Gustavo Fernandes
> Assignee: Galder Zamarreño
> Fix For: 0.3.0
>
>
> I have a cluster of 3 nodes (127.0.0.1:11422, 127.0.0.1:11222, 127.0.0.1:11322) and connect to it using:
> {code}
> var connected = infinispan.client({port: 11222, host: '127.0.0.1'});
> {code}
> The client works normally until I start killing servers one at a time, leaving just one in the cluster; apparently, when I kill the server the client is connected to, it does not fail over (a minimal usage sketch follows the logs):
> Initial Request:
> {noformat}
> [2016-05-27 10:13:06.947] [DEBUG] connection - Connecting to 127.0.0.1:11222
> [2016-05-27 10:13:06.951] [DEBUG] connection - Connected to 127.0.0.1:11222
> [2016-05-27 10:13:06.952] [DEBUG] client - Invoke ping(msgId=1)
> [2016-05-27 10:13:06.952] [TRACE] encoder - Encode operation with topology id 0
> [2016-05-27 10:13:06.954] [TRACE] transport - Write buffer(msgId=1) to 127.0.0.1:11222
> [2016-05-27 10:13:07.009] [TRACE] decoder - Read header(msgId=1): opCode=24, status=0, hasNewTopology=1
> [2016-05-27 10:13:07.012] [DEBUG] transport - New topology(id=4) discovered: [127.0.0.1:11422,127.0.0.1:11222,127.0.0.1:11322]
> [2016-05-27 10:13:07.012] [DEBUG] transport - Removed server: none
> [2016-05-27 10:13:07.012] [DEBUG] transport - Added servers: [127.0.0.1:11422,127.0.0.1:11322]
> [2016-05-27 10:13:07.012] [DEBUG] transport - Connected servers: [127.0.0.1:11222]
> [2016-05-27 10:13:07.013] [TRACE] decoder - Call decode for request(msgId=1)
> [2016-05-27 10:13:07.013] [TRACE] connection - After decoding request(msgId=1), buffer size is 105, and offset 105
> [2016-05-27 10:13:07.013] [DEBUG] connection - Connecting to 127.0.0.1:11422
> [2016-05-27 10:13:07.013] [DEBUG] connection - Connecting to 127.0.0.1:11322
> [2016-05-27 10:13:07.014] [DEBUG] connection - Connected to 127.0.0.1:11422
> [2016-05-27 10:13:07.014] [DEBUG] connection - Connected to 127.0.0.1:11322
> [2016-05-27 10:13:07.015] [TRACE] connection - Complete success for request(msgId=1) with undefined
> [2016-05-27 10:13:14.918] [DEBUG] client - Invoke get(msgId=2,key=Hodor)
> [2016-05-27 10:13:14.919] [TRACE] encoder - Encode operation with topology id 4
> [2016-05-27 10:13:14.921] [TRACE] transport - Write buffer(msgId=2) to 127.0.0.1:11322
> [2016-05-27 10:13:14.948] [TRACE] decoder - Read header(msgId=2): opCode=4, status=2, hasNewTopology=0
> [2016-05-27 10:13:14.948] [TRACE] decoder - Call decode for request(msgId=2)
> [2016-05-27 10:13:14.949] [TRACE] connection - After decoding request(msgId=2), buffer size is 5, and offset 5
> [2016-05-27 10:13:14.949] [TRACE] connection - Complete success for request(msgId=2) with undefined
> [2016-05-27 10:13:14.953] [DEBUG] client - Invoke put(msgId=4,key=Hodor,value=Fri May 27 2016 10:13:14 GMT+0100 (BST),opts=undefined)
> [2016-05-27 10:13:14.953] [TRACE] encoder - Encode operation with topology id 4
> [2016-05-27 10:13:14.953] [TRACE] transport - Write buffer(msgId=4) to 127.0.0.1:11322
> [2016-05-27 10:13:14.960] [TRACE] decoder - Read header(msgId=4): opCode=2, status=0, hasNewTopology=0
> [2016-05-27 10:13:14.960] [TRACE] decoder - Call decode for request(msgId=4)
> [2016-05-27 10:13:14.960] [TRACE] connection - After decoding request(msgId=4), buffer size is 5, and offset 5
> [2016-05-27 10:13:14.960] [TRACE] connection - Complete success for request(msgId=4) with undefined
> {noformat}
> Killing server 127.0.0.1:11222:
> {noformat}
> [2016-05-27 10:13:36.707] [DEBUG] connection - Disconnected from 127.0.0.1:11222
> [2016-05-27 10:13:46.616] [DEBUG] client - Invoke get(msgId=6,key=Hodor)
> [2016-05-27 10:13:46.616] [TRACE] encoder - Encode operation with topology id 4
> [2016-05-27 10:13:46.617] [TRACE] transport - Write buffer(msgId=6) to 127.0.0.1:11322
> [2016-05-27 10:13:46.631] [TRACE] decoder - Read header(msgId=6): opCode=4, status=0, hasNewTopology=1
> [2016-05-27 10:13:46.632] [DEBUG] transport - New topology(id=8) discovered: [127.0.0.1:11422,127.0.0.1:11322]
> [2016-05-27 10:13:46.632] [DEBUG] transport - Removed servers are: [127.0.0.1:11222]
> [2016-05-27 10:13:46.633] [DEBUG] transport - Added servers: []
> [2016-05-27 10:13:46.633] [DEBUG] transport - Connected servers: [127.0.0.1:11422,127.0.0.1:11322]
> [2016-05-27 10:13:46.633] [TRACE] decoder - Call decode for request(msgId=6)
> [2016-05-27 10:13:46.633] [TRACE] connection - After decoding request(msgId=6), buffer size is 133, and offset 133
> [2016-05-27 10:13:46.633] [TRACE] connection - Complete success for request(msgId=6) with Fri May 27 2016 10:13:14 GMT+0100 (BST)
> {noformat}
> Killing server 127.0.0.1:11322, the client hangs:
> {noformat}
> [2016-05-27 10:14:59.623] [DEBUG] connection - Disconnected from 127.0.0.1:11322
> [2016-05-27 10:15:04.813] [DEBUG] client - Invoke get(msgId=8,key=Hodor)
> [2016-05-27 10:15:04.814] [TRACE] encoder - Encode operation with topology id 8
> [2016-05-27 10:15:04.814] [TRACE] transport - Write buffer(msgId=8) to 127.0.0.1:11322
> [2016-05-27 10:15:04.814] [ERROR] connection - Error from 127.0.0.1:11322: This socket has been ended by the other party
> {noformat}
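> For reference, a minimal, self-contained sketch of the access pattern above, using the standard js-client promise API (only calls visible in the logs plus {{disconnect()}} are used; no failover-specific options are assumed):
> {code}
> var infinispan = require('infinispan');
>
> // Connect to a single member; the client learns the rest of the
> // topology from the server, as the "New topology" log lines show.
> var connected = infinispan.client({port: 11222, host: '127.0.0.1'});
>
> connected.then(function(client) {
>   // After the connected server dies, a topology-aware client is
>   // expected to route these operations to the remaining members
>   // instead of writing to the dead socket.
>   return client.put('Hodor', new Date().toString())
>     .then(function() { return client.get('Hodor'); })
>     .then(function(value) { console.log('got: ' + value); })
>     .then(function() { return client.disconnect(); });
> });
> {code}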
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (HRJS-19) Local iteration tests randomly fail
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/HRJS-19?page=com.atlassian.jira.plugin.sy... ]
Galder Zamarreño updated HRJS-19:
---------------------------------
Attachment: tmp-tests.log
server.log
> Local iteration tests randomly fail
> -----------------------------------
>
> Key: HRJS-19
> URL: https://issues.jboss.org/browse/HRJS-19
> Project: Infinispan Javascript client
> Issue Type: Bug
> Reporter: Galder Zamarreño
> Attachments: server.log, tmp-tests.log
>
>
> Failures:
> {code}
> 1) Infinispan local client can iterate over entries
> Message:
> Expected { done : false, key : 'local-it2', value : 'v2' } to equal <jasmine.objectContaining({ key : 'local-it3', value : 'v3', done : false })>.
> Stacktrace:
> Error: Expected { done : false, key : 'local-it2', value : 'v2' } to equal <jasmine.objectContaining({ key : 'local-it3', value : 'v3', done : false })>.
> at /Users/g/0/infinispan/js-client/spec/utils/testing.js:262:43
> at /Users/g/0/infinispan/js-client/spec/utils/testing.js:503:39
> at Function._.map._.collect (/Users/g/0/infinispan/js-client/node_modules/underscore/underscore.js:172:24)
> at /Users/g/0/infinispan/js-client/spec/utils/testing.js:500:7
> at /Users/g/0/infinispan/js-client/spec/utils/testing.js:493:42
> at /Users/g/0/infinispan/js-client/spec/utils/testing.js:443:50
> at tryCallOne (/Users/g/0/infinispan/js-client/node_modules/promise/lib/core.js:37:12)
> at /Users/g/0/infinispan/js-client/node_modules/promise/lib/core.js:103:15
> 2) Infinispan local client can iterate over entries
> Message:
> Expected false to be truthy.
> Stacktrace:
> Error: Expected false to be truthy.
> at /Users/g/0/infinispan/js-client/spec/utils/testing.js:430:26
> at tryCallOne (/Users/g/0/infinispan/js-client/node_modules/promise/lib/core.js:37:12)
> at /Users/g/0/infinispan/js-client/node_modules/promise/lib/core.js:103:15
> at flush (/Users/g/0/infinispan/js-client/node_modules/promise/node_modules/asap/raw.js:50:29)
> at process._tickCallback (node.js:448:13)
> {code}
> TRACE logs for client and server attached.
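> If the first failure comes from the test assuming a fixed iteration order (my assumption; the logs do not confirm the cause, and server-side iteration order is not guaranteed), an order-insensitive check along these lines would sidestep it:
> {code}
> // Hypothetical helper: gather every entry from an iterator, then assert
> // membership with jasmine.objectContaining instead of a fixed order.
> function expectEntries(it, expected) {
>   function gather(acc) {
>     return it.next().then(function(entry) {
>       if (entry.done) return acc;
>       return gather(acc.concat([{key: entry.key, value: entry.value, done: false}]));
>     });
>   }
>   return gather([]).then(function(actual) {
>     expected.forEach(function(e) {
>       expect(actual).toContain(jasmine.objectContaining(e));
>     });
>   });
> }
> {code}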
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (HRJS-19) Local iteration tests randomly fail
by Galder Zamarreño (JIRA)
Galder Zamarreño created HRJS-19:
------------------------------------
Summary: Local iteration tests randomly fail
Key: HRJS-19
URL: https://issues.jboss.org/browse/HRJS-19
Project: Infinispan Javascript client
Issue Type: Bug
Reporter: Galder Zamarreño
Failures:
{code}
1) Infinispan local client can iterate over entries
Message:
Expected { done : false, key : 'local-it2', value : 'v2' } to equal <jasmine.objectContaining({ key : 'local-it3', value : 'v3', done : false })>.
Stacktrace:
Error: Expected { done : false, key : 'local-it2', value : 'v2' } to equal <jasmine.objectContaining({ key : 'local-it3', value : 'v3', done : false })>.
at /Users/g/0/infinispan/js-client/spec/utils/testing.js:262:43
at /Users/g/0/infinispan/js-client/spec/utils/testing.js:503:39
at Function._.map._.collect (/Users/g/0/infinispan/js-client/node_modules/underscore/underscore.js:172:24)
at /Users/g/0/infinispan/js-client/spec/utils/testing.js:500:7
at /Users/g/0/infinispan/js-client/spec/utils/testing.js:493:42
at /Users/g/0/infinispan/js-client/spec/utils/testing.js:443:50
at tryCallOne (/Users/g/0/infinispan/js-client/node_modules/promise/lib/core.js:37:12)
at /Users/g/0/infinispan/js-client/node_modules/promise/lib/core.js:103:15
2) Infinispan local client can iterate over entries
Message:
Expected false to be truthy.
Stacktrace:
Error: Expected false to be truthy.
at /Users/g/0/infinispan/js-client/spec/utils/testing.js:430:26
at tryCallOne (/Users/g/0/infinispan/js-client/node_modules/promise/lib/core.js:37:12)
at /Users/g/0/infinispan/js-client/node_modules/promise/lib/core.js:103:15
at flush (/Users/g/0/infinispan/js-client/node_modules/promise/node_modules/asap/raw.js:50:29)
at process._tickCallback (node.js:448:13)
{code}
TRACE logs for client and server attached.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (HRJS-18) Local server iterator test fails and hangs randomly with NoSuchElementException
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/HRJS-18?page=com.atlassian.jira.plugin.sy... ]
Galder Zamarreño updated HRJS-18:
---------------------------------
Attachment: tmp-tests.log
server.log
> Local server iterator test fails and hangs randomly with NoSuchElementException
> -------------------------------------------------------------------------------
>
> Key: HRJS-18
> URL: https://issues.jboss.org/browse/HRJS-18
> Project: Infinispan Javascript client
> Issue Type: Bug
> Reporter: Galder Zamarreño
> Attachments: server.log, tmp-tests.log
>
>
> Apart from the server returning {{NoSuchElementException}}, this confuses the client, which results in the test suite hanging completely.
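> For context, here is a sketch of the iteration pattern the test exercises, built from the client calls visible in the log below ({{iterator(batchSize)}} and {{iterator.next()}}; {{close()}} is assumed from the client API). Draining strictly one {{next()}} at a time, as done here, is only an assumption about how to avoid the in-flight race, not a confirmed fix:
> {code}
> var infinispan = require('infinispan');
>
> infinispan.client({port: 11222, host: '127.0.0.1'}).then(function(client) {
>   // batchSize=1 forces one server round trip per element; the log below
>   // shows three next() calls (msgId 325-327) in flight at once.
>   return client.iterator(1).then(function(it) {
>     function drain() {
>       return it.next().then(function(entry) {
>         if (entry.done) return it.close();
>         console.log(entry.key + ' -> ' + entry.value);
>         return drain(); // request the next element only after the reply arrives
>       });
>     }
>     return drain();
>   }).then(function() { return client.disconnect(); });
> });
> {code}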
> Here are snippets from client and server logs:
> {code}
> [2016-06-01 17:57:36.210] [DEBUG] client - Invoke putAll(msgId=323,pairs=[{"key":"local-it1","value":"v1","done":false},{"key":"local-it2","value":"v2","done":false},{"key":"local-it3","value":"v3","done":false}],opts=undefined)
> [2016-06-01 17:57:36.210] [TRACE] encoder - Encode operation with topology id 0
> [2016-06-01 17:57:36.210] [TRACE] transport - Write buffer(msgId=323) to 127.0.0.1:11222: A0C302192D000003007703096C6F63616C2D697431027631096C6F63616C2D697432027632096C6F63616C2D697433027633
> [2016-06-01 17:57:36.214] [TRACE] decoder - Read header(msgId=323): opCode=46, status=0, hasNewTopology=0
> [2016-06-01 17:57:36.215] [TRACE] decoder - Call decode for request(msgId=323)
> [2016-06-01 17:57:36.215] [TRACE] connection - After decoding request(msgId=323), buffer size is 6, and offset 6
> [2016-06-01 17:57:36.215] [TRACE] connection - Complete success for request(msgId=323) with undefined
> [2016-06-01 17:57:36.215] [DEBUG] client - Invoke iterator(msgId=324,batchSize=1,opts=undefined)
> [2016-06-01 17:57:36.215] [TRACE] encoder - Encode operation with topology id 0
> [2016-06-01 17:57:36.215] [TRACE] transport - Write buffer(msgId=324) to 127.0.0.1:11222: A0C40219310000030001010100
> [2016-06-01 17:57:36.230] [TRACE] decoder - Read header(msgId=324): opCode=50, status=0, hasNewTopology=0
> [2016-06-01 17:57:36.230] [TRACE] decoder - Call decode for request(msgId=324)
> [2016-06-01 17:57:36.230] [TRACE] connection - After decoding request(msgId=324), buffer size is 43, and offset 43
> [2016-06-01 17:57:36.230] [TRACE] connection - Complete success for request(msgId=324) with {"iterId":"28cab848-73ac-47c5-ad68-f518b89c5ba4","conn":{}}
> [2016-06-01 17:57:36.230] [TRACE] client - Invoke iterator.next(msgId=325,iteratorId=28cab848-73ac-47c5-ad68-f518b89c5ba4) on 127.0.0.1:11222
> [2016-06-01 17:57:36.230] [TRACE] encoder - Encode operation with topology id 0
> [2016-06-01 17:57:36.231] [TRACE] transport - Write buffer(msgId=325) to 127.0.0.1:11222: A0C5021933000003002432386361623834382D373361632D343763352D616436382D663531386238396335626134
> [2016-06-01 17:57:36.231] [TRACE] client - Invoke iterator.next(msgId=326,iteratorId=28cab848-73ac-47c5-ad68-f518b89c5ba4) on 127.0.0.1:11222
> [2016-06-01 17:57:36.231] [TRACE] encoder - Encode operation with topology id 0
> [2016-06-01 17:57:36.231] [TRACE] transport - Write buffer(msgId=326) to 127.0.0.1:11222: A0C6021933000003002432386361623834382D373361632D343763352D616436382D663531386238396335626134
> [2016-06-01 17:57:36.231] [TRACE] client - Invoke iterator.next(msgId=327,iteratorId=28cab848-73ac-47c5-ad68-f518b89c5ba4) on 127.0.0.1:11222
> [2016-06-01 17:57:36.231] [TRACE] encoder - Encode operation with topology id 0
> [2016-06-01 17:57:36.231] [TRACE] transport - Write buffer(msgId=327) to 127.0.0.1:11222: A0C7021933000003002432386361623834382D373361632D343763352D616436382D663531386238396335626134
> [2016-06-01 17:57:36.244] [TRACE] decoder - Read header(msgId=327): opCode=80, status=133, hasNewTopology=0
> [2016-06-01 17:57:36.244] [ERROR] decoder - Error decoding body of request(msgId=327): java.util.NoSuchElementException
> [2016-06-01 17:57:36.244] [TRACE] connection - After decoding request(msgId=327), buffer size is 39, and offset 39
> [2016-06-01 17:57:36.244] [TRACE] connection - Complete failure for request(msgId=327) with java.util.NoSuchElementException
> [2016-06-01 17:57:36.249] [TRACE] decoder - Read header(msgId=327): opCode=52, status=0, hasNewTopology=0
> {code}
> {code}
> 2016-06-01 17:57:36,216 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decoded header HotRodHeader{op=IterationStartRequest, version=25, messageId=324, cacheName=, flag=0, clientIntelligence=3, topologyId=0}
> 2016-06-01 17:57:36,222 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (HotRodServerHandler-6-115) Invoked with command EntrySetCommand{cache=default} and InvocationContext [org.infinispan.context.impl.NonTxInvocationContext@7bc2d6d3]
> 2016-06-01 17:57:36,222 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (HotRodServerHandler-6-115) Executing command: EntrySetCommand{cache=default}.
> 2016-06-01 17:57:36,229 TRACE [org.infinispan.server.hotrod.ContextHandler] (HotRodServerHandler-6-115) Write response IterationStartResponse{version=25, messageId=324, cacheName=, operation=IterationStartResponse, status=Success, iterationId=28cab848-73ac-47c5-ad68-f518b89c5ba4}
> 2016-06-01 17:57:36,229 TRACE [org.infinispan.server.hotrod.HotRodEncoder] (HotRodServerWorker-8-2) Encode msg IterationStartResponse{version=25, messageId=324, cacheName=, operation=IterationStartResponse, status=Success, iterationId=28cab848-73ac-47c5-ad68-f518b89c5ba4}
> 2016-06-01 17:57:36,229 TRACE [org.infinispan.server.hotrod.Encoder2x$] (HotRodServerWorker-8-2) Write topology response header with no change
> 2016-06-01 17:57:36,229 TRACE [org.infinispan.server.hotrod.HotRodEncoder] (HotRodServerWorker-8-2) Write buffer contents A1C4023200002432386361623834382D373361632D343763352D616436382D663531386238396335626134 to channel [id: 0xd8959d2b, L:/127.0.0.1:11222 - R:/127.0.0.1:52367]
> 2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decode using instance @54abd903
> 2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decoded header HotRodHeader{op=IterationNextRequest, version=25, messageId=325, cacheName=, flag=0, clientIntelligence=3, topologyId=0}
> 2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decode using instance @54abd903
> 2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decoded header HotRodHeader{op=IterationNextRequest, version=25, messageId=326, cacheName=, flag=0, clientIntelligence=3, topologyId=0}
> 2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decode using instance @54abd903
> 2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decoded header HotRodHeader{op=IterationNextRequest, version=25, messageId=327, cacheName=, flag=0, clientIntelligence=3, topologyId=0}
> 2016-06-01 17:57:36,239 DEBUG [org.infinispan.server.hotrod.HotRodExceptionHandler] (HotRodServerWorker-8-2) Exception caught: java.util.NoSuchElementException
> at org.infinispan.stream.impl.RemovableIterator.next(RemovableIterator.java:49)
> at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:43)
> at org.infinispan.server.hotrod.iteration.DefaultIterationManager$$anonfun$next$1$$anonfun$6.apply(IterationManager.scala:142)
> at org.infinispan.server.hotrod.iteration.DefaultIterationManager$$anonfun$next$1$$anonfun$6.apply(IterationManager.scala:142)
> at scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:728)
> at scala.collection.immutable.Range.foreach(Range.scala:166)
> at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:727)
> at org.infinispan.server.hotrod.iteration.DefaultIterationManager$$anonfun$next$1.apply(IterationManager.scala:142)
> at org.infinispan.server.hotrod.iteration.DefaultIterationManager$$anonfun$next$1.apply(IterationManager.scala:138)
> at scala.Option.map(Option.scala:146)
> at org.infinispan.server.hotrod.iteration.DefaultIterationManager.next(IterationManager.scala:138)
> at org.infinispan.server.hotrod.ContextHandler.realRead(ContextHandler.java:182)
> at org.infinispan.server.hotrod.ContextHandler.lambda$channelRead0$1(ContextHandler.java:56)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
> at java.lang.Thread.run(Thread.java:745)
> 2016-06-01 17:57:36,244 TRACE [org.infinispan.server.hotrod.HotRodEncoder] (HotRodServerWorker-8-2) Encode msg ErrorResponse{version=25, messageId=327, operation=ErrorResponse, status=ServerError, msg=java.util.NoSuchElementException}
> {code}
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (HRJS-18) Local server iterator test fails and hangs randomly with NoSuchElementException
by Galder Zamarreño (JIRA)
Galder Zamarreño created HRJS-18:
------------------------------------
Summary: Local server iterator test fails and hangs randomly with NoSuchElementException
Key: HRJS-18
URL: https://issues.jboss.org/browse/HRJS-18
Project: Infinispan Javascript client
Issue Type: Bug
Reporter: Galder Zamarreño
Apart from the server returning {{NoSuchElementException}}, this confuses the client, which results in the test suite hanging completely.
Here are snippets from client and server logs:
{code}
[2016-06-01 17:57:36.210] [DEBUG] client - Invoke putAll(msgId=323,pairs=[{"key":"local-it1","value":"v1","done":false},{"key":"local-it2","value":"v2","done":false},{"key":"local-it3","value":"v3","done":false}],opts=undefined)
[2016-06-01 17:57:36.210] [TRACE] encoder - Encode operation with topology id 0
[2016-06-01 17:57:36.210] [TRACE] transport - Write buffer(msgId=323) to 127.0.0.1:11222: A0C302192D000003007703096C6F63616C2D697431027631096C6F63616C2D697432027632096C6F63616C2D697433027633
[2016-06-01 17:57:36.214] [TRACE] decoder - Read header(msgId=323): opCode=46, status=0, hasNewTopology=0
[2016-06-01 17:57:36.215] [TRACE] decoder - Call decode for request(msgId=323)
[2016-06-01 17:57:36.215] [TRACE] connection - After decoding request(msgId=323), buffer size is 6, and offset 6
[2016-06-01 17:57:36.215] [TRACE] connection - Complete success for request(msgId=323) with undefined
[2016-06-01 17:57:36.215] [DEBUG] client - Invoke iterator(msgId=324,batchSize=1,opts=undefined)
[2016-06-01 17:57:36.215] [TRACE] encoder - Encode operation with topology id 0
[2016-06-01 17:57:36.215] [TRACE] transport - Write buffer(msgId=324) to 127.0.0.1:11222: A0C40219310000030001010100
[2016-06-01 17:57:36.230] [TRACE] decoder - Read header(msgId=324): opCode=50, status=0, hasNewTopology=0
[2016-06-01 17:57:36.230] [TRACE] decoder - Call decode for request(msgId=324)
[2016-06-01 17:57:36.230] [TRACE] connection - After decoding request(msgId=324), buffer size is 43, and offset 43
[2016-06-01 17:57:36.230] [TRACE] connection - Complete success for request(msgId=324) with {"iterId":"28cab848-73ac-47c5-ad68-f518b89c5ba4","conn":{}}
[2016-06-01 17:57:36.230] [TRACE] client - Invoke iterator.next(msgId=325,iteratorId=28cab848-73ac-47c5-ad68-f518b89c5ba4) on 127.0.0.1:11222
[2016-06-01 17:57:36.230] [TRACE] encoder - Encode operation with topology id 0
[2016-06-01 17:57:36.231] [TRACE] transport - Write buffer(msgId=325) to 127.0.0.1:11222: A0C5021933000003002432386361623834382D373361632D343763352D616436382D663531386238396335626134
[2016-06-01 17:57:36.231] [TRACE] client - Invoke iterator.next(msgId=326,iteratorId=28cab848-73ac-47c5-ad68-f518b89c5ba4) on 127.0.0.1:11222
[2016-06-01 17:57:36.231] [TRACE] encoder - Encode operation with topology id 0
[2016-06-01 17:57:36.231] [TRACE] transport - Write buffer(msgId=326) to 127.0.0.1:11222: A0C6021933000003002432386361623834382D373361632D343763352D616436382D663531386238396335626134
[2016-06-01 17:57:36.231] [TRACE] client - Invoke iterator.next(msgId=327,iteratorId=28cab848-73ac-47c5-ad68-f518b89c5ba4) on 127.0.0.1:11222
[2016-06-01 17:57:36.231] [TRACE] encoder - Encode operation with topology id 0
[2016-06-01 17:57:36.231] [TRACE] transport - Write buffer(msgId=327) to 127.0.0.1:11222: A0C7021933000003002432386361623834382D373361632D343763352D616436382D663531386238396335626134
[2016-06-01 17:57:36.244] [TRACE] decoder - Read header(msgId=327): opCode=80, status=133, hasNewTopology=0
[2016-06-01 17:57:36.244] [ERROR] decoder - Error decoding body of request(msgId=327): java.util.NoSuchElementException
[2016-06-01 17:57:36.244] [TRACE] connection - After decoding request(msgId=327), buffer size is 39, and offset 39
[2016-06-01 17:57:36.244] [TRACE] connection - Complete failure for request(msgId=327) with java.util.NoSuchElementException
[2016-06-01 17:57:36.249] [TRACE] decoder - Read header(msgId=327): opCode=52, status=0, hasNewTopology=0
{code}
{code}
2016-06-01 17:57:36,216 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decoded header HotRodHeader{op=IterationStartRequest, version=25, messageId=324, cacheName=, flag=0, clientIntelligence=3, topologyId=0}
2016-06-01 17:57:36,222 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (HotRodServerHandler-6-115) Invoked with command EntrySetCommand{cache=default} and InvocationContext [org.infinispan.context.impl.NonTxInvocationContext@7bc2d6d3]
2016-06-01 17:57:36,222 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (HotRodServerHandler-6-115) Executing command: EntrySetCommand{cache=default}.
2016-06-01 17:57:36,229 TRACE [org.infinispan.server.hotrod.ContextHandler] (HotRodServerHandler-6-115) Write response IterationStartResponse{version=25, messageId=324, cacheName=, operation=IterationStartResponse, status=Success, iterationId=28cab848-73ac-47c5-ad68-f518b89c5ba4}
2016-06-01 17:57:36,229 TRACE [org.infinispan.server.hotrod.HotRodEncoder] (HotRodServerWorker-8-2) Encode msg IterationStartResponse{version=25, messageId=324, cacheName=, operation=IterationStartResponse, status=Success, iterationId=28cab848-73ac-47c5-ad68-f518b89c5ba4}
2016-06-01 17:57:36,229 TRACE [org.infinispan.server.hotrod.Encoder2x$] (HotRodServerWorker-8-2) Write topology response header with no change
2016-06-01 17:57:36,229 TRACE [org.infinispan.server.hotrod.HotRodEncoder] (HotRodServerWorker-8-2) Write buffer contents A1C4023200002432386361623834382D373361632D343763352D616436382D663531386238396335626134 to channel [id: 0xd8959d2b, L:/127.0.0.1:11222 - R:/127.0.0.1:52367]
2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decode using instance @54abd903
2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decoded header HotRodHeader{op=IterationNextRequest, version=25, messageId=325, cacheName=, flag=0, clientIntelligence=3, topologyId=0}
2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decode using instance @54abd903
2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decoded header HotRodHeader{op=IterationNextRequest, version=25, messageId=326, cacheName=, flag=0, clientIntelligence=3, topologyId=0}
2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decode using instance @54abd903
2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decoded header HotRodHeader{op=IterationNextRequest, version=25, messageId=327, cacheName=, flag=0, clientIntelligence=3, topologyId=0}
2016-06-01 17:57:36,239 DEBUG [org.infinispan.server.hotrod.HotRodExceptionHandler] (HotRodServerWorker-8-2) Exception caught: java.util.NoSuchElementException
at org.infinispan.stream.impl.RemovableIterator.next(RemovableIterator.java:49)
at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:43)
at org.infinispan.server.hotrod.iteration.DefaultIterationManager$$anonfun$next$1$$anonfun$6.apply(IterationManager.scala:142)
at org.infinispan.server.hotrod.iteration.DefaultIterationManager$$anonfun$next$1$$anonfun$6.apply(IterationManager.scala:142)
at scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:728)
at scala.collection.immutable.Range.foreach(Range.scala:166)
at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:727)
at org.infinispan.server.hotrod.iteration.DefaultIterationManager$$anonfun$next$1.apply(IterationManager.scala:142)
at org.infinispan.server.hotrod.iteration.DefaultIterationManager$$anonfun$next$1.apply(IterationManager.scala:138)
at scala.Option.map(Option.scala:146)
at org.infinispan.server.hotrod.iteration.DefaultIterationManager.next(IterationManager.scala:138)
at org.infinispan.server.hotrod.ContextHandler.realRead(ContextHandler.java:182)
at org.infinispan.server.hotrod.ContextHandler.lambda$channelRead0$1(ContextHandler.java:56)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
at java.lang.Thread.run(Thread.java:745)
2016-06-01 17:57:36,244 TRACE [org.infinispan.server.hotrod.HotRodEncoder] (HotRodServerWorker-8-2) Encode msg ErrorResponse{version=25, messageId=327, operation=ErrorResponse, status=ServerError, msg=java.util.NoSuchElementException}
{code}
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6746) Server instances fail to boot with WARN messages
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-6746?page=com.atlassian.jira.plugin.... ]
Dan Berindei resolved ISPN-6746.
--------------------------------
Resolution: Cannot Reproduce
> Server instances fail to boot with WARN messages
> ------------------------------------------------
>
> Key: ISPN-6746
> URL: https://issues.jboss.org/browse/ISPN-6746
> Project: Infinispan
> Issue Type: Bug
> Components: Server
> Affects Versions: 9.0.0.Alpha2, 8.2.2.Final
> Reporter: Galder Zamarreño
> Assignee: Dan Berindei
> Fix For: 9.0.0.Alpha3, 9.0.0.Final
>
> Attachments: 6-timeout-booting-trace.tgz
>
>
> TRACE logs are attached.
> {code}
> 2016-05-31 16:20:06,460 WARN [org.jgroups.protocols.pbcast.GMS] (MSC service thread 1-2) node6: JOIN(node6) sent to node6 timed out (after 1000 ms), on try 1
> 2016-05-31 16:20:07,466 WARN [org.jgroups.protocols.pbcast.GMS] (MSC service thread 1-2) node6: JOIN(node6) sent to node6 timed out (after 1000 ms), on try 2
> 2016-05-31 16:20:08,513 WARN [org.infinispan.configuration.cache.AbstractStoreConfigurationBuilder] (MSC service thread 1-8) ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> 2016-05-31 16:20:08,514 WARN [org.infinispan.configuration.cache.AbstractStoreConfigurationBuilder] (MSC service thread 1-8) ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> 2016-05-31 16:20:08,656 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-8) ISPN000078: Starting JGroups channel clustered
> 2016-05-31 16:20:08,673 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-8) ISPN000094: Received new cluster view for channel clustered: [node6|5] (5) [node6, 1f1cd996-6d79-eb91-10bf-08e40a8efd75, node4, node6, node5]
> 2016-05-31 16:20:08,682 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-8) ISPN000079: Channel clustered local address is node6, physical addresses are [127.0.0.1:55801]
> 2016-05-31 16:20:10,053 WARN [org.jgroups.protocols.pbcast.GMS] (Incoming-2,node6) node6: failed to create view from delta-view; dropping view: java.lang.IllegalStateException: the view-id of the delta view ([node6|3]) doesn't match the current view-id ([node6|5]); discarding delta view [node6|6], ref-view=[node6|3], joined=[node4, node6, node5]
> 2016-05-31 16:20:56,961 WARN [org.jgroups.protocols.pbcast.GMS] (Incoming-2,node6) node6: failed to create view from delta-view; dropping view: java.lang.IllegalStateException: the view-id of the delta view ([node6|3]) doesn't match the current view-id ([node6|5]); discarding delta view [node6|7], ref-view=[node6|3], left=[1f1cd996-6d79-eb91-10bf-08e40a8efd75], joined=[node4, node6, node5]
> 2016-05-31 16:21:00,026 INFO [org.jboss.as.server] (Thread-2) WFLYSRV0220: Server shutdown has been requested.
> 2016-05-31 16:21:00,038 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-2) WFLYJCA0010: Unbound data source [java:jboss/datasources/ExampleDS]
> 2016-05-31 16:21:00,045 INFO [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-1) WFLYJCA0019: Stopped Driver service with driver-name = h2
> 2016-05-31 16:21:01,912 WARN [org.jgroups.protocols.pbcast.GMS] (Incoming-2,node6) node6: failed to create view from delta-view; dropping view: java.lang.IllegalStateException: the view-id of the delta view ([node6|3]) doesn't match the current view-id ([node6|5]); discarding delta view [node6|8], ref-view=[node6|3], left=[1f1cd996-6d79-eb91-10bf-08e40a8efd75], joined=[node4, node6, node5]
> 2016-05-31 16:21:06,915 WARN [org.jgroups.protocols.pbcast.GMS] (Incoming-2,node6) node6: failed to create view from delta-view; dropping view: java.lang.IllegalStateException: the view-id of the delta view ([node6|3]) doesn't match the current view-id ([node6|5]); discarding delta view [node6|9], ref-view=[node6|3], left=[1f1cd996-6d79-eb91-10bf-08e40a8efd75], joined=[node4, node6, node5]
> 2016-05-31 16:21:08,749 WARN [org.infinispan.topology.ClusterTopologyManagerImpl] (MSC service thread 1-8) ISPN000329: Unable to read rebalancing status from coordinator node6: org.infinispan.util.concurrent.TimeoutException: Replication timeout for node6
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.checkRsp(JGroupsTransport.java:822)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$invokeRemotelyAsync$0(JGroupsTransport.java:647)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
> at org.infinispan.remoting.transport.jgroups.SingleResponseFuture.call(SingleResponseFuture.java:46)
> at org.infinispan.remoting.transport.jgroups.SingleResponseFuture.call(SingleResponseFuture.java:17)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> 2016-05-31 16:21:08,754 INFO [org.infinispan.globalstate.impl.GlobalStateManagerImpl] (MSC service thread 1-8) ISPN000390: Persisted state, version=9.0.0-SNAPSHOT timestamp=2016-05-31T14:21:08.751Z
> 2016-05-31 16:21:08,754 INFO [org.infinispan.factories.GlobalComponentRegistry] (MSC service thread 1-8) ISPN000128: Infinispan version: Infinispan 'Chakra' 9.0.0-SNAPSHOT
> 2016-05-31 16:21:08,795 WARN [org.infinispan.configuration.cache.AbstractStoreConfigurationBuilder] (MSC service thread 1-8) ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> 2016-05-31 16:21:08,796 WARN [org.infinispan.configuration.cache.AbstractStoreConfigurationBuilder] (MSC service thread 1-8) ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> 2016-05-31 16:21:08,803 INFO [org.infinispan.globalstate.impl.GlobalStateManagerImpl] (MSC service thread 1-4) ISPN000390: Persisted state, version=9.0.0-SNAPSHOT timestamp=2016-05-31T14:21:08.803Z
> 2016-05-31 16:21:08,804 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-4) ISPN000080: Disconnecting JGroups channel clustered
> 2016-05-31 16:21:08,805 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-4) ISPN000082: Stopping the RpcDispatcher for channel clustered
> 2016-05-31 16:21:09,827 INFO [org.jboss.as] (MSC service thread 1-8) WFLYSRV0050: Infinispan Server 9.0.0-SNAPSHOT (WildFly Core 2.0.10.Final) stopped in 9791ms
> {code}
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6746) Server instances fail to boot with WARN messages
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-6746?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-6746:
-------------------------------
Fix Version/s: (was: 9.0.0.Final)
> Server instances fail to boot with WARN messages
> ------------------------------------------------
>
> Key: ISPN-6746
> URL: https://issues.jboss.org/browse/ISPN-6746
> Project: Infinispan
> Issue Type: Bug
> Components: Server
> Affects Versions: 9.0.0.Alpha2, 8.2.2.Final
> Reporter: Galder Zamarreño
> Assignee: Dan Berindei
> Fix For: 9.0.0.Alpha3
>
> Attachments: 6-timeout-booting-trace.tgz
>
>
> TRACE logs are attached.
> {code}
> 2016-05-31 16:20:06,460 WARN [org.jgroups.protocols.pbcast.GMS] (MSC service thread 1-2) node6: JOIN(node6) sent to node6 timed out (after 1000 ms), on try 1
> 2016-05-31 16:20:07,466 WARN [org.jgroups.protocols.pbcast.GMS] (MSC service thread 1-2) node6: JOIN(node6) sent to node6 timed out (after 1000 ms), on try 2
> 2016-05-31 16:20:08,513 WARN [org.infinispan.configuration.cache.AbstractStoreConfigurationBuilder] (MSC service thread 1-8) ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> 2016-05-31 16:20:08,514 WARN [org.infinispan.configuration.cache.AbstractStoreConfigurationBuilder] (MSC service thread 1-8) ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> 2016-05-31 16:20:08,656 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-8) ISPN000078: Starting JGroups channel clustered
> 2016-05-31 16:20:08,673 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-8) ISPN000094: Received new cluster view for channel clustered: [node6|5] (5) [node6, 1f1cd996-6d79-eb91-10bf-08e40a8efd75, node4, node6, node5]
> 2016-05-31 16:20:08,682 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-8) ISPN000079: Channel clustered local address is node6, physical addresses are [127.0.0.1:55801]
> 2016-05-31 16:20:10,053 WARN [org.jgroups.protocols.pbcast.GMS] (Incoming-2,node6) node6: failed to create view from delta-view; dropping view: java.lang.IllegalStateException: the view-id of the delta view ([node6|3]) doesn't match the current view-id ([node6|5]); discarding delta view [node6|6], ref-view=[node6|3], joined=[node4, node6, node5]
> 2016-05-31 16:20:56,961 WARN [org.jgroups.protocols.pbcast.GMS] (Incoming-2,node6) node6: failed to create view from delta-view; dropping view: java.lang.IllegalStateException: the view-id of the delta view ([node6|3]) doesn't match the current view-id ([node6|5]); discarding delta view [node6|7], ref-view=[node6|3], left=[1f1cd996-6d79-eb91-10bf-08e40a8efd75], joined=[node4, node6, node5]
> 2016-05-31 16:21:00,026 INFO [org.jboss.as.server] (Thread-2) WFLYSRV0220: Server shutdown has been requested.
> 2016-05-31 16:21:00,038 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-2) WFLYJCA0010: Unbound data source [java:jboss/datasources/ExampleDS]
> 2016-05-31 16:21:00,045 INFO [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-1) WFLYJCA0019: Stopped Driver service with driver-name = h2
> 2016-05-31 16:21:01,912 WARN [org.jgroups.protocols.pbcast.GMS] (Incoming-2,node6) node6: failed to create view from delta-view; dropping view: java.lang.IllegalStateException: the view-id of the delta view ([node6|3]) doesn't match the current view-id ([node6|5]); discarding delta view [node6|8], ref-view=[node6|3], left=[1f1cd996-6d79-eb91-10bf-08e40a8efd75], joined=[node4, node6, node5]
> 2016-05-31 16:21:06,915 WARN [org.jgroups.protocols.pbcast.GMS] (Incoming-2,node6) node6: failed to create view from delta-view; dropping view: java.lang.IllegalStateException: the view-id of the delta view ([node6|3]) doesn't match the current view-id ([node6|5]); discarding delta view [node6|9], ref-view=[node6|3], left=[1f1cd996-6d79-eb91-10bf-08e40a8efd75], joined=[node4, node6, node5]
> 2016-05-31 16:21:08,749 WARN [org.infinispan.topology.ClusterTopologyManagerImpl] (MSC service thread 1-8) ISPN000329: Unable to read rebalancing status from coordinator node6: org.infinispan.util.concurrent.TimeoutException: Replication timeout for node6
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.checkRsp(JGroupsTransport.java:822)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$invokeRemotelyAsync$0(JGroupsTransport.java:647)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
> at org.infinispan.remoting.transport.jgroups.SingleResponseFuture.call(SingleResponseFuture.java:46)
> at org.infinispan.remoting.transport.jgroups.SingleResponseFuture.call(SingleResponseFuture.java:17)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> 2016-05-31 16:21:08,754 INFO [org.infinispan.globalstate.impl.GlobalStateManagerImpl] (MSC service thread 1-8) ISPN000390: Persisted state, version=9.0.0-SNAPSHOT timestamp=2016-05-31T14:21:08.751Z
> 2016-05-31 16:21:08,754 INFO [org.infinispan.factories.GlobalComponentRegistry] (MSC service thread 1-8) ISPN000128: Infinispan version: Infinispan 'Chakra' 9.0.0-SNAPSHOT
> 2016-05-31 16:21:08,795 WARN [org.infinispan.configuration.cache.AbstractStoreConfigurationBuilder] (MSC service thread 1-8) ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> 2016-05-31 16:21:08,796 WARN [org.infinispan.configuration.cache.AbstractStoreConfigurationBuilder] (MSC service thread 1-8) ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> 2016-05-31 16:21:08,803 INFO [org.infinispan.globalstate.impl.GlobalStateManagerImpl] (MSC service thread 1-4) ISPN000390: Persisted state, version=9.0.0-SNAPSHOT timestamp=2016-05-31T14:21:08.803Z
> 2016-05-31 16:21:08,804 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-4) ISPN000080: Disconnecting JGroups channel clustered
> 2016-05-31 16:21:08,805 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-4) ISPN000082: Stopping the RpcDispatcher for channel clustered
> 2016-05-31 16:21:09,827 INFO [org.jboss.as] (MSC service thread 1-8) WFLYSRV0050: Infinispan Server 9.0.0-SNAPSHOT (WildFly Core 2.0.10.Final) stopped in 9791ms
> {code}
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6749) Disable SiteProviderTopologyChangeTest.testXSiteSTDuringSiteMasterLeave
by Pedro Ruivo (JIRA)
Pedro Ruivo created ISPN-6749:
---------------------------------
Summary: Disable SiteProviderTopologyChangeTest.testXSiteSTDuringSiteMasterLeave
Key: ISPN-6749
URL: https://issues.jboss.org/browse/ISPN-6749
Project: Infinispan
Issue Type: Bug
Reporter: Pedro Ruivo
Assignee: Pedro Ruivo
Two sites: LON (nodes A, B, C) and NYC (nodes D, E, F). Nodes A and D are the site masters of LON and NYC, respectively.
Site LON is about to push state to site NYC. The push-state request is initiated on node B while node A is leaving the cluster. To start the push, node B sends a START_RECEIVE command to the NYC site:
{noformat}
About to send to backups [NYC (sync, timeout=2000)], command XSiteStateTransferControlCommand{control=START_RECEIVE, siteName='null', statusOk=false, cacheName='___defaultcache'}
SiteProviderTopologyChangeTest-NodeB-48243: invoking unicast RPC [req-id=98] on SiteMaster(NYC)
SiteProviderTopologyChangeTest-NodeB-48243: forwarding message to final destination SiteMaster(NYC) to the current coordinator
SiteProviderTopologyChangeTest-NodeB-48243: sending msg to SiteProviderTopologyChangeTest-NodeA-41240, src=SiteProviderTopologyChangeTest-NodeB-48243, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=98, rsp_expected=true, RELAY2: DATA [dest=SiteMaster(NYC), sender=SiteProviderTopologyChangeTest-NodeB-48243:LON], UNICAST3: DATA, seqno=32, TP: [cluster_name=ISPN(SITE LON)]
{noformat}
The message is forwarded to node A (site master of LON), which sends it to node D (site master of NYC):
{noformat}
SiteProviderTopologyChangeTest-NodeA-41240: received [dst: SiteProviderTopologyChangeTest-NodeA-41240, src: SiteProviderTopologyChangeTest-NodeB-48243 (4 headers), size=29 bytes, flags=OOB|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=REQ, req_id=98, rsp_expected=true, RELAY2: DATA [dest=SiteMaster(NYC), sender=SiteProviderTopologyChangeTest-NodeB-48243:LON], UNICAST3: DATA, seqno=32, TP: [cluster_name=ISPN(SITE LON)]
_SiteProviderTopologyChangeTest-NodeA-41240:LON: sending msg to _SiteProviderTopologyChangeTest-NodeD-50088:NYC, src=_SiteProviderTopologyChangeTest-NodeA-41240:LON, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=98, rsp_expected=true, RELAY2: DATA [dest=SiteMaster(NYC), sender=SiteProviderTopologyChangeTest-NodeB-48243:LON], UNICAST3: DATA, seqno=2, conn_id=1, TP: [cluster_name=global]
{noformat}
The response is sent back from node D to node A, which forwards it to node B:
{noformat}
_SiteProviderTopologyChangeTest-NodeA-41240:LON: received [dst: _SiteProviderTopologyChangeTest-NodeA-41240:LON, src: _SiteProviderTopologyChangeTest-NodeD-50088:NYC (4 headers), size=4 bytes, flags=OOB|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=98, rsp_expected=true, RELAY2: DATA [dest=SiteProviderTopologyChangeTest-NodeB-48243:LON, sender=SiteMaster(NYC)], UNICAST3: DATA, seqno=3, TP: [cluster_name=global]
SiteProviderTopologyChangeTest-NodeA-41240: sending msg to SiteProviderTopologyChangeTest-NodeB-48243, src=SiteProviderTopologyChangeTest-NodeA-41240, headers are RequestCorrelator: corr_id=200, type=RSP, req_id=98, rsp_expected=true, RELAY2: DATA [dest=SiteProviderTopologyChangeTest-NodeB-48243:LON, sender=SiteMaster(NYC)], UNICAST3: DATA, seqno=30, conn_id=1, TP: [cluster_name=ISPN(SITE LON)]
{noformat}
However, since node A is shutting down, the response never reaches node B, which ends up throwing a TimeoutException:
{noformat}
SiteProviderTopologyChangeTest-NodeA-41240: sending 1 msgs (218 bytes (0.70% of max_bundle_size) to 1 dests(s): [ISPN(SITE LON):SiteProviderTopologyChangeTest-NodeB-48243]
127.0.0.1:7900: server is not running, discarding message to 127.0.0.1:7901
{noformat}
The test will be disabled because:
* The state push is triggered manually, so it can simply be re-triggered if an exception occurs
* Fixing this would require UNICAST/NAKACK-style reliability between sites (i.e. changes to JGroups)
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)