[JBoss JIRA] (ISPN-9066) Successfully configured Slave node with no server configured is not shown in the list of hosts
by Vladimir Blagojevic (JIRA)
[ https://issues.jboss.org/browse/ISPN-9066?page=com.atlassian.jira.plugin.... ]
Vladimir Blagojevic updated ISPN-9066:
--------------------------------------
Status: Open (was: New)
> Successfully configured Slave node with no server configured is not shown in the list of hosts
> -----------------------------------------------------------------------------------------------
>
> Key: ISPN-9066
> URL: https://issues.jboss.org/browse/ISPN-9066
> Project: Infinispan
> Issue Type: Bug
> Components: Console
> Environment: * Mac OS X
> * Out of the box Infinispan setup with no customization of domain and host XML files
> Reporter: Vladimir Blagojevic
> Assignee: Vladimir Blagojevic
> Attachments: Screen Shot 2018-04-10 at 4.06.46 PM.png
>
>
> Consider a two-host setup (master and slave), where the master host runs the domain controller and a host controller, while the slave host runs only its host controller. If the slave's host.xml initially has no servers configured, then despite successful registration with the domain controller running on the master, the slave host does not appear in the drop-down of the *Add Node* feature on the *Clusters → Cluster* page.
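For reference, the "no server configured" state described above corresponds to a slave host.xml whose `servers` element is empty. A trimmed sketch follows (element names follow the WildFly-style host.xml schema used by Infinispan Server; the host name, master address property, and port are placeholders, not taken from the report):

```xml
<!-- Illustrative slave host.xml in the "no server configured" state -->
<host name="slave">
    <domain-controller>
        <!-- registers with the domain controller running on the master host -->
        <remote host="${jboss.domain.master.address}" port="9999"/>
    </domain-controller>
    <!-- empty: no servers are defined yet on this host -->
    <servers/>
</host>
```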
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
6 years, 9 months
[JBoss JIRA] (HRJS-53) nodeJS client expired certificates
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/HRJS-53?page=com.atlassian.jira.plugin.sy... ]
Tristan Tarrant updated HRJS-53:
--------------------------------
Security: (was: Red Hat Internal)
> nodeJS client expired certificates
> ----------------------------------
>
> Key: HRJS-53
> URL: https://issues.jboss.org/browse/HRJS-53
> Project: Infinispan Javascript client
> Issue Type: Bug
> Affects Versions: 0.4.0
> Reporter: Tristan Tarrant
> Assignee: Galder Zamarreño
>
> The certificates used by the nodeJS client tests have expired.
> 4 tests failed with the following error:
> {code}
> Error: certificate has expired
> at Error (native)
> at TLSSocket.<anonymous> (_tls_wrap.js:1092:38)
> at emitNone (events.js:86:13)
> at TLSSocket.emit (events.js:185:7)
> at TLSSocket._finishInit (_tls_wrap.js:610:8)
> at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:440:38)
> {code}
[JBoss JIRA] (ISPN-9062) JGroupsTransport should only send messages to nodes in the cluster view
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-9062?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-9062:
-------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/5914
> JGroupsTransport should only send messages to nodes in the cluster view
> -----------------------------------------------------------------------
>
> Key: ISPN-9062
> URL: https://issues.jboss.org/browse/ISPN-9062
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.2.1.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 9.2.2.Final, 9.3.0.Alpha1
>
>
> {{JGroupsTransport}} only waits for responses from nodes in the JGroups cluster view, but it still sends messages to all the nodes specified as targets. The idea was to optimize the common case by avoiding a {{HashSet.contains()}} call.
> However, when a node is not in the view, messages to it still pass through the entire JGroups stack, and UNICAST3 keeps those messages in a send table for a long time ({{UNICAST3.conn_expiry_timeout}}, changed by ISPN-9038 from {{0}} (unlimited) to the JGroups default of 2 minutes). Because each non-member gets its own send table, a potentially unlimited number of messages to non-members makes it much harder to estimate memory usage.
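A minimal sketch of the proposed check, in plain Java with made-up names (this is not the actual {{JGroupsTransport}} code): filter the target list against the current cluster view before sending, accepting the cost of the {{contains()}} call the old code avoided.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch of the proposed behavior: drop targets that are not in the
// current cluster view instead of letting their messages accumulate in
// per-target UNICAST3 send tables. All names here are illustrative.
class ViewFilteredSender {
    private volatile Set<String> currentView;

    ViewFilteredSender(Set<String> initialView) {
        this.currentView = initialView;
    }

    // Returns only the targets that are members of the current view;
    // messages to the remaining targets would simply not be sent.
    List<String> filterTargets(List<String> targets) {
        Set<String> view = currentView; // read the volatile field once
        return targets.stream()
                      .filter(view::contains) // the contains() check the old code avoided
                      .collect(Collectors.toList());
    }
}
```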
[JBoss JIRA] (ISPN-9067) Tests should set the node name even for non-clustered cache manager
by Dan Berindei (JIRA)
Dan Berindei created ISPN-9067:
----------------------------------
Summary: Tests should set the node name even for non-clustered cache manager
Key: ISPN-9067
URL: https://issues.jboss.org/browse/ISPN-9067
Project: Infinispan
Issue Type: Task
Components: Test Suite - Core
Affects Versions: 9.2.1.Final
Reporter: Dan Berindei
Assignee: Dan Berindei
Fix For: 9.3.0.Alpha1
The node name is included in the thread name by all the internal executors, but non-clustered cache managers don't have a node name, and that makes it harder to filter log messages for a particular test.
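One way to get a node name into thread names, sketched as a plain-Java {{ThreadFactory}} with illustrative names (the real Infinispan executors derive the prefix from the transport configuration; this class is not Infinispan API):

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: prefix executor thread names with a node name so
// that log lines can be filtered per test, even for non-clustered
// cache managers that would otherwise produce anonymous thread names.
class NodeNamedThreadFactory implements ThreadFactory {
    private final String nodeName;
    private final AtomicInteger counter = new AtomicInteger();

    NodeNamedThreadFactory(String nodeName) {
        this.nodeName = nodeName;
    }

    @Override
    public Thread newThread(Runnable r) {
        // e.g. "MyTest-t1" instead of the default "pool-1-thread-1"
        return new Thread(r, nodeName + "-t" + counter.incrementAndGet());
    }
}
```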
[JBoss JIRA] (ISPN-8974) Avoid JBossMarshaller instance caches growing limitless
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-8974?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-8974:
-----------------------------------
Fix Version/s: 8.2.11.Final
> Avoid JBossMarshaller instance caches growing limitless
> -------------------------------------------------------
>
> Key: ISPN-8974
> URL: https://issues.jboss.org/browse/ISPN-8974
> Project: Infinispan
> Issue Type: Bug
> Components: Marshalling
> Affects Versions: 8.2.10.Final
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Labels: downstream_dependency
> Fix For: 8.2.11.Final
>
>
> In the 8.2.x design, JBoss Marshaller marshaller/unmarshaller instances were cached in thread locals to speed things up. Internally, each instance keeps instance caches that never shrink, which can create leaks.
> We should change the code so that, when releasing a marshaller/unmarshaller, if its instance cache has grown beyond a certain size, the marshaller is destroyed and a new one is created on next use. A good size limit would be 1024, given that in 8.2.x we configure the JBoss Marshaller instance count to 32.
> This would require Infinispan code to access and check the instance cache size.
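A rough sketch of the proposed release-time check, using stand-in classes rather than the real JBoss Marshalling API (class and field names here are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the proposed fix: on release, inspect the marshaller's
// instance cache and discard oversized instances so a fresh one (with
// an empty cache) is created on the next acquire.
class MarshallerHolder {
    static final int MAX_CACHE_SIZE = 1024; // the limit suggested in the issue

    // Stand-in for a JBoss Marshalling instance with an internal cache
    // that never shrinks during the marshaller's lifetime.
    static class CachingMarshaller {
        final Map<Object, Integer> instanceCache = new HashMap<>();
    }

    private CachingMarshaller cached; // stand-in for the thread-local slot

    CachingMarshaller acquire() {
        if (cached == null) {
            cached = new CachingMarshaller(); // recreate with an empty cache
        }
        return cached;
    }

    void release(CachingMarshaller m) {
        // The check the issue asks for: destroy marshallers whose
        // instance cache has grown past the limit.
        if (m.instanceCache.size() > MAX_CACHE_SIZE) {
            cached = null;
        }
    }
}
```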
[JBoss JIRA] (ISPN-8974) Avoid JBossMarshaller instance caches growing limitless
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-8974?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-8974:
-----------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> Avoid JBossMarshaller instance caches growing limitless
> -------------------------------------------------------
>
> Key: ISPN-8974
> URL: https://issues.jboss.org/browse/ISPN-8974
> Project: Infinispan
> Issue Type: Bug
> Components: Marshalling
> Affects Versions: 8.2.10.Final
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Labels: downstream_dependency
> Fix For: 8.2.11.Final
>
>
> In the 8.2.x design, JBoss Marshaller marshaller/unmarshaller instances were cached in thread locals to speed things up. Internally, each instance keeps instance caches that never shrink, which can create leaks.
> We should change the code so that, when releasing a marshaller/unmarshaller, if its instance cache has grown beyond a certain size, the marshaller is destroyed and a new one is created on next use. A good size limit would be 1024, given that in 8.2.x we configure the JBoss Marshaller instance count to 32.
> This would require Infinispan code to access and check the instance cache size.
[JBoss JIRA] (ISPN-9066) Successfully configured Slave node with no server configured is not shown in the list of hosts
by Vladimir Blagojevic (JIRA)
[ https://issues.jboss.org/browse/ISPN-9066?page=com.atlassian.jira.plugin.... ]
Vladimir Blagojevic updated ISPN-9066:
--------------------------------------
Description: Consider a two-host setup (master and slave), where the master host runs the domain controller and a host controller, while the slave host runs only its host controller. If the slave's host.xml initially has no servers configured, then despite successful registration with the domain controller running on the master, the slave host does not appear in the drop-down of the *Add Node* feature on the *Clusters → Cluster* page. (was: Consider a two-host setup (master and slave), where the master host runs the domain controller and a host controller, while the slave host runs only its host controller. If the slave's host.xml initially has no servers configured, then despite successful registration with the domain controller running on the master, the slave host does not appear in the drop-down of the *Add Node* feature on the *Clutsers → Cluster* page.)
> Successfully configured Slave node with no server configured is not shown in the list of hosts
> -----------------------------------------------------------------------------------------------
>
> Key: ISPN-9066
> URL: https://issues.jboss.org/browse/ISPN-9066
> Project: Infinispan
> Issue Type: Bug
> Components: Console
> Environment: * Mac OS X
> * Out of the box Infinispan setup with no customization of domain and host XML files
> Reporter: Vladimir Blagojevic
> Assignee: Vladimir Blagojevic
> Attachments: Screen Shot 2018-04-10 at 4.06.46 PM.png
>
>
> Consider a two-host setup (master and slave), where the master host runs the domain controller and a host controller, while the slave host runs only its host controller. If the slave's host.xml initially has no servers configured, then despite successful registration with the domain controller running on the master, the slave host does not appear in the drop-down of the *Add Node* feature on the *Clusters → Cluster* page.
[JBoss JIRA] (ISPN-9066) Successfully configured Slave node with no server configured is not shown in the list of hosts
by Vladimir Blagojevic (JIRA)
[ https://issues.jboss.org/browse/ISPN-9066?page=com.atlassian.jira.plugin.... ]
Vladimir Blagojevic updated ISPN-9066:
--------------------------------------
Security: (was: Red Hat Internal)
> Successfully configured Slave node with no server configured is not shown in the list of hosts
> -----------------------------------------------------------------------------------------------
>
> Key: ISPN-9066
> URL: https://issues.jboss.org/browse/ISPN-9066
> Project: Infinispan
> Issue Type: Bug
> Components: Console
> Environment: * Mac OS X
> * Out of the box Infinispan setup with no customization of domain and host XML files
> Reporter: Vladimir Blagojevic
> Assignee: Vladimir Blagojevic
> Attachments: Screen Shot 2018-04-10 at 4.06.46 PM.png
>
>
> Consider a two-host setup (master and slave), where the master host runs the domain controller and a host controller, while the slave host runs only its host controller. If the slave's host.xml initially has no servers configured, then despite successful registration with the domain controller running on the master, the slave host does not appear in the drop-down of the *Add Node* feature on the *Clusters → Cluster* page.
[JBoss JIRA] (ISPN-9064) SiteManualSwitchTest.testManualClusterSwitch randomly fails
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-9064?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño commented on ISPN-9064:
----------------------------------------
Last known failure: https://ci.infinispan.org/job/Infinispan/job/master/558/testReport/junit/...
> SiteManualSwitchTest.testManualClusterSwitch randomly fails
> -----------------------------------------------------------
>
> Key: ISPN-9064
> URL: https://issues.jboss.org/browse/ISPN-9064
> Project: Infinispan
> Issue Type: Bug
> Components: Remote Protocols
> Reporter: Galder Zamarreño
> Labels: testsuite_stability
> Fix For: 9.3.0.Beta1, 9.3.0.Final
>
>
> {code}
> java.lang.AssertionError: expected:<1> but was:<0>
> at org.infinispan.client.hotrod.xsite.AbstractHotRodSiteFailoverTest.assertSiteHit(AbstractHotRodSiteFailoverTest.java:146)
> at org.infinispan.client.hotrod.xsite.SiteManualSwitchTest.assertSingleSiteHit(SiteManualSwitchTest.java:47)
> at org.infinispan.client.hotrod.xsite.SiteManualSwitchTest.testManualClusterSwitch(SiteManualSwitchTest.java:37)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}