[JBoss JIRA] (ISPN-5953) Extract interface from RemoteCacheManager
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-5953?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-5953:
----------------------------------
Status: Resolved (was: Pull Request Sent)
Fix Version/s: 9.0.0.Alpha1
Resolution: Done
> Extract interface from RemoteCacheManager
> -----------------------------------------
>
> Key: ISPN-5953
> URL: https://issues.jboss.org/browse/ISPN-5953
> Project: Infinispan
> Issue Type: Enhancement
> Components: CDI Integration, Integration
> Reporter: Sebastian Łaskawiec
> Assignee: Galder Zamarreño
> Priority: Minor
> Fix For: 9.0.0.Final, 9.0.0.Alpha1
>
>
> Currently RemoteCacheManager is a concrete class, which is problematic when using it with CDI (CDI will register it as a bean). This is very inconvenient because this instance should be created by InfinispanEmbeddedExtension.
> Since this change is not backwards compatible, we need to implement it in ISPN 9.
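A minimal sketch of the idea behind this enhancement, assuming a hypothetical interface name and methods (the real interface extracted in Infinispan 9 may differ): once an interface exists, a CDI producer can expose the interface type instead of the concrete RemoteCacheManager class.

```java
// Hypothetical sketch of "extract interface from a concrete manager".
// The interface name and methods here are illustrative, not Infinispan's API.
public class InterfaceExtraction {

    // Hypothetical extracted interface: clients and CDI depend on this type.
    interface RemoteCacheContainer {
        String getCacheName();
        void stop();
    }

    // The concrete manager now merely implements the interface.
    static class RemoteCacheManagerImpl implements RemoteCacheContainer {
        public String getCacheName() { return "default"; }
        public void stop() { }
    }

    // A CDI extension/producer would create the instance itself and expose
    // only the interface type, e.g.:
    //   @Produces RemoteCacheContainer container() { ... }
    static RemoteCacheContainer create() {
        return new RemoteCacheManagerImpl();
    }
}
```

Because injection points reference the interface, CDI no longer needs to proxy or register the concrete class as a bean.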
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6239) InitialClusterSizeTest.testInitialClusterSizeFail random failures
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6239?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec updated ISPN-6239:
--------------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> InitialClusterSizeTest.testInitialClusterSizeFail random failures
> -----------------------------------------------------------------
>
> Key: ISPN-6239
> URL: https://issues.jboss.org/browse/ISPN-6239
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 8.2.0.Beta2
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_failure
> Fix For: 9.0.0.Final, 9.0.0.Alpha1, 8.2.0.Final, 8.2.0.CR1
>
>
> The test starts 3 nodes concurrently, but configures Infinispan to wait for a cluster of 4 nodes, and expects the nodes to fail to start within {{initialClusterTimeout}} + 1 second.
> However, because of a bug in {{TEST_PING}}, the first 2 nodes each see the other as the coordinator and send a {{JOIN}} request to each other, and it takes 3 seconds to recover and start the cluster properly.
> The bug in {{TEST_PING}} is actually a hack introduced for {{ISPN-5106}}. The problem was that the first node (A) to start would install a view with itself as the single node, but the second node to start (B) would start immediately, and the discovery request from B would reach B's {{TEST_PING}} before it saw the view. That way, B could choose itself as the coordinator based on the order of A's and B's UUIDs, and the cluster would start as 2 partitions. Since most of our tests actually remove {{MERGE3}} from the protocol stack, the partitions would never merge and the test would fail with a timeout.
> I fixed this in {{TEST_PING}} by assuming that the sender of the first discovery response is a coordinator when there is a single response. This worked because all but a few tests start their managers sequentially; however, it sometimes introduces this 3-second delay when nodes start in parallel.
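The coordinator-selection heuristic described above can be sketched roughly as follows. This is a hypothetical simplification, not the actual {{TEST_PING}} code; plain strings stand in for JGroups addresses/UUIDs.

```java
import java.util.List;

// Sketch of the discovery heuristic: with exactly one discovery response,
// trust the responder as coordinator; otherwise fall back to ordering the
// addresses, which is where two concurrently starting nodes can each pick
// themselves and form 2 partitions.
public class CoordinatorChoice {

    static String chooseCoordinator(String self, List<String> responders) {
        if (responders.size() == 1) {
            // Single responder: assume it is already the coordinator.
            return responders.get(0);
        }
        // Fallback: smallest address wins (stand-in for UUID ordering).
        String best = self;
        for (String r : responders) {
            if (r.compareTo(best) < 0) {
                best = r;
            }
        }
        return best;
    }
}
```

With sequential startup the single-response branch is taken and the heuristic works; with parallel startup both nodes can land in the fallback before seeing each other's view, producing the split described above.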
--
[JBoss JIRA] (ISPN-6390) Transport initial cluster timeout resets whenever a new node joins
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6390?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec updated ISPN-6390:
--------------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> Transport initial cluster timeout resets whenever a new node joins
> ------------------------------------------------------------------
>
> Key: ISPN-6390
> URL: https://issues.jboss.org/browse/ISPN-6390
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 8.2.0.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 9.0.0.Final, 9.0.0.Alpha1, 8.2.1.Final
>
>
> {{JGroupsTransport.waitForInitialNodes()}} calls {{waitForView(currentViewId + 1, timeout, MILLISECONDS)}} repeatedly, and doesn't adjust the timeout when a new view is installed.
> This means a node joining or leaving just before the timeout expires will effectively double the timeout.
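A deadline computed once up front avoids the reset, because each wait iteration only gets the time remaining rather than the full timeout. This is a hypothetical sketch, not Infinispan's actual {{JGroupsTransport}} code.

```java
import java.util.concurrent.TimeUnit;
import java.util.function.IntSupplier;

// Sketch of deadline-based waiting: the deadline is fixed when the wait
// starts, so a view installed just before expiry cannot extend the total
// wait beyond the configured timeout.
public class DeadlineWait {

    // Time left before the deadline, clamped to zero.
    static long remainingMillis(long deadlineNanos) {
        long left = TimeUnit.NANOSECONDS.toMillis(deadlineNanos - System.nanoTime());
        return Math.max(0, left);
    }

    // Waits until the cluster reaches the expected size or the single,
    // fixed deadline passes. Returns true if the size was reached.
    static boolean waitForSize(IntSupplier clusterSize, int expected, long timeoutMillis) {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
        while (clusterSize.getAsInt() < expected) {
            long left = remainingMillis(deadline);
            if (left == 0) {
                return false; // timed out: the timeout was never restarted
            }
            try {
                Thread.sleep(Math.min(10, left)); // poll; a real impl would wait on view changes
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }
}
```

The buggy behavior corresponds to recomputing `deadline` inside the loop on every new view, which is what lets a late join double the effective timeout.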
--
[JBoss JIRA] (ISPN-6391) Cache managers failing to start do not stop global components
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6391?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec updated ISPN-6391:
--------------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> Cache managers failing to start do not stop global components
> -------------------------------------------------------------
>
> Key: ISPN-6391
> URL: https://issues.jboss.org/browse/ISPN-6391
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 8.2.0.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 9.0.0.Final, 9.0.0.Alpha1, 8.2.1.Final
>
>
> If one of the global components fails to start, {{GlobalComponentRegistry.start()}} removes the volatile components, but it doesn't call {{stop()}} on those components.
> The most likely reason for a global component start failure is a timeout in {{JGroupsTransport.waitForInitialNodes()}}. After such a timeout, the transport isn't stopped, so the channel's sockets and threads are only freed after a few GC cycles (via finalization).
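A sketch of the shape of the fix, with hypothetical component names (not the actual {{GlobalComponentRegistry}} code): when a later component fails to start, the components that already started are stopped in reverse order before the failure propagates, instead of merely being dropped and left to finalization.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Sketch: start components in order; on failure, stop the already-started
// ones in reverse order before rethrowing, so sockets/threads are released
// immediately rather than after a few GC cycles.
public class SafeStart {

    interface Component {
        void start();
        void stop();
    }

    // Test helper that records lifecycle events, used only to make the
    // sketch observable.
    static Component recording(String name, boolean failOnStart, List<String> log) {
        return new Component() {
            public void start() {
                if (failOnStart) {
                    throw new IllegalStateException(name + " failed to start");
                }
                log.add("start " + name);
            }
            public void stop() {
                log.add("stop " + name);
            }
        };
    }

    static void startAll(List<Component> components) {
        Deque<Component> started = new ArrayDeque<>();
        try {
            for (Component c : components) {
                c.start();
                started.push(c);
            }
        } catch (RuntimeException e) {
            // Roll back: stop in reverse start order, swallowing stop errors
            // so the original start failure is the one that propagates.
            while (!started.isEmpty()) {
                try {
                    started.pop().stop();
                } catch (RuntimeException ignored) {
                }
            }
            throw e;
        }
    }
}
```

In the scenario from the report, the transport would play the role of an already-started component: a later timeout would trigger its `stop()`, closing the channel's sockets and threads deterministically.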
--