[infinispan-issues] [JBoss JIRA] (ISPN-3876) TcpTransportFactory stores failed SocketAddress in RequestBalancingStrategy

Patrick Seeber (JIRA) issues at jboss.org
Tue Jan 28 09:17:28 EST 2014


    [ https://issues.jboss.org/browse/ISPN-3876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12935819#comment-12935819 ] 

Patrick Seeber edited comment on ISPN-3876 at 1/28/14 9:15 AM:
---------------------------------------------------------------

Thank you for your answer!

Indeed, it is not self-healing, and we believe we have found the problem.
The scenario is:

1. Start 2 servers in replicated mode
2. Start a client that connects to both servers correctly
3. Shut down Server 1 for maintenance
4. Perform a client operation; the new topology is committed to the client, and the balancer now knows only Server 2
5. Bring Server 1 back up and shut down Server 2 for maintenance
6. Bring Server 2 back up

Now the client is broken, since the last server it knows (Server 2) won't send any topology changes to the client when it performs operations. The client balancer has no chance to learn about both servers again, which is problematic if communication to the only known server then fails.

To work around this, we would be forced to perform a client operation between steps 5 and 6, but we have 10 clients and do not want to trigger a getCache 10 times every time we shut down a server.

Is there any way to ping the servers from the client at a specific interval, or to inform the client directly when the topology changes?
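Until then, a client-side workaround we are considering is to schedule a cheap periodic operation so that every client keeps receiving topology updates with normal traffic. A minimal sketch of the scheduling pattern (the probe here is a plain Runnable; in our application it would be something like remoteCache.get(probeKey) on the Hot Rod client):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/**
 * Periodically runs a cheap client operation so the server has a chance
 * to push topology updates back to the client. The probe is pluggable;
 * in practice it would be a lightweight Hot Rod request.
 */
public class TopologyProbe {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start(Runnable probe, long periodMillis) {
        scheduler.scheduleAtFixedRate(() -> {
            try {
                probe.run(); // any request lets the server piggyback a topology update
            } catch (RuntimeException e) {
                // ignore: even a failed probe gives the balancer a chance to rotate
            }
        }, 0, periodMillis, TimeUnit.MILLISECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```

This would avoid the manual getCache on every client during maintenance, at the cost of a small amount of background traffic.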

                
> TcpTransportFactory stores failed SocketAddress in RequestBalancingStrategy
> ---------------------------------------------------------------------------
>
>                 Key: ISPN-3876
>                 URL: https://issues.jboss.org/browse/ISPN-3876
>             Project: Infinispan
>          Issue Type: Bug
>          Components: Remote Protocols
>    Affects Versions: 5.2.1.Final, 5.3.0.Final, 6.0.0.Final
>         Environment: Hotrod Client, Java
>            Reporter: Patrick Seeber
>            Assignee: Mircea Markus
>
> The "updateServers" method in the TcpTransportFactory class iterates over all addedServers and adds them to the connection pool if no exceptions are thrown. However, if an exception is thrown, the SocketAddress may not have been added to the connection pool but is still added to the balancer afterwards. The balancer may therefore contain an invalid SocketAddress that is not present in the connection pool.
> In our application, which uses a few distributed caches, we encounter situations where all servers (SocketAddresses) are invalid and the application fails to load or store entries in/from the cache.
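The inconsistency described in the report could be avoided by only handing an address to the balancer once the connection pool has accepted it. A minimal sketch of that idea (the interface names are illustrative and not the actual Infinispan classes):

```java
import java.net.SocketAddress;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class UpdateServersSketch {
    /** Stand-in for the real connection pool; addObject may fail. */
    interface ConnectionPool { void addObject(SocketAddress addr) throws Exception; }
    /** Stand-in for the RequestBalancingStrategy. */
    interface Balancer { void setServers(Collection<SocketAddress> servers); }

    static void updateServers(Collection<SocketAddress> addedServers,
                              List<SocketAddress> knownServers,
                              ConnectionPool pool, Balancer balancer) {
        List<SocketAddress> healthy = new ArrayList<>(knownServers);
        for (SocketAddress addr : addedServers) {
            try {
                pool.addObject(addr);  // may throw if the server is unreachable
                healthy.add(addr);     // only now is it safe for the balancer
            } catch (Exception e) {
                // skip: keeping addr out of the balancer avoids routing
                // requests to an address the pool never accepted
            }
        }
        balancer.setServers(healthy);
    }
}
```

With this ordering, the balancer can only ever contain addresses that are also present in the connection pool.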

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


More information about the infinispan-issues mailing list