[JBoss JIRA] Created: (EJBTHREE-2066) factoryProxy.toString() fails if called before registration in the Dispatcher
by Carlo de Wolf (JIRA)
factoryProxy.toString() fails if called before registration in the Dispatcher
-----------------------------------------------------------------------------
Key: EJBTHREE-2066
URL: https://jira.jboss.org/jira/browse/EJBTHREE-2066
Project: EJB 3.0
Issue Type: Bug
Components: core
Environment: Running with -Dsun.io.serialization.extendedDebugInfo=true
Reporter: Carlo de Wolf
Assignee: Carlo de Wolf
Fix For: as-int 1.1.23, EJB3_1 1.0.6
org.jboss.aop.NotFoundInDispatcherException: Object with oid: org.jboss.ejb3.test.consumer.QueueTestRemotePRODUCER_FACTORY was not found in the Dispatcher
at org.jboss.aop.Dispatcher.invoke(Dispatcher.java:85)
at org.jboss.aspects.remoting.AOPRemotingInvocationHandler.invoke(AOPRemotingInvocationHandler.java:82)
at org.jboss.remoting.ServerInvoker.invoke(ServerInvoker.java:897)
at org.jboss.remoting.transport.local.LocalClientInvoker.invoke(LocalClientInvoker.java:106)
at org.jboss.remoting.Client.invoke(Client.java:1927)
at org.jboss.remoting.Client.invoke(Client.java:770)
at org.jboss.aspects.remoting.InvokeRemoteInterceptor.invoke(InvokeRemoteInterceptor.java:60)
at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102)
at org.jboss.aspects.remoting.IsLocalInterceptor.invoke(IsLocalInterceptor.java:48)
at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102)
at org.jboss.aspects.remoting.PojiProxy.invoke(PojiProxy.java:62)
at $Proxy128.toString(Unknown Source)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1379)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
at java.rmi.MarshalledObject.<init>(MarshalledObject.java:101)
at org.jnp.interfaces.MarshalledValuePair.<init>(MarshalledValuePair.java:65)
at org.jnp.interfaces.NamingContext.createMarshalledValuePair(NamingContext.java:1429)
at org.jnp.interfaces.NamingContext.rebind(NamingContext.java:569)
at org.jnp.interfaces.NamingContext.rebind(NamingContext.java:542)
at javax.naming.InitialContext.rebind(InitialContext.java:408)
at org.jboss.util.naming.Util.rebind(Util.java:131)
at org.jboss.util.naming.Util.rebind(Util.java:117)
at org.jboss.ejb3.mdb.RemoteProducerFactory.start(RemoteProducerFactory.java:89)
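The trace shows that with extendedDebugInfo enabled, ObjectOutputStream calls toString() on the proxy while it is being bound into JNDI, and the proxy routes even toString() through the Dispatcher, where the oid is not registered yet. A minimal sketch of the failure mode and one possible fix, using a hypothetical handler and a plain Map standing in for org.jboss.aop.Dispatcher (none of these names are the actual EJB3/AOP classes):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.util.Map;

/**
 * Hypothetical model of a PojiProxy-style handler: every method call is
 * dispatched by oid. If toString() is also dispatched, it fails whenever it
 * runs before the oid is registered (e.g. during serialization debugging).
 * Answering toString() locally avoids the premature dispatch.
 */
public class OidProxyHandler implements InvocationHandler {
    private final String oid;
    private final Map<String, Object> dispatcher; // stand-in for the real Dispatcher

    public OidProxyHandler(String oid, Map<String, Object> dispatcher) {
        this.oid = oid;
        this.dispatcher = dispatcher;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        // The sketch of a fix: handle toString() locally instead of dispatching.
        if ("toString".equals(method.getName()) && method.getParameterCount() == 0)
            return "Proxy[oid=" + oid + "]";
        Object target = dispatcher.get(oid);
        if (target == null)
            throw new IllegalStateException(
                "Object with oid: " + oid + " was not found in the Dispatcher");
        return method.invoke(target, args);
    }
}
```

With this guard, serializing the proxy before registration no longer triggers a remote invocation, since java.lang.reflect dynamic proxies route toString() through the handler like any other method.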
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: https://jira.jboss.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
14 years, 9 months
[JBoss JIRA] Created: (JGRP-1171) Address cache in TP protocol never removes inactive members, which causes enormous delays sending multicast messages using TCP
by Fedor Cherepanov (JIRA)
Address cache in TP protocol never removes inactive members, which causes enormous delays sending multicast messages using TCP
------------------------------------------------------------------------------------------------------------------------------
Key: JGRP-1171
URL: https://jira.jboss.org/jira/browse/JGRP-1171
Project: JGroups
Issue Type: Bug
Affects Versions: 2.9, 2.8
Reporter: Fedor Cherepanov
Assignee: Bela Ban
org.jgroups.blocks.LazyRemovalCache used in org.jgroups.protocols.TP removes marked cache items only when it's size exceeds max_elements size, which is set to 20 in TP.
I'm using JGroups (tried 2.8 and 2.9) with JBoss Cache 3.2.1 over the TCP protocol. I investigated why replication time increases by about a second (from around 50 ms initially) whenever any node leaves the cluster.
Here's what I found:
When a node leaves the cluster and the view changes:
1. TP calls logical_addr_cache.retainAll(members);
2. LazyRemovalCache.retainAll updates the map, setting removable flag to true on those members that are not in the view.
3. LazyRemovalCache.checkMaxSizeExceeded NEVER removes them from the cache because its size is always less than max_elements, which is 20.
Then, when a multicast message is sent:
1. BasicTCP.sendMulticast calls TP.sendToAllPhysicalAddresses
2. TP.sendToAllPhysicalAddresses iterates through all values in logical_addr_cache calling sendUnicast for each
3. logical_addr_cache contains all the nodes, including the killed ones, and TP tries to connect to each of them, which causes enormous delays
This causes replication time to increase by one connection timeout for every node removed from the cluster.
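The eviction behavior described above can be reproduced with a simplified model of the cache. This is a hypothetical reimplementation for illustration, not the actual JGroups source; the method names mirror the ones in the report:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

/** Simplified model of LazyRemovalCache: retainAll() only marks entries,
 *  and eviction runs only when the cache grows past maxElements (20 in TP). */
public class SimpleLazyCache {
    private static class Entry {
        final String physicalAddr;
        boolean removable;
        Entry(String addr) { this.physicalAddr = addr; }
    }

    private final Map<String, Entry> map = new LinkedHashMap<>();
    private final int maxElements;

    public SimpleLazyCache(int maxElements) { this.maxElements = maxElements; }

    public void add(String logicalAddr, String physicalAddr) {
        map.put(logicalAddr, new Entry(physicalAddr));
        checkMaxSizeExceeded();
    }

    /** Marks entries not in 'members' as removable -- but does not evict them. */
    public void retainAll(Set<String> members) {
        for (Map.Entry<String, Entry> e : map.entrySet())
            if (!members.contains(e.getKey()))
                e.getValue().removable = true;
        checkMaxSizeExceeded();
    }

    /** Eviction only fires when the size threshold is exceeded. */
    private void checkMaxSizeExceeded() {
        if (map.size() <= maxElements)
            return; // small clusters never reach the eviction path
        Iterator<Map.Entry<String, Entry>> it = map.entrySet().iterator();
        while (it.hasNext())
            if (it.next().getValue().removable)
                it.remove();
    }

    public int size() { return map.size(); }
}
```

With a cluster smaller than 20 nodes, any number of dead members stays in the cache indefinitely, so sendToAllPhysicalAddresses keeps paying a connection timeout per dead member.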
[JBoss JIRA] Created: (JGRP-1147) TP: make logical_addr_cache's timeout configurable
by Bela Ban (JIRA)
TP: make logical_addr_cache's timeout configurable
--------------------------------------------------
Key: JGRP-1147
URL: https://jira.jboss.org/jira/browse/JGRP-1147
Project: JGroups
Issue Type: Feature Request
Reporter: Bela Ban
Assignee: Bela Ban
Fix For: 2.9
[email Maher Al Kilani]
I am using the JGroups lib. I noticed that when one of the nodes goes down,
the LazyRemovalCache class tags it as removable and all the nodes stay in
logical_addr_cache (in TP.java). So when sending a message we are actually
sending to all the removable ones as well as the alive ones. How can I
configure the LazyRemovalCache class to remove the dead nodes rather than
just tagging them as removable?
Bela: let's make the timeout and max size configurable. Plus, possibly add a getter so we can hard-flush the cache programmatically.
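A sketch of what the requested change could look like: marked entries carry a timestamp and are evicted once they are older than a configurable timeout, plus a hard-flush method for programmatic removal. All names here are hypothetical, not the final JGroups API:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch of a LazyRemovalCache with a configurable removal timeout
 *  and a hard-flush operation, as proposed in JGRP-1147. */
public class ConfigurableLazyCache {
    private static class Entry {
        final String physicalAddr;
        long removableSince = -1; // -1 == still a live member
        Entry(String addr) { this.physicalAddr = addr; }
    }

    private final Map<String, Entry> map = new LinkedHashMap<>();
    private final int maxElements;
    private final long timeoutMs; // configurable, per this feature request

    public ConfigurableLazyCache(int maxElements, long timeoutMs) {
        this.maxElements = maxElements;
        this.timeoutMs = timeoutMs;
    }

    public void add(String logical, String physical) {
        map.put(logical, new Entry(physical));
    }

    /** Tags an entry as removable and records when that happened. */
    public void markRemovable(String logical) {
        Entry e = map.get(logical);
        if (e != null) e.removableSince = System.currentTimeMillis();
    }

    /** Evicts marked entries older than the timeout, or any marked
     *  entry when the cache is over its max size. */
    public void sweep() {
        long now = System.currentTimeMillis();
        boolean overSize = map.size() > maxElements;
        Iterator<Entry> it = map.values().iterator();
        while (it.hasNext()) {
            Entry e = it.next();
            if (e.removableSince >= 0
                    && (overSize || now - e.removableSince >= timeoutMs))
                it.remove();
        }
    }

    /** Hard flush: drop every marked entry regardless of timeout or size. */
    public void flushRemovable() {
        map.values().removeIf(e -> e.removableSince >= 0);
    }

    public int size() { return map.size(); }
}
```

Keeping the lazy marking (rather than removing immediately, as the email suggests) preserves the original design goal of tolerating members that briefly disappear and rejoin, while the timeout bounds how long a dead member can inflate multicast sends.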