[JBoss Messaging] - Messaging gets hung up getting socket
by chip_schoch
JBossAS 4.2.2.GA, JBM 1.4.0.SP1 with remoting 2.2.2.SP5
We have several environments set up with 2 clustered Linux servers and two Windows servers that run a JMS client program. In one environment, we can consistently get the system to hang when running a load test. Below is a stack trace. Once it gets hung up waiting for a thread, nothing else processes messages because of the locks being held. Any ideas about what this means?
Thread: Thread-126 : priority:5, demon:true, threadId:325, threadState:TIMED_WAITING, lockName:java.util.HashSet@42c94f
|
| java.lang.Object.wait(Native Method)
| org.jboss.remoting.transport.bisocket.BisocketClientInvoker.createSocket(BisocketClientInvoker.java:473)
| org.jboss.remoting.transport.socket.MicroSocketClientInvoker.getConnection(MicroSocketClientInvoker.java:801)
| org.jboss.remoting.transport.socket.MicroSocketClientInvoker.transport(MicroSocketClientInvoker.java:551)
| org.jboss.remoting.transport.bisocket.BisocketClientInvoker.transport(BisocketClientInvoker.java:418)
| org.jboss.remoting.MicroRemoteClientInvoker.invoke(MicroRemoteClientInvoker.java:122)
| org.jboss.remoting.Client.invoke(Client.java:1634)
| org.jboss.remoting.Client.invoke(Client.java:548)
| org.jboss.remoting.Client.invokeOneway(Client.java:598)
| org.jboss.remoting.callback.ServerInvokerCallbackHandler.handleCallback(ServerInvokerCallbackHandler.java:826)
| org.jboss.remoting.callback.ServerInvokerCallbackHandler.handleCallbackOneway(ServerInvokerCallbackHandler.java:697)
| org.jboss.jms.server.endpoint.ServerSessionEndpoint.performDelivery(ServerSessionEndpoint.java:1490)
| org.jboss.jms.server.endpoint.ServerSessionEndpoint.handleDelivery(ServerSessionEndpoint.java:1375)
| org.jboss.jms.server.endpoint.ServerConsumerEndpoint.handle(ServerConsumerEndpoint.java:307)
| org.jboss.messaging.core.impl.RoundRobinDistributor.handle(RoundRobinDistributor.java:119)
| org.jboss.messaging.core.impl.MessagingQueue$DistributorWrapper.handle(MessagingQueue.java:582)
| org.jboss.messaging.core.impl.ClusterRoundRobinDistributor.handle(ClusterRoundRobinDistributor.java:79)
| org.jboss.messaging.core.impl.ChannelSupport.deliverInternal(ChannelSupport.java:476)
| org.jboss.messaging.core.impl.MessagingQueue.deliverInternal(MessagingQueue.java:505)
| org.jboss.messaging.core.impl.ChannelSupport.handleInternal(ChannelSupport.java:628)
| org.jboss.messaging.core.impl.ChannelSupport.handle(ChannelSupport.java:144)
| org.jboss.messaging.core.impl.postoffice.MessagingPostOffice.routeInternal(MessagingPostOffice.java:2195)
| org.jboss.messaging.core.impl.postoffice.MessagingPostOffice.route(MessagingPostOffice.java:489)
| org.jboss.jms.server.endpoint.ServerConnectionEndpoint.sendMessage(ServerConnectionEndpoint.java:720)
| org.jboss.jms.server.endpoint.ServerSessionEndpoint.send(ServerSessionEndpoint.java:401)
| org.jboss.jms.server.endpoint.advised.SessionAdvised.org$jboss$jms$server$endpoint$advised$SessionAdvised$send$aop(SessionAdvised.java:87)
| org.jboss.jms.server.endpoint.advised.SessionAdvised$send_7280680627620114891.invokeNext(SessionAdvised$send_7280680627620114891.java)
| org.jboss.jms.server.container.SecurityAspect.handleSend(SecurityAspect.java:157)
| sun.reflect.GeneratedMethodAccessor929.invoke(Unknown Source)
| sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
| java.lang.reflect.Method.invoke(Method.java:585)
| org.jboss.aop.advice.PerInstanceAdvice.invoke(PerInstanceAdvice.java:121)
| org.jboss.jms.server.endpoint.advised.SessionAdvised$send_7280680627620114891.invokeNext(SessionAdvised$send_7280680627620114891.java)
| org.jboss.jms.server.endpoint.advised.SessionAdvised.send(SessionAdvised.java)
| org.jboss.jms.wireformat.SessionSendRequest.serverInvoke(SessionSendRequest.java:90)
| org.jboss.jms.server.remoting.JMSServerInvocationHandler.invoke(JMSServerInvocationHandler.java:143)
| org.jboss.remoting.ServerInvoker.invoke(ServerInvoker.java:809)
| org.jboss.remoting.ServerInvoker$1.run(ServerInvoker.java:1815)
| org.jboss.jms.server.remoting.DirectThreadPool.run(DirectThreadPool.java:63)
| org.jboss.remoting.ServerInvoker.handleOnewayInvocation(ServerInvoker.java:1826)
| org.jboss.remoting.ServerInvoker.invoke(ServerInvoker.java:758)
| org.jboss.remoting.transport.local.LocalClientInvoker.invoke(LocalClientInvoker.java:101)
| org.jboss.remoting.Client.invoke(Client.java:1634)
| org.jboss.remoting.Client.invoke(Client.java:548)
| org.jboss.remoting.Client.invokeOneway(Client.java:598)
| org.jboss.remoting.Client.invokeOneway(Client.java:786)
| org.jboss.remoting.Client.invokeOneway(Client.java:776)
| org.jboss.jms.client.delegate.DelegateSupport.doInvoke(DelegateSupport.java:178)
| org.jboss.jms.client.delegate.DelegateSupport.doInvokeOneway(DelegateSupport.java:163)
| org.jboss.jms.client.delegate.ClientSessionDelegate.org$jboss$jms$client$delegate$ClientSessionDelegate$send$aop(ClientSessionDelegate.java:478)
| org.jboss.jms.client.delegate.ClientSessionDelegate$send_6145266547759487588.invokeNext(ClientSessionDelegate$send_6145266547759487588.java)
| org.jboss.jms.client.container.SessionAspect.handleSend(SessionAspect.java:632)
| org.jboss.aop.advice.org.jboss.jms.client.container.SessionAspect28.invoke(SessionAspect28.java)
| org.jboss.jms.client.delegate.ClientSessionDelegate$send_6145266547759487588.invokeNext(ClientSessionDelegate$send_6145266547759487588.java)
| org.jboss.jms.client.container.FailoverValveInterceptor.invoke(FailoverValveInterceptor.java:92)
| org.jboss.aop.advice.PerInstanceInterceptor.invoke(PerInstanceInterceptor.java:105)
| org.jboss.jms.client.delegate.ClientSessionDelegate$send_6145266547759487588.invokeNext(ClientSessionDelegate$send_6145266547759487588.java)
| org.jboss.jms.client.container.ClosedInterceptor.invoke(ClosedInterceptor.java:170)
| org.jboss.aop.advice.PerInstanceInterceptor.invoke(PerInstanceInterceptor.java:105)
| org.jboss.jms.client.delegate.ClientSessionDelegate$send_6145266547759487588.invokeNext(ClientSessionDelegate$send_6145266547759487588.java)
| org.jboss.jms.client.delegate.ClientSessionDelegate.send(ClientSessionDelegate.java)
| org.jboss.jms.client.container.ProducerAspect.handleSend(ProducerAspect.java:266)
| org.jboss.aop.advice.org.jboss.jms.client.container.ProducerAspect39.invoke(ProducerAspect39.java)
| org.jboss.jms.client.delegate.ClientProducerDelegate$send_3961598017717988886.invokeNext(ClientProducerDelegate$send_3961598017717988886.java)
| org.jboss.jms.client.container.ClosedInterceptor.invoke(ClosedInterceptor.java:170)
| org.jboss.aop.advice.PerInstanceInterceptor.invoke(PerInstanceInterceptor.java:105)
| org.jboss.jms.client.delegate.ClientProducerDelegate$send_3961598017717988886.invokeNext(ClientProducerDelegate$send_3961598017717988886.java)
| org.jboss.jms.client.delegate.ClientProducerDelegate.send(ClientProducerDelegate.java)
| org.jboss.jms.client.JBossMessageProducer.send(JBossMessageProducer.java:164)
| org.jboss.jms.client.JBossMessageProducer.send(JBossMessageProducer.java:207)
| org.jboss.jms.client.JBossMessageProducer.send(JBossMessageProducer.java:145)
| com.eLynx.Messaging.MessageSender.sendTextMessage(MessageSender.java:410)
| com.eLynx.Messaging.MessageSender.sendXmlRequest(MessageSender.java:521)
| com.eLynx.Messaging.XmlMessageSender.request(XmlMessageSender.java:166)
| com.eLynx.Imaging.USignContainerImager.doImaging(USignContainerImager.java:194)
| com.eLynx.PackageProcessor.USignPackageProcessor.imagePackage(USignPackageProcessor.java:558)
| com.eLynx.PackageProcessor.USignPackageProcessor.executeState(USignPackageProcessor.java:440)
| com.eLynx.PackageProcessor.USignPackageProcessor.resume(USignPackageProcessor.java:907)
| com.eLynx.PackageProcessor.USignPackageProcessor.startPackageProcessing(USignPackageProcessor.java:1081)
| com.eLynx.Service.BpmExecutorMessageHandler.beginProcess(BpmExecutorMessageHandler.java:83)
| com.eLynx.Service.BpmExecutorMessageHandler.processMessage(BpmExecutorMessageHandler.java:230)
| com.eLynx.Messaging.MessageReceiverHandler.onMessage(MessageReceiverHandler.java:147)
| org.jboss.jms.client.container.ClientConsumer.callOnMessage(ClientConsumer.java:157)
| org.jboss.jms.client.container.ClientConsumer$ListenerRunner.run(ClientConsumer.java:965)
| EDU.oswego.cs.dl.util.concurrent.QueuedExecutor$RunLoop.run(QueuedExecutor.java:89)
| java.lang.Thread.run(Thread.java:595)
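For context, the blocked frame is in the bisocket transport: the server-side callback thread is waiting in BisocketClientInvoker.createSocket for a socket back to the client, and the transport's behavior here is governed by the invoker attributes in deploy/jboss-messaging.sar/remoting-bisocket-service.xml. A hedged sketch of the relevant attributes (names from the stock JBM 1.4 bisocket config; the values shown are illustrative starting points for experimentation, not a known fix):

```xml
<!-- Illustrative fragment only; these attributes belong inside the existing
     Connector MBean's invoker config in remoting-bisocket-service.xml. -->
<!-- 0 means wait indefinitely; a positive value (ms) bounds waits like the one in the trace -->
<attribute name="timeout" isParam="true">0</attribute>
<!-- size of the pooled client socket connections; exhaustion can park threads in getConnection() -->
<attribute name="clientMaxPoolSize" isParam="true">200</attribute>
<!-- retry counts for establishing and using connections -->
<attribute name="numberOfRetries" isParam="true">10</attribute>
<attribute name="numberOfCallRetries" isParam="true">2</attribute>
```

With timeout at its default of 0, a callback socket that never materializes (e.g., a wedged or network-partitioned client) can hold the delivering thread, and its locks, indefinitely, which matches the TIMED_WAITING/hang pattern described above.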
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4140268#4140268
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4140268
16 years, 10 months
[JBoss Messaging] - OutOfMemory exception in JMS server
by kgreiner
I've set up two applications (servers) running in the same JBoss 4.2.2.GA instance. One acts as the JMS client while the other is, obviously, the server. In my test, the client simply generates a stream of object messages (about 50-100K per message) to a queue, where they should be persisted by the JMS server to MySQL. A second client, not part of this test, would consume these messages at a later time.
I'm running Messaging 1.4.0.SP3 and I believe that I've fixed all of the installation problems related to port bindings, remoting, etc. as I am able to send and receive small numbers of messages between my two applications.
The first production-like test involving thousands of messages crashed the JMS server with an OutOfMemory exception after only a few hundred messages were sent to the queue. I increased the max heap size to 160M, which, given that the server is simply persisting messages from one client, seems quite sufficient. That only delayed the error.
I then found that, after several aborted tests, I could not restart the JMS server due to an OutOfMemory error while opening the messaging queue. Apparently, starting the queue resulted in all of the messages in the database (about 1,500) being loaded into memory. Setting the FullSize attribute to 100 in my queue's destination fixed this problem. (I am certain that I could have used a larger value; I picked 100 to test whether the attribute would resolve my problem rather than to optimize future performance.)
When I resumed testing, I found that my JMS server again failed with an OutOfMemory exception. I then tried setting the DownCacheSize attribute to 100 but that didn't help.
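For reference, FullSize, PageSize, and DownCacheSize are set per destination in the queue's deployment descriptor. A minimal sketch (the MBean and queue names are placeholders; the intended constraint is DownCacheSize &lt;= PageSize &lt;= FullSize):

```xml
<mbean code="org.jboss.jms.server.destination.QueueService"
       name="jboss.messaging.destination:service=Queue,name=MyQueue"
       xmbean-dd="xmdesc/Queue-xmbean.xml">
  <depends optional-attribute-name="ServerPeer">jboss.messaging:service=ServerPeer</depends>
  <depends>jboss.messaging:service=PostOffice</depends>
  <!-- max message references held in memory before the queue pages to storage -->
  <attribute name="FullSize">100</attribute>
  <!-- batch size used when pulling paged messages back into memory -->
  <attribute name="PageSize">50</attribute>
  <!-- write-behind cache size used when paging messages out to storage -->
  <attribute name="DownCacheSize">50</attribute>
</mbean>
```

Note that DownCacheSize bounds the paging write-behind cache, not the total number of in-memory messages; FullSize is the attribute that caps in-memory references.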
I've since run my JMS server under JProfiler and found what appears to be a problem. The server presently has 180 JBossObjectMessage instances in memory (I stopped the test when the server was nearly out of memory), which is substantially higher than the 100 I expected after lowering DownCacheSize.
Each JBossObjectMessage has two references to it. The first is a weak reference from the SimpleMessageStore. That reference is exactly what I hoped to find. The second is a hard reference via the messageRefs field of the org.jboss.messaging.core.impl.MessagingQueue class. That field contains a list of org.jboss.messaging.core.impl.message.SimpleMessageReference instances, each of which holds a hard reference, via its message field, to a JBossObjectMessage instance.
Should SimpleMessageReference reference the message directly, or hold a key into the SimpleMessageStore?
What should I try next to resolve this problem?
Should I be setting FullSize and/or DownCacheSize? Documentation for either is rather lacking right now.
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4140263#4140263
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4140263
[Installation, Configuration & DEPLOYMENT] - Silent application server crash with jboss-4.0.3SP1
by sachin.mohan@kbcfp.com
On Solaris x86 5.10
We have a production JBoss deployment running in three different configurations:
Cluster 1: 2 AS nodes - JMS hosting (just Queue and Topics registered here)
Cluster 2: 2 AS nodes - Core Java service (Remote method invocations over JMS mostly)
Cluster 3: 3 AS nodes - Enterprise java services hosting mostly entity beans, one stateful session bean and a few stateless session beans
Cluster 2 is a purely symmetrical cluster. Cluster 3, on the other hand, is not, as it keeps some state that is not transferable amongst servers. For some reason, one or another of the Cluster 3 nodes crashes silently for no apparent reason. Clusters 1 and 2 never have such problems.
There is no stack trace and no symptom as to why this is happening; we only know when users complain about losing connectivity. We tend to blame it on garbage collection, as it happens around the time the Java concurrent GC's promotion fails and a stop-the-world collection occurs. Without any manual intervention (like a stop being called), JBoss starts undeploying all packages.
Is this a known issue? If not, can anyone suggest ideas for debugging this weird occurrence? Every week at least one of the three servers goes down.
I will add the logs (GC, console, and server) as soon as the next crash happens (hopefully it will happen again this week, as usual).
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4140260#4140260
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4140260