[JBoss Seam] - Interportlet Communication Problem
by bulloncito
Hi there. There are a few ways to interact with other portlets around, however most of them rely on the session or on some messaging boxes (http://www.doc.ic.ac.uk/~mo197/portlets/portlet_messaging/), and I'm thinking they are not multi-browser-window/multi-tab safe.
Think about it: browser tab A clicks onto something and sets a few parameters on the session in its action phase. Then the server, for some undisclosed reason, is running slow (which almost never happens, yeah, right), and the user changes tab/window and performs another click on a different/same page/portlet that affects the same variable in the same session. Imagine the sad, sad order in which this plays out: the first click's action occurs, then the second click's action overwrites the first click's values in the session, then all the renders run in no particular order. The first click's renders would be reading params from the session which would no longer be true.
So I was thinking maybe I should pass a lot of parameters per click, all the time, so they are concurrency-safe. Let's say that for each tab I pass the portlet some different categoryId or the like; it would be there for the click's lifecycle, so other tabs would not be affected. Now, the issue here is that request parameters are not propagated to other portlets. Take the categoryId parameter again: it comes from an index portlet and should change the render behaviour of, say, a news portlet or a popular-items portlet. Propagating parameters from the client side is easy; I just rewrote the following:
| package myActions ;
|
| import javax.portlet.RenderRequest ;
| import org.apache.log4j.Logger ;
|
| public class MyActionURLTag extends org.jboss.portal.portlet.taglib.ActionURLTag {
|
|     private static Logger log = Logger.getLogger( MyActionURLTag.class ) ;
|
|     public int doEndTag() {
|         int returnable = 0 ;
|         try {
|             returnable = super.doEndTag() ;
|             RenderRequest request = getRequest() ;
|             // And here's the magic: vars are propagated in a multi-tab/multi-window safe manner
|             pageContext.getOut().print( "&theVarIWant=" + request.getParameter( "theVarIWant" ) ) ;
|         } catch ( Exception e ) {
|             log.debug( "something went wrong" , e ) ;
|         }
|         return returnable ;
|     }
| }
|
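One caveat worth noting in the tag above: the raw concatenation prints the literal string "null" when the parameter is absent and does not URL-encode the value. A small helper along these lines would harden it (a sketch; the class and method names are mine, not from the original post):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Hypothetical helper for the tag above: appends "&name=value" to a URL,
// URL-encoding the value and skipping the parameter when it is absent.
public class ParamAppender {
    public static String append(String url, String name, String value) {
        if (value == null) {
            return url; // don't emit "&name=null" for missing parameters
        }
        try {
            return url + "&" + name + "=" + URLEncoder.encode(value, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            // UTF-8 support is guaranteed by the JVM spec, so this cannot happen
            throw new IllegalStateException(e);
        }
    }
}
```

The tag's print call could then go through this helper instead of concatenating the raw parameter value directly.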
... this trick allows me to perform actions with click-scoped vars, which are concurrency-safe, since each one has its own request and everything.
THE PROBLEM IS: how do I propagate those render/action parameters to other portlets in the same concurrency-safe manner, with slow server responses in mind, where using APPLICATION_SCOPE session vars would be neither safe nor wise to synchronize?
The whole idea is to have render/action parameters which are independent of the session AND can be read from all rendering portlets on the page. The plain old request does the job on the client side, but the portlet container is hiding those nice parameters from me :(
JBoss inter-portlet communication through PortletEventListener seems to work only with WindowEvents, so that route did not seem to solve this problem.
ANY idea would be great.
Thanks.
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=3973616#3973616
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=3973616
19 years, 7 months
[JBossCache] - TcpCacheServer / Transaction propagation
by jzmmek2k2
Dear all,
I am evaluating JBoss Cache and it is really impressive.
But I have a question about the "TcpCacheServer" and the "TransactionManager".
We are planning the following server-structure:
---
2 JBossCache 1.4 instances (standalone, outside of any application-server) with BuddyReplication and one shared storage.
Both instances are running as TcpCacheServer.
---
15 BEA WebLogic 8.1 servers, each having a TcpDelegatingCacheLoader taking part in the BEA JTA transactions.
---
Will the BEA JTA transaction, which is only available on the TcpDelegatingCacheLoader side, be propagated in any way to the 2 TcpCacheServer instances, or is further configuration needed?
I have no TransactionManagerLookup configured (=> Dummy) in the 2 TcpCacheServer instances, because they simply run outside any application server.
What problems can arise without having a transaction on the 2 TcpCacheServer instances?
It would be great if someone could help me with these currently theoretical questions.
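For context, the knob being described is the cache's TransactionManagerLookupClass attribute. A sketch of the two sides follows (the attribute name is the stock JBossCache setting; which lookup class, if any, is right for this topology is exactly the open question):

```xml
<!-- Standalone TcpCacheServer instances (no app server, hence the Dummy): -->
<attribute name="TransactionManagerLookupClass">org.jboss.cache.DummyTransactionManagerLookup</attribute>

<!-- WebLogic side, so the cache wrapping the TcpDelegatingCacheLoader can see
     the BEA JTA transaction; GenericTransactionManagerLookup probes common
     JNDI/TransactionManager locations, assuming the JBossCache version in use
     ships it: -->
<attribute name="TransactionManagerLookupClass">org.jboss.cache.GenericTransactionManagerLookup</attribute>
```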
Regards
Jan
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=3973610#3973610
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=3973610
19 years, 7 months
[EJB 3.0] - How To do EAR Packaging
by chane
Is there a how-to on the different packaging options for my applications with the latest JBoss AS? I have looked through the online documentation and no section jumps out at me as the one I should read.
Basically, I'm trying to figure out how to deploy a bunch of shared dependency jars (around 30 - e.g., seam, apache.commons.*, etc.) which weigh in at around 10MB.
Right now I am bundling them into the ear or in the contained war. However, I have three apps that all share the same jar files. I can put them into the jboss/server/default/lib directory, but I was wondering if there are other options... so that it might be easier to identify/change which jar files are deployed by me instead of included with JBoss (which will be fun if I mix my jar files with those in the lib directory).
So I guess:
- are there other directories I could put jar files into?
- can I create a special ear file that would deploy a bunch of jar files that can be used by all applications?
- other thoughts
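For reference, one related JBoss 4.x mechanism is scoping an EAR's classloading via META-INF/jboss-app.xml, which at least keeps an application's bundled jars from mixing with the jars in the server lib directories. A minimal sketch (the ObjectName-style repository name below is invented):

```xml
<!-- META-INF/jboss-app.xml: gives this EAR its own class-loader repository,
     so its bundled jars take precedence over server-wide copies. -->
<jboss-app>
    <loader-repository>
        com.example:loader=my-shared-libs
        <loader-repository-config>java2ParentDelegation=false</loader-repository-config>
    </loader-repository>
</jboss-app>
```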
Sorry for the long message; there is a lot of information out there and I am hoping for a couple of pointers that I can go read and do more research on.
Thanks in advance,
Chris....
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=3973609#3973609
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=3973609
19 years, 7 months
[Clustering/JBoss] - Cluster Membership after Network Failure
by dfisher
I'm using version 4.0.4 and I can't seem to get my cluster configuration right.
I have 2 nodes, each using the TCP config:
| <Config>
| <TCP bind_addr="X.X.X.1" start_port="7800" loopback="true" conn_expire_time="5000"/>
| <TCPPING initial_hosts="X.X.X.1[7800],X.X.X.2[7800]" port_range="1" timeout="3500"
| num_initial_members="2" up_thread="true" down_thread="true"/>
| <MERGE2 min_interval="5000" max_interval="10000"/>
| <FD_SOCK down_thread="false" up_thread="false"/>
| <FD timeout="2500" shun="true" max_tries="5" up_thread="false" down_thread="false" />
| <VERIFY_SUSPECT timeout="1500" down_thread="false" up_thread="false" />
| <pbcast.NAKACK down_thread="true" up_thread="true" gc_lag="100"
| retransmit_timeout="3000"/>
| <pbcast.STABLE desired_avg_gossip="20000" down_thread="false" up_thread="false" />
| <pbcast.GMS join_timeout="5000" join_retry_timeout="2000" shun="false"
| print_local_addr="true" down_thread="true" up_thread="true"/>
| <pbcast.STATE_TRANSFER up_thread="true" down_thread="true"/>
| </Config>
|
If I pull the network cable from one of the nodes, wait a minute, then plug it back in, the cluster membership is never rebuilt on both nodes.
At that point farming doesn't work and I have to restart one of the nodes.
Here is a snippet of a consolidated server log:
anonymous wrote :
| node-1 2006-09-22 11:18:32,100 INFO [org.jboss.ha.framework.interfaces.HAPartition.lifecycle.DefaultPartition] Suspected member: node-2:7800 (additional data: 17 bytes)
| node-2 2006-09-22 11:18:32,203 INFO [org.jboss.ha.framework.interfaces.HAPartition.DefaultPartition] Suspected member: node-1:7800 (additional data: 17 bytes)
| node-2 2006-09-22 11:18:32,212 INFO [org.jboss.ha.framework.interfaces.HAPartition.lifecycle.DefaultPartition] New cluster view for partition DefaultPartition (id: 4, delta: -1) : [X.X.X.2:-1]
| node-2 2006-09-22 11:18:32,216 INFO [org.jboss.ha.framework.server.DistributedReplicantManagerImpl.DefaultPartition] I am (X.X.X.2:-1) received membershipChanged event:
| node-2 2006-09-22 11:18:32,217 INFO [org.jboss.ha.framework.server.DistributedReplicantManagerImpl.DefaultPartition] Dead members: 1 ([X.X.X.1:-1])
| node-2 2006-09-22 11:18:32,217 INFO [org.jboss.ha.framework.server.DistributedReplicantManagerImpl.DefaultPartition] New Members : 0 ([])
| node-2 2006-09-22 11:18:32,218 INFO [org.jboss.ha.framework.server.DistributedReplicantManagerImpl.DefaultPartition] All Members : 1 ([X.X.X.2:-1])
| node-1 2006-09-22 11:18:34,633 INFO [org.jboss.ha.framework.interfaces.HAPartition.lifecycle.DefaultPartition] New cluster view for partition DefaultPartition (id: 4, delta: -1) : [X.X.X.1:-1]
| node-1 2006-09-22 11:18:34,634 INFO [org.jboss.ha.framework.server.DistributedReplicantManagerImpl.DefaultPartition] I am (X.X.X.1:-1) received membershipChanged event:
| node-1 2006-09-22 11:18:34,635 INFO [org.jboss.ha.framework.server.DistributedReplicantManagerImpl.DefaultPartition] Dead members: 1 ([X.X.X.2:-1])
| node-1 2006-09-22 11:18:34,635 INFO [org.jboss.ha.framework.server.DistributedReplicantManagerImpl.DefaultPartition] New Members : 0 ([])
| node-1 2006-09-22 11:18:34,635 INFO [org.jboss.ha.framework.server.DistributedReplicantManagerImpl.DefaultPartition] All Members : 1 ([X.X.X.1:-1])
| node-2 2006-09-22 11:18:34,892 INFO [org.jboss.cache.TreeCache] viewAccepted(): [node-2:7810|2] [node-2:7810]
| node-1 2006-09-22 11:18:36,139 INFO [org.jboss.ha.framework.interfaces.HAPartition.lifecycle.DefaultPartition] Suspected member: node-2:7800 (additional data: 17 bytes)
| node-1 2006-09-22 11:23:52,531 INFO [org.jboss.cache.TreeCache] viewAccepted(): [node-1:7810|2] [node-1:7810]
| node-2 2006-09-22 11:24:05,025 INFO [org.jboss.cache.TreeCache] viewAccepted(): [node-2:7810|0] [node-2:7810]
| node-2 2006-09-22 11:24:05,025 INFO [org.jboss.cache.TreeCache] new cache is null (may be first member in cluster)
| node-1 2006-09-22 11:24:05,059 INFO [org.jboss.cache.TreeCache] viewAccepted(): [node-1:7810|0] [node-1:7810]
| node-1 2006-09-22 11:24:05,059 INFO [org.jboss.cache.TreeCache] new cache is null (may be first member in cluster)
|
And here is a snippet of the jgroups log on node-1:
anonymous wrote :
|
| 2006-09-22 11:18:15,537 DEBUG [org.jgroups.protocols.FD] sending are-you-alive msg to node-2:7810 (own address=node-1:7810)
| 2006-09-22 11:18:15,541 DEBUG [org.jgroups.protocols.FD] sending are-you-alive msg to node-2:7800 (additional data: 17 bytes) (own address=node-1:7800 (additional data: 17 bytes))
| 2006-09-22 11:18:15,541 DEBUG [org.jgroups.protocols.FD] heartbeat missing from node-2:7800 (additional data: 17 bytes) (number=0)
| 2006-09-22 11:18:16,365 DEBUG [org.jgroups.protocols.MERGE2] initial_mbrs=[]
| 2006-09-22 11:18:19,149 DEBUG [org.jgroups.protocols.pbcast.STABLE] mcasting digest [node-1:7810: [0 : 9 (9)], node-2:7810: [0 : 4 (4)]] (num_gossip_runs=1, max_gossip_runs=3)
| 2006-09-22 11:18:19,150 DEBUG [org.jgroups.protocols.pbcast.STABLE] stable task terminating (num_gossip_runs=0, max_gossip_runs=3)
| 2006-09-22 11:18:25,166 DEBUG [org.jgroups.protocols.pbcast.STABLE] received digest node-1:7800 (additional data: 17 bytes)#19 (19), node-2:7800 (additional data: 17 bytes)#87 (87) from node-1:7800 (additional data: 17 bytes)
| 2006-09-22 11:18:28,082 DEBUG [org.jgroups.protocols.FD] [node-1:7800 (additional data: 17 bytes)]: received no heartbeat ack from node-2:7800 (additional data: 17 bytes) for 6 times (15000 milliseconds), suspecting it
| 2006-09-22 11:18:28,082 DEBUG [org.jgroups.protocols.FD] mbr=node-2:7800 (additional data: 17 bytes) (size=1)
| 2006-09-22 11:18:30,586 DEBUG [org.jgroups.protocols.FD] mbr=node-2:7810 (size=1)
| 2006-09-22 11:18:30,590 DEBUG [org.jgroups.protocols.FD] sending are-you-alive msg to node-2:7800 (additional data: 17 bytes) (own address=node-1:7800 (additional data: 17 bytes))
| 2006-09-22 11:18:30,590 DEBUG [org.jgroups.protocols.FD] heartbeat missing from node-2:7800 (additional data: 17 bytes) (number=0)
| 2006-09-22 11:18:30,590 DEBUG [org.jgroups.protocols.FD] broadcasting SUSPECT message [suspected_mbrs=[node-2:7800 (additional data: 17 bytes)]] to group
| 2006-09-22 11:18:30,590 DEBUG [org.jgroups.protocols.FD] task done
| 2006-09-22 11:18:30,591 DEBUG [org.jgroups.protocols.FD] [SUSPECT] suspect hdr is [FD: SUSPECT (suspected_mbrs=[node-2:7800 (additional data: 17 bytes)], from=node-1:7800 (additional data: 17 bytes))]
| 2006-09-22 11:18:32,098 DEBUG [org.jgroups.protocols.pbcast.CoordGmsImpl] mbr=node-2:7800 (additional data: 17 bytes)
| 2006-09-22 11:18:32,098 DEBUG [org.jgroups.protocols.pbcast.STABLE] stable task started; num_gossip_runs=3, max_gossip_runs=3
| 2006-09-22 11:18:32,099 DEBUG [org.jgroups.protocols.pbcast.GMS] VID=4, current members=(node-1:7800 (additional data: 17 bytes), node-2:7800 (additional data: 17 bytes)), new_mbrs=(), old_mbrs=(), suspected_mbrs=(node-2:7800 (additional data: 17 bytes))
| 2006-09-22 11:18:32,099 DEBUG [org.jgroups.protocols.pbcast.GMS] new view is [node-1:7800 (additional data: 17 bytes)|4] [node-1:7800 (additional data: 17 bytes)]
| 2006-09-22 11:18:32,099 DEBUG [org.jgroups.protocols.pbcast.GMS] mcasting view {[node-1:7800 (additional data: 17 bytes)|4] [node-1:7800 (additional data: 17 bytes)]} (1 mbrs)
|
| 2006-09-22 11:18:32,099 DEBUG [org.jgroups.blocks.RequestCorrelator] suspect=node-2:7800 (additional data: 17 bytes)
| 2006-09-22 11:18:33,098 WARN [org.jgroups.protocols.FD] ping_dest is null: members=[node-1:7800 (additional data: 17 bytes), node-2:7800 (additional data: 17 bytes)], pingable_mbrs=[node-1:7800 (additional data: 17 bytes)], local_addr=node-1:7800 (additional data: 17 bytes)
| 2006-09-22 11:18:34,631 DEBUG [org.jgroups.protocols.pbcast.CoordGmsImpl] view=[node-1:7800 (additional data: 17 bytes)|4] [node-1:7800 (additional data: 17 bytes)]
| 2006-09-22 11:18:34,632 DEBUG [org.jgroups.protocols.pbcast.GMS] [local_addr=node-1:7800 (additional data: 17 bytes)] view is [node-1:7800 (additional data: 17 bytes)|4] [node-1:7800 (additional data: 17 bytes)]
| 2006-09-22 11:18:34,632 DEBUG [org.jgroups.protocols.pbcast.STABLE] stable task started; num_gossip_runs=3, max_gossip_runs=3
| 2006-09-22 11:18:34,632 DEBUG [org.jgroups.protocols.pbcast.NAKACK] removing node-2:7800 (additional data: 17 bytes) from received_msgs (not member anymore)
| 2006-09-22 11:18:34,632 DEBUG [org.jgroups.protocols.FD] suspected_mbrs: [node-2:7800 (additional data: 17 bytes)], after adjustment: [], stopped: true
| 2006-09-22 11:18:34,633 DEBUG [org.jgroups.protocols.FD_SOCK] VIEW_CHANGE received: [node-1:7800 (additional data: 17 bytes)]
| 2006-09-22 11:18:34,634 DEBUG [org.jgroups.protocols.FD_SOCK] socket to null was reset
| 2006-09-22 11:18:34,634 DEBUG [org.jgroups.protocols.FD_SOCK] pinger thread terminated
| 2006-09-22 11:18:36,138 ERROR [org.jgroups.protocols.pbcast.CoordGmsImpl] mbr node-2:7800 (additional data: 17 bytes) is not a member !
| 2006-09-22 11:18:36,139 DEBUG [org.jgroups.blocks.RequestCorrelator] suspect=node-2:7800 (additional data: 17 bytes)
| 2006-09-22 11:18:38,818 DEBUG [org.jgroups.protocols.pbcast.STABLE] mcasting digest [node-1:7800 (additional data: 17 bytes): [0 : 21 (21)]] (num_gossip_runs=3, max_gossip_runs=3)
| 2006-09-22 11:18:38,819 DEBUG [org.jgroups.protocols.pbcast.STABLE] received digest node-1:7800 (additional data: 17 bytes)#21 (21) from node-1:7800 (additional data: 17 bytes)
| 2006-09-22 11:18:38,819 DEBUG [org.jgroups.protocols.pbcast.STABLE] sending stability msg node-1:7800 (additional data: 17 bytes)#21 (21)
| 2006-09-22 11:18:38,819 DEBUG [org.jgroups.protocols.pbcast.STABLE] stability_task=null, delay is 270
| 2006-09-22 11:18:39,098 DEBUG [org.jgroups.protocols.pbcast.STABLE] stability vector is [node-1:7800 (additional data: 17 bytes)#21]
| 2006-09-22 11:18:39,099 DEBUG [org.jgroups.protocols.pbcast.STABLE] cancelling stability task (running=false)
| 2006-09-22 11:18:39,099 DEBUG [org.jgroups.protocols.pbcast.NAKACK] received digest [node-1:7800 (additional data: 17 bytes): [-1 : 21 (21)]]
| 2006-09-22 11:22:58,567 DEBUG [org.jgroups.protocols.pbcast.STABLE] received digest node-1:7810#9 (9), node-2:7810#4 (4) from node-1:7810
| 2006-09-22 11:22:58,571 DEBUG [org.jgroups.protocols.FD] [SUSPECT] suspect hdr is [FD: SUSPECT (suspected_mbrs=[node-2:7810], from=node-1:7810)]
| 2006-09-22 11:22:58,580 WARN [org.jgroups.protocols.FD] ping_dest is null: members=[node-1:7810, node-2:7810], pingable_mbrs=[node-1:7810], local_addr=node-1:7810
| 2006-09-22 11:22:58,581 DEBUG [org.jgroups.protocols.FD] broadcasting SUSPECT message [suspected_mbrs=[node-2:7810]] to group
| 2006-09-22 11:22:58,581 DEBUG [org.jgroups.protocols.FD] task done
| 2006-09-22 11:22:58,581 DEBUG [org.jgroups.protocols.FD] [SUSPECT] suspect hdr is [FD: SUSPECT (suspected_mbrs=[node-2:7810], from=node-1:7810)]
| 2006-09-22 11:23:00,076 DEBUG [org.jgroups.protocols.pbcast.CoordGmsImpl] mbr=node-2:7810
| 2006-09-22 11:23:00,076 DEBUG [org.jgroups.protocols.pbcast.STABLE] stable task started; num_gossip_runs=3, max_gossip_runs=3
| 2006-09-22 11:23:00,076 DEBUG [org.jgroups.protocols.pbcast.GMS] VID=2, current members=(node-1:7810, node-2:7810), new_mbrs=(), old_mbrs=(), suspected_mbrs=(node-2:7810)
| 2006-09-22 11:23:00,076 DEBUG [org.jgroups.protocols.pbcast.GMS] new view is [node-1:7810|2] [node-1:7810]
| 2006-09-22 11:23:00,076 DEBUG [org.jgroups.protocols.pbcast.GMS] mcasting view {[node-1:7810|2] [node-1:7810]} (1 mbrs)
| 2006-09-22 11:23:00,077 DEBUG [org.jgroups.blocks.RequestCorrelator] suspect=node-2:7810
| 2006-09-22 11:23:01,084 WARN [org.jgroups.protocols.FD] ping_dest is null: members=[node-1:7810, node-2:7810], pingable_mbrs=[node-1:7810], local_addr=node-1:7810
| 2006-09-22 11:23:01,084 DEBUG [org.jgroups.protocols.FD] broadcasting SUSPECT message [suspected_mbrs=[node-2:7810]] to group
| 2006-09-22 11:23:01,084 DEBUG [org.jgroups.protocols.FD] task done
| 2006-09-22 11:23:04,540 DEBUG [org.jgroups.protocols.MERGE2] initial_mbrs=[]
| 2006-09-22 11:23:26,158 DEBUG [org.jgroups.protocols.pbcast.STABLE] received digest node-1:7800 (additional data: 17 bytes)#23 (23) from node-1:7800 (additional data: 17 bytes)
| 2006-09-22 11:23:26,159 DEBUG [org.jgroups.protocols.pbcast.STABLE] sending stability msg node-1:7800 (additional data: 17 bytes)#23 (23)
| 2006-09-22 11:23:26,159 DEBUG [org.jgroups.protocols.pbcast.STABLE] stability_task=null, delay is 4955
| 2006-09-22 11:23:26,165 DEBUG [org.jgroups.protocols.pbcast.STABLE] received digest node-1:7800 (additional data: 17 bytes)#23 (24) from node-1:7800 (additional data: 17 bytes)
| 2006-09-22 11:23:26,165 DEBUG [org.jgroups.protocols.pbcast.STABLE] sending stability msg node-1:7800 (additional data: 17 bytes)#23 (24)
| 2006-09-22 11:23:26,165 DEBUG [org.jgroups.protocols.pbcast.STABLE] stability_task=org.jgroups.protocols.pbcast.STABLE$StabilitySendTask@d1ebcd, delay is 5216
| 2006-09-22 11:23:28,629 WARN [org.jgroups.protocols.FD] ping_dest is null: members=[node-1:7810, node-2:7810], pingable_mbrs=[node-1:7810], local_addr=node-1:7810
| 2006-09-22 11:23:28,629 DEBUG [org.jgroups.protocols.FD] broadcasting SUSPECT message [suspected_mbrs=[node-2:7810]] to group
| 2006-09-22 11:23:28,630 DEBUG [org.jgroups.protocols.FD] task done
| 2006-09-22 11:23:31,118 DEBUG [org.jgroups.protocols.pbcast.STABLE] stability vector is [node-1:7800 (additional data: 17 bytes)#23]
| 2006-09-22 11:23:31,118 DEBUG [org.jgroups.protocols.pbcast.STABLE] cancelling stability task (running=false)
| 2006-09-22 11:23:31,118 DEBUG [org.jgroups.protocols.pbcast.NAKACK] received digest [node-1:7800 (additional data: 17 bytes): [-1 : 23 (23)]]
| 2006-09-22 11:23:33,045 DEBUG [org.jgroups.protocols.MERGE2] initial_mbrs=[]
| 2006-09-22 11:23:33,570 DEBUG [org.jgroups.protocols.pbcast.STABLE] mcasting digest [node-1:7810: [0 : 10 (11)], node-2:7810: [0 : 4 (4)]] (num_gossip_runs=3, max_gossip_runs=3)
| 2006-09-22 11:23:33,637 WARN [org.jgroups.protocols.FD] ping_dest is null: members=[node-1:7810, node-2:7810], pingable_mbrs=[node-1:7810], local_addr=node-1:7810
| 2006-09-22 11:23:33,637 DEBUG [org.jgroups.protocols.FD] broadcasting SUSPECT message [suspected_mbrs=[node-2:7810]] to group
| 2006-09-22 11:23:33,638 DEBUG [org.jgroups.protocols.FD] task done
| 2006-09-22 11:23:38,098 DEBUG [org.jgroups.protocols.MERGE2] initial_mbrs=[[own_addr=node-2:7800 (additional data: 17 bytes), coord_addr=node-2:7800 (additional data: 17 bytes)]]
| 2006-09-22 11:23:44,754 DEBUG [org.jgroups.protocols.MERGE2] initial_mbrs=[]
| 2006-09-22 11:23:46,158 WARN [org.jgroups.protocols.FD] ping_dest is null: members=[node-1:7810, node-2:7810], pingable_mbrs=[node-1:7810], local_addr=node-1:7810
| 2006-09-22 11:23:46,158 DEBUG [org.jgroups.protocols.FD] broadcasting SUSPECT message [suspected_mbrs=[node-2:7810]] to group
| 2006-09-22 11:23:46,158 DEBUG [org.jgroups.protocols.FD] task done
| 2006-09-22 11:23:48,666 WARN [org.jgroups.protocols.FD] ping_dest is null: members=[node-1:7810, node-2:7810], pingable_mbrs=[node-1:7810], local_addr=node-1:7810
| 2006-09-22 11:23:48,667 DEBUG [org.jgroups.protocols.FD] broadcasting SUSPECT message [suspected_mbrs=[node-2:7810]] to group
| 2006-09-22 11:23:48,667 DEBUG [org.jgroups.protocols.FD] task done
| 2006-09-22 11:23:49,514 DEBUG [org.jgroups.protocols.MERGE2] initial_mbrs=[[own_addr=node-2:7800 (additional data: 17 bytes), coord_addr=node-2:7800 (additional data: 17 bytes)]]
| 2006-09-22 11:23:51,174 WARN [org.jgroups.protocols.FD] ping_dest is null: members=[node-1:7810, node-2:7810], pingable_mbrs=[node-1:7810], local_addr=node-1:7810
| 2006-09-22 11:23:51,174 DEBUG [org.jgroups.protocols.FD] broadcasting SUSPECT message [suspected_mbrs=[node-2:7810]] to group
| 2006-09-22 11:23:51,175 DEBUG [org.jgroups.protocols.FD] task done
| 2006-09-22 11:23:52,443 DEBUG [org.jgroups.protocols.FD] [SUSPECT] suspect hdr is [FD: SUSPECT (suspected_mbrs=[node-2:7810], from=node-1:7810)]
| 2006-09-22 11:23:52,530 DEBUG [org.jgroups.protocols.pbcast.CoordGmsImpl] view=[node-1:7810|2] [node-1:7810]
| 2006-09-22 11:23:52,530 DEBUG [org.jgroups.protocols.pbcast.GMS] [local_addr=node-1:7810] view is [node-1:7810|2] [node-1:7810]
| 2006-09-22 11:23:52,531 DEBUG [org.jgroups.protocols.pbcast.STABLE] stable task started; num_gossip_runs=3, max_gossip_runs=3
| 2006-09-22 11:23:52,531 DEBUG [org.jgroups.protocols.pbcast.NAKACK] removing node-2:7810 from received_msgs (not member anymore)
| 2006-09-22 11:23:52,531 DEBUG [org.jgroups.protocols.FD] suspected_mbrs: [node-2:7810], after adjustment: [], stopped: true
| 2006-09-22 11:23:52,534 DEBUG [org.jgroups.protocols.FD] [SUSPECT] suspect hdr is [FD: SUSPECT (suspected_mbrs=[node-2:7810], from=node-1:7810)]
| 2006-09-22 11:23:52,549 DEBUG [org.jgroups.protocols.pbcast.STABLE] received digest node-1:7810#10 (11), node-2:7810#4 (4) from node-1:7810
| 2006-09-22 11:23:52,549 DEBUG [org.jgroups.protocols.pbcast.STABLE] sending stability msg node-1:7810#10 (11), node-2:7810#4 (4)
| 2006-09-22 11:23:52,549 DEBUG [org.jgroups.protocols.pbcast.STABLE] stability_task=null, delay is 141
| 2006-09-22 11:23:52,553 DEBUG [org.jgroups.protocols.FD] [SUSPECT] suspect hdr is [FD: SUSPECT (suspected_mbrs=[node-2:7810], from=node-1:7810)]
| 2006-09-22 11:23:52,553 DEBUG [org.jgroups.protocols.FD] [SUSPECT] suspect hdr is [FD: SUSPECT (suspected_mbrs=[node-2:7810], from=node-1:7810)]
| 2006-09-22 11:23:52,553 DEBUG [org.jgroups.protocols.FD] [SUSPECT] suspect hdr is [FD: SUSPECT (suspected_mbrs=[node-2:7810], from=node-1:7810)]
| 2006-09-22 11:23:52,553 DEBUG [org.jgroups.protocols.FD] [SUSPECT] suspect hdr is [FD: SUSPECT (suspected_mbrs=[node-2:7810], from=node-1:7810)]
| 2006-09-22 11:23:52,699 DEBUG [org.jgroups.protocols.pbcast.STABLE] stability vector is [node-1:7810#10, node-2:7810#4]
| 2006-09-22 11:23:52,699 DEBUG [org.jgroups.protocols.pbcast.STABLE] cancelling stability task (running=false)
| 2006-09-22 11:23:52,699 DEBUG [org.jgroups.protocols.pbcast.STABLE] received digest (digest=[node-1:7810: [-1 : 10 (11)], node-2:7810: [-1 : 4 (4)]]) which does not match my own digest ([node-1:7810: [-1 : -1]): ignoring digest and re-initializing own digest
| 2006-09-22 11:23:53,950 DEBUG [org.jgroups.protocols.pbcast.CoordGmsImpl] mbr=node-2:7810
| 2006-09-22 11:23:53,951 ERROR [org.jgroups.protocols.pbcast.CoordGmsImpl] mbr node-2:7810 is not a member !
| 2006-09-22 11:23:53,951 DEBUG [org.jgroups.blocks.RequestCorrelator] suspect=node-2:7810
| 2006-09-22 11:23:57,443 DEBUG [org.jgroups.protocols.MERGE2] initial_mbrs=[]
| 2006-09-22 11:23:59,535 DEBUG [org.jgroups.protocols.MERGE2] initial_mbrs=[[own_addr=node-2:7800 (additional data: 17 bytes), coord_addr=node-2:7800 (additional data: 17 bytes)]]
| 2006-09-22 11:24:00,963 DEBUG [org.jgroups.protocols.FD] node-2:7810 is not in [node-1:7810] ! Telling it to leave group
| 2006-09-22 11:24:00,963 DEBUG [org.jgroups.protocols.FD] [SUSPECT] suspect hdr is [FD: SUSPECT (suspected_mbrs=[node-1:7810], from=node-2:7810)]
| 2006-09-22 11:24:00,963 WARN [org.jgroups.protocols.FD] I was suspected, but will not remove myself from membership (waiting for EXIT message)
| 2006-09-22 11:24:00,976 DEBUG [org.jgroups.protocols.FD] [NOT_MEMBER] I'm being shunned; exiting
| 2006-09-22 11:24:00,979 WARN [org.jgroups.protocols.pbcast.NAKACK] [node-1:7810] discarded message from non-member node-2:7810
| 2006-09-22 11:24:00,980 DEBUG [org.jgroups.protocols.pbcast.NAKACK] contents for node-1:7810:
| sent_msgs: [0 - 13]
| received_msgs:
| node-1:7810: received_msgs: [], delivered_msgs: [0 - 13]
| 2006-09-22 11:24:01,492 DEBUG [org.jgroups.protocols.pbcast.GMS] changed role to org.jgroups.protocols.pbcast.ClientGmsImpl
| 2006-09-22 11:24:05,055 DEBUG [org.jgroups.protocols.pbcast.ClientGmsImpl] initial_mbrs are []
| 2006-09-22 11:24:05,055 DEBUG [org.jgroups.protocols.pbcast.ClientGmsImpl] no initial members discovered: creating group as first member
| 2006-09-22 11:24:05,056 DEBUG [org.jgroups.protocols.pbcast.GMS] [local_addr=node-1:7810] view is [node-1:7810|0] [node-1:7810]
| 2006-09-22 11:24:05,056 DEBUG [org.jgroups.protocols.pbcast.STABLE] stable task started; num_gossip_runs=3, max_gossip_runs=3
| 2006-09-22 11:24:05,056 DEBUG [org.jgroups.protocols.pbcast.GMS] node-1:7810 changed role to org.jgroups.protocols.pbcast.CoordGmsImpl
| 2006-09-22 11:24:05,056 DEBUG [org.jgroups.protocols.pbcast.GMS] node-1:7810 changed role to org.jgroups.protocols.pbcast.CoordGmsImpl
| 2006-09-22 11:24:05,056 DEBUG [org.jgroups.protocols.pbcast.ClientGmsImpl] created group (first member). My view is [node-1:7810|0], impl is org.jgroups.protocols.pbcast.CoordGmsImpl
| 2006-09-22 11:24:05,057 DEBUG [org.jgroups.protocols.FD] suspected_mbrs: [], after adjustment: [], stopped: true
| 2006-09-22 11:24:05,058 DEBUG [org.jgroups.protocols.MERGE2] merge task started
| 2006-09-22 11:24:05,058 DEBUG [org.jgroups.protocols.pbcast.STATE_TRANSFER] GET_STATE: first member (no state)
| 2006-09-22 11:24:12,723 DEBUG [org.jgroups.protocols.MERGE2] initial_mbrs=[[own_addr=node-2:7800 (additional data: 17 bytes), coord_addr=node-2:7800 (additional data: 17 bytes)]]
| 2006-09-22 11:24:18,476 DEBUG [org.jgroups.protocols.MERGE2] initial_mbrs=[[own_addr=node-2:7810, coord_addr=node-2:7810]]
| 2006-09-22 11:24:25,524 DEBUG [org.jgroups.protocols.MERGE2] initial_mbrs=[[own_addr=node-2:7800 (additional data: 17 bytes), coord_addr=node-2:7800 (additional data: 17 bytes)]]
| 2006-09-22 11:24:28,436 DEBUG [org.jgroups.protocols.MERGE2] initial_mbrs=[[own_addr=node-2:7810, coord_addr=node-2:7810]]
| 2006-09-22 11:24:35,345 DEBUG [org.jgroups.protocols.MERGE2] initial_mbrs=[[own_addr=node-2:7800 (additional data: 17 bytes), coord_addr=node-2:7800 (additional data: 17 bytes)]]
| 2006-09-22 11:24:38,985 DEBUG [org.jgroups.protocols.pbcast.STABLE] mcasting digest [node-1:7810: [0 : 0] (num_gossip_runs=3, max_gossip_runs=3)
| 2006-09-22 11:24:39,253 DEBUG [org.jgroups.protocols.MERGE2] initial_mbrs=[[own_addr=node-2:7810, coord_addr=node-2:7810]]
| 2006-09-22 11:24:39,253 DEBUG [org.jgroups.protocols.pbcast.STABLE] received digest node-1:7810#0 (-1) from node-1:7810
| 2006-09-22 11:24:39,253 DEBUG [org.jgroups.protocols.pbcast.STABLE] sending stability msg node-1:7810#0 (-1)
| 2006-09-22 11:24:39,253 DEBUG [org.jgroups.protocols.pbcast.STABLE] stability_task=null, delay is 502
| 2006-09-22 11:24:39,765 DEBUG [org.jgroups.protocols.pbcast.STABLE] stability vector is [node-1:7810#0]
| 2006-09-22 11:24:39,765 DEBUG [org.jgroups.protocols.pbcast.STABLE] cancelling stability task (running=false)
| 2006-09-22 11:24:39,765 DEBUG [org.jgroups.protocols.pbcast.NAKACK] received digest [node-1:7810: [-1 : 0]
|
I have tried the FD config both with and without shun; neither option results in the cluster membership being updated.
Any ideas on what I am doing wrong?
Thanks.
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=3973608#3973608
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=3973608
19 years, 7 months
[Installation, Configuration & Deployment] - Re: Jboss 4.0.4 as a window service
by vannguyen0
"mishra_rajneesh" wrote : first of all thanx for the reply...
|
| I used the following command to create the service:
|
| JBossService.exe -install JBossAll C:\jdk1.5.0_06\jre\bin\jvm.dll -Djava.class.path=C:\jdk1.5.0_06\lib\tools.jar;C:\JBOSS\run.jar -start org.jboss.Main -params -c all -stop org.jboss.Main -method systemExit -out C:\JBOSS\stdout.log -err C:\JBOSS\stderr.log -current C:\JBOSS -manual
|
| When I tried to run the service, I got the following message:
|
| The JBossAll Service on Local Computer started and then stopped. Some services stop automatically if they have no work to do, for example, the Performance Logs and Alerts service.
|
| --------------------------------------------------------------------------------
|
| Please check if i have done anything wrong
I assume that since you have the "-params -c all" option, you renamed the 'default' folder underneath your %JBOSS_HOME%/server directory to 'all'? If so... did you do this because you have more than one JBoss instance running on this server?
I would also check to make sure that you have a jvm.dll in your C:\jdk1.5.0_06\jre\bin directory. From what I can tell on both our JBoss servers (and from the directory structure after the installation of the Java SDK), the jvm.dll is separated into %JAVA_HOME%\jre\bin\{server or client}\jvm.dll
Everything else looks OK.
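For reference, folding that jvm.dll path correction into the original command gives something like the following (a sketch only, reusing the poster's paths; choose server or client to match what the JRE actually installed):

```bat
JBossService.exe -install JBossAll C:\jdk1.5.0_06\jre\bin\server\jvm.dll ^
  -Djava.class.path=C:\jdk1.5.0_06\lib\tools.jar;C:\JBOSS\run.jar ^
  -start org.jboss.Main -params -c all ^
  -stop org.jboss.Main -method systemExit ^
  -out C:\JBOSS\stdout.log -err C:\JBOSS\stderr.log ^
  -current C:\JBOSS -manual
```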
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=3973605#3973605
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=3973605
19 years, 7 months