[JBoss JIRA] (WFCORE-1475) Death to Kenny
by Brian Stansberry (JIRA)
Brian Stansberry created WFCORE-1475:
----------------------------------------
Summary: Death to Kenny
Key: WFCORE-1475
URL: https://issues.jboss.org/browse/WFCORE-1475
Project: WildFly Core
Issue Type: Enhancement
Components: Server
Reporter: Brian Stansberry
Assignee: Brian Stansberry
Core has been identifying itself with release "codename" Kenny since 1.0.0.Final.
IMO it's time for the codename tradition to die; it's not really useful and is just another detail to get wrong, added to the ever-growing pile of such details. I'll see if I can make it an empty string. I considered something like "N/A" or even null, but there's string-concat code out there that appends the codename to the version, and an empty string may work better with that.
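A minimal sketch of the concatenation concern (class and method names here are hypothetical, not WildFly Core's actual API): with a typical append pattern, an empty codename degrades gracefully, while null or "N/A" would leak into the banner.

```java
// Hypothetical illustration: how an empty codename interacts with the
// common "version + codename" concatenation pattern. Not WildFly code.
public class VersionBanner {
    static String banner(String version, String codename) {
        // Typical pattern in the wild: append the codename in quotes
        // only when one is present.
        String suffix = codename.isEmpty() ? "" : " \"" + codename + "\"";
        return "WildFly Core " + version + suffix;
    }

    public static void main(String[] args) {
        System.out.println(banner("2.1.0.Final", "Kenny")); // WildFly Core 2.1.0.Final "Kenny"
        System.out.println(banner("3.0.0.Alpha1", ""));     // WildFly Core 3.0.0.Alpha1
    }
}
```

With null instead of an empty string, the same concat code would print a literal "null", which is why the empty string is the safer sentinel.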
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFCORE-1159) Add a runtime operation which shows filesystem usage and availability for known server locations
by ehsavoie Hugonnet (JIRA)
[ https://issues.jboss.org/browse/WFCORE-1159?page=com.atlassian.jira.plugi... ]
ehsavoie Hugonnet commented on WFCORE-1159:
-------------------------------------------
Since walking the filesystem to get a directory size can consume a lot of time/resources, I'm moving the initial requirement of a metric to an operation.
This operation is added automatically to all Paths that create services (thus server-config and domain-wide paths won't have it).
It may also be added to attributes that represent a path (like the log file attributes).
You can specify the unit you want, but BYTES is the default.
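A rough sketch of what such an operation could compute for a known server location, using the standard java.nio.file.FileStore API; the class and method names are illustrative, not the actual WildFly Core implementation.

```java
// Illustrative sketch (not WildFly code): report filesystem usage and
// availability for a server path, with a selectable unit (BYTES default).
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class PathUsageSketch {
    // unitDivisor: 1 for BYTES (the default), 1024 for KILOBYTES, etc.
    static long[] usageAndAvailable(Path path, long unitDivisor) throws IOException {
        FileStore store = Files.getFileStore(path);
        long total = store.getTotalSpace();
        long usable = store.getUsableSpace();
        // [0] = used space, [1] = available space, in the requested unit.
        return new long[] { (total - usable) / unitDivisor, usable / unitDivisor };
    }

    public static void main(String[] args) throws IOException {
        long[] r = usageAndAvailable(Paths.get("."), 1); // BYTES
        System.out.println("used=" + r[0] + " available=" + r[1]);
    }
}
```

Unlike a directory-size walk, these FileStore queries are a single cheap syscall per path, which fits exposing them as an on-demand operation rather than a polled metric.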
> Add a runtime operation which shows filesystem usage and availability for known server locations
> ------------------------------------------------------------------------------------------------
>
> Key: WFCORE-1159
> URL: https://issues.jboss.org/browse/WFCORE-1159
> Project: WildFly Core
> Issue Type: Feature Request
> Components: Domain Management, Server
> Affects Versions: 2.0.2.Final
> Reporter: Tristan Tarrant
> Assignee: ehsavoie Hugonnet
> Fix For: 3.0.0.Alpha1
>
>
> For management purposes it would be very useful to report runtime metrics for filesystem usage and availability for known server locations (data, tmp, logs, etc).
[JBoss JIRA] (WFCORE-1159) Add a runtime operation which shows filesystem usage and availability for known server locations
by ehsavoie Hugonnet (JIRA)
[ https://issues.jboss.org/browse/WFCORE-1159?page=com.atlassian.jira.plugi... ]
ehsavoie Hugonnet updated WFCORE-1159:
--------------------------------------
Summary: Add a runtime operation which shows filesystem usage and availability for known server locations (was: Add a runtime metric which shows filesystem usage and availability for known server locations)
[JBoss JIRA] (JGRP-2050) S3_PING: Nodes never removed from .list file
by Mitchell Ackerman (JIRA)
Mitchell Ackerman created JGRP-2050:
---------------------------------------
Summary: S3_PING: Nodes never removed from .list file
Key: JGRP-2050
URL: https://issues.jboss.org/browse/JGRP-2050
Project: JGroups
Issue Type: Feature Request
Affects Versions: 3.6.8
Reporter: Mitchell Ackerman
Assignee: Bela Ban
Priority: Minor
Unfortunately I seem to be running into the same or similar issue as JGRP-1957, even though I've updated to JGroups 3.6.8 and am using the settings you suggest in that (and other) posts.
I'm running in AWS using S3_PING, JDK 1.8.0_66, JGroups 3.6.8, Tomcat 8.0.28.
After terminating servers, mostly non-coordinators, I'm left with an S3 bucket with lots of zombies (there are only 2 active members). Below is the .list file after the system has been stable for over an hour, followed by my JGroups config file.
Stepping through the code, I have confirmed that the scenario is the same as described in JGRP-1957: upon a view change the new (correct) member list is written to S3, but then it is overwritten with all the old members. When the old members are added back to the logical_addr_cache they all have their removable field set to false, so all subsequent evictions skip over these members and they are never removed.
thanks, Mitchell
ip-10-89-1-26-8729 72597f74-8a10-04fb-b397-22a3ed35da84 10.89.1.26:7800 F
ip-10-89-0-18-38996 a5325932-e9cd-b281-b367-e2d86845aa75 10.89.0.18:7800 F
ip-10-89-1-62-4868 ef73921a-2265-50a8-95d4-ebb8cae96944 10.89.1.62:7800 T
ip-10-89-1-27-11915 5a0b4a26-b542-56f2-801a-420b5d7dbf34 10.89.1.27:7800 F
ip-10-89-1-19-2542 c30c294d-69b0-b6ca-7010-bf89d1eb8f6f 10.89.1.19:7800 F
ip-10-89-0-62-56914 fa2262c3-9097-7101-b225-24d8a52d905e 10.89.0.62:7800 F
ip-10-89-0-28-32680 5d03124f-b061-becb-d793-6067bf0d7945 10.89.0.28:7800 F
ip-10-89-1-26-51248 07cc18aa-381b-fb5d-0ad6-0612f7a5e9bb 10.89.1.26:7800 F
ip-10-89-1-27-39755 1f9be940-2228-2181-ef80-4a83d319a2b3 10.89.1.27:7800 F
ip-10-89-0-28-41919 4ab543f9-712e-645d-2f20-05304c98a23b 10.89.0.28:7800 F
ip-10-89-1-27-10428 d5b0cb38-75e0-b3e1-c053-66b053b0fb05 10.89.1.27:7800 F
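The skip-over behavior described above can be illustrated with a simplified sketch (deliberately not the real JGroups cache code): once departed members are re-added with removable=false, every eviction pass leaves them in place.

```java
// Simplified illustration of the reported bug, not JGroups code:
// a reaper that only evicts entries marked removable will never
// clean up stale members that were re-added with removable=false.
import java.util.LinkedHashMap;
import java.util.Map;

public class CacheSketch {
    static class Entry {
        boolean removable;
        Entry(boolean removable) { this.removable = removable; }
    }

    // Eviction pass: only entries flagged removable are dropped.
    static void evictStale(Map<String, Entry> cache) {
        cache.entrySet().removeIf(e -> e.getValue().removable);
    }

    public static void main(String[] args) {
        Map<String, Entry> cache = new LinkedHashMap<>();
        cache.put("departed-ok", new Entry(true));    // flagged correctly, gets evicted
        cache.put("zombie", new Entry(false));        // bug: re-added with removable=false
        evictStale(cache);
        System.out.println(cache.keySet());           // the zombie survives every pass
    }
}
```

This matches the bucket listing above: the entries marked F are exactly the members that no eviction cycle will ever touch.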
my JGroups config file is:
<?xml version="1.0" encoding="UTF-8"?>
<config>
<TCP
bind_port="7800"
port_range="30"
recv_buf_size="20000000"
send_buf_size="1000000"
max_bundle_size="64000"
max_bundle_timeout="1000"
sock_conn_timeout="2000"
enable_diagnostics="false"
timer_type="new"
timer.min_threads="4"
timer.max_threads="10"
timer.keep_alive_time="3000"
timer.queue_max_size="1000"
timer.wheel_size="200"
timer.tick_time="50"
thread_pool.enabled="true"
thread_pool.min_threads="2"
thread_pool.max_threads="100"
thread_pool.keep_alive_time="60000"
thread_pool.queue_enabled="true"
thread_pool.queue_max_size="100000"
thread_pool.rejection_policy="discard"
oob_thread_pool.enabled="true"
oob_thread_pool.min_threads="10"
oob_thread_pool.max_threads="100"
oob_thread_pool.keep_alive_time="60000"
oob_thread_pool.queue_enabled="false"
oob_thread_pool.queue_max_size="100"
oob_thread_pool.rejection_policy="discard"
logical_addr_cache_expiration="1000"
logical_addr_cache_reaper_interval="10000"
/>
<S3_PING location="bob-s3-ping-dev" remove_all_files_on_view_change="true" remove_old_coords_on_view_change="true"/>
<MERGE3 max_interval="60000" min_interval="30000"/>
<FD_SOCK/>
<FD timeout="3000" max_tries="5"/>
<VERIFY_SUSPECT timeout="2000"/>
<pbcast.NAKACK use_mcast_xmit="false" retransmit_timeout="300,600,1200,2400,4800" discard_delivered_msgs="true"/>
<UNICAST3/>
<pbcast.STABLE stability_delay="1500" desired_avg_gossip="50000" max_bytes="2m"/>
<pbcast.GMS print_local_addr="false" join_timeout="2500" max_bundling_time="50" view_bundling="true" max_join_attempts="${jgroups_max_join_attempts}"/>
<pbcast.STATE_TRANSFER />
<!-- top -->
<!-- /\ down -->
<!-- \/ up -->
</config>
[JBoss JIRA] (WFCORE-64) Filesystem deployment scanner deployment failure removes unrelated deployments
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/WFCORE-64?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration updated WFCORE-64:
------------------------------------------
Bugzilla References: https://bugzilla.redhat.com/show_bug.cgi?id=1327093
Bugzilla Update: Perform
> Filesystem deployment scanner deployment failure removes unrelated deployments
> ------------------------------------------------------------------------------
>
> Key: WFCORE-64
> URL: https://issues.jboss.org/browse/WFCORE-64
> Project: WildFly Core
> Issue Type: Bug
> Components: Domain Management
> Affects Versions: 1.0.0.Alpha5
> Environment: Windows and Linux platforms both exhibit the issue
> Reporter: Jess Holle
> Assignee: ehsavoie Hugonnet
>
> If one's standalone-full.xml configuration contains something like:
> <deployments>
> <deployment name="MyWebApp.war" runtime-name="MyWebApp.war" enabled="true">
> <fs-exploded path="../../SomeDir/MyWebApp.war" relative-to="jboss.home.dir"/>
> </deployment>
> </deployments>
> whether manually inserted (while the server is not running) or installed via the CLI via
> /deployment=ServiceCenter.war/:add(runtime-name=ServiceCenter.war,content=[{archive=false,path="../../Windchill/ServiceCenter.war",relative-to="jboss.home.dir"}])
> and a deployment scanner like:
> <subsystem xmlns="urn:jboss:domain:deployment-scanner:2.0">
> <deployment-scanner name="1" path="../../../Applications" relative-to="jboss.server.base.dir" scan-interval="5000" auto-deploy-exploded="true"/>
> </subsystem>
> a failure by a deployment-scanner to deploy an application (exploded in my case, though I'm not sure this makes a difference) will cause the explicitly listed <deployments> to be removed from the configuration!
> This occurs irrespective of the value used for auto-deploy-exploded and to <deployment> elements that had already successfully been deployed and started.