[JBoss JIRA] (WFCORE-218) wildfly web management console hangs during deploy from cli
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFCORE-218?page=com.atlassian.jira.plugin... ]
Brian Stansberry reassigned WFCORE-218:
---------------------------------------
Assignee: (was: Brian Stansberry)
> wildfly web management console hangs during deploy from cli
> -----------------------------------------------------------
>
> Key: WFCORE-218
> URL: https://issues.jboss.org/browse/WFCORE-218
> Project: WildFly Core
> Issue Type: Bug
> Components: Domain Management
> Affects Versions: 1.0.0.Alpha1
> Reporter: Ian Kent
> Attachments: threaddump-1415735255304.tdump
>
>
> We are running WildFly in domain mode with the following configuration:
> host A running the domain controller
> host B running a host controller with one app server
> host C running a host controller with one app server
> host D running a host controller with one app server
> When we deploy a war using jboss-cli, the web console is blocked until the deploy completes. I have run jvisualvm, and the domain controller process does not appear to be starved for resources (CPU, memory, threads).
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (WFCORE-919) Allow read-only data/content storage
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFCORE-919?page=com.atlassian.jira.plugin... ]
Brian Stansberry reassigned WFCORE-919:
---------------------------------------
Assignee: (was: Brian Stansberry)
> Allow read-only data/content storage
> ------------------------------------
>
> Key: WFCORE-919
> URL: https://issues.jboss.org/browse/WFCORE-919
> Project: WildFly Core
> Issue Type: Feature Request
> Components: Domain Management
> Affects Versions: 2.0.0.Beta4
> Reporter: James Livingston
>
> It would be nice to allow the content repository to be read-only, given an external guarantee that all content is present. This would be useful in situations where all HCs (including the DC) share the repository over NFS, or where deployments are managed via another system (Docker, Puppet, etc.).
> It could be a ContentRepository implementation which does not make changes, and fails if a change is attempted or content is missing (see the sketch below). On read-only HCs the current writability check would need to be disabled, along with the cleaner task, and the DC (if it could write) would need to ensure content was not removed before HCs perform undeployment.
> This would presumably have some interaction with WFCORE-310.
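> A minimal sketch of such a decorator, assuming a simplified repository contract (the real org.jboss.as.repository.ContentRepository interface has more methods; the names here are illustrative only):
> {code:java}
> import java.io.InputStream;
>
> // Simplified stand-in for the repository contract, not the actual WildFly interface.
> interface ContentRepository {
>     byte[] addContent(InputStream stream);
>     void removeContent(byte[] hash);
>     boolean hasContent(byte[] hash);
> }
>
> // Decorator that delegates reads and rejects all mutation.
> final class ReadOnlyContentRepository implements ContentRepository {
>     private final ContentRepository delegate;
>
>     ReadOnlyContentRepository(ContentRepository delegate) {
>         this.delegate = delegate;
>     }
>
>     @Override
>     public byte[] addContent(InputStream stream) {
>         throw new UnsupportedOperationException("Content repository is read-only");
>     }
>
>     @Override
>     public void removeContent(byte[] hash) {
>         throw new UnsupportedOperationException("Content repository is read-only");
>     }
>
>     @Override
>     public boolean hasContent(byte[] hash) {
>         // Reads pass through; "fail if content is missing" could also be enforced here.
>         return delegate.hasContent(hash);
>     }
> }
> {code}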
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (WFCORE-887) "Deprecate" using an expression in model refs to interfaces
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFCORE-887?page=com.atlassian.jira.plugin... ]
Brian Stansberry reassigned WFCORE-887:
---------------------------------------
Assignee: (was: Brian Stansberry)
> "Deprecate" using an expression in model refs to interfaces
> -----------------------------------------------------------
>
> Key: WFCORE-887
> URL: https://issues.jboss.org/browse/WFCORE-887
> Project: WildFly Core
> Issue Type: Task
> Components: Domain Management
> Reporter: Brian Stansberry
> Fix For: 2.0.0.CR1
>
>
> SocketBindingGroupResourceDefinition and OutboundSocketBindingResourceDefinition both have attributes that represent model refs to interface resources, but which also allow expressions.
> Model references should not allow expressions. These were "grandfathered in" when the large-scale expression support rollout happened for AS 7.2 / EAP 6.1.
> There's no metadata facility to record that expression support is deprecated, but the add handlers for these should log a WARN if they encounter an expression (see the sketch below). Hopefully in EAP 8 we can then remove expression support.
> We should look for other cases like this too, although those changes should be separate JIRAs. The "jts" attribute in the transactions subsystem comes to mind.
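> A minimal sketch of the check itself, using the real org.jboss.dmr types (the attribute name and surrounding handler wiring are illustrative only):
> {code:java}
> import org.jboss.dmr.ModelNode;
> import org.jboss.dmr.ModelType;
>
> final class InterfaceRefChecks {
>
>     /**
>      * Returns true if the interface reference is supplied as an unresolved
>      * expression, in which case the add handler should log its deprecation WARN.
>      */
>     static boolean isExpressionRef(ModelNode operation, String attributeName) {
>         ModelNode value = operation.get(attributeName);
>         return value.getType() == ModelType.EXPRESSION;
>     }
> }
> {code}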
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (WFCORE-934) IPv6ScopeIdMatchUnitTestCase fails on some machines
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFCORE-934?page=com.atlassian.jira.plugin... ]
Brian Stansberry commented on WFCORE-934:
-----------------------------------------
[~mkopecky] WildFly Core master and the 2.0.0.Beta5 release have a tweak to this test that's meant to provide a bit better failure info. If you can still reproduce this, can you post the failure stack trace and test output again? Thanks.
> IPv6ScopeIdMatchUnitTestCase fails on some machines
> ---------------------------------------------------
>
> Key: WFCORE-934
> URL: https://issues.jboss.org/browse/WFCORE-934
> Project: WildFly Core
> Issue Type: Bug
> Components: Test Suite
> Affects Versions: 2.0.0.Beta4
> Reporter: Marek Kopecký
> Assignee: Brian Stansberry
>
> *Description of problem:*
> org.jboss.as.controller.interfaces.IPv6ScopeIdMatchUnitTestCase#testNonLoopback in the WildFly Core TS fails on Solaris 11 SPARC.
> *How reproducible:*
> Always with this configuration on Solaris 11 SPARC:
> {noformat}
> $ /usr/sbin/ifconfig -a
> lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
> inet 127.0.0.1 netmask ff000000
> net0: flags=100001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,PHYSRUNNING> mtu 1500 index 2
> inet 10.16.91.211 netmask fffff800 broadcast 10.16.95.255
> net0:1: flags=100001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,PHYSRUNNING> mtu 1500 index 2
> inet 10.16.179.24 netmask fffff800 broadcast 10.16.183.255
> net0:2: flags=100001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,PHYSRUNNING> mtu 1500 index 2
> inet 10.16.179.25 netmask fffff800 broadcast 10.16.183.255
> net0:3: flags=100001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,PHYSRUNNING> mtu 1500 index 2
> inet 10.16.179.26 netmask fffff800 broadcast 10.16.183.255
> net0:4: flags=100001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,PHYSRUNNING> mtu 1500 index 2
> inet 10.16.179.27 netmask fffff800 broadcast 10.16.183.255
> lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
> inet6 ::1/128
> net0: flags=120002000841<UP,RUNNING,MULTICAST,IPv6,PHYSRUNNING> mtu 1500 index 2
> inet6 fe80::8:20ff:fe6d:eab1/10
> net0:1: flags=120002000841<UP,RUNNING,MULTICAST,IPv6,PHYSRUNNING> mtu 1500 index 2
> inet6 fe80::8:20ff:fe6d:eab2/10
> net0:2: flags=120002000841<UP,RUNNING,MULTICAST,IPv6,PHYSRUNNING> mtu 1500 index 2
> inet6 fe80::8:20ff:fe6d:eab0/10
> net0:3: flags=120002000841<UP,RUNNING,MULTICAST,IPv6,PHYSRUNNING> mtu 1500 index 2
> inet6 2620:52:0:105f::ffff:164/64
> net0:4: flags=120002000841<UP,RUNNING,MULTICAST,IPv6,PHYSRUNNING> mtu 1500 index 2
> inet6 2620:52:0:105f::ffff:165/64
> net0:5: flags=120002000841<UP,RUNNING,MULTICAST,IPv6,PHYSRUNNING> mtu 1500 index 2
> inet6 2620:52:0:105f::ffff:166/64
> net0:6: flags=120002000841<UP,RUNNING,MULTICAST,IPv6,PHYSRUNNING> mtu 1500 index 2
> inet6 2620:52:0:105f::ffff:167/64
> net0:7: flags=120002080841<UP,RUNNING,MULTICAST,ADDRCONF,IPv6,PHYSRUNNING> mtu 1500 index 2
> inet6 2620:52:0:105f:8:20ff:fe6d:eab0/64
> $
> {noformat}
> *Steps to Reproduce:*
> # cd controller
> # mvn test -fae -Dmaven.test.failure.ignore=true -DfailIfNoTests=false -Dtest=IPv6ScopeIdMatchUnitTestCase
> *Actual results:*
> * StackTrace:
> {noformat}
> java.lang.AssertionError: expected:</fe80:0:0:0:8:20ff:fe6d:eab0%net0:2> but was:<null>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at org.jboss.as.controller.interfaces.IPv6ScopeIdMatchUnitTestCase.testNonLoopback(IPv6ScopeIdMatchUnitTestCase.java:129)
> {noformat}
> * Test output:
> {noformat}
> WARN (main) [org.jboss.as.controller.management-operation] <InetAddressMatchInterfaceCriteria.java:128> WFLYCTL0001: Cannot resolve address fe80:0:0:0:8:20ff:fe6d:eab2%bogus, so cannot match it to any InetAddress
> WARN (main) [org.jboss.as.controller.management-operation] <InetAddressMatchInterfaceCriteria.java:128> WFLYCTL0001: Cannot resolve address 2620:52:0:105f:8:20ff:fe6d:eab0%bogus, so cannot match it to any InetAddress
> WARN (main) [org.jboss.as.controller.management-operation] <InetAddressMatchInterfaceCriteria.java:128> WFLYCTL0001: Cannot resolve address fe80:0:0:0:8:20ff:fe6d:eab1%bogus, so cannot match it to any InetAddress
> WARN (main) [org.jboss.as.controller.management-operation] <InetAddressMatchInterfaceCriteria.java:128> WFLYCTL0001: Cannot resolve address 2620:52:0:105f:0:0:ffff:167%bogus, so cannot match it to any InetAddress
> WARN (main) [org.jboss.as.controller.management-operation] <InetAddressMatchInterfaceCriteria.java:128> WFLYCTL0001: Cannot resolve address 2620:52:0:105f:0:0:ffff:166%bogus, so cannot match it to any InetAddress
> WARN (main) [org.jboss.as.controller.management-operation] <InetAddressMatchInterfaceCriteria.java:128> WFLYCTL0001: Cannot resolve address 2620:52:0:105f:0:0:ffff:165%bogus, so cannot match it to any InetAddress
> WARN (main) [org.jboss.as.controller.management-operation] <InetAddressMatchInterfaceCriteria.java:128> WFLYCTL0001: Cannot resolve address 2620:52:0:105f:0:0:ffff:164%bogus, so cannot match it to any InetAddress
> WARN (main) [org.jboss.as.controller.management-operation] <InetAddressMatchInterfaceCriteria.java:128> WFLYCTL0001: Cannot resolve address fe80:0:0:0:8:20ff:fe6d:eab0%bogus, so cannot match it to any InetAddress
> WARN (main) [org.jboss.as.controller.management-operation] <InetAddressMatchInterfaceCriteria.java:128> WFLYCTL0001: Cannot resolve address fe80:0:0:0:8:20ff:fe6d:eab0%net0:2, so cannot match it to any InetAddress
> {noformat}
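> A small diagnostic that may help here, listing the interface names and IPv6 addresses the JVM actually enumerates; the hypothesis (an assumption, not confirmed by the output above) is that Java does not expose Solaris logical interfaces such as net0:2 as distinct NetworkInterface names, so a scope id of "net0:2" cannot be resolved:
> {code:java}
> import java.net.Inet6Address;
> import java.net.InetAddress;
> import java.net.NetworkInterface;
> import java.util.Collections;
>
> public class ListScopedAddresses {
>     public static void main(String[] args) throws Exception {
>         for (NetworkInterface nif : Collections.list(NetworkInterface.getNetworkInterfaces())) {
>             for (InetAddress addr : Collections.list(nif.getInetAddresses())) {
>                 if (addr instanceof Inet6Address) {
>                     // toString() includes the scope, e.g. /fe80:0:0:0:8:20ff:fe6d:eab0%net0
>                     System.out.println(nif.getName() + " -> " + addr);
>                 }
>             }
>         }
>     }
> }
> {code}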
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (WFLY-5234) Use of ModelNode.asPropertyList() is slow when marshaling potentially large subsystem model chunks
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFLY-5234?page=com.atlassian.jira.plugin.... ]
Brian Stansberry updated WFLY-5234:
-----------------------------------
Git Pull Request: https://github.com/wildfly/wildfly/pull/8024 (was: https://issues.jboss.org/browse/WFLY-5234)
> Use of ModelNode.asPropertyList() is slow when marshaling potentially large subsystem model chunks
> --------------------------------------------------------------------------------------------------
>
> Key: WFLY-5234
> URL: https://issues.jboss.org/browse/WFLY-5234
> Project: WildFly
> Issue Type: Bug
> Components: JCA, JMS, Security
> Affects Versions: 10.0.0.Beta2
> Reporter: Brian Stansberry
> Assignee: Brian Stansberry
> Fix For: 10.0.0.CR1
>
>
> ModelNode.asPropertyList() results in a clone of the model node. If the node is very large this can be expensive.
> We use this call quite a bit in subsystem XML marshallers. The cost is not likely to be significant in most places, but in a few cases where users may end up adding a large number of nodes of a particular type, moving away from this call to another idiom that doesn't involve cloning (see the sketch below) may help with performance.
> I've heard of a user planning to add thousands of datasources, for example, and wanting very fast performance of the write ops to add those.
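> A minimal sketch contrasting the two idioms, using the org.jboss.dmr API (the "datasource" child name and the write() helper are illustrative only):
> {code:java}
> import org.jboss.dmr.ModelNode;
> import org.jboss.dmr.Property;
>
> final class MarshalIdioms {
>
>     // Cloning idiom: asPropertyList() copies the node's children before iteration.
>     static void marshalCloning(ModelNode subsystem) {
>         for (Property p : subsystem.get("datasource").asPropertyList()) {
>             write(p.getName(), p.getValue());
>         }
>     }
>
>     // Non-cloning idiom: iterate the key set and read the live child nodes directly.
>     static void marshalLive(ModelNode subsystem) {
>         ModelNode datasources = subsystem.get("datasource");
>         for (String name : datasources.keys()) {
>             write(name, datasources.get(name));
>         }
>     }
>
>     private static void write(String name, ModelNode value) {
>         // placeholder for emitting the XML of one child element
>     }
> }
> {code}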
> Things I plan to look at:
> - (xa-)datasources
> - messaging destinations
> - resource adapters
> - JCA connectors
> - security domains
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (WFLY-5235) CDI interceptors are not called when invoking observer method
by Dirk Weil (JIRA)
Dirk Weil created WFLY-5235:
-------------------------------
Summary: CDI interceptors are not called when invoking observer method
Key: WFLY-5235
URL: https://issues.jboss.org/browse/WFLY-5235
Project: WildFly
Issue Type: Bug
Components: CDI / Weld
Affects Versions: 9.0.1.Final
Reporter: Dirk Weil
Assignee: Stuart Douglas
The following code runs with an active transaction on WildFly 8.2.0, but fails with a TransactionRequiredException on WildFly 9.0.1:
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.context.Initialized;
import javax.enterprise.event.Observes;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.transaction.Transactional;

@ApplicationScoped
public class InitCocktailDemoDataService
{
    @PersistenceContext
    private EntityManager entityManager;
    private Object someEntity; // the entity itself is elided in the original report

    @Transactional
    private void createDemoData(@Observes @Initialized(ApplicationScoped.class) Object event)
    {
        this.entityManager.merge(someEntity);
    }
}
It seems that interceptors aren't called at all - at least for observers of scope lifecycle events.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (JGRP-1956) S3_PING / FILE_PING: remove failed members
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-1956?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-1956:
---------------------------
Fix Version/s: 3.6.5
> S3_PING / FILE_PING: remove failed members
> ------------------------------------------
>
> Key: JGRP-1956
> URL: https://issues.jboss.org/browse/JGRP-1956
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.6.4
> Reporter: Karsten Ohme
> Assignee: Bela Ban
> Fix For: 3.6.5
>
>
> When we terminate a member (EC2's "terminate" function) or kill -9 it, the file (or the bucket data in S3) won't get removed. This leads to stale data. On EC2, I expect that virtualized instances are often simply terminated, so the problem is compounded there.
> SOLUTION:
> - Periodically write the member's own data to the file system (FILE_PING) or S3 (S3_PING)
> - On a view change: remove all data that's not in the current view (see the sketch below)
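> A minimal sketch of the view-change cleanup, phrased as a generic ReceiverAdapter rather than the actual FILE_PING internals (the one-file-per-member naming scheme is an assumption for illustration):
> {code:java}
> import java.io.File;
> import org.jgroups.Address;
> import org.jgroups.ReceiverAdapter;
> import org.jgroups.View;
>
> public class StaleEntryCleaner extends ReceiverAdapter {
>
>     private final File locationDir; // directory the discovery protocol writes per-member files to
>
>     public StaleEntryCleaner(File locationDir) {
>         this.locationDir = locationDir;
>     }
>
>     @Override
>     public void viewAccepted(View view) {
>         File[] files = locationDir.listFiles();
>         if (files == null)
>             return;
>         for (File f : files) {
>             // Assumed naming scheme: each member's file name starts with its logical name.
>             if (!belongsToViewMember(f.getName(), view))
>                 f.delete(); // drop data for members no longer in the view
>         }
>     }
>
>     private static boolean belongsToViewMember(String fileName, View view) {
>         for (Address member : view.getMembers()) {
>             if (fileName.startsWith(String.valueOf(member)))
>                 return true;
>         }
>         return false;
>     }
> }
> {code}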
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (WFCORE-60) Capabilities and requirements in a managed process
by Tomaz Cerar (JIRA)
[ https://issues.jboss.org/browse/WFCORE-60?page=com.atlassian.jira.plugin.... ]
Tomaz Cerar commented on WFCORE-60:
-----------------------------------
This should already be done.
> Capabilities and requirements in a managed process
> --------------------------------------------------
>
> Key: WFCORE-60
> URL: https://issues.jboss.org/browse/WFCORE-60
> Project: WildFly Core
> Issue Type: Feature Request
> Components: Domain Management
> Reporter: Brian Stansberry
> Assignee: Brian Stansberry
> Fix For: 2.0.0.CR1
>
>
> Implement the aspects discussed under the "Runtime" section of https://community.jboss.org/docs/DOC-52712
> Add an API to the OperationContext for handlers to publish capabilities and for other handlers to register a requirement for those capabilities and to access the API object associated with the capability.
> The registry of capabilities and requirements should be maintained with a semantic equivalent to the resource tree: the registry is copied on write, making the copy invisible to concurrently executing operations, and the copy is then published on commit of the operation that modified it. If the operation does not commit, the copy is discarded, so handlers have no need to revert changes they make to the registry (see the sketch below).
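> A minimal sketch of the copy-on-write, publish-on-commit pattern described above (illustrative only, not the actual OperationContext API):
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
>
> public class CapabilityRegistry {
>
>     // The published registry, visible to all operations; replaced atomically on commit.
>     private volatile Map<String, Object> published = new HashMap<String, Object>();
>
>     /** Start of an operation: take a private copy; concurrent readers keep seeing 'published'. */
>     public Map<String, Object> beginOperation() {
>         return new HashMap<String, Object>(published);
>     }
>
>     /** Commit of the modifying operation: atomically publish its copy. */
>     public void commit(Map<String, Object> copy) {
>         published = copy;
>     }
>
>     /** Rollback: the copy is simply discarded; handlers never revert anything. */
>     public void rollback(Map<String, Object> copy) {
>         // intentionally a no-op
>     }
> }
> {code}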
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (JGRP-1956) S3_PING / FILE_PING: remove failed members
by Karsten Ohme (JIRA)
[ https://issues.jboss.org/browse/JGRP-1956?page=com.atlassian.jira.plugin.... ]
Karsten Ohme edited comment on JGRP-1956 at 8/28/15 8:53 PM:
-------------------------------------------------------------
This seems to be open again. My development system is running on localhost and works in single mode.
When the server starts, a new file with the server name plus a random number is created in the S3 bucket. When the server is restarted, this old address is read from the bucket and a new one is generated. So after, e.g., 7 restarts there are 7 server addresses stored in the bucket, all of which the server tries to reach at startup to find other members. I have set the timeout to one second to limit the effect, but the server still tries to connect 10 times before switching to single mode.
The stale files should be removed somehow, even if the server crashes; alternatively, the calculation of the unique server name should be made deterministic. This worked in versions earlier than 3.6.4.
was (Author: k_o_):
This seems to be open again. When the server starts, a new file with the single DNS name plus a random number is created in the S3 bucket. When the server is restarted, this old address is read from the bucket and a new one is generated. After 7 restarts there are 7 server addresses stored in the bucket, all of which the server tries to reach. I have set the timeout to one second to limit the effect, but the server still tries to connect 10 times before switching to single mode.
The stale files should be removed somehow, even if the server crashes; alternatively, the calculation of the unique server name should be made deterministic. This worked in versions earlier than 3.6.4.
> S3_PING / FILE_PING: remove failed members
> ------------------------------------------
>
> Key: JGRP-1956
> URL: https://issues.jboss.org/browse/JGRP-1956
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.6.4
> Reporter: Karsten Ohme
> Assignee: Bela Ban
>
> When we terminate a member (EC2's "terminate" function) or kill -9 it, the file (or the bucket data in S3) won't get removed. This leads to stale data. On EC2, I expect that virtualized instances are often simply terminated, so the problem is compounded there.
> SOLUTION:
> - Periodically write the member's own data to the file system (FILE_PING) or S3 (S3_PING)
> - On a view change: remove all data that's not in the current view
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
[JBoss JIRA] (JGRP-1956) S3_PING / FILE_PING: remove failed members
by Karsten Ohme (JIRA)
[ https://issues.jboss.org/browse/JGRP-1956?page=com.atlassian.jira.plugin.... ]
Karsten Ohme updated JGRP-1956:
-------------------------------
Issue Type: Bug (was: Feature Request)
Fix Version/s: (was: 2.10)
(was: 2.6.16)
Affects Version/s: 3.6.4
This seems to be open again. When the server starts, a new file with the single DNS name plus a random number is created in the S3 bucket. When the server is restarted, this old address is read from the bucket and a new one is generated. After 7 restarts there are 7 server addresses stored in the bucket, all of which the server tries to reach. I have set the timeout to one second to limit the effect, but the server still tries to connect 10 times before switching to single mode.
The stale files should be removed somehow, even if the server crashes; alternatively, the calculation of the unique server name should be made deterministic. This worked in versions earlier than 3.6.4.
> S3_PING / FILE_PING: remove failed members
> ------------------------------------------
>
> Key: JGRP-1956
> URL: https://issues.jboss.org/browse/JGRP-1956
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.6.4
> Reporter: Karsten Ohme
> Assignee: Bela Ban
>
> When we terminate a member (EC2's "terminate" function) or kill -9 it, the file (or the bucket data in S3) won't get removed. This leads to stale data. On EC2, I expect that virtualized instances are often simply terminated, so the problem is compounded there.
> SOLUTION:
> - Periodically write the member's own data to the file system (FILE_PING) or S3 (S3_PING)
> - On a view change: remove all data that's not in the current view
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)