[JBoss JIRA] Created: (AS7-1343) wsconsume.bat not working
by Mario Antollini (JIRA)
wsconsume.bat not working
-------------------------
Key: AS7-1343
URL: https://issues.jboss.org/browse/AS7-1343
Project: Application Server 7
Issue Type: Bug
Components: Console
Affects Versions: 7.0.0.Final
Environment: Windows 7
Reporter: Mario Antollini
Assignee: Heiko Braun
Fix For: Open To Community
The file wsconsume.bat under the bin directory does not work.
The problem is in this line: "-Djava.endorsed.dirs=file:%JBOSS_HOME%/modules/com/sun/xml/bin/main;file:%JBOSS_HOME%/modules/javax/xml/ws/api/main\" ^
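The file: URL prefixes look like the culprit (and the trailing \" looks like leftover shell-style escaping): java.endorsed.dirs takes plain directory paths separated by the platform path separator, not URLs, so entries of the form file:%JBOSS_HOME%/... never resolve to actual directories and the endorsed override silently does nothing. A small diagnostic sketch (illustrative, not part of the distribution) that makes this visible when run with the same -Djava.endorsed.dirs value the script sets:
{code}
// Diagnostic sketch (illustrative): java.endorsed.dirs entries must be plain directory
// paths. Any "file:..." entry from wsconsume.bat will report "directory? false" here.
import java.io.File;

public class EndorsedDirsCheck {
    public static void main(String[] args) {
        String dirs = System.getProperty("java.endorsed.dirs", "");
        for (String entry : dirs.split(File.pathSeparator)) {
            System.out.println(entry + " -> directory? " + new File(entry).isDirectory());
        }
    }
}
{code}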
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

[JBoss JIRA] Created: (JBLOGGING-58) Support setting the logging provider explicitly
by Dan Allen (JIRA)
Support setting the logging provider explicitly
-----------------------------------------------
Key: JBLOGGING-58
URL: https://issues.jboss.org/browse/JBLOGGING-58
Project: JBoss Logging
Issue Type: Feature Request
Security Level: Public (Everyone can see)
Affects Versions: 3.0.0.Beta4-jboss-logging
Reporter: Dan Allen
Assignee: David Lloyd
Priority: Minor
Fix For: 3.0.0.Beta5-jboss-logging
Currently, JBoss Logging uses a classloading approach to look for a logging provider based on a hardcoded order of precedence. While this is sufficient for the default case, there should be a way to specify the logger explicitly, in the case that there is more than one concrete logging implementation on the classpath.
The override should be part of the API, though a system property could also be supported. The API would take precedence.
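A rough sketch of what that could look like (the class name, method names, and the property name here are illustrative placeholders, not the existing jboss-logging API):
{code}
// Hypothetical sketch of the requested override; all names are illustrative.
public final class LoggerProviders {

    // Stand-in for whatever provider abstraction the library uses internally.
    public interface LoggerProvider { }

    private static volatile LoggerProvider explicitProvider;

    // 1. Programmatic override: takes precedence over everything else.
    public static void setProvider(LoggerProvider provider) {
        explicitProvider = provider;
    }

    static LoggerProvider find(LoggerProvider detectedFromClasspath) {
        LoggerProvider provider = explicitProvider;
        if (provider != null) {
            return provider;               // explicit API override wins
        }
        // 2. Then a system property, e.g. -Dorg.jboss.logging.provider=slf4j
        String name = System.getProperty("org.jboss.logging.provider");
        if (name != null) {
            return lookupByName(name);
        }
        // 3. Finally the existing hardcoded classloading order of precedence.
        return detectedFromClasspath;
    }

    private static LoggerProvider lookupByName(String name) {
        // Map a short name ("jdk", "log4j", "slf4j", ...) to a concrete provider.
        throw new UnsupportedOperationException("illustrative only: " + name);
    }
}
{code}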
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] Created: (AS7-872) Log4J bundle fails to install due to javax.servlet imports
by David Bosschaert (JIRA)
Log4J bundle fails to install due to javax.servlet imports
----------------------------------------------------------
Key: AS7-872
URL: https://issues.jboss.org/browse/AS7-872
Project: Application Server 7
Issue Type: Bug
Components: OSGi
Affects Versions: 7.0.0.CR1
Reporter: David Bosschaert
The following bundle is quite commonly used: http://ebr.springsource.com/repository/app/bundle/version/detail?name=com...
It's the SpringSource OSGi-ified version of Log4J 1.2.15.
It installs without problems in Felix and Equinox, but when installing it in AS7 it fails on a mandatory import of javax.swing.
There are a few fishy things about this:
1. Why does the log bundle have this mandatory dependency? That is probably a bug in the bundle, but anyway.
2. Is the framework required to export javax.swing?
I always thought it wasn't, but on the other hand it works with Felix and Equinox, so it should work with us too (see the launcher sketch after the log below). Such a common bundle especially should install smoothly, IMHO.
{code}12:15:09,520 ERROR [org.jboss.osgi.framework.internal.AbstractBundleState] (MSC service thread 1-4) Could not resolve bundle: com.springsource.org.apache.log4j:1.2.15: org.osgi.framework.BundleException: Cannot resolve bundle resModule: [com.springsource.org.apache.log4j:1.2.15]
at org.jboss.osgi.framework.internal.ResolverPlugin.resolve(ResolverPlugin.java:157)
at org.jboss.osgi.framework.internal.AbstractBundleState.ensureResolved(AbstractBundleState.java:551)
at org.jboss.osgi.framework.internal.HostBundleState.startInternal(HostBundleState.java:185)
at org.jboss.osgi.framework.internal.AbstractBundleState.start(AbstractBundleState.java:489)
at org.jboss.as.osgi.deployment.BundleStartTracker$1.processService(BundleStartTracker.java:135)
at org.jboss.as.osgi.deployment.BundleStartTracker$1.serviceStarted(BundleStartTracker.java:107)
at org.jboss.msc.service.ServiceControllerImpl.invokeListener(ServiceControllerImpl.java:1322) [jboss-msc-1.0.0.Beta8.jar:1.0.0.Beta8]
at org.jboss.msc.service.ServiceControllerImpl.access$2600(ServiceControllerImpl.java:47) [jboss-msc-1.0.0.Beta8.jar:1.0.0.Beta8]
at org.jboss.msc.service.ServiceControllerImpl$ListenerTask.run(ServiceControllerImpl.java:1850) [jboss-msc-1.0.0.Beta8.jar:1.0.0.Beta8]
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [:1.6.0_23]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [:1.6.0_23]
at java.lang.Thread.run(Thread.java:662) [:1.6.0_23]
Caused by: org.jboss.osgi.resolver.XResolverException: Unable to resolve Module[com.springsource.org.apache.log4j:1.2.15]: missing requirement [Module[com.springsource.org.apache.log4j:1.2.15]] package; (package=javax.swing)
at org.jboss.osgi.resolver.felix.FelixResolver.resolveInternal(FelixResolver.java:117)
at org.jboss.osgi.resolver.spi.AbstractResolver.resolve(AbstractResolver.java:148)
at org.jboss.osgi.framework.internal.ResolverPlugin.resolve(ResolverPlugin.java:155)
... 11 more{code}
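For what it's worth, Felix and Equinox most likely resolve this because their default JRE system-package profiles export javax.swing from the system bundle. A generic launcher sketch (not AS7 code; the bundle location is illustrative) showing the equivalent configuration:
{code}
// Generic OSGi launcher sketch: expose javax.swing from the system bundle so a bundle
// with a mandatory "Import-Package: javax.swing" can be wired against it.
import java.util.HashMap;
import java.util.Map;
import java.util.ServiceLoader;

import org.osgi.framework.Constants;
import org.osgi.framework.launch.Framework;
import org.osgi.framework.launch.FrameworkFactory;

public class SwingExportLauncher {
    public static void main(String[] args) throws Exception {
        Map<String, String> config = new HashMap<String, String>();
        // Add javax.swing to the packages exported by the system bundle.
        config.put(Constants.FRAMEWORK_SYSTEMPACKAGES_EXTRA, "javax.swing");

        FrameworkFactory factory = ServiceLoader.load(FrameworkFactory.class).iterator().next();
        Framework framework = factory.newFramework(config);
        framework.start();

        // Illustrative location for the SpringSource Log4J bundle.
        framework.getBundleContext()
                 .installBundle("file:com.springsource.org.apache.log4j-1.2.15.jar")
                 .start();

        framework.stop();
        framework.waitForStop(0);
    }
}
{code}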
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] Created: (JBWEB-202) Jboss acceptCount parameter works incorrectly
by Dmitry Murashenkov (JIRA)
Jboss acceptCount parameter works incorrectly
---------------------------------------------
Key: JBWEB-202
URL: https://issues.jboss.org/browse/JBWEB-202
Project: JBoss Web
Issue Type: Bug
Security Level: Public (Everyone can see)
Components: Tomcat
Affects Versions: JBossWeb-7.0.0.Beta11
Environment: APR connector is enabled (but it seems the same bug is reproduced without it)
Reporter: Dmitry Murashenkov
Assignee: Remy Maucherat
We've found the behavior of the acceptCount property quite unexpected in JBoss 6. The code seems to be the same in the latest repository revision.
The test case is very simple:
1. Set connector properties:
<Connector ...
maxThreads="1"
acceptCount="100"/>
2. Send several HTTP requests to the server in parallel (a few will be enough: 4-5 requests). We were performing load testing when we found this bug, so we just started the load client, which tried to make several connections and send a single request through each.
Expected result: all requests get handled within some time (because acceptCount is quite high - far bigger than the number of requests)
Actual result: several requests receive "Connection reset by peer"
The actual problem seems to be in AprEndpoint$Acceptor.run():
if (!processSocketWithOptions(socket)) {
Socket.destroy(socket);
}
Method processSocketWithOptions() invokes getWorkerThread(), which returns null if no worker thread is available (at least by default, when the LOW_MEMORY setting is not set), so the socket is destroyed even though the acceptCount number of incoming connections has not been reached.
Of course, formally the connection is accepted first and only then closed, but the actual outcome is that the connection is closed and cannot be used, which is quite unexpected.
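A condensed sketch of the flow being described (simplified names, not the actual AprEndpoint source): with maxThreads=1, a second concurrent request finds no free worker, processSocketWithOptions() returns false, and the freshly accepted socket is destroyed instead of waiting for a worker, so the client sees a reset long before acceptCount is reached.
{code}
// Simplified sketch of the reported acceptor behavior (not the actual AprEndpoint code).
import java.util.ArrayDeque;
import java.util.Queue;

public class AcceptorSketch {

    interface Worker { void process(long socket); }

    private final Queue<Worker> freeWorkers = new ArrayDeque<Worker>();

    // Mirrors getWorkerThread(): null when all maxThreads workers are busy.
    Worker getWorkerThread() {
        return freeWorkers.poll();
    }

    // Mirrors processSocketWithOptions(): false means "nobody took the socket".
    boolean processSocketWithOptions(long socket) {
        Worker worker = getWorkerThread();
        if (worker == null) {
            return false;
        }
        worker.process(socket);
        return true;
    }

    // Mirrors the Acceptor.run() snippet above: the socket is destroyed immediately
    // instead of being queued until a worker frees up, which is why acceptCount
    // has no effect here.
    void onAccepted(long socket) {
        if (!processSocketWithOptions(socket)) {
            destroy(socket); // the client observes "Connection reset by peer"
        }
    }

    void destroy(long socket) { /* Socket.destroy(socket) in the real code */ }
}
{code}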
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] Created: (AS7-1260) JGroups subsystem fails to setup some protocol stacks.
by Trustin Lee (JIRA)
JGroups subsystem fails to setup some protocol stacks.
------------------------------------------------------
Key: AS7-1260
URL: https://issues.jboss.org/browse/AS7-1260
Project: Application Server 7
Issue Type: Bug
Reporter: Trustin Lee
Assignee: Paul Ferraro
Priority: Critical
After upgrading from AS 7.0.0.CR1 to AS 7.0.0.Final, I started to see an exception raised by org.jboss.as.clustering.jgroups.JChannelFactory:
{code}
22:11:54,040 ERROR [stderr] (MSC service thread 1-1) java.lang.IllegalArgumentException: org.jgroups.ChannelException: unable to setup the protocol stack
22:11:54,040 ERROR [stderr] (MSC service thread 1-1) at org.jboss.as.clustering.infinispan.ChannelProvider.getJGroupsChannel(ChannelProvider.java:71)
22:11:54,041 ERROR [stderr] (MSC service thread 1-1) at org.infinispan.remoting.transport.jgroups.JGroupsTransport.buildChannel(JGroupsTransport.java:273)
22:11:54,041 ERROR [stderr] (MSC service thread 1-1) at org.infinispan.remoting.transport.jgroups.JGroupsTransport.initChannel(JGroupsTransport.java:226)
22:11:54,041 ERROR [stderr] (MSC service thread 1-1) at org.infinispan.remoting.transport.jgroups.JGroupsTransport.initChannelAndRPCDispatcher(JGroupsTransport.java:254)
22:11:54,041 ERROR [stderr] (MSC service thread 1-1) at org.infinispan.remoting.transport.jgroups.JGroupsTransport.start(JGroupsTransport.java:148)
22:11:54,041 ERROR [stderr] (MSC service thread 1-1) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
22:11:54,041 ERROR [stderr] (MSC service thread 1-1) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
22:11:54,042 ERROR [stderr] (MSC service thread 1-1) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
22:11:54,042 ERROR [stderr] (MSC service thread 1-1) at java.lang.reflect.Method.invoke(Method.java:597)
22:11:54,042 ERROR [stderr] (MSC service thread 1-1) at org.infinispan.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:172)
22:11:54,042 ERROR [stderr] (MSC service thread 1-1) at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:918)
22:11:54,042 ERROR [stderr] (MSC service thread 1-1) at org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:711)
22:11:54,042 ERROR [stderr] (MSC service thread 1-1) at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:611)
22:11:54,043 ERROR [stderr] (MSC service thread 1-1) at org.infinispan.factories.GlobalComponentRegistry.start(GlobalComponentRegistry.java:176)
22:11:54,043 ERROR [stderr] (MSC service thread 1-1) at org.infinispan.factories.ComponentRegistry.start(ComponentRegistry.java:171)
22:11:54,043 ERROR [stderr] (MSC service thread 1-1) at org.infinispan.CacheImpl.start(CacheImpl.java:366)
22:11:54,043 ERROR [stderr] (MSC service thread 1-1) at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:559)
22:11:54,043 ERROR [stderr] (MSC service thread 1-1) at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:455)
22:11:54,043 ERROR [stderr] (MSC service thread 1-1) at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:479)
22:11:54,044 ERROR [stderr] (MSC service thread 1-1) at org.jboss.as.clustering.infinispan.DefaultEmbeddedCacheManager.getCache(DefaultEmbeddedCacheManager.java:84)
22:11:54,044 ERROR [stderr] (MSC service thread 1-1) at org.jboss.as.clustering.infinispan.DefaultEmbeddedCacheManager.getCache(DefaultEmbeddedCacheManager.java:73)
22:11:54,044 ERROR [stderr] (MSC service thread 1-1) at org.jboss.as.clustering.infinispan.subsystem.CacheService.start(CacheService.java:73)
22:11:54,044 ERROR [stderr] (MSC service thread 1-1) at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1765)
22:11:54,044 ERROR [stderr] (MSC service thread 1-1) at org.jboss.msc.service.ServiceControllerImpl$ClearTCCLTask.run(ServiceControllerImpl.java:2291)
22:11:54,044 ERROR [stderr] (MSC service thread 1-1) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
22:11:54,045 ERROR [stderr] (MSC service thread 1-1) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
22:11:54,045 ERROR [stderr] (MSC service thread 1-1) at java.lang.Thread.run(Thread.java:680)
22:11:54,045 ERROR [stderr] (MSC service thread 1-1) Caused by: org.jgroups.ChannelException: unable to setup the protocol stack
22:11:54,045 ERROR [stderr] (MSC service thread 1-1) at org.jgroups.JChannel.init(JChannel.java:1728)
22:11:54,045 ERROR [stderr] (MSC service thread 1-1) at org.jgroups.JChannel.<init>(JChannel.java:249)
22:11:54,045 ERROR [stderr] (MSC service thread 1-1) at org.jboss.as.clustering.jgroups.JChannelFactory$JChannel.<init>(JChannelFactory.java:269)
22:11:54,046 ERROR [stderr] (MSC service thread 1-1) at org.jboss.as.clustering.jgroups.JChannelFactory.createChannel(JChannelFactory.java:69)
22:11:54,046 ERROR [stderr] (MSC service thread 1-1) at org.jboss.as.clustering.infinispan.ChannelProvider.getJGroupsChannel(ChannelProvider.java:67)
22:11:54,046 ERROR [stderr] (MSC service thread 1-1) ... 26 more
22:11:54,046 ERROR [stderr] (MSC service thread 1-1) Caused by: java.lang.IllegalArgumentException: the following properties in PING are not recognized: {bind_addr=192.168.0.196}
22:11:54,046 ERROR [stderr] (MSC service thread 1-1) at org.jgroups.stack.Configurator.createLayer(Configurator.java:460)
22:11:54,047 ERROR [stderr] (MSC service thread 1-1) at org.jgroups.stack.Configurator.createProtocols(Configurator.java:393)
22:11:54,047 ERROR [stderr] (MSC service thread 1-1) at org.jgroups.stack.Configurator.setupProtocolStack(Configurator.java:88)
22:11:54,047 ERROR [stderr] (MSC service thread 1-1) at org.jgroups.stack.Configurator.setupProtocolStack(Configurator.java:55)
22:11:54,047 ERROR [stderr] (MSC service thread 1-1) at org.jgroups.stack.ProtocolStack.setup(ProtocolStack.java:534)
22:11:54,047 ERROR [stderr] (MSC service thread 1-1) at org.jgroups.JChannel.init(JChannel.java:1725)
22:11:54,047 ERROR [stderr] (MSC service thread 1-1) ... 30 more
{code}
(Actually, the exception was being swallowed by Infinispan, so I had to modify the source code in Infinispan.)
Reverting only the JGroups subsystem to 7.0.0.CR1 fixes the problem. The offending subsystem configuration is:
{code}
<subsystem xmlns="urn:jboss:domain:jgroups:1.0" default-stack="udp">
<stack name="udp">
<transport type="UDP"
socket-binding="jgroups-udp"
default-executor="jgroups"
oob-executor="jgroups-oob"
timer-executor="jgroups-timer"/>
<protocol type="PING"/>
<protocol type="MERGE2"/>
<protocol type="FD_SOCK"/>
<protocol type="FD"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="BARRIER"/>
<protocol type="pbcast.NAKACK"/>
<protocol type="UNICAST"/>
<protocol type="pbcast.STABLE"/>
<protocol type="VIEW_SYNC"/>
<protocol type="pbcast.GMS"/>
<protocol type="UFC"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
<protocol type="pbcast.STREAMING_STATE_TRANSFER"/>
<protocol type="pbcast.FLUSH"/>
</stack>
<stack name="tcp">
<transport type="TCP"
socket-binding="jgroups-tcp"
default-executor="jgroups"
oob-executor="jgroups-oob"
timer-executor="jgroups-timer"/>
<protocol type="MPING" socket-binding="jgroups-mping"/>
<protocol type="MERGE2"/>
<protocol type="FD_SOCK"/>
<protocol type="FD"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="BARRIER"/>
<protocol type="pbcast.NAKACK"/>
<protocol type="UNICAST"/>
<protocol type="pbcast.STABLE"/>
<protocol type="VIEW_SYNC"/>
<protocol type="pbcast.GMS"/>
<protocol type="UFC"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
<protocol type="pbcast.STREAMING_STATE_TRANSFER"/>
<protocol type="pbcast.FLUSH"/>
</stack>
</subsystem>
{code}
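The immediate cause looks like the subsystem copying the transport's bind_addr onto every protocol, including PING, which does not declare that property. A standalone sketch that reproduces the same Configurator rejection outside the subsystem (assuming the JGroups 2.x plain-text stack syntax; the address is just the one from the log):
{code}
// Standalone reproduction sketch, assuming the JGroups 2.x API bundled with AS 7.
// bind_addr is a transport property; handing it to PING makes Configurator.createLayer()
// fail with "the following properties in PING are not recognized".
import org.jgroups.JChannel;

public class PingBindAddrRepro {
    public static void main(String[] args) throws Exception {
        String stack = "UDP(bind_addr=192.168.0.196):PING(bind_addr=192.168.0.196)";
        new JChannel(stack); // expected: ChannelException: unable to setup the protocol stack
    }
}
{code}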
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] Created: (AS7-772) Allow operation handlers to register child ModelNodeRegistrations
by Brian Stansberry (JIRA)
Allow operation handlers to register child ModelNodeRegistrations
-----------------------------------------------------------------
Key: AS7-772
URL: https://issues.jboss.org/browse/AS7-772
Project: Application Server 7
Issue Type: Feature Request
Components: Domain Management
Reporter: Brian Stansberry
Assignee: Brian Stansberry
Fix For: 7.0.0.CR1
Handlers (specifically datasource add handlers) can sometimes add services whose management interface is not known until runtime. The context provided to handlers should provide a hook to allow them to register child ModelNodeRegistration objects, relative to the resource the handler is managing.
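A rough sketch of the kind of hook this asks for (all names here are placeholders, not the current controller API):
{code}
// Illustrative only: a handler context that lets the handler register child
// ModelNodeRegistrations relative to the resource it is managing.
interface RuntimeRegistrationContext {

    // Placeholder for the existing registration type.
    interface ModelNodeRegistration {
        ModelNodeRegistration registerSubModel(String childType, String childName);
    }

    // Registration node for the resource the current operation handler manages.
    ModelNodeRegistration getResourceRegistration();
}

// Example: a datasource "add" handler whose child management resources are only
// known once the runtime services have been created.
class DataSourceAddHandler {
    void execute(RuntimeRegistrationContext context) {
        RuntimeRegistrationContext.ModelNodeRegistration registration =
                context.getResourceRegistration();
        // Register a child resource discovered at runtime, relative to this resource.
        registration.registerSubModel("statistics", "pool");
    }
}
{code}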
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] Created: (JBVFS-159) Native memory leak due to ZipEntryInputStream
by Samuel Cai (JIRA)
Native memory leak due to ZipEntryInputStream
---------------------------------------------
Key: JBVFS-159
URL: https://jira.jboss.org/browse/JBVFS-159
Project: JBoss VFS
Issue Type: Bug
Security Level: Public (Everyone can see)
Affects Versions: 2.1.2.GA
Environment: Redhat (not sure what version) 2.6.9-78.ELsmp
JBoss 5.1.0.GA
JDK 1.6.0_20
JVM parameter:
-XX:+PrintGCTimeStamps -XX:+PrintGCApplicationStoppedTime -verbose:gc -Dfile.encoding=iso-8859-1 -server -Djava.net.preferIPv4Stack=true -Doracle.jdbc.V8Compatible=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Dnetworkaddress.cache.ttl=300 -Xss128k -Xmn500m -Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true -XX:+PrintGCDetails -XX:PermSize=256m -XX:MaxPermSize=256m -Xms1500m -Xmx1500m -XX:+HeapDumpOnOutOfMemoryError -XX:+UseConcMarkSweepGC
Reporter: Samuel Cai
Assignee: John Bailey
We used to run JBoss 4.2.1.GA with JDK 1.6.0_11, and are now trying JBoss 5.1.0.GA with JDK 1.6.0_20.
I found the process size is much larger than before: 2.5G~2.9G compared to 1.9G.
I initially thought this was a bug in JBoss AS and filed https://jira.jboss.org/browse/JBAS-8066
After several days of investigation, I think this is a memory leak in VFS, possibly one that only happens in our specific environment.
I tried a change in class org.jboss.virtual.plugins.context.zip.ZipFileWrapper, method openStream():
From:
ZipEntryInputStream zis = new ZipEntryInputStream(this, is);
return zis;
To:
//ZipEntryInputStream zis = new ZipEntryInputStream(this, is);
//return zis;
return is;
That is, don't wrap the stream in ZipEntryInputStream; let whatever class/method invokes openStream() close the zip file's input stream directly.
The ZipFile stays open, but all streams get closed properly.
This brings the process size down to the same level as JBoss 4's. I went through the first 3 pages of the site with no problems; the QA team may need to test more.
By the way, I tried updating VFS to 2.1.3.SP1/2.2.0.M4/3.0.0.CR5: the first two have the same size issue, and the third one couldn't start.
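For background on why this shows up as process (native) growth rather than Java heap growth: streams returned by ZipFile.getInputStream() are typically backed by native inflater state that is only released when the stream is closed. A tiny standalone demo of that mechanism (illustrative JDK-only code, unrelated to the JBoss VFS classes involved):
{code}
// Illustrative demo: holding zip entry streams open pins native memory even though
// the Java heap stays flat, which matches the "process size grows" symptom.
import java.io.InputStream;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class ZipStreamLeakDemo {
    public static void main(String[] args) throws Exception {
        ZipFile zip = new ZipFile(args[0]);          // any reasonably large jar/zip
        List<InputStream> held = new ArrayList<InputStream>();
        for (int round = 0; round < 100; round++) {
            Enumeration<? extends ZipEntry> entries = zip.entries();
            while (entries.hasMoreElements()) {
                // Opened but never closed: native inflater buffers accumulate.
                held.add(zip.getInputStream(entries.nextElement()));
            }
        }
        System.out.println("holding " + held.size() + " open entry streams");
    }
}
{code}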
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: https://jira.jboss.org/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira