[JBoss JIRA] (WFLY-6052) NPE on session.invalidate()
by Stuart Douglas (JIRA)
[ https://issues.jboss.org/browse/WFLY-6052?page=com.atlassian.jira.plugin.... ]
Stuart Douglas commented on WFLY-6052:
--------------------------------------
You will need to create a new issue for the clustering exception.
> NPE on session.invalidate()
> ---------------------------
>
> Key: WFLY-6052
> URL: https://issues.jboss.org/browse/WFLY-6052
> Project: WildFly
> Issue Type: Bug
> Components: Web (Undertow)
> Affects Versions: 10.0.0.CR5
> Reporter: Juan AMAT
> Assignee: Stuart Douglas
>
> I created a discussion about the issue some time ago but got no answer.
> See: https://developer.jboss.org/message/946976#946976
> The problem is that when a webapp is not marked as 'distributable', the Undertow InMemorySessionManager is used.
> As mentioned in the discussion, this manager's 'getSession' must not be called with a null parameter, which is what WildFly does.
> This happens when single sign-on is enabled, multiple sessions are associated with the same SSO session, and 'invalidate' is called on one of the sessions.
> The workaround is to mark all our webapps as distributable, but this will have a performance impact.
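For reference, the workaround amounts to adding the standard <distributable/> element to each webapp's web.xml; a minimal sketch (the schema version shown is illustrative):
{code:xml}
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                             http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
         version="3.1">
    <!-- opts this webapp into the distributable (clustered) session manager -->
    <distributable/>
</web-app>
{code}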
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6055) Deploying web application using web.xml version 2.5 and older prints errors in log
by Stuart Douglas (JIRA)
[ https://issues.jboss.org/browse/WFLY-6055?page=com.atlassian.jira.plugin.... ]
Stuart Douglas moved JBEAP-2992 to WFLY-6055:
---------------------------------------------
Project: WildFly (was: JBoss Enterprise Application Platform)
Key: WFLY-6055 (was: JBEAP-2992)
Workflow: GIT Pull Request workflow (was: CDW with loose statuses v1)
Component/s: Web (Undertow)
(was: Web (Undertow))
Target Release: (was: 7.0.0.GA)
> Deploying web application using web.xml version 2.5 and older prints errors in log
> ----------------------------------------------------------------------------------
>
> Key: WFLY-6055
> URL: https://issues.jboss.org/browse/WFLY-6055
> Project: WildFly
> Issue Type: Bug
> Components: Web (Undertow)
> Reporter: Stuart Douglas
> Assignee: Stuart Douglas
> Priority: Critical
>
> Deploying a web application containing a version 2.5 web.xml with the schema definition [1] prints errors to the log [2].
> Note this works with EAP 6. Marking as blocker, as all deployments working with EAP 6 (which don't use internal APIs) should also work with EAP 7 [EAP7-251], and old versions of web-app should also be supported per http://download.oracle.com/otndocs/jcp/servlet-3_1-fr-eval-spec/index.html (and it worked previously).
> [1]
> {code:xml}
> <web-app xmlns="http://java.sun.com/xml/ns/javaee"
> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
> xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
> http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
> version="2.5">
> </web-app>
> {code}
> [2]
> {noformat}
> 12:55:14,936 WARN [org.jboss.metadata.parser.util.XMLResourceResolver] (MSC service thread 1-3) Cannot load publicId from resource: web-app_2_5.xsd
> 12:55:16,041 ERROR [org.jboss.metadata.parser.util.XMLSchemaValidator] (MSC service thread 1-3) Cannot get schema for location: http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd: org.xml.sax.SAXParseException; systemId: http://java.sun.com/xml/ns/javaee/javaee_5.xsd; lineNumber: 83; columnNumber: 38; sch-props-correct.2: A schema cannot contain two global components with the same name; this schema contains two occurrences of 'http://java.sun.com/xml/ns/javaee,descriptionGroup'.
> at org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:201)
> at org.apache.xerces.util.ErrorHandlerWrapper.error(ErrorHandlerWrapper.java:132)
> at org.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:394)
> at org.apache.xerces.impl.xs.traversers.XSDHandler.reportSchemaError(XSDHandler.java:4093)
> at org.apache.xerces.impl.xs.traversers.XSDHandler.reportSchemaError(XSDHandler.java:4088)
> at org.apache.xerces.impl.xs.traversers.XSDHandler.checkForDuplicateNames(XSDHandler.java:3746)
> at org.apache.xerces.impl.xs.traversers.XSDHandler.buildGlobalNameRegistries(XSDHandler.java:1315)
> at org.apache.xerces.impl.xs.traversers.XSDHandler.parseSchema(XSDHandler.java:610)
> at org.apache.xerces.impl.xs.XMLSchemaLoader.loadSchema(XMLSchemaLoader.java:580)
> at org.apache.xerces.impl.xs.XMLSchemaLoader.loadGrammar(XMLSchemaLoader.java:547)
> at org.apache.xerces.impl.xs.XMLSchemaLoader.loadGrammar(XMLSchemaLoader.java:513)
> at org.apache.xerces.jaxp.validation.XMLSchemaFactory.newSchema(XMLSchemaFactory.java:233)
> at javax.xml.validation.SchemaFactory.newSchema(SchemaFactory.java:638)
> at __redirected.__SchemaFactory.newSchema(__SchemaFactory.java:167)
> at org.jboss.metadata.parser.util.XMLSchemaValidator.getSchemaForLocation(XMLSchemaValidator.java:117)
> at org.jboss.metadata.parser.util.XMLSchemaValidator.validate(XMLSchemaValidator.java:85)
> at org.wildfly.extension.undertow.deployment.WebParsingDeploymentProcessor.deploy(WebParsingDeploymentProcessor.java:104)
> at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:147)
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948)
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> 12:55:16,042 WARN [org.jboss.metadata.parser.util.XMLSchemaValidator] (MSC service thread 1-3) Cannot get schema for location: http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd
> {noformat}
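The errors above come from fetching the schema from java.sun.com at validation time, where the remote javaee_5.xsd ends up defining duplicate global components. As a general JAXP illustration only (not the WildFly fix; the /schemas resource path is hypothetical), validation can be pointed at locally bundled schema documents via an LSResourceResolver:
{code:java}
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import org.w3c.dom.bootstrap.DOMImplementationRegistry;
import org.w3c.dom.ls.DOMImplementationLS;
import org.w3c.dom.ls.LSInput;

public class LocalSchemaResolution {

    public static Schema loadWebApp25Schema() throws Exception {
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        final DOMImplementationLS ls = (DOMImplementationLS)
                DOMImplementationRegistry.newInstance().getDOMImplementation("LS");
        // Redirect any http://java.sun.com/... systemId to a bundled copy of the schema.
        factory.setResourceResolver((type, namespaceURI, publicId, systemId, baseURI) -> {
            if (systemId == null) {
                return null; // let the default resolution apply
            }
            String name = systemId.substring(systemId.lastIndexOf('/') + 1); // e.g. javaee_5.xsd
            java.io.InputStream local =
                    LocalSchemaResolution.class.getResourceAsStream("/schemas/" + name);
            if (local == null) {
                return null;
            }
            LSInput input = ls.createLSInput();
            input.setByteStream(local);
            input.setSystemId(systemId);
            return input;
        });
        return factory.newSchema(new StreamSource(
                LocalSchemaResolution.class.getResourceAsStream("/schemas/web-app_2_5.xsd")));
    }
}
{code}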
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6052) NPE on session.invalidate()
by Juan AMAT (JIRA)
[ https://issues.jboss.org/browse/WFLY-6052?page=com.atlassian.jira.plugin.... ]
Juan AMAT commented on WFLY-6052:
---------------------------------
Undertow 1.3.14.Final does fix my issue.
I do, however, have another problem with session.invalidate().
This is related to the problem described here: https://developer.jboss.org/thread/254200
If I use the configuration described in that discussion:
<replicated-cache name="repl" mode="ASYNC" batching="true">
    <transaction locking="OPTIMISTIC"/>
    <locking isolation="READ_COMMITTED"/>
    <file-store/>
</replicated-cache>
then everything is fine until I call session.invalidate(), at which point I get an exception:
Caused by: java.lang.UnsupportedOperationException: Calling lock() on non-transactional caches is not allowed
at org.infinispan.cache.impl.CacheImpl.lock(CacheImpl.java:820)
at org.infinispan.cache.impl.DecoratedCache.lock(DecoratedCache.java:136)
at org.infinispan.cache.impl.AbstractDelegatingAdvancedCache.lock(AbstractDelegatingAdvancedCache.java:177)
at org.wildfly.clustering.web.infinispan.session.InfinispanSessionMetaDataFactory.remove(InfinispanSessionMetaDataFactory.java:124)
at org.wildfly.clustering.web.infinispan.session.InfinispanSessionMetaDataFactory.remove(InfinispanSessionMetaDataFactory.java:39)
at org.wildfly.clustering.web.infinispan.session.InfinispanSessionFactory.remove(InfinispanSessionFactory.java:89)
at org.wildfly.clustering.web.infinispan.session.InfinispanSessionFactory.remove(InfinispanSessionFactory.java:40)
at org.wildfly.clustering.web.infinispan.session.InfinispanSession.invalidate(InfinispanSession.java:67)
at org.wildfly.clustering.web.infinispan.session.InfinispanSessionManager$SchedulableSession.invalidate(InfinispanSessionManager.java:439)
at org.wildfly.clustering.web.undertow.session.DistributableSession.invalidate(DistributableSession.java:181)
at io.undertow.servlet.spec.HttpSessionImpl.invalidate(HttpSessionImpl.java:199)
If I instead modify the configuration and specify <transaction mode="BATCH" locking="OPTIMISTIC"/>, I get another exception:
Caused by: org.infinispan.InvalidCacheUsageException: Explicit locking is not allowed with optimistic caches!
at org.infinispan.interceptors.locking.OptimisticLockingInterceptor.visitLockControlCommand(OptimisticLockingInterceptor.java:142)
at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:110)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
at org.infinispan.interceptors.TxInterceptor.invokeNextInterceptorAndVerifyTransaction(TxInterceptor.java:157)
at org.infinispan.interceptors.TxInterceptor.visitLockControlCommand(TxInterceptor.java:215)
at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:110)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:113)
at org.infinispan.commands.AbstractVisitor.visitLockControlCommand(AbstractVisitor.java:163)
at org.infinispan.statetransfer.TransactionSynchronizerInterceptor.visitLockControlCommand(TransactionSynchronizerInterceptor.java:78)
at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:110)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
at org.infinispan.statetransfer.StateTransferInterceptor.handleTxCommand(StateTransferInterceptor.java:238)
at org.infinispan.statetransfer.StateTransferInterceptor.visitLockControlCommand(StateTransferInterceptor.java:102)
at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:110)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:107)
at org.infinispan.interceptors.InvocationContextInterceptor.visitLockControlCommand(InvocationContextInterceptor.java:81)
at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:110)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:113)
at org.infinispan.commands.AbstractVisitor.visitLockControlCommand(AbstractVisitor.java:163)
at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:110)
at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:336)
at org.infinispan.cache.impl.CacheImpl.lock(CacheImpl.java:828)
at org.infinispan.cache.impl.DecoratedCache.lock(DecoratedCache.java:136)
at org.infinispan.cache.impl.AbstractDelegatingAdvancedCache.lock(AbstractDelegatingAdvancedCache.java:177)
at org.wildfly.clustering.web.infinispan.session.InfinispanSessionMetaDataFactory.remove(InfinispanSessionMetaDataFactory.java:124)
at org.wildfly.clustering.web.infinispan.session.InfinispanSessionMetaDataFactory.remove(InfinispanSessionMetaDataFactory.java:39)
at org.wildfly.clustering.web.infinispan.session.InfinispanSessionFactory.remove(InfinispanSessionFactory.java:89)
at org.wildfly.clustering.web.infinispan.session.InfinispanSessionFactory.remove(InfinispanSessionFactory.java:40)
at org.wildfly.clustering.web.infinispan.session.InfinispanSession.invalidate(InfinispanSession.java:67)
at org.wildfly.clustering.web.infinispan.session.InfinispanSessionManager$SchedulableSession.invalidate(InfinispanSessionManager.java:439)
at org.wildfly.clustering.web.undertow.session.DistributableSession.invalidate(DistributableSession.java:181)
at io.undertow.servlet.spec.HttpSessionImpl.invalidate(HttpSessionImpl.java:199)
Is there some other configuration that I should use?
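For reference: both stack traces end at an explicit cache lock() call, which Infinispan permits only on a transactional cache with pessimistic locking. A sketch along those lines, using the same subsystem elements as above (not a verified configuration; the transaction mode="BATCH" supersedes the batching="true" attribute, as far as I know):
<replicated-cache name="repl" mode="ASYNC">
    <transaction mode="BATCH" locking="PESSIMISTIC"/>
    <locking isolation="READ_COMMITTED"/>
    <file-store/>
</replicated-cache>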
> NPE on session.invalidate()
> ---------------------------
>
> Key: WFLY-6052
> URL: https://issues.jboss.org/browse/WFLY-6052
> Project: WildFly
> Issue Type: Bug
> Components: Web (Undertow)
> Affects Versions: 10.0.0.CR5
> Reporter: Juan AMAT
> Assignee: Stuart Douglas
>
> I created a discussion about the issue some time ago but got no answer.
> See: https://developer.jboss.org/message/946976#946976
> The problem is that when a webapp is not marked as 'distributable', the Undertow InMemorySessionManager is used.
> As mentioned in the discussion, this manager's 'getSession' must not be called with a null parameter, which is what WildFly does.
> This happens when single sign-on is enabled, multiple sessions are associated with the same SSO session, and 'invalidate' is called on one of the sessions.
> The workaround is to mark all our webapps as distributable, but this will have a performance impact.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6052) NPE on session.invalidate()
by Juan AMAT (JIRA)
[ https://issues.jboss.org/browse/WFLY-6052?page=com.atlassian.jira.plugin.... ]
Juan AMAT commented on WFLY-6052:
---------------------------------
Looking at the changes in Undertow, the problem is fixed: https://issues.jboss.org/browse/UNDERTOW-603
I will patch my version of WildFly with Undertow 1.3.14.Final and see how it goes.
> NPE on session.invalidate()
> ---------------------------
>
> Key: WFLY-6052
> URL: https://issues.jboss.org/browse/WFLY-6052
> Project: WildFly
> Issue Type: Bug
> Components: Web (Undertow)
> Affects Versions: 10.0.0.CR5
> Reporter: Juan AMAT
> Assignee: Stuart Douglas
>
> I created a discussion about the issue some time ago but got no answer.
> See: https://developer.jboss.org/message/946976#946976
> The problem is that when a webapp is not marked as 'distributable', the Undertow InMemorySessionManager is used.
> As mentioned in the discussion, this manager's 'getSession' must not be called with a null parameter, which is what WildFly does.
> This happens when single sign-on is enabled, multiple sessions are associated with the same SSO session, and 'invalidate' is called on one of the sessions.
> The workaround is to mark all our webapps as distributable, but this will have a performance impact.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-5480) SPEC JMS 2007 benchmark fails with AMQ214013: Failed to decode packet
by Clebert Suconic (JIRA)
[ https://issues.jboss.org/browse/WFLY-5480?page=com.atlassian.jira.plugin.... ]
Clebert Suconic commented on WFLY-5480:
---------------------------------------
I am fixing this on Artemis as https://issues.apache.org/jira/browse/ARTEMIS-357
I don't think this is a simple race on getEncoding(); it seems to be something more complex with the Netty buffers that goes away when returning copies.
I am now copying the inner buffer to a pooled buffer on send (which shouldn't be an issue, since this buffer will be pooled). This will let me extend this later to use only pooled buffers on messages (a next step). I could be wrong on the race analysis, but fixing it this way is simpler than finding what moves the writerIndex after getEncodedBuffer() and leaks a non-cloned buffer. A simpler implementation will do better there.
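A minimal sketch of the defensive-copy idea described above (illustrative only, not the actual Artemis patch):
{code:java}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public final class SafeWireCopy {

    private SafeWireCopy() {
    }

    // If the message's internal buffer is handed straight to the channel, any
    // concurrent move of its writerIndex corrupts what the peer decodes (the
    // IndexOutOfBoundsException in this report). Copying the encoded bytes into
    // a pooled buffer gives the sender exclusive ownership of bytes and indices.
    public static ByteBuf copyForWire(ByteBuf encoded) {
        ByteBuf wire = PooledByteBufAllocator.DEFAULT.buffer(encoded.readableBytes());
        wire.writeBytes(encoded, encoded.readerIndex(), encoded.readableBytes());
        return wire; // released by Netty after it is written and flushed
    }
}
{code}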
> SPEC JMS 2007 benchmark fails with AMQ214013: Failed to decode packet
> ---------------------------------------------------------------------
>
> Key: WFLY-5480
> URL: https://issues.jboss.org/browse/WFLY-5480
> Project: WildFly
> Issue Type: Bug
> Components: JMS
> Reporter: Ondřej Kalman
> Assignee: Jeff Mesnil
> Priority: Blocker
> Fix For: 10.0.0.Final
>
>
> But I have another problem: when I removed the trace logs from the config, I'm not able to run the benchmark on localhost to the end, because clients start getting:
> AMQ214013: Failed to decode packet
> java.lang.IndexOutOfBoundsException: readerIndex: 2130706436 (expected: 0 <= readerIndex <= writerIndex(2943))
> at io.netty.buffer.AbstractByteBuf.readerIndex(AbstractByteBuf.java:73)
> at io.netty.buffer.WrappedByteBuf.readerIndex(WrappedByteBuf.java:99)
> at org.apache.activemq.artemis.core.buffers.impl.ChannelBufferWrapper.readerIndex(ChannelBufferWrapper.java:405)
> at org.apache.activemq.artemis.core.message.impl.MessageImpl.decode(MessageImpl.java:1052)
> at org.apache.activemq.artemis.core.message.impl.MessageImpl.decodeFromBuffer(MessageImpl.java:459)
> at org.apache.activemq.artemis.core.protocol.core.impl.wireformat.SessionReceiveMessage.decode(SessionReceiveMessage.java:94)
> at org.apache.activemq.artemis.core.protocol.ClientPacketDecoder.decode(ClientPacketDecoder.java:42)
> at org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.bufferReceived(RemotingConnectionImpl.java:371)
> at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl$DelegatingBufferHandler.bufferReceived(ClientSessionFactoryImpl.java:1374)
> at org.apache.activemq.artemis.core.remoting.impl.netty.ActiveMQChannelHandler.channelRead(ActiveMQChannelHandler.java:73)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:242)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847)
> at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
> at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
> at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
> at java.lang.Thread.run(Thread.java:745)
> and
> Thread-9 (ActiveMQ-client-global-threads-1266561810): Uncaught exception.
> java.lang.NumberFormatException: null
> at java.lang.Integer.parseInt(Integer.java:542)
> at java.lang.Integer.valueOf(Integer.java:766)
> at org.apache.activemq.artemis.utils.TypedProperties.getIntProperty(TypedProperties.java:280)
> at org.apache.activemq.artemis.core.message.impl.MessageImpl.getIntProperty(MessageImpl.java:811)
> at org.apache.activemq.artemis.jms.client.ActiveMQMessage.getIntProperty(ActiveMQMessage.java:578)
> at org.spec.jms.agents.SPECWorkerThread.receivedMessage(SPECWorkerThread.java:849)
> at org.spec.jms.agents.SPECWorkerThread.onMessage(SPECWorkerThread.java:820)
> at org.apache.activemq.artemis.jms.client.JMSMessageListenerWrapper.onMessage(JMSMessageListenerWrapper.java:100)
> at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.callOnMessage(ClientConsumerImpl.java:1089)
> at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.access$400(ClientConsumerImpl.java:47)
> at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl$Runner.run(ClientConsumerImpl.java:1224)
> at org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor$1.run(OrderedExecutorFactory.java:105)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Thread-3 (ActiveMQ-client-global-threads-1266561810): Uncaught exception.
> javax.jms.IllegalStateException: AMQ119027: Could not find reference on consumer ID=0, messageId = 104,833 queue = 127.0.0.1_VM1_SPAgent7_0.SP_CallForOffersEH_7_EHID_1PF0
> at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:410)
> at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext.sendACK(ActiveMQSessionContext.java:461)
> at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.acknowledge(ClientSessionImpl.java:765)
> at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.doAck(ClientConsumerImpl.java:1212)
> at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.flushAcks(ClientConsumerImpl.java:830)
> at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.flushAcks(ClientSessionImpl.java:1852)
> at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.commit(ClientSessionImpl.java:501)
> at org.apache.activemq.artemis.core.client.impl.DelegatingSession.commit(DelegatingSession.java:159)
> at org.apache.activemq.artemis.jms.client.ActiveMQSession.commit(ActiveMQSession.java:218)
> at org.spec.jms.eventhandler.sp.SP_CallForOffersEH.handleMessage(SP_CallForOffersEH.java:306)
> at org.spec.jms.agents.SPECWorkerThread.onMessage(SPECWorkerThread.java:821)
> at org.apache.activemq.artemis.jms.client.JMSMessageListenerWrapper.onMessage(JMSMessageListenerWrapper.java:100)
> at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.callOnMessage(ClientConsumerImpl.java:1089)
> at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.access$400(ClientConsumerImpl.java:47)
> at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl$Runner.run(ClientConsumerImpl.java:1224)
> at org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor$1.run(OrderedExecutorFactory.java:105)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecut
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6054) Defaults for MetaspaceSize
by Ken Wills (JIRA)
Ken Wills created WFLY-6054:
-------------------------------
Summary: Defaults for MetaspaceSize
Key: WFLY-6054
URL: https://issues.jboss.org/browse/WFLY-6054
Project: WildFly
Issue Type: Enhancement
Reporter: Ken Wills
Assignee: Ken Wills
This is the ticket to perform the analogous changes made to core in WFCORE-1319 regarding the default MetaspaceSize.
Since PermGen is no longer used, and has been replaced by Metaspace, we probably need to alter the initial startup values. Current WF is using:
-XX:MaxMetaspaceSize=256m
After some testing with garbage collection logging on (-verbose:gc -Xloggc:hcgc.log -XX:+PrintGCDateStamps -XX:MetaspaceSize=XX), the GC logs were monitored for at least one occurrence of a full GC due to the Metadata threshold (for example: [Full GC (Metadata GC Threshold)] 39592K->20187K). Using this information, minimum MetaspaceSize values for various configurations were determined.
The numbers below are the values used for -XX:MetaspaceSize=XXM, followed by the number of full GCs triggered at that value, measured during boot of WF10-full:
Standalone:
{quote}
standalone.xml 52MB(1), 53MB(0)
standalone-full.xml 64MB(1), 65MB(0)
standalone-ha.xml 52MB(1), 54MB(0)
standalone-full-ha.xml 79MB(1), 80MB(0)
{quote}
For domain mode, the corresponding values were determined to be:
{quote}
Process Controller: 12MB(1), 13MB(0)
Host Controller: 39MB(1), 40MB(0)
{quote}
In domain mode, a very slight, non-scientifically measured boot time difference was observed (1769ms with the default MetaspaceSize vs 1694ms with MetaspaceSize=40m set for the host controller).
The approximate cost of increasing MetaspaceSize over the default is summarized below (using top to collect RSS after server boot):
JBoss AS 7.1.1: (default permgen (-XX:PermSize=256m -XX:MaxPermSize=256m), JDK 7)
||Configuration||RSS(KB)||
|standalone.xml| 182,652 |
|standalone-ha.xml | 211,672 |
|standalone-full.xml | 217,636 |
|standalone-full-ha.xml | 289,524 |
|domain:||
| Server-one: | 227,220 |
| Server-two: | 234,944 |
| PC: | 37,584 |
| HC: | 138,428 |
WildFly 10: (default MetaspaceSize == 21M)
||Configuration||RSS(KB)||
|standalone.xml | 293,576 |
|standalone-ha.xml | 303,344 |
|standalone-full.xml | 388,660 |
|standalone-full-ha.xml | 478,576 |
|domain: ||
| Server-one: | 379,076 |
| Server-two: | 377,516 |
| PC: | 55,000 |
| HC: | 272,120 |
WildFly 10: (MetaspaceSize == 64M)
||Configuration||RSS(KB)||
|standalone.xml | 290,236 |
|standalone-ha.xml | 306,032 |
|standalone-full.xml | 396,596 |
|standalone-full-ha.xml | 501,576 |
|domain:|
| Server-one: |
| Server-two: |
| PC: |
| HC: |
WildFly 10: (MetaspaceSize == 96M)
||Configuration||RSS(KB)||
|standalone.xml |317,996 |
|standalone-ha.xml | 306,516 |
|standalone-full.xml |416,008 |
|standalone-full-ha.xml |460,952 |
|domain: |
| Server-one: | 380,816 |
| Server-two: | 374,300 |
| PC: | 55,308 |
| HC: | 273,220 |
Additional measurements: using just WildFly Core, the following RSS sizes were measured for the indicated MetaspaceSize:
Wildfly-10 Core master
||MetaspaceSize || RSS(KB) ||
|21m | 117,760|
|64m | 120,772|
|96m | 131,104|
There is little boot time impact from the change:
Wildfly-10 Core master
||MetaspaceSize || Boot time (ms) ||
|21m | 2127 |
|64m | 2066 |
|96m | 2099 |
Based on the memory impact of defaulting to 96M (approximately 30MB initially over the default value of 21MB), it would seem to make sense to use this as the default: it maintains boot times without incurring a full GC due to the Metadata threshold, and provides enough initial Metaspace to both start the application server and deploy an application without incurring any performance penalty.
An additional note: host*.xml has JVM params set to MetaspaceSize=256m, which is probably too large an initial value.
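As a concrete illustration of the proposal (assuming the stock memory flags in bin/standalone.conf; the exact line may differ):
{noformat}
# current default
JAVA_OPTS="-Xms64m -Xmx512m -XX:MaxMetaspaceSize=256m"
# proposed: initial size large enough to boot without a Metadata-threshold full GC
JAVA_OPTS="-Xms64m -Xmx512m -XX:MetaspaceSize=96m -XX:MaxMetaspaceSize=256m"
{noformat}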
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6053) Suggested defaults for Metasize and Java 8
by Ken Wills (JIRA)
[ https://issues.jboss.org/browse/WFLY-6053?page=com.atlassian.jira.plugin.... ]
Ken Wills updated WFLY-6053:
----------------------------
Description:
This is the ticket to perform the analogous changes made to core in WFCORE-1319 regarding the default MetaspaceSize.
Since PermGen is no longer used, and has been replaced by Metaspace, we probably need to alter the initial startup values. Current WF is using:
-XX:MaxMetaspaceSize=256m
After some testing with garbage collection logging on (-verbose:gc -Xloggc:hcgc.log -XX:+PrintGCDateStamps -XX:MetaspaceSize=XX), the GC logs were monitored for at least one occurrence of a full GC due to the Metadata threshold (for example: [Full GC (Metadata GC Threshold)] 39592K->20187K). Using this information, minimum MetaspaceSize values for various configurations were determined.
The numbers below are the values used for -XX:MetaspaceSize=XXM, followed by the number of full GCs triggered at that value, measured during boot of WF10-full:
Standalone:
{quote}
standalone.xml 52MB(1), 53MB(0)
standalone-full.xml 64MB(1), 65MB(0)
standalone-ha.xml 52MB(1), 54MB(0)
standalone-full-ha.xml 79MB(1), 80MB(0)
{quote}
For domain mode, the corresponding values were determined to be:
{quote}
Process Controller: 12MB(1), 13MB(0)
Host Controller: 39MB(1), 40MB(0)
{quote}
In domain mode, a very slight, non-scientifically measured boot time difference was observed (1769ms with the default MetaspaceSize vs 1694ms with MetaspaceSize=40m set for the host controller).
The approximate cost of increasing MetaspaceSize over the default is summarized below (using top to collect RSS after server boot):
JBoss AS 7.1.1: (default permgen (-XX:PermSize=256m -XX:MaxPermSize=256m), JDK 7)
||Configuration||RSS(KB)||
|standalone.xml| 182,652 |
|standalone-ha.xml | 211,672 |
|standalone-full.xml | 217,636 |
|standalone-full-ha.xml | 289,524 |
|domain:||
| Server-one: | 227,220 |
| Server-two: | 234,944 |
| PC: | 37,584 |
| HC: | 138,428 |
WildFly 10: (default MetaspaceSize == 21M)
||Configuration||RSS(KB)||
|standalone.xml | 293,576 |
|standalone-ha.xml | 303,344 |
|standalone-full.xml | 388,660 |
|standalone-full-ha.xml | 478,576 |
|domain: ||
| Server-one: | 379,076 |
| Server-two: | 377,516 |
| PC: | 55,000 |
| HC: | 272,120 |
WildFly 10: (MetaspaceSize == 64M)
||Configuration||RSS(KB)||
|standalone.xml | 290,236 |
|standalone-ha.xml | 306,032 |
|standalone-full.xml | 396,596 |
|standalone-full-ha.xml | 501,576 |
|domain:|
| Server-one: |
| Server-two: |
| PC: |
| HC: |
WildFly 10: (MetaspaceSize == 96M)
||Configuration||RSS(KB)||
|standalone.xml |317,996 |
|standalone-ha.xml | 306,516 |
|standalone-full.xml |416,008 |
|standalone-full-ha.xml |460,952 |
|domain: |
| Server-one: | 380,816 |
| Server-two: | 374,300 |
| PC: | 55,308 |
| HC: | 273,220 |
Additional measurements: using just WildFly Core, the following RSS sizes were measured for the indicated MetaspaceSize:
Wildfly-10 Core master
||MetaspaceSize || RSS(KB) ||
|21m | 117,760|
|64m | 120,772|
|96m | 131,104|
There is little boot time impact from the change:
Wildfly-10 Core master
||MetaspaceSize || Boot time (ms) ||
|21m | 2127 |
|64m | 2066 |
|96m | 2099 |
Based on the memory impact of defaulting to 96M (approximately 30MB initially over the default value of 21MB), it would seem to make sense to use this as the default: it maintains boot times without incurring a full GC due to the Metadata threshold, and provides enough initial Metaspace to both start the application server and deploy an application without incurring any performance penalty.
An additional note: host*.xml has JVM params set to MetaspaceSize=256m, which is probably too large an initial value.
was:
This is the ticket to perform the analogous changes made to core in
Since PermGen is no longer used, and has been replaced by Metaspace, we probably need to alter the initial startup values. Current WF is using:
-XX:MaxMetaspaceSize=256m
After some testing with garbage collection logging on (-verbose:gc -Xloggc:hcgc.log -XX:+PrintGCDateStamps -XX:MetaspaceSize=XX), the GC logs were monitored for at least one occurrence of a full GC due to the Metadata threshold (for example: [Full GC (Metadata GC Threshold)] 39592K->20187K). Using this information, minimum MetaspaceSize values for various configurations were determined.
The numbers below are the values used for -XX:MetaspaceSize=XXM, followed by the number of full GCs triggered at that value, measured during boot of WF10-full:
Standalone:
{quote}
standalone.xml 52MB(1), 53MB(0)
standalone-full.xml 64MB(1), 65MB(0)
standalone-ha.xml 52MB(1), 54MB(0)
standalone-full-ha.xml 79MB(1), 80MB(0)
{quote}
For domain mode, the corresponding values were determined to be:
{quote}
Process Controller: 12MB(1), 13MB(0)
Host Controller: 39MB(1), 40MB(0)
{quote}
In domain mode, a very slight, non-scientifically measured boot time difference was observed (1769ms with the default MetaspaceSize vs 1694ms with MetaspaceSize=40m set for the host controller).
The approximate cost of increasing MetaspaceSize over the default is summarized below (using top to collect RSS after server boot):
JBoss AS 7.1.1: (default permgen (-XX:PermSize=256m -XX:MaxPermSize=256m), JDK 7)
||Configuration||RSS(KB)||
|standalone.xml| 182,652 |
|standalone-ha.xml | 211,672 |
|standalone-full.xml | 217,636 |
|standalone-full-ha.xml | 289,524 |
|domain:||
| Server-one: | 227,220 |
| Server-two: | 234,944 |
| PC: | 37,584 |
| HC: | 138,428 |
WildFly 10: (default MetaspaceSize == 21M)
||Configuration||RSS(KB)||
|standalone.xml | 293,576 |
|standalone-ha.xml | 303,344 |
|standalone-full.xml | 388,660 |
|standalone-full-ha.xml | 478,576 |
|domain: ||
| Server-one: | 379,076 |
| Server-two: | 377,516 |
| PC: | 55,000 |
| HC: | 272,120 |
WildFly 10: (MetaspaceSize == 64M)
||Configuration||RSS(KB)||
|standalone.xml | 290,236 |
|standalone-ha.xml | 306,032 |
|standalone-full.xml | 396,596 |
|standalone-full-ha.xml | 501,576 |
|domain:|
| Server-one: |
| Server-two: |
| PC: |
| HC: |
WildFly 10: (MetaspaceSize == 96M)
||Configuration||RSS(KB)||
|standalone.xml |317,996 |
|standalone-ha.xml | 306,516 |
|standalone-full.xml |416,008 |
|standalone-full-ha.xml |460,952 |
|domain: |
| Server-one: | 380,816 |
| Server-two: | 374,300 |
| PC: | 55,308 |
| HC: | 273,220 |
Additional measurements: using just WildFly Core, the following RSS sizes were measured for the indicated MetaspaceSize:
Wildfly-10 Core master
||MetaspaceSize || RSS(KB) ||
|21m | 117,760|
|64m | 120,772|
|96m | 131,104|
There is little boot time impact from the change:
Wildfly-10 Core master
||MetaspaceSize || Boot time (ms) ||
|21m | 2127 |
|64m | 2066 |
|96m | 2099 |
Based on the memory impact of defaulting to 96M (approximately 30MB initially over the default value of 21MB), it would seem to make sense to use this as the default: it maintains boot times without incurring a full GC due to the Metadata threshold, and provides enough initial Metaspace to both start the application server and deploy an application without incurring any performance penalty.
An additional note: host*.xml has JVM params set to MetaspaceSize=256m, which is probably too large an initial value.
> Suggested defaults for Metasize and Java 8
> ------------------------------------------
>
> Key: WFLY-6053
> URL: https://issues.jboss.org/browse/WFLY-6053
> Project: WildFly
> Issue Type: Enhancement
> Reporter: Ken Wills
> Assignee: Ken Wills
>
> This is the ticket to perform the analogous changes made to core in WFCORE-1319 regarding the default MetaspaceSize.
> Since PermGen is no longer used, and has been replaced by Metaspace, we probably need to alter the initial startup values. Current WF is using:
> -XX:MaxMetaspaceSize=256m
> After some testing with garbage collection logging on (-verbose:gc -Xloggc:hcgc.log -XX:+PrintGCDateStamps -XX:MetaspaceSize=XX), the GC logs were monitored for at least one occurrence of a full GC due to the Metadata threshold (for example: [Full GC (Metadata GC Threshold)] 39592K->20187K). Using this information, minimum MetaspaceSize values for various configurations were determined.
> The numbers below are the values used for -XX:MetaspaceSize=XXM, followed by the number of full GCs triggered at that value, measured during boot of WF10-full:
> Standalone:
> {quote}
> standalone.xml 52MB(1), 53MB(0)
> standalone-full.xml 64MB(1), 65MB(0)
> standalone-ha.xml 52MB(1), 54MB(0)
> standalone-full-ha.xml 79MB(1), 80MB(0)
> {quote}
> For domain mode, the corresponding values were determined to be:
> {quote}
> Process Controller: 12MB(1), 13MB(0)
> Host Controller: 39MB(1), 40MB(0)
> {quote}
> In domain mode, a very slight, non-scientifically measured boot time difference was observed (1769ms with the default MetaspaceSize vs 1694ms with MetaspaceSize=40m set for the host controller).
> The approximate cost of increasing MetaspaceSize over the default is summarized below (using top to collect RSS after server boot):
> JBoss AS 7.1.1: (default permgen (-XX:PermSize=256m -XX:MaxPermSize=256m), JDK 7)
> ||Configuration||RSS(KB)||
> |standalone.xml| 182,652 |
> |standalone-ha.xml | 211,672 |
> |standalone-full.xml | 217,636 |
> |standalone-full-ha.xml | 289,524 |
> |domain:||
> | Server-one: | 227,220 |
> | Server-two: | 234,944 |
> | PC: | 37,584 |
> | HC: | 138,428 |
> WildFly 10: (default MetaspaceSize == 21M)
> ||Configuration||RSS(KB)||
> |standalone.xml | 293,576 |
> |standalone-ha.xml | 303,344 |
> |standalone-full.xml | 388,660 |
> |standalone-full-ha.xml | 478,576 |
> |domain: ||
> | Server-one: | 379,076 |
> | Server-two: | 377,516 |
> | PC: | 55,000 |
> | HC: | 272,120 |
> WildFly 10: (MetaspaceSize == 64M)
> ||Configuration||RSS(KB)||
> |standalone.xml | 290,236 |
> |standalone-ha.xml | 306,032 |
> |standalone-full.xml | 396,596 |
> |standalone-full-ha.xml | 501,576 |
> |domain:|
> | Server-one: |
> | Server-two: |
> | PC: |
> | HC: |
> WildFly 10: (MetaspaceSize == 96M)
> ||Configuration||RSS(KB)||
> |standalone.xml |317,996 |
> |standalone-ha.xml | 306,516 |
> |standalone-full.xml |416,008 |
> |standalone-full-ha.xml |460,952 |
> |domain: |
> | Server-one: | 380,816 |
> | Server-two: | 374,300 |
> | PC: | 55,308 |
> | HC: | 273,220 |
> Additional measurements: using just WildFly Core, the following RSS sizes were measured for the indicated MetaspaceSize:
> Wildfly-10 Core master
> ||MetaspaceSize || RSS(KB) ||
> |21m | 117,760|
> |64m | 120,772|
> |96m | 131,104|
> There is little boot time impact from the change:
> Wildfly-10 Core master
> ||MetaspaceSize || Boot time (ms) ||
> |21m | 2127 |
> |64m | 2066 |
> |96m | 2099 |
> Based on the memory impact of defaulting to 96M (approximately 30MB initially over the default value of 21MB), it would seem to make sense to use this as the default: it maintains boot times without incurring a full GC due to the Metadata threshold, and provides enough initial Metaspace to both start the application server and deploy an application without incurring any performance penalty.
> An additional note: host*.xml has JVM params set to MetaspaceSize=256m, which is probably too large an initial value.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6053) Suggested defaults for Metasize and Java 8
by Ken Wills (JIRA)
[ https://issues.jboss.org/browse/WFLY-6053?page=com.atlassian.jira.plugin.... ]
Ken Wills updated WFLY-6053:
----------------------------
Git Pull Request: (was: https://github.com/wildfly/wildfly-core/pull/1377)
> Suggested defaults for Metasize and Java 8
> ------------------------------------------
>
> Key: WFLY-6053
> URL: https://issues.jboss.org/browse/WFLY-6053
> Project: WildFly
> Issue Type: Enhancement
> Reporter: Ken Wills
> Assignee: Ken Wills
>
> This is the ticket to perform the analogous changes made to core in WFCORE-1319 regarding the default MetaspaceSize.
> Since PermGen is no longer used, and has been replaced by Metaspace, we probably need to alter the initial startup values. Current WF is using:
> -XX:MaxMetaspaceSize=256m
> After some testing with garbage collection logging on (-verbose:gc -Xloggc:hcgc.log -XX:+PrintGCDateStamps -XX:MetaspaceSize=XX), the GC logs were monitored for at least one occurrence of a full GC due to the Metadata threshold (for example: [Full GC (Metadata GC Threshold)] 39592K->20187K). Using this information, minimum MetaspaceSize values for various configurations were determined.
> The numbers below are the values used for -XX:MetaspaceSize=XXM, followed by the number of full GCs triggered at that value, measured during boot of WF10-full:
> Standalone:
> {quote}
> standalone.xml 52MB(1), 53MB(0)
> standalone-full.xml 64MB(1), 65MB(0)
> standalone-ha.xml 52MB(1), 54MB(0)
> standalone-full-ha.xml 79MB(1), 80MB(0)
> {quote}
> For domain mode, the corresponding values were determined to be:
> {quote}
> Process Controller: 12MB(1), 13MB(0)
> Host Controller: 39MB(1), 40MB(0)
> {quote}
> In domain mode, a very slight, non-scientifically measured boot time difference was observed (1769ms with the default MetaspaceSize vs 1694ms with MetaspaceSize=40m set for the host controller).
> The approximate cost of increasing MetaspaceSize over the default is summarized below (using top to collect RSS after server boot):
> JBoss AS 7.1.1: (default permgen (-XX:PermSize=256m -XX:MaxPermSize=256m), JDK 7)
> ||Configuration||RSS(KB)||
> |standalone.xml| 182,652 |
> |standalone-ha.xml | 211,672 |
> |standalone-full.xml | 217,636 |
> |standalone-full-ha.xml | 289,524 |
> |domain:||
> | Server-one: | 227,220 |
> | Server-two: | 234,944 |
> | PC: | 37,584 |
> | HC: | 138,428 |
> WildFly 10: (default MetaspaceSize == 21M)
> ||Configuration||RSS(KB)||
> |standalone.xml | 293,576 |
> |standalone-ha.xml | 303,344 |
> |standalone-full.xml | 388,660 |
> |standalone-full-ha.xml | 478,576 |
> |domain: ||
> | Server-one: | 379,076 |
> | Server-two: | 377,516 |
> | PC: | 55,000 |
> | HC: | 272,120 |
> WildFly 10: (MetaspaceSize == 64M)
> ||Configuration||RSS(KB)||
> |standalone.xml | 290,236 |
> |standalone-ha.xml | 306,032 |
> |standalone-full.xml | 396,596 |
> |standalone-full-ha.xml | 501,576 |
> |domain:|
> | Server-one: |
> | Server-two: |
> | PC: |
> | HC: |
> WildFly 10: (MetaspaceSize == 96M)
> ||Configuration||RSS(KB)||
> |standalone.xml |317,996 |
> |standalone-ha.xml | 306,516 |
> |standalone-full.xml |416,008 |
> |standalone-full-ha.xml |460,952 |
> |domain: |
> | Server-one: | 380,816 |
> | Server-two: | 374,300 |
> | PC: | 55,308 |
> | HC: | 273,220 |
> Additional measurements: using just WildFly Core, the following RSS sizes were measured for the indicated MetaspaceSize:
> Wildfly-10 Core master
> ||MetaspaceSize || RSS(KB) ||
> |21m | 117,760|
> |64m | 120,772|
> |96m | 131,104|
> There is little boot time impact from the change:
> Wildfly-10 Core master
> ||MetaspaceSize || Boot time (ms) ||
> |21m | 2127 |
> |64m | 2066 |
> |96m | 2099 |
> Based on the memory impact of defaulting to 96M (approximately 30MB initially over the default value of 21MB), it would seem to make sense to use this as the default: it maintains boot times without incurring a full GC due to the Metadata threshold, and provides enough initial Metaspace to both start the application server and deploy an application without incurring any performance penalty.
> An additional note: host*.xml has JVM params set to MetaspaceSize=256m, which is probably too large an initial value.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6053) Suggested defaults for Metasize and Java 8
by Ken Wills (JIRA)
[ https://issues.jboss.org/browse/WFLY-6053?page=com.atlassian.jira.plugin.... ]
Ken Wills moved WFCORE-1325 to WFLY-6053:
-----------------------------------------
Project: WildFly (was: WildFly Core)
Key: WFLY-6053 (was: WFCORE-1325)
> Suggested defaults for Metasize and Java 8
> ------------------------------------------
>
> Key: WFLY-6053
> URL: https://issues.jboss.org/browse/WFLY-6053
> Project: WildFly
> Issue Type: Enhancement
> Reporter: Ken Wills
> Assignee: Ken Wills
>
> Since PermGen is no longer used, and has been replaced by Metaspace, we probably need to alter the initial startup values. Current WF is using:
> -XX:MaxMetaspaceSize=256m
> After some testing with garbage collection logging on (-verbose:gc -Xloggc:hcgc.log -XX:+PrintGCDateStamps -XX:MetaspaceSize=XX), the GC logs were monitored for at least one occurrence of a full GC due to the Metadata threshold (for example: [Full GC (Metadata GC Threshold)] 39592K->20187K). Using this information, minimum MetaspaceSize values for various configurations were determined.
> The numbers below are the values used for -XX:MetaspaceSize=XXM, followed by the number of full GCs triggered at that value, measured during boot of WF10-full:
> Standalone:
> {quote}
> standalone.xml 52MB(1), 53MB(0)
> standalone-full.xml 64MB(1), 65MB(0)
> standalone-ha.xml 52MB(1), 54MB(0)
> standalone-full-ha.xml 79MB(1), 80MB(0)
> {quote}
> For domain mode, the corresponding values were determined to be:
> {quote}
> Process Controller: 12MB(1), 13MB(0)
> Host Controller: 39MB(1), 40MB(0)
> {quote}
> In domain mode, a very slight, non-scientifically measured boot time difference was observed (1769ms with the default MetaspaceSize vs 1694ms with MetaspaceSize=40m set for the host controller).
> The approximate cost of increasing MetaspaceSize over the default is summarized below (using top to collect RSS after server boot):
> JBoss AS 7.1.1: (default permgen (-XX:PermSize=256m -XX:MaxPermSize=256m), JDK 7)
> ||Configuration||RSS(KB)||
> |standalone.xml| 182,652 |
> |standalone-ha.xml | 211,672 |
> |standalone-full.xml | 217,636 |
> |standalone-full-ha.xml | 289,524 |
> |domain:||
> | Server-one: | 227,220 |
> | Server-two: | 234,944 |
> | PC: | 37,584 |
> | HC: | 138,428 |
> WildFly 10: (default MetaspaceSize == 21M)
> ||Configuration||RSS(KB)||
> |standalone.xml | 293,576 |
> |standalone-ha.xml | 303,344 |
> |standalone-full.xml | 388,660 |
> |standalone-full-ha.xml | 478,576 |
> |domain: ||
> | Server-one: | 379,076 |
> | Server-two: | 377,516 |
> | PC: | 55,000 |
> | HC: | 272,120 |
> WildFly 10: (MetaspaceSize == 64M)
> ||Configuration||RSS(KB)||
> |standalone.xml | 290,236 |
> |standalone-ha.xml | 306,032 |
> |standalone-full.xml | 396,596 |
> |standalone-full-ha.xml | 501,576 |
> |domain:|
> | Server-one: |
> | Server-two: |
> | PC: |
> | HC: |
> WildFly 10: (MetaspaceSize == 96M)
> ||Configuration||RSS(KB)||
> |standalone.xml |317,996 |
> |standalone-ha.xml | 306,516 |
> |standalone-full.xml |416,008 |
> |standalone-full-ha.xml |460,952 |
> |domain: |
> | Server-one: | 380,816 |
> | Server-two: | 374,300 |
> | PC: | 55,308 |
> | HC: | 273,220 |
> Additional measurements: using just WildFly Core, the following RSS sizes were measured for the indicated MetaspaceSize:
> Wildfly-10 Core master
> ||MetaspaceSize || RSS(KB) ||
> |21m | 117,760|
> |64m | 120,772|
> |96m | 131,104|
> There is little boot time impact from the change:
> Wildfly-10 Core master
> ||MetaspaceSize || Boot time (ms) ||
> |21m | 2127 |
> |64m | 2066 |
> |96m | 2099 |
> Based on the memory impact of defaulting to 96M (approximately 30MB initially over the default value of 21MB), it would seem to make sense to use this as the default: it maintains boot times without incurring a full GC due to the Metadata threshold, and provides enough initial Metaspace to both start the application server and deploy an application without incurring any performance penalty.
> An additional note: host*.xml has JVM params set to MetaspaceSize=256m, which is probably too large an initial value.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)