[JBoss JIRA] (WFCORE-433) git backend for loading/storing the configuration XML for wildfly
by James Strachan (JIRA)
[ https://issues.jboss.org/browse/WFCORE-433?page=com.atlassian.jira.plugin... ]
James Strachan commented on WFCORE-433:
---------------------------------------
[~brian.stansberry] agreed with all that!
I think having a standard kubernetes based rolling upgrade approach has lots of value as a single canonical way to do any kind of rolling upgrade; whether it's a configuration change, a patch to the operating system, JDK or wildfly container itself, or a user's new deployment being rolled out, etc. Then operations folks in testing / production have one way to do rolling updates of all software, and we can all reuse really nice rolling upgrade visualisations etc.
However in development, all the things wildfly has done over the years to do incremental updates (e.g. redeploying WARs on the fly, reloading classes, reconfiguring on the fly) are really useful as developers build and test their code. i.e. developers want the fastest possible feedback on changes when they are writing code - in the "pre commit" flow; though once a commit is done (or a config value has changed), in testing & production it's kinda better to use immutable images that start up clean - as it just avoids all kinds of complex bugs due to restarting things & possibly having concurrent / reload edge cases that introduce bugs or resource leakage or whatever. i.e. it's less risky and more "ops-ish" to just start up clean containers in production. It's more "developer-ish" to mutate running containers. Both have value at different stages in the overall software production line.
In the Karaf world, we now have a build system that generates an immutable application server image, so that all the image can do is start up what it's statically configured to start - you can't even try to deploy a new version or change a configuration value at runtime. That kind of behaviour could be useful for testing/production environments - while leaving development containers more open to incremental change, so that developers don't have to keep waiting for containers to restart to test out their changes etc.
A few other thoughts on your main numbered points:
1) you could do the git stuff either inside wildfly directly, or you can let kubernetes do the git for you with the gitRepo volume, so that the wildfly container then just loads config files from disk as usual. ConfigMap can work the same way too; you can mount the files from ConfigMap to the canonical place on disk where the wildfly docker image expects to load them. That way you have one wildfly image, and you can use the kubernetes pod template manifest to configure whether it's using git or ConfigMap. The docker image is then unaware of the differences. i.e. it's all just enabled by the kubernetes manifest (a blob of JSON / YAML you generate as part of your build etc)
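To make the "one image, two config sources" idea concrete, here's a minimal sketch using the fabric8 kubernetes-client model builders; the repo URL, ConfigMap name, volume name and mount path are all illustrative assumptions, not anything WF ships today:
{code}
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodBuilder;
import io.fabric8.kubernetes.api.model.Volume;
import io.fabric8.kubernetes.api.model.VolumeBuilder;

// Sketch: one wildfly image; only the volume source in the manifest differs.
public class WildFlyPodSketch {

    static Volume gitConfigVolume() {
        return new VolumeBuilder()
            .withName("wf-config")
            .withNewGitRepo() // kubernetes clones the repo before the container starts
                .withRepository("https://example.com/acme/wildfly-config.git")
                .withRevision("master")
            .endGitRepo()
            .build();
    }

    static Volume configMapConfigVolume() {
        return new VolumeBuilder()
            .withName("wf-config")
            .withNewConfigMap().withName("wildfly-config").endConfigMap()
            .build();
    }

    static Pod wildflyPod(Volume configSource) {
        return new PodBuilder()
            .withNewMetadata().withName("wildfly").endMetadata()
            .withNewSpec()
                .addToVolumes(configSource)
                .addNewContainer()
                    .withName("wildfly")
                    .withImage("jboss/wildfly")
                    // mount over the directory the image already reads its XML from
                    .addNewVolumeMount()
                        .withName("wf-config")
                        .withMountPath("/opt/jboss/wildfly/standalone/configuration")
                    .endVolumeMount()
                .endContainer()
            .endSpec()
            .build();
    }
}
{code}
Either way the container sees the same files at the same path, so the image never changes - only the manifest does.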
2) yes, the Admin Console needs a REST/DMR back end that it posts configuration changes to. That service ideally would not be the same container as the cluster of wildfly pods you're configuring. There could be a git based back end that does a git commit/push per change, or a ConfigMap back end that does the change. (Or the WildFly Admin Console could directly post to the kubernetes ConfigMap, but that might be too big a change for the Admin Console - so it might be simpler to just reuse whatever REST / remote API the Admin Console uses right now to save changes, and have 2 implementations of the back end: one for git, one for ConfigMap.) Using a separate WF container image for this sounds totally fine to me. It can be anything really - whatever you folks think is best!
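For the git back end, the commit/push-per-change step is straightforward with JGit; a minimal sketch (the path and commit message are placeholders, auth and conflict handling - e.g. last write wins - are elided, and it assumes the configuration directory is already a clone):
{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.api.errors.GitAPIException;

// Sketch of a git-backed "save configuration" operation: pull, write, commit, push.
public class GitConfigStore {
    public void save(String xml) throws IOException, GitAPIException {
        Path configDir = Paths.get("/opt/jboss/wildfly/standalone/configuration");
        try (Git git = Git.open(configDir.toFile())) {
            git.pull().call();                                  // pick up concurrent edits
            Files.write(configDir.resolve("standalone.xml"),
                        xml.getBytes(StandardCharsets.UTF_8));  // write the new config
            git.add().addFilepattern("standalone.xml").call();
            git.commit().setMessage("config change via admin console").call();
            git.push().call();                                  // publish to the shared repo
        }
    }
}
{code}
A ConfigMap back end would do the analogous update through the kubernetes REST API instead.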
3) I think there needs to be a WildFly specific management console to let folks configure things in an environment (via git / ConfigMap). Running WF on Atomic/OpenShift should generally try to look and feel like using a traditional DC - i.e. folks should have at least the same capabilities and power.
I figured once the WF Admin Console could work nicely with WF inside kubernetes, we'd then just wire the UI into the OpenShift console somehow so it feels like one pane of glass, using links and so forth to join the 2 things. We need to do similar things with the iPaaS, where we have iPaaS specific screens linked into the OpenShift console. They are actually separate web applications developed by different teams on different release schedules; we just need to style them and link them so they feel like parts of one universal console. e.g. we ship a jolokia / hawtio UI we link to on a per pod basis in OpenShift right now which is really a totally separate UI - we just link them.
However I'd expect the WF team to own their admin console that changes the WF configuration & provides any other WF specific visualisation, metrics, reporting or whatever. I don't know the various console options in WF well enough to make a call on whether a new console should be written or if we can reuse/refactor the current Admin Console. Be that as it may, I think folks will want a nice WF specific console when using Atomic/OpenShift as the runtime environment - and if the admin console can work with git / ConfigMap back ends, then that's a pretty compelling user experience IMHO.
4) Totally agreed. Both the iPaaS and BRMS folks have very similar requirements; particularly that when things change in git, a rolling upgrade process kicks in (using a slightly different Replication Controller manifest that uses the latest git commit revision) etc. So I'd hope that's all generic functionality that the WF team doesn't have to build really.
In terms of the DC comparison, you could just say that using Kubernetes along with gitRepo volumes (and the revision in the ReplicationController's manifest) or ConfigMap is a kind of DC implementation. You could still use the DC, conceptually and maybe some code too, if you want to do the "rolling upgrade and fall back if there is a failure" type logic. So I don't think you need to throw away the DC idea; we could just do with an implementation that uses immutable containers and volumes (git or ConfigMap) rather than swizzling runtime containers. So I see this as more of an implementation detail of how the DC should work for testing/production environments really. But it's your call whether presenting this new immutable containers + git/ConfigMap approach as a DC is too strange for WF users ;)
Though I guess, rather like point 4) you raised, the DC concept (when git / ConfigMap changes, do a rolling upgrade; if things go bad, roll back, maybe revert the change and raise some user event to warn the user the change was bad) is maybe a generic thing, as it hasn't really got much in it that's WF centric. e.g. that kind of logic could be reused by the iPaaS and BRMS too.
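To show how little of that watcher would be WF-specific, a minimal polling sketch (JGit again; the 30 second interval and the rollingUpgrade hook are arbitrary placeholders):
{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.lib.ObjectId;

// Sketch: poll a config repo; when HEAD moves, kick off a rolling upgrade
// (e.g. stamp the new revision into the ReplicationController manifest).
public class ConfigChangeWatcher {
    private ObjectId lastSeen;

    public void watch(Git git, Runnable rollingUpgrade) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(() -> {
            try {
                git.pull().call();
                ObjectId head = git.getRepository().resolve("HEAD");
                if (lastSeen != null && !lastSeen.equals(head)) {
                    rollingUpgrade.run(); // roll pods to the new revision; roll back on failure
                }
                lastSeen = head;
            } catch (Exception e) {
                e.printStackTrace();      // a failed poll shouldn't kill the watcher
            }
        }, 0, 30, TimeUnit.SECONDS);
    }
}
{code}
Nothing in there knows about WF, so the iPaaS and BRMS could share it as-is.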
> git backend for loading/storing the configuration XML for wildfly
> -----------------------------------------------------------------
>
> Key: WFCORE-433
> URL: https://issues.jboss.org/browse/WFCORE-433
> Project: WildFly Core
> Issue Type: Feature Request
> Components: Domain Management
> Reporter: James Strachan
> Assignee: Jason Greene
>
> when working with wildfly in a cloud/paas environment (like openshift, fabric8, docker, heroku et al) it'd be great to have a git repository for the configuration folder so that writes work something like:
> * git pull
> * write the, say, standalone.xml file
> * git commit -a -m "some comment"
> * git push
> (with a handler to deal with conflicts; such as last write wins).
> Then an optional periodic 'git pull' and reload configuration if there is a change.
> This would then mean that folks could run a number of wildfly containers on docker / openshift / fabric8 and have a shared git repository (e.g. the git repo in openshift or fabric8) to configure a group of wildfly containers. Folks could then reuse the wildfly management console within cloud environments (as the management console would, under the covers, be loading/saving from/to git)
> Folks could then benefit from git tooling when dealing with versioning and audit logs of changes to the XML, along with getting the benefit of branching and tagging.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (DROOLS-1068) java.lang.UnsupportedOperationException thrown by org.drools.core.reteoo.BaseLeftTuple.getBlocker(BaseLeftTuple.java:535)
by Bill Tuminaro (JIRA)
[ https://issues.jboss.org/browse/DROOLS-1068?page=com.atlassian.jira.plugi... ]
Bill Tuminaro updated DROOLS-1068:
----------------------------------
Attachment: UnsupportedOperationTest.java
Person.java
UnsupportedOperationTest.drl
Mario,
I created a much simpler version of the reproducer for this issue that uses the existing drools package structure and extends a class in your model.
I need to sort out getting the CLA signed, so I can put the files in the git repository.
In the meantime, I have attached the files to this comment.
-BillT
> java.lang.UnsupportedOperationException thrown by org.drools.core.reteoo.BaseLeftTuple.getBlocker(BaseLeftTuple.java:535)
> -------------------------------------------------------------------------------------------------------------------------
>
> Key: DROOLS-1068
> URL: https://issues.jboss.org/browse/DROOLS-1068
> Project: Drools
> Issue Type: Bug
> Components: core engine
> Affects Versions: 6.3.0.Final
> Reporter: Bill Tuminaro
> Assignee: Mario Fusco
> Attachments: droolsReproducer.jar, droolsReproducer.jar, Person.java, UnsupportedOperationTest.drl, UnsupportedOperationTest.java
>
>
> We have encountered this issue when we insert a fact matching a certain rule, delete the fact, and then call fireAllRules().
> A reproducer is attached.
> -BillT
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-5990) jboss-web replication-granularity ATTRIBUTE causes IllegalStateException when getting attribute on the session
by Mathieu Lachance (JIRA)
[ https://issues.jboss.org/browse/WFLY-5990?page=com.atlassian.jira.plugin.... ]
Mathieu Lachance commented on WFLY-5990:
----------------------------------------
I'll verify as soon as possible, though I'm really busy these days, so feel free to close the issue if you think this is really resolved.
Thanks again.
> jboss-web replication-granularity ATTRIBUTE causes IllegalStateException when getting attribute on the session
> --------------------------------------------------------------------------------------------------------------
>
> Key: WFLY-5990
> URL: https://issues.jboss.org/browse/WFLY-5990
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 10.0.0.CR5
> Environment: JDK 8u60
> Reporter: Mathieu Lachance
> Assignee: Paul Ferraro
>
> when using the replication granularity ATTRIBUTE as follows:
> {code}
> <?xml version="1.0" encoding="ISO-8859-1"?>
> <jboss-web xmlns="http://www.jboss.com/xml/ns/javaee"
> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
> xsi:schemaLocation="http://www.jboss.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-web_8_0.xsd"
> version="8.0">
> <replication-config>
> <replication-granularity>ATTRIBUTE</replication-granularity>
> </replication-config>
> </jboss-web>
> {code}
> using the following web cache container configuration (standalone-ha.xml of Wildfly 10.0.0.CR5):
> {code}
> <cache-container name="web" module="org.wildfly.clustering.web.infinispan" default-cache="dist">
> <transport lock-timeout="60000"/>
> <distributed-cache name="dist" mode="ASYNC" l1-lifespan="0" owners="2">
> <locking isolation="REPEATABLE_READ"/>
> <transaction mode="BATCH"/>
> </distributed-cache>
> </cache-container>
> {code}
> when getting an attribute from the session (i.e. httpServletRequest.getSession().getAttribute(...)), this will eventually result in:
> {code}
> ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (default task-22) ISPN000136: Error executing command GetKeyValueCommand, writing keys []: java.lang.IllegalStateException: Transaction DummyTransaction{xid=DummyXid{id=38}, status=3} is not in a valid state to be invoking cache operations on.
> at org.infinispan.interceptors.TxInterceptor.enlist(TxInterceptor.java:394)
> at org.infinispan.interceptors.TxInterceptor.enlistIfNeeded(TxInterceptor.java:350)
> at org.infinispan.interceptors.TxInterceptor.enlistReadAndInvokeNext(TxInterceptor.java:344)
> at org.infinispan.interceptors.TxInterceptor.visitGetKeyValueCommand(TxInterceptor.java:330)
> at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:40)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
> at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:113)
> at org.infinispan.commands.AbstractVisitor.visitGetKeyValueCommand(AbstractVisitor.java:85)
> at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:40)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
> at org.infinispan.statetransfer.StateTransferInterceptor.handleTopologyAffectedCommand(StateTransferInterceptor.java:405)
> at org.infinispan.statetransfer.StateTransferInterceptor.handleDefault(StateTransferInterceptor.java:390)
> at org.infinispan.commands.AbstractVisitor.visitGetKeyValueCommand(AbstractVisitor.java:85)
> at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:40)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
> at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:107)
> at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:76)
> at org.infinispan.commands.AbstractVisitor.visitGetKeyValueCommand(AbstractVisitor.java:85)
> at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:40)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
> at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:113)
> at org.infinispan.commands.AbstractVisitor.visitGetKeyValueCommand(AbstractVisitor.java:85)
> at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:40)
> at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:336)
> at org.infinispan.cache.impl.CacheImpl.get(CacheImpl.java:411)
> at org.infinispan.cache.impl.CacheImpl.get(CacheImpl.java:403)
> at org.infinispan.cache.impl.AbstractDelegatingCache.get(AbstractDelegatingCache.java:286)
> at org.wildfly.clustering.web.infinispan.session.fine.FineSessionAttributes.getAttribute(FineSessionAttributes.java:71)
> at org.wildfly.clustering.web.undertow.session.DistributableSession.getAttribute(DistributableSession.java:133)
> at io.undertow.servlet.spec.HttpSessionImpl.getAttribute(HttpSessionImpl.java:123)
> at XXX.XXX.core.filter.DevelopmentFilter.testSerialization(DevelopmentFilter.java:155)
> at XXX.XXX.core.filter.DevelopmentFilter.doDevelopmentFilter(DevelopmentFilter.java:133)
> at XXX.XXX.core.filter.DevelopmentFilter.doFilter(DevelopmentFilter.java:112)
> at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
> at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
> at XXX.XXX.core.filter.IERenderingFilter.doFilter(IERenderingFilter.java:80)
> at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
> at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
> at XXX.XXX.core.filter.KeepAliveFilter.doFilter(KeepAliveFilter.java:45)
> at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
> at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
> at XXX.XXX.core.controller.HttpContextFilter.doFilter(HttpContextFilter.java:39)
> at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
> at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
> at XXX.XXX.common.workers.HalogenHttpCachingFilter.doFilter(HalogenHttpCachingFilter.java:94)
> at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
> at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:262)
> at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
> at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
> at XXX.XXX.core.filter.StaticContentVersionFilter.doFilter(StaticContentVersionFilter.java:73)
> at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
> at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:262)
> at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
> at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
> at XXX.XXX.core.filter.LogFilter.doFilterInternal(LogFilter.java:182)
> at XXX.XXX.core.filter.LogFilter.doSuppressExceptionsFilterInternal(LogFilter.java:201)
> at XXX.XXX.core.filter.LogFilter.doFilter(LogFilter.java:160)
> at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
> at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
> at XXX.XXX.common.language.LanguageFilter.invokeFilterChain(LanguageFilter.java:105)
> at XXX.XXX.common.language.LanguageFilter.doFilter(LanguageFilter.java:100)
> at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
> at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
> at XXX.XXX.datasource.legacy.BindThreadLocalConnectionFilter$1.call(BindThreadLocalConnectionFilter.java:44)
> at XXX.XXX.datasource.legacy.BindThreadLocalConnectionFilter$1.call(BindThreadLocalConnectionFilter.java:41)
> at XXX.XXX.datasource.legacy.ThreadLocalConnectionAwareCallable.call(ThreadLocalConnectionAwareCallable.java:86)
> at XXX.XXX.datasource.legacy.ThreadLocalConnectionAwareCallable.withConnection(ThreadLocalConnectionAwareCallable.java:59)
> at XXX.XXX.datasource.legacy.BindThreadLocalConnectionFilter.doFilter(BindThreadLocalConnectionFilter.java:41)
> at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
> at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
> at XXX.XXX.core.exception.ExceptionFilter.doSuppressExceptionsFilterInternal(ExceptionFilter.java:153)
> at XXX.XXX.core.exception.ExceptionFilter.doFilter(ExceptionFilter.java:134)
> at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
> at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
> at XXX.XXX.web.request.RequestIdentifierFilter.doFilter(RequestIdentifierFilter.java:44)
> at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
> at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
> at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:121)
> at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
> at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
> at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
> at org.tuckey.web.filters.urlrewrite.RuleChain.handleRewrite(RuleChain.java:176)
> at org.tuckey.web.filters.urlrewrite.RuleChain.doRules(RuleChain.java:145)
> at org.tuckey.web.filters.urlrewrite.UrlRewriter.processRequest(UrlRewriter.java:92)
> at org.tuckey.web.filters.urlrewrite.UrlRewriteFilter.doFilter(UrlRewriteFilter.java:394)
> at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
> at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
> at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
> at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
> at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
> at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
> at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
> at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
> at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
> at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
> at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
> at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
> at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
> at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
> at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
> at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:284)
> at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:263)
> at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
> at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:174)
> at io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
> at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:793)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Does this mean that the replication granularity 'ATTRIBUTE' is no longer supported in Wildfly 10?
> If we swap the cache from "distributed" to "replicated" mode, we instead get an earlier NullPointerException:
> {code}
> ERROR [io.undertow.servlet.request] (default task-2) UT015012: Failed to generate error page /XXXX/error.jsp for original exception: null. Generating error page resulted in a 500.: java.lang.RuntimeException: org.apache.jasper.JasperException: javax.servlet.ServletException: java.lang.NullPointerException
> at io.undertow.servlet.spec.HttpServletResponseImpl.doErrorDispatch(HttpServletResponseImpl.java:156)
> at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:287)
> at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:263)
> at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
> at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:174)
> at io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
> at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:793)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.jasper.JasperException: javax.servlet.ServletException: java.lang.NullPointerException
> at org.apache.jasper.servlet.JspServletWrapper.handleJspException(JspServletWrapper.java:581)
> at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:456)
> at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:402)
> at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:346)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85)
> at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:81)
> at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
> at io.undertow.jsp.JspFileHandler.handleRequest(JspFileHandler.java:32)
> at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
> at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
> at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:265)
> at io.undertow.servlet.handlers.ServletInitialHandler.dispatchToPath(ServletInitialHandler.java:200)
> at io.undertow.servlet.spec.RequestDispatcherImpl.error(RequestDispatcherImpl.java:453)
> at io.undertow.servlet.spec.RequestDispatcherImpl.error(RequestDispatcherImpl.java:378)
> at io.undertow.servlet.spec.HttpServletResponseImpl.doErrorDispatch(HttpServletResponseImpl.java:154)
> ... 9 more
> Caused by: javax.servlet.ServletException: java.lang.NullPointerException
> at org.apache.jsp.error_jsp._jspService(error_jsp.java:102)
> at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:433)
> ... 25 more
> Caused by: java.lang.NullPointerException
> at org.infinispan.interceptors.base.BaseStateTransferInterceptor.visitGetKeysInGroupCommand(BaseStateTransferInterceptor.java:50)
> at org.infinispan.commands.remote.GetKeysInGroupCommand.acceptVisitor(GetKeysInGroupCommand.java:95)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99)
> at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:107)
> at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:76)
> at org.infinispan.commands.AbstractVisitor.visitGetKeysInGroupCommand(AbstractVisitor.java:178)
> at org.infinispan.commands.remote.GetKeysInGroupCommand.acceptVisitor(GetKeysInGroupCommand.java:95)
> at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:336)
> at org.infinispan.cache.impl.CacheImpl.internalGetGroup(CacheImpl.java:490)
> at org.infinispan.cache.impl.CacheImpl.getGroup(CacheImpl.java:483)
> at org.infinispan.cache.impl.CacheImpl.getGroup(CacheImpl.java:478)
> at org.infinispan.cache.impl.AbstractDelegatingAdvancedCache.getGroup(AbstractDelegatingAdvancedCache.java:227)
> at org.wildfly.clustering.web.infinispan.session.fine.FineSessionAttributesFactory.createValue(FineSessionAttributesFactory.java:73)
> at org.wildfly.clustering.web.infinispan.session.fine.FineSessionAttributesFactory.createValue(FineSessionAttributesFactory.java:45)
> at org.wildfly.clustering.web.infinispan.session.InfinispanSessionFactory.createValue(InfinispanSessionFactory.java:55)
> at org.wildfly.clustering.web.infinispan.session.InfinispanSessionFactory.createValue(InfinispanSessionFactory.java:40)
> at org.wildfly.clustering.web.infinispan.session.InfinispanSessionManager.createSession(InfinispanSessionManager.java:255)
> at org.wildfly.clustering.web.undertow.session.DistributableSessionManager.createSession(DistributableSessionManager.java:107)
> at io.undertow.servlet.spec.ServletContextImpl.getSession(ServletContextImpl.java:741)
> at io.undertow.servlet.spec.HttpServletRequestImpl.getSession(HttpServletRequestImpl.java:370)
> at io.undertow.servlet.spec.HttpServletRequestImpl.getSession(HttpServletRequestImpl.java:375)
> at org.apache.jasper.runtime.PageContextImpl.initialize(PageContextImpl.java:137)
> at org.apache.jasper.runtime.JspFactoryImpl.internalGetPageContext(JspFactoryImpl.java:109)
> at org.apache.jasper.runtime.JspFactoryImpl.getPageContext(JspFactoryImpl.java:60)
> at org.apache.jsp.error_jsp._jspService(error_jsp.java:80)
> ... 28 more
> {code}
> In the latter, it looks like the cache distribution manager is null in the command interceptor, which kind of 'makes sense' since we are no longer a distributed cache but a replicated cache. Does it mean that session replication is no longer supported, and only session distribution is? In my case either way is fine; I'll just crank up the number of owners if only distribution is supported.
> Though, in my company's use case, session replication granularity 'attribute' is definitely required to migrate our application to Wildfly 10.
> I know there are still some open jira tickets to complete the Wildfly 10 documentation, but could you provide any additional insight in case I've missed anything?
> Thanks,
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (LOGMGR-124) Add marker support
by Robert Knotek (JIRA)
[ https://issues.jboss.org/browse/LOGMGR-124?page=com.atlassian.jira.plugin... ]
Robert Knotek commented on LOGMGR-124:
--------------------------------------
So, you don't want to bring a dependency on org.slf4j.Marker into logmanager. OK, I understand. You are right.
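For anyone following along, this is the kind of call site the request was about - the standard org.slf4j API, with an illustrative marker name:
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.Marker;
import org.slf4j.MarkerFactory;

public class MarkerExample {
    private static final Logger LOG = LoggerFactory.getLogger(MarkerExample.class);
    private static final Marker CONFIDENTIAL = MarkerFactory.getMarker("CONFIDENTIAL");

    public static void main(String[] args) {
        // back ends like logback/log4j2 can filter or route on the marker;
        // a back end without marker support simply drops this metadata
        LOG.info(CONFIDENTIAL, "salary update for employee {}", 42);
    }
}
{code}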
> Add marker support
> ------------------
>
> Key: LOGMGR-124
> URL: https://issues.jboss.org/browse/LOGMGR-124
> Project: JBoss Log Manager
> Issue Type: Feature Request
> Components: core
> Affects Versions: 2.0.2.Final
> Reporter: Rob Heine
> Priority: Optional
>
> The logging engine currently (talking about the one included in WF-9) does not support Markers (as found in implementations like "logback" and/or "log4j2").
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6294) mod_cluster session drain wait does not end
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/WFLY-6294?page=com.atlassian.jira.plugin.... ]
Radoslav Husar reassigned WFLY-6294:
------------------------------------
Assignee: Radoslav Husar (was: Paul Ferraro)
> mod_cluster session drain wait does not end
> -------------------------------------------
>
> Key: WFLY-6294
> URL: https://issues.jboss.org/browse/WFLY-6294
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 10.0.0.Final
> Reporter: Aaron Ogburn
> Assignee: Radoslav Husar
>
> The mod_cluster session drain wait is not ending as expected. mod_cluster adds a session listener to be notified of session destruction. That is fired appropriately, but when the listener is invoked, the infinispan session manager still reports the session as active. Thus, this drain loop doesn't end after the notify because it still sees the active session:
> {code}
> while ((remainingSessions > 0) && (noTimeout || (timeout > 0))) {
>     ModClusterLogger.LOGGER.drainSessions(remainingSessions, context.getHost(), context);
>     listener.wait(noTimeout ? 0 : timeout);
>     current = System.currentTimeMillis();
>     timeout = end - current;
>     remainingSessions = context.getActiveSessionCount();
> }
> {code}
> Can the listeners be invoked when the session is fully removed and no longer considered active?
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6294) mod_cluster session drain wait does not end
by Aaron Ogburn (JIRA)
[ https://issues.jboss.org/browse/WFLY-6294?page=com.atlassian.jira.plugin.... ]
Aaron Ogburn updated WFLY-6294:
-------------------------------
Steps to Reproduce:
-Run ha profile with an increased session drain stop timeout:
{code}
<mod-cluster-config advertise-socket="modcluster" proxies="modclusterproxy" stop-context-timeout="300" connector="default" session-draining-strategy="ALWAYS">
{code}
-create a session in an app
-stop jboss
-invalidate that session
-note shutdown drain wait doesn't end
was:
-Run ha profile with an increased session drain stop timeout:
{code}
<mod-cluster-config advertise-socket="modcluster" proxies="modclusterproxy" stop-context-timeout="300" connector="default" session-draining-strategy="ALWAYS">
{code}
-create a session in an app
-start jboss stop
-invalidate that session
-note shutdown drain wait doesn't end
> mod_cluster session drain wait does not end
> -------------------------------------------
>
> Key: WFLY-6294
> URL: https://issues.jboss.org/browse/WFLY-6294
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 10.0.0.Final
> Reporter: Aaron Ogburn
> Assignee: Paul Ferraro
>
> The mod_cluster session drain wait is not ending as expected. mod_cluster adds a session listener to be notified of session destruction. That is fired appropriately, but when the listener is invoked, the infinispan session manager still reports the session as active. Thus, this drain loop doesn't end after the notify because it still sees the active session:
> {code}
> while ((remainingSessions > 0) && (noTimeout || (timeout > 0))) {
>     ModClusterLogger.LOGGER.drainSessions(remainingSessions, context.getHost(), context);
>     listener.wait(noTimeout ? 0 : timeout);
>     current = System.currentTimeMillis();
>     timeout = end - current;
>     remainingSessions = context.getActiveSessionCount();
> }
> {code}
> Can the listeners be invoked when the session is fully removed and no longer considered active?
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6294) mod_cluster session drain wait does not end
by Aaron Ogburn (JIRA)
Aaron Ogburn created WFLY-6294:
----------------------------------
Summary: mod_cluster session drain wait does not end
Key: WFLY-6294
URL: https://issues.jboss.org/browse/WFLY-6294
Project: WildFly
Issue Type: Bug
Components: Clustering
Affects Versions: 10.0.0.Final
Reporter: Aaron Ogburn
Assignee: Paul Ferraro
The mod_cluster session drain wait is not ending as expected. mod_cluster adds a session listener to be notified of session destruction. That is fired appropriately, but when the listener is invoked, the infinispan session manager still reports the session as active. Thus, this drain loop doesn't end after the notify because it still sees the active session:
{code}
while ((remainingSessions > 0) && (noTimeout || (timeout > 0))) {
    ModClusterLogger.LOGGER.drainSessions(remainingSessions, context.getHost(), context);
    listener.wait(noTimeout ? 0 : timeout);
    current = System.currentTimeMillis();
    timeout = end - current;
    remainingSessions = context.getActiveSessionCount();
}
{code}
Can the listeners be invoked when the session is fully removed and no longer considered active?
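As a side note, here is a minimal standalone illustration (not the actual mod_cluster/undertow code) of why a notification that fires before the observed state changes can stall a wait loop of this shape:
{code}
// Sketch: the waiter re-checks a count after each notify; if the notifier
// signals *before* the session is deregistered, the waiter sees the stale
// count and waits again - and with no further notify it hangs until timeout.
public class StaleNotifyDemo {
    private final Object lock = new Object();
    private int activeSessions = 1;

    void drain() throws InterruptedException {
        synchronized (lock) {
            while (activeSessions > 0) {
                lock.wait();            // woken by sessionDestroyed(); like
            }                           // getActiveSessionCount(), the re-check
        }                               // can still see 1 after the notify
    }

    void sessionDestroyedBuggy() {
        synchronized (lock) {
            lock.notifyAll();           // listener fired first: waiter wakes,
        }                               // re-checks, may still see 1, waits again
        synchronized (lock) {
            activeSessions = 0;         // state updated only after the notify
        }
    }

    void sessionDestroyedFixed() {
        synchronized (lock) {
            activeSessions = 0;         // remove the session first,
            lock.notifyAll();           // then notify: waiter always sees 0
        }
    }
}
{code}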
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)