[JBoss JIRA] (DROOLS-501) Propagation context is not set correctly when nodes are shared
by Mario Fusco (JIRA)
[ https://issues.jboss.org/browse/DROOLS-501?page=com.atlassian.jira.plugin... ]
Mario Fusco resolved DROOLS-501.
--------------------------------
Resolution: Done
> Propagation context is not set correctly when nodes are shared
> --------------------------------------------------------------
>
> Key: DROOLS-501
> URL: https://issues.jboss.org/browse/DROOLS-501
> Project: Drools
> Issue Type: Bug
> Security Level: Public(Everyone can see)
> Affects Versions: 6.1.0.Beta4
> Reporter: Davide Sottara
> Assignee: Mario Fusco
> Priority: Critical
> Fix For: 6.1.0.CR1
>
>
> The following rules
> {code}
> package org.drools.compiler.loop
>
> rule "Rule 1"
>     ruleflow-group "Start"
>     no-loop
> when
>     $thing1 : String()
>     $thing2 : Integer()
> then
>     System.out.println( 'At 1' );
>     update( $thing2 );
> end
>
> rule "Rule 2"
>     ruleflow-group "End"
>     no-loop
> when
>     $thing1 : String()
>     $thing2 : Integer()
> then
>     System.out.println( 'At 2' );
>     update( $thing2 );
> end
> {code}
> cause an endless loop, even when controlled by a jBPM process with two rule tasks.
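For reference, a minimal Java reproducer sketch (not part of the original report, which drives the two ruleflow-groups from a jBPM process with two rule tasks; the class name and the manual group focusing below are assumptions standing in for that process):
{code}
import org.kie.api.KieServices;
import org.kie.api.builder.KieFileSystem;
import org.kie.api.runtime.KieSession;

public class SharedNodeLoopReproducer {
    public static void main(String[] args) {
        String drl = "...";  // the two rules quoted above
        KieServices ks = KieServices.Factory.get();
        KieFileSystem kfs = ks.newKieFileSystem().write("src/main/resources/loop.drl", drl);
        ks.newKieBuilder(kfs).buildAll();
        KieSession ksession = ks.newKieContainer(ks.getRepository().getDefaultReleaseId()).newKieSession();
        ksession.insert("a string");
        ksession.insert(42);
        // In Drools 6 ruleflow-groups are backed by the agenda, so focusing the
        // groups manually stands in for the two rule tasks: "Start" fires first,
        // then focus pops to "End".
        ksession.getAgenda().getAgendaGroup("End").setFocus();
        ksession.getAgenda().getAgendaGroup("Start").setFocus();
        int fired = ksession.fireAllRules(20);  // cap firings so the sketch terminates even if no-loop is ignored
        System.out.println("fired " + fired + " rules (a healthy run needs only 2)");
        ksession.dispose();
    }
}
{code}
On a build containing the fix the firing cap should never be reached.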
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)
[JBoss JIRA] (WFLY-3421) Rehashing on view change can result in premature session/ejb expiration
by Paul Ferraro (JIRA)
[ https://issues.jboss.org/browse/WFLY-3421?page=com.atlassian.jira.plugin.... ]
Paul Ferraro updated WFLY-3421:
-------------------------------
Description:
Session/ejb expiration is scheduled only on the owning node of a given session/ejb. When a node leaves, each node that assumes ownership of the sessions/ejbs that were previously owned by the leaving node schedules expiration of those sessions. However, a view change can also lead to ownership changes for any session/ejb. We do not currently handle this properly. If a session/ejb changes ownership, the expiration scheduling is never cancelled, and that session/ejb will expire prematurely unless the node reacquires ownership. When using sticky sessions, this issue is not apparent, since subsequent requests will be directed to the previous owner, which will cancel the expiration scheduled on the old owner and properly reschedule it on the new owner. However, this will be a problem for web sessions if sticky sessions are disabled - and for @Stateful EJBs, if the ejb client receives updated affinity information prior to subsequent requests.
There are at least 2 ways to address this:
# When a request arrives for an existing session/ejb, we immediately cancel any scheduled expiration/eviction. This is currently a unicast, which typically results in a local call - but can go remote if the ownership has changed. Making this a cluster-wide broadcast would fix the issue.
# We can allow the scheduler to expose the set of keys that are currently scheduled and, on topology change, cancel those sessions/ejbs for which the current node is no longer the owner - and reschedule them on the new owner.
Option 1 adds an additional cluster-wide RPC per request.
Option 2 adds N*(N-1) unicast RPCs per view change, where N is the cluster size (i.e. each node sends 1 RPC to every other node containing the set of session/ejb IDs to schedule for expiration).
Option 2 is the less invasive of the two solutions.
EDIT: There is a 3rd option, i.e. modify the expiration tasks so that they skip expiration if the session/ejb is not owned by the current node. This prevents the premature expiration issue, but we need some additional strategy to reschedule the session/ejb expiration on the current owner's node.
> Rehashing on view change can result in premature session/ejb expiration
> -----------------------------------------------------------------------
>
> Key: WFLY-3421
> URL: https://issues.jboss.org/browse/WFLY-3421
> Project: WildFly
> Issue Type: Bug
> Security Level: Public(Everyone can see)
> Components: Clustering
> Affects Versions: 8.1.0.CR2
> Reporter: Paul Ferraro
> Assignee: Paul Ferraro
> Priority: Critical
> Fix For: 8.2.0.CR1, 9.0.0.Alpha1
>
>
> Session/ejb expiration is scheduled only on the owning node of a given session/ejb. When a node leaves, each node that assumes ownership of the sessions/ejbs that were previously owned by the leaving node schedules expiration of those sessions. However, a view change can also lead to ownership changes for any session/ejb. We do not currently handle this properly. If a session/ejb changes ownership, the expiration scheduling is never cancelled, and that session/ejb will expire prematurely unless the node reacquires ownership. When using sticky sessions, this issue is not apparent, since subsequent requests will be directed to the previous owner, which will cancel the expiration scheduled on the old owner and properly reschedule it on the new owner. However, this will be a problem for web sessions if sticky sessions are disabled - and for @Stateful EJBs, if the ejb client receives updated affinity information prior to subsequent requests.
> There are at least 2 ways to address this:
> # When a request arrives for an existing session/ejb, we immediately cancel any scheduled expiration/eviction. This is currently a unicast, which typically results in a local call - but can go remote if the ownership has changed. Making this a cluster-wide broadcast would fix the issue.
> # We can allow the scheduler to expose the set of keys that are currently scheduled and, on topology change, cancel those sessions/ejbs for which the current node is no longer the owner - and reschedule them on the new owner.
> Option 1 adds an additional cluster-wide RPC per request.
> Option 2 adds N*(N-1) unicast RPCs per view change, where N is the cluster size (i.e. each node sends 1 RPC to every other node containing the set of session/ejb IDs to schedule for expiration).
> Option 2 is the less invasive of the two solutions.
> EDIT: There is a 3rd option, i.e. modify the expiration tasks so that they skip expiration if the session/ejb is not owned by the current node. This prevents the premature expiration issue, but we need some additional strategy to reschedule the session/ejb expiration on the current owner's node.
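A rough sketch of option 2, for illustration only (the Scheduler and Locality interfaces and the scheduleOnPrimaryOwner RPC below are hypothetical, not the WildFly clustering SPI):
{code}
import java.util.Set;

interface Scheduler<K> {
    Set<K> scheduledKeys();   // keys with a pending expiration task on this node
    void cancel(K key);       // cancel the locally scheduled expiration
}

interface Locality<K> {
    boolean isLocalPrimaryOwner(K key);   // does this node own the key after the view change?
    void scheduleOnPrimaryOwner(K key);   // unicast RPC asking the new owner to schedule expiration
}

final class TopologyChangeHandler<K> {
    private final Scheduler<K> scheduler;
    private final Locality<K> locality;

    TopologyChangeHandler(Scheduler<K> scheduler, Locality<K> locality) {
        this.scheduler = scheduler;
        this.locality = locality;
    }

    // Invoked on every view/topology change: drop expiration tasks for keys this
    // node no longer owns and hand them to the new primary owner, so nothing
    // expires prematurely and nothing is left unscheduled.
    void topologyChanged() {
        for (K key : scheduler.scheduledKeys()) {
            if (!locality.isLocalPrimaryOwner(key)) {
                scheduler.cancel(key);
                locality.scheduleOnPrimaryOwner(key);
            }
        }
    }
}
{code}
Batching the keys per destination node into one message each, rather than sending one RPC per key, is what gives the N*(N-1) unicast cost described above.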
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)
[JBoss JIRA] (DROOLS-329) ClassFormatException when compile template with latest JDK8 (b114)
by fei xu (JIRA)
[ https://issues.jboss.org/browse/DROOLS-329?page=com.atlassian.jira.plugin... ]
fei xu commented on DROOLS-329:
-------------------------------
Is there any timeline for the fix? We are upgrading our production to 1.8 but are blocked by this bug. Any update is highly appreciated!
> ClassFormatException when compile template with latest JDK8 (b114)
> ------------------------------------------------------------------
>
> Key: DROOLS-329
> URL: https://issues.jboss.org/browse/DROOLS-329
> Project: Drools
> Issue Type: Bug
> Security Level: Public(Everyone can see)
> Affects Versions: 5.5.0.Final, 6.0.0.CR5
> Environment: Ubuntu linux, latest JDK1.8 (b114) downloaded from https://jdk8.java.net/download.html
> Reporter: Marek Posolda
> Assignee: Mark Proctor
> Fix For: 5.5.1.Final, 6.1.0.Final
>
>
> When trying to run code that compiles templates with the latest JDK8 (for instance this example: https://github.com/droolsjbpm/drools/blob/master/drools-examples/src/main... ),
> it throws an exception like this:
> {code}
> Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: wrong class format
> at org.drools.template.parser.DefaultTemplateRuleBase.readRule(DefaultTemplateRuleBase.java:148)
> at org.drools.template.parser.DefaultTemplateRuleBase.<init>(DefaultTemplateRuleBase.java:62)
> at org.drools.template.parser.TemplateDataListener.<init>(TemplateDataListener.java:74)
> at org.drools.decisiontable.ExternalSpreadsheetCompiler.compile(ExternalSpreadsheetCompiler.java:95)
> at org.drools.decisiontable.ExternalSpreadsheetCompiler.compile(ExternalSpreadsheetCompiler.java:81)
> at org.drools.examples.templates.SimpleRuleTemplateExample.buildKBase(SimpleRuleTemplateExample.java:84)
> at org.drools.examples.templates.SimpleRuleTemplateExample.executeExample(SimpleRuleTemplateExample.java:49)
> at org.drools.examples.templates.SimpleRuleTemplateExample.main(SimpleRuleTemplateExample.java:43)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
> Caused by: java.lang.RuntimeException: wrong class format
> at org.drools.commons.jci.compilers.EclipseJavaCompiler$2.findType(EclipseJavaCompiler.java:263)
> at org.drools.commons.jci.compilers.EclipseJavaCompiler$2.findType(EclipseJavaCompiler.java:203)
> at org.eclipse.jdt.internal.compiler.lookup.LookupEnvironment.askForType(LookupEnvironment.java:102)
> at org.eclipse.jdt.internal.compiler.lookup.UnresolvedReferenceBinding.resolve(UnresolvedReferenceBinding.java:49)
> at org.eclipse.jdt.internal.compiler.lookup.BinaryTypeBinding.resolveType(BinaryTypeBinding.java:122)
> at org.eclipse.jdt.internal.compiler.lookup.LookupEnvironment.getTypeFromTypeSignature(LookupEnvironment.java:1188)
> at org.eclipse.jdt.internal.compiler.lookup.LookupEnvironment.getTypeFromVariantTypeSignature(LookupEnvironment.java:1244)
> at org.eclipse.jdt.internal.compiler.lookup.LookupEnvironment.getTypeArgumentsFromSignature(LookupEnvironment.java:1031)
> at org.eclipse.jdt.internal.compiler.lookup.LookupEnvironment.getTypeFromTypeSignature(LookupEnvironment.java:1193)
> at org.eclipse.jdt.internal.compiler.lookup.BinaryTypeBinding.createMethod(BinaryTypeBinding.java:495)
> at org.eclipse.jdt.internal.compiler.lookup.BinaryTypeBinding.createMethods(BinaryTypeBinding.java:577)
> at org.eclipse.jdt.internal.compiler.lookup.BinaryTypeBinding.cachePartsFrom(BinaryTypeBinding.java:327)
> at org.eclipse.jdt.internal.compiler.lookup.LookupEnvironment.createBinaryTypeFrom(LookupEnvironment.java:640)
> at org.eclipse.jdt.internal.compiler.lookup.LookupEnvironment.createBinaryTypeFrom(LookupEnvironment.java:619)
> at org.eclipse.jdt.internal.compiler.Compiler.accept(Compiler.java:295)
> at org.eclipse.jdt.internal.compiler.lookup.LookupEnvironment.askForType(LookupEnvironment.java:133)
> at org.eclipse.jdt.internal.compiler.lookup.PackageBinding.getTypeOrPackage(PackageBinding.java:183)
> at org.eclipse.jdt.internal.compiler.lookup.CompilationUnitScope.findImport(CompilationUnitScope.java:465)
> at org.eclipse.jdt.internal.compiler.lookup.CompilationUnitScope.findSingleImport(CompilationUnitScope.java:519)
> at org.eclipse.jdt.internal.compiler.lookup.CompilationUnitScope.faultInImports(CompilationUnitScope.java:368)
> at org.eclipse.jdt.internal.compiler.lookup.CompilationUnitScope.faultInTypes(CompilationUnitScope.java:444)
> at org.eclipse.jdt.internal.compiler.Compiler.process(Compiler.java:752)
> at org.eclipse.jdt.internal.compiler.Compiler.compile(Compiler.java:464)
> at org.drools.commons.jci.compilers.EclipseJavaCompiler.compile(EclipseJavaCompiler.java:389)
> at org.drools.commons.jci.compilers.AbstractJavaCompiler.compile(AbstractJavaCompiler.java:49)
> at org.drools.rule.builder.dialect.java.JavaDialect.compileAll(JavaDialect.java:371)
> at org.drools.compiler.DialectCompiletimeRegistry.compileAll(DialectCompiletimeRegistry.java:46)
> at org.drools.compiler.PackageRegistry.compileAll(PackageRegistry.java:102)
> at org.drools.compiler.PackageBuilder.compileAll(PackageBuilder.java:1006)
> at org.drools.compiler.PackageBuilder.compileAllRules(PackageBuilder.java:842)
> at org.drools.compiler.PackageBuilder.addPackage(PackageBuilder.java:831)
> at org.drools.compiler.PackageBuilder.addPackageFromDrl(PackageBuilder.java:441)
> at org.drools.compiler.PackageBuilder.addPackageFromDrl(PackageBuilder.java:419)
> at org.drools.template.parser.DefaultTemplateRuleBase.readRule(DefaultTemplateRuleBase.java:139)
> ... 12 more
> Caused by: org.eclipse.jdt.internal.compiler.classfmt.ClassFormatException
> at org.eclipse.jdt.internal.compiler.classfmt.ClassFileReader.<init>(ClassFileReader.java:372)
> at org.drools.commons.jci.compilers.EclipseJavaCompiler$2.createNameEnvironmentAnswer(EclipseJavaCompiler.java:287)
> at org.drools.commons.jci.compilers.EclipseJavaCompiler$2.findType(EclipseJavaCompiler.java:258)
> ... 45 more
> {code}
> A workaround that worked for me is to switch to the Janino compiler (see the Workaround description).
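For reference, a minimal sketch of the Janino workaround mentioned above (assumptions: the Drools version in use honors the drools.dialect.java.compiler property and Janino is on the classpath; the property must be set before any rule or template compilation runs):
{code}
public class JaninoCompilerWorkaround {
    public static void main(String[] args) {
        // Switch the Java dialect from the bundled Eclipse (ECJ) compiler to Janino,
        // avoiding the ClassFormatException ECJ throws on JDK 8 class files.
        System.setProperty("drools.dialect.java.compiler", "JANINO");
        // ...then build the template rule base / run the spreadsheet compiler as in the example above
    }
}
{code}
The same property can also be passed programmatically via the knowledge builder configuration instead of a system property.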
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)
[JBoss JIRA] (JBWEB-297) NIO can improperly lead to request/response objects being used concurrently
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/JBWEB-297?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration updated JBWEB-297:
------------------------------------------
Bugzilla References: https://bugzilla.redhat.com/show_bug.cgi?id=1090103, https://bugzilla.redhat.com/show_bug.cgi?id=1093718, https://bugzilla.redhat.com/show_bug.cgi?id=1103891 (was: https://bugzilla.redhat.com/show_bug.cgi?id=1090103, https://bugzilla.redhat.com/show_bug.cgi?id=1093718)
> NIO can improperly lead to request/response objects being used concurrently
> ---------------------------------------------------------------------------
>
> Key: JBWEB-297
> URL: https://issues.jboss.org/browse/JBWEB-297
> Project: JBoss Web
> Issue Type: Bug
> Security Level: Public(Everyone can see)
> Components: Tomcat
> Affects Versions: JBossWeb-7.2.2.GA, JBossWeb-7.3.0.GA, JBossWeb-7.4.0.GA
> Reporter: Aaron Ogburn
> Assignee: Remy Maucherat
> Attachments: JBWEB-297.diff
>
>
> Using NIO with async servlets can improperly lead to a processor and its request/response objects being used by multiple threads to process different requests at once. The problem arises here in Http11NioProtocol.event():
> {code:title=Http11NioProtocol.java|borderStyle=solid}
> public SocketState event(NioChannel channel, SocketStatus status) {
>     Http11NioProcessor processor = connections.get(channel.getId());
>     SocketState state = SocketState.CLOSED;
>     if (processor != null) {
>         processor.startProcessing();
>         // Call the appropriate event
>         try {
>             state = processor.event(status);
>         } catch (java.net.SocketException e) {
>             // SocketExceptions are normal
>             CoyoteLogger.HTTP_NIO_LOGGER.socketException(e);
>         } catch (java.io.IOException e) {
>             // IOExceptions are normal
>             CoyoteLogger.HTTP_NIO_LOGGER.socketException(e);
>         }
>         // Future developers: if you discover any other
>         // rare-but-nonfatal exceptions, catch them here, and log as
>         // above.
>         catch (Throwable e) {
>             // any other exception or error is odd. Here we log it
>             // with "ERROR" level, so it will show up even on
>             // less-than-verbose logs.
>             CoyoteLogger.HTTP_NIO_LOGGER.socketError(e);
>         } finally {
>             if (state != SocketState.LONG) {
>                 connections.remove(channel.getId());
>                 recycledProcessors.offer(processor);
> {code}
> If two events occur on the same channel and execute this code at the same time, and both result in a SocketState other than LONG, they will both offer the same processor to recycledProcessors in the finally block. Later on, two different requests can then poll the same processor from recycledProcessors at once; a processor should only ever have one entry in recycledProcessors.
> It looks like we need to synchronize the Http11NioProtocol.event() call or NioEndpoint.ChannelProcessor.run().
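One way to close the race, sketched below for illustration only (hypothetical types; the actual fix is in the attached JBWEB-297.diff): make removal of the channel-to-processor mapping the gate for recycling, so that only the thread that wins the atomic remove(key, value) may offer the processor back.
{code}
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ConcurrentMap;

final class ProcessorRecycler<P> {
    private final ConcurrentMap<Long, P> connections = new ConcurrentHashMap<>();
    private final Queue<P> recycledProcessors = new ConcurrentLinkedQueue<>();

    void register(long channelId, P processor) {
        connections.put(channelId, processor);
    }

    // Called from the finally block of event(); safe to call concurrently for the
    // same channel because ConcurrentMap.remove(key, value) succeeds for exactly
    // one caller, so the processor can never be offered to the queue twice.
    void recycleIfDone(long channelId, P processor, boolean keepConnection) {
        if (!keepConnection && connections.remove(channelId, processor)) {
            recycledProcessors.offer(processor);
        }
    }
}
{code}
Here keepConnection corresponds to (state == SocketState.LONG) in the excerpt above; synchronizing event() as suggested would likewise serialize the two offers.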
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)