[JBoss JIRA] (WFLY-5352) Fix the documentation in wildfly-singleton_1_0.xsd
by Michal Vinkler (JIRA)
[ https://issues.jboss.org/browse/WFLY-5352?page=com.atlassian.jira.plugin.... ]
Michal Vinkler reopened WFLY-5352:
----------------------------------
The issue is still not fixed:
1. Attributes "name" and "cache-container" have the same documentation.
2. Attribute "quorum" is missing documentation.
The associated pull request fixed missing documentation for another attribute.
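For reference, corrected annotations could read roughly as follows. This is suggested wording only, not the merged fix; in particular the descriptions for "name" and "quorum" are assumptions based on how those attributes behave:

```xml
<xs:attribute name="name" type="xs:string" use="required">
    <xs:annotation>
        <xs:documentation>Uniquely identifies this singleton deployment policy.</xs:documentation>
    </xs:annotation>
</xs:attribute>
<xs:attribute name="cache-container" type="xs:string" use="required">
    <xs:annotation>
        <xs:documentation>Identifies the cache-container used to back the singleton deployment policy.</xs:documentation>
    </xs:annotation>
</xs:attribute>
<xs:attribute name="quorum" type="xs:integer" default="1">
    <xs:annotation>
        <xs:documentation>Defines the minimum number of cluster members required before a singleton provider election may take place.</xs:documentation>
    </xs:annotation>
</xs:attribute>
```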
> Fix the documentation in wildfly-singleton_1_0.xsd
> --------------------------------------------------
>
> Key: WFLY-5352
> URL: https://issues.jboss.org/browse/WFLY-5352
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 10.0.0.Beta2
> Reporter: Michal Vinkler
> Assignee: Paul Ferraro
> Fix For: 10.0.0.CR1
>
>
> https://github.com/wildfly/wildfly/blob/master/clustering/singleton/exten...
> 1. Attributes "name" and "cache-container" have the same documentation.
> 2. Attribute "quorum" is missing documentation.
> {code:xml}
> <xs:complexType name="singleton-policy">
> <!-- ... -->
> <xs:attribute name="name" type="xs:string" use="required">
> <xs:annotation>
> <xs:documentation>Identifies the cache-container used to back the singleton deployment policy.</xs:documentation>
> </xs:annotation>
> </xs:attribute>
> <xs:attribute name="cache-container" type="xs:string" use="required">
> <xs:annotation>
> <xs:documentation>Identifies the cache-container used to back the singleton deployment policy.</xs:documentation>
> </xs:annotation>
> </xs:attribute>
> <!-- ... -->
> <xs:attribute name="quorum" type="xs:integer" default="1">
> <xs:annotation>
> <xs:documentation></xs:documentation>
> </xs:annotation>
> </xs:attribute>
> </xs:complexType>
> {code}
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 6 months
[JBoss JIRA] (JGRP-1973) FRAG2: message corruption when thread pools are disabled
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-1973?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-1973:
---------------------------
Description:
When disabling the thread pools (regular, OOB) and using {{UDP}}, fragments of a message get corrupted as a single buffer ({{UDP.receive_buf}}) is reused.
* If we send a message of 1000 bytes, and {{FRAG2.frag_size}} is set to 600, then {{FRAG2}} sends 2 fragments: f1 (offset=0, length=600) and f2 (offset=600, length=400).
* f1 is received and placed into {{receive_buf}}, then sent up the stack *without copying* as the {{DirectExecutor}} thread pool doesn't copy the data
* f1 is received by {{FRAG2}} and added to the fragments list at index 0. The buffer of the message points to {{receive_buf}}
* f2 is received and *overwrites* f1 in {{receive_buf}} !
* f2 is received by {{FRAG2}} and added to the fragments list at index 1. The buffer of the message points to {{receive_buf}}
* {{FRAG2}} now creates a new message whose buffer is {{receive_buf}}[0-600] and {{receive_buf}}[600-1000].
* The problem here is that {{receive_buf}} contains only f2, which overwrote f1, so the resulting message will be incorrect !
This probably affects {{FRAG}}, too.
Not too critical, as thread pools are enabled by default, and disabling them might even be removed in the future.
SOLUTION: remove the check for DirectExecutor and copy the data if {{copy_buffer}} is true
{code:title=Bar.java|borderStyle=solid}
if(!copy_buffer || pool instanceof DirectExecutor)
    pool.execute(new MyHandler(sender, data, offset, length)); // we don't make a copy if we execute on this thread
else {
    byte[] tmp=new byte[length];
    System.arraycopy(data, offset, tmp, 0, length);
    pool.execute(new MyHandler(sender, tmp, 0, tmp.length));
}
{code}
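The aliasing at the heart of this bug can be shown outside JGroups: two fragment references that both point at one reusable receive buffer cannot survive a second receive. A minimal standalone sketch (hypothetical class, not JGroups code):

```java
import java.util.Arrays;

public class ReceiveBufferAliasingDemo {
    public static void main(String[] args) {
        byte[] receiveBuf = new byte[600];      // plays the role of UDP.receive_buf

        // f1 arrives: buffer filled with 'A', handed up WITHOUT copying
        Arrays.fill(receiveBuf, (byte) 'A');
        byte[] f1 = receiveBuf;                 // aliased, as on the DirectExecutor path
        // the fix from the SOLUTION above: copy before handing off
        byte[] f1Copy = Arrays.copyOf(receiveBuf, receiveBuf.length);

        // f2 arrives: the same buffer is overwritten with 'B'
        Arrays.fill(receiveBuf, (byte) 'B');

        // the aliased fragment now shows f2's bytes: f1 was silently corrupted
        System.out.println((char) f1[0]);       // prints B, not the expected A
        System.out.println((char) f1Copy[0]);   // prints A: the copy is unaffected
    }
}
```

This is why the proposed fix copies the data whenever the handler may end up running on the receiver's thread.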
was:
When disabling the thread pools (regular, OOB) and using {{UDP}}, fragments of a message get corrupted as a single buffer ({{UDP.receive_buf}}) is reused.
* If we send a message of 1000 bytes, and {{FRAG2.frag_size}} is set to 600, then {{FRAG2}} sends 2 fragments: f1 (offset=0, length=600) and f2 (offset=600, length=400).
* f1 is received and placed into {{receive_buf}}, then sent up the stack *without copying* as the {{DirectExecutor}} thread pool doesn't copy the data
* f1 is received by {{FRAG2}} and added to the fragments list at index 0. The buffer of the message points to {{receive_buf}}
* f2 is received and *overwrites* f1 in {{receive_buf}} !
* f2 is received by {{FRAG2}} and added to the fragments list at index 1. The buffer of the message points to {{receive_buf}}
* {{FRAG2}} now creates a new message whose buffer is {{receive_buf}}[0-600] and {{receive_buf}}[600-1000].
* The problem here is that {{receive_buf}} contains only f2, which overwrote f1, so the resulting message will be incorrect !
This probably affects {{FRAG}}, too.
Not too critical, as thread pools are enabled by default, and disabling them might even be removed in the future.
SOLUTION: remove the check for DirectExecutor and copy the data if {{copy_buffer}} is true
{code:title=Bar.java|borderStyle=solid}
if(!copy_buffer || pool instanceof DirectExecutor)
pool.execute(new MyHandler(sender, data, offset, length)); // we don't make a copy if we execute on this thread
else {
byte[] tmp=new byte[length];
System.arraycopy(data, offset, tmp, 0, length);
pool.execute(new MyHandler(sender, tmp, 0, tmp.length));
}
{code}
> FRAG2: message corruption when thread pools are disabled
> --------------------------------------------------------
>
> Key: JGRP-1973
> URL: https://issues.jboss.org/browse/JGRP-1973
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.6.6
> Reporter: Bela Ban
> Assignee: Bela Ban
> Fix For: 3.6.7
>
>
> When disabling the thread pools (regular, OOB) and using {{UDP}}, fragments of a message get corrupted as a single buffer ({{UDP.receive_buf}}) is reused.
> * If we send a message of 1000 bytes, and {{FRAG2.frag_size}} is set to 600, then {{FRAG2}} sends 2 fragments: f1 (offset=0, length=600) and f2 (offset=600, length=400).
> * f1 is received and placed into {{receive_buf}}, then sent up the stack *without copying* as the {{DirectExecutor}} thread pool doesn't copy the data
> * f1 is received by {{FRAG2}} and added to the fragments list at index 0. The buffer of the message points to {{receive_buf}}
> * f2 is received and *overwrites* f1 in {{receive_buf}} !
> * f2 is received by {{FRAG2}} and added to the fragments list at index 1. The buffer of the message points to {{receive_buf}}
> * {{FRAG2}} now creates a new message whose buffer is {{receive_buf}}[0-600] and {{receive_buf}}[600-1000].
> * The problem here is that {{receive_buf}} contains only f2, which overwrote f1, so the resulting message will be incorrect !
> This probably affects {{FRAG}}, too.
> Not too critical, as thread pools are enabled by default, and disabling them might even be removed in the future.
> SOLUTION: remove the check for DirectExecutor and copy the data if {{copy_buffer}} is true
> {code:title=Bar.java|borderStyle=solid}
> if(!copy_buffer || pool instanceof DirectExecutor)
> pool.execute(new MyHandler(sender, data, offset, length)); // we don't make a copy if we execute on this thread
> else {
> byte[] tmp=new byte[length];
> System.arraycopy(data, offset, tmp, 0, length);
> pool.execute(new MyHandler(sender, tmp, 0, tmp.length));
> }
> {code}
[JBoss JIRA] (JGRP-1973) FRAG2: message corruption when thread pools are disabled
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-1973?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-1973:
---------------------------
Description:
When disabling the thread pools (regular, OOB) and using {{UDP}}, fragments of a message get corrupted as a single buffer ({{UDP.receive_buf}}) is reused.
* If we send a message of 1000 bytes, and {{FRAG2.frag_size}} is set to 600, then {{FRAG2}} sends 2 fragments: f1 (offset=0, length=600) and f2 (offset=600, length=400).
* f1 is received and placed into {{receive_buf}}, then sent up the stack *without copying* as the {{DirectExecutor}} thread pool doesn't copy the data
* f1 is received by {{FRAG2}} and added to the fragments list at index 0. The buffer of the message points to {{receive_buf}}
* f2 is received and *overwrites* f1 in {{receive_buf}} !
* f2 is received by {{FRAG2}} and added to the fragments list at index 1. The buffer of the message points to {{receive_buf}}
* {{FRAG2}} now creates a new message whose buffer is {{receive_buf}}[0-600] and {{receive_buf}}[600-1000].
* The problem here is that {{receive_buf}} contains only f2, which overwrote f1, so the resulting message will be incorrect !
This probably affects {{FRAG}}, too.
Not too critical, as thread pools are enabled by default, and disabling them might even be removed in the future.
SOLUTION: remove the check for DirectExecutor and copy the data if {{copy_buffer}} is true
{code:title=Bar.java|borderStyle=solid}
if(!copy_buffer || pool instanceof DirectExecutor)
    pool.execute(new MyHandler(sender, data, offset, length)); // we don't make a copy if we execute on this thread
else {
    byte[] tmp=new byte[length];
    System.arraycopy(data, offset, tmp, 0, length);
    pool.execute(new MyHandler(sender, tmp, 0, tmp.length));
}
{code}
was:
When disabling the thread pools (regular, OOB) and using {{UDP}}, fragments of a message get corrupted as a single buffer ({{UDP.receive_buf}}) is reused.
* If we send a message of 1000 bytes, and {{FRAG2.frag_size}} is set to 600, then {{FRAG2}} sends 2 fragments: f1 (offset=0, length=600) and f2 (offset=600, length=400).
* f1 is received and placed into {{receive_buf}}, then sent up the stack *without copying* as the {{DirectExecutor}} thread pool doesn't copy the data
* f1 is received by {{FRAG2}} and added to the fragments list at index 0. The buffer of the message points to {{receive_buf}}
* f2 is received and *overwrites* f1 in {{receive_buf}} !
* f2 is received by {{FRAG2}} and added to the fragments list at index 1. The buffer of the message points to {{receive_buf}}
* {{FRAG2}} now creates a new message whose buffer is {{receive_buf}}[0-600] and {{receive_buf}}[600-1000].
* The problem here is that {{receive_buf}} contains only f2, which overwrote f1, so the resulting message will be incorrect !
This probably affects {{FRAG}}, too.
Not too critical, as thread pools are enabled by default, and disabling them might even be removed in the future.
SOLUTION:
> FRAG2: message corruption when thread pools are disabled
> --------------------------------------------------------
>
> Key: JGRP-1973
> URL: https://issues.jboss.org/browse/JGRP-1973
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.6.6
> Reporter: Bela Ban
> Assignee: Bela Ban
> Fix For: 3.6.7
>
>
> When disabling the thread pools (regular, OOB) and using {{UDP}}, fragments of a message get corrupted as a single buffer ({{UDP.receive_buf}}) is reused.
> * If we send a message of 1000 bytes, and {{FRAG2.frag_size}} is set to 600, then {{FRAG2}} sends 2 fragments: f1 (offset=0, length=600) and f2 (offset=600, length=400).
> * f1 is received and placed into {{receive_buf}}, then sent up the stack *without copying* as the {{DirectExecutor}} thread pool doesn't copy the data
> * f1 is received by {{FRAG2}} and added to the fragments list at index 0. The buffer of the message points to {{receive_buf}}
> * f2 is received and *overwrites* f1 in {{receive_buf}} !
> * f2 is received by {{FRAG2}} and added to the fragments list at index 1. The buffer of the message points to {{receive_buf}}
> * {{FRAG2}} now creates a new message whose buffer is {{receive_buf}}[0-600] and {{receive_buf}}[600-1000].
> * The problem here is that {{receive_buf}} contains only f2, which overwrote f1, so the resulting message will be incorrect !
> This probably affects {{FRAG}}, too.
> Not too critical, as thread pools are enabled by default, and disabling them might even be removed in the future.
> SOLUTION: remove the check for DirectExecutor and copy the data if {{copy_buffer}} is true
> {code:title=Bar.java|borderStyle=solid}
> if(!copy_buffer || pool instanceof DirectExecutor)
> pool.execute(new MyHandler(sender, data, offset, length)); // we don't make a copy if we execute on this thread
> else {
> byte[] tmp=new byte[length];
> System.arraycopy(data, offset, tmp, 0, length);
> pool.execute(new MyHandler(sender, tmp, 0, tmp.length));
> }
> {code}
[JBoss JIRA] (JGRP-1973) FRAG2: message corruption when thread pools are disabled
by Bela Ban (JIRA)
Bela Ban created JGRP-1973:
------------------------------
Summary: FRAG2: message corruption when thread pools are disabled
Key: JGRP-1973
URL: https://issues.jboss.org/browse/JGRP-1973
Project: JGroups
Issue Type: Bug
Affects Versions: 3.6.6
Reporter: Bela Ban
Assignee: Bela Ban
Fix For: 3.6.7
When disabling the thread pools (regular, OOB) and using {{UDP}}, fragments of a message get corrupted as a single buffer ({{UDP.receive_buf}}) is reused.
* If we send a message of 1000 bytes, and {{FRAG2.frag_size}} is set to 600, then {{FRAG2}} sends 2 fragments: f1 (offset=0, length=600) and f2 (offset=600, length=400).
* f1 is received and placed into {{receive_buf}}, then sent up the stack *without copying* as the {{DirectExecutor}} thread pool doesn't copy the data
* f1 is received by {{FRAG2}} and added to the fragments list at index 0. The buffer of the message points to {{receive_buf}}
* f2 is received and *overwrites* f1 in {{receive_buf}} !
* f2 is received by {{FRAG2}} and added to the fragments list at index 1. The buffer of the message points to {{receive_buf}}
* {{FRAG2}} now creates a new message whose buffer is {{receive_buf}}[0-600] and {{receive_buf}}[600-1000].
* The problem here is that {{receive_buf}} contains only f2, which overwrote f1, so the resulting message will be incorrect !
This probably affects {{FRAG}}, too.
Not too critical, as thread pools are enabled by default, and disabling them might even be removed in the future.
SOLUTION:
[JBoss JIRA] (SECURITY-784) LdapExtLoginModule cannot find custom ldap socket factory
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/SECURITY-784?page=com.atlassian.jira.plug... ]
RH Bugzilla Integration commented on SECURITY-784:
--------------------------------------------------
Ondrej Lukas <olukas(a)redhat.com> changed the Status of [bug 1052644|https://bugzilla.redhat.com/show_bug.cgi?id=1052644] from ON_QA to VERIFIED
> LdapExtLoginModule cannot find custom ldap socket factory
> ---------------------------------------------------------
>
> Key: SECURITY-784
> URL: https://issues.jboss.org/browse/SECURITY-784
> Project: PicketBox
> Issue Type: Feature Request
> Components: PicketBox
> Affects Versions: PicketBox_4_0_19.Final
> Reporter: Derek Horton
> Assignee: Pedro Igor
> Attachments: SECURITY-784.patch
>
>
> LdapExtLoginModule cannot find custom ldap socket factory.
> Passing the "java.naming.ldap.factory.socket" property in as a
> module-option:
> <module-option name="java.naming.ldap.factory.socket" value="org.jboss.example.CustomSocketFactory"/>
> results in a ClassNotFoundException:
> Caused by: javax.naming.CommunicationException: 192.168.1.8:389 [Root exception is java.lang.ClassNotFoundException: org/jboss/example/CustomSocketFactory]
> at com.sun.jndi.ldap.Connection.<init>(Connection.java:226) [rt.jar:1.7.0_45]
> at com.sun.jndi.ldap.LdapClient.<init>(LdapClient.java:136) [rt.jar:1.7.0_45]
> at com.sun.jndi.ldap.LdapClient.getInstance(LdapClient.java:1608) [rt.jar:1.7.0_45]
> at com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java:2698) [rt.jar:1.7.0_45]
> at com.sun.jndi.ldap.LdapCtx.<init>(LdapCtx.java:316) [rt.jar:1.7.0_45]
> at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(LdapCtxFactory.java:193) [rt.jar:1.7.0_45]
> at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(LdapCtxFactory.java:211) [rt.jar:1.7.0_45]
> at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(LdapCtxFactory.java:154) [rt.jar:1.7.0_45]
> at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(LdapCtxFactory.java:84) [rt.jar:1.7.0_45]
> at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:684) [rt.jar:1.7.0_45]
> at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:307) [rt.jar:1.7.0_45]
> at javax.naming.InitialContext.init(InitialContext.java:242) [rt.jar:1.7.0_45]
> at javax.naming.ldap.InitialLdapContext.<init>(InitialLdapContext.java:153) [rt.jar:1.7.0_45]
> at org.jboss.security.auth.spi.LdapExtLoginModule.constructInitialLdapContext(LdapExtLoginModule.java:767) [picketbox-4.0.17.SP2-redhat-2.jar:4.0.17.SP2-redhat-2]
> I tried making the custom socket factory into a JBoss module and adding the module as a dependency to picketbox and
> sun.jdk. Unfortunately, that did not work. I also added the socket
> factory jar to the jre/lib/ext directory. That didn't work either.
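For context, a class named via java.naming.ldap.factory.socket is loaded reflectively by the JNDI LDAP provider and must extend javax.net.SocketFactory, exposing a static getDefault() method; it is that reflective load that fails here with ClassNotFoundException. A minimal sketch of such a factory (hypothetical class, mirroring the org.jboss.example.CustomSocketFactory named in the report):

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import javax.net.SocketFactory;

// Minimal custom socket factory of the kind referenced by the
// java.naming.ldap.factory.socket module-option. The JNDI LDAP
// provider loads this class by name and invokes getDefault().
public class CustomSocketFactory extends SocketFactory {

    public static SocketFactory getDefault() {
        return new CustomSocketFactory();
    }

    @Override
    public Socket createSocket(String host, int port) throws IOException {
        return new Socket(host, port); // customize socket creation here
    }

    @Override
    public Socket createSocket(String host, int port, InetAddress localHost, int localPort) throws IOException {
        return new Socket(host, port, localHost, localPort);
    }

    @Override
    public Socket createSocket(InetAddress host, int port) throws IOException {
        return new Socket(host, port);
    }

    @Override
    public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort) throws IOException {
        return new Socket(address, port, localAddress, localPort);
    }
}
```

The issue is not the factory itself but making its class visible to the classloader that the JNDI provider uses, which is what the module/jre/lib/ext attempts above were trying to achieve.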
[JBoss JIRA] (DROOLS-891) Missing alpha node removal when the only rule using it is removed
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/DROOLS-891?page=com.atlassian.jira.plugin... ]
RH Bugzilla Integration commented on DROOLS-891:
------------------------------------------------
Ryan Zhang <rzhang(a)redhat.com> changed the Status of [bug 1273087|https://bugzilla.redhat.com/show_bug.cgi?id=1273087] from MODIFIED to ON_QA
> Missing alpha node removal when the only rule using it is removed
> -----------------------------------------------------------------
>
> Key: DROOLS-891
> URL: https://issues.jboss.org/browse/DROOLS-891
> Project: Drools
> Issue Type: Bug
> Reporter: Mario Fusco
> Assignee: Mario Fusco
> Fix For: 6.3.0.CR2
>
>
> When an alpha node has more than one sink and is used by only one rule, it doesn't get removed when that rule is removed. The following test case demonstrates the problem.
> {code}
> @Test
> public void testRemoveHasSameConElement() {
>     String packageName = "test";
>     String rule1 = "package " + packageName + ";" +
>             "import java.util.Map; \n" +
>             "rule 'rule1' \n" +
>             "when \n" +
>             "    Map(this['type'] == 'Goods' && this['brand'] == 'a') \n" +
>             "    Map(this['type'] == 'Goods' && this['category'] == 'b') \n" +
>             "then \n" +
>             "    System.out.println('test rule 1'); \n" +
>             "end";
>
>     KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
>     kbuilder.add( ResourceFactory.newByteArrayResource( rule1.getBytes() ), ResourceType.DRL );
>     if ( kbuilder.hasErrors() ) {
>         fail( kbuilder.getErrors().toString() );
>     }
>
>     KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
>     kbase.addKnowledgePackages( kbuilder.getKnowledgePackages() );
>     kbase.removeKnowledgePackage(packageName);
>
>     StatelessKnowledgeSession session = kbase.newStatelessKnowledgeSession();
>     session.execute(new HashMap());
> }
> {code}
[JBoss JIRA] (DROOLS-951) Removing 2 or more rules does not retract justified objects
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/DROOLS-951?page=com.atlassian.jira.plugin... ]
RH Bugzilla Integration commented on DROOLS-951:
------------------------------------------------
Ryan Zhang <rzhang(a)redhat.com> changed the Status of [bug 1273087|https://bugzilla.redhat.com/show_bug.cgi?id=1273087] from MODIFIED to ON_QA
> Removing 2 or more rules does not retract justified objects
> -----------------------------------------------------------
>
> Key: DROOLS-951
> URL: https://issues.jboss.org/browse/DROOLS-951
> Project: Drools
> Issue Type: Bug
> Components: core engine
> Affects Versions: 6.3.0.Final
> Reporter: Zvonimir Bošnjak
> Assignee: Mario Fusco
> Fix For: 6.4.x
>
>
> When removing rules that have logically inserted (justified) objects from the knowledge base, only the object from the first removed rule is retracted.
> In the example, starting from AddRemoveRule#184: as it removes the first rule, it re-initializes all other path memories and, in particular, marks them as unlinked (AbstractTerminalNode#204)
> Later, when it tries to flush the deletions (AddRemoveRule#280), the unlinked status prevents the propagation from taking place.
>