[JBoss JIRA] (WFLY-9747) wildfly-arquillian-container-managed: java.lang.ClassNotFoundException: org.wildfly.security.permission.AbstractNameSetOnlyPermission
by Geoffrey De Smet (JIRA)
Geoffrey De Smet created WFLY-9747:
--------------------------------------
Summary: wildfly-arquillian-container-managed: java.lang.ClassNotFoundException: org.wildfly.security.permission.AbstractNameSetOnlyPermission
Key: WFLY-9747
URL: https://issues.jboss.org/browse/WFLY-9747
Project: WildFly
Issue Type: Bug
Reporter: Geoffrey De Smet
Assignee: Jason Greene
I get this error:
{code}
[INFO] Running org.optaplanner.openshift.employeerostering.webapp.skill.SkillRestServiceTest
Jan 31, 2018 7:52:20 PM org.jboss.as.arquillian.container.managed.ManagedDeployableContainer startInternal
WARNING: Bundles path is deprecated and no longer used.
Jan 31, 2018 7:52:20 PM org.jboss.as.arquillian.container.managed.ManagedDeployableContainer startInternal
INFO: Starting container with: [/usr/lib/jvm/java-openjdk/bin/java, -D[Standalone], -Djboss.socket.binding.port-offset=10000, -Xms512m, -Xmx1024m, -XX:MaxPermSize=512m, -ea, -Djboss.home.dir=/home/ge0ffrey/projects/jboss/optashift/optashift-employee-rostering/local/appserver/wildfly-10.1.0.Final, -Dorg.jboss.boot.log.file=/home/ge0ffrey/projects/jboss/optashift/optashift-employee-rostering/local/appserver/wildfly-10.1.0.Final/standalone/log/server.log, -Dlogging.configuration=file:/home/ge0ffrey/projects/jboss/optashift/optashift-employee-rostering/local/appserver/wildfly-10.1.0.Final/standalone/configuration/logging.properties, -jar, /home/ge0ffrey/projects/jboss/optashift/optashift-employee-rostering/local/appserver/wildfly-10.1.0.Final/jboss-modules.jar, -mp, /home/ge0ffrey/projects/jboss/optashift/optashift-employee-rostering/local/appserver/wildfly-10.1.0.Final/modules, org.jboss.as.standalone, -Djboss.home.dir=/home/ge0ffrey/projects/jboss/optashift/optashift-employee-rostering/local/appserver/wildfly-10.1.0.Final, -Djboss.server.base.dir=/home/ge0ffrey/projects/jboss/optashift/optashift-employee-rostering/local/appserver/wildfly-10.1.0.Final/standalone, -Djboss.server.log.dir=/home/ge0ffrey/projects/jboss/optashift/optashift-employee-rostering/local/appserver/wildfly-10.1.0.Final/standalone/log, -Djboss.server.config.dir=/home/ge0ffrey/projects/jboss/optashift/optashift-employee-rostering/local/appserver/wildfly-10.1.0.Final/standalone/configuration]
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.449 s <<< FAILURE! - in org.optaplanner.openshift.employeerostering.webapp.skill.SkillRestServiceTest
[ERROR] org.optaplanner.openshift.employeerostering.webapp.skill.SkillRestServiceTest Time elapsed: 0.448 s <<< ERROR!
java.lang.NoClassDefFoundError: org/wildfly/security/permission/AbstractNameSetOnlyPermission
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:236)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
Caused by: java.lang.ClassNotFoundException: org.wildfly.security.permission.AbstractNameSetOnlyPermission
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:236)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
{code}
With this parent pom:
{code}
<properties>
<version.org.jboss.arquillian>1.2.1.Final</version.org.jboss.arquillian>
<version.org.wildfly.arquillian>2.1.0.Final</version.org.wildfly.arquillian>
<version.org.jboss.resteasy>3.1.4.Final</version.org.jboss.resteasy>
</properties>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.jboss.arquillian</groupId>
<artifactId>arquillian-bom</artifactId>
<version>${version.org.jboss.arquillian}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<dependency>
<groupId>org.wildfly.arquillian</groupId>
<artifactId>wildfly-arquillian-container-managed</artifactId>
<version>${version.org.wildfly.arquillian}</version>
</dependency>
<dependency>
<groupId>org.jboss.resteasy</groupId>
<artifactId>resteasy-client</artifactId>
<version>${version.org.jboss.resteasy}</version>
</dependency>
...
{code}
and this child pom:
{code}
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.jboss.arquillian.junit</groupId>
<artifactId>arquillian-junit-container</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.jboss.shrinkwrap.resolver</groupId>
<artifactId>shrinkwrap-resolver-depchain</artifactId>
<type>pom</type>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.wildfly.arquillian</groupId>
<artifactId>wildfly-arquillian-container-managed</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.jboss.resteasy</groupId>
<artifactId>resteasy-client</artifactId>
<scope>test</scope>
</dependency>
{code}
I am probably not using a correct version combination of arquillian and wildfly, but for mere mortals such as myself it takes days to find a working version combination of arquillian and wildfly - every time I need to upgrade wildfly. (The arquillian guides and arquillian-showcase-jaxrs are all hopelessly outdated in this respect; they still mention jboss-as (note eap).)
Anyway, wildfly-arquillian-managed should automatically detect that it's a wrong version combination and give an error message like "I am not built to work with version x, but it seems like you're combining me with version y."
Spring Boot gets this right, so can we.
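For illustration of the requested behaviour, here is a minimal sketch of a fail-fast version check the container adapter could run at startup. All class, method, and version names below are hypothetical; this is not the wildfly-arquillian API, only the shape of the error message the report asks for.
{code:java}
// Hypothetical sketch only: not part of wildfly-arquillian.
// Illustrates failing fast with a clear message instead of a ClassNotFoundException.
public final class AdapterVersionCheck {

    // Assumed value: the oldest WildFly major version the adapter claims to support.
    private static final int MIN_SUPPORTED_WILDFLY_MAJOR = 11;

    public static void assertCompatible(String adapterVersion, int detectedWildFlyMajor) {
        if (detectedWildFlyMajor < MIN_SUPPORTED_WILDFLY_MAJOR) {
            throw new IllegalStateException(
                "wildfly-arquillian-container-managed " + adapterVersion
                + " is not built to work with WildFly " + detectedWildFlyMajor
                + "; it needs WildFly " + MIN_SUPPORTED_WILDFLY_MAJOR + " or newer.");
        }
    }
}
{code}
Run once before the deployment phase, a check like this would turn the opaque NoClassDefFoundError above into the kind of actionable message the report suggests.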
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (JGRP-2248) SASL protocol should not pass null callbackhandlers to Factories
by Ryan Emerson (JIRA)
[ https://issues.jboss.org/browse/JGRP-2248?page=com.atlassian.jira.plugin.... ]
Ryan Emerson commented on JGRP-2248:
------------------------------------
[~belaban] Done. If we can get 4.0.10.Final in time for ISPN 9.2.0.Final (14/02) that would be ideal as it would allow me to remove my current workaround before release.
> SASL protocol should not pass null callbackhandlers to Factories
> ----------------------------------------------------------------
>
> Key: JGRP-2248
> URL: https://issues.jboss.org/browse/JGRP-2248
> Project: JGroups
> Issue Type: Enhancement
> Affects Versions: 4.0.9
> Reporter: Ryan Emerson
> Assignee: Ryan Emerson
> Fix For: 4.0.10
>
>
> Currently it's possible for the client and server callback handlers to be null when passed to the respective Sasl Factory. When utilising the Elytron Sasl factories this results in an IllegalArgumentException being thrown. To avoid this we should ensure that the callback handlers are non-null when passed to the factory implementations, e.g. pass a NoOpCallbackHandler.
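A minimal sketch of what such a no-op handler could look like (illustrative only; not necessarily the implementation that ships in JGroups):
{code:java}
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;

// Illustrative no-op handler: gives the SASL factories a non-null CallbackHandler
// while deliberately resolving nothing.
public class NoOpCallbackHandler implements CallbackHandler {
    @Override
    public void handle(Callback[] callbacks) {
        // intentionally empty
    }
}
{code}
Passing an instance like this instead of null would avoid the IllegalArgumentException from the Elytron factories while leaving behaviour unchanged for mechanisms that never invoke the handler.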
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (JGRP-2245) JGroup JDBC_PING is not clearing the crashed members
by Sibin Karnavar (JIRA)
[ https://issues.jboss.org/browse/JGRP-2245?page=com.atlassian.jira.plugin.... ]
Sibin Karnavar commented on JGRP-2245:
--------------------------------------
Thank you. I will test this out and update you on this.
> JGroup JDBC_PING is not clearing the crashed members
> ----------------------------------------------------
>
> Key: JGRP-2245
> URL: https://issues.jboss.org/browse/JGRP-2245
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.8
> Reporter: Sibin Karnavar
> Assignee: Bela Ban
> Fix For: 4.0.10
>
>
> 1) In AWS cloud environments, the IP address will be different when a node crashes and a new cluster node gets recreated.
> 2) In this situation, JGroups is not clearing logical_addr_cache, and it gets confused when we restart the cluster nodes.
> 3) logical_addr_cache_max_size and eviction did not work, because the cache gets updated again from the ping and is never marked as removable.
> I think the issue is that
> the handleView method always rewrites the entire cache to the DB on a view change. So even if we clear the table with the help of the above-mentioned flags (remove_all_data_on_view_change && remove_old_coords_on_view_change), it gets rewritten to the table.
> {code:java}
> // remove all files which are not from the current members
> protected void handleView(View new_view, View old_view, boolean coord_changed) {
>     if(is_coord) {
>         if(coord_changed) {
>             if(remove_all_data_on_view_change)
>                 removeAll(cluster_name);
>             else if(remove_old_coords_on_view_change) {
>                 Address old_coord=old_view != null? old_view.getCreator() : null;
>                 if(old_coord != null)
>                     remove(cluster_name, old_coord);
>             }
>         }
>         if(coord_changed || View.diff(old_view, new_view)[1].length > 0) {
>             writeAll();
>             if(remove_all_data_on_view_change || remove_old_coords_on_view_change)
>                 startInfoWriter();
>         }
>     }
>     else if(coord_changed) // I'm no longer the coordinator
>         remove(cluster_name, local_addr);
> }
> {code}
> 4) Because of the crashed members (non-existing IP addresses), we get a lot of socket timeouts:
> sendToMembers of TP tries to send messages to old, crashed members and writes error logs during startup.
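To make the requested behaviour concrete, here is a rough, hypothetical sketch of a pruning step in the style of the quoted handleView() code: before rewriting, drop every stored address that is no longer in the new view. The helper readAllStoredAddresses() is an assumption for illustration, and this is not the actual JGroups fix.
{code:java}
// Hypothetical sketch, written in the style of the quoted handleView() code.
// Not the actual JGroups fix: readAllStoredAddresses() is an assumed helper.
protected void removeStaleEntries(View new_view) {
    for(Address stored : readAllStoredAddresses(cluster_name)) {
        if(!new_view.containsMember(stored))   // member crashed or left
            remove(cluster_name, stored);      // clear its row before writeAll()
    }
}
{code}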
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (JGRP-2241) Bundler using direct memory as buffer
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2241?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-2241:
---------------------------
Fix Version/s: 4.0.11
(was: 4.0.10)
> Bundler using direct memory as buffer
> -------------------------------------
>
> Key: JGRP-2241
> URL: https://issues.jboss.org/browse/JGRP-2241
> Project: JGroups
> Issue Type: Feature Request
> Reporter: Bela Ban
> Assignee: Bela Ban
> Fix For: 4.0.11
>
>
> The current bundler impls use a byte[] array in an output stream as the buffer into which messages are written.
> If we instead used a direct byte buffer (off-heap), the JDK would not create an additional copy but simply pass the contents to the corresponding socket. This would reduce the memory allocation rate.
> Direct buffers could also be used on the receiver side; investigate.
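A small, self-contained sketch of the idea (not the bundler code itself): writing into a direct ByteBuffer and handing it to the channel lets the JDK pass the data to the socket without first copying a heap byte[] into native memory. Port and payload below are arbitrary example values.
{code:java}
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.nio.charset.StandardCharsets;

// Illustration of the direct-buffer send path; not JGroups code.
public class DirectBufferSendSketch {
    public static void main(String[] args) throws Exception {
        try (DatagramChannel ch = DatagramChannel.open()) {
            // Off-heap buffer: no extra heap-to-native copy on send.
            ByteBuffer buf = ByteBuffer.allocateDirect(65535);
            buf.put("bundled messages would be serialized here".getBytes(StandardCharsets.UTF_8));
            buf.flip();
            ch.send(buf, new InetSocketAddress("127.0.0.1", 7800));
            buf.clear(); // the same buffer can be reused for the next bundle
        }
    }
}
{code}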
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (JGRP-2248) SASL protocol should not pass null callbackhandlers to Factories
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2248?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-2248:
--------------------------------
I plan to release 4.0.10.Final within 1-2 weeks, so if this issue makes that cut, I won't mind...
> SASL protocol should not pass null callbackhandlers to Factories
> ----------------------------------------------------------------
>
> Key: JGRP-2248
> URL: https://issues.jboss.org/browse/JGRP-2248
> Project: JGroups
> Issue Type: Enhancement
> Affects Versions: 4.0.9
> Reporter: Ryan Emerson
> Assignee: Ryan Emerson
> Fix For: 4.0.10
>
>
> Currently it's possible for the client and server callback handlers to be null when passed to the respective Sasl Factory. When utilising the Elytron Sasl factories this results in an IllegalArgumentException being thrown. To avoid this we should ensure that the callback handlers are non-null when passed to the factory implementations, e.g. pass a NoOpCallbackHandler.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (JGRP-2245) JGroup JDBC_PING is not clearing the crashed members
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2245?page=com.atlassian.jira.plugin.... ]
Bela Ban edited comment on JGRP-2245 at 1/31/18 10:30 AM:
----------------------------------------------------------
Will you be able to make JGroups write only its own address to the DB, or just the new view or the latest?
was (Author: sibin.karnavar):
Will you be able to make J GROUP writes only its own address to DB or just the new view or latest
> JGroup JDBC_PING is not clearing the crashed members
> ----------------------------------------------------
>
> Key: JGRP-2245
> URL: https://issues.jboss.org/browse/JGRP-2245
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.8
> Reporter: Sibin Karnavar
> Assignee: Bela Ban
> Fix For: 4.0.10
>
>
> 1) In AWS cloud environments, the IP address will be different when a node crashes and a new cluster node gets recreated.
> 2) In this situation, JGroups is not clearing logical_addr_cache, and it gets confused when we restart the cluster nodes.
> 3) logical_addr_cache_max_size and eviction did not work, because the cache gets updated again from the ping and is never marked as removable.
> I think the issue is that
> the handleView method always rewrites the entire cache to the DB on a view change. So even if we clear the table with the help of the above-mentioned flags (remove_all_data_on_view_change && remove_old_coords_on_view_change), it gets rewritten to the table.
> {code:java}
> // remove all files which are not from the current members
> protected void handleView(View new_view, View old_view, boolean coord_changed) {
>     if(is_coord) {
>         if(coord_changed) {
>             if(remove_all_data_on_view_change)
>                 removeAll(cluster_name);
>             else if(remove_old_coords_on_view_change) {
>                 Address old_coord=old_view != null? old_view.getCreator() : null;
>                 if(old_coord != null)
>                     remove(cluster_name, old_coord);
>             }
>         }
>         if(coord_changed || View.diff(old_view, new_view)[1].length > 0) {
>             writeAll();
>             if(remove_all_data_on_view_change || remove_old_coords_on_view_change)
>                 startInfoWriter();
>         }
>     }
>     else if(coord_changed) // I'm no longer the coordinator
>         remove(cluster_name, local_addr);
> }
> {code}
> 4) Because of the crashed members (non-existing IP addresses), we get a lot of socket timeouts:
> sendToMembers of TP tries to send messages to old, crashed members and writes error logs during startup.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (JGRP-2245) JGroup JDBC_PING is not clearing the crashed members
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2245?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-2245:
--------------------------------
Let me know if this works! You need 4.0.10 (not yet released) to test, or a build of JGroups master.
> JGroup JDBC_PING is not clearing the crashed members
> ----------------------------------------------------
>
> Key: JGRP-2245
> URL: https://issues.jboss.org/browse/JGRP-2245
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.8
> Reporter: Sibin Karnavar
> Assignee: Bela Ban
> Fix For: 4.0.10
>
>
> 1) In AWS cloud environments, the IP address will be different when a node crashes and a new cluster node gets recreated.
> 2) In this situation, JGroups is not clearing logical_addr_cache, and it gets confused when we restart the cluster nodes.
> 3) logical_addr_cache_max_size and eviction did not work, because the cache gets updated again from the ping and is never marked as removable.
> I think the issue is that
> the handleView method always rewrites the entire cache to the DB on a view change. So even if we clear the table with the help of the above-mentioned flags (remove_all_data_on_view_change && remove_old_coords_on_view_change), it gets rewritten to the table.
> {code:java}
> // remove all files which are not from the current members
> protected void handleView(View new_view, View old_view, boolean coord_changed) {
>     if(is_coord) {
>         if(coord_changed) {
>             if(remove_all_data_on_view_change)
>                 removeAll(cluster_name);
>             else if(remove_old_coords_on_view_change) {
>                 Address old_coord=old_view != null? old_view.getCreator() : null;
>                 if(old_coord != null)
>                     remove(cluster_name, old_coord);
>             }
>         }
>         if(coord_changed || View.diff(old_view, new_view)[1].length > 0) {
>             writeAll();
>             if(remove_all_data_on_view_change || remove_old_coords_on_view_change)
>                 startInfoWriter();
>         }
>     }
>     else if(coord_changed) // I'm no longer the coordinator
>         remove(cluster_name, local_addr);
> }
> {code}
> 4) Because of the crashed members (non-existing IP addresses), we get a lot of socket timeouts:
> sendToMembers of TP tries to send messages to old, crashed members and writes error logs during startup.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (JGRP-2245) JGroup JDBC_PING is not clearing the crashed members
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2245?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-2245:
---------------------------
Priority: Major (was: Critical)
> JGroup JDBC_PING is not clearing the crashed members
> ----------------------------------------------------
>
> Key: JGRP-2245
> URL: https://issues.jboss.org/browse/JGRP-2245
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.8
> Reporter: Sibin Karnavar
> Assignee: Bela Ban
> Fix For: 4.0.10
>
>
> 1) In AWS cloud environments, the IP address will be different when a node crashes and a new cluster node gets recreated.
> 2) In this situation, JGroups is not clearing logical_addr_cache, and it gets confused when we restart the cluster nodes.
> 3) logical_addr_cache_max_size and eviction did not work, because the cache gets updated again from the ping and is never marked as removable.
> I think the issue is that
> the handleView method always rewrites the entire cache to the DB on a view change. So even if we clear the table with the help of the above-mentioned flags (remove_all_data_on_view_change && remove_old_coords_on_view_change), it gets rewritten to the table.
> {code:java}
> // remove all files which are not from the current members
> protected void handleView(View new_view, View old_view, boolean coord_changed) {
>     if(is_coord) {
>         if(coord_changed) {
>             if(remove_all_data_on_view_change)
>                 removeAll(cluster_name);
>             else if(remove_old_coords_on_view_change) {
>                 Address old_coord=old_view != null? old_view.getCreator() : null;
>                 if(old_coord != null)
>                     remove(cluster_name, old_coord);
>             }
>         }
>         if(coord_changed || View.diff(old_view, new_view)[1].length > 0) {
>             writeAll();
>             if(remove_all_data_on_view_change || remove_old_coords_on_view_change)
>                 startInfoWriter();
>         }
>     }
>     else if(coord_changed) // I'm no longer the coordinator
>         remove(cluster_name, local_addr);
> }
> {code}
> 4) Because of the crashed members (non-existing IP addresses), we get a lot of socket timeouts:
> sendToMembers of TP tries to send messages to old, crashed members and writes error logs during startup.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (JGRP-2245) JGroup JDBC_PING is not clearing the crashed members
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2245?page=com.atlassian.jira.plugin.... ]
Bela Ban resolved JGRP-2245.
----------------------------
Resolution: Duplicate Issue
Fixed by JGRP-2232
> JGroup JDBC_PING is not clearing the crashed members
> ----------------------------------------------------
>
> Key: JGRP-2245
> URL: https://issues.jboss.org/browse/JGRP-2245
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.8
> Reporter: Sibin Karnavar
> Assignee: Bela Ban
> Priority: Critical
> Fix For: 4.0.10
>
>
> 1) In AWS cloud environments, the IP address will be different when a node crashes and a new cluster node gets recreated.
> 2) In this situation, JGroups is not clearing logical_addr_cache, and it gets confused when we restart the cluster nodes.
> 3) logical_addr_cache_max_size and eviction did not work, because the cache gets updated again from the ping and is never marked as removable.
> I think the issue is that
> the handleView method always rewrites the entire cache to the DB on a view change. So even if we clear the table with the help of the above-mentioned flags (remove_all_data_on_view_change && remove_old_coords_on_view_change), it gets rewritten to the table.
> {code:java}
> // remove all files which are not from the current members
> protected void handleView(View new_view, View old_view, boolean coord_changed) {
>     if(is_coord) {
>         if(coord_changed) {
>             if(remove_all_data_on_view_change)
>                 removeAll(cluster_name);
>             else if(remove_old_coords_on_view_change) {
>                 Address old_coord=old_view != null? old_view.getCreator() : null;
>                 if(old_coord != null)
>                     remove(cluster_name, old_coord);
>             }
>         }
>         if(coord_changed || View.diff(old_view, new_view)[1].length > 0) {
>             writeAll();
>             if(remove_all_data_on_view_change || remove_old_coords_on_view_change)
>                 startInfoWriter();
>         }
>     }
>     else if(coord_changed) // I'm no longer the coordinator
>         remove(cluster_name, local_addr);
> }
> {code}
> 4) Because of the crashed members (non-existing IP addresses), we get a lot of socket timeouts:
> sendToMembers of TP tries to send messages to old, crashed members and writes error logs during startup.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (DROOLS-2286) [DMN engine] Java Object in DMNContext not working properly with Filter function.
by Edson Tirelli (JIRA)
[ https://issues.jboss.org/browse/DROOLS-2286?page=com.atlassian.jira.plugi... ]
Edson Tirelli reassigned DROOLS-2286:
-------------------------------------
Assignee: Matteo Mortari (was: Edson Tirelli)
> [DMN engine] Java Object in DMNContext not working properly with Filter function.
> ---------------------------------------------------------------------------------
>
> Key: DROOLS-2286
> URL: https://issues.jboss.org/browse/DROOLS-2286
> Project: Drools
> Issue Type: Bug
> Components: dmn engine
> Affects Versions: 7.4.1.Final, 7.5.0.Final
> Reporter: Thomas Mantegazzi
> Assignee: Matteo Mortari
> Attachments: FilterJohns.dmn, FilterOnObjectListBug.java
>
>
> When trying to evaluate a FEEL expression like:
> {code:java}
> personList[name = "John"]
> {code}
> by inserting a _Java Object_ into the _DMNContext_, the _DMN engine_ doesn't seem to be able to fetch the _name_ field from the object. This doesn't happen if we insert a Map instead of _Java Objects_.
> While trying to debug, it seems that the problem happens in the following method of _FilterExpressionNode_.
> {code:java}
> private void evaluateExpressionInContext(EvaluationContext ctx, List results, Object v) {
>     try {
>         ctx.enterFrame();
>         // handle it as a predicate
>         ctx.setValue( "item", v );
>         // if it is a Map, need to add all string keys as variables in the context
>         if( v instanceof Map ) {
>             Set<Map.Entry> set = ((Map) v).entrySet();
>             for( Map.Entry ce : set ) {
>                 if( ce.getKey() instanceof String ) {
>                     ctx.setValue( (String) ce.getKey(), ce.getValue() );
>                 }
>             }
>         }
>         Object r = this.filter.evaluate( ctx );
>         if( r instanceof Boolean && ((Boolean)r) == Boolean.TRUE ) {
>             results.add( v );
>         }
>     } finally {
>         ctx.exitFrame();
>     }
> }
> {code}
> Also attached Java and DMN test files that showcase the issue.
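For reference, a small sketch of the two ways of populating the context that the report contrasts. The Person class is a placeholder, and the attached FilterOnObjectListBug.java remains the authoritative reproducer; this only shows the Map vs. POJO difference in the DMNContext.
{code:java}
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import org.kie.dmn.api.core.DMNContext;
import org.kie.dmn.api.core.DMNRuntime;

// Sketch of the Map-vs-POJO difference described above; see the attached files for the real test.
public class FilterContextSketch {

    public static class Person {                          // placeholder POJO
        private final String name;
        public Person(String name) { this.name = name; }
        public String getName()    { return name; }
    }

    public static DMNContext buildContext(DMNRuntime runtime, boolean useMaps) {
        DMNContext ctx = runtime.newContext();
        if (useMaps) {
            // Maps: personList[name = "John"] filters as expected.
            Map<String, Object> john = new HashMap<>();
            john.put("name", "John");
            ctx.set("personList", Arrays.asList(john));
        } else {
            // POJOs: the filter cannot resolve the 'name' field (the reported behaviour).
            ctx.set("personList", Arrays.asList(new Person("John")));
        }
        return ctx;
    }
}
{code}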
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)