[JBoss JIRA] (JGRP-2504) Poor throughput over high latency TCP connection when recv_buf_size is configured
by Bela Ban (Jira)
[ https://issues.redhat.com/browse/JGRP-2504?page=com.atlassian.jira.plugin... ]
Bela Ban resolved JGRP-2504.
----------------------------
Resolution: Done
Please test in your environment and re-open if the problem persists.
> Poor throughput over high latency TCP connection when recv_buf_size is configured
> ---------------------------------------------------------------------------------
>
> Key: JGRP-2504
> URL: https://issues.redhat.com/browse/JGRP-2504
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 5.0.0.Final
> Reporter: Andrew Skalski
> Assignee: Bela Ban
> Priority: Minor
> Fix For: 5.1
>
> Attachments: SpeedTest.java, bla5.java, bla6.java, bla7.java
>
>
> I recently finished troubleshooting a unidirectional throughput bottleneck involving a JGroups application (Infinispan) communicating over a high-latency (~45 milliseconds) TCP connection.
> The root cause was JGroups improperly configuring the receive/send buffers on the listening socket. According to the tcp(7) man page:
> {code:java}
> On individual connections, the socket buffer size must be set prior to
> the listen(2) or connect(2) calls in order to have it take effect.
> {code}
> However, JGroups does not set the buffer size on the listening side until after accept().
> The result is poor throughput when sending data from the client (connecting side) to the server (listening side). Because the issue is a too-small TCP receive window, throughput is ultimately latency-bound.
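The ordering the man page requires can be sketched with plain JDK sockets (a minimal demo of the idea, not JGroups code; the buffer sizes are illustrative):

```java
import java.net.InetSocketAddress;
import java.net.ServerSocket;

class RecvBufDemo {
    // Create an unbound ServerSocket, size its receive buffer, then bind.
    // Per tcp(7), the size must be set before listen()/connect() so the
    // TCP window scale is negotiated from it; sockets returned by accept()
    // inherit it (inheritance is implementation-dependent).
    static int bindWithRecvBuf(int size) throws Exception {
        try (ServerSocket srv = new ServerSocket()) { // unbound
            srv.setReceiveBufferSize(size);           // BEFORE bind()
            srv.bind(new InetSocketAddress(0));       // ephemeral port for the demo
            return srv.getReceiveBufferSize();        // effective value (OS may clamp)
        }
    }

    public static void main(String[] args) throws Exception {
        // A ~45 ms RTT at 1 Gbit/s needs a window of roughly 0.045 * 125 MB/s ≈ 5.6 MB
        System.out.println("effective recv buf: " + bindWithRecvBuf(6_000_000));
    }
}
```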
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
4 years, 2 months
[JBoss JIRA] (WFLY-13880) resource adapter logs plaintext JMS password at Warning level on connection error
by Brian Stansberry (Jira)
[ https://issues.redhat.com/browse/WFLY-13880?page=com.atlassian.jira.plugi... ]
Brian Stansberry updated WFLY-13880:
------------------------------------
Fix Version/s: 21.0.0.Final
> resource adapter logs plaintext JMS password at Warning level on connection error
> ---------------------------------------------------------------------------------
>
> Key: WFLY-13880
> URL: https://issues.redhat.com/browse/WFLY-13880
> Project: WildFly
> Issue Type: Bug
> Components: JMS
> Affects Versions: 20.0.0.Final
> Reporter: Jiri Danek
> Assignee: Emmanuel Hugonnet
> Priority: Minor
> Labels: resource-adapter
> Fix For: 21.0.0.Final
>
>
> # start jms broker (AMQ 7 Broker, ActiveMQ Artemis based)
> # start wildfly
> # connect to the endpoint that causes JMS messages to be sent
> # kill the broker
> # observe the following message in the log, containing {{user=example, pwd=example}}
> {noformat}
> 14:24:51,513 WARN [org.jboss.resource.adapter.jms.JmsManagedConnection] (QpidJMS Connection Executor: ID:a340d7e7-a228-4730-b8ab-3bc7a1f66b41:1) Handling jms exception failure: JmsManagedConnection{mcf=org.jboss.resource.adapter.jms.JmsManagedConnectionFactory@1f572370, info=JmsConnectionRequestInfo{userName=example, password=example, clientID=null, transacted=false, acknowledgeMode=1, type=3}, user=example, pwd=example, isSetUp=true, isDestroyed=false, lock=org.jboss.resource.adapter.jms.ReentrantLock@317e1235[Unlocked], con=org.jboss.resource.adapter.jms.JmsConnectionSession@4b199ffd, session=class org.apache.qpid.jms.JmsSession@1532118793, xaSession=null, xaResource=null, xaTransacted=false, context=org.apache.qpid.jms.JmsContext@3fb9fa6a, xaContext=null}: org.apache.qpid.jms.exceptions.JmsConnectionFailedException: The JMS connection has failed: Transport connection remotely closed.
> at deployment.resource-adapter.rar//org.apache.qpid.jms.provider.exceptions.ProviderFailedException.toJMSException(ProviderFailedException.java:35)
> at deployment.resource-adapter.rar//org.apache.qpid.jms.provider.exceptions.ProviderFailedException.toJMSException(ProviderFailedException.java:21)
> at deployment.resource-adapter.rar//org.apache.qpid.jms.exceptions.JmsExceptionSupport.create(JmsExceptionSupport.java:80)
> at deployment.resource-adapter.rar//org.apache.qpid.jms.exceptions.JmsExceptionSupport.create(JmsExceptionSupport.java:112)
> at deployment.resource-adapter.rar//org.apache.qpid.jms.JmsConnection.onAsyncException(JmsConnection.java:1546)
> at deployment.resource-adapter.rar//org.apache.qpid.jms.JmsConnection.onProviderException(JmsConnection.java:1530)
> at deployment.resource-adapter.rar//org.apache.qpid.jms.JmsConnection.onConnectionFailure(JmsConnection.java:1374)
> at deployment.resource-adapter.rar//org.apache.qpid.jms.provider.amqp.AmqpProvider.fireProviderException(AmqpProvider.java:1150)
> at deployment.resource-adapter.rar//org.apache.qpid.jms.provider.amqp.AmqpProvider.lambda$onTransportClosed$18(AmqpProvider.java:914)
> at deployment.resource-adapter.rar//io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
> at deployment.resource-adapter.rar//io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
> at deployment.resource-adapter.rar//io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384)
> at deployment.resource-adapter.rar//io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
> at deployment.resource-adapter.rar//io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
> at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: org.apache.qpid.jms.provider.exceptions.ProviderFailedException: Transport connection remotely closed.
> ... 7 more
> {noformat}
> I am not sure how important this is. In my experience, people dislike having their passwords spilled out in plaintext. On the other hand, I'd expect that a report about this already exists somewhere, yet I was unable to find one. So maybe it is not a production issue for anyone.
> Originally reported at https://github.com/amqphub/amqp-10-resource-adapter/issues/13
> The log message comes from https://github.com/jms-ra/generic-jms-ra/blob/ece9e15843136023c26d3d0bd32...
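A common mitigation (sketched here with a hypothetical class; this is not the generic-jms-ra source) is to redact the credential in {{toString()}}, so connection-state dumps emitted at WARN level never carry the password:

```java
// Hypothetical sketch: mirrors the shape of the logged JmsConnectionRequestInfo,
// but masks the password instead of echoing it into the log message.
class RedactingConnectionRequestInfo {
    private final String userName;
    private final String password;

    RedactingConnectionRequestInfo(String userName, String password) {
        this.userName = userName;
        this.password = password;
    }

    @Override
    public String toString() {
        // Never include the raw credential; log a fixed mask instead.
        return "RedactingConnectionRequestInfo{userName=" + userName + ", password=****}";
    }
}
```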
[JBoss JIRA] (JGRP-2504) Poor throughput over high latency TCP connection when recv_buf_size is configured
by Bela Ban (Jira)
[ https://issues.redhat.com/browse/JGRP-2504?page=com.atlassian.jira.plugin... ]
Bela Ban edited comment on JGRP-2504 at 9/30/20 9:50 AM:
---------------------------------------------------------
I made the following changes:
* For TCP, I set the receive buffer size on {{ServerSocket}} _before_ calling {{bind()}}. This should be copied to the sockets returned by {{accept()}}; however, I read that this behavior is implementation-dependent
* Ditto for TCP_NIO2 and {{ServerSocketChannel}}
* For {{DatagramSocket}} and {{MulticastSocket}}, there is no need to do this, as receive and send buffer sizes can be set after calling {{bind()}}.
Unfortunately, I cannot reproduce this, as I don't have 2 boxes in different geos. Please verify this fix works for you and re-open if needed.
I was also under the impression that setting buffer sizes on TCP sockets is only a hint, which the OS can choose to ignore. As TCP adapts the receive-window size dynamically, I'm also a bit surprised that this didn't happen in your case. Perhaps the receive buffer size is the *max size* of the TCP receive window...?
was (Author: belaban):
I made the following changes:
* For TCP, I set the receive buffer size on {{ServerSocket}} _before_ calling {{bind()}}. This should be copied to the sockets received by {{accept()}}. However, I read that this behavior is implementation-dependent
* Ditto for TCP_NIO2 and {{ServerSocketChannel}}
* {{For {{DatagramSocket}}}} and {{MulticastSocket}}, there is no need to do this, as receive- and send buffer sizes can be set after calling {{bind()}}.
Unfortunately, I cannot reproduce this, as I don't have 2 boxes in different geos. Please verify this fix works for you and re-open if needed.
I was also under the impression that setting buffer sizes on TCP sockets is only an indication and the OS can choose to ignore this. As TCP adapts the receive-window size dynamically, I'm also a bit surprised that this didn't happen in your case? Perhaps the receive buffer size is the *max size* of the TCP receive window...?
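The before-bind ordering for TCP_NIO2 ({{ServerSocketChannel}}) described in the comment above can be sketched as follows (a JDK-only demo of the idea, not the actual JGroups patch):

```java
import java.net.InetSocketAddress;
import java.net.StandardSocketOptions;
import java.nio.channels.ServerSocketChannel;

class NioRecvBufDemo {
    // Set SO_RCVBUF on the listening channel before bind(), so accepted
    // channels can inherit a large receive window (implementation-dependent).
    static int bindWithRecvBuf(int size) throws Exception {
        try (ServerSocketChannel ch = ServerSocketChannel.open()) {
            ch.setOption(StandardSocketOptions.SO_RCVBUF, size); // BEFORE bind()
            ch.bind(new InetSocketAddress(0));                   // ephemeral port for the demo
            return ch.getOption(StandardSocketOptions.SO_RCVBUF); // effective value
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("effective recv buf: " + bindWithRecvBuf(1_000_000));
    }
}
```

On Linux, the kernel doubles the requested value for bookkeeping and clamps it to {{net.core.rmem_max}}, which is one reason the effective value can differ from the request.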
[JBoss JIRA] (JGRP-2504) Poor throughput over high latency TCP connection when recv_buf_size is configured
by Bela Ban (Jira)
[ https://issues.redhat.com/browse/JGRP-2504?page=com.atlassian.jira.plugin... ]
Bela Ban edited comment on JGRP-2504 at 9/30/20 9:50 AM:
---------------------------------------------------------
I made the following changes:
* For TCP, I set the receive buffer size on {{ServerSocket}} _before_ calling {{bind()}}. This should be copied to the sockets received by {{accept()}}. However, I read that this behavior is implementation-dependent
* Ditto for TCP_NIO2 and {{ServerSocketChannel}}
* {{For {{DatagramSocket}}}} and {{MulticastSocket}}, there is no need to do this, as receive- and send buffer sizes can be set after calling {{bind()}}.
Unfortunately, I cannot reproduce this, as I don't have 2 boxes in different geos. Please verify this fix works for you and re-open if needed.
I was also under the impression that setting buffer sizes on TCP sockets is only an indication and the OS can choose to ignore this. As TCP adapts the receive-window size dynamically, I'm also a bit surprised that this didn't happen in your case? Perhaps the receive buffer size is the *max size* of the TCP receive window...?
was (Author: belaban):
I made the following changes:
* For TCP, I set the receive buffer size on {{ServerSocket}} _before_ calling {{bind()}}. This should be copied to the sockets received by {{accept()}}. However, I read that this behavior is implementation-dependent
* Ditto for TCP_NIO2 and {{ServerSocketChannel}}
* {{For {{DatagramSocket}}}} and {{MulticastSocket}}, there is no need to do this, as receive- and send buffer sizes can be set after calling {{bind()}}.
Unfortunately, I cannot reproduce this, as I don't have 2 boxes in different geos. Please verify this fix works for you and re-open if needed.
I was also under the impression that setting buffer sizes on TCP sockets is only an indication and the OS can choose to ignore this. As TCP adapts the receive-window size dynamically, I'm also a bit surprised that this didn't happen in your case? Perhaps the receive buffer size is the *max size* of the TCP receive window...?
[JBoss JIRA] (JGRP-2504) Poor throughput over high latency TCP connection when recv_buf_size is configured
by Bela Ban (Jira)
[ https://issues.redhat.com/browse/JGRP-2504?page=com.atlassian.jira.plugin... ]
Bela Ban edited comment on JGRP-2504 at 9/30/20 9:49 AM:
---------------------------------------------------------
I made the following changes:
* For TCP, I set the receive buffer size on {{ServerSocket}} _before_ calling {{bind()}}. This should be copied to the sockets received by {{accept()}}. However, I read that this behavior is implementation-dependent
* Ditto for TCP_NIO2 and {{ServerSocketChannel}}
* {{For {{DatagramSocket}}}} and {{MulticastSocket}}, there is no need to do this, as receive- and send buffer sizes can be set after calling {{bind()}}.
Unfortunately, I cannot reproduce this, as I don't have 2 boxes in different geos. Please verify this fix works for you and re-open if needed.
I was also under the impression that setting buffer sizes on TCP sockets is only an indication and the OS can choose to ignore this. As TCP adapts the receive-window size dynamically, I'm also a bit surprised that this didn't happen in your case? Perhaps the receive buffer size is the *max size* of the TCP receive window...?
was (Author: belaban):
I made the following changes:
* For TCP, I set the receive buffer size on {{ServerSocket}} _before_ calling {{bind()}}. This should be copied to the sockets received by {{accept()}}. However, I read that this behavior is implementation-dependent
* Ditto for TCP_NIO2 and {{ServerSocketChannel}}
* {{For }}{{DatagramSocket}} and {{MulticastSocket}}, there is no need to do this, as receive- and send buffer sizes can be set after calling {{bind()}}.
Unfortunately, I cannot reproduce this, as I don't have 2 boxes in different geos. Please verify this fix works for you and re-open if needed.
I was also under the impression that setting buffer sizes on TCP sockets is only an indication and the OS can choose to ignore this. As TCP adapts the receive-window size dynamically, I'm also a bit surprised that this didn't happen in your case? Perhaps the receive buffer size is the *max size* of the TCP receive window...?
[JBoss JIRA] (JGRP-2504) Poor throughput over high latency TCP connection when recv_buf_size is configured
by Bela Ban (Jira)
[ https://issues.redhat.com/browse/JGRP-2504?page=com.atlassian.jira.plugin... ]
Bela Ban commented on JGRP-2504:
--------------------------------
I made the following changes:
* For TCP, I set the receive buffer size on {{ServerSocket}} _before_ calling {{bind()}}. This should be copied to the sockets received by {{accept()}}. However, I read that this behavior is implementation-dependent
* Ditto for TCP_NIO2 and {{ServerSocketChannel}}
* {{For }}{{DatagramSocket}} and {{MulticastSocket}}, there is no need to do this, as receive- and send buffer sizes can be set after calling {{bind()}}.
Unfortunately, I cannot reproduce this, as I don't have 2 boxes in different geos. Please verify this fix works for you and re-open if needed.
I was also under the impression that setting buffer sizes on TCP sockets is only an indication and the OS can choose to ignore this. As TCP adapts the receive-window size dynamically, I'm also a bit surprised that this didn't happen in your case? Perhaps the receive buffer size is the *max size* of the TCP receive window...?
[JBoss JIRA] (WFCORE-5134) AuditLogToTLSSyslogTestCase stucks with oracle JDK 11
by Sonia Zaldana (Jira)
[ https://issues.redhat.com/browse/WFCORE-5134?page=com.atlassian.jira.plug... ]
Sonia Zaldana commented on WFCORE-5134:
---------------------------------------
Hi [~ropalka], no it doesn't time out. Is there a possibility the regression was introduced in recent JDK builds in that case?
> AuditLogToTLSSyslogTestCase stucks with oracle JDK 11
> -----------------------------------------------------
>
> Key: WFCORE-5134
> URL: https://issues.redhat.com/browse/WFCORE-5134
> Project: WildFly Core
> Issue Type: Bug
> Components: Security
> Reporter: Jean Francois Denise
> Priority: Major
>
> I am using Oracle JDK 11; the test gets stuck in teardown while attempting to execute the server tearDown action. I noticed that this test is not run on JDK 12 due to a deadlock. The observed problem smells like a deadlock, although jstack doesn't report any.
> JDK:
> java 11.0.8 2020-07-14 LTS
> Java(TM) SE Runtime Environment 18.9 (build 11.0.8+10-LTS)
> Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.8+10-LTS, mixed mode)
>
[JBoss JIRA] (WFCORE-5134) AuditLogToTLSSyslogTestCase stucks with oracle JDK 11
by Richard Opalka (Jira)
[ https://issues.redhat.com/browse/WFCORE-5134?page=com.atlassian.jira.plug... ]
Richard Opalka commented on WFCORE-5134:
----------------------------------------
Hello [~szaldana]. Does it fail due to a deadlock or not (did it time out)? If yes, then there is some probability the regression was reintroduced in recent JDK builds.
[JBoss JIRA] (DROOLS-5683) Collection editor not opening for DRL based scenarios
by Yeser Amer (Jira)
[ https://issues.redhat.com/browse/DROOLS-5683?page=com.atlassian.jira.plug... ]
Yeser Amer edited comment on DROOLS-5683 at 9/30/20 8:20 AM:
-------------------------------------------------------------
Hi [~jstastny], currently BigDecimal and LocalDateTime are not supported. There is a BAPL issue, https://issues.redhat.com/browse/BAPL-1721, which calls for implementing LocalDate support. Can you please close this one and the RHDM issue as duplicates, and update BAPL-1721 to include BigDecimal?
was (Author: yamer):
Hi [~jstastny], currently BigDecimal and LocalDate time are not supported. There is a BAPL https://issues.redhat.com/browse/BAPL-1721 where the it's required to implement LocalDate. Can you please close this one and RHDM as duplicate and update 1721 to include BigDecimal?
> Collection editor not opening for DRL based scenarios
> -----------------------------------------------------
>
> Key: DROOLS-5683
> URL: https://issues.redhat.com/browse/DROOLS-5683
> Project: Drools
> Issue Type: Bug
> Components: Test Scenarios Editor
> Affects Versions: 7.44.0.Final
> Reporter: Jan Stastny
> Assignee: Yeser Amer
> Priority: Major
> Attachments: Screenshot from 2020-09-30 13-42-57.png
>
>
> DRL-based test scenarios throw errors when opening the Collection editor for lists of some datatypes (I didn't do an exhaustive search):
> * BigDecimal
> * LocalDateTime
> An unexpected error appears when opening such a cell for editing.
> {code}
> 2020-09-30 13:44:09,073 ERROR [org.kie.workbench.common.services.backend.logger.GenericErrorLoggerServiceImpl] (default task-13) Error from user: testadmin Error ID: -1734434608 Location: LibraryPerspective|$ProjectScreen,org.kie.workbench.common.screens.messageconsole.MessageConsole,,,Eorg.docks.PlaceHolder?name=testRunnerReportingPanel,Eorg.drools.scenariosimulation.TestTools?scesimeditorid=2,Eorg.drools.scenariosimulation.TestTools?scesimeditorid=3,Eorg.drools.scenariosimulation.TestTools?scesimeditorid=4,Eorg.drools.scenariosimulation.TestTools?scesimeditorid=5,],,,,,AddAssetsScreen,,,ScenarioSimulationEditor?path_uri=default://master@MySpace/test/src/test/resources/com/myspace/test/test2.scesim&file_name=test2.scesim&has_version_support=true Exception: Uncaught exception: (TypeError) : Cannot read property 'j' of null
> {code}
[JBoss JIRA] (DROOLS-5683) Collection editor not opening for DRL based scenarios
by Yeser Amer (Jira)
[ https://issues.redhat.com/browse/DROOLS-5683?page=com.atlassian.jira.plug... ]
Yeser Amer commented on DROOLS-5683:
------------------------------------
Hi [~jstastny], currently BigDecimal and LocalDateTime are not supported. There is a BAPL issue, https://issues.redhat.com/browse/BAPL-1721, which calls for implementing LocalDate support. Can you please close this one and the RHDM issue as duplicates, and update BAPL-1721 to include BigDecimal?