[Red Hat JIRA] (WFCORE-5216) EAP Pod fails to start when env JBOSS_MODULEPATH=${JBOSS_HOME}/modules:${HOME} is set
by Davy Koravand (Jira)
[ https://issues.redhat.com/browse/WFCORE-5216?page=com.atlassian.jira.plug... ]
Davy Koravand commented on WFCORE-5216:
---------------------------------------
I would like to add that using eval instead of exec helps us as well, since we experienced problems with spaces in JAVA_OPTS on Unix machines. I think it is worth patching jboss-cli.sh to use eval.
> EAP Pod fails to start when env JBOSS_MODULEPATH=${JBOSS_HOME}/modules:${HOME} is set
> -------------------------------------------------------------------------------------
>
> Key: WFCORE-5216
> URL: https://issues.redhat.com/browse/WFCORE-5216
> Project: WildFly Core
> Issue Type: Bug
> Components: Scripts
> Reporter: Ivo Studensky
> Assignee: Ivo Studensky
> Priority: Major
>
> Since JBoss EAP 7.3 for OpenShift, if the environment variable JBOSS_MODULEPATH=${JBOSS_HOME}/modules:${HOME}, which contains bash variables, is set via a DeploymentConfig, the EAP pod fails to start with the following messages:
> {noformat}
> INFO Duration: 347 milliseconds
> ERROR Error applying /tmp/cli-configuration-script-1594252654.cli CLI script.
> org.jboss.modules.ModuleNotFoundException: org.jboss.as.cli
> at org.jboss.modules.ModuleLoader.loadModule(ModuleLoader.java:297)
> at org.jboss.modules.Main.main(Main.java:371)
> {noformat}
> This issue does not occur on the JBoss EAP 7.2 container.
--
This message was sent by Atlassian Jira
(v8.13.1#813001)
[Red Hat JIRA] (JGRP-2503) Extend DataInput & DataOutput to avoid copy of byte[]
by Bela Ban (Jira)
[ https://issues.redhat.com/browse/JGRP-2503?page=com.atlassian.jira.plugin... ]
Bela Ban closed JGRP-2503.
--------------------------
Resolution: Rejected
> Extend DataInput & DataOutput to avoid copy of byte[]
> -----------------------------------------------------
>
> Key: JGRP-2503
> URL: https://issues.redhat.com/browse/JGRP-2503
> Project: JGroups
> Issue Type: Enhancement
> Affects Versions: 5.0.4
> Reporter: Jingqi Xu
> Assignee: Bela Ban
> Priority: Optional
> Fix For: 5.1.3
>
>
> DataInput's void readFully(byte b[]) causes an extra copy when reading bytes from a DataInput. Sometimes business logic needs to read the bytes first and only then process them conditionally (or not at all). Is it possible to support a ByteBuffer API like the following?
> public interface XDataInput extends DataInput {
>     ByteBuffer readBuffer(int n);
> }
> public interface XDataOutput extends DataOutput {
>     void writeBuffer(ByteBuffer value);
> }
> ByteArrayDataInputStream.java:
> @Override
> public ByteBuffer readBuffer(int n) {
>     ByteBuffer r = wrap(this.buf, this.pos, n);
>     this.pos += n;
>     return r;
> }
> public interface Streamable {
>     void writeTo(XDataOutput out) throws IOException;
>     void readFrom(XDataInput in) throws IOException, ClassNotFoundException;
> }
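For reference, a compilable version of the snippet quoted above. This is illustrative only, not JGroups API (the issue was ultimately rejected), and ArrayBackedInput is a hypothetical stand-in for ByteArrayDataInputStream:
{noformat}
import java.io.DataInput;
import java.nio.ByteBuffer;

// The proposed extension: hand out a ByteBuffer view instead of copying
// bytes into a caller-supplied array via readFully().
interface XDataInput extends DataInput {
    ByteBuffer readBuffer(int n);
}

// Hypothetical array-backed reader (a stand-in for ByteArrayDataInputStream)
// showing how such a readBuffer() could avoid the copy.
class ArrayBackedInput {
    private final byte[] buf;
    private int pos;

    ArrayBackedInput(byte[] buf) {
        this.buf = buf;
    }

    // Wraps buf[pos, pos + n) without copying, then advances the position.
    public ByteBuffer readBuffer(int n) {
        ByteBuffer view = ByteBuffer.wrap(buf, pos, n).slice();
        pos += n;
        return view;
    }
}
{noformat}
The returned buffer is only a view over the existing byte[], so the caller can decide later whether (and how) to consume it without paying for a copy up front.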
--
This message was sent by Atlassian Jira
(v8.13.1#813001)
[Red Hat JIRA] (JGRP-2503) Extend DataInput & DataOutput to avoid copy of byte[]
by Bela Ban (Jira)
[ https://issues.redhat.com/browse/JGRP-2503?page=com.atlassian.jira.plugin... ]
Bela Ban commented on JGRP-2503:
--------------------------------
Doesn't work, see my comments in the PR. Closing for now...
> Extend DataInput & DataOutput to avoid copy of byte[]
> -----------------------------------------------------
>
> Key: JGRP-2503
> URL: https://issues.redhat.com/browse/JGRP-2503
> Project: JGroups
> Issue Type: Enhancement
> Affects Versions: 5.0.4
> Reporter: Jingqi Xu
> Assignee: Bela Ban
> Priority: Optional
> Fix For: 5.1.3
>
>
> DataInput's void readFully(byte b[]) causes an extra copy when reading bytes from a DataInput. Sometimes business logic needs to read the bytes first and only then process them conditionally (or not at all). Is it possible to support a ByteBuffer API like the following?
> public interface XDataInput extends DataInput {
>     ByteBuffer readBuffer(int n);
> }
> public interface XDataOutput extends DataOutput {
>     void writeBuffer(ByteBuffer value);
> }
> ByteArrayDataInputStream.java:
> @Override
> public ByteBuffer readBuffer(int n) {
>     ByteBuffer r = wrap(this.buf, this.pos, n);
>     this.pos += n;
>     return r;
> }
> public interface Streamable {
>     void writeTo(XDataOutput out) throws IOException;
>     void readFrom(XDataInput in) throws IOException, ClassNotFoundException;
> }
--
This message was sent by Atlassian Jira
(v8.13.1#813001)
[Red Hat JIRA] (JGRP-2503) Extend DataInput & DataOutput to avoid copy of byte[]
by Bela Ban (Jira)
[ https://issues.redhat.com/browse/JGRP-2503?page=com.atlassian.jira.plugin... ]
Bela Ban updated JGRP-2503:
---------------------------
Fix Version/s: 5.1.3
(was: 5.1)
Issue Type: Enhancement (was: Feature Request)
> Extend DataInput & DataOutput to avoid copy of byte[]
> -----------------------------------------------------
>
> Key: JGRP-2503
> URL: https://issues.redhat.com/browse/JGRP-2503
> Project: JGroups
> Issue Type: Enhancement
> Affects Versions: 5.0.4
> Reporter: Jingqi Xu
> Assignee: Bela Ban
> Priority: Optional
> Fix For: 5.1.3
>
>
> DataInput's void readFully(byte b[]) causes an extra copy when reading bytes from a DataInput. Sometimes business logic needs to read the bytes first and only then process them conditionally (or not at all). Is it possible to support a ByteBuffer API like the following?
> public interface XDataInput extends DataInput {
>     ByteBuffer readBuffer(int n);
> }
> public interface XDataOutput extends DataOutput {
>     void writeBuffer(ByteBuffer value);
> }
> ByteArrayDataInputStream.java:
> @Override
> public ByteBuffer readBuffer(int n) {
>     ByteBuffer r = wrap(this.buf, this.pos, n);
>     this.pos += n;
>     return r;
> }
> public interface Streamable {
>     void writeTo(XDataOutput out) throws IOException;
>     void readFrom(XDataInput in) throws IOException, ClassNotFoundException;
> }
--
This message was sent by Atlassian Jira
(v8.13.1#813001)
[Red Hat JIRA] (DROOLS-5922) FromNodes are not correctly shared in executable model
by Mario Fusco (Jira)
Mario Fusco created DROOLS-5922:
-----------------------------------
Summary: FromNodes are not correctly shared in executable model
Key: DROOLS-5922
URL: https://issues.redhat.com/browse/DROOLS-5922
Project: Drools
Issue Type: Bug
Reporter: Mario Fusco
Assignee: Mario Fusco
The utility class FunctionUtils generates a generic FunctionN starting from functions with a fixed arity, but in doing so the fingerprint of the original function gets lost, which prevents correct comparison of the lambdas and sharing of the nodes containing them. The effect is visible, for instance, in FromTest.testFromSharingWithAccumulate: when run with the executable model there is no node sharing as expected (modify the test to check for actual node sharing and it fails).
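To make the fingerprint problem concrete, here is a small standalone sketch (not Drools code; asFunctionN is a hypothetical stand-in for the FunctionUtils conversion). Once the fixed-arity lambda is wrapped in a generic adapter, the implementation method recovered from the serialized lambda points at the adapter rather than at the original lambda, so two structurally identical constraints no longer share a fingerprint and their nodes cannot be shared:
{noformat}
import java.io.Serializable;
import java.lang.invoke.SerializedLambda;
import java.lang.reflect.Method;
import java.util.function.BiFunction;
import java.util.function.Function;

public class LambdaFingerprint {

    interface Bi<A, B, R> extends BiFunction<A, B, R>, Serializable {}
    interface Fn<T, R> extends Function<T, R>, Serializable {}

    // Hypothetical stand-in for the FunctionUtils conversion: adapt a 2-arg
    // function to a generic Object[] -> Object function. The returned lambda is
    // a new synthetic method; the original survives only as a captured argument.
    static Fn<Object[], Object> asFunctionN(Bi<Object, Object, Object> f) {
        return args -> f.apply(args[0], args[1]);
    }

    // Recover the implementing method of a serializable lambda, i.e. the kind of
    // "fingerprint" used to decide whether two lambdas are the same.
    static String fingerprint(Serializable lambda) throws Exception {
        Method writeReplace = lambda.getClass().getDeclaredMethod("writeReplace");
        writeReplace.setAccessible(true);
        SerializedLambda sl = (SerializedLambda) writeReplace.invoke(lambda);
        return sl.getImplClass() + "#" + sl.getImplMethodName();
    }

    public static void main(String[] args) throws Exception {
        Bi<Object, Object, Object> constraint = (a, b) -> a;
        System.out.println(fingerprint(constraint));              // the constraint's own lambda
        System.out.println(fingerprint(asFunctionN(constraint))); // the adapter's lambda instead
    }
}
{noformat}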
--
This message was sent by Atlassian Jira
(v8.13.1#813001)
[Red Hat JIRA] (WFLY-14251) Consolidated JMS queue count not working on Cluster setup.
by rutu rutu (Jira)
[ https://issues.redhat.com/browse/WFLY-14251?page=com.atlassian.jira.plugi... ]
rutu rutu updated WFLY-14251:
-----------------------------
Description:
Cluster setup:
Server 1 : Master, i.e. the manager console.
Server 2 : Slave 1
Server 3 : Slave 2
From the Master, a single EAR is deployed to all servers.
When a JMS message is pushed, it is distributed to server 1 or server 2 (per the MDB limit) according to the RoundRobinPolicy.
Message distribution is happening properly.
When the queue is paused from any server, the pause is also reflected elsewhere,
i.e. the same queue gets paused from another server.
But say 10 JMS messages are pushed onto the queue and get distributed as:
Server-1 : 3
Server-2 : 7
Now when server-1 checks how many messages are present on the queue, it sees only 3.
Since this setup is a cluster, why is the queue count information not being shared?
subsystem xmlns="urn:jboss:domain:messaging-activemq
<pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="http-connector in-vm" ha="true" auto-group="true" transaction="xa"/>
I have tried both:
connectors="http-connector in-vm"
connectors="http-connector"
Note:
connectors="http-connector" ---------> This controls start/pause across servers.
connectors="in-vm" ---------> This controls start/pause on local servers.
What should the setting be so that the count is the same across servers in the cluster?
<subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0">
<server name="default">
<security enabled="false"/>
<cluster password="${jboss.messaging.cluster.password:rutu}"/>
<management address="localhost" jmx-enabled="true" jmx-domain="org.apache.activemq.artemis"/>
<message-expiry scan-period="130000"/>
<security-setting name="#">
<role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
</security-setting>
<address-setting name="#" dead-letter-address="jms.queue.mq.sys.dmq" expiry-address="jms.queue.ExpiryQueue" max-delivery-attempts="2" max-size-bytes="10485760" page-size-bytes="2097152" message-counter-history-day-limit="10"/>
<http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/>
<http-connector name="http-connector-throughput" socket-binding="http" endpoint="http-acceptor-throughput">
<param name="batch-delay" value="50"/>
</http-connector>
<in-vm-connector name="in-vm" server-id="0">
<param name="buffer-pooling" value="false"/>
</in-vm-connector>
<http-acceptor name="http-acceptor" http-listener="default"/>
<http-acceptor name="http-acceptor-throughput" http-listener="default">
<param name="batch-delay" value="50"/>
<param name="direct-deliver" value="false"/>
</http-acceptor>
<remote-acceptor name="internal-messaging-acceptor" socket-binding="internal-messaging"/>
<in-vm-acceptor name="in-vm" server-id="0">
<param name="buffer-pooling" value="false"/>
</in-vm-acceptor>
<broadcast-group name="bg-group1" jgroups-cluster="activemq-cluster" connectors="http-connector"/>
<discovery-group name="dg-group1" jgroups-cluster="activemq-cluster"/>
<cluster-connection name="my-cluster" address="jms" connector-name="http-connector" discovery-group="dg-group1"/>
<jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
<jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
<jms-queue name="dummy" entries="java:/queue/dummy java:jboss/exported/jms/queue/dummy"/>
<jms-queue name="mq.sys.dmq" entries="java:/queue/mq.sys.dmq java:jboss/exported/jms/queue/mq.sys.dmq"/>
<jms-queue name="AccountingQueue" entries="java:/queue/AccountingQueue java:jboss/exported/jms/queue/AccountingQueue"/>
<connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
<connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" ha="true" block-on-acknowledge="true" reconnect-attempts="-1"/>
<pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="http-connector" transaction="xa"/>
</server>
</subsystem>
was:
Cluster setup:
Server 1 : Master, i.e. the manager console.
Server 2 : Slave 1
Server 3 : Slave 2
From the Master, a single EAR is deployed to all servers.
When a JMS message is pushed, it is distributed to server 1 or server 2 (per the MDB limit) according to the RoundRobinPolicy.
Message distribution is happening properly.
When the queue is paused from any server, the pause is also reflected elsewhere,
i.e. the same queue gets paused from another server.
But say 10 JMS messages are pushed onto the queue and get distributed as:
Server-1 : 3
Server-2 : 7
Now when server-1 checks how many messages are present on the queue, it sees only 3.
Since this setup is a cluster, why is the queue count information not being shared?
subsystem xmlns="urn:jboss:domain:messaging-activemq
<pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="http-connector in-vm" ha="true" auto-group="true" transaction="xa"/>
Note:
connectors="http-connector" ---------> This controls start/pause across servers.
connectors="in-vm" ---------> This controls start/pause on local servers.
What should the setting be so that the count is the same across servers in the cluster?
<subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0">
<server name="default">
<security enabled="false"/>
<cluster password="${jboss.messaging.cluster.password:rutu}"/>
<management address="localhost" jmx-enabled="true" jmx-domain="org.apache.activemq.artemis"/>
<message-expiry scan-period="130000"/>
<security-setting name="#">
<role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
</security-setting>
<address-setting name="#" dead-letter-address="jms.queue.mq.sys.dmq" expiry-address="jms.queue.ExpiryQueue" max-delivery-attempts="2" max-size-bytes="10485760" page-size-bytes="2097152" message-counter-history-day-limit="10"/>
<http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/>
<http-connector name="http-connector-throughput" socket-binding="http" endpoint="http-acceptor-throughput">
<param name="batch-delay" value="50"/>
</http-connector>
<in-vm-connector name="in-vm" server-id="0">
<param name="buffer-pooling" value="false"/>
</in-vm-connector>
<http-acceptor name="http-acceptor" http-listener="default"/>
<http-acceptor name="http-acceptor-throughput" http-listener="default">
<param name="batch-delay" value="50"/>
<param name="direct-deliver" value="false"/>
</http-acceptor>
<remote-acceptor name="internal-messaging-acceptor" socket-binding="internal-messaging"/>
<in-vm-acceptor name="in-vm" server-id="0">
<param name="buffer-pooling" value="false"/>
</in-vm-acceptor>
<broadcast-group name="bg-group1" jgroups-cluster="activemq-cluster" connectors="http-connector"/>
<discovery-group name="dg-group1" jgroups-cluster="activemq-cluster"/>
<cluster-connection name="my-cluster" address="jms" connector-name="http-connector" discovery-group="dg-group1"/>
<jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
<jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
<jms-queue name="dummy" entries="java:/queue/dummy java:jboss/exported/jms/queue/dummy"/>
<jms-queue name="mq.sys.dmq" entries="java:/queue/mq.sys.dmq java:jboss/exported/jms/queue/mq.sys.dmq"/>
<jms-queue name="AccountingQueue" entries="java:/queue/AccountingQueue java:jboss/exported/jms/queue/AccountingQueue"/>
<connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
<connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" ha="true" block-on-acknowledge="true" reconnect-attempts="-1"/>
<pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="http-connector" transaction="xa"/>
</server>
</subsystem>
> Consolidated JMS queue count not working on Cluster setup.
> ----------------------------------------------------------
>
> Key: WFLY-14251
> URL: https://issues.redhat.com/browse/WFLY-14251
> Project: WildFly
> Issue Type: Bug
> Components: Clustering, JMS
> Reporter: rutu rutu
> Assignee: Paul Ferraro
> Priority: Major
>
> Cluster setup:
> Server 1 : Master, i.e. the manager console.
> Server 2 : Slave 1
> Server 3 : Slave 2
> From the Master, a single EAR is deployed to all servers.
> When a JMS message is pushed, it is distributed to server 1 or server 2 (per the MDB limit) according to the RoundRobinPolicy.
> Message distribution is happening properly.
> When the queue is paused from any server, the pause is also reflected elsewhere,
> i.e. the same queue gets paused from another server.
> But say 10 JMS messages are pushed onto the queue and get distributed as:
> Server-1 : 3
> Server-2 : 7
> Now when server-1 checks how many messages are present on the queue, it sees only 3.
> Since this setup is a cluster, why is the queue count information not being shared?
>
> subsystem xmlns="urn:jboss:domain:messaging-activemq
>
> <pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="http-connector in-vm" ha="true" auto-group="true" transaction="xa"/>
> I have tried both:
> connectors="http-connector in-vm"
> connectors="http-connector"
> Note:
> connectors="http-connector" ---------> This controls start/pause across servers.
> connectors="in-vm" ---------> This controls start/pause on local servers.
> What should the setting be so that the count is the same across servers in the cluster?
> <subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0">
> <server name="default">
> <security enabled="false"/>
> <cluster password="${jboss.messaging.cluster.password:rutu}"/>
> <management address="localhost" jmx-enabled="true" jmx-domain="org.apache.activemq.artemis"/>
> <message-expiry scan-period="130000"/>
> <security-setting name="#">
> <role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
> </security-setting>
> <address-setting name="#" dead-letter-address="jms.queue.mq.sys.dmq" expiry-address="jms.queue.ExpiryQueue" max-delivery-attempts="2" max-size-bytes="10485760" page-size-bytes="2097152" message-counter-history-day-limit="10"/>
> <http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/>
> <http-connector name="http-connector-throughput" socket-binding="http" endpoint="http-acceptor-throughput">
> <param name="batch-delay" value="50"/>
> </http-connector>
> <in-vm-connector name="in-vm" server-id="0">
> <param name="buffer-pooling" value="false"/>
> </in-vm-connector>
> <http-acceptor name="http-acceptor" http-listener="default"/>
> <http-acceptor name="http-acceptor-throughput" http-listener="default">
> <param name="batch-delay" value="50"/>
> <param name="direct-deliver" value="false"/>
> </http-acceptor>
> <remote-acceptor name="internal-messaging-acceptor" socket-binding="internal-messaging"/>
> <in-vm-acceptor name="in-vm" server-id="0">
> <param name="buffer-pooling" value="false"/>
> </in-vm-acceptor>
> <broadcast-group name="bg-group1" jgroups-cluster="activemq-cluster" connectors="http-connector"/>
> <discovery-group name="dg-group1" jgroups-cluster="activemq-cluster"/>
> <cluster-connection name="my-cluster" address="jms" connector-name="http-connector" discovery-group="dg-group1"/>
> <jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
> <jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
> <jms-queue name="dummy" entries="java:/queue/dummy java:jboss/exported/jms/queue/dummy"/>
> <jms-queue name="mq.sys.dmq" entries="java:/queue/mq.sys.dmq java:jboss/exported/jms/queue/mq.sys.dmq"/>
> <jms-queue name="AccountingQueue" entries="java:/queue/AccountingQueue java:jboss/exported/jms/queue/AccountingQueue"/>
> <connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
> <connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" ha="true" block-on-acknowledge="true" reconnect-attempts="-1"/>
> <pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="http-connector" transaction="xa"/>
>
> </server>
> </subsystem>
--
This message was sent by Atlassian Jira
(v8.13.1#813001)
[Red Hat JIRA] (WFLY-14251) Consolidated JMS queue count not working on Cluster setup.
by rutu rutu (Jira)
rutu rutu created WFLY-14251:
--------------------------------
Summary: Consolidated JMS queue count not working on Cluster setup.
Key: WFLY-14251
URL: https://issues.redhat.com/browse/WFLY-14251
Project: WildFly
Issue Type: Bug
Components: Clustering, JMS
Reporter: rutu rutu
Assignee: Paul Ferraro
Cluster setup:
Server 1 : Master, i.e. the manager console.
Server 2 : Slave 1
Server 3 : Slave 2
From the Master, a single EAR is deployed to all servers.
When a JMS message is pushed, it is distributed to server 1 or server 2 (per the MDB limit) according to the RoundRobinPolicy.
Message distribution is happening properly.
When the queue is paused from any server, the pause is also reflected elsewhere,
i.e. the same queue gets paused from another server.
But say 10 JMS messages are pushed onto the queue and get distributed as:
Server-1 : 3
Server-2 : 7
Now when server-1 checks how many messages are present on the queue, it sees only 3.
Since this setup is a cluster, why is the queue count information not being shared?
subsystem xmlns="urn:jboss:domain:messaging-activemq
<pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="http-connector in-vm" ha="true" auto-group="true" transaction="xa"/>
Note:
connectors="http-connector" ---------> This controls start/pause across servers.
connectors="in-vm" ---------> This controls start/pause on local servers.
What should the setting be so that the count is the same across servers in the cluster?
<subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0">
<server name="default">
<security enabled="false"/>
<cluster password="${jboss.messaging.cluster.password:rutu}"/>
<management address="localhost" jmx-enabled="true" jmx-domain="org.apache.activemq.artemis"/>
<message-expiry scan-period="130000"/>
<security-setting name="#">
<role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
</security-setting>
<address-setting name="#" dead-letter-address="jms.queue.mq.sys.dmq" expiry-address="jms.queue.ExpiryQueue" max-delivery-attempts="2" max-size-bytes="10485760" page-size-bytes="2097152" message-counter-history-day-limit="10"/>
<http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/>
<http-connector name="http-connector-throughput" socket-binding="http" endpoint="http-acceptor-throughput">
<param name="batch-delay" value="50"/>
</http-connector>
<in-vm-connector name="in-vm" server-id="0">
<param name="buffer-pooling" value="false"/>
</in-vm-connector>
<http-acceptor name="http-acceptor" http-listener="default"/>
<http-acceptor name="http-acceptor-throughput" http-listener="default">
<param name="batch-delay" value="50"/>
<param name="direct-deliver" value="false"/>
</http-acceptor>
<remote-acceptor name="internal-messaging-acceptor" socket-binding="internal-messaging"/>
<in-vm-acceptor name="in-vm" server-id="0">
<param name="buffer-pooling" value="false"/>
</in-vm-acceptor>
<broadcast-group name="bg-group1" jgroups-cluster="activemq-cluster" connectors="http-connector"/>
<discovery-group name="dg-group1" jgroups-cluster="activemq-cluster"/>
<cluster-connection name="my-cluster" address="jms" connector-name="http-connector" discovery-group="dg-group1"/>
<jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
<jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
<jms-queue name="dummy" entries="java:/queue/dummy java:jboss/exported/jms/queue/dummy"/>
<jms-queue name="mq.sys.dmq" entries="java:/queue/mq.sys.dmq java:jboss/exported/jms/queue/mq.sys.dmq"/>
<jms-queue name="AccountingQueue" entries="java:/queue/AccountingQueue java:jboss/exported/jms/queue/AccountingQueue"/>
<connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
<connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" ha="true" block-on-acknowledge="true" reconnect-attempts="-1"/>
<pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="http-connector" transaction="xa"/>
</server>
</subsystem>
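As a side note (not part of the report): the message-count attribute on a jms-queue runtime resource is read per server, so each node only reports the messages it holds locally, which matches the 3 vs. 7 split described above. A hedged sketch follows, assuming a reachable management interface; host, port and queue name are placeholders, and a cluster-wide total would require querying every member and summing the results:
{noformat}
import org.jboss.as.controller.client.ModelControllerClient;
import org.jboss.dmr.ModelNode;

// Reads message-count for one queue on one server via the management API.
// Assumes the management interface at 127.0.0.1:9990 is reachable (add
// authentication as needed); the queue name is a placeholder.
public class QueueCountCheck {
    public static void main(String[] args) throws Exception {
        try (ModelControllerClient client =
                 ModelControllerClient.Factory.create("127.0.0.1", 9990)) {
            ModelNode op = new ModelNode();
            op.get("operation").set("read-attribute");
            op.get("name").set("message-count");
            ModelNode address = op.get("address");
            address.add("subsystem", "messaging-activemq");
            address.add("server", "default");
            address.add("jms-queue", "AccountingQueue");

            ModelNode response = client.execute(op);
            if ("success".equals(response.get("outcome").asString())) {
                // Count held by this broker instance only; repeat against each
                // cluster member and sum the results for a cluster-wide total.
                System.out.println("message-count = " + response.get("result").asLong());
            } else {
                System.err.println(response.get("failure-description").asString());
            }
        }
    }
}
{noformat}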
--
This message was sent by Atlassian Jira
(v8.13.1#813001)
[Red Hat JIRA] (JGRP-2503) Extend DataInput & DataOutput to avoid copy of byte[]
by Jingqi Xu (Jira)
[ https://issues.redhat.com/browse/JGRP-2503?page=com.atlassian.jira.plugin... ]
Jingqi Xu reopened JGRP-2503:
-----------------------------
Please check this pull request:
https://github.com/belaban/JGroups/pull/524
> Extend DataInput & DataOutput to avoid copy of byte[]
> -----------------------------------------------------
>
> Key: JGRP-2503
> URL: https://issues.redhat.com/browse/JGRP-2503
> Project: JGroups
> Issue Type: Feature Request
> Affects Versions: 5.0.4
> Reporter: Jingqi Xu
> Assignee: Bela Ban
> Priority: Optional
> Fix For: 5.1
>
>
> DataInput's void readFully(byte b[]) causes an extra copy when reading bytes from a DataInput. Sometimes business logic needs to read the bytes first and only then process them conditionally (or not at all). Is it possible to support a ByteBuffer API like the following?
> public interface XDataInput extends DataInput {
>     ByteBuffer readBuffer(int n);
> }
> public interface XDataOutput extends DataOutput {
>     void writeBuffer(ByteBuffer value);
> }
> ByteArrayDataInputStream.java:
> @Override
> public ByteBuffer readBuffer(int n) {
>     ByteBuffer r = wrap(this.buf, this.pos, n);
>     this.pos += n;
>     return r;
> }
> public interface Streamable {
>     void writeTo(XDataOutput out) throws IOException;
>     void readFrom(XDataInput in) throws IOException, ClassNotFoundException;
> }
--
This message was sent by Atlassian Jira
(v8.13.1#813001)