[JBoss JIRA] (ISPN-8550) Try to estimate malloc overhead and add to memory based eviction
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-8550?page=com.atlassian.jira.plugin.... ]
William Burns edited comment on ISPN-8550 at 11/22/17 4:00 PM:
---------------------------------------------------------------
So I have been testing this, and an additional overhead of 16 bytes per allocation seems about right.
I verified the numbers using valgrind (http://valgrind.org/). When I did simple allocations, valgrind reported that my app's requests were aligned to 8 bytes, and it assumed its default of 8 bytes of bookkeeping overhead per block (which turns out not to be correct, as seen below).
In my test I am directly calling _OffHeapMemory.INSTANCE.allocate_, passing in a size of 1000 (already a multiple of 8), and I tried several different entry counts.
I additionally allocated 100 extra objects before the 100,000, which puts us at (1000 entry bytes + 8 overhead bytes) * (100,000 + 100) = 100,900,800 bytes, or about 96.23 MB. That would leave about 8 MB of overhead for the entire JVM, as seen in the second row below.
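The arithmetic above can be double-checked with a quick standalone calculation (the class and method names here are made up for illustration, not Infinispan code):

```java
// Sanity check for the expected-size arithmetic:
// (entry size + assumed per-allocation overhead) * number of allocations.
public class OverheadEstimate {

    static long expectedBytes(long entrySize, long overheadPerAlloc, long count) {
        return (entrySize + overheadPerAlloc) * count;
    }

    public static void main(String[] args) {
        long bytes = expectedBytes(1000, 8, 100_000 + 100);
        // 1008 * 100100 = 100900800 bytes, i.e. about 96.23 MB
        System.out.println(bytes + " bytes = " + bytes / (1024.0 * 1024.0) + " MB");
    }
}
```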
8 bytes overhead (Program Mem is what valgrind reported; Other is Program Mem minus Entries, which should ideally be just the JVM's own usage).
||\# Entries||Entries (MB)||Program Mem (MB)||Other (MB)||
|10,100|9.62|17.02|7.4|
|100,100|96.2|104.2|8.0|
|1,000,100|961.39|976.2|14.81|
So it could be that 8 is not quite enough; from what I have read, most allocators vary between 4 and 15 bytes per block. So I tried a couple more values:
9 bytes overhead
||\# Entries||Entries (MB)||Program Mem (MB)||Other (MB)||
|10,100|9.71|17.02|7.31|
|100,100|96.32|104.2|7.88|
|1,000,100|962.35|976.2|13.8|
9 was still not enough, so I went to the other extreme: 16 bytes.
16 bytes overhead
||\# Entries||Entries (MB)||Program Mem (MB)||Other (MB)||
|10,100|9.78|17.02|7.23|
|100,100|96.99|104.2|7.21|
|1,000,100|969.03|976.2|7.1|
So from this it looks like the overhead is about 16 bytes per allocation on my box. It might actually be 15, though, so let's try that:
15 bytes overhead
||\# Entries||Entries (MB)||Program Mem (MB)||Other (MB)||
|10,100|9.77|17.02|7.24|
|100,100|96.89|104.2|7.3|
|1,000,100|968.07|976.2|8.12|
So that is scaling the wrong way; the real overhead seems to be somewhere between 15 and 16 bytes. In that case I would err on the side of 16, so that we overestimate rather than let the process exceed its memory budget.
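A minimal sketch of how such an estimate could feed into the size accounting for memory-based eviction follows; the constants and names below are assumptions based on the measurements above, not Infinispan's actual API:

```java
// Sketch: estimate the real cost of an off-heap allocation, assuming the
// allocator aligns requests to 8 bytes and adds ~16 bytes of bookkeeping
// per block (the value measured above on one particular box).
public class MallocOverheadEstimator {

    static final long ALLOCATOR_ALIGNMENT = 8;   // assumed request alignment
    static final long PER_ALLOC_OVERHEAD = 16;   // assumed malloc bookkeeping cost

    /** Rounds the requested size up to the alignment, then adds the overhead. */
    static long estimateAllocatedSize(long requestedSize) {
        long aligned = (requestedSize + ALLOCATOR_ALIGNMENT - 1)
              / ALLOCATOR_ALIGNMENT * ALLOCATOR_ALIGNMENT;
        return aligned + PER_ALLOC_OVERHEAD;
    }

    public static void main(String[] args) {
        System.out.println(estimateAllocatedSize(1000)); // 1016: 1000 is already 8-aligned
        System.out.println(estimateAllocatedSize(1001)); // 1024: rounds up to 1008, plus 16
    }
}
```

Counting this estimate against the eviction limit, instead of just the requested size, errs toward evicting slightly early rather than letting the process exceed its memory budget.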
All my app does is the following; the sleeps are there so the allocation phases show up distinctly in valgrind's graph.
{code}
public static void main(String[] args) throws InterruptedException {
   int allocationSize = 1000;
   int allocationCount = 1_000_000;
   // Warmup
   for (int i = 0; i < 100; i++) {
      OffHeapMemory.INSTANCE.allocate(allocationSize);
   }
   // Give it some time to flatten out
   Thread.sleep(10_000);
   for (int i = 0; i < allocationCount; ++i) {
      if (i % 10_000 == 0) {
         Thread.sleep(100);
      }
      OffHeapMemory.INSTANCE.allocate(allocationSize);
   }
}
{code}
> Try to estimate malloc overhead and add to memory based eviction
> ----------------------------------------------------------------
>
> Key: ISPN-8550
> URL: https://issues.jboss.org/browse/ISPN-8550
> Project: Infinispan
> Issue Type: Sub-task
> Reporter: William Burns
> Assignee: William Burns
>
> We should try to also estimate malloc overhead. We could do something like Dan mentioned at https://github.com/infinispan/infinispan/pull/5590#pullrequestreview-7805...
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8553) Compatibility mode not working with server tasks using Java Streams
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-8553?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes commented on ISPN-8553:
-----------------------------------------
You can try:
{code:java}
@SuppressWarnings("unchecked")
private <K, V> Cache<K, V> getCache() {
   return (Cache<K, V>) ctx.getCache().get().getAdvancedCache().withEncoding(IdentityEncoder.class);
}
{code}
> Compatibility mode not working with server tasks using Java Streams
> -------------------------------------------------------------------
>
> Key: ISPN-8553
> URL: https://issues.jboss.org/browse/ISPN-8553
> Project: Infinispan
> Issue Type: Bug
> Components: Server
> Affects Versions: 9.2.0.Beta1
> Reporter: Galder Zamarreño
> Assignee: Gustavo Fernandes
> Fix For: 9.2.0.Beta2, 9.2.0.Final
>
>
> I have a cache defined as:
> {code}
> <distributed-cache name="analytics">
> <compatibility enabled="true" marshaller="org.infinispan.query.remote.CompatibilityProtoStreamMarshaller"/>
> </distributed-cache>
> {code}
> Then, I have a task like this:
> {code}
> package delays.java.stream.task;
> import java.util.Arrays;
> import java.util.Calendar;
> import java.util.Collections;
> import java.util.Date;
> import java.util.Locale;
> import java.util.Map;
> import java.util.TimeZone;
> import java.util.TreeMap;
> import java.util.stream.Collector;
> import java.util.stream.Collectors;
> import org.infinispan.Cache;
> import org.infinispan.stream.CacheCollectors;
> import org.infinispan.tasks.ServerTask;
> import org.infinispan.tasks.TaskContext;
> import org.infinispan.tasks.TaskExecutionMode;
> import org.infinispan.util.function.SerializableSupplier;
> import delays.java.stream.pojos.Stop;
> public class DelayRatioTask implements ServerTask {
>    private TaskContext ctx;
>    @Override
>    public void setTaskContext(TaskContext ctx) {
>       this.ctx = ctx;
>    }
>    @Override
>    public String getName() {
>       return "delay-ratio";
>    }
>    @Override
>    public Object call() throws Exception {
>       System.out.println("Execute delay-ratio task");
>       Cache<String, Stop> cache = getCache();
>       Map<Integer, Long> totalPerHour = cache.values().stream()
>             .collect(
>                serialize(() -> Collectors.groupingBy(
>                   e -> getHourOfDay(e.departureTs),
>                   Collectors.counting()
>                )));
>       Map<Integer, Long> delayedPerHour = cache.values().stream()
>             .filter(e -> e.delayMin > 0)
>             .collect(
>                serialize(() -> Collectors.groupingBy(
>                   e -> getHourOfDay(e.departureTs),
>                   Collectors.counting()
>                )));
>       return Arrays.asList(delayedPerHour, totalPerHour);
>       // return Arrays.asList(Collections.emptyMap(), Collections.emptyMap());
>    }
>    @Override
>    public TaskExecutionMode getExecutionMode() {
>       return TaskExecutionMode.ONE_NODE;
>    }
>    @SuppressWarnings("unchecked")
>    private <K, V> Cache<K, V> getCache() {
>       return (Cache<K, V>) ctx.getCache().get();
>    }
>    private static <T, R> Collector<T, ?, R> serialize(SerializableSupplier<Collector<T, ?, R>> s) {
>       return CacheCollectors.serializableCollector(s);
>    }
>    private static int getHourOfDay(Date date) {
>       Calendar c = Calendar.getInstance(TimeZone.getTimeZone("GMT+1"), Locale.ENGLISH);
>       c.setTime(date);
>       return c.get(Calendar.HOUR_OF_DAY);
>    }
> }
> {code}
> When the groupBy executes, it fails with:
> {code}
> java.lang.AssertionError: org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=333 returned server error (status=0x85): java.util.concurrent.ExecutionException: java.lang.ClassCastException: [B cannot be cast to delays.java.stream.pojos.Stop
> java.lang.ClassCastException: [B cannot be cast to delays.java.stream.pojos.Stop
> at delays.java.stream.AnalyticsUtil.timed(AnalyticsUtil.java:16)
> at delays.java.stream.AnalyticsVerticle.getDelaysRatio(AnalyticsVerticle.java:72)
> at io.vertx.ext.web.impl.BlockingHandlerDecorator.lambda$handle$0(BlockingHandlerDecorator.java:48)
> at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$1(ContextImpl.java:271)
> at io.vertx.core.impl.TaskQueue.lambda$new$0(TaskQueue.java:60)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=333 returned server error (status=0x85): java.util.concurrent.ExecutionException: java.lang.ClassCastException: [B cannot be cast to delays.java.stream.pojos.Stop
> java.lang.ClassCastException: [B cannot be cast to delays.java.stream.pojos.Stop
> at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:363)
> at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:152)
> at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:138)
> at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:60)
> at org.infinispan.client.hotrod.impl.operations.ExecuteOperation.executeOperation(ExecuteOperation.java:50)
> at org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation.execute(RetryOnFailureOperation.java:56)
> at org.infinispan.client.hotrod.impl.RemoteCacheImpl.execute(RemoteCacheImpl.java:542)
> at delays.java.stream.AnalyticsVerticle.lambda$getDelaysRatio$1(AnalyticsVerticle.java:73)
> at delays.java.stream.AnalyticsUtil.timed(AnalyticsUtil.java:14)
> ... 7 more
> {code}
> This is coming from:
> {code}
> 10:36:18,765 WARN [org.infinispan.remoting.inboundhandler.NonTotalOrderPerCacheInboundInvocationHandler] (remote-thread--p2-t22) ISPN000071: Caught exception when handling command StreamRequestCommand{type=TERMINAL_REHASH, includeLoader=true, terminalOperation=org.infinispan.stream.impl.termop.SegmentRetryingOperation@1b024f9, topologyId=9, id=datagrid-1-bmspw0, segments=[128, 130, 6, 135, 137, 138, 11, 12, 140, 13, 143, 16, 144, 17, 146, 22, 152, 155, 28, 29, 31, 36, 37, 41, 42, 44, 172, 173, 177, 178, 179, 181, 183, 57, 185, 60, 61, 189, 64, 192, 65, 193, 66, 197, 201, 75, 204, 207, 80, 208, 82, 83, 84, 212, 85, 86, 89, 92, 96, 225, 98, 226, 99, 100, 101, 102, 231, 105, 233, 234, 107, 108, 237, 112, 242, 115, 246, 247, 120, 251, 124, 125, 253, 255], keys=[], excludedKeys=[]}: java.lang.ClassCastException: [B cannot be cast to delays.java.stream.pojos.Stop
> at java.util.stream.Collectors.lambda$groupingBy$45(Collectors.java:907)
> at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
> at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
> at java.util.Spliterators$IteratorSpliterator.tryAdvance(Spliterators.java:1812)
> at org.infinispan.commons.util.Closeables$SpliteratorAsCloseableSpliterator.tryAdvance(Closeables.java:144)
> at java.util.Spliterator.forEachRemaining(Spliterator.java:326)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at org.infinispan.stream.impl.local.LocalCacheStream.collect(LocalCacheStream.java:259)
> at org.infinispan.stream.impl.TerminalFunctions$CollectorFunction.apply(TerminalFunctions.java:1093)
> at org.infinispan.stream.impl.TerminalFunctions$CollectorFunction.apply(TerminalFunctions.java:1083)
> at org.infinispan.stream.impl.termop.SegmentRetryingOperation.innerPerformOperation(SegmentRetryingOperation.java:68)
> at org.infinispan.stream.impl.termop.SegmentRetryingOperation.performOperation(SegmentRetryingOperation.java:79)
> at org.infinispan.stream.impl.LocalStreamManagerImpl.streamOperationRehashAware(LocalStreamManagerImpl.java:302)
> at org.infinispan.stream.impl.StreamRequestCommand.invokeAsync(StreamRequestCommand.java:96)
> at org.infinispan.remoting.inboundhandler.BasePerCacheInboundInvocationHandler.invokeCommand(BasePerCacheInboundInvocationHandler.java:102)
> at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.invoke(BaseBlockingRunnable.java:99)
> at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.runAsync(BaseBlockingRunnable.java:71)
> at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.run(BaseBlockingRunnable.java:40)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}