[JBoss JIRA] (ISPN-9023) Eviction profile for “memory” type is different from “count” type
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-9023?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant reassigned ISPN-9023:
-------------------------------------
Assignee: William Burns
> Eviction profile for “memory” type is different from “count” type
> -----------------------------------------------------------------
>
> Key: ISPN-9023
> URL: https://issues.jboss.org/browse/ISPN-9023
> Project: Infinispan
> Issue Type: Bug
> Components: Eviction
> Affects Versions: 9.1.1.Final
> Environment: * Sun JDK 8 (1.8.0_92)
> * Linux x64
> Reporter: Dmitry Katsubo
> Assignee: William Burns
> Priority: Minor
> Attachments: test.7z
>
>
> I would like to use Infinispan as a cache for binary image data (Java type {{byte[]}}). I assume that Infinispan's default strategy is LRU or a similar strategy that prefers to keep the most recently used/added cache entries.
> In my load-test scenario I make four calls per round to retrieve the same entity. The first call is expected never to hit the cache (because a unique entity is requested in each round), but the following three should always hit it. So the expected profile looks like this:
> {code}
> #1: MISS-HIT-HIT-HIT
> #2: MISS-HIT-HIT-HIT
> ...
> {code}
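> For reference, a minimal sketch of the load-test loop described above (hypothetical code: the real test is in the attached [^test.7z], and the backend lookup and names here are made up), measuring the hit ratio via the embedded statistics API:
> {code:java}
> import org.infinispan.Cache;
> import org.infinispan.manager.DefaultCacheManager;
> import org.infinispan.stats.Stats;
>
> public class ImagesCacheLoadTest {
>     public static void main(String[] args) throws Exception {
>         DefaultCacheManager cm = new DefaultCacheManager("infinispan.xml"); // assumed config file name
>         Cache<String, byte[]> cache = cm.getCache("imagesCache");
>         for (int round = 1; round <= 1000; round++) {
>             String key = "image-" + round;                 // a unique entity per round
>             for (int call = 0; call < 4; call++) {         // 1 expected miss + 3 expected hits
>                 byte[] image = cache.get(key);
>                 if (image == null) {
>                     image = loadImageFromBackend(key);     // hypothetical backend lookup
>                     cache.put(key, image);
>                 }
>             }
>         }
>         Stats stats = cache.getAdvancedCache().getStats(); // same counters as exposed via JMX
>         double hitRatio = (double) stats.getHits() / (stats.getHits() + stats.getMisses());
>         System.out.printf("hits=%d misses=%d ratio=%.2f%n", stats.getHits(), stats.getMisses(), hitRatio);
>         cm.stop();
>     }
>
>     private static byte[] loadImageFromBackend(String key) {
>         return new byte[10_000];                           // stand-in for the real image source
>     }
> }
> {code}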
> It works perfectly (exactly as expected above) when I configure Infinispan with COUNT eviction type with some number of entities:
> {code}
> <local-cache name="imagesCache" statistics="true">
> <!-- lifespan="30 min" max-idle="30 min" interval="1 min" -->
> <expiration lifespan="1800000" max-idle="1800000" interval="60000" />
> <memory>
> <binary eviction="COUNT" size="500" />
> </memory>
> </local-cache>
> {code}
> The cache hit ratio based on the numbers I capture in the load test is {{(2952-738)/2952 = 0.75}}, which matches the statistics I observe via JMX.
> When I change the number of entities to keep in memory (<binary eviction="COUNT" size="100" />), the hit ratio does not change (as expected).
> After that I restarted the application with only this small change in the cache configuration:
> {code}
> <memory>
> <binary eviction="MEMORY" size="1000000" />
> </memory>
> {code}
> I would expect Infinispan to show the same profile; however, it turns out that once the given amount of memory is fully allocated, newly added entities do not evict old ones but are themselves evicted immediately. That means that after roughly 100 entities have been added, all four requests to the cache result in a miss (the ratio is now 0.58 instead of 0.75):
> {code}
> #1: MISS-HIT-HIT-HIT
> #2: MISS-HIT-HIT-HIT
> ...
> #101: MISS-MISS-MISS-MISS
> ...
> {code}
> If I increase the memory size, the hit ratio indeed comes closer to 0.75; however, I would like the hit ratio to stay the same irrespective of the memory size (provided the memory can hold at least a few entities).
> Once I configure passivation to a file store, the memory-based eviction policy starts to work as expected:
> {code}
> <local-cache name="imagesCache" statistics="true">
> <expiration lifespan="1800000" max-idle="1800000" interval="60000" />
> <persistence passivation="true">
> <file-store path="/var/cache/infinispan" purge="true">
> <write-behind thread-pool-size="5" />
> </file-store>
> </persistence>
> <memory>
> <binary eviction="MEMORY" size="1000000" />
> </memory>
> </local-cache>
> {code}
> but I would like to achieve the profile I need without passivation enabled.
> Additional information is provided [on the forum|https://stackoverflow.com/questions/48420712/eviction-for-memory-ty...].
> Bottom line: if the underlying component supports different eviction "modes", please expose this setting via XML so that the user of the library can control the mode.
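> For completeness, a minimal sketch of the equivalent programmatic configuration (assuming the Infinispan 9.x {{ConfigurationBuilder}} API, where the size bound and {{EvictionType}} are the exposed knobs):
> {code:java}
> import java.util.concurrent.TimeUnit;
>
> import org.infinispan.configuration.cache.Configuration;
> import org.infinispan.configuration.cache.ConfigurationBuilder;
> import org.infinispan.configuration.cache.StorageType;
> import org.infinispan.eviction.EvictionType;
>
> public class ImagesCacheConfig {
>     // Programmatic equivalent of the XML above: binary storage bounded by estimated memory size.
>     public static Configuration build() {
>         ConfigurationBuilder builder = new ConfigurationBuilder();
>         builder.expiration()
>                .lifespan(30, TimeUnit.MINUTES)
>                .maxIdle(30, TimeUnit.MINUTES)
>                .wakeUpInterval(1, TimeUnit.MINUTES);
>         builder.memory()
>                .storageType(StorageType.BINARY)
>                .evictionType(EvictionType.MEMORY)   // or EvictionType.COUNT for the count-based profile
>                .size(1_000_000);
>         return builder.build();
>     }
> }
> {code}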
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9346) Off-heap implementation crashes JVM
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-9346?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant reassigned ISPN-9346:
-------------------------------------
Assignee: William Burns
> Off-heap implementation crashes JVM
> -----------------------------------
>
> Key: ISPN-9346
> URL: https://issues.jboss.org/browse/ISPN-9346
> Project: Infinispan
> Issue Type: Enhancement
> Components: Off Heap
> Affects Versions: 9.1.1.Final
> Reporter: Dmitry Katsubo
> Assignee: William Burns
> Attachments: logs.7z
>
>
> There is a suspicion that the off-heap implementation has caused the attached JVM crashes (see [^logs.7z]).
> There are no steps to reproduce. Perhaps somebody else has encountered the same issue.
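> The frames below are in {{BoundedOffHeapDataContainer}}, which as far as I can tell is only used when off-heap storage is combined with a size bound, reached here from the expiration reaper. A minimal sketch of the kind of configuration that exercises that code path (the actual configuration is not included, so the values are assumptions):
> {code:java}
> import org.infinispan.configuration.cache.Configuration;
> import org.infinispan.configuration.cache.ConfigurationBuilder;
> import org.infinispan.configuration.cache.StorageType;
> import org.infinispan.eviction.EvictionType;
>
> public class OffHeapCacheConfig {
>     // Hypothetical config: bounded off-heap container plus the expiration reaper thread
>     // visible in the crash stack (ExpirationManagerImpl.processExpiration).
>     public static Configuration build() {
>         ConfigurationBuilder builder = new ConfigurationBuilder();
>         builder.memory()
>                .storageType(StorageType.OFF_HEAP)   // entries live outside the Java heap
>                .evictionType(EvictionType.MEMORY)   // size bound -> BoundedOffHeapDataContainer
>                .size(100 * 1024 * 1024);            // assumed 100 MB bound
>         builder.expiration()
>                .lifespan(1_800_000)                 // assumed values, in milliseconds
>                .maxIdle(1_800_000)
>                .wakeUpInterval(60_000);             // periodic reaper run
>         return builder.build();
>     }
> }
> {code}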
> Log extract:
> {code}
> SIGSEGV (0xb) at pc=0x00007fb9a2f358ad, pid=42551, tid=140426343024384
> Problematic frame:
> J 17187 C1 org.infinispan.container.offheap.BoundedOffHeapDataContainer.moveToEnd(J)V (137 bytes) @ 0x00007fb9a2f358ad [0x00007fb9a2f357a0+0x10d]
> J 17205 C1 org.infinispan.container.offheap.BoundedOffHeapDataContainer.entryRetrieved(J)V (74 bytes) @ 0x00007fb9a2e02c2c [0x00007fb9a2e02960+0x2cc]
> J 17295 C1 org.infinispan.container.offheap.OffHeapDataContainer.performGet(JLjava/lang/Object;)Lorg/infinispan/container/entries/InternalCacheEntry; (72 bytes) @ 0x00007fb9a310c5ac [0x00007fb9a310c000+0x5ac]
> J 18853 C1 org.infinispan.container.offheap.OffHeapDataContainer.compute(Lorg/infinispan/commons/marshall/WrappedBytes;Lorg/infinispan/container/DataContainer$ComputeAction;)Lorg/infinispan/container/entries/InternalCacheEntry; (157 bytes) @ 0x00007fb9a250e814 [0x00007fb9a250dd00+0xb14]
> J 18851 C1 org.infinispan.container.offheap.BoundedOffHeapDataContainer.compute(Ljava/lang/Object;Lorg/infinispan/container/DataContainer$ComputeAction;)Lorg/infinispan/container/entries/InternalCacheEntry; (10 bytes) @ 0x00007fb9a4658cbc [0x00007fb9a4658ae0+0x1dc]
> J 18840 C1 org.infinispan.expiration.impl.ExpirationManagerImpl.handleInMemoryExpiration(Lorg/infinispan/container/entries/InternalCacheEntry;J)V (24 bytes) @ 0x00007fb9a1a2e55c [0x00007fb9a1a2e0e0+0x47c]
> j org.infinispan.expiration.impl.ExpirationManagerImpl.processExpiration()V+95
> j org.infinispan.expiration.impl.ExpirationManagerImpl$ScheduledTask.run()V+17
> J 15131 C2 java.util.concurrent.Executors$RunnableAdapter.call()Ljava/lang/Object; (14 bytes) @ 0x00007fb9a3cef368 [0x00007fb9a3cee200+0x1168]
> {code}
> Infinispan v9.1.1.Final
> Java v1.8.0_74-b02
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9345) TimeoutException involving the org.infinispan.CONFIG cache
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-9345?page=com.atlassian.jira.plugin.... ]
Dan Berindei edited comment on ISPN-9345 at 7/3/18 10:30 AM:
-------------------------------------------------------------
Separate JVMs are not needed; starting 2 nodes in the same JVM also reproduces the problem.
I've tracked it down to the fact that JGroups uses IPv6 addresses by default on Linux. Even if {{UDP.mcast_addr}} is set to an IPv4 address, JGroups converts it to an IPv6 address. [IPv6 never fragments packets|https://labs.ripe.net/Members/gih/fragmenting-ipv6], so packets bigger than the MTU are dropped:
{noformat}16:33:40.584803 IP6 denulu-tp3 > ff0e::e406:708: frag (0|1448) 40689 > 46655: UDP, bad length 1453 > 1440{noformat}
(This happens on a wifi network with {{mtu=1500}}, according to ifconfig, and {{FRAG3.frag_size="1391"}})
It would be great if we could report this kind of misconfiguration to the user, but I don't think we can discover this without native code. WDYT [~belaban], maybe we should broadcast a huge message on startup and fail if we don't get a reply back from the coordinator?
As a workaround we can switch to the IPv4 stack with {{\-Djava.net.preferIPv4Stack=true}}, switch to the TCP stack, or reduce the fragment size. Apparently 1280 is the minimum supported MTU in IPv6, so I suggest we change the default fragment size to {{1200}} (1280-(1453-1391)=1218) *and* use the TCP stack by default. We can add a paragraph to the user guide suggesting UDP with a higher fragment size as a riskier and possibly faster alternative.
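A minimal sketch of the workaround side (not the proposed default change), assuming an embedded setup where the JGroups stack is selected through the standard {{configurationFile}} transport property; reducing {{FRAG3.frag_size}} would instead require a customised JGroups XML:
{code:java}
import org.infinispan.configuration.global.GlobalConfiguration;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;

public class ClusterTransportConfig {
    public static GlobalConfiguration build() {
        // Workaround 1: force the IPv4 stack. Normally passed as -Djava.net.preferIPv4Stack=true
        // on the command line; setting it here only helps if JGroups has not initialised yet.
        System.setProperty("java.net.preferIPv4Stack", "true");

        GlobalConfigurationBuilder global = new GlobalConfigurationBuilder();
        // Workaround 2: use the bundled TCP stack instead of the default UDP one.
        global.transport()
              .defaultTransport()
              .addProperty("configurationFile", "default-configs/default-jgroups-tcp.xml");
        return global.build();
    }
}
{code}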
> TimeoutException involving the org.infinispan.CONFIG cache
> ----------------------------------------------------------
>
> Key: ISPN-9345
> URL: https://issues.jboss.org/browse/ISPN-9345
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.3.0.Final
> Reporter: Gustavo Fernandes
>
> {noformat}
> Caused by: org.infinispan.commons.CacheException: Initial state transfer timed out for cache org.infinispan.CONFIG on jedha-64980
> at org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete(StateTransferManagerImpl.java:233)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.infinispan.commons.util.SecurityActions.lambda$invokeAccessibly$0(SecurityActions.java:79)
> {noformat}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9345) TimeoutException involving the org.infinispan.CONFIG cache
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-9345?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-9345:
------------------------------------
Separate JVMs are not needed; starting 2 nodes in the same JVM also reproduces the problem.
I've tracked it down to the fact that JGroups uses IPv6 addresses by default on Linux. Even if {{UDP.mcast_addr}} is set to an IPv4 address, JGroups converts it to an IPv6 address. [IPv6 never fragments packets|https://labs.ripe.net/Members/gih/fragmenting-ipv6], so packets bigger than the MTU are dropped:
{noformat}16:33:40.584803 IP6 denulu-tp3 > ff0e::e406:708: frag (0|1448) 40689 > 46655: UDP, bad length 1453 > 1440{noformat}
(This happens on a wifi network with {{mtu=1500}}, according to ifconfig, and {{FRAG3.frag_size="1391"}})
It would be great if we could report this kind of misconfiguration to the user, but I don't think we can discover this without native code. WDYT [~belaban], maybe we should broadcast a huge message on startup and fail if we don't get a reply back from the coordinator?
As a workaround we can switch to the IPv4 stack with {{-Djava.net.preferIPv4Stack=true}}, switch to the TCP stack, or reduce the fragment size. Apparently 1280 is the minimum supported MTU in IPv6, so I suggest we change the default fragment size to {{1200}} (1280-(1453-1391)=1218) *and* use the TCP stack by default. We can add a paragraph to the user guide suggesting UDP with a higher fragment size as a riskier and possibly faster alternative.
> TimeoutException involving the org.infinispan.CONFIG cache
> ----------------------------------------------------------
>
> Key: ISPN-9345
> URL: https://issues.jboss.org/browse/ISPN-9345
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.3.0.Final
> Reporter: Gustavo Fernandes
>
> {noformat}
> Caused by: org.infinispan.commons.CacheException: Initial state transfer timed out for cache org.infinispan.CONFIG on jedha-64980
> at org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete(StateTransferManagerImpl.java:233)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.infinispan.commons.util.SecurityActions.lambda$invokeAccessibly$0(SecurityActions.java:79)
> {noformat}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9023) Eviction profile for “memory” type is different from “count” type
by Dmitry Katsubo (JIRA)
[ https://issues.jboss.org/browse/ISPN-9023?page=com.atlassian.jira.plugin.... ]
Dmitry Katsubo commented on ISPN-9023:
--------------------------------------
It turned out that the off-heap implementation crashes the JVM with SIGSEGV on our side; see ISPN-9346.
> Eviction profile for “memory” type is different from “count” type
> -----------------------------------------------------------------
>
> Key: ISPN-9023
> URL: https://issues.jboss.org/browse/ISPN-9023
> Project: Infinispan
> Issue Type: Bug
> Components: Eviction
> Affects Versions: 9.1.1.Final
> Environment: * Sun JDK 8 (1.8.0_92)
> * Linux x64
> Reporter: Dmitry Katsubo
> Priority: Minor
> Attachments: test.7z
>
>
> I would like to use Infinispan as a cache for binary image data (Java type {{byte[]}}). I assume that Infinispan's default strategy is LRU or a similar strategy that prefers to keep the most recently used/added cache entries.
> In my load-test scenario I make four calls per round to retrieve the same entity. The first call is expected never to hit the cache (because a unique entity is requested in each round), but the following three should always hit it. So the expected profile looks like this:
> {code}
> #1: MISS-HIT-HIT-HIT
> #2: MISS-HIT-HIT-HIT
> ...
> {code}
> It works perfectly (exactly as expected above) when I configure Infinispan with COUNT eviction type with some number of entities:
> {code}
> <local-cache name="imagesCache" statistics="true">
> <!-- lifespan="30 min" max-idle="30 min" interval="1 min" -->
> <expiration lifespan="1800000" max-idle="1800000" interval="60000" />
> <memory>
> <binary eviction="COUNT" size="500" />
> </memory>
> </local-cache>
> {code}
> The cache hit ratio based on the numbers I capture in the load test is {{(2952-738)/2952 = 0.75}}, which matches the statistics I observe via JMX.
> When I change the number of entities to keep in memory (<binary eviction="COUNT" size="100" />), the hit ratio does not change (as expected).
> After that I restarted the application with only this small change in the cache configuration:
> {code}
> <memory>
> <binary eviction="MEMORY" size="1000000" />
> </memory>
> {code}
> I would expect Infinispan to show the same profile; however, it turns out that once the given amount of memory is fully allocated, newly added entities do not evict old ones but are themselves evicted immediately. That means that after roughly 100 entities have been added, all four requests to the cache result in a miss (the ratio is now 0.58 instead of 0.75):
> {code}
> #1: MISS-HIT-HIT-HIT
> #2: MISS-HIT-HIT-HIT
> ...
> #101: MISS-MISS-MISS-MISS
> ...
> {code}
> If I increase the memory size, the hit ratio indeed comes closer to 0.75; however, I would like the hit ratio to stay the same irrespective of the memory size (provided the memory can hold at least a few entities).
> Once I configure passivation to a file store, the memory-based eviction policy starts to work as expected:
> {code}
> <local-cache name="imagesCache" statistics="true">
> <expiration lifespan="1800000" max-idle="1800000" interval="60000" />
> <persistence passivation="true">
> <file-store path="/var/cache/infinispan" purge="true">
> <write-behind thread-pool-size="5" />
> </file-store>
> </persistence>
> <memory>
> <binary eviction="MEMORY" size="1000000" />
> </memory>
> </local-cache>
> {code}
> but I would like to achieve the profile I need without passivation enabled.
> Additional information is provided [on the forum|https://stackoverflow.com/questions/48420712/eviction-for-memory-ty...].
> Bottom line: if the underlying component supports different eviction "modes", please expose this setting via XML so that the user of the library can control the mode.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9346) Off-heap implementation crashes JVM
by Dmitry Katsubo (JIRA)
Dmitry Katsubo created ISPN-9346:
------------------------------------
Summary: Off-heap implementation crashes JVM
Key: ISPN-9346
URL: https://issues.jboss.org/browse/ISPN-9346
Project: Infinispan
Issue Type: Enhancement
Components: Off Heap
Affects Versions: 9.1.1.Final
Reporter: Dmitry Katsubo
Attachments: logs.7z
There is a suspicion that the off-heap implementation has caused the attached JVM crashes (see [^logs.7z]).
There are no steps to reproduce. Perhaps somebody else has encountered the same issue.
Log extract:
{code}
SIGSEGV (0xb) at pc=0x00007fb9a2f358ad, pid=42551, tid=140426343024384
Problematic frame:
J 17187 C1 org.infinispan.container.offheap.BoundedOffHeapDataContainer.moveToEnd(J)V (137 bytes) @ 0x00007fb9a2f358ad [0x00007fb9a2f357a0+0x10d]
J 17205 C1 org.infinispan.container.offheap.BoundedOffHeapDataContainer.entryRetrieved(J)V (74 bytes) @ 0x00007fb9a2e02c2c [0x00007fb9a2e02960+0x2cc]
J 17295 C1 org.infinispan.container.offheap.OffHeapDataContainer.performGet(JLjava/lang/Object;)Lorg/infinispan/container/entries/InternalCacheEntry; (72 bytes) @ 0x00007fb9a310c5ac [0x00007fb9a310c000+0x5ac]
J 18853 C1 org.infinispan.container.offheap.OffHeapDataContainer.compute(Lorg/infinispan/commons/marshall/WrappedBytes;Lorg/infinispan/container/DataContainer$ComputeAction;)Lorg/infinispan/container/entries/InternalCacheEntry; (157 bytes) @ 0x00007fb9a250e814 [0x00007fb9a250dd00+0xb14]
J 18851 C1 org.infinispan.container.offheap.BoundedOffHeapDataContainer.compute(Ljava/lang/Object;Lorg/infinispan/container/DataContainer$ComputeAction;)Lorg/infinispan/container/entries/InternalCacheEntry; (10 bytes) @ 0x00007fb9a4658cbc [0x00007fb9a4658ae0+0x1dc]
J 18840 C1 org.infinispan.expiration.impl.ExpirationManagerImpl.handleInMemoryExpiration(Lorg/infinispan/container/entries/InternalCacheEntry;J)V (24 bytes) @ 0x00007fb9a1a2e55c [0x00007fb9a1a2e0e0+0x47c]
j org.infinispan.expiration.impl.ExpirationManagerImpl.processExpiration()V+95
j org.infinispan.expiration.impl.ExpirationManagerImpl$ScheduledTask.run()V+17
J 15131 C2 java.util.concurrent.Executors$RunnableAdapter.call()Ljava/lang/Object; (14 bytes) @ 0x00007fb9a3cef368 [0x00007fb9a3cee200+0x1168]
{code}
Infinispan v9.1.1.Final
Java v1.8.0_74-b02
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9116) Server marshallers/transcoders don't support whitelist when deserializing
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-9116?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-9116:
-----------------------------------
Status: Resolved (was: Pull Request Sent)
Fix Version/s: 9.4.0.Alpha1
9.4.0.Final
9.3.1.Final
Resolution: Done
> Server marshallers/transcoders don't support whitelist when deserializing
> -------------------------------------------------------------------------
>
> Key: ISPN-9116
> URL: https://issues.jboss.org/browse/ISPN-9116
> Project: Infinispan
> Issue Type: Bug
> Components: Server
> Affects Versions: 9.3.0.Final, 9.2.5.Final
> Reporter: Gustavo Fernandes
> Assignee: Gustavo Fernandes
> Fix For: 9.4.0.Alpha1, 9.4.0.Final, 9.3.1.Final
>
>
> The server deserializes binary payloads and JSON/XML payloads without any checks. This happens when:
> * Compatibility mode is on
> * Remote listeners with filters
> * Remote iteration with filters
> * Remote tasks with parameters
> * Server is configured with MediaType.APPLICATION_OBJECT
> * Potentially with JSON and XML contents sent via REST
> The remote endpoints affected are REST, Hot Rod and Memcached.
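> For comparison, the Hot Rod *client* already exposes a deserialization whitelist; a minimal sketch (assuming the 9.x client API, with placeholder host, port and package patterns) of the kind of control the server-side marshallers/transcoders should honour as well:
> {code:java}
> import org.infinispan.client.hotrod.RemoteCacheManager;
> import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
>
> public class WhitelistedClient {
>     public static RemoteCacheManager create() {
>         ConfigurationBuilder builder = new ConfigurationBuilder();
>         builder.addServer().host("127.0.0.1").port(11222);
>         // Only classes matching these regular expressions may be deserialized by this client.
>         builder.addJavaSerialWhiteList("org\\.example\\..*", "java\\.util\\..*");
>         return new RemoteCacheManager(builder.build());
>     }
> }
> {code}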
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9345) TimeoutException involving the org.infinispan.CONFIG cache
by Gustavo Fernandes (JIRA)
Gustavo Fernandes created ISPN-9345:
---------------------------------------
Summary: TimeoutException involving the org.infinispan.CONFIG cache
Key: ISPN-9345
URL: https://issues.jboss.org/browse/ISPN-9345
Project: Infinispan
Issue Type: Bug
Components: Core
Affects Versions: 9.3.0.Final
Reporter: Gustavo Fernandes
{noformat}
Caused by: org.infinispan.commons.CacheException: Initial state transfer timed out for cache org.infinispan.CONFIG on jedha-64980
at org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete(StateTransferManagerImpl.java:233)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.infinispan.commons.util.SecurityActions.lambda$invokeAccessibly$0(SecurityActions.java:79)
{noformat}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-9344) org.infinispan.commons.marshall.NotSerializableException when use DeltaSpike
by Andrey Grigoriev (JIRA)
Andrey Grigoriev created ISPN-9344:
--------------------------------------
Summary: org.infinispan.commons.marshall.NotSerializableException when use DeltaSpike
Key: ISPN-9344
URL: https://issues.jboss.org/browse/ISPN-9344
Project: Infinispan
Issue Type: Bug
Affects Versions: 9.2.4.Final
Environment: Wildfly 13.0.0.Final (Infinispan 9.2.4)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
Apache DeltaSpike: 1.8.2
{code:java}
<dependency>
<groupId>org.apache.deltaspike.modules</groupId>
<artifactId>deltaspike-jsf-module-api</artifactId>
<version>1.8.2</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>org.apache.deltaspike.modules</groupId>
<artifactId>deltaspike-jsf-module-impl</artifactId>
<version>1.8.2</version>
<scope>runtime</scope>
</dependency>
{code}
Reporter: Andrey Grigoriev
I have a project on WildFly 10, and I use the DeltaSpike JSF module (for a @WindowScoped bean). When I migrated to WildFly 13, I got the following errors:
{code:java}
ERROR [org.infinispan.interceptors.impl.InvocationContextInterceptor] (default task-2) ISPN000136: Error executing command PrepareCommand, writing keys [SessionCreationMetaDataKey(G7Xi_IuajeE1_Nh517GPjinnDh24LkWB1G8wn0TN),
SessionAttributesKey(G7Xi_IuajeE1_Nh517GPjinnDh24LkWB1G8wn0TN), SessionAccessMetaDataKey(G7Xi_IuajeE1_Nh517GPjinnDh24LkWB1G8wn0TN)]:
org.infinispan.commons.marshall.NotSerializableException: java.lang.ref.WeakReference
Caused by: an exception which occurred:
in field org.jboss.weld.bean.builtin.BeanManagerProxy.reloaderRef
in object org.jboss.weld.bean.builtin.BeanManagerProxy@f0640946
in field org.apache.deltaspike.core.util.context.ContextualStorage.beanManager
in object org.apache.deltaspike.core.util.context.ContextualStorage@1664c5af
in object org.apache.deltaspike.core.util.context.ContextualStorage@1664c5af
in field org.apache.deltaspike.core.impl.scope.AbstractBeanHolder.storageMap
in object org.apache.deltaspike.core.impl.scope.window.WindowBeanHolder@1c45f46e
in field org.jboss.weld.contexts.SerializableContextualInstanceImpl.instance
in object org.jboss.weld.contexts.SerializableContextualInstanceImpl@2a240ffd
in object org.jboss.weld.contexts.SerializableContextualInstanceImpl@2a240ffd
{code}
I asked the DeltaSpike team about this, but they say that it is an Infinispan problem:
{code:java}
<@struberg> seems like a weld bug if an injected BeanManager is not Serializable
<@struberg> or the serialisation logic in Infinispan cannot deal with it
< manovotn> hmm I can see BeanManagerProxy there, which is Weld's serializable BM version, so I would say it's Infinispan problem
< manovotn> I would try asking there first
<@struberg> point is, I don't think it's a DeltaSpike problem
{code}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)