[JBoss JIRA] (ISPN-12200) Fix direct landing to configuration page
by Katia Aresti (Jira)
Katia Aresti created ISPN-12200:
-----------------------------------
Summary: Fix direct landing to configuration page
Key: ISPN-12200
URL: https://issues.redhat.com/browse/ISPN-12200
Project: Infinispan
Issue Type: Bug
Components: Console
Affects Versions: 11.0.1.Final
Reporter: Katia Aresti
Assignee: Katia Aresti
When we arrive directly at `http://localhost:9000/console/container/default/configurations/`,
the React app does not load.
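For context: a common cause of this symptom in single-page apps is that a deep link hits the static file server directly and nothing falls back to index.html, so the client-side router never gets a chance to render the route. A minimal illustrative sketch of such a fallback; all names here are hypothetical, not taken from the Infinispan console:

```java
import java.util.Set;

// Hypothetical sketch: resolve a requested console path to the asset the
// server should return. A deep link such as
// /console/container/default/configurations/ matches no static asset, so
// the server must fall back to index.html and let the React router render
// the page. Asset names below are illustrative only.
class SpaFallback {
    // Static assets the server actually has on disk (illustrative).
    private static final Set<String> ASSETS =
            Set.of("/console/index.html", "/console/main.js", "/console/main.css");

    static String resolve(String requestPath) {
        // Exact asset match: serve the file itself.
        if (ASSETS.contains(requestPath)) {
            return requestPath;
        }
        // Any other /console/... path is a client-side route:
        // serve index.html so the single-page app can boot.
        if (requestPath.startsWith("/console/")) {
            return "/console/index.html";
        }
        return null; // not a console request; handled elsewhere (e.g. 404)
    }
}
```

With a resolver like this, a direct landing on the configurations page loads index.html first, and the React router then displays the requested view.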
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
3 years, 9 months
[JBoss JIRA] (ISPN-11176) XSite Max Idle
by Will Burns (Jira)
[ https://issues.redhat.com/browse/ISPN-11176?page=com.atlassian.jira.plugi... ]
Will Burns commented on ISPN-11176:
-----------------------------------
{quote}I would have liked more details about "valid when invoked" and "valid access", specifically how clock skew and latency between sites/nodes are addressed (or not relevant).
{quote}
I have changed gears so that the system clock time does not really matter as everything should just be a replicated touch. There is no reason I can think of currently as to why we need an offset value. This makes time a lot easier as we don't have to worry about clock skews.
{quote}Are there any differences in consistency depending on whether the remote-site touch command is sync or async?
Maybe if the remote node crashes before actually sending the touch command?
{quote}
Even if the remote site node that sends the async touch crashes, at worst it means we could have another xsite max idle check on the remote site asking the originator site. Consistency should be fine, just possibly increased latency for a given request.
{quote}Couldn't this origin-site touch command be async as well, maybe using the IRAC version to avoid touching the wrong value?
{quote}
I kept it as sync to be in line with clustered max idle, where it is also sync. Making it async without changing part of xsite max idle would cause some consistency issues. But this may need to be changed per other comments.
{quote}Maybe the last local-site touch command from NodeB can be async?
{quote}
This is the same as the prior comment?
{quote}I thought this was the scenario that the IRAC version was supposed to solve?
Actually I'm not sure how this works with the non-x-site clustered max-idle, either.
{quote}
I think I should have been more specific as to what I meant by "an issue". In this case the backup and primary will both send an xsite touch command, instead of just 1.
This works with non-x-site because both will send a remove-expired command; the first will remove the entry and the second will be a no-op.
{quote}Or maybe only the primary should ever send remove expired commands, and if the topology changes it should not retry?
{quote}
Yes, sorry that is what I meant by my sentence.
{quote}If you mean that we say the old value has not expired just because there's now a new value, then yes, it sounds like a consistency problem.
I'm not sure what the IRAC consistency guarantees are though, maybe something similar can happen even without max-idle.
{quote}
Yes, getting the previous value would require a write of any expired entry to also do a touch or fetch the previous value from the other site. This seems like a lot of overhead and can be documented.
Also xsite can cause previous value to not be correct if the other site has a write that has not yet been replicated. So I personally feel like the expiration not returning the correct one should be okay.
{quote}Not sure what you mean here, are you talking about listeners/cluster listeners/client listeners w/ event factory/query?
{quote}
This one was in reference to listeners (which is all above). IRAC with conflict of writes will miss previous value for different writes as it wasn't replicated to the other site.
{quote}I would say yes, we do it for cluster max-idle.
The previous value is also needed for functional commands, both those in FunctionalMap and compute/merge/etc.
{quote}
This is similar to the previous couple points. Given the previous value guarantees of IRAC, I figured it would be fine to do the same for max idle with xsite.
{quote}Probably yes.
{quote}
Yeah, I need to think through it a bit more, but I believe we need to do the touch command only on primary and to fail the command as necessary so it isn't written to primary or backup.
{quote}Probably no.
{quote}
Unfortunately, supporting this will be more costly as we will need to indicate which value a read maps to for the remote site. It could be that just a version is sufficient, though. I have to think more.
{quote}
Nodes crashing during any of these scenarios will complicate things, but I haven't thought really hard about it (even though I think it might be a problem when it comes to the touch command being async).
{quote}
Agreed, will have to think through those as well, but just took a first stab when topology is stable at least.
> XSite Max Idle
> --------------
>
> Key: ISPN-11176
> URL: https://issues.redhat.com/browse/ISPN-11176
> Project: Infinispan
> Issue Type: Enhancement
> Components: Cross-Site Replication, Expiration
> Reporter: Will Burns
> Assignee: Will Burns
> Priority: Major
> Fix For: 12.0.0.Final
>
>
> Max idle expiration currently doesn't work with xsite. That is, if an entry is written and replicated to both sites but only one site ever reads it, then when clients need to read the value from the other site it will already be expired (assuming the max idle time has elapsed).
> There are a few ways we can do this.
> 1. Keep access times local to every site. When a site finds an entry is expired it asks the other site(s) if it has a more recent access. If a site is known to have gone down we should touch all entries, since they may not have updated access times. Requires very little additional xsite communication.
> 2. Batch touch commands and only send every so often. Has window of loss, but should be small. Requires more site usage. Wouldn't work for really low max idle times as an entry could expire before the touch command is replicated.
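The batching idea in option 2 can be sketched roughly as follows. TouchBatcher, recordAccess and sendBatch are hypothetical names for illustration, not Infinispan API:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of option 2: accumulate touched keys locally and
// flush them to the remote site in one batch every so often. None of
// these names exist in Infinispan; this only illustrates the idea.
class TouchBatcher {
    private final Set<String> pending = ConcurrentHashMap.newKeySet();

    // Called on every read; just records the key, no cross-site traffic.
    void recordAccess(String key) {
        pending.add(key);
    }

    // Called periodically (e.g. by a scheduled executor). Snapshots the
    // pending set and ships it as a single cross-site touch command.
    // Accesses recorded between the snapshot and the next flush are the
    // "window of loss" the description mentions: an entry with a very low
    // max idle could expire remotely before its touch is replicated.
    Set<String> flush() {
        Set<String> batch = Set.copyOf(pending);
        pending.removeAll(batch);
        sendBatch(batch);
        return batch;
    }

    protected void sendBatch(Set<String> batch) {
        // Placeholder for the actual xsite touch command.
    }
}
```

The trade-off versus option 1 is visible in the sketch: reads stay purely local and cheap, but correctness depends on the flush interval being well below the smallest max idle time.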
[JBoss JIRA] (ISPN-12197) HotRodNonOwnerStatisticsTest failures
by Gustavo Fernandes (Jira)
[ https://issues.redhat.com/browse/ISPN-12197?page=com.atlassian.jira.plugi... ]
Gustavo Fernandes updated ISPN-12197:
-------------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> HotRodNonOwnerStatisticsTest failures
> -------------------------------------
>
> Key: ISPN-12197
> URL: https://issues.redhat.com/browse/ISPN-12197
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite
> Affects Versions: 12.0.0.Dev01
> Reporter: Ryan Emerson
> Assignee: Ryan Emerson
> Priority: Major
> Fix For: 12.0.0.Dev02
>
>
> The {{HotRodNonOwnerStatisticsTest}} is failing because it enables the jmx statistics but does not explicitly configure a {{MBeanServerLookup}} instance as required by the {{TestCacheManagerFactory}}.
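The general shape of the missing piece can be sketched with plain JMX. This standalone class only mimics the assumed {{MBeanServerLookup}} contract (a getMBeanServer(Properties) method); it is not the Infinispan interface itself, and the real fix would supply such a lookup to {{TestCacheManagerFactory}} when enabling statistics:

```java
import java.util.Properties;
import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;

// Sketch of the kind of lookup the test suite requires. Infinispan's real
// contract is org.infinispan.commons.jmx.MBeanServerLookup; this
// standalone version mimics its assumed shape so the idea is runnable
// here without the Infinispan test jars.
class PerTestMBeanServerLookup {
    // Each lookup instance owns one isolated MBeanServer, so tests running
    // in parallel never collide on the platform server by registering the
    // same JMX ObjectName twice.
    private final MBeanServer server = MBeanServerFactory.newMBeanServer();

    MBeanServer getMBeanServer(Properties properties) {
        return server;
    }
}
```

Using an isolated server per test (rather than the platform MBeanServer) is what makes enabling JMX statistics safe in a shared test JVM.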
[JBoss JIRA] (ISPN-12199) ServerTask - ALL_NODES - No marshaller registered for Java type java.util.Collections$SynchronizedRandomAccessList
by Anatole Lefort (Jira)
[ https://issues.redhat.com/browse/ISPN-12199?page=com.atlassian.jira.plugi... ]
Anatole Lefort updated ISPN-12199:
----------------------------------
Description:
When deploying and running {{ServerTask}}s in {{ALL_NODES}} execution mode, the task execution fails and the following server error is returned to the hotrod client.
{noformat}
Exception in thread "main" org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=15 returned server error (status=0x85): org.infinispan.commons.marshall.MarshallingException: No marshaller registered for Java type java.util.Collections$SynchronizedRandomAccessList
at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:329)
at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:168)
at org.infinispan.client.hotrod.impl.transport.netty.HeaderDecoder.decode(HeaderDecoder.java:140)
at org.infinispan.client.hotrod.impl.transport.netty.HintedReplayingDecoder.callDecode(HintedReplayingDecoder.java:94)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:792)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:475)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{noformat}
I suspect {{org.infinispan.server.tasks.DistributedServerTaskRunner}} tries to return to the hotrod client a {{java.util.Collections.SynchronizedRandomAccessList}} it created to collect return values from the {{ClusterExecutor}}. I could not work around this by requesting a specific {{DataFormat}} before executing the task; the hotrod client would always fail to unmarshal the received data.
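The suspected origin of the type in the error can be confirmed with plain JDK code: wrapping an ArrayList with Collections.synchronizedList yields the private SynchronizedRandomAccessList class seen in the stack trace. The safeCopy helper below is only an illustration of a possible server-side fix (copy into a plain ArrayList before marshalling), not existing Infinispan code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Demonstrates where the unmarshallable type in the error comes from.
// ArrayList implements RandomAccess, so Collections.synchronizedList
// returns the private java.util.Collections$SynchronizedRandomAccessList
// wrapper, which no Infinispan marshaller is registered for.
class SyncListType {
    static String wrappedTypeName() {
        List<String> results = Collections.synchronizedList(new ArrayList<>());
        return results.getClass().getName(); // java.util.Collections$SynchronizedRandomAccessList
    }

    // Illustrative workaround: copying into a plain ArrayList restores a
    // well-known, marshallable standard type before returning results.
    static List<String> safeCopy(List<String> results) {
        return new ArrayList<>(results);
    }
}
```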
> ServerTask - ALL_NODES - No marshaller registered for Java type java.util.Collections$SynchronizedRandomAccessList
> ------------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-12199
> URL: https://issues.redhat.com/browse/ISPN-12199
> Project: Infinispan
> Issue Type: Bug
> Components: Tasks
> Affects Versions: 11.0.0.Final
> Reporter: Anatole Lefort
> Priority: Major
>
> When deploying and running {{ServerTask}}s in {{ALL_NODES}} execution mode, the task execution fails and the following server error is returned to the hotrod client.
> {noformat}
> Exception in thread "main" org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=15 returned server error (status=0x85): org.infinispan.commons.marshall.MarshallingException: No marshaller registered for Java type java.util.Collections$SynchronizedRandomAccessList
> at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:329)
> at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:168)
> at org.infinispan.client.hotrod.impl.transport.netty.HeaderDecoder.decode(HeaderDecoder.java:140)
> at org.infinispan.client.hotrod.impl.transport.netty.HintedReplayingDecoder.callDecode(HintedReplayingDecoder.java:94)
> at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
> at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
> at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
> at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
> at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:792)
> at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:475)
> at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
> at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
> at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> I suspect {{org.infinispan.server.tasks.DistributedServerTaskRunner}} tries to return to the hotrod client a {{java.util.Collections.SynchronizedRandomAccessList}} it created to collect return values from the {{ClusterExecutor}}. I could not work around this by requesting a specific {{DataFormat}} before executing the task; the hotrod client would always fail to unmarshal the received data.