[JBoss JIRA] (ISPN-7624) Change CacheLoaderInterceptor to ignore in memory for certain operations when passivation is not enabled
by William Burns (Jira)
[ https://issues.jboss.org/browse/ISPN-7624?page=com.atlassian.jira.plugin.... ]
William Burns updated ISPN-7624:
--------------------------------
Fix Version/s: (was: 9.4.2.Final)
> Change CacheLoaderInterceptor to ignore in memory for certain operations when passivation is not enabled
> --------------------------------------------------------------------------------------------------------
>
> Key: ISPN-7624
> URL: https://issues.jboss.org/browse/ISPN-7624
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Reporter: William Burns
> Assignee: William Burns
> Priority: Major
>
> The CacheLoaderInterceptor currently reads all in-memory entries and then the store for bulk operations. This can be changed to only load from the store when passivation is not used, since in that case the store should contain every entry, including those that are in memory.
> The only issue is if Flag.SKIP_CACHE_STORE was used on a write. In this case we could store a boolean in the interceptor so that if this flag is ever used we always read both. Very few users use this flag, so it should improve performance a bit and entries should be seen more consistently.
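A minimal sketch of the proposed change, using a hypothetical class and method names rather than the real CacheLoaderInterceptor code:
{code:java}
import java.util.Set;

import org.infinispan.context.Flag;

// Hypothetical sketch, not the actual interceptor implementation.
public class BulkLoadDecisionSketch {

   // taken from the configured persistence settings
   private final boolean passivation;

   // flipped once a write with Flag.SKIP_CACHE_STORE is seen; after that the
   // store can no longer be assumed to hold every entry
   private volatile boolean skipCacheStoreSeen;

   public BulkLoadDecisionSketch(boolean passivation) {
      this.passivation = passivation;
   }

   public void onWrite(Set<Flag> flags) {
      if (flags != null && flags.contains(Flag.SKIP_CACHE_STORE)) {
         skipCacheStoreSeen = true;
      }
   }

   public boolean storeOnlyForBulkOperations() {
      // Without passivation the store is a superset of memory, so bulk reads
      // can skip the in-memory container, unless SKIP_CACHE_STORE was ever used.
      return !passivation && !skipCacheStoreSeen;
   }
}
{code}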
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
6 years, 1 month
[JBoss JIRA] (ISPN-9690) EXCEPTION eviction strategy should not require transactions
by Galder Zamarreño (Jira)
[ https://issues.jboss.org/browse/ISPN-9690?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño edited comment on ISPN-9690 at 11/7/18 11:06 AM:
------------------------------------------------------------------
A chat discussion:
{code}
William Burns: @Galder responded to your email, however we can talk here maybe
William Burns: actually @Galder I can see not requiring transactions when the cache mode is SIMPLE, LOCAL or INVALIDATION
William Burns: but that is not an option currently, would have to be added
Galder: Hmmm, we want this for DIST
William Burns: yeah then we would need two phase commit like I mentioned in my email
William Burns: since back up owner may reject a write, but primary may be fine
William Burns: unless we want to relax that requirement
Galder: Hmmmm
Galder: Assuming this is DIST, can't we start rejecting before the limit is reached?
William Burns: the way it currently works is each owner checks if their current size + new entry size > max size and if any of the owners reject, we throw the ContainerFullException
Galder: I mean: could an owner guess that adding something might end up topping non-owners and reject it?
Dan: @Galder @William Burns we could split the limit per segment and throw an exception once a segment is full
Galder: @William Burns Do you understand what I mean?
Galder: @Dan Don't think that'd solve the problem
Galder: You could still have issues where the primary owner is OK but the backup owner ends up failing due to segment full?
William Burns: @Galder do you mean that each node has an idea of how full another owner is, updated every so often in the background?
Galder: Or is the segment size fixed?
Galder: @William Burns Maybe... Or some other way that does not require communication
Galder: Segment based CH means we have more or less the same amount of data everywhere...
Galder: Or at least relatively similar number of entries in each node
Galder: In essence, could each primary owner probabilistically decide that doing a put would topple others?
Dan: @Galder yeah, the number of entries is typically within 10% of the average
Galder: There you go, 10%
Galder: I mean with EXCEPTION strategy only
Dan: But that's for all the entries on a node, a segment always has the same entries on all the owners that have it
Galder: Yeah but each node has different segments
Galder: That's where the variance comes in, right?
Dan: yes, the segments are different
Dan: sometimes by more than 10% :)
Galder: @William Burns I'm not necessarily suggesting each node keeps track of others, i'm more suggesting that given how full a node is, it could decide that others are likely full...
Dan: I was thinking of computing a fixed limit per segment, but you're right, that depends on how many nodes there are in the cluster
William Burns: So we are thinking of some fixed percentage, that if a node is actually current > (max *(1-%)) it would throw the exception and not consult other nodes. In that case I would say why even have this %, just let the user configure the max directly knowing this is the behavior
William Burns: and that some nodes can technically go over
William Burns: this is also very problematic if you have a lot of writes to same nodes
William Burns: ie. 3 nodes A, B, C if majority of writes are always to A and B and not C, C could get OOM
Galder: CH should take care of too many writes in some nodes
Galder: But true, some keys could be written more
Dan: @William Burns do you mean only throw the exception on the primary owner, and accept any write on the backup?
William Burns: well that is what Galder is after, so we don't have to use transactions
William Burns: is my assumption
Galder:
So we are thinking of some fixed percentage, that if a node is actually current > (max *(1-%)) it would throw the exception and not consult other nodes. In that case I would say why even have this %, just let the user configure the max directly knowing this is the behavior
But the user already configures the max, doesn't it?
William Burns: yes and we use transactions to ensure max is never reached on any node
Galder: Right, but using transactions for this seems too heavyweight
Dan: @Galder transactions are the only way to make it precise, so we only insert an entry if all the owners have room for it
William Burns: unfortunately without some sort of two phase operation we can end up with some nodes updated and some others not - and I figured don't reinvent the wheel
Dan: Any non-2PC solution will have to either allow some inserts to fail on some nodes and succeed on others, or have some nodes go over the configured max
William Burns: now I am open to possibly allowing others to go over, but we have to make sure the user knows this to possibly lower the max
Galder:
Any non-2PC solution will have to either allow some inserts to fail on some nodes and succeed on others, or have some nodes go over the configured max
Would that be fine if partition handling was enabled?
Dan: @William Burns I've been dreaming about lite transactions that don't need a xid and all that baggage ever since Mircea removed support for mixing tx and non-tx operations in the same cache :)
Dan: @Galder
Would that be fine if partition handling was enabled?
Depends on what you mean by "fine" :)
William Burns: @Dan yeah tbh I was thinking that would be nice for this use case
Galder: So when a node fails due to memory issues, we switch the readiness probe to say node not healthy... then the user gives more memory to the pods... and one at a time they get restarted... partition handling would kick in and address partial replications
Galder: ^ I might be day dreaming :see_no_evil:
William Burns: hrmm, I think just regular rebalance should repair it
William Burns: don't need partition handling
William Burns: assuming we haven't lost >= numOwners
William Burns: and then nothing can save it :D
Galder: Exactly
Galder: TBH, the above should be done by the autoscaler
Dan:
assuming we haven't lost >= numOwners
Yeah, that's the big assumption -- if one node is shut down because it wanted to insert too many entries, those entries will be redirected somewhere else
Galder: Rather than manually by someone
Dan: So an exception in one node may trigger the restart of the entire cluster
William Burns: yeah actually the more I think about it losing that node could lose data then
William Burns: since state transfer writes may be blocked
William Burns: if this fills up another node
Dan: @Galder does OS/K8S keep pods around if they're not healthy?
Galder: Hmmmm, actually, I think it tries to restart them until they're healthy
Dan: Yeah, that's what I thought
William Burns: this is the age old issue of eviction without a store can lose data on a rehash, irrespective of numOwners
William Burns: @Galder will you have a store?
Galder: Actually
Galder: @Dan Depends on the config...
Galder: @Dan Depends how many times the readiness probe fails, at which point it'd restart it
Galder: https://kubernetes.io/docs/tasks/configure-pod-container/configure-livene...
Galder:
@Galder will you have a store?
Not by default
William Burns: so in that case I would say we should err on the side of making sure, as much as possible, that no node has too much
Galder: Ok
Galder: What if you had a store?
William Burns: if we had a store then partial updates should be fine
William Burns: ie. primary writes the value and backup ignores it
Galder: But the stores would also have partial updates?
William Burns: but we would have to implement writing to a store but not memory
Galder: exactly
Galder: Ok
Galder: I'll create a JIRA and summarise the discussion for now
Galder: I have to head out in a few
William Burns: alright
William Burns: we could even optimize it for shared stores
William Burns: :D
William Burns: actually that is interesting...
William Burns: we could have a store mode with shared where we never write the entry to memory on backups
Galder: Local stores are preferable
Galder: One less thing to worry about
William Burns: yeah agreed for this use case, I was just thinking more in general
Galder: @Dan @William Burns We want the default cache in data grid service to use EXCEPTION strategy, so that data is preserved (as opposed to the cache service where it evicts data, it's a cache), so would you make the default cache transactional? Or have a local file based store instead?
Galder: I'd go for the latter, just wondering if you see any case where the former would be better
Galder: Actually, with the current state you still need transactions because we don't write to store if memory full
William Burns: yeah latter is only way to not lose data
William Burns: former may be faster? would have to confirm
Galder: Yeah, but local file based persistence alone is not enough today to make EXCEPTION strategy work?
William Burns: no, tbh this seems like a different eviction mode at this point with a store
William Burns: cause you would never get an EXCEPTION
William Burns: tbh, I don't know if this is worth it
Galder: true
William Burns: this is essentially regular eviction with a store
William Burns: just changes what elements are in memory
Galder: True...
William Burns: yeah now I remember before that I thought EXCEPTION based eviction and stores don't make that much sense together
{code}
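For reference, a rough sketch of the per-owner check described in the chat, using hypothetical names (the real interceptor code differs); a rejection on any owner surfaces as a ContainerFullException:
{code:java}
// Each owner applies a check like this before accepting a write.
final class OwnerSizeCheckSketch {
   private final long maxSize;   // configured memory bound for this node
   private long currentSize;     // estimated bytes currently held

   OwnerSizeCheckSketch(long maxSize) {
      this.maxSize = maxSize;
   }

   synchronized void addEntry(long newEntrySize) {
      if (currentSize + newEntrySize > maxSize) {
         // the real code throws ContainerFullException here; with the current
         // design the outcome is coordinated across owners via a transaction
         throw new IllegalStateException("Container full: " +
               (currentSize + newEntrySize) + " > " + maxSize);
      }
      currentSize += newEntrySize;
   }
}
{code}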
was (Author: galder.zamarreno):
A chat discussion:
{code}
William Burns: @Galder responded to your email, however we can talk here maybe
William Burns: actually @Galder I can see not requiring transactions when the cache mode is SIMPLE, LOCAL or INVALIDATION
William Burns: but that is not an option currently, would have to be added
Galder: Hmmm, we want this for DIST
William Burns: yeah then we would need two phase commit like I mentioned in my email
William Burns: since back up owner may reject a write, but primary may be fine
William Burns: unless we want to relax that requirement
Galder: Hmmmm
Galder: Assuming this is DIST, can't we start rejecting before the limit is reached?
William Burns: the way it currently works is each owner checks if their current size + new entry size > max size and if any of the owners reject, we throw the ContainerFullException
Galder: I mean: could an owner guess that adding something might end up topping non-owners and reject it?
Dan: @Galder @William Burns we could split the limit per segment and throw an exception once a segment is full
Galder: @William Burns Do you understand what I mean?
Galder: @Dan Don't think that'd solve the problem
Galder: You could still have issues where the primary owner is OK but the backup owner ends up failing due to segment full?
William Burns: @Galder do you mean that each node has an idea of how full another owner is, updated every so often in the background?
Galder: Or is the segment size fixed?
Galder: @William Burns Maybe... Or some other way that does not require communication
Galder: Segment based CH means we have more or less the same amount of data everywhere...
Galder: Or at least relatively similar number of entries in each node
Galder: In essence, could each primary owner probabilistically decide that doing a put would topple others?
Dan: @Galder yeah, the number of entries is typically within 10% of the average
Galder: There you go, 10%
Galder: I mean with EXCEPTION strategy only
Dan: But that's for all the entries on a node, a segment always has the same entries on all the owners that have it
Galder: Yeah but each node has different segments
Galder: That's where the variance comes in, right?
Dan: yes, the segments are different
Dan: sometimes by more than 10% :)
Galder: @William Burns I'm not necessarily suggesting each node keeps track of others, i'm more suggesting that given how full a node is, it could decide that others are likely full...
Dan: I was thinking of computing a fixed limit per segment, but you're right, that depends on how many nodes there are in the cluster
William Burns: So we are thinking of some fixed percentage, that if a node is actually current > (max *(1-%)) it would throw the exception and not consult other nodes. In that case I would say why even have this %, just let the user configure the max directly knowing this is the behavior
William Burns: and that some nodes can technically go over
William Burns: this is also very problematic if you have a lot of writes to same nodes
William Burns: ie. 3 nodes A, B, C if majority of writes are always to A and B and not C, C could get OOM
Galder: CH should take care of too many writes in some nodes
Galder: But true, some keys could be written more
Dan: @William Burns do you mean only throw the exception on the primary owner, and accept any write on the backup?
William Burns: well that is what Galder is after, so we don't have to use transactions
William Burns: is my assumption
Galder:
So we are thinking of some fixed percentage, that if a node is actually current > (max *(1-%)) it would throw the exception and not consult other nodes. In that case I would say why even have this %, just let the user configure the max directly knowing this is the behavior
But the user already configures the max, doesn't it?
William Burns: yes and we use transactions to ensure max is never reached on any node
Galder: Right, but using transactions for this seems too heavyweight
Dan: @Galder transactions are the only way to make it precise, so we only insert an entry if all the owners have room for it
William Burns: unfortunately without some sort of two phase operation we can end up with some nodes updated and some others not - and I figured don't reinvent the wheel
Dan: Any non-2PC solution will have to either allow some inserts to fail on some nodes and succeed on others, or have some nodes go over the configured max
William Burns: now I am open to possibly allowing others to go over, but we have to make sure the user knows this to possibly lower the max
Galder:
Any non-2PC solution will have to either allow some inserts to fail on some nodes and succeed on others, or have some nodes go over the configured max
Would that be fine if partition handling was enabled?
Dan: @William Burns I've been dreaming about lite transactions that don't need a xid and all that baggage ever since Mircea removed support for mixing tx and non-tx operations in the same cache :)
Dan: @Galder
Would that be fine if partition handling was enabled?
Depends on what you mean by "fine" :)
William Burns: @Dan yeah tbh I was thinking that would be nice for this use case
Galder: So when a node fails due to memory issues, we switch the readiness probe to say node not healthy... then the user gives more memory to the pods... and one at a time they get restarted... partition handling would kick in and address partial replications
Galder: ^ I might be day dreaming :see_no_evil:
William Burns: hrmm, I think just regular rebalance should repair it
William Burns: don't need partition handling
William Burns: assuming we haven't lost >= numOwners
William Burns: and then nothing can save it :D
Galder: Exactly
Galder: TBH, the above should be done by the autoscaler
Dan:
assuming we haven't lost >= numOwners
Yeah, that's the big assumption -- if one node is shut down because it wanted to insert too many entries, those entries will be redirected somewhere else
Galder: Rather than manually by someone
Dan: So an exception in one node may trigger the restart of the entire cluster
William Burns: yeah actually the more I think about it losing that node could lose data then
William Burns: since state transfer writes may be blocked
William Burns: if this fills up another node
Dan: @Galder does OS/K8S keep pods around if they're not healthy?
Galder: Hmmmm, actually, I think it tries to restart them until they're healthy
Dan: Yeah, that's what I thought
William Burns: this is the age old issue of eviction without a store can lose data on a rehash, irrespective of numOwners
William Burns: @Galder will you have a store?
Galder: Actually
Galder: @Dan Depends on the config...
Galder: @Dan Depends how many times the readiness probe fails, at which point it'd restart it
Galder: https://kubernetes.io/docs/tasks/configure-pod-container/configure-livene...
Galder:
@Galder will you have a store?
Not by default
William Burns: so in that case I would say we should err on the side of making sure, as much as possible, that no node has too much
Galder: Ok
Galder: What if you had a store?
William Burns: if we had a store then partial updates should be fine
William Burns: ie. primary writes the value and backup ignores it
Galder: But the stores would also have partial updates?
William Burns: but we would have to implement writing to a store but not memory
Galder: exactly
Galder: Ok
Galder: I'll create a JIRA and summarise the discussion for now
Galder: I have to head out in a few
William Burns: alright
William Burns: we could even optimize it for shared stores
William Burns: :D
William Burns: actually that is interesting...
William Burns: we could have a store mode with shared where we never write the entry to memory on backups
Galder: Local stores are preferable
Galder: One less thing to worry about
William Burns: yeah agreed for this use case, I was just thinking more in general
{code}
> EXCEPTION eviction strategy should not require transactions
> -----------------------------------------------------------
>
> Key: ISPN-9690
> URL: https://issues.jboss.org/browse/ISPN-9690
> Project: Infinispan
> Issue Type: Enhancement
> Reporter: Galder Zamarreño
> Priority: Major
>
> Unfortunately we need two phase commit to guarantee consistency. Imagine the case where one of the owners says the write is okay and another says no; without two phase commit there is no way to guarantee that either all or none of the writes are completed.
> One possibility would be for a node to deny a write if it expects it would result in other nodes running out of memory. However, this could still fail if some keys store more data than others. It would require Infinispan to use some probabilistic method to decide when a node would run out of memory.
> Another way would be to have a local (or shared) persistent store attached. In that case, if storing the data would make a backup owner run out of memory, it would not store it in memory but write it to the persistent layer. If the node is restarted with new memory settings, the persistent stores would be consistent and the rebalance would put the data back in memory.
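A small illustrative sketch of the store-fallback idea described above; the type and method names are hypothetical, not an existing Infinispan API:
{code:java}
// Sketch of a backup owner that keeps data only on the persistent layer when
// the in-memory limit would be exceeded.
interface MemoryBoundedWriter<K, V> {
   boolean wouldExceedMemoryLimit(K key, V value);
   void storeInMemory(K key, V value);
   void writeToPersistentStore(K key, V value);

   default void write(K key, V value) {
      if (wouldExceedMemoryLimit(key, value)) {
         // keep the data, but only in the persistent store; a later
         // restart/rebalance can bring it back into memory
         writeToPersistentStore(key, value);
      } else {
         storeInMemory(key, value);
         writeToPersistentStore(key, value); // write-through to the local store
      }
   }
}
{code}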
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
6 years, 1 month
[JBoss JIRA] (ISPN-9690) EXCEPTION eviction strategy should not require transactions
by Galder Zamarreño (Jira)
[ https://issues.jboss.org/browse/ISPN-9690?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño commented on ISPN-9690:
----------------------------------------
A chat discussion:
{code}
William Burns: @Galder responded to your email, however we can talk here maybe
William Burns: actually @Galder I can see not requiring transactions when the cache mode is SIMPLE, LOCAL or INVALIDATION
William Burns: but that is not an option currently, would have to be added
Galder: Hmmm, we want this for DIST
William Burns: yeah then we would need two phase commit like I mentioned in my email
William Burns: since back up owner may reject a write, but primary may be fine
William Burns: unless we want to relax that requirement
Galder: Hmmmm
Galder: Assuming this is DIST, can't we start rejecting before the limit is reached?
William Burns: the way it currently works is each owner checks if their current size + new entry size > max size and if any of the owners reject, we throw the ContainerFullException
Galder: I mean: could an owner guess that adding something might end up topping non-owners and reject it?
Dan: @Galder @William Burns we could split the limit per segment and throw an exception once a segment is full
Galder: @William Burns Do you understand what I mean?
Galder: @Dan Don't think that'd solve the problem
Galder: You could still have issues where the primary owner is OK but the backup owner ends up failing due to segment full?
William Burns: @Galder do you mean that each node has an idea of how full another owner is, updated every so often in the background?
Galder: Or is the segment size fixed?
Galder: @William Burns Maybe... Or some other way that does not require communication
Galder: Segment based CH means we have more or less the same amount of data everywhere...
Galder: Or at least relatively similar number of entries in each node
Galder: In essence, could each primary owner probabilistically decide that doing a put would topple others?
Dan: @Galder yeah, the number of entries is typically within 10% of the average
Galder: There you go, 10%
Galder: I mean with EXCEPTION strategy only
Dan: But that's for all the entries on a node, a segment always has the same entries on all the owners that have it
Galder: Yeah but each node has different segments
Galder: That's where the variance comes in, right?
Dan: yes, the segments are different
Dan: sometimes by more than 10% :)
Galder: @William Burns I'm not necessarily suggesting each node keeps track of others, i'm more suggesting that given how full a node is, it could decide that others are likely full...
Dan: I was thinking of computing a fixed limit per segment, but you're right, that depends on how many nodes there are in the cluster
William Burns: So we are thinking of some fixed percentage, that if a node is actually current > (max *(1-%)) it would throw the exception and not consult other nodes. In that case I would say why even have this %, just let the user configure the max directly knowing this is the behavior
William Burns: and that some nodes can technically go over
William Burns: this is also very problematic if you have a lot of writes to same nodes
William Burns: ie. 3 nodes A, B, C if majority of writes are always to A and B and not C, C could get OOM
Galder: CH should take care of too many writes in some nodes
Galder: But true, some keys could be written more
Dan: @William Burns do you mean only throw the exception on the primary owner, and accept any write on the backup?
William Burns: well that is what Galder is after, so we don't have to use transactions
William Burns: is my assumption
Galder:
So we are thinking of some fixed percentage, that if a node is actually current > (max *(1-%)) it would throw the exception and not consult other nodes. In that case I would say why even have this %, just let the user configure the max directly knowing this is the behavior
But the user already configures the max, doesn't it?
William Burns: yes and we use transactions to ensure max is never reached on any node
Galder: Right, but using transactions for this seems too heavyweight
Dan: @Galder transactions are the only way to make it precise, so we only insert an entry if all the owners have room for it
William Burns: unfortunately without some sort of two phase operation we can end up with some nodes updated and some others not - and I figured don't reinvent the wheel
Dan: Any non-2PC solution will have to either allow some inserts to fail on some nodes and succeed on others, or have some nodes go over the configured max
William Burns: now I am open to possibly allowing others to go over, but we have to make sure the user knows this to possibly lower the max
Galder:
Any non-2PC solution will have to either allow some inserts to fail on some nodes and succeed on others, or have some nodes go over the configured max
Would that be fine if partition handling was enabled?
Dan: @William Burns I've been dreaming about lite transactions that don't need a xid and all that baggage ever since Mircea removed support for mixing tx and non-tx operations in the same cache :)
Dan: @Galder
Would that be fine if partition handling was enabled?
Depends on what you mean by "fine" :)
William Burns: @Dan yeah tbh I was thinking that would be nice for this use case
Galder: So when a node fails due to memory issues, we switch the readiness probe to say node not healthy... then the user gives more memory to the pods... and one at a time they get restarted... partition handling would kick in and address partial replications
Galder: ^ I might be day dreaming :see_no_evil:
William Burns: hrmm, I think just regular rebalance should repair it
William Burns: don't need partition handling
William Burns: assuming we haven't lost >= numOwners
William Burns: and then nothing can save it :D
Galder: Exactly
Galder: TBH, the above should be done by the autoscaler
Dan:
assuming we haven't lost >= numOwners
Yeah, that's the big assumption -- if one node is shut down because it wanted to insert too many entries, those entries will be redirected somewhere else
Galder: Rather than manually by someone
Dan: So an exception in one node may trigger the restart of the entire cluster
William Burns: yeah actually the more I think about it losing that node could lose data then
William Burns: since state transfer writes may be blocked
William Burns: if this fills up another node
Dan: @Galder does OS/K8S keep pods around if they're not healthy?
Galder: Hmmmm, actually, I think it tries to restart them until they're healthy
Dan: Yeah, that's what I thought
William Burns: this is the age old issue of eviction without a store can lose data on a rehash, irrespective of numOwners
William Burns: @Galder will you have a store?
Galder: Actually
Galder: @Dan Depends on the config...
Galder: @Dan Depends how many times the readiness probe fails, at which point it'd restart it
Galder: https://kubernetes.io/docs/tasks/configure-pod-container/configure-livene...
Galder:
@Galder will you have a store?
Not by default
William Burns: so in that case I would say we should err on the side of making sure, as much as possible, that no node has too much
Galder: Ok
Galder: What if you had a store?
William Burns: if we had a store then partial updates should be fine
William Burns: ie. primary writes the value and backup ignores it
Galder: But the stores would also have partial updates?
William Burns: but we would have to implement writing to a store but not memory
Galder: exactly
Galder: Ok
Galder: I'll create a JIRA and summarise the discussion for now
Galder: I have to head out in a few
William Burns: alright
William Burns: we could even optimize it for shared stores
William Burns: :D
William Burns: actually that is interesting...
William Burns: we could have a store mode with shared where we never write the entry to memory on backups
Galder: Local stores are preferable
Galder: One less thing to worry about
William Burns: yeah agreed for this use case, I was just thinking more in general
{code}
> EXCEPTION eviction strategy should not require transactions
> -----------------------------------------------------------
>
> Key: ISPN-9690
> URL: https://issues.jboss.org/browse/ISPN-9690
> Project: Infinispan
> Issue Type: Enhancement
> Reporter: Galder Zamarreño
> Priority: Major
>
> Unfortunately we need two phase commit to guarantee consistency. Imagine the case where one of the owners says the write is okay and another says no; without two phase commit there is no way to guarantee that either all or none of the writes are completed.
> One possibility would be for a node to deny a write if it expects it would result in other nodes running out of memory. However, this could still fail if some keys store more data than others. It would require Infinispan to use some probabilistic method to decide when a node would run out of memory.
> Another way would be to have a local (or shared) persistent store attached. In that case, if storing the data would make a backup owner run out of memory, it would not store it in memory but write it to the persistent layer. If the node is restarted with new memory settings, the persistent stores would be consistent and the rebalance would put the data back in memory.
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
6 years, 1 month
[JBoss JIRA] (ISPN-9690) EXCEPTION eviction strategy should not require transactions
by Galder Zamarreño (Jira)
Galder Zamarreño created ISPN-9690:
--------------------------------------
Summary: EXCEPTION eviction strategy should not require transactions
Key: ISPN-9690
URL: https://issues.jboss.org/browse/ISPN-9690
Project: Infinispan
Issue Type: Enhancement
Reporter: Galder Zamarreño
Unfortunately we need two phase commit to guarantee consistency. Imagine the case where one of the owners says the write is okay and another says no; without two phase commit there is no way to guarantee that either all or none of the writes are completed.
One possibility would be for a node to deny a write if it expects it would result in other nodes running out of memory. However, this could still fail if some keys store more data than others. It would require Infinispan to use some probabilistic method to decide when a node would run out of memory.
Another way would be to have a local (or shared) persistent store attached. In that case, if storing the data would make a backup owner run out of memory, it would not store it in memory but write it to the persistent layer. If the node is restarted with new memory settings, the persistent stores would be consistent and the rebalance would put the data back in memory.
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
6 years, 1 month
[JBoss JIRA] (ISPN-8564) Cannot select inner entities in Ickle
by Adrian Nistor (Jira)
[ https://issues.jboss.org/browse/ISPN-8564?page=com.atlassian.jira.plugin.... ]
Adrian Nistor reopened ISPN-8564:
---------------------------------
> Cannot select inner entities in Ickle
> -------------------------------------
>
> Key: ISPN-8564
> URL: https://issues.jboss.org/browse/ISPN-8564
> Project: Infinispan
> Issue Type: Bug
> Components: Embedded Querying, Remote Querying
> Affects Versions: 9.0.0.Final
> Reporter: Gustavo Fernandes
> Assignee: Adrian Nistor
> Priority: Major
> Fix For: 9.4.0.Final
>
>
> Consider a proto mapping:
> {code}
> message Parent {
>    optional Child child = 1;
> }
> message Child {
>    optional string name = 1;
> }
> {code}
> It is not possible to select any of the Child attributes in the query. The following queries fail:
> {{SELECT p.child FROM Parent p}}
> Fails with ISPN028503
> {{SELECT p.child.name FROM Parent p}}
> Fails with ISPN028502: Unknown alias 'child'
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
6 years, 1 month
[JBoss JIRA] (ISPN-8564) Cannot select inner entities in Ickle
by Adrian Nistor (Jira)
[ https://issues.jboss.org/browse/ISPN-8564?page=com.atlassian.jira.plugin.... ]
Adrian Nistor commented on ISPN-8564:
-------------------------------------
The first query "SELECT p.child FROM Parent p" is wrong, so it's natural that it fails with "org.infinispan.objectfilter.ParsingException: ISPN028503: Property child can not be selected from type Parent since it is an embedded entity."
The second query, "SELECT p.child.name FROM Parent p", is correct and it works. A variation of it, "SELECT child.name FROM Parent", does not work. So I think the bug was closed by mistake. The PR I made was only supposed to add a test for it.
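For reference, the working form can be issued over Hot Rod roughly as follows; the cache name and client setup are assumptions, and the Parent/Child schema from the description is assumed to be registered:
{code:java}
import java.util.List;

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.Search;
import org.infinispan.query.dsl.Query;
import org.infinispan.query.dsl.QueryFactory;

public class ChildNameQuery {
   public static void main(String[] args) {
      RemoteCacheManager rcm = new RemoteCacheManager(); // default/classpath client config
      try {
         RemoteCache<String, Object> cache = rcm.getCache("parents"); // hypothetical cache name
         QueryFactory qf = Search.getQueryFactory(cache);
         // Selecting the attribute of the embedded entity works
         Query query = qf.create("SELECT p.child.name FROM Parent p");
         List<Object[]> rows = query.list();
         for (Object[] row : rows) {
            System.out.println(row[0]);
         }
      } finally {
         rcm.stop();
      }
   }
}
{code}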
> Cannot select inner entities in Ickle
> -------------------------------------
>
> Key: ISPN-8564
> URL: https://issues.jboss.org/browse/ISPN-8564
> Project: Infinispan
> Issue Type: Bug
> Components: Embedded Querying, Remote Querying
> Affects Versions: 9.0.0.Final
> Reporter: Gustavo Fernandes
> Assignee: Adrian Nistor
> Priority: Major
> Fix For: 9.4.0.Final
>
>
> Consider a proto mapping:
> {code}
> message Parent {
>    optional Child child = 1;
> }
> message Child {
>    optional string name = 1;
> }
> {code}
> It is not possible to select any of the Child attributes in the query. The following queries fail:
> {{SELECT p.child FROM Parent p}}
> Fails with ISPN028503
> {{SELECT p.child.name FROM Parent p}}
> Fails with ISPN028502: Unknown alias 'child'
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
6 years, 1 month
[JBoss JIRA] (ISPN-9511) Expired event is not raised when modifying an expired entry
by William Burns (Jira)
[ https://issues.jboss.org/browse/ISPN-9511?page=com.atlassian.jira.plugin.... ]
William Burns commented on ISPN-9511:
-------------------------------------
An additional test was added with https://github.com/infinispan/infinispan/pull/6375
> Expired event is not raised when modifying an expired entry
> -----------------------------------------------------------
>
> Key: ISPN-9511
> URL: https://issues.jboss.org/browse/ISPN-9511
> Project: Infinispan
> Issue Type: Bug
> Components: Listeners
> Affects Versions: 9.3.3.Final
> Reporter: William Burns
> Assignee: William Burns
> Priority: Major
> Fix For: 9.4.0.Final
>
>
> Due to the old way of implementing remove-expired for lifespan, we didn't raise an expired event when writing to an expired entry. This was mostly to avoid causing circular dependencies. But with the new remove-expired max-idle changes, this is now possible.
> Without this change listeners can possibly end up in an inconsistent state, as the following could happen:
> 1. Entry is created
> 2. Listener is notified of creation
> 3. Entry expires (no event yet)
> 4. Entry is written to (created)
> 5. Listener is notified of creation.
> In this case the listener never sees an intermediate state where the entry was gone. This also becomes problematic if you are listening only for events that don't include create.
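A minimal listener sketch showing the two event types involved; registering it with cache.addListener(...) is assumed. With the fix, an expired event is delivered between the two creation events instead of two back-to-back creations:
{code:java}
import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachelistener.annotation.CacheEntryCreated;
import org.infinispan.notifications.cachelistener.annotation.CacheEntryExpired;
import org.infinispan.notifications.cachelistener.event.CacheEntryCreatedEvent;
import org.infinispan.notifications.cachelistener.event.CacheEntryExpiredEvent;

@Listener
public class ExpirationAwareListener {

   @CacheEntryCreated
   public void created(CacheEntryCreatedEvent<String, String> event) {
      if (event.isPre()) {
         return; // only log the post-commit notification
      }
      System.out.println("created: " + event.getKey());
   }

   @CacheEntryExpired
   public void expired(CacheEntryExpiredEvent<String, String> event) {
      System.out.println("expired: " + event.getKey());
   }
}
{code}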
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
6 years, 1 month
[JBoss JIRA] (ISPN-7863) Ickle lexer wrongly discards letter v as whitespace ruining parsing of identifiers containing v
by Gustavo Lira (Jira)
[ https://issues.jboss.org/browse/ISPN-7863?page=com.atlassian.jira.plugin.... ]
Gustavo Lira updated ISPN-7863:
-------------------------------
Comment: was deleted
(was: A comment with security level 'Red Hat Employee' was removed.)
> Ickle lexer wrongly discards letter v as whitespace ruining parsing of identifiers containing v
> -----------------------------------------------------------------------------------------------
>
> Key: ISPN-7863
> URL: https://issues.jboss.org/browse/ISPN-7863
> Project: Infinispan
> Issue Type: Bug
> Components: Embedded Querying, Remote Querying
> Affects Versions: 9.0.0.Final
> Reporter: Gregory Orciuch
> Assignee: Adrian Nistor
> Priority: Blocker
> Labels: on-hold
> Fix For: 9.4.0.Final, 9.3.4.Final
>
>
> Links to: https://issues.jboss.org/browse/ISPN-7861
> When querying the ISPN server over Hot Rod and trying to access a field which begins with the "v" character, that field access is lost, but only together with the "group by" operator.
> Example of entity using proto/annotations:
> {code:java}
> @ProtoDoc("@Indexed")
> public class Offering implements Serializable {
> private String name;
> private Integer relationSetId;
> private Integer variant;
>
> @ProtoDoc("@Field(store = Store.YES, analyze = Analyze.YES)")
> @ProtoField(number = 5, required = true)
> public String getName() {
> return name;
> }
> public void setName(String name) {
> this.name = name;
> }
>
> @ProtoField(number = 44)
> public Integer getRelationSetId() {
> return relationSetId;
> }
> public void setRelationSetId(Integer relationSetId) {
> this.relationSetId = relationSetId;
> }
>
> @ProtoDoc("@Field(store = Store.YES, analyze = Analyze.NO)")
> @ProtoField(number = 50)
> public Integer getVariant() {
> return variant;
> }
> public void setVariant(Integer variant) {
> this.variant = variant;
> }
>
> }
> {code}
> Then executing a query like this:
> {code:sql}
> select min(_gen0.name),min(_gen0.variant) FROM Offering _gen0 WHERE _gen0.variant = 44 GROUP BY _gen0.relationSetId
> {code}
> This produces a server-side error which mentions "ariant" - the "v" is lost. The paste is below.
> NOT using group by causes the query to run fine.
> ALSO changing the field name from "variant" to "bariant" helps.
> LOOKS like there is some code which restricts the name or parses it wrongly.
> This affects not only simple type fields but also List<Variant> variants - the "v" is lost.
> {panel:title=log}
> 14:02:50,951 DEBUG [org.infinispan.query.dsl.embedded.impl.QueryEngine] (HotRod-ServerHandler-6-16) Building query 'select min(_gen0.name),min(_gen0.variant) FROM Offering _gen0 WHERE _gen0.variant = 44 GROUP BY _gen0.relationSetId' with parameters null
> 14:02:50,953 DEBUG [org.infinispan.server.hotrod.HotRodExceptionHandler] (HotRod-ServerWorker-4-1) Exception caught: org.infinispan.objectfilter.ParsingException: ISPN028501: The type Offering has no property named '*ariant*'.
> at org.infinispan.objectfilter.impl.syntax.parser.QueryResolverDelegateImpl.normalizeProperty(QueryResolverDelegateImpl.java:191)
> at org.infinispan.objectfilter.impl.syntax.parser.QueryResolverDelegateImpl.normalizeUnqualifiedPropertyReference(QueryResolverDelegateImpl.java:84)
> at org.infinispan.objectfilter.impl.ql.parse.QueryResolver.unqualifiedPropertyReference(QueryResolver.java:7651)
> at org.infinispan.objectfilter.impl.ql.parse.QueryResolver.propertyReferencePath(QueryResolver.java:7548)
> at org.infinispan.objectfilter.impl.ql.parse.QueryResolver.propertyReferenceExpression(QueryResolver.java:5689)
> at org.infinispan.objectfilter.impl.ql.parse.QueryResolver.valueExpressionPrimary(QueryResolver.java:5495)
> at org.infinispan.objectfilter.impl.ql.parse.QueryResolver.valueExpression(QueryResolver.java:5271)
> at org.infinispan.objectfilter.impl.ql.parse.QueryResolver.rowValueConstructor(QueryResolver.java:4490)
> at org.infinispan.objectfilter.impl.ql.parse.QueryResolver.predicate(QueryResolver.java:3326)
> at org.infinispan.objectfilter.impl.ql.parse.QueryResolver.searchCondition(QueryResolver.java:2979)
> at org.infinispan.objectfilter.impl.ql.parse.QueryResolver.whereClause(QueryResolver.java:655)
> at org.infinispan.objectfilter.impl.ql.parse.QueryResolver.querySpec(QueryResolver.java:510)
> at org.infinispan.objectfilter.impl.ql.parse.QueryResolver.queryStatement(QueryResolver.java:379)
> at org.infinispan.objectfilter.impl.ql.parse.QueryResolver.queryStatementSet(QueryResolver.java:292)
> at org.infinispan.objectfilter.impl.ql.parse.QueryResolver.statement(QueryResolver.java:220)
> at org.infinispan.objectfilter.impl.ql.QueryParser.resolve(QueryParser.java:81)
> at org.infinispan.objectfilter.impl.ql.QueryParser.parseQuery(QueryParser.java:69)
> at org.infinispan.objectfilter.impl.syntax.parser.IckleParser.parse(IckleParser.java:19)
> at org.infinispan.query.dsl.embedded.impl.QueryEngine.lambda$parse$1(QueryEngine.java:663)
> at org.infinispan.query.dsl.embedded.impl.QueryCache.lambda$get$0(QueryCache.java:79)
> at org.infinispan.cache.impl.TypeConverterDelegatingAdvancedCache.lambda$convertFunction$1(TypeConverterDelegatingAdvancedCache.java:101)
> at java.util.concurrent.ConcurrentMap.computeIfAbsent(ConcurrentMap.java:324)
> at org.infinispan.cache.impl.AbstractDelegatingCache.computeIfAbsent(AbstractDelegatingCache.java:343)
> at org.infinispan.cache.impl.TypeConverterDelegatingAdvancedCache.computeIfAbsent(TypeConverterDelegatingAdvancedCache.java:161)
> at org.infinispan.query.dsl.embedded.impl.QueryCache.get(QueryCache.java:79)
> at org.infinispan.query.dsl.embedded.impl.QueryEngine.parse(QueryEngine.java:663)
> at org.infinispan.query.dsl.embedded.impl.QueryEngine.buildQueryWithAggregations(QueryEngine.java:299)
> at org.infinispan.query.dsl.embedded.impl.QueryEngine.buildQuery(QueryEngine.java:139)
> at org.infinispan.query.dsl.embedded.impl.DelegatingQuery.createQuery(DelegatingQuery.java:91)
> at org.infinispan.query.dsl.embedded.impl.DelegatingQuery.list(DelegatingQuery.java:98)
> at org.infinispan.query.remote.impl.QueryFacadeImpl.makeResponse(QueryFacadeImpl.java:61)
> at org.infinispan.query.remote.impl.QueryFacadeImpl.query(QueryFacadeImpl.java:53)
> at org.infinispan.server.hotrod.HotRodServer.query(HotRodServer.java:116)
> at org.infinispan.server.hotrod.ContextHandler.realRead(ContextHandler.java:148)
> at org.infinispan.server.hotrod.ContextHandler.lambda$null$0(ContextHandler.java:59)
> at org.infinispan.security.Security.doAs(Security.java:143)
> at org.infinispan.server.hotrod.ContextHandler.lambda$channelRead0$1(ContextHandler.java:58)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
> at java.lang.Thread.run(Thread.java:748)
> {panel}
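A minimal remote reproduction sketch; the cache name and client setup are assumptions, while the query string is the failing one from the report:
{code:java}
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.Search;
import org.infinispan.query.dsl.Query;
import org.infinispan.query.dsl.QueryFactory;

public class VariantGroupByRepro {
   public static void main(String[] args) {
      RemoteCacheManager rcm = new RemoteCacheManager(); // default/classpath client config
      try {
         QueryFactory qf = Search.getQueryFactory(rcm.getCache("offerings")); // hypothetical cache name
         Query query = qf.create("select min(_gen0.name),min(_gen0.variant) FROM Offering _gen0 " +
               "WHERE _gen0.variant = 44 GROUP BY _gen0.relationSetId");
         query.list(); // fails server-side with ISPN028501 mentioning 'ariant'
      } finally {
         rcm.stop();
      }
   }
}
{code}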
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
6 years, 1 month