[JBoss JIRA] (ISPN-7186) SFS Avoid Reading Index if Purge on startup is enabled
by Ryan Emerson (Jira)
[ https://issues.jboss.org/browse/ISPN-7186?page=com.atlassian.jira.plugin.... ]
Ryan Emerson updated ISPN-7186:
-------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/6381
> SFS Avoid Reading Index if Purge on startup is enabled
> ------------------------------------------------------
>
> Key: ISPN-7186
> URL: https://issues.jboss.org/browse/ISPN-7186
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 8.2.2.Final
> Reporter: Elias Ross
> Assignee: Ryan Emerson
> Priority: Major
> Fix For: 10.0.0.Alpha1, 9.4.2.Final
>
>
> I've observed the following error, which periodically happens when the application restarts.
> This is likely due to a non-clean shutdown truncating the file. Ideally Infinispan could simply truncate or skip such entries without blocking start-up.
> Caused by: org.infinispan.persistence.spi.PersistenceException: org.infinispan.persistence.spi.PersistenceException: ISPN000279: Failed to read stored entries from file. Error in file raster.dat at offset 4
> at org.infinispan.persistence.file.SingleFileStore.start(SingleFileStore.java:135) ~[org.infinispan-infinispan-core-8.2.2.Final.jar:8.2.2.Final]
> at org.infinispan.persistence.manager.PersistenceManagerImpl.start(PersistenceManagerImpl.java:144) ~[org.infinispan-infinispan-core-8.2.2.Final.jar:8.2.2.Final]
> ... 115 common frames omitted
> Caused by: org.infinispan.persistence.spi.PersistenceException: ISPN000279: Failed to read stored entries from file. Error in file raster.dat at offset 4
> at org.infinispan.persistence.file.SingleFileStore.rebuildIndex(SingleFileStore.java:195) ~[org.infinispan-infinispan-core-8.2.2.Final.jar:8.2.2.Final]
> at org.infinispan.persistence.file.SingleFileStore.start(SingleFileStore.java:126) ~[org.infinispan-infinispan-core-8.2.2.Final.jar:8.2.2.Final]
> ... 116 common frames omitted
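The behaviour the issue summary asks for can be sketched roughly as follows. All class, method, and field names here are invented for illustration; the real SingleFileStore internals differ. The idea is simply that when purge-on-startup is enabled, the store never parses the existing file, so a file truncated by an unclean shutdown cannot fail startup.

```java
// Hypothetical sketch of the proposed startup path (invented names).
public class PurgeSkipSketch {
    private final boolean purgeOnStartup;
    private boolean started;

    public PurgeSkipSketch(boolean purgeOnStartup) {
        this.purgeOnStartup = purgeOnStartup;
    }

    public void start(boolean fileIsCorrupt) {
        if (purgeOnStartup) {
            // Contents are about to be discarded anyway: skip reading the
            // file, so a corrupt/truncated file cannot block startup.
            started = true;
            return;
        }
        if (fileIsCorrupt) {
            // Mirrors ISPN000279: rebuilding the index from a truncated
            // file fails and aborts cache startup.
            throw new IllegalStateException(
                  "Failed to read stored entries from file");
        }
        started = true; // index rebuilt normally
    }

    public boolean isStarted() { return started; }
}
```

With purge enabled, a corrupt file no longer matters; without it, the rebuild still fails as in the stack trace above.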
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
6 years, 1 month
[JBoss JIRA] (ISPN-7186) SFS Avoid Reading Index if Purge on startup is enabled
by Ryan Emerson (Jira)
[ https://issues.jboss.org/browse/ISPN-7186?page=com.atlassian.jira.plugin.... ]
Ryan Emerson updated ISPN-7186:
-------------------------------
Fix Version/s: 10.0.0.Alpha1, 9.4.2.Final
Sprint: Sprint 10.0.0.Alpha1
[JBoss JIRA] (ISPN-8691) Infinispan rejects to read cache file bigger than 2147483647 (Integer.MAX_VALUE)
by Ryan Emerson (Jira)
[ https://issues.jboss.org/browse/ISPN-8691?page=com.atlassian.jira.plugin.... ]
Ryan Emerson closed ISPN-8691.
------------------------------
Resolution: Explained
Closing as the cause of the issue appears to be a corrupt file. https://issues.jboss.org/browse/ISPN-7186 will ensure that we don't try to rebuild the index if purge is set to true.
> Infinispan rejects to read cache file bigger than 2147483647 (Integer.MAX_VALUE)
> --------------------------------------------------------------------------------
>
> Key: ISPN-8691
> URL: https://issues.jboss.org/browse/ISPN-8691
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 9.1.1.Final
> Reporter: Dmitry Katsubo
> Assignee: Ryan Emerson
> Priority: Minor
> Fix For: 9.4.2.Final
>
>
> In my scenario, the cache file created by {{SingleFileStore}} is 3,054,196,342 bytes. When Infinispan tries to load this file, it fails with the following exception:
> {code}
> Caused by: org.infinispan.persistence.spi.PersistenceException: ISPN000279: Failed to read stored entries from file. Error in file /work/search-service-layer_data/infinispan/cache_test_dk83146/markupCache.dat at offset 4
> at org.infinispan.persistence.file.SingleFileStore.rebuildIndex(SingleFileStore.java:182)
> at org.infinispan.persistence.file.SingleFileStore.start(SingleFileStore.java:127)
> ... 155 more
> {code}
> Cache file content:
> {code}
> 0000000000: 46 43 53 31 80 B1 89 47 │ 00 00 00 00 00 00 00 00 FCS1?+%G
> 0000000010: 00 00 00 00 FF FF FF FF │ FF FF FF FF 02 15 4E 06 yyyyyyyy☻§N♠
> 0000000020: 05 03 04 09 00 00 00 2F │ 6F 72 67 2E 73 70 72 69 ♣♥♦○ /org.spri
> 0000000030: 6E 67 66 72 61 6D 65 77 │ 6F 72 6B 2E 63 61 63 68 ngframework.cach
> 0000000040: 65 2E 69 6E 74 65 72 63 │ 65 70 74 6F 72 2E 53 69 e.interceptor.Si
> 0000000050: 6D 70 6C 65 4B 65 79 4C │ 0A 57 03 6B 6D 93 D8 00 mpleKeyL◙W♥km"O
> 0000000060: 00 00 02 00 00 00 08 68 │ 61 73 68 43 6F 64 65 23 ☻ ◘hashCode#
> 0000000070: 00 00 00 00 06 70 61 72 │ 61 6D 73 16 00 16 15 E6 ♠params▬ ▬§?
> {code}
> The problem is that the integer value 0x80B18947 is treated as a signed integer at line {{SingleFileStore:181}}, hence in the expression
> {code}
> if (fe.size < KEY_POS + fe.keyLen + fe.dataLen + fe.metadataLen) {
>    throw log.errorReadingFileStore(file.getPath(), filePos);
> }
> {code}
> {{fe.size}} is negative and equal to -2135848633.
> I have tried to configure the persistence storage so that it gets purged on start:
> {code}
> <persistence passivation="true">
>    <file-store path="/var/cache/infinispan" purge="true">
>       <write-behind thread-pool-size="5" />
>    </file-store>
> </persistence>
> {code}
> However, this does not help, because the store is first read and only then purged (see also ISPN-7186).
> It is expected that {{SingleFileStore}} either refuses to write such large entries to the cache, or handles them correctly.
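The overflow can be checked directly: the four header bytes 80 B1 89 47 at offset 4 of the dump above, decoded big-endian the way {{ByteBuffer.getInt()}} does, produce exactly the negative size quoted in the description.

```java
import java.nio.ByteBuffer;

public class SignedSizeDemo {
    public static void main(String[] args) {
        // Bytes 80 B1 89 47 from the file dump; the high bit is set, so a
        // signed 32-bit read goes negative.
        int feSize = ByteBuffer.wrap(
              new byte[]{(byte) 0x80, (byte) 0xB1, (byte) 0x89, 0x47}).getInt();
        System.out.println(feSize);                         // -2135848633
        System.out.println(Integer.toUnsignedLong(feSize)); // 2159118663
    }
}
```

Reading the value via {{Integer.toUnsignedLong}}, or refusing at write time to create entries whose size exceeds {{Integer.MAX_VALUE}}, would avoid the negative comparison in {{rebuildIndex}}.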
[JBoss JIRA] (ISPN-7186) SFS Avoid Reading Index if Purge on startup is enabled
by Ryan Emerson (Jira)
[ https://issues.jboss.org/browse/ISPN-7186?page=com.atlassian.jira.plugin.... ]
Ryan Emerson updated ISPN-7186:
-------------------------------
Status: Open (was: New)
[JBoss JIRA] (ISPN-7186) SFS Avoid Reading Index if Purge on startup is enabled
by Ryan Emerson (Jira)
[ https://issues.jboss.org/browse/ISPN-7186?page=com.atlassian.jira.plugin.... ]
Ryan Emerson updated ISPN-7186:
-------------------------------
Summary: SFS Avoid Reading Index if Purge on startup is enabled (was: ISPN000279: Failed to read stored ... should not halt startup)
[JBoss JIRA] (ISPN-7186) ISPN000279: Failed to read stored ... should not halt startup
by Ryan Emerson (Jira)
[ https://issues.jboss.org/browse/ISPN-7186?page=com.atlassian.jira.plugin.... ]
Ryan Emerson reassigned ISPN-7186:
----------------------------------
Assignee: Ryan Emerson
[JBoss JIRA] (ISPN-9696) Exception based eviction should be segment based
by William Burns (Jira)
William Burns created ISPN-9696:
-----------------------------------
Summary: Exception based eviction should be segment based
Key: ISPN-9696
URL: https://issues.jboss.org/browse/ISPN-9696
Project: Infinispan
Issue Type: Enhancement
Components: Eviction
Reporter: William Burns
Exception-based eviction currently tracks the size of all entries in a single stored value. We should instead track the size per segment. This would allow us to report an approximate size to other components, such as state transfer, which could use it to ensure elements will fit.
It would also allow us to remove the addRemovalListener method on InternalDataContainer, which exception-based eviction currently uses to update its count when a segment is removed. Instead, the container would be notified that a segment was removed and would simply drop that segment's value internally.
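A minimal sketch of the per-segment accounting described above (the names are invented for illustration, not the real InternalDataContainer API): each segment owns its own counter, so dropping a segment just zeroes its counter instead of requiring a removal listener to walk the evicted entries.

```java
import java.util.concurrent.atomic.AtomicLongArray;

// Hypothetical per-segment size tracker (invented names).
public class SegmentSizeTracker {
    private final AtomicLongArray segmentSizes;

    public SegmentSizeTracker(int numSegments) {
        this.segmentSizes = new AtomicLongArray(numSegments);
    }

    public void onWrite(int segment, long entrySize) {
        segmentSizes.addAndGet(segment, entrySize);
    }

    public void onRemove(int segment, long entrySize) {
        segmentSizes.addAndGet(segment, -entrySize);
    }

    // Segment dropped wholesale (e.g. ownership lost): forget its size
    // without visiting individual entries.
    public void onSegmentRemoved(int segment) {
        segmentSizes.set(segment, 0);
    }

    // Approximate total, usable by components such as state transfer.
    public long approximateTotal() {
        long total = 0;
        for (int i = 0; i < segmentSizes.length(); i++) {
            total += segmentSizes.get(i);
        }
        return total;
    }
}
```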
[JBoss JIRA] (ISPN-9690) EXCEPTION eviction strategy should not require transactions
by Galder Zamarreño (Jira)
[ https://issues.jboss.org/browse/ISPN-9690?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-9690:
-----------------------------------
Attachment: ispn-9690-zulip-chat.log
> EXCEPTION eviction strategy should not require transactions
> -----------------------------------------------------------
>
> Key: ISPN-9690
> URL: https://issues.jboss.org/browse/ISPN-9690
> Project: Infinispan
> Issue Type: Enhancement
> Reporter: Galder Zamarreño
> Priority: Major
> Attachments: ispn-9690-zulip-chat.log
>
>
> Unfortunately, we need two-phase commit to guarantee consistency. Imagine the case where one owner says the write is okay and another says no: without two-phase commit there is no way to guarantee that either all or none of the writes complete.
> One possibility would be for a node to deny a write if it expects the write would cause other nodes to run out of memory. However, this could still fail if some keys store more data than others, and it would require Infinispan to use some probabilistic method to decide when a node would run out of memory.
> Another option would be to attach a local (or shared) persistent store. In that case, a backup owner that would run out of memory by storing an entry would write it to the persistent layer instead of memory. If the node were restarted with new memory settings, the persistent stores would be consistent and the rebalance would put the data back in memory.
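The consistency problem can be sketched with the per-owner admission check the eviction uses: each owner independently tests whether currentSize + entrySize exceeds the configured max. The names below are invented, not the real eviction code; the point is only that without two-phase commit, one owner can accept a write that another rejects.

```java
// Hypothetical per-owner admission check (invented names).
public class OwnerSketch {
    private final long maxSize;
    private long currentSize;

    public OwnerSketch(long maxSize) { this.maxSize = maxSize; }

    /** @return true if the entry was stored, false if rejected as full. */
    public boolean tryWrite(long entrySize) {
        if (currentSize + entrySize > maxSize) {
            // In Infinispan this rejection surfaces as ContainerFullException.
            return false;
        }
        currentSize += entrySize;
        return true;
    }
}
```

For example, with a 100-byte max, a primary already holding 90 bytes rejects a 20-byte write while a backup holding 50 bytes would accept it; only a two-phase prepare/commit can ensure both owners apply the write or neither does.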
[JBoss JIRA] (ISPN-9690) EXCEPTION eviction strategy should not require transactions
by Galder Zamarreño (Jira)
[ https://issues.jboss.org/browse/ISPN-9690?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-9690:
-----------------------------------
Comment: was deleted
(was: A chat discussion:
{code}
William Burns: @Galder responded to your email, however we can talk here maybe
William Burns: actually @Galder I can see not requiring transactions when the cache mode is SIMPLE, LOCAL or INVALIDATION
William Burns: but that is not an option currently, would have to be added
Galder: Hmmm, we want this for DIST
William Burns: yeah then we would need two phase commit like I mentioned in my email
William Burns: since back up owner may reject a write, but primary may be fine
William Burns: unless we want to relax that requirement
Galder: Hmmmm
Galder: Assuming this is DIST, can't we start rejecting before the limit is reached?
William Burns: the way it currently works is each owner checks if their current size + new entry size > max size and if any of the owners reject, we throw the ContainerFullException
Galder: I mean: could an owner guess that adding something might end up toppling non-owners and reject it?
Dan: @Galder @William Burns we could split the limit per segment and throw an exception once a segment is full
Galder: @William Burns Do you understand what I mean?
Galder: @Dan Don't think that'd solve the problem
Galder: You could still have issues where the primary owner is OK but the backup owner ends up failing due to segment full?
William Burns: @Galder do you mean that each node has an idea of how full another owner is, updated every so often in the background?
Galder: Or is the segment size fixed?
Galder: @William Burns Maybe... Or some other way that does not require communication
Galder: Segment based CH means we have more or less the same amount of data everywhere...
Galder: Or at least relatively similar number of entries in each node
Galder: In essence, could each primary owner probabilistically decide that doing a put would topple others?
Dan: @Galder yeah, the number of entries is typically within 10% of the average
Galder: There you go, 10%
Galder: I mean with EXCEPTION strategy only
Dan: But that's for all the entries on a node, a segment always has the same entries on all the owners that have it
Galder: Yeah but each node has different segments
Galder: That's where the variance comes in, right?
Dan: yes, the segments are different
Dan: sometimes by more than 10% :)
Galder: @William Burns I'm not necessarily suggesting each node keeps track of others, i'm more suggesting that given how full a node is, it could decide that others are likely full...
Dan: I was thinking of computing a fixed limit per segment, but you're right, that depends on how many nodes there are in the cluster
William Burns: So we are thinking of some fixed percentage, that if a node is actually current > (max *(1-%)) it would throw the exception and not consult other nodes. In that case I would say why even have this %, just let the user configure the max directly knowing this is the behavior
William Burns: and that some nodes can technically go over
William Burns: this is also very problematic if you have a lot of writes to same nodes
William Burns: ie. 3 nodes A, B, C if majority of writes are always to A and B and not C, C could get OOM
Galder: CH should take care of too many writes in some nodes
Galder: But true, some keys could be written more
Dan: @William Burns do you mean only throw the exception on the primary owner, and accept any write on the backup?
William Burns: well that is what Galder is after, so we don't have to use transactions
William Burns: is my assumption
Galder:
So we are thinking of some fixed percentage, that if a node is actually current > (max *(1-%)) it would throw the exception and not consult other nodes. In that case I would say why even have this %, just let the user configure the max directly knowing this is the behavior
But the user already configures the max, doesn't it?
William Burns: yes and we use transactions to ensure max is never reach on any node
Galder: Right, but using transactions for this seems too heavyweight
Dan: @Galder transactions are the only way to make it precise, so we only insert an entry if all the owners have room for it
William Burns: unfortunately without some sort of two phase operation we can end up with some nodes updated and some others not - and I figured don't reinvent the wheel
Dan: Any non-2PC solution will have to either allow some inserts to fail on some nodes and succeed on others, or have some nodes go over the configured max
William Burns: now I am open to possibly allowing others to go over, but we have to make sure the user knows this to possibly lower the max
Galder:
Any non-2PC solution will have to either allow some inserts to fail on some nodes and succeed on others, or have some nodes go over the configured max
Would that be fine if partition handling was enabled?
Dan: @William Burns I've been dreaming about lite transactions that don't need a xid and all that baggage ever since Mircea removed support for mixing tx and non-tx operations in the same cache :)
Dan: @Galder
Would that be fine if partition handling was enabled?
Depends on what you mean by "fine" :)
William Burns: @Dan yeah tbh I was thinking that would be nice for this use case
Galder: So when a node fails due to memory issues, we switch the readiness probe to say node not healthy... then the user gives the pods more memory... and one at a time they get restarted... partition handling would kick in and address partial replications
Galder: ^ I might be day dreaming :see_no_evil:
William Burns: hrmm, I think just regular rebalance should repair it
William Burns: don't need partition handling
William Burns: assuming we haven't lost >= numOwners
William Burns: and then nothing can save it :D
Galder: Exactly
Galder: TBH, the above should be done by the autoscaler
Dan:
assuming we haven't lost >= numOwners
Yeah, that's the big assumption -- if one node is shut down because it wanted to insert too many entries, those entries will be redirected somewhere else
Galder: Rather than manually by someone
Dan: So an exception in one node may trigger the restart of the entire cluster
William Burns: yeah actually the more I think about it losing that node could lose data then
William Burns: since state transfer writes may be blocked
William Burns: if this fills up another node
Dan: @Galder does OS/K8S keep pods around if they're not healthy?
Galder: Hmmmm, actually, I think it tries to restart them until they're healthy
Dan: Yeah, that's what I thought
William Burns: this is the age old issue of eviction without a store can lose data on a rehash, irrespective of numOwners
William Burns: @Galder will you have a store?
Galder: Actually
Galder: @Dan Depends on the config...
Galder: @Dan Depends how many times the readiness probe fails, at which point it'd restart it
Galder: https://kubernetes.io/docs/tasks/configure-pod-container/configure-livene...
Galder:
@Galder will you have a store?
Not by default
William Burns: so in that case I would say we should err on the side of making sure no node has too much as much as possible
Galder: Ok
Galder: What if you had a store?
William Burns: if we had a store then partial updates should be fine
William Burns: ie. primary writes the value and backup ignores it
Galder: But the stores would also have partial updates?
William Burns: but we would have to implement writing to a store but not memory
Galder: exactly
Galder: Ok
Galder: I'll create a JIRA and summarise the discussion for now
Galder: I have to head out in a few
William Burns: alright
William Burns: we could even optimize it for shared stores
William Burns: :D
William Burns: actually that is interesting...
William Burns: we could have a store mode with shared where we never write the entry to memory on backups
Galder: Local stores are preferable
Galder: One less thing to worry about
William Burns: yeah agreed for this use case, I was just thinking more in general
Galder: @Dan @William Burns We want the default cache in data grid service to use EXCEPTION strategy, so that data is preserved (as opposed to the cache service where it evicts data, it's a cache), so would you make the default cache transactional? Or have a local file based store instead?
Galder: I'd go for the latter, just wondering if you see any case where the former would be better
Galder: Actually, with the current state you still need transactions because we don't write to store if memory full
William Burns: yeah latter is only way to not lose data
William Burns: former may be faster? would have to confirm
Galder: Yeah, but local file based persistence alone is not enough today to make EXCEPTION strategy work?
William Burns: no, tbh this seems like a different eviction mode at this point with a store
William Burns: cause you would never get an EXCEPTION
William Burns: tbh, I don't know if this is worth it
Galder: true
William Burns: this is essentially regular eviction with a store
William Burns: just changes what elements are in memory
Galder: True...
William Burns: yeah now I remember before that I thought EXCEPTION based eviction and stores don't make that much sense together
{code})