[JBoss JIRA] (ISPN-2956) putIfAbsent on Hot Rod Java client doesn't reliably fulfil contract
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-2956?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-2956:
-----------------------------------------------
Vojtech Juranek <vjuranek(a)redhat.com> changed the Status of [bug 1004193|https://bugzilla.redhat.com/show_bug.cgi?id=1004193] from ON_QA to VERIFIED
> putIfAbsent on Hot Rod Java client doesn't reliably fulfil contract
> -------------------------------------------------------------------
>
> Key: ISPN-2956
> URL: https://issues.jboss.org/browse/ISPN-2956
> Project: Infinispan
> Issue Type: Bug
> Security Level: Public (Everyone can see)
> Components: Remote Protocols
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Labels: 63gablocker, hotrod-java-client, remote-clients
> Fix For: 7.0.0.Beta1
>
>
> Hot Rod's putIfAbsent might have issues in some edge cases:
> {quote}I want to know whether the entry being put already exists in the remote
> cache cluster, or not.
> I thought that RemoteCache.putIfAbsent() would be useful for that
> purpose, i.e.,
> {code}
> if (remoteCache.putIfAbsent(k, v) == null) {
>     // new entry
> } else {
>     // k already exists
> }
> {code}
> But no.
> putIfAbsent() for a new entry may return a non-null value if one of the
> servers crashed during the put.
> The behavior is as follows:
> 1. The client calls putIfAbsent(k,v).
> 2. The server receives the request and sends replication requests to the
> other servers. If the server crashes before completing replication, some
> servers own that (k,v), but others do not.
> 3. The client receives an error. putIfAbsent() internally retries the
> same request against the next server in the cluster server list.
> 4. If the next server owns the (k,v), putIfAbsent() returns the (k,v)
> replicated at step 2, without any error.
> So putIfAbsent() is not reliable for knowing whether the entry being put
> is *exactly* new or not.
> Does anyone have any idea/workaround for this purpose?{quote}
> A workaround is to do this:
> {quote}We have a simple solution, which can be applied to our customer's application.
> If the value part of each (k,v) being put is unique, or contains a unique
> component, the client can *double-check* whether the entry is new.
> {code}
> Long val = System.nanoTime(); // a UUID would also work
> Long ret = cache.putIfAbsent(key, val);
> if (ret == null || ret.equals(val)) {
>     // new entry: either no previous value, or the returned value is our own
> } else {
>     // key already exists
> }
> {code}
> We are proposing this workaround, which mostly works fine.{quote}
> However, this is a bit of a kludge.
> Hot Rod should be improved with an operation that allows a version to be passed when an entry is created, instead of relying on the client to generate it.
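> A hypothetical sketch of what such a client-side operation could look like
> (putIfAbsentVersioned and generateUniqueVersion are illustrative names, not
> part of the actual Hot Rod API; VersionedValue is the existing client type):
> {code}
> // Hypothetical: the client supplies the version at creation time, so a
> // retried create that finds its own value can recognize it by version.
> long version = generateUniqueVersion(); // illustrative helper
> VersionedValue<V> prev = remoteCache.putIfAbsentVersioned(k, v, version);
> if (prev == null || prev.getVersion() == version) {
>     // entry is new (or the retry found our own write)
> } else {
>     // entry already existed
> }
> {code}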
[JBoss JIRA] (ISPN-4471) MapReduceTask: memory leak with useIntermediateSharedCache = true
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-4471?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-4471:
-------------------------------
Assignee: Vladimir Blagojevic (was: Mircea Markus)
> MapReduceTask: memory leak with useIntermediateSharedCache = true
> -----------------------------------------------------------------
>
> Key: ISPN-4471
> URL: https://issues.jboss.org/browse/ISPN-4471
> Project: Infinispan
> Issue Type: Bug
> Security Level: Public (Everyone can see)
> Affects Versions: 6.0.2.Final
> Reporter: Rich DiCroce
> Assignee: Vladimir Blagojevic
>
> When using an intermediate shared cache for the reduce phase, MapReduceTask puts the entries into the cache with no expiration and apparently never removes them. This eventually results in OutOfMemoryErrors.
> One workaround is to disable use of the intermediate shared cache, so that a new cache is created and destroyed for every task; this "fixes" the problem of the intermediate values never being removed, but it causes a ton of log spam (see the configuration sketch after the log excerpt):
> {noformat}
> 2014-07-02 11:55:10,014 INFO [org.infinispan.jmx.CacheJmxRegistration] (transport-thread-21) ISPN000031: MBeans were successfully registered to the platform MBean server.
> 2014-07-02 11:55:10,016 INFO [org.jboss.as.clustering.infinispan] (transport-thread-21) JBAS010281: Started e71dddc0-60ce-4cb9-ac8c-615d60866393 cache from GamingPortal container
> 2014-07-02 11:55:10,023 INFO [org.infinispan.jmx.CacheJmxRegistration] (transport-thread-5) ISPN000031: MBeans were successfully registered to the platform MBean server.
> 2014-07-02 11:55:10,024 INFO [org.infinispan.jmx.CacheJmxRegistration] (transport-thread-4) ISPN000031: MBeans were successfully registered to the platform MBean server.
> 2014-07-02 11:55:10,025 INFO [org.jboss.as.clustering.infinispan] (transport-thread-5) JBAS010281: Started 22d387d6-69c6-48b2-9701-ea64c08d66ad cache from GamingPortal container
> 2014-07-02 11:55:10,026 INFO [org.jboss.as.clustering.infinispan] (transport-thread-4) JBAS010281: Started bfaf92a0-a030-4624-93a7-0fee097415d7 cache from NMS container
> 2014-07-02 11:55:10,037 INFO [org.jboss.as.clustering.infinispan] (EJB default - 2) JBAS010282: Stopped 22d387d6-69c6-48b2-9701-ea64c08d66ad cache from GamingPortal container
> 2014-07-02 11:55:10,040 INFO [org.jboss.as.clustering.infinispan] (EJB default - 1) JBAS010282: Stopped bfaf92a0-a030-4624-93a7-0fee097415d7 cache from NMS container
> 2014-07-02 11:55:10,047 INFO [org.jboss.as.clustering.infinispan] (EJB default - 6) JBAS010282: Stopped e71dddc0-60ce-4cb9-ac8c-615d60866393 cache from GamingPortal container
> 2014-07-02 11:55:10,047 INFO [org.infinispan.jmx.CacheJmxRegistration] (transport-thread-0) ISPN000031: MBeans were successfully registered to the platform MBean server.
> 2014-07-02 11:55:10,048 INFO [org.jboss.as.clustering.infinispan] (transport-thread-0) JBAS010281: Started bed74bd3-a227-43e0-b262-62c19dd444a7 cache from GamingPortal container
> 2014-07-02 11:55:10,052 INFO [org.jboss.as.clustering.infinispan] (EJB default - 2) JBAS010282: Stopped bed74bd3-a227-43e0-b262-62c19dd444a7 cache from GamingPortal container
> 2014-07-02 11:55:10,063 INFO [org.infinispan.jmx.CacheJmxRegistration] (transport-thread-7) ISPN000031: MBeans were successfully registered to the platform MBean server.
> 2014-07-02 11:55:10,064 INFO [org.jboss.as.clustering.infinispan] (transport-thread-7) JBAS010281: Started 63cce570-0169-40c2-bc9f-e045c2864702 cache from GamingPortal container
> 2014-07-02 11:55:10,068 INFO [org.jboss.as.clustering.infinispan] (EJB default - 2) JBAS010282: Stopped 63cce570-0169-40c2-bc9f-e045c2864702 cache from GamingPortal container
> 2014-07-02 11:55:10,072 INFO [org.infinispan.jmx.CacheJmxRegistration] (transport-thread-19) ISPN000031: MBeans were successfully registered to the platform MBean server.
> 2014-07-02 11:55:10,073 INFO [org.jboss.as.clustering.infinispan] (transport-thread-19) JBAS010281: Started 83f7b355-d4c6-4a0a-aade-ce2509293d77 cache from GamingPortal container
> 2014-07-02 11:55:10,077 INFO [org.jboss.as.clustering.infinispan] (EJB default - 2) JBAS010282: Stopped 83f7b355-d4c6-4a0a-aade-ce2509293d77 cache from GamingPortal container
> {noformat}
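> For reference, a minimal sketch of how these flags are selected when the task
> is created (constructor per the Infinispan 6 API; the mapper and reducer
> classes are assumed placeholders):
> {code}
> import java.util.Map;
> import org.infinispan.Cache;
> import org.infinispan.distexec.mapreduce.MapReduceTask;
>
> // distributeReducePhase = true, useIntermediateSharedCache = false:
> // a per-task intermediate cache is created and destroyed, which avoids
> // the leak but produces the start/stop log spam shown above.
> MapReduceTask<String, String, String, Integer> task =
>       new MapReduceTask<String, String, String, Integer>(cache, true, false);
> Map<String, Integer> result = task
>       .mappedWith(new WordCountMapper())    // assumed Mapper implementation
>       .reducedWith(new WordCountReducer())  // assumed Reducer implementation
>       .execute();
> {code}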
> I also observed one NullPointerException with distributeReducePhase = true and useIntermediateSharedCache = false. This could be related to ISPN-4460, but I'm not sure.
> {noformat}
> Caused by: org.infinispan.commons.CacheException: java.util.concurrent.ExecutionException: org.infinispan.commons.CacheException: java.lang.NullPointerException
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:348) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:634) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
> at org.infinispan.distexec.mapreduce.MapReduceTask$3.call(MapReduceTask.java:652) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
> at org.infinispan.distexec.mapreduce.MapReduceTask$MapReduceTaskFuture.get(MapReduceTask.java:760) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
> ... 63 more
> Caused by: java.util.concurrent.ExecutionException: org.infinispan.commons.CacheException: java.lang.NullPointerException
> at java.util.concurrent.FutureTask.report(FutureTask.java:122) [rt.jar:1.7.0_45]
> at java.util.concurrent.FutureTask.get(FutureTask.java:188) [rt.jar:1.7.0_45]
> at org.infinispan.distexec.mapreduce.MapReduceTask$TaskPart.get(MapReduceTask.java:845) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
> at org.infinispan.distexec.mapreduce.MapReduceTask.executeMapPhase(MapReduceTask.java:439) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:342) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
> ... 66 more
> Caused by: org.infinispan.commons.CacheException: java.lang.NullPointerException
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.mapAndCombineForDistributedReduction(MapReduceManagerImpl.java:100) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
> at org.infinispan.distexec.mapreduce.MapReduceTask$MapTaskPart.invokeMapCombineLocally(MapReduceTask.java:967) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
> at org.infinispan.distexec.mapreduce.MapReduceTask$MapTaskPart.access$200(MapReduceTask.java:894) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
> at org.infinispan.distexec.mapreduce.MapReduceTask$MapTaskPart$1.call(MapReduceTask.java:916) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
> at org.infinispan.distexec.mapreduce.MapReduceTask$MapTaskPart$1.call(MapReduceTask.java:912) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_45]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_45]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_45]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_45]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_45]
> at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_45]
> Caused by: java.lang.NullPointerException
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.mapKeysToNodes(MapReduceManagerImpl.java:355) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.migrateIntermediateKeys(MapReduceManagerImpl.java:264) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.combine(MapReduceManagerImpl.java:258) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.mapAndCombineForDistributedReduction(MapReduceManagerImpl.java:98) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
> ... 10 more
> {noformat}
[JBoss JIRA] (ISPN-4471) MapReduceTask: memory leak with useIntermediateSharedCache = true
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-4471?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-4471:
-------------------------------
Component/s: Distributed Execution and Map/Reduce
> MapReduceTask: memory leak with useIntermediateSharedCache = true
> -----------------------------------------------------------------
>
> Key: ISPN-4471
> URL: https://issues.jboss.org/browse/ISPN-4471
> Project: Infinispan
> Issue Type: Bug
> Security Level: Public (Everyone can see)
> Components: Distributed Execution and Map/Reduce
> Affects Versions: 6.0.2.Final
> Reporter: Rich DiCroce
> Assignee: Vladimir Blagojevic
>
[JBoss JIRA] (ISPN-4470) cache.keySet().size() returns different value than cache.size() for HotRod client
by Martin Gencur (JIRA)
[ https://issues.jboss.org/browse/ISPN-4470?page=com.atlassian.jira.plugin.... ]
Martin Gencur commented on ISPN-4470:
-------------------------------------
size() on the remote cache does not behave the same way, though: it returns only the local cache size. So this issue is not about the JavaDocs but about the (unexpected) difference in behavior.
> cache.keySet().size() returns different value than cache.size() for HotRod client
> ---------------------------------------------------------------------------------
>
> Key: ISPN-4470
> URL: https://issues.jboss.org/browse/ISPN-4470
> Project: Infinispan
> Issue Type: Bug
> Security Level: Public (Everyone can see)
> Affects Versions: 6.0.2.Final, 7.0.0.Alpha4
> Reporter: Martin Gencur
> Assignee: Galder Zamarreño
> Priority: Critical
> Fix For: 7.0.0.Beta1, 7.0.0.Final
>
>
> cache.keySet().size() returns the number of all keys in the cluster (even a distributed one), while cache.size() returns just the local cache size (which can differ in a distributed cache).
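> A minimal sketch of the discrepancy from the client side (API as in the
> Infinispan 6 Hot Rod client; the server address is a placeholder):
> {code}
> import org.infinispan.client.hotrod.RemoteCache;
> import org.infinispan.client.hotrod.RemoteCacheManager;
> import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
>
> RemoteCacheManager rcm = new RemoteCacheManager(
>       new ConfigurationBuilder().addServer().host("127.0.0.1").port(11222).build());
> RemoteCache<String, String> cache = rcm.getCache();
>
> int localSize = cache.size();            // size of the responding node only
> int globalSize = cache.keySet().size();  // keys aggregated across the cluster
> // In a distributed cache, localSize and globalSize can differ.
> {code}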
[JBoss JIRA] (ISPN-4477) infinispan-server.sh grep with log message
by Takayoshi Kimura (JIRA)
Takayoshi Kimura created ISPN-4477:
--------------------------------------
Summary: infinispan-server.sh grep with log message
Key: ISPN-4477
URL: https://issues.jboss.org/browse/ISPN-4477
Project: Infinispan
Issue Type: Enhancement
Security Level: Public (Everyone can see)
Components: Server
Affects Versions: 7.0.0.Alpha4
Reporter: Takayoshi Kimura
Assignee: Mircea Markus
infinispan-server.sh checks whether or not the instance has started, using the following grep:
{code}
grep 'JBAS015874.*started in' $ISPN_SERVER_CONSOLE_LOG > /dev/null
{code}
But this doesn't play nicely with i18n (localized) log messages, and matching the message text is not necessary here at all. This is already fixed in WildFly, which greps for the log ID only.
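A sketch of the log-ID-only check, mirroring the WildFly approach (the exact WildFly line is not quoted in this issue, so this form is assumed):
{code}
grep 'JBAS015874' $ISPN_SERVER_CONSOLE_LOG > /dev/null
{code}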