[JBoss JIRA] (ISPN-7810) Error executing the MassIndexer with security enabled
by Gustavo Fernandes (JIRA)
Gustavo Fernandes created ISPN-7810:
---------------------------------------
Summary: Error executing the MassIndexer with security enabled
Key: ISPN-7810
URL: https://issues.jboss.org/browse/ISPN-7810
Project: Infinispan
Issue Type: Bug
Components: Embedded Querying, Security
Reporter: Gustavo Fernandes
Exception thrown:
Caused by: java.lang.SecurityException: ISPN000287: Unauthorized access: subject 'null' lacks 'BULK_READ' permission
The MassIndexer sends custom commands to one or more cluster members, but the caller's Subject is not propagated with them, resulting in the error above.
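A minimal sketch of the fix direction, assuming the originating node can capture the current Subject and the command re-establishes it before the permission check runs. All names here (the thread-local, {{IndexCommand}}, {{checkBulkRead}}) are illustrative stand-ins, not Infinispan's actual API:

```java
import java.util.concurrent.Callable;

// Hypothetical sketch: a remote command captures the caller's subject on the
// originating node and re-applies it on the executing node, so the
// BULK_READ permission check no longer sees a null subject.
public class SubjectPropagationSketch {
    // Stand-in for the current thread's authenticated subject.
    static final ThreadLocal<String> CURRENT_SUBJECT = new ThreadLocal<>();

    // A command that carries the subject it was created under.
    static class IndexCommand implements Callable<String> {
        final String subject; // captured on the originating node

        IndexCommand(String subject) {
            this.subject = subject;
        }

        @Override
        public String call() {
            // Re-establish the captured subject before the permission check.
            CURRENT_SUBJECT.set(subject);
            try {
                return checkBulkRead();
            } finally {
                CURRENT_SUBJECT.remove();
            }
        }
    }

    static String checkBulkRead() {
        String s = CURRENT_SUBJECT.get();
        if (s == null) {
            throw new SecurityException("subject 'null' lacks 'BULK_READ' permission");
        }
        return s;
    }

    public static void main(String[] args) throws Exception {
        // Without propagation the check fails, as in the bug report.
        boolean failed = false;
        try {
            checkBulkRead();
        } catch (SecurityException e) {
            failed = true;
        }
        // With the subject captured into the command, the check passes.
        String result = new IndexCommand("admin").call();
        System.out.println(failed + " " + result);
    }
}
```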
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
8 years, 11 months
[JBoss JIRA] (ISPN-7809) Multiplex events for multiple listeners over a single connection in client
by Galder Zamarreño (JIRA)
Galder Zamarreño created ISPN-7809:
--------------------------------------
Summary: Multiplex events for multiple listeners over a single connection in client
Key: ISPN-7809
URL: https://issues.jboss.org/browse/ISPN-7809
Project: Infinispan
Issue Type: Enhancement
Components: Remote Protocols
Affects Versions: 9.0.0.Final
Reporter: Galder Zamarreño
Currently the Java Hot Rod client uses a separate connection for each listener that is added to the server. A connection is allocated per listener and is not released until the listener is removed or the client is closed.
To avoid wasting all these connections, each client should create a single connection to be used for all its listeners, and events should be multiplexed through it.
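The multiplexing idea can be sketched as a dispatcher keyed by listener id: every event read from the single shared connection carries the target listener's id, and a map routes it to the right callback. This is an illustration of the technique, not the Hot Rod client's actual wire format or API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Illustrative sketch: one shared "connection" delivers events tagged with a
// listener id, and a dispatcher routes each event to the registered listener.
public class EventMultiplexerSketch {
    // listener-id -> callback, shared by all listeners on one connection
    private final Map<String, Consumer<String>> listeners = new ConcurrentHashMap<>();

    void addListener(String listenerId, Consumer<String> callback) {
        listeners.put(listenerId, callback);
    }

    void removeListener(String listenerId) {
        listeners.remove(listenerId);
    }

    // Called for every event read from the single connection; the wire format
    // is assumed to carry the target listener's id with each event.
    void onEvent(String listenerId, String event) {
        Consumer<String> callback = listeners.get(listenerId);
        if (callback != null) {
            callback.accept(event);
        }
    }

    public static void main(String[] args) {
        EventMultiplexerSketch mux = new EventMultiplexerSketch();
        StringBuilder received = new StringBuilder();
        mux.addListener("l1", e -> received.append("l1:").append(e).append(" "));
        mux.addListener("l2", e -> received.append("l2:").append(e).append(" "));
        // Two events arriving on the same connection, for different listeners
        mux.onEvent("l1", "created");
        mux.onEvent("l2", "modified");
        System.out.println(received.toString().trim());
    }
}
```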
[JBoss JIRA] (ISPN-7808) Upgrade to mockito 2.7.21
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-7808?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-7808:
-------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/5125
> Upgrade to mockito 2.7.21
> -------------------------
>
> Key: ISPN-7808
> URL: https://issues.jboss.org/browse/ISPN-7808
> Project: Infinispan
> Issue Type: Component Upgrade
> Components: Test Suite - Core, Test Suite - Server
> Affects Versions: 9.0.0.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.1.0.Alpha1
>
>
> While fixing ISPN-7659, I changed {{version.mockito}} in the parent POM to 2.7.21 (the latest version of mockito-core).
> It turns out that all the modules actually depend on {{mockito-all}} without specifying a version. {{version.mockito}} was only used by some OSGi integration tests, and the change broke them.
> The latest version of mockito-all is 1.9.5, which is quite old, so it would be best to upgrade to the latest mockito-core everywhere.
[JBoss JIRA] (ISPN-7808) Upgrade to mockito 2.7.21
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-7808?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-7808:
-------------------------------
Status: Open (was: New)
> Upgrade to mockito 2.7.21
> -------------------------
>
> Key: ISPN-7808
> URL: https://issues.jboss.org/browse/ISPN-7808
> Project: Infinispan
> Issue Type: Component Upgrade
> Components: Test Suite - Core, Test Suite - Server
> Affects Versions: 9.0.0.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.1.0.Alpha1
>
>
> While fixing ISPN-7659, I changed {{version.mockito}} in the parent POM to 2.7.21 (the latest version of mockito-core).
> It turns out that all the modules actually depend on {{mockito-all}} without specifying a version. {{version.mockito}} was only used by some OSGi integration tests, and the change broke them.
> The latest version of mockito-all is 1.9.5, which is quite old, so it would be best to upgrade to the latest mockito-core everywhere.
[JBoss JIRA] (ISPN-7808) Upgrade to mockito 2.7.21
by Dan Berindei (JIRA)
Dan Berindei created ISPN-7808:
----------------------------------
Summary: Upgrade to mockito 2.7.21
Key: ISPN-7808
URL: https://issues.jboss.org/browse/ISPN-7808
Project: Infinispan
Issue Type: Component Upgrade
Components: Test Suite - Core, Test Suite - Server
Affects Versions: 9.0.0.Final
Reporter: Dan Berindei
Assignee: Dan Berindei
Fix For: 9.1.0.Alpha1
While fixing ISPN-7659, I changed {{version.mockito}} in the parent POM to 2.7.21 (the latest version of mockito-core).
It turns out that all the modules actually depend on {{mockito-all}} without specifying a version. {{version.mockito}} was only used by some OSGi integration tests, and the change broke them.
The latest version of mockito-all is 1.9.5, which is quite old, so it would be best to upgrade to the latest mockito-core everywhere.
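One way to keep such an upgrade consistent is to manage the mockito-core version centrally in the parent POM via {{dependencyManagement}}, so no module can pull in an unversioned mockito-all. This is a generic Maven sketch, not the actual Infinispan parent POM layout:

```xml
<!-- Sketch: centralize the mockito version in the parent POM
     (illustrative; not the actual Infinispan parent POM) -->
<properties>
  <version.mockito>2.7.21</version.mockito>
</properties>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.mockito</groupId>
      <artifactId>mockito-core</artifactId>
      <version>${version.mockito}</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Child modules then declare {{mockito-core}} without a version and inherit 2.7.21 from the parent.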
[JBoss JIRA] (ISPN-7784) BytesObjectInput doesn't implement ObjectInput properly
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-7784?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-7784:
-------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/5124
> BytesObjectInput doesn't implement ObjectInput properly
> -------------------------------------------------------
>
> Key: ISPN-7784
> URL: https://issues.jboss.org/browse/ISPN-7784
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.0.0.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 9.1.0.Final
>
>
> The {{java.io.ObjectInput#read()}} javadoc says implementations should return -1 if the end of the stream is reached, but {{BytesObjectInput}} throws an {{ArrayIndexOutOfBoundsException}}.
> Actually, {{read()}} is only used because {{ExternalJBossMarshaller.JBossByteInput}} doesn't implement {{read(byte[])}}. This makes unmarshalling of external objects slower than it should be, because {{InputStream.read(byte[])}} fills the buffer by repeatedly calling {{read()}}.
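The contract described above can be sketched as follows: a single-byte {{read()}} that returns -1 at end of stream instead of throwing, plus a bulk {{read(byte[], int, int)}} override that copies in one step rather than looping over {{read()}}. This is an illustration of the contract, not Infinispan's actual {{BytesObjectInput}}:

```java
// Sketch of the java.io.ObjectInput#read() contract the issue describes.
// (Illustrative; not Infinispan's actual BytesObjectInput.)
public class BytesInputSketch {
    private final byte[] bytes;
    private int pos;

    BytesInputSketch(byte[] bytes) {
        this.bytes = bytes;
    }

    // Single-byte read: -1 signals end of stream, per the ObjectInput javadoc.
    public int read() {
        return pos < bytes.length ? (bytes[pos++] & 0xFF) : -1;
    }

    // Bulk read: one System.arraycopy instead of len calls to read().
    public int read(byte[] b, int off, int len) {
        if (pos >= bytes.length) {
            return -1;
        }
        int n = Math.min(len, bytes.length - pos);
        System.arraycopy(bytes, pos, b, off, n);
        pos += n;
        return n;
    }

    public static void main(String[] args) {
        BytesInputSketch in = new BytesInputSketch(new byte[] {1, 2, 3});
        byte[] buf = new byte[8];
        int n = in.read(buf, 0, buf.length); // reads all 3 bytes at once
        int eof = in.read();                 // past the end: -1, no exception
        System.out.println(n + " " + eof);
    }
}
```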
[JBoss JIRA] (ISPN-7801) RehashWithL1Test.testPutWithRehashAndCacheClear random failures
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-7801?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-7801:
------------------------------------
I think the test is correct after all: the L1 entries on the old nodes should *not* be visible after the joiner became the only owner of all the keys. Writes on the joiner will not send any L1 invalidations, because its {{L1ManagerImpl}} doesn't have any requestors, so the other nodes could see stale values.
So we need to add another requirement for the rebalance/state transfer process: L1 entries should not be visible after the node they were requested from is no longer an owner in the write CH.
[~rvansa] I think the simplest way to do this would be to invalidate L1 entries at the beginning of the {{READ_NEW_WRITE_ALL}} phase. We could either split {{StateConsumerImpl.removeStaleData()}} into two parts, so that invalidation of regular entries still happens after the end of rebalance, or we could invalidate everything during {{READ_NEW_WRITE_ALL}}, but that would require additional logic to skip writing new values to the data container/stores.
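The requirement can be modeled with a toy sketch: when the topology enters the phase where new owners serve all reads, every L1 (remotely cached) entry is dropped, because the node it was fetched from may no longer be an owner and thus will no longer send invalidations. Names like {{onReadNewWriteAllPhase}} are illustrative only, not Infinispan internals:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the requirement: owned entries survive the rebalance phase
// change, L1 entries do not. (Illustrative; not StateConsumerImpl.)
public class L1InvalidationSketch {
    final Map<String, String> owned = new HashMap<>(); // entries this node owns
    final Map<String, String> l1 = new HashMap<>();    // entries cached from remote owners

    // Hypothetical hook invoked when the topology enters READ_NEW_WRITE_ALL:
    // keep owned data, drop every L1 entry.
    void onReadNewWriteAllPhase() {
        l1.clear();
    }

    String get(String key) {
        String v = owned.get(key);
        return v != null ? v : l1.get(key);
    }

    public static void main(String[] args) {
        L1InvalidationSketch node = new L1InvalidationSketch();
        node.owned.put("k1", "v1");
        node.l1.put("k0", "possibly stale");
        node.onReadNewWriteAllPhase();
        // The owned entry is still readable; the L1 entry is gone.
        System.out.println(node.get("k1") + " " + node.get("k0"));
    }
}
```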
> RehashWithL1Test.testPutWithRehashAndCacheClear random failures
> ---------------------------------------------------------------
>
> Key: ISPN-7801
> URL: https://issues.jboss.org/browse/ISPN-7801
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 9.0.0.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.1.0.Final
>
>
> The test kills the only owner of a key and checks that when a node starts owning an L1 entry, it doesn't send it to other nodes during state transfer. Then it adds a new node (owning the key) and checks that the key isn't transferred to the new node, and that it's deleted from L1 on the old nodes. The problem is that it doesn't wait: it assumes all the nodes have already removed it by the time {{getCache()}} returns on the joiner.
> {noformat}
> 03:24:27,606 TRACE (jgroups-5,Test-NodeB-54331:[]) [L1WriteSynchronizer] Caching remotely retrieved entry for key k0 in L1
> 03:24:27,607 TRACE (jgroups-5,Test-NodeB-54331:[]) [DefaultDataContainer] Store MortalCacheEntry{key=k0, value=some data} in container
> 03:24:26,754 DEBUG (testng-Test:[]) [Test] Populating L1 on Test-NodeA-2588
> 03:24:27,514 DEBUG (testng-Test:[]) [Test] Populating L1 on Test-NodeB-54331
> 03:24:27,777 DEBUG (testng-Test:[]) [Test] Populating L1 on Test-NodeC-65326
> 03:24:27,777 DEBUG (testng-Test:[]) [Test] Killing node Test-NodeC-65326
> 03:24:27,781 TRACE (transport-thread-Test-NodeA-p51-t2:[Topology-___defaultcache]) [DefaultDataContainer] Removed MortalCacheEntry{key=k0, value=some data} from container
> *** The entry is not removed from NodeB at this point
> 03:24:27,936 DEBUG (testng-Test:[]) [Test] Checking values on Test-NodeA-2588
> 03:24:27,998 TRACE (jgroups-5,Test-NodeB-54331:[]) [CommandAwareRpcDispatcher] About to send back response SuccessfulResponse{responseValue=MortalCacheValue{value=some data, lifespan=600000, created=1493943867607}} for command ClusteredGetCommand{key=k0, flags=[]}
> 03:24:28,034 TRACE (jgroups-7,Test-NodeA-2588:[]) [L1WriteSynchronizer] Caching remotely retrieved entry for key k0 in L1
> 03:24:28,044 TRACE (jgroups-7,Test-NodeA-2588:[]) [DefaultDataContainer] Store MortalCacheEntry{key=k0, value=some data} in container
> 03:24:28,519 DEBUG (testng-Test:[]) [Test] Checking values on Test-NodeB-54331
> 03:24:28,595 DEBUG (testng-Test:[]) [Test] Starting a new joiner
> 03:24:30,261 TRACE (transport-thread-Test-NodeA-p51-t6:[Topology-___defaultcache]) [InvocationContextInterceptor] Invoked with command InvalidateCommand{keys=[k0, k1, k2, k3, k4, k5, k6, k7, k8, k9]} and InvocationContext [org.infinispan.context.impl.NonTxInvocationContext@54c5cc1d]
> 03:24:30,292 DEBUG (testng-Test:[]) [Test] Checking values on Test-NodeA-2588
> 03:24:30,355 ERROR (testng-Test:[]) [TestSuiteProgress] Test failed: org.infinispan.distribution.rehash.RehashWithL1Test.testPutWithRehashAndCacheClear
> java.lang.AssertionError: wrong value for k0
> at org.testng.AssertJUnit.fail(AssertJUnit.java:59) ~[testng-6.8.8.jar:?]
> at org.testng.AssertJUnit.assertTrue(AssertJUnit.java:24) ~[testng-6.8.8.jar:?]
> at org.testng.AssertJUnit.assertNull(AssertJUnit.java:282) ~[testng-6.8.8.jar:?]
> at org.infinispan.distribution.rehash.RehashWithL1Test.testPutWithRehashAndCacheClear(RehashWithL1Test.java:78) ~[test-classes/:?]
> *** Too late
> 03:24:30,360 TRACE (transport-thread-Test-NodeA-p51-t6:[Topology-___defaultcache]) [DefaultDataContainer] Removed MortalCacheEntry{key=k0, value=some data} from container
> {noformat}
[JBoss JIRA] (ISPN-7801) RehashWithL1Test.testPutWithRehashAndCacheClear random failures
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-7801?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-7801:
-------------------------------
Description:
The test kills the only owner of a key and checks that when a node starts owning an L1 entry, it doesn't send it to other nodes during state transfer. Then it adds a new node (owning the key) and checks that the key isn't transferred to the new node, and that it's deleted from L1 on the old nodes. The problem is that it doesn't wait: it assumes all the nodes have already removed it by the time {{getCache()}} returns on the joiner.
{noformat}
03:24:27,606 TRACE (jgroups-5,Test-NodeB-54331:[]) [L1WriteSynchronizer] Caching remotely retrieved entry for key k0 in L1
03:24:27,607 TRACE (jgroups-5,Test-NodeB-54331:[]) [DefaultDataContainer] Store MortalCacheEntry{key=k0, value=some data} in container
03:24:26,754 DEBUG (testng-Test:[]) [Test] Populating L1 on Test-NodeA-2588
03:24:27,514 DEBUG (testng-Test:[]) [Test] Populating L1 on Test-NodeB-54331
03:24:27,777 DEBUG (testng-Test:[]) [Test] Populating L1 on Test-NodeC-65326
03:24:27,777 DEBUG (testng-Test:[]) [Test] Killing node Test-NodeC-65326
03:24:27,781 TRACE (transport-thread-Test-NodeA-p51-t2:[Topology-___defaultcache]) [DefaultDataContainer] Removed MortalCacheEntry{key=k0, value=some data} from container
*** The entry is not removed from NodeB at this point
03:24:27,936 DEBUG (testng-Test:[]) [Test] Checking values on Test-NodeA-2588
03:24:27,998 TRACE (jgroups-5,Test-NodeB-54331:[]) [CommandAwareRpcDispatcher] About to send back response SuccessfulResponse{responseValue=MortalCacheValue{value=some data, lifespan=600000, created=1493943867607}} for command ClusteredGetCommand{key=k0, flags=[]}
03:24:28,034 TRACE (jgroups-7,Test-NodeA-2588:[]) [L1WriteSynchronizer] Caching remotely retrieved entry for key k0 in L1
03:24:28,044 TRACE (jgroups-7,Test-NodeA-2588:[]) [DefaultDataContainer] Store MortalCacheEntry{key=k0, value=some data} in container
03:24:28,519 DEBUG (testng-Test:[]) [Test] Checking values on Test-NodeB-54331
03:24:28,595 DEBUG (testng-Test:[]) [Test] Starting a new joiner
03:24:30,261 TRACE (transport-thread-Test-NodeA-p51-t6:[Topology-___defaultcache]) [InvocationContextInterceptor] Invoked with command InvalidateCommand{keys=[k0, k1, k2, k3, k4, k5, k6, k7, k8, k9]} and InvocationContext [org.infinispan.context.impl.NonTxInvocationContext@54c5cc1d]
03:24:30,292 DEBUG (testng-Test:[]) [Test] Checking values on Test-NodeA-2588
03:24:30,355 ERROR (testng-Test:[]) [TestSuiteProgress] Test failed: org.infinispan.distribution.rehash.RehashWithL1Test.testPutWithRehashAndCacheClear
java.lang.AssertionError: wrong value for k0
at org.testng.AssertJUnit.fail(AssertJUnit.java:59) ~[testng-6.8.8.jar:?]
at org.testng.AssertJUnit.assertTrue(AssertJUnit.java:24) ~[testng-6.8.8.jar:?]
at org.testng.AssertJUnit.assertNull(AssertJUnit.java:282) ~[testng-6.8.8.jar:?]
at org.infinispan.distribution.rehash.RehashWithL1Test.testPutWithRehashAndCacheClear(RehashWithL1Test.java:78) ~[test-classes/:?]
*** Too late
03:24:30,360 TRACE (transport-thread-Test-NodeA-p51-t6:[Topology-___defaultcache]) [DefaultDataContainer] Removed MortalCacheEntry{key=k0, value=some data} from container
{noformat}
was:
The test kills the only owner of a key and checks that when a node starts owning an L1 entry, it doesn't send it to other nodes during state transfer. Then it adds a new node (owning the key) and checks that the key isn't transferred to the new node, and that it's deleted from L1 on the old nodes. The problem is that it doesn't wait: it assumes all the nodes have already removed it by the time {{getCache()}} returns on the joiner.
{noformat}
03:24:27,606 TRACE (jgroups-5,Test-NodeB-54331:[]) [L1WriteSynchronizer] Caching remotely retrieved entry for key k0 in L1
03:24:27,607 TRACE (jgroups-5,Test-NodeB-54331:[]) [DefaultDataContainer] Store MortalCacheEntry{key=k0, value=some data} in container
03:24:27,777 DEBUG (testng-Test:[]) [Test] Populating L1 on Test-NodeC-65326
03:24:27,777 DEBUG (testng-Test:[]) [Test] Killing node Test-NodeC-65326
03:24:27,781 TRACE (transport-thread-Test-NodeA-p51-t2:[Topology-___defaultcache]) [DefaultDataContainer] Removed MortalCacheEntry{key=k0, value=some data} from container
*** The entry is not removed from NodeB at this point
03:24:27,936 DEBUG (testng-Test:[]) [Test] Checking values on Test-NodeA-2588
03:24:27,998 TRACE (jgroups-5,Test-NodeB-54331:[]) [CommandAwareRpcDispatcher] About to send back response SuccessfulResponse{responseValue=MortalCacheValue{value=some data, lifespan=600000, created=1493943867607}} for command ClusteredGetCommand{key=k0, flags=[]}
03:24:28,034 TRACE (jgroups-7,Test-NodeA-2588:[]) [L1WriteSynchronizer] Caching remotely retrieved entry for key k0 in L1
03:24:28,044 TRACE (jgroups-7,Test-NodeA-2588:[]) [DefaultDataContainer] Store MortalCacheEntry{key=k0, value=some data} in container
03:24:30,261 TRACE (transport-thread-Test-NodeA-p51-t6:[Topology-___defaultcache]) [InvocationContextInterceptor] Invoked with command InvalidateCommand{keys=[k0, k1, k2, k3, k4, k5, k6, k7, k8, k9]} and InvocationContext [org.infinispan.context.impl.NonTxInvocationContext@54c5cc1d]
03:24:30,292 DEBUG (testng-Test:[]) [Test] Checking values on Test-NodeA-2588
03:24:30,355 ERROR (testng-Test:[]) [TestSuiteProgress] Test failed: org.infinispan.distribution.rehash.RehashWithL1Test.testPutWithRehashAndCacheClear
java.lang.AssertionError: wrong value for k0
at org.testng.AssertJUnit.fail(AssertJUnit.java:59) ~[testng-6.8.8.jar:?]
at org.testng.AssertJUnit.assertTrue(AssertJUnit.java:24) ~[testng-6.8.8.jar:?]
at org.testng.AssertJUnit.assertNull(AssertJUnit.java:282) ~[testng-6.8.8.jar:?]
at org.infinispan.distribution.rehash.RehashWithL1Test.testPutWithRehashAndCacheClear(RehashWithL1Test.java:78) ~[test-classes/:?]
*** Too late
03:24:30,360 TRACE (transport-thread-Test-NodeA-p51-t6:[Topology-___defaultcache]) [DefaultDataContainer] Removed MortalCacheEntry{key=k0, value=some data} from container
{noformat}
> RehashWithL1Test.testPutWithRehashAndCacheClear random failures
> ---------------------------------------------------------------
>
> Key: ISPN-7801
> URL: https://issues.jboss.org/browse/ISPN-7801
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 9.0.0.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 9.1.0.Final
>
>
[JBoss JIRA] (ISPN-7807) Hot Rod Lightweight Transaction (Synchronization)
by Pedro Ruivo (JIRA)
Pedro Ruivo created ISPN-7807:
---------------------------------
Summary: Hot Rod Lightweight Transaction (Synchronization)
Key: ISPN-7807
URL: https://issues.jboss.org/browse/ISPN-7807
Project: Infinispan
Issue Type: Bug
Components: Remote Protocols, Transactions
Reporter: Pedro Ruivo
Assignee: Pedro Ruivo
A JIRA to track future JIRAs related to Hot Rod transactions (Synchronization only; another JIRA will be created for full XA).
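The Synchronization-based model can be sketched as a remote cache that buffers writes locally and flushes them when the transaction manager calls {{beforeCompletion()}}. The interface below mirrors {{javax.transaction.Synchronization}}; the {{TxRemoteCache}} class and its buffering scheme are an illustration, not the Hot Rod client's actual design:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of Synchronization-based enlistment: writes are buffered in
// the client and pushed to the server in one batch at commit time.
public class HotRodSyncSketch {
    // Mirrors javax.transaction.Synchronization
    interface Synchronization {
        void beforeCompletion();
        void afterCompletion(int status);
    }

    static final int STATUS_COMMITTED = 3; // as in javax.transaction.Status

    static class TxRemoteCache implements Synchronization {
        final Map<String, String> buffered = new LinkedHashMap<>();
        final List<String> serverOps = new ArrayList<>(); // stand-in for the server

        void put(String k, String v) {
            buffered.put(k, v); // no remote call yet
        }

        @Override
        public void beforeCompletion() {
            // Flush all buffered writes in one batch before the TM commits.
            buffered.forEach((k, v) -> serverOps.add("PUT " + k + "=" + v));
        }

        @Override
        public void afterCompletion(int status) {
            if (status == STATUS_COMMITTED) {
                buffered.clear();
            }
        }
    }

    public static void main(String[] args) {
        TxRemoteCache cache = new TxRemoteCache();
        cache.put("k1", "v1");
        cache.put("k2", "v2");
        // A transaction manager would drive these callbacks:
        cache.beforeCompletion();
        cache.afterCompletion(STATUS_COMMITTED);
        System.out.println(cache.serverOps + " " + cache.buffered.size());
    }
}
```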