Running CDI test suite
by Vladimir Blagojevic
Guys,
As I prepared a pull request for ISPN-2181 and rebased to the current
master, my CDI test suite started to fail with:
------------------------------------------------------
T E S T S
-------------------------------------------------------
Running TestSuite
Configuring TestNG with:
org.apache.maven.surefire.testng.conf.TestNGMapConfigurator@324a4e31
log4j:WARN No appenders could be found for logger (org.jboss.logging).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
more info.
[INFO]
------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO]
------------------------------------------------------------------------
[INFO] Total time: 33.484s
[INFO] Finished at: Wed Sep 05 11:47:54 CEST 2012
[INFO] Final Memory: 18M/265M
[INFO]
------------------------------------------------------------------------
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-surefire-plugin:2.12.2:test
(default-test) on project infinispan-cdi: Execution default-test of goal
org.apache.maven.plugins:maven-surefire-plugin:2.12.2:test failed:
java.lang.reflect.InvocationTargetException; nested exception is
java.lang.reflect.InvocationTargetException: null:
org.testng.xml.XmlTest.getClasses()Ljava/util/List; -> [Help 1]
I thought it must have been something in my branch, but then I tried
master and it still fails. Any ideas on how to fix this one?
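For what it's worth, a NoSuchMethodError on org.testng.xml.XmlTest.getClasses() like the one above typically points at a Surefire/TestNG version mismatch: surefire 2.12.2 expects a method that the TestNG version actually resolved on the test classpath doesn't have. A quick reflective probe (a hypothetical diagnostic, not part of the build — run it with the same classpath the tests use) can confirm which TestNG jar is winning:

```java
// Hypothetical diagnostic: check whether the TestNG on the classpath has the
// method surefire 2.12.2 calls, and where that TestNG was loaded from.
public class CheckTestNg {
    public static void main(String[] args) {
        try {
            Class<?> xmlTest = Class.forName("org.testng.xml.XmlTest");
            xmlTest.getMethod("getClasses"); // the call that blows up above
            System.out.println("getClasses() found; TestNG loaded from: "
                    + xmlTest.getProtectionDomain().getCodeSource().getLocation());
        } catch (ClassNotFoundException e) {
            System.out.println("TestNG is not on the classpath");
        } catch (NoSuchMethodException e) {
            System.out.println("an older TestNG without getClasses() is on the classpath"
                    + " -- the same mismatch surefire hits");
        }
    }
}
```

If the second or third branch fires, pinning the TestNG dependency to the version surefire 2.12.2 was built against should clear the failure.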
Regards,
Vladimir
12 years, 6 months
Re: [infinispan-dev] Fwd: concurrent mod exc
by Adrian Nistor
Hi Ales,
I've added a JIRA for the ConcurrentModificationException issue (
https://issues.jboss.org/browse/ISPN-2263 )
and also fixed it and made a pull request today. The fix will be on
master soon.
Thanks,
Adrian
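For reference, a ConcurrentModificationException from an AbstractList iterator, like the one in the quoted trace below, almost always comes from structurally modifying a list while a for-each loop is iterating it. This is a generic sketch of the bug shape and the usual fix, not the actual StateProviderImpl code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class CmeSketch {
    public static void main(String[] args) {
        List<String> transfers = new ArrayList<String>(Arrays.asList("t1", "t2", "t3"));

        // Buggy shape: removing from the list while a for-each iterator walks it.
        //
        //   for (String t : transfers) {
        //       if (t.equals("t1")) transfers.remove(t);  // the next next() throws CME
        //   }

        // Fix: remove through the iterator itself, which keeps the
        // modification count consistent.
        for (Iterator<String> it = transfers.iterator(); it.hasNext(); ) {
            if (it.next().equals("t1")) {
                it.remove();
            }
        }
        System.out.println(transfers); // prints [t2, t3]
    }
}
```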
On 09/05/2012 04:40 PM, Mircea Markus wrote:
>
>
> Begin forwarded message:
>
>> *From: *Ales Justin <ales.justin@gmail.com>
>> *Subject: *[infinispan-dev] concurrent mod exc
>> *Date: *2 September 2012 13:28:04 GMT+01:00
>> *To: *infinispan-dev List <infinispan-dev@lists.jboss.org>
>> *Reply-To: *infinispan-dev List <infinispan-dev@lists.jboss.org>
>>
>> Running CapeDwarf cluster tests on 5.2.0.Alpha3:
>> (which still do not work ...)
>>
>> Imo, one should never get this CME.
>>
>> ---
>>
>> Caused by: java.util.ConcurrentModificationException
>> at
>> java.util.AbstractList$Itr.checkForComodification(AbstractList.java:372)
>> [classes.jar:1.6.0_33]
>> at java.util.AbstractList$Itr.next(AbstractList.java:343)
>> [classes.jar:1.6.0_33]
>> at
>> org.infinispan.statetransfer.StateProviderImpl.cancelOutboundTransfer(StateProviderImpl.java:285)
>> [infinispan-core-5.2.0.Alpha3.jar:5.2.0.Alpha3]
>> at
>> org.infinispan.statetransfer.StateRequestCommand.perform(StateRequestCommand.java:96)
>> [infinispan-core-5.2.0.Alpha3.jar:5.2.0.Alpha3]
>> at
>> org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:95)
>> [infinispan-core-5.2.0.Alpha3.jar:5.2.0.Alpha3]
>> at
>> org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithWaitForBlocks(InboundInvocationHandlerImpl.java:110)
>> [infinispan-core-5.2.0.Alpha3.jar:5.2.0.Alpha3]
>> at
>> org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:82)
>> [infinispan-core-5.2.0.Alpha3.jar:5.2.0.Alpha3]
>> at
>> org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:228)
>> [infinispan-core-5.2.0.Alpha3.jar:5.2.0.Alpha3]
>> at
>> org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:208)
>> [infinispan-core-5.2.0.Alpha3.jar:5.2.0.Alpha3]
>> at
>> org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:465)
>> at
>> org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:372)
>> at
>> org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:247)
>> at
>> org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:601)
>> at org.jgroups.blocks.mux.MuxUpHandler.up(MuxUpHandler.java:130)
>> at org.jgroups.JChannel.up(JChannel.java:715)
>> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1020)
>> at org.jgroups.protocols.RSVP.up(RSVP.java:192)
>> at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
>> at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
>> at org.jgroups.protocols.FlowControl.up(FlowControl.java:400)
>> at org.jgroups.protocols.pbcast.GMS.up(GMS.java:899)
>> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244)
>> at org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:744)
>> at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:414)
>> at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:608)
>> at org.jgroups.protocols.BARRIER.up(BARRIER.java:102)
>> at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:143)
>> at org.jgroups.protocols.FD.up(FD.java:273)
>> at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
>> at org.jgroups.protocols.MERGE2.up(MERGE2.java:205)
>> at org.jgroups.protocols.Discovery.up(Discovery.java:359)
>> at org.jgroups.protocols.TP$ProtocolAdapter.up(TP.java:2568)
>> at org.jgroups.protocols.TP.passMessageUp(TP.java:1211)
>> at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1775)
>> at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1748)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> [classes.jar:1.6.0_33]
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> [classes.jar:1.6.0_33]
>>
>>
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> --
> Mircea Markus
> Infinispan lead (www.infinispan.org)
>
>
>
drafts, sketches and pull requests
by Sanne Grinovero
Hi all,
I see that on GitHub there are many pull requests for Infinispan, but
a good number of them are not ready to be merged, either needing more
feedback or pending for other reasons.
Rather than keeping these open, shouldn't we track them in JIRA? I'd
like to interpret "pull requests" as changes that should be
reviewed for immediate merging.
Shall we close the pull requests that need further work?
Cheers,
Sanne
Range of ports infinispan accepts by default: replication timeout happens due to port blocking
by Subash Chaturanga
Hi all,
I came across an issue where Infinispan cache replication times out.
Here is the error log I got; it seems these ports got blocked, hence
the timeouts: "Replication timeout for blx25ao03-38795"
So what is the default range of ports that Infinispan uses, and is it
possible to change that range?
I looked at the jgroups-tcp and jgroups-udp XML files. In jgroups-tcp.xml
I saw an attribute called port_range="30" (I didn't see one in
jgroups-udp.xml).
If this is the range, what is the initial port value? The range should
then be (initial value .. initial value + 30), shouldn't it?
I am assuming 38795 is the port that hangs, according to the logs.
Correct me if I have read the logs incorrectly.
I need to identify the UDP/TCP port range Infinispan accepts in order
to get rid of this issue. I'd appreciate any help on this.
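My understanding of the JGroups TCP transport (an assumption worth verifying against the JGroups docs for your version) is that it probes bind_port, bind_port+1, and so on up to bind_port + port_range until one binds. With a hypothetical bind_port of 7800 and the port_range="30" from jgroups-tcp.xml, the candidate window would be:

```java
public class PortWindow {
    public static void main(String[] args) {
        int bindPort = 7800;  // hypothetical bind_port; check your jgroups-tcp.xml
        int portRange = 30;   // the port_range="30" attribute from jgroups-tcp.xml

        // The transport tries bindPort first, then each subsequent port,
        // up to bindPort + portRange, until one binds successfully.
        System.out.println("candidate ports: " + bindPort + " .. " + (bindPort + portRange));
    }
}
```

So the firewall would need that whole window open, not just the initial port.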
org.infinispan.util.concurrent.TimeoutException: Replication timeout for
blx25ao03-38795
at
org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:421)
at
org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:100)
at
org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:124)
at
org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:229)
at
org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:217)
at
org.infinispan.remoting.rpc.RpcManagerImpl.broadcastRpcCommand(RpcManagerImpl.java:199)
at
org.infinispan.remoting.rpc.RpcManagerImpl.broadcastRpcCommand(RpcManagerImpl.java:193)
at
org.infinispan.interceptors.ReplicationInterceptor.handleCrudMethod(ReplicationInterceptor.java:114)
at
org.infinispan.interceptors.ReplicationInterceptor.visitPutKeyValueCommand(ReplicationInterceptor.java:78)
at
org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:76)
at
org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118)
at
org.infinispan.interceptors.LockingInterceptor.visitPutKeyValueCommand(LockingInterceptor.java:198)
at
org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:76)
at
org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118)
at
org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:132)
at
org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:57)
at
org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:76)
at
org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118)
at
org.infinispan.interceptors.TxInterceptor.enlistWriteAndInvokeNext(TxInterceptor.java:183)
at
org.infinispan.interceptors.TxInterceptor.visitPutKeyValueCommand(TxInterceptor.java:132)
at
org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:76)
at
org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118)
--
Subash Chaturanga
Department of Computer Science & Engineering
University of Moratuwa
Sri Lanka
Blog - http://subashsdm.blogspot.com/
Twitter - http://twitter.com/subash89
failing queries on one node
by Ales Justin
After fixing ISPN-2253:
* https://github.com/alesj/infinispan/commit/05229f426e829742902de030548828...
(Sanne is working on an even cleaner solution)
our CapeDwarf clustering tests now look (almost) fixed.
However, one node appears unable to do proper querying.
Deployment dep1 goes to node1, and the same deployment, as dep2, goes to node2.
"Failed tests: queryOnA(org.jboss.test.capedwarf.cluster.DatastoreTestCase): Number of entities: 0"
whereas "queryOnB" doesn't fail.
@Sanne -- do we have any query tests covering this kind of scenario?
I'm using JGroups' auto-master selection.
Tomorrow I'll try fixed JGroups master/slave selection,
and the new dynamic auto-master selection.
Any ideas still welcome. ;-)
-Ales
---
@InSequence(30)
@Test
@OperateOnDeployment("dep1")
public void putStoresEntityOnDepA() throws Exception {
    Entity entity = createTestEntity("KIND", 1);
    getService().put(entity);
    assertStoreContains(entity);
}

@InSequence(31)
@Test
@OperateOnDeployment("dep2")
public void putStoresEntityOnDepB() throws Exception {
    Entity entity = createTestEntity("KIND", 2);
    getService().put(entity);
    assertStoreContains(entity);
}

@InSequence(40)
@Test
@OperateOnDeployment("dep1")
public void getEntityOnDepA() throws Exception {
    waitForSync();
    Key key = KeyFactory.createKey("KIND", 1);
    Entity lookup = getService().get(key);
    Assert.assertNotNull(lookup);
    Entity entity = createTestEntity("KIND", 1);
    Assert.assertEquals(entity, lookup);
}

@InSequence(50)
@Test
@OperateOnDeployment("dep2")
public void getEntityOnDepB() throws Exception {
    waitForSync();
    Entity entity = createTestEntity("KIND", 1);
    assertStoreContains(entity);
}

@InSequence(52)
@Test
@OperateOnDeployment("dep1")
public void queryOnA() throws Exception {
    waitForSync();
    int count = getService().prepare(new Query("KIND")).countEntities(Builder.withDefaults());
    Assert.assertTrue("Number of entities: " + count, count == 2);
}

@InSequence(53)
@Test
@OperateOnDeployment("dep2")
public void queryOnB() throws Exception {
    waitForSync();
    int count = getService().prepare(new Query("KIND")).countEntities(Builder.withDefaults());
    Assert.assertTrue("Number of entities: " + count, count == 2);
}
Running testsuite with infinispan.unsafe.allow_jdk8_chm=true
by Galder Zamarreño
Hi all,
Re: https://issues.jboss.org/browse/ISPN-2237
We don't have a Jenkins job that runs with infinispan.unsafe.allow_jdk8_chm=true, but instead of creating a new one, I'd suggest switching the testsuite itself to run with infinispan.unsafe.allow_jdk8_chm=true.
The reason I suggest this is that we're confident things work as expected with the JDK's CHM, so running the default testsuite with infinispan.unsafe.allow_jdk8_chm=true would let us catch new failures without the need for another job.
Thoughts?
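As a sketch (not Infinispan's actual code), a switch like this is usually just a boolean system property read once at startup, so flipping the default testsuite over amounts to one extra -D flag on the surefire argLine:

```java
public class ChmFlag {
    public static void main(String[] args) {
        // Sketch only: how a boolean system-property switch of this kind is
        // typically consumed. Pass -Dinfinispan.unsafe.allow_jdk8_chm=true
        // on the JVM (or in surefire's argLine) to flip it; it defaults to false.
        boolean allowJdk8Chm = Boolean.getBoolean("infinispan.unsafe.allow_jdk8_chm");
        System.out.println("infinispan.unsafe.allow_jdk8_chm = " + allowJdk8Chm);
    }
}
```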
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
parked task?
by Ales Justin
Any idea why this thread is parked and never goes forward?
It then blocks all subsequent cache requests -- see the blocked thread below.
Could it be caused by the error at the bottom?
This only happens when there is already an existing CapeDwarf app deployed.
* e.g. (to reproduce) https://github.com/capedwarf/capedwarf-blue/blob/master/testsuite/src/tes...
The same test exposed ISPN-2232.
-Ales
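The two stacks below have the classic "parked while holding a monitor" shape: thread 1 parks in FutureTask.get() inside the synchronized InfinispanUtils.getCache(), so thread 2 can't even enter the monitor. A minimal reproducer of that shape (hypothetical names, with a timeout added so the demo terminates — the real get() in the dump has none, hence the permanent park):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class ParkedUnderLock {
    static final Object managerLock = new Object();  // stands in for the locked $Proxy46
    static final CompletableFuture<String> cacheStarted = new CompletableFuture<>();

    // Hypothetical analogue of getCache(): waits on a future while holding a monitor.
    static void getCache() {
        synchronized (managerLock) {
            try {
                cacheStarted.get(500, TimeUnit.MILLISECONDS);
            } catch (Exception expectedInThisDemo) {
                // the future is never completed; the demo relies on the timeout
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(ParkedUnderLock::getCache, "http-1");
        Thread t2 = new Thread(ParkedUnderLock::getCache, "http-2");
        t1.start();
        Thread.sleep(100);   // let http-1 grab the monitor and park on the future
        t2.start();
        Thread.sleep(200);   // let http-2 pile up on the monitor
        System.out.println("http-1: " + t1.getState());  // TIMED_WAITING (parked in get)
        System.out.println("http-2: " + t2.getState());  // BLOCKED (waiting for the monitor)
        t1.join();
        t2.join();
    }
}
```

If the "Unable to acquire lock on cache" errors below mean the cache start future is never completed, every later request would queue up behind that monitor exactly like this.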
---
"http-/192.168.1.101:8080-1" daemon prio=5 tid=0000000002c05000 nid=0xb1ec1000 waiting on condition [00000000b1ebf000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0000000005d3a340> (a java.util.concurrent.FutureTask$Sync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:969)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1281)
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
at java.util.concurrent.FutureTask.get(FutureTask.java:83)
at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueProcessor.applyWork(LuceneBackendQueueProcessor.java:97)
at org.hibernate.search.backend.impl.jgroups.JGroupsBackendQueueProcessor.applyWork(JGroupsBackendQueueProcessor.java:116)
at org.hibernate.search.indexes.impl.DirectoryBasedIndexManager.performOperations(DirectoryBasedIndexManager.java:127)
at org.hibernate.search.backend.impl.WorkQueuePerIndexSplitter.commitOperations(WorkQueuePerIndexSplitter.java:61)
at org.hibernate.search.backend.impl.BatchedQueueingProcessor.performWorks(BatchedQueueingProcessor.java:96)
at org.hibernate.search.backend.impl.PostTransactionWorkQueueSynchronization.afterCompletion(PostTransactionWorkQueueSynchronization.java:99)
at com.arjuna.ats.internal.jta.resources.arjunacore.SynchronizationImple.afterCompletion(SynchronizationImple.java:96)
at com.arjuna.ats.arjuna.coordinator.TwoPhaseCoordinator.afterCompletion(TwoPhaseCoordinator.java:402)
- locked <0000000005d3afc8> (a java.lang.Object)
at com.arjuna.ats.arjuna.coordinator.TwoPhaseCoordinator.end(TwoPhaseCoordinator.java:103)
at com.arjuna.ats.arjuna.AtomicAction.commit(AtomicAction.java:164)
at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.commitAndDisassociate(TransactionImple.java:1165)
at com.arjuna.ats.internal.jta.transaction.arjunacore.BaseTransaction.commit(BaseTransaction.java:117)
at com.arjuna.ats.jbossatx.BaseTransactionManagerDelegate.commit(BaseTransactionManagerDelegate.java:75)
at org.infinispan.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1013)
at org.infinispan.CacheImpl.putIfAbsent(CacheImpl.java:714)
at org.infinispan.DecoratedCache.putIfAbsent(DecoratedCache.java:184)
at org.infinispan.loaders.CacheLoaderManagerImpl.preload(CacheLoaderManagerImpl.java:223)
at sun.reflect.GeneratedMethodAccessor51.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.infinispan.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:236)
at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:891)
at org.infinispan.factories.AbstractComponentRegistry.invokeStartMethods(AbstractComponentRegistry.java:641)
at org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:630)
at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:533)
at org.infinispan.factories.ComponentRegistry.start(ComponentRegistry.java:174)
at org.infinispan.CacheImpl.start(CacheImpl.java:519)
at org.infinispan.AbstractDelegatingCache.start(AbstractDelegatingCache.java:343)
at org.jboss.capedwarf.common.infinispan.InfinispanUtils.checkCache(InfinispanUtils.java:72)
at org.jboss.capedwarf.common.infinispan.InfinispanUtils.getCache(InfinispanUtils.java:82)
- locked <0000000016847f08> (a $Proxy46)
at org.jboss.capedwarf.common.infinispan.InfinispanUtils.getCache(InfinispanUtils.java:191)
at org.jboss.capedwarf.datastore.AbstractDatastoreService.createStore(AbstractDatastoreService.java:66)
at org.jboss.capedwarf.datastore.AbstractDatastoreService.<init>(AbstractDatastoreService.java:60)
at org.jboss.capedwarf.datastore.JBossDatastoreService.<init>(JBossDatastoreService.java:56)
at com.google.appengine.api.datastore.DatastoreServiceFactory.getDatastoreService(DatastoreServiceFactory.java)
at org.jboss.capedwarf.log.JBossLogService.requestStarted(JBossLogService.java:215)
at org.jboss.capedwarf.appidentity.GAEListener.requestInitialized(GAEListener.java:91)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:143)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:372)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:877)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:679)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:931)
at java.lang.Thread.run(Thread.java:680)
"http-/192.168.1.101:8080-2" daemon prio=5 tid=0000000002bd2c00 nid=0xb1fc3000 waiting for monitor entry [00000000b1fc2000]
java.lang.Thread.State: BLOCKED (on object monitor)
at org.jboss.capedwarf.common.infinispan.InfinispanUtils.getCache(InfinispanUtils.java:81)
- waiting to lock <0000000016847f08> (a $Proxy46)
at org.jboss.capedwarf.common.infinispan.InfinispanUtils.getCache(InfinispanUtils.java:191)
at org.jboss.capedwarf.datastore.AbstractDatastoreService.createStore(AbstractDatastoreService.java:66)
at org.jboss.capedwarf.datastore.AbstractDatastoreService.<init>(AbstractDatastoreService.java:60)
at org.jboss.capedwarf.datastore.JBossDatastoreService.<init>(JBossDatastoreService.java:56)
at com.google.appengine.api.datastore.DatastoreServiceFactory.getDatastoreService(DatastoreServiceFactory.java)
at org.jboss.capedwarf.log.JBossLogService.requestStarted(JBossLogService.java:215)
at org.jboss.capedwarf.appidentity.GAEListener.requestInitialized(GAEListener.java:91)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:143)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:372)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:877)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:679)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:931)
at java.lang.Thread.run(Thread.java:680)
--
15:46:57,905 ERROR [stderr] (CacheStartThread,null,LuceneIndexesData) Exception in thread "CacheStartThread,null,LuceneIndexesData" org.infinispan.CacheException: Unable to acquire lock on cache with name LuceneIndexesData
15:46:57,905 ERROR [stderr] (CacheStartThread,null,LuceneIndexesData) at org.infinispan.manager.DefaultCacheManager.wireCache(DefaultCacheManager.java:680)
15:46:57,906 ERROR [stderr] (CacheStartThread,null,LuceneIndexesData) at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:654)
15:46:57,906 ERROR [stderr] (CacheStartThread,null,LuceneIndexesData) at org.infinispan.manager.DefaultCacheManager.access$100(DefaultCacheManager.java:127)
15:46:57,906 ERROR [stderr] (CacheStartThread,null,LuceneIndexesData) at org.infinispan.manager.DefaultCacheManager$1.run(DefaultCacheManager.java:580)
15:46:57,907 ERROR [stderr] (CacheStartThread,null,LuceneIndexesLocking) Exception in thread "CacheStartThread,null,LuceneIndexesLocking" org.infinispan.CacheException: Unable to acquire lock on cache with name LuceneIndexesLocking
15:46:57,907 ERROR [stderr] (CacheStartThread,null,LuceneIndexesLocking) at org.infinispan.manager.DefaultCacheManager.wireCache(DefaultCacheManager.java:680)
15:46:57,908 ERROR [stderr] (CacheStartThread,null,LuceneIndexesLocking) at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:654)
15:46:57,908 ERROR [stderr] (CacheStartThread,null,LuceneIndexesLocking) at org.infinispan.manager.DefaultCacheManager.access$100(DefaultCacheManager.java:127)
15:46:57,908 ERROR [stderr] (CacheStartThread,null,LuceneIndexesLocking) at org.infinispan.manager.DefaultCacheManager$1.run(DefaultCacheManager.java:580)
15:46:57,910 INFO [org.jboss.as.clustering.infinispan] (http-/192.168.1.101:8080-1) JBAS010281: Started LuceneIndexesMetadata cache from capedwarf container
15:46:57,912 INFO [org.jboss.as.clustering.infinispan] (http-/192.168.1.101:8080-1) JBAS010281: Started LuceneIndexesData cache from capedwarf container
15:46:57,913 INFO [org.jboss.as.clustering.infinispan] (http-/192.168.1.101:8080-1) JBAS010281: Started LuceneIndexesLocking cache from capedwarf container
15:46:57,914 INFO [org.hibernate.search.indexes.serialization.avro.impl.AvroSerializationProvider] (http-/192.168.1.101:8080-1) HSEARCH000079: Serialization protocol version 1.0
15:46:57,915 ERROR [stderr] (CacheStartThread,null,LuceneIndexesMetadata) Exception in thread "CacheStartThread,null,LuceneIndexesMetadata" org.infinispan.CacheException: Unable to acquire lock on cache with name LuceneIndexesMetadata
15:46:57,916 ERROR [stderr] (CacheStartThread,null,LuceneIndexesMetadata) at org.infinispan.manager.DefaultCacheManager.wireCache(DefaultCacheManager.java:680)
15:46:57,916 ERROR [stderr] (CacheStartThread,null,LuceneIndexesMetadata) at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:654)
15:46:57,916 ERROR [stderr] (CacheStartThread,null,LuceneIndexesMetadata) at org.infinispan.manager.DefaultCacheManager.access$100(DefaultCacheManager.java:127)
15:46:57,916 ERROR [stderr] (CacheStartThread,null,LuceneIndexesMetadata) at org.infinispan.manager.DefaultCacheManager$1.run(DefaultCacheManager.java:580)