[JBoss JIRA] (ISPN-3143) Cannot store custom objects via HotRod and read via REST
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-3143?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño commented on ISPN-3143:
----------------------------------------
This was the result of a test setup issue (the ServerBootstrap listener was not registered) and of wrong expectations in the REST read. What the cache contains is a deserialized Person instance, so the REST server returns a Java-serialized object. Pull req coming up...
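The consequence for clients is that the REST response body is the raw output of Java serialization, not text. A minimal standalone sketch of that round trip (the Person class and byte array here stand in for the real cache value and REST response body; they are not Infinispan code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Sketch: when the cache holds a deserialized object, the REST endpoint
// hands back the bytes produced by Java serialization, so the client must
// deserialize those bytes rather than treat them as a string.
public class RestPayloadSketch {
    static class Person implements Serializable {
        final String name;
        Person(String name) { this.name = name; }
    }

    // Stand-in for the bytes a client would receive in the REST response body.
    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(o);
        }
        return bytes.toByteArray();
    }

    static Object deserialize(byte[] body) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(body))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] body = serialize(new Person("Martin"));
        Person p = (Person) deserialize(body);
        System.out.println(p.name);
    }
}
```

Note the body must be read as raw bytes; converting it to a String and back would corrupt the serialized stream.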
> Cannot store custom objects via HotRod and read via REST
> --------------------------------------------------------
>
> Key: ISPN-3143
> URL: https://issues.jboss.org/browse/ISPN-3143
> Project: Infinispan
> Issue Type: Feature Request
> Affects Versions: 5.3.0.Beta2
> Reporter: Martin Gencur
> Assignee: Galder Zamarreño
> Fix For: 5.3.0.Final
>
>
> The following test fails when added to EmbeddedRestHotRodTest
> {code:java}
> public void testCustomObjectHotRodPutEmbeddedRestGet() throws Exception {
>    final String key = "4";
>    Person p = new Person("Martin");
>    // 1. Put with Hot Rod
>    RemoteCache<String, Object> remote = cacheFactory.getHotRodCache();
>    assertEquals(null, remote.withFlags(Flag.FORCE_RETURN_VALUE).put(key, p));
>    // 2. Get with Embedded
>    assertTrue(cacheFactory.getEmbeddedCache().get(key) instanceof Person);
>    assertEquals(p.getName(), ((Person) cacheFactory.getEmbeddedCache().get(key)).getName());
>    // 3. Get with REST
>    HttpMethod get = new GetMethod(cacheFactory.getRestUrl() + "/" + key);
>    cacheFactory.getRestClient().executeMethod(get);
>    assertEquals(HttpServletResponse.SC_OK, get.getStatusCode());
>    //^^^ fails here - status code 500, status text: NullPointerException
>    // Read the body as raw bytes; getResponseBodyAsString().getBytes() would corrupt the binary payload
>    Object returnedPerson = remote.getRemoteCacheManager().getMarshaller().objectFromByteBuffer(get.getResponseBody());
>    assertTrue(returnedPerson instanceof Person);
>    assertEquals(p.getName(), ((Person) returnedPerson).getName());
> }
>
> public static class Person implements Serializable {
>    private String name;
>
>    public Person(String name) {
>       this.name = name;
>    }
>
>    public String getName() {
>       return name;
>    }
> }
> {code}
> Storing and retrieving String keys works correctly.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
12 years, 10 months
[JBoss JIRA] (ISPN-3136) Argument type mismatch in RHQ for Amend XSite operations
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-3136?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-3136:
--------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
Integrated in master. Thanks!
> Argument type mismatch in RHQ for Amend XSite operations
> ---------------------------------------------------------
>
> Key: ISPN-3136
> URL: https://issues.jboss.org/browse/ISPN-3136
> Project: Infinispan
> Issue Type: Bug
> Reporter: Tomas Sykora
> Assignee: Tomas Sykora
> Fix For: 5.3.0.CR1, 5.3.0.Final
>
>
> None of the 3 amend operations for XSite can be invoked properly, because of an argument type mismatch exception.
> We need to specify the type property for the affected attributes in rhq-plugin.xml. This means slightly changing the Parameter class in the jmx annotations, as well as the descriptions of the XSite operations in the XSiteAdminOperations class.
> Basically, we need to add type="integer" or type="long", as appropriate, for some fields.
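A sketch of the kind of declaration the fix needs in rhq-plugin.xml (the operation and parameter names below are hypothetical, not the actual XSite operation signatures; the point is the explicit type attribute, so RHQ passes Integer/Long arguments instead of Strings):

```xml
<!-- Sketch only: names are illustrative. The c: prefix refers to the
     RHQ configuration schema used by plugin descriptors. -->
<operation name="amendTakeOffline" displayName="Amend take offline configuration">
   <parameters>
      <c:simple-property name="afterFailures" type="integer" required="true"
                         description="Number of failures after which the site is taken offline"/>
      <c:simple-property name="minTimeToWait" type="long" required="true"
                         description="Minimum time to wait before taking the site offline"/>
   </parameters>
</operation>
```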
[JBoss JIRA] (ISPN-2786) ThreadLocal memory leak in Tomcat
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-2786?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-2786:
------------------------------------
The thread local in AbstractJBossMarshaller is not static, so I think it can only leak if some reference is keeping an AbstractJBossMarshaller instance alive.
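The distinction matters because a static ThreadLocal stays reachable as long as its defining class does, while a non-static one becomes collectable together with its owner. A minimal sketch of the defensive pattern (class and method names here are hypothetical, not Infinispan's actual implementation): clear the current thread's entry when the owner shuts down, so stale thread-local map entries do not pin the web-app classloader.

```java
// Sketch: a non-static ThreadLocal is reachable only through its owning
// instance, and the owner can clear it explicitly on shutdown.
public class MarshallerSketch {
    // Non-static: collected together with this MarshallerSketch instance.
    private final ThreadLocal<StringBuilder> perThreadBuffer =
            ThreadLocal.withInitial(StringBuilder::new);

    public String marshal(String value) {
        StringBuilder buf = perThreadBuffer.get();
        buf.setLength(0); // reuse the per-thread buffer across calls
        return buf.append("<").append(value).append(">").toString();
    }

    // Defensive cleanup: drop the current thread's entry so long-lived
    // container threads (e.g. Tomcat's pool) hold no stale references.
    public void stop() {
        perThreadBuffer.remove();
    }

    public static void main(String[] args) {
        MarshallerSketch m = new MarshallerSketch();
        System.out.println(m.marshal("x"));
        m.stop();
    }
}
```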
> ThreadLocal memory leak in Tomcat
> ---------------------------------
>
> Key: ISPN-2786
> URL: https://issues.jboss.org/browse/ISPN-2786
> Project: Infinispan
> Issue Type: Bug
> Components: Marshalling, Transactions
> Affects Versions: 5.1.8.Final
> Reporter: Johann Burkard
> Assignee: Galder Zamarreño
> Labels: leak, local, memory, thread, threadlocal
> Fix For: 5.3.0.Final
>
>
> Just started an app using Infinispan 5.1.8.Final on Tomcat and got a few ThreadLocal problems during un-deployment:
> (Shortened)
> {code}
> key=org.jboss.marshalling.UTFUtils.BytesHolder
> value=org.jboss.marshalling.UTFUtils$BytesHolder@697a1686
> key=java.lang.ThreadLocal@36ed5ba6
> value=org.infinispan.context.SingleKeyNonTxInvocationContext{flags=null}
> key=org.infinispan.marshall.jboss.AbstractJBossMarshaller$1
> value=org.infinispan.marshall.jboss.AbstractJBossMarshaller$1@75f10df7
> value=org.infinispan.marshall.jboss.AbstractJBossMarshaller.PerThreadInstanceHolder
> {code}
> I do call {{DefaultCacheManager#shutdown()}} during un-deployment. :)
> Thanks
[JBoss JIRA] (ISPN-3145) DataRehashedEventTest intermittent failures
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-3145?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-3145:
-------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/1849
> DataRehashedEventTest intermittent failures
> -------------------------------------------
>
> Key: ISPN-3145
> URL: https://issues.jboss.org/browse/ISPN-3145
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite
> Affects Versions: 5.3.0.Beta2
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 5.3.0.CR1
>
>
> DataRehashedEventTest sometimes fails because of ISPN-3035.
> But other times it fails because it does not wait long enough for the rehash event listener to be invoked:
> {noformat}
> 2013-05-28 11:21:36,550 TRACE (asyncTransportThread-2,NodeA) [org.infinispan.statetransfer.DataRehashedEventTest] New event received: EventImpl{type=DATA_REHASHED, pre=true, cache=Cache '___defaultcache'@NodeA-19135, consistentHashAtStart=DefaultConsistentHash{numSegments=60, numOwners=2, members=[NodeA-19135]}, consistentHashAtEnd=DefaultConsistentHash{numSegments=60, numOwners=2, members=[NodeA-19135, NodeB-23459]}, newTopologyId=1}
> 2013-05-28 11:21:36,661 TRACE (testng-DataRehashedEventTest) [org.infinispan.test.TestingUtil] Node NodeA-19135 finished state transfer.
> 2013-05-28 11:21:36,661 TRACE (testng-DataRehashedEventTest) [org.infinispan.test.TestingUtil] Node NodeB-23459 finished state transfer.
> 2013-05-28 11:21:36,662 ERROR (testng-DataRehashedEventTest) [org.infinispan.test.fwk.UnitTestTestNGListener] Test testJoinAndLeave(org.infinispan.statetransfer.DataRehashedEventTest) failed.
> java.lang.AssertionError: expected [2] but found [1]
> at org.testng.Assert.fail(Assert.java:94)
> at org.testng.Assert.failNotEquals(Assert.java:494)
> at org.testng.Assert.assertEquals(Assert.java:123)
> at org.testng.Assert.assertEquals(Assert.java:370)
> at org.testng.Assert.assertEquals(Assert.java:380)
> at org.infinispan.statetransfer.DataRehashedEventTest.testJoinAndLeave(DataRehashedEventTest.java:73)
> ...
> 2013-05-28 11:21:36,670 TRACE (asyncTransportThread-4,NodeA) [org.infinispan.statetransfer.DataRehashedEventTest] New event received: EventImpl{type=DATA_REHASHED, pre=false, cache=Cache '___defaultcache'@NodeA-19135, consistentHashAtStart=DefaultConsistentHash{numSegments=60, numOwners=2, members=[NodeA-19135]}, consistentHashAtEnd=DefaultConsistentHash{numSegments=60, numOwners=2, members=[NodeA-19135, NodeB-23459]}, newTopologyId=2}
> {noformat}
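A common fix for this kind of intermittent failure is to have the listener count events on a latch and block on it (with a timeout) before asserting, instead of asserting immediately after the topology change. A minimal sketch with a hypothetical listener, not the actual DataRehashedEventTest code:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: wait for the expected number of async events with a timeout,
// so slow delivery (e.g. on a loaded CI machine) does not fail the test.
public class EventWaitSketch {
    static class RehashListener {
        final CountDownLatch latch;
        final AtomicInteger events = new AtomicInteger();

        RehashListener(int expected) { latch = new CountDownLatch(expected); }

        // Called from the notification threads (pre=true and pre=false events).
        void onDataRehashed() {
            events.incrementAndGet();
            latch.countDown();
        }

        boolean awaitEvents(long timeout, TimeUnit unit) throws InterruptedException {
            return latch.await(timeout, unit);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        RehashListener listener = new RehashListener(2);
        // Simulate the two notifications arriving asynchronously.
        new Thread(listener::onDataRehashed).start();
        new Thread(listener::onDataRehashed).start();
        boolean arrived = listener.awaitEvents(10, TimeUnit.SECONDS);
        System.out.println(arrived + " " + listener.events.get());
    }
}
```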
[JBoss JIRA] (ISPN-3136) Argument type mismatch in RHQ for Amend XSite operations
by Tomas Sykora (JIRA)
[ https://issues.jboss.org/browse/ISPN-3136?page=com.atlassian.jira.plugin.... ]
Tomas Sykora reopened ISPN-3136:
--------------------------------
Need to fix the SimpleCacheRecoveryAdminTest tests.
I didn't take into account that ResourceDMBean is not invoked only from JON. From our tests (for example), the signatures are already in the right format, and so are the args objects.
We need another condition for checking this in the invoke method.
> Argument type mismatch in RHQ for Amend XSite operations
> ---------------------------------------------------------
>
> Key: ISPN-3136
> URL: https://issues.jboss.org/browse/ISPN-3136
> Project: Infinispan
> Issue Type: Bug
> Reporter: Tomas Sykora
> Assignee: Tomas Sykora
> Fix For: 5.3.0.CR1, 5.3.0.Final
>
>
> None of the 3 amend operations for XSite can be invoked properly, because of an argument type mismatch exception.
> We need to specify the type property for the affected attributes in rhq-plugin.xml. This means slightly changing the Parameter class in the jmx annotations, as well as the descriptions of the XSite operations in the XSiteAdminOperations class.
> Basically, we need to add type="integer" or type="long", as appropriate, for some fields.
[JBoss JIRA] (ISPN-2891) Gap in time between commit of transaction and actual value update
by Jim Crossley (JIRA)
[ https://issues.jboss.org/browse/ISPN-2891?page=com.atlassian.jira.plugin.... ]
Jim Crossley commented on ISPN-2891:
------------------------------------
Galder, it is using XA transactions. The mode we're using is "FULL_XA". Any other ideas?
> Gap in time between commit of transaction and actual value update
> -----------------------------------------------------------------
>
> Key: ISPN-2891
> URL: https://issues.jboss.org/browse/ISPN-2891
> Project: Infinispan
> Issue Type: Bug
> Components: Transactions
> Affects Versions: 5.2.1.Final
> Reporter: Jim Crossley
> Assignee: Mircea Markus
> Labels: 5.2.x
> Fix For: 5.3.0.CR1, 5.3.0.Final
>
> Attachments: bad.log, good.log, pessimistic.log, ugly.log
>
>
> Since upgrading our AS7.2 dependency in Immutant (transitively pulling in 5.2.1.Final), one of our integration tests has begun failing intermittently on our CI server. We've yet to see the failure in local runs, only on CI, so I suspect there's a race condition involved.
> The two tests (one for optimistic locking, the other for pessimistic) integrate an Infinispan cache (on which the Immutant cache is built) with HornetQ and XA transactions. A number of queue listeners respond to messages by attempting to increment a value in the cache. The failure occurs with both locking schemes, but much more often with optimistic.
> We've confirmed the failure on 5.2.2 as well.
> Attached you'll find three traces of the optimistic test: the good, the bad, and the ugly. All three correspond to this test: https://github.com/immutant/immutant/blob/31a2ef6222088ccb828898e9e3e4531...
> so you can correlate the log messages prefixed with "JC:" in the traces with the code. Note in particular the last two lines in locking.clj: a logged message containing the count, and then an assertion on the count. Note that the "bad" trace was an actual failing test, but the "ugly" trace was a successful test, even though the trace clearly shows the count logged as 2, not 3. The Infinispan TRACE output clearly shows the value as 3, hence the ugliness of this test.
> It's important to understand that the "work" function occurs within an XA transaction. This means, as I understand it, that if three messages are published to "/queue/done", the cached count should equal 3. Line #44 in locking.clj will block until it receives 3 messages, after which the cached count should be 3.
> These tests always pass locally. They only ever fail on CI, which runs *very* slowly.
[JBoss JIRA] (ISPN-1362) Reduce the number of files a FileCacheStore creates
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-1362?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero updated ISPN-1362:
----------------------------------
Issue Type: Enhancement (was: Bug)
> Reduce the number of files a FileCacheStore creates
> ---------------------------------------------------
>
> Key: ISPN-1362
> URL: https://issues.jboss.org/browse/ISPN-1362
> Project: Infinispan
> Issue Type: Enhancement
> Components: Loaders and Stores
> Affects Versions: 5.0.0.FINAL, 5.1.0.FINAL, 5.1.1.FINAL
> Environment: OS: Mac OSX 10.6
> IDE: IntelliJ IDEA 10
> Java: 1.6.0_26
> Hibernate Search: 3.4.0.Final
> Lucene: 3.1.0
> jGroups: 2.12.1.3.Final
> Reporter: Todd Underwood
> Assignee: Manik Surtani
> Labels: persistence
> Fix For: 6.0.0.Final
>
>
> It seems that after ISPN-1300 the FileCacheStore is limited to _only_ approximately 4 million files; this is still too many, as the original issue description reports:
> When trying to initialize my index for Hibernate Search with persistence, I get the following exception after several hours of indexing:
> [2011-08-29 11:30:53,425] ERROR FileCacheStore.java:317 Hibernate Search: indexwriter-154 ) ISPN000063: Exception while saving bucket Bucket{entries={_4o.fdt|M|cnwk.foreman.model.SoftwareDownload=ImmortalCacheEntry{key=_4o.fdt|M|cnwk.foreman.model.SoftwareDownload, value=ImmortalCacheValue{value=FileMetadata{lastModified=1314642653425, size=32768}}}}, bucketId='1509281792'}
> java.io.FileNotFoundException: /var/opt/fullTextStore/LuceneIndexesMetadata/1509281792 (Too many open files)
> at java.io.RandomAccessFile.open(Native Method)
> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> at org.infinispan.loaders.file.FileCacheStore$BufferedFileSync.createChannel(FileCacheStore.java:494)
> at org.infinispan.loaders.file.FileCacheStore$BufferedFileSync.write(FileCacheStore.java:472)
> at org.infinispan.loaders.file.FileCacheStore.updateBucket(FileCacheStore.java:315)
> at org.infinispan.loaders.bucket.BucketBasedCacheStore.insertBucket(BucketBasedCacheStore.java:137)
> at org.infinispan.loaders.bucket.BucketBasedCacheStore.storeLockSafe(BucketBasedCacheStore.java:94)
> at org.infinispan.loaders.bucket.BucketBasedCacheStore.storeLockSafe(BucketBasedCacheStore.java:49)
> at org.infinispan.loaders.LockSupportCacheStore.store(LockSupportCacheStore.java:195)
> at org.infinispan.interceptors.CacheStoreInterceptor.visitPutKeyValueCommand(CacheStoreInterceptor.java:210)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
> at org.infinispan.interceptors.CacheLoaderInterceptor.visitPutKeyValueCommand(CacheLoaderInterceptor.java:82)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
> at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:133)
> at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
> at org.infinispan.interceptors.TxInterceptor.enlistWriteAndInvokeNext(TxInterceptor.java:214)
> at org.infinispan.interceptors.TxInterceptor.visitPutKeyValueCommand(TxInterceptor.java:162)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
> at org.infinispan.interceptors.CacheMgmtInterceptor.visitPutKeyValueCommand(CacheMgmtInterceptor.java:114)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
> at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:104)
> at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:64)
> at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
> at org.infinispan.interceptors.BatchingInterceptor.handleDefault(BatchingInterceptor.java:77)
> at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
> at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:274)
> at org.infinispan.CacheImpl.put(CacheImpl.java:515)
> at org.infinispan.CacheSupport.put(CacheSupport.java:51)
> at org.infinispan.lucene.InfinispanIndexOutput.close(InfinispanIndexOutput.java:206)
> at org.apache.lucene.util.IOUtils.closeSafely(IOUtils.java:80)
> at org.apache.lucene.index.FieldsWriter.close(FieldsWriter.java:111)
> at org.apache.lucene.index.FieldsWriter.abort(FieldsWriter.java:121)
> at org.apache.lucene.index.StoredFieldsWriter.abort(StoredFieldsWriter.java:90)
> at org.apache.lucene.index.DocFieldProcessor.abort(DocFieldProcessor.java:71)
> at org.apache.lucene.index.DocumentsWriter.abort(DocumentsWriter.java:421)
> at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:729)
> at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:2042)
> at org.hibernate.search.backend.impl.lucene.works.AddWorkDelegate.performWork(AddWorkDelegate.java:76)
> at org.hibernate.search.backend.impl.batchlucene.DirectoryProviderWorkspace.doWorkInSync(DirectoryProviderWorkspace.java:96)
> at org.hibernate.search.backend.impl.batchlucene.DirectoryProviderWorkspace$AsyncIndexRunnable.run(DirectoryProviderWorkspace.java:144)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:680)
> The open file limit on my machine has already been increased to try to fix the issue.
> This is the configuration used when the exception is thrown:
> <?xml version="1.0" encoding="UTF-8"?>
> <infinispan
> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
> xsi:schemaLocation="urn:infinispan:config:4.2 http://www.infinispan.org/schemas/infinispan-config-4.2.xsd"
> xmlns="urn:infinispan:config:4.2">
>
>
> <!-- *************************** -->
> <!-- System-wide global settings -->
> <!-- *************************** -->
>
>
> <global>
>
>
> <!-- Duplicate domains are allowed so that multiple deployments with default configuration
> of Hibernate Search applications work - if possible it would be better to use JNDI to share
> the CacheManager across applications -->
> <globalJmxStatistics
> enabled="true"
> cacheManagerName="HibernateSearch"
> allowDuplicateDomains="true"/>
>
>
> <!-- If the transport is omitted, there is no way to create distributed or clustered
> caches. There is no added cost to defining a transport but not creating a cache that uses one,
> since the transport is created and initialized lazily. -->
> <transport
> clusterName="HibernateSearch-Infinispan-cluster"
> distributedSyncTimeout="50000">
> <!-- Note that the JGroups transport uses sensible defaults if no configuration
> property is defined. See the JGroupsTransport javadocs for more flags -->
> </transport>
>
>
> <!-- Used to register JVM shutdown hooks. hookBehavior: DEFAULT, REGISTER, DONT_REGISTER.
> Hibernate Search takes care to stop the CacheManager so registering is not needed -->
> <shutdown
> hookBehavior="DONT_REGISTER"/>
>
>
> </global>
>
>
> <!-- *************************** -->
> <!-- Default "template" settings -->
> <!-- *************************** -->
>
>
> <default>
>
>
> <locking
> lockAcquisitionTimeout="20000"
> writeSkewCheck="false"
> concurrencyLevel="500"
> useLockStriping="false"/>
>
>
> <lazyDeserialization
> enabled="false"/>
>
>
> <!-- Invocation batching is required for use with the Lucene Directory -->
> <invocationBatching
> enabled="true"/>
>
>
> <!-- This element specifies that the cache is clustered. modes supported: distribution
> (d), replication (r) or invalidation (i). Don't use invalidation to store Lucene indexes (as
> with Hibernate Search DirectoryProvider). Replication is recommended for best performance of
> Lucene indexes, but make sure you have enough memory to store the index in your heap.
> Also distribution scales much better than replication on high number of nodes in the cluster. -->
> <clustering
> mode="replication">
>
>
> <!-- Prefer loading all data at startup than later -->
> <stateRetrieval
> timeout="60000"
> logFlushTimeout="30000"
> fetchInMemoryState="true"
> alwaysProvideInMemoryState="true"/>
>
>
> <!-- Network calls are synchronous by default -->
> <sync
> replTimeout="20000"/>
> </clustering>
>
>
> <jmxStatistics
> enabled="true"/>
>
>
> <eviction
> maxEntries="-1"
> strategy="NONE"/>
>
>
> <expiration
> maxIdle="-1"/>
>
>
> </default>
>
>
> <!-- ******************************************************************************* -->
> <!-- Individually configured "named" caches. -->
> <!-- -->
> <!-- While default configuration happens to be fine with similar settings across the -->
> <!-- three caches, they should generally be different in a production environment. -->
> <!-- -->
> <!-- Current settings could easily lead to OutOfMemory exception as a CacheStore -->
> <!-- should be enabled, and maybe distribution is desired. -->
> <!-- ******************************************************************************* -->
>
>
> <!-- *************************************** -->
> <!-- Cache to store Lucene's file metadata -->
> <!-- *************************************** -->
> <namedCache name="LuceneIndexesMetadata">
>
>
> <clustering mode="replication">
> <stateRetrieval
> fetchInMemoryState="true"
> logFlushTimeout="30000"/>
> <sync replTimeout="25000"/>
> </clustering>
> <loaders preload="true">
> <loader class="org.infinispan.loaders.file.FileCacheStore" fetchPersistentState="true">
> <properties>
> <property name="location" value="/var/opt/fullTextStore"/>
> </properties>
> </loader>
> </loaders>
> </namedCache>
>
>
> <!-- **************************** -->
> <!-- Cache to store Lucene data -->
> <!-- **************************** -->
> <namedCache name="LuceneIndexesData">
>
>
> <clustering mode="replication">
> <stateRetrieval
> fetchInMemoryState="true"
> logFlushTimeout="30000"/>
> <sync
> replTimeout="25000"/>
> </clustering>
> <loaders>
> <loader class="org.infinispan.loaders.file.FileCacheStore" fetchPersistentState="true">
> <properties>
> <property name="location" value="/var/opt/fullTextStore"/>
> </properties>
> </loader>
> </loaders>
> </namedCache>
>
>
> <!-- ***************************** -->
> <!-- Cache to store Lucene locks -->
> <!-- ***************************** -->
> <namedCache
> name="LuceneIndexesLocking">
> <clustering
> mode="replication">
> <stateRetrieval
> fetchInMemoryState="true"
> logFlushTimeout="30000"/>
> <sync
> replTimeout="25000"/>
> </clustering>
> </namedCache>
>
>
> </infinispan>
> There are 10160 open files in the cache store when the exception is thrown and a total of 10178 files visible in the cache store.
> Submitting this so the issue can be tracked, as suggested on the Hibernate Search forums.
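The bucket-per-file layout behind the file counts above can be sketched as follows (a hypothetical simplification of the FileCacheStore scheme, where each bucket id, derived from the key, maps to one file under the store location, so the number of files grows with the number of distinct bucket ids):

```java
import java.io.File;

// Hypothetical simplification: one file per bucket, bucket chosen from the
// key's hash. With millions of possible bucket ids, a large cache can create
// millions of files and exhaust the OS open-file limit ("Too many open files").
public class BucketPathSketch {
    static File bucketFile(File storeLocation, Object key, int numBuckets) {
        // Spread keys over numBuckets files; the real store derives the
        // bucket id from the key's hash in a similar spirit.
        int bucketId = (key.hashCode() & Integer.MAX_VALUE) % numBuckets;
        return new File(storeLocation, String.valueOf(bucketId));
    }

    public static void main(String[] args) {
        File f = bucketFile(new File("/var/opt/fullTextStore/LuceneIndexesMetadata"),
                            "_4o.fdt|M|example", 4_000_000);
        // The bucket file lives directly under the configured store location.
        System.out.println(f.getPath().startsWith("/var/opt/fullTextStore"));
    }
}
```

Reducing the number of distinct bucket ids (or replacing the bucket-per-file design entirely) is what bounds the file count.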