[JBoss JIRA] (ISPN-1362) Reduce the number of files a FileCacheStore creates
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-1362?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-1362:
--------------------------------
Priority: Minor (was: Major)
> Reduce the number of files a FileCacheStore creates
> ---------------------------------------------------
>
> Key: ISPN-1362
> URL: https://issues.jboss.org/browse/ISPN-1362
> Project: Infinispan
> Issue Type: Enhancement
> Components: Loaders and Stores
> Affects Versions: 5.0.0.FINAL, 5.1.0.FINAL, 5.1.1.FINAL
> Environment: OS: Mac OS X 10.6
> IDE: IntelliJ IDEA 10
> Java: 1.6.0_26
> Hibernate Search: 3.4.0.Final
> Lucene: 3.1.0
> jGroups: 2.12.1.3.Final
> Reporter: Todd Underwood
> Assignee: Manik Surtani
> Priority: Minor
> Labels: persistence
> Fix For: 6.0.0.Final
>
>
> It seems that after ISPN-1300 the FileCacheStore is limited to _only_ approximately 4 million files; this is still too many, as the original issue description reports:
> When trying to initialize my index for Hibernate Search with persistence, I get the following exception after several hours of indexing:
> [2011-08-29 11:30:53,425] ERROR FileCacheStore.java:317 Hibernate Search: indexwriter-154 ) ISPN000063: Exception while saving bucket Bucket{entries={_4o.fdt|M|cnwk.foreman.model.SoftwareDownload=ImmortalCacheEntry{key=_4o.fdt|M|cnwk.foreman.model.SoftwareDownload, value=ImmortalCacheValue{value=FileMetadata{lastModified=1314642653425, size=32768}}}}, bucketId='1509281792'}
> java.io.FileNotFoundException: /var/opt/fullTextStore/LuceneIndexesMetadata/1509281792 (Too many open files)
> at java.io.RandomAccessFile.open(Native Method)
> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> at org.infinispan.loaders.file.FileCacheStore$BufferedFileSync.createChannel(FileCacheStore.java:494)
> at org.infinispan.loaders.file.FileCacheStore$BufferedFileSync.write(FileCacheStore.java:472)
> at org.infinispan.loaders.file.FileCacheStore.updateBucket(FileCacheStore.java:315)
> at org.infinispan.loaders.bucket.BucketBasedCacheStore.insertBucket(BucketBasedCacheStore.java:137)
> at org.infinispan.loaders.bucket.BucketBasedCacheStore.storeLockSafe(BucketBasedCacheStore.java:94)
> at org.infinispan.loaders.bucket.BucketBasedCacheStore.storeLockSafe(BucketBasedCacheStore.java:49)
> at org.infinispan.loaders.LockSupportCacheStore.store(LockSupportCacheStore.java:195)
> at org.infinispan.interceptors.CacheStoreInterceptor.visitPutKeyValueCommand(CacheStoreInterceptor.java:210)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
> at org.infinispan.interceptors.CacheLoaderInterceptor.visitPutKeyValueCommand(CacheLoaderInterceptor.java:82)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
> at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:133)
> at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
> at org.infinispan.interceptors.TxInterceptor.enlistWriteAndInvokeNext(TxInterceptor.java:214)
> at org.infinispan.interceptors.TxInterceptor.visitPutKeyValueCommand(TxInterceptor.java:162)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
> at org.infinispan.interceptors.CacheMgmtInterceptor.visitPutKeyValueCommand(CacheMgmtInterceptor.java:114)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
> at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:104)
> at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:64)
> at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
> at org.infinispan.interceptors.BatchingInterceptor.handleDefault(BatchingInterceptor.java:77)
> at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
> at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:274)
> at org.infinispan.CacheImpl.put(CacheImpl.java:515)
> at org.infinispan.CacheSupport.put(CacheSupport.java:51)
> at org.infinispan.lucene.InfinispanIndexOutput.close(InfinispanIndexOutput.java:206)
> at org.apache.lucene.util.IOUtils.closeSafely(IOUtils.java:80)
> at org.apache.lucene.index.FieldsWriter.close(FieldsWriter.java:111)
> at org.apache.lucene.index.FieldsWriter.abort(FieldsWriter.java:121)
> at org.apache.lucene.index.StoredFieldsWriter.abort(StoredFieldsWriter.java:90)
> at org.apache.lucene.index.DocFieldProcessor.abort(DocFieldProcessor.java:71)
> at org.apache.lucene.index.DocumentsWriter.abort(DocumentsWriter.java:421)
> at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:729)
> at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:2042)
> at org.hibernate.search.backend.impl.lucene.works.AddWorkDelegate.performWork(AddWorkDelegate.java:76)
> at org.hibernate.search.backend.impl.batchlucene.DirectoryProviderWorkspace.doWorkInSync(DirectoryProviderWorkspace.java:96)
> at org.hibernate.search.backend.impl.batchlucene.DirectoryProviderWorkspace$AsyncIndexRunnable.run(DirectoryProviderWorkspace.java:144)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:680)
> The open file limit on my machine has already been increased to try to fix the issue.
> This is the configuration used when the exception is thrown:
> <?xml version="1.0" encoding="UTF-8"?>
> <infinispan
> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
> xsi:schemaLocation="urn:infinispan:config:4.2 http://www.infinispan.org/schemas/infinispan-config-4.2.xsd"
> xmlns="urn:infinispan:config:4.2">
>
>
> <!-- *************************** -->
> <!-- System-wide global settings -->
> <!-- *************************** -->
>
>
> <global>
>
>
> <!-- Duplicate domains are allowed so that multiple deployments with default configuration
> of Hibernate Search applications work - if possible it would be better to use JNDI to share
> the CacheManager across applications -->
> <globalJmxStatistics
> enabled="true"
> cacheManagerName="HibernateSearch"
> allowDuplicateDomains="true"/>
>
>
> <!-- If the transport is omitted, there is no way to create distributed or clustered
> caches. There is no added cost to defining a transport but not creating a cache that uses one,
> since the transport is created and initialized lazily. -->
> <transport
> clusterName="HibernateSearch-Infinispan-cluster"
> distributedSyncTimeout="50000">
> <!-- Note that the JGroups transport uses sensible defaults if no configuration
> property is defined. See the JGroupsTransport javadocs for more flags -->
> </transport>
>
>
> <!-- Used to register JVM shutdown hooks. hookBehavior: DEFAULT, REGISTER, DONT_REGISTER.
> Hibernate Search takes care to stop the CacheManager so registering is not needed -->
> <shutdown
> hookBehavior="DONT_REGISTER"/>
>
>
> </global>
>
>
> <!-- *************************** -->
> <!-- Default "template" settings -->
> <!-- *************************** -->
>
>
> <default>
>
>
> <locking
> lockAcquisitionTimeout="20000"
> writeSkewCheck="false"
> concurrencyLevel="500"
> useLockStriping="false"/>
>
>
> <lazyDeserialization
> enabled="false"/>
>
>
> <!-- Invocation batching is required for use with the Lucene Directory -->
> <invocationBatching
> enabled="true"/>
>
>
> <!-- This element specifies that the cache is clustered. modes supported: distribution
> (d), replication (r) or invalidation (i). Don't use invalidation to store Lucene indexes (as
> with Hibernate Search DirectoryProvider). Replication is recommended for best performance of
> Lucene indexes, but make sure you have enough memory to store the index in your heap.
> Also distribution scales much better than replication on high number of nodes in the cluster. -->
> <clustering
> mode="replication">
>
>
> <!-- Prefer loading all data at startup than later -->
> <stateRetrieval
> timeout="60000"
> logFlushTimeout="30000"
> fetchInMemoryState="true"
> alwaysProvideInMemoryState="true"/>
>
>
> <!-- Network calls are synchronous by default -->
> <sync
> replTimeout="20000"/>
> </clustering>
>
>
> <jmxStatistics
> enabled="true"/>
>
>
> <eviction
> maxEntries="-1"
> strategy="NONE"/>
>
>
> <expiration
> maxIdle="-1"/>
>
>
> </default>
>
>
> <!-- ******************************************************************************* -->
> <!-- Individually configured "named" caches. -->
> <!-- -->
> <!-- While default configuration happens to be fine with similar settings across the -->
> <!-- three caches, they should generally be different in a production environment. -->
> <!-- -->
> <!-- Current settings could easily lead to OutOfMemory exception as a CacheStore -->
> <!-- should be enabled, and maybe distribution is desired. -->
> <!-- ******************************************************************************* -->
>
>
> <!-- *************************************** -->
> <!-- Cache to store Lucene's file metadata -->
> <!-- *************************************** -->
> <namedCache name="LuceneIndexesMetadata">
>
>
> <clustering mode="replication">
> <stateRetrieval
> fetchInMemoryState="true"
> logFlushTimeout="30000"/>
> <sync replTimeout="25000"/>
> </clustering>
> <loaders preload="true">
> <loader class="org.infinispan.loaders.file.FileCacheStore" fetchPersistentState="true">
> <properties>
> <property name="location" value="/var/opt/fullTextStore"/>
> </properties>
> </loader>
> </loaders>
> </namedCache>
>
>
> <!-- **************************** -->
> <!-- Cache to store Lucene data -->
> <!-- **************************** -->
> <namedCache name="LuceneIndexesData">
>
>
> <clustering mode="replication">
> <stateRetrieval
> fetchInMemoryState="true"
> logFlushTimeout="30000"/>
> <sync
> replTimeout="25000"/>
> </clustering>
> <loaders>
> <loader class="org.infinispan.loaders.file.FileCacheStore" fetchPersistentState="true">
> <properties>
> <property name="location" value="/var/opt/fullTextStore"/>
> </properties>
> </loader>
> </loaders>
> </namedCache>
>
>
> <!-- ***************************** -->
> <!-- Cache to store Lucene locks -->
> <!-- ***************************** -->
> <namedCache
> name="LuceneIndexesLocking">
> <clustering
> mode="replication">
> <stateRetrieval
> fetchInMemoryState="true"
> logFlushTimeout="30000"/>
> <sync
> replTimeout="25000"/>
> </clustering>
> </namedCache>
>
>
> </infinispan>
> There are 10160 open files in the cache store when the exception is thrown and a total of 10178 files visible in the cache store.
> Submitting this so the issue can be tracked after being suggested to do so on Hibernate Search Forums.
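The enhancement being tracked is, in essence, to stop mapping every bucket id to its own file. A minimal illustrative sketch of the idea, folding bucket ids into a fixed number of files, is below. The class and method names are hypothetical, not Infinispan's actual implementation:

```java
import java.io.File;

// Hypothetical sketch: instead of one file per bucket id (up to ~4M files),
// fold bucket ids into a fixed, configurable number of files.
public class BucketFileMapper {
    private final File root;
    private final int maxFiles; // must be a power of two for the mask below

    public BucketFileMapper(File root, int maxFiles) {
        if (Integer.bitCount(maxFiles) != 1)
            throw new IllegalArgumentException("maxFiles must be a power of two");
        this.root = root;
        this.maxFiles = maxFiles;
    }

    // Many bucket ids share one file, so the store never creates more
    // than maxFiles files (at the cost of larger, shared files).
    public File fileFor(int bucketId) {
        int slot = bucketId & (maxFiles - 1);
        return new File(root, String.valueOf(slot));
    }
}
```

With this shape the number of open file descriptors is bounded by `maxFiles` rather than by the number of distinct bucket ids, which is what the reporter's "Too many open files" failure calls for.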
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-1362) Reduce the number of files a FileCacheStore creates
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-1362?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-1362:
--------------------------------
Fix Version/s: (was: 6.0.0.Final)
> Reduce the number of files a FileCacheStore creates
> ---------------------------------------------------
>
> Key: ISPN-1362
> URL: https://issues.jboss.org/browse/ISPN-1362
> Project: Infinispan
> Issue Type: Enhancement
> Components: Loaders and Stores
> Affects Versions: 5.0.0.FINAL, 5.1.0.FINAL, 5.1.1.FINAL
> Environment: OS: Mac OSX 10.6
> IDE: IntelliJ IDEA 10
> Java: 10.6.0_26
> Hibernate Search: 3.4.0.Final
> Lucene: 3.1.0
> jGroups: 2.12.1.3.Final
> Reporter: Todd Underwood
> Assignee: Manik Surtani
> Priority: Minor
> Labels: persistence
>
[JBoss JIRA] (ISPN-3175) Upgrade the java hotrod client to support remote querying
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3175?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-3175:
--------------------------------
Labels: jdg62 (was: )
> Upgrade the java hotrod client to support remote querying
> ---------------------------------------------------------
>
> Key: ISPN-3175
> URL: https://issues.jboss.org/browse/ISPN-3175
> Project: Infinispan
> Issue Type: Sub-task
> Reporter: Mircea Markus
> Assignee: Galder Zamarreño
> Priority: Blocker
> Labels: jdg62
> Fix For: 6.0.0.Alpha2, 6.0.0.Final
>
>
> Once ISPN-3169 (define query fluent API), ISPN-3173 (add a new query operation over Hot Rod) and ISPN-3174 (String-based query language) are in place, this issue is about connecting the dots: invoking the remote query on the server and presenting the results to the user.
> h3. On the client side
> As described in ISPN-3173, the query is sent to the server as a byte[]: the payload.
> The request payload is query specific (not defined in the HR protocol) and at this stage has the following format: [Q_TYPE] [QUERY_STRING] [FIRST_INDEX] [PAGE_SIZE]. This format accommodates the remote query requirements as defined in ISPN-3169.
> - Q_TYPE (protobuf byte): a query-type identifier, 1 for JPAQL (the only query type supported initially). Different query types may be added in the future.
> - QUERY_STRING (protobuf string): the JPAQL string generated by the fluent API (ISPN-3169). Parameters are encoded in this string (rather than being sent separately).
> - FIRST_INDEX and PAGE_SIZE (protobuf int): used for paginating/iterating over the result set.
> HR response: [HR_SUCCESS_FLAG] [PAYLOAD]
> PAYLOAD = [NUM_EL] [PROJ_SIZE] [ELEMENTS]
> - even though projections are not supported at this stage (see ISPN-3169), PROJ_SIZE is included for forward compatibility once projections are supported.
> Note that the payload for both request and response should be marshalled with protobuf, as this information is read/written by multiple clients.
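The request layout above can be sketched as follows. This is an illustrative encoder only, using protobuf-style varints and a length-prefixed UTF-8 string; it is not Infinispan's actual wire code, and the exact field encodings are assumptions:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Sketch of the request payload layout described above:
// [Q_TYPE] [QUERY_STRING] [FIRST_INDEX] [PAGE_SIZE]
public class QueryPayload {
    // Protobuf varint: 7 bits per byte, high bit set on all but the last byte.
    static void writeVarint(ByteArrayOutputStream out, long v) {
        while ((v & ~0x7FL) != 0) {
            out.write((int) ((v & 0x7F) | 0x80));
            v >>>= 7;
        }
        out.write((int) v);
    }

    public static byte[] encode(String jpaql, int firstIndex, int pageSize) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeVarint(out, 1); // Q_TYPE: 1 = JPAQL, the only type supported initially
        byte[] q = jpaql.getBytes(StandardCharsets.UTF_8);
        writeVarint(out, q.length); // QUERY_STRING, length-delimited
        out.write(q, 0, q.length);
        writeVarint(out, firstIndex); // FIRST_INDEX, for pagination
        writeVarint(out, pageSize);   // PAGE_SIZE, for pagination
        return out.toByteArray();
    }
}
```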
> h3. On the server side
> - the server module reads the query operation and the payload (byte[])
> - invokes QueryFacade.query(byte[]) : byte[]
> - QueryFacade is an interface defined in the server modules
> - it has an implementation in the remote-query module (a new module)
> - the reason for adding this module: RemoteQueryImpl cannot live in the server modules (ASL) as it depends on the query module (LGPL), which would "contaminate" the server module. Another option would be to merge it into the query module itself, but that doesn't feel like a natural fit, as its responsibility is serializing data (protobuf). On the other hand, if it is small we could merge it there so that we don't add a new module to the system?
>
> For an overview on the remote querying see https://community.jboss.org/wiki/QueryingDesignInInfinispan
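The licensing-driven indirection described above might look like the following sketch. The interface and locator names are illustrative, not necessarily the final API:

```java
import java.util.Iterator;
import java.util.ServiceLoader;

// The server module only sees this interface (ASL-licensed); the
// implementation lives in a separate remote-query module that is free
// to depend on the LGPL query module.
interface QueryFacade {
    // Takes the opaque query payload from the Hot Rod client and returns
    // the marshalled response payload ([NUM_EL] [PROJ_SIZE] [ELEMENTS]).
    byte[] query(byte[] requestPayload);
}

// Sketch: the server discovers the facade implementation at runtime
// (e.g. via ServiceLoader), avoiding any compile-time dependency on the
// module that implements it.
final class QueryFacadeLocator {
    static QueryFacade load() {
        Iterator<QueryFacade> it = ServiceLoader.load(QueryFacade.class).iterator();
        return it.hasNext() ? it.next() : null;
    }
}
```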
[JBoss JIRA] (ISPN-604) Re-design CacheStore transactions
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-604?page=com.atlassian.jira.plugin.s... ]
Mircea Markus updated ISPN-604:
-------------------------------
Assignee: Mircea Markus (was: Pedro Ruivo)
> Re-design CacheStore transactions
> ----------------------------------
>
> Key: ISPN-604
> URL: https://issues.jboss.org/browse/ISPN-604
> Project: Infinispan
> Issue Type: Feature Request
> Components: Loaders and Stores, Transactions
> Affects Versions: 4.0.0.Final, 4.1.0.CR2
> Reporter: Mircea Markus
> Assignee: Mircea Markus
> Labels: as7-ignored, modshape
> Fix For: 6.0.0.Alpha1
>
>
> The current (4.1.x) transaction implementation in CacheStores is broken in several ways:
> 1st problem.
> {code}
> // AbstractCacheStore.prepare
> public void prepare(List<? extends Modification> mods, GlobalTransaction tx, boolean isOnePhase) throws CacheLoaderException {
>    if (isOnePhase) {
>       applyModifications(mods);
>    } else {
>       transactions.put(tx, mods);
>    }
> }
> {code}
> If this is 1PC, we apply the modifications in the prepare phase; we should do it in the commit phase (as JTA does).
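A corrected shape, deferring the writes to commit even in the one-phase case, might look like the self-contained sketch below. The types are simplified stand-ins for Infinispan's Modification/GlobalTransaction, not the actual fix:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the suggested behaviour: prepare only records the
// modifications; commit applies them, even in the one-phase case.
public class TxStoreSketch {
    public interface Modification { void apply(Map<String, String> store); }

    private final Map<String, String> store = new HashMap<>();
    private final Map<Object, List<Modification>> transactions = new HashMap<>();

    public void prepare(List<Modification> mods, Object tx, boolean isOnePhase) {
        transactions.put(tx, mods); // never apply during prepare
        if (isOnePhase) {
            commit(tx); // 1PC: commit immediately after prepare
        }
    }

    public void commit(Object tx) {
        List<Modification> mods = transactions.remove(tx);
        if (mods != null) {
            for (Modification m : mods) m.apply(store);
        }
    }

    public void rollback(Object tx) {
        transactions.remove(tx); // nothing was applied yet, so just forget
    }

    public String get(String key) { return store.get(key); }
}
```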
> 2nd problem.
> This currently manifests during commit/rollback with JdbcXyzCacheStore, but it is a more general cache store issue.
> When using a TransactionManager, AbstractCacheStore.commit is called internally during TM.commit and tries to apply all the modifications that happened during that transaction.
> Within the scope of AbstractCacheStore.commit, JdbcStore obtains a connection from a DataSource and tries to write the modifications on that connection.
> Now, if the DataSource is managed (e.g. by an application server), then on the DataSource.getConnection call the application server tries to enlist the connection with the ongoing transaction by calling Transaction.enlistResource(XAResource xaRes) [1].
> This method fails with an IllegalStateException because the transaction's status is "preparing" (see javax.transaction.Transaction.enlistResource).
> Suggested fix:
> - the modifications should be registered with the transaction as they happen (vs. during prepare/commit, as happens now)
> - this requires API changes in CacheStore, e.g.
> void store(InternalCacheEntry entry)
> should become
> void store(InternalCacheEntry entry, GlobalTransaction gtx)
> (gtx would be null if the call is not transactional).
> [1] This behavior is specified by the JDBC 2.0 Standard Extension API, chapter 7 - distributed transaction
[JBoss JIRA] (ISPN-604) Re-design CacheStore transactions
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-604?page=com.atlassian.jira.plugin.s... ]
Mircea Markus updated ISPN-604:
-------------------------------
Fix Version/s: 6.0.0.Beta1
6.0.0.Final
(was: 6.0.0.Alpha1)
> Re-design CacheStore transactions
> ----------------------------------
>
> Key: ISPN-604
> URL: https://issues.jboss.org/browse/ISPN-604
> Project: Infinispan
> Issue Type: Feature Request
> Components: Loaders and Stores, Transactions
> Affects Versions: 4.0.0.Final, 4.1.0.CR2
> Reporter: Mircea Markus
> Assignee: Mircea Markus
> Labels: as7-ignored, modshape
> Fix For: 6.0.0.Beta1, 6.0.0.Final
>
>
> Current(4.1.x) transaction implementation in CacheStores is brocken in several ways:
> 1st problem.
> {code}AbstractCacheStore.prepare:
> public void prepare(List<? extends Modification> mods, GlobalTransaction tx, boolean isOnePhase) throws CacheLoaderException {
> if (isOnePhase) {
> applyModifications(mods);
> } else {
> transactions.put(tx, mods);
> }
> }
> {code}
> If this is 1PC we apply the modifications in the prepare phase - we should do it in the commit phase (as JTA does it).
> 2nd problem.
> This currently exhibits during commit/rollback with JdbcXyzCacheStore, but it is rather a more general cache store issue.
> When using a TransactionManager, during TM.commit AbstractCacheStore.commit is being called internally which tries to apply all the modifications that happened during that transaction.
> Within the scope of AbstractCacheStore.commit, JdbcStore obtains a connection from a DataSource and tries to write the modifications on that connection.
> Now if the DataSource is managed (e.g. by an A.S.) on the DS.getConnection call the A.S. would try to enlist the connection with the ongoing transaction by calling Transaction.enlistResource(XAResource xaRes) [1]
> This method fails with an IllegalStateException, because the transaction's status is preparing (see javax.transaction.Transaction.enlistResource).
> Suggested fix:
> - the modifications should be registered with the transaction as they happen (vs. during prepare/commit, as happens now)
> - this requires API changes in CacheStore, e.g.
> void store(InternalCacheEntry entry)
> should become
> void store(InternalCacheEntry entry, GlobalTransaction gtx)
> (gtx would be null if this is not a transactional call).
> [1] This behavior is specified by the JDBC 2.0 Standard Extension API, chapter 7 - distributed transaction
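The suggested fix above can be sketched as follows. This is a minimal, self-contained illustration only: modifications and transaction ids are simplified to Strings, and the class name and methods are stand-ins, not the real CacheStore API. The key point is that modifications are buffered in prepare for both 1PC and 2PC, and applied only in commit (mirroring JTA):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the suggested behaviour: never apply modifications in prepare;
// buffer them per transaction and apply them in commit. 1PC simply commits
// immediately after a successful prepare.
public class TxStoreSketch {
    private final Map<String, List<String>> transactions = new ConcurrentHashMap<>();
    public final List<String> applied = new ArrayList<>();   // stands in for the real store

    public void prepare(List<String> mods, String tx, boolean isOnePhase) {
        transactions.put(tx, mods);      // register pending work, do not apply yet
        if (isOnePhase) {
            commit(tx);                  // 1PC: commit right after prepare
        }
    }

    public void commit(String tx) {
        List<String> mods = transactions.remove(tx);
        if (mods != null) {
            applied.addAll(mods);        // modifications hit the store only here
        }
    }

    public void rollback(String tx) {
        transactions.remove(tx);         // discard buffered modifications
    }
}
```

Because nothing is written before commit, a rollback only has to drop the buffered list, and a managed DataSource connection is never touched while the transaction is still in the preparing state.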
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-317) when unsafeReturnValues is false, combine put, remove, replace, putIfAbsent, to pull back responses in 1 command
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-317?page=com.atlassian.jira.plugin.s... ]
Mircea Markus commented on ISPN-317:
------------------------------------
This is already the case for non-transactional caches, with the delegation model in place. For optimistic caches the optimisation cannot be performed, as the actual modification only happens during prepare. So the scope of the optimisation is pessimistic caches only.
> when unsafeReturnValues is false, combine put, remove, replace, putIfAbsent, to pull back responses in 1 command
> ----------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-317
> URL: https://issues.jboss.org/browse/ISPN-317
> Project: Infinispan
> Issue Type: Feature Request
> Affects Versions: 5.1.0.FINAL
> Reporter: Mircea Markus
> Assignee: Manik Surtani
>
> At the moment this is split into two operations: a remote get followed by a put. Optimize this into a single operation.
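The shape of the optimisation can be sketched like this. All names here are hypothetical (a plain Map stands in for the remote cache), and a counter simulates remote round trips; the point is the contract change, not the Infinispan API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a single command that performs the write AND returns
// the previous value, instead of a remote get followed by a put.
public class CombinedPutSketch<K, V> {
    private final Map<K, V> store = new HashMap<>();
    public int roundTrips = 0;   // counts simulated remote calls

    // Current scheme: remote get, then put -> two commands on the wire.
    public V getThenPut(K key, V value) {
        roundTrips++;                 // remote get
        V prev = store.get(key);
        roundTrips++;                 // remote put
        store.put(key, value);
        return prev;
    }

    // Combined scheme: one command returns the previous value directly.
    public V putReturningPrevious(K key, V value) {
        roundTrips++;                 // single combined command
        return store.put(key, value);
    }
}
```

Halving the round trips matters most when unsafeReturnValues is false, since every put/remove/replace/putIfAbsent must then fetch the old value anyway.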
[JBoss JIRA] (ISPN-3175) Upgrade the java hotrod client to support remote querying
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3175?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-3175:
--------------------------------
Assignee: Galder Zamarreño (was: Mircea Markus)
> Upgrade the java hotrod client to support remote querying
> ---------------------------------------------------------
>
> Key: ISPN-3175
> URL: https://issues.jboss.org/browse/ISPN-3175
> Project: Infinispan
> Issue Type: Sub-task
> Reporter: Mircea Markus
> Assignee: Galder Zamarreño
>
> Once ISPN-3169 (define query fluent API), ISPN-3173 (add a new query operation over hotrod) and ISPN-3174 (String-based query language) are in place, this issue is about connecting the dots: invoking the remote query on the server and presenting the result to the user.
> h3.On the client side
> As described in ISPN-3173, the query is sent as a byte[] to the server: the payload.
> The request payload is query specific (not defined in the HR protocol) and at this stage has the following format: [Q_TYPE] [QUERY_STRING] [FIRST_INDEX] [PAGE_SIZE]. This format accommodates the remote query requirements as defined in ISPN-3169.
> - Q_TYPE (protobuf byte): the query-type identifier, 1 for JPAQL (the only query type supported at this stage). Other query types may be added in the future.
> - QUERY_STRING (protobuf string): the JPAQL string generated by the fluent API (ISPN-3169). Parameters are encoded in this string (vs. being sent separately).
> - FIRST_INDEX, PAGE_SIZE (protobuf ints): used for paginating/iterating over the result set.
> HR response: [HR_SUCCESS_FLAG] [payload]
> PAYLOAD = [NUM_EL] [PROJ_SIZE] [ELEMENTS]
> - even though projections are not supported at this stage (see ISPN-3169), PROJ_SIZE is included for forward compatibility once they are.
> Note that the payload for both request and response should be marshalled with protobuf, as this information is read/written by multiple clients.
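The client-side request layout above can be sketched as follows. This is an assumption-laden illustration, not the actual Infinispan wire code: it encodes the four fields with protobuf-style varints and a length-prefixed UTF-8 string, and all names (QueryPayloadSketch, encode, writeVarint) are made up for the sketch:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Sketch of the request payload [Q_TYPE] [QUERY_STRING] [FIRST_INDEX] [PAGE_SIZE],
// each integer as a protobuf-style varint and the query string length-prefixed.
public class QueryPayloadSketch {
    // Standard base-128 varint: 7 data bits per byte, high bit = continuation.
    static void writeVarint(ByteArrayOutputStream out, int value) {
        while ((value & ~0x7F) != 0) {
            out.write((value & 0x7F) | 0x80);
            value >>>= 7;
        }
        out.write(value);
    }

    public static byte[] encode(int queryType, String jpaql, int firstIndex, int pageSize) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeVarint(out, queryType);                         // Q_TYPE, 1 = JPAQL
        byte[] q = jpaql.getBytes(StandardCharsets.UTF_8);
        writeVarint(out, q.length);                          // length prefix for QUERY_STRING
        out.write(q, 0, q.length);                           // QUERY_STRING bytes
        writeVarint(out, firstIndex);                        // FIRST_INDEX
        writeVarint(out, pageSize);                          // PAGE_SIZE
        return out.toByteArray();
    }
}
```

Using protobuf primitives for both request and response keeps the byte[] readable by any client language with a protobuf runtime, which is the interoperability point made above.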
> h3.On the server side
> - the server module reads the query operation and the payload (byte[])
> - invokes QueryFacade.query(byte[]) : byte[]
> - QueryFacade is an interface defined in the server modules
> - it has an implementation in the remote-query module (a new module)
> - the reason for adding this module: RemoteQueryImpl cannot live in the server modules (ASL) because it depends on the query module (LGPL), which would "contaminate" the server module's licensing. Another option would be to merge it into the query module itself, but that doesn't feel like a natural fit, since its responsibility is serializing data (protobuf). On the other hand, if it is small we could merge it there so that we don't add a new module to the system.
>
> For an overview on the remote querying see https://community.jboss.org/wiki/QueryingDesignInInfinispan
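The server-side seam described above can be sketched as follows. The QueryFacade signature follows the text (byte[] in, byte[] out); everything else, including the ServiceLoader wiring and the stub implementation, is an assumption for illustration, not the actual design:

```java
import java.nio.charset.StandardCharsets;
import java.util.ServiceLoader;

// Sketch: the server module talks only to an opaque QueryFacade, so the
// (LGPL) query machinery can live in a separate remote-query module without
// a compile-time dependency from the (ASL) server modules.
public class QueryFacadeSketch {
    public interface QueryFacade {
        byte[] query(byte[] request);
    }

    // Stand-in for the implementation the remote-query module would provide.
    public static class EchoFacade implements QueryFacade {
        @Override
        public byte[] query(byte[] request) {
            return ("len=" + request.length).getBytes(StandardCharsets.UTF_8);
        }
    }

    // One common way to locate the implementation at runtime without a
    // compile-time dependency is java.util.ServiceLoader (assumed wiring):
    public static QueryFacade lookup() {
        for (QueryFacade f : ServiceLoader.load(QueryFacade.class)) {
            return f;
        }
        return new EchoFacade();   // fallback so this sketch runs standalone
    }
}
```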