[JBoss JIRA] (ISPN-2822) Profile Infinispan when eviction enabled
by Anna Manukyan (JIRA)
[ https://issues.jboss.org/browse/ISPN-2822?page=com.atlassian.jira.plugin.... ]
Anna Manukyan updated ISPN-2822:
--------------------------------
Attachment: Eviction Profiling Results.pdf
> Profile Infinispan when eviction enabled
> ----------------------------------------
>
> Key: ISPN-2822
> URL: https://issues.jboss.org/browse/ISPN-2822
> Project: Infinispan
> Issue Type: Task
> Reporter: Martin Gencur
> Assignee: Anna Manukyan
> Fix For: 5.3.0.Final
>
> Attachments: Eviction Profiling Results.pdf
>
>
> Profile Infinispan for different eviction strategies (LRU, LIRS) and search for hot spots in the code (places where Infinispan spends the most time while executing).
> Do this for two scenarios:
> 1) eviction enabled but no entries evicted (when the capacity is high)
> 2) eviction enabled and entries being evicted
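For reference, a minimal sketch of how the two scenarios could be set up with the Infinispan 5.x ConfigurationBuilder API; the cache names and maxEntries values below are placeholders, not part of this task:
{code}
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.eviction.EvictionStrategy;
import org.infinispan.manager.DefaultCacheManager;

public class EvictionProfilingSetup {
   public static void main(String[] args) {
      // Scenario 1: eviction enabled, capacity high enough that nothing gets evicted.
      ConfigurationBuilder noPressure = new ConfigurationBuilder();
      noPressure.eviction().strategy(EvictionStrategy.LRU).maxEntries(1000000);

      // Scenario 2: eviction enabled with entries actively being evicted (small capacity).
      ConfigurationBuilder pressure = new ConfigurationBuilder();
      pressure.eviction().strategy(EvictionStrategy.LIRS).maxEntries(1000);

      DefaultCacheManager cm = new DefaultCacheManager();
      cm.defineConfiguration("eviction-no-pressure", noPressure.build());
      cm.defineConfiguration("eviction-pressure", pressure.build());
      // Attach the profiler here and drive the load against cm.getCache("eviction-pressure") etc.
      cm.stop();
   }
}
{code}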
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2986) Intermittent failure to start new nodes during heavy write load
by Marc Bridner (JIRA)
[ https://issues.jboss.org/browse/ISPN-2986?page=com.atlassian.jira.plugin.... ]
Marc Bridner commented on ISPN-2986:
------------------------------------
Setting OOB threads to 200 seems to fix the problem, thanks.
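For context, the OOB thread pool is sized on the JGroups transport protocol; a sketch of the relevant attributes in a TCP-based stack follows (only max_threads="200" comes from this comment, all other values are placeholders):
{code}
<!-- Fragment of a TCP-based JGroups stack (e.g. test-jgroups.xml); only the OOB pool
     attributes are shown, and all values except max_threads="200" are placeholders. -->
<TCP bind_port="7800"
     oob_thread_pool.enabled="true"
     oob_thread_pool.min_threads="20"
     oob_thread_pool.max_threads="200"
     oob_thread_pool.keep_alive_time="60000"
     oob_thread_pool.queue_enabled="false" />
{code}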
> Intermittent failure to start new nodes during heavy write load
> ---------------------------------------------------------------
>
> Key: ISPN-2986
> URL: https://issues.jboss.org/browse/ISPN-2986
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache, Server
> Affects Versions: 5.2.5.Final
> Environment: 4 servers running Linux 2.6.32-220.13.1.el6.x86_64 with 2x quad-core 2.4 GHz CPUs
> Gigabit Ethernet, same switch.
> java version "1.7.0"
> Java(TM) SE Runtime Environment (build 1.7.0-b147)
> Java HotSpot(TM) 64-Bit Server VM (build 21.0-b17, mixed mode)
> Reporter: Marc Bridner
> Assignee: Tristan Tarrant
> Attachments: logs.zip, test-infinispan.xml, test-jgroups.xml, test.infinispan.zip
>
>
> When a new node is started while under heavy write load from a HotRod client with 64+ threads, the new node will sometimes fail to start, eventually reporting state transfer timeouts and finally terminating. During the time it takes to time out (~10 minutes), the HotRod client is completely blocked.
> Setup is as follows:
> 3 servers, 1 client
> * dl380x2385, 10.64.106.21, client
> * dl380x2384, 10.64.106.20, first node
> * dl380x2383, 10.64.106.19, second node
> * dl380x2382, 10.64.106.18, third node
> 2 caches, initial state transfer off, transactions on, config is attached.
> A small app that triggers the problem is also attached.
> Steps to reproduce:
> 1. Start first node
> 2. Start client, wait for counter to reach 50000 (in client)
> 3. Start second node. 10% chance it'll fail.
> 4. Wait for counter to reach 100000 in client.
> 5. Start third node, 50% chance it'll fail.
> If it doesn't fail, terminate everything and start over.
> I realize this may be hard to reproduce, so if any more logs or tests are needed, let me know.
> I've been unable to reproduce it on a single physical machine, and it only occurs when using more than 64 client threads. Changing the ratio of writes between the caches also seems to prevent it. I was unable to reproduce it with TRACE log level enabled (too slow), but if you can specify some packages you want traces of, that might work.
> Turning transactions off makes it worse: a 90% chance of failure on the second node. Funnily enough, disabling the concurrent GC lowers the failure rate to 10% on the third node. I'm guessing there's a race condition somewhere; it may be similar to ISPN-2982.
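For illustration, the rough shape of the write load described above as a minimal HotRod client sketch; this is not the attached test app, and the server address, cache name, payload size and key scheme are placeholders:
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class WriteLoadSketch {
   public static void main(String[] args) {
      // Placeholder server address, cache name and payload; not the attached test app.
      final RemoteCacheManager rcm = new RemoteCacheManager("10.64.106.20", 11222);
      final RemoteCache<String, byte[]> cache = rcm.getCache("testCache");
      final byte[] payload = new byte[512];

      ExecutorService pool = Executors.newFixedThreadPool(64); // 64+ writer threads
      for (int t = 0; t < 64; t++) {
         pool.submit(new Runnable() {
            public void run() {
               for (int i = 0; i < 100000; i++) {
                  cache.put(Thread.currentThread().getName() + "-" + i, payload);
               }
            }
         });
      }
      pool.shutdown();
      // Start the new server node while this load is running to hit the failure window.
   }
}
{code}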
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2950) In distributed mode cache store data should be read through the main data owner (vs directly from the store)
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-2950?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero updated ISPN-2950:
----------------------------------
Reporter: Sanne Grinovero (was: Mircea Markus)
> In distributed mode cache store data should be read through the main data owner (vs directly from the store)
> ------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-2950
> URL: https://issues.jboss.org/browse/ISPN-2950
> Project: Infinispan
> Issue Type: Feature Request
> Components: Loaders and Stores
> Reporter: Sanne Grinovero
> Assignee: Mircea Markus
> Priority: Critical
> Fix For: 5.3.0.Alpha1, 5.3.0.Final
>
>
> Dist cache with a cache store (shared or not), k owned by \{N1, N2\}. k is read on N3. What currently happens at this stage: if k is not present in N3's memory (likely, unless L1 is configured), N3's cache store is queried and the data is loaded from there. This has several drawbacks:
> - the data might already be in memory on an owner node (N1, N2), so reading it from disk is highly inefficient, especially for hot data, i.e. data requested from various nodes at the same time (see also the mailing list discussion about Lucene query performance depending on this)
> - if this is a local cache store, it might contain stale data which would be returned to the user
> - for an async-configured cache store this would result in dirty reads, given that a change might be in the async store's memory but not yet in the store at the moment when it is read by N3. (Note that using async stores still leaves room for inconsistencies when a node leaves, e.g. because the node crashes before managing to flush the async store.)
> This JIRA is about changing the distribution mode: when asked for a specific key, a node would only touch a cache store if it is an owner of that key; otherwise it would first go to the main owner of the key and read the value from there. The ClusterCacheLoader should be deprecated as well.
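For illustration, a minimal sketch of the ownership check this proposal hinges on, using the public Infinispan 5.x DistributionManager API; the remote read from the main owner would happen inside the interceptor stack and is not shown here:
{code}
import org.infinispan.AdvancedCache;
import org.infinispan.distribution.DistributionManager;

public class OwnerAwareReadSketch {
   // Only consult the local cache store when this node is an owner of the key;
   // otherwise the read should go to the key's main owner (dm.locate(key).get(0)),
   // which is the part the interceptor stack would handle and is not shown here.
   static boolean mayReadLocalStore(AdvancedCache<?, ?> cache, Object key) {
      DistributionManager dm = cache.getDistributionManager();
      return dm.getLocality(key).isLocal();
   }
}
{code}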
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2950) In distributed mode cache store data should be read through the main data owner (vs directly from the store)
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-2950?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero updated ISPN-2950:
----------------------------------
Description:
Dist cache with a cache store (shared or not), k owned by \{N1, N2\}. k is read on N3. What currently happens at this stage: if k is not present in N3's memory (likely, unless L1 is configured), N3's cache store is queried and the data is loaded from there. This has several drawbacks:
- the data might already be in memory on an owner node (N1, N2), so reading it from disk is highly inefficient, especially for hot data, i.e. data requested from various nodes at the same time (see also the mailing list discussion about Lucene query performance depending on this)
- if this is a local cache store, it might contain stale data which would be returned to the user
- for an async-configured cache store this would result in dirty reads, given that a change might be in the async store's memory but not yet in the store at the moment when it is read by N3. (Note that using async stores still leaves room for inconsistencies when a node leaves, e.g. because the node crashes before managing to flush the async store.)
This JIRA is about changing the distribution mode: when asked for a specific key, a node would only touch a cache store if it is an owner of that key; otherwise it would first go to the main owner of the key and read the value from there. The ClusterCacheLoader should be deprecated as well.
was:
Dist cache with a cache store(shared or not), k owned by \{N1, N2\}. k is read on N3. What currently happens at this stage, if k is not present in N3's memory (likely unless L1 is configured), the N3's cache store is queried and data is loaded from there. This has several drawback:
- the data might already be in the memory of the owner node (N1,N2) so reading it from the disk is highly inefficient. Especially for hot data: data requested from various nodes at the same time (see also mailing list discussion around lucene query performance depending on this)
- if this is a local cache store, it might contain stale data which would be returned to the user
- for async configured cache store this would result in dirty reads, given that a change might be in the async store's memory but not in the store at the moment when it is in read by N3. (Note that using async stores still leaves place to inconsistencies when a node leaves, e.g. because of node crashing before managing to flush the async store.)
This JIRA is about changing the distribution mode: when asked for a specific key, a node would only touch a cache store if it is an owner of that key, otherwise would first go to the main owner of the key to read the value from there. The ClusterCacheLoader should be deprecated as well.
> In distributed mode cache store data should be read through the main data owner (vs directly from the store)
> ------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-2950
> URL: https://issues.jboss.org/browse/ISPN-2950
> Project: Infinispan
> Issue Type: Feature Request
> Components: Loaders and Stores
> Reporter: Sanne Grinovero
> Assignee: Mircea Markus
> Priority: Critical
> Fix For: 5.3.0.Alpha1, 5.3.0.Final
>
>
> Dist cache with a cache store (shared or not), k owned by \{N1, N2\}. k is read on N3. What currently happens at this stage: if k is not present in N3's memory (likely, unless L1 is configured), N3's cache store is queried and the data is loaded from there. This has several drawbacks:
> - the data might already be in memory on an owner node (N1, N2), so reading it from disk is highly inefficient, especially for hot data, i.e. data requested from various nodes at the same time (see also the mailing list discussion about Lucene query performance depending on this)
> - if this is a local cache store, it might contain stale data which would be returned to the user
> - for an async-configured cache store this would result in dirty reads, given that a change might be in the async store's memory but not yet in the store at the moment when it is read by N3. (Note that using async stores still leaves room for inconsistencies when a node leaves, e.g. because the node crashes before managing to flush the async store.)
> This JIRA is about changing the distribution mode: when asked for a specific key, a node would only touch a cache store if it is an owner of that key; otherwise it would first go to the main owner of the key and read the value from there. The ClusterCacheLoader should be deprecated as well.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2999) getCacheEntry not working when a distributed get goes remote
by Galder Zamarreño (JIRA)
Galder Zamarreño created ISPN-2999:
--------------------------------------
Summary: getCacheEntry not working when a distributed get goes remote
Key: ISPN-2999
URL: https://issues.jboss.org/browse/ISPN-2999
Project: Infinispan
Issue Type: Bug
Affects Versions: 5.2.5.Final
Reporter: Galder Zamarreño
Assignee: Mircea Markus
Fix For: 5.3.0.Beta1, 5.3.0.Final
Assuming the cache contains byte[], you get this exception when calling getCacheEntry(K) for a key not available locally:
{code}org.infinispan.server.hotrod.HotRodException: java.lang.ClassCastException: [B cannot be cast to org.infinispan.container.entries.CacheEntry
at org.infinispan.server.hotrod.HotRodDecoder.createServerException(HotRodDecoder.scala:216)
at org.infinispan.server.core.AbstractProtocolDecoder.decode(AbstractProtocolDecoder.scala:79)
at org.infinispan.server.core.AbstractProtocolDecoder.decode(AbstractProtocolDecoder.scala:49)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:500)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
at org.infinispan.server.core.AbstractProtocolDecoder.messageReceived(AbstractProtocolDecoder.scala:393)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:313)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.lang.ClassCastException: [B cannot be cast to org.infinispan.container.entries.CacheEntry
at org.infinispan.CacheImpl.getCacheEntry(CacheImpl.java:299)
at org.infinispan.CacheImpl.getCacheEntry(CacheImpl.java:304)
at org.infinispan.server.core.AbstractProtocolDecoder.get(AbstractProtocolDecoder.scala:287)
at org.infinispan.server.core.AbstractProtocolDecoder.decodeKey(AbstractProtocolDecoder.scala:117)
at org.infinispan.server.core.AbstractProtocolDecoder.decode(AbstractProtocolDecoder.scala:73)
... 14 more{code}
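A hypothetical defensive handling of the returned object is sketched below; it is illustrative only and not the actual fix, assuming the clustered get brings back only the stored value (e.g. a byte[]) rather than a CacheEntry:
{code}
import org.infinispan.container.entries.CacheEntry;
import org.infinispan.container.entries.ImmortalCacheEntry;

public class CacheEntrySketch {
   // Hypothetical defensive handling (not the actual fix): when the clustered get only
   // brings back the raw stored value, wrap it instead of casting it to CacheEntry as
   // CacheImpl.getCacheEntry currently does.
   static CacheEntry toCacheEntry(Object key, Object ret) {
      if (ret == null) return null;                           // miss
      if (ret instanceof CacheEntry) return (CacheEntry) ret; // local read: already an entry
      return new ImmortalCacheEntry(key, ret);                // remote read: value only
   }
}
{code}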
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2980) sqlite support
by Aleksandar Kostadinov (JIRA)
[ https://issues.jboss.org/browse/ISPN-2980?page=com.atlassian.jira.plugin.... ]
Aleksandar Kostadinov edited comment on ISPN-2980 at 4/5/13 5:28 PM:
---------------------------------------------------------------------
Haha, nice that they picked up my bug report so fast! By the way, I had to compile the driver myself because that snapshot is from before the patch. There is another [small patch|https://bitbucket.org/xerial/sqlite-jdbc/issue/55/sqlitedatasource-...] I submitted for compiling with JDK 7, in case you can run some perf tests with it. (Attaching the one I compiled for Fedora 18 - [^sqlite-jdbc-3.7.15-SNAPSHOT-f18.jar])
*UPDATE:* the driver above is updated with [this patch|https://bitbucket.org/xerial/sqlite-jdbc/issue/56/getbinarystream-i...], which is also needed. I updated the description with the option to run SQLite in WAL journal mode for better performance.
Anyway, it seems to be working now, and my quick test shows that the same workload that takes over 7 minutes with PostgreSQL and just over 10 with MySQL takes 20-30 seconds with SQLite. This is comparable to the time it takes with MySQL using the MEMORY engine.
There are still significant drawbacks to this solution though:
* the Xerial JDBC driver does not support running on top of the OS-bundled SQLite library, so SQLite will not be supported by Red Hat
* going through JDBC and a connection pool is still an overhead (for configuration and performance), as is evident from the even better BDB JE performance
* XA transactions are not supported by the Xerial JDBC driver
was (Author: akostadinov):
Haha, nice they picked up my bug report so fast! btw I had to compile the driver because that snapshot is from before the patch. There is another [small patch|https://bitbucket.org/xerial/sqlite-jdbc/issue/55/sqlitedatasource-...] for compiling with jdk7 I submitted if you can run some perf tests with it. (attaching the one I compiled for fedora 18 - [^sqlite-jdbc-3.7.15-SNAPSHOT-f18.jar])
*UPDATE:* the driver above is updated with [this patch|https://bitbucket.org/xerial/sqlite-jdbc/issue/56/getbinarystream-i...] that is also needed. Updated description with option to run sqlite with WAL tx mode for better performance.
Anyways it seems to be working now and my quick test shows that the same thing taking over 7 minutes with PostgreSQL and just over 10 with mysql takes 20-30 seconds with sqlight. This is like the time it takes with MySQL using the MEMORY engine.
There are still significant drawbacks of this solution though:
* xerial jdbc driver does not support running on top of OS bundled sqlite library so sqlite will not be supported by Red Hat
* going through jdbc and a connection pool is still an overhead (for configuration and performance) and that is evident by the even better bdbje performance
* XA transactions are not supported by the xerial jdbc driver
> sqlite support
> --------------
>
> Key: ISPN-2980
> URL: https://issues.jboss.org/browse/ISPN-2980
> Project: Infinispan
> Issue Type: Feature Request
> Components: Loaders and Stores
> Affects Versions: 5.2.5.Final
> Reporter: Aleksandar Kostadinov
> Assignee: Mircea Markus
> Labels: cache-loader, cache-store, jdbc, sqlite
> Attachments: sqlite-jdbc-3.7.15-SNAPSHOT-f18.jar
>
>
> It would be very nice if we had SQLite support for Infinispan. SQLite is a powerful database that supports terabyte-sized databases in a single file with competitive performance.
> I tried to use it as a JDBC store, but the best driver I found on the internet ([xerial sqlite jdbc driver|https://bitbucket.org/xerial/sqlite-jdbc]) does not implement the full JDBC specification, and trying to use it results in exceptions.
> I think that perhaps using the [non-jdbc wrapper sqlite4java|http://code.google.com/p/sqlite4java/] may make sense for Infinispan because:
> 1. it promises better performance
> 2. it allows using the SQLite library from the OS (the xerial driver uses a customized build of SQLite)
> FYI, here is how I set up SQLite for Infinispan (unsuccessfully):
> {code}jboss as cli commands:
> /subsystem=datasources/jdbc-driver=sqlite:add(driver-name="sqlite",driver-module-name="org.xerial",driver-class-name=org.sqlite.JDBC)
> data-source add --name=SQLiteDS --connection-url="jdbc:sqlite:${sqlite.database.string}" --jndi-name=java:jboss/datasources/SQLiteDS --driver-name="sqlite"
> /subsystem=datasources/data-source=SQLiteDS/connection-properties=journal_mode:add(value="WAL")
> /subsystem=datasources/data-source=SQLiteDS:enable
> {code}
> {code}JBoss AS module definition (modules/org/xerial/main/module.xml):
> <?xml version="1.0" encoding="UTF-8"?>
> <module xmlns="urn:jboss:module:1.0" name="org.xerial">
> <resources>
> <resource-root path="sqlite-jdbc.jar" />
> </resources>
> <dependencies>
> <module name="javax.api" />
> <module name="javax.transaction.api"/>
> </dependencies>
> </module>
> {code}
> {code}cache store/loader configuration snippet:
> <stringKeyedJdbcStore xmlns="urn:infinispan:config:jdbc:5.2" fetchPersistentState="false" ignoreModifications="false" purgeOnStartup="false" key2StringMapper="com.jboss.datagrid.chunchun.util.TwoWayKey2StringChunchunMapper">
> <dataSource jndiUrl="java:jboss/datasources/SQLiteDS" />
> <stringKeyedTable dropOnExit="false" createOnStart="true" prefix="ispn">
> <idColumn name="ID_COLUMN" type="VARCHAR(255)" />
> <dataColumn name="DATA_COLUMN" type="BLOB" />
> <timestampColumn name="TIMESTAMP_COLUMN" type="BIGINT" />
> </stringKeyedTable>
> </stringKeyedJdbcStore>
> </loaders>
> {code}
> The SQL driver JAR needs to be copied into the same directory as module.xml.
> *UPDATE:* the exception is fixed with the latest dev code of the xerial JDBC driver; please see the comments for the remaining problems.
> The exception I'm getting is:{code}
> 12:53:10,683 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (MSC service thread 1-3) ISPN000136: Execution error: org.infinispan.loaders.CacheLoaderException: Error while storing string key to database; key: 'user41', buffer size of value: 4918 bytes
> at org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore.storeLockSafe(JdbcStringBasedCacheStore.java:253) [infinispan-cachestore-jdbc-5.2.5.Final.jar:5.2.5.Final]
> ...
> Caused by: java.sql.SQLException: not implemented by SQLite JDBC driver
> at org.sqlite.Unused.unused(Unused.java:29) [sqlite-jdbc-3.7.2.jar:]
> at org.sqlite.Unused.setBinaryStream(Unused.java:60) [sqlite-jdbc-3.7.2.jar:]
> at org.jboss.jca.adapters.jdbc.WrappedPreparedStatement.setBinaryStream(WrappedPreparedStatement.java:871)
> at org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore.storeLockSafe(JdbcStringBasedCacheStore.java:247) [infinispan-cachestore-jdbc-5.2.5.Final.jar:5.2.5.Final]
> ... 73 more{code}
> The driver [does not support|http://code.google.com/p/xerial/issues/detail?id=99] setBinaryStream(), only setBytes() (see the JDBC sketch below). I'm not sure if there are any other methods required by Infinispan but not implemented.
> As a simple comparison between JDBC and direct storage, I tried an app that caches 3000 records of around 5k and 60000 records of around 0.5k (less than 60 MiB in total). The BDB JE store operation completes in less than a minute; with a local MySQL server it takes 10 minutes. And this is on a machine with plenty of CPU and memory, on top of an SSD. Unfortunately, BDB JE does not work clustered for me (ISPN-2968).
> So my point is that a fast, reliable, transactional, local disk-based engine is highly needed.
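For illustration, the difference between the two JDBC calls in isolation; the table and column names are placeholders loosely based on the configuration above, not Infinispan's actual generated SQL:
{code}
import java.io.ByteArrayInputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class BlobWriteSketch {
   static void write(Connection con, String key, byte[] value) throws Exception {
      // Placeholder SQL, loosely modelled on the stringKeyedTable config above.
      PreparedStatement ps = con.prepareStatement(
            "INSERT INTO ispn_cache (ID_COLUMN, DATA_COLUMN) VALUES (?, ?)");
      try {
         ps.setString(1, key);
         // What the Infinispan JDBC store effectively calls (unsupported by this driver):
         // ps.setBinaryStream(2, new ByteArrayInputStream(value), value.length);
         // What the Xerial driver does support:
         ps.setBytes(2, value);
         ps.executeUpdate();
      } finally {
         ps.close();
      }
   }
}
{code}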
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2980) sqlite support
by Aleksandar Kostadinov (JIRA)
[ https://issues.jboss.org/browse/ISPN-2980?page=com.atlassian.jira.plugin.... ]
Aleksandar Kostadinov updated ISPN-2980:
----------------------------------------
Attachment: sqlite-jdbc-3.7.15-SNAPSHOT-f18.jar
> sqlite support
> --------------
>
> Key: ISPN-2980
> URL: https://issues.jboss.org/browse/ISPN-2980
> Project: Infinispan
> Issue Type: Feature Request
> Components: Loaders and Stores
> Affects Versions: 5.2.5.Final
> Reporter: Aleksandar Kostadinov
> Assignee: Mircea Markus
> Labels: cache-loader, cache-store, jdbc, sqlite
> Attachments: sqlite-jdbc-3.7.15-SNAPSHOT-f18.jar
>
>
> It would be very nice if we had SQLite support for Infinispan. SQLite is a powerful database that supports terabyte-sized databases in a single file with competitive performance.
> I tried to use it as a JDBC store, but the best driver I found on the internet ([xerial sqlite jdbc driver|https://bitbucket.org/xerial/sqlite-jdbc]) does not implement the full JDBC specification, and trying to use it results in exceptions.
> I think that perhaps using the [non-jdbc wrapper sqlite4java|http://code.google.com/p/sqlite4java/] may make sense for Infinispan because:
> 1. it promises better performance
> 2. it allows using the SQLite library from the OS (the xerial driver uses a customized build of SQLite)
> FYI, here is how I set up SQLite for Infinispan (unsuccessfully):
> {code}jboss as cli commands:
> /subsystem=datasources/jdbc-driver=sqlite:add(driver-name="sqlite",driver-module-name="org.xerial",driver-class-name=org.sqlite.JDBC)
> data-source add --name=SQLiteDS --connection-url="jdbc:sqlite:${sqlite.database.string}" --jndi-name=java:jboss/datasources/SQLiteDS --driver-name="sqlite"
> /subsystem=datasources/data-source=SQLiteDS:enable
> {code}
> {code}JBoss AS module definition (modules/org/xerial/main/module.xml):
> <?xml version="1.0" encoding="UTF-8"?>
> <module xmlns="urn:jboss:module:1.0" name="org.xerial">
> <resources>
> <resource-root path="sqlite-jdbc.jar" />
> </resources>
> <dependencies>
> <module name="javax.api" />
> <module name="javax.transaction.api"/>
> </dependencies>
> </module>
> {code}
> {code}cache store/loader configuration snippet:
> <stringKeyedJdbcStore xmlns="urn:infinispan:config:jdbc:5.2" fetchPersistentState="false" ignoreModifications="false" purgeOnStartup="false" key2StringMapper="com.jboss.datagrid.chunchun.util.TwoWayKey2StringChunchunMapper">
> <dataSource jndiUrl="java:jboss/datasources/SQLiteDS" />
> <stringKeyedTable dropOnExit="false" createOnStart="true" prefix="ispn">
> <idColumn name="ID_COLUMN" type="VARCHAR(255)" />
> <dataColumn name="DATA_COLUMN" type="BLOB" />
> <timestampColumn name="TIMESTAMP_COLUMN" type="BIGINT" />
> </stringKeyedTable>
> </stringKeyedJdbcStore>
> </loaders>
> {code}
> The SQL driver JAR needs to be copied into the same directory as module.xml.
> *UPDATE:* the exception is fixed with the latest dev code of the xerial JDBC driver; please see the comments for the remaining problems.
> The exception I'm getting is:{code}
> 12:53:10,683 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (MSC service thread 1-3) ISPN000136: Execution error: org.infinispan.loaders.CacheLoaderException: Error while storing string key to database; key: 'user41', buffer size of value: 4918 bytes
> at org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore.storeLockSafe(JdbcStringBasedCacheStore.java:253) [infinispan-cachestore-jdbc-5.2.5.Final.jar:5.2.5.Final]
> ...
> Caused by: java.sql.SQLException: not implemented by SQLite JDBC driver
> at org.sqlite.Unused.unused(Unused.java:29) [sqlite-jdbc-3.7.2.jar:]
> at org.sqlite.Unused.setBinaryStream(Unused.java:60) [sqlite-jdbc-3.7.2.jar:]
> at org.jboss.jca.adapters.jdbc.WrappedPreparedStatement.setBinaryStream(WrappedPreparedStatement.java:871)
> at org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore.storeLockSafe(JdbcStringBasedCacheStore.java:247) [infinispan-cachestore-jdbc-5.2.5.Final.jar:5.2.5.Final]
> ... 73 more{code}
> The driver [does not support|http://code.google.com/p/xerial/issues/detail?id=99] setBinaryStream(), only setBytes(). I'm not sure if there are any other methods required by Infinispan but not implemented.
> As a simple comparison between JDBC and direct storage, I tried an app that caches 3000 records of around 5k and 60000 records of around 0.5k (less than 60 MiB in total). The BDB JE store operation completes in less than a minute; with a local MySQL server it takes 10 minutes. And this is on a machine with plenty of CPU and memory, on top of an SSD. Unfortunately, BDB JE does not work clustered for me (ISPN-2968).
> So my point is that a fast, reliable, transactional, local disk-based engine is highly needed.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira