[infinispan-dev] chunking ability on the JDBC cacheloader

Sanne Grinovero sanne.grinovero at gmail.com
Thu May 19 14:06:54 EDT 2011


As mentioned on the user forum [1], people setting up a JDBC
cacheloader need to be able to define the size of the columns to be
used. The Lucene Directory has a feature to autonomously chunk the
segment contents at a configurable byte size, and so does the GridFS;
still, there are other metadata objects which Lucene currently
doesn't chunk because they are "fairly small" (but of undefined and
possibly growing size), and more generally anybody using the JDBC
cacheloader faces the same problem: what column size do I need to
use?

While in most cases the maximum size can be estimated, that is still
not good enough: when the estimate is wrong the byte array may get
truncated, so I think the CacheLoader should take care of this
itself.

What would you think of:
 - adding a max_chunk_size option to JdbcStringBasedCacheStoreConfig
and JdbcBinaryCacheStore
 - having them store values bigger than max_chunk_size across
multiple rows (a rough sketch of what I mean follows below)
 - this would need transactions, which the cacheloaders currently
don't use
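
To make the second point concrete, here is a rough, hypothetical
sketch of what a chunked write could look like at the JDBC level: the
value is split into max_chunk_size pieces, each stored as its own row
keyed by (key, chunk index), inside a single transaction so readers
never see a half-written value. The table and column names
(ISPN_BUCKET, BUCKET_KEY, CHUNK_INDEX, DATA) and the helper class are
made up for illustration, they are not existing Infinispan code:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Arrays;

public class ChunkingSketch {

    // would come from the cache store configuration
    private static final int MAX_CHUNK_SIZE = 4096;

    // Store one logical value as N rows, all inside a single
    // transaction so a reader never observes a partial set of chunks.
    public static void storeChunked(Connection conn, String key, byte[] value)
            throws SQLException {
        boolean previousAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO ISPN_BUCKET (BUCKET_KEY, CHUNK_INDEX, DATA) VALUES (?, ?, ?)")) {
            int chunks = (value.length + MAX_CHUNK_SIZE - 1) / MAX_CHUNK_SIZE;
            for (int i = 0; i < chunks; i++) {
                int from = i * MAX_CHUNK_SIZE;
                int to = Math.min(from + MAX_CHUNK_SIZE, value.length);
                ps.setString(1, key);
                ps.setInt(2, i);
                ps.setBytes(3, Arrays.copyOfRange(value, from, to));
                ps.addBatch();
            }
            ps.executeBatch();
            conn.commit();
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        } finally {
            conn.setAutoCommit(previousAutoCommit);
        }
    }
}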

It looks to me like only the JDBC cacheloader has this issue, as the
other stores I'm aware of are more "blob oriented". Could it be worth
building this abstraction at a higher level instead of in the JDBC
cacheloader?
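
To illustrate the "higher level" idea: the chunking could live in a
decorator that wraps any key/value store, splitting on write and
reassembling on read, so each concrete store stays unaware of it. The
FlatStore interface and ChunkingStore class below are invented for
this sketch, they are not an existing Infinispan SPI:

import java.nio.ByteBuffer;
import java.util.Arrays;

interface FlatStore {
    void put(String key, byte[] value);
    byte[] get(String key);
}

class ChunkingStore implements FlatStore {
    private final FlatStore delegate;
    private final int maxChunkSize;

    ChunkingStore(FlatStore delegate, int maxChunkSize) {
        this.delegate = delegate;
        this.maxChunkSize = maxChunkSize;
    }

    // Split the value into fixed-size chunks and record the chunk
    // count under a derived key so get() knows how many rows to read.
    @Override
    public void put(String key, byte[] value) {
        int chunks = (value.length + maxChunkSize - 1) / maxChunkSize;
        delegate.put(key + "#count", ByteBuffer.allocate(4).putInt(chunks).array());
        for (int i = 0; i < chunks; i++) {
            int from = i * maxChunkSize;
            int to = Math.min(from + maxChunkSize, value.length);
            delegate.put(key + "#" + i, Arrays.copyOfRange(value, from, to));
        }
    }

    // Read the chunk count, fetch every chunk and concatenate them
    // back into the original byte array.
    @Override
    public byte[] get(String key) {
        byte[] countBytes = delegate.get(key + "#count");
        if (countBytes == null) return null;
        int chunks = ByteBuffer.wrap(countBytes).getInt();
        byte[][] parts = new byte[chunks][];
        int total = 0;
        for (int i = 0; i < chunks; i++) {
            parts[i] = delegate.get(key + "#" + i);
            total += parts[i].length;
        }
        byte[] result = new byte[total];
        int offset = 0;
        for (byte[] part : parts) {
            System.arraycopy(part, 0, result, offset, part.length);
            offset += part.length;
        }
        return result;
    }
}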

Cheers,
Sanne

[1] - http://community.jboss.org/thread/166760

