On Tue, Mar 29, 2011 at 3:49 AM, Olaf Bergner <olaf.bergner@gmx.de> wrote:
> I've started working on ISPN-78 - Large Object Support - closely
> following Manik's design document
> http://community.jboss.org/wiki/LargeObjectSupport. As a starting point
> I'm currently trying to implement
>
> OutputStream writeToKey(K key),
Since it's a common use case for data to be streamed from disk (or a
socket) anyway, and there's no good "pipe" in the Java SDK, your API
change is an improvement.
What you want to express is:

Cache c;
c.write(key, new FileInputStream("/var/db/bigfile.txt"));
But being handed an OutputStream is good in these cases:
1. If an exception is thrown from an InputStream (disk error), the
exception doesn't have to come through Infinispan. (I suggest the API
support IOException; see the sketch after this list.)
2. A user can better compose the output. For example, if you want to
add, say, a header to a file being read from disk, it's much easier to
do a series of write operations, like os.write(<header>),
os.write(<data>). Still, I wouldn't recommend that.
3. If you want to append new data.
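
To make case 1 concrete, a minimal sketch (it assumes the
largeObject()/getAppendStream() API proposed below, that the stream
methods throw IOException, and a hypothetical storeFile helper; all
stream classes are from java.io):

void storeFile(Cache<Object, LargeObject> c, Object key, File f)
        throws IOException {
    InputStream in = new FileInputStream(f);
    OutputStream os = c.largeObject(key).getAppendStream();
    try {
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            os.write(buf, 0, n); // a disk error surfaces here as a plain IOException
        }
    } finally {
        in.close();
        os.close(); // close() flushes any buffered data to the cache
    }
}

The caller handles the IOException itself; Infinispan never has to wrap
or translate it.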
I think it'd be BEST if you could support both models. I would add:
interface Cache<K, V> {
    /**
     * Returns a new or existing LargeObject for the given key.
     * @throws ClassCastException if the key exists and is not a LargeObject.
     */
    LargeObject largeObject(K key);
}
Use:

Cache<K, LargeObject> c;
c.largeObject(key).append(new FileInputStream(...));

- or -

c.largeObject(key);
/// some time passes ///
OutputStream os = c.largeObject(key).getAppendStream();
os.write("more data now".getBytes());
os.close(); // flushes data to Cache
public abstract class LargeObject {
    transient final Cache cache;
    transient final Object key;
    int chunks;
    final int chunkSize;
    long totalSize;

    /** Constructor intended only for Cache itself, but should allow
        subclassing for tests. */
    protected LargeObject(Cache cache, Object key, int chunkSize) {
        this.cache = cache;
        this.key = key;
        this.chunkSize = chunkSize;
    }

    /** Data is written to the Cache but not entirely stored until the
        stream is closed or flushed. */
    public abstract OutputStream getAppendStream();

    /** Data is read until EOF, then the stream is closed. */
    public abstract void append(InputStream is);

    /** The returned stream should support "seek", "skip" and
        "available" methods. */
    public abstract InputStream getInput();

    public long getTotalSize() { return totalSize; }

    public abstract void truncate(long length);

    protected abstract void remove();
}
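
For illustration, getAppendStream() could be backed by a buffering
stream along these lines (a sketch only; the ChunkKey class is an
assumption of mine, sketched under point 3 below, and Arrays is
java.util.Arrays):

protected OutputStream newAppendStream() {
    return new OutputStream() {
        private final byte[] buf = new byte[chunkSize];
        private int pos;

        public void write(int b) {
            buf[pos++] = (byte) b;
            if (pos == chunkSize)
                flushChunk();
        }

        private void flushChunk() {
            // store the current buffer contents under (key, chunk number)
            cache.put(new ChunkKey(key, chunks++), Arrays.copyOf(buf, pos));
            totalSize += pos;
            pos = 0;
        }

        public void close() {
            if (pos > 0)
                flushChunk();
            // write the metadata entry last, so a reader never sees a
            // half-written large object
            cache.put(key, LargeObject.this);
        }
    };
}

Arrays.copyOf keeps each stored chunk independent of the buffer; the
final chunk may simply be shorter than chunkSize.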
> This is certainly doable but leaves me wondering where that proposed
> ChunkingInterceptor might come into play.
I would think that ideally you don't need to create any new commands.
Fewer protocol messages are better.
You do need to deal with the case of "remove": Ultimately, you will
need to call LargeObject.remove().
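
Under the same assumed (K, #) chunk-key scheme, remove() stays a plain
loop over the existing cache API, with no new commands needed:

protected void remove() {
    cache.remove(key); // drop the metadata entry first, hiding the object from readers
    for (int i = 0; i < chunks; i++) {
        cache.remove(new ChunkKey(key, i)); // then drop each chunk
    }
    chunks = 0;
    totalSize = 0;
}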
> 3. The design suggests to use a fresh UUID as the key for each new
> chunk. While this in all likelihood gives us a unique new key for each
> chunk, I currently fail to see how that guarantees that this key maps to
> a node that is different from all the nodes already used to store chunks
> of the same Large Object. But then again I know next to nothing about
> Infinispan's consistent hashing algorithm.
I wouldn't use a UUID. I'd just store (K, #) where # is the chunk number.
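
A minimal sketch of such a chunk key (ChunkKey is a hypothetical class;
its equals() and hashCode() are what let consistent hashing spread the
chunks across nodes):

public final class ChunkKey implements java.io.Serializable {
    private final Object key; // the large object's own key, K
    private final int chunk;  // the chunk number, #

    public ChunkKey(Object key, int chunk) {
        this.key = key;
        this.chunk = chunk;
    }

    public boolean equals(Object o) {
        if (!(o instanceof ChunkKey))
            return false;
        ChunkKey that = (ChunkKey) o;
        return chunk == that.chunk && key.equals(that.key);
    }

    public int hashCode() {
        return 31 * key.hashCode() + chunk;
    }
}

Each (K, #) pair hashes differently, so chunks get distributed without
resorting to UUIDs, and every chunk key can be reconstructed from the
metadata alone.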
> 4. Finally, the problem regarding eager locking and transactions
> mentioned in Manik's comment seems rather ... hairy. If we indeed forgo
> transactions, readers of a key just being written shouldn't be affected,
> provided we write the LargeObjectMetadata object only after all chunks
> have been written. But what about writers?
I would think a use case for this API would be streaming audio or
video, maybe even something like access logs. In that case, you would
want to read while you're writing, so locking shouldn't be imposed. I
would say, rely on the transaction manager to keep a consistent view.
If transactions aren't being used, the user might see some unexpected
behavior; the API could compensate for that.
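
For instance, a reader that wants a consistent view could lean on JTA
(a sketch; it assumes a javax.transaction.TransactionManager is
configured and reachable through Infinispan's AdvancedCache API):

TransactionManager tm = c.getAdvancedCache().getTransactionManager();
tm.begin();
try {
    InputStream in = c.largeObject(key).getInput();
    // ... consume the stream; the transaction keeps the view consistent ...
    in.close();
    tm.commit();
} catch (Exception e) {
    tm.rollback();
}

Without such a transaction, a concurrent reader might observe an
in-flight append, which is exactly why writing the metadata entry last
matters.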