[ISPN-1797] MongoDB CacheStore
by Guillaume SCHEIBEL
Hi everyone,
Finally, I made the last touch (for the moment, actually :) ) to the MongoDB
cache store, and the pull request #1473 has been updated.
Hope it's better now, let me know what you think about it.
Cheers,
Guillaume
11 years, 6 months
Fwd: [infinispan] ISPN-2962 Fix thread leaks in the core test suite (#1736)
by Galder Zamarreño
Hey Jonathan,
How's it going?
We're seeing the following Arjuna threads still running after our Infinispan testsuite completes, and we wondered whether:
a) there's a way to disable them, or
b) there's a way to shut them down when Infinispan caches stop.
Cheers,
Begin forwarded message:
> From: Adrian Nistor <notifications(a)github.com>
> Subject: Re: [infinispan] ISPN-2962 Fix thread leaks in the core test suite (#1736)
> Date: March 27, 2013 2:09:06 PM GMT
> To: infinispan/infinispan <infinispan(a)noreply.github.com>
> Reply-To: infinispan/infinispan <reply+i-12400113-692fb20cc01b01d67beffc2275beeaf015f0361a-50187(a)reply.github.com>
>
> I noticed 3 strange threads that still run after the suite:
> com.arjuna.ats.internal.arjuna.recovery.Listener, com.arjuna.ats.internal.arjuna.coordinator.ReaperThread and com.arjuna.ats.internal.arjuna.coordinator.ReaperWorkerThread.
>
> Not sure what we can do about them or if they matter to us.
>
> —
> Reply to this email directly or view it on GitHub.
>
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org
11 years, 7 months
Remote command smarter dispatcher (merge ISPN-2808 and ISPN-2849)
by Pedro Ruivo
Hi all,
To solve ISPN-2808 (avoid blocking JGroups threads so that request
responses can be delivered), I've created another thread pool to move
the possibly blocking commands to (i.e. the commands that may block until
some state is reached).
Problem description:
With this solution, the new thread pool has to be large in order to
handle the remote commands without deadlocks. The problem is that all
the threads can end up blocked, while the command that would unblock
them is still sitting in the queue.
Example: a bunch of commands are blocked waiting for a new topology ID,
and the command that will increment the topology ID is stuck in the
thread pool queue.
Solution:
Use a smarter command dispatcher, i.e., keep the command in the queue
until we are sure that it will not wait for other commands. I've already
implemented a kind of executor service (ConditionalExecutorService,
in the ISPN-2635 and ISPN-2636 branches, Total Order stuff) that only puts
the Runnable (more precisely, a new interface called ConditionalRunnable)
in the thread pool when it is ready to be processed. Creative guys, it
may need a better name :)
The ConditionalRunnable has a new method (boolean isReady()) that should
return true when the runnable will not block.
Example of how to apply this to ISPN-2808:
Most of the commands wait for a particular topology ID and/or for lock
acquisition. In this way, the isReady() implementation can be something
like:
isReady():
    return commandTopologyId <= currentTopologyId
        && (for all keys: lock(key).tryLock() succeeds)
With this, I believe we can keep the number of threads low and avoid the
thread deadlocks.
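A minimal sketch of what such a conditional dispatcher could look like (the names ConditionalRunnable and ConditionalExecutorService come from the email, but the implementation below is purely illustrative, not the actual ISPN-2635/2636 code):

```java
import java.util.ArrayDeque;
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.Executor;

// A task exposes isReady(); the dispatcher only hands it to the real
// thread pool once it is guaranteed not to block.
interface ConditionalRunnable extends Runnable {
    boolean isReady();
}

class ConditionalExecutorService {
    private final Queue<ConditionalRunnable> pending = new ArrayDeque<>();
    private final Executor delegate;

    ConditionalExecutorService(Executor delegate) {
        this.delegate = delegate;
    }

    synchronized void execute(ConditionalRunnable task) {
        pending.add(task);
        drain();
    }

    // Call whenever state changes (e.g. the topology ID advances):
    // every task that has become ready is moved to the real pool.
    synchronized void drain() {
        for (Iterator<ConditionalRunnable> it = pending.iterator(); it.hasNext(); ) {
            ConditionalRunnable task = it.next();
            if (task.isReady()) {
                it.remove();
                delegate.execute(task);
            }
        }
    }
}
```

The key property is that a task never occupies a pool thread while it would block, so a small pool cannot deadlock on tasks waiting for each other.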
Now, I have two possible implementations:
1) put a reference to the StateTransferManager and/or LockManager in the
commands, and invoke the methods directly (a little dirty)
2) add a new method to CommandInterceptor like: boolean
preProcess<command>(Command, InvocationContext). Each interceptor will
check whether the command would block on it (returning false) or not
(invoking the next interceptor). For example, the StateTransferInterceptor
returns false immediately if the commandTopologyId is higher than the
currentTopologyId, and the *LockingInterceptor will return false if it
cannot acquire some lock.
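Option 2 could be sketched roughly like this; the types below are simplified stand-ins for the interceptor chain, not the real Infinispan CommandInterceptor API:

```java
// Simplified stand-ins for the preProcess idea in option 2.
interface Command {}

class TopologyCommand implements Command {
    final int commandTopologyId;
    TopologyCommand(int id) { commandTopologyId = id; }
}

// Each interceptor returns false if the command would block on it,
// otherwise it delegates to the next interceptor in the chain.
abstract class PreProcessingInterceptor {
    PreProcessingInterceptor next;
    boolean preProcess(Command cmd) {
        return next == null || next.preProcess(cmd);
    }
}

// StateTransferInterceptor-style check: a command carrying a topology ID
// newer than the current one would block, so it stays in the queue.
class StateTransferCheck extends PreProcessingInterceptor {
    volatile int currentTopologyId;
    StateTransferCheck(int id) { currentTopologyId = id; }
    @Override boolean preProcess(Command cmd) {
        if (cmd instanceof TopologyCommand
                && ((TopologyCommand) cmd).commandTopologyId > currentTopologyId) {
            return false; // would wait for a newer topology: keep it queued
        }
        return super.preProcess(cmd);
    }
}
```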
Any other suggestions? If I was not clear let me know.
Thanks.
Cheers,
Pedro
11 years, 7 months
Changing configuration at runtime
by Tristan Tarrant
Hi all,
the current list of configuration parameters which can be tweaked at
runtime is quite small:
Global
Nothing
Cache
AsyncStoreConfiguration.flushLockTimeout(long l)
AsyncStoreConfiguration.shutdownTimeout(long l)
LockingConfiguration.lockAcquisitionTimeout(long lockAcquisitionTimeout)
StateTransferConfiguration.timeout(long l)
StoreAsBinaryConfiguration.enabled(boolean enabled)
SyncConfiguration.replTimeout(long l)
TransactionConfiguration.cacheStopTimeout(long l)
TransactionConfiguration.lockingMode(LockingMode lockingMode)
TransactionConfiguration.syncCommitPhase(boolean b)
TransactionConfiguration.syncRollbackPhase(boolean b)
TransactionConfiguration.transactionManagerLookup(TransactionManagerLookup
transactionManagerLookup)
TransactionConfiguration.useEagerLocking(boolean b)
We're probably being over-conservative here, and it would be nice to be
able to tune more parameters, such as async queue size, eviction size,
default expiration, thread pool configuration and whatever else can be
done without too much effort.
Could you all please look at your respective areas of competence and see
if we can expose any more tunable parameters?
Obviously these need to be exposed via JMX/RHQ as well.
Thanks
Tristan
11 years, 7 months
versioning - illegal state
by Ales Justin
Any idea how this can happen?
01:06:36,722 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (http-/127.0.0.1:8080-2) ISPN000136: Execution error: java.lang.IllegalStateException: Entries cannot have null versions!
at org.infinispan.container.entries.ClusteredRepeatableReadEntry.performWriteSkewCheck(ClusteredRepeatableReadEntry.java:59) [infinispan-core-5.2.1.Final.jar:5.2.1.Final]
at org.infinispan.transaction.WriteSkewHelper.performWriteSkewCheckAndReturnNewVersions(WriteSkewHelper.java:89) [infinispan-core-5.2.1.Final.jar:5.2.1.Final]
at org.infinispan.interceptors.locking.ClusteringDependentLogic$DistributionLogic.createNewVersionsAndCheckForWriteSkews(ClusteringDependentLogic.java:291) [infinispan-core-5.2.1.Final.jar:5.2.1.Final]
at org.infinispan.interceptors.VersionedEntryWrappingInterceptor.visitPrepareCommand(VersionedEntryWrappingInterceptor.java:65) [infinispan-core-5.2.1.Final.jar:5.2.1.Final]
---
I'm adding my own interceptor here:
* https://github.com/capedwarf/capedwarf-blue/blob/master/datastore/src/mai...
just before VersionedEntryWrappingInterceptor.
-Ales
11 years, 7 months
Bye bye wrappers, ComparingConcurrentHashMapv8 is here (ISPN-2281)
by Galder Zamarreño
Hi all,
A heads up on what is going on with https://issues.jboss.org/browse/ISPN-2281
While discussing this, Tristan and I came to the conclusion that we could avoid the need to create some wrappers required to fulfill requirements in this JIRA, and as a side effect, reduce the memory consumption of Infinispan servers, if we could have internal data containers based on concurrent hash maps that took a custom function for equals/hashCode…etc. By doing that, you could effectively have **byte[] keys and values for maps**.
By doing that, you avoid creating wrappers (yippee!) for keys (bye bye ByteArrayKey), and combined with a better way to pass metadata into Infinispan Caches (i.e. version) that is stored within the internal cache entries, you avoid wrapper values too! (bye bye CacheValue).
Doing the latter was relatively simple (I have this stashed), but having a CHM that could take a byte[] as key wasn't that easy, since we can't change JDK CHM.
This is why I've created a new CHM, based on CHMv8, called ComparingConcurrentHashMapv8 (thx Tristan for the name!). The work for this can be seen in: https://github.com/galderz/infinispan/commit/351e29d327d163ca8e941edf873f...
I'm sending it here so that I can get feedback early on. I've added some tests as well that verify that ComparingConcurrentHashMapv8, with byte[] keys and values, works as expected, and checks that the expectations are opposite with JDK CHM. It also tests new function-based methods.
To make it easier to track changes as the original CHMv8 evolves, I've marked all changes with a marker comment that should make it easy to apply the same changes to new CHMv8 versions. Plus, with the tests I've added, it can easily be seen whether it works as expected or not.
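A minimal, hypothetical sketch of the comparing-function idea (the interface and class names here are illustrative; the real ComparingConcurrentHashMapv8 bakes the function into the map itself rather than allocating a wrapper per key as this demo does):

```java
import java.util.Arrays;

// Hypothetical comparing function: the map delegates equality/hashCode
// to this function instead of calling the keys' own methods.
interface Comparing<K> {
    boolean equals(K a, K b);
    int hashCode(K k);
}

// Demo wrapper so a plain HashMap behaves as if it used the Comparing
// function; ComparingConcurrentHashMapv8 avoids this allocation.
final class ComparingKey<K> {
    final K key;
    final Comparing<K> fn;
    ComparingKey(K key, Comparing<K> fn) { this.key = key; this.fn = fn; }
    @Override public boolean equals(Object o) {
        return o instanceof ComparingKey && fn.equals(key, ((ComparingKey<K>) o).key);
    }
    @Override public int hashCode() { return fn.hashCode(key); }
}

// Content-based equality for byte[] keys, which by default only compare
// by identity (so a plain map "loses" entries keyed by equal-content arrays).
class ByteArrayComparing implements Comparing<byte[]> {
    public boolean equals(byte[] a, byte[] b) { return Arrays.equals(a, b); }
    public int hashCode(byte[] k) { return Arrays.hashCode(k); }
}
```

This mirrors the test expectation described above: lookups by content succeed with the comparing function and fail with raw byte[] keys in a JDK map.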
Two important TODOs, which will be most likely separated into separate JIRAs:
1. Note that TreeBin has not been modified to use custom equals/hashCode functions. That is because I need to implement a way to compare byte arrays, i.e. provide the equivalent logic to Comparable.compare().
2. Compare the memory consumption of a CHMv8 with wrapper classes for byte arrays versus ComparingConcurrentHashMapv8<byte[], byte[]>. I'll do that once it's closer to CR stages. Right now, fulfilling the requirements in ISPN-2281 is the higher priority.
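For TODO 1, the Comparable.compare() equivalent for byte arrays could be a plain lexicographic comparison treating bytes as unsigned, e.g. (a sketch, not the committed code):

```java
// Lexicographic byte[] comparison: compare element by element as unsigned
// bytes, and break ties by length (shorter array sorts first).
class ByteArrayCompare {
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int cmp = Integer.compare(a[i] & 0xFF, b[i] & 0xFF);
            if (cmp != 0) return cmp;
        }
        return Integer.compare(a.length, b.length);
    }
}
```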
Finally, I need to do the same thing with our BoundedConcurrentHashMap, IOW, provide a way to do comparison based on custom equals/hashCode. That's gonna be my next task, before I get to transform the Infinispan servers to take a type directly and avoid relying on the ByteArrayKey or CacheValue wrappers.
IOW, you'll be able to say: create an Infinispan server that has String as key and a value of type X, where X is the actual data type, no metadata!! The metadata (version, encoding, whatever is required to fulfill the compatibility reqs in ISPN-2281) will be passed as part of the put/replace, etc. (I will email this around when in place.)
Cheers,
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org
11 years, 7 months
CacheLoaders, Distribution mode and Interceptors
by James Aley
Hey all,
<OT>
Seeing as this is my first post, I wanted to just quickly thank you
all for Infinispan. So far I'm really enjoying working with it - great
product!
</OT>
I'm using the InfinispanDirectory for a Lucene project at the moment.
We use Lucene directly to build a search product, which has high read
requirements and likely very large indexes. I'm hoping to make use of
a distribution mode cache to keep the whole index in memory across a
cluster of machines (the index will be too big for one server).
The problem I'm having is that after loading a filesystem-based Lucene
directory into InfinispanDirectory via LuceneCacheLoader, no nodes are
retrieving data from the cluster - they instead look up keys in their
local CacheLoaders, which involves lots of disk I/O and is very slow.
I was hoping to just use the CacheLoader to initialize the caches, but
from there on read only from RAM (and network, of course). Is this
supported? Maybe I've misunderstood the purpose of the CacheLoader?
To explain my observations in a little more detail:
* I start a cluster of two servers, using [1] as the cache config.
Both have a local copy of the Lucene index that will be loaded into
the InfinispanDirectory via the loader. This is a test configuration,
where I've set numOwners=1 so that I only need two servers for
distribution to happen.
* Upon startup, things look good. I see the memory usage of the JVM
reflect a pretty near 50/50 split of the data across both servers.
Logging indicates both servers are in the cluster view, all seems
fine.
* When I send a search query to either one of the nodes, I notice the following:
- iotop shows huge (~100MB/s) disk I/O on that node alone from the
JVM process.
- no change in network activity between nodes (~300b/s, same as when idle)
- memory usage on the node running the query increases dramatically,
and stays higher even after the query is finished.
So it seemed to me like each node was favouring use of the CacheLoader
to retrieve keys that are not in memory, instead of using the cluster.
Does that seem reasonable? Is this the expected behaviour?
I started to investigate this by turning on trace logging, and this
made me think that perhaps the cause was that the CacheLoader's interceptor
is higher priority in the chain than the distribution interceptor?
I'm not at all familiar with the design in any level of detail - just
what I picked up in the last 24 hours from browsing the code, so I
could easily be way off. I've attached the log snippets I thought
relevant in [2].
Any advice offered much appreciated.
Thanks!
James.
[1] https://www.refheap.com/paste/12531
[2] https://www.refheap.com/paste/12543
11 years, 7 months
Infinispan testsuite: state update
by Sanne Grinovero
Thanks to the latest fixes from Dan I'm now actually able to complete the
test run; I've run it a few times and it doesn't seem to hang anymore!
Also the results are encouraging, so here I am with renewed motivation
to stress you all to cover the last mile :)
Still some tests are failing; I guess they are all well known and
being tracked?
I'll share the details, so you can have a look; besides the failing
tests, we also still have a little variation in the number of tests
being reported.
commit id of tested code: b8ad1c72e6f5d2c59050299e3c4fa9b6c127d606
Results :
Failed tests:
testStateTransfer(org.infinispan.container.versioning.VersionedReplStateTransferTest):
Could not commit implicit transaction
Tests run: 2586, Failures: 1, Errors: 0, Skipped: 0
Results :
Failed tests:
testStateTransfer(org.infinispan.container.versioning.VersionedReplStateTransferTest):
Could not commit implicit transaction
Tests run: 2585, Failures: 1, Errors: 0, Skipped: 0
Results :
Failed tests:
testStateTransfer(org.infinispan.container.versioning.VersionedReplStateTransferTest):
Could not commit implicit transaction
testJoinAndLeave(org.infinispan.statetransfer.DataRehashedEventTest):
expected [2] but found [1]
Tests run: 2586, Failures: 2, Errors: 0, Skipped: 0
Results :
Failed tests:
testStateTransfer(org.infinispan.container.versioning.VersionedReplStateTransferTest):
Could not commit implicit transaction
Tests run: 2586, Failures: 1, Errors: 0, Skipped: 0
Results :
Failed tests:
testStateTransfer(org.infinispan.container.versioning.VersionedReplStateTransferTest):
Could not commit implicit transaction
Tests run: 2586, Failures: 1, Errors: 0, Skipped: 0
Results :
Failed tests:
testStateTransfer(org.infinispan.container.versioning.VersionedReplStateTransferTest):
Could not commit implicit transaction
Tests run: 2585, Failures: 1, Errors: 0, Skipped: 0
Results :
Failed tests:
testStateTransfer(org.infinispan.container.versioning.VersionedReplStateTransferTest):
Could not commit implicit transaction
Tests run: 2584, Failures: 1, Errors: 0, Skipped: 0
Results :
Failed tests:
testPutRemove(org.infinispan.loaders.decorators.AsyncStoreTest):
testPutRemove-k-780 still in store
testStateTransfer(org.infinispan.container.versioning.VersionedReplStateTransferTest):
Could not commit implicit transaction
Tests run: 2586, Failures: 2, Errors: 0, Skipped: 0
Results :
Failed tests:
testStateTransfer(org.infinispan.container.versioning.VersionedReplStateTransferTest):
Could not commit implicit transaction
Tests run: 2586, Failures: 1, Errors: 0, Skipped: 0
Results :
Failed tests:
testWriteSkewWithOnlyPut(org.infinispan.api.mvcc.repeatable_read.WriteSkewTest):
org.infinispan.transaction.WriteSkewException: Detected write skew.
Tests run: 2585, Failures: 1, Errors: 0, Skipped: 0
In one case I got:
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-surefire-plugin:2.12.4:test
(default-test) on project infinispan-core: Execution default-test of
goal org.apache.maven.plugins:maven-surefire-plugin:2.12.4:test
failed: java.lang.reflect.InvocationTargetException; nested exception
is java.lang.reflect.InvocationTargetException: null:
NullPointerException -> [Help 1]
scrolling up I couldn't relate it to any failed test (there is no
report output); the only interesting stacktrace is the following; do
we really need to kill the Marshaller, preventing it from sending any
final commands (and, I guess, a clean shutdown)?
2013-03-02 17:31:34,854 ERROR [OutboundTransferTask]
(asyncTransportThread-1,MainOwnerChangesLockTest-NodeF) Failed to send
entries to node MainOwnerChangesLockTest-NodeE-53378 :
java.lang.RuntimeException: Failure to marshal argument(s)
org.infinispan.CacheException: java.lang.RuntimeException: Failure to
marshal argument(s)
at org.infinispan.util.Util.rewrapAsCacheException(Util.java:542)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:186)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:515)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:175)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:197)
at org.infinispan.statetransfer.OutboundTransferTask.sendEntries(OutboundTransferTask.java:257)
at org.infinispan.statetransfer.OutboundTransferTask.run(OutboundTransferTask.java:187)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.RuntimeException: Failure to marshal argument(s)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.marshallCall(CommandAwareRpcDispatcher.java:281)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:300)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:179)
... 14 more
Caused by: java.lang.InterruptedException: Cache manager is shutting
down, so type write externalizer for type=java.lang.Integer cannot be
resolved. Interruption being pushed up.
at org.infinispan.marshall.jboss.ExternalizerTable.getObjectWriter(ExternalizerTable.java:185)
at org.infinispan.marshall.jboss.JBossMarshaller$ExternalizerTableProxy.getObjectWriter(JBossMarshaller.java:159)
at org.jboss.marshalling.river.RiverMarshaller.doWriteObject(RiverMarshaller.java:139)
at org.jboss.marshalling.AbstractObjectOutput.writeObject(AbstractObjectOutput.java:62)
at org.jboss.marshalling.AbstractMarshaller.writeObject(AbstractMarshaller.java:119)
at org.infinispan.marshall.exts.ReplicableCommandExternalizer.writeCommandParameters(ReplicableCommandExternalizer.java:87)
at org.infinispan.marshall.exts.CacheRpcCommandExternalizer.marshallParameters(CacheRpcCommandExternalizer.java:128)
at org.infinispan.marshall.exts.CacheRpcCommandExternalizer.writeObject(CacheRpcCommandExternalizer.java:112)
at org.infinispan.marshall.exts.CacheRpcCommandExternalizer.writeObject(CacheRpcCommandExternalizer.java:73)
at org.infinispan.marshall.jboss.ExternalizerTable$ExternalizerAdapter.writeObject(ExternalizerTable.java:408)
at org.jboss.marshalling.river.RiverMarshaller.doWriteObject(RiverMarshaller.java:145)
at org.jboss.marshalling.AbstractObjectOutput.writeObject(AbstractObjectOutput.java:62)
at org.jboss.marshalling.AbstractMarshaller.writeObject(AbstractMarshaller.java:119)
at org.infinispan.marshall.jboss.AbstractJBossMarshaller.objectToObjectStream(AbstractJBossMarshaller.java:96)
at org.infinispan.marshall.VersionAwareMarshaller.objectToBuffer(VersionAwareMarshaller.java:92)
at org.infinispan.marshall.AbstractMarshaller.objectToBuffer(AbstractMarshaller.java:64)
at org.infinispan.marshall.AbstractDelegatingMarshaller.objectToBuffer(AbstractDelegatingMarshaller.java:109)
at org.infinispan.remoting.transport.jgroups.MarshallerAdapter.objectToBuffer(MarshallerAdapter.java:45)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.marshallCall(CommandAwareRpcDispatcher.java:279)
... 16 more
Caused by: an exception which occurred:
in object java.lang.Integer@6
in object org.infinispan.statetransfer.StateResponseCommand@58e3da4e
Also I occasionally saw this, which looks weird:
2013-03-02 17:44:00,457 ERROR [InvocationContextInterceptor]
(testng-LockOwnerCrashPessimisticTest) ISPN000136: Execution error
org.infinispan.util.concurrent.TimeoutException: Could not acquire
lock on MagicKey#null{167a468a@LockOwnerCrashPessimisticTest-NodeL-10361}
on behalf of transaction
GlobalTransaction:<LockOwnerCrashPessimisticTest-NodeJ-32491>:146134:local.
Lock is being held by null
at org.infinispan.interceptors.locking.AbstractTxLockingInterceptor.newTimeoutException(AbstractTxLockingInterceptor.java:217)
at org.infinispan.interceptors.locking.AbstractTxLockingInterceptor.waitForTransactionsToComplete(AbstractTxLockingInterceptor.java:210)
at org.infinispan.interceptors.locking.AbstractTxLockingInterceptor.lockKeyAndCheckOwnership(AbstractTxLockingInterceptor.java:174)
at org.infinispan.interceptors.locking.PessimisticLockingInterceptor.visitPutKeyValueCommand(PessimisticLockingInterceptor.java:122)
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118)
at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:132)
at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:62)
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118)
at org.infinispan.interceptors.TxInterceptor.enlistWriteAndInvokeNext(TxInterceptor.java:251)
at org.infinispan.interceptors.TxInterceptor.visitPutKeyValueCommand(TxInterceptor.java:191)
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118)
at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:132)
at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:62)
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118)
at org.infinispan.statetransfer.StateTransferInterceptor.handleTopologyAffectedCommand(StateTransferInterceptor.java:211)
at org.infinispan.statetransfer.StateTransferInterceptor.handleWriteCommand(StateTransferInterceptor.java:194)
at org.infinispan.statetransfer.StateTransferInterceptor.visitPutKeyValueCommand(StateTransferInterceptor.java:136)
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
I'm assuming these failures are all well known, so I didn't collect
details; please let me know if you'd like me to try to reproduce some of
these again and which details I'd need to share.. and of course trace
logs are not an option ;-)
[warn: since it always fails me at infinispan-core, I never tested the
other modules]
Cheers,
Sanne
11 years, 7 months
DefaultExecutorFactory and rejection policy
by Pedro Ruivo
Hi
I'm working on ISPN-2808 (https://issues.jboss.org/browse/ISPN-2808) and
I noticed that the DefaultExecutorFactory is creating the executor
service with an AbortPolicy.
Is there any particular reason for that?
In the new thread pool needed by ISPN-2808, I cannot have the messages
(i.e. the runnables) discarded, because that can cause some inconsistent
state and even block the whole cluster.
I have set a CallerRunsPolicy in my branch. If you see any issue with
this, let me know.
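For reference, here is the difference between the two JDK policies on a deliberately tiny pool (a sketch, not the DefaultExecutorFactory code): AbortPolicy throws and the task is lost, while CallerRunsPolicy executes the rejected task in the submitting thread, so nothing is discarded, at the cost of blocking the caller.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// One worker thread, a one-slot queue: the third submitted task is
// always rejected, and the handler decides what happens to it.
class Pools {
    static ThreadPoolExecutor pool(RejectedExecutionHandler handler) {
        return new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(1), handler);
    }
}
```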
Cheers,
Pedro
11 years, 7 months
Fwd: JBoss Logging and printing byte[]… WDYT?
by Galder Zamarreño
FYI, hoping for JBoss Logging to have better support for array display, particularly byte[].
Begin forwarded message:
> From: David Lloyd <dlloyd(a)redhat.com>
> Subject: Re: JBoss Logging and printing byte[]… WDYT?
> Date: March 19, 2013 11:45:35 PM GMT+01:00
> To: Galder Zamarreño <galder(a)redhat.com>
>
> It's a good idea. I would need to contemplate implementation options though.
>
> --
> - DML
>
>
> On Mar 19, 2013, at 4:56 PM, Galder Zamarreño <galder(a)redhat.com> wrote:
>
>> Hey,
>>
>> I've got an idea!
>>
>> What about JBoss Logging having an option to print arrays, given a particular function?
>>
>> I mean, Infinispan is moving towards being able to have byte[] as keys (by using a custom CHMv8 which takes a Comparing function that allows keys/values to be compared on equality/hashCode).
>>
>> The problem is that now, all the logging we do of keys needs to be wrapped so that keys, which are byte[] are printed properly. It'd be great if JBoss Logging had a way to be plugged with how byte[] should be displayed, since it might be specific to each use case.
>>
>> WDYT?
>>
>> Cheers,
>> --
>> Galder Zamarreño
>> galder(a)redhat.com
>> twitter.com/galderz
>>
>> Project Lead, Escalante
>> http://escalante.io
>>
>> Engineer, Infinispan
>> http://infinispan.org
>>
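The pluggable byte[] formatter described in the quoted message might look something like this (the class name and truncation behaviour are purely illustrative, not a JBoss Logging API):

```java
// Hypothetical formatter for byte[] log arguments: hex-dump up to a cap,
// instead of the default Object.toString() (which prints "[B@1a2b3c").
class ByteArrayFormatter {
    static String toHex(byte[] bytes, int maxBytes) {
        StringBuilder sb = new StringBuilder();
        int n = Math.min(bytes.length, maxBytes);
        for (int i = 0; i < n; i++) {
            sb.append(String.format("%02x", bytes[i] & 0xFF));
        }
        if (bytes.length > n) {
            sb.append("...(").append(bytes.length).append(" bytes)");
        }
        return sb.toString();
    }
}
```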
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
Project Lead, Escalante
http://escalante.io
Engineer, Infinispan
http://infinispan.org
11 years, 7 months