Configuration visitor - Re: [JBoss JIRA] Commented: (ISPN-145) No transport and singleton store enabled should not be allowed
by Vladimir Blagojevic
Hi,
Galder and I talked about this offline. Time to involve you guys!
I just completed a visitor pattern for our configuration objects. The
visitor is passed from the root of the configuration - the
InfinispanConfiguration object.
InfinispanConfiguration class has a new method:
public void accept(ConfigurationBeanVisitor v)
How do we want to integrate this visitor into existing structure?
1) We add a new factory method to InfinispanConfiguration with an
additional ConfigurationBeanVisitor parameter
2) We leave everything as is and, if there is a need to pass some visitor,
we pass it to the InfinispanConfiguration instance directly (from
DefaultCacheManager)
DefaultCacheManager will pass a ValidationVisitor to
InfinispanConfiguration, which will verify the configuration semantically.
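To give an idea of what the ValidationVisitor could look like for the ISPN-145 case, here is a rough sketch. The visit-method names and bean accessors below are placeholders I made up for illustration, not the final API, and the sketch assumes the transport element is visited before the cache store elements:
// Rough sketch only - callback and accessor names are placeholders.
public class ValidationVisitor implements ConfigurationBeanVisitor {

   private boolean transportConfigured;

   // Assumed callback for the global/transport part of the configuration tree.
   public void visitGlobalConfiguration(GlobalConfiguration bean) {
      transportConfigured = bean.getTransportClass() != null;
   }

   // Assumed callback for the singleton store element of a cache loader.
   public void visitSingletonStoreConfig(SingletonStoreConfig bean) {
      if (bean.isSingletonStoreEnabled() && !transportConfigured) {
         throw new ConfigurationException(
               "Singleton store is enabled but no transport is configured (ISPN-145)");
      }
   }
}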
Regards,
Vladimir
On 09-09-09 10:19 AM, Galder Zamarreno wrote:
> Good idea :)
>
> On 09/09/2009 04:13 PM, Vladimir Blagojevic wrote:
>> Yeah,
>>
>> I was thinking that we can make a visitor for the configuration tree and
>> then you can do verification of any node and other things as well. Use
>> cases will come up in the future for sure.
>>
>> Cheers
>>
>>
>>
>> On 09-09-09 3:29 AM, Galder Zamarreno (JIRA) wrote:
>>> [
>>> https://jira.jboss.org/jira/browse/ISPN-145?page=com.atlassian.jira.plugi...
>>>
>>> ]
>>>
>>> Galder Zamarreno commented on ISPN-145:
>>> ---------------------------------------
>>>
>>> Not sure I understand what you mean by generic though. Do you mean any
>>> component having a validation step of some sort?
>>>
>>> Thanks for taking this on :)
>>>
>>>> No transport and singleton store enabled should not be allowed
>>>> --------------------------------------------------------------
>>>>
>>>> Key: ISPN-145
>>>> URL: https://jira.jboss.org/jira/browse/ISPN-145
>>>> Project: Infinispan
>>>> Issue Type: Bug
>>>> Components: Loaders and Stores
>>>> Affects Versions: 4.0.0.ALPHA6
>>>> Reporter: Galder Zamarreno
>>>> Assignee: Vladimir Blagojevic
>>>> Priority: Minor
>>>> Fix For: 4.0.0.CR1
>>>>
>>>>
>>>> Throw configuration exception if singleton store configured without
>>>> transport having been configured.
>>>> It makes no sense to have singleton store enabled when there's no
>>>> transport.
>>
>
Defining new commands in modules
by Manik Surtani
So this is an extension to the discussion around a GenericCommand that has been going around. IMO a GenericCommand is a big -1 from me for various reasons - the whole purpose of the command pattern is so we have strongly typed and unit testable commands. This will help the ongoing work by Mircea, Sanne and Israel on various modules that need to define custom commands.
I proposed the following solution to Mircea earlier today, I'll repeat here for you guys to discuss. Note that this is a *half baked* solution and needs more thought! :-)
* If a module needs to define custom commands, it should define its own ReplicableCommand implementations in its own module.
* It should define a sub-interface to Visitor (MyModuleVisitor) with additional methods to handle the new commands (a rough sketch follows below this list)
* Interceptors defined in this module should extend CommandInterceptor AND implement MyModuleVisitor
* These new commands can be created directly, or via a new CommandFactory specially for these commands.
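To make the second bullet concrete, the module-specific visitor could be as small as this (just a sketch; visitMyModuleCommand simply mirrors the example command further down):
// Sketch: a module-specific visitor adds one method per custom command.
public interface MyModuleVisitor extends Visitor {
   Object visitMyModuleCommand(InvocationContext ctx, MyModuleCommand command) throws Throwable;
}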
Now for the un-finished bits. :)
* How does RemoteCommandFactory instantiate these new commands? The module should have a way of registering additional command IDs with RemoteCommandFactory.fromStream(). See
http://fisheye.jboss.org/browse/Infinispan/branches/4.2.x/core/src/main/j...
Perhaps RemoteCommandFactory.fromStream() should look up the ID in a map of command creator instances, and each module could register more of these with the RemoteCommandFactory? (A rough sketch of this idea follows after the code example below.)
* How do interceptors defined in the core module handle commands they aren't aware of? handleDefault()? Or should we define a new handleUnknown() method in Visitor for this case, which would default to a no-op in AbstractVisitor? E.g., in a module-specific command such as MyModuleCommand, I would implement:
class MyModuleCommand implements ReplicableCommand {
   // getCommandId(), getParameters(), etc. omitted for brevity.
   public Object acceptVisitor(InvocationContext ctx, Visitor visitor) throws Throwable {
      // Core interceptors only know the base Visitor; dispatch to the
      // module-specific visitor when available, otherwise fall back.
      if (visitor instanceof MyModuleVisitor) {
         return ((MyModuleVisitor) visitor).visitMyModuleCommand(ctx, this);
      } else {
         return visitor.handleUnknown(ctx, this);
      }
   }
}
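And a half-baked sketch of the registration idea from the first unfinished bit above. ModuleCommandFactory and registerCommand() are names I've just made up; createCoreCommand() stands in for whatever RemoteCommandFactory.fromStream() does today:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical SPI a module would implement for its own command IDs.
interface ModuleCommandFactory {
   ReplicableCommand fromStream(byte commandId, Object[] parameters);
}

class RemoteCommandFactory {
   // command ID -> factory, registered by the owning module at startup
   private final Map<Byte, ModuleCommandFactory> moduleFactories =
         new ConcurrentHashMap<Byte, ModuleCommandFactory>();

   public void registerCommand(byte commandId, ModuleCommandFactory factory) {
      moduleFactories.put(commandId, factory);
   }

   public ReplicableCommand fromStream(byte id, Object[] parameters) {
      ModuleCommandFactory factory = moduleFactories.get(id);
      if (factory != null)
         return factory.fromStream(id, parameters); // module-defined command
      return createCoreCommand(id, parameters);     // existing core behaviour
   }

   private ReplicableCommand createCoreCommand(byte id, Object[] parameters) {
      // placeholder for the current core command creation logic
      throw new IllegalArgumentException("Unknown command ID " + id);
   }
}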
Cheers
Manik
PS: There is no JIRA for this. If we like this approach and it works, I suggest we create a JIRA and implement it for 4.2. The impl should be simple once we resolve the outstanding bits.
--
Manik Surtani
manik(a)jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org
Primary-Backup replication scheme in Infinispan
by Sebastiano Peluso
Hi all,
first let us introduce ourselves, given that this is the first time we write
to this mailing list.
Our names are Sebastiano Peluso and Diego Didona, and we are working at
INESC-ID Lisbon in the context of the Cloud-TM project. Our work is
framed in the context of self-adaptive replication mechanisms (please
refer to the previous message by Paolo on this mailing list and to the
www.cloudtm.eu website for additional details on this project).
To date we have been working on developing a relatively simple
primary-backup (PB) replication mechanism, which we have integrated with
Infinispan 4.2 and 5.0. In this kind of scheme only one node (called the
primary) is allowed to process update transactions, whereas the
remaining nodes only process read-only transactions. This allows write
transactions to be handled very efficiently, as the primary does not
incur the cost of remote coordination schemes or of distributed
deadlocks, which can hamper performance at high contention with
two-phase commit schemes. With PB, in fact, the primary can serialize
transactions locally, and simply propagate the updates to the backups
for fault tolerance, as well as to allow them to process read-only
transactions on fresh data. On the other hand, the primary is
clearly prone to becoming the bottleneck, especially in large clusters
and write-intensive workloads.
Thus, this scheme does not really represent a replacement for the
default 2PC protocol, but rather an alternative approach that is
particularly attractive (as we will illustrate below with some
Radargun-based performance results) in small-scale clusters or, in
"elastic" cloud scenarios, during periods where the workload is either
read-dominated or not very intense. Because the scheme is particularly
efficient in these scenarios, adopting it would allow the number of
resources acquired from the cloud provider to be minimized during such
periods, with direct benefits in terms of cost reduction. In Cloud-TM we
aim at designing autonomic mechanisms that dynamically switch among
multiple replication mechanisms depending on the current workload
characteristics.
Before discussing the results of our preliminary benchmarking study, we
would like to briefly overview how we integrated this replication
mechanism within Infinispan. Any comment/feedback is highly appreciated.
First of all, we have defined a new command, PassiveReplicationCommand,
which is a subclass of PrepareCommand. We had to define a new command
because we had to design customized "visiting" methods for the
interceptors. Note that our protocol only affects the commit processing
of a transaction. Specifically, during the prepare phase, in the prepare
method of the TransactionXaAdapter class, if Primary Backup mode is
enabled, a PassiveReplicationCommand is built by the CommandsFactory and
passed to the invoke method of the invoker. The PassiveReplicationCommand
is then visited by all the interceptors in the chain, by means of the
visitPassiveReplicationCommand methods. We describe in more detail the
operations performed by the non-trivial interceptors:
-TxInterceptor: as in the 2PC protocol, if the context did not
originate locally, the chain of interceptors is invoked for each
modification stored in the PassiveReplicationCommand.
-LockingInterceptor: first the next interceptor is called, then
cleanupLocks is performed with the second parameter set to true (commit
the operations). This operation is always safe: on the primary it is
called only after the acks from all the slaves have been received (see
the ReplicationInterceptor below); on a slave there are no concurrent
conflicting writes, since these have already been serialized by the
locking scheme at the primary.
-ReplicationInterceptor: first it invokes the next interceptor; then, if
the predicate shouldInvokeRemoteTxCommand(ctx) is true,
rpcManager.broadcastRpcCommand(command, true, false) is invoked, which
replicates the modifications synchronously (waiting for an explicit ack
from all the backups).
As for the commit phase:
-Given that in Primary Backup the prepare phase acts as a commit too,
the commit method on a TransactionXaAdapter object simply returns in
this case.
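In (simplified) Java, the visiting method of the ReplicationInterceptor described above boils down to something like the following; this is an abbreviated sketch rather than the exact code in our branch:
// Abbreviated sketch of the ReplicationInterceptor visiting method.
public Object visitPassiveReplicationCommand(InvocationContext ctx,
      PassiveReplicationCommand command) throws Throwable {
   // Let the rest of the chain apply the modifications locally first.
   Object retVal = invokeNextInterceptor(ctx, command);
   if (shouldInvokeRemoteTxCommand(ctx)) {
      // Synchronous replication: wait for an explicit ack from every backup.
      rpcManager.broadcastRpcCommand(command, true, false);
   }
   return retVal;
}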
On the resulting extended Infinispan version, a subset of the
unit/functional tests was executed and passed successfully:
- commands.CommandIdUniquenessTest
- replication.SyncReplicatedAPITest
- replication.SyncReplImplicitLockingTest
- replication.SyncReplLockingTest
- replication.SyncReplTest
- replication.SyncCacheListenerTest
- replication.ReplicationExceptionTest
We have tested this solution using a customized version of Radargun. Our
customizations were aimed, first of all, at having each thread access
data within transactions, instead of executing single put/get
operations. In addition, every Stresser thread now accesses all of the
keys stored by Infinispan with uniform probability, thus generating
conflicts with a probability proportional to the number of concurrently
active threads and inversely proportional to the total number of keys
maintained.
As already hinted, our results highlight that, depending on the current
workload/number of nodes in the system, it is possible to identify
scenarios where the PB scheme significantly outperforms the current 2PC
scheme, and vice versa. Our experiments were performed on a cluster of
homogeneous 8-core (Xeon @ 2.16GHz) nodes interconnected via Gigabit
Ethernet and running a 64-bit Linux kernel, version 2.6.32-21-server. The
results were obtained by running 8 parallel Stresser threads per node
for 30 seconds, and letting the number of nodes vary from 2 to 10. In
2PC, each thread executes transactions which consist of 10 (get/put)
operations, with a 10% probability of generating a put operation.
With PB, the same kind of transactions are executed on the primary, but
the backups execute read-only transactions composed of 10 get
operations. This allows us to compare the maximum throughput of update
transactions provided by the two schemes, without excessively favoring
PB by keeping the backups totally idle.
The total write-transaction throughput exhibited by the cluster (i.e.
not the throughput per node) is shown in the attached plots, for
caches containing 1000, 10000 and 100000 keys. As already discussed, the
lower the number of keys, the higher the chance of contention and the
probability of aborts due to conflicts. In particular, with the 2PC
scheme the number of failed transactions steadily increases at high
contention, up to 6% (to the best of our understanding, mainly due
to distributed deadlocks). With PB, instead, the number of failed txs due
to contention is always 0.
Note that, currently, we are assuming that backup nodes do not generate
update transactions. In practice this corresponds to assuming the
presence of some load-balancing scheme which directs (all and only)
update transactions to the primary node, and read transactions to the
backups. If this assumption is violated (a put operation is generated on
a backup node), we simply throw a PassiveReplicationException at the
CacheDelegate level. This is probably suboptimal/undesirable in real
settings, as update transactions could (at least in principle!) be
transparently rerouted to the primary node in RPC style. Any suggestion
on how to implement such a redirection facility in a
transparent/non-intrusive manner would of course be highly appreciated! ;-)
To conclude, we are currently working on a statistical model that is
able to predict the best-suited replication scheme given the current
workload/number of machines, as well as on a mechanism to dynamically
switch from one replication scheme to the other.
We'll keep you posted on our progress!
Regards,
Sebastiano Peluso, Diego Didona
API stability policy for major vs minor vs patch releases
by Paul Ferraro
In the course of upgrading AS6 from 4.2.0.Final to 4.2.1.CR3, I came
across a NoSuchMethodError. Specifically, a couple of AS components
use DistributionManager.isLocal(String), which was dropped from the
public API sometime after 4.2.1.CR1.
While the fix is trivial enough (in the end I'll need to perform several
component releases to compensate), this raises the larger issue of an API
stability policy for major vs minor vs patch releases. At the very
least, I don't think it's wise to remove methods from a public interface
in a patch release. In my opinion, removing methods from a public API
should only happen across major releases, and even then, only after
being formally deprecated.
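To illustrate, the deprecate-then-remove cycle I have in mind would look something like this (purely illustrative, not a proposal for this specific method):
public interface DistributionManager {
   /**
    * @deprecated Mark the method in a minor release, point callers at the
    *             replacement, and only drop it in the next major release.
    */
   @Deprecated
   boolean isLocal(String key);
}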
Thoughts?
Paul
Using Dummy VS JBossTM for running tests
by Mircea Markus
Hi,
I've just run core's test suite with JBossTM and DummyTM.
The durations on my local machine are:
With dummy TM: 5:46.800s
With JBossTM: 5:23.671s
An explanation for JBossTM outperforming DummyTM, despite keeping a tx log on disk (which DummyTM doesn't), is the fact that it has 2PC optimisations for situations where only one resource is registered. In other words, if there is only one cache participating in the transaction (as in most of our tests), there won't be two RPCs but only one.
Shall we use JBossTM as the default TM for running local tests?
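For reference, pointing a test at JBossTM should be roughly a one-liner in the configuration; this is a sketch from memory, so double-check the setter name against the Configuration API:
// Sketch: use the JBossTM lookup instead of the dummy one.
Configuration cfg = new Configuration();
cfg.setTransactionManagerLookupClass(
      "org.infinispan.transaction.lookup.JBossStandaloneJTAManagerLookup");
EmbeddedCacheManager cm = new DefaultCacheManager(new GlobalConfiguration(), cfg);
Cache<Object, Object> cache = cm.getCache();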
Cheers,
Mircea
Placement for ISPN-905 API
by Galder Zamarreño
Hi all,
I'm working on https://issues.jboss.org/browse/ISPN-905 and just wanted to explain the interface where these methods will be added and why, in case anyone disagrees:
I had initially thought of adding them to CacheContainer, but doing that implies that they have a meaning for the RemoteCacheManager.
So, could a remote client call removeCache("x"), taking into account that it both removes the contents of the cache and of any cache loader attached to it, and stops the cache? Remote clients can currently clear caches, but cannot stop them, so it would not make sense to do this right now given the current remote client capabilities. Also, we agreed not to allow remote clients to stop things, in order to reduce potential security issues.
On top of the logical argument, there's a problem of API incompatibility with RemoteCacheManager. If we wanted to add getCache(String, Boolean) to CacheContainer, this would clash with RemoteCacheManager's:
public <K, V> RemoteCache<K, V> getCache(boolean forceReturnValue);
So, that would have forced us to provide a different name in CacheContainer to avoid clashing with this method, and that in turn would have resulted in breaking symmetry with other similar APIs, such as the AtomicMapLookup API, where we define:
public static <MK, K, V> AtomicMap<K, V> getAtomicMap(Cache<MK, ?> cache, MK key, boolean createIfAbsent);
Bottom line, the method suggested by https://issues.jboss.org/browse/ISPN-905 will be included in EmbeddedCacheManager.
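For clarity, this is roughly the shape I have in mind on EmbeddedCacheManager; treat it as a sketch, since the final names/signatures are exactly what's up for discussion:
public interface EmbeddedCacheManager extends CacheContainer {
   // Returns the named cache, optionally creating it from the default
   // configuration if it has not been defined yet.
   <K, V> Cache<K, V> getCache(String cacheName, boolean createIfAbsent);

   // Stops the named cache and removes its contents, including anything
   // held in an attached cache store.
   void removeCache(String cacheName);
}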
If anyone has any different opinions, post reply :)
Cheers,
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
fancy a green test suite?
by Mircea Markus
Hi,
Manik is currently looking into the test suite failures on core.
Once these failures are fixed, I suggest sticking with the following:
- if you modify anything in core, make sure that the suite is green before committing
- if you notice any intermittent failure and have an idea about why it happens, fix it. If not, raise a bug and assign it to me - I'll take responsibility for core's suite and make sure these intermittent failures are fixed.
Cheers,
Mircea
ISPN-83 (FLUSH removal) causing state transfer timeouts
by Paul Ferraro
After testing AS6 with Infinispan 4.2.1.CR3, I started seeing a slew of
testsuite failures all due to state transfer timeouts. These are
REPL_SYNC tests, with state transfer enabled:
e.g.
2011-02-23 10:59:46,771 DEBUG [org.infinispan.statetransfer.StateTransferManagerImpl] Initiating state transfer process
2011-02-23 10:59:46,771 INFO [org.infinispan.remoting.rpc.RpcManagerImpl] Trying to fetch state from T510-29294
2011-02-23 10:59:46,773 DEBUG [org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER] GET_STATE: asking T510-29294 for state, passing down a SUSPEND_STABLE event, timeout=10000
2011-02-23 10:59:46,773 DEBUG [org.jgroups.protocols.pbcast.STABLE] suspending message garbage collection
2011-02-23 10:59:46,773 DEBUG [org.jgroups.protocols.pbcast.STABLE] resume task started, max_suspend_time=11000
2011-02-23 10:59:46,778 DEBUG [org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER] StateProviderThreadSpawner listening at 192.168.0.3:37173...
2011-02-23 10:59:46,778 DEBUG [org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER] Responding to state requester T510-23375 with address 192.168.0.3:37173 and digest T510-23375: [0 : 0 (0)], T510-29294: [0 : 1 (1)]
2011-02-23 10:59:46,781 DEBUG [org.jgroups.protocols.pbcast.NAKACK]
[overwriteDigest()]
existing digest: T510-23375: [0 : 0 (0)], T510-29294: [0 : 1 (1)]
new digest: T510-29294: [0 : 1 (1)], T510-23375: [0 : 0 (0)]
resulting digest: T510-23375: [0 : 0 (0)], T510-29294: [0 : 1 (1)]
2011-02-23 10:59:46,781 DEBUG [org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER] Connecting to state provider /192.168.0.3:37173, original buffer size was 43690 and was reset to 8192
2011-02-23 10:59:46,782 DEBUG [org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER] Connected to state provider, my end of the socket is /192.168.0.3:38305 passing inputstream up...
2011-02-23 10:59:46,783 DEBUG [org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER] StateProviderThreadSpawner listening at 192.168.0.3:37173...
2011-02-23 10:59:46,783 DEBUG [org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER] Accepted request for state transfer from /192.168.0.3:38305 handing of to PooledExecutor thread
2011-02-23 10:59:46,786 DEBUG [org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER] Running on Thread[STREAMING_STATE_TRANSFER-sender-1,T510-29294,5,JGroups]. Accepted request for state transfer from /192.168.0.3:38305, original buffer size was 25434 and was reset to 8192, passing outputstream up...
2011-02-23 10:59:46,789 DEBUG [org.infinispan.statetransfer.StateTransferManagerImpl] Applying state
2011-02-23 10:59:46,789 DEBUG [org.infinispan.statetransfer.StateTransferManagerImpl] Generating state. Can provide? true
2011-02-23 10:59:46,822 DEBUG [org.infinispan.statetransfer.StateTransferManagerImpl] Writing 0 StoredEntries to stream
2011-02-23 10:59:46,833 DEBUG [org.infinispan.statetransfer.StateTransferManagerImpl] State generated, closing object stream
2011-02-23 10:59:46,834 DEBUG [org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER] State writer is closing the socket
2011-02-23 10:59:56,774 DEBUG [org.jgroups.protocols.pbcast.STABLE] resuming message garbage collection
2011-02-23 10:59:56,834 WARN [org.infinispan.remoting.rpc.RpcManagerImpl] Could not find available peer for state, backing off and retrying
2011-02-23 10:59:56,837 ERROR [org.infinispan.remoting.transport.jgroups.JGroupsTransport] Caught while requesting or applying state
org.infinispan.statetransfer.StateTransferException: org.infinispan.util.concurrent.TimeoutException: Timed out after 10 seconds waiting for a response from T510-29294
at org.infinispan.statetransfer.StateTransferManagerImpl.applyState(StateTransferManagerImpl.java:332)
at org.infinispan.remoting.InboundInvocationHandlerImpl.applyState(InboundInvocationHandlerImpl.java:199)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.setState(JGroupsTransport.java:582)
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.handleUpEvent(MessageDispatcher.java:712)
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:772)
at org.jgroups.JChannel.up(JChannel.java:1422)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:954)
at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.connectToStateProvider(STREAMING_STATE_TRANSFER.java:525)
at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.handleStateRsp(STREAMING_STATE_TRANSFER.java:464)
at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.up(STREAMING_STATE_TRANSFER.java:225)
at org.jgroups.protocols.FRAG2.up(FRAG2.java:190)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:419)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:401)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:888)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:234)
at org.jgroups.protocols.UNICAST.handleDataReceived(UNICAST.java:615)
at org.jgroups.protocols.UNICAST.up(UNICAST.java:295)
at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:707)
at org.jgroups.protocols.BARRIER.up(BARRIER.java:120)
at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:169)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:269)
at org.jgroups.protocols.MERGE2.up(MERGE2.java:210)
at org.jgroups.protocols.Discovery.up(Discovery.java:292)
at org.jgroups.protocols.PING.up(PING.java:67)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1093)
at org.jgroups.protocols.TP.access$100(TP.java:56)
at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1633)
at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1615)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:679)
Caused by: org.infinispan.util.concurrent.TimeoutException: Timed out after 10 seconds waiting for a response from T510-29294
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher$ReplicationTask.call(CommandAwareRpcDispatcher.java:267)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommands(CommandAwareRpcDispatcher.java:116)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:394)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:102)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:126)
at org.infinispan.statetransfer.StateTransferManagerImpl.mimicPartialFlushViaRPC(StateTransferManagerImpl.java:300)
at org.infinispan.statetransfer.StateTransferManagerImpl.applyTransactionLog(StateTransferManagerImpl.java:253)
at org.infinispan.statetransfer.StateTransferManagerImpl.applyState(StateTransferManagerImpl.java:321)
... 30 more
The problematic tests use the default JGroups configuration
(jgroups-udp.xml from the infinispan-core jar). If I override the
default JGroups config and add FLUSH, these tests pass.
Note that there is no cache activity on the 1st cache prior to the time
the 2nd node joins and starts its cache.
Does the cache need to be configured differently if FLUSH is not used?
Thoughts?
Fwd: [hibernate-dev] Spring Cache Abstraction
by Galder Zamarreño
Interesting discussion going on in the Hibernate-dev list about what Spring is doing caching-wise:
This is rooted in: http://blog.springsource.com/2011/02/23/spring-3-1-m1-caching/
Begin forwarded message:
> From: Emmanuel Bernard <emmanuel(a)hibernate.org>
> Date: February 23, 2011 5:37:14 PM GMT+01:00
> To: Marc Schipperheyn <m.schipperheyn(a)gmail.com>
> Cc: hibernate-dev(a)lists.jboss.org
> Subject: Re: [hibernate-dev] Spring Cache Abstraction
>
> I kinda like the declarative aspect of it, though there are a lot of nasty strings all around :). The rest is not really new, so all the old-school issues apply.
>
> As usual with higher-level caching, you need to handle data eviction manually, which is likely to be a source of bugs.
> They also don't say whether they share the same entity instance or serialize/deserialize, but unless these are read-only results (i.e. not modifiable by the user), you'd better be careful. Likewise, make sure you have a multi-threaded implementation of the data structure returned.
> They don't seem to discuss transactions or clustering. I imagine they let the underlying cache provider do the work.
>
> More specifically with Hibernate, if you use such a facility within an open-session-in-view pattern, expect weirdness to happen, as method-level cached objects are not going to be managed entities:
> - not updated transparently
> - the same instance might be reattached to several sessions => badaboom
>
> A good summary of where caching can happen in an application is available here
> http://docs.jboss.org/seam/2.2.1.Final/reference/en-US/html_single/#cache
>
> Due to all the potential issues, I tend to favor HTML page fragment caching:
> - it's higher up the food chain hence likely more efficient
> - many of the issues declared above don't apply (not all disappear though)
>
> But it is surely a useful tool for niche requirements that can live with such constraints.
>
> Emmanuel
>
> On 23 févr. 2011, at 17:04, Marc Schipperheyn wrote:
>
>> It would be interesting to have the Hibernate team comment/blog on the new
>> Spring Cache Abstraction functionality and how it relates to Hibernate-managed
>> entities. Perhaps some strategies, etc. It's very attractive to just
>> cache entities instead of caching entity values with the second-level
>> cache.
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache