From dan.berindei at gmail.com Thu May 1 06:06:57 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 01 May 2014 10:09:57 +0003 Subject: [infinispan-dev] ABI compatibility of C++ client In-Reply-To: <535FA44E.9000002@redhat.com> References: <535A5481.5080406@redhat.com> <535F86BE.8000603@redhat.com> <535F8D7E.4010509@redhat.com> <535FA44E.9000002@redhat.com> Message-ID: <1398938817.4948.2@smtp.gmail.com> I'm not convinced that assuming that the user has the same compiler/c runtime/c++ library/exception handling is such a big deal. The users already have the sources; if we only ship a binary compiled with VS2013 and they want to compile their project with VS2012, they can recompile the C++ client themselves. On Tue, Apr 29, 2014 at 4:08 PM, Radim Vansa wrote: > On 04/29/2014 01:31 PM, Tristan Tarrant wrote: >> Yes, it is a sticky situation. We can definitely change the API now >> (actually this is the best moment to do this). >> I guess we need to provide "wrappers" of some kind. Any examples >> elsewhere ? > Are we looking for a third-party library for binary-compatible > containers? One example could be [1] although it requires GCC 4.7.2 / > VS2013 compiler. > > > What we need is to pass string, vector, set and map, all of them > constant. Shouldn't be a rocket science to convert them into flat > blobs > (I am not sure about the price of coding it ourselves/using 3rd party > library). > We don't have to use these weird containers in public API nor > internally. But we have to put the "compression" into public headers > (to > be called by user code) and "decompression" (if needed) into > internals. > > It could be a bit tricky to integrate this with marshalling which > happens anyway in user code, to make as few copies as possible (for > example for bulk methods which return std::map - we don't want > to > create std::map, convert it into blob_map array>, > then marshall into blob_map and finally convert to std::map V>). 
> > > And we should not inherit the public classes from Handle which uses > HR_SHARED_PTR, that's impl detail. Public classes should hold only > opaque pointers to internal data types. > > I would recommend to treat warnings from Windows compilation as > blockers: it seems Visual Studio is much smarter in detecting > DLL-boundary related errors. > > [1] https://github.com/jbandela/cppcomponents > > >> >> Tristan >> >> On 29/04/2014 13:02, Radim Vansa wrote: >>> I was expecting at least some response. >>> >>> Cliff, Ion, Tristan, Vladimir, could you share your opinions? >>> >>> Radim >>> >>> On 04/25/2014 02:26 PM, Radim Vansa wrote: >>>> Hi guys, >>>> >>>> as I've tried to get rid of all the warnings emitted in Windows >>>> build of >>>> C++ HotRod client, I've noticed that the ABI of this library is >>>> not very >>>> well designed. >>>> I am not an expert for this kind of stuff, but many sources I've >>>> found >>>> say that exporting STL containers (such as string or vector, or >>>> shared_ptr) is not ABI-safe. >>>> >>>> For windows, the STL export is allowed [1] when both library and >>>> user >>>> application is linked against the same version of CRT. I am >>>> really not >>>> sure whether we want to force it to the user, and moreover, due >>>> to bug >>>> in VC10 implementation of STL [2] we can't explicitly export >>>> shared_ptr >>>> (I haven't found any workaround for that so far). >>>> >>>> Regarding the GCC-world, situation is not better. The usual >>>> response for >>>> exporting STL classes is "don't do that". It is expected that >>>> these >>>> trouble will be addressed in C++17 (huh :)). >>>> >>>> What can we do about that? Fixing this requires a lot of changes >>>> in >>>> API... can we afford to do that now? Or will we just declare >>>> "compile >>>> with the same versions and compile options as we did"? 
(we should >>>> state >>>> them, then) >>>> >>>> I have only limited knowledge of the whole C++ ecosystem, if I am >>>> wrong, >>>> I'd be gladly corrected. >>>> >>>> Radim >>>> >>>> [1] http://support.microsoft.com/kb/168958 >>>> [2] >>>> http://connect.microsoft.com/VisualStudio/feedback/details/649531 >>>> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140501/4fa66b5c/attachment.html From sanne at infinispan.org Thu May 1 08:59:27 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 1 May 2014 13:59:27 +0100 Subject: [infinispan-dev] New API to iterate over current entries in cache In-Reply-To: References: <53270AA3.30702@redhat.com> Message-ID: On 30 April 2014 15:06, William Burns wrote: > Was wondering if anyone had any opinions on the API for this. > > These are a few options that Dan and I mulled over: > > Note the CloseableIterable inteface mentioned is just an interface > that extends both Closeable and Iterable. > > 1. The API that is very similar to previously proposed in this list > but slightly changed: > > Methods on AdvancedCache > > CloseableIterable> entryIterable(KeyValueFilter super K, ? super V> filter); > > CloseableIterable> entryIterable(KeyValueFilter super K, ? super V> filter, Converter > converter); > > Note the difference here is that it would return an Iterable instead > of Iterator, which would allow for it being used in a for loop. 
> > Example usage would be (types omitted) > > for (CacheEntry entry : cache.entryIterable(someFilter, someConverter)) { > // Do something > } If it's important to close the Iterable, this example highlights a problem of the API. Ideally I think you might want to drop the need for the #close() method, but I'm guessing that's not an option so I'd avoid the Iterable API in that case. You could think of an intermediary place-holder to still allow for natural iteration: try ( CacheEntryIteratorContext ctx = cache.entryIterable(someFilter, someConverter) ) { for (CacheEntry entry : ctx.asIterable()) { // Do something } } But I'm not liking the names I used above, as I would expect to be able to reuse the same iterator for multiple invocations of iterable(), and have each to restart the iteration from the beginning. Can this be solved with better name choices? > 2. An API that returns a new type EntryIterable for example that can > chain methods to provide a filter and converter. > > on AdvancedCache > > EntryIterable entryIterable(); > > where EntryIterable is defined as: > > public interface EntryIterable extends > CloseableIterable> { > > public EntryIterable filter(KeyValueFilter V> filter); > > public EntryIterable converter(Converter V, ? extends V> converter); > > public CloseableIterable> > projection(Converter converter); > } > > Note that there are 2 methods that take a Converter, this is to > preserve the typing, since the method would return a different > EntryIterable instance. However I can also see removing one of the > converter method and just rename projection to converter instead. > > This API would allow for providing optional fields more cleanly or not > if all if desired. > > Example usage would be (types omitted) > > for (CacheEntry entry : > cache.entryIterable().filter(someFilter).converter(someConverter)) { > // Do something > } This looks very nice, assuming you fix the missing close(). Am I missing a catch? 
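The close() concern raised here can be reconciled with for-each iteration by making one type both Iterable and Closeable. Below is a minimal standalone sketch over an in-memory list — the class and method names are illustrative, not the actual Infinispan API, and a real implementation would release cluster-side resources in close(); restricting the iterable to a single Iterator is one possible way to keep the close() semantics unambiguous:

```java
import java.io.Closeable;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

// Sketch: an Iterable that is also Closeable, usable both in a for-each
// loop and in a try-with-resources block.
public class CloseableIterableSketch<T> implements Iterable<T>, Closeable {
    private final Iterator<T> source;
    private boolean iteratorHandedOut;
    private volatile boolean closed;

    public CloseableIterableSketch(List<T> backing) {
        this.source = backing.iterator();
    }

    @Override
    public Iterator<T> iterator() {
        if (iteratorHandedOut) {
            // Handing out only one Iterator keeps close() unambiguous.
            throw new IllegalStateException("already iterated");
        }
        iteratorHandedOut = true;
        return new Iterator<T>() {
            @Override
            public boolean hasNext() {
                return !closed && source.hasNext(); // a closed iterable yields nothing more
            }

            @Override
            public T next() {
                if (closed) {
                    throw new NoSuchElementException("iterable was closed");
                }
                return source.next();
            }
        };
    }

    @Override
    public void close() {
        closed = true; // the real client would also free remote resources here
    }
}
```

Used as `try (CloseableIterableSketch<Integer> it = ...) { for (int v : it) { ... } }`, the for-each loop compiles because the type is Iterable, and the try-with-resources block guarantees close() even on early exit.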
Also it's quite trivial for the user to do his own filtering and conversion in the "do something block", so I'm wondering if there is a reason beyond API shugar to expose this. It would be lovely if for example certain filters could affect loading from CacheStores - narrowing down a relational database select for example - and I guess the same concept could apply to the converted if you'd allow to select a subset of fields. I don't think these optimisations need to be coded right now, but it would be nice to keep the option open for future enhancement. > 3. An API that requires the filter up front in the AdvancedCache > method. This also brings up the point should we require a filter to > always be provided? Unfortuantely this doesn't prevent a user from > querying every entry as they can just use a filter that accepts all > key/value pairs. Why is that unfortunate? > > on AdvancedCache > > EntryIterable entryIterable(Filter filter) > > where EntryIterable is defined as: > > public interface EntryIterable extends > CloseableIterable> { > > public CloseableIterable> > converter(Converter converter); > } > > The usage would be identical to #2 except the filter is always provided. I wouldn't mandate it, but in case you overload the method it probably is a good idea to internally apply an accept-all filter so you have a single implementation. Could be a singleton, which also implies an efficient Externalizer. > > > Let me know what you guys think or if you have any other suggestions. 
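The accept-all singleton suggested above could look roughly like this. KeyValueFilter is a simplified stand-in for the real Infinispan interface, and the class name is hypothetical; since the filter is stateless, a single shared instance suffices, and its Externalizer would only need to write a type id and return the singleton on read:

```java
// Simplified stand-in for Infinispan's KeyValueFilter interface.
interface KeyValueFilter<K, V> {
    boolean accept(K key, V value);
}

// A stateless accept-all filter kept as a singleton, so that a
// no-argument entryIterable() overload can delegate to the filtered
// one and share a single implementation.
public final class AcceptAllFilter implements KeyValueFilter<Object, Object> {
    private static final AcceptAllFilter INSTANCE = new AcceptAllFilter();

    private AcceptAllFilter() {
    }

    @SuppressWarnings("unchecked")
    public static <K, V> KeyValueFilter<K, V> getInstance() {
        // The unchecked cast is safe: the filter never inspects its arguments.
        return (KeyValueFilter<K, V>) (KeyValueFilter<?, ?>) INSTANCE;
    }

    @Override
    public boolean accept(Object key, Object value) {
        return true; // every entry passes
    }
}
```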
> > Thanks, > > - Will > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From mudokonman at gmail.com Thu May 1 09:50:51 2014 From: mudokonman at gmail.com (William Burns) Date: Thu, 1 May 2014 09:50:51 -0400 Subject: [infinispan-dev] New API to iterate over current entries in cache In-Reply-To: References: <53270AA3.30702@redhat.com> Message-ID: On Thu, May 1, 2014 at 8:59 AM, Sanne Grinovero wrote: > On 30 April 2014 15:06, William Burns wrote: >> Was wondering if anyone had any opinions on the API for this. >> >> These are a few options that Dan and I mulled over: >> >> Note the CloseableIterable inteface mentioned is just an interface >> that extends both Closeable and Iterable. >> >> 1. The API that is very similar to previously proposed in this list >> but slightly changed: >> >> Methods on AdvancedCache >> >> CloseableIterable> entryIterable(KeyValueFilter> super K, ? super V> filter); >> >> CloseableIterable> entryIterable(KeyValueFilter> super K, ? super V> filter, Converter >> converter); >> >> Note the difference here is that it would return an Iterable instead >> of Iterator, which would allow for it being used in a for loop. >> >> Example usage would be (types omitted) >> >> for (CacheEntry entry : cache.entryIterable(someFilter, someConverter)) { >> // Do something >> } > > > If it's important to close the Iterable, this example highlights a > problem of the API. > Ideally I think you might want to drop the need for the #close() > method, but I'm guessing that's not an option so I'd avoid the > Iterable API in that case. Good point, I totally forgot to cover the Closeable aspect in the first email. Unfortunately changing it to be Iterable does pose a slight issue. 
I was thinking we do something along the lines that Dan was thinking of by preventing the Iterable from producing more than 1 Iterable (maybe throw IllegalStateException). This way when we close the Iterable it would also close the underlying Iterator. try (EntryIterable entries = advancedCache.entryIterable(someFilter, someConverter)) { for (Entry e : entries) { ... } } > You could think of an intermediary place-holder to still allow for > natural iteration: > > try ( CacheEntryIteratorContext ctx = cache.entryIterable(someFilter, > someConverter) ) { > for (CacheEntry entry : ctx.asIterable()) { > // Do something > } > } > > But I'm not liking the names I used above, as I would expect to be > able to reuse the same iterator for multiple invocations of > iterable(), and have each to restart the iteration from the beginning. Obviously from above this wouldn't be possible if we made those changes. Do you think this is reason to prevent those changes? Or do you think we should allow multiple iterators but closing the Iterable would also close down each of the Iterators? I am worried this might be a bit cumbersome/surprising, but documenting it might be sufficient. > Can this be solved with better name choices? > > >> 2. An API that returns a new type EntryIterable for example that can >> chain methods to provide a filter and converter. >> >> on AdvancedCache >> >> EntryIterable entryIterable(); >> >> where EntryIterable is defined as: >> >> public interface EntryIterable extends >> CloseableIterable> { >> >> public EntryIterable filter(KeyValueFilter> V> filter); >> >> public EntryIterable converter(Converter> V, ? extends V> converter); >> >> public CloseableIterable> >> projection(Converter converter); >> } >> >> Note that there are 2 methods that take a Converter, this is to >> preserve the typing, since the method would return a different >> EntryIterable instance. 
However I can also see removing one of the >> converter method and just rename projection to converter instead. >> >> This API would allow for providing optional fields more cleanly or not >> if all if desired. >> >> Example usage would be (types omitted) >> >> for (CacheEntry entry : >> cache.entryIterable().filter(someFilter).converter(someConverter)) { >> // Do something >> } > > This looks very nice, assuming you fix the missing close(). So in that case you like #2 I assume? That is what I was leaning towards as well. > Am I missing a catch? Yes it would be very similar to above with the outer try block and then passing in the CloseableIterable into the inner for loop. > > Also it's quite trivial for the user to do his own filtering and > conversion in the "do something block", so I'm wondering if there is a > reason beyond API shugar to expose this. The filters and converters are sent remotely to reduce resulting network traffic. The filter is applied on each node to determine what data it sends back and the converter is applied before sending back the value as well. > It would be lovely if for example certain filters could affect loading > from CacheStores - narrowing down a relational database select for > example - and I guess the same concept could apply to the converted if > you'd allow to select a subset of fields. There are plans to have a JPA string filter/converter that will live in a new forthcoming module. We could look into enhancing this in the future to have better integration with the JPACacheStore. A possible issue I can think of is if the projection doesn't contain the key value, because currently rehash can cause duplicate values that is detected by the key. > > I don't think these optimisations need to be coded right now, but it > would be nice to keep the option open for future enhancement. I agree that would be nice > > >> 3. An API that requires the filter up front in the AdvancedCache >> method. 
This also brings up the point should we require a filter to >> always be provided? Unfortuantely this doesn't prevent a user from >> querying every entry as they can just use a filter that accepts all >> key/value pairs. > > Why is that unfortunate? I was saying along the lines if we wanted to make sure a user doesn't query all data. But if we do want them to do this, then it is great. I know some people are on the fence about it. > >> >> on AdvancedCache >> >> EntryIterable entryIterable(Filter filter) >> >> where EntryIterable is defined as: >> >> public interface EntryIterable extends >> CloseableIterable> { >> >> public CloseableIterable> >> converter(Converter converter); >> } >> >> The usage would be identical to #2 except the filter is always provided. > > I wouldn't mandate it, but in case you overload the method it probably > is a good idea to internally apply an accept-all filter so you have a > single implementation. Could be a singleton, which also implies an > efficient Externalizer. Although for performance I guess it would be better to not provide a Filter for the retrieve all case. > >> >> >> Let me know what you guys think or if you have any other suggestions. 
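To make the filter and converter contract concrete, here is a minimal in-memory sketch. KeyValueFilter and Converter are simplified stand-ins for the real interfaces, and the cluster is reduced to a single Map; in the real client both callbacks would run on the owning nodes before results cross the network. A null filter stands in for the "no filter, iterate everything" case, which avoids even an accept-all callback per entry:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified stand-ins for Infinispan's KeyValueFilter and Converter.
interface KeyValueFilter<K, V> {
    boolean accept(K key, V value);
}

interface Converter<K, V, C> {
    C convert(K key, V value);
}

public class FilterConvertSketch {
    // Applies the filter first, then the converter on the surviving
    // entries -- the same per-node order the iterator API would use.
    public static <K, V, C> Map<K, C> filterAndConvert(Map<K, V> data,
            KeyValueFilter<K, V> filter, Converter<K, V, C> converter) {
        Map<K, C> out = new LinkedHashMap<>();
        for (Map.Entry<K, V> e : data.entrySet()) {
            if (filter == null || filter.accept(e.getKey(), e.getValue())) {
                out.put(e.getKey(), converter.convert(e.getKey(), e.getValue()));
            }
        }
        return out;
    }
}
```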
>> >> Thanks, >> >> - Will >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From tudor.secrieriu at gmail.com Thu May 1 12:10:25 2014 From: tudor.secrieriu at gmail.com (tudor) Date: Thu, 01 May 2014 19:10:25 +0300 Subject: [infinispan-dev] help with Caused by: java.lang.ClassCastException: org.infinispan.context.impl.NonTxInvocationContext cannot be cast to org.infinispan.context.impl.TxInvocationContext Message-ID: <536271F1.4090901@gmail.com> Hi all, Maybe someone had this issue before or it can point me in the right direction. I have an env of two Wildfly 8.0.0 Final servers, with Infinispan in cluster used as second level cache provider for hibernate. No changes to the default configurations both in Infinispan and also in hibernate. Any update or delete on the cache identities fail from the entity invalidation cache. Thanks, Tudor. 
Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from app2/hibernate, see cause for remote stack trace at org.infinispan.remoting.transport.AbstractTransport.checkResponse(AbstractTransport.java:41) at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:362) at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:167) at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:521) at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:281) at org.infinispan.interceptors.InvalidationInterceptor.visitClearCommand(InvalidationInterceptor.java:100) at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) at org.infinispan.interceptors.EntryWrappingInterceptor.invokeNextAndApplyChanges(EntryWrappingInterceptor.java:321) at org.infinispan.interceptors.EntryWrappingInterceptor.setSkipRemoteGetsAndInvokeNextForClear(EntryWrappingInterceptor.java:370) at org.infinispan.interceptors.EntryWrappingInterceptor.visitClearCommand(EntryWrappingInterceptor.java:146) at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) at org.infinispan.interceptors.locking.PessimisticLockingInterceptor.visitClearCommand(PessimisticLockingInterceptor.java:197) at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112) at 
org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47) at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) at org.infinispan.interceptors.TxInterceptor.enlistWriteAndInvokeNext(TxInterceptor.java:255) at org.infinispan.interceptors.TxInterceptor.visitClearCommand(TxInterceptor.java:206) at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112) at org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47) at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:110) at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:73) at org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47) at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:333) at org.infinispan.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1306) at org.infinispan.CacheImpl.clearInternal(CacheImpl.java:443) at org.infinispan.CacheImpl.clear(CacheImpl.java:438) at org.infinispan.CacheImpl.clear(CacheImpl.java:433) at org.infinispan.AbstractDelegatingCache.clear(AbstractDelegatingCache.java:291) at org.hibernate.cache.infinispan.access.TransactionalAccessDelegate.removeAll(TransactionalAccessDelegate.java:223) [hibernate-infinispan-4.3.1.Final.jar:4.3.1.Final] at 
org.hibernate.cache.infinispan.entity.TransactionalAccess.removeAll(TransactionalAccess.java:84) [hibernate-infinispan-4.3.1.Final.jar:4.3.1.Final] at org.hibernate.action.internal.BulkOperationCleanupAction$EntityCleanup.(BulkOperationCleanupAction.java:227) [hibernate-core-4.3.1.Final.jar:4.3.1.Final] at org.hibernate.action.internal.BulkOperationCleanupAction$EntityCleanup.(BulkOperationCleanupAction.java:220) [hibernate-core-4.3.1.Final.jar:4.3.1.Final] at org.hibernate.action.internal.BulkOperationCleanupAction.(BulkOperationCleanupAction.java:82) [hibernate-core-4.3.1.Final.jar:4.3.1.Final] at org.hibernate.hql.internal.ast.exec.BasicExecutor.doExecute(BasicExecutor.java:83) [hibernate-core-4.3.1.Final.jar:4.3.1.Final] at org.hibernate.hql.internal.ast.exec.BasicExecutor.execute(BasicExecutor.java:78) [hibernate-core-4.3.1.Final.jar:4.3.1.Final] at org.hibernate.hql.internal.ast.exec.DeleteExecutor.execute(DeleteExecutor.java:125) [hibernate-core-4.3.1.Final.jar:4.3.1.Final] at org.hibernate.hql.internal.ast.QueryTranslatorImpl.executeUpdate(QueryTranslatorImpl.java:445) [hibernate-core-4.3.1.Final.jar:4.3.1.Final] at org.hibernate.engine.query.spi.HQLQueryPlan.performExecuteUpdate(HQLQueryPlan.java:347) [hibernate-core-4.3.1.Final.jar:4.3.1.Final] at org.hibernate.internal.SessionImpl.executeUpdate(SessionImpl.java:1282) [hibernate-core-4.3.1.Final.jar:4.3.1.Final] at org.hibernate.internal.QueryImpl.executeUpdate(QueryImpl.java:118) [hibernate-core-4.3.1.Final.jar:4.3.1.Final] at org.hibernate.jpa.internal.QueryImpl.internalExecuteUpdate(QueryImpl.java:371) [hibernate-entitymanager-4.3.1.Final.jar:4.3.1.Final] at org.hibernate.jpa.spi.AbstractQueryImpl.executeUpdate(AbstractQueryImpl.java:78) [hibernate-entitymanager-4.3.1.Final.jar:4.3.1.Final] at com.ubicabs.manager.PolygonManager.deleteAllPoints(PolygonManager.java:110) [classes:] at com.ubicabs.manager.PolygonManager.updatePolygonPoints(PolygonManager.java:78) [classes:] at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.7.0_51] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) [rt.jar:1.7.0_51] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_51] at java.lang.reflect.Method.invoke(Method.java:606) [rt.jar:1.7.0_51] at org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309) at org.jboss.invocation.WeavedInterceptor.processInvocation(WeavedInterceptor.java:53) at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309) at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:407) at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:82) [wildfly-weld-8.0.0.Final.jar:8.0.0.Final] at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:93) [wildfly-weld-8.0.0.Final.jar:8.0.0.Final] at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309) at org.jboss.invocation.WeavedInterceptor.processInvocation(WeavedInterceptor.java:53) at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309) at org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43) [wildfly-ejb3-8.0.0.Final.jar:8.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309) at 
org.jboss.as.jpa.interceptor.SBInvocationInterceptor.processInvocation(SBInvocationInterceptor.java:47) [wildfly-jpa-8.0.0.Final.jar:8.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309) at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:407) at org.jboss.weld.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:46) [weld-core-impl-2.1.2.Final.jar:2014-01-09 09:23] at org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:83) [wildfly-weld-8.0.0.Final.jar:8.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309) at org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45) [wildfly-ee-8.0.0.Final.jar:8.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309) at org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:21) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309) at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61) at org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:53) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309) at org.jboss.as.ejb3.component.interceptors.NonPooledEJBComponentInstanceAssociatingInterceptor.processInvocation(NonPooledEJBComponentInstanceAssociatingInterceptor.java:59) [wildfly-ejb3-8.0.0.Final.jar:8.0.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309) at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInCallerTx(CMTTxInterceptor.java:251) [wildfly-ejb3-8.0.0.Final.jar:8.0.0.Final] ... 
218 more Caused by: java.lang.ClassCastException: org.infinispan.context.impl.NonTxInvocationContext cannot be cast to org.infinispan.context.impl.TxInvocationContext at org.infinispan.interceptors.locking.PessimisticLockingInterceptor.visitClearCommand(PessimisticLockingInterceptor.java:194) at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112) at org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47) at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) at org.infinispan.interceptors.TxInterceptor.enlistWriteAndInvokeNext(TxInterceptor.java:255) at org.infinispan.interceptors.TxInterceptor.visitClearCommand(TxInterceptor.java:206) at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112) at org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47) at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:110) at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:73) at org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47) at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) at 
org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:333) at org.infinispan.commands.remote.BaseRpcInvokingCommand.processVisitableCommand(BaseRpcInvokingCommand.java:39) at org.infinispan.commands.remote.SingleRpcCommand.perform(SingleRpcCommand.java:48) at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:95) at org.infinispan.remoting.InboundInvocationHandlerImpl.access$000(InboundInvocationHandlerImpl.java:50) at org.infinispan.remoting.InboundInvocationHandlerImpl$2.run(InboundInvocationHandlerImpl.java:172) ... 3 more From mgencur at redhat.com Fri May 2 03:28:44 2014 From: mgencur at redhat.com (Martin Gencur) Date: Fri, 02 May 2014 09:28:44 +0200 Subject: [infinispan-dev] Infinispan Test language level to Java 8? In-Reply-To: <91F270E3-806A-4CF1-942A-CCE51BAC5006@redhat.com> References: <91F270E3-806A-4CF1-942A-CCE51BAC5006@redhat.com> Message-ID: <5363492C.4070802@redhat.com> Hi, let me comment on this from QA perspective. We're running ISPN test suite with all these JDKs: IBM JDK, OpenJDK, OracleJDK. Until all these JDKs (version 1.8) are installed in Jenkins, we won't be able to run the tests. Think this should happen in the next months but no target date is specified (AFAIK). Martin On 30.4.2014 13:36, Galder Zamarre?o wrote: > Hi all, > > Just thinking out loud: what about we start using JDK8+ for all the test code in Infinispan? > > The production code would still have language level 6/7 (whatever is required?). > > This way we start getting ourselves familiar with JDK8 in a safe environment and we reduce some of the boiler plate code currently existing in the tests. > > This would only problematic for anyone consuming our test jars. They?d need move up to JDK8+ along with us. > > Thoughts? > > p.s. Recently I found https://leanpub.com/whatsnewinjava8/read which provides a great overview on what?s new in JDK8 along with small code samples. 
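To illustrate the kind of test boilerplate Galder refers to, here is a contrived before/after. The eventually(...) helper is a simplified stand-in for the polling assertion helpers in the Infinispan test suite, not the real signature:

```java
import java.util.concurrent.Callable;

public class LambdaTestStyle {
    // Simplified stand-in for a polling test helper: retries the
    // condition a few times and reports whether it ever held.
    public static boolean eventually(Callable<Boolean> condition) throws Exception {
        for (int attempt = 0; attempt < 10; attempt++) {
            if (condition.call()) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        // Pre-JDK8: an anonymous inner class for a one-line condition.
        boolean java7Style = eventually(new Callable<Boolean>() {
            @Override
            public Boolean call() {
                return 2 + 2 == 4;
            }
        });

        // JDK8: the same condition as a lambda.
        boolean java8Style = eventually(() -> 2 + 2 == 4);

        System.out.println(java7Style && java8Style); // both styles are equivalent
    }
}
```

Since only test code would move to the JDK 8 language level, production classes could keep their current source level while the suite sheds this kind of ceremony.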
> -- > Galder Zamarreño > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From tsykora at redhat.com Mon May 5 04:14:16 2014 From: tsykora at redhat.com (Tomas Sykora) Date: Mon, 5 May 2014 04:14:16 -0400 (EDT) Subject: [infinispan-dev] New ISPN github sub-repositories for OData server and ispn-cakery In-Reply-To: <1471282821.256318.1399277137348.JavaMail.zimbra@redhat.com> Message-ID: <684694927.258353.1399277656366.JavaMail.zimbra@redhat.com> Hello all, Mircea, Galder, Sanne, Dan, I have created 2 new projects in the scope of my diploma thesis (http://tsykora-tech.blogspot.cz/2014/02/introducing-infinispan-odata-server.html): infinispan-odata-server & infinispan-cakery I'd like to integrate them "under upstream" and continue development there. Does our policy allow creating new sub-projects / sub-folders for me and granting push rights? I'd like to migrate them from my private repo under Infinispan. Just like we have: https://github.com/infinispan/infinispan-forge, https://github.com/infinispan/Infinispan-book or https://github.com/infinispan/infinispan-site-check It would make me extraordinarily happy if I could obtain the https://github.com/infinispan/infinispan-odata-server and https://github.com/infinispan/infinispan-cakery sub-projects with push rights, so I can start doing more work on them, and more publicly. Thank you very much for any response! 
Tom From dan.berindei at gmail.com Mon May 5 09:38:12 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Mon, 05 May 2014 13:41:12 +0003 Subject: [infinispan-dev] New API to iterate over current entries in cache In-Reply-To: References: <53270AA3.30702@redhat.com> Message-ID: <1399297092.22473.2@smtp.gmail.com> On Thu, May 1, 2014 at 4:50 PM, William Burns wrote: > On Thu, May 1, 2014 at 8:59 AM, Sanne Grinovero > wrote: >> On 30 April 2014 15:06, William Burns wrote: >>> Was wondering if anyone had any opinions on the API for this. >>> >>> These are a few options that Dan and I mulled over: >>> >>> Note the CloseableIterable inteface mentioned is just an interface >>> that extends both Closeable and Iterable. >>> >>> 1. The API that is very similar to previously proposed in this list >>> but slightly changed: >>> >>> Methods on AdvancedCache >>> >>> CloseableIterable> entryIterable(KeyValueFilter>> super K, ? super V> filter); >>> >>> CloseableIterable> >>> entryIterable(KeyValueFilter>> super K, ? super V> filter, Converter >>> converter); >>> >>> Note the difference here is that it would return an Iterable >>> instead >>> of Iterator, which would allow for it being used in a for loop. >>> >>> Example usage would be (types omitted) >>> >>> for (CacheEntry entry : cache.entryIterable(someFilter, >>> someConverter)) { >>> // Do something >>> } >> >> >> If it's important to close the Iterable, this example highlights a >> problem of the API. >> Ideally I think you might want to drop the need for the #close() >> method, but I'm guessing that's not an option so I'd avoid the >> Iterable API in that case. > > Good point, I totally forgot to cover the Closeable aspect in the > first email. > > Unfortunately changing it to be Iterable does pose a slight issue. I > was thinking we do something along the lines that Dan was thinking of > by preventing the Iterable from producing more than 1 Iterable (maybe > throw IllegalStateException). 
This way when we close the Iterable it > would also close the underlying Iterator. > > > try (EntryIterable entries = > advancedCache.entryIterable(someFilter, someConverter)) { > for (Entry e : entries) { > ... > } > } > > >> You could think of an intermediary place-holder to still allow for >> natural iteration: >> >> try ( CacheEntryIteratorContext ctx = >> cache.entryIterable(someFilter, >> someConverter) ) { >> for (CacheEntry entry : ctx.asIterable()) { >> // Do something >> } >> } >> >> But I'm not liking the names I used above, as I would expect to be >> able to reuse the same iterator for multiple invocations of >> iterable(), and have each to restart the iteration from the >> beginning. > > Obviously from above this wouldn't be possible if we made those > changes. Do you think this is reason to prevent those changes? Or do > you think we should allow multiple iterators but closing the Iterable > would also close down each of the Iterators? I am worried this might > be a bit cumbersome/surprising, but documenting it might be > sufficient. I don't think it would be surprising at all to invalidate all iterators on close, just as modifying java.util collections in any way invalidates all iterators. Not allowing the user to iterate twice over the same Iterable, as I suggested, might be surprising, but the user can easily change his code to work around that. I'm not sure the intermediate asIterable() call helps in any way, though, because it's just as easy for the user to "forget" to call close(): for (CacheEntry entry : ctx.entryIterable(someFilter, someConverter).asIterable()) { // Do something } It would be nice if Java would have followed C# in automatically calling close() at the end of a foreach loop, but I don't see a way to force the user to call close(). > > >> Can this be solved with better name choices? I don't think so... >> >> >> >>> 2. 
An API that returns a new type EntryIterable for example that >>> can >>> chain methods to provide a filter and converter. >>> >>> on AdvancedCache >>> >>> EntryIterable entryIterable(); >>> >>> where EntryIterable is defined as: >>> >>> public interface EntryIterable extends >>> CloseableIterable> { >>> >>> public EntryIterable filter(KeyValueFilter>> super >>> V> filter); >>> >>> public EntryIterable converter(Converter>> super >>> V, ? extends V> converter); >>> >>> public CloseableIterable> >>> projection(Converter converter); >>> } >>> >>> Note that there are 2 methods that take a Converter, this is to >>> preserve the typing, since the method would return a different >>> EntryIterable instance. However I can also see removing one of the >>> converter method and just rename projection to converter instead. >>> >>> This API would allow for providing optional fields more cleanly or >>> not >>> if all if desired. >>> >>> Example usage would be (types omitted) >>> >>> for (CacheEntry entry : >>> cache.entryIterable().filter(someFilter).converter(someConverter)) >>> { >>> // Do something >>> } >> >> This looks very nice, assuming you fix the missing close(). > > So in that case you like #2 I assume? That is what I was leaning > towards as well. > >> Am I missing a catch? > > Yes it would be very similar to above with the outer try block and > then passing in the CloseableIterable into the inner for loop. > >> >> Also it's quite trivial for the user to do his own filtering and >> conversion in the "do something block", so I'm wondering if there >> is a >> reason beyond API shugar to expose this. > > The filters and converters are sent remotely to reduce resulting > network traffic. The filter is applied on each node to determine what > data it sends back and the converter is applied before sending back > the value as well. 
> >> It would be lovely if for example certain filters could affect >> loading >> from CacheStores - narrowing down a relational database select for >> example - and I guess the same concept could apply to the converted >> if >> you'd allow to select a subset of fields. > > There are plans to have a JPA string filter/converter that will live > in a new forthcoming module. We could look into enhancing this in the > future to have better integration with the JPACacheStore. A possible > issue I can think of is if the projection doesn't contain the key > value, because currently rehash can cause duplicate values that is > detected by the key. > >> >> I don't think these optimisations need to be coded right now, but it >> would be nice to keep the option open for future enhancement. > > I agree that would be nice > >> >> >>> 3. An API that requires the filter up front in the AdvancedCache >>> method. This also brings up the point should we require a filter >>> to >>> always be provided? Unfortuantely this doesn't prevent a user from >>> querying every entry as they can just use a filter that accepts all >>> key/value pairs. >> >> Why is that unfortunate? > > I was saying along the lines if we wanted to make sure a user doesn't > query all data. But if we do want them to do this, then it is great. > I know some people are on the fence about it. This was my original proposal. My main concern was finding a good name for the entryIterable() method, so I figured we could avoid it completely: try (CloseableIterable entries = cache.filter(filter).convert(converter)) { for (CacheEntry entry : ctx.asIterable()) { // Do something } } I figured it would be good to nudge users toward using a filter instead of filtering in the for block, because of the savings in network. OTOH, even though it doesn't prevent the user from iterating over all the entries in the cache, it does make it look a bit awkward. 
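[Editor's note: a minimal runnable sketch of the shape under discussion — the types here are hypothetical stand-ins, not Infinispan's actual `CloseableIterable`/`CacheEntry` API: an `Iterable` that is also `Closeable`, used inside try-with-resources so `close()` is guaranteed, and whose `close()` invalidates every iterator it has handed out, as Dan suggests:]

```java
import java.io.Closeable;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Hypothetical stand-in for the discussed CloseableIterable: usable in a
// for-each loop, and close() invalidates all iterators it has produced.
class CloseableEntries implements Iterable<String>, Closeable {
    private final List<String> entries;
    private volatile boolean closed;

    CloseableEntries(List<String> entries) {
        this.entries = entries;
    }

    @Override
    public Iterator<String> iterator() {
        checkOpen();
        final Iterator<String> delegate = entries.iterator();
        // Each call returns a fresh iterator starting from the beginning;
        // all of them die together when close() is called.
        return new Iterator<String>() {
            @Override
            public boolean hasNext() {
                checkOpen();
                return delegate.hasNext();
            }

            @Override
            public String next() {
                checkOpen();
                return delegate.next();
            }
        };
    }

    private void checkOpen() {
        if (closed) {
            throw new IllegalStateException("already closed");
        }
    }

    @Override
    public void close() {
        closed = true;
    }

    public static void main(String[] args) {
        // try-with-resources guarantees close(), as in the proposed usage
        try (CloseableEntries entries = new CloseableEntries(Arrays.asList("a", "b"))) {
            for (String e : entries) {
                System.out.println(e);
            }
        }
    }
}
```

The point of the sketch: nothing forces the caller into the try block, which is exactly the "easy to forget close()" concern raised above.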
> > >> >>> >>> on AdvancedCache >>> >>> EntryIterable entryIterable(Filter >>> filter) >>> >>> where EntryIterable is defined as: >>> >>> public interface EntryIterable extends >>> CloseableIterable> { >>> >>> public CloseableIterable> >>> converter(Converter converter); >>> } >>> >>> The usage would be identical to #2 except the filter is always >>> provided. >> >> I wouldn't mandate it, but in case you overload the method it >> probably >> is a good idea to internally apply an accept-all filter so you have >> a >> single implementation. Could be a singleton, which also implies an >> efficient Externalizer. > > Although for performance I guess it would be better to not provide a > Filter for the retrieve all case. I doubt the size of a custom filter would matter much compared to all the entries in the cache, the only issue is the ease of use. > > >> >>> >>> >>> Let me know what you guys think or if you have any other >>> suggestions. >>> >>> Thanks, >>> >>> - Will >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140505/a03859bf/attachment.html From mudokonman at gmail.com Mon May 5 11:25:43 2014 From: mudokonman at gmail.com (William Burns) Date: Mon, 5 May 2014 11:25:43 -0400 Subject: [infinispan-dev] New API to iterate over current entries in cache In-Reply-To: <1399297092.22473.2@smtp.gmail.com> References: <53270AA3.30702@redhat.com> <1399297092.22473.2@smtp.gmail.com> Message-ID: On Mon, May 5, 2014 at 9:38 AM, Dan Berindei wrote: > > > On Thu, May 1, 2014 at 4:50 PM, William Burns wrote: > > On Thu, May 1, 2014 at 8:59 AM, Sanne Grinovero > wrote: > > On 30 April 2014 15:06, William Burns wrote: > > Was wondering if anyone had any opinions on the API for this. These are a > few options that Dan and I mulled over: Note the CloseableIterable inteface > mentioned is just an interface that extends both Closeable and Iterable. 1. > The API that is very similar to previously proposed in this list but > slightly changed: Methods on AdvancedCache CloseableIterable V>> entryIterable(KeyValueFilter filter); > CloseableIterable> entryIterable(KeyValueFilter ? super V> filter, Converter converter); Note the > difference here is that it would return an Iterable instead of Iterator, > which would allow for it being used in a for loop. Example usage would be > (types omitted) for (CacheEntry entry : cache.entryIterable(someFilter, > someConverter)) { // Do something } > > If it's important to close the Iterable, this example highlights a problem > of the API. Ideally I think you might want to drop the need for the #close() > method, but I'm guessing that's not an option so I'd avoid the Iterable API > in that case. > > Good point, I totally forgot to cover the Closeable aspect in the first > email. Unfortunately changing it to be Iterable does pose a slight issue. 
I > was thinking we do something along the lines that Dan was thinking of by > preventing the Iterable from producing more than 1 Iterable (maybe throw > IllegalStateException). This way when we close the Iterable it would also > close the underlying Iterator. try (EntryIterable entries = > advancedCache.entryIterable(someFilter, someConverter)) { for (Entry e > : entries) { ... } } > > You could think of an intermediary place-holder to still allow for natural > iteration: try ( CacheEntryIteratorContext ctx = > cache.entryIterable(someFilter, someConverter) ) { for (CacheEntry entry : > ctx.asIterable()) { // Do something } } But I'm not liking the names I used > above, as I would expect to be able to reuse the same iterator for multiple > invocations of iterable(), and have each to restart the iteration from the > beginning. > > Obviously from above this wouldn't be possible if we made those changes. Do > you think this is reason to prevent those changes? Or do you think we should > allow multiple iterators but closing the Iterable would also close down each > of the Iterators? I am worried this might be a bit cumbersome/surprising, > but documenting it might be sufficient. > > > I don't think it would be surprising at all to invalidate all iterators on > close, just as modifying java.util collections in any way invalidates all > iterators. Sounds good, I will just make sure to document it. > > Not allowing the user to iterate twice over the same Iterable, as I > suggested, might be surprising, but the user can easily change his code to > work around that. Hrmm, this should be fine if we allow for multiple iterators. Each invocation would generate a new one which would start from the beginning. 
> > I'm not sure the intermediate asIterable() call helps in any way, though, > because it's just as easy for the user to "forget" to call close(): > > for (CacheEntry entry : ctx.entryIterable(someFilter, > someConverter).asIterable()) { > // Do something > } > > It would be nice if Java would have followed C# in automatically calling > close() at the end of a foreach loop, but I don't see a way to force the > user to call close(). > > Can this be solved with better name choices? > > > I don't think so... I would agree to the above points. I am thinking the cleanest way may be to just have the CloseableIterable and have it produce regular Iterators to prevent confusion as to what exactly needs to be closed. > > 2. An API that returns a new type EntryIterable for example that can chain > methods to provide a filter and converter. on AdvancedCache EntryIterable V> entryIterable(); where EntryIterable is defined as: public interface > EntryIterable extends CloseableIterable> { public > EntryIterable filter(KeyValueFilter filter); > public EntryIterable converter(Converter extends V> converter); public CloseableIterable> > projection(Converter converter); } Note that there > are 2 methods that take a Converter, this is to preserve the typing, since > the method would return a different EntryIterable instance. However I can > also see removing one of the converter method and just rename projection to > converter instead. This API would allow for providing optional fields more > cleanly or not if all if desired. Example usage would be (types omitted) for > (CacheEntry entry : > cache.entryIterable().filter(someFilter).converter(someConverter)) { // Do > something } > > This looks very nice, assuming you fix the missing close(). > > So in that case you like #2 I assume? That is what I was leaning towards as > well. > > Am I missing a catch? 
> > Yes it would be very similar to above with the outer try block and then > passing in the CloseableIterable into the inner for loop. > > Also it's quite trivial for the user to do his own filtering and conversion > in the "do something block", so I'm wondering if there is a reason beyond > API shugar to expose this. > > The filters and converters are sent remotely to reduce resulting network > traffic. The filter is applied on each node to determine what data it sends > back and the converter is applied before sending back the value as well. > > It would be lovely if for example certain filters could affect loading from > CacheStores - narrowing down a relational database select for example - and > I guess the same concept could apply to the converted if you'd allow to > select a subset of fields. > > There are plans to have a JPA string filter/converter that will live in a > new forthcoming module. We could look into enhancing this in the future to > have better integration with the JPACacheStore. A possible issue I can think > of is if the projection doesn't contain the key value, because currently > rehash can cause duplicate values that is detected by the key. > > I don't think these optimisations need to be coded right now, but it would > be nice to keep the option open for future enhancement. > > I agree that would be nice > > 3. An API that requires the filter up front in the AdvancedCache method. > This also brings up the point should we require a filter to always be > provided? Unfortuantely this doesn't prevent a user from querying every > entry as they can just use a filter that accepts all key/value pairs. > > Why is that unfortunate? > > I was saying along the lines if we wanted to make sure a user doesn't query > all data. But if we do want them to do this, then it is great. I know some > people are on the fence about it. > > > This was my original proposal. 
My main concern was finding a good name for > the entryIterable() method, so I figured we could avoid it completely: > > try (CloseableIterable entries = > cache.filter(filter).convert(converter)) { > for (CacheEntry entry : ctx.asIterable()) { > // Do something > } > } > > I figured it would be good to nudge users toward using a filter instead of > filtering in the for block, because of the savings in network. > OTOH, even though it doesn't prevent the user from iterating over all the > entries in the cache, it does make it look a bit awkward. Yeah that is why I am on the fence either way... It seems there isn't much opinion either way so I am thinking we just do the required filter one, but maybe just brain storm on an appropriate name. Maybe just filterEntries that returns the EntryIterable that can take a converter. WDYT? > > on AdvancedCache EntryIterable entryIterable(Filter V> filter) where EntryIterable is defined as: public interface > EntryIterable extends CloseableIterable> { public > CloseableIterable> converter(Converter V, C> converter); } The usage would be identical to #2 except the filter is > always provided. > > I wouldn't mandate it, but in case you overload the method it probably is a > good idea to internally apply an accept-all filter so you have a single > implementation. Could be a singleton, which also implies an efficient > Externalizer. > > Although for performance I guess it would be better to not provide a Filter > for the retrieve all case. > > > I doubt the size of a custom filter would matter much compared to all the > entries in the cache, the only issue is the ease of use. I agree, I wasn't trying to imply it was a big difference, just another small benefit, but shouldn't sway the discussion. > > Let me know what you guys think or if you have any other suggestions. 
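[Editor's note: Sanne's accept-all-singleton idea could look roughly like this. `KeyValueFilter` below is a local stand-in for the Infinispan interface, and plain `Serializable` is a placeholder — an efficient version would register a dedicated Externalizer so the marshalled form is just a type id:]

```java
import java.io.Serializable;

// Local stand-in for the filter contract discussed in the thread
interface KeyValueFilter<K, V> {
    boolean accept(K key, V value);
}

// Hypothetical accept-all singleton filter. readResolve() preserves the
// singleton property across Java serialization; a real implementation
// would pair this with a small Externalizer writing only a type id.
final class AcceptAllFilter implements KeyValueFilter<Object, Object>, Serializable {

    private static final AcceptAllFilter INSTANCE = new AcceptAllFilter();

    private AcceptAllFilter() {
    }

    @SuppressWarnings("unchecked")
    static <K, V> KeyValueFilter<K, V> getInstance() {
        return (KeyValueFilter<K, V>) INSTANCE;
    }

    @Override
    public boolean accept(Object key, Object value) {
        return true; // every entry matches
    }

    // Deserialization resolves back to the shared instance
    private Object readResolve() {
        return INSTANCE;
    }

    public static void main(String[] args) {
        System.out.println(AcceptAllFilter.getInstance().accept("k", "v")); // prints "true"
    }
}
```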
> Thanks, - Will _______________________________________________ > infinispan-dev mailing list infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Tue May 6 07:35:37 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 6 May 2014 12:35:37 +0100 Subject: [infinispan-dev] New ISPN github sub-repositories for OData server and ispn-cakery In-Reply-To: <684694927.258353.1399277656366.JavaMail.zimbra@redhat.com> References: <1471282821.256318.1399277137348.JavaMail.zimbra@redhat.com> <684694927.258353.1399277656366.JavaMail.zimbra@redhat.com> Message-ID: Hi all, that looks great! Freshly created: - https://github.com/infinispan/infinispan-odata-server - https://github.com/infinispan/infinispan-cakery The repositories are currently empty; they just have the default README and an ASL2 notice. Tomas has admin privileges on these, and also controls the permissions to join the teams - odata-server-gatekeepers - infinispan-cakery-gatekeepers So if anyone wants to contribute, be nice to him :) Looking forward to seeing these evolve! 
Cheers, Sanne On 5 May 2014 09:14, Tomas Sykora wrote: > Hello all, > Mircea, Galder, Sanne, Dan, > > I have created 2 new projects in the scope of my diploma thesis (http://tsykora-tech.blogspot.cz/2014/02/introducing-infinispan-odata-server.html): > > infinispan-odata-server & infinispan-cakery > > I'd like to integrate them "under upstream" and continue development there. > Does our policy allow to create new sub-projects / sub-folders for me and grant push rights? I'd like to migrate them from my private repo under Infinispan. > > Just like we have: https://github.com/infinispan/infinispan-forge, https://github.com/infinispan/Infinispan-book or https://github.com/infinispan/infinispan-site-check > > It would make me extraordinary happy if I can obtain: > > https://github.com/infinispan/infinispan-odata-server and https://github.com/infinispan/infinispan-cakery > > sub-projects with pushing rights so I can start doing more work on it and more publicly. > > Thank you very much for any response! > Tom > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Tue May 6 08:16:19 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 6 May 2014 13:16:19 +0100 Subject: [infinispan-dev] New API to iterate over current entries in cache In-Reply-To: <1399297092.22473.2@smtp.gmail.com> References: <53270AA3.30702@redhat.com> <1399297092.22473.2@smtp.gmail.com> Message-ID: Sorry indentation below is broken because someone on this thread is using HTML formatted emails. On 5 May 2014 14:38, Dan Berindei wrote: > > > On Thu, May 1, 2014 at 4:50 PM, William Burns wrote: > > On Thu, May 1, 2014 at 8:59 AM, Sanne Grinovero > wrote: > > On 30 April 2014 15:06, William Burns wrote: > > Was wondering if anyone had any opinions on the API for this. 
These are a > few options that Dan and I mulled over: Note the CloseableIterable inteface > mentioned is just an interface that extends both Closeable and Iterable. 1. > The API that is very similar to previously proposed in this list but > slightly changed: Methods on AdvancedCache CloseableIterable V>> entryIterable(KeyValueFilter filter); > CloseableIterable> entryIterable(KeyValueFilter ? super V> filter, Converter converter); Note the > difference here is that it would return an Iterable instead of Iterator, > which would allow for it being used in a for loop. Example usage would be > (types omitted) for (CacheEntry entry : cache.entryIterable(someFilter, > someConverter)) { // Do something } > > If it's important to close the Iterable, this example highlights a problem > of the API. Ideally I think you might want to drop the need for the #close() > method, but I'm guessing that's not an option so I'd avoid the Iterable API > in that case. > > Good point, I totally forgot to cover the Closeable aspect in the first > email. Unfortunately changing it to be Iterable does pose a slight issue. I > was thinking we do something along the lines that Dan was thinking of by > preventing the Iterable from producing more than 1 Iterable (maybe throw > IllegalStateException). This way when we close the Iterable it would also > close the underlying Iterator. try (EntryIterable entries = > advancedCache.entryIterable(someFilter, someConverter)) { for (Entry e > : entries) { ... } } > > You could think of an intermediary place-holder to still allow for natural > iteration: try ( CacheEntryIteratorContext ctx = > cache.entryIterable(someFilter, someConverter) ) { for (CacheEntry entry : > ctx.asIterable()) { // Do something } } But I'm not liking the names I used > above, as I would expect to be able to reuse the same iterator for multiple > invocations of iterable(), and have each to restart the iteration from the > beginning. 
> > Obviously from above this wouldn't be possible if we made those changes. Do > you think this is reason to prevent those changes? Or do you think we should > allow multiple iterators but closing the Iterable would also close down each > of the Iterators? I am worried this might be a bit cumbersome/surprising, > but documenting it might be sufficient. > > > I don't think it would be surprising at all to invalidate all iterators on > close, just as modifying java.util collections in any way invalidates all > iterators. That's not incorrect behaviour, but it's a surprising API, so I think we should avoid suggesting that it might be iterated multiple times. > > Not allowing the user to iterate twice over the same Iterable, as I > suggested, might be surprising, but the user can easily change his code to > work around that. Sure but it might not be immediately noticed, annoying. I generally don't like libraries which force me to go back and fix things when I discover it's not working as expected at a second time, this should be clear from the very beginning of coding. > > I'm not sure the intermediate asIterable() call helps in any way, though, > because it's just as easy for the user to "forget" to call close(): Right. > > for (CacheEntry entry : ctx.entryIterable(someFilter, > someConverter).asIterable()) { > // Do something > } > > It would be nice if Java would have followed C# in automatically calling > close() at the end of a foreach loop, but I don't see a way to force the > user to call close(). > > Can this be solved with better name choices? > > > I don't think so... Let's avoid Iterable then. > > 2. An API that returns a new type EntryIterable for example that can chain > methods to provide a filter and converter. 
on AdvancedCache EntryIterable V> entryIterable(); where EntryIterable is defined as: public interface > EntryIterable extends CloseableIterable> { public > EntryIterable filter(KeyValueFilter filter); > public EntryIterable converter(Converter extends V> converter); public CloseableIterable> > projection(Converter converter); } Note that there > are 2 methods that take a Converter, this is to preserve the typing, since > the method would return a different EntryIterable instance. However I can > also see removing one of the converter method and just rename projection to > converter instead. This API would allow for providing optional fields more > cleanly or not if all if desired. Example usage would be (types omitted) for > (CacheEntry entry : > cache.entryIterable().filter(someFilter).converter(someConverter)) { // Do > something } > > This looks very nice, assuming you fix the missing close(). > > So in that case you like #2 I assume? That is what I was leaning towards as > well. > > Am I missing a catch? > > Yes it would be very similar to above with the outer try block and then > passing in the CloseableIterable into the inner for loop. > > Also it's quite trivial for the user to do his own filtering and conversion > in the "do something block", so I'm wondering if there is a reason beyond > API shugar to expose this. > > The filters and converters are sent remotely to reduce resulting network > traffic. The filter is applied on each node to determine what data it sends > back and the converter is applied before sending back the value as well. > > It would be lovely if for example certain filters could affect loading from > CacheStores - narrowing down a relational database select for example - and > I guess the same concept could apply to the converted if you'd allow to > select a subset of fields. > > There are plans to have a JPA string filter/converter that will live in a > new forthcoming module. 
We could look into enhancing this in the future to > have better integration with the JPACacheStore. A possible issue I can think > of is if the projection doesn't contain the key value, because currently > rehash can cause duplicate values that are detected by the key. > > I don't think these optimisations need to be coded right now, but it would > be nice to keep the option open for future enhancement. > > I agree that would be nice > > 3. An API that requires the filter up front in the AdvancedCache method. > This also brings up the point: should we require a filter to always be > provided? Unfortunately this doesn't prevent a user from querying every > entry, as they can just use a filter that accepts all key/value pairs. > > Why is that unfortunate? > > I was saying that along the lines of wanting to make sure a user doesn't query > all data. But if we do want them to do this, then it is great. I know some > people are on the fence about it. > > > This was my original proposal. My main concern was finding a good name for > the entryIterable() method, so I figured we could avoid it completely: > > try (CloseableIterable entries = > cache.filter(filter).convert(converter)) { > for (CacheEntry entry : ctx.asIterable()) { > // Do something > } > } Paul had an excellent proposal about filtered views of Caches. From an API such as "cache.filter(filter)" I would expect a return of type Cache, i.e. a filtered one: for it to return an Iterable is not only surprising, but I think it would also prevent us from eventually building the filtered-cache feature. > > I figured it would be good to nudge users toward using a filter instead of > filtering in the for block, because of the savings in network. > OTOH, even though it doesn't prevent the user from iterating over all the > entries in the cache, it does make it look a bit awkward.
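[Editor's note: the filtered-views idea of Paul's that Sanne alludes to could look roughly like the sketch below — a hypothetical toy over a plain Map, not Infinispan's actual API. The key property is that filter() returns another cache-shaped view rather than a bare Iterable, so views compose:]

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BiPredicate;

// Hypothetical sketch of a filtered cache "view": filter() returns another
// FilteredCache, so the result stays cache-shaped and can be narrowed again.
class FilteredCache<K, V> {
    private final Map<K, V> data;
    private final BiPredicate<K, V> predicate;

    FilteredCache(Map<K, V> data, BiPredicate<K, V> predicate) {
        this.data = data;
        this.predicate = predicate;
    }

    // Narrow the view further; still a cache-like view, not an Iterable
    FilteredCache<K, V> filter(BiPredicate<K, V> extra) {
        return new FilteredCache<>(data, predicate.and(extra));
    }

    V get(K key) {
        V value = data.get(key);
        return (value != null && predicate.test(key, value)) ? value : null;
    }

    int size() {
        int count = 0;
        for (Map.Entry<K, V> e : data.entrySet()) {
            if (predicate.test(e.getKey(), e.getValue())) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        Map<String, Integer> backing = new LinkedHashMap<>();
        backing.put("a", 1);
        backing.put("b", 2);
        backing.put("c", 3);
        FilteredCache<String, Integer> all =
                new FilteredCache<>(backing, (k, v) -> true);
        FilteredCache<String, Integer> odd = all.filter((k, v) -> v % 2 == 1);
        System.out.println(odd.size()); // prints 2 ("a" -> 1 and "c" -> 3)
    }
}
```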
> > on AdvancedCache EntryIterable entryIterable(Filter V> filter) where EntryIterable is defined as: public interface > EntryIterable extends CloseableIterable> { public > CloseableIterable> converter(Converter V, C> converter); } The usage would be identical to #2 except the filter is > always provided. > > I wouldn't mandate it, but in case you overload the method it probably is a > good idea to internally apply an accept-all filter so you have a single > implementation. Could be a singleton, which also implies an efficient > Externalizer. > > Although for performance I guess it would be better to not provide a Filter > for the retrieve all case. > > > I doubt the size of a custom filter would matter much compared to all the > entries in the cache, the only issue is the ease of use. > > Let me know what you guys think or if you have any other suggestions. > Thanks, - Will _______________________________________________ > infinispan-dev mailing list infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Tue May 6 03:33:32 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Tue, 6 May 2014 09:33:32 +0200 Subject: [infinispan-dev] Infinispan Test language level to Java 8? 
In-Reply-To: References: <91F270E3-806A-4CF1-942A-CCE51BAC5006@redhat.com> <5360E8C5.7010005@redhat.com> Message-ID: <4E25705C-D7B5-4046-9128-7F12990857FC@redhat.com> On 30 Apr 2014, at 16:31, Sanne Grinovero wrote: > Valid concerns, but I think we should split those into two very > different categories: > 1- we provide testing utilities which are quite useful to other people too > 2- we run unit tests on our own code to prevent regressions > > If we split the utilities into a properly delivered package - built > with Java7, having its very own Maven identity and maybe even a user > guide - that would be even more useful to consumers. For example I use > some of the utilities in both Hibernate Search and Hibernate OGM, > depending on the testing classifier of infinispan-core. I'd prefer > to depend on a "proper" module with a somewhat stable API, and this > would be a great improvement for our users who start playing with > Infinispan. I often refer to our testsuite to explain how to set > things up. +1, a testkit module with stuff like TestCacheManagerFactory, etc. would certainly help the separation. Akka is also a very good example where they've separated the tests from the tools used to help out testing. Cheers, -- Galder Zamarreño galder at redhat.com twitter.com/galderz From galder at redhat.com Thu May 8 07:37:35 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Thu, 8 May 2014 13:37:35 +0200 Subject: [infinispan-dev] help with Caused by: java.lang.ClassCastException: org.infinispan.context.impl.NonTxInvocationContext cannot be cast to org.infinispan.context.impl.TxInvocationContext In-Reply-To: <536271F1.4090901@gmail.com> References: <536271F1.4090901@gmail.com> Message-ID: <05835898-4CA5-4270-9144-DC415AE228ED@redhat.com> This list is for discussions of development of Infinispan. 
For user questions, please post them in https://community.jboss.org/en/infinispan/content?filterID=contentstatus%5Bpublished%5D~objecttype~objecttype%5Bthread%5D For this particular issue, I have not seen such exception since 5.2, and Wildfly uses Infinispan 6+. Maybe you are bundling an old Infinispan version in your deployment? Cheers, On 01 May 2014, at 18:10, tudor wrote: > Hi all, > Maybe someone had this issue before or it can point me in the right > direction. > I have an env of two Wildfly 8.0.0 Final servers, with Infinispan in > cluster used as second level cache provider for hibernate. > No changes to the default configurations both in Infinispan and also in > hibernate. > Any update or delete on the cache identities fail from the entity > invalidation cache. > > Thanks, > Tudor. > > Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received > exception from app2/hibernate, see cause for remote stack trace > at > org.infinispan.remoting.transport.AbstractTransport.checkResponse(AbstractTransport.java:41) > > at > org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:362) > > at > org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:167) > > at > org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:521) > > at > org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:281) > > at > org.infinispan.interceptors.InvalidationInterceptor.visitClearCommand(InvalidationInterceptor.java:100) > > at > org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) > > at > org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) > > at > org.infinispan.interceptors.EntryWrappingInterceptor.invokeNextAndApplyChanges(EntryWrappingInterceptor.java:321) > > at > 
org.infinispan.interceptors.EntryWrappingInterceptor.setSkipRemoteGetsAndInvokeNextForClear(EntryWrappingInterceptor.java:370) > > at > org.infinispan.interceptors.EntryWrappingInterceptor.visitClearCommand(EntryWrappingInterceptor.java:146) > > at > org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) > > at > org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) > > at > org.infinispan.interceptors.locking.PessimisticLockingInterceptor.visitClearCommand(PessimisticLockingInterceptor.java:197) > > at > org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) > > at > org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) > > at > org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112) > > at > org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47) > > at > org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) > > at > org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) > > at > org.infinispan.interceptors.TxInterceptor.enlistWriteAndInvokeNext(TxInterceptor.java:255) > > at > org.infinispan.interceptors.TxInterceptor.visitClearCommand(TxInterceptor.java:206) > > at > org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) > > at > org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) > > at > org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112) > > at > org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47) > > at > org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) > > at > org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) > > at > 
org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:110) > > at > org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:73) > > at > org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47) > > at > org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) > > at > org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:333) > > at > org.infinispan.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1306) > > at org.infinispan.CacheImpl.clearInternal(CacheImpl.java:443) > at org.infinispan.CacheImpl.clear(CacheImpl.java:438) > at org.infinispan.CacheImpl.clear(CacheImpl.java:433) > at > org.infinispan.AbstractDelegatingCache.clear(AbstractDelegatingCache.java:291) > > at > org.hibernate.cache.infinispan.access.TransactionalAccessDelegate.removeAll(TransactionalAccessDelegate.java:223) > [hibernate-infinispan-4.3.1.Final.jar:4.3.1.Final] > at > org.hibernate.cache.infinispan.entity.TransactionalAccess.removeAll(TransactionalAccess.java:84) > [hibernate-infinispan-4.3.1.Final.jar:4.3.1.Final] > at > org.hibernate.action.internal.BulkOperationCleanupAction$EntityCleanup.(BulkOperationCleanupAction.java:227) > [hibernate-core-4.3.1.Final.jar:4.3.1.Final] > at > org.hibernate.action.internal.BulkOperationCleanupAction$EntityCleanup.(BulkOperationCleanupAction.java:220) > [hibernate-core-4.3.1.Final.jar:4.3.1.Final] > at > org.hibernate.action.internal.BulkOperationCleanupAction.(BulkOperationCleanupAction.java:82) > [hibernate-core-4.3.1.Final.jar:4.3.1.Final] > at > org.hibernate.hql.internal.ast.exec.BasicExecutor.doExecute(BasicExecutor.java:83) > [hibernate-core-4.3.1.Final.jar:4.3.1.Final] > at > org.hibernate.hql.internal.ast.exec.BasicExecutor.execute(BasicExecutor.java:78) > [hibernate-core-4.3.1.Final.jar:4.3.1.Final] > at > 
org.hibernate.hql.internal.ast.exec.DeleteExecutor.execute(DeleteExecutor.java:125) > [hibernate-core-4.3.1.Final.jar:4.3.1.Final] > at > org.hibernate.hql.internal.ast.QueryTranslatorImpl.executeUpdate(QueryTranslatorImpl.java:445) > [hibernate-core-4.3.1.Final.jar:4.3.1.Final] > at > org.hibernate.engine.query.spi.HQLQueryPlan.performExecuteUpdate(HQLQueryPlan.java:347) > [hibernate-core-4.3.1.Final.jar:4.3.1.Final] > at > org.hibernate.internal.SessionImpl.executeUpdate(SessionImpl.java:1282) > [hibernate-core-4.3.1.Final.jar:4.3.1.Final] > at > org.hibernate.internal.QueryImpl.executeUpdate(QueryImpl.java:118) > [hibernate-core-4.3.1.Final.jar:4.3.1.Final] > at > org.hibernate.jpa.internal.QueryImpl.internalExecuteUpdate(QueryImpl.java:371) > [hibernate-entitymanager-4.3.1.Final.jar:4.3.1.Final] > at > org.hibernate.jpa.spi.AbstractQueryImpl.executeUpdate(AbstractQueryImpl.java:78) > [hibernate-entitymanager-4.3.1.Final.jar:4.3.1.Final] > at > com.ubicabs.manager.PolygonManager.deleteAllPoints(PolygonManager.java:110) > [classes:] > at > com.ubicabs.manager.PolygonManager.updatePolygonPoints(PolygonManager.java:78) > [classes:] > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > [rt.jar:1.7.0_51] > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > [rt.jar:1.7.0_51] > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > [rt.jar:1.7.0_51] > at java.lang.reflect.Method.invoke(Method.java:606) > [rt.jar:1.7.0_51] > at > org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52) > > at > org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309) > > at > org.jboss.invocation.WeavedInterceptor.processInvocation(WeavedInterceptor.java:53) > > at > org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63) > > at > 
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309) > > at > org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:407) > > at > org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:82) > [wildfly-weld-8.0.0.Final.jar:8.0.0.Final] > at > org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:93) > [wildfly-weld-8.0.0.Final.jar:8.0.0.Final] > at > org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63) > > at > org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309) > > at > org.jboss.invocation.WeavedInterceptor.processInvocation(WeavedInterceptor.java:53) > > at > org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63) > > at > org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309) > > at > org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43) > [wildfly-ejb3-8.0.0.Final.jar:8.0.0.Final] > at > org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309) > > at > org.jboss.as.jpa.interceptor.SBInvocationInterceptor.processInvocation(SBInvocationInterceptor.java:47) > [wildfly-jpa-8.0.0.Final.jar:8.0.0.Final] > at > org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309) > > at > org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:407) > > at > org.jboss.weld.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:46) > [weld-core-impl-2.1.2.Final.jar:2014-01-09 09:23] > at > org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:83) > [wildfly-weld-8.0.0.Final.jar:8.0.0.Final] > at > 
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309) > > at > org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45) > [wildfly-ee-8.0.0.Final.jar:8.0.0.Final] > at > org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309) > > at > org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:21) > > at > org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309) > > at > org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61) > > at > org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:53) > > at > org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309) > > at > org.jboss.as.ejb3.component.interceptors.NonPooledEJBComponentInstanceAssociatingInterceptor.processInvocation(NonPooledEJBComponentInstanceAssociatingInterceptor.java:59) > [wildfly-ejb3-8.0.0.Final.jar:8.0.0.Final] > at > org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309) > > at > org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInCallerTx(CMTTxInterceptor.java:251) > [wildfly-ejb3-8.0.0.Final.jar:8.0.0.Final] > ... 
218 more > Caused by: java.lang.ClassCastException: > org.infinispan.context.impl.NonTxInvocationContext cannot be cast to > org.infinispan.context.impl.TxInvocationContext > at > org.infinispan.interceptors.locking.PessimisticLockingInterceptor.visitClearCommand(PessimisticLockingInterceptor.java:194) > > at > org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) > > at > org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) > > at > org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112) > > at > org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47) > > at > org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) > > at > org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) > > at > org.infinispan.interceptors.TxInterceptor.enlistWriteAndInvokeNext(TxInterceptor.java:255) > > at > org.infinispan.interceptors.TxInterceptor.visitClearCommand(TxInterceptor.java:206) > > at > org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) > > at > org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) > > at > org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112) > > at > org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47) > > at > org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) > > at > org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) > > at > org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:110) > > at > org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:73) > > at > 
org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47) > > at > org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38) > > at > org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:333) > > at > org.infinispan.commands.remote.BaseRpcInvokingCommand.processVisitableCommand(BaseRpcInvokingCommand.java:39) > > at > org.infinispan.commands.remote.SingleRpcCommand.perform(SingleRpcCommand.java:48) > > at > org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:95) > > at > org.infinispan.remoting.InboundInvocationHandlerImpl.access$000(InboundInvocationHandlerImpl.java:50) > > at > org.infinispan.remoting.InboundInvocationHandlerImpl$2.run(InboundInvocationHandlerImpl.java:172) > > ... 3 more > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From galder at redhat.com Thu May 8 09:40:15 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Thu, 8 May 2014 15:40:15 +0200 Subject: [infinispan-dev] Remote Hot Rod events wiki updated In-Reply-To: <53568886.9060306@redhat.com> References: <18D41294-AB90-4119-9E85-BECEAFBA8E38@redhat.com> <533E5F76.4050201@redhat.com> <1AD1098E-AF45-403B-B839-F9B817E2185F@redhat.com> <5347FB43.3060405@redhat.com> <75B1475C-F776-4AE2-B115-A774ECD13BA8@redhat.com> <534F7CD2.5090701@redhat.com> <25BCFD3D-9276-4DB5-8E9F-A551FB316A96@redhat.com> <1398175132.28062.3@smtp.gmail.com> <53568886.9060306@redhat.com> Message-ID: On 22 Apr 2014, at 17:19, Radim Vansa wrote: > On 04/22/2014 03:58 PM, Dan Berindei wrote: >> On Tue, Apr 22, 2014 at 2:30 PM, Galder Zamarre?o wrote: >>> >>> On 17 Apr 2014, at 08:03, Radim Vansa >>> >>> >>> wrote: >>> >>> >>> On 04/16/2014 05:38 PM, William Burns wrote: >>> >>> >>> On Wed, Apr 16, 2014 at 
11:14 AM, Galder >>> Zamarreño >>> >>> wrote: >>> >>> >>> On 11 Apr 2014, at 15:25, Radim Vansa >>> >>> >>> wrote: >>> >>> >>> OK, now I get the picture. Every time we >>> register to a node (whether the first time or after >>> previous node crash), we receive all (filtered) keys >>> from the whole cache, along with versions. Optionally >>> values as well. >>> >>> >>> >>> Exactly. >>> >>> >>> In case that multiple modifications happen >>> in the time window before registering to the new >>> cache, we don't get the notification for them, just >>> again the whole cache and it's up to the application to >>> decide whether there was no modification or some >>> modifications. >>> >>> >>> >>> I'm yet to decide on the type of event exactly here, >>> whether cache entry created, cache entry modified or a >>> different one, but regardless, you'd get the key and the >>> server-side version associated with that key. A user >>> provided client listener implementation could detect >>> which keys' versions have changed and react to that, >>> i.e. lazily fetch new values. One such user provided >>> client listener implementation could be a listener that >>> maintains a near cache for example. >>> >>> >>> >>> My current code was planning on raising a >>> CacheEntryCreatedEvent in this case. I didn't see any >>> special reason to require a new event type, unless anyone >>> can think of a use case? >>> >>> >>> >>> When the code cannot rely on the fact that created = (null >>> -> some) and modified = (some -> some), it seems to me >>> that the user will have to handle the events in the same >>> way. I don't see the reason to differentiate between them in >>> the protocol anyway. One problem that has come to my mind: what >>> about removed entries? If you push the keyset to the client, >>> without marking start and end of these events (and expecting >>> the client to fire removed events for all not mentioned keys >>> internally), the client can miss some entry deletion >>> forever.
Are the tombstones planned for any particular >>> version of Infinispan? >>> >>> >>> >>> That's a good reason why a different event type might be >>> useful. By receiving a special cache entry event when keys are >>> being looped, it can detect that a keyset is being returned, >>> for example, if the server went down and the Hot Rod client >>> transparently failed over to a different node and re-added the >>> client listener. The user of the client, say a near cache, >>> when it receives the first of this special event, can make >>> a decision to, say, clear the near cache contents, since it >>> might have missed some events. >>> The different event type gets around the need for a start/end >>> event. The first time the special event is received, that's >>> your start, and when you receive something other than the >>> special event, that's the end, and normal operation is back in >>> place. >>> WDYT? >>> >> >> I'm not sure if you plan multi-threaded event delivery in the Java client, but having a special start event would make it clear that it must be delivered after all the events from the old server and before any events from the new server. >> >> And it should also make special cases like a server dying before it finished sending the initial state easier to handle. >> >> Dan >> > > Is it really wise to have a stateful listener? I would prefer the listener to be called only once per server change, and let it iterate the cache via cache.forEach(ForEachTask task), or cache.iterator() (which would replace the keySet() etc.?) That might be an option in the future, but it needs that kind of remote iterator operation to be implemented on top of HotRod. For 7.0, we'll go with receiving events.
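The clear-on-special-event idea above can be sketched in plain Java. This is a toy model, not the real Hot Rod client API: the NearCache, Event and INITIAL_STATE names are made up for illustration only.

```java
import java.util.HashMap;
import java.util.Map;

public class NearCacheSketch {
    // INITIAL_STATE stands for the proposed "special" event that replays
    // the (filtered) keyset after the client re-registers a listener.
    enum Type { CREATED, MODIFIED, REMOVED, INITIAL_STATE }

    static class Event {
        final Type type; final String key; final long version;
        Event(Type type, String key, long version) {
            this.type = type; this.key = key; this.version = version;
        }
    }

    static class NearCache {
        final Map<String, Long> versions = new HashMap<>();
        boolean receivingInitialState = false;

        void onEvent(Event e) {
            if (e.type == Type.INITIAL_STATE) {
                // First special event after a failover: notifications may
                // have been missed, so drop everything we think we know.
                if (!receivingInitialState) {
                    versions.clear();
                    receivingInitialState = true;
                }
                versions.put(e.key, e.version);
            } else {
                // Any ordinary event ends the initial-state phase.
                receivingInitialState = false;
                if (e.type == Type.REMOVED) versions.remove(e.key);
                else versions.put(e.key, e.version);
            }
        }
    }

    public static void main(String[] args) {
        NearCache nc = new NearCache();
        nc.onEvent(new Event(Type.CREATED, "a", 1));
        nc.onEvent(new Event(Type.CREATED, "b", 1));
        // Server dies; "b" is removed and "a" updated while disconnected.
        nc.onEvent(new Event(Type.INITIAL_STATE, "a", 2)); // surviving keys replayed
        nc.onEvent(new Event(Type.MODIFIED, "a", 3));
        System.out.println(nc.versions); // {a=3}: stale "b" was dropped by the clear
    }
}
```

The special event doubles as the "start" marker: its first occurrence triggers the clear, and any ordinary event implicitly marks the end of the replay, so no explicit end marker is needed.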
> > Radim > > -- > Radim Vansa > > > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From mmarkus at redhat.com Thu May 8 19:12:33 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Fri, 9 May 2014 00:12:33 +0100 Subject: [infinispan-dev] 7.0.0.Alpha4 Message-ID: Hi Pedro, Your turn for release now :-) There's plenty of stuff pending, but when we'll have at least: [1] ISPN-4222 Add support for distributed entry iterator and [2] ISPN-3917 Filter objects using the query DSL (without using an index) I think we should proceed with the release. I know Will and Adrian are quite involved in the review, please coordinate with them and release. [1] https://github.com/infinispan/infinispan/pull/2513 [2] https://github.com/infinispan/infinispan/pull/2514 Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From pedro at infinispan.org Thu May 8 19:16:56 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Fri, 09 May 2014 00:16:56 +0100 Subject: [infinispan-dev] 7.0.0.Alpha4 In-Reply-To: References: Message-ID: <536C1068.1090401@infinispan.org> Hi, Roger that :) Pedro On 05/09/2014 12:12 AM, Mircea Markus wrote: > Hi Pedro, > > Your turn for release now :-) > There's plenty of stuff pending, but when we'll have at least: > [1] ISPN-4222 Add support for distributed entry iterator > and > [2] ISPN-3917 Filter objects using the query DSL (without using an index) I think we should proceed with the release. > I know Will and Adrian are quite involved in the review, please coordinate with them and release. 
> > [1] https://github.com/infinispan/infinispan/pull/2513 > [2] https://github.com/infinispan/infinispan/pull/2514 > > Cheers, > From rory.odonnell at oracle.com Fri May 9 05:35:37 2014 From: rory.odonnell at oracle.com (Rory O'Donnell Oracle, Dublin Ireland) Date: Fri, 09 May 2014 10:35:37 +0100 Subject: [infinispan-dev] Early Access builds for JDK 9 b11, JDK 8u20 b13 and JDK 7u60 b15 are available on java.net Message-ID: <536CA169.1020404@oracle.com> Hi Galder, Early Access builds for JDK 9 b11, JDK 8u20 b13 and JDK 7u60 b15 are available on java.net. As we enter the later phases of development for JDK 7u60 & JDK 8u20, please log any show stoppers as soon as possible. Rgds, Rory -- Rgds, Rory O'Donnell Quality Engineering Manager Oracle EMEA, Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140509/0bec9057/attachment.html From rvansa at redhat.com Mon May 12 03:21:13 2014 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 12 May 2014 09:21:13 +0200 Subject: [infinispan-dev] Reliability of return values Message-ID: <53707669.7070701@redhat.com> Hi, recently I've stumbled upon one already expected behaviour (one instance is [1]), but which did not get much attention. In non-tx cache, when the primary owner fails after the request has been replicated to backup owner, the request is retried in the new topology. Then, the operation is executed on the new primary (the previous backup). The outcome has been already fixed in [2], but the return value may be wrong. For example, when we do a put, the return value for the second attempt will be the currently inserted value (although the entry was just created). Same situation may happen for other operations.
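The retry anomaly described above can be sketched with a toy two-node model. This is plain Java, not Infinispan code; the Node class below only stands in for a primary/backup pair to show how the retried put observes its own earlier write.

```java
import java.util.HashMap;
import java.util.Map;

public class RetryReturnValue {
    static class Node {
        final Map<String, String> store = new HashMap<>();
        // put() returns the previous value, mirroring Cache.put()
        String put(String k, String v) { return store.put(k, v); }
    }

    public static void main(String[] args) {
        Node primary = new Node();
        Node backup = new Node();

        // 1. First attempt: the primary applies the write and replicates it
        //    to the backup, but crashes before answering the client.
        String firstReturn = primary.put("k", "v"); // null: entry was absent
        backup.put("k", "v");                       // replication succeeded

        // 2. The client never saw firstReturn, so it retries in the new
        //    topology, where the old backup is now the primary.
        String retriedReturn = backup.put("k", "v");

        // The entry was newly created, so the caller should see null,
        // but the retry observes its own previous write.
        System.out.println("first attempt returned: " + firstReturn);
        System.out.println("retried attempt returned: " + retriedReturn);
    }
}
```

Without a history of values on the new primary, the retried operation has no way to recover the pre-write value, which is exactly why the choice is between returning the wrong value and signalling the failure.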
Currently, it's not possible to return the correct value (because it has already been overwritten and we don't keep a history of values), but shouldn't we rather throw an exception if we were not able to fulfil the API contract? Radim [1] https://issues.jboss.org/browse/ISPN-2956 [2] https://issues.jboss.org/browse/ISPN-3422 -- Radim Vansa JBoss DataGrid QA From galder at redhat.com Mon May 12 03:47:01 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Mon, 12 May 2014 09:47:01 +0200 Subject: [infinispan-dev] New ISPN github sub-repositories for OData server and ispn-cakery In-Reply-To: References: <1471282821.256318.1399277137348.JavaMail.zimbra@redhat.com> <684694927.258353.1399277656366.JavaMail.zimbra@redhat.com> Message-ID: <4795DF62-107E-44F4-AE38-ADF44E5236B7@redhat.com> Thanks Sanne for getting this up and running, and thanks Tomas for the excellent work with the OData server! Looking forward to seeing how Cakery works :) Cheers, On 06 May 2014, at 13:35, Sanne Grinovero wrote: > Hi all, that looks great! > > freshly created: > - https://github.com/infinispan/infinispan-odata-server > - https://github.com/infinispan/infinispan-cakery > > The repositories are currently empty, just have the default readme and > an ASL2 notice. > > Tomas has admin privileges to these, and also controls the permissions > to join the teams > - odata-server-gatekeepers > - infinispan-cakery-gatekeepers > > So if anyone whants to contribute, be nice to him :) > > Looking forward for these to evolve! > Cheers, > Sanne > > > On 5 May 2014 09:14, Tomas Sykora wrote: >> Hello all, >> Mircea, Galder, Sanne, Dan, >> >> I have created 2 new projects in the scope of my diploma thesis (http://tsykora-tech.blogspot.cz/2014/02/introducing-infinispan-odata-server.html): >> >> infinispan-odata-server & infinispan-cakery >> >> I'd like to integrate them "under upstream" and continue development there. 
>> Does our policy allow to create new sub-projects / sub-folders for me and grant push rights? I'd like to migrate them from my private repo under Infinispan. >> >> Just like we have: https://github.com/infinispan/infinispan-forge, https://github.com/infinispan/Infinispan-book or https://github.com/infinispan/infinispan-site-check >> >> It would make me extraordinary happy if I can obtain: >> >> https://github.com/infinispan/infinispan-odata-server and https://github.com/infinispan/infinispan-cakery >> >> sub-projects with pushing rights so I can start doing more work on it and more publicly. >> >> Thank you very much for any response! >> Tom >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From dan.berindei at gmail.com Mon May 12 04:37:54 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Mon, 12 May 2014 11:37:54 +0300 Subject: [infinispan-dev] Reliability of return values In-Reply-To: <53707669.7070701@redhat.com> References: <53707669.7070701@redhat.com> Message-ID: Radim, I would contend that the first and foremost guarantee that put() makes is to leave the cache in a consistent state. So we can't just throw an exception and give up, leaving k=v on one owner and k=null on another. Secondly, put(k, v) being atomic means that it either succeeds, it writes k=v in the cache, and it returns the previous value, or it doesn't succeed, and it doesn't write k=v in the cache. Returning the wrong previous value is bad, but leaving k=v in the cache is just as bad, even if the all the owners have the same value. 
And last, we can't have one node seeing k=null, then k=v, then k=null again, when the only write we did on the cache was a put(k, v). So trying to undo the write would not help. In the end, we have to make a compromise, and I think returning the wrong value in some of the cases is a reasonable compromise. Of course, we should document that :) I also believe ISPN-2956 could be fixed so that HotRod behaves just like embedded mode after the ISPN-3422 fix, by adding a RETRY flag to the HotRod protocol and to the cache itself. Incidentally, transactional caches have a similar problem when the originator leaves the cluster: ISPN-3421 [1] And we can't handle transactional caches any better than non-transactional caches until we expose transactions to the HotRod client. [1] https://issues.jboss.org/browse/ISPN-2956 Cheers Dan On Mon, May 12, 2014 at 10:21 AM, Radim Vansa wrote: > Hi, > > recently I've stumbled upon one already expected behaviour (one instance > is [1]), but which did not got much attention. > > In non-tx cache, when the primary owner fails after the request has been > replicated to backup owner, the request is retried in the new topology. > Then, the operation is executed on the new primary (the previous > backup). The outcome has been already fixed in [2], but the return value > may be wrong. For example, when we do a put, the return value for the > second attempt will be the currently inserted value (although the entry > was just created). Same situation may happen for other operations. > > Currently, it's not possible to return the correct value (because it has > already been overwritten and we don't keep a history of values), but > shouldn't we rather throw an exception if we were not able to fulfil the > API contract? 
> > Radim > > [1] https://issues.jboss.org/browse/ISPN-2956 > [2] https://issues.jboss.org/browse/ISPN-3422 > > -- > Radim Vansa > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140512/bf130b35/attachment-0001.html From sanne at infinispan.org Mon May 12 06:02:44 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 12 May 2014 11:02:44 +0100 Subject: [infinispan-dev] Reliability of return values In-Reply-To: References: <53707669.7070701@redhat.com> Message-ID: I don't think we are in a position to decide what is a reasonable compromise; we can do better. For example - as Radim suggested - it might seem reasonable to have the older value around for a little while. We'll need a little bit of history of values and tombstones anyway for many other reasons. Sanne On 12 May 2014 09:37, Dan Berindei wrote: > Radim, I would contend that the first and foremost guarantee that put() > makes is to leave the cache in a consistent state. So we can't just throw an > exception and give up, leaving k=v on one owner and k=null on another. > > Secondly, put(k, v) being atomic means that it either succeeds, it writes > k=v in the cache, and it returns the previous value, or it doesn't succeed, > and it doesn't write k=v in the cache. Returning the wrong previous value is > bad, but leaving k=v in the cache is just as bad, even if the all the owners > have the same value. > > And last, we can't have one node seeing k=null, then k=v, then k=null again, > when the only write we did on the cache was a put(k, v). So trying to undo > the write would not help. > > In the end, we have to make a compromise, and I think returning the wrong > value in some of the cases is a reasonable compromise. 
Of course, we should > document that :) > > I also believe ISPN-2956 could be fixed so that HotRod behaves just like > embedded mode after the ISPN-3422 fix, by adding a RETRY flag to the HotRod > protocol and to the cache itself. > > Incidentally, transactional caches have a similar problem when the > originator leaves the cluster: ISPN-3421 [1] > And we can't handle transactional caches any better than non-transactional > caches until we expose transactions to the HotRod client. > > [1] https://issues.jboss.org/browse/ISPN-2956 > > Cheers > Dan > > > > > On Mon, May 12, 2014 at 10:21 AM, Radim Vansa wrote: >> >> Hi, >> >> recently I've stumbled upon one already expected behaviour (one instance >> is [1]), but which did not got much attention. >> >> In non-tx cache, when the primary owner fails after the request has been >> replicated to backup owner, the request is retried in the new topology. >> Then, the operation is executed on the new primary (the previous >> backup). The outcome has been already fixed in [2], but the return value >> may be wrong. For example, when we do a put, the return value for the >> second attempt will be the currently inserted value (although the entry >> was just created). Same situation may happen for other operations. >> >> Currently, it's not possible to return the correct value (because it has >> already been overwritten and we don't keep a history of values), but >> shouldn't we rather throw an exception if we were not able to fulfil the >> API contract? 
>> >> Radim >> >> [1] https://issues.jboss.org/browse/ISPN-2956 >> [2] https://issues.jboss.org/browse/ISPN-3422 >> >> -- >> Radim Vansa >> JBoss DataGrid QA >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Mon May 12 06:37:41 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 12 May 2014 11:37:41 +0100 Subject: [infinispan-dev] Infinispan REST CacheStore -> OOM Message-ID: Hi all, I'm unable to run the Infinispan testsuite because of OOM exceptions hapening in the testsuite of Infinispan REST CacheStore. Anyone else has seen the same problem? Cheers, Sanne From rvansa at redhat.com Mon May 12 06:54:43 2014 From: rvansa at redhat.com (Radim Vansa) Date: Mon, 12 May 2014 12:54:43 +0200 Subject: [infinispan-dev] Reliability of return values In-Reply-To: References: <53707669.7070701@redhat.com> Message-ID: <5370A873.8070509@redhat.com> @Dan: It's absolutely correct to do the further writes in order to make the cache consistent, I am not arguing against that. You've fixed the outcome (state of cache) well. My point was that we should let the user know that the value he gets is not 100% correct when we already know that - and given the API, the only option to do that seems to me as throwing an exception. @Sanne: I was not suggesting that for now - sure, value versioning is (I hope) on the roadmap. But that's more complicated, I though just about making an adjustment to the current implementation. Radim On 05/12/2014 12:02 PM, Sanne Grinovero wrote: > I don't think we are in a position to decide what is a reasonable > compromise; we can do better. 
> For example - as Radim suggested - it might seem reasonable to have > the older value around for a little while. We'll need a little bit of > history of values and tombstones anyway for many other reasons. > > > Sanne > > On 12 May 2014 09:37, Dan Berindei wrote: >> Radim, I would contend that the first and foremost guarantee that put() >> makes is to leave the cache in a consistent state. So we can't just throw an >> exception and give up, leaving k=v on one owner and k=null on another. >> >> Secondly, put(k, v) being atomic means that it either succeeds, it writes >> k=v in the cache, and it returns the previous value, or it doesn't succeed, >> and it doesn't write k=v in the cache. Returning the wrong previous value is >> bad, but leaving k=v in the cache is just as bad, even if the all the owners >> have the same value. >> >> And last, we can't have one node seeing k=null, then k=v, then k=null again, >> when the only write we did on the cache was a put(k, v). So trying to undo >> the write would not help. >> >> In the end, we have to make a compromise, and I think returning the wrong >> value in some of the cases is a reasonable compromise. Of course, we should >> document that :) >> >> I also believe ISPN-2956 could be fixed so that HotRod behaves just like >> embedded mode after the ISPN-3422 fix, by adding a RETRY flag to the HotRod >> protocol and to the cache itself. >> >> Incidentally, transactional caches have a similar problem when the >> originator leaves the cluster: ISPN-3421 [1] >> And we can't handle transactional caches any better than non-transactional >> caches until we expose transactions to the HotRod client. >> >> [1] https://issues.jboss.org/browse/ISPN-2956 >> >> Cheers >> Dan >> >> >> >> >> On Mon, May 12, 2014 at 10:21 AM, Radim Vansa wrote: >>> Hi, >>> >>> recently I've stumbled upon one already expected behaviour (one instance >>> is [1]), but which did not got much attention. 
>>> >>> In non-tx cache, when the primary owner fails after the request has been >>> replicated to backup owner, the request is retried in the new topology. >>> Then, the operation is executed on the new primary (the previous >>> backup). The outcome has been already fixed in [2], but the return value >>> may be wrong. For example, when we do a put, the return value for the >>> second attempt will be the currently inserted value (although the entry >>> was just created). Same situation may happen for other operations. >>> >>> Currently, it's not possible to return the correct value (because it has >>> already been overwritten and we don't keep a history of values), but >>> shouldn't we rather throw an exception if we were not able to fulfil the >>> API contract? >>> >>> Radim >>> >>> [1] https://issues.jboss.org/browse/ISPN-2956 >>> [2] https://issues.jboss.org/browse/ISPN-3422 >>> >>> -- >>> Radim Vansa >>> JBoss DataGrid QA >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss DataGrid QA From sanne at hibernate.org Mon May 12 11:38:52 2014 From: sanne at hibernate.org (Sanne Grinovero) Date: Mon, 12 May 2014 16:38:52 +0100 Subject: [infinispan-dev] [Search] Handling of mutual dependency with Infinispan Message-ID: Now that finally Infinispan moved to build (and require) Java7, I'm preparing to upgrade the Lucene Directory to Apache Lucene 4.8. 
Sometimes it's trivial; other times we're out of luck, and this is one of those situations: the new Lucene code expects some new methods to create and validate a CRC32 checksum signature of the Directory segments. Not too annoying - I can handle the coding - but it highlights an old problem which is coming back. Currently Infinispan is still using Hibernate Search 4.5, and provides a Lucene Directory for both Lucene versions 3 and 4. The current build provides the LuceneV4 support as an extension of the V3 source module; this is a hack we introduced a year ago to make sure we'd eventually be able to upgrade to Lucene4, and I was hoping now to finally remove this fishy workaround, as I initially expected it to be a temporary measure. But in practice such an upgrade would be impossible today: Infinispan also depends on Hibernate Search. Having two different modules in Infinispan is what enables us today to start an update to a new version of Lucene. I'm wondering if we should move the Lucene Directory code into Hibernate Search; this also has licensing implications, as that's using ASL2 currently. And that's probably only moving the problem one step down, as Infinispan Query still depends on Hibernate Search (engine) and the Lucene Directory would still depend on Infinispan Core. I don't have a solution in mind; obviously we wouldn't have such a problem if each of our projects *always* guaranteed a clean migration path via default methods and deprecated methods, but this is a rule which is occasionally broken: when it happens, the only thing I can think of is that one of the two projects needs to release a tag which has some broken components. For example, Infinispan could occasionally release without the Query engine.
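[Editorial note] For context on the checksum requirement mentioned above: the signature Lucene 4.8 writes and validates for index files is essentially a CRC-32 over the file's bytes, available in the JDK as java.util.zip.CRC32. A minimal stand-alone illustration (this is not the Lucene or Infinispan Directory API, just the underlying checksum):

```java
import java.util.zip.CRC32;

public class ChecksumDemo {
    // Compute a CRC-32 over a byte stream, the way a Directory implementation
    // would while writing or validating a segment file.
    static long checksum(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] segment = "123456789".getBytes();
        // 0xCBF43926 is the standard CRC-32 check value for "123456789"
        System.out.println(Long.toHexString(checksum(segment))); // prints "cbf43926"
    }
}
```

The checksum is cheap to compute incrementally while streaming the segment out, which is why a Directory implementation can maintain it without buffering the whole file.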
Sanne From emmanuel at hibernate.org Mon May 12 12:15:38 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Mon, 12 May 2014 18:15:38 +0200 Subject: [infinispan-dev] [hibernate-dev] [Search] Handling of mutual dependency with Infinispan In-Reply-To: References: Message-ID: I am not sure I understand everything you said. how about you take 20 mins tomorrow during our Hibernate NoORM team meeting on IRC? Be careful, 20 mins run fast in practice :) On 12 May 2014, at 17:38, Sanne Grinovero wrote: > Now that finally Infinispan moved to build (and require) Java7, I'm > preparing to upgrade the Lucene Directory to Apache Lucene 4.8. > > Sometimes it's trivial, some others we're out of luck and this is one > of such situations: the new Lucene code expects some new methods to > create and validate a CRC32 checksum signature of the Directory > segments. > Not too annoying - I can handle the coding - but it highlights an old > problem which is coming back. > > Currently Infinispan is still using Hibernate Search 4.5, and provides > a Lucene Directory for both Lucene versions 3 and 4. > The current build provides the LuceneV4 support as an extension of the > V3 source module; this is a hack we introduced a year ago to make sure > we'd eventually be able to upgrade to Lucene4 and I was hoping now to > finally remove this fishy workaround as I initially expected it to be > a temporary measure. > > But in practice such an upgrade of today would have been impossible: > Infinispan also depends on Hibernate Search. Having two different > modules in Infinispan is what enables us today to start an update to a > new version of Lucene. > > I'm wondering if we should move the Lucene Directory code into > Hibernate Search; this also has licensing implications as that's using > ASL2 currently. 
And that's probably only moving the problem one step > down, as Infinispan Query still depends on Hibernate Search (engine) > and the Lucene Directory would still depend on Infinispan Core. > > I'm not having a solution in mind; obviously we wouldn't have such a > problem if each of our projects *always* guaranteed a clean migration > path via default methods and deprecated methods, but this is a rule > which is occasionally broken: when it happens, the only thing I can > think of is that one of the two projects needs to release a tag which > has some broken components. For example, Infinispan to release > occasionally without the Query engine. > > Sanne > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev From sanne at infinispan.org Mon May 12 14:04:58 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 12 May 2014 19:04:58 +0100 Subject: [infinispan-dev] Where's the roadmap? In-Reply-To: References: <7335F27A-7B85-4341-A8A0-35670F8E827C@redhat.com> Message-ID: Hi, I think you mentioned having created the roadmap page but I can't find it, and people keep asking about it so I'm probably not the only one not finding it: https://community.jboss.org/message/870798 Could we make it more visible on the website? Cheers, Sanne On 25 February 2014 17:09, Sanne Grinovero wrote: > On 25 February 2014 16:33, Mircea Markus wrote: >> I'm working on it right now.. > > Thanks! As soon as you have a draft I'm happy to help with the Query section. > > Cheers, > Sanne > >> >> On Feb 25, 2014, at 1:39 PM, Sanne Grinovero wrote: >> >>> I was asked about the Infinispan roadmap on a forum post, my draft reads: >>> >>> "Sure it's available online, see.." >>> >>> but then I could actually only find this: >>> https://community.jboss.org/wiki/InfinispanRoadmap >>> >>> (which is very outdated). >>> >>> So, what's the roadmap? 
>>> >>> Would be nice if we could have it updated and published on the new website. >>> >>> Cheers, >>> Sanne >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> Cheers, >> -- >> Mircea Markus >> Infinispan lead (www.infinispan.org) >> >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev From liguangpeng at huawei.com Tue May 13 05:36:27 2014 From: liguangpeng at huawei.com (Liguangpeng (Roc, IPTechnologyResearchDept&HW)) Date: Tue, 13 May 2014 09:36:27 +0000 Subject: [infinispan-dev] How to terminate Infinispan instance in the distributed mode? Message-ID: <6F4E6B0C717D4641A2B79BC1740D8CF43080A6F9@SZXEMA501-MBX.china.huawei.com> Hello Infinispan experts, I want to create a cluster of Infinispan which consists of multiple nodes. Then I expect to control the number of instances, so I want some nodes to quit gracefully. After reading the documents, I tried cache.stop() to do so. But it does not seem to work; my application embedding Infinispan does not terminate. Following is my sample code and result. Please point out my mistake if I have made one. If this is not the proper list, please tell me the right one. Thank you very much.
Sample code:

public static void main(String[] args) {
    test0();
}

public static void test0() {
    EmbeddedCacheManager manager = new DefaultCacheManager(
        GlobalConfigurationBuilder.defaultClusteredBuilder()
            .transport().addProperty("configurationFile", "jgroups-tcp-x.xml")
            .globalJmxStatistics().allowDuplicateDomains(true)
            .build(),
        new ConfigurationBuilder()
            .clustering()
            .cacheMode(CacheMode.DIST_ASYNC)
            .hash().numOwners(1)
            .build()
    );
    Cache myCache = manager.getCache("mycache");
    System.out.println("Cache instance started!");
    myCache.stop();
    System.out.println("Cache instance stopped!");
}

Result:

Cache instance started!
Cache instance stopped!

I expected the program to terminate here, but it doesn't.

Best Regards, Roc Lee -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140513/f9762e73/attachment-0001.html From sanne at infinispan.org Tue May 13 05:40:06 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 13 May 2014 10:40:06 +0100 Subject: [infinispan-dev] How to terminate Infinispan instance in the distributed mode? In-Reply-To: <6F4E6B0C717D4641A2B79BC1740D8CF43080A6F9@SZXEMA501-MBX.china.huawei.com> References: <6F4E6B0C717D4641A2B79BC1740D8CF43080A6F9@SZXEMA501-MBX.china.huawei.com> Message-ID: Hi, you'll have to stop the CacheManager: manager.stop(); On 13 May 2014 10:36, Liguangpeng (Roc, IPTechnologyResearchDept&HW) wrote: > Hello Infinispan experts, > > I want to create a cluster of Infinispan which consist of multiple nodes. > Then I expect to control the number of instances, so I want some nodes to > quid gracefully. After reading the documents, I try cache.stop() to do so. > But It seems not work, my application embedded with Infinispan does not > terminated. Following is my sample code and result. Please point my mistake > if I have. If this is not proper list, please tell me the right one. Thank > you very much.
> > > > Sample code: > > public static void main(String[] args) { > > test0(); > > } > > public static void test0() { > > EmbeddedCacheManager manager = new DefaultCacheManager( > > GlobalConfigurationBuilder.defaultClusteredBuilder() > > .transport().addProperty("configurationFile", > "jgroups-tcp-x.xml") > > .globalJmxStatistics().allowDuplicateDomains(true) > > .build(), > > new ConfigurationBuilder() > > .clustering() > > .cacheMode(CacheMode.DIST_ASYNC) > > .hash().numOwners(1) > > .build() > > ); > > Cache myCache = manager.getCache("mycache"); > > System.out.println("Cache instance started!"); > > myCache.stop(); > > System.out.println("Cache instance stopped!"); > > } > > > > Result: > > Cache instance started! > > Cache instance stopped! > > I expected the program terminates here, but it doesn?t. > > > > Best Regards, > > Roc Lee > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From liguangpeng at huawei.com Tue May 13 05:50:28 2014 From: liguangpeng at huawei.com (Liguangpeng (Roc, IPTechnologyResearchDept&HW)) Date: Tue, 13 May 2014 09:50:28 +0000 Subject: [infinispan-dev] How to terminate Infinispan instance in the distributed mode? In-Reply-To: References: <6F4E6B0C717D4641A2B79BC1740D8CF43080A6F9@SZXEMA501-MBX.china.huawei.com> Message-ID: <6F4E6B0C717D4641A2B79BC1740D8CF43080A915@SZXEMA501-MBX.china.huawei.com> Yes, this works fine for me. Thank you very much. Roc Lee > -----Original Message----- > From: infinispan-dev-bounces at lists.jboss.org > [mailto:infinispan-dev-bounces at lists.jboss.org] On Behalf Of Sanne Grinovero > Sent: Tuesday, May 13, 2014 5:40 PM > To: infinispan -Dev List > Subject: Re: [infinispan-dev] How to terminate Infinispan instance in the > distributed mode? 
> > Hi, > you'll have to stop che CacheManager: > > manager.stop(); > > On 13 May 2014 10:36, Liguangpeng (Roc, IPTechnologyResearchDept&HW) > wrote: > > Hello Infinispan experts, > > > > > > > > I want to create a cluster of Infinispan which consist of multiple nodes. > > Then I expect to control the number of instances, so I want some nodes > > to quid gracefully. After reading the documents, I try cache.stop() to do so. > > But It seems not work, my application embedded with Infinispan does > > not terminated. Following is my sample code and result. Please point > > my mistake if I have. If this is not proper list, please tell me the > > right one. Thank you very much. > > > > > > > > Sample code: > > > > public static void main(String[] args) { > > > > test0(); > > > > } > > > > public static void test0() { > > > > EmbeddedCacheManager manager = new DefaultCacheManager( > > > > GlobalConfigurationBuilder.defaultClusteredBuilder() > > > > .transport().addProperty("configurationFile", > > "jgroups-tcp-x.xml") > > > > > > .globalJmxStatistics().allowDuplicateDomains(true) > > > > .build(), > > > > new ConfigurationBuilder() > > > > .clustering() > > > > .cacheMode(CacheMode.DIST_ASYNC) > > > > .hash().numOwners(1) > > > > .build() > > > > ); > > > > Cache myCache = manager.getCache("mycache"); > > > > System.out.println("Cache instance started!"); > > > > myCache.stop(); > > > > System.out.println("Cache instance stopped!"); > > > > } > > > > > > > > Result: > > > > Cache instance started! > > > > Cache instance stopped! > > > > I expected the program terminates here, but it doesn?t. 
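[Editorial note] Sanne's one-liner is the whole fix: the reason cache.stop() alone leaves the JVM running is most likely that the CacheManager, not the individual cache, owns non-daemon threads (such as the JGroups transport). The effect can be modelled without Infinispan at all; every name below is invented for illustration:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Toy model: the "manager" owns a long-lived, non-daemon worker thread
// (standing in for the JGroups transport). Stopping a single "cache" leaves
// that thread running, so the JVM cannot exit; stopping the manager can.
class ToyCacheManager {
    private final ExecutorService transport = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "toy-transport");
        t.setDaemon(false); // a non-daemon thread keeps the JVM alive
        return t;
    });
    private volatile boolean cacheRunning;

    ToyCacheManager() {
        cacheRunning = true;
        // Park the transport thread until the manager is stopped.
        transport.submit(() -> {
            try {
                Thread.currentThread().join(); // blocks until interrupted
            } catch (InterruptedException ignored) {
            }
        });
    }

    void stopCache() {
        cacheRunning = false; // does NOT touch the transport thread
    }

    void stop() throws InterruptedException {
        transport.shutdownNow(); // interrupts the transport thread
        transport.awaitTermination(5, TimeUnit.SECONDS);
    }

    boolean isCacheRunning() { return cacheRunning; }
    boolean isTransportDown() { return transport.isTerminated(); }
}
```

After stopCache() the process would still hang on exit; after stop() the only non-daemon thread left is main, so the JVM terminates - which is exactly the difference between myCache.stop() and manager.stop() in the quoted program.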
> > > > Best Regards, > > Roc Lee > > > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From pierre.sutra at unine.ch Tue May 13 09:10:01 2014 From: pierre.sutra at unine.ch (Pierre Sutra) Date: Tue, 13 May 2014 15:10:01 +0200 Subject: [infinispan-dev] Clustered Listener Message-ID: <537219A9.1060301@unine.ch> Hello, As part of the LEADS project, we have recently been using the clustered listeners API. In our use case, the application employs a few thousand listeners, constantly installing and un-installing them. The overall picture is that things work smoothly up to a few hundred listeners, but above that the cost is high due to the full-replication scheme. To sidestep this issue, we have added a mechanism that allows listening to a single key only. In such a case, the listener is installed solely at the key owners. This greatly helps the scalability of the mechanism, at the cost of fault tolerance since, in the current state of the implementation, listeners are not forwarded to new data owners. Since handling topology changes is planned as a next step [1], do you also plan to support key- (or key-range-) specific listeners? Besides, regarding this last point and the current state of the implementation, I would like to know the purpose of the re-installation of the cluster listener in case of a view change in the addedListener() method of the CacheNotifierImpl class. Many thanks in advance.
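[Editorial note] Pierre's single-key optimisation - install the listener only at the key's owners, as computed from the consistent hash - can be sketched independently of the real Infinispan clustered-listener API. All types below are toy stand-ins for illustration:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy model of owner-only listener installation: the listener for a key is
// registered only on the numOwners nodes the hash maps the key to, instead
// of on every node in the cluster.
class ToyCluster {
    final List<Map<String, List<Consumer<String>>>> nodeListeners = new ArrayList<>();
    final int numOwners;

    ToyCluster(int nodes, int numOwners) {
        this.numOwners = numOwners;
        for (int i = 0; i < nodes; i++) nodeListeners.add(new HashMap<>());
    }

    // Simplistic consistent hash: numOwners consecutive nodes from hash(key).
    List<Integer> ownersOf(String key) {
        List<Integer> owners = new ArrayList<>();
        int first = Math.floorMod(key.hashCode(), nodeListeners.size());
        for (int i = 0; i < numOwners; i++)
            owners.add((first + i) % nodeListeners.size());
        return owners;
    }

    // Install a single-key listener only at the key's owners.
    void addKeyListener(String key, Consumer<String> listener) {
        for (int node : ownersOf(key))
            nodeListeners.get(node).computeIfAbsent(key, k -> new ArrayList<>()).add(listener);
    }

    // How many copies of a listener for this key exist across the cluster.
    int installations(String key) {
        int n = 0;
        for (Map<String, List<Consumer<String>>> node : nodeListeners)
            n += node.getOrDefault(key, Collections.<Consumer<String>>emptyList()).size();
        return n;
    }
}
```

The fault-tolerance gap Pierre mentions is visible in this model too: after a rebalance, ownersOf(key) changes but the installed entries do not move, so the listener would have to be re-registered at the new owners on each topology change.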
Best, Pierre Sutra [1] https://github.com/infinispan/infinispan/wiki/Clustered-listeners#handling-topology-changes From dan.berindei at gmail.com Tue May 13 09:58:11 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Tue, 13 May 2014 16:58:11 +0300 Subject: [infinispan-dev] Reliability of return values In-Reply-To: <5370A873.8070509@redhat.com> References: <53707669.7070701@redhat.com> <5370A873.8070509@redhat.com> Message-ID: On Mon, May 12, 2014 at 1:54 PM, Radim Vansa wrote: > @Dan: It's absolutely correct to do the further writes in order to make > the cache consistent, I am not arguing against that. You've fixed the > outcome (state of cache) well. My point was that we should let the user > know that the value he gets is not 100% correct when we already know > that - and given the API, the only option to do that seems to me as > throwing an exception. > The problem, as I see it, is that users also expect methods that throw an exception to *not* modify the cache. So we would break some of the users' expectations anyway. > > @Sanne: I was not suggesting that for now - sure, value versioning is (I > hope) on the roadmap. But that's more complicated, I though just about > making an adjustment to the current implementation. > Actually, just keeping a history of values would not fix the return value in all cases. When retrying a put on the new primary owner, the primary owner would still have to compare our value with the latest value, and return the previous value if they are equal.
So we could have something like this:

A is the originator, B is the primary owner, k = v0
A -> B: put(k, v1)
B dies before writing v1, C is now primary owner
D -> C: put(k, v1) // another put operation from D, with the same value
C -> D: null
A -> C: retry_put(k, v1)
C -> A: v0 // C assumes A is overwriting its own value, so it's returning the previous one

To fix that, we'd need a unique version generated by the originator - kind of like a transaction id ;) And to fix the HotRod use case, the HotRod client would have to be the one generating the version. Cheers Dan > Radim > > On 05/12/2014 12:02 PM, Sanne Grinovero wrote: > > I don't think we are in a position to decide what is a reasonable > > compromise; we can do better. > > For example - as Radim suggested - it might seem reasonable to have > > the older value around for a little while. We'll need a little bit of > > history of values and tombstones anyway for many other reasons. > > > > > > Sanne > > > > On 12 May 2014 09:37, Dan Berindei wrote: > >> Radim, I would contend that the first and foremost guarantee that put() > >> makes is to leave the cache in a consistent state. So we can't just > throw an > >> exception and give up, leaving k=v on one owner and k=null on another. > >> > >> Secondly, put(k, v) being atomic means that it either succeeds, it > writes > >> k=v in the cache, and it returns the previous value, or it doesn't > succeed, > >> and it doesn't write k=v in the cache. Returning the wrong previous > value is > >> bad, but leaving k=v in the cache is just as bad, even if all the > owners > >> have the same value. > >> > >> And last, we can't have one node seeing k=null, then k=v, then k=null > again, > >> when the only write we did on the cache was a put(k, v). So trying to > undo > >> the write would not help. > >> > >> In the end, we have to make a compromise, and I think returning the > wrong > >> value in some of the cases is a reasonable compromise.
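[Editorial note] Dan's sequence can be checked with a small stand-alone model (plain Java, not Infinispan code): a naively retried put returns the value it wrote itself, while an originator-generated write id plus one step of value history recovers the true previous value. All names are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the retry anomaly. The retried put on the new primary sees
// its own replicated write, so a naive put returns the wrong previous value.
// Tagging each write with an originator-generated id, and keeping the value
// that write overwrote, lets the new primary detect the retry.
class ToyPrimary {
    final Map<String, String> data = new HashMap<>();
    final Map<String, Long> writeId = new HashMap<>();       // id of the last write
    final Map<String, String> overwritten = new HashMap<>(); // value that write replaced

    // Naive semantics: the "previous value" is whatever is stored right now.
    String put(String key, String value) {
        return data.put(key, value);
    }

    // Versioned semantics: a repeated id means "this is a retry of a write
    // already applied here", so answer with the value that write overwrote.
    String put(String key, String value, long id) {
        if (Long.valueOf(id).equals(writeId.get(key))) {
            return overwritten.get(key); // retry detected
        }
        overwritten.put(key, data.get(key));
        writeId.put(key, id);
        return data.put(key, value);
    }
}
```

In Dan's trace, C plays the role of ToyPrimary after B's write (id 42, overwriting v0) was replicated to it: the naive retry returns v1, the versioned retry returns v0.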
Of course, we > should > >> document that :) > >> > >> I also believe ISPN-2956 could be fixed so that HotRod behaves just like > >> embedded mode after the ISPN-3422 fix, by adding a RETRY flag to the > HotRod > >> protocol and to the cache itself. > >> > >> Incidentally, transactional caches have a similar problem when the > >> originator leaves the cluster: ISPN-3421 [1] > >> And we can't handle transactional caches any better than > non-transactional > >> caches until we expose transactions to the HotRod client. > >> > >> [1] https://issues.jboss.org/browse/ISPN-2956 > >> > >> Cheers > >> Dan > >> > >> > >> > >> > >> On Mon, May 12, 2014 at 10:21 AM, Radim Vansa > wrote: > >>> Hi, > >>> > >>> recently I've stumbled upon one already expected behaviour (one > instance > >>> is [1]), but which did not got much attention. > >>> > >>> In non-tx cache, when the primary owner fails after the request has > been > >>> replicated to backup owner, the request is retried in the new topology. > >>> Then, the operation is executed on the new primary (the previous > >>> backup). The outcome has been already fixed in [2], but the return > value > >>> may be wrong. For example, when we do a put, the return value for the > >>> second attempt will be the currently inserted value (although the entry > >>> was just created). Same situation may happen for other operations. > >>> > >>> Currently, it's not possible to return the correct value (because it > has > >>> already been overwritten and we don't keep a history of values), but > >>> shouldn't we rather throw an exception if we were not able to fulfil > the > >>> API contract? 
> >>> > >>> Radim > >>> > >>> [1] https://issues.jboss.org/browse/ISPN-2956 > >>> [2] https://issues.jboss.org/browse/ISPN-3422 > >>> > >>> -- > >>> Radim Vansa > >>> JBoss DataGrid QA > >>> > >>> _______________________________________________ > >>> infinispan-dev mailing list > >>> infinispan-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140513/463fdc64/attachment.html From sanne at infinispan.org Tue May 13 11:44:25 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 13 May 2014 16:44:25 +0100 Subject: [infinispan-dev] Reliability of return values In-Reply-To: References: <53707669.7070701@redhat.com> <5370A873.8070509@redhat.com> Message-ID: I didn't mean to suggest any solution, just that it should be fixed .. anyway you like :) If versions are needed, so be it.. but I didn't think it through. Cheers, Sanne On 13 May 2014 14:58, Dan Berindei wrote: > > > > On Mon, May 12, 2014 at 1:54 PM, Radim Vansa wrote: >> >> @Dan: It's absolutely correct to do the further writes in order to make >> the cache consistent, I am not arguing against that. You've fixed the >> outcome (state of cache) well. 
My point was that we should let the user >> know that the value he gets is not 100% correct when we already know >> that - and given the API, the only option to do that seems to me as >> throwing an exception. > > > The problem, as I see it, is that users also expect methods that throw an > exception to *not* modify the cache. > So we would break some of the users' expectations anyway. > >> >> >> @Sanne: I was not suggesting that for now - sure, value versioning is (I >> hope) on the roadmap. But that's more complicated, I though just about >> making an adjustment to the current implementation. > > > > Actually, just keeping a history of values would not fix the the return > value in all cases. > > When retrying a put on the new primary owner, the primary owner would still > have to compare our value with the latest value, and return the previous > value if they are equal. So we could have something like this: > > A is the originator, B is the primary owner, k = v0 > A -> B: put(k, v1) > B dies before writing v, C is now primary owner > D -> C: put(k, v1) // another put operation from D, with the same value > C -> D: null > A -> C: retry_put(k, v1) > C -> A: v0 // C assumes A is overwriting its own value, so it's returning > the previous one > > To fix that, we'd need a unique version generated by the originator - kind > of like a transaction id ;) > And to fix the HotRod use case, the HotRod client would have to be the one > generating the version. > > Cheers > Dan > > > >> >> Radim >> >> On 05/12/2014 12:02 PM, Sanne Grinovero wrote: >> > I don't think we are in a position to decide what is a reasonable >> > compromise; we can do better. >> > For example - as Radim suggested - it might seem reasonable to have >> > the older value around for a little while. We'll need a little bit of >> > history of values and tombstones anyway for many other reasons. 
>> > >> > >> > Sanne >> > >> > On 12 May 2014 09:37, Dan Berindei wrote: >> >> Radim, I would contend that the first and foremost guarantee that put() >> >> makes is to leave the cache in a consistent state. So we can't just >> >> throw an >> >> exception and give up, leaving k=v on one owner and k=null on another. >> >> >> >> Secondly, put(k, v) being atomic means that it either succeeds, it >> >> writes >> >> k=v in the cache, and it returns the previous value, or it doesn't >> >> succeed, >> >> and it doesn't write k=v in the cache. Returning the wrong previous >> >> value is >> >> bad, but leaving k=v in the cache is just as bad, even if the all the >> >> owners >> >> have the same value. >> >> >> >> And last, we can't have one node seeing k=null, then k=v, then k=null >> >> again, >> >> when the only write we did on the cache was a put(k, v). So trying to >> >> undo >> >> the write would not help. >> >> >> >> In the end, we have to make a compromise, and I think returning the >> >> wrong >> >> value in some of the cases is a reasonable compromise. Of course, we >> >> should >> >> document that :) >> >> >> >> I also believe ISPN-2956 could be fixed so that HotRod behaves just >> >> like >> >> embedded mode after the ISPN-3422 fix, by adding a RETRY flag to the >> >> HotRod >> >> protocol and to the cache itself. >> >> >> >> Incidentally, transactional caches have a similar problem when the >> >> originator leaves the cluster: ISPN-3421 [1] >> >> And we can't handle transactional caches any better than >> >> non-transactional >> >> caches until we expose transactions to the HotRod client. >> >> >> >> [1] https://issues.jboss.org/browse/ISPN-2956 >> >> >> >> Cheers >> >> Dan >> >> >> >> >> >> >> >> >> >> On Mon, May 12, 2014 at 10:21 AM, Radim Vansa >> >> wrote: >> >>> Hi, >> >>> >> >>> recently I've stumbled upon one already expected behaviour (one >> >>> instance >> >>> is [1]), but which did not got much attention. 
>> >>> >> >>> In non-tx cache, when the primary owner fails after the request has >> >>> been >> >>> replicated to backup owner, the request is retried in the new >> >>> topology. >> >>> Then, the operation is executed on the new primary (the previous >> >>> backup). The outcome has been already fixed in [2], but the return >> >>> value >> >>> may be wrong. For example, when we do a put, the return value for the >> >>> second attempt will be the currently inserted value (although the >> >>> entry >> >>> was just created). Same situation may happen for other operations. >> >>> >> >>> Currently, it's not possible to return the correct value (because it >> >>> has >> >>> already been overwritten and we don't keep a history of values), but >> >>> shouldn't we rather throw an exception if we were not able to fulfil >> >>> the >> >>> API contract? >> >>> >> >>> Radim >> >>> >> >>> [1] https://issues.jboss.org/browse/ISPN-2956 >> >>> [2] https://issues.jboss.org/browse/ISPN-3422 >> >>> >> >>> -- >> >>> Radim Vansa >> >>> JBoss DataGrid QA >> >>> >> >>> _______________________________________________ >> >>> infinispan-dev mailing list >> >>> infinispan-dev at lists.jboss.org >> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> >> >> _______________________________________________ >> >> infinispan-dev mailing list >> >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> -- >> Radim Vansa >> JBoss DataGrid QA >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > 
https://lists.jboss.org/mailman/listinfo/infinispan-dev From rvansa at redhat.com Tue May 13 11:40:04 2014 From: rvansa at redhat.com (Radim Vansa) Date: Tue, 13 May 2014 17:40:04 +0200 Subject: [infinispan-dev] Reliability of return values In-Reply-To: References: <53707669.7070701@redhat.com> <5370A873.8070509@redhat.com> Message-ID: <53723CD4.9060300@redhat.com> On 05/13/2014 03:58 PM, Dan Berindei wrote: > > > On Mon, May 12, 2014 at 1:54 PM, Radim Vansa > wrote: > > @Dan: It's absolutely correct to do the further writes in order to > make > the cache consistent, I am not arguing against that. You've fixed the > outcome (state of cache) well. My point was that we should let the > user > know that the value he gets is not 100% correct when we already know > that - and given the API, the only option to do that seems to me as > throwing an exception. > > > The problem, as I see it, is that users also expect methods that throw > an exception to *not* modify the cache. > So we would break some of the users' expectations anyway. When the response from the primary owner does not arrive soon, we throw a timeout exception and the cache is modified anyway, isn't it? If we throw ~ReturnValueUnreliableException, the user has at least some chance to react. Currently, for code requiring a 100% reliable return value, you can't do anything but ignore the return value, even for CAS operations. > > @Sanne: I was not suggesting that for now - sure, value versioning > is (I > hope) on the roadmap. But that's more complicated, I though just about > making an adjustment to the current implementation. > > > > Actually, just keeping a history of values would not fix the > return value in all cases. > > When retrying a put on the new primary owner, the primary owner would > still have to compare our value with the latest value, and return the > previous value if they are equal.
So we could have something like this: > > A is the originator, B is the primary owner, k = v0 > A -> B: put(k, v1) > B dies before writing v, C is now primary owner > D -> C: put(k, v1) // another put operation from D, with the same value > C -> D: null > A -> C: retry_put(k, v1) > C -> A: v0 // C assumes A is overwriting its own value, so it's > returning the previous one > > To fix that, we'd need a unique version generated by the originator - > kind of like a transaction id ;) Is it such a problem to associate unique ID with each write? History implementation seems to me like the more complicated part. > And to fix the HotRod use case, the HotRod client would have to be the > one generating the version. I agree. Radim > > Cheers > Dan > > Radim > > On 05/12/2014 12:02 PM, Sanne Grinovero wrote: > > I don't think we are in a position to decide what is a reasonable > > compromise; we can do better. > > For example - as Radim suggested - it might seem reasonable to have > > the older value around for a little while. We'll need a little > bit of > > history of values and tombstones anyway for many other reasons. > > > > > > Sanne > > > > On 12 May 2014 09:37, Dan Berindei > wrote: > >> Radim, I would contend that the first and foremost guarantee > that put() > >> makes is to leave the cache in a consistent state. So we can't > just throw an > >> exception and give up, leaving k=v on one owner and k=null on > another. > >> > >> Secondly, put(k, v) being atomic means that it either succeeds, > it writes > >> k=v in the cache, and it returns the previous value, or it > doesn't succeed, > >> and it doesn't write k=v in the cache. Returning the wrong > previous value is > >> bad, but leaving k=v in the cache is just as bad, even if the > all the owners > >> have the same value. > >> > >> And last, we can't have one node seeing k=null, then k=v, then > k=null again, > >> when the only write we did on the cache was a put(k, v). 
So > trying to undo > >> the write would not help. > >> > >> In the end, we have to make a compromise, and I think returning > the wrong > >> value in some of the cases is a reasonable compromise. Of > course, we should > >> document that :) > >> > >> I also believe ISPN-2956 could be fixed so that HotRod behaves > just like > >> embedded mode after the ISPN-3422 fix, by adding a RETRY flag > to the HotRod > >> protocol and to the cache itself. > >> > >> Incidentally, transactional caches have a similar problem when the > >> originator leaves the cluster: ISPN-3421 [1] > >> And we can't handle transactional caches any better than > non-transactional > >> caches until we expose transactions to the HotRod client. > >> > >> [1] https://issues.jboss.org/browse/ISPN-2956 > >> > >> Cheers > >> Dan > >> > >> > >> > >> > >> On Mon, May 12, 2014 at 10:21 AM, Radim Vansa > > wrote: > >>> Hi, > >>> > >>> recently I've stumbled upon one already expected behaviour > (one instance > >>> is [1]), but which did not got much attention. > >>> > >>> In non-tx cache, when the primary owner fails after the > request has been > >>> replicated to backup owner, the request is retried in the new > topology. > >>> Then, the operation is executed on the new primary (the previous > >>> backup). The outcome has been already fixed in [2], but the > return value > >>> may be wrong. For example, when we do a put, the return value > for the > >>> second attempt will be the currently inserted value (although > the entry > >>> was just created). Same situation may happen for other operations. > >>> > >>> Currently, it's not possible to return the correct value > (because it has > >>> already been overwritten and we don't keep a history of > values), but > >>> shouldn't we rather throw an exception if we were not able to > fulfil the > >>> API contract? 
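The interleaving discussed earlier in this thread (A's retried put racing with D's independent put of the same value) can be modelled in a few lines. This is a toy sketch, not the Infinispan implementation — `PrimaryOwner`, `putGuessingRetries` and the single-key state are made-up names — showing why a value-equality heuristic hands A the wrong previous value, while an originator-generated operation id distinguishes a genuine retry from an independent write of the same value:

```java
// Toy model of the retry scenario from this thread -- NOT Infinispan code.
public class RetryReturnValue {

    /** Simplified primary owner holding a single key. */
    static class PrimaryOwner {
        String value;          // current value of the key
        String previousValue;  // value overwritten by the last applied write
        String lastOpId;       // id of the last applied write (null = none seen)

        PrimaryOwner(String initialValue) {
            this.value = initialValue;
        }

        /** Value-equality heuristic: a put carrying the same value as the
         *  current one is assumed to be the originator retrying its own
         *  write, so the pre-write value is returned. */
        String putGuessingRetries(String newValue) {
            if (newValue.equals(value)) {
                return previousValue; // may be wrong: another node wrote newValue
            }
            previousValue = value;
            value = newValue;
            return previousValue;
        }

        /** Originator-generated operation id: a duplicate id is a true
         *  retry and gets the remembered return value; a new id is an
         *  independent write even if the value happens to be identical. */
        String put(String opId, String newValue) {
            if (opId.equals(lastOpId)) {
                return previousValue; // idempotent replay of the same op
            }
            previousValue = value;
            value = newValue;
            lastOpId = opId;
            return previousValue;
        }
    }

    public static void main(String[] args) {
        // k = v0; B (old primary) dies before applying A's put(k, v1).
        // D then writes the same value v1 through the new primary C,
        // and only afterwards does A's retry reach C.

        PrimaryOwner guessing = new PrimaryOwner("v0");
        guessing.putGuessingRetries("v1");                 // D's put, returns v0
        String aSees = guessing.putGuessingRetries("v1");  // A's retry
        System.out.println("heuristic returns: " + aSees); // v0 -- wrong, A really overwrote D's v1

        PrimaryOwner withIds = new PrimaryOwner("v0");
        withIds.put("op-D", "v1");                          // D's put, returns v0
        String aSees2 = withIds.put("op-A", "v1");          // A's retry, distinct id
        System.out.println("with op ids returns: " + aSees2); // v1 -- the value A actually overwrote
    }
}
```

As Dan notes, extending this to HotRod would require the client, not the server, to generate the id, so that a retry through a different server node still carries the same id.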
> >>> > >>> Radim > >>> > >>> [1] https://issues.jboss.org/browse/ISPN-2956 > >>> [2] https://issues.jboss.org/browse/ISPN-3422 > >>> > >>> -- > >>> Radim Vansa > > >>> JBoss DataGrid QA > >>> > >>> _______________________________________________ > >>> infinispan-dev mailing list > >>> infinispan-dev at lists.jboss.org > > >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > >> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > > JBoss DataGrid QA > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss DataGrid QA -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140513/d8e6ff00/attachment.html From sanne at infinispan.org Tue May 13 17:50:26 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 13 May 2014 22:50:26 +0100 Subject: [infinispan-dev] [!] Reorganization of dependencies & release process Message-ID: This is a reboot of the thread previously started on both the infinispan-dev and the hibernate-dev mailing list as "Handling of mutual dependency with Infinispan" [1]. We discussed further during the Hibernate fortnightly meeting [2], and came to the conclusion that we need Infinispan to change how some repositories are organised and how the release is assembled. 
# The problem

To restate the issue, as you might painfully remember, every time there is a need for a Lucene update or a Search update we need to sync up for a complex dance of releases in both projects, to accommodate a small-step iterative process that handles the circular dependency.
This problem is not too bad today because for about a year we've been releasing the Lucene Directory in an unusual - and very unmaintainable - temporary arrangement, to stay compatible with two different major versions of Apache Lucene; namely, what Infinispan Query needs and what Hibernate Search needs are different modules.
But the party is over, and I want to finally drop support for Lucene 3 and clean up the unusual and unmaintainable build mess, targeting a single Lucene version only.

As soon as we converge to building a single version, however, we're back to the complex problem we had when we supported a single version, which is handling the circular dependency - except the problem has worsened lately, as the Lucene project has become more active and more inclined than it used to be to break both internal and public APIs.

In short, we have a circular dependency between Hibernate Search and Infinispan which we've been able to handle via hacks and some luck, but it poses a serious threat to development flexibility, and the locked-in release process is not desirable either.

# The solution

In conclusion, we think there's a single "proper" way out, and it also happens to provide some very interesting side effects in terms of maintenance overhead for everyone: Infinispan Core needs to release independently from the non-core modules.
This would have the Lucene Directory depend on a released tag of infinispan-core, and be able to be released independently.

Minor benefits:
- we often don't make any change in the Lucene Directory, yet we still need to release it.
- when I actually need a release of it, I'm currently begging for a quick release of Infinispan: very costly

The Big Ones:
- we can manage the Lucene Directory to provide support for different versions of Lucene without necessarily breaking other modules
- we can quickly release what's needed to move Search ahead in terms of Lucene versions, without needing to make the Infinispan Query module compatible at the same time (in case you haven't followed this area: this seems to be my main activity rather than making valuable stuff).

The goal is of course to linearise the dependencies; it also seems to simplify some of our tasks, which is a welcome side-effect. I expect it to make the project less scary for new contributors, too.

# How does it impact users

## Maven users
Modules will continue to be modules. I guess nobody will notice, other than that we might have a different versioning scheme, but we help people out via the Infinispan BOM.

## Distribution users
There should be no difference, other than (again) some jars might not be aligned in terms of version. But that's probably even less of a problem, as I expect distribution users to just put what they get on their classpath.

# How it impacts us

1) I'll move the Lucene Directory project to a different repository; same for the Query-related components. I think you should/could consider the same for other components, based on ad-hoc consideration of the trade-offs, but ultimately I'd expect to see a more frequent, "core only" release.

2) We'll have different kinds of releases: the "core only" ones and the "full releases". I think we'll also see components being released independently, but these are either Maven-only, or meant as preparation for other components or for a "full release".

3) Tests (!)
Such a move should in no way relax the regression-safety of infinispan-core: we still need to consider it unacceptable for a core change to break one of the modules moving out of the main tree.
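For Maven users, the BOM mentioned above is what would keep consumer builds stable even when core and the satellite modules are released on different cadences. A hypothetical consumer pom fragment - artifact ids and the version number are illustrative only, not a commitment to any particular coordinates:

```xml
<!-- Hypothetical consumer pom fragment; coordinates and version illustrative. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.infinispan</groupId>
      <artifactId>infinispan-bom</artifactId>
      <version>7.0.0.Final</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
<dependencies>
  <!-- versions come from the imported BOM, even if infinispan-core and
       the Lucene Directory module end up on independent release trains -->
  <dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-core</artifactId>
  </dependency>
  <dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-lucene-directory</artifactId>
  </dependency>
</dependencies>
```

The point of the `import`-scoped BOM is that a matching set of module versions is picked in one place, so users never have to track which core release each satellite module was built against.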
Personally I think I've pushed many tests about problems found in the "query modules" as unit tests in core, so that should be relatively safe, but it has also happened that someone would "tune" these.
I realise it's not practical to expect people to run the tests of downstream modules, so we'll have to automate most of these tasks in CI.
Careful on perception: if today there are three levels of defence against a regression (the author, the reviewer and CI all running the suite for each change), in such an organisation you have only one. So ignoring a CI failure as a "probable hiccup" could be much more dangerous than usual.

# When

Doing this _might_ be a blocker for any Lucene update; since one just happened, I'll probably have no urgent need for a couple of weeks at least. But we shouldn't be in a position in which an update is not possible, so I hope we'll agree to implement this sooner rather than later, so we won't have to do it during an emergency.

Also, while this might sound a bit crazy at first, I see many flexibility benefits which can't hurt now that the project is getting larger and more complex to release. Not least, having a micro release of "Infinispan essentials" would be very welcome in terms of lowering the initial barrier; this was proposed at various meetings and highly endorsed by many, but just never happened.

Any comments please? I hope I covered it all, and sorry for that :D

Cheers,
Sanne

1 - http://lists.jboss.org/pipermail/hibernate-dev/2014-May/011419.html
2 - http://transcripts.jboss.org/meeting/irc.freenode.org/hibernate-dev/2014/hibernate-dev.2014-05-13-13.24.log.html

From sanne at infinispan.org Tue May 13 19:36:50 2014
From: sanne at infinispan.org (Sanne Grinovero)
Date: Wed, 14 May 2014 00:36:50 +0100
Subject: [infinispan-dev] Configuration XSD missing, Infinispan not parsing v.6 configuration files ?
Message-ID:

The testing configuration files seem to point to this URL:
http://www.infinispan.org/schemas/infinispan-config-7.0.xsd

But I'm getting a 404 when attempting to find it.

It would be very helpful to make this available, as it seems Infinispan 7.0.0.Alpha4 is unable to read the old configuration format :-(

Is that expected?

Message: Unexpected element '{urn:infinispan:config:6.0}infinispan'
at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:100)
at org.infinispan.test.fwk.TestCacheManagerFactory.fromStream(TestCacheManagerFactory.java:106)
at org.infinispan.test.fwk.TestCacheManagerFactory.fromXml(TestCacheManagerFactory.java:97)
at org.infinispan.test.fwk.TestCacheManagerFactory.fromXml(TestCacheManagerFactory.java:91)
at org.infinispan.manager.CacheManagerXmlConfigurationTest.testBatchingIsEnabled(CacheManagerXmlConfigurationTest.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80)
at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
at org.testng.TestRunner.privateRun(TestRunner.java:767)
at org.testng.TestRunner.run(TestRunner.java:617)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
at org.testng.SuiteRunner.access$000(SuiteRunner.java:37)
at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:368)
at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
at
java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[5,41] Message: Unexpected element '{urn:infinispan:config:6.0}infinispan' at org.infinispan.configuration.parsing.ParserRegistry.parseElement(ParserRegistry.java:137) at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:121) at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:108) at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:95) ... 24 more From ttarrant at redhat.com Wed May 14 02:50:29 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Wed, 14 May 2014 08:50:29 +0200 Subject: [infinispan-dev] Configuration XSD missing, Infinispan not parsing v.6 configuration files ? In-Reply-To: References: Message-ID: <53731235.3040608@redhat.com> Isn't the deployment of XSDs part of the release process ? Tristan On 14/05/2014 01:36, Sanne Grinovero wrote: > The testing configuration files seem to point to this URL: > http://www.infinispan.org/schemas/infinispan-config-7.0.xsd > > But I'm getting a 404 when attempting to find it. > > It would be very helpfull to make this available, as it seems > Infinispan 7.0.0.Alpha4 is unable to read the old configuration format > :-( > > Is that expected? 
> > Message: Unexpected element '{urn:infinispan:config:6.0}infinispan' > at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:100) > at org.infinispan.test.fwk.TestCacheManagerFactory.fromStream(TestCacheManagerFactory.java:106) > at org.infinispan.test.fwk.TestCacheManagerFactory.fromXml(TestCacheManagerFactory.java:97) > at org.infinispan.test.fwk.TestCacheManagerFactory.fromXml(TestCacheManagerFactory.java:91) > at org.infinispan.manager.CacheManagerXmlConfigurationTest.testBatchingIsEnabled(CacheManagerXmlConfigurationTest.java:126) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80) > at org.testng.internal.Invoker.invokeMethod(Invoker.java:714) > at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901) > at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231) > at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127) > at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111) > at org.testng.TestRunner.privateRun(TestRunner.java:767) > at org.testng.TestRunner.run(TestRunner.java:617) > at org.testng.SuiteRunner.runTest(SuiteRunner.java:334) > at org.testng.SuiteRunner.access$000(SuiteRunner.java:37) > at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:368) > at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:744) > Caused by: 
javax.xml.stream.XMLStreamException: ParseError at [row,col]:[5,41] > Message: Unexpected element '{urn:infinispan:config:6.0}infinispan' > at org.infinispan.configuration.parsing.ParserRegistry.parseElement(ParserRegistry.java:137) > at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:121) > at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:108) > at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:95) > ... 24 more > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From dan.berindei at gmail.com Wed May 14 03:36:55 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 14 May 2014 10:36:55 +0300 Subject: [infinispan-dev] Reliability of return values In-Reply-To: <53723CD4.9060300@redhat.com> References: <53707669.7070701@redhat.com> <5370A873.8070509@redhat.com> <53723CD4.9060300@redhat.com> Message-ID: On Tue, May 13, 2014 at 6:40 PM, Radim Vansa wrote: > On 05/13/2014 03:58 PM, Dan Berindei wrote: > > > > On Mon, May 12, 2014 at 1:54 PM, Radim Vansa wrote: > >> @Dan: It's absolutely correct to do the further writes in order to make >> the cache consistent, I am not arguing against that. You've fixed the >> outcome (state of cache) well. My point was that we should let the user >> know that the value he gets is not 100% correct when we already know >> that - and given the API, the only option to do that seems to me as >> throwing an exception. >> > > The problem, as I see it, is that users also expect methods that throw > an exception to *not* modify the cache. > So we would break some of the users' expectations anyway. > > > When the response from primary owner does not arrive soon, we throw > timeout exception and the cache is modified anyway, isn't it? 
> If we throw ~ReturnValueUnreliableException, the user has at least some > chance to react. Currently, for code requiring 100% reliable value, you > can't do anything but ignore the return value, even for CAS operations. > > Yes, but we don't expect the user to handle a TimeoutException in any meaningful way. Instead, we expect the user to choose his hardware and configuration to avoid timeouts, if he cares about consistency. How could you handle an exception that tells you "I may have written the value you asked me to in the cache, or maybe not. Either way, you will never know what the previous value was. Muahahaha!" in an application that cares about consistency? But the proposed ReturnValueUnreliableException can't be avoided by the user, it has to be handled every time the cluster membership changes. So it would be more like WriteSkewException than TimeoutException. And when we throw a WriteSkewException, we don't write anything to the cache. Remember, most users do not care about the previous value at all - that's the reason why JCache and our HotRod client don't return the previous value by default. Those that do care about the previous value, use the conditional write operations, and those already work (well, except for the scenario below). So you would force everyone to handle an exception that they don't care about. It would make sense to throw an exception if we didn't return the previous value by default, and the user requested the return value explicitly. But we do return the value by default, so I don't think it would be a good idea for us. > > >> >> @Sanne: I was not suggesting that for now - sure, value versioning is (I >> hope) on the roadmap. But that's more complicated, I though just about >> making an adjustment to the current implementation. >> > > > Actually, just keeping a history of values would not fix the the return > value in all cases. 
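The conditional write operations Dan mentions make the caller's expectation explicit instead of depending on the returned previous value. A small self-contained sketch - a plain `ConcurrentHashMap` stands in for the cache here, since Infinispan's `Cache` interface extends `ConcurrentMap` and exposes the same methods:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConditionalWrites {
    public static void main(String[] args) {
        // Stand-in for a cache; Infinispan's Cache extends ConcurrentMap,
        // so the same calls apply against a real cache instance.
        ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();

        // Instead of relying on put()'s returned previous value (which a
        // retried operation may report incorrectly), state the expectation
        // and branch on the boolean/null outcome:

        // "create k only if absent"
        String existing = cache.putIfAbsent("k", "v1");
        boolean created = (existing == null);

        // "overwrite k only if it still holds v1"
        boolean replaced = cache.replace("k", "v1", "v2");

        // "remove k only if it still holds v2"
        boolean removed = cache.remove("k", "v2");

        System.out.println(created + " " + replaced + " " + removed);
    }
}
```

Note this only illustrates the API shape; whether the conditional outcome itself survives a primary-owner failover is exactly the retry problem discussed in this thread.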
> > When retrying a put on the new primary owner, the primary owner would > still have to compare our value with the latest value, and return the > previous value if they are equal. So we could have something like this: > > A is the originator, B is the primary owner, k = v0 > A -> B: put(k, v1) > B dies before writing v, C is now primary owner > D -> C: put(k, v1) // another put operation from D, with the same value > C -> D: null > A -> C: retry_put(k, v1) > C -> A: v0 // C assumes A is overwriting its own value, so it's returning > the previous one > > To fix that, we'd need a unique version generated by the originator - > kind of like a transaction id ;) > > > Is it such a problem to associate unique ID with each write? History > implementation seems to me like the more complicated part. > I also think maintaining a version history would be quite complicated, and it also would make it harder for users to estimate their cache's memory usage. That's why I was trying to show that it's not a panacea. > And to fix the HotRod use case, the HotRod client would have to be the > one generating the version. > > > I agree. > > Radim > > > > Cheers > Dan > > > > >> Radim >> >> On 05/12/2014 12:02 PM, Sanne Grinovero wrote: >> > I don't think we are in a position to decide what is a reasonable >> > compromise; we can do better. >> > For example - as Radim suggested - it might seem reasonable to have >> > the older value around for a little while. We'll need a little bit of >> > history of values and tombstones anyway for many other reasons. >> > >> > >> > Sanne >> > >> > On 12 May 2014 09:37, Dan Berindei wrote: >> >> Radim, I would contend that the first and foremost guarantee that put() >> >> makes is to leave the cache in a consistent state. So we can't just >> throw an >> >> exception and give up, leaving k=v on one owner and k=null on another. 
>> >> >> >> Secondly, put(k, v) being atomic means that it either succeeds, it >> writes >> >> k=v in the cache, and it returns the previous value, or it doesn't >> succeed, >> >> and it doesn't write k=v in the cache. Returning the wrong previous >> value is >> >> bad, but leaving k=v in the cache is just as bad, even if the all the >> owners >> >> have the same value. >> >> >> >> And last, we can't have one node seeing k=null, then k=v, then k=null >> again, >> >> when the only write we did on the cache was a put(k, v). So trying to >> undo >> >> the write would not help. >> >> >> >> In the end, we have to make a compromise, and I think returning the >> wrong >> >> value in some of the cases is a reasonable compromise. Of course, we >> should >> >> document that :) >> >> >> >> I also believe ISPN-2956 could be fixed so that HotRod behaves just >> like >> >> embedded mode after the ISPN-3422 fix, by adding a RETRY flag to the >> HotRod >> >> protocol and to the cache itself. >> >> >> >> Incidentally, transactional caches have a similar problem when the >> >> originator leaves the cluster: ISPN-3421 [1] >> >> And we can't handle transactional caches any better than >> non-transactional >> >> caches until we expose transactions to the HotRod client. >> >> >> >> [1] https://issues.jboss.org/browse/ISPN-2956 >> >> >> >> Cheers >> >> Dan >> >> >> >> >> >> >> >> >> >> On Mon, May 12, 2014 at 10:21 AM, Radim Vansa >> wrote: >> >>> Hi, >> >>> >> >>> recently I've stumbled upon one already expected behaviour (one >> instance >> >>> is [1]), but which did not got much attention. >> >>> >> >>> In non-tx cache, when the primary owner fails after the request has >> been >> >>> replicated to backup owner, the request is retried in the new >> topology. >> >>> Then, the operation is executed on the new primary (the previous >> >>> backup). The outcome has been already fixed in [2], but the return >> value >> >>> may be wrong. 
For example, when we do a put, the return value for the >> >>> second attempt will be the currently inserted value (although the >> entry >> >>> was just created). Same situation may happen for other operations. >> >>> >> >>> Currently, it's not possible to return the correct value (because it >> has >> >>> already been overwritten and we don't keep a history of values), but >> >>> shouldn't we rather throw an exception if we were not able to fulfil >> the >> >>> API contract? >> >>> >> >>> Radim >> >>> >> >>> [1] https://issues.jboss.org/browse/ISPN-2956 >> >>> [2] https://issues.jboss.org/browse/ISPN-3422 >> >>> >> >>> -- >> >>> Radim Vansa >> >>> JBoss DataGrid QA >> >>> >> >>> _______________________________________________ >> >>> infinispan-dev mailing list >> >>> infinispan-dev at lists.jboss.org >> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> >> >> _______________________________________________ >> >> infinispan-dev mailing list >> >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> -- >> Radim Vansa >> JBoss DataGrid QA >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > > > _______________________________________________ > infinispan-dev mailing listinfinispan-dev at lists.jboss.orghttps://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > -- > Radim Vansa > JBoss DataGrid QA > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140514/67d139ca/attachment-0001.html From pedro at infinispan.org Wed May 14 04:58:12 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Wed, 14 May 2014 09:58:12 +0100 Subject: [infinispan-dev] Configuration XSD missing, Infinispan not parsing v.6 configuration files ? In-Reply-To: <53731235.3040608@redhat.com> References: <53731235.3040608@redhat.com> Message-ID: <53733024.10904@infinispan.org> yes, the release process put the schema here: http://docs.jboss.org/infinispan/schemas/ Pedro On 05/14/2014 07:50 AM, Tristan Tarrant wrote: > Isn't the deployment of XSDs part of the release process ? > > Tristan > > On 14/05/2014 01:36, Sanne Grinovero wrote: >> The testing configuration files seem to point to this URL: >> http://www.infinispan.org/schemas/infinispan-config-7.0.xsd >> >> But I'm getting a 404 when attempting to find it. >> >> It would be very helpfull to make this available, as it seems >> Infinispan 7.0.0.Alpha4 is unable to read the old configuration format >> :-( >> >> Is that expected? 
>> >> Message: Unexpected element '{urn:infinispan:config:6.0}infinispan' >> at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:100) >> at org.infinispan.test.fwk.TestCacheManagerFactory.fromStream(TestCacheManagerFactory.java:106) >> at org.infinispan.test.fwk.TestCacheManagerFactory.fromXml(TestCacheManagerFactory.java:97) >> at org.infinispan.test.fwk.TestCacheManagerFactory.fromXml(TestCacheManagerFactory.java:91) >> at org.infinispan.manager.CacheManagerXmlConfigurationTest.testBatchingIsEnabled(CacheManagerXmlConfigurationTest.java:126) >> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) >> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) >> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) >> at java.lang.reflect.Method.invoke(Method.java:606) >> at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80) >> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714) >> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901) >> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231) >> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127) >> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111) >> at org.testng.TestRunner.privateRun(TestRunner.java:767) >> at org.testng.TestRunner.run(TestRunner.java:617) >> at org.testng.SuiteRunner.runTest(SuiteRunner.java:334) >> at org.testng.SuiteRunner.access$000(SuiteRunner.java:37) >> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:368) >> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64) >> at java.util.concurrent.FutureTask.run(FutureTask.java:262) >> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) >> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) >> at java.lang.Thread.run(Thread.java:744) >> Caused 
by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[5,41] >> Message: Unexpected element '{urn:infinispan:config:6.0}infinispan' >> at org.infinispan.configuration.parsing.ParserRegistry.parseElement(ParserRegistry.java:137) >> at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:121) >> at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:108) >> at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:95) >> ... 24 more >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From pedro at infinispan.org Wed May 14 06:05:08 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Wed, 14 May 2014 11:05:08 +0100 Subject: [infinispan-dev] Infinispan 7.0.0.Alpha4 is out! Message-ID: <53733FD4.7060509@infinispan.org> Hi, I'm proud to announce the Alpha4 release of Infinispan 7.0.0. More info in http://blog.infinispan.org/2014/05/infinispan-700alpha4-is-out.html Regards, Pedro From dan.berindei at gmail.com Wed May 14 07:20:29 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Wed, 14 May 2014 14:20:29 +0300 Subject: [infinispan-dev] [!] Reorganization of dependencies & release process In-Reply-To: References: Message-ID: I don't see a lot of value in doing core-only releases. Releases are expensive because we have to update the website and documentation, and we have to announce the release everywhere. Releasing only the core won't change that. Also, we don't try to maintain backwards compatibility between Alpha/Beta releases. So releasing only the core is only practical for minor/micro releases. 
OTOH, doing a maven-only release is just a matter of starting the release script on the CI machine, and doing a couple of clicks an hour later in the Nexus UI to release the staging repository. Doing a core-only, maven-only release would have about the same overhead.

Wouldn't it be enough to move the Lucene directory to a separate repository (and release schedule)? We could easily do a couple of maven-only releases to prepare for Search upgrades; I don't see any problems with that.

Cheers
Dan

On Wed, May 14, 2014 at 12:50 AM, Sanne Grinovero wrote: > This is a reboot of the thread previously started on both the > infinispan-dev and the hibernate-dev mailing list as "Handling of > mutual dependency with Infinispan" [1]. > We discussed further during the Hibernate fortnightly meeting [2], and > came to the conclusion that we need Infinispan to change how some > repositories are organised and how the release is assembled. > > # The problem > > To restate the issue, as you might painfully remember, every time > there is a need for a Lucene update or a Search update we need to sync > up for a complex dance of releases in both projects to accommodate for > a small-step iterative process to handle the circular dependency. > This problem is not too bad today as since a year we're releasing the > Lucene Directory in an unusual - and very unmaintainable - temporary > solution to be compatible with two different major versions of Apache > Lucene; namely what Infinispan Query needs and what Hibernate Search > needs are different modules. > But the party is over, and I want to finally drop support for Lucene 3 > and cleanup the unusual and unmaintainable build mess targeting a > single Lucene version only.
> As soon as we converge to building a single version, however, we're
> back to the complex problem we had when we supported a single version,
> which is handling a circular dependency - except that the problem has
> worsened lately: the Lucene project has been more active and more
> inclined than it used to be to break both internal and public APIs.
>
> In short, we have a circular dependency between Hibernate Search and
> Infinispan which we've been able to handle via hacks and some luck,
> but it poses a serious threat to development flexibility, and the
> locked-in release process is not desirable either.
>
> # The solution
>
> In conclusion, we think there's a single "proper" way out, and it also
> happens to provide some very interesting side effects in terms of
> maintenance overhead for everyone: Infinispan Core needs to release
> independently from the non-core modules.
> This would have the Lucene Directory depend on a released tag of
> infinispan-core, and be able to be released independently.
> The minor benefits:
> - we often don't make any change in the Lucene Directory, yet we still
> need to release it.
> - when I actually need a release of it, I'm currently begging for a
> quick release of Infinispan: very costly.
> The big ones:
> - we can manage the Lucene Directory to provide support for different
> versions of Lucene without necessarily breaking other modules
> - we can release quickly what's needed to move Search ahead in terms
> of Lucene versions without needing to make the Infinispan Query module
> compatible at the same time (in case you haven't followed this area:
> this seems to be my main activity rather than making valuable stuff).
>
> The goal is of course to linearise the dependencies; it seems to also
> simplify some of our tasks, which is a welcome side-effect. I expect it
> also to make the project less scary for new contributors.
>
> # How does it impact users
>
> ## Maven users
> Modules will continue to be modules..
> I guess nobody will notice, other than that we might have a different
> versioning scheme, but we help people out via the Infinispan BOM.
>
> ## Distribution users
> There should be no difference, other than (again) that some jars might
> not be aligned in terms of version. But that's probably even less of a
> problem, as I expect distribution users to just put what they get on
> their classpath.
>
> # How it impacts us
>
> 1) I'll move the Lucene Directory project to a different repository;
> same for the Query-related components.
> I think you should/could consider the same for other components, based
> on ad-hoc considerations of the trade-offs, but ultimately I'd expect
> to see more frequent, "core only" releases.
>
> 2) We'll have different kinds of releases: the "core only" and the
> "full releases".
> I think we'll also see components being released independently, but
> these are either Maven-only or meant for the preparation of other
> components, or preparation for a "full release".
>
> 3) Tests (!)
> Such a move should in no way relax the regression-safety of
> infinispan-core: we still need to consider it unacceptable for a core
> change to break one of the modules moving out of the main tree.
> Personally I think I've pushed many tests about problems found in the
> "query modules" as unit tests in core, so that should be relatively
> safe, but it also happened that someone would "tune" these.
> I realise it's not practical to expect people to run the tests of
> downstream modules, so we'll have to automate most of these tasks in
> CI.
> Careful on perception: if today there are three levels of defence
> against a regression (the author, the reviewer and CI all running the
> suite for each change), in such an organisation you have only one. So
> ignoring a CI failure as a "probable hiccup" could be much more
> dangerous than usual.
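(Editorial aside on the Infinispan BOM mentioned under "Maven users" above: a consumer would import it roughly as below. The coordinates are illustrative, and the version is simply the Alpha4 announced earlier in this digest.)

```xml
<!-- Importing the Infinispan BOM in a consumer's pom.xml so individual
     module versions need not be spelled out even when they diverge.
     groupId/artifactId are the usual convention; version is an example. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.infinispan</groupId>
      <artifactId>infinispan-bom</artifactId>
      <version>7.0.0.Alpha4</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```

With the BOM imported, module dependencies such as `infinispan-core` can then be declared without a version, which is what keeps a mixed-version release manageable for users.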
>
> # When
>
> Doing this _might_ be a blocker for any Lucene update; since one just
> happened, I'll probably have no urgent need for a couple of weeks at
> least.
> But we shouldn't be in a position in which an update is not possible,
> so I hope we'll agree to implement this sooner rather than later, so
> we won't have to do it during an emergency.
>
> Also, while this might sound a bit crazy at first, I see many
> flexibility benefits which can't hurt now that the project is getting
> larger and more complex to release.
> Not least, having a micro release of "Infinispan essentials" would be
> very welcome in terms of lowering the initial barrier; this was
> proposed at various meetings and highly endorsed by many, but it just
> never happened.
>
> Any comment please? I hope I covered it all, and sorry for that :D
>
> Cheers,
> Sanne
>
> 1 - http://lists.jboss.org/pipermail/hibernate-dev/2014-May/011419.html
> 2 - http://transcripts.jboss.org/meeting/irc.freenode.org/hibernate-dev/2014/hibernate-dev.2014-05-13-13.24.log.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140514/b6a64406/attachment.html

From anistor at redhat.com  Wed May 14 08:19:58 2014
From: anistor at redhat.com (Adrian Nistor)
Date: Wed, 14 May 2014 15:19:58 +0300
Subject: [infinispan-dev] [!] Reorganization of dependencies & release process
In-Reply-To: 
References: 
Message-ID: <53735F6E.7000807@redhat.com>

+1 for moving the Infinispan Lucene directory out.

But why move Query components out? And which ones did you have in mind?
On 05/14/2014 12:50 AM, Sanne Grinovero wrote:
> [...]

From sanne at infinispan.org  Wed May 14 08:24:06 2014
From: sanne at infinispan.org (Sanne Grinovero)
Date: Wed, 14 May 2014 13:24:06 +0100
Subject: [infinispan-dev] [!] Reorganization of dependencies & release process
In-Reply-To: 
References: 
Message-ID: 

On 14 May 2014 12:20, Dan Berindei wrote:
> I don't see a lot of value in doing core-only releases. [...]
>
> Wouldn't it be enough to move the Lucene directory to a separate repository
> (and release schedule)? We could easily do a couple of maven-only releases
> to prepare for Search upgrades, I don't see any problems with that.

Infinispan Query uses the Lucene Directory.
So if you move out only LD - and keep Query - but make any change in
Infinispan Core which breaks the Directory (and this isn't as unlikely
as we'd like; but even if it were, the point is that it's not
impossible), then you wouldn't be able to ship an Infinispan Core
release, as the Query functionality would be broken.

An obvious answer would be to move Query out as well.. but more and
more modules are depending on it every day.
Not sure if that's a good thing, but I'm sure that making our release
process easier is not a good reason to avoid providing useful features.

At some point, to simplify configuration parsing and validation, we even
considered moving Query into Infinispan Core - just to remind you how
pervasive this is.

We can only break the cycle if you allow for an acyclic dependency
graph, so nothing in the Infinispan release which is created by tagging
infinispan-core can depend on infinispan-lucene-directory (directly or
indirectly).
To me that necessarily implies that a lot of modules - at least all of
those somehow depending on queries / Lucene - need to be released
separately from infinispan-core.

And as mentioned, this has other welcome side-effects; for one, the
highly discussed, desired "small" release of Infinispan would, I think,
make life much easier for newcomers, avoiding scaring away users and
contributors alike.
I also like the fact that we can release smaller things more
frequently, and when major APIs have to change, we can split the work
into smaller iterations rather than one developer having to do all the
heavy lifting across all modules.

Sanne

> Cheers
> Dan
>
> On Wed, May 14, 2014 at 12:50 AM, Sanne Grinovero wrote:
>> [...]

From sanne at infinispan.org  Wed May 14 08:32:11 2014
From: sanne at infinispan.org (Sanne Grinovero)
Date: Wed, 14 May 2014 13:32:11 +0100
Subject: [infinispan-dev] Configuration XSD missing, Infinispan not parsing v.6 configuration files ?
In-Reply-To: <53733024.10904@infinispan.org>
References: <53731235.3040608@redhat.com> <53733024.10904@infinispan.org>
Message-ID: 

On 14 May 2014 09:58, Pedro Ruivo wrote:
> yes, the release process put the schema here:
> http://docs.jboss.org/infinispan/schemas/

Thanks, I'll use that. But shouldn't we have it at the URL I posted too?

And is it really expected that we don't parse old-style configuration
files? That's quite annoying for users. If it's the intended plan, I'm
not against it, but we'd need some clear warning and also some guidance
for an upgrade.

(BTW great job on the documentation updates)

Cheers,
Sanne

> Pedro
>
> On 05/14/2014 07:50 AM, Tristan Tarrant wrote:
>> Isn't the deployment of XSDs part of the release process ?
>>
>> Tristan
>>
>> On 14/05/2014 01:36, Sanne Grinovero wrote:
>>> The testing configuration files seem to point to this URL:
>>> http://www.infinispan.org/schemas/infinispan-config-7.0.xsd
>>>
>>> But I'm getting a 404 when attempting to find it.
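(Editorial aside: the parse failure discussed in this thread comes down to the namespace declaration - the rejected files declare `urn:infinispan:config:6.0`, while Alpha4 expects 7.0. A minimal sketch of a 7.0-style declaration, using the schema URL Sanne cites; note the 7.0 format also reorganised elements, so the namespace bump alone may not be enough.)

```xml
<!-- Files declaring xmlns="urn:infinispan:config:6.0" trigger the
     "Unexpected element '{urn:infinispan:config:6.0}infinispan'" error
     seen in this thread.  A 7.0-era declaration looks roughly like: -->
<infinispan xmlns="urn:infinispan:config:7.0"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="urn:infinispan:config:7.0
                http://www.infinispan.org/schemas/infinispan-config-7.0.xsd">
  <!-- cache definitions go here; 7.0 also changed element names and
       nesting, so a 6.0 file needs more than this namespace change -->
</infinispan>
```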
>>>
>>> It would be very helpful to make this available, as it seems
>>> Infinispan 7.0.0.Alpha4 is unable to read the old configuration format
>>> :-(
>>>
>>> Is that expected?
>>>
>>> Message: Unexpected element '{urn:infinispan:config:6.0}infinispan'
>>> at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:100)
>>> at org.infinispan.test.fwk.TestCacheManagerFactory.fromStream(TestCacheManagerFactory.java:106)
>>> at org.infinispan.test.fwk.TestCacheManagerFactory.fromXml(TestCacheManagerFactory.java:97)
>>> at org.infinispan.test.fwk.TestCacheManagerFactory.fromXml(TestCacheManagerFactory.java:91)
>>> at org.infinispan.manager.CacheManagerXmlConfigurationTest.testBatchingIsEnabled(CacheManagerXmlConfigurationTest.java:126)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:606)
>>> at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80)
>>> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
>>> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
>>> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
>>> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
>>> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
>>> at org.testng.TestRunner.privateRun(TestRunner.java:767)
>>> at org.testng.TestRunner.run(TestRunner.java:617)
>>> at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
>>> at org.testng.SuiteRunner.access$000(SuiteRunner.java:37)
>>> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:368)
>>> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> at java.lang.Thread.run(Thread.java:744)
>>> Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[5,41]
>>> Message: Unexpected element '{urn:infinispan:config:6.0}infinispan'
>>> at org.infinispan.configuration.parsing.ParserRegistry.parseElement(ParserRegistry.java:137)
>>> at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:121)
>>> at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:108)
>>> at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:95)
>>> ... 24 more

From emmanuel at hibernate.org  Wed May 14 08:44:56 2014
From: emmanuel at hibernate.org (Emmanuel Bernard)
Date: Wed, 14 May 2014 14:44:56 +0200
Subject: [infinispan-dev] [!] Reorganization of dependencies & release process
In-Reply-To: 
References: 
Message-ID: <2925FE4E-6782-47B0-BDED-D1EF9C3DA7EF@hibernate.org>

Let's not overload the subject here. We can first approach the problem
the way Dan mentions: tag core in git and release it in Maven. Then
every other Infinispan bit is tagged and released as one, depending on
that core tag. That makes for one marketing release (website, blog etc.).
You can also consider that releasing Infinispan core makes sense
marketing-wise, but that is another story we should keep separate.

> On 14 mai 2014, at 14:24, Sanne Grinovero wrote:
> [...]
> > We can only break the cycle if you allow for an acyclic dependency > graph, so nothing in the Infinispan release which is created by > tagging infinispan-core can depend on infinispan-lucene-directory > (directly or indirectly). > To me that implies necessarily that a lot of modules - at least all of > those somehow depending on queries / lucene - need to be released > separately from infinispan-core. > > And as mentioned this has other welcome side-effects; for one the > highly discussed, desired "small" release of Infinispan would I think > make life much easier to newcomers, avoiding to scare away users but > also contributors. > I also like the fact that we can release smaller things more > frequently, and when major APIs have to change, we can split the work > in smaller iterations rather than one developer having to do all the > heavy lifting across all modules. > > Sanne > >> >> Cheers >> Dan >> >> >> On Wed, May 14, 2014 at 12:50 AM, Sanne Grinovero >> wrote: >>> >>> This is a reboot of the thread previously started on both the >>> infinispan-dev and the hibernate-dev mailing list as "Handling of >>> mutual dependency with Infinispan" [1]. >>> We discussed further during the Hibernate fortnightly meeting [2], and >>> came to the conclusion that we need Infinispan to change how some >>> repositories are organised and how the release is assembled. >>> >>> # The problem >>> >>> To restate the issue, as you might painfully remember, every time >>> there is a need for a Lucene update or a Search update we need to sync >>> up for a complex dance of releases in both projects to accommodate for >>> a small-step iterative process to handle the circular dependency. 
>>> This problem is not too bad today as since a year we're releasing the >>> Lucene Directory in an unusual - and very unmaintainable - temporary >>> solution to be compatible with two different major versions of Apache >>> Lucene; namely what Infinispan Query needs and what Hibernate Search >>> needs are different modules. >>> But the party is over, and I want to finally drop support for Lucene 3 >>> and cleanup the unusual and unmaintainable build mess targeting a >>> single Lucene version only. >>> As soon as we converge to building a single version however - we're >>> back to the complex problem we had when we supported a single version >>> which is handling of a circular dependency - just that the problem has >>> worsened lately the Lucene project has been more active and more >>> inclined than what it used to be to break both internal and public >>> APIs. >>> >>> In short, we have a circular dependency between Hibernate Search and >>> Infinispan which we've been able to handle via hacks and some luck, >>> but it imposes a serious threat to development flexibility, and the >>> locked-in release process is not desirable either. >>> >>> # The solution >>> >>> we think in conclusion there's a single "proper" way out, and it also >>> happens to provide some very interesting side effects in terms of >>> maintenance overhead for everyone: Infinispan Core needs to release >>> independently from the non-core modules. >>> This would have the Lucene Directory depend on a released tag of >>> infinispan-core, and be able to be released independently. >>> Minor situations with benefit: >>> - we often don't make any change in the Lucene Directory, still we >>> need to release it. 
>>> - when I actually need a release of it, I'm currently begging for a >>> quick release of Infinispan: very costly >>> The Big Ones: >>> - we can manage the Lucene Directory to provide support for different >>> versions of Lucene without necessarily breaking other modules >>> - we can release quickly what's needed to move Search ahead in terms >>> of Lucene versions without needing to make the Infinispan Query module >>> compatible at the same time (in case you haven't followed this area: >>> this seems to be my main activity rather than making valuable stuff). >>> >>> The goal is of course to linearise the dependencies; it seems to also >>> simplify some of our tasks which is a welcome side-effect. I expect it >>> also to make the project less scary for new contributors. >>> >>> # How does it impact users >>> >>> ## Maven users >>> modules will continue to be modules.. I guess nobody will notice, >>> other than we might have a different versioning scheme, but we help >>> people out via the Infinispan BOM. >>> >>> ## Distribution users >>> There should be no difference, other than (as well) some jars might >>> not be aligned in terms of version. But that's probably even less of a >>> problem, as I expect distribution users to just put what they get on >>> their classpath. >>> >>> # How it impacts us >>> >>> 1) I'll move the Lucene Directory project to an different repository; >>> same for the Query related components. >>> I think you should/could consider the same for other components, based >>> on ad-hoc considerations of the trade offs, but I'd expect ultimately >>> to see a more frequent and "core only" release. >>> >>> 2) We'll have different kinds of releases: the "core only" and the >>> "full releases". >>> I think we'll also see components being released independently, but >>> these are either Maven-only or meant for preparation of other >>> components, or preparation for a "full release". >>> >>> 3) Tests (!) 
>>> Such a move should in no way relax the regression-safety of >>> infinispan-core: we still need to consider it unacceptable for a core >>> change to break one of the modules moving out of the main tree. >>> Personally I think I've pushed many tests for problems found in the >>> "query modules" as unit tests in core, so that should be relatively >>> safe, but it also happened that someone would "tune" these. >>> I realise it's not practical to expect people to run tests of >>> downstream modules, so we'll have to automate most of these tasks in >>> CI. >>> Be careful about perception: if today there are three levels of defence >>> against a regression (the author, the reviewer and CI all running the >>> suite for each change), in such an organisation you have only one. So >>> ignoring a CI failure as a "probable hiccup" could be much more >>> dangerous than usual. >>> >>> # When >>> >>> Doing this _might_ be a blocker for any Lucene update; since one >>> just happened, I'll probably have no urgent need for a couple of weeks >>> at least. >>> But we shouldn't be in a position in which an update might not be >>> possible, so I hope we'll agree to implement this sooner rather than >>> later, so we won't have to do it during an emergency. >>> >>> Also, while this might sound a bit crazy at first, I see many >>> flexibility benefits, which can't hurt now that the project is getting >>> larger and more complex to release. >>> Not least, having a micro release of "Infinispan essentials" would be >>> very welcome in terms of lowering the initial barrier; this was proposed >>> at various meetings and highly endorsed by many but just never >>> happened. >>> >>> Any comment please?
I hope I covered it all, and sorry for that :D >>> >>> Cheers, >>> Sanne >>> >>> >>> 1 - http://lists.jboss.org/pipermail/hibernate-dev/2014-May/011419.html >>> 2 - >>> http://transcripts.jboss.org/meeting/irc.freenode.org/hibernate-dev/2014/hibernate-dev.2014-05-13-13.24.log.html >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Wed May 14 08:45:04 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 14 May 2014 13:45:04 +0100 Subject: [infinispan-dev] [!] Reorganization of dependencies & release process In-Reply-To: <53735F6E.7000807@redhat.com> References: <53735F6E.7000807@redhat.com> Message-ID: On 14 May 2014 13:19, Adrian Nistor wrote: > +1 for moving Infinispan lucene directory out > > But why move Query components out? And which ones did you have in mind? Because it depends on the Directory, but also on Hibernate Search, so it "mandates" a specific version of Apache Lucene. By moving these out of the core release, we can make Infinispan (core) releases independent of specific Lucene versions. Lucene has lately been quite aggressive with changes, and this way we can have a single point of surgery at a time: the Directory first, tag it; Search, tag it; then Infinispan Query and the Infinispan full release. Note how each component has the freedom to choose to *not* update Lucene or *not* update Infinispan core, if it needs to make unrelated fixes available early on.
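To make the "depend on a released tag of infinispan-core" idea concrete, a POM fragment for the externalized Lucene Directory module could look roughly like this (illustrative sketch only; the version shown is just an example of pinning an already-released core instead of sharing the in-development reactor version):

```xml
<!-- Hypothetical fragment: the Lucene Directory module pins a released
     infinispan-core, so it can be released on its own schedule. -->
<dependency>
  <groupId>org.infinispan</groupId>
  <artifactId>infinispan-core</artifactId>
  <version>7.0.0.Alpha4</version>
</dependency>
```

Maven users would then pick consistent versions of core and the non-core modules through the Infinispan BOM rather than by assuming all artifacts share one version number.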
So I think it's necessary to move all Query components out too; it seems like the only way out. A more open question is whether other components would benefit from a similar model. I think so, although the reasons are less urgent. Sanne > > On 05/14/2014 12:50 AM, Sanne Grinovero wrote: >> This is a reboot of the thread previously started on both the >> infinispan-dev and the hibernate-dev mailing list as "Handling of >> mutual dependency with Infinispan" [1]. >> We discussed further during the Hibernate fortnightly meeting [2], and >> came to the conclusion that we need Infinispan to change how some >> repositories are organised and how the release is assembled. >> >> # The problem >> >> To restate the issue, as you might painfully remember, every time >> there is a need for a Lucene update or a Search update we need to sync >> up for a complex dance of releases in both projects to accommodate >> a small-step iterative process to handle the circular dependency. >> This problem is not too bad today because for the past year we've been >> releasing the Lucene Directory in an unusual - and very unmaintainable - >> temporary solution to be compatible with two different major versions of >> Apache Lucene; namely, what Infinispan Query needs and what Hibernate Search >> needs are different modules. >> But the party is over, and I want to finally drop support for Lucene 3 >> and clean up the unusual and unmaintainable build mess, targeting a >> single Lucene version only. >> As soon as we converge on building a single version, however, we're >> back to the complex problem we had when we supported a single version - >> handling a circular dependency - except that the problem has >> worsened lately, as the Lucene project has been more active and more >> inclined than it used to be to break both internal and public >> APIs.
>> >> In short, we have a circular dependency between Hibernate Search and >> Infinispan which we've been able to handle via hacks and some luck, >> but it poses a serious threat to development flexibility, and the >> locked-in release process is not desirable either. >> >> # The solution >> >> In conclusion, we think there's a single "proper" way out, and it also >> happens to provide some very interesting side effects in terms of >> maintenance overhead for everyone: Infinispan Core needs to release >> independently from the non-core modules. >> This would have the Lucene Directory depend on a released tag of >> infinispan-core, and be able to be released independently. >> Minor benefits: >> - we often don't make any change in the Lucene Directory, yet we still >> need to release it. >> - when I actually need a release of it, I'm currently begging for a >> quick release of Infinispan: very costly >> The Big Ones: >> - we can manage the Lucene Directory to provide support for different >> versions of Lucene without necessarily breaking other modules >> - we can release quickly what's needed to move Search ahead in terms >> of Lucene versions without needing to make the Infinispan Query module >> compatible at the same time (in case you haven't followed this area: >> this seems to be my main activity rather than making valuable stuff). >> >> The goal is of course to linearise the dependencies; it seems to also >> simplify some of our tasks, which is a welcome side-effect. I expect it >> also to make the project less scary for new contributors. >> >> # How does it impact users >> >> ## Maven users >> Modules will continue to be modules... I guess nobody will notice, >> other than we might have a different versioning scheme, but we help >> people out via the Infinispan BOM. >> >> ## Distribution users >> There should be no difference, other than, likewise, some jars might >> not be aligned in terms of version.
But that's probably even less of a >> problem, as I expect distribution users to just put what they get on >> their classpath. >> >> # How it impacts us >> >> 1) I'll move the Lucene Directory project to a different repository; >> same for the Query-related components. >> I think you should/could consider the same for other components, based >> on ad-hoc considerations of the trade-offs, but I'd expect ultimately >> to see more frequent, "core only" releases. >> >> 2) We'll have different kinds of releases: the "core only" and the >> "full" releases. >> I think we'll also see components being released independently, but >> these are either Maven-only or meant as preparation for other >> components, or preparation for a "full release". >> >> 3) Tests (!) >> Such a move should in no way relax the regression-safety of >> infinispan-core: we still need to consider it unacceptable for a core >> change to break one of the modules moving out of the main tree. >> Personally I think I've pushed many tests for problems found in the >> "query modules" as unit tests in core, so that should be relatively >> safe, but it also happened that someone would "tune" these. >> I realise it's not practical to expect people to run tests of >> downstream modules, so we'll have to automate most of these tasks in >> CI. >> Be careful about perception: if today there are three levels of defence >> against a regression (the author, the reviewer and CI all running the >> suite for each change), in such an organisation you have only one. So >> ignoring a CI failure as a "probable hiccup" could be much more >> dangerous than usual. >> >> # When >> >> Doing this _might_ be a blocker for any Lucene update; since one >> just happened, I'll probably have no urgent need for a couple of weeks >> at least.
>> But we shouldn't be in a position in which an update might not be >> possible, so I hope we'll agree to implement this sooner rather than >> later, so we won't have to do it during an emergency. >> >> Also, while this might sound a bit crazy at first, I see many >> flexibility benefits, which can't hurt now that the project is getting >> larger and more complex to release. >> Not least, having a micro release of "Infinispan essentials" would be >> very welcome in terms of lowering the initial barrier; this was proposed >> at various meetings and highly endorsed by many but just never >> happened. >> >> Any comment please? I hope I covered it all, and sorry for that :D >> >> Cheers, >> Sanne >> >> >> 1 - http://lists.jboss.org/pipermail/hibernate-dev/2014-May/011419.html >> 2 - http://transcripts.jboss.org/meeting/irc.freenode.org/hibernate-dev/2014/hibernate-dev.2014-05-13-13.24.log.html >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From pedro at infinispan.org Wed May 14 09:00:17 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Wed, 14 May 2014 14:00:17 +0100 Subject: [infinispan-dev] Configuration XSD missing, Infinispan not parsing v.6 configuration files ? In-Reply-To: References: <53731235.3040608@redhat.com> <53733024.10904@infinispan.org> Message-ID: <537368E1.6020206@infinispan.org> Hi Sanne, I have added the 7.0 schemas to infinispan.org/schemas. About the 6.0 parsing, yes, I think it was intended not to be parsed :P I think I saw some warning about this change but I can't find it (so maybe I was dreaming). So I agree with you.
We need an upgrade guide (however, the documentation already uses the new style). Cheers, Pedro On 05/14/2014 01:32 PM, Sanne Grinovero wrote: > On 14 May 2014 09:58, Pedro Ruivo wrote: >> yes, the release process puts the schema here: >> http://docs.jboss.org/infinispan/schemas/ > > Thanks, I'll use that. But shouldn't we have it at the URL I posted too?? > > And is it really expected that we don't parse old-style configuration files? > That's quite annoying for users. If it's the intended plan, I'm not > against it, but we'd need some clear warning and also some guidance for > an upgrade. > (BTW great job on the documentation updates) > > Cheers, > Sanne > >> >> Pedro >> >> On 05/14/2014 07:50 AM, Tristan Tarrant wrote: >>> Isn't the deployment of XSDs part of the release process? >>> >>> Tristan >>> >>> On 14/05/2014 01:36, Sanne Grinovero wrote: >>>> The testing configuration files seem to point to this URL: >>>> http://www.infinispan.org/schemas/infinispan-config-7.0.xsd >>>> >>>> But I'm getting a 404 when attempting to find it. >>>> >>>> It would be very helpful to make this available, as it seems >>>> Infinispan 7.0.0.Alpha4 is unable to read the old configuration format >>>> :-( >>>> >>>> Is that expected?
>>>> >>>> Message: Unexpected element '{urn:infinispan:config:6.0}infinispan' >>>> at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:100) >>>> at org.infinispan.test.fwk.TestCacheManagerFactory.fromStream(TestCacheManagerFactory.java:106) >>>> at org.infinispan.test.fwk.TestCacheManagerFactory.fromXml(TestCacheManagerFactory.java:97) >>>> at org.infinispan.test.fwk.TestCacheManagerFactory.fromXml(TestCacheManagerFactory.java:91) >>>> at org.infinispan.manager.CacheManagerXmlConfigurationTest.testBatchingIsEnabled(CacheManagerXmlConfigurationTest.java:126) >>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) >>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) >>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) >>>> at java.lang.reflect.Method.invoke(Method.java:606) >>>> at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80) >>>> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714) >>>> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901) >>>> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231) >>>> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127) >>>> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111) >>>> at org.testng.TestRunner.privateRun(TestRunner.java:767) >>>> at org.testng.TestRunner.run(TestRunner.java:617) >>>> at org.testng.SuiteRunner.runTest(SuiteRunner.java:334) >>>> at org.testng.SuiteRunner.access$000(SuiteRunner.java:37) >>>> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:368) >>>> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64) >>>> at java.util.concurrent.FutureTask.run(FutureTask.java:262) >>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) >>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) >>>> 
at java.lang.Thread.run(Thread.java:744) >>>> Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[5,41] >>>> Message: Unexpected element '{urn:infinispan:config:6.0}infinispan' >>>> at org.infinispan.configuration.parsing.ParserRegistry.parseElement(ParserRegistry.java:137) >>>> at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:121) >>>> at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:108) >>>> at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:95) >>>> ... 24 more >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From sanne at infinispan.org Wed May 14 11:16:12 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 14 May 2014 16:16:12 +0100 Subject: [infinispan-dev] Configuration XSD missing, Infinispan not parsing v.6 configuration files ? In-Reply-To: <537368E1.6020206@infinispan.org> References: <53731235.3040608@redhat.com> <53733024.10904@infinispan.org> <537368E1.6020206@infinispan.org> Message-ID: On 14 May 2014 14:00, Pedro Ruivo wrote: > Hi Sanne, > > I have added the 7.0 schemas to infinispan.org/schemas. Thanks! 
> > About the 6.0 parsing, yes, I think it was intended not to be parsed :P I > think I saw some warning about this change but I can't find it (so maybe > I was dreaming). So I agree with you. We need an upgrade guide (however, > the documentation already uses the new style). Yes, the docs are very nice. I guess my problem is just that it's an unexpected issue; I somehow guessed it from the stack trace, but I don't think everyone will be as familiar with this kind of error message. I'll open an issue to improve the error; if someone ever volunteers to also write a couple of lines on how to migrate, we could link that page from the error message. > > Cheers, > Pedro > > On 05/14/2014 01:32 PM, Sanne Grinovero wrote: >> On 14 May 2014 09:58, Pedro Ruivo wrote: >>> yes, the release process puts the schema here: >>> http://docs.jboss.org/infinispan/schemas/ >> >> Thanks, I'll use that. But shouldn't we have it at the URL I posted too?? >> >> And is it really expected that we don't parse old-style configuration files? >> That's quite annoying for users. If it's the intended plan, I'm not >> against it, but we'd need some clear warning and also some guidance for >> an upgrade. >> (BTW great job on the documentation updates) >> >> Cheers, >> Sanne >> >>> >>> Pedro >>> >>> On 05/14/2014 07:50 AM, Tristan Tarrant wrote: >>>> Isn't the deployment of XSDs part of the release process? >>>> >>>> Tristan >>>> >>>> On 14/05/2014 01:36, Sanne Grinovero wrote: >>>>> The testing configuration files seem to point to this URL: >>>>> http://www.infinispan.org/schemas/infinispan-config-7.0.xsd >>>>> >>>>> But I'm getting a 404 when attempting to find it. >>>>> >>>>> It would be very helpful to make this available, as it seems >>>>> Infinispan 7.0.0.Alpha4 is unable to read the old configuration format >>>>> :-( >>>>> >>>>> Is that expected?
>>>>> >>>>> Message: Unexpected element '{urn:infinispan:config:6.0}infinispan' >>>>> at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:100) >>>>> at org.infinispan.test.fwk.TestCacheManagerFactory.fromStream(TestCacheManagerFactory.java:106) >>>>> at org.infinispan.test.fwk.TestCacheManagerFactory.fromXml(TestCacheManagerFactory.java:97) >>>>> at org.infinispan.test.fwk.TestCacheManagerFactory.fromXml(TestCacheManagerFactory.java:91) >>>>> at org.infinispan.manager.CacheManagerXmlConfigurationTest.testBatchingIsEnabled(CacheManagerXmlConfigurationTest.java:126) >>>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) >>>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) >>>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) >>>>> at java.lang.reflect.Method.invoke(Method.java:606) >>>>> at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80) >>>>> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714) >>>>> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901) >>>>> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231) >>>>> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127) >>>>> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111) >>>>> at org.testng.TestRunner.privateRun(TestRunner.java:767) >>>>> at org.testng.TestRunner.run(TestRunner.java:617) >>>>> at org.testng.SuiteRunner.runTest(SuiteRunner.java:334) >>>>> at org.testng.SuiteRunner.access$000(SuiteRunner.java:37) >>>>> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:368) >>>>> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64) >>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:262) >>>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) >>>>> at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) >>>>> at java.lang.Thread.run(Thread.java:744) >>>>> Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[5,41] >>>>> Message: Unexpected element '{urn:infinispan:config:6.0}infinispan' >>>>> at org.infinispan.configuration.parsing.ParserRegistry.parseElement(ParserRegistry.java:137) >>>>> at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:121) >>>>> at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:108) >>>>> at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:95) >>>>> ... 24 more >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From mudokonman at gmail.com Thu May 15 09:08:38 2014 From: mudokonman at gmail.com (William Burns) Date: Thu, 15 May 2014 09:08:38 -0400 Subject: [infinispan-dev] Clustered Listener In-Reply-To: <537219A9.1060301@unine.ch> References: <537219A9.1060301@unine.ch> Message-ID: On Tue, May 13, 2014 at 9:10 AM, Pierre Sutra wrote: > Hello, > > As part of the LEADS project, we have been using 
recently the clustered > listeners API. In our use case, the application is employing a few > thousand listeners, constantly installing and un-installing them. Are you talking about non-clustered listeners? It seems unlikely you would need so many cluster listeners. Cluster listeners should allow you to install only a small number of them; usually you would only have additional ones if you have a Filter applied limiting which keys/values are returned. > The > overall picture is that things work smoothly up to a few hundred > listeners, but beyond that the cost is high due to the full replication > schema. To sidestep this issue, we have added a mechanism that allows > listening only to a single key. Is the KeyFilter or KeyValueFilter not sufficient for this? void addListener(Object listener, KeyFilter filter); void addListener(Object listener, KeyValueFilter filter, Converter converter); Also note that if you are doing any kind of translation of the value to another value, it is recommended to do that via the supplied Converter. This can give good performance, as the conversion is done on the target node and not all on one node, and you can also reduce the payload if the resultant value has a serialized form that is smaller than the original value. > In such a case, the listener is solely > installed at the key owners. This greatly helps the scalability of the > mechanism at the cost of fault-tolerance since, in the current state of > the implementation, listeners are not forwarded to new data owners. > Since as a next step [1] it is planned to handle topology changes, do you > also plan to support key- (or key-range-)specific listeners? These should be covered by the two overloads I mentioned above. This should be the most performant way, as the filter is replicated to the node upon installation, so it is a one-time cost. But if a key/value pair doesn't pass the filter, the event is not sent to the node where the listener is installed.
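The key-specific filtering idea behind those overloads can be sketched with a standalone toy (this is not the real Infinispan API; the `KeyFilter` interface and helper names below are invented stand-ins, just to show how a filter installed with a listener limits which events ever reach it):

```java
// Standalone sketch only - NOT the real Infinispan API. The KeyFilter
// interface mirrors the shape of the addListener(listener, filter)
// overload quoted above.
import java.util.ArrayList;
import java.util.List;

public class KeyFilterSketch {

    // Illustrative stand-in for Infinispan's KeyFilter.
    interface KeyFilter {
        boolean accept(Object key);
    }

    // A filter accepting a single key: a listener installed with it is only
    // notified about events for that key, so events for all other keys never
    // have to travel to the listener's node.
    static KeyFilter singleKey(Object watched) {
        return key -> watched.equals(key);
    }

    // Simulated dispatch: only keys accepted by the filter are delivered.
    static List<Object> dispatch(KeyFilter filter, List<Object> modifiedKeys) {
        List<Object> delivered = new ArrayList<>();
        for (Object key : modifiedKeys) {
            if (filter.accept(key)) {
                delivered.add(key);
            }
        }
        return delivered;
    }

    public static void main(String[] args) {
        KeyFilter filter = singleKey("user:42");
        // prints [user:42]
        System.out.println(dispatch(filter, List.of("user:1", "user:42", "user:7")));
    }
}
```

With the actual API, the same single-key filter would simply be passed to one of the `addListener` overloads quoted above, which is what makes per-key listeners cheap compared to unfiltered cluster listeners.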
> Besides, > regarding this last point and the current state of the implementation, I > would have liked to know what is the purpose of the re-installation of > the cluster listener in case of a view change in the addedListener() > method of the CacheNotifierImpl class. This isn't a re-installation. This is used to propagate the RemoteClusterListener to the other nodes, so that when a new event is generated it can see that and subsequently send it back to the node where the listener is installed. There is also a second check in there in case a new node joins in the middle. > Many thanks in advance. No problem, glad you guys are testing out this feature already :) > > Best, > Pierre Sutra > > [1] > https://github.com/infinispan/infinispan/wiki/Clustered-listeners#handling-topology-changes > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Thu May 15 09:29:17 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Thu, 15 May 2014 14:29:17 +0100 Subject: [infinispan-dev] Welcome to Gustavo Message-ID: Hi all, today we finally have Gustavo joining us as a full time engineer on Infinispan. He worked with Tristan and myself in Italy before we came to Red Hat, and was already a Lucene expert back then. He then joined Red Hat as a consultant but that didn't last too long: he was too good and customers wanted him to travel an unreasonable amount. So he has been lost for a couple of years, but wisely spent them to deepen his skills in devops, more of Lucene but now in larger scale and distributed environments: a bit of JGroups, Infinispan and Hibernate Search and even some Scala, but also experience with MongoDB, Hadoop, Elastic Search and Solr so I'm thrilled to have this great blend of competences now available full time to improve the Search experience of Infinispan users. Welcome!
He's gustavonalle on both IRC and GitHub. Sanne From ttarrant at redhat.com Thu May 15 09:36:19 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 15 May 2014 15:36:19 +0200 Subject: [infinispan-dev] Welcome to Gustavo In-Reply-To: References: Message-ID: <5374C2D3.2000106@redhat.com> Welcome Gustavo, good to see you (again) !!!!! Tristan On 15/05/2014 15:29, Sanne Grinovero wrote: > Hi all, > today we finally have Gustavo joining us as a full time engineer on Infinispan. > > He worked with Tristan and myself in Italy before we came to Red Hat, > and was already a Lucene expert back then. He then joined Red Hat as a > consultant but that didn't last too long: he was too good and > customers wanted him to travel an unreasonable amount. > > So he has been lost for a couple of years, but wisely spent them to > deepen his skills in devops, more of Lucene but now in larger scale > and distributed environments: a bit of JGroups, Infinispan and > Hibernate Search and even some Scala, but also experience with > MongoDB, Hadoop, Elastic Search and Solr so I'm thrilled to have this > great blend of competences now available full time to improve the > Search experience of Infinispan users. > > Welcome! > > He's gustavonalle on both IRC and GitHub. > > Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > From pedro at infinispan.org Thu May 15 09:44:53 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Thu, 15 May 2014 14:44:53 +0100 Subject: [infinispan-dev] Welcome to Gustavo In-Reply-To: <5374C2D3.2000106@redhat.com> References: <5374C2D3.2000106@redhat.com> Message-ID: <5374C4D5.5050407@infinispan.org> Welcome Gustavo! Cheers, Pedro On 05/15/2014 02:36 PM, Tristan Tarrant wrote: > Welcome Gustavo, good to see you (again) !!!!! 
> > Tristan > > On 15/05/2014 15:29, Sanne Grinovero wrote: >> Hi all, >> today we finally have Gustavo joining us as a full time engineer on Infinispan. >> >> He worked with Tristan and myself in Italy before we came to Red Hat, >> and was already a Lucene expert back then. He then joined Red Hat as a >> consultant but that didn't last too long: he was too good and >> customers wanted him to travel an unreasonable amount. >> >> So he has been lost for a couple of years, but wisely spent them to >> deepen his skills in devops, more of Lucene but now in larger scale >> and distributed environments: a bit of JGroups, Infinispan and >> Hibernate Search and even some Scala, but also experience with >> MongoDB, Hadoop, Elastic Search and Solr so I'm thrilled to have this >> great blend of competences now available full time to improve the >> Search experience of Infinispan users. >> >> Welcome! >> >> He's gustavonalle on both IRC and GitHub. >> >> Sanne >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From anistor at redhat.com Thu May 15 10:17:19 2014 From: anistor at redhat.com (Adrian Nistor) Date: Thu, 15 May 2014 17:17:19 +0300 Subject: [infinispan-dev] Welcome to Gustavo In-Reply-To: References: Message-ID: <5374CC6F.1090900@redhat.com> Welcome Gustavo! On 05/15/2014 04:29 PM, Sanne Grinovero wrote: > Hi all, > today we finally have Gustavo joining us as a full time engineer on Infinispan. > > He worked with Tristan and myself in Italy before we came to Red Hat, > and was already a Lucene expert back then. He then joined Red Hat as a > consultant but that didn't last too long: he was too good and > customers wanted him to travel an unreasonable amount. 
> > So he has been lost for a couple of years, but wisely spent them to > deepen his skills in devops, more of Lucene but now in larger scale > and distributed environments: a bit of JGroups, Infinispan and > Hibernate Search and even some Scala, but also experience with > MongoDB, Hadoop, Elastic Search and Solr so I'm thrilled to have this > great blend of competences now available full time to improve the > Search experience of Infinispan users. > > Welcome! > > He's gustavonalle on both IRC and GitHub. > > Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From dan.berindei at gmail.com Thu May 15 10:30:43 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 15 May 2014 17:30:43 +0300 Subject: [infinispan-dev] Welcome to Gustavo In-Reply-To: <5374CC6F.1090900@redhat.com> References: <5374CC6F.1090900@redhat.com> Message-ID: Welcome Gustavo! On Thu, May 15, 2014 at 5:17 PM, Adrian Nistor wrote: > Welcome Gustavo! > > On 05/15/2014 04:29 PM, Sanne Grinovero wrote: > > Hi all, > > today we finally have Gustavo joining us as a full time engineer on > Infinispan. > > > > He worked with Tristan and myself in Italy before we came to Red Hat, > > and was already a Lucene expert back then. He then joined Red Hat as a > > consultant but that didn't last too long: he was too good and > > customers wanted him to travel an unreasonable amount. > > > > So he has been lost for a couple of years, but wisely spent them to > > deepen his skills in devops, more of Lucene but now in larger scale > > and distributed environments: a bit of JGroups, Infinispan and > > Hibernate Search and even some Scala, but also experience with > > MongoDB, Hadoop, Elastic Search and Solr so I'm thrilled to have this > > great blend of competences now available full time to improve the > > Search experience of Infinispan users. > > > > Welcome! 
> > > > He's gustavonalle on both IRC and GitHub. > > > > Sanne > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140515/eb3e3ec2/attachment.html From emmanuel at hibernate.org Thu May 15 10:50:53 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Thu, 15 May 2014 16:50:53 +0200 Subject: [infinispan-dev] Welcome to Gustavo In-Reply-To: References: Message-ID: <20140515145053.GG44615@hibernate.org> Welcome :) On Thu 2014-05-15 14:29, Sanne Grinovero wrote: > Hi all, > today we finally have Gustavo joining us as a full time engineer on Infinispan. > > He worked with Tristan and myself in Italy before we came to Red Hat, > and was already a Lucene expert back then. He then joined Red Hat as a > consultant but that didn't last too long: he was too good and > customers wanted him to travel an unreasonable amount. > > So he has been lost for a couple of years, but wisely spent them to > deepen his skills in devops, more of Lucene but now in larger scale > and distributed environments: a bit of JGroups, Infinispan and > Hibernate Search and even some Scala, but also experience with > MongoDB, Hadoop, Elastic Search and Solr so I'm thrilled to have this > great blend of competences now available full time to improve the > Search experience of Infinispan users. > > Welcome! > > He's gustavonalle on both IRC and GitHub. 
> > Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Thu May 15 11:16:14 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Thu, 15 May 2014 17:16:14 +0200 Subject: [infinispan-dev] Welcome to Gustavo In-Reply-To: References: Message-ID: Bem-vindo Gustavo :)) On 15 May 2014, at 15:29, Sanne Grinovero wrote: > Hi all, > today we finally have Gustavo joining us as a full time engineer on Infinispan. > > He worked with Tristan and myself in Italy before we came to Red Hat, > and was already a Lucene expert back then. He then joined Red Hat as a > consultant but that didn't last too long: he was too good and > customers wanted him to travel an unreasonable amount. > > So he has been lost for a couple of years, but wisely spent them to > deepen his skills in devops, more of Lucene but now in larger scale > and distributed environments: a bit of JGroups, Infinispan and > Hibernate Search and even some Scala, but also experience with > MongoDB, Hadoop, Elastic Search and Solr so I'm thrilled to have this > great blend of competences now available full time to improve the > Search experience of Infinispan users. > > Welcome! > > He's gustavonalle on both IRC and GitHub. > > Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarreño galder at redhat.com twitter.com/galderz From galder at redhat.com Thu May 15 11:24:08 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Thu, 15 May 2014 17:24:08 +0200 Subject: [infinispan-dev] Configuration XSD missing, Infinispan not parsing v.6 configuration files ?
In-Reply-To: References: Message-ID: <40A1A5B1-7D13-4881-A670-FF759346D40B@redhat.com> On 14 May 2014, at 01:36, Sanne Grinovero wrote: > The testing configuration files seem to point to this URL: > http://www.infinispan.org/schemas/infinispan-config-7.0.xsd > > But I'm getting a 404 when attempting to find it. Dunno about that ^... > It would be very helpfull to make this available, as it seems > Infinispan 7.0.0.Alpha4 is unable to read the old configuration format > :-( > > Is that expected? Yes, we moved completely over to the new 7.x parser. The 6.x parser was removed. > > Message: Unexpected element '{urn:infinispan:config:6.0}infinispan' > at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:100) > at org.infinispan.test.fwk.TestCacheManagerFactory.fromStream(TestCacheManagerFactory.java:106) > at org.infinispan.test.fwk.TestCacheManagerFactory.fromXml(TestCacheManagerFactory.java:97) > at org.infinispan.test.fwk.TestCacheManagerFactory.fromXml(TestCacheManagerFactory.java:91) > at org.infinispan.manager.CacheManagerXmlConfigurationTest.testBatchingIsEnabled(CacheManagerXmlConfigurationTest.java:126) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80) > at org.testng.internal.Invoker.invokeMethod(Invoker.java:714) > at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901) > at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231) > at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127) > at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111) > at org.testng.TestRunner.privateRun(TestRunner.java:767) > at 
org.testng.TestRunner.run(TestRunner.java:617) > at org.testng.SuiteRunner.runTest(SuiteRunner.java:334) > at org.testng.SuiteRunner.access$000(SuiteRunner.java:37) > at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:368) > at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:744) > Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[5,41] > Message: Unexpected element '{urn:infinispan:config:6.0}infinispan' > at org.infinispan.configuration.parsing.ParserRegistry.parseElement(ParserRegistry.java:137) > at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:121) > at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:108) > at org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:95) > ... 24 more > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarreño galder at redhat.com twitter.com/galderz From manik at infinispan.org Thu May 15 19:52:25 2014 From: manik at infinispan.org (Manik Surtani) Date: Thu, 15 May 2014 16:52:25 -0700 Subject: [infinispan-dev] Welcome to Gustavo In-Reply-To: References: Message-ID: At last! :) Welcome aboard, dude. On 15 May 2014 06:29, Sanne Grinovero wrote: > Hi all, > today we finally have Gustavo joining us as a full time engineer on > Infinispan. > > He worked with Tristan and myself in Italy before we came to Red Hat, > and was already a Lucene expert back then. He then joined Red Hat as a > consultant but that didn't last too long: he was too good and > customers wanted him to travel an unreasonable amount.
> > So he has been lost for a couple of years, but wisely spent them to > deepen his skills in devops, more of Lucene but now in larger scale > and distributed environments: a bit of JGroups, Infinispan and > Hibernate Search and even some Scala, but also experience with > MongoDB, Hadoop, Elastic Search and Solr so I'm thrilled to have this > great blend of competences now available full time to improve the > Search experience of Infinispan users. > > Welcome! > > He's gustavonalle on both IRC and GitHub. > > Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140515/8667925b/attachment-0001.html From manik at infinispan.org Thu May 15 19:55:33 2014 From: manik at infinispan.org (Manik Surtani) Date: Thu, 15 May 2014 16:55:33 -0700 Subject: [infinispan-dev] RAFT Message-ID: An awesome visual representation of RAFT that you guys should check out. http://thesecretlivesofdata.com/raft/ Some other good resources. http://thinkdistributed.io/blog/2013/07/12/consensus.html https://ramcloud.stanford.edu/wiki/download/attachments/11370504/raft.pdf https://github.com/mgodave/barge - M -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140515/01ebb802/attachment.html From bban at redhat.com Fri May 16 02:22:55 2014 From: bban at redhat.com (Bela Ban) Date: Fri, 16 May 2014 08:22:55 +0200 Subject: [infinispan-dev] RAFT In-Reply-To: References: Message-ID: <5375AEBF.1060205@redhat.com> Interesting, thanks for the link ! Makes me want to implement this in JGroups. Should be relatively easy with *static* membership. Different ballgame though with dynamic memberships... 
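The reason static membership makes this simpler is the commit rule: with a fixed member list, the leader can decide an entry is committed purely by counting acknowledgements against a strict majority of that list, with no reconfiguration protocol involved. A rough sketch of just that rule (illustrative only — not JGroups or Barge code, and every class and method name here is made up):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Leader-side commit tracking over a fixed member list. Real Raft also
// tracks terms, election timeouts and log matching; none of that is shown.
public class StaticRaftLeader {
    private final Set<String> members;                     // fixed at startup
    private final Map<String, Integer> matchIndex = new HashMap<>();
    private int commitIndex = 0;

    public StaticRaftLeader(Set<String> members) {
        this.members = members;
        for (String m : members) {
            matchIndex.put(m, 0);                          // nothing replicated yet
        }
    }

    // Called when a member confirms it has stored log entries up to 'index'.
    // The leader counts itself too, by "acking" its own local append.
    public void onAck(String member, int index) {
        matchIndex.put(member, Math.max(matchIndex.get(member), index));
        // Advance commitIndex while a strict majority of the static
        // member list has stored the next entry.
        int candidate = commitIndex + 1;
        while (countAtLeast(candidate) * 2 > members.size()) {
            commitIndex = candidate++;
        }
    }

    private long countAtLeast(int index) {
        return matchIndex.values().stream().filter(i -> i >= index).count();
    }

    public int commitIndex() { return commitIndex; }
}
```

With a dynamic view this counting breaks down, because "majority" is ambiguous while the member list itself is changing — which is exactly what the joint-consensus machinery in the paper exists to handle.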
Cheers, On 16/05/14 01:55, Manik Surtani wrote: > An awesome visual representation of RAFT that you guys should check out. > > http://thesecretlivesofdata.com/raft/ > > Some other good resources. > > http://thinkdistributed.io/blog/2013/07/12/consensus.html > https://ramcloud.stanford.edu/wiki/download/attachments/11370504/raft.pdf > https://github.com/mgodave/barge -- Bela Ban, JGroups lead (http://www.jgroups.org) From isavin at redhat.com Fri May 16 05:20:50 2014 From: isavin at redhat.com (Ion Savin) Date: Fri, 16 May 2014 12:20:50 +0300 Subject: [infinispan-dev] Welcome to Gustavo In-Reply-To: References: Message-ID: <5375D872.7090406@redhat.com> Welcome Gustavo! On 05/15/2014 04:29 PM, Sanne Grinovero wrote: > Hi all, > today we finally have Gustavo joining us as a full time engineer on Infinispan. > > He worked with Tristan and myself in Italy before we came to Red Hat, > and was already a Lucene expert back then. He then joined Red Hat as a > consultant but that didn't last too long: he was too good and > customers wanted him to travel an unreasonable amount. > > So he has been lost for a couple of years, but wisely spent them to > deepen his skills in devops, more of Lucene but now in larger scale > and distributed environments: a bit of JGroups, Infinispan and > Hibernate Search and even some Scala, but also experience with > MongoDB, Hadoop, Elastic Search and Solr so I'm thrilled to have this > great blend of competences now available full time to improve the > Search experience of Infinispan users. > > Welcome! > > He's gustavonalle on both IRC and GitHub. 
> > Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > From gustavonalle at gmail.com Fri May 16 07:25:15 2014 From: gustavonalle at gmail.com (Gustavo Fernandes) Date: Fri, 16 May 2014 12:25:15 +0100 Subject: [infinispan-dev] Welcome to Gustavo In-Reply-To: <5375D872.7090406@redhat.com> References: <5375D872.7090406@redhat.com> Message-ID: Thanks everyone, It's awesome to be part of such a talented team and vibrant community! Gustavo On 16 May 2014, at 10:20, Ion Savin wrote: > Welcome Gustavo! > > On 05/15/2014 04:29 PM, Sanne Grinovero wrote: >> Hi all, >> today we finally have Gustavo joining us as a full time engineer on Infinispan. >> >> He worked with Tristan and myself in Italy before we came to Red Hat, >> and was already a Lucene expert back then. He then joined Red Hat as a >> consultant but that didn't last too long: he was too good and >> customers wanted him to travel an unreasonable amount. >> >> So he has been lost for a couple of years, but wisely spent them to >> deepen his skills in devops, more of Lucene but now in larger scale >> and distributed environments: a bit of JGroups, Infinispan and >> Hibernate Search and even some Scala, but also experience with >> MongoDB, Hadoop, Elastic Search and Solr so I'm thrilled to have this >> great blend of competences now available full time to improve the >> Search experience of Infinispan users. >> >> Welcome! >> >> He's gustavonalle on both IRC and GitHub. 
>> >> Sanne >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From gustavonalle at gmail.com Fri May 16 07:29:27 2014 From: gustavonalle at gmail.com (Gustavo Fernandes) Date: Fri, 16 May 2014 12:29:27 +0100 Subject: [infinispan-dev] RAFT In-Reply-To: References: Message-ID: <827C43E7-E1D7-45D7-A1D1-FA7BCBE0FEA7@gmail.com> I specially liked the "Raft can even stay consistent in the face of network partitions" part :) Gustavo On 16 May 2014, at 00:55, Manik Surtani wrote: > An awesome visual representation of RAFT that you guys should check out. > > http://thesecretlivesofdata.com/raft/ > > Some other good resources. > > http://thinkdistributed.io/blog/2013/07/12/consensus.html > https://ramcloud.stanford.edu/wiki/download/attachments/11370504/raft.pdf > https://github.com/mgodave/barge > > - M > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140516/310d1fd9/attachment.html From emmanuel at hibernate.org Fri May 16 11:35:31 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Fri, 16 May 2014 17:35:31 +0200 Subject: [infinispan-dev] RAFT In-Reply-To: <5375AEBF.1060205@redhat.com> References: <5375AEBF.1060205@redhat.com> Message-ID: <20140516153531.GA913@hibernate.org> On Fri 2014-05-16 8:22, Bela Ban wrote: > Interesting, thanks for the link ! > Makes me want to implement this in JGroups. Should be relatively easy > with *static* membership. 
Different ballgame though with dynamic > memberships... That's the same of these approaches. They require a static membership. Or number of nodes at least. From gustavonalle at gmail.com Fri May 16 14:26:51 2014 From: gustavonalle at gmail.com (Gustavo Fernandes) Date: Fri, 16 May 2014 19:26:51 +0100 Subject: [infinispan-dev] RAFT In-Reply-To: References: <5375AEBF.1060205@redhat.com> <20140516153531.GA913@hibernate.org> Message-ID: On 16 May 2014 16:36, "Emmanuel Bernard" wrote: > > On Fri 2014-05-16 8:22, Bela Ban wrote: > > Interesting, thanks for the link ! > > Makes me want to implement this in JGroups. Should be relatively easy > > with *static* membership. Different ballgame though with dynamic > > memberships... > > That's the same of these approaches. They require a static membership. > Or number of nodes at least. > Section 6 of the Raft paper covers dynamic configuration (membership) changes, although it looks a bit involved Gustavo -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140516/ff67efce/attachment.html From manik at infinispan.org Fri May 16 14:54:47 2014 From: manik at infinispan.org (Manik Surtani) Date: Fri, 16 May 2014 11:54:47 -0700 Subject: [infinispan-dev] RAFT In-Reply-To: References: <5375AEBF.1060205@redhat.com> <20140516153531.GA913@hibernate.org> Message-ID: CockroachDB is attempting to do the same, making use of Google's Go Raft reference implementation. We probably should work with the Barge devs to the same effect. Off topic, but do we have a RocksDB -based CacheStore yet? :) On 16 May 2014 11:26, Gustavo Fernandes wrote: > On 16 May 2014 16:36, "Emmanuel Bernard" wrote: > > > > On Fri 2014-05-16 8:22, Bela Ban wrote: > > > Interesting, thanks for the link ! > > > Makes me want to implement this in JGroups. Should be relatively easy > > > with *static* membership. Different ballgame though with dynamic > > > memberships... 
> > > > That's the same of these approaches. They require a static membership. > > Or number of nodes at least. > > > > Section 6 of the Raft paper covers dynamic configuration (membership) > changes, although it looks a bit involved > > Gustavo > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140516/f158c692/attachment-0001.html From bban at redhat.com Sat May 17 02:50:27 2014 From: bban at redhat.com (Bela Ban) Date: Sat, 17 May 2014 08:50:27 +0200 Subject: [infinispan-dev] RAFT In-Reply-To: <20140516153531.GA913@hibernate.org> References: <5375AEBF.1060205@redhat.com> <20140516153531.GA913@hibernate.org> Message-ID: <537706B3.1010000@redhat.com> I read the paper and I'm somewhat less impressed. My major concern (and the same holds for Paxos) is that this is slow. Well, as slow as 2PC, to be precise... :-) Sending the change to the coordinator is 1 unicast, then we need 2 rounds for a change to be 'stable' (they call it 'safe'), ie. agreed upon. I guess though that this can be done in the background, and the client RPC can return as soon as the server has logged the change into its own protocol. The good thing about Raft is that they chose a fixed coordinator (leader) for log replication which means that consensus is only needed for leader election and never for log replication, so no 2PC needed here. Somewhat similar to sequencer based total order (SEQUENCER)... Do you guys see this as being potentially beneficial to Infinispan ? On 16/05/14 17:35, Emmanuel Bernard wrote: > On Fri 2014-05-16 8:22, Bela Ban wrote: >> Interesting, thanks for the link ! >> Makes me want to implement this in JGroups. Should be relatively easy >> with *static* membership. 
Different ballgame though with dynamic >> memberships... > > That's the same of these approaches. They require a static membership. > Or number of nodes at least. > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -- Bela Ban, JGroups lead (http://www.jgroups.org) From bibryam at gmail.com Sun May 18 04:53:53 2014 From: bibryam at gmail.com (Bilgin Ibryam) Date: Sun, 18 May 2014 09:53:53 +0100 Subject: [infinispan-dev] Welcome to Gustavo Message-ID: Welcome Gustavo. It is great to be colleagues again. On 15 May 2014 14:29, wrote: > Send infinispan-dev mailing list submissions to > infinispan-dev at lists.jboss.org > > To subscribe or unsubscribe via the World Wide Web, visit > https://lists.jboss.org/mailman/listinfo/infinispan-dev > or, via email, send a message with subject or body 'help' to > infinispan-dev-request at lists.jboss.org > > You can reach the person managing the list at > infinispan-dev-owner at lists.jboss.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of infinispan-dev digest..." > > > Today's Topics: > > 1. Re: [!] Reorganization of dependencies & release process > (Sanne Grinovero) > 2. Re: Configuration XSD missing, Infinispan not parsing v.6 > configuration files ? (Pedro Ruivo) > 3. Re: Configuration XSD missing, Infinispan not parsing v.6 > configuration files ? (Sanne Grinovero) > 4. Re: Clustered Listener (William Burns) > 5. Welcome to Gustavo (Sanne Grinovero) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Wed, 14 May 2014 13:45:04 +0100 > From: Sanne Grinovero > Subject: Re: [infinispan-dev] [!] 
Reorganization of dependencies & > release process > To: infinispan -Dev List > Cc: Hardy Ferentschik > Message-ID: > < > CAFm4XO1Dev09T+fyVw6OrxV40hGE9-MR3BUVw1gJsXuTnJsppw at mail.gmail.com> > Content-Type: text/plain; charset=UTF-8 > > On 14 May 2014 13:19, Adrian Nistor wrote: > > +1 for moving Infinispan lucene directory out > > > > But why move Query components out? And which ones did you have in mind? > > Because it depends on the Directory, but also on Hibernate Search, so > it "mandates" a specific version of Apache Lucene. > By moving these out of the core release, we can make Infinispan (core) > releases independently from specific Lucene versions. > > Lucene lately is being quite aggressive in changes, and so doing we > can have a single point of surgery at a time: the Directory first, tag > it. Search, tag it. > Then Infinispan Query and Infinispan full releases. Note how each > component has the freedom to choose to *not* update Lucene or *not* > update Infinispan core, if it needs to make unrelated fixes available > early on. > > So I think it's necessary to move all Query components out too, seems > like the only way out. A more open question is if other components > would benefit from a similar model? I think so, although the reasons > are less urgent. > > Sanne > > > > > On 05/14/2014 12:50 AM, Sanne Grinovero wrote: > >> This is a reboot of the thread previously started on both the > >> infinispan-dev and the hibernate-dev mailing list as "Handling of > >> mutual dependency with Infinispan" [1]. > >> We discussed further during the Hibernate fortnightly meeting [2], and > >> came to the conclusion that we need Infinispan to change how some > >> repositories are organised and how the release is assembled. 
> >> > >> # The problem > >> > >> To restate the issue, as you might painfully remember, every time > >> there is a need for a Lucene update or a Search update we need to sync > >> up for a complex dance of releases in both projects to accommodate for > >> a small-step iterative process to handle the circular dependency. > >> This problem is not too bad today as since a year we're releasing the > >> Lucene Directory in an unusual - and very unmaintainable - temporary > >> solution to be compatible with two different major versions of Apache > >> Lucene; namely what Infinispan Query needs and what Hibernate Search > >> needs are different modules. > >> But the party is over, and I want to finally drop support for Lucene 3 > >> and cleanup the unusual and unmaintainable build mess targeting a > >> single Lucene version only. > >> As soon as we converge to building a single version however - we're > >> back to the complex problem we had when we supported a single version > >> which is handling of a circular dependency - just that the problem has > >> worsened lately the Lucene project has been more active and more > >> inclined than what it used to be to break both internal and public > >> APIs. > >> > >> In short, we have a circular dependency between Hibernate Search and > >> Infinispan which we've been able to handle via hacks and some luck, > >> but it imposes a serious threat to development flexibility, and the > >> locked-in release process is not desirable either. > >> > >> # The solution > >> > >> we think in conclusion there's a single "proper" way out, and it also > >> happens to provide some very interesting side effects in terms of > >> maintenance overhead for everyone: Infinispan Core needs to release > >> independently from the non-core modules. > >> This would have the Lucene Directory depend on a released tag of > >> infinispan-core, and be able to be released independently. 
> >> Minor situations with benefit: > >> - we often don't make any change in the Lucene Directory, still we > >> need to release it. > >> - when I actually need a release of it, I'm currently begging for a > >> quick release of Infinispan: very costly > >> The Big Ones: > >> - we can manage the Lucene Directory to provide support for different > >> versions of Lucene without necessarily breaking other modules > >> - we can release quickly what's needed to move Search ahead in terms > >> of Lucene versions without needing to make the Infinispan Query module > >> compatible at the same time (in case you haven't followed this area: > >> this seems to be my main activity rather than making valuable stuff). > >> > >> The goal is of course to linearise the dependencies; it seems to also > >> simplify some of our tasks which is a welcome side-effect. I expect it > >> also to make the project less scary for new contributors. > >> > >> # How does it impact users > >> > >> ## Maven users > >> modules will continue to be modules.. I guess nobody will notice, > >> other than we might have a different versioning scheme, but we help > >> people out via the Infinispan BOM. > >> > >> ## Distribution users > >> There should be no difference, other than (as well) some jars might > >> not be aligned in terms of version. But that's probably even less of a > >> problem, as I expect distribution users to just put what they get on > >> their classpath. > >> > >> # How it impacts us > >> > >> 1) I'll move the Lucene Directory project to an different repository; > >> same for the Query related components. > >> I think you should/could consider the same for other components, based > >> on ad-hoc considerations of the trade offs, but I'd expect ultimately > >> to see a more frequent and "core only" release. > >> > >> 2) We'll have different kinds of releases: the "core only" and the > >> "full releases". 
> >> I think we'll also see components being released independently, but > >> these are either Maven-only or meant for preparation of other > >> components, or preparation for a "full release". > >> > >> 3) Tests (!) > >> Such a move should in no way relax the regression-safety of > >> infinispan-core: we need to still consider it unacceptable for a core > >> change to break one of the modules moving out of the main tree. > >> Personally I think I've pushed many tests about problems found in the > >> "query modules" as unit tests in core, so that should be relatively > >> safe, but it also happened that someone would "tune" these. > >> I realise it's not practical to expect people to run tests of > >> downstream modules, so we'll have to automate most of these tasks in > >> CI. > >> Careful on perception: if today there are three levels of defence > >> against a regression (the author, the reviewer and CI all running the > >> suite for each change), in such an organisation you have only one. So > >> ignoring a CI failure as a "probable hiccup" could be much more > >> dangerous than usual. > >> > >> # When > >> > >> Doing this _might_ be a blocker for any Lucene update; so since one > >> just happened I'll probably have no urgent need for a couple of weeks > >> at least. > >> But we shouldn't be in a position in which an update could not be > >> possible, so I hope we'll agree to implement this sooner rather than > >> later, so we won't have to do it during an emergency. > >> > >> Also while this might sound a bit crazy at first, I see many > >> flexibility benefits which can't hurt now that the project is getting > >> larger and more complex to release. > >> Not least, having a micro release of "Infinispan essentials" would be > >> very welcome in terms of lowing the initial barrier; this was proposed > >> at various meetings and highly endorsed by many but just never > >> happened. > >> > >> Any comment please? 
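The "we help people out via the Infinispan BOM" point above, made concrete: consumers import a single POM that pins the versions of all modules, so independently released modules still line up on the user's classpath. A sketch of that usage — the `org.infinispan:infinispan-bom` coordinates and the version are assumptions for illustration, not something decided in this thread:

```xml
<!-- Import the Infinispan BOM in dependencyManagement so individual
     module dependencies below can omit their <version> and stay
     mutually compatible. Version shown is a placeholder. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.infinispan</groupId>
      <artifactId>infinispan-bom</artifactId>
      <version>7.0.0.Alpha4</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```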
I hope I covered it all, and sorry for that :D > >> > >> Cheers, > >> Sanne > >> > >> > >> 1 - http://lists.jboss.org/pipermail/hibernate-dev/2014-May/011419.html > >> 2 - > http://transcripts.jboss.org/meeting/irc.freenode.org/hibernate-dev/2014/hibernate-dev.2014-05-13-13.24.log.html > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > ------------------------------ > > Message: 2 > Date: Wed, 14 May 2014 14:00:17 +0100 > From: Pedro Ruivo > Subject: Re: [infinispan-dev] Configuration XSD missing, Infinispan > not parsing v.6 configuration files ? > To: infinispan-dev at lists.jboss.org > Message-ID: <537368E1.6020206 at infinispan.org> > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > Hi Sanne, > > I have added the 7.0 schemas to infinispan.org/schemas. > > About the 6.0 parsing, yes I think it was supposed to be not parsed :P I > think I saw some warning about this change but I can't find it (so maybe > I was dreaming). So, agree with you. we need an upgrade guide (however, > the documentation already uses the new style) > > Cheers, > Pedro > > On 05/14/2014 01:32 PM, Sanne Grinovero wrote: > > On 14 May 2014 09:58, Pedro Ruivo wrote: > >> yes, the release process put the schema here: > >> http://docs.jboss.org/infinispan/schemas/ > > > > Thanks, I'll use that. But shouldn't we have it at the URL I posted too?? > > > > And is it really expected that we don't parse old-style configuration > files? > > That's quite annoying for users. If it's the intended plan, I'm not > > against it but we'd need some clear warning and also some guidance for > > an upgrade. 
> > (BTW great job on the documentation updates) > > > > Cheers, > > Sanne > > > >> > >> Pedro > >> > >> On 05/14/2014 07:50 AM, Tristan Tarrant wrote: > >>> Isn't the deployment of XSDs part of the release process ? > >>> > >>> Tristan > >>> > >>> On 14/05/2014 01:36, Sanne Grinovero wrote: > >>>> The testing configuration files seem to point to this URL: > >>>> http://www.infinispan.org/schemas/infinispan-config-7.0.xsd > >>>> > >>>> But I'm getting a 404 when attempting to find it. > >>>> > >>>> It would be very helpfull to make this available, as it seems > >>>> Infinispan 7.0.0.Alpha4 is unable to read the old configuration format > >>>> :-( > >>>> > >>>> Is that expected? > >>>> > >>>> Message: Unexpected element '{urn:infinispan:config:6.0}infinispan' > >>>> at > org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:100) > >>>> at > org.infinispan.test.fwk.TestCacheManagerFactory.fromStream(TestCacheManagerFactory.java:106) > >>>> at > org.infinispan.test.fwk.TestCacheManagerFactory.fromXml(TestCacheManagerFactory.java:97) > >>>> at > org.infinispan.test.fwk.TestCacheManagerFactory.fromXml(TestCacheManagerFactory.java:91) > >>>> at > org.infinispan.manager.CacheManagerXmlConfigurationTest.testBatchingIsEnabled(CacheManagerXmlConfigurationTest.java:126) > >>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > >>>> at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > >>>> at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > >>>> at java.lang.reflect.Method.invoke(Method.java:606) > >>>> at > org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80) > >>>> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714) > >>>> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901) > >>>> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231) > >>>> at > 
org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127) > >>>> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111) > >>>> at org.testng.TestRunner.privateRun(TestRunner.java:767) > >>>> at org.testng.TestRunner.run(TestRunner.java:617) > >>>> at org.testng.SuiteRunner.runTest(SuiteRunner.java:334) > >>>> at org.testng.SuiteRunner.access$000(SuiteRunner.java:37) > >>>> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:368) > >>>> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64) > >>>> at java.util.concurrent.FutureTask.run(FutureTask.java:262) > >>>> at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > >>>> at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > >>>> at java.lang.Thread.run(Thread.java:744) > >>>> Caused by: javax.xml.stream.XMLStreamException: ParseError at > [row,col]:[5,41] > >>>> Message: Unexpected element '{urn:infinispan:config:6.0}infinispan' > >>>> at > org.infinispan.configuration.parsing.ParserRegistry.parseElement(ParserRegistry.java:137) > >>>> at > org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:121) > >>>> at > org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:108) > >>>> at > org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:95) > >>>> ... 
24 more > >>>> _______________________________________________ > >>>> infinispan-dev mailing list > >>>> infinispan-dev at lists.jboss.org > >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>> > >>> > >>> _______________________________________________ > >>> infinispan-dev mailing list > >>> infinispan-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>> > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > > > ------------------------------ > > Message: 3 > Date: Wed, 14 May 2014 16:16:12 +0100 > From: Sanne Grinovero > Subject: Re: [infinispan-dev] Configuration XSD missing, Infinispan > not parsing v.6 configuration files ? > To: infinispan -Dev List > Message-ID: > < > CAFm4XO2NncDr6z9KSiRH70bNK_gmMMDw2HSD9CM+1NJ3Z6p2Hw at mail.gmail.com> > Content-Type: text/plain; charset=UTF-8 > > On 14 May 2014 14:00, Pedro Ruivo wrote: > > Hi Sanne, > > > > I have added the 7.0 schemas to infinispan.org/schemas. > > Thanks! > > > > > About the 6.0 parsing, yes I think it was supposed to be not parsed :P I > > think I saw some warning about this change but I can't find it (so maybe > > I was dreaming). So, agree with you. we need an upgrade guide (however, > > the documentation already uses the new style) > > Yes the docs are very nice. I guess my problem is just that it's an > unexpected issue; I somehow guess it from the stacktrace but I don't > think everyone will be as familiar with the kind of error message. > I'll open an issue to improve the error; if ever someone will > volunteer to also write a couple of lines on how to migrate, we could > link the page from the error message. 
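Until someone writes those migration lines, the minimal first step is the root namespace: the 7.x ParserRegistry dispatches on it and simply no longer registers the 6.0 URN, which is why the error names `{urn:infinispan:config:6.0}infinispan` as unexpected. Assuming the 7.0 namespace follows the same URN pattern as 6.0 (element renames inside the file are a separate, larger job not covered here):

```xml
<!-- 6.x root element, now rejected with
     "Unexpected element '{urn:infinispan:config:6.0}infinispan'": -->
<infinispan xmlns="urn:infinispan:config:6.0">
  <!-- ... -->
</infinispan>

<!-- 7.x root element; the schema Pedro uploaded is available at
     http://docs.jboss.org/infinispan/schemas/infinispan-config-7.0.xsd -->
<infinispan xmlns="urn:infinispan:config:7.0"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="urn:infinispan:config:7.0
                http://docs.jboss.org/infinispan/schemas/infinispan-config-7.0.xsd">
  <!-- ... -->
</infinispan>
```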
> > > > > Cheers, > > Pedro > > > > On 05/14/2014 01:32 PM, Sanne Grinovero wrote: > >> On 14 May 2014 09:58, Pedro Ruivo wrote: > >>> yes, the release process put the schema here: > >>> http://docs.jboss.org/infinispan/schemas/ > >> > >> Thanks, I'll use that. But shouldn't we have it at the URL I posted > too?? > >> > >> And is it really expected that we don't parse old-style configuration > files? > >> That's quite annoying for users. If it's the intended plan, I'm not > >> against it but we'd need some clear warning and also some guidance for > >> an upgrade. > >> (BTW great job on the documentation updates) > >> > >> Cheers, > >> Sanne > >> > >>> > >>> Pedro > >>> > >>> On 05/14/2014 07:50 AM, Tristan Tarrant wrote: > >>>> Isn't the deployment of XSDs part of the release process ? > >>>> > >>>> Tristan > >>>> > >>>> On 14/05/2014 01:36, Sanne Grinovero wrote: > >>>>> The testing configuration files seem to point to this URL: > >>>>> http://www.infinispan.org/schemas/infinispan-config-7.0.xsd > >>>>> > >>>>> But I'm getting a 404 when attempting to find it. > >>>>> > >>>>> It would be very helpful to make this available, as it seems > >>>>> Infinispan 7.0.0.Alpha4 is unable to read the old configuration > format > >>>>> :-( > >>>>> > >>>>> Is that expected? 
> >>>>> > >>>>> Message: Unexpected element '{urn:infinispan:config:6.0}infinispan' > >>>>> at > org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:100) > >>>>> at > org.infinispan.test.fwk.TestCacheManagerFactory.fromStream(TestCacheManagerFactory.java:106) > >>>>> at > org.infinispan.test.fwk.TestCacheManagerFactory.fromXml(TestCacheManagerFactory.java:97) > >>>>> at > org.infinispan.test.fwk.TestCacheManagerFactory.fromXml(TestCacheManagerFactory.java:91) > >>>>> at > org.infinispan.manager.CacheManagerXmlConfigurationTest.testBatchingIsEnabled(CacheManagerXmlConfigurationTest.java:126) > >>>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > >>>>> at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > >>>>> at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > >>>>> at java.lang.reflect.Method.invoke(Method.java:606) > >>>>> at > org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80) > >>>>> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714) > >>>>> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901) > >>>>> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231) > >>>>> at > org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127) > >>>>> at > org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111) > >>>>> at org.testng.TestRunner.privateRun(TestRunner.java:767) > >>>>> at org.testng.TestRunner.run(TestRunner.java:617) > >>>>> at org.testng.SuiteRunner.runTest(SuiteRunner.java:334) > >>>>> at org.testng.SuiteRunner.access$000(SuiteRunner.java:37) > >>>>> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:368) > >>>>> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64) > >>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:262) > >>>>> at > 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > >>>>> at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > >>>>> at java.lang.Thread.run(Thread.java:744) > >>>>> Caused by: javax.xml.stream.XMLStreamException: ParseError at > [row,col]:[5,41] > >>>>> Message: Unexpected element '{urn:infinispan:config:6.0}infinispan' > >>>>> at > org.infinispan.configuration.parsing.ParserRegistry.parseElement(ParserRegistry.java:137) > >>>>> at > org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:121) > >>>>> at > org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:108) > >>>>> at > org.infinispan.configuration.parsing.ParserRegistry.parse(ParserRegistry.java:95) > >>>>> ... 24 more > >>>>> _______________________________________________ > >>>>> infinispan-dev mailing list > >>>>> infinispan-dev at lists.jboss.org > >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>>> > >>>> > >>>> _______________________________________________ > >>>> infinispan-dev mailing list > >>>> infinispan-dev at lists.jboss.org > >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >>>> > >>> _______________________________________________ > >>> infinispan-dev mailing list > >>> infinispan-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> _______________________________________________ > >> infinispan-dev mailing list > >> infinispan-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > >> > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > ------------------------------ > > Message: 4 > Date: Thu, 15 May 2014 09:08:38 -0400 > From: William Burns > Subject: Re: [infinispan-dev] Clustered Listener > To: infinispan -Dev List > Message-ID: > 
eZO0P5+sYxWgKC_paeh9vZLH_TY7-rg at mail.gmail.com> > Content-Type: text/plain; charset=UTF-8 > > On Tue, May 13, 2014 at 9:10 AM, Pierre Sutra > wrote: > > Hello, > > > > As part of the LEADS project, we have recently been using the clustered > > listeners API. In our use case, the application is employing a few > > thousand listeners, constantly installing and un-installing them. > > Are you talking about non clustered listeners? It seems unlikely you > would need so many cluster listeners. Cluster listeners should allow > you to only install a small number of them, usually you would have > only additional ones if you have a Filter applied limiting what > key/values are returned. > > > The > > overall picture is that things work smoothly up to a few hundred > > listeners, but above that the cost is high due to the full replication > > scheme. To sidestep this issue, we have added a mechanism that allows > > listening only to a single key. > > Is the KeyFilter or KeyValueFilter not sufficient for this? > > void addListener(Object listener, KeyFilter<? super K> filter); > > <C> void addListener(Object listener, KeyValueFilter<? super K, ? super V> filter, Converter<? super K, ? super V, C> converter); > > Also to note if you are doing any kind of translation of the value to > another value it is recommended to do that via the supplied Converter. > This can give good performance as the conversion is done on the > target node and not all on one node and also you can reduce the payload > if the resultant value has a serialized form that is smaller than the > original value. > > > In such a case, the listener is solely > > installed at the key owners. This greatly helps the scalability of the > > mechanism at the cost of fault-tolerance since, in the current state of > > the implementation, listeners are not forwarded to new data owners. > > Since as a next step [1] it is planned to handle topology change, do you > > plan also to support key (or key range) specific listeners? 
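[Editorial note: the filter-based addListener overloads quoted above only deliver an event to a listener whose filter accepts the key, which is why a per-key filter avoids flooding every node. The mechanics can be sketched in plain Java — a minimal, hypothetical illustration only; the KeyFilter and Notifier types below are stand-ins, not the actual Infinispan API:]

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of key-filtered listener dispatch. KeyFilter here is a
// stand-in for Infinispan's org.infinispan.filter.KeyFilter; the real API
// differs, this only illustrates the filtering idea discussed above.
public class FilteredListenerSketch {

    interface KeyFilter<K> {
        boolean accept(K key);
    }

    static class Notifier<K> {
        private final List<KeyFilter<K>> filters = new ArrayList<>();
        private final List<List<K>> delivered = new ArrayList<>();

        void addListener(KeyFilter<K> filter) {
            filters.add(filter);
            delivered.add(new ArrayList<>());
        }

        // An event for 'key' is only delivered to listeners whose filter
        // accepts it; in the real implementation this check runs on the
        // originating node, so non-matching events never cross the wire.
        void notifyModified(K key) {
            for (int i = 0; i < filters.size(); i++) {
                if (filters.get(i).accept(key)) {
                    delivered.get(i).add(key);
                }
            }
        }

        List<K> deliveredTo(int listenerIndex) {
            return delivered.get(listenerIndex);
        }
    }

    public static void main(String[] args) {
        Notifier<String> notifier = new Notifier<>();
        // A "single key" listener, as in the LEADS use case.
        notifier.addListener(k -> k.equals("interesting-key"));
        notifier.notifyModified("interesting-key");
        notifier.notifyModified("other-key");
        System.out.println(notifier.deliveredTo(0)); // only the matching key
    }
}
```

[In the real API the filter is shipped to the key owners once, when the listener is installed — the "1 time cost" mentioned below.]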
> > These should be covered with the 2 overloads as I mentioned above. > This should be the most performant way as the filter is replicated to > the node upon installation, so it is a one-time cost. But if a key/value pair > doesn't pass the filter the event is not sent to the node where the > listener is installed. > > > Besides, > > regarding this last point and the current state of the implementation, I > > would have liked to know what is the purpose of the re-installation of > > the cluster listener in case of a view change in the addedListener() > > method of the CacheNotifierImpl class. > > This isn't a re-installation. This is used to propagate the > RemoteClusterListener to the other nodes, so that when a new event is > generated it can see that and subsequently send it back to the node > where the listener is installed. There is also a second check in > there in case a new node joins in the middle. > > > Many thanks in advance. > > No problem, glad you guys are testing out this feature already :) > > > > > Best, > > Pierre Sutra > > > > [1] > > > https://github.com/infinispan/infinispan/wiki/Clustered-listeners#handling-topology-changes > > > > _______________________________________________ > > infinispan-dev mailing list > > infinispan-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > ------------------------------ > > Message: 5 > Date: Thu, 15 May 2014 14:29:17 +0100 > From: Sanne Grinovero > Subject: [infinispan-dev] Welcome to Gustavo > To: infinispan -Dev List > Message-ID: > 5KSKEd3DEQxw at mail.gmail.com> > Content-Type: text/plain; charset=UTF-8 > > Hi all, > today we finally have Gustavo joining us as a full time engineer on > Infinispan. > > He worked with Tristan and myself in Italy before we came to Red Hat, > and was already a Lucene expert back then. He then joined Red Hat as a > consultant but that didn't last too long: he was too good and > customers wanted him to travel an unreasonable amount. 
> > So he has been lost for a couple of years, but wisely spent them to > deepen his skills in devops, more of Lucene but now in larger scale > and distributed environments: a bit of JGroups, Infinispan and > Hibernate Search and even some Scala, but also experience with > MongoDB, Hadoop, Elastic Search and Solr so I'm thrilled to have this > great blend of competences now available full time to improve the > Search experience of Infinispan users. > > Welcome! > > He's gustavonalle on both IRC and GitHub. > > Sanne > > > ------------------------------ > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > End of infinispan-dev Digest, Vol 62, Issue 12 > ********************************************** > -- Bilgin Ibryam Apache Camel & Apache OFBiz committer Blog: ofbizian.com Twitter: @bibryam Author of Instant Apache Camel Message Routing http://www.amazon.com/dp/1783283475 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140518/df2e447b/attachment-0001.html From tsykora at redhat.com Mon May 19 07:28:32 2014 From: tsykora at redhat.com (Tomas Sykora) Date: Mon, 19 May 2014 07:28:32 -0400 (EDT) Subject: [infinispan-dev] IntelliJ IDEA license renewal for ISPN OS project In-Reply-To: <1015900187.7272100.1400498687318.JavaMail.zimbra@redhat.com> Message-ID: <517184462.7272821.1400498912152.JavaMail.zimbra@redhat.com> Hello all, current open source Infinispan license for IntelliJ IDEA expires in 3 days. Is there any way how we can obtain a new one for next year? IIRC, Manik took care about this issue. Thanks! 
Tomas From sanne at infinispan.org Mon May 19 07:42:52 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 19 May 2014 12:42:52 +0100 Subject: [infinispan-dev] IntelliJ IDEA license renewal for ISPN OS project In-Reply-To: <517184462.7272821.1400498912152.JavaMail.zimbra@redhat.com> References: <1015900187.7272100.1400498687318.JavaMail.zimbra@redhat.com> <517184462.7272821.1400498912152.JavaMail.zimbra@redhat.com> Message-ID: I hope Manik will continue with that, he's not on a different planet. But if any of the other IDEA users wants to volunteer to speed things up, please go ahead. Personally I had a bit of a fight with Eclipse last week, but we're good friends again :) On 19 May 2014 12:28, Tomas Sykora wrote: > Hello all, > current open source Infinispan license for IntelliJ IDEA expires in 3 days. Is there any way how we can obtain a new one for next year? > IIRC, Manik took care about this issue. > > Thanks! > Tomas > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Mon May 19 07:46:13 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Mon, 19 May 2014 13:46:13 +0200 Subject: [infinispan-dev] IntelliJ IDEA license renewal for ISPN OS project In-Reply-To: References: <1015900187.7272100.1400498687318.JavaMail.zimbra@redhat.com> <517184462.7272821.1400498912152.JavaMail.zimbra@redhat.com> Message-ID: <5F259B94-68CC-45BD-BE58-FB18B15855F3@redhat.com> On 19 May 2014, at 13:42, Sanne Grinovero wrote: > I hope Manik will continue with that, he's not on a different planet. Well, Mircea is the Infinispan lead, so he should be the one contacting them. > But if any of the other IDEA users wants to volunteer to speed things > up, please go ahead. Or... 
> > Personally I had a bit of a fight with Eclipse last week, but we're > good friends again :) > > On 19 May 2014 12:28, Tomas Sykora wrote: >> Hello all, >> current open source Infinispan license for IntelliJ IDEA expires in 3 days. Is there any way how we can obtain a new one for next year? >> IIRC, Manik took care about this issue. >> >> Thanks! >> Tomas >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From sanne at infinispan.org Mon May 19 08:06:50 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 19 May 2014 13:06:50 +0100 Subject: [infinispan-dev] IntelliJ IDEA license renewal for ISPN OS project In-Reply-To: <5F259B94-68CC-45BD-BE58-FB18B15855F3@redhat.com> References: <1015900187.7272100.1400498687318.JavaMail.zimbra@redhat.com> <517184462.7272821.1400498912152.JavaMail.zimbra@redhat.com> <5F259B94-68CC-45BD-BE58-FB18B15855F3@redhat.com> Message-ID: On 19 May 2014 12:46, Galder Zamarre?o wrote: > > On 19 May 2014, at 13:42, Sanne Grinovero wrote: > >> I hope Manik will continue with that, he's not on a different planet. > > Well, Mircea is the Infinispan lead, so he should be the one contacting them. Mircea being on holidays I wouldn't wait for that. >> But if any of the other IDEA users wants to volunteer to speed things >> up, please go ahead. > > Or... ? > >> >> Personally I had a bit of a fight with Eclipse last week, but we're >> good friends again :) >> >> On 19 May 2014 12:28, Tomas Sykora wrote: >>> Hello all, >>> current open source Infinispan license for IntelliJ IDEA expires in 3 days. Is there any way how we can obtain a new one for next year? 
>>> IIRC, Manik took care about this issue. >>> >>> Thanks! >>> Tomas >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Galder Zamarreño > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From sanne at infinispan.org Mon May 19 11:09:34 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Mon, 19 May 2014 16:09:34 +0100 Subject: [infinispan-dev] On ParserRegistry and classloaders Message-ID: I see the ParserRegistry was changed quite a bit; in Infinispan 6 it allowed specifying a different classloader for some operations, now it only takes a classloader at construction time. For WildFly/JBoss users, it is quite common that the configuration file we want parsed is not in the same classloader that the ParserRegistry needs to look up its own parser components (as its design uses the ServiceLoader to discover components of the parser). This is especially true when Infinispan is not used by the application directly but via another module (so I guess this is also critical for capedwarf). I initially thought to work around the problem using a "wrapping classloader" so that I could pass a single CL instance which would try both the deployment classloader and Infinispan's module classloader, but - besides this being suboptimal - it doesn't work as I'm violating isolation between modules: I can get exposed to an Infinispan 6 module which also contains Parser components, which get loaded as a service but are not compatible (different class definition). 
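[Editorial note: the "wrapping classloader" described above can be sketched as a simple delegating ClassLoader — a minimal, hypothetical illustration, not the actual code attempted: it consults the primary (deployment) loader first and falls back to the library's module loader, which is exactly why ServiceLoader lookups through it can surface incompatible parser components from either side of the module boundary.]

```java
import java.net.URL;

// Minimal sketch of the "wrapping classloader" workaround: try the
// deployment classloader first (via the normal parent-delegation of
// loadClass), then fall back to the library's own module classloader.
// Illustration only -- this is the pattern that leaks service
// implementations across module boundaries, as described above.
public class DelegatingClassLoader extends ClassLoader {

    private final ClassLoader fallback;

    public DelegatingClassLoader(ClassLoader primary, ClassLoader fallback) {
        super(primary); // primary is consulted first by loadClass()
        this.fallback = fallback;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        // Only reached when the primary loader fails; delegate to the
        // fallback module loader.
        return fallback.loadClass(name);
    }

    @Override
    protected URL findResource(String name) {
        // Same fallback for resources, e.g. META-INF/services/* entries
        // read by ServiceLoader -- the source of the isolation violation.
        return fallback.getResource(name);
    }
}
```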
I'll need these changes in the ParserRegistry reverted please. Happy to send a pull myself, but before I attempt to patch it myself could someone explain what the goal was? thanks, Sanne From galder at redhat.com Mon May 19 10:56:30 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Mon, 19 May 2014 16:56:30 +0200 Subject: [infinispan-dev] IRC weekly meeting Message-ID: <8D55D7A3-14A0-4486-8B60-9263CD2B46AA@redhat.com> Minutes from the weekly meeting can be found here: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2014/infinispan.2014-05-19-14.06.html Cheers, -- Galder Zamarreño galder at redhat.com twitter.com/galderz From emmanuel at hibernate.org Tue May 20 03:29:57 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Tue, 20 May 2014 09:29:57 +0200 Subject: [infinispan-dev] JPA CacheStore doc Message-ID: <9FE0B28B-AE0E-42B9-8C13-5D0647B7CCCB@hibernate.org> Hi guys, I was reading http://infinispan.org/docs/7.0.x/user_guide/user_guide.html#_jpa_cache_store I have a few remarks and questions. The links on 7.3. Additional References to github lead to 404. Right under the 7. Section title, there is The Infinispan Community :icons: font Not sure what that is. The documentation does not mention support, or the lack of it, for associations. I suppose this should be covered. Can anyone lead this? Should I open JIRAs? Emmanuel -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140520/5ed7c036/attachment.html From sanne at infinispan.org Wed May 21 06:08:30 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 21 May 2014 11:08:30 +0100 Subject: [infinispan-dev] infinispan 5.3.0 build issues In-Reply-To: <537C5E3F.1070309@gmail.com> References: <537C5E3F.1070309@gmail.com> Message-ID: Hi, do you mean you have literally the same error? If not it would help to see the exact error message. 
Generally speaking if your error refers to missing dependencies, is it possible that you have not activated the JBoss.org Maven repository? There should be an example maven-settings.xml file included in the root of the sources. I've included the developers mailing list in CC, please use that for development related questions. Alternatively, it's probably easier for this kind of problem to ask on IRC. Details for both IRC and the mailing lists can be found here: http://infinispan.org/community/ Regards, Sanne On 21 May 2014 09:05, Mohan Dhawan wrote: > Hi Adrian, Sanne, > > I'm trying to build infinispan-5.3.0 from source --- I checked out a .zip of > the source from GitHub and branch 5.3.x. However, I'm still getting build > errors similar to those listed here > (http://lists.jboss.org/pipermail/infinispan-dev/2013-July/013448.html). > > Kindly advise. > > Regards, > mohan From sanne at infinispan.org Wed May 21 09:35:10 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 21 May 2014 14:35:10 +0100 Subject: [infinispan-dev] infinispan 5.3.0 build issues In-Reply-To: <537C7C16.7060603@gmail.com> References: <537C5E3F.1070309@gmail.com> <537C7C16.7060603@gmail.com> Message-ID: Hi, On 21 May 2014 11:12, Mohan Dhawan wrote: > Hi Sanne, > > Thanks for the prompt response. I'm getting the same error. > > How can I 'activate' the JBoss.org Maven repository ? There is a > maven-settings.xml file. The standard way in Maven is to use the -s parameter, so: # mvn clean install -s maven-settings.xml More recent versions include a build script; I'd advise to use the latest. 
Instructions to make it permanently available by default are here: https://community.jboss.org/wiki/MavenGettingStarted-Users But if you're new to these build systems I'd suggest reading the chapter about building first: https://docs.jboss.org/author/display/ISPN/Contributing+to+Infinispan Regards, Sanne > > Regards, > mohan > > > On Wednesday 21 May 2014 03:38 PM, Sanne Grinovero wrote: >> >> Hi, >> do you mean you have literally the same error? If not it would help to >> see the exact error message. >> >> Generally speaking if your error refers to missing dependencies, is it >> possible that you have not activated the JBoss.org Maven repository? >> There should be an example maven-settings.xml file included in the >> root of the sources. >> >> I've included the developers mailing list in CC, please use that for >> development related questions. >> Alternatively, it's probably easier for this kind of problems to ask on >> IRC. >> Details for both IRC and the mailing lists can be found here: >> http://infinispan.org/community/ >> >> Regards, >> Sanne >> >> >> On 21 May 2014 09:05, Mohan Dhawan wrote: >>> >>> Hi Adrian, Sanne, >>> >>> I'm trying to build infinispan-5.3.0 from source --- I checked out a .zip >>> of >>> the source from GitHub and branch 5.3.x. However, I'm still getting build >>> errors similar to those listed here >>> (http://lists.jboss.org/pipermail/infinispan-dev/2013-July/013448.html). >>> >>> Kindly advise. >>> >>> Regards, >>> mohan > > From mohan.dhawan at gmail.com Thu May 22 02:14:25 2014 From: mohan.dhawan at gmail.com (Mohan Dhawan) Date: Wed, 21 May 2014 23:14:25 -0700 (PDT) Subject: [infinispan-dev] infinispan 5.3.0 build issues In-Reply-To: References: Message-ID: <1400739265461-4029239.post@n3.nabble.com> Hi Sanne, Thanks for the pointer. 
But I still get another similar error: [ERROR] Failed to execute goal on project infinispan-client-hotrod: Could not resolve dependencies for project org.infinispan:infinispan-client-hotrod:bundle:5.3.1-SNAPSHOT: Failure to find org.infinispan:infinispan-server-hotrod:jar:tests:5.3.1-SNAPSHOT in http://maven.repository.redhat.com/earlyaccess/all/ was cached in the local repository, resolution will not be reattempted until the update interval of redhat-earlyaccess-repository-group has elapsed or updates are forced -> [Help 1] Kindly advise. Regards, mohan -- View this message in context: http://infinispan-developer-list.980875.n3.nabble.com/Re-infinispan-dev-infinispan-5-3-0-build-issues-tp4029237p4029239.html Sent from the Infinispan Developer List mailing list archive at Nabble.com. From dan.berindei at gmail.com Thu May 22 08:16:17 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Thu, 22 May 2014 15:16:17 +0300 Subject: [infinispan-dev] infinispan 5.3.0 build issues In-Reply-To: <1400739265461-4029239.post@n3.nabble.com> References: <1400739265461-4029239.post@n3.nabble.com> Message-ID: Mohan, infinispan-server-hotrod.jar should have been built by the build process itself. I have just tried it on my machine and this worked with an empty local repository: mvn clean install -s maven-settings.xml -DskipTests On Thu, May 22, 2014 at 9:14 AM, Mohan Dhawan wrote: > Hi Sanne, > > Thanks for the pointer. 
But I still get another similar error: > > [ERROR] Failed to execute goal on project infinispan-client-hotrod: Could > not resolve dependencies for project > org.infinispan:infinispan-client-hotrod:bundle:5.3.1-SNAPSHOT: Failure to > find org.infinispan:infinispan-server-hotrod:jar:tests:5.3.1-SNAPSHOT in > http://maven.repository.redhat.com/earlyaccess/all/ was cached in the > local > repository, resolution will not be reattempted until the update interval of > redhat-earlyaccess-repository-group has elapsed or updates are forced -> > [Help 1] > > Kindly advise. > > Regards, > mohan > > > > > -- > View this message in context: > http://infinispan-developer-list.980875.n3.nabble.com/Re-infinispan-dev-infinispan-5-3-0-build-issues-tp4029237p4029239.html > Sent from the Infinispan Developer List mailing list archive at Nabble.com. > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140522/67f12ae3/attachment.html From mohan.dhawan at gmail.com Fri May 23 03:02:33 2014 From: mohan.dhawan at gmail.com (Mohan Dhawan) Date: Fri, 23 May 2014 00:02:33 -0700 (PDT) Subject: [infinispan-dev] infinispan 5.3.0 build issues In-Reply-To: References: <1400739265461-4029239.post@n3.nabble.com> Message-ID: <1400828553457-4029241.post@n3.nabble.com> Hi Dan, Thanks for the reply. Apparently, the .jar was not getting built. Instead, an *-tests.jar.lastUpdated file was being created in the corresponding folder in the repository. Merely, renaming the *.jar.lastUpdated file to *.jar did the trick. 
However, now I get the following error: [INFO] [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.14.1:test (default-test) on project infinispan-jcache-tck-runner: Execution default-test of goal org.apache.maven.plugins:maven-surefire-plugin:2.14.1:test failed: The forked VM terminated without saying properly goodbye. VM crash or System.exit called ? [INFO] [ERROR] Command was/bin/sh -c cd /home/ibmadmin/Downloads/5.3.x/jcache/target/it/tck-runner && "${env.JAVA_HOME_7}/bin/java" -Xmx1024m -XX:MaxPermSize=256m -Dsun.nio.ch.bugLevel -jar /home/ibmadmin/Downloads/5.3.x/jcache/target/it/tck-runner/target/surefire/surefirebooter2569451925699817293.jar /home/ibmadmin/Downloads/5.3.x/jcache/target/it/tck-runner/target/surefire/surefire8323245713679303102tmp /home/ibmadmin/Downloads/5.3.x/jcache/target/it/tck-runner/target/surefire/surefire_06253795966223488172tmp [INFO] [ERROR] -> [Help 1] Kindly advise. Regards, mohan -- View this message in context: http://infinispan-developer-list.980875.n3.nabble.com/Re-infinispan-dev-infinispan-5-3-0-build-issues-tp4029237p4029241.html Sent from the Infinispan Developer List mailing list archive at Nabble.com. From galder at redhat.com Fri May 23 03:03:21 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Fri, 23 May 2014 09:03:21 +0200 Subject: [infinispan-dev] On ParserRegistry and classloaders In-Reply-To: References: Message-ID: <9A22CC58-5B20-4D6B-BA6B-B4A23493979F@redhat.com> Hey Sanne, I've looked at ParserRegistry and I'm not sure I see the changes you are referring to. From what I've seen, ParserRegistry has taken a class loader in the constructor since the start. I suspect you might be referring to classloader-related changes as a result of the OSGI integration? 
Cheers, On 19 May 2014, at 17:09, Sanne Grinovero wrote: > I see the ParserRegistry was changed quite a bit; in Infinispan 6 it > allowed to specify a different classloader for some operations, now it > only takes a classloader during construction time. > > For WildFly/JBoss users, it is quite common that the configuration > file we want parsed is not in the same classloader that the > ParserRegistry needs to lookup its own parser components (as its > design uses the ServiceLoader to discover components of the parser). > This is especially true when Infinispan is not used by the application > directly but via another module (so I guess also critical for > capedwarf). > > I initially though to workaround the problem using a "wrapping > classloader" so that I could pass a single CL instance which would try > both the deployment classloader and the Infinispan's module > classloader, but - besides this being suboptimal - it doesn't work as > I'm violating isolation between modules: I can get exposed to an > Infinispan 6 module which contains also Parser components, which get > loaded as a service but are not compatible (different class > definition). > > I'll need these changes in the ParserRegistry reverted please. Happy > to send a pull myself, but before I attempt to patch it myself could > someone explain what the goal was? 
> > thanks, > Sanne > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From rvansa at redhat.com Fri May 23 03:10:54 2014 From: rvansa at redhat.com (Radim Vansa) Date: Fri, 23 May 2014 09:10:54 +0200 Subject: [infinispan-dev] infinispan 5.3.0 build issues In-Reply-To: <1400828553457-4029241.post@n3.nabble.com> References: <1400739265461-4029239.post@n3.nabble.com> <1400828553457-4029241.post@n3.nabble.com> Message-ID: <537EF47E.8070900@redhat.com> In Maven repository, the *.jar.lastUpdated is no JAR - it's just file with timestamps of last attempt to download the files from external source. The only thing you should do with these files is delete them if you want to reattempt the download again. Radim On 05/23/2014 09:02 AM, Mohan Dhawan wrote: > Hi Dan, > > Thanks for the reply. Apparently, the .jar was not getting built. Instead, > an *-tests.jar.lastUpdated file was being created in the corresponding > folder in the repository. Merely, renaming the *.jar.lastUpdated file to > *.jar did the trick. However, now I get the following error: > > [INFO] [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-surefire-plugin:2.14.1:test (default-test) on > project infinispan-jcache-tck-runner: Execution default-test of goal > org.apache.maven.plugins:maven-surefire-plugin:2.14.1:test failed: The > forked VM terminated without saying properly goodbye. VM crash or > System.exit called ? 
> [INFO] [ERROR] Command was/bin/sh -c cd > /home/ibmadmin/Downloads/5.3.x/jcache/target/it/tck-runner && > "${env.JAVA_HOME_7}/bin/java" -Xmx1024m -XX:MaxPermSize=256m > -Dsun.nio.ch.bugLevel -jar > /home/ibmadmin/Downloads/5.3.x/jcache/target/it/tck-runner/target/surefire/surefirebooter2569451925699817293.jar > /home/ibmadmin/Downloads/5.3.x/jcache/target/it/tck-runner/target/surefire/surefire8323245713679303102tmp > /home/ibmadmin/Downloads/5.3.x/jcache/target/it/tck-runner/target/surefire/surefire_06253795966223488172tmp > [INFO] [ERROR] -> [Help 1] > > Kindly advise. > > Regards, > mohan > > > > -- > View this message in context: http://infinispan-developer-list.980875.n3.nabble.com/Re-infinispan-dev-infinispan-5-3-0-build-issues-tp4029237p4029241.html > Sent from the Infinispan Developer List mailing list archive at Nabble.com. > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Radim Vansa JBoss DataGrid QA From mohan.dhawan at gmail.com Fri May 23 03:19:57 2014 From: mohan.dhawan at gmail.com (Mohan Dhawan) Date: Fri, 23 May 2014 12:49:57 +0530 Subject: [infinispan-dev] infinispan 5.3.0 build issues In-Reply-To: <537EF47E.8070900@redhat.com> References: <1400739265461-4029239.post@n3.nabble.com> <1400828553457-4029241.post@n3.nabble.com> <537EF47E.8070900@redhat.com> Message-ID: <537EF69D.10604@gmail.com> Hi Radim, I did try to delete them but the compilation process kept halting at the same places every time. So, I just renamed the file. I've purged the repository, tried doing a clean install, but still get the errors --- the latest being the org.apache.maven.plugins:maven-surefire-plugin error for which I have no clue. Kindly advise. 
Regards, mohan On Friday 23 May 2014 12:40 PM, Radim Vansa wrote: > In Maven repository, the *.jar.lastUpdated is no JAR - it's just file > with timestamps of last attempt to download the files from external > source. The only thing you should do with these files is delete them if > you want to reattempt the download again. > > Radim > > > On 05/23/2014 09:02 AM, Mohan Dhawan wrote: >> Hi Dan, >> >> Thanks for the reply. Apparently, the .jar was not getting built. Instead, >> an *-tests.jar.lastUpdated file was being created in the corresponding >> folder in the repository. Merely, renaming the *.jar.lastUpdated file to >> *.jar did the trick. However, now I get the following error: >> >> [INFO] [ERROR] Failed to execute goal >> org.apache.maven.plugins:maven-surefire-plugin:2.14.1:test (default-test) on >> project infinispan-jcache-tck-runner: Execution default-test of goal >> org.apache.maven.plugins:maven-surefire-plugin:2.14.1:test failed: The >> forked VM terminated without saying properly goodbye. VM crash or >> System.exit called ? >> [INFO] [ERROR] Command was/bin/sh -c cd >> /home/ibmadmin/Downloads/5.3.x/jcache/target/it/tck-runner && >> "${env.JAVA_HOME_7}/bin/java" -Xmx1024m -XX:MaxPermSize=256m >> -Dsun.nio.ch.bugLevel -jar >> /home/ibmadmin/Downloads/5.3.x/jcache/target/it/tck-runner/target/surefire/surefirebooter2569451925699817293.jar >> /home/ibmadmin/Downloads/5.3.x/jcache/target/it/tck-runner/target/surefire/surefire8323245713679303102tmp >> /home/ibmadmin/Downloads/5.3.x/jcache/target/it/tck-runner/target/surefire/surefire_06253795966223488172tmp >> [INFO] [ERROR] -> [Help 1] >> >> Kindly advise. >> >> Regards, >> mohan >> >> >> >> -- >> View this message in context: http://infinispan-developer-list.980875.n3.nabble.com/Re-infinispan-dev-infinispan-5-3-0-build-issues-tp4029237p4029241.html >> Sent from the Infinispan Developer List mailing list archive at Nabble.com. 
>> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > From ttarrant at redhat.com Fri May 23 03:38:45 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Fri, 23 May 2014 09:38:45 +0200 Subject: [infinispan-dev] infinispan 5.3.0 build issues In-Reply-To: <537EF69D.10604@gmail.com> References: <1400739265461-4029239.post@n3.nabble.com> <1400828553457-4029241.post@n3.nabble.com> <537EF47E.8070900@redhat.com> <537EF69D.10604@gmail.com> Message-ID: <537EFB05.6070205@redhat.com> Mohan, first, clear your local maven repo from any Infinispan artifacts (I assume you are using Linux): $ rm -rf $HOME/.m2/repository/org/infinispan now attempt the build again. Please note: tests *must* be built (but you don't need to run them). Therefore you need to add either -DskipTests or -Dmaven.test.skip.exec to your command-line. Do *NOT* add -Dmaven.test.skip as the build will fail. Since Infinispan 6.0 we have a convenient build.sh / build.bat combination together with a maven-settings.xml file. They are not specific to a particular version of Infinispan, so they can be used even to build from a 5.x tag. Tristan On 23/05/2014 09:19, Mohan Dhawan wrote: > Hi Radim, > > I did try to delete them but the compilation process kept halting at the > same places every time. So, I just renamed the file. I've purged the > repository, tried doing a clean install, but still get the errors --- > the latest being the org.apache.maven.plugins:maven-surefire-plugin > error for which I have no clue. > > Kindly advise. > > Regards, > mohan > > On Friday 23 May 2014 12:40 PM, Radim Vansa wrote: >> In Maven repository, the *.jar.lastUpdated is no JAR - it's just file >> with timestamps of last attempt to download the files from external >> source. The only thing you should do with these files is delete them if >> you want to reattempt the download again. 
>> >> Radim >> >> >> On 05/23/2014 09:02 AM, Mohan Dhawan wrote: >>> Hi Dan, >>> >>> Thanks for the reply. Apparently, the .jar was not getting built. Instead, >>> an *-tests.jar.lastUpdated file was being created in the corresponding >>> folder in the repository. Merely, renaming the *.jar.lastUpdated file to >>> *.jar did the trick. However, now I get the following error: >>> >>> [INFO] [ERROR] Failed to execute goal >>> org.apache.maven.plugins:maven-surefire-plugin:2.14.1:test (default-test) on >>> project infinispan-jcache-tck-runner: Execution default-test of goal >>> org.apache.maven.plugins:maven-surefire-plugin:2.14.1:test failed: The >>> forked VM terminated without saying properly goodbye. VM crash or >>> System.exit called ? >>> [INFO] [ERROR] Command was/bin/sh -c cd >>> /home/ibmadmin/Downloads/5.3.x/jcache/target/it/tck-runner && >>> "${env.JAVA_HOME_7}/bin/java" -Xmx1024m -XX:MaxPermSize=256m >>> -Dsun.nio.ch.bugLevel -jar >>> /home/ibmadmin/Downloads/5.3.x/jcache/target/it/tck-runner/target/surefire/surefirebooter2569451925699817293.jar >>> /home/ibmadmin/Downloads/5.3.x/jcache/target/it/tck-runner/target/surefire/surefire8323245713679303102tmp >>> /home/ibmadmin/Downloads/5.3.x/jcache/target/it/tck-runner/target/surefire/surefire_06253795966223488172tmp >>> [INFO] [ERROR] -> [Help 1] >>> >>> Kindly advise. >>> >>> Regards, >>> mohan >>> >>> >>> >>> -- >>> View this message in context: http://infinispan-developer-list.980875.n3.nabble.com/Re-infinispan-dev-infinispan-5-3-0-build-issues-tp4029237p4029241.html >>> Sent from the Infinispan Developer List mailing list archive at Nabble.com. 
>>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > From rory.odonnell at oracle.com Fri May 23 04:11:24 2014 From: rory.odonnell at oracle.com (Rory O'Donnell Oracle, Dublin Ireland) Date: Fri, 23 May 2014 09:11:24 +0100 Subject: [infinispan-dev] Early Access builds for JDK 9 b13, JDK 8u20 b15 and JDK 7u60 b15 are available on java.net Message-ID: <537F02AC.7000308@oracle.com> Hi Galder, Early Access builds for JDK 9 b13 , JDK 8u20 b15 and JDK 7u60 b15 are available on java.net. As we enter the later phases of development for JDK 7u60 & JDK 8u20 , please log any show stoppers as soon as possible. Rgds, Rory -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA , Dublin, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140523/fe81a698/attachment.html From mohan.dhawan at gmail.com Fri May 23 04:47:02 2014 From: mohan.dhawan at gmail.com (Mohan Dhawan) Date: Fri, 23 May 2014 14:17:02 +0530 Subject: [infinispan-dev] infinispan 5.3.0 build issues In-Reply-To: <537EFB05.6070205@redhat.com> References: <1400739265461-4029239.post@n3.nabble.com> <1400828553457-4029241.post@n3.nabble.com> <537EF47E.8070900@redhat.com> <537EF69D.10604@gmail.com> <537EFB05.6070205@redhat.com> Message-ID: <537F0B06.5070308@gmail.com> Hi Tristan, I was using "-Dmaven.test.skip " and the build was failing. Using "-Dmaven.test.skip.exec " worked just like a charm. Thanks for the help. 
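For anyone hitting the same wall, the recipe Tristan gives above condenses to the following commands (paths assume a default Linux Maven setup, as in his example):

```shell
# Remove any stale Infinispan artifacts (including *.jar.lastUpdated stubs)
# so Maven re-resolves everything from scratch.
rm -rf "$HOME/.m2/repository/org/infinispan"

# Rebuild. The tests must be *compiled* (the build consumes the
# *-tests.jar artifacts), but they do not have to be *run*,
# so either of these flags works:
mvn clean install -DskipTests
# or:
mvn clean install -Dmaven.test.skip.exec

# Do NOT use -Dmaven.test.skip: it skips compiling the tests as well,
# and the build then fails for lack of the test artifacts.
```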
Regards, mohan On Friday 23 May 2014 01:08 PM, Tristan Tarrant wrote: > Mohan, > > first, clear your local maven repo from any Infinispan artifacts (I > assume you are using Linux): > > $ rm -rf $HOME/.m2/repository/org/infinispan > > now attempt the build again. Please note: tests *must* be built (but you > don't need to run them). Therefore you need to add either -DskipTests or > -Dmaven.test.skip.exec to your command-line. > Do *NOT* add -Dmaven.test.skip as the build will fail. > > Since Infinispan 6.0 we have a convenient build.sh / build.bat > combination together with a maven-settings.xml file. They are not > specific to a particular version of Infinispan, so they can be used even > to build from a 5.x tag. > > Tristan > > On 23/05/2014 09:19, Mohan Dhawan wrote: >> Hi Radim, >> >> I did try to delete them but the compilation process kept halting at the >> same places every time. So, I just renamed the file. I've purged the >> repository, tried doing a clean install, but still get the errors --- >> the latest being the org.apache.maven.plugins:maven-surefire-plugin >> error for which I have no clue. >> >> Kindly advise. >> >> Regards, >> mohan >> >> On Friday 23 May 2014 12:40 PM, Radim Vansa wrote: >>> In Maven repository, the *.jar.lastUpdated is no JAR - it's just file >>> with timestamps of last attempt to download the files from external >>> source. The only thing you should do with these files is delete them if >>> you want to reattempt the download again. >>> >>> Radim >>> >>> >>> On 05/23/2014 09:02 AM, Mohan Dhawan wrote: >>>> Hi Dan, >>>> >>>> Thanks for the reply. Apparently, the .jar was not getting built. Instead, >>>> an *-tests.jar.lastUpdated file was being created in the corresponding >>>> folder in the repository. Merely, renaming the *.jar.lastUpdated file to >>>> *.jar did the trick. 
However, now I get the following error: >>>> >>>> [INFO] [ERROR] Failed to execute goal >>>> org.apache.maven.plugins:maven-surefire-plugin:2.14.1:test (default-test) on >>>> project infinispan-jcache-tck-runner: Execution default-test of goal >>>> org.apache.maven.plugins:maven-surefire-plugin:2.14.1:test failed: The >>>> forked VM terminated without saying properly goodbye. VM crash or >>>> System.exit called ? >>>> [INFO] [ERROR] Command was/bin/sh -c cd >>>> /home/ibmadmin/Downloads/5.3.x/jcache/target/it/tck-runner && >>>> "${env.JAVA_HOME_7}/bin/java" -Xmx1024m -XX:MaxPermSize=256m >>>> -Dsun.nio.ch.bugLevel -jar >>>> /home/ibmadmin/Downloads/5.3.x/jcache/target/it/tck-runner/target/surefire/surefirebooter2569451925699817293.jar >>>> /home/ibmadmin/Downloads/5.3.x/jcache/target/it/tck-runner/target/surefire/surefire8323245713679303102tmp >>>> /home/ibmadmin/Downloads/5.3.x/jcache/target/it/tck-runner/target/surefire/surefire_06253795966223488172tmp >>>> [INFO] [ERROR] -> [Help 1] >>>> >>>> Kindly advise. >>>> >>>> Regards, >>>> mohan >>>> >>>> >>>> >>>> -- >>>> View this message in context: http://infinispan-developer-list.980875.n3.nabble.com/Re-infinispan-dev-infinispan-5-3-0-build-issues-tp4029237p4029241.html >>>> Sent from the Infinispan Developer List mailing list archive at Nabble.com. 
>>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From galder at redhat.com Fri May 23 05:12:31 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Fri, 23 May 2014 11:12:31 +0200 Subject: [infinispan-dev] Early Access builds for JDK 9 b13, JDK 8u20 b15 and JDK 7u60 b15 are available on java.net In-Reply-To: <537F02AC.7000308@oracle.com> References: <537F02AC.7000308@oracle.com> Message-ID: <0DABDEB0-82BE-4120-8781-AAE391460FDF@redhat.com> Hi Rory, Is https://bugs.openjdk.java.net/browse/JDK-8036554 being backported to 7u60? Cheers, On 23 May 2014, at 10:11, Rory O'Donnell Oracle, Dublin Ireland wrote: > Hi Galder, > > Early Access builds for JDK 9 b13, JDK 8u20 b15 and JDK 7u60 b15 are available on java.net. > > As we enter the later phases of development for JDK 7u60 & JDK 8u20 , please log any show > stoppers as soon as possible. > > Rgds, Rory > > -- > Rgds,Rory O'Donnell > Quality Engineering Manager > Oracle EMEA , Dublin, Ireland > -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From dan.berindei at gmail.com Fri May 23 06:47:55 2014 From: dan.berindei at gmail.com (Dan Berindei) Date: Fri, 23 May 2014 13:47:55 +0300 Subject: [infinispan-dev] JPA CacheStore doc In-Reply-To: <9FE0B28B-AE0E-42B9-8C13-5D0647B7CCCB@hibernate.org> References: <9FE0B28B-AE0E-42B9-8C13-5D0647B7CCCB@hibernate.org> Message-ID: Please open a single JIRA for all of them. 
On Tue, May 20, 2014 at 10:29 AM, Emmanuel Bernard wrote: > Hi guys, > > I was reading > http://infinispan.org/docs/7.0.x/user_guide/user_guide.html#_jpa_cache_store > > I have a few remarks and questions. > > The links on 7.3. Additional References to github lead to 404. > > Right under the 7. Section title, there is > > The Infinispan Community :icons: font > > Not sure what that it. > > The documentation does not mention support or lack of for associations. I > suppose this should be covered. > > Anyone can lead this? Should I open JIRAs? > > Emmanuel > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140523/b4a593df/attachment.html From emmanuel at hibernate.org Fri May 23 07:48:06 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Fri, 23 May 2014 13:48:06 +0200 Subject: [infinispan-dev] JPA CacheStore doc In-Reply-To: References: <9FE0B28B-AE0E-42B9-8C13-5D0647B7CCCB@hibernate.org> Message-ID: https://issues.jboss.org/browse/ISPN-4320 On 23 May 2014, at 12:47, Dan Berindei wrote: > Please open a single JIRA for all of them. > > > On Tue, May 20, 2014 at 10:29 AM, Emmanuel Bernard wrote: > Hi guys, > > I was reading http://infinispan.org/docs/7.0.x/user_guide/user_guide.html#_jpa_cache_store > > I have a few remarks and questions. > > The links on 7.3. Additional References to github lead to 404. > > Right under the 7. Section title, there is > > The Infinispan Community :icons: font > > Not sure what that it. > > The documentation does not mention support or lack of for associations. I suppose this should be covered. > > Anyone can lead this? Should I open JIRAs? 
> > Emmanuel > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140523/2d21e70b/attachment.html From sanne at infinispan.org Fri May 23 12:08:06 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Fri, 23 May 2014 17:08:06 +0100 Subject: [infinispan-dev] On ParserRegistry and classloaders In-Reply-To: <9A22CC58-5B20-4D6B-BA6B-B4A23493979F@redhat.com> References: <9A22CC58-5B20-4D6B-BA6B-B4A23493979F@redhat.com> Message-ID: On 23 May 2014 08:03, Galder Zamarre?o wrote: > Hey Sanne, > > I?ve looked at ParserRegistry and not sure I see the changes you are referring to? > > >From what I?ve seen, ParserRegistry has taken class loader in the constructor since the start. Yes, and that was good as we've been using it: it might need directions to be pointed at the right modules to load extension points. My problem is not that the constructor takes a ClassLoader, but that other options have been removed; essentially in my scenario the module containing the extension points does not contain the configuration file I want it to load, and the actual classLoader I want the CacheManager to use is yet a different one. As explained below, assembling a single "catch all" ClassLoader to delegate to all doesn't work as some of these actually need to be strictly isolated to prevent ambiguities. > I suspect you might be referring to classloader related changes as a result of OSGI integration? I didn't check but that sounds like a reasonable estimate. 
Sanne > > Cheers, > > On 19 May 2014, at 17:09, Sanne Grinovero wrote: > >> I see the ParserRegistry was changed quite a bit; in Infinispan 6 it >> allowed to specify a different classloader for some operations, now it >> only takes a classloader during construction time. >> >> For WildFly/JBoss users, it is quite common that the configuration >> file we want parsed is not in the same classloader that the >> ParserRegistry needs to lookup its own parser components (as its >> design uses the ServiceLoader to discover components of the parser). >> This is especially true when Infinispan is not used by the application >> directly but via another module (so I guess also critical for >> capedwarf). >> >> I initially though to workaround the problem using a "wrapping >> classloader" so that I could pass a single CL instance which would try >> both the deployment classloader and the Infinispan's module >> classloader, but - besides this being suboptimal - it doesn't work as >> I'm violating isolation between modules: I can get exposed to an >> Infinispan 6 module which contains also Parser components, which get >> loaded as a service but are not compatible (different class >> definition). >> >> I'll need these changes in the ParserRegistry reverted please. Happy >> to send a pull myself, but before I attempt to patch it myself could >> someone explain what the goal was? 
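The "wrapping classloader" workaround Sanne describes would look roughly like the sketch below (a hypothetical helper, not part of the Infinispan API). As he notes, it breaks down under module isolation: once the delegates can both be searched, ServiceLoader may discover parser components from an incompatible Infinispan module.

```java
// Hypothetical delegating classloader: tries each delegate in turn.
// This is the "wrapping" approach described in the thread above; it
// fails when the delegates are isolated modules that must not see
// each other's service implementations.
public class DelegatingClassLoader extends ClassLoader {
    private final ClassLoader[] delegates;

    public DelegatingClassLoader(ClassLoader... delegates) {
        super(null); // no parent beyond bootstrap; we control the search order
        this.delegates = delegates;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        for (ClassLoader delegate : delegates) {
            try {
                return delegate.loadClass(name);
            } catch (ClassNotFoundException ignored) {
                // not visible here; fall through to the next delegate
            }
        }
        throw new ClassNotFoundException(name);
    }
}
```

(Resource lookup via `findResource` would need the same delegation to be complete; this sketch only covers class loading.)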
>> >> thanks, >> Sanne >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Galder Zamarre?o > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From rory.odonnell at oracle.com Sun May 25 14:43:02 2014 From: rory.odonnell at oracle.com (Rory O'Donnell) Date: Sun, 25 May 2014 19:43:02 +0100 Subject: [infinispan-dev] Early Access builds for JDK 9 b13, JDK 8u20 b15 and JDK 7u60 b15 are available on java.net In-Reply-To: <537F21DE.6000501@oracle.com> References: <537F02AC.7000308@oracle.com> <0DABDEB0-82BE-4120-8781-AAE391460FDF@redhat.com> <537F21DE.6000501@oracle.com> Message-ID: <538239B6.7050209@oracle.com> Thanks Dalibor. On 23/05/2014 11:24, dalibor topic wrote: > It was. See https://bugs.openjdk.java.net/browse/JDK-8037801 for details. > > On 23.05.2014 11:12, Galder Zamarre?o wrote: >> Hi Rory, >> >> Is https://bugs.openjdk.java.net/browse/JDK-8036554 being backported >> to 7u60? >> >> Cheers, >> >> On 23 May 2014, at 10:11, Rory O'Donnell Oracle, Dublin Ireland >> wrote: >> >>> Hi Galder, >>> >>> Early Access builds for JDK 9 b13, JDK 8u20 b15 and JDK 7u60 b15 >>> are available on java.net. >>> >>> As we enter the later phases of development for JDK 7u60 & JDK 8u20 >>> , please log any show >>> stoppers as soon as possible. 
>>> >>> Rgds, Rory >>> >>> -- >>> Rgds,Rory O'Donnell >>> Quality Engineering Manager >>> Oracle EMEA , Dublin, Ireland >>> >> >> >> -- >> Galder Zamarre?o >> galder at redhat.com >> twitter.com/galderz >> > -- Rgds,Rory O'Donnell Quality Engineering Manager Oracle EMEA , Dublin, Ireland From isavin at redhat.com Mon May 26 06:06:49 2014 From: isavin at redhat.com (Ion Savin) Date: Mon, 26 May 2014 13:06:49 +0300 Subject: [infinispan-dev] On ParserRegistry and classloaders In-Reply-To: References: <9A22CC58-5B20-4D6B-BA6B-B4A23493979F@redhat.com> Message-ID: <53831239.30601@redhat.com> Hi Sanne, Galder, On 05/23/2014 07:08 PM, Sanne Grinovero wrote: > On 23 May 2014 08:03, Galder Zamarre?o wrote: >> >Hey Sanne, >> > >> >I?ve looked at ParserRegistry and not sure I see the changes you are referring to? >> > >> >>From what I?ve seen, ParserRegistry has taken class loader in the constructor since the start. > Yes, and that was good as we've been using it: it might need > directions to be pointed at the right modules to load extension > points. > > My problem is not that the constructor takes a ClassLoader, but that > other options have been removed; essentially in my scenario the module > containing the extension points does not contain the configuration > file I want it to load, and the actual classLoader I want the > CacheManager to use is yet a different one. As explained below, > assembling a single "catch all" ClassLoader to delegate to all doesn't > work as some of these actually need to be strictly isolated to prevent > ambiguities. > >> >I suspect you might be referring to classloader related changes as a result of OSGI integration? > I didn't check but that sounds like a reasonable estimate. I had a look at the OSGi-related changes done for this class and they don't alter the class interface in any way. The implementation changes related to FileLookup seem to maintain the same behavior for non-OSGi contexts also. 
Regards, Ion Savin From galder at redhat.com Mon May 26 06:33:43 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Mon, 26 May 2014 12:33:43 +0200 Subject: [infinispan-dev] Early Access builds for JDK 9 b13, JDK 8u20 b15 and JDK 7u60 b15 are available on java.net In-Reply-To: <537F21DE.6000501@oracle.com> References: <537F02AC.7000308@oracle.com> <0DABDEB0-82BE-4120-8781-AAE391460FDF@redhat.com> <537F21DE.6000501@oracle.com> Message-ID: Thanks Dalibor! On 23 May 2014, at 12:24, dalibor topic wrote: > It was. See https://bugs.openjdk.java.net/browse/JDK-8037801 for details. > > On 23.05.2014 11:12, Galder Zamarre?o wrote: >> Hi Rory, >> >> Is https://bugs.openjdk.java.net/browse/JDK-8036554 being backported to 7u60? >> >> Cheers, >> >> On 23 May 2014, at 10:11, Rory O'Donnell Oracle, Dublin Ireland wrote: >> >>> Hi Galder, >>> >>> Early Access builds for JDK 9 b13, JDK 8u20 b15 and JDK 7u60 b15 are available on java.net. >>> >>> As we enter the later phases of development for JDK 7u60 & JDK 8u20 , please log any show >>> stoppers as soon as possible. >>> >>> Rgds, Rory >>> >>> -- >>> Rgds,Rory O'Donnell >>> Quality Engineering Manager >>> Oracle EMEA , Dublin, Ireland >>> >> >> >> -- >> Galder Zamarre?o >> galder at redhat.com >> twitter.com/galderz >> > > -- > Dalibor Topic | Principal Product Manager > Phone: +494089091214 | Mobile: +491737185961 > > > ORACLE Deutschland B.V. & Co. KG | K?hneh?fe 5 | 22761 Hamburg > > ORACLE Deutschland B.V. & Co. KG > Hauptverwaltung: Riesstr. 25, D-80992 M?nchen > Registergericht: Amtsgericht M?nchen, HRA 95603 > Gesch?ftsf?hrer: J?rgen Kunz > > Komplement?rin: ORACLE Deutschland Verwaltung B.V. > Hertogswetering 163/167, 3543 AS Utrecht, Niederlande > Handelsregister der Handelskammer Midden-Niederlande, Nr. 
30143697 > Gesch?ftsf?hrer: Alexander van der Ven, Astrid Kepper, Val Maher > > Oracle is committed to developing > practices and products that help protect the environment -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From galder at redhat.com Mon May 26 11:52:57 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Mon, 26 May 2014 17:52:57 +0200 Subject: [infinispan-dev] IRC weekly meeting log Message-ID: <087EC69D-FE92-429B-A9CA-F901B8BEC164@redhat.com> Hi all, Please find this week?s IRC meeting chat logs: http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2014/infinispan.2014-05-26-14.02.html Cheers, -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From galder at redhat.com Mon May 26 12:11:28 2014 From: galder at redhat.com (=?windows-1252?Q?Galder_Zamarre=F1o?=) Date: Mon, 26 May 2014 18:11:28 +0200 Subject: [infinispan-dev] Reliability of return values In-Reply-To: References: <53707669.7070701@redhat.com> <5370A873.8070509@redhat.com> <53723CD4.9060300@redhat.com> Message-ID: Hi all, I?ve been looking into ISPN-2956 last week and I think we have a solution for it which requires a protocol change [1] Since we?re in the middle of the Hot Rod 2.0 development, this is a good opportunity to implement it. Cheers, [1] https://issues.jboss.org/browse/ISPN-2956?focusedCommentId=12970541&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12970541 On 14 May 2014, at 09:36, Dan Berindei wrote: > > > > On Tue, May 13, 2014 at 6:40 PM, Radim Vansa wrote: > On 05/13/2014 03:58 PM, Dan Berindei wrote: >> >> >> On Mon, May 12, 2014 at 1:54 PM, Radim Vansa wrote: >> @Dan: It's absolutely correct to do the further writes in order to make >> the cache consistent, I am not arguing against that. You've fixed the >> outcome (state of cache) well. 
My point was that we should let the user >> know that the value he gets is not 100% correct when we already know >> that - and given the API, the only option to do that seems to me as >> throwing an exception. >> >> The problem, as I see it, is that users also expect methods that throw an exception to *not* modify the cache. >> So we would break some of the users' expectations anyway. > > When the response from primary owner does not arrive soon, we throw timeout exception and the cache is modified anyway, isn't it? > If we throw ~ReturnValueUnreliableException, the user has at least some chance to react. Currently, for code requiring 100% reliable value, you can't do anything but ignore the return value, even for CAS operations. > > > Yes, but we don't expect the user to handle a TimeoutException in any meaningful way. Instead, we expect the user to choose his hardware and configuration to avoid timeouts, if he cares about consistency. How could you handle an exception that tells you "I may have written the value you asked me to in the cache, or maybe not. Either way, you will never know what the previous value was. Muahahaha!" in an application that cares about consistency? > > But the proposed ReturnValueUnreliableException can't be avoided by the user, it has to be handled every time the cluster membership changes. So it would be more like WriteSkewException than TimeoutException. And when we throw a WriteSkewException, we don't write anything to the cache. > > Remember, most users do not care about the previous value at all - that's the reason why JCache and our HotRod client don't return the previous value by default. Those that do care about the previous value, use the conditional write operations, and those already work (well, except for the scenario below). So you would force everyone to handle an exception that they don't care about. 
> > It would make sense to throw an exception if we didn't return the previous value by default, and the user requested the return value explicitly. But we do return the value by default, so I don't think it would be a good idea for us. > >> >> >> @Sanne: I was not suggesting that for now - sure, value versioning is (I >> hope) on the roadmap. But that's more complicated, I though just about >> making an adjustment to the current implementation. >> >> >> Actually, just keeping a history of values would not fix the the return value in all cases. >> >> When retrying a put on the new primary owner, the primary owner would still have to compare our value with the latest value, and return the previous value if they are equal. So we could have something like this: >> >> A is the originator, B is the primary owner, k = v0 >> A -> B: put(k, v1) >> B dies before writing v, C is now primary owner >> D -> C: put(k, v1) // another put operation from D, with the same value >> C -> D: null >> A -> C: retry_put(k, v1) >> C -> A: v0 // C assumes A is overwriting its own value, so it's returning the previous one >> >> To fix that, we'd need a unique version generated by the originator - kind of like a transaction id ;) > > Is it such a problem to associate unique ID with each write? History implementation seems to me like the more complicated part. > > I also think maintaining a version history would be quite complicated, and it also would make it harder for users to estimate their cache's memory usage. That's why I was trying to show that it's not a panacea. > > > >> And to fix the HotRod use case, the HotRod client would have to be the one generating the version. > > I agree. > > Radim > > >> >> Cheers >> Dan >> >> >> >> Radim >> >> On 05/12/2014 12:02 PM, Sanne Grinovero wrote: >> > I don't think we are in a position to decide what is a reasonable >> > compromise; we can do better. 
>> > For example - as Radim suggested - it might seem reasonable to have >> > the older value around for a little while. We'll need a little bit of >> > history of values and tombstones anyway for many other reasons. >> > >> > >> > Sanne >> > >> > On 12 May 2014 09:37, Dan Berindei wrote: >> >> Radim, I would contend that the first and foremost guarantee that put() >> >> makes is to leave the cache in a consistent state. So we can't just throw an >> >> exception and give up, leaving k=v on one owner and k=null on another. >> >> >> >> Secondly, put(k, v) being atomic means that it either succeeds, it writes >> >> k=v in the cache, and it returns the previous value, or it doesn't succeed, >> >> and it doesn't write k=v in the cache. Returning the wrong previous value is >> >> bad, but leaving k=v in the cache is just as bad, even if the all the owners >> >> have the same value. >> >> >> >> And last, we can't have one node seeing k=null, then k=v, then k=null again, >> >> when the only write we did on the cache was a put(k, v). So trying to undo >> >> the write would not help. >> >> >> >> In the end, we have to make a compromise, and I think returning the wrong >> >> value in some of the cases is a reasonable compromise. Of course, we should >> >> document that :) >> >> >> >> I also believe ISPN-2956 could be fixed so that HotRod behaves just like >> >> embedded mode after the ISPN-3422 fix, by adding a RETRY flag to the HotRod >> >> protocol and to the cache itself. >> >> >> >> Incidentally, transactional caches have a similar problem when the >> >> originator leaves the cluster: ISPN-3421 [1] >> >> And we can't handle transactional caches any better than non-transactional >> >> caches until we expose transactions to the HotRod client. 
>> >> >> >> [1] https://issues.jboss.org/browse/ISPN-2956 >> >> >> >> Cheers >> >> Dan >> >> >> >> >> >> >> >> >> >> On Mon, May 12, 2014 at 10:21 AM, Radim Vansa wrote: >> >>> Hi, >> >>> >> >>> recently I've stumbled upon one already expected behaviour (one instance >> >>> is [1]), but which did not got much attention. >> >>> >> >>> In non-tx cache, when the primary owner fails after the request has been >> >>> replicated to backup owner, the request is retried in the new topology. >> >>> Then, the operation is executed on the new primary (the previous >> >>> backup). The outcome has been already fixed in [2], but the return value >> >>> may be wrong. For example, when we do a put, the return value for the >> >>> second attempt will be the currently inserted value (although the entry >> >>> was just created). Same situation may happen for other operations. >> >>> >> >>> Currently, it's not possible to return the correct value (because it has >> >>> already been overwritten and we don't keep a history of values), but >> >>> shouldn't we rather throw an exception if we were not able to fulfil the >> >>> API contract? 
>> >>> >> >>> Radim >> >>> >> >>> [1] https://issues.jboss.org/browse/ISPN-2956 >> >>> [2] https://issues.jboss.org/browse/ISPN-3422 >> >>> >> >>> -- >> >>> Radim Vansa >> >>> JBoss DataGrid QA >> >>> >> >>> _______________________________________________ >> >>> infinispan-dev mailing list >> >>> infinispan-dev at lists.jboss.org >> >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> >> >> _______________________________________________ >> >> infinispan-dev mailing list >> >> infinispan-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> > _______________________________________________ >> > infinispan-dev mailing list >> > infinispan-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> -- >> Radim Vansa >> JBoss DataGrid QA >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> >> _______________________________________________ >> infinispan-dev mailing list >> >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > > -- > Radim Vansa > > > JBoss DataGrid QA > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From sanne at infinispan.org Tue May 27 07:08:18 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Tue, 27 May 2014 12:08:18 +0100 Subject: [infinispan-dev] IRC weekly meeting log In-Reply-To: <087EC69D-FE92-429B-A9CA-F901B8BEC164@redhat.com> References: <087EC69D-FE92-429B-A9CA-F901B8BEC164@redhat.com> Message-ID: Thanks 
for the logs! As someone was wondering: yes it was bank holiday in UK; so Mircea, Gustavo and myself where offline. Good because I think I forgot to tell Gustavo about our weekly synch-up meeting :) Little update on my area: # Gustavo onboarding, he's starting with pull request reviews and a Maven task as an appetizer: ISPN-4295 Adrian: feel free to ask his help (and reviews) to start engaging him in our areas. # Hibernate Search 5 I've rebased my experimental branch; there are painful problems around the Distributed Query functionality, it relied on collections, helpers and even concepts which are gone in latest Lucene. I'm attempting to compensate for it with some ad-hoc crafted workarounds to avoid disabling the tests and then take it from there but it did significantly delay my plans, and have not resolved the full puzzle yet. I hope to get it into a decent enough shape to start collaboration, but then I'll need help to update the other depending modules. # Lucene Directory v4 The above update will allow us to drop Lucene3 support; I've found some more very interesting reasons to do so: expect the Directory code to be significantly simplified as we won't need chunking anymore; which also implies we can drop readlocks and all the overhead done today to emulate read isolation. 
Cheers, Sanne On 26 May 2014 16:52, Galder Zamarreño wrote: > Hi all, > > Please find this week's IRC meeting chat logs: > http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2014/infinispan.2014-05-26-14.02.html > > Cheers, > -- > Galder Zamarreño > galder at redhat.com > twitter.com/galderz > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev From mmarkus at redhat.com Tue May 27 12:47:29 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Tue, 27 May 2014 17:47:29 +0100 Subject: [infinispan-dev] Reliability of return values In-Reply-To: References: <53707669.7070701@redhat.com> <5370A873.8070509@redhat.com> Message-ID: On May 13, 2014, at 14:58, Dan Berindei wrote: > > > > On Mon, May 12, 2014 at 1:54 PM, Radim Vansa wrote: > @Dan: It's absolutely correct to do the further writes in order to make > the cache consistent, I am not arguing against that. You've fixed the > outcome (state of cache) well. My point was that we should let the user > know that the value he gets is not 100% correct when we already know > that - and given the API, the only option to do that seems to me as > throwing an exception. > > The problem, as I see it, is that users also expect methods that throw an exception to *not* modify the cache. > So we would break some of the users' expectations anyway. I don't see how we can guarantee that if a method throws an exception nothing has been applied without a 2PC/TX. I think this should be an expectation for non-tx caches, i.e. if an operation throws an exception, then the state of the data is inconsistent. > > > @Sanne: I was not suggesting that for now - sure, value versioning is (I > hope) on the roadmap. But that's more complicated, I thought just about > making an adjustment to the current implementation. > > > Actually, just keeping a history of values would not fix the return value in all cases.
> > When retrying a put on the new primary owner, the primary owner would still have to compare our value with the latest value, and return the previous value if they are equal. So we could have something like this: > > A is the originator, B is the primary owner, k = v0 > A -> B: put(k, v1) > B dies before writing v, C is now primary owner > D -> C: put(k, v1) // another put operation from D, with the same value > C -> D: null > A -> C: retry_put(k, v1) > C -> A: v0 // C assumes A is overwriting its own value, so it's returning the previous one > > To fix that, we'd need a unique version generated by the originator - kind of like a transaction id ;) > And to fix the HotRod use case, the HotRod client would have to be the one generating the version. > > Cheers > Dan > > > > Radim > > On 05/12/2014 12:02 PM, Sanne Grinovero wrote: > > I don't think we are in a position to decide what is a reasonable > > compromise; we can do better. > > For example - as Radim suggested - it might seem reasonable to have > > the older value around for a little while. We'll need a little bit of > > history of values and tombstones anyway for many other reasons. > > > > > > Sanne > > > > On 12 May 2014 09:37, Dan Berindei wrote: > >> Radim, I would contend that the first and foremost guarantee that put() > >> makes is to leave the cache in a consistent state. So we can't just throw an > >> exception and give up, leaving k=v on one owner and k=null on another. > >> > >> Secondly, put(k, v) being atomic means that it either succeeds, it writes > >> k=v in the cache, and it returns the previous value, or it doesn't succeed, > >> and it doesn't write k=v in the cache. Returning the wrong previous value is > >> bad, but leaving k=v in the cache is just as bad, even if the all the owners > >> have the same value. > >> > >> And last, we can't have one node seeing k=null, then k=v, then k=null again, > >> when the only write we did on the cache was a put(k, v). 
So trying to undo > >> the write would not help. > >> > >> In the end, we have to make a compromise, and I think returning the wrong > >> value in some of the cases is a reasonable compromise. Of course, we should > >> document that :) > >> > >> I also believe ISPN-2956 could be fixed so that HotRod behaves just like > >> embedded mode after the ISPN-3422 fix, by adding a RETRY flag to the HotRod > >> protocol and to the cache itself. > >> > >> Incidentally, transactional caches have a similar problem when the > >> originator leaves the cluster: ISPN-3421 [1] > >> And we can't handle transactional caches any better than non-transactional > >> caches until we expose transactions to the HotRod client. > >> > >> [1] https://issues.jboss.org/browse/ISPN-3421 > >> > >> Cheers > >> Dan > >> > >> > >> > >> > >> On Mon, May 12, 2014 at 10:21 AM, Radim Vansa wrote: > >>> Hi, > >>> > >>> recently I've stumbled upon one already expected behaviour (one instance > >>> is [1]), but which did not get much attention. > >>> > >>> In non-tx cache, when the primary owner fails after the request has been > >>> replicated to backup owner, the request is retried in the new topology. > >>> Then, the operation is executed on the new primary (the previous > >>> backup). The outcome has been already fixed in [2], but the return value > >>> may be wrong. For example, when we do a put, the return value for the > >>> second attempt will be the currently inserted value (although the entry > >>> was just created). Same situation may happen for other operations. > >>> > >>> Currently, it's not possible to return the correct value (because it has > >>> already been overwritten and we don't keep a history of values), but > >>> shouldn't we rather throw an exception if we were not able to fulfil the > >>> API contract?
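Dan's scenario upthread, where the new primary owner cannot tell a retried put apart from an independent put that merely carries the same value, can be sketched with a plain map standing in for the cache. This is only a hedged sketch under assumed names (there is no ToyOwner or versioned put in the actual Infinispan API); it illustrates why an originator-generated unique version makes the retry safe where value comparison does not:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

/**
 * Toy model of the new primary owner C. Each entry remembers the version
 * of the write that produced it and the value it overwrote, so a retried
 * put carrying the same version can replay the original answer instead of
 * being mistaken for a fresh overwrite. Hypothetical code, not Infinispan.
 */
public class RetryDemo {
    static final class Entry {
        final String value;      // current value
        final UUID version;      // version of the write that produced it
        final String previous;   // the value it overwrote (answer to replay)
        Entry(String value, UUID version, String previous) {
            this.value = value; this.version = version; this.previous = previous;
        }
    }

    private final Map<String, Entry> store = new HashMap<>();

    /** put carrying an originator-generated version. */
    String put(String key, String value, UUID version) {
        Entry e = store.get(key);
        if (e != null && e.version.equals(version)) {
            return e.previous;   // genuine retry detected: replay the old answer
        }
        String prev = (e == null) ? null : e.value;
        store.put(key, new Entry(value, version, prev));
        return prev;
    }

    public static void main(String[] args) {
        RetryDemo c = new RetryDemo();            // C, the new primary owner
        c.put("k", "v0", UUID.randomUUID());      // initial state: k = v0

        UUID vA = UUID.randomUUID();              // A's write, lost at B, retried
        UUID vD = UUID.randomUUID();              // D's independent write, same value

        String dSaw = c.put("k", "v1", vD);       // D's put lands first
        String aSaw = c.put("k", "v1", vA);       // A's retry: distinct version,
                                                  // applied as a fresh write
        System.out.println(dSaw);                 // v0
        System.out.println(aSaw);                 // v1, not the stale v0 that a
                                                  // value-equality heuristic returns
    }
}
```

Replaying A's put with the same version vA returns the same previous value again, so the retry is idempotent, which is the property the value-comparison approach cannot provide.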
> >>> Radim > >>> > >>> [1] https://issues.jboss.org/browse/ISPN-2956 > >>> [2] https://issues.jboss.org/browse/ISPN-3422 > >>> > >>> -- > >>> Radim Vansa > >>> JBoss DataGrid QA > >>> > >>> _______________________________________________ > >>> infinispan-dev mailing list > >>> infinispan-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From mmarkus at redhat.com Tue May 27 14:02:15 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Tue, 27 May 2014 19:02:15 +0100 Subject: [infinispan-dev] IntelliJ IDEA license renewal for ISPN OS project In-Reply-To: References: <1015900187.7272100.1400498687318.JavaMail.zimbra@redhat.com> <517184462.7272821.1400498912152.JavaMail.zimbra@redhat.com> <5F259B94-68CC-45BD-BE58-FB18B15855F3@redhat.com> Message-ID: <58C00CF2-5DB7-46FF-AB30-ABC0F2DBACE1@redhat.com> I have one, ping me if you need it ;) On May 19, 2014, at 13:06, Sanne Grinovero wrote: > On 19 May 2014 12:46, Galder Zamarreño wrote: >> >> On 19 May 2014, at 13:42, Sanne Grinovero wrote: >> >>> I hope Manik will continue with that, he's not on a different planet.
>> >> Well, Mircea is the Infinispan lead, so he should be the one contacting them. > Mircea being on holidays I wouldn't wait for that. >>> But if any of the other IDEA users wants to volunteer to speed things >>> up, please go ahead. >> Or... > ? >> >>> >>> Personally I had a bit of a fight with Eclipse last week, but we're >>> good friends again :) >>> >>> On 19 May 2014 12:28, Tomas Sykora wrote: >>>> Hello all, >>>> current open source Infinispan license for IntelliJ IDEA expires in 3 days. Is there any way how we can obtain a new one for next year? >>>> IIRC, Manik took care about this issue. >>>> >>>> Thanks! >>>> Tomas >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> -- >> Galder Zamarreño >> galder at redhat.com >> twitter.com/galderz Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From galder at redhat.com Wed May 28 06:48:04 2014 From: galder at redhat.com (=?iso-8859-1?Q?Galder_Zamarre=F1o?=) Date: Wed, 28 May 2014 12:48:04 +0200 Subject: [infinispan-dev] Reliability of return values In-Reply-To: References: <53707669.7070701@redhat.com> <5370A873.8070509@redhat.com> Message-ID: <2C42B1F4-356A-484D-9045-09B90C9B59E6@redhat.com> On 27 May 2014, at 18:47, Mircea Markus wrote: > > On May 13, 2014, at 14:58, Dan Berindei wrote: > >> >> >> >> On Mon, May 12,
2014 at 1:54 PM, Radim Vansa wrote: >> @Dan: It's absolutely correct to do the further writes in order to make >> the cache consistent, I am not arguing against that. You've fixed the >> outcome (state of cache) well. My point was that we should let the user >> know that the value he gets is not 100% correct when we already know >> that - and given the API, the only option to do that seems to me as >> throwing an exception. >> >> The problem, as I see it, is that users also expect methods that throw an exception to *not* modify the cache. >> So we would break some of the users' expectations anyway. > > I don't see how we can guarantee that if a method throws an exception nothing has been applied without a 2PC/TX. I think this should be an expectation for non-tx caches, i.e. if an operation throws an exception, then the state of the data is inconsistent. If we did that, our retry logic would remain badly broken for situations like the one mentioned in ISPN-2956. Unless you want to get rid of the retry logic altogether and let the client users decide what to do, I think we should improve the retry logic to better deal with such situations, and we have ways to do so [1] [1] https://issues.jboss.org/browse/ISPN-2956?focusedCommentId=12970541&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12970541 > >> >> >> @Sanne: I was not suggesting that for now - sure, value versioning is (I >> hope) on the roadmap. But that's more complicated, I thought just about >> making an adjustment to the current implementation. >> >> >> Actually, just keeping a history of values would not fix the return value in all cases. >> >> When retrying a put on the new primary owner, the primary owner would still have to compare our value with the latest value, and return the previous value if they are equal.
So we could have something like this: >> >> A is the originator, B is the primary owner, k = v0 >> A -> B: put(k, v1) >> B dies before writing v, C is now primary owner >> D -> C: put(k, v1) // another put operation from D, with the same value >> C -> D: null >> A -> C: retry_put(k, v1) >> C -> A: v0 // C assumes A is overwriting its own value, so it's returning the previous one >> >> To fix that, we'd need a unique version generated by the originator - kind of like a transaction id ;) >> And to fix the HotRod use case, the HotRod client would have to be the one generating the version. >> >> Cheers >> Dan >> >> >> >> Radim >> >> On 05/12/2014 12:02 PM, Sanne Grinovero wrote: >>> I don't think we are in a position to decide what is a reasonable >>> compromise; we can do better. >>> For example - as Radim suggested - it might seem reasonable to have >>> the older value around for a little while. We'll need a little bit of >>> history of values and tombstones anyway for many other reasons. >>> >>> >>> Sanne >>> >>> On 12 May 2014 09:37, Dan Berindei wrote: >>>> Radim, I would contend that the first and foremost guarantee that put() >>>> makes is to leave the cache in a consistent state. So we can't just throw an >>>> exception and give up, leaving k=v on one owner and k=null on another. >>>> >>>> Secondly, put(k, v) being atomic means that it either succeeds, it writes >>>> k=v in the cache, and it returns the previous value, or it doesn't succeed, >>>> and it doesn't write k=v in the cache. Returning the wrong previous value is >>>> bad, but leaving k=v in the cache is just as bad, even if the all the owners >>>> have the same value. >>>> >>>> And last, we can't have one node seeing k=null, then k=v, then k=null again, >>>> when the only write we did on the cache was a put(k, v). So trying to undo >>>> the write would not help. 
>>>> >>>> In the end, we have to make a compromise, and I think returning the wrong >>>> value in some of the cases is a reasonable compromise. Of course, we should >>>> document that :) >>>> >>>> I also believe ISPN-2956 could be fixed so that HotRod behaves just like >>>> embedded mode after the ISPN-3422 fix, by adding a RETRY flag to the HotRod >>>> protocol and to the cache itself. >>>> >>>> Incidentally, transactional caches have a similar problem when the >>>> originator leaves the cluster: ISPN-3421 [1] >>>> And we can't handle transactional caches any better than non-transactional >>>> caches until we expose transactions to the HotRod client. >>>> >>>> [1] https://issues.jboss.org/browse/ISPN-3421 >>>> >>>> Cheers >>>> Dan >>>> >>>> >>>> >>>> >>>> On Mon, May 12, 2014 at 10:21 AM, Radim Vansa wrote: >>>>> Hi, >>>>> >>>>> recently I've stumbled upon one already expected behaviour (one instance >>>>> is [1]), but which did not get much attention. >>>>> >>>>> In non-tx cache, when the primary owner fails after the request has been >>>>> replicated to backup owner, the request is retried in the new topology. >>>>> Then, the operation is executed on the new primary (the previous >>>>> backup). The outcome has been already fixed in [2], but the return value >>>>> may be wrong. For example, when we do a put, the return value for the >>>>> second attempt will be the currently inserted value (although the entry >>>>> was just created). Same situation may happen for other operations. >>>>> >>>>> Currently, it's not possible to return the correct value (because it has >>>>> already been overwritten and we don't keep a history of values), but >>>>> shouldn't we rather throw an exception if we were not able to fulfil the >>>>> API contract?
>>>>> >>>>> Radim >>>>> >>>>> [1] https://issues.jboss.org/browse/ISPN-2956 >>>>> [2] https://issues.jboss.org/browse/ISPN-3422 >>>>> >>>>> -- >>>>> Radim Vansa >>>>> JBoss DataGrid QA >>>>> >>>>> _______________________________________________ >>>>> infinispan-dev mailing list >>>>> infinispan-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>>> >>>> >>>> _______________________________________________ >>>> infinispan-dev mailing list >>>> infinispan-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >>> _______________________________________________ >>> infinispan-dev mailing list >>> infinispan-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> >> -- >> Radim Vansa >> JBoss DataGrid QA >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev >> >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > Cheers, > -- > Mircea Markus > Infinispan lead (www.infinispan.org) > > > > > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -- Galder Zamarre?o galder at redhat.com twitter.com/galderz From davide at hibernate.org Wed May 28 06:52:24 2014 From: davide at hibernate.org (Davide D'Alto) Date: Wed, 28 May 2014 11:52:24 +0100 Subject: [infinispan-dev] OGM, Hot Rod and Grouping API Message-ID: Hi all, some time ago we talked on the mailing list about the integration between Hibernate OGM and Hot Rod. 
To achieve this we would need to include the grouping API in the Hot Rod protocol and to add a couple of methods in the grouping API: - to get the keys in a group - to remove the keys in a group Mircea created an experimental stub where the method " Set getGroupKeys(G group) " is added to the Cache interface. I've rebased the branch to the latest changes (I might have introduced some errors): https://github.com/DavideD/infinispan/compare/ISPN-3981 I should have implemented the methods, but I haven't had the time to work on these features. There are also two issues to keep track of this: https://issues.jboss.org/browse/ISPN-3732 https://issues.jboss.org/browse/ISPN-3981 As far as I know, the API for Infinispan 7 is going to be frozen soon, I was wondering if these changes have been taken into account and, if not, is it possible to include them? Thanks, Davide -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140528/79361c07/attachment.html From emmanuel at hibernate.org Wed May 28 07:25:07 2014 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Wed, 28 May 2014 13:25:07 +0200 Subject: [infinispan-dev] OGM, Hot Rod and Grouping API In-Reply-To: References: Message-ID: <34CF9F91-24FF-44AF-95C5-D8FA780742CA@hibernate.org> We are on our way to Hibernate OGM GA (sometime this summer), so this has an impact on the supported version of Infinispan we will offer, which will limit us in the future due to data backward compatibility. On 28 May 2014, at 12:52, Davide D'Alto wrote: > Hi all, > some time ago we talked on the mailing list about the integration between Hibernate OGM and Hot Rod.
> > To achieve this we would need to include the grouping API in the Hot Rod protocol and to add a couple of methods in the grouping API: > > - to get the keys in a group > - to remove the keys in a group > > Mircea created an experimental stub where the method " Set getGroupKeys(G group) " is added to the Cache interface. > I've rebased the branch to the latest changes (I might have introduced some errors): https://github.com/DavideD/infinispan/compare/ISPN-3981 > > I should have implemented the methods, but I haven't had the time to work on these features. > > There are also two issues to keep track of this: > > https://issues.jboss.org/browse/ISPN-3732 > https://issues.jboss.org/browse/ISPN-3981 > > As far as I know, the API for Infinispan 7 is going to be frozen soon, > I was wondering if these changes have been taken into account and, > if not, is it possible to include them? > > Thanks, > Davide > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/infinispan-dev/attachments/20140528/33cf5c97/attachment.html From mmarkus at redhat.com Wed May 28 12:31:44 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Wed, 28 May 2014 17:31:44 +0100 Subject: [infinispan-dev] updated release schedule for Infinispan 7.0.0 Message-ID: Hi, There's plenty of stuff in the pipeline, so I've added another Alpha5 before the Beta1. Here's the updated timeline: http://goo.gl/VNLAvO Next release by Will, 7.0.0.Alpha5 on the 5th of June. Beta1 on the 13th of June.
Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From sanne at infinispan.org Wed May 28 12:42:38 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 28 May 2014 17:42:38 +0100 Subject: [infinispan-dev] CD datastore inserts slow Message-ID: Hi all, some context, as this conversation started offlist: performance problems with CapeDwarf's storage system, which, as you probably know, is based on Infinispan. So, storing a few thousand entities in a single transaction takes a little more than 1 second, while running each put operation in a separate transaction takes ~50 seconds. I'm afraid that's nothing new? When you use indexing, this is expected: it shines much better in parallel operations than in sequential, individual transactions. I expect some of the changes we have planned for Infinispan 7 to improve on this, but it's unlikely to get faster by more than a single order of magnitude. These are some issues which affected my previous performance tests: https://issues.jboss.org/browse/ISPN-3690 https://issues.jboss.org/browse/ISPN-1764 https://issues.jboss.org/browse/ISPN-3831 https://issues.jboss.org/browse/ISPN-3891 https://issues.jboss.org/browse/ISPN-3905 And of course the Hibernate Search and Lucene upgrades ;-) (in no particular order, as relevance depends on the test and on what metric I'm looking to improve) If you need better figures, the best thing we can do is measure your specific test, and see if we need to add further tasks to my wishlist or if it's enough to prioritize these. Currently I think if you expect to use a transaction for each operation, there is nothing wrong in CD, as unfortunately these figures match my expectations on a non-tuned system. You could tune things a bit, like better sizing of packets for the JGroups and network stack, but I wouldn't expect much higher figures without patching things in Infinispan.
For the record, I'm more concerned that storing "a few thousand" entities in a single TX takes a full second: that's not expected, but my guess is that in this specific case you're not warming up the JVM nor repeating the test. Having batching enabled, I'd expect this to be in the order of millions / second. Sanne Previous discussion - sorry, the formatting got crazy: > 17:20:17,640 INFO [com.google.appengine.tck.benchmark.ObjectifyBenchmarkTest] (default task-1) >>>> Save [2000] time: 1149ms > 17:21:55,659 INFO [com.google.appengine.tck.benchmark.ObjectifyBenchmarkTest] (default task-1) >>>> Save [2000] time: 50675ms > The first time is when whole 2000 is inserted in single Tx, > the other one is when there is no existing Tx. > OK, in reality doing non-Tx for whole batch is a no go, > so on that side we're good. > But I would still like to see non-Tx do much better than 1sec vs. 50sec (for 2k entities). > -Ales > On 28 May 2014, at 17:20, Mircea Markus wrote: > Can we isolate the tests somehow? > On May 28, 2014, at 15:53, Ales Justin wrote: > Objectify is a popular GAE ORM, so it does plenty of persistence magic. > Hence my simple answer is no. > But it should be easy to take latest CapeDwarf release -- 2.0.0.CR2, > and run + debug things, and see why it's so slow. > Either Terry or me is gonna add this test to GAE TCK. > I'll shoot an email on how to run this against CapeDwarf once it's added. > Looking at this: > * https://gist.github.com/alesj/1d24ad24dfbef8b5e12c > It looks like a problem of Tx usage -- Tx per Cache::put. > This snapshot only shows the time spent within Infinispan. Out of this time, 9% is spent indeed within cache.put. How much time does the test spend doing non-infinispan related activities? Where I'm trying to get to is: are we sure Infinispan is the culprit? > It could be that our default config isn't suited to handle such things. > So we're open for any suggestions.
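The two timings quoted in the log lines above make the per-transaction overhead explicit once divided by the 2000 saved entities. A small sketch of that arithmetic (the 1149ms and 50675ms figures come from the benchmark output; everything else is plain division):

```java
// Per-operation cost derived from the ObjectifyBenchmarkTest log lines:
// 2000 inserts in one transaction vs. one transaction per insert.
public class PerOpCost {
    public static void main(String[] args) {
        int entities = 2000;
        long batchedMs = 1149;       // all inserts in a single transaction
        long onePerTxMs = 50675;     // one transaction per insert

        double perOpBatched = batchedMs / (double) entities;   // ~0.57 ms per put
        double perOpSingle = onePerTxMs / (double) entities;   // ~25.3 ms per put

        // The gap is the fixed commit cost (prepare/commit round trips,
        // index flush) paid once per entity instead of once per batch.
        double commitOverhead = perOpSingle - perOpBatched;    // ~24.8 ms per tx
        System.out.printf("batched=%.2fms single=%.2fms overhead=%.2fms%n",
                perOpBatched, perOpSingle, commitOverhead);
    }
}
```

In other words, the writes themselves account for well under a millisecond each; nearly all of the 50 seconds is fixed per-transaction cost, which is why batching (or fewer, larger transactions) dominates any other tuning here.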
> --- > I've added this test to GAE TCK: > * https://github.com/GoogleCloudPlatform/appengine-tck/blob/master/core/benchmark/src/test/java/com/google/appengine/tck/benchmark/ObjectifyBenchmarkTest.java > You need CapeDwarf 2.0.0.CR2: > * http://capedwarf.org/downloads/ > And clone GAE TCK: > * https://github.com/GoogleCloudPlatform/appengine-tck > Run CapeDwarf: > * https://github.com/capedwarf/capedwarf-blue/blob/master/README.md (see (5)) > Then run the ObjectifyBenchmarkTest: > * cd > * mvn clean install > * cd core/benchmark > * mvn clean install -Pcapedwarf,benchmark > Ping me for any issues. > Thanks! > -Ales > Cheers, > -- > Mircea Markus > Infinispan lead (www.infinispan.org) From mmarkus at redhat.com Wed May 28 12:46:09 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Wed, 28 May 2014 17:46:09 +0100 Subject: [infinispan-dev] OGM, Hot Rod and Grouping API In-Reply-To: References: Message-ID: <47DE42DD-896A-4B63-B4B4-64856F07BED9@redhat.com> Hi Davide, Thanks for the heads-up; Pedro started looking into these. 7.0.0.Beta1 is re-scheduled for the 13th of June, but given that this is not an intrusive change I'm okay to have this in before Beta2 (24 June), so we should have enough time. On May 28, 2014, at 11:52, Davide D'Alto wrote: > Hi all, > some time ago we talked on the mailing list about the integration between Hibernate OGM and Hot Rod. > > To achieve this we would need to include the grouping API in the Hot Rod protocol and to add a couple of methods in the grouping API: > > - to get the keys in a group > - to remove the keys in a group > > Mircea created an experimental stub where the method " Set getGroupKeys(G group) " is added to the Cache interface. > I've rebased the branch to the latest changes (I might have introduced some errors): https://github.com/DavideD/infinispan/compare/ISPN-3981 > > I should have implemented the methods, but I haven't had the time to work on these features.
> > There are also two issues to keep track of this: > > https://issues.jboss.org/browse/ISPN-3732 > https://issues.jboss.org/browse/ISPN-3981 > > As far as I know, the API for Infinispan 7 is going to be freezed soon, > I was wondering if this changes have been taken into account and, > if not, is it possible to include them? > > Thanks, > Davide > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From mmarkus at redhat.com Wed May 28 12:48:48 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Wed, 28 May 2014 17:48:48 +0100 Subject: [infinispan-dev] updated release schedule for Infinispan 7.0.0 In-Reply-To: References: Message-ID: On May 28, 2014, at 17:31, Mircea Markus wrote: > Hi, > > There's plenty of stuff on the pipe, so I've added another Alpha5 before the Beta1. Here's the updated timeline: http://goo.gl/VNLAvO > Next release by Will, 7.0.0.Alpha5 on the 5th of June. Correction: it will be Vladimir releasing 7.0.0.Alpha5 and Will releasing 7.0.0.Beta1. (note to self V is before W in the alphabet). > Beta1 on the 13th of June. > > Cheers, > -- > Mircea Markus > Infinispan lead (www.infinispan.org) > > > > Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From mmarkus at redhat.com Wed May 28 13:02:47 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Wed, 28 May 2014 18:02:47 +0100 Subject: [infinispan-dev] OGM, Hot Rod and Grouping API In-Reply-To: <34CF9F91-24FF-44AF-95C5-D8FA780742CA@hibernate.org> References: <34CF9F91-24FF-44AF-95C5-D8FA780742CA@hibernate.org> Message-ID: On May 28, 2014, at 12:25, Emmanuel Bernard wrote: > We are on our way to Hibernate OGM GA (sometime this summer) so this has an impact on the supported version of Infinispan we will offer which will limit us in the future due to data backward compatibility. I.e. 
if you're using 6.0 you might have problems upgrading to ISPN 7.0? > > On 28 May 2014, at 12:52, Davide D'Alto wrote: > >> Hi all, >> some time ago we talked on the mailing list about the integration between Hibernate OGM and Hot Rod. >> >> To achieve this we would need to include the grouping API in the Hot Rod protocol and to add a couple of methods in the grouping API: >> >> - to get the keys in a group >> - to remove the keys in a group >> >> Mircea created an experimental stub where the method " Set getGroupKeys(G group) " is added to the Cache interface. >> I've rebased the branch to the latest changes (I might have introduce some errors): https://github.com/DavideD/infinispan/compare/ISPN-3981 >> >> I should have implemented the methods but I haven't had the time to work on these features. >> >> There are also two issues to keep track of this: >> >> https://issues.jboss.org/browse/ISPN-3732 >> https://issues.jboss.org/browse/ISPN-3981 >> >> As far as I know, the API for Infinispan 7 is going to be freezed soon, >> I was wondering if this changes have been taken into account and, >> if not, is it possible to include them? 
>> >> Thanks, >> Davide >> _______________________________________________ >> infinispan-dev mailing list >> infinispan-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/infinispan-dev > > _______________________________________________ > infinispan-dev mailing list > infinispan-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/infinispan-dev Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From mmarkus at redhat.com Wed May 28 13:12:42 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Wed, 28 May 2014 18:12:42 +0100 Subject: [infinispan-dev] Reliability of return values In-Reply-To: <2C42B1F4-356A-484D-9045-09B90C9B59E6@redhat.com> References: <53707669.7070701@redhat.com> <5370A873.8070509@redhat.com> <2C42B1F4-356A-484D-9045-09B90C9B59E6@redhat.com> Message-ID: <2400D983-00B7-408F-9746-EC062E0E0C24@redhat.com> On May 28, 2014, at 11:48, Galder Zamarreño wrote: > On 27 May 2014, at 18:47, Mircea Markus wrote: > >> >> On May 13, 2014, at 14:58, Dan Berindei wrote: >> >>> >>> >>> >>> On Mon, May 12, 2014 at 1:54 PM, Radim Vansa wrote: >>> @Dan: It's absolutely correct to do the further writes in order to make >>> the cache consistent, I am not arguing against that. You've fixed the >>> outcome (state of cache) well. My point was that we should let the user >>> know that the value he gets is not 100% correct when we already know >>> that - and given the API, the only option to do that seems to me as >>> throwing an exception. >>> >>> The problem, as I see it, is that users also expect methods that throw an exception to *not* modify the cache. >>> So we would break some of the users' expectations anyway. >> >> I don't see how we can guarantee that if a method throws an exception nothing has been applied without a 2PC/TX. I think this should be an expectation for non-tx caches, i.e. if an operation throws an exception, then the state of the data is inconsistent.
> If we did that, our retry logic would remain badly broken for situations like the one mentioned in ISPN-2956. Unless you want to get rid of the retry logic altogether and let the client users decide what to do, I think we should improve the retry logic to better deal with such situations, and we have ways to do so [1] > > [1] https://issues.jboss.org/browse/ISPN-2956?focusedCommentId=12970541&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12970541 You are right, retrying is a better client experience than failing blindly. Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From sanne at infinispan.org Wed May 28 13:38:39 2014 From: sanne at infinispan.org (Sanne Grinovero) Date: Wed, 28 May 2014 18:38:39 +0100 Subject: [infinispan-dev] OGM, Hot Rod and Grouping API In-Reply-To: References: <34CF9F91-24FF-44AF-95C5-D8FA780742CA@hibernate.org> Message-ID: On 28 May 2014 18:02, Mircea Markus wrote: > On May 28, 2014, at 12:25, Emmanuel Bernard wrote: > >> We are on our way to Hibernate OGM GA (sometime this summer) so this has an impact on the supported version of Infinispan we will offer which will limit us in the future due to data backward compatibility. > > I.e. if you're using 6.0 you might have problems upgrading to ISPN 7.0? The configuration parser API changed, so we have to pick one Infinispan version at compile time, and users will have to stick with that. That's the only blocker I know of; we could explore working around it (i.e. restoring the old API, since in the parallel thread we couldn't find a reason for it to be different). But there could be other differences, which are hidden by the Parser problem. Another pain point is that Infinispan 7 needs a different configuration file and is unable to read the old ones:
We used to include a default configuration file so that a newcomer is not forced to start by understanding the intricacies of an Infinispan configuration, but this way we can't provide an out-of-the-box experience that works on both 7 and 6. In conclusion: unless we actively work to make it easy, I think users will need to stick to a specific version. Sanne > >> >> On 28 May 2014, at 12:52, Davide D'Alto wrote: >> >>> Hi all, >>> some time ago we talked on the mailing list about the integration between Hibernate OGM and Hot Rod. >>> >>> To achieve this we would need to include the grouping API in the Hot Rod protocol and to add a couple of methods to the grouping API: >>> >>> - to get the keys in a group >>> - to remove the keys in a group >>> >>> Mircea created an experimental stub where the method " Set<K> getGroupKeys(G group) " is added to the Cache interface. >>> I've rebased the branch to the latest changes (I might have introduced some errors): https://github.com/DavideD/infinispan/compare/ISPN-3981 >>> >>> I should have implemented the methods, but I haven't had the time to work on them. >>> >>> There are also two issues to track this: >>> >>> https://issues.jboss.org/browse/ISPN-3732 >>> https://issues.jboss.org/browse/ISPN-3981 >>> >>> As far as I know, the API for Infinispan 7 is going to be frozen soon, >>> so I was wondering if these changes have been taken into account and, >>> if not, is it possible to include them? 
>>> >>> Thanks, >>> Davide > > Cheers, > -- > Mircea Markus > Infinispan lead (www.infinispan.org) From ttarrant at redhat.com Thu May 29 04:26:42 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 29 May 2014 10:26:42 +0200 Subject: [infinispan-dev] OGM, Hot Rod and Grouping API In-Reply-To: References: Message-ID: <5386EF42.4020300@redhat.com> On 28/05/2014 12:52, Davide D'Alto wrote: > Mircea created an experimental stub where the method " Set<K> > getGroupKeys(G group) " is added to the Cache interface. Can't we move that to a separate interface, i.e. GroupingCache? Tristan From pedro at infinispan.org Thu May 29 05:19:17 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Thu, 29 May 2014 10:19:17 +0100 Subject: [infinispan-dev] OGM, Hot Rod and Grouping API In-Reply-To: <5386EF42.4020300@redhat.com> References: <5386EF42.4020300@redhat.com> Message-ID: <5386FB95.6010308@infinispan.org> I was thinking of moving the new methods to AdvancedCache. But, IMO, a new interface is not needed. On 05/29/2014 09:26 AM, Tristan Tarrant wrote: > > > On 28/05/2014 12:52, Davide D'Alto wrote: >> Mircea created an experimental stub where the method " Set<K> >> getGroupKeys(G group) " is added to the Cache interface. > Can't we move that to a separate interface, i.e. GroupingCache? 
> > Tristan From ttarrant at redhat.com Thu May 29 05:24:57 2014 From: ttarrant at redhat.com (Tristan Tarrant) Date: Thu, 29 May 2014 11:24:57 +0200 Subject: [infinispan-dev] OGM, Hot Rod and Grouping API In-Reply-To: <5386FB95.6010308@infinispan.org> References: <5386EF42.4020300@redhat.com> <5386FB95.6010308@infinispan.org> Message-ID: <5386FCE9.3060708@redhat.com> Ok, moving to AdvancedCache is also good. Have you already got a proposal for Hot Rod? Tristan On 29/05/2014 11:19, Pedro Ruivo wrote: > I was thinking of moving the new methods to AdvancedCache. But, IMO, a > new interface is not needed. > > On 05/29/2014 09:26 AM, Tristan Tarrant wrote: >> >> On 28/05/2014 12:52, Davide D'Alto wrote: >>> Mircea created an experimental stub where the method " Set<K> >>> getGroupKeys(G group) " is added to the Cache interface. >> Can't we move that to a separate interface, i.e. GroupingCache? >> >> Tristan From pedro at infinispan.org Thu May 29 05:27:43 2014 From: pedro at infinispan.org (Pedro Ruivo) Date: Thu, 29 May 2014 10:27:43 +0100 Subject: [infinispan-dev] OGM, Hot Rod and Grouping API In-Reply-To: <5386FCE9.3060708@redhat.com> References: <5386EF42.4020300@redhat.com> <5386FB95.6010308@infinispan.org> <5386FCE9.3060708@redhat.com> Message-ID: <5386FD8F.5030006@infinispan.org> On 05/29/2014 10:24 AM, Tristan Tarrant wrote: > Ok, moving to AdvancedCache is also good. > > Have you already got a proposal for Hot Rod? Not yet. 
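As a rough illustration of the shape of the two grouping operations requested in this thread (get the keys in a group, remove the keys in a group), here is a self-contained toy sketch. It is not the Infinispan API: the class, the method bodies, and the key-to-group function are all invented stand-ins for however group membership is really computed.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

// Toy sketch only -- not the Infinispan API. It shows the shape of the two
// operations requested for OGM: "get the keys in a group" and "remove the
// keys in a group". Group membership is modelled here with a plain
// key-to-group function, an invented stand-in.
public class GroupedCache<K, V> {
    private final Map<K, V> data = new HashMap<>();
    private final Function<K, Object> grouper;

    public GroupedCache(Function<K, Object> grouper) {
        this.grouper = grouper;
    }

    public void put(K key, V value) {
        data.put(key, value);
    }

    // Analogue of the proposed "Set<K> getGroupKeys(G group)".
    public Set<K> getGroupKeys(Object group) {
        Set<K> keys = new HashSet<>();
        for (K k : data.keySet()) {
            if (grouper.apply(k).equals(group)) {
                keys.add(k);
            }
        }
        return keys;
    }

    // Analogue of the proposed "remove the keys in a group" operation.
    public void removeGroup(Object group) {
        data.keySet().removeAll(getGroupKeys(group));
    }

    public int size() {
        return data.size();
    }

    public static void main(String[] args) {
        // Keys share a group when they have the same prefix before ':'.
        GroupedCache<String, String> c = new GroupedCache<>(k -> k.split(":")[0]);
        c.put("user:1", "Alice");
        c.put("user:2", "Bob");
        c.put("order:1", "Book");
        System.out.println(c.getGroupKeys("user")); // the two "user:*" keys
        c.removeGroup("user");
        System.out.println(c.size());               // prints 1
    }
}
```

Whether these two methods land on Cache, AdvancedCache, or a separate GroupingCache interface is exactly the design question being debated above; the sketch is agnostic to that choice.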
I'm starting with embedded mode and then I'll add the operation to Hot Rod (Hot Rod is outside my comfort area and I'll need some time to figure out how to add the operation :P) Pedro > > Tristan > > On 29/05/2014 11:19, Pedro Ruivo wrote: >> I was thinking of moving the new methods to AdvancedCache. But, IMO, a >> new interface is not needed. >> >> On 05/29/2014 09:26 AM, Tristan Tarrant wrote: >>> >>> On 28/05/2014 12:52, Davide D'Alto wrote: >>>> Mircea created an experimental stub where the method " Set<K> >>>> getGroupKeys(G group) " is added to the Cache interface. >>> Can't we move that to a separate interface, i.e. GroupingCache? >>> >>> Tristan From mmarkus at redhat.com Thu May 29 06:38:02 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Thu, 29 May 2014 11:38:02 +0100 Subject: [infinispan-dev] running ISPN in Karaf - documentation Message-ID: <996C94CA-3552-46F5-A012-2B0DB4B14324@redhat.com> Hi Ion, It would be good to have a documentation page + blog on how to install ISPN in Karaf: where to take the features file from, how to install the jars, what the limitations are (e.g. query not supported yet), etc. 
Martin has written a nice blog for the hotrod client already: http://blog.infinispan.org/search/label/osgi Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From mmarkus at redhat.com Thu May 29 09:28:31 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Thu, 29 May 2014 14:28:31 +0100 Subject: [infinispan-dev] book on Infinispan Message-ID: Hi Wagner, How's the book progressing? If you want someone to review it, I'm happy to take a look. Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From mmarkus at redhat.com Thu May 29 09:50:02 2014 From: mmarkus at redhat.com (Mircea Markus) Date: Thu, 29 May 2014 14:50:02 +0100 Subject: [infinispan-dev] book on Infinispan In-Reply-To: References: Message-ID: <938115EC-15CE-4260-A838-5C8A2756F44E@redhat.com> Does anyone have Wagner's email address? The one I have doesn't seem to be valid. On May 29, 2014, at 14:28, Mircea Markus wrote: > Hi Wagner, > > How's the book progressing? If you want someone to review it, I'm happy to take a look. > > Cheers, > -- > Mircea Markus > Infinispan lead (www.infinispan.org) Cheers, -- Mircea Markus Infinispan lead (www.infinispan.org) From gustavonalle at gmail.com Thu May 29 10:13:40 2014 From: gustavonalle at gmail.com (Gustavo Fernandes) Date: Thu, 29 May 2014 15:13:40 +0100 Subject: [infinispan-dev] book on Infinispan In-Reply-To: <938115EC-15CE-4260-A838-5C8A2756F44E@redhat.com> References: <938115EC-15CE-4260-A838-5C8A2756F44E@redhat.com> Message-ID: <6365BEAB-8102-4B5B-BACB-5D7E4D0C7B1A@gmail.com> Priyanka Goel, who is responsible for the book on the Packt Publishing side, should know. Gustavo On 29 May 2014, at 14:50, Mircea Markus wrote: > Does anyone have Wagner's email address? The one I have doesn't seem to be valid. 
> > On May 29, 2014, at 14:28, Mircea Markus wrote: > >> Hi Wagner, >> >> How's the book progressing? If you want someone to review it, I'm happy to take a look. >> >> Cheers, >> -- >> Mircea Markus >> Infinispan lead (www.infinispan.org) > > Cheers, > -- > Mircea Markus > Infinispan lead (www.infinispan.org) From isavin at redhat.com Fri May 30 08:09:13 2014 From: isavin at redhat.com (Ion Savin) Date: Fri, 30 May 2014 15:09:13 +0300 Subject: [infinispan-dev] running ISPN in Karaf - documentation In-Reply-To: <996C94CA-3552-46F5-A012-2B0DB4B14324@redhat.com> References: <996C94CA-3552-46F5-A012-2B0DB4B14324@redhat.com> Message-ID: <538874E9.5060901@redhat.com> Hi Mircea, On 05/29/2014 01:38 PM, Mircea Markus wrote: > Hi Ion, > > Would be good to have a documentation page + blog on how to install ISPN in Karaf: where to take the features file from, how to install the jars, what are the limitations (e.g. query not supported yet) etc. Martin has written a nice blog for the hotrod client already: http://blog.infinispan.org/search/label/osgi > > Cheers, > The process is the same for the other modules also. At the moment, each bundle/module installed using Karaf features will pull in the modules it depends on (e.g. installing infinispan-cachestore-remote will also install commons, core and hotrod-client). Perhaps, in addition to this, it might help to have a feature file which installs the bundles that are frequently used together (similar to [2]). I'll add this and document the process, though it's not different from what Martin described. 
[1] https://github.com/infinispan/infinispan/pull/2540 [2] https://issues.jboss.org/browse/ISPN-4333 Regards, Ion Savin
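An aggregating feature of the kind Ion describes might look roughly like the following Karaf features XML. This is only an illustrative sketch: the feature names, the repository name, and the version used here are invented examples, not the actual Infinispan feature definitions.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sketch only: feature names and versions below are invented
     examples, not the real Infinispan feature files. One aggregate feature
     pulls in the bundles frequently installed together. -->
<features name="infinispan-extras" xmlns="http://karaf.apache.org/xmlns/features/v1.0.0">
  <feature name="infinispan-remote-stack" version="7.0.0">
    <feature>infinispan-commons</feature>
    <feature>infinispan-core</feature>
    <feature>infinispan-client-hotrod</feature>
    <feature>infinispan-cachestore-remote</feature>
  </feature>
</features>
```

Installing the single aggregate feature would then bring in the whole set in one step, instead of relying on each bundle's own dependency chain.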