Martin Ribaric -- Migration of Infinispan OData server
by Tomas Sykora
Hello everyone :)
I am enclosing Martin's email and sending it from my email address because he has already sent it twice and, for whatever strange reason, we can't see his emails from his gmail address here. Martin is CCed and will hopefully receive all reactions from this thread; please make sure you include him in CC when replying :)
Thank you!
-----------------
Hello Infinispan community,
let me introduce myself shortly: my name is Martin Ribarič and I studied at Masaryk University – Faculty of Informatics in Brno. I wrote my bachelor thesis on the topic "Migration of Infinispan OData server". I would like to tell you a few words about my work.
The primary goal of my bachelor thesis was to develop/migrate a new OData service for Infinispan. A new solution is needed because the old one was built on the odata4j library, whose development has effectively stopped. The new OData service is offered by the Apache Olingo project. In my work, I created a servlet for the Infinispan cache using the Apache Olingo library's support for OData v4.
We can put, get, manage and, most importantly, also query JSON documents in the cache. Querying is possible on the basic key value or on a property of the JSON document. In queries we can also use the operations AND, OR and EQUALS, plus operations for reducing the response list: SKIP and TOP.
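To make the query capabilities more concrete, here is a minimal sketch of how such a service could be queried over HTTP; the service root http://localhost:8080/odata and the entity set name jsonCache are assumptions for illustration only, the actual paths depend on how the servlet is deployed.

// Hedged sketch, not taken from the thesis code: issues an OData v4 $filter query
// against a hypothetical service root, using only standard OData v4 query options.
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class ODataQueryExample {
   public static void main(String[] args) throws Exception {
      // AND/EQ filter on two JSON document properties, with $skip/$top to reduce the response list.
      String filter = URLEncoder.encode("firstName eq 'Martin' and city eq 'Brno'",
            StandardCharsets.UTF_8).replace("+", "%20");
      String uri = "http://localhost:8080/odata/jsonCache?$filter=" + filter + "&$skip=0&$top=10";

      HttpClient client = HttpClient.newHttpClient();
      HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(uri))
            .header("Accept", "application/json")
            .GET()
            .build();
      HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
      System.out.println(response.body());
   }
}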
Source code of my bachelor thesis is available at:
https://github.com/marib15/OlingoInfinispan-server
Tomas Sykora will review my code and push my solution to https://github.com/infinispan/infinispan-odata-server master soon.
If the community is interested, we can plan to continue development so that the server supports more operations, and also do performance testing.
I know the solution is definitely not perfect, but it can serve as a baseline for future work, adjustments or adoption.
I am looking forward to hearing your responses!
Have a nice day :)
Ribarič
Cluster Executor failover and execution policy
by William Burns
As many of you may or may not be aware, the ClusterExecutor interface and implementation were introduced in Infinispan 8.2 [1]. This is a new API that can be used to submit commands to other nodes in a way similar to the DistributedExecutor, while also not being tied to a cache.
The first implementation of ClusterExecutor did not include a couple of features that DistributedExecutor has. For this post I will concentrate on failover and execution policies. My plan is to introduce some API to Infinispan 9 to allow ClusterExecutor to also offer these capabilities.
The first change is that I want to add additional options to Execution Policies. The execution policy is used to limit sending messages to nodes based on their topology (site, rack & machine id). The old execution policy allowed for SAME_MACHINE, SAME_RACK, SAME_SITE and ALL. I plan on adding the opposites of the SAME values, also supporting DIFFERENT_MACHINE, DIFFERENT_RACK and DIFFERENT_SITE, in case the user wants to ensure that data is processed elsewhere. Unless you think this is unneeded?
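For illustration, a minimal sketch of what the extended policy enum could look like; only the SAME_* values and ALL exist today, and the DIFFERENT_* constants are nothing more than the proposal above.

// Hedged sketch of the proposed policy set; names follow the paragraph above, not released API.
public enum ClusterExecutionPolicy {
   ALL,
   SAME_MACHINE,
   DIFFERENT_MACHINE,
   SAME_RACK,
   DIFFERENT_RACK,
   SAME_SITE,
   DIFFERENT_SITE
}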
The API changes I am thinking of are below (included in the email to allow for responses inline). Note that existing methods would be unchanged, and thus the submit and execute methods would still be used to send commands. One big difference is that I have not allowed the user to control the failover node or the target node when doing a single submission with multiple available targets. In my mind, if a user wants this they should do it manually themselves, but this is open for discussion as well.
/**
 * When a command is submitted it will only be submitted to one of the available nodes. There is no strict
 * requirement as to which node is chosen; this is implementation specific. Fail over can be used with
 * configuration, please see {@link ClusterExecutor#failOverRetries(int)} for more information.
 * @return this executor again with commands submitted to a single node
 */
ClusterExecutor singleNodeSubmission();

/**
 * When a command is submitted it will be submitted to all of the available nodes. Fail over is not supported
 * with this configuration. This is the default submission method.
 * @return this executor again with commands submitted to all nodes
 */
ClusterExecutor allNodeSubmission();

/**
 * Enables fail over when using {@link ClusterExecutor#singleNodeSubmission()}. If the executor
 * is not currently in single node submission mode, this method will throw {@link IllegalStateException}.
 * When a fail over count is applied, a submitted command will be retried on the available nodes up to
 * that many times until an exception is no longer raised. The one exception that is not retried is a
 * TimeoutException, since this could be related to {@link ClusterExecutor#timeout(long, TimeUnit)}. Each time
 * fail over occurs, a random node among the available nodes will be used (trying not to reuse the same node).
 * @param failOverCount how many times this executor will attempt a fail over
 * @return this executor again with fail over retries applied
 * @throws IllegalStateException if this cluster executor is not currently configured for single node submission
 */
ClusterExecutor failOverRetries(int failOverCount) throws IllegalStateException;

/**
 * Allows for filtering of target addresses by only allowing addresses that match the given execution policy
 * to be used. Note this method overrides any previous filtering that was done (i.e. calling
 * {@link ClusterExecutor#filterTargets(Collection)}).
 * @param policy the policy to determine which nodes can be used
 * @return this executor again with the execution policy applied to determine which nodes are contacted
 */
ClusterExecutor filterTargets(ClusterExecutionPolicy policy);

/**
 * Allows for filtering of target addresses dynamically per invocation. The predicate is applied to each member
 * that is part of the execution policy. Note that this method overrides any previous filtering that was done
 * (i.e. calling {@link ClusterExecutor#filterTargets(Collection)}).
 * @param policy the execution policy applied before the predicate to allow only nodes in that group
 * @param predicate the dynamic predicate applied each time an invocation is done
 * @return this executor again with the execution policy and predicate applied to determine which nodes are contacted
 */
ClusterExecutor filterTargets(ClusterExecutionPolicy policy, Predicate<? super Address> predicate);
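To make the proposed flow concrete, here is a minimal usage sketch assuming the methods above are adopted as written; obtaining the executor via EmbeddedCacheManager.executor() reflects the existing 8.2 API, while the chained calls are only this proposal.

// Hedged usage sketch; cacheManager is an EmbeddedCacheManager.
ClusterExecutor executor = cacheManager.executor()
      .timeout(10, TimeUnit.SECONDS)
      .singleNodeSubmission()                                   // send each command to exactly one node
      .failOverRetries(3)                                       // retry on another node up to 3 times
      .filterTargets(ClusterExecutionPolicy.DIFFERENT_MACHINE); // proposed DIFFERENT_* policy, see above

executor.submit(() -> System.out.println("Executed remotely"))
        .whenComplete((v, t) -> {
           if (t != null) {
              // fail over attempts exhausted, or a TimeoutException occurred
              t.printStackTrace();
           }
        });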
Thanks for any input,
- Will
[1]
https://github.com/infinispan/infinispan/blob/master/core/src/main/java/o...
Backwards compatibility issues with Infinispan 9.x and Hibernate 2LC
by Galder Zamarreño
Hi all,
As I've mentioned, I'm working on trying to integrate Hibernate 2LC (5.x branch at the moment) with Infinispan 9.
To start with, I tried to see if I could just run Hibernate 2LC 5.x, compiled with Infinispan 8, against an Infinispan 9 runtime. The first problem here had to do with changes to PrioritizedMethodMetadata [1].
However, the above is the least of our problems... once I tried to compile with Infinispan 9, there were a lot of compilation errors.
Here's a list of what I've found out so far, based on the work in [2], which includes the compilation errors and runtime issues I've discovered:
1. Plenty of interceptors have been moved from org.infinispan.interceptors.base and org.infinispan.interceptors packages to org.infinispan.interceptors.impl package.
2. ModuleCommandFactory.fromStream now passes a cache name in ByteString instead of String.
3. DataWriteCommand.setMetadata() method is gone. The reason for this is that FlagAffectedCommand no longer extends MetadataAwareCommand.
4. Interceptors can no longer invoke invokeNextInterceptor() in the parent; the method has been renamed to invokeNext() (in a different class, AsyncInterceptor).
5. A lot of interceptors now take flags as long instead of Set<Flag>, which results in a compilation error.
6. BaseRpcInterceptor subclasses are now forced to implement the abstract protected method getLog(), again a compilation error.
7. CallInterceptor no longer contains visit... methods, so all interceptors extending it need to extend CommandInterceptor and be placed just before CallInterceptor.
7.1. As a result of that, interceptor positioning calls need to be changed.
8. AdvancedCache.filterEntries() is gone, so we need to find an alternative way to do the same.
9. WriteCommand.getAffectedKeys() returns Collection instead of Set now.
10. org.infinispan.filter.NullValueConverter is gone. I removed that as part of marshalling changes since it was not used anywhere within Infinispan repo, but Hibernate 2LC actually uses it.
11. BeginInvalidationCommand and EndInvalidationCommand write the lockOwner directly via `output.writeObject(lockOwner)`, but this causes a problem when the lockOwner is a CommandInvocationId since there is no externalizer for it any more. The reason for not having an externalizer is that CommandInvocationId is written via static CommandInvocationId.writeTo() calls.
12. org.infinispan.commands.module.ExtendedModuleCommandFactory is gone.
13. ReplicableCommand.setParameters() method is gone.
14. BaseRpcCommand constructor takes a ByteString instead of String.
15. ReplicableCommand implementations need to implement the writeTo() and readFrom() methods (a hedged sketch of this change follows the list below).
16. (test) BlockingInterceptor can no longer be added via an AdvancedCache.addInterceptor() call because it does not extend CommandInterceptor any more.
17. (test) org.infinispan.util.concurrent.ConcurrentHashSet has been moved.
18. (test) TestingEntityCacheKey must be made to extend ExternalPojo so that it can be externally marshalled.
19. (test) 2lc-test-tcp.xml contains attributes that are no longer recognized by JGroups 4.x, which throws errors.
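As referenced in item 15, here is a hedged sketch of what adapting a custom replicable command might look like under items 13-15; the command itself is hypothetical, and the execution method (perform()/invokeAsync(), depending on the exact 9.x version) is deliberately left to a concrete subclass rather than guessed at here.

// Hypothetical command, only to illustrate the ByteString constructor (item 14) and the
// writeTo()/readFrom() pair that replaces setParameters() (items 13 and 15).
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import org.infinispan.commands.remote.BaseRpcCommand;
import org.infinispan.util.ByteString;

public abstract class ExampleModuleCommand extends BaseRpcCommand {
   public static final byte COMMAND_ID = 125; // hypothetical id

   private Object key;

   public ExampleModuleCommand(ByteString cacheName) { // ByteString instead of String (item 14)
      super(cacheName);
   }

   @Override
   public byte getCommandId() {
      return COMMAND_ID;
   }

   @Override
   public boolean isReturnValueExpected() {
      return false;
   }

   @Override
   public void writeTo(ObjectOutput output) throws IOException { // replaces setParameters() (items 13, 15)
      output.writeObject(key);
   }

   @Override
   public void readFrom(ObjectInput input) throws IOException, ClassNotFoundException {
      key = input.readObject();
   }
}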
The question here is whether we should work towards making Infinispan 9 backwards compatible with Infinispan 8 as far as Hibernate 2LC integration is concerned.
In theory, Infinispan 9 should be integrated with Hibernate 6.x onwards, but as always, Wildfly might have different opinions... @Paul?
If we need to do something, the time to do it is now, before 9.Final.
Cheers,
p.s. A lot of tests are still failing, so the work in [2] is nowhere near finished.
[1] https://gist.github.com/galderz/e26ea9d4838a965500906a6df87e064a
[2] https://github.com/galderz/hibernate-orm/commit/5e36a021db4eaad75d835d321...
--
Galder Zamarreño
Infinispan, Red Hat
State transfer-related terms
by Radim Vansa
Hi,
I've started (again) working on ISPN-5021 [1], and I'd like to get some common agreement on a few terms. Below I summarize my understanding (or misunderstanding) of these; please state your opinion, thinking a bit more generally.
State transfer: the whole process beginning with some ST-related event (a node being detected to crash, or sending a join or leave request) and ending when this event is processed. When another event happens, the current ST can either be finished or canceled, and then *another* ST can begin. State transfer is a cluster-wide process, though it cannot be started and ended absolutely simultaneously on all nodes.
Rebalance: one phase of ST, when the data transfer occurs.
Data rehash: this is a bit of a painful point: we have DataRehashEvent, where the name suggests that it is related rather to rebalance, but currently it fires when CacheTopology.getPendingCH() == null (that is, when ST is complete), and the event itself also looks more like it should be fired at the end of state transfer. If we have something more to do after the rebalance, I am not sure how useful it is to fire that just because all data has been transferred (but, for example, before the old data has been wiped out). Should I add another StateTransferEvent (and appropriate listeners)? That would break compatibility with tightly related implementations...
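For context, a minimal sketch of how the event in question is typically consumed, just to show where the semantics discussed above become visible to users; registration via cache.addListener(new RehashListener()) is assumed.

// Sketch of a cache listener for the event discussed above.
import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachelistener.annotation.DataRehashed;
import org.infinispan.notifications.cachelistener.event.DataRehashedEvent;

@Listener
public class RehashListener {
   @DataRehashed
   public void onDataRehashed(DataRehashedEvent<?, ?> event) {
      if (!event.isPre()) {
         // Fired today when CacheTopology.getPendingCH() == null, i.e. effectively at the
         // end of state transfer rather than at the end of the rebalance phase.
         System.out.println("Topology " + event.getNewTopologyId() + ": data rehash complete");
      }
   }
}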
WDYT?
Radim
[1] https://issues.jboss.org/browse/ISPN-5021
--
Radim Vansa <rvansa(a)redhat.com>
JBoss Performance Team
My weekly report
by Tristan Tarrant
Hi guys,
I won't be able to attend this week's IRC meeting, so here's my update:
ISPN-7444 Configuration templates should not turn into concrete caches
This was a side-effect of my work on ISPN-7066 (default cache
inheritance), which was causing the Hot Rod server to start templates
as if they were concrete caches.
ISPN-7442 Server configurations should use embedded defaults when possible
Server always had defaults hard-coded in the configuration resource
descriptors. I've changed it so that it uses whatever defaults the
corresponding embedded configuration element uses.
ISPN-7445 Simplify default server configurations
Removed a ton of useless "example" configuration attributes from the
shipped configs, since most of them were duplicating "defaults" (wrong
ones at that) and had an adverse impact on performance.
ISPN-7446 Make the mode attribute on clustered caches optional and
default to SYNC
SYNC caches are what a user normally wants, so the "mode" attribute,
which was previously mandatory, is now optional and it defaults to SYNC.
This means that <distributed-cache name="mycache"/> is all that is
needed to get good defaults.
I also did some CI surgery/cleanup, especially trying to help Galder and
Sanne identify and solve the OSGi failures.
I released 9.0.0.CR1 and I fixed the website news feed since Google
deprecated the feeds API.
This week I want to go through docs, examples, javadocs and the website
to ensure that everything is in order for when we get to the final release.
Tristan
--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat