JIRA notification
by Romain Pelisse
Hi all,
I would like to be notified whenever a new bug/feature request is created
in the ISPN JIRA, but I have utterly failed to find out how to do that. Is
this possible? (I would guess so.) If somebody has done this, can you give
me a hint on how to set it up?
Thanks!
--
Romain PELISSE,
*"The trouble with having an open mind, of course, is that people will
insist on coming along and trying to put things in it" -- Terry Pratchett*
Belaran ins Prussia (blog) <http://blog.wordpress.belaran.eu/> (...finally
up and running!)
Infinispan 7.0 feature freeze and future planning
by Tristan Tarrant
Hi all,
Infinispan 7.0 has been in development for over nine months now and we
really need to release it into the wild, since it contains a lot of juicy
stuff :)
For this reason I'm calling a feature freeze: all new features need to be
reassigned to 7.1 or 7.2.
For the next minor releases I would like to suggest the following strategy:
- use a 3-month timebox in which we strive to keep master in an "always
releasable" state
- complex feature work will need to happen on dedicated feature
branches, using the usual GitHub pull-request workflow
- a feature will be merged back into master only when it is complete
(code, tests, docs, reviewed, CI-checked)
- if a feature is running late it will be postponed to the following
minor release, so as not to hinder other development
Suggestions and amendments to the above are welcome.
Thanks!
Tristan
Differences between default values in the XSD and the code...Part One
by Alan Field
Hey,
I have been looking at the differences between the default values in the XSD and the default values in the configuration builders. [1] I created a list of the differences and talked to Dan about his suggestions for the defaults. The numbers in parentheses below are Dan's suggestions, but he also asked me to post here to get a wider set of opinions on these values. This list is based on the code used in infinispan-core, so I still need to go through the server code to check the default values there. (All time values below are in milliseconds.)
1) For locking, the code has concurrency level set to 32, and the XSD has 1000 (32)
2) For eviction:
a) the code has max entries set to -1, and the XSD has 10000 (-1)
b) the code has interval set to 60000, and the XSD has 5000 (60000)
3) For async configuration:
a) the code has queue size set to 1000, and the XSD has 0 (0)
b) the code has queue flush interval set to 5000, and the XSD has 10 (10)
c) the code has remote timeout set to 15000, and the XSD has 17500 (15000)
4) For hash, the code has number of segments set to 60, and the XSD has 80 (60)
5) For l1, the code has l1 cleanup interval set to 600000, and the XSD has 60000 (60000)
Please let me know if you have any opinions on these default values, and also if you have any ideas for avoiding these differences in the future. It seems like there are a couple of possibilities at this point:
1) Generating the XSD from the source code
2) Creating a test case that parses the XSD, creates a cache, and verifies the code's default values against the parsed values (a sketch follows below)
3) ???
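For option 2, here is a minimal sketch of what such a test could look like, assuming JUnit on the classpath; the schema path and the choice of the locking concurrency-level attribute are illustrative, and a real test would loop over all attributes rather than just one:

import java.io.File;

import javax.xml.parsers.DocumentBuilderFactory;

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.junit.Test;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;

public class XsdDefaultsTest {

   @Test
   public void lockingConcurrencyLevelMatchesXsd() throws Exception {
      // Parse the schema as plain XML and pull out the declared default.
      DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
      dbf.setNamespaceAware(true);
      Document xsd = dbf.newDocumentBuilder()
            .parse(new File("src/main/resources/schema/infinispan-config-7.0.xsd"));
      String xsdDefault = findAttributeDefault(xsd, "concurrency-level");
      assertNotNull("concurrency-level has no default in the XSD", xsdDefault);

      // Build a configuration without touching locking, i.e. pure code defaults.
      Configuration cfg = new ConfigurationBuilder().build();

      assertEquals(Integer.parseInt(xsdDefault), cfg.locking().concurrencyLevel());
   }

   // Walk all xs:attribute declarations and return the default of the named one.
   private static String findAttributeDefault(Document xsd, String attributeName) {
      NodeList attrs = xsd.getElementsByTagNameNS(
            "http://www.w3.org/2001/XMLSchema", "attribute");
      for (int i = 0; i < attrs.getLength(); i++) {
         Element attr = (Element) attrs.item(i);
         if (attributeName.equals(attr.getAttribute("name"))) {
            return attr.hasAttribute("default") ? attr.getAttribute("default") : null;
         }
      }
      return null;
   }
}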
Thanks,
Alan
[1] https://issues.jboss.org/browse/ISPN-4645
putAll, getAll and optimization fruits
by Emmanuel Bernard
We have had a short discussion on putAll and getAll. I'm pushing the info here.
>>> getAll and putAll are nothing more than a glorified sequential call to get / put in a for loop.
>>> Executing the gets and puts sequentially costs O(n) network round trips in latency rather than O(1).
>>> How could we improve it?
>> Historically getAll and putAll were not intended as Hot Rod operations and were actually implemented only to honor the Map interface.
>> The most we can do RPC-wise to optimize these operations is to group all keys mapping to the same server node into a single request. That would reduce the number of RPCs, but in big-O terms it would still be O(numKeys). Executing them in parallel sounds like a good idea to me. Curious to hear other thoughts on this. Galder?
>
> So there are actually three improvements:
>
> * getAll and putAll as Hot Rod operations (no matter how that will be implemented by the server itself)
> Galder, is it possible in the current HR design to execute requests in parallel, without consuming one thread for each node? That was something my async client was meant to solve, but AFAIK it's not possible without substantial changes, and we were rather targeting that for a future JDK 8-only client.
Doing that is only half of the story because, as I've already explained in the context of Wolf's efforts around putAll, the Netty server implementation just calls the synchronous Infinispan cache operations, which often block. So an async client only gets you half of the job done. The way to limit the blocking is to split the keys and send the gets to the nodes that own them, so that they get resolved locally; the same goes for puts, although as Bela/Pedro found out, there could still be some blocking.
> Anyway, we could route the HR request to the node with most matching keys.
I don’t think that’s a good idea.
The best option is to take all the keys, divide them up by server according to the consistent hash, and send parallel native Hot Rod getAll operations, each containing the keys requested from that server. The same goes for putAll.
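For illustration, here is a minimal client-side sketch of that grouping, assuming a JDK 8 client (hence CompletableFuture); locateServer() (the consistent-hash lookup) and remoteGetAll() (the native per-server bulk operation) are hypothetical placeholders, not actual Hot Rod client API:

import java.net.SocketAddress;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.CompletableFuture;

public class BulkGetSketch<K, V> {

   public Map<K, V> getAll(Set<K> keys) {
      // 1. Group the requested keys by the server that owns them,
      //    according to the consistent hash.
      Map<SocketAddress, List<K>> keysByServer = new HashMap<>();
      for (K key : keys) {
         keysByServer.computeIfAbsent(locateServer(key), s -> new ArrayList<>()).add(key);
      }

      // 2. Fire one native getAll per server, all in parallel.
      List<CompletableFuture<Map<K, V>>> futures = new ArrayList<>();
      for (Map.Entry<SocketAddress, List<K>> e : keysByServer.entrySet()) {
         futures.add(remoteGetAll(e.getKey(), e.getValue()));
      }

      // 3. Merge the partial results; latency is one round trip,
      //    not one round trip per key.
      Map<K, V> result = new HashMap<>();
      for (CompletableFuture<Map<K, V>> f : futures) {
         result.putAll(f.join());
      }
      return result;
   }

   // Hypothetical consistent-hash lookup: which server owns this key?
   private SocketAddress locateServer(K key) {
      throw new UnsupportedOperationException("sketch only");
   }

   // Hypothetical async bulk get against a single server.
   private CompletableFuture<Map<K, V>> remoteGetAll(SocketAddress server, List<K> keys) {
      throw new UnsupportedOperationException("sketch only");
   }
}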
I've created a JIRA to get getAll/putAll implemented in the Hot Rod 2.0 timeframe: https://issues.jboss.org/browse/ISPN-4752
Re: [infinispan-dev] Hot Rod Remote Events #3: Customizing events
by Galder Zamarreño
Radim, adding the -dev list since others might have the same questions:
@Will, some important information below:
On 18 Sep 2014, at 08:16, Radim Vansa <rvansa@redhat.com> wrote:
> Hi Galder,
>
> re: your last blog post ($SUBJ): there are two pieces of information I'm missing there:
>
> 1) You say that the filter/converter factories are deployed as a JAR - do you need to update the Infinispan modules' dependencies on the server, or can you do it some other way (via configuration)?
There's nothing to be updated. The jars are deployed in the deployments/ folder, via the CLI, or with whatever other standard deployment method is used. We have purposefully built a deployment processor that processes these jars and does all the hard work for the user. For more info, see the filter/converter tests in the Infinispan Server integration testsuite.
> This is a more general question (I've run into it with compatibility mode as well): could you provide a link describing how custom JARs that Infinispan should use are deployed?
There's no generic solution at the moment. The current solution is limited to filter/converter jars for remote eventing, because we depend on service definitions in the jar to find the SPIs that we need to plug into the Infinispan Server.
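To make that concrete, here is a rough sketch of what the factory in such a deployable jar could look like, assuming the 7.0 SPI names; the factory name and classes are made up for illustration:

package org.example;

import org.infinispan.filter.NamedFactory;
import org.infinispan.metadata.Metadata;
import org.infinispan.notifications.cachelistener.filter.CacheEventConverter;
import org.infinispan.notifications.cachelistener.filter.CacheEventConverterFactory;
import org.infinispan.notifications.cachelistener.filter.EventType;

// The jar must also carry a service definition file,
// META-INF/services/org.infinispan.notifications.cachelistener.filter.CacheEventConverterFactory,
// containing this factory's fully qualified class name; that file is what
// the server-side deployment processor scans for.
@NamedFactory(name = "sample-converter-factory")
public class SampleConverterFactory implements CacheEventConverterFactory {

   @Override
   public <K, V, C> CacheEventConverter<K, V, C> getConverter(Object[] params) {
      return new SampleConverter<K, V, C>();
   }

   static class SampleConverter<K, V, C> implements CacheEventConverter<K, V, C> {
      @Override
      @SuppressWarnings("unchecked")
      public C convert(K key, V oldValue, Metadata oldMetadata,
                       V newValue, Metadata newMetadata, EventType eventType) {
         return (C) newValue; // simply forward the new value
      }
   }
}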
> 2) Let's say that I want to use the converter to produce diffs, so the converter needs the previous (overwritten) value as well. Would injecting the cache through CDI work, or is the cache already updated by the time the converter runs? Can this be reliable at all?
Initially, when I started working on the remote events stuff, I considered the need for the previous value in both the converter and filter interfaces. I think they can be useful, but here I'm relying on Will's core filter/converter instances to provide them to the Hot Rod remote events, and at the moment they don't. @Will, are you considering adding this? Since it affects the API, now might be a good time to do it.
In terms of working around it, a relatively heavyweight solution would be for the converter to track key/values as it receives events and then compare the event contents against its own map. The values should be refs, so they should not take too much space… I doubt injecting a CDI cache would work.
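A minimal sketch of that tracking workaround, again assuming the 7.0 converter SPI; the string returned here is just a stand-in for a real diff representation:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.infinispan.metadata.Metadata;
import org.infinispan.notifications.cachelistener.filter.CacheEventConverter;
import org.infinispan.notifications.cachelistener.filter.EventType;

public class DiffConverter<K, V> implements CacheEventConverter<K, V, String> {

   // Previous values tracked by the converter itself, since the events
   // do not (yet) carry the overwritten value for remote listeners.
   private final Map<K, V> lastSeen = new ConcurrentHashMap<K, V>();

   @Override
   public String convert(K key, V oldValue, Metadata oldMetadata,
                         V newValue, Metadata newMetadata, EventType eventType) {
      // Fall back to our own record when the event gives us no old value.
      V previous = oldValue != null ? oldValue : lastSeen.get(key);
      if (newValue == null) {
         lastSeen.remove(key);      // removal/expiration event
      } else {
         lastSeen.put(key, newValue);
      }
      return previous + " -> " + newValue; // naive "diff" for illustration
   }
}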
Cheers,
>
> Thanks
>
> Radim
>
> --
> Radim Vansa <rvansa@redhat.com>
> JBoss DataGrid QA
>
--
Galder Zamarreño
galder@redhat.com
twitter.com/galderz
Data versioning
by Pierre Sutra
Hello,
In the context of the LEADS project, we recently wrote a paper [1]
on data versioning in key-value stores, using Infinispan as a
basis to explore various implementations. It will be presented at the
IEEE SRDS'14 conference this October [2]. We hope that it might interest
you. Do not hesitate to send us comments and/or questions.
Regards,
Pierre
[1] http://tinyurl.com/srds14versioning
[2] www-nishio.ist.osaka-u.ac.jp/conf/srds2014/
My weekly status update
by Galder Zamarreño
Hi,
I won't be around for the weekly IRC meeting, so here's my status update.
Last week:
- Sent PRs for:
- ISPN-4707 Hot Rod 2.0 should add error codes for suspected nodes and stopping/stopped caches.
- ISPN-4717 On leave, Hot Rod client ends up with old cluster formation
- ISPN-4567 Sent PR to get more logs on why sometimes Arquillian containers are not closed.
- ISPN-4563 Race condition in JCache creation for interceptors. In conjunction with Sebastian.
- ISPN-4579 SingleNodeJdbcStoreIT.cleanup NPE after test failure. Trivial stuff.
- Created new blog post “Hot Rod Remote Events #3: Customizing Events”
This week:
- Working on:
- ISPN-4736 Add size() operation to Hot Rod.
- Closely related, complete ISPN-4470 using the new size operation.
- ISPN-4737 Noisy exceptions in Hot Rod client when node goes down
- ISPN-4734 Hot Rod marshaller for custom events...etc, needs to be configurable in server
- I'll write up the 4th blog post of the remote events series, which will focus on receiving events in a clustered environment.
Cheers,
--
Galder Zamarreño
galder@redhat.com
twitter.com/galderz