On ParserRegistry and classloaders
by Sanne Grinovero
I see the ParserRegistry was changed quite a bit; in Infinispan 6 it allowed specifying a different classloader for some operations, while now it only takes a classloader at construction time.
For WildFly/JBoss users it is quite common that the configuration file we want parsed is not in the same classloader that the ParserRegistry needs to look up its own parser components (as its design uses the ServiceLoader to discover the components of the parser). This is especially true when Infinispan is not used by the application directly but via another module (so I guess this is also critical for CapeDwarf).
I initially thought to work around the problem using a "wrapping classloader", so that I could pass a single CL instance which would try both the deployment classloader and Infinispan's module classloader, but - besides this being suboptimal - it doesn't work, as I'd be violating isolation between modules: I can get exposed to an Infinispan 6 module which also contains Parser components, which get loaded as a service but are not compatible (different class definitions).
I'll need these changes in the ParserRegistry reverted, please. I'm happy to send a pull request myself, but before I attempt to patch it, could someone explain what the goal was?
thanks,
Sanne
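For reference, a minimal sketch of the "wrapping classloader" idea mentioned above, assuming a hypothetical WrappingClassLoader name (this only illustrates the delegation; it still suffers from the module-isolation problem described, since ServiceLoader ends up seeing service entries from both classloaders):

import java.io.IOException;
import java.net.URL;
import java.util.Enumeration;

// Hypothetical: delegates to the deployment classloader first (as parent),
// then falls back to the Infinispan module classloader.
public class WrappingClassLoader extends ClassLoader {

    private final ClassLoader fallback;

    public WrappingClassLoader(ClassLoader deployment, ClassLoader infinispanModule) {
        super(deployment);
        this.fallback = infinispanModule;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        // Only reached when the deployment classloader (the parent) fails.
        return fallback.loadClass(name);
    }

    @Override
    protected URL findResource(String name) {
        return fallback.getResource(name);
    }

    @Override
    protected Enumeration<URL> findResources(String name) throws IOException {
        // ServiceLoader calls getResources("META-INF/services/..."), which merges
        // the parent's results with these: this is exactly how incompatible parser
        // services from another Infinispan module can leak in.
        return fallback.getResources(name);
    }
}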
11 years

Ant based kill not fully working? Re: ISPN-4567
by Galder Zamarreño
Hi,
Dan has reported [1]. It appears as if the last server started in infinispan-as-module-client-integrationtests did not really get killed. From what I see, this kill was done via the specific Ant target present in that Maven module.
I also remembered that [2] was added recently. Maybe we need to get as-modules/client configured with it so that it properly kills servers?
What I'm not sure about is where we'd put it so that it can be consumed both by server/integration/testsuite and as-modules/client. The problem is that the class, as is, brings in an Arquillian dependency. If we can separate the Arquillian stuff from the actual code, the class itself could maybe go in the commons test source directory?
@Tristan, thoughts?
@Jakub, can I assign this to you?
[1] https://issues.jboss.org/browse/ISPN-4567
[2] https://github.com/infinispan/infinispan/blob/master/server/integration/t...
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
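To illustrate the separation being discussed: a minimal, Arquillian-free sketch of a server-kill helper that could live in a shared test source tree (all names here are hypothetical; this is not the class referenced in [2]):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

// Hypothetical helper: kills local JVMs whose command line matches a marker,
// using only the JDK, so it carries no Arquillian dependency.
public final class ServerKillUtil {

    private ServerKillUtil() {
    }

    /** Kills every local JVM whose 'jps -l' line contains the given marker. */
    public static void killServers(String commandLineMarker) throws IOException, InterruptedException {
        Process jps = new ProcessBuilder("jps", "-l").start();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(jps.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.contains(commandLineMarker)) {
                    forceKill(line.split("\\s+")[0]);
                }
            }
        }
        jps.waitFor();
    }

    private static void forceKill(String pid) throws IOException, InterruptedException {
        boolean windows = System.getProperty("os.name").toLowerCase().contains("win");
        ProcessBuilder kill = windows
                ? new ProcessBuilder("taskkill", "/F", "/PID", pid)
                : new ProcessBuilder("kill", "-9", pid);
        kill.inheritIO().start().waitFor();
    }
}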
11 years, 2 months

Infinispan Jira workflow
by Tristan Tarrant
I was just looking at the Jira workflow for Infinispan and noticed that all issues start off in the "Open" state, assigned to the default owner for the component. Unfortunately this does not mean that the actual "assignee" has taken ownership, or that he intends to work on it in the near future, or that he has even looked at it. I would therefore like to introduce a state for fresh issues which sits just before "Open". This could be "New" or "Unverified/Untriaged", and it would make it easier to find all those "lurker" issues which are lost in the noise.
What do you think?
Tristan
11 years, 2 months

Hot Rod partial topology update processing with new segment info - Re: ISPN-4674
by Galder Zamarreño
Hey Dan,
Re: https://issues.jboss.org/browse/ISPN-4674
If you remember, the topology updates that we send to clients are sometimes partial. This happens when at the JGroups level we have a new view, but the HR address cache has not yet been updated with the mapping from JGroups address to endpoint address. This logic works well with HR protocol 1.x.
With HR 2.x there's a slight problem: we now write segment information in the topology, and when we are in this partial state, calls to locateOwnersForSegment() can quite possibly return 2 owners for a partial cluster of 2.
The problem comes when the client reads the number of servers and discovers it's one, but on reading the segment it says that there are two owners. That's where the ArrayIndexOutOfBoundsException comes from.
The question is: how should we deal with this segment information in the event of a partial topology update?
From a client perspective, one option might be to just ignore those segment positions for which there's no cluster member. IOW, if the number of owners is bigger than the cluster view, the client could just decide to create a smaller segment array, of only cluster view size, and ignore the index of a node that's not present in the cluster view.
Would this be the best way to solve it? Or could we just avoid sending segment information that's not right? IOW, the server would directly send segment information with all of this filtered out.
Thoughts?
Cheers,
--
Galder Zamarreño
galder(a)redhat.com
twitter.com/galderz
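A rough sketch of the client-side option, assuming hypothetical names (this is not the actual Hot Rod client code): when decoding a topology update, drop any segment owner index that does not point into the server list received in that same update.

import java.net.SocketAddress;
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of the client-side filtering idea.
final class SegmentOwnerFilter {

    /**
     * @param ownerIndices owner positions for one segment, as read from the wire
     * @param servers      the servers listed in this (possibly partial) topology update
     * @return only the owners that are actually addressable with this topology
     */
    static List<SocketAddress> resolveOwners(int[] ownerIndices, List<SocketAddress> servers) {
        List<SocketAddress> owners = new ArrayList<>(ownerIndices.length);
        for (int index : ownerIndices) {
            if (index < servers.size()) {
                owners.add(servers.get(index));
            }
            // else: the owner refers to a node the address cache doesn't know about
            // yet; skip it instead of failing with ArrayIndexOutOfBoundsException
        }
        return owners;
    }
}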
11 years, 2 months

Nutch atop Hadoop+ISPN
by Pierre Sutra
Hello,
As announced previously, we have developed a Gora connector for Infinispan (https://github.com/otrack/gora). The code is quite functional now, as we are able to run Apache Nutch 2.x on top of Infinispan and Yarn+HDFS (Hadoop 2.x). Nutch is a pipeline of M/R jobs accessing web pages from a data store (in this case Infinispan). Queries to fetch (and store) pages are executed via the Gora connector, which itself relies on an Apache Avro remote query module in Infinispan and Hot Rod.
The next step to foster integration would be removing the need for stable storage (distributing jars to the workers), as well as moving to Infinispan's native M/R support. I have seen that this is related to https://issues.jboss.org/browse/ISPN-2941. Could someone please give me more details about the next steps in this direction, in particular regarding stable storage? Many thanks.
Cheers,
Pierre
11 years, 2 months

Caches need to be stopped in a specific order to respect cross-cache dependencies
by Sanne Grinovero
The goal being to resolve ISPN-4561, I was thinking of exposing a very simple reference counter in the AdvancedCache API.
As you know, the Query module - which triggers on indexed caches - can use the Infinispan Lucene Directory to store its indexes in a (different) Cache.
When the CacheManager is stopped, if the index storage caches are stopped first and the indexed cache is stopped afterwards, the latter might need to flush/close some pending state on the index, and this results in an illegal operation as the storage has already been shut down.
We could either implement a complex dependency graph, or add a method like:
  boolean incRef();
on AdvancedCache.
When the Cache#close() method is invoked, it would do an internal decrement, and only when hitting zero would it really close the cache.
A CacheManager shutdown would loop through all caches and invoke close() on all of them; close() should return something so that the CacheManager shutdown loop understands whether it really did close all caches. If not, it loops again through all caches, and keeps looping until all cache instances are really closed.
The return type of "close()" doesn't necessarily need to be exposed in the public API; it could be an internal-only variant.
Could we do this?
--Sanne
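A minimal sketch of the reference-counting idea, with hypothetical names (this is not the real AdvancedCache/CacheManager API, just the counting and the shutdown loop described above):

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical model of a cache with an internal reference count.
final class RefCountedCache {

    private final String name;
    // Starts at 1: the CacheManager holds the initial reference.
    private final AtomicInteger refCount = new AtomicInteger(1);
    private volatile boolean stopped;

    RefCountedCache(String name) {
        this.name = name;
    }

    String name() {
        return name;
    }

    /** Called by a dependent, e.g. an indexed cache using this cache for index storage. */
    boolean incRef() {
        return refCount.incrementAndGet() > 0;
    }

    /**
     * Decrements the counter and only really stops the cache when it reaches zero.
     * A dependent also calls close() once it has flushed its pending state, releasing
     * the reference it took with incRef().
     */
    boolean close() {
        if (stopped) {
            return true;
        }
        if (refCount.decrementAndGet() <= 0) {
            stopped = true;   // really stop the cache here
            return true;
        }
        return false;         // still referenced, keep it running for now
    }

    boolean isStopped() {
        return stopped;
    }
}

final class CacheManagerShutdown {

    /**
     * Loops over all caches, invoking close(), until every cache reports a real stop.
     * This relies on dependents releasing their references as part of their own close.
     */
    static void stopAll(List<RefCountedCache> caches) {
        boolean allClosed;
        do {
            allClosed = true;
            for (RefCountedCache cache : caches) {
                if (!cache.isStopped() && !cache.close()) {
                    allClosed = false;
                }
            }
        } while (!allClosed);
    }
}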
11 years, 2 months