Version annotated stacktraces
by Vladimir Blagojevic
Hi,
Playing around with AS 6.0M3 I've seen the coolest thing ever :) Every stacktrace line is annotated with the version of the jar it originates from. I've heard a rumor from Brian that DML is behind this creative endeavour, and I was wondering if we can do this in standalone Infinispan as well?
Vladimir
at java.util.zip.ZipFile.open(Native Method) [:1.6.0_17]
at java.util.zip.ZipFile.<init>(ZipFile.java:114) [:1.6.0_17]
at java.util.jar.JarFile.<init>(JarFile.java:133) [:1.6.0_17]
at sun.net.www.protocol.jar.URLJarFile.<init>(URLJarFile.java:67) [:1.6.0_17]
at sun.net.www.protocol.jar.URLJarFile$1.run(URLJarFile.java:214) [:1.6.0_17]
at java.security.AccessController.doPrivileged(Native Method) [:1.6.0_17]
at sun.net.www.protocol.jar.URLJarFile.retrieve(URLJarFile.java:198) [:1.6.0_17]
at sun.net.www.protocol.jar.URLJarFile.getJarFile(URLJarFile.java:50) [:1.6.0_17]
at sun.net.www.protocol.jar.JarFileFactory.get(JarFileFactory.java:68) [:1.6.0_17]
at sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:104) [:1.6.0_17]
at sun.net.www.protocol.jar.JarURLConnection.getJarFile(JarURLConnection.java:71) [:1.6.0_17]
at com.sun.faces.config.AnnotationScanner.processClasspath(AnnotationScanner.java:290) [:2.0.2-FCS]
at com.sun.faces.config.AnnotationScanner.getAnnotatedClasses(AnnotationScanner.java:215) [:2.0.2-FCS]
at com.sun.faces.config.ConfigManager$AnnotationScanTask.call(ConfigManager.java:765) [:2.0.2-FCS]
at com.sun.faces.config.ConfigManager$AnnotationScanTask.call(ConfigManager.java:736) [:2.0.2-FCS]
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) [:1.6.0_17]
at java.util.concurrent.FutureTask.run(FutureTask.java:138) [:1.6.0_17]
at com.sun.faces.config.ConfigManager.initialize(ConfigManager.java:329) [:2.0.2-FCS]
at com.sun.faces.config.ConfigureListener.contextInitialized(ConfigureListener.java:223) [:2.0.2-FCS]
at org.jboss.web.jsf.integration.config.JBossJSFConfigureListener.contextInitialized(JBossJSFConfigureListener.java:72) [:6.0.0.20100429-M3]
at org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:3733) [:6.0.0.20100429-M3]
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4197) [:6.0.0.20100429-M3]
at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeployInternal(TomcatDeployment.java:323) [:6.0.0.20100429-M3]
at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeploy(TomcatDeployment.java:148) [:6.0.0.20100429-M3]
at org.jboss.web.deployers.AbstractWarDeployment.start(AbstractWarDeployment.java:462) [:6.0.0.20100429-M3]
at org.jboss.web.deployers.WebModule.startModule(WebModule.java:116) [:6.0.0.20100429-M3]
at org.jboss.web.deployers.WebModule.start(WebModule.java:95) [:6.0.0.20100429-M3]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [:1.6.0_17]
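For the curious, the mechanics are roughly this: resolve the class behind each frame, find the jar it was loaded from, and read the manifest's Implementation-Version. A minimal sketch of the idea (my own names; not the actual AS 6/DML code, which hooks into the module layer):

```java
import java.net.URL;
import java.security.CodeSource;
import java.util.jar.JarFile;
import java.util.jar.Manifest;

public class VersionAnnotator {

    // Best-effort lookup of the Implementation-Version of the jar a class was
    // loaded from. Returns null when the origin jar or its manifest cannot be
    // determined, e.g. for bootstrap classes such as java.lang.String.
    public static String versionOf(Class<?> clazz) {
        try {
            CodeSource cs = clazz.getProtectionDomain().getCodeSource();
            if (cs == null) return null;
            URL loc = cs.getLocation();
            if (loc == null || !loc.getPath().endsWith(".jar")) return null;
            try (JarFile jar = new JarFile(loc.getPath())) {
                Manifest mf = jar.getManifest();
                if (mf == null) return null;
                return mf.getMainAttributes().getValue("Implementation-Version");
            }
        } catch (Exception e) {
            return null; // annotation is cosmetic; never fail the trace for it
        }
    }

    // Decorates one stacktrace line the way AS 6 does: "... [:version]".
    public static String annotate(StackTraceElement frame, String version) {
        return "\tat " + frame + " [:" + (version == null ? "?" : version) + "]";
    }
}
```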
Re: [infinispan-dev] XML namespaces in configuration files
by Vladimir Blagojevic
Hey all,
After consulting with Alexey I think the following emerges as the best approach for managing XML schemas, configuration POJOs and configuration-file backward compatibility.
We should remove references to the schema version in the namespace, and we might even rename the namespace to something like "http://infinispan.org/xml/ns/ispn" and use the prefix "ispn" rather than tns. We could roll this into the 4.1 final release. We leave the 4.0 schemas as is, but 4.1 signifies a break and a new schema namespacing that we intend to keep. How do we treat customers who want to use their 4.0 configuration files in 4.1? We tell them to remove references to the old namespace from their configuration files; thanks to ISPN-431 they are otherwise not affected.
Since we removed references to the configuration version from the schema, as far as our configuration beans go we are fine as long as we only add properties to elements; even adding new configuration elements is ok. In essence, we are fine as long as we do not remove or rename existing attributes and elements in our configuration POJOs. For example, let's say that 5.0 adds some configuration elements regarding JPA. Reading a 4.1 configuration file in 5.0 is ok since we kept all the 4.1 elements in 5.0, and the JPA element's default configuration settings are initialized anyway.
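To illustrate the compatibility rule (a hypothetical sketch, not our actual JAXB-based reader; the element names here are made up): a newer reader starts from defaults and only overrides the elements it finds, so an older file that simply lacks the new elements still parses.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class LenientConfigReader {

    // Start from the "5.0" defaults and override only what the file provides.
    // An old ("4.1") file that lacks the new jpaStore element still works:
    // the new element just keeps its default.
    public static Map<String, String> read(String xml) throws Exception {
        Map<String, String> config = new HashMap<>();
        config.put("lockAcquisitionTimeout", "10000"); // pre-existing setting, defaulted
        config.put("jpaStore", "disabled");            // new in "5.0", defaulted

        Element root = DocumentBuilderFactory.newInstance()
              .newDocumentBuilder()
              .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)))
              .getDocumentElement();

        NodeList children = root.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            Node n = children.item(i);
            if (n.getNodeType() == Node.ELEMENT_NODE)
                config.put(n.getNodeName(), n.getTextContent().trim());
        }
        return config;
    }
}
```

The rule breaks only in the other direction: if a future version removed or renamed an element, the override step would silently drop the old file's setting.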
Let me know what you guys think.
Vladimir
On 2010-05-11, at 5:43 PM, Alexey Loubyansky wrote:
> Hi Vladimir,
>
> yes, that's tricky. I can't say we have an elegant solution for that.
>
> I can tell you what we do wrt EJB metadata (ejb-jar.xml/jboss.xml) as an example.
> For jboss.xml we have common JBossMetaData.
> http://anonsvn.jboss.org/repos/jbossas/projects/metadata/ejb/trunk/src/ma...
>
> It contains binding annotations but w/o the schema-level ones such as namespace, etc. Then per schema version we create a top-level class, e.g.
> http://anonsvn.jboss.org/repos/jbossas/projects/metadata/ejb/trunk/src/ma...
> http://anonsvn.jboss.org/repos/jbossas/projects/metadata/ejb/trunk/src/ma...
>
> There we specify the namespace and which properties we want to bind. So, actually, for some schemas the common JBossMetaData contains more metadata than is available in those schemas, but those properties are just not bound for those schema versions. But the deployers, regardless of the deployment descriptor version deployed, use the same JBossMetaData API.
> In this case, though, it's only for the top-level class (root element). Although, we could have some tricks for other classes/elements as well.
>
> Other metadata (sub)projects (web, ear, rar, etc) use the same approach. (There are also tests for consistency between XSD/DTD and Java bindings, i.e. structural equivalence)
>
> What is a bit different, we don't include schema version in the namespace. Schema versions are different but the namespace stays the same. The same is true for JEE spec schemas.
> But even if we did, the current approach would still work.
>
> If you have some tricky cases/requirements then let's discuss them on the forums. It might be relevant to other projects as well.
>
> Best regards,
> Alexey
>
> On 5/11/2010 9:46 PM, Vladimir Blagojevic wrote:
>> Hi Alexey,
>>
>> Need advice regarding best practices when it comes to XML schema management and references in configuration files. In the Infinispan project we have used JAXB-annotated classes for our configuration POJOs, to automate configuration loading, and to do schema creation. We have used package-info.java files annotated with JAXB annotations to declare the schema namespaces [1]. We have then in turn generated the XML schema file using JAXBContext#generateSchema.
>>
>> The current namespace is urn:infinispan:config:4.0, but this will change to urn:infinispan:config:4.1, urn:infinispan:config:5.0, etc in the future. What are our options for configuration file backward-compatibility?
>>
>> Best regards,
>> Vladimir
>>
>>
>>
>> [1] http://fisheye.jboss.org/browse/Infinispan/trunk/core/src/main/java/org/i...
Configuration XML and schema namespaces
by Manik Surtani
Vladimir,
You've used the package-info.java files annotated with JAXB annotations to declare schema namespaces. How does this work with versioning? E.g., the current namespace is urn:infinispan:config:4.0, but I presume this will change to urn:infinispan:config:4.1, urn:infinispan:config:5.0, etc in the future?
How does this work with backward-compatibility?
Cheers
--
Manik Surtani
manik(a)jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org
Re: [infinispan-dev] New Fine Grained Replication API (PojoCache feature) for Infinispan 5.0.0
by galder@jboss.org
See below:
----- "kapil nayar" <kapilnayar1(a)gmail.com> wrote:
> Hi Galder,
>
> I looked at the preliminary writeup for the New Fine Grained Replication
> API Design at
> http://community.jboss.org/wiki/newfinegrainedreplicationapidesign
>
> When compared to POJOCache, does this design mandate an explicit "commit"
> (seems true because of the JPA style) to change the object instance
> stored in the cache / replicated?
Yeah, a commit is mandated as per JPA rules. It's at that point that we'll be able to detect differences between the objects in the session and the objects in the cache.
> If yes, this design would NOT provide an exact replacement for the
> POJOCache-style APIs and the flexibility they include.
Well, it's a totally different approach. This is not an attempt to replicate the PojoCache APIs 100%, but instead to provide a programming model that users are more familiar with and that is less error-prone than AOP-based client APIs.
>
> Could you clarify - did I miss something while reading the text?
>
> Thanks,
> Kapil
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev(a)lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
ISPN-425 - Issues with waiting for rehash to complete on startup
by galder@redhat.com
Hi,
Re: https://jira.jboss.org/jira/browse/ISPN-425
We've been discussing solutions to the fundamental problem in this issue, which is that operations are allowed in the cache before rehashing has finished on startup. I've been playing around with a solution based on waiting for rehashing to complete, but this is causing issues with the Hot Rod distribution tests. In Hot Rod, this is what happens:
1. Start Hot Rod server 1 which starts a replicated topology cache.
2. Start Hot Rod server 2 which starts a replicated topology cache.
3. Send a request for a distributed cache called 'hotRodDistSync' in Hot Rod server 2.
4. As a result of this request, the 'hotRodDistSync' cache should be started, but startup does not succeed. It stays in a rehash join loop, saying:
4595 INFO [org.infinispan.remoting.InboundInvocationHandlerImpl] (OOB-2,Infinispan-Cluster,eq-52045:) Cache named hotRodDistSync does not exist on this cache manager!
4595 TRACE [org.infinispan.marshall.VersionAwareMarshaller] (OOB-2,Infinispan-Cluster,eq-52045:) Wrote version 410
4596 TRACE [org.infinispan.marshall.VersionAwareMarshaller] (OOB-2,Infinispan-Cluster,eq-64501:) Read version 410
4596 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (Rehasher-eq-64501:) responses: [sender=eq-52045, retval=null, received=true, suspected=false]
4597 DEBUG [org.infinispan.distribution.JoinTask] (Rehasher-eq-64501:) Retrieved old consistent hash address list null
4597 TRACE [org.infinispan.distribution.JoinTask] (Rehasher-eq-64501:) Sleeping for 1.54 seconds
The problem here is that Hot Rod server 1 has not yet started the 'hotRodDistSync' cache, since no requests were sent to it. Now, this is different from the cache not allowing invocations yet because it's in the middle of starting up. So, I wondered whether InboundInvocationHandlerImpl.handle() could return a custom response rather than null, and whether JoinTask could handle it in such a way that if all the responses received say the cache does not exist, it considers the rehash completed and finishes the process.
Now, the reason I'm suggesting a custom response is that I can see that JOIN_REQ returning null can also mean the coordinator is in the middle of another join (DMI.requestPermissionToJoin). These two situations are not the same, which is why I suggest treating them differently.
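Roughly what I have in mind (a sketch; the enum and method names are illustrative, not existing Infinispan types):

```java
import java.util.List;

public class JoinResponses {

    // Hypothetical explicit responses replacing the ambiguous null described
    // above: state transfer payload, "cache not started here", and
    // "coordinator busy with another join".
    public enum JoinResponse { STATE, CACHE_NOT_FOUND, JOIN_IN_PROGRESS }

    // If every peer reports the cache does not exist, there is no state to
    // pull, so the joiner can consider the rehash complete. Any other
    // response (state, or a coordinator mid-join) means we cannot finish yet.
    public static boolean canConsiderRehashComplete(List<JoinResponse> responses) {
        for (JoinResponse r : responses)
            if (r != JoinResponse.CACHE_NOT_FOUND)
                return false;
        return !responses.isEmpty();
    }
}
```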
Cheers,
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
Hashing generating recipient lists with same address
by galder@redhat.com
Hi all,
As indicated on IRC, running org.infinispan.client.hotrod.TopologyChangeTest.testTwoMembers() fails randomly with a replication timeout. It's very easy to reproduce. When it fails, this is what happens:
1. During rehashing, a new hash is installed:
2010-05-06 17:54:11,960 4932 TRACE [org.infinispan.distribution.DistributionManagerImpl] (Rehasher-eq-985:) Installing new consistent hash DefaultConsistentHash{addresses ={109=eq-35426, 10032=eq-985, 10033=eq-985}, hash space =10240}
2. The rehash finishes and the hash above is still installed:
2010-05-06 17:54:11,978 4950 INFO [org.infinispan.distribution.JoinTask] (Rehasher-eq-985:) eq-985 completed join in 30 milliseconds!
3. A put comes in to eq-985, which decides the recipients are [eq-985, eq-985]. Most likely the key's hash fell somewhere between 109 and 10032, and since numOwners is 2, it took the next 2 positions:
2010-05-06 17:54:12,307 5279 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (HotRodServerWorker-2-1:) eq-985 broadcasting call PutKeyValueCommand{key=CacheKey{data=ByteArray{size=9, hashCode=d28dfa, array=[-84, -19, 0, 5, 116, 0, 2, 107, 48, ..]}}, value=CacheValue{data=ByteArray{size=9, array=[-84, -19, 0, 5, 116, 0, 2, 118, 48, ..]}, version=281483566645249}, putIfAbsent=false, lifespanMillis=-1000, maxIdleTimeMillis=-1000} to recipient list [eq-985, eq-985]
Everything afterwards is a mess:
4. JGroups removes the local address from the destination list. The reason Infinispan does not do it is that the number of recipients is 2 and the number of members in the cluster is 2, so it thinks it's a broadcast:
2010-05-06 17:54:12,308 5280 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (HotRodServerWorker-2-1:) real_dests=[eq-985]
5. JGroups still sends it as a broadcast:
2010-05-06 17:54:12,308 5280 TRACE [org.jgroups.protocols.TCP] (HotRodServerWorker-2-1:) sending msg to null, src=eq-985, headers are RequestCorrelator: id=201, type=REQ, id=12, rsp_expected=true, NAKACK: [MSG, seqno=5], TCP: [channel_name=Infinispan-Cluster]
6. Another node deals with this and replies:
2010-05-06 17:54:12,310 5282 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (OOB-1,Infinispan-Cluster,eq-35426:) Attempting to execute command: SingleRpcCommand{cacheName='___defaultcache', command=PutKeyValueCommand{key=CacheKey{data=ByteArray{size=9, hashCode=43487e, array=[-84, -19, 0, 5, 116, 0, 2, 107, 48, ..]}}, value=CacheValue{data=ByteArray{size=9, array=[-84, -19, 0, 5, 116, 0, 2, 118, 48, ..]}, version=281483566645249}, putIfAbsent=false, lifespanMillis=-1000, maxIdleTimeMillis=-1000}} [sender=eq-985]
...
7. However, there are no replies yet from eq-985, so you get:
2010-05-06 17:54:27,310 20282 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (HotRodServerWorker-2-1:) responses: [sender=eq-985, retval=null, received=false, suspected=false]
2010-05-06 17:54:27,313 20285 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (HotRodServerWorker-2-1:) replication exception:
org.infinispan.util.concurrent.TimeoutException: Replication timeout for eq-985
Now, I don't understand the reason for creating a hash 10032=eq-985, 10033=eq-985. Shouldn't keeping 10032=eq-985 be enough? Why add 10033=eq-985?
Assuming there is a valid case for it, a naive approach would be to discard a second position that points to an address already in the recipient list. So, 10032=eq-985 would be accepted for the list, but 10033=eq-985 would be skipped.
Finally, I thought waiting for rehashing to finish would solve the issue, but as you can see in 2., rehashing finished and the hash was still in the same shape. Also, I've attached a log file.
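The naive approach could look like this (a sketch over a plain TreeMap hash wheel; not the real DefaultConsistentHash code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;

public class OwnerSelection {

    // Walk the hash wheel clockwise from the key's position and collect
    // owners, skipping positions that map to an address already chosen.
    // If there are fewer distinct addresses than numOwners, the list is
    // simply shorter (rather than repeating an address).
    public static List<String> locate(SortedMap<Integer, String> wheel,
                                      int keyHash, int numOwners) {
        // positions at or after the key's hash, then wrap around to the start
        List<String> ordered = new ArrayList<>();
        ordered.addAll(wheel.tailMap(keyHash).values());
        ordered.addAll(wheel.headMap(keyHash).values());

        List<String> owners = new ArrayList<>();
        for (String addr : ordered) {
            if (!owners.contains(addr))
                owners.add(addr); // skip duplicate addresses, e.g. 10033=eq-985
            if (owners.size() == numOwners)
                break;
        }
        return owners;
    }
}
```

With the wheel from the mail ({109=eq-35426, 10032=eq-985, 10033=eq-985}) and a key hashing between 109 and 10032, this yields [eq-985, eq-35426] instead of [eq-985, eq-985].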
Cheers,
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
Re: [infinispan-dev] Fwd: Stale data read when L1 invalidation happens while UnionConsistentHash is in use
by galder@jboss.org
After looking at this for a bit longer, I think your suggested step "4. Modify the value on C1 *before* rehashing completes." does not match what happens in the log.
Instead, I think it should be like this:
"4. Modify the value on C3 *before* rehashing completes."
When you do that, C3 uses the newly installed hash without doing a union; it's the existing nodes that use a union. So, if you were to do 4. on C1, it would have a union hash and hence replicate to all nodes in the cluster, and the test would pass.
The key thing here is doing 4. on C3 while C1 and C2 have union hashing going on. C3 forces the update, but it is not invalidated on C1 or C2, so one of them will lose out since C3 does not do a union hash.
I'll do more research next week.
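The conservative rule proposed in the quoted mail below boils down to something like this (a sketch; the names are mine, not actual Infinispan API):

```java
public class L1InvalidationPolicy {

    // While a rehash is in progress (i.e. a union hash is in effect), honour
    // L1 invalidations even for keys the node believes are local, because
    // "local" may be an artifact of the transient union view. Over-invalidating
    // is safe here since the put that follows re-populates the real owners.
    public static boolean shouldInvalidate(boolean keyIsLocal, boolean rehashInProgress) {
        if (rehashInProgress)
            return true;      // be conservative: a stale read is worse than a cache miss
        return !keyIsLocal;   // steady state: only non-owners drop their L1 copy
    }
}
```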
----- galder(a)jboss.org wrote:
> See below:
>
> ----- "Manik Surtani" <manik(a)jboss.org> wrote:
>
> > On 3 May 2010, at 08:51, Galder Zamarreno wrote:
> >
> > > Resending without log until the message is approved.
> > >
> > > --
> > > Galder Zamarreño
> > > Sr. Software Engineer
> > > Infinispan, JBoss Cache
> > >
> > > ----- Forwarded Message -----
> > > From: galder(a)redhat.com
> > > To: "infinispan -Dev List" <infinispan-dev(a)lists.jboss.org>
> > > Sent: Friday, April 30, 2010 6:30:05 PM GMT +01:00 Amsterdam / Berlin / Bern / Rome / Stockholm / Vienna
> > > Subject: Stale data read when L1 invalidation happens while UnionConsistentHash is in use
> > >
> > > Hi,
> > >
> > > I've spent all day chasing down a random Hot Rod testsuite failure
> > > related to distribution. This is the last hurdle to close
> > > https://jira.jboss.org/jira/browse/ISPN-411. In HotRodDistributionTest,
> > > which is still to be committed, I test adding a new node, doing a put on
> > > this node, and then doing a get in a different node and making sure that
> > > I get what was put. The test randomly fails saying that the get returns
> > > the old value. The failure is nothing to do with Hot Rod itself but
> > > rather a race condition where union consistent hash is used. Let me
> > > explain:
> > >
> > > 1. An earlier operation had set the key
> > > "k-testDistributedPutWithTopologyChanges" to
> > > "v5-testDistributedPutWithTopologyChanges".
> > > 2. Start a new Hot Rod server in eq-7969.
> > > 3. The eq-7969 node calls a put on that key with
> > > "v6-testDistributedPutWithTopologyChanges". Recipients for the put
> > > are: eq-7969 and eq-61332.
> > > 4. eq-7969 sends an L1 invalidation to all, including eq-13415.
> > > 5. eq-13415 should invalidate
> > > "k-testDistributedPutWithTopologyChanges" but it doesn't, since it
> > > considers that "k-testDistributedPutWithTopologyChanges" is local to
> > > eq-13415:
> > >
> > > 2010-04-30 18:02:19,907 6046 TRACE [org.infinispan.distribution.DefaultConsistentHash] (OOB-2,Infinispan-Cluster,eq-13415:) Hash code for key CacheKey{data=ByteArray{size=39, hashCode=17b1683, array=[107, 45, 116, 101, 115, 116, 68, 105, 115, 116, ..]}} is 344897059
> > > 2010-04-30 18:02:19,907 6046 TRACE [org.infinispan.distribution.DefaultConsistentHash] (OOB-2,Infinispan-Cluster,eq-13415:) Candidates for key CacheKey{data=ByteArray{size=39, hashCode=17b1683, array=[107, 45, 116, 101, 115, 116, 68, 105, 115, 116, ..]}} are {5458=eq-7969, 6831=eq-61332}
> > > 2010-04-30 18:02:19,907 6046 TRACE [org.infinispan.distribution.DistributionManagerImpl] (OOB-2,Infinispan-Cluster,eq-13415:) Is local CacheKey{data=ByteArray{size=39, hashCode=17b1683, array=[107, 45, 116, 101, 115, 116, 68, 105, 115, 116, ..]}} to eq-13415 query returns true and consistentHash is org.infinispan.distribution.UnionConsistentHash@10747b4
> > >
> > > This is a log with messages that I added to debug it. The key factor
> > > here is that UnionConsistentHash is in use, probably due to rehashing
> > > not having fully finished.
> > >
> > > 6. The end result is that a read of
> > > "k-testDistributedPutWithTopologyChanges" on eq-13415 returns
> > > "v5-testDistributedPutWithTopologyChanges".
> > >
> > > I thought that maybe we could be more conservative here and if
> > > rehashing is in progress (or UnionConsistentHash is in use) invalidate
> > > regardless. Assuming that a put always follows an invalidation in
> > > distribution and not vice versa, that would be fine. The only downside
> > > is that you'd be invalidating too much, but the put would replace the
> > > data in the node where the invalidation should not have happened but
> > > did, so not a problem.
> > >
> > > Thoughts? Alternatively, maybe I need to shape my test so that I wait
> > > for rehashing to finish, but the problem would still be there.
> >
> > Yes, this seems to be a bug with concurrent rehashing and invalidation
> > rather than with Hot Rod.
> >
> > Could you modify your test to so the following:
> >
> > 1. start 2 caches C1 and C2.
> > 2. put a key K such that K maps on to C1 and C2
> > 3. add a new node, C3. K should now map to C1 and C3.
> > 4. Modify the value on C1 *before* rehashing completes.
> > 5. See if we see the stale value on C2.
> >
> > To do this you would need a custom object for K that hashes the way
> > you would expect (this could be hardcoded) and a value which blocks
> > when serializing so we can control how long rehashing takes.
>
> Since logical addresses are used underneath and these change from one
> run to the other, I'm not sure how I can generate such a key
> programmatically. It's even more complicated to figure out a key that
> will later, when C3 starts, map to it. Without having these addresses
> locked somehow, or their hash codes, I can't see how this is doable.
> IOW, to be able to do this, I need to mock these addresses into giving
> fixed hash codes. I'll dig further into this.
>
> >
> > I never promised the test would be simple! :)
> >
> > Cheers
> > Manik
> > --
> > Manik Surtani
> > manik(a)jboss.org
> > Lead, Infinispan
> > Lead, JBoss Cache
> > http://www.infinispan.org
> > http://www.jbosscache.org
> >
> >
> >
> >
> >
> > _______________________________________________
> > infinispan-dev mailing list
> > infinispan-dev(a)lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev(a)lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
TestUtil.blockUntilClusterFormed
by Mircea Markus
Hi,
Is there such a method? I'm getting intermittent failures [1] because I'm doing things during rehashing, i.e. after blockUntilViewReceived returns.
[1] TopologyChangeTest starts and stops servers
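Something along these lines is what I'm after (a sketch; Check and blockUntil are illustrative names, not existing TestingUtil API):

```java
public class TestUtilSketch {

    // Hypothetical condition interface standing in for whatever per-cache
    // check (e.g. "rehash complete on every cache") the real utility would use.
    public interface Check { boolean ok(); }

    // Poll until the condition holds or the timeout expires; a sketch of what
    // a blockUntilClusterFormed-style helper could look like.
    public static void blockUntil(Check check, long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!check.ok()) {
            if (System.currentTimeMillis() > deadline)
                throw new IllegalStateException("condition not met within " + timeoutMillis + " ms");
            Thread.sleep(50); // back off between polls
        }
    }
}
```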
Cheers,
Mircea
ISPN-275
by Sanne Grinovero
Hello all,
to properly close ISPN-275 I should be able to run the testsuite for
both Lucene versions, 2.9.1 and 3.0.1.
I'd like to have both versions covered as the two versions of Lucene are
quite different, and while 3.0 is the current cool version it's very
easy to still support older, widely adopted versions, as far as our
Directory implementation is concerned.
Running the tests manually, changing the Lucene dependency to 2.4.1,
2.9.2 or 3.0.1, works fine, but any clue about how I could automate this
with Maven?
Or if you think it's good enough to test against 3.0 or 2.9 only, then you
can consider the issue fixed.
Regards,
Sanne