I assume the NoCopyNestedJarHandler and NestedJarHandler are meant to behave identically?
| String flag = context.getOptions().get("useNoCopyJarHandler");
| boolean useNoCopyJarHandler = Boolean.valueOf(flag);
| if (useNoCopyJarHandler)
|    vfh = new NoCopyNestedJarHandler(context, parent, jar, entry, url);
| else
|    vfh = NestedJarHandler.create(context, parent, jar, entry, url, entryName);
We probably never tested this, since when I change the default behavior to useNoCopyJarHandler, a whole bunch of tests fail.
I'll fix this + add tests that check the two are equal.
Was there ever an idea to expose the context's options through a more API-like mechanism, rather than reading them from the URI's query string?
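For the "check the two are equal" tests, a structural comparison could look like this sketch (Handler and MapHandler are made-up stand-ins, not the real VFS types; the idea is just to compare the virtual file trees the two handlers expose):

```java
import java.util.List;

// Sketch of an equality test for the two handler implementations. "Handler"
// and "MapHandler" are invented stand-ins, not the real VFS API.
interface Handler {
    String getName();
    List<Handler> getChildren();
}

final class MapHandler implements Handler {
    private final String name;
    private final List<Handler> children;

    MapHandler(String name, Handler... children) {
        this.name = name;
        this.children = List.of(children);
    }

    public String getName() { return name; }
    public List<Handler> getChildren() { return children; }
}

public class HandlerEquality {
    // Recursive structural comparison: same names, same child order, same depth.
    static boolean structurallyEqual(Handler a, Handler b) {
        if (!a.getName().equals(b.getName())) return false;
        List<Handler> ca = a.getChildren(), cb = b.getChildren();
        if (ca.size() != cb.size()) return false;
        for (int i = 0; i < ca.size(); i++)
            if (!structurallyEqual(ca.get(i), cb.get(i))) return false;
        return true;
    }

    public static void main(String[] args) {
        // Pretend these came from NestedJarHandler and NoCopyNestedJarHandler.
        Handler copying = new MapHandler("nested.jar", new MapHandler("META-INF"));
        Handler noCopy = new MapHandler("nested.jar", new MapHandler("META-INF"));
        System.out.println(structurallyEqual(copying, noCopy)); // true
    }
}
```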
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4115391#4115391
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4115391
Well, I'm not asking for help here; I want to make a suggestion / ask about something I have already searched for in this forum without success:
Is server push capability planned on the Seam remoting roadmap?
I'm just taking my first look at Seam, and I read that, for the moment (like the Echo2 framework, and also ZK if I remember well), we have a "cyclic server polling interval" to emulate a push.
But what about a real push capability, usually called HTTP streaming / Comet / Bayeux / reverse Ajax?
Some frameworks don't manage it, some tried but have serious drawbacks (like the Laszlo folks), and others did it beautifully (DWR with TIBCO, SmartClient...).
If it's planned, perfect.
If it's too hard, would it be easier to use DWR directly inside Seam? Its license is compatible, as far as I know.
By the way, DWR integrates with many external libraries like Struts, JSF, Webwork, Rife and so on. See the section on integration http://getahead.org/dwr/integration
And about reverse Ajax:
Comet, a.k.a. Reverse Ajax (http://getahead.org/dwr/reverse-ajax) in DWR 2.0 needs careful configuration (http://getahead.org/dwr/reverse-ajax/configuration) to work around the various network problems that can arise.
To my mind, we have enough resources to get reverse Ajax into Seam.
Besides DWR, and still with a compatible license, there are Pushlets: http://www.pushlets.com/.
Another example is LightStreamer: http://www.lightstreamer.com. This one is free but doesn't have a license compatible with Seam's.
That would be particularly useful in Seam, as there are cool things and real needs that you simply can't address with a push "simulation", which is more a "trick", a "plan B", than a real solution to the concept.
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4115304#4115304
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4115304
"manik.surtani(a)jboss.com" wrote :
| Or is there something clever in your MarshalledValue class that just passes the byte payload in writeExternal()?
Here's what it does (I take no credit or blame; it long predates me ;) )
| public void writeExternal(ObjectOutput out) throws IOException
| {
|    int length = serializedForm != null ? serializedForm.length : 0;
|    out.writeInt(length);
|    if (length > 0)
|       out.write(serializedForm); // serializedForm is a byte[] created in the c'tor
| }
The class is in the AS server module, org.jboss.invocation.MarshalledValue. It was originally written for use in remote invocations, and thus doesn't lazy-serialize. When we started using it for caching, that IMHO was a mistake; we should have written a version that has the behavior discussed on this thread.
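A lazily-serializing variant along the lines discussed in this thread might look roughly like this (just a sketch, not the actual AS class; a real version would also want equals()/hashCode() defined over the serialized form):

```java
import java.io.*;

// Sketch of a lazily-serializing marshalled value: the wrapped object is only
// turned into bytes when writeExternal() actually needs them, so purely local
// cache reads never pay the serialization cost.
public class LazyMarshalledValue implements Externalizable {
    private transient Object value;  // live reference, used locally
    private byte[] serializedForm;   // filled in on demand

    public LazyMarshalledValue() {}  // required by Externalizable
    public LazyMarshalledValue(Object value) { this.value = value; }

    public Object get() throws IOException, ClassNotFoundException {
        if (value == null && serializedForm != null) {
            // Deserialize only when first asked for.
            ObjectInputStream in =
                    new ObjectInputStream(new ByteArrayInputStream(serializedForm));
            value = in.readObject();
        }
        return value;
    }

    public void writeExternal(ObjectOutput out) throws IOException {
        if (serializedForm == null && value != null) {
            // Serialize lazily, on the first remote write.
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(baos);
            oos.writeObject(value);
            oos.close();
            serializedForm = baos.toByteArray();
        }
        int length = serializedForm != null ? serializedForm.length : 0;
        out.writeInt(length);
        if (length > 0) out.write(serializedForm);
    }

    public void readExternal(ObjectInput in) throws IOException {
        int length = in.readInt();
        if (length > 0) {
            serializedForm = new byte[length];
            in.readFully(serializedForm);
        }
    }
}
```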
anonymous wrote : Or any other JDK objects - Dates, etc.
Just have to be careful to exclude anything that can wrap a non-JDK type. Also can't use instanceof in type checking.
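To make the instanceof point concrete: a user subclass of a JDK type can carry non-JDK state, so an exact getClass() check is the safer test. A minimal sketch (the whitelist contents here are illustrative only):

```java
import java.util.Set;

// Sketch: whitelist types by exact class identity rather than instanceof, so
// a user subclass of a JDK type (which may wrap non-JDK state) is excluded.
public class ImmutableTypeCheck {
    private static final Set<Class<?>> SAFE_TYPES = Set.of(
            String.class, Integer.class, Long.class, Boolean.class,
            java.util.Date.class // one of the JDK types from this thread's example
    );

    static boolean isKnownJdkType(Object o) {
        // getClass() equality, NOT instanceof: a subclass of Date fails here.
        return o != null && SAFE_TYPES.contains(o.getClass());
    }

    public static void main(String[] args) {
        System.out.println(isKnownJdkType("abc"));                   // true
        System.out.println(isKnownJdkType(new java.util.Date() {})); // false: anonymous subclass
    }
}
```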
Re: "releaseObjectReferences(fqn)" as the method name, sounds good. The "flush" name was just me being lazy in my post. :)
+1 as well to having this be the default behavior.
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4115215#4115215
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4115215
Thread for discussion of http://jira.jboss.com/jira/browse/JBCACHE-1251.
Some further analysis of the problem; I don't really know the proper fix as I'm not familiar enough with all the subtle cases this code is handling.
Analyzing this in the debugger, it seems the problem is inconsistency in how deleted/invalidated nodes are handled in PLU.acquireLocksWithTimeout() and PLI.lock():
| do
| {
|    // this is an additional check to make sure we don't try for too long.
|    if (!firstTry && System.currentTimeMillis() > cutoffTime)
|       throw new TimeoutException("Unable to acquire lock on Fqn " + fqn + " after " + timeout + " millis");
|    created = lock(ctx, fqn, lockType, createIfNotExists, timeout, acquireLockOnParent, reverseRemoveCheck);
|    firstTry = false;
| }
| while (createIfNotExists && cache.peek(fqn, true) == null); // keep trying until we have the lock (fixes concurrent remove())
The cache.peek() call in the while is really a convenience version of cache.peek(fqn, true, false). This last "false" param means the loop will continue as long as the peek only finds an "invalid" tombstone.
This is what is happening. Question is why a new "valid" node isn't created in the lock() call.
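To make the tombstone semantics concrete, here is a self-contained sketch (peek() here is a stand-in for cache.peek(fqn, includeDeleted, includeInvalid), not the real JBoss Cache code):

```java
import java.util.HashMap;
import java.util.Map;

// Self-contained sketch of the tombstone semantics described above: it shows
// why the retry loop spins when the only node at the Fqn is an invalid tombstone.
public class TombstoneLoopSketch {
    enum Status { VALID, DELETED, INVALID }

    static final Map<String, Status> nodes = new HashMap<>();

    // Stand-in for cache.peek(fqn, includeDeleted, includeInvalid).
    static Status peek(String fqn, boolean includeDeleted, boolean includeInvalid) {
        Status s = nodes.get(fqn);
        if (s == null) return null;
        if (s == Status.DELETED && !includeDeleted) return null;
        if (s == Status.INVALID && !includeInvalid) return null;
        return s;
    }

    public static void main(String[] args) {
        nodes.put("/a/b", Status.INVALID);
        // peek(fqn, true) in the real loop is peek(fqn, true, false): an invalid
        // tombstone is filtered out, so (peek == null) stays true and the loop
        // keeps retrying; the repeek with (true, true) can still see the node.
        System.out.println(peek("/a/b", true, false)); // null -> loop continues
        System.out.println(peek("/a/b", true, true));  // INVALID -> visible to repeek
    }
}
```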
Looking at lock, a couple things pop out:
1) Creating a new node will only happen if the current node doesn't exist. In this case the current node does exist, it's just invalid. I don't see any logic in the loop to handle this.
2) There's logic to try to detect deleted/orphan nodes that seems odd:
| // make sure the lock we acquired isn't on a deleted node/is an orphan!!
| // look into invalidated nodes as well
| NodeSPI repeek = cache.peek(currentNode.getFqn(), true, true);
| if (currentNode != repeek)
|    if (log.isTraceEnabled())
|       log.trace("Was waiting for and obtained a lock on a node that doesn't exist anymore! Attempting lock acquisition again.");
|    // we have an orphan!! Lose the unnecessary lock and re-acquire the lock (and potentially recreate the node).
Idea here seems to be to relook up the current node from the cache but ignore deleted/invalid nodes. If the node we have in hand isn't the one the cache finds, there's an issue that we deal with.
Shouldn't the cache.peek call be cache.peek(currentNode.getFqn(), false, false) -- i.e. ignore deleted/invalid nodes?
Hmm, maybe not, I think I see the idea. You're *only* checking for nodes that are "floating in space". Nodes that are still part of the main cache structure, no matter what their status, are OK.
In that case, I think issue #1 is the problem -- there's no mechanism to create a new node if the existing one is invalid, or to somehow "resurrect" the invalid one.
OT: on the repeek thing -- that seems like quite a bit of overhead, since it's repeated for every node from the root down, on every call. Could that check be limited to the last node in the hierarchy?
An alternative (which requires an SPI change) is to add a boolean "orphan" flag to the node, or to make AbstractNode.deleted an enum with FALSE, DELETING, DELETED or something -- some way to change the state of the node itself when it's been cut loose from the tree.
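The enum idea could look something like this sketch (all names invented; today AbstractNode just has a boolean deleted field):

```java
// Sketch of replacing AbstractNode's boolean "deleted" flag with a small state
// machine, so a node cut loose from the tree can advertise that itself and the
// lock loop can test the node directly instead of re-peeking the cache.
public class NodeStateSketch {
    enum NodeState { VALID, DELETING, DELETED, ORPHAN }

    static class Node {
        private NodeState state = NodeState.VALID;

        void markDeleting() { state = NodeState.DELETING; }
        void markDeleted()  { state = NodeState.DELETED; }
        void markOrphan()   { state = NodeState.ORPHAN; }

        // A lock loop could call this on the node it already holds.
        boolean isUsable() { return state == NodeState.VALID; }
    }

    public static void main(String[] args) {
        Node n = new Node();
        System.out.println(n.isUsable()); // true
        n.markOrphan();
        System.out.println(n.isUsable()); // false
    }
}
```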
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4115180#4115180
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4115180
"alesj" wrote :
| I'm just saying if you did this before
| | assertTrue(isDeployed(someAlreadyDeployedArtifact));
| it failed.
| It failed for me when doing Seam tests, for example with an archive which, as seen in the console, deployed normally.
| Until I introduced mapping to this MainDeployer method as well.
| But then all the subdeployment checks failed. Dunno if this was ever supported in previous versions - e.g. checking whether a jar entry is deployed.
This is a problem with the legacy MainDeployer though, not the test code. The MainDeployer has to expect that it will be accessed using a regular URL that has to be resolved via the VFS. Checking the status of subdeployments given a URL was never supported.
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4115154#4115154
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4115154
OK, so I can use the previously discussed mechanism to allow the user to include arbitrary streams into the request, which are mapped to the remote side by the use of stream handlers.
But one issue that comes to mind is thread safety for the stream objects. Specifically, the local instance might be accessed from any number of I/O handler threads (as things stand currently). This might be OK in some cases (after all, it's little different from how RMI works), but it might not be OK in others (after all, it's nothing like how Remoting 2.x worked!).
So here's the solution I propose:
* When the context.invoke() method (which is blocking) is used, then the invoking thread (which would be blocking anyway) will be used to handle requests for local stream objects. This way you can use non-threadsafe objects as stream types without worrying about synchronization or visibility problems. More like Remoting 2.x.
* When the context.send() method (which is non-blocking) is used, then all stream objects included in the request must be threadsafe (since the calling thread will be performing other tasks). More like RMI.
This way, if for some reason you want to use the blocking method but you don't want all your stream callbacks happening in the same thread (and your streams are all threadsafe), you can just do context.send(request).get() instead of context.invoke(request).
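The proposed invoke/send split maps onto the standard Future pattern. This sketch is only the shape of the idea; none of these names are the real Remoting API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the proposed API shape: invoke() blocks in the caller's thread,
// send() hands work to a pool and returns a Future, and send().get() gives
// blocking semantics while stream callbacks still run on pool threads.
public class InvocationStyles {
    final ExecutorService pool = Executors.newFixedThreadPool(2);

    // Blocking style: runs in the calling thread (streams could be handled there).
    String invoke(String request) {
        return "reply:" + request;
    }

    // Non-blocking style: runs in a pool thread (streams must be threadsafe).
    Future<String> send(String request) {
        return pool.submit(() -> "reply:" + request);
    }

    public static void main(String[] args) throws Exception {
        InvocationStyles ctx = new InvocationStyles();
        String a = ctx.invoke("ping");
        String b = ctx.send("ping").get(); // blocking result, pool-thread work
        System.out.println(a.equals(b));   // true
        ctx.pool.shutdown();
    }
}
```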
On the server side, I think we *could* do some convoluted sticky-thread thing, where streams are always handled by the thread that handled the request, but it's probably just easier to require the server side to always use threadsafe objects. Maybe it would even be a good idea to include wrapper classes for all the stream types that do synchronization.
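A wrapper of that kind could be as simple as this sketch for InputStream (a real set would also cover output streams, readers, and writers):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch of a threadsafe wrapper: every delegate call is funneled through one
// monitor, so a non-threadsafe stream can be shared across I/O handler threads.
public class SynchronizedInputStream extends InputStream {
    private final InputStream delegate;
    private final Object lock = new Object();

    public SynchronizedInputStream(InputStream delegate) {
        this.delegate = delegate;
    }

    @Override public int read() throws IOException {
        synchronized (lock) { return delegate.read(); }
    }

    @Override public int read(byte[] b, int off, int len) throws IOException {
        synchronized (lock) { return delegate.read(b, off, len); }
    }

    @Override public void close() throws IOException {
        synchronized (lock) { delegate.close(); }
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new SynchronizedInputStream(
                new ByteArrayInputStream(new byte[] { 1, 2, 3 }));
        System.out.println(in.read()); // 1
        System.out.println(in.read()); // 2
    }
}
```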
Best of both worlds? Steaming pile of crap? You decide.
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4115153#4115153
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4115153
"scott.stark(a)jboss.org" wrote : I'm using JBossTestServices.getDeployURL without a problem, so what do you mean it fails?
There is no problem with this method.
I'm just saying that if you did this before
| assertTrue(isDeployed(someAlreadyDeployedArtifact));
it failed.
It failed for me when doing Seam tests, for example with an archive which, as seen in the console, deployed normally.
Until I introduced mapping to this MainDeployer method as well.
But then all the subdeployment checks failed. Dunno if this was ever supported in previous versions - e.g. checking whether a jar entry is deployed.
"scott.stark(a)jboss.org" wrote :
| As for testing subdeployments based on name, there should be a isDeployed(URL url, String subpath) method for that.
Or perhaps all this can be handled by VFS.
Splitting the initial path into tokens, and then traversing via VFS contexts.
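Tokenizing the path and walking it context by context could look like this sketch (Context here is a made-up stand-in, not the real VFS API):

```java
import java.util.Map;

// Sketch: split a deployment path into tokens and walk them one context at a
// time. "Context" and "MapContext" are stand-ins, not the real VFS types.
public class PathTraversal {
    interface Context {
        Context getChild(String name); // null if no such child
    }

    static class MapContext implements Context {
        private final Map<String, Context> children;
        MapContext(Map<String, Context> children) { this.children = children; }
        public Context getChild(String name) { return children.get(name); }
    }

    // Walks e.g. "my.ear/util.jar" token by token; null means "not deployed".
    static Context resolve(Context root, String path) {
        Context current = root;
        for (String token : path.split("/")) {
            if (token.isEmpty()) continue;
            current = current.getChild(token);
            if (current == null) return null;
        }
        return current;
    }

    public static void main(String[] args) {
        Context jar = new MapContext(Map.of());
        Context ear = new MapContext(Map.of("util.jar", jar));
        Context root = new MapContext(Map.of("my.ear", ear));
        System.out.println(resolve(root, "my.ear/util.jar") == jar); // true
        System.out.println(resolve(root, "my.ear/missing.jar"));     // null
    }
}
```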
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4115147#4115147
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4115147