Immutables.immutableSetCopy
by Brian Stansberry
This method looks quite inefficient for its actual usage.
A look in the IDE shows it's used by UnversionedNode.getChildrenNames()
and getKeysDirect(). Those pass either a ConcurrentHashMap$KeySet, a
FastCopyHashMap$KeySet, a Collections$EmptySet or a
Collections$SingletonSet. The attempted optimizations in
immutableSetCopy (some of which involve reflection) can handle none of
those, and eventually the HashSet copy constructor gets called.
A profiling run showed 22 invocations of UnversionedNode.getKeysDirect()
took 3,804 microseconds, of which 3,563 were in immutableSetCopy. Only 789
of that was in the new HashSet(toCopy) call; the rest was basically wasted.
Any reason I shouldn't just turn this into a new HashSet(toCopy) call?
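For illustration, the simplified method would look something like this
(a sketch; the exact signature and wrapper type in Immutables may differ):

import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public static <T> Set<T> immutableSetCopy(Set<T> set)
{
   if (set == null) return null;
   // Skip the reflective shortcuts entirely; one defensive copy, then wrap.
   return Collections.unmodifiableSet(new HashSet<T>(set));
}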
--
Brian Stansberry
Lead, AS Clustering
JBoss by Red Hat
15 years, 3 months
Towards JBC 3.2.0.GA
by Manik Surtani
Brian,
Looking at your comments and recently created JIRAs:
1. JBCACHE-1531: Agree with your solution here; it is simple and
makes sense. I'll have this in trunk this AM.
2. JBCACHE-1530: Where are we with this? Did your changes work?
Regarding releases, I'm guessing you are still working with
snapshots? If we have the above two issues closed, I can release a
3.2.0.CR1 for further testing.
Cheers
--
Manik Surtani
manik(a)jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org
15 years, 3 months
GravitateDataCommand finds invalid "phantom" nodes
by Brian Stansberry
More fun with buddy replication. :-)
Saw an error in one of our failover tests where:
1) Node D had left the group, so lots of gravitation was going on.
2) Various nodes were sending DataGravitationCleanupCommands to the
cluster for /BUDDY_BACKUP/D_DEAD/1/JSESSION/st_localhost/xxx. The result
is that all nodes in the cluster are trying to remove various
/BUDDY_BACKUP/D_DEAD/1/JSESSION/st_localhost/xxx nodes. On node A those
nodes don't exist, so PessimisticLockInterceptor.handleRemoveCommand is
adding them and then removing them.
3) Concurrent with #2, a GravitateDataCommand for
/JSESSION/st_localhost/123 comes in to node A. Session 123 was never
stored on node A, so this should result in a cache miss. But what
happened once was:
[JBoss] 16:46:52,961 TRACE
[org.jboss.cache.marshall.CommandAwareRpcDispatcher]
(Incoming-13,10.34.32.153:14736) Problems invoking command.
[JBoss] org.jboss.cache.NodeNotValidException: Node
/_BUDDY_BACKUP_/10.34.32.156_48822:DEAD/1/JSESSION/st_localhost/UvzutZkoESBMRSnjv0eTRA__
is not valid. Perhaps it has been moved or removed.
[JBoss] at
org.jboss.cache.invocation.NodeInvocationDelegate.assertValid(NodeInvocationDelegate.java:527)
[JBoss] at
org.jboss.cache.invocation.NodeInvocationDelegate.getChildrenNames(NodeInvocationDelegate.java:292)
[JBoss] at
org.jboss.cache.commands.read.GravitateDataCommand.perform(GravitateDataCommand.java:176)
...
It seems the command is seeing a non-existent node. Yep; looking at the
logs it's clear the above GravitateDataCommand was executed concurrently
with another DataGravitationCleanupCommand for the same session. (I need
to investigate why that happened.)
Below is a possible patch to work around the issue. This points to a
more general locking problem though -- should these "phantom nodes"
created for removal be visible to other threads? Shouldn't there be a WL
on them from the moment they are created until after they are removed?
Hehe, answered my own question by writing it. The node is created by
PessimisticNodeBasedLockManager and then locked. There's a gap in
between where another thread could get a ref to it.
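Schematically, the window looks like this (method names here are
illustrative only, not the actual JBC calls):

// Pseudo-flow of PessimisticNodeBasedLockManager when the target of a
// remove doesn't exist (names are illustrative, not real JBC methods):
NodeSPI phantom = createPhantomNodeForRemoval(fqn); // now visible to other threads
// <-- gap: a concurrent GravitateDataCommand can grab a ref to it here
acquireWriteLock(phantom, ctx);                     // WL arrives too late
// ... the remove proceeds; the other thread's ref is now invalid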
Anyway, the patch:
### Eclipse Workspace Patch 1.0
#P jbosscache-core
Index: src/main/java/org/jboss/cache/commands/read/GravitateDataCommand.java
===================================================================
--- src/main/java/org/jboss/cache/commands/read/GravitateDataCommand.java	(revision 8163)
+++ src/main/java/org/jboss/cache/commands/read/GravitateDataCommand.java	(working copy)
@@ -29,6 +29,7 @@
 import org.jboss.cache.InternalNode;
 import org.jboss.cache.InvocationContext;
 import org.jboss.cache.Node;
+import org.jboss.cache.NodeNotValidException;
 import org.jboss.cache.NodeSPI;
 import org.jboss.cache.buddyreplication.BuddyFqnTransformer;
 import org.jboss.cache.buddyreplication.BuddyManager;
@@ -171,9 +172,18 @@
       else
       {
          // make sure we LOAD data for this node!!
-         actualNode.getData();
-         // and children!
-         actualNode.getChildrenNames();
+         try
+         {
+            actualNode.getData();
+            // and children!
+            actualNode.getChildrenNames();
+         }
+         catch (NodeNotValidException e)
+         {
+            if (trace)
+               log.trace("Found node " + actualNode.getFqn() + " but it is not valid. Returning 'no data found'", e);
+            return GravitateResult.noDataFound();
+         }
       }
 
       if (backupNodeFqn == null && searchSubtrees)
--
Brian Stansberry
Lead, AS Clustering
JBoss by Red Hat
15 years, 3 months
JBCACHE-1521 - Point AS integration sections to Clustering guide sections
by Galder Zamarreno
Hi Brian,
I've finished the work for https://jira.jboss.org/jira/browse/JBCACHE-1521
and you can find attached a sample of what sections 5.2 and 5.3 will
look like for JBC 3.2. As agreed, let me know if there's anything you'd
like to add there that is not present in the Clustering Guide and that
you feel should be included in the JBC docs.
As an FYI: generating the JBC docs with OpenJDK throws an NPE; not sure
if you'll have similar issues with the AS Clustering Guide, but just in
case :). See the updated Readme-Maven.txt in JBC trunk.
Cheers,
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
15 years, 3 months
LockParentForChildInsertRemove and PessimisticLocking
by Brian Stansberry
From looking at the JBC 3 code, it seems the
LockParentForChildInsertRemove configuration is no longer respected for
pessimistic locking. I can't trace any path from the property in
Configuration to code that uses it.
PessimisticLockInterceptor.handlePutCommand, handleMoveCommand and
handleRemoveNodeCommand all unconditionally tell the lock manager to
lock parents, while handleEvictFqnCommand always tells it not to lock
parents.
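For reference, this is the flag I mean (assuming the usual Configuration
setter name; per the above, the pessimistic interceptors never consult it):

import org.jboss.cache.config.Configuration;

Configuration cfg = new Configuration();
// Intent: skip write-locking the parent when children are inserted/removed.
// With pessimistic locking in JBC 3 this currently appears to be ignored.
cfg.setLockParentForChildInsertRemove(false);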
This is causing failures in buddy replication testing when nodes
join/leave clusters under load. There's a lot of data gravitation, plus
stuff like migrating defunct backup trees to "DEAD" regions; too much
contention for parent-level locks.
Plus, locking the structural parent to add/remove session nodes will
suck for the session caching use case.
--
Brian Stansberry
Lead, AS Clustering
JBoss by Red Hat
15 years, 3 months