[JBoss JIRA] Created: (ISPN-1102) Adaptive marshalling buffer size
by Galder Zamarreño (JIRA)
Adaptive marshalling buffer size
--------------------------------
Key: ISPN-1102
URL: https://issues.jboss.org/browse/ISPN-1102
Project: Infinispan
Issue Type: Enhancement
Components: Marshalling
Reporter: Galder Zamarreño
Assignee: Galder Zamarreño
Fix For: 5.0.0.CR3
The default marshalling buffer size is 512 bytes, which is often too big.
We need a more adaptive approach that sizes buffers more accurately.
Dan's suggestion of reservoir sampling could be handy here: sample the serialized sizes actually observed and derive buffer sizes from the sample, rather than using a fixed size.
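A sketch of what the reservoir-sampling idea could look like (class and method names are made up for illustration; this is not Infinispan code): keep a fixed-size random sample of observed serialized sizes and size new buffers from a high percentile of that sample.

```java
import java.util.Arrays;
import java.util.Random;

// Illustrative sketch, not Infinispan code: maintain a fixed-size
// reservoir of observed serialized-object sizes and size new buffers
// from a high percentile of the sample, instead of a fixed default.
public class AdaptiveBufferSizePredictor {
    private final int[] reservoir;
    private final Random random = new Random(42);
    private long observed; // total number of sizes seen so far

    public AdaptiveBufferSizePredictor(int sampleSize) {
        this.reservoir = new int[sampleSize];
    }

    // Classic reservoir sampling: every observation has an equal chance
    // of ending up in the fixed-size sample.
    public void record(int serializedSize) {
        if (observed < reservoir.length) {
            reservoir[(int) observed] = serializedSize;
        } else {
            long slot = (long) (random.nextDouble() * (observed + 1));
            if (slot < reservoir.length) {
                reservoir[(int) slot] = serializedSize;
            }
        }
        observed++;
    }

    // Size the next buffer at the given percentile of the sample, so most
    // objects fit without reallocation and without gross over-allocation.
    public int nextBufferSize(double percentile) {
        int n = (int) Math.min(observed, reservoir.length);
        if (n == 0) return 512; // no data yet: fall back to the old fixed default
        int[] sample = Arrays.copyOf(reservoir, n);
        Arrays.sort(sample);
        int idx = Math.min(n - 1, (int) (percentile * n));
        return sample[idx];
    }
}
```

With this, a marshaller could record each object's final serialized size after writing it and allocate the next buffer at, say, the 90th percentile of recent sizes.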
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
13 years, 7 months
[JBoss JIRA] Created: (ISPN-1109) Expose JGroups JChannel JMX monitoring
by Mathieu Lachance (JIRA)
Expose JGroups JChannel JMX monitoring
--------------------------------------
Key: ISPN-1109
URL: https://issues.jboss.org/browse/ISPN-1109
Project: Infinispan
Issue Type: Feature Request
Components: JMX, reporting and management
Affects Versions: 5.0.0.CR2
Reporter: Mathieu Lachance
Assignee: Manik Surtani
Priority: Trivial
Fix For: 5.0.0.CR3
To use JGroups JChannel JMX monitoring, the channel must be registered with an MBeanServer.
Would it be possible to add a configuration key that enables monitoring the cache at the network level?
Here's the JGroups documentation on activating JMX monitoring programmatically: http://community.jboss.org/wiki/JMX
I guess the correct place to implement the feature is org.infinispan.remoting.transport.jgroups.JGroupsTransport::startJGroupsChannelIfNeeded(),
and the implementation could look like this:
protected void startJGroupsChannelIfNeeded() {
   if (startChannel) {
      try {
         channel.connect(configuration.getClusterName());
         // my first contribution - M Lachance:
         // register the channel with the first available MBeanServer
         ArrayList<MBeanServer> servers = MBeanServerFactory.findMBeanServer(null);
         if (servers == null || servers.isEmpty()) {
            log.log(Logger.Level.WARN, "No available MBeanServers");
         } else {
            MBeanServer server = servers.get(0);
            try {
               JmxConfigurator.registerChannel((JChannel) channel, server,
                     "JChannel=" + channel.getChannelName(), "", true);
            } catch (Exception e) {
               log.log(Logger.Level.WARN, "Could not register with JMX", e);
            }
         }
      } catch (ChannelException e) {
         throw new CacheException("Unable to start JGroups Channel", e);
      }
   }
   address = new JGroupsAddress(channel.getAddress());
   if (log.isInfoEnabled())
      log.localAndPhysicalAddress(getAddress(), getPhysicalAddresses());
}
Thanks,
13 years, 7 months
[JBoss JIRA] Created: (ISPN-1126) Exception hidden on Hot Rod client
by Galder Zamarreño (JIRA)
Exception hidden on Hot Rod client
----------------------------------
Key: ISPN-1126
URL: https://issues.jboss.org/browse/ISPN-1126
Project: Infinispan
Issue Type: Bug
Components: Cache Server
Affects Versions: 4.2.1.FINAL
Reporter: Galder Zamarreño
Assignee: Galder Zamarreño
Fix For: 4.2.2.BETA1
RetryOnFailureOperation is not correctly logging the exceptions that lead to a retry, which hides a crucial exception needed to clarify JBPAPP-6113. Here's the log:
{code}tcpTransport.log:33670:2011-05-17 11:23:31,119 211988 TRACE [org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation] (Runner - 0:) Exception encountered. Retry 4 out of 40{code}
So, what exception was encountered? No idea. Why? The logging code is:
{code}log.trace(message, i, transportFactory.getTransportCount(), te);{code}
This resolves to "void trace(Object message, Object... params);", which won't print the stack trace of the cause.
Instead, the log code should be:
{code}log.trace(message, te, i, transportFactory.getTransportCount());{code}
so that it resolves to "void trace(Object message, Throwable t, Object... params);" and the exception is logged with its stack trace.
Verify master as well.
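The overload trap can be reproduced outside the client with two methods mirroring the shapes quoted above (a minimal sketch; the method names echo the logger API, but this is illustrative code, not the actual JBoss Logging class):

```java
// Minimal sketch of the varargs overload trap described above. The two
// static methods mirror the shapes of "trace(Object, Object...)" and
// "trace(Object, Throwable, Object...)"; illustrative code, not the
// real JBoss Logging API.
public class OverloadTrap {
    // When the Throwable is passed last, it is absorbed into params and
    // this overload wins, so the stack trace is never printed as a cause.
    static String trace(Object message, Object... params) {
        return "params-overload: throwable lost among " + params.length + " params";
    }

    // Only when the Throwable is the second argument does the compiler
    // pick this more specific overload, preserving the cause.
    static String trace(Object message, Throwable t, Object... params) {
        return "throwable-overload: cause=" + t.getMessage();
    }

    public static void main(String[] args) {
        Throwable te = new RuntimeException("connection reset");
        // Buggy argument order, as in the current client code:
        System.out.println(trace("Retry {0} out of {1}", 4, 40, te));
        // Fixed argument order, as proposed above:
        System.out.println(trace("Retry {0} out of {1}", te, 4, 40));
    }
}
```

Because the Throwable in trailing position is a perfectly valid varargs element, the compiler silently selects the Object... overload and the cause is dropped with no warning.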
13 years, 8 months
[JBoss JIRA] Created: (ISPN-1049) transaction participant failure after prepare causes data inconsistency
by Mircea Markus (JIRA)
transaction participant failure after prepare causes data inconsistency
------------------------------------------------------------------------
Key: ISPN-1049
URL: https://issues.jboss.org/browse/ISPN-1049
Project: Infinispan
Issue Type: Feature Request
Components: Transactions
Affects Versions: 4.2.1.FINAL
Reporter: Mircea Markus
Assignee: Mircea Markus
Fix For: 5.0.0.FINAL
cluster {A, B, C, D}, dist, numOwners=3.
A transaction started on A touches B and C. A prepares, then C crashes.
When the TM commits, the user gets a TimeoutException because the commit RPC to C failed.
The state of the cluster after commit is: tx state successfully applied on A and B, but not on D!
The tx should be applied on D as well, since numOwners=3; or, at least, it should roll back on A and B. The point is that the cluster must remain in a consistent state.
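The two possible outcomes can be sketched with a toy commit loop (illustrative names only, not Infinispan's transaction code): committing best-effort leaves the surviving owners divergent, while rolling back on a failed commit RPC keeps every node consistent.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy model of the failure mode above (illustrative, not Infinispan's
// transaction code). All owners have acked prepare; at commit time one
// owner has crashed. "Best effort" commit leaves the survivors divergent;
// rolling back on a failed commit RPC keeps the cluster consistent.
public class CommitSketch {
    // Per-node outcome after the commit phase: true = tx applied.
    static Map<String, Boolean> commit(List<String> owners, Set<String> crashed,
                                       boolean rollbackOnFailure) {
        Map<String, Boolean> applied = new LinkedHashMap<>();
        boolean anyFailed = false;
        for (String node : owners) {
            boolean ok = !crashed.contains(node); // commit RPC to a crashed node fails
            applied.put(node, ok);
            anyFailed |= !ok;
        }
        if (anyFailed && rollbackOnFailure)
            applied.replaceAll((node, ok) -> false); // undo on every surviving node
        return applied;
    }

    public static void main(String[] args) {
        List<String> owners = Arrays.asList("A", "B", "C"); // numOwners=3
        // Current behaviour: A and B applied, C did not -> inconsistent.
        System.out.println(commit(owners, Collections.singleton("C"), false));
        // Rollback on failure: nobody applied -> consistent (if lossy).
        System.out.println(commit(owners, Collections.singleton("C"), true));
    }
}
```

The preferable fix, as the issue says, is the third option the toy model does not show: re-run the commit against the post-rehash owner set so D receives the state it now owns.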
13 years, 8 months
[JBoss JIRA] Created: (ISPN-878) update documentation for http://community.jboss.org/wiki/MultipleTiersofCaches
by Mircea Markus (JIRA)
update documentation for http://community.jboss.org/wiki/MultipleTiersofCaches
-------------------------------------------------------------------------------
Key: ISPN-878
URL: https://issues.jboss.org/browse/ISPN-878
Project: Infinispan
Issue Type: Task
Reporter: Mircea Markus
Assignee: Manik Surtani
Fix For: 5.0.0.Final
Update http://community.jboss.org/wiki/MultipleTiersofCaches as follows:
Feedback from Galder: the text in the diagram is very small and can hardly be read. Maybe you can improve it a little?
Feedback from Manik:
* Spelling: "Multi-Tiered", not "Multiered"
* Your diagram probably wants to demonstrate 2 separate tiers, not just 1 client with a clustered server backend. E.g.,
FE1 --- FE2 --- FE3
===============
BE1 --- BE2 --- BE3
where FE = front-end servers (embedded, e.g. an app server running a webapp, with an embedded Infinispan instance as a "near cache")
BE = backend, dedicated cache tier, running Hot Rod endpoints
FEs configured with RemoteCacheStore, and also using invalidation so that the FEs can invalidate each other
BEs running DIST for maximum addressable space and scalability
This way, with the FEs running in INVAL mode, it doesn't matter if the BEs cannot communicate changes back to the FEs, since the FEs will already know about changes and will invalidate accordingly.
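A toy model of that invalidation flow (hypothetical classes, not the real Infinispan or Hot Rod APIs): each FE keeps a local near cache, writes through to the shared BE tier, and invalidates the key on its peer FEs.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of the two tiers sketched above. Each FE keeps a local
// near cache and, on a write, invalidates the key on its peer FEs.
// The shared map stands in for the BE tier (DIST, reached over Hot Rod
// via RemoteCacheStore in the real setup). Illustrative only.
public class TwoTierSketch {
    final Map<String, String> backend;                      // stand-in for the BE tier
    final Map<String, String> near = new ConcurrentHashMap<>();
    final List<TwoTierSketch> peers = new ArrayList<>();

    TwoTierSketch(Map<String, String> backend) {
        this.backend = backend;
    }

    String get(String key) {
        String v = near.get(key);
        if (v == null) {                                    // near-cache miss: fetch from BE
            v = backend.get(key);
            if (v != null) near.put(key, v);
        }
        return v;
    }

    void put(String key, String value) {
        backend.put(key, value);                            // write through to the BE tier
        near.put(key, value);
        for (TwoTierSketch p : peers) p.near.remove(key);   // INVAL: peers drop stale copies
    }
}
```

The key property the diagram is after: after an FE writes, its peers' next read misses the near cache and refetches from the BE tier, so no BE-to-FE change notification is required.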
13 years, 8 months