Suspect Exceptions
by Sanne Grinovero
Is it expected that an exception such as
org.infinispan.remoting.transport.jgroups.SuspectException is
propagated back to the user API?
Does anyone have suggestions on how I should deal with such an error
from the application point of view?
I guess it means the topology changed while performing an operation,
but then I'd expect it to be cared of by Infinispan, and retry on the
new topology if possible.. no?
Tia,
Sanne
13 years, 1 month
Auto Scaling for Infinispan
by Paolo Romano
Hi,
one more result from the Cloud-TM project that we thought might be
interesting for the Infinispan community (and possibly OpenShift).
Our last effort is a system for automating elastic scaling for
Infinispan, which we named TAS: Transactional Auto Scaler.
TAS uses a hybrid methodology combining analytical modelling, for
forecasting the effects of data contention, and machine learning
techniques (we experimented both with Radial Basis Functions Aritificial
Neural Networks and Regression Decision Trees), for forecasting the
effects of contention on hardware resources.
Applications of TAS range from on-line self-optimization of
in-production applications, to the automatic generation of QoS/cost
driven elastic scaling policies, and support for what-if analysis on the
scalability of transactional applications.
Results look pretty good, probably even better than we were originally
hoping! ;-)
Cheers,
Paolo
13 years, 1 month
tomcat infinispan session manager
by Zdeněk Henek
Hi,
I am working on prototype of https://issues.jboss.org/browse/ISPN-465.
Currently works basic functionality like create session, add,update,
remove values from session, remove session.
Session could be with or without jvmRoute ... see details
here:https://github.com/zvrablik/tomcatInfinispanSessionManager)
I have a few questions related to infinispan.
1. classloading issue when used only one shared classloader
============================================
Current master (https://github.com/zvrablik/tomcatInfinispanSessionManager)
creates
cache manager per war application. I would like use one shared cache manager.
Is it possible?
I have created branch
(https://github.com/zvrablik/tomcatInfinispanSessionManager/tree/classloader)
where only first InfinispanSessionManager create DefaultCacheManager.
All created caches set explicitly class loader to configuration and
DecoratedCache is used to access cache.
See InfinispanSessionManager.initInfinispan method.
my infinispan configuration (all created caches use default settings):
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:5.0
http://www.infinispan.org/schemas/infinispan-config-5.0.xsd"
xmlns="urn:infinispan:config:5.0">
<global>
<transport clusterName="tomcatSession"/>
<globalJmxStatistics enabled="true" allowDuplicateDomains="true"/>
</global>
<!-- use only default cache for all caches created by session manger -->
<!-- to specify custom parameters to one cache create named cache with
name _session_attrContainerName
where ContainerName is name of war application -->
<default>
<jmxStatistics enabled="true"/>
<clustering mode="distribution">
<l1 enabled="false" lifespan="600000"/>
<hash numOwners="2" rehashRpcTimeout="6000"/>
<sync/>
</clustering>
<invocationBatching enabled="true"/>
</default>
</infinispan>
I use tomcat 6.0.29 and Infinispan 5.1.Beta5, java 6 update 26 on
Debian stable 64bit.
I get exception on node which should replicate state sent from another node.
See attached tomcat log.
The missing class is available only through testLB war application classloader.
2. locking and distributed transactions
============================
I use FineGrainedAtomicMap to store session attriubtes. I don't use
any locking nor XA transactions.
Do I have to use locking or XA transactions? I think autocommit mode
could be better in this case
and tomcat doesn't have XA manager by default. I am using distributed
transactions only in relational databases.
3. propagating session events to other nodes, sharing session metadata
==============================================
Is it possible to send (broadcast) custom events through Infinispan?
Currently it is possible to remove session from all nodes when session
is removed in any node,
but there must be done more. Session timeout, session custom events ...
other possible aproach is to create separated cache with session metadata.
I think creating new cache for metadata is better approach. I think
there will be
less network traffic when nodes will get information only when requested.
session object could have assigned listeners and broadcast events
which are not related to session attributes.
Thanks for help.
Regards,
Zdenek Henek
13 years, 1 month
trace vs log.isTraceEnabled
by Mircea Markus
Hi,
I'm not aware of any convention on using trace vs log.isTraceEnabled() to guard the trace statements.
if (trace) log.trace("some request related stuff")
vs
if (log.isTraceEnabled()) log.trace("some request related stuff");
The former is more efficient, but cannot be managed at runtime. It seems to be the preferred form, so shall we stick with it?
Cheers,
Mircea
13 years, 1 month
Meaning of locking in Infinispan: ISPN-1546 and better general throughput
by Sanne Grinovero
So in Infinispan we have a single type of lock associated with each
entry; I assume it's designed like it is to save on memory, as are
features like lock striping, and as the reason to not support both
read and write locks.
A problem I'm having more and more often, is that the "lock" is being
used both as something that leaks out as a way to control data
consistency from a "user" point of view, and as an internal artifact
to ensure consistent internal mutations of our data structure.
So the same Lock instance can be acquired for long term by a user
process, and prevent for example the data to be passivated on a
CacheLoader as this will attempt to grab the same lock.
These should really be two different things!
It might bring up some confusion as well, like with the
FineGrainedAtomicMap: the feature I need is to be able to lock values
in an atomic map individually, in terms of data consistency. This does
NOT imply that Infinispan should not be allowed to acquire locks on
the AtomicMap itself for brief moments, to proceed with other internal
processes.. be it to create a threadsafe iterator, passivate the entry
to a CacheLoader or even transfer the element to a new owned in the
grid.
Finally, to save memory I think that we don't need to guarantee these
lock instances exists all the time; granted it might be more efficient
to keep them as part of the Entry to avoid re-creating them too often,
but in some cases it might not be the case, so we might even make it
possible to create a different kind of Entry to optimise for specific
usage patterns.
In pratical terms, it's currently quite hard to design a consistent
data access using Infinispan's locks from a user point of view if you
have to consider that Infinispan might lock the keys for it's own
internal needs.
To solve ISPN-1546, I think it's totally fine to acquire a lock on the
FGAM for the time needed to create an iterator. But this lock needs to
be a different instance than the entry itself, and will be very short
lived, and not clustered in any way. it's just a means to guarantee we
can make a safe copy of the needed Array, and acquiring this lock
should have nothing to do with the "data experience" of preventing
some entries of the FGAM to be updated.
thoughts?
Cheers,
Sanne
13 years, 1 month
Shipping logical address String in Hot Rod protocol v1.1?
by Galder Zamarreño
Hi,
We've been having a discussion this morning with regards to the Hot Rod changes introduced in 5.1 with regards to hashing.
When Hot Rod server is deployed in AS, in order to start correctly, it requires the Hot Rod server to start before any other (clustered) caches in AS. This is because any hashing can only be done once the hash has been calculated on the hot rod endpoint.
Although this can be fixed in a hacky way (have all caches configured to start lazily and let the Hot Rod server start the topology cache and then all defined caches, ugh), we're considering a slight change to Hot Rod protocol v 1.1 (https://docs.jboss.org/author/display/ISPN/Hot+Rod+Protocol+-+Version+1.1) that would solve this problem.
Instead of hashing on the hot rod endpoint address (host:port), we could hash on JGroups's Address UTF-8 toString representation, which is the logical address. The advantage here is any cache can calculate the hash on this from the start, no need to wait for Hot Rod server to start. The downside is that Hot Rod clients need to be aware of this UTF-8 string in order to hash to the same thing, so it'd mean shipping this back to the clients alongside the hot rod endpoint info.
In spite of the added bytes, there's a benefit to this, and that is the fact that we're not tying a specific format to the String. That is, we're just telling the clients to take this UTF-8 string and hash on that. So, it's internal representation could evolve over time with no impact to the client. In the current v 1.1 protocol, clients and servers assume a String of format host:port to do the hashing.
A third way to fix this would have been to have Hot Rod servers run with a different cache manager to the rest (but still same JGroups channel), but according to Dan, AS7 is not capable of doing this.
So, what does people think of this protocol change in v1.1? This version was introduced in 5.1 and since we still have not released yet, this is the right time to consider protocol changes like this. Personally, I'm in favour.
Cheers,
--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache
13 years, 1 month
Bucket-based cache stores not removing buckets when they're empty
by Martin Gencur
I found out through our tests that bucket-based cache stores
(JdbcBinaryCacheStore and FileCacheStore) are not removing buckets (this
is either a database table row or a file) when they remove last
entry from the bucket and they remain empty in the database/filesystem.
(https://github.com/infinispan/infinispan/blob/master/core/src/main/java/o...)
Buckets are removed only when the cache store is purged (purge
attribute==true) so when the purge attribute is not set then the number
of buckets will only grow.
Shouldn't we change this to remove buckets when removing last entry from
them?
Thanks for any thoughts
--
Martin Gencur
--
JBoss QE, Enterprise Data Grid
Desk phone: +420 532 294 192, ext. 62192
13 years, 1 month
failing build on master
by Michal Linhard
we have a failing build on master: some tests can't compile
mvn install -DskipTests=true
m.
--
Michal Linhard
Quality Assurance Engineer
JBoss Enterprise Datagrid
Red Hat Czech s.r.o.
Purkynova 99 612 45 Brno, Czech Republic
phone: +420 532 294 320 ext. 62320
mobile: +420 728 626 363
13 years, 1 month
Fwd: [JBoss JIRA] (ISPN-1562) Alternative needed for Cache.getConfiguration()
by Pete Muir
All,
Any ideas on the below? Issue is that the sane name for this method is getConfiguration() but this name is already taken. Options I see are:
1) Use another name (ugh)
2) Swap the return types with no deprecation stage (ugh)
Any better ideas?
Begin forwarded message:
> From: Galder Zamarreño (Created) (JIRA) <jira-events(a)lists.jboss.org>
> Subject: [JBoss JIRA] (ISPN-1562) Alternative needed for Cache.getConfiguration()
> Date: 24 November 2011 09:51:41 GMT
> To: pmuir(a)bleepbleep.org.uk
>
> Alternative needed for Cache.getConfiguration()
> -----------------------------------------------
>
> Key: ISPN-1562
> URL: https://issues.jboss.org/browse/ISPN-1562
> Project: Infinispan
> Issue Type: Bug
> Components: Configuration, Core API
> Affects Versions: 5.1.0.BETA5
> Reporter: Galder Zamarreño
> Assignee: Pete Muir
> Fix For: 5.1.0.CR1
>
>
> Provide an alternative way of retrieving a Cache's configuration instead of deprecated Cache.getConfiguration()
>
> --
> This message is automatically generated by JIRA.
> If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.jboss.org/secure/ContactAdministrators!default.jspa
> For more information on JIRA, see: http://www.atlassian.com/software/jira
>
>
13 years, 1 month