[JBoss JIRA] Created: (ISPN-385) Add idle timeout to memcached and hot rod servers.
by Galder Zamarreno (JIRA)
Add idle timeout to memcached and hot rod servers.
--------------------------------------------------
Key: ISPN-385
URL: https://jira.jboss.org/jira/browse/ISPN-385
Project: Infinispan
Issue Type: Task
Components: Cache Server
Reporter: Galder Zamarreno
Assignee: Galder Zamarreno
Fix For: 4.1.0.BETA1
Add an IdleStateHandler to the memcached and Hot Rod servers so that idle connections can be closed automatically. This is necessary to handle error conditions such as a client erroneously telling the server that it needs to read 20 bytes but then only sending 10. Without such a handler, the server would keep waiting for the remaining bytes forever. The handler provides a defence mechanism for these cases.
The timeout will be configurable via the command line. In addition, clients that do connection pooling will need the server to be configured accordingly: there is hardly any point in a server with an idle timeout of 30 seconds when clients only close connections after 60 seconds of idle time. These two settings should be aligned.
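A minimal sketch of the intended mechanism, assuming a Netty pipeline. It uses the io.netty (Netty 4) style of IdleStateHandler and a hypothetical idleTimeoutSeconds option; the actual server bootstrap and Netty version may differ, so treat this purely as an illustration of where the handler would sit and what it would do:

import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.timeout.IdleStateEvent;
import io.netty.handler.timeout.IdleStateHandler;

public class IdleAwareInitializer extends ChannelInitializer<SocketChannel> {

    // Value taken from the (hypothetical) --idle-timeout command-line option
    private final int idleTimeoutSeconds;

    public IdleAwareInitializer(int idleTimeoutSeconds) {
        this.idleTimeoutSeconds = idleTimeoutSeconds;
    }

    @Override
    protected void initChannel(SocketChannel ch) {
        // Fires an IdleStateEvent when no read or write happens for idleTimeoutSeconds
        ch.pipeline().addLast("idle-detector", new IdleStateHandler(0, 0, idleTimeoutSeconds));
        // Reacts to the event by dropping the stalled connection
        ch.pipeline().addLast("idle-closer", new ChannelDuplexHandler() {
            @Override
            public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
                if (evt instanceof IdleStateEvent) {
                    // e.g. the client announced 20 bytes but sent only 10
                    ctx.close();
                } else {
                    super.userEventTriggered(ctx, evt);
                }
            }
        });
        // ... memcached or Hot Rod protocol decoder/encoder handlers would follow here
    }
}

Clients pooling connections would then need their pool's idle eviction time to stay below whatever value is passed to this handler.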
[JBoss JIRA] Created: (ISPN-689) Read past EOF caused in Lucene Directory when Lucene flushes but doesn't close the segment
by Sanne Grinovero (JIRA)
Read past EOF caused in Lucene Directory when Lucene flushes but doesn't close the segment
------------------------------------------------------------------------------------------
Key: ISPN-689
URL: https://jira.jboss.org/browse/ISPN-689
Project: Infinispan
Issue Type: Bug
Components: Lucene Directory
Affects Versions: 4.2.0.ALPHA2, 4.1.0.Final
Reporter: Sanne Grinovero
Assignee: Sanne Grinovero
Fix For: 4.2.0.BETA1, 4.2.0.Final, 5.0.0.Final
While this is not the default access scenario performed by Lucene, it's possible in some branches to flush the segment and read it back before the close.
I could reproduce the following stacktrace under high load with a huge index, but not in a unit test. Disabling the batch started between .flush() and .close() seems to resolve the issue; this batch also seems totally useless, as I couldn't find any change in performance when disabling it.
java.io.IOException: Read past EOF: Chunk value could not be found for key _ni.fdt|4|issues
at org.infinispan.lucene.InfinispanIndexInput.setBufferToCurrentChunk(InfinispanIndexInput.java:138)
at org.infinispan.lucene.InfinispanIndexInput.nextChunk(InfinispanIndexInput.java:131)
at org.infinispan.lucene.InfinispanIndexInput.readBytes(InfinispanIndexInput.java:96)
at org.apache.lucene.store.IndexInput.readBytes(IndexInput.java:61)
at org.apache.lucene.index.CompoundFileWriter.copyFile(CompoundFileWriter.java:228)
at org.apache.lucene.index.CompoundFileWriter.close(CompoundFileWriter.java:184)
at org.apache.lucene.index.IndexWriter.flushDocStores(IndexWriter.java:2342)
at org.apache.lucene.index.IndexWriter.doFlushInternal(IndexWriter.java:4359)
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:4264)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:4255)
at org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:4133)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:4206)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:4179)
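For reference, a minimal illustration of the flush-then-read-before-close pattern the description refers to, sketched against the Lucene 3.x API. The Directory is assumed to be the Infinispan-backed one (construction omitted); note that in the stacktrace above the read actually happens inside Lucene's own commit path, and the failure only reproduced under high load, so this is illustrative rather than a reproducer:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.util.Version;

public class FlushWithoutClose {

    // 'directory' is assumed to be an Infinispan-backed Lucene Directory
    static void flushThenRead(Directory directory) throws Exception {
        IndexWriter writer = new IndexWriter(directory,
                new StandardAnalyzer(Version.LUCENE_30), IndexWriter.MaxFieldLength.UNLIMITED);
        Document doc = new Document();
        doc.add(new Field("id", "1", Field.Store.YES, Field.Index.NOT_ANALYZED));
        writer.addDocument(doc);

        // Flush/commit the segment but keep the writer open...
        writer.commit();

        // ...and read it back before close(): this is the window in which the
        // "Read past EOF" was observed while the flush/close batch was enabled.
        IndexReader reader = IndexReader.open(directory);
        try {
            System.out.println("docs visible after flush: " + reader.numDocs());
        } finally {
            reader.close();
        }
        writer.close();
    }
}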
[JBoss JIRA] Created: (ISPN-439) Define and implement configuration file backward and forward compatibility policy
by Vladimir Blagojevic (JIRA)
Define and implement configuration file backward and forward compatibility policy
---------------------------------------------------------------------------------
Key: ISPN-439
URL: https://jira.jboss.org/jira/browse/ISPN-439
Project: Infinispan
Issue Type: Task
Affects Versions: 4.1.0.BETA1
Reporter: Vladimir Blagojevic
Assignee: Manik Surtani
Fix For: 4.1.0.CR1
Backward compatibility:
Process any configuration file from an earlier release within the same major version. For example, Infinispan 4.2 should process a configuration file produced for version 4.1 without any configuration file changes or any other adjustments. For configuration options present in 4.2 but not in 4.1, assume default values. However, Infinispan 5.0 is not required to process configuration files from previous major versions, i.e. 4.0...4.x.
Forward compatibility:
Do not process any configuration file from a later version. For example, Infinispan 4.1, given configuration file input from any succeeding version (i.e. 4.2...4.x, 5.x), should simply fail outright with a proper error message.
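A hypothetical sketch of how this policy could be enforced once the schema version declared by the file has been parsed; the class and method names are illustrative, not the actual Infinispan parser API:

public final class ConfigVersionPolicy {

    /**
     * Accept files from the same major version with an equal or lower minor
     * version (missing options fall back to defaults); reject files from a
     * newer version or a different major version with a clear error.
     */
    static void checkCompatibility(String runningVersion, String fileVersion) {
        int[] running = parse(runningVersion);
        int[] file = parse(fileVersion);
        if (file[0] != running[0]) {
            throw new IllegalArgumentException("Configuration file targets major version "
                    + file[0] + ".x, but this is Infinispan " + runningVersion);
        }
        if (file[1] > running[1]) {
            throw new IllegalArgumentException("Configuration file targets " + fileVersion
                    + ", which is newer than this Infinispan " + runningVersion + " release");
        }
        // Same major version, equal or older minor version: accepted.
    }

    private static int[] parse(String version) {
        String[] parts = version.split("\\.");
        return new int[] { Integer.parseInt(parts[0]), Integer.parseInt(parts[1]) };
    }
}

Under these rules, checkCompatibility("4.2", "4.1") passes, while checkCompatibility("4.1", "4.2") and checkCompatibility("5.0", "4.2") fail with an explicit message.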
[JBoss JIRA] Created: (ISPN-658) DistributionManager not considerate of cache state changes
by Paul Ferraro (JIRA)
DistributionManager not considerate of cache state changes
----------------------------------------------------------
Key: ISPN-658
URL: https://jira.jboss.org/browse/ISPN-658
Project: Infinispan
Issue Type: Bug
Components: Distributed Cache
Affects Versions: 4.2.0.ALPHA2
Reporter: Paul Ferraro
Assignee: Manik Surtani
Consider a cache manager with 2 caches in DIST mode (C1 and C2) deployed on 2 nodes (N1 and N2).
Currently, the DistributionManager does not properly handle the following scenarios:
1. Stop C1 on N1. This ought to trigger a rehash for the C1 cache; currently, rehashing is only triggered via a view change. Failing to rehash when a cache stops can inadvertently cause data loss if all backups of a given cache entry have stopped.
2. A new DIST mode cache, C3, is started on N2. If N1 is the coordinator, the join request sent to N1 will get stuck in an infinite loop, since the cache manager on N1 does not contain a C3 cache.
3. Less critically, a new node, N3, is started. It does not yet have a C1 or C2 cache, though its cache manager is started. This prematurely triggers a rehash of C1 and C2, even though there are no new cache instances to consider.
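For the first scenario, the missing trigger is something that reacts to a cache stopping rather than to a view change. The listener below is only a hedged sketch using Infinispan's public notification API to show where such a hook could live; how the rehash itself would be invoked is left open:

import org.infinispan.manager.EmbeddedCacheManager;
import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachemanagerlistener.annotation.CacheStopped;
import org.infinispan.notifications.cachemanagerlistener.event.CacheStoppedEvent;

@Listener
public class CacheStopObserver {

    @CacheStopped
    public void onCacheStopped(CacheStoppedEvent event) {
        // Scenario 1: stopping C1 on N1 should be treated like a topology change
        // for that cache, so its entries can be rehashed to the remaining owners.
        System.out.println("Cache " + event.getCacheName()
                + " stopped; a rehash for this cache should be triggered here");
    }

    public static void register(EmbeddedCacheManager cacheManager) {
        cacheManager.addListener(new CacheStopObserver());
    }
}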
[JBoss JIRA] Created: (ISPN-434) Creating a new distributed cache on a non-coordinator node causes rehashing to hang
by Manik Surtani (JIRA)
Creating a new distributed cache on a non-coordinator node causes rehashing to hang
-----------------------------------------------------------------------------------
Key: ISPN-434
URL: https://jira.jboss.org/jira/browse/ISPN-434
Project: Infinispan
Issue Type: Bug
Components: Distributed Cache
Affects Versions: 4.1.0.ALPHA3
Reporter: Manik Surtani
Assignee: Manik Surtani
Fix For: 4.1.0.CR1, 4.1.0.Final
If a cluster is already formed (perhaps due to another cache instance being started, such as a replicated one) and subsequently a distributed cache is created and started, first on a non-coordinator node, the startup sequence will hang until the cache is also started on the coordinator.
E.g.,
1. C1 (coord) starts replicated cache
2. C2 starts replicated cache
3. C2 starts distributed cache
4. C1 starts distributed cache
Step 3 will hang until step 4 completes. So unless steps 3 and 4 happen concurrently - in different threads, or on different servers - the rehash will hang.
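A rough reproduction sketch against the 4.x programmatic API (configuration details abbreviated; exact calls may vary slightly per release). Run sequentially on a single thread it simply blocks at step 3, which is the hang described above:

import org.infinispan.Cache;
import org.infinispan.config.Configuration;
import org.infinispan.config.GlobalConfiguration;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class NonCoordinatorDistStart {

    static EmbeddedCacheManager startNode() {
        // Clustered cache manager whose default cache is replicated, so the
        // cluster forms before any distributed cache exists.
        Configuration replicated = new Configuration();
        replicated.setCacheMode(Configuration.CacheMode.REPL_SYNC);
        EmbeddedCacheManager cm =
                new DefaultCacheManager(GlobalConfiguration.getClusteredDefault(), replicated);
        cm.getCache(); // steps 1 and 2: start the replicated cache on each node

        Configuration dist = new Configuration();
        dist.setCacheMode(Configuration.CacheMode.DIST_SYNC);
        cm.defineConfiguration("dist", dist);
        return cm;
    }

    public static void main(String[] args) {
        EmbeddedCacheManager c1 = startNode(); // becomes coordinator
        EmbeddedCacheManager c2 = startNode();

        // Step 3: starting the distributed cache on the non-coordinator first
        // blocks here until...
        Cache<Object, Object> distOnC2 = c2.getCache("dist");

        // ...step 4: the coordinator also starts it.
        Cache<Object, Object> distOnC1 = c1.getCache("dist");
    }
}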
[JBoss JIRA] Created: (ISPN-448) Consider all topology cache updates to be done by coordinator in Hot Rod
by Galder Zamarreno (JIRA)
Consider all topology cache updates to be done by coordinator in Hot Rod
------------------------------------------------------------------------
Key: ISPN-448
URL: https://jira.jboss.org/browse/ISPN-448
Project: Infinispan
Issue Type: Task
Components: Cache Server
Reporter: Galder Zamarreno
Assignee: Galder Zamarreno
Fix For: 5.0.0.BETA1
Based on the discussion below, consider having all topology cache updates done by the coordinator, to avoid concurrency issues when updating it.
> Looks good.
> What is causing this unsuccessful add? If it is caused by timeouts due
> to multiple caches operating on the same key an alternative would be
> to only perform the operation on the coordinator and rest of the
> members to have node added listeners ...
Currently, each node is responsible for adding itself to the topology view when it starts and for removing itself when it stops. Apart from this, there's a crashed-member listener running only on the coordinator that detects whether any member left without updating the topology view. Your suggestion to have the coordinator control it all seems like it could work and would get around potential timeouts.
I'll create a JIRA to investigate this but won't do it for CR1, since I don't expect this to be a major issue. The metadata size is small and it's not constantly updated.
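A hedged sketch of the suggested approach using Infinispan's public listener API: only the coordinator reacts to view changes and rewrites the topology view, so there is a single writer and concurrent updates cannot collide. The key and value layout below are assumptions for illustration, not the actual Hot Rod topology cache structure:

import java.util.List;

import org.infinispan.Cache;
import org.infinispan.manager.EmbeddedCacheManager;
import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachemanagerlistener.annotation.ViewChanged;
import org.infinispan.notifications.cachemanagerlistener.event.ViewChangedEvent;
import org.infinispan.remoting.transport.Address;

@Listener
public class CoordinatorTopologyUpdater {

    private final EmbeddedCacheManager cacheManager;
    // Hypothetical stand-in for the Hot Rod topology cache
    private final Cache<String, List<Address>> topologyCache;

    public CoordinatorTopologyUpdater(EmbeddedCacheManager cacheManager,
                                      Cache<String, List<Address>> topologyCache) {
        this.cacheManager = cacheManager;
        this.topologyCache = topologyCache;
    }

    @ViewChanged
    public void onViewChange(ViewChangedEvent event) {
        // Joins, leaves and crashes are all folded into a single writer:
        // only the coordinator touches the topology view.
        if (cacheManager.isCoordinator()) {
            topologyCache.put("view", event.getNewMembers());
        }
    }
}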
[JBoss JIRA] Created: (ISPN-302) Enable templated values for manually adding instances via jopr
by Galder Zamarreno (JIRA)
Enable templated values for manually adding instances via jopr
--------------------------------------------------------------
Key: ISPN-302
URL: https://jira.jboss.org/jira/browse/ISPN-302
Project: Infinispan
Issue Type: Feature Request
Components: JMX, reporting and management
Reporter: Galder Zamarreno
Assignee: Galder Zamarreno
Fix For: 4.1.0.BETA1
Enable templated values for manually adding instances via jopr
>> For manually importing - you should in the plugin descriptor put the JMX-remoting url as
>> default -- perhaps with the port as XXX, so the user does not have to copy & paste from an
>> external location, but only click in the text field and replace XXX with the real port.
>> Same for the ObjectName of the cache manager.
>
> I did that. I told you it did not work and your reply was that it was fragile and you didn't look into it further...
You/we were talking about c:template - which would allow having several templates -- see the jmx-plugin,
e.g. http://git.fedorahosted.org/git/rhq/rht.git?p=rhq/rhq.git;a=blob;f=module...
from line 47 on.
What you can do, and which works, is:
<c:simple-property name="v1Community" type="string" default="public"/>
Here the default value 'public' is shown to the user for this property.
Sorry that I was confusing things.
Heiko