[JBoss JIRA] Created: (ISPN-1103) Soft schema-based storage
by Manik Surtani (JIRA)
Soft schema-based storage
-------------------------
Key: ISPN-1103
URL: https://issues.jboss.org/browse/ISPN-1103
Project: Infinispan
Issue Type: Feature Request
Components: Core API
Reporter: Manik Surtani
Assignee: Manik Surtani
Fix For: 5.1.0.BETA1, 5.1.0.Final
This JIRA is about storing metadata alongside values. Perhaps encapsulating values as SchematicValues, which could be described as:
{code}
class SchematicValue {
   String jsonMetadata;
   String jsonObject;
}
{code}
Metadata would allow for a few interesting features:
* Extraction of lifespan and timestamp data when manipulated over a remote protocol (REST, HotRod, etc.)
* Content type for REST responses
* Timestamps for REST headers, which will affect HTTP content caches
* Validation information (may not be processed by Infinispan, but can be used by client libs)
* Classloader/marshaller/classdef version info
* General structure of the information stored
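A minimal sketch of the envelope described above, using only the two fields named in the issue (any accessor names and the demo metadata are illustrative assumptions, not a proposed API):

```java
public class SchematicValue {
    // Both fields hold plain JSON strings, as sketched in the issue.
    private final String jsonMetadata;
    private final String jsonObject;

    public SchematicValue(String jsonMetadata, String jsonObject) {
        this.jsonMetadata = jsonMetadata;
        this.jsonObject = jsonObject;
    }

    public String getJsonMetadata() { return jsonMetadata; }
    public String getJsonObject() { return jsonObject; }

    public static void main(String[] args) {
        // A REST endpoint could read the content type and lifespan from the
        // metadata without having to parse or deserialize the value itself.
        SchematicValue v = new SchematicValue(
                "{\"contentType\":\"application/json\",\"lifespan\":60000}",
                "{\"name\":\"example\"}");
        System.out.println(v.getJsonMetadata());
    }
}
```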
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
12 years, 1 month
[JBoss JIRA] (ISPN-2311) JDBC store configuration builder is not entirely fluent
by Tristan Tarrant (JIRA)
Tristan Tarrant created ISPN-2311:
-------------------------------------
Summary: JDBC store configuration builder is not entirely fluent
Key: ISPN-2311
URL: https://issues.jboss.org/browse/ISPN-2311
Project: Infinispan
Issue Type: Bug
Components: Configuration
Reporter: Tristan Tarrant
Assignee: Tristan Tarrant
Priority: Minor
Fix For: 5.2.0.Final
Some items in the JDBC configuration builder API (e.g. TableManipulationConfiguration) are not completely fluent: they interrupt the fluency of the JDBC CacheStore builder chain.
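The usual fix for a nested builder that dead-ends the chain is to have it keep a reference to its parent and expose a method that hands control back. A sketch with hypothetical builder names (not the actual Infinispan API):

```java
public class FluentJdbcExample {
    // Hypothetical names for illustration only.
    static class JdbcStoreBuilder {
        final TableBuilder table = new TableBuilder(this);
        String tableNamePrefix;
        boolean createOnStart;

        TableBuilder table() { return table; }

        JdbcStoreBuilder createOnStart(boolean b) {
            this.createOnStart = b;
            return this;  // stay on the store builder
        }
    }

    static class TableBuilder {
        private final JdbcStoreBuilder parent;
        TableBuilder(JdbcStoreBuilder parent) { this.parent = parent; }

        TableBuilder tableNamePrefix(String prefix) {
            parent.tableNamePrefix = prefix;
            return this;
        }

        // The key to fluency: hand control back to the enclosing builder,
        // so the chain never dead-ends inside the nested configuration.
        JdbcStoreBuilder store() { return parent; }
    }

    public static void main(String[] args) {
        JdbcStoreBuilder store = new JdbcStoreBuilder()
                .table().tableNamePrefix("ISPN").store()
                .createOnStart(true);
        System.out.println(store.tableNamePrefix + " " + store.createOnStart);
    }
}
```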
[JBoss JIRA] (ISPN-2337) Query iterators are buggy
by Marko Lukša (JIRA)
Marko Lukša created ISPN-2337:
---------------------------------
Summary: Query iterators are buggy
Key: ISPN-2337
URL: https://issues.jboss.org/browse/ISPN-2337
Project: Infinispan
Issue Type: Bug
Components: Querying
Affects Versions: 5.2.0.Alpha4
Reporter: Marko Lukša
Assignee: Marko Lukša
I have found multiple problems with the iterators returned by CacheQueryImpl.
- using LazyIterator with fetchSize fails with ArrayIndexOutOfBoundsException
- calling previous() after next() doesn't return the expected element
- calling nextIndex() on a new iterator should return 0, not 1 (the same also applies when calling nextIndex() after calling first())
- if fetchSize is greater than 1, LazyIterator fills the whole buffer on every invocation of .previous()
- nextIndex() and previousIndex() throw NoSuchElementException, which violates the contract of the ListIterator interface
- next() and previous() throw ArrayIndexOutOfBoundsException when they should be throwing NoSuchElementException
- ...
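For reference, the contract the list above appeals to can be demonstrated with the JDK's own ListIterator: a fresh iterator reports nextIndex() of 0 and previousIndex() of -1 without throwing, previous() after next() returns the same element, and running past either end raises NoSuchElementException:

```java
import java.util.Arrays;
import java.util.List;
import java.util.ListIterator;
import java.util.NoSuchElementException;

public class ListIteratorContract {
    public static void main(String[] args) {
        List<String> data = Arrays.asList("a", "b", "c");
        ListIterator<String> it = data.listIterator();

        // A fresh iterator sits before the first element:
        // nextIndex() is 0 and previousIndex() is -1, no exception thrown.
        System.out.println(it.nextIndex());      // 0
        System.out.println(it.previousIndex());  // -1

        // previous() immediately after next() returns the same element.
        String first = it.next();
        System.out.println(first.equals(it.previous()));  // true

        // Running past the end raises NoSuchElementException,
        // never ArrayIndexOutOfBoundsException.
        ListIterator<String> end = data.listIterator(data.size());
        try {
            end.next();
        } catch (NoSuchElementException e) {
            System.out.println("NoSuchElementException");
        }
    }
}
```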
[JBoss JIRA] Created: (ISPN-939) Index corruption when remote node dies during commit
by Tristan Tarrant (JIRA)
Index corruption when remote node dies during commit
----------------------------------------------------
Key: ISPN-939
URL: https://issues.jboss.org/browse/ISPN-939
Project: Infinispan
Issue Type: Bug
Components: Lucene Directory
Affects Versions: 4.2.1.CR2
Reporter: Tristan Tarrant
Assignee: Sanne Grinovero
Using a scenario similar to the one described in ISPN-909:
Infinispan: 3 caches: lockCache (replicated, volatile, no eviction), metadataCache (replicated, persisted, no eviction), dataCache (distributed, persisted, eviction, hash numOwners=2)
Node 1: coordinator, IndexWriter open constantly and writing a stream of documents, committing after each one
Node 2: opens a read-only IndexReader to perform queries, using reopen to keep in sync with the updates coming from node 1
If we "kill -9" node 2 (to simulate a crash), we get a SuspectException in node 1 during the pre-commit phase (within IndexWriter.commit()). Catching the Throwable, we then close() the writer, but from then on we get "Read past EOF" errors when trying to access the index (with both readers and writers).
[JBoss JIRA] (ISPN-2356) xsite replication: only the first site replication error is reported
by Bela Ban (JIRA)
Bela Ban created ISPN-2356:
------------------------------
Summary: xsite replication: only the first site replication error is reported
Key: ISPN-2356
URL: https://issues.jboss.org/browse/ISPN-2356
Project: Infinispan
Issue Type: Bug
Components: Cross-Site Replication
Affects Versions: 5.2.0.Alpha4
Reporter: Bela Ban
Assignee: Mircea Markus
Priority: Minor
When we have a couple of backup sites (e.g. NYC and SFO) and none of them is running, a sync replication with backupFailurePolicy=FAIL will throw an exception on the first failure, e.g. for SFO. However, it will not report the failure for NYC.
Not sure if this is crucial, but I suspect somewhere someone counts the failures per site, and in this case NYC would not show any failures until SFO is taken offline.
The code is in BackupSenderImpl:
{code}
public void processResponses(BackupResponse backupResponse, VisitableCommand command, Transaction transaction) throws Throwable {
   backupResponse.waitForBackupToFinish();
   SitesConfiguration sitesConfiguration = config.sites();
   Map<String, Throwable> failures = backupResponse.getFailedBackups();
   for (Map.Entry<String, Throwable> failure : failures.entrySet()) {
      BackupFailurePolicy policy = sitesConfiguration.getFailurePolicy(failure.getKey());
      if (policy == BackupFailurePolicy.CUSTOM) {
         CustomFailurePolicy customFailurePolicy = siteFailurePolicy.get(failure.getKey());
         command.acceptVisitor(null, new CustomBackupPolicyInvoker(failure.getKey(), customFailurePolicy, transaction));
      }
      if (policy == BackupFailurePolicy.WARN) {
         log.warnXsiteBackupFailed(cacheName, failure.getKey(), failure.getValue());
      } else if (policy == BackupFailurePolicy.FAIL) {
         throw new BackupFailureException(failure.getValue(), failure.getKey(), cacheName);
      }
   }
}
{code}
Iterating through the failure map, we throw a BackupFailureException on the *first* failure. I suggest either collecting all exceptions (also when invoking the custom failure policy!) and throwing them as a single new exception that lists all of them, or simply logging the situation as an error.
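The collect-then-throw approach can be sketched as follows. The class and method names here are hypothetical illustrations, not the actual Infinispan fix:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical aggregate exception: accumulates one cause per failed site
// so the caller sees every failure, not just the first one iterated.
public class AggregatedBackupFailure extends RuntimeException {
    private final Map<String, Throwable> failuresBySite = new LinkedHashMap<>();

    public void addFailure(String site, Throwable cause) {
        failuresBySite.put(site, cause);
    }

    public boolean isEmpty() {
        return failuresBySite.isEmpty();
    }

    @Override
    public String getMessage() {
        // Report every failed site in insertion order.
        return "Backup failed for sites: " + failuresBySite.keySet();
    }

    public static void main(String[] args) {
        Map<String, Throwable> failures = new LinkedHashMap<>();
        failures.put("SFO", new RuntimeException("timeout"));
        failures.put("NYC", new RuntimeException("unreachable"));

        AggregatedBackupFailure aggregate = new AggregatedBackupFailure();
        for (Map.Entry<String, Throwable> f : failures.entrySet()) {
            aggregate.addFailure(f.getKey(), f.getValue());  // collect, don't throw yet
        }
        if (!aggregate.isEmpty()) {
            // Thrown once, after the loop, listing every failed site.
            System.out.println(aggregate.getMessage());
        }
    }
}
```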