[JBoss JIRA] (TEIID-4544) Change Teradata defaults for dependent joins
by Steven Hawkins (JIRA)
[ https://issues.jboss.org/browse/TEIID-4544?page=com.atlassian.jira.plugin... ]
Steven Hawkins resolved TEIID-4544.
-----------------------------------
Resolution: Done
Updated to use a max IN predicate size of 1024 (taken from the Hibernate docs) and a max of 5 dependent IN predicates (from the associated forum post). Also added the class name for the Hibernate dialect to allow dependent join pushdown. If this is backported to 8.12.x, the reference to the Teradata14Dialect will not be valid and must be removed.
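For reference, a minimal sketch (not the actual commit) of how these defaults can be expressed on a Teiid JDBC translator; setMaxInCriteriaSize and setMaxDependentInPredicates are existing ExecutionFactory translator properties, while the subclass name here is only illustrative:
{code:java}
import org.teiid.translator.jdbc.JDBCExecutionFactory;

// Illustrative subclass; the real change lives in Teiid's Teradata translator.
public class TeradataDefaultsExecutionFactory extends JDBCExecutionFactory {
    public TeradataDefaultsExecutionFactory() {
        setMaxInCriteriaSize(1024);      // max values per IN predicate, from the Hibernate docs
        setMaxDependentInPredicates(5);  // max IN predicates per dependent query, from the forum post
    }
}
{code}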
> Change Teradata defaults for dependent joins
> --------------------------------------------
>
> Key: TEIID-4544
> URL: https://issues.jboss.org/browse/TEIID-4544
> Project: Teiid
> Issue Type: Enhancement
> Components: JDBC Connector
> Reporter: Steven Hawkins
> Assignee: Steven Hawkins
> Fix For: 9.2
>
>
> Teradata does not document hard limits on IN predicates, but uses the wording:
> "Queries that contain thousands of arguments within an IN or NOT IN clause sometimes fail."
> It has been reported that the current defaults of 50 IN predicates with 1000 values each can cause such a failure.
[JBoss JIRA] (TEIID-4544) Change Teradata defaults for dependent joins
by Steven Hawkins (JIRA)
Steven Hawkins created TEIID-4544:
-------------------------------------
Summary: Change Teradata defaults for dependent joins
Key: TEIID-4544
URL: https://issues.jboss.org/browse/TEIID-4544
Project: Teiid
Issue Type: Enhancement
Components: JDBC Connector
Reporter: Steven Hawkins
Assignee: Steven Hawkins
Fix For: 9.2
Teradata does not document hard limits on IN predicates, but uses the wording:
"Queries that contain thousands of arguments within an IN or NOT IN clause sometimes fail."
It has been reported that the current defaults of 50 IN predicates with 1000 values each can cause such a failure.
[JBoss JIRA] (TEIID-4526) Integrate with Debezium for CDC for maintaining materialized views
by Randall Hauch (JIRA)
[ https://issues.jboss.org/browse/TEIID-4526?page=com.atlassian.jira.plugin... ]
Randall Hauch commented on TEIID-4526:
--------------------------------------
{quote}
Our problems with CDC start with the metadata. Since we allow for hand editing and only capture a simple quoted name in source, we first have a matching problem from any event source to our source metadata.
{quote}
We're soon going to be adding the database name and table name to the {{source}} structure in each event, which will hopefully make this easier than inferring the source table name from other information. It may still be somewhat difficult depending upon what's in the Teiid metadata.
{quote}
* There can be type issues, or even considerations such as masking or other effects. Inferring values directly from the change event can be an issue, as the CDC layer is broadcasting raw values, not what you would fetch over JDBC.
{quote}
Yes, Debezium is limited in the way values can be represented, so values will have to be converted. We have a goal that the different connectors use a consistent type mapping, but that won't eliminate the complexity here.
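To make the conversion concrete, here is a minimal sketch of mapping one Debezium-encoded value back to a JDBC-style value; it assumes the field uses Debezium's documented io.debezium.time.Date semantic type (an int32 count of days since the epoch), and the helper class name is hypothetical:
{code:java}
import java.time.LocalDate;

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.Struct;

// Hypothetical helper: recover a java.sql.Date from a Debezium change event field.
public final class DebeziumValues {
    public static java.sql.Date toSqlDate(Struct row, String field) {
        Schema schema = row.schema().field(field).schema();
        // Debezium encodes the date as days since epoch, not as the value a JDBC fetch returns.
        if ("io.debezium.time.Date".equals(schema.name())) {
            return java.sql.Date.valueOf(LocalDate.ofEpochDay(row.getInt32(field)));
        }
        throw new IllegalArgumentException("Unexpected schema for field " + field);
    }
}
{code}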
One other point: I've suggested that whoever works on this first try [embedding connectors|http://debezium.io/docs/embedded/]. It eliminates a lot of infrastructure complexity (e.g., Kafka, connections, topic mappings) so that you can start out focusing on the basics of dealing with events.
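A minimal sketch of the embedded approach, assuming the MySQL connector and file-based offset storage; the connection settings are placeholders, but the EmbeddedEngine builder pattern follows the Debezium embedded docs linked above:
{code:java}
import java.util.concurrent.Executors;

import io.debezium.config.Configuration;
import io.debezium.embedded.EmbeddedEngine;

public class ChangeEventLoop {
    public static void main(String[] args) {
        Configuration config = Configuration.create()
            .with("name", "mat-view-engine")
            .with("connector.class", "io.debezium.connector.mysql.MySqlConnector")
            .with("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore")
            .with("offset.storage.file.filename", "/tmp/offsets.dat")
            .with("database.hostname", "localhost")   // placeholder connection settings
            .with("database.port", "3306")
            .with("database.user", "debezium")
            .with("database.password", "dbz")
            .with("database.server.id", "184054")
            .with("database.server.name", "my-server")
            .build();

        EmbeddedEngine engine = EmbeddedEngine.create()
            .using(config)
            .notifying(record -> {
                // Each SourceRecord is one row-level change event; a materialized view
                // maintainer would translate it into an update against the view here.
                System.out.println(record);
            })
            .build();

        // The engine polls the connector until stopped.
        Executors.newSingleThreadExecutor().execute(engine);
    }
}
{code}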
> Integrate with Debezium for CDC for maintaining materialized views
> ------------------------------------------------------------------
>
> Key: TEIID-4526
> URL: https://issues.jboss.org/browse/TEIID-4526
> Project: Teiid
> Issue Type: Feature Request
> Components: Server
> Affects Versions: 9.2
> Reporter: Van Halbert
> Assignee: Steven Hawkins
> Priority: Critical
>
> Integrate with Debezium so that Teiid will be able to consume and react to the row-level change events and do something interesting with them, such as update the materialized view(s).
[JBoss JIRA] (TEIID-4543) Rewrite parse/format of standard formats to cast instead
by Steven Hawkins (JIRA)
Steven Hawkins created TEIID-4543:
-------------------------------------
Summary: Rewrite parse/format of standard formats to cast instead
Key: TEIID-4543
URL: https://issues.jboss.org/browse/TEIID-4543
Project: Teiid
Issue Type: Enhancement
Components: Query Engine
Reporter: Steven Hawkins
Assignee: Steven Hawkins
Fix For: 9.2
Since parse/format functions have limited pushdown support, it would be better to treat the standard formats as casts instead. For example, parsedate(expr, 'yyyy-MM-dd') uses the same pattern as the engine's string-to-date conversion, so it can be rewritten as cast(expr as date).
[JBoss JIRA] (TEIID-4284) Implement Salesforce Bulk API for SELECTS to Salesforce Connector
by sameer P (JIRA)
[ https://issues.jboss.org/browse/TEIID-4284?page=com.atlassian.jira.plugin... ]
sameer P commented on TEIID-4284:
---------------------------------
Hi [~shawkins], as I tested this on my Salesforce account, it looks like there is a critical problem when many batches (more than 4) get created because the number of rows in the table is huge (much larger than the maximum chunk size).
When I run a simple select * against such a table multiple times in a row (sometimes even on the first query), I get the following error:
{code:java}
Error: TEIID30504 Remote org.teiid.core.TeiidProcessingException: TEIID30504 TEIID30020 Processing exception for request IR8UKFQnuXi4.17 'TEIID30504 dssSOU: null'. Originally TeiidProcessingException SalesforceConnectionImpl.java:570.
SQLState: TEIID30504
ErrorCode: 0
{code}
> Implement Salesforce Bulk API for SELECTS to Salesforce Connector
> -----------------------------------------------------------------
>
> Key: TEIID-4284
> URL: https://issues.jboss.org/browse/TEIID-4284
> Project: Teiid
> Issue Type: Feature Request
> Components: Salesforce Connector
> Affects Versions: 8.13.5
> Environment: With Salesforce datasource
> Reporter: sameer P
> Assignee: Steven Hawkins
> Fix For: 9.2
>
>
> There is a huge amount of data (many GBs, around 1.5 million rows) in Salesforce, and doing a simple select * on it fails with QUERY_TIMEOUT.
> The Salesforce team suggested trying the Bulk API for selects with PK chunking, as described in https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asy... .
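For reference, a minimal sketch (not Teiid's connector code) of creating a Bulk API query job with PK chunking enabled; the Sforce-Enable-PKChunking header and the /services/async job endpoint are from the Salesforce Bulk API docs, while the session id, instance URL, API version, and object name are placeholders:
{code:java}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class BulkQueryJob {
    public static void main(String[] args) throws Exception {
        String instanceUrl = "https://na1.salesforce.com"; // placeholder instance
        String sessionId = "SESSION_ID";                    // placeholder session id

        URL url = new URL(instanceUrl + "/services/async/37.0/job");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("X-SFDC-Session", sessionId);
        conn.setRequestProperty("Content-Type", "application/xml");
        // Ask Salesforce to split the query into batches of at most 100k rows by PK range.
        conn.setRequestProperty("Sforce-Enable-PKChunking", "chunkSize=100000");
        conn.setDoOutput(true);

        String jobXml = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
            + "<jobInfo xmlns=\"http://www.force.com/2009/06/asyncapi/dataload\">"
            + "<operation>query</operation>"
            + "<object>Account</object>"       // placeholder object
            + "<contentType>CSV</contentType>"
            + "</jobInfo>";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(jobXml.getBytes(StandardCharsets.UTF_8));
        }
        // 201 Created means the job was accepted; one batch is then created per PK chunk.
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
{code}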