[
https://issues.jboss.org/browse/TEIID-4284?page=com.atlassian.jira.plugin...
]
Steven Hawkins resolved TEIID-4284.
-----------------------------------
Resolution: Done
For the first phase of this implementation an explicit source hint is required:
select /*+ sh salesforce:'bulk' */ name, id ... from Account
Here the source name is salesforce.
We'll do a minimal amount of validation, checking that it's not a relationship query,
doesn't use SUM/COUNT aggregates, etc. The default 100,000 chunk size will be used.
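For illustration, a minimal sketch of that validation step; the QueryShape class and
its fields are hypothetical stand-ins for whatever the connector can inspect on the
pushed-down query, not Teiid's actual internals:

    // Hypothetical description of the query's shape, not Teiid's actual model.
    final class QueryShape {
        final boolean relationshipQuery; // e.g. a parent-to-child join
        final boolean usesSumOrCount;    // SUM/COUNT aggregates present

        QueryShape(boolean relationshipQuery, boolean usesSumOrCount) {
            this.relationshipQuery = relationshipQuery;
            this.usesSumOrCount = usesSumOrCount;
        }
    }

    final class BulkHintValidation {
        static final int DEFAULT_CHUNK_SIZE = 100_000;

        // Reject queries that the bulk path cannot handle.
        static void validate(QueryShape q) {
            if (q.relationshipQuery) {
                throw new IllegalArgumentException(
                    "bulk source hint: relationship queries are not supported");
            }
            if (q.usesSumOrCount) {
                throw new IllegalArgumentException(
                    "bulk source hint: SUM/COUNT aggregates are not supported");
            }
        }
    }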
A follow-on to this, if needed, would be to detect when bulk/PK chunking should be
used - such as against the tables listed in
https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asy...
when the cardinality of the table is greater than 10,000,000 and there are no
seemingly selective filtering predicates. A sketch of that heuristic follows.
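The selectivity estimate and its 0.5 cutoff below are assumptions; only the
10,000,000 cardinality threshold comes from the description above:

    final class PkChunkingHeuristic {
        static final long CARDINALITY_THRESHOLD = 10_000_000L;

        // selectivity is the estimated fraction of rows the predicates keep;
        // a value near 1 means the predicates filter out almost nothing.
        static boolean shouldUsePkChunking(long tableCardinality, double selectivity) {
            return tableCardinality > CARDINALITY_THRESHOLD
                    && selectivity > 0.5; // assumed cutoff
        }
    }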
Also, I did not yet convert the insert logic to CSV, but that is possible as well.
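For reference, a CSV insert against the Bulk API could look roughly like the
following, using the Salesforce WSC async client; connection setup is omitted and
the object and fields are illustrative:

    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;

    import com.sforce.async.BulkConnection;
    import com.sforce.async.ContentType;
    import com.sforce.async.JobInfo;
    import com.sforce.async.OperationEnum;

    public class BulkCsvInsert {
        // bulkConnection is assumed to be an already authenticated connection.
        static void insertAsCsv(BulkConnection bulkConnection) throws Exception {
            JobInfo job = new JobInfo();
            job.setObject("Account");
            job.setOperation(OperationEnum.insert);
            job.setContentType(ContentType.CSV);
            job = bulkConnection.createJob(job);

            // One header row, then one record per line.
            String csv = "Name,Industry\n\"Acme\",\"Manufacturing\"\n";
            bulkConnection.createBatchFromStream(job,
                    new ByteArrayInputStream(csv.getBytes(StandardCharsets.UTF_8)));
            bulkConnection.closeJob(job.getId());
        }
    }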
Implement Salesforce Bulk API for SELECTS to Salesforce Connector
-----------------------------------------------------------------
Key: TEIID-4284
URL:
https://issues.jboss.org/browse/TEIID-4284
Project: Teiid
Issue Type: Feature Request
Components: Salesforce Connector
Affects Versions: 8.13.5
Environment: With Salesforce datasource
Reporter: sameer P
Assignee: Steven Hawkins
Fix For: 9.1
There is a huge table in Salesforce (many GBs, around 1.5 million rows), and a
simple select * on it fails with QUERY_TIMEOUT.
The Salesforce support team suggested trying the Bulk API for selects with PK
chunking, as described in
https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asy...
.
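For reference, PK chunking is requested on a Bulk API query job via the
Sforce-Enable-PKChunking request header. A rough sketch with the Salesforce WSC
async client; connection setup is omitted and the object and SOQL are illustrative:

    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;

    import com.sforce.async.BatchInfo;
    import com.sforce.async.BulkConnection;
    import com.sforce.async.ContentType;
    import com.sforce.async.JobInfo;
    import com.sforce.async.OperationEnum;

    public class ChunkedBulkQuery {
        // bulkConnection is assumed to be an already authenticated connection.
        static BatchInfo submit(BulkConnection bulkConnection) throws Exception {
            // Ask the server to split the extract into PK ranges of 100,000 rows.
            bulkConnection.addHeader("Sforce-Enable-PKChunking", "chunkSize=100000");

            JobInfo job = new JobInfo();
            job.setObject("Account");
            job.setOperation(OperationEnum.query);
            job.setContentType(ContentType.CSV);
            job = bulkConnection.createJob(job);

            // For query jobs, the SOQL text is the batch payload.
            String soql = "SELECT Id, Name FROM Account";
            return bulkConnection.createBatchFromStream(job,
                    new ByteArrayInputStream(soql.getBytes(StandardCharsets.UTF_8)));
        }
    }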