Steve Hawkins resolved TEIID-892.
---------------------------------
Resolution: Done
Added the supportsInsertWithIterator capability to push the projection down into the insert
values as a memory-safe iterator. The source can then determine how many of the values to
batch and control the atomic nature of the insert; an autowrap transaction in detect mode
will not be needed. This can be significantly better than sending multiple bulk inserts,
which are also limited to the processor batch size.
The JDBC connector is the only one that currently supports this capability; it has a new
translator property, maxPreparedInsertBatchSize, which defaults to 2048.
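The batching contract this implies can be sketched as follows. This is a simplified
illustration, not the actual Teiid translator source: the flush helper stands in for
JDBC's PreparedStatement.executeBatch(), and the class and method names are invented
for the example.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch only (NOT the Teiid translator code): rows are pulled from a
// memory-safe iterator and flushed to the source whenever the configured
// batch size is reached, instead of materializing all values up front.
public class BatchedInsertSketch {
    static int flushCount = 0; // how many times a batch was sent

    static int insertAll(Iterator<Object[]> rows, int maxBatchSize) {
        List<Object[]> batch = new ArrayList<>();
        int total = 0;
        while (rows.hasNext()) {
            batch.add(rows.next()); // stands in for PreparedStatement.addBatch()
            total++;
            if (batch.size() == maxBatchSize) {
                flush(batch);       // stands in for PreparedStatement.executeBatch()
            }
        }
        if (!batch.isEmpty()) {
            flush(batch);           // send the final partial batch
        }
        return total;
    }

    static void flush(List<Object[]> batch) {
        flushCount++;
        batch.clear();
    }
}
```

With the default maxPreparedInsertBatchSize of 2048, inserting 5000 rows would produce
three round trips (2048 + 2048 + 904) rather than one per processor batch.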
The downside of this solution is that there are now four insert modes that the optimizer
needs to keep track of: iterator, bulk, batched update, and single row.
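As a rough illustration of that bookkeeping, the four modes and a plausible selection
order might look like the sketch below. The mode names come from the issue text, but the
selection logic is an illustrative assumption, not Teiid's actual optimizer code.

```java
// Hypothetical sketch: the mode names mirror the issue text; the capability
// flags and the precedence order below are assumptions for illustration only.
public class InsertModeSketch {
    enum InsertMode { ITERATOR, BULK, BATCHED_UPDATE, SINGLE_ROW }

    static InsertMode choose(boolean supportsInsertWithIterator,
                             boolean supportsBulkInsert,
                             boolean supportsBatchedUpdates,
                             int rowCount) {
        if (rowCount <= 1) {
            return InsertMode.SINGLE_ROW;    // nothing to batch
        }
        if (supportsInsertWithIterator) {
            return InsertMode.ITERATOR;      // source pulls values lazily
        }
        if (supportsBulkInsert) {
            return InsertMode.BULK;          // limited to the processor batch size
        }
        if (supportsBatchedUpdates) {
            return InsertMode.BATCHED_UPDATE;
        }
        return InsertMode.SINGLE_ROW;        // fall back to row-at-a-time inserts
    }
}
```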
Add a capability to control bulk insert batch size
--------------------------------------------------
Key: TEIID-892
URL:
https://jira.jboss.org/browse/TEIID-892
Project: Teiid
Issue Type: Feature Request
Components: Connector API, Query Engine
Affects Versions: 7.0
Reporter: Steven Hawkins
Assignee: Steven Hawkins
Priority: Minor
Fix For: 7.1
Based upon a recent customer conversation, our default batching strategy to the connector
may not be the most performant for large datasets. In general, our current approach limits
batches to a maximum of the processor batch size (default 2000).
We already have a connector batch size property, but it applies across all connectors and
is mostly irrelevant, since the source fetch size will have a larger impact on performance
and the result batches are resized to the processor batch size anyway. The only time
connector batch size really matters is if we bring back "remote" connectors.
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
https://jira.jboss.org/secure/Administrators.jspa
-
For more information on JIRA, see:
http://www.atlassian.com/software/jira