[
https://issues.jboss.org/browse/TEIID-2504?page=com.atlassian.jira.plugin...
]
Steven Hawkins resolved TEIID-2504.
-----------------------------------
Resolution: Done
Most of the apparent performance gain stems from the prefetch, which required changes to
the way results are written/read so that full result deserialization can be performed
after the prefetch result is sent (a more simplistic approach, such as was employed with
MMX, would still largely put the deserialization cost on the request/response critical
path).
The changes also allow for a limited multi-batch fetch. Due to the abstractions
surrounding our socket usage, it is not easy to have the server reserve/unreserve memory
given the async nature of the server-to-client writes. So a compromise is to allow
approximately 3 batches (more if the actual memory footprint of the batches is smaller
than the estimate) to be fetched together. This memory usage is essentially unaccounted
for, so extreme amounts of concurrent response processing may require rethinking this
approach - the simplest change being to the notion of the default fetch size, which is
currently still 2048.
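The compromise above can be sketched roughly as follows (hypothetical class, field, and constant names; the real server logic differs): a budget of about three batches' worth of the estimated footprint is allowed per fetch, so smaller-than-estimated batches let more of them through.

```java
public class MultiBatchLimiter {
    static final int TARGET_BATCHES = 3;      // approximate default from the compromise
    private final long estimatedBatchBytes;   // server-side estimate of a batch's footprint
    private long bytesQueued;
    private int batchesQueued;

    public MultiBatchLimiter(long estimatedBatchBytes) {
        this.estimatedBatchBytes = estimatedBatchBytes;
    }

    /** Returns true if another batch of actualBytes may join the current fetch. */
    public boolean offer(long actualBytes) {
        long budget = TARGET_BATCHES * estimatedBatchBytes;
        if (batchesQueued > 0 && bytesQueued + actualBytes > budget) {
            return false; // budget exhausted; send only the batches already queued
        }
        bytesQueued += actualBytes;
        batchesQueued++;
        return true;
    }
}
```

Because the budget is byte-based rather than batch-count-based, batches whose actual footprint is below the estimate allow more than three to be fetched together, matching the behavior described above.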
Improve socket results processing
---------------------------------
Key: TEIID-2504
URL:
https://issues.jboss.org/browse/TEIID-2504
Project: Teiid
Issue Type: Enhancement
Components: JDBC Driver, Server
Reporter: Steven Hawkins
Assignee: Steven Hawkins
Fix For: 8.4
To increase large result transfer performance we can:
1. use a prefetch (which ideally will be performed before full deserialization)
2. allow the client to fetch multiple server batches (although this has quite a few
implementation considerations)
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see:
http://www.atlassian.com/software/jira