[
https://issues.jboss.org/browse/TEIID-2892?page=com.atlassian.jira.plugin...
]
Steven Hawkins commented on TEIID-2892:
---------------------------------------
The choice to use the scroll-insensitive result set was more to support
"skip" and "batching" functionality than the row count, since sending
all the results on the first request is not feasible and there is no easier
way to keep track of the session for the next set of results.
Actually I think the skip functionality is broken. Starting with
d1a8d36696a3ada03cac7009cdd24bb2257cbff9 it doesn't appear that we are positioning the
cursor at the skip location.
Generally then, there is no need for scrolling to support skip and batching, as you
would just manually iterate. The only reason for scrolling is to determine the inline
count and then reposition the cursor. Subsequent requests are fed from the cached or
new result set, not from the original, so there isn't a need for further
repositioning.
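The pattern described above can be sketched as follows. This is an illustrative sketch only: Teiid itself works with JDBC result sets in Java, so Python's sqlite3 module merely stands in for a forward-only cursor, and the table name and row counts are made up for the example. It shows that $skip and batching need no scrollable cursor (manual iteration suffices), while the inline count can be obtained without repositioning by issuing a separate COUNT query.

```python
import sqlite3

# In-memory stand-in for the source database (hypothetical data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event_fact (event_fact_id INTEGER)")
conn.executemany("INSERT INTO event_fact VALUES (?)",
                 [(i,) for i in range(1000)])

skip, batch_size = 100, 256

# Inline count without scrolling to the end and repositioning.
(count,) = conn.execute("SELECT COUNT(*) FROM event_fact").fetchone()

# Forward-only iteration: discard `skip` rows, then read one batch.
cur = conn.execute("SELECT event_fact_id FROM event_fact ORDER BY event_fact_id")
for _ in range(skip):
    cur.fetchone()                     # manual skip, no cursor repositioning
batch = cur.fetchmany(batch_size)      # only this batch is materialized

print(count, batch[0][0], len(batch))
```

The point of the sketch is that neither the skip nor the batch step ever needs to move the cursor backwards, which is what a scroll-insensitive result set would otherwise be providing.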
One option could be that we provide an option to turn on a
"forward-only" result set through either
That should be unnecessary. Much of the distinction that I've made between
forward-only vs. scroll is moot, as when we cache we are also bypassing the flow control
logic.
Steve: I do not think I followed your comment below
I'm saying that much of the apparent performance cost beyond the OData overhead is coming
from the rate at which we let the plan build up data in the output buffer. So I will take
a look at whether we should slow down the rate at which scrolling, cached, or
transactional output buffer results are collected.
OData buffers ALL rows from resultset before returning the first
batch
----------------------------------------------------------------------
Key: TEIID-2892
URL:
https://issues.jboss.org/browse/TEIID-2892
Project: Teiid
Issue Type: Bug
Components: OData
Affects Versions: 8.4.1
Environment: Tested with JBoss DV 6.0.0 GA (enterprise edition) on Apple OS X
10.9.2 and Oracle Java VM 1.7.0_51.
Reporter: Patrick Deenen
Assignee: Steven Hawkins
Attachments: logfiles.zip
OData doesn't batch internally, as opposed to JDBC, which does. For example, when a query
with a large result is executed over JDBC, only the first 2048 rows are physically fetched
from the source database and only the first 200 rows (depending on the client application)
are returned. But when the same query is executed through OData, ALL rows in the result
set are physically fetched by DV and stored in the buffer, even with the default OData
fetch batch size of 256. This makes the OData interface very inefficient for large query
results where one is only interested in the first 256 rows.
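The two access patterns being contrasted can be sketched as follows. Again this is only an illustration, with sqlite3 standing in for the real JDBC source and the batch sizes taken from the description above (256 for the first OData batch, 200,000 total rows); it is not Teiid code.

```python
import sqlite3

# In-memory stand-in for the source database (hypothetical data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event_fact (event_fact_id INTEGER)")
conn.executemany("INSERT INTO event_fact VALUES (?)",
                 [(i,) for i in range(200_000)])

query = "SELECT event_fact_id FROM event_fact"

# JDBC-style behavior: stream one batch at a time; the client sees the
# first rows without the full result ever being materialized at once.
cur = conn.execute(query)
first_batch = cur.fetchmany(256)

# The OData behavior reported in this issue: every row is pulled into a
# buffer before the first batch is returned to the client.
all_rows = conn.execute(query).fetchall()

print(len(first_batch), len(all_rows))
```

With 200,000 rows the second pattern holds the entire result in memory before answering, which is the inefficiency this issue describes.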
Attached are two log files that show the problem.
The Odata query used is:
http://localhost:8080/odata/PMA/EVENT_FACT?$filter=event_fact_id%20ge%207...
Which is identical to the JDBC query used:
select * from event_fact where event_fact_id between 747000000 and 747200000;
In both cases the result contains 200,000 rows.
OData log information analysis (log file 'server start + odata batch 256.log'):
row 4543 - 4657 - Start query
row 4658 - 9030 - Read ALL results from result set and store them in buffer
row 9031 - 9035 - Close DB connection
row 9036 - 14647 - Clean buffers and create response?
row 14648 - 14661 - return first batch and close connection
JDBC log information analysis (log file 'server start + jdbc.log'):
row 4925 - 5112 - Start query
row 5113 - 5166 - Read ONLY the first 2048 results from result set and store them in
buffer and return response
row 5157 - 5214 - Close DB connection
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see:
http://www.atlassian.com/software/jira