[JBoss JIRA] (TEIID-2892) OData buffers ALL rows from resultset before returning the first batch
by Steven Hawkins (JIRA)
[ https://issues.jboss.org/browse/TEIID-2892?page=com.atlassian.jira.plugin... ]
Steven Hawkins updated TEIID-2892:
----------------------------------
Issue Type: Quality Risk (was: Bug)
Changing this issue to a quality risk. We can address it as an enhancement if a specific proposal is made.
> OData buffers ALL rows from resultset before returning the first batch
> ----------------------------------------------------------------------
>
> Key: TEIID-2892
> URL: https://issues.jboss.org/browse/TEIID-2892
> Project: Teiid
> Issue Type: Quality Risk
> Components: OData
> Affects Versions: 8.4.1
> Environment: Tested with JBoss DV 6.0.0.GA (Enterprise Edition) on Apple OS X 10.9.2 and Oracle Java VM 1.7.0_51.
> Reporter: Patrick Deenen
> Assignee: Steven Hawkins
> Attachments: logfiles.zip
>
>
> OData does not batch internally, as opposed to JDBC, which does. For example, when a query with a large result is executed over JDBC, only the first 2048 rows are physically fetched from the source database and only the first 200 rows (depending on the client application) are returned. But when the same query is executed through OData, ALL rows in the result set are physically fetched by DV and stored in the buffer, even with the default OData fetch batch size of 256. This makes the OData interface very inefficient for large query results where one is only interested in the first 256 rows (see the JDBC sketch after the log analysis below).
> Attached are two log files which show the problem.
> The OData query used is:
> http://localhost:8080/odata/PMA/EVENT_FACT?$filter=event_fact_id%20ge%207...
> Which is equivalent to the JDBC query used:
> select * from event_fact where event_fact_id between 747000000 and 747200000;
> In both cases the result contains 200,000 rows.
> ODATA log information analysis (log file ’server start + odata batch 256.log’):
> row 4543 - 4657 - Start query
> row 4658 - 9030 - Read ALL results from result set and store them in buffer
> row 9031 - 9035 - Close DB connection
> row 9036 - 14647 - Clean buffers and create response?
> row 14648 - 14661 - return first batch and close connection
> JDBC log information analysis (log file ’server start + jdbc.log’):
> row 4925 - 5112 - Start query
> row 5113 - 5166 - Read ONLY the first 2048 results from result set and store them in buffer and return response
> row 5157 - 5214 - Close DB connection
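For readers less familiar with the JDBC side of the comparison, here is a minimal sketch of the behaviour described in the report, assuming a hypothetical Teiid JDBC URL, user and password (none of these values come from the report): a fetch size lets the driver pull rows from the source on demand, so a client that stops after 200 rows never forces the remaining rows to be fetched.

// Illustrative sketch only; connection details are hypothetical placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class FetchSizeSketch {
    public static void main(String[] args) throws SQLException {
        try (Connection c = DriverManager.getConnection(
                "jdbc:teiid:PMA@mm://localhost:31000", "user", "password"); // hypothetical VDB/host
             Statement s = c.createStatement()) {
            s.setFetchSize(2048); // ask the driver to stream rows in batches of roughly 2048
            try (ResultSet rs = s.executeQuery(
                    "select * from event_fact where event_fact_id between 747000000 and 747200000")) {
                int shown = 0;
                while (rs.next() && shown < 200) { // client consumes only the first 200 rows
                    shown++;
                }
            } // closing early means the remaining rows are never pulled from the source
        }
    }
}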
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (TEIID-2934) OData improvements
by Steven Hawkins (JIRA)
[ https://issues.jboss.org/browse/TEIID-2934?page=com.atlassian.jira.plugin... ]
Steven Hawkins resolved TEIID-2934.
-----------------------------------
Resolution: Done
Switched to forward-only, non-cached results when reading all values. Also forcing a full read in the cached/skipToken case to ensure that the cache entry is properly created.
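A rough Java sketch of that approach follows; it is illustrative only, not the actual Teiid change, and the execute helper and its signature are invented for this example. Scrolling is requested only when results are being cached for $skiptoken paging, and in that case the result set is read to the end so the cache entry is fully created.

// Sketch of the idea, not Teiid internals.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CursorChoiceSketch {
    static ResultSet execute(Connection c, String sql, boolean caching) throws SQLException {
        int type = caching ? ResultSet.TYPE_SCROLL_INSENSITIVE  // scrolling only when caching
                           : ResultSet.TYPE_FORWARD_ONLY;       // cheapest option otherwise
        Statement s = c.createStatement(type, ResultSet.CONCUR_READ_ONLY); // caller closes
        ResultSet rs = s.executeQuery(sql);
        if (caching) {
            while (rs.next()) {
                // drain all rows so the cache entry is completely created
            }
            rs.beforeFirst(); // scrollable cursor lets later batches be served from the cached results
        }
        return rs;
    }
}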
> OData improvements
> ------------------
>
> Key: TEIID-2934
> URL: https://issues.jboss.org/browse/TEIID-2934
> Project: Teiid
> Issue Type: Enhancement
> Components: OData
> Reporter: Steven Hawkins
> Assignee: Steven Hawkins
> Fix For: 8.7.1, 8.8
>
>
> We should only use scrolling when performing caching, and we should ensure that the cache entries are being created - either by reading the whole result set or by adding read-all functionality to the cache hint.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (TEIID-2936) Combining multiple 'in' criteria can prevent pushing criteria to database
by Mike Higgins (JIRA)
Mike Higgins created TEIID-2936:
-----------------------------------
Summary: Combining multiple 'in' criteria can prevent pushing criteria to database
Key: TEIID-2936
URL: https://issues.jboss.org/browse/TEIID-2936
Project: Teiid
Issue Type: Feature Request
Affects Versions: 8.7
Reporter: Mike Higgins
Assignee: Steven Hawkins
When two 'in' criteria containing constant values on the same field are combined with an 'or', Teiid will combine the two into a single 'in' operation.
Since Oracle (for example) has a 1000 item limit on the 'in' values, if the resulting combined list contains more than 1000 values, it can no longer be pushed down into Oracle for execution.
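For illustration, below is a minimal sketch of the usual client-side workaround for the Oracle limit: split the values into chunks of at most 1000 and OR the IN predicates together, keeping each list pushable. This is not a description of how Teiid rewrites criteria; the class and method names are invented for the example.

// Illustrative sketch only.
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

public class InListChunkSketch {
    // Builds "(col IN (...) OR col IN (...) ...)" with at most chunkSize values per IN list.
    static String buildPredicate(String column, List<Long> values, int chunkSize) {
        List<String> chunks = new ArrayList<>();
        for (int i = 0; i < values.size(); i += chunkSize) {
            String inList = values.subList(i, Math.min(i + chunkSize, values.size())).stream()
                    .map(String::valueOf)
                    .collect(Collectors.joining(", "));
            chunks.add(column + " IN (" + inList + ")");
        }
        return "(" + String.join(" OR ", chunks) + ")";
    }

    public static void main(String[] args) {
        List<Long> ids = new ArrayList<>();
        for (long i = 1; i <= 25; i++) {
            ids.add(i);
        }
        // With a chunk size of 10 this prints three IN predicates OR'd together;
        // against Oracle the chunk size would be 1000.
        System.out.println(buildPredicate("event_fact_id", ids, 10));
    }
}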
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (TEIID-2892) OData buffers ALL rows from resultset before returning the first batch
by Ramesh Reddy (JIRA)
[ https://issues.jboss.org/browse/TEIID-2892?page=com.atlassian.jira.plugin... ]
Ramesh Reddy commented on TEIID-2892:
-------------------------------------
As long as there is confidence that, without a sort, the results are predictable or consistent.
> OData buffers ALL rows from resultset before returning the first batch
> ----------------------------------------------------------------------
>
> Key: TEIID-2892
> URL: https://issues.jboss.org/browse/TEIID-2892
> Project: Teiid
> Issue Type: Bug
> Components: OData
> Affects Versions: 8.4.1
> Environment: Tested with JBoss DV 6.0.0.GA (Enterprise Edition) on Apple OS X 10.9.2 and Oracle Java VM 1.7.0_51.
> Reporter: Patrick Deenen
> Assignee: Steven Hawkins
> Attachments: logfiles.zip
>
>
> OData does not batch internally, as opposed to JDBC, which does. For example, when a query with a large result is executed over JDBC, only the first 2048 rows are physically fetched from the source database and only the first 200 rows (depending on the client application) are returned. But when the same query is executed through OData, ALL rows in the result set are physically fetched by DV and stored in the buffer, even with the default OData fetch batch size of 256. This makes the OData interface very inefficient for large query results where one is only interested in the first 256 rows.
> Attached are two log files which show the problem.
> The OData query used is:
> http://localhost:8080/odata/PMA/EVENT_FACT?$filter=event_fact_id%20ge%207...
> Which is equivalent to the JDBC query used:
> select * from event_fact where event_fact_id between 747000000 and 747200000;
> In both cases the result contains 200,000 rows.
> ODATA log information analysis (log file ’server start + odata batch 256.log’):
> row 4543 - 4657 - Start query
> row 4658 - 9030 - Read ALL results from result set and store them in buffer
> row 9031 - 9035 - Close DB connection
> row 9036 - 14647 - Clean buffers and create response?
> row 14648 - 14661 - return first batch and close connection
> JDBC log information analysis (log file ’server start + jdbc.log’):
> row 4925 - 5112 - Start query
> row 5113 - 5166 - Read ONLY the first 2048 results from result set and store them in buffer and return response
> row 5157 - 5214 - Close DB connection
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (TEIID-2892) OData buffers ALL rows from resultset before returning the first batch
by Steven Hawkins (JIRA)
[ https://issues.jboss.org/browse/TEIID-2892?page=com.atlassian.jira.plugin... ]
Steven Hawkins edited comment on TEIID-2892 at 4/25/14 11:20 AM:
-----------------------------------------------------------------
Yes, as per the spec recommendation. However, that means that if the source does not support sorting, all applicable results will be pulled prior to returning a batch.
So this issue isn't a bug, but rather a performance issue related to OData and this particular source. We can proceed to resolve this or modify it to be some kind of enhancement about not enforcing order.
was (Author: shawkins):
Yes as per the spec recommendation. However that means if the source does not support sorting, then all applicable results will be pulled prior to returning a batch.
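As an illustration of the ordering point above (the request below is hypothetical, not taken from the report): a request such as http://localhost:8080/odata/PMA/EVENT_FACT?$filter=...&$orderby=event_fact_id&$top=256 can have its ORDER BY pushed to a source that supports sorting, so rows can stream back batch by batch; against a source that cannot sort, Teiid has to pull all applicable rows and sort them itself before the first batch is returned.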
> OData buffers ALL rows from resultset before returning the first batch
> ----------------------------------------------------------------------
>
> Key: TEIID-2892
> URL: https://issues.jboss.org/browse/TEIID-2892
> Project: Teiid
> Issue Type: Bug
> Components: OData
> Affects Versions: 8.4.1
> Environment: Tested with JBoss DV 6.0.0.GA (Enterprise Edition) on Apple OS X 10.9.2 and Oracle Java VM 1.7.0_51.
> Reporter: Patrick Deenen
> Assignee: Steven Hawkins
> Attachments: logfiles.zip
>
>
> OData does not batch internally, as opposed to JDBC, which does. For example, when a query with a large result is executed over JDBC, only the first 2048 rows are physically fetched from the source database and only the first 200 rows (depending on the client application) are returned. But when the same query is executed through OData, ALL rows in the result set are physically fetched by DV and stored in the buffer, even with the default OData fetch batch size of 256. This makes the OData interface very inefficient for large query results where one is only interested in the first 256 rows.
> Attached are two log files which show the problem.
> The OData query used is:
> http://localhost:8080/odata/PMA/EVENT_FACT?$filter=event_fact_id%20ge%207...
> Which is equivalent to the JDBC query used:
> select * from event_fact where event_fact_id between 747000000 and 747200000;
> In both cases the result contains 200,000 rows.
> ODATA log information analysis (log file ’server start + odata batch 256.log’):
> row 4543 - 4657 - Start query
> row 4658 - 9030 - Read ALL results from result set and store them in buffer
> row 9031 - 9035 - Close DB connection
> row 9036 - 14647 - Clean buffers and create response?
> row 14648 - 14661 - return first batch and close connection
> JDBC log information analysis (log file ’server start + jdbc.log’):
> row 4925 - 5112 - Start query
> row 5113 - 5166 - Read ONLY the first 2048 results from result set and store them in buffer and return response
> row 5157 - 5214 - Close DB connection
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (TEIID-2892) OData buffers ALL rows from resultset before returning the first batch
by Steven Hawkins (JIRA)
[ https://issues.jboss.org/browse/TEIID-2892?page=com.atlassian.jira.plugin... ]
Steven Hawkins commented on TEIID-2892:
---------------------------------------
Yes as per the spec recommendation. However that means if the source does not support sorting, then all applicable results will be pulled prior to returning a batch.
> OData buffers ALL rows from resultset before returning the first batch
> ----------------------------------------------------------------------
>
> Key: TEIID-2892
> URL: https://issues.jboss.org/browse/TEIID-2892
> Project: Teiid
> Issue Type: Bug
> Components: OData
> Affects Versions: 8.4.1
> Environment: Tested with JBoss DV 6.0.0.GA (Enterprise Edition) on Apple OS X 10.9.2 and Oracle Java VM 1.7.0_51.
> Reporter: Patrick Deenen
> Assignee: Steven Hawkins
> Attachments: logfiles.zip
>
>
> OData does not batch internally, as opposed to JDBC, which does. For example, when a query with a large result is executed over JDBC, only the first 2048 rows are physically fetched from the source database and only the first 200 rows (depending on the client application) are returned. But when the same query is executed through OData, ALL rows in the result set are physically fetched by DV and stored in the buffer, even with the default OData fetch batch size of 256. This makes the OData interface very inefficient for large query results where one is only interested in the first 256 rows.
> Attached are two log files which show the problem.
> The OData query used is:
> http://localhost:8080/odata/PMA/EVENT_FACT?$filter=event_fact_id%20ge%207...
> Which is equivalent to the JDBC query used:
> select * from event_fact where event_fact_id between 747000000 and 747200000;
> In both cases the result contains 200,000 rows.
> ODATA log information analysis (log file ’server start + odata batch 256.log’):
> row 4543 - 4657 - Start query
> row 4658 - 9030 - Read ALL results from result set and store them in buffer
> row 9031 - 9035 - Close DB connection
> row 9036 - 14647 - Clean buffers and create response?
> row 14648 - 14661 - return first batch and close connection
> JDBC log information analysis (log file ’server start + jdbc.log’):
> row 4925 - 5112 - Start query
> row 5113 - 5166 - Read ONLY the first 2048 results from result set and store them in buffer and return response
> row 5157 - 5214 - Close DB connection
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira