[teiid-issues] [JBoss JIRA] (TEIID-5680) Improve performance of odata expand operations

Christoph John (Jira) issues at jboss.org
Mon Mar 11 13:07:00 EDT 2019


    [ https://issues.jboss.org/browse/TEIID-5680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706731#comment-13706731 ] 

Christoph John edited comment on TEIID-5680 at 3/11/19 1:06 PM:
----------------------------------------------------------------

Hello Steven, thanks for the feedback. Then it seems now is the time for me to migrate to 12.0. I am not aware of the CARDINALITY metadata concept yet and need to get hands-on with it first, but I will try out your recommendation.
As a matter of principle I expect the Diary table to have a large row count in a production scenario as well; the small number of rows is only because I am running with a small test setup. For the CARDINALITY to make sense to Teiid, what kind of category limits have to be specified? I mean, does Teiid behave differently for different orders of magnitude of the cardinality, or is there just a small-vs-large table distinction depending on a given threshold? How will I have to use the metadata feature in a production environment? Is this something I need to update over the lifecycle as tables grow larger, or do I just make an educated guess about how the future will look?
Thanks for your help.
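For reference, a cardinality estimate can be declared as table metadata in DDL. A minimal sketch, assuming Teiid's OPTIONS syntax; the column definitions are illustrative, not taken from the actual schema:

```sql
-- Hypothetical sketch: declaring an approximate row count so the
-- optimizer can cost joins. The value is an estimate, not an exact count.
CREATE FOREIGN TABLE FDBProducts (
    idCode string,
    product_name string,
    brands string,
    energy_100g double
) OPTIONS (CARDINALITY 680000);

-- Updating the estimate later, as the table grows:
ALTER FOREIGN TABLE FDBProducts OPTIONS (SET CARDINALITY 700000);
```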

Best regards,
 Christoph



> Improve performance of odata expand operations
> ----------------------------------------------
>
>                 Key: TEIID-5680
>                 URL: https://issues.jboss.org/browse/TEIID-5680
>             Project: Teiid
>          Issue Type: Enhancement
>          Components: OData
>            Reporter: Christoph John
>            Assignee: Steven Hawkins
>            Priority: Major
>         Attachments: test2.txt
>
>
> Hello Ramesh and Steven,
> this is a follow-up to an observation from the discussion in TEIID-5643. I have opened a separate issue for the topic, as it does not seem to be related to TEIID-5500. 
> As you already know, I am using SAPUI5 as the frontend for OData requests. SAPUI5 supports binding a group of user-interface controls (such as a list with its list items) to only a single OData path at a time. If the control group items require additional information that is stored in a different table in the database, I have to expand those properties in the OData query.
> When doing so, I run into a serious performance issue with Teiid, which would render the approach of using SAPUI5 with Teiid infeasible if we cannot find a way to speed it up. At the moment I have a small table (Diary, about 20 records) from which the query extracts several items (just a single one in the example given below). The filtered items are then expanded with data from a larger table in the database (FDBProducts, about 680,000 records). The whole query takes about 15 s to process. The query is:
> https://morpheus.fritz.box/odata4/svc/my_nutri_diary/Diary?$select=AmountInG,idDiaryEntry&$expand=fkDiaryToFDBProducts($select=brands,energy_100g,idCode,product_name)&$filter=AddedDateTime%20ge%202019-03-06T00:00:00%2B01:00%20and%20AddedDateTime%20le%202019-03-07T00:00:00%2B01:00%20and%20MealNumber%20eq%20%270%27&$skip=0&$top=100
> I checked the output when using
>  <logger category="org.teiid.CONNECTOR"><level name="TRACE"/></logger>
> This shows the problem: the join operation is not pushed down to the database; instead, the data are joined within Teiid. Teiid therefore downloads the entire dataset of the large FDBProducts table, which makes the expand approach infeasible for real-world datasets beyond a certain size. So my question is: can you modify Teiid to push the entire join operation down to the underlying database (I assume this would be the most efficient approach), or alternatively, if that is not possible, to query from the joined table only the items matching the rows filtered from the first table?
> Thanks for your help.
>  Christoph
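The pushdown requested above would correspond to a single source-level query along these lines. A hypothetical sketch only: the foreign-key column (here fkProductId) and the exact predicate form are assumptions, since the schema is not shown in the issue:

```sql
-- Hypothetical sketch of the desired pushed-down query, instead of
-- fetching all ~680,000 FDBProducts rows and joining inside Teiid.
SELECT d.idDiaryEntry, d.AmountInG,
       p.idCode, p.product_name, p.brands, p.energy_100g
FROM Diary d
LEFT OUTER JOIN FDBProducts p ON d.fkProductId = p.idCode
WHERE d.AddedDateTime >= {ts '2019-03-06 00:00:00'}
  AND d.AddedDateTime <= {ts '2019-03-07 00:00:00'}
  AND d.MealNumber = '0'
LIMIT 100;
```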



--
This message was sent by Atlassian Jira
(v7.12.1#712002)


More information about the teiid-issues mailing list