[JBoss JIRA] (TEIID-5643) When querying a larger dataset, teiid answers with 504 Gateway Time-out and throws exception
by Christoph John (Jira)
[ https://issues.jboss.org/browse/TEIID-5643?page=com.atlassian.jira.plugin... ]
Christoph John updated TEIID-5643:
----------------------------------
Attachment: Heap Dump analysis tool.pdf
> When querying a larger dataset, teiid answers with 504 Gateway Time-out and throws exception
> --------------------------------------------------------------------------------------------
>
> Key: TEIID-5643
> URL: https://issues.jboss.org/browse/TEIID-5643
> Project: Teiid
> Issue Type: Bug
> Affects Versions: 11.2.1
> Reporter: Christoph John
> Assignee: Steven Hawkins
> Priority: Major
> Attachments: Heap Dump analysis tool.pdf
>
>
> Please note, I am using Teiid 11.2.0 and not 11.2.1 as specified above in this ticket. I was not able to select version 11.2.0.
> The issue: I tried to query a table with a larger number of records (about 600,000 records in the table). Teiid responds with a
> 504 Gateway Time-out; the server didn't respond in time.
> If Teiid tries to answer such a request with 600,000 entries, it is understandable that it runs out of memory or that other strange things happen. However, I would expect that in such a case Teiid just answers with a fixed number of records, say for example 1000, and provides a kind of cursor so that subsequent requests can retrieve more data.
> I am currently trying to find a workaround, as the issue blocks my work. Is there perhaps a configuration option that I have to set explicitly to get the behavior I sketched above?
> Further note: When I send such a select * request on the table to Teiid, I am also unable, for about a minute, to get a response to other queries that select a single element from the mentioned table. Such a request then also results in a 504 Gateway Time-out.
> Attached error log:
> 2019-02-01 22:33:36,126 WARN [org.jboss.jca.core.connectionmanager.pool.strategy.OnePool] (Worker2_QueryProcessorQueue5) eayVFuq9y7OY IJ000407: No lazy enlistment available for my_nutri_diary
> 2019-02-01 22:33:36,135 INFO [org.teiid.CONNECTOR] (Worker2_QueryProcessorQueue5) eayVFuq9y7OY MySQLExecutionFactory Commit=true;DatabaseProductName=MySQL;DatabaseProductVersion=8.0.13;DriverMajorVersion=8;DriverMajorVersion=0;DriverName=MySQL Connector/J;DriverVersion=mysql-connector-java-8.0.13 (Revision: 66459e9d39c8fd09767992bc592acd2053279be6);IsolationLevel=2
> 2019-02-01 22:36:56,959 WARN [org.teiid.CONNECTOR] (Worker2_QueryProcessorQueue5) eayVFuq9y7OY Connector worker process failed for atomic-request=eayVFuq9y7OY.0.0.0: org.teiid.translator.jdbc.JDBCExecutionException: 0 TEIID11008:TEIID11004 Error executing statement(s): [Prepared Values: [] SQL: SELECT g_0.`idCode` AS c_0, g_0.`lc` AS c_1, g_0.`product_name` AS c_2, g_0.`origins` AS c_3, g_0.`brands` AS c_4, g_0.`quantity` AS c_5, g_0.`nova_group` AS c_6, g_0.`nutrition_grade_fr` AS c_7, g_0.`ingredients_text_with_allergens` AS c_8, g_0.`energy_100g` AS c_9, g_0.`carbohydrates_100g` AS c_10, g_0.`sugars_100g` AS c_11, g_0.`proteins_100g` AS c_12, g_0.`fat_100g` AS c_13, g_0.`saturated_fat_100g` AS c_14, g_0.`saturated_fat_modifier` AS c_15, g_0.`salt_100g` AS c_16, g_0.`sodium_100g` AS c_17 FROM `FDBProducts` AS g_0 ORDER BY c_0]
> at org.teiid.translator.jdbc.JDBCQueryExecution.execute(JDBCQueryExecution.java:127)
> at org.teiid.dqp.internal.datamgr.ConnectorWorkItem.execute(ConnectorWorkItem.java:393)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.teiid.dqp.internal.datamgr.ConnectorManager$1.invoke(ConnectorManager.java:229)
> at com.sun.proxy.$Proxy51.execute(Unknown Source)
> at org.teiid.dqp.internal.process.DataTierTupleSource.getResults(DataTierTupleSource.java:302)
> at org.teiid.dqp.internal.process.DataTierTupleSource$1.call(DataTierTupleSource.java:108)
> at org.teiid.dqp.internal.process.DataTierTupleSource$1.call(DataTierTupleSource.java:104)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at org.teiid.dqp.internal.process.FutureWork.run(FutureWork.java:61)
> at org.teiid.dqp.internal.process.DQPWorkContext.runInContext(DQPWorkContext.java:278)
> at org.teiid.dqp.internal.process.ThreadReuseExecutor$RunnableWrapper.run(ThreadReuseExecutor.java:115)
> at org.teiid.dqp.internal.process.ThreadReuseExecutor$3.run(ThreadReuseExecutor.java:206)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.sql.SQLException: Java heap space
> at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:129)
> at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
> at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
> at com.mysql.cj.jdbc.ClientPreparedStatement.executeInternal(ClientPreparedStatement.java:974)
> at com.mysql.cj.jdbc.ClientPreparedStatement.executeQuery(ClientPreparedStatement.java:1024)
> at org.jboss.jca.adapters.jdbc.WrappedPreparedStatement.executeQuery(WrappedPreparedStatement.java:504)
> at org.teiid.translator.jdbc.JDBCQueryExecution.execute(JDBCQueryExecution.java:119)
> ... 18 more
> 2019-02-01 22:36:56,977 WARN [org.teiid.PROCESSOR] (default task-4) eayVFuq9y7OY TEIID30020 Processing exception for request eayVFuq9y7OY.0 'TEIID30504 my_nutri_diary: 0 TEIID11008:TEIID11004 Error executing statement(s): [Prepared Values: [] SQL: SELECT g_0.`idCode` AS c_0, g_0.`lc` AS c_1, g_0.`product_name` AS c_2, g_0.`origins` AS c_3, g_0.`brands` AS c_4, g_0.`quantity` AS c_5, g_0.`nova_group` AS c_6, g_0.`nutrition_grade_fr` AS c_7, g_0.`ingredients_text_with_allergens` AS c_8, g_0.`energy_100g` AS c_9, g_0.`carbohydrates_100g` AS c_10, g_0.`sugars_100g` AS c_11, g_0.`proteins_100g` AS c_12, g_0.`fat_100g` AS c_13, g_0.`saturated_fat_100g` AS c_14, g_0.`saturated_fat_modifier` AS c_15, g_0.`salt_100g` AS c_16, g_0.`sodium_100g` AS c_17 FROM `FDBProducts` AS g_0 ORDER BY c_0]'. Originally TeiidProcessingException 'Java heap space' SQLError.java:129. Enable more detailed logging to see the entire stacktrace.
> 2019-02-01 22:36:56,987 WARN [org.teiid.PROCESSOR] (default task-4) TEIID16053 Unable to process odata request due to: TEIID30504 my_nutri_diary: 0 TEIID11008:TEIID11004 Error executing statement(s): [Prepared Values: [] SQL: SELECT g_0.`idCode` AS c_0, g_0.`lc` AS c_1, g_0.`product_name` AS c_2, g_0.`origins` AS c_3, g_0.`brands` AS c_4, g_0.`quantity` AS c_5, g_0.`nova_group` AS c_6, g_0.`nutrition_grade_fr` AS c_7, g_0.`ingredients_text_with_allergens` AS c_8, g_0.`energy_100g` AS c_9, g_0.`carbohydrates_100g` AS c_10, g_0.`sugars_100g` AS c_11, g_0.`proteins_100g` AS c_12, g_0.`fat_100g` AS c_13, g_0.`saturated_fat_100g` AS c_14, g_0.`saturated_fat_modifier` AS c_15, g_0.`salt_100g` AS c_16, g_0.`sodium_100g` AS c_17 FROM `FDBProducts` AS g_0 ORDER BY c_0] Increase the log level to see the entire stack trace.
> 2019-02-01 22:36:57,213 WARN [org.teiid.CONNECTOR] (Worker2_QueryProcessorQueue6) NdaDihMR2Vlt Connector worker process failed for atomic-request=NdaDihMR2Vlt.0.0.1: org.teiid.translator.jdbc.JDBCExecutionException: 0 TEIID11008:TEIID11004 Error executing statement(s): [Prepared Values: [] SQL: SELECT g_0.`idCode` AS c_0, g_0.`lc` AS c_1, g_0.`product_name` AS c_2, g_0.`origins` AS c_3, g_0.`brands` AS c_4, g_0.`quantity` AS c_5, g_0.`nova_group` AS c_6, g_0.`nutrition_grade_fr` AS c_7, g_0.`ingredients_text_with_allergens` AS c_8, g_0.`energy_100g` AS c_9, g_0.`carbohydrates_100g` AS c_10, g_0.`sugars_100g` AS c_11, g_0.`proteins_100g` AS c_12, g_0.`fat_100g` AS c_13, g_0.`saturated_fat_100g` AS c_14, g_0.`saturated_fat_modifier` AS c_15, g_0.`salt_100g` AS c_16, g_0.`sodium_100g` AS c_17 FROM `FDBProducts` AS g_0 ORDER BY c_0]
> at org.teiid.translator.jdbc.JDBCQueryExecution.execute(JDBCQueryExecution.java:127)
> at org.teiid.dqp.internal.datamgr.ConnectorWorkItem.execute(ConnectorWorkItem.java:393)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.teiid.dqp.internal.datamgr.ConnectorManager$1.invoke(ConnectorManager.java:229)
> at com.sun.proxy.$Proxy51.execute(Unknown Source)
> at org.teiid.dqp.internal.process.DataTierTupleSource.getResults(DataTierTupleSource.java:302)
> at org.teiid.dqp.internal.process.DataTierTupleSource$1.call(DataTierTupleSource.java:108)
> at org.teiid.dqp.internal.process.DataTierTupleSource$1.call(DataTierTupleSource.java:104)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at org.teiid.dqp.internal.process.FutureWork.run(FutureWork.java:61)
> at org.teiid.dqp.internal.process.DQPWorkContext.runInContext(DQPWorkContext.java:278)
> at org.teiid.dqp.internal.process.ThreadReuseExecutor$RunnableWrapper.run(ThreadReuseExecutor.java:115)
> at org.teiid.dqp.internal.process.ThreadReuseExecutor$3.run(ThreadReuseExecutor.java:206)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (TEIID-5643) When querying a larger dataset, teiid answers with 504 Gateway Time-out and throws exception
by Christoph John (Jira)
[ https://issues.jboss.org/browse/TEIID-5643?page=com.atlassian.jira.plugin... ]
Christoph John commented on TEIID-5643:
---------------------------------------
Hello Steven,
I found a way to take a heap dump in the Docker engine. I tried to take a dump during execution of such a select * statement. While the statement was running I could not take a heap dump but got an error message:
[jboss@844521ce1925 ~]$ jmap -dump:live,format=b,file=heapDump.hprof 66
66: Unable to open socket file: target process not responding or HotSpot VM not loaded
Taking the dump shortly afterwards worked, but it has a size of 104 MB. How can I send it to you?
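When attaching from outside fails like that, it is commonly because the tool runs as a different OS user than the JVM, or because the VM is momentarily unresponsive (for example while it is thrashing on a full heap, which is exactly the situation during the failing query). As an alternative, a dump can be triggered from inside the JVM with standard JDK MBeans. The following is only a minimal sketch of that approach - the class name and output path are made up, and wiring it into the server (for example through a small helper deployment or a JMX client) is left out:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import com.sun.management.HotSpotDiagnosticMXBean;

public class DumpHelper {

    // Writes an .hprof heap dump of live objects to the given path (placeholder path).
    public static void heapDump(String file) throws Exception {
        HotSpotDiagnosticMXBean hotspot = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        hotspot.dumpHeap(file, true);
    }

    // Prints a thread dump to stdout, similar in content to what jstack would show.
    public static void threadDump() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
            System.out.print(info);
        }
    }
}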
[JBoss JIRA] (TEIID-5643) When querying a larger dataset, teiid answers with 504 Gateway Time-out and throws exception
by Christoph John (Jira)
[ https://issues.jboss.org/browse/TEIID-5643?page=com.atlassian.jira.plugin... ]
Christoph John edited comment on TEIID-5643 at 2/4/19 7:07 PM:
---------------------------------------------------------------
Hello Steven,
>Ideally it should not - unless the odata batch size is set so high that creating the response message is the problem. We do need to cache the entire result set, but that should page the disk as necessary. Is it possible to produce a heap dump? What is your database type? In some cases it could be the database driver itself that is fully materializing the result rather than delivering in batches.
I use a MySQL 8 database with the MySQL 8 driver. I also use the same driver together with MySQL Workbench (not going through OData) to configure the database; Workbench by default limits query results to 1000 rows. That works without any issues, so I do not think it is an issue of the driver or the database.
I have no idea how to take a heap dump. I am running Teiid in a Docker container based on the image jboss/wildfly:11.0.0.Final. Could you give me instructions on how to take a heap dump there?
>Correct, and from the default configuration you are showing the odata results should be paged back 256 rows at a time.
This is also my thinking.
>Can you manually use top and skip to fetch a smaller total result set?
Interestingly, if I use $top=250 I do not get such blocking behavior. The problem is that I cannot take this as a workaround, as I need to ensure that a user cannot freeze my database by sending such a query.
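For reference, such a limited request against Teiid's OData endpoint looks roughly like the following (the host, VDB, and model names are placeholders; the /odata4 context path is assumed from the default Teiid WildFly OData deployment):
GET http://<host>:8080/odata4/<vdb>/<model>/FDBProducts?$top=250&$skip=0
With server-driven paging the response to an unrestricted query should instead come back one batch at a time (256 rows by default, as mentioned above), with an @odata.nextLink pointing at the next page.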
>There is nothing at the odata or engine level that should inherently block one request for another like that. Is there some locking at the database level? Or can you produce a thread dump to see why threads may be waiting?
I have the same problem with the thread dump in the Docker container as with the heap dump mentioned above: I do not know how to create one. Could you give me a hint here as well?
Thanks for your help.
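On the point about the driver fully materializing the result: MySQL Connector/J does read an entire result set into memory by default, which would match the 'Java heap space' error raised from ClientPreparedStatement in the log above. The sketch below only illustrates the two standard ways to make Connector/J deliver rows incrementally - the connection URL, credentials, and the question of how these settings would be passed through the WildFly datasource that Teiid uses are all assumptions/placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class StreamingFetchExample {
    public static void main(String[] args) throws Exception {
        // Option 1: cursor-based fetching via connection properties.
        String cursorUrl = "jdbc:mysql://localhost:3306/mydb?useCursorFetch=true&defaultFetchSize=256";
        try (Connection c = DriverManager.getConnection(cursorUrl, "user", "password");
             PreparedStatement ps = c.prepareStatement("SELECT * FROM FDBProducts ORDER BY idCode");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // rows arrive from the server in chunks of defaultFetchSize instead of all at once
            }
        }

        // Option 2: row-by-row streaming; Connector/J treats Integer.MIN_VALUE as the streaming signal
        // (requires the default forward-only, read-only statement, which prepareStatement gives us).
        try (Connection c = DriverManager.getConnection("jdbc:mysql://localhost:3306/mydb", "user", "password");
             PreparedStatement ps = c.prepareStatement("SELECT * FROM FDBProducts ORDER BY idCode")) {
            ps.setFetchSize(Integer.MIN_VALUE);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // rows are streamed one at a time and must be fully read or the statement closed
                }
            }
        }
    }
}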
[JBoss JIRA] (TEIID-5643) When querying a larger dataset, teiid answers with 504 Gateway Time-out and throws exception
by Christoph John (Jira)
[ https://issues.jboss.org/browse/TEIID-5643?page=com.atlassian.jira.plugin... ]
Christoph John edited comment on TEIID-5643 at 2/4/19 7:06 PM:
---------------------------------------------------------------
Hello Steven,
>Ideally it should not - unless the odata batch size is set so high that creating the response message is the problem. We do need to cache the entire result set, but that should page the disk as necessary. Is it possible to produce a heap dump? What is your database type? In some cases it could be the database driver itself that is fully materializing the result rather than delivering in batches.
I use a MySQL 8 database with the MySQL 8 driver. I also use the same driver together with MySQL Workbench (not going through OData) to configure the database; Workbench by default limits query results to 1000 rows. That works without any issues, so I do not think it is an issue of the driver or the database.
I have no idea how to take a heap dump. I am running Teiid in a Docker container based on the image jboss/wildfly:11.0.0.Final. Could you give me instructions on how to take a heap dump there?
>Correct, and from the default configuration you are showing the odata results should be paged back 256 rows at a time.
This is also my thinking.
>Can you manually use top and skip to fetch a smaller total result set?
Interestingly, if I use $top=250 I do not get such blocking behavior. The problem is that I cannot take this as a workaround, as I need to ensure that a user cannot freeze my database by sending such a query.
>There is nothing at the odata or engine level that should inherently block one request for another like that. Is there some locking at the database level? Or can you produce a thread dump to see why threads may be waiting?
I have the same problem with the thread dump in the Docker container as with the heap dump mentioned above: I do not know how to create one. Could you give me a hint here as well?
Thanks for your help.
[JBoss JIRA] (TEIID-5643) When querying a larger dataset, teiid answers with 504 Gateway Time-out and throws exception
by Christoph John (Jira)
[ https://issues.jboss.org/browse/TEIID-5643?page=com.atlassian.jira.plugin... ]
Christoph John commented on TEIID-5643:
---------------------------------------
Hello Steven,
>Ideally it should not - unless the odata batch size is set so high that creating the response message is the problem. We do need to cache the entire result set, but that should page the disk as necessary. Is it possible to produce a heap dump? What is your database type? In some cases it could be the database driver itself that is fully materializing the result rather than delivering in batches.
I use a MySQL 8 database with the MySQL 8 driver. I also use the same driver together with MySQL Workbench (not going through OData) to configure the database; Workbench by default limits query results to 1000 rows. That works without any issues, so I do not think it is an issue of the driver or the database.
I have no idea how to take a heap dump. I am running Teiid in a Docker container based on the image jboss/wildfly:11.0.0.Final. Could you give me instructions on how to take a heap dump there?
>Correct, and from the default configuration you are showing the odata results should be paged back 256 rows at a time.
This is also my thinking.
>Can you manually use top and skip to fetch a smaller total result set?
Interestingly, if I use $top=250 I do not get such blocking behavior. The problem is that I cannot take this as a workaround, as I need to ensure that a user cannot freeze my database by sending such a query.
>There is nothing at the odata or engine level that should inherently block one request for another like that. Is there some locking at the database level? Or can you produce a thread dump to see why threads may be waiting?
I have the same problem with the thread dump in the Docker container as with the heap dump mentioned above: I do not know how to create one. Could you give me a hint here as well?
Thanks for your help.
[JBoss JIRA] (TEIID-5555) Create OpenAPI (swagger) metadata document based on OData
by Steven Hawkins (Jira)
[ https://issues.jboss.org/browse/TEIID-5555?page=com.atlassian.jira.plugin... ]
Steven Hawkins resolved TEIID-5555.
-----------------------------------
Resolution: Done
Added OpenAPI handling based upon the XSL transformation of the XML EDM metadata. A couple of additional refinements could include turning OpenAPI metadata access off to cut down on the servlet init time, and logic to control what is generated.
> Create OpenAPI (swagger) metadata document based on OData
> ---------------------------------------------------------
>
> Key: TEIID-5555
> URL: https://issues.jboss.org/browse/TEIID-5555
> Project: Teiid
> Issue Type: Enhancement
> Components: OData
> Reporter: Ramesh Reddy
> Assignee: Steven Hawkins
> Priority: Critical
> Fix For: 12.1
>
>
> Automatically generate a Swagger metadata document based on the OData metadata so that tools like 3Scale can understand the metadata through existing plugins, until an OData-specific plugin is available. Only the Swagger 2.0 document is considered for now; 3.0 support is not considered.
[JBoss JIRA] (TEIID-5644) materialization procedures can hit max string length
by Steven Hawkins (Jira)
Steven Hawkins created TEIID-5644:
-------------------------------------
Summary: materialization procedures can hit max string length
Key: TEIID-5644
URL: https://issues.jboss.org/browse/TEIID-5644
Project: Teiid
Issue Type: Bug
Components: Query Engine
Reporter: Steven Hawkins
Assignee: Steven Hawkins
The concatenations performed to build dynamic SQL can become too long for a plain string - and by our current rules the value will be truncated rather than converted to a clob.
[JBoss JIRA] (TEIID-5555) Create OpenAPI (swagger) metadata document based on OData
by Steven Hawkins (Jira)
[ https://issues.jboss.org/browse/TEIID-5555?page=com.atlassian.jira.plugin... ]
Steven Hawkins commented on TEIID-5555:
---------------------------------------
Added a PR using the OASIS XSLT that looks for /swagger.json GET requests - this will be the swagger/openapi v2 metadata URL. For v3 we'll likely have /openapi.json.
The XSLT is compiled once up front (which can take a second or more), and each metadata response is cached as a temp file just to cut down on the amount of work / garbage in the event of larger entity models.
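As a general illustration of that compile-once pattern (this is not the actual Teiid code; the class, stylesheet, and method names below are made up), the stylesheet can be compiled into a javax.xml.transform.Templates object once and reused for every metadata request:

import java.io.StringWriter;
import javax.xml.transform.Templates;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class MetadataTransformer {

    // Compile the XSLT once; Templates is thread-safe and can be shared across requests.
    private static final Templates OPENAPI_XSLT = compile("/csdl-to-openapi.xsl");

    private static Templates compile(String resource) {
        try {
            return TransformerFactory.newInstance()
                    .newTemplates(new StreamSource(MetadataTransformer.class.getResourceAsStream(resource)));
        } catch (Exception e) {
            throw new IllegalStateException("could not compile stylesheet " + resource, e);
        }
    }

    // Transforms the OData EDM ($metadata) XML into an OpenAPI/Swagger JSON string.
    public static String toOpenApi(StreamSource edmXml) throws Exception {
        Transformer transformer = OPENAPI_XSLT.newTransformer(); // cheap compared to recompiling
        StringWriter out = new StringWriter();
        transformer.transform(edmXml, new StreamResult(out));
        return out.toString();
    }
}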
[JBoss JIRA] (TEIID-5561) Prevent alias collisions with built-in odata schema
by Steven Hawkins (Jira)
[ https://issues.jboss.org/browse/TEIID-5561?page=com.atlassian.jira.plugin... ]
Steven Hawkins commented on TEIID-5561:
---------------------------------------
We should add to this that a schema also cannot be named static or keycloak.
> Prevent alias collisions with built-in odata schema
> ---------------------------------------------------
>
> Key: TEIID-5561
> URL: https://issues.jboss.org/browse/TEIID-5561
> Project: Teiid
> Issue Type: Quality Risk
> Components: OData
> Reporter: Steven Hawkins
> Assignee: Steven Hawkins
> Priority: Major
> Fix For: 12.1
>
>
> It is problematic to have a Teiid model/schema with the name:
> olingo-extensions
> teiid
> edm
> as that will use an alias that conflicts with built-in handling. We should provide an error/warning as appropriate or come up with the logic to introduce a non-conflicting alias.