[
https://issues.jboss.org/browse/TEIID-5643?page=com.atlassian.jira.plugin...
]
Steven Hawkins commented on TEIID-5643:
---------------------------------------
Question 1: Is this due to streaming from the database to the Teiid
server, or are these explicit requests which we send to the MySQL database?
The engine processes rows in batches. At an access node we determine the effective
working row count based upon the estimated data width, then fetch that number of rows from
the source. The engine will process that batch while usually prefetching the next.
The batch size determined for a source result is set as the fetch size via JDBC -
which may or may not be honored, depending upon the driver.
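A minimal sketch of the sizing described above. The scaling formula, class, and method names here are assumptions for illustration only, not Teiid's internal code; the real heuristic is internal and may differ:

```java
// Hedged sketch: all names and the formula below are invented for illustration.
public class BatchSizing {

    // Hypothetical rule: scale a nominal batch size inversely with the
    // estimated row width, so wide rows get smaller batches, clamped to a
    // sane range.
    static int effectiveBatchSize(int nominalRows, int referenceRowBytes, int estimatedRowBytes) {
        long scaled = (long) nominalRows * referenceRowBytes / Math.max(1, estimatedRowBytes);
        return (int) Math.max(64, Math.min(scaled, 16384));
    }

    public static void main(String[] args) {
        // 4 KB rows against a 512-byte reference width -> a much smaller batch.
        int fetchSize = effectiveBatchSize(2048, 512, 4096);
        System.out.println(fetchSize);
        // That number would then be passed to the driver as a hint, e.g.
        //   statement.setFetchSize(fetchSize);
        // which MySQL Connector/J only honors in streaming/cursor modes.
    }
}
```

The `setFetchSize` call in the comment is the standard JDBC hint; whether the driver respects it is exactly the "may or may not be honored" caveat above.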
Question 2: Are the data reloaded entirely with each select * request,
or are the data in the cache just populated for the first time?
The simplest scenario over OData is that the user issues an initial request. The results
are cached for use with the skiptoken returned with each batch. If another initial
request is issued, that will create a new cache entry. If resultSetCacheMode is enabled
and the initial request is already in the cache, it will be reused. Then of course a
materialized view can be loaded / refreshed as per your needs.
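The skiptoken flow above can be sketched as a small cache keyed by token. The class and method names are invented for illustration; this is not Teiid's API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hedged sketch of OData skiptoken paging: an initial request populates a
// cache entry; follow-up requests page through it without re-querying.
public class SkipTokenCache {
    private final Map<String, List<String>> cache = new HashMap<>();
    private int counter = 0;

    // Initial request: run the query once, cache the full result, and hand
    // back the token that subsequent $skiptoken requests will carry.
    public String put(List<String> rows) {
        String token = "tok" + counter++;
        cache.put(token, new ArrayList<>(rows));
        return token;
    }

    // Follow-up request: serve the next page from the cached entry instead
    // of re-running the source query.
    public List<String> page(String token, int offset, int pageSize) {
        List<String> rows = cache.get(token);
        return rows.subList(offset, Math.min(offset + pageSize, rows.size()));
    }

    public static void main(String[] args) {
        SkipTokenCache c = new SkipTokenCache();
        String token = c.put(List.of("r1", "r2", "r3", "r4", "r5"));
        System.out.println(c.page(token, 0, 2)); // first batch
        System.out.println(c.page(token, 2, 2)); // served from cache, no re-query
    }
}
```

A second initial request would call `put` again and get its own token, which mirrors the "new cache entry" behavior described above.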
Question 3: Are there any config options to improve performance here,
maybe by enforcing reuse of cached data? I mean, if only one Teiid instance is running and
nobody else is accessing the database except via Teiid, Teiid should be able to serve most
data from the cache and just push down write requests once the cache is populated.
See above. Caching of data is done only when there is a clear intent - we don't want
to surprise anyone with stale data.
Question 4: Regarding chunk size: is there something I could
configure, performance-wise, so that more than 512 rows are transferred?
That is likely not an issue. It appears from your log that the problem is at the OData
layer or higher.
Question 5a: What I have also interpreted from the log file is that
org.teiid.BUFFER_MGR starts freeing memory even though it is still loading rows from
the table. Might it be that I am low on memory and need to increase the amount of
memory?
It's a little complicated. There are three memory tiers that the buffer manager
manages: on-heap live objects; an on- or off-heap fixed memory buffer (which is both a
serialization buffer and a cache); and finally disk. When you add anything to the buffer
manager it is in the first tier - live objects on the heap. One of the responsibilities
of the buffer cleaner (and the processing threads) is to take these initial entries and
move them to the second tier, because the serialized form of a batch is much more compact,
and that frees up heap for other work. It does this based upon memory pressure and
the age of the initial entry. As the second tier fills up, items are moved to disk -
which results in messages about free space from the fixed memory buffer.
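The three-tier movement above can be sketched as a placement rule. The thresholds and names below are invented for illustration only; Teiid's actual cleaner policy is internal and considers more signals:

```java
// Hedged sketch of the buffer manager's three tiers; thresholds are made up.
public class TierSketch {
    enum Tier { HEAP_LIVE, FIXED_BUFFER, DISK }

    // A batch stays as live heap objects while young and heap pressure is
    // low; it is serialized into the fixed buffer when aged out or under
    // pressure; and it spills to disk once the fixed buffer is full.
    static Tier placeBatch(long ageMillis, double heapPressure, double fixedBufferFill) {
        if (ageMillis < 100 && heapPressure < 0.8) {
            return Tier.HEAP_LIVE;
        }
        if (fixedBufferFill < 0.9) {
            return Tier.FIXED_BUFFER;
        }
        return Tier.DISK;
    }

    public static void main(String[] args) {
        System.out.println(placeBatch(10, 0.5, 0.2));   // young batch, calm heap
        System.out.println(placeBatch(500, 0.5, 0.2));  // aged out, gets serialized
        System.out.println(placeBatch(500, 0.9, 0.95)); // fixed buffer full, spills
    }
}
```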
I wouldn't adjust anything with the memory until there's an understanding of what
is occurring in your scenario. Since we don't have a heap dump or logs that contain
anything out of the ordinary from the engine's perspective, it's hard to say what
you are seeing. Perhaps the best thing would be to focus on a reproducer that you can
share. I can quickly test locally a similar row count through odata, but I know that
similar tests were done as part of odata performance testing in which we didn't see
anything like this.
Question 5b: What config options would this be? From Teiid, WildFly,
or from the Java runtime?
It's best to focus on reproducing at this point. Capture your datasource
configuration, jvm settings, etc. so that we can try to emulate what you have.
Further news: when I had a look at the topic this morning I ran into
a new situation which I have never seen before. The only difference was that I had
debug logging for Teiid active overnight. A select on a very small table already
failed with error 504. Maybe memory is running out somewhere? I copied the SQL query from
the Teiid logfile and executed it directly against my MySQL database. That works as
expected. When repeating the query with Teiid I was lucky enough to get a Java exception. I
attached logfile3.txt to this comment. Do you have an idea what could be going wrong here?
That exception should be due to exhausting connections from the datasource connection
pool. More than likely that indicates that connections are not being released, which of
course should not happen. Could you take a thread dump to see if there are any hung
threads?
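The leak pattern suspected above comes down to releasing the connection on every code path. This is a hedged illustration, with `FakeConnection` standing in for a pooled `javax.sql.DataSource` connection and a bare counter standing in for the pool:

```java
// Hedged illustration: a pooled connection must be returned even when the
// statement throws, otherwise repeated failures exhaust the pool.
public class ReleaseSketch {
    static int checkedOut = 0;

    static class FakeConnection implements AutoCloseable {
        FakeConnection() { checkedOut++; }                // checkout from pool
        void query() { throw new RuntimeException("Java heap space"); }
        @Override public void close() { checkedOut--; }  // return to pool
    }

    public static void main(String[] args) {
        // try-with-resources guarantees close() runs even when query()
        // throws, so the pool never runs dry after repeated failures.
        for (int i = 0; i < 3; i++) {
            try (FakeConnection c = new FakeConnection()) {
                c.query();
            } catch (RuntimeException e) {
                // error handled; connection already back in the pool
            }
        }
        System.out.println(checkedOut); // 0 connections leaked
    }
}
```

If some path skips `close()` after an error, `checkedOut` stays positive and the next checkout eventually blocks, matching the datasource-exhaustion symptom described above.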
When querying a larger dataset, teiid answers with 504 Gateway
Time-out and throws exception
--------------------------------------------------------------------------------------------
Key: TEIID-5643
URL:
https://issues.jboss.org/browse/TEIID-5643
Project: Teiid
Issue Type: Bug
Affects Versions: 11.2.1
Reporter: Christoph John
Assignee: Steven Hawkins
Priority: Major
Attachments: Heap Dump analysis tool.pdf, logfile2.txt, logfile3.txt
Please note, I am using Teiid 11.2.0 and not 11.2.1 as specified above in this ticket; I
was not able to select version 11.2.0.
The issue: I tried to query a table with a larger number of records (about 600,000
records in the table). Teiid responds with
504 Gateway Time-out. The server didn't respond in time.
If Teiid tries to respond with 600,000 entries to such a request, it is reasonable that
it runs out of memory or that other weird things happen. However, I would expect that in
such a case Teiid just answers with a fixed number of records, say 1000, and provides
a kind of cursor to enable subsequent requests to retrieve more data.
I am currently trying to find a workaround, as the issue blocks my work. Is
there maybe a configuration option that I have to set explicitly to get the kind
of behavior I sketched above?
Further note: when I send such a select * request on the table to Teiid, I am
also unable, for a certain time interval of about a minute, to get a response to other
queries which select a single element from the mentioned table. Such a request then also
results in a 504 Gateway Time-out.
Attached error log:
2019-02-01 22:33:36,126 WARN
[org.jboss.jca.core.connectionmanager.pool.strategy.OnePool]
(Worker2_QueryProcessorQueue5) eayVFuq9y7OY IJ000407: No lazy enlistment available for
my_nutri_diary
2019-02-01 22:33:36,135 INFO [org.teiid.CONNECTOR] (Worker2_QueryProcessorQueue5)
eayVFuq9y7OY MySQLExecutionFactory
Commit=true;DatabaseProductName=MySQL;DatabaseProductVersion=8.0.13;DriverMajorVersion=8;DriverMinorVersion=0;DriverName=MySQL
Connector/J;DriverVersion=mysql-connector-java-8.0.13 (Revision:
66459e9d39c8fd09767992bc592acd2053279be6);IsolationLevel=2
2019-02-01 22:36:56,959 WARN [org.teiid.CONNECTOR] (Worker2_QueryProcessorQueue5)
eayVFuq9y7OY Connector worker process failed for atomic-request=eayVFuq9y7OY.0.0.0:
org.teiid.translator.jdbc.JDBCExecutionException: 0 TEIID11008:TEIID11004 Error executing
statement(s): [Prepared Values: [] SQL: SELECT g_0.`idCode` AS c_0, g_0.`lc` AS c_1,
g_0.`product_name` AS c_2, g_0.`origins` AS c_3, g_0.`brands` AS c_4, g_0.`quantity` AS
c_5, g_0.`nova_group` AS c_6, g_0.`nutrition_grade_fr` AS c_7,
g_0.`ingredients_text_with_allergens` AS c_8, g_0.`energy_100g` AS c_9,
g_0.`carbohydrates_100g` AS c_10, g_0.`sugars_100g` AS c_11, g_0.`proteins_100g` AS c_12,
g_0.`fat_100g` AS c_13, g_0.`saturated_fat_100g` AS c_14, g_0.`saturated_fat_modifier` AS
c_15, g_0.`salt_100g` AS c_16, g_0.`sodium_100g` AS c_17 FROM `FDBProducts` AS g_0 ORDER
BY c_0]
at org.teiid.translator.jdbc.JDBCQueryExecution.execute(JDBCQueryExecution.java:127)
at org.teiid.dqp.internal.datamgr.ConnectorWorkItem.execute(ConnectorWorkItem.java:393)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.teiid.dqp.internal.datamgr.ConnectorManager$1.invoke(ConnectorManager.java:229)
at com.sun.proxy.$Proxy51.execute(Unknown Source)
at org.teiid.dqp.internal.process.DataTierTupleSource.getResults(DataTierTupleSource.java:302)
at org.teiid.dqp.internal.process.DataTierTupleSource$1.call(DataTierTupleSource.java:108)
at org.teiid.dqp.internal.process.DataTierTupleSource$1.call(DataTierTupleSource.java:104)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.teiid.dqp.internal.process.FutureWork.run(FutureWork.java:61)
at org.teiid.dqp.internal.process.DQPWorkContext.runInContext(DQPWorkContext.java:278)
at org.teiid.dqp.internal.process.ThreadReuseExecutor$RunnableWrapper.run(ThreadReuseExecutor.java:115)
at org.teiid.dqp.internal.process.ThreadReuseExecutor$3.run(ThreadReuseExecutor.java:206)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: Java heap space
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:129)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
at com.mysql.cj.jdbc.ClientPreparedStatement.executeInternal(ClientPreparedStatement.java:974)
at com.mysql.cj.jdbc.ClientPreparedStatement.executeQuery(ClientPreparedStatement.java:1024)
at org.jboss.jca.adapters.jdbc.WrappedPreparedStatement.executeQuery(WrappedPreparedStatement.java:504)
at org.teiid.translator.jdbc.JDBCQueryExecution.execute(JDBCQueryExecution.java:119)
... 18 more
2019-02-01 22:36:56,977 WARN [org.teiid.PROCESSOR] (default task-4) eayVFuq9y7OY
TEIID30020 Processing exception for request eayVFuq9y7OY.0 'TEIID30504 my_nutri_diary:
0 TEIID11008:TEIID11004 Error executing statement(s): [Prepared Values: [] SQL: SELECT
g_0.`idCode` AS c_0, g_0.`lc` AS c_1, g_0.`product_name` AS c_2, g_0.`origins` AS c_3,
g_0.`brands` AS c_4, g_0.`quantity` AS c_5, g_0.`nova_group` AS c_6,
g_0.`nutrition_grade_fr` AS c_7, g_0.`ingredients_text_with_allergens` AS c_8,
g_0.`energy_100g` AS c_9, g_0.`carbohydrates_100g` AS c_10, g_0.`sugars_100g` AS c_11,
g_0.`proteins_100g` AS c_12, g_0.`fat_100g` AS c_13, g_0.`saturated_fat_100g` AS c_14,
g_0.`saturated_fat_modifier` AS c_15, g_0.`salt_100g` AS c_16, g_0.`sodium_100g` AS c_17
FROM `FDBProducts` AS g_0 ORDER BY c_0]'. Originally TeiidProcessingException
'Java heap space' SQLError.java:129. Enable more detailed logging to see the
entire stacktrace.
2019-02-01 22:36:56,987 WARN [org.teiid.PROCESSOR] (default task-4) TEIID16053 Unable
to process odata request due to: TEIID30504 my_nutri_diary: 0 TEIID11008:TEIID11004 Error
executing statement(s): [Prepared Values: [] SQL: SELECT g_0.`idCode` AS c_0, g_0.`lc` AS
c_1, g_0.`product_name` AS c_2, g_0.`origins` AS c_3, g_0.`brands` AS c_4, g_0.`quantity`
AS c_5, g_0.`nova_group` AS c_6, g_0.`nutrition_grade_fr` AS c_7,
g_0.`ingredients_text_with_allergens` AS c_8, g_0.`energy_100g` AS c_9,
g_0.`carbohydrates_100g` AS c_10, g_0.`sugars_100g` AS c_11, g_0.`proteins_100g` AS c_12,
g_0.`fat_100g` AS c_13, g_0.`saturated_fat_100g` AS c_14, g_0.`saturated_fat_modifier` AS
c_15, g_0.`salt_100g` AS c_16, g_0.`sodium_100g` AS c_17 FROM `FDBProducts` AS g_0 ORDER
BY c_0] Increase the log level to see the entire stack trace.
2019-02-01 22:36:57,213 WARN [org.teiid.CONNECTOR] (Worker2_QueryProcessorQueue6)
NdaDihMR2Vlt Connector worker process failed for atomic-request=NdaDihMR2Vlt.0.0.1:
org.teiid.translator.jdbc.JDBCExecutionException: 0 TEIID11008:TEIID11004 Error executing
statement(s): [Prepared Values: [] SQL: SELECT g_0.`idCode` AS c_0, g_0.`lc` AS c_1,
g_0.`product_name` AS c_2, g_0.`origins` AS c_3, g_0.`brands` AS c_4, g_0.`quantity` AS
c_5, g_0.`nova_group` AS c_6, g_0.`nutrition_grade_fr` AS c_7,
g_0.`ingredients_text_with_allergens` AS c_8, g_0.`energy_100g` AS c_9,
g_0.`carbohydrates_100g` AS c_10, g_0.`sugars_100g` AS c_11, g_0.`proteins_100g` AS c_12,
g_0.`fat_100g` AS c_13, g_0.`saturated_fat_100g` AS c_14, g_0.`saturated_fat_modifier` AS
c_15, g_0.`salt_100g` AS c_16, g_0.`sodium_100g` AS c_17 FROM `FDBProducts` AS g_0 ORDER
BY c_0]
at org.teiid.translator.jdbc.JDBCQueryExecution.execute(JDBCQueryExecution.java:127)
at org.teiid.dqp.internal.datamgr.ConnectorWorkItem.execute(ConnectorWorkItem.java:393)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.teiid.dqp.internal.datamgr.ConnectorManager$1.invoke(ConnectorManager.java:229)
at com.sun.proxy.$Proxy51.execute(Unknown Source)
at org.teiid.dqp.internal.process.DataTierTupleSource.getResults(DataTierTupleSource.java:302)
at org.teiid.dqp.internal.process.DataTierTupleSource$1.call(DataTierTupleSource.java:108)
at org.teiid.dqp.internal.process.DataTierTupleSource$1.call(DataTierTupleSource.java:104)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.teiid.dqp.internal.process.FutureWork.run(FutureWork.java:61)
at org.teiid.dqp.internal.process.DQPWorkContext.runInContext(DQPWorkContext.java:278)
at org.teiid.dqp.internal.process.ThreadReuseExecutor$RunnableWrapper.run(ThreadReuseExecutor.java:115)
at org.teiid.dqp.internal.process.ThreadReuseExecutor$3.run(ThreadReuseExecutor.java:206)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
--
This message was sent by Atlassian Jira
(v7.12.1#712002)