[JBoss JIRA] (TEIID-4758) Permanent materialization load failure when target source goes down
by Ramesh Reddy (JIRA)
[ https://issues.jboss.org/browse/TEIID-4758?page=com.atlassian.jira.plugin... ]
Ramesh Reddy updated TEIID-4758:
--------------------------------
Fix Version/s: 9.3
> Permanent materialization load failure when target source goes down
> -------------------------------------------------------------------
>
> Key: TEIID-4758
> URL: https://issues.jboss.org/browse/TEIID-4758
> Project: Teiid
> Issue Type: Bug
> Components: Server
> Affects Versions: 8.12
> Reporter: Ramesh Reddy
> Assignee: Ramesh Reddy
> Fix For: 9.3
>
>
> During the external materialization load, if the target cache database goes offline, the materialization job stops, but the {{Status}} table is left in the {{LOADING}} state, from which it never recovers even when the target cache database comes back up again.
> This situation has been observed when JDG is used in OpenShift along with JDV; however, the behavior can occur in standalone setups too. The system should be resilient and must recover from this situation.
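Once the target is reachable again, the stuck row can be cleared by hand. A minimal manual-recovery sketch, assuming the conventional external-materialization status table layout ({{Name}}, {{LoadState}}, {{Valid}} columns) and a hypothetical {{statusDataSource}} pointing at the cache database:
{code:java}
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;

public final class StatusReset {
    /**
     * Flips a view stuck in LOADING back to NEEDS_LOADING so the next
     * scheduler pass reloads it. Table and column names follow the usual
     * status-table convention; adjust them to your deployment.
     */
    public static int reset(DataSource statusDataSource, String viewName) throws Exception {
        try (Connection c = statusDataSource.getConnection();
             PreparedStatement ps = c.prepareStatement(
                 "UPDATE Status SET LoadState = 'NEEDS_LOADING', Valid = false"
                     + " WHERE Name = ? AND LoadState = 'LOADING'")) {
            ps.setString(1, viewName);
            return ps.executeUpdate(); // 1 if a stuck row was cleared
        }
    }
}
{code}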
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (TEIID-4758) Permanent materialization load failure when target source goes down
by Ramesh Reddy (JIRA)
[ https://issues.jboss.org/browse/TEIID-4758?page=com.atlassian.jira.plugin... ]
Ramesh Reddy commented on TEIID-4758:
-------------------------------------
Contrary to what the description of this JIRA says, when the database does come back online the previous materialization job is not resumed; instead, a new job is scheduled to execute. So, the system does recover in my testing. If this is not working for JDG, then it may be an isolated issue with that source.
> Permanent materialization load failure when target source goes down
> -------------------------------------------------------------------
>
> Key: TEIID-4758
> URL: https://issues.jboss.org/browse/TEIID-4758
> Project: Teiid
> Issue Type: Bug
> Components: Server
> Affects Versions: 8.12
> Reporter: Ramesh Reddy
> Assignee: Ramesh Reddy
>
> During the external materialization load, if the target cache database goes offline, the materialization job stops, but the {{Status}} table is left in the {{LOADING}} state, from which it never recovers even when the target cache database comes back up again.
> This situation has been observed when JDG is used in OpenShift along with JDV; however, the behavior can occur in standalone setups too. The system should be resilient and must recover from this situation.
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (TEIID-4759) PrestoDB - not all custom timezone IDs are supported
by Steven Hawkins (JIRA)
[ https://issues.jboss.org/browse/TEIID-4759?page=com.atlassian.jira.plugin... ]
Steven Hawkins resolved TEIID-4759.
-----------------------------------
Fix Version/s: 9.2
Resolution: Done
Merged the doc note. Thanks Juraj.
> PrestoDB - not all custom timezone IDs are supported
> ----------------------------------------------------
>
> Key: TEIID-4759
> URL: https://issues.jboss.org/browse/TEIID-4759
> Project: Teiid
> Issue Type: Enhancement
> Components: Documentation
> Reporter: Juraj Duráni
> Assignee: Juraj Duráni
> Priority: Minor
> Fix For: 9.2
>
>
> The PrestoDB JDBC driver uses the Joda-Time library. If a user needs to set a custom timezone for the Teiid server (e.g. by exporting it in JAVA_OPTS), he/she cannot use the _GMT_ format, as Joda-Time does not recognize it \[1\]. However, the equivalent _Etc/*_ format can be used (see [this page|http://joda-time.sourceforge.net/timezones.html]).
> It would be nice to have a note in the documentation; a short sketch of the workaround follows the exception below.
> {code:plain|title=\[1\] Exception}
> 09:52:27,653 WARN [org.teiid.CONNECTOR] (Worker1_QueryProcessorQueue1) Connector worker process failed for atomic-request=5D1r1sDRkmmk.0.0.0: org.teiid.translator.jdbc.JDBCExecutionException: 0 TEIID11008:TEIID11004 Error executing statement(s): [SQL: SELECT g_0.intkey AS c_0, g_0.stringkey AS c_1, g_0.intnum AS c_2, g_0.stringnum AS c_3, g_0.floatnum AS c_4, g_0.longnum AS c_5, g_0.doublenum AS c_6, g_0.bytenum AS c_7, g_0.datevalue AS c_8, g_0.timevalue AS c_9, g_0.timestampvalue AS c_10, g_0.booleanvalue AS c_11, g_0.charvalue AS c_12, g_0.shortvalue AS c_13, cast(g_0.bigintegervalue AS bigint) AS c_14, g_0.bigdecimalvalue AS c_15, g_0.objectvalue AS c_16 FROM smalla AS g_0 LIMIT 100]
> at org.teiid.translator.jdbc.JDBCQueryExecution.execute(JDBCQueryExecution.java:131) [translator-jdbc-8.12.5.redhat-8.jar:8.12.5.redhat-8]
> at org.teiid.dqp.internal.datamgr.ConnectorWorkItem.execute(ConnectorWorkItem.java:364)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.8.0-internal]
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [rt.jar:1.8.0-internal]
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0-internal]
> at java.lang.reflect.Method.invoke(Method.java:483) [rt.jar:1.8.0-internal]
> at org.teiid.dqp.internal.datamgr.ConnectorManager$1.invoke(ConnectorManager.java:211)
> at com.sun.proxy.$Proxy49.execute(Unknown Source)
> at org.teiid.dqp.internal.process.DataTierTupleSource.getResults(DataTierTupleSource.java:306)
> at org.teiid.dqp.internal.process.DataTierTupleSource$1.call(DataTierTupleSource.java:112)
> at org.teiid.dqp.internal.process.DataTierTupleSource$1.call(DataTierTupleSource.java:108)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0-internal]
> at org.teiid.dqp.internal.process.FutureWork.run(FutureWork.java:65)
> at org.teiid.dqp.internal.process.DQPWorkContext.runInContext(DQPWorkContext.java:276)
> at org.teiid.dqp.internal.process.ThreadReuseExecutor$RunnableWrapper.run(ThreadReuseExecutor.java:119)
> at org.teiid.dqp.internal.process.ThreadReuseExecutor$3.run(ThreadReuseExecutor.java:210)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0-internal]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0-internal]
> at java.lang.Thread.run(Thread.java:744) [rt.jar:1.8.0-internal]
> Caused by: java.sql.SQLException: Error executing query
> at com.facebook.presto.jdbc.PrestoStatement.execute(PrestoStatement.java:232)
> at com.facebook.presto.jdbc.PrestoStatement.executeQuery(PrestoStatement.java:69)
> at org.jboss.jca.adapters.jdbc.WrappedStatement.executeQuery(WrappedStatement.java:344)
> at org.teiid.translator.jdbc.JDBCQueryExecution.execute(JDBCQueryExecution.java:119) [translator-jdbc-8.12.5.redhat-8.jar:8.12.5.redhat-8]
> ... 18 more
> Caused by: java.lang.IllegalArgumentException: The datetime zone id 'GMT+01:00' is not recognised
> at com.facebook.presto.jdbc.internal.joda.time.DateTimeZone.forID(DateTimeZone.java:229)
> at com.facebook.presto.jdbc.PrestoResultSet.<init>(PrestoResultSet.java:122)
> at com.facebook.presto.jdbc.PrestoStatement.execute(PrestoStatement.java:212)
> ... 21 more
> {code}
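For reference, a minimal sketch of the difference, assuming Joda-Time is on the classpath. Note the inverted sign convention of the _Etc/*_ area: {{Etc/GMT-1}} means UTC+01:00.
{code:java}
import org.joda.time.DateTimeZone;

public class ZoneIdCheck {
    public static void main(String[] args) {
        try {
            // JDK-style custom ID; Joda-Time rejects it, as in the trace above.
            DateTimeZone.forID("GMT+01:00");
        } catch (IllegalArgumentException e) {
            System.out.println("Rejected: " + e.getMessage());
        }
        // tz-database equivalent of GMT+01:00 (the sign is inverted on purpose).
        System.out.println("Accepted: " + DateTimeZone.forID("Etc/GMT-1"));
    }
}
{code}
So the server would be started with e.g. {{-Duser.timezone=Etc/GMT-1}} instead of {{-Duser.timezone=GMT+01:00}}.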
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (TEIID-4758) Permanent materialization load failure when target source goes down
by Ramesh Reddy (JIRA)
[ https://issues.jboss.org/browse/TEIID-4758?page=com.atlassian.jira.plugin... ]
Ramesh Reddy commented on TEIID-4758:
-------------------------------------
Even in server mode, when the data source itself goes down, there is no notification. Teiid only tracks the addition/removal of the data source connection configuration, not the data source itself. When a data source goes down, WF can't hand out a valid connection, but the VDB is still considered active because the connection configuration is complete. We would need to come up with another way to proactively validate the "liveness" of sources for this.
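A minimal sketch of what such a probe could look like, assuming a plain {{javax.sql.DataSource}} handle; the class and its wiring are hypothetical, not existing Teiid internals:
{code:java}
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

// Hypothetical probe; Teiid exposes no such hook today.
public final class SourceLivenessProbe {
    private final DataSource ds;

    public SourceLivenessProbe(DataSource ds) {
        this.ds = ds;
    }

    /** True only if a connection can be obtained and passes a driver ping. */
    public boolean isAlive() {
        try (Connection c = ds.getConnection()) {
            return c.isValid(5); // driver-level ping with a 5 second timeout
        } catch (SQLException e) {
            return false;
        }
    }
}
{code}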
> Permanent materialization load failure when target source goes down
> -------------------------------------------------------------------
>
> Key: TEIID-4758
> URL: https://issues.jboss.org/browse/TEIID-4758
> Project: Teiid
> Issue Type: Bug
> Components: Server
> Affects Versions: 8.12
> Reporter: Ramesh Reddy
> Assignee: Ramesh Reddy
>
> During the external materialization load, if the target cache database goes offline, the materialization job stops, but the {{Status}} table is left in the {{LOADING}} state, from which it never recovers even when the target cache database comes back up again.
> This situation has been observed when JDG is used in OpenShift along with JDV; however, the behavior can occur in standalone setups too. The system should be resilient and must recover from this situation.
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (TEIID-4758) Permanent materialization load failure when target source goes down
by Ramesh Reddy (JIRA)
[ https://issues.jboss.org/browse/TEIID-4758?page=com.atlassian.jira.plugin... ]
Ramesh Reddy commented on TEIID-4758:
-------------------------------------
I have tried to duplicate the error using 3 different databases. Here is the behavior:
H2 -> Holds the {{Status}} table
PG -> The source database; I have TPC-H generated data.
MySQL -> The target database for the materialized view; the load takes several minutes to complete.
Now when MySQL is taken down during the materialization run, the sequence of events is as below:
{code}
scheduling job on table large_view
Name = large_view LoadState = LOADING Valid = FALSE
Name = large_view LoadState = LOADING Valid = FALSE
2017-02-14 11:05:01,074 WARN [Worker1_QueryProcessorQueue60] org.teiid.CONNECTOR - Connector worker process failed for atomic-request=As/Jp3qn4/Vd.0.102.31
Name = large_view LoadState = FAILED_LOAD Valid = FALSE
scheduling job on table large_view
Name = large_view LoadState = LOADING Valid = FALSE
Name = large_view LoadState = LOADING Valid = FALSE
{code}
So, it means that if the target database is not available, the load flips the status to {{FAILED_LOAD}}, but since another scheduled job starts immediately, the status flips back to {{LOADING}}. When the load fails again, the cycle continues.
Now if I keep the target database alive and take down the source database (PG in this case), the sequence is:
{code}
scheduling job on table large_view
Name = large_view LoadState = LOADING Valid = FALSE
Name = large_view LoadState = LOADING Valid = FALSE
2017-02-14 11:15:47,740 WARN [Worker0_QueryProcessorQueue14] org.teiid.CONNECTOR - Connector worker process failed for atomic-request=ae3kcWA7YCfi.0.101.4
scheduling job on table large_view
Name = large_view LoadState = FAILED_LOAD Valid = FALSE
2017-02-14 11:15:50,787 WARN [Worker3_QueryProcessorQueue34] org.teiid.CONNECTOR - Connector worker process failed for atomic-request=TI1aTvRX4JcG.0.113.18
scheduling job on table large_view
Name = large_view LoadState = FAILED_LOAD Valid = FALSE
2017-02-14 11:15:53,821 WARN [Worker3_QueryProcessorQueue44] org.teiid.CONNECTOR - Connector worker process failed for atomic-request=z/modN36UD/y.0.125.23
scheduling job on table large_view
2017-02-14 11:15:56,850 WARN [Worker3_QueryProcessorQueue55] org.teiid.CONNECTOR - Connector worker process failed for atomic-request=TUKsriWs1ito.0.137.29
scheduling job on table large_view
Name = large_view LoadState = FAILED_LOAD Valid = FALSE
2017-02-14 11:15:59,906 WARN [Worker3_QueryProcessorQueue65] org.teiid.CONNECTOR - Connector worker process failed for atomic-request=+33ACkiLU6TS.0.101.34
scheduling job on table large_view
scheduling job on table large_view
Name = large_view LoadState = FAILED_LOAD Valid = FALSE
scheduling job on table large_view
{code}
Here, when the source is down, the materialization fails fast and re-schedules; the cycle repeats quickly.
The simplest solution may be to check the validity of the VDB before scheduling the next job. When a database source is down, the VDB is still active, but its validity is false. However, this will only work in server mode, as the Teiid server actively monitors the data source connections.
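In pseudo-form, the proposed guard would look roughly like this; every name here is illustrative, not an actual Teiid internal:
{code:java}
// Illustrative only: back off when the VDB is not valid instead of
// flip-flopping the Status table between LOADING and FAILED_LOAD.
public final class LoadScheduler {
    private static final long RETRY_DELAY_MILLIS = 30_000L;

    interface VdbValidity {
        boolean isValid(String vdbName, String vdbVersion);
    }

    private final VdbValidity validity;

    LoadScheduler(VdbValidity validity) {
        this.validity = validity;
    }

    void scheduleNextLoad(String vdbName, String vdbVersion, Runnable loadJob) {
        if (!validity.isValid(vdbName, vdbVersion)) {
            reschedule(loadJob, RETRY_DELAY_MILLIS); // skip the futile load
            return;
        }
        loadJob.run();
    }

    private void reschedule(Runnable job, long delayMillis) {
        // Hand the job back to whatever timer drives materialization.
    }
}
{code}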
> Permanent materialization load failure when target source goes down
> -------------------------------------------------------------------
>
> Key: TEIID-4758
> URL: https://issues.jboss.org/browse/TEIID-4758
> Project: Teiid
> Issue Type: Bug
> Components: Server
> Affects Versions: 8.12
> Reporter: Ramesh Reddy
> Assignee: Ramesh Reddy
>
> During the external materialization load, if the target cache database goes offline, the materialization job stops, but the {{Status}} table is left in the {{LOADING}} state, from which it never recovers even when the target cache database comes back up again.
> This situation has been observed when JDG is used in OpenShift along with JDV; however, the behavior can occur in standalone setups too. The system should be resilient and must recover from this situation.
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (TEIID-4763) Max open statements property limits statement count to n+1
by Juraj Duráni (JIRA)
Juraj Duráni created TEIID-4763:
-----------------------------------
Summary: Max open statements property limits statement count to n+1
Key: TEIID-4763
URL: https://issues.jboss.org/browse/TEIID-4763
Project: Teiid
Issue Type: Bug
Components: JDBC Driver
Reporter: Juraj Duráni
Assignee: Juraj Duráni
Priority: Optional
When a user wants to limit the maximum number of open statements for a single connection, he/she can use the system property *org.teiid.maxOpenStatements*. However, if this property is set to _n_, it actually limits the number of open statements to _n+1_.
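A minimal reproduction sketch, assuming the Teiid JDBC driver is on the classpath; the VDB name, URL, and credentials are placeholders for any running Teiid instance:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MaxOpenStatementsRepro {
    public static void main(String[] args) throws Exception {
        // Ask for at most 2 open statements per connection.
        System.setProperty("org.teiid.maxOpenStatements", "2");
        try (Connection c = DriverManager.getConnection(
                "jdbc:teiid:myVDB@mm://localhost:31000", "user", "pass")) {
            Statement s1 = c.createStatement(); // 1st: expected to succeed
            Statement s2 = c.createStatement(); // 2nd: expected to succeed
            // With the off-by-one, this 3rd statement also succeeds;
            // only a 4th createStatement() would be rejected.
            Statement s3 = c.createStatement();
            System.out.println("3rd statement opened: limit behaves as n+1");
        }
    }
}
{code}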
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)