[teiid-issues] [JBoss JIRA] (TEIID-4997) Teiid on/with Spark

Steven Hawkins (JIRA) issues at jboss.org
Thu Jul 27 08:04:00 EDT 2017


    [ https://issues.jboss.org/browse/TEIID-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13441047#comment-13441047 ] 

Steven Hawkins commented on TEIID-4997:
---------------------------------------

> Please open a JIRA for this.

This can be done under TEIID-5007.

> Can you elaborate a little more? I will make this part of the previous JIRA.

If we are co-located, then our entry point will be when our driver class is loaded and/or when a vdb is accessed with the embedded connection.
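
As a minimal sketch of that entry point (the vdb resource name and configuration are illustrative assumptions, not settled design):

    import java.sql.Connection;
    import org.teiid.runtime.EmbeddedConfiguration;
    import org.teiid.runtime.EmbeddedServer;

    public class ColocatedEntryPoint {
        public static void main(String[] args) throws Exception {
            // start Teiid embedded in the co-located JVM
            EmbeddedServer server = new EmbeddedServer();
            server.start(new EmbeddedConfiguration());
            // "/my-vdb.xml" is a hypothetical resource; how the vdb metadata
            // reaches each worker is exactly the open question here
            server.deployVDB(ColocatedEntryPoint.class.getResourceAsStream("/my-vdb.xml"));
            // obtain the embedded, in-process connection
            Connection conn = server.getDriver().connect("jdbc:teiid:vdb", null);
        }
    }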

> Can you elaborate here too? My preference is: let's drop the import VDB and multi-VDB in a single container, and work with a single database, hiding the semantics of the VDB altogether from the user.

I'm not saying the user will be aware of the vdb construct.  Only that if you are co-locating, you have to consider how the appropriate metadata will get associated with the Spark workers.

> I think plan cost should decide this, no? There may be cases where a small set of data processing does not really warrant Spark cluster usage.

I must not have been clear enough there.  I'm saying that it will simplify a lot of concerns to embed Spark in Teiid for the moment.  While that does not take advantage of a Spark cluster, it does allow for a much lower-effort POC.
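
As a sketch of that POC shape (standard Spark API; the application name is arbitrary):

    import org.apache.spark.sql.SparkSession;

    // Spark in local mode inside the Teiid JVM: no cluster required,
    // so the whole POC stays in a single process
    SparkSession spark = SparkSession.builder()
        .appName("teiid-spark-poc")
        .master("local[*]")
        .getOrCreate();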





> Teiid on/with Spark
> -------------------
>
>                 Key: TEIID-4997
>                 URL: https://issues.jboss.org/browse/TEIID-4997
>             Project: Teiid
>          Issue Type: Feature Request
>          Components: Build/Kits, Query Engine
>            Reporter: Steven Hawkins
>            Assignee: Steven Hawkins
>
> With the availability of Spark on OpenShift, we should provide a cooperative planning/execution mode for Teiid that utilizes the Spark engine.
> Roughly this would look like a Teiid master running embedded with the Spark master, serving the typical JDBC/ODBC/OData endpoints.  On an incoming query the optimizer would choose to process against Spark or with Teiid; if processing with Teiid, that may still require submitting the job to a worker to avoid burdening the master.  Alternatively, the Teiid master could run in a separate pod with the additional serialization costs; however, initially the remote Spark [JDBC/ODBC layer|https://spark.apache.org/docs/latest/sql-programming-guide.html#distributed-sql-engine] will not be available in the OpenShift effort.
> If execution against Spark is chosen, then instead of a typical Teiid processor plan, a Spark job will be created.  Initially this could be limited to relational plans, but it could be expanded to include procedure language support translated to Python, Scala, etc.  The Spark job would represent each source access as a [temporary view|https://spark.apache.org/docs/latest/sql-programming-guide.html#jdbc-to-other-databases] accessing the relevant pushdown query.  Ideally this would be executed against a Teiid Embedded instance running in the worker node.  If remote, this would incur an extra hop and have security considerations.  This can be thought of as using Teiid for its virtualization and access layer features.  The rest of the processing above the access layer could then be represented as Spark SQL.
> For example a Teiid user query of "select * from hdfs.tbl h, oracle.tbl o where h.id = o.id order by h.col" would become the Spark SQL job:
> CREATE TEMPORARY VIEW h
> USING org.apache.spark.sql.jdbc
> OPTIONS (
>   url "jdbc:teiid:vdb",
>   dbtable "(select col ... from hdfs.tbl)",
>   fetchSize '1024',
>   ...
> )
> CREATE TEMPORARY VIEW o
> USING org.apache.spark.sql.jdbc
> OPTIONS (
>   url "jdbc:teiid:vdb",
>   dbtable "(select col ... from oracle.tbl)",
>   fetchSize '1024',
>   ...
> )
> SELECT * FROM h INNER JOIN o ON h.id = o.id ORDER BY h.col
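>
> The same job expressed against Spark's public Java API (a sketch; it assumes an existing SparkSession "spark", and the pushdown queries are left elided as above):
>
>     import java.util.Properties;
>     import org.apache.spark.sql.Dataset;
>     import org.apache.spark.sql.Row;
>
>     Properties props = new Properties();
>     props.setProperty("fetchsize", "1024");
>     // each source access becomes a temp view over a Teiid pushdown query
>     Dataset<Row> h = spark.read().jdbc("jdbc:teiid:vdb", "(select col ... from hdfs.tbl) h", props);
>     h.createOrReplaceTempView("h");
>     Dataset<Row> o = spark.read().jdbc("jdbc:teiid:vdb", "(select col ... from oracle.tbl) o", props);
>     o.createOrReplaceTempView("o");
>     // the remaining processing is plain Spark SQL
>     Dataset<Row> result = spark.sql("SELECT * FROM h INNER JOIN o ON h.id = o.id ORDER BY h.col");
>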
> The challenges/considerations of this are:
> * Utilizing Teiid Embedded with coordinated VDB management.  There's the associated issue of driver management as well.
> * Translating Teiid SQL to Spark SQL.  All Teiid functions, UDFs, and aggregate functions would need to be made known to Spark.  Table function constructs, such as XMLTABLE, TEXTTABLE, etc., could initially just be treated as access layer concerns.  Type issues would exist, as xml/clob/json would map to string.
> * No XA support.
> * We'd need to provide reasonable values for fetch size, partition information, etc. in the access layer queries.
> * We'd have to determine the extent to which federated join optimizations need to be conveyed (dependent join and pushdown), as that would go beyond simply translating to Spark SQL.
> * There's a potential to use [global temporary views|http://www.gatorsmile.io/globaltempview/], which is a more convenient way of adding virtualization to Spark (see the sketch after this list).
> * Large internal materialization should be re-targeted to Spark or JDG.
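>
> A sketch of that global temporary view variant (standard Spark API; names are illustrative, and "spark" and "props" are as in the earlier sketch):
>
>     // global temp views are visible across sessions of one Spark application,
>     // under the reserved global_temp database
>     Dataset<Row> o = spark.read().jdbc("jdbc:teiid:vdb", "(select col ... from oracle.tbl) o", props);
>     o.createGlobalTempView("o");
>     spark.newSession().sql("SELECT * FROM global_temp.o").show();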




