[JBoss JIRA] (TEIID-3455) Improve salesforce importing
by Steven Hawkins (JIRA)
[ https://issues.jboss.org/browse/TEIID-3455?page=com.atlassian.jira.plugin... ]
Steven Hawkins resolved TEIID-3455.
-----------------------------------
Assignee: Steven Hawkins (was: Mark Drilling)
Resolution: Done
Updated the importer to first import the tables, then fetch column and relationship information in batches.
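The batching described above can be sketched roughly as follows. This is a minimal sketch, not the actual Teiid code: `BatchedDescribe` and `batches` are hypothetical names, but the 100-object limit matches the Salesforce `describeSObjects` API mentioned in the issue.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchedDescribe {
    // Salesforce's describeSObjects call accepts at most 100 object names per request
    static final int MAX_BATCH = 100;

    /** Splits the full list of object names into batches of at most MAX_BATCH each. */
    static List<List<String>> batches(List<String> names) {
        List<List<String>> result = new ArrayList<>();
        for (int i = 0; i < names.size(); i += MAX_BATCH) {
            result.add(names.subList(i, Math.min(i + MAX_BATCH, names.size())));
        }
        return result;
    }
}
```

Each batch would then be passed to a single describe call, instead of issuing one call per table.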
> Improve salesforce importing
> ----------------------------
>
> Key: TEIID-3455
> URL: https://issues.jboss.org/browse/TEIID-3455
> Project: Teiid
> Issue Type: Enhancement
> Components: Salesforce Connector
> Reporter: Steven Hawkins
> Assignee: Steven Hawkins
> Fix For: 8.11
>
>
> In working on TEIID-3429, we should improve the Salesforce logic so it does not pull table metadata one table at a time. The Salesforce API can describe up to 100 objects per call.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
9 years, 8 months
[JBoss JIRA] (TEIID-3456) NPE with excluded tables and the salesforce importer
by Steven Hawkins (JIRA)
Steven Hawkins created TEIID-3456:
-------------------------------------
Summary: NPE with excluded tables and the salesforce importer
Key: TEIID-3456
URL: https://issues.jboss.org/browse/TEIID-3456
Project: Teiid
Issue Type: Bug
Components: Salesforce Connector
Affects Versions: 8.10
Reporter: Steven Hawkins
Assignee: Mark Drilling
Fix For: 8.11
If a child table is excluded from import, a null pointer exception occurs when we process the relationship information.
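A fix along these lines would guard the relationship pass against tables the exclude filter dropped. This is a hypothetical sketch, not the actual importer code; `foreignKeyFor` and the map of imported tables are illustrative names only.

```java
import java.util.Map;

public class RelationshipImport {
    /**
     * Resolves a relationship only if its child table was actually imported.
     * Returns null (rather than throwing an NPE) when the child table
     * was removed by the importer's exclude-tables setting.
     */
    static String foreignKeyFor(Map<String, String> importedTables,
                                String childName, String parentName) {
        String child = importedTables.get(childName);
        if (child == null) {
            // child table was excluded from import; skip the relationship
            return null;
        }
        return child + " -> " + parentName;
    }
}
```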
[JBoss JIRA] (TEIID-3455) Improve salesforce importing
by Steven Hawkins (JIRA)
Steven Hawkins created TEIID-3455:
-------------------------------------
Summary: Improve salesforce importing
Key: TEIID-3455
URL: https://issues.jboss.org/browse/TEIID-3455
Project: Teiid
Issue Type: Enhancement
Components: Salesforce Connector
Reporter: Steven Hawkins
Assignee: Mark Drilling
Fix For: 8.11
In working on TEIID-3429, we should improve the Salesforce logic so it does not pull table metadata one table at a time. The Salesforce API can describe up to 100 objects per call.
[JBoss JIRA] (TEIID-3451) OData does not inject schema into queries
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/TEIID-3451?page=com.atlassian.jira.plugin... ]
RH Bugzilla Integration updated TEIID-3451:
-------------------------------------------
Bugzilla References: https://bugzilla.redhat.com/show_bug.cgi?id=1214445, https://bugzilla.redhat.com/show_bug.cgi?id=1215675 (was: https://bugzilla.redhat.com/show_bug.cgi?id=1214445)
> OData does not inject schema into queries
> -----------------------------------------
>
> Key: TEIID-3451
> URL: https://issues.jboss.org/browse/TEIID-3451
> Project: Teiid
> Issue Type: Bug
> Components: OData
> Affects Versions: 8.7
> Reporter: Debbie Steigner
> Assignee: Steven Hawkins
> Fix For: 8.7.1.6_2, 8.11
>
>
> The OData service does not inject fully qualified object names for tables in POSTs, PUTs, and DELETEs:
> 12:50:34,367 DEBUG [org.teiid.COMMAND_LOG] (http-localhost/127.0.0.1:8080-1) START USER COMMAND: startTime=2015-04-22 12:50:34.367 requestID=XxHTbednhDq7.0 txID=null sessionID=XxHTbednhDq7 applicationName=JDBC principal=teiidUser@teiid-security vdbName=ImsOne vdbVersion=2 sql=INSERT INTO Subscription (SUBSCRIPTION_ID, CLIENT_NAME, DEST_CONNECTION_URI, DEST_SCHEMA_NAME, DEST_TABLE_NAME, PROVIDER_URL, TOPIC_NAME) VALUES (?, ?, ?, ?, ?, ?, ?)
> 12:50:34,380 DEBUG [org.teiid.COMMAND_LOG] (http-localhost/127.0.0.1:8080-1) ERROR USER COMMAND: endTime=2015-04-22 12:50:34.379 requestID=XxHTbednhDq7.0 txID=null sessionID=XxHTbednhDq7 principal=teiidUser@teiid-security vdbName=ImsOne vdbVersion=2 finalRowCount=null
> 12:50:34,380 WARN [org.teiid.PROCESSOR] (http-localhost/127.0.0.1:8080-1) TEIID30020 Processing exception for request XxHTbednhDq7.0 'Group specified is ambiguous, resubmit the query by fully qualifying the group name: Subscription'. Originally QueryResolverException ResolverUtil.java:814. Enable more detailed logging to see the entire stacktrace.
> 12:50:34,383 WARN [org.teiid.ODATA] (http-localhost/127.0.0.1:8080-1) TEIID16012 Could not produce a successful OData response. Returning status ServerErrorException with message Group specified is ambiguous, resubmit the query by fully qualifying the group name: Subscription.
> The same insert works fine over JDBC. Offending line in the 8.7.0 public GitHub repository:
> https://github.com/teiid/teiid/blob/8.7.x/odata/src/main/java/org/teiid/o...
> Note that the private Table findTable() method on line 925 depends on org.odata4j.core.EdmEntitySet#getName to return the name, but that name is not fully qualified
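A fix would qualify the entity set's bare name with its schema before resolution, so the query planner is not left with an ambiguous group name. This is a minimal sketch under that assumption; `ODataNameQualifier` and `qualify` are hypothetical names, not Teiid's actual method.

```java
public class ODataNameQualifier {
    /**
     * Builds a fully qualified name (schema.table) for query resolution.
     * EdmEntitySet#getName returns only the unqualified table name, so the
     * schema must be prepended before the name reaches the resolver.
     */
    static String qualify(String schemaName, String entitySetName) {
        return schemaName + "." + entitySetName;
    }
}
```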
[JBoss JIRA] (TEIID-3442) Apache Spark support via SparkSQL and DataFrames
by Ramesh Reddy (JIRA)
[ https://issues.jboss.org/browse/TEIID-3442?page=com.atlassian.jira.plugin... ]
Ramesh Reddy commented on TEIID-3442:
-------------------------------------
[~blue666man] is this something you have bandwidth to contribute to Teiid? To me, using the Thrift JDBC driver seems like a good idea. Also, I want to confirm that you primarily intend to use Spark as a source?
> Apache Spark support via SparkSQL and DataFrames
> ------------------------------------------------
>
> Key: TEIID-3442
> URL: https://issues.jboss.org/browse/TEIID-3442
> Project: Teiid
> Issue Type: Feature Request
> Components: Misc. Connectors
> Affects Versions: 8.10
> Reporter: John Muller
> Labels: Connectors, Spark, Translators
> Fix For: Open To Community
>
> Original Estimate: 20 weeks
> Remaining Estimate: 20 weeks
>
> Eliciting comments for Apache Spark support. With the release of Pandas-like DataFrames, it is more feasible to translate directly to SparkSQL:
> https://spark.apache.org/docs/latest/sql-programming-guide.html
> Options in order of complexity:
> 1. Use the existing Hive connector / translator. Spark still uses the Hive metastore.
> 2. Thrift JDBC driver. This is what MicroStrategy, Tableau, QlikView, and others use; it is the most rudimentary API for accessing Spark.
> 3. Native SparkSQL via building Spark jobs and submitting them to a running Spark driver.
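Option 2 above is the simplest integration path: Spark's Thrift JDBC/ODBC server speaks the HiveServer2 wire protocol, so the standard Hive JDBC driver (`org.apache.hive.jdbc.HiveDriver`) can connect to it. A minimal sketch of building the connection URL, with the default Thrift server port of 10000 assumed:

```java
public class SparkThriftUrl {
    /**
     * Builds a JDBC URL for Spark's Thrift server, which is
     * HiveServer2-compatible and accepts hive2 JDBC URLs.
     */
    static String url(String host, int port, String database) {
        return "jdbc:hive2://" + host + ":" + port + "/" + database;
    }
}
```

A translator built this way could largely reuse the existing Hive translator, since the SQL dialect reaching the Thrift server is HiveQL-compatible.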