When integrating information using a federated query planner, it is useful to view the query plans to better understand how information is being accessed and processed, and to troubleshoot problems.

A query plan (also known as an execution or processing plan) is a set of instructions created by a query engine for executing a command submitted by a user or application. The purpose of the query plan is to execute the user's query in as efficient a way as possible.

h1. Getting a Query Plan

...

You can get a query plan any time you execute a command. The SQL options available are as follows:

*SET SHOWPLAN \[ON\|DEBUG\]* - Returns the processing plan or, with DEBUG, the plan and the full planner debug log. See [Debug Log|Query Planner] and the [Set Statement].

With the above options, the query plan is available from the Statement object by casting to the {{org.teiid.jdbc.TeiidStatement}} interface or by using the "SHOW PLAN" [statement|Show Statement].
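For example, a client session that captures the debug plan might look like the following ({{SELECT ...}} is a placeholder for the query of interest; statement terminators depend on the client):

{code}
SET SHOWPLAN DEBUG
SELECT ...
SHOW PLAN
{code}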
... |
In a procedural context, the ordering of child nodes implies the order of execution. In most other situations, child nodes may be executed in any order, even in parallel. Only in specific optimizations, such as a dependent join, will the children of a join execute serially.
h1. Relational Execution Query Plans

Relational plans represent the processing plan, which is composed of nodes representing the building blocks of logical relational operations. Relational processing plans differ from logical debug relational plans in that they contain additional operations and execution specifics chosen by the optimizer.
... |
h3. Reading a Processor Plan
The query processor plan can be obtained in a plain text or XML format. The plan text format is typically easier to read, while the XML format is easier to process by tooling. When possible, tooling should be used to examine plans, as the tree structures can be deeply nested.

Data flows from the leaves of the tree to the root. Sub plans for procedure execution can be shown inline, and are differentiated by different indentation. Given a user query of "SELECT pm1.g1.e1, pm1.g2.e2, pm1.g3.e3 from pm1.g1 inner join (pm1.g2 left outer join pm1.g3 on pm1.g2.e1=pm1.g3.e1) on pm1.g1.e1=pm1.g3.e1", the text for a processor plan that does not push down the joins would look like:
... |
{code} |
Note that the same information appears in each of the plan forms. In some cases it can actually be easier to follow the simplified format of the debug plan's final processor plan. From the [Debug Log|Query Planner], the same plan as above would appear as:
{code} |
... |
XML document model queries and procedure execution (including INSTEAD OF triggers) use intermediate and final plan forms that include relational plans. Generally the structure of the XML/procedure plans will closely match their logical forms. It is the nested relational plans that will be of interest when analyzing performance issues.
h3. Procedure Planner

The procedure planner is fairly simple. It converts the statements in the procedure into instructions in a program that will be run during processing. This is mostly a 1-to-1 mapping and very little optimization is performed.

h3. XML Planner

The XML planner creates an XML plan that is relatively close to the end result of the procedure planner: a program with instructions. Many of the instructions are even similar (while loop, execute SQL, etc.). Additional instructions deal with producing the output result document (adding elements and attributes).

The XML planner does several types of planning (not necessarily in this order):

* Document selection - determines which tags of the virtual document should be excluded from the output document. This is done based on a combination of the model (which marks parts of the document excluded) and the query (which may specify a subset of columns to include in the SELECT clause).
* Criteria evaluation - breaks apart the user's criteria, determines which result set each criterion should be applied to, and adds that criterion to that result set query.
* Result set ordering - the query's ORDER BY clause is broken up and the ORDER BY is applied to each result set as necessary.
* Result set planning - ultimately, each result set is planned using the relational planner, taking into account all the impacts from the user's query. The planner will also look to automatically create staging tables and dependent joins based upon the mapping class hierarchy.
* Program generation - a set of instructions to produce the desired output document is produced, taking into account the final result set queries and the excluded parts of the document. Generally, this involves walking through the virtual document in document order, executing queries as necessary and emitting elements and attributes.
XML programs can also be recursive, which involves using the same document fragment for both the initial fragment and a set of repeated fragments (each a new query) until some termination criteria or limit is met.

h1. Debug Log

A relational processing plan is created by the optimizer after the logical plan is manipulated by a series of rules. The application of rules is determined both by the query structure and by the rules themselves. The node structure of the debug plan resembles that of the processing plan, but the node types more logically represent SQL operations.

h3. All Nodes

* ACCESS - a source access or plan execution
* DUP_REMOVE - removes duplicate rows
* JOIN - a join (LEFT OUTER, FULL OUTER, INNER, CROSS, SEMI, etc.)
* PROJECT - a projection of tuple values
* SELECT - a filtering of tuples
* SORT - an ordering operation, which may be inserted to process other operations such as joins
* SOURCE - any logical source of tuples, including an inline view, a source access, XMLTABLE, etc.
* GROUP - a grouping operation
* SET_OP - a set operation (UNION/INTERSECT/EXCEPT)
* NULL - a source of no tuples
* TUPLE_LIMIT - row offset / limit

User SQL statements after rewrite are converted into a canonical plan form. The canonical plan form most closely resembles the initial SQL structure. A SQL SELECT query has the following possible clauses (all but SELECT are optional): WITH, SELECT, FROM, WHERE, GROUP BY, HAVING, ORDER BY, LIMIT.
These clauses are logically executed in the following order:

# WITH (create common table expressions)
# FROM (read and join all data from tables)
# WHERE (filter rows)
# GROUP BY (group rows into collapsed rows)
# HAVING (filter grouped rows)
# SELECT (evaluate expressions and return only requested rows)
# INTO (insert the rows into the target table, if specified)
# ORDER BY (sort rows)
# LIMIT (limit the result set to a certain range of results)

These clauses translate into the following types of planning nodes:

* FROM: Source node for each FROM clause item; Join node (if more than one table)
* WHERE: Select node
* GROUP BY: Group node
* HAVING: Select node
* SELECT: Project node and DUP_REMOVE node (for SELECT DISTINCT)
* INTO: Project node with a Source node
* ORDER BY: Sort node
* LIMIT: Limit node
* UNION, EXCEPT, INTERSECT: SET_OP node

For example, a SQL statement such as SELECT max(pm1.g1.e1) FROM pm1.g1 WHERE e2 = 1 creates a logical plan:

{code}
Project(groups=[anon_grp0], props={PROJECT_COLS=[anon_grp0.agg0 AS expr1]})
  Group(groups=[anon_grp0], props={SYMBOL_MAP={anon_grp0.agg0=MAX(pm1.g1.e1)}})
    Select(groups=[pm1.g1], props={SELECT_CRITERIA=e2 = 1})
      Source(groups=[pm1.g1])
{code}

Here the Source corresponds to the FROM clause, the Select corresponds to the WHERE clause, the Group corresponds to the implied grouping to create the MAX aggregate, and the Project corresponds to the SELECT clause. Note that the effect of grouping generates what is effectively an inline view, anon_grp0, to handle the projection of values created by the grouping.

h3. Node Properties

Each node has a set of applicable properties that are typically shown on the node.
* Access Properties
** ATOMIC_REQUEST - the final form of a source request
** MODEL_ID - the metadata object for the target model/schema
** PROCEDURE_CRITERIA/PROCEDURE_INPUTS/PROCEDURE_DEFAULTS - used in planning procedural relational queries
** IS_MULTI_SOURCE - set to true when the node represents a multi-source access
** SOURCE_NAME - used to track the multi-source source name
** CONFORMED_SOURCES - tracks the set of conformed sources when the conformed extension metadata is used
** SUB_PLAN/SUB_PLANS - used in multi-source planning
* Set Operation Properties
** SET_OPERATION/USE_ALL - defines the set operation (UNION/INTERSECT/EXCEPT) and whether all rows or distinct rows are used
* Join Properties
** JOIN_CRITERIA - all join predicates
** JOIN_TYPE - type of join (INNER, LEFT OUTER, etc.)
** JOIN_STRATEGY - the algorithm to use (nested loop, merge, etc.)
** LEFT_EXPRESSIONS - the expressions in equi-join predicates that originate from the left side of the join
** RIGHT_EXPRESSIONS - the expressions in equi-join predicates that originate from the right side of the join
** DEPENDENT_VALUE_SOURCE - set if a dependent join is used
** NON_EQUI_JOIN_CRITERIA - non-equi join predicates
** SORT_LEFT - if the left side needs to be sorted for join processing
** SORT_RIGHT - if the right side needs to be sorted for join processing
** IS_OPTIONAL - if the join is optional
** IS_LEFT_DISTINCT - if the left side is distinct with respect to the equi-join predicates
** IS_RIGHT_DISTINCT - if the right side is distinct with respect to the equi-join predicates
** IS_SEMI_DEP - if the dependent join represents a semi-join
** PRESERVE - if the preserve hint is preserving the join order
* Project Properties
** PROJECT_COLS - the expressions projected
** INTO_GROUP - the group targeted if this is a select into or an insert with a query expression
** HAS_WINDOW_FUNCTIONS - true if window functions are used
** CONSTRAINT - the constraint that must be met if the values are being projected into a group
* Select Properties
** SELECT_CRITERIA - the filter
** IS_HAVING - if the filter is applied after grouping
** IS_PHANTOM - true if the node is marked for removal but temporarily left in the plan
** IS_TEMPORARY - inferred criteria that may not be used in the final plan
** IS_COPIED - if the criteria has already been processed by rule copy criteria
** IS_PUSHED - if the criteria is pushed as far as possible
** IS_DEPENDENT_SET - if the criteria is the filter of a dependent join
* Sort Properties
** SORT_ORDER - the order by that defines the sort
** UNRELATED_SORT - if the ordering includes a value that is not being projected
** IS_DUP_REMOVAL - if the sort should also perform duplicate removal over the entire projection
* Source Properties - many source properties also become present on associated access nodes
** SYMBOL_MAP - the mapping from the columns above the source to the projected expressions; also present on Group nodes
** PARTITION_INFO - the partitioning of the union branches
** VIRTUAL_COMMAND - if the source represents a view or inline view, the query that defined the view
** MAKE_DEP - hint information
** PROCESSOR_PLAN - the processor plan of a non-relational source (typically from the NESTED_COMMAND)
** NESTED_COMMAND - the non-relational command
** TABLE_FUNCTION - the table function (XMLTABLE, OBJECTTABLE, etc.) defining the source
** CORRELATED_REFERENCES - the correlated references for the nodes below the source
** MAKE_NOT_DEP - if make not dep is set
** INLINE_VIEW - if the source node represents an inline view
** NO_UNNEST - if the no_unnest hint is set
** MAKE_IND - if the make ind hint is set
** SOURCE_HINT - the source hint; see [Federated Optimizations]
** ACCESS_PATTERNS - access patterns yet to be satisfied
** ACCESS_PATTERN_USED - satisfied access patterns
** REQUIRED_ACCESS_PATTERN_GROUPS - groups needed to satisfy the access patterns; used in join planning
* Group Properties
** GROUP_COLS - the grouping columns
** ROLLUP - if the grouping includes a rollup
* Tuple Limit Properties
** MAX_TUPLE_LIMIT - expression that evaluates to the maximum number of tuples generated
** OFFSET_TUPLE_COUNT - expression that evaluates to the tuple offset of the starting tuple
** IS_IMPLICIT_LIMIT - if the limit is created by the rewriter as part of a subquery optimization
** IS_NON_STRICT - if the unordered limit should not be enforced strictly
* General and Costing Properties
** OUTPUT_COLS - the output columns for the node; typically set after rule assign output elements
** EST_SET_SIZE - represents the estimated set size this node would produce for a sibling node as the independent node in a dependent join scenario
** EST_DEP_CARDINALITY - value that represents the estimated cardinality (number of rows) produced by this node as the dependent node in a dependent join scenario
** EST_DEP_JOIN_COST - value that represents the estimated cost of a dependent join (the join strategy for this could be nested loop or merge)
** EST_JOIN_COST - value that represents the estimated cost of a merge join (the join strategy for this could be nested loop or merge)
** EST_CARDINALITY - represents the estimated cardinality (number of rows) produced by this node
** EST_COL_STATS - column statistics, including number of null values, distinct value count, etc.
** EST_SELECTIVITY - represents the selectivity of a criteria node

h3. Rules

Relational optimization is based upon rule execution that evolves the initial plan into the execution plan. There is a set of pre-defined rules that are dynamically assembled into a rule stack for every query. The rule stack is assembled based on the contents of the user's query and the views/procedures accessed. For example, if there are no view layers, then rule Merge Virtual, which merges view layers together, is not needed and will not be added to the stack.
This allows the rule stack to reflect the complexity of the query.

Logically, the plan node data structure represents a tree of nodes where the source data comes up from the leaf nodes (typically Access nodes in the final plan), flows up through the tree, and produces the user's results out the top. The nodes in the plan structure can have bidirectional links and dynamic properties, and allow any number of child nodes. Processing plans, in contrast, typically have fixed properties.

Plan rules manipulate the plan tree, fire other rules, and drive the optimization process. Each rule is designed to perform a narrow set of tasks. Some rules can be run multiple times. Some rules require a specific set of precursors to run properly.

* Access Pattern Validation - ensures that all access patterns have been satisfied
* Apply Security - applies row and column level security
* Assign Output Symbol - this rule walks top down through every node and calculates the output columns for each node. Columns that are not needed are dropped at every node, which is known as projection minimization. This is done by keeping track of both the columns needed to feed the parent node and the columns that are "created" at a certain node.
* Calculate Cost - adds costing information to the plan
* Choose Dependent - this rule looks at each join node and determines whether the join should be made dependent and in which direction. Cardinality, the number of distinct values, and primary key information are used in several formulas to determine whether a dependent join is likely to be worthwhile. The dependent join differs in performance ideally because fewer values will be returned from the dependent side. Also, we must consider the number of values passed from the independent to the dependent side. If that set is larger than the maximum number of values in an IN criteria on the dependent side, then the query must be broken into a set of queries whose results are combined.
Executing each query in the connector has some overhead, and that is taken into account. Without costing information, a lot of common cases where the only criteria specified is on a non-unique (but strongly limiting) field are missed. A join is eligible to be dependent if:
** there is at least one equi-join criterion, e.g. tablea.col = tableb.col
** the join is not a full outer join and the dependent side of the join is on the inner side of the join

The join will be made dependent if one of the following conditions, listed in precedence order, holds:
** there is an unsatisfied access pattern that can be satisfied with the dependent join criteria
** the potential dependent side of the join is marked with an OPTION MAKEDEP hint
** (4.3.2) if costing was enabled, the estimated cost for the dependent join (5.0+ possibly in each direction in the case of inner joins) is computed and compared to not performing the dependent join. If the costs were all determined (which requires all relevant table cardinality, column NDV, and possibly NNV values to be populated), the lowest is chosen.
** key metadata information indicates that the potential dependent side is not "small" and the other side is "not small", or (5.0.1) the potential dependent side is the inner side of a left outer join

The dependent join is the key optimization used to efficiently process multi-source joins. Instead of reading all of source A and all of source B and joining them on A.x = B.x, we read all of A, then build a set of A.x values that are passed as a criteria when querying B. In cases where A is small and B is large, this can drastically reduce the data retrieved from B, thus greatly speeding the overall query.

* Choose Join Strategy - chooses the join strategy based upon the cost and attributes of the join
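The batching behavior described above, where the independent side's values are split into batches once they exceed the maximum IN-predicate size, can be sketched in plain Java. The class, method names, and batch mechanics below are illustrative only, not Teiid internals:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class DependentJoinSketch {
    // Simulates a dependent join: gather distinct keys from the independent
    // side (A), then "query" the dependent side (B) once per batch of at
    // most maxInSize keys, as if issuing "... WHERE B.x IN (batch)".
    static List<int[]> dependentJoin(List<Integer> aKeys, List<Integer> bKeys, int maxInSize) {
        List<Integer> distinct = aKeys.stream().distinct().collect(Collectors.toList());
        List<int[]> result = new ArrayList<>();
        for (int i = 0; i < distinct.size(); i += maxInSize) {
            Set<Integer> batch = new HashSet<>(
                    distinct.subList(i, Math.min(i + maxInSize, distinct.size())));
            // The dependent side only yields rows matching the current batch.
            for (int b : bKeys) {
                if (batch.contains(b)) {
                    for (int a : aKeys) {
                        if (a == b) result.add(new int[]{a, b});
                    }
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> a = Arrays.asList(1, 2, 2, 5);
        List<Integer> b = Arrays.asList(2, 3, 5, 7);
        // Distinct A keys {1, 2, 5} exceed maxInSize=2, so B is queried twice.
        System.out.println(dependentJoin(a, b, 2).size()); // prints 3
    }
}
```

When A is small, only the batches of A.x values reach source B, which is the data reduction the rule is costing.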
* Clean Criteria - removes phantom criteria
* Collapse Source - takes all of the nodes below an access node and creates a SQL query representation
* Copy Criteria - this rule copies criteria over an equality criteria that is present in the criteria of a join. Since the equality defines an equivalence, this is a valid way to create a new criteria that may limit results on the other side of the join (especially in the case of a multi-source join).
* Decompose Join - this rule performs a partition-wise join optimization on joins of a [Federated Optimizations#Partitioned Union]. The decision to decompose is based upon detecting that each side of the join is a partitioned union (note that non-ANSI joins of more than 2 tables may cause the optimization to not detect the appropriate join). The rule currently only looks for situations where at most 1 partition matches from each side.
* Implement Join Strategy - adds necessary sort and other nodes to process the chosen join strategy
* Merge Criteria - combines select nodes and can convert subqueries to semi-joins
* Merge Virtual - removes view and inline view layers
* Place Access - places access nodes under source nodes. An access node represents the point at which everything below the access node gets pushed to the source or is a plan invocation. Later rules focus on either pushing under the access or pulling the access node up the tree to move more work down to the sources. This rule is also responsible for placing [Federated Optimizations#Access Patterns].
* Plan Joins - this rule attempts to find an optimal ordering of the joins performed in the plan, while ensuring that [Federated Optimizations#Access Patterns] dependencies are met. This rule has three main steps. First it must determine an ordering of joins that satisfies the access patterns present.
Second it will heuristically create joins that can be pushed to the source (if a set of joins is pushed to the source, we will not attempt to create an optimal ordering within that set; more than likely it will be sent to the source in the non-ANSI multi-join syntax and will be optimized by the database). Third it will use costing information to determine the best left-linear ordering of joins performed in the processing engine. This third step will do an exhaustive search for 6 or fewer join sources and is heuristically driven by join selectivity for 7 or more sources.
* Plan Procedures - plans procedures that appear in procedural relational queries
* Plan Sorts - optimizations around sorting, such as combining sort operations or moving projection
* Plan Unions - reorders union children for more pushdown
* Plan Aggregates - performs aggregate decomposition over a join or union
* Push Limit - pushes the effect of a limit node further into the plan
* Push Non-Join Criteria - this rule will push predicates out of an ON clause if they are not necessary for the correctness of the join
* Push Select Criteria - pushes select nodes as far as possible through unions, joins, and view layers toward the access nodes. In most cases movement down the tree is good, as this will filter rows earlier in the plan. We currently do not undo the decisions made by Push Select Criteria; however, in situations where criteria cannot be evaluated by the source, this can lead to sub-optimal plans. One of the most important optimizations related to pushing criteria is how the criteria will be pushed through a join. Consider the following plan tree that represents a subtree of the plan for the query "select ...
from A inner join b on (A.x = B.x) where B.y = 3":

{code}
        SELECT (B.y = 3)
               |
 JOIN - Inner Join on (A.x = B.x)
         /        \
     SRC (A)    SRC (B)
{code}

{info}SELECT nodes represent criteria, and SRC stands for SOURCE.{info}

It is always valid for inner and cross joins to push (single source) criteria that are above the join below the join. This allows criteria originating in the user query to eventually be present in source queries below the joins. This result can be represented visually as:

{code}
 JOIN - Inner Join on (A.x = B.x)
         /        \
        /     SELECT (B.y = 3)
       |           |
    SRC (A)    SRC (B)
{code}

The same optimization is valid for criteria specified against the outer side of an outer join. For example:

{code}
        SELECT (B.y = 3)
               |
 JOIN - Right Outer Join on (A.x = B.x)
         /        \
     SRC (A)    SRC (B)
{code}

becomes

{code}
 JOIN - Right Outer Join on (A.x = B.x)
         /        \
        /     SELECT (B.y = 3)
       |           |
    SRC (A)    SRC (B)
{code}

However, criteria specified against the inner side of an outer join need special consideration. The above scenario with a left or full outer join is not the same. For example:

{code}
        SELECT (B.y = 3)
               |
 JOIN - Left Outer Join on (A.x = B.x)
         /        \
     SRC (A)    SRC (B)
{code}

can become (available only after 5.0.2):

{code}
 JOIN - Inner Join on (A.x = B.x)
         /        \
        /     SELECT (B.y = 3)
       |           |
    SRC (A)    SRC (B)
{code}

Since the criterion is not dependent upon the null values that may be populated from the inner side of the join, the criterion is eligible to be pushed below the join, but only if the join type is also changed to an inner join. On the other hand, criteria that are dependent upon the presence of null values CANNOT be moved. For example:

{code}
       SELECT (B.y is null)
               |
 JOIN - Left Outer Join on (A.x = B.x)
         /        \
     SRC (A)    SRC (B)
{code}

This plan tree must have the criteria remain above the join, since the outer join may be introducing null values itself.

* Raise Access - this rule attempts to raise the Access nodes as far up the plan as possible.
This is mostly done by looking at the source's capabilities and determining whether the operations can be achieved in the source or not.
* Raise Null - raises null nodes. Raising a null node removes the need to consider any part of the old plan that was below the null node.
* Remove Optional Joins - removes joins that are marked as or determined to be optional
* Substitute Expressions - used only when a function-based index is present
* Validate Where All - ensures criteria is used when required by the source

h3. Reading a Debug Plan

As each relational sub plan is optimized, the plan will show what is being optimized and its canonical form:

{code}
OPTIMIZE:
SELECT e1 FROM (SELECT e1 FROM pm1.g1) AS x

----------------------------------------------------------------------------
GENERATE CANONICAL:
SELECT e1 FROM (SELECT e1 FROM pm1.g1) AS x

CANONICAL PLAN:
Project(groups=[x], props={PROJECT_COLS=[e1]})
  Source(groups=[x], props={NESTED_COMMAND=SELECT e1 FROM pm1.g1, SYMBOL_MAP={x.e1=e1}})
    Project(groups=[pm1.g1], props={PROJECT_COLS=[e1]})
      Source(groups=[pm1.g1])
{code}

With more complicated user queries, such as a procedure invocation or one containing subqueries, the sub plans may be nested within the overall plan. Each plan ends by showing the final processing plan:

{code}
----------------------------------------------------------------------------
OPTIMIZATION COMPLETE:
PROCESSOR PLAN:
AccessNode(0) output=[e1] SELECT g_0.e1 FROM pm1.g1 AS g_0
{code}

The effect of rules can be seen by the state of the plan tree before and after the rule fires.
For example, the debug log below shows the application of rule Merge Virtual, which will remove the "x" inline view layer:

{code}
EXECUTING AssignOutputElements

AFTER:
Project(groups=[x], props={PROJECT_COLS=[e1], OUTPUT_COLS=[e1]})
  Source(groups=[x], props={NESTED_COMMAND=SELECT e1 FROM pm1.g1, SYMBOL_MAP={x.e1=e1}, OUTPUT_COLS=[e1]})
    Project(groups=[pm1.g1], props={PROJECT_COLS=[e1], OUTPUT_COLS=[e1]})
      Access(groups=[pm1.g1], props={SOURCE_HINT=null, MODEL_ID=Schema name=pm1, nameInSource=null, uuid=3335, OUTPUT_COLS=[e1]})
        Source(groups=[pm1.g1], props={OUTPUT_COLS=[e1]})

============================================================================
EXECUTING MergeVirtual

AFTER:
Project(groups=[pm1.g1], props={PROJECT_COLS=[e1], OUTPUT_COLS=[e1]})
  Access(groups=[pm1.g1], props={SOURCE_HINT=null, MODEL_ID=Schema name=pm1, nameInSource=null, uuid=3335, OUTPUT_COLS=[e1]})
    Source(groups=[pm1.g1])
{code}

Some important planning decisions are shown in the plan as they occur as an annotation. For example, the snippet below shows that the access node could not be raised, as the parent select node contained an unsupported subquery.
{code}
Project(groups=[pm1.g1], props={PROJECT_COLS=[e1], OUTPUT_COLS=null})
  Select(groups=[pm1.g1], props={SELECT_CRITERIA=e1 IN /*+ NO_UNNEST */ (SELECT e1 FROM pm2.g1), OUTPUT_COLS=null})
    Access(groups=[pm1.g1], props={SOURCE_HINT=null, MODEL_ID=Schema name=pm1, nameInSource=null, uuid=3341, OUTPUT_COLS=null})
      Source(groups=[pm1.g1], props={OUTPUT_COLS=null})

============================================================================
EXECUTING RaiseAccess
LOW Relational Planner SubqueryIn is not supported by source pm1 - e1 IN /*+ NO_UNNEST */ (SELECT e1 FROM pm2.g1) was not pushed

AFTER:
Project(groups=[pm1.g1])
  Select(groups=[pm1.g1], props={SELECT_CRITERIA=e1 IN /*+ NO_UNNEST */ (SELECT e1 FROM pm2.g1), OUTPUT_COLS=null})
    Access(groups=[pm1.g1], props={SOURCE_HINT=null, MODEL_ID=Schema name=pm1, nameInSource=null, uuid=3341, OUTPUT_COLS=null})
      Source(groups=[pm1.g1])
{code}

h3. XQuery

XQuery is eligible for specific [optimizations|XQuery Optimization]. Document projection is the most common optimization. It will be shown in the debug plan as an annotation. For example, with the user query containing "xmltable('/a/b' passing doc columns x string path '@x', val string path '/.')", the debug plan would show a tree of the document that will effectively be used by the context and path XQuerys:

{code}
MEDIUM XQuery Planning Projection conditions met for /a/b - Document projection will be used
child element(Q{}a)
  child element(Q{}b)
    attribute attribute(Q{}x)
    child text()
  child text()
{code}
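Document projection keeps only the parts of the document reachable by the referenced paths. Which nodes those are can be illustrated with a plain JDK XPath evaluation; this only shows the node selection, not how Teiid implements projection:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.NodeList;

public class ProjectionSketch {
    // Counts the nodes a path expression actually touches in a document.
    static int matchingNodes(String xml, String path) {
        try {
            org.w3c.dom.Document d = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            NodeList nodes = (NodeList) XPathFactory.newInstance().newXPath()
                    .evaluate(path, d, XPathConstants.NODESET);
            return nodes.getLength();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Only /a/b (with its @x attribute and text) is referenced by the
        // xmltable query above, so the <c> subtree could be projected away
        // without changing the result.
        String doc = "<a><b x=\"1\">v1</b><b x=\"2\">v2</b><c>ignored</c></a>";
        System.out.println(matchingNodes(doc, "/a/b")); // prints 2
    }
}
```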
For example, obtaining the plan from the {{TeiidStatement}}:

{code}
statement.execute("set showplan on");
ResultSet rs = statement.executeQuery("select ...");
TeiidStatement tstatement = statement.unwrap(TeiidStatement.class);
PlanNode queryPlan = tstatement.getPlanDescription();
System.out.println(queryPlan);
{code}
The query plan is made available automatically in several of Teiid's tools.
Once a query plan has been obtained you will most commonly be looking for:
All of the above information can be determined from the processing plan. You will typically be interested in analyzing the textual form of the final processing plan. To understand why particular decisions were made, for debugging or support, you will want to obtain the full debug log, which contains the intermediate planning steps as well as annotations explaining why specific pushdown decisions were made.
A query plan consists of a set of nodes organized in a tree structure. If you are executing a procedure or generating an XML document from an XML Document Model, the overall query plan will contain additional information related to the surrounding procedural execution.
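The tree structure described above can be modeled minimally as follows. This is an illustrative sketch, not Teiid's internal plan node classes:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Minimal stand-in for a plan node: a name, properties, and child nodes.
class PlanNodeSketch {
    final String name;
    final Map<String, String> properties = new LinkedHashMap<>();
    final List<PlanNodeSketch> children = new ArrayList<>();

    PlanNodeSketch(String name) { this.name = name; }

    PlanNodeSketch child(PlanNodeSketch c) { children.add(c); return this; }

    // Render the tree root-first with two-space indentation per level,
    // mirroring how the textual plan forms are laid out.
    String render(int depth) {
        StringBuilder sb = new StringBuilder();
        sb.append("  ".repeat(depth)).append(name).append(properties).append('\n');
        for (PlanNodeSketch c : children) sb.append(c.render(depth + 1));
        return sb.toString();
    }

    public static void main(String[] args) {
        PlanNodeSketch plan = new PlanNodeSketch("Project")
                .child(new PlanNodeSketch("Select")
                        .child(new PlanNodeSketch("Source")));
        System.out.print(plan.render(0));
    }
}
```

Walking such a tree depth-first, with data conceptually flowing from the leaves back up to the root, is the reading order used for the plan listings in this document.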
The nodes for a relational query plan are:
Every node has a set of statistics that are output. These can be used to determine the amount of data flowing through the node. Before execution, a processor plan will not contain node statistics. Also, the statistics are updated as the plan is processed, so typically you will want the final statistics after all rows have been processed by the client.
|| Statistic || Description || Units ||
| Node Output Rows | Number of records output from the node | count |
| Node Process Time | Time processing in this node only | millisec |
| Node Cumulative Process Time | Elapsed time from beginning of processing to end | millisec |
| Node Cumulative Next Batch Process Time | Time processing in this node + child nodes | millisec |
| Node Next Batch Calls | Number of times a node was called for processing | count |
| Node Blocks | Number of times a blocked exception was thrown by this node or a child | count |
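A note on how these timings nest: a node's cumulative next-batch time includes time spent in its children, so a node's own contribution can be estimated by subtraction. The helper below is an illustrative calculation, not part of Teiid:

```java
public class NodeTimeSketch {
    // Derives a node's own (exclusive) processing time from inclusive timings:
    // inclusive time = time in this node + time in its child nodes.
    static long exclusiveMillis(long inclusiveMillis, long[] childInclusiveMillis) {
        long childSum = 0;
        for (long c : childInclusiveMillis) childSum += c;
        return inclusiveMillis - childSum;
    }

    public static void main(String[] args) {
        // A join took 120 ms inclusive; its two access children took 40 and 50 ms.
        System.out.println(exclusiveMillis(120, new long[]{40, 50})); // prints 30
    }
}
```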
In addition to node statistics, some nodes display cost estimates computed at the node.
|| Cost Estimates || Description || Units ||
| Estimated Node Cardinality | Estimated number of records that will be output from the node; -1 if unknown | count |
The root node will display additional information.
|| Top level Statistics || Description || Units ||
| Data Bytes Sent | The size of the serialized data result (row and lob values) sent to the client | bytes |
For the example user query shown above, the text form of a processor plan that does not push down the joins would look like:
{code}
ProjectNode
  + Output Columns:
    0: e1 (string)
    1: e2 (integer)
    2: e3 (boolean)
  + Cost Estimates:Estimated Node Cardinality: -1.0
  + Child 0:
    JoinNode
      + Output Columns:
        0: e1 (string)
        1: e2 (integer)
        2: e3 (boolean)
      + Cost Estimates:Estimated Node Cardinality: -1.0
      + Child 0:
        JoinNode
          + Output Columns:
            0: e1 (string)
            1: e1 (string)
            2: e3 (boolean)
          + Cost Estimates:Estimated Node Cardinality: -1.0
          + Child 0:
            AccessNode
              + Output Columns:e1 (string)
              + Cost Estimates:Estimated Node Cardinality: -1.0
              + Query:SELECT g_0.e1 AS c_0 FROM pm1.g1 AS g_0 ORDER BY c_0
              + Model Name:pm1
          + Child 1:
            AccessNode
              + Output Columns:
                0: e1 (string)
                1: e3 (boolean)
              + Cost Estimates:Estimated Node Cardinality: -1.0
              + Query:SELECT g_0.e1 AS c_0, g_0.e3 AS c_1 FROM pm1.g3 AS g_0 ORDER BY c_0
              + Model Name:pm1
          + Join Strategy:MERGE JOIN (ALREADY_SORTED/ALREADY_SORTED)
          + Join Type:INNER JOIN
          + Join Criteria:pm1.g1.e1=pm1.g3.e1
      + Child 1:
        AccessNode
          + Output Columns:
            0: e1 (string)
            1: e2 (integer)
          + Cost Estimates:Estimated Node Cardinality: -1.0
          + Query:SELECT g_0.e1 AS c_0, g_0.e2 AS c_1 FROM pm1.g2 AS g_0 ORDER BY c_0
          + Model Name:pm1
      + Join Strategy:ENHANCED SORT JOIN (SORT/ALREADY_SORTED)
      + Join Type:INNER JOIN
      + Join Criteria:pm1.g3.e1=pm1.g2.e1
  + Select Columns:
    0: pm1.g1.e1
    1: pm1.g2.e2
    2: pm1.g3.e3
{code}
Note that the nested join node uses a merge join and expects the source queries from each side to produce the ordering required by the join. The parent join is an enhanced sort join, which can delay the decision to perform sorting based upon the incoming rows. Also note that the left outer join from the user query has been converted to an inner join, since null-extended inner rows could never appear in the query result.
The same plan in XML form looks like:
{code:xml}
<?xml version="1.0" encoding="UTF-8"?>
<node name="ProjectNode">
    <property name="Output Columns">
        <value>e1 (string)</value>
        <value>e2 (integer)</value>
        <value>e3 (boolean)</value>
    </property>
    <property name="Cost Estimates">
        <value>Estimated Node Cardinality: -1.0</value>
    </property>
    <property name="Child 0">
        <node name="JoinNode">
            <property name="Output Columns">
                <value>e1 (string)</value>
                <value>e2 (integer)</value>
                <value>e3 (boolean)</value>
            </property>
            <property name="Cost Estimates">
                <value>Estimated Node Cardinality: -1.0</value>
            </property>
            <property name="Child 0">
                <node name="JoinNode">
                    <property name="Output Columns">
                        <value>e1 (string)</value>
                        <value>e1 (string)</value>
                        <value>e3 (boolean)</value>
                    </property>
                    <property name="Cost Estimates">
                        <value>Estimated Node Cardinality: -1.0</value>
                    </property>
                    <property name="Child 0">
                        <node name="AccessNode">
                            <property name="Output Columns">
                                <value>e1 (string)</value>
                            </property>
                            <property name="Cost Estimates">
                                <value>Estimated Node Cardinality: -1.0</value>
                            </property>
                            <property name="Query">
                                <value>SELECT g_0.e1 AS c_0 FROM pm1.g1 AS g_0 ORDER BY c_0</value>
                            </property>
                            <property name="Model Name">
                                <value>pm1</value>
                            </property>
                        </node>
                    </property>
                    <property name="Child 1">
                        <node name="AccessNode">
                            <property name="Output Columns">
                                <value>e1 (string)</value>
                                <value>e3 (boolean)</value>
                            </property>
                            <property name="Cost Estimates">
                                <value>Estimated Node Cardinality: -1.0</value>
                            </property>
                            <property name="Query">
                                <value>SELECT g_0.e1 AS c_0, g_0.e3 AS c_1 FROM pm1.g3 AS g_0 ORDER BY c_0</value>
                            </property>
                            <property name="Model Name">
                                <value>pm1</value>
                            </property>
                        </node>
                    </property>
                    <property name="Join Strategy">
                        <value>MERGE JOIN (ALREADY_SORTED/ALREADY_SORTED)</value>
                    </property>
                    <property name="Join Type">
                        <value>INNER JOIN</value>
                    </property>
                    <property name="Join Criteria">
                        <value>pm1.g1.e1=pm1.g3.e1</value>
                    </property>
                </node>
            </property>
            <property name="Child 1">
                <node name="AccessNode">
                    <property name="Output Columns">
                        <value>e1 (string)</value>
                        <value>e2 (integer)</value>
                    </property>
                    <property name="Cost Estimates">
                        <value>Estimated Node Cardinality: -1.0</value>
                    </property>
                    <property name="Query">
                        <value>SELECT g_0.e1 AS c_0, g_0.e2 AS c_1 FROM pm1.g2 AS g_0 ORDER BY c_0</value>
                    </property>
                    <property name="Model Name">
                        <value>pm1</value>
                    </property>
                </node>
            </property>
            <property name="Join Strategy">
                <value>ENHANCED SORT JOIN (SORT/ALREADY_SORTED)</value>
            </property>
            <property name="Join Type">
                <value>INNER JOIN</value>
            </property>
            <property name="Join Criteria">
                <value>pm1.g3.e1=pm1.g2.e1</value>
            </property>
        </node>
    </property>
    <property name="Select Columns">
        <value>pm1.g1.e1</value>
        <value>pm1.g2.e2</value>
        <value>pm1.g3.e3</value>
    </property>
</node>
{code}
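Because the XML form is regular, with nested {{node}} elements carrying named {{property}} children, it is easy to process programmatically. Below is a minimal sketch, assuming the element and property names shown above; it embeds a trimmed-down plan fragment for illustration, where in practice the XML would come from the plan output itself:

```python
# Hypothetical sketch: extracting the pushed-down source queries from a
# Teiid XML plan using only the Python standard library.
import xml.etree.ElementTree as ET

# Trimmed plan fragment for illustration (real plans are deeply nested).
PLAN_XML = """<node name="ProjectNode">
  <property name="Child 0">
    <node name="AccessNode">
      <property name="Query">
        <value>SELECT g_0.e1 AS c_0 FROM pm1.g1 AS g_0 ORDER BY c_0</value>
      </property>
      <property name="Model Name">
        <value>pm1</value>
      </property>
    </node>
  </property>
</node>"""

def source_queries(plan_xml):
    """Return (model name, source SQL) pairs for every AccessNode."""
    root = ET.fromstring(plan_xml)
    results = []
    # iter() visits <node> elements at any nesting depth
    for node in root.iter("node"):
        if node.get("name") != "AccessNode":
            continue
        props = {p.get("name"): p.findtext("value")
                 for p in node.findall("property")}
        results.append((props.get("Model Name"), props.get("Query")))
    return results

print(source_queries(PLAN_XML))
# [('pm1', 'SELECT g_0.e1 AS c_0 FROM pm1.g1 AS g_0 ORDER BY c_0')]
```

A traversal like this is a convenient starting point for tooling that checks which queries were actually pushed to each source.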
Note that the same information appears in each of the plan forms. In some cases it can actually be easier to follow the simplified format of the final processor plan as it appears in the debug log. From the debug log, the same plan as above would appear as:
{code}
OPTIMIZATION COMPLETE:
PROCESSOR PLAN:
ProjectNode(0) output=[pm1.g1.e1, pm1.g2.e2, pm1.g3.e3] [pm1.g1.e1, pm1.g2.e2, pm1.g3.e3]
  JoinNode(1) [ENHANCED SORT JOIN (SORT/ALREADY_SORTED)] [INNER JOIN] criteria=[pm1.g3.e1=pm1.g2.e1] output=[pm1.g1.e1, pm1.g2.e2, pm1.g3.e3]
    JoinNode(2) [MERGE JOIN (ALREADY_SORTED/ALREADY_SORTED)] [INNER JOIN] criteria=[pm1.g1.e1=pm1.g3.e1] output=[pm1.g3.e1, pm1.g1.e1, pm1.g3.e3]
      AccessNode(3) output=[pm1.g1.e1] SELECT g_0.e1 AS c_0 FROM pm1.g1 AS g_0 ORDER BY c_0
      AccessNode(4) output=[pm1.g3.e1, pm1.g3.e3] SELECT g_0.e1 AS c_0, g_0.e3 AS c_1 FROM pm1.g3 AS g_0 ORDER BY c_0
    AccessNode(5) output=[pm1.g2.e1, pm1.g2.e2] SELECT g_0.e1 AS c_0, g_0.e2 AS c_1 FROM pm1.g2 AS g_0 ORDER BY c_0
{code}
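The debug-log form is line oriented, so simple pattern matching can recover useful pieces of it. The sketch below, which assumes the {{AccessNode(n) output=\[...\] SELECT ...}} line shape shown in the excerpt above, collects the pushed-down SQL by node id:

```python
# Hypothetical sketch: scraping source queries out of the debug-log plan
# with a regular expression.  The sample below is an excerpt of such a log.
import re

DEBUG_PLAN = """\
ProjectNode(0) output=[pm1.g1.e1, pm1.g2.e2, pm1.g3.e3] [pm1.g1.e1, pm1.g2.e2, pm1.g3.e3]
  JoinNode(1) [ENHANCED SORT JOIN (SORT/ALREADY_SORTED)] [INNER JOIN] criteria=[pm1.g3.e1=pm1.g2.e1] output=[pm1.g1.e1, pm1.g2.e2, pm1.g3.e3]
    AccessNode(3) output=[pm1.g1.e1] SELECT g_0.e1 AS c_0 FROM pm1.g1 AS g_0 ORDER BY c_0
"""

def pushdown_sql(plan_text):
    """Return a node-id -> SQL mapping for each AccessNode line."""
    # output=[...] is non-greedy up to the closing bracket; SELECT ... runs
    # to the end of the line (no DOTALL, so '.' stops at the newline)
    pattern = re.compile(r"AccessNode\((\d+)\) output=\[[^\]]*\] (SELECT .*)")
    return {int(m.group(1)): m.group(2) for m in pattern.finditer(plan_text)}

print(pushdown_sql(DEBUG_PLAN))
# {3: 'SELECT g_0.e1 AS c_0 FROM pm1.g1 AS g_0 ORDER BY c_0'}
```

This kind of one-off script can be handy when diffing the pushdown behavior of two plans across configuration changes.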
XML document model queries and procedure execution (including INSTEAD OF triggers) use intermediate and final plan forms that include relational plans. Generally, the structure of the XML/procedure plans will closely match their logical forms. It is the nested relational plans that will be of interest when analyzing performance issues.