The EXPLAIN feature analyzes and displays how Firebolt’s query processing system runs your query. You can use this information to understand resource utilization and identify opportunities to optimize query performance. If you specify the ANALYZE option, Firebolt also collects detailed metrics about each operator during query runtime, including the amount of time spent in the operator and how much data it processes.
You can augment the output of EXPLAIN by specifying options. The following table lists all available options and their functionalities:
| Option Name | Option Values | Description |
| --- | --- | --- |
| LOGICAL | TRUE, FALSE | Returns the optimized logical query plan. This is the default when no other plan option is specified. |
| PHYSICAL | TRUE, FALSE | Returns an optimized physical query plan containing shuffle operators for queries on distributed engines, showing how work is distributed between the nodes of an engine. |
| ANALYZE | TRUE, FALSE | Returns an optimized physical query plan annotated with metrics collected during query execution. For more information about these metrics, see Example with ANALYZE. |
| ALL | TRUE, FALSE | Returns all of the LOGICAL, PHYSICAL, and ANALYZE plans. Sample syntax: `EXPLAIN (ALL) <select statement>`. |
| STATISTICS | TRUE, FALSE | Returns an annotated query plan that includes estimates from Firebolt’s query optimizer. This option works with LOGICAL, PHYSICAL, ANALYZE, and ALL. Sample syntax: `EXPLAIN (STATISTICS) <select statement>`. |
| FORMAT | TEXT, JSON | Changes the result format of the EXPLAIN command. TEXT is optimized for humans and LLMs; JSON is easier to work with programmatically. |
You may specify only one of the following options: LOGICAL, PHYSICAL, ANALYZE, and ALL. If you need to view several plans at once, use the ALL option.
All examples in this section were run on Firebolt version 4.26.0, on a two-node M storage-optimized engine.
The underlying dataset is TPC-H at scale factor 100. The lineitem table contains 77GiB of uncompressed data.
The following example shows how to generate an EXPLAIN statement for a SELECT query on a table named lineitem.
```sql
EXPLAIN
SELECT l_shipdate, l_linestatus, AVG(l_discount)
FROM lineitem
WHERE l_returnflag = 'N' AND l_shipdate > '1996-01-01'
GROUP BY ALL
ORDER BY 1, 2, 3;
```
Many examples in this section use the TEXT format.
Firebolt continually refines the TEXT format for readability by humans and LLMs, so its exact layout may change between versions.
When writing code against EXPLAIN results, use the much more stable JSON format.
The EXPLAIN output shows the sequence of operations that Firebolt’s engine will use to run the query:
1. Read the required columns from the lineitem table.
2. Filter the rows by the WHERE conditions.
3. Remove the columns that are no longer required.
4. Perform the aggregation specified by the GROUP BY clause.
5. Sort the resulting rows in ascending order by all columns from the ORDER BY clause.
6. Bring the columns into the order specified in the SELECT clause.
The following code example runs the same query as the previous example on a multi-node engine using the (ALL) option, which is the same as specifying (ALL TRUE) or (LOGICAL, PHYSICAL, ANALYZE).
The query returns three columns: explain_logical, explain_physical, and explain_analyze.
Each column can be toggled on or off using the corresponding options.
For example, to display only explain_logical and explain_analyze, you can specify either (LOGICAL, ANALYZE) or (ALL, PHYSICAL FALSE).
With the ANALYZE option enabled, this query took 2.5 seconds to run.
Without the ANALYZE option, the query is only planned and should return the result almost immediately.
```sql
EXPLAIN (ALL)
SELECT l_shipdate, l_linestatus, AVG(l_discount)
FROM lineitem
WHERE l_returnflag = 'N' AND l_shipdate > '1996-01-01'
GROUP BY ALL
ORDER BY 1, 2, 3;
```
The LOGICAL plan shows the optimized logical query plan of the SELECT statement.
Additionally, it shows the output row type for each operator.
For operators with multiple input operators such as the Join operator, input columns are concatenated.
For example, if the first input produces three columns, and the second input produces four columns,
the join produces seven columns.
The PHYSICAL plan shows a detailed, optimized physical plan that includes Shuffle operators for distributed query execution, offering insights into how tasks are distributed across the engine’s nodes.
A Shuffle operator redistributes data across engine nodes.
For example, scans of FACT tables, like operator [10] [TableFuncScan], and the operators following it are automatically distributed across all nodes of an engine.
A Shuffle operator of type Hash indicates that the workload of the following operators is distributed across all nodes, while a Shuffle operator of type Gather consolidates data onto a single node of the engine.
In the previous example, only the operator [2] [SortMerge] runs on a single node, merging the sorted partial query results from all other nodes.
The MaybeCache operator, at the top of the plan, caches its input in main memory on a best-effort basis for future runs of the same (sub-) query.
It considers the entire plan leading to its input, as well as the state of the scanned tables.
If the data in a table changes, the MaybeCache operator will know not to read an outdated cached entry.
It may also skip caching large results or prioritize caching results that offer greater time savings.
The MaybeCache operator may appear in different places of a plan, but in this query it caches the full query result.
For more information about the MaybeCache operator, see Subresult Reuse in Firebolt.
The ANALYZE plan contains the same plan shape as the PHYSICAL explain, but annotated with metrics collected during query execution.
For each operator, it shows the number of rows it produced in output_cardinality, and how much time was spent on that operator in thread_time and cpu_time.
The thread_time is the sum of the wall-clock time that threads spent working on the operator across all nodes, while cpu_time is the total time those threads were scheduled on a CPU core.
A significantly smaller cpu_time compared to thread_time suggests that the operator is either waiting on input or output operations or that the engine is handling multiple queries simultaneously.
For TableFuncScan operators that read from managed tables, additional metrics may be displayed:
granules shows the number of granules that survived pruning out of the total number of granules that were considered.
This helps you understand the effectiveness of primary index pruning in reducing the amount of data scanned during query execution.
In the example above, granules: 80577/80577 indicates that no granules were pruned by the primary index, meaning all 80,577 granules were scanned.
cardinality_before_prewhere measures the number of rows selected for scanning after tablet and granule pruning.
After this static pruning step, prewhere optimizes scanning by reading columns with the most selective predicates first.
By evaluating these predicates early, the scan can perform a second pruning pass, excluding additional granules from being read for the remaining columns.
If the ratio cardinality_before_prewhere / output_cardinality is large, it indicates that prewhere was effective in eliminating additional rows before the main scan.
In the previous example, the cpu_time is almost as high as the thread_time on the TableFuncScan node. This indicates that the data of the lineitem table was in the SSD cache.
On a cold query run, where data must be retrieved from an Amazon S3 bucket, thread_time is considerably higher than cpu_time for the same operator:
```
\_[10] [TableFuncScan] lineitem.l_discount: $0.l_discount, lineitem.l_returnflag: $0.l_returnflag, lineitem.l_linestatus: $0.l_linestatus, lineitem.l_shipdate: $0.l_shipdate | $0 = read_tablets(table_name => lineitem, tablet) | [Types]: lineitem.l_discount: double precision not null, lineitem.l_returnflag: text not null, lineitem.l_linestatus: text not null, lineitem.l_shipdate: date not null | [Execution Metrics]: output cardinality = 334006486, thread time = 32432ms, cpu time = 26592ms, cardinality_before_prewhere = 659431284, granules: 80577/80577
```
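To make these metrics concrete, the following sketch derives the ratios discussed above from the cold-run TableFuncScan numbers. This is plain arithmetic on values copied from the output, not a Firebolt API:

```python
# Metrics copied from the cold-run TableFuncScan output above.
thread_time_ms = 32432
cpu_time_ms = 26592
output_cardinality = 334006486
cardinality_before_prewhere = 659431284
granules_kept, granules_total = 80577, 80577

# Fraction of thread time spent off-CPU, e.g. waiting on Amazon S3 reads.
wait_fraction = (thread_time_ms - cpu_time_ms) / thread_time_ms

# Rows selected for scanning relative to rows surviving the scan:
# a large ratio means prewhere eliminated many rows early.
prewhere_ratio = cardinality_before_prewhere / output_cardinality

# Share of granules pruned by the primary index (here: none).
pruned_fraction = 1 - granules_kept / granules_total

print(f"wait: {wait_fraction:.0%}, "
      f"prewhere ratio: {prewhere_ratio:.2f}, "
      f"pruned: {pruned_fraction:.0%}")
```

Here roughly 18% of the thread time was spent waiting, and prewhere cut the scanned rows nearly in half, even though the primary index pruned nothing.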
Additionally, a significant amount of time is devoted to performing the distributed pre-aggregation:
```
\_[7] [AggregateState partial] GroupBy: [lineitem.l_linestatus, lineitem.l_shipdate] Aggregates: [avg2_0: avg2(lineitem.l_discount)] | [Types]: avg2_0: aggregatefunction(avg2, double precision not null) not null | [Execution Metrics]: output cardinality = 2130, thread time = 9921ms, cpu time = 9686ms
```
When running the query a second time, we can see that the result cache is used to return the results immediately:
```
[0] [Projection] lineitem.l_shipdate, lineitem.l_linestatus, avg2_1 | [Execution Metrics]: Optimized out
 \_[1] [MaybeCache] | [Execution Metrics]: output cardinality = 1065, thread time = 0ms, cpu time = 0ms
  \_[2] [SortMerge] OrderBy: [lineitem.l_shipdate Ascending Last, lineitem.l_linestatus Ascending Last, avg2_1 Ascending Last] | [Execution Metrics]: Nothing was executed
```
The following code example uses the STATISTICS option to display the logical and physical query plan for the SELECT statement, annotated with estimated statistics from Firebolt’s query optimizer:
Each plan node is annotated with a [Logical Profile], indicating the source of the estimation for that node as well as the row count estimate.
```sql
EXPLAIN (STATISTICS)
SELECT l_shipdate, l_linestatus, AVG(l_discount)
FROM lineitem
WHERE l_returnflag = 'N' AND l_shipdate > '1996-01-01'
GROUP BY ALL
ORDER BY 1, 2, 3;
```
The possible sources are as follows:
| Source | Description |
| --- | --- |
| hardcoded | This node received hard-coded defaults because no additional information is available, a scenario that typically occurs only with external tables. |
| estimated | This node’s profile was computed from its child nodes’ profiles, based on assumptions about the data that may not always hold true. These assumptions can introduce inaccuracies, which may be further amplified. |
| metadata | This node’s profile was constructed from available metadata, ensuring that the information is either precise or highly accurate. |
| history | This node’s profile was obtained from a previously recorded run, and is accurate unless data has changed since the recording. |
| learned | This node’s profile was predicted using machine learning. |
The following example uses the FORMAT option to query telemetry in different ways.
The TEXT format makes telemetry easy to understand for humans and LLMs.
Because options are fully composable, FORMAT can be combined with other options such as ANALYZE.
```sql
EXPLAIN (ANALYZE, FORMAT TEXT)
WITH mat_cte(name, id) AS MATERIALIZED (
  SELECT concat('Hello, World', x), x
  FROM generate_series(1, 10000000) r(x)
)
SELECT m1.name, m2.name
FROM mat_cte m1
INNER JOIN mat_cte m2 ON (m1.id + 1 = m2.id)
LIMIT 10;
```
The text-based format makes it easy to spot things such as:
- Most time is spent in the INNER JOIN (776ms thread time). The concat projection is the second most expensive operation (286ms thread time).
- The MATERIALIZED CTE led the planner to add the Loopback shuffle, so the result of Node [6] can be consumed multiple times. The Recurring Node at the bottom of the plan shows the second time mat_cte gets consumed.
- The probe side of the join terminates early due to the LIMIT on top: only around 590k rows make it out of the projection in Node [5].
If you want to programmatically access and work with Firebolt telemetry, you can retrieve it as a JSON document.
```sql
EXPLAIN (ANALYZE, FORMAT JSON)
WITH mat_cte(name, id) AS MATERIALIZED (
  SELECT concat('Hello, World', x), x
  FROM generate_series(1, 10000000) r(x)
)
SELECT m1.name, m2.name
FROM mat_cte m1
INNER JOIN mat_cte m2 ON (m1.id + 1 = m2.id)
LIMIT 10;
```
Building programs that work with JSON telemetry is easy:
- The JSON format contains telemetry for all operators under the operators array.
- Every operator has a unique operator_id.
- The operator DAG can be navigated through the input_ids attached to each node.
- The DAG’s root operator ID is stored in the top-level root_operator_id.
- Every operator can have annotations. When the ANALYZE option is selected, these contain execution_metrics.
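As a sketch of how a program might consume this structure, the following Python walks the DAG from root_operator_id via input_ids and ranks operators by thread time. The field names follow the structure described above; the sample plan document, its operator labels, and its metric values are hypothetical stand-ins, not real Firebolt output:

```python
import json

# Hypothetical, minimal EXPLAIN (ANALYZE, FORMAT JSON) output for illustration.
plan_json = """
{
  "root_operator_id": 0,
  "operators": [
    {"operator_id": 0, "label": "Projection", "input_ids": [1],
     "annotations": {"execution_metrics": {"thread_time_ms": 1}}},
    {"operator_id": 1, "label": "Join", "input_ids": [2, 3],
     "annotations": {"execution_metrics": {"thread_time_ms": 776}}},
    {"operator_id": 2, "label": "TableFuncScan", "input_ids": [],
     "annotations": {"execution_metrics": {"thread_time_ms": 120}}},
    {"operator_id": 3, "label": "TableFuncScan", "input_ids": [],
     "annotations": {"execution_metrics": {"thread_time_ms": 130}}}
  ]
}
"""

def slowest_operators(plan: dict, top_n: int = 3):
    """Walk the DAG from the root and rank reachable operators by thread time."""
    by_id = {op["operator_id"]: op for op in plan["operators"]}
    seen, stack, reachable = set(), [plan["root_operator_id"]], []
    while stack:
        op_id = stack.pop()
        if op_id in seen:
            continue
        seen.add(op_id)
        op = by_id[op_id]
        reachable.append(op)
        stack.extend(op["input_ids"])  # follow edges to input operators
    reachable.sort(
        key=lambda op: op["annotations"]["execution_metrics"]["thread_time_ms"],
        reverse=True,
    )
    return [(op["label"],
             op["annotations"]["execution_metrics"]["thread_time_ms"])
            for op in reachable[:top_n]]

plan = json.loads(plan_json)
print(slowest_operators(plan))  # the Join dominates in this sample
```

The same traversal pattern works for any per-operator metric exposed under execution_metrics.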