== Physical Plan ==
TakeOrderedAndProject (59)
+- * HashAggregate (58)
   +- Exchange (57)
      +- * HashAggregate (56)
         +- * HashAggregate (55)
            +- Exchange (54)
               +- * HashAggregate (53)
                  +- * Project (52)
                     +- * BroadcastHashJoin Inner BuildRight (51)
                        :- * Project (49)
                        :  +- * BroadcastHashJoin Inner BuildRight (48)
                        :     :- * Project (43)
                        :     :  +- * BroadcastHashJoin Inner BuildRight (42)
                        :     :     :- * Project (37)
                        :     :     :  +- * BroadcastHashJoin Inner BuildRight (36)
                        :     :     :     :- * HashAggregate (31)
                        :     :     :     :  +- Exchange (30)
                        :     :     :     :     +- * HashAggregate (29)
                        :     :     :     :        +- * Project (28)
                        :     :     :     :           +- * BroadcastHashJoin Inner BuildRight (27)
                        :     :     :     :              :- * Project (22)
                        :     :     :     :              :  +- * BroadcastHashJoin Inner BuildRight (21)
                        :     :     :     :              :     :- Union (19)
                        :     :     :     :              :     :  :- * Project (11)
                        :     :     :     :              :     :  :  +- * BroadcastHashJoin Inner BuildRight (10)
                        :     :     :     :              :     :  :     :- * Project (4)
                        :     :     :     :              :     :  :     :  +- * Filter (3)
                        :     :     :     :              :     :  :     :     +- * ColumnarToRow (2)
                        :     :     :     :              :     :  :     :        +- Scan parquet spark_catalog.default.catalog_sales (1)
                        :     :     :     :              :     :  :     +- BroadcastExchange (9)
                        :     :     :     :              :     :  :        +- * Project (8)
                        :     :     :     :              :     :  :           +- * Filter (7)
                        :     :     :     :              :     :  :              +- * ColumnarToRow (6)
                        :     :     :     :              :     :  :                 +- Scan parquet spark_catalog.default.item (5)
                        :     :     :     :              :     :  +- * Project (18)
                        :     :     :     :              :     :     +- * BroadcastHashJoin Inner BuildRight (17)
                        :     :     :     :              :     :        :- * Project (15)
                        :     :     :     :              :     :        :  +- * Filter (14)
                        :     :     :     :              :     :        :     +- * ColumnarToRow (13)
                        :     :     :     :              :     :        :        +- Scan parquet spark_catalog.default.web_sales (12)
                        :     :     :     :              :     :        +- ReusedExchange (16)
                        :     :     :     :              :     +- ReusedExchange (20)
                        :     :     :     :              +- BroadcastExchange (26)
                        :     :     :     :                 +- * Filter (25)
                        :     :     :     :                    +- * ColumnarToRow (24)
                        :     :     :     :                       +- Scan parquet spark_catalog.default.customer (23)
                        :     :     :     +- BroadcastExchange (35)
                        :     :     :        +- * Filter (34)
                        :     :     :           +- * ColumnarToRow (33)
                        :     :     :              +- Scan parquet spark_catalog.default.store_sales (32)
                        :     :     +- BroadcastExchange (41)
                        :     :        +- * Filter (40)
                        :     :           +- * ColumnarToRow (39)
                        :     :              +- Scan parquet spark_catalog.default.customer_address (38)
                        :     +- BroadcastExchange (47)
                        :        +- * Filter (46)
                        :           +- * ColumnarToRow (45)
                        :              +- Scan parquet spark_catalog.default.store (44)
                        +- ReusedExchange (50)
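
The tree above is Spark's EXPLAIN FORMATTED output for this TPC-DS-style customer-segmentation query: the operator ids in parentheses index the detail blocks below, `*` marks operators fused into a whole-stage-codegen region, and `:-`/`+-` draw child edges. A minimal sketch of producing such a plan (Spark 3.x Scala; the session setup is illustrative):

import org.apache.spark.sql.SparkSession

// Illustrative session; the table names match the plan's spark_catalog.default.* scans.
val spark = SparkSession.builder().appName("plan-inspection").getOrCreate()
val df = spark.sql("SELECT ...")   // the query under test (elided)
df.explain("formatted")            // prints a tree plus per-operator details like the above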


(1) Scan parquet spark_catalog.default.catalog_sales
Output [3]: [cs_bill_customer_sk#1, cs_item_sk#2, cs_sold_date_sk#3]
Batched: true
Location: InMemoryFileIndex []
PartitionFilters: [isnotnull(cs_sold_date_sk#3), dynamicpruningexpression(cs_sold_date_sk#3 IN dynamicpruning#4)]
PushedFilters: [IsNotNull(cs_item_sk), IsNotNull(cs_bill_customer_sk)]
ReadSchema: struct<cs_bill_customer_sk:int,cs_item_sk:int>
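
Two scan-time mechanisms show up here: PushedFilters are predicates pushed into the Parquet reader, and the dynamicpruningexpression in PartitionFilters is dynamic partition pruning (DPP), which at runtime restricts cs_sold_date_sk to the date keys produced by the broadcast in Subquery:1 below. A minimal sketch of the knobs involved (the config names are real; the table layout is assumed for illustration):

// Both features are on by default in Spark 3.x.
spark.conf.set("spark.sql.parquet.filterPushdown", "true")
spark.conf.set("spark.sql.optimizer.dynamicPartitionPruning.enabled", "true")

// A join against a filtered dimension lets DPP prune fact-table partitions at runtime.
val dates = spark.table("date_dim").where("d_year = 1998 AND d_moy = 12")
val sales = spark.table("catalog_sales")   // assumed partitioned by cs_sold_date_sk
sales.join(dates, sales("cs_sold_date_sk") === dates("d_date_sk")).explain("formatted")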

(2) ColumnarToRow [codegen id : 2]
Input [3]: [cs_bill_customer_sk#1, cs_item_sk#2, cs_sold_date_sk#3]

(3) Filter [codegen id : 2]
Input [3]: [cs_bill_customer_sk#1, cs_item_sk#2, cs_sold_date_sk#3]
Condition : (isnotnull(cs_item_sk#2) AND isnotnull(cs_bill_customer_sk#1))

(4) Project [codegen id : 2]
Output [3]: [cs_sold_date_sk#3 AS sold_date_sk#5, cs_bill_customer_sk#1 AS customer_sk#6, cs_item_sk#2 AS item_sk#7]
Input [3]: [cs_bill_customer_sk#1, cs_item_sk#2, cs_sold_date_sk#3]

(5) Scan parquet spark_catalog.default.item
Output [3]: [i_item_sk#8, i_class#9, i_category#10]
Batched: true
Location [not included in comparison]/{warehouse_dir}/item]
PushedFilters: [IsNotNull(i_category), IsNotNull(i_class), EqualTo(i_category,Women                                             ), EqualTo(i_class,maternity                                         ), IsNotNull(i_item_sk)]
ReadSchema: struct<i_item_sk:int,i_class:string,i_category:string>

(6) ColumnarToRow [codegen id : 1]
Input [3]: [i_item_sk#8, i_class#9, i_category#10]

(7) Filter [codegen id : 1]
Input [3]: [i_item_sk#8, i_class#9, i_category#10]
Condition : ((((isnotnull(i_category#10) AND isnotnull(i_class#9)) AND (i_category#10 = Women                                             )) AND (i_class#9 = maternity                                         )) AND isnotnull(i_item_sk#8))

(8) Project [codegen id : 1]
Output [1]: [i_item_sk#8]
Input [3]: [i_item_sk#8, i_class#9, i_category#10]

(9) BroadcastExchange
Input [1]: [i_item_sk#8]
Arguments: HashedRelationBroadcastMode(List(cast(input[0, int, true] as bigint)),false), [plan_id=1]
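
BroadcastExchange ships the build side to every executor as a hashed relation keyed on the join key (here i_item_sk cast to bigint). The planner chooses this when the build side's estimated size is below spark.sql.autoBroadcastJoinThreshold, and it can also be requested explicitly. A minimal sketch (salesDf and itemDf are hypothetical):

import org.apache.spark.sql.functions.broadcast

// Default threshold is 10 MB; setting -1 disables automatic broadcasting.
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", 10L * 1024 * 1024)

// The broadcast hint forces itemDf onto the build side, as in operator (10).
val joined = salesDf.join(broadcast(itemDf), salesDf("item_sk") === itemDf("i_item_sk"))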

(10) BroadcastHashJoin [codegen id : 2]
Left keys [1]: [item_sk#7]
Right keys [1]: [i_item_sk#8]
Join type: Inner
Join condition: None

(11) Project [codegen id : 2]
Output [2]: [sold_date_sk#5, customer_sk#6]
Input [4]: [sold_date_sk#5, customer_sk#6, item_sk#7, i_item_sk#8]

(12) Scan parquet spark_catalog.default.web_sales
Output [3]: [ws_item_sk#11, ws_bill_customer_sk#12, ws_sold_date_sk#13]
Batched: true
Location: InMemoryFileIndex []
PartitionFilters: [isnotnull(ws_sold_date_sk#13), dynamicpruningexpression(ws_sold_date_sk#13 IN dynamicpruning#4)]
PushedFilters: [IsNotNull(ws_item_sk), IsNotNull(ws_bill_customer_sk)]
ReadSchema: struct<ws_item_sk:int,ws_bill_customer_sk:int>

(13) ColumnarToRow [codegen id : 4]
Input [3]: [ws_item_sk#11, ws_bill_customer_sk#12, ws_sold_date_sk#13]

(14) Filter [codegen id : 4]
Input [3]: [ws_item_sk#11, ws_bill_customer_sk#12, ws_sold_date_sk#13]
Condition : (isnotnull(ws_item_sk#11) AND isnotnull(ws_bill_customer_sk#12))

(15) Project [codegen id : 4]
Output [3]: [ws_sold_date_sk#13 AS sold_date_sk#14, ws_bill_customer_sk#12 AS customer_sk#15, ws_item_sk#11 AS item_sk#16]
Input [3]: [ws_item_sk#11, ws_bill_customer_sk#12, ws_sold_date_sk#13]

(16) ReusedExchange [Reuses operator id: 9]
Output [1]: [i_item_sk#17]

(17) BroadcastHashJoin [codegen id : 4]
Left keys [1]: [item_sk#16]
Right keys [1]: [i_item_sk#17]
Join type: Inner
Join condition: None

(18) Project [codegen id : 4]
Output [2]: [sold_date_sk#14, customer_sk#15]
Input [4]: [sold_date_sk#14, customer_sk#15, item_sk#16, i_item_sk#17]

(19) Union
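
Union (19) concatenates the catalog branch (11) and the web branch (18); both were first projected to the same (sold_date_sk, customer_sk) shape in operators (11) and (18), since DataFrame union, like SQL UNION ALL, is positional. A minimal sketch with hypothetical inputs:

import spark.implicits._

val catalog = catalogSales.select($"cs_sold_date_sk".as("sold_date_sk"),
                                  $"cs_bill_customer_sk".as("customer_sk"))
val web     = webSales.select($"ws_sold_date_sk".as("sold_date_sk"),
                              $"ws_bill_customer_sk".as("customer_sk"))
val both    = catalog.union(web)   // no dedup here, matching UNION ALL semantics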

(20) ReusedExchange [Reuses operator id: 64]
Output [1]: [d_date_sk#18]

(21) BroadcastHashJoin [codegen id : 7]
Left keys [1]: [sold_date_sk#5]
Right keys [1]: [d_date_sk#18]
Join type: Inner
Join condition: None

(22) Project [codegen id : 7]
Output [1]: [customer_sk#6]
Input [3]: [sold_date_sk#5, customer_sk#6, d_date_sk#18]

(23) Scan parquet spark_catalog.default.customer
Output [2]: [c_customer_sk#19, c_current_addr_sk#20]
Batched: true
Location [not included in comparison]/{warehouse_dir}/customer]
PushedFilters: [IsNotNull(c_customer_sk), IsNotNull(c_current_addr_sk)]
ReadSchema: struct<c_customer_sk:int,c_current_addr_sk:int>

(24) ColumnarToRow [codegen id : 6]
Input [2]: [c_customer_sk#19, c_current_addr_sk#20]

(25) Filter [codegen id : 6]
Input [2]: [c_customer_sk#19, c_current_addr_sk#20]
Condition : (isnotnull(c_customer_sk#19) AND isnotnull(c_current_addr_sk#20))

(26) BroadcastExchange
Input [2]: [c_customer_sk#19, c_current_addr_sk#20]
Arguments: HashedRelationBroadcastMode(List(cast(input[0, int, false] as bigint)),false), [plan_id=2]

(27) BroadcastHashJoin [codegen id : 7]
Left keys [1]: [customer_sk#6]
Right keys [1]: [c_customer_sk#19]
Join type: Inner
Join condition: None

(28) Project [codegen id : 7]
Output [2]: [c_customer_sk#19, c_current_addr_sk#20]
Input [3]: [customer_sk#6, c_customer_sk#19, c_current_addr_sk#20]

(29) HashAggregate [codegen id : 7]
Input [2]: [c_customer_sk#19, c_current_addr_sk#20]
Keys [2]: [c_customer_sk#19, c_current_addr_sk#20]
Functions: []
Aggregate Attributes: []
Results [2]: [c_customer_sk#19, c_current_addr_sk#20]

(30) Exchange
Input [2]: [c_customer_sk#19, c_current_addr_sk#20]
Arguments: hashpartitioning(c_customer_sk#19, c_current_addr_sk#20, 5), ENSURE_REQUIREMENTS, [plan_id=3]
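
The 5 in hashpartitioning(..., 5) is the shuffle partition count, evidently pinned low in the harness that generated this plan; the production default is 200 (or AQE may coalesce partitions). The real config, shown as a sketch:

// Controls the partition count of every ENSURE_REQUIREMENTS exchange in this plan.
spark.conf.set("spark.sql.shuffle.partitions", "5")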

(31) HashAggregate [codegen id : 12]
Input [2]: [c_customer_sk#19, c_current_addr_sk#20]
Keys [2]: [c_customer_sk#19, c_current_addr_sk#20]
Functions: []
Aggregate Attributes: []
Results [2]: [c_customer_sk#19, c_current_addr_sk#20]
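
Operators (29)-(31) are a HashAggregate pair with grouping keys but no aggregate functions, which is how Spark plans DISTINCT: partial dedup inside each task (29), a hash shuffle on the keys (30), then final dedup (31). An equivalent DataFrame sketch (joined is hypothetical):

// Plans as HashAggregate -> Exchange -> HashAggregate, exactly as in (29)-(31).
val uniqueCustomers = joined
  .select("c_customer_sk", "c_current_addr_sk")
  .distinct()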

(32) Scan parquet spark_catalog.default.store_sales
Output [3]: [ss_customer_sk#21, ss_ext_sales_price#22, ss_sold_date_sk#23]
Batched: true
Location: InMemoryFileIndex []
PartitionFilters: [isnotnull(ss_sold_date_sk#23), dynamicpruningexpression(ss_sold_date_sk#23 IN dynamicpruning#24)]
PushedFilters: [IsNotNull(ss_customer_sk)]
ReadSchema: struct<ss_customer_sk:int,ss_ext_sales_price:decimal(7,2)>

(33) ColumnarToRow [codegen id : 8]
Input [3]: [ss_customer_sk#21, ss_ext_sales_price#22, ss_sold_date_sk#23]

(34) Filter [codegen id : 8]
Input [3]: [ss_customer_sk#21, ss_ext_sales_price#22, ss_sold_date_sk#23]
Condition : isnotnull(ss_customer_sk#21)

(35) BroadcastExchange
Input [3]: [ss_customer_sk#21, ss_ext_sales_price#22, ss_sold_date_sk#23]
Arguments: HashedRelationBroadcastMode(List(cast(input[0, int, false] as bigint)),false), [plan_id=4]

(36) BroadcastHashJoin [codegen id : 12]
Left keys [1]: [c_customer_sk#19]
Right keys [1]: [ss_customer_sk#21]
Join type: Inner
Join condition: None

(37) Project [codegen id : 12]
Output [4]: [c_customer_sk#19, c_current_addr_sk#20, ss_ext_sales_price#22, ss_sold_date_sk#23]
Input [5]: [c_customer_sk#19, c_current_addr_sk#20, ss_customer_sk#21, ss_ext_sales_price#22, ss_sold_date_sk#23]

(38) Scan parquet spark_catalog.default.customer_address
Output [3]: [ca_address_sk#25, ca_county#26, ca_state#27]
Batched: true
Location [not included in comparison]/{warehouse_dir}/customer_address]
PushedFilters: [IsNotNull(ca_address_sk), IsNotNull(ca_county), IsNotNull(ca_state)]
ReadSchema: struct<ca_address_sk:int,ca_county:string,ca_state:string>

(39) ColumnarToRow [codegen id : 9]
Input [3]: [ca_address_sk#25, ca_county#26, ca_state#27]

(40) Filter [codegen id : 9]
Input [3]: [ca_address_sk#25, ca_county#26, ca_state#27]
Condition : ((isnotnull(ca_address_sk#25) AND isnotnull(ca_county#26)) AND isnotnull(ca_state#27))

(41) BroadcastExchange
Input [3]: [ca_address_sk#25, ca_county#26, ca_state#27]
Arguments: HashedRelationBroadcastMode(List(cast(input[0, int, false] as bigint)),false), [plan_id=5]

(42) BroadcastHashJoin [codegen id : 12]
Left keys [1]: [c_current_addr_sk#20]
Right keys [1]: [ca_address_sk#25]
Join type: Inner
Join condition: None

(43) Project [codegen id : 12]
Output [5]: [c_customer_sk#19, ss_ext_sales_price#22, ss_sold_date_sk#23, ca_county#26, ca_state#27]
Input [7]: [c_customer_sk#19, c_current_addr_sk#20, ss_ext_sales_price#22, ss_sold_date_sk#23, ca_address_sk#25, ca_county#26, ca_state#27]

(44) Scan parquet spark_catalog.default.store
Output [2]: [s_county#28, s_state#29]
Batched: true
Location [not included in comparison]/{warehouse_dir}/store]
PushedFilters: [IsNotNull(s_county), IsNotNull(s_state)]
ReadSchema: struct<s_county:string,s_state:string>

(45) ColumnarToRow [codegen id : 10]
Input [2]: [s_county#28, s_state#29]

(46) Filter [codegen id : 10]
Input [2]: [s_county#28, s_state#29]
Condition : (isnotnull(s_county#28) AND isnotnull(s_state#29))

(47) BroadcastExchange
Input [2]: [s_county#28, s_state#29]
Arguments: HashedRelationBroadcastMode(List(input[0, string, false], input[1, string, false]),false), [plan_id=6]

(48) BroadcastHashJoin [codegen id : 12]
Left keys [2]: [ca_county#26, ca_state#27]
Right keys [2]: [s_county#28, s_state#29]
Join type: Inner
Join condition: None

(49) Project [codegen id : 12]
Output [3]: [c_customer_sk#19, ss_ext_sales_price#22, ss_sold_date_sk#23]
Input [7]: [c_customer_sk#19, ss_ext_sales_price#22, ss_sold_date_sk#23, ca_county#26, ca_state#27, s_county#28, s_state#29]

(50) ReusedExchange [Reuses operator id: 69]
Output [1]: [d_date_sk#30]

(51) BroadcastHashJoin [codegen id : 12]
Left keys [1]: [ss_sold_date_sk#23]
Right keys [1]: [d_date_sk#30]
Join type: Inner
Join condition: None

(52) Project [codegen id : 12]
Output [2]: [c_customer_sk#19, ss_ext_sales_price#22]
Input [4]: [c_customer_sk#19, ss_ext_sales_price#22, ss_sold_date_sk#23, d_date_sk#30]

(53) HashAggregate [codegen id : 12]
Input [2]: [c_customer_sk#19, ss_ext_sales_price#22]
Keys [1]: [c_customer_sk#19]
Functions [1]: [partial_sum(UnscaledValue(ss_ext_sales_price#22))]
Aggregate Attributes [1]: [sum#31]
Results [2]: [c_customer_sk#19, sum#32]

(54) Exchange
Input [2]: [c_customer_sk#19, sum#32]
Arguments: hashpartitioning(c_customer_sk#19, 5), ENSURE_REQUIREMENTS, [plan_id=7]

(55) HashAggregate [codegen id : 13]
Input [2]: [c_customer_sk#19, sum#32]
Keys [1]: [c_customer_sk#19]
Functions [1]: [sum(UnscaledValue(ss_ext_sales_price#22))]
Aggregate Attributes [1]: [sum(UnscaledValue(ss_ext_sales_price#22))#33]
Results [1]: [cast((MakeDecimal(sum(UnscaledValue(ss_ext_sales_price#22))#33,17,2) / 50) as int) AS segment#34]
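
UnscaledValue/MakeDecimal is Spark's fast path for decimal sums: each decimal(7,2) price is accumulated as its unscaled long (123.45 is carried as 12345), and MakeDecimal(..., 17, 2) rescales the total back to decimal(17,2). Dividing by 50 and casting to int then buckets each customer's revenue into $50-wide segments; for example, a customer whose total spend is 237.80 lands in segment int(237.80 / 50) = 4, and operator (58) later derives segment_base = 4 * 50 = 200.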

(56) HashAggregate [codegen id : 13]
Input [1]: [segment#34]
Keys [1]: [segment#34]
Functions [1]: [partial_count(1)]
Aggregate Attributes [1]: [count#35]
Results [2]: [segment#34, count#36]

(57) Exchange
Input [2]: [segment#34, count#36]
Arguments: hashpartitioning(segment#34, 5), ENSURE_REQUIREMENTS, [plan_id=8]

(58) HashAggregate [codegen id : 14]
Input [2]: [segment#34, count#36]
Keys [1]: [segment#34]
Functions [1]: [count(1)]
Aggregate Attributes [1]: [count(1)#37]
Results [3]: [segment#34, count(1)#37 AS num_customers#38, (segment#34 * 50) AS segment_base#39]

(59) TakeOrderedAndProject
Input [3]: [segment#34, num_customers#38, segment_base#39]
Arguments: 100, [segment#34 ASC NULLS FIRST, num_customers#38 ASC NULLS FIRST], [segment#34, num_customers#38, segment_base#39]
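
TakeOrderedAndProject is the fused physical form of ORDER BY ... LIMIT: each partition keeps only its top 100 rows and the driver merges those, avoiding a global sort exchange. An equivalent sketch (result is hypothetical):

import org.apache.spark.sql.functions.col

// Plans as TakeOrderedAndProject(limit=100, ...) rather than Sort + Limit.
result.orderBy(col("segment").asc_nulls_first, col("num_customers").asc_nulls_first)
      .limit(100)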

===== Subqueries =====

Subquery:1 Hosting operator id = 1 Hosting Expression = cs_sold_date_sk#3 IN dynamicpruning#4
BroadcastExchange (64)
+- * Project (63)
   +- * Filter (62)
      +- * ColumnarToRow (61)
         +- Scan parquet spark_catalog.default.date_dim (60)


(60) Scan parquet spark_catalog.default.date_dim
Output [3]: [d_date_sk#18, d_year#40, d_moy#41]
Batched: true
Location [not included in comparison]/{warehouse_dir}/date_dim]
PushedFilters: [IsNotNull(d_moy), IsNotNull(d_year), EqualTo(d_moy,12), EqualTo(d_year,1998), IsNotNull(d_date_sk)]
ReadSchema: struct<d_date_sk:int,d_year:int,d_moy:int>

(61) ColumnarToRow [codegen id : 1]
Input [3]: [d_date_sk#18, d_year#40, d_moy#41]

(62) Filter [codegen id : 1]
Input [3]: [d_date_sk#18, d_year#40, d_moy#41]
Condition : ((((isnotnull(d_moy#41) AND isnotnull(d_year#40)) AND (d_moy#41 = 12)) AND (d_year#40 = 1998)) AND isnotnull(d_date_sk#18))

(63) Project [codegen id : 1]
Output [1]: [d_date_sk#18]
Input [3]: [d_date_sk#18, d_year#40, d_moy#41]

(64) BroadcastExchange
Input [1]: [d_date_sk#18]
Arguments: HashedRelationBroadcastMode(List(cast(input[0, int, true] as bigint)),false), [plan_id=9]

Subquery:2 Hosting operator id = 12 Hosting Expression = ws_sold_date_sk#13 IN dynamicpruning#4

Subquery:3 Hosting operator id = 32 Hosting Expression = ss_sold_date_sk#23 IN dynamicpruning#24
BroadcastExchange (69)
+- * Project (68)
   +- * Filter (67)
      +- * ColumnarToRow (66)
         +- Scan parquet spark_catalog.default.date_dim (65)


(65) Scan parquet spark_catalog.default.date_dim
Output [2]: [d_date_sk#30, d_month_seq#42]
Batched: true
Location [not included in comparison]/{warehouse_dir}/date_dim]
PushedFilters: [IsNotNull(d_month_seq), GreaterThanOrEqual(d_month_seq,ScalarSubquery#43), LessThanOrEqual(d_month_seq,ScalarSubquery#44), IsNotNull(d_date_sk)]
ReadSchema: struct<d_date_sk:int,d_month_seq:int>
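
The GreaterThanOrEqual/LessThanOrEqual pushed filters reference two scalar subqueries (Subquery:6 and Subquery:7 below), each of which must reduce to a single value: they compute the distinct d_month_seq + 1 and d_month_seq + 3 for December 1998, bounding store sales to the three following months. In SQL terms, this scan implements roughly:

spark.sql("""
  SELECT d_date_sk FROM date_dim
  WHERE d_month_seq >= (SELECT DISTINCT d_month_seq + 1 FROM date_dim
                        WHERE d_year = 1998 AND d_moy = 12)
    AND d_month_seq <= (SELECT DISTINCT d_month_seq + 3 FROM date_dim
                        WHERE d_year = 1998 AND d_moy = 12)
""")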

(66) ColumnarToRow [codegen id : 1]
Input [2]: [d_date_sk#30, d_month_seq#42]

(67) Filter [codegen id : 1]
Input [2]: [d_date_sk#30, d_month_seq#42]
Condition : (((isnotnull(d_month_seq#42) AND (d_month_seq#42 >= ReusedSubquery Subquery scalar-subquery#43, [id=#10])) AND (d_month_seq#42 <= ReusedSubquery Subquery scalar-subquery#44, [id=#11])) AND isnotnull(d_date_sk#30))

(68) Project [codegen id : 1]
Output [1]: [d_date_sk#30]
Input [2]: [d_date_sk#30, d_month_seq#42]

(69) BroadcastExchange
Input [1]: [d_date_sk#30]
Arguments: HashedRelationBroadcastMode(List(cast(input[0, int, true] as bigint)),false), [plan_id=12]

Subquery:4 Hosting operator id = 67 Hosting Expression = ReusedSubquery Subquery scalar-subquery#43, [id=#10]

Subquery:5 Hosting operator id = 67 Hosting Expression = ReusedSubquery Subquery scalar-subquery#44, [id=#11]

Subquery:6 Hosting operator id = 65 Hosting Expression = Subquery scalar-subquery#43, [id=#10]
* HashAggregate (76)
+- Exchange (75)
   +- * HashAggregate (74)
      +- * Project (73)
         +- * Filter (72)
            +- * ColumnarToRow (71)
               +- Scan parquet spark_catalog.default.date_dim (70)


(70) Scan parquet spark_catalog.default.date_dim
Output [3]: [d_month_seq#45, d_year#46, d_moy#47]
Batched: true
Location [not included in comparison]/{warehouse_dir}/date_dim]
PushedFilters: [IsNotNull(d_year), IsNotNull(d_moy), EqualTo(d_year,1998), EqualTo(d_moy,12)]
ReadSchema: struct<d_month_seq:int,d_year:int,d_moy:int>

(71) ColumnarToRow [codegen id : 1]
Input [3]: [d_month_seq#45, d_year#46, d_moy#47]

(72) Filter [codegen id : 1]
Input [3]: [d_month_seq#45, d_year#46, d_moy#47]
Condition : (((isnotnull(d_year#46) AND isnotnull(d_moy#47)) AND (d_year#46 = 1998)) AND (d_moy#47 = 12))

(73) Project [codegen id : 1]
Output [1]: [(d_month_seq#45 + 1) AS (d_month_seq + 1)#48]
Input [3]: [d_month_seq#45, d_year#46, d_moy#47]

(74) HashAggregate [codegen id : 1]
Input [1]: [(d_month_seq + 1)#48]
Keys [1]: [(d_month_seq + 1)#48]
Functions: []
Aggregate Attributes: []
Results [1]: [(d_month_seq + 1)#48]

(75) Exchange
Input [1]: [(d_month_seq + 1)#48]
Arguments: hashpartitioning((d_month_seq + 1)#48, 5), ENSURE_REQUIREMENTS, [plan_id=13]

(76) HashAggregate [codegen id : 2]
Input [1]: [(d_month_seq + 1)#48]
Keys [1]: [(d_month_seq + 1)#48]
Functions: []
Aggregate Attributes: []
Results [1]: [(d_month_seq + 1)#48]

Subquery:7 Hosting operator id = 65 Hosting Expression = Subquery scalar-subquery#44, [id=#11]
* HashAggregate (83)
+- Exchange (82)
   +- * HashAggregate (81)
      +- * Project (80)
         +- * Filter (79)
            +- * ColumnarToRow (78)
               +- Scan parquet spark_catalog.default.date_dim (77)


(77) Scan parquet spark_catalog.default.date_dim
Output [3]: [d_month_seq#49, d_year#50, d_moy#51]
Batched: true
Location [not included in comparison]/{warehouse_dir}/date_dim]
PushedFilters: [IsNotNull(d_year), IsNotNull(d_moy), EqualTo(d_year,1998), EqualTo(d_moy,12)]
ReadSchema: struct<d_month_seq:int,d_year:int,d_moy:int>

(78) ColumnarToRow [codegen id : 1]
Input [3]: [d_month_seq#49, d_year#50, d_moy#51]

(79) Filter [codegen id : 1]
Input [3]: [d_month_seq#49, d_year#50, d_moy#51]
Condition : (((isnotnull(d_year#50) AND isnotnull(d_moy#51)) AND (d_year#50 = 1998)) AND (d_moy#51 = 12))

(80) Project [codegen id : 1]
Output [1]: [(d_month_seq#49 + 3) AS (d_month_seq + 3)#52]
Input [3]: [d_month_seq#49, d_year#50, d_moy#51]

(81) HashAggregate [codegen id : 1]
Input [1]: [(d_month_seq + 3)#52]
Keys [1]: [(d_month_seq + 3)#52]
Functions: []
Aggregate Attributes: []
Results [1]: [(d_month_seq + 3)#52]

(82) Exchange
Input [1]: [(d_month_seq + 3)#52]
Arguments: hashpartitioning((d_month_seq + 3)#52, 5), ENSURE_REQUIREMENTS, [plan_id=14]

(83) HashAggregate [codegen id : 2]
Input [1]: [(d_month_seq + 3)#52]
Keys [1]: [(d_month_seq + 3)#52]
Functions: []
Aggregate Attributes: []
Results [1]: [(d_month_seq + 3)#52]
