@@ -366,10 +366,10 @@ channel from which input should be received.
| `steps/step/inbound/nativeCollectionType` (xref:#_pipeline_step_inbound_native_collection_type_options[details])
| No
| Yes
- | If using native as the `inbound` type, `nativeCollectionType` allows the implementation of the collection object being
- passed into the step to be customized to any valid xref:type-metamodel.adoc[Type Manager] type. If not
- specified, it will default to dataset (which in turn is defaulted to `org.apache.spark.sql.Dataset` for a Spark-based
- implementation).
+ | If using native as the `inbound` type, the name of the `nativeCollectionType` must be defined in the dictionary, where
+ its `simpleType` can be any valid xref:type-metamodel.adoc[Type Manager] type. The Type Manager type determines the
+ collection implementation passed into the step. If not specified, it defaults to dataset (which in turn defaults to
+ `org.apache.spark.sql.Dataset` for a Spark-based implementation).
| `steps/step/inbound/recordType` (xref:#_pipeline_step_inbound_record_type_options[details])
| No
@@ -496,12 +496,10 @@ channel to which output should be sent.
| `steps/step/outbound/nativeCollectionType` (xref:#_pipeline_step_outbound_native_collection_type_options[details])
| No
| Yes
- | If using native as the `outbound` type, `nativeCollectionType` allows the implementation of the collection object being
- returned from the step to be customized to any valid xref:type-metamodel.adoc[Type Manager] type. If
- not specified, it will default to dataset (which in turn is defaulted to `org.apache.spark.sql.Dataset` for a
- Spark-based implementation).
-
- This has been changed to be defined to a valid dictionary type.
+ | If using native as the `outbound` type, the name of the `nativeCollectionType` must be defined in the dictionary, where
+ its `simpleType` can be any valid xref:type-metamodel.adoc[Type Manager] type. The Type Manager type determines
+ the collection implementation returned from the step. If not specified, it defaults to dataset (which in turn defaults
+ to `org.apache.spark.sql.Dataset` for a Spark-based implementation).
| `steps/step/outbound/recordType` (xref:_pipeline_step_outbound_record_type_options[details])
| No
@@ -640,9 +638,10 @@ modes] for details on the options.
| `steps/step/persist/collectionType` (xref:_pipeline_step_persist_collection_type_options[details])
| No
| Yes
- | Allows the implementation of the collection object being persisted from the step to be customized to
- any valid xref:type-metamodel.adoc[Type Manager] type. If not specified, it will default to dataset
- (which in turn is defaulted to `org.apache.spark.sql.Dataset` for a Spark-based implementation).
+ | The name of the `collectionType` must be defined in the dictionary, where its `simpleType` can be any valid
+ xref:type-metamodel.adoc[Type Manager] type. The Type Manager type determines the implementation of the collection
+ object being persisted from the step. If not specified, it defaults to dataset (which in turn defaults to
+ `org.apache.spark.sql.Dataset` for a Spark-based implementation).
| `steps/step/persist/recordType` (xref:_pipeline_step_persist_record_type_options[details])
| No
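
The hunks above all describe the same pattern: a `nativeCollectionType` (or persist `collectionType`) named in the pipeline model must resolve to a dictionary entry whose `simpleType` is a valid Type Manager type. As a rough sketch only (the dictionary name, package, and entry name below are illustrative assumptions, not taken from this diff), such a dictionary entry might look like:

```json
{
  "name": "PipelineDictionary",
  "packageName": "com.example.dictionary",
  "dictionaryTypes": [
    {
      "name": "customCollectionType",
      "simpleType": "dataset"
    }
  ]
}
```

A step's `inbound`, `outbound`, or `persist` element could then reference `customCollectionType` by name, and the Type Manager would resolve its `simpleType` of `dataset` to `org.apache.spark.sql.Dataset` in a Spark-based implementation.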