DRAFT_RELEASE_NOTES.md (+1, −1)
@@ -15,7 +15,7 @@ Spark and PySpark have been upgraded from version 3.5.2 to 3.5.4.
## Record Relation
To enable nested data records, we have added a new relation feature to the record metamodel. This allows records to reference other records. For more details, refer to the [Record Relation Options](https://boozallen.github.io/aissemble/aissemble/current-dev/record-metamodel.html#_record_relation_options).
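For example, a record might declare a relation along these lines. This is a purely illustrative sketch: the key names (`relations`, `multiplicity`) and their legal values are assumptions, not the actual metamodel syntax — consult the linked Record Relation Options documentation for the real option names.

```json
{
  "name": "Person",
  "package": "com.example.records",
  "relations": [
    {
      "name": "Address",
      "package": "com.example.records",
      "multiplicity": "ONE_TO_MANY"
    }
  ]
}
```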
Several features are still a work in progress:
-- PySpark schema generation for records with any multiplicity
+- PySpark- and Spark-based validation for records with a one-to-many multiplicity (object validation is already available)

## Helm Charts Resource Specification
The following Helm charts have been updated to include the configuration options for specifying container resource requests/limits:
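Container resource requests and limits follow the standard Kubernetes resource specification. In a chart's `values.yaml` they typically take a shape like the following sketch — the exact key path and default values vary per chart, so treat the names here as illustrative:

```yaml
# Illustrative values.yaml fragment; the actual key path depends on the chart.
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 512Mi
```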
foundation/foundation-mda/src/main/java/com/boozallen/aiops/mda/metamodel/element/pyspark/PySparkSchemaRecord.java
foundation/foundation-mda/src/main/resources/templates/data-delivery-data-records/pyspark.schema.base.py.vm (+52, −12)
@@ -1,9 +1,10 @@
 from abc import ABC
+
 from pyspark.sql.dataframe import DataFrame
 from pyspark.sql.column import Column
 from pyspark.sql.types import StructType
 from pyspark.sql.types import DataType
-from pyspark.sql.functions import col
+from pyspark.sql.functions import col, lit
 from typing import List
 import types
 #foreach ($import in $record.baseImports)
@@ -34,6 +35,9 @@ class ${record.capitalizedName}SchemaBase(ABC):
+                [${relation.snakeCaseName}.as_row() for ${relation.snakeCaseName} in self.${relation.snakeCaseName}] if self.${relation.snakeCaseName} is not None else None#if ($foreach.hasNext),#end
+#else
+                self.${relation.snakeCaseName}.as_row() if self.${relation.snakeCaseName} is not None else None#if ($foreach.hasNext),#end
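The template's relation handling can be illustrated with a plain-Python sketch of what the generated `as_row` logic does. Class and field names here are hypothetical, and plain dicts stand in for `pyspark.sql.Row`:

```python
# Illustrative stand-in for generated schema classes; the real generated
# code builds pyspark.sql.Row objects rather than dicts.
class Address:
    def __init__(self, city: str):
        self.city = city

    def as_row(self) -> dict:
        return {"city": self.city}


class Person:
    # "addresses" models a one-to-many (1:M) relation to Address records.
    def __init__(self, addresses=None):
        self.addresses = addresses

    def as_row(self) -> dict:
        # Mirrors the template above: a list comprehension serializes each
        # related record for a 1:M relation, a 1:1 relation would call
        # as_row() directly, and None is preserved when the relation is unset.
        return {
            "addresses": [a.as_row() for a in self.addresses]
            if self.addresses is not None
            else None
        }
```

For instance, `Person([Address("Boston")]).as_row()` yields `{"addresses": [{"city": "Boston"}]}`, while `Person().as_row()` yields `{"addresses": None}`.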
foundation/foundation-mda/src/main/resources/templates/data-delivery-data-records/spark.schema.base.java.vm (+4, −8)
@@ -283,17 +283,13 @@ public abstract class ${record.capitalizedName}SchemaBase extends SparkSchema {
     /**
      * Validate the given ${relation.capitalizedName} 1:M multiplicity relation dataset against ${relation.capitalizedName}Schema.
      * False will be returned if any one of the relation records fails schema validation.
+     * Currently not implemented, so it throws a NotImplementedException.
      * @param ${relation.uncapitalizedName}Dataset
-     * @return boolean value to indicate validation result
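Although the 1:M dataset validation currently throws a `NotImplementedException`, the contract the Javadoc describes — fail if any single related record fails — can be sketched in plain Python. The names and the per-record check below are hypothetical; the real generated method operates on a Spark `Dataset`:

```python
def validate_relation_records(records, validate_one) -> bool:
    # Returns False if any one relation record fails its schema
    # validation, matching the documented contract.
    return all(validate_one(record) for record in records)


# Hypothetical per-record check: here, "city" must be non-empty.
is_valid = validate_relation_records(
    [{"city": "Boston"}, {"city": ""}],
    validate_one=lambda r: bool(r.get("city")),
)
# is_valid is False because the second record fails the check.
```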
test/test-mda-models/aissemble-test-data-delivery-pyspark-model/src/aissemble_test_data_delivery_pyspark_model/resources/dictionaries/AddressDictionary.json
test/test-mda-models/aissemble-test-data-delivery-pyspark-model/src/aissemble_test_data_delivery_pyspark_model/resources/dictionaries/PysparkDataDeliveryDictionary.json
test/test-mda-models/aissemble-test-data-delivery-pyspark-model/src/aissemble_test_data_delivery_pyspark_model/resources/pipelines/PysparkDataDeliveryPatterns.json
test/test-mda-models/aissemble-test-data-delivery-pyspark-model/src/aissemble_test_data_delivery_pyspark_model/resources/records/Address.json
test/test-mda-models/aissemble-test-data-delivery-pyspark-model/src/aissemble_test_data_delivery_pyspark_model/resources/records/CustomData.json