Commit d1744d1

terraform-docs: automated action
1 parent 728ad51

1 file changed (+13, -13 lines)

README.md

Lines changed: 13 additions & 13 deletions
@@ -398,22 +398,22 @@ No modules.
398 398
| Name | Description | Type | Default | Required |
399 399
|------|-------------|------|---------|:--------:|
400 400
| <a name="input_cloud_name"></a> [cloud\_name](#input\_cloud\_name) | Cloud Name | `string` | n/a | yes |
401-
| <a name="input_clusters"></a> [clusters](#input\_clusters) | Set of objects with parameters to configure Databricks clusters and assign permissions to it for certain custom groups | <pre>set(object({<br> cluster_name = string<br> spark_version = optional(string, "15.3.x-scala2.12")<br> spark_conf = optional(map(any), {})<br> spark_env_vars = optional(map(any), {})<br> data_security_mode = optional(string, "USER_ISOLATION")<br> aws_attributes = optional(object({<br> availability = optional(string)<br> zone_id = optional(string)<br> first_on_demand = optional(number)<br> spot_bid_price_percent = optional(number)<br> ebs_volume_count = optional(number)<br> ebs_volume_size = optional(number)<br> ebs_volume_type = optional(string)<br> }), {<br> availability = "ON_DEMAND"<br> zone_id = "auto"<br> first_on_demand = 0<br> spot_bid_price_percent = 100<br> ebs_volume_count = 1<br> ebs_volume_size = 100<br> ebs_volume_type = "GENERAL_PURPOSE_SSD"<br> })<br> azure_attributes = optional(object({<br> availability = optional(string)<br> first_on_demand = optional(number)<br> spot_bid_max_price = optional(number, 1)<br> }), {<br> availability = "ON_DEMAND_AZURE"<br> first_on_demand = 0<br> })<br> node_type_id = optional(string, null)<br> autotermination_minutes = optional(number, 20)<br> min_workers = optional(number, 1)<br> max_workers = optional(number, 2)<br> cluster_log_conf_destination = optional(string, null)<br> init_scripts_workspace = optional(set(string), [])<br> init_scripts_volumes = optional(set(string), [])<br> init_scripts_dbfs = optional(set(string), [])<br> init_scripts_abfss = optional(set(string), [])<br> single_user_name = optional(string, null)<br> single_node_enable = optional(bool, false)<br> custom_tags = optional(map(string), {})<br> permissions = optional(set(object({<br> group_name = string<br> permission_level = string<br> })), [])<br> pypi_library_repository = optional(set(string), [])<br> maven_library_repository = optional(set(object({<br> coordinates = string<br> exclusions = set(string)<br> })), [])<br> }))</pre> | `[]` | no |
402-
| <a name="input_custom_cluster_policies"></a> [custom\_cluster\_policies](#input\_custom\_cluster\_policies) | Provides an ability to create custom cluster policy, assign it to cluster and grant CAN\_USE permissions on it to certain custom groups<br>name - name of custom cluster policy to create<br>can\_use - list of string, where values are custom group names, there groups have to be created with Terraform;<br>definition - JSON document expressed in Databricks Policy Definition Language. No need to call 'jsonencode()' function on it when providing a value; | <pre>list(object({<br> name = string<br> can_use = list(string)<br> definition = any<br> }))</pre> | <pre>[<br> {<br> "can_use": null,<br> "definition": null,<br> "name": null<br> }<br>]</pre> | no |
403-
| <a name="input_custom_config"></a> [custom\_config](#input\_custom\_config) | Map of AD databricks workspace custom config | `map(string)` | <pre>{<br> "enable-X-Content-Type-Options": "true",<br> "enable-X-Frame-Options": "true",<br> "enable-X-XSS-Protection": "true",<br> "enableDbfsFileBrowser": "false",<br> "enableExportNotebook": "false",<br> "enableIpAccessLists": "true",<br> "enableNotebookTableClipboard": "false",<br> "enableResultsDownloading": "false",<br> "enableUploadDataUis": "false",<br> "enableVerboseAuditLogs": "true",<br> "enforceUserIsolation": "true",<br> "storeInteractiveNotebookResultsInCustomerAccount": "true"<br>}</pre> | no |
404-
| <a name="input_default_cluster_policies_override"></a> [default\_cluster\_policies\_override](#input\_default\_cluster\_policies\_override) | Provides an ability to override default cluster policy<br>name - name of cluster policy to override<br>family\_id - family id of corresponding policy<br>definition - JSON document expressed in Databricks Policy Definition Language. No need to call 'jsonencode()' function on it when providing a value; | <pre>list(object({<br> name = string<br> family_id = string<br> definition = any<br> }))</pre> | <pre>[<br> {<br> "definition": null,<br> "family_id": null,<br> "name": null<br> }<br>]</pre> | no |
405-
| <a name="input_iam_account_groups"></a> [iam\_account\_groups](#input\_iam\_account\_groups) | List of objects with group name and entitlements for this group | <pre>list(object({<br> group_name = optional(string)<br> entitlements = optional(list(string))<br> }))</pre> | `[]` | no |
406-
| <a name="input_iam_workspace_groups"></a> [iam\_workspace\_groups](#input\_iam\_workspace\_groups) | Used to create workspace group. Map of group name and its parameters, such as users and service principals added to the group. Also possible to configure group entitlements. | <pre>map(object({<br> user = optional(list(string))<br> service_principal = optional(list(string))<br> entitlements = optional(list(string))<br> }))</pre> | `{}` | no |
407-
| <a name="input_ip_addresses"></a> [ip\_addresses](#input\_ip\_addresses) | A map of IP address ranges | `map(string)` | <pre>{<br> "all": "0.0.0.0/0"<br>}</pre> | no |
408-
| <a name="input_key_vault_secret_scope"></a> [key\_vault\_secret\_scope](#input\_key\_vault\_secret\_scope) | Object with Azure Key Vault parameters required for creation of Azure-backed Databricks Secret scope | <pre>list(object({<br> name = string<br> key_vault_id = string<br> dns_name = string<br> tenant_id = string<br> }))</pre> | `[]` | no |
409-
| <a name="input_mount_configuration"></a> [mount\_configuration](#input\_mount\_configuration) | Configuration for mounting storage, including only service principal details | <pre>object({<br> service_principal = object({<br> client_id = string<br> client_secret = string<br> tenant_id = string<br> })<br> })</pre> | <pre>{<br> "service_principal": {<br> "client_id": null,<br> "client_secret": null,<br> "tenant_id": null<br> }<br>}</pre> | no |
401+
| <a name="input_clusters"></a> [clusters](#input\_clusters) | Set of objects with parameters to configure Databricks clusters and assign permissions to it for certain custom groups | <pre>set(object({<br/> cluster_name = string<br/> spark_version = optional(string, "15.3.x-scala2.12")<br/> spark_conf = optional(map(any), {})<br/> spark_env_vars = optional(map(any), {})<br/> data_security_mode = optional(string, "USER_ISOLATION")<br/> aws_attributes = optional(object({<br/> availability = optional(string)<br/> zone_id = optional(string)<br/> first_on_demand = optional(number)<br/> spot_bid_price_percent = optional(number)<br/> ebs_volume_count = optional(number)<br/> ebs_volume_size = optional(number)<br/> ebs_volume_type = optional(string)<br/> }), {<br/> availability = "ON_DEMAND"<br/> zone_id = "auto"<br/> first_on_demand = 0<br/> spot_bid_price_percent = 100<br/> ebs_volume_count = 1<br/> ebs_volume_size = 100<br/> ebs_volume_type = "GENERAL_PURPOSE_SSD"<br/> })<br/> azure_attributes = optional(object({<br/> availability = optional(string)<br/> first_on_demand = optional(number)<br/> spot_bid_max_price = optional(number, 1)<br/> }), {<br/> availability = "ON_DEMAND_AZURE"<br/> first_on_demand = 0<br/> })<br/> node_type_id = optional(string, null)<br/> autotermination_minutes = optional(number, 20)<br/> min_workers = optional(number, 1)<br/> max_workers = optional(number, 2)<br/> cluster_log_conf_destination = optional(string, null)<br/> init_scripts_workspace = optional(set(string), [])<br/> init_scripts_volumes = optional(set(string), [])<br/> init_scripts_dbfs = optional(set(string), [])<br/> init_scripts_abfss = optional(set(string), [])<br/> single_user_name = optional(string, null)<br/> single_node_enable = optional(bool, false)<br/> custom_tags = optional(map(string), {})<br/> permissions = optional(set(object({<br/> group_name = string<br/> permission_level = string<br/> })), [])<br/> pypi_library_repository = optional(set(string), [])<br/> maven_library_repository = optional(set(object({<br/> coordinates = string<br/> exclusions = set(string)<br/> })), [])<br/> }))</pre> | `[]` | no |
402+
| <a name="input_custom_cluster_policies"></a> [custom\_cluster\_policies](#input\_custom\_cluster\_policies) | Provides an ability to create custom cluster policy, assign it to cluster and grant CAN\_USE permissions on it to certain custom groups<br/>name - name of custom cluster policy to create<br/>can\_use - list of string, where values are custom group names, there groups have to be created with Terraform;<br/>definition - JSON document expressed in Databricks Policy Definition Language. No need to call 'jsonencode()' function on it when providing a value; | <pre>list(object({<br/> name = string<br/> can_use = list(string)<br/> definition = any<br/> }))</pre> | <pre>[<br/> {<br/> "can_use": null,<br/> "definition": null,<br/> "name": null<br/> }<br/>]</pre> | no |
403+
| <a name="input_custom_config"></a> [custom\_config](#input\_custom\_config) | Map of AD databricks workspace custom config | `map(string)` | <pre>{<br/> "enable-X-Content-Type-Options": "true",<br/> "enable-X-Frame-Options": "true",<br/> "enable-X-XSS-Protection": "true",<br/> "enableDbfsFileBrowser": "false",<br/> "enableExportNotebook": "false",<br/> "enableIpAccessLists": "true",<br/> "enableNotebookTableClipboard": "false",<br/> "enableResultsDownloading": "false",<br/> "enableUploadDataUis": "false",<br/> "enableVerboseAuditLogs": "true",<br/> "enforceUserIsolation": "true",<br/> "storeInteractiveNotebookResultsInCustomerAccount": "true"<br/>}</pre> | no |
404+
| <a name="input_default_cluster_policies_override"></a> [default\_cluster\_policies\_override](#input\_default\_cluster\_policies\_override) | Provides an ability to override default cluster policy<br/>name - name of cluster policy to override<br/>family\_id - family id of corresponding policy<br/>definition - JSON document expressed in Databricks Policy Definition Language. No need to call 'jsonencode()' function on it when providing a value; | <pre>list(object({<br/> name = string<br/> family_id = string<br/> definition = any<br/> }))</pre> | <pre>[<br/> {<br/> "definition": null,<br/> "family_id": null,<br/> "name": null<br/> }<br/>]</pre> | no |
405+
| <a name="input_iam_account_groups"></a> [iam\_account\_groups](#input\_iam\_account\_groups) | List of objects with group name and entitlements for this group | <pre>list(object({<br/> group_name = optional(string)<br/> entitlements = optional(list(string))<br/> }))</pre> | `[]` | no |
406+
| <a name="input_iam_workspace_groups"></a> [iam\_workspace\_groups](#input\_iam\_workspace\_groups) | Used to create workspace group. Map of group name and its parameters, such as users and service principals added to the group. Also possible to configure group entitlements. | <pre>map(object({<br/> user = optional(list(string))<br/> service_principal = optional(list(string))<br/> entitlements = optional(list(string))<br/> }))</pre> | `{}` | no |
407+
| <a name="input_ip_addresses"></a> [ip\_addresses](#input\_ip\_addresses) | A map of IP address ranges | `map(string)` | <pre>{<br/> "all": "0.0.0.0/0"<br/>}</pre> | no |
408+
| <a name="input_key_vault_secret_scope"></a> [key\_vault\_secret\_scope](#input\_key\_vault\_secret\_scope) | Object with Azure Key Vault parameters required for creation of Azure-backed Databricks Secret scope | <pre>list(object({<br/> name = string<br/> key_vault_id = string<br/> dns_name = string<br/> tenant_id = string<br/> }))</pre> | `[]` | no |
409+
| <a name="input_mount_configuration"></a> [mount\_configuration](#input\_mount\_configuration) | Configuration for mounting storage, including only service principal details | <pre>object({<br/> service_principal = object({<br/> client_id = string<br/> client_secret = string<br/> tenant_id = string<br/> })<br/> })</pre> | <pre>{<br/> "service_principal": {<br/> "client_id": null,<br/> "client_secret": null,<br/> "tenant_id": null<br/> }<br/>}</pre> | no |
410 410
| <a name="input_mount_enabled"></a> [mount\_enabled](#input\_mount\_enabled) | Boolean flag that determines whether mount point for storage account filesystem is created | `bool` | `false` | no |
411-
| <a name="input_mountpoints"></a> [mountpoints](#input\_mountpoints) | Mountpoints for databricks | <pre>map(object({<br> storage_account_name = string<br> container_name = string<br> }))</pre> | `{}` | no |
411+
| <a name="input_mountpoints"></a> [mountpoints](#input\_mountpoints) | Mountpoints for databricks | <pre>map(object({<br/> storage_account_name = string<br/> container_name = string<br/> }))</pre> | `{}` | no |
412 412
| <a name="input_pat_token_lifetime_seconds"></a> [pat\_token\_lifetime\_seconds](#input\_pat\_token\_lifetime\_seconds) | The lifetime of the token, in seconds. If no lifetime is specified, the token remains valid indefinitely | `number` | `315569520` | no |
413-
| <a name="input_secret_scope"></a> [secret\_scope](#input\_secret\_scope) | Provides an ability to create custom Secret Scope, store secrets in it and assigning ACL for access management<br>scope\_name - name of Secret Scope to create;<br>acl - list of objects, where 'principal' custom group name, this group is created in 'Premium' module; 'permission' is one of "READ", "WRITE", "MANAGE";<br>secrets - list of objects, where object's 'key' param is created key name and 'string\_value' is a value for it; | <pre>list(object({<br> scope_name = string<br> scope_acl = optional(list(object({<br> principal = string<br> permission = string<br> })))<br> secrets = optional(list(object({<br> key = string<br> string_value = string<br> })))<br> }))</pre> | `[]` | no |
414-
| <a name="input_sql_endpoint"></a> [sql\_endpoint](#input\_sql\_endpoint) | Set of objects with parameters to configure SQL Endpoint and assign permissions to it for certain custom groups | <pre>set(object({<br> name = string<br> cluster_size = optional(string, "2X-Small")<br> min_num_clusters = optional(number, 0)<br> max_num_clusters = optional(number, 1)<br> auto_stop_mins = optional(string, "30")<br> enable_photon = optional(bool, false)<br> enable_serverless_compute = optional(bool, false)<br> spot_instance_policy = optional(string, "COST_OPTIMIZED")<br> warehouse_type = optional(string, "PRO")<br> permissions = optional(set(object({<br> group_name = string<br> permission_level = string<br> })), [])<br> }))</pre> | `[]` | no |
413+
| <a name="input_secret_scope"></a> [secret\_scope](#input\_secret\_scope) | Provides an ability to create custom Secret Scope, store secrets in it and assigning ACL for access management<br/>scope\_name - name of Secret Scope to create;<br/>acl - list of objects, where 'principal' custom group name, this group is created in 'Premium' module; 'permission' is one of "READ", "WRITE", "MANAGE";<br/>secrets - list of objects, where object's 'key' param is created key name and 'string\_value' is a value for it; | <pre>list(object({<br/> scope_name = string<br/> scope_acl = optional(list(object({<br/> principal = string<br/> permission = string<br/> })))<br/> secrets = optional(list(object({<br/> key = string<br/> string_value = string<br/> })))<br/> }))</pre> | `[]` | no |
414+
| <a name="input_sql_endpoint"></a> [sql\_endpoint](#input\_sql\_endpoint) | Set of objects with parameters to configure SQL Endpoint and assign permissions to it for certain custom groups | <pre>set(object({<br/> name = string<br/> cluster_size = optional(string, "2X-Small")<br/> min_num_clusters = optional(number, 0)<br/> max_num_clusters = optional(number, 1)<br/> auto_stop_mins = optional(string, "30")<br/> enable_photon = optional(bool, false)<br/> enable_serverless_compute = optional(bool, false)<br/> spot_instance_policy = optional(string, "COST_OPTIMIZED")<br/> warehouse_type = optional(string, "PRO")<br/> permissions = optional(set(object({<br/> group_name = string<br/> permission_level = string<br/> })), [])<br/> }))</pre> | `[]` | no |
415 415
| <a name="input_suffix"></a> [suffix](#input\_suffix) | Optional suffix that would be added to the end of resources names. | `string` | `""` | no |
416-
| <a name="input_system_schemas"></a> [system\_schemas](#input\_system\_schemas) | Set of strings with all possible System Schema names | `set(string)` | <pre>[<br> "access",<br> "billing",<br> "compute",<br> "marketplace",<br> "storage"<br>]</pre> | no |
416+
| <a name="input_system_schemas"></a> [system\_schemas](#input\_system\_schemas) | Set of strings with all possible System Schema names | `set(string)` | <pre>[<br/> "access",<br/> "billing",<br/> "compute",<br/> "marketplace",<br/> "storage"<br/>]</pre> | no |
417 417
| <a name="input_system_schemas_enabled"></a> [system\_schemas\_enabled](#input\_system\_schemas\_enabled) | System Schemas only works with assigned Unity Catalog Metastore. Boolean flag to enabled this feature | `bool` | `false` | no |
418 418
| <a name="input_workspace_admin_token_enabled"></a> [workspace\_admin\_token\_enabled](#input\_workspace\_admin\_token\_enabled) | Boolean flag to specify whether to create Workspace Admin Token | `bool` | n/a | yes |
419 419

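The diff above only swaps `<br>` for `<br/>` in the terraform-docs output; the documented inputs themselves are unchanged. For context, the sketch below shows how a few of these inputs might be passed to the module. The module source path, group names, and policy values are illustrative assumptions, not taken from this commit.

```hcl
# Minimal usage sketch for the documented inputs; all concrete values here
# are hypothetical examples.
module "databricks_runtime" {
  source = "../.."   # hypothetical local path to this module

  cloud_name                    = "azure"
  workspace_admin_token_enabled = true

  # One autoscaling cluster; unset attributes fall back to the defaults
  # declared in the `clusters` type above (spark_version, min_workers, etc.).
  clusters = [{
    cluster_name = "shared"
    max_workers  = 4
    permissions = [{
      group_name       = "data-engineers"   # assumed custom group
      permission_level = "CAN_RESTART"
    }]
  }]

  # Custom cluster policy; per the input description, `definition` is passed
  # as a plain HCL value, without wrapping it in jsonencode().
  custom_cluster_policies = [{
    name    = "limit-autotermination"
    can_use = ["data-engineers"]
    definition = {
      "autotermination_minutes" = { "type" = "fixed", "value" = 30 }
    }
  }]

  # SQL warehouse created with the documented defaults (2X-Small, PRO).
  sql_endpoint = [{
    name = "default-warehouse"
  }]
}
```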