docs/user_guides/fs/storage_connector/creation/s3.md
### Step 5: Add Spark Options (optional)

Here you can specify any additional Spark options that you wish to add to the Spark context at runtime. Multiple options can be added as key-value pairs.

!!! tip
    To connect to an S3 compatible storage other than AWS S3, add an option with the key `fs.s3a.endpoint` and the endpoint you want to use as the value. The storage connector will then be able to read from your specified S3 compatible storage.
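As a minimal sketch, the key-value pairs described above could look as follows when entered in the Spark Options step; the endpoint URL is a hypothetical example, not a real service:

```python
# Hypothetical Spark options for an S3-compatible storage (e.g. a self-hosted
# object store). Only `fs.s3a.endpoint` is taken from the tip above; the URL
# is an illustrative placeholder.
spark_options = {
    # Point the S3A connector at an S3-compatible endpoint instead of AWS S3
    "fs.s3a.endpoint": "https://minio.example.com:9000",
}

for key, value in spark_options.items():
    print(f"{key} = {value}")
```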
!!! warning "Spark Configuration"
    When using the storage connector within a Spark application, the credentials are set at the application level. This allows users to access multiple buckets with the same storage connector within the same application (assuming the credentials allow it).

    You can disable this behaviour by setting the option `fs.s3a.global-conf` to `False`. If the `global-conf` option is disabled, the credentials are set on a per-bucket basis and users will be able to use the credentials to access data only from the bucket specified in the storage connector configuration.
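The two credential scopes described in the warning above can be sketched as the following option sets; only `fs.s3a.global-conf` comes from this guide, and the values are shown as strings purely for illustration:

```python
# Default behaviour: credentials are set at the application level, so the
# same connector can reach any bucket the credentials permit.
application_scope = {
    "fs.s3a.global-conf": "True",
}

# Opt-out: credentials are bound to the bucket configured in the storage
# connector, and only that bucket's data is accessible.
bucket_scope = {
    "fs.s3a.global-conf": "False",
}

print(application_scope["fs.s3a.global-conf"], bucket_scope["fs.s3a.global-conf"])
```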
## Next Steps
Move on to the [usage guide for storage connectors](../usage.md) to see how you can use your newly created S3 connector.