Commit 9ca2140

[FSTORE-1604] Add option to avoid setting s3a global options (logicalclocks#424)
1 parent dd5c61a

File tree

1 file changed: +6 −2 lines

  • docs/user_guides/fs/storage_connector/creation

docs/user_guides/fs/storage_connector/creation/s3.md

+6 −2

```diff
@@ -73,8 +73,12 @@ If you have SSE-KMS enabled for your bucket, you can find the key ARN in the "Pr
 ### Step 5: Add Spark Options (optional)
 Here you can specify any additional spark options that you wish to add to the spark context at runtime. Multiple options can be added as key - value pairs.
 
-!!! tip
-    To connect to a S3 compatible storage other than AWS S3, you can add the option with key as `fs.s3a.endpoint` and the endpoint you want to use as value. The storage connector will then be able to read from your specified S3 compatible storage.
+To connect to a S3 compatible storage other than AWS S3, you can add the option with key as `fs.s3a.endpoint` and the endpoint you want to use as value. The storage connector will then be able to read from your specified S3 compatible storage.
+
+!!! warning "Spark Configuration"
+    When using the storage connector within a Spark application, the credentials are set at application level. This allows users to access multiple buckets with the same storage connector within the same application (assuming the credentials allow it).
+    You can disable this behaviour by setting the option `fs.s3a.global-conf` to `False`. If the `global-conf` option is disabled, the credentials are set on a per-bucket basis and users will be able to use the credentials to access data only from the bucket specified in the storage connector configuration.
+
 ## Next Steps
 
 Move on to the [usage guide for storage connectors](../usage.md) to see how you can use your newly created S3 connector.
```
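The application-level vs per-bucket behaviour that this commit documents can be sketched as follows. This is an illustrative Python sketch, not the actual Hopsworks implementation: the function name `s3a_credential_options` is hypothetical, and only the Hadoop S3A key patterns (`fs.s3a.*` globally, `fs.s3a.bucket.<bucket>.*` for per-bucket overrides) come from Hadoop's S3A configuration scheme.

```python
# Hypothetical sketch: how a `fs.s3a.global-conf`-style flag could scope
# the credentials a storage connector sets. Only the Hadoop S3A key
# patterns are real; the function itself is illustrative.
def s3a_credential_options(access_key: str, secret_key: str,
                           bucket: str, global_conf: bool = True) -> dict:
    """Return the Hadoop configuration keys a connector might set.

    global_conf=True  -> application-wide keys; any bucket the
                         credentials allow is reachable.
    global_conf=False -> Hadoop per-bucket override keys, so the
                         credentials apply only to `bucket`.
    """
    prefix = "fs.s3a." if global_conf else f"fs.s3a.bucket.{bucket}."
    return {
        f"{prefix}access.key": access_key,
        f"{prefix}secret.key": secret_key,
    }

# Default (application level): keys are not tied to any bucket.
print(s3a_credential_options("AKIA-EXAMPLE", "secret", "my-bucket"))
# global-conf disabled: keys only affect "my-bucket".
print(s3a_credential_options("AKIA-EXAMPLE", "secret", "my-bucket",
                             global_conf=False))
```

The design trade-off mirrors the warning in the diff: global keys are convenient for multi-bucket access, while per-bucket keys confine the credentials to the connector's configured bucket.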
