
Commit 87e470a

[FSTORE-1604] Add option to avoid setting s3a global options (#424)
1 parent 61f8a6d commit 87e470a

File changed: docs/user_guides/fs/storage_connector/creation/s3.md (+6 −2 lines)
@@ -71,8 +71,12 @@ If you have SSE-KMS enabled for your bucket, you can find the key ARN in the "Pr
### Step 5: Add Spark Options (optional)

Here you can specify any additional Spark options that you wish to add to the Spark context at runtime. Multiple options can be added as key-value pairs.

To connect to an S3-compatible storage other than AWS S3, you can add an option with `fs.s3a.endpoint` as the key and the endpoint you want to use as the value. The storage connector will then be able to read from your specified S3-compatible storage.
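For example, to point a connector at a self-hosted, S3-compatible store, the Spark options could look like this (the MinIO endpoint below is a hypothetical placeholder; `fs.s3a.path.style.access` is a standard Hadoop S3A property that many non-AWS object stores also require):

```
fs.s3a.endpoint            https://minio.example.com:9000
fs.s3a.path.style.access   true
```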
!!! warning "Spark Configuration"

    When using the storage connector within a Spark application, the credentials are set at the application level. This allows users to access multiple buckets with the same storage connector within the same application (assuming the credentials allow it).

    You can disable this behaviour by setting the option `fs.s3a.global-conf` to `False`. If the `global-conf` option is disabled, the credentials are set on a per-bucket basis and users will be able to use the credentials to access data only from the bucket specified in the storage connector configuration.
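As a sketch of the underlying Hadoop S3A mechanism (the bucket name is hypothetical, and the exact properties set on your behalf are not spelled out in this guide), the two modes roughly correspond to:

```
# Default (fs.s3a.global-conf = True): credentials apply to every bucket
fs.s3a.access.key    <access key>
fs.s3a.secret.key    <secret key>

# fs.s3a.global-conf = False: credentials are scoped to the connector's bucket
fs.s3a.bucket.my-bucket.access.key    <access key>
fs.s3a.bucket.my-bucket.secret.key    <secret key>
```

With the per-bucket form, reads from any other bucket will not pick up these credentials, matching the behaviour described above.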
## Next Steps

Move on to the [usage guide for storage connectors](../usage.md) to see how you can use your newly created S3 connector.
