Commit 4d8506f ("init"), parent 1091fb1

File tree (2 files changed, +1 −0 lines changed):

- docs/assets/images/guides/fs/storage_connector
- docs/user_guides/fs/storage_connector/creation


docs/user_guides/fs/storage_connector/creation/s3.md (+1)
```diff
@@ -17,6 +17,7 @@ When you're finished, you'll be able to read files using Spark through HSFS APIs
 Before you begin this guide you'll need to retrieve the following information from your AWS S3 account and bucket:

 - **Bucket:** You will need an S3 bucket that you have access to. The bucket is identified by its name.
+- **Path (Optional):** If needed, a path can be defined to ensure that all operations are restricted to a specific location within the bucket.
 - **Region (Optional):** You will need an S3 region to have complete control over data when managing the feature group that relies on this storage connector. The region is identified by its code.
 - **Authentication Method:** You can authenticate using Access Key/Secret, or use IAM roles. If you want to use an IAM role it either needs to be attached to the entire Hopsworks cluster or Hopsworks needs to be able to assume the role. See [IAM role documentation](../../../../setup_installation/admin/roleChaining.md) for more information.
 - **Server Side Encryption details:** If your bucket has server-side encryption (SSE) enabled, make sure you know which algorithm it is using (AES256 or SSE-KMS). If you are using SSE-KMS, you need the resource ARN of the managed key.
```
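The prerequisites in the diff above form a small configuration contract: a required bucket, optional path and region, exactly one authentication method, and an SSE-KMS key ARN whenever SSE-KMS is chosen. The sketch below is illustrative only, assuming hypothetical field names (`build_s3_connector_config` and its dict keys are not the Hopsworks API); it shows how those constraints fit together.

```python
# Illustrative sketch only: the function and field names below are assumptions
# drawn from the prerequisite list in this guide, not the Hopsworks/HSFS API.

def build_s3_connector_config(
    bucket,
    path=None,
    region=None,
    access_key=None,
    secret_key=None,
    iam_role=None,
    sse_algorithm=None,
    sse_kms_key_arn=None,
):
    """Collect the S3 storage connector prerequisites and check the
    constraints described in the guide."""
    if not bucket:
        raise ValueError("Bucket is required; it is identified by its name.")

    # Exactly one authentication method: Access Key/Secret, or an IAM role.
    key_auth = access_key is not None and secret_key is not None
    if key_auth == (iam_role is not None):
        raise ValueError("Provide either an access key/secret pair or an IAM role.")

    # SSE must be AES256 or SSE-KMS; SSE-KMS also needs the managed key's ARN.
    if sse_algorithm not in (None, "AES256", "SSE-KMS"):
        raise ValueError("SSE algorithm must be AES256 or SSE-KMS.")
    if sse_algorithm == "SSE-KMS" and not sse_kms_key_arn:
        raise ValueError("SSE-KMS requires the resource ARN of the managed key.")

    config = {"bucket": bucket}
    if path:
        config["path"] = path      # restrict all operations to this location
    if region:
        config["region"] = region  # region code, e.g. "eu-north-1"
    if key_auth:
        config["access_key"] = access_key
        config["secret_key"] = secret_key
    else:
        config["iam_role"] = iam_role
    if sse_algorithm:
        config["sse_algorithm"] = sse_algorithm
        if sse_kms_key_arn:
            config["sse_kms_key_arn"] = sse_kms_key_arn
    return config
```

For example, a connector restricted to a prefix and authenticated via an assumed IAM role would pass `bucket`, `path`, `region`, and `iam_role`, and the function would reject a call that supplies both an access key pair and an IAM role.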
