Remote environment: Support infrastructure files for Terraform. #4973
```diff
@@ -8,6 +8,7 @@ import (
 	"context"
 	"errors"
 	"fmt"
+	"io"
 	"os"
 	"path/filepath"
 	"slices"
@@ -19,6 +20,7 @@ import (
 	"github.com/azure/azure-dev/cli/azd/pkg/azsdk/storage"
 	"github.com/azure/azure-dev/cli/azd/pkg/config"
 	"github.com/azure/azure-dev/cli/azd/pkg/contracts"
+	"github.com/azure/azure-dev/cli/azd/pkg/environment/azdcontext"
 	"github.com/google/uuid"
 	"github.com/joho/godotenv"
 )
```
```diff
@@ -31,12 +33,15 @@ var (
 type StorageBlobDataStore struct {
 	configManager config.Manager
 	blobClient    storage.BlobClient
+	azdContext    *azdcontext.AzdContext
 }
 
-func NewStorageBlobDataStore(configManager config.Manager, blobClient storage.BlobClient) RemoteDataStore {
+func NewStorageBlobDataStore(configManager config.Manager, blobClient storage.BlobClient,
+	azdContext *azdcontext.AzdContext) RemoteDataStore {
 	return &StorageBlobDataStore{
 		configManager: configManager,
 		blobClient:    blobClient,
+		azdContext:    azdContext,
 	}
 }
```
```diff
@@ -50,6 +55,11 @@ func (fs *StorageBlobDataStore) ConfigPath(env *Environment) string {
 	return fmt.Sprintf("%s/%s", env.name, ConfigFileName)
 }
 
+// InfraPath returns the path to the infra files for the given environment
+func (fs *StorageBlobDataStore) InfraPath(env *Environment) string {
+	return fmt.Sprintf("%s/%s", env.name, InfraFilesDir)
+}
+
 func (sbd *StorageBlobDataStore) List(ctx context.Context) ([]*contracts.EnvListEnvironment, error) {
 	blobs, err := sbd.blobClient.Items(ctx)
 	if err != nil {
```
```diff
@@ -143,6 +153,22 @@ func (sbd *StorageBlobDataStore) Save(ctx context.Context, env *Environment, opt
 		return fmt.Errorf("uploading .env: %w", describeError(err))
 	}
 
+	// Upload Infra Files if any
+	filesToUpload := sbd.azdContext.GetEnvironmentInfraFiles(env.name)
+
+	for _, file := range filesToUpload {
+		fileName := filepath.Base(file)
+		fileBuffer, err := os.Open(file)
+		if err != nil {
+			return fmt.Errorf("failed to open file for upload: %w", err)
+		}
+		defer fileBuffer.Close()
+		err = sbd.blobClient.Upload(ctx, fmt.Sprintf("%s/%s/%s", env.name, InfraFilesDir, fileName), fileBuffer)
+		if err != nil {
+			return fmt.Errorf("uploading infra file: %w", err)
+		}
+	}
+
 	tracing.SetUsageAttributes(fields.StringHashed(fields.EnvNameKey, env.Name()))
 	return nil
 }
```

> **Review comment:** What happens if some uploads succeed and some fail? This would leave an environment in an inconsistent state. Is there a way to make the upload more atomic? Maybe upload a zip of all the files instead of the files individually?

> **Review comment:** How are secrets protected in this approach? It seems like a user could upload a lot of sensitive information to a storage account, and I don't see any documentation on how to secure it correctly. There's no mention of turning on blob encryption, turning off public access, restricting network access, or any other best practices for putting tfstate in a storage account: https://learn.microsoft.com/en-us/azure/developer/azure-developer-cli/remote-environments-support
> Also, if I'm working across two computers, how is the other computer notified of the change so I have the latest state? Do I have to manually re-run …?

> **Comment on lines +157 to +170:** @HadwaAbdelhalem Instead of being selective here, what if we just upload any files that are in the environment folder? That would solve this specific use case as well as any other similar issues.

> **Reply:** @wbreza I wanted to be more specific in case users have local tmp or test files in the local environment folder that were not meant to be persisted, and to follow the pattern of environment items being well structured and defined (config file, env file) by adding infra files.

> **Review comment:** What would happen with …? Synchronizing a local tfstate file doesn't seem to account for concurrency, locking, and conflict resolution, which might leave customers in a much worse state than just using Terraform remote state. It exposes customers to inconsistencies and potential data loss when a conflict occurs. I have not seen manually synchronizing a local state file recommended over just using Terraform remote state. I believe we should be recommending best practices to our customers using …
|
```diff
@@ -191,6 +217,41 @@ func (sbd *StorageBlobDataStore) Reload(ctx context.Context, env *Environment) e
 		tracing.SetGlobalAttributes(fields.StringHashed(fields.SubscriptionIdKey, env.GetSubscriptionId()))
 	}
 
+	// Reload infra config file if any
+	items, err := sbd.blobClient.Items(ctx)
+	if err != nil {
+		return describeError(err)
+	}
+
+	for _, item := range items {
+		if strings.Contains(item.Path, sbd.InfraPath(env)) {
+
+			blobStream, err := sbd.blobClient.Download(ctx, item.Path)
+			if err != nil {
+				return fmt.Errorf("failed to download blob: %w", err)
+			}
+			defer blobStream.Close()
+
+			localInfraDir := sbd.azdContext.GetEnvironmentInfraDirectory(env.name)
+			err = os.MkdirAll(localInfraDir, os.ModePerm)
+			if err != nil {
+				return fmt.Errorf("failed to create directory: %w", err)
+			}
+
+			localFilePath := fmt.Sprintf("%s/%s", localInfraDir, filepath.Base(item.Path))
+			file, err := os.Create(localFilePath)
+			if err != nil {
+				return fmt.Errorf("failed to create local file: %w", err)
+			}
+			defer file.Close()
+
+			_, err = io.Copy(file, blobStream)
+			if err != nil {
+				return fmt.Errorf("failed to write blob to local file: %w", err)
+			}
+		}
+	}
+
 	return nil
 }
```

> **Review comment:** Using `Contains` is fragile: if one environment's name is a substring of another, this `strings.Contains` check can incorrectly match blobs from a different environment. For example, an environment named "prod" would match blobs for "prod2".
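The substring concern above could be handled by anchoring the match at the start of the blob path and requiring a trailing separator. A minimal sketch, with `belongsToInfra` as a hypothetical helper; the `infraPath` argument stands in for the value returned by the PR's `InfraPath` (e.g. `"prod/infra"`):

```go
package main

import (
	"fmt"
	"strings"
)

// belongsToInfra (hypothetical helper) reports whether blobPath sits inside the
// environment's infra directory. Anchoring with HasPrefix plus a trailing
// separator avoids the cross-environment substring matches that
// strings.Contains allows.
func belongsToInfra(blobPath, infraPath string) bool {
	return strings.HasPrefix(blobPath, infraPath+"/")
}

func main() {
	fmt.Println(belongsToInfra("prod/infra/main.tf", "prod/infra"))  // true
	fmt.Println(belongsToInfra("prod2/infra/main.tf", "prod/infra")) // false
}
```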
> **Review comment:** Can files be deleted from the remote environment? How would I remove an individual file and have it removed from storage and from other machines using this environment?
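Propagating deletions would require `Reload` to diff the remote listing against the local infra directory rather than only downloading. A hedged sketch of that comparison, with `staleLocalFiles` as a hypothetical helper; the caller would be expected to delete the returned paths after downloading the rest:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// staleLocalFiles (hypothetical helper) returns local file paths whose base
// names no longer appear in the remote blob listing; Reload could delete these
// so removals made on one machine propagate to others.
func staleLocalFiles(localFiles, remoteNames []string) []string {
	remote := make(map[string]bool, len(remoteNames))
	for _, n := range remoteNames {
		remote[n] = true
	}
	var stale []string
	for _, f := range localFiles {
		if !remote[filepath.Base(f)] {
			stale = append(stale, f)
		}
	}
	return stale
}

func main() {
	fmt.Println(staleLocalFiles(
		[]string{"env/infra/main.tf", "env/infra/old.tf"},
		[]string{"main.tf"},
	)) // prints [env/infra/old.tf]
}
```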