feat: Backup support #475
base: main
Conversation
Walkthrough
The changes integrate a complete backup feature into the BirdNET-Go application. A new backup command has been added to the CLI, and a comprehensive backup management system is implemented. This includes interfaces for backup sources and targets, support for various target types (local filesystem, FTP, SFTP, rsync, and Google Drive), and a SQLite backup source. Additionally, backup-related configuration options, default values, and validations are introduced, along with dependency updates and disk space utilities for both Unix and Windows.
Sequence Diagram(s)
```mermaid
sequenceDiagram
    participant User as "CLI User"
    participant CLI as "BirdNET-Go CLI"
    participant BackupCmd as "Backup Command"
    participant Manager as "Backup Manager"
    participant Source as "Backup Source (SQLite)"
    participant Target as "Backup Target(s)"
    User->>CLI: Execute backup command
    CLI->>BackupCmd: Invoke backup command
    BackupCmd->>Manager: Validate backup configuration & initialize manager
    Manager->>Source: Register & validate backup source
    Manager->>Target: Register & validate backup targets
    Manager->>Manager: Execute backup process (create archive, add files, encrypt if needed)
    Manager-->>BackupCmd: Return backup outcome
    BackupCmd-->>CLI: Display success or error message
```
Actionable comments posted: 5
🧹 Nitpick comments (24)
internal/diskmanager/disk_usage_windows.go (1)
Lines 38-61: Consider refactoring to reduce code duplication. Both `GetDiskUsage` and `GetAvailableSpace` share similar setup code. Consider extracting the common functionality into a helper function. Here's a suggested refactoring:
```diff
+// getDiskSpace is a helper function that retrieves disk space information
+func getDiskSpace(baseDir string) (freeBytesAvailable int64, totalNumberOfBytes int64, totalNumberOfFreeBytes int64, err error) {
+	kernel32 := syscall.NewLazyDLL("kernel32.dll")
+	getDiskFreeSpaceEx := kernel32.NewProc("GetDiskFreeSpaceExW")
+
+	utf16Path, err := syscall.UTF16PtrFromString(baseDir)
+	if err != nil {
+		return 0, 0, 0, err
+	}
+
+	_, _, err = getDiskFreeSpaceEx.Call(
+		uintptr(unsafe.Pointer(utf16Path)),
+		uintptr(unsafe.Pointer(&freeBytesAvailable)),
+		uintptr(unsafe.Pointer(&totalNumberOfBytes)),
+		uintptr(unsafe.Pointer(&totalNumberOfFreeBytes)),
+	)
+	if err != syscall.Errno(0) {
+		return 0, 0, 0, err
+	}
+
+	return freeBytesAvailable, totalNumberOfBytes, totalNumberOfFreeBytes, nil
+}
+
 func GetDiskUsage(baseDir string) (float64, error) {
-	kernel32 := syscall.NewLazyDLL("kernel32.dll")
-	getDiskFreeSpaceEx := kernel32.NewProc("GetDiskFreeSpaceExW")
-
-	var freeBytesAvailable, totalNumberOfBytes, totalNumberOfFreeBytes int64
-
-	utf16Path, err := syscall.UTF16PtrFromString(baseDir)
-	if err != nil {
-		return 0, err
-	}
-
-	_, _, err = getDiskFreeSpaceEx.Call(
-		uintptr(unsafe.Pointer(utf16Path)),
-		uintptr(unsafe.Pointer(&freeBytesAvailable)),
-		uintptr(unsafe.Pointer(&totalNumberOfBytes)),
-		uintptr(unsafe.Pointer(&totalNumberOfFreeBytes)),
-	)
-	if err != syscall.Errno(0) {
+	_, totalNumberOfBytes, totalNumberOfFreeBytes, err := getDiskSpace(baseDir)
+	if err != nil {
 		return 0, err
 	}

 	used := totalNumberOfBytes - totalNumberOfFreeBytes
-
 	return (float64(used) / float64(totalNumberOfBytes)) * 100, nil
 }

 func GetAvailableSpace(baseDir string) (uint64, error) {
-	kernel32 := syscall.NewLazyDLL("kernel32.dll")
-	getDiskFreeSpaceEx := kernel32.NewProc("GetDiskFreeSpaceExW")
-
-	var freeBytesAvailable, totalNumberOfBytes, totalNumberOfFreeBytes int64
-
-	utf16Path, err := syscall.UTF16PtrFromString(baseDir)
-	if err != nil {
-		return 0, err
-	}
-
-	_, _, err = getDiskFreeSpaceEx.Call(
-		uintptr(unsafe.Pointer(utf16Path)),
-		uintptr(unsafe.Pointer(&freeBytesAvailable)),
-		uintptr(unsafe.Pointer(&totalNumberOfBytes)),
-		uintptr(unsafe.Pointer(&totalNumberOfFreeBytes)),
-	)
-	if err != syscall.Errno(0) {
+	freeBytesAvailable, _, _, err := getDiskSpace(baseDir)
+	if err != nil {
 		return 0, err
 	}
-
 	return uint64(freeBytesAvailable), nil
 }
```

cmd/backup/backup.go (4)
Lines 17-29: Add usage examples or more detailed help messages. It may be beneficial to include usage examples or additional notes in the command’s `Long` field to guide new users.
Lines 31-60: Check MySQL fallback logic and user guidance. When `settings.Output.MySQL.Enabled` is true, the current logic returns without performing a backup. Consider clarifying how users should proceed, or providing alternative methods if MySQL is incorrectly enabled in the config.
Lines 62-104: Consider partial target registration outcomes. Currently, a single failing target initialization logs a warning but proceeds to register others. This is fine, but ensure your documentation clarifies that backups continue despite some targets failing. You may provide a summary of all failed targets once registration completes.
Lines 106-125: Allow the timeout to be configurable. A hard-coded 10-minute timeout may be insufficient for large databases or slow network targets. Providing a configurable timeout or a way to extend it could prevent incomplete backups; a sketch follows below.
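One possible shape for such an option, shown as a minimal sketch rather than the PR's actual code (the `Timeout` config field and the `settings.Backup` wiring are assumed names):

```go
// backupTimeout falls back to a default when no timeout is configured.
// The "configured" value would come from a hypothetical settings.Backup.Timeout field.
func backupTimeout(configured time.Duration) time.Duration {
	const defaultTimeout = 10 * time.Minute
	if configured <= 0 {
		return defaultTimeout
	}
	return configured
}

// Usage sketch:
//   ctx, cancel := context.WithTimeout(context.Background(), backupTimeout(settings.Backup.Timeout))
//   defer cancel()
```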
internal/backup/targets/gdrive.go (2)
Lines 81-124: Consider concurrent uploads for large files. Relying on `io.Copy` to a single writer is acceptable for small backups. For significantly large backups, you may consider chunked or resumable uploads in GCS.
Lines 126-171: Pagination or partial listing might be beneficial. For very large buckets, you may want to implement pagination or an upper limit on results to avoid excessive listing operations.

internal/backup/targets/rsync.go (3)
Lines 94-99: Revisit rsync flags for performance and completeness. You're using `-av`, which handles archive mode and verbosity. If preserving extended attributes or ACLs is needed in production, consider adding flags like `-A`, `-X`, or `-H`. Also note that `-v` can be very noisy for large backups, so gauge whether detailed verbosity is truly beneficial in debug mode.
Lines 141-164: Harden parsing of ls output to handle unexpected elements. Your parsing uses specific indices (e.g., `parts[4]` for size, `[5..7]` for time). If the output from `ls` deviates due to unusual filenames, localized output, or extended attributes, parsing could fail or yield incorrect results. Consider robust checks or a safer parsing strategy.
Lines 196-229: Enhance validation by confirming write permissions. Your `Validate()` method tests connectivity via SSH and rsync using a dry run, which is helpful. For a stricter check, consider verifying that the base path is writable by actually creating and removing a small test file or directory (rather than just a `.test` path in dry-run mode).

internal/backup/targets/sftp.go (3)
Lines 88-104: Use secure host key validation instead of `ssh.InsecureIgnoreHostKey()`. Though perfectly acceptable in development or testing, relying on `ssh.InsecureIgnoreHostKey()` in production can pose security risks by allowing potential MITM attacks. Consider switching to a more secure approach (e.g., known_hosts checks or `ssh.FixedHostKey()`).
Lines 199-210: Consider caching connections for repeated listing. Every call to `List()` triggers a new SFTP connection, which might be costly for frequent listing operations. You could optionally keep a short-lived cached connection if your app performs repeated listing in quick succession.
Lines 291-296: Add explicit logging for successful directory creation. In `createDirectory()`, you silently succeed when directories are created. For better observability (especially when `t.debug` is true), consider logging or printing a debug statement, consistent with other methods.

internal/backup/targets/ftp.go (1)
Lines 85-94: Confirm logger usage across all operations. You provide a configurable logger but use `log.Printf` in some places. For consistency and possible external overrides, consider routing all messages (including debug logs) through `t.logger.Printf` or an equivalent method.

internal/backup/backup.go (4)
Lines 158-163: Refine the simplistic schedule parsing logic. This code uses `nil` as a placeholder in `fmt.Sscanf(schedule, "%d %d * * *", nil, &hour)`, which is unconventional and may cause confusion. Additionally, it ignores the minutes field although the comment suggests it only supports a single daily schedule at a specific hour. Consider providing clearer error messages or capturing minute values if needed.
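A sketch of what a stricter parser could look like, assuming the same "M H * * *" daily-only format the PR supports (this is not the PR's implementation):

```go
// parseDailySchedule extracts minute and hour from a "M H * * *" cron-style
// string, validating both fields instead of discarding the minutes.
func parseDailySchedule(schedule string) (minute, hour int, err error) {
	if _, err := fmt.Sscanf(schedule, "%d %d * * *", &minute, &hour); err != nil {
		return 0, 0, fmt.Errorf("unsupported schedule %q: expected \"M H * * *\": %w", schedule, err)
	}
	if minute < 0 || minute > 59 || hour < 0 || hour > 23 {
		return 0, 0, fmt.Errorf("schedule %q has out-of-range minute or hour", schedule)
	}
	return minute, hour, nil
}
```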
Lines 244-245: Reevaluate the narrow definition of daily backups. Currently, a backup is marked as daily only if it runs between 00:00 and 00:15. Users scheduling a backup at 00:16 or slightly later lose the 'daily' designation. This might lead to unexpected retention behaviors or confusion.
Lines 430-447: Avoid storing the encryption key in plain text configuration. Here, the code auto-generates an encryption key and persists it to the config file in plaintext. This can be a security risk if the config file is accessible. Consider integrating a more secure key management approach or environment-based secret storage.
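One way environment-based storage could look, sketched under the assumption of a hex-encoded AES-256 key; the variable name `BIRDNET_BACKUP_KEY` is purely illustrative:

```go
import (
	"encoding/hex"
	"fmt"
	"os"
)

// loadEncryptionKey prefers an environment variable over the config file value.
func loadEncryptionKey(fromConfig string) ([]byte, error) {
	hexKey := os.Getenv("BIRDNET_BACKUP_KEY") // hypothetical variable name
	if hexKey == "" {
		hexKey = fromConfig // fallback; ideally avoided in production
	}
	key, err := hex.DecodeString(hexKey)
	if err != nil {
		return nil, fmt.Errorf("encryption key is not valid hex: %w", err)
	}
	if len(key) != 32 {
		return nil, fmt.Errorf("expected 32-byte AES-256 key, got %d bytes", len(key))
	}
	return key, nil
}
```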
Lines 521-581: Ensure non-daily backups are also retained or purged appropriately. The cleanup logic only handles backups marked `IsDaily`. If backups occur at times outside the narrow daily window, they won't be governed by this routine, potentially leading to unbounded growth. Consider applying a similar retention policy to all backups or providing separate policies for each backup type.

internal/backup/logger.go (1)
Lines 5-9: Enhance logging with levels or structured logs. The current Logger interface provides only `Printf` and `Println`. Adding log levels (e.g., debug, info, warning, error) or using structured logging libraries could make troubleshooting more consistent and scalable.
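A minimal sketch of what a leveled variant of this interface could look like (names are illustrative, not part of the PR):

```go
import "log"

// Level identifies the severity of a log message.
type Level int

const (
	LevelDebug Level = iota
	LevelInfo
	LevelWarn
	LevelError
)

// Logger is a leveled alternative to the current Printf/Println interface.
type Logger interface {
	Log(level Level, format string, args ...interface{})
}

// stdLogger adapts the standard library logger to the leveled interface.
type stdLogger struct{ l *log.Logger }

func (s stdLogger) Log(level Level, format string, args ...interface{}) {
	prefix := [...]string{"DEBUG", "INFO", "WARN", "ERROR"}[level]
	s.l.Printf("["+prefix+"] "+format, args...)
}
```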
internal/backup/targets/local.go (3)
Lines 18-29: Add documentation for path field format. Consider adding documentation for the `path` field to specify:
- Whether it should be absolute or relative
- Expected permissions
- Platform-specific path considerations
Lines 108-153: Improve List method robustness and performance. Consider the following improvements:
- Add context cancellation checks
- Group errors for better reporting
- Add pagination or limit to prevent memory issues with large number of backups
-func (t *LocalTarget) List(ctx context.Context) ([]backup.BackupInfo, error) { +func (t *LocalTarget) List(ctx context.Context, limit int) ([]backup.BackupInfo, error) { + if err := ctx.Err(); err != nil { + return nil, fmt.Errorf("context cancelled: %w", err) + } + if t.debug { t.logger.Printf("Listing backups in local target") } + var warnings []string entries, err := os.ReadDir(t.path) if err != nil { return nil, fmt.Errorf("failed to read backup directory: %w", err) } var backups []backup.BackupInfo for _, entry := range entries { + if err := ctx.Err(); err != nil { + return nil, fmt.Errorf("context cancelled while listing backups: %w", err) + } + if !entry.IsDir() { continue } metadataPath := filepath.Join(t.path, entry.Name(), "metadata.json") metadataFile, err := os.Open(metadataPath) if err != nil { - t.logger.Printf("Warning: skipping backup %s: %v", entry.Name(), err) + warnings = append(warnings, fmt.Sprintf("skipping backup %s: %v", entry.Name(), err)) continue } var metadata backup.Metadata if err := json.NewDecoder(metadataFile).Decode(&metadata); err != nil { metadataFile.Close() - t.logger.Printf("Warning: invalid metadata in backup %s: %v", entry.Name(), err) + warnings = append(warnings, fmt.Sprintf("invalid metadata in backup %s: %v", entry.Name(), err)) continue } metadataFile.Close() backupInfo := backup.BackupInfo{ Metadata: metadata, Target: t.Name(), } backups = append(backups, backupInfo) + + // Apply limit if specified + if limit > 0 && len(backups) >= limit { + break + } } // Sort backups by timestamp (newest first) sort.Slice(backups, func(i, j int) bool { return backups[i].ID > backups[j].ID }) + // Log accumulated warnings + if len(warnings) > 0 { + t.logger.Printf("Warnings while listing backups:\n%s", strings.Join(warnings, "\n")) + } return backups, nil }
Lines 169-218: Improve Validate method robustness. Consider the following improvements:
- Use `defer` for write test cleanup
- Make minimum disk space configurable
- Move logger nil check to the beginning
- Add periodic revalidation capability
```diff
+// MinimumDiskSpaceGB is the minimum required disk space in GB
+const MinimumDiskSpaceGB = 1.0
+
 func (t *LocalTarget) Validate() error {
+	if t.logger == nil {
+		t.logger = backup.DefaultLogger()
+	}
+
 	// Check if path is absolute
 	if !filepath.IsAbs(t.path) {
 		absPath, err := filepath.Abs(t.path)
 		if err != nil {
 			return fmt.Errorf("failed to resolve absolute path: %w", err)
 		}
 		t.path = absPath
 	}

 	// Check if path exists and is a directory
 	if info, err := os.Stat(t.path); err != nil {
 		if os.IsNotExist(err) {
 			return fmt.Errorf("backup path does not exist: %w", err)
 		}
 		return fmt.Errorf("failed to check backup path: %w", err)
 	} else if !info.IsDir() {
 		return fmt.Errorf("backup path is not a directory")
 	}

 	// Check if path is writable
 	tmpFile := filepath.Join(t.path, ".write_test")
 	f, err := os.Create(tmpFile)
 	if err != nil {
 		return fmt.Errorf("backup path is not writable: %w", err)
 	}
-	f.Close()
-	os.Remove(tmpFile)
+	defer func() {
+		f.Close()
+		if err := os.Remove(tmpFile); err != nil {
+			t.logger.Printf("Failed to remove write test file: %v", err)
+		}
+	}()

 	// Check available disk space
 	availableBytes, err := diskmanager.GetAvailableSpace(t.path)
 	if err != nil {
 		return fmt.Errorf("failed to check disk space: %w", err)
 	}

 	// Convert available bytes to gigabytes
 	availableGB := float64(availableBytes) / (1024 * 1024 * 1024)

-	// Ensure at least 1GB free space is available
-	if availableGB < 1.0 {
+	// Ensure minimum required free space is available
+	if availableGB < MinimumDiskSpaceGB {
 		return fmt.Errorf("insufficient disk space: %.1f GB available, minimum 1 GB required", availableGB)
 	}

-	if t.logger != nil {
-		t.logger.Printf("Available disk space at backup location: %.1f GB", availableGB)
-	}
+	t.logger.Printf("Available disk space at backup location: %.1f GB", availableGB)
 	return nil
 }
```

internal/conf/defaults.go (1)
Lines 182-243: Enhance security and platform compatibility of default settings. Consider the following improvements:
- Use more secure default paths (e.g., user's home directory)
- Disable password authentication by default for remote targets
- Use environment variables for sensitive settings

```diff
 // Backup configuration
 viper.SetDefault("backup.enabled", false)
 viper.SetDefault("backup.debug", false)
 viper.SetDefault("backup.schedule", "0 0 * * *") // Daily at midnight
 viper.SetDefault("backup.encryption", false)     // Encryption disabled by default
 viper.SetDefault("backup.retention.maxage", "30d")
 viper.SetDefault("backup.retention.maxbackups", 30)
 viper.SetDefault("backup.retention.minbackups", 7)
 viper.SetDefault("backup.targets", []map[string]interface{}{
 	{
 		"type":    "local",
 		"enabled": true,
 		"settings": map[string]interface{}{
-			"path": "backups/",
+			"path": "${HOME}/.birdnet/backups/",
 		},
 	},
 	{
 		"type":    "ftp",
 		"enabled": false,
 		"settings": map[string]interface{}{
 			"host":     "localhost",
 			"port":     21,
-			"username": "",
-			"password": "",
+			"username": "${BACKUP_FTP_USER}",
+			"password": "${BACKUP_FTP_PASS}",
 			"path":     "backups/",
 			"timeout":  "30s",
 		},
 	},
 	// Similar changes for other remote targets...
 })
```

internal/conf/config.go (1)
Lines 352-382: Enhance configuration struct documentation and type safety. Consider the following improvements:
- Add detailed documentation for all fields
- Use more specific types for settings
- Add validation tags
// BackupProvider defines settings for a specific backup provider type BackupProvider struct { - Type string `yaml:"type"` // "local", "cifs", "nfs", "ftp", "onedrive", etc. - Enabled bool `yaml:"enabled"` // true to enable this provider - Settings map[string]interface{} `yaml:"settings"` // Provider-specific settings + // Type specifies the backup provider type. + // Supported values: "local", "cifs", "nfs", "ftp", "onedrive" + Type string `yaml:"type" validate:"required,oneof=local cifs nfs ftp onedrive"` + + // Enabled indicates whether this provider is active + Enabled bool `yaml:"enabled"` + + // Settings contains provider-specific configuration + Settings map[string]interface{} `yaml:"settings" validate:"required"` } // BackupRetention defines backup retention policy type BackupRetention struct { - MaxAge string `yaml:"maxage"` // Duration string like "30d", "6m", "1y" - MaxBackups int `yaml:"maxbackups"` // Maximum number of backups to keep - MinBackups int `yaml:"minbackups"` // Minimum number of backups to keep regardless of age + // MaxAge specifies how long to keep backups. + // Format: "<number>[d|m|y]" (e.g., "30d", "6m", "1y") + MaxAge string `yaml:"maxage" validate:"omitempty,regexp=^\\d+[dmy]$"` + + // MaxBackups specifies the maximum number of backups to keep + MaxBackups int `yaml:"maxbackups" validate:"required,min=1"` + + // MinBackups specifies the minimum number of backups to keep + // regardless of their age + MinBackups int `yaml:"minbackups" validate:"required,min=1"` } // BackupTarget defines settings for a backup target type BackupTarget struct { - Type string `yaml:"type"` // "local", "ftp", "sftp", "rsync", "gdrive" - Enabled bool `yaml:"enabled"` // true to enable this target - Settings map[string]interface{} `yaml:"settings"` // Target-specific settings + // Type specifies the backup target type. 
+ // Supported values: "local", "ftp", "sftp", "rsync", "gdrive" + Type string `yaml:"type" validate:"required,oneof=local ftp sftp rsync gdrive"` + + // Enabled indicates whether this target is active + Enabled bool `yaml:"enabled"` + + // Settings contains target-specific configuration + Settings map[string]interface{} `yaml:"settings" validate:"required"` } // BackupConfig defines the configuration for backups type BackupConfig struct { - Enabled bool `yaml:"enabled"` // true to enable backup functionality - Debug bool `yaml:"debug"` // true to enable debug logging - Schedule string `yaml:"schedule"` // Cron expression for backup schedule - Encryption bool `yaml:"encryption"` // true to enable backup encryption - EncryptionKey string `yaml:"encryption_key"` // AES-256 key for encrypting backups (hex-encoded) - Retention BackupRetention `yaml:"retention"` // Backup retention settings - Targets []BackupTarget `yaml:"targets"` // List of backup targets + // Enabled indicates whether backup functionality is active + Enabled bool `yaml:"enabled"` + + // Debug enables detailed logging for backup operations + Debug bool `yaml:"debug"` + + // Schedule specifies when backups should run using cron syntax + Schedule string `yaml:"schedule" validate:"required"` + + // Encryption enables backup file encryption + Encryption bool `yaml:"encryption"` + + // EncryptionKey is the hex-encoded AES-256 key for backup encryption + // Required when Encryption is true + EncryptionKey string `yaml:"encryption_key" validate:"required_if=Encryption true,len=64,hexadecimal"` + + // Retention specifies how long backups should be kept + Retention BackupRetention `yaml:"retention" validate:"required"` + + // Targets is the list of configured backup destinations + Targets []BackupTarget `yaml:"targets" validate:"required,min=1,dive"` }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (16)
- cmd/backup/backup.go (1 hunks)
- cmd/root.go (3 hunks)
- go.mod (2 hunks)
- internal/backup/backup.go (1 hunks)
- internal/backup/logger.go (1 hunks)
- internal/backup/sources/sqlite.go (1 hunks)
- internal/backup/targets/ftp.go (1 hunks)
- internal/backup/targets/gdrive.go (1 hunks)
- internal/backup/targets/local.go (1 hunks)
- internal/backup/targets/rsync.go (1 hunks)
- internal/backup/targets/sftp.go (1 hunks)
- internal/conf/config.go (2 hunks)
- internal/conf/defaults.go (1 hunks)
- internal/conf/validate.go (3 hunks)
- internal/diskmanager/disk_usage_unix.go (1 hunks)
- internal/diskmanager/disk_usage_windows.go (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (17)
internal/diskmanager/disk_usage_unix.go (1)
Lines 25-34: LGTM! Well-implemented disk space calculation. The implementation correctly uses `syscall.Statfs` to calculate available disk space in bytes, with proper error handling and clear documentation.

internal/diskmanager/disk_usage_windows.go (1)
Lines 38-61: LGTM! Correct implementation of Windows disk space calculation. The implementation correctly uses the Windows API `GetDiskFreeSpaceExW` to calculate available disk space in bytes, with proper error handling and clear documentation.

cmd/backup/backup.go (2)
Lines 1-2: File-level overview looks good. No issues found in the package comment or declaration.
Lines 4-15: No issues with imports. These imports appear necessary for the backup command and do not present any problems.

internal/backup/sources/sqlite.go (3)
Lines 19-24: Struct definition is straightforward. No issues observed with the fields or naming.
Lines 25-31: Constructor logic is clean. Utilizing a default logger is reasonable. No immediate concerns here.
Lines 154-191: Validation steps are solid. Verifying the file existence and a successful `Ping()` ensures basic availability. If you expect large or locked databases, consider more elaborate error-handling or user guidance to avoid locked-file issues.

internal/backup/targets/gdrive.go (2)
Lines 31-74: Bucket vs. Drive mismatch verification. Because this code references a “bucket” rather than typical “Drive” semantics, confirm that the user truly wants GCS. Otherwise, use the official Drive API if actual Google Drive storage is needed.
Lines 193-225: Success path is correct—test objects are used properly. Note that leftover test objects (in case of partial failures) might clutter the bucket. Consider robust cleanup if the creation or removal fails frequently.

internal/backup/targets/rsync.go (2)
Lines 28-34: Validate non-empty host configuration for clarity. You're already checking for the presence of the `host` key, but consider also validating that it's not an empty string to improve error reporting and catch cases where the value might erroneously be "".
Lines 123-131: Ensure `ls -l --time-style=full-iso` is widely supported. Some operating systems (particularly BSD variants) may not support `--time-style=full-iso`. Consider a fallback mechanism or doc note to avoid breakage on non-GNU-based systems.

internal/backup/targets/sftp.go (1)
Lines 141-154: Confirm concurrency safety when transmitting connection results. The goroutine populates `resultChan` once, which is safe for the single-consumer scenario here. Just be mindful that no data race or repeated usage of `resultChan` occurs.

internal/backup/targets/ftp.go (2)
Lines 103-138: Evaluate connection concurrency patterns. Using channels to handle connection timeouts and errors is solid. Just be wary of any possible resource leaks if the goroutine remains blocked. Your `select` on `ctx.Done()` ensures coverage, so this looks okay.
Lines 380-418: Validate usage of the `readMetadata` helper. `readMetadata` uses JSON decoding after an FTP `RETR`. Ensure the file you retrieve is indeed JSON or handle cases where it is not. If this method is for future usage, consider adding tests or documentation clarifying the expected remote metadata format.

cmd/root.go (1)
Lines 45-46: Confirm user-facing command details. The new backup command is successfully integrated. However, verify that `backup.Command(settings)` provides a user-friendly help/usage description, so end users understand how to invoke backups and configure them.

go.mod (2)
Lines 39-82: Enhance Backup Support with Updated Indirect Dependencies. This block introduces several new and updated indirect dependencies—such as those for Google Cloud (e.g., `cloud.google.com/go/compute/metadata` updated to v0.6.0), OpenTelemetry modules, and other related libraries—which are essential for enabling the new backup functionality across various targets (local, FTP, SFTP, rsync, Google Drive). Please verify that all these versions are compatible with the backup feature implementation, especially given their role in handling cloud storage and operational telemetry.
Line 136: Updated Protobuf Dependency Version. The version update of `google.golang.org/protobuf` to v1.35.2 is correctly applied. It is advisable to confirm that this minor bump aligns with other dependencies (e.g., gRPC) to avoid any compatibility issues.
Actionable comments posted: 2
🧹 Nitpick comments (7)
internal/backup/sources/sqlite.go (4)
Lines 25-29: Use consistent logger injection for flexibility. Currently, you assign `backup.DefaultLogger()` in the constructor. This is fine for most cases, but allowing the caller to provide a logger instance may facilitate testing and more flexible log handling.
Lines 228-242: Suggest adding a quick integrity check when performance is critical. `PRAGMA integrity_check` is correct for thorough verification but can be slow for large databases. Complementing with `PRAGMA quick_check` could improve performance in certain scenarios.
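A hedged sketch of how such a check could be run through `database/sql` (driver registration and the caller's `*sql.DB` are assumed):

```go
// quickCheck runs PRAGMA quick_check, which skips some of the more expensive
// validations that integrity_check performs. db is an open *sql.DB on the
// SQLite file.
func quickCheck(ctx context.Context, db *sql.DB) error {
	var result string
	if err := db.QueryRowContext(ctx, "PRAGMA quick_check").Scan(&result); err != nil {
		return fmt.Errorf("quick_check failed to run: %w", err)
	}
	if result != "ok" {
		return fmt.Errorf("quick_check reported: %s", result)
	}
	return nil
}
```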
Lines 375-408: Lock file approach may not scale to multi-host environments. While creating a `.lock` file is effective to prevent concurrent backups on the same host, it may not protect against overlapping backups in a distributed setup. If multi-host concurrency is expected, consider a shared lock mechanism.
Lines 607-617: Consider adding unit tests for the validation logic. Testing this method ensures that misconfigurations are caught early. If coverage is lacking, I can help provide a sample test scaffolding.

internal/backup/errors.go (1)
Lines 12-39: Consider a stringer method for better error code introspection. Defining a `String()` method for `ErrorCode` can simplify logging and debugging by providing human-readable names instead of numeric codes.
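A minimal sketch of such a method; the switch covers only the error codes visible elsewhere in this review (`ErrValidation`, `ErrIO`, `ErrCanceled`), so the real mapping would need to enumerate the full set:

```go
// String gives ErrorCode values a readable name for logs.
func (c ErrorCode) String() string {
	switch c {
	case ErrValidation:
		return "validation"
	case ErrIO:
		return "io"
	case ErrCanceled:
		return "canceled"
	default:
		return fmt.Sprintf("error_code(%d)", int(c))
	}
}
```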
internal/conf/defaults.go (1)
Lines 183-245: Rsync target lacks a global timeout setting. While other targets like FTP and SFTP have a `timeout` parameter, rsync is missing an analogous option. For consistency and reliability, you may want to include a timeout in the rsync target's settings too.

internal/conf/config.go (1)
Line 319: Add documentation and YAML tag for the TempDir field. Consider adding documentation and a YAML tag for consistency:
```diff
-	TempDir string // path to temporary directory for backups
+	TempDir string `yaml:"tempdir"` // Path to temporary directory used during backup operations
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- internal/backup/errors.go (1 hunks)
- internal/backup/sources/sqlite.go (1 hunks)
- internal/conf/config.go (3 hunks)
- internal/conf/defaults.go (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (7)
internal/backup/sources/sqlite.go (3)
Lines 474-498: Backup concurrency and transaction safety. Running `VACUUM INTO` while the application is actively writing ensures a consistent snapshot, but it can impact ongoing transactions, especially in high-traffic environments. Verify your system performance under load, and consider scheduling backups at off-peak times if needed.
Lines 513-594: Overall backup workflow looks solid. The orchestration, error handling, and cleanup stages are well-structured and thorough.
Lines 190-226: ❓ Verification inconclusive
Ensure SQLite version compatibility for `VACUUM INTO`. The `VACUUM INTO` command requires SQLite 3.27 or higher. If your production environment uses an older version, this will fail. Consider verifying your SQLite version or providing a fallback mechanism.
🏁 Script executed:
```bash
#!/bin/bash
# Confirm the SQLite version meets the 3.27+ requirement.
sqlite3 --version
```
Length of output: 117
Action Required: Manually Verify SQLite Version Compatibility
It appears that the current environment did not include the `sqlite3` binary (the version check returned "command not found"), so we couldn’t automatically confirm that SQLite meets the 3.27+ requirement needed for the `VACUUM INTO` command. Please verify manually that the production environment uses SQLite 3.27 or later. If an older version is in use, consider implementing a fallback mechanism.

internal/backup/errors.go (1)
Lines 41-116: Structured error handling is well-implemented. The design cleanly separates error categories and makes them easily distinguishable. This aligns with best practices for robust error handling.

internal/conf/defaults.go (1)
Line 173: Confirm usage of `output.sqlite.tempdir`. You set a default empty string here. Make sure your code handles this setting correctly when no temp directory is specified, as you do in `validateTempDir`.

internal/conf/config.go (2)
Lines 363-364: ❓ Verification inconclusive
Add validation for backup retention limits. The `MaxBackups` and `MinBackups` fields need validation to ensure:
- MaxBackups is greater than MinBackups
- Both values are non-negative
Let's verify the retention validation:
🏁 Script executed:
```bash
#!/bin/bash
# Search for backup retention validation
ast-grep --pattern 'func ValidateSettings($settings *Settings) error {
  $$$
  if $settings.Backup.Retention.MaxBackups < $settings.Backup.Retention.MinBackups {
    $$$
  }
  $$$
}'
```
Length of output: 184
Backup Retention Validation Missing – Manual Verification Needed
The current code in internal/conf/config.go (lines 363-364) does not appear to implement any validations for backup retention limits. Specifically, the following validations should be added:
- `MaxBackups` should be greater than `MinBackups`.
- Both `MaxBackups` and `MinBackups` must be non-negative.
The AST search for these validations (within `ValidateSettings`) produced no output. Please manually verify whether the required validations exist elsewhere or add the necessary checks to ensure correct configuration.
Line 369: ❓ Verification inconclusive
Verify supported backup target types. The `Type` field in `BackupTarget` supports "local", "ftp", "sftp", "rsync", and "gdrive" as mentioned in the PR objectives. Consider:
- Adding an enum or constants for supported types.
- Adding validation to ensure only supported types are used.
Let's verify the type validation:
🏁 Script executed:
```bash
#!/bin/bash
# Search for backup target type validation
ast-grep --pattern 'func ValidateSettings($settings *Settings) error {
  $$$
  for _, target := range $settings.Backup.Targets {
    $$$
  }
  $$$
}'
```
Length of output: 151
Action: Verify Backup Target Type Validation Implementation
Our search using the initial AST grep pattern did not locate any evidence of a validation function (such as `ValidateSettings`) enforcing that the `Type` field only accepts "local", "ftp", "sftp", "rsync", or "gdrive". Please take a moment to manually verify if these validations are implemented elsewhere or if relying solely on documentation/comments. In case the validation is missing, consider the following improvements:
- Define an enum or constants for the backup target types.
- Implement validation logic (e.g., within a `ValidateSettings` function) that restricts the `Type` field to the supported values.
Actionable comments posted: 0
🧹 Nitpick comments (3)
internal/conf/config.go (1)
Lines 354-358: Consider using type-safe provider settings. Using `map[string]interface{}` for provider settings makes it difficult to ensure type safety and proper validation. Consider:
- Creating specific structs for each provider type
- Using a union type or interface for provider settings
Example approach:
```go
type ProviderSettings interface {
	Validate() error
}

type FTPSettings struct {
	Host     string `yaml:"host"`
	Port     int    `yaml:"port"`
	Username string `yaml:"username"`
	Password string `yaml:"password"`
}

type BackupProvider struct {
	Type     string           `yaml:"type"`
	Enabled  bool             `yaml:"enabled"`
	Settings ProviderSettings `yaml:"settings"`
}
```

internal/backup/backup.go (2)
Lines 333-347: Use configurable timeout for source backup. The method uses a hard-coded 30-minute timeout. Consider using the timeout from the configuration or the default timeouts struct.
```diff
-backupCtx, backupCancel := context.WithTimeout(ctx, 30*time.Minute)
+backupCtx, backupCancel := context.WithTimeout(ctx, m.getBackupTimeout())
```
Lines 682-705: Improve retention age parsing accuracy and validation. The current implementation has some limitations:
- Uses approximate values for months (30 days) and years (365 days)
- No validation for negative numbers
Consider:
- Using more accurate calculations for months and years
- Adding validation for negative numbers
```diff
 if _, err := fmt.Sscanf(age, "%d%s", &num, &unit); err != nil {
 	return 0, NewError(ErrValidation, fmt.Sprintf("invalid retention age format: %s", age), err)
 }
+if num < 0 {
+	return 0, NewError(ErrValidation, fmt.Sprintf("negative retention age not allowed: %d", num), nil)
+}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- internal/backup/backup.go (1 hunks)
- internal/conf/config.go (3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: unit-tests
- GitHub Check: Analyze (go)
🔇 Additional comments (6)
internal/conf/config.go (3)
Lines 378-380: Consider security implications of storing encryption key in config. The `EncryptionKey` field in the config file might expose sensitive data. Consider:
- Using environment variables for the encryption key
- Storing the key in a secure key management system
- Adding a comment warning users about security implications
Lines 385-390: LGTM! Good use of time.Duration for timeouts. The use of `time.Duration` for operation timeouts is a good practice as it provides type safety and clear semantics.
361-365
:❓ Verification inconclusive
Add validation for retention policy settings.
The retention policy configuration needs validation to ensure:
- MaxAge contains a valid duration string format (e.g., "30d", "6m", "1y")
- MinBackups is less than or equal to MaxBackups
- Both MinBackups and MaxBackups are non-negative
Let's verify if validation is implemented:
🏁 Script executed:
```bash
#!/bin/bash
# Search for retention policy validation
ast-grep --pattern 'func ValidateSettings($settings *Settings) error {
  $$$
  if $settings.Backup.Retention.$_ {
    $$$
  }
  $$$
}'
```
Length of output: 136
Enhance retention policy validation logic.
Our review did not uncover any concrete evidence that retention policy settings are being validated. In particular, there is no indication that:
- The `MaxAge` string is parsed and checked for a valid duration format (e.g., "30d", "6m", "1y").
- A validation exists to ensure that `MinBackups` is less than or equal to `MaxBackups`.
- Both backup counts are verified to be non-negative.
Please verify manually or add a validation function (or expand an existing one) to ensure these checks are enforced.
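A sketch of what such a check could look like; the `BackupRetention` field names follow the struct shown in this PR, but the function itself is illustrative, not existing code:

```go
// validateRetention enforces the missing retention-policy invariants.
func validateRetention(r BackupRetention) error {
	if r.MinBackups < 0 || r.MaxBackups < 0 {
		return fmt.Errorf("retention counts must be non-negative")
	}
	if r.MaxBackups > 0 && r.MaxBackups < r.MinBackups {
		return fmt.Errorf("maxbackups (%d) must be >= minbackups (%d)", r.MaxBackups, r.MinBackups)
	}
	return nil
}
```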
internal/backup/backup.go (3)
27-34
: LGTM! Well-designed interface.The Source interface is well-defined with clear method purposes and includes validation support.
620-667
: LGTM! Secure encryption implementation.The encryption implementation follows cryptographic best practices:
- Uses AES-256-GCM for authenticated encryption
- Secure random nonce generation
- Proper error handling with specific error types
776-841
: LGTM! Well-implemented cleanup logic.The cleanup implementation includes:
- Proper context handling and cancellation
- Good error handling and aggregation
- Efficient backup grouping by target and type
- Helpful debug logging
Actionable comments posted: 0
🧹 Nitpick comments (10)
internal/backup/targets/ftp.go (3)
358-424
: Handle partial successes when failures occur after the backup file is stored.
If the metadata upload fails after successfully uploading the main backup file, the backup is left on the server without corresponding metadata. To maintain consistency, consider removing or rolling back the backup file in such failure scenarios.
Lines 426-467: Skip listing metadata files, or parse them for richer backup information. The `List` method currently returns all files (including potential “.meta” files), which may not be user-friendly. Consider filtering out `*.meta` entries to prevent them from appearing as separate backups, or parse them to provide complete backup details.
Lines 489-538: Evaluate feature-based FTP checks. A warning message indicates that feature checks are not currently supported by the FTP library. If you rely on specific server features (e.g., MLSD, MFMT, etc.), consider exploring an alternative library or manually verifying these features to ensure compatibility across different FTP servers.

internal/backup/targets/local.go (7)
Lines 20-30: Consider making key constants user-configurable. Hardcoding `maxBackupSize` at 10GB and `maxPathLength` at 255 might not meet all use cases, particularly on Windows with long paths or in environments where more flexible size limits are needed. Exposing these as configurable options can enhance usability.
Lines 170-188: Add context check during backoff sleep in `withRetry` to avoid needless waiting. While you do handle transient errors with exponential backoff, the current approach may unnecessarily wait for the backoff interval even if the context has been canceled. Refactor the loop to respect context cancellation immediately.
```diff
 func (t *LocalTarget) withRetry(op func() error) error {
 	var lastErr error
 	for i := 0; i < maxRetries; i++ {
 		if err := op(); err == nil {
 			return nil
 		} else if !isTransientError(err) {
 			return err
 		} else {
 			lastErr = err
-			time.Sleep(backoffDuration(i))
+			select {
+			case <-time.After(backoffDuration(i)):
+				// proceed
+			case <-context.Background().Done():
+				// or use the existing ctx if accessible
+				return backup.NewError(backup.ErrCanceled, "operation canceled in backoff sleep", nil)
+			}
 		}
 	}
 	return backup.NewError(backup.ErrIO, "operation failed after retries", lastErr)
 }
```
301-417
: Consider implementing checksum logic inStore
.
You have aChecksum
field in the metadata but it is never populated or validated. Including a checksum could strengthen integrity checks beyond simple file size verification.
419-439
: Leverage checksums for improved backup verification.
Currently,verifyBackup
only checks file size. Implementing a checksum can help detect partial writes or data corruption beyond what file size alone can catch.
441-494
: Align backup sorting with actual timestamps.
You're sorting backups byID
in descending order. IfID
consistently matches a timestamp string like"YYYYMMDDHHmmss"
, this is fine. Otherwise, consider sorting by theTimestamp
field to guarantee chronological listing.
510-559
: Evaluate free-space checks for larger backups.
The existing 1GB minimum check may be insufficient for backups that approach the 10GB upper size limit. You might dynamically scale this threshold or make it configurable to handle bigger backups safely.
1-560
: Add targeted unit tests for the local filesystem target.
Although the code appears robust, having direct unit tests to validate scenarios (e.g., path validation, space checks, retry logic) will help maintain correctness and quickly detect regressions.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- internal/backup/targets/ftp.go (1 hunks)
- internal/backup/targets/local.go (1 hunks)
🔇 Additional comments (5)
internal/backup/targets/ftp.go (3)
66-67
: Consider utilizing theMinSpace
configuration.
Currently, theMinSpace
field is never referenced in the code, which can cause confusion. If it is intended to ensure a minimum required space on the server, consider implementing checks (e.g., performing a storage-size query before initiating backups) or removing the unused field to avoid confusion.
574-602
: Validate concurrency use withconnPool
duringClose()
.
TheClose()
method locks a mutex and drains the connection pool, while other methods (getConnection
/returnConnection
) do not lock it. Because channels are safe for concurrent sends/receives, this may be fine. However, ensure no references remain afterClose()
is called. Otherwise, continuing operations could introduce unexpected behavior.
1-632
: Overall Implementation Review
The FTP backup target is well-structured, with clear atomic upload processes, retry logic, and connection pooling. The approach for context-aware operations and thorough error handling is commendable. Ensure thorough unit and integration testing for edge cases (e.g., network issues, partial writes, large file sizes) to validate correctness.internal/backup/targets/local.go (2)
261-294
: Good adoption of security measures per previous feedback.
Using0o700
permissions and thorough path validation addresses earlier review comments about tightening security and preventing directory traversal.
196-259
:❓ Verification inconclusive
Beware of Windows reparse points and advanced link scenarios.
The symlink check here is great, but on Windows, special directories or reparse points could bypass regularLstat
checks. If you expect malicious or unusual usage, consider extended checks (e.g., restricting reparse points).
🌐 Web query:
Are there known edge cases on Windows with Lstat for reparse points that might require extra caution?
💡 Result:
When working with
Lstat
on Windows for files or directories with reparse points, several edge cases and nuances require careful handling:1. App-Exec Links (IO_REPARSE_TAG_APPEXECLINK)
- Behavior: These UWP app activation links are treated as reparse points but not as POSIX-style symlinks by default[1].
os.lstat()
may succeed whileos.stat()
fails for these paths[1].is_link()
returnsFalse
despite being a valid reparse point[1].- Risk: Code expecting all reparse points to behave like symlinks may misbehave (e.g., copying via
readlink
+symlink
creates invalid app-exec links)[1].2. Junctions vs Symlinks
- Junctions (IO_REPARSE_TAG_MOUNT_POINT):
- May lack "print names" (human-readable paths), causing discrepancies in path resolution[1][3].
- Case sensitivity: NTFS may uppercase portions of paths after junctions (e.g.,
\Pool\HELLOWORLD\XXXX
vs original\Pool\HelloWorld\XxXx
)[3].- Symlinks (IO_REPARSE_TAG_SYMLINK):
- Treated more strictly as POSIX-style links by APIs like
readlink()
[1].3. Directory-Specific Constraints
- Reparse-point directories must be empty when created; otherwise, NTFS rejects them[5].
- Mixing extended attributes with reparse points is unsupported[5].
4. Path-Length and Reparse-Point Limits
- A single path can contain up to 63 reparse points, but this decreases if targets include long qualified paths[5].
5. Security Risks
- Maliciously crafted reparse points can cause infinite loops or overflow conditions if parsed naively[2].
Recommendations for Safe Handling
- Use specialized APIs (e.g.,
GetFileInformationByHandleEx
) to verify tags before acting on reparse data[1][9].- Validate case sensitivity when resolving paths across mount points/symlinks[3].
- Avoid assuming all reparse points are "links" – check
st_reparse_tag
explicitly[1][9].These edge cases highlight the importance of distinguishing between different types of Windows reparse tags and adhering to documented constraints for reliable filesystem operations[1][5][9].
Citations:
- 1: https://bugs.python.org/issue37834
- 2: https://learn.microsoft.com/en-us/windows-hardware/drivers/ifs/reparse-points-in-a-file-system-filter-driver
- 3: https://community.osr.com/t/case-sensitivity-crossing-reparse-points/54672
- 4: https://answers.microsoft.com/en-us/windows/forum/all/reparse-point/beb25c79-c017-4bbf-b20f-c07f89ff9fcb
- 5: https://learn.microsoft.com/en-us/windows/win32/fileio/reparse-points
- 6: https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report_2023.pdf
- 7: https://answers.microsoft.com/en-us/windows/forum/all/directory-was-not-backed-up-because-it-is-a/8d2c423c-268b-4517-ade7-5e1c047d1216
- 8: https://docs.modular.com/mojo/changelog/
- 9: os,path/filepath: make os.Readlink and filepath.EvalSymlinks on Windows (mostly) only evaluate symlinks golang/go#63703
- 10: https://www.benedictine.edu/wp-content/uploads/files/academics/benedictine-college-course-catalog.pdf
Attention: Windows Reparse Points Need Extra Consideration
The current
os.Lstat
check correctly blocks basic symlinks; however, as confirmed by recent research, it may not catch advanced Windows reparse point scenarios (such as App-Exec Links, junctions with case sensitivity issues, etc.). If your application might encounter these edge cases—or if there's any risk of maliciously crafted reparse points—it would be wise to evaluate incorporating extended checks (e.g., using specialized Windows APIs likeGetFileInformationByHandleEx
) or, at a minimum, document this limitation for future maintainers.
- File:
internal/backup/targets/local.go
(lines 196-259)- Context: The symlink check currently being done with
os.Lstat
may miss some advanced reparse points on Windows.
Actionable comments posted: 2
🧹 Nitpick comments (12)
internal/backup/targets/rsync.go (7)
Lines 22-23: Consider making retry logic more flexible. Currently `rsyncMaxRetries` and `rsyncRetryBackoff` are hard-coded. For users operating in challenging or diverse network environments, you may want to let them configure these values at runtime or via environment variables to fine-tune reliability and performance.
Lines 42-54: Allow overriding MaxRetries and RetryBackoff. Although much of the config is derived from user inputs, `MaxRetries` and `RetryBackoff` are set to fixed defaults. Consider making them configurable for advanced scenarios or slow network connections.
Lines 80-158: Validate presence of `username`. If `username` is omitted, the code can generate a remote path like `@hostname`, which might cause rsync or SSH errors. Consider defaulting to the current user when not provided, or requiring an explicit username in the configuration. Additionally, confirming that `rsync` and `ssh` exist in the system's PATH is a great practice to fail fast if they're missing.
Lines 160-176: Expand transient error detection. Consider adding additional transient error signatures such as “network is unreachable” or “could not resolve host” to handle more ephemeral network errors and improve resilience.
Lines 177-205: Consider exponential backoff. Currently the retry logic uses linear backoff (`time.Sleep(t.config.RetryBackoff * time.Duration(i+1))`). You might explore an exponential backoff to cope more gracefully with persistent network disruptions.
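A small sketch of exponential backoff with jitter; `base` would correspond to this target's `RetryBackoff`, but the helper itself is illustrative:

```go
import (
	"math/rand"
	"time"
)

// backoffWithJitter returns an exponentially growing delay with up to +50%
// random jitter, which also helps avoid thundering-herd retries.
func backoffWithJitter(base time.Duration, attempt int) time.Duration {
	d := base << attempt // base * 2^attempt
	jitter := time.Duration(rand.Int63n(int64(d)/2 + 1))
	return d + jitter
}
```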
Lines 221-276: Potential limitations for Windows UNC paths. The sanitization logic removes drive letters and disallows leading slashes, which could break usage of Windows UNC paths (e.g., `\\server\share`). If UNC support is needed, consider a distinct approach or an allowlist for such paths.
Lines 584-636: Consider filtering out metadata files in the listing. The `List` method currently shows all files in the backup directory, including `.meta` files. This might clutter the results. If you only want to display actual backup files, consider ignoring or storing metadata separately.

internal/backup/targets/sftp.go (2)
220-260
: Nice retry pattern.
This retry mechanism effectively handles transient errors. Consider adding jitter to the backoff sleep to avoid potential thundering herd issues when multiple retries run concurrently.
Lines 263-340: Consider context cancellation in the goroutine. The `connect` method launches a goroutine to perform `ssh.Dial`, but there's no mechanism to terminate that goroutine if the context is canceled mid-dial, potentially leading to a resource leak. Passing the context directly to `ssh.Dial` or ensuring this goroutine exits when the context is done could help prevent unwanted resource usage.

internal/backup/targets/gdrive.go (2)
75-89
: Rate limiter acquisition timeout.
The 30-second deadline effectively limits how long a request waits for a token. However, if tasks may run longer or concurrency is high, consider making this timeout configurable or integrating with the request’s own context to handle cancellations more gracefully.
Lines 248-259: Prompt-based OAuth flow for credentials. Prompting for the authorization code via standard input works well for local testing but can break in headless or automated environments. Provide an alternative non-interactive setup (e.g., service accounts or pre-fetched tokens) if you anticipate such use cases.

internal/backup/errors.go (1)
Lines 72-82: Efficient error code checks. Using `errors.As` greatly simplifies the process of discriminating among custom error types. As complexity grows, you might group related codes (e.g., network-specific) or add domain-specific logic to keep error handling straightforward.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- internal/backup/errors.go (1 hunks)
- internal/backup/targets/gdrive.go (1 hunks)
- internal/backup/targets/rsync.go (1 hunks)
- internal/backup/targets/sftp.go (1 hunks)
- internal/conf/defaults.go (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (14)
internal/backup/targets/rsync.go (5)
374-439
: Check usage of--delete-during
When transferring single files,
--delete-during
may be harmless. However, if you ever configure rsync for directory sync, this flag can lead to accidental deletion on the remote side. You might want to confirm it’s needed or remove it to stay safe.
441-474
: Secured rename approachUsing
mv --
with sanitized file paths is a robust strategy for preventing path-injection vulnerabilities.
476-501
: Secured deletion approachSimilarly, adopting
rm --
alongside path sanitization is a good practice for avoiding malicious shell expansions.
548-577
: Graceful resource cleanupCalling
cleanupTempFiles
withinClose
is a thoughtful design choice to ensure temporary files are not orphaned. This shows attention to detail in managing resources.
Line 619: Check error handling for file size parsing. The error from `parseInt64(parts[4])` is discarded, which can silently default the file size to 0. Consider logging or returning an error if the parsing fails to avoid misleading backup info.

internal/backup/targets/sftp.go (3)
342-395
: Robust security checks for paths.
This function provides strong validation against directory traversal and suspicious file paths. Great job! Note that you allow a.write_test
directory as an exception for hidden paths—just confirm it's not exploitable in production.
690-722
: Graceful resource cleanup.
This method correctly cleans up temporary files and drains connections from the pool. If you anticipate a large backlog of idle connections, consider parallelizing or bounding the time spent closing them.
Lines 767-788: Resolution of previously reported insecure HostKeyCallback. Earlier feedback noted the use of an insecure host key callback. Here, you always enforce a known_hosts file without reverting to `InsecureIgnoreHostKey`, which properly addresses the prior security concern.

internal/backup/targets/gdrive.go (2)
380-429
: Possible parallel quota check race.
Multiple backups running simultaneously might each pass the cached quota check and overshoot total available space. If high concurrency is expected, consider refreshing quota more frequently for large uploads or implementing an atomic usage tracker to avoid oversubscription.
Lines 708-724: Metadata file query by naming convention. The approach searches for metadata files using `id + ".meta"`. Confirm that no user files share this naming pattern to avoid accidental deletions or missed metadata. If file naming can vary, a more robust lookup may be needed.

internal/backup/errors.go (1)
Lines 63-70: Good structured error approach. Centralizing error creation with structured codes and chaining an underlying error fosters maintainability and consistent handling across the codebase.

internal/conf/defaults.go (3)
Line 173: LGTM! The SQLite temp directory configuration with an empty string default is appropriate, allowing SQLite to use system defaults for temporary files during operations like backups.
191-245
:❓ Verification inconclusive
Review security considerations for backup targets.
The backup target configurations include sensitive information (passwords, keys, credentials). Ensure:
- Sensitive fields are properly masked in logs
- Credentials are stored securely
- Default paths don't create security risks
🏁 Script executed:
```bash
#!/bin/bash
# Description: Check for secure handling of sensitive backup configuration
# Look for password/credential masking in logs
rg -A 5 "password.*mask|credentials.*mask"
# Check for secure credential storage implementation
rg -A 5 "password.*encrypt|credentials.*encrypt"
```
Length of output: 95
Security Considerations for Backup Targets – Manual Verification Required
The backup target configurations in internal/conf/defaults.go include sensitive fields (passwords, keys, credentials), yet our automated checks did not detect any evidence of masking or encryption. This lack of output prevents us from confirming that:
- Sensitive fields are being properly masked when logged.
- Credentials and keys are stored securely (e.g., encrypted or managed via secure channels).
- Default paths (like `"backups/"`) are not introducing unintended security risks.
Please perform a manual review to verify that these security measures are in place.
Lines 188-190: ❓ Verification inconclusive
Verify retention policy values. The retention policy values need validation to ensure:
- `maxage` is a valid duration string
- `maxbackups` is greater than `minbackups`
- Values are reasonable for the expected backup size and frequency
🏁 Script executed:
```bash
#!/bin/bash
# Description: Search for retention policy validation in the codebase
# Look for validation of retention policy values
rg -A 5 "backup.retention"
# Look for duration string parsing
rg -A 5 "maxage.*duration"
```
Length of output: 1049
Retention Policy Validation – Please Verify
The defaults in internal/conf/defaults.go (i.e., "30d", 30, and 7) and the definitions in internal/conf/config.go set expectations for backup retention. However, the searches did not reveal explicit validations for:
- Ensuring `maxage` is parsed as a valid duration string (note: Golang’s standard duration parser does not support a "d" suffix without customization).
- Confirming that `maxbackups` is greater than `minbackups`.
- Verifying that the chosen values are appropriate based on expected backup sizes and frequencies.
Please review the configuration-loading logic to confirm whether these validations are performed (or if a custom duration parser is in place) and ensure that any potential misconfigurations are caught early. If they are missing, consider adding explicit checks to validate:
- The format of `maxage`.
- The relationship between `maxbackups` and `minbackups`.
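Since `time.ParseDuration` does not understand day/month/year suffixes, a custom parser along these lines could be used; this is a sketch (month and year lengths are approximations), not code from the PR:

```go
import (
	"fmt"
	"strconv"
	"time"
)

// parseMaxAge handles the "d"/"m"/"y" suffixes used by the maxage setting.
func parseMaxAge(s string) (time.Duration, error) {
	if len(s) < 2 {
		return 0, fmt.Errorf("invalid maxage %q", s)
	}
	n, err := strconv.Atoi(s[:len(s)-1])
	if err != nil || n < 0 {
		return 0, fmt.Errorf("invalid maxage %q", s)
	}
	day := 24 * time.Hour
	switch s[len(s)-1] {
	case 'd':
		return time.Duration(n) * day, nil
	case 'm':
		return time.Duration(n) * 30 * day, nil // approximate month
	case 'y':
		return time.Duration(n) * 365 * day, nil // approximate year
	default:
		return 0, fmt.Errorf("unsupported maxage unit in %q", s)
	}
}
```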
internal/conf/defaults.go
Outdated
viper.SetDefault("backup.enabled", false) | ||
viper.SetDefault("backup.debug", false) | ||
viper.SetDefault("backup.schedule", "0 0 * * *") // Daily at midnight | ||
viper.SetDefault("backup.encryption", false) // Encryption disabled by default |
🛠️ Refactor suggestion
Consider enabling encryption by default for enhanced security.
Backups often contain sensitive data. While encryption is configurable, it should be enabled by default to ensure secure backups, especially when using remote storage.
-viper.SetDefault("backup.encryption", false) // Encryption disabled by default
+viper.SetDefault("backup.encryption", true) // Encryption enabled by default
"type": "local", | ||
"enabled": true, | ||
"settings": map[string]interface{}{ | ||
"path": "backups/", |
💡 Verification agent
🧩 Analysis chain
Ensure backup paths are properly sanitized.
The default paths (backups/) should be validated to:
- Prevent directory traversal attacks
- Handle absolute vs relative paths consistently
- Ensure proper permissions
Also applies to: 207-207, 221-221, 233-233, 242-242
🏁 Script executed:
#!/bin/bash
# Description: Check for path validation in backup implementation
# Look for path validation/sanitization
rg -A 5 "path.*validate|path.*sanitize"
# Check for directory traversal prevention
rg -A 5 "path.*traversal"
Length of output: 113726
Backup Path Sanitization Needed
The current configuration simply hard-codes "path": "backups/" without any accompanying validation. Our search for sanitization- or traversal-related checks (using patterns like validate, sanitize, or traversal) did not reveal any code guarding against directory traversal, nor any logic that distinguishes between absolute and relative paths or enforces permissions on the backup path.
- Location: internal/conf/defaults.go (line 196).
- Concerns:
- No check prevents directory traversal attacks.
- There is no handling to ensure consistency between absolute and relative paths.
- No permissions validation is performed on the backup directory path.
Please update the implementation to perform proper sanitization of the backup paths—verify that user input cannot inject malicious path segments, correctly resolve relative paths to prevent unintended directory access, and enforce the necessary file system permissions for security.
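As a starting point, a sanitization helper along these lines could be applied wherever the configured path is consumed. The function name and base-directory argument are illustrative assumptions, not part of the PR:

package conf

import (
	"fmt"
	"path/filepath"
	"strings"
)

// sanitizeBackupPath is a hypothetical helper: it cleans the configured path,
// anchors relative paths to a base directory, and rejects traversal attempts.
func sanitizeBackupPath(baseDir, configured string) (string, error) {
	cleaned := filepath.Clean(configured)
	if cleaned == ".." || strings.HasPrefix(cleaned, ".."+string(filepath.Separator)) {
		return "", fmt.Errorf("backup path %q escapes the base directory", configured)
	}
	if !filepath.IsAbs(cleaned) {
		// Relative paths are resolved against the application base directory.
		cleaned = filepath.Join(baseDir, cleaned)
	}
	abs, err := filepath.Abs(cleaned)
	if err != nil {
		return "", fmt.Errorf("resolving backup path %q: %w", configured, err)
	}
	return abs, nil
}

Permissions on the resulting directory (for example 0o700) would still need to be enforced when it is created.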
Actionable comments posted: 2
🧹 Nitpick comments (12)
internal/backup/targets/gdrive.go (2)
66-74
: Consider adding a safeguard to prevent token overflow in low-usage scenarios.
The current implementation of replenish()
continuously adds tokens to the channel until it is full. In low-usage scenarios, it might quickly saturate, effectively bypassing the concurrency limit once consumption slows down.
348-380
: Confirm robust handling of token-expiration edge cases.
isAPIError()
triggers token refresh only on 401 errors. If tokens are close to expiration but not yet invalid, the upload might later fail unexpectedly. Consider proactively refreshing tokens if they are close to expiry.
cmd/backup/backup.go (2)
55-56
: Consider partial fallback for MySQL backups.
Currently, MySQL backups produce an immediate warning and return. You might consider a fallback approach (e.g., skipping database backup but proceeding with config file backups) rather than halting the entire operation. This would allow partial backups to complete successfully.
62-109
: Unify or parameterize logger usage across targets.
While registering multiple targets, logger creation repeats. Consider adopting a single logger or a factory pattern so the same logging approach, format, and level of detail are consistently applied to all targets.
internal/backup/targets/local.go (3)
261-294
: Leverage path validation for security and clarity.
You’ve done a thorough job ensuring directory traversal is prevented and that the path is absolute. For extra robustness, consider checking if the final absolute path is indeed the same as the intended directory after symbolic link cleanup (e.g., multiple symlinks).
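For the symlink concern specifically, one possible shape for that extra check is sketched below (an assumed helper, not part of the PR); note that filepath.EvalSymlinks requires the paths to exist.

package targets

import (
	"fmt"
	"path/filepath"
	"strings"
)

// ensureWithinBase is a sketch: it resolves symlinks in both the base
// directory and the candidate path, then verifies the candidate still
// lives under the base after resolution.
func ensureWithinBase(baseDir, candidate string) error {
	resolvedBase, err := filepath.EvalSymlinks(baseDir)
	if err != nil {
		return fmt.Errorf("resolving base %q: %w", baseDir, err)
	}
	resolved, err := filepath.EvalSymlinks(candidate)
	if err != nil {
		return fmt.Errorf("resolving %q: %w", candidate, err)
	}
	rel, err := filepath.Rel(resolvedBase, resolved)
	if err != nil || rel == ".." || strings.HasPrefix(rel, ".."+string(filepath.Separator)) {
		return fmt.Errorf("path %q resolves outside %q", candidate, baseDir)
	}
	return nil
}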
302-417
: Ensure metadata consistency upon partial failures.
The final verifyBackup()
step is great, but if metadata fails to write after the file has been copied, or if the verification fails, the file remains stored without corresponding metadata. The cleanup logic only removes the backup directory if we haven't reached the success flag. Consider tidying in these partial-failure scenarios to avoid orphaned data.
442-496
: Switch sorting logic for clarity.
Line 494 sorts backups by comparing their IDs. In typical usage, the ID is not strictly chronological. It might be safer to sort by the actual timestamp contained in the metadata, ensuring the newest backups appear first.-sort.Slice(backups, func(i, j int) bool { - return backups[i].ID > backups[j].ID +sort.Slice(backups, func(i, j int) bool { + return backups[i].Metadata.Timestamp.After(backups[j].Metadata.Timestamp) })internal/backup/targets/rsync.go (5)
19-27
: Consider using time.Duration for timeout constants.The
defaultRsyncTimeout
constant is defined using multiplication, which is less readable than using time.Duration's built-in constants.-defaultRsyncTimeout = 30 * time.Second +defaultRsyncTimeout = 30 * time.Second +rsyncConnectTimeout = 5 * time.Minute // Add separate timeout for rsync connections
67-78
: Consider adding buffer size configuration for progress tracking.The
RsyncProgressReader
implementation could benefit from configurable buffer sizes to optimize performance for different network conditions.type RsyncProgressReader struct { io.Reader Progress func(n int64) + BufferSize int // Add configurable buffer size } func (r *RsyncProgressReader) Read(p []byte) (n int, err error) { + if r.BufferSize > 0 && len(p) > r.BufferSize { + p = p[:r.BufferSize] + } n, err = r.Reader.Read(p) if n > 0 && r.Progress != nil { r.Progress(int64(n)) } return }
161-175
: Consider adding more transient error patterns.The
isTransientError
function could be enhanced to handle more network-related errors.func (t *RsyncTarget) isTransientError(err error) bool { if err == nil { return false } errStr := err.Error() return strings.Contains(errStr, "connection reset") || strings.Contains(errStr, "timeout") || strings.Contains(errStr, "temporary") || strings.Contains(errStr, "broken pipe") || strings.Contains(errStr, "connection refused") || strings.Contains(errStr, "no route to host") || strings.Contains(errStr, "connection closed") || strings.Contains(errStr, "ssh: handshake failed") || + strings.Contains(errStr, "network is unreachable") || + strings.Contains(errStr, "connection timed out") || + strings.Contains(errStr, "no buffer space") || strings.Contains(errStr, "EOF") }
393-403
: Consider adding bandwidth limit option for rsync.The rsync command could benefit from a configurable bandwidth limit to prevent network saturation.
args := []string{ "-av", // Archive mode and verbose "--protect-args", // Protect special characters "--chmod=Du=rwx,Dg=,Do=,Fu=rw,Fg=,Fo=", // Strict permissions "--timeout=300", // Connection timeout "--delete-during", // Clean deletions "--checksum", // Verify checksums "--no-implied-dirs", // Prevent directory creation outside target "--no-relative", // Disable relative path mode + "--bwlimit=2000", // Limit bandwidth to 2000 KB/s "-e", t.buildSSHCmd(), // SSH command with custom port and security options }
717-721
: Consider using strconv.ParseInt for better error handling.The
parseInt64
helper function usesfmt.Sscanf
which is less efficient thanstrconv.ParseInt
for simple integer parsing.func parseInt64(s string) (int64, error) { - var result int64 - _, err := fmt.Sscanf(s, "%d", &result) - return result, err + return strconv.ParseInt(s, 10, 64) }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
cmd/backup/backup.go (1 hunks)
internal/backup/targets/gdrive.go (1 hunks)
internal/backup/targets/local.go (1 hunks)
internal/backup/targets/rsync.go (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: unit-tests
- GitHub Check: Analyze (javascript-typescript)
- GitHub Check: Analyze (go)
🔇 Additional comments (3)
internal/backup/targets/gdrive.go (1)
703-734
: Optimize metadata file deletion lookup.
In Delete()
, the metadata file lookup uses the file's ID as the name, which may not align with the actual metadata naming convention if the file name differs from the ID. Ensure the search query matches how metadata files are named (e.g., using the actual filename +.meta
).
internal/backup/targets/local.go (1)
86-132
: Validate atomic rename behavior on non-POSIX filesystems.
atomicWriteFile
uses os.Rename()
, which is typically atomic on POSIX systems but may not be on certain network shares or non-POSIX filesystems. Confirm that this meets your reliability requirements or add a fallback if necessary.
internal/backup/targets/rsync.go (1)
1-722
: Excellent implementation with robust security measures! The implementation shows careful attention to security details:
- Proper path sanitization
- Secure SSH configuration
- Atomic file operations
- Temporary file cleanup
- Retry logic for transient errors
- Input validation
Actionable comments posted: 3
🧹 Nitpick comments (20)
internal/backup/targets/gdrive.go (7)
38-95
: Consider optimizing token replenishment.The rate limiter implementation is solid, but the replenishment could be more efficient by pre-allocating the token channel capacity.
Apply this optimization:
func newRateLimiter(maxTokens int, resetInterval time.Duration) *rateLimiter { + tokens := make(chan struct{}, maxTokens) + for i := 0; i < maxTokens; i++ { + tokens <- struct{}{} + } rl := &rateLimiter{ - tokens: make(chan struct{}, maxTokens), + tokens: tokens, ticker: time.NewTicker(resetInterval), } - - // Initialize tokens - for i := 0; i < maxTokens; i++ { - rl.tokens <- struct{}{} - }
105-131
: Add cache expiration to prevent stale entries.The folder cache implementation is good but could benefit from an expiration mechanism to prevent stale entries from consuming memory indefinitely.
Consider adding expiration:
type folderCache struct { cache map[string]string // path -> ID mapping + expiry map[string]time.Time mu sync.RWMutex } func newFolderCache() *folderCache { return &folderCache{ cache: make(map[string]string), + expiry: make(map[string]time.Time), } } func (fc *folderCache) get(path string) (string, bool) { fc.mu.RLock() defer fc.mu.RUnlock() id, ok := fc.cache[path] + if !ok { + return "", false + } + if time.Since(fc.expiry[path]) > time.Hour { + delete(fc.cache, path) + delete(fc.expiry, path) + return "", false + } return id, ok } func (fc *folderCache) set(path, id string) { fc.mu.Lock() defer fc.mu.Unlock() fc.cache[path] = id + fc.expiry[path] = time.Now() }
133-144
: Add validation for metadata fields.The metadata structure is well-defined but could benefit from validation to ensure data integrity.
Consider adding a validation method:
type GDriveMetadataV1 struct { Version int `json:"version"` Timestamp time.Time `json:"timestamp"` Size int64 `json:"size"` Type string `json:"type"` Source string `json:"source"` IsDaily bool `json:"is_daily"` ConfigHash string `json:"config_hash,omitempty"` AppVersion string `json:"app_version,omitempty"` Compression string `json:"compression,omitempty"` } + +func (m *GDriveMetadataV1) Validate() error { + if m.Version != gdriveMetadataVersion { + return fmt.Errorf("invalid metadata version: %d", m.Version) + } + if m.Timestamp.IsZero() { + return fmt.Errorf("timestamp is required") + } + if m.Size <= 0 { + return fmt.Errorf("size must be positive") + } + if m.Type == "" { + return fmt.Errorf("type is required") + } + if m.Source == "" { + return fmt.Errorf("source is required") + } + return nil +}
160-170
: Add configuration validation.The configuration structure is well-defined but could benefit from explicit validation of field values.
Consider adding a validation method:
type GDriveTargetConfig struct { CredentialsFile string // Path to the credentials.json file TokenFile string // Path to store the token.json file BasePath string // Base path in Google Drive Timeout time.Duration // Timeout for operations Debug bool // Enable debug logging MaxRetries int // Maximum number of retries RetryBackoff time.Duration // Time to wait between retries MinSpace int64 // Minimum required space in bytes } + +func (c *GDriveTargetConfig) Validate() error { + if c.CredentialsFile == "" { + return fmt.Errorf("credentials file is required") + } + if c.MaxRetries < 0 { + return fmt.Errorf("max retries must be non-negative") + } + if c.RetryBackoff < 0 { + return fmt.Errorf("retry backoff must be non-negative") + } + if c.MinSpace < 0 { + return fmt.Errorf("minimum space must be non-negative") + } + return nil +}
748-814
: Enhance validation checks.The validation is good but could be more comprehensive to ensure the target is fully operational.
Consider these additional checks:
func (t *GDriveTarget) Validate() error { ctx, cancel := context.WithTimeout(context.Background(), t.config.Timeout) defer cancel() + // Verify credentials and token + if err := t.refreshTokenIfNeeded(ctx); err != nil { + return backup.NewError(backup.ErrValidation, "gdrive: failed to verify credentials", err) + } + + // Verify rate limiter + if err := t.rateLimiter.acquire(ctx); err != nil { + return backup.NewError(backup.ErrValidation, "gdrive: rate limiter not working", err) + } + return t.withRetry(ctx, func() error { // ... existing code ... }) }
816-864
: Enhance type checking in map configuration.The type assertions are good but could be more robust to handle different numeric types.
Consider these improvements:
func NewGDriveTargetFromMap(settings map[string]interface{}) (*GDriveTarget, error) { + // Helper function for numeric conversions + getInt64 := func(v interface{}) (int64, bool) { + switch n := v.(type) { + case int: + return int64(n), true + case int32: + return int64(n), true + case int64: + return n, true + case float64: + return int64(n), true + default: + return 0, false + } + } // ... existing code ... - if minSpace, ok := settings["min_space"].(int64); ok { + if v, ok := settings["min_space"]; ok { + if minSpace, ok := getInt64(v); ok { config.MinSpace = minSpace + } else { + return nil, backup.NewError(backup.ErrValidation, "gdrive: invalid min_space type", nil) + } }
866-884
: Add quota alerts.The quota management is good but could benefit from alerting when usage approaches limits.
Consider adding alerts:
func (t *GDriveTarget) getQuota(ctx context.Context) (*quotaInfo, error) { about, err := t.service.About.Get().Fields("storageQuota").Context(ctx).Do() if err != nil { return nil, err } t.quotaMu.Lock() defer t.quotaMu.Unlock() + // Calculate usage percentage + var usagePercent float64 + if about.StorageQuota.Limit > 0 { + usagePercent = float64(about.StorageQuota.Usage) / float64(about.StorageQuota.Limit) * 100 + } + + // Alert if usage is high + if usagePercent > 90 { + t.logger.Printf("⚠️ GDrive: Storage usage is %.1f%% of quota", usagePercent) + } + t.quota = "aInfo{ limit: about.StorageQuota.Limit, usage: about.StorageQuota.Usage, available: about.StorageQuota.Limit - about.StorageQuota.Usage, updateTime: time.Now(), } return t.quota, nil }internal/backup/errors.go (1)
12-41
: Consider explicitly numbering error codes instead of relying on iota.
Relying on iota
can inadvertently change error code values if new items are inserted in the middle. Explicitly assigning numeric values can help maintain stability for consumers that rely on specific numeric codes.
internal/backup/backup.go (4)
283-330
: Potential memory usage concern when reading large backup files.
You are loading the entire backup file into memory (os.ReadFile(backupPath)
) prior to zipping or encrypting. For large backups, streaming the file directly into the zip would be more memory-efficient.- backupData, err := os.ReadFile(backupPath) - archive.AddFile(dbFilename, backupData) + // Example snippet: stream file contents into zip + file, err := os.Open(backupPath) + if err != nil { + return tempDir, nil, NewError(ErrIO, fmt.Sprintf("failed to open backup file %s", backupPath), err) + } + defer file.Close() + + // Archive logic to read in chunks and write to zip to avoid high memory usage
464-474
: Partial cleaning approach might lose visibility on multiple failures.
combineErrors
handles aggregated errors, but if some backups fail to delete while others succeed, users only see a combined message. This is workable, but consider reporting each error individually in logs or collecting them more distinctly for better troubleshooting.
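If the project's minimum Go version is 1.20 or later, the standard errors.Join keeps every individual failure visible instead of flattening them into one message. A rough sketch, with an assumed helper name:

package backup

import (
	"errors"
	"fmt"
)

// deleteAllBackups is a sketch: every deletion is attempted, and all failures
// are reported together via errors.Join rather than only the last one.
func deleteAllBackups(ids []string, remove func(string) error) error {
	var errs []error
	for _, id := range ids {
		if err := remove(id); err != nil {
			errs = append(errs, fmt.Errorf("delete %s: %w", id, err))
		}
	}
	return errors.Join(errs...) // nil when no deletions failed
}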
612-659
: Consider storing the encryption key more securely.
Currently the key is generated/stored on disk, which is a straightforward solution but might be risky if an attacker can read files from disk. Depending on security requirements, you could explore dedicated secret management solutions.
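One low-effort variant is to prefer a key injected through the environment and fall back to the on-disk file. The sketch below is illustrative only; the environment variable name is an assumption:

package backup

import (
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

// loadEncryptionKey is a sketch: it reads a hex-encoded 32-byte AES-256 key
// from an environment variable (name assumed) before falling back to a file.
func loadEncryptionKey(keyFile string) ([]byte, error) {
	encoded := os.Getenv("BIRDNET_BACKUP_ENCRYPTION_KEY") // assumed variable name
	if encoded == "" {
		data, err := os.ReadFile(keyFile)
		if err != nil {
			return nil, fmt.Errorf("reading key file: %w", err)
		}
		encoded = strings.TrimSpace(string(data))
	}
	key, err := hex.DecodeString(encoded)
	if err != nil {
		return nil, fmt.Errorf("decoding encryption key: %w", err)
	}
	if len(key) != 32 {
		return nil, fmt.Errorf("encryption key must be 32 bytes, got %d", len(key))
	}
	return key, nil
}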
928-937
: Deleting backups across multiple targets only returns the last error.
If several targets fail during deletion, the user only sees the final error. For improved debugging, consider aggregating all errors or returning a combined error.
internal/conf/validate.go (2)
284-293
: Cron expression validation is still marked TODO.
You might leverage a library like github.com/robfig/cron
to validate cron syntax or parse advanced schedules. Do you want help incorporating cron expression validation here?
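A sketch of what that could look like with robfig/cron/v3, assuming the standard five-field syntax already used in the defaults ("0 0 * * *"):

package conf

import (
	"fmt"

	"github.com/robfig/cron/v3"
)

// validateSchedule is a sketch: it rejects malformed cron expressions at
// configuration time instead of at the first scheduled run.
func validateSchedule(expr string) error {
	if expr == "" {
		return nil // an empty value falls back to the default schedule
	}
	if _, err := cron.ParseStandard(expr); err != nil {
		return fmt.Errorf("invalid backup schedule %q: %w", expr, err)
	}
	return nil
}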
310-313
: At least one backup target is required.
This is a sensible requirement. Consider also validating that each target is unique to prevent configuration duplication errors.
internal/backup/targets/local.go (3)
20-53
: Add validation for metadata fields.The
MetadataV1
struct could benefit from additional validation to ensure data integrity:
- Version compatibility checks
- Size validation
- Timestamp validation
type MetadataV1 struct { + // Add validation methods + func (m *MetadataV1) Validate() error { + if m.Version != currentVersion { + return fmt.Errorf("unsupported metadata version: %d", m.Version) + } + if m.Size < 0 { + return fmt.Errorf("invalid size: %d", m.Size) + } + if m.Timestamp.IsZero() { + return fmt.Errorf("timestamp is required") + } + return nil + }
196-259
: Enhance logging for security events.The path validation is thorough, but security-related events should be logged for audit purposes.
func validatePath(path string) error { + logSecurityEvent := func(reason string) { + if t.logger != nil { + t.logger.Printf("🚫 Security: Rejected path %q: %s", path, reason) + } + } + if strings.Contains(cleanPath, "..") { + logSecurityEvent("directory traversal attempt") return backup.NewError(backup.ErrValidation, "path must not contain directory traversal sequences", nil) }
301-417
: Add detailed progress reporting.The backup operation could benefit from more granular progress reporting to improve user experience.
func (t *LocalTarget) Store(ctx context.Context, sourcePath string, metadata *backup.Metadata) error { + type Progress struct { + Phase string // e.g., "copying", "verifying" + Percent float64 + BytesDone int64 + BytesTotal int64 + } + progressCh := make(chan Progress, 1) + if t.debug { + go func() { + for p := range progressCh { + t.logger.Printf("📊 Backup progress: %s %.1f%% (%d/%d bytes)", + p.Phase, p.Percent, p.BytesDone, p.BytesTotal) + } + }() + }internal/backup/targets/rsync.go (2)
393-400
: Add validation for rsync command arguments.The rsync command construction could benefit from additional validation:
- Validate rsync flags for security
- Sanitize command arguments
args := []string{ "-av", // Archive mode and verbose "--protect-args", // Protect special characters "--chmod=Du=rwx,Dg=,Do=,Fu=rw,Fg=,Fo=", // Strict permissions "--timeout=300", // Connection timeout "--delete-during", // Clean deletions "--checksum", // Verify checksums "--no-implied-dirs", // Prevent directory creation outside target "--no-relative", // Disable relative path mode "-e", t.buildSSHCmd(), // SSH command with custom port and security options } + +// Validate rsync flags +for _, arg := range args { + if strings.HasPrefix(arg, "--") { + allowed := map[string]bool{ + "--protect-args": true, + "--chmod": true, + "--timeout": true, + "--delete-during": true, + "--checksum": true, + "--no-implied-dirs": true, + "--no-relative": true, + } + flag := strings.SplitN(arg, "=", 2)[0] + if !allowed[flag] { + return fmt.Errorf("unsupported rsync flag: %s", flag) + } + } +}
204-218
: Enhance error categorization.The error classification in
isTransientError
could be more comprehensive:
- Add more specific error patterns
- Consider OS-specific errors
func (t *RsyncTarget) isTransientError(err error) bool { if err == nil { return false } errStr := err.Error() - return strings.Contains(errStr, "connection reset") || - strings.Contains(errStr, "timeout") || - strings.Contains(errStr, "temporary") || - strings.Contains(errStr, "broken pipe") || - strings.Contains(errStr, "connection refused") || - strings.Contains(errStr, "no route to host") || - strings.Contains(errStr, "connection closed") || - strings.Contains(errStr, "ssh: handshake failed") || - strings.Contains(errStr, "EOF") + // Network-related errors + networkErrors := []string{ + "connection reset", + "connection refused", + "no route to host", + "network is unreachable", + "connection timed out", + } + + // I/O related errors + ioErrors := []string{ + "broken pipe", + "input/output error", + "device not configured", + } + + // SSH specific errors + sshErrors := []string{ + "ssh: handshake failed", + "ssh: unable to authenticate", + "ssh: connection closed", + } + + for _, pattern := range append(append(networkErrors, ioErrors...), sshErrors...) { + if strings.Contains(strings.ToLower(errStr), strings.ToLower(pattern)) { + return true + } + } + + return falseinternal/backup/targets/sftp.go (1)
163-202
: Enhance connection pool management.The connection pool implementation could benefit from additional features:
- Connection health checks
- Pool statistics
- Automatic pool cleanup
+// PoolStats holds statistics about the connection pool +type PoolStats struct { + Active int // Number of active connections + Idle int // Number of idle connections + Requests int64 // Total number of connection requests + Hits int64 // Number of times an existing connection was reused + Misses int64 // Number of times a new connection had to be created + Timeouts int64 // Number of times getting a connection timed out + Errors int64 // Number of connection errors + LastError time.Time // Time of last error +} + func (t *SFTPTarget) getConnection(ctx context.Context) (*sftp.Client, error) { + t.stats.Requests++ // Try to get a connection from the pool select { case client := <-t.connPool: if t.isConnectionAlive(client) { + t.stats.Hits++ return client, nil } + t.stats.Errors++ + t.stats.LastError = time.Now() // Connection is dead, close it and create a new one client.Close() default: + t.stats.Misses++ // Pool is empty, create a new connection }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
internal/backup/backup.go (1 hunks)
internal/backup/errors.go (1 hunks)
internal/backup/targets/ftp.go (1 hunks)
internal/backup/targets/gdrive.go (1 hunks)
internal/backup/targets/local.go (1 hunks)
internal/backup/targets/rsync.go (1 hunks)
internal/backup/targets/sftp.go (1 hunks)
internal/conf/config.go (3 hunks)
internal/conf/validate.go (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- internal/backup/targets/ftp.go
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (9)
internal/backup/targets/gdrive.go (4)
1-24
: LGTM! Well-organized imports and package structure.The imports are properly organized and all dependencies are necessary for the implementation.
25-36
: LGTM! Well-defined constants with reasonable values.The rate limiting and timeout values are well-balanced for typical usage patterns.
146-158
: Verify Google Drive API usage.Despite using the Google Drive API package, the implementation seems more aligned with Google Cloud Storage patterns. This could lead to confusion and potential issues.
Let's verify the API usage:
#!/bin/bash
# Check if the code uses Google Drive-specific features
rg -A 5 "drive\.google\.com/drive" .
rg -A 5 "google-apps\.folder" .
656-713
: Improve pagination handling in List method.The List method correctly implements pagination, but it could benefit from a more efficient implementation.
Consider these improvements:
func (t *GDriveTarget) List(ctx context.Context) ([]backup.BackupInfo, error) { + const maxResults = 1000 // Adjust based on your needs if t.config.Debug { t.logger.Printf("🔄 GDrive: Listing backups") } var backups []backup.BackupInfo + backups = make([]backup.BackupInfo, 0, maxResults) // Pre-allocate slice return backups, t.withRetry(ctx, func() error { // ... existing code ... query := fmt.Sprintf("'%s' in parents and trashed=false", folderId) pageToken := "" for { fileList, err := t.service.Files.List().Q(query). Fields("nextPageToken, files(id, name, size, createdTime, description)"). + PageSize(maxResults). PageToken(pageToken). Context(ctx).Do()internal/backup/errors.go (3)
50-84
: Great approach for user-friendly error prefixing.
Using emojis in the error prefix is a creative way to make logs more readable. Ensure that logging/output pipelines can handle these symbols consistently.
86-106
: Error struct implementation looks clean and consistent.
The Error(), Unwrap(), and NewError() methods follow idiomatic Go error wrapping. This helps preserve the underlying error context.
108-153
: Utility “IsXError” functions are appropriate but could be tested.
Consider adding unit tests for each “Is…” function to ensure correctness, especially if new error codes are introduced in the future.internal/conf/validate.go (2)
69-75
: Backup validation logic is a good addition.
You’re correctly gating backup validation behind settings.Backup.Enabled
. This ensures users without backups enabled aren’t blocked by validation errors.
322-347
: Add path traversal checks for local targets.
Similar to prior suggestions, ensure that path
doesn’t contain “..” or invalid sequences that traverse outside desired directories. This was mentioned in a previous review.
internal/backup/targets/gdrive.go
Outdated
target := &GDriveTarget{ | ||
config: *config, | ||
logger: logger, | ||
tempFiles: make(map[string]bool), | ||
rateLimiter: newRateLimiter(defaultRateLimitTokens, defaultRateLimitReset), | ||
folderCache: newFolderCache(), | ||
} | ||
|
||
// Initialize the Google Drive service | ||
if err := target.initializeService(); err != nil { | ||
target.rateLimiter.stop() | ||
return nil, err | ||
} | ||
|
||
return target, nil | ||
} | ||
|
||
// initializeService initializes the Google Drive service | ||
func (t *GDriveTarget) initializeService() error { | ||
ctx := context.Background() | ||
|
||
b, err := os.ReadFile(t.config.CredentialsFile) | ||
if err != nil { | ||
return backup.NewError(backup.ErrConfig, "gdrive: unable to read credentials file", err) | ||
} | ||
|
||
// If modifying these scopes, delete your previously saved token.json. | ||
config, err := google.ConfigFromJSON(b, drive.DriveFileScope) | ||
if err != nil { | ||
return backup.NewError(backup.ErrConfig, "gdrive: unable to parse credentials", err) | ||
} | ||
|
||
client, err := t.getClient(config) | ||
if err != nil { | ||
return err | ||
} | ||
|
||
srv, err := drive.NewService(ctx, option.WithHTTPClient(client)) | ||
if err != nil { | ||
return backup.NewError(backup.ErrIO, "gdrive: unable to initialize service", err) | ||
} | ||
|
||
t.service = srv | ||
return nil | ||
} | ||
|
||
// getClient retrieves a token, saves the token, then returns the generated client | ||
func (t *GDriveTarget) getClient(config *oauth2.Config) (*http.Client, error) { | ||
tok, err := t.tokenFromFile() | ||
if err != nil { | ||
tok, err = t.getTokenFromWeb(config) | ||
if err != nil { | ||
return nil, err | ||
} | ||
if err := t.saveToken(tok); err != nil { | ||
return nil, err | ||
} | ||
} | ||
return config.Client(context.Background(), tok), nil | ||
} | ||
|
||
// getTokenFromWeb requests a token from the web | ||
func (t *GDriveTarget) getTokenFromWeb(config *oauth2.Config) (*oauth2.Token, error) { | ||
authURL := config.AuthCodeURL("state-token", oauth2.AccessTypeOffline) | ||
t.logger.Printf("Go to the following link in your browser then type the authorization code:\n%v\n", authURL) | ||
|
||
var authCode string | ||
if _, err := fmt.Scan(&authCode); err != nil { | ||
return nil, backup.NewError(backup.ErrValidation, "gdrive: unable to read authorization code", err) | ||
} | ||
|
||
tok, err := config.Exchange(context.TODO(), authCode) | ||
if err != nil { | ||
return nil, backup.NewError(backup.ErrValidation, "gdrive: unable to retrieve token from web", err) | ||
} | ||
return tok, nil | ||
} | ||
|
||
// tokenFromFile retrieves a token from a local file | ||
func (t *GDriveTarget) tokenFromFile() (*oauth2.Token, error) { | ||
f, err := os.Open(t.config.TokenFile) | ||
if err != nil { | ||
return nil, err | ||
} | ||
defer f.Close() | ||
tok := &oauth2.Token{} | ||
err = json.NewDecoder(f).Decode(tok) | ||
return tok, err | ||
} | ||
|
||
// saveToken saves a token to a file | ||
func (t *GDriveTarget) saveToken(token *oauth2.Token) error { | ||
f, err := os.OpenFile(t.config.TokenFile, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0o600) | ||
if err != nil { | ||
return backup.NewError(backup.ErrIO, "gdrive: unable to cache oauth token", err) | ||
} | ||
defer f.Close() | ||
return json.NewEncoder(f).Encode(token) | ||
} | ||
|
🛠️ Refactor suggestion
Enhance token file security.
The token file is created with 0o600 permissions, which is good, but additional security measures could be implemented.
Consider these security enhancements:
func (t *GDriveTarget) saveToken(token *oauth2.Token) error {
+ // Create token file with secure permissions
+ tokenDir := filepath.Dir(t.config.TokenFile)
+ if err := os.MkdirAll(tokenDir, 0o700); err != nil {
+ return backup.NewError(backup.ErrIO, "gdrive: unable to create token directory", err)
+ }
+
f, err := os.OpenFile(t.config.TokenFile, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0o600)
if err != nil {
return backup.NewError(backup.ErrIO, "gdrive: unable to cache oauth token", err)
}
defer f.Close()
+
+ // Encrypt token before saving
+ encrypted, err := t.encryptToken(token)
+ if err != nil {
+ return backup.NewError(backup.ErrIO, "gdrive: unable to encrypt token", err)
+ }
- return json.NewEncoder(f).Encode(token)
+ return json.NewEncoder(f).Encode(encrypted)
}
Committable suggestion skipped: line range outside the PR's diff.
internal/conf/config.go
Outdated
// BackupConfig contains backup-related configuration
type BackupConfig struct {
	Enabled    bool            `yaml:"enabled"`    // true to enable backup functionality
	Debug      bool            `yaml:"debug"`      // true to enable debug logging
	Schedule   string          `yaml:"schedule"`   // Cron expression for backup schedule
	Encryption bool            `yaml:"encryption"` // true to enable backup encryption
	Retention  BackupRetention `yaml:"retention"`  // Backup retention settings
	Targets    []BackupTarget  `yaml:"targets"`    // List of backup targets

	// OperationTimeouts defines timeouts for various backup operations
	OperationTimeouts struct {
		Backup  time.Duration // Timeout for entire backup operation (default: 2h)
		Store   time.Duration // Timeout for storing a backup (default: 15m)
		Cleanup time.Duration // Timeout for cleanup operation (default: 10m)
		Delete  time.Duration // Timeout for delete operation (default: 2m)
	}
}
Add validation for backup schedule and retention settings.
The BackupConfig
struct needs validation for:
- Cron expression format in
Schedule
- Duration format in
Retention.MaxAge
- Reasonable limits for
Retention.MaxBackups
andRetention.MinBackups
Add validation in the ValidateSettings
function:
import "github.com/robfig/cron/v3"
func validateBackupConfig(config *BackupConfig) error {
if !config.Enabled {
return nil
}
// Validate schedule
if config.Schedule != "" {
_, err := cron.ParseStandard(config.Schedule)
if err != nil {
return fmt.Errorf("invalid backup schedule: %w", err)
}
}
// Validate retention
if config.Retention.MaxAge != "" {
if _, err := time.ParseDuration(config.Retention.MaxAge); err != nil {
return fmt.Errorf("invalid retention max age: %w", err)
}
}
if config.Retention.MaxBackups < config.Retention.MinBackups {
return fmt.Errorf("max backups (%d) cannot be less than min backups (%d)",
config.Retention.MaxBackups, config.Retention.MinBackups)
}
return nil
}
internal/conf/config.go
Outdated
Enabled    bool   `yaml:"enabled"`    // true to enable backup functionality
Debug      bool   `yaml:"debug"`      // true to enable debug logging
Schedule   string `yaml:"schedule"`   // Cron expression for backup schedule
Encryption bool   `yaml:"encryption"` // true to enable backup encryption
Enhance encryption key management.
The current boolean flag for encryption needs to be enhanced with secure key management:
- Use environment variables or a key management system
- Support key rotation
- Add encryption algorithm configuration
Add encryption configuration:
type BackupConfig struct {
- Encryption bool `yaml:"encryption"` // true to enable backup encryption
+ Encryption struct {
+ Enabled bool `yaml:"enabled"` // true to enable backup encryption
+ KeySource string `yaml:"keysource"` // "env", "file", or "vault"
+ KeyName string `yaml:"keyname"` // Name of the key or environment variable
+ Algorithm string `yaml:"algorithm"` // "AES256", "ChaCha20", etc.
+ KeyRotation struct {
+ Enabled bool `yaml:"enabled"` // Enable automatic key rotation
+ Interval time.Duration `yaml:"interval"` // How often to rotate keys
+ RetainOld int `yaml:"retainold"` // Number of old keys to retain
+ } `yaml:"keyrotation"`
+ } `yaml:"encryption"`
Actionable comments posted: 0
🧹 Nitpick comments (14)
internal/backup/targets/local.go (5)
21-30
: Consider making maxBackupSize configurable.The hardcoded 10GB limit might not suit all use cases. Consider making this configurable through the
LocalTargetConfig
struct while keeping 10GB as the default.type LocalTargetConfig struct { Path string Debug bool + MaxBackupSize int64 // Maximum backup size in bytes, defaults to 10GB }
56-70
: Add input validation to metadata conversion functions.The conversion functions should validate input parameters to prevent nil pointer dereferences.
func convertToVersionedMetadata(m *backup.Metadata) MetadataV1 { + if m == nil { + return MetadataV1{Version: currentVersion} + } return MetadataV1{ Version: currentVersion, // ... rest of the implementation } } func convertFromVersionedMetadata(m *MetadataV1) backup.Metadata { + if m == nil { + return backup.Metadata{} + } return backup.Metadata{ // ... rest of the implementation } }Also applies to: 72-84
87-132
: Add context support to atomicWriteFile.The function should accept a context parameter to support cancellation during long-running writes.
-func atomicWriteFile(targetPath, tempPattern string, perm os.FileMode, write func(*os.File) error) error { +func atomicWriteFile(ctx context.Context, targetPath, tempPattern string, perm os.FileMode, write func(*os.File) error) error { // ... existing implementation ... // Write data + if err := ctx.Err(); err != nil { + return fmt.Errorf("context cancelled before write: %w", err) + } if err := write(tempFile); err != nil { return err } // ... rest of the implementation ... }
301-417
: Consider adding progress reporting for large files.For large backup files, it would be helpful to provide progress updates during the copy operation.
type LocalTarget struct { path string debug bool logger backup.Logger + progressCallback func(bytesWritten, totalBytes int64) } func (t *LocalTarget) Store(ctx context.Context, sourcePath string, metadata *backup.Metadata) error { // ... existing implementation ... copyDone := make(chan error, 1) go func() { - copyDone <- copyFile(tempFile, srcFile) + copyDone <- copyFileWithProgress(tempFile, srcFile, srcInfo.Size(), t.progressCallback) }() // ... rest of the implementation ... } +func copyFileWithProgress(dst, src *os.File, totalBytes int64, progressFn func(written, total int64)) error { + buf := make([]byte, copyBufferSize) + written := int64(0) + for { + nr, err := src.Read(buf) + if nr > 0 { + nw, err := dst.Write(buf[0:nr]) + if err != nil { + return err + } + written += int64(nw) + if progressFn != nil { + progressFn(written, totalBytes) + } + } + if err == io.EOF { + break + } + if err != nil { + return err + } + } + return nil +}
539-547
: Use os.CreateTemp for write permission test.The current implementation could leave stale test files if the process is interrupted. Use
os.CreateTemp
instead.- tmpFile := filepath.Join(t.path, "write_test") - f, err := os.Create(tmpFile) + f, err := os.CreateTemp(t.path, "write_test") if err != nil { return backup.NewError(backup.ErrValidation, "backup path is not writable", err) } + tmpFile := f.Name() f.Close() os.Remove(tmpFile)internal/backup/backup.go (6)
27-35
: Consider adding specialized error codes for backup source validations.
Currently, validation errors return a generic ErrValidation
. If you anticipate differentiating between various validation failures (e.g., missing configurations vs. unreachable resources), specialized error codes or messages might provide more context.
176-195
: Extend support for more complex scheduling.
The current parseCronSchedule
only handles daily hour-based schedules. You might consider supporting more extensive cron syntax (e.g., minute granularity, day-of-week) or explicitly limiting the user’s configuration options to avoid confusion.
269-269
: Revisit the daily backup detection logic.
Relying on now.Hour() == 0 && now.Minute() < 15
may inadvertently skip daily backups if the process runs late or experiences delays. Consider an approach that uses a schedule-based trigger rather than a time window check.
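A sketch of a catch-up friendly trigger, assuming a parsed cron schedule is available (for example from robfig/cron); the helper is illustrative only:

package backup

import (
	"time"

	"github.com/robfig/cron/v3"
)

// dueForBackup is a sketch: rather than testing a fixed clock window, it asks
// whether the next scheduled run after the last successful run has already
// passed, so a delayed process still triggers the overdue backup.
func dueForBackup(schedule cron.Schedule, lastRun, now time.Time) bool {
	return !schedule.Next(lastRun).After(now)
}

With this shape, a backup that should have fired at midnight still runs when the process wakes up at 00:20, and lastRun is simply advanced after each successful run.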
438-439
: Evaluate memory usage for large backups.
Storing the entire backup file into memory (backupData
) before adding it to the archive may become problematic for large databases. Consider streaming the data or chunk-based zipping to reduce memory overhead.
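A minimal sketch of the streaming variant using archive/zip and io.Copy (the function name is assumed, not existing code):

package backup

import (
	"archive/zip"
	"io"
	"os"
)

// streamFileIntoZip is a sketch: the backup file is copied into the archive
// in chunks instead of being read fully into memory first.
func streamFileIntoZip(zw *zip.Writer, name, srcPath string) error {
	src, err := os.Open(srcPath)
	if err != nil {
		return err
	}
	defer src.Close()

	w, err := zw.Create(name)
	if err != nil {
		return err
	}
	_, err = io.Copy(w, src) // io.Copy streams through a small internal buffer
	return err
}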
718-739
: Consider a password-protected encryption key option.
Though AES-256-GCM encryption is solid, storing the raw key in a hex file on disk may raise security concerns if attackers gain filesystem access. Offering a passphrase or environment-based approach could add another layer of protection.
951-954
: Add logging or metrics for cleanup successes.
Although errors are logged, successful cleanup of old backups isn’t explicitly logged. Consider providing feedback for successful deletions, which can help operators confirm that retention policies are taking effect.
internal/backup/targets/sftp.go (3)
94-94
: Consider clarifying default forBasePath
.
When no path is specified, it defaults to"backups"
. Documenting this default within code comments or parameter references would help users understand the fallback behavior.
205-218
: Assess whether non-network transient errors are also retried.
TheisTransientError
method checks for strings typically associated with network issues (e.g., “connection reset,” “timeout,” etc.). There may be other ephemeral errors (like resource exhaustion) worth including to avoid failing prematurely.
353-380
: Validate hidden-file restrictions for advanced usage.
Blocking “.”-prefixed paths helps secure the system, but some users might store backups in “.backup” folders for organizational reasons. Consider whether they should optionally allow hidden directories or have a separate policy for them.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
internal/backup/backup.go (1 hunks)
internal/backup/metadata_unix.go (1 hunks)
internal/backup/metadata_windows.go (1 hunks)
internal/backup/sources/sqlite.go (1 hunks)
internal/backup/targets/ftp.go (1 hunks)
internal/backup/targets/local.go (1 hunks)
internal/backup/targets/rsync.go (1 hunks)
internal/backup/targets/sftp.go (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- internal/backup/metadata_windows.go
🚧 Files skipped from review as they are similar to previous changes (3)
- internal/backup/targets/rsync.go
- internal/backup/targets/ftp.go
- internal/backup/sources/sqlite.go
🔇 Additional comments (7)
internal/backup/targets/local.go (2)
197-259
: LGTM! Comprehensive path validation.The path validation implementation is thorough, covering:
- Directory traversal prevention
- Windows-specific checks
- Symlink restrictions
- Path length limits
441-498
: LGTM! Robust backup listing with versioned metadata support.The List implementation gracefully handles both versioned and legacy metadata formats, with proper error handling and sorting.
internal/backup/backup.go (1)
768-792
: Guard against potential partial writes when zipping large files.
TheaddFileToZip
function writes the file into the zip without chunk-based monitoring. For large files, an I/O error could leave a corrupted zip. Consider verifying write progress or ensuring partial writes are handled gracefully.internal/backup/metadata_unix.go (1)
11-17
: LGTM: Unix-specific file metadata handling looks solid.
This function safely checks for astat
object before assigning UID/GID. No critical issues found.internal/backup/targets/sftp.go (3)
280-285
: Prevent proceeding altogether if no known_hosts file is configured.
The code currently raises an error ifKnownHostFile
is empty. This ensures secure host verification. Well done. If this configuration is optional in future, clarify or log a stronger warning about potential security risks.
477-506
: Ensure consistent cleanup on rename failures.
Ifclient.Rename(tempName, remotePath)
fails, the code removes the temporary file, but it doesn’t attempt to remove any partial files atremotePath
(in the event the rename partially succeeded). Confirm whether SFTP rename attempts can cause partial moves.
701-725
: Close all connections even if cleanup fails.
You’re attempting to close all connections in a loop, which is good. If an error occurs mid-loop (e.g., inclient.Close()
), ensure the function continues closing all the other clients rather than returning early. The current logic proceeds correctly, but keep an eye on error handling or logging to track partial failures.
Actionable comments posted: 4
🧹 Nitpick comments (6)
internal/backup/targets/sftp.go (2)
25-25
: Consider increasing the default timeout.The default timeout of 30 seconds might be insufficient for large files or slow connections.
-defaultSFTPTimeout = 30 * time.Second +defaultSFTPTimeout = 120 * time.Second
500-505
: Improve error handling in uploadFile cleanup.The error from the cleanup operation is silently ignored. Consider logging it in debug mode.
if err := t.uploadFile(ctx, client, localPath, tempName); err != nil { // Try to clean up the temporary file - _ = client.Remove(tempName) + if cleanupErr := client.Remove(tempName); cleanupErr != nil && t.config.Debug { + fmt.Printf("⚠️ SFTP: Failed to cleanup temporary file %s: %v\n", tempName, cleanupErr) + } return err }internal/backup/targets/ftp.go (4)
55-69
: Consider adding TLS/FTPS configuration options.The
FTPTargetConfig
could benefit from additional security options such as:
- TLS configuration for FTPS support
- Certificate verification options
- Explicit vs. implicit FTPS mode
type FTPTargetConfig struct { Host string Port int Username string Password string BasePath string Timeout time.Duration Debug bool MaxConns int MaxRetries int RetryBackoff time.Duration Features []string MinSpace int64 + UseTLS bool // Enable FTPS + TLSConfig *tls.Config // Custom TLS configuration + ExplicitTLS bool // Use explicit FTPS mode }
162-307
: Add connection pool monitoring.The connection pool implementation would benefit from metrics to monitor its health and performance:
- Pool utilization
- Connection errors
- Connection lifetime
type FTPTarget struct { config FTPTargetConfig logger backup.Logger connPool chan *ftp.ServerConn mu sync.Mutex tempFiles map[string]bool tempFilesMu sync.Mutex initialDir string + metrics struct { + poolSize int64 + errors int64 + connections int64 + waitTime time.Duration + } } func (t *FTPTarget) getConnection(ctx context.Context) (*ftp.ServerConn, error) { + start := time.Now() select { case conn := <-t.connPool: + atomic.AddInt64(&t.metrics.waitTime, time.Since(start).Nanoseconds()) if t.isConnectionAlive(conn) { return conn, nil } _ = conn.Quit() + atomic.AddInt64(&t.metrics.errors, 1) default: }
514-608
: Enhance validation with additional checks.The validation could be more comprehensive:
- Verify disk space availability
- Check server capabilities (FEAT command)
- Verify file permissions
func (t *FTPTarget) Validate() error { ctx, cancel := context.WithTimeout(context.Background(), t.config.Timeout) defer cancel() return t.withRetry(ctx, func(conn *ftp.ServerConn) error { + // Check server features + features, err := conn.Features() + if err != nil { + return backup.NewError(backup.ErrValidation, "ftp: failed to get server features", err) + } + + // Verify required features + requiredFeatures := map[string]bool{ + "SIZE": false, + "MDTM": false, + "UTF8": false, + } + for _, feat := range features { + if _, ok := requiredFeatures[feat]; ok { + requiredFeatures[feat] = true + } + } + + // Check if all required features are supported + for feat, supported := range requiredFeatures { + if !supported { + return backup.NewError(backup.ErrValidation, + fmt.Sprintf("ftp: required feature %s not supported", feat), nil) + } + }
644-672
: Improve cleanup reliability during shutdown.The Close method could be enhanced:
- Add timeout for cleanup operations
- Implement graceful shutdown
- Add cleanup status reporting
func (t *FTPTarget) Close() error { + ctx, cancel := context.WithTimeout(context.Background(), t.config.Timeout) + defer cancel() + t.mu.Lock() defer t.mu.Unlock() var lastErr error + var cleanupStatus struct { + tempFiles int + connections int + errors []error + } // Try to clean up temporary files with one last connection - conn, err := t.connect(context.Background()) + conn, err := t.connect(ctx) if err == nil { t.cleanupTempFiles(conn) if err := conn.Quit(); err != nil { lastErr = err + cleanupStatus.errors = append(cleanupStatus.errors, err) } } // Close all pooled connections + timeout := time.After(t.config.Timeout) for { select { case conn := <-t.connPool: + cleanupStatus.connections++ if err := conn.Quit(); err != nil { lastErr = err t.logger.Printf("Warning: failed to close FTP connection: %v", err) + cleanupStatus.errors = append(cleanupStatus.errors, err) } + case <-timeout: + if len(cleanupStatus.errors) > 0 { + t.logger.Printf("Warning: cleanup timed out with %d errors", len(cleanupStatus.errors)) + } + return lastErr default: + if t.config.Debug { + t.logger.Printf("Cleanup completed: %d connections closed, %d errors", + cleanupStatus.connections, len(cleanupStatus.errors)) + } return lastErr } } }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
internal/backup/targets/ftp.go (1 hunks)
internal/backup/targets/sftp.go (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: unit-tests
- GitHub Check: Socket Security: Pull Request Alerts
- GitHub Check: Analyze (go)
🔇 Additional comments (1)
internal/backup/targets/ftp.go (1)
1-18
: LGTM! Well-organized package structure.The package is properly documented and imports are well-organized.
// knownHostsCallback creates a host key callback function from a known_hosts file
func knownHostsCallback(knownHostsFile string) (ssh.HostKeyCallback, error) {
	// Check if the known_hosts file exists
	if _, err := os.Stat(knownHostsFile); os.IsNotExist(err) {
		// Create the .ssh directory if it doesn't exist
		sshDir := filepath.Dir(knownHostsFile)
		if err := os.MkdirAll(sshDir, 0o700); err != nil {
			return nil, backup.NewError(backup.ErrIO, "failed to create .ssh directory", err)
		}
		// Create an empty known_hosts file
		if err := os.WriteFile(knownHostsFile, []byte{}, 0o600); err != nil {
			return nil, backup.NewError(backup.ErrIO, "failed to create known_hosts file", err)
		}
	}

	callback, err := knownhosts.New(knownHostsFile)
	if err != nil {
		return nil, backup.NewError(backup.ErrValidation, "failed to parse known_hosts file", err)
	}
	return callback, nil
}
Fix security vulnerability in known_hosts handling.
The current implementation creates an empty known_hosts file if it doesn't exist, which defeats the purpose of host key verification and makes the connection vulnerable to MITM attacks.
func knownHostsCallback(knownHostsFile string) (ssh.HostKeyCallback, error) {
// Check if the known_hosts file exists
if _, err := os.Stat(knownHostsFile); os.IsNotExist(err) {
- // Create the .ssh directory if it doesn't exist
- sshDir := filepath.Dir(knownHostsFile)
- if err := os.MkdirAll(sshDir, 0o700); err != nil {
- return nil, backup.NewError(backup.ErrIO, "failed to create .ssh directory", err)
- }
- // Create an empty known_hosts file
- if err := os.WriteFile(knownHostsFile, []byte{}, 0o600); err != nil {
- return nil, backup.NewError(backup.ErrIO, "failed to create known_hosts file", err)
- }
+ return nil, backup.NewError(backup.ErrSecurity,
+ "known_hosts file does not exist. Please create it with valid host keys", err)
}
// getConnection gets a connection from the pool or creates a new one
func (t *SFTPTarget) getConnection(ctx context.Context) (*sftp.Client, error) {
	// Try to get a connection from the pool
	select {
	case client := <-t.connPool:
		if t.isConnectionAlive(client) {
			return client, nil
		}
		// Connection is dead, close it and create a new one
		client.Close()
	default:
		// Pool is empty, create a new connection
	}

	return t.connect(ctx)
}
🛠️ Refactor suggestion
Add connection pool size check.
The current implementation might create too many connections if the pool is empty. Consider adding a check against MaxConns.
func (t *SFTPTarget) getConnection(ctx context.Context) (*sftp.Client, error) {
+ t.mu.Lock()
+ currentConns := len(t.connPool)
+ t.mu.Unlock()
+
+ if currentConns >= t.config.MaxConns {
+ return nil, backup.NewError(backup.ErrIO, "sftp: connection limit reached", nil)
+ }
+
// Try to get a connection from the pool
select {
case client := <-t.connPool:
// atomicUpload performs an atomic upload operation using a temporary file
func (t *FTPTarget) atomicUpload(ctx context.Context, conn *ftp.ServerConn, localPath, remotePath string) error {
	// Create a temporary filename
	tempFile, err := os.CreateTemp(filepath.Dir(localPath), "ftp-upload-*")
	if err != nil {
		return backup.NewError(backup.ErrIO, "ftp: failed to create temporary file", err)
	}
	tempName := tempFile.Name()
	tempFile.Close()
	os.Remove(tempName) // Remove the local temp file as we only need its name pattern

	// Create the remote temp file name using the same base name
	tempName = path.Join(path.Dir(remotePath), filepath.Base(tempName))
	t.trackTempFile(tempName)
	defer t.untrackTempFile(tempName)

	// Upload to temporary file
	if err := t.uploadFile(ctx, conn, localPath, tempName); err != nil {
		// Try to clean up the temporary file
		_ = conn.Delete(tempName)
		return err
	}

	// Rename to final destination
	if err := conn.Rename(tempName, remotePath); err != nil {
		// Try to clean up the temporary file
		_ = conn.Delete(tempName)
		return backup.NewError(backup.ErrIO, "ftp: failed to rename temporary file", err)
	}

	return nil
}

// uploadFile handles the actual file upload
func (t *FTPTarget) uploadFile(ctx context.Context, conn *ftp.ServerConn, localPath, remotePath string) error {
	file, err := os.Open(localPath)
	if err != nil {
		return backup.NewError(backup.ErrIO, "ftp: failed to open local file", err)
	}
	defer file.Close()

	// Create a pipe for streaming
	pr, pw := io.Pipe()
	errChan := make(chan error, 1)

	// Start upload in a goroutine
	go func() {
		defer pw.Close()
		_, err := io.Copy(pw, file)
		if err != nil {
			errChan <- backup.NewError(backup.ErrIO, "ftp: failed to copy file data", err)
			return
		}
		errChan <- nil
	}()

	// Store the file on the FTP server
	if err := conn.Stor(remotePath, pr); err != nil {
		// Debug print failing file
		if t.config.Debug {
			t.logger.Printf("FTP: Failed to store file %s: %v", remotePath, err)
		}
		return backup.NewError(backup.ErrIO, "ftp: failed to store file", err)
	}

	// Wait for upload completion or context cancellation
	select {
	case <-ctx.Done():
		return backup.NewError(backup.ErrCanceled, "ftp: upload operation canceled", ctx.Err())
	case err := <-errChan:
		return err
	}
}
🛠️ Refactor suggestion
Enhance atomic upload reliability.
The atomic upload implementation could be more robust:
- Add checksum verification
- Implement resume capability for large files
- Add progress tracking for large uploads
func (t *FTPTarget) uploadFile(ctx context.Context, conn *ftp.ServerConn, localPath, remotePath string) error {
file, err := os.Open(localPath)
if err != nil {
return backup.NewError(backup.ErrIO, "ftp: failed to open local file", err)
}
defer file.Close()
+ // Calculate checksum before upload
+ hash := sha256.New()
+ if _, err := io.Copy(hash, file); err != nil {
+ return backup.NewError(backup.ErrIO, "ftp: failed to calculate checksum", err)
+ }
+ expectedSum := hash.Sum(nil)
+
+ // Reset file position
+ if _, err := file.Seek(0, 0); err != nil {
+ return backup.NewError(backup.ErrIO, "ftp: failed to reset file position", err)
+ }
+
// Create a pipe for streaming
pr, pw := io.Pipe()
errChan := make(chan error, 1)
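The progress-tracking bullet from the list above is not covered by this diff. One way to add it without changing the upload flow is to wrap the reader handed to Stor; a minimal sketch (type and callback names are illustrative, not part of the PR):

// progressReader counts bytes as they are read and reports the running total.
type progressReader struct {
	r        io.Reader
	total    int64
	onUpdate func(bytesRead int64)
}

func (p *progressReader) Read(b []byte) (int, error) {
	n, err := p.r.Read(b)
	if n > 0 {
		p.total += int64(n)
		if p.onUpdate != nil {
			p.onUpdate(p.total)
		}
	}
	return n, err
}

// Usage inside uploadFile, assuming pr is the pipe reader passed to conn.Stor:
// tracked := &progressReader{r: pr, onUpdate: func(n int64) { t.logger.Printf("ftp: uploaded %d bytes", n) }}
// err := conn.Stor(remotePath, tracked)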
internal/backup/targets/ftp.go
Outdated
// NewFTPTarget creates a new FTP target with the given configuration
func NewFTPTarget(config *FTPTargetConfig, logger backup.Logger) (*FTPTarget, error) {
	// Validate required fields
	if config.Host == "" {
		return nil, backup.NewError(backup.ErrConfig, "ftp: host is required", nil)
	}

	// Set defaults for optional fields
	if config.Port == 0 {
		config.Port = defaultFTPPort
	}
	if config.Timeout == 0 {
		config.Timeout = defaultTimeout
	}
	if config.BasePath == "" {
		config.BasePath = defaultBasePath
	} else {
		config.BasePath = strings.TrimRight(config.BasePath, "/")
	}
	if config.MaxConns == 0 {
		config.MaxConns = defaultMaxConnections
	}
	if config.MaxRetries == 0 {
		config.MaxRetries = defaultMaxRetries
	}
	if config.RetryBackoff == 0 {
		config.RetryBackoff = time.Second
	}

	if logger == nil {
		logger = backup.DefaultLogger()
	}

	target := &FTPTarget{
		config:    *config,
		logger:    logger,
		connPool:  make(chan *ftp.ServerConn, config.MaxConns),
		tempFiles: make(map[string]bool),
	}

	return target, nil
}

// NewFTPTargetFromMap creates a new FTP target from a map configuration (for backward compatibility)
func NewFTPTargetFromMap(settings map[string]interface{}) (*FTPTarget, error) {
	config := FTPTargetConfig{}

	// Required settings
	host, ok := settings["host"].(string)
	if !ok {
		return nil, backup.NewError(backup.ErrConfig, "ftp: host is required", nil)
	}
	config.Host = host

	// Optional settings
	if port, ok := settings["port"].(int); ok {
		config.Port = port
	}
	if username, ok := settings["username"].(string); ok {
		config.Username = username
	}
	if password, ok := settings["password"].(string); ok {
		config.Password = password
	}
	if path, ok := settings["path"].(string); ok {
		config.BasePath = path
	}
	if timeout, ok := settings["timeout"].(string); ok {
		duration, err := time.ParseDuration(timeout)
		if err != nil {
			return nil, backup.NewError(backup.ErrValidation, "ftp: invalid timeout format", err)
		}
		config.Timeout = duration
	}
	if debug, ok := settings["debug"].(bool); ok {
		config.Debug = debug
	}

	var logger backup.Logger
	if l, ok := settings["logger"].(backup.Logger); ok {
		logger = l
	}

	return NewFTPTarget(&config, logger)
}
🛠️ Refactor suggestion
Enhance password security in configuration.
Consider these security improvements for password handling:
- Clear password from memory after use
- Support for password file or environment variable
- Support for SSH key authentication as an alternative
func NewFTPTarget(config *FTPTargetConfig, logger backup.Logger) (*FTPTarget, error) {
// Validate required fields
if config.Host == "" {
return nil, backup.NewError(backup.ErrConfig, "ftp: host is required", nil)
}
+
+ // Support password from environment
+ if strings.HasPrefix(config.Password, "env:") {
+ envVar := strings.TrimPrefix(config.Password, "env:")
+ config.Password = os.Getenv(envVar)
+ }
+
+ // Support password from file
+ if strings.HasPrefix(config.Password, "file:") {
+ passFile := strings.TrimPrefix(config.Password, "file:")
+ passBytes, err := os.ReadFile(passFile)
+ if err != nil {
+ return nil, backup.NewError(backup.ErrConfig, "ftp: failed to read password file", err)
+ }
+ config.Password = strings.TrimSpace(string(passBytes))
+ }
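The first bullet (clearing the password from memory) cannot be done reliably with a Go string, because strings are immutable; it would mean holding the secret as a byte slice for its whole lifetime. A hedged sketch of that approach (the field change is an assumption, not something the PR makes):

// ftpCredentials keeps the secret as mutable bytes so it can be wiped.
type ftpCredentials struct {
	password []byte
}

// wipe overwrites the secret in place once no further logins are needed.
// This narrows, but does not eliminate, the window in which the secret
// is resident in memory (the GC may have copied it earlier).
func (c *ftpCredentials) wipe() {
	for i := range c.password {
		c.password[i] = 0
	}
	c.password = nil
}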
Actionable comments posted: 9
🔭 Outside diff range comments (1)
internal/backup/targets/rsync.go (1)
781-786
: 🛠️ Refactor suggestion
Add error handling for size parsing.
The parseInt64 function is used in the List method but errors are not properly handled, which could lead to incorrect size reporting.
func parseInt64(s string) (int64, error) {
	var result int64
-	_, err := fmt.Sscanf(s, "%d", &result)
-	return result, err
+	n, err := fmt.Sscanf(s, "%d", &result)
+	if err != nil {
+		return 0, err
+	}
+	if n != 1 {
+		return 0, fmt.Errorf("expected to scan 1 item, got %d", n)
+	}
+	return result, nil
}
♻️ Duplicate comments (2)
internal/backup/targets/ftp.go (1)
294-366
: 🛠️ Refactor suggestion
Enhance atomic upload reliability.
The atomic upload implementation could be more robust with checksum verification and resume capability.
func (t *FTPTarget) atomicUpload(ctx context.Context, conn *ftp.ServerConn, localPath, remotePath string) error {
+	// Calculate checksum before upload
+	hash := sha256.New()
+	file, err := os.Open(localPath)
+	if err != nil {
+		return backup.NewError(backup.ErrIO, "ftp: failed to open local file", err)
+	}
+	if _, err := io.Copy(hash, file); err != nil {
+		file.Close()
+		return backup.NewError(backup.ErrIO, "ftp: failed to calculate checksum", err)
+	}
+	expectedSum := hash.Sum(nil)
+
+	// Reset file position
+	if _, err := file.Seek(0, 0); err != nil {
+		file.Close()
+		return backup.NewError(backup.ErrIO, "ftp: failed to reset file position", err)
+	}
+	defer file.Close()
internal/backup/targets/sftp.go (1)
732-752
: ⚠️ Potential issue
Fix security vulnerability in known_hosts handling.
Creating an empty known_hosts file defeats the purpose of host key verification and makes the connection vulnerable to MITM attacks.
func knownHostsCallback(knownHostsFile string) (ssh.HostKeyCallback, error) {
	// Check if the known_hosts file exists
	if _, err := os.Stat(knownHostsFile); os.IsNotExist(err) {
-		// Create the .ssh directory if it doesn't exist
-		sshDir := filepath.Dir(knownHostsFile)
-		if err := os.MkdirAll(sshDir, 0o700); err != nil {
-			return nil, backup.NewError(backup.ErrIO, "failed to create .ssh directory", err)
-		}
-		// Create an empty known_hosts file
-		if err := os.WriteFile(knownHostsFile, []byte{}, 0o600); err != nil {
-			return nil, backup.NewError(backup.ErrIO, "failed to create known_hosts file", err)
-		}
+		return nil, backup.NewError(backup.ErrSecurity,
+			"known_hosts file does not exist. Please create it with valid host keys", err)
	}
🧹 Nitpick comments (20)
cmd/restore/restore.go (2)
47-47
: Combine parameters in function signature
The static analysis suggests combining the parameters to avoid repetitive type declarations.-func runDecrypt(settings *conf.Settings, backupPath string, outputPath string) error { +func runDecrypt(settings *conf.Settings, backupPath, outputPath string) error {🧰 Tools
🪛 GitHub Check: golangci / lint
[failure] 47-47:
paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)🪛 GitHub Check: lint
[failure] 47-47:
paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)🪛 GitHub Actions: golangci-lint
[warning] 47-47: paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)
🪛 GitHub Actions: Build docker image
[warning] 47-47: paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)
🪛 GitHub Actions: golangci-test
[warning] 47-47: paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)
112-112
: Adopt new octal literal style
Use the0o
prefix (e.g.,0o755
) to comply with modern Go conventions, as flagged by static analysis.- if err := os.MkdirAll(filepath.Dir(outputPath), 0755); err != nil { + if err := os.MkdirAll(filepath.Dir(outputPath), 0o755); err != nil { - if err := os.WriteFile(outputPath, decryptedData, 0600); err != nil { + if err := os.WriteFile(outputPath, decryptedData, 0o600); err != nil { - if err := os.WriteFile(newMetaPath, updatedMetaData, 0600); err != nil { + if err := os.WriteFile(newMetaPath, updatedMetaData, 0o600); err != nil {Also applies to: 117-117, 133-133
🧰 Tools
🪛 GitHub Check: golangci / lint
[failure] 112-112:
octalLiteral: use new octal literal style, 0o755 (gocritic)🪛 GitHub Check: lint
[failure] 112-112:
octalLiteral: use new octal literal style, 0o755 (gocritic)internal/backup/encryption.go (2)
31-69
: Consider external secret management
Currently, the encryption key is generated and persisted to a file. For enhanced security, you might store the key externally (e.g., environment variables or a secure secret manager) to reduce risk if the file system is compromised.
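A minimal sketch of the environment-variable option, assuming a hypothetical BIRDNET_BACKUP_KEY variable holding a hex-encoded key and falling back to the existing key file when it is unset (the variable name, encoding, and fallback are assumptions, not part of the PR):

// loadEncryptionKey prefers an externally supplied key over the on-disk copy.
func loadEncryptionKey(keyFilePath string) ([]byte, error) {
	if v := os.Getenv("BIRDNET_BACKUP_KEY"); v != "" {
		key, err := hex.DecodeString(v)
		if err != nil {
			return nil, fmt.Errorf("invalid key in BIRDNET_BACKUP_KEY: %w", err)
		}
		return key, nil
	}
	// Fall back to the key file the backup package already manages.
	return os.ReadFile(keyFilePath)
}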
136-165
: Key rotation strategy
While generating an encryption key on demand is helpful for initial use, consider providing a secure method for forwarding or rotating keys to adapt to compromised keys or revised security policies.internal/backup/state.go (1)
258-260
: Avoid copying large struct values
Static analysis warns that each iteration may copy large structs. Consider referencing values or storing pointers if the struct grows in size, to limit memory overhead.-for k, v := range sm.state.Stats { - stats[k] = v +for k := range sm.state.Stats { + current := sm.state.Stats[k] + stats[k] = current }🧰 Tools
🪛 GitHub Check: golangci / lint
[failure] 258-258:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)🪛 GitHub Check: lint
[failure] 258-258:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)internal/backup/scheduler.go (2)
14-22
: Confirm the correctness of usingWeekday = -1
for daily backups.
Assigning-1
totime.Weekday
for daily backups could be confusing. Although it appears intentional, it relies on extra logic in other parts of the code (e.g.,calculateNextRun
). Consider introducing a dedicatedIsDaily
field or using a sentinel enum for clarity.
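A sketch of the dedicated-field idea (type and constructor names are illustrative, not taken from the PR):

// BackupSchedule with an explicit flag instead of the Weekday = -1 sentinel.
type BackupSchedule struct {
	Hour    int
	Minute  int
	Weekday time.Weekday // only meaningful when IsDaily is false
	IsDaily bool
}

func NewDailySchedule(hour, minute int) BackupSchedule {
	return BackupSchedule{Hour: hour, Minute: minute, IsDaily: true}
}

func NewWeeklySchedule(weekday time.Weekday, hour, minute int) BackupSchedule {
	return BackupSchedule{Hour: hour, Minute: minute, Weekday: weekday}
}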
239-239
: Avoid copying large stats structs in range loops.
Usingfor targetName, targetStats := range stats { ... }
creates a copy oftargetStats
on each iteration, which can be costly if the struct is large. Consider iterating with indexes or pointers to avoid unnecessary copying.-for targetName, targetStats := range stats { +for targetName := range stats { + targetStats := stats[targetName]🧰 Tools
🪛 GitHub Check: golangci / lint
[failure] 239-239:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)🪛 GitHub Check: lint
[failure] 239-239:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)internal/backup/sources/sqlite.go (3)
171-185
: Integrity checks are good; consider capturing more details.
You correctly verify againstPRAGMA integrity_check
. For advanced auditing, you might expose partial results if the string isn’t “ok” and if the user needs more details on corruption.
278-287
: Check concurrency exposures when usingwithDatabase
.
withDatabase
ensures a singleopenDatabase
call is closed promptly. While this is good, any background writes by the application while a read-only connection is open might cause backups to read partial data.Do you want to add a locked snapshot mechanism or re-check for changes after finalizing the read?
410-489
: Secure creation and cleanup of temporary backup files.
You handle temporary file creation and removal properly withCreateTemp
anddefer os.Remove(tempPath)
. On shared systems, consider adding stricter file permissions (0600) to hinder unauthorized reading of the backup.- tempFile, err := os.CreateTemp("", "birdnet-go-backup-*.db") + tempFile, err := os.OpenFile(filepath.Join(os.TempDir(), "birdnet-go-backup-*.db"), os.O_RDWR|os.O_CREATE|os.O_EXCL, 0600)cmd/backup/backup.go (1)
133-133
: Make backup timeout configurable.The backup timeout is hardcoded to 10 minutes, which might not be suitable for all scenarios, especially with large databases or slow network connections.
Consider making the timeout configurable in the backup settings:
+ // Use configured timeout or default to 10 minutes + timeout := 10 * time.Minute + if settings.Backup.Timeout != "" { + if t, err := time.ParseDuration(settings.Backup.Timeout); err == nil { + timeout = t + } + } - ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute) + ctx, cancel := context.WithTimeout(context.Background(), timeout)internal/conf/defaults.go (2)
189-191
: Review retention policy defaults.The current retention policy might lead to excessive disk usage:
- 30-day retention with 30 maximum backups could mean daily backups are kept
- 7 minimum backups might be too high for some use cases
Consider adjusting the defaults to be more conservative:
-viper.SetDefault("backup.retention.maxage", "30d") -viper.SetDefault("backup.retention.maxbackups", 30) -viper.SetDefault("backup.retention.minbackups", 7) +viper.SetDefault("backup.retention.maxage", "14d") +viper.SetDefault("backup.retention.maxbackups", 14) +viper.SetDefault("backup.retention.minbackups", 3)
207-209
: Add secure defaults for FTP configuration.The FTP configuration should encourage secure practices:
- Default to a secure port
- Set a reasonable default timeout
Apply this diff to improve security defaults:
- "port": 21, + "port": 990, // Default to FTPS port "username": "", "password": "", - "timeout": "30s", + "timeout": "5m", // More reasonable timeout for large files + "secure": true, // Default to using FTPSinternal/backup/targets/local.go (2)
21-29
: Consider making maxBackupSize configurable.The hardcoded 10GB limit might be too restrictive for some use cases. Consider making this configurable through the target configuration.
const ( - maxBackupSize = 10 * 1024 * 1024 * 1024 // 10GB dirPermissions = 0o700 // rwx------ (owner only) filePermissions = 0o600 // rw------- (owner only) maxPathLength = 255 // Maximum path length copyBufferSize = 32 * 1024 // 32KB buffer for file copies maxRetries = 3 // Maximum number of retries for transient errors baseBackoffDelay = 100 * time.Millisecond // Base delay for exponential backoff ) +// LocalTargetConfig holds configuration for the local filesystem target type LocalTargetConfig struct { Path string Debug bool + MaxBackupSize int64 // Maximum size of a single backup file in bytes }
253-353
: Add progress tracking to Store method.The Store method could benefit from more granular progress tracking, especially for large files.
func (t *LocalTarget) Store(ctx context.Context, sourcePath string, metadata *backup.Metadata) error { + var bytesWritten int64 + progressCallback := func(n int64) { + bytesWritten += n + if t.debug { + progress := float64(bytesWritten) / float64(metadata.Size) * 100 + t.logger.Printf("Upload progress: %.2f%% (%d/%d bytes)", progress, bytesWritten, metadata.Size) + } + } // Create a buffered copy with context cancellation copyDone := make(chan error, 1) go func() { - copyDone <- copyFile(tempFile, srcFile) + copyDone <- copyFileWithProgress(tempFile, srcFile, progressCallback) }()internal/conf/config.go (2)
354-358
: Consider using type-safe settings.The
Settings
field usingmap[string]interface{}
is flexible but lacks type safety. Consider:
- Using provider-specific structs for settings.
- Adding validation for required settings.
Example:
type BackupProviderSettings interface { Validate() error } type FTPProviderSettings struct { Host string `yaml:"host"` Port int `yaml:"port"` Username string `yaml:"username"` Password string `yaml:"password"` } type BackupProvider struct { Type string `yaml:"type"` Enabled bool `yaml:"enabled"` Settings BackupProviderSettings `yaml:"settings"` }
368-372
: Consider using type-safe settings.Similar to
BackupProvider
, theSettings
field usingmap[string]interface{}
lacks type safety. Consider:
- Using target-specific structs for settings.
- Adding validation for required settings.
Example:
type BackupTargetSettings interface { Validate() error } type FTPTargetSettings struct { Host string `yaml:"host"` Port int `yaml:"port"` Username string `yaml:"username"` Password string `yaml:"password"` Path string `yaml:"path"` } type BackupTarget struct { Type string `yaml:"type"` Enabled bool `yaml:"enabled"` Settings BackupTargetSettings `yaml:"settings"` }internal/backup/backup.go (2)
954-954
: Optimize range loops for performance.The range loops are copying large structs in each iteration. Consider using pointers or indexing to avoid copying:
-for _, backup := range backups { +for i := range backups { + backup := &backups[i]This applies to the range loops at:
- Line 954 (BackupInfo: 176 bytes)
- Line 986 (BackupStats: 128 bytes)
- Line 1079 (BackupInfo: 176 bytes)
Also applies to: 986-986, 1079-1079
🧰 Tools
🪛 GitHub Check: golangci / lint
[failure] 954-954:
rangeValCopy: each iteration copies 176 bytes (consider pointers or indexing) (gocritic)🪛 GitHub Check: lint
[failure] 954-954:
rangeValCopy: each iteration copies 176 bytes (consider pointers or indexing) (gocritic)
665-688
: Consider using exact duration calculations.The function uses approximate values for months (30 days) and years (365 days), which might lead to drift over time. Consider:
- Using
time.AddDate
for exact month/year calculations.- Adding a comment about the approximation.
Example:
func (m *Manager) parseRetentionAge(age string) (time.Duration, error) { if age == "" { return 0, nil } var num int var unit string if _, err := fmt.Sscanf(age, "%d%s", &num, &unit); err != nil { return 0, NewError(ErrValidation, fmt.Sprintf("invalid retention age format: %s", age), err) } now := time.Now() var future time.Time switch unit { case "d": future = now.AddDate(0, 0, num) case "m": future = now.AddDate(0, num, 0) case "y": future = now.AddDate(num, 0, 0) default: return 0, NewError(ErrValidation, fmt.Sprintf("invalid retention age unit: %s", unit), nil) } return future.Sub(now), nil }views/settings/backupSettings.html (1)
125-144
: Backup Targets Initialization and ManagementThe backup targets section is well-structured using Alpine.js. The methods
addTarget()
,removeTarget(index)
, andinitializeTarget(target)
provide a clear mechanism to add, configure, and remove backup targets.
A minor consideration: in theinitializeTarget()
usage within the x-init block (lines 172–180), it may be more robust to reference the method explicitly asthis.initializeTarget(target)
to avoid potential scope ambiguities.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (23)
cmd/backup/backup.go
(1 hunks)cmd/restore/restore.go
(1 hunks)cmd/root.go
(3 hunks)internal/backup/backup.go
(1 hunks)internal/backup/encryption.go
(1 hunks)internal/backup/scheduler.go
(1 hunks)internal/backup/sources/sqlite.go
(1 hunks)internal/backup/state.go
(1 hunks)internal/backup/targets/ftp.go
(1 hunks)internal/backup/targets/gdrive.go
(1 hunks)internal/backup/targets/local.go
(1 hunks)internal/backup/targets/rsync.go
(1 hunks)internal/backup/targets/sftp.go
(1 hunks)internal/conf/config.go
(3 hunks)internal/conf/defaults.go
(2 hunks)internal/conf/validate.go
(2 hunks)internal/httpcontroller/routes.go
(1 hunks)views/elements/sidebar.html
(1 hunks)views/settings/backupSettings.html
(1 hunks)views/settings/templates/checkbox.html
(1 hunks)views/settings/templates/hostField.html
(2 hunks)views/settings/templates/passwordField.html
(4 hunks)views/settings/templates/textField.html
(2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- internal/backup/targets/gdrive.go
🧰 Additional context used
🪛 GitHub Check: golangci / lint
internal/backup/state.go
[failure] 258-258:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
cmd/restore/restore.go
[failure] 47-47:
paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)
[failure] 112-112:
octalLiteral: use new octal literal style, 0o755 (gocritic)
[failure] 117-117:
octalLiteral: use new octal literal style, 0o600 (gocritic)
[failure] 133-133:
octalLiteral: use new octal literal style, 0o600 (gocritic)
internal/backup/scheduler.go
[failure] 239-239:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
[failure] 420-420:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
internal/backup/backup.go
[failure] 954-954:
rangeValCopy: each iteration copies 176 bytes (consider pointers or indexing) (gocritic)
[failure] 986-986:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
[failure] 1079-1079:
rangeValCopy: each iteration copies 176 bytes (consider pointers or indexing) (gocritic)
🪛 GitHub Check: lint
internal/backup/state.go
[failure] 258-258:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
cmd/restore/restore.go
[failure] 47-47:
paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)
[failure] 112-112:
octalLiteral: use new octal literal style, 0o755 (gocritic)
[failure] 117-117:
octalLiteral: use new octal literal style, 0o600 (gocritic)
[failure] 133-133:
octalLiteral: use new octal literal style, 0o600 (gocritic)
internal/backup/scheduler.go
[failure] 239-239:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
[failure] 420-420:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
internal/backup/backup.go
[failure] 954-954:
rangeValCopy: each iteration copies 176 bytes (consider pointers or indexing) (gocritic)
[failure] 986-986:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
[failure] 1079-1079:
rangeValCopy: each iteration copies 176 bytes (consider pointers or indexing) (gocritic)
🪛 GitHub Actions: golangci-lint
cmd/restore/restore.go
[warning] 47-47: paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)
🪛 GitHub Actions: Build docker image
cmd/restore/restore.go
[warning] 47-47: paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)
🪛 GitHub Actions: golangci-test
cmd/restore/restore.go
[warning] 47-47: paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Analyze (javascript-typescript)
- GitHub Check: Analyze (go)
🔇 Additional comments (34)
cmd/restore/restore.go (3)
16-46
: Clean and well-structured CLI subcommands
This command setup is straightforward and aligns well with Cobra best practices, providing a clear subcommand (decrypt
) for the restore workflow.
80-86
: Verify forcibly enabling encryption
Here,settings.Backup.Encryption = true
is set unconditionally. This could unexpectedly alter global settings ifrunDecrypt
is used in other contexts. Consider enabling encryption only locally or temporarily within the manager instance to avoid side effects.
47-79
: Overall decryption logic
The logic for file checks, metadata validation, and output path defaults is coherent and robust, addressing key error scenarios and ensuring a valid decrypted output.Also applies to: 87-111, 118-132, 134-140
🧰 Tools
🪛 GitHub Check: golangci / lint
[failure] 47-47:
paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)🪛 GitHub Check: lint
[failure] 47-47:
paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)🪛 GitHub Actions: golangci-lint
[warning] 47-47: paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)
🪛 GitHub Actions: Build docker image
[warning] 47-47: paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)
🪛 GitHub Actions: golangci-test
[warning] 47-47: paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)
internal/backup/encryption.go (1)
86-107
: Effective AES-256-GCM usage
The encryption and decryption flows, including nonce generation and usage of the GCM mode, are correctly implemented for secure data handling.internal/backup/state.go (1)
15-21
: Comprehensive state tracking
Maintaining schedules, targets, missed runs, and stats in a single state struct is logical and makes the system’s backup status fully transparent.internal/backup/scheduler.go (4)
26-33
: Consider concurrency safety for shared state.
s.schedules
ands.isRunning
are protected by a mutex, which is good. Ensure external code doesn't mutate schedules concurrently via references returned from any public getters. If a read-only slice is needed externally, consider returning a copy ofs.schedules
or providing a read-only API.
55-86
: Validate the time logic inAddSchedule
.
The logic to setweekday = -1
for daily backups is correct. However, for weekly backups, the code leaps straight tocalculateNextRun
without verifying that the provided weekday is valid (e.g., ensuring it's not-1
). Consider adding an explicit validation.
420-420
: Same structure copying concern when iterating over stats.
This loop has the same range copy issue. Refactor it similarly to minimize allocations.🧰 Tools
🪛 GitHub Check: golangci / lint
[failure] 420-420:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)🪛 GitHub Check: lint
[failure] 420-420:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
190-249
: Ensure robust handling of concurrent backup triggers.
runBackup
is invoked in a goroutine and interacts with shared state. This approach is valid, but confirm thats.manager.RunBackup(ctx)
can handle multiple concurrent calls or that the system logic prevents overlapping backups if not desired.🧰 Tools
🪛 GitHub Check: golangci / lint
[failure] 239-239:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)🪛 GitHub Check: lint
[failure] 239-239:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)internal/backup/sources/sqlite.go (2)
102-115
: Validate concurrency for file access.
Even thoughverifySourceDatabase
checks that the database file is accessible, ensure that the DB won't be locked or partially overwritten during backup. SQLite’s WAL mode helps, but confirm upstream code properly serializes writes if a consistent snapshot is required.
492-529
: Great approach for streaming backups.
The on-demand, pipe-based streaming solution is flexible for piping backups to remote destinations or further processing. The staged verification steps (config validation and integrity check) reduce the chance of generating invalid backups.cmd/root.go (1)
11-19
: Subcommands integration looks good.
Registering thebackup
andrestore
commands follows the standard Cobra pattern. Ensure that each subcommand prints helpful usage if run without arguments.Also applies to: 46-47, 58-59
internal/httpcontroller/routes.go (1)
76-76
: LGTM!The new backup settings route follows the established pattern for settings routes and correctly requires authentication.
internal/conf/defaults.go (1)
187-187
: Consider enabling encryption by default for enhanced security.Backups often contain sensitive data. While encryption is configurable, it should be enabled by default to ensure secure backups, especially when using remote storage.
internal/conf/config.go (4)
319-319
: LGTM!The
TempDir
field is properly documented and serves a clear purpose for backup operations.
361-365
: Add validation for retention settings.The
MaxAge
field needs validation to ensure it contains a valid duration string format (e.g., "30d", "6m", "1y").
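A hedged sketch of such a check, reusing the <number>[d|m|y] convention the backup settings already use elsewhere (the helper name is illustrative):

var retentionAgeRe = regexp.MustCompile(`^\d+[dmy]$`)

// validateRetentionAge accepts values like "30d", "6m" or "1y"; empty disables the age limit.
func validateRetentionAge(maxAge string) error {
	if maxAge == "" {
		return nil
	}
	if !retentionAgeRe.MatchString(maxAge) {
		return fmt.Errorf("invalid retention maxage %q: expected <number>[d|m|y]", maxAge)
	}
	return nil
}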
375-381
: Add validation for schedule settings.The schedule fields need validation:
- Hour (0-23)
- Minute (0-59)
- Weekday format
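A minimal sketch of those range checks (the function name is illustrative, and the weekday is assumed to be stored as a string):

func validateBackupSchedule(hour, minute int, weekday string) error {
	if hour < 0 || hour > 23 {
		return fmt.Errorf("schedule hour %d out of range 0-23", hour)
	}
	if minute < 0 || minute > 59 {
		return fmt.Errorf("schedule minute %d out of range 0-59", minute)
	}
	switch strings.ToLower(weekday) {
	case "", "sunday", "monday", "tuesday", "wednesday", "thursday", "friday", "saturday":
		return nil
	default:
		return fmt.Errorf("unknown weekday %q", weekday)
	}
}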
384-400
: Enhance encryption configuration.The current boolean flag for encryption needs to be enhanced with secure key management.
views/settings/templates/checkbox.html (1)
4-14
: LGTM! Great accessibility improvements.The changes enhance accessibility and dynamic behavior:
- Added proper ARIA attributes
- Dynamic ID generation
- Improved tooltip accessibility
Also applies to: 17-17, 20-21, 25-29
views/settings/templates/textField.html (1)
4-11
: LGTM! Great accessibility and validation improvements.The changes enhance accessibility and validation:
- Added proper ARIA attributes
- Improved validation logic
- Enhanced tooltip accessibility
- Added maxlength validation
Also applies to: 15-18, 23-42, 45-50, 59-63
views/settings/templates/hostField.html (4)
5-16
: Dynamic ID and Method Updates in x-data ObjectThe changes introducing a dynamic
fieldId
(concatenating.id
with an optional_index
) and updating thevalidInput()
andvalidHost()
methods are well done. This approach makes the component more flexible and improves accessibility.
Ensure that the data provided for.id
,.index
, and.model
is always defined and of the expected type to avoid runtime errors.
20-23
: Label Association with Dynamic IDUpdating the
<label>
element’sfor
attribute to bind with the newly definedfieldId
ensures correct association between the label and its input element. This is a solid accessibility improvement.
28-29
: Input Field Attribute BindingsThe input field now uses
:id="fieldId"
and:x-ref="fieldId"
, and thearia-describedby
attribute also referencesfieldId
dynamically. These changes are consistent and provide better linkage between elements and their validation messages.Also applies to: 35-36
45-46
: Enhanced aria-describedby ConstructionThe dynamic concatenation for the
aria-describedby
attribute (combining error, warning, and tooltip element IDs) is well implemented and aids assistive technologies.views/settings/templates/passwordField.html (2)
4-8
: Dynamic Field ID Generation in Password FieldThe introduction of a dynamic
fieldId
based on the conditional presence of.dynamicId
is a good step toward flexible UI components. One concern is that when.dynamicId
is true, the expression uses+ index
—please verify thatindex
is indeed defined in the scope of this template (e.g. when used inside an iteration).
14-18
: Label and Tooltip BindingsThe updates to the
<label>
’s:for
attribute, tooltip show handling withshowTooltip = fieldId
, and the consistent use offieldId
in associated elements all enhance the dynamic behavior and accessibility of the component.views/elements/sidebar.html (1)
70-71
: Addition of Backup Menu ItemThe new list item for the "Backup" settings menu is well integrated into the sidebar. The use of
:class="{ 'active': isExactRouteActive('/settings/backup') }"
is consistent with existing menu items, ensuring the menu state is accurately represented.views/settings/backupSettings.html (7)
1-8
: New Backup Settings Template & Hidden InputThe new
{{define "backupSettings"}}
template and the inclusion of a hidden input fortemplateName
set a solid foundation for backup configuration. The data binding for thebackup
object, including properties for enabling backups, debug mode, encryption, and sanitizing configuration, is clearly structured.
10-28
: x-data Initialization for Backup SettingsThe
x-data
object initialization (lines 10–25) and the deep watcher on thebackup
object (line 27) are implemented well to track configuration changes. The properties and nested retention settings (max age, max backups, min backups) are mapped appropriately from the server-side data.
30-43
: Collapse Component for Backup Settings UIThe collapse mechanism using an input checkbox (line 31) and the clear labeling for the "Backup Settings" section improve usability. The conditional display of a "changed" badge when settings are modified is a neat interactive touch.
45-81
: General Backup Settings ConfigurationThe series of templated checkboxes (for Enable Backup, Debug Mode, Encryption, and Sanitize Config) provides a consistent UI for general backup configurations. Passing parameters (like tooltip text and labels) via the
checkbox
template keeps the code DRY and maintainable.
83-121
: Retention Settings with TextField TemplatesThe retention settings section leverages the
textField
template for user input on maximum age and backup count limits. The use of regex patterns (e.g.,"^\\d+[dmy]$"
for durations) and clear validation messages enhances data integrity and user guidance.
182-189
: Backup Targets Header and Add ButtonThe header for the "Backup Targets" section and the “Add Target” button provide an intuitive interface for managing targets. The button’s placement and styling are consistent with the overall UI design.
191-356
: Dynamic Target Configuration via x-for TemplateThe dynamic template for iterating over backup targets (using
x-for
) and the conditional display of target-specific settings (for 'local', 'ftp', and 'sftp' types) are implemented comprehensively. Every target field is configured using templated components (e.g.,textField
,passwordField
,checkbox
withdynamicId
set to true) to ensure consistency and uniqueness across instances. The use of a remove button for each target adds necessary interactivity.
// GetMissedRuns returns a list of missed backup times
func (s *Scheduler) GetMissedRuns() []time.Time {
	missed := s.state.GetMissedBackups()
	times := make([]time.Time, len(missed))
	for i, m := range missed {
		times[i] = m.ScheduledTime
	}
	return times
}

// ClearMissedRuns clears the list of missed runs
func (s *Scheduler) ClearMissedRuns() {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.schedules = nil
ClearMissedRuns usage might be misleading.
Currently, ClearMissedRuns sets s.schedules = nil, which clears all schedules, not just missed runs. This naming could cause confusion or unintended behavior if the intention was to only remove missed run entries.
-func (s *Scheduler) ClearMissedRuns() {
- s.mu.Lock()
- defer s.mu.Unlock()
- s.schedules = nil
-}
+func (s *Scheduler) ClearAllSchedules() {
+ s.mu.Lock()
+ defer s.mu.Unlock()
+ s.schedules = nil
+}
switch targetConfig.Type {
case "local":
	target, err = targets.NewLocalTarget(targets.LocalTargetConfig{
		Path: targetConfig.Settings["path"].(string),
Add type assertion validation for target settings.
The type assertion for path is not checked for safety, which could lead to a panic if the configuration is malformed.
Apply this diff to add type assertion validation:
- Path: targetConfig.Settings["path"].(string),
+ Path: func() string {
+ if path, ok := targetConfig.Settings["path"].(string); ok {
+ return path
+ }
+ log.Printf("Warning: Invalid path setting for local target, using default")
+ return "backups/"
+ }(),
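If the same guard is needed for several settings, a small generic helper avoids repeating the closure; a sketch, assuming the module targets Go 1.18 or newer (helper name and defaulting behaviour are assumptions):

// settingOr returns the value stored under key if it has the expected type, else def.
func settingOr[T any](settings map[string]interface{}, key string, def T) T {
	if raw, ok := settings[key]; ok {
		if v, ok := raw.(T); ok {
			return v
		}
		log.Printf("Warning: setting %q has unexpected type %T, using default", key, raw)
	}
	return def
}

// Usage: Path: settingOr(targetConfig.Settings, "path", "backups/"),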
if err != nil {
	return fmt.Errorf("failed to generate encryption key: %w", err)
}
log.Printf("Successfully generated new encryption key: %s", key)
Avoid logging encryption keys in plaintext.
The encryption key is currently being logged in plaintext, which could expose sensitive information in log files.
Apply this diff to mask the key in logs:
-log.Printf("Successfully generated new encryption key: %s", key)
+log.Printf("Successfully generated new encryption key: %s", strings.Repeat("*", len(key)))
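If the log line still needs to identify which key was generated, logging a short fingerprint rather than a masked value keeps it useful without exposing the secret; a small sketch (assumes the key is available as a string and crypto/sha256 is imported):

sum := sha256.Sum256([]byte(key))
log.Printf("Successfully generated new encryption key (fingerprint %x)", sum[:4])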
/*if len(config.Schedules) == 0 {
	errs = append(errs, "at least one backup schedule must be configured when backup is enabled")
}*/
Uncomment and implement schedule validation.
The schedule validation is currently commented out, but it's crucial for ensuring proper backup timing.
Uncomment and implement the schedule validation:
- /*if len(config.Schedules) == 0 {
- errs = append(errs, "at least one backup schedule must be configured when backup is enabled")
- }*/
+ if len(config.Schedules) == 0 && config.Schedule == "" {
+ errs = append(errs, "either a cron schedule or at least one backup schedule must be configured when backup is enabled")
+ }
+
+ // Validate cron schedule if provided
+ if config.Schedule != "" {
+ if _, err := cron.ParseStandard(config.Schedule); err != nil {
+ errs = append(errs, fmt.Sprintf("invalid cron schedule: %v", err))
+ }
+ }
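Note that cron.ParseStandard assumes a cron dependency such as github.com/robfig/cron/v3. If the backup settings only carry hour/minute/weekday schedules, the per-entry fields can be validated directly without the extra module; a hedged sketch (field names assumed):

for i, s := range config.Schedules {
	if s.Hour < 0 || s.Hour > 23 || s.Minute < 0 || s.Minute > 59 {
		errs = append(errs, fmt.Sprintf("schedule %d: hour/minute out of range", i+1))
	}
}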
if _, ok := target.Settings["host"].(string); !ok { | ||
errs = append(errs, fmt.Sprintf("target %d: FTP target requires 'host' setting", i+1)) | ||
} |
🛠️ Refactor suggestion
Add comprehensive validation for FTP settings.
The current validation only checks for the host setting, but other critical settings should also be validated.
Add validation for all FTP settings:
if _, ok := target.Settings["host"].(string); !ok {
errs = append(errs, fmt.Sprintf("target %d: FTP target requires 'host' setting", i+1))
+} else {
+ // Validate port
+ if port, ok := target.Settings["port"].(int); ok {
+ if port < 1 || port > 65535 {
+ errs = append(errs, fmt.Sprintf("target %d: port must be between 1 and 65535", i+1))
+ }
+ }
+ // Validate timeout
+ if timeout, ok := target.Settings["timeout"].(string); ok {
+ if _, err := time.ParseDuration(timeout); err != nil {
+ errs = append(errs, fmt.Sprintf("target %d: invalid timeout format: %v", i+1, err))
+ }
+ }
}
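One caveat with the sketch above: values decoded from YAML into map[string]interface{} can arrive as int, int64 or float64 depending on the decoder, so a strict .(int) assertion may reject valid configs. A tolerant coercion helper (name and behaviour are assumptions) sidesteps that:

// asInt normalizes the numeric types a YAML decoder may produce.
func asInt(v interface{}) (int, bool) {
	switch n := v.(type) {
	case int:
		return n, true
	case int64:
		return int(n), true
	case float64:
		return int(n), true
	default:
		return 0, false
	}
}

// Usage: if port, ok := asInt(target.Settings["port"]); ok { ... }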
if path, ok := target.Settings["path"].(string); !ok || path == "" {
	errs = append(errs, fmt.Sprintf("target %d: local target requires 'path' setting", i+1))
}
Enhance path validation for local target.
The current validation only checks if the path is non-empty but doesn't validate its security implications.
Add comprehensive path validation:
if path, ok := target.Settings["path"].(string); !ok || path == "" {
errs = append(errs, fmt.Sprintf("target %d: local target requires 'path' setting", i+1))
+} else {
+ // Prevent directory traversal
+ cleanPath := filepath.Clean(path)
+ if strings.Contains(cleanPath, "..") {
+ errs = append(errs, fmt.Sprintf("target %d: path must not contain directory traversal", i+1))
+ }
+ // Ensure absolute paths are not used
+ if filepath.IsAbs(cleanPath) {
+ errs = append(errs, fmt.Sprintf("target %d: absolute paths are not allowed", i+1))
+ }
}
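If relative paths are enforced as above, it is also worth confirming that the resolved location stays inside the intended base directory; a sketch using filepath.Rel (the base-directory parameter is an assumption):

// isWithinBase reports whether target resolves to a location inside baseDir.
func isWithinBase(baseDir, target string) bool {
	rel, err := filepath.Rel(baseDir, filepath.Join(baseDir, target))
	if err != nil {
		return false
	}
	return rel != ".." && !strings.HasPrefix(rel, ".."+string(filepath.Separator))
}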
func atomicWriteFile(targetPath, tempPattern string, perm os.FileMode, write func(*os.File) error) error {
	// Create temporary file in the same directory as the target
	dir := filepath.Dir(targetPath)
	tempFile, err := os.CreateTemp(dir, tempPattern)
	if err != nil {
		return fmt.Errorf("failed to create temporary file: %w", err)
	}
	tempPath := tempFile.Name()

	// Ensure the temporary file is removed in case of failure
	success := false
	defer func() {
		if !success {
			tempFile.Close()
			os.Remove(tempPath)
		}
	}()

	// Set file permissions
	if err := tempFile.Chmod(perm); err != nil {
		return fmt.Errorf("failed to set file permissions: %w", err)
	}

	// Write data
	if err := write(tempFile); err != nil {
		return err
	}

	// Ensure all data is written to disk
	if err := tempFile.Sync(); err != nil {
		return fmt.Errorf("failed to sync file: %w", err)
	}

	// Close the file before renaming
	if err := tempFile.Close(); err != nil {
		return fmt.Errorf("failed to close temporary file: %w", err)
	}

	// Perform atomic rename
	if err := os.Rename(tempPath, targetPath); err != nil {
		return fmt.Errorf("failed to rename temporary file: %w", err)
	}

	success = true
	return nil
}
🛠️ Refactor suggestion
Enhance atomicWriteFile error handling.
The function should handle disk space errors more gracefully and provide better cleanup on failures.
func atomicWriteFile(targetPath, tempPattern string, perm os.FileMode, write func(*os.File) error) error {
+ // Check available disk space before starting
+ var requiredSpace uint64 // estimate with a 10% buffer, based on the existing file if present
+ if fi, statErr := os.Stat(targetPath); statErr == nil {
+     requiredSpace = uint64(float64(fi.Size()) * 1.1)
+ }
+ availableSpace, err := diskmanager.GetAvailableSpace(filepath.Dir(targetPath))
+ if err != nil {
+ return fmt.Errorf("failed to check available space: %w", err)
+ }
+ if availableSpace < requiredSpace {
+ return fmt.Errorf("insufficient disk space: need %d bytes, have %d bytes", requiredSpace, availableSpace)
+ }
+
// Create temporary file in the same directory as the target
dir := filepath.Dir(targetPath)
tempFile, err := os.CreateTemp(dir, tempPattern)
if err != nil {
return fmt.Errorf("failed to create temporary file: %w", err)
}
tempPath := tempFile.Name()
// Ensure the temporary file is removed in case of failure
success := false
defer func() {
if !success {
tempFile.Close()
- os.Remove(tempPath)
+ if err := os.Remove(tempPath); err != nil {
+ log.Printf("Failed to remove temporary file %s: %v", tempPath, err)
+ }
}
}()
Committable suggestion skipped: line range outside the PR's diff.
// getConnection gets a connection from the pool or creates a new one
func (t *FTPTarget) getConnection(ctx context.Context) (*ftp.ServerConn, error) {
	// Try to get a connection from the pool
	select {
	case conn := <-t.connPool:
		if t.isConnectionAlive(conn) {
			return conn, nil
		}
		// Connection is dead, close it and create a new one
		_ = conn.Quit()
	default:
		// Pool is empty, create a new connection
	}

	return t.connect(ctx)
}

// returnConnection returns a connection to the pool or closes it if the pool is full
🛠️ Refactor suggestion
Add connection pool size check.
The current implementation might create too many connections if the pool is empty.
func (t *FTPTarget) getConnection(ctx context.Context) (*ftp.ServerConn, error) {
+ t.mu.Lock()
+ currentConns := len(t.connPool)
+ t.mu.Unlock()
+
+ if currentConns >= t.config.MaxConns {
+ return nil, backup.NewError(backup.ErrIO, "ftp: connection limit reached", nil)
+ }
// Try to get a connection from the pool
select {
case conn := <-t.connPool:
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// getConnection gets a connection from the pool or creates a new one
func (t *FTPTarget) getConnection(ctx context.Context) (*ftp.ServerConn, error) {
	t.mu.Lock()
	currentConns := len(t.connPool)
	t.mu.Unlock()

	if currentConns >= t.config.MaxConns {
		return nil, backup.NewError(backup.ErrIO, "ftp: connection limit reached", nil)
	}

	// Try to get a connection from the pool
	select {
	case conn := <-t.connPool:
		if t.isConnectionAlive(conn) {
			return conn, nil
		}
		// Connection is dead, close it and create a new one
		_ = conn.Quit()
	default:
		// Pool is empty, create a new connection
	}

	return t.connect(ctx)
}

// returnConnection returns a connection to the pool or closes it if the pool is full
validInput() {
	return this.$refs.{{.id}}.validity.valid;
},
...inputField('{{.id}}', '{{.label}}', '{{.model}}', '{{.name}}', '{{.placeholder}}')
}">
	return this.$refs.input ? this.$refs.input.validity.valid : true;
}
}" x-cloak>
🛠️ Refactor suggestion
ValidInput Method and x-ref Consistency
Currently, the validInput() method relies on this.$refs.input while the :id and label attributes use fieldId. For consistency, especially when multiple password fields might be present, it is recommended to bind the reference dynamically using fieldId instead of a hard-coded "input".
For example, consider changing the method and x-ref as follows:
- validInput() {
- return this.$refs.input ? this.$refs.input.validity.valid : true;
- }
+ validInput() {
+ return this.$refs[this.fieldId] ? this.$refs[this.fieldId].validity.valid : true;
+ }
and update:
- x-ref="input"
+ :x-ref="fieldId"
This ensures the reference is unique per component instance.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
validInput() {
	return this.$refs[this.fieldId] ? this.$refs[this.fieldId].validity.valid : true;
}
}" :x-ref="fieldId" x-cloak>
Actionable comments posted: 2
♻️ Duplicate comments (5)
internal/backup/sources/sqlite.go (1)
409-489: ⚠️ Potential issue
Use SQLite's backup API for consistent backups.
The current implementation performs a raw file copy of the SQLite database. If the application is actively writing to the database during backup, this approach may result in a corrupted or inconsistent backup. Consider using SQLite's backup API (sqlite3_backup_init) or the VACUUM INTO command to ensure transactionally consistent backups; a sketch of the VACUUM INTO approach follows at the end of this duplicate-comments section.
internal/backup/targets/sftp.go (2)
732-752
: ⚠️ Potential issue
Fix security vulnerability in known_hosts handling.
Creating an empty known_hosts file if it doesn't exist defeats the purpose of host key verification and makes the connection vulnerable to MITM attacks.
Apply this diff to fix the issue:
 func knownHostsCallback(knownHostsFile string) (ssh.HostKeyCallback, error) {
 	// Check if the known_hosts file exists
 	if _, err := os.Stat(knownHostsFile); os.IsNotExist(err) {
-		// Create the .ssh directory if it doesn't exist
-		sshDir := filepath.Dir(knownHostsFile)
-		if err := os.MkdirAll(sshDir, 0o700); err != nil {
-			return nil, backup.NewError(backup.ErrIO, "failed to create .ssh directory", err)
-		}
-		// Create an empty known_hosts file
-		if err := os.WriteFile(knownHostsFile, []byte{}, 0o600); err != nil {
-			return nil, backup.NewError(backup.ErrIO, "failed to create known_hosts file", err)
-		}
+		return nil, backup.NewError(backup.ErrSecurity,
+			"known_hosts file does not exist. Please create it with valid host keys", err)
 	}
149-164
: 🛠️ Refactor suggestion
Add connection pool size check.
The current implementation might create too many connections if the pool is empty.
Apply this diff to add a connection pool size check:
 func (t *SFTPTarget) getConnection(ctx context.Context) (*sftp.Client, error) {
+	t.mu.Lock()
+	currentConns := len(t.connPool)
+	t.mu.Unlock()
+
+	if currentConns >= t.config.MaxConns {
+		return nil, backup.NewError(backup.ErrIO, "sftp: connection limit reached", nil)
+	}
+
 	// Try to get a connection from the pool
 	select {
 	case client := <-t.connPool:
internal/conf/config.go (2)
387-387
: 🛠️ Refactor suggestion
Enhance encryption configuration.
The current boolean flag for encryption needs to be enhanced with secure key management:
- Use environment variables or a key management system
- Support key rotation
- Add encryption algorithm configuration
Apply this diff to enhance encryption configuration:
-    Encryption bool `yaml:"encryption"` // true to enable backup encryption
+    Encryption struct {
+        Enabled   bool   `yaml:"enabled"`   // true to enable backup encryption
+        KeySource string `yaml:"keysource"` // "env", "file", or "vault"
+        KeyName   string `yaml:"keyname"`   // Name of the key or environment variable
+        Algorithm string `yaml:"algorithm"` // "AES256", "ChaCha20", etc.
+        KeyRotation struct {
+            Enabled   bool          `yaml:"enabled"`   // Enable automatic key rotation
+            Interval  time.Duration `yaml:"interval"`  // How often to rotate keys
+            RetainOld int           `yaml:"retainold"` // Number of old keys to retain
+        } `yaml:"keyrotation"`
+    } `yaml:"encryption"`
374-381
: 🛠️ Refactor suggestion
Add validation for backup schedule and retention settings.
The backup configuration needs validation for:
- Schedule format (hour, minute, weekday)
- Duration format in retention settings
- Reasonable limits for max/min backups
Add validation in the ValidateSettings function:
func validateBackupConfig(config *BackupConfig) error {
	if !config.Enabled {
		return nil
	}

	// Validate schedule
	for _, schedule := range config.Schedules {
		if schedule.Hour < 0 || schedule.Hour > 23 {
			return fmt.Errorf("invalid hour: %d", schedule.Hour)
		}
		if schedule.Minute < 0 || schedule.Minute > 59 {
			return fmt.Errorf("invalid minute: %d", schedule.Minute)
		}
		if schedule.IsWeekly && schedule.Weekday == "" {
			return fmt.Errorf("weekday is required for weekly schedules")
		}
	}

	// Validate retention
	if config.Retention.MaxAge != "" {
		if _, err := time.ParseDuration(config.Retention.MaxAge); err != nil {
			return fmt.Errorf("invalid retention max age: %w", err)
		}
	}
	if config.Retention.MaxBackups < config.Retention.MinBackups {
		return fmt.Errorf("max backups (%d) cannot be less than min backups (%d)",
			config.Retention.MaxBackups, config.Retention.MinBackups)
	}

	return nil
}
Also applies to: 361-365
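For the SQLite consistency comment above, here is a minimal sketch of the VACUUM INTO approach using database/sql; the driver import and the backupSQLite/dbPath/destPath names are illustrative assumptions, not the project's actual code:

package sources

import (
	"context"
	"database/sql"
	"fmt"

	_ "github.com/mattn/go-sqlite3" // assumed driver; the project may use a different SQLite driver
)

// backupSQLite writes a transactionally consistent snapshot of the database to destPath.
// Unlike a raw copy of the .db file, VACUUM INTO is safe to run while other connections are writing.
func backupSQLite(ctx context.Context, dbPath, destPath string) error {
	db, err := sql.Open("sqlite3", dbPath)
	if err != nil {
		return fmt.Errorf("open source database: %w", err)
	}
	defer db.Close()

	// Recent SQLite versions accept a bound parameter for the target filename.
	if _, err := db.ExecContext(ctx, "VACUUM INTO ?", destPath); err != nil {
		return fmt.Errorf("vacuum into %s: %w", destPath, err)
	}
	return nil
}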
🧹 Nitpick comments (8)
internal/diskmanager/disk_usage_windows.go (1)
38-61
: Consider refactoring to reduce code duplication.
The GetAvailableSpace function shares significant code with GetDiskUsage. Both functions:
- Load the same DLL
- Get the same procedure
- Use the same system call with identical parameters
- Have similar error handling
Consider extracting the common functionality into a helper function:
+// getDiskSpace retrieves disk space information for Windows +func getDiskSpace(baseDir string) (freeBytesAvailable int64, totalNumberOfBytes int64, totalNumberOfFreeBytes int64, err error) { + kernel32 := syscall.NewLazyDLL("kernel32.dll") + getDiskFreeSpaceEx := kernel32.NewProc("GetDiskFreeSpaceExW") + + utf16Path, err := syscall.UTF16PtrFromString(baseDir) + if err != nil { + return 0, 0, 0, err + } + + _, _, err = getDiskFreeSpaceEx.Call( + uintptr(unsafe.Pointer(utf16Path)), + uintptr(unsafe.Pointer(&freeBytesAvailable)), + uintptr(unsafe.Pointer(&totalNumberOfBytes)), + uintptr(unsafe.Pointer(&totalNumberOfFreeBytes)), + ) + if err != syscall.Errno(0) { + return 0, 0, 0, err + } + + return freeBytesAvailable, totalNumberOfBytes, totalNumberOfFreeBytes, nil +} + func GetDiskUsage(baseDir string) (float64, error) { - kernel32 := syscall.NewLazyDLL("kernel32.dll") - getDiskFreeSpaceEx := kernel32.NewProc("GetDiskFreeSpaceExW") - - var freeBytesAvailable, totalNumberOfBytes, totalNumberOfFreeBytes int64 - - utf16Path, err := syscall.UTF16PtrFromString(baseDir) - if err != nil { - return 0, err - } - - _, _, err = getDiskFreeSpaceEx.Call( - uintptr(unsafe.Pointer(utf16Path)), - uintptr(unsafe.Pointer(&freeBytesAvailable)), - uintptr(unsafe.Pointer(&totalNumberOfBytes)), - uintptr(unsafe.Pointer(&totalNumberOfFreeBytes)), - ) - if err != syscall.Errno(0) { - return 0, err - } + _, totalNumberOfBytes, totalNumberOfFreeBytes, err := getDiskSpace(baseDir) + if err != nil { + return 0, err + } used := totalNumberOfBytes - totalNumberOfFreeBytes return (float64(used) / float64(totalNumberOfBytes)) * 100, nil } func GetAvailableSpace(baseDir string) (uint64, error) { - kernel32 := syscall.NewLazyDLL("kernel32.dll") - getDiskFreeSpaceEx := kernel32.NewProc("GetDiskFreeSpaceExW") - - var freeBytesAvailable, totalNumberOfBytes, totalNumberOfFreeBytes int64 - - utf16Path, err := syscall.UTF16PtrFromString(baseDir) - if err != nil { - return 0, err - } - - _, _, err = getDiskFreeSpaceEx.Call( - uintptr(unsafe.Pointer(utf16Path)), - uintptr(unsafe.Pointer(&freeBytesAvailable)), - uintptr(unsafe.Pointer(&totalNumberOfBytes)), - uintptr(unsafe.Pointer(&totalNumberOfFreeBytes)), - ) - if err != syscall.Errno(0) { - return 0, err - } + freeBytesAvailable, _, _, err := getDiskSpace(baseDir) + if err != nil { + return 0, err + } return uint64(freeBytesAvailable), nil }internal/backup/state.go (1)
251-262
: Consider optimizing range to reduce copying.
Lines 257-260 do a range-value copy for v, which can be large. Although not necessarily critical, consider iterating over pointers if performance is a concern.
-for k, v := range sm.state.Stats {
-  stats[k] = v
+for k := range sm.state.Stats {
+  v := sm.state.Stats[k]
+  stats[k] = v
}
🧰 Tools
🪛 GitHub Check: golangci / lint
[failure] 258-258:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)🪛 GitHub Check: lint
[failure] 258-258:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)internal/backup/scheduler.go (1)
238-249
: Consider pointer iteration to avoid copying large structs.
When logging target statistics in lines 239-249, passing large struct copies could be avoided by referencing a pointer.-for targetName, targetStats := range stats { +for targetName := range stats { + targetStats := stats[targetName] s.logger.Printf("📊 Backup stats for target %s:", targetName) ... }🧰 Tools
🪛 GitHub Check: golangci / lint
[failure] 239-239:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)🪛 GitHub Check: lint
[failure] 239-239:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)cmd/restore/restore.go (2)
47-47
: Combine function parameters for readability.
Line 47 suggests we can combine parameters into a single signature to satisfy lint rules and improve clarity.-func runDecrypt(settings *conf.Settings, backupPath string, outputPath string) error { +func runDecrypt(settings *conf.Settings, backupPath, outputPath string) error {🧰 Tools
🪛 GitHub Check: golangci / lint
[failure] 47-47:
paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)🪛 GitHub Check: lint
[failure] 47-47:
paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)🪛 GitHub Actions: golangci-lint
[warning] 47-47: paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)
🪛 GitHub Actions: Build docker image
[warning] 47-47: paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)
🪛 GitHub Actions: golangci-test
[warning] 47-47: paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)
112-133
: Adopt new octal literal style for file permissions.
Switching to0o755
and0o600
is recommended for modern Go style.- if err := os.MkdirAll(filepath.Dir(outputPath), 0755); err != nil { + if err := os.MkdirAll(filepath.Dir(outputPath), 0o755); err != nil { return fmt.Errorf("failed to create output directory: %w", err) } - if err := os.WriteFile(outputPath, decryptedData, 0600); err != nil { + if err := os.WriteFile(outputPath, decryptedData, 0o600); err != nil { return fmt.Errorf("failed to write decrypted file: %w", err) } ... - if err := os.WriteFile(newMetaPath, updatedMetaData, 0600); err != nil { + if err := os.WriteFile(newMetaPath, updatedMetaData, 0o600); err != nil { return fmt.Errorf("failed to write updated metadata file: %w", err) }🧰 Tools
🪛 GitHub Check: golangci / lint
[failure] 112-112:
octalLiteral: use new octal literal style, 0o755 (gocritic)
[failure] 117-117:
octalLiteral: use new octal literal style, 0o600 (gocritic)
[failure] 133-133:
octalLiteral: use new octal literal style, 0o600 (gocritic)🪛 GitHub Check: lint
[failure] 112-112:
octalLiteral: use new octal literal style, 0o755 (gocritic)
[failure] 117-117:
octalLiteral: use new octal literal style, 0o600 (gocritic)
[failure] 133-133:
octalLiteral: use new octal literal style, 0o600 (gocritic)cmd/backup/backup.go (1)
133-133
: Make backup timeout configurable.The hardcoded 10-minute timeout might not be suitable for all scenarios, such as large databases or slow network connections.
Consider making the timeout configurable:
- ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute) + timeout := settings.Backup.Timeout + if timeout == 0 { + timeout = 10 * time.Minute // Default to 10 minutes + } + ctx, cancel := context.WithTimeout(context.Background(), timeout)internal/backup/targets/gdrive.go (1)
276-284
: Enhance token file security.The token file is created with basic permissions (0o600), but additional security measures could be implemented:
- Ensure the token directory has secure permissions
- Consider encrypting the token before saving
Apply this diff to enhance token security:
func (t *GDriveTarget) saveToken(token *oauth2.Token) error { + // Create token directory with secure permissions + tokenDir := filepath.Dir(t.config.TokenFile) + if err := os.MkdirAll(tokenDir, 0o700); err != nil { + return backup.NewError(backup.ErrIO, "gdrive: unable to create token directory", err) + } + f, err := os.OpenFile(t.config.TokenFile, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0o600) if err != nil { return backup.NewError(backup.ErrIO, "gdrive: unable to cache oauth token", err) } defer f.Close() + + // Encrypt token before saving + encrypted, err := t.encryptToken(token) + if err != nil { + return backup.NewError(backup.ErrIO, "gdrive: unable to encrypt token", err) + } - return json.NewEncoder(f).Encode(token) + return json.NewEncoder(f).Encode(encrypted) }views/settings/backupSettings.html (1)
7-10
: Improve accessibility attributes.The HTML includes good ARIA labels but could be enhanced with additional accessibility attributes.
Add
aria-expanded
to collapsible sections andaria-label
to form controls:<div class="collapse collapse-open bg-base-100 shadow-xs col-span-3" role="region" - aria-label="Backup Settings Section" + aria-label="Backup Settings Section" + aria-expanded="true" >- <input type="checkbox" id="backupSettingsOpen" x-on:change="backupSettingsOpen = !backupSettingsOpen" /> + <input type="checkbox" id="backupSettingsOpen" x-on:change="backupSettingsOpen = !backupSettingsOpen" + aria-label="Toggle backup settings section" />- <label class="label justify-start"> + <label class="label justify-start" for="targetType"> <span class="label-text">Target Type</span> </label> - <select x-model="target.type" + <select id="targetType" x-model="target.type" :name="'backup.targets[' + index + '].type'" + aria-label="Select backup target type"Also applies to: 31-31, 195-199
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
go.sum
is excluded by!**/*.sum
📒 Files selected for processing (30)
cmd/backup/backup.go
(1 hunks)cmd/restore/restore.go
(1 hunks)cmd/root.go
(3 hunks)go.mod
(2 hunks)internal/backup/backup.go
(1 hunks)internal/backup/encryption.go
(1 hunks)internal/backup/errors.go
(1 hunks)internal/backup/logger.go
(1 hunks)internal/backup/metadata_unix.go
(1 hunks)internal/backup/metadata_windows.go
(1 hunks)internal/backup/scheduler.go
(1 hunks)internal/backup/sources/sqlite.go
(1 hunks)internal/backup/state.go
(1 hunks)internal/backup/targets/ftp.go
(1 hunks)internal/backup/targets/gdrive.go
(1 hunks)internal/backup/targets/local.go
(1 hunks)internal/backup/targets/rsync.go
(1 hunks)internal/backup/targets/sftp.go
(1 hunks)internal/conf/config.go
(3 hunks)internal/conf/defaults.go
(2 hunks)internal/conf/validate.go
(2 hunks)internal/diskmanager/disk_usage_unix.go
(1 hunks)internal/diskmanager/disk_usage_windows.go
(1 hunks)internal/httpcontroller/routes.go
(1 hunks)views/elements/sidebar.html
(1 hunks)views/settings/backupSettings.html
(1 hunks)views/settings/templates/checkbox.html
(1 hunks)views/settings/templates/hostField.html
(2 hunks)views/settings/templates/passwordField.html
(4 hunks)views/settings/templates/textField.html
(2 hunks)
🚧 Files skipped from review as they are similar to previous changes (13)
- views/settings/templates/checkbox.html
- internal/backup/metadata_unix.go
- internal/backup/metadata_windows.go
- views/elements/sidebar.html
- internal/backup/logger.go
- views/settings/templates/hostField.html
- cmd/root.go
- views/settings/templates/textField.html
- internal/httpcontroller/routes.go
- views/settings/templates/passwordField.html
- internal/backup/targets/local.go
- internal/backup/errors.go
- internal/backup/targets/ftp.go
🧰 Additional context used
🪛 GitHub Check: golangci / lint
internal/backup/backup.go
[failure] 954-954:
rangeValCopy: each iteration copies 176 bytes (consider pointers or indexing) (gocritic)
[failure] 986-986:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
[failure] 1079-1079:
rangeValCopy: each iteration copies 176 bytes (consider pointers or indexing) (gocritic)
internal/backup/scheduler.go
[failure] 239-239:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
[failure] 420-420:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
cmd/restore/restore.go
[failure] 47-47:
paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)
[failure] 112-112:
octalLiteral: use new octal literal style, 0o755 (gocritic)
[failure] 117-117:
octalLiteral: use new octal literal style, 0o600 (gocritic)
[failure] 133-133:
octalLiteral: use new octal literal style, 0o600 (gocritic)
internal/backup/state.go
[failure] 258-258:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
🪛 GitHub Check: lint
internal/backup/backup.go
[failure] 954-954:
rangeValCopy: each iteration copies 176 bytes (consider pointers or indexing) (gocritic)
[failure] 986-986:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
[failure] 1079-1079:
rangeValCopy: each iteration copies 176 bytes (consider pointers or indexing) (gocritic)
internal/backup/scheduler.go
[failure] 239-239:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
[failure] 420-420:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
cmd/restore/restore.go
[failure] 47-47:
paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)
[failure] 112-112:
octalLiteral: use new octal literal style, 0o755 (gocritic)
[failure] 117-117:
octalLiteral: use new octal literal style, 0o600 (gocritic)
[failure] 133-133:
octalLiteral: use new octal literal style, 0o600 (gocritic)
internal/backup/state.go
[failure] 258-258:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
🪛 GitHub Actions: golangci-lint
cmd/restore/restore.go
[warning] 47-47: paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)
🪛 GitHub Actions: Build docker image
cmd/restore/restore.go
[warning] 47-47: paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)
🪛 GitHub Actions: golangci-test
cmd/restore/restore.go
[warning] 47-47: paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (58)
internal/diskmanager/disk_usage_unix.go (1)
25-34
: LGTM! The implementation is correct and consistent.The
GetAvailableSpace
function correctly:
- Uses
syscall.Statfs
to get filesystem statistics- Handles errors appropriately
- Returns available space in bytes using
Bavail * Bsize
internal/backup/state.go (17)
1-12
: Well-defined package structure.
The package declaration and imports are consistent and appropriate, providing a clear foundation for the backup state implementation.
14-21
: Clean struct for backup state.
TheBackupState
struct is well-designed, storing the essential fields for backup schedules, targets, missed runs, and stats.
23-29
: Structured schedule state.
Properly tracks schedule successes, failures, and next schedule time. No concerns.
31-41
: Comprehensive target state.
Captures necessary metadata for tracking target backups, including validation status.
43-49
: Straightforward missed backup representation.
Fields adequately capture the timing and reason for missed backups.
51-56
: Good concurrency approach with RWMutex.
The design properly includes a read-write mutex to manage concurrency on shared state.
58-91
: Robust initialization of state manager.
Creates the JSON state file if absent and handles errors gracefully. The approach to config paths is sensible.
93-104
: Potential need to verify valid JSON data.
WhileloadState
properly unmarshals JSON, it may be beneficial to check for file corruption or partial writes if errors occur frequently. Consider verifying the data fields or adding a fallback strategy if vital.Do you want to integrate additional validation or fallback handling here?
106-138
: Atomic write strategy is solid.
Using a temp file then renaming it ensures data integrity. The code handles directory creation and cleanup correctly.
140-164
: Schedule state update is consistent.
Successfully incrementsFailureCount
on errors and resets it on success. This design effectively tracks schedule outcomes.
166-182
: Missed backup recording is effective.
Appropriately appends missed backups with relevant details.
184-199
: Well-structured target state update.
Captures metadata after each backup, incrementing counts correctly.
201-208
: Straightforward stats update.
Replaces stats map with the new snapshot, which is acceptable if partial merges are not required.
210-221
: Read requests guarded by RLock.
Fetching schedule state with read locks ensures concurrency safety.
223-229
: Target state retrieval is concise.
Similar concurrency protection usingRLock
and returning the stored struct.
231-240
: Returning a copy prevents external modification.
CopyingmissedRuns
out is a good practice to avoid concurrent changes.
242-249
: Missed backups cleared properly.
Sets an empty slice to reset missed runs, followed by a safe state save.internal/backup/scheduler.go (19)
1-12
: Clear import and package definition.
No issues with naming or usage of thebackup
package.
14-22
: BackupSchedule struct is minimal and effective.
Tracks time-based fields for daily or weekly backups.
24-33
: Scheduler structure is well organized.
Holds references to manager, schedules, state manager, and concurrency controls.
35-52
: Scheduler initialization is robust.
Handles logging gracefully and creates the StateManager without error.
54-86
: Schedule addition logic is thorough.
Validates input hours/minutes and calculates next run time.
88-117
: Lifecycle control is straightforward.
Start()
andStop()
handle concurrency locks correctly, preventing multiple starts or stops.
119-124
: Getter remains simple.
ExposesisRunning
state with read lock, which is correct.
126-134
: Missed runs fetching is consistent.
Relies on the underlyingstate.GetMissedBackups
method.
136-141
: Potential confusion in clearing missed runs.
This method setss.schedules = nil
, effectively removing all schedules rather than just missed runs. This is the same concern noted in a previous review.
143-188
: Periodic scheduler loop works properly.
A concurrency-safe ticker checks if each schedule is due, then marks missed backups or runs them.
190-249
: Backup run handles failures and missed backups gracefully.
On error, records missed backups and logs details. This is a correct approach for robust error handling.🧰 Tools
🪛 GitHub Check: golangci / lint
[failure] 239-239:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)🪛 GitHub Check: lint
[failure] 239-239:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
251-282
: Daily/weekly scheduling logic is clear.
Accurately calculates time offsets to run the next backup.
284-304
: Schedules updated seamlessly.
Rebuilds the slice and recalculates next runs after changes.
306-328
: Weekday parsing is flexible.
Covers abbreviations and returns -1 for daily backups.
330-379
: Configuration loading is comprehensive.
Sets up schedules from config, skipping disabled ones, and retrieves existing state from theStateManager
.
380-384
: Convenient missed backup retrieval.
Exposes underlying state method without additional complexity.
385-388
: Directly clears missed backups from StateManager.
Neatly delegates missed runs clearing.
390-393
: Straightforward stats retrieval.
Again delegates toStateManager
for aggregator logic.
395-434
: TriggerBackup provides a manual backup flow.
Runs the backup, updates stats, and logs results. Excellent approach for on-demand usage.🧰 Tools
🪛 GitHub Check: golangci / lint
[failure] 420-420:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)🪛 GitHub Check: lint
[failure] 420-420:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)cmd/restore/restore.go (2)
1-14
: Package and imports are clean.
Uses Cobra and the internal packages appropriately.
16-45
: Restore command definition is straightforward and clear.
The command structure follows Cobra best practices with a subcommand for decrypt.cmd/backup/backup.go (3)
39-39
: Avoid logging encryption keys in plaintext.The encryption key is currently being logged in plaintext, which could expose sensitive information in log files.
Apply this diff to mask the key in logs:
-log.Printf("Successfully generated new encryption key: %s", key) +log.Printf("Successfully generated new encryption key: %s", strings.Repeat("*", len(key)))
93-93
: Add type assertion validation for target settings.The type assertion for path is not checked for safety, which could lead to a panic if the configuration is malformed.
Apply this diff to add type assertion validation:
- Path: targetConfig.Settings["path"].(string), + Path: func() string { + if path, ok := targetConfig.Settings["path"].(string); ok { + return path + } + log.Printf("Warning: Invalid path setting for local target, using default") + return "backups/" + }(),
37-38
: Well-structured error handling.The error handling is well-implemented with proper error wrapping, descriptive messages, and consistent propagation.
Also applies to: 64-65, 68-69, 140-141
internal/backup/encryption.go (1)
87-107
: Secure encryption implementation.
The encryption implementation follows security best practices:
- Uses AES-256-GCM for strong encryption
- Validates key length
- Sets secure file permissions (0o600 for key file, 0o700 for directory)
- Properly handles nonce generation and validation
Also applies to: 159-161
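As a reference for the pattern listed above (AES-256-GCM with a random nonce prepended to the ciphertext), a generic sketch follows; it is not the project's encryption.go code, and the function name is an assumption:

package example

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// encryptAESGCM seals plaintext with AES-256-GCM and prepends the nonce to the result.
func encryptAESGCM(key, plaintext []byte) ([]byte, error) {
	if len(key) != 32 {
		return nil, fmt.Errorf("key must be 32 bytes, got %d", len(key))
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Seal appends the ciphertext and auth tag to the nonce so decryption can split them apart again.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}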
internal/conf/defaults.go (1)
187-187
: Consider enabling encryption by default for enhanced security.Backups often contain sensitive data. While encryption is configurable, it should be enabled by default to ensure secure backups, especially when using remote storage.
Apply this diff:
-viper.SetDefault("backup.encryption", false) // Encryption disabled by default +viper.SetDefault("backup.encryption", true) // Encryption enabled by defaultinternal/conf/validate.go (3)
289-291
: Uncomment and implement schedule validation.The schedule validation is currently commented out, but it's crucial for ensuring proper backup timing.
Apply this diff:
- /*if len(config.Schedules) == 0 { - errs = append(errs, "at least one backup schedule must be configured when backup is enabled") - }*/ + if len(config.Schedules) == 0 && config.Schedule == "" { + errs = append(errs, "either a cron schedule or at least one backup schedule must be configured when backup is enabled") + } + + // Validate cron schedule if provided + if config.Schedule != "" { + if _, err := cron.ParseStandard(config.Schedule); err != nil { + errs = append(errs, fmt.Sprintf("invalid cron schedule: %v", err)) + } + }
347-349
: Enhance path validation for local target.The current validation only checks if the path is non-empty but doesn't validate its security implications.
Apply this diff:
if path, ok := target.Settings["path"].(string); !ok || path == "" { errs = append(errs, fmt.Sprintf("target %d: local target requires 'path' setting", i+1)) +} else { + // Prevent directory traversal + cleanPath := filepath.Clean(path) + if strings.Contains(cleanPath, "..") { + errs = append(errs, fmt.Sprintf("target %d: path must not contain directory traversal", i+1)) + } + // Ensure absolute paths are not used + if filepath.IsAbs(cleanPath) { + errs = append(errs, fmt.Sprintf("target %d: absolute paths are not allowed", i+1)) + } }
351-353
: Add comprehensive validation for FTP settings.The current validation only checks for the host setting, but other critical settings should also be validated.
Apply this diff:
if _, ok := target.Settings["host"].(string); !ok { errs = append(errs, fmt.Sprintf("target %d: FTP target requires 'host' setting", i+1)) +} else { + // Validate port + if port, ok := target.Settings["port"].(int); ok { + if port < 1 || port > 65535 { + errs = append(errs, fmt.Sprintf("target %d: port must be between 1 and 65535", i+1)) + } + } + // Validate timeout + if timeout, ok := target.Settings["timeout"].(string); ok { + if _, err := time.ParseDuration(timeout); err != nil { + errs = append(errs, fmt.Sprintf("target %d: invalid timeout format: %v", i+1, err)) + } + } }internal/backup/sources/sqlite.go (3)
187-200
: LGTM: Robust error handling implementation.The error handling is well-implemented with:
- Proper error wrapping and context
- Comprehensive media error detection
- Clear error categorization
Also applies to: 202-276
117-136
: LGTM: Proper resource management.The code demonstrates excellent resource management practices:
- Database connections are properly managed and closed
- Temporary files are cleaned up using defer
- Resources are released in the correct order
Also applies to: 391-408
102-115
: LGTM: Strong security implementation.The code implements proper security measures:
- Path validation to prevent directory traversal
- No direct execution of user input
- Proper file permission handling
Also applies to: 289-300
internal/backup/targets/sftp.go (1)
207-247
: LGTM: Robust error handling and cleanup.The code demonstrates excellent practices:
- Comprehensive error handling with retries
- Proper cleanup of temporary files
- Proper connection cleanup on errors
Also applies to: 654-687
internal/backup/targets/rsync.go (3)
223-277
: LGTM: Strong command injection prevention.
The code implements robust command injection prevention:
- Comprehensive path sanitization
- Use of -- to prevent option injection
- Proper argument escaping
Also applies to: 469-481
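To illustrate the practices listed above (no shell, "--" before paths, pinned SSH options), here is a generic sketch; the function and parameter names are assumptions and this is not the file's actual implementation:

package example

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
)

// runRsync copies localPath to remoteSpec without ever passing arguments through a shell.
func runRsync(ctx context.Context, localPath, remoteSpec, keyFile string) error {
	// Reject obviously unsafe values before they reach the command line.
	for _, p := range []string{localPath, remoteSpec, keyFile} {
		if strings.HasPrefix(p, "-") || strings.Contains(p, "..") {
			return fmt.Errorf("unsafe path: %q", p)
		}
	}

	sshCmd := fmt.Sprintf(
		"ssh -o StrictHostKeyChecking=yes -o IdentitiesOnly=yes -o ControlMaster=no -i %s", keyFile)

	// "--" ensures rsync treats everything after it as paths, never as options.
	args := []string{"-az", "-e", sshCmd, "--", localPath, remoteSpec}

	// exec.CommandContext passes args directly to the binary; no shell interpolation occurs.
	cmd := exec.CommandContext(ctx, "rsync", args...)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("rsync failed: %w: %s", err, out)
	}
	return nil
}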
280-296
: LGTM: Secure SSH configuration.The code implements secure SSH options:
- StrictHostKeyChecking=yes
- ControlMaster=no
- IdentitiesOnly=yes
- Proper key file handling
178-205
: LGTM: Robust error handling.The code demonstrates excellent error handling:
- Comprehensive error handling with retries
- Proper cleanup on errors
- Clear error categorization
Also applies to: 527-556
internal/conf/config.go (1)
353-400
: LGTM: Well-designed configuration structures.The configuration structures demonstrate excellent design:
- Clear separation of concerns
- Logical grouping of related settings
- Comprehensive documentation
internal/backup/targets/gdrive.go (2)
18-18
: Verify the correct Google Drive API package is being used.The implementation uses
google.golang.org/api/drive/v3
which is the correct package for Google Drive API, not Google Cloud Storage. This resolves the confusion noted in past reviews.
643-644
: List operation should handle pagination for large folders.The
List
function at line 646 correctly implements pagination by usingNextPageToken
. This resolves the concern noted in past reviews about handling multi-page listing.Also applies to: 646-648
go.mod (1)
3-3
:✅ Verification successful
Verify Go version compatibility.
The module requires Go version 1.23.2, which appears to be a future version as the current latest stable release is 1.22.0 (as of February 2025).
🌐 Web query:
What is the latest stable version of Go?
💡 Result:
The latest stable version of Go is 1.24.0, released on February 11, 2025[3][7]. This follows the Go team's six-month release cycle, with Go 1.23 having been released in August 2024[3][4].
Key details about Go versioning:
- Releases follow semantic versioning (
major.minor.patch
)- New major versions (like 1.24) are released every six months[2][4]
- Each major version receives minor patch updates for security and bug fixes (e.g., 1.24.1, 1.24.2)[3]
- Official support covers the two most recent major releases under Go's compatibility promise[1][7]
Users can download Go 1.24.0 from the official downloads page[5], with installers available for all major platforms. The release maintains backward compatibility while introducing toolchain improvements and runtime optimizations[4][7].
Citations:
- 1: https://github.com/golang/tools/releases
- 2: https://tip.golang.org/doc/go1.21
- 3: https://go.dev/doc/devel/release
- 4: https://tip.golang.org/doc/go1.23
- 5: https://go.dev/dl/
- 6: https://groups.google.com/g/golang-nuts/c/_ROx_UGCCCg
- 7: https://en.wikipedia.org/wiki/Go_(programming_language)
Actionable: Confirm Go language version baseline in go.mod
- File:
go.mod
(Line 3)- Code Snippet:
go 1.23.2The declared version is
1.23.2
, which is now a valid, previous release given that the latest stable version is1.24.0
(released February 11, 2025). Please verify that1.23.2
is the intended minimum requirement—if you need features or improvements present in Go 1.24.0, consider updating the version accordingly.
{{template "passwordField" dict
  "id" "ftpPassword"
  "model" "target.settings.password"
  "name" "'backup.targets[' + index + '].settings.password'"
  "label" "Password"
  "tooltip" "FTP account password"
  "dynamicId" true}}
|
🛠️ Refactor suggestion
Enhance security for password fields.
Password fields should include additional security attributes to prevent auto-complete and ensure secure handling.
Apply this diff to enhance password field security:
{{template "passwordField" dict
"id" "ftpPassword"
"model" "target.settings.password"
"name" "'backup.targets[' + index + '].settings.password'"
"label" "Password"
"tooltip" "FTP account password"
+ "autocomplete" "new-password"
+ "spellcheck" "false"
"dynamicId" true}}
Apply the same changes to the SFTP password field.
Also applies to: 311-318
defer src.Close()

// Read the file content
content := make([]byte, file.Size)
Check failure
Code scanning / CodeQL
Slice memory allocation with excessive size value High
user-provided value
Copilot Autofix AI 1 day ago
To fix the problem, we need to implement a maximum allowed value for the size of the uploaded file. This will prevent excessive memory allocation and potential denial-of-service attacks. We will add a check to ensure that the file size does not exceed a predefined maximum value before allocating memory for the slice.
- Define a constant for the maximum allowed file size.
- Add a check to compare the file size against this maximum value.
- If the file size exceeds the maximum value, return an error response.
-
@@ -16,2 +16,3 @@
	keySize = 32 // 256-bit key
	maxFileSize = 10 * 1024 * 1024 // 10 MB
)
@@ -205,2 +206,10 @@

// Check the file size
if file.Size > maxFileSize {
	return c.JSON(http.StatusBadRequest, map[string]interface{}{
		"success": false,
		"message": "File size exceeds the maximum allowed limit",
	})
}

// Read the file content
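An alternative to pre-allocating a buffer from the client-supplied size is to cap the read itself with io.LimitReader, so at most the limit plus one byte is ever buffered; this sketch is generic and the helper name is an assumption:

package example

import (
	"fmt"
	"io"
)

const maxFileSize = 10 * 1024 * 1024 // 10 MB

// readLimited reads at most maxFileSize bytes from src and rejects anything larger.
func readLimited(src io.Reader) ([]byte, error) {
	content, err := io.ReadAll(io.LimitReader(src, maxFileSize+1))
	if err != nil {
		return nil, err
	}
	if len(content) > maxFileSize {
		return nil, fmt.Errorf("file size exceeds the maximum allowed limit")
	}
	return content, nil
}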
Actionable comments posted: 3
🔭 Outside diff range comments (1)
internal/httpcontroller/server.go (1)
61-61
:⚠️ Potential issueRemove duplicate
initAuthRoutes
call.The
initAuthRoutes
method is called twice:
- First call at line 61 in
initRoutes
- Second call at line 149 in
initializeServer
This could lead to duplicate route registrations. Remove one of the calls to prevent potential issues.
Apply this diff to remove the duplicate call:
-s.initAuthRoutes()
Also applies to: 149-149
♻️ Duplicate comments (4)
internal/conf/validate.go (3)
289-291
:⚠️ Potential issueUncomment and implement schedule validation.
The schedule validation is currently commented out. This validation is crucial for ensuring proper backup timing.
Apply this diff to implement the validation:
-/*if len(config.Schedules) == 0 { - errs = append(errs, "at least one backup schedule must be configured when backup is enabled") -}*/ +if len(config.Schedules) == 0 && config.Schedule == "" { + errs = append(errs, "either a cron schedule or at least one backup schedule must be configured when backup is enabled") +} + +// Validate cron schedule if provided +if config.Schedule != "" { + if _, err := cron.ParseStandard(config.Schedule); err != nil { + errs = append(errs, fmt.Sprintf("invalid cron schedule: %v", err)) + } +}
347-349
:⚠️ Potential issueEnhance path validation for local target.
The current validation only checks if the path is non-empty but doesn't validate its security implications.
Apply this diff to add comprehensive path validation:
if path, ok := target.Settings["path"].(string); !ok || path == "" { errs = append(errs, fmt.Sprintf("target %d: local target requires 'path' setting", i+1)) +} else { + // Prevent directory traversal + cleanPath := filepath.Clean(path) + if strings.Contains(cleanPath, "..") { + errs = append(errs, fmt.Sprintf("target %d: path must not contain directory traversal", i+1)) + } + // Ensure absolute paths are not used + if filepath.IsAbs(cleanPath) { + errs = append(errs, fmt.Sprintf("target %d: absolute paths are not allowed", i+1)) + } }
351-353
:⚠️ Potential issueAdd comprehensive validation for FTP settings.
The current validation only checks for the host setting, but other critical settings should also be validated.
Apply this diff to add validation for all FTP settings:
if _, ok := target.Settings["host"].(string); !ok { errs = append(errs, fmt.Sprintf("target %d: FTP target requires 'host' setting", i+1)) +} else { + // Validate port + if port, ok := target.Settings["port"].(int); ok { + if port < 1 || port > 65535 { + errs = append(errs, fmt.Sprintf("target %d: port must be between 1 and 65535", i+1)) + } + } + // Validate timeout + if timeout, ok := target.Settings["timeout"].(string); ok { + if _, err := time.ParseDuration(timeout); err != nil { + errs = append(errs, fmt.Sprintf("target %d: invalid timeout format: %v", i+1, err)) + } + } }internal/conf/config.go (1)
387-389
:⚠️ Potential issueEnhance encryption key management.
The current boolean flag for encryption and plain text storage of the encryption key needs to be enhanced with secure key management.
Apply this diff to improve encryption configuration:
- Encryption bool `yaml:"encryption"` // true to enable backup encryption - EncryptionKey string `yaml:"encryption_key"` // Base64-encoded encryption key for backups + Encryption struct { + Enabled bool `yaml:"enabled"` // true to enable backup encryption + KeySource string `yaml:"keysource"` // "env", "file", or "vault" + KeyName string `yaml:"keyname"` // Name of the key or environment variable + Algorithm string `yaml:"algorithm"` // "AES256", "ChaCha20", etc. + KeyRotation struct { + Enabled bool `yaml:"enabled"` // Enable automatic key rotation + Interval time.Duration `yaml:"interval"` // How often to rotate keys + RetainOld int `yaml:"retainold"` // Number of old keys to retain + } `yaml:"keyrotation"` + } `yaml:"encryption"`
🧹 Nitpick comments (7)
internal/httpcontroller/backup_routes.go (1)
59-93
: Consider storing the encryption key more securely.Currently,
handleGenerateKey
stores the key ins.Settings.Backup.EncryptionKey
and writes it to the configuration file (line 75–76). While this may be acceptable in a trusted environment, it increases the risk of exposure if the config file is ever read by unauthorized parties. For maximum security, consider storing the key in a secure vault or environment variable rather than on disk in cleartext.internal/httpcontroller/handlers/settings.go (2)
855-878
: Ensure secure transport when providing encryption keys.
GenerateBackupKey
writes potentially sensitive key material to disk and logs a success message. If the environment doesn’t enforce HTTPS, keys may be transmitted over an insecure channel. Verify TLS configuration is enforced for production.
880-901
: Plaintext key download may expose sensitive information.
DownloadBackupKey
returns the key directly in the response body. Make sure TLS or another secure method is enforced to avoid accidental exposure of sensitive backup keys over plain HTTP.internal/backup/encryption.go (2)
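One way to enforce this is a small middleware that rejects requests not served over TLS, either directly or behind a trusted proxy that sets the forwarding header; the sketch below assumes the Echo framework used elsewhere in the controller, and the route name shown is hypothetical:

package example

import (
	"net/http"

	"github.com/labstack/echo/v4"
)

// requireTLS refuses to serve the wrapped handler over plain HTTP.
func requireTLS(next echo.HandlerFunc) echo.HandlerFunc {
	return func(c echo.Context) error {
		r := c.Request()
		if r.TLS == nil && r.Header.Get("X-Forwarded-Proto") != "https" {
			return c.JSON(http.StatusForbidden, map[string]string{
				"error": "this endpoint is only available over HTTPS",
			})
		}
		return next(c)
	}
}

// Hypothetical usage: g.GET("/backup/key/download", handlers.DownloadBackupKey, requireTLS)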
31-84
: Consider refactoring for improved readability.The function is well-structured but could be split into smaller, more focused functions to improve readability and maintainability.
Consider refactoring into these helper functions:
readKeyFromFile
: Handle reading and decoding the key.generateNewKey
: Handle key generation and file writing.func (m *Manager) getEncryptionKey() ([]byte, error) { if !m.config.Encryption { return nil, NewError(ErrConfig, "encryption is not enabled", nil) } // Get the encryption key file path keyPath, err := m.getEncryptionKeyPath() if err != nil { return nil, err } + // Try to read existing key + key, err := m.readKeyFromFile(keyPath) + if err == nil { + return key, nil + } + if !os.IsNotExist(err) { + return nil, err + } + + // Generate new key if file doesn't exist + return m.generateNewKey(keyPath) +} + +func (m *Manager) readKeyFromFile(keyPath string) ([]byte, error) { // Try to read the existing key file keyBytes, err := os.ReadFile(keyPath) if err != nil { - if !os.IsNotExist(err) { - return nil, NewError(ErrIO, "failed to read encryption key file", err) - } - - // Generate a new key if the file doesn't exist - key := make([]byte, 32) // 256 bits - if _, err := rand.Read(key); err != nil { - return nil, NewError(ErrEncryption, "failed to generate encryption key", err) - } - - // Encode the key as hex - keyHex := hex.EncodeToString(key) - - // Create the config directory if it doesn't exist - if err := os.MkdirAll(filepath.Dir(keyPath), 0o700); err != nil { - return nil, NewError(ErrIO, "failed to create config directory", err) - } - - // Write the key to the file with secure permissions - if err := os.WriteFile(keyPath, []byte(keyHex), 0o600); err != nil { - return nil, NewError(ErrIO, "failed to write encryption key file", err) - } - - return key, nil + return nil, err } // Decode existing key from hex keyStr := strings.TrimSpace(string(keyBytes)) key, err := hex.DecodeString(keyStr) if err != nil { return nil, NewError(ErrEncryption, "failed to decode encryption key", err) } // Validate key length if len(key) != 32 { return nil, NewError(ErrEncryption, "invalid encryption key length", nil) } return key, nil } + +func (m *Manager) generateNewKey(keyPath string) ([]byte, error) { + // Generate a new key + key := make([]byte, 32) // 256 bits + if _, err := rand.Read(key); err != nil { + return nil, NewError(ErrEncryption, "failed to generate encryption key", err) + } + + // Encode the key as hex + keyHex := hex.EncodeToString(key) + + // Create the config directory if it doesn't exist + if err := os.MkdirAll(filepath.Dir(keyPath), 0o700); err != nil { + return nil, NewError(ErrIO, "failed to create config directory", err) + } + + // Write the key to the file with secure permissions + if err := os.WriteFile(keyPath, []byte(keyHex), 0o600); err != nil { + return nil, NewError(ErrIO, "failed to write encryption key file", err) + } + + return key, nil
215-264
: Consider refactoring for improved readability.The function is well-structured but could be split into smaller, more focused functions to improve readability and maintainability.
Consider refactoring into these helper functions:
validateKeyFileFormat
: Handle format validation.extractKeyFromContent
: Handle key extraction.func (m *Manager) ImportEncryptionKey(content []byte) error { + // Validate key file format + if err := m.validateKeyFileFormat(content); err != nil { + return err + } + + // Extract and validate key + key, err := m.extractKeyFromContent(content) + if err != nil { + return err + } + + // Get the key file path + keyPath, err := m.getEncryptionKeyPath() + if err != nil { + return err + } + + // Create the config directory if it doesn't exist + if err := os.MkdirAll(filepath.Dir(keyPath), 0o700); err != nil { + return NewError(ErrIO, "failed to create config directory", err) + } + + // Write the key to file with secure permissions + if err := os.WriteFile(keyPath, []byte(key), 0o600); err != nil { + return NewError(ErrIO, "failed to write encryption key file", err) + } + + m.logger.Printf("✅ Imported encryption key saved to: %s", keyPath) + return nil +} + +func (m *Manager) validateKeyFileFormat(content []byte) error { // Parse the key file content lines := strings.Split(string(content), "\n") if len(lines) < 3 { return NewError(ErrValidation, "invalid key file format", nil) } // Verify the header if !strings.HasPrefix(lines[0], "BirdNET-Go Backup Encryption Key") { return NewError(ErrValidation, "invalid key file format: missing header", nil) } + return nil +} + +func (m *Manager) extractKeyFromContent(content []byte) (string, error) { + lines := strings.Split(string(content), "\n") // Extract the key var key string for _, line := range lines { if strings.HasPrefix(line, "Key: ") { key = strings.TrimPrefix(line, "Key: ") key = strings.TrimSpace(key) break } } if key == "" { - return NewError(ErrValidation, "invalid key file format: key not found", nil) + return "", NewError(ErrValidation, "invalid key file format: key not found", nil) } // Validate key format (should be hex-encoded) if _, err := hex.DecodeString(key); err != nil { - return NewError(ErrValidation, "invalid key format: not hex-encoded", err) - } - - // Get the key file path - keyPath, err := m.getEncryptionKeyPath() - if err != nil { - return err - } - - // Create the config directory if it doesn't exist - if err := os.MkdirAll(filepath.Dir(keyPath), 0o700); err != nil { - return NewError(ErrIO, "failed to create config directory", err) - } - - // Write the key to file with secure permissions - if err := os.WriteFile(keyPath, []byte(key), 0o600); err != nil { - return NewError(ErrIO, "failed to write encryption key file", err) + return "", NewError(ErrValidation, "invalid key format: not hex-encoded", err) } - m.logger.Printf("✅ Imported encryption key saved to: %s", keyPath) - return nil + return key, nilviews/settings/templates/selectField.html (1)
11-27
: Enhance select field accessibility.The select element should have additional ARIA attributes for better screen reader support.
Apply this diff to improve accessibility:
<select {{if .dynamicId}} :id="'{{.id}}_' + index" x-model="{{.model}}" :name="{{.name}}" {{else}} id="{{.id}}" x-model="{{.model}}" name="{{.name}}" {{end}} {{if .change}}@change="{{.change}}"{{end}} - class="select select-sm select-bordered w-full"> + class="select select-sm select-bordered w-full" + :aria-label="{{.label}}" + :aria-describedby="{{if .tooltip}}{{if .dynamicId}}'{{.id}}_' + index + '_tooltip'{{else}}'{{.id}}_tooltip'{{end}}{{end}}" + role="combobox">views/settings/templates/numberField.html (1)
11-26
: Enhance number field accessibility and validation.The number input should have additional ARIA attributes and client-side validation feedback.
Apply this diff to improve accessibility and validation:
<input type="number" {{if .dynamicId}} :id="'{{.id}}_' + index" x-model.number="{{.model}}" :name="{{.name}}" {{else}} id="{{.id}}" x-model.number="{{.model}}" name="{{.name}}" {{end}} - class="input input-bordered input-sm w-full" + class="input input-bordered input-sm w-full" + :aria-label="{{.label}}" + :aria-describedby="{{if .tooltip}}{{if .dynamicId}}'{{.id}}_' + index + '_tooltip'{{else}}'{{.id}}_tooltip'{{end}}{{end}}" + :aria-invalid="{{.model}} < {{.min}} || {{.model}} > {{.max}}" + @invalid="$event.target.setCustomValidity('Please enter a number between ' + {{.min}} + ' and ' + {{.max}})" + @input="$event.target.setCustomValidity('')" {{if .placeholder}}placeholder="{{.placeholder}}"{{end}} {{if .pattern}}pattern="{{.pattern}}"{{end}} {{if .min}}min="{{.min}}"{{end}} {{if .max}}max="{{.max}}"{{end}} {{if .step}}step="{{.step}}"{{end}}>
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
internal/backup/encryption.go (1 hunks)
internal/conf/config.go (3 hunks)
internal/conf/validate.go (2 hunks)
internal/httpcontroller/backup_routes.go (1 hunks)
internal/httpcontroller/handlers/settings.go (4 hunks)
internal/httpcontroller/routes.go (2 hunks)
internal/httpcontroller/server.go (1 hunks)
internal/httpcontroller/template_functions.go (2 hunks)
internal/httpcontroller/template_renderers.go (2 hunks)
views/settings/backupSettings.html (1 hunks)
views/settings/templates/numberField.html (1 hunks)
views/settings/templates/selectField.html (1 hunks)
🧰 Additional context used
🪛 GitHub Check: golangci / lint
internal/httpcontroller/handlers/settings.go
[failure] 707-707:
cognitive complexity 65 of func updateBackupTargetsFromForm
is high (> 50) (gocognit)
🪛 GitHub Check: lint
internal/httpcontroller/handlers/settings.go
[failure] 707-707:
cognitive complexity 65 of func updateBackupTargetsFromForm
is high (> 50) (gocognit)
🪛 GitHub Actions: golangci-test
internal/httpcontroller/handlers/settings.go
[warning] 707-707: cognitive complexity 65 of func updateBackupTargetsFromForm
is high (> 50) (gocognit)
🪛 GitHub Actions: Build docker image
internal/httpcontroller/handlers/settings.go
[error] 707-707: cognitive complexity 65 of func updateBackupTargetsFromForm
is high (> 50) (gocognit)
🪛 GitHub Actions: golangci-lint
internal/httpcontroller/handlers/settings.go
[warning] 707-707: cognitive complexity 65 of func updateBackupTargetsFromForm
is high (> 50) (gocognit)
🪛 GitHub Check: CodeQL
internal/httpcontroller/backup_routes.go
[failure] 207-207: Slice memory allocation with excessive size value
This memory allocation depends on a user-provided value.
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (37)
internal/httpcontroller/handlers/settings.go (2)
104-122
: Validation flow looks solid.The backup settings change detection and validation logic is cleanly implemented. Returning an error early when the configuration is invalid helps avoid partially applied settings.
268-272
: Clear separation for special handling of BackupConfig.Using
updateBackupConfigFromForm
to handle complex fields is a good approach, keepingupdateStructFromForm
more generic.internal/httpcontroller/template_renderers.go (1)
133-155
: Key detection logic is well-structured.The optional check for the encryption key file and populating
keyFileExists
,keyHash
, andkeyCreated
is clear and properly contained to the backup settings page.internal/backup/encryption.go (8)
16-28
: LGTM!The function is well-structured with proper error handling and follows best practices.
87-107
: LGTM!The function follows cryptographic best practices:
- Uses AES-256-GCM for authenticated encryption
- Generates a random nonce
- Properly handles errors
110-134
: LGTM!The function follows cryptographic best practices:
- Validates encrypted data length
- Properly extracts nonce and ciphertext
- Uses GCM.Open for authenticated decryption
137-165
: LGTM!The function follows security best practices:
- Uses secure random number generation
- Sets appropriate directory permissions (0o700)
- Sets appropriate file permissions (0o600)
168-188
: LGTM!The function is well-structured with proper validation and error handling.
191-193
: LGTM!The function is a clean wrapper around the internal implementation.
196-207
: LGTM!The function is well-structured with proper validation and error handling.
210-212
: LGTM!The function is a clean wrapper around the internal implementation.
internal/httpcontroller/routes.go (2)
76-76
: LGTM!The backup settings route is properly configured with authentication requirement.
124-126
: LGTM!The backup key management routes are properly configured with:
- Authentication middleware
- Error handling middleware
- Appropriate HTTP methods (POST for generation, GET for download)
internal/httpcontroller/template_functions.go (1)
32-32
: LGTM!The
sliceFunc
is a simple and clean implementation for creating slices in templates, and it's properly registered in the template functions map.Also applies to: 384-386
internal/conf/validate.go (1)
69-74
: LGTM! Backup validation is properly integrated.The backup validation is correctly integrated into the main validation flow, with proper error handling and conditional execution based on the enabled flag.
internal/conf/config.go (1)
319-319
: LGTM! TempDir field added for backup operations.The TempDir field is correctly added to the SQLite struct to specify the path for temporary backup files.
views/settings/backupSettings.html (21)
1-5
: Template Initialization and Hidden Input
The template definition and hidden input for “templateName” are straightforward and correctly implemented.
6-31
: Backup Schedule Settings Initialization
The Alpine.js data for backup scheduling is clearly initialized, with proper default handling for schedule properties. Verify that the defaults (extracted from.Settings.Backup.Schedules
) are correctly formatted (e.g., numbers for hour/minute, booleans for is_weekly) to prevent runtime issues.
32-42
: Backup Schedule Header and Change Badge
The collapse title section neatly displays the “Backup Schedule” title along with a change badge when modifications occur.
44-62
: Conditional Display for Disabled Backup
The message that appears when backups are disabled is implemented clearly usingx-show
. This provides good user feedback.
64-97
: Schedule Settings Input Fields
The schedule type selector, weekday dropdown, and related input fields are well implemented using templated fields and proper Alpine.js bindings. Their dynamic behavior (e.g. enabling/disabling the weekday field) is clear.
99-117
: Time Input Fields for Hour and Minute
The hour and minute input fields enforce proper numerical ranges and include helpful tooltips. Ensure that numeric values from the server are compatible with these fields.
119-134
: Dynamic Schedule Summary Display
The use of Alpine.js templates to dynamically render the backup schedule summary (daily or weekly) is effective and user friendly.
138-155
: Encryption Settings Initialization & Key Regeneration Modal Setup
The encryption settings’ x-data object correctly introduces properties such asenabled
,key_exists
, and related metadata. TheshowRegenerateConfirm()
function properly sets up and displays the confirmation modal for regenerating the encryption key.
Note: Please verify that the modal element with IDconfirmModalRegenerateKey
exists at runtime.
156-208
: Import Key Functionality
TheimportKey
method robustly handles file input, makes a fetch call with CSRF protection, processes errors, and dispatches notifications. The file input reset upon both success and error is a good touch.
209-250
: Generate Key Functionality
ThegenerateKey
function mirrors the structure ofimportKey
, providing clear feedback and handling errors gracefully. Its use of fetch and notifications is consistent with best practices.
251-256
: Download Key Implementation
ThedownloadKey
function’s redirection viawindow.location.href
is simple and effective for initiating key downloads.
256-261
: Encryption x-init Watcher – Potential Scope Issue
The x-init block for encryption settings watches theencryption
object and assignsbackup.encryption = encryption.enabled
. However, nobackup
property is defined within this x-data scope. Ensure that this reference correctly targets a parent or global state, or remove/update this assignment to avoid runtime errors.
263-273
: Encryption Settings Header
The header’s display (title and change badge) for the encryption section is clear and integrates well with the overall UI.
274-337
: Encryption Key Management Section
The section differentiates between scenarios where an encryption key already exists and when it does not. Options to download, import, or generate a key are provided, and error handling with notifications is properly implemented.
342-355
: Key Regeneration Confirmation Modal
The confirmation modal for regenerating the encryption key is well structured and leverages native HTML<dialog>
. Its clear call-to-action buttons support a positive user experience.
358-425
: Backup Retention Settings Configuration
The retention settings allow users to specify maximum age and the number of backups with appropriate input validation patterns and tooltips. The use of Alpine.js to watch for changes is effective.
427-489
: Backup Targets: Initialization and Hidden Input Sync
The targets configuration is initialized by parsing server-provided JSON and updating a hidden input field in real time. This synchronization supports form submission well, but please verify that the JSON format in$refs.targetsInput.value
matches what is expected by the backend processing.
490-500
: Backup Targets Header and Add Target Button
The hidden input withx-ref="targetsInput"
and the collapse header for backup targets are implemented clearly. The “Add Target” button is intuitively placed alongside the header.
541-589
: FTP Target Settings – Password Field Security
The FTP target configuration includes fields for host, port, username, password, and remote path. However, the password field does not include additional security attributes such asautocomplete="new-password"
andspellcheck="false"
, as recommended in previous reviews.
Consider applying a diff such as:-{{template "passwordField" dict - "id" "ftpPassword" - "model" "target.settings.password" - "name" "'backup.targets[' + index + '].settings.password'" - "label" "Password" - "tooltip" "FTP account password" - "dynamicId" true}} +{{template "passwordField" dict + "id" "ftpPassword" + "model" "target.settings.password" + "name" "'backup.targets[' + index + '].settings.password'" + "label" "Password" + "tooltip" "FTP account password" + "dynamicId" true + "autocomplete" "new-password" + "spellcheck" "false"}}
592-659
: SFTP Target Settings – Password Field Security
Similar to the FTP section, the SFTP target configuration renders a password field without the extra security attributes. Please consider addingautocomplete="new-password"
andspellcheck="false"
to enhance security. For example:-{{template "passwordField" dict - "id" "sftpPassword" - "model" "target.settings.password" - "name" "'backup.targets[' + index + '].settings.password'" - "label" "Password" - "tooltip" "SFTP account password" - "dynamicId" true}} +{{template "passwordField" dict + "id" "sftpPassword" + "model" "target.settings.password" + "name" "'backup.targets[' + index + '].settings.password'" + "label" "Password" + "tooltip" "SFTP account password" + "dynamicId" true + "autocomplete" "new-password" + "spellcheck" "false"}}
660-665
: Remove Target Button
The “Remove Target” button is clearly rendered at the bottom of each target card, ensuring that users can easily delete unwanted backup targets.
// handleImportKey handles the importing of an encryption key
func (s *Server) handleImportKey(c echo.Context) error {
	// Get the uploaded file
	file, err := c.FormFile("keyFile")
	if err != nil {
		return c.JSON(http.StatusBadRequest, map[string]interface{}{
			"success": false,
			"message": "No key file provided",
		})
	}

	// Open the uploaded file
	src, err := file.Open()
	if err != nil {
		return c.JSON(http.StatusInternalServerError, map[string]interface{}{
			"success": false,
			"message": "Failed to read uploaded file",
		})
	}
	defer src.Close()

	// Read the file content
	content := make([]byte, file.Size)
	if _, err := src.Read(content); err != nil {
		return c.JSON(http.StatusInternalServerError, map[string]interface{}{
			"success": false,
			"message": "Failed to read key file content",
		})
	}
Avoid unbounded memory allocation with user-controlled file sizes.
content := make([]byte, file.Size)
allows a user to upload a large file and potentially trigger an out-of-memory condition. Consider imposing an upper limit on file size or reading/writing in chunks:
Proposed fix snippet (partial):
- content := make([]byte, file.Size)
+ const MaxKeyFileSize = 65536 // 64KB or another sensible limit
+ if file.Size > MaxKeyFileSize {
+ return c.JSON(http.StatusBadRequest, map[string]interface{}{
+ "success": false,
+ "message": "Key file too large",
+ })
+ }
+ content := make([]byte, file.Size)
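An alternative guard, not part of the proposed patch, is to cap the read itself rather than trust file.Size; it also avoids assuming that a single Read call fills the whole buffer. The 64 KB limit is carried over from the snippet above as an assumption, and the fragment drops into the handler quoted earlier (it assumes the io package is imported).

```go
// Read at most maxKeyFileSize+1 bytes; anything beyond the limit means the
// upload is too large, regardless of what file.Size claims.
const maxKeyFileSize = 64 * 1024

content, err := io.ReadAll(io.LimitReader(src, maxKeyFileSize+1))
if err != nil {
	return c.JSON(http.StatusInternalServerError, map[string]interface{}{
		"success": false,
		"message": "Failed to read key file content",
	})
}
if len(content) > maxKeyFileSize {
	return c.JSON(http.StatusBadRequest, map[string]interface{}{
		"success": false,
		"message": "Key file too large",
	})
}
```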
🧰 Tools
🪛 GitHub Check: CodeQL
[failure] 207-207: Slice memory allocation with excessive size value
This memory allocation depends on a user-provided value.
// updateBackupTargetsFromForm updates the backup targets from form values
func updateBackupTargetsFromForm(field reflect.Value, formValues map[string][]string, prefix string) error {
	// Always initialize an empty slice to handle the case where all targets are removed
	targets := []conf.BackupTarget{}

	// Get the targets JSON from form values
	targetsJSON, exists := formValues[prefix+".targets"]
	if exists && len(targetsJSON) > 0 {
		// Only try to unmarshal if we have data
		if err := json.Unmarshal([]byte(targetsJSON[0]), &targets); err != nil {
			return fmt.Errorf("error unmarshaling backup targets: %w", err)
		}

		// Validate each target's settings
		for i, target := range targets {
			// Initialize settings map if nil
			if target.Settings == nil {
				target.Settings = make(map[string]interface{})
			}

			// Validate and set default settings based on target type
			switch strings.ToLower(target.Type) {
			case "local":
				if path, ok := target.Settings["path"].(string); !ok || path == "" {
					target.Settings["path"] = "backups/"
				}
			case "ftp":
				if _, ok := target.Settings["host"].(string); !ok {
					target.Settings["host"] = ""
				}
				if _, ok := target.Settings["port"].(float64); !ok {
					target.Settings["port"] = 21
				}
				if _, ok := target.Settings["username"].(string); !ok {
					target.Settings["username"] = ""
				}
				if _, ok := target.Settings["password"].(string); !ok {
					target.Settings["password"] = ""
				}
				if _, ok := target.Settings["path"].(string); !ok {
					target.Settings["path"] = "backups/"
				}
			case "sftp":
				if _, ok := target.Settings["host"].(string); !ok {
					target.Settings["host"] = ""
				}
				if _, ok := target.Settings["port"].(float64); !ok {
					target.Settings["port"] = 22
				}
				if _, ok := target.Settings["username"].(string); !ok {
					target.Settings["username"] = ""
				}
				if _, ok := target.Settings["password"].(string); !ok {
					target.Settings["password"] = ""
				}
				if _, ok := target.Settings["key_file"].(string); !ok {
					target.Settings["key_file"] = ""
				}
				if _, ok := target.Settings["known_hosts_file"].(string); !ok {
					target.Settings["known_hosts_file"] = ""
				}
				if _, ok := target.Settings["path"].(string); !ok {
					target.Settings["path"] = "backups/"
				}
			default:
				return fmt.Errorf("unsupported backup target type for target %d: %s", i+1, target.Type)
			}

			targets[i] = target
		}
	}

	// Always set the targets field, even if it's empty
	field.Set(reflect.ValueOf(targets))
	return nil
}
🛠️ Refactor suggestion
Reduce the function’s cognitive complexity.
updateBackupTargetsFromForm
is flagged with a high cognitive complexity (65). Splitting it into helper functions (e.g., separate logic for “local”, “ftp”, “sftp”) can help reduce complexity, improve testability, and make the code more maintainable.
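One hedged way to get there, sketched here with illustrative names rather than the project's actual helpers, is a table of per-type defaults so the switch collapses into a lookup plus a loop:

```go
// Possible shape: table-driven defaults per target type. Unknown types fall
// out of the map lookup and become the error case.
var targetDefaults = map[string]map[string]interface{}{
	"local": {"path": "backups/"},
	"ftp":   {"host": "", "port": float64(21), "username": "", "password": "", "path": "backups/"},
	"sftp": {"host": "", "port": float64(22), "username": "", "password": "",
		"key_file": "", "known_hosts_file": "", "path": "backups/"},
}

func applyTargetDefaults(target *conf.BackupTarget) error {
	defaults, ok := targetDefaults[strings.ToLower(target.Type)]
	if !ok {
		return fmt.Errorf("unsupported backup target type: %s", target.Type)
	}
	if target.Settings == nil {
		target.Settings = make(map[string]interface{})
	}
	for key, def := range defaults {
		if _, present := target.Settings[key]; !present {
			target.Settings[key] = def
		}
	}
	return nil
}
```

This sketch only checks for presence rather than repeating the per-key type assertions, so it is a starting point, not a drop-in replacement.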
🧰 Tools
🪛 GitHub Check: golangci / lint
[failure] 707-707:
cognitive complexity 65 of func updateBackupTargetsFromForm
is high (> 50) (gocognit)
🪛 GitHub Check: lint
[failure] 707-707:
cognitive complexity 65 of func updateBackupTargetsFromForm
is high (> 50) (gocognit)
🪛 GitHub Actions: golangci-test
[warning] 707-707: cognitive complexity 65 of func updateBackupTargetsFromForm
is high (> 50) (gocognit)
🪛 GitHub Actions: Build docker image
[error] 707-707: cognitive complexity 65 of func updateBackupTargetsFromForm
is high (> 50) (gocognit)
🪛 GitHub Actions: golangci-lint
[warning] 707-707: cognitive complexity 65 of func updateBackupTargetsFromForm
is high (> 50) (gocognit)
OperationTimeouts struct {
	Backup  time.Duration // Timeout for entire backup operation (default: 2h)
	Store   time.Duration // Timeout for storing a backup (default: 15m)
	Cleanup time.Duration // Timeout for cleanup operation (default: 10m)
	Delete  time.Duration // Timeout for delete operation (default: 2m)
}
🛠️ Refactor suggestion
Add validation for operation timeouts.
The operation timeouts are defined but lack validation to ensure reasonable values.
Add validation in the ValidateBackupConfig
function:
+ // Validate operation timeouts
+ if config.OperationTimeouts.Backup < time.Minute || config.OperationTimeouts.Backup > 24*time.Hour {
+ errs = append(errs, "backup operation timeout must be between 1 minute and 24 hours")
+ }
+ if config.OperationTimeouts.Store < time.Minute || config.OperationTimeouts.Store > 2*time.Hour {
+ errs = append(errs, "store operation timeout must be between 1 minute and 2 hours")
+ }
+ if config.OperationTimeouts.Cleanup < time.Minute || config.OperationTimeouts.Cleanup > time.Hour {
+ errs = append(errs, "cleanup operation timeout must be between 1 minute and 1 hour")
+ }
+ if config.OperationTimeouts.Delete < 30*time.Second || config.OperationTimeouts.Delete > 30*time.Minute {
+ errs = append(errs, "delete operation timeout must be between 30 seconds and 30 minutes")
+ }
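A possible companion to this validation, sketched under the assumption that the anonymous struct quoted above is given a named type, is to normalize unset (zero) timeouts to the documented defaults before validating, so the range checks only reject values the user set explicitly:

```go
// Sketch: fall back to the documented defaults when a timeout is left unset.
// The type name is an assumption; in config.go the struct is anonymous.
func normalizeOperationTimeouts(t *OperationTimeouts) {
	if t.Backup == 0 {
		t.Backup = 2 * time.Hour
	}
	if t.Store == 0 {
		t.Store = 15 * time.Minute
	}
	if t.Cleanup == 0 {
		t.Cleanup = 10 * time.Minute
	}
	if t.Delete == 0 {
		t.Delete = 2 * time.Minute
	}
}
```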
…ders This commit introduces a robust backup system for BirdNET-Go with the following key features: - Configurable backup providers (local, FTP, SFTP, rsync, Google Drive) - Multiple backup targets (SQLite initially) - Flexible scheduling and retention policies - Optional encryption for backups - Comprehensive configuration validation - Modular design with pluggable backup sources and targets The implementation includes: - Backup configuration in settings - Backup manager with scheduling and retention management - Sources and targets for different storage types - Encryption support using AES-256-GCM
… API - Replace manual file copying with SQLite's VACUUM INTO command - Use GORM for safer and more robust database handling - Add additional database validation checks - Implement read-only mode and transaction-based backup - Enhance error handling and resource management
- Refactor backup process to provide more detailed logging and error tracking - Update backup sources and targets to support more robust file handling - Improve error reporting and context in backup operations - Simplify backup configuration and validation - Add more comprehensive logging for backup stages and operations
- Upgrade various Google Cloud, OpenTelemetry, and gRPC dependencies - Update protobuf, Go standard library, and third-party package versions - Refresh go.mod and go.sum with latest compatible library versions
…ation - Add comprehensive error handling for SQLite backup operations - Implement robust disk space and media error detection - Support configurable temporary directory for backups - Add signal handling and lock file mechanism to prevent concurrent backups - Improve backup verification and integrity checks - Extend configuration to support custom temporary backup directory
…out management - Refactor backup process to support granular error handling and custom error types - Implement comprehensive timeout mechanisms for backup operations - Add context-aware backup stages with cancellation support - Improve error reporting and logging for backup processes - Extend configuration to support configurable operation timeouts - Modularize backup methods for better readability and maintainability
…dvanced error management - Implement comprehensive path validation with cross-platform support - Add atomic file writing and retry mechanisms for transient errors - Introduce versioned metadata format with enhanced error tracking - Improve backup verification and integrity checks - Implement granular permission management and security checks - Add exponential backoff and context-aware file operations - Enhance error handling with custom error types and detailed logging
… and atomic operations - Implement connection pooling and retry mechanisms for FTP operations - Add atomic file upload with temporary file tracking and cleanup - Enhance error handling with context-aware and transient error detection - Introduce comprehensive configuration options for FTP target - Improve metadata handling with versioned format - Add advanced validation and feature checking for FTP connections - Implement graceful connection management and resource cleanup
…ction management - Implement robust SFTP connection pooling and retry mechanisms - Add comprehensive security checks for path validation and file operations - Introduce atomic file upload with temporary file tracking and cleanup - Enhance host key verification using known_hosts file - Add progress tracking for file uploads - Improve error handling with context-aware and transient error detection - Implement versioned metadata format for SFTP backups - Add graceful connection management and resource cleanup - Update default configuration to support known_hosts file option
…ic operations - Implement comprehensive security checks for path validation and file operations - Add atomic file upload with temporary file tracking and cleanup - Enhance SSH connection security using known_hosts file - Introduce versioned metadata format for Rsync backups - Implement robust retry mechanisms for transient network errors - Add progress tracking and detailed error handling - Improve connection management with context-aware operations - Extend configuration to support advanced Rsync and SSH options
…nd atomic operations - Implement comprehensive Google Drive backup target with robust authentication - Add OAuth2 token management with automatic refresh and error handling - Introduce rate limiting and quota checking for Google Drive uploads - Implement atomic file upload with temporary file tracking and cleanup - Add versioned metadata format for Google Drive backups - Enhance error handling with context-aware and transient error detection - Implement folder caching and path management for backup storage - Add comprehensive validation and connection management - Improve logging and debug capabilities for Google Drive operations
- Add logger injection for backup targets (FTP, SFTP, Rsync, GDrive) - Implement map-based configuration method for Google Drive target - Improve error handling with errors.As for API error type checking - Add robust configuration parsing with type assertions and validation - Enhance local target metadata file reading with improved error handling - Fix Rsync temporary file cleanup error wrapping
- Add pagination support for Google Drive file listing to handle large backup folders - Improve error handling for file size parsing in Rsync target - Ensure complete file retrieval by processing all pages of Google Drive file list - Add error handling for file size parsing to prevent potential listing failures
- Move encryption key storage from config to a dedicated key file - Add secure key file generation and storage with proper permissions - Remove encryption key from configuration struct - Enhance key validation and error handling - Implement file-based key management with improved security
- Enhance logging across backup targets with emoji prefixes for better visual clarity - Add emoji-based error prefix method in error handling to improve error interpretation - Implement consistent logging style with emojis for status and error messages - Add available space logging for Google Drive target - Improve log messages with emoji indicators for different operation states - Refactor error messages to include emoji-based severity indicators
- Add FileMetadata and FileEntry to support platform-specific file attributes - Implement cross-platform file metadata preservation during backup - Update backup archive creation to include file modification times - Add platform-specific metadata handling for Unix and Windows systems - Improve backup archive file handling with more comprehensive metadata support - Refactor backup process to preserve file permissions and timestamps - Add runtime-based metadata detection for different operating systems
…ile handling - Standardize temporary file and test directory naming across targets - Enhance FTP target with more robust directory creation and validation - Improve error handling and debug logging for FTP directory operations - Remove hidden file restrictions for test directories - Simplify path validation and error reporting in backup targets - Add more context-aware error messages for file and directory operations
- Update atomic upload methods to use os.CreateTemp for generating temporary file names - Modify file listing to exclude temporary upload files - Standardize temporary file naming across FTP and SFTP targets - Enhance file upload robustness by using system-generated temporary file paths
- Replace zip-based backup with tar.gz streaming backup implementation - Implement chunk-based SQLite database backup using sqlite3 driver - Add support for streaming backup data directly to backup archive - Enhance backup process with more granular progress tracking - Remove platform-specific file metadata handling - Simplify backup source interface to support streaming - Improve error handling and logging during backup process
- Remove BackupArchive, FileEntry, and related archive creation methods - Refactor createArchive to use streaming approach with separate methods - Eliminate debug logging and unnecessary complexity in SQLite source - Improve error handling in database information retrieval - Remove scheduling and done channel from backup manager - Standardize octal file mode notation
- Update Metadata struct in backup package with comprehensive JSON tags - Add new optional fields like Checksum, Compressed, and OriginalSize - Remove target-specific metadata version structs - Simplify metadata serialization across all backup targets - Improve metadata flexibility and future-proofing - Remove version-specific conversion methods
- Standardize metadata file naming with .tar.gz.meta extension - Simplify local, rsync, and other targets' file storage and listing logic - Remove timestamp-based directory creation for local backups - Enhance metadata file handling with atomic write and improved error management - Add debug logging with emojis for backup operations - Improve file validation and orphaned metadata detection - Enforce required base path configuration for backup targets
…backups - Introduce `genkey` subcommand to generate encryption keys for backups - Add `ValidateEncryption()` method to check encryption configuration - Implement secure encryption key generation and file storage - Add `sanitize_config` option to control configuration backup behavior - Update default configuration to sanitize config by default
- Implement `restore` command with `decrypt` subcommand - Add support for decrypting encrypted backup archives - Update root command to include restore command - Enhance backup metadata to track encryption status - Implement decryption logic with metadata validation - Support automatic output path generation for decrypted backups
- Introduce BackupStats struct to track detailed backup metrics - Implement GetBackupStats method to retrieve backup statistics - Add ValidateBackupCounts to enforce retention policy requirements - Create Scheduler with advanced scheduling capabilities - Add StateManager to persist backup system state - Support daily and weekly backup schedules - Enhance configuration with flexible schedule definitions - Implement missed backup tracking and reporting
- Implement backup settings page with general and retention configuration - Add support for multiple backup targets (local, FTP, SFTP) - Update sidebar and routes to include backup settings - Enhance form templates to support dynamic and indexed form fields - Remove mandatory schedule validation in backup configuration - Improve input validation and tooltip handling across form components
…lity - Remove redundant x-data blocks in backup settings template - Add dynamic ID generation for form fields using Alpine.js - Enhance accessibility with ARIA attributes for form inputs - Simplify form field template references and improve code readability - Update checkbox, text, and password field templates to use consistent ID generation
- Implement encryption key generation, download, and import routes - Add new methods in backup.Manager for key management - Update BackupConfig to include encryption key field - Create new template components for encryption key settings - Enhance backup settings UI with key generation and import features - Add validation and error handling for encryption key operations
Actionable comments posted: 4
♻️ Duplicate comments (14)
cmd/backup/backup.go (2)
39-39
:⚠️ Potential issueAvoid logging encryption keys in plaintext.
The encryption key should not be logged in plaintext for security reasons.
- log.Printf("Successfully generated new encryption key: %s", key)
+ log.Printf("Successfully generated new encryption key: %s", strings.Repeat("*", len(key)))
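If some traceability is still wanted in the log, a hashed fingerprint avoids exposing key material entirely; this is an optional sketch, not part of the suggested diff, and assumes crypto/sha256 and encoding/hex are imported.

```go
// Log a short SHA-256 fingerprint of the key instead of the key itself.
func keyFingerprint(key string) string {
	sum := sha256.Sum256([]byte(key))
	return hex.EncodeToString(sum[:4]) // first 8 hex characters
}

// log.Printf("Successfully generated new encryption key (fingerprint %s)", keyFingerprint(key))
```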
93-93
:⚠️ Potential issueAdd type assertion validation for target settings.
The type assertion for path is not checked for safety.
- Path: targetConfig.Settings["path"].(string), + Path: func() string { + if path, ok := targetConfig.Settings["path"].(string); ok { + return path + } + log.Printf("Warning: Invalid path setting for local target, using default") + return "backups/" + }(),internal/httpcontroller/backup_routes.go (1)
185-273
:⚠️ Potential issueFix unbounded memory allocation in key import handler.
The current implementation allows unbounded memory allocation through user-controlled file size.
Apply this fix to prevent potential DoS attacks:
+const MaxKeyFileSize = 65536 // 64KB limit for key files func (s *Server) handleImportKey(c echo.Context) error { + if file.Size > MaxKeyFileSize { + return c.JSON(http.StatusBadRequest, map[string]interface{}{ + "success": false, + "message": "Key file too large", + }) + }internal/conf/defaults.go (2)
183-188
: 🛠️ Refactor suggestionEnable encryption by default for enhanced security.
Backups often contain sensitive data and should be encrypted by default.
-viper.SetDefault("backup.encryption", false) +viper.SetDefault("backup.encryption", true)
196-198
: 🛠️ Refactor suggestionEnhance path validation for backup locations.
The current implementation uses raw paths without validation.
Add path validation to prevent directory traversal and ensure proper permissions:
+"path": filepath.Clean("backups/"), // Sanitize path
internal/conf/validate.go (3)
289-291
:⚠️ Potential issueImplement schedule validation.
The schedule validation is currently commented out but is crucial for proper backup timing.
Uncomment and implement the schedule validation:
-/*if len(config.Schedules) == 0 { - errs = append(errs, "at least one backup schedule must be configured when backup is enabled") -}*/ +if len(config.Schedules) == 0 && config.Schedule == "" { + errs = append(errs, "either a cron schedule or at least one backup schedule must be configured when backup is enabled") +}
347-349
:⚠️ Potential issueEnhance path validation for local target.
The current validation only checks if the path exists but doesn't validate its security implications.
Add comprehensive path validation:
if path, ok := target.Settings["path"].(string); !ok || path == "" { errs = append(errs, fmt.Sprintf("target %d: local target requires 'path' setting", i+1)) +} else { + cleanPath := filepath.Clean(path) + if strings.Contains(cleanPath, "..") { + errs = append(errs, fmt.Sprintf("target %d: path must not contain directory traversal", i+1)) + } + if filepath.IsAbs(cleanPath) { + errs = append(errs, fmt.Sprintf("target %d: absolute paths are not allowed", i+1)) + } }
351-353
: 🛠️ Refactor suggestionAdd comprehensive validation for FTP settings.
The current validation only checks for the host setting.
Add validation for all FTP settings:
if _, ok := target.Settings["host"].(string); !ok { errs = append(errs, fmt.Sprintf("target %d: FTP target requires 'host' setting", i+1)) +} else { + if port, ok := target.Settings["port"].(int); ok { + if port < 1 || port > 65535 { + errs = append(errs, fmt.Sprintf("target %d: port must be between 1 and 65535", i+1)) + } + } + if timeout, ok := target.Settings["timeout"].(string); ok { + if _, err := time.ParseDuration(timeout); err != nil { + errs = append(errs, fmt.Sprintf("target %d: invalid timeout format: %v", i+1, err)) + } + } }internal/backup/scheduler.go (1)
136-141
:⚠️ Potential issueFix misleading function name.
The function name
ClearMissedRuns
suggests it only clears missed runs, but it actually clears all schedules by settings.schedules = nil
. This could lead to unintended behavior.Apply this diff to fix the issue:
-func (s *Scheduler) ClearMissedRuns() { +func (s *Scheduler) ClearAllSchedules() { s.mu.Lock() defer s.mu.Unlock() s.schedules = nil }internal/backup/targets/sftp.go (1)
732-752
:⚠️ Potential issueFix security vulnerability in known_hosts handling.
Creating an empty known_hosts file defeats the purpose of host key verification and makes the connection vulnerable to MITM attacks.
Apply this diff to fix the issue:
func knownHostsCallback(knownHostsFile string) (ssh.HostKeyCallback, error) { // Check if the known_hosts file exists if _, err := os.Stat(knownHostsFile); os.IsNotExist(err) { - // Create the .ssh directory if it doesn't exist - sshDir := filepath.Dir(knownHostsFile) - if err := os.MkdirAll(sshDir, 0o700); err != nil { - return nil, backup.NewError(backup.ErrIO, "failed to create .ssh directory", err) - } - // Create an empty known_hosts file - if err := os.WriteFile(knownHostsFile, []byte{}, 0o600); err != nil { - return nil, backup.NewError(backup.ErrIO, "failed to create known_hosts file", err) - } + return nil, backup.NewError(backup.ErrSecurity, + "known_hosts file does not exist. Please create it with valid host keys", err) }internal/conf/config.go (2)
387-389
:⚠️ Potential issueEnhance encryption key management.
Storing the encryption key in plaintext in the config file is a security risk. Consider:
- Using environment variables for the encryption key.
- Storing the key in a secure key management system.
- Adding encryption algorithm configuration.
Apply this diff to improve security:
type BackupConfig struct { - Encryption bool `yaml:"encryption"` // true to enable backup encryption - EncryptionKey string `yaml:"encryption_key"` // Base64-encoded encryption key for backups + Encryption struct { + Enabled bool `yaml:"enabled"` // true to enable backup encryption + KeySource string `yaml:"keysource"` // "env", "file", or "vault" + KeyName string `yaml:"keyname"` // Name of the key or environment variable + Algorithm string `yaml:"algorithm"` // "AES256", "ChaCha20", etc. + KeyRotation struct { + Enabled bool `yaml:"enabled"` // Enable automatic key rotation + Interval time.Duration `yaml:"interval"` // How often to rotate keys + RetainOld int `yaml:"retainold"` // Number of old keys to retain + } `yaml:"keyrotation"` + } `yaml:"encryption"`
395-400
: 🛠️ Refactor suggestionAdd validation for operation timeouts.
The operation timeouts are defined but lack validation to ensure reasonable values.
Add validation in the
ValidateBackupConfig
function:+ // Validate operation timeouts + if config.OperationTimeouts.Backup < time.Minute || config.OperationTimeouts.Backup > 24*time.Hour { + errs = append(errs, "backup operation timeout must be between 1 minute and 24 hours") + } + if config.OperationTimeouts.Store < time.Minute || config.OperationTimeouts.Store > 2*time.Hour { + errs = append(errs, "store operation timeout must be between 1 minute and 2 hours") + } + if config.OperationTimeouts.Cleanup < time.Minute || config.OperationTimeouts.Cleanup > time.Hour { + errs = append(errs, "cleanup operation timeout must be between 1 minute and 1 hour") + } + if config.OperationTimeouts.Delete < 30*time.Second || config.OperationTimeouts.Delete > 30*time.Minute { + errs = append(errs, "delete operation timeout must be between 30 seconds and 30 minutes") + }internal/backup/targets/gdrive.go (1)
277-284
: 🛠️ Refactor suggestionEnhance token file security.
The token file is created with 0o600 permissions, which is good, but additional security measures could be implemented.
Consider these security enhancements:
func (t *GDriveTarget) saveToken(token *oauth2.Token) error { + // Create token file with secure permissions + tokenDir := filepath.Dir(t.config.TokenFile) + if err := os.MkdirAll(tokenDir, 0o700); err != nil { + return backup.NewError(backup.ErrIO, "gdrive: unable to create token directory", err) + } + f, err := os.OpenFile(t.config.TokenFile, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0o600) if err != nil { return backup.NewError(backup.ErrIO, "gdrive: unable to cache oauth token", err) } defer f.Close() + + // Encrypt token before saving + encrypted, err := t.encryptToken(token) + if err != nil { + return backup.NewError(backup.ErrIO, "gdrive: unable to encrypt token", err) + } - return json.NewEncoder(f).Encode(token) + return json.NewEncoder(f).Encode(encrypted) }internal/httpcontroller/handlers/settings.go (1)
706-781
: 🛠️ Refactor suggestionReduce cognitive complexity by extracting target-specific logic.
The function has high cognitive complexity (65) due to nested conditionals and type-specific settings. Consider refactoring to improve maintainability and testability.
Extract target-specific validation into separate functions:
func updateBackupTargetsFromForm(field reflect.Value, formValues map[string][]string, prefix string) error { targets := []conf.BackupTarget{} targetsJSON, exists := formValues[prefix+".targets"] if exists && len(targetsJSON) > 0 { if err := json.Unmarshal([]byte(targetsJSON[0]), &targets); err != nil { return fmt.Errorf("error unmarshaling backup targets: %w", err) } for i, target := range targets { if target.Settings == nil { target.Settings = make(map[string]interface{}) } - switch strings.ToLower(target.Type) { - case "local": - if path, ok := target.Settings["path"].(string); !ok || path == "" { - target.Settings["path"] = "backups/" - } - case "ftp": - if _, ok := target.Settings["host"].(string); !ok { - target.Settings["host"] = "" - } - if _, ok := target.Settings["port"].(float64); !ok { - target.Settings["port"] = 21 - } - // ... more FTP settings - case "sftp": - if _, ok := target.Settings["host"].(string); !ok { - target.Settings["host"] = "" - } - if _, ok := target.Settings["port"].(float64); !ok { - target.Settings["port"] = 22 - } - // ... more SFTP settings - default: - return fmt.Errorf("unsupported backup target type for target %d: %s", i+1, target.Type) - } + if err := validateAndSetDefaultSettings(&target, i); err != nil { + return err + } targets[i] = target } } field.Set(reflect.ValueOf(targets)) return nil } +func validateAndSetDefaultSettings(target *conf.BackupTarget, index int) error { + switch strings.ToLower(target.Type) { + case "local": + return validateLocalTarget(target) + case "ftp": + return validateFTPTarget(target) + case "sftp": + return validateSFTPTarget(target) + default: + return fmt.Errorf("unsupported backup target type for target %d: %s", index+1, target.Type) + } +} +func validateLocalTarget(target *conf.BackupTarget) error { + if path, ok := target.Settings["path"].(string); !ok || path == "" { + target.Settings["path"] = "backups/" + } + return nil +} +func validateFTPTarget(target *conf.BackupTarget) error { + if _, ok := target.Settings["host"].(string); !ok { + target.Settings["host"] = "" + } + if _, ok := target.Settings["port"].(float64); !ok { + target.Settings["port"] = 21 + } + // ... more FTP settings + return nil +} +func validateSFTPTarget(target *conf.BackupTarget) error { + if _, ok := target.Settings["host"].(string); !ok { + target.Settings["host"] = "" + } + if _, ok := target.Settings["port"].(float64); !ok { + target.Settings["port"] = 22 + } + // ... more SFTP settings + return nil +}🧰 Tools
🪛 GitHub Check: golangci / lint
[failure] 707-707:
cognitive complexity 65 of funcupdateBackupTargetsFromForm
is high (> 50) (gocognit)🪛 GitHub Check: lint
[failure] 707-707:
cognitive complexity 65 of funcupdateBackupTargetsFromForm
is high (> 50) (gocognit)🪛 GitHub Actions: golangci-test
[warning] 707-707: cognitive complexity 65 of func
updateBackupTargetsFromForm
is high (> 50) (gocognit)🪛 GitHub Actions: golangci-lint
[warning] 707-707: cognitive complexity 65 of func
updateBackupTargetsFromForm
is high (> 50) (gocognit)🪛 GitHub Actions: Build docker image
[error] 707-707: cognitive complexity 65 of func
updateBackupTargetsFromForm
is high (> 50) (gocognit)
🧹 Nitpick comments (63)
internal/diskmanager/disk_usage_windows.go (2)
40-41
: Consider moving DLL and proc loading to package level.The DLL and proc loading could be optimized by moving it to package level variables instead of loading them on each function call. This would improve performance when these functions are called frequently.
package diskmanager +var ( + kernel32 = syscall.NewLazyDLL("kernel32.dll") + getDiskFreeSpaceEx = kernel32.NewProc("GetDiskFreeSpaceExW") +) func GetDiskUsage(baseDir string) (float64, error) { - kernel32 := syscall.NewLazyDLL("kernel32.dll") - getDiskFreeSpaceEx := kernel32.NewProc("GetDiskFreeSpaceExW") // ... rest of the function } func GetAvailableSpace(baseDir string) (uint64, error) { - kernel32 := syscall.NewLazyDLL("kernel32.dll") - getDiskFreeSpaceEx := kernel32.NewProc("GetDiskFreeSpaceExW") // ... rest of the function }
38-39
: Enhance function documentation with more details.The documentation could be more specific about the return value units and any path requirements. Consider adding examples for clarity.
-// GetAvailableSpace returns the available disk space in bytes +// GetAvailableSpace returns the available disk space in bytes for the specified directory path. +// The path must be an existing directory on a Windows filesystem. +// Returns an error if the path is invalid or if the system call fails. +// +// Example: +// +// space, err := GetAvailableSpace("C:\\Users\\") +// if err != nil { +// log.Fatal(err) +// } +// fmt.Printf("Available space: %.2f GB\n", float64(space)/1024/1024/1024)internal/backup/state.go (14)
14-21
: Ensure atomicity when recording updates.The
BackupState
struct contains references to schedules, targets, missed runs, and stats. Although everything is behind a mutex at theStateManager
level, ensure that all updates are performed atomically and consistently, especially when multiple fields (likeSchedules
andMissedRuns
) must be updated together.
23-29
: Validate schedule failures more explicitly.
ScheduleState.FailureCount
is incremented without upper bounds or reset conditions beyond a successful run. Consider if a limit or additional logging might help track recurring failures or to trigger alerts.
31-41
: Add clarity for the role ofTargetState
.
TargetState
tracks metrics likeLastBackupID
,LastBackupTime
, andTotalBackups
. Including brief documentation of how these fields are used (e.g., for reporting or scheduling next runs) would improve maintainability.
43-49
: Clarify missed vs. failed backups.
MissedBackup
captures missed schedules, whileScheduleState
captures failures. If a missed backup should also incrementFailureCount
or trigger additional logic, consider cross-referencing these structures to prevent confusion.
51-56
: Leverage composition or embedding if needed.
StateManager
holds the mutex, path, and pointer toBackupState
. If more state types or configurations are expected over time, consider embedding or composition patterns to keep the code organized.
58-91
: Handle non-standard config directories gracefully.
NewStateManager
picks the first config path fromGetDefaultConfigPaths
. If multiple paths are returned, consider a fallback strategy or more transparent error logging in case the first one fails to load or becomes inaccessible in the future.
107-138
: Consider concurrency implications when writing state.
saveState
uses atomic rename, which is good. However, if two calls tosaveState
happen very closely in time from separate goroutines, the second might overwrite the first. Ensure higher-level usage in the system prevents conflicts or that only one writer is active at any moment.
140-164
: Gracefully handle daily vs. weekly logic.
UpdateScheduleState
uses'daily'
or'weekly-' + Weekday
. If new schedule types arise (e.g., monthly), consider a modular approach or enumerations to avoid string-based logic in multiple places.
166-182
: Ensure missed backups are pruned or periodically cleared.
AddMissedBackup
keeps adding toMissedRuns
. If the system runs long-term, the missed backups list can grow unbounded. Provide a policy or maximum cap, or encourage ops to callClearMissedBackups
in routine intervals.
201-208
: Confirm stats concurrency usage.
UpdateStats
replaces the entiresm.state.Stats
map with the new one. This is safe under themu.Lock()
, but if partial merges are required, consider merging at the map level to avoid overwriting concurrent changes, if that usage scenario arises.
210-223
: Return default schedule state for missing keys.
GetScheduleState
returns the storedScheduleState
, but if the key does not exist, this yields a zeroed struct. Consider logging that the schedule is missing or verifying whether the schedule was properly initialized.
225-229
: Handle unknown targets gracefully.
GetTargetState
returns a zeroedTargetState
if the target is not found. If it’s used downstream in a way that expects valid data, consider logging or returning an error to prevent silent failures.
231-240
: Prevent outside modifications to missed backups.
GetMissedBackups
returns a copy, which is good. Examine if the copy is truly read-only or if a user might modify the objects inside it. If immutability is needed, consider deeper copies to avoid accidental mutation.
242-249
: Log or track missed backups clearance.
ClearMissedBackups
empties the list but no record is kept. If auditing or long-term tracking is needed, consider logging the clearance operation.internal/backup/targets/local.go (15)
20-29
: Consider user-configurable limits.Constants like
maxBackupSize
(10GB) may not suit all environments. Allowing some configuration, even environment variables, could make this more flexible.
31-37
: Account for additional Windows reserved names.
windowsReservedNames
is a good safeguard. However, Windows has other device names likeCOM5-9
orLPT5-9
. If you want broader coverage, consider an approach that extends to all COM and LPT variants.
86-91
: Add docstring explaining concurrency expectations.
LocalTarget
might be used by multiple goroutines. Although there is no explicit concurrency here, clarifying whether each instance is meant to be used concurrently or if a new instance should be created for each backup might be helpful.
93-97
: Document fields in LocalTargetConfig.
LocalTargetConfig
has just aPath
andDebug
flag. If more fields are upcoming, adding doc comments or clarifying usage can ward off confusion as the codebase grows.
99-115
: Evaluate more transient errors.
isTransientError
checks for timeouts, "connection reset," etc. Additional checks might be relevant if the local filesystem experiences momentary OS-level lock or slow I/O, but that might require more nuanced logic.
117-120
: Adjust backoff strategy for large backups.The exponential backoff uses a base of 100ms, which might be too short for high-latency operations, especially for large backups. Consider providing a configurable or higher default backoff delay.
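A configurable variant might look like the following sketch; the base and cap values are illustrative, not values the target uses today.

```go
// Sketch: exponential backoff with a configurable base delay and an upper cap.
func backoffDelay(attempt int, base, max time.Duration) time.Duration {
	d := base << uint(attempt) // base * 2^attempt
	if d > max || d <= 0 {     // guard against overflow and cap the wait
		return max
	}
	return d
}

// Example: base 500ms capped at 30s yields 500ms, 1s, 2s, 4s, ... 30s
// time.Sleep(backoffDelay(attempt, 500*time.Millisecond, 30*time.Second))
```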
122-139
: Include debug logs for all retry attempts.
withRetry
logs only on transient errors. Consider logging the final successful attempt or total attempts used, so administrators get full insight when a transient error eventually resolves on its last try.
141-146
: Check for partial copies.
copyFile
relies onio.CopyBuffer
. If the source file changes in the middle of copying, we might get partial writes. Typically for local backups, it’s not a big concern, but note that file locking or exclusive access might be safer if data changes are frequent.
148-211
: Harden path validation further.
validatePath
forbids symlinks and relative paths with "..," which is great. On some OS, special files or devices might appear valid but lead to unexpected behavior. If security is critical, consider disallowing any block devices or special file types.
248-251
: Name is fixed.
Name()
is hardcoded to"local"
. If multiple local targets are used (e.g., different directories), consider returning a unique identifier to differentiate them in logs or system outputs.
253-353
: Offer advanced deduplication or versioning strategies.
Store
overwrites the existing file if there's a name collision (based onfilepath.Base(sourcePath)
). For advanced use cases, consider unique subdirectories, timestamps, or hashing. This might avoid accidental overwrites if the same file is backed up from multiple sources.
355-390
: Opportunity for high-level checks after verification.
verifyBackup
checks size only. If there's a need to verify checksums or compare metadata, consider adding an MD5/SHA256 verification step for stronger integrity assurance.
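If checksum verification is added, a streaming SHA-256 helper is enough for local files. This is a sketch with an assumed helper name, not existing code, and it assumes crypto/sha256, encoding/hex, io, and os are imported.

```go
// Sketch: hash a file with SHA-256 without loading it fully into memory,
// so the same digest can be compared between source and stored backup.
func fileSHA256(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}
```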
378-436
: Refactor to skip.meta
checks more gracefully.
List
filters files by.meta
. If this logic grows (e.g., new file extensions), consider a general approach or a separate function that retrieves backups vs. scanning for orphaned metadata. It keeps the listing logic more maintainable.
438-463
: Consider logging or returning info on what was deleted.
Delete
removes the backup file and metadata. If needed, gather details (timestamp, size, or reason) and either log them or return them to calling code, for an auditable track of deletion events.
465-514
: Validate ensures 1GB free.
Validate
demands at least 1GB free space, which is a decent baseline. Some minimal embedded systems might have less total storage. Consider making this threshold configurable if you expect usage in constrained environments.internal/backup/sources/sqlite.go (16)
22-26
: Add clarifying comments for usage.
SQLiteSource
is straightforward but might benefit from a short doc clarifying how it’s used in tandem with backup configurations (conf.Settings
).
28-34
: Allow optional custom logger injection.
NewSQLiteSource
always usesbackup.DefaultLogger
. If advanced logging is needed (e.g., structured logs), consider allowing injection of a custombackup.Logger
in the constructor.
36-42
: Sufficiency of the source naming approach.
Name()
returns the filename without extension. If multiple SQLite databases share the same base name but different directories, logs or references might collide. Consider prefixing with the parent directory or a user-friendly label if necessary.
44-56
: Consider additional fields for operation tracking.
Operation
holds an ID andStartTime
. You might want to track database size, user, or environment for auditing or debugging. This helps with more informative logs if multiple backups happen concurrently.
58-77
: Fine-tune debug vs. non-debug logging.
logf
selectively prints more details in debug mode. If you need granular control, consider levels (info, warning, error). Presently you only have debug on/off, which might limit log expressiveness as the system evolves.
102-115
: Catch read-only database edge cases.
verifySourceDatabase
ensures the database file exists but doesn’t confirm if a read-only system or WAL mode might cause partial backups. Consider verifying the journaling mode up front if it’s crucial for your app’s consistency requirements.
117-135: Ensure proper tear-down on errors.
DatabaseConnection.Close() sets closed = true to prevent repeated close attempts. That's good. Check that any ephemeral resources (like transaction handles or statements) are also closed as the system grows. Currently it's straightforward, but keep future expansions in mind.

170-185: Expand optional checks in integrity checks.
verifyDatabaseIntegrity checks PRAGMA integrity_check. Some users might want quick_check for performance or advanced checks (like foreign key checks). Offer a more user-driven or config-based approach if needed.

187-200: BackupError usage clarity.
BackupError includes an Op, Path, and IsMedia. This is a handy pattern. If any additional classification is needed (like ErrCorruption or ErrDatabase specifics), consider an internal code or typed error approach for advanced error handling.

278-287: Add specialized logging inside withDatabase.
withDatabase calls openDatabase, defers close, then runs fn. If an error occurs in fn, consider logging precisely where it occurred. Right now, the error is just returned. More context might help distinguish connection failures from operation failures.

289-300: Surface partial or re-verification logic.
verifyDatabase does a quick file existence check, then an integrity check. If new constraints or advanced verifications are introduced (like large table row checks), you might want to break them out or allow partial verifications.

302-332: Also check auto_vacuum or other settings.
getDatabaseInfo focuses on page count, page size, and journal mode. If advanced conditions matter (like auto_vacuum for disk usage or synchronous mode for data safety), you could gather and log them similarly.

334-346: Error handling on partial backups.
initializeBackupConnection returns a sqlite3.SQLiteBackup and uses the same page counts for start and total. If the database changes mid-backup, page counts can shift. Consider handling that scenario or logging if page counts differ drastically mid-backup.

362-389: Provide progress or partial feedback.
performBackupSteps runs chunked steps of 1000 pages. If you want user-facing progress, you could hook in a callback or log after each step, as sketched below. Right now, only errors short-circuit. This might be okay for smaller DBs, but large DB backups might benefit from progress logs.
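A minimal sketch of per-step progress logging, assuming the mattn/go-sqlite3 backup API that performBackupSteps appears to use; the helper name, logf signature, and 1000-page step size are illustrative:

package sources

import (
	"fmt"

	sqlite3 "github.com/mattn/go-sqlite3"
)

func stepWithProgress(bk *sqlite3.SQLiteBackup, logf func(string, ...interface{})) error {
	const pagesPerStep = 1000
	for {
		done, err := bk.Step(pagesPerStep) // copy up to 1000 pages per step
		if err != nil {
			return fmt.Errorf("backup step failed: %w", err)
		}
		if total := bk.PageCount(); total > 0 {
			copied := total - bk.Remaining()
			logf("backup progress: %d/%d pages (%.0f%%)", copied, total, 100*float64(copied)/float64(total))
		}
		if done {
			return nil
		}
	}
}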
409-489: Ensure robust error handling during big DB backups.
streamBackupToWriter gracefully cleans up the temp file. If the DB is extremely large, consider a strategy that avoids ephemeral disk usage (like the direct pipe-based approach sketched below) if that's feasible. This approach is still valid; just watch disk space usage.
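A minimal pipe-based sketch, assuming a hypothetical writeBackupTo helper that can emit the backup bytes to any io.Writer (the SQLite backup API itself writes to a database file, so this only applies if such a streaming path is feasible):

package sources

import "io"

func streamViaPipe(dst io.Writer, writeBackupTo func(io.Writer) error) error {
	pr, pw := io.Pipe()
	go func() {
		// Propagate the backup error (or nil) to the reader side when done.
		pw.CloseWithError(writeBackupTo(pw))
	}()
	_, err := io.Copy(dst, pr) // stream directly into the archive writer, no temp file
	return err
}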
491-529: Allow fallback if WAL or streaming fails.
Backup orchestrates a streaming backup. If WAL mode or data chunking is unsupported in certain environments, consider a fallback to a simpler method (like VACUUM INTO for smaller DBs, sketched below) to reduce complexity. That might help cross-compatibility with older SQLite versions.
internal/backup/targets/ftp.go (3)

181-188: Validate the health check beyond NOOP.
While NoOp() is a common quick check, consider verifying the server's response code more thoroughly or issuing a more involved command to ensure the FTP session is fully ready.

493-501: Consider implementing feature checks or removing them if unused.
The Validate method prints a warning that feature checks are not supported by the current FTP library, but config.Features is never otherwise used. You might remove this field or implement a custom approach to feature validation.

622-650: Potential concurrency race in final cleanup.
You rely on one more FTP connection to remove temporary files when calling Close(). Under very high concurrency or abrupt shutdown scenarios, leftover temporary files might not be fully cleaned up. Consider a more robust shutdown or repeated cleanup process to reduce the risk of orphaned files.

internal/backup/targets/rsync.go (3)
161-176: Expand transient error handling if necessary.
Currently, isTransientError accounts for many network-related failures, but you might want to include additional SSH- or rsync-specific error strings if you observe new transient conditions in production logs.
385-449: Consider verifying remote free space before rsync.
Atomic uploads are robust, yet the code does not check whether the remote host has enough space to complete the backup. For large backups, failing after part of the data has been uploaded wastes time and bandwidth. A rough pre-check is sketched below.
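A rough sketch of a remote free-space pre-check over SSH, assuming the target already shells out to ssh/rsync; host, remotePath, and requiredBytes are illustrative, and the df -P -B1 flags and column layout are GNU/POSIX-ish rather than universal:

package targets

import (
	"context"
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func hasRemoteSpace(ctx context.Context, host, remotePath string, requiredBytes uint64) (bool, error) {
	out, err := exec.CommandContext(ctx, "ssh", host, "df", "-P", "-B1", remotePath).Output()
	if err != nil {
		return false, fmt.Errorf("remote df failed: %w", err)
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	if len(lines) < 2 {
		return false, fmt.Errorf("unexpected df output: %q", string(out))
	}
	fields := strings.Fields(lines[len(lines)-1])
	if len(fields) < 4 {
		return false, fmt.Errorf("unexpected df output: %q", lines[len(lines)-1])
	}
	avail, err := strconv.ParseUint(fields[3], 10, 64) // "Available" column, in bytes
	if err != nil {
		return false, fmt.Errorf("parsing available space: %w", err)
	}
	return avail >= requiredBytes, nil
}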
527-556: Check concurrency while cleaning up tracked temporary files.
cleanupTempFiles locks the entire operation, which is likely fine for typical usage, but if multiple concurrent cleanup demands occur, it might slow other rsync tasks. Clarify concurrency expectations or move the operation to a safer shutdown phase if needed.

Would you like a script to investigate usage patterns or concurrency in a broader scope to confirm the overhead?
internal/backup/backup.go (1)
278-280: Confirm the file extension choice aligns with future compression changes.
The code sets .tar.gz as the default extension, then changes it to .enc if encryption is enabled. If newer compression formats or multi-step compressions are introduced, you might rename it accordingly for clarity.

internal/backup/errors.go (2)
43-97: Consider adding error code documentation.
The Error struct and its methods are well-implemented. However, consider adding a comment explaining the purpose of each field in the struct.

 // Error represents a backup operation error
 type Error struct {
-	Code    ErrorCode // Error classification
-	Message string    // Human-readable error message
-	Err     error     // Original error if any
+	// Code represents the specific type of backup error that occurred
+	Code ErrorCode
+	// Message provides a human-readable description of the error
+	Message string
+	// Err contains the original error that caused this backup error, if any
+	Err error
 }

99-118: Add nil check for message parameter in NewError.
The NewError function should validate that the message parameter is not empty.

 // NewError creates a new backup error
 func NewError(code ErrorCode, message string, err error) error {
+	if message == "" {
+		message = "unknown backup error"
+	}
 	return &Error{
 		Code:    code,
 		Message: message,
 		Err:     err,
 	}
 }

cmd/restore/restore.go (3)
47-47: Combine parameter types for better readability.
The function signature can be simplified by combining the string parameters.

-func runDecrypt(settings *conf.Settings, backupPath string, outputPath string) error {
+func runDecrypt(settings *conf.Settings, backupPath, outputPath string) error {

🧰 Tools
🪛 GitHub Check: golangci / lint

[failure] 47-47:
paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)

🪛 GitHub Check: lint

[failure] 47-47:
paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)
112-119: Update octal literals to use new style.
Use the new octal literal style (0o prefix) for better readability and consistency with modern Go conventions.

-	if err := os.MkdirAll(filepath.Dir(outputPath), 0755); err != nil {
+	if err := os.MkdirAll(filepath.Dir(outputPath), 0o755); err != nil {
 		return fmt.Errorf("failed to create output directory: %w", err)
 	}

 	// Write the decrypted data
-	if err := os.WriteFile(outputPath, decryptedData, 0600); err != nil {
+	if err := os.WriteFile(outputPath, decryptedData, 0o600); err != nil {

🧰 Tools
🪛 GitHub Check: golangci / lint
[failure] 112-112:
octalLiteral: use new octal literal style, 0o755 (gocritic)
[failure] 117-117:
octalLiteral: use new octal literal style, 0o600 (gocritic)

🪛 GitHub Check: lint
[failure] 112-112:
octalLiteral: use new octal literal style, 0o755 (gocritic)
[failure] 117-117:
octalLiteral: use new octal literal style, 0o600 (gocritic)
133-133: Update octal literal for metadata file permissions.
Use the new octal literal style for the metadata file permissions.

-	if err := os.WriteFile(newMetaPath, updatedMetaData, 0600); err != nil {
+	if err := os.WriteFile(newMetaPath, updatedMetaData, 0o600); err != nil {

🧰 Tools
🪛 GitHub Check: golangci / lint
[failure] 133-133:
octalLiteral: use new octal literal style, 0o600 (gocritic)

🪛 GitHub Check: lint

[failure] 133-133:
octalLiteral: use new octal literal style, 0o600 (gocritic)

cmd/backup/backup.go (1)
133-134: Consider making backup timeout configurable.
The 10-minute timeout is hardcoded. Consider making it configurable through settings.

+	// Default to 10 minutes if not configured
+	timeout := settings.Backup.Timeout
+	if timeout <= 0 {
+		timeout = 10 * time.Minute
+	}
-	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
+	ctx, cancel := context.WithTimeout(context.Background(), timeout)

internal/httpcontroller/template_renderers.go (1)
133-154: Consider moving key file operations to a separate function.
The key file handling logic could be extracted into a separate function for better maintainability.

+// getKeyFileDetails returns existence, hash, and creation time of the encryption key file
+func (s *Server) getKeyFileDetails(manager *backup.Manager) (exists bool, hash, created string) {
+	keyPath, err := manager.GetEncryptionKeyPath()
+	if err != nil {
+		return false, "", ""
+	}
+
+	info, err := os.Stat(keyPath)
+	if err != nil {
+		return false, "", ""
+	}
+
+	keyBytes, err := os.ReadFile(keyPath)
+	if err != nil {
+		return true, "", info.ModTime().UTC().Format("2006-01-02 15:04:05 UTC")
+	}
+
+	if len(keyBytes) >= 8 {
+		hash = string(keyBytes[:8])
+	}
+	created = info.ModTime().UTC().Format("2006-01-02 15:04:05 UTC")
+	return true, hash, created
+}

internal/httpcontroller/backup_routes.go (1)
19-133: Consider adding rate limiting to the key generation endpoint.
The key generation endpoint should be rate limited to prevent abuse.

+const (
+	keySize          = 32 // 256-bit key
+	maxKeyGenPerHour = 5
+)
+
+var keyGenLimiter = rate.NewLimiter(rate.Every(time.Hour/maxKeyGenPerHour), 1)
+
 func (s *Server) handleGenerateKey(c echo.Context) error {
+	if !keyGenLimiter.Allow() {
+		return c.JSON(http.StatusTooManyRequests, map[string]interface{}{
+			"success": false,
+			"message": "Too many key generation requests",
+		})
+	}

internal/backup/scheduler.go (1)
191-249: Enhance backup operation robustness.
Consider the following improvements:
- Make the timeout configurable based on backup size or target type.
- Add cleanup logic for partial backups in case of failure.
- Consider implementing a rollback mechanism for failed backups.
Example implementation:
 func (s *Scheduler) runBackup(schedule *BackupSchedule) {
 	s.logger.Printf("🔄 Running scheduled backup (Weekly: %v)", schedule.IsWeekly)

-	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Hour)
+	timeout := s.calculateBackupTimeout()
+	ctx, cancel := context.WithTimeout(context.Background(), timeout)
 	defer cancel()

+	// Create a temporary backup directory
+	tempDir, err := os.MkdirTemp("", "backup-*")
+	if err != nil {
+		s.logger.Printf("❌ Failed to create temporary directory: %v", err)
+		return
+	}
+	defer os.RemoveAll(tempDir)

 	// Run the backup
 	if err := s.manager.RunBackup(ctx); err != nil {
 		s.logger.Printf("❌ Scheduled backup failed: %v", err)
+		s.cleanupPartialBackup(ctx)
 		...
 	}
 	...
 }

+func (s *Scheduler) calculateBackupTimeout() time.Duration {
+	// Calculate timeout based on backup size, target type, etc.
+	return 2 * time.Hour // Default
+}

+func (s *Scheduler) cleanupPartialBackup(ctx context.Context) {
+	// Implement cleanup logic
+}

🧰 Tools
🪛 GitHub Check: golangci / lint
[failure] 239-239:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)

🪛 GitHub Check: lint
[failure] 239-239:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
go.sum is excluded by !**/*.sum
📒 Files selected for processing (37)
- cmd/backup/backup.go (1 hunks)
- cmd/restore/restore.go (1 hunks)
- cmd/root.go (3 hunks)
- go.mod (2 hunks)
- internal/backup/backup.go (1 hunks)
- internal/backup/encryption.go (1 hunks)
- internal/backup/errors.go (1 hunks)
- internal/backup/logger.go (1 hunks)
- internal/backup/metadata_unix.go (1 hunks)
- internal/backup/metadata_windows.go (1 hunks)
- internal/backup/scheduler.go (1 hunks)
- internal/backup/sources/sqlite.go (1 hunks)
- internal/backup/state.go (1 hunks)
- internal/backup/targets/ftp.go (1 hunks)
- internal/backup/targets/gdrive.go (1 hunks)
- internal/backup/targets/local.go (1 hunks)
- internal/backup/targets/rsync.go (1 hunks)
- internal/backup/targets/sftp.go (1 hunks)
- internal/conf/config.go (3 hunks)
- internal/conf/defaults.go (2 hunks)
- internal/conf/validate.go (2 hunks)
- internal/diskmanager/disk_usage_unix.go (1 hunks)
- internal/diskmanager/disk_usage_windows.go (1 hunks)
- internal/httpcontroller/backup_routes.go (1 hunks)
- internal/httpcontroller/handlers/settings.go (4 hunks)
- internal/httpcontroller/routes.go (2 hunks)
- internal/httpcontroller/server.go (1 hunks)
- internal/httpcontroller/template_functions.go (2 hunks)
- internal/httpcontroller/template_renderers.go (2 hunks)
- views/elements/sidebar.html (1 hunks)
- views/settings/backupSettings.html (1 hunks)
- views/settings/templates/checkbox.html (1 hunks)
- views/settings/templates/hostField.html (2 hunks)
- views/settings/templates/numberField.html (1 hunks)
- views/settings/templates/passwordField.html (4 hunks)
- views/settings/templates/selectField.html (1 hunks)
- views/settings/templates/textField.html (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (16)
- internal/httpcontroller/server.go
- internal/diskmanager/disk_usage_unix.go
- internal/httpcontroller/template_functions.go
- views/settings/templates/numberField.html
- views/elements/sidebar.html
- internal/backup/logger.go
- views/settings/backupSettings.html
- views/settings/templates/passwordField.html
- internal/backup/metadata_windows.go
- views/settings/templates/checkbox.html
- views/settings/templates/hostField.html
- internal/backup/metadata_unix.go
- views/settings/templates/selectField.html
- cmd/root.go
- views/settings/templates/textField.html
- internal/httpcontroller/routes.go
🧰 Additional context used
🪛 GitHub Check: golangci / lint
internal/httpcontroller/handlers/settings.go
[failure] 707-707:
cognitive complexity 65 of func updateBackupTargetsFromForm is high (> 50) (gocognit)
internal/backup/scheduler.go
[failure] 239-239:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
[failure] 420-420:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
internal/backup/backup.go
[failure] 954-954:
rangeValCopy: each iteration copies 176 bytes (consider pointers or indexing) (gocritic)
[failure] 986-986:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
[failure] 1079-1079:
rangeValCopy: each iteration copies 176 bytes (consider pointers or indexing) (gocritic)
cmd/restore/restore.go
[failure] 47-47:
paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)
[failure] 112-112:
octalLiteral: use new octal literal style, 0o755 (gocritic)
[failure] 117-117:
octalLiteral: use new octal literal style, 0o600 (gocritic)
[failure] 133-133:
octalLiteral: use new octal literal style, 0o600 (gocritic)
🪛 GitHub Check: lint
internal/httpcontroller/handlers/settings.go
[failure] 707-707:
cognitive complexity 65 of func updateBackupTargetsFromForm is high (> 50) (gocognit)
internal/backup/scheduler.go
[failure] 239-239:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
[failure] 420-420:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
internal/backup/backup.go
[failure] 954-954:
rangeValCopy: each iteration copies 176 bytes (consider pointers or indexing) (gocritic)
[failure] 986-986:
rangeValCopy: each iteration copies 128 bytes (consider pointers or indexing) (gocritic)
[failure] 1079-1079:
rangeValCopy: each iteration copies 176 bytes (consider pointers or indexing) (gocritic)
cmd/restore/restore.go
[failure] 47-47:
paramTypeCombine: func(settings *conf.Settings, backupPath string, outputPath string) error could be replaced with func(settings *conf.Settings, backupPath, outputPath string) error (gocritic)
[failure] 112-112:
octalLiteral: use new octal literal style, 0o755 (gocritic)
[failure] 117-117:
octalLiteral: use new octal literal style, 0o600 (gocritic)
[failure] 133-133:
octalLiteral: use new octal literal style, 0o600 (gocritic)
🪛 GitHub Actions: golangci-test
internal/httpcontroller/handlers/settings.go
[warning] 707-707: cognitive complexity 65 of func updateBackupTargetsFromForm is high (> 50) (gocognit)
🪛 GitHub Actions: golangci-lint
internal/httpcontroller/handlers/settings.go
[warning] 707-707: cognitive complexity 65 of func updateBackupTargetsFromForm is high (> 50) (gocognit)
🪛 GitHub Actions: Build docker image
internal/httpcontroller/handlers/settings.go
[error] 707-707: cognitive complexity 65 of func updateBackupTargetsFromForm is high (> 50) (gocognit)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (28)
internal/backup/state.go (3)
93-105: Check JSON file correctness.
loadState unconditionally attempts to parse JSON. Possibly add stricter validation (e.g., checking schema or fields). This can prevent partial or malformed JSON from silently causing issues during runtime.

184-199: Validate metadata before updating target state.
UpdateTargetState relies on metadata fields like ID or Size. If metadata is nil or incomplete, we could risk stale or invalid data. Consider verifying metadata != nil before applying updates.

251-262: Ensure consistency in usage of stats.
GetStats returns a shallow copy of the map. If the system evolves to store nested structures, consider deeper copies or a snapshot approach. Presently, for basic numeric or small-value stats, this is fine.

internal/backup/targets/local.go (2)
38-84: Atomic write approach is solid.
atomicWriteFile thoroughly handles partial writes, file permission setting, and rename errors. Ensure that any external cleanup logic (like rolling back large changes) is also consistent if these writes are part of bigger transactions.

213-246: Improve fallback on directory creation.
NewLocalTarget attempts to MkdirAll(absPath, dirPermissions) and then Chmod to set strict permissions. Confirm that platforms (e.g., Windows) gracefully handle 0o700. Also, consider an error path if strict permissions fail for any reason on Windows.

internal/backup/sources/sqlite.go (7)
1-20: Imports and file structure look organized.
The imports are well-grouped for sqlite.go. The code includes error classification, backup logic, and OS-specific checks. This is a good foundation for cross-platform reliability.

79-100: Check path consistency.
validateConfig transforms relative paths into absolute ones. If the configuration changes mid-run or the path is reconfigured, ensure that no stale references are used. In large apps, consistent usage of dbPath may be crucial.
137-168: Use caution with WAL in read-only contexts.
When readOnly is true, the DSN sets mode=ro plus &_journal_mode=WAL. On some platforms, WAL might require write access to the journal file. Double-check whether purely read-only backups are feasible with WAL on all OS and SQLite versions; a hypothetical alternative DSN is sketched below.
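A hypothetical read-only DSN for mattn/go-sqlite3 that sidesteps journal writes entirely; immutable=1 is only safe when no other process can modify the database file while the backup runs:

dsn := fmt.Sprintf("file:%s?mode=ro&immutable=1", dbPath)
db, err := sql.Open("sqlite3", dsn) // read-only handle; no WAL/journal writes expected
if err != nil {
	return fmt.Errorf("opening read-only database: %w", err)
}
defer db.Close()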
202-276: Great thoroughness in media error detection.
isMediaError checks many syscall-specific conditions across platforms. This is robust. Just be aware that new error codes or OS variations might appear. Regular maintenance or build-time detection of OS constants can help keep coverage up to date.

348-360: Account for edge cases in page count validation.
validatePageCount sets a fallback if total <= 0. Some corner cases, like extremely small or newly created databases with minimal pages, might require distinct handling. Confirm that the fallback logic suffices in production usage.

391-407: Consider OS-specific performance differences.
copyBackupToWriter reads from a temporary file with io.Copy. On certain OSes or large DB backups, direct streaming from the backup DB might be more efficient. However, your approach ensures a stable snapshot, which is beneficial for consistency.

531-541: Check read/write permissions for the final stage.
Validate() assures the DB path is valid in read-only mode. If the actual backup operation might require partial writes (like WAL updates), consider verifying that the environment is truly read/write capable. This is platform- and config-dependent but can prevent runtime surprises.

internal/backup/targets/ftp.go (3)
49-55: Consider utilizing the MinSpace configuration.
The FTPTargetConfig struct defines MinSpace, but it is not referenced anywhere in the code, so any minimum space requirement is effectively ignored.

Would you like to enforce disk space checks (e.g., by querying and comparing available space before uploading) to guarantee sufficient space for backups?
147-164: Revisit connection pool size checks.
Even though the code uses a fixed-size channel to manage the pool, there is no explicit check preventing new connections from being created once the pool has reached capacity. This can lead to oversubscription of connections under load; a bounded-pool sketch follows.
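A sketch of a strictly bounded pool that uses a buffered channel as a semaphore, assuming the jlaffaye/ftp client; maxConns, the dial function, and the type names are stand-ins for the target's actual configuration:

package targets

import "github.com/jlaffaye/ftp"

type ftpPool struct {
	slots chan struct{}        // capacity == maxConns; one slot per live connection
	conns chan *ftp.ServerConn // idle connections available for reuse
	dial  func() (*ftp.ServerConn, error)
}

func (p *ftpPool) get() (*ftp.ServerConn, error) {
	select {
	case c := <-p.conns: // reuse an idle connection first
		return c, nil
	default:
	}
	p.slots <- struct{}{} // blocks once maxConns connections already exist
	c, err := p.dial()
	if err != nil {
		<-p.slots // release the slot if dialing failed
		return nil, err
	}
	return c, nil
}

func (p *ftpPool) put(c *ftp.ServerConn) {
	select {
	case p.conns <- c: // keep it for reuse; the slot stays held
	default:
		_ = c.Quit() // idle buffer full: close the connection and free its slot
		<-p.slots
	}
}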
295-320: Enhance atomic upload robustness.
Similar to a past suggestion, consider expanding this approach with checksum verification or partial-resume support for large files; a minimal post-upload size check is sketched below.
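A small post-upload size check, assuming the jlaffaye/ftp client; a true checksum comparison would need server-side support (e.g. XCRC or HASH extensions), which many servers lack:

package targets

import (
	"fmt"

	"github.com/jlaffaye/ftp"
)

func verifyUploadedSize(conn *ftp.ServerConn, remotePath string, localSize int64) error {
	remoteSize, err := conn.FileSize(remotePath)
	if err != nil {
		return fmt.Errorf("querying remote size: %w", err)
	}
	if remoteSize != localSize {
		return fmt.Errorf("size mismatch after upload: local %d bytes, remote %d bytes", localSize, remoteSize)
	}
	return nil
}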
222-277: Re-evaluate restricting hidden files in sanitizePath.
The current check forbids all filenames and directories starting with a dot, which may be overly restrictive if users genuinely need to back up hidden files or folders.

Do you want to allow hidden resources (like .config files) under certain conditions?
281-283: Custom file extension for encrypted backups is well-structured.
Using a distinct .enc extension clearly distinguishes encrypted archives from regular .tar.gz files, aiding maintainers and users.
445-449: Handle partial success in multi-target storage.
When storing a backup in multiple targets, some might succeed while others fail. Currently, you return partial results if at least one target succeeds. This logic is correct for best-effort scenarios, but confirm that it aligns with end-user expectations (e.g., if partial success is acceptable or if you need all-or-nothing).

internal/backup/errors.go (1)
9-41: LGTM! Well-structured error code definitions.
The error codes are comprehensive and well-documented, covering various backup scenarios.
internal/backup/encryption.go (2)
15-84: LGTM! Secure implementation of encryption key management.
The implementation follows security best practices:
- Uses secure file permissions (0o600) for key storage
- Validates key length (256-bit)
- Implements proper error handling
86-134: LGTM! Secure implementation of encryption/decryption.
The implementation follows cryptographic best practices (a minimal sketch of the pattern follows the list):
- Uses AES-256-GCM for authenticated encryption
- Properly handles nonces
- Validates encrypted data length
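A minimal sketch of the AES-256-GCM pattern described above, not the PR's exact function; the key must be 32 bytes, and the nonce is prepended to the ciphertext so decryption can recover it:

package backup

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"io"
)

func encrypt(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // 32-byte key selects AES-256
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	// Seal appends the ciphertext (and auth tag) after the nonce.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}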
internal/backup/targets/gdrive.go (1)
629-685: Well-implemented pagination for large folders.
The function correctly implements pagination using nextPageToken to handle large folders, ensuring all backups are retrieved. A generic sketch of the pattern is shown below.
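A generic pagination sketch against the Drive v3 API (google.golang.org/api/drive/v3), for illustration only; folderID, the query string, and the page size are assumptions rather than the PR's exact values:

package targets

import (
	"fmt"

	drive "google.golang.org/api/drive/v3"
)

func listAllBackups(srv *drive.Service, folderID string) ([]*drive.File, error) {
	var all []*drive.File
	pageToken := ""
	for {
		call := srv.Files.List().
			Q(fmt.Sprintf("'%s' in parents and trashed = false", folderID)).
			Fields("nextPageToken, files(id, name, createdTime, size)").
			PageSize(100)
		if pageToken != "" {
			call = call.PageToken(pageToken)
		}
		res, err := call.Do()
		if err != nil {
			return nil, fmt.Errorf("listing backups: %w", err)
		}
		all = append(all, res.Files...)
		if res.NextPageToken == "" {
			return all, nil // no more pages
		}
		pageToken = res.NextPageToken
	}
}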
internal/httpcontroller/handlers/settings.go (5)

104-121: LGTM! Backup settings validation and notification handling.
The code correctly validates backup configuration when enabled and provides appropriate user feedback through notifications.
783-828: LGTM! Comprehensive backup configuration update logic.
The function correctly handles all backup configuration fields and their respective types.

830-853: LGTM! Thorough comparison of backup settings.
The function effectively detects changes in all relevant backup settings using appropriate comparison methods.

855-878: LGTM! Secure backup key generation with proper error handling.
The function correctly handles key generation and provides appropriate user feedback.

880-901: LGTM! Secure backup key download with proper error handling.
The function correctly handles key download with appropriate headers and error checks.
go.mod (1)
40-81: LGTM! Dependencies added for backup functionality.
The dependencies correctly support the new backup features:
- FTP/SFTP support via github.com/jlaffaye/ftp and github.com/pkg/sftp
- Google Cloud Storage support via cloud.google.com/go/storage
- Telemetry and monitoring support via OpenTelemetry packages
Also applies to: 136-136
targetStats.NewestBackup = backups[0].Timestamp
}

for _, backup := range backups {
🛠️ Refactor suggestion
Avoid value copy in range loops for large structs.
Static analysis warns about performance overhead when iterating over slices of large structs by value. Instead, iterate by index and then reference the struct.
Example fix (for line 954):
- for _, backup := range backups {
+ for i := range backups {
+ backup := &backups[i]
...
Apply a similar adjustment for lines 986 and 1079 to reduce copying overhead.
Also applies to: 986-986, 1079-1079
🧰 Tools
🪛 GitHub Check: golangci / lint
[failure] 954-954:
rangeValCopy: each iteration copies 176 bytes (consider pointers or indexing) (gocritic)
🪛 GitHub Check: lint
[failure] 954-954:
rangeValCopy: each iteration copies 176 bytes (consider pointers or indexing) (gocritic)
if keyBytes, err := os.ReadFile(keyPath); err == nil {
	// Take first 8 characters of the key as a short identifier
	if len(keyBytes) >= 8 {
		keyHash = string(keyBytes[:8])
	}
🛠️ Refactor suggestion
Add error handling for key file reading.
The key file reading operation should handle potential errors more gracefully.
 // Read the key file to get its hash
 if keyBytes, err := os.ReadFile(keyPath); err == nil {
 	// Take first 8 characters of the key as a short identifier
 	if len(keyBytes) >= 8 {
 		keyHash = string(keyBytes[:8])
+	} else {
+		log.Printf("Warning: Key file %s is too short", keyPath)
+		keyHash = "invalid"
 	}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if keyBytes, err := os.ReadFile(keyPath); err == nil {
	// Take first 8 characters of the key as a short identifier
	if len(keyBytes) >= 8 {
		keyHash = string(keyBytes[:8])
	} else {
		log.Printf("Warning: Key file %s is too short", keyPath)
		keyHash = "invalid"
	}
// ImportEncryptionKey imports an encryption key from a file
func (m *Manager) ImportEncryptionKey(content []byte) error {
	// Parse the key file content
	lines := strings.Split(string(content), "\n")
	if len(lines) < 3 {
		return NewError(ErrValidation, "invalid key file format", nil)
	}

	// Verify the header
	if !strings.HasPrefix(lines[0], "BirdNET-Go Backup Encryption Key") {
		return NewError(ErrValidation, "invalid key file format: missing header", nil)
	}

	// Extract the key
	var key string
	for _, line := range lines {
		if strings.HasPrefix(line, "Key: ") {
			key = strings.TrimPrefix(line, "Key: ")
			key = strings.TrimSpace(key)
			break
		}
	}

	if key == "" {
		return NewError(ErrValidation, "invalid key file format: key not found", nil)
	}

	// Validate key format (should be hex-encoded)
	if _, err := hex.DecodeString(key); err != nil {
		return NewError(ErrValidation, "invalid key format: not hex-encoded", err)
	}

	// Get the key file path
	keyPath, err := m.getEncryptionKeyPath()
	if err != nil {
		return err
	}

	// Create the config directory if it doesn't exist
	if err := os.MkdirAll(filepath.Dir(keyPath), 0o700); err != nil {
		return NewError(ErrIO, "failed to create config directory", err)
	}

	// Write the key to file with secure permissions
	if err := os.WriteFile(keyPath, []byte(key), 0o600); err != nil {
		return NewError(ErrIO, "failed to write encryption key file", err)
	}

	m.logger.Printf("✅ Imported encryption key saved to: %s", keyPath)
	return nil
}
🛠️ Refactor suggestion
Add key length validation after hex decoding.
While the function validates the hex encoding, it should also verify that the decoded key is 256 bits (32 bytes) long.
// Validate key format (should be hex-encoded)
-if _, err := hex.DecodeString(key); err != nil {
+decodedKey, err := hex.DecodeString(key)
+if err != nil {
return NewError(ErrValidation, "invalid key format: not hex-encoded", err)
}
+if len(decodedKey) != 32 {
+ return NewError(ErrValidation, "invalid key length: must be 256 bits", nil)
+}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// ImportEncryptionKey imports an encryption key from a file
func (m *Manager) ImportEncryptionKey(content []byte) error {
	// Parse the key file content
	lines := strings.Split(string(content), "\n")
	if len(lines) < 3 {
		return NewError(ErrValidation, "invalid key file format", nil)
	}

	// Verify the header
	if !strings.HasPrefix(lines[0], "BirdNET-Go Backup Encryption Key") {
		return NewError(ErrValidation, "invalid key file format: missing header", nil)
	}

	// Extract the key
	var key string
	for _, line := range lines {
		if strings.HasPrefix(line, "Key: ") {
			key = strings.TrimPrefix(line, "Key: ")
			key = strings.TrimSpace(key)
			break
		}
	}

	if key == "" {
		return NewError(ErrValidation, "invalid key file format: key not found", nil)
	}

	// Validate key format (should be hex-encoded)
	decodedKey, err := hex.DecodeString(key)
	if err != nil {
		return NewError(ErrValidation, "invalid key format: not hex-encoded", err)
	}
	if len(decodedKey) != 32 {
		return NewError(ErrValidation, "invalid key length: must be 256 bits", nil)
	}

	// Get the key file path
	keyPath, err := m.getEncryptionKeyPath()
	if err != nil {
		return err
	}

	// Create the config directory if it doesn't exist
	if err := os.MkdirAll(filepath.Dir(keyPath), 0o700); err != nil {
		return NewError(ErrIO, "failed to create config directory", err)
	}

	// Write the key to file with secure permissions
	if err := os.WriteFile(keyPath, []byte(key), 0o600); err != nil {
		return NewError(ErrIO, "failed to write encryption key file", err)
	}

	m.logger.Printf("✅ Imported encryption key saved to: %s", keyPath)
	return nil
}
func (s *Scheduler) checkSchedules(now time.Time) {
	s.mu.Lock()
	defer s.mu.Unlock()

	for i := range s.schedules {
		schedule := &s.schedules[i]

		// Check if it's time to run this schedule
		if now.After(schedule.NextRun) || now.Equal(schedule.NextRun) {
			// If we're more than 5 minutes past the scheduled time, consider it missed
			if now.Sub(schedule.NextRun) > 5*time.Minute {
				reason := fmt.Sprintf("System time %v was more than 5 minutes past scheduled time %v",
					now.Format(time.RFC3339),
					schedule.NextRun.Format(time.RFC3339))

				if err := s.state.AddMissedBackup(schedule, reason); err != nil {
					s.logger.Printf("⚠️ Failed to record missed backup: %v", err)
				}
				s.logger.Printf("⚠️ Missed backup schedule at %v", schedule.NextRun)
			} else {
				// Run the backup
				go s.runBackup(schedule)
			}

			// Calculate next run time
			schedule.LastRun = now
			schedule.NextRun = s.calculateNextRun(now, schedule.Hour, schedule.Minute, schedule.Weekday, schedule.IsWeekly)
		}
	}
}
Fix potential race condition in schedule checking.
The function has a potential race condition where schedule.NextRun could be modified between the check and execution. Consider copying the relevant schedule data within the lock.
Apply this diff to fix the issue:
func (s *Scheduler) checkSchedules(now time.Time) {
s.mu.Lock()
defer s.mu.Unlock()
for i := range s.schedules {
schedule := &s.schedules[i]
+ nextRun := schedule.NextRun
- if now.After(schedule.NextRun) || now.Equal(schedule.NextRun) {
+ if now.After(nextRun) || now.Equal(nextRun) {
// If we're more than 5 minutes past the scheduled time, consider it missed
- if now.Sub(schedule.NextRun) > 5*time.Minute {
+ if now.Sub(nextRun) > 5*time.Minute {
reason := fmt.Sprintf("System time %v was more than 5 minutes past scheduled time %v",
now.Format(time.RFC3339),
- schedule.NextRun.Format(time.RFC3339))
+ nextRun.Format(time.RFC3339))
...
}
...
}
}
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
func (s *Scheduler) checkSchedules(now time.Time) {
	s.mu.Lock()
	defer s.mu.Unlock()

	for i := range s.schedules {
		schedule := &s.schedules[i]
		nextRun := schedule.NextRun

		// Check if it's time to run this schedule
		if now.After(nextRun) || now.Equal(nextRun) {
			// If we're more than 5 minutes past the scheduled time, consider it missed
			if now.Sub(nextRun) > 5*time.Minute {
				reason := fmt.Sprintf("System time %v was more than 5 minutes past scheduled time %v",
					now.Format(time.RFC3339),
					nextRun.Format(time.RFC3339))

				if err := s.state.AddMissedBackup(schedule, reason); err != nil {
					s.logger.Printf("⚠️ Failed to record missed backup: %v", err)
				}
				s.logger.Printf("⚠️ Missed backup schedule at %v", schedule.NextRun)
			} else {
				// Run the backup
				go s.runBackup(schedule)
			}

			// Calculate next run time
			schedule.LastRun = now
			schedule.NextRun = s.calculateNextRun(now, schedule.Hour, schedule.Minute, schedule.Weekday, schedule.IsWeekly)
		}
	}
}
This PR adds functionality to run scheduled backups of the SQLite database and application configuration.
More backup targets will be added later; support for MySQL database backup will also be added at a later time.