Prune not working with S3 storage. #395
Comments
Are these debug logs from |
I ran those commands from master too just now to make sure; it's the same. |
And also just tried v1.1.0 using
|
Successfully recreated the issue in a test (which will be part of the standard set of tests). Now to figure out why it fails. |
With #398 merged in, this should be fixed. If you are just running the binary, compile it from source and run it. If you are using the Docker image, you should be able to find it as Try it and report here, please. Once you see the fix as well, we can cut a patch release. |
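For anyone wanting to verify the fix from source before a release, here is a minimal sketch. It assumes the repository lives at github.com/databack/mysql-backup, that its root contains the main Go package, and that a local Go toolchain is available; the bucket and path are placeholders.
# clone and build the CLI from master
git clone https://github.com/databack/mysql-backup.git
cd mysql-backup
go build -o mysql-backup .
# run a prune with the same flags used in the debug logs below
./mysql-backup prune --target=s3://my-test-bucket/mysql/dumps-test --retention=2c --verbose=2 --debug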
@deitch Thanks, it's working now. It's awesome. Thanks again. |
I have two configurations: one runs hourly and keeps the last 24 backups (it works perfectly), but the other runs daily and should keep the last 31 backups (it's not working; it only keeps the last 1 backup). My Compose file is below; a manual prune sketch for the daily target follows it.
version: '3.8'
services:
mysql-backup-hourly:
image: databack/mysql-backup:6764cb615d5386e3060ffb8e02c7a4157df5a254
environment:
DB_SERVER: mysql
DB_USER: root
DB_PASS: ${MYSQL_ROOT_PASSWORD:?err}
AWS_DEFAULT_REGION: ${MYSQL_BACKUP_AWS_DEFAULT_REGION:?err}
AWS_ACCESS_KEY_ID: ${MYSQL_BACKUP_AWS_ACCESS_KEY_ID:?err}
AWS_SECRET_ACCESS_KEY: ${MYSQL_BACKUP_AWS_SECRET_ACCESS_KEY:?err}
DB_DUMP_TARGET: s3://${MYSQL_BACKUP_S3_BUCKET:?err}/mysql/dumps-hourly
DB_DUMP_CRON: '0 * * * *' # Every hour
DB_DUMP_RETENTION: 24c
NICE: 'true' # Don't use too much CPU or RAM, just be nice.
COMPRESSION: gzip
TZ: UTC
networks:
- mysql_encrypted_network
command: dump
depends_on:
- services_mysql
deploy:
resources:
limits:
memory: 8G
cpus: '0.5'
mode: global
placement:
constraints:
- node.labels.role == primary
update_config:
order: stop-first
rollback_config:
order: stop-first
mysql-backup-daily:
image: databack/mysql-backup:6764cb615d5386e3060ffb8e02c7a4157df5a254
environment:
DB_SERVER: mysql
DB_USER: root
DB_PASS: ${MYSQL_ROOT_PASSWORD:?err}
AWS_DEFAULT_REGION: ${MYSQL_BACKUP_AWS_DEFAULT_REGION:?err}
AWS_ACCESS_KEY_ID: ${MYSQL_BACKUP_AWS_ACCESS_KEY_ID:?err}
AWS_SECRET_ACCESS_KEY: ${MYSQL_BACKUP_AWS_SECRET_ACCESS_KEY:?err}
DB_DUMP_TARGET: s3://${MYSQL_BACKUP_S3_BUCKET:?err}/mysql/dumps-daily
DB_DUMP_CRON: '30 22 * * *' # At 22:30
DB_DUMP_RETENTION: 31c
NICE: 'true' # Don't use too much CPU or RAM, just be nice.
COMPRESSION: gzip
TZ: UTC
networks:
- mysql_encrypted_network
command: dump
depends_on:
- services_mysql
deploy:
resources:
limits:
memory: 8G
cpus: '0.5'
mode: global
placement:
constraints:
- node.labels.role == primary
update_config:
order: stop-first
rollback_config:
order: stop-first
mysql-backup-monthly:
image: databack/mysql-backup:6764cb615d5386e3060ffb8e02c7a4157df5a254
environment:
DB_SERVER: mysql
DB_USER: root
DB_PASS: ${MYSQL_ROOT_PASSWORD:?err}
AWS_DEFAULT_REGION: ${MYSQL_BACKUP_AWS_DEFAULT_REGION:?err}
AWS_ACCESS_KEY_ID: ${MYSQL_BACKUP_AWS_ACCESS_KEY_ID:?err}
AWS_SECRET_ACCESS_KEY: ${MYSQL_BACKUP_AWS_SECRET_ACCESS_KEY:?err}
DB_DUMP_TARGET: s3://${MYSQL_BACKUP_S3_BUCKET:?err}/mysql/dumps-monthly
DB_DUMP_CRON: '0 0 1 * *' # At 00:00 on day-of-month 1
NICE: 'true' # Don't use too much CPU or RAM, just be nice.
COMPRESSION: gzip
TZ: UTC
networks:
- mysql_encrypted_network
command: dump
depends_on:
- services_mysql
deploy:
resources:
limits:
memory: 8G
cpus: '0.5'
mode: global
placement:
constraints:
- node.labels.role == primary
update_config:
order: stop-first
rollback_config:
order: stop-first
networks:
mysql_encrypted_network:
external: true
|
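To compare against the hourly job, the daily prefix can be pruned manually with the same count-based retention. This is a minimal sketch, assuming the same entrypoint binary and flags used in the debug runs below; the bucket name is a placeholder.
# keep only the newest 31 dumps under the daily prefix (bucket name is a placeholder)
./entrypoint prune --target=s3://my-bucket/mysql/dumps-daily --retention=31c --verbose=2 --debug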
This is helpful. Will see if I can figure it out, but will take some days. |
@iamriajul what is the debug log for the one that is only keeping the last one, rather than the last 31? |
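A hedged aside on capturing that log from the swarm stack defined above: docker service logs can pull it from the daily service, assuming the stack was deployed with docker stack deploy; the stack name below is a placeholder.
# service names follow <stack>_<service>; "mystack" is a placeholder
docker service logs --since 48h mystack_mysql-backup-daily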
Tested using v1 and master of the Docker Hub image (e.g. databack/mysql-backup:1.0.0 and databack/mysql-backup:master).
Tried running the entrypoint directly. Note: AWS credentials are passed via environment variables.
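Since the credentials are not on the command line, this is a minimal sketch of the environment expected by the manual runs below, using the same variable names as the compose file; the values are placeholders.
# exported before invoking ./entrypoint manually; values are placeholders
export AWS_DEFAULT_REGION=us-east-1
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx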
DEBUG LOGS:
With command:
./entrypoint prune --target=s3://***-backup-mumbai/mysql/dumps-frequent --retention=2c --verbose=2 --debug
Output:
With command:
./entrypoint prune --target=s3://****-backup-mumbai/mysql/dumps-frequent --retention=1h --verbose=2 --debug
Output:
Screenshot of Backup Directory: