An archive for each db #382
Hello,
with the standalone binary is it possible to generate backups of each database in a different compressed file, instead of having all the dumps inside a single archive file?
Thanks
Hi @bessone and welcome.
Anything the container can do, the binary can do. The container just wraps the binary.
Not the way it is now. It generates a separate file for each database, then creates a single tar file of all of them and gzip-compresses it, so you get a single "this is your backup" file. That can then be sent to the target(s): s3, smb, file, etc. Of course, that doesn't mean it cannot be done. What is the use case? And what would the UX look like, considering targets and restore use cases?
Hello @deitch, my need is to back up many databases on the same server. They belong to different clients/projects, so it would be great to have separate gz files so that I can manage them more easily. I thought about creating a conf file for each database to keep them separate, but there are many, and for each new database I would have to create a new conf. Basically, I would like to replicate the behavior of automysqlbackup (now abandoned for years) and have a separate file for each database.
So what would the UX look like? Something like `mysql-backup dump --per-database-file`? Which would lead to, instead of (simplified names):
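```
# simplified, illustrative names only
db1.sql
db2.sql
db3.sql
```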
all then tarred into a single file:
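```
# simplified, illustrative name only
backup.tar        # contains db1.sql, db2.sql, db3.sql
```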
and then compressed:
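```
# simplified, illustrative name only
backup.tar.gz
```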
You would have one gz per database? Maybe a tar with a single file in it, maybe not; that doesn't matter, it only impacts whether it makes the code cleaner or not:
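```
# simplified, illustrative names only
db1.tar.gz
db2.tar.gz
db3.tar.gz
```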
Something like that?
Yep! Exactly that. Of course the type of compression and naming can remain exactly the same; the only difference is having individual files for each database, as you explained.
I can see it. It is non-trivial, in that the assumption throughout much of the build is that the result of a dump is a single file, which it then pushes to the target. Would it be easier to keep it as is, using a separate config per database? I don't object to a PR to make this happen.
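For comparison, here is a minimal sketch of the per-database output the request describes (roughly what automysqlbackup produces), done with plain mysqldump and gzip rather than mysql-backup itself; the host, user, and target directory are placeholders, and credentials are assumed to come from ~/.my.cnf:

```
#!/bin/sh
# Illustrative only: emulate one compressed archive per database with plain
# mysqldump/gzip. This is not how mysql-backup is implemented internally.
HOST=db.example.com        # placeholder
USER=backup                # placeholder
TARGET=/backups            # placeholder

# List databases, skipping the system schemas.
DATABASES=$(mysql -h "$HOST" -u "$USER" -N -e "SHOW DATABASES" \
  | grep -Ev '^(information_schema|performance_schema|mysql|sys)$')

TS=$(date -u +%Y-%m-%dT%H:%M:%SZ)
for db in $DATABASES; do
  # One gzip-compressed dump per database, e.g. /backups/db1_<timestamp>.sql.gz
  mysqldump -h "$HOST" -u "$USER" "$db" | gzip > "$TARGET/${db}_${TS}.sql.gz"
done
```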