
An archive for each db #382

Open
bessone opened this issue Nov 28, 2024 · 5 comments

Comments

@bessone

bessone commented Nov 28, 2024

Hello,

with the standalone binary is it possible to generate backups of each database in a different compressed file instead of having all the dumps inside a single archive file?

Thanks

@deitch
Collaborator

deitch commented Nov 28, 2024

Hi @bessone and welcome.

> with the standalone binary is it possible

Anything the container can do, the binary can do. The container just wraps the binary.

> generate backups of each database in a different compressed file instead of having all the dumps inside a single archive file?

Not the way it is now. It generates a separate file for each database, then creates a single tar file of all of them and gzip-compresses it, so you get a single "this is your backup" file. That can then be sent to the target(s): s3, smb, file, etc.
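The current flow can be sketched in shell, with fake dump files standing in for real `mysqldump` output (database names and the date are illustrative, not the tool's actual naming logic):

```shell
# Sketch of the current behavior: one .sql per database,
# then everything bundled into a single tar.gz.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Stand-ins for the per-database dump files.
for db in database1 database2 database3; do
  echo "-- dump of $db" > "${db}_20241201.sql"
done

# A single tar of all of them, gzip-compressed; this one archive
# is what gets sent to the target (s3, smb, file, ...).
tar -czf backup_20241201.tar.gz ./*_20241201.sql
tar -tzf backup_20241201.tar.gz
```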

Of course, that doesn't mean it cannot be done.

What is the use case? And what would the UX look like, considering targets and restore use cases?

@bessone
Author

bessone commented Nov 28, 2024

Hello @deitch,

My need is to back up many databases on the same server; they belong to different clients/projects, so it would be great to have separate gz files that I can manage more easily.
In case of a restore, the idea is to import the sql file directly, without using mysql-backup.

I thought about creating a conf file for each database to keep them separate, but there are many of them, and for each new database I would have to create a new conf.

Basically I would like to replicate the behavior of automysqlbackup (now abandoned for years) and have a separate file for each database.

@deitch
Collaborator

deitch commented Nov 28, 2024

So what would the UX look like? Something like:

mysql-backup dump --per-database-file

Something like that? Which would lead to, instead of (simplified names):

database1_20241201.sql
database2_20241201.sql
database3_20241201.sql

all then tarred into a single file:

backup_20241201.tar

and then compressed:

backup_20241201.tar.gz

You would have one file per gz? Maybe a tar with a single file in it, maybe not; that doesn't matter, it only affects whether the code is cleaner or not:

backup_database1_20241201.gz
backup_database2_20241201.gz
backup_database3_20241201.gz

Something like that?
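The proposed per-database layout could be sketched the same way (again with stand-in dump files; the actual flag name and file naming are still to be decided, so treat both as assumptions):

```shell
# Sketch of the proposed output: one gzip file per database
# instead of one combined tar.gz (fake dumps, illustrative names).
set -e
workdir=$(mktemp -d)
cd "$workdir"

for db in database1 database2 database3; do
  echo "-- dump of $db" > "backup_${db}_20241201.sql"
  # gzip appends .gz, yielding backup_database1_20241201.sql.gz, etc.
  gzip "backup_${db}_20241201.sql"
done

ls backup_*_20241201.sql.gz
```

Each resulting file can then be pushed to the target(s) individually.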

@bessone
Author

bessone commented Nov 29, 2024

Yep!

Exactly that. Of course the type of compression and naming can remain exactly the same; the only difference is having individual files for each database, as you explained.

@deitch
Collaborator

deitch commented Dec 3, 2024

I can see it. It is non-trivial, in that the assumption throughout much of the codebase is that the result of a dump is a single file, which is then pushed to the target.

Would it be easier to keep it as is, using a file target, and then post-process the result: read the tgz file and split it apart? Not optimal, especially if you are going to more distant targets (S3 or SMB), but a reasonable bridge?
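That post-processing bridge could look roughly like this, assuming a file target has already produced the combined archive (the archive here is faked so the sketch is runnable end to end):

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Stand-in for the combined tar.gz that mysql-backup writes to a file target.
for db in database1 database2; do
  echo "-- dump of $db" > "${db}_20241201.sql"
done
tar -czf backup_20241201.tar.gz ./*_20241201.sql
rm ./*_20241201.sql

# Post-process: unpack the archive, then recompress each dump individually.
mkdir split
tar -xzf backup_20241201.tar.gz -C split
gzip split/*.sql        # -> split/database1_20241201.sql.gz, ...

ls split/
```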

I don't object to a PR to make this happen.
