Retention policy on backup server #93
You should create a second policy for your backup server, something like:
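For example, on the backup server itself (a sketch: the dataset name default/data is taken from your config below, and the retention counts are placeholders):

# prune the pulled snapshots on the backup server according to this policy
[default/data]
frequent = 4
hourly = 24
daily = 7
# no new snapshots are taken here, they are received from the source
snap = no
# but do clean up the received snapshots with the counts above
clean = yes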
You can also retain snapshots for longer on the backup server, e.g. by keeping a higher daily count there than on the source.
Works like a charm.
There is no easy way, no. You could have two different config files, one for the frequent + hourly snaps and one for the daily snaps. Then you run the daily one only once a day at 5am, e.g. with a cron job like:
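Something along these lines, as a rough /etc/cron.d sketch (the paths, log file and the 5am time are placeholders, and it assumes pyznap's --config option to point each run at its own file):

# frequent + hourly policy: run every 15 minutes
*/15 * * * *   root   /usr/local/bin/pyznap --config /etc/pyznap/pyznap_frequent.conf snap >> /var/log/pyznap.log 2>&1
# daily policy: run once a day at 5am
0 5 * * *      root   /usr/local/bin/pyznap --config /etc/pyznap/pyznap_daily.conf snap >> /var/log/pyznap.log 2>&1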
But that is a bit complicated...
Okay, I'll let it stay at 00:00. Last question: in the log I see "Nov 24 20:45:01 INFO: Starting pyznap...". How do I snap a different dataset? Use two different configs and cron jobs, or maybe snap & send all of default/?
In that case it's best to have two different policies. Since the settings in the config are recursive, you could do something like this:
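A sketch of what that could look like (host and dataset names are placeholders; the parent section applies recursively to all of its children, and the more specific section overrides it):

# recursive policy for the whole default tree on the prod server
[ssh:22:user@prodserver:default]
frequent = 4
hourly = 24
daily = 7
snap = yes
clean = yes
dest = default
compress = lz4

# override for one child dataset that should keep more daily snapshots
[ssh:22:user@prodserver:default/important]
daily = 30
snap = yes
clean = yes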
This should set the snapshot frequency for all child datasets of the parent as well.
Also, btw, it might actually be better to have pyznap also running on the prod server, simply to take snapshots, and then on the remote only pull the snapshots. The way you have it set up now, you have to run pyznap remotely over ssh, which can be a bit slow, though everything should work as expected.
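A rough sketch of that split setup (hostnames and paths are placeholders): the prod server only takes and prunes snapshots locally, while the backup server only pulls them and prunes its own copies:

# /etc/pyznap/pyznap.conf on the prod server
[default/data]
frequent = 4
hourly = 24
daily = 7
snap = yes
clean = yes

# /etc/pyznap/pyznap.conf on the backup server
# pull the snapshots from prod ...
[ssh:22:user@prodserver:default/data]
dest = default/data
compress = lz4
# ... and prune the received copies locally
[default/data]
frequent = 4
hourly = 24
daily = 7
clean = yes

The prod server would then run pyznap snap from cron, and the backup server something like pyznap full (or snap followed by send) to pull and prune.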
Hi,
I use pyznap only on the remote site, to pull snapshots from the production server to the backup server.
Everything is OK except for one thing: deleting snapshots from the dest/backup server. On the source server everything is OK.
My config:
# You can also take snapshots on a remote and pull snapshots from there
[ssh:22:[email protected]:default/data]
frequent = 4
hourly = 24
daily = 7
snap = yes
clean = yes
dest = default/data
compress = lz4
How do I keep the same retention on the backup server as on the source?