Add more retries when restoring a basebackup
Commit 4869d84 added logic to make the number of retries dependent on the backup size. Instead of allowing for one error every 64GiB, allow for one every 10GiB.

It is better to retry more often than to start over from scratch when things go wrong.
rdunklau committed Nov 26, 2024
1 parent def54ec commit 58fd152
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions pghoard/restore.py
```diff
@@ -607,8 +607,8 @@ def _get_basebackup(
         os.chmod(dirname, 0o700)

         # Based on limited samples, there could be one stalled download per 122GiB of transfer
-        # So we tolerate one stall for every 64GiB of transfer (or STALL_MIN_RETRIES for smaller backup)
-        stall_max_retries = max(STALL_MIN_RETRIES, int(int(metadata.get("total-size-enc", 0)) / (64 * 2 ** 30)))
+        # So we tolerate one stall for every 10GiB of transfer (or STALL_MIN_RETRIES for smaller backup)
+        stall_max_retries = max(STALL_MIN_RETRIES, int(int(metadata.get("total-size-enc", 0)) / (10 * 2 ** 30)))

         fetcher = BasebackupFetcher(
             app_config=self.config,
```
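The retry budget in the diff scales linearly with the encrypted backup size, with a floor for small backups. A minimal standalone sketch of that calculation (the value of `STALL_MIN_RETRIES` is assumed here for illustration; the real constant is defined elsewhere in pghoard):

```python
# Sketch of the stall-retry budget from the change above.
# STALL_MIN_RETRIES = 6 is an assumed value for illustration only.
STALL_MIN_RETRIES = 6
GIB = 2 ** 30

def stall_max_retries(total_size_enc: int, per_bytes: int = 10 * GIB) -> int:
    """Allow one stalled-download retry per `per_bytes` of transfer,
    but never fewer than STALL_MIN_RETRIES."""
    return max(STALL_MIN_RETRIES, total_size_enc // per_bytes)

# A 500 GiB backup gets 50 retries under the new 10 GiB divisor,
# versus 7 under the old 64 GiB divisor.
print(stall_max_retries(500 * GIB))            # new divisor
print(stall_max_retries(500 * GIB, 64 * GIB))  # old divisor
```

For a small backup the size-based term falls below the floor and `STALL_MIN_RETRIES` wins, so restores of small backups keep a sensible minimum retry budget.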
