Release extra files in dst after upload #150
Conversation
Allow the coordinator to release the files that were uploaded during backup creation. This helps free disk space once the backup has been uploaded, and is especially useful for Cassandra, where compactions otherwise leave us keeping copies of old sstables around.
Codecov Report
```diff
@@            Coverage Diff             @@
##           master     #150      +/-   ##
==========================================
+ Coverage   87.02%   87.56%   +0.53%
==========================================
  Files         122      136      +14
  Lines        7648     9246    +1598
==========================================
+ Hits         6656     8096    +1440
- Misses        992     1150     +158
```
... and 2 files with indirect coverage changes
```python
def release(self, hexdigest: str) -> None:
    assert self.lock.locked()
    assert self.src != self.dst
    for snapshotfile in self.hexdigest_to_snapshotfiles.get(hexdigest, []):
```
The snapshot will get out of sync with the disk here. Is that so that we don't recompute hashes in case a file is not deleted? We will need to be careful to always re-link all files when creating a new snapshot in the SQLite PR (if I ever get around to finishing that). Also, obviously, if an upload is ever run after this step, it looks like it will skip the files.
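For context, the collapsed loop body presumably removes the dst copies, which is what would desync the snapshot metadata from the disk. A hypothetical reconstruction, not the PR's actual code (`relative_path` is an assumption about the snapshotfile object):

```python
# Hypothetical reconstruction of the collapsed loop body, not the PR's code.
# Assumes snapshotfile.relative_path locates the uploaded copy under self.dst.
for snapshotfile in self.hexdigest_to_snapshotfiles.get(hexdigest, []):
    # Drop the dst copy to reclaim disk space; missing_ok needs Python 3.8+.
    (self.dst / snapshotfile.relative_path).unlink(missing_ok=True)
```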
The upload would fail (because it'll attempt to read the files), and presently we never re-upload the same snapshot (except when it fails during backup, but then it's done before the release). In the SQLite case we could mark the released rows in a more explicit fashion if needed. This logic is a bit obscure, but the alternative seems to be "snapshot more often", which is also a bit weird: why would we want to waste CPU cycles computing hashes of files we'd never upload?
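Marking released rows explicitly could look something along these lines (a minimal sketch; the `snapshotfile` table and its `released` column are assumptions, not the PR's schema):

```python
import sqlite3

def mark_released(con: sqlite3.Connection, hexdigest: str) -> None:
    # Hypothetical schema: snapshotfile(hexdigest TEXT, relative_path TEXT, released INTEGER).
    # Flagging rows instead of deleting them keeps the snapshot metadata intact
    # while recording that the on-disk dst copy has been released.
    con.execute(
        "UPDATE snapshotfile SET released = 1 WHERE hexdigest = ?",
        (hexdigest,),
    )
    con.commit()
```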
I don't think the upload will fail; it will just skip the missing files: https://github.com/Aiven-Open/astacus/blob/master/astacus/node/uploader.py#L41

I like the idea of marking released rows, I'll keep that in mind.
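The skip behaviour referenced above boils down to catching the missing-file error per file rather than failing the whole run. A minimal sketch of that pattern (not the linked code verbatim; `store` is a stand-in for the real storage call):

```python
from pathlib import Path
from typing import Callable, Sequence

def upload_files(paths: Sequence[Path], store: Callable[[bytes], None]) -> int:
    # Hypothetical upload loop: files released (deleted) between snapshotting
    # and upload are skipped instead of aborting the whole upload.
    uploaded = 0
    for path in paths:
        try:
            data = path.read_bytes()
        except FileNotFoundError:
            continue  # file was released after snapshotting; skip it
        store(data)
        uploaded += 1
    return uploaded
```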
```python
async def run_step(self, cluster: Cluster, context: StepsContext) -> List[ipc.NodeResult]:
    snapshot_results = context.get_result(SnapshotStep)
    nodes_metadata = await get_nodes_metadata(cluster)
    all_nodes_have_release_feature = nodes_metadata and all(
```
nit: `nodes_all_have_feature` could be a helper.
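Something along these lines, presumably (a sketch; `MetadataResult` and its `features` field are assumptions, not names taken from the PR):

```python
from typing import Sequence

def nodes_all_have_feature(nodes_metadata: Sequence["MetadataResult"], feature: str) -> bool:
    # Hypothetical helper; assumes each node's metadata exposes a `features` list.
    return bool(nodes_metadata) and all(
        feature in metadata.features for metadata in nodes_metadata
    )
```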