Respect limits on the number of keys for the R2 delete() method.
We only call the method when there is a non-zero number of keys, and when there
are more than 1000 keys, we split the request into multiple chunks so that we
don't hit the per-request limit. This gives us a bit more breathing room for
very large projects, though eventually we'll still be capped by the subrequest limit.
LTLA committed Feb 27, 2024
1 parent ff0de33 commit 07afcfd
Showing 1 changed file with 21 additions and 1 deletion.
22 changes: 21 additions & 1 deletion src/utils/s3.js
@@ -132,6 +132,26 @@ export async function quickRecursiveDelete(prefix, env, { list_limit = 1000 } =
         env,
         { list_limit: list_limit, namesOnly: false }
     );
-    await env.BOUND_BUCKET.delete(deletions);
+
+    if (deletions.length) {
+        // The R2 binding can only accept a max of 1000 keys per delete()
+        // request. So we split it up into evenly spaced chunks that are no
+        // greater than 1000 each, and we submit these as subrequests. We have
+        // a maximum of 50 subrequests per worker, which means that we can
+        // delete 50k objects for every call of this function; not bad.
+        let num_requests = Math.ceil(deletions.length / 1000);
+        let per_request = Math.ceil(deletions.length / num_requests);
+
+        let reqs = [];
+        let start = 0;
+        for (var i = 0; i < num_requests; ++i) {
+            const end = Math.min(start + per_request, deletions.length);
+            reqs.push(env.BOUND_BUCKET.delete(deletions.slice(start, end)));
+            start = end;
+        }
+
+        await Promise.all(reqs);
+    }
+
     return freed;
 }
