Add retry_count to container_push #2101

Open · wants to merge 9 commits into master

Conversation

@itmecho itmecho commented May 30, 2022

PR Checklist

Please check if your PR fulfills the following requirements:

  • Tests for the changes have been added (for bug fixes / features)
  • Docs have been added / updated (for bug fixes / features)

PR Type

What kind of change does this PR introduce?

  • Bugfix
  • Feature
  • Code style update (formatting, local variables)
  • Refactoring (no functional changes, no api changes)
  • Build related changes
  • CI related changes
  • Documentation content changes
  • Other... Please describe:

What is the current behavior?

Currently, if an image fails to push, the push fails immediately. While this makes sense for single-image pushes, it's not ideal when pushing multiple containers, as a failure often means running the entire build step again.

Issue Number: #1989 (related)

What is the new behavior?

Adds a retry_count attribute to container_push. If the push fails, it is retried until retry_count is reached. On each retry, a log line is printed showing the error that occurred and which attempt it was.
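
For illustration, a minimal Go sketch of the retry behaviour described above (the names `push` and `pushWithRetries` are hypothetical stand-ins, not the PR's exact code):

```go
package main

import (
	"errors"
	"log"
)

// push stands in for the pusher's real push operation (hypothetical).
func push() error { return errors.New("connection reset") }

// pushWithRetries attempts the push once, then up to retryCount more
// times, logging each failed attempt before retrying.
func pushWithRetries(retryCount int) error {
	var err error
	for attempt := 0; attempt <= retryCount; attempt++ {
		if err = push(); err == nil {
			return nil
		}
		if attempt < retryCount {
			log.Printf("Failed to push image on attempt %d: %v", attempt+1, err)
		}
	}
	return err
}

func main() {
	if err := pushWithRetries(2); err != nil {
		log.Printf("push failed after all retries: %v", err)
	}
}
```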

Does this PR introduce a breaking change?

  • Yes
  • No

Other information

I've updated the docs to include the new attribute, but I'm not sure if there are tests that cover it. I'm happy to add the tests, but I might need some guidance on where and how to implement them!

@google-cla

google-cla bot commented May 30, 2022

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@itmecho
Author

itmecho commented May 30, 2022

> Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).
>
> View this failed invocation of the CLA check for more information.
>
> For the most up to date status, view the checks section at the bottom of the pull request.

Done, but I can't re-run the check to clear the failure.

@itmecho
Author

itmecho commented May 30, 2022

I've found the container_push tests in tests/container/BUILD, but I'm not sure how to test this properly, as it would require a push to fail one or more times and then succeed.
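
One way to exercise this without a flaky real registry is to factor the retry loop into a helper that takes the push operation as a parameter, then inject a push that fails a set number of times. A sketch under that assumption (`pushWithRetriesFunc` is hypothetical, not part of this PR):

```go
package main

import (
	"errors"
	"testing"
)

// pushWithRetriesFunc is a hypothetical, injectable variant of the retry
// helper: it runs op once, plus up to retryCount retries, returning the
// last error if every attempt fails.
func pushWithRetriesFunc(op func() error, retryCount int) error {
	var err error
	for attempt := 0; attempt <= retryCount; attempt++ {
		if err = op(); err == nil {
			return nil
		}
	}
	return err
}

// TestPushRetries injects a push that fails twice and succeeds on the
// third attempt, which retry_count = 2 should tolerate.
func TestPushRetries(t *testing.T) {
	calls := 0
	flaky := func() error {
		calls++
		if calls < 3 {
			return errors.New("simulated push failure")
		}
		return nil
	}
	if err := pushWithRetriesFunc(flaky, 2); err != nil {
		t.Fatalf("expected success after retries, got: %v", err)
	}
	if calls != 3 {
		t.Fatalf("expected 3 attempts, got %d", calls)
	}
}
```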

@itmecho itmecho changed the title Add retry-count to container_push Add retry_count to container_push May 30, 2022
@@ -44,6 +44,7 @@ var (
format = flag.String("format", "", "The format of the uploaded image (Docker or OCI).")
clientConfigDir = flag.String("client-config-dir", "", "The path to the directory where the client configuration files are located. Overiddes the value from DOCKER_CONFIG.")
skipUnchangedDigest = flag.Bool("skip-unchanged-digest", false, "If set to true, will only push images where the digest has changed.")
retryCount = flag.Int("retry-count", 0, "Amount of times the push will be retried.")
Collaborator

Should we validate this value? What happens when the retry count is -1? Some users may assume this to mean infinite retries, when in fact the container image would not even attempt to be pushed once.

Author

Good point. Do you think it's better to exit with log.Fatalln if it's negative, or to set it to 0 and log a warning instead?

Author

I've pushed a version that exits with Fatalln as that matches the other validation behaviour.
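
A minimal sketch of that kind of flag validation (assumed shape; the PR's exact message and placement may differ):

```go
package main

import (
	"flag"
	"log"
)

var retryCount = flag.Int("retry-count", 0, "Number of times the push will be retried.")

func main() {
	flag.Parse()
	// Reject negative values up front so -1 can't be mistaken for
	// "retry forever" while actually never pushing at all.
	if *retryCount < 0 {
		log.Fatalln("retry-count must not be negative.")
	}
}
```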

}

if *retryCount > 0 && retry < *retryCount {
log.Printf("Failed to push image on attempt %d: %v", retry+1, err)
Collaborator

This log line is missing the stamped variable, which will make it tricky to know which image actually failed. Maybe reuse the original format?

Suggested change:

```diff
-log.Printf("Failed to push image on attempt %d: %v", retry+1, err)
+log.Printf("Error pushing image to %s (attempt %d): %v", stamped, retry, err)
```

Author

Makes sense! Pushed that change.

container/go/cmd/pusher/pusher.go (review thread resolved)
@@ -90,6 +90,9 @@ def _impl(ctx):
tag = tag,
))

if ctx.attr.retry_count > 0:
pusher_args.append("-retry-count={}".format(ctx.attr.retry_count))
Collaborator

Maybe there should be some validation here? Might be confusing if a user provides -1 and nothing happens.

I think we can merge given this change.

@EdSchouten
Contributor

@itmecho Friendly ping :-)

@itmecho
Copy link
Author

itmecho commented Oct 12, 2022

> @itmecho Friendly ping :-)

Oh sorry! I thought this commit covered that validation:

4e54051

Is that no good or is there something else I've missed?

@github-actions

This Pull Request has been automatically marked as stale because it has not had any activity for 180 days. It will be closed if no further activity occurs in 30 days.
Collaborators can add an assignee to keep this open indefinitely. Thanks for your contributions to rules_docker!

@github-actions github-actions bot added the Can Close? Will close in 30 days unless there is a comment indicating why not label Apr 11, 2023
@com6056

com6056 commented Apr 23, 2023

@itmecho do you have time to wrap this up? If not, happy to pick this up!

@github-actions github-actions bot removed the Can Close? Will close in 30 days unless there is a comment indicating why not label Apr 24, 2023
@itmecho
Author

itmecho commented Jul 24, 2023

> @itmecho do you have time to wrap this up? If not, happy to pick this up!

@com6056 Sorry for the delay! I've pushed some validation and a doc update, hopefully that covers it!

The build looks like it's failing on something to do with docker/debian9? Not sure if that's related to my changes or not...

@com6056

com6056 commented Aug 24, 2023

> > @itmecho do you have time to wrap this up? If not, happy to pick this up!
>
> @com6056 Sorry for the delay! I've pushed some validation and a doc update, hopefully that covers it!
>
> The build looks like it's failing on something to do with docker/debian9? Not sure if that's related to my changes or not...

@uhthomas any idea why that build is failing or who we can ask about it?

@github-actions

This Pull Request has been automatically marked as stale because it has not had any activity for 180 days. It will be closed if no further activity occurs in 30 days.
Collaborators can add an assignee to keep this open indefinitely. Thanks for your contributions to rules_docker!

@github-actions github-actions bot added the Can Close? Will close in 30 days unless there is a comment indicating why not label Dec 10, 2024