fix: fix faulty logic for handling possible double-encoding of github app private key #3363
base: main
Conversation
…private key Signed-off-by: Kent Rancourt <[email protected]>
Codecov Report
Attention: Patch coverage is

@@ Coverage Diff @@
##             main    #3363      +/-   ##
==========================================
+ Coverage   52.47%   52.49%   +0.02%
==========================================
  Files         291      291
  Lines       26617    26625       +8
==========================================
+ Hits        13968    13978      +10
+ Misses      11886    11884       -2
  Partials      763      763
🤦 The reason I picked the previous (albeit faulty) logic is that we would not have to keep up with the input requirements of the GitHub API, and it would typically produce more telling errors (from GitHub itself) when a key was not double base64-encoded. For example, if someone now adds a newline in front of their key, they will receive a vague base64 decode error rather than a message from GitHub about the key being wrong.
With your logic, if someone botches the extra encoding -- by missing a digit when copying and pasting, for instance -- the result is a surprising authn error from GitHub instead of a decoding error. But the docs don't say to do that extra encoding, so I'll acknowledge that an incorrect key without extra encoding is a far more likely scenario than a botched encoding, and your approach handles that case better. Sorry for not recognizing this initially. My logic handles the less likely case better: you get a decoding error instead of an authn error.

I would be fine with just switching back to a corrected version of your logic (but keeping the new tests), but I'm also wondering if there isn't an approach that handles all cases correctly. wdyt of this:

- We strip leading and trailing whitespace from the input and then check whether it contains non-base64 characters -- which a PEM-encoded key always will. (Checking for this seems more forgiving than checking for the PEM header, in case GitHub starts using a different key format in the future.)
- If non-base64 characters are found, we're probably not looking at a botched encoding, so we should use the value as-is and let authn fail if that value isn't the correct key.
- On the other hand, if we see only base64 characters, we attempt to decode. If decoding fails, it's more likely a botched encoding than an incorrect key, so we return the decoding error wrapped with a message informing the user that the extra encoding is unnecessary.

I think the only scenario this isn't resilient to would be GitHub Apps ever switching to a key format that uses base64 digits only, and that does not seem likely at all. I could be overthinking all of this, but this seems like it would return the correct type of error in all cases.
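A minimal Go sketch of the heuristic proposed here -- the function name, regexp, and error message are my own illustration, not Kargo's actual code:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"regexp"
	"strings"
)

// base64Only matches strings composed entirely of standard base64 characters
// (plus optional '=' padding). A PEM-encoded key always contains characters
// outside this set, such as the '-' in the header and interior newlines.
var base64Only = regexp.MustCompile(`^[A-Za-z0-9+/]+={0,2}$`)

// decodeAppKey trims surrounding whitespace, uses PEM-looking input as-is,
// and otherwise attempts a base64 decode, wrapping any failure with a hint
// that the extra encoding is unnecessary.
func decodeAppKey(input string) ([]byte, error) {
	trimmed := strings.TrimSpace(input)
	if !base64Only.MatchString(trimmed) {
		// Non-base64 characters present: almost certainly a raw PEM key.
		// Use it as-is and let authn fail if it's the wrong key.
		return []byte(trimmed), nil
	}
	decoded, err := base64.StdEncoding.DecodeString(trimmed)
	if err != nil {
		return nil, fmt.Errorf(
			"input looked base64-encoded but did not decode "+
				"(note: base64-encoding the key is unnecessary): %w", err,
		)
	}
	return decoded, nil
}

func main() {
	key, err := decodeAppKey("aGVsbG8=")
	fmt.Println(string(key), err)
}
```

A botched encoding (e.g. a missing digit) fails the decode and surfaces a decoding error, while a raw PEM key passes through untouched, which is the error behavior the comment argues for.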
Follows up on #3055 and #3059
cc @hiddeco
The issue was here: `&corruptInputErr` is a `**base64.CorruptInputError`, which `err` never is. Rather than just delete the offending `&`, I've refactored this and added a test to be sure this is fixed, but I've also switched over to a different strategy for identifying whether we're dealing with a "raw" PEM-encoded PKCS1 or PKCS8 key (i.e. one not unnecessarily double-encoded). It seems this might actually be more reliable than guessing we have a PEM-encoded key whenever the input happens not to be a valid base64 encoding. But please lmk if you see any problems with the new approach or if you have any ideas for making it more robust.