Refactor signing/assemble code in openjdk_build_pipeline.groovy to support Windows/docker builds #1117
Conversation
Thank you for creating a pull request! Please check out the information below if you have not made a pull request here before (or if you need a reminder how things work).

Code Quality and Contributing Guidelines: If you have not done so already, please familiarise yourself with our Contributing Guidelines and Code Of Conduct, even if you have contributed before.

Tests: GitHub Actions will run a set of jobs against your PR that will lint and unit test your changes. Keep an eye out for the results from these on the latest commit you submitted. For more information, please see our testing documentation. In order to run the advanced pipeline tests (executing a set of mock pipelines), it requires an admin to post `run tests`.
run tests (Running at https://ci.adoptium.net/job/build-scripts-pr-tester/job/openjdk-build-pr-tester/1920/)

PR TESTER RESULT ❎ Some pipelines failed or the job was aborted! ❎
Force-pushed from 0f71881 to 457077d
run tests (Triggered https://ci.adoptium.net/job/build-scripts-pr-tester/job/openjdk-build-pr-tester/1922/ with refactored cleanWS post build)
Force-pushed from 457077d to ebbc774

PR TESTER RESULT ❎ Some pipelines failed or the job was aborted! ❎
Force-pushed from ebbc774 to c1a357b
run tests (This one is not using the cached download so has a chance of being a valid test :-) https://ci.adoptium.net/job/build-scripts-pr-tester/job/openjdk-build-pr-tester/1923/)

PR TESTER RESULT ❎ Some pipelines failed or the job was aborted! ❎
Force-pushed from c1a357b to 4286580
run tests (enableSigner was deleted from some invocations of buildScripts - the linter didn't seem to pick it up though 🤔 New run: https://ci.adoptium.net/job/build-scripts-pr-tester/job/openjdk-build-pr-tester/1924)

PR TESTER RESULT ❎ Some pipelines failed or the job was aborted! ❎
Force-pushed from 4286580 to c7d1ce7
run tests (Running at https://ci.adoptium.net/job/build-scripts-pr-tester/job/openjdk-build-pr-tester/1926/ - some re-runs using the mechanism described in #1057 (comment) are happening first, to verify that it should start testing my code once it gets to that point)

PR TESTER RESULT ❎ Some pipelines failed or the job was aborted! ❎
Force-pushed from 9d749be to 6c889b5
run tests

PR TESTER RESULT ✅ All pipelines passed! ✅
Ref: https://ci.adoptium.net/job/build-scripts/job/jobs/job/jdk21u/job/windbld/782/
https://ci.adoptium.net/job/build-scripts/job/jobs/job/jdk21u/job/windbld/760/ without that option ran through OK.

Diff of file listings between the 21.0.5+8-ea build which we have published and the one from this PR in the windbld job
When running outside docker (i.e. replicating the normal build) I'm getting a problem during the unstash of the results of the signing phase.
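For context, this hand-off between the signing and assemble phases uses the Jenkins stash/unstash steps; a minimal sketch of the pattern, with a hypothetical stash name and include path rather than the ones actually used in openjdk_build_pipeline.groovy:

```groovy
// Producer side: stash the signed artifacts so another node can pick them up.
// 'signed-artifacts' and the include pattern are hypothetical examples.
context.stash(name: 'signed-artifacts', includes: 'workspace/target/**')

// Consumer side, typically on a different node/agent:
context.node(label) {
    context.unstash('signed-artifacts')
}
```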
Force-pushed from 81504fc to 7ede372
Signed-off-by: Andrew Leonard <[email protected]>
Co-authored-by: Andrew Leonard <[email protected]>
jdk-23.0.1 release jdkBranch GA tag check using wrong repo (adoptium#1127) (adoptium#1128) - Signed-off-by: Andrew Leonard <[email protected]>
Signed-off-by: Stewart X Addison <[email protected]>
force push to resolve perceived conflict

run tests

PR TESTER RESULT ✅ All pipelines passed! ✅
@@ -2024,6 +2158,9 @@ class Build {
    context.node(label) {
        addNodeToBuildDescription()
        // Cannot clean workspace from inside docker container
        if ( buildConfig.TARGET_OS == 'windows' && buildConfig.DOCKER_IMAGE ) {
            context.bat('rm -rf c:/workspace/openjdk-build/cyclonedx-lib c:/workspace/openjdk-build/security')
this seems a little odd - why specifically c:\workspace, and why just cyclonedx-lib and security?

presumably permissions? Why can't we just do rm -rf ' + context.WORKSPACE + '/workspace
This was the minimum that was required to clear up the workspace to allow the git clean operation to succeed without failures, as I recall. I would definitely like to improve the way this is done, but those two directories specifically caused a problem (potentially because they are written to during the build, and that happens inside the docker container under different permissions?). I would strongly support a separate issue being raised to look at whether we can avoid writing into that directory and/or move/change the scope of this particular rm operation. We do have various workspace clean-up options on the build to clear things up, but unless that is done at the correct time (as per the comment above the bit you've highlighted) it doesn't work in the container case.

That was a longer answer than I planned, but let me know if you're OK with deferring that to an action after this PR, or if you want it addressed before you feel able to approve it. I don't like it, but I'm also conscious of the fact that the PR as-is has undergone a good amount of testing for now, and there are multiple cases that would have to be tested to fully test additional changes here.
I have done some tests without this rm and so far have not reproduced the failure, but I suspect it's an edge case on certain failures which triggered the need to reset the workspace, so I'm still more comfortable leaving this in if possible. It's definitely worth keeping an eye on, though - I could add some debug to see later on whether it would have failed without it (I'm running with that in another branch for the testing).
ok makes sense as an edge case, but can we use:
'rm -rf ' + context.WORKSPACE + '/cyclonedx-lib'
'rm -rf ' + context.WORKSPACE + '/security'
Happy to change it, but any particular reason? It just feels more efficient to spawn one shell and rm process instead of two.

Edit: Ah, your main point is to use the workspace variable. I'll test that today.
At that point WORKSPACE is resolving to c:\workspace\workspace\build-scripts\jobs\jdk21u\... instead of the "shortname", so it would clean up the wrong directory. (Also it's using \ instead of /, which is likely to cause problems.)
OK, I can get it to use that variable if I move the workspace definition earlier in the build() function and remove the multiple definitions, then wrap this part in context.ws(workspace).
Pushed the change and it seems to work in the docker environment on initial tests - LMK if moving the scope of the variable is likely to cause any side effects, e.g. on other platforms.
let me check, didn't realize this was out of scope of the context.ws()
yeah looks good now, buildScripts() is run within the context.ws()...
…directories Signed-off-by: Stewart X Addison <[email protected]>
run tests
I've reviewed this to the best of my knowledge and ability. Given the scope of the change and the extensive testing, I'm happy to approve.
Thanks - at this point I'll take that ;-) We'll find out if there are any side effects from the EA runs that'll happen overnight. (I'll merge once the pipeline tester completes, but given it's through on a couple of versions I'm not concerned.)

PR TESTER RESULT ❎ Some pipelines failed or the job was aborted! ❎
The failure was in the Alpine build, where it had a connectivity problem on dockerhost-azure-ubuntu2204-x64-1. The cause is unclear, but a subsequent run on the same machine suffered a similar fate. Running a subsequent one (build 74) on another machine did not show a problem, and running two builds with the existing pipelines (not via the PR tester) also passed: xlinux jdk21 and Alpine jdk21. It is assumed that this is a machine-specific issue and that this PR is not the cause.
There's a lot in here, so apologies in advance to the reviewers, who will need a bit of time to review this properly ;-) Noting that there are some `SXAEC:` references in here which are things that still need to be cleaned up, but the overall architecture and flow is now functional, as per the builds in https://ci.adoptium.net/job/build-scripts/job/jobs/job/jdk21u/job/windbld which are now getting through all the steps and running tests.

Some things done in this PR:

- New `batOrSh` function that runs commands under the windows batch scripts if we're on windows, instead of always using `bash`, as this was causing some instability (issue 3714 listed below; a sketch of the idea appears at the end of this description)
- `buildScriptsAssemble` and `buildScriptsEclipseSigner` are now separate functions
- The `sign` phase has been renamed as `sign tgz/zip` to avoid ambiguity. I'm tempted to change `assemble` to `assemble & SBoM` (noting that it also includes the `du -k` of the output from `make-adopt-build-farm.sh`, which is slow - at least 10 minutes - on my current test machine)
- `openjdk_build_pipeline:` eyecatchers added so we can more easily find the separate sections while looking at the build logs (`curl -s` the `consoleText` link and grep it for `_pipeline:` - similar to the `build.sh:` lines from the temurin-build repo)
- `envVars` has been extracted into the top level `build()` and passed into the `buildScripts()` and `buildScriptsAssemble()` functions that use it
- `openjdk_build_dir_arg` (obsoleted in "Remove use of --user-openjdk-build-root-directory now JDK-8326685 backported" #1084) is commented out, but I'm still tempted to just remove it now ...
- `**/*` was unnecessary.

There are still some issues with the workspace cleaning options which may need to be addressed, although that can be done in subsequent PRs, e.g. `rm -rf c:/workspace/openjdk-build/cyclonedx-lib c:/workspace/openjdk-build/security` and `rm -rf ' + context.WORKSPACE + '/workspace/target/*`. Some additional work will be needed before the clean options will work. Ref: adoptium/infrastructure#3723

Fixes adoptium/infrastructure#3709
Supersedes #1103 (as this includes those changes)
Fixes adoptium/infrastructure#3714
Note: There is some potential follow-on work that could be done to tidy up this groovy script overall in #1116
Existing windows pipeline:

New pipeline:
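As a rough illustration of the `batOrSh` idea mentioned in the list above - the real function in this PR may differ in name, signature, and behaviour, and `context`/`buildConfig` are assumed to be the objects used elsewhere in openjdk_build_pipeline.groovy:

```groovy
// Run a command under the native Windows batch interpreter via bat() on
// Windows, and via sh() everywhere else, instead of always forcing bash
// (which was causing the instability tracked in issue 3714).
def batOrSh(String command) {
    if (buildConfig.TARGET_OS == 'windows') {
        context.bat(command)
    } else {
        context.sh(command)
    }
}
```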