Running more than one machine against the same local sstate-cache is not efficient, and we need to set up a local sstate mirror to be used in CI builds.
The PR #12 build contains a change that only adds a new job, yet the local sstate-cache is not being fully reused, despite being persistent between builds.
This can be seen in the job, where only about a 52% match is achieved:
2024-09-11 14:13:17 - INFO - Initialising tasks...Sstate summary: Wanted 2180 Local 1141 Mirrors 0 Missed 1039 Current 0 (52% match, 0% complete)
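A minimal sketch of how a shared sstate mirror could be wired into the CI builds, assuming a kas fragment with a `local_conf_header` section; the file name and mirror path below are placeholders, not the project's actual setup:

```yaml
# Hypothetical kas fragment (e.g. ci/sstate-mirror.yml) layered on top of the BSP kas config.
# SSTATE_MIRRORS makes BitBake query a shared read-only mirror for missing sstate objects
# before rebuilding them locally; PATH is expanded by BitBake itself.
header:
  version: 14   # format version depends on the kas release in use
local_conf_header:
  sstate-mirror: |
    SSTATE_MIRRORS = "file://.* file:///srv/yocto/sstate-mirror/PATH"
```

One possible arrangement would be to let post-merge builds populate the mirror while PR builds consume it read-only, so the per-builder sstate-cache only has to cover the delta.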
Right, we need some type of lock on the layers to improve the build. Maybe in this case the sstate mirror is not needed, because all the machines are quite similar and share the same architecture.
About the lock:
Running kas with --lock can improve this situation, but for that it might be better to have an external repository (like quic-yocto/ci-manifest.git) just to store the locked hashes for the kas YAML. That way we don't pollute the local kas config of the BSP layer and don't force users onto a specific version.
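As a rough sketch of what that could look like, assuming a kas release with lockfile support (`kas dump --lock`, or the newer `kas lock` command), the pinned revisions would live in a small YAML file kept in the external repo; the file name, repo names, and commit values below are placeholders:

```yaml
# Hypothetical lockfile (e.g. qcom.lock.yml) generated in CI and pushed to
# quic-yocto/ci-manifest.git after a merge; PR builds would fetch it and apply it
# on top of the BSP's own kas config so layer revisions stay reproducible.
header:
  version: 14   # format version depends on the kas release in use
overrides:
  repos:
    meta-qcom:
      commit: 0000000000000000000000000000000000000000  # placeholder SHA
    poky:
      commit: 0000000000000000000000000000000000000000  # placeholder SHA
```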
Yeah, that is my thinking as well. We can probably always use the latest in PRs and generate a lock file after the merge happens, publishing it to another external repo.