
Commit

fix: building with lower parallelism than dependency group size (#2051)

Fixes #2042

Before this change, builds could fail because the `go.work` file generated for a given module included every module built so far, including modules in the same build `group` layer as the module being built.
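
For illustration only (the paths and Go version here are hypothetical), the `go.work` generated while building `c` could end up looking like the example below, where the `./a` entry is the problem because `a` is in the same group as `c`:

go 1.22

use (
	./somedep // built in an earlier group: fine to include
	./a       // same build group as c: should never appear here
)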

For example, if we had 4 modules all in the same group `[a, b, c, d]` and built with `-j2` (2 parallel builds), `a` and `b` would build first, in parallel. When `a` finished, we would start building `c`. But when we looked up the `schema` to get shared modules for `c`'s `go.work` file, that list would now include `a`, even though `a` and `c` are in the same build group and should never depend on each other.
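
To make the scheduling concrete, here is a rough, self-contained sketch (not the engine's actual code; the module names, the shared map, and the simulated build are stand-ins) of how a `-j2` group build that re-reads the shared set of built modules can expose a same-group sibling:

package main

import (
	"context"
	"fmt"
	"sync"
	"time"

	"golang.org/x/sync/errgroup"
)

func main() {
	group := []string{"a", "b", "c", "d"} // one dependency-group layer

	var mu sync.Mutex
	builtModules := map[string]bool{} // grows as builds in this group finish

	g, _ := errgroup.WithContext(context.Background())
	g.SetLimit(2) // -j2: at most two builds in flight

	for _, name := range group {
		name := name
		g.Go(func() error {
			// Buggy pattern: re-reading the shared map mid-group means a
			// later module (c) can see an earlier sibling (a) from its
			// own group in the module list for its go.work file.
			mu.Lock()
			shared := make([]string, 0, len(builtModules))
			for m := range builtModules {
				shared = append(shared, m)
			}
			mu.Unlock()
			fmt.Printf("go.work for %s would include: %v\n", name, shared)

			time.Sleep(10 * time.Millisecond) // simulate the build
			mu.Lock()
			builtModules[name] = true
			mu.Unlock()
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		panic(err)
	}
}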

This fix changes the build function to stop re-looking up built modules and instead use only the set of `builtModules` that was available at the start of the build group.
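
A minimal sketch of that idea, assuming a group-level driver loop roughly like the one below (`buildGroups` and its parameters are hypothetical, not the engine's real signatures): the set of built modules is snapshotted once, before the group starts, and every build in the group works from that snapshot rather than the live map.

package main

import (
	"context"
	"fmt"
	"maps"
	"sync"

	"golang.org/x/sync/errgroup"
)

// buildGroups builds each dependency group in order; builds inside a group
// only ever see the modules completed in earlier groups.
func buildGroups(ctx context.Context, groups [][]string, parallelism int) error {
	built := map[string]bool{} // stand-in for the real builtModules map
	for _, group := range groups {
		// Snapshot once, before any build in this group starts.
		snapshot := maps.Clone(built)

		var g errgroup.Group
		g.SetLimit(parallelism) // honour -j regardless of group size
		var mu sync.Mutex
		for _, name := range group {
			name := name
			g.Go(func() error {
				// Build against the snapshot, never the live map, so a
				// sibling that finishes first (a) is never visible to a
				// later module (c) in the same group.
				fmt.Printf("building %s against %d earlier modules\n", name, len(snapshot))
				mu.Lock()
				built[name] = true
				mu.Unlock()
				return nil
			})
		}
		if err := g.Wait(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	groups := [][]string{{"a", "b", "c", "d"}} // one group of four modules
	if err := buildGroups(context.Background(), groups, 2); err != nil {
		panic(err)
	}
}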

With this change we should be able to safely support any `-j` value, whether higher or lower than the number of modules in a group. This issue would likely have surfaced eventually anyway, once a system had more modules than the number of CPUs on a given machine.
wesbillman authored Jul 11, 2024
1 parent 3f88e8e commit d16ef79
Showing 1 changed file with 1 addition and 5 deletions.
6 changes: 1 addition & 5 deletions buildengine/engine.go
@@ -690,11 +690,7 @@ func (e *Engine) build(ctx context.Context, moduleName string, builtModules map[
 		return fmt.Errorf("module %q not found", moduleName)
 	}
 
-	combined := map[string]*schema.Module{}
-	if err := e.gatherSchemas(builtModules, combined); err != nil {
-		return err
-	}
-	sch := &schema.Schema{Modules: maps.Values(combined)}
+	sch := &schema.Schema{Modules: maps.Values(builtModules)}
 
 	if e.listener != nil {
 		e.listener.OnBuildStarted(meta.module)
