
Towards independent modules #510

Conversation


@GiasemiSh commented Aug 4, 2020

Closes #498, #485.

Description

This PR changes the way modules interact with qmstr.
Previously, the master server started the modules when switching to a specific phase, which required the modules to be present in the master container. Since we want to move away from this monolithic approach, we needed to change how the modules interact with the master.

With this PR, modules can be initialized independently; each module then waits on a signal channel before starting its work.

For instance, the scancode-module dependencies live in a dedicated image that is pulled and started by our k8s job.
The module waits until the master server has switched to the analysis phase. Once that happens, the signal channel is closed, releasing all the analyzer modules at once. When all the modules are done, the master gets notified and continues with the next phase.
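
A minimal sketch of this close-to-broadcast pattern in Go (all names here are illustrative, not the actual qmstr identifiers):

package main

import (
    "fmt"
    "sync"
)

func main() {
    startAnalysis := make(chan struct{}) // closed by the master on the phase switch
    var wg sync.WaitGroup

    for id := 1; id <= 3; id++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            <-startAnalysis // every waiting analyzer unblocks once the channel is closed
            fmt.Printf("analyzer %d running\n", id)
        }(id)
    }

    close(startAnalysis) // broadcast the phase switch to all analyzers at once
    wg.Wait()            // the master moves on only after all analyzers are done
}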

Tasks to make this PR work:

  • Rebase over wip/christoph/k8s-mvn

  • Move each module's dependencies into dedicated Docker images.

  • Add the modules to the k8s job, starting with scancode-module: pull the scancode-module image and run the module.

  • Run the job and debug.

Notes

  • Check that we connect to the master server correctly (see the sketch below).
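
A hedged sketch of such a connectivity check (the address, timeout, and dial options are assumptions; the actual qmstr connection code differs):

package main

import (
    "context"
    "log"
    "time"

    "google.golang.org/grpc"
)

func main() {
    // The address is a placeholder; modules receive it via flags such as -rserv.
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    conn, err := grpc.DialContext(ctx, "localhost:50051",
        grpc.WithInsecure(), // qmstr's actual transport security may differ
        grpc.WithBlock())    // block so an unreachable master fails fast
    if err != nil {
        log.Fatalf("cannot reach master server: %v", err)
    }
    defer conn.Close()
    log.Println("connected to master server")
}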

Depends on #477 #503 #499.

GiasemiSh and others added 19 commits (July 9, 2020 13:50)

  • Run the analysis/reporting modules from the qmstrctl analyze / qmstrctl report commands; modules can now run outside the master server.

  • Add a function that pings the server once the current phase has finished.

  • Create a signal channel for the analyzers; they wait for the channel to be closed before starting their analysis.

  • Return the whole config declaration for the specific analyzer.

  • Close the analysis phase once all the analyzers have finished their processes.

  • Shut down analyzers gracefully after they finish their processes or in case of an error, notifying the master server either way (see the sketch after this list).

  • Create a signal channel to start the reporters; count running reporters and notify the master server once they have finished.

  • Connect modules to the running master server; use sync.WaitGroup to wait for the goroutines to finish before exiting.
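
A minimal sketch of the finish-and-notify flow described in these commits (notifyMaster is a hypothetical stand-in for the actual RPC to the master):

package main

import (
    "fmt"
    "sync"
)

// notifyMaster is a hypothetical stand-in for the RPC that tells the
// master server a module has finished or failed.
func notifyMaster(id int, err error) {
    if err != nil {
        fmt.Printf("module %d failed: %v\n", id, err)
        return
    }
    fmt.Printf("module %d finished\n", id)
}

func main() {
    var wg sync.WaitGroup
    for id := 1; id <= 2; id++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            var err error // set by the module's actual work
            defer func() { notifyMaster(id, err) }()
            // ... run the analyzer or reporter here ...
        }(id)
    }
    wg.Wait() // don't exit before every module has reported back
}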

@fullsushidev left a comment


Added some comments so we can go through this together during the sync.

lib/go-qmstr/cli/phasecontrol.go (resolved)
lib/go-qmstr/cli/phasecontrol.go (resolved)
// wait until all modules have finished
ModulesAreDone = make(chan struct{})
log.Printf("Waiting for modules to finish.. \n")
<-ModulesAreDone // <-- THIS MAY NOT WORK!!! Select{}
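// Note: this receive blocks until ModulesAreDone is closed or a value is
// sent on it; an empty select{}, by contrast, blocks forever and can never
// be released.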

Need help understanding this.

// Register the reporter's command-line flags and count the running reporter.
var rprID int32
flag.Int32Var(&rprID, "rid", -1, "unique reporter id")
flag.StringVar(&serviceAddress, "rserv", "", "connect to reporting service address")
CountReporters++

Add a TODO flag for the DB address.
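
A hedged sketch of what that flag could look like (the name dbaddr, the variable, and the empty default are illustrative, not agreed qmstr identifiers):

package main

import (
    "flag"
    "log"
)

func main() {
    // TODO: wire this up to the real DB configuration; all names are illustrative.
    var dbAddress string
    flag.StringVar(&dbAddress, "dbaddr", "", "connect to database address")
    flag.Parse()
    log.Printf("database address: %q", dbAddress)
}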

@marcomicera marked this pull request as ready for review August 10, 2020 08:36
@marcomicera marked this pull request as draft August 12, 2020 07:12
@marcomicera deleted the branch QMSTR:feature/self-contained-modules October 28, 2020 10:49