
SIGINT being sent to turbo is not forwarded to underlying tasks #9694

Open
chris-olszewski opened this issue Jan 13, 2025 · 9 comments
Labels: kind: bug (Something isn't working) · needs: author input

Comments

@chris-olszewski
Member

chris-olszewski commented Jan 13, 2025

Expected behavior

kill -SIGINT ${TURBO_PID} should result in a SIGINT being sent to all tasks being run.

Actual behavior

The SIGINT isn't forwarded to the underlying tasks.

Call To Action

If you are encountering an issue with signal handling, please provide the following (a sketch of collecting all of this in one session follows the list):

  • The output of turbo info
  • The command being used to invoke turbo
  • How the signal is being sent to turbo
  • A reproduction repository would be quite helpful as well
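
For convenience, a minimal sketch of gathering the above in a single shell session (the dev task name is just an example taken from the reports below, not a prescribed procedure):

    turbo info > turbo-info.txt   # environment details to attach to the report
    turbo run dev &               # start turbo in the background
    TURBO_PID=$!
    pstree -p "$TURBO_PID"        # record the process tree while tasks are running
    kill -SIGINT "$TURBO_PID"     # the signal delivery under discussion here
    pstree -p "$TURBO_PID"        # check whether the child tasks went away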
@Scalahansolo

We see this issue when executing turbo through a yarn script at the root of our monorepo. Specifically, the issue we see is that when the underlying dev task fails, the process isn't always killed by Turborepo. I'll then ctrl+c the entire turbo process, but the underlying task isn't killed. So when I go to run our dev command again, the old dev task is still sticking around and I need to hunt down the PID to manually kill it.

As an aside... given that Turbo prioritizes its work in part by the number of 👍🏼 on a given bug report, it's a bummer that we've now lost that tracking by porting it to a new issue.

@chris-olszewski
Member Author

chris-olszewski commented Jan 13, 2025

@Scalahansolo

  • Can you please include the exact command that you're using to invoke turbo? Is the script just turbo run dev?
  • Are you using the TUI or streamed logs?
  • Can you please provide the output of turbo info so I can get some understanding of your environment?
  • Can you give me the yarn version you are using?

So when I go to run our dev command again, the old dev task is still sticking around and I need to hunt down the PID to manually kill it.

Is this the dev task that failed? If so, could you run pstree -p PID before the task fails?
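
As an aside, if hunting for the PID by hand is a pain, something like this should work (assuming pgrep is available and only one turbo process is running):

    pstree -p "$(pgrep -n -x turbo)"   # -n picks the newest match, -x requires an exact process name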

@zanona

zanona commented Jan 13, 2025

Sorry to sound negative, but it didn't quite make sense to me to close #3711, a 2-year-old issue with 34 upvotes, and ask users to resubmit their cases. This feels more like a marketing-driven decision to clear up old issues that are still unresolved, even if I don't really see that as a problem. I really appreciate the work you guys do, but a bit more transparency as to why would be appreciated. 🙏

@chris-olszewski
Member Author

I really appreciate the work you guys do but a bit more transparency as to why would be appreciated. 🙏

We needed to break down #3711 into more actionable items. There wasn't a way to distinguish what the 👍 on #3711 indicated: most reports didn't include enough information for us to understand which issue people were hitting or what the expected outcome of that issue should be.

@anthonyshew
Contributor

anthonyshew commented Jan 13, 2025

@zanona, this GitHub repository isn't a marketing channel. It's where those of us on the core team come to do the best work of our lives and work with a community of developers to try to build something amazing.

The conversation on the original issue had degraded into something that was unclear both for the folks interacting on the issue and for us, so we split the conversation into something more fruitful. Chris has been clear about this here and here. Additionally, the original issue was already resolved in 1.8.5, and it now looks like it was re-opened for something that looked similar but turned out not to be the same.

Doing this has already proven useful to us, as @Scalahansolo's comment here has already given us a good clue that we're looking at right now.

Ultimately, these signal handling issues are an important breed of error for us. I only accidentally caught that these Issues were lying around while looking for something else, and called it out in our team channel. At that point, the 34 upvotes did their job, and we don't need to worry about them anymore. Additionally, the Issue still exists in its closed state and is linked to this one, so that signal isn't gone if we need it in the future for some reason.

As you can see, Chris is chipping away at it now. We've also put together what we hope is a better process for ourselves so that stuff like this doesn't fall through the cracks again.

@dabrowne

@Scalahansolo

So when I go to run our dev command again, the old dev task is still sticking around and I need to hunt down the PID to manually kill it.

In case it makes your life simpler, I'm currently working around this issue using pkill turbo which saves me having to hunt down the PID. In my case, the process that is lingering looks like this (output from ps aux):

/path/to/repo/node_modules/.pnpm/[email protected]/node_modules/turbo-linux-64/bin/turbo --skip-infer daemon

@chris-olszewski
Member Author

@dabrowne

That is our daemon, which is expected to continue running after the primary turbo process exits so it can keep tabs on the filesystem. If you don't want this process to stick around, you can disable it in your turbo.json with "daemon": false.
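
For reference, the option sits at the top level of turbo.json. A minimal sketch (the $schema line is optional but conventional):

    {
      "$schema": "https://turbo.build/schema.json",
      "daemon": false
    }

The trade-off is that without the daemon, turbo has to re-examine the filesystem on each invocation instead of relying on the background watcher.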

@dabrowne

@dabrowne

That is our daemon, which is expected to continue running after the primary turbo process exits so it can keep tabs on the filesystem. If you don't want this process to stick around, you can disable it in your turbo.json with "daemon": false.

@chris-olszewski ah I see, thanks for the explanation.

In the past I've observed high CPU usage after running a turbo command and then cancelling it with ctrl+c. I assumed that this was due to tasks failing to terminate and continuing to run in the background (i.e. this issue). I'll take a closer look next time this happens to see whether it is simply the Turbo daemon causing the high CPU usage.
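
If it helps narrow that down, turbo ships daemon subcommands (assuming a reasonably recent version; check turbo daemon --help):

    turbo daemon status   # is the daemon running?
    turbo daemon stop     # stop it and see whether the CPU usage disappears with it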

@chris-olszewski
Member Author

The daemon can take up significant CPU when hashing repository contents in the background; #9572 and #9564 should keep resource usage in check.
