try more complete lifecycle #2744
base: async-loop-issue
@@ -0,0 +1,34 @@
import asyncio
from concurrent.futures import ThreadPoolExecutor
from contextlib import contextmanager
from signal import SIGINT, SIGTERM

from flytekit.loggers import logger


def handler(loop, s: int):
    loop.stop()
    logger.debug(f"Shutting down loop at {id(loop)} via {s!s}")
    loop.remove_signal_handler(SIGTERM)
    loop.add_signal_handler(SIGINT, lambda: None)


@contextmanager
def use_event_loop():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
Comment on lines +18 to +19
If there is an existing event loop running, then I do not think a library should be modifying the global event loop. If there is another event loop running with tasks from another library, then switching out the event loop from underneath them can cause problems. For example, if another library is actively scheduling work, it will have tasks end up in two different loops.

I can add a try block around this then, to look for an existing event loop and use that if found. But I think that is not a good idea. Still learning, but I feel libraries should not be creating event loops on import. As of today, this is true of flytekit's dependencies (except for the unionai library). The lack of an event loop is why the error came about in the first place, right? My issue with adding a try block and then using the event loop if we find one:
Event loops are thread singletons; there's one per thread. I feel that if your library needs an event loop on import (not on calling an executable like ...), that is a problem. Alternatively, we can also check to see if there's an event loop, save it to a variable if so, and then restore it later, basically what async run does but with one extra step. But honestly I'd rather see it fail; seeing the failure allowed us to find an issue in the union library.

unionfs does not create the event loop during import. The event loop is created when ...
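As a rough, hypothetical sketch (not code from this PR) of the "check for an existing loop, save it, and restore it afterwards" idea discussed above, with a made-up helper name swap_event_loop:

import asyncio
from contextlib import contextmanager


@contextmanager
def swap_event_loop():
    # Hypothetical: remember whatever loop is currently set (if any),
    # install a fresh one, and put the original back on exit.
    try:
        previous = asyncio.get_event_loop()
    except RuntimeError:
        previous = None
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        yield loop
    finally:
        loop.close()
        asyncio.set_event_loop(previous)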
In principle, I am okay with this, but async Python libraries do different things:

My preferred solution is 2, if there was a good way to pass the new event loop into a library that does 1.

Thinking about this more, yeah, shouldn't we do 2 always? We can't prevent user code from arbitrarily running ... There will probably be bugs related to usage of ...

In the case of grpc aio, doesn't just having the loop set count as passing it in?
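As a rough illustration of that last point (generic, not grpc-specific): once a loop has been installed via asyncio.set_event_loop, library code that looks the loop up rather than receiving it as a parameter ends up on that same loop. The library_setup function below is a hypothetical stand-in:

import asyncio


def library_setup():
    # Stand-in for a library that calls get_event_loop() from synchronous
    # setup code instead of taking a loop argument.
    return asyncio.get_event_loop()


loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
assert library_setup() is loop  # the installed loop is the one the library sees
loop.close()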
    executor = ThreadPoolExecutor()
    loop.set_default_executor(executor)
    for sig in (SIGTERM, SIGINT):
        loop.add_signal_handler(sig, handler, loop, sig)
    try:
        yield loop
    finally:
        tasks = asyncio.all_tasks(loop=loop)
        for t in tasks:
            logger.debug(f"canceling {t.get_name()}")
            t.cancel()
        group = asyncio.gather(*tasks, return_exceptions=True)
        loop.run_until_complete(group)
        executor.shutdown(wait=True)
        loop.close()
Can you leave a comment here that explains why we need this functionality?
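For context, a hypothetical usage sketch of the context manager added in this diff (the coroutine main below is illustrative, not part of the PR, and use_event_loop is assumed to be importable from the new module):

import asyncio


async def main():
    await asyncio.sleep(0.1)
    return "done"


with use_event_loop() as loop:
    # Work runs on the managed loop; on exit, pending tasks are cancelled,
    # the gathered results are awaited, the executor shuts down, and the
    # loop is closed.
    result = loop.run_until_complete(main())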