Improve error handling and logging #97
base: develop
Conversation
One other general question: does caplog only return logs generated by/during that specific test case, or does it contain all logs?
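For reference, my understanding is that pytest's caplog fixture is scoped to the individual test, so it only holds records emitted while that test runs. A rough sketch of how an assertion against it could look (the test name and message below are illustrative, not from this PR):

import logging

def test_example_logging(caplog):
    caplog.set_level(logging.INFO)
    logging.getLogger("aerie_cli").info("configuration created")
    # caplog only holds records emitted during this test
    assert "configuration created" in caplog.text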
@@ -53,9 +58,40 @@ def set_alternate_configuration(configuration_identifier: str):
def setup_global_command_context(hasura_admin_secret: str):
    CommandContext.hasura_admin_secret = hasura_admin_secret

def setup_logging(debug: bool):
Suggest centralizing this logic in src/utils/logger.py. Ideally, that module can have a function like set_log_level which is the callback for the --debug option.
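A minimal sketch of what that could look like (the module path follows the suggestion above; the function body and the Typer wiring are assumptions, not code from this PR):

# src/utils/logger.py (sketch)
import logging

def set_log_level(debug: bool) -> bool:
    """Typer callback for --debug: flip the root logger between INFO and DEBUG."""
    logging.getLogger().setLevel(logging.DEBUG if debug else logging.INFO)
    return debug

The option definition would then carry the behavior itself, e.g. debug: bool = typer.Option(False, "--debug", callback=set_log_level).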
@@ -1,24 +1,28 @@
"""Programmatic entrypoint for CLI application.
"""
import sys
from rich.console import Console

import logging
I know I went back and forth on this, but I decided I like the ergonomics better if we configure a specific child logger in src/utils/logger.py. Then, it's clearer in the code that the logger we import has been specifically configured for this application. For example:

from aerie_cli.utils.logger import aerie_cli_logger as logger
...
logger.info("Whatever")
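For concreteness, a sketch of how src/utils/logger.py might set up that named logger (the handler and format here are placeholders, not decisions made in this PR):

# src/utils/logger.py (sketch)
import logging

aerie_cli_logger = logging.getLogger("aerie_cli")
aerie_cli_logger.setLevel(logging.INFO)

_console_handler = logging.StreamHandler()
_console_handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s"))
aerie_cli_logger.addHandler(_console_handler)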
+    to_print = "Deleting goals for Plan ID {plan}: ".format(plan=plan_id)
     goal_ids = []
     for goal in clear_goals:
         goal_ids.append(goal["goal"]["id"])
-        typer.echo(str(goal["goal"]["id"]) + " ", nl=False)
-    typer.echo()
+        to_print += str(goal["goal"]["id"]) + " "
+    logging.info(to_print)
I know this isn't your code, but I vote we tidy this up while we're at it:

goal_ids = []
for goal in clear_goals:
    goal_ids.append(goal["goal"]["id"])
logging.info(f"Deleting goals for Plan ID {plan_id}: {' '.join([str(i) for i in goal_ids])}")
TIME_FORMAT = '%Y-%m-%d_%H:%M:%S'
START_TIME = datetime.now().strftime(TIME_FORMAT)
LOGS_PATH = Path(APP_DIRS.user_config_dir).resolve().absolute() / "logs"
LOGS_PATH.mkdir(parents=True, exist_ok=True)
CURRENT_LOG_FILE_NAME = f"aerie_cli_{START_TIME}"
CURRENT_LOG_PATH = LOGS_PATH / (f"{CURRENT_LOG_FILE_NAME}.log")
number_appended_to_log = 1
while CURRENT_LOG_PATH.exists():
    CURRENT_LOG_PATH = LOGS_PATH / (f"{CURRENT_LOG_FILE_NAME}_{number_appended_to_log}.log")
This feels like values and logic that would be more cleanly tracked in a PersistentLogs class. For the while loop specifically, we could also solve this by appending a UUID, which will avoid the race condition where two Aerie-CLI processes both think they've found an available log path before creating it.
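Roughly the shape I have in mind (the class name follows the suggestion; everything else, including the uuid4 suffix, is an illustrative sketch rather than code from this PR):

import logging
from datetime import datetime
from pathlib import Path
from uuid import uuid4

class PersistentLogs:
    TIME_FORMAT = "%Y-%m-%d_%H:%M:%S"

    def __init__(self, logs_dir: Path):
        self.logs_path = logs_dir.resolve()
        self.logs_path.mkdir(parents=True, exist_ok=True)
        start_time = datetime.now().strftime(self.TIME_FORMAT)
        # The uuid4 suffix makes the name unique up front, so two concurrent
        # CLI processes never race for the same "available" path.
        self.current_log_path = self.logs_path / f"aerie_cli_{start_time}_{uuid4().hex}.log"

    def attach(self, logger: logging.Logger) -> None:
        logger.addHandler(logging.FileHandler(self.current_log_path))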
SESSION_FILE_DIRECTORY.mkdir(parents=True, exist_ok=True)
LOGS_PATH.mkdir(parents=True, exist_ok=True)

def clear_old_log_files():
Same comment as above -- this probably ought to be a PersistentLogs.clear_old_log_files method.
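As a sketch, the module-level function could fold into the class along these lines (the 30-day retention cutoff is an arbitrary example, and this assumes import time alongside the imports in the sketch above):

    def clear_old_log_files(self, max_age_days: int = 30) -> None:
        """Delete log files in self.logs_path older than max_age_days."""
        cutoff = time.time() - max_age_days * 24 * 60 * 60
        for log_file in self.logs_path.glob("*.log"):
            if log_file.stat().st_mtime < cutoff:
                log_file.unlink()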
GLOBAL_LOGGER = logging.getLogger(__name__)
GLOBAL_LOGGER.propagate = True  # we must set this to see stdout in caplog
Do we need to name and track GLOBAL_LOGGER?

Suggested change:
-GLOBAL_LOGGER = logging.getLogger(__name__)
-GLOBAL_LOGGER.propagate = True  # we must set this to see stdout in caplog
+logging.getLogger().propagate = True  # we must set this to see stdout in caplog
@@ -26,20 +27,21 @@
 CONFIGURATION_NAME = "localhost"
 configuration_id = -1

-def test_configurations_clean():
+def test_configurations_clean(caplog):
+    caplog.set_level(logging.INFO)
I think we can define our own fixture that takes in caplog and sets the log level once. Then, we can just take in that fixture in each of our tests.
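For instance, something like this in conftest.py (the fixture name is an assumption):

import logging
import pytest

@pytest.fixture
def caplog_info(caplog):
    """caplog pre-set to INFO so individual tests don't repeat the boilerplate."""
    caplog.set_level(logging.INFO)
    return caplog

Tests would then take caplog_info instead of caplog and drop the explicit set_level call.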
f"{CURRENT_LOG_PATH}") | ||
# We don't want to print the full traceback, | ||
# so we use debug | ||
logging.debug(traceback.format_exc()) |
Suggested change:
 logging.debug(traceback.format_exc())
+sys.exit(-1)
Closes #3