From 466a0cd03aca3e7890b5a569670f99049b6b3775 Mon Sep 17 00:00:00 2001
From: Jason Senthil
Date: Thu, 25 Jan 2024 13:03:55 -0800
Subject: [PATCH] fix spacing in checkpointing docs (#690)

Summary:
Pull Request resolved: https://github.com/pytorch/tnt/pull/690

Example for loading the best checkpoint wasn't displayed, as the code block was missing a space

Reviewed By: gunchu, williamhufb

Differential Revision: D53008623

fbshipit-source-id: 3a88d67767d292883f28c1f87d0a860aa1f08cc7
---
 docs/source/checkpointing.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/source/checkpointing.rst b/docs/source/checkpointing.rst
index 6f7c9c0ef8..9596c0d776 100644
--- a/docs/source/checkpointing.rst
+++ b/docs/source/checkpointing.rst
@@ -119,6 +119,7 @@ By specifying the monitored metric to be "train_loss", the checkpointer will exp
 Later on, the best checkpoint can be loaded via
 
 .. code-block:: python
+
     TorchSnapshotSaver.restore_from_best(your_dirpath_here, unit, metric_name="train_loss", mode="min")
 
 If you'd like to monitor a validation metric (say validation loss after each eval epoch during :py:func:`~torchtnt.framework.fit.fit`), you can use the `save_every_n_eval_epochs` flag instead, like so
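
For context, below is a minimal sketch of how the docs example restored by this patch might be used end to end. Only ``TorchSnapshotSaver.restore_from_best`` with its ``metric_name``/``mode`` arguments and the ``save_every_n_eval_epochs`` flag appear in the patch itself; the import paths, the ``BestCheckpointConfig`` helper, the other constructor arguments, the ``eval_loss`` metric name, and the ``fit`` call are assumptions about the torchtnt API, not part of this change.

.. code-block:: python

    # Minimal sketch; anything beyond save_every_n_eval_epochs and
    # restore_from_best(metric_name=..., mode=...) is an assumption.
    from torchtnt.framework.callbacks import BestCheckpointConfig, TorchSnapshotSaver
    from torchtnt.framework.fit import fit

    your_dirpath_here = "/tmp/checkpoints"  # hypothetical checkpoint directory

    # Snapshot after every eval epoch while tracking a validation loss metric
    # (BestCheckpointConfig and its argument names are assumed here).
    saver = TorchSnapshotSaver(
        your_dirpath_here,
        save_every_n_eval_epochs=1,
        best_checkpoint_config=BestCheckpointConfig(
            monitored_metric="eval_loss",  # assumed metric name
            mode="min",
        ),
    )

    # unit, train_dataloader, and eval_dataloader are assumed to be defined
    # elsewhere in the training script.
    fit(unit, train_dataloader, eval_dataloader, callbacks=[saver])

    # Later, reload the checkpoint with the lowest recorded value of the
    # monitored metric, mirroring the restore_from_best call in the patched docs.
    TorchSnapshotSaver.restore_from_best(
        your_dirpath_here, unit, metric_name="eval_loss", mode="min"
    )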