From 7af38eb98bdceb8fc6f8635ed7dd664ef44e4b10 Mon Sep 17 00:00:00 2001
From: Tianyi Tao
Date: Sun, 25 Aug 2024 22:14:02 +0000
Subject: [PATCH] Fix unexpected inference_mode interaction with torch.autograd.functional.jacobian (#130307)

Fixes #128264
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130307
Approved by: https://github.com/soulitzer
---
 docs/source/notes/autograd.rst | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/source/notes/autograd.rst b/docs/source/notes/autograd.rst
index f070f22041836a..eb37a66e59eaf1 100644
--- a/docs/source/notes/autograd.rst
+++ b/docs/source/notes/autograd.rst
@@ -240,9 +240,9 @@
 This better runtime comes with a drawback: tensors created in inference mode
 will not be able to be used in computations to be recorded by autograd after
 exiting inference mode.
 
-Enable inference mode when you are performing computations that don’t need
-to be recorded in the backward graph, AND you don’t plan on using the tensors
-created in inference mode in any computation that is to be recorded by autograd later.
+Enable inference mode when you are performing computations that do not have
+interactions with autograd, AND you don’t plan on using the tensors created
+in inference mode in any computation that is to be recorded by autograd later.
 
 It is recommended that you try out inference mode in the parts of your code
 that do not require autograd tracking (e.g., data processing and model evaluation).
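The drawback that the patched paragraph describes can be sketched with a small example (assumes PyTorch is installed; the tensor names are illustrative, not from the PR):

```python
import torch

# Create a tensor inside inference mode: autograd records no metadata for it.
with torch.inference_mode():
    x = torch.ones(3)  # an "inference tensor"

# After exiting inference mode, using x in a computation that autograd
# would record fails, as the doc note warns: the op would need to save
# x for backward, which inference tensors do not support.
w = torch.ones(3, requires_grad=True)
raised = False
try:
    y = (w * x).sum()
    y.backward()
except RuntimeError:
    raised = True
print("inference tensor rejected by autograd:", raised)
```

A common workaround, per the error message PyTorch emits in this situation, is to clone the tensor outside inference mode (`x.clone()`) before using it in recorded computation.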