Copying data results in leaf tensor grad warning #8496

Closed
lstrgar opened this issue Mar 23, 2024 · 1 comment
lstrgar commented Mar 23, 2024

I want to copy data from PyTorch into a Taichi field. I understand the best way to do this is with the .from_torch() method; however, I don't believe this method supports slicing. E.g., if I have a field of shape (A, B, C) and I want to copy a tensor of shape (A, B) into one of the C available slots, I cannot write something like field[:, :, 0].from_torch(tensor). Instead I'm copying manually as below, but this produces the following unexpected warning. Why?

CODE

import taichi as ti
import torch

device = "cuda:0"

ti.init(arch=ti.cuda, default_fp=ti.f64, debug=True, device_memory_fraction=0.999)

x_torch = torch.randn(10, 10, dtype=torch.float64, requires_grad=True, device=device)

x_ti = ti.field(dtype=ti.f64)
ti.root.dense(ti.ijk, (10, 10, 5)).place(x_ti)

ti.root.lazy_grad()

def cpy_2taichi(dest, src, device):
    # Round-trip through torch: export the whole field, write the slice, re-import
    inter = dest.to_torch(device)
    inter[:, :, 0] = src
    dest.from_torch(inter)

cpy_2taichi(x_ti, x_torch, device)

OUTPUT

[Taichi] version 1.7.0, llvm 15.0.4, commit 2fd24490, linux, python 3.10.13
[Taichi] Starting on arch=cuda
/home/qpd4588/miniconda3/envs/tds/lib/python3.10/site-packages/taichi/lang/kernel_impl.py:763: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at /opt/conda/conda-bld/pytorch_1708025847130/work/build/aten/src/ATen/core/TensorBody.h:489.)
  if v.requires_grad and v.grad is None:
lstrgar commented Mar 24, 2024

This happens, I believe, because the Taichi kernel modifies the tensor object's grad attribute. If that attribute is currently None, the warning is thrown. This is not ideal, but the warning can be avoided by initializing .grad to zeros.
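The mechanism can be reproduced without Taichi at all. The following is a minimal, torch-only sketch (the tensors here are illustrative, not Taichi internals): PyTorch emits this UserWarning when .grad is read on a non-leaf tensor whose grad is still undefined, which is exactly the access Taichi's check `if v.requires_grad and v.grad is None:` performs. Pre-populating .grad with zeros, as suggested above, makes the grad defined and silences the warning.

```python
import warnings

import torch

x = torch.randn(3, requires_grad=True)  # leaf tensor
y = x * 2                               # non-leaf: produced by an autograd op

# Reading .grad on a non-leaf tensor with no grad set emits the UserWarning
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    _ = y.grad
assert any("non-leaf" in str(w.message) for w in caught)

# Workaround from the comment above: initialize .grad to zeros first,
# after which the same access no longer warns
y.grad = torch.zeros_like(y)
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    _ = y.grad
assert len(caught) == 0
```

An alternative, if gradients should actually flow to the non-leaf tensor, is calling .retain_grad() on it before the access, as the warning text itself suggests.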

@lstrgar lstrgar closed this as completed Mar 24, 2024
@github-project-automation github-project-automation bot moved this from Untriaged to Done in Taichi Lang Mar 24, 2024