I have a Taichi ndarray and need to convert it to NumPy, using the to_numpy() method. An error occurs when the ndarray is very large: RuntimeError: [taichi/rhi/cuda/cuda_driver.h:taichi::lang::CUDADriverFunction<void *>::operator ()@92] CUDA Error CUDA_ERROR_ASSERT: device-side assert triggered while calling stream_synchronize (cuStreamSynchronize)
It seems that the to_numpy() method needs additional memory about the same size as the ndarray itself. A sample test code is below:
import taichi as ti

ti.init(arch=ti.gpu, device_memory_GB=0.9)

n = 512
img = ti.ndarray(ti.f32, shape=(n, n, n))  # img occupies 0.5 GB of device memory
a = img.to_numpy()  # Error
If I set device_memory_GB=1.1, there is no error. Therefore, I guess to_numpy() needs roughly twice the ndarray's size in device memory. However, my data is 4 GB and my GPU has 6 GB of memory, so to_numpy() cannot run. Is there any solution to this problem?
Additional information:
Python version: 3.10.10
Taichi version: 1.6.0
CUDA version: 12.0