
CUDAStreamView (or similar name) object #314

Open · jakirkham opened this issue Dec 19, 2024 · 2 comments
Labels: awaiting-response (Further information is requested), cuda.core (Everything related to the cuda.core module)

Comments

@jakirkham (Collaborator)

Just like in the array case, where there are many different ways to describe an array (Python Buffer Protocol, __array_interface__, __cuda_array_interface__, DLPack, etc.), there has so far been a similar issue with streams. To provide a few examples...

With all of these objects floating around, each providing a different API, there remains a need to wrangle them into some kind of common object that has proper RAII semantics and can be used in a standard way.

A CUDAStreamView, much like StridedMemoryView, would provide a standard way to consume these different objects, along with a standard object and API to use (including __cuda_stream__).

It could (see the sketch after this list)...

  1. Identify the object
  2. Keep a copy of the cudaStream_t in an attribute
  3. Keep a reference to the exporting object
  4. Query/store any other relevant info
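
A minimal sketch of that idea, for illustration only: the class name CUDAStreamView is the proposal here, not an existing cuda.core API, and it assumes __cuda_stream__ is a method returning a (version, handle) tuple.

# Hypothetical sketch; not an existing cuda.core type.
class CUDAStreamView:
    """Non-owning view over a foreign CUDA stream."""

    def __init__(self, obj):
        # 1. Identify the object: does it speak the stream protocol?
        if not hasattr(obj, "__cuda_stream__"):
            raise TypeError(f"{type(obj).__name__} does not implement __cuda_stream__")
        version, handle = obj.__cuda_stream__()
        # 2. Keep a copy of the cudaStream_t (as an integer address) in an attribute
        self._handle = handle
        # 3. Keep a reference to the exporting object so the stream stays alive
        self._owner = obj
        # 4. Query/store any other relevant info (here, just the protocol version)
        self._version = version

    @property
    def handle(self) -> int:
        """The raw cudaStream_t as an integer address."""
        return self._handle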
@leofang added the triage (Needs the team's attention) label on Dec 20, 2024
@leofang (Member) commented Dec 26, 2024

The goal of cuda.core is to avoid bifurcating CUDA-owned objects across Python projects; instead, new projects should just use cuda.core and get streams from it.

For established projects, we already have (by design) __cuda_stream__ + Device.create_stream for wrapping a foreign stream object, so I do not think there's anything extra that cuda.core needs to do, other than helping/educating/urging existing Python projects to implement __cuda_stream__.
https://nvidia.github.io/cuda-python/cuda-core/latest/generated/cuda.core.experimental.Device.html#cuda.core.experimental.Device.create_stream
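
For illustration, a minimal sketch of that flow, assuming the protocol shape described in the linked docs (a __cuda_stream__ method returning a (version, handle) tuple) and using a made-up ForeignStream wrapper standing in for a third-party library's stream:

from cuda.core.experimental import Device

class ForeignStream:
    """Hypothetical third-party stream object opting into the protocol."""

    def __init__(self, handle: int):
        self._handle = handle  # raw cudaStream_t address, owned by the other library

    def __cuda_stream__(self):
        # Protocol version 0 plus the raw stream address
        return (0, self._handle)

dev = Device()
dev.set_current()
native = dev.create_stream()                 # stand-in for a stream created elsewhere
foreign = ForeignStream(int(native.handle))  # expose its raw handle via the protocol
wrapped = dev.create_stream(obj=foreign)     # cuda.core wraps the same cudaStream_t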

@leofang added the awaiting-response (Further information is requested) and cuda.core (Everything related to the cuda.core module) labels and removed the triage (Needs the team's attention) label on Dec 26, 2024
@vyasr commented Jan 7, 2025

The main possible value-add that I see is having a standard vocabulary type for processing objects that expose the protocol. Essentially, if my library exposes a function foo that takes a stream, the protocol-only approach would have you do this:

# lib.py
def foo(..., stream_like):
    if not hasattr(stream_like, "__cuda_stream__"):
        raise ValueError("Invalid stream object")
    stream = process_stream_like(stream_like.__cuda_stream__)  # This function needs to be defined.

A standard view type in cuda.core would replace the need for process_stream_like.

from cuda.core import StreamView
def foo(..., stream_like):
    stream = StreamView(stream_like)

It's definitely not a requirement though, just a convenience.
