
Check CUDA out of memory #213

Draft · wants to merge 5 commits into base: main

Commits on Aug 19, 2024

  1. Remove unused protobuf messages InputShape and OutputShape

    Since the ModelInfo interface was removed, the protobuf messages for InputShape and OutputShape, and their conversions, are redundant.
    thodkatz committed Aug 19, 2024
    2a7eb7a
  2. 529243e
  3. 6f441ea
  4. Add procedures for checking GPU out-of-memory for given shapes

    Two procedures have been added:
    - Get the maximum tensor shape that fits in memory
    - Check whether a tensor of a given shape fits in memory
    thodkatz committed Aug 19, 2024
    b1595d7
  5. Add device id to CUDA requests

    The current interface supports multiple device ids. To check that a CUDA memory request is valid, i.e. that a GPU is actually detected, a device id is needed so the check can be run against the available devices, if any.
    thodkatz committed Aug 19, 2024
    46ac782
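The two procedures from commit b1595d7 could be sketched roughly as below. This is a hypothetical, hardware-independent sketch, not the PR's actual code: the names `shape_fits` and `max_size_along_axis`, and the injected `free_bytes`/`itemsize` parameters, are illustrative. In practice the free-memory figure would come from a CUDA query such as `torch.cuda.mem_get_info`.

```python
import math

def shape_fits(shape, free_bytes, itemsize=4):
    """Check whether a tensor of `shape` (itemsize-byte elements) fits in free_bytes."""
    return math.prod(shape) * itemsize <= free_bytes

def max_size_along_axis(base_shape, axis, free_bytes, itemsize=4, upper=2**20):
    """Binary-search the largest extent along `axis` that still fits in memory."""
    lo, hi = 0, upper
    while lo < hi:
        mid = (lo + hi + 1) // 2  # bias upward so the loop terminates
        shape = list(base_shape)
        shape[axis] = mid
        if shape_fits(shape, free_bytes, itemsize):
            lo = mid
        else:
            hi = mid - 1
    return lo
```

Injecting `free_bytes` rather than querying the device inside the functions keeps the memory-accounting logic testable on machines without a GPU.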
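The device-id check described in commit 46ac782 amounts to verifying that the requested id refers to a detected GPU. A minimal sketch, assuming `num_devices` is supplied by the caller (in practice it would come from a query such as `torch.cuda.device_count()`; the function name is illustrative, not the PR's actual API):

```python
def validate_device_id(device_id: int, num_devices: int) -> bool:
    """A CUDA memory request is valid only if at least one GPU is
    detected and the device id is within the available range."""
    return num_devices > 0 and 0 <= device_id < num_devices
```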