
simpleP2P fails and Vulkan programs cannot allocate memory #14

Open
2 tasks done
jhmueller-huawei opened this issue Jun 14, 2024 · 3 comments
Labels
bug Something isn't working

Comments

@jhmueller-huawei

NVIDIA Open GPU Kernel Modules Version

550.90.07-p2p

Please confirm this issue does not happen with the proprietary driver (of the same version). This issue tracker is only for bugs specific to the open kernel driver.

  • I confirm that this does not happen with the proprietary driver package.

Operating System and Version

Arch Linux

Kernel Release

6.9.4-arch1-1

Please confirm you are running a stable release kernel (e.g. not a -rc). We do not accept bug reports for unreleased kernels.

  • I am running on a stable kernel release.

Hardware: GPU

2x NVIDIA GeForce RTX 4090

Describe the bug

I tried the modified driver to get P2P running on my two 4090s, but it doesn't work properly. With this driver, applications that use Vulkan also stop working: they crash when trying to allocate memory (vkAllocateMemory returns VK_ERROR_OUT_OF_DEVICE_MEMORY).

To Reproduce

Here's my output from the CUDA sample simpleP2P:

Checking for multiple GPUs...
CUDA-capable device count: 2

Checking GPU(s) for support of peer to peer memory access...
> Peer access from NVIDIA GeForce RTX 4090 (GPU0) -> NVIDIA GeForce RTX 4090 (GPU1) : Yes
> Peer access from NVIDIA GeForce RTX 4090 (GPU1) -> NVIDIA GeForce RTX 4090 (GPU0) : Yes
Enabling peer access between GPU0 and GPU1...
Allocating buffers (64MB on GPU0, GPU1 and CPU Host)...
Creating event handles...
cudaMemcpyPeer / cudaMemcpy between GPU0 and GPU1: 12.21GB/s
Preparing host buffer and memcpy to GPU0...
Run kernel on GPU1, taking source data from GPU0 and writing to GPU1...
Run kernel on GPU0, taking source data from GPU1 and writing to GPU0...
Copy data back to host from GPU0 and verify results...
Verification error @ element 1: val = 1.000000, ref = 4.000000
Verification error @ element 2: val = 2.000000, ref = 8.000000
Verification error @ element 3: val = 3.000000, ref = 12.000000
Verification error @ element 4: val = 4.000000, ref = 16.000000
Verification error @ element 5: val = 5.000000, ref = 20.000000
Verification error @ element 6: val = 6.000000, ref = 24.000000
Verification error @ element 7: val = 7.000000, ref = 28.000000
Verification error @ element 8: val = 8.000000, ref = 32.000000
Verification error @ element 9: val = 9.000000, ref = 36.000000
Verification error @ element 10: val = 10.000000, ref = 40.000000
Verification error @ element 11: val = 11.000000, ref = 44.000000
Verification error @ element 12: val = 12.000000, ref = 48.000000
Disabling peer access...
Shutting down...
Test failed!
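
For reference, the peer-copy path the sample exercises can also be checked with a much smaller host-only program along the following lines. This is only a sketch (not the sample's code); the 64 MiB size and the 0xAB byte pattern are arbitrary. On a working setup it should report zero mismatching bytes; compile with nvcc p2p_check.cu -o p2p_check.

// Minimal host-side P2P check (a sketch, not the simpleP2P sample itself):
// write a pattern on GPU0, cudaMemcpyPeer it to GPU1, read it back through
// the host, and verify.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

#define CHECK(call)                                                        \
    do {                                                                   \
        cudaError_t e = (call);                                            \
        if (e != cudaSuccess) {                                            \
            printf("%s failed: %s\n", #call, cudaGetErrorString(e));       \
            return 1;                                                      \
        }                                                                  \
    } while (0)

int main() {
    int can01 = 0, can10 = 0;
    CHECK(cudaDeviceCanAccessPeer(&can01, 0, 1));
    CHECK(cudaDeviceCanAccessPeer(&can10, 1, 0));
    printf("peer access 0->1: %d, 1->0: %d\n", can01, can10);
    if (!can01 || !can10) return 1;

    CHECK(cudaSetDevice(0));
    CHECK(cudaDeviceEnablePeerAccess(1, 0));
    CHECK(cudaSetDevice(1));
    CHECK(cudaDeviceEnablePeerAccess(0, 0));

    const size_t n = 64 << 20;  // 64 MiB, same buffer size as simpleP2P
    std::vector<unsigned char> host(n, 0xAB), back(n, 0);
    void *d0 = nullptr, *d1 = nullptr;
    CHECK(cudaSetDevice(0));
    CHECK(cudaMalloc(&d0, n));
    CHECK(cudaMemcpy(d0, host.data(), n, cudaMemcpyHostToDevice));
    CHECK(cudaSetDevice(1));
    CHECK(cudaMalloc(&d1, n));

    // The peer copy that simpleP2P also exercises.
    CHECK(cudaMemcpyPeer(d1, 1, d0, 0, n));
    CHECK(cudaMemcpy(back.data(), d1, n, cudaMemcpyDeviceToHost));

    size_t bad = 0;
    for (size_t i = 0; i < n; ++i)
        if (back[i] != 0xAB) ++bad;
    printf("%zu mismatching bytes\n", bad);
    return bad ? 1 : 0;
}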

Here is the output of p2pBandwidthLatencyTest (for this I had to downgrade CUDA from 12.5.0 to 12.4.1; otherwise it errored out, complaining about an incompatible PTX version when running the sample's delay kernel):

[P2P (Peer-to-Peer) GPU Bandwidth Latency Test]
Device: 0, NVIDIA GeForce RTX 4090, pciBusID: 1, pciDeviceID: 0, pciDomainID:0
Device: 1, NVIDIA GeForce RTX 4090, pciBusID: 2, pciDeviceID: 0, pciDomainID:0
Device=0 CAN Access Peer Device=1
Device=1 CAN Access Peer Device=0

***NOTE: In case a device doesn't have P2P access to other one, it falls back to normal memcopy procedure.
So you can see lesser Bandwidth (GB/s) and unstable Latency (us) in those cases.

P2P Connectivity Matrix
     D\D     0     1
     0       1     1
     1       1     1
Unidirectional P2P=Disabled Bandwidth Matrix (GB/s)
   D\D     0      1 
     0 905.80  11.95 
     1  12.02 922.37 
Unidirectional P2P=Enabled Bandwidth (P2P Writes) Matrix (GB/s)
   D\D     0      1 
     0 837.35  13.18 
     1  13.18 939.00 
Bidirectional P2P=Disabled Bandwidth Matrix (GB/s)
   D\D     0      1 
     0 882.52  16.74 
     1  16.81 923.64 
Bidirectional P2P=Enabled Bandwidth Matrix (GB/s)
   D\D     0      1 
     0 846.51  25.89 
     1  25.93 920.97 
P2P=Disabled Latency Matrix (us)
   GPU     0      1 
     0   1.40  10.63 
     1  11.24   1.43 

   CPU     0      1 
     0   1.35   4.21 
     1   4.27   1.36 
P2P=Enabled Latency (P2P Writes) Matrix (us)
   GPU     0      1 
     0   1.40   0.92 
     1   0.91   1.40 

   CPU     0      1 
     0   1.39   1.10 
     1   1.19   1.36 

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

And as mentioned, Vulkan applications cannot allocate (at least device-local) memory anymore, e.g., vkcube:

Selected GPU 0: NVIDIA GeForce RTX 4090, type: DiscreteGpu
[1]    5876 segmentation fault (core dumped)  vkcube

or vkgears:

Failed to allocate memory for the depth image
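
To isolate the failure from vkcube/vkgears, a stand-alone allocation check along these lines can be used. It is only a sketch: queue family 0 and the 64 MiB allocation size are placeholders, and error handling is minimal. It walks the device-local memory types of the first enumerated GPU and prints what vkAllocateMemory returns (VK_ERROR_OUT_OF_DEVICE_MEMORY is -2); build with g++ vk_alloc_check.cpp -lvulkan.

// Stand-alone allocation check (sketch): try a 64 MiB vkAllocateMemory on
// every DEVICE_LOCAL memory type of the first enumerated GPU and print the
// result. Queue family 0 is an assumption; no work is ever submitted.
#include <vulkan/vulkan.h>
#include <cstdio>

int main() {
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;
    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 1;
    VkPhysicalDevice phys;
    vkEnumeratePhysicalDevices(instance, &count, &phys);  // first GPU only

    float prio = 1.0f;
    VkDeviceQueueCreateInfo qci{VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO};
    qci.queueFamilyIndex = 0;  // placeholder; no queue is actually used
    qci.queueCount = 1;
    qci.pQueuePriorities = &prio;
    VkDeviceCreateInfo dci{VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO};
    dci.queueCreateInfoCount = 1;
    dci.pQueueCreateInfos = &qci;
    VkDevice device;
    if (vkCreateDevice(phys, &dci, nullptr, &device) != VK_SUCCESS) return 1;

    VkPhysicalDeviceMemoryProperties mem;
    vkGetPhysicalDeviceMemoryProperties(phys, &mem);
    for (uint32_t i = 0; i < mem.memoryTypeCount; ++i) {
        if (!(mem.memoryTypes[i].propertyFlags &
              VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT))
            continue;
        VkMemoryAllocateInfo mai{VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO};
        mai.allocationSize = 64ull << 20;  // 64 MiB
        mai.memoryTypeIndex = i;
        VkDeviceMemory dm;
        VkResult r = vkAllocateMemory(device, &mai, nullptr, &dm);
        printf("memory type %u: vkAllocateMemory -> %d\n", i, (int)r);
        if (r == VK_SUCCESS) vkFreeMemory(device, dm, nullptr);
    }
    vkDestroyDevice(device, nullptr);
    vkDestroyInstance(instance, nullptr);
    return 0;
}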

Bug Incidence

Always

nvidia-bug-report.log.gz

More Info

I made sure that the IOMMU is disabled, both via the kernel parameters amd_iommu=off iommu=off and in the BIOS, and large BAR support is present as well:

% nvidia-smi -q | grep -i bar -A 3
    BAR1 Memory Usage
        Total                             : 32768 MiB
        Used                              : 24205 MiB
        Free                              : 8563 MiB
--
    BAR1 Memory Usage
        Total                             : 32768 MiB
        Used                              : 24212 MiB
        Free                              : 8556 MiB

I'm wondering why the used BAR1 memory is almost the full 24 GB that the GPUs have, and whether that is the reason Vulkan applications cannot allocate memory.
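
For comparison with the BAR1 numbers above, the per-heap sizes and the driver-reported budget/usage can also be dumped from the Vulkan side with something like the sketch below. It assumes a VkInstance and VkPhysicalDevice have already been created and that VK_EXT_memory_budget is supported; instance setup is omitted for brevity.

// Sketch: dump per-heap size and (if VK_EXT_memory_budget is supported)
// the driver-reported budget/usage, to compare against the BAR1 numbers
// from nvidia-smi. `phys` is a VkPhysicalDevice obtained from an existing
// VkInstance (instance creation omitted).
#include <vulkan/vulkan.h>
#include <cstdio>

void dump_heaps(VkPhysicalDevice phys) {
    VkPhysicalDeviceMemoryBudgetPropertiesEXT budget{
        VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_BUDGET_PROPERTIES_EXT};
    VkPhysicalDeviceMemoryProperties2 props{
        VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_PROPERTIES_2};
    props.pNext = &budget;
    vkGetPhysicalDeviceMemoryProperties2(phys, &props);

    const VkPhysicalDeviceMemoryProperties &mem = props.memoryProperties;
    for (uint32_t i = 0; i < mem.memoryHeapCount; ++i) {
        printf("heap %u: size %llu MiB, budget %llu MiB, usage %llu MiB%s\n",
               i,
               (unsigned long long)(mem.memoryHeaps[i].size >> 20),
               (unsigned long long)(budget.heapBudget[i] >> 20),
               (unsigned long long)(budget.heapUsage[i] >> 20),
               (mem.memoryHeaps[i].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT)
                   ? " (device-local)" : "");
    }
}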

@jhmueller-huawei jhmueller-huawei added the bug Something isn't working label Jun 14, 2024
@wozeparrot
Collaborator

I'm not able to reproduce the simpleP2P failure on 4090s, but I am able to reproduce the Vulkan issue.

@lihu

lihu commented Jul 11, 2024

I hit the same error as jhmueller-huawei posted: verification errors.

@mingsterism

Any updates on this issue?
