When I tried to set this up with podman, I needed two small modifications to make it work:

1. specify the `RHEL_VERSION` at the beginning of the Dockerfiles;
2. add the snippet below before `groupadd` is used in the Dockerfiles.
```dockerfile
# Modification 1: define RHEL_VERSION at the top so the FROM line resolves
ARG RHEL_VERSION
FROM rockylinux:${RHEL_VERSION}-minimal

# Modification 2: the -minimal image does not ship groupadd, so install shadow-utils first
RUN microdnf update -y && \
    microdnf install shadow-utils -y && \
    microdnf clean all
```
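For reference, a rough sketch of the podman commands used to build and enter the container; the image tag `simulateqcd` and the `RHEL_VERSION` value are placeholders, not the exact values from my setup:

```bash
# Build the image, passing the base image version (placeholder value)
podman build --build-arg RHEL_VERSION=9 -t simulateqcd .
# Start an interactive shell inside the container
podman run -it simulateqcd /bin/bash
```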
Although I successfully built the container and got inside it, when I run the application via `./gaugeFixing` I get the following error:
```
# [2024-01-06 01:18:29] FATAL: [Rank 0] A GPU error occured: communicationBase_mpi.cpp: Failed to count devices (gpuGetDeviceCount): CUDA driver version is insufficient for CUDA runtime version ( cudaErrorInsufficientDriver )
terminate called after throwing an instance of 'std::runtime_error'
  what(): [Rank 0] A GPU error occured: communicationBase_mpi.cpp: Failed to count devices (gpuGetDeviceCount): CUDA driver version is insufficient for CUDA runtime version ( cudaErrorInsufficientDriver )
```
When I tried `nvidia-smi` inside the container, I got:
```
[simulateqcd@510d453c6799 applications]$ nvidia-smi
bash: nvidia-smi: command not found
```
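Since neither `nvidia-smi` nor the driver is visible inside the container, I suspect the GPU is simply not being exposed to podman. If I understand correctly, this is normally done through the NVIDIA Container Toolkit's CDI support; a sketch of what I believe is the usual procedure (not yet verified on my machine, image tag is a placeholder):

```bash
# Generate a CDI spec for the installed NVIDIA driver (once, on the host)
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
# Request the GPUs when starting the container and check they are visible
podman run --rm -it --device nvidia.com/gpu=all simulateqcd nvidia-smi
```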