According to the CUDA compatibility doc, a newer CUDA runtime can run on an older CUDA driver.
But that doesn't seem to hold for the CUDA container. I tried
docker run -it --rm --gpus all nvidia/cuda:12.8.0-cudnn-devel-ubuntu24.04
with host CUDA driver 565.77 (CUDA 12.7) and got an error:
nvidia-container-cli: requirement error: unsatisfied condition: cuda>=12.8, please update your driver to a newer version, or use an earlier cuda container: unknown.
After some investigation I found that NVIDIA_REQUIRE_CUDA=cuda>=12.8 is set in the image. So does that mean the CUDA container doesn't support backward compatibility?
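(For anyone wanting to reproduce this part: a quick way to confirm the constraint comes from the image itself is to inspect its environment. This is just an illustrative sketch using docker inspect's Go-template syntax.)

```
# Print the image's env vars and filter for the driver requirement.
docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' \
    nvidia/cuda:12.8.0-cudnn-devel-ubuntu24.04 | grep NVIDIA_REQUIRE_CUDA
```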
Not an expert, but as the error says you have two options: either update your driver so that it's compatible with the container, or use an earlier CUDA container that's compatible with your driver. If you want to go down the path of finding the right image, the available tags are on the Docker Hub registry: https://hub.docker.com/r/nvidia/cuda/tags.
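For example (a sketch only; the 12.6.3 tag below is just an illustration, check the registry for what's actually published): first see which CUDA version your driver supports, then pick an image whose toolkit is at or below it.

```
# The driver reports the highest CUDA version it supports (12.7 for 565.77).
nvidia-smi --query-gpu=driver_version --format=csv,noheader
nvidia-smi | grep "CUDA Version"

# Then run an image whose toolkit is <= that version, e.g. a 12.6 tag:
docker run -it --rm --gpus all nvidia/cuda:12.6.3-cudnn-devel-ubuntu24.04
```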
Regarding the backward compatibility question, here's what I understand.
New drivers can typically run apps built with older CUDA toolkits (backward compatibility).
However, running a newer runtime on an older driver isn't guaranteed to be supported (that's forward compatibility).
Because of that, the container images enforce a minimum driver version to prevent the runtime issues that could occur when an older driver is used with a newer runtime.
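If you want to see the mismatch that the check is guarding against, you can compare the two versions directly. A sketch; NVIDIA_DISABLE_REQUIRE=1 (mentioned further down in this thread) is used here purely to get past the gate so the container can start:

```
# Driver-side: the highest CUDA version the host driver supports.
nvidia-smi | grep "CUDA Version"    # e.g. 12.7 for driver 565.77

# Toolkit-side: the CUDA version shipped inside the container.
docker run --rm --gpus all --env NVIDIA_DISABLE_REQUIRE=1 \
    nvidia/cuda:12.8.0-cudnn-devel-ubuntu24.04 nvcc --version    # 12.8
```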
Thanks for your answer! I just realized that new toolkit + old driver isn't about backward compatibility at all. It should be minor version compatibility (e.g. driver 12.7 and runtime 12.8) or forward compatibility. So my question actually is: does the CUDA container support minor version compatibility, or is it deliberately disabled because of the limited features in that case?
I did a quick test with docker run --env NVIDIA_DISABLE_REQUIRE=1 ... and nvcc -arch=sm_86 ..., following the minor version compatibility doc. The simple vecadd example works fine.
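For completeness, here's roughly what that test looks like end to end. This is a minimal sketch, not an exact reproduction: it assumes an Ampere-class GPU (hence sm_86, as in the test above) and uses unified memory just to keep the example short.

```
# Start the 12.8 container on the 12.7 driver, skipping the cuda>=12.8 check.
docker run -it --rm --gpus all \
    --env NVIDIA_DISABLE_REQUIRE=1 \
    nvidia/cuda:12.8.0-cudnn-devel-ubuntu24.04 bash

# Inside the container: a minimal vector-add test.
cat > vecadd.cu <<'EOF'
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vecadd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Unified memory keeps the example short; explicit copies work too.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }
    vecadd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();
    printf("c[0] = %f (expect 3.0)\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
EOF

# Compile real binary code for the target architecture (sm_86 here) rather
# than relying on PTX JIT, which is what minor version compatibility requires:
# the older driver cannot JIT-compile PTX produced by a newer toolkit.
nvcc -arch=sm_86 vecadd.cu -o vecadd
./vecadd
```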