
I'd like to know ahead of time whether my NVIDIA GPU, CUDA driver, cuDNN, PyTorch, and TensorFlow versions are all compatible with one another, instead of hitting a cryptic runtime error such as:

tensorflow/compiler/mlir/tools/kernel_gen/tf_gpu_runtime_wrappers.cc:40] 'cuModuleLoadData(&module, data)' failed with 'CUDA_ERROR_UNSUPPORTED_PTX_VERSION'

How can one automatically check whether one's NVIDIA GPU, CUDA driver, cuDNN, PyTorch, and TensorFlow versions are all compatible with one another?

  • There are so many ever-changing versions of all of the above, and so many variables (some of which are hidden in binary blobs), that automating this is probably impossible. The closest you'll get is to write a trivial do-nothing test program that exercises all of those elements (i.e. something like a Minimal Working Example) and see if it compiles and runs. Even that won't be conclusive, though, because some errors won't show up until you try to do certain things or use certain functions — and trying to enumerate all of those things is the same problem with the goalposts shifted a bit. Commented Oct 28 at 4:33
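Along the lines of the comment's suggestion, here is a minimal smoke-test sketch: it probes each layer of the stack (driver via `nvidia-smi`, framework build info, one trivial GPU op per framework) so that mismatches like `CUDA_ERROR_UNSUPPORTED_PTX_VERSION` surface immediately. Every check degrades gracefully when a component is missing. The exact `nvidia-smi` banner line and the `get_build_info()` keys are assumptions that hold for typical GPU builds, not guarantees.

```python
# Smoke test for the NVIDIA/CUDA/cuDNN/PyTorch/TensorFlow stack.
# Each probe is optional and skipped if the component is absent.
import importlib.util
import shutil
import subprocess


def parse_version(text):
    """'12.2.140' -> (12, 2, 140); stops at the first non-numeric token."""
    parts = []
    for token in str(text).strip().split("."):
        digits = "".join(ch for ch in token if ch.isdigit())
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)


def report():
    lines = []
    if shutil.which("nvidia-smi"):
        # The third line of nvidia-smi's banner usually shows the driver
        # version and the highest CUDA version the driver supports
        # (assumption: standard nvidia-smi table layout).
        smi = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
        banner = smi.stdout.splitlines()
        lines.append(banner[2] if len(banner) > 2 else "nvidia-smi gave no output")
    else:
        lines.append("nvidia-smi not on PATH: is an NVIDIA driver installed?")

    torch_cuda = tf_cuda = None
    if importlib.util.find_spec("torch"):
        import torch
        torch_cuda = torch.version.cuda  # CUDA version PyTorch was built against
        lines.append(f"torch {torch.__version__} (built for CUDA {torch_cuda}), "
                     f"cuda available: {torch.cuda.is_available()}")
        if torch.cuda.is_available():
            # Trivial GPU op: fails fast on driver/PTX incompatibilities.
            (torch.ones(2, device="cuda") * 2).sum().item()

    if importlib.util.find_spec("tensorflow"):
        import tensorflow as tf
        build = tf.sysconfig.get_build_info()  # empty-ish for CPU-only builds
        tf_cuda = build.get("cuda_version")
        lines.append(f"tensorflow {tf.__version__} (built for CUDA {tf_cuda}, "
                     f"cuDNN {build.get('cudnn_version')}), "
                     f"GPUs: {tf.config.list_physical_devices('GPU')}")

    if torch_cuda and tf_cuda and parse_version(torch_cuda)[:1] != parse_version(tf_cuda)[:1]:
        lines.append("WARNING: PyTorch and TensorFlow were built against "
                     "different CUDA major versions")
    return lines


if __name__ == "__main__":
    print("\n".join(report()))
```

As the comment notes, a clean run is not conclusive: it only proves the stack can load and execute one trivial kernel, not that every operation you'll later use is supported.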

