I am facing a persistent issue when trying to initialize the TPU in my notebook. I have already confirmed that:
My account is Verified.
The Notebook Accelerator is set to TPU.
My TPU quota is currently available.
However, the standard initialization code consistently throws a NotFoundError because the required OpKernel is missing, which makes me suspect an environment configuration issue on the platform itself.
Has anyone recently run into this "No OpKernel was registered" error on the TPU runtime and found a workaround?
Code and Error Details
Code Used:
import tensorflow as tf
# Detect and initialize TPU
tpu = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='local')
tf.tpu.experimental.initialize_tpu_system(tpu)
# Create TPU distribution strategy
strategy = tf.distribute.TPUStrategy(tpu)
print("TPU initialized successfully.")
Traceback Snippet:
InvalidArgumentError: No OpKernel was registered to support Op 'ConfigureDistributedTPU' used by {{node ConfigureDistributedTPU}}
...
Registered devices: [CPU]
Registered kernels:
<no registered kernels>
During handling of the above exception, another exception occurred:
NotFoundError: TPUs not found in the cluster. Failed in initialization: No OpKernel was registered to support Op 'ConfigureDistributedTPU'...
Key Observation:
The output shows Registered devices: [CPU], confirming that the environment is not detecting the active TPU accelerator at the TensorFlow software level.
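For anyone wanting to reproduce this check, here is a minimal sanity-check sketch (nothing platform-specific assumed; it just queries what TensorFlow itself has registered). On an affected runtime the TPU list comes back empty, matching the traceback above:

```python
import tensorflow as tf

# List every device TensorFlow actually registered. On a broken TPU
# runtime this typically shows only CPU, matching "Registered devices: [CPU]".
devices = tf.config.list_logical_devices()
print([d.device_type for d in devices])

# Specifically check for TPU cores; an empty list means the TPU is not
# visible at the TensorFlow software level, regardless of notebook settings.
tpu_devices = tf.config.list_logical_devices('TPU')
print("TPU cores visible:", len(tpu_devices))
```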
Any assistance or known workarounds would be greatly appreciated! Thank you.
Instead of TPUClusterResolver(tpu='local'), just use TPUClusterResolver.connect() without arguments.