As part of a TF 2.0 tutorial, I was trying out the Keras callback mechanism in TensorFlow, which lets a model stop training once a specific accuracy or loss value is reached. The example provided in this Colab works fine. I tried to run a similar example locally in PyCharm (with a tf-gpu conda env), but the callback is never executed and training runs until the last epoch. There is no error whatsoever and the code looks the same.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.callbacks import Callback
from tensorflow.keras.datasets import fashion_mnist
from matplotlib import pyplot as plt
class MyCallback(Callback):
    def on_epochs_end(self, epoch, logs={}):
        if logs.get('accuracy') > 0.9:
            print("\nTraining stopping now. Accuracy reached 90%!")
            self.model.stop_training = True
callback = MyCallback()
# Input data
(training_data, training_labels), (testing_data, testing_labels) = fashion_mnist.load_data()
training_data = training_data / 255.0
testing_data = testing_data / 255.0
plt.imshow(training_data[0], cmap='gray')
# Network
model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(units=128, activation='relu'),
    Dense(units=10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_data, training_labels, epochs=25, callbacks=[callback])
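One thing I have not ruled out is the metric key itself: maybe my local TF build reports the metric under a different name (e.g. 'acc' instead of 'accuracy'). A minimal sketch of how the reported keys could be inspected (DebugCallback is just an illustrative name, not part of my original code):

from tensorflow.keras.callbacks import Callback

class DebugCallback(Callback):
    def on_epoch_end(self, epoch, logs=None):
        # Print the metric names this TF install actually reports each epoch
        print(sorted((logs or {}).keys()))

model.fit(training_data, training_labels, epochs=1, callbacks=[DebugCallback()])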
While looking through different examples for a solution, I came across statements like:
- activation='relu'
- activation=tf.nn.relu
- activation=tf.keras.activations.relu
Which is the right one to use? Could the error be caused by incorrect imports?
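For reference, this is how each variant would look inside a layer definition; as far as I can tell they all end up using the same ReLU function, and the layer size below is only for illustration:

import tensorflow as tf
from tensorflow.keras.layers import Dense

# Three ways of referring to the same activation
Dense(units=128, activation='relu')                      # string shortcut
Dense(units=128, activation=tf.nn.relu)                  # low-level TF op
Dense(units=128, activation=tf.keras.activations.relu)   # Keras activations module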
If anyone could give some hints, it would be helpful.