I've written a ResNet50 model in Google Colab. After training the model and saving it, if I load the model without restarting the runtime I get the same results. But when I restart the Colab runtime, re-run the cells that build xtrain, ytest, x_val, y_val, and then load the model again, I get different results.
Here is my code:
#hyperparameters and callbacks
batch_size = 128
num_epochs = 120
input_shape = (48, 48, 1)
num_classes = 7

#compile the model
from keras.optimizers import Adam, SGD

model = ResNet50(input_shape=(48, 48, 1), classes=7)
optimizer = SGD(learning_rate=0.0005)
model.compile(optimizer=optimizer, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.summary()

history = model.fit(
    data_generator.flow(xtrain, ytrain, batch_size=batch_size),
    steps_per_epoch=len(xtrain) // batch_size,
    epochs=num_epochs,
    verbose=1,
    validation_data=(x_val, y_val))
import matplotlib.pyplot as plt
model.save('Fix_Model_resnet50editSGD5st.h5')
#plot graphs
accuracy = history.history['accuracy']
val_accuracy = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(len(accuracy))  # new name so the num_epochs hyperparameter is not overwritten

plt.plot(epochs_range, accuracy, 'r', label='Training acc')
plt.plot(epochs_range, val_accuracy, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend()

plt.figure()
plt.plot(epochs_range, loss, 'r', label='Training loss')
plt.plot(epochs_range, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend()
plt.show()
#load the model
from keras.models import load_model

model_load = load_model('Fix_Model_resnet50editSGD5st.h5')
model_load.summary()

testdatamodel = model_load.evaluate(xtest, ytest)
print("Test Loss " + str(testdatamodel[0]))
print("Test Acc: " + str(testdatamodel[1]))

traindata = model_load.evaluate(xtrain, ytrain)
print("Train Loss " + str(traindata[0]))
print("Train Acc: " + str(traindata[1]))

valdata = model_load.evaluate(x_val, y_val)
print("Validation Loss " + str(valdata[0]))
print("Validation Acc: " + str(valdata[1]))
After training and saving the model, then loading it without restarting the Colab runtime, I get:
Test loss: 0.9411 - accuracy: 0.6514
Train loss: 0.7796 - accuracy: 0.7091
After restarting the Colab runtime and just loading the model again:
Test loss: 0.7928 - accuracy: 0.6999
Train loss: 0.8189 - accuracy: 0.6965
So after a runtime restart, evaluating on the test and train sets gives different numbers than before.
You should set a random seed to get the same results on every run, both within the same session and after restarting the runtime:

tf.random.set_seed(
    seed
)

See https://www.tensorflow.org/api_docs/python/tf/random/set_seed
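For reference, here is a minimal sketch of seeding everything at the top of the notebook, before the data generator and model are created; the seed value 42 and the tf.keras.utils.set_random_seed call are illustrative additions, not part of the original answer:

import random
import numpy as np
import tensorflow as tf

SEED = 42  # arbitrary value chosen for illustration

# Seeds Python's random module, NumPy and TensorFlow in one call (TF >= 2.7).
tf.keras.utils.set_random_seed(SEED)

# On older TensorFlow versions, seed each source individually instead:
# random.seed(SEED)
# np.random.seed(SEED)
# tf.random.set_seed(SEED)

Run this in the first cell after every runtime restart so that any shuffling or augmentation done by data_generator.flow is reproducible as well.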
Answered By — Sarvesh Dubey
Answer Checked By — Mary Flores (FixIt Volunteer)
This answer, collected from Stack Overflow, is licensed under CC BY-SA 2.5, CC BY-SA 3.0 and CC BY-SA 4.0.