    Machine Learning

    Model load gets different results after restarting runtime | by Ted James | Apr, 2025

    By FinanceStarGate · April 13, 2025 · 2 Mins Read


    I’ve written a ResNet50 model using Google Colab. After training my model and saving it, loading the model without restarting the runtime gives the same result. But when I restart the Google Colab runtime, rerun xtrain, ytest, x_val, y_val, and then load the model again, I get different results.

    Here is my code:

    #hyperparameters and callbacks
    batch_size = 128
    num_epochs = 120
    input_shape = (48, 48, 1)
    num_classes = 7

    #Compile the model.
    from keras.optimizers import Adam, SGD
    model = ResNet50(input_shape=(48, 48, 1), classes=7)
    optimizer = SGD(learning_rate=0.0005)
    model.compile(optimizer=optimizer, loss='sparse_categorical_crossentropy', metrics=['accuracy'])

    model.summary()
    history = model.fit(
        data_generator.flow(xtrain, ytrain),
        steps_per_epoch=len(xtrain) // batch_size,
        epochs=num_epochs,
        verbose=1,
        validation_data=(x_val, y_val))

    import matplotlib.pyplot as plt
    model.save('Fix_Model_resnet50editSGD5st.h5')

    #plot graph
    accuracy = history.history['accuracy']
    val_accuracy = history.history['val_accuracy']
    loss = history.history['loss']
    val_loss = history.history['val_loss']
    epochs_range = range(len(accuracy))
    plt.plot(epochs_range, accuracy, 'r', label='Training acc')
    plt.plot(epochs_range, val_accuracy, 'b', label='Validation acc')
    plt.title('Training and validation accuracy')
    plt.ylabel('accuracy')
    plt.xlabel('epoch')
    plt.legend()
    plt.figure()
    plt.plot(epochs_range, loss, 'r', label='Training loss')
    plt.plot(epochs_range, val_loss, 'b', label='Validation loss')
    plt.title('Training and validation loss')
    plt.ylabel('loss')
    plt.xlabel('epoch')
    plt.legend()
    plt.show()

    #load model
    from keras.models import load_model
    model_load = load_model('Fix_Model_resnet50editSGD5st.h5')

    model_load.summary()

    testdatamodel = model_load.evaluate(xtest, ytest)
    print("Test Loss " + str(testdatamodel[0]))
    print("Test Acc: " + str(testdatamodel[1]))

    traindata = model_load.evaluate(xtrain, ytrain)
    print("Train Loss " + str(traindata[0]))
    print("Train Acc: " + str(traindata[1]))

    valdata = model_load.evaluate(x_val, y_val)
    print("Val Loss " + str(valdata[0]))
    print("Val Acc: " + str(valdata[1]))

    After training and saving the model, then running load_model without restarting the Google Colab runtime, you can see:

    Test loss: 0.9411, accuracy: 0.6514

    Train loss: 0.7796, accuracy: 0.7091

    [Screenshot: model.evaluate output for test and train sets]

    Running load_model again after restarting the Colab runtime:

    Test loss: 0.7928, accuracy: 0.6999

    Train loss: 0.8189, accuracy: 0.6965

    [Screenshot: model.evaluate output for test and train sets after restarting the runtime]
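    One possible explanation (an assumption; the question does not show how xtrain, x_val, and xtest are produced) is that the split or shuffle that builds them is unseeded, so restarting the runtime regenerates a different partition, and evaluate() then runs on different data. A minimal NumPy sketch of the effect:

    ```python
    import numpy as np

    data = np.arange(100)

    # A seeded permutation survives a "restart": the same partition every run
    train_a = np.random.default_rng(42).permutation(data)[:80]
    train_b = np.random.default_rng(42).permutation(data)[:80]
    assert (train_a == train_b).all()

    # An unseeded permutation is drawn fresh each run, so the
    # train/validation partition (and therefore the metrics) drifts
    train_c = np.random.default_rng().permutation(data)[:80]
    ```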

    You may need to set a random seed to get the same results on every run, whether within the same session or after restarting the runtime.

    tf.random.set_seed(seed)

    See https://www.tensorflow.org/api_docs/python/tf/random/set_seed for details.
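    Setting the TensorFlow seed alone may not be enough if NumPy or Python's random module also feed the pipeline (e.g. the data generator's shuffling, or the train/validation split). A minimal sketch of seeding all three at once; the helper name set_global_seed is an assumption, not part of the original code:

    ```python
    import os
    import random

    import numpy as np


    def set_global_seed(seed):
        """Seed every RNG a Keras training run may draw from."""
        os.environ["PYTHONHASHSEED"] = str(seed)  # affects subprocesses started after this point
        random.seed(seed)                         # Python stdlib RNG
        np.random.seed(seed)                      # NumPy (shuffles, splits, generators)
        try:
            import tensorflow as tf
            tf.random.set_seed(seed)              # TensorFlow ops and weight initializers
        except ImportError:
            pass  # TensorFlow not installed in this environment


    set_global_seed(42)
    ```

    Call this once at the top of the notebook, before creating the train/validation split and the data generator, in both the original session and any restarted session.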

    Answered By — Sarvesh Dubey

    Answer Checked By — Mary Flores (FixIt Volunteer)

    This answer was collected from Stack Overflow and is licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.


