Update: This article has been updated to show how to save and restore models in Tensorflow 2.0. If you want to learn the same for Tensorflow 1.x, please go to this earlier article, which explains how to save and restore Tensorflow 1.x models.
In this Tensorflow 2.X tutorial, I shall explain:
- What is a Tensorflow-Keras Model API?
- How to save a Tensorflow-Keras model?
- How to restore a Tensorflow-Keras model?
- How to work with restored models for prediction or fine-tuning?
This tutorial assumes that you have some idea about training a neural network. Otherwise, please follow this tutorial and come back here.
1. What is a Tensorflow-Keras Model API?:
With the release of Tensorflow 2.X, the Model API in tf.keras became the standard way of defining and training models in Tensorflow. The Model API is a higher-level wrapper that makes training neural networks easier by providing easy-to-use one-liner functions that handle all the complexities of training for you. Any model in Tensorflow 2.X is represented as a tf.keras.Model object, which exposes all the functions for training, inference, and saving of neural networks. Now let's see how to define a simple fully connected network in Tensorflow 2.X using two different methods:
import tensorflow as tf

# Defining a tf.keras.Model using the Sequential API
# Use it only for the single-input, single-output case
model = tf.keras.Sequential([
    tf.keras.layers.Dense(200, activation="relu", input_shape=(100,)),
    tf.keras.layers.Dense(300, activation="relu")
])

# Defining a tf.keras.Model using the Functional API
# Here we create layers and call them with the appropriate input tensors to get the layer outputs
# Can be used to create a Model with multiple inputs and outputs
inputs = tf.keras.layers.Input(shape=(100,))
layer_1_out = tf.keras.layers.Dense(200, activation="relu")(inputs)
layer_2_out = tf.keras.layers.Dense(300, activation="relu")(layer_1_out)
layer_3_out = tf.keras.layers.Dense(400, activation="relu")(layer_2_out)
model = tf.keras.Model(inputs=[inputs], outputs=[layer_2_out, layer_3_out])
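Once a tf.keras.Model is defined, training really does come down to a couple of one-liners. Below is a minimal sketch of compiling and fitting the Sequential model above; the random dummy data, loss, and optimizer choices are purely illustrative assumptions, not part of the original article.

import numpy as np
import tensorflow as tf

# Dummy data, assumed only for illustration: 1000 samples of 100 features each
x_train = np.random.rand(1000, 100).astype("float32")
y_train = np.random.rand(1000, 300).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(200, activation="relu", input_shape=(100,)),
    tf.keras.layers.Dense(300, activation="relu")
])

# compile() attaches the optimizer and loss; fit() runs the whole training loop
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, batch_size=32, epochs=2)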
2. How to save a Tensorflow-Keras Model?:
After you have trained a neural network, you will want to save it for future use or for deploying it to production. The saved model primarily contains the network design or graph, the values of the network parameters that we have trained, and the optimizer state if the tf.keras.Model was compiled with one. There are two different ways of saving a tf.keras.Model:
- HDF5 format: Keras Native format
- SavedModel format: Tensorflow native format

Let's look at these model formats in more detail.
HDF5 format:
The HDF5 format is great for storing large amounts of numerical data and manipulating it from NumPy. For example, you can easily slice multi-terabyte datasets stored on disk as if they were real NumPy arrays. You can also store multiple datasets in a single file, iterate over them, or check out their .shape and .dtype attributes. The HDF5 format saves the model and all of its parameters in a single file with a .h5 extension, into which the model architecture, trained weights, and optimizer information (if present) are serialized. This is the easiest way to save a model when it is built only from standard layers.
# Saving a model in HDF5 format
# model is a tf.keras.Model object created using any of the above methods
model.save("mymodel.h5")
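Since the .h5 file is a regular HDF5 container, you can peek inside it with the h5py library. Here is a small sketch, assuming h5py is installed and the file was written by tf.keras; the exact group and attribute names can vary between Keras versions.

import h5py

# Open the saved model file and list its top-level contents
with h5py.File("mymodel.h5", "r") as f:
    # Typically contains groups such as 'model_weights' and 'optimizer_weights'
    print(list(f.keys()))
    # The serialized architecture is usually stored as a JSON string in the file attributes
    print(list(f.attrs.keys()))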
SavedModel format:
The SavedModel is the native serialization format of Tensorflow, and a SavedModel can also be deployed (for example with TensorFlow Serving) without writing any Python code. It saves the model architecture, weights, and optimizer information just like the HDF5 method, but instead of using a single file, it splits the information into multiple files grouped in a folder. SavedModel is the preferred method of saving when your model has custom layers: with the HDF5 format, custom layers have to be defined at runtime during model restoration, whereas with SavedModel the custom layers are also serialized to disk, so they can be loaded directly without any prior layer definitions.
# Saving a model in SavedModel format
# Both HDF5 and SavedModel use the same API for saving; append .h5 to the output name to save in HDF5 format instead
model.save("mymodel")

# The SavedModel folder has the following structure
# mymodel/
#     assets/
#     saved_model.pb
#     variables/
#         variables.data-00000-of-00002
#         variables.data-00001-of-00002
#         variables.index
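If you prefer not to rely on the file name to choose the format, tf.keras also lets you request the format explicitly through the save_format argument; a short sketch of both options (behaviour as of Tensorflow 2.x):

# Explicitly request the SavedModel (Tensorflow native) format
model.save("mymodel", save_format="tf")

# Explicitly request the HDF5 (Keras native) format
model.save("mymodel.h5", save_format="h5")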
The roles of the various files are as follows:
a) saved_model.pb
This file contains the architecture of the saved model, the training configuration, and the optimizer information. It is the most important file of the SavedModel.
b) variables folder
This folder contains the trained weights and biases of the model. Since it can hold a huge number of variables, it is split into multiple shard files.
c) assets folder
This folder is mostly empty; it may contain auxiliary files (such as vocabulary files) that Tensorflow uses internally when restoring the model.
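To convince yourself of this layout, you can simply walk the SavedModel folder after saving; a minimal sketch, assuming the model was saved as "mymodel" in the current working directory:

import os

# Print every file inside the SavedModel folder to inspect its structure
for root, dirs, files in os.walk("mymodel"):
    for name in files:
        print(os.path.join(root, name))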
3. How to restore a Tensorflow-Keras model?
Once we save the tf.keras.Model to disk, we can load it back at any time to resume training, make predictions, or fine-tune by building a new network on top of the restored one. Similar to saving, a tf.keras.Model can be restored through a single unified API for both the HDF5 and SavedModel formats.
# Restoring a HDF5 model
model = tf.keras.models.load_model("mymodel.h5")

# Restoring a SavedModel
model = tf.keras.models.load_model("mymodel")
Tensorflow automatically determines whether the provided path is in SavedModel or HDF5 format and takes the appropriate steps to restore it.
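A quick way to check that the restoration worked, whichever format was used, is to print the summary of the loaded model and confirm that the layers and parameter counts match the network that was saved; a minimal sketch:

import tensorflow as tf

# Works the same whether "mymodel" (SavedModel) or "mymodel.h5" (HDF5) is passed
restored_model = tf.keras.models.load_model("mymodel.h5")

# The layer names, output shapes, and parameter counts should match the saved network
restored_model.summary()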
4. How to work with restored models for prediction or fine-tuning?
Now that you have understood how to save and restore Tensorflow models, let's develop a practical guide to restoring any pre-trained model and using it for prediction, fine-tuning, or further training. Whenever you work with Tensorflow, you define a tf.keras.Model and train it with the fit API, which takes the training dataset and the training hyperparameters and trains the model for a specified number of epochs. The trained model can then be saved using the methods above, either to run predictions with it or to continue training later. Say we have trained and saved the two-layer fully connected network from section 1 to disk and now want to do inference with it. It can be done as follows:
restored_model = tf.keras.models.load_model("mymodel.h5")
prediction_out = restored_model.predict(input_batch)
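Because the optimizer state is saved along with a compiled model, the restored model can also simply continue training where it left off. Here is a short sketch; the x_train and y_train arrays are placeholder random data with the shapes used earlier in this article, not data from the original tutorial.

import numpy as np
import tensorflow as tf

# Placeholder training data (illustrative only): 100-dim inputs, 300-dim targets
x_train = np.random.rand(1000, 100).astype("float32")
y_train = np.random.rand(1000, 300).astype("float32")

restored_model = tf.keras.models.load_model("mymodel.h5")

# No compile() call is needed here if the model was compiled before saving;
# fit() resumes training with the restored optimizer state
restored_model.fit(x_train, y_train, batch_size=32, epochs=2)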
But what if we want to fine-tune the saved model by throwing away its last layer and creating our own layers on top of it? This can be done easily by restoring the model and referencing the appropriate tensors of the tf.keras.Model object.
# Restoring the model with three layers (Input, Dense, Dense)
restored_model = tf.keras.models.load_model("mymodel.h5")

# Now we want to create a new network where two new Dense layers
# are added on top of the output of the first Dense layer in the restored model.
# We can get the appropriate output tensor by indexing the right layer in the tf.keras.Model object
middle_output = restored_model.layers[-2].output
new_dense_1_out = tf.keras.layers.Dense(50)(middle_output)
new_dense_2_out = tf.keras.layers.Dense(20)(new_dense_1_out)

# Now we can create a new model with the input of the old model and the new output tensors
new_model = tf.keras.Model(inputs=restored_model.inputs, outputs=[new_dense_2_out])

# The new model can be trained using the new_model.fit() API as usual
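When fine-tuning, it is common to freeze the restored layers so that only the newly added layers are updated. The sketch below continues from the code above (reusing restored_model and new_model); the loss, optimizer, and random training data for the new 20-dimensional output are illustrative assumptions, not part of the original article.

import numpy as np

# Freeze every layer that came from the restored model so only the new heads train
for layer in restored_model.layers:
    layer.trainable = False

# Compile the new model; the optimizer and loss choices here are placeholders
new_model.compile(optimizer="adam", loss="mse")

# Placeholder fine-tuning data: 100-dim inputs, 20-dim targets for the new output layer
x_train = np.random.rand(1000, 100).astype("float32")
y_train = np.random.rand(1000, 20).astype("float32")
new_model.fit(x_train, y_train, batch_size=32, epochs=2)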
Hopefully, this gives you a very clear understanding of how Tensorflow models are saved and restored. Please feel free to share your questions or doubts in the comments section.
Update (June 2020): This article now covers the Tensorflow 2.0 tf.keras API. For older Tensorflow versions, use this tutorial.