# Saving and Loading Models

An introduction to saving and loading trained models.

Training on a large dataset can take a long time, and it is frustrating to have to restart training from the beginning.
ReNom provides functions for saving and loading model parameters, which is useful for reusing parameters already learned from a large dataset.
The way to save and load a model depends on which kind of model you use (Functional Model, Sequential Model, or Custom Model), so we introduce how to save and load each of them.

## Requirements

This tutorial requires the following modules.

• numpy 1.13.1
• matplotlib 2.0.2
• h5py 2.7.0
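Before starting, you can check that the required modules are importable and compare the installed versions with the ones listed above (exact versions may differ on your machine):

```python
import importlib

# Check that the required modules are importable and print their versions.
for name in ("numpy", "matplotlib", "h5py"):
    try:
        mod = importlib.import_module(name)
        print(name, getattr(mod, "__version__", "unknown"))
    except ImportError:
        print(name, "is not installed")
```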

We introduce how to save and load model parameters for the cases below.

• For a simple neural network
  • Functional model
  • Sequential model
• For a custom model

## For a simple neural network

In [1]:

import numpy as np
import matplotlib.pyplot as plt
import renom as rm
from renom.utility.trainer import Trainer
from renom.utility.distributor import NdarrayDistributor


### Prepare data

First, we prepare a dataset. We define a population distribution and sample data from it.

In [2]:

# Data population distribution
def population(x):
    return np.sin(x*np.pi*2) + np.random.randn(*x.shape)*0.1

x = np.random.rand(1000, 1)
y = population(x)

dist = NdarrayDistributor(x, y)

# Split distributor into train_dist and test_dist by the ratio of 9:1.
train_dist, test_dist = dist.split(0.9)

# Plot dataset
plt.scatter(*train_dist.data(), label="train data")
plt.scatter(*test_dist.data(), label="test data")
plt.legend()
plt.grid()
plt.title("Population of dataset")
plt.ylabel("y")
plt.xlabel("x")
plt.show()
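The `split(0.9)` call above divides the data into training and test sets at a 9:1 ratio. The same split can be sketched with plain NumPy; this is an illustration only, not ReNom's `NdarrayDistributor` implementation:

```python
import numpy as np

# Hypothetical sketch of a 9:1 train/test split with plain NumPy
# (an illustration only, not ReNom's actual NdarrayDistributor code).
x = np.random.rand(1000, 1)
y = np.sin(x * np.pi * 2) + np.random.randn(*x.shape) * 0.1

n_train = int(len(x) * 0.9)           # 900 samples for training
perm = np.random.permutation(len(x))  # shuffle indices before splitting
x_train, y_train = x[perm[:n_train]], y[perm[:n_train]]
x_test, y_test = x[perm[n_train:]], y[perm[n_train:]]

print(x_train.shape, x_test.shape)  # (900, 1) (100, 1)
```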


### Model definition (Functional Model)

In [3]:

class Model1(rm.Model):

    def __init__(self):
        super(Model1, self).__init__()
        self._l1 = rm.Dense(3)
        self._l2 = rm.Dense(1)

    def forward(self, x):
        t1 = rm.relu(self._l1(x))
        t2 = self._l2(t1)
        return t2

model = Model1()


### Train the model using the Trainer utility

We train the model using the Trainer utility. The usage of Trainer is introduced in “Tutorial Trainer”.

In [4]:

trainer = Trainer(model,
                  batch_size=64,
                  loss_func=rm.mean_squared_error,
                  num_epoch=10,
                  optimizer=rm.Adam())
trainer.train(train_dist, test_dist)

epoch  0: avg loss 0.2346: avg test loss 0.2391: 15it [00:00, 780.72it/s]
epoch  1: avg loss 0.2222: avg test loss 0.2423: 15it [00:00, 719.57it/s]
epoch  2: avg loss 0.2364: avg test loss 0.2336: 15it [00:00, 871.63it/s]
epoch  3: avg loss 0.2274: avg test loss 0.2378: 15it [00:00, 811.40it/s]
epoch  4: avg loss 0.2270: avg test loss 0.2233: 15it [00:00, 795.58it/s]
epoch  5: avg loss 0.2186: avg test loss 0.2226: 15it [00:00, 838.38it/s]
epoch  6: avg loss 0.2146: avg test loss 0.2244: 15it [00:00, 862.10it/s]
epoch  7: avg loss 0.2097: avg test loss 0.2110: 15it [00:00, 847.75it/s]
epoch  8: avg loss 0.2171: avg test loss 0.2207: 15it [00:00, 825.14it/s]
epoch  9: avg loss 0.2138: avg test loss 0.2205: 15it [00:00, 839.61it/s]


### Save weight parameters of the model

Here we save the weight parameters of the trained model. To save them, call the model object's “save” method, which takes the path of the file to save to. The saved file is in hdf5 format, so this method requires the h5py module.

In [5]:

print(model._l1.params)
model.save("model1.h5")

{'b': Variable([[ 0.        , -0.06023313,  0.        ]], dtype=float32), 'w': Variable([[-1.10603499,  0.58687919, -1.67471874]], dtype=float32)}
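Since the file is plain hdf5, the save/load round trip can be illustrated with h5py alone. This is a standalone sketch of reading and writing weight arrays; it assumes nothing about ReNom's internal file layout, and the file name `params_demo.h5` is hypothetical:

```python
import numpy as np
import h5py

# Write a dict of weight arrays to an HDF5 file and read it back.
# (Standalone sketch; not ReNom's actual save/load implementation.)
params = {"w": np.random.randn(1, 3).astype(np.float32),
          "b": np.zeros((1, 3), dtype=np.float32)}

with h5py.File("params_demo.h5", "w") as f:
    for name, value in params.items():
        f.create_dataset(name, data=value)

with h5py.File("params_demo.h5", "r") as f:
    loaded = {name: f[name][...] for name in f}

print(sorted(loaded))  # ['b', 'w']
```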


### Reset Model Parameters

After saving the weight parameters, we reset them once, so that we can confirm loading restores them.

In [6]:

for layer in model.iter_models():
    setattr(layer, "params", {})
print(model.params)

{}


### Load weight parameters into the model

Then we load the parameters from the file and set them to the model.

In [7]:

# Load and set the weight parameters.
model.load("model1.h5")
print(model._l1.params)

{'b': Variable([[ 0.        , -0.06023313,  0.        ]], dtype=float32), 'w': Variable([[-1.10603499,  0.58687919, -1.67471874]], dtype=float32)}


### Model definition (Sequential Model)

Next, we define a simple two-layered neural network using a Sequential model. Both the input size and the output size are 1.

In [8]:

model = rm.Sequential([
    rm.Dense(3),
    rm.Relu(),
    rm.Dense(1),
])

In [9]:

trainer = Trainer(model,
                  batch_size=64,
                  loss_func=rm.mean_squared_error,
                  num_epoch=10,
                  optimizer=rm.Adam())
trainer.train(train_dist, test_dist)

epoch  0: avg loss 0.2481: avg test loss 0.2462: 15it [00:00, 779.69it/s]
epoch  1: avg loss 0.2286: avg test loss 0.2454: 15it [00:00, 755.89it/s]
epoch  2: avg loss 0.2272: avg test loss 0.2533: 15it [00:00, 828.68it/s]
epoch  3: avg loss 0.2327: avg test loss 0.2416: 15it [00:00, 779.66it/s]
epoch  4: avg loss 0.2237: avg test loss 0.2415: 15it [00:00, 794.10it/s]
epoch  5: avg loss 0.2307: avg test loss 0.2280: 15it [00:00, 819.87it/s]
epoch  6: avg loss 0.2329: avg test loss 0.2230: 15it [00:00, 806.11it/s]
epoch  7: avg loss 0.2249: avg test loss 0.2344: 15it [00:00, 812.91it/s]
epoch  8: avg loss 0.2162: avg test loss 0.2197: 15it [00:00, 830.36it/s]
epoch  9: avg loss 0.2181: avg test loss 0.2209: 15it [00:00, 882.49it/s]

In [10]:

print(model[0].params)
model.save("model2.h5")

{'b': Variable([[ 0.        ,  0.03517542, -0.05341347]], dtype=float32), 'w': Variable([[-1.50921047,  0.08258125,  1.02629244]], dtype=float32)}

In [11]:

for layer in model:
    setattr(layer, "params", {})
print(model[0].params)

{}

In [12]:

model.load("model2.h5")
print(model[0].params)

{'b': Variable([[ 0.        ,  0.03517542, -0.05341347]], dtype=float32), 'w': Variable([[-1.50921047,  0.08258125,  1.02629244]], dtype=float32)}


## For a custom model

We can save and load a custom model in the same way as the simple models above, as shown below.

### Model definition (Custom Model)

In [13]:

class Model3(rm.Model):

    def __init__(self):
        super(Model3, self).__init__()
        self.params.w1 = rm.Variable(np.zeros((1, 3)))

    def forward(self, x):
        w1 = self.params.w1
        t1 = rm.relu(rm.dot(x, w1))
        return t1

model = Model3()

In [14]:

print(model.params.w1)
model.save("model3.h5")

[[ 0.  0.  0.]]

In [15]:

for layer in model.iter_models():
    setattr(layer, "params", {})
print(model.params)

{}

In [16]:

# Load and set the weight parameters.
model.load("model3.h5")
print(model.params.w1)

[[ 0.  0.  0.]]