# ReNom Basic Calculation using Auto Differentiation

This is an introduction to basic calculation in ReNom using automatic differentiation.

Automatic differentiation of the loss function is invoked by calling its grad method:

loss.grad()
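As a worked illustration of what such a gradient call computes, here is the gradient of the squared-error loss used later in this tutorial, L = sum((z - y)**2 / 2), derived by hand (dL/dz = z - y) and checked against a central finite difference in plain NumPy. This is only a sketch of the underlying math, not ReNom code:

```python
import numpy as np

# Hand-derived gradient of the squared-error loss used later in this
# tutorial: L = sum((z - y)**2 / 2), so dL/dz = z - y.
# We verify it against a central finite difference; loss.grad() obtains
# this kind of gradient automatically.
rng = np.random.default_rng(0)
z = rng.standard_normal((4, 1))
y = rng.standard_normal((4, 1))

def loss(z):
    return np.sum((z - y) ** 2 / 2)

analytic = z - y  # dL/dz derived by hand

eps = 1e-6
numeric = np.zeros_like(z)
for i in range(z.size):
    zp = z.copy(); zp.flat[i] += eps
    zm = z.copy(); zm.flat[i] -= eps
    numeric.flat[i] = (loss(zp) - loss(zm)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))  # tiny (numerical noise only)
```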

As an example, we generate data that follows a sin function, add some noise, and split the result into training and test datasets.
We then define a 2-layer neural network with a naive representation.

## Requirements

For this tutorial, you will need the following modules:
- matplotlib 2.0.2
- numpy 1.13.1
- renom
In [1]:

# -*- coding: utf-8 -*-
import matplotlib.pyplot as plt
import numpy as np
import renom as rm


## Data preparation

As stated above, we'll now generate data from a sin function. The population of our dataset, population_distribution, is defined below.
The training data and test data are both drawn from population_distribution.
In [2]:

population_distribution = lambda x:np.sin(x) + np.random.randn(*x.shape)*0.05

train_x = np.random.rand(1024, 1)*np.pi*2
train_y = population_distribution(train_x)

test_x = np.random.rand(128, 1)*np.pi*2
test_y = population_distribution(test_x)
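As a quick sanity check (not part of the original tutorial), we can confirm in plain NumPy that data generated this way covers [0, 2π) and deviates from sin(x) only by the 0.05 noise scale:

```python
import numpy as np

# Sanity check of the generation scheme above (illustrative only):
# inputs cover [0, 2*pi) and targets deviate from sin(x) only by the
# 0.05 noise scale.
np.random.seed(0)
population_distribution = lambda x: np.sin(x) + np.random.randn(*x.shape) * 0.05

train_x = np.random.rand(1024, 1) * np.pi * 2
train_y = population_distribution(train_x)

assert train_x.min() >= 0 and train_x.max() < 2 * np.pi
residual = train_y - np.sin(train_x)
print(residual.std())  # close to the 0.05 noise scale
```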


This graph shows the distribution of the generated dataset. The training set is in blue, the test set in orange.

In [3]:

plt.clf()
plt.grid()
plt.scatter(train_x, train_y, label="train")
plt.scatter(test_x, test_y, label="test")
plt.title("Population of dataset")
plt.ylabel("y")
plt.xlabel("x")
plt.legend()
plt.show()


## Neural network definition

We define a 2-layer neural network with 2 weight parameters and 2 bias parameters. These parameters will be updated with respect to their gradients, so we instantiate them as Variable objects. The structure of the network is shown in the figure below.

In [4]:

INPUT_SIZE = 1
OUTPUT_SIZE = 1
HIDDEN_SIZE = 5

w1 = rm.Variable(np.random.randn(INPUT_SIZE, HIDDEN_SIZE)*0.01)
b1 = rm.Variable(np.zeros((1, HIDDEN_SIZE)))
w2 = rm.Variable(np.random.randn(HIDDEN_SIZE, OUTPUT_SIZE)*0.01)
b2 = rm.Variable(np.zeros((1, OUTPUT_SIZE)))

optimiser = rm.Sgd(0.01)

def nn_forward(x):
    z = rm.dot(rm.tanh(rm.dot(x, w1) + b1), w2) + b2
    return z

def nn(x, y):
    z = nn_forward(x)
    loss = rm.sum(((z - y)**2)/2)
    return loss
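To make the tensor shapes in nn_forward concrete, here is the same forward pass sketched in plain NumPy. These are dummy weights for illustration only; the ReNom Variables above additionally record a computational graph so that loss.grad() can work:

```python
import numpy as np

# The same forward pass in plain NumPy, to make the shapes explicit.
# Dummy weights for illustration only; unlike ReNom Variables, these
# build no computational graph.
INPUT_SIZE, HIDDEN_SIZE, OUTPUT_SIZE = 1, 5, 1
rng = np.random.default_rng(0)
w1 = rng.standard_normal((INPUT_SIZE, HIDDEN_SIZE)) * 0.01   # (1, 5)
b1 = np.zeros((1, HIDDEN_SIZE))                              # (1, 5)
w2 = rng.standard_normal((HIDDEN_SIZE, OUTPUT_SIZE)) * 0.01  # (5, 1)
b2 = np.zeros((1, OUTPUT_SIZE))                              # (1, 1)

def nn_forward_np(x):
    # (N, 1) @ (1, 5) -> (N, 5); tanh; (N, 5) @ (5, 1) -> (N, 1)
    return np.tanh(x @ w1 + b1) @ w2 + b2

x = rng.random((32, INPUT_SIZE)) * np.pi * 2
z = nn_forward_np(x)
print(z.shape)  # (32, 1)
```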


## Training loop

The training loop is described below.
Calling loss.grad() computes the gradients of the loss with respect to each Variable, and grad.update(optimiser) updates the parameters (weights and biases) according to the chosen optimizer.
In [5]:

N = len(train_x)
batch_size = 32
train_curve = []
for i in range(1, 101):
    perm = np.random.permutation(N)
    total_loss = 0
    for j in range(N//batch_size):
        index = perm[j*batch_size:(j+1)*batch_size]
        train_batch_x = train_x[index]
        train_batch_y = train_y[index]
        loss = nn(train_batch_x, train_batch_y)
        grad = loss.grad()       # auto differentiation
        grad.update(optimiser)   # SGD parameter update
        total_loss += loss.as_ndarray()
    train_curve.append(total_loss/(j+1))
    if i%10 == 0:
        print("epoch %02d train_loss:%f"%(i, train_curve[-1]))

plt.clf()
plt.grid()
plt.plot(train_curve)
plt.title("Training curve")
plt.ylabel("train error")
plt.xlabel("epoch")
plt.show()

epoch 10 train_loss:1.506160
epoch 20 train_loss:0.620538
epoch 30 train_loss:0.554743
epoch 40 train_loss:0.605716
epoch 50 train_loss:0.503734
epoch 60 train_loss:0.393485
epoch 70 train_loss:0.247103
epoch 80 train_loss:0.219726
epoch 90 train_loss:0.221648
epoch 100 train_loss:0.181218


## Prediction

Finally, we test our model against the test dataset. ReNom operations normally return a Node object and keep expanding the computational graph; to obtain a plain NumPy array for plotting, call the as_ndarray method.

In [6]:

predicted = nn_forward(test_x).as_ndarray()


Now, if we visually inspect the result, we can confirm that the model successfully approximates the test population.

In [7]:

plt.clf()
plt.grid()
plt.scatter(test_x, test_y, label="true")
plt.scatter(test_x, predicted, label="predicted")
plt.title("Prediction result")
plt.ylabel("y")
plt.xlabel("x")
plt.legend()
plt.show()