Sequential Model

An introduction to building a sequential model

ReNom DL provides two ways to define a neural network. One is the sequential model, which makes the network structure easy to write and understand. The other is the functional model, which is intended for advanced users who need to define a novel activation function, a novel loss function, a new layer, or a nonstandard combination of layers. The sequential model is sufficient for most cases, but the functional model is necessary for more advanced network structures.

This tutorial shows how to use a sequential model in ReNom DL.


Required libraries:

  • numpy 1.13.1
  • matplotlib 2.0.2
  • ReNom 2.4.1
In [1]:
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix
import renom as rm
from renom.optimizer import Sgd

Data Preparation

We now prepare training and test data for classifying the Iris dataset into 3 classes, and will construct a neural network for this classification task.

In [2]:
iris = load_iris()
data = iris.data
label = iris.target
print("data shape:{}".format(data.shape))
print("label shape:{}".format(label.shape))
data shape:(150, 4)
label shape:(150,)

The Iris dataset has 3 classes; the target names are setosa, versicolor, and virginica. We will classify these three classes.

In [3]:
iris.target_names
Out[3]:
array(['setosa', 'versicolor', 'virginica'], dtype='<U10')

Neural network definition

We define a standard 3-layer neural network.
We have to choose the number of units in each layer: the input layer's size is inferred from the data, while the sizes of the other layers must be set manually.
The output layer's size must equal the number of classes.
Here, the input dimension is 4, the output dimension is 3, and the hidden layer has 20 units.
In [4]:
model = rm.Sequential([
    rm.Dense(20),
    rm.Relu(),
    rm.Dense(3)
])
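The sequential model chains a 4 → 20 → 3 computation. As a rough illustration of the shape flow (a plain-NumPy sketch with made-up weights, not ReNom's implementation), a forward pass looks like:

```python
import numpy as np

rng = np.random.RandomState(0)

# Hypothetical weights for a 4 -> 20 -> 3 network (ReNom infers the
# input size 4 from the first batch the model sees).
W1 = rng.randn(4, 20) * 0.1   # input -> hidden
b1 = np.zeros(20)
W2 = rng.randn(20, 3) * 0.1   # hidden -> output
b2 = np.zeros(3)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)  # Dense(20) followed by Relu
    return h @ W2 + b2                # Dense(3): raw class scores

batch = rng.randn(8, 4)               # a batch of 8 iris-like samples
scores = forward(batch)
print(scores.shape)                   # (8, 3): one score per class
```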

Training Loop

In [5]:
X_train, X_test, y_train, y_test = train_test_split(data, label, test_size=0.3)
y_train = y_train.reshape(len(X_train), -1)
y_test = y_test.reshape(len(X_test), -1)
print("X_train:{}, X_test:{}, y_train:{}, y_test:{}".format(X_train.shape, X_test.shape, y_train.shape, y_test.shape))
X_train:(105, 4), X_test:(45, 4), y_train:(105, 1), y_test:(45, 1)
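The labels are reshaped into column vectors, presumably so the targets are 2-D like the model output that the loss function compares them against. A small NumPy sketch of what `reshape(len(X), -1)` does:

```python
import numpy as np

y = np.array([0, 2, 1, 0, 1])      # flat label vector, shape (5,)
y_col = y.reshape(len(y), -1)      # column vector, shape (5, 1)
print(y_col.shape)                 # (5, 1)
print(y_col.ravel().tolist())      # [0, 2, 1, 0, 1]: values unchanged
```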
In [6]:
batch_size = 8
epoch = 10
N = len(X_train)
optimizer = Sgd(lr=0.001)
learning_curve = []
test_learning_curve = []

for i in range(epoch):
    perm = np.random.permutation(N)
    loss = 0
    for j in range(0, N // batch_size):
        train_batch = X_train[perm[j*batch_size : (j+1)*batch_size]]
        response_batch = y_train[perm[j*batch_size : (j+1)*batch_size]]

        with model.train():
            l = rm.softmax_cross_entropy(model(train_batch), response_batch)
        grad = l.grad()
        grad.update(optimizer)    # apply the SGD update to the weights
        loss += l.as_ndarray()
    train_loss = loss / (N // batch_size)
    learning_curve.append(train_loss)

    test_loss = rm.softmax_cross_entropy(model(X_test), y_test).as_ndarray()
    test_learning_curve.append(test_loss)
    print("epoch:{:03d}, train_loss:{:.4f}, test_loss:{:.4f}".format(i, float(train_loss), float(test_loss)))
epoch:000, train_loss:5.9495, test_loss:4.3903
epoch:001, train_loss:4.9622, test_loss:3.7556
epoch:002, train_loss:4.2683, test_loss:3.2667
epoch:003, train_loss:3.8218, test_loss:2.9466
epoch:004, train_loss:3.5294, test_loss:2.8622
epoch:005, train_loss:3.4601, test_loss:2.8613
epoch:006, train_loss:3.4920, test_loss:2.8607
epoch:007, train_loss:3.4917, test_loss:2.8609
epoch:008, train_loss:3.4937, test_loss:2.8608
epoch:009, train_loss:3.4618, test_loss:2.8604
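`rm.softmax_cross_entropy` combines a softmax over the class scores with a cross-entropy loss against the integer labels. A plain-NumPy sketch of the same computation (an illustration of the idea, not ReNom's implementation):

```python
import numpy as np

def softmax_cross_entropy(scores, labels):
    # scores: (N, C) raw outputs; labels: (N, 1) integer class ids
    shifted = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    n = len(scores)
    # mean negative log-probability of the correct class
    return -log_probs[np.arange(n), labels.ravel()].mean()

scores = np.array([[2.0, 0.5, 0.1],
                   [0.2, 1.5, 0.3]])
labels = np.array([[0], [1]])
print(float(softmax_cross_entropy(scores, labels)))   # ≈ 0.385
```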


In [7]:
predictions = np.argmax(model(X_test).as_ndarray(), axis=1)

print(confusion_matrix(y_test, predictions))
print(classification_report(y_test, predictions))

plt.plot(learning_curve, linewidth=1, label="train")
plt.plot(test_learning_curve, linewidth=1, label="test")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()
[[12  7  0]
 [ 2 11  0]
 [ 3  2  8]]
             precision    recall  f1-score   support

          0       0.71      0.63      0.67        19
          1       0.55      0.85      0.67        13
          2       1.00      0.62      0.76        13

avg / total       0.75      0.69      0.69        45
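From the confusion matrix above, the overall accuracy is the trace (correct predictions on the diagonal) divided by the total count. A quick NumPy check using the matrix as printed:

```python
import numpy as np

# Confusion matrix as printed above (rows: true class, cols: predicted)
cm = np.array([[12,  7,  0],
               [ 2, 11,  0],
               [ 3,  2,  8]])

accuracy = np.trace(cm) / cm.sum()  # 31 correct out of 45
print(round(float(accuracy), 4))    # 0.6889
```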