renom.layers.loss package

class renom.layers.loss.clipped_mean_squared_error.ClippedMeanSquaredError(clip=1.0)

Bases: object

Clipped mean squared error function. In the forward propagation, this function performs the same calculation as mean squared error.

In the backward propagation, the gradient is clipped according to the following formula.

\frac{dE}{dx}_{clipped} = \max\left(\min\left(\frac{dE}{dx}, clip\right), -clip\right)
Parameters:
  • x ( ndarray,Node ) – Input data.
  • y ( ndarray,Node ) – Target data.
  • clip ( float,tuple ) – Clipping threshold.
Raises:

AssertionError – An assertion error will be raised if the given tensor dimension is less than 2.
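
Example

A minimal usage sketch, assuming a functional form rm.clipped_mean_squared_error analogous to rm.mean_squared_error below (the clip keyword and the exact printed representation are assumptions):

>>> import renom as rm
>>> import numpy as np
>>>
>>> x = np.array([[1, 1]])
>>> y = np.array([[-1, -1]])
>>> loss = rm.clipped_mean_squared_error(x, y, clip=1.0)
>>> print(loss)  # forward value equals the plain mean squared error, here 4.0

Clipping only affects the backward pass: each element of dE/dx is limited to the range [-clip, clip] before being propagated.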

class renom.layers.loss.cross_entropy.CrossEntropy

Bases: object

This function evaluates the cross entropy loss between the target y and the input x.

E(x) = -\sum_{n}^{N}\sum_{k}^{K}y_{nk}\log(x_{nk}+\epsilon)

N is the batch size. \epsilon is a small constant added to avoid taking the logarithm of zero.

Parameters:
  • x ( ndarray,Node ) – Input array.
  • y ( ndarray,Node ) – Target array.
Raises:

AssertionError – An assertion error will be raised if the given tensor dimension is less than 2.

Example

>>> import renom as rm
>>> import numpy as np
>>>
>>> x = np.array([[1.0, 0.5]])
>>> y = np.array([[0.0, 1.0]])
>>> print(x.shape, y.shape)
((1, 2), (1, 2))
>>> loss = rm.cross_entropy(x, y)
>>> print(loss)
cross_entropy(0.6931471824645996)

class renom.layers.loss.mean_squared_error.MeanSquaredError

Bases: object

This function evaluates the loss between the target y and the input x using mean squared error.

E(x) = \frac{1}{2N}\sum_{n}^{N}\sum_{k}^{K}(x_{nk}-y_{nk})^2

N is the batch size.

Parameters:
  • x ( ndarray,Node ) – Input array.
  • y ( ndarray,Node ) – Target array.
Raises:

AssertionError – An assertion error will be raised if the given tensor dimension is less than 2.

Example

>>> import renom as rm
>>> import numpy as np
>>>
>>> x = np.array([[1, 1]])
>>> y = np.array([[-1, -1]])
>>> print(x.shape, y.shape)
((1, 2), (1, 2))
>>> loss = rm.mean_squared_error(x, y)
>>> print(loss)
mean_squared_error(4.0)

class renom.layers.loss.sigmoid_cross_entropy.SigmoidCrossEntropy

Bases: object

This function evaluates the loss between the target y and the output of the sigmoid activation z using cross entropy.

\begin{split}z_{nk} &= \frac{1}{1 + \exp(-x_{nk})} \\ E(x) &= -\frac{1}{N}\sum_{n}^{N}\sum_{k}^{K}\left\{y_{nk}\log(z_{nk})+(1-y_{nk})\log(1-z_{nk})\right\}\end{split}
Parameters:
  • x ( ndarray,Node ) – Input array.
  • y ( ndarray,Node ) – Target array.
Raises:

AssertionError – An assertion error will be raised if the given tensor dimension is less than 2.
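
Example

A minimal usage sketch, assuming a functional form rm.sigmoid_cross_entropy analogous to rm.cross_entropy above (the exact printed representation may differ):

>>> import renom as rm
>>> import numpy as np
>>>
>>> x = np.array([[0.0, 0.0]])
>>> y = np.array([[0.0, 1.0]])
>>> loss = rm.sigmoid_cross_entropy(x, y)
>>> print(loss)  # z = (0.5, 0.5), so the loss is -(log(0.5) + log(0.5)) ≈ 1.386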

class renom.layers.loss.softmax_cross_entropy.SoftmaxCrossEntropy

Bases: object

This function evaluates the loss between the target y and the output of the softmax activation z using cross entropy.

\begin{split}z_{nk} &= \frac{\exp(x_{nk})}{\sum_{j=1}^{K}\exp(x_{nj})} \\ E(x) &= -\frac{1}{N}\sum_{n}^{N}\sum_{k}^{K}y_{nk}\log(z_{nk})\end{split}
Parameters:
  • x ( ndarray,Node ) – Input array.
  • y ( ndarray,Node ) – Target array.
Raises:

AssertionError – An assertion error will be raised if the given tensor dimension is less than 2.
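
Example

A minimal usage sketch, assuming a functional form rm.softmax_cross_entropy analogous to rm.cross_entropy above (the exact printed representation may differ):

>>> import renom as rm
>>> import numpy as np
>>>
>>> x = np.array([[0.0, 0.0]])
>>> y = np.array([[0.0, 1.0]])
>>> loss = rm.softmax_cross_entropy(x, y)
>>> print(loss)  # softmax gives z = (0.5, 0.5), so the loss is -log(0.5) ≈ 0.693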