renom.layers.loss

class renom.layers.loss.mean_squared_error.MeanSquaredError

This function evaluates the loss between the target y and the input x using mean squared error.

E(x) = \frac{1}{2N}\sum_{n}^{N}\sum_{k}^{K}(x_{nk}-y_{nk})^2

If the argument reduce_sum is False, this class will not perform summation.

E({\bf x}) = \frac{1}{2N}({\bf x}-{\bf y})^2

N is the batch size.
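
The formulas above can be checked directly with plain NumPy. The snippet below is only a minimal sketch of the arithmetic, not ReNom's implementation, and reuses the x and y of the example further down.

import numpy as np

x = np.array([[1., 1.]])
y = np.array([[-1., -1.]])
N = x.shape[0]                        # batch size

per_element = (x - y) ** 2 / (2 * N)  # reduce_sum=False case
print(per_element)                    # [[2. 2.]]
print(per_element.sum())              # 4.0 -> reduce_sum=True case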

Parameters:
  • x ( ndarray , Node ) – Input array.
  • y ( ndarray , Node ) – Target array.
  • reduce_sum ( bool ) – If True is given, the result array will be summed up and a scalar value is returned.
Returns:

Mean squared error.

Return type:

( Node , ndarray)

Raises:

AssertionError – An assertion error will be raised if the given tensor dimension is less than 2.

Example

>>> import renom as rm
>>> import numpy as np
>>>
>>> x = np.array([[1, 1]])
>>> y = np.array([[-1, -1]])
>>> print(x.shape, y.shape)
((1, 2), (1, 2))
>>> loss = rm.mean_squared_error(x, y)
>>> print(loss)
[4.]
>>> loss = rm.mean_squared_error(x, y, reduce_sum=False)
>>> print(loss)
[[ 2.  2.]]
>>>
>>> # You can also call this function with its alias.
>>> loss = rm.mse(x, y)
>>> print(loss)
mean_squared_error(4.0)
class renom.layers.loss.clipped_mean_squared_error.ClippedMeanSquaredError(clip=1.0, reduce_sum=True)

Clipped mean squared error function. In the forward propagation, this function performs the same calculation as mean squared error.

In the backward propagation, this function calculates the following formula.

\left(\frac{dE}{dx}\right)_{clipped} = \max\left(\min\left(\frac{dE}{dx}, clip\right), -clip\right)
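
A minimal NumPy sketch of this clipping step follows. It assumes the unclipped gradient of the mean squared error is (x - y) / N, which follows from the forward formula above; this is an illustration, not ReNom's implementation.

import numpy as np

x = np.array([[1., 1.]])
y = np.array([[-1., -1.]])
N = x.shape[0]
clip = 1.0

grad = (x - y) / N                    # unclipped dE/dx of mean squared error
clipped = np.clip(grad, -clip, clip)  # max(min(dE/dx, clip), -clip)
print(grad)     # [[2. 2.]]
print(clipped)  # [[1. 1.]]
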
Parameters:
  • x ( ndarray , Node ) – Input data.
  • y ( ndarray , Node ) – Target data.
  • clip ( float , tuple ) – Clipping threshold.
  • reduce_sum ( bool ) – If True is given, the result array will be summed up and a scalar value is returned.
Returns:

Clipped mean squared error.

Return type:

( Node , ndarray)

Example

>>> import renom as rm
>>> import numpy as np
>>>
>>> x = np.array([[1, 1]])
>>> y = np.array([[-1, -1]])
>>> print(x.shape, y.shape)
((1, 2), (1, 2))
>>> loss = rm.clipped_mean_squared_error(x, y)
>>> print(loss)
clipped_mean_squared_error(4.0)
>>>
>>> # You can also call this function with its alias.
>>> loss = rm.cmse(x, y)
>>> print(loss)
clipped_mean_squared_error(4.0)
Raises: AssertionError – An assertion error will be raised if the given tensor dimension is less than 2.
class renom.layers.loss.cross_entropy.CrossEntropy

This function evaluates the cross entropy loss between the target y and the input x .

E(x) = -\sum_{n}^{N}\sum_{k}^{K}y_{nk}\ln(x_{nk}+\epsilon)

N is the batch size. \epsilon is a small constant added to avoid division by zero.
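
The formula can be reproduced with plain NumPy; the following is only a sketch, and the epsilon value here is an arbitrary choice (ReNom's internal constant may differ). It reuses the x and y of the example below.

import numpy as np

x = np.array([[1.0, 0.5]])
y = np.array([[0.0, 1.0]])
eps = 1e-8                           # assumed small constant

loss = -np.sum(y * np.log(x + eps))  # reduce_sum=True case
print(loss)                          # ~0.69314718 (= -log(0.5))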

Parameters:
  • x ( ndarray , Node ) – Input array.
  • y ( ndarray , Node ) – Target array.
  • reduce_sum ( bool ) – If True is given, the result array will be summed up and a scalar value is returned.
Returns:

Cross entropy error.

Return type:

( Node , ndarray)

Raises:

AssertionError – An assertion error will be raised if the given tensor dimension is less than 2.

Example

>>> import renom as rm
>>> import numpy as np
>>>
>>> x = np.array([[1.0, 0.5]])
>>> y = np.array([[0.0, 1.0]])
>>> print(x.shape, y.shape)
((1, 2), (1, 2))
>>> loss = rm.cross_entropy(x, y)
>>> print(loss)
[0.6931471824645996]
>>> loss = rm.cross_entropy(x, y, reduce_sum=False)
>>> print(loss)
[[0.          0.69314718]]
class renom.layers.loss.sigmoid_cross_entropy.SigmoidCrossEntropy

This function evaluates the loss between the target y and the output of the sigmoid activation z using cross entropy.

\begin{split}z_{nk} &= \frac{1}{1 + \exp(-x_{nk})} \\ E(x) &= -\frac{1}{N}\sum_{n}^{N}\sum_{k}^{K}y_{nk}\log(z_{nk})+(1-y_{nk})\log(1-z_{nk})\end{split}
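
The two formulas can be reproduced step by step with plain NumPy; this is only a sketch of the math, not ReNom's implementation, and reuses the x and y of the example below.

import numpy as np

x = np.array([[0., 1.]])
y = np.array([[1., 1.]])
N = x.shape[0]

z = 1.0 / (1.0 + np.exp(-x))                                 # sigmoid activation
loss = -np.sum(y * np.log(z) + (1 - y) * np.log(1 - z)) / N  # cross entropy
print(loss)                                                  # ~1.0064088
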
Parameters:
  • x ( ndarray , Node ) – Input array.
  • y ( ndarray , Node ) – Target array.
  • reduce_sum ( bool ) – If True is given, the result array will be summed up and a scalar value is returned.
Returns:

Cross entropy error between sigmoid(x) and target y.

Return type:

( Node , ndarray)

Example

>>> import renom as rm
>>> import numpy as np
>>>
>>> x = np.array([[0, 1]])
>>> y = np.array([[1, 1]])
>>> loss_func = rm.SigmoidCrossEntropy()
>>> loss = loss_func(x, y)
>>> print(loss)
1.0064088106155396
>>>
>>> loss = rm.sigmoid_cross_entropy(x, y)
>>> print(loss)
1.0064088106155396
>>>
>>> # You can also call this function with its alias.
>>> loss = rm.sgce(x, y)
>>> print(loss)
1.0064088106155396
Raises: AssertionError – An assertion error will be raised if the given tensor dimension is less than 2.
class renom.layers.loss.softmax_cross_entropy.SoftmaxCrossEntropy

This function evaluates the loss between the target y and the output of the softmax activation z using cross entropy.

\begin{split}z_{nk} &= \frac{\exp(x_{nk})}{\sum_{j=1}^{K}\exp(x_{nj})} \\ E(x) &= -\frac{1}{N}\sum_{n}^{N}\sum_{k}^{K}y_{nk}\log(z_{nk})\end{split}
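
As with the other losses, the formulas can be checked with plain NumPy; the snippet below is only a sketch of the math, not ReNom's implementation, using the x and y of the example below.

import numpy as np

x = np.array([[0., 1.]])
y = np.array([[1., 0.]])
N = x.shape[0]

z = np.exp(x) / np.exp(x).sum(axis=1, keepdims=True)  # softmax activation
loss = -np.sum(y * np.log(z)) / N                     # cross entropy
print(loss)                                           # ~1.3132616
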
Parameters:
  • x ( ndarray , Node ) – Input array.
  • y ( ndarray , Node ) – Target array.
  • reduce_sum ( bool ) – If True is given, the result array will be summed up and a scalar value is returned.

Example

>>> import renom as rm
>>> import numpy as np
>>>
>>> x = np.array([[0, 1]])
>>> y = np.array([[1, 0]])
>>> loss_func = rm.SoftmaxCrossEntropy()
>>> loss = loss_func(x, y)
>>> print(loss)
1.31326162815094
>>> loss = rm.softmax_cross_entropy(x, y)
>>> print(loss)
1.31326162815094
>>>
>>> # You can also call this function with its alias.
>>> loss = rm.smce(x, y)
>>> print(loss)
1.31326162815094
Raises: AssertionError – An assertion error will be raised if the given tensor dimension is less than 2.