renom package

ReNom

class renom.core.Grads(root=None)

Bases: object

Grads class. This class contains the gradients of each Node object.

When the grad method of a Node instance is called, an instance of the Grads class is returned.

To get the gradient with respect to a Variable object 'x' on a computational graph, call the 'get' method of the Grads object (an example is below).

Example

>>> import numpy as np
>>> import renom as rm
>>> a = rm.Variable(np.random.rand(2, 3))
>>> b = rm.Variable(np.random.rand(2, 3))
>>> c = rm.sum(a + 2*b)
>>> grad = c.grad()
>>> grad.get(a)
Mul([[ 1.,  1.,  1.],
     [ 1.,  1.,  1.]], dtype=float32)
>>> grad.get(b)
RMul([[ 2.,  2.,  2.],
      [ 2.,  2.,  2.]], dtype=float32)
get(node, default=<object object>)

This function returns the gradient with respect to the given node. If there is no gradient for the given node, this function returns 'None'.

Parameters: node ( Node ) -- Returns a gradient with respect to this argument.
Returns: Gradient of given node object.
Return type: ndarray, Node , None
update(opt=None, models=())

This function updates variable objects on the computational graph using obtained gradients.

If an optimizer instance is passed, gradients are rescaled with regard to the optimization algorithm before updating.

Parameters:
  • opt ( Optimizer ) -- Algorithm for rescaling gradients.
  • models -- List of models whose variables should be updated. When specified, variables that do not belong to any of the given models are not updated.
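
A minimal usage sketch (output omitted since the input is random; Sgd is the optimizer class documented below):

>>> import numpy as np
>>> import renom as rm
>>> x = rm.Variable(np.random.rand(2, 3))
>>> y = rm.sum(2 * x)
>>> y.grad().update(rm.Sgd(lr=0.1))   # rescale the gradients with Sgd and update 'x' in place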
class renom.core.Node(*args, **kwargs)

Bases: numpy.ndarray

This is the base class of all operation functions. The Node class inherits from the numpy ndarray class.

Example

>>> import numpy as np
>>> import renom as rm
>>> vx = rm.Variable(np.random.rand(3, 2))
>>> isinstance(vx, rm.Node)
True
to_cpu()

Send the data on the GPU device to the CPU.

to_gpu()

Send the data on the CPU to the GPU device.
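
A minimal sketch, assuming a GPU device is available and GPU execution is enabled in ReNom:

>>> import numpy as np
>>> import renom as rm
>>> x = rm.Variable(np.random.rand(2, 3))
>>> x.to_gpu()   # copy the data to the GPU device
>>> x.to_cpu()   # copy the data back to host memory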

as_ndarray()

This method returns itself as an ndarray object.
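
A minimal sketch:

>>> import numpy as np
>>> import renom as rm
>>> x = rm.Variable(np.arange(4, dtype=np.float32).reshape(2, 2))
>>> arr = x.as_ndarray()   # the data as a plain numpy.ndarray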

release_gpu()

This method releases the memory on the GPU device.

grad(initial=None, detach_graph=True, **kwargs)

This method traverses the computational graph and returns the gradients of the Variable objects on it.

Parameters:
  • initial ( ndarray ) -- Initial gradient value used when traversing the graph.
  • detach_graph ( boolean ) -- If True, the computational graph will be destroyed after the gradients are computed.
detach_graph()

This method destroys the computational graph.
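
A minimal sketch of keeping the graph alive with detach_graph=False and destroying it explicitly afterwards:

>>> import numpy as np
>>> import renom as rm
>>> x = rm.Variable(np.random.rand(2, 3))
>>> y = rm.sum(3 * x)
>>> dx = y.grad(detach_graph=False).get(x)   # the graph is kept, so grad() could be called again
>>> y.detach_graph()                         # destroy the graph explicitly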

class renom.core.Variable(*args, **kwargs)

Bases: renom.core.Node

Variable class.

The gradient of this object will be calculated. A Variable object can be created from an ndarray object or a Number object.

Parameters:
  • value ( Variable , ndarray ) -- Input array.
  • auto_update ( bool ) -- Auto update flag.

Example

>>> import numpy as np
>>> import renom as rm
>>> x = np.array([1., -1.])
>>> rm.Variable(x)
Variable([ 1., -1.], dtype=float32)
class renom.core.Amax(*args, **kwargs)

Bases: renom.core.Abase

This function computes the maximum of the input array, optionally along a given axis.

Parameters:
  • arg ( Variable , ndarray ) -- Input matrix.
  • axis ( int ) -- The calculation is performed along this axis.
  • keepdims ( bool ) -- If True is passed, reduced dimensions remain.

Example

>>> import numpy as np
>>> import renom as rm
>>> # Forward Calculation
>>> a = np.arange(4).reshape(2, 2)
>>> a
[[0 1]
 [2 3]]
>>> rm.amax(a, axis=1)
[ 1.  3.]
>>>
>>> rm.amax(a, axis=0)
[ 2.  3.]
>>> rm.amax(a, axis=0, keepdims=True)
[[ 2.  3.]]
>>>
>>> # Calculation of differentiation
>>> va = rm.Variable(a)
>>> out = rm.amax(va)
>>> grad = out.grad()
>>> grad.get(va) # Getting the gradient of 'va'.
[[ 0.,  0.],
 [ 0.,  1.]]
class renom.core.Amin(*args, **kwargs)

Bases: renom.core.Abase

This function computes the minimum of the input array, optionally along a given axis.

Parameters:
  • arg ( Variable , ndarray ) -- Input matrix.
  • axis ( int ) -- The calculation is performed along this axis.
  • keepdims ( bool ) -- If True is passed, reduced dimensions remain.

Example

>>> import numpy as np
>>> import renom as rm
>>> # Forward Calculation
>>> a = np.arange(4).reshape(2, 2)
>>> a
[[0 1]
 [2 3]]
>>> rm.amin(a, axis=1)
[ 0.  2.]
>>>
>>> rm.amin(a, axis=0)
[ 0.  1.]
>>> rm.amin(a, axis=0, keepdims=True)
[[ 0.  1.]]
>>>
>>> # Calculation of differentiation
>>> va = rm.Variable(a)
>>> out = rm.amin(va)
>>> grad = out.grad()
>>> grad.get(va) # Getting the gradient of 'va'.
[[ 1.,  0.],
 [ 0.,  0.]]
renom.operation.reshape(array, shape)

This function reshapes the input array to the given shape.

Parameters:
  • array ( Variable , ndarray ) -- Input array.
  • shape ( tuple ) -- The target shape.

Example

>>> import renom as rm
>>> import numpy as np
>>> x = rm.Variable(np.arange(6))
>>> x.shape
(6,)
>>> y = rm.reshape(x, (2, 3))
>>> y.shape
(2, 3)
class renom.operation.sum(*args, **kwargs)

Bases: renom.core.Node

This function sums up the elements of the input array. If the argument 'axis' is passed, the sum is performed along the specified axis.

Parameters:
  • array ( Variable ) -- Input array.
  • axis ( int ) -- Summing up along this axis.

Example

>>> import numpy as np
>>> import renom as rm
>>>
>>> x = np.random.rand(2, 3)
>>> z = rm.sum(x)
>>> z
sum(3.21392822265625, dtype=float32)
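
The axis argument reduces along a single dimension. A short sketch continuing the session above (the shape is shown instead of values since the input is random):

>>> rm.sum(x, axis=0).shape
(3,)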
class renom.operation.dot(*args, **kwargs)

Bases: renom.core.BinOp

This function computes the dot product of the two matrices.

Parameters:

Example

>>> import numpy as np
>>> import renom as rm
>>>
>>> x = np.random.rand(2, 3)
>>> y = np.random.rand(2, 2)
>>> z = rm.dot(y, x)
>>> z
dot([[ 0.10709135,  0.15022227,  0.12853521],
     [ 0.30557284,  0.32320538,  0.26753256]], dtype=float32)
class renom.operation.concat(*args, **kwargs)

Bases: renom.core.Node

Join a sequence of arrays along the specified axis.

Parameters:
  • args ( Variable , tuple ) -- Input arrays or a tuple of input arrays.
  • axis ( int ) -- Concatenation is performed along this axis. The default value is 1.

Example

>>> import numpy as np
>>> import renom as rm
>>>
>>> x = np.random.rand(2, 3)
>>> y = np.random.rand(2, 2)
>>> z = rm.concat(x, y)
>>> z.shape
(2, 5)
>>> z
concat([[ 0.56989014,  0.50372809,  0.40573129,  0.17601326,  0.07233092],
        [ 0.09377897,  0.8510806 ,  0.78971916,  0.52481949,  0.06913455]], dtype=float32)
class renom.operation.where(*args, **kwargs)

Bases: renom.core.Node

Return elements, either from a or b, depending on condition.

Parameters:
  • condition ( Variable , ndarray ) -- Condition array.
  • a ( Variable , ndarray ) -- Elements are taken from this array where condition is True.
  • b ( Variable , ndarray ) -- Elements are taken from this array where condition is False.

Example

>>> import numpy as np
>>> import renom as rm
>>>
>>> x = np.random.rand(2, 3)
>>> x
array([[ 0.56989017,  0.50372811,  0.4057313 ],
       [ 0.09377897,  0.85108059,  0.78971919]])
>>> z = rm.where(x > 0.5, x, 0)
>>> z
where([[ 0.56989014,  0.50372809,  0.        ],
       [ 0.        ,  0.8510806 ,  0.78971916]], dtype=float32)
class renom.operation.sqrt(*args, **kwargs)

Bases: renom.core.UnaryOp

Square root operation.

Parameters: arg ( Variable , ndarray ) -- Input array.

Example

>>> import numpy as np
>>> import renom as rm
>>>
>>> x = np.random.rand(2, 3)
>>> x
array([[ 0.56989017,  0.50372811,  0.4057313 ],
       [ 0.09377897,  0.85108059,  0.78971919]])
>>> z = rm.sqrt(x)
>>> z
sqrt([[ 0.75491071,  0.70973808,  0.6369704 ],
      [ 0.30623353,  0.92254031,  0.88866144]], dtype=float32)
class renom.operation.log(*args, **kwargs)

Bases: renom.core.UnaryOp

Log operation.

Parameters: arg ( Variable , ndarray ) -- Input array.
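
Example

A minimal sketch (output omitted; the natural logarithm is assumed, as in numpy.log):

>>> import numpy as np
>>> import renom as rm
>>> x = np.array([1., np.e])
>>> z = rm.log(x)   # elementwise logarithm, roughly [0., 1.]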
class renom.operation.exp(*args, **kwargs)

Bases: renom.core.UnaryOp

Exponential operation.

Parameters: arg ( Variable , ndarray ) -- Input array.
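
Example

A minimal sketch (output omitted):

>>> import numpy as np
>>> import renom as rm
>>> x = np.array([0., 1.])
>>> z = rm.exp(x)   # elementwise exponential, roughly [1., 2.718]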
class renom.optimizer.Sgd(lr=0.1, momentum=0.4)

Bases: renom.optimizer.Optimizer

Stochastic Gradient Descent.

Parameters:
  • lr ( float ) -- Learning rate.
  • momentum ( float ) -- Momentum coefficient of optimization.

Example

>>> import numpy as np
>>> import renom as rm
>>> x = rm.Variable(np.random.rand(2, 3))
>>> x
Variable([[ 0.93283856,  0.44494787,  0.47652033],
          [ 0.04769089,  0.16719061,  0.52063918]], dtype=float32)
>>> a = 2
>>> opt = rm.Sgd(lr=0.1)    # Stochastic gradient descent algorithm
>>> y = rm.sum(a*x)
>>> dx = y.grad(detach_graph=False).get(x)
>>> dx
RMul([[ 2.,  2.,  2.],
      [ 2.,  2.,  2.]], dtype=float32)
>>> y.grad(detach_graph=False).update(opt)
>>> x
Variable([[ 0.73283857,  0.24494787,  0.27652031],
          [-0.1523091 , -0.03280939,  0.32063919]], dtype=float32)
class renom.optimizer.Adagrad(lr=0.01, epsilon=1e-08)

Bases: renom.optimizer.Optimizer

Adaptive gradient algorithm. [Adagrad]

Parameters:
  • lr ( float ) -- Learning rate.
  • epsilon ( float ) -- Small value added to avoid division by zero.
[Adagrad] Duchi, J., Hazan, E., & Singer, Y. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research, 12, 2121–2159.
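
Example

A minimal usage sketch (output omitted since the input is random; it assumes Adagrad is accessible as rm.Adagrad, analogous to rm.Sgd above):

>>> import numpy as np
>>> import renom as rm
>>> x = rm.Variable(np.random.rand(2, 3))
>>> y = rm.sum(2 * x)
>>> y.grad().update(rm.Adagrad(lr=0.01))   # update 'x' in place with Adagrad-rescaled gradients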
class renom.optimizer.Rmsprop(lr=0.001, g=0.9, epsilon=1e-08)

Bases: renom.optimizer.Optimizer

Rmsprop is described by the following formula. [Rmsprop]

\begin{split}
m_{t+1} &= g m_{t} + (1-g)\nabla E^2 \\
r_{t} &= \frac{lr}{\sqrt{m_{t+1}}+\epsilon} \\
w_{t+1} &= w_{t} - r_{t}\nabla E
\end{split}
Parameters:
  • lr ( float ) -- Learning rate.
  • g ( float ) -- Decay rate of the moving average of the squared gradient.
  • epsilon ( float ) -- Small value added to avoid division by zero.
[Rmsprop] Nitish Srivastava, Kevin Swersky, Geoffrey Hinton. Neural Networks for Machine Learning.
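
A plain NumPy sketch of a single update step following the formula above (illustrative only, not ReNom code; w, m and grad_E are example values):

>>> import numpy as np
>>> lr, g, eps = 0.001, 0.9, 1e-8
>>> w = np.array([1.0, 2.0])              # parameters w_t
>>> m = np.zeros_like(w)                  # running average m_t
>>> grad_E = np.array([0.5, -0.5])        # gradient of E with respect to w
>>> m = g * m + (1 - g) * grad_E ** 2     # m_{t+1}
>>> r = lr / (np.sqrt(m) + eps)           # r_t
>>> w = w - r * grad_E                    # w_{t+1}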
class renom.optimizer.Adam(lr=0.001, g=0.999, b=0.9, epsilon=1e-08)

Bases: renom.optimizer.Optimizer

Adaptive moment estimation is described by the following formula. [Adam]

\begin{split}
m_{t+1} &= b m_t + \nabla E \\
n_{t+1} &= g n_t + \nabla E^2 \\
\hat{m}_{t+1} &= \frac{m_{t+1}}{1-b^{t+1}} \\
\hat{n}_{t+1} &= \frac{n_{t+1}}{1-g^{t+1}} \\
w_{t+1} &= w_{t} - \frac{\alpha \hat{m}_{t+1}}{\sqrt{\hat{n}_{t+1}}+\epsilon}
\end{split}
Parameters:
  • lr ( float ) -- Learning rate.
  • g ( float ) -- Exponential decay rate of the second moment estimate n.
  • b ( float ) -- Exponential decay rate of the first moment estimate m.
  • epsilon ( float ) -- Small value added to avoid division by zero.
[Adam] Diederik P. Kingma, Jimmy Ba. Adam: A Method for Stochastic Optimization (2014). https://arxiv.org/pdf/1412.6980.pdf
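
A plain NumPy sketch of a single update step following the formula above (illustrative only, not ReNom code; lr stands in for \alpha in the formula, and w, the moments and grad_E are example values):

>>> import numpy as np
>>> lr, g, b, eps = 0.001, 0.999, 0.9, 1e-8
>>> t = 0                                         # step counter
>>> w = np.array([1.0, 2.0])                      # parameters w_t
>>> m, n = np.zeros_like(w), np.zeros_like(w)     # moment estimates m_t, n_t
>>> grad_E = np.array([0.5, -0.5])                # gradient of E with respect to w
>>> m = b * m + grad_E                            # m_{t+1}
>>> n = g * n + grad_E ** 2                       # n_{t+1}
>>> m_hat = m / (1 - b ** (t + 1))                # bias-corrected first moment
>>> n_hat = n / (1 - g ** (t + 1))                # bias-corrected second moment
>>> w = w - lr * m_hat / (np.sqrt(n_hat) + eps)   # w_{t+1}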