renom package

ReNom

class renom.core.Grads

Bases: object

This class contains gradients of each Node object.

When the grad method of a Node object is called, an instance of Grads is returned.

Example

>>> import numpy as np
>>> import renom as rm
>>> a = rm.Variable(np.random.rand(2, 3))
>>> b = rm.Variable(np.random.rand(2, 3))
>>> c = rm.sum(a + 2*b)
>>> grad = c.grad()
>>> grad.get(a)
Mul([[ 1.,  1.,  1.],
     [ 1.,  1.,  1.]], dtype=float32)
>>> grad.get(b)
RMul([[ 2.,  2.,  2.],
      [ 2.,  2.,  2.]], dtype=float32)
get(node, default=<object object>)

This function returns the gradient of the given node.

Parameters: node ( Node ) – Target node whose gradient is returned.
Returns: The gradient of the passed node object.
Return type: ndarray, Node, None
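
The default argument mirrors dict.get, so the sketch below assumes get falls back to the given default when no gradient was recorded for the node; treat that fallback behavior as an assumption rather than documented API.

>>> import numpy as np
>>> import renom as rm
>>> a = rm.Variable(np.random.rand(2, 3))
>>> b = rm.Variable(np.random.rand(2, 3))
>>> c = rm.sum(2*a)            # b takes no part in this graph
>>> grad = c.grad()
>>> grad.get(b, None) is None  # assumed dict.get-like fallback
True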
update(opt=None, models=())

Updates variables using the computed gradients.

If an optimizer instance is passed, gradients are rescaled with regard to the optimization algorithm before updating.

Parameters:
  • opt ( Optimizer ) – Algorithm for rescaling gradients.
  • models – List of models whose variables should be updated. When specified, variables that do not belong to one of the models are not updated.
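
A minimal usage sketch based only on the description above; the claim that update() with no optimizer applies the raw gradients directly is an assumption.

>>> import numpy as np
>>> import renom as rm
>>> x = rm.Variable(np.random.rand(2, 3))
>>> grad = rm.sum(x * x).grad()
>>> grad.update(rm.Sgd(lr=0.01))  # gradients are rescaled by Sgd before the update
>>> # grad.update() with no optimizer would apply raw gradients (assumption)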
class renom.core.Node(*args, **kwargs)

Bases: numpy.ndarray

This is the base class of all operation functions. The Node class inherits from the numpy ndarray class.

Example

>>> import numpy as np
>>> import renom as rm
>>> vx = rm.Variable(np.random.rand(3, 2))
>>> isinstance(vx, rm.Node)
True
to_cpu()

Sends the data on the GPU device to the CPU.

to_gpu()

Sends the data on the CPU to the GPU device.
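
A round-trip sketch for the two methods above; it assumes a CUDA-enabled ReNom build with an available GPU, and that GPU execution has been activated beforehand.

>>> import numpy as np
>>> import renom as rm
>>> v = rm.Variable(np.random.rand(2, 2))
>>> v.to_gpu()   # copy the data to GPU memory (requires a CUDA build)
>>> v.to_cpu()   # copy the data back to host memory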

as_ndarray()

This method returns itself as an ndarray object.
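
A short illustration; the printed type assumes a plain numpy.ndarray is returned, as the description states.

>>> import numpy as np
>>> import renom as rm
>>> v = rm.Variable(np.random.rand(2, 2))
>>> type(v.as_ndarray())
<class 'numpy.ndarray'>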

release_gpu()

This method releases memory on the GPU.

grad(initial=None, detach_graph=True, **kwargs)

This method traverses the computational graph and returns the gradients of the Variable objects in it.

Parameters:
  • initial ( ndarray ) – Initial gradient value used to start traversing the graph.
  • detach_graph ( boolean ) – If True, the computational graph is destroyed after the gradients are computed.
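
As a sketch of the detach_graph flag (the Sgd example further down uses it the same way), keeping the graph alive should allow grad to be called more than once on the same result; with the default detach_graph=True, the graph is gone after the first call.

>>> import numpy as np
>>> import renom as rm
>>> x = rm.Variable(np.random.rand(2, 3))
>>> y = rm.sum(x)
>>> dx = y.grad(detach_graph=False).get(x)  # graph is kept alive
>>> dx = y.grad().get(x)                    # graph is destroyed afterwards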
detach_graph()

This method destroys the computational graph.

class renom.core.Variable(*args, **kwargs)

Bases: renom.core.Node

Variable class.

The gradient of this object will be calculated. A Variable object is created from an ndarray or a Number object.

Parameters:
  • value ( Variable,ndarray ) – Input array.
  • auto_update ( bool ) – Auto update flag.

Example

>>> import numpy as np
>>> import renom as rm
>>> x = np.array([1., -1.])
>>> rm.Variable(x)
Variable([ 1., -1.], dtype=float32)
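
A sketch of the auto_update flag; based on its name and the update description above, the assumption is that a Variable created with auto_update=False is treated as a constant and skipped by Grads.update.

>>> w = rm.Variable(np.random.rand(2, 2))                     # will be updated
>>> c = rm.Variable(np.random.rand(2, 2), auto_update=False)  # assumed constant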
class renom.operation.reshape(*args, **kwargs)

Bases: renom.core.Node

This function reshapes the input array.

Parameters:
  • array ( Variable ) – Input array.
  • shape ( tuple ) – The new shape of the array.

Example

>>> import renom as rm
>>> import numpy as np
>>> x = rm.Variable(np.arange(6))
>>> x.shape
(6,)
>>> y = rm.reshape(x, (2, 3))
>>> y.shape
(2, 3)
class renom.operation.sum(*args, **kwargs)

Bases: renom.core.Node

This function sums up the elements of a matrix. In the current version (2.0), only summation along the 1st axis and summation of all elements are supported.

Parameters:
  • array ( Variable ) – Input array.
  • axis ( int ) – Summing up along specified axis.

Example

>>> import numpy as np
>>> import renom as rm
>>>
>>> x = np.random.rand(2, 3)
>>> z = rm.sum(x)
>>> z
sum(3.21392822265625, dtype=float32)
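
Continuing the session above with the documented axis parameter; this assumes "1st axis" means axis=0, so the result has the shape numpy's sum would give.

>>> z0 = rm.sum(x, axis=0)   # sum along the 1st axis
>>> z0.shape
(3,)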
class renom.operation.dot(*args, **kwargs)

Bases: renom.core.BinOp

This function computes the dot product of the two matrices.

Parameters:
  • lhs ( Variable,ndarray ) – Input array.
  • rhs ( Variable,ndarray ) – Input array.

Example

>>> import numpy as np
>>> import renom as rm
>>>
>>> x = np.random.rand(2, 3)
>>> y = np.random.rand(2, 2)
>>> z = rm.dot(y, x)
>>> z
dot([[ 0.10709135,  0.15022227,  0.12853521],
     [ 0.30557284,  0.32320538,  0.26753256]], dtype=float32)
class renom.operation.concat(*args, **kwargs)

Bases: renom.core.BinOp

Joins a sequence of arrays along an existing axis. In the current version (2.0), only concatenation along the 2nd axis is supported.

Parameters:
  • lhs ( Variable,ndarray ) – Input array.
  • rhs ( Variable,ndarray ) – Input array.

Example

>>> import numpy as np
>>> import renom as rm
>>>
>>> x = np.random.rand(2, 3)
>>> y = np.random.rand(2, 2)
>>> z = rm.concat(x, y)
>>> z.shape
(2, 5)
>>> z
concat([[ 0.56989014,  0.50372809,  0.40573129,  0.17601326,  0.07233092],
        [ 0.09377897,  0.8510806 ,  0.78971916,  0.52481949,  0.06913455]], dtype=float32)
class renom.operation.where(*args, **kwargs)

Bases: renom.core.Node

Return elements, either from a or b, depending on condition.

Parameters:
  • condition ( Variable,ndarray ) – Condition array.
  • a ( Variable,ndarray ) – Input array.
  • b ( Variable,ndarray ) – Input array.

Example

>>> import numpy as np
>>> import renom as rm
>>>
>>> x = np.random.rand(2, 3)
>>> x
array([[ 0.56989017,  0.50372811,  0.4057313 ],
       [ 0.09377897,  0.85108059,  0.78971919]])
>>> z = rm.where(x > 0.5, x, 0)
>>> z
where([[ 0.56989014,  0.50372809,  0.        ],
       [ 0.        ,  0.8510806 ,  0.78971916]], dtype=float32)
class renom.operation.sqrt(*args, **kwargs)

Bases: renom.core.UnaryOp

Square root operation.

Parameters: arg ( Variable,ndarray ) – Input array.

Example

>>> import numpy as np
>>> import renom as rm
>>>
>>> x = np.random.rand(2, 3)
>>> x
array([[ 0.56989017,  0.50372811,  0.4057313 ],
       [ 0.09377897,  0.85108059,  0.78971919]])
>>> z = rm.sqrt(x)
>>> z
sqrt([[ 0.75491071,  0.70973808,  0.6369704 ],
      [ 0.30623353,  0.92254031,  0.88866144]], dtype=float32)
class renom.operation.log(*args, **kwargs)

Bases: renom.core.UnaryOp

Log operation.

Parameters: arg ( Variable,ndarray ) – Input array.
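
Example

No output is shown for log in the original docs; a minimal sketch mirroring the sqrt example above, using only the documented call.

>>> import numpy as np
>>> import renom as rm
>>> x = np.random.rand(2, 3)
>>> z = rm.log(x)    # element-wise logarithm (natural base assumed)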
class renom.operation.exp(*args, **kwargs)

Bases: renom.core.UnaryOp

Exponential operation.

Parameters: arg ( Variable,ndarray ) – Input array.
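
Example

Likewise for exp, a minimal sketch using only the documented call.

>>> import numpy as np
>>> import renom as rm
>>> x = np.random.rand(2, 3)
>>> z = rm.exp(x)    # element-wise exponential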
class renom.optimizer.Sgd(lr=0.1, momentum=0.4)

Bases: renom.optimizer.Optimizer

Stochastic Gradient Descent.

Parameters:
  • lr ( float ) – Learning rate.
  • momentum ( float ) – Momentum coefficient of optimization.

Example

>>> import numpy as np
>>> import renom as rm
>>> x = rm.Variable(np.random.rand(2, 3))
>>> x
Variable([[ 0.93283856,  0.44494787,  0.47652033],
          [ 0.04769089,  0.16719061,  0.52063918]], dtype=float32)
>>> a = 2
>>> opt = rm.Sgd(lr=0.1)    # Stochastic gradient descent algorithm
>>> y = rm.sum(a*x)
>>> dx = y.grad(detach_graph=False).get(x)
>>> dx
RMul([[ 2.,  2.,  2.],
      [ 2.,  2.,  2.]], dtype=float32)
>>> y.grad(detach_graph=False).update(opt)
>>> x
Variable([[ 0.73283857,  0.24494787,  0.27652031],
          [-0.1523091 , -0.03280939,  0.32063919]], dtype=float32)
class renom.optimizer.Adagrad(lr=0.01, epsilon=1e-08)

Bases: renom.optimizer.Optimizer

Adaptive gradient algorithm.

Parameters:
  • lr ( float ) – Learning rate.
  • epsilon ( float ) – Small constant added to the denominator to avoid division by zero.
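
The docstring states no update rule; for reference, the textbook Adagrad update written with the parameter names above is shown here, under the assumption that the implementation follows the standard form.

\begin{split}
r_{t+1} &= r_{t} + \nabla E^2 \\
w_{t+1} &= w_{t} - \frac{lr}{\sqrt{r_{t+1}} + \epsilon}\nabla E
\end{split}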
class renom.optimizer.Rmsprop(lr=0.001, g=0.9, epsilon=1e-08)

Bases: renom.optimizer.Optimizer

Rmsprop, described by the following formula:

\begin{split}
m_{t+1} &= g m_{t} + (1-g)\nabla E^2 \\
r_{t} &= \frac{lr}{\sqrt{m_{t+1}} + \epsilon} \\
w_{t+1} &= w_{t} - r_{t}\nabla E
\end{split}
Parameters:
  • lr ( float ) – Learning rate.
  • g ( float ) – Decay coefficient of the moving average of squared gradients.
  • epsilon ( float ) – Small constant added to the denominator to avoid division by zero.
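
Example

A usage sketch following the Sgd example above, using only documented API.

>>> import numpy as np
>>> import renom as rm
>>> x = rm.Variable(np.random.rand(2, 3))
>>> opt = rm.Rmsprop(lr=0.001, g=0.9)
>>> rm.sum(x * x).grad().update(opt)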
class renom.optimizer.Adam(lr=0.001, g=0.999, b=0.9, epsilon=1e-08)

Bases: renom.optimizer.Optimizer

Adaptive moment estimation, described by the following formula (writing the learning rate lr for the step size):

\begin{split}
m_{t+1} &= b m_{t} + \nabla E \\
n_{t+1} &= g n_{t} + \nabla E^2 \\
\hat{m}_{t+1} &= \frac{m_{t+1}}{1-b^{t+1}} \\
\hat{n}_{t+1} &= \frac{n_{t+1}}{1-g^{t+1}} \\
w_{t+1} &= w_{t} - \frac{lr \, \hat{m}_{t+1}}{\sqrt{\hat{n}_{t+1}} + \epsilon}
\end{split}
Parameters:
  • lr ( float ) – Learning rate.
  • g ( float ) – Decay coefficient of the second-moment (squared gradient) estimate.
  • b ( float ) – Decay coefficient of the first-moment (gradient) estimate.
  • epsilon ( float ) – Small constant added to the denominator to avoid division by zero.
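
Example

A usage sketch in the same style as the Sgd example, using only documented API.

>>> import numpy as np
>>> import renom as rm
>>> x = rm.Variable(np.random.rand(2, 3))
>>> opt = rm.Adam(lr=0.001)
>>> rm.sum(x * x).grad().update(opt)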