renom_img.api.segmentation

class FCN16s(class_map=[], imsize=(224, 224), load_pretrained_weight=False, train_whole_network=False)
Bases: renom_img.api.segmentation.fcn.FCN_Base
Fully convolutional network (16s) for semantic segmentation.
Parameters:
  class_map (array) – Array of class names.
  imsize (int or tuple) – Input image size.
  load_pretrained_weight (bool, str) – True if pretrained weights are used, otherwise False.
  train_whole_network (bool) – True if the whole network is trained, otherwise False.
Example
>>> import renom as rm
>>> import numpy as np
>>> from renom_img.api.segmentation.fcn import FCN16s
>>> n, c, h, w = (2, 12, 64, 64)
>>> x = rm.Variable(np.random.rand(n, c, h, w))
>>> model = FCN16s()
>>> t = model(x)
>>> t.shape
(2, 12, 64, 64)
References
Jonathan Long, Evan Shelhamer, Trevor Darrell. Fully Convolutional Networks for Semantic Segmentation.

fit(train_img_path_list=None, train_annotation_list=None, valid_img_path_list=None, valid_annotation_list=None, epoch=136, batch_size=64, augmentation=None, callback_end_epoch=None, class_weight=False)
This function performs training with the given data and hyperparameters.
Parameters:
  train_img_path_list (list) – List of image paths.
  train_annotation_list (list) – List of annotations.
  valid_img_path_list (list) – List of image paths for validation.
  valid_annotation_list (list) – List of annotations for validation.
  epoch (int) – Number of training epochs.
  batch_size (int) – Batch size.
  augmentation (Augmentation) – Augmentation object.
  callback_end_epoch (function) – The given function will be called at the end of each epoch.
Returns: Training loss list and validation loss list.
Return type: (tuple)
Example
>>> train_img_path_list, train_annot_list = ...  # Define own data.
>>> valid_img_path_list, valid_annot_list = ...
>>> model = FCN16s()  # Any algorithm provided by ReNomIMG can be used here.
>>> model.fit(
...     # Feeds image and annotation data.
...     train_img_path_list,
...     train_annot_list,
...     valid_img_path_list,
...     valid_annot_list,
...     epoch=8,
...     batch_size=8)
The following arguments are passed to callback_end_epoch:
  epoch (int) – Number of the current epoch.
  model (Model) – Model object.
  avg_train_loss_list (list) – List of average training loss for each epoch.
  avg_valid_loss_list (list) – List of average validation loss for each epoch.
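A minimal callback can be sketched as below. The parameter names follow the list above; whether ReNomIMG passes them positionally or by keyword is an assumption here.

```python
# Sketch of a callback_end_epoch function. The signature mirrors the
# documented arguments (epoch, model, loss histories); the calling
# convention is assumed, not confirmed by the docs.
def log_epoch(epoch, model, avg_train_loss_list, avg_valid_loss_list):
    """Print the latest average losses at the end of each epoch."""
    train_loss = avg_train_loss_list[-1]
    valid_loss = avg_valid_loss_list[-1]
    print("epoch %d: train=%.4f valid=%.4f" % (epoch, train_loss, valid_loss))

# It would then be passed as:
#   model.fit(..., callback_end_epoch=log_epoch)
```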

forward(x)
Performs forward propagation. This function can also be invoked through the __call__ method.
Parameters: x (ndarray, Node) – Input to FCN16s.

get_optimizer(current_epoch=None, total_epoch=None, current_batch=None, total_batch=None, **kwargs)
Returns an Optimizer instance for training the FCN16s algorithm. If all of the arguments (current_epoch, total_epoch, current_batch, total_batch) are given, the learning rate is scheduled according to the number of training iterations; otherwise a constant learning rate is used.
Parameters:
  current_epoch (int) – Number of the current epoch.
  total_epoch (int) – Total number of epochs.
  current_batch (int) – Number of the current batch.
  total_batch (int) – Total number of batches.
Returns: Optimizer object.
Return type: (Optimizer)
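The two modes described above (scheduled vs. constant learning rate) can be illustrated with a generic sketch. The linear-decay rule below is purely illustrative; it is not the actual schedule ReNomIMG implements.

```python
def scheduled_lr(base_lr, current_epoch=None, total_epoch=None,
                 current_batch=None, total_batch=None):
    """Return a learning rate for the current training iteration.

    If any progress argument is missing, fall back to the constant base
    rate, mirroring the behavior documented for get_optimizer. The
    linear decay used here is an illustrative placeholder.
    """
    args = (current_epoch, total_epoch, current_batch, total_batch)
    if any(a is None for a in args):
        return base_lr
    # Fraction of all training iterations completed so far.
    progress = (current_epoch * total_batch + current_batch) / float(
        total_epoch * total_batch)
    return base_lr * (1.0 - progress)
```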

loss(x, y)
Loss function of FCN16s algorithm.
Parameters:  x ( ndarray , Node ) – Output of model.
 y ( ndarray , Node ) – Target array.
Returns: Loss between x and y.
Return type: (Node)

predict(img_list)
Returns: If a single image or path is given, an array of shape (width, height) is returned. If multiple images or paths are given, a list of arrays of shape (width, height) is returned.
Return type: (numpy.ndarray or list)

preprocess(x)
Performs preprocessing on a given array.
Parameters: x (ndarray, Node) – Image array to preprocess.
Preprocessing for FCN subtracts the following per-channel mean values:
\[\begin{split}x_{red} = 123.68 \\ x_{green} = 116.779 \\ x_{blue} = 103.939\end{split}\]
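The subtraction can be reproduced directly in NumPy. This is a sketch of the operation itself, not a call into ReNomIMG; channel order R, G, B along axis 1 of an (N, C, H, W) batch is assumed.

```python
import numpy as np

# Per-channel values from the formula above (the standard VGG/ImageNet means),
# shaped so they broadcast over an (N, C, H, W) batch.
MEANS = np.array([123.68, 116.779, 103.939]).reshape(1, 3, 1, 1)

def fcn_preprocess(x):
    """Subtract the per-channel means from an (N, C, H, W) image batch.

    Assumes channels are ordered R, G, B along axis 1.
    """
    return x - MEANS

x = np.full((2, 3, 4, 4), 128.0)  # dummy batch of constant-gray images
out = fcn_preprocess(x)
```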

regularize()
Returns the regularization term to be added to a loss function.
Example
>>> import numpy
>>> x = numpy.random.rand(1, 3, 224, 224)
>>> y = numpy.random.rand(1, (5*2+20)*7*7)
>>> model = FCN16s()
>>> loss = model.loss(x, y)
>>> reg_loss = loss + model.regularize()  # Add weight decay term.

class FCN32s(class_map=[], imsize=(224, 224), load_pretrained_weight=False, train_whole_network=False)
Bases: renom_img.api.segmentation.fcn.FCN_Base
Fully convolutional network (32s) for semantic segmentation.
Parameters:
  class_map (array) – Array of class names.
  imsize (int or tuple) – Input image size.
  load_pretrained_weight (bool, str) – True if pretrained weights are used, otherwise False.
  train_whole_network (bool) – True if the whole network is trained, otherwise False.
Example
>>> import renom as rm
>>> import numpy as np
>>> from renom_img.api.segmentation.fcn import FCN32s
>>> n, c, h, w = (2, 12, 64, 64)
>>> x = rm.Variable(np.random.rand(n, c, h, w))
>>> model = FCN32s()
>>> t = model(x)
>>> t.shape
(2, 12, 64, 64)
References
Jonathan Long, Evan Shelhamer, Trevor Darrell. Fully Convolutional Networks for Semantic Segmentation.

fit(train_img_path_list=None, train_annotation_list=None, valid_img_path_list=None, valid_annotation_list=None, epoch=136, batch_size=64, augmentation=None, callback_end_epoch=None, class_weight=False)
This function performs training with the given data and hyperparameters.
Parameters:
  train_img_path_list (list) – List of image paths.
  train_annotation_list (list) – List of annotations.
  valid_img_path_list (list) – List of image paths for validation.
  valid_annotation_list (list) – List of annotations for validation.
  epoch (int) – Number of training epochs.
  batch_size (int) – Batch size.
  augmentation (Augmentation) – Augmentation object.
  callback_end_epoch (function) – The given function will be called at the end of each epoch.
Returns: Training loss list and validation loss list.
Return type: (tuple)
Example
>>> train_img_path_list, train_annot_list = ...  # Define own data.
>>> valid_img_path_list, valid_annot_list = ...
>>> model = FCN32s()  # Any algorithm provided by ReNomIMG can be used here.
>>> model.fit(
...     # Feeds image and annotation data.
...     train_img_path_list,
...     train_annot_list,
...     valid_img_path_list,
...     valid_annot_list,
...     epoch=8,
...     batch_size=8)
The following arguments are passed to callback_end_epoch:
  epoch (int) – Number of the current epoch.
  model (Model) – Model object.
  avg_train_loss_list (list) – List of average training loss for each epoch.
  avg_valid_loss_list (list) – List of average validation loss for each epoch.

forward(x)
Performs forward propagation. This function can also be invoked through the __call__ method.
Parameters: x (ndarray, Node) – Input to FCN32s.

get_optimizer(current_epoch=None, total_epoch=None, current_batch=None, total_batch=None, **kwargs)
Returns an Optimizer instance for training the FCN32s algorithm. If all of the arguments (current_epoch, total_epoch, current_batch, total_batch) are given, the learning rate is scheduled according to the number of training iterations; otherwise a constant learning rate is used.
Parameters:
  current_epoch (int) – Number of the current epoch.
  total_epoch (int) – Total number of epochs.
  current_batch (int) – Number of the current batch.
  total_batch (int) – Total number of batches.
Returns: Optimizer object.
Return type: (Optimizer)

loss(x, y)
Loss function of FCN32s algorithm.
Parameters:  x ( ndarray , Node ) – Output of model.
 y ( ndarray , Node ) – Target array.
Returns: Loss between x and y.
Return type: (Node)

predict(img_list)
Returns: If a single image or path is given, an array of shape (width, height) is returned. If multiple images or paths are given, a list of arrays of shape (width, height) is returned.
Return type: (numpy.ndarray or list)

preprocess(x)
Performs preprocessing on a given array.
Parameters: x (ndarray, Node) – Image array to preprocess.
Preprocessing for FCN subtracts the following per-channel mean values:
\[\begin{split}x_{red} = 123.68 \\ x_{green} = 116.779 \\ x_{blue} = 103.939\end{split}\]

regularize()
Returns the regularization term to be added to a loss function.
Example
>>> import numpy
>>> x = numpy.random.rand(1, 3, 224, 224)
>>> y = numpy.random.rand(1, (5*2+20)*7*7)
>>> model = FCN32s()
>>> loss = model.loss(x, y)
>>> reg_loss = loss + model.regularize()  # Add weight decay term.

class FCN8s(class_map=[], imsize=(224, 224), load_pretrained_weight=False, train_whole_network=False)
Bases: renom_img.api.segmentation.fcn.FCN_Base
Fully convolutional network (8s) for semantic segmentation.
Parameters:
  class_map (array) – Array of class names.
  imsize (int or tuple) – Input image size.
  load_pretrained_weight (bool, str) – True if pretrained weights are used, otherwise False.
  train_whole_network (bool) – True if the whole network is trained, otherwise False.
Example
>>> import renom as rm
>>> import numpy as np
>>> from renom_img.api.segmentation.fcn import FCN8s
>>> n, c, h, w = (2, 12, 64, 64)
>>> x = rm.Variable(np.random.rand(n, c, h, w))
>>> model = FCN8s()
>>> t = model(x)
>>> t.shape
(2, 12, 64, 64)
References
Jonathan Long, Evan Shelhamer, Trevor Darrell. Fully Convolutional Networks for Semantic Segmentation.

fit(train_img_path_list=None, train_annotation_list=None, valid_img_path_list=None, valid_annotation_list=None, epoch=136, batch_size=64, augmentation=None, callback_end_epoch=None, class_weight=False)
This function performs training with the given data and hyperparameters.
Parameters:
  train_img_path_list (list) – List of image paths.
  train_annotation_list (list) – List of annotations.
  valid_img_path_list (list) – List of image paths for validation.
  valid_annotation_list (list) – List of annotations for validation.
  epoch (int) – Number of training epochs.
  batch_size (int) – Batch size.
  augmentation (Augmentation) – Augmentation object.
  callback_end_epoch (function) – The given function will be called at the end of each epoch.
Returns: Training loss list and validation loss list.
Return type: (tuple)
Example
>>> train_img_path_list, train_annot_list = ...  # Define own data.
>>> valid_img_path_list, valid_annot_list = ...
>>> model = FCN8s()  # Any algorithm provided by ReNomIMG can be used here.
>>> model.fit(
...     # Feeds image and annotation data.
...     train_img_path_list,
...     train_annot_list,
...     valid_img_path_list,
...     valid_annot_list,
...     epoch=8,
...     batch_size=8)
The following arguments are passed to callback_end_epoch:
  epoch (int) – Number of the current epoch.
  model (Model) – Model object.
  avg_train_loss_list (list) – List of average training loss for each epoch.
  avg_valid_loss_list (list) – List of average validation loss for each epoch.

forward(x)
Performs forward propagation. This function can also be invoked through the __call__ method.
Parameters: x (ndarray, Node) – Input to FCN8s.

get_optimizer(current_epoch=None, total_epoch=None, current_batch=None, total_batch=None, **kwargs)
Returns an Optimizer instance for training the FCN8s algorithm. If all of the arguments (current_epoch, total_epoch, current_batch, total_batch) are given, the learning rate is scheduled according to the number of training iterations; otherwise a constant learning rate is used.
Parameters:
  current_epoch (int) – Number of the current epoch.
  total_epoch (int) – Total number of epochs.
  current_batch (int) – Number of the current batch.
  total_batch (int) – Total number of batches.
Returns: Optimizer object.
Return type: (Optimizer)

loss(x, y)
Loss function of FCN8s algorithm.
Parameters:  x ( ndarray , Node ) – Output of model.
 y ( ndarray , Node ) – Target array.
Returns: Loss between x and y.
Return type: (Node)

predict(img_list)
Returns: If a single image or path is given, an array of shape (width, height) is returned. If multiple images or paths are given, a list of arrays of shape (width, height) is returned.
Return type: (numpy.ndarray or list)

preprocess(x)
Performs preprocessing on a given array.
Parameters: x (ndarray, Node) – Image array to preprocess.
Preprocessing for FCN subtracts the following per-channel mean values:
\[\begin{split}x_{red} = 123.68 \\ x_{green} = 116.779 \\ x_{blue} = 103.939\end{split}\]

regularize()
Returns the regularization term to be added to a loss function.
Example
>>> import numpy
>>> x = numpy.random.rand(1, 3, 224, 224)
>>> y = numpy.random.rand(1, (5*2+20)*7*7)
>>> model = FCN8s()
>>> loss = model.loss(x, y)
>>> reg_loss = loss + model.regularize()  # Add weight decay term.

class UNet(class_map=[], imsize=(512, 512), load_pretrained_weight=False, train_whole_network=False)
Bases: renom_img.api.segmentation.SemanticSegmentation
UNet: Convolutional Networks for Biomedical Image Segmentation.
Parameters:
  class_map (array) – Array of class names.
  imsize (int or tuple) – Input image size.
  load_pretrained_weight (bool, str) – True if pretrained weights are used, otherwise False.
  train_whole_network (bool) – True if the whole network is trained, otherwise False.
Example
>>> import renom as rm
>>> import numpy as np
>>> from renom_img.api.segmentation.unet import UNet
>>> n, c, h, w = (2, 12, 64, 64)
>>> x = rm.Variable(np.random.rand(n, c, h, w))
>>> model = UNet()
>>> t = model(x)
>>> t.shape
(2, 12, 64, 64)
References
Olaf Ronneberger, Philipp Fischer, Thomas Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation.

fit(train_img_path_list=None, train_annotation_list=None, valid_img_path_list=None, valid_annotation_list=None, epoch=136, batch_size=64, augmentation=None, callback_end_epoch=None, class_weight=False)
This function performs training with the given data and hyperparameters.
Parameters:
  train_img_path_list (list) – List of image paths.
  train_annotation_list (list) – List of annotations.
  valid_img_path_list (list) – List of image paths for validation.
  valid_annotation_list (list) – List of annotations for validation.
  epoch (int) – Number of training epochs.
  batch_size (int) – Batch size.
  augmentation (Augmentation) – Augmentation object.
  callback_end_epoch (function) – The given function will be called at the end of each epoch.
Returns: Training loss list and validation loss list.
Return type: (tuple)
Example
>>> train_img_path_list, train_annot_list = ...  # Define own data.
>>> valid_img_path_list, valid_annot_list = ...
>>> model = UNet()  # Any algorithm provided by ReNomIMG can be used here.
>>> model.fit(
...     # Feeds image and annotation data.
...     train_img_path_list,
...     train_annot_list,
...     valid_img_path_list,
...     valid_annot_list,
...     epoch=8,
...     batch_size=8)
The following arguments are passed to callback_end_epoch:
  epoch (int) – Number of the current epoch.
  model (Model) – Model object.
  avg_train_loss_list (list) – List of average training loss for each epoch.
  avg_valid_loss_list (list) – List of average validation loss for each epoch.

forward(x)
Performs forward propagation. This function can also be invoked through the __call__ method.
Parameters: x (ndarray, Node) – Input to UNet.

get_optimizer(current_epoch=None, total_epoch=None, current_batch=None, total_batch=None, **kwargs)
Returns an Optimizer instance for training the UNet algorithm. If all of the arguments (current_epoch, total_epoch, current_batch, total_batch) are given, the learning rate is scheduled according to the number of training iterations; otherwise a constant learning rate is used.
Parameters:
  current_epoch (int) – Number of the current epoch.
  total_epoch (int) – Total number of epochs.
  current_batch (int) – Number of the current batch.
  total_batch (int) – Total number of batches.
Returns: Optimizer object.
Return type: (Optimizer)

loss(x, y)
Loss function of UNet algorithm.
Parameters:  x ( ndarray , Node ) – Output of model.
 y ( ndarray , Node ) – Target array.
Returns: Loss between x and y.
Return type: (Node)

predict(img_list)
Returns: If a single image or path is given, an array of shape (width, height) is returned. If multiple images or paths are given, a list of arrays of shape (width, height) is returned.
Return type: (numpy.ndarray or list)

preprocess(x)
Performs preprocessing on a given array.
Parameters: x (ndarray, Node) – Image array to preprocess.
Image preprocessing for UNet rescales pixel values:
\(x_{new} = x / 255\)
Returns: Preprocessed data.
Return type: (ndarray)
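The rescaling above is a plain division by 255, shown here in NumPy as a sketch of the operation itself, independent of ReNomIMG (8-bit input values in [0, 255] are assumed):

```python
import numpy as np

def unet_preprocess(x):
    """Scale 8-bit pixel values in [0, 255] down to [0, 1]."""
    return x / 255.0

x = np.array([[0.0, 127.5, 255.0]])  # dummy pixel values
out = unet_preprocess(x)
```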

regularize()
Returns the regularization term to be added to a loss function.
Example
>>> import numpy
>>> x = numpy.random.rand(1, 3, 224, 224)
>>> y = numpy.random.rand(1, (5*2+20)*7*7)
>>> model = UNet()
>>> loss = model.loss(x, y)
>>> reg_loss = loss + model.regularize()  # Add weight decay term.

class TernausNet(class_map=[], imsize=(512, 512), load_pretrained_weight=False, train_whole_network=False)
Bases: renom_img.api.segmentation.SemanticSegmentation
TernausNet: U-Net with VGG11 Encoder Pre-Trained on ImageNet for Image Segmentation.
Parameters:
  class_map (array) – Array of class names.
  imsize (int or tuple) – Input image size.
  load_pretrained_weight (bool, str) – True if pretrained weights are used, otherwise False.
  train_whole_network (bool) – True if the whole network is trained, otherwise False.
Example
>>> import renom as rm
>>> import numpy as np
>>> from renom_img.api.segmentation.ternausnet import TernausNet
>>> n, c, h, w = (2, 12, 64, 64)
>>> x = rm.Variable(np.random.rand(n, c, h, w))
>>> model = TernausNet()
>>> t = model(x)
>>> t.shape
(2, 12, 64, 64)
References
Vladimir Iglovikov, Alexey Shvets. TernausNet: U-Net with VGG11 Encoder Pre-Trained on ImageNet for Image Segmentation.

fit(train_img_path_list=None, train_annotation_list=None, valid_img_path_list=None, valid_annotation_list=None, epoch=136, batch_size=64, augmentation=None, callback_end_epoch=None, class_weight=False)
This function performs training with the given data and hyperparameters.
Parameters:
  train_img_path_list (list) – List of image paths.
  train_annotation_list (list) – List of annotations.
  valid_img_path_list (list) – List of image paths for validation.
  valid_annotation_list (list) – List of annotations for validation.
  epoch (int) – Number of training epochs.
  batch_size (int) – Batch size.
  augmentation (Augmentation) – Augmentation object.
  callback_end_epoch (function) – The given function will be called at the end of each epoch.
Returns: Training loss list and validation loss list.
Return type: (tuple)
Example
>>> train_img_path_list, train_annot_list = ...  # Define own data.
>>> valid_img_path_list, valid_annot_list = ...
>>> model = TernausNet()  # Any algorithm provided by ReNomIMG can be used here.
>>> model.fit(
...     # Feeds image and annotation data.
...     train_img_path_list,
...     train_annot_list,
...     valid_img_path_list,
...     valid_annot_list,
...     epoch=8,
...     batch_size=8)
The following arguments are passed to callback_end_epoch:
  epoch (int) – Number of the current epoch.
  model (Model) – Model object.
  avg_train_loss_list (list) – List of average training loss for each epoch.
  avg_valid_loss_list (list) – List of average validation loss for each epoch.

forward(x)
Performs forward propagation. This function can also be invoked through the __call__ method.
Parameters: x (ndarray, Node) – Input to TernausNet.

get_optimizer(current_epoch=None, total_epoch=None, current_batch=None, total_batch=None, **kwargs)
Returns an Optimizer instance for training the TernausNet algorithm. If all of the arguments (current_epoch, total_epoch, current_batch, total_batch) are given, the learning rate is scheduled according to the number of training iterations; otherwise a constant learning rate is used.
Parameters:
  current_epoch (int) – Number of the current epoch.
  total_epoch (int) – Total number of epochs.
  current_batch (int) – Number of the current batch.
  total_batch (int) – Total number of batches.
Returns: Optimizer object.
Return type: (Optimizer)

loss(x, y)
Loss function of TernausNet algorithm.
Parameters:  x ( ndarray , Node ) – Output of model.
 y ( ndarray , Node ) – Target array.
Returns: Loss between x and y.
Return type: (Node)

predict(img_list)
Returns: If a single image or path is given, an array of shape (width, height) is returned. If multiple images or paths are given, a list of arrays of shape (width, height) is returned.
Return type: (numpy.ndarray or list)

preprocess(x)
Performs preprocessing on a given array.
Parameters: x (ndarray, Node) – Image array to preprocess.
Image preprocessing for TernausNet rescales pixel values:
\(x_{new} = x / 255\)
Returns: Preprocessed data.
Return type: (ndarray)

regularize()
Returns the regularization term to be added to a loss function.
Example
>>> import numpy
>>> x = numpy.random.rand(1, 3, 224, 224)
>>> y = numpy.random.rand(1, (5*2+20)*7*7)
>>> model = TernausNet()
>>> loss = model.loss(x, y)
>>> reg_loss = loss + model.regularize()  # Add weight decay term.