# renom_img.api.segmentation ¶

class FCN16s(class_map=None, train_final_upscore=False, imsize=(224, 224), load_pretrained_weight=False, train_whole_network=False)

Bases:  renom_img.api.segmentation.SemanticSegmentation 

Fully convolutional network (16s) for semantic segmentation. The 16s variant fuses the coarse final-layer prediction with a skip connection from the pool4 layer, then upsamples the result by a factor of 16.

Parameters:
• class_map (list, dict) – List of class names.
• train_final_upscore (bool) – Whether or not to train the final upscore layer. If True, the final upscore layer is initialized to bilinear upsampling and made trainable. If False, the final upscore layer is fixed to bilinear upsampling.
• imsize (int, tuple) – Input image size.
• load_pretrained_weight (bool, str) – Whether or not to load pretrained weight values. If True, pretrained weights are downloaded to the current directory and loaded as the initial weight values. If a string is given, weight values are loaded from the weight file of that name.
• train_whole_network (bool) – Whether to freeze or train the base encoder layers of the model during training. If True, all layers of the model are trained. If False, the convolutional encoder base is frozen during training.
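The bilinear upsampling initialization mentioned for the upscore layer can be sketched in plain NumPy. This is an illustrative helper, not part of the ReNomIMG API; the name make_bilinear_kernel is an assumption:

```python
import numpy as np

def make_bilinear_kernel(channels, kernel_size):
    """Weights for a transposed convolution that performs bilinear upsampling.

    Returns an array of shape (channels, channels, kernel_size, kernel_size)
    in which each channel is upsampled independently.
    """
    factor = (kernel_size + 1) // 2
    center = factor - 1 if kernel_size % 2 == 1 else factor - 0.5
    og = np.ogrid[:kernel_size, :kernel_size]
    # 2-D bilinear filter: product of two triangular 1-D filters.
    filt = (1 - abs(og[0] - center) / factor) * (1 - abs(og[1] - center) / factor)
    weight = np.zeros((channels, channels, kernel_size, kernel_size))
    for c in range(channels):
        weight[c, c] = filt
    return weight

w = make_bilinear_kernel(channels=2, kernel_size=4)
print(w.shape)  # (2, 2, 4, 4)
```

With train_final_upscore=False the upscore weights stay fixed at values like these; with True they merely start from them.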

Example

>>> from renom_img.api.segmentation.fcn import FCN16s
>>>
>>> class_map = ['cat', 'dog']
>>> model = FCN16s(class_map, train_final_upscore=False, imsize=(224,224), load_pretrained_weight=True, train_whole_network=True)


References

Jonathan Long, Evan Shelhamer, Trevor Darrell
Fully Convolutional Networks for Semantic Segmentation

fit(train_img_path_list=None, train_annotation_list=None, valid_img_path_list=None, valid_annotation_list=None, epoch=136, batch_size=64, optimizer=None, augmentation=None, callback_end_epoch=None, class_weight=None)

This function performs training with the given data and hyperparameters.

Parameters:
• train_img_path_list (list) – List of image paths.
• train_annotation_list (list) – List of annotations.
• valid_img_path_list (list) – List of image paths for validation.
• valid_annotation_list (list) – List of annotations for validation.
• epoch (int) – Number of training epochs.
• batch_size (int) – Batch size.
• augmentation (Augmentation) – Augmentation object.
• callback_end_epoch (function) – Given function will be called at the end of each epoch.

Returns: Training loss list and validation loss list. (tuple)

Example

>>> train_img_path_list, train_annot_list = ... # Define train data
>>> valid_img_path_list, valid_annot_list = ... # Define validation data
>>> class_map = ... # Define class map
>>> model = FCN16s(class_map) # Specify any algorithm provided by ReNomIMG API here
>>> model.fit(
...     # Feeds image and annotation data
...     train_img_path_list,
...     train_annot_list,
...     valid_img_path_list,
...     valid_annot_list,
...     epoch=8,
...     batch_size=8)
>>>


The following arguments will be given to the function  callback_end_epoch  .

• epoch (int) - Current epoch number.
• model (Model) - Model object.
• avg_train_loss_list (list) - List of average train loss of each epoch.
• avg_valid_loss_list (list) - List of average valid loss of each epoch.
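As a sketch, a callback matching the argument list above might simply log progress. The body is entirely up to the user; only the signature is dictated by fit():

```python
def callback_end_epoch(epoch, model, avg_train_loss_list, avg_valid_loss_list):
    """Called by fit() at the end of each epoch with the loss history so far."""
    train_loss = avg_train_loss_list[-1]  # average train loss of the epoch just finished
    valid_loss = avg_valid_loss_list[-1]  # average valid loss of the epoch just finished
    print("epoch %d: train loss %.4f, valid loss %.4f" % (epoch, train_loss, valid_loss))
```

Pass it via model.fit(..., callback_end_epoch=callback_end_epoch).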
forward(x)

Performs forward propagation. You can call this function using the  __call__  method.

Parameters: x (ndarray, Node) – Input to FCN16s.

Returns: Raw output of FCN16s. (Node)

Example

>>> import numpy as np
>>> x = np.random.rand(1, 3, 224, 224)
>>>
>>> class_map = ["dog", "cat"]
>>> model = FCN16s(class_map)
>>>
>>> y = model.forward(x) # Forward propagation.
>>> y = model(x)  # Same as above result.

loss(x, y, class_weight=None)

Loss function of FCN16s algorithm.

Parameters: x (ndarray, Node) – Output of model. y (ndarray, Node) – Target array.

Returns: Loss between x and y. (Node)
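The optional class_weight argument typically scales the per-pixel cross-entropy by a per-class factor to counter class imbalance. A framework-agnostic NumPy sketch of that idea (not the ReNomIMG internals; weighted_cross_entropy is an illustrative name):

```python
import numpy as np

def weighted_cross_entropy(scores, target, class_weight=None):
    """Per-pixel softmax cross-entropy with optional per-class weights.

    scores: (N, C, H, W) raw model output; target: (N, C, H, W) one-hot labels.
    """
    # Numerically stable log-softmax over the class axis.
    z = scores - scores.max(axis=1, keepdims=True)
    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -(target * log_softmax)                      # (N, C, H, W)
    if class_weight is not None:
        ce *= np.asarray(class_weight).reshape(1, -1, 1, 1)
    return ce.sum() / scores.shape[0]                 # average over the batch

scores = np.zeros((1, 2, 1, 1))                       # uniform scores over 2 classes
target = np.array([[[[1.0]], [[0.0]]]])               # the pixel belongs to class 0
print(weighted_cross_entropy(scores, target))         # ln 2 ≈ 0.6931
```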
predict(img_list, batch_size=1)
 Returns: If only an image or a path is given, an array whose shape is (width, height) is returned. If multiple images or paths are given, a list with arrays whose shape is (width, height) is returned. (Numpy.array or list)
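A common follow-up to predict is colouring the returned label map for visualisation. This sketch assumes the returned array holds integer class indices and that the palette is supplied by the user (colorize is an illustrative name, not a ReNomIMG function):

```python
import numpy as np

def colorize(label_map, palette):
    """Map an integer label map (H, W) to an RGB image (H, W, 3)."""
    palette = np.asarray(palette, dtype=np.uint8)
    return palette[label_map]  # fancy indexing broadcasts the colours per pixel

label_map = np.array([[0, 1], [1, 0]])   # e.g. one array returned by model.predict(...)
palette = [(0, 0, 0), (255, 0, 0)]       # class 0 black, class 1 red
rgb = colorize(label_map, palette)
print(rgb.shape)  # (2, 2, 3)
```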
preprocess(x)

Performs preprocessing for a given array.

 Parameters: x ( ndarray , Node ) – Image array for preprocessing.
regularize()

Adds a regularization term to the loss function.

Example

>>> import numpy as np
>>> x = np.random.rand(1, 3, 224, 224)  # Input image
>>> y = ...  # Ground-truth label
>>>
>>> class_map = ['cat', 'dog']
>>> model = FCN16s(class_map)
>>>
>>> z = model(x)  # Forward propagation
>>> loss = model.loss(z, y)  # Loss calculation
>>> reg_loss = loss + model.regularize()  # Add weight decay term.

class FCN32s(class_map=None, train_final_upscore=False, imsize=(224, 224), load_pretrained_weight=False, train_whole_network=False)

Bases:  renom_img.api.segmentation.SemanticSegmentation 

Fully convolutional network (32s) for semantic segmentation. The 32s variant upsamples the coarse final-layer prediction by a factor of 32 in a single step, without skip connections.

Parameters:
• class_map (list, dict) – List of class names.
• train_final_upscore (bool) – Whether or not to train the final upscore layer. If True, the final upscore layer is initialized to bilinear upsampling and made trainable. If False, the final upscore layer is fixed to bilinear upsampling.
• imsize (int, tuple) – Input image size.
• load_pretrained_weight (bool, str) – Whether or not to load pretrained weight values. If True, pretrained weights are downloaded to the current directory and loaded as the initial weight values. If a string is given, weight values are loaded from the weight file of that name.
• train_whole_network (bool) – Whether to freeze or train the base encoder layers of the model during training. If True, all layers of the model are trained. If False, the convolutional encoder base is frozen during training.

Example

>>> from renom_img.api.segmentation.fcn import FCN32s
>>>
>>> class_map = ['cat', 'dog']
>>> model = FCN32s(class_map, train_final_upscore=False, imsize=(224,224), load_pretrained_weight=True, train_whole_network=True)


References

Jonathan Long, Evan Shelhamer, Trevor Darrell
Fully Convolutional Networks for Semantic Segmentation

fit(train_img_path_list=None, train_annotation_list=None, valid_img_path_list=None, valid_annotation_list=None, epoch=136, batch_size=64, optimizer=None, augmentation=None, callback_end_epoch=None, class_weight=None)

This function performs training with the given data and hyperparameters.

Parameters:
• train_img_path_list (list) – List of image paths.
• train_annotation_list (list) – List of annotations.
• valid_img_path_list (list) – List of image paths for validation.
• valid_annotation_list (list) – List of annotations for validation.
• epoch (int) – Number of training epochs.
• batch_size (int) – Batch size.
• augmentation (Augmentation) – Augmentation object.
• callback_end_epoch (function) – Given function will be called at the end of each epoch.

Returns: Training loss list and validation loss list. (tuple)

Example

>>> train_img_path_list, train_annot_list = ... # Define train data
>>> valid_img_path_list, valid_annot_list = ... # Define validation data
>>> class_map = ... # Define class map
>>> model = FCN32s(class_map) # Specify any algorithm provided by ReNomIMG API here
>>> model.fit(
...     # Feeds image and annotation data
...     train_img_path_list,
...     train_annot_list,
...     valid_img_path_list,
...     valid_annot_list,
...     epoch=8,
...     batch_size=8)
>>>


The following arguments will be given to the function  callback_end_epoch  .

• epoch (int) - Current epoch number.
• model (Model) - Model object.
• avg_train_loss_list (list) - List of average train loss of each epoch.
• avg_valid_loss_list (list) - List of average valid loss of each epoch.
forward(x)

Performs forward propagation. You can call this function using the  __call__  method.

Parameters: x (ndarray, Node) – Input to FCN32s.

Returns: Raw output of FCN32s. (Node)

Example

>>> import numpy as np
>>> x = np.random.rand(1, 3, 224, 224)
>>>
>>> class_map = ["dog", "cat"]
>>> model = FCN32s(class_map)
>>>
>>> y = model.forward(x) # Forward propagation.
>>> y = model(x)  # Same as above result.

loss(x, y, class_weight=None)

Loss function of FCN32s algorithm.

Parameters: x (ndarray, Node) – Output of model. y (ndarray, Node) – Target array.

Returns: Loss between x and y. (Node)
predict(img_list, batch_size=1)
 Returns: If only an image or a path is given, an array whose shape is (width, height) is returned. If multiple images or paths are given, a list with arrays whose shape is (width, height) is returned. (Numpy.array or list)
preprocess(x)

Performs preprocessing for a given array.

 Parameters: x ( ndarray , Node ) – Image array for preprocessing.
regularize()

Adds a regularization term to the loss function.

Example

>>> import numpy as np
>>> x = np.random.rand(1, 3, 224, 224)  # Input image
>>> y = ...  # Ground-truth label
>>>
>>> class_map = ['cat', 'dog']
>>> model = FCN32s(class_map)
>>>
>>> z = model(x)  # Forward propagation
>>> loss = model.loss(z, y)  # Loss calculation
>>> reg_loss = loss + model.regularize()  # Add weight decay term.

class FCN8s(class_map=None, train_final_upscore=False, imsize=(224, 224), load_pretrained_weight=False, train_whole_network=False)

Bases:  renom_img.api.segmentation.SemanticSegmentation 

Fully convolutional network (8s) for semantic segmentation. The 8s variant fuses skip connections from both the pool3 and pool4 layers with the coarse final-layer prediction, then upsamples the result by a factor of 8.

Parameters:
• class_map (list, dict) – List of class names.
• train_final_upscore (bool) – Whether or not to train the final upscore layer. If True, the final upscore layer is initialized to bilinear upsampling and made trainable. If False, the final upscore layer is fixed to bilinear upsampling.
• imsize (int, tuple) – Input image size.
• load_pretrained_weight (bool, str) – Whether or not to load pretrained weight values. If True, pretrained weights are downloaded to the current directory and loaded as the initial weight values. If a string is given, weight values are loaded from the weight file of that name.
• train_whole_network (bool) – Whether to freeze or train the base encoder layers of the model during training. If True, all layers of the model are trained. If False, the convolutional encoder base is frozen during training.

Example

>>> from renom_img.api.segmentation.fcn import FCN8s
>>>
>>> class_map = ['cat', 'dog']
>>> model = FCN8s(class_map, train_final_upscore=False, imsize=(224,224), load_pretrained_weight=True, train_whole_network=True)


References

Jonathan Long, Evan Shelhamer, Trevor Darrell
Fully Convolutional Networks for Semantic Segmentation

fit(train_img_path_list=None, train_annotation_list=None, valid_img_path_list=None, valid_annotation_list=None, epoch=136, batch_size=64, optimizer=None, augmentation=None, callback_end_epoch=None, class_weight=None)

This function performs training with the given data and hyperparameters.

Parameters:
• train_img_path_list (list) – List of image paths.
• train_annotation_list (list) – List of annotations.
• valid_img_path_list (list) – List of image paths for validation.
• valid_annotation_list (list) – List of annotations for validation.
• epoch (int) – Number of training epochs.
• batch_size (int) – Batch size.
• augmentation (Augmentation) – Augmentation object.
• callback_end_epoch (function) – Given function will be called at the end of each epoch.

Returns: Training loss list and validation loss list. (tuple)

Example

>>> train_img_path_list, train_annot_list = ... # Define train data
>>> valid_img_path_list, valid_annot_list = ... # Define validation data
>>> class_map = ... # Define class map
>>> model = FCN8s(class_map) # Specify any algorithm provided by ReNomIMG API here
>>> model.fit(
...     # Feeds image and annotation data
...     train_img_path_list,
...     train_annot_list,
...     valid_img_path_list,
...     valid_annot_list,
...     epoch=8,
...     batch_size=8)
>>>


The following arguments will be given to the function  callback_end_epoch  .

• epoch (int) - Current epoch number.
• model (Model) - Model object.
• avg_train_loss_list (list) - List of average train loss of each epoch.
• avg_valid_loss_list (list) - List of average valid loss of each epoch.
forward(x)

Performs forward propagation. You can call this function using the  __call__  method.

Parameters: x (ndarray, Node) – Input to FCN8s.

Returns: Raw output of FCN8s. (Node)

Example

>>> import numpy as np
>>> x = np.random.rand(1, 3, 224, 224)
>>>
>>> class_map = ["dog", "cat"]
>>> model = FCN8s(class_map)
>>>
>>> y = model.forward(x) # Forward propagation.
>>> y = model(x)  # Same as above result.

loss(x, y, class_weight=None)

Loss function of FCN8s algorithm.

Parameters: x (ndarray, Node) – Output of model. y (ndarray, Node) – Target array.

Returns: Loss between x and y. (Node)
predict(img_list, batch_size=1)
 Returns: If only an image or a path is given, an array whose shape is (width, height) is returned. If multiple images or paths are given, a list with arrays whose shape is (width, height) is returned. (Numpy.array or list)
preprocess(x)

Performs preprocessing for a given array.

 Parameters: x ( ndarray , Node ) – Image array for preprocessing.
regularize()

Adds a regularization term to the loss function.

Example

>>> import numpy as np
>>> x = np.random.rand(1, 3, 224, 224)  # Input image
>>> y = ...  # Ground-truth label
>>>
>>> class_map = ['cat', 'dog']
>>> model = FCN8s(class_map)
>>>
>>> z = model(x)  # Forward propagation
>>> loss = model.loss(z, y)  # Loss calculation
>>> reg_loss = loss + model.regularize()  # Add weight decay term.

class UNet(class_map=None, imsize=(256, 256), load_pretrained_weight=False, train_whole_network=False)

Bases:  renom_img.api.segmentation.SemanticSegmentation 

U-Net: Convolutional Networks for Biomedical Image Segmentation

Parameters:
• class_map (list, dict) – List of class names.
• imsize (int, tuple) – Input image size.
• load_pretrained_weight (bool, str) – Whether or not to load pretrained weight values. Pretrained weights are not available for U-Net, so this must be set to False (random initialization) or to a string naming a weight file provided by the user, in which case weight values are loaded from that file.
• train_whole_network (bool) – Whether to freeze or train the base encoder layers of the model during training. If True, all layers of the model are trained. If False, the convolutional encoder base is frozen during training.
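U-Net's defining operation is the skip connection: each decoder stage upsamples its feature map and concatenates it with the matching encoder feature map along the channel axis. A shapes-only NumPy sketch of one such merge (the sizes are illustrative, not ReNomIMG's actual layer widths):

```python
import numpy as np

# NCHW layout: one encoder skip connection merged with an upsampled decoder map.
encoder_feat = np.zeros((1, 64, 56, 56))   # feature map saved on the way down
decoder_feat = np.zeros((1, 128, 56, 56))  # decoder map after upsampling to 56x56
merged = np.concatenate([encoder_feat, decoder_feat], axis=1)  # channels add up
print(merged.shape)  # (1, 192, 56, 56)
```

The concatenated map then feeds the next decoder convolution, which is why spatial sizes must match at each merge point.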

Example

>>> from renom_img.api.segmentation.unet import UNet
>>>
>>> class_map = ['background', 'object']
>>> model = UNet(class_map, imsize=(224,224), load_pretrained_weight=False, train_whole_network=True)


References

Olaf Ronneberger, Philipp Fischer, Thomas Brox
U-Net: Convolutional Networks for Biomedical Image Segmentation

fit(train_img_path_list=None, train_annotation_list=None, valid_img_path_list=None, valid_annotation_list=None, epoch=136, batch_size=64, optimizer=None, augmentation=None, callback_end_epoch=None, class_weight=None)

This function performs training with the given data and hyperparameters.

Parameters:
• train_img_path_list (list) – List of image paths.
• train_annotation_list (list) – List of annotations.
• valid_img_path_list (list) – List of image paths for validation.
• valid_annotation_list (list) – List of annotations for validation.
• epoch (int) – Number of training epochs.
• batch_size (int) – Batch size.
• augmentation (Augmentation) – Augmentation object.
• callback_end_epoch (function) – Given function will be called at the end of each epoch.

Returns: Training loss list and validation loss list. (tuple)

Example

>>> train_img_path_list, train_annot_list = ... # Define train data
>>> valid_img_path_list, valid_annot_list = ... # Define validation data
>>> class_map = ... # Define class map
>>> model = UNet(class_map) # Specify any algorithm provided by ReNomIMG API here
>>> model.fit(
...     # Feeds image and annotation data
...     train_img_path_list,
...     train_annot_list,
...     valid_img_path_list,
...     valid_annot_list,
...     epoch=8,
...     batch_size=8)
>>>


The following arguments will be given to the function  callback_end_epoch  .

• epoch (int) - Current epoch number.
• model (Model) - Model object.
• avg_train_loss_list (list) - List of average train loss of each epoch.
• avg_valid_loss_list (list) - List of average valid loss of each epoch.
forward(x)

Performs forward propagation. You can call this function using the  __call__  method.

Parameters: x (ndarray, Node) – Input to UNet.

Returns: Raw output of UNet. (Node)

Example

>>> import numpy as np
>>> x = np.random.rand(1, 3, 224, 224)
>>>
>>> class_map = ["dog", "cat"]
>>> model = UNet(class_map)
>>>
>>> y = model.forward(x) # Forward propagation.
>>> y = model(x)  # Same as above result.

loss(x, y, class_weight=None)

Loss function of UNet algorithm.

Parameters: x (ndarray, Node) – Output of model. y (ndarray, Node) – Target array.

Returns: Loss between x and y. (Node)
predict(img_list, batch_size=1)
 Returns: If only an image or a path is given, an array whose shape is (width, height) is returned. If multiple images or paths are given, a list with arrays whose shape is (width, height) is returned. (Numpy.array or list)
preprocess(x)

Performs preprocessing for a given array.

 Parameters: x ( ndarray , Node ) – Image array for preprocessing.
regularize()

Adds a regularization term to the loss function.

Example

>>> import numpy as np
>>> x = np.random.rand(1, 3, 224, 224)  # Input image
>>> y = ...  # Ground-truth label
>>>
>>> class_map = ['cat', 'dog']
>>> model = UNet(class_map)
>>>
>>> z = model(x)  # Forward propagation
>>> loss = model.loss(z, y)  # Loss calculation
>>> reg_loss = loss + model.regularize()  # Add weight decay term.

class TernausNet(class_map=None, imsize=(224, 224), load_pretrained_weight=False, train_whole_network=False)

Bases:  renom_img.api.segmentation.SemanticSegmentation 

TernausNet: U-Net with VGG11 Encoder Pre-Trained on ImageNet for Image Segmentation

Parameters:
• class_map (list, dict) – List of class names.
• imsize (int, tuple) – Input image size.
• load_pretrained_weight (bool, str) – Whether or not to load pretrained weight values. If True, pretrained weights are downloaded to the current directory and loaded as the initial weight values. If a string is given, weight values are loaded from the weight file of that name.
• train_whole_network (bool) – Whether to freeze or train the base encoder layers of the model during training. If True, all layers of the model are trained. If False, the convolutional encoder base is frozen during training.

Example

>>> from renom_img.api.segmentation.ternausnet import TernausNet
>>>
>>> class_map = ['background', 'object']
>>> model = TernausNet(class_map, imsize=(224,224), load_pretrained_weight=True, train_whole_network=True)


References

Vladimir Iglovikov, Alexey Shvets
TernausNet: U-Net with VGG11 Encoder Pre-Trained on ImageNet for Image Segmentation

fit(train_img_path_list=None, train_annotation_list=None, valid_img_path_list=None, valid_annotation_list=None, epoch=136, batch_size=64, optimizer=None, augmentation=None, callback_end_epoch=None, class_weight=None)

This function performs training with the given data and hyperparameters.

Parameters:
• train_img_path_list (list) – List of image paths.
• train_annotation_list (list) – List of annotations.
• valid_img_path_list (list) – List of image paths for validation.
• valid_annotation_list (list) – List of annotations for validation.
• epoch (int) – Number of training epochs.
• batch_size (int) – Batch size.
• augmentation (Augmentation) – Augmentation object.
• callback_end_epoch (function) – Given function will be called at the end of each epoch.

Returns: Training loss list and validation loss list. (tuple)

Example

>>> train_img_path_list, train_annot_list = ... # Define train data
>>> valid_img_path_list, valid_annot_list = ... # Define validation data
>>> class_map = ... # Define class map
>>> model = TernausNet(class_map) # Specify any algorithm provided by ReNomIMG API here
>>> model.fit(
...     # Feeds image and annotation data
...     train_img_path_list,
...     train_annot_list,
...     valid_img_path_list,
...     valid_annot_list,
...     epoch=8,
...     batch_size=8)
>>>


The following arguments will be given to the function  callback_end_epoch  .

• epoch (int) - Current epoch number.
• model (Model) - Model object.
• avg_train_loss_list (list) - List of average train loss of each epoch.
• avg_valid_loss_list (list) - List of average valid loss of each epoch.
forward(x)

Performs forward propagation. You can call this function using the  __call__  method.

Parameters: x (ndarray, Node) – Input to TernausNet.

Returns: Raw output of TernausNet. (Node)

Example

>>> import numpy as np
>>> x = np.random.rand(1, 3, 224, 224)
>>>
>>> class_map = ["dog", "cat"]
>>> model = TernausNet(class_map)
>>>
>>> y = model.forward(x) # Forward propagation.
>>> y = model(x)  # Same as above result.

loss(x, y, class_weight=None)

Loss function of TernausNet algorithm.

Parameters: x (ndarray, Node) – Output of model. y (ndarray, Node) – Target array.

Returns: Loss between x and y. (Node)
predict(img_list, batch_size=1)
 Returns: If only an image or a path is given, an array whose shape is (width, height) is returned. If multiple images or paths are given, a list with arrays whose shape is (width, height) is returned. (Numpy.array or list)
preprocess(x)

Performs preprocessing for a given array.

 Parameters: x ( ndarray , Node ) – Image array for preprocessing.
regularize()

Adds a regularization term to the loss function.

Example

>>> import numpy as np
>>> x = np.random.rand(1, 3, 224, 224)  # Input image
>>> y = ...  # Ground-truth label
>>>
>>> class_map = ['cat', 'dog']
>>> model = TernausNet(class_map)
>>>
>>> z = model(x)  # Forward propagation
>>> loss = model.loss(z, y)  # Loss calculation
>>> reg_loss = loss + model.regularize()  # Add weight decay term.

class Deeplabv3plus(class_map=None, imsize=(321, 321), scale_factor=16, atrous_rates=[6, 12, 18], lr_initial=0.007, lr_power=0.9, load_pretrained_weight=False, train_whole_network=False)

Bases:  renom_img.api.segmentation.SemanticSegmentation 

Deeplabv3+ model with modified aligned Xception65 backbone.

Parameters:
• class_map (list, dict) – List of class names.
• imsize (int, tuple) – Image size after rescaling. Must be (321, 321): the current implementation only supports a fixed rescaled size of 321x321.
• scale_factor (int) – Reduction factor for output feature maps before upsampling. The current implementation only supports 16.
• atrous_rates (list) – Dilation factors for the atrous convolution layers in the ASPP module. The current implementation only supports [6, 12, 18].
• lr_initial (float) – Initial learning rate for the poly learning rate schedule. The default value is 0.007.
• lr_power (float) – Exponential factor for the poly learning rate schedule. The default value is 0.9.
• load_pretrained_weight (bool, str) – Whether or not to load pretrained weight values. If True, pretrained weights are downloaded to the current directory and loaded as the initial weight values. If a string is given, weight values are loaded from the weight file of that name.
• train_whole_network (bool) – Whether to freeze or train the base encoder layers of the model during training. If True, all layers of the model are trained. If False, the convolutional encoder base is frozen during training.
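The "poly" schedule named by lr_initial and lr_power decays the learning rate from its initial value to zero over training. A minimal sketch of the standard formula (poly_lr is an illustrative name; defaults taken from the constructor signature):

```python
def poly_lr(iteration, max_iterations, lr_initial=0.007, lr_power=0.9):
    """'Poly' learning rate schedule: lr_initial * (1 - t/T) ** lr_power."""
    return lr_initial * (1 - iteration / max_iterations) ** lr_power

print(poly_lr(0, 100))    # 0.007 (the initial rate)
print(poly_lr(50, 100))   # 0.007 * 0.5 ** 0.9, roughly half the initial rate
```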

Example

>>> from renom_img.api.segmentation.deeplab import Deeplabv3plus
>>>
>>> class_map = ['background', 'object']
>>> model = Deeplabv3plus(class_map, imsize=(321,321), lr_initial=1e-3, lr_power=0.9, load_pretrained_weight=True, train_whole_network=True)


References

Liang-Chieh Chen, George Papandreou, Florian Schroff, Hartwig Adam
Rethinking Atrous Convolution for Semantic Image Segmentation

Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, Hartwig Adam
Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation

fit(train_img_path_list=None, train_annotation_list=None, valid_img_path_list=None, valid_annotation_list=None, epoch=136, batch_size=64, optimizer=None, augmentation=None, callback_end_epoch=None, class_weight=None)

This function performs training with the given data and hyperparameters.

Parameters:
• train_img_path_list (list) – List of image paths.
• train_annotation_list (list) – List of annotations.
• valid_img_path_list (list) – List of image paths for validation.
• valid_annotation_list (list) – List of annotations for validation.
• epoch (int) – Number of training epochs.
• batch_size (int) – Batch size.
• augmentation (Augmentation) – Augmentation object.
• callback_end_epoch (function) – Given function will be called at the end of each epoch.

Returns: Training loss list and validation loss list. (tuple)

Example

>>> train_img_path_list, train_annot_list = ... # Define train data
>>> valid_img_path_list, valid_annot_list = ... # Define validation data
>>> class_map = ... # Define class map
>>> model = Deeplabv3plus(class_map) # Specify any algorithm provided by ReNomIMG API here
>>> model.fit(
...     # Feeds image and annotation data
...     train_img_path_list,
...     train_annot_list,
...     valid_img_path_list,
...     valid_annot_list,
...     epoch=8,
...     batch_size=8)
>>>


The following arguments will be given to the function  callback_end_epoch  .

• epoch (int) - Current epoch number.
• model (Model) - Model object.
• avg_train_loss_list (list) - List of average train loss of each epoch.
• avg_valid_loss_list (list) - List of average valid loss of each epoch.
forward(x)

Performs forward propagation. You can call this function using the  __call__  method.

Parameters: x (ndarray, Node) – Input to Deeplabv3plus.

Returns: Raw output of Deeplabv3plus. (Node)

Example

>>> import numpy as np
>>> x = np.random.rand(1, 3, 321, 321)
>>>
>>> class_map = ["dog", "cat"]
>>> model = Deeplabv3plus(class_map)
>>>
>>> y = model.forward(x) # Forward propagation.
>>> y = model(x)  # Same as above result.

loss(x, y, class_weight=None)

Loss function of Deeplabv3plus algorithm.

Parameters: x (ndarray, Node) – Output of model. y (ndarray, Node) – Target array.

Returns: Loss between x and y. (Node)
predict(img_list, batch_size=1)
 Returns: If only an image or a path is given, an array whose shape is (width, height) is returned. If multiple images or paths are given, a list with arrays whose shape is (width, height) is returned. (Numpy.array or list)
preprocess(x)

Performs preprocessing for a given array.

 Parameters: x ( ndarray , Node ) – Image array for preprocessing.
regularize()

Adds a regularization term to the loss function.

Example

>>> import numpy as np
>>> x = np.random.rand(1, 3, 321, 321)  # Input image
>>> y = ...  # Ground-truth label
>>>
>>> class_map = ['cat', 'dog']
>>> model = Deeplabv3plus(class_map)
>>>
>>> z = model(x)  # Forward propagation
>>> loss = model.loss(z, y)  # Loss calculation
>>> reg_loss = loss + model.regularize()  # Add weight decay term.