Algorithms module

Blockmatching

algorithms.blockmatching.blockmatching(floating_image, reference_image, init_result_trsf=None, left_trsf=None, param_str_1='-trsf-type rigid', param_str_2=None, dtype=None, **kwargs)[source]

Blockmatching registration algorithm. Registers a floating_image onto a reference_image.

Parameters:
  • floating_image (SpatialImage) – image to register
  • reference_image (SpatialImage) – reference image to use for registration of floating_image
  • init_result_trsf (BalTrsf, optional) – if given (default=None) it is used to initialise the registration and the returned transformation will contain it (composition)
  • left_trsf (BalTrsf, optional) – if given (default=None) it is used to initialise the registration but the returned transformation will NOT contain it (no composition)
  • param_str_1 (str, optional) – string of parameters used by the blockmatching API (default='-trsf-type rigid')
  • param_str_2 (str, optional) – string of EXTRA parameters used by the blockmatching API (default=None)
  • dtype (np.dtype, optional) – output image type, by default is equal to the input type.
Returns:

  • trsf_out (BalTrsf) – transformation matrix
  • res_img (SpatialImage) – registered image and metadata

Raises:

TypeError – if floating_image or reference_image is not a SpatialImage

Example

>>> from timagetk.util import data_path
>>> from timagetk.io import imread
>>> from timagetk.algorithms import blockmatching
>>> flo_path = data_path('time_0_cut.inr')
>>> floating_image = imread(flo_path)
>>> ref_path = data_path('time_1_cut.inr')
>>> reference_image = imread(ref_path)
>>> trsf_rig, res_rig = blockmatching(floating_image, reference_image) # rigid registration
>>> param_str_2 = '-trsf-type vectorfield'
>>> trsf_def, res_def = blockmatching(floating_image, reference_image,
                                      init_result_trsf=trsf_rig,
                                      param_str_2=param_str_2) # deformable registration

Connexe

algorithms.connexe.connexe(image, seeds=None, param_str_1='-low-threshold 1 -high-threshold 3 -labels -connectivity 26', param_str_2=None, dtype=None)[source]

Connected components labeling.

Parameters:
  • image (SpatialImage) – input image
  • seeds (SpatialImage, optional) – seeds image
  • param_str_1 (str, optional) – CONNEXE_DEFAULT by default: thresholds are fixed and one label is associated with each connected component
  • param_str_2 (str, optional) – optional parameters
  • dtype (np.dtype, optional) – type of array to return
Returns:

output image and metadata

Return type:

SpatialImage

Example

>>> from timagetk.util import data_path
>>> from timagetk.io import imread
>>> from timagetk.algorithms import regionalext, connexe
>>> img_path = data_path('time_0_cut.inr')
>>> input_image = imread(img_path)
>>> regext_img = regionalext(input_image)
>>> output_image = connexe(regext_img)
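The labeling step can be sketched in pure Python/NumPy. This is an illustrative re-implementation (2D, 4-connectivity, binary input), not the C backend wrapped by connexe, which also supports hysteresis thresholds and 3D 26-connectivity:

```python
import numpy as np
from collections import deque

def label_components(mask):
    """Label 4-connected components of a 2D boolean array (illustrative)."""
    labels = np.zeros(mask.shape, dtype=np.int32)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1  # new component found, flood-fill it
        queue = deque([start])
        labels[start] = current
        while queue:
            i, j = queue.popleft()
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if (0 <= ni < mask.shape[0] and 0 <= nj < mask.shape[1]
                        and mask[ni, nj] and not labels[ni, nj]):
                    labels[ni, nj] = current
                    queue.append((ni, nj))
    return labels

mask = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 1, 0, 1]], dtype=bool)
labels = label_components(mask)
```

With this mask there are three 4-connected components, so `labels.max()` is 3.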

Exposure

This module gathers some of the functionality of the 'exposure' module of scikit-image.

These algorithms are useful to stretch intensity images so they span the whole range of values available at a given bit-depth.

algorithms.exposure.type_to_range(img)[source]

Returns the minimum and maximum values representable by the dtype of an image.

Parameters:img (numpy.array or SpatialImage) – Image from which to extract the slice
Returns:
  • min (int) – the minimum value representable by the array dtype
  • max (int) – the maximum value representable by the array dtype
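For integer-typed images this behaviour can be reproduced with numpy.iinfo; a minimal sketch (the dtype_range helper below is hypothetical, not part of timagetk):

```python
import numpy as np

def dtype_range(img):
    """Return (min, max) representable by an integer image dtype
    (hypothetical stand-in for type_to_range, integer dtypes only)."""
    info = np.iinfo(img.dtype)
    return info.min, info.max

img8 = np.zeros((2, 2), dtype=np.uint8)
img16 = np.zeros((2, 2), dtype=np.uint16)
```

`dtype_range(img8)` gives (0, 255) and `dtype_range(img16)` gives (0, 65535).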
algorithms.exposure.x_slice_contrast_stretch(image, pc_min=2, pc_max=99)[source]

Performs slice-by-slice contrast stretching along the x-direction. Contrast stretching maps the lower and upper percentiles of the image values onto the min and max values of the image dtype.

Parameters:
  • image (numpy.array or SpatialImage) – Image from which to extract the slice
  • pc_min (int) – Lower percentile used to define the lower bound of the input range for stretching
  • pc_max (int) – Upper percentile used to define the upper bound of the input range for stretching
Returns:

stretched SpatialImage

Return type:

SpatialImage
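The percentile-based stretching described above can be sketched with NumPy alone. The slice_contrast_stretch helper below is illustrative only (uint8 output is assumed for simplicity); it is not the timagetk implementation:

```python
import numpy as np

def slice_contrast_stretch(image, pc_min=2, pc_max=99, axis=0):
    """Stretch each slice along `axis` so the [pc_min, pc_max] percentile
    range maps onto the full uint8 range (illustrative sketch)."""
    lo_t, hi_t = 0.0, 255.0
    out = np.empty_like(image, dtype=np.uint8)
    for k in range(image.shape[axis]):
        sl = np.take(image, k, axis=axis).astype(np.float64)
        lo, hi = np.percentile(sl, (pc_min, pc_max))
        stretched = (sl - lo) / max(hi - lo, 1e-12) * (hi_t - lo_t) + lo_t
        np.clip(stretched, lo_t, hi_t, out=stretched)
        # write the stretched slice back along the chosen axis
        idx = [slice(None)] * image.ndim
        idx[axis] = k
        out[tuple(idx)] = stretched.astype(np.uint8)
    return out

rng = np.random.default_rng(0)
img = rng.integers(50, 100, size=(4, 8, 8)).astype(np.uint8)
stretched = slice_contrast_stretch(img, axis=0)
```

Values above the upper percentile saturate at 255, so the output spans a wider range than the 50..99 input.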

algorithms.exposure.y_slice_contrast_stretch(image, pc_min=2, pc_max=99)[source]

Performs slice-by-slice contrast stretching along the y-direction. Contrast stretching maps the lower and upper percentiles of the image values onto the min and max values of the image dtype.

Parameters:
  • image (numpy.array or SpatialImage) – Image from which to extract the slice
  • pc_min (int) – Lower percentile used to define the lower bound of the input range for stretching
  • pc_max (int) – Upper percentile used to define the upper bound of the input range for stretching
Returns:

stretched SpatialImage

Return type:

SpatialImage

algorithms.exposure.z_slice_contrast_stretch(image, pc_min=2, pc_max=99)[source]

Performs slice-by-slice contrast stretching along the z-direction. Contrast stretching maps the lower and upper percentiles of the image values onto the min and max values of the image dtype.

Parameters:
  • image (numpy.array or SpatialImage) – Image from which to extract the slice
  • pc_min (int) – Lower percentile used to define the lower bound of the input range for stretching
  • pc_max (int) – Upper percentile used to define the upper bound of the input range for stretching
Returns:

stretched SpatialImage

Return type:

SpatialImage

algorithms.exposure.z_slice_equalize_adapthist(image, kernel_size=None, clip_limit=None, n_bins=256)[source]

Performs slice-by-slice adaptive histogram equalization in the z-direction.

Parameters:
  • image (numpy.array or SpatialImage) – image from which to extract the slice
  • kernel_size (integer or list-like, optional) – Defines the shape of contextual regions used in the algorithm. If iterable is passed, it must have the same number of elements as image.ndim (without color channel). If integer, it is broadcasted to each image dimension. By default, kernel_size is 1/8 of image height by 1/8 of its width.
  • clip_limit (float, optional) – Clipping limit, normalized between 0 and 1 (higher values give more contrast).
  • n_bins (int, optional) – Number of gray bins for histogram (“data range”).
Returns:

equalized intensity image

Return type:

SpatialImage

Notes

For color images, the following steps are performed:
  • The image is converted to HSV color space
  • The CLAHE algorithm is run on the V (Value) channel
  • The image is converted back to RGB space and returned

For RGBA images, the original alpha channel is removed.
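CLAHE itself is involved, but the underlying idea, remapping intensities through the image's cumulative histogram, can be sketched for a single slice. The snippet below is plain global histogram equalization, not the tiled, clip-limited variant that equalize_adapthist applies:

```python
import numpy as np

def equalize_slice(sl):
    """Global histogram equalization of one uint8 slice (illustrative;
    CLAHE additionally works on local tiles and clips the histogram)."""
    hist, _ = np.histogram(sl, bins=256, range=(0, 256))
    cdf = hist.cumsum()
    # ignore empty bins so the lowest occurring intensity maps to 0
    cdf_m = np.ma.masked_equal(cdf, 0)
    cdf_m = (cdf_m - cdf_m.min()) / (cdf_m.max() - cdf_m.min())
    lut = np.ma.filled(cdf_m, 0)
    return (lut[sl] * 255).astype(np.uint8)

rng = np.random.default_rng(1)
sl = rng.integers(100, 140, size=(32, 32)).astype(np.uint8)
eq = equalize_slice(sl)
```

The narrow 100..139 input is remapped to span the full 0..255 range.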

Linear Filtering

algorithms.linearfilter.linearfilter(image, param_str_1='-x 0 -y 0 -z 0 -sigma 1.0', param_str_2=None, dtype=None)[source]

Linear filtering algorithms.

Parameters:
  • image (SpatialImage) – input image
  • param_str_1 (str, optional) – LINEARFILTER_DEFAULT, by default a Gaussian filter with unit sigma is applied
  • param_str_2 (str, optional) – optional parameters
  • dtype (np.dtype, optional) – output image type. By default, the output type is equal to the input type.
Returns:

output image and metadata

Return type:

SpatialImage

Example

>>> from timagetk.util import data_path
>>> from timagetk.io import imread
>>> from timagetk.algorithms import linearfilter
>>> img_path = data_path('time_0_cut.inr')
>>> input_image = imread(img_path)
>>> output_image = linearfilter(input_image)
>>> param_str_2 = '-x 0 -y 0 -z 0 -sigma 2.0'
>>> output_image = linearfilter(input_image, param_str_2=param_str_2)

Mean images

algorithms.mean_images.mean_images(list_spatial_images, list_spatial_masks=None, param_str_1='-mean', param_str_2=None)[source]

Mean image algorithms.

Parameters:
  • list_spatial_images (list) – input list of SpatialImage (grayscale)
  • list_spatial_masks (list, optional) – input list of SpatialImages (binary)
  • param_str_1 (str, optional) – MEANIMAGES_DEFAULT, by default a mean image is computed
  • param_str_2 (str, optional) – optional parameters
Returns:

mean image

Return type:

SpatialImage

Example

>>> from timagetk.util import data_path
>>> from timagetk.io import imread
>>> from timagetk.algorithms import mean_images
>>> img_path = data_path('time_0_cut.inr')
>>> input_image = imread(img_path)
>>> output_image = mean_images([input_image, input_image, input_image])
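Conceptually, the '-mean' option averages the images voxel-wise. A NumPy sketch of that idea (masks and metadata handling are ignored):

```python
import numpy as np

def mean_of_images(arrays):
    """Voxel-wise mean of equally shaped grayscale arrays
    (illustrative sketch of the '-mean' behaviour)."""
    stack = np.stack([np.asarray(a, dtype=np.float64) for a in arrays])
    return stack.mean(axis=0)

a = np.full((2, 2), 10, dtype=np.uint8)
b = np.full((2, 2), 20, dtype=np.uint8)
m = mean_of_images([a, b])
```

Here every voxel of `m` equals 15.0.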

Morphology

algorithms.morpho.morpho(image, struct_elt_vt=None, param_str_1='-dilation -sphere -radius 1 -iterations 1', param_str_2=None, dtype=None)[source]

Mathematical morphology algorithms on grayscale images.

Parameters:
  • image (SpatialImage) – input image
  • struct_elt_vt (optional) – structuring element; by default an approximation of a Euclidean ball is used
  • param_str_1 (str, optional) – MORPHO_DEFAULT, by default a dilation is applied
  • param_str_2 (str, optional) – optional parameters
  • dtype (np.dtype, optional) – output image type. By default, the output type is equal to the input type.
Returns:

output image and metadata

Return type:

SpatialImage

Notes

The '-connectivity' parameter overrides the '-sphere' parameter.

Example

>>> output_image = morpho(input_image)
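For intuition, grayscale dilation replaces each voxel by the maximum over a neighbourhood. A 2D sketch with a square structuring element (the backend's default is an approximated Euclidean ball, so results will differ slightly):

```python
import numpy as np

def dilate(image, radius=1):
    """Grayscale dilation with a (2*radius+1)x(2*radius+1) square
    structuring element (illustrative sketch, 2D only)."""
    padded = np.pad(image, radius, mode='edge')
    out = np.zeros_like(image)
    h, w = image.shape
    # running maximum over all shifts covered by the structuring element
    for di in range(2 * radius + 1):
        for dj in range(2 * radius + 1):
            np.maximum(out, padded[di:di + h, dj:dj + w], out=out)
    return out

img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 255
dil = dilate(img)
```

The single bright pixel grows into a 3x3 bright block, while the corners stay dark.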
algorithms.morpho.cell_filter(image, struct_elt_vt=None, param_str_1='-dilation -sphere -radius 1 -iterations 1', param_str_2=None, dtype=None)[source]

Mathematical morphology algorithms on segmented images.

Parameters:
  • image (SpatialImage) – input image
  • struct_elt_vt (optional) – structuring element; by default an approximation of a Euclidean ball is used
  • param_str_1 (str, optional) – CELL_FILTER_DEFAULT, by default a dilation is applied
  • param_str_2 (str, optional) – optional parameters
  • dtype (np.dtype, optional) – output image type. By default, the output type is equal to the input type.
Returns:

output image and metadata

Return type:

SpatialImage

Example

>>> output_image = cell_filter(input_image)

Reconstruction

This is a revised copy of the (deprecated?) package 'vplants.mars_alt.mars.reconstruction', which relied on a library (morpheme) that is no longer available.

algorithms.reconstruction.end_margin(img, width, axis=None)[source]

An end margin is an inner black (zero-valued) region added at the end of an array.

Parameters:
  • img (numpy.array) – NxMxP array
  • width (int) – size of the margin
  • axis (int, optional) – axis along which the margin is added. By default, add in all directions (see also stroke).
Returns:

input array with the end margin

Return type:

numpy.array

Example

>>> from numpy import zeros, random
>>> from timagetk.algorithms.reconstruction import end_margin
>>> img = random.random((3, 4, 5))
>>> end_margin(img, 1, 0)
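A NumPy sketch of the idea, zeroing the last `width` planes along the chosen axis (the end_margin_sketch helper below is hypothetical, for illustration only):

```python
import numpy as np

def end_margin_sketch(img, width, axis=None):
    """Zero out the last `width` planes of `img`; all axes when axis is None
    (hypothetical re-implementation sketch, not timagetk's end_margin)."""
    out = img.copy()
    axes = range(img.ndim) if axis is None else [axis]
    for ax in axes:
        idx = [slice(None)] * img.ndim
        idx[ax] = slice(img.shape[ax] - width, img.shape[ax])
        out[tuple(idx)] = 0
    return out

img = np.ones((3, 4, 5))
out = end_margin_sketch(img, 1, axis=0)
```

The last plane along axis 0 is blanked; the remaining 2x4x5 voxels are untouched.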
algorithms.reconstruction.im2surface(image, threshold_value=45, only_altitude=False, front_half_only=False)[source]

This function computes a surface view of the meristem, using a revisited version of the method described in [1].

Parameters:
  • image (SpatialImage) – image to be masked
  • threshold_value (int, float) – only intensities above this threshold are considered.
  • only_altitude (bool) – only return altitude map, not maximum intensity projection
  • front_half_only (bool) – only consider the first half of all slices in the Z direction.
Returns:

  • mip_img (SpatialImage) – maximum intensity projection. None if only_altitude is True
  • alt_img (SpatialImage) – altitude of maximum intensity projection

References

[1]Barbier de Reuille, P. , Bohn‐Courseau, I. , Godin, C. and Traas, J. (2005), A protocol to analyse cellular dynamics during plant development. The Plant Journal, 44: 1045-1053. doi:10.1111/j.1365-313X.2005.02576.x

Example

>>> from timagetk.util import data_path
>>> from timagetk.io import imread
>>> from timagetk.algorithms.reconstruction import im2surface
>>> img_path = data_path('time_0_cut.inr')
>>> image = imread(img_path)
>>> mip_img, alt_img = im2surface(image)
algorithms.reconstruction.matrix_real2voxels(matrix, target_res, source_res)[source]

Converts a transform matrix M expressed in real coordinates (a transform from the common real space space_r onto itself) into a matrix M' from voxel space space_1 to voxel space space_2, where space_1 is the voxel space M comes from and space_2 the voxel space where it will end.

Parameters:
  • matrix (numpy.array) – a 4x4 numpy.array.
  • target_res (tuple) – 3-tuple of voxel sizes (resolution) of space_2 (e.g. (1., 2., 1.))
  • source_res (tuple) – 3-tuple of voxel sizes (resolution) of space_1 (e.g. (2., 1., 3.))
Returns:

matrix in “voxel” coordinates (M’ mapping space_1 to space_2 , instead of space_r to space_r).

Return type:

numpy.array
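The conversion can be reconstructed under the assumption that resolutions act as diagonal scalings: scale space_1 voxel coordinates into real units, apply M, then divide by the target resolution. This is a plausible sketch of the convention, not a verified copy of the implementation:

```python
import numpy as np

def real2voxels(matrix, target_res, source_res):
    """Sketch: M' = T^-1 @ M @ S, with S scaling space_1 voxels to real
    units and T^-1 scaling real units to space_2 voxels (assumed convention)."""
    s = np.diag(list(source_res) + [1.0])                    # voxels -> real
    t_inv = np.diag([1.0 / r for r in target_res] + [1.0])   # real -> voxels
    return t_inv @ matrix @ s

m = np.eye(4)
m[0, 3] = 4.0  # translate by 4 real units along x
mv = real2voxels(m, target_res=(2.0, 1.0, 1.0), source_res=(2.0, 1.0, 1.0))
```

A 4-unit real translation becomes a 2-voxel translation at 2.0 units per voxel.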

algorithms.reconstruction.max_intensity_projection(image, threshold_value=45)[source]

This function computes a surface view of the meristem, using a revisited version of the method described by Barbier de Reuille et al. in The Plant Journal.

Parameters:
  • image (SpatialImage) – image to be masked
  • threshold_value (int, float) – only intensities above this threshold are considered.
Returns:

maximum intensity projection

Return type:

SpatialImage

Example

>>> import matplotlib.pylab as plt
>>> from timagetk.util import data_path
>>> from timagetk.io import imread
>>> from timagetk.algorithms.reconstruction import max_intensity_projection
>>> img_path = data_path('time_0_cut.inr')
>>> image = imread(img_path)
>>> proj = max_intensity_projection(image)
>>> plt.imshow(proj)
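The core of the projection can be sketched with NumPy: threshold the image, take the maximum along the projection axis for the MIP, and the argmax for the altitude map. This is a simplification of what im2surface / max_intensity_projection do, shown for intuition only:

```python
import numpy as np

def mip_and_altitude(image, threshold=45, axis=2):
    """Maximum intensity projection along `axis`, plus the index (altitude)
    where the maximum occurs; voxels below `threshold` are ignored."""
    masked = np.where(image > threshold, image, 0)
    mip = masked.max(axis=axis)
    alt = masked.argmax(axis=axis)
    return mip, alt

img = np.zeros((2, 2, 4), dtype=np.uint8)
img[0, 0, 3] = 200
img[1, 1, 1] = 100
mip, alt = mip_and_altitude(img)
```

`mip` records the brightest value along z and `alt` the z-index where it was found.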
algorithms.reconstruction.pts2transfo(x, y)[source]

Infer rigid transformation from control point pairs using quaternions.

The quaternion representation is used to register two point sets with known correspondences. It computes the rigid transformation as a solution to a least squares formulation of the problem.

The rigid transformation, defined by the rotation \(R\) and the translation \(t\), is optimized by minimizing the cost function:

\[C(R,t) = \sum_i \| y_i - R\,x_i - t \|^2\]

The optimal translation \(t\) is given by:

\[t = y_b - R.x_b\]

with \(x_b\) and \(y_b\) the barycenters of two point sets

The optimal rotation \(R\) is obtained via quaternions by minimizing the following cost function:

\[C(q) = \sum_i \| y_i'\, q - q\, x_i' \|^2\]

with \(yi'\) and \(xi'\) converted to barycentric coordinates and identified by quaternions

With the matrix representations:

\[y_i'\, q - q\, x_i' = A_i\, q\]

\[C(q) = q^T \left( \sum_i A_i^T A_i \right) q = q^T B\, q\]

with:

\[A_i = \begin{bmatrix} 0 & (xn_i - yn_i) & (xn_j - yn_j) & (xn_k - yn_k) \\ -(xn_i - yn_i) & 0 & (-xn_k - yn_k) & (xn_j + yn_j) \\ -(xn_j - yn_j) & -(-xn_k - yn_k) & 0 & (-xn_i - yn_i) \\ -(xn_k - yn_k) & -(xn_j + yn_j) & -(-xn_i - yn_i) & 0 \end{bmatrix}\]

The unit quaternion representing the best rotation is the unit eigenvector corresponding to the smallest eigenvalue of the matrix \(-B\):

\[v = a + b.i + c.j + d.k\]

The orthogonal matrix corresponding to a rotation by this unit quaternion (\(|v| = 1\)) is given by:

\[R = \begin{bmatrix} a^2 + b^2 - c^2 - d^2 & 2bc - 2ad & 2bd + 2ac \\ 2bc + 2ad & a^2 - b^2 + c^2 - d^2 & 2cd - 2ab \\ 2bd - 2ac & 2cd + 2ab & a^2 - b^2 - c^2 + d^2 \end{bmatrix}\]
Parameters:
  • x (list) – list of points
  • y (list) – list of points
Returns:

T – array combining the optimal rotation \(R\) and translation \(t\)

Return type:

numpy.array

Notes

\[T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}\]

with T.shape == (4, 4)

Example

>>> from timagetk.algorithms.reconstruction import pts2transfo
>>> # x and y, two point sets with 7 known correspondences
>>> x = [[238.*0.200320, 196.*0.200320, 9.],
         [204.*0.200320, 182.*0.200320, 11.],
         [180.*0.200320, 214.*0.200320, 12.],
         [201.*0.200320, 274.*0.200320, 12.],
         [148.*0.200320, 225.*0.200320, 18.],
         [248.*0.200320, 252.*0.200320, 8.],
         [305.*0.200320, 219.*0.200320, 10.]]
>>> y = [[173.*0.200320, 151.*0.200320, 17.],
         [147.*0.200320, 179.*0.200320, 16.],
         [165.*0.200320, 208.*0.200320, 12.],
         [226.*0.200320, 204.*0.200320, 9.],
         [170.*0.200320, 254.*0.200320, 10.],
         [223.*0.200320, 155.*0.200320, 13.],
         [218.*0.200320, 109.*0.200320, 23.]]
>>> pts2transfo(x, y)
array([[  0.40710149,   0.89363883,   0.18888626, -22.0271968 ],
       [ -0.72459862,   0.19007589,   0.66244094,  51.59203463],
       [  0.55608022,  -0.40654742,   0.72490964,  -0.07837002],
       [  0.        ,   0.        ,   0.        ,   1.        ]])
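The quaternion-to-rotation formula given above can be checked numerically. The sketch below builds R from a unit quaternion and verifies it is a proper rotation (orthogonal, determinant 1):

```python
import numpy as np

def quat_to_rotation(a, b, c, d):
    """Rotation matrix of the unit quaternion a + b.i + c.j + d.k,
    written out exactly as in the docstring above."""
    return np.array([
        [a*a + b*b - c*c - d*d, 2*b*c - 2*a*d,         2*b*d + 2*a*c],
        [2*b*c + 2*a*d,         a*a - b*b + c*c - d*d, 2*c*d - 2*a*b],
        [2*b*d - 2*a*c,         2*c*d + 2*a*b,         a*a - b*b - c*c + d*d],
    ])

# identity quaternion -> identity rotation
R_id = quat_to_rotation(1.0, 0.0, 0.0, 0.0)
# a generic unit quaternion yields an orthogonal matrix with det 1
R = quat_to_rotation(0.5, 0.5, 0.5, 0.5)
```

This is a sanity check of the formula, not a replacement for pts2transfo.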
algorithms.reconstruction.spatialise_matrix_points(points, image, mip_thresh=45)[source]

Given a list of points in matrix coordinates (i.e. i, j, k, though k is ignored) and a spatial image, returns the points in real space (x, y, z), with the Z coordinate recomputed from an altitude map extracted from image. This assumes the points were placed on the mip/altitude map produced by max_intensity_projection applied to image with mip_thresh.

Parameters:
  • points (list(tuple(float,float,float))) – 2D points to spatialise; can also be a filename pointing to a numpy-compatible list of points.
  • image (SpatialImage) – image or path to image to use to spatialise the points.
  • mip_thresh (int, float) – threshold used to compute the original altitude map for points.
Returns:

points3d – 3D points in REAL coordinates.

Return type:

list [of tuple [of float, float, float]]

algorithms.reconstruction.stroke(img, width, outside=False)[source]

A stroke is an outline that can be added to an array object.

Parameters:
  • img (numpy.array) – NxMxP array
  • width (int) – size of the stroke
  • outside (bool, optional) – used to set the position of the stroke. By default, the position of the stroke is inside (outside = False)
Returns:

input array with the stroke

Return type:

numpy.array

Example

>>> from numpy import zeros, random
>>> from timagetk.algorithms.reconstruction import stroke
>>> img = random.random((3, 4, 5))
>>> stroke(img, 1)
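The inside-stroke case (outside=False) can be sketched by blanking a border of `width` voxels on every face of the array (the stroke_sketch helper below is hypothetical, for illustration only):

```python
import numpy as np

def stroke_sketch(img, width):
    """Blank a border of `width` voxels inside the array
    (sketch of the outside=False case only)."""
    out = img.copy()
    for ax in range(img.ndim):
        idx_lo = [slice(None)] * img.ndim
        idx_lo[ax] = slice(0, width)
        out[tuple(idx_lo)] = 0
        idx_hi = [slice(None)] * img.ndim
        idx_hi[ax] = slice(img.shape[ax] - width, img.shape[ax])
        out[tuple(idx_hi)] = 0
    return out

img = np.ones((4, 4))
out = stroke_sketch(img, 1)
```

Only the inner 2x2 block of ones survives the 1-voxel stroke.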
algorithms.reconstruction.surface2im(points, altitude)[source]

This function is used to convert points from maximum intensity projection to the real world.

Parameters:
  • points (list) – list of points from the maximum intensity projection
  • altitude (SpatialImage) – altitude of maximum intensity projection
Returns:

coord – list of points in the real world

Return type:

list

algorithms.reconstruction.surface_landmark_matching(ref_img, ref_pts, flo_img, flo_pts, ref_pts_already_spatialised=False, flo_pts_already_spatialised=False, mip_thresh=45)[source]

Computes the registration of flo_img to ref_img by minimizing distances between ref_pts and flo_pts.

Parameters:
  • ref_img (SpatialImage, str) – image, or path to an image, to use as the reference image.
  • ref_pts (list) – ordered sequence of 2D/3D points to use as reference landmarks.
  • flo_img (SpatialImage, str) – image, or path to an image, to use as the floating image.
  • flo_pts (list) – ordered sequence of 2D/3D points to use as floating landmarks.
  • ref_pts_already_spatialised (bool) – If True, consider reference points are already in REAL 3D space.
  • flo_pts_already_spatialised (bool) – If True, consider floating points are already in REAL 3D space.
  • mip_thresh (int, float) – used to recompute altitude map to project points in 3D if they aren’t spatialised.
Returns:

trsf – The result is a 4x4 resampling voxel matrix (i.e. from ref_img to flo_img).

Return type:

numpy.ndarray

Notes

If ref_pts_already_spatialised and flo_pts_already_spatialised are True, and ref_pts and flo_pts are indeed in real 3D coordinates, then this is exactly a landmark-matching registration.

Regionalext

algorithms.regionalext.regionalext(image, param_str_1='-minima -connectivity 26 -h 3', param_str_2=None, dtype=None)[source]

H-transform algorithms.

Parameters:
  • image (SpatialImage) – input image
  • param_str_1 (str, optional) – REGIONALEXT_DEFAULT, by default a h_min transform is computed
  • param_str_2 (str, optional) – optional parameters
  • dtype (np.dtype, optional) – output image type. By default, the output type is equal to the input type.
Returns:

output image and metadata

Return type:

SpatialImage

Example

>>> from timagetk.util import data_path
>>> from timagetk.io import imread
>>> from timagetk.algorithms import regionalext
>>> img_path = data_path('time_0_cut.inr')
>>> input_image = imread(img_path)
>>> output_image = regionalext(input_image)
>>> param_str_2 = '-minima -connectivity 26 -h 5'
>>> output_image = regionalext(input_image, param_str_2=param_str_2)

Resample

algorithms.resample.resample_isotropic(image, voxelsize, option='gray', **kwargs)[source]

Resample an image into an isotropic dataset with the given voxelsize.

Notes

To apply on intensity images, use option='gray'. To apply on labelled images, use option='label'.

Parameters:
  • image (SpatialImage) – input image
  • voxelsize (float) – voxelsize value
  • option (str, optional) – option can be either 'gray' or 'label'
Returns:

output image and metadata

Return type:

SpatialImage

Example

>>> import numpy as np
>>> from timagetk.components import SpatialImage
>>> from timagetk.algorithms.resample import resample_isotropic
>>> test_array = np.ones((5,5,10), dtype=np.uint8)
>>> img = SpatialImage(test_array, voxelsize=[1., 1., 2.])
>>> output_image = resample_isotropic(img, voxelsize=0.4)
>>> output_image.voxelsize
[0.4, 0.4, 0.4]
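A nearest-neighbour sketch of isotropic resampling, closer to option='label' behaviour (option='gray' interpolates). The resample_isotropic_nn helper and its rounding convention are assumptions for illustration, not the library's algorithm:

```python
import numpy as np

def resample_isotropic_nn(arr, voxelsize_in, voxelsize_out):
    """Nearest-neighbour resample to an isotropic grid (illustrative)."""
    new_shape = tuple(int(round(n * v / voxelsize_out))
                      for n, v in zip(arr.shape, voxelsize_in))
    # for each output index, pick the nearest source index
    axes_idx = [np.minimum((np.arange(m) * voxelsize_out / v).astype(int), n - 1)
                for m, n, v in zip(new_shape, arr.shape, voxelsize_in)]
    grids = np.meshgrid(*axes_idx, indexing='ij')
    return arr[tuple(grids)]

arr = np.ones((5, 5, 10), dtype=np.uint8)
res = resample_isotropic_nn(arr, voxelsize_in=[1., 1., 2.], voxelsize_out=0.5)
```

A (5, 5, 10) array at voxelsize [1, 1, 2] becomes (10, 10, 40) at voxelsize 0.5.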
algorithms.resample.subsample(image, factor=[2, 2, 1], option='gray')[source]

Subsample a SpatialImage (2D/3D, grayscale/label)

Notes

To apply on intensity images, use option='gray'. To apply on labelled images, use option='label'.

Parameters:
  • image (SpatialImage) – input SpatialImage
  • factor (list, optional) – int|float or xyz-list of subsampling values
  • option (str, optional) – option can be either 'gray' or 'label'
Returns:

output image and metadata

Return type:

SpatialImage

Example

>>> import numpy as np
>>> from timagetk.components import SpatialImage
>>> from timagetk.algorithms.resample import subsample
>>> test_array = np.ones((5,5,10), dtype=np.uint8)
>>> img = SpatialImage(test_array, voxelsize=[1., 1., 2.])
>>> output_image = subsample(img)
>>> output_image.shape
(2, 2, 10)
>>> output_image.voxelsize
[2.5, 2.5, 2.0]
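The shape and voxelsize arithmetic in the example can be mimicked with nearest-neighbour picking. The floor-division convention below is inferred from the example output (5 -> 2 at factor 2), not read from the source:

```python
import numpy as np

def subsample_nn(arr, factor=(2, 2, 1)):
    """Nearest-neighbour subsampling; output shape is floor(shape/factor)
    (convention inferred from the example above)."""
    new_shape = tuple(n // f for n, f in zip(arr.shape, factor))
    axes_idx = [np.arange(m) * f for m, f in zip(new_shape, factor)]
    grids = np.meshgrid(*axes_idx, indexing='ij')
    return arr[tuple(grids)]

arr = np.ones((5, 5, 10), dtype=np.uint8)
sub = subsample_nn(arr)
# voxelsize grows so that physical extent is preserved
new_vs = [v * n / m for v, n, m in zip([1., 1., 2.], arr.shape, sub.shape)]
```

This reproduces the (2, 2, 10) shape and [2.5, 2.5, 2.0] voxelsize of the example.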

Transformation

algorithms.trsf.inv_trsf(trsf, template_img=None, param_str_1='', param_str_2=None)[source]

Inversion of a BalTrsf transformation matrix.

Parameters:
  • trsf (BalTrsf) – input transformation to invert
  • template_img (optional, default is None.) – used for output image geometry and can be either SpatialImage or a list of dimensions
  • param_str_1 (str, optional) – INV_TRSF_DEFAULT, default = ''
  • param_str_2 (str, optional) – optional parameters
Returns:

output transformation

Return type:

BalTrsf

Example

>>> trsf_output = inv_trsf(trsf_input)
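For linear transformations, which are backed by a 4x4 homogeneous matrix, inversion amounts to a matrix inverse (vectorfield inversion is an iterative problem handled by the backend). A NumPy sketch:

```python
import numpy as np

# A linear transformation as a 4x4 homogeneous matrix: here a pure
# translation, whose inverse is the opposite translation.
t = np.eye(4)
t[:3, 3] = [2.0, -1.0, 0.5]
t_inv = np.linalg.inv(t)
```

Composing a matrix with its inverse gives the identity.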
algorithms.trsf.apply_trsf(image, trsf=None, template_img=None, param_str_1='-linear', param_str_2=None, dtype=None)[source]

Apply a BalTrsf transformation to a SpatialImage. To apply a transformation to a LabelledImage, use '-nearest' in param_str_2; the default is '-linear'.

Parameters:
  • image (SpatialImage) – input image to transform
  • trsf (BalTrsf, optional) – input transformation, default is identity
  • template_img (SpatialImage or list, optional) – used for the output image geometry; can be either a SpatialImage or a list of dimensions. If a list of dimensions, the voxelsize is taken from image. Default is None.
  • param_str_1 (str, optional) – APPLY_TRSF_DEFAULT
  • param_str_2 (str, optional) – optional parameters
  • dtype (np.dtype, optional) – output image type, by default, output type is equal to input type.
Returns:

output image with metadata

Return type:

SpatialImage

Example

>>> output_image = apply_trsf(input_image, input_trsf)
algorithms.trsf.compose_trsf(list_trsf, template_img=None, param_str_1='', param_str_2=None)[source]

Composition of BalTrsf transformations

Parameters:
  • list_trsf (list) – list of BalTrsf transformations
  • template_img (SpatialImage, optional) – template_img is a SpatialImage specified for vectorfield composition
  • param_str_1 (str, optional) – COMPOSE_TRSF_DEFAULT
  • param_str_2 (str, optional) – optional parameters
Returns:

composition of given transformations

Return type:

BalTrsf

Examples

>>> res_trsf = compose_trsf([trsf_1, trsf_2, trsf_3])
>>> res_trsf = compose_trsf([trsf_1, trsf_2, trsf_3], template_img=template_img)
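For linear transformations, composition reduces to multiplying the underlying 4x4 homogeneous matrices. A NumPy sketch; note that which end of the list is applied first is an assumption to check against the API:

```python
import numpy as np

def compose_matrices(matrices):
    """Left-to-right product of 4x4 homogeneous matrices.
    With [t1, t2] the product t1 @ t2 applies t2 first, then t1
    (this ordering convention is an assumption, not taken from timagetk)."""
    out = np.eye(4)
    for m in matrices:
        out = out @ m
    return out

t1, t2 = np.eye(4), np.eye(4)
t1[0, 3], t2[1, 3] = 2.0, 3.0  # two pure translations
composed = compose_matrices([t1, t2])
```

For pure translations the order does not matter: the offsets simply add up.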
algorithms.trsf.create_trsf(template_img=None, param_str_1='-identity', param_str_2=None, trsf_type=None, trsf_unit=None)[source]

Create a BalTrsf instance.

Create “classical” transformations, such as identity, translation, rotation, sinusoidal, random, etc.

Parameters:
  • template_img (SpatialImage, optional) – Image used to define the shape of the three BalImage in case of a non-linear deformation.
  • param_str_1 (str, optional) – CREATE_TRSF_DEFAULT, default transformation is identity.
  • param_str_2 (str, optional) – optional parameters.
  • trsf_type (int or str, optional) – type of BalTrsf (see TRSF_TYPE_DICT in bal_trsf.py), default is AFFINE_3D.
  • trsf_unit (int or str, optional) – unit of BalTrsf (see TRSF_UNIT_DICT in bal_trsf.py), default is REAL_UNIT.
Returns:

output transformation

Return type:

BalTrsf

Example

>>> from timagetk.algorithms.trsf import create_trsf
>>> identity_trsf = create_trsf()
algorithms.trsf.mean_trsfs(list_trsf)[source]

Mean of a list of BalTrsf transformations (vectorfield type).

Parameters:list_trsf (list) – list of BalTrsf transformations (vectorfield type)
Returns:output mean transformation
Return type:BalTrsf

Example

>>> trsf_out = mean_trsfs([trsf_1, trsf_2, trsf_3])

Watershed

algorithms.watershed.watershed(image, seeds, param_str_1='-labelchoice most', param_str_2=None, dtype=None)[source]

Seeded watershed algorithm.

Parameters:
  • image (SpatialImage) – input image
  • seeds (SpatialImage) – seeds image; each marker has a unique value
  • param_str_1 (str, optional) – WATERSHED_DEFAULT
  • param_str_2 (str, optional) – optional parameters
  • dtype (np.dtype, optional) – output image type; by default, the output type is equal to the input type.
Returns:

output image and metadata

Return type:

SpatialImage

Example

>>> from timagetk.util import data_path
>>> from timagetk.io import imread
>>> from timagetk.algorithms import linearfilter, regionalext, connexe, watershed
>>> img_path = data_path('time_0_cut.inr')
>>> input_image = imread(img_path)
>>> smooth_img = linearfilter(input_image)
>>> regext_img = regionalext(smooth_img)
>>> conn_img = connexe(regext_img)
>>> output_image = watershed(smooth_img, conn_img)