Examples

All sample scripts can be found in the examples folder timagetk/timagetk/examples/ and can be run from there.

For example, to run the script input_output.py, open a shell prompt, go to the examples folder and type:

$ python input_output.py

All shared data (images, transformations, …) can be found in the data folder timagetk/timagetk/shared/data/.

Some results of the example scripts may be found in the result folder timagetk/timagetk/examples/results/.
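
The example scripts access these shared data through the data_path helper. To see which files are available from Python, here is a minimal sketch; it assumes data_path simply resolves a filename inside the shared data folder and returns a plain path string:

import os
from timagetk.util import data_path

# Derive the shared data folder from a known file path (assumption: data_path
# returns the absolute path of a file located in timagetk/timagetk/shared/data/):
shared_dir = os.path.dirname(data_path('time_0_cut.inr'))
print(sorted(os.listdir(shared_dir)))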

Input/Output

These examples illustrate input and output operations (see Input/Output module).

Reading and writing images

Images can be checked using any image visualization tool (such as Gnomon or Fiji).

Follow these examples to understand how to use these functions:


# imports
import os
from timagetk.util import data_path
from timagetk.io import imread, imsave
# NOTE: the import path of SpatialImage may differ between timagetk versions
from timagetk.components import SpatialImage

out_path = './results/'  # to save results
if not os.path.isdir(out_path):
    os.mkdir(out_path)

# input image path
img_path = data_path('time_0_cut.inr')
# .inr format to SpatialImage
sp_img = imread(img_path)
# sp_img is now a SpatialImage instance

try:
    assert isinstance(sp_img, SpatialImage)
except AssertionError:
    raise TypeError("Object 'sp_img' is not a SpatialImage.")

# get the SpatialImage metadata property
metadata = sp_img.metadata  # dictionary
# print the SpatialImage metadata
print('Metadata:', metadata)
# there are two ways to modify the metadata property
# 1. access the 'filename' metadata directly:
sp_img.metadata['filename'] = img_path
# print the SpatialImage metadata
print('New metadata:', sp_img.metadata)
# 2. update the whole dictionary with a new one:
sp_img.metadata = {'filename': img_path.split('/')[-1]}
# print the SpatialImage metadata
print('New metadata:', sp_img.metadata)

# get the SpatialImage voxelsize (list of floats)
vox = sp_img.voxelsize
# print the SpatialImage voxelsize
print('Voxelsize:', vox)
# voxelsize modification: double the voxelsize in every direction
new_vox = [2.0 * val for val in vox]
# set the SpatialImage voxelsize property
sp_img.voxelsize = new_vox
print('New voxelsize:', sp_img.voxelsize)

# output
# filename
res_name = 'example_input_output.tif'
# SpatialImage to .tif format
imsave(out_path + res_name, sp_img)
# filename
res_name = 'example_input_output.inr'
# SpatialImage to .inr format
imsave(out_path + res_name, sp_img)

You can find a detailed description of these reading and writing functions in the Input/Output module documentation.

Reading and writing transformations

Using C functions

To load or save BalTrsf instances using the C read/write methods:

  • use BalTrsf.write() to save BalTrsf instances
  • use BalTrsf.read() to load BalTrsf instances saved with BalTrsf.write()

Follow these examples to understand how to use these methods:

from timagetk.util import data_path
from timagetk.algorithms.trsf import BalTrsf

# - Read a saved ``BalTrsf``, using the C method ``read``:
trsf = BalTrsf()
trsf.read(data_path("identity_trsf.trsf"))
trsf.c_display('identity')

# - Save a ``BalTrsf``, using the C method ``write``:
trsf.write("test.trsf")

Using Python functions

Warning

If you use these functions to save BalTrsf objects, you cannot load them using the C method BalTrsf.read()!

To load or save BalTrsf instances using the Python read/write functions:

  • use save_trsf() to save BalTrsf instances
  • use read_trsf() to load files saved with save_trsf()

Follow these examples to understand how to use these functions:

from timagetk.util import data_path
from timagetk.io.io_trsf import read_trsf
from timagetk.io.io_trsf import save_trsf

# - Read a saved ``BalTrsf``, using the Python function ``read_trsf``:
trsf = read_trsf(data_path("identity_trsf.trsf"))
trsf.c_display('identity')

# - Save a ``BalTrsf``, using the Python function ``save_trsf``:
save_trsf(trsf, "test.trsf")

Transformations

Linear BalTrsf instance from numpy.array

Warning

Not yet implemented/tested for 2D!

To create rigid BalTrsf instances from a numpy.array describing the transformation, use np_array_to_rigid_trsf from algorithms.trsf.

To create affine BalTrsf instances from a numpy.array describing the transformation, use np_array_to_affine_trsf from algorithms.trsf.


"""
These examples show how to create linear ``BalTrsf`` object from ``numpy.array``.
"""

import numpy as np
from timagetk.wrapping.bal_trsf import np_array_to_rigid_trsf
from timagetk.wrapping.bal_trsf import np_array_to_affine_trsf

# - Create a RIGID transformation (from a random array):
arr = np.random.random_sample((4, 4))
trsf = np_array_to_rigid_trsf(arr)
# To get the unit of the transformation (should be 'REAL_UNIT'):
print(trsf.get_unit())
# To get the type of the transformation (should be 'RIGID_3D'):
print(trsf.get_type())
# To test if this transformation is linear (should be True):
print(trsf.is_linear())
# To print information and the linear matrix as defined in the allocated memory:
trsf.c_display('rigid_test')

# - Create an AFFINE transformation (from a random array):
arr = np.random.random_sample((4, 4))
trsf = np_array_to_affine_trsf(arr)
# To get the unit of the transformation (should be 'REAL_UNIT'):
print(trsf.get_unit())
# To get the type of the transformation (should be 'AFFINE_3D'):
print(trsf.get_type())
# To test if this transformation is linear (should be True):
print(trsf.is_linear())
# To print information and the linear matrix as defined in the allocated memory:
trsf.c_display('affine_test')

See also

TRSF_TYPE_DICT & TRSF_UNIT_DICT global variables defined in wrapping.bal_trsf.
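
For instance, a minimal way to inspect these dictionaries:

from timagetk.wrapping.bal_trsf import TRSF_TYPE_DICT, TRSF_UNIT_DICT

# Available transformation types and units:
print(TRSF_TYPE_DICT)
print(TRSF_UNIT_DICT)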

Non-linear BalTrsf instance from list of SpatialImage

Warning

Not yet implemented!

Todo

Implement initialization of non-linear BalTrsf instance from list of SpatialImage.

Intensity images

Intensity images, also called grayscale images, are obtained from microscopy imaging (confocal, SPIM, …) and their dimensionality may range from 2D to 5D (3D + multi-channel + time).

We hereafter present some methods to transform them.

Note

Please note that these methods generally work on 2D or 3D images only!
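
As a quick check before applying them, here is a minimal sketch assuming that the shape of the underlying array reflects the spatial dimensions of the image:

from timagetk.io import imread
from timagetk.util import data_path

sp_img = imread(data_path('time_0_cut.inr'))
# Number of dimensions of the loaded image (3 for a 3D stack, 2 for a single slice):
print(len(sp_img.shape))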

Exposure & intensity rescaling

The following images illustrate the effects of two intensity rescaling techniques on grayscale images.

_images/exposure.png

This code example details how to perform intensity rescaling on grayscale images.

from timagetk.io import imread
from timagetk.util import data_path

from timagetk.algorithms.exposure import z_slice_contrast_stretch
from timagetk.algorithms.exposure import z_slice_equalize_adapthist
import matplotlib.pyplot as plt
from timagetk.visu.mplt import slice_n_hist

# - Example input image path
img_path = data_path('time_0_cut.inr')
# - Load the image:
sp_img = imread(img_path)
# - Get the middle z-slice:
n_zslices = sp_img.shape[2]
middle_z = int(n_zslices / 2)

# - Create a view of the original z-slice and the associated histograms:
slice_n_hist(sp_img.get_z_slice(middle_z).get_array(),
             "original z-slice (z {}/{})".format(middle_z, n_zslices),
             'time_0_cut.inr')
# - Display the interactive matplotlib viewer:
plt.show()

# - Perform z-slice by z-slice contrast stretching:
st_img = z_slice_contrast_stretch(sp_img)
# - Create a view of the contrasted z-slice and the associated histograms:
slice_n_hist(st_img.get_z_slice(middle_z).get_array(),
             "z-slice contrast stretching (z {}/{})".format(middle_z,
                                                            n_zslices),
             'time_0_cut.inr')
# - Display the interactive matplotlib viewer:
plt.show()

# - Perform z-slice by z-slice adaptive histogram equalization:
eq_img = z_slice_equalize_adapthist(sp_img)
# - Create a view of the equalized z-slice and the associated histograms:
slice_n_hist(eq_img.get_z_slice(middle_z).get_array(),
             "z-slice adaptive histogram equalization (z {}/{})".format(
                 middle_z, n_zslices), 'time_0_cut.inr')
# - Display the interactive matplotlib viewer:
plt.show()

The following images detail how intensity rescaling modifies the grayscale value distribution, by showing the associated histograms and cumulative histograms.

_images/exposure_00_00.png

_images/exposure_01_00.png

_images/exposure_02_00.png
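
To reproduce such histograms and cumulative histograms directly, here is a minimal sketch using numpy-backed arrays and matplotlib; it assumes get_array() is also available on the full SpatialImage, as it is on z-slices:

import matplotlib.pyplot as plt

from timagetk.io import imread
from timagetk.util import data_path
from timagetk.algorithms.exposure import z_slice_contrast_stretch

# Load the example image and apply contrast stretching:
sp_img = imread(data_path('time_0_cut.inr'))
st_img = z_slice_contrast_stretch(sp_img)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
images = [('original', sp_img), ('contrast stretched', st_img)]
for ax, (name, img) in zip(axes, images):
    values = img.get_array().flatten()
    # Histogram of the grayscale values:
    ax.hist(values, bins=256, density=True, label='histogram')
    # Cumulative histogram of the same values:
    ax.hist(values, bins=256, density=True, cumulative=True,
            histtype='step', label='cumulative histogram')
    ax.set_title(name)
    ax.legend()
plt.tight_layout()
plt.show()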

Grayscale filtering

The following images give an overview of the effects of grayscale filtering on grayscale images.

_images/grayscale_filters.png

This example illustrates the effect of Gaussian smoothing on grayscale images with increasing values of the standard deviation.

# -*- python -*-
# -*- coding: utf-8 -*-
#
#
#       Copyright 2018 INRIA
#
#       File author(s):
#           Jonathan Legrand <jonathan.legrand@ens-lyon.fr>
#
#       See accompanying file LICENSE.txt
# ------------------------------------------------------------------------------

"""
These examples show the effect of some filters on grayscale images.
"""
from __future__ import print_function

import numpy as np
from timagetk.io import imread
from timagetk.util import data_path
from timagetk.algorithms.resample import isometric_resampling
from timagetk.plugins.linear_filtering import linear_filtering
from timagetk.plugins.linear_filtering import list_linear_methods

from pylab import show, imshow, title, subplot, figure, tight_layout
from matplotlib import gridspec

# - Example input image path
img_path = data_path('time_0_cut.inr')
# - Load the image:
sp_img = imread(img_path)

# - Make the grayscale image isometric:
# gaussian smoothing methods assume image isometry
# (by applying the same parameter 'std_dev' in every direction)
sp_img = isometric_resampling(sp_img)

# - Get the middle z-slice:
n_zslices = sp_img.shape[2]
middle_z = int(n_zslices / 2)

# - To get the list of defined linear filtering methods:
linear_methods = list_linear_methods()
print(linear_methods)

# Define a list of increasing standard deviations to apply to the image:
sigmas = range(1, 4)
n_fig = len(sigmas) + 1  # don't forget original image

# - We create a grid to assemble all figures:
figure(figsize=[3.5 * n_fig, 4])
gs = gridspec.GridSpec(1, n_fig)

# - Get the middle z-slice of the original grayscale image and add it as first image:
subplot(gs[0, 0])
imshow(sp_img.get_z_slice(middle_z).get_array(), cmap="gray", vmin=0, vmax=255)
title("original z-slice (z {}/{})".format(middle_z, n_zslices))

# - Gaussian smoothing example with increasing standard deviation:
for n, sigma in enumerate(sigmas):
    # Define where to add the image in the plot:
    subplot(gs[0, n + 1])
    # Perform gaussian smoothing:
    gauss_filter_img = linear_filtering(sp_img, 'gaussian_smoothing',
                                        std_dev=sigma)
    imshow(gauss_filter_img.get_z_slice(middle_z).get_array(), cmap="gray", vmin=0, vmax=255)
    title('gaussian_smoothing (std_dev={})'.format(sigma))

# - Display the images:
tight_layout()
show()

_images/grayscale_filters1.png

Maximum intensity projection

It is possible to perform a Maximum Intensity Projection (MIP) of grayscale images using max_intensity_projection from timagetk.algorithms.reconstruction.

The following code illustrates how to perform a maximum intensity projection:

from timagetk.io import imread
from timagetk.util import data_path

from timagetk.algorithms.resample import isometric_resampling
from timagetk.algorithms.reconstruction import max_intensity_projection

from pylab import show, imshow, title, figure

# - Example input image path
img_path = data_path('p58-t0_INT_down_interp_2x.inr.gz')
# - Load the image:
sp_img = imread(img_path)

# - Make the grayscale image isometric:
# the ``max_intensity_projection`` method assumes image isometry
sp_img = isometric_resampling(sp_img)

# - Create the figure:
figure()
# - Maximum intensity projection of the original grayscale image:
mip = max_intensity_projection(sp_img, 55)
mip = mip.to_2D()
imshow(mip, cmap="gray", vmin=0, vmax=255)
title("Original image")

show()

The following images show MIPs of the same 3D image, obtained with different intensity rescaling methods:

_images/mip.png
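
Such a comparison can be reproduced by chaining the intensity rescaling functions presented above with max_intensity_projection. Here is a minimal sketch; the projection threshold of 55 is simply reused from the previous example and is only illustrative:

from timagetk.io import imread
from timagetk.util import data_path
from timagetk.algorithms.resample import isometric_resampling
from timagetk.algorithms.exposure import z_slice_contrast_stretch
from timagetk.algorithms.exposure import z_slice_equalize_adapthist
from timagetk.algorithms.reconstruction import max_intensity_projection

from pylab import show, imshow, title, subplot, figure, tight_layout

# Load and resample the example image, as in the previous example:
sp_img = isometric_resampling(imread(data_path('p58-t0_INT_down_interp_2x.inr.gz')))

# Original and rescaled versions of the same grayscale image:
images = [('original', sp_img),
          ('contrast stretched', z_slice_contrast_stretch(sp_img)),
          ('adaptive hist. equalization', z_slice_equalize_adapthist(sp_img))]

figure(figsize=[4 * len(images), 4])
for n, (name, img) in enumerate(images):
    subplot(1, len(images), n + 1)
    # Maximum intensity projection, converted to a 2D image for display:
    mip = max_intensity_projection(img, 55).to_2D()
    imshow(mip, cmap="gray", vmin=0, vmax=255)
    title(name)
tight_layout()
show()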

Rigid, affine and deformable registration

This example illustrates rigid, affine and deformable registration (see Registration).

A registration process requires two input images (floating_img and reference_img) and computes:

  • a spatial transformation trsf_out between the two input images,
  • a result image img_res corresponding to the floating image resampled into the reference frame.

The resulting images (see folder timagetk/timagetk/examples/results/) can be checked against the reference image by using any image visualization tool (such as Gnomon or Fiji).

_images/registration.jpg
# imports
import os
try:
    from timagetk.util import data_path
    from timagetk.io import imread, imsave
    from timagetk.plugins import registration
    #from timagetk.wrapping import BalTrsf
except ImportError as e:
    raise ImportError('Import Error: {}'.format(e))

out_path = './results/' # to save results
if not os.path.isdir(out_path):
    os.mkdir(out_path)

# we consider two different times
# time_1 is the floating image
# time_2 is the reference image
times = [1, 2]
# list of SpatialImage instances
list_images = [imread(data_path('time_' + str(time) + '.inr'))
               for time in times]
floating_img, reference_img = list_images[0], list_images[1]

# Rigid registration:
trsf_rig, res_rig = registration(floating_img,
                                 reference_img,
                                 method='rigid_registration')
# display the spatial transformation (4x4 matrix):
trsf_rig.c_display()
# save the spatial transformation:
res_name = 'example_trsf_rigid.trsf' # filename
trsf_rig.write(out_path+res_name)
# save the result image:
res_name = 'example_reg_rigid_1_2.tif' # filename
# SpatialImage to .tif format
imsave(out_path+res_name, res_rig)

# Affine registration:
trsf_aff, res_aff = registration(floating_img,
                                 reference_img,
                                 method='affine_registration')
res_name = 'example_reg_affine_1_2.tif' # filename
# SpatialImage to .tif format
imsave(out_path+res_name, res_aff)

# Deformable registration:
trsf_def, res_def = registration(floating_img,
                                 reference_img,
                                 method='deformable_registration')
res_name = 'example_reg_deformable_1_2.tif' # filename
# SpatialImage to .tif format
imsave(out_path+res_name, res_def)

# Save the reference image:
res_name = 'example_reg_reference.tif' # filename
# SpatialImage to .tif format
imsave(out_path+res_name, reference_img)

Multi-angle fusion

After acquiring the same tissue several times from different angles, it is possible to fuse these images together using the fusion algorithm from timagetk.algorithms.fusion.

The following code illustrates how to perform such a multi-angle fusion:

"""

from timagetk.util import data_path

from timagetk.io import imread
from timagetk.plugins.fusion import fusion
from timagetk.algorithms.resample import resample

# Define the file locations:
files = [data_path("time_0_cut.inr"),
         data_path("time_0_cut_rotated1.mha"),
         data_path("time_0_cut_rotated2.mha")]
ref_fusion_file = data_path("time_0_cut_fused.mha")

# Define the downsampling factor (speeds up computation):
downsampling = 2.

# Read and downsample the images to fuse:
img_list = []
for f in files:
    print(f)
    img = imread(f)
    ds_vxs = [vox * downsampling for vox in img.voxelsize]
    img_list.append(resample(img, ds_vxs, option='grey'))

# Perform the multi-angle fusion:
fus_img = fusion(img_list, 0)

# Read the "reference fusion" file and apply same downsampling:
ref_fus_img = imread(ref_fusion_file)
ds_vxs = [vox * downsampling for vox in ref_fus_img.voxelsize]
ref_fus_img = resample(ref_fus_img, ds_vxs, option='grey')

# Compute the structural similarity index between the "reference fusion"
# and the one we just computed:
ssim = fus_img.structural_similarity_index(ref_fus_img)

# Check that the images are at least 98% similar (according to structural similarity):
assert ssim >= 0.98

The following images illustrate the multi-angle fusion:

_images/fusion.png

Note that the shape of the fused image is larger than that of the initial images. Super-resolution is also achieved during this step.
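
This can be verified by comparing the shape and voxelsize of the fused image with those of the first (downsampled) input image, continuing from the fusion example above:

# Continuing from the fusion example above:
print('input shape     :', img_list[0].shape)
print('fused shape     :', fus_img.shape)
print('input voxelsize :', img_list[0].voxelsize)
print('fused voxelsize :', fus_img.voxelsize)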

Sequence Registration

This example illustrates rigid and affine registration along a whole sequence (see Sequence registration).

This process requires a list of images (a sequence) and computes:

  • a list of spatial transformations list_compo_trsf between successive images and the reference image,
  • a list of result images list_res_img corresponding to the successive images resampled into the reference frame.

By default, the reference image is the last image of the sequence and all images are resampled into this frame. The resulting images (see folder timagetk/timagetk/examples/results/) can be checked against the reference image by using any image visualization tool (such as Gnomon or Fiji).

_images/seq_rigid_registration.jpg
# imports
import os
try:
    from timagetk.util import data_path
    from timagetk.io import imread, imsave
    from timagetk.plugins import sequence_registration
except ImportError as e:
    raise ImportError('Import Error: {}'.format(e))

out_path = './results/' # to save results
if not os.path.isdir(out_path):
    os.mkdir(out_path)

# we consider three different times
# time_0 is the floating image
# time_1 is first the local reference image,
# then becomes the floating image
# time_2 is the global reference image
times = [0, 1, 2]
# list of SpatialImage instances
list_images = [imread(data_path('time_' + str(time) + '.inr'))
               for time in times]

# Rigid registration along the whole sequence:
# list_compo_trsf: list of transformations
# list_res_img: list of resulting images
# the primitive embeds an iterative registration
list_compo_trsf, list_res_img = sequence_registration(list_images,
                                                      method='rigid')
for ind, img in enumerate(list_res_img):
    # filenames
    res_name = ''.join(['example_seq_reg_rigid_', str(ind), '_',
                        str(times[-1]), '.inr'])
    # SpatialImage to .inr format
    imsave(out_path+res_name, img)

# Affine registration along the whole sequence:
list_compo_trsf, list_res_img = sequence_registration(list_images,
                                                      method='affine')
for ind, img in enumerate(list_res_img):
    # filenames
    res_name = ''.join(['example_seq_reg_affine_', str(ind), '_',
                        str(times[-1]), '.inr'])
    # SpatialImage to .inr format
    imsave(out_path+res_name, img)

Seeded-watershed segmentation

This example illustrates seeded-watershed segmentation.


import os
try:
    from timagetk.util import data_path
    from timagetk.io import imread, imsave
    from timagetk.plugins import linear_filtering, morphology
    from timagetk.plugins import h_transform
    from timagetk.plugins import region_labeling, segmentation
    from timagetk.plugins import labels_post_processing
except ImportError as e:
    raise ImportError('Import Error: {}'.format(e))

out_path = './results/' # to save results
if not os.path.isdir(out_path):
    os.mkdir(out_path)

# we consider an input image
# SpatialImage instance
input_img = imread(data_path('input.tif'))

# optional denoising block
smooth_img = linear_filtering(input_img, std_dev=2.0,
                              method='gaussian_smoothing')
asf_img = morphology(smooth_img, max_radius=3,
                     method='co_alternate_sequential_filter')

ext_img = h_transform(asf_img, h=150,
                      method='h_transform_min')
con_img = region_labeling(ext_img, low_threshold=1,
                          high_threshold=150,
                          method='connected_components')
seg_img = segmentation(smooth_img, con_img, control='first',
                       method='seeded_watershed')

# optional post-processing block
pp_img = labels_post_processing(seg_img, radius=1,
                                iterations=1,
                                method='labels_erosion')

res_name = 'example_segmentation.tif'
imsave(out_path+res_name, pp_img)

The following images show each step of a segmentation pipeline.

_images/seeded_watershed_segmentation.png

Depending on the grayscale image source, the result of the seed extraction steps (h_transform + region_labeling) may differ:

_images/seed_extraction_inputs.png

Depending on the grayscale image source, but with similar seeds, the result of the watershed step (segmentation) may differ:

_images/watershed_inputs.png

The following images illustrate the impact of the control parameter when performing seeded watershed segmentation:

_images/watershed_controls.png

Labelled images