This paper proposes a way of better approximating continuous, two-dimensional morphology in the discrete domain by allowing for irregularly sampled input and output signals. We generalize previous work to allow for a greater variety of structuring elements, both flat and non-flat. Experimentally, we show improved results over regular, discrete morphology with respect to the approximation of continuous morphology. It is also worth noting that the number of output samples can often be reduced without sacrificing the quality of the approximation, since the morphological operators usually generate output signals with many plateaus, which, intuitively, do not need a large number of samples to be correctly represented. Finally, the paper presents some results showing adaptive morphology on irregularly sampled signals.
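To make the operation concrete, the sketch below implements dilation of an irregularly sampled 1D signal with either a flat or a non-flat structuring element. It is a minimal illustration of the idea, not the authors' implementation; the radius, the parabolic structuring function, and all sample data are chosen for the example.

```python
# Dilation of an irregularly sampled signal: at each output position, take
# the supremum of the input samples inside the structuring element support,
# optionally offset by a non-flat structuring function b(dx).
import numpy as np

def dilate_irregular(xs, fs, out_xs, radius, b=None):
    xs, fs, out_xs = map(np.asarray, (xs, fs, out_xs))
    out = np.full(len(out_xs), -np.inf)
    for i, x in enumerate(out_xs):
        dx = xs - x
        mask = np.abs(dx) <= radius            # support of the structuring element
        if mask.any():
            vals = fs[mask]
            if b is not None:                  # non-flat case
                vals = vals + b(dx[mask])
            out[i] = vals.max()
    return out

rng = np.random.default_rng(0)
xs = np.sort(rng.uniform(0, 10, 200))          # irregular input sample positions
fs = np.sin(xs) + 0.1 * rng.standard_normal(200)
out_xs = np.linspace(0, 10, 50)                # fewer output samples suffice on plateaus
flat = dilate_irregular(xs, fs, out_xs, radius=0.5)
parabolic = dilate_irregular(xs, fs, out_xs, radius=0.5, b=lambda d: -4.0 * d**2)
```

Erosion follows by replacing the supremum with an infimum (and reflecting the structuring function).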
The stochastic watershed is an unsupervised segmentation tool recently proposed by Angulo and Jeulin. By repeated application of the seeded watershed with randomly placed markers, a probability density function for object boundaries is created. In a second step, the algorithm then generates a meaningful segmentation of the image using this probability density function. The method performs best when the image contains regions of similar size, since it tends to break up larger regions and merge smaller ones. We propose two simple modifications that greatly improve the properties of the stochastic watershed: (1) add noise to the input image at every iteration, and (2) distribute the markers using a randomly placed grid. The noise strength is a new parameter to be set, but the output of the algorithm is not very sensitive to this value. In return, the output becomes less sensitive to the two parameters of the standard algorithm. The improved algorithm does not break up larger regions, effectively making the algorithm useful for a larger class of segmentation problems.
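A minimal Monte Carlo sketch of the two modifications could look as follows, with scikit-image's standard watershed standing in for the authors' implementation; the grid spacing `step` and noise strength `sigma` are illustrative values, not those used in the paper.

```python
import numpy as np
from skimage.segmentation import watershed, find_boundaries

def robust_stochastic_watershed_pdf(gradient, n_iter=100, step=20, sigma=0.05):
    pdf = np.zeros(gradient.shape)
    rng = np.random.default_rng()
    for _ in range(n_iter):
        noisy = gradient + rng.normal(0, sigma, gradient.shape)  # (1) noise per iteration
        oy, ox = rng.integers(0, step, 2)                        # (2) randomly shifted grid
        ys, xs = np.mgrid[oy:gradient.shape[0]:step, ox:gradient.shape[1]:step]
        markers = np.zeros(gradient.shape, dtype=int)
        markers[ys, xs] = np.arange(1, ys.size + 1).reshape(ys.shape)
        labels = watershed(noisy, markers)
        pdf += find_boundaries(labels, mode='thick')             # accumulate boundary PDF
    return pdf / n_iter
```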
Adaptive structuring elements modify their shape and size according to the image content and may outperform fixed structuring elements. Without any restrictions, they suffer from high computational complexity, often higher than linear in the number of pixels in the image. This paper introduces adaptive structuring elements that have a predefined shape, but whose size is adjusted to the local image structures. The size of the adaptive structuring elements is determined by a salience map that encodes the salience of the edges in the image and can be computed in linear time. We illustrate the difference between the new adaptive structuring elements and morphological amoebas. As an example of their usefulness, we show how the new adaptive morphological operations can isolate text in historical documents.
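The brute-force sketch below conveys the idea (without the paper's linear-time construction): a gradient-based edge map stands in for the edge salience, the distance to the nearest edge caps the structuring element size, and a square window approximates the disk for brevity.

```python
import numpy as np
from scipy import ndimage as ndi

def salience_sized_dilation(img, r_max=7):
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)
    edges = grad > 2 * grad.mean()                 # crude stand-in for salient edges
    dist = ndi.distance_transform_edt(~edges)      # distance to the nearest edge
    radius = np.minimum(dist, r_max).astype(int)   # SE never reaches across an edge
    out = img.copy()
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            r = radius[y, x]
            if r > 0:
                out[y, x] = img[max(0, y - r):y + r + 1,
                                max(0, x - r):x + r + 1].max()
    return out
```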
We present an up-to-date survey on the topic of adaptive mathematical morphology. A broad review of research performed within the field is provided, as well as an in-depth summary of its theoretical advances. Adaptivity can be introduced in many different ways, based on different attributes, measures, and parameters. Similarities and differences between a few selected methods for adaptive structuring elements are considered, providing perspective on the consequences of different types of adaptivity. We also provide a brief analysis of perspectives and trends within the field, discussing possible directions for future studies.
The Hit-or-Miss Transform is a fundamental morphological operator that can be used for template matching. In this paper, we present a framework for an adaptive Hit-or-Miss Transform, in which the structuring elements adapt to the input image itself. We illustrate the difference between the new adaptive Hit-or-Miss Transform and the classical Hit-or-Miss Transform. As an example of its usefulness, we show how the new adaptive Hit-or-Miss Transform can detect particles in single-molecule imaging.
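For reference, the classical transform is available in SciPy; the toy example below detects isolated foreground pixels, with `structure1` required to hit the foreground and `structure2` required to miss it.

```python
import numpy as np
from scipy import ndimage as ndi

img = np.zeros((7, 7), dtype=bool)
img[3, 3] = True                       # an isolated foreground pixel

hit = np.array([[0, 0, 0],
                [0, 1, 0],
                [0, 0, 0]])            # foreground template: the pixel itself
miss = np.array([[1, 1, 1],
                 [1, 0, 1],
                 [1, 1, 1]])           # background template: empty 8-neighbourhood

isolated = ndi.binary_hit_or_miss(img, structure1=hit, structure2=miss)
assert isolated[3, 3]
```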
It has been shown that salience maps based on the salience distance transform are useful for the construction of spatially adaptive structuring elements. In this paper, we propose salience-based parabolic structuring functions that are defined on a fixed, predefined spatial support and have low computational complexity. In addition, we discuss how to properly define adjunct morphological operators using the new spatially adaptive structuring functions. It is also possible to obtain flat adaptive structuring elements by thresholding the salience-based parabolic structuring functions.
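One plausible form of such a structuring function is sketched below; the exact dependence of the parabola's curvature on the salience s is an assumption for illustration, not the definition used in the paper.

```python
import numpy as np

def parabolic_sf(s, r=5, c=1.0, eps=1e-3):
    """Parabolic structuring function on a fixed (2r+1)x(2r+1) support,
    steeper (hence effectively smaller) where the edge salience s is high."""
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    return -(c * s + eps) * (xx**2 + yy**2)

def flat_se_from_sf(sf, h=1.0):
    """Thresholding the structuring function yields a flat, disk-like element."""
    return sf >= -h

# Grayscale dilation at pixel p with the non-flat element:
# out[p] = max over offsets q of (img[p + q] + sf[q]).
```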
Spatially adaptive structuring elements adjust their shape to the local structures in the image, and are often defined by a ball in a geodesic distance or gray-weighted distance metric space. This paper introduces salience adaptive structuring elements as spatially variant structuring elements that modify not only their shape, but also their size according to the salience of the edges in the image. Consequently, they have good filtering properties.
Spatially adaptive structuring elements adjust their shape to the local structures in the image, and are often defined by a ball in a geodesic distance or gray-weighted distance metric space. This paper introduces salience adaptive structuring elements as spatially variant structuring elements that modify not only their shape, but also their size according to the salience of the edges in the image. Morphological operators with salience adaptive structuring elements shift edges with high salience to a lesser extent than those with low salience. Salience adaptive structuring elements are less flexible than morphological amoebas, and their shape is less affected by noise in the image. Consequently, morphological operators using salience adaptive structuring elements have better filtering properties.
Histopathology of testicular tissue is considered to be the most sensitive tool to detect adverse effects on male reproduction. When assessing tissue damage, the seminiferous epithelium needs to be classified into different stages to detect certain types of cell damage, but stage identification is a demanding task. The authors present a method to identify the 12 stages in mink testicular tissue. The staging system uses Gata-4 immunohistochemistry to visualize acrosome development and proved to be both intraobserver-reproducible and interobserver-reproducible, with a substantial agreement of 83.6% (kappa=0.81) and 70.5% (kappa=0.67), respectively. To further advance and objectify this method, they present a computerized staging system that identifies these 12 stages. This program has an agreement of 52.8% (kappa=0.47) with the consensus staging by 2 investigators. The authors propose a pooling of the stages into 5 groups based on morphology, stage transition, and toxicologically important endpoints. The computerized program then reached a substantial agreement of 76.7% (kappa=0.69). The computerized staging tool uses local ternary patterns to describe the texture of the tubules and a support vector machine classifier to learn which textures correspond to which stages. The results have the potential to modernize the tedious staging process required in toxicological evaluation of testicular tissue, especially if combined with whole-slide imaging and automated tubular segmentation.
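The texture-description step can be sketched as follows: local ternary patterns split into upper and lower binary codes, pooled into histograms, and fed to a support vector machine. The 3x3 neighbourhood, the threshold t, and the SVM settings are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np
from sklearn.svm import SVC

def ltp_histograms(img, t=5):
    img = img.astype(int)
    c = img[1:-1, 1:-1]                    # centre pixels
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        upper += (n >= c + t).astype(int) << bit   # +1 codes -> upper pattern
        lower += (n <= c - t).astype(int) << bit   # -1 codes -> lower pattern
    hu, _ = np.histogram(upper, bins=256, range=(0, 256))
    hl, _ = np.histogram(lower, bins=256, range=(0, 256))
    return np.concatenate([hu, hl]).astype(float)

# One feature vector per tubule, then a standard SVM:
# X = np.vstack([ltp_histograms(patch) for patch in tubule_patches])
# clf = SVC(kernel='rbf').fit(X, stage_labels)
```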
Computerized image processing has provided us with valuable tools for analyzing histology images. However, histology images are complex, and an algorithm developed for one data set may not work on a new, unseen data set. The preparation procedure of the tissue before imaging can significantly affect the resulting image. Even for the same staining method, factors like delayed fixation may alter the image quality. In this paper we address the challenging problem of designing a method that works on data sets with strongly varying quality. In environmental research, due to the distance between the site where wild animals are caught and the laboratory, there is always a delay in fixation. Here we suggest a segmentation method based on the structural information of the epithelial cell layer in testicular tissue. The cell nuclei are detected using the fast radial symmetry filter, and a graph is constructed on top of the epithelial cells. A graph-cut optimization method is used to cut the links between cells of different tubules. The algorithm was tested on five groups of animals: group one was fixed immediately; three groups were left at room temperature for 18, 30 and 42 hours, respectively, before fixation; and group five was frozen after 6 hours at room temperature and then thawed. The suggested algorithm gives promising results for the whole data set.
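The graph-construction step might be sketched as below: detected nuclei become vertices (the paper uses the fast radial symmetry filter; any nucleus detector can stand in), a Delaunay triangulation supplies candidate links, and edge weights decay with distance so that the subsequent graph cut preferentially severs the long links spanning the gap between tubules. The decay constant is an assumed value.

```python
import numpy as np
from scipy.spatial import Delaunay

def nuclei_graph(points, scale=20.0):
    points = np.asarray(points, dtype=float)
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:                  # each triangle contributes 3 edges
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    # Nearby nuclei are strongly linked; links between tubules are weak/cheap to cut.
    return {(a, b): np.exp(-np.linalg.norm(points[a] - points[b]) / scale)
            for a, b in edges}
```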
While novel whole-plant phenotyping technologies have been successfully implemented into functional genomics and breeding programs, the potential of automated phenotyping with cellular resolution is largely unexploited. Laser scanning confocal microscopy has the potential to close this gap by providing spatially highly resolved images containing anatomic as well as chemical information on a subcellular basis. However, in the absence of automated methods, the assessment of the spatial patterns and abundance of fluorescent markers with subcellular resolution is still largely qualitative and time-consuming. Recent advances in image acquisition and analysis, coupled with improvements in microprocessor performance, have brought such automated methods within reach, so that information from thousands of cells per image for hundreds of images may be derived in an experimentally convenient time-frame. Here, we present a MATLAB-based analytical pipeline to (1) segment radial plant organs into individual cells, (2) classify cells into cell type categories based upon Random Forest classification, (3) divide each cell into sub-regions, and (4) quantify fluorescence intensity to a subcellular degree of precision for a separate fluorescence channel. In this research advance, we demonstrate the precision of this analytical process for the relatively complex tissues of Arabidopsis hypocotyls at various stages of development. High speed and robustness make our approach suitable for phenotyping of large collections of stem-like material and other tissue types.
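Step (2) reduces to standard supervised classification; a scikit-learn sketch of that single step is given below with stand-in features (the published pipeline is MATLAB-based, and the real feature set is richer).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((500, 3))        # per-cell features (stand-ins, e.g., area, position)
y = rng.integers(0, 4, 500)     # cell-type labels for 4 hypothetical categories

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # ~chance level on random data
```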
The properties of the fibre/matrix interface contribute to the stiffness, strength and fracture behaviour of fibre-reinforced composites. In cellulosic composites, the limited affinity between the hydrophilic fibres and the hydrophobic thermoplastic matrix remains a challenge, and the reinforcing capability of the fibres is hence not fully utilized. A direct characterisation of the stress transfer ability through pull-out tests on single fibres is extremely cumbersome due to the small dimensions of the wood fibres. Here a novel approach is proposed: the length distribution of the fibres sticking out of the matrix at the fracture surface is approximated using X-ray microtomography and is used as an estimate of the adhesion between the fibres and the matrix. When a crack grows in the material, the fibres will either break or be pulled out of the matrix depending on their adhesion to the matrix: good adhesion between the fibres and the matrix should result in more fibre breakage and less pull-out of the fibres than poor adhesion. The effect of acetylation on the adhesion between the wood fibres and the PLA matrix was evaluated at different moisture contents using the proposed method. By using an acetylation treatment of the fibres, it was possible to improve the strength of composite samples soaked in water by more than 30%.
Many algorithms in image analysis require a priority queue, a data structure that holds pointers to pixels in the image, and which allows efficiently finding the pixel in the queue with the highest priority. However, very few articles describing such image analysis algorithms specify which implementation of the priority queue was used. Many assessments of priority queues can be found in the literature, but mostly in the context of numerical simulation rather than image analysis. Furthermore, due to the ever-changing characteristics of computing hardware, performance evaluated empirically 10 years ago is no longer relevant. In this paper I revisit priority queues as used in image analysis routines, evaluate their performance in a very general setting, and come to a very different conclusion than other authors: implicit heaps are the most efficient priority queues. At the same time, I propose a simple modification of the hierarchical queue (or bucket queue) that is more efficient than the implicit heap for extremely large queues.
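The two contenders are easy to juxtapose; below is a minimal bucket (hierarchical) queue for monotonically non-decreasing integer priorities, as arise in flooding-type image algorithms, next to Python's implicit binary heap. The paper's proposed modification of the bucket queue is not reproduced here.

```python
import heapq
from collections import deque

class BucketQueue:
    """One FIFO bucket per integer priority; pop returns the lowest priority.
    Assumes popped priorities never decrease (true for Dijkstra-like flooding)."""
    def __init__(self, max_priority):
        self.buckets = [deque() for _ in range(max_priority + 1)]
        self.cur = max_priority

    def push(self, priority, item):
        self.buckets[priority].append(item)
        self.cur = min(self.cur, priority)

    def pop(self):
        while not self.buckets[self.cur]:   # scan to the next non-empty bucket
            self.cur += 1
        return self.buckets[self.cur].popleft()

# Implicit heap equivalent:
h = []
heapq.heappush(h, (3, "pixel a"))
heapq.heappush(h, (1, "pixel b"))
assert heapq.heappop(h) == (1, "pixel b")
```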
The stochastic watershed is a method for unsupervised image segmentation proposed by Angulo and Jeulin (2007). The method first computes a probability density function (PDF), assigning to each piece of contour in the image the probability to appear as a segmentation boundary in seeded watershed segmentation with randomly selected seeds. Contours that appear with high probability are assumed to be more important. This PDF is then post-processed to obtain a final segmentation. The main computational hurdle with the stochastic watershed method is the calculation of the PDF. In the original publication by Angulo and Jeulin, the PDF was estimated by Monte Carlo simulation, i.e., repeatedly selecting random markers and performing seeded watershed segmentation. Meyer and Stawiaski (2010) showed that the PDF can be calculated exactly, without performing any Monte Carlo simulations, but did not provide any implementation details. In a naive implementation, the computational cost of their method is too high to make it useful in practice. Here, we extend the work of Meyer and Stawiaski by presenting an efficient (quasi-linear) algorithm for exact computation of the PDF. We demonstrate that in practice, the proposed method is faster than any previously reported method by more than two orders of magnitude. The algorithm is formulated for general undirected graphs, and thus trivially generalizes to images with any number of dimensions.
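The heart of the exact computation can be sketched in a few lines: process the graph's edges in increasing weight order (Kruskal), and give each merging edge, by inclusion-exclusion, the probability that k uniform i.i.d. seeds hit both of the components it merges. This sketch glosses over equal-weight plateaus and the mapping of probabilities back to pixels, which the full algorithm handles.

```python
def exact_sw_pdf(n_vertices, edges, k):
    """edges: iterable of (weight, u, v); returns {(u, v): boundary probability}
    for the edges of the minimum spanning tree, with k uniform random seeds."""
    parent = list(range(n_vertices))
    size = [1] * n_vertices

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    N = float(n_vertices)
    pdf = {}
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                        # this edge merges components A and B
            a, b = size[ru], size[rv]
            # P(at least one seed in A and at least one in B):
            pdf[(u, v)] = (1 - (1 - a / N)**k - (1 - b / N)**k
                           + (1 - (a + b) / N)**k)
            parent[ru] = rv
            size[rv] += a
    return pdf
```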
Seeded segmentation with minimum spanning forests, also known as segmentation by watershed cuts, is a powerful method for supervised image segmentation. Given that correct segmentation labels are provided for a small set of image elements, called seeds, the watershed cut method completes the labeling for all image elements so that the boundaries between different labels are optimally aligned with salient edges in the image. Here, a randomized version of watershed segmentation, the targeted stochastic watershed, is proposed for performing multi-label targeted image segmentation with stochastic seed input. The input to the algorithm is a set of probability density functions (PDFs), one for each segmentation label, defined over the pixels of the image. For each pixel, we calculate the probability that the pixel is assigned a given segmentation label in seeded watershed segmentation with seeds drawn from the input PDFs. We propose an efficient algorithm (quasi-linear with respect to the number of image elements) for calculating the desired probabilities exactly.
The stochastic watershed is a method for identifying salient contours in an image, with applications to image segmentation. The method computes a probability density function (PDF), assigning to each piece of contour in the image the probability to appear as a segmentation boundary in seeded watershed segmentation with randomly selected seedpoints. Contours that appear with high probability are assumed to be more important. This paper concerns an efficient method for computing the stochastic watershed PDF exactly, without performing any actual seeded watershed computations. A method for exact evaluation of stochastic watersheds was proposed by Meyer and Stawiaski (2010). Their method does not operate directly on the image, but on a compact tree representation where each edge in the tree corresponds to a watershed partition of the image elements. The output of the exact evaluation algorithm is thus a PDF defined over the edges of the tree. While the compact tree representation is useful in its own right, it is in many cases desirable to convert the results from this abstract representation back to the image, e.g., for further processing. Here, we present an efficient linear-time algorithm for performing this conversion.
An improved method based on X-ray microtomography is developed for estimating the fibre length distribution of short-fibre composite materials. In particular, a new method is proposed for correcting the biasing effects caused by the finite sample size, as defined by the limited field of view of the tomographic devices. The method is first tested on computer-generated fibre data and then applied to analyzing the fibre length distribution in three different types of wood fibre reinforced composite materials. The results were compared with those obtained by an independent method based on manual registration of fibres in images from a light microscope. The method can be applied in quality control and in verifying the effects of processing parameters on the fibre length and on the relevant mechanical properties of short-fibre composite materials, e.g., stiffness, strength and fracture toughness.
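As a simplified stand-in for the proposed correction, a Miles–Lantuéjoul-style reweighting illustrates the principle in 1D: a fibre of length l placed uniformly at random lies entirely inside a field of view of extent W with probability proportional to (W - l), so fully observed fibres are reweighted by 1/(W - l). The paper's actual correction differs in its details.

```python
import numpy as np

def corrected_length_histogram(lengths, W, bins=30):
    """lengths: fully observed fibre lengths; W: field-of-view extent (1D model)."""
    lengths = np.asarray(lengths, dtype=float)
    lengths = lengths[lengths < W]
    weights = 1.0 / (W - lengths)          # undo the (W - l) inclusion bias
    hist, edges = np.histogram(lengths, bins=bins, range=(0, W), weights=weights)
    return hist / hist.sum(), edges        # normalized length distribution
```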
The stochastic watershed is a segmentation algorithm that estimates the importance of each boundary by repeatedly segmenting the image using a watershed with randomly placed seeds. Recently, this algorithm was further developed in two directions: (1) The exact evaluation algorithm efficiently produces the result of the stochastic watershed with an infinite number of repetitions. This algorithm computes the probability for each boundary to be found by a watershed with random seeds, making the result deterministic and much faster. (2) The robust stochastic watershed improves the usefulness of the segmentation result by avoiding false edges in large regions of uniform intensity. This algorithm simply adds noise to the input image for each repetition of the watershed with random seeds. In this paper, we combine these two algorithms into a method that produces a segmentation result comparable to the robust stochastic watershed, with a considerably reduced computation time. We propose to run the exact evaluation algorithm three times, with uniform noise added to the input image, to produce three different estimates of probabilities for the edges. We combine these three estimates with the geometric mean. In a relatively simple segmentation problem, F-measures averaged over the results on 46 images were identical to those of the robust stochastic watershed, but the computation times were an order of magnitude shorter.
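The recipe maps directly to code. The sketch below reuses the `exact_sw_pdf` function sketched earlier (any exact-evaluation implementation would do); the noise amplitude is an illustrative value, and edges absent from a run's minimum spanning tree contribute a (clamped) zero to that run's estimate.

```python
import numpy as np

def combined_pdf(weights, n_vertices, edge_list, k, noise=0.05, runs=3):
    rng = np.random.default_rng()
    estimates = []
    for _ in range(runs):
        noisy = weights + rng.uniform(-noise, noise, len(weights))
        edges = [(w, u, v) for w, (u, v) in zip(noisy, edge_list)]
        pdf = exact_sw_pdf(n_vertices, edges, k)
        estimates.append(np.array([pdf.get(e, 0.0) for e in edge_list]))
    # Geometric mean of the per-edge estimates from the three runs:
    return np.exp(np.mean(np.log(np.maximum(estimates, 1e-12)), axis=0))
```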
The current state of the art in tree-ring analysis and densitometry is still mainly limited to two dimensions and mostly requires proper treatment of the sample surface. In this paper we elaborate on the potential of helical X-ray computed tomography for 3D tree-ring analysis. Microdensitometric profiles are obtained by processing the reconstructed volumes. Correcting for the structure direction, taking into account the angle of the growth rings and the grain, results in very accurate microdensity and precise ring width measurements. Both a manual and an automated methodology are proposed here, for which the MATLAB code is available. Examples are given for pine (Pinus sylvestris L.), oak (Quercus robur L.) and teak (Tectona grandis L.). Overall, the methodologies applied here to the 3D volumes are useful for growth-related studies, enabling fast and non-destructive analysis.
During animal development, complex patterns of gene expression provide positional information within the embryo. To better understand the underlying gene regulatory networks, the Berkeley Drosophila Transcription Network Project (BDTNP) has developed methods that support quantitative computational analysis of three-dimensional (3D) gene expression in early Drosophila embryos at cellular resolution. We introduce PointCloudXplore (PCX), an interactive visualization tool that supports visual exploration of relationships between different genes' expression using a combination of established visualization techniques. Two aspects of gene expression are of particular interest: 1) gene expression patterns defined by the spatial locations of cells expressing a gene and 2) relationships between the expression levels of multiple genes. PCX provides users with two corresponding classes of data views: 1) Physical Views based on the spatial relationships of cells in the embryo and 2) Abstract Views that discard spatial information and plot expression levels of multiple genes with respect to each other. Cell Selectors highlight data associated with subsets of embryo cells within a View. Using linking, these selected cells can be viewed in multiple representations. We describe PCX as a 3D gene expression visualization tool and provide examples of how it has been used by BDTNP biologists to generate new hypotheses.
Completely segmenting all individual wood fibres in volume images of fibrous materials is a challenging problem, but it is important for understanding the micromechanical properties of composite materials. This paper presents a filter that identifies and closes pores in wood fibre walls, simplifying the shape of the fibres. After this filtering step, a novel segmentation method based on graph cuts identifies individual fibres. The methods are validated on a realistic synthetic fibre data set and then applied to μCT images of wood fibre composites.
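A plain morphological closing conveys the spirit of the pore-closing step (the paper's filter is more selective, targeting pores in the fibre walls specifically):

```python
import numpy as np
from scipy import ndimage as ndi

wall = np.ones((9, 9, 9), dtype=bool)
wall[4, 4, 4] = False                 # a one-voxel pore inside the fibre wall
closed = ndi.binary_closing(wall, structure=ndi.generate_binary_structure(3, 1))
assert closed[4, 4, 4]                # the pore has been filled
```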
With increased resolution in X-ray computed tomography, refraction contributes increasingly to the attenuation signal. Though refraction can potentially be exploited, the artifacts it causes often need to be removed from the image. In this paper, we propose a postprocessing method, based on deconvolution, that is able to remove these artifacts after conventional reconstruction. This method offers two advantages over existing projection-based (preprocessing) phase-retrieval or phase-removal algorithms. First, evaluation of the parameters can be done very quickly, improving the overall speed of the method. Second, postprocessing methods can be applied when projection data is not available, as is the case in several commercial systems with closed software or when the projection data has been deleted. It is shown that the proposed method performs comparably to state-of-the-art methods in terms of image quality.
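A heavily simplified sketch of the idea, using a standard Wiener deconvolution in place of the paper's method and an assumed edge-enhancing forward kernel (an impulse minus a scaled Laplacian of Gaussian); sigma, alpha, and balance are illustrative parameters.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.restoration import wiener

def remove_edge_enhancement(img, sigma=2.0, alpha=0.5, balance=0.1):
    """img: reconstructed slice, scaled to roughly [0, 1]."""
    size = int(6 * sigma) | 1
    delta = np.zeros((size, size))
    delta[size // 2, size // 2] = 1.0
    g = ndi.gaussian_filter(delta, sigma)
    psf = delta - alpha * ndi.laplace(g)   # assumed refraction (edge-enhancing) kernel
    psf /= psf.sum()
    return wiener(img, psf, balance)       # deconvolve to suppress the fringes
```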
X-ray computerized tomography at micrometer resolution (μCT) is an important tool for understanding the properties of wood fibre materials such as paper, cardboard and wood fibre composites. While many image analysis methods have been developed for μCT images in wood science, the evaluation of these methods is often not thorough enough because of the lack of data sets with ground truth. This paper describes the generation of synthetic μCT volumes of wood fibre materials. Fibres with a high degree of morphological variation are modeled and densely packed into a volume of the material. Using a simulation of the μCT image acquisition process, realistic synthetic images are obtained. This simulation uses noise characteristics estimated from a set of real μCT images. The synthetic images have a known ground truth and can therefore be used to evaluate image analysis methods.
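The noise-injection step can be illustrated with a toy model: signal-dependent Poisson photon statistics plus Gaussian read-out noise, with parameters that would in practice be estimated from real μCT images (the values below are arbitrary, and the full simulation also models the acquisition geometry).

```python
import numpy as np

def add_uct_noise(volume, photons=500.0, read_sigma=0.01, rng=None):
    """volume: non-negative intensities, e.g., scaled to [0, 1]."""
    rng = rng or np.random.default_rng()
    noisy = rng.poisson(volume * photons) / photons            # photon (shot) noise
    return noisy + rng.normal(0.0, read_sigma, volume.shape)   # read-out noise
```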