Single image super-resolution (SR) reconstruction aims to estimate a noise-free and blur-free high resolution image from a single blurred and noisy lower resolution observation. Most existing SR reconstruction methods assume that noise in the image is white Gaussian. Noise resulting from photon counting devices, as commonly used in image acquisition, is, however, better modelled with a mixed Poisson-Gaussian distribution. In this study we propose a single image SR reconstruction method based on energy minimization for images degraded by mixed Poisson-Gaussian noise. We evaluate performance of the proposed method on synthetic images, for different levels of blur and noise, and compare it with recent methods for non-Gaussian noise. Analysis shows that the appropriate treatment of signal-dependent noise, provided by our proposed method, leads to significant improvement in reconstruction performance.
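A minimal sketch of such an energy, assuming a weighted least-squares approximation of the mixed Poisson-Gaussian negative log-likelihood rather than the paper's exact formulation; the forward operator `H`, gain `a` and Gaussian standard deviation `sigma` are illustrative placeholders:

```python
import numpy as np

def data_term(Hx, y, a=1.0, sigma=0.1):
    # Weighted least squares: the variance a*Hx + sigma^2 models the
    # signal-dependent (Poisson) plus signal-independent (Gaussian) noise.
    var = a * np.maximum(Hx, 0.0) + sigma**2
    return 0.5 * np.sum((Hx - y)**2 / var)

def smooth_tv(x, eps=1e-3):
    # Smoothed total variation of the high-resolution estimate x.
    gx = np.diff(x, axis=0, append=x[-1:, :])
    gy = np.diff(x, axis=1, append=x[:, -1:])
    return np.sum(np.sqrt(gx**2 + gy**2 + eps**2))

def sr_energy(x, H, y, lam=0.05, a=1.0, sigma=0.1):
    # H is a callable mapping the high-resolution image x to the
    # low-resolution grid (blur followed by downsampling).
    return data_term(H(x), y, a, sigma) + lam * smooth_tv(x)
```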
We present a segmentation method that estimates the relative coverage of each pixel in a sensed image by each image component. The proposed super-resolution blur-aware model, which utilizes a priori knowledge of the image blur for linear unmixing of image intensities, relies on a sparsity promoting approach expressed by two main requirements: (i) minimization of Huberized total variation, providing smooth object boundaries and noise removal, and (ii) minimization of nonedge image fuzziness, responding to an assumption that imaged objects are crisp and that fuzziness is mainly due to the imaging and digitization process. Edge fuzziness due to partial coverage is allowed, enabling subpixel precise feature estimates. The segmentation is formulated as an energy minimization problem and solved by the spectral projected gradient method, utilizing a graduated nonconvexity scheme. Quantitative and qualitative evaluation on synthetic and real multichannel images confirms good performance, particularly relevant when subpixel precision in segmentation and subsequent analysis is a requirement.
Noise and blur, present in images after acquisition, negatively affect their further analysis. For image enhancement when the Point Spread Function (PSF) is unknown, blind deblurring is suitable, where both the PSF and the original image are simultaneously reconstructed. In many realistic imaging conditions, noise is modelled as a mixture of Poisson (signal-dependent) and Gaussian (signal-independent) noise. In this paper we propose a blind deconvolution method for images degraded by such mixed noise. The method is based on regularized energy minimization. We evaluate its performance on synthetic images, for different blur kernels and different levels of noise, and compare with non-blind restoration. We illustrate the performance of the method on Transmission Electron Microscopy images of cilia, used in clinical practice for diagnosis of a particular type of genetic disorder.
Most energy minimization-based restoration methods are developed for signal-independent Gaussian noise. The assumption of a Gaussian noise distribution leads to a quadratic data fidelity term, which is appealing in optimization. When an image is acquired with a photon counting device, it contains signal-dependent Poisson or mixed Poisson–Gaussian noise. We quantify the loss in performance that occurs when a restoration method suited for Gaussian noise is utilized for mixed noise. Signal-dependent noise can be treated by methods based either on the classical maximum a posteriori (MAP) probability approach or on a variance stabilizing transformation (VST) approach. We compare the performance of these approaches on a large set of images and observe that VST-based methods outperform those based on MAP in both quality of restoration and computational efficiency. We quantify the improvement achieved by utilizing Huber regularization instead of classical total variation regularization. The conclusion from our study is a recommendation to utilize a VST-based approach combined with regularization by the Huber potential for restoration of images degraded by blur and signal-dependent noise. This combination provides a robust and flexible method with good performance and high speed.
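A minimal sketch of the recommended combination, assuming the generalized Anscombe transform as the VST and the standard Huber potential; the gain `a`, Gaussian standard deviation `sigma`, and threshold `delta` are illustrative parameters, not values from the study:

```python
import numpy as np

def gat(z, a=1.0, sigma=0.1):
    # Generalized Anscombe transform: maps mixed Poisson-Gaussian data to
    # approximately unit-variance Gaussian data.
    return (2.0 / a) * np.sqrt(np.maximum(a * z + 0.375 * a**2 + sigma**2, 0.0))

def huber(t, delta=0.05):
    # Huber potential: quadratic near zero, linear in the tails, applied
    # here as a regularization potential to image differences.
    at = np.abs(t)
    return np.where(at <= delta, 0.5 * t**2, delta * (at - 0.5 * delta))
```

After stabilization, a restoration method designed for Gaussian noise, with Huber-regularized differences, can be applied, followed by an (approximate) inverse of the transform.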
We extend the shape signature based on the distance of the boundary points from the shape centroid, to the case of fuzzy sets. The analysis of the transition from crisp to fuzzy shape descriptor is first given in the continuous case. This is followed by a study of the specific issues induced by the discrete representation of the objects in a computer.
We analyze two methods for calculating the signature of a fuzzy shape, derived from two ways of defining a fuzzy set: first, by its membership function, and second, as a stack of its α-cuts. The first approach is based on measuring the length of a fuzzy straight line by integration of the fuzzy membership function, while in the second one we use averaging of the shape signatures obtained for the individual α-cuts of the fuzzy set. The two methods, equivalent in the continuous case for the studied class of fuzzy shapes, produce different results when adjusted to the discrete case. A statistical study, aiming at characterizing the performances of each method in the discrete case, is done. Both methods are shown to provide more precise descriptions than their corresponding crisp versions. The second method (based on averaged Euclidean distance over the α-cuts) outperforms the others.
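A simplified sketch of the second approach (averaging over α-cuts), assuming a basic radial signature that records, for each direction, the largest centroid-to-pixel distance within the cut; the number of cuts and angular bins are arbitrary illustrative choices:

```python
import numpy as np

def crisp_signature(mask, n_angles=180):
    # Distance from the centroid to the farthest object pixel in each
    # angular bin; the cut is assumed to be non-empty.
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    ang = np.arctan2(ys - cy, xs - cx)
    rad = np.hypot(ys - cy, xs - cx)
    bins = ((ang + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
    sig = np.zeros(n_angles)
    np.maximum.at(sig, bins, rad)
    return sig

def fuzzy_signature(membership, alphas=np.linspace(0.05, 0.95, 10)):
    # Average the crisp signatures computed on the alpha-cuts.
    return np.mean([crisp_signature(membership >= a) for a in alphas], axis=0)
```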
We present two different extensions of the Sum of minimal distances and the Complement weighted sum of minimal distances to distances between fuzzy sets. We evaluate to what extent the proposed distances show monotonic behavior with respect to increasing translation and rotation of digital objects, in noise free, as well as in noisy conditions. Tests show that one of the extension approaches leads to distances exhibiting very good performance. Furthermore, we evaluate distance based classification of crisp and fuzzy representations of objects at a range of resolutions. We conclude that the proposed distances are able to utilize the additional information available in a fuzzy representation, thereby leading to improved performance of related image processing tasks.
Local binary pattern (LBP) descriptors have been popular in texture classification in recent years. They were introduced as descriptors of local image texture and their histograms are shown to be well performing texture features. In this paper we introduce two new LBP descriptors, αLBP and its improved variant IαLBP. We evaluate their performance in classification by comparing them with some of the existing LBP descriptors - LBP, ILBP, shift LBP (SLBP) and with one ternary descriptor - LTP. The texture descriptors are evaluated on three datasets - KTH-TIPS2b, UIUC and Virus texture dataset. The novel descriptor outperforms the other descriptors on two datasets, KTH-TIPS2b and Virus, and is tied for first place with ILBP on the UIUC dataset.
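For reference, a minimal sketch of the classical 3x3 LBP descriptor on which these variants build; the αLBP and IαLBP modifications introduced in the paper are not reproduced here:

```python
import numpy as np

def lbp_image(img):
    # Classical 8-neighbour LBP code for each interior pixel of a 2-D
    # grayscale image; border pixels are skipped for simplicity.
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (neigh >= c).astype(np.uint8) << np.uint8(bit)
    return code

def lbp_histogram(img, bins=256):
    # Normalized histogram of LBP codes, used as the texture feature.
    h, _ = np.histogram(lbp_image(img), bins=bins, range=(0, bins))
    return h / max(h.sum(), 1)
```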
Measuring the width and the diameter of a shape are problems well studied in the literature. A pixel coverage representation is one specific type of digital fuzzy representation of a continuous image object, where the (membership) value of each pixel is (approximately) equal to the relative area of the pixel which is covered by the continuous object. Lately, a number of methods for shape analysis use pixel coverage for reducing the error of estimation. We introduce a novel method for estimating the projection of a shape in a given direction. The method is based on utilizing the pixel coverage representation of a shape. Performance of the method is evaluated by a number of tests on synthetic objects, confirming high precision and applicability for calculation of the diameter and elongation of a shape.
Early stage cancer detection is essential for reducing cancer mortality. Screening programs, such as that for cervical cancer, are highly effective in preventing advanced stage cancers. One obstacle to the introduction of screening for other cancer types is the cost associated with manual inspection of the resulting cell samples. Computer assisted image analysis of cytology slides may offer a significant reduction of these costs. We are particularly interested in detection of cancer of the oral cavity, being one of the most common malignancies in the world, with an increasing tendency of incidence among young people. Due to the non-invasive accessibility of the oral cavity, automated detection may enable screening programs leading to early diagnosis and treatment.
It is well known that variations in the chromatin texture of the cell nucleus are an important diagnostic feature. With the aim to maximize reliability of an automated system for oral cancer detection, we evaluate three state-of-the-art deep convolutional neural network (DCNN) approaches which are specialized for texture analysis. A powerful tool for texture description is the local binary pattern (LBP); it describes the pattern of variations in intensity between a pixel and its neighbours, instead of using the image intensity values directly. A neural network can be trained to recognize the range of patterns found in different types of images. Many methods have been proposed which either use LBPs directly, or are inspired by them, and show promising results on a range of different image classification tasks where texture is an important discriminative feature.
We evaluate multiple recently published deep learning-based texture classification approaches: two of them (referred to as Model 1, by Juefei-Xu et al. (CVPR 2017), and Model 2, by Li et al. (2018)) are inspired by LBP texture descriptors, while the third (Model 3, by Marcos et al. (ICCV 2017)), based on Rotation Equivariant Vector Field Networks, aims at preserving fine textural details under rotations, thus enabling a reduced model size. Performances are compared with state-of-the-art results on the same dataset, by Wieslander et al. (CVPR 2017), which are based on ResNet and VGG architectures. Furthermore, a fusion of DCNN with LBP maps, as in Wetzer et al. (Bioimg. Comp. 2018), is evaluated for comparison. Our aim is to explore if focus on texture can improve CNN performance.
Both of the methods based on LBPs exhibit higher performance (F1-score for Model 1: 0.85; Model 2: 0.83) than what is obtained by using CNNs directly on the greyscale data (VGG: 0.78, ResNet: 0.76). This clearly demonstrates the effectiveness of LBPs for this type of image classification task. The approach based on rotation equivariant networks stays behind in performance (F1-score for Model 3: 0.72), indicating that this method may be less appropriate for classifying single-cell images.
Automated detection of cilia in low magnification transmission electron microscopy images is a central task in the quest to relieve the pathologists in the manual, time consuming and subjective diagnostic procedure. However, automation of the process, specifically in low magnification, is challenging due to the similar characteristics of non-cilia candidates. In this paper, a convolutional neural network classifier is proposed to further reduce the false positives detected by a previously presented template matching method. Adding the proposed convolutional neural network increases the area under Precision-Recall curve from 0.42 to 0.71, and significantly reduces the number of false positive objects.
Distance transforms (DTs) are, usually, defined on a binary image as a mapping from each background element to the distance between its centre and the centre of the closest object element. However, due to discretization effects, such DTs have limited precision, including reduced rotational and translational invariance. We show in this paper that a significant improvement in performance of Euclidean DTs can be achieved if voxel coverage values are utilized and the position of an object boundary is estimated with sub-voxel precision. We propose two algorithms of linear time complexity for estimating Euclidean DT with sub-voxel precision. The evaluation confirms that both algorithms provide 4-14 times increased accuracy compared to what is achievable from a binary object representation.
Distance from the boundary of a shape to its centroid, a.k.a. signature of a shape, is a frequently used shape descriptor. Commonly, the observed shape results from a crisp (binary) segmentation of an image. The loss of information associated with binarization leads to a significant decrease in accuracy and precision of the signature, as well as its reduced invariance w.r.t. translation and rotation. Coverage information enables better estimation of edge position within a pixel. In this paper, we propose an iterative method for computing the signature of a shape utilizing its pixel coverage representation. The proposed method iteratively improves the accuracy of the computed signature, starting from a good initial estimate. A statistical study indicates considerable improvements in both accuracy and precision, compared to a crisp approach and a previously proposed approach based on averaging signatures over α-cuts of a fuzzy representation. We observe improved performance of the proposed descriptor in the presence of noise and reduced variation under translation and rotation.
Introduction: Cancer of the oral cavity is one of the most common malignancies in the world. The incidence of oral cavity and oropharyngeal cancer is increasing among young people. It is noteworthy that the oral cavity can be relatively easily accessed for routine screening tests that could potentially decrease the incidence of oral cancer. Automated deep learning computer aided methods show promising ability for detection of subtle precancerous changes at a very early stage, also when visual examination is less effective. Although the biological nature of these malignancy associated changes is not fully understood, the consistency of morphology and textural changes within a cell dataset could shed light on the premalignant state. In this study, we are aiming to increase understanding of this phenomenon by exploring and visualizing what parts of cell images are considered as most important when trained deep convolutional neural networks (DCNNs) are used to differentiate cytological images into normal and abnormal classes.
Materials and methods: Cell samples are collected with a brush at areas of interest in the oral cavity and stained according to standard PAP procedures. Digital images from the slides are acquired with a 0.32 micron pixel size in greyscale format (570 nm bandpass filter). Cell nuclei are manually selected in the images and a small region is cropped around each nucleus, resulting in images of 80x80 pixels. Medical knowledge is not used for choosing the cells; they are randomly selected from the glass, and for the learning process we provide ground truth only on the patient level, not on the cell level. Overall, 10274 images of cell nuclei and the surrounding region are used to train state-of-the-art DCNNs to distinguish between cells from healthy persons and persons with precancerous lesions. Data augmentation through 90-degree rotations and mirroring is applied to the datasets. Different approaches for class activation mapping and related methods are utilized to determine what image regions and feature maps are responsible for the relevant class differentiation.
Results and Discussion: The best performing of the observed deep learning architectures reaches a per-cell classification accuracy surpassing 80% on the observed material. Visualizing the class activation maps confirms our expectation that the network is able to learn to focus on specific relevant parts of the sample regions. We compare and evaluate our findings related to detected discriminative regions with the subjective judgements of a trained cytotechnologist. We believe that this effort on improving understanding of the decision criteria used by machines and humans leads to increased understanding of malignancy associated changes and also improves robustness and reliability of the automated malignancy detection procedure.
We propose a simple and fast method for microscopy image enhancement and quantitatively evaluate its performance on a database containing cell images obtained from microscope setups of several levels of quality. The method utilizes an efficiently and accurately estimated relative modulation transfer function to generate images of higher quality, starting from those of lower quality, by filtering in the Fourier domain. We evaluate the method visually and based on correlation coefficient and normalized mutual information. We conclude that enhanced images exhibit high similarity, both visually and in terms of information content, with acquired high quality images. This is an important result for the development of a cost-effective screening system for cervical cancer.
In this paper we study set distances that are used in image processing. We propose a generalization of the Sum of minimal distances and show that its special cases include a metric based on the Symmetric difference. The Hausdorff metric and the Chamfer matching distances are also closely related to the presented framework. In addition, we define the Complement set distance of a given distance. We evaluate the observed distances with respect to their applicability to image object registration. We perform comparative evaluations with respect to noise sensitivity, as well as with respect to rigid body transformations. We conclude that the family of Generalized sum of minimal distances has many desirable properties for this application.
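A minimal sketch of the basic (symmetric, averaged) Sum of minimal distances between two finite point sets; the normalization shown is one common choice and not necessarily the exact member of the generalized family studied in the paper:

```python
import numpy as np
from scipy.spatial.distance import cdist

def sum_of_minimal_distances(A, B):
    # A and B are arrays of shape (n, d) and (m, d) holding point coordinates.
    D = cdist(A, B)  # pairwise Euclidean distances
    return 0.5 * (D.min(axis=1).mean() + D.min(axis=0).mean())
```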
We introduce the use of DC (difference of convex functions) programming, in combination with convex-concave regularization, as a deterministic approach for solving the optimization problem imposed by defuzzification by feature distance minimization. We provide a DC-based algorithm for finding a solution to the defuzzification problem by expressing the objective function as a difference of two convex functions and iteratively solving a family of DC programs. We compare the performance with the previously recommended method, simulated annealing, on a number of test images. Encouraging results, together with several advantages of the DC-based method, support the use of this approach and motivate its further exploration.
We propose a method for computing, in linear time, the exact Euclidean distance transform of sets of points such that one coordinate of a point can be assigned any real value, whereas the other coordinates are restricted to discrete sets of values. The proposed distance transform is applicable to objects represented by grid line sampling, and readily provides sub-pixel precise distance values. The algorithm is easy to implement; we present complete pseudo code. The method is easy to parallelize and extend to higher dimensional data. We present two ways of obtaining approximate grid line sampled representations, and evaluate the proposed EDT on synthetic examples. The method is competitive w.r.t. state-of-the-art methods for sub-pixel precise distance evaluation.
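The proposed transform is not reproduced here, but the sketch below shows the standard linear-time 1-D lower-envelope pass (Felzenszwalb and Huttenlocher) on which separable exact EDTs of this kind are built; handling of one real-valued coordinate per point is the paper's extension and is omitted:

```python
import math

def dt1d(f):
    # Squared Euclidean distance transform of a 1-D sampled function f in
    # linear time; use a large finite value (e.g. 1e12) for background
    # samples to avoid inf - inf arithmetic.
    n = len(f)
    d = [0.0] * n
    v = [0] * n            # positions of the parabolas in the lower envelope
    z = [0.0] * (n + 1)    # boundaries between consecutive parabolas
    k = 0
    z[0], z[1] = -math.inf, math.inf
    for q in range(1, n):
        s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        while s <= z[k]:
            k -= 1
            s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        k += 1
        v[k] = q
        z[k], z[k + 1] = s, math.inf
    k = 0
    for q in range(n):
        while z[k + 1] < q:
            k += 1
        d[q] = (q - v[k]) ** 2 + f[v[k]]
    return d
```

Applying `dt1d` along the rows of a 0/large-value indicator image and then along the columns of the result yields the squared 2-D EDT.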
We present four novel point-to-set distances defined for fuzzy or gray-level image data, two based on integration over α-cuts and two based on the fuzzy distance transform. We explore their theoretical properties. Inserting the proposed point-to-set distances in existing definitions of set-to-set distances, among which are the Hausdorff distance and the sum of minimal distances, we define a number of distances between fuzzy sets. These set distances are directly applicable for comparing gray-level images or fuzzy segmented objects, but also for detecting patterns and matching parts of images. The distance measures integrate shape and intensity/membership of observed entities, providing a highly applicable tool for image processing and analysis. Performance evaluation of derived set distances in real image processing tasks is conducted and presented. It is shown that the considered distances have a number of appealing theoretical properties and exhibit very good performance in template matching and object classification for fuzzy segmented images as well as when applied directly on gray-level intensity images. Examples include recognition of hand written digits and identification of virus particles. The proposed set distances perform excellently on the MNIST digit classification task, achieving the best reported error rate for classification using only rigid body transformations and a kNN classifier.
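A simplified sketch of one of the described constructions: the point-to-set distance obtained by integrating crisp distances over α-cuts, here approximated by a finite number of evenly spaced levels, with SciPy's `distance_transform_edt` as the crisp building block:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def alpha_cut_point_to_set(membership, alphas=np.linspace(0.1, 1.0, 10)):
    # Returns, for every pixel x, an approximation of d(x, A) for a fuzzy
    # set A given by its membership image.
    acc = np.zeros(membership.shape, dtype=float)
    used = 0
    for a in alphas:
        cut = membership >= a
        if not cut.any():          # empty cuts contribute no finite distance
            continue
        acc += distance_transform_edt(~cut)  # distance to the nearest cut pixel
        used += 1
    return acc / max(used, 1)
```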
SRμCT images of paper and pulp fiber materials are characterized by a low signal to noise ratio. De-noising is therefore a common preprocessing step before segmentation into fiber and background components. We suggest a de-noising method based on total variation minimization using a modified Spectral Conjugate Gradient algorithm. Quantitative evaluation performed on synthetic 3D data and qualitative evaluation on real 3D paper fiber data confirm appropriateness of the suggested method for the particular application.
A defuzzification method based on feature distance minimization is further improved by incorporating into the distance function feature values measured on object representations at different scales. It is noticed that such an approach can improve defuzzification results by better preserving the properties of a fuzzy set; area preservation at scales in-between local (pixel-size) and global (the whole object) provides that characteristics of the fuzzy object are more appropriately exhibited in the defuzzification. For the purpose of comparing sets of different resolution, we propose a feature vector representation of a (fuzzy and crisp) set, utilizing a resolution pyramid. The distance measure is accordingly adjusted. The defuzzification method is extended to the 3D case. Illustrative examples are given.
We present a method for simulating lower quality images starting from higher quality ones, based on acquired image pairs from different optical setups. The method does not require estimates of point (or line) spread functions of the system, but utilizes the relative transfer function derived from images of real specimens of interest in the observed application. Thanks to the use of a larger number of real specimens, excellent stability and robustness of the method is achieved. The intended use is exploring the influence of image quality on features and classification accuracy in pattern recognition based screening tasks. Visual evaluation of the obtained images strongly confirms usefulness of the method. The approach is quantitatively evaluated by observing the stability of feature values, proven useful for PAP-smear classification, between synthetic and real images from seven different microscope setups. The evaluation shows that features from the synthetically generated lower resolution images are as similar to features from real images at that resolution, as features from two different images of the same specimen, taken at the same low resolution, are to each other.
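A hedged sketch of the core idea, assuming a single aligned high/low quality image pair; the averaging over many pairs of real specimens used in the paper is omitted here:

```python
import numpy as np

def relative_transfer_function(high, low, eps=1e-6):
    # Ratio of Fourier magnitudes of an aligned image pair of equal size.
    return (np.abs(np.fft.fft2(low)) + eps) / (np.abs(np.fft.fft2(high)) + eps)

def simulate_low_quality(high, rtf):
    # Filter a high quality image with the estimated relative transfer function.
    return np.real(np.fft.ifft2(np.fft.fft2(high) * rtf))
```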
We present a method which utilizes advantages of fuzzy object representations and image processing techniques adjusted to them, to further increase efficient utilization of image information. Starting from a number of low-resolution images of affine transformations of an object, we create its suitably defuzzified high-resolution reconstruction. We evaluate the proposed method on synthetic data, observing its performance w.r.t. noise sensitivity, influence of the number of used low-resolution images, sensitivity to object variation and to inaccurate registration. Our aim is to explore applicability of the method to real image data acquired by Transmission Electron Microscopy, in a biomedical application we are currently working on.
Oral cancer incidence is rapidly increasing worldwide. The most important determinant factor in cancer survival is early diagnosis. To facilitate large scale screening, we propose a fully automated end-to-end pipeline for oral cancer screening on whole slide cytology images. The pipeline consists of regression based nucleus detection, followed by per cell focus selection, and CNN based classification. We demonstrate that the pipeline provides fast and efficient cancer classification of whole slide cytology images, improving over previous results. The complete source code is made available as open source (https://github.com/MIDA-group/OralScreen).
Image restoration methods, such as denoising, deblurring, inpainting, etc., are often based on the minimization of an appropriately defined energy function. We consider energy functions for image denoising which combine a quadratic data-fidelity term and a regularization term, where the properties of the latter are determined by the chosen potential function. Many potential functions have been suggested for different purposes in the literature. We compare the denoising performance achieved by ten different potential functions. Several methods for efficient minimization of regularized energy functions exist; most, however, are only applicable to particular choices of potential functions. To enable a comparison of all the observed potential functions, we propose to minimize the objective function using a spectral gradient approach; spectral gradient methods put very weak restrictions on the used potential function. We present and evaluate the performance of one spectral conjugate gradient and one cyclic spectral gradient algorithm, and conclude from experiments that both are well suited for the task. We compare the performance with three total variation-based state-of-the-art methods for image denoising. From the empirical evaluation, we conclude that denoising using the Huber potential (for images degraded by higher levels of noise; signal-to-noise ratio below 10 dB) and the Geman and McClure potential (for less noisy images), in combination with the spectral conjugate gradient minimization algorithm, shows the overall best performance.
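A minimal sketch of the regularized denoising energy and of the Geman and McClure potential; the scale parameters are illustrative, not the values used in the experiments:

```python
import numpy as np

def geman_mcclure(t, s=0.05):
    # Geman and McClure potential: bounded, strongly edge-preserving.
    return t**2 / (s**2 + t**2)

def denoising_energy(x, y, potential=geman_mcclure, lam=0.1):
    # Quadratic data fidelity plus regularization of the horizontal and
    # vertical finite differences with the chosen potential.
    dx = np.diff(x, axis=0)
    dy = np.diff(x, axis=1)
    return 0.5 * np.sum((x - y)**2) + lam * (np.sum(potential(dx)) + np.sum(potential(dy)))
```

Any other potential, e.g. the Huber potential sketched earlier, can be passed as `potential`; the spectral (conjugate) gradient minimization itself is not reproduced.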
One of the big challenges in the recognition of biomedical samples is the lack of large annotated datasets. Their relatively small size, when compared to datasets like ImageNet, typically leads to problems with efficient training of current machine learning algorithms. However, the recent development of generative adversarial networks (GANs) appears to be a step towards addressing this issue. In this study, we focus on one instance of GANs, which is known as deep convolutional generative adversarial network (DCGAN). It gained a lot of attention recently because of its stability in generating realistic artificial images. Our article explores the possibilities of using DCGANs for generating HEp-2 images. We trained multiple DCGANs and generated several datasets of HEp-2 images. Subsequently, we combined them with traditional augmentation and evaluated over three different deep learning configurations. Our article demonstrates high visual quality of generated images, which is also supported by state-of-the-art classification results.
This Open Access textbook provides students and researchers in the life sciences with essential practical information on how to quantitatively analyze image data. It refrains from focusing on theory, and instead uses practical examples and step-by-step protocols to familiarize readers with the most commonly used image processing and analysis platforms such as ImageJ, MatLab and Python. Besides gaining know-how on algorithm usage, readers will learn how to create an analysis pipeline using a scripting language; these skills are important in order to document reproducible image analysis workflows.
The textbook is chiefly intended for advanced undergraduates in the life sciences and biomedicine without a theoretical background in data analysis, as well as for postdocs, staff scientists and faculty members who need to perform regular quantitative analyses of microscopy images.
Automated image analysis has become key to extracting quantitative information from biological microscopy images; however, the methods involved are now often so complex that they can no longer be unambiguously described using written protocols. We introduce BIAFLOWS, a web-based framework to encapsulate, deploy, and benchmark automated bioimage analysis workflows (the software implementation of an image analysis method). BIAFLOWS helps to fairly compare image analysis workflows and to disseminate them reproducibly, hence safeguarding research based on their results and promoting the highest quality standards in bioimage analysis.
We present a novel approach to measuring distances between objects in images, suitable for information-rich object representations which simultaneously capture several properties in each image pixel. Multiple spatial fuzzy sets on the image domain, unified in a vector-valued fuzzy set, are used to model such representations. Distance between such sets is based on a novel point-to-set distance suitable for vector-valued fuzzy representations. The proposed set distance may be applied in, e.g., template matching and object classification, with an advantage that a number of object features are simultaneously considered. The distance measure is of linear time complexity w.r.t. the number of pixels in the image. We evaluate the performance of the proposed measure in template matching in presence of noise, as well as in object detection and classification in low resolution Transmission Electron Microscopy images.
Error bounds for estimation of moments from a fuzzy representation of a shape are derived, and compared with estimations from a crisp representation. It is shown that a fuzzy membership function based on the pixel area coverage provides higher accuracy of the estimates, compared to binary Gauss digitization at the same spatial image resolution. Theoretical results are confirmed by a statistical study of disks and squares, where the moments of the shape, up to order two, are estimated from its fuzzy discrete representation. The errors of the estimates decrease both with increased size of a shape (spatial resolution) and increased membership resolution (number of available grey-levels).
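A minimal sketch of moment estimation from a pixel coverage image: each pixel contributes its membership (coverage) value, so the sums approximate the geometric moments of the underlying continuous shape; pixel centres are taken at integer coordinates:

```python
import numpy as np

def coverage_moment(coverage, p, q):
    # Geometric moment m_pq of a shape given by its pixel coverage image.
    ys, xs = np.mgrid[0:coverage.shape[0], 0:coverage.shape[1]]
    return float(np.sum(coverage * (xs ** p) * (ys ** q)))

def centroid(coverage):
    m00 = coverage_moment(coverage, 0, 0)
    return (coverage_moment(coverage, 1, 0) / m00,   # x coordinate
            coverage_moment(coverage, 0, 1) / m00)   # y coordinate
```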
We present a novel method that provides an accurate and precise estimate of the length of the boundary (perimeter) of an object by taking into account gray levels on the boundary of the digitization of the same object. Assuming a model where pixel intensity is proportional to the coverage of a pixel, we show that the presented method provides error-free measurements of the length of straight boundary segments in the case of nonquantized pixel values. For a more realistic situation, where pixel values are quantized, we derive optimal estimates that minimize the maximal estimation error. We show that the estimate converges toward a correct value as the number of gray levels tends toward infinity. The method is easy to implement; we provide the complete pseudocode. Since the method utilizes only a small neighborhood, it is very easy to parallelize. We evaluate the estimator on a set of concave and convex shapes with known perimeters, digitized at increasing resolution. In addition, we provide an example of applicability of the method on real images, by suggesting appropriate preprocessing steps and presenting results of a comparison of the suggested method with other local approaches.
We present a novel method that provides an accurate and precise estimate of the length of the boundary (perimeter) of an object, by taking into account gray-levels on the boundary of the digitization of the same object. Assuming a model where pixel intensity is proportional to the coverage of a pixel, we show that the presented method provides error-free measurements of the length of straight boundary segments in the case of non-quantized pixel values. For a more realistic situation, where pixel values are quantized, we derive optimal estimates that minimize the maximal estimation error. We show that the estimate converges toward a correct value as the number of gray-levels tends toward infinity. The method is easy to implement; we provide complete pseudo-code. Since the method utilizes only a small neighborhood, it is very easy to parallelize. We evaluate the estimator on a set of concave and convex shapes with known perimeters, digitized at increasing resolution.
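The optimal local estimator derived above is not reproduced here; the sketch below shows only a simple baseline that uses the same input, approximating the perimeter by the summed gradient magnitude of the coverage image (exact for ideal straight edges in the continuum, biased for curved or quantized data):

```python
import numpy as np

def perimeter_from_coverage(coverage):
    # Approximate perimeter as the discrete total variation of the pixel
    # coverage (area) image.
    gy, gx = np.gradient(coverage.astype(float))
    return float(np.sum(np.hypot(gx, gy)))
```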
By utilizing intensity information available in images, partial coverage of pixels at object borders can be estimated. Such information can, in turn, provide more precise feature estimates. We present a pixel coverage segmentation method which assigns pixel values corresponding to the area of a pixel that is covered by the imaged object(s). Starting from any suitable crisp segmentation, we extract a one-pixel thin 4-connected boundary between the observed image components where a local linear mixture model is used for estimating fractional pixel coverage values. We evaluate the presented segmentation method, as well as its usefulness for subsequent precise feature estimation, on synthetic test objects with increasing levels of noise added. We conclude that for reasonable noise levels the presented method outperforms the achievable results of a perfect crisp segmentation. Finally, we illustrate the application of the suggested method on a real histological colour image.
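A sketch of the local linear mixture step, assuming the mean object and background intensities around a boundary pixel have already been estimated from the initial crisp segmentation:

```python
import numpy as np

def coverage_from_intensity(I, mu_obj, mu_bg):
    # Fraction of the pixel covered by the object under a two-class linear
    # mixture model, clipped to the valid range [0, 1].
    alpha = (I - mu_bg) / (mu_obj - mu_bg)
    return float(np.clip(alpha, 0.0, 1.0))
```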
In this paper, we analyze the representation and reconstruction of fuzzy disks by using moments. Both continuous and digital fuzzy disks are considered. A fuzzy disk is a convex fuzzy spatial set, where the membership of a point to the fuzzy disk depends only on the distance of the point to the centre of the disk. We show that, for a certain class of membership functions defining a fuzzy disk, there exists a one-to-one correspondence between the set of fuzzy disks and the set of their generalized moment representations. Theoretical error bounds for the accuracy of the estimation of generalized moments of a continuous fuzzy disk from the generalized moments of its digitization and, in connection with that, the accuracy of an approximate reconstruction of a continuous fuzzy disk from the generalized moments of its digitization, are derived. Defuzzification (reduction of a continuous fuzzy disk to a crisp representative) is also considered. A statistical study of generated synthetic objects complements the theoretical results.
We present a defuzzification method which produces a crisp digital object starting from a fuzzy digital one, while keeping selected properties of the two as similar as possible. Our main focus is on defuzzification based on the invariance of perimeter and area measures, while taking the membership values into account. We perform a similarity optimization procedure based on a region growing approach to obtain a crisp object with the desired properties.
To overcome the problems of low quality image segmentation, as well as significant loss of data, it seems promising to retain the inherent data inaccuracies as realistically as possible during the image analysis procedures, instead of making premature hard decisions.
Fuzzy segmentation methods have been developed in order to reduce the negative effects of the unavoidable loss of data in the digitization process. These methods require the development of new image analysis methods, handling grey-level images.