Uppsala University Publications
Publications (10 of 88)
Öfverstedt, J., Lindblad, J. & Sladoje, N. (2019). Fast and Robust Symmetric Image Registration Based on Distances Combining Intensity and Spatial Information. IEEE Transactions on Image Processing, 28(7), 3584-3597
2019 (English). In: IEEE Transactions on Image Processing, ISSN 1057-7149, E-ISSN 1941-0042, Vol. 28, no 7, p. 3584-3597. Article in journal (Refereed), Published
Abstract [en]

Intensity-based image registration approaches rely on similarity measures to guide the search for geometric correspondences with high affinity between images. The properties of the used measure are vital for the robustness and accuracy of the registration. In this study, a symmetric, intensity interpolation-free, affine registration framework based on a combination of intensity and spatial information is proposed. The excellent performance of the framework is demonstrated on a combination of synthetic tests, recovering known transformations in the presence of noise, and real applications in biomedical and medical image registration, for both 2D and 3D images. The method exhibits greater robustness and higher accuracy than similarity measures in common use, when inserted into a standard gradient-based registration framework available as part of the open source Insight Segmentation and Registration Toolkit (ITK). The method is also empirically shown to have a low computational cost, making it practical for real applications. Source code is available.
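The symmetric formulation named in the title can be illustrated with a toy sketch: the criterion evaluates the match both from image A to B under a candidate transform and from B to A under its inverse, so that neither image is privileged. The sketch below is a hypothetical, heavily simplified stand-in (integer translations only, a plain mean-squared-error distance, exhaustive search); the paper's actual measure combines intensity and spatial information and is optimized with gradient-based affine registration.

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def symmetric_objective(a, b, shift):
    # Compare A against forward-shifted B AND B against inversely
    # shifted A, so neither image is privileged (the "symmetric" idea).
    fwd = np.roll(b, shift, axis=(0, 1))
    inv = np.roll(a, (-shift[0], -shift[1]), axis=(0, 1))
    return mse(a, fwd) + mse(b, inv)

def register_translation(a, b, max_shift=3):
    # Toy exhaustive search over integer translations; the paper uses
    # gradient-based optimization over full affine transforms instead.
    best, best_shift = float("inf"), (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            e = symmetric_objective(a, b, (dy, dx))
            if e < best:
                best, best_shift = e, (dy, dx)
    return best_shift

rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = np.roll(a, (2, -1), axis=(0, 1))   # B is A translated by (2, -1)
print(register_translation(a, b))      # → (-2, 1): shift aligning B back onto A
```

Because both terms of the objective vanish only at the true alignment, the recovered shift is the exact inverse of the applied one.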

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
Image registration, set distance, gradient methods, optimization, cost function, iterative algorithms, fuzzy sets, magnetic resonance imaging, transmission electron microscopy
National Category
Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-377450 (URN), 10.1109/TIP.2019.2899947 (DOI), 000471067800004 ()
Funder
Vinnova, 2016-02329; Swedish Research Council, 2015-05878; Swedish Research Council, 2017-04385; Vinnova, 2017-02447
Available from: 2019-02-20 Created: 2019-02-20 Last updated: 2019-07-05. Bibliographically approved
Majtner, T., Bajić, B., Lindblad, J., Sladoje, N., Blanes-Vidal, V. & Nadimi, E. S. (2019). On the Effectiveness of Generative Adversarial Networks as HEp-2 Image Augmentation Tool. In: Michael Felsberg, Per-Erik Forssén, Ida-Maria Sintorn, Jonas Unger (Eds.), Scandinavian Conference on Image Analysis: SCIA 2019. Paper presented at Scandinavian Conference on Image Analysis (pp. 439-451).
2019 (English). In: Scandinavian Conference on Image Analysis: SCIA 2019 / [ed] Michael Felsberg, Per-Erik Forssén, Ida-Maria Sintorn, Jonas Unger, 2019, p. 439-451. Conference paper, Published paper (Refereed)
Abstract [en]

One of the big challenges in the recognition of biomedical samples is the lack of large annotated datasets. Their relatively small size, when compared to datasets like ImageNet, typically leads to problems with efficient training of current machine learning algorithms. However, the recent development of generative adversarial networks (GANs) appears to be a step towards addressing this issue. In this study, we focus on one instance of GANs, known as the deep convolutional generative adversarial network (DCGAN). It gained a lot of attention recently because of its stability in generating realistic artificial images. Our article explores the possibilities of using DCGANs for generating HEp-2 images. We trained multiple DCGANs and generated several datasets of HEp-2 images. Subsequently, we combined them with traditional augmentation and evaluated them using three different deep learning configurations. Our article demonstrates the high visual quality of the generated images, which is also supported by state-of-the-art classification results.

Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 11482
Keywords
Deep learning, Image recognition, HEp-2 image classification, GAN, CNN, GoogLeNet, VGG-16, Inception-v3, Transfer learning
National Category
Computer Vision and Robotics (Autonomous Systems); Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-398231 (URN), 10.1007/978-3-030-20205-7_36 (DOI), 978-3-030-20204-0 (ISBN)
Conference
Scandinavian Conference on Image Analysis
Available from: 2019-12-03 Created: 2019-12-03 Last updated: 2019-12-03
Gay, J., Harlin, H., Wetzer, E., Lindblad, J. & Sladoje, N. (2019). Oral Cancer Detection: A Comparison of Texture Focused Deep Learning Approaches. In: Proceedings of the Swedish Society for Automated Image Analysis (SSBA). Paper presented at Symposium on Image Analysis, Göteborg, Sweden, March 2019.
2019 (English). In: Proceedings of the Swedish Society for Automated Image Analysis (SSBA), 2019. Conference paper, Oral presentation with published abstract (Other academic)
National Category
Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-398345 (URN)
Conference
Symposium on Image Analysis, Göteborg, Sweden, March 2019
Available from: 2019-12-04 Created: 2019-12-04 Last updated: 2019-12-04
Öfverstedt, J., Lindblad, J. & Sladoje, N. (2019). Robust Symmetric Affine Image Registration. In: Swedish Symposium on Image Analysis. Paper presented at 37th Annual Swedish Symposium on Image Analysis SSBA 2019, Göteborg, Sweden, March 2019.
2019 (English). In: Swedish Symposium on Image Analysis, 2019. Conference paper, Published paper (Other academic)
National Category
Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-379864 (URN)
Conference
37th Annual Swedish Symposium on Image Analysis SSBA 2019, Göteborg, Sweden, March 2019
Available from: 2019-03-21 Created: 2019-03-21 Last updated: 2019-03-28
Bajic, B., Lindblad, J. & Sladoje, N. (2019). Sparsity promoting super-resolution coverage segmentation by linear unmixing in presence of blur and noise. Journal of Electronic Imaging (JEI), 28(1), Article ID 013046.
2019 (English). In: Journal of Electronic Imaging (JEI), ISSN 1017-9909, E-ISSN 1560-229X, Vol. 28, no 1, article id 013046. Article in journal (Refereed), Published
Abstract [en]

We present a segmentation method that estimates the relative coverage of each pixel in a sensed image by each image component. The proposed super-resolution blur-aware model (which utilizes a priori knowledge of the image blur) for linear unmixing of image intensities relies on a sparsity promoting approach expressed by two main requirements: (i) minimization of Huberized total variation, providing smooth object boundaries and noise removal, and (ii) minimization of non-edge image fuzziness, responding to an assumption that imaged objects are crisp and that fuzziness is mainly due to the imaging and digitization process. Edge fuzziness due to partial coverage is allowed, enabling subpixel precise feature estimates. The segmentation is formulated as an energy minimization problem and solved by the spectral projected gradient method, utilizing a graduated nonconvexity scheme. Quantitative and qualitative evaluation on synthetic and real multichannel images confirms good performance, particularly relevant when subpixel precision in segmentation and subsequent analysis is a requirement.
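The first requirement, minimization of Huberized total variation, can be sketched for a single-channel image as below. This is a generic illustration under stated assumptions (forward differences, replicated borders, a hypothetical threshold parameter `delta`); the paper's multichannel energy and its spectral projected gradient solver are not reproduced.

```python
import numpy as np

def huber(t, delta):
    # Huber penalty: quadratic near zero (smoothing, noise removal),
    # linear in the tails (edge-preserving, unlike plain quadratic TV).
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t**2, delta * (a - 0.5 * delta))

def huber_tv(u, delta=0.1):
    # Huberized total variation of a 2D image: the Huber penalty applied
    # to the gradient magnitude at every pixel (forward differences,
    # with the last row/column replicated so shapes match).
    gy = np.diff(u, axis=0, append=u[-1:, :])
    gx = np.diff(u, axis=1, append=u[:, -1:])
    mag = np.hypot(gy, gx)
    return float(np.sum(huber(mag, delta)))
```

A constant image has zero Huberized TV, while a unit step crossing the image contributes the linear (edge) branch of the penalty once per boundary pixel.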

Place, publisher, year, edition, pages
IS&T & SPIE, 2019
Keywords
fuzzy segmentation, super-resolution, deconvolution, linear unmixing, total variation, energy minimization
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:uu:diva-379780 (URN), 10.1117/1.JEI.28.1.013046 (DOI), 000460119700046 ()
Funder
Swedish Research Council, 2014-4231; Swedish Research Council, 2015-05878; Swedish Research Council, 2017-04385
Available from: 2019-03-21 Created: 2019-03-21 Last updated: 2020-03-08
Öfverstedt, J., Lindblad, J. & Sladoje, N. (2019). Stochastic Distance Functions with Applications in Object Detection and Image Segmentation. In: Swedish Symposium on Image Analysis. Paper presented at 37th Annual Swedish Symposium on Image Analysis SSBA 2019, Göteborg, Sweden, March 2019.
2019 (English). In: Swedish Symposium on Image Analysis, 2019. Conference paper, Published paper (Other academic)
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-379866 (URN)
Conference
37th Annual Swedish Symposium on Image Analysis SSBA 2019, Göteborg, Sweden, March 2019
Available from: 2019-03-21 Created: 2019-03-21 Last updated: 2019-03-28
Öfverstedt, J., Lindblad, J. & Sladoje, N. (2019). Stochastic Distance Transform. In: Discrete Geometry for Computer Imagery. Paper presented at 21st International Conference on Discrete Geometry for Computer Imagery (pp. 75-86). Springer
2019 (English). In: Discrete Geometry for Computer Imagery, Springer, 2019, p. 75-86. Conference paper, Published paper (Refereed)
Abstract [en]

The distance transform (DT) and its many variations are ubiquitous tools for image processing and analysis. In many imaging scenarios, the images of interest are corrupted by noise. This has a strong negative impact on the accuracy of the DT, which is highly sensitive to spurious noise points. In this study, we consider images represented as discrete random sets and observe statistics of DT computed on such representations. We, thus, define a stochastic distance transform (SDT), which has an adjustable robustness to noise. Both a stochastic Monte Carlo method and a deterministic method for computing the SDT are proposed and compared. Through a series of empirical tests, we demonstrate that the SDT is effective not only in improving the accuracy of the computed distances in the presence of noise, but also in improving the performance of template matching and watershed segmentation of partially overlapping objects, which are examples of typical applications where DTs are utilized.
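The Monte Carlo variant described above can be sketched minimally as follows, assuming the random-set model is a simple independent thinning of the point set and using a brute-force Euclidean DT for clarity (the paper also derives a deterministic computation, not shown; `keep_prob` and the empty-set cap are hypothetical illustration parameters).

```python
import numpy as np

def dt(points, shape):
    # Brute-force Euclidean distance transform: distance from every
    # pixel to the nearest point in the set (inf if the set is empty).
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    if len(points) == 0:
        return np.full(shape, np.inf)
    return np.min([np.hypot(yy - y, xx - x) for y, x in points], axis=0)

def stochastic_dt(points, shape, keep_prob=0.8, n_samples=100, seed=0):
    # Monte Carlo SDT: each point survives independently with
    # probability keep_prob; the DTs of the sampled subsets are
    # averaged, so spurious noise points contribute little to the
    # expected distance instead of dominating it.
    rng = np.random.default_rng(seed)
    acc = np.zeros(shape)
    for _ in range(n_samples):
        kept = [p for p in points if rng.random() < keep_prob]
        acc += np.minimum(dt(kept, shape), max(shape))  # cap empty-set inf
    return acc / n_samples

pts = [(2, 2), (7, 7)]
sdt = stochastic_dt(pts, (10, 10))
```

Pixels on a retained point no longer read exactly zero (the point is sometimes dropped), but the map varies smoothly with the survival probability, which is the adjustable robustness the abstract refers to.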

Place, publisher, year, edition, pages
Springer, 2019
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 11414
Keywords
distance transform, stochastic, robustness to noise, random sets, monte carlo, template matching, watershed segmentation
National Category
Computer Vision and Robotics (Autonomous Systems); Discrete Mathematics
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-381027 (URN), 10.1007/978-3-030-14085-4_7 (DOI)
Conference
21st International Conference on Discrete Geometry for Computer Imagery
Available from: 2019-02-23 Created: 2019-04-03 Last updated: 2019-04-03
Gay, J., Harlin, H., Wetzer, E., Lindblad, J. & Sladoje, N. (2019). Texture-based oral cancer detection: A performance analysis of deep learning approaches. In: 3rd NEUBIAS Conference. Paper presented at 3rd NEUBIAS Conference. Luxembourg
2019 (Swedish). In: 3rd NEUBIAS Conference, Luxembourg, 2019. Conference paper, Poster (with or without abstract) (Other academic)
Abstract [en]

Early stage cancer detection is essential for reducing cancer mortality. Screening programs such as that for cervical cancer are highly effective in preventing advanced stage cancers. One obstacle to the introduction of screening for other cancer types is the cost associated with manual inspection of the resulting cell samples. Computer assisted image analysis of cytology slides may offer a significant reduction of these costs. We are particularly interested in detection of cancer of the oral cavity, being one of the most common malignancies in the world, with an increasing tendency of incidence among young people. Due to the non-invasive accessibility of the oral cavity, automated detection may enable screening programs leading to early diagnosis and treatment. It is well known that variations in the chromatin texture of the cell nucleus are an important diagnostic feature. With the aim to maximize the reliability of an automated oral cancer detection system, we evaluate three state-of-the-art deep convolutional neural network (DCNN) approaches which are specialized for texture analysis. Local binary patterns (LBPs) are a powerful tool for texture description: they describe the pattern of variations in intensity between a pixel and its neighbours, instead of using the image intensity values directly. A neural network can be trained to recognize the range of patterns found in different types of images. Many methods have been proposed which either use LBPs directly, or are inspired by them, and show promising results on a range of different image classification tasks where texture is an important discriminative feature. We evaluate multiple recently published deep learning-based texture classification approaches: two of them (referred to as Model 1, by Juefei-Xu et al. (CVPR 2017), and Model 2, by Li et al. (2018)) are inspired by LBP texture descriptors, while the third (Model 3, by Marcos et al. (ICCV 2017)), based on Rotation Equivariant Vector Field Networks, aims at preserving fine textural details under rotations, thus enabling a reduced model size. Performances are compared with state-of-the-art results on the same dataset, by Wieslander et al. (CVPR 2017), which are based on ResNet and VGG architectures. Furthermore, a fusion of DCNN with LBP maps, as in Wetzer et al. (Bioimg. Comp. 2018), is evaluated for comparison. Our aim is to explore whether a focus on texture can improve CNN performance. Both of the methods based on LBPs exhibit higher performance (F1-score for Model 1: 0.85; Model 2: 0.83) than what is obtained by using CNNs directly on the greyscale data (VGG: 0.78, ResNet: 0.76). This clearly demonstrates the effectiveness of LBPs for this type of image classification task. The approach based on rotation equivariant networks lags behind in performance (F1-score for Model 3: 0.72), indicating that this method may be less appropriate for classifying single-cell images.
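As an aside on the descriptor itself: the classic 8-neighbour LBP thresholds each neighbour against the centre pixel and packs the results into an 8-bit code, which is the "pattern of variations in intensity" mentioned above. A minimal sketch (single scale, radius 1, no rotation-invariant mapping; the evaluated LBP-inspired models are far more elaborate):

```python
import numpy as np

def lbp_3x3(img):
    # Classic 8-neighbour local binary pattern: threshold each of the
    # 8 neighbours against the centre pixel, pack the bits into a code
    # in [0, 255]. Border pixels are skipped for simplicity.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    h, w = img.shape
    center = img[1:h-1, 1:w-1]
    codes = np.zeros((h - 2, w - 2), dtype=int)
    for bit, (dy, dx) in enumerate(offs):
        neigh = img[1+dy:h-1+dy, 1+dx:w-1+dx]   # shifted view of neighbours
        codes |= (neigh >= center).astype(int) << bit
    return codes
```

On a flat region every neighbour ties with the centre and all bits are set (code 255); at an isolated bright peak every comparison fails and the code is 0, so the codes respond to local texture rather than absolute intensity.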

Place, publisher, year, edition, pages
Luxembourg, 2019
National Category
Computer and Information Sciences
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-398150 (URN)
Conference
3rd NEUBIAS Conference
Available from: 2019-12-02 Created: 2019-12-02 Last updated: 2019-12-04
Wetzer, E., Lindblad, J., Sintorn, I.-M., Hultenby, K. & Sladoje, N. (2019). Towards automated multiscale Glomeruli detection and analysis in TEM by fusion of CNN and LBP maps. In: 3rd NEUBIAS Conference. Paper presented at 3rd NEUBIAS Conference. Luxembourg
2019 (English). In: 3rd NEUBIAS Conference, Luxembourg, 2019. Conference paper, Oral presentation with published abstract (Other academic)
Abstract [en]

Glomeruli are special structures in kidneys which filter the plasma volume from metabolic waste. Podocytes are cells that wrap around the capillaries of the glomerulus. They take an active role in the renal filtration by preventing plasma proteins from entering the urinary ultrafiltrate through slits between so-called foot processes. A number of diseases, such as minimal change disease, systemic lupus and diabetic nephropathy, can affect the glomerulus and have serious impact on the kidneys and their function. When the resolution of optical microscopy is insufficient for a diagnosis, it is necessary to thoroughly examine the morphology of the podocytes in transmission electron microscopy (TEM). This includes measuring the size and shape of the foot processes, the thickness and overall morphology of the Glomerulus Base Membrane (GBM), and the number of slits along the GBM. The high resolution of TEM images produces large amounts of data and requires long acquisition times, which makes automated imaging and glomerulus detection a desired option. We present a multi-step and multi-scale approach to automatically detect glomeruli, and subsequently foot processes, by using convolutional neural networks (CNNs). Previously, texture information in the form of local binary patterns (LBPs) has shown great success in glomerulus detection in modalities other than TEM. This motivates our approach to explore different methods of incorporating LBPs in CNN training to enhance performance over training exclusively on intensity images. We use a modified approximation of the Earth mover's distance to define dissimilarities between the initially unordered binary codes resulting from pixel-wise LBP computations. Multidimensional scaling based on those dissimilarities can be applied to compute LBP maps which are suitable as CNN input. We explore the effect of different radii and dimensions for the LBP map generation, as well as the impact of early, mid and late fusion of intensity and texture information input. We compare the performance of ResNet50 and VGG16-like architectures. Furthermore, we provide a comparison of transfer learning of networks pretrained on ImageNet as well as on a publicly available SEM database, of a network architecture in which convolutional layers are replaced by local binary convolutional layers, and of 'classic' methods such as SVM or 1-NN classification based on LBP histograms. We show that for glomerulus detection, where texture is a main discriminative feature, CNN training on the texture-based input provides complementary information not learned by the network on the intensity images, and mid and late fusion can boost performance. In foot process detection, in which the scale shifts the focus from texture to morphology, the performance also benefits from the handcrafted texture features, though to a lesser extent than for the larger-scale glomerulus detection.
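The multidimensional-scaling step described above can be sketched with classical (Torgerson) MDS, which turns a dissimilarity matrix into low-dimensional coordinates usable as map channels. The dissimilarity below is a generic Euclidean example; the paper's modified Earth mover's distance approximation between binary LBP codes is not reproduced.

```python
import numpy as np

def classical_mds(d, k=2):
    # Classical (Torgerson) MDS: embed n items in R^k so that pairwise
    # Euclidean distances approximate the dissimilarity matrix d.
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ (d ** 2) @ j              # double-centred Gram matrix
    w, v = np.linalg.eigh(b)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]            # keep the top-k eigenpairs
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Dissimilarities of three points at 1-D coordinates 0, 1, 3:
d = np.array([[0., 1., 3.],
              [1., 0., 2.],
              [3., 2., 0.]])
emb = classical_mds(d, k=1)
```

Because these dissimilarities are exactly Euclidean in one dimension, the embedding reproduces them perfectly (up to sign and translation); for non-Euclidean inputs such as code dissimilarities, the top-k eigenpairs give the best low-rank approximation.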

Place, publisher, year, edition, pages
Luxembourg, 2019
National Category
Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-398148 (URN)
Conference
3rd NEUBIAS Conference
Available from: 2019-12-02 Created: 2019-12-02 Last updated: 2019-12-04
Koriakina, N., Sladoje, N., Bengtsson, E., Ramqvist, E. D., Hirsch, J. M., Runow Stark, C. & Lindblad, J. (2019). Visualization of convolutional neural network class activations in automated oral cancer detection for interpretation of malignancy associated changes. In: 3rd NEUBIAS Conference, Luxembourg, 2-8 February 2019. Paper presented at 3rd NEUBIAS Conference, Luxembourg, 2-8 February 2019.
2019 (English). In: 3rd NEUBIAS Conference, Luxembourg, 2-8 February 2019, 2019, p. 1. Conference paper, Poster (with or without abstract) (Other academic)
Abstract [en]

Introduction: Cancer of the oral cavity is one of the most common malignancies in the world. The incidence of oral cavity and oropharyngeal cancer is increasing among young people. It is noteworthy that the oral cavity can be relatively easily accessed for routine screening tests that could potentially decrease the incidence of oral cancer. Automated deep learning computer-aided methods show promising ability for detection of subtle precancerous changes at a very early stage, even when visual examination is less effective. Although the biological nature of these malignancy associated changes is not fully understood, the consistency of morphology and textural changes within a cell dataset could shed light on the premalignant state. In this study, we aim to increase understanding of this phenomenon by exploring and visualizing which parts of cell images are considered most important when trained deep convolutional neural networks (DCNNs) are used to differentiate cytological images into normal and abnormal classes.

Materials and methods: Cell samples are collected with a brush at areas of interest in the oral cavity and stained according to standard PAP procedures. Digital images from the slides are acquired with a 0.32 micron pixel size in greyscale format (570 nm bandpass filter). Cell nuclei are manually selected in the images and a small region is cropped around each nucleus, resulting in images of 80x80 pixels. Medical knowledge is not used for choosing the cells; they are randomly selected from the glass slide, and for the learning process we only provide ground truth on the patient level, not on the cell level. Overall, 10274 images of cell nuclei and the surrounding region are used to train state-of-the-art DCNNs to distinguish between cells from healthy persons and persons with precancerous lesions. Data augmentation through 90-degree rotations and mirroring is applied to the datasets. Different approaches for class activation mapping and related methods are utilized to determine which image regions and feature maps are responsible for the relevant class differentiation.
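The augmentation scheme described above (90-degree rotations plus mirroring) corresponds to the eight symmetries of the square. A minimal sketch of generating all eight variants per image:

```python
import numpy as np

def dihedral_augment(img):
    # All 8 symmetries of the square: 4 rotations of the image plus
    # 4 rotations of its mirror image -- the label-preserving
    # augmentation group for 90-degree rotations and mirroring.
    variants = []
    for base in (img, np.fliplr(img)):
        for k in range(4):
            variants.append(np.rot90(base, k))
    return variants

aug = dihedral_augment(np.arange(16.0).reshape(4, 4))
```

For a generic (asymmetric) image the eight variants are all distinct, so this multiplies the effective training-set size by eight without altering nuclear texture.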

Results and Discussion: The best performing of the observed deep learning architectures reaches a per-cell classification accuracy surpassing 80% on the observed material. Visualizing the class activation maps confirms our expectation that the network is able to learn to focus on specific relevant parts of the sample regions. We compare and evaluate our findings on the detected discriminative regions against the subjective judgements of a trained cytotechnologist. We believe that this effort to improve understanding of the decision criteria used by machine and human leads to increased understanding of malignancy associated changes, and also improves the robustness and reliability of the automated malignancy detection procedure.

Keywords
Oral cancer, saliency methods, deep convolutional neural networks
National Category
Other Computer and Information Science; Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-398145 (URN)
Conference
3rd NEUBIAS Conference, Luxembourg, 2-8 February 2019
Funder
Swedish Research Council, 2017-04385
Available from: 2019-12-02 Created: 2019-12-02 Last updated: 2019-12-04
Identifiers
ORCID iD: orcid.org/0000-0001-7312-8222