Uppsala University Publications
Deep convolutional neural networks for detecting cellular changes due to malignancy
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. ORCID iD: 0000-0002-1636-3469
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. ORCID iD: 0000-0002-4139-7003
2017 (English). In: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW 2017), IEEE, 2017, p. 82-89. Conference paper, published paper (refereed).
Abstract [en]

Discovering cancer at an early stage is an effective way to increase the chance of survival. However, since most screening processes are carried out manually, they are time-consuming and therefore costly. One way of automating the screening process could be to classify cells using convolutional neural networks, which have proven accurate for image classification tasks. Two datasets containing oral cells and two datasets containing cervical cells were used. For the cervical datasets the cells were classified by medical experts as normal or abnormal. For the oral datasets we only used the diagnosis of the patient: all cells obtained from a patient with malignancy were thus considered malignant, even though most of them looked normal. The performance was evaluated for two network architectures, ResNet and VGG. For the oral datasets the accuracy varied between 78% and 82% correctly classified cells depending on the dataset and network; for the cervical datasets it varied between 84% and 86%. The results indicate a high potential for detecting abnormalities in the oral cavity and the uterine cervix. ResNet was shown to be the preferable network, with higher accuracy and a smaller standard deviation.
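The screening approach described in the abstract amounts to patch-level binary classification of cell images with a pretrained CNN backbone. As a rough illustration of that kind of setup (not the authors' actual implementation), the sketch below fine-tunes an ImageNet-pretrained ResNet-18 in PyTorch for a two-class normal/malignant problem; the directory layout cell_patches/train, the input size, and the hyperparameters are assumptions made for the example.

```python
# Minimal sketch (not the paper's code): fine-tuning an ImageNet-pretrained
# ResNet for binary normal/malignant classification of cell image patches.
# Dataset paths, image size, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),                      # ResNet's expected input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],    # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# Assumes patches are stored as cell_patches/train/{normal,malignant}/*.png
train_set = datasets.ImageFolder("cell_patches/train", transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: normal / malignant

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The paper also evaluates a VGG backbone; swapping models.resnet18 for a VGG model and replacing its classifier head would follow the same pattern.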

Place, publisher, year, edition, pages
IEEE, 2017. p. 82-89
Series
IEEE International Conference on Computer Vision Workshops, E-ISSN 2473-9936
National Category
Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
URN: urn:nbn:se:uu:diva-329356
DOI: 10.1109/ICCVW.2017.18
ISI: 000425239600011
ISBN: 978-1-5386-1034-3 (electronic)
OAI: oai:DiVA.org:uu-329356
DiVA id: diva2:1140894
Conference
ICCV workshop on Bioimage Computing, Venice, Italy, October 23, 2017.
Funder
EU, European Research Council, 682810
Swedish Research Council, 2012-4968
Available from: 2017-09-13. Created: 2017-09-13. Last updated: 2018-05-23. Bibliographically approved.
In thesis
1. Deep Neural Networks and Image Analysis for Quantitative Microscopy
2017 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Understanding biology paves the way for discovering drugs targeting deadly diseases like cancer, and microscopy imaging is one of the most informative ways to study biology. However, analysis of large numbers of samples is often required to draw statistically verifiable conclusions. Automated approaches for analysis of microscopy image data make it possible to handle large data sets and at the same time reduce the risk of bias. Quantitative microscopy refers to computational methods for extracting measurements from microscopy images, enabling detection and comparison of subtle changes in morphology or behavior induced by varying experimental conditions. This thesis covers computational methods for segmentation and classification of biological samples imaged by microscopy.

The recent increase in computational power has enabled the development of deep neural networks (DNNs) that perform well in solving real-world problems. This thesis compares classical image analysis algorithms for segmentation of bacterial cells and introduces a novel method that combines classical image analysis and DNNs for improved cell segmentation and detection of rare phenotypes. The thesis also demonstrates a novel DNN for segmentation of clusters of cells (spheroids) with varying sizes, shapes and textures imaged by phase contrast microscopy. DNNs typically require large amounts of training data. This problem is addressed by proposing an automated approach for creating ground truths by utilizing multiple imaging modalities and classical image analysis. The resulting DNNs are applied to segment unstained cells in bright-field microscopy images. In DNNs, it is often difficult to understand which image features have the largest influence on the final classification results. This is addressed in an experiment where DNNs are applied to classify zebrafish embryos based on phenotypic changes induced by drug treatment. The response of the trained DNN is tested by ablation studies, which reveal that the networks do not necessarily learn the features most obvious on visual examination. Finally, DNNs are explored for classification of cervical and oral cell samples collected for cancer screening. Initial results show that the DNNs can respond to very subtle malignancy-associated changes. All the presented methods are developed using open-source tools and validated on real microscopy images.
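Among the methods summarised above, the ablation tests on the zebrafish classifier probe which image regions a trained network actually relies on. A minimal occlusion-style sketch of such a test is given below, assuming a trained PyTorch classifier and a single normalised input tensor; the occlusion_map helper, patch size, stride, and zero-value mask are illustrative assumptions rather than the exact protocol used in the thesis. Regions whose occlusion causes the largest drop in the class score are the ones the network depends on most.

```python
# Minimal sketch (not the thesis code): an occlusion-style ablation test that
# masks square regions of an input image and records how the trained network's
# class score changes, indicating which regions drive the prediction.
# Model, input tensor, patch size, and target class are illustrative assumptions.
import torch

def occlusion_map(model, image, target_class, patch=16, stride=16):
    """image: tensor of shape (3, H, W), already normalized for the model."""
    model.eval()
    _, h, w = image.shape
    with torch.no_grad():
        baseline = torch.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]
        heatmap = torch.zeros((h // stride, w // stride))
        for i, y in enumerate(range(0, h - patch + 1, stride)):
            for j, x in enumerate(range(0, w - patch + 1, stride)):
                occluded = image.clone()
                occluded[:, y:y + patch, x:x + patch] = 0.0   # mask out this region
                score = torch.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
                heatmap[i, j] = baseline - score              # large drop = important region
    return heatmap
```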

Place, publisher, year, edition, pages
Uppsala: Acta Universitatis Upsaliensis, 2017. p. 85
Series
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology, ISSN 1651-6214 ; 1566
Keyword
Deep neural networks, convolutional neural networks, image analysis, quantitative microscopy, bright-field microscopy
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Signal Processing
Research subject
Computerized Image Processing
Identifiers
URN: urn:nbn:se:uu:diva-329834
ISBN: 978-91-513-0080-1
Public defence
2017-11-10, 2446, ITC, Lägerhyddsvägen 2, Hus 2, Uppsala, 10:15 (English)
Opponent
Supervisors
Funder
Swedish Research Council, 2012-4968
EU, European Research Council, 682810
eSSENCE - An eScience Collaboration
Available from: 2017-10-17. Created: 2017-09-21. Last updated: 2018-03-08.

Open Access in DiVA

No full text in DiVA

Authority records
Wieslander, Håkan; Bengtsson, Ewert; Wählby, Carolina; Hirsch, Jan-Michael; Kecheril Sadanandan, Sajith
