Publications from Uppsala University
Publications (10 of 116)
Walhagen, P., Bengtsson, E., Lennartz, M., Sauter, G. & Busch, C. (2022). AI based prostate analysis system trained without human supervision to predict patient outcome from tissue samples. Journal of Pathology Informatics
2022 (English)In: Journal of Pathology Informatics, ISSN 2229-5089, E-ISSN 2153-3539Article in journal (Refereed) Published
Abstract [en]

In order to plan the best treatment for prostate cancer patients, the aggressiveness of the tumor is graded based on visual assessment of tissue biopsies according to the Gleason scale. Recently, a number of AI models have been developed that can be trained to perform this grading as well as human pathologists. But the accuracy of the AI grading will be limited by the accuracy of the subjective “ground truth” Gleason grades used for the training. We have trained an AI to predict patient outcome directly, based on image analysis of a large biobank of tissue samples with known outcome, without input of any human knowledge about cancer grading. On an independent test set, the model has shown similar, and in some cases better, ability to predict patient outcome than expert pathologists doing the conventional grading.

Keywords
prostate cancer grading, artificial intelligence based cancer grading, predicting prostate cancer recurrence
National Category
Engineering and Technology
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-486071 (URN)10.1016/j.jpi.2022.100137 (DOI)
Available from: 2022-09-30 Created: 2022-09-30 Last updated: 2023-01-09Bibliographically approved
Bengtsson, E. & Ranefall, P. (2019). Image analysis in digital pathology: Combining automated assessment of Ki67 staining quality with calculation of Ki67 cell proliferation index. Cytometry Part A, 95(7), 714-716
2019 (English)In: Cytometry Part A, ISSN 1552-4922, E-ISSN 1552-4930, Vol. 95, no 7, p. 714-716Article in journal, Editorial material (Other academic) Published
National Category
Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-393609 (URN)10.1002/cyto.a.23685 (DOI)000478855500004 ()30512236 (PubMedID)
Available from: 2018-12-03 Created: 2019-09-25 Last updated: 2019-09-26Bibliographically approved
Bengtsson, E. & Tárnok, A. (2019). Special Section on Image Cytometry. Cytometry Part A, 95A(4), 363-365
2019 (English)In: Cytometry Part A, ISSN 1552-4922, E-ISSN 1552-4930, Vol. 95A, no 4, p. 363-365Article in journal, Editorial material (Other academic) Published
National Category
Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-384088 (URN)10.1002/cyto.a.23762 (DOI)000466792700001 ()30977267 (PubMedID)
Available from: 2019-04-12 Created: 2019-06-17 Last updated: 2019-06-18Bibliographically approved
Koriakina, N., Sladoje, N., Bengtsson, E., Ramqvist, E. D., Hirsch, J. M., Runow Stark, C. & Lindblad, J. (2019). Visualization of convolutional neural network class activations in automated oral cancer detection for interpretation of malignancy associated changes. In: 3rd NEUBIAS Conference, Luxembourg, 2-8 February 2019: . Paper presented at 3rd NEUBIAS Conference, Luxembourg, 2-8 February 2019.
2019 (English)In: 3rd NEUBIAS Conference, Luxembourg, 2-8 February 2019, 2019, p. 1Conference paper, Poster (with or without abstract) (Other academic)
Abstract [en]

Introduction: Cancer of the oral cavity is one of the most common malignancies in the world. The incidence of oral cavity and oropharyngeal cancer is increasing among young people. It is noteworthy that the oral cavity can be relatively easily accessed for routine screening tests that could potentially decrease the incidence of oral cancer. Automated, deep-learning-based computer-aided methods show promising ability to detect subtle precancerous changes at a very early stage, even when visual examination is less effective. Although the biological nature of these malignancy associated changes is not fully understood, the consistency of morphology and textural changes within a cell dataset could shed light on the premalignant state. In this study, we aim to increase understanding of this phenomenon by exploring and visualizing which parts of cell images are considered most important when trained deep convolutional neural networks (DCNNs) are used to classify cytological images into normal and abnormal classes.

Materials and methods: Cell samples are collected with a brush at areas of interest in the oral cavity and stained according to standard PAP procedures. Digital images of the slides are acquired with a 0.32 micron pixel size in greyscale format (570 nm bandpass filter). Cell nuclei are manually selected in the images and a small region is cropped around each nucleus, resulting in images of 80x80 pixels. No medical knowledge is used for choosing the cells; they are randomly selected from the slide, and for the learning process ground truth is provided only on the patient level, not on the cell level. Overall, 10274 images of cell nuclei and their surrounding regions are used to train state-of-the-art DCNNs to distinguish between cells from healthy persons and persons with precancerous lesions. Data augmentation through 90-degree rotations and mirroring is applied to the datasets. Different approaches to class activation mapping and related methods are used to determine which image regions and feature maps are responsible for the relevant class differentiation.
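The augmentation described above (90-degree rotations combined with mirroring) yields eight variants per crop, corresponding to the symmetries of the square. A minimal sketch, assuming NumPy arrays for the 80x80 crops; the function name and dummy data are illustrative, not from the study's code:

```python
import numpy as np

def dihedral_augment(img):
    """Return the 8 variants of an image under 90-degree rotations
    and mirroring (4 rotations x 2 flips), as used to augment the
    80x80 nucleus crops."""
    variants = []
    for k in range(4):                   # 0, 90, 180, 270 degree rotations
        rot = np.rot90(img, k)
        variants.append(rot)
        variants.append(np.fliplr(rot))  # mirrored version of each rotation
    return variants

# Example: one 80x80 dummy crop yields 8 augmented copies.
crop = np.arange(80 * 80).reshape(80, 80)
augmented = dihedral_augment(crop)
print(len(augmented))  # 8
```

Because the labels are invariant under rotation and mirroring of a nucleus crop, this multiplies the effective training set eightfold at no annotation cost.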

Results and Discussion: The best performing of the evaluated deep learning architectures reaches a per-cell classification accuracy surpassing 80% on the studied material. Visualizing the class activation maps confirms our expectation that the network learns to focus on specific relevant parts of the sample regions. We compare and evaluate our findings on the detected discriminative regions against the subjective judgements of a trained cytotechnologist. We believe that this effort to improve understanding of the decision criteria used by machine and human leads to increased understanding of malignancy associated changes and also improves the robustness and reliability of the automated malignancy detection procedure.

Keywords
Oral cancer, saliency methods, deep convolutional neural networks
National Category
Other Computer and Information Science Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-398145 (URN)
Conference
3rd NEUBIAS Conference, Luxembourg, 2-8 February 2019
Funder
Swedish Research Council, 2017-04385
Available from: 2019-12-02 Created: 2019-12-02 Last updated: 2019-12-04
Runow Stark, C., Gustavsson, I. M., Gyllensten, U. B., Darai Ramqvist, E., Lindblad, J., Wählby, C., . . . Hirsch, J. M. (2018). Brush Biopsy For HR-HPV Detection With FTA Card And AI For Cytology Analysis - A Viable Non-invasive Alternative. In: Bengt Hasséus (Ed.), EAOM2018: . Paper presented at 14th Biennial Congress of the European Association of Oral Medicine, Göteborg, Sweden, September 2018.
2018 (English)In: EAOM2018 / [ed] Bengt Hasséus, 2018Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

Introduction: Oral cancer accounts for about 800-1,000 new cases each year in Sweden, and the proportion of cancers related to high-risk human papillomavirus (HR-HPV) is increasing in the younger population due to changes in sexual habits. The two most frequent HR-HPV types, 16 and 18, both have significant oncogenic potential.

Objectives: In this pilot study we evaluate two non-invasive automated methods: 1) detection of HR-HPV using FTA cards, and 2) image scanning of cytology for detection of premalignant lesions and early stages of neoplasia.

Material and Methods: 160 patients with verified HR-HPV oropharyngeal cancer, previous ano-genital HR-HPV infection, or a potentially malignant oral disorder were recruited for non-invasive brush sampling and analyzed with two validated automated methods, both used in cervical cancer screening. For analysis of HR-HPV DNA, the indicating FTA elute micro cardTM was used for dry collection, transportation and storage of the brush samples. For analysis of cell morphology changes, an automated liquid-based cytology method (Preserve Cyt) combined with a deep learning computer-aided technique was used.

Results: Preliminary results show that the FTA-method is reliable and indicates that healthy and malignant brush samples can be separated by image analysis. 

Conclusions: With further development of these fully automated methods, it is possible to implement a National Screening Program of the oral mucosa, and thereby select patients for further investigation in order to find lesions with potential malignancy at an early stage.

Keywords
cytometry, deep learning, oral cancer, image analysis, HPV
National Category
Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-367985 (URN)
Conference
14th Biennial Congress of the European Association of Oral Medicine, Göteborg, Sweden, September 2018
Funder
Vinnova, 2017-02447
Available from: 2018-12-02 Created: 2018-12-02 Last updated: 2021-11-25Bibliographically approved
Bengtsson, E., Wieslander, H., Forslid, G., Wählby, C., Hirsch, J.-M., Runow Stark, C., . . . Lindblad, J. (2018). Detection of Malignancy-Associated Changes Due to Precancerous and Oral Cancer Lesions: A Pilot Study Using Deep Learning. In: Andrea Cossarizza (Ed.), CYTO2018: . Paper presented at 33rd Congress of the International Society for Advancement of Cytometry.
2018 (English)In: CYTO2018 / [ed] Andrea Cossarizza, 2018Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

Background: The incidence of oral cancer is increasing, and it is affecting younger individuals. PAP smear-based screening, both visual and automated, has been used for decades to successfully decrease the incidence of cervical cancer. Can similar methods be used for oral cancer screening? We have carried out a pilot study using neural networks for classifying cells from both cervical cancer and oral cancer patients. The results, which were reported from a technical point of view at the 2017 IEEE International Conference on Computer Vision Workshop (ICCVW), were particularly interesting for the oral cancer cases, and we are currently collecting and analyzing samples from more patients.

Methods: Samples were collected with a brush in the oral cavity and smeared on glass slides, stained, and prepared according to standard PAP procedures. Images from the slides were digitized with a 0.35 micron pixel size, using focus stacks with 15 levels 0.4 micron apart. Between 245 and 2,123 cell nuclei were manually selected for analysis in each of 14 datasets, usually 2 datasets for each of the 6 cases, in total around 15,000 cells. A small region was cropped around each nucleus, and the best 2 adjacent focus layers in each direction were automatically found, thus creating images of 100x100x5 pixels. Nuclei were chosen with an aim to select well-preserved free-lying cells, with no effort to specifically select diagnostic cells. We therefore had no ground truth on the cellular level, only on the patient level. Subsets of these images were used to train 2 sets of neural networks, built according to the ResNet and VGG architectures described in the literature, to distinguish between cells from healthy persons and those with precancerous lesions. The datasets were augmented through mirroring and 90-degree rotations. The resulting networks were used to classify subsets of cells from persons other than those in the training sets. This was repeated for a total of 5 folds.
Results: The results were expressed as the percentage of cell nuclei that the neural networks indicated as positive. The percentage of positive cells from healthy persons was in the range 8% to 38%. The percentage of positive cells collected near the lesions was in the range 31% to 96%. The percentages from the healthy side of the oral cavity of patients with lesions ranged from 37% to 89%. For each fold, it was possible to find a threshold for the number of positive cells that would correctly classify all patients as normal or positive, even for the samples taken from the healthy side of the oral cavity. The network based on the ResNet architecture showed slightly better performance than the VGG-based one.

Conclusion: Our small pilot study indicates that malignancy-associated changes that can be detected by neural networks may exist among cells in the oral cavity of patients with precancerous lesions. We are currently collecting samples from more patients, and will present those results as well, with our poster at CYTO 2018.
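The per-fold patient classification described above reduces to finding a cut on the percentage of positive cells per patient. A minimal sketch of that step; the percentages below are invented for illustration and are not the study's data:

```python
# Hypothetical per-patient fractions of cells the network flagged positive.
healthy = [0.08, 0.15, 0.38]   # samples from healthy persons
lesion  = [0.45, 0.67, 0.96]   # samples from patients with lesions

def separating_threshold(neg, pos):
    """Return a cut that classifies every patient correctly,
    or None if the two groups overlap."""
    if max(neg) < min(pos):
        return (max(neg) + min(pos)) / 2
    return None

t = separating_threshold(healthy, lesion)
print(round(t, 3))  # 0.415
```

In the study such a threshold existed within each fold; on pooled data the reported healthy (8-38%) and lesion (31-96%) ranges overlap, so the cut is fold-specific.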

Keywords
cytometry, deep learning, oral cancer, image analysis
National Category
Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-366820 (URN)
Conference
33rd Congress of the International Society for Advancement of Cytometry
Funder
Vinnova
Available from: 2018-11-26 Created: 2018-11-26 Last updated: 2021-11-25Bibliographically approved
Lidayová, K., Gupta, A., Frimmel, H., Sintorn, I.-M., Bengtsson, E. & Smedby, Ö. (2017). Classification of cross-sections for vascular skeleton extraction using convolutional neural networks. In: Medical Image Understanding and Analysis: . Paper presented at MIUA 2017, July 11–13, Edinburgh, UK (pp. 182-194). Springer
2017 (English)In: Medical Image Understanding and Analysis, Springer, 2017, p. 182-194Conference paper, Published paper (Refereed)
Abstract [en]

Recent advances in Computed Tomography Angiography provide high-resolution 3D images of the vessels. However, there is an inevitable requisite for automated and fast methods to process the increased amount of generated data. In this work, we propose a fast method for vascular skeleton extraction which can be combined with a segmentation algorithm to accelerate the vessel delineation. The algorithm detects central voxels - nodes - of potential vessel regions in the orthogonal CT slices and uses a convolutional neural network (CNN) to identify the true vessel nodes. The nodes are gradually linked together to generate an approximate vascular skeleton. The CNN classifier yields a precision of 0.81 and recall of 0.83 for the medium size vessels and produces a qualitatively evaluated enhanced representation of vascular skeletons.
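The precision and recall quoted above follow directly from the node classifier's true-positive, false-positive and false-negative counts. A minimal sketch; the counts are invented to land near the reported values and are not taken from the paper:

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from true-positive, false-positive
    and false-negative vessel-node counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Illustrative counts only, chosen to reproduce the reported
# 0.81 precision / 0.83 recall for medium-size vessels.
p, r = precision_recall(tp=83, fp=19, fn=17)
print(round(p, 2), round(r, 2))  # 0.81 0.83
```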

Place, publisher, year, edition, pages
Springer, 2017
Series
Communications in Computer and Information Science ; 723
National Category
Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-318529 (URN)10.1007/978-3-319-60964-5_16 (DOI)000770548800016 ()978-3-319-60963-8 (ISBN)
Conference
MIUA 2017, July 11–13, Edinburgh, UK
Available from: 2017-06-22 Created: 2017-03-24 Last updated: 2024-01-18Bibliographically approved
Bengtsson, E., Danielsen, H., Treanor, D., Gurcan, M. N., MacAulay, C. & Molnár, B. (2017). Computer-aided diagnostics in digital pathology. Cytometry Part A, 91(6), 551-554
2017 (English)In: Cytometry Part A, ISSN 1552-4922, E-ISSN 1552-4930, Vol. 91, no 6, p. 551-554Article in journal, Editorial material (Other academic) Published
National Category
Radiology, Nuclear Medicine and Medical Imaging
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-330045 (URN)10.1002/cyto.a.23151 (DOI)000404037600001 ()28640523 (PubMedID)
Available from: 2017-06-22 Created: 2017-09-25 Last updated: 2017-11-25Bibliographically approved
Wieslander, H., Forslid, G., Bengtsson, E., Wählby, C., Hirsch, J.-M., Runow Stark, C. & Sadanandan, S. K. (2017). Deep convolutional neural networks for detecting cellular changes due to malignancy. In: Proc. 16th International Conference on Computer Vision Workshops: . Paper presented at ICCV Workshop on Bioimage Computing, Venice, Italy, October 23, 2017 (pp. 82-89). IEEE Computer Society
2017 (English)In: Proc. 16th International Conference on Computer Vision Workshops, IEEE Computer Society, 2017, p. 82-89Conference paper, Published paper (Refereed)
Place, publisher, year, edition, pages
IEEE Computer Society, 2017
National Category
Medical Image Processing
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-329356 (URN)10.1109/ICCVW.2017.18 (DOI)000425239600011 ()978-1-5386-1034-3 (ISBN)
Conference
ICCV Workshop on Bioimage Computing, Venice, Italy, October 23, 2017
Funder
EU, European Research Council, 682810; Swedish Research Council, 2012-4968
Available from: 2018-01-23 Created: 2017-09-13 Last updated: 2022-04-08Bibliographically approved
Bengtsson, E. (2017). Image processing and its hardware support: Analysis vs synthesis - historical trends. In: P Sharma, F M Bianchi (Ed.), Image Analysis, SCIA 2017, Pt I: . Paper presented at 20th Scandinavian Conference on Image Analysis, Tromso, Norway, June 12-14, 2017 (pp. 3-14). Switzerland
2017 (English)In: Image Analysis, SCIA 2017, Pt I / [ed] P Sharma, F M Bianchi, Switzerland, 2017, p. 3-14Conference paper, Published paper (Refereed)
Abstract [en]

Computers can be used to handle images in two fundamentally different ways. They can be used to analyse images to obtain quantitative data or some image understanding. And they can be used to create images that can be displayed through computer graphics and visualization. For both of these purposes it is of interest to develop efficient ways of representing, compressing and storing the images. While SCIA, the Scandinavian Conference on Image Analysis, according to its name is mainly concerned with the former aspect of images, it is interesting to note that image analysis throughout its history has been strongly influenced also by developments on the visualization side. When the conference series now has reached its 20th milestone it may be worth reflecting on what factors have been important in forming the development of the field. To understand where you are it is good to know where you come from and it may even help you understand where you are going.

Place, publisher, year, edition, pages
Switzerland: , 2017
Keywords
history, image processing, image analysis, computer graphics, visualization, hardware support
National Category
Engineering and Technology Computer Systems
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-366917 (URN)10.1007/978-3-319-59126-1_1 (DOI)000454359300001 ()978-3-319-59126-1 (ISBN)978-3-319-59125-4 (ISBN)
Conference
20th Scandinavian Conference on Image Analysis, Tromso, Norway, June 12-14, 2017
Available from: 2018-11-26 Created: 2018-11-26 Last updated: 2019-02-18Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-1636-3469