Publications from Uppsala University

1-50 of 426
  • 1. Acosta, Oscar
    et al.
    Frimmel, Hans
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Scientific Computing. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Fenster, Aaron
    Ourselin, Sébastien
    Filtering and restoration of structures in 3D ultrasound images. 2007. In: Proc. 4th International Symposium on Biomedical Imaging, Piscataway, NJ: IEEE, 2007, p. 888-891. Conference paper (Refereed)
  • 2. Acosta, Oscar
    et al.
    Frimmel, Hans
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Scientific Computing. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis.
    Fenster, Aaron
    Salvado, Olivier
    Ourselin, Sébastien
    Pyramidal flux in an anisotropic diffusion scheme for enhancing structures in 3D images. 2008. In: Medical Imaging 2008: Image Processing, Bellingham, WA, 2008, p. 691429:1-12. Conference paper (Refereed)
  • 3.
    Adlersson, Albert
    Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Social Sciences, Department of Statistics.
    Is eXplainable AI suitable as a hypotheses generating tool for medical research? Comparing basic pathology annotation with heat maps to find out. 2023. Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Hypothesis testing has long been a formal and standardized process. Hypothesis generation, on the other hand, remains largely informal. This thesis assesses whether eXplainable AI (XAI) can aid in the standardization of hypothesis generation through its use as a hypothesis-generating tool for medical research. We produce XAI heat maps for a Convolutional Neural Network (CNN) trained to classify Microsatellite Instability (MSI) in colon and gastric cancer with four different XAI methods: Guided Backpropagation, VarGrad, Grad-CAM and Sobol Attribution. We then compare these heat maps with pathology annotations in order to look for differences to turn into new hypotheses. Our CNN successfully generates non-random XAI heat maps whilst achieving a validation accuracy of 85% and a validation AUC of 93%, as compared to others who achieve an AUC of 87%. Our results show that Guided Backpropagation and VarGrad are better at explaining high-level image features, whereas Grad-CAM and Sobol Attribution are better at explaining low-level ones; this makes the two groups of XAI methods good complements to each other. Images of Microsatellite Instability (MSI) with high differentiation are more difficult to analyse regardless of which XAI method is used, probably because they exhibit less regularity. Despite this drawback, our assessment is that XAI can serve as a useful hypothesis-generating tool for research in medicine. Our results indicate that our CNN uses the same features as our basic pathology annotations when classifying MSI, with some features of basic pathology missing, and from these differences we are able to generate new hypotheses.

  • 4. Agarwala, Sunita
    et al.
    Nandi, Debashis
    Kumar, Abhishek
    Dhara, Ashis Kumar
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Thakur, Sumitra Basu
    Sadhu, Anup
    Bhadra, Ashok Kumar
    Automated segmentation of lung field in HRCT images using active shape model. 2017. In: Proc. 37th Region 10 Conference, IEEE, 2017, p. 2516-2520. Conference paper (Refereed)
  • 5. Agosti, Edoardo
    et al.
    Saraceno, Giorgio
    Rampinelli, Vittorio
    Raffetti, Elena
    Uppsala University, Disciplinary Domain of Science and Technology, Earth Sciences, Department of Earth Sciences, LUVAL. Department of Global Public Health Sciences, Karolinska Institute, Stockholm, Sweden.
    Veiceschi, Pierlorenzo
    Buffoli, Barbara
    Rezzani, Rita
    Giorgianni, Andrea
    Hirtler, Lena
    Alexander, Alex Yohan
    Deganello, Alberto
    Piazza, Cesare
    Nicolai, Piero
    Castelnuovo, Paolo
    Locatelli, Davide
    Peris-Celda, Maria
    Fontanella, Marco Maria
    Doglietto, Francesco
    Quantitative Anatomic Comparison of Endoscopic Transnasal and Microsurgical Transcranial Approaches to the Anterior Cranial Fossa. 2022. In: Operative Neurosurgery, ISSN 2332-4252, E-ISSN 2332-4260, Vol. 23, no 4, p. e256-e266. Article in journal (Refereed)
    Abstract [en]

    BACKGROUND: 

    Several microsurgical transcranial approaches (MTAs) and endoscopic transnasal approaches (EEAs) to the anterior cranial fossa (ACF) have been described.

    OBJECTIVE: 

    To provide a preclinical, quantitative, anatomic, comparative analysis of surgical approaches to the ACF.

    METHODS: 

    Five alcohol-fixed specimens underwent high-resolution computed tomography. The following approaches were performed on each specimen: EEAs (transcribriform, transtuberculum, and transplanum), anterior MTAs (transfrontal sinus interhemispheric, frontobasal interhemispheric, and subfrontal with unilateral and bilateral frontal craniotomy), and anterolateral MTAs (supraorbital, minipterional, pterional, and frontotemporal orbitozygomatic approach). An optic neuronavigation system and dedicated software (ApproachViewer, part of GTx-Eyes II—UHN) were used to quantify the working volume of each approach and extrapolate the exposure of different ACF regions. Mixed linear models with random intercepts were used for statistical analyses.

    RESULTS: 

    EEAs offer a large and direct route to the midline region of ACF, whose most anterior structures (ie, crista galli, cribriform plate, and ethmoidal roof) are also well exposed by anterior MTAs, whereas deeper ones (ie, planum sphenoidale and tuberculum sellae) are also well exposed by anterolateral MTAs. The orbital roof region is exposed by both anterolateral and lateral MTAs. The posterolateral region (ie, sphenoid wing and optic canal) is well exposed by anterolateral MTAs.

    CONCLUSION: 

    Anterior and anterolateral MTAs play a pivotal role in the exposure of the most anterior and posterolateral ACF regions, respectively, whereas midline regions are well exposed by EEAs. Furthermore, certain anterolateral approaches may be most useful when involvement of the optic canal and nerves is suspected.

  • 6.
    Allalou, Amin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Methods for 2D and 3D Quantitative Microscopy of Biological Samples. 2011. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    New microscopy techniques are continuously developed, resulting in more rapid acquisition of large amounts of data. Manual analysis of such data is extremely time-consuming, and many features are difficult to quantify without the aid of a computer. With automated image analysis, biologists can extract quantitative measurements and increase throughput significantly, which becomes particularly important in high-throughput screening (HTS). This thesis addresses automation of traditional analysis of cell data as well as automation of both image capture and analysis in zebrafish high-throughput screening.

    It is common in microscopy images to stain the nuclei in the cells, and to label the DNA and proteins in different ways. Padlock probing and proximity ligation are highly specific detection methods that produce point-like signals within the cells. Accurate signal detection and segmentation are often key steps in the analysis of these types of images. Cells in a sample will always show some degree of variation in DNA and protein expression, and to quantify these variations each cell has to be analyzed individually. This thesis presents the development and evaluation of single-cell analysis on a range of different types of image data. In addition, we present a novel method for signal detection in three dimensions.

    HTS systems often use a combination of microscopy and image analysis to analyze cell-based samples. However, many diseases and biological pathways can be better studied in whole animals, particularly those that involve organ systems and multi-cellular interactions. The zebrafish is a widely-used vertebrate model of human organ function and development. Our collaborators have developed a high-throughput platform for cellular-resolution in vivo chemical and genetic screens on zebrafish larvae. This thesis presents improvements to the system, including accurate positioning of the fish which incorporates methods for detecting regions of interest, making the system fully automatic. Furthermore, the thesis describes a novel high-throughput tomography system for screening live zebrafish in both fluorescence and bright field microscopy. This 3D imaging approach combined with automatic quantification of morphological changes enables previously intractable high-throughput screening of vertebrate model organisms.

    List of papers
    1. A detailed analysis of 3D subcellular signal localization
    2009 (English). In: Cytometry Part A, ISSN 1552-4922, Vol. 75A, no 4, p. 319-328. Article in journal (Refereed). Published
    Abstract [en]

    Detection and localization of fluorescent signals in relation to other subcellular structures is an important task in various biological studies. Many methods for analysis of fluorescence microscopy image data are limited to 2D. As cells are in fact 3D structures, there is a growing need for robust methods for analysis of 3D data. This article presents an approach for detecting point-like fluorescent signals and analyzing their subnuclear position. Cell nuclei are delineated using marker-controlled (seeded) 3D watershed segmentation. User-defined object and background seeds are given as input, and gradient information defines merging and splitting criteria. Point-like signals are detected using a modified stable wave detector and localized in relation to the nuclear membrane using distance shells. The method was applied to a set of biological data studying the localization of Smad2-Smad4 protein complexes in relation to the nuclear membrane. Smad complexes appear as early as 1 min after stimulation while the highest signal concentration is observed 45 min after stimulation, followed by a concentration decrease. The robust 3D signal detection and concentration measures obtained using the proposed method agree with previous observations while also revealing new information regarding the complex formation.

    Keywords
    3D image analysis, fluorescence signal segmentation, subcellular positioning, Smad detection
    National Category
    Computer and Information Sciences
    Identifiers
    urn:nbn:se:uu:diva-98014 (URN), 10.1002/cyto.a.20663 (DOI), 000264513800006
    Available from: 2009-02-05. Created: 2009-02-05. Last updated: 2018-01-13. Bibliographically approved
    2. Single-cell A3243G mitochondrial DNA mutation load assays for segregation analysis
    2007 (English). In: Journal of Histochemistry and Cytochemistry, ISSN 0022-1554, E-ISSN 1551-5044, Vol. 55, no 11, p. 1159-1166. Article in journal (Refereed). Published
    Abstract [en]

    Segregation of mitochondrial DNA (mtDNA) is an important underlying pathogenic factor in mtDNA mutation accumulation in mitochondrial diseases and aging, but the molecular mechanisms of mtDNA segregation are elusive. Lack of high-throughput single-cell mutation load assays lies at the root of the paucity of studies in which, at the single-cell level, mitotic mtDNA segregation patterns have been analyzed. Here we describe development of a novel fluorescence-based, non-gel PCR restriction fragment length polymorphism method for single-cell A3243G mtDNA mutation load measurement. Results correlated very well with a quantitative in situ Padlock/rolling circle amplification–based genotyping method. In view of the throughput and accuracy of both methods for single-cell A3243G mtDNA mutation load determination, we conclude that they are well suited for segregation analysis.

    Keywords
    A3243G mtDNA, Aging, Heteroplasmy, Mitochondrial diseases, Mutation load, Padlock probing, PCR-RFLP, Segregation
    National Category
    Medical and Health Sciences
    Identifiers
    urn:nbn:se:uu:diva-12658 (URN), 10.1369/jhc.7A7282.2007 (DOI), 000250320100009, 17679731 (PubMedID)
    Available from: 2008-01-09. Created: 2008-01-09. Last updated: 2022-01-28. Bibliographically approved
    3. BlobFinder, a tool for fluorescence microscopy image cytometry
    2009 (English). In: Computer Methods and Programs in Biomedicine, ISSN 0169-2607, E-ISSN 1872-7565, Vol. 94, no 1, p. 58-65. Article in journal (Refereed). Published
    Abstract [en]

    Images can be acquired at high rates with modern fluorescence microscopy hardware, giving rise to a demand for high-speed analysis of image data. Digital image cytometry, i.e., automated measurement and extraction of quantitative data from images of cells, provides valuable information for many types of biomedical analysis. A number of different image analysis software packages exist that can be programmed to perform a wide array of useful measurements. However, this multi-application capability often compromises the simplicity of the tool, and the gain in analysis speed is often offset by time spent learning complicated software. We provide free software called BlobFinder that is intended for a limited type of application, making it easy to use, easy to learn and optimized for its particular task. BlobFinder can perform batch processing of image data and quantify as well as localize cells and point-like source signals in fluorescence microscopy images, e.g., from FISH, in situ PLA and padlock probing, in a fast and easy way.

    Keywords
    Image cytometry, Single cell analysis, FISH, Software
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Research subject
    Computerized Image Analysis
    Identifiers
    urn:nbn:se:uu:diva-87971 (URN), 10.1016/j.cmpb.2008.08.006 (DOI), 000264282400006, 18950895 (PubMedID)
    Available from: 2009-01-22. Created: 2009-01-16. Last updated: 2018-06-26. Bibliographically approved
    4. Robust signal detection in 3D fluorescence microscopy
    2010 (English). In: Cytometry. Part A, ISSN 1552-4922, Vol. 77A, no 1, p. 86-96. Article in journal (Refereed). Published
    Abstract [en]

    Robust detection and localization of biomolecules inside cells is of great importance for better understanding the functions related to them. Fluorescence microscopy and specific staining methods make biomolecules appear as point-like signals in image data, often acquired in 3D. Visual detection of such point-like signals can be time-consuming and problematic if the 3D images are large, containing many, sometimes overlapping, signals. This sets a demand for robust automated methods for accurate detection of signals in 3D fluorescence microscopy. We propose a new 3D point-source signal detection method that is based on Fourier series. The method consists of two parts: a detector, which is a cosine filter that enhances the point-like signals, and a verifier, which is a sine filter that validates the result from the detector. Compared to conventional methods, our method shows better robustness to noise and a good ability to resolve signals that are spatially close. Tests on image data show that the method detects signals with accuracy equivalent to visual detection by experts. The proposed method can be used as an efficient point-like signal detection tool for various types of biological 3D image data.
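    The detector/verifier idea can be illustrated with a minimal 1D sketch (a hypothetical simplification for intuition only; the published method operates on 3D image data with dedicated filter designs): the first cosine coefficient of a local window responds strongly to a centred peak, while the corresponding sine coefficient, which vanishes for symmetric signals, flags asymmetric structures such as edges.

```python
import numpy as np

def detect_point_signal(window):
    """First-order Fourier responses of a local intensity window.

    For a point-like signal centred in the window, the cosine filter
    (detector) responds strongly, while the sine filter (verifier)
    response is near zero, since a symmetric peak has no odd component.
    """
    n = len(window)
    x = np.arange(n)
    cos_resp = np.sum(window * np.cos(2 * np.pi * x / n))  # detector
    sin_resp = np.sum(window * np.sin(2 * np.pi * x / n))  # verifier
    return cos_resp, sin_resp

# A Gaussian "spot" centred in a 21-sample window.
x = np.arange(21)
spot = np.exp(-0.5 * ((x - 10.5) / 2.0) ** 2)
c, s = detect_point_signal(spot)
print(f"detector={c:.2f}, verifier={s:.2e}")
```

    An asymmetric window, such as an intensity ramp, instead gives a sine response larger than its cosine response, so the verifier rejects it as a point-like signal.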

    National Category
    Bioinformatics and Systems Biology
    Identifiers
    urn:nbn:se:uu:diva-98015 (URN), 10.1002/cyto.a.20795 (DOI), 000273384700011
    Available from: 2009-02-05. Created: 2009-02-05. Last updated: 2022-01-28. Bibliographically approved
    5. High-throughput in vivo optical projection tomography of small vertebrates
    (English). Manuscript (preprint) (Other academic)
    National Category
    Natural Sciences
    Identifiers
    urn:nbn:se:uu:diva-159203 (URN)
    Available from: 2011-09-25 Created: 2011-09-25 Last updated: 2011-11-04
    6. Fully automated cellular-resolution vertebrate screening platform with parallel animal processing
    2012 (English). In: Lab on a Chip, ISSN 1473-0197, E-ISSN 1473-0189, Vol. 12, no 4, p. 711-716. Article in journal (Refereed). Published
    Abstract [en]

    The zebrafish larva is an optically-transparent vertebrate model with complex organs that is widely used to study genetics and developmental biology, and to model various human diseases. In this article, we present a set of novel technologies that significantly increase the throughput and capabilities of our previously described vertebrate automated screening technology (VAST). We developed a robust multi-thread system that can simultaneously process multiple animals. System throughput is limited only by the image acquisition speed rather than by the fluidic or mechanical processes. We developed image recognition algorithms that fully automate manipulation of the animals, including orienting and positioning regions of interest within the microscope's field of view. We also identified the optimal capillary materials for high-resolution, distortion-free, low-background imaging of zebrafish larvae.

    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:uu:diva-159202 (URN), 10.1039/c1lc20849g (DOI), 000299380800007
    Available from: 2011-09-25. Created: 2011-09-25. Last updated: 2018-01-12. Bibliographically approved
    7. Image based measurements of single cell mtDNA mutation load MTD 2007
    2007 (English). In: Medicinteknikdagarna 2007, 2007. Conference paper, Published paper (Other (popular science, discussion, etc.))
    Abstract [en]

    Cell cultures, as well as cells in tissue, always display a certain degree of variability, and measurements based on cell averages will miss important information contained in a heterogeneous population. These differences among cells in a population may be essential to quantify when looking at, e.g., protein expression and mutations in tumor cells, which often show a high degree of heterogeneity.

    Single nucleotide mutations in the mitochondrial DNA (mtDNA) can accumulate and later be present in large proportions of the mitochondria, causing devastating diseases. To study mtDNA accumulation and segregation, one needs to measure the amount of mtDNA mutations in each cell across multiple serial cell culture passages. The different degrees of mutation in a cell culture can be quantified by making measurements on individual cells, as an alternative to looking at an average of a population. Fluorescence microscopy in combination with automated digital image analysis provides an efficient approach to this type of single cell analysis.

    Image analysis software for these types of applications is often complicated and not easy to use for people lacking extensive knowledge of image analysis, e.g., laboratory personnel. This paper presents a user-friendly implementation of an automated method for image-based measurements of mtDNA mutations in individual cells detected with padlock probes and rolling-circle amplification (RCA). The mitochondria are present in the cell's cytoplasm, and here each cytoplasm has to be delineated without the presence of a cytoplasmic stain. Three different methods for segmentation of cytoplasms are compared, and it is shown that automated cytoplasmic delineation can be performed 30 times faster than manual delineation, with an accuracy as high as 87%. The final image-based measurements of mitochondrial mutation load are also compared to, and show high agreement with, measurements made using biochemical techniques.

    National Category
    Other Computer and Information Science
    Identifiers
    urn:nbn:se:uu:diva-12745 (URN)
    Available from: 2008-01-11. Created: 2008-01-11. Last updated: 2018-01-12. Bibliographically approved
    8. Increasing the dynamic range of in situ PLA
    2011 (English). In: Nature Methods, ISSN 1548-7091, E-ISSN 1548-7105, Vol. 8, no 11, p. 892-893. Article in journal, Editorial material (Refereed). Published
    National Category
    Biological Sciences
    Identifiers
    urn:nbn:se:uu:diva-159199 (URN), 10.1038/nmeth.1743 (DOI), 000296891800004
    Available from: 2011-09-25. Created: 2011-09-25. Last updated: 2022-01-28. Bibliographically approved
    9. High-throughput cellular-resolution in vivo vertebrate screening
    2011 (English). In: Proc. 15th International Conference on Miniaturized Systems for Chemistry and Life Sciences, 2011. Conference paper, Published paper (Refereed)
    National Category
    Medical Image Processing
    Identifiers
    urn:nbn:se:uu:diva-159201 (URN)
    Available from: 2011-09-25 Created: 2011-09-25 Last updated: 2011-11-04
  • 7.
    Allalou, Amin
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Curic, Vladimir
    Pardo-Martin, Carlos
    Massachusetts Institute of Technology, USA.
    Yanik, Mehmet Fatih
    Massachusetts Institute of Technology, USA.
    Wählby, Carolina
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Approaches for increasing throughput and information content of image-based zebrafish screens. 2011. In: Proceedings of SSBA 2011, 2011. Conference paper (Other academic)
    Abstract [en]

    Microscopy in combination with image analysis has emerged as one of the most powerful and informative ways to analyze cell-based high-throughput screening (HTS) samples in experiments designed to uncover novel drugs and drug targets. However, many diseases and biological pathways can be better studied in whole animals, particularly diseases and pathways that involve organ systems and multicellular interactions, such as organ development, neuronal degeneration and regeneration, cancer metastasis, infectious disease progression and pathogenesis. The zebrafish is a widespread and popular vertebrate model of human organ function and development, and it is unique in the sense that large-scale in vivo genetic and chemical studies are feasible due in part to its small size, optical transparency, and aquatic habitat. To improve the throughput and complexity of zebrafish screens, a high-throughput platform for cellular-resolution in vivo chemical and genetic screens on zebrafish larvae has been developed at the Yanik lab at the Research Laboratory of Electronics, MIT, USA. The system loads live zebrafish from reservoirs or multiwell plates, positions and rotates them for high-speed confocal imaging of organs, and dispenses the animals without damage. We present two improvements to the described system: automation of the positioning of the animals, and a novel approach for brightfield tomographic imaging of living animals.

  • 8.
    Allalou, Amin
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Wu, Yuelong
    Ghannad-Rezaie, Mostafa
    Eimon, Peter M.
    Yanik, Mehmet Fatih
    Automated deep-phenotyping of the vertebrate brain. 2017. In: eLIFE, E-ISSN 2050-084X, Vol. 6, article id e23379. Article in journal (Refereed)
  • 9.
    Andersson, Axel
    et al.
    Uppsala University, Science for Life Laboratory, SciLifeLab. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3.
    Diego, Ferran
    HCI/IWR and Department of Physics and Astronomy, Heidelberg University, Heidelberg.
    Hamprecht, Fred A.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. Uppsala University, Science for Life Laboratory, SciLifeLab. HCI/IWR and Department of Physics and Astronomy, Heidelberg University, Heidelberg.
    Wählby, Carolina
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. Uppsala University, Science for Life Laboratory, SciLifeLab. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3.
    ISTDECO: In Situ Transcriptomics Decoding by Deconvolution. Manuscript (preprint) (Other academic)
    Abstract [en]

    In Situ Transcriptomics (IST) is a set of image-based transcriptomics approaches that enable localisation of gene expression directly in tissue samples. IST techniques produce multiplexed image series in which fluorescent spots are either present or absent across imaging rounds and colour channels. A spot's presence and absence form a type of barcoded pattern that labels a particular type of mRNA. Therefore, the expression of a gene can be determined by localising the fluorescent spots and decoding the barcode that they form. Existing IST algorithms usually do this in two separate steps: spot localisation and barcode decoding. Although these algorithms are efficient, they are limited by strictly separating the localisation and decoding steps. This limitation becomes apparent in regions with a low signal-to-noise ratio or high spot densities. We argue that improved gene expression decoding can be obtained by combining these two steps into a single algorithm. This allows for efficient decoding that is less sensitive to noise and optical crowding. We present IST Decoding by Deconvolution (ISTDECO), a principled decoding approach combining spectral and spatial deconvolution into a single algorithm. We evaluate ISTDECO on simulated data as well as on two real IST datasets, and compare it with the state of the art. ISTDECO achieves state-of-the-art performance despite high spot densities and low signal-to-noise ratios. It is easily implemented and runs efficiently using a GPU. The ISTDECO implementation, datasets and demos are available online at: github.com/axanderssonuu/istdeco
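    As a toy illustration of the barcoding scheme described in the abstract (this sketches the conventional two-step decoding that ISTDECO improves upon, not ISTDECO itself; the codebook and gene names are made up): each detected spot's on/off pattern across imaging rounds is matched to the nearest codeword.

```python
# Hypothetical codebook: each gene is labelled by a presence/absence
# pattern across four imaging rounds.
CODEBOOK = {
    "geneA": (1, 0, 1, 0),
    "geneB": (0, 1, 1, 0),
    "geneC": (1, 1, 0, 1),
}

def decode(pattern):
    """Return (gene, Hamming distance) of the closest codeword."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(((g, hamming(pattern, code)) for g, code in CODEBOOK.items()),
               key=lambda t: t[1])

print(decode((1, 0, 1, 0)))  # exact match: ('geneA', 0)
print(decode((0, 1, 1, 1)))  # one bit flipped by noise: ('geneB', 1)
```

    With noise or optical crowding, several codewords can end up equally close to an observed pattern, which is where a joint localisation-and-decoding formulation like ISTDECO has the advantage.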

  • 10.
    Andersson, Jonathan
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Surgical Sciences, Radiology.
    Water–fat separation in magnetic resonance imaging and its application in studies of brown adipose tissue. 2019. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Virtually all of the magnetic resonance imaging (MRI) signal of a human originates from water and fat molecules. By utilizing the chemical shift property, the signal can be separated, creating water- and fat-only images. From these images it is possible to calculate quantitative fat fraction (FF) images, where the value of each voxel is equal to the percentage of its signal originating from fat. In papers I and II, methods for water–fat signal separation are presented and evaluated.

    The method in paper I utilizes a graph-cut to separate the signal and was designed to perform well even for a low signal-to-noise ratio (SNR). The method was shown to perform as well as previous methods at high SNRs, and better at low SNRs.

    The method presented in paper II uses convolutional neural networks to perform the signal separation. The method was shown to perform similarly to a previous method using a graph-cut when provided non-undersampled input data. Furthermore, the method was shown to be able to separate the signal using undersampled data. This may allow for accelerated MRI scans in the future.

    Brown adipose tissue (BAT) is a thermogenic organ with the main purpose of expending chemical energy to prevent the body temperature from falling too low. Its energy expending capability makes it a potential target for treating overweight/obesity and metabolic dysfunctions, such as type 2 diabetes. The most well-established way of estimating the metabolic potential of BAT is through measuring glucose uptake using 18F-fludeoxyglucose (18F-FDG) positron emission tomography (PET) during cooling. This technique exposes subjects to potentially harmful ionizing radiation, and alternative methods are desired. One alternative method is measuring the BAT FF using MRI.

    In paper III, the BAT FF in 7-year-olds was shown to be negatively associated with blood serum levels of the bone-specific protein osteocalcin and, after correction for adiposity, with thigh muscle volume. This may have implications for how BAT interacts with both bone and muscle tissue.

    In paper IV the glucose uptake of BAT during cooling of adult humans was measured using 18F-FDG PET. Additionally, their BAT FF was measured using MRI, and their skin temperature during cooling near a major BAT depot was measured using infrared thermography (IRT). It was found that both the BAT FF and the temperature measured using IRT correlated with the BAT glucose uptake, meaning these measurements could be potential alternatives to 18F-FDG PET in future studies of BAT.

    List of papers
    1. Water-fat separation incorporating spatial smoothing is robust to noise
    2018 (English). In: Magnetic Resonance Imaging, ISSN 0730-725X, E-ISSN 1873-5894, Vol. 50, p. 78-83, article id S0730-725X(18)30040-7. Article in journal (Refereed). Published
    Abstract [en]

    PURPOSE: To develop and evaluate a noise-robust method for reconstruction of water and fat images for spoiled gradient multi-echo sequences.

    METHODS: The proposed method performs water-fat separation by using a graph cut to minimize an energy function consisting of unary and binary terms. Spatial smoothing is incorporated to increase robustness to noise. The graph cut can fail to find a solution covering the entire image, in which case the relative weighting of the unary term is iteratively increased until a complete solution is found. The proposed method was compared to two previously published methods. Reconstructions were performed on 16 cases taken from the 2012 ISMRM water-fat reconstruction challenge dataset, for which reference reconstructions were provided. Robustness towards noise was evaluated by reconstructing images with different levels of noise added. The percentage of water-fat swaps was calculated to measure performance.

    RESULTS: At low noise levels the proposed method produced similar results to one of the previously published methods, while outperforming the other. The proposed method significantly outperformed both of the previously published methods at moderate and high noise levels.

    CONCLUSION: By incorporating spatial smoothing, an increased robustness towards noise is achieved when performing water-fat reconstruction of spoiled gradient multi-echo sequences.
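    In the ideal two-echo case, the separation these methods solve reduces to a simple linear recombination. The sketch below shows that classic two-point Dixon step, for illustration only: the graph-cut machinery above exists precisely because real data violate these ideal-phase assumptions.

```python
import numpy as np

def two_point_dixon(in_phase, opposed_phase):
    """Ideal two-point Dixon water-fat separation.

    Assumes an in-phase echo S_IP = W + F and an opposed-phase echo
    S_OP = W - F with no B0 field inhomogeneity. Resolving the ambiguity
    introduced by field inhomogeneity is what graph-cut formulations like
    the one above address.
    """
    in_phase = np.asarray(in_phase, dtype=float)
    opposed_phase = np.asarray(opposed_phase, dtype=float)
    water = 0.5 * (in_phase + opposed_phase)
    fat = 0.5 * (in_phase - opposed_phase)
    return water, fat
```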

    Keywords
    Chemical shift imaging, Dixon, Graph cuts, Multi-scale, Quadratic pseudo-Boolean optimization, Water-fat separation
    National Category
    Radiology, Nuclear Medicine and Medical Imaging
    Identifiers
    urn:nbn:se:uu:diva-347450 (URN), 10.1016/j.mri.2018.03.015 (DOI), 000434750700011 (), 29601865 (PubMedID)
    Funder
    Swedish Research Council, 2016-01040
    Available from: 2018-04-03. Created: 2018-04-03. Last updated: 2019-08-14. Bibliographically approved
    2. Separation of water and fat signal in whole-body gradient echo scans using convolutional neural networks
    2019 (English). In: Magnetic Resonance in Medicine, ISSN 0740-3194, E-ISSN 1522-2594, Vol. 82, no 3, p. 1177-1186. Article in journal (Refereed). Published
    Abstract [en]

    Purpose: To perform and evaluate water–fat signal separation of whole‐body gradient echo scans using convolutional neural networks.

    Methods: Whole‐body gradient echo scans of 240 subjects, each consisting of 5 bipolar echoes, were used. Reference fat fraction maps were created using a conventional method. Convolutional neural networks, more specifically 2D U‐nets, were trained using 5‐fold cross‐validation with 1 or several echoes as input, using the squared difference between the output and the reference fat fraction maps as the loss function. The outputs of the networks were assessed by the loss function, measured liver fat fractions, and visually. Training was performed using a graphics processing unit (GPU). Inference was performed using the GPU as well as a central processing unit (CPU).

    Results: The loss curves indicated convergence, and the final loss of the validation data decreased when using more echoes as input. The liver fat fractions could be estimated using only 1 echo, but results were improved by use of more echoes. Visual assessment found the quality of the outputs of the networks to be similar to the reference even when using only 1 echo, with slight improvements when using more echoes. Training a network took at most 28.6 h. Inference time of a whole‐body scan took at most 3.7 s using the GPU and 5.8 min using the CPU.

    Conclusion: It is possible to perform water–fat signal separation of whole‐body gradient echo scans using convolutional neural networks. Separation was possible using only 1 echo, although using more echoes improved the results.

    Keywords
    Dixon, convolutional neural network, deep learning, magnetic resonance imaging, neural network, water-fat separation
    National Category
    Radiology, Nuclear Medicine and Medical Imaging
    Identifiers
    urn:nbn:se:uu:diva-382933 (URN), 10.1002/mrm.27786 (DOI), 000485077600026 (), 31033022 (PubMedID)
    Funder
    Swedish Research Council, 2016-01040
    Available from: 2019-05-07. Created: 2019-05-07. Last updated: 2019-10-15. Bibliographically approved
    3. MRI estimates of brown adipose tissue in children - Associations to adiposity, osteocalcin, and thigh muscle volume
    2019 (English). In: Magnetic Resonance Imaging, ISSN 0730-725X, E-ISSN 1873-5894, Vol. 58, p. 135-142. Article in journal (Refereed). Published
    Abstract [en]

    Context: Brown adipose tissue is of metabolic interest. The tissue is, however, poorly explored in children.

    Methods: Sixty-three 7-year-old subjects from the Swedish birth cohort Halland Health and Growth Study were recruited. Care was taken to include both normal weight and overweight children, but the subjects were otherwise healthy. Only children born full term were included. Water-fat separated whole-body MRI scans, anthropometric measurements, and measurements of fasting glucose and levels of energy-homeostasis-related hormones, including the insulin-sensitizer osteocalcin, were performed. The fat fraction (FF) and effective transverse relaxation time (T2*) of suspected brown adipose tissue in the cervical-supraclavicular-axillary fat depot (sBAT) and the FFs of abdominal visceral (VAT) and subcutaneous adipose tissue (SAT) were measured. Volumes of sBAT, abdominal VAT and SAT, and thigh muscle volumes were measured.

    Results: The FF in the sBAT depot was lower than in VAT and SAT for all children. In linear correlations including sex and age as explanatory variables, sBAT FF correlated positively with all measures of adiposity (p < 0.01), except for VAT FF and weight, positively with sBAT T2* (p = 0.036), and negatively with osteocalcin (p = 0.017). When adding measures of adiposity as explanatory variables, sBAT FF also correlated negatively with thigh muscle volume (p < 0.01).

    Conclusions: Whole-body water-fat MRI of children allows for measurements of sBAT. The FF of sBAT was lower than that of VAT and SAT, indicating the presence of BAT. Future studies could confirm whether the observed correlations correspond to a hormonally active BAT.

    Place, publisher, year, edition, pages
    ELSEVIER SCIENCE INC, 2019
    Keywords
    Brown adipose tissue, Magnetic resonance imaging, Adiposity, Osteocalcin, Muscle volume, Quantitative MRI
    National Category
    Radiology, Nuclear Medicine and Medical Imaging
    Identifiers
    urn:nbn:se:uu:diva-380416 (URN), 10.1016/j.mri.2019.02.001 (DOI), 000461412300018 (), 30742901 (PubMedID)
    Funder
    Swedish Research Council, 2013-3013; Swedish Research Council, 2016-01040; Region Västra Götaland
    Available from: 2019-04-02. Created: 2019-04-02. Last updated: 2019-08-14. Bibliographically approved
    4. Estimating the cold-induced brown adipose tissue glucose uptake rate measured by 18F-FDG PET using infrared thermography and water-fat separated MRI
    2019 (English). In: Scientific Reports, E-ISSN 2045-2322, Vol. 9, article id 12358. Article in journal (Refereed). Published
    Abstract [en]

    Brown adipose tissue (BAT) expends chemical energy to produce heat, which makes it a potential therapeutic target for combating metabolic dysfunction and overweight/obesity by increasing its metabolic activity. The most well-established method for measuring BAT metabolic activity is glucose uptake rate (GUR) measured using 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET). However, this is expensive and exposes the subjects to potentially harmful radiation. Cheaper and safer methods are warranted for large-scale or longitudinal studies. Potential alternatives include infrared thermography (IRT) and magnetic resonance imaging (MRI). The aim of this study was to evaluate and further develop these techniques. Twelve healthy adult subjects were studied. The BAT GUR was measured using 18F-FDG PET during individualized cooling. The temperatures of the supraclavicular fossae and a control region were measured using IRT during a simple cooling protocol. The fat fraction and effective transverse relaxation rate of BAT were measured using MRI without any cooling intervention. Simple and multiple linear regressions were employed to evaluate how well the MRI and IRT measurements could estimate the GUR. Results showed that both IRT and MRI measurements correlated with the GUR. This suggests that these measurements may be suitable for estimating the cold-induced BAT GUR in future studies.
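    The multiple-regression step described above can be sketched with an ordinary least-squares fit. The numbers in the usage below are synthetic, not the study's subject data, and `fit_gur_model` is a hypothetical name.

```python
import numpy as np

def fit_gur_model(fat_fraction, irt_temp, gur):
    """Multiple linear regression GUR ~ b0 + b1*FF + b2*T.

    fat_fraction, irt_temp, gur: 1-D arrays, one entry per subject.
    Returns [intercept, FF slope, temperature slope].
    """
    fat_fraction = np.asarray(fat_fraction, dtype=float)
    # Design matrix: intercept column, FF column, IRT temperature column.
    X = np.column_stack([np.ones_like(fat_fraction), fat_fraction, irt_temp])
    coef, *_ = np.linalg.lstsq(X, gur, rcond=None)
    return coef
```

    With real data one would also inspect residuals and confidence intervals before treating FF and IRT temperature as usable surrogates for PET-measured GUR.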

    Keywords
    brown adipose tissue, 18F-FDG positron emission tomography, infrared thermography, magnetic resonance imaging, PET/MRI, water–fat signal separation
    National Category
    Radiology, Nuclear Medicine and Medical Imaging
    Research subject
    Radiology
    Identifiers
    urn:nbn:se:uu:diva-390410 (URN), 10.1038/s41598-019-48879-7 (DOI), 000482564800014 (), 31451711 (PubMedID)
    Funder
    Swedish Research Council, 2016-01040; Swedish Heart Lung Foundation, 2170492; EXODIAB - Excellence of Diabetes Research in Sweden
    Available from: 2019-08-09. Created: 2019-08-09. Last updated: 2022-09-15. Bibliographically approved
  • 11.
    Andrade-Loarca, Hector
    et al.
    Ludwig Maximilians Univ Munchen, Dept Math, D-80333 Munich, Germany.
    Kutyniok, Gitta
    Ludwig Maximilians Univ Munchen, Dept Math, D-80333 Munich, Germany; Univ Tromso, Dept Phys & Technol, N-9019 Tromso, Norway.
    Öktem, Ozan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Scientific Computing. KTH Royal Inst Technol, Dept Math, SE-10044 Stockholm, Sweden.
    Petersen, Philipp
    Univ Vienna, Fac Math, A-1090 Vienna, Austria; Univ Vienna, Res Network Data Sci, A-1090 Vienna, Austria.
    Deep microlocal reconstruction for limited-angle tomography, 2022. In: Applied and Computational Harmonic Analysis, ISSN 1063-5203, E-ISSN 1096-603X, Vol. 59, p. 155-197. Article in journal (Refereed)
    Abstract [en]

    We present a deep-learning-based algorithm to jointly solve a reconstruction problem and a wavefront set extraction problem in tomographic imaging. The algorithm is based on a recently developed digital wavefront set extractor as well as the well-known microlocal canonical relation for the Radon transform. We use the wavefront set information about x-ray data to improve the reconstruction by requiring that the underlying neural networks simultaneously extract the correct ground truth wavefront set and ground truth image. As a necessary theoretical step, we identify the digital microlocal canonical relations for deep convolutional residual neural networks. We find strong numerical evidence for the effectiveness of this approach.

  • 12.
    Anklin, Valentin
    et al.
    IBM Research-Europe, Zurich, Switzerland;ETH Zurich, Zurich, Switzerland.
    Pati, Pushpak
    IBM Research-Europe, Zurich, Switzerland;ETH Zurich, Zurich, Switzerland.
    Jaume, Guillaume
    IBM Research-Europe, Zurich, Switzerland;EPFL, Lausanne, Switzerland.
    Bozorgtabar, Behzad
    EPFL, Lausanne, Switzerland.
    Foncubierta-Rodriguez, Antonio
    IBM Research-Europe, Zurich, Switzerland.
    Thiran, Jean-Philippe
    EPFL, Lausanne, Switzerland.
    Sibony, Mathilde
    Cochin Hospital, Paris, France;University of Paris, Paris, France.
    Gabrani, Maria
    IBM Research-Europe, Zurich, Switzerland.
    Goksel, Orcun
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. ETH Zurich, Zurich, Switzerland.
    Learning Whole-Slide Segmentation from Inexact and Incomplete Labels Using Tissue Graphs, 2021. In: Medical Image Computing and Computer Assisted Intervention: MICCAI 2021, Cham: Springer Nature, 2021, p. 636-646. Conference paper (Refereed)
    Abstract [en]

    Segmenting histology images into diagnostically relevant regions is imperative to support timely and reliable decisions by pathologists. To this end, computer-aided techniques have been proposed to delineate relevant regions in scanned histology slides. However, these techniques require large task-specific datasets of annotated pixels, whose acquisition is tedious, time-consuming, expensive, and infeasible for many histology tasks. Weakly-supervised semantic segmentation techniques have therefore been proposed to leverage weak supervision, which is cheaper and quicker to acquire. In this paper, we propose SEGGINI, a weakly-supervised segmentation method using graphs, that can utilize weak multiplex annotations, i.e., inexact and incomplete annotations, to segment arbitrary and large images, scaling from tissue microarray (TMA) to whole slide image (WSI). Formally, SEGGINI constructs a tissue-graph representation for an input image, where the graph nodes depict tissue regions. Then, it performs weakly-supervised segmentation via node classification by using inexact image-level labels, incomplete scribbles, or both. We evaluated SEGGINI on two public prostate cancer datasets containing TMAs and WSIs. Our method achieved state-of-the-art segmentation performance on both datasets for various annotation settings while being comparable to a pathologist baseline. Code and models are available at: https://github.com/histocartography/seg-gini

  • 13. Arvidsson, Anna
    et al.
    Sarve, Hamid
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Johansson, Carina B.
    Comparing and visualizing titanium implant integration in rat bone using 2D and 3D techniques, 2015. In: Journal of Biomedical Materials Research. Part B - Applied biomaterials, ISSN 1552-4973, E-ISSN 1552-4981, Vol. 103, no 1, p. 12-20. Article in journal (Refereed)
    Abstract [en]

    The aim was to compare the osseointegration of grit-blasted implants with and without a hydrogen fluoride treatment in rat tibia and femur, and to visualize bone formation using state-of-the-art 3D visualization techniques. Grit-blasted implants were inserted in femur and tibia of 10 Sprague-Dawley rats (4 implants/rat). Four weeks after insertion, bone implant samples were retrieved. Selected samples were imaged in 3D using Synchrotron Radiation-based CT (SRCT). The 3D data was quantified and visualized using two novel visualization techniques, thread fly-through and 2D unfolding. All samples were processed to cut and ground sections and 2D histomorphometrical comparisons of bone implant contact (BIC), bone area (BA), and mirror image area (MI) were performed. BA values were statistically significantly higher for test implants than controls (p<0.05), but BIC and MI data did not differ significantly. Thus, the results partly indicate improved bone formation at blasted and hydrogen fluoride treated implants, compared to blasted implants. The 3D analysis was a valuable complement to 2D analysis, facilitating improved visualization. However, further studies are required to evaluate aspects of 3D quantitative techniques in relation to the light microscopy traditionally used for osseointegration studies. (c) 2014 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 103B: 12-20, 2015.

  • 14.
    Augustin, Xenia
    et al.
    ETH Zurich.
    Zhang, Lin
    ETH Zurich.
    Goksel, Orcun
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. ETH Zurich.
    Estimating Mean Speed-of-Sound from Sequence-Dependent Geometric Disparities, 2021. Conference paper (Refereed)
  • 15.
    Avenel, Christophe
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Carlbom, Ingrid
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Blur detection and visualization in histological whole slide images, 2015. In: Proc. 10th International Conference on Mass Data Analysis of Images and Signals, Leipzig, Germany: IBaI, 2015. Conference paper (Refereed)
    Abstract [en]

    Digital pathology holds the promise of improved workflow and of the use of image analysis to extract features from tissue samples for quantitative analysis, improving the current subjective analysis of, for example, cancer tissue. But this requires fast and reliable image digitization. In this paper we address image blurriness, which is a particular problem with very large images or tissue microarrays scanned with whole slide scanners, since autofocus methods may fail when there is a large variation in image content. We introduce a method to detect, quantify and display blurriness in whole slide images (WSI) in real time. We describe a blurriness measurement based on an ideal high-pass filter in the frequency domain. In contrast with other methods, ours does not require any prior knowledge of the image content, and it produces a continuous blurriness map over the entire WSI. This map can be displayed as an overlay of the original data and viewed at different levels of magnification with zoom and pan features. The computation time for an entire WSI is around 5 minutes on an average workstation, which is about 180 times faster than existing methods.
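    The frequency-domain idea behind such a blurriness measure can be sketched as follows: score a tile by the fraction of its spectral energy above a cutoff frequency, since blur suppresses high frequencies. This is an illustrative sketch of the general technique, not the paper's exact implementation; the cutoff value is an assumption.

```python
import numpy as np

def blurriness(tile, cutoff=0.25):
    """Sharpness score for one grayscale image tile.

    Applies an ideal high-pass mask in the frequency domain and returns
    the fraction of spectral power above `cutoff` (normalised frequency).
    Lower values indicate a blurrier tile.
    """
    f = np.fft.fftshift(np.fft.fft2(tile))
    power = np.abs(f) ** 2
    h, w = tile.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalised radial distance from the centre of the shifted spectrum.
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return power[r > cutoff].sum() / (power.sum() + 1e-12)
```

    Evaluating this per tile and interpolating the scores yields the kind of continuous blurriness map described above.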

  • 16.
    Avenel, Christophe
    et al.
    CADESS Med AB, Uppsala, Sweden.
    Tolf, Anna
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Immunology, Genetics and Pathology, Clinical and experimental pathology.
    Dragomir, Anca
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Immunology, Genetics and Pathology, Clinical and experimental pathology. Department of Pathology, Uppsala University Hospital, Uppsala, Sweden..
    Carlbom, Ingrid
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. CADESS Med AB, Uppsala, Sweden.
    Glandular Segmentation of Prostate Cancer: An Illustration of How the Choice of Histopathological Stain Is One Key to Success for Computational Pathology, 2019. In: Frontiers in Bioengineering and Biotechnology, E-ISSN 2296-4185, Vol. 7, article id 125. Article in journal (Refereed)
    Abstract [en]

    Digital pathology offers the potential for computer-aided diagnosis, significantly reducing the pathologists' workload and paving the way for accurate prognostication with reduced inter- and intra-observer variations. But successful computer-based analysis requires careful tissue preparation and image acquisition to keep color and intensity variations to a minimum. While the human eye may recognize prostate glands with significant color and intensity variations, a computer algorithm may fail under such conditions. Since malignancy grading of prostate tissue according to Gleason or to the International Society of Urological Pathology (ISUP) grading system is based on architectural growth patterns of prostatic carcinoma, automatic methods must rely on accurate identification of the prostate glands. But due to poor color differentiation between stroma and epithelium from the common stain hematoxylin-eosin, no method is yet able to segment all types of glands, making automatic prognostication hard to attain. We address the effect of tissue preparation on glandular segmentation with an alternative stain, Picrosirius red-hematoxylin, which clearly delineates the stromal boundaries, and couple this stain with a color decomposition that removes intensity variation. In this paper we propose a segmentation algorithm that uses image analysis techniques based on mathematical morphology and that can successfully determine the glandular boundaries. Accurate determination of the stromal and glandular morphology enables the identification of the architectural patterns that determine the malignancy grade and the classification of each gland into its appropriate Gleason grade or ISUP Grade Group. Segmentation of prostate tissue with the new stain and decomposition method has been successfully tested on more than 11,000 objects including well-formed glands (Gleason grade 3), cribriform and fine caliber glands (grade 4), and single cells (grade 5).

  • 17.
    Azar, Jimmy
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction.
    Automated Tissue Image Analysis Using Pattern Recognition, 2014. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Automated tissue image analysis aims to develop algorithms for a variety of histological applications. This has important implications in the diagnostic grading of cancer such as in breast and prostate tissue, as well as in the quantification of prognostic and predictive biomarkers that may help assess the risk of recurrence and the responsiveness of tumors to endocrine therapy.

    In this thesis, we use pattern recognition and image analysis techniques to solve several problems relating to histopathology and immunohistochemistry applications. In particular, we present a new method for the detection and localization of tissue microarray cores in an automated manner and compare it against conventional approaches.

    We also present an unsupervised method for color decomposition based on modeling the image formation process while taking acquisition noise into account. The method overcomes the limitation of having to specify absorption spectra for the stains that require separation: reference colors are instead estimated by fitting a Gaussian mixture model trained using expectation-maximization.

    Another important factor in histopathology is the choice of stain, though it often goes unnoticed. Stain color combinations determine the extent of overlap between chromaticity clusters in color space, and this intrinsic overlap sets a main limitation on the performance of classification methods, regardless of their nature or complexity. In this thesis, we present a framework for optimizing the selection of histological stains in a manner that is aligned with the final objective of automation, rather than visual analysis.

    Immunohistochemistry can facilitate the quantification of biomarkers such as estrogen, progesterone, and the human epidermal growth factor 2 receptors, in addition to Ki-67 proteins that are associated with cell growth and proliferation. As an application, we propose a method for the identification of paired antibodies based on correlating probability maps of immunostaining patterns across adjacent tissue sections.

    Finally, we present a new feature descriptor for characterizing glandular structure and tissue architecture, which form an important component of Gleason and tubule-based Elston grading. The method is based on defining shape-preserving, neighborhood annuli around lumen regions and gathering quantitative and spatial data concerning the various tissue-types.

    List of papers
    1. Microarray Core Detection by Geometric Restoration
    2012 (English). In: Analytical Cellular Pathology, ISSN 0921-8912, E-ISSN 1878-3651, Vol. 35, no 5-6, p. 381-393. Article in journal (Refereed). Published
    Abstract [en]

    Whole-slide imaging of tissue microarrays (TMAs) holds the promise of automated image analysis of a large number of histopathological samples from a single slide. This demands high-throughput image processing to enable analysis of these tissue samples for diagnosis of cancer and other conditions. In this paper, we present a completely automated method for the accurate detection and localization of tissue cores that is based on geometric restoration of the core shapes without placing any assumptions on grid geometry. The method relies on hierarchical clustering in conjunction with the Davies-Bouldin index for cluster validation to estimate the number of cores in the image, from which we estimate the core radius and refine this estimate using morphological granulometry. The final stage of the algorithm reconstructs circular discs from core sections such that these discs cover the entire region of each core regardless of the precise shape of the core. The results show that the proposed method is able to reconstruct core locations without any evidence of localization error. Furthermore, the algorithm is more efficient than existing methods based on the Hough transform for circle detection. The algorithm's simplicity, accuracy, and computational efficiency allow for automated high-throughput analysis of microarray images.
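    The cluster-validation idea (pick the number of cores that minimises the Davies-Bouldin index) can be sketched in a few lines. This toy uses Lloyd's k-means with deterministic farthest-point seeding instead of the paper's hierarchical clustering; all function names are illustrative.

```python
import numpy as np

def farthest_point_init(points, k):
    """Deterministic seeding: repeatedly add the point farthest from all seeds."""
    centers = [points[0]]
    for _ in range(k - 1):
        d = np.linalg.norm(points[:, None] - np.array(centers)[None],
                           axis=2).min(axis=1)
        centers.append(points[d.argmax()])
    return np.array(centers)

def lloyd(points, k, iters=50):
    """Plain Lloyd's k-means; keeps the old center if a cluster empties."""
    centers = farthest_point_init(points, k)
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([points[labels == i].mean(axis=0)
                            if np.any(labels == i) else centers[i]
                            for i in range(k)])
    return labels, centers

def davies_bouldin(points, labels, centers):
    """Davies-Bouldin index: mean worst-case scatter-to-separation ratio."""
    k = len(centers)
    scatter = np.array([np.linalg.norm(points[labels == i] - centers[i],
                                       axis=1).mean() for i in range(k)])
    index = 0.0
    for i in range(k):
        index += max((scatter[i] + scatter[j])
                     / np.linalg.norm(centers[i] - centers[j])
                     for j in range(k) if j != i)
    return index / k

def estimate_core_count(points, k_range=range(2, 7)):
    """Choose the cluster count with the lowest Davies-Bouldin index."""
    scores = {}
    for k in k_range:
        labels, centers = lloyd(points, k)
        if len(np.unique(labels)) < k:   # degenerate solution; skip this k
            continue
        scores[k] = davies_bouldin(points, labels, centers)
    return min(scores, key=scores.get)
```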

    National Category
    Medical Image Processing
    Identifiers
    urn:nbn:se:uu:diva-183618 (URN), 10.3233/ACP-2012-0067 (DOI), 000311675800005 (), 22684152 (PubMedID)
    Available from: 2012-10-30. Created: 2012-10-30. Last updated: 2022-01-28. Bibliographically approved
    2. Blind Color Decomposition of Histological Images
    2013 (English). In: IEEE Transactions on Medical Imaging, ISSN 0278-0062, E-ISSN 1558-254X, Vol. 32, no 6, p. 983-994. Article in journal (Refereed). Published
    Abstract [en]

    Cancer diagnosis is based on visual examination under a microscope of tissue sections from biopsies. But whereas pathologists rely on tissue stains to identify morphological features, automated tissue recognition using color is fraught with problems that stem from image intensity variations due to variations in tissue preparation, variations in spectral signatures of the stained tissue, spectral overlap and spatial aliasing in acquisition, and noise at image acquisition. We present a blind method for color decomposition of histological images. The method decouples intensity from color information and bases the decomposition only on the tissue absorption characteristics of each stain. By modeling the charge-coupled device sensor noise, we improve the method's accuracy. We extend current linear decomposition methods to include stained tissues where one spectral signature cannot be separated from all combinations of the other tissues' spectral signatures. We demonstrate both qualitatively and quantitatively that our method results in more accurate decompositions than methods based on non-negative matrix factorization and independent component analysis. The result is one density map for each stained tissue type that classifies portions of pixels into the correct stained tissue, allowing accurate identification of morphological features that may be linked to cancer.
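    The linear decomposition framework the blind method extends is easiest to see in its classical supervised form: convert intensities to optical densities via the Beer-Lambert law, then invert a known stain absorption matrix. The sketch below shows that baseline, not the paper's blind estimation; the stain vectors used in practice would come from calibration.

```python
import numpy as np

def color_deconvolve(rgb, stain_matrix, i0=255.0):
    """Separate an RGB histology image into per-stain density maps.

    rgb:          (H, W, 3) image with intensities in [0, i0].
    stain_matrix: (n_stains, 3) rows of stain optical-density vectors.
    Returns an (H, W, n_stains) array of stain densities.
    """
    # Beer-Lambert: optical density is -log10 of transmitted fraction.
    od = -np.log10(np.clip(np.asarray(rgb, float), 1.0, i0) / i0)
    # Densities d satisfy od = d @ stain_matrix, so invert with the
    # Moore-Penrose pseudo-inverse.
    return od @ np.linalg.pinv(stain_matrix)
```

    The blind method described above removes the need to supply `stain_matrix` by estimating the stain characteristics from the image itself.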

    National Category
    Medical Image Processing
    Research subject
    Computerized Image Processing
    Identifiers
    urn:nbn:se:uu:diva-160312 (URN), 10.1109/TMI.2013.2239655 (DOI), 000319701800002 ()
    Available from: 2011-10-21. Created: 2011-10-21. Last updated: 2022-01-28
    3. Histological Stain Evaluation for Machine Learning Applications
    2012 (English). In: Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention, 2012. Conference paper, Published paper (Refereed)
    National Category
    Medical Image Processing
    Identifiers
    urn:nbn:se:uu:diva-183619 (URN)
    Conference
    MICCAI 2012, the 15th International Conference on Medical Image Computing and Computer Assisted Intervention, October 1-5, 2012, Nice, France
    Available from: 2012-10-30 Created: 2012-10-30 Last updated: 2015-01-23
    4. Image segmentation and identification of paired antibodies in breast tissue
    2014 (English)In: Computational & Mathematical Methods in Medicine, ISSN 1748-670X, E-ISSN 1748-6718, p. 647273:1-11Article in journal (Refereed) Published
    Abstract [en]

    Comparing staining patterns of paired antibodies designed toward a specific protein but toward different epitopes of the protein provides quality control over the binding and the antibodies' ability to identify the target protein correctly and exclusively. We present a method for automated quantification of immunostaining patterns for antibodies in breast tissue using the Human Protein Atlas database. In such tissue, the dark brown dye 3,3'-diaminobenzidine is used as an antibody-specific stain whereas the blue dye hematoxylin is used as a counterstain. The proposed method is based on clustering and relative scaling of features following principal component analysis. Our method is able (1) to accurately segment and identify staining patterns and quantify the amount of staining and (2) to detect paired antibodies by correlating the segmentation results among different cases. Moreover, the method is simple, operating in a low-dimensional feature space, and computationally efficient, which makes it suitable for high-throughput processing of tissue microarrays.
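    The pipeline described (principal component analysis, relative scaling of the projected features, then clustering) can be sketched on synthetic data. Everything below — the toy data, the two-component projection, the tiny k-means loop — is our own illustration, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic "staining pattern" clusters in a 3-feature space (hypothetical data).
a = rng.normal(loc=[0.8, 0.2, 0.1], scale=0.05, size=(50, 3))
b = rng.normal(loc=[0.2, 0.3, 0.9], scale=0.05, size=(50, 3))
x = np.vstack([a, b])

# PCA via SVD of the centered data, keeping the first two components.
xc = x - x.mean(axis=0)
_, _, vt = np.linalg.svd(xc, full_matrices=False)
proj = xc @ vt[:2].T
proj = proj / proj.std(axis=0)          # relative scaling of the features

# Minimal 2-means clustering on the scaled projection.
centers = proj[[0, -1]].copy()
for _ in range(20):
    labels = np.argmin(((proj[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([proj[labels == k].mean(axis=0) for k in (0, 1)])
```

With well-separated groups, the cluster assignment recovers the two synthetic staining patterns.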

    National Category
    Medical Image Processing
    Identifiers
    urn:nbn:se:uu:diva-229978 (URN), 10.1155/2014/647273 (DOI), 000338856800001 (), 25061472 (PubMedID)
    Projects
    eSSENCE
    Available from: 2014-07-01 Created: 2014-08-18 Last updated: 2017-12-05. Bibliographically approved
    5. Automated Classification of Glandular Tissue by Statistical Proximity Sampling
    2015 (English)In: International Journal of Biomedical Imaging, ISSN 1687-4188, E-ISSN 1687-4196, article id 943104Article in journal (Refereed) Published
    Abstract [en]

    Due to the complexity of biological tissue and variations in staining procedures, features that are based on the explicit extraction of properties from subglandular structures in tissue images may have difficulty generalizing well over an unrestricted set of images and staining variations. We circumvent this problem by an implicit representation that is both robust and highly descriptive, especially when combined with a multiple instance learning approach to image classification. The new feature method is able to describe tissue architecture based on glandular structure. It is based on statistically representing the relative distribution of tissue components around lumen regions, while preserving spatial and quantitative information, as a basis for diagnosing and analyzing different areas within an image. We demonstrate the efficacy of the method in extracting discriminative features for obtaining high classification rates for tubular formation in both healthy and cancerous tissue, which is an important component in Gleason and tubule-based Elston grading. The proposed method may also be used for glandular classification in other tissue types, and is more generally applicable as a region-based feature descriptor in image analysis where the image represents a bag with a certain label (or grade) and the region-based feature vectors represent instances.

    National Category
    Medical Image Processing
    Identifiers
    urn:nbn:se:uu:diva-230871 (URN), 10.1155/2015/943104 (DOI), 000362067400001 ()
    Available from: 2014-09-01 Created: 2014-09-01 Last updated: 2017-12-05. Bibliographically approved
  • 18.
    Azar, Jimmy
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Busch, Christer
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Immunology, Genetics and Pathology, Molecular and Morphological Pathology.
    Carlbom, Ingrid
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Histological Stain Evaluation for Machine Learning Applications2012In: Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention, 2012Conference paper (Refereed)
  • 19.
    Azar, Jimmy
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Busch, Christer
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Genetics and Pathology.
    Carlbom, Ingrid
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Microarray Core Detection by Geometric Restoration2012In: Analytical Cellular Pathology, ISSN 0921-8912, E-ISSN 1878-3651, Vol. 35, no 5-6, p. 381-393Article in journal (Refereed)
    Abstract [en]

    Whole-slide imaging of tissue microarrays (TMAs) holds the promise of automated image analysis of a large number of histopathological samples from a single slide. This demands high-throughput image processing to enable analysis of these tissue samples for diagnosis of cancer and other conditions. In this paper, we present a completely automated method for the accurate detection and localization of tissue cores that is based on geometric restoration of the core shapes without placing any assumptions on grid geometry. The method relies on hierarchical clustering in conjunction with the Davies-Bouldin index for cluster validation in order to estimate the number of cores in the image, from which we estimate the core radius and refine this estimate using morphological granulometry. The final stage of the algorithm reconstructs circular discs from core sections such that these discs cover the entire region of each core regardless of the precise shape of the core. The results show that the proposed method is able to reconstruct core locations without any evidence of localization error. Furthermore, the algorithm is more efficient than existing methods based on the Hough transform for circle detection. The algorithm's simplicity, accuracy, and computational efficiency allow for automated high-throughput analysis of microarray images.
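    The cluster-validation step rests on the Davies-Bouldin index, which scores a candidate clustering by comparing within-cluster scatter to between-centroid separation (lower is better); scanning candidate cluster counts and keeping the minimum is the usual way to estimate the number of clusters. A standalone sketch with synthetic "core centre" data (variable names and data are ours, not the paper's):

```python
import numpy as np

def davies_bouldin(points, labels):
    """Davies-Bouldin index: lower means more compact, better-separated clusters."""
    ks = np.unique(labels)
    cents = np.array([points[labels == k].mean(axis=0) for k in ks])
    scatter = np.array([np.linalg.norm(points[labels == k] - c, axis=1).mean()
                        for k, c in zip(ks, cents)])
    db = 0.0
    for i in range(len(ks)):
        # Worst-case similarity of cluster i to any other cluster.
        ratios = [(scatter[i] + scatter[j]) / np.linalg.norm(cents[i] - cents[j])
                  for j in range(len(ks)) if j != i]
        db += max(ratios)
    return db / len(ks)

rng = np.random.default_rng(1)
# Three tight blobs standing in for TMA core centres.
cores = np.vstack([rng.normal(c, 0.1, size=(30, 2)) for c in ([0, 0], [3, 0], [0, 3])])
true_labels = np.repeat([0, 1, 2], 30)
bad_labels = np.arange(90) % 3          # arbitrary assignment for comparison
```

The correct grouping scores markedly lower than an arbitrary one, which is what makes the index usable for choosing the number of cores.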

  • 20.
    Azar, Jimmy C.
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Simonsson, Martin
    Bengtsson, Ewert
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Hast, Anders
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Image segmentation and identification of paired antibodies in breast tissue2014In: Computational & Mathematical Methods in Medicine, ISSN 1748-670X, E-ISSN 1748-6718, p. 647273:1-11Article in journal (Refereed)
    Abstract [en]

    Comparing staining patterns of paired antibodies designed toward a specific protein but toward different epitopes of the protein provides quality control over the binding and the antibodies' ability to identify the target protein correctly and exclusively. We present a method for automated quantification of immunostaining patterns for antibodies in breast tissue using the Human Protein Atlas database. In such tissue, the dark brown dye 3,3'-diaminobenzidine is used as an antibody-specific stain whereas the blue dye hematoxylin is used as a counterstain. The proposed method is based on clustering and relative scaling of features following principal component analysis. Our method is able (1) to accurately segment and identify staining patterns and quantify the amount of staining and (2) to detect paired antibodies by correlating the segmentation results among different cases. Moreover, the method is simple, operating in a low-dimensional feature space, and computationally efficient, which makes it suitable for high-throughput processing of tissue microarrays.

  • 21.
    Azar, Jimmy
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Simonsson, Martin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Bengtsson, Ewert
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Hast, Anders
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Automated Classification of Glandular Tissue by Statistical Proximity Sampling2015In: International Journal of Biomedical Imaging, ISSN 1687-4188, E-ISSN 1687-4196, article id 943104Article in journal (Refereed)
    Abstract [en]

    Due to the complexity of biological tissue and variations in staining procedures, features that are based on the explicit extraction of properties from subglandular structures in tissue images may have difficulty generalizing well over an unrestricted set of images and staining variations. We circumvent this problem by an implicit representation that is both robust and highly descriptive, especially when combined with a multiple instance learning approach to image classification. The new feature method is able to describe tissue architecture based on glandular structure. It is based on statistically representing the relative distribution of tissue components around lumen regions, while preserving spatial and quantitative information, as a basis for diagnosing and analyzing different areas within an image. We demonstrate the efficacy of the method in extracting discriminative features for obtaining high classification rates for tubular formation in both healthy and cancerous tissue, which is an important component in Gleason and tubule-based Elston grading. The proposed method may also be used for glandular classification in other tissue types, and is more generally applicable as a region-based feature descriptor in image analysis where the image represents a bag with a certain label (or grade) and the region-based feature vectors represent instances.

  • 22.
    Bajic, Buda
    et al.
    Univ. of Novi Sad, Fac Tech Sci, Novi Sad, Serbia.
    Lindblad, Joakim
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3.
    Sladoje, Nataša
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3.
    Blind deconvolution of images degraded with mixed Poisson-Gaussian noise with application in Transmission Electron Microscopy2016In: Proceedings of the Swedish Society for Automated Image Analysis, Uppsala, 2016, p. 137-141Conference paper (Other academic)
  • 23. Bajic, Buda
    et al.
    Suveer, Amit
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Gupta, Anindya
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Pepic, Ivana
    Lindblad, Joakim
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Sladoje, Natasa
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Sintorn, Ida-Maria
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Denoising of short exposure transmission electron microscopy images for ultrastructural enhancement2018In: Proc. 15th International Symposium on Biomedical Imaging, IEEE, 2018, p. 921-925Conference paper (Refereed)
  • 24. Bajić, Buda
    et al.
    Lindblad, Joakim
    Sladoje, Nataša
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    An evaluation of potential functions for regularized image deblurring2014In: Image Analysis and Recognition: Part I, Springer Berlin/Heidelberg, 2014, p. 150-158Conference paper (Refereed)
  • 25.
    Banerjee, Subhashis
    et al.
    Machine Intelligence Unit Indian Statistical Institute Kolkata, India.
    Kumar Dhara, Ashis
    National Institute of Technology, Durgapur, West Bengal, India.
    Wikström, Johan
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Surgical Sciences, Radiology.
    Strand, Robin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Segmentation of Intracranial Aneurysm Remnant in MRA using Dual-Attention Atrous Net2021In: 25th International Conference on Pattern Recognition (ICPR), 2021, p. 9265-9272Conference paper (Refereed)
    Abstract [en]

    Due to the advancement of non-invasive medical imaging modalities like Magnetic Resonance Angiography (MRA), an increasing number of Intracranial Aneurysm (IA) cases have been reported in recent years. IAs are typically treated by so-called endovascular coiling, where blood flow in the IA is prevented by embolization with a platinum coil. Accurate quantification of the IA Remnant (IAR), i.e. the volume with blood flow present after treatment, is the most important factor in choosing the right treatment plan. This is typically done by manually segmenting the aneurysm remnant from the MRA volume. Since manual segmentation of volumetric images is a labour-intensive and error-prone process, development of an automatic volumetric segmentation method is required. Segmentation of small structures such as IAs, which may vary greatly in size, shape, and location, is considered extremely difficult. The similar intensity distributions of IAs and surrounding blood vessels make the task more challenging and susceptible to false positives. In this paper we propose a novel 3D CNN architecture called Dual-Attention Atrous Net (DAtt-ANet), which can efficiently segment IAR volumes from MRA images by reconciling features at different scales using the proposed Parallel Atrous Unit (PAU), along with a self-attention mechanism for extracting fine-grained features and intra-class correlation. The proposed DAtt-ANet model is trained and evaluated on a clinical MRA image dataset of IAR consisting of 46 subjects. We compared the proposed DAtt-ANet with five state-of-the-art CNN models based on their segmentation performance. The proposed DAtt-ANet outperformed all other methods and was able to achieve a five-fold cross-validation DICE score of 0.73 +/- 0.06.
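    The atrous units at the heart of this architecture build on dilated convolutions, which enlarge the receptive field without adding weights by inserting gaps between kernel taps. A minimal 1D sketch of that idea (our own helper, not the DAtt-ANet code):

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """'Valid' 1D cross-correlation with gaps of (dilation - 1) between kernel taps."""
    span = (len(w) - 1) * dilation + 1      # receptive field of one output sample
    out = np.array([np.dot(x[i:i + span:dilation], w)
                    for i in range(len(x) - span + 1)])
    return out, span

x = np.arange(16, dtype=float)
w = np.array([1.0, 1.0, 1.0])
y1, rf1 = dilated_conv1d(x, w, dilation=1)   # 3 weights, receptive field 3
y4, rf4 = dilated_conv1d(x, w, dilation=4)   # same 3 weights, receptive field 9
```

Running several such branches with different dilations in parallel and merging the results is the usual way a "parallel atrous" block reconciles features at multiple scales.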

  • 26. Banerjee, Subhashis
    et al.
    Strand, Robin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Deep Active Learning for Glioblastoma Quantification2023In: Scandinavian Conference on Image Analysis, 2023, p. 190-200Conference paper (Refereed)
  • 27.
    Banerjee, Subhashis
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3.
    Strand, Robin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3.
    Lifelong Learning with Dynamic Convolutions for Glioma: Segmentation from Multi-Modal MRI2023In: Medical Imaging 2023: Image Processing / [ed] Olivier Colliot;Ivana Išgum, SPIE - International Society for Optical Engineering, 2023, Vol. 12464, article id 124643JConference paper (Refereed)
    Abstract [en]

    This paper presents a novel solution for catastrophic forgetting in lifelong learning (LL) using Dynamic Convolution Neural Network (Dy-CNN). The proposed dynamic convolution layer can adapt convolution filters by learning kernel coefficients or weights based on the input image. Suitability of the proposed Dy-CNN in a lifelong sequential learning-based scenario with multi-modal MR images is experimentally demonstrated for segmentation of Glioma tumor from multi-modal MR images. Experimental results demonstrated the superiority of the Dy-CNN-based segmenting network in terms of learning through multi-modal MRI images and better convergence of lifelong learning-based training.

  • 28.
    Banerjee, Subhashis
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3.
    Strand, Robin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3.
    Lifelong Learning with Dynamic Convolutions for Glioma Segmentation from Multi-Modal MRI2023In: Medical imaging 2023 / [ed] Colliot, O Isgum, I, SPIE - International Society for Optical Engineering, 2023, Vol. 12464, article id 124643JConference paper (Refereed)
    Abstract [en]

    This paper presents a novel solution for catastrophic forgetting in lifelong learning (LL) using Dynamic Convolution Neural Network (Dy-CNN). The proposed dynamic convolution layer can adapt convolution filters by learning kernel coefficients or weights based on the input image. The suitability of the proposed Dy-CNN in a lifelong sequential learning-based scenario with multi-modal MR images is experimentally demonstrated for the segmentation of Glioma tumors from multi-modal MR images. Experimental results demonstrated the superiority of the Dy-CNN-based segmenting network in terms of learning through multi-modal MRI images and better convergence of lifelong learning-based training.
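    The dynamic-convolution idea — kernel coefficients predicted from the input, used to mix a bank of basis kernels — can be sketched in a few lines. Everything here (the basis kernels, the global-average "attention", all names) is an illustrative assumption, not the Dy-CNN implementation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
basis = rng.normal(size=(4, 3, 3))          # K = 4 candidate 3x3 kernels
mix_w = rng.normal(size=(4,))               # hypothetical learned mixing parameters

def dynamic_kernel(image):
    """Mix the basis kernels with coefficients computed from the input image."""
    coeff = softmax(mix_w * image.mean())   # toy input-dependent attention
    return np.tensordot(coeff, basis, axes=1)

k_dark = dynamic_kernel(np.zeros((8, 8)))   # uniform mixing for a zero-mean input
k_bright = dynamic_kernel(np.ones((8, 8)))  # different mixing for a bright input
```

The point is that the effective filter changes with the input, which is what lets a single layer adapt across sequentially arriving modalities.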

  • 29.
    Banerjee, Subhashis
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Toumpanakis, Dimitrios
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Surgical Sciences, Radiology.
    Dhara, Ashis Kumar
    Natl Inst Technol Durgapur, Dept Elect Engn, Durgapur, India..
    Wikström, Johan
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Surgical Sciences, Radiology.
    Strand, Robin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Topology-Aware Learning for Volumetric Cerebrovascular Segmentation2022In: 2022 IEEE International Symposium on Biomedical Imaging (IEEE ISBI 2022), IEEE, 2022, p. 1-4Conference paper (Refereed)
    Abstract [en]

    This paper presents a topology-aware learning strategy for volumetric segmentation of intracranial cerebrovascular structures. We propose a multi-task deep CNN along with a topology-aware loss function for this purpose. Along with the main task (i.e. segmentation), we train the model to learn two related auxiliary tasks viz. learning the distance transform for the voxels on the surface of the vascular tree and learning the vessel centerline. This provides additional regularization and allows the encoder to learn higher-level intermediate representations to boost the performance of the main task. We compare the proposed method with six state-of-the-art deep learning-based 3D vessel segmentation methods, by using a public Time-Of-Flight Magnetic Resonance Angiography (TOF-MRA) dataset. Experimental results demonstrate that the proposed method has the best performance in this particular context.
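    One of the auxiliary tasks is regressing the distance transform of voxels on the vessel surface. A 2D pure-Python sketch of a multi-source BFS distance transform conveys the idea (illustrative only; the paper works on 3D volumes and its implementation is not shown here):

```python
from collections import deque

def distance_transform(mask):
    """Manhattan distance of each cell to the nearest foreground (vessel) cell."""
    h, w = len(mask), len(mask[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for i in range(h):
        for j in range(w):
            if mask[i][j]:
                dist[i][j] = 0
                q.append((i, j))
    while q:                                # multi-source breadth-first search
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and dist[ni][nj] is None:
                dist[ni][nj] = dist[i][j] + 1
                q.append((ni, nj))
    return dist

vessel = [[0, 0, 0, 0],
          [0, 1, 1, 0],
          [0, 0, 0, 0]]
dt = distance_transform(vessel)
```

Predicting such a map alongside the segmentation gives the encoder a smooth, geometry-aware target, which is the regularization effect the abstract describes.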

  • 30.
    Banerjee, Subhashis
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3.
    Toumpanakis, Dimitrios
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Surgical Sciences, Radiology.
    Dhara, Ashis
    Department of Electrical Engineering, National Institute of Technology Durgapur, India.
    Wikström, Johan
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Surgical Sciences, Radiology.
    Strand, Robin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3.
    Deep Curriculum Learning for Follow-up MRI Registration in Glioblastoma2023In: Medical Imaging 2023: Image Processing, SPIE -Society of Photo-Optical Instrumentation Engineers , 2023, Vol. 12464, article id 124643IConference paper (Refereed)
    Abstract [en]

    This paper presents a weakly supervised deep convolutional neural network-based approach to perform voxel-level 3D registration between subsequent follow-up MRI scans of the same patient. To handle the large deformation in the surrounding brain tissues due to the tumor's mass effect, we proposed curriculum learning-based training for the network. Weak supervision helps the network focus more on the tumor region and resection cavity through a saliency detection network. Qualitative and quantitative experimental results show the proposed registration network outperformed two popular state-of-the-art methods.

  • 31.
    Bendazzoli, Simone
    et al.
    KTH Royal Inst Technol, Dept Biomed Engn & Hlth Syst, Halsovagen 11, S-14157 Huddinge, Sweden.
    Brusini, Irene
    KTH Royal Inst Technol, Dept Biomed Engn & Hlth Syst, Halsovagen 11, S-14157 Huddinge, Sweden;Karolinska Inst, Dept Neurobiol Care Sci & Soc, Alfred Nobels Alle 23,D3, S-14152 Huddinge, Sweden.
    Damberg, Peter
    Karolinska Inst, Dept Clin Neurosci, Tomtebodavagen 18A P1 5, S-17177 Stockholm, Sweden.
    Smedby, Örjan
    KTH Royal Inst Technol, Dept Biomed Engn & Hlth Syst, Halsovagen 11, S-14157 Huddinge, Sweden.
    Andersson, Leif
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Medical Biochemistry and Microbiology. Uppsala University, Science for Life Laboratory, SciLifeLab.
    Wang, Chunliang
    KTH Royal Inst Technol, Dept Biomed Engn & Hlth Syst, Halsovagen 11, S-14157 Huddinge, Sweden.
    Automatic rat brain segmentation from MRI using statistical shape models and random forest2019In: MEDICAL IMAGING 2019: IMAGE PROCESSING / [ed] Angelini, ED Landman, BA, SPIE-INT SOC OPTICAL ENGINEERING , 2019, article id 109492OConference paper (Refereed)
    Abstract [en]

    In MRI neuroimaging, the shimming procedure is used before image acquisition to correct for inhomogeneity of the static magnetic field within the brain. To correctly adjust the field, the brain's location and edges must first be identified from quickly acquired low resolution data. This process is currently carried out manually by an operator, which can be time-consuming and not always accurate. In this work, we implement a quick and automatic technique for brain segmentation to be potentially used during the shimming. Our method is based on two main steps. First, a random forest classifier is used to get a preliminary segmentation from an input MRI image. Subsequently, a statistical shape model of the brain, which was previously generated from ground-truth segmentations, is fitted to the output of the classifier to obtain a model-based segmentation mask. In this way, a priori knowledge on the brain's shape is included in the segmentation pipeline. The proposed methodology was tested on low resolution images of rat brains and further validated on rabbit brain images of higher resolution. Our results suggest that the present method is promising for the desired purpose in terms of time efficiency, segmentation accuracy and repeatability. Moreover, the use of shape modeling was shown to be particularly useful when handling low-resolution data, which could lead to erroneous classifications when using only machine learning-based methods.

  • 32.
    Bengtsson Bernander, Karl
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Equivariant Neural Networks for Biomedical Image Analysis2024Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    While artificial intelligence and deep learning have revolutionized many fields in the last decade, one of the key drivers has been access to data. This is especially true in biomedical image analysis where expert annotated data is hard to come by. The combination of Convolutional Neural Networks (CNNs) with data augmentation has proven successful in increasing the amount of training data at the cost of overfitting. In this thesis, equivariant neural networks have been used to extend the equivariant properties of CNNs to more transformations than translations. The networks have been trained and evaluated on biomedical image datasets, including bright-field microscopy images of cytological samples indicating oral cancer, and transmission electron microscopy images of virus samples. By designing the networks to be equivariant to e.g. rotations, it is shown that the need for data augmentation is reduced, that less overfitting occurs, and that convergence during training is faster. Furthermore, equivariant neural networks are more data efficient than CNNs, as demonstrated by scaling laws. These benefits are not present in all problem settings, and which benefits will occur is somewhat unpredictable. We have identified that the results to some extent depend on architectures, hyperparameters and datasets. Further research may broaden the performed studies and provide new theory to explain how the results occur.

    List of papers
    1. Replacing data augmentation with rotation-equivariant CNNs in image-based classification of oral cancer
    2021 (English)Conference paper, Published paper (Refereed)
    National Category
    Medical Image Processing
    Identifiers
    urn:nbn:se:uu:diva-460520 (URN)
    Conference
    25th Iberoamerican Congress on Pattern Recognition, Porto, Portugal, 10 - 13 May 2021
    Funder
    Wallenberg AI, Autonomous Systems and Software Program (WASP)
    Available from: 2021-12-07 Created: 2021-12-07 Last updated: 2024-01-09. Bibliographically approved
    2. Rotation-Equivariant Semantic Instance Segmentation on Biomedical Images
    2022 (English)In: MEDICAL IMAGE UNDERSTANDING AND ANALYSIS, MIUA 2022 / [ed] Yang, G Aviles-Rivero, A Roberts, M Schonlieb, CB, Springer Nature, 2022, Vol. 13413, p. 283-297Conference paper, Published paper (Refereed)
    Abstract [en]

    Advances in image segmentation techniques, brought by convolutional neural network (CNN) architectures like U-Net, show promise for tasks such as automated cancer screening. Recently, these methods have been extended to detect different instances of the same class, which could be used to, for example, characterize individual cells in whole-slide images. Still, the amount of data needed and the number of parameters in the network are substantial. To alleviate these problems, we modify a method of semantic instance segmentation to also enforce equivariance to the p4 symmetry group of 90-degree rotations and translations. We perform four experiments on a synthetic dataset of scattered sticks and a subset of the Kaggle 2018 Data Science Bowl, the BBBC038 dataset, consisting of segmented nuclei images. Results indicate that the rotation-equivariant architecture yields similar accuracy as a baseline architecture. Furthermore, we observe that the rotation-equivariant architecture converges faster than the baseline. This is a promising step towards reducing the training time during development of methods based on deep learning.
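    The p4 equivariance being enforced can be checked numerically: if an input is rotated by 90 degrees, a layer built from the four rotated copies of one kernel produces a response map that rotates along with it. A small numpy sketch of this property (our own helpers, not the paper's network; orientation max-pooling stands in for the full group convolution):

```python
import numpy as np

def correlate_valid(img, ker):
    """Plain 'valid' 2D cross-correlation."""
    kh, kw = ker.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    return np.array([[np.sum(img[i:i + kh, j:j + kw] * ker) for j in range(w)]
                     for i in range(h)])

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
ker = rng.normal(size=(3, 3))

def p4_response(image):
    """Max over the four 90-degree rotations of the kernel (orientation pooling)."""
    return np.max([correlate_valid(image, np.rot90(ker, k)) for k in range(4)], axis=0)

out = p4_response(img)
out_rot = p4_response(np.rot90(img))     # feeding a rotated input...
```

Because the set of rotated kernels is closed under 90-degree rotation, rotating the input only permutes the orientation channels, so the pooled map commutes with the rotation.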

    Place, publisher, year, edition, pages
    Springer Nature, 2022
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349
    Keywords
    Deep learning, Training, Convergence
    National Category
    Medical Image Processing
    Identifiers
    urn:nbn:se:uu:diva-489382 (URN), 10.1007/978-3-031-12053-4_22 (DOI), 000883331000022 (), 978-3-031-12053-4 (ISBN), 978-3-031-12052-7 (ISBN)
    Conference
    26th Annual Conference on Medical Image Understanding and Analysis (MIUA), 27-29 July 2022, University of Cambridge, Cambridge, England
    Funder
    Wallenberg AI, Autonomous Systems and Software Program (WASP)
    Available from: 2022-12-28. Created: 2022-12-28. Last updated: 2024-01-15. Bibliographically approved.
    3. Classification of Viruses in Transmission Electron Microscopy Images using Equivariant Neural Networks
    (English). Manuscript (preprint) (Other academic)
    National Category
    Medical Image Processing
    Research subject
    Computerized Image Processing
    Identifiers
    urn:nbn:se:uu:diva-519609 (URN)
    Funder
    Wallenberg AI, Autonomous Systems and Software Program (WASP)
    Available from: 2024-01-08. Created: 2024-01-08. Last updated: 2024-01-09.
    4. Equivariant Neural Networks for Biomedical Images Improves Data Efficiency
    (English). Manuscript (preprint) (Other academic)
    National Category
    Medical Image Processing
    Research subject
    Computerized Image Processing
    Identifiers
    urn:nbn:se:uu:diva-519610 (URN)
    Funder
    Wallenberg AI, Autonomous Systems and Software Program (WASP)
    Available from: 2024-01-08. Created: 2024-01-08. Last updated: 2024-01-09.
  • 33.
    Bengtsson Bernander, Karl
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. Uppsala Univ, Ctr Image Anal, Uppsala, Sweden..
    Lindblad, Joakim
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. Uppsala Univ, Ctr Image Anal, Uppsala, Sweden..
    Strand, Robin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Surgical Sciences, Radiology. Uppsala Univ, Ctr Image Anal, Uppsala, Sweden..
    Nyström, Ingela
    Uppsala Univ, Ctr Image Anal, Uppsala, Sweden..
    Rotation-Equivariant Semantic Instance Segmentation on Biomedical Images, 2022. In: Medical Image Understanding and Analysis, MIUA 2022 / [ed] Yang, G., Aviles-Rivero, A., Roberts, M., Schönlieb, C.-B., Springer Nature, 2022, Vol. 13413, p. 283-297. Conference paper (Refereed)
    Abstract [en]

    Advances in image segmentation techniques, brought by convolutional neural network (CNN) architectures like U-Net, show promise for tasks such as automated cancer screening. Recently, these methods have been extended to detect different instances of the same class, which could be used to, for example, characterize individual cells in whole-slide images. Still, the amount of data needed and the number of parameters in the network are substantial. To alleviate these problems, we modify a method of semantic instance segmentation to also enforce equivariance to the p4 symmetry group of 90-degree rotations and translations. We perform four experiments on a synthetic dataset of scattered sticks and a subset of the Kaggle 2018 Data Science Bowl, the BBBC038 dataset, consisting of segmented nuclei images. Results indicate that the rotation-equivariant architecture yields similar accuracy as a baseline architecture. Furthermore, we observe that the rotation-equivariant architecture converges faster than the baseline. This is a promising step towards reducing the training time during development of methods based on deep learning.

  • 34.
    Bengtsson Bernander, Karl
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction.
    Lindblad, Joakim
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction.
    Strand, Robin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Surgical Sciences, Radiology.
    Nyström, Ingela
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction.
    Replacing data augmentation with rotation-equivariant CNNs in image-based classification of oral cancer, 2021. Conference paper (Refereed)
  • 35.
    Bengtsson Bernander, Karl
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3.
    Lindblad, Joakim
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3.
    Strand, Robin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3.
    Nyström, Ingela
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3.
    Rotation-Equivariant Semantic Instance Segmentation on Biomedical Images, 2022. In: Medical Image Understanding and Analysis, MIUA 2022 / [ed] Yang, G., Aviles-Rivero, A., Roberts, M., Schönlieb, C.-B., Springer, 2022, Vol. 13413, p. 283-297. Conference paper (Refereed)
    Abstract [en]

    Advances in image segmentation techniques, brought by convolutional neural network (CNN) architectures like U-Net, show promise for tasks such as automated cancer screening. Recently, these methods have been extended to detect different instances of the same class, which could be used to, for example, characterize individual cells in whole-slide images. Still, the amount of data needed and the number of parameters in the network are substantial. To alleviate these problems, we modify a method of semantic instance segmentation to also enforce equivariance to the p4 symmetry group of 90-degree rotations and translations. We perform four experiments on a synthetic dataset of scattered sticks and a subset of the Kaggle 2018 Data Science Bowl, the BBBC038 dataset, consisting of segmented nuclei images. Results indicate that the rotation-equivariant architecture yields similar accuracy as a baseline architecture. Furthermore, we observe that the rotation-equivariant architecture converges faster than the baseline. This is a promising step towards reducing the training time during development of methods based on deep learning.

  • 36.
    Bengtsson Bernander, Karl
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Sintorn, Ida-Maria
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. Uppsala University, Science for Life Laboratory, SciLifeLab. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3.
    Strand, Robin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Surgical Sciences, Radiology. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3.
    Nyström, Ingela
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Classification of Viruses in Transmission Electron Microscopy Images using Equivariant Neural Networks. Manuscript (preprint) (Other academic)
  • 37.
    Bengtsson, Ewert
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Quantitative and automated microscopy: Where do we stand after 80 years of research? 2014. In: Proc. 11th International Symposium on Biomedical Imaging, Piscataway, NJ: IEEE Press, 2014, p. 274-277. Conference paper (Refereed)
  • 38.
    Bengtsson, Ewert
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Dahlqvist, Bengt
    Eriksson, Olle
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Jarkrans, Torsten
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Nordin, Bo
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Stenkvist, Björn
    Cervical Pre-screening Using Computerized Image Analysis, 1983. In: Proceedings of the 3rd Scandinavian Conference on Image Analysis, Copenhagen, 1983, p. 404-411. Conference paper (Refereed)
  • 39.
    Bengtsson, Ewert
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Dahlqvist, Bengt
    Eriksson, Olle
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Jarkrans, Torsten
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Nordin, Bo
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Centre for Image Analysis.
    Stenkvist, Björn
    Studie av reproducerbarheten av mikroskopiska cellbilder med TV-kamera [Study of the reproducibility of microscopic cell images with a TV camera], 1982. Report (Other academic)
  • 40.
    Bengtsson, Ewert
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Malm, Patrik
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Screening for Cervical Cancer Using Automated Analysis of PAP-Smears, 2014. In: Computational & Mathematical Methods in Medicine, ISSN 1748-670X, E-ISSN 1748-6718, Vol. 2014, p. 842037:1-12. Article, review/survey (Refereed)
    Abstract [en]

    Cervical cancer is one of the most deadly and common forms of cancer among women if no action is taken to prevent it, yet it is preventable through a simple screening test, the so-called PAP-smear. This is the most effective cancer prevention measure developed so far. But the visual examination of the smears is time-consuming and expensive, and there have been numerous attempts at automating the analysis ever since the test was introduced more than 60 years ago. The first commercial systems for automated analysis of the cell samples appeared around the turn of the millennium, but they have had limited impact on the screening costs. In this paper we examine the key issues that need to be addressed when an automated analysis system is developed and discuss how these challenges have been met over the years. The lessons learned may be useful in the efforts to create a cost-effective screening system that could make affordable screening for cervical cancer available for all women globally, thus preventing most of the quarter million annual unnecessary deaths still caused by this disease.

  • 41.
    Bengtsson, Ewert
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Ranefall, Petter
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. Uppsala University, Science for Life Laboratory, SciLifeLab.
    Image analysis in digital pathology: Combining automated assessment of Ki67 staining quality with calculation of Ki67 cell proliferation index, 2019. In: Cytometry Part A, ISSN 1552-4922, E-ISSN 1552-4930, Vol. 95, no 7, p. 714-716. Article in journal (Other academic)
  • 42.
    Bengtsson, Ewert
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Tárnok, Attila
    Special Section on Image Cytometry, 2019. In: Cytometry Part A, ISSN 1552-4922, E-ISSN 1552-4930, Vol. 95A, no 4, p. 363-365. Article in journal (Other academic)
  • 43.
    Bengtsson, Ewert
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Automatic control. Uppsala University.
    Wieslander, Håkan
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction.
    Forslid, Gustav
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction.
    Wählby, Carolina
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. Uppsala University, Science for Life Laboratory, SciLifeLab. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction.
    Hirsch, Jan-Michael
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Surgical Sciences, Oral and Maxillofacial Surgery.
    Runow Stark, Christina
    Kecheril Sadanandan, Sajith
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Science for Life Laboratory, SciLifeLab. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Lindblad, Joakim
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction.
    Detection of Malignancy-Associated Changes Due to Precancerous and Oral Cancer Lesions: A Pilot Study Using Deep Learning, 2018. In: CYTO2018 / [ed] Andrea Cossarizza, 2018. Conference paper (Refereed)
    Abstract [en]

    Background: The incidence of oral cancer is increasing, and it is affecting younger individuals. PAP-smear-based screening, visual and automated, has been used for decades to successfully decrease the incidence of cervical cancer. Can similar methods be used for oral cancer screening? We have carried out a pilot study using neural networks for classifying cells, both from cervical cancer and oral cancer patients. The results, which were reported from a technical point of view at the 2017 IEEE International Conference on Computer Vision Workshop (ICCVW), were particularly interesting for the oral cancer cases, and we are currently collecting and analyzing samples from more patients. Methods: Samples were collected with a brush in the oral cavity and smeared on glass slides, stained, and prepared according to standard PAP procedures. Images from the slides were digitized with a 0.35 micron pixel size, using focus stacks with 15 levels 0.4 micron apart. Between 245 and 2,123 cell nuclei were manually selected for analysis for each of 14 datasets, usually 2 datasets for each of the 6 cases, in total around 15,000 cells. A small region was cropped around each nucleus, and the best 2 adjacent focus layers in each direction were automatically found, thus creating images of 100x100x5 pixels. Nuclei were chosen with an aim to select well-preserved free-lying cells, with no effort to specifically select diagnostic cells. We therefore had no ground truth on the cellular level, only on the patient level. Subsets of these images were used for training 2 sets of neural networks, created according to the ResNet and VGG architectures described in the literature, to distinguish between cells from healthy persons and those with precancerous lesions. The datasets were augmented through mirroring and 90-degree rotations. The resulting networks were used to classify subsets of cells from different persons than those in the training sets. This was repeated for a total of 5 folds. Results: The results were expressed as the percentage of cell nuclei that the neural networks indicated as positive. The percentage of positive cells from healthy persons was in the range 8% to 38%. The percentage of positive cells collected near the lesions was in the range 31% to 96%. The percentages from the healthy side of the oral cavity of patients with lesions ranged from 37% to 89%. For each fold, it was possible to find a threshold for the number of positive cells that would correctly classify all patients as normal or positive, even for the samples taken from the healthy side of the oral cavity. The network based on the ResNet architecture showed slightly better performance than the VGG-based one. Conclusion: Our small pilot study indicates that malignancy-associated changes that can be detected by neural networks may exist among cells in the oral cavity of patients with precancerous lesions. We are currently collecting samples from more patients and will present those results as well, with our poster at CYTO 2018.
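The mirroring-plus-90-degree-rotation augmentation mentioned in the Methods can be sketched as follows. This is an illustrative numpy snippet, not the study's code: it generates the eight symmetries of a square patch (the dihedral group D4), turning each input patch into eight training examples.

```python
import numpy as np

def d4_augment(patch):
    # four rotations of the patch, plus four rotations of its mirror image
    rotations = [np.rot90(patch, r) for r in range(4)]
    mirrored = [np.rot90(np.fliplr(patch), r) for r in range(4)]
    return rotations + mirrored

patch = np.arange(16).reshape(4, 4)  # stand-in for a cropped nucleus image
variants = d4_augment(patch)         # 8 augmented copies per input patch
```

For an asymmetric patch all eight variants are distinct, so the augmentation multiplies the effective training-set size by eight without collecting new samples.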

  • 44.
    Beter, M.
    et al.
    Univ Eastern Finland, AI Virtanen Inst Mol Sci, Neulaniementie 2,POB 1627, Kuopio 70211, Finland..
    Abdollahzadeh, A.
    Univ Eastern Finland, AI Virtanen Inst Mol Sci, Neulaniementie 2,POB 1627, Kuopio 70211, Finland..
    Pulkkinen, H. H.
    Univ Eastern Finland, AI Virtanen Inst Mol Sci, Neulaniementie 2,POB 1627, Kuopio 70211, Finland..
    Huang, Hua
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Immunology, Genetics and Pathology.
    Orsenigo, F.
    IFOM ETS The AIRC Inst Mol Oncol, Vasc Biol Unit, Milan, Italy..
    Magnusson, Peetra
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Immunology, Genetics and Pathology.
    Yla-Herttuala, S.
    Univ Eastern Finland, AI Virtanen Inst Mol Sci, Neulaniementie 2,POB 1627, Kuopio 70211, Finland.;Kuopio Univ Hosp, Heart Ctr, Kuopio, Finland.;Kuopio Univ Hosp, Gene Therapy Unit, Kuopio, Finland..
    Tohka, J.
    Univ Eastern Finland, AI Virtanen Inst Mol Sci, Neulaniementie 2,POB 1627, Kuopio 70211, Finland..
    Laakkonen, J. P.
    Univ Eastern Finland, AI Virtanen Inst Mol Sci, Neulaniementie 2,POB 1627, Kuopio 70211, Finland..
    SproutAngio: an open-source bioimage informatics tool for quantitative analysis of sprouting angiogenesis and lumen space, 2023. In: Scientific Reports, E-ISSN 2045-2322, Vol. 13, article id 7279. Article in journal (Refereed)
    Abstract [en]

    Three-dimensional image analyses are required to improve the understanding of the regulation of blood vessel formation and heterogeneity. Currently, quantitation of 3D endothelial structures or vessel branches is often based on 2D projections of the images losing their volumetric information. Here, we developed SproutAngio, a Python-based open-source tool, for fully automated 3D segmentation and analysis of endothelial lumen space and sprout morphology. To test the SproutAngio, we produced a publicly available in vitro fibrin bead assay dataset with a gradually increasing VEGF-A concentration (). We demonstrate that our automated segmentation and sprout morphology analysis, including sprout number, length, and nuclei number, outperform the widely used ImageJ plugin. We also show that SproutAngio allows a more detailed and automated analysis of the mouse retinal vasculature in comparison to the commonly used radial expansion measurement. In addition, we provide two novel methods for automated analysis of endothelial lumen space: (1) width measurement from tip, stalk and root segments of the sprouts and (2) paired nuclei distance analysis. We show that these automated methods provided important additional information on the endothelial cell organization in the sprouts. The pipelines and source code of SproutAngio are publicly available ().

  • 45.
    Bezek, Can Deniz
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Bilgin, Mert
    Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich, Switzerland.
    Zhang, Lin
    Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich, Switzerland.
    Göksel, Orcun
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich, Switzerland.
    Global Speed-of-Sound Prediction Using Transmission Geometry, 2022. In: Proceedings of the 2022 IEEE International Ultrasonics Symposium (IUS), IEEE, 2022, p. 1-4. Conference paper (Refereed)
    Abstract [en]

    Most ultrasound (US) imaging techniques use spatially-constant speed-of-sound (SoS) values for beamforming. Having a discrepancy between the actual and used SoS value leads to aberration artifacts, e.g., reducing the image resolution, which may affect diagnostic usability. Accuracy and quality of different US imaging modalities, such as tomographic reconstruction of local SoS maps, also depend on a good initial beamforming SoS. In this work, we develop an analytical method for estimating mean SoS in an imaged medium. We show that the relative shifts between beamformed frames depend on the SoS offset and the geometric disparities in transmission paths. Using this relation, we estimate a correction factor and hence a corrected mean SoS in the medium. We evaluated our proposed method on a set of numerical simulations, demonstrating its utility both for global SoS prediction and for local SoS tomographic reconstruction. For our evaluation dataset, for an initial SoS under- and over-assumption of 5% of the medium SoS, our method is able to predict the actual mean SoS within 0.3% accuracy. For the tomographic reconstruction of local SoS maps, the reconstruction accuracy is improved on average by 78.5% and 87%, respectively, compared to an initial SoS under- and over-assumption of 5%. Index Terms: Beamforming, aberration correction.
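The basic mechanism, that a wrong SoS assumption displaces beamformed positions, can be seen in a toy pulse-echo calculation. This sketch uses assumed round numbers and is not the paper's transmission-geometry estimator: a point reflector's apparent depth simply scales with the ratio of the beamforming SoS to the true SoS.

```python
# toy illustration with assumed values, not the method of the paper:
# a point reflector's apparent depth scales with the beamforming SoS
d_true = 40e-3            # reflector depth in metres (assumed)
c_true = 1540.0           # true speed of sound in the medium (m/s)
c_bf = 1.05 * c_true      # beamforming SoS over-assumed by 5%

t_echo = 2.0 * d_true / c_true    # round-trip time of flight
d_apparent = c_bf * t_echo / 2.0  # depth the beamformer assigns to the echo

rel_shift = d_apparent / d_true - 1.0  # relative depth error, here +5%
```

Shifts of this kind, observed between frames acquired with different transmission geometries, are the quantity the abstract relates to the SoS offset in order to recover the mean SoS.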

  • 46.
    Bezek, Can Deniz
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Göksel, Orcun
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division Vi3. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Analytical Estimation of Beamforming Speed-of-Sound Using Transmission Geometry, 2023. In: Ultrasonics, ISSN 0041-624X, E-ISSN 1874-9968, Vol. 134, article id 107069. Article in journal (Refereed)
    Abstract [en]

    Most ultrasound imaging techniques necessitate the fundamental step of converting temporal signals received from transducer elements into a spatial echogenicity map. This beamforming (BF) step requires the knowledge of speed-of-sound (SoS) value in the imaged medium. An incorrect assumption of BF SoS leads to aberration artifacts, not only deteriorating the quality and resolution of conventional brightness mode (B-mode) images, hence limiting their clinical usability, but also impairing other ultrasound modalities such as elastography and spatial SoS reconstructions, which rely on faithfully beamformed images as their input. In this work, we propose an analytical method for estimating BF SoS. We show that pixel-wise relative shifts between frames beamformed with an assumed SoS is a function of geometric disparities of the transmission paths and the error in such SoS assumption. Using this relation, we devise an analytical model, the closed form solution of which yields the difference between the assumed and the true SoS in the medium. Based on this, we correct the BF SoS, which can also be applied iteratively. Both in simulations and experiments, lateral B-mode resolution is shown to be improved by ≈ 25% compared to that with an initial SoS assumption error of 3.3% (50 m/s), while localization artifacts from beamforming are also corrected. After 5 iterations, our method achieves BF SoS errors of under 0.6 m/s in simulations. Residual time-delay errors in beamforming 32 numerical phantoms are shown to reduce down to 0.07 µs, with average improvements of up to 21-fold compared to initial inaccurate assumptions. We additionally show the utility of the proposed method in imaging local SoS maps, where using our correction method reduces reconstruction root-mean-square errors substantially, down to their lower-bound with actual BF SoS.

  • 47. Bhatt, Manish
    et al.
    Ayyalasomayajula, Kalyan R.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Yalavarthy, Phaneendra K.
    Generalized Beer–Lambert model for near-infrared light propagation in thick biological tissues, 2016. In: Journal of Biomedical Optics, ISSN 1083-3668, E-ISSN 1560-2281, Vol. 21, no 7, article id 076012. Article in journal (Refereed)
  • 48.
    Bianchi, Kevin
    et al.
    ISIT UMR6284 CNRS, Univ. d’Auvergne BP10448, F-63000 Clermont-Ferrand.
    Vacavant, Antoine
    ISIT UMR6284 CNRS, Univ. d’Auvergne BP10448, F-63000 Clermont-Ferrand.
    Strand, Robin
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Terve, Pierre
    KEOSYS Company 1, impasse Auguste Fresnel, F 44815 Saint Herblain.
    Sarry, Laurent
    ISIT UMR6284 CNRS, Univ. d’Auvergne BP10448, F-63000 Clermont-Ferrand.
    Dual B-spline Snake for Interactive Myocardial Segmentation2013Conference paper (Refereed)
    Abstract [en]

    This paper presents a novel interactive segmentation formalism based on two coupled B-spline snake models to efficiently and simultaneously extract myocardial walls from short-axis magnetic resonance images. The main added value of this model is interaction, as it is possible to quickly and intuitively correct the result in complex cases without restarting the whole segmentation workflow. During this process, energies computed from the images guide the user toward the best position of the model.
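    The reason a B-spline contour lends itself to quick interactive correction is that each control point influences only a few neighbouring curve segments. A small sketch of sampling a closed uniform cubic B-spline (hypothetical control polygon and function names; not the authors' model):

    ```python
    import numpy as np

    def closed_bspline(ctrl, samples_per_seg=25):
        """Sample a closed uniform cubic B-spline from control points (N, 2)."""
        n = len(ctrl)
        t = np.linspace(0.0, 1.0, samples_per_seg, endpoint=False)
        # Uniform cubic B-spline basis functions over one segment.
        b = np.stack([(1 - t)**3,
                      3*t**3 - 6*t**2 + 4,
                      -3*t**3 + 3*t**2 + 3*t + 1,
                      t**3]) / 6.0
        pts = []
        for i in range(n):
            seg = ctrl[[(i - 1) % n, i, (i + 1) % n, (i + 2) % n]]
            pts.append(b.T @ seg)          # (samples_per_seg, 2) segment samples
        return np.concatenate(pts)

    # Hypothetical elliptical control polygon, e.g. a ventricle cross-section.
    theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
    ctrl = np.stack([20 * np.cos(theta), 15 * np.sin(theta)], axis=1)
    curve = closed_bspline(ctrl)
    ```

    Dragging one control point changes only the four segments that reference it, which is what makes correcting a complex case cheap compared to re-running a full segmentation.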

  • 49.
    Blache, Ludovic
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology.
    Nysjö, Fredrik
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Malmberg, Filip
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Thor, Andreas
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Surgical Sciences, Plastic Surgery.
    Rodriguez-Lorenzo, Andres
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Surgical Sciences, Plastic Surgery.
    Nyström, Ingela
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    SoftCut: A Virtual Planning Tool for Soft Tissue Resection on CT Images2018In: Medical Image Understanding and Analysis / [ed] Mark Nixon; Sasan Mahmoodi; Reyer Zwiggelaar, Cham: Springer, 2018, Vol. 894, p. 299-310Conference paper (Refereed)
    Abstract [en]

    With the increasing use of three-dimensional (3D) models and Computer Aided Design (CAD) in the medical domain, virtual surgical planning is now frequently used. Most of the current solutions focus on bone surgical operations. However, for head and neck oncologic resection, soft tissue ablation and reconstruction are common operations. In this paper, we propose a method to provide a fast and efficient estimation of the shape and dimensions of soft tissue resections. Our approach takes advantage of a simple sketch-based interface which allows the user to paint the contour of the resection on a patient-specific 3D model reconstructed from a computed tomography (CT) scan. The volume is then virtually cut and carved following this pattern. From the outline of the resection defined on the skin surface as a closed curve, we can identify which areas of the skin are inside or outside this shape. We then use distance transforms to identify the soft tissue voxels that are closer to the inside of this shape than to the outside. Thus, we can propagate the shape of the resection inside the soft tissue layers of the volume. We demonstrate the usefulness of the method on patient-specific CT data.
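    The propagation idea — classify each tissue voxel by whether its nearest skin point lies inside or outside the painted outline — can be shown on a toy 2-D slice with brute-force distances (flat skin, illustrative geometry and names only; not the authors' implementation, which uses distance transforms on real CT volumes):

    ```python
    import numpy as np

    # Toy 2-D slice: row 0 is the skin surface, deeper rows are soft tissue.
    H, W = 6, 11
    inside = np.zeros(W, bool)
    inside[3:8] = True                     # painted resection outline on the skin

    xs = np.arange(W)
    surf_in = xs[inside]                   # skin points inside the outline
    surf_out = xs[~inside]                 # skin points outside the outline

    resect = np.zeros((H, W), bool)
    for z in range(H):
        for x in range(W):
            d_in = np.min(np.hypot(surf_in - x, z))    # distance to inside skin
            d_out = np.min(np.hypot(surf_out - x, z))  # distance to outside skin
            resect[z, x] = d_in < d_out                # closer to inside: carve
    ```

    With a flat skin surface the carved region is a straight extrusion of the painted outline; on a curved patient-specific surface the same rule bends the resection to follow the anatomy, which is the point of using distances rather than a fixed cutting direction.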

  • 50.
    Blom, Elisabeth
    et al.
    Uppsala University, Disciplinary Domain of Science and Technology, Chemistry, Department of Biochemistry and Organic Chemistry.
    Velikyan, Irina
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Radiology, Oncology and Radiation Science, Biomedical Radiation Sciences.
    Monazzam, Azita
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Medical Sciences, Endocrine Tumor Biology.
    Razifar, Pasha
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Centre for Image Analysis. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction.
    Nair, Manoj
    Razifar, Payam
    Vanderheyden, Jean-Luc
    Krivoshein, Arcadius V.
    Backer, Marina
    Backer, Joseph
    Långström, Bengt
    Uppsala University, Disciplinary Domain of Science and Technology, Chemistry, Department of Biochemistry and Organic Chemistry.
    Synthesis and characterization of scVEGF-PEG-[68Ga]NOTA and scVEGF-PEG-[68Ga]DOTA PET tracers2011In: Journal of labelled compounds & radiopharmaceuticals, ISSN 0362-4803, E-ISSN 1099-1344, Vol. 54, no 11, p. 685-692Article in journal (Refereed)
    Abstract [en]

    Vascular endothelial growth factor (VEGF) signaling via vascular endothelial growth factor receptor 2 (VEGFR-2) on tumor endothelial cells is a critical driver of tumor angiogenesis. Novel anti-angiogenic drugs target VEGF/VEGFR-2 signaling and induce changes in VEGFR-2 prevalence. To monitor VEGFR-2 prevalence in the course of treatment, we are evaluating 68Ga positron emission tomography imaging agents based on macrocyclic chelators, site-specifically conjugated via polyethylene glycol (PEG) linkers to the engineered VEGFR-2 ligand, single-chain (sc) VEGF. The 68Ga-labeling was performed at room temperature with NOTA (2,2′,2″-(1,4,7-triazonane-1,4,7-triyl)triacetic acid) conjugates or at 90 °C, using either conventional or microwave heating, with NOTA and DOTA (2,2′,2″,2‴-(1,4,7,10-tetraazacyclododecane-1,4,7,10-tetrayl)tetraacetic acid) conjugates. The fastest (~2 min) and highest (>90%) incorporation of 68Ga into the conjugate, which resulted in the highest specific radioactivity (~400 MBq/nmol), was obtained with microwave heating of the conjugates. The bioactivity of the NOTA- and DOTA-containing tracers was validated in a 3-D tissue culture model of 293/KDR cells engineered to express high levels of VEGFR-2. The NOTA-containing tracer also displayed rapid accumulation (~20 s after intravenous injection) to a steady-state level in xenograft tumor models. The combination of high specific radioactivity and maintained functional activity suggests that scVEGF-PEG-[68Ga]NOTA and scVEGF-PEG-[68Ga]DOTA might be promising tracers for monitoring VEGFR-2 prevalence and should be further explored.
