
Publications from Uppsala University
Publications (10 of 13)
Kundu, S., Banerjee, S., Breznik, E., Toumpanakis, D., Wikström, J., Strand, R. & Kumar Dhara, A. (2024). ASE-Net for Segmentation of Post-operative Glioblastoma and Patient-specific Fine-tuning for Segmentation Refinement of Follow-up MRI Scans. SN computer science, 5(106)
ASE-Net for Segmentation of Post-operative Glioblastoma and Patient-specific Fine-tuning for Segmentation Refinement of Follow-up MRI Scans
2024 (English). In: SN computer science, E-ISSN 2661-8907, Vol. 5, no. 106. Article in journal (Refereed), Published
Abstract [en]

Volumetric quantification of tumors is usually done manually by radiologists, requiring precious medical time and suffering from inter-observer variability. An automatic tool for accurate volume quantification of post-operative glioblastoma would reduce the workload of radiologists and improve the quality of follow-up monitoring and patient care. This paper deals with the 3-D segmentation of post-operative glioblastoma using a channel squeeze-and-excitation based attention-gated network (ASE-Net). The proposed deep neural network has a 3-D encoder-decoder architecture with channel squeeze-and-excitation (CSE) blocks and attention blocks. The CSE block reduces the dependency on spatial information and puts more emphasis on channel information. The attention block suppresses the feature maps of irrelevant background and helps highlight the relevant feature maps. The Uppsala University data set used has post-operative follow-up MRI scans for fifteen patients. A patient-specific fine-tuning approach is used to improve the segmentation results for each patient. ASE-Net is also cross-validated on the BraTS-2021 data set, where the mean Dice score of five-fold cross-validation for enhanced tumor is 0.8244. The proposed network outperforms competing networks such as U-Net, Attention U-Net and Res U-Net. On the Uppsala University glioblastoma data set, the mean Dice score obtained with the proposed network is 0.7084, the Hausdorff Distance-95 is 7.14 and the mean volumetric similarity is 0.8579. With fine-tuning of the pre-trained network, the mean Dice score improved to 0.7368, the Hausdorff Distance-95 decreased to 6.10 and the volumetric similarity improved to 0.8736. ASE-Net outperforms the competing networks and can be used for volumetric quantification of post-operative glioblastoma from follow-up MRI scans. The network significantly reduces the probability of over-segmentation.
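As a rough illustration of the channel squeeze-and-excitation (CSE) mechanism this abstract builds on, the following NumPy snippet implements the standard squeeze-and-excitation operation on a 3-D feature map. The weights, shapes and function names are hypothetical placeholders, not the authors' ASE-Net code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_se(feat, w1, w2):
    """Channel squeeze-and-excitation on a (C, D, H, W) feature map.

    Squeeze: global average pool over the spatial dimensions.
    Excitation: two fully connected layers with a sigmoid gate,
    producing one scale factor per channel.
    """
    c = feat.shape[0]
    z = feat.reshape(c, -1).mean(axis=1)        # squeeze: (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))   # excitation: (C,)
    return feat * s.reshape(c, 1, 1, 1)         # channel-wise re-weighting

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4, 4))        # toy 3-D feature map, C=8
w1 = rng.standard_normal((2, 8)) * 0.1          # bottleneck (reduction) layer
w2 = rng.standard_normal((8, 2)) * 0.1
out = channel_se(feat, w1, w2)
assert out.shape == feat.shape
```

Because each channel is multiplied by a single learned gate, the block emphasizes channel information while discarding where in space the activations occurred, which is the behaviour the abstract describes.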

Place, publisher, year, edition, pages
Springer, 2024
National Category
Medical Imaging
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-498177 (URN), 10.1007/s42979-023-02425-5 (DOI)
Available from: 2023-03-10. Created: 2023-03-10. Last updated: 2025-02-09. Bibliographically approved
Breznik, E., Wetzer, E., Lindblad, J. & Sladoje, N. (2024). Cross-modality sub-image retrieval using contrastive multimodal image representations. Scientific Reports, 14(1), Article ID 18798.
Cross-modality sub-image retrieval using contrastive multimodal image representations
2024 (English). In: Scientific Reports, E-ISSN 2045-2322, Vol. 14, no. 1, article id 18798. Article in journal (Refereed), Published
Abstract [en]

In tissue characterization and cancer diagnostics, multimodal imaging has emerged as a powerful technique. Thanks to computational advances, large datasets can be exploited to discover patterns in pathologies and improve diagnosis. However, this requires efficient and scalable image retrieval methods. Cross-modality image retrieval is particularly challenging, since images of similar (or even the same) content captured by different modalities might share few common structures. We propose a new application-independent content-based image retrieval (CBIR) system for reverse (sub-)image search across modalities, which combines deep learning to generate representations (embedding the different modalities in a common space) with robust feature extraction and bag-of-words models for efficient and reliable retrieval. We illustrate its advantages through a replacement study, exploring a number of feature extractors and learned representations, as well as through comparison to recent (cross-modality) CBIR methods. For the task of (sub-)image retrieval on a (publicly available) dataset of brightfield and second harmonic generation microscopy images, the results show that our approach is superior to all tested alternatives. We discuss the shortcomings of the compared methods and observe the importance of equivariance and invariance properties of the learned representations and feature extractors in the CBIR pipeline. Code is available at: https://github.com/MIDA-group/CrossModal_ImgRetrieval.
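The bag-of-words retrieval step described above can be sketched as follows, assuming local descriptors have already been extracted (e.g. from representations embedded in a common space) and a codebook has been learned; all names, sizes and data here are illustrative, not the published pipeline:

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Quantize local descriptors against a codebook and build a
    normalized bag-of-words histogram."""
    # nearest codeword for each descriptor (Euclidean distance)
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)

def retrieve(query_desc, db_descs, codebook):
    """Rank database images by cosine similarity of BoW histograms."""
    q = bow_histogram(query_desc, codebook)
    sims = [q @ bow_histogram(d, codebook) for d in db_descs]
    return np.argsort(sims)[::-1]               # best match first

rng = np.random.default_rng(1)
codebook = rng.standard_normal((16, 8))         # 16 visual words, 8-D descriptors
db = [rng.standard_normal((50, 8)) for _ in range(4)]
query = db[2] + 0.01 * rng.standard_normal((50, 8))  # noisy copy of image 2
ranking = retrieve(query, db, codebook)
assert ranking[0] == 2
```

The histogram comparison is what makes the search scalable: each image is reduced to a single fixed-length vector, regardless of how many local features it contains.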

Place, publisher, year, edition, pages
Springer Nature, 2024
National Category
Medical Imaging
Identifiers
urn:nbn:se:uu:diva-470293 (URN), 10.1038/s41598-024-68800-1 (DOI), 001318393400020 (), 39138271 (PubMedID)
Note

These authors contributed equally: Eva Breznik and Elisabeth Wetzer

Available from: 2022-03-22. Created: 2022-03-22. Last updated: 2025-02-09. Bibliographically approved
Heil, R. & Breznik, E. (2023). A Study of Augmentation Methods for Handwritten Stenography Recognition. Paper presented at IbPRIA 2023: 11th Iberian Conference on Pattern Recognition and Image Analysis.
A Study of Augmentation Methods for Handwritten Stenography Recognition
2023 (English). Conference paper, Published paper (Refereed)
Abstract [en]

One of the factors limiting the performance of handwritten text recognition (HTR) for stenography is the small amount of annotated training data. To alleviate the problem of data scarcity, modern HTR methods often employ data augmentation. However, due to specifics of the stenographic script, such settings may not be directly applicable for stenography recognition. In this work, we study 22 classical augmentation techniques, most of which are commonly used for HTR of other scripts, such as Latin handwriting. Through extensive experiments, we identify a group of augmentations, including for example contained ranges of random rotation, shifts and scaling, that are beneficial to the use case of stenography recognition. Furthermore, a number of augmentation approaches, leading to a decrease in recognition performance, are identified. Our results are supported by statistical hypothesis testing. A link to the source code is provided in the paper.
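Two of the augmentation families named in the abstract, small random shifts and small random rotations, can be sketched as below; the parameter ranges are hypothetical examples, not the values identified in the paper:

```python
import numpy as np

def random_shift(img, rng, max_px=3):
    """Shift the image by a small random offset (wrap-around for brevity)."""
    dy, dx = rng.integers(-max_px, max_px + 1, size=2)
    return np.roll(img, (dy, dx), axis=(0, 1))

def random_rotate(img, rng, max_deg=10.0):
    """Rotate by a small random angle via inverse nearest-neighbour mapping."""
    a = np.deg2rad(rng.uniform(-max_deg, max_deg))
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    # inverse mapping: for each output pixel, sample the rotated source pixel
    ys = cy + (yy - cy) * np.cos(a) - (xx - cx) * np.sin(a)
    xs = cx + (yy - cy) * np.sin(a) + (xx - cx) * np.cos(a)
    ys = np.clip(np.rint(ys), 0, h - 1).astype(int)
    xs = np.clip(np.rint(xs), 0, w - 1).astype(int)
    return img[ys, xs]

rng = np.random.default_rng(2)
img = rng.random((32, 32))                      # toy grayscale text-line crop
aug = random_rotate(random_shift(img, rng), rng)
assert aug.shape == img.shape
```

Keeping the ranges small is the point: stenographic strokes are compact, so aggressive geometric distortion can easily destroy the script's distinguishing features.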

National Category
Computer Sciences
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-497025 (URN), 10.1007/978-3-031-36616-1_11 (DOI)
Conference
IbPRIA 2023: 11th Iberian Conference on Pattern Recognition and Image Analysis
Available from: 2023-02-22 Created: 2023-02-22 Last updated: 2023-09-05
Breznik, E. (2023). Image Processing and Analysis Methods for Biomedical Applications. (Doctoral dissertation). Uppsala: Acta Universitatis Upsaliensis
Image Processing and Analysis Methods for Biomedical Applications
2023 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

With new technologies and developments, medical images can be acquired more quickly and at a larger scale than ever before. However, the increased amount of data induces an overhead in the human labour needed for its inspection and analysis. To support clinicians in decision making and enable swifter examinations, computerized methods can be utilized to automate the more time-consuming tasks. For such use, methods need to be highly accurate, fast, reliable and interpretable. In this thesis we develop and improve methods for image segmentation, retrieval and statistical analysis, with applications in imaging-based diagnostic pipelines.

Individual objects often need to first be extracted/segmented from the image before they can be analysed further. We propose methodological improvements for deep learning-based segmentation methods using distance maps, with the focus on fully-supervised 3D patch-based training and training on 2D slices under point supervision. We show that using a directly interpretable distance prior helps to improve segmentation accuracy and training stability.

For histological data in particular, we propose and extensively evaluate a contrastive learning and bag of words-based pipeline for cross-modal image retrieval. The method is able to recover correct matches from the database across modalities and small transformations with improved accuracy compared to the competitors. 

In addition, we examine a number of methods for multiplicity correction on statistical analyses of correlation using medical images. Evaluation strategies are discussed and anatomy-observing extensions to the methods are developed as a way of directly decreasing the multiplicity issue in an interpretable manner, providing improvements in error control. 

The methods presented in this thesis were developed with clinical applications in mind and provide a strong base for further developments and future use in medical practice.
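As background to the multiplicity-correction work summarized above, here is a minimal NumPy sketch of the standard Benjamini-Hochberg FDR procedure; the anatomy-observing extensions developed in the thesis are not reproduced here, and the p-values are made up for illustration:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of rejected hypotheses under BH FDR control."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = alpha * (np.arange(1, m + 1) / m)   # step-up thresholds alpha*i/m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()           # largest i with p_(i) <= alpha*i/m
        reject[order[: k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]
rej = benjamini_hochberg(pvals, alpha=0.05)
# plain Bonferroni would only reject p < 0.05/8 = 0.00625
bonf = np.asarray(pvals) < 0.05 / len(pvals)
assert rej.sum() >= bonf.sum()
```

BH controls the false discovery rate rather than the family-wise error, so it is typically less conservative than Bonferroni, which is one motivation for studying correction strategies that exploit structure (such as anatomy) to reduce the effective number of comparisons.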

Place, publisher, year, edition, pages
Uppsala: Acta Universitatis Upsaliensis, 2023. p. 74
Series
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology, ISSN 1651-6214 ; 2253
Keywords
Multiple comparisons, image segmentation, image retrieval, deep learning, medical image analysis, magnetic resonance imaging, whole-body imaging
National Category
Medical Imaging
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-498953 (URN), 978-91-513-1760-1 (ISBN)
Public defence
2023-05-12, Sonja Lyttkens (101121), Ångström Laboratoriet, Lägerhyddsvägen 1, Uppsala, 09:15 (English)
Available from: 2023-04-21 Created: 2023-03-22 Last updated: 2025-02-09
Ovalle, A., Subramonian, A., Singh, A., Voelcker, C., Sutherland, D. J., Locatelli, D., . . . Stark, L. (2023). Queer In AI: A Case Study in Community-Led Participatory AI. In: FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. Paper presented at 6th ACM Conference on Fairness, Accountability, and Transparency (FAccT), June 12-15, 2023, Chicago, IL, USA (pp. 1882-1895). Association for Computing Machinery (ACM)
Queer In AI: A Case Study in Community-Led Participatory AI
2023 (English). In: FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, Association for Computing Machinery (ACM), 2023, p. 1882-1895. Conference paper, Published paper (Refereed)
Abstract [en]

Queerness and queer people face an uncertain future in the face of ever more widely deployed and invasive artificial intelligence (AI). These technologies have caused numerous harms to queer people, including privacy violations, censoring and downranking queer content, exposing queer people and spaces to harassment by making them hypervisible, deadnaming and outing queer people. More broadly, they have violated core tenets of queerness by classifying and controlling queer identities. In response to this, the queer community in AI has organized Queer in AI, a global, decentralized, volunteer-run grassroots organization that employs intersectional and community-led participatory design to build an inclusive and equitable AI future. In this paper, we present Queer in AI as a case study for community-led participatory design in AI. We examine how participatory design and intersectional tenets started and shaped this community's programs over the years. We discuss different challenges that emerged in the process, look at ways this organization has fallen short of operationalizing participatory and intersectional principles, and then assess the organization's impact. Queer in AI provides important lessons and insights for practitioners and theorists of participatory methods broadly through its rejection of hierarchy in favor of decentralization, success at building aid and programs by and for the queer community, and effort to change actors and institutions outside of the queer community. Finally, we theorize how communities like Queer in AI contribute to the participatory design in AI more broadly by fostering cultures of participation in AI, welcoming and empowering marginalized participants, critiquing poor or exploitative participatory practices, and bringing participation to institutions outside of individual research projects. 
Queer in AI's work serves as a case study of grassroots activism and participatory methods within AI, demonstrating the potential of community-led participatory methods and intersectional praxis, while also providing challenges, case studies, and nuanced insights to researchers developing and using participatory methods.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
National Category
Gender Studies; Information Systems, Social aspects
Identifiers
urn:nbn:se:uu:diva-515658 (URN), 10.1145/3593013.3594134 (DOI), 001062819300153 (), 979-8-4007-0192-4 (ISBN)
Conference
6th ACM Conference on Fairness, Accountability, and Transparency (FAccT), June 12-15, 2023, Chicago, IL, USA
Available from: 2023-11-09. Created: 2023-11-09. Last updated: 2025-02-17. Bibliographically approved
Breznik, E., Wetzer, E., Lindblad, J. & Sladoje, N. (2022). Label-Free Reverse Image Search of Multimodal Microscopy Images. Paper presented at Swedish Symposium on Image Analysis.
Label-Free Reverse Image Search of Multimodal Microscopy Images
2022 (English). Conference paper, Oral presentation only (Other academic)
National Category
Medical Imaging
Identifiers
urn:nbn:se:uu:diva-470159 (URN)
Conference
Swedish Symposium on Image Analysis
Available from: 2022-03-21. Created: 2022-03-21. Last updated: 2025-02-09. Bibliographically approved
Wetzer, E., Breznik, E., Lindblad, J. & Sladoje, N. (2022). Re-Ranking Strategies in Cross-Modality Microscopy Retrieval. Paper presented at IEEE ISBI 2022 International Symposium on Biomedical Imaging, 28-31 March, 2022, Kolkata, India. Institute of Electrical and Electronics Engineers (IEEE)
Re-Ranking Strategies in Cross-Modality Microscopy Retrieval
2022 (English). Conference paper, Oral presentation with published abstract (Other academic)
Abstract [en]

For many cancer diagnoses, tissue samples stained with hematoxylin and eosin are inspected in a brightfield (BF) microscope. It is becoming increasingly common to additionally inspect second harmonic generation (SHG) images alongside their BF counterparts, as such multimodal image pairs carry complementary information about the tissue. To match BF and SHG images captured in different microscopes, Breznik et al. (2022) recently proposed an image retrieval method for matching unaligned multimodal image pairs of BF and SHG: it creates a bag-of-words (BoW) model based on SURF features and learned image representations called CoMIRs, followed by a re-ranking step that refines the retrieval among the best-ranking matches. Here, we evaluate three different re-ranking strategies (one relying on global features, two relying on local features) for cross-modality image retrieval of SHG and BF images on a publicly available dataset.
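A re-ranking step of the kind discussed here, keeping an initial ranking but reordering its best candidates with a second, more expensive score, can be sketched generically; the candidate IDs and scores below are made up for illustration:

```python
import numpy as np

def rerank(initial_ranking, second_score, k=5):
    """Keep the initial ranking, but reorder its top-k entries by a
    second similarity score (higher = better); the tail is untouched."""
    top = list(initial_ranking[:k])
    top.sort(key=lambda i: second_score[i], reverse=True)
    return top + list(initial_ranking[k:])

initial = [3, 0, 7, 2, 9, 1, 4]          # e.g. from a BoW histogram search
score = {i: 0.0 for i in initial}        # second score (e.g. local-feature based)
score.update({0: 0.9, 3: 0.4, 7: 0.8, 2: 0.1, 9: 0.5})
final = rerank(initial, score, k=5)
assert final[0] == 0                     # re-ranking promoted candidate 0
assert final[5:] == [1, 4]               # tail left untouched
```

Restricting the expensive score to the top-k candidates is what keeps re-ranking affordable: the cheap first stage prunes the database, the second stage only has to discriminate among a handful of plausible matches.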

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
National Category
Medical Imaging
Identifiers
urn:nbn:se:uu:diva-524878 (URN)
Conference
IEEE ISBI 2022 International Symposium on Biomedical Imaging, 28-31 March, 2022, Kolkata, India
Available from: 2024-03-12. Created: 2024-03-12. Last updated: 2025-02-09. Bibliographically approved
Tominec, I. & Breznik, E. (2021). An unfitted RBF-FD method in a least-squares setting for elliptic PDEs on complex geometries. Journal of Computational Physics, 436, Article ID 110283.
An unfitted RBF-FD method in a least-squares setting for elliptic PDEs on complex geometries
2021 (English). In: Journal of Computational Physics, ISSN 0021-9991, E-ISSN 1090-2716, Vol. 436, article id 110283. Article in journal (Refereed), Published
Abstract [en]

Radial basis function generated finite difference (RBF-FD) methods for PDEs require a set of interpolation points which conform to the computational domain Ω. One of the requirements leading to approximation robustness is to place the interpolation points with a locally uniform distance around the boundary of Ω. However, generating interpolation points with such properties is a cumbersome problem. Instead, the interpolation points can be extended over the boundary and thus completely decoupled from the shape of Ω. In this paper we present a modification to the least-squares RBF-FD method which allows the interpolation points to be placed in a box that encapsulates Ω. This way, the node placement over a complex domain in 2D and 3D is greatly simplified. Numerical experiments on solving an elliptic model PDE over complex 2D geometries show that our approach is robust. Furthermore, it performs better in terms of the approximation error and the runtime vs. error compared with the classic RBF-FD methods. It is also possible to use our approach in 3D, which we indicate by providing convergence results for a solution over a thoracic diaphragm.
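The core idea, basis functions centred on a box that encapsulates the domain while the fit is evaluated only inside the domain, can be illustrated with a toy least-squares RBF function approximation over a disc. This is a plain approximation problem, not the paper's RBF-FD PDE solver, and all parameter choices are illustrative:

```python
import numpy as np

def gauss_rbf(r, eps=3.0):
    """Gaussian radial basis function with shape parameter eps."""
    return np.exp(-(eps * r) ** 2)

rng = np.random.default_rng(3)
# RBF centres placed on a box [-1, 1]^2 that encapsulates the domain,
# completely decoupled from the domain's (here: a disc) shape
centers = np.stack(np.meshgrid(np.linspace(-1, 1, 8),
                               np.linspace(-1, 1, 8)), -1).reshape(-1, 2)
# evaluation points only inside the disc x^2 + y^2 <= 0.8^2
pts = rng.uniform(-0.8, 0.8, size=(200, 2))
pts = pts[(pts ** 2).sum(1) <= 0.8 ** 2]

f = lambda p: np.sin(np.pi * p[:, 0]) * p[:, 1]       # smooth target function
A = gauss_rbf(np.linalg.norm(pts[:, None] - centers[None], axis=-1))
coef, *_ = np.linalg.lstsq(A, f(pts), rcond=None)     # overdetermined LS fit
err = np.abs(A @ coef - f(pts)).max()
assert err < 0.05
```

Because the system is solved in a least-squares sense over more evaluation points than centres, nothing forces the centres to respect the domain boundary, which mirrors the node-placement simplification the abstract describes.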

Place, publisher, year, edition, pages
Elsevier, 2021
Keywords
Complex geometry, Radial basis function, Least-squares, Partial differential equation, Immersed method, Ghost points
National Category
Computational Mathematics
Research subject
Numerical Analysis; Scientific Computing
Identifiers
urn:nbn:se:uu:diva-465219 (URN)10.1016/j.jcp.2021.110283 (DOI)000746492700004 ()
Funder
Swedish Research Council, 2016-04849
Available from: 2022-01-17. Created: 2022-01-17. Last updated: 2024-01-15. Bibliographically approved
Wetzer, E., Pielawski, N., Breznik, E., Öfverstedt, J., Lu, J., Wählby, C., . . . Sladoje, N. (2021). Contrastive Learning for Equivariant Multimodal Image Representations. Paper presented at "The Power of Women in Deep Learning" Workshop at the "Mathematics of deep learning" Programme at the Isaac Newton Institute for Mathematical Sciences. Cambridge University
Contrastive Learning for Equivariant Multimodal Image Representations
2021 (English). Conference paper, Poster (with or without abstract) (Other academic)
Abstract [en]

Combining the information of different imaging modalities offers complementary information about the properties of the imaged specimen. Often these modalities need to be captured by different machines, so the resulting images must be matched and registered in order to map the corresponding signals to each other. This can be a very challenging task due to the varying appearance of the specimen in different sensors.

We have recently developed a method which uses contrastive learning to find representations of both modalities, such that images of different modalities are mapped into the same representational space. The learnt representations (referred to as CoMIRs) are abstract and very similar with respect to a selected similarity measure. These representations need to fulfil certain requirements for downstream tasks such as registration, e.g. rotational equivariance or intensity similarity. We present a hyperparameter-free modification of the contrastive loss, based on InfoNCE, to produce equivariant, dense, image-like representations. These representations are similar enough to be considered in a common space, in which monomodal registration methods can be exploited.
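The base InfoNCE loss that the proposed modification builds on can be sketched in NumPy as follows; this is the standard formulation for paired embeddings, not the authors' equivariance-promoting variant:

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """Standard InfoNCE loss for a batch of paired embeddings.

    z1[i] and z2[i] are representations of the same sample in two
    modalities; all other pairs in the batch act as negatives.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                   # cosine similarities / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs.diagonal().mean()        # positives sit on the diagonal

rng = np.random.default_rng(4)
z = rng.standard_normal((16, 32))
aligned = info_nce(z, z + 0.01 * rng.standard_normal((16, 32)))
random_ = info_nce(z, rng.standard_normal((16, 32)))
assert aligned < random_                       # matched pairs give lower loss
```

Minimizing this loss pulls the two modality embeddings of each sample together while pushing apart all other pairings in the batch, which is what maps both modalities into a common representational space.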

Place, publisher, year, edition, pages
Cambridge University, 2021
National Category
Medical Imaging
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-459429 (URN)
Conference
"The Power of Women in Deep Learning" Workshop at the "Mathematics of deep learning" Programme at the Isaac Newton Institute for Mathematical Sciences
Available from: 2021-11-23 Created: 2021-11-23 Last updated: 2025-02-09
Breznik, E. & Strand, R. (2021). Effects of distance transform choice in training with boundary loss. Paper presented at Swedish Symposium on Deep Learning (SSDL), Online, 15 March 2021.
Effects of distance transform choice in training with boundary loss
2021 (English). Conference paper, Poster (with or without abstract) (Other academic)
Abstract [en]

Convolutional neural networks are the method of choice for many medical imaging tasks, in particular segmentation. Recently, efforts have been made to include distance measures in network training, for example through the introduction of the boundary loss, calculated via a signed distance transform. Using the boundary loss for segmentation can alleviate issues with class imbalance and irregular shapes, leading to a better segmentation boundary. It is originally based on the Euclidean distance transform. In this paper we investigate the effects of employing various definitions of distance when using the boundary loss for medical image segmentation. Our results show promising behaviour when training with non-Euclidean distances, and suggest a possible new use of the boundary loss in segmentation problems.
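The boundary loss with a non-Euclidean distance can be sketched as below, using a two-pass chamfer (city-block) distance transform as one example of an alternative distance definition; this is a generic illustration, not the paper's implementation:

```python
import numpy as np

def cityblock_dt(mask):
    """Two-pass chamfer distance transform (city-block metric) giving the
    distance from each pixel to the nearest True pixel of a boolean mask."""
    big = float(mask.size)                    # stands in for infinity
    d = np.where(mask, 0.0, big)
    h, w = d.shape
    for y in range(h):                        # forward raster pass
        for x in range(w):
            if y > 0: d[y, x] = min(d[y, x], d[y - 1, x] + 1)
            if x > 0: d[y, x] = min(d[y, x], d[y, x - 1] + 1)
    for y in range(h - 1, -1, -1):            # backward raster pass
        for x in range(w - 1, -1, -1):
            if y < h - 1: d[y, x] = min(d[y, x], d[y + 1, x] + 1)
            if x < w - 1: d[y, x] = min(d[y, x], d[y, x + 1] + 1)
    return d

def boundary_loss(probs, gt):
    """Boundary loss: predicted foreground probabilities weighted by a
    signed distance map (negative inside the ground-truth object)."""
    sdm = cityblock_dt(gt) - cityblock_dt(~gt)
    return (probs * sdm).mean()

gt = np.zeros((16, 16), dtype=bool)
gt[4:12, 4:12] = True
good = boundary_loss(gt.astype(float), gt)    # perfect prediction
bad = boundary_loss(1.0 - gt, gt)             # inverted prediction
assert good < bad
```

Swapping `cityblock_dt` for a Euclidean or chessboard transform changes only the weighting map, which is exactly the design axis this poster investigates.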

National Category
Medical Imaging
Identifiers
urn:nbn:se:uu:diva-499054 (URN)
Conference
Swedish Symposium on Deep Learning (SSDL), Online, 15 March 2021
Funder
Uppsala University
Available from: 2023-03-22. Created: 2023-03-22. Last updated: 2025-02-09. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0003-3147-5626
