Publications (10 of 13)
Kundu, S., Banerjee, S., Breznik, E., Toumpanakis, D., Wikström, J., Strand, R. & Kumar Dhara, A. (2024). ASE-Net for Segmentation of Post-operative Glioblastoma and Patient-specific Fine-tuning for Segmentation Refinement of Follow-up MRI Scans. SN computer science, 5(106)
2024 (English). In: SN Computer Science, E-ISSN 2661-8907, Vol. 5, no. 106. Journal article (peer-reviewed). Published.
Abstract [en]

Volumetric quantification of tumors is usually done manually by radiologists, which takes up valuable medical time and suffers from inter-observer variability. An automatic tool for accurate volume quantification of post-operative glioblastoma would reduce the workload of radiologists and improve the quality of follow-up monitoring and patient care. This paper deals with the 3-D segmentation of post-operative glioblastoma using a channel squeeze and excitation based attention gated network (ASE-Net). The proposed deep neural network has a 3-D encoder-decoder architecture with channel squeeze and excitation (CSE) blocks and attention blocks. The CSE block reduces the dependency on spatial information and puts more emphasis on channel information. The attention block suppresses the feature maps of irrelevant background and helps highlight the relevant feature maps. The Uppsala University data set used has post-operative follow-up MRI scans for fifteen patients. A patient-specific fine-tuning approach is used to improve the segmentation results for each patient. ASE-Net is also cross-validated on the BraTS-2021 data set, where the mean Dice score of five-fold cross-validation for enhanced tumor is 0.8244. The proposed network outperforms competing networks such as U-Net, Attention U-Net and Res U-Net. On the Uppsala University glioblastoma data set, the mean Dice score obtained with the proposed network is 0.7084, the Hausdorff Distance-95 is 7.14, and the mean volumetric similarity is 0.8579. With fine-tuning of the pre-trained network, the mean Dice score improves to 0.7368, the Hausdorff Distance-95 decreases to 6.10, and the volumetric similarity improves to 0.8736. ASE-Net outperforms the competing networks and can be used for volumetric quantification of post-operative glioblastoma from follow-up MRI scans. The network significantly reduces the probability of over-segmentation.
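The Dice score and volumetric similarity reported in the abstract are standard overlap and volume-agreement measures between a predicted and a reference mask. A minimal sketch in plain Python, with toy masks standing in for real segmentations (not the paper's implementation):

```python
def dice_score(pred, truth):
    """Dice coefficient between two binary masks given as flat lists of 0/1."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

def volumetric_similarity(pred, truth):
    """1 - |V_pred - V_truth| / (V_pred + V_truth), comparing volumes only."""
    vp, vt = sum(pred), sum(truth)
    return 1 - abs(vp - vt) / (vp + vt) if (vp + vt) else 1.0

pred  = [1, 1, 1, 0, 0, 0]  # toy prediction
truth = [0, 1, 1, 1, 0, 0]  # toy ground truth
print(round(dice_score(pred, truth), 4))            # 0.6667
print(round(volumetric_similarity(pred, truth), 4)) # 1.0 (equal volumes, shifted)
```

Note that volumetric similarity can be perfect even when the overlap is not, which is why both measures are reported together.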

Place, publisher, year, edition, pages
Springer, 2024
HSV category
Research subject
Datoriserad bildbehandling
Identifiers
urn:nbn:se:uu:diva-498177 (URN) 10.1007/s42979-023-02425-5 (DOI)
Available from: 2023-03-10. Created: 2023-03-10. Last updated: 2025-02-09. Bibliographically approved.
Breznik, E., Wetzer, E., Lindblad, J. & Sladoje, N. (2024). Cross-modality sub-image retrieval using contrastive multimodal image representations. Scientific Reports, 14(1), Article ID 18798.
2024 (English). In: Scientific Reports, E-ISSN 2045-2322, Vol. 14, no. 1, article id 18798. Journal article (peer-reviewed). Published.
Abstract [en]

In tissue characterization and cancer diagnostics, multimodal imaging has emerged as a powerful technique. Thanks to computational advances, large datasets can be exploited to discover patterns in pathologies and improve diagnosis. However, this requires efficient and scalable image retrieval methods. Cross-modality image retrieval is particularly challenging, since images of similar (or even the same) content captured by different modalities might share few common structures. We propose a new application-independent content-based image retrieval (CBIR) system for reverse (sub-)image search across modalities, which combines deep learning to generate representations (embedding the different modalities in a common space) with robust feature extraction and bag-of-words models for efficient and reliable retrieval. We illustrate its advantages through a replacement study, exploring a number of feature extractors and learned representations, as well as through comparison to recent (cross-modality) CBIR methods. For the task of (sub-)image retrieval on a (publicly available) dataset of brightfield and second harmonic generation microscopy images, the results show that our approach is superior to all tested alternatives. We discuss the shortcomings of the compared methods and observe the importance of equivariance and invariance properties of the learned representations and feature extractors in the CBIR pipeline. Code is available at: https://github.com/MIDA-group/CrossModal_ImgRetrieval.
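The bag-of-words retrieval stage of such a CBIR pipeline can be sketched as follows. Here integer "visual words" stand in for quantized feature descriptors (e.g. clustered SURF features computed on the learned representations); all names and values are illustrative, not from the paper's code:

```python
from collections import Counter
import math

def bow_histogram(words, vocab_size):
    """Normalized visual-word histogram for one image."""
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in range(vocab_size)]

def cosine(a, b):
    """Cosine similarity between two histograms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Database images, each already quantized into visual words (hypothetical data).
database = {"img_a": [0, 1, 1, 3], "img_b": [2, 2, 3, 3]}
query = [1, 1, 0, 3]

hists = {name: bow_histogram(words, 4) for name, words in database.items()}
q = bow_histogram(query, 4)
ranking = sorted(hists, key=lambda name: cosine(q, hists[name]), reverse=True)
print(ranking)  # img_a ranks first: its word histogram matches the query best
```

Because retrieval compares word histograms rather than raw pixels, the same index can serve queries from either modality once both are embedded in the common representation space.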

Place, publisher, year, edition, pages
Springer Nature, 2024
HSV category
Identifiers
urn:nbn:se:uu:diva-470293 (URN) 10.1038/s41598-024-68800-1 (DOI) 001318393400020 () 39138271 (PubMedID)
Note

These authors contributed equally: Eva Breznik and Elisabeth Wetzer

Available from: 2022-03-22. Created: 2022-03-22. Last updated: 2025-02-09. Bibliographically approved.
Heil, R. & Breznik, E. (2023). A Study of Augmentation Methods for Handwritten Stenography Recognition. In: Paper presented at IbPRIA 2023: 11th Iberian Conference on Pattern Recognition and Image Analysis.
2023 (English). Conference paper, published paper (peer-reviewed).
Abstract [en]

One of the factors limiting the performance of handwritten text recognition (HTR) for stenography is the small amount of annotated training data. To alleviate the problem of data scarcity, modern HTR methods often employ data augmentation. However, due to the specifics of the stenographic script, such settings may not be directly applicable to stenography recognition. In this work, we study 22 classical augmentation techniques, most of which are commonly used for HTR of other scripts, such as Latin handwriting. Through extensive experiments, we identify a group of augmentations, including, for example, bounded ranges of random rotation, shift and scaling, that are beneficial to the use case of stenography recognition. Furthermore, a number of augmentation approaches that lead to a decrease in recognition performance are identified. Our results are supported by statistical hypothesis testing. A link to the source code is provided in the paper.
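Bounded-range geometric augmentation of the kind studied here amounts to sampling transform parameters from restricted intervals. A minimal sketch; the ranges below are illustrative, not those identified in the paper:

```python
import random

def sample_augmentation(rng):
    """Sample one set of geometric augmentation parameters from bounded
    ranges (values here are illustrative, not from the paper)."""
    return {
        "rotation_deg": rng.uniform(-2.0, 2.0),   # small rotations only
        "shift_px":     rng.randint(-5, 5),       # pixel shift
        "scale":        rng.uniform(0.95, 1.05),  # mild rescaling
    }

rng = random.Random(0)  # seeded for reproducible experiments
params = sample_augmentation(rng)
print(params)
```

Each training image would then be transformed with a freshly sampled parameter set, so the model never sees exactly the same input twice while the distortions stay within script-preserving bounds.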

HSV category
Research subject
Datoriserad bildbehandling
Identifiers
urn:nbn:se:uu:diva-497025 (URN) 10.1007/978-3-031-36616-1_11 (DOI)
Conference
IbPRIA 2023: 11th Iberian Conference on Pattern Recognition and Image Analysis
Available from: 2023-02-22. Created: 2023-02-22. Last updated: 2023-09-05.
Breznik, E. (2023). Image Processing and Analysis Methods for Biomedical Applications. (Doctoral dissertation). Uppsala: Acta Universitatis Upsaliensis
2023 (English). Doctoral thesis, with papers (Other academic).
Abstract [en]

With new technologies and developments, medical images can be acquired more quickly and at a larger scale than ever before. However, the increased amount of data induces an overhead in the human labour needed for its inspection and analysis. To support clinicians in decision making and enable swifter examinations, computerized methods can be utilized to automate the more time-consuming tasks. For such use, methods need to be highly accurate, fast, reliable and interpretable. In this thesis we develop and improve methods for image segmentation, retrieval and statistical analysis, with applications in imaging-based diagnostic pipelines.

Individual objects often need to first be extracted/segmented from the image before they can be analysed further. We propose methodological improvements for deep learning-based segmentation methods using distance maps, with the focus on fully-supervised 3D patch-based training and training on 2D slices under point supervision. We show that using a directly interpretable distance prior helps to improve segmentation accuracy and training stability.

For histological data in particular, we propose and extensively evaluate a contrastive learning and bag of words-based pipeline for cross-modal image retrieval. The method is able to recover correct matches from the database across modalities and small transformations with improved accuracy compared to the competitors. 

In addition, we examine a number of methods for multiplicity correction on statistical analyses of correlation using medical images. Evaluation strategies are discussed and anatomy-observing extensions to the methods are developed as a way of directly decreasing the multiplicity issue in an interpretable manner, providing improvements in error control. 

The methods presented in this thesis were developed with clinical applications in mind and provide a strong base for further developments and future use in medical practice.

Place, publisher, year, edition, pages
Uppsala: Acta Universitatis Upsaliensis, 2023. p. 74
Series
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology, ISSN 1651-6214 ; 2253
Keywords
Multiple comparisons, image segmentation, image retrieval, deep learning, medical image analysis, magnetic resonance imaging, whole-body imaging
HSV category
Research subject
Datoriserad bildbehandling
Identifiers
urn:nbn:se:uu:diva-498953 (URN) 978-91-513-1760-1 (ISBN)
Public defence
2023-05-12, Sonja Lyttkens (101121), Ångström Laboratoriet, Lägerhyddsvägen 1, Uppsala, 09:15 (English)
Opponent
Supervisor
Available from: 2023-04-21. Created: 2023-03-22. Last updated: 2025-02-09.
Ovalle, A., Subramonian, A., Singh, A., Voelcker, C., Sutherland, D. J., Locatelli, D., . . . Stark, L. (2023). Queer In AI: A Case Study in Community-Led Participatory AI. In: FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. Paper presented at 6th ACM Conference on Fairness, Accountability, and Transparency (FAccT), June 12-15, 2023, Chicago, IL, USA (pp. 1882-1895). Association for Computing Machinery (ACM)
2023 (English). In: FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, Association for Computing Machinery (ACM), 2023, pp. 1882-1895. Conference paper, published paper (peer-reviewed).
Abstract [en]

Queerness and queer people face an uncertain future in the face of ever more widely deployed and invasive artificial intelligence (AI). These technologies have caused numerous harms to queer people, including privacy violations, censoring and downranking queer content, exposing queer people and spaces to harassment by making them hypervisible, and deadnaming and outing queer people. More broadly, they have violated core tenets of queerness by classifying and controlling queer identities. In response to this, the queer community in AI has organized Queer in AI, a global, decentralized, volunteer-run grassroots organization that employs intersectional and community-led participatory design to build an inclusive and equitable AI future. In this paper, we present Queer in AI as a case study for community-led participatory design in AI. We examine how participatory design and intersectional tenets started and shaped this community's programs over the years. We discuss different challenges that emerged in the process, look at ways this organization has fallen short of operationalizing participatory and intersectional principles, and then assess the organization's impact. Queer in AI provides important lessons and insights for practitioners and theorists of participatory methods broadly through its rejection of hierarchy in favor of decentralization, success at building aid and programs by and for the queer community, and effort to change actors and institutions outside of the queer community. Finally, we theorize how communities like Queer in AI contribute to participatory design in AI more broadly by fostering cultures of participation in AI, welcoming and empowering marginalized participants, critiquing poor or exploitative participatory practices, and bringing participation to institutions outside of individual research projects.
Queer in AI's work serves as a case study of grassroots activism and participatory methods within AI, demonstrating the potential of community-led participatory methods and intersectional praxis, while also providing challenges, case studies, and nuanced insights to researchers developing and using participatory methods.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
HSV category
Identifiers
urn:nbn:se:uu:diva-515658 (URN) 10.1145/3593013.3594134 (DOI) 001062819300153 () 979-8-4007-0192-4 (ISBN)
Conference
6th ACM Conference on Fairness, Accountability, and Transparency (FAccT), June 12-15, 2023, Chicago, IL, USA
Available from: 2023-11-09. Created: 2023-11-09. Last updated: 2025-02-17. Bibliographically approved.
Breznik, E., Wetzer, E., Lindblad, J. & Sladoje, N. (2022). Label-Free Reverse Image Search of Multimodal Microscopy Images. In: Paper presented at the Swedish Symposium on Image Analysis.
2022 (English). Conference paper, oral presentation only (Other academic).
HSV category
Identifiers
urn:nbn:se:uu:diva-470159 (URN)
Conference
Swedish Symposium on Image Analysis
Available from: 2022-03-21. Created: 2022-03-21. Last updated: 2025-02-09. Bibliographically approved.
Wetzer, E., Breznik, E., Lindblad, J. & Sladoje, N. (2022). Re-Ranking Strategies in Cross-Modality Microscopy Retrieval. In: Paper presented at the IEEE ISBI 2022 International Symposium on Biomedical Imaging, 28-31 March, 2022, Kolkata, India. Institute of Electrical and Electronics Engineers (IEEE)
2022 (English). Conference paper, oral presentation with published abstract (Other academic).
Abstract [en]

For many cancer diagnoses, tissue samples stained with hematoxylin and eosin are inspected in a brightfield (BF) microscope. It is becoming increasingly common to additionally inspect second harmonic generation (SHG) images alongside their BF counterparts, as such multimodal image pairs carry complementary information about the tissue. To match BF and SHG images captured in different microscopes, Breznik et al. (2022) recently proposed an image retrieval method for matching unaligned multimodal image pairs of BF and SHG: it builds a bag-of-words (BoW) model on SURF features and image representations called CoMIRs, and applies a final re-ranking step to refine the retrieval among the best-ranking matches. Here, we evaluate three different re-ranking strategies (one relying on global features, two relying on local features) for cross-modality image retrieval of SHG and BF images, and evaluate them on a publicly available dataset.
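The general shape of a re-ranking step, independent of the specific strategies compared here, is: rank everything with a cheap global score, then re-order only the top candidates with a more expensive local score. A generic sketch with toy string scores (both scoring functions are hypothetical placeholders):

```python
def rerank(query, candidates, global_score, local_score, k=3):
    """Rank all candidates by a cheap global score, then re-rank only the
    top-k of them with a more expensive local score (generic sketch)."""
    initial = sorted(candidates, key=lambda c: global_score(query, c), reverse=True)
    head, tail = initial[:k], initial[k:]
    head = sorted(head, key=lambda c: local_score(query, c), reverse=True)
    return head + tail

# Toy scores: global = shared characters (ignores order),
# local = position-wise agreement (order-sensitive, "more expensive").
gscore = lambda q, c: len(set(q) & set(c))
lscore = lambda q, c: sum(a == b for a, b in zip(q, c))

result = rerank("abcd", ["abcx", "dcba", "abcd", "zzzz"], gscore, lscore, k=2)
print(result)  # 'abcd' is promoted to rank 1 by the local re-ranking
```

The design point is that the local score only ever runs on k candidates, so its cost does not grow with database size.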

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
HSV category
Identifiers
urn:nbn:se:uu:diva-524878 (URN)
Conference
IEEE ISBI 2022 International Symposium on Biomedical Imaging, 28-31 March, 2022, Kolkata, India
Available from: 2024-03-12. Created: 2024-03-12. Last updated: 2025-02-09. Bibliographically approved.
Tominec, I. & Breznik, E. (2021). An unfitted RBF-FD method in a least-squares setting for elliptic PDEs on complex geometries. Journal of Computational Physics, 436, Article ID 110283.
Åpne denne publikasjonen i ny fane eller vindu >>An unfitted RBF-FD method in a least-squares setting for elliptic PDEs on complex geometries
2021 (English). In: Journal of Computational Physics, ISSN 0021-9991, E-ISSN 1090-2716, Vol. 436, article id 110283. Journal article (peer-reviewed). Published.
Abstract [en]

Radial basis function generated finite difference (RBF-FD) methods for PDEs require a set of interpolation points which conform to the computational domain Ω. One of the requirements leading to approximation robustness is to place the interpolation points with a locally uniform distance around the boundary of Ω. However, generating interpolation points with such properties is a cumbersome problem. Instead, the interpolation points can be extended over the boundary and thus completely decoupled from the shape of Ω. In this paper we present a modification to the least-squares RBF-FD method which allows the interpolation points to be placed in a box that encapsulates Ω. This way, the node placement over a complex domain in 2D and 3D is greatly simplified. Numerical experiments on solving an elliptic model PDE over complex 2D geometries show that our approach is robust. Furthermore, it performs better in terms of the approximation error and the runtime-vs-error trade-off compared with the classic RBF-FD methods. It is also possible to use our approach in 3D, which we indicate by providing convergence results for a solution over a thoracic diaphragm.
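The least-squares idea behind the method — oversampling relative to the basis, with RBF centres decoupled from where the function is evaluated — can be illustrated with a tiny 1D fit. This is a generic least-squares Gaussian-RBF approximation of a known function, not the paper's RBF-FD PDE solver; all values are illustrative:

```python
import math

def gaussian_rbf(r, eps=1.0):
    """Gaussian radial basis function phi(r) = exp(-(eps*r)^2)."""
    return math.exp(-((eps * r) ** 2))

centres = [0.0, 1.0]                      # basis centres (need not conform to the domain)
samples = [0.0, 0.25, 0.5, 0.75, 1.0]     # more sample points than centres: oversampling
target = [math.sin(x) for x in samples]   # function to approximate (illustrative choice)

# Build the tall system A[i][j] = phi(|x_i - c_j|) and solve the
# 2x2 normal equations (A^T A) c = A^T f by Cramer's rule.
A = [[gaussian_rbf(abs(x - c)) for c in centres] for x in samples]
AtA = [[sum(A[k][i] * A[k][j] for k in range(len(samples))) for j in range(2)]
       for i in range(2)]
Atf = [sum(A[k][i] * target[k] for k in range(len(samples))) for i in range(2)]
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
coef = [(Atf[0] * AtA[1][1] - Atf[1] * AtA[0][1]) / det,
        (Atf[1] * AtA[0][0] - Atf[0] * AtA[1][0]) / det]

approx = lambda x: sum(c * gaussian_rbf(abs(x - cc)) for c, cc in zip(coef, centres))
print(abs(approx(0.5) - math.sin(0.5)))  # residual of the least-squares fit at x = 0.5
```

With more sample points than unknowns, the fit is a least-squares compromise rather than an interpolant, which is what makes the node placement so much more forgiving.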

Place, publisher, year, edition, pages
Elsevier, 2021
Keywords
Complex geometry, Radial basis function, Least-squares, Partial differential equation, Immersed method, Ghost points
HSV category
Research subject
Numerisk analys; Beräkningsvetenskap
Identifiers
urn:nbn:se:uu:diva-465219 (URN) 10.1016/j.jcp.2021.110283 (DOI) 000746492700004 ()
Research funder
Swedish Research Council, 2016-04849
Available from: 2022-01-17. Created: 2022-01-17. Last updated: 2024-01-15. Bibliographically approved.
Wetzer, E., Pielawski, N., Breznik, E., Öfverstedt, J., Lu, J., Wählby, C., . . . Sladoje, N. (2021). Contrastive Learning for Equivariant Multimodal Image Representations. In: Paper presented at "The Power of Women in Deep Learning" Workshop at the "Mathematics of deep learning" Programme at the Isaac Newton Institute for Mathematical Sciences. Cambridge University
2021 (English). Conference paper, poster (with or without abstract) (Other academic).
Abstract [en]

Combining the information of different imaging modalities offers complementary information about the properties of the imaged specimen. Often these modalities need to be captured by different machines, which requires that the resulting images be matched and registered in order to map the corresponding signals to each other. This can be a very challenging task due to the varying appearance of the specimen in different sensors.

We have recently developed a method which uses contrastive learning to find representations of both modalities, such that the images of different modalities are mapped into the same representational space. The learnt representations (referred to as CoMIRs) are abstract and very similar with respect to a selected similarity measure. There are requirements which these representations need to fulfil for downstream tasks such as registration, e.g. rotational equivariance or intensity similarity. We present a hyperparameter-free modification of the contrastive loss, which is based on InfoNCE, to produce equivariant, dense-like image representations. These representations are similar enough to be considered in a common space, in which monomodal methods for registration can be exploited.
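The InfoNCE loss that the modification builds on scores how strongly an anchor embedding prefers its true cross-modal match over the other candidates in a batch. A minimal single-anchor sketch in plain Python (the similarity values and temperature are illustrative, not from the paper):

```python
import math

def info_nce(sim_row, pos_index, temperature=0.1):
    """InfoNCE loss for one anchor: negative log-softmax of the positive
    pair's similarity against all candidates in the batch."""
    logits = [s / temperature for s in sim_row]
    m = max(logits)                          # subtract max to stabilize exp
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[pos_index] / sum(exps))

# Similarities of one modality-A embedding to a batch of modality-B
# embeddings (hypothetical values); index 0 is the true cross-modal match.
sims = [0.9, 0.1, -0.3, 0.2]
loss = info_nce(sims, pos_index=0)
print(round(loss, 4))
```

The loss shrinks as the positive pair's similarity grows relative to the negatives, which is what pulls the two modalities into a common representational space.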

Place, publisher, year, edition, pages
Cambridge University, 2021
HSV category
Research subject
Datoriserad bildbehandling
Identifiers
urn:nbn:se:uu:diva-459429 (URN)
Conference
"The Power of Women in Deep Learning" Workshop at the "Mathematics of deep learning" Programme at the Isaac Newton Institute for Mathematical Sciences
Available from: 2021-11-23. Created: 2021-11-23. Last updated: 2025-02-09.
Breznik, E. & Strand, R. (2021). Effects of distance transform choice in training with boundary loss. In: Paper presented at the Swedish Symposium on Deep Learning (SSDL), Online, 15 March 2021.
2021 (English). Conference paper, poster (with or without abstract) (Other academic).
Abstract [en]

Convolutional neural networks are the method of choice for many medical imaging tasks, in particular segmentation. Recently, efforts have been made to include distance measures in network training, for example through the introduction of the boundary loss, calculated via a signed distance transform. Using the boundary loss for segmentation can alleviate issues with class imbalance and irregular shapes, leading to a better segmentation boundary. It is originally based on the Euclidean distance transform. In this paper we investigate the effects of employing various definitions of distance when using the boundary loss for medical image segmentation. Our results show promising behaviour in training with non-Euclidean distances, and suggest a possible new use of the boundary loss in segmentation problems.
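The choice of distance definition studied here can be seen on a toy mask: different metrics assign different distance values to the same pixels, which changes the per-pixel weights a distance-based loss would apply. A brute-force sketch for clarity (not the authors' implementation, which would use an efficient signed distance transform):

```python
def distance_map(mask, metric):
    """Distance from each pixel to the nearest foreground pixel of a binary
    mask, under a chosen metric; brute force for clarity, not speed."""
    h, w = len(mask), len(mask[0])
    fg = [(i, j) for i in range(h) for j in range(w) if mask[i][j]]
    dist = lambda a, b: metric(a[0] - b[0], a[1] - b[1])
    return [[min(dist((i, j), p) for p in fg) for j in range(w)] for i in range(h)]

euclidean  = lambda dy, dx: (dy * dy + dx * dx) ** 0.5  # L2 distance
chessboard = lambda dy, dx: max(abs(dy), abs(dx))       # Chebyshev distance

mask = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
# The two metrics disagree on diagonal neighbours: sqrt(2) vs 1.
print(distance_map(mask, euclidean)[0][0], distance_map(mask, chessboard)[0][0])
```

Swapping the metric while keeping the rest of the loss unchanged is exactly the kind of controlled comparison the poster describes.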

HSV category
Identifiers
urn:nbn:se:uu:diva-499054 (URN)
Conference
Swedish Symposium on Deep Learning (SSDL), Online, 15 March 2021
Research funder
Uppsala University
Available from: 2023-03-22. Created: 2023-03-22. Last updated: 2025-02-09. Bibliographically approved.
Organisations
Identifiers
ORCID iD: orcid.org/0000-0003-3147-5626