Publications from Uppsala University
Publications (10 of 34)
Ahmad, N., Öfverstedt, J., Tarai, S., Bergström, G., Ahlström, H. & Kullberg, J. (2024). Interpretable Uncertainty-Aware Deep Regression with Cohort Saliency Analysis for Three-Slice CT Imaging Studies. In: Ninon Burgos; Caroline Petitjean; Maria Vakalopoulou; Stergios Christodoulidis; Pierrick Coupe; Hervé Delingette; Carole Lartizien; Diana Mateus (Ed.), Proceedings of The 7th International Conference on Medical Imaging with Deep Learning. Paper presented at The 7th International Conference on Medical Imaging with Deep Learning, 3-5 July, 2024, Paris, France (pp. 17-32). MLResearchPress
2024 (English). In: Proceedings of The 7th International Conference on Medical Imaging with Deep Learning / [ed] Ninon Burgos; Caroline Petitjean; Maria Vakalopoulou; Stergios Christodoulidis; Pierrick Coupe; Hervé Delingette; Carole Lartizien; Diana Mateus, MLResearchPress, 2024, pp. 17-32. Conference paper, Published paper (Refereed)
Abstract [en]

Obesity is associated with an increased risk of morbidity and mortality. Achieving a healthy body composition, which involves maintaining a balance between fat and muscle mass, is important for metabolic health and for preventing chronic diseases. Computed tomography (CT) imaging offers detailed insights into the body’s internal structure, aiding in understanding body composition and its related factors. In this feasibility study, we utilized CT image data from 2,724 subjects from the large metabolic health cohort studies SCAPIS and IGT. We trained and evaluated an uncertainty-aware deep regression network based on ResNet-50, which outputs its prediction as a mean and a variance, for quantification of cross-sectional areas of the liver, visceral adipose tissue (VAT), and thigh muscle. This was done using collages of three single-slice CT images from the liver, abdomen, and thigh regions. The model demonstrated promising results on the evaluation metrics, including R-squared (R2) and mean absolute error (MAE) of the predictions. Additionally, for interpretability, the model was evaluated with saliency analysis based on Grad-CAM (Gradient-weighted Class Activation Mapping) at stages 2, 3, and 4 of the network. Deformable image registration to a template subject further enabled cohort saliency analysis, which provides group-wise visualization of image regions of importance for associations to biomarkers of interest. We found that the networks focus on relevant regions for each target, in agreement with prior knowledge.
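The uncertainty-aware regression head described above, which outputs each prediction as a mean and a variance, is typically trained with a Gaussian negative log-likelihood loss. A minimal numpy sketch of such a loss (illustrative only; the exact formulation and names used in the paper are not given here and are assumptions):

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Negative log-likelihood of y under N(mu, exp(log_var)),
    averaged over the batch. Predicting the log-variance keeps the
    variance positive without extra constraints."""
    var = np.exp(log_var)
    return float(np.mean(0.5 * (log_var + (y - mu) ** 2 / var + np.log(2 * np.pi))))

# Toy check: an exact prediction with unit variance attains the minimum
# possible loss for that variance, while a confidently wrong prediction
# (small variance, large error) costs far more than an uncertain wrong one.
y = np.array([10.0, 20.0])
loss_exact = gaussian_nll(y, y, np.zeros(2))
loss_confident_wrong = gaussian_nll(y, y + 3.0, np.log(np.full(2, 0.1)))
loss_uncertain_wrong = gaussian_nll(y, y + 3.0, np.log(np.full(2, 9.0)))
```

The predicted variance thus acts as a learned, per-sample error bar: the network can down-weight hard examples, but pays a penalty for claiming high uncertainty everywhere.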

Place, publisher, year, edition, pages
MLResearchPress, 2024
Series
Proceedings of Machine Learning Research, PMLR, E-ISSN 2640-3498 ; 250
National subject category
Radiology and Image Processing; Epidemiology
Research subject
Machine Learning
Identifiers
urn:nbn:se:uu:diva-554960 (URN)
Conference
The 7th International Conference on Medical Imaging with Deep Learning, 3-5 July, 2024, Paris, France
Research funders
Vetenskapsrådet, 2019-04756; EXODIAB - Excellence of Diabetes Research in Sweden; Hjärt-Lungfonden
Available from: 2025-04-18 Created: 2025-04-18 Last updated: 2025-06-10. Bibliographically approved
Tarai, S., Lundström, E., Öfverstedt, J., Jönsson, H., Ahmad, N., Ahlström, H. & Kullberg, J. (2024). Prediction of Total Metabolic Tumor Volume from Tissue-Wise FDG-PET/CT Projections, Interpreted Using Cohort Saliency Analysis. In: Moi Hoon Yap; Connah Kendrick; Ardhendu Behera; Timothy Cootes; Reyer Zwiggelaar (Ed.), Medical Image Understanding and Analysis: 28th Annual Conference, MIUA 2024, Manchester, UK, July 24–26, 2024, Proceedings, Part II. Paper presented at 28th Annual Conference, MIUA 2024, Manchester, UK, July 24–26, 2024 (pp. 242-255). Cham: Springer
2024 (English). In: Medical Image Understanding and Analysis: 28th Annual Conference, MIUA 2024, Manchester, UK, July 24–26, 2024, Proceedings, Part II / [ed] Moi Hoon Yap; Connah Kendrick; Ardhendu Behera; Timothy Cootes; Reyer Zwiggelaar, Cham: Springer, 2024, pp. 242-255. Conference paper, Published paper (Refereed)
Abstract [en]

Early and accurate prediction of clinical outcomes holds great potential for patient prognostics and personalized treatment planning. Development of automated methods for estimation of medical image-based clinical parameters (e.g. total metabolic tumor volume, TMTV) could pave the way for predicting advanced clinical outcomes not explicitly available in the images, such as overall survival. We developed an automated framework that extracted tissue-wise multi-channel 2D projections from whole-body FDG-PET/CT volumes, by separating tissues based on CT Hounsfield units, and used a DenseNet-121 to estimate the TMTV from the projections. For transparency and interpretability, an image registration-based cohort saliency analysis was proposed. The network was applied on the autoPET cohort (501 scans representing lymphoma, lung cancer, melanoma) and evaluated using a single-channel method (baseline) and a multi-channel method (proposed), for the purpose of comparison. The incorporation of multiple channels demonstrated an advantage in the TMTV prediction, outperforming the baseline model with a ΔMAE = –14.34 ml; ΔR2 = 0.1584; ΔICC = 0.1316 (p-value = 0.0098). The Pearson correlation coefficient (r) was computed between the ground truth (GT) tumor projections and the aggregated saliency maps. Statistical comparison, via bootstrapping, showed that the proposed model consistently outperformed the baseline, with significantly higher r across all cancer types and both sexes, except for melanoma in females. This implied that the aggregated saliency maps generated by the proposed model showed higher correspondence with the GT, compared to the baseline model. Our approach offers a promising and interpretable framework for the automated prediction of TMTV, with further potential to also predict advanced clinical outcomes.
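The tissue-wise projection step, which separates tissues by CT Hounsfield units before collapsing PET uptake into multi-channel 2D images, can be sketched roughly as follows. The HU windows, projection axis, and function names below are illustrative assumptions, not the paper's exact values:

```python
import numpy as np

# Illustrative HU windows; the real tissue thresholds would be tuned.
HU_RANGES = {
    "adipose": (-190, -30),
    "soft":    (-29, 150),
    "bone":    (150, 3000),
}

def tissue_projections(ct, pet, axis=1):
    """Split the PET volume into tissue channels using CT HU masks,
    then mean-project each masked channel along `axis`, yielding a
    multi-channel 2D image (channels last)."""
    channels = []
    for lo, hi in HU_RANGES.values():
        mask = (ct >= lo) & (ct < hi)
        channels.append(np.where(mask, pet, 0.0).mean(axis=axis))
    return np.stack(channels, axis=-1)

# Toy 4x4x4 volumes: PET uptake only inside one soft-tissue slab.
ct = np.full((4, 4, 4), -100.0)   # adipose HU everywhere ...
ct[2] = 50.0                      # ... except one soft-tissue slab
pet = np.zeros_like(ct)
pet[2] = 1.0
proj = tissue_projections(ct, pet)
```

Each channel then carries uptake from a single tissue class, which is what lets a 2D network reason about tissue-specific signal despite the dimensionality reduction.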

Place, publisher, year, edition, pages
Cham: Springer, 2024
Series
Lecture Notes in Computer Science (LNCS), ISSN 0302-9743, E-ISSN 1611-3349 ; 14860
Keywords
Whole-body PET/CT projections, Deep regression, Cohort saliency analysis
National subject category
Cancer and Oncology; Radiology and Image Processing
Identifiers
urn:nbn:se:uu:diva-541110 (URN); 10.1007/978-3-031-66958-3_18 (DOI); 001314340700018 (); 978-3-031-66957-6 (ISBN); 978-3-031-66958-3 (ISBN)
Conference
28th Annual Conference, MIUA 2024, Manchester, UK, July 24–26, 2024
Research funders
Cancerfonden, 201303 PjF 01; Stiftelsen för Makarna Gottfrid och Karin Erikssons fond
Available from: 2024-10-28 Created: 2024-10-28 Last updated: 2025-05-13. Bibliographically approved
Nordling, L., Öfverstedt, J., Lindblad, J. & Sladoje, N. (2023). Contrastive Learning of Equivariant Image Representations for Multimodal Deformable Registration. In: 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI). Paper presented at IEEE 20th International Symposium on Biomedical Imaging, Cartagena, Colombia, 18-21 April, 2023. Institute of Electrical and Electronics Engineers (IEEE)
2023 (English). In: 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI), Institute of Electrical and Electronics Engineers (IEEE), 2023. Conference paper, Published paper (Refereed)
Abstract [en]

We propose a method for multimodal deformable image registration which combines a powerful deep learning approach to generate CoMIRs, dense image-like representations of multimodal image pairs, with INSPIRE, a robust framework for monomodal deformable image registration. We introduce new equivariance constraints to improve the consistency of CoMIRs under deformation. We evaluate the method on three publicly available multimodal datasets, one remote sensing, one histological, and one cytological. The proposed method demonstrates general applicability and consistently outperforms state-of-the-art registration tools elastix and VoxelMorph. We share source code of the proposed method and complete experimental setup as open-source at: https://github.com/MIDA-group/CoMIR_INSPIRE.
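The equivariance constraint asks that the representation commute with spatial transformations: computing the representation of a deformed image should give the same result as deforming the representation. A toy numpy illustration of measuring this property, with gradient magnitude standing in for a learned CoMIR representation (this is a conceptual stand-in, not the paper's training objective):

```python
import numpy as np

def rep(img):
    """Stand-in 'representation': gradient magnitude, which happens to
    be exactly equivariant under 90-degree rotations of the grid."""
    gy, gx = np.gradient(img)
    return np.hypot(gy, gx)

def equivariance_error(f, img, transform):
    """max |f(T(img)) - T(f(img))| -- the quantity an equivariance
    penalty would drive toward zero during training."""
    return float(np.abs(f(transform(img)) - transform(f(img))).max())

rng = np.random.default_rng(0)
img = rng.random((16, 16))
err = equivariance_error(rep, img, np.rot90)
```

A penalty of this form, evaluated on sampled deformations of the training images, is one way to encourage the consistency under deformation that the abstract refers to.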

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
International Symposium on Biomedical Imaging, ISSN 1945-7928, E-ISSN 1945-8452
Keywords
image alignment, correlative imaging, representation learning, equivariance
National subject category
Computer Science
Identifiers
urn:nbn:se:uu:diva-510266 (URN); 10.1109/ISBI53787.2023.10230465 (DOI); 001062050500143 (); 978-1-6654-7358-3 (ISBN); 978-1-6654-7359-0 (ISBN)
Conference
IEEE 20th International Symposium on Biomedical Imaging, Cartagena, Colombia, 18-21 April, 2023
Research funders
Vinnova, 2017-02447; Vinnova, 2020-03611; Vinnova, 2021-01420; Vetenskapsrådet, 2017-04385; Swedish National Infrastructure for Computing (SNIC); Vetenskapsrådet, 2018-05973
Available from: 2023-08-25 Created: 2023-08-25 Last updated: 2023-11-14. Bibliographically approved
Öfverstedt, J., Lindblad, J. & Sladoje, N. (2023). INSPIRE: Intensity and Spatial Information-Based Deformable Image Registration. PLOS ONE, 18(3), Article ID e0282432.
2023 (English). In: PLOS ONE, E-ISSN 1932-6203, Vol. 18, no. 3, article id e0282432. Journal article (Refereed) Published
Abstract [en]

We present INSPIRE, a top-performing general-purpose method for deformable image registration. INSPIRE brings distance measures which combine intensity and spatial information into an elastic B-splines-based transformation model and incorporates an inverse inconsistency penalization supporting symmetric registration performance. We introduce several theoretical and algorithmic solutions which provide high computational efficiency and thereby applicability of the proposed framework in a wide range of real scenarios. We show that INSPIRE delivers highly accurate, as well as stable and robust registration results. We evaluate the method on a 2D dataset created from retinal images, characterized by presence of networks of thin structures. Here INSPIRE exhibits excellent performance, substantially outperforming the widely used reference methods. We also evaluate INSPIRE on the Fundus Image Registration Dataset (FIRE), which consists of 134 pairs of separately acquired retinal images. INSPIRE exhibits excellent performance on the FIRE dataset, substantially outperforming several domain-specific methods. We also evaluate the method on four benchmark datasets of 3D magnetic resonance images of brains, for a total of 2088 pairwise registrations. A comparison with 17 other state-of-the-art methods reveals that INSPIRE provides the best overall performance. Code is available at github.com/MIDA-group/inspire.

Place, publisher, year, edition, pages
Public Library of Science (PLoS), 2023
Keywords
Image registration, splines (mathematics), deformable models, set distance, gradient methods, optimization, cost function, iterative methods, fuzzy sets
National subject category
Computer Graphics and Computer Vision; Medical Image Science
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-429025 (URN); 10.1371/journal.pone.0282432 (DOI); 000945993100007 (); 36867617 (PubMedID)
Research funders
Wallenberg AI, Autonomous Systems and Software Program (WASP); Vinnova, 2017-02447; Vinnova, 2020-03611; Vinnova, 2021-01420
Available from: 2020-12-18 Created: 2020-12-18 Last updated: 2025-02-09. Bibliographically approved
Nordling, L., Öfverstedt, J., Lindblad, J. & Sladoje, N. (2023). Multimodal deformable image registration using contrastive learning of equivariant image representations. Paper presented at 40th Swedish Symposium on Image Analysis, Kolmårdens vildmarkshotell, Sweden, 13-15 March, 2023.
2023 (English). Conference paper, Oral presentation only (Other academic)
Abstract [en]

We propose a method for multimodal deformable image registration which combines a powerful deep learning approach to generate CoMIRs, dense image-like representations of multimodal image pairs, with INSPIRE, a robust framework for monomodal deformable image registration. We introduce new equivariance constraints to improve the consistency of CoMIRs under deformation. We evaluate the method on three publicly available multimodal datasets, one remote sensing, one histological, and one cytological. The proposed method demonstrates general applicability and consistently outperforms state-of-the-art registration tools elastix and VoxelMorph. We share source code of the proposed method and complete experimental setup as open-source at: https://github.com/MIDA-group/CoMIR_INSPIRE.

National subject category
Computer Science
Identifiers
urn:nbn:se:uu:diva-510269 (URN)
Conference
40th Swedish Symposium on Image Analysis, Kolmårdens vildmarkshotell, Sweden, 13-15 March, 2023
Research funders
Vinnova, 2017-02447; Vinnova, 2020-03611; Vetenskapsrådet, 2017-04385
Available from: 2023-08-25 Created: 2023-08-25 Last updated: 2023-08-30. Bibliographically approved
Öfverstedt, J., Lindblad, J. & Sladoje, N. (2022). Cross-Sim-NGF: FFT-Based Global Rigid Multimodal Alignment of Image Volumes using Normalized Gradient Fields. Paper presented at the 10th Workshop on Biomedical Image Registration. Springer Nature
2022 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Multimodal image alignment involves finding spatial correspondences between volumes varying in appearance and structure. Automated alignment methods are often based on local optimization that can be highly sensitive to initialization. We propose a novel efficient algorithm for computing similarity of normalized gradient fields (NGF) in the frequency domain, which we globally optimize to achieve rigid multimodal 3D image alignment. We validate the method experimentally on a dataset comprised of 20 brain volumes acquired in four modalities (T1w, Flair, CT, [18F] FDG PET), synthetically displaced with known transformations. The proposed method exhibits excellent performance on all six possible modality combinations and outperforms the four considered reference methods by a large margin. An important advantage of the method is its speed; global rigid alignment of 3.4 Mvoxel volumes requires approximately 40 seconds of computation, and the proposed algorithm outperforms a direct algorithm for the same task by more than three orders of magnitude. Open-source code is provided.
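The core trick, evaluating an NGF similarity for every displacement at once in the frequency domain, can be sketched in numpy. Expanding the squared dot product of the two normalized gradient fields into component products turns the whole similarity map into a handful of FFT cross-correlations (a simplified 2D, circular-shift illustration; the paper's actual algorithm and parameters may differ):

```python
import numpy as np

def ngf(img, eps=1e-2):
    """Normalized gradient field: gradient vectors scaled to ~unit length."""
    g = np.stack(np.gradient(img))                       # (2, H, W)
    return g / np.sqrt((g ** 2).sum(axis=0) + eps ** 2)

def ngf_similarity_all_shifts(a, b):
    """For every circular shift d, the sum over pixels p of
    (n_a(p) . n_b(p + d))^2.  Expanding the square gives
    sum_ij (n_a_i n_a_j)(p) * (n_b_i n_b_j)(p + d), i.e. one FFT
    cross-correlation per pair of gradient components."""
    na, nb = ngf(a), ngf(b)
    sim = np.zeros(a.shape)
    for i in range(2):
        for j in range(2):
            F = np.fft.fft2(na[i] * na[j])
            G = np.fft.fft2(nb[i] * nb[j])
            sim += np.real(np.fft.ifft2(np.conj(F) * G))
    return sim

# Score every displacement of one random image against another in one pass.
rng = np.random.default_rng(1)
a, b = rng.random((8, 8)), rng.random((8, 8))
sim = ngf_similarity_all_shifts(a, b)
```

A direct evaluation costs O(N) per shift and O(N^2) overall; the FFT form computes the full similarity map in O(N log N) per component pair, which is where speed-ups of the reported order of magnitude come from.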

Place, publisher, year, edition, pages
Springer Nature, 2022
National subject category
Medical Image Science
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-472285 (URN); 10.1007/978-3-031-11203-4_17 (DOI); 000883026000017 ()
Conference
10th Workshop on Biomedical Image Registration
Available from: 2022-04-07 Created: 2022-04-07 Last updated: 2025-02-09. Bibliographically approved
Öfverstedt, J., Lindblad, J. & Sladoje, N. (2022). Efficient Algorithms for Global Multimodal Image Registration. Paper presented at the Swedish Symposium on Image Analysis (SSBA).
2022 (English). Conference paper, Oral presentation only (Other academic)
Abstract [en]

Multimodal image registration is the process of finding spatial correspondences between images formed by different imaging techniques or under different conditions, to facilitate heterogeneous data fusion and correlative analysis. Two similarity measures widely used in multimodal image registration are mutual information (MI) and similarity of normalized gradient fields (NGF). We propose efficient algorithms for computing MI and similarity of NGF for all discrete axis-aligned shifts in the frequency domain. These fast algorithms enable highly reliable global registration of multimodal images, also for very large displacements, which we confirm by their performance evaluation on a number of different pairs of modalities.

We consider four datasets, and observe that global maximization of MI is the best choice for two datasets/applications in 2D, while global maximization of similarity of NGF performs best on the remaining two datasets, of which one consists of 2D images, and the other consists of 3D data. This confirms the relevance of both methods; their properties recommend them for application in different scenarios.    

National subject category
Medical Image Science
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-490950 (URN)
Conference
Swedish Symposium on Image Analysis (SSBA)
Available from: 2022-12-16 Created: 2022-12-16 Last updated: 2025-02-09. Bibliographically approved
Öfverstedt, J., Lindblad, J. & Sladoje, N. (2022). Fast computation of mutual information in the frequency domain with applications to global multimodal image alignment. Pattern Recognition Letters, 159, 196-203
2022 (English). In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 159, pp. 196-203. Journal article (Refereed) Published
Abstract [en]

Multimodal image alignment is the process of finding spatial correspondences between images formed by different imaging techniques or under different conditions, to facilitate heterogeneous data fusion and correlative analysis. The information-theoretic concept of mutual information (MI) is widely used as a similarity measure to guide multimodal alignment processes, where most works have focused on local maximization of MI, which typically works well only for small displacements. This points to a need for global maximization of MI, which has previously been computationally infeasible due to the high run-time complexity of existing algorithms. We propose an efficient algorithm for computing MI for all discrete displacements (formalized as the cross-mutual information function (CMIF)), which is based on cross-correlation computed in the frequency domain. We show that the algorithm is equivalent to a direct method while superior in terms of run-time. Furthermore, we propose a method for multimodal image alignment for transformation models with few degrees of freedom (e.g., rigid) based on the proposed CMIF-algorithm. We evaluate the efficacy of the proposed method on three distinct benchmark datasets, containing remote sensing images, cytological images, and histological images, and we observe excellent success-rates (in recovering known rigid transformations), overall outperforming alternative methods, including local optimization of MI, as well as several recent deep learning-based approaches. We also evaluate the run-times of a GPU implementation of the proposed algorithm and observe speed-ups from 100 to more than 10,000 times for realistic image sizes compared to a GPU implementation of a direct method. Code is shared as open-source at github.com/MIDA-group/globalign.
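The idea behind the CMIF can be sketched in a few lines of numpy: quantize both images, obtain the joint histogram for all circular displacements at once via FFT cross-correlations of per-level indicator images, then convert each joint histogram to an MI value. This is a simplified illustration (quantile quantization, circular boundaries, small images), not the released globalign implementation:

```python
import numpy as np

def cmif(a, b, levels=4):
    """Mutual information between quantized images a and b for every
    circular displacement of b. One FFT cross-correlation per pair of
    intensity levels yields the joint histogram for all shifts at once,
    instead of rebuilding a histogram per candidate shift."""
    qa = np.digitize(a, np.quantile(a, np.linspace(0, 1, levels + 1)[1:-1]))
    qb = np.digitize(b, np.quantile(b, np.linspace(0, 1, levels + 1)[1:-1]))
    n = a.size
    Fa = [np.fft.fft2(qa == i) for i in range(levels)]
    Fb = [np.fft.fft2(qb == j) for j in range(levels)]
    # joint[i, j] holds, for every shift d, #{p : qa(p)=i and qb(p+d)=j}
    joint = np.zeros((levels, levels) + a.shape)
    for i in range(levels):
        for j in range(levels):
            joint[i, j] = np.real(np.fft.ifft2(np.conj(Fa[i]) * Fb[j])) / n
    joint = np.clip(joint, 1e-12, None)
    pa = joint.sum(axis=1, keepdims=True)   # marginals, per shift
    pb = joint.sum(axis=0, keepdims=True)
    return (joint * np.log(joint / (pa * pb))).sum(axis=(0, 1))

# Recover a known circular shift by global maximization of MI.
rng = np.random.default_rng(0)
a = rng.random((16, 16))
b = np.roll(a, (3, 5), axis=(0, 1))
mi = cmif(a, b)
peak = np.unravel_index(np.argmax(mi), mi.shape)
```

Because the whole MI landscape is computed at once, the global maximum is found without any initialization, which is what makes the approach robust to arbitrarily large displacements.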

Place, publisher, year, edition, pages
Elsevier, 2022
Keywords
Mutual information, Image alignment, Global optimization, Multimodal, Entropy
National subject category
Medical Image Science; Computer Graphics and Computer Vision
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-447807 (URN); 10.1016/j.patrec.2022.05.022 (DOI); 000833390400013 ()
Research funders
Wallenberg AI, Autonomous Systems and Software Program (WASP); Vinnova, 2017-02447; Vetenskapsrådet, 2017-04385
Available from: 2021-06-30 Created: 2021-06-30 Last updated: 2025-02-09. Bibliographically approved
Lu, J., Öfverstedt, J., Lindblad, J. & Sladoje, N. (2022). Is image-to-image translation the panacea for multimodal image registration?: A comparative study. PLOS ONE, 17(11), Article ID e0276196.
2022 (English). In: PLOS ONE, E-ISSN 1932-6203, Vol. 17, no. 11, article id e0276196. Journal article (Refereed) Published
Abstract [en]

Despite current advancement in the field of biomedical image processing, propelled by the deep learning revolution, multimodal image registration, due to its several challenges, is still often performed manually by specialists. The recent success of image-to-image (I2I) translation in computer vision applications and its growing use in biomedical areas provide a tempting possibility of transforming the multimodal registration problem into a, potentially easier, monomodal one. We conduct an empirical study of the applicability of modern I2I translation methods for the task of rigid registration of multimodal biomedical and medical 2D and 3D images. We compare the performance of four Generative Adversarial Network (GAN)-based I2I translation methods and one contrastive representation learning method, subsequently combined with two representative monomodal registration methods, to judge the effectiveness of modality translation for multimodal image registration. We evaluate these method combinations on four publicly available multimodal (2D and 3D) datasets and compare with the performance of registration achieved by several well-known approaches acting directly on multimodal image data. Our results suggest that, although I2I translation may be helpful when the modalities to register are clearly correlated, registration of modalities which express distinctly different properties of the sample is not well handled by the I2I translation approach. The evaluated representation learning method, which aims to find abstract image-like representations of the information shared between the modalities, manages better, and so does the Mutual Information maximisation approach, acting directly on the original multimodal images. We share our complete experimental setup as open-source (https://github.com/MIDA-group/MultiRegEval), including method implementations, evaluation code, and all datasets, for further reproducing and benchmarking.

Place, publisher, year, edition, pages
Public Library of Science (PLoS), 2022
National subject category
Medical Image Science; Computer Graphics and Computer Vision
Identifiers
urn:nbn:se:uu:diva-490949 (URN); 10.1371/journal.pone.0276196 (DOI); 000925006300013 (); 36441754 (PubMedID)
Research funders
Vinnova, 2017-02447; Vinnova, 2020-03611; Vetenskapsrådet, 2017-04385
Available from: 2022-12-16 Created: 2022-12-16 Last updated: 2025-02-09. Bibliographically approved
Öfverstedt, J. (2022). Methods for Reliable Image Registration: Algorithms, Distance Measures, and Representations. (Doctoral dissertation). Uppsala: Acta Universitatis Upsaliensis
2022 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Much biomedical and medical research relies on the collection of ever-larger amounts of image data (both 2D images and 3D volumes, as well as time-series) and increasingly from multiple sources. Image registration, the process of finding correspondences between images based on the affinity of features of interest, is often required as a vital step towards the final analysis, which may consist of a comparison of images, measurement of movement, or fusion of complementary information. The contributions in this work are centered around reliable image registration methods for both 2D and 3D images with the aim of wide applicability: similarity and distance measures between images for image registration, algorithms for efficient computation of these, and other commonly used measures for both local and global optimization frameworks, and representations for multimodal image registration where the appearance and structures present in the images may vary dramatically.

The main contributions are: (i) distance measures for affine symmetric intensity image registration, combining intensity and spatial information based on the notion of alpha-cuts from fuzzy set theory; (ii) the extension of the affine registration method to more flexible deformable transformation models, leading to the framework Intensity and Spatial Information-Based Deformable Image Registration (INSPIRE); (iii) two efficient algorithms for computing the proposed distances and their spatial gradients and thereby enabling local gradient-based optimization; (iv) a contrastive representation learning method, Contrastive Multimodal Image Representation for Registration (CoMIR), utilizing deep learning techniques to obtain common representations that can be registered using methods designed for monomodal scenarios; (v) efficient algorithms for global optimization of mutual information and similarities of normalized gradient fields; (vi) a comparative study exploring the applicability of modern image-to-image translation methods to facilitate multimodal registration; (vii) the Stochastic Distance Transform, using the theory of discrete random sets to offer improved noise-insensitivity to distance computations; (viii) extensive evaluation of the proposed image registration methods on a number of different datasets mainly from (bio)medical imaging, where they exhibit excellent performance, and reliability, suggesting wide utility.

Place, publisher, year, edition, pages
Uppsala: Acta Universitatis Upsaliensis, 2022. 110 pp.
Series
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology, ISSN 1651-6214 ; 2107
Keywords
Image registration, alignment, local optimization, global optimization, mutual information, normalized gradient fields, representation learning
National subject category
Computer Graphics and Computer Vision; Medical Image Science
Research subject
Computerized Image Processing
Identifiers
urn:nbn:se:uu:diva-463393 (URN); 978-91-513-1382-5 (ISBN)
Public defence
2022-02-25, 101195, Ångström, Lägerhyddsvägen 1, Uppsala, 13:00 (English)
Available from: 2022-02-03 Created: 2022-01-10 Last updated: 2025-02-09
Identifiers
ORCID iD: orcid.org/0000-0003-0253-9037
