Publications from Uppsala University (DiVA)
Motion Estimation from Temporally and Spatially Sparse Medical Image Sequences
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Artificial Intelligence; Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Systems and Control. ORCID iD: 0000-0002-9013-949X
2024 (English). Doctoral thesis, comprehensive summary (Other academic)
Description
Abstract [en]

Motion is a fundamental aspect of human life. Even during low-intensity activities, we move. The lungs absorb oxygen when inhaling and release carbon dioxide when exhaling. The heart pumps oxygenated blood to the body's organs. Wave-like contractions help us process food. All such events cause motion within the body. Being able to describe this motion offers benefits in healthcare, e.g., analysis of organ function and guidance during ongoing treatments. The motion can be captured by acquiring medical images in real-time. However, in several cases the resolution of the medical images is limited by the acquisition time, and the images suffer from low temporal and spatial resolution. One such example appears in radiotherapy, where 2D cine-MRIs are acquired to monitor ongoing treatment sessions. An accurate estimate of the entire 3D motion provides a more realistic picture of the actual delivery outcome and is a necessary feature for more advanced procedures, like real-time beam adaptation.

In this thesis, we develop methods to estimate motion from temporally and spatially sparse medical image sequences. We start by extracting knowledge from optimization-based medical image registration methods and showing how deep learning can reduce execution time. Then, we model the motion dynamics as a sequence of deformable image registrations. Due to the high dimensionality of medical images, we model the dynamics in a lower-dimensional space, applying dimension-reduction techniques such as principal component analysis and variational auto-encoders. The dynamics are then modeled using state-space representations and diffusion probabilistic models to solve the two inference problems of forecasting and simulating the state processes.
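The pipeline sketched above — reduce high-dimensional deformation fields to a few principal components, then model the dynamics linearly in that space — can be illustrated in a few lines. The sketch below uses synthetic stand-in data and a plain least-squares linear transition in place of a full state-space model; all array sizes and variable names are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: T displacement fields, each flattened to D values.
T, D, k = 50, 300, 3
fields = rng.standard_normal((T, D)).cumsum(axis=0)  # a smooth-ish sequence

# Dimension reduction via PCA: keep the k leading principal components.
mean = fields.mean(axis=0)
U, S, Vt = np.linalg.svd(fields - mean, full_matrices=False)
components = Vt[:k]                      # (k, D) basis of motion modes
latent = (fields - mean) @ components.T  # (T, k) low-dimensional trajectory

# Fit a linear transition z_{t+1} ~ A z_t by least squares
# (the deterministic core of a linear state-space model).
A, *_ = np.linalg.lstsq(latent[:-1], latent[1:], rcond=None)

# One-step forecast in latent space, then decode back to a displacement field.
z_next = latent[-1] @ A
field_next = mean + z_next @ components
print(field_next.shape)  # (300,)
```

The point of the latent detour is that the transition matrix is k-by-k instead of D-by-D, which is what makes forecasting and simulation tractable for image-sized states.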

The main contribution lies in the five presented scientific articles, where we first treat the problems of temporally and spatially sparse sequences separately and then combine them into a unified solution. The proposed methods are evaluated on medical images of several modalities, such as MRI, CT, and ultrasound, and finally demonstrated on a use case in the radiotherapy domain, where more accurate motion estimates could spare healthy tissue from exposure to radiation dose.

Place, publisher, year, edition, pages
Uppsala: Acta Universitatis Upsaliensis, 2024, p. 81
Series
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology, ISSN 1651-6214 ; 2456
Keywords [en]
Motion modeling, Medical image registration, Deep learning, Dimensionality reduction, Dynamic modeling
National Category
Medical Image Processing
Research subject
Artificial Intelligence
Identifiers
URN: urn:nbn:se:uu:diva-538082
ISBN: 978-91-513-2244-5 (print)
OAI: oai:DiVA.org:uu-538082
DiVA, id: diva2:1902077
Public defence
2024-12-05, 101195, Heinz-Otto Kreiss, Ångströmslaboratoriet, Lägerhyddsvägen 1, Uppsala, 09:15 (English)
Opponent
Supervisors
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-10-28 Created: 2024-10-01 Last updated: 2024-10-28
List of papers
1. Learning a Deformable Registration Pyramid
2021 (English). In: Segmentation, Classification, and Registration of Multi-modality Medical Imaging Data / [ed] Nadya Shusharina, Mattias P. Heinrich, Ruobing Huang, Springer Nature, 2021, Vol. 12587, p. 80-86. Conference paper, Published paper (Refereed)
Abstract [en]

We introduce an end-to-end unsupervised (or weakly supervised) image registration method that blends conventional medical image registration with contemporary deep learning techniques from computer vision. Our method downsamples both the fixed and the moving images into multiple feature map levels, where a displacement field is estimated at each level and then further refined throughout the network. We train and test our model on three different datasets. Compared with the initial registrations, our model improves performance, and we expect further gains if it were fine-tuned for each task. The implementation is publicly available (https://github.com/ngunnar/learning-a-deformable-registration-pyramid).
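The coarse-to-fine idea — estimate a displacement at a coarse level of an image pyramid, upscale it, and refine it at the finer level — can be illustrated without any learning. The toy sketch below replaces the paper's learned per-level estimator with an exhaustive integer-shift search and recovers a known global translation; it is a hypothetical stand-in for intuition, not the published network.

```python
import numpy as np

def pyramid(img, levels):
    """Average-pooled image pyramid, finest level first."""
    out = [img]
    for _ in range(levels - 1):
        h, w = out[-1].shape
        out.append(out[-1][:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean((1, 3)))
    return out

def best_shift(fixed, moving, search=4):
    """Exhaustive integer-shift search: a crude stand-in for a learned
    per-level displacement estimator."""
    best, score = np.zeros(2, dtype=int), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            s = ((fixed - np.roll(moving, (dy, dx), axis=(0, 1))) ** 2).mean()
            if s < score:
                best, score = np.array([dy, dx]), s
    return best

rng = np.random.default_rng(0)
moving = rng.random((64, 64))
fixed = np.roll(moving, (6, -4), axis=(0, 1))    # ground-truth shift (6, -4)

# Coarse-to-fine refinement: estimate at the coarse level, double, refine.
shift = np.zeros(2, dtype=int)
for f, m in zip(pyramid(fixed, 2)[::-1], pyramid(moving, 2)[::-1]):
    shift = 2 * shift
    shift = shift + best_shift(f, np.roll(m, tuple(shift), axis=(0, 1)))
print(shift)  # [ 6 -4]
```

Searching a small window at each level keeps the cost low while the doubling step carries the coarse estimate down to full resolution, which is the same economy the learned pyramid exploits.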

Place, publisher, year, edition, pages
Springer Nature, 2021
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 12587
National Category
Medical Image Processing
Research subject
Artificial Intelligence
Identifiers
urn:nbn:se:uu:diva-443269 (URN)
10.1007/978-3-030-71827-5_10 (DOI)
978-3-030-71826-8 (ISBN)
978-3-030-71827-5 (ISBN)
Conference
MICCAI 2020, Lima, Peru, October 4–8, 2020
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Swedish Foundation for Strategic Research, SM19-0029
Available from: 2021-06-01 Created: 2021-06-01 Last updated: 2024-10-01. Bibliographically approved
2. Unsupervised dynamic modeling of medical image transformations
2022 (English). In: 2022 25th International Conference on Information Fusion (FUSION 2022), Institute of Electrical and Electronics Engineers (IEEE), 2022, p. 1-7. Conference paper, Published paper (Refereed)
Abstract [en]

Spatiotemporal imaging has applications in, e.g., cardiac diagnostics, surgical guidance, and radiotherapy monitoring. In this paper, we explain the temporal motion by identifying the underlying dynamics, based only on the sequential images. Our dynamical model maps observed high-dimensional sequential images to a low-dimensional latent space, wherein a linear relationship between a hidden state process and the lower-dimensional representation of the inputs holds. For this, we use a conditional variational auto-encoder (CVAE) to nonlinearly map the higher-dimensional image to a lower-dimensional space, wherein we model the dynamics with a linear Gaussian state-space model (LG-SSM). The model, a modified version of the Kalman variational auto-encoder, is end-to-end trainable, and the weights, both in the CVAE and the LG-SSM, are updated simultaneously by maximizing the evidence lower bound of the marginal likelihood. In contrast to the original model, we explain the motion with a spatial transformation from one image to another. This results in sharper reconstructions and the possibility of transferring auxiliary information, such as segmentations, through the image sequence. Our experiments on cardiac ultrasound time series show that the dynamic model outperforms traditional image registration in execution time while achieving similar accuracy. Further, our model offers the possibility to impute and extrapolate missing samples.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Keywords
Dynamic system, State-space models, Deep learning, Generative models, Sequential modeling, Image registration
National Category
Computer Sciences
Identifiers
urn:nbn:se:uu:diva-486389 (URN)
10.23919/FUSION49751.2022.9841369 (DOI)
000855689000139 ()
978-1-7377497-2-1 (ISBN)
978-1-6654-8941-6 (ISBN)
Conference
25th International Conference on Information Fusion (FUSION), July 4-7, 2022, Linköping, Sweden
Funder
Knut and Alice Wallenberg Foundation
Swedish Foundation for Strategic Research, SM19-0029
Available from: 2022-10-10 Created: 2022-10-10 Last updated: 2024-10-01. Bibliographically approved
3. Online Learning in Motion Modeling for Intra-interventional Image Sequences
2024 (English). In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2024: 27th International Conference, Marrakesh, Morocco, October 6–10, 2024, Proceedings, Part II / [ed] Marius George Linguraru; Qi Dou; Aasa Feragen; Stamatia Giannarou; Ben Glocker; Karim Lekadir; Julia A. Schnabel, Springer, 2024, p. 706-716. Conference paper, Published paper (Refereed)
Abstract [en]

Image monitoring and guidance during medical examinations can aid both diagnosis and treatment. However, the sampling frequency is often too low, which creates a need to estimate the missing images. We present a probabilistic motion model for sequential medical images, with the ability to both estimate motion between acquired images and forecast the motion ahead of time. The core is a low-dimensional temporal process based on a linear Gaussian state-space model with analytically tractable solutions for forecasting, simulation, and imputation of missing samples. The results, from two experiments on publicly available cardiac datasets, show reliable motion estimates and an improved forecasting performance using patient-specific adaptation by online learning.
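The analytically tractable machinery behind such a linear Gaussian state-space model can be illustrated with a minimal Kalman filter: filter a few observations, then forecast ahead of time by iterating the transition matrix without further updates. All matrices below are illustrative choices (a constant-velocity toy model), not the parameters of the paper.

```python
import numpy as np

# Minimal linear Gaussian state-space model:
#   z_t = A z_{t-1} + w_t,   x_t = C z_t + v_t
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # constant-velocity transition
C = np.array([[1.0, 0.0]])               # observe position only
Q = 0.01 * np.eye(2)                     # process-noise covariance
R = np.array([[0.1]])                    # observation-noise covariance

def kalman_step(z, P, x):
    """One predict-update cycle; returns the filtered state and covariance."""
    z_pred, P_pred = A @ z, A @ P @ A.T + Q
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)          # Kalman gain
    z_new = z_pred + K @ (x - C @ z_pred)
    P_new = (np.eye(2) - K @ C) @ P_pred
    return z_new, P_new

# Filter a short 1D trajectory, then forecast h steps ahead in closed form
# by iterating the transition matrix without observation updates.
z, P = np.zeros(2), np.eye(2)
for x in [0.0, 0.12, 0.21, 0.33]:
    z, P = kalman_step(z, P, np.array([x]))
h = 3
z_fc = np.linalg.matrix_power(A, h) @ z
print((C @ z_fc).item())  # forecast of the observation h steps ahead
```

Online, patient-specific adaptation as described in the abstract would amount to updating the model parameters as new observations stream in; the filtering recursion itself stays the same.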

Place, publisher, year, edition, pages
Springer, 2024
Series
Lecture Notes in Computer Science (LNCS), ISSN 0302-9743, E-ISSN 1611-3349 ; 15002
Keywords
Image registration, Online learning, Dynamic probabilistic modeling
National Category
Medical Image Processing
Research subject
Artificial Intelligence
Identifiers
urn:nbn:se:uu:diva-538033 (URN)
10.1007/978-3-031-72069-7_66 (DOI)
978-3-031-72068-0 (ISBN)
978-3-031-72069-7 (ISBN)
Conference
International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2024, 27th International Conference, Marrakesh, Morocco, October 6–10, 2024
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Knut and Alice Wallenberg Foundation
Available from: 2024-09-09 Created: 2024-09-09 Last updated: 2024-10-21. Bibliographically approved
4. Diffusion-Based 3D Motion Estimation from Sparse 2D Observations
2023 (English)Manuscript (preprint) (Other academic)
Abstract [en]

Intra-interventional imaging is a tool for monitoring and guiding ongoing treatment sessions. Ideally, one would like the full 3D image at high temporal resolution; however, this is not possible due to the acquisition time. In this study, we consider the scenario where the observations are sparse and consist only of 2D image slices through the 3D volume. Given 2D-2D image registrations between a predefined 3D volume and the observations, we propose a method to estimate the full 3D motion. This 3D motion enables the reconstruction of the 3D anatomy. Our method relies on a conditioning-based denoising diffusion model and generates estimates given the 2D sparse observations. We reduce the dimensionality of the diffusion process by embedding the data in a lower-dimensional space using principal component analysis. The model is evaluated in two experiments: first on synthetically generated data and then on medical lung images. Our observations show that the estimates are stable across the entire volume and within 1 mm of the lower bound defined by the reconstruction error.
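Schematically, the approach runs a denoising diffusion model in a PCA-reduced space, conditioned on an embedding of the 2D observations. The toy sketch below runs a linear-schedule DDPM reverse loop in which the trained denoiser is replaced by a hypothetical stand-in that simply treats the conditioning vector as the clean sample; every number, schedule, and name here is an illustrative assumption, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear noise schedule for a toy DDPM in a k-dimensional PCA space.
T_steps, k = 100, 3
betas = np.linspace(1e-4, 0.02, T_steps)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

# Hypothetical stand-in for a trained conditional denoiser eps_theta(z_t, t, cond):
# it pretends the conditioning vector is the clean sample, purely for illustration.
def eps_theta(z_t, t, cond):
    return (z_t - np.sqrt(alpha_bar[t]) * cond) / np.sqrt(1.0 - alpha_bar[t])

# Reverse (ancestral) sampling, conditioned on the sparse-observation embedding.
cond = np.array([0.5, -0.2, 0.1])   # e.g. PCA coefficients of the 2D observations
z = rng.standard_normal(k)          # start from pure noise
for t in range(T_steps - 1, -1, -1):
    eps = eps_theta(z, t, cond)
    z = (z - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
    if t > 0:                        # no noise is injected at the final step
        z += np.sqrt(betas[t]) * rng.standard_normal(k)
print(np.round(z, 2))  # the sample collapses onto the conditioning vector
```

With a real denoising network the samples would scatter around plausible 3D-motion coefficients rather than collapse onto the condition; the PCA embedding is what keeps each reverse step cheap enough for this setting.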

Keywords
Motion modeling, 3D reconstruction, Medical image registration, Diffusion model
National Category
Medical Image Processing
Research subject
Artificial Intelligence
Identifiers
urn:nbn:se:uu:diva-538019 (URN)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-09-09 Created: 2024-09-09 Last updated: 2024-10-01
5. Machine learning-based 3D deformable motion modeling for MRI-guided radiotherapy
(English)Manuscript (preprint) (Other academic)
Abstract [en]

Background: To compensate for organ motion during a radiotherapy session, margins are normally added to the tumor region. With MR-linacs, it is now possible to monitor the motion by acquiring 2D cine-MRI in real-time.

Purpose: In this paper, we propose a method to estimate the entire 3D motion given sparse information in the form of 2D images of the anatomy.

Methods: The method consists of three models: two 2D motion models with forecasting capability and one 2D-to-3D extrapolation model to estimate the 3D motion at each point in time. In the experiments, we use real images from patients treated with an MRI-linac system, where seven patients were used for training and two for evaluation. The experiment was two-fold: one part based on a phase-sorted 4D CT with known motions, and one based on a cine-MRI sequence, where the ground-truth 3D motion was unknown.

Results: Our model estimates the 3D motion given two 2D image observations in the coronal and sagittal orientations with an average error of 0.43 mm over the entire anatomy. In the PTV, the average error was 0.82 mm and 0.56 mm for the two patients in the evaluation cohort. For the cine-MRI sequence, our approach achieved results comparable to previously published centroid tracking while also providing a complete deformable 3D motion estimate.

Conclusions: We present a method to estimate full 3D motion from sparse data in the form of 2D images, suitable for the MR-linac.

National Category
Medical Image Processing
Research subject
Artificial Intelligence
Identifiers
urn:nbn:se:uu:diva-538023 (URN)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-09-09 Created: 2024-09-09 Last updated: 2024-10-01

Open Access in DiVA

UUThesis_N-Gunnarsson-2024 (1295 kB), 77 downloads
File information
File name: FULLTEXT01.pdf
File size: 1295 kB
Checksum (SHA-512): 963aecee2a2fc2d748cb0407125e205819d84ea549b33a82d473abcad72dcd48f908f0c86cf02c288a06c2c090d4ba7c167573acaa06cfcc1bfeb030aee7eaaa
Type: fulltext
Mimetype: application/pdf

Authority records

Gunnarsson, Niklas

