Publications from Uppsala University (DiVA)
Machine learning-based 3D deformable motion modeling for MRI-guided radiotherapy
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Artificial Intelligence. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Systems and Control. ORCID iD: 0000-0002-9013-949X
Odense University Hospital, Odense, Denmark. ORCID iD: 0000-0002-5309-2696
Odense University Hospital, Odense, Denmark. ORCID iD: 0000-0002-7270-7967
Odense University Hospital, Odense, Denmark.
Show others and affiliations
(English) Manuscript (preprint) (Other academic)
Abstract [en]

Background: To compensate for organ motion during a radiotherapy session, margins are normally added to the tumor region. With MR-linacs, it is now possible to monitor the motion by acquiring 2D cine-MRI in real time.

Purpose: In this paper, we propose a method to estimate the entire 3D motion given sparse information in the form of 2D images of the anatomy.

Methods: The method consists of three models: two 2D motion models with forecasting capability and one 2D-to-3D extrapolation model to estimate the 3D motion at each point in time. In the experiments, we use real images from patients treated with an MR-linac system, where seven patients were used for training and two for evaluation. The experiment was two-fold: one part based on a phase-sorted 4D CT with known motion, and one based on a cine-MRI sequence, where the ground-truth 3D motion was unknown.
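The 2D-to-3D extrapolation step can be illustrated with a toy sketch (the shapes, the random basis, and the least-squares fit are stand-ins for illustration, not the paper's actual trained model): if full 3D deformation fields are represented by a handful of learned motion modes, the voxels visible in two orthogonal 2D slices can suffice to recover the mode coefficients, and hence the whole field.

```python
import numpy as np

# Hypothetical sketch: recover the mode coefficients of a full 3D
# deformation field from the voxels visible in one coronal and one
# sagittal slice. The random basis stands in for a model learned
# offline, e.g. from a phase-sorted 4D CT.
D, H, W, K = 8, 16, 16, 4                        # coarse volume, K motion modes
rng = np.random.default_rng(0)
basis = rng.standard_normal((D * H * W * 3, K))  # flattened 3D modes

true_coef = rng.standard_normal(K)               # unknown motion state
field_3d = basis @ true_coef                     # full 3D displacement field

# Indices of the displacement components observed in the two 2D planes.
idx = np.arange(D * H * W * 3).reshape(D, H, W, 3)
obs = np.concatenate([idx[:, H // 2].ravel(),      # coronal slice
                      idx[:, :, W // 2].ravel()])  # sagittal slice

# Least-squares fit on the sparse 2D observations, then reconstruct in 3D.
coef, *_ = np.linalg.lstsq(basis[obs], field_3d[obs], rcond=None)
recon = basis @ coef
```

In this noiseless toy setting the K coefficients are identified exactly, since the two slices provide far more than K linear constraints on the field.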

Results: Our model estimates the 3D motion given two 2D image observations in the coronal and sagittal orientations with an average error of 0.43 mm over the entire anatomy. In the PTV (planning target volume), the average error was 0.82 mm and 0.56 mm for the two patients in the evaluation cohort. For the cine-MRI sequence, our approach achieved results comparable to previously published centroid tracking while also providing a complete deformable 3D motion estimate.

Conclusions: We present a method to estimate full 3D motion from sparse data in the form of 2D images, suitable for the MR-linac.

National Category
Medical Image Processing
Research subject
Artificial Intelligence
Identifiers
URN: urn:nbn:se:uu:diva-538023
OAI: oai:DiVA.org:uu-538023
DiVA, id: diva2:1895956
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-09-09 Created: 2024-09-09 Last updated: 2024-10-01
In thesis
1. Motion Estimation from Temporally and Spatially Sparse Medical Image Sequences
2024 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Motion is a fundamental aspect of human life. Even during low-intensity activities, we move. The lungs absorb oxygen when we inhale and release carbon dioxide when we exhale. The heart pumps oxygenated blood to the body's organs. Wave-like contractions help us process food. All such events cause motion within the body. Being able to describe this motion offers benefits in healthcare, e.g., analysis of organ function and guidance during ongoing treatments. The motion can be captured by acquiring medical images in real time. However, in several cases, the resolution of the medical images is limited by the acquisition time, and the images suffer from low temporal and spatial resolution. One such example appears in radiotherapy, e.g., when acquiring 2D cine-MRIs for monitoring ongoing treatment sessions. An accurate estimate of the entire 3D motion provides a more realistic picture of the actual delivery outcome and is a necessary feature for more advanced procedures, like real-time beam adaptation.

In this thesis, we develop methods to estimate the motion from temporally and spatially sparse medical image sequences. We start by extracting knowledge from optimization-based medical image registration methods and showing how deep learning can reduce execution time. Then, we model the motion dynamics as a sequence of deformable image registrations. Due to the high dimensionality of the medical images, we model the dynamics in a lower-dimensional space. For this, we apply dimension reduction techniques like principal component analysis and variational auto-encoders. The dynamics are then modeled using state-space representations and diffusion probabilistic models to solve the two inference problems of forecasting and simulating the state processes.
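As a minimal illustration of the state-space idea (a hypothetical linear AR(1) stand-in, not the thesis's learned models), a low-dimensional breathing-like latent trace can be forecast one step ahead with a transition matrix fitted by least squares:

```python
import numpy as np

# Toy latent motion trace: a 2D sinusoidal state standing in for the
# PCA/VAE coefficients of periodic breathing motion.
t = np.arange(200)
z = np.stack([np.sin(0.2 * t), np.cos(0.2 * t)], axis=1)

# Fit a linear transition z_{t+1} ~= z_t @ A from consecutive state pairs.
A, *_ = np.linalg.lstsq(z[:-1], z[1:], rcond=None)

# One-step forecast of the next latent state (t = 200); decoding it
# through the dimension-reduction model would yield the 3D deformation.
z_next = z[-1] @ A
```

Because a fixed-frequency sinusoid pair evolves by an exact planar rotation, the fitted transition recovers that rotation and the forecast is essentially exact; real traces would need the noise-aware state-space and diffusion models described above.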

The main contribution lies in the five presented scientific articles, where we deal with the problem of temporally and spatially sparse sequences separately and then combine them into a uniform solution. The proposed methods are evaluated on medical images of several modalities, such as MRI, CT, and ultrasound, and finally demonstrated on the use case in the radiotherapy domain, where more accurate motion estimates could spare healthy tissues from being exposed to radiation dose.

Place, publisher, year, edition, pages
Uppsala: Acta Universitatis Upsaliensis, 2024. p. 81
Series
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology, ISSN 1651-6214 ; 2456
Keywords
Motion modeling, Medical image registration, Deep learning, Dimensionality reduction, Dynamic modeling
National Category
Medical Image Processing
Research subject
Artificial Intelligence
Identifiers
urn:nbn:se:uu:diva-538082 (URN)
978-91-513-2244-5 (ISBN)
Public defence
2024-12-05, 101195, Heinz-Otto Kreiss, Ångströmslaboratoriet, Lägerhyddsvägen 1, Uppsala, 09:15 (English)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-10-28 Created: 2024-10-01 Last updated: 2024-10-28

Open Access in DiVA

No full text in DiVA

Authority records

Gunnarsson, Niklas

Search in DiVA

By author/editor
Gunnarsson, Niklas; Bernchou, Uffe; Mahmood, Faisal; Kimstrand, Peter
By organisation
Artificial Intelligence; Division of Systems and Control