Manuscript (preprint) (Other academic) (English)
Abstract [en]
Brain tractography involves mapping diffusion-weighted images (DWI) onto streamlines representing neural fibre bundles. Recent research has framed tractography as a reinforcement learning (RL) problem with actor-critic models. However, previous RL-based methods may compromise the geometrical relations between the input (DWI) and the output (tractogram).
More specifically, 3D rotations applied to the input of RL-based tractography are not adequately reflected in the output, indicating a lack of SO(3) equivariance. This study aims to restore to RL-based tractography the equivariance present in previous non-learning-based methods (e.g., iFOD2 from MRtrix3).
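Concretely, SO(3) equivariance of a tracking step means that rotating the input rotates the output accordingly. With notation introduced here for illustration (not taken from the manuscript), a tracking function f that maps a local spherical DWI signal S to a streamline direction should satisfy

    f(\rho_{\mathrm{in}}(R)\, S) \;=\; R\, f(S), \qquad \forall\, R \in SO(3),

where \rho_{\mathrm{in}}(R) denotes the action of the rotation R on the spherical signal.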
To achieve this, we introduce SO(3)-equivariant and SO(3)-invariant components for the actor (direction prediction model) and the critic (Q-value prediction model), respectively. We employ an SE(3)-equivariant transformer as the next-direction prediction function. Because both the input DWI and the output directional update can be represented as spherical tensors that transform under representations of SO(3), this formulation is a natural fit for the problem at hand.
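As an illustration only (not the authors' implementation), the equivariance property above can be checked numerically for a toy direction predictor; predict_direction and the sampling scheme below are hypothetical stand-ins, assuming the DWI signal is sampled on a set of unit directions.

import numpy as np

def random_rotation(rng):
    # Draw a random proper rotation in SO(3) via QR decomposition.
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q = q * np.sign(np.diag(r))        # fix the sign ambiguity of the factorisation
    if np.linalg.det(q) < 0:           # enforce det = +1
        q[:, 0] = -q[:, 0]
    return q

def predict_direction(signal, directions):
    # Hypothetical stand-in for the actor: returns the sampling direction
    # with the largest signal value, which is trivially SO(3)-equivariant.
    return directions[np.argmax(signal)]

rng = np.random.default_rng(0)
directions = rng.normal(size=(60, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
signal = rng.random(60)                # toy spherical samples of the DWI signal

R = random_rotation(rng)
d = predict_direction(signal, directions)
d_rotated = predict_direction(signal, directions @ R.T)  # rotate the input sampling

# Equivariance check: rotating the input must rotate the predicted direction.
assert np.allclose(d_rotated, R @ d)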
Another benefit of RL-based tractography is that incorporating local neighbourhoods can help mitigate well-known tractography ambiguities (e.g., kissing, crossing, and fanning fibres). Our proposed algorithm extracts neighbourhood information as a predefined graph with spherical signals on the nodes.
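The abstract does not detail the graph construction; the sketch below shows one plausible layout under our own assumptions, with the voxels of a cubic neighbourhood as nodes, precomputed spherical-harmonic coefficients of the DWI signal as node features, and edges between face-adjacent voxels (all names are hypothetical).

import numpy as np

def build_neighbourhood_graph(sh_coeffs, centre_voxel, radius=1):
    # sh_coeffs: 4-D array (X, Y, Z, C) of spherical-harmonic coefficients
    # fitted to the DWI signal (assumed precomputed).
    cx, cy, cz = centre_voxel
    offsets, features = [], []
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dz in range(-radius, radius + 1):
                offsets.append((dx, dy, dz))
                features.append(sh_coeffs[cx + dx, cy + dy, cz + dz])
    offsets = np.asarray(offsets)
    features = np.asarray(features)

    # Connect voxels that differ by one step along a single axis (face adjacency).
    edges = [(i, j)
             for i in range(len(offsets))
             for j in range(len(offsets))
             if i != j and np.abs(offsets[i] - offsets[j]).sum() == 1]
    return features, offsets, np.asarray(edges)

# Example usage with a toy coefficient volume (hypothetical shapes).
volume = np.random.default_rng(0).normal(size=(16, 16, 16, 15))
node_feats, node_offsets, edge_index = build_neighbourhood_graph(volume, (8, 8, 8))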
The contribution of this work is threefold. First, we discuss rotational equivariance in streamline tractography at a theoretical level. Second, we propose a method that combines RL-based tractography with an equivariant model. Third, we evaluate the equivariance of the proposed method both locally and globally.
Keywords
Tractography, Reinforcement Learning, Rotational Equivariance, Geometric Deep Learning, SE(3)-Transformer, Diffusion-Weighted MRI
National Category
Medical Image Processing
Research subject
Medical Technology; Computer Science
Identifiers
urn:nbn:se:kth:diva-355875 (URN)
Funder
Knut and Alice Wallenberg Foundation
Note
Acknowledgments: The computations for the experiments were enabled by the Berzelius resource provided by the Knut and Alice Wallenberg Foundation at the National Academic Infrastructure for Supercomputing in Sweden. This research has been partially funded by Digital Futures, the dBrain project, the Swedish Research Council (Grant No. 2022-03389), and MedTechLabs. The funding sources were not involved in the research and preparation of this article.
QC 20241106
2024-11-06 Bibliographically approved