Description
Is your feature request related to a problem? Please describe.
MONAI supports three distinct registration networks (UNet, LocalNet, GlobalNet) and two types of prediction (DVF and DDF).
The paired_lung_ct demo combines LocalNet and DDF prediction.
It would be great if we could provide some more tutorials covering a different network (GlobalNet) and prediction type (DVF).
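For reference, a rough sketch of the building blocks referred to above, assuming a recent MONAI version (constructor arguments are illustrative and may differ between releases). GlobalNet predicts affine parameters and returns the corresponding DDF, while LocalNet returns a dense field that a tutorial could interpret either as a DDF or as a DVF:
```python
# Rough sketch of the networks mentioned above, assuming a recent MONAI version;
# constructor arguments are illustrative and may differ between releases.
import torch
from monai.networks.nets import GlobalNet, LocalNet

image_size = (64, 64, 64)  # example volume size, kept divisible by 8 for the depths below

# GlobalNet predicts affine parameters and returns the corresponding DDF.
global_net = GlobalNet(
    image_size=image_size,
    spatial_dims=3,
    in_channels=2,  # moving + fixed image concatenated along the channel dim
    num_channel_initial=16,
    depth=3,
)

# LocalNet predicts a dense field with one channel per spatial dimension;
# a tutorial could treat it as a DDF directly or as a DVF to be integrated.
local_net = LocalNet(
    spatial_dims=3,
    in_channels=2,
    out_channels=3,
    num_channel_initial=16,
    extract_levels=[3],
    out_activation=None,
    out_kernel_initializer="zeros",
)

moving = torch.rand(1, 1, *image_size)
fixed = torch.rand(1, 1, *image_size)
pair = torch.cat([moving, fixed], dim=1)
print(global_net(pair).shape)  # torch.Size([1, 3, 64, 64, 64])
print(local_net(pair).shape)   # torch.Size([1, 3, 64, 64, 64])
```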
Describe the solution you'd like
Add new tutorials.
Describe alternatives you've considered
N/A
Additional context
N/A
Activity
kate-sann5100 commented on Mar 5, 2021
Should we
@mathpluscode @YipengHu
YipengHu commented on Mar 5, 2021
LocalNet/UNet + DVF makes more sense. I'd suggest the two tutorials be:
1 - LocalNet/UNet + DVF, unpaired, no label supervision - using single or mixed inhale/exhale phases (see the sketch after this list);
2- LocalNet, paired, with label supervision - registering inhale (moving) to exhale (fixed) phases.
If time permits,
3 - Change one of the above to GlobalNet (DVF has no effect with an affine transform here)
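For option 1, a minimal sketch of the DVF pipeline, assuming recent MONAI APIs (DVF2DDF and Warp from monai.networks.blocks; exact arguments may differ between versions): the network output is treated as a stationary velocity field, integrated to a DDF by scaling and squaring, and then used to resample the moving image.
```python
# Minimal sketch of the DVF pipeline in option 1, assuming recent MONAI APIs;
# exact arguments may differ between versions.
import torch
from monai.networks.blocks import DVF2DDF, Warp

dvf2ddf = DVF2DDF(num_steps=7)  # integrate the velocity field by scaling and squaring
warp = Warp()                   # resample an image with a dense displacement field

# Pretend this is the raw LocalNet/UNet output interpreted as a velocity field (DVF).
dvf = 0.1 * torch.rand(1, 3, 64, 64, 64)
ddf = dvf2ddf(dvf)              # DVF -> DDF

moving = torch.rand(1, 1, 64, 64, 64)
warped = warp(moving, ddf)      # moving image resampled towards the fixed image
print(warped.shape)             # torch.Size([1, 1, 64, 64, 64])
```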
kate-sann5100 commented on Mar 5, 2021
@YipengHu
That sounds very feasible.
Btw, I am curious: what are the pros and cons of DVF vs DDF, and their suitable use cases?
YipengHu commented on Mar 5, 2021
kate-sann5100 commented on Mar 5, 2021
@YipengHu That answers my question perfectly. So does that mean that, in the lung CT case, DVF has no obvious advantage over DDF, as we don't have specific constraints?
YipengHu commented on Mar 5, 2021
Now, this is still an open research question as far as I'm aware. For example, between the rib cage and lung tissue there is clear sliding motion which, in theory, should not be modelled as a diffeomorphic transformation, plus the airway expansion etc., but DVF may still perform better locally, especially when corresponding features are lacking.
kate-sann5100 commented on Mar 5, 2021
@YipengHu
I think this information is enough for me to write the tutorials for now. That's super helpful. Thank you.
tvercaut commented on Mar 6, 2021
Not much more to add from @YipengHu's answer. Just one clarification: Even with stationary velocity fields, imposing volume-preserving constraints remains somewhat involved, see e.g. @LucasFidon's paper.
fvlntn commented on Sep 23, 2021
Hello @kate-sann5100 and @YipengHu,
Is there any ETA on a tutorial for GlobalNet with unpaired data, DDF and label supervision? I'd like to check that my work has no obvious errors.
I tried it myself and got some results using GlobalNet, always using the same atlas as fixed_image and fixed_label.
I used a linear combination of LNCCLoss, DiceLoss and BendingEnergyLoss as the loss, but I don't know which weights are best.
Is there any study on that?
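Roughly, the combination looks like the following sketch, assuming a recent MONAI version; the weights and helper function are placeholders for illustration, not recommended values.
```python
# Roughly what the loss combination above looks like, assuming a recent MONAI
# version; the weights below are placeholders, not recommended values.
from monai.losses import BendingEnergyLoss, DiceLoss, LocalNormalizedCrossCorrelationLoss

image_loss = LocalNormalizedCrossCorrelationLoss(spatial_dims=3, kernel_size=3)
label_loss = DiceLoss()          # assumes binary/one-hot label volumes
regularisation = BendingEnergyLoss()

w_image, w_label, w_reg = 1.0, 1.0, 1.0  # placeholder weights


def registration_loss(warped_image, fixed_image, warped_label, fixed_label, ddf):
    """Weighted sum of image similarity, label overlap and smoothness terms."""
    return (
        w_image * image_loss(warped_image, fixed_image)
        + w_label * label_loss(warped_label, fixed_label)
        + w_reg * regularisation(ddf)
    )
```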
Thanks a lot
YipengHu commented on Sep 23, 2021
@fvlntn first, I'm not sure we have such a plan, as there would be so many permutations of labels, losses, transformations etc. ;) but do keep us updated with your experiments and any interesting findings or bugs! As for the weights between these losses, it is a good question and I'd say very much application-dependent (and implementation-dependent as well). For example, we use a very high (>10) weight on BEL in some of our multimodal registrations where image similarity (e.g. LNCC) does not always work robustly. But for label-driven applications with many good segmentations available, it might need a lower weight to allow more local deformation to be learned. The best way is perhaps to experiment with your own data (I'm afraid).
fvlntn commented on Sep 23, 2021
@YipengHu Thanks for the answer. I meant a tutorial for unpaired registration and a different one for GlobalNet, not necessarily the one I described in my first comment (as you mentioned in #134 (comment)).
Thanks for the information. For the weights, I'm actually experimenting with my data to find the best possible weights for the loss, but it's quite hard to evaluate registration outputs. Is there any obvious way to evaluate a registration model besides the Dice metric? (The loss changes since the weights change every time.)
Last question: using GlobalNet (affine), is it normal for BEL to be around 5e-12? For both training and validation, BEL (weight 1) is between 4.5e-12 and 5e-12, which is really low compared to, for instance, LNCC (weight 1) between -0.92 and -0.93 and Dice loss (weight 1) between 0.4 and 0.3 on my data.
Thanks,
YipengHu commented on Sep 23, 2021
@fvlntn :) good catch. I'm afraid we (sort of) decided not to go ahead with those suggestions (correct me if I'm wrong here @kate-sann5100). One of the limitations was the limited publicly available data, which does not allow us to prove/disprove large, useful registration networks. Getting things "just working" on those data could even be misleading, for example the pre-configured hyperparameters would almost certainly be overfitting the 10-20 training images.
Re: registration evaluation - not sure I can help here, it is very much an open research question. Happy to discuss further with more details of your application/data etc.
Re: GlobalNet with BEL - GlobalNet uses affine as the transformation model at the moment, so BEL will not be that useful, e.g. BE is invariant to translation and rotation. Although numerically you might get some small positive values to penalise, I would just set the BE weight to 0. We could disable BEL entirely when GlobalNet is used, but that may be too restrictive for those who want to try something experimental, e.g. predicting other low-dimensional transformation parameters.
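As a quick numerical check of this point: an affine displacement field is linear in the coordinates, so its second derivatives, and hence the bending energy, vanish up to floating point error. A minimal sketch with plain PyTorch and MONAI's BendingEnergyLoss (the affine matrix and sizes are arbitrary):
```python
# For a purely affine transform the displacement field is linear in the
# coordinates, so its second derivatives (and hence the bending energy)
# vanish up to floating point error. Sizes and the affine matrix are arbitrary.
import torch
import torch.nn.functional as F
from monai.losses import BendingEnergyLoss

size = [1, 1, 32, 32, 32]
theta = torch.tensor([[[1.1, 0.1, 0.0, 0.05],
                       [0.0, 0.9, 0.2, -0.10],
                       [0.1, 0.0, 1.0, 0.00]]])  # arbitrary 3D affine
grid = F.affine_grid(theta, size, align_corners=False)
identity = F.affine_grid(torch.eye(3, 4).unsqueeze(0), size, align_corners=False)
ddf = (grid - identity).permute(0, 4, 1, 2, 3)   # (1, 3, 32, 32, 32) displacement field

print(BendingEnergyLoss()(ddf))  # at (or numerically very close to) zero
```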
fvlntn commented on Sep 23, 2021
Alright, thanks a lot for your time and your explanations!
ShalomRochman commented on Jan 28, 2025
Is there any tutorial for affine registration with GlobalNet?