ENH: Add GQI to available models #143
base: main
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #143      +/-   ##
==========================================
- Coverage   71.35%   67.43%    -3.93%
==========================================
  Files          23       24       +1
  Lines        1138     1262     +124
  Branches      139      145       +6
==========================================
+ Hits          812      851      +39
- Misses        283      368      +85
  Partials       43       43
☔ View full report in Codecov by Sentry.
Yes. IIUC, GQI is not really a model in the sense that parameters are fit to data according to some equation. Rather, much like DSI from which it was derived, it is a way to compute an ODF from diffusion data. So in some sense the model "prediction" would be identical to the data. Is there a particular reason to implement it for nifreeze?
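For context, a minimal sketch of how GQI is typically driven through DIPY; the gradient files and the random data array are placeholders, not nifreeze code. It illustrates that the fit yields an ODF sampled on a discrete sphere rather than a fitted signal equation, which is why there is no obvious predict() step:

```python
import numpy as np
from dipy.core.gradients import gradient_table
from dipy.data import get_sphere
from dipy.reconst.gqi import GeneralizedQSamplingModel

# Placeholder acquisition scheme and data; in practice these would come
# from the DWI dataset being processed.
bvals = np.loadtxt("bvals")                      # assumed FSL-style text files
bvecs = np.loadtxt("bvecs").T
gtab = gradient_table(bvals, bvecs=bvecs)
data = np.random.rand(10, 10, 10, len(bvals))    # stand-in for a 4D DWI array

model = GeneralizedQSamplingModel(gtab, sampling_length=1.2)
gqi_fit = model.fit(data)

# GQI returns an orientation distribution function on a discrete sphere;
# there is no parametric signal model to evaluate back at a given b-vector.
sphere = get_sphere(name="repulsion724")
odf = gqi_fit.odf(sphere)
```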
If I understood @yasseraleman correctly yesterday, you can "reconstruct" the data from the ODF by calculating a kernel. In our leave-one-out framework, this would not make sense in "single fit mode" (i.e., fitting once with ALL the data and then predicting individual orientations, so that the model plays a "regularizing" role for that particular orientation and you get something different from the original training data). However, it could be useful in the standard operation mode, where GQI would be "fit" on all the data except the orientation you are about to "predict". The "model" situation you describe would be the same for the Gaussian process: if you "predict" an orientation that went into the "fitting", you get back exactly the training data point (and that is useless for our framework).
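To make the standard operation mode concrete, here is a rough sketch of the leave-one-out loop described above; the `fit_predict` callable is a hypothetical stand-in for whatever model wrapper ends up doing the fitting and prediction, not nifreeze's actual API:

```python
import numpy as np

def leave_one_out_predictions(data, bvecs, fit_predict):
    """Hypothetical leave-one-out loop over diffusion orientations.

    For each orientation, the model is fit on all the remaining
    orientations and then asked to predict the held-out one.
    """
    n_dirs = data.shape[-1]
    predicted = np.zeros_like(data)
    for i in range(n_dirs):
        train = np.delete(np.arange(n_dirs), i)
        # ``fit_predict`` stands in for the model wrapper: it fits on the
        # training orientations and predicts the signal along ``bvecs[i]``.
        predicted[..., i] = fit_predict(data[..., train], bvecs[train], bvecs[i])
    return predicted
```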
Yes, I've been trying to fine-tune DTI to work on a multi-shell dataset and it doesn't really work. If you use all b-values, the fit is far from great (reasonably so), and if you only use b < 1500, then the predictions beyond that b-value are not usable as a registration target (reasonably so, too). Talking with @yasseraleman, he suggested GQI as a good candidate for basically any multi-b scheme (multi-shell, DSI) with high b-values, where DTI and DKI fall very short. He mentioned it because he actually does this prediction in his code.
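For reference, the shell selection described above might look like the snippet below; the arrays are random placeholders and the 1500 s/mm² cutoff is simply the value mentioned in this comment:

```python
import numpy as np

# Placeholder 4D DWI array and FSL-style gradient scheme.
data = np.random.rand(10, 10, 10, 64)
bvals = np.random.choice([0, 1000, 2000, 3000], size=64)
bvecs = np.random.rand(64, 3)

# Restrict the DTI fit to the lower shells (b < 1500 s/mm^2); the higher
# shells would then have to be predicted by extrapolation.
lowb = bvals < 1500
data_lowb = data[..., lowb]
bvals_lowb, bvecs_lowb = bvals[lowb], bvecs[lowb]
```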
Gotcha, yes, that does seem like an interesting option. But doesn't that push the problem to fitting an adequate kernel? In particular, you'd have to define how the kernel behaves at higher b-values (presumably something non-Gaussian). So you'd have to fit a kernel to the data (or maybe I am missing something? @yasseraleman: how do you do this?). Conceptual issues aside, I think this would not be too hard to implement, maybe first as a sub-class here?
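A bare-bones sketch of what such a sub-class might look like; the class name, constructor, and method signatures are assumptions for illustration, not nifreeze's actual model interface:

```python
from dipy.reconst.gqi import GeneralizedQSamplingModel


class GQIModel:
    """Hypothetical wrapper exposing GQI through a fit/predict interface."""

    def __init__(self, gtab, **kwargs):
        # The DIPY model and its options are passed through unchanged.
        self._model = GeneralizedQSamplingModel(gtab, **kwargs)
        self._fit = None

    def fit(self, data, **kwargs):
        self._fit = self._model.fit(data)

    def predict(self, gradient, **kwargs):
        # This is the missing piece discussed above: reconstructing the
        # signal along ``gradient`` from the fitted ODF (e.g., through a
        # kernel) would have to be implemented here.
        raise NotImplementedError("Signal prediction from the GQI ODF is not implemented")
```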
Force-pushed 93bb31b to 0bb34bc
Adds Generalized Q-sampling Imaging (GQI) to the portfolio of DIPY-supported models.
I'm hitting the issue that this model does not have a predict() implementation (cc/ @yasseraleman):
NotImplementedError: This model does not have prediction implemented yet
@arokem, would it be very hard to write?