I am opening this thread to discuss the comments of Reviewer 1
In general, the manuscript was well written and technically sound. My main issue is that in the introduction the work is motivated by improving transmit field homogeneity, but despite one of the vendor algorithms giving the best field homogeneity, the discussion and conclusions pivot to metrics where their shimming toolbox was better (contrast and efficiency), and the importance of these metrics was unclear. The conclusion is overly positive if homogeneity were the goal.
So this seems quite positive overall, but I do see how adding B1+ homogeneity results is necessary to address their concerns.
I think I was missing results on B1 homogeneity across algorithms; was this not the motivation?
R1.1: I think this is solved by just adding the CoV explicitly to Table I. If we had space, I'd add "B1+ along the cord" as a figure, but we are already at the limit of five figures plus tables.
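For reference, a minimal sketch of how the CoV entry could be computed from a B1+ map restricted to the cord (NumPy-based; `b1_map` and `cord_mask` are hypothetical array names, not our actual pipeline):

```python
import numpy as np

def cov_b1(b1_map, cord_mask):
    """Coefficient of variation of |B1+| within the cord mask (std/mean)."""
    vals = np.abs(b1_map[cord_mask > 0])
    return np.std(vals) / np.mean(vals)
```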
It would have benefited from an explanation of why the efficiency is important. It should be pointed out that the general tradeoff between efficiency and homogeneity is well known. Were there SAR or voltage constraints that were not stated, or were the optimizations carried out for a fixed power goal?
R1.2: My argument would be that the low B1+ efficiency, besides indicating a potential power constraint for power-limited sequences (an MPRAGE, for example), also shows that it's a sub-optimal operating mode, since as a trend we observe that the increase in efficiency does not lead to a lower homogeneity. That is, our homogeneity gains come "for free".
Does the measured low efficiency of the vendor algorithms have some concrete impact? Could it have resulted from an adjustment which gives a uniform field (without a particular target amplitude or flip angle) but relies on an imperfect general transmit voltage adjustment giving a low reference voltage? In other words, what speaks against simply scaling up the Tx channel weights?
R1.3: I am not sure I am parsing this comment correctly. But given that our reference voltage was kept constant during the scans (I need to triple-check this), I think the argument against simply scaling the Tx weights is partly that scaling does not buy any additional homogeneity, and partly that it deposits extra SAR for no benefit.
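To make that concrete: scaling the complex channel weights by a factor k scales the combined |B1+| linearly and the deposited power (hence SAR) quadratically, while leaving the CoV unchanged. A toy illustration with made-up arrays (8 channels, 100 cord voxels; none of this is our actual data):

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal(8) + 1j * rng.standard_normal(8)                 # per-channel shim weights
b1_maps = rng.standard_normal((8, 100)) + 1j * rng.standard_normal((8, 100))   # per-channel B1+ over cord voxels

def combined_b1(w):
    """Magnitude of the channel-combined B1+ field."""
    return np.abs(np.tensordot(w, b1_maps, axes=1))

for k in (1.0, 2.0):
    b1 = combined_b1(k * weights)
    # mean |B1+| scales with k, forward power with k**2, but the CoV is identical
    print(f"k={k}: mean |B1+| = {b1.mean():.3f}, CoV = {b1.std() / b1.mean():.3f}")
```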
Along the same theme again, it might be clearer if the performance were reported in terms of peak local SAR, or efficiency defined as the ratio of magnitude of sums to sum of magnitudes, or % of SAR maximum.
R1.4: I have saved the RFSWD logs for all these scans, we can do this.
"2 x 2 mm" in 2.1.2 Sequences should be mm².
R1.5: Trivial
As a reader I would be curious to know how long it takes to transfer images, calculate optimized weights, and transfer them back to the scanner (what is the impact on the workflow).
R1.6: I think we implicitly mention this in the caption of Figure 1, but we can make it explicit. I will look at one of the relevant screen recordings to get a feeling for the upper bound.
Doesn't the GRE contrast depend on how the transmit voltage (reference) adjustment was done? For example, if a 15-degree FA were determined to be optimal for contrast, wouldn't an algorithm that achieved a particular amplitude succeed over another algorithm that was much more homogeneous yet suffered from a poor reference adjustment? Alternatively, doesn't contrast depend on the actual flip angle achieved and not on efficiency? This could be discussed.
R1.7: I have several points for this one. One, we did our optimisation on the B1+ maps, not on the GRE scans (though I guess this needs to be rephrased/clarified). Two, the reference voltage was kept constant across all the GRE scans (that is, it was the same RF waveform, "just" with different per-channel amplitudes and phases, not an adjustment of the RF waveform itself). Three, the achieved flip angle is directly related to the efficiency.
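On the third point, the relation can be made explicit with the small-tip approximation (my notation here, not the manuscript's): for a fixed pulse shape of duration τ driven at a fixed reference voltage,

$$\alpha = \gamma \int_0^{\tau} B_1^+(t)\,dt \;\propto\; \eta\, V_\mathrm{ref},$$

where η is the B1+ produced per unit drive voltage, i.e. the shim efficiency. With V_ref held constant across modes, the achieved flip angle scales linearly with η.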
I was unsure whether a difference between, e.g., 1.19 and 1.27 in mean contrast was meaningful.
R1.8: I guess we can run a statistical test across the five subjects? Or clarify what the statistical tests in the Appendix mean/imply?
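If we go the test route, with n = 5 a non-parametric paired test is probably the safest; a sketch using SciPy, with placeholder contrast values (not our data):

```python
from scipy.stats import wilcoxon

# Per-subject mean contrast under two shim settings (placeholder values, n = 5)
contrast_vendor = [1.19, 1.21, 1.18, 1.22, 1.20]
contrast_shimtb = [1.27, 1.25, 1.28, 1.26, 1.29]

# Paired Wilcoxon signed-rank test; note that with n = 5 the smallest
# attainable two-sided p-value is 2/32 = 0.0625, so significance at 0.05
# is out of reach regardless of the data.
stat, p = wilcoxon(contrast_vendor, contrast_shimtb)
print(f"W = {stat}, p = {p:.4f}")
```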
@jcohenadad what do you think?