Replies: 7 comments 1 reply
-
Hi @eort, I'll defer to @AlexanderFengler on this if he disagrees, but I think your intuition is correct. In general, the intuition about link functions in GLMs such as logistic regression also applies here, so the parameter estimates should be interpreted after transforming them back to their original space using the inverse link functions.
-
Hi @eort, you are right: basically, apply the correct inverse link for each parameter. Thank you for trying HSSM :). Best,
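As a rough illustration of what "apply the correct inverse link" means (a generic sketch, not HSSM's actual internals; the bounds used below are made up and should be replaced by the bounds your model actually uses):

```python
# Sketch of inverting a bounded ("generalized") logit link.
# NOTE: the (0.3, 2.5) bounds below are illustrative, not HSSM defaults.
import math

def gen_logit_inv(x, lower, upper):
    """Map a link-scale value x back into the (lower, upper) interval."""
    return lower + (upper - lower) / (1.0 + math.exp(-x))

# A link-scale estimate of 0.0 lands exactly mid-interval:
mid = gen_logit_inv(0.0, 0.3, 2.5)  # -> 1.4
```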
-
Hi both, thanks for your answers! If you don't mind, I have a follow-up. I realized that after the back-transformation my estimates are a little unrealistic. Specifically, while the intercept in my regression (
-
Okay, I am more confused now. Looking at the same regression on the

For the intercept, it is clear that the transformation is necessary and the result reasonable, but for the drug effect the not-yet-back-transformed values just don't make any sense (they are beyond the bounds of
-
Hey @eort,

I think the confusion is that you should basically run the transform on the output of the regression, not on single terms. Transforms operate on the full regression output. If you look at the traces, you should see the trialwise parameter values, which are already post-transform.

However, I will grant that those estimates of

Best,
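To make the "output of the regression, not single terms" point concrete, here is a minimal numeric sketch (made-up coefficients, a plain logit for illustration): because the link is nonlinear, transforming term by term gives a different answer than transforming the full linear predictor.

```python
import math

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

intercept, beta, x = -1.1, 0.8, 1.0  # made-up coefficients

# Transform the regression output (this is what the trialwise
# parameter values are):
correct = inv_logit(intercept + beta * x)

# Transforming each term separately does NOT give the same thing:
termwise = inv_logit(intercept) + inv_logit(beta) * x

assert abs(correct - termwise) > 0.1  # a nonlinear link is not additive
```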
-
Hey,

Sorry to bring this up again, but I still haven't managed to make sense of this. @AlexanderFengler, you mentioned that I should look at the trialwise parameters, as they are post-transform. I managed to do that, and the distributions look alright; however, they of course incorporate all the fixed and random effects. It is unclear to me how I would extract the group-level effect of a condition from the trialwise estimates. If I don't care about the subject-specific effects or intercepts, how can I obtain estimates of the experimental factors in a space that is interpretable for the respective parameter?

Also, turning back to your point that I should run the transform on the output of the regression, not on single terms: if I understand you correctly, this would produce the trialwise estimates as in

That being said, I am increasingly sure that my implementations of the back-transformation (either via

Thanks again,
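One way to approach the "group effect without subject-specific terms" question is sketched below. This is a generic recipe under assumptions, not a confirmed HSSM workflow: the coefficient names and draw values are hypothetical stand-ins for the real posterior traces. The idea is to push each posterior draw of only the relevant fixed effects through the inverse link, and then summarise the difference of the transformed predictions.

```python
import math

def inv_logit(x):  # stand-in for the parameter's actual inverse link
    return 1.0 / (1.0 + math.exp(-x))

# Fake posterior draws on the link scale (stand-ins for real traces):
intercept_draws = [-1.2, -1.1, -1.0, -0.9]
condition_draws = [0.05, 0.08, 0.11, 0.08]

# Predicted parameter value per draw for each condition, holding the
# random effects at zero (i.e. a "typical" subject):
base = [inv_logit(b0) for b0 in intercept_draws]
treat = [inv_logit(b0 + b1)
         for b0, b1 in zip(intercept_draws, condition_draws)]

# The condition effect on the natural scale is the difference of the
# transformed predictions, one value per posterior draw:
effect_draws = [t - b for t, b in zip(treat, base)]
```

Summarising `effect_draws` (mean, HDI) then gives a group-level effect in the parameter's own units.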
-
Me again... I looked up how Bambi uses link functions and came across this example, where

So, in this example the estimates are actually simply back-transformed and interpreted subsequently. Assuming that they know what they are doing, why is this permitted? The only reason I can think of is that in that example only a single regressor was put into the model, but then there is still the intercept. How does this example relate to your (@AlexanderFengler) statement that applying the back-transform to individual terms does not make sense?

On an unrelated note, do I understand correctly that using a

Eduard
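One possible resolution of the apparent contradiction (my reading, not an authoritative statement about that Bambi example): for a log link the prediction factorises, since exp(a + b) = exp(a) * exp(b), so a single exponentiated coefficient is interpretable on its own as a multiplicative effect. A (generalized) logit admits no such factorisation, which would be why back-transforming individual terms is meaningless there. A quick numeric check with made-up coefficients:

```python
import math

a, b = 1.2, 0.4  # made-up link-scale coefficients

# Log link: the back-transform distributes over the sum ...
assert math.isclose(math.exp(a + b), math.exp(a) * math.exp(b))

# ... but a logit link does not, in any arithmetic sense:
def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

assert abs(inv_logit(a + b) - inv_logit(a) * inv_logit(b)) > 0.1
assert abs(inv_logit(a + b) - (inv_logit(a) + inv_logit(b))) > 0.1
```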
-
Hi,

Thanks for fixing the issues regarding sampling when the prior is not (-inf, inf). Using `link_settings='log_logit'` improves my fits massively!

I was wondering about the interpretation, though. From the replies in this discussion (primarily by @AlexanderFengler), I understand that under these link-function settings the parameters cannot be interpreted directly, but need to be transformed back to the "standard space" for their values to make sense in the original sense. Is it correct that this back-transformation can be done by calling `model.link['<parameter>'].linkinv(x)`? If so, can this be done on all samples of the inference data object in the model after sampling, such that calling the ArviZ plotting functions would automatically use the back-transformed values rather than the fitted ones?

Perhaps to give a concrete example, I fitted the following model:

resulting in the following fit:

Ignoring the random effects, is it correct to conclude that the posterior of `a` peaks around 0.97 (`linkinv(-1.1)`) for the intercept (i.e. condition "plac"), ranging from 0.83 (`linkinv(-1.1)`) to 1.13 (`linkinv(-0.8)`), whereas the influence of the condition "cyc", relative to the intercept, is described by a posterior from 1.68 (`linkinv(0.05)`) to 1.72 (`linkinv(0.11)`) with the mode being at 1.70 (`linkinv(0.08)`)? Does this make sense?

Thanks!
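On the question of transforming all samples before plotting, the idea can be sketched generically (variable names, layout, and the inverse link below are placeholders, not HSSM's confirmed API): map the draws element-wise through the inverse link, then summarise or plot the transformed array instead of the link-scale one.

```python
import math

def inv_logit(x):  # stand-in for the model's actual linkinv
    return 1.0 / (1.0 + math.exp(-x))

# Fake link-scale draws for "a", laid out as chain x draw:
a_link = [[-1.1, -0.9, -1.0],
          [-1.0, -0.8, -1.1]]

# Element-wise back-transform, preserving the chain/draw layout:
a_natural = [[inv_logit(v) for v in chain] for chain in a_link]
```

With an ArviZ `InferenceData`, the same element-wise mapping can be applied to the arrays in `idata.posterior` before calling the plotting functions.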