Is there a way to use the trained models to do conditional inference on new observations and to get the underlying probabilities rather than sampled datasets? For example, I train on a binary matrix of diagnoses; then, when a new patient comes in, I input their known conditions and get the probabilities that they have the other conditions.
The ability to do that in combination with the TF API would make this a very powerful "auto-complete" model.
It is possible to recover the predicted probabilities (rather than labels) by setting `cat_coalesce = FALSE` and `bin_label = FALSE` in the `complete()` function. Since uncertainty over the predictions is handled by multiply imputing the data, the best strategy is then to average across the M completed datasets to get good estimates of the average predicted probabilities.
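A minimal sketch of that averaging step, assuming you have already generated M completed datasets with `complete(..., cat_coalesce = FALSE, bin_label = FALSE)` and extracted the predicted-probability columns (the arrays below are stand-ins for those exports, not output of the package itself):

```python
import numpy as np

# Stand-in for M = 3 completed datasets: each array holds predicted
# probabilities for the binary diagnosis columns (rows = patients,
# columns = conditions). In practice these would come from complete()
# with cat_coalesce = FALSE and bin_label = FALSE.
imputations = [
    np.array([[0.90, 0.10], [0.20, 0.70]]),
    np.array([[0.80, 0.20], [0.30, 0.60]]),
    np.array([[0.85, 0.15], [0.25, 0.65]]),
]

# Average across the M imputations to estimate the mean predicted
# probability for each patient/condition cell.
avg_probs = np.mean(np.stack(imputations), axis=0)
print(avg_probs)
# [[0.85 0.15]
#  [0.25 0.65]]
```

The point-wise mean over completed datasets is the standard way to pool quantities of interest under multiple imputation, so it gives a sensible single probability estimate per cell.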
We are actively looking into adding a new function to predict missing values for data not used in training, which would allow you to achieve the proposed workflow above.