Prior for R_0 -- suggestion #105
We've done sensitivity analyses but it's always worth another look. What are the mean and quantiles of the alternatives you've plotted? I'd very much like to see a pull request and/or community contribution along these lines. What's BRAM-COD?
On Wed, May 13, 2020 at 2:30 PM Luiz Max F. Carvalho wrote:
Hi, first of all, many many thanks for the awesome work all of you have
been doing!
Secondly, I have a minor question:
What's the justification for the prior currently employed for R_0? The
hierarchy
kappa ~ half-normal(0, 0.5)
R_0 ~ half-normal(m, kappa)
leads to a "witch hat"-looking prior on R_0.
[image: density of the implied prior on R_0] <https://user-images.githubusercontent.com/2875083/81818532-87dc7000-9504-11ea-8f74-9ee259a3501a.png>
Would a simpler Gamma/log-normal prior not be better here? Easier to
elicit and probably captures key features such as Pr(R_0 > 1) more
easily. If you're interested I can help construct such a prior and then
submit a PR, but I have no easy way of running the model to check results
as of now.
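(A minimal sketch of the elicitation idea, added for illustration; it is not part of the original comment. It assumes one fixes the prior mean E[R_0] = m and a target Pr(R_0 > 1), both of which are hypothetical values here.)
m <- 3            # assumed prior mean for R_0
p_target <- 0.95  # assumed target for Pr(R_0 > 1)
# For Gamma(shape = alpha, rate = alpha/m) the mean is m for any alpha,
# so only the shape needs tuning to hit the target tail probability.
f <- function(alpha) pgamma(1, shape = alpha, rate = alpha / m, lower.tail = FALSE) - p_target
alpha <- uniroot(f, interval = c(0.01, 100))$root
c(shape = alpha, rate = alpha / m)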
(Sorry, read too quickly -- presumably you did the obvious thing and used a lognormal and a gamma with mean m.)
Can you send code for your plot? I tried to replicate it and either I have
a bug or you have a bug. (Most likely me.)
Hi @flaxter , it's more likely that I'm the buggy one. Here's the code:
LogMean <- function(realMean, realSD){
## takes REAL mean and REAL standard deviation; returns LOG mean
realVar <- realSD^2
mu <- log(realMean/ sqrt(1 + (realVar/realMean^2)))
return(mu)
}
LogVar <- function(realMean, realSD){
## takes REAL mean and REAL standard deviation; returns LOG variance
realVar <- realSD^2
sigmasq <- log(1 + (realVar/realMean^2))
return(sigmasq)
}
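## (Added note, not in the original comment.) These are the standard lognormal
## moment-matching relations used above: if X is lognormal with real-scale
## mean m and variance v, then
##   meanlog:  mu      = log(m / sqrt(1 + v/m^2))
##   varlog:   sigma^2 = log(1 + v/m^2)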
###################
M <- 1e6
kappa_ic <- abs(rnorm(n = M, mean = 0, sd = 1/2))
R0_ic <- abs(rnorm(n = M, mean = 3.28, sd = kappa_ic))
kappa_bram <- abs(rnorm(n = M, mean = 0, sd = 1))
R0_bram <- abs(rnorm(n = M, mean = 2.4, sd = kappa_bram))
m <- 3 # E[R0]
v <- .9^2 # Var(R0)
# Gamma
alpha <- m^2/v
beta <- m/v
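## (Added note.) Gamma moment matching: with shape = alpha and rate = beta,
## mean = alpha/beta = m and variance = alpha/beta^2 = v.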
#Log-normal
mu <- LogMean(realMean = m, realSD = sqrt(v) )
sigma <- sqrt( LogVar(realMean = m, realSD = sqrt(v) ) )
R0_gamma <- rgamma(n = M, shape = alpha, rate = beta)
R0_ln <- rlnorm(n = M, meanlog = mu, sdlog = sigma)
forplot <- data.frame(R_0 = c(R0_ic,
R0_bram,
R0_gamma,
R0_ln),
study = rep(c("Imperial",
"BRAM-COD",
"Gamma",
"Log-normal"), each = M))
library(ggplot2)
ggplot(forplot, aes(x = R_0, colour = study, fill = study)) +
geom_density(alpha = .4) +
scale_x_continuous(expression(R_0), expand = c(0, 0)) +
scale_y_continuous("Density", expand = c(0, 0)) +
geom_vline(xintercept = 1, linetype = "longdash") +
theme_bw(base_size = 16)
summy <- function(x, na.rm = TRUE){
ans <- data.frame(mean = mean(x, na.rm = na.rm),
sd = sd(x, na.rm = na.rm),
lwr = quantile(x, probs = .025, na.rm = na.rm),
uprr = quantile(x, probs = .975, na.rm = na.rm))
return(ans)
}
lapply(list(bramcod = R0_bram, imperial = R0_ic, Gamma = R0_gamma, LN = R0_ln), summy)
The last line gives the mean, standard deviation, and 2.5% / 97.5% quantiles for each prior.
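(An added sketch, not from the original comment: given the samples generated above, the Pr(R_0 > 1) implied by each prior can be compared directly.)
sapply(list(Imperial = R0_ic, BRAM_COD = R0_bram, Gamma = R0_gamma, LogNormal = R0_ln),
       function(x) mean(x > 1))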
BRAM-COD is a recent Brazilian study with a model based on your model from report 13.
I realised the script above has an (as of yet inconsequential) bug in the generation of the random samples. Technically, the model specified in Stan is a truncated normal, not a folded normal, so the correct generation of the samples would be something like
M <- 1e6
library(truncnorm)
kappa_ic <- rtruncnorm(n = M, mean = 0, sd = 1/2, a = 0, b = Inf)
R0_ic <- rtruncnorm(n = M, mean = 3.28, sd = kappa_ic, a = 0, b = Inf)
kappa_bram <- rtruncnorm(n = M, mean = 0, sd = 1, a = 0, b = Inf)
R0_bram <- rtruncnorm(n = M, mean = 2.4, sd = kappa_bram, a = 0, b = Inf)
Giving
> lapply(list(bramcod = R0_bram, imperial = R0_ic, Gamma = R0_gamma, LN = R0_ln), summy)
$bramcod
mean sd lwr uprr
2.5% 2.471883 0.9082468 0.6756234 4.67399
$imperial
mean sd lwr uprr
2.5% 3.280978 0.497174 2.192721 4.371233
$Gamma
mean sd lwr uprr
2.5% 2.999731 0.9005028 1.502073 5.006691
$LN
mean sd lwr uprr
2.5% 3.00048 0.9005886 1.616915 5.110658
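(An added aside, not from the thread: because kappa has location zero, its folded-normal and truncated-normal versions coincide, and R_0 only differs between the two parameterisations when kappa is comparable to the location, which is why the bug above is inconsequential in practice. A quick check, assuming the truncnorm package is available:)
library(truncnorm)
set.seed(42)
M <- 1e5
kappa <- rtruncnorm(n = M, mean = 0, sd = 1/2, a = 0, b = Inf)
R0_folded <- abs(rnorm(n = M, mean = 3.28, sd = kappa))                     # folded-normal version
R0_truncated <- rtruncnorm(n = M, mean = 3.28, sd = kappa, a = 0, b = Inf)  # truncated version
c(folded = mean(R0_folded), truncated = mean(R0_truncated))  # nearly identical, since kappa << 3.28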