cannot get simple "Bayes rule for Gaussians" to work #17
I am trying to perform inference in a simple linear Gaussian X->Y model, where X and Y could be scalars or vectors. Essentially this is a simplification of the Kalman smoothing code from the list of demos.

Here is my code:

I get this error:

I also tried this code:

but that gives this error:

What am I doing wrong?

(I am using Julia 1.1 and ForneyLab 0.9.1.)

Comments
To clarify, the following scalar version works.
What fails is the vector version below.
Are there any examples which use vector Gaussians?
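For reference (my addition, not part of the thread), the scalar "Bayes rule for Gaussians" update being tested here is the standard conjugate calculation:

```latex
% Prior x ~ N(m_0, q), likelihood y | x ~ N(x, r): the posterior is Gaussian,
\[
  p(x \mid y) = \mathcal{N}(x \mid m, v), \qquad
  \frac{1}{v} = \frac{1}{q} + \frac{1}{r}, \qquad
  m = v\left(\frac{m_0}{q} + \frac{y}{r}\right).
\]
```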
Hi @murphyk, the issue arises because […]. The code below works:

```julia
using ForneyLab

g = FactorGraph()

nhidden = 1 # 2
nobs = 1
A = 0.2 # [1 0]
Q = eye(nhidden)
R = 0.1*eye(nobs)
y_data = [1.2]

@RV x ~ GaussianMeanVariance(zeros(nhidden), Q)
@RV obs_noise ~ GaussianMeanVariance(zeros(nobs), R)
@RV y = x + obs_noise
placeholder(y, :y, dims=(1,)) # add clamping

algo = Meta.parse(sumProductAlgorithm(x))
eval(algo) # Load algorithm

data = Dict(:y => y_data)
marginals = step!(data);
```
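As a sanity check (my addition, not in the original thread), the marginal that `step!` returns for `x` can be compared against the closed-form Gaussian posterior for this model. Below is a minimal sketch in plain Julia (no ForneyLab calls), assuming the model above, i.e. `x ~ N(0, Q)` and `y = x + noise` with `noise ~ N(0, R)`:

```julia
# Closed-form posterior for x given y in the assumed model:
#   V_post = (Q^-1 + R^-1)^-1,  m_post = V_post * R^-1 * y   (prior mean is zero)
using LinearAlgebra

Q = Matrix(1.0I, 1, 1)        # prior covariance (matches eye(nhidden) above)
R = 0.1 * Matrix(1.0I, 1, 1)  # observation-noise covariance
y = [1.2]

V_post = inv(inv(Q) + inv(R))    # posterior covariance
m_post = V_post * (inv(R) * y)   # posterior mean
# The marginal returned by step! for x should match this mean and covariance.
```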
Thanks, that works. I have created a PR to add a demo of the above process (together with pretty pictures) in the 2d case, since there are currently no examples of multivariate Gaussian inference.
Unfortunately, I am having problems extending the 1d Kalman smoother example from the demos directory to the vector case. More precisely, this is my code:

And this is the error:
Hi @murphyk, this error occurs because […]. For now, the following code works:

```julia
using Random
using ForneyLab

F = [1.0 0 1.0 0; 0 1.0 0 1.0; 0 0 1.0 0; 0 0 0 1.0]
H = [1.0 0 0 0; 0 1.0 0 0]
nobs, nhidden = size(H)
e = 0.001
Q = [e 0 0 0; 0 e 0 0; 0 0 e 0; 0 0 0 e]
e = 1.0
R = [e 0; 0 e]
e = 1.0
init_mu = [8.0, 10.0, 1.0, 0.0]
init_V = [e 0 0 0; 0 e 0 0; 0 0 e 0; 0 0 0 e]

# Generate data
Random.seed!(1)
T = n_samples = 5
zs = randn(nhidden, T)
ys = Vector{Vector{Float64}}(undef, T)
for i = 1:T
    ys[i] = randn(nobs)
end

g = FactorGraph()

# State prior
@RV x_0 ~ GaussianMeanVariance(init_mu, init_V)

# Transition and observation model
x = Vector{Variable}(undef, n_samples)
y = Vector{Variable}(undef, n_samples)
x_t_prev = x_0
for t = 1:n_samples
    global x_t_prev
    mu_x = F * x_t_prev
    @RV [id=:x_*t] x[t] ~ GaussianMeanVariance(mu_x, Q)
    mu_y = H * x[t]
    @RV y[t] ~ GaussianMeanVariance(mu_y, R)
    placeholder(y[t], :y, dims=(nobs,), index=t)
    x_t_prev = x[t]
end

println("Generating inference code")
algo = Meta.parse(sumProductAlgorithm(x))
println("Compiling")
eval(algo) # Load algorithm

# Prepare data dictionary
data = Dict(:y => ys)

# Execute algorithm
println("Running forwards-backwards")
marginals = step!(data)
```

Additionally, you can use the parameter […]
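As a post-processing illustration (my addition, not from the thread), the smoothed posterior statistics could be read out of the returned `marginals` roughly as sketched below. This assumes the marginals are keyed by the `:x_1`, ..., `:x_5` ids assigned in the loop above, and that the generic `mean`/`cov` accessors apply to the Gaussian marginals ForneyLab returns; both are assumptions, not confirmed by the thread.

```julia
# Hedged sketch: extract smoothed posterior means and covariances per time step.
# Assumes marginal ids follow the :x_*t naming used in the model above, and
# that mean/cov are defined for the returned marginal distributions.
using Statistics

m_smooth = [mean(marginals[Symbol("x_", t)]) for t in 1:n_samples]  # posterior means
V_smooth = [cov(marginals[Symbol("x_", t)]) for t in 1:n_samples]   # posterior covariances
```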