Replies: 9 comments
-
@amyszka Please give a complete example, and always print out and read the fit report. If you are using the built-in …
-
Hi @newville. I've included an initial value for … Example: https://pastebin.com/hT9wRv4d Thank you.
-
@amyszka well, you are saying that the uncertainties are pretty high by setting the weight to your fake noise. So, that looks reasonable to me. Perhaps you intended to add some noise to the data to be fitted? It is really hard to over-emphasize the importance of data visualization and exploratory data analysis.
-
@newville Yes, I see my confusion – there needs to be noise directly present in the data being fitted. In the case where there is a given array of … Thank you again for your time.
-
I continue to be alarmed (and somewhat suspicious) about the confusion about the use of weights.
Sorry, I don't understand this. What is "not related together by that simple addition"?
Well, with "real world" data, the data would generally have some noise. And then one way (and perhaps "the normal way") to use weights would be 1/dy. In your example, …
-
@newville Sorry if I was unclear. My main focus is fitting a Gaussian to some scientific data (which should approximate a Gaussian but is noisy to begin with) as … Thank you for your time and patience; I'll see how this goes with the fitting!
-
Hello, my supervisors and I have come up with a solution for our particular examples, in case anyone else comes across this issue. We found that applying the 1/dy weighting rather than 1/dy**2 (which is usual in least-squares methods) did indeed return better results, with smaller final uncertainty values when compared, which are very similar to the values obtained when manually integrating across the error array. We also found that specifying … Thank you.
-
Um, that "rather than 1/dy**2 (which is usual in least-squares methods)" is completely wrong. Really, not slightly wrong, not "a simple misunderstanding", not a difference in terminology. It is entirely wrong. Somehow many people here over the past several months made up some very bad advice that one should weight the residual with (1/data_uncertainty)**2 and then insisted that this point is somehow open to interpretation, confusing, or potentially confusing. The Lmfit authors never say, endorse, or give any credence to the utterly incorrect and frankly stupid and dangerous idea that one should, under any circumstances, weight the residual by "(1/data_uncertainty)**2". Really, I do not know where this idea comes from. It is profoundly wrong and does not hold up to dimensional analysis. It's so wrong that it is a useful test for competence. If anyone tells you that you should weight by "(1/data_uncertainty)**2", you can be certain that they are completely unsuited for doing numerical analysis and you should never trust another word they say.
-
This is a rather harsh reply, considering. Perhaps I can help clarify what I think is just mismatched terminology. Normally in model fitting one minimises a chi-square statistic, chi2 = sum(((data_i - model_i) / sigma_i)**2), where sigma_i is the uncertainty on each data point. Regarding lmfit, it does say "that weights*(data - fit) is minimized in the least-squares sense" (about 3/4 of the way down https://lmfit.github.io/lmfit-py/model.html), which does indeed imply that it does its own squaring, but it takes some digging to find. It's a complex package with complex documentation. I expect this is tripping up others, and perhaps including an explicit … would help.
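The terminology point can be checked numerically: if the fitter squares weights*(data - fit) internally, then passing weights = 1/sigma reproduces the standard chi-square, while weights = 1/sigma**2 does not. A small numpy-only sketch (all numbers are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(10.0, 0.5, 100)     # measurements with uncertainty sigma = 0.5
fit = np.full_like(data, 10.0)        # a model prediction
sigma = np.full_like(data, 0.5)       # per-point uncertainties

# A least-squares fitter like lmfit minimises sum((weights*(data - fit))**2):
# the squaring happens inside the fitter, not in the weights.
def sum_sq(weights):
    return np.sum((weights * (data - fit)) ** 2)

# The standard chi-square statistic.
chi2 = np.sum(((data - fit) / sigma) ** 2)

# weights = 1/sigma reproduces chi-square exactly ...
assert np.isclose(sum_sq(1.0 / sigma), chi2)
# ... while weights = 1/sigma**2 squares the uncertainty twice and does not.
assert not np.isclose(sum_sq(1.0 / sigma ** 2), chi2)
```

This is why the correct weight for an uncertainty array dy is 1/dy, not (1/dy)**2, when the residual itself is squared by the minimiser.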
-
Hello. I am having trouble understanding how the uncertainties of the fitted parameters from a Gaussian model fit are determined. I am getting unrealistic values for the parameter uncertainties (such as Δheight, Δsigma, etc.) when using
lmfit.models.GaussianModel()
on a dataset with y error bars. As an example, I've generated a pure Gaussian with a range of x-values [0, 100] and y-values as the Gaussian of height 5, center 50 and sigma 1.5. I've also generated some normally-distributed error values for each y-value simply as
dy = np.abs(np.random.normal(0, 1, len(y)))
. I've brought this through LMFIT's process using:
gmodel = lmfit.models.GaussianModel()
params = gmodel.make_params(height=5, center=50, sigma=1.5)
result = gmodel.fit(y, params, x=x, weights=1.0/dy)
which then reports parameter errors of basically zero, which should not be expected considering the inclusion of the Δy array as weights.
center: 50.0000000 +/- 1.7798e-14 (0.00%)
sigma: 1.50000000 +/- 1.4858e-14 (0.00%)
height: 5.00000025 +/- 3.7783e-14 (0.00%)
This also occurs when I increase the amount of uncertainty in y (such as using
dy = np.abs(np.random.normal(0, 1000, len(y)))
). Is this the correct way to incorporate the errors of y-values into LMFIT's fitting process? Here's a plot of the modeled Gaussian.
Thank you.