T-Matrix trimer test fit fails on Windows x64
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
HoloPy | Status tracked in Dev | | |
Dev | Fix Released | Medium | Unassigned |
Bug Description
TestFit.
The fit values seem to be close to the "gold" values. The only major difference is the fit status (I get 3, while gold has 1). Could this be causing the assertion to fail?
Output of "nosetests holopy\
```
AssertionError:
Arrays are not almost equal
Fit results from the trimer are not approx. equal to the standard fit results.
(mismatch 5.26315789474%)
 x: array([ 1.58940596e+00,  1.59800000e+00,  1.59900000e+00,
 y: array([ 1.58940000e+00,  1.59800000e+00,  1.59900000e+00,
```
Output file fit_result.tsv:
```
# Fit results from: c:\Users\
Image  n_particle_real_1  n_particle_real_2  n_particle_real_3  n_particle_imag_1  n_particle_imag_2  n_particle_imag_3  radius_1  radius_2  radius_3  x_com  y_com  z_com  scaling_alpha  euler_alpha  euler_beta  euler_gamma  fnorm  status  timestamp
c:\Users\
```
Yes. The fit status (1 vs. 3) is what caused that test to fail. For a trimer, the fit returns 19 parameters, and we check each of them against results obtained previously (the gold standard).
What matters about the fit status is whether it is less than 5. A status of 5 means the fitting algorithm stopped because it reached the maximum allowed number of iterations, so the fit did not necessarily converge to a good solution. A status less than 5 means the fitter had a different (and probably better) reason to stop iterating.
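These status codes behave like MINPACK-style `info` values, which SciPy exposes through `scipy.optimize.leastsq` as `ier`: 1-4 indicate a convergence criterion was met, while 5 means the iteration cap (`maxfev`) was hit first. A minimal sketch on a toy problem (this is not HoloPy's actual fitting code, just an illustration of the status-code convention):

```python
import numpy as np
from scipy.optimize import leastsq

# Toy residual: fit a, b in y = a*x + b to slightly noisy data.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + 0.01 * rng.standard_normal(x.size)

def residuals(params):
    a, b = params
    return y - (params[0] * x + params[1])

# ier is the MINPACK-style info code: 1-4 mean the fitter stopped because
# a convergence criterion was satisfied; 5 means maxfev was exhausted.
popt, cov, infodict, mesg, ier = leastsq(residuals, [0.0, 0.0], full_output=True)

converged = ier < 5  # the "<5 vs. =5" distinction described above
print(ier, converged)
```

Which of the codes 1-4 is returned can depend on floating-point details, which is consistent with different machines reporting 1 vs. 3 for the same fit.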
Does it make sense that different machines would come up with different codes 1-4?
If so, we could relax the test so that, instead of requiring an exact match, it only checks that the fit status parameter is less than 5.
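A relaxed gold-standard comparison could look like the sketch below. The function name, the status index, and the 19-element layout are hypothetical stand-ins for HoloPy's actual test fixtures; the point is only the structure: compare all physical parameters almost-exactly, but accept any status code below 5.

```python
import numpy as np
from numpy.testing import assert_array_almost_equal

STATUS_IDX = 17  # hypothetical position of the status code in the 19 fit parameters

def check_against_gold(fit, gold, decimal=3):
    fit = np.asarray(fit, dtype=float)
    gold = np.asarray(gold, dtype=float)
    # Compare every parameter except the status code to the gold standard.
    mask = np.arange(fit.size) != STATUS_IDX
    assert_array_almost_equal(fit[mask], gold[mask], decimal=decimal)
    # For the status code, only require termination before the iteration cap.
    assert fit[STATUS_IDX] < 5, "fit hit the maximum number of iterations"

# Same physics but different acceptable status codes (1 vs. 3) now pass.
gold = np.zeros(19); gold[STATUS_IDX] = 1
fit = np.zeros(19); fit[STATUS_IDX] = 3
check_against_gold(fit, gold)
print("ok")
```

This keeps the regression test strict on the physical parameters while tolerating machine-dependent convergence codes.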