BUG: signal.decimate returns unexpected values with float32 arrays
Describe your issue.
This may be a duplicate of gh-15072, but 1) I don't have a big array (only 50 values), and 2) this method did not return any NaNs, just unexpectedly large (negative or positive) values.
I finally caught the issue when I realized I had switched from scipy==1.4.1 to scipy==1.7.3 (it was working before the upgrade).
I'm attaching two plots made with the same code example, each with a different scipy version. With 1.4.1 the decimate result is correct and independent of the array dtype; with 1.7.3 it is wrong for the float32 array:
Reproducing Code Example
import scipy.signal as ssi
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(1)
n = 50
p = np.random.rand(n)
q = 20
p_d1 = ssi.decimate(np.asarray(p, dtype='float32'), q)
p_d2 = ssi.decimate(np.asarray(p, dtype='float64'), q)
print('max p_d1 (float32): ', max(p_d1)) # scipy==1.4.1: 0.526 / scipy==1.7.3: 5279 <- wrong
print('max p_d2 (float64): ', max(p_d2)) # scipy==1.4.1: 0.526 / scipy==1.7.3: 0.526
# Optionally plot
t = list(range(n))
t_d = [t[i * q] for i in range(len(p_d1))]
plt.plot(t, p, marker='o')
plt.plot(t_d, p_d1, marker='s')
plt.plot(t_d, p_d2, marker='x')
plt.legend(['float32', 'float32 > decimated', 'float64 > decimated'], loc='upper right')
plt.title('scipy==1.4.1') # scipy==1.4.1 / scipy==1.7.3 (latest to date)
plt.show()
Error message
None!!
SciPy/NumPy/Python version information
No issue with scipy==1.4.1; Issue with scipy==1.7.3
I think this is related to #13529 - we need to switch to second-order sections, see stalled #14371.
Here's a not-heavily-tested sos_decimate that tends to create very high-order filters. It handles this test OK, though not brilliantly (I think at least partly due to "boundary-condition" issues combined with the high-order filters; maybe having #11205 would help?). I'm not sure about using padlen=0, but it's needed for this short input and, again, the high-order filters.
Here's an n=50 result (reproducing the above), then an n=500 result, which shows the boundary-condition effects:
Thanks for looking into it. I don't think that decimals stored in float32 should be downsampled to values 10000x larger, so the algorithm seems to be at fault here. Converting the inputs to float64 and converting them back at the end might work, but it may not be a good solution. scipy 1.4.1 appears to handle this correctly, so could we not use the same algorithm?
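For anyone needing a stopgap on affected versions, the float64 round trip mentioned above can be wrapped as shown below. The helper name decimate_f32_safe is illustrative, and this does not address the underlying filter-stability problem.
import numpy as np
import scipy.signal as ssi

def decimate_f32_safe(x, q, **kwargs):
    x = np.asarray(x)
    # Upcast to float64 before filtering, then cast back to the caller's
    # dtype so downstream code still sees float32.
    y = ssi.decimate(x.astype('float64'), q, **kwargs)
    return y.astype(x.dtype, copy=False)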