Aliasing everywhere
`x_complex[::stride]` can be lossless while `abs(x_complex)[::stride]` isn't; modulus expands the spectrum. Stride and modulus do not commute, hence we lose information.
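This is easy to demonstrate standalone (no kymatio): below is a minimal sketch where a complex signal bandlimited to the first `M` bins subsamples losslessly by `stride = N/M`, while its modulus does not, since `abs` spreads energy past bin `M`. The signal construction and thresholds are my own for illustration.

```python
import numpy as np
from scipy.fft import fft, ifft

N, stride = 256, 4
M = N // stride
rng = np.random.default_rng(0)

# complex signal whose spectrum lives only in bins [1, M):
# subsampling by `stride` then folds nothing on top of it
Xf = np.zeros(N, dtype=complex)
Xf[1:M] = rng.standard_normal(M - 1) + 1j * rng.standard_normal(M - 1)
x = ifft(Xf)

# lossless: invert the subsampling by zero-padding the short spectrum
xs = x[::stride]
X_rec = np.zeros(N, dtype=complex)
X_rec[:M] = fft(xs) * stride
x_rec = ifft(X_rec)
print(np.allclose(x_rec, x))  # True: nothing was lost

# modulus expands the spectrum: |x| has energy beyond bin M, so
# abs(x)[::stride] aliases and cannot be inverted the same way
Af = np.abs(fft(np.abs(x)))
frac_high = Af[M:N - M + 1].sum() / Af.sum()
print(frac_high > 0)  # True: |x| is no longer bandlimited
```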
This yielded a max 25% relative absolute difference on a simple example I tried, at second order:

```python
sc.oversampling = 0
o0 = sc(x)
sc.oversampling = 999
o1 = sc(x)[:, ::sc.T]
adiff = np.abs(o0 - o1) / (np.abs(o0) + 1e-7)
```
However, I noticed: 1) the difference at first order was <0.01%, as expected; 2) second-order coeffs matched to within a scalar multiple of 1.27.
This is tricky to interpret. Lossless subsampling preserves all information, yet a nonlinearity (abs) on this "identical" data produces a different result. From the point of view of abs, we do lose information - obvious if we take abs first and then subsample. Yet we can fully invert the complex subsampling and then take abs, producing the exact same result.
However, consider a nonlinearity that instead maps the first FFT bin to Nyquist, for any input length - e.g. `(-1)**floor(x / factor)`. Then, no matter how much we FFT-upsample, there's always content at Nyquist. Hence I think the correct answer is: the perspective that matters is that of the intended operators. If our transform relied on the original 1 Hz becoming Nyquist, this will be impossible post-subsampling. Similarly, scattering relies on modulus, and modulus isn't the same post-subsampling: the continuous-time math is distorted.
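The "first bin to Nyquist" behavior can be sketched with an even simpler stand-in: modulation by `(-1)**n`, which shifts bin 1 to Nyquist at whatever sampling rate it is handed. Note this index-based operator is my own simplification, not the amplitude-based `(-1)**floor(x / factor)` above; it only illustrates the point that upsampling beforehand cannot push the output's content away from the current Nyquist.

```python
import numpy as np
from scipy.fft import fft, ifft

# hypothetical stand-in operator: modulate by (-1)**n, shifting
# low-frequency content to Nyquist at the *current* sampling rate
def to_nyquist(x):
    return x * (-1.0) ** np.arange(len(x))

N = 64
x = np.cos(2 * np.pi * np.arange(N) / N)  # pure bin-1 content

# FFT-upsample x by 4x before applying the operator
up = 4
Xf = fft(x)
Xu = np.zeros(up * N, dtype=complex)
Xu[1], Xu[-1] = Xf[1] * up, Xf[-1] * up
xu = ifft(Xu).real                        # same cosine, 4x the samples

# both outputs concentrate energy right next to their *own* Nyquist bin,
# so any subsequent subsampling aliases it regardless of prior upsampling
k = np.argmax(np.abs(fft(to_nyquist(x))[:N // 2 + 1]))
ku = np.argmax(np.abs(fft(to_nyquist(xu))[:up * N // 2 + 1]))
print(k, ku)  # 31, 127: one bin below each signal's own Nyquist
```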
Thus, `allclose(o0, o1)` must pass - namely, `o0` must agree with `o1`. This suggests a potential need for a normalizing factor dependent upon the k's. Outstanding:
- Is there exact math for this?
- Is the difference always a constant scaling?
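For the second question, one empirical probe is whether the elementwise ratio of the two outputs collapses to a single scalar. A minimal sketch; the `constant_scale_check` helper and its tolerance are hypothetical, not part of kymatio:

```python
import numpy as np

def constant_scale_check(a, b, tol=1e-3):
    """Return (is_constant, c): whether b ~= c * a for one scalar c."""
    r = b / (a + 1e-12)          # elementwise ratio (a assumed nonnegative)
    c = np.median(r)
    spread = np.abs(r - c).max() / abs(c)
    return bool(spread < tol), c

# e.g., applied to the second-order coefficients from the snippet below:
#   ok, c = constant_scale_check(o1_2.ravel(), o0_2.ravel())
# quick self-check with synthetic data:
a = np.random.default_rng(0).random(100) + 0.5
ok, c = constant_scale_check(a, 1.27 * a)
print(ok, round(c, 2))  # True 1.27
```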
Code:

```python
import numpy as np
from scipy.fft import fft, ifft
from kymatio.numpy import Scattering1D
from kymatio.visuals import plot

sc = Scattering1D(8, 2048, Q=8, T=256, max_pad_factor=None)
x = np.random.randn(sc.N)

#%%
def adiff(x0, x1):
    return np.abs(x0 - x1) / (np.abs(x0) + 1e-7)

def padiff(x0, x1):
    d = adiff(x0, x1)
    print(d.min(), d.max(), d.mean(), [n[0] for n in np.where(d == d.max())])

#%%
sc.oversampling = 0
o0 = sc(x)                    # standard subsampling
sc.oversampling = 999
o1 = sc(x)[:, ::sc.T]         # no subsampling until the very end

ord1_idxs, ord2_idxs = (sc.meta()['order'] == 1), (sc.meta()['order'] == 2)
o0_1, o1_1 = o0[ord1_idxs], o1[ord1_idxs]
o0_2, o1_2 = o0[ord2_idxs], o1[ord2_idxs]
padiff(o0_1, o1_1)
padiff(o0_2, o1_2)

#%% compute for one specific coefficient
n1 = 42
n2 = -1
pf1 = sc.psi1_f[n1][0]
j1 = sc.psi1_f[n1]['j']
j2 = sc.psi2_f[n2]['j']
pf20 = sc.psi2_f[n2][j1]      # second-order filter, pre-subsampled by 2**j1
pf21 = sc.psi2_f[n2][0]       # second-order filter at full length

np.random.seed(0)
xp = np.pad(x, (len(pf1) - len(x))//2, mode='reflect')
xf = fft(xp)

# o0: subsample after every convolution (oversampling=0)
o0, o1 = {}, {}
o0['0'] = xf * pf1
o0['1'] = o0['0'].reshape(2**j1, -1).mean(axis=0)   # freq-domain subsampling
o0['2'] = ifft(o0['1'])
o0['3'] = np.abs(o0['2'])
o0['4'] = fft(o0['3'])
o0['5'] = o0['4'] * pf20
o0['6'] = o0['5'].reshape(2**int(j2 - j1), -1).mean(axis=0)
o0['7'] = ifft(o0['6'])
o0['8'] = np.abs(o0['7'])
o0['9'] = fft(o0['8'])
o0['10'] = o0['9'] * sc.phi_f[j2]
o0['11'] = o0['10'].reshape(sc.T//2**j2, -1).mean(axis=0)
o0['12'] = ifft(o0['11']).real

# o1: don't subsample until after the lowpass (oversampling=999)
o1['0'] = xf * pf1
o1['2'] = ifft(o1['0'])
o1['3'] = np.abs(o1['2'])
o1['4'] = fft(o1['3'])
o1['5'] = o1['4'] * pf21
o1['7'] = ifft(o1['5'])
o1['8'] = np.abs(o1['7'])
o1['9'] = fft(o1['8'])
o1['10'] = o1['9'] * sc.phi_f[0]
o1['11'] = o1['10'].reshape(sc.T, -1).mean(axis=0)
o1['12'] = ifft(o1['11']).real

O0, O1 = o0['12'], o1['12']
s, e = sc.ind_start[sc.log2_T], sc.ind_end[sc.log2_T]
O0u, O1u = O0[s:e], O1[s:e]   # unpadded segment
padiff(O0, O1)
padiff(O0u, O1u)
print()

#%% plot each intermediate stage side by side
for i in (0, 2, 3, 4, 5, 7, 8, 9, 10, 11, 12):
    if i in (3, 8):
        print()
    i = str(i)
    c0, c1 = o0[i], o1[i]
    c0 = c0[1:len(c0)//2]
    c1 = c1[1:len(c1)//2]
    # c0 = c0 / np.abs(c0).max()
    # c1 = c1 / np.abs(c1).max()
    mx = max(np.abs(c0).max(), np.abs(c1).max()) * 1.05
    pkw = dict(abs=1, show=1, ylims=(0, mx))
    plot(c0, title=f"o0-{i}", **pkw)
    plot(c1, title=f"o1-{i}", **pkw)
```
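A note on the `reshape(k, -1).mean(axis=0)` pattern used throughout: folding the spectrum into `k` segments and averaging is exactly frequency-domain subsampling, i.e. it equals the FFT of the time-domain signal strided by `k`. A minimal check:

```python
import numpy as np
from scipy.fft import fft

# folding the spectrum into k aliases == FFT of the k-strided signal
N, k = 32, 4
x = np.random.default_rng(0).standard_normal(N)
lhs = fft(x[::k])
rhs = fft(x).reshape(k, -1).mean(axis=0)
print(np.allclose(lhs, rhs))  # True
```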
Excellent troubleshooting work!!
Also, >1% at first order is possible but much rarer - I guess since aliasing of the already-aliased compounds.