Time-frequency (joint) scattering example/API support
My understanding is that Joint Time-Frequency Scattering (JTFS) is 2D scattering with spin-up and spin-down Morlet wavelet filterbanks, the time and frequency directions decoupled, applied to 1D scattering coefficients. If that's accurate, it seems everything is in place to use it, given a properly configured filter_bank (I think L=2 is needed).
It would help to have this API-supported with guiding docstrings, or at least an example of how to use Scattering1D and Scattering2D to do joint scattering.
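To make the idea above concrete, here is a minimal NumPy sketch of the two stages: a first-order wavelet modulus (a scalogram-like time-frequency image) followed by separable 2-D wavelets over the (log-frequency, time) plane. Everything here is an illustrative assumption, not Kymatio's API: the `morlet` filter is a simple Gaussian bump in frequency rather than Kymatio's exact Morlet design, and "spin" is represented by the sign of the centre frequency along the frequency axis.

```python
import numpy as np

def morlet(n, xi, sigma):
    """1-D Morlet-like filter in the frequency domain (assumption:
    a plain Gaussian bump centred at xi, not Kymatio's exact design)."""
    omega = np.fft.fftfreq(n)
    return np.exp(-((omega - xi) ** 2) / (2 * sigma ** 2))

def scalogram(x, xis, sigma=0.02):
    """First order: modulus of the wavelet transform at each centre
    frequency xi, stacked into a (frequency, time) image."""
    X = np.fft.fft(x)
    return np.stack([np.abs(np.fft.ifft(X * morlet(len(x), xi, sigma)))
                     for xi in xis])

def joint_scattering(U1, t_xis, f_xis, sigma=0.05):
    """Second stage: separable 2-D wavelets over (log-frequency, time).
    Spin up/down corresponds to the sign of f_xi along the frequency
    axis; time and frequency directions are decoupled (outer product)."""
    F2 = np.fft.fft2(U1)
    nf, nt = U1.shape
    out = []
    for f_xi in f_xis:            # frequency direction (spin = sign of f_xi)
        for t_xi in t_xis:        # time direction
            psi = np.outer(morlet(nf, f_xi, sigma), morlet(nt, t_xi, sigma))
            out.append(np.abs(np.fft.ifft2(F2 * psi)))
    return np.stack(out)

x = np.cos(2 * np.pi * np.linspace(0, 40, 2 ** 10))    # toy tone
U1 = scalogram(x, xis=np.linspace(0.02, 0.4, 16))      # (16, 1024)
S = joint_scattering(U1, t_xis=[0.05, 0.2], f_xis=[-0.25, 0.25])
print(S.shape)                                          # (4, 16, 1024)
```

A real JTFS implementation would additionally subsample, low-pass each path for invariance, and use properly normalized analytic wavelets; the sketch only shows the structure of the composition.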
Issue Analytics
- State:
- Created 3 years ago
- Comments: 35
Top Results From Across the Web

Joint Time-Frequency Scattering explanation?
JTFS is an extension of Wavelet Scattering that exploits time-frequency structure, adding sensitivity to frequency-dependent time shifts, ...

Joint Time-Frequency Scattering - arXiv
We introduce the joint time-frequency scattering transform, a time-shift invariant representation which characterizes the multiscale energy distribution of a ...

[PDF] Joint Time–Frequency Scattering | Semantic Scholar
A recognition system based on the joint time–frequency scattering transform (jTFST) for pitch evolution-based playing techniques (PETs), ...

Joint Scattering for Automatic Chick Call Recognition | DeepAI
Joint Time-Frequency Scattering for Audio Classification ... Livestock vocalisations play a crucial part in such systems, for example, assessing laying ...

Practical Introduction to Time-Frequency Analysis - MathWorks
This example shows how to perform and interpret basic time-frequency signal analysis. In practical applications, many signals are nonstationary.
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Yes!! An exciting research perspective indeed (no neural network has ever been trained on JTFS coefficients).
You’re right. I used the word “channels” in reference to what a typical Conv2D layer would take; but in our case, it’s important to separate the JTFS variable dimension (i.e., containing neither time nor frequency but rather the second-order scales and spin) from the audio channel dimension (e.g., left vs. right in stereo music). In full generality, we could have a 5-D output (I forgot in which order though, so don’t quote me on this).
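To illustrate the dimension-separation point above, here is a tiny sketch of what such a 5-D output could look like. The axis ordering below is purely an assumption for illustration (the comment itself says the true order is uncertain), and the sizes are arbitrary.

```python
import numpy as np

# Hypothetical 5-D JTFS output layout (ordering is an assumption):
# (batch, audio_channel, jtfs_variable, frequency, time)
# where jtfs_variable holds the second-order scales and spin --
# a dimension distinct from the audio channels (e.g. stereo L/R).
batch, audio_ch = 2, 2        # e.g. a batch of two stereo signals
n_variable = 3 * 4            # e.g. 3 spins (up/down/none) x 4 scales
n_freq, n_time = 16, 128
S = np.zeros((batch, audio_ch, n_variable, n_freq, n_time))
print(S.ndim, S.shape)        # 5 (2, 2, 12, 16, 128)
```

Keeping the JTFS variable axis separate from the audio channel axis means a downstream Conv2D can treat the former as its input channels while batching over the latter.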
Great to see this is happening, and thanks for making this branch available. I have been wondering why Kymatio does not yet support TF scattering.
I am very interested in using this in my research instead of MATLAB, since it integrates easily with torch. Will post updates on usage in a PyTorch front-end.