Convolutional reverb
**Is your feature request related to a problem? Please describe.**
It'd be nice to add the ability to convolve signals with impulse responses, possibly multichannel impulse responses.
**Describe the solution you'd like**
I think we could implement it with the following API:

```python
import nussl

audio = nussl.AudioSignal('/path/to/audio.wav')

# load impulse from file
impulse = nussl.AudioSignal('/path/to/impulse.wav')

# or load from an array obtained by some means
impulse_array = generate_random_ir()
impulse = nussl.AudioSignal(audio_data_array=impulse_array, sample_rate=audio.sample_rate)

# non-destructive (default), returns a new signal
convolved = audio.convolve(impulse)  # -> AudioSignal

# destructive
audio.convolve(impulse, overwrite=True)  # -> in place
```
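The `generate_random_ir()` helper above is left undefined; as a hedged sketch, a random impulse response for data augmentation is often faked as exponentially decaying white noise. The function name, parameters, and decay shape here are assumptions, not nussl API:

```python
import numpy as np

def generate_random_ir(length=8000, decay=4.0, seed=None):
    """Hypothetical helper: white noise shaped by an exponential decay,
    a common stand-in for a room impulse response in data augmentation."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, length)
    ir = rng.standard_normal(length) * np.exp(-decay * t)
    return ir / np.abs(ir).max()  # normalize so the IR peaks at +/-1
```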
Three cases:

- If `impulse` is single-channel, then it is applied to all channels in `audio`.
- If `impulse` is multi-channel and `audio` is single-channel, then `audio` is broadcast to `impulse`. The output of `convolve` will be multi-channel in this case, even though `audio` is single-channel; it will have the same number of channels as `impulse`.
- If both `impulse` and `audio` are multi-channel, then `audio` and `impulse` must have the same number of channels. Since `impulse` is an AudioSignal object, one could call `to_mono()` if there is a channel mismatch, then apply it:

```python
# fix for channel mismatch
convolved = audio.convolve(impulse.to_mono())
```

or use `make_audio_signal_from_channel`:

```python
convolved = audio.convolve(impulse.make_audio_signal_from_channel(0))
```
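The three channel cases above can be sketched on raw `(n_channels, n_samples)` arrays, independent of nussl. This is a minimal sketch of the proposed semantics, not an implementation; the function name is hypothetical:

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve_channels(audio, impulse):
    """Sketch of the three channel cases for convolving `audio` with
    `impulse`, both shaped (n_channels, n_samples)."""
    if impulse.shape[0] == 1:
        # Case 1: single-channel impulse applied to every audio channel.
        impulse = np.broadcast_to(impulse, (audio.shape[0], impulse.shape[1]))
    elif audio.shape[0] == 1:
        # Case 2: mono audio broadcast up to the impulse's channel count.
        audio = np.broadcast_to(audio, (impulse.shape[0], audio.shape[1]))
    elif audio.shape[0] != impulse.shape[0]:
        # Case 3: both multi-channel -> channel counts must match.
        raise ValueError("audio and impulse must have the same number of channels")
    # 'full' convolution along time; each channel grows to n + m - 1 samples.
    return fftconvolve(audio, impulse, mode="full", axes=-1)
```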
Treating impulse responses as just AudioSignal objects brings a lot of nice benefits. Users could also convolve two signals even if neither is an impulse response, which could sound…interesting.
**Describe alternatives you've considered**
Implementing this outside of nussl is simple: just edit the `audio_data` arrays directly. But this could be a nice built-in feature, especially for data augmentation.
Issue Analytics

- Created 3 years ago
- Comments: 8
For the actual implementation, we can just use `scipy.signal.convolve`, I think? We could do a GPU implementation via torch as well, but I don't know if that's worth it. I think I'd like it to live in `mixing.py` with a hook in `audio_signal.py`.