
Support the following conventions: per channel, per image and per ensemble


  • per_channel augmentations: an augmentation that is applied to each audio channel separately
  • per_image augmentations: augmentations that are applied to multichannel audio as a whole (e.g. swapping the channels, changing the spatial image)
  • per_ensemble (or per_batch?) augmentations: augmentations that are applied to the whole ensemble of sources. For example, imagine you want to change the energy of each image but keep the overall mix energy constant or below a certain value.

Ref. a comment from @faroit on Slack

In my opinion, per channel and per image have the highest priority
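
To make the three conventions concrete, here is a minimal sketch of how a simple gain augmentation could be applied under each of them, assuming a single training example stored as a tensor of shape (num_sources, num_channels, num_samples). The function names and shapes are illustrative only, not an existing API.

import torch

def _rms(x):
    # root-mean-square energy of a signal
    return x.pow(2).mean().sqrt()

def augment_per_channel(sources):
    # independent random gain (0.5 to 2.0) for every channel of every source
    gains = 0.5 + 1.5 * torch.rand(sources.shape[0], sources.shape[1], 1)
    return sources * gains

def augment_per_image(sources):
    # one random gain per source, shared across its channels,
    # so the spatial image of each source is preserved
    gains = 0.5 + 1.5 * torch.rand(sources.shape[0], 1, 1)
    return sources * gains

def augment_per_ensemble(sources):
    # per-source gains, then rescale so the mixture keeps its original RMS energy
    out = augment_per_image(sources)
    return out * _rms(sources.sum(dim=0)) / (_rms(out.sum(dim=0)) + 1e-8)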

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 6 (2 by maintainers)

Top GitHub Comments

1 reaction
faroit commented, Sep 24, 2020

I think by per-ensemble, Fabian still talks about one single example, not a batch. For one example, you might have a set of sources and you'd like to augment them with some kind of constraint (random energy but the same mixture normalization in his example). Not sure if I captured his idea, but that's what I understand.

yes, that's exactly what I meant.

Could you please explain it like I'm five?

@iver56 even though five-year-olds can't code, I'll make a code example 😉

import torch

desired_snr_db = ...  # random SNR for this mix, to be augmented
signal_power = ...    # power of the chosen target source (one of the sources)

scaled_sources = []
for source in sources:
    noise_power = ...  # power of this source
    G = (signal_power / noise_power) * 10 ** (-desired_snr_db / 10)
    # collect the scaled source; reassigning `source` inside the loop
    # would not change the original list
    scaled_sources.append(torch.sqrt(G) * source)

# mix down scaled_sources ...

I also agree that per channel and per image are a priority.

agree.

Fabian still talks about one single example, not a batch

Yes, but I think per_batch should be added to the list as well. My number one request would be random source mixing, where the source axis is shuffled across the batch so that each mix stems from other samples, thus removing the alignment. Does that make sense?
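
A rough sketch of that idea, assuming a batch tensor of shape (batch_size, num_sources, num_samples): each source index gets its own permutation of the batch axis, so every mix is assembled from sources taken from different examples. Names and shapes are illustrative.

import torch

def shuffle_sources_across_batch(batch):
    # batch: (batch_size, num_sources, num_samples)
    batch_size, num_sources, _ = batch.shape
    out = torch.empty_like(batch)
    for s in range(num_sources):
        # an independent permutation per source breaks the alignment between
        # sources that originally belonged to the same example
        perm = torch.randperm(batch_size)
        out[:, s] = batch[perm, s]
    return out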

0 reactions
iver56 commented, Oct 30, 2020

Transforms now support three modes:

  • mode="per_batch"
  • mode="per_example"
  • mode="per_channel"

I think the “per ensemble” scenario also fits within this framework
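
For reference, a minimal usage sketch under the assumption that a transform such as Gain in torch-audiomentations accepts the mode keyword described above; argument names and the return type may differ between library versions, so treat this as illustrative rather than authoritative.

import torch
from torch_audiomentations import Gain

audio = torch.randn(4, 2, 16000)  # (batch_size, num_channels, num_samples)

# One random gain drawn per example, shared by that example's channels.
# mode="per_channel" would instead draw a gain for every channel,
# and mode="per_batch" a single gain for the whole batch.
augment = Gain(min_gain_in_db=-12.0, max_gain_in_db=6.0, p=1.0, mode="per_example")
augmented = augment(audio, sample_rate=16000)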
