
Extra Cutout Operation in Preprocessing

See original GitHub issue

This extra Cutout operation is applied (deterministically, to every image) in the repository, but it is not mentioned in the updated paper that uses RandAugment. Is the performance reported in the paper consistent with this implementation, or with the procedure specified in the paper?

https://github.com/google-research/uda/blob/6aabffa896458fb7806eba9d94fbfeca77f3b72e/image/preprocess.py#L274

More context:

for image in ori_images:
    # Sample one augmentation policy uniformly at random and apply it.
    chosen_policy = aug_policies[np.random.choice(len(aug_policies))]
    aug_image = augmentation_transforms.apply_policy(chosen_policy, image)
    # Extra Cutout applied after the policy -- the operation in question.
    aug_image = augmentation_transforms.cutout_numpy(aug_image)
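For reference, Cutout zeroes out a random square patch of the image. A minimal numpy sketch of the idea (not the repository's exact `cutout_numpy`; the patch size and zero fill value here are illustrative assumptions):

```python
import numpy as np

def cutout_numpy_sketch(img, size=16):
    """Zero out a random size x size patch of an HWC image.

    Simplified sketch of Cutout; the repository's implementation may
    differ in default patch size, clipping, and fill value.
    """
    h, w = img.shape[:2]
    # Pick a random center for the patch.
    cy, cx = np.random.randint(h), np.random.randint(w)
    # Clip the patch so it stays inside the image borders.
    y1, y2 = max(0, cy - size // 2), min(h, cy + size // 2)
    x1, x2 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = img.copy()
    out[y1:y2, x1:x2] = 0  # mask the patch with zeros
    return out
```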

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 6

Top GitHub Comments

1 reaction
qizhex commented, Oct 22, 2019

I think we might have a different understanding of “The default augmentations for all methods include flips, pad-and-crop and Cutout”. Do you mean that Cutout, flip & pad-and-crop are in the search space of RandAugment? It actually means that they apply Cutout, flip, pad-and-crop in addition to two random operations.

This is similar to what has been done in AutoAugment:

epoch_policy = self.good_policies[np.random.choice(
    len(self.good_policies))]
final_img = augmentation_transforms.apply_policy(
    epoch_policy, data)
final_img = augmentation_transforms.random_flip(
    augmentation_transforms.zero_pad_and_crop(final_img, 4))
# Apply cutout
final_img = augmentation_transforms.cutout_numpy(final_img)

So the AutoAugment order is: two operations -> pad-and-crop -> flip -> cutout.

0 reactions
qizhex commented, Oct 24, 2019

Yes. The order in this codebase should be: two random ops -> cutout -> flip -> crop. The flip and crop are applied during training in data.py. We changed the order because the random ops and Cutout are implemented with numpy and PIL rather than built into the TF computational graph. Flip and crop are applied during training so that the model sees more diverse images.
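The flip and pad-and-crop step deferred to training time can be sketched as follows. This is an illustrative numpy version, not the repository's data.py (which runs the equivalent ops inside the TF input pipeline); the pad width of 4 is an assumption borrowed from the AutoAugment snippet above:

```python
import numpy as np

def random_flip_and_pad_crop(img, pad=4):
    """Randomly flip left-right, then zero-pad and randomly crop back.

    Illustrative numpy sketch; the repository applies the equivalent
    operations in the TF input pipeline during training.
    """
    h, w = img.shape[:2]
    if np.random.rand() < 0.5:
        img = img[:, ::-1]  # horizontal flip with probability 0.5
    # Zero-pad the borders, then crop a random h x w window, so each
    # epoch sees a slightly shifted version of the image.
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)))
    y = np.random.randint(2 * pad + 1)
    x = np.random.randint(2 * pad + 1)
    return padded[y:y + h, x:x + w]
```

Because these two ops are rerun on every pass through the data, the model sees a different shifted/flipped crop each epoch, even though the numpy/PIL augmentations were computed once up front.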
