
Augmentation with keyword arguments

I was trying to implement an augmentation with keyword arguments that are fetched from a dataframe. I can do this inside my torch dataset class easily but it would be better to implement this in the augmentation pipeline. However, I couldn’t find any examples of what I’m trying to do.

I created a custom augmentation whose apply method takes arg1 and arg2.

from albumentations import ImageOnlyTransform

class Aug(ImageOnlyTransform):

    # do_something_based_on_args stands in for the real per-sample logic;
    # **params catches the extra parameters albumentations passes to apply.
    def apply(self, image, arg1, arg2, **params):
        image = do_something_based_on_args(image, arg1, arg2)
        return image

arg1 and arg2 can be accessed in the dataset’s __getitem__ method, but I don’t know how to pass them to the augmentation pipeline. Is it supposed to be done like this?

transformed = self.transforms(image=image, mask=mask, arg1=arg1, arg2=arg2)

Issue Analytics

  • State: open
  • Created: a year ago
  • Comments: 8

Top GitHub Comments

Dipet commented, Sep 18, 2022 (1 reaction)

I want to pad images to be at least 224 x 224, and then also have dimensions to be divisible by 32

Yes, it looks like we need to add a flag to support this kind of transform. You are right - you can use two PadIfNeeded transforms:

import albumentations as A

t = A.Compose([
    # First pad to at least 256 x 256...
    A.PadIfNeeded(min_width=256, min_height=256),
    # ...then pad again so both sides become divisible by 32.
    A.PadIfNeeded(pad_height_divisor=32, pad_width_divisor=32, min_width=None, min_height=None),
])
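
A minimal usage sketch under the same assumptions (the input shape is just an illustrative example; the padded result comes back under the "image" key):

import numpy as np

image = np.zeros((200, 300, 3), dtype=np.uint8)  # hypothetical 200 x 300 input
padded = t(image=image)["image"]
# The first PadIfNeeded brings the height up to 256, the second pads the
# width from 300 up to 320, so padded.shape is (256, 320, 3).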

Dipet commented, Aug 12, 2022 (1 reaction)
  • targets_as_params - if you want to use some targets (the arguments you pass when calling the augmentation pipeline) to produce augmentation parameters on each call, list all of them here. When the transform is called, they will be provided to get_params_dependent_on_targets. For example, image, mask, bboxes, and keypoints are the standard names for our targets. (See the sketch after this list.)
  • get_params_dependent_on_targets - used to generate parameters based on some targets. If your transform doesn’t depend on any target, only on its own arguments, you can use get_params instead. These functions are called once per pipeline call, which is useful when you are producing random or expensive parameters.
  • get_transform_init_args_names - used for serialization purposes. If the parameter names in __init__ are equal to the parameter names stored inside the transform, you can just enumerate them in this function. Otherwise, if you have custom serialization logic, you will have to override the _to_dict method. We may remove this function in the future, once someone implements automatic parsing of the __init__ call.
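
Putting these pieces together for the original question, here is a minimal sketch, assuming the custom-transform interface described above; ArgAwareAug is a hypothetical name, and do_something_based_on_args is the placeholder from the question:

import albumentations as A

class ArgAwareAug(A.ImageOnlyTransform):

    @property
    def targets_as_params(self):
        # Ask the pipeline to hand these call-time arguments to
        # get_params_dependent_on_targets on every call.
        return ["arg1", "arg2"]

    def get_params_dependent_on_targets(self, params):
        # params holds exactly the targets listed above; whatever is
        # returned here is forwarded to apply() as keyword arguments.
        return {"arg1": params["arg1"], "arg2": params["arg2"]}

    def apply(self, image, arg1=None, arg2=None, **params):
        return do_something_based_on_args(image, arg1, arg2)

    def get_transform_init_args_names(self):
        return ()

transforms = A.Compose([ArgAwareAug(p=1.0)])
# In the dataset's __getitem__, pass the extra values alongside the targets,
# exactly as in the question:
# transformed = transforms(image=image, mask=mask, arg1=arg1, arg2=arg2)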