Augmentation with keyword arguments
I was trying to implement an augmentation with keyword arguments that are fetched from a dataframe. I can do this easily inside my torch dataset class, but it would be better to implement it in the augmentation pipeline. However, I couldn't find any examples of what I'm trying to do.
I created a custom augmentation whose `apply` method takes `arg1` and `arg2`:
class Aug(ImageOnlyTransform):
    def apply(self, image, arg1, arg2):
        image = do_something_based_on_args(image, arg1, arg2)
        return image
`arg1` and `arg2` can be accessed in the dataset's `__getitem__` method, but I don't know how to pass them to the augmentation pipeline. Is it supposed to be done like this?
transformed = self.transforms(image=image, mask=mask, arg1=arg1, arg2=arg2)
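The dataset-side workaround mentioned above could be sketched as follows. This is a minimal illustration, not code from the issue: `do_something_based_on_args`, the column names, and `AugmentedDataset` are placeholders, and a plain class stands in for `torch.utils.data.Dataset`.

```python
import numpy as np
import pandas as pd


def do_something_based_on_args(image, arg1, arg2):
    # Placeholder for the per-sample operation from the issue;
    # here it just scales and shifts intensities.
    return image * arg2 + arg1


class AugmentedDataset:  # in practice this would subclass torch.utils.data.Dataset
    def __init__(self, images, df):
        self.images = images
        self.df = df  # one row of (arg1, arg2) per sample

    def __getitem__(self, idx):
        # Look up the per-sample arguments in the dataframe and
        # apply the augmentation inside __getitem__.
        image = self.images[idx]
        row = self.df.iloc[idx]
        return do_something_based_on_args(image, row["arg1"], row["arg2"])

    def __len__(self):
        return len(self.images)
```

This works, but it couples the augmentation to the dataset class, which is exactly what the question is trying to avoid.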
Issue Analytics
- State:
- Created a year ago
- Comments: 8
Yes, looks like we need to add a flag to support this kind of transform. You are right - you can use `targets_as_params`:

- `targets_as_params` - if you want to use some targets (arguments that you pass when calling the augmentation pipeline) to produce augmentation parameters on each call, you need to list all of them here. When the transform is called, they will be provided in `get_params_dependent_on_targets`. For example, `image`, `mask`, `bboxes`, `keypoints` are the standard names for our targets.
- `get_params_dependent_on_targets` - used to generate parameters based on some targets. If your transform doesn't depend on any target, only on its own arguments, you can use `get_params`. These functions are used to produce params once per call, which is useful when you are producing random or expensive params.
- `get_transform_init_args_names` - used for serialization purposes. If the param names in `__init__` are equal to the param names stored inside the transform, you can just enumerate them in this function. Otherwise, if you have some custom serialization logic, you will have to override the `_to_dict` method. We may remove this function in the future when someone implements automatic parsing of the `__init__` call.