
[RFC] How should datasets handle decoding of files?


It is a common feature request (for example #4991) to be able to disable the decoding when loading a dataset. To solve this we added a decoder keyword argument to the load mechanism (torchvision.prototype.datasets.load(..., decoder=...)). It takes an Optional[Callable] with the following signature:

def my_decoder(buffer: BinaryIO) -> torch.Tensor: ...

If it is a callable, it is passed a buffer from the dataset and the result is integrated into the sample dictionary. If the decoder is None, the raw buffer is placed in the sample dictionary instead, leaving the decoding to the user.
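
For illustration, a decoder matching this signature could decode the buffer with Pillow. This is only a sketch; the helper name pil_decoder and the use of PIL/NumPy are assumptions, not part of the proposal:

from typing import BinaryIO

import numpy as np
import torch
from PIL import Image


def pil_decoder(buffer: BinaryIO) -> torch.Tensor:
    # Decode the raw bytes into an HWC uint8 image tensor.
    return torch.as_tensor(np.array(Image.open(buffer)))

Passing decoder=pil_decoder to load would run this on every image buffer, while decoder=None would leave the raw buffers in the samples.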

https://github.com/pytorch/vision/blob/4cacf5a19f68f6b6483c758e3ac95d1dd9b6194c/torchvision/prototype/datasets/_builtin/imagenet.py#L132-L134

This works well for images, but already breaks down for videos as discovered in #4838. The issue is that decoding a video results in more information than a single tensor. The tentative plan in #4838 was to change the signature to

def my_decoder(buffer: BinaryIO) -> Dict[str, Any]: ...

With this, a decoder can now return arbitrary information, which can be integrated in the top level of the sample dictionary.
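
For a video, such a decoder might return the frames alongside the audio and metadata. The sketch below is purely illustrative and assumes spilling the buffer to a temporary file is acceptable, since torchvision.io.read_video operates on paths:

import tempfile
from typing import Any, BinaryIO, Dict

from torchvision.io import read_video


def my_video_decoder(buffer: BinaryIO) -> Dict[str, Any]:
    # read_video expects a path, so write the encoded bytes to a temporary file first.
    with tempfile.NamedTemporaryFile(suffix=".mp4") as file:
        file.write(buffer.read())
        file.flush()
        frames, audio, meta = read_video(file.name)
    return {"frames": frames, "audio": audio, "meta": meta}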

Unfortunately, looking ahead, I don’t think even this architecture will be sufficient. Two issues came to mind:

  1. The current signature assumes that there is only one type of payload to decode in a dataset, i.e. images or videos. Other types, for example annotation files stored as .mat, .xml, or .flo, will always be decoded. Thus, the user cannot completely deactivate the decoding after all. Furthermore, they cannot supply custom decoding for these types if need be.
  2. The current signature assumes that all payloads of a single type can be decoded by the same decoder. A counterexample is the HD1K optical flow dataset, which uses 16-bit .png images as annotations; these have sub-par support in Pillow.

To overcome this, I propose a new architecture similar to the RoutedDecoder datapipe. We should have a Decoder class that holds a sequence of Handlers (name up for discussion):

from typing import Any, BinaryIO, Callable, Dict, Optional


class Decoder:
    def __init__(
        self,
        *handlers: Callable[[str, BinaryIO], Optional[Dict[str, Any]]],
        must_decode: bool = True,
    ):
        self.handlers = handlers
        self.must_decode = must_decode

    def __call__(
        self,
        path: str,
        buffer: BinaryIO,
        *,
        prefix: str = "",
        include_path: bool = True,
    ) -> Dict[str, Any]:
        # Try the handlers in order; the first one that feels responsible
        # returns a dict, all others return None.
        for handler in self.handlers:
            output = handler(path, buffer)
            if output is not None:
                break
        else:
            if self.must_decode:
                raise RuntimeError(
                    f"No handler was responsible for decoding the file {path}."
                )
            # No handler matched, but that is allowed: hand back the raw buffer.
            output = {(f"{prefix}_" if prefix else "") + "buffer": buffer}

        if include_path:
            output[(f"{prefix}_" if prefix else "") + "path"] = path

        return output

If called with a path-buffer pair, the decoder iterates through the registered handlers and returns the first valid output. Thus, each handler can determine based on the path whether it is responsible for decoding the current buffer. By default, the decoder raises an error if no handler decoded the input. This can be relaxed with the must_decode=False flag (name up for discussion), which is a convenient way to get a non-decoder.
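
For example, a handler could route on the file extension and signal "not responsible" by returning None. The name image_handler and the Pillow-based decoding below are assumptions for this sketch:

import pathlib
from typing import Any, BinaryIO, Dict, Optional

import numpy as np
import torch
from PIL import Image


def image_handler(path: str, buffer: BinaryIO) -> Optional[Dict[str, Any]]:
    # Only claim responsibility for common image files; everything else
    # falls through to the next handler.
    if pathlib.Path(path).suffix.lower() not in {".jpg", ".jpeg", ".png"}:
        return None
    return {"image": torch.as_tensor(np.array(Image.open(buffer)))}

A user who needs special treatment for the 16-bit .png annotations of HD1K could register a dedicated handler ahead of this one, since the first handler that returns a non-None result wins.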

We would need to change the datasets.load function to

def load(
    ...,
    decoder: Optional[
        Union[
            Decoder,
            Callable[[str, BinaryIO], Optional[Dict[str, Any]]],
            Sequence[Callable[[str, BinaryIO], Optional[Dict[str, Any]]]],
        ]
    ] = ()
):
    ...
    if decoder is None:
        # True non-decoder: never raises and always returns the raw buffer.
        decoder = Decoder(must_decode=False)
    elif not isinstance(decoder, Decoder):
        # Custom handlers come first and thus take precedence over the
        # dataset-specific and default ones.
        decoder = Decoder(
            *(decoder if isinstance(decoder, collections.abc.Sequence) else [decoder]),
            *dataset.info.handlers,
            *default_handlers,
        )
    ...

By default, the user would get the dataset-specific handlers as well as the default ones. Custom handlers supplied by the user would be processed with a higher priority and thus override the default behavior if need be. If None is passed, we get a true non-decoder. Finally, by passing a Decoder instance the user has full control over the behavior.
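
To make the resulting call sites concrete, here is a hypothetical sketch; the dataset name "hd1k" and the handler my_16bit_png_handler are made up for illustration:

from torchvision.prototype import datasets

# Default: dataset-specific handlers plus the default handlers.
dataset = datasets.load("hd1k")

# A single custom handler is tried first and thus takes precedence.
dataset = datasets.load("hd1k", decoder=my_16bit_png_handler)

# None yields a true non-decoder: samples contain the raw buffers.
dataset = datasets.load("hd1k", decoder=None)

# A Decoder instance gives full control over the handlers and the fallback behavior.
dataset = datasets.load("hd1k", decoder=Decoder(my_16bit_png_handler, must_decode=False))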

Within the dataset definition, the call to the decoder would simply look like

path, buffer = data

sample = dict(...)
sample.update(decoder(path, buffer))

or, if multiple buffers need to be decoded,

image_data, ann_data = data
image_path, image_buffer = image_data
ann_path, ann_buffer = ann_data

sample = dict()
sample.update(decoder(image_path, image_buffer, prefix="image"))
sample.update(decoder(ann_path, ann_buffer, prefix="ann"))

cc @pmeier @bjuncek


Top GitHub Comments

ejguan commented, Dec 15, 2021

Let us discuss it at the team meeting then. I think we can release it to you. Letting the domains handle the corresponding decoders makes more sense to me.

pmeier commented, Dec 22, 2021

After some more discussion, we realized there is another requirement: even without decoding, the sample dictionary should be serializable. This eliminates the possibility of using custom file wrappers as originally thought up in https://github.com/pytorch/vision/issues/5075#issuecomment-994756308.

Our current idea is to always read each file and store the encoded bytes in a uint8 tensor. This has two advantages:

  1. Whatever method we later use to serialize needs to handle tensors anyway, so we don’t need to worry about the encoded files.
  2. If, in the future, we have a scriptable decoding transform, we could get end-to-end scriptability.

The only downside we are currently seeing is that we lose the ability to not load the data at all. Given that the time to read the bytes is usually dwarfed by the decoding time, we feel this is a good compromise.
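
A minimal sketch of that idea, reading the still-encoded bytes eagerly and wrapping them in a uint8 tensor; the helper name read_encoded is an assumption, and the actual proof of concept lives in #5105:

import torch


def read_encoded(path: str) -> torch.Tensor:
    # Read the raw, still-encoded bytes; a bytearray keeps the buffer writable
    # so torch.frombuffer does not warn about read-only memory.
    with open(path, "rb") as file:
        return torch.frombuffer(bytearray(file.read()), dtype=torch.uint8)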

You can find a proof-of-concept implementation in #5105.
