
Feature Suggestion: Allow decompression functions to be asynchronous

See original GitHub issue

A few of the rosbags I’m working with are lz4-compressed, and I’m noticing that my frame time is often dominated by the decompression function when reading the file, leading to hiccups in application responsiveness.

It would be great to be able to run the decompression asynchronously on a web worker – with ArrayBuffers and SharedArrayBuffers it should be possible to move the decompression cost off the main thread fairly easily.

Is it as easy as awaiting a promise when calling the decompression function here? The other question is whether it would be safe to temporarily transfer ownership of the ArrayBuffer to a web worker while decompression happens. Or is it expected that multiple chunks share an ArrayBuffer? It looks like the answer might be no in the browser, but it’s unclear when running in Node. If they are separate, it would also be possible to decompress multiple chunks in parallel.
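If the decompress callback were allowed to return a Promise, the reader could simply await it and support both styles at once. A minimal sketch of that idea – all names here (readChunk, decompressSync, decompressAsync) are illustrative, not the library’s actual API:

```javascript
// Stand-in for a synchronous lz4 binding: here it just copies the input.
function decompressSync(buffer) {
  return Uint8Array.from(buffer);
}

// An async decompressor that could hand the buffer to a web worker
// (transferring the underlying ArrayBuffer via postMessage); here it
// only defers to a microtask to illustrate the shape of the API.
function decompressAsync(buffer) {
  return Promise.resolve().then(() => decompressSync(buffer));
}

// The reader-side change: `await` accepts both a plain value and a
// Promise, so sync and async decompressors go through one code path.
async function readChunk(compressed, decompress) {
  return await decompress(compressed);
}
```

Because `await` on a non-Promise value resolves immediately, existing synchronous decompress callbacks would keep working unchanged.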

Issue Analytics

  • State: open
  • Created: 4 years ago
  • Comments: 17 (16 by maintainers)

Top GitHub Comments

janpaul123 commented, Aug 9, 2019

Should we detect this using the bag index and show a warning or error?

jtbandes commented, Jul 29, 2019

The thing to look for would be bag files where the chunks overlap in time. This would likely only happen if you created the bag file “manually” with a script that adds messages to the bag with out-of-order timestamps (and even then, the chunk size would have to be small enough that earlier messages written later actually land in a new chunk). rosbag record writes messages with the timestamps they’re received at, which should be monotonic, so it wouldn’t experience this problem.
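Detecting this from the bag index could look roughly like the following. A minimal sketch, assuming each chunk-info record carries startTime and endTime as comparable numbers (real rosbag times are {sec, nsec} pairs, so a real check would compare those instead):

```javascript
// Returns true if any two chunks overlap in time. Sort by start time,
// then check each chunk's start against the latest end seen so far.
function chunksOverlap(chunkInfos) {
  const sorted = [...chunkInfos].sort((a, b) => a.startTime - b.startTime);
  let maxEnd = -Infinity;
  for (const info of sorted) {
    if (info.startTime < maxEnd) {
      return true; // this chunk begins before an earlier one ends
    }
    maxEnd = Math.max(maxEnd, info.endTime);
  }
  return false;
}
```

Tracking the running maximum end time (rather than only the previous chunk’s end) catches the case where a long chunk spans several later ones.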

The issue amounts to the fact that during readMessages we just iterate over matching chunks in order. As I pointed out above, the C++ bag reader handles this by keeping a separate iterator for each chunk and sorting them after each message is emitted.
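The C++ approach described above is a k-way merge. A minimal sketch, with chunks modeled as plain arrays of { time, data } objects (the real reader would wrap chunk iterators instead):

```javascript
// Keep one cursor per chunk and always emit the message with the
// smallest timestamp, so output stays globally time-ordered even
// when chunks overlap.
function* mergeChunks(chunks) {
  const cursors = chunks.map((msgs) => ({ msgs, i: 0 }));
  for (;;) {
    let best = -1;
    for (let c = 0; c < cursors.length; c++) {
      const cur = cursors[c];
      if (cur.i >= cur.msgs.length) continue; // chunk exhausted
      if (best === -1 ||
          cur.msgs[cur.i].time < cursors[best].msgs[cursors[best].i].time) {
        best = c;
      }
    }
    if (best === -1) return; // all chunks exhausted
    yield cursors[best].msgs[cursors[best].i++];
  }
}
```

A linear scan over cursors is fine for a handful of chunks; a heap would make each step O(log k) if a bag had many overlapping chunks.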


Top Results From Across the Web

  • How can I asynchronously decompress a gzip file in a ...
    I've authored a library fflate to accomplish exactly this task. It offers asynchronous versions of every compression/decompression method it ...
  • Enabling payload compression for an API - AWS Documentation
    In API Gateway, learn how to enable GZIP compression of a response payload ... By default, API Gateway supports decompression of the method...
  • zstd 1.5.1 Manual
    The zstd compression library provides in-memory compression and decompression functions. The library supports regular compression levels from 1 up to ...
  • Compression & decompression - Fortinet Documentation Library
    Configuring temporary decompression for scanning & rewriting. Similar to SSL/TLS inspection, in order for some features to function, you must configure the ...
  • async function - JavaScript - MDN Web Docs - Mozilla
    The async and await keywords enable asynchronous, promise-based behavior to be written in a cleaner style, avoiding the need to explicitly ...
