
Write multiple parquet files from a single dataframe defined by max file size

See original GitHub issue

The Problem: I was unable to find a way to write a single dataframe to multiple parquet files using the s3.to_parquet() method. Currently it writes a single parquet file, which can slow down Athena queries.

Possible Solution: It would be great to have a "max parquet file size" option in s3.to_parquet(). Instead of producing one large parquet file, we could produce many smaller parquet files, which would help optimize Athena queries.

My Reasoning: I believe Athena, behind the scenes, uses the number of files to split the query workload across nodes, so a single large file limits that parallelism.
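
Until an option like that exists, one workaround is to chunk the dataframe yourself and write each chunk as its own object. Below is a minimal sketch of that idea, assuming awswrangler is installed; the bucket, prefix, and chunk size are hypothetical and only for illustration.

```python
import math

import awswrangler as wr
import pandas as pd

# Hypothetical example frame and target prefix (replace with your own).
df = pd.DataFrame({"id": range(1_000_000), "value": range(1_000_000)})
prefix = "s3://my-bucket/my-dataset/"  # hypothetical bucket/prefix

rows_per_file = 250_000  # tune so each resulting object lands near your target size
n_chunks = math.ceil(len(df) / rows_per_file)

for i in range(n_chunks):
    chunk = df.iloc[i * rows_per_file : (i + 1) * rows_per_file]
    # Each call writes one parquet object under the prefix.
    wr.s3.to_parquet(df=chunk, path=f"{prefix}part-{i:05d}.snappy.parquet")
```

Later awswrangler releases added a max_rows_by_file argument to s3.to_parquet that performs this splitting internally; check the documentation of the version you have installed before relying on it.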

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5 (5 by maintainers)

Top GitHub Comments

2 reactions
rparthas commented, Jul 21, 2020

Can I work on this particular request?

1 reaction
igorborgest commented, Jul 21, 2020

@rparthas Yep, it would be very welcome! Just make sure to check out our dev branch first and then open the pull request against it.

Read more comments on GitHub >

Top Results From Across the Web

  • Write multiple parquet files from a single dataframe defined by ...
    The Problem I was unsuccessful in finding a way to write a single dataframe to multiple parquet files using the s3.to_parquet() method.
  • pandas df.to_parquet write to multiple smaller files
    I have a very large DataFrame (100M x 100), and am using df.to_parquet('data.snappy', engine='pyarrow', compression='snappy') to write to a file ... (see the first sketch after this list)
  • Compaction / Merge of parquet files | by Chris Finlayson
    Optimising size of parquet files for processing by Hadoop or Spark. The small file problem. One of...
  • Reading and Writing the Apache Parquet Format
    Multiple Parquet files constitute a Parquet dataset. These may present in a number of ways: A list of Parquet absolute file paths. A...
  • Convert Many Parquet Files to a Single CSV using Python
    Let's see how we can load multiple Parquet files into a DataFrame and write them to a single CSV file using the Dask... (see the second sketch after this list)
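
The second result above asks for the same behaviour from plain pandas. Here is a minimal local sketch of the chunking approach, assuming pyarrow is installed; the frame contents, part count, and file names are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical frame; the linked question deals with roughly 100M rows x 100 columns.
df = pd.DataFrame(np.random.rand(1_000_000, 4), columns=list("abcd"))

n_files = 8  # pick so each part lands near the file size you want
for i, part in enumerate(np.array_split(df, n_files)):
    # Each part becomes its own snappy-compressed parquet file.
    part.to_parquet(f"data-part-{i:03d}.snappy.parquet",
                    engine="pyarrow", compression="snappy")
```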
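
The last result covers the reverse direction: collecting many parquet parts back into a single CSV. A minimal sketch using Dask, reusing the hypothetical file names from the previous sketch.

```python
import dask.dataframe as dd

# Read all parquet parts lazily, then write them out as one CSV file.
ddf = dd.read_parquet("data-part-*.snappy.parquet")
ddf.to_csv("data.csv", single_file=True, index=False)
```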
