
Test integration with dask.dataframe

See original GitHub issue

dask.dataframe should also be able to handle fletcher columns and accessors. Thus we should at least have tests that confirm:

  • dask.dataframe can have fletcher.Fletcher{Chunked,Continuous}Array columns
  • the fr_text accessor works with dask.dataframe

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Reactions: 2
  • Comments: 7 (3 by maintainers)

Top GitHub Comments

1 reaction
xhochy commented, Jun 28, 2020

Yes, since Thursday this is working on master: https://github.com/xhochy/fletcher/pull/147

0 reactions
xhochy commented, Jun 29, 2020

If you want to create such a column from a Parquet file without going through object dtype, check out the types_mapper argument of pyarrow.Table.to_pandas. This also works for other ExtensionArrays, not only fletcher. It can also save quite some overhead / GIL contention.

import fletcher as fr
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

df = pd.DataFrame({'str': ['a', 'b', 'c']})
df.to_parquet("test.parquet")
table = pq.read_table("test.parquet")

# Default conversion: Arrow strings land in an object-dtype column.
table.to_pandas().info()

# <class 'pandas.core.frame.DataFrame'>
# RangeIndex: 3 entries, 0 to 2
# Data columns (total 1 columns):
#  #   Column  Non-Null Count  Dtype 
# ---  ------  --------------  ----- 
#  0   str     3 non-null      object
# dtypes: object(1)
# memory usage: 152.0+ bytes

# Mapped conversion: Arrow string columns become the fletcher extension dtype.
table.to_pandas(types_mapper={pa.string(): fr.FletcherChunkedDtype(pa.string())}.get).info()
# <class 'pandas.core.frame.DataFrame'>
# RangeIndex: 3 entries, 0 to 2
# Data columns (total 1 columns):
#  #   Column  Non-Null Count  Dtype                   
# ---  ------  --------------  -----                   
#  0   str     3 non-null      fletcher_chunked[string]
# dtypes: fletcher_chunked[string](1)
# memory usage: 147.0 bytes
Read more comments on GitHub.

