
pyarrow.hdfs.HadoopFileSystem not serializable by Spark

See original GitHub issue

https://github.com/uber/petastorm/blob/e65e327f3bf10ffe95954ceed1a5da89ea0b0ba2/petastorm/etl/dataset_metadata.py#L212

Hi again,

I am using pyspark 2.4.0 and pyarrow 0.11.1, and Spark is not able to serialize pyarrow.hdfs.HadoopFileSystem. Have you encountered this issue before?

I get “HDFS Connection Failed” during serialization, which is a bit strange: I can open a new filesystem connection inside the Spark mapper with no problem, so it should not be a connection issue but rather a serialization issue.

Snippet from stacktrace:

  File "/srv/hops/hopsdata/tmp/nm-local-dir/usercache/N8YmHUGK9tr5Q9_iz7ZdAb0oU66QXgWDdYzH4tE4wgI/appcache/application_1547648243443_0001/container_e01_1547648243443_0001_01_000002/pyspark.zip/pyspark/serializers.py", line 566, in loads
    return pickle.loads(obj, encoding=encoding)
  File "/srv/hops/anaconda/anaconda/envs/petastorm/lib/python3.6/site-packages/pyarrow/hdfs.py", line 37, in __init__
    self._connect(host, port, user, kerb_ticket, driver, extra_conf)
  File "pyarrow/io-hdfs.pxi", line 105, in pyarrow.lib.HadoopFileSystem._connect
  File "pyarrow/error.pxi", line 83, in pyarrow.lib.check_status
  pyarrow.lib.ArrowIOError: HDFS connection failed
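To make the failure mode concrete, here is a minimal sketch of what goes wrong and of the workaround described above (the host, port, and paths are placeholders, not values from the issue):

    import pyarrow as pa

    # Driver-side connection. When Spark pickles a closure that captures this
    # object, unpickling on the executor re-runs HadoopFileSystem.__init__,
    # which calls _connect() again and raises the ArrowIOError shown above.
    fs = pa.hdfs.connect('namenode-host', 8020)  # placeholder host/port

    def mapper_broken(path):
        return fs.exists(path)  # captures `fs` -> serialization fails

    def mapper_ok(path):
        # Workaround from the report: open a fresh connection inside the
        # mapper so no filesystem object is ever serialized.
        local_fs = pa.hdfs.connect('namenode-host', 8020)
        return local_fs.exists(path)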

Thanks /Kim

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Reactions: 3
  • Comments: 8 (3 by maintainers)

Top GitHub Comments

4 reactions
selitvin commented, Jan 16, 2019

I see that the local filesystem is serializable. The libhdfs3-based filesystem is probably serializable as well, while the libhdfs one is not, which explains why we never ran into it. We’ll get it fixed in the next release. Thanks for bringing this up!
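A quick way to check which filesystem objects survive a pickle round trip (a sketch against the pyarrow 0.11-era API; the driver argument selects libhdfs vs. libhdfs3, and the host/port are placeholders):

    import pickle
    import pyarrow as pa

    # The legacy local filesystem pickles and unpickles without side effects.
    local_fs = pa.filesystem.LocalFileSystem()
    pickle.loads(pickle.dumps(local_fs))  # round-trips fine

    # The libhdfs-backed filesystem reconnects during unpickling, which is
    # where the ArrowIOError in the stack trace above is raised.
    hdfs_fs = pa.hdfs.connect('namenode-host', 8020, driver='libhdfs')
    pickle.loads(pickle.dumps(hdfs_fs))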

1 reaction
jsgoller1 commented, Feb 20, 2019

@maver1ck / @Limmen - #310 has now landed, so you should be able to pass a filesystem factory method to materialize_dataset() (or leave it empty to use the local filesystem). I’ll close this issue for now, but feel free to reopen if the issue resurfaces.
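Usage after that change looks roughly like the following sketch (spark, dataset_url, and MySchema are placeholders, and filesystem_factory is assumed to be the keyword added by #310):

    import pyarrow as pa
    from petastorm.etl.dataset_metadata import materialize_dataset

    def fs_factory():
        # Invoked wherever a filesystem is needed, so Spark never has to
        # pickle a live pyarrow HDFS connection.
        return pa.hdfs.connect('namenode-host', 8020)  # placeholder host/port

    with materialize_dataset(spark, dataset_url, MySchema,
                             filesystem_factory=fs_factory):
        # write the Parquet dataset with Spark as usual
        ...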


Top Results From Across the Web

pyarrow.hdfs.connect — Apache Arrow v10.0.1
Deprecated since version 2.0: pyarrow.hdfs.connect is deprecated, please use pyarrow.fs.HadoopFileSystem instead. Parameters: host: NameNode.

What is the best possible way to call Hadoop FileSystem ...
I have defined a MyMain Scala object that extends Serializable since it involves calling UDF transformation on each of these HDFS buckets.

A gentle introduction to Apache Arrow with Apache Spark and ...
Apache Arrow comes with bindings to a C++-based interface to the Hadoop File System. It means that we can read or download all...

Reading and Writing the Apache Parquet Format
You can write a partitioned dataset for any pyarrow file system that is a file-store (e.g. local, HDFS, S3). The default behaviour when...

pyarrow.fs.HadoopFileSystem — Apache Arrow v3.0.0
kerb_ticket (string or path, default None) – If not None, the path to the ... are equivalent * HadoopFileSystem.from_uri('hdfs://localhost:8020/?user=test'.
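For newer pyarrow releases, the replacement API referenced in the deprecation note above looks roughly like this (host and port are placeholders):

    from pyarrow import fs

    # New-style filesystem that replaces the deprecated pyarrow.hdfs.connect
    hdfs = fs.HadoopFileSystem('namenode-host', port=8020)

    # Equivalent construction from a URI, as in the docs snippet above
    hdfs2 = fs.HadoopFileSystem.from_uri('hdfs://localhost:8020/?user=test')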
