read_csv_glob fails reading from azure storage account
See original GitHub issue

System information
- python 3.9
- modin 0.12
- ray 1.9.2
pandas and modin `read_csv` both work using the adlfs module (https://github.com/fsspec/adlfs).

First set up the credentials via environment variables:

```python
import os

os.environ["AZURE_STORAGE_ACCOUNT_NAME"] = "someaccount"
os.environ["AZURE_STORAGE_ACCOUNT_KEY"] = "somekey"
```

Then the following works:

```python
import modin.pandas as modin_pd

modin_pd.read_csv('abfs://container@blob.core.windows.net/filename.csv')
```
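For reference, the same credentials can also be passed per call via `storage_options`, which pandas-style readers forward to fsspec/adlfs (the parameter appears in the `read_csv_glob` signature in the traceback below); a minimal sketch with placeholder account values:

```python
import modin.pandas as modin_pd

# storage_options is forwarded to the adlfs filesystem,
# so credentials need not live in environment variables
df = modin_pd.read_csv(
    "abfs://container@blob.core.windows.net/filename.csv",
    storage_options={
        "account_name": "someaccount",
        "account_key": "somekey",
    },
)
```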
`read_csv_glob`, however, throws the following error:

```python
modin_experimental_pd.read_csv_glob('abfs://container@blob.core.windows.net/filename.csv')
```
```
File /python3.9/site-packages/modin/experimental/pandas/io.py:183, in _make_parser_func.<locals>.parser_func(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, encoding_errors, dialect, error_bad_lines, warn_bad_lines, on_bad_lines, skipfooter, doublequote, delim_whitespace, low_memory, memory_map, float_precision, storage_options)
    180 f_locals["sep"] = "\t"
    182 kwargs = {k: v for k, v in f_locals.items() if k in _pd_read_csv_signature}
--> 183 return _read(**kwargs)

File python3.9/site-packages/modin/experimental/pandas/io.py:208, in _read(**kwargs)
    205 Engine.subscribe(_update_engine)
    207 try:
--> 208     pd_obj = FactoryDispatcher.read_csv_glob(**kwargs)
    209 except AttributeError:
    210     raise AttributeError("read_csv_glob() is only implemented for pandas on Ray.")

File /python3.9/site-packages/modin/core/execution/dispatching/factories/dispatcher.py:185, in FactoryDispatcher.read_csv_glob(cls, **kwargs)
    182 @classmethod
    183 @_inherit_docstrings(factories.ExperimentalPandasOnRayFactory._read_csv_glob)
    184 def read_csv_glob(cls, **kwargs):
--> 185     return cls.__factory._read_csv_glob(**kwargs)

File /python3.9/site-packages/modin/core/execution/dispatching/factories/factories.py:513, in ExperimentalPandasOnRayFactory._read_csv_glob(cls, **kwargs)
    506 @classmethod
    507 @doc(
    508     _doc_io_method_raw_template,
    (...)
    511 )
    512 def _read_csv_glob(cls, **kwargs):
--> 513     return cls.io_cls.read_csv_glob(**kwargs)

File python3.9/site-packages/modin/core/io/text/csv_glob_dispatcher.py:62, in CSVGlobDispatcher._read(cls, filepath_or_buffer, **kwargs)
    60 if isinstance(filepath_or_buffer, str):
    61     if not cls.file_exists(filepath_or_buffer):
--> 62         return cls.single_worker_read(filepath_or_buffer, **kwargs)
    63     filepath_or_buffer = cls.get_path(filepath_or_buffer)
    64 elif not cls.pathlib_or_pypath(filepath_or_buffer):

File python3.9/site-packages/modin/core/storage_formats/pandas/parsers.py:269, in PandasParser.single_worker_read(cls, fname, **kwargs)
    267 ErrorMessage.default_to_pandas("Parameters provided")
    268 # Use default args for everything
--> 269 pandas_frame = cls.parse(fname, **kwargs)
    270 if isinstance(pandas_frame, pandas.io.parsers.TextFileReader):
    271     pd_read = pandas_frame.read

File python3.9/site-packages/modin/core/storage_formats/pandas/parsers.py:312, in PandasCSVGlobParser.parse(chunks, **kwargs)
    309 index_col = kwargs.get("index_col", None)
    311 pandas_dfs = []
--> 312 for fname, start, end in chunks:
    313     if start is not None and end is not None:
    314         # pop "compression" from kwargs because bio is uncompressed
    315         with OpenFile(fname, "rb", kwargs.pop("compression", "infer")) as bio:

ValueError: not enough values to unpack (expected 3, got 1)
```
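Judging by the traceback, `cls.file_exists(filepath_or_buffer)` returns `False` for the `abfs://` URL, so the raw path string falls through to `single_worker_read` and is then iterated by `PandasCSVGlobParser.parse` as if it were a list of `(fname, start, end)` chunks. A minimal illustration of that failure mode (a sketch of the mechanism, not Modin's actual code):

```python
# parse() expects an iterable of (fname, start, end) tuples, but
# receives the path string itself. Iterating a string yields single
# characters, and unpacking one character into three names raises
# exactly the error seen above.
chunks = "abfs://container@blob.core.windows.net/filename.csv"
for fname, start, end in chunks:
    pass
# ValueError: not enough values to unpack (expected 3, got 1)
```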
The following calls fail in the same way:

```python
modin_experimental_pd.read_csv_glob('abfs://container@blob.core.windows.net/*')
modin_experimental_pd.read_csv_glob('abfs://container@blob.core.windows.net/*.csv')
```
@c3-cjazra the main problem here is that we haven’t fully switched to using fsspec, so reading multiple files currently only works for S3, because we explicitly use s3fs. This definitely needs to be fixed; the main difficulty is correctly handling the cases where reading is done anonymously or under certain permissions (anon=True or False).

@devin-petersohn what is the planned date for the next release?
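For context, the fsspec-generic approach described above would resolve the filesystem from the URL scheme instead of hard-coding s3fs. A minimal sketch of that idea (not Modin's actual implementation; `fsspec.core.url_to_fs` is assumed to be available in the installed fsspec version):

```python
import fsspec

# Resolve the concrete filesystem (s3fs, adlfs, local, ...) from the
# URL scheme instead of assuming S3, then glob uniformly across backends.
url = "abfs://container@blob.core.windows.net/*.csv"
fs, path = fsspec.core.url_to_fs(url)  # picks adlfs for the abfs:// scheme
print(fs.glob(path))  # the file list read_csv_glob needs to enumerate
```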
Hi @anmyachev, thanks for the quick follow-up! I just tried 0.14.0 and the error is fixed, but the problem now is that only one file is read instead of all of them. There are two CSV files at the container path, but

```python
modin_experimental_pd.read_csv_glob('abfs://container@blob.core.windows.net/*.csv')
```

only returns the content of one file.
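Until this is fixed, one possible workaround is to expand the glob with fsspec/adlfs directly and concatenate per-file reads; a sketch, assuming the same environment-variable credentials as above and that `fs.glob` returns protocol-less paths (both are assumptions about this adlfs setup):

```python
import fsspec
import modin.pandas as modin_pd

# Expand the glob ourselves via fsspec/adlfs, since read_csv_glob
# currently returns only the first matching file on abfs://.
fs = fsspec.filesystem("abfs")  # credentials via AZURE_STORAGE_* env vars
paths = fs.glob("abfs://container@blob.core.windows.net/*.csv")

# Read each matching file individually and concatenate the results.
df = modin_pd.concat(
    [modin_pd.read_csv(f"abfs://{p}") for p in paths],
    ignore_index=True,
)
```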