
BUG: read_csv is failing with an encoding different from UTF-8 and memory_map set to True in version 1.2.4

See original GitHub issue
  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.

  • (optional) I have confirmed this bug exists on the master branch of pandas.


Note: Please read this guide detailing how to provide the necessary information for us to reproduce your bug.

Code Sample, a copy-pastable example

import pandas as pd

df = pd.DataFrame({'name': ['Raphael', 'Donatello', 'Miguel Angel', 'Leonardo'],
                   'mask': ['red', 'purple', 'orange', 'blue'],
                   'weapon': ['sai', 'bo staff', 'nunchunk', 'katana']})
df.to_csv("tmnt.csv", index=False, encoding="utf-16")
pd.read_csv(filepath_or_buffer="tmnt.csv", encoding="utf-16", sep=",", header=0, decimal=".", memory_map=True)

Problem description

This works perfectly in version 1.1.1, but it no longer does in 1.2.4. Since memory_map is a useful feature that removes I/O overhead, this regression is worth looking into; it could also break many existing workflows.
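As a possible stopgap (a sketch, not an official workaround): leaving memory_map at its default of False goes through the normal text-decoding layer, so the non-UTF-8 file reads correctly.

```python
import pandas as pd

# Recreate the repro file (same data as above, shortened).
df = pd.DataFrame({"name": ["Raphael", "Donatello"],
                   "mask": ["red", "purple"]})
df.to_csv("tmnt.csv", index=False, encoding="utf-16")

# With memory_map=False (the default), read_csv decodes through the
# normal text layer, so a non-UTF-8 encoding works as expected.
out = pd.read_csv("tmnt.csv", encoding="utf-16", memory_map=False)
print(out)
```

The trade-off is losing the I/O benefit of memory mapping until the regression is fixed.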

Expected Output

           name    mask    weapon
0       Raphael     red       sai
1     Donatello  purple  bo staff
2  Miguel Angel  orange  nunchunk
3      Leonardo    blue    katana

Traceback

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "./lib/python3.8/site-packages/pandas/io/parsers.py", line 610, in read_csv
    return _read(filepath_or_buffer, kwds)
  File "./lib/python3.8/site-packages/pandas/io/parsers.py", line 462, in _read
    parser = TextFileReader(filepath_or_buffer, **kwds)
  File "./lib/python3.8/site-packages/pandas/io/parsers.py", line 819, in __init__
    self._engine = self._make_engine(self.engine)
  File "./lib/python3.8/site-packages/pandas/io/parsers.py", line 1050, in _make_engine
    return mapping[engine](self.f, **self.options)  # type: ignore[call-arg]
  File "./lib/python3.8/site-packages/pandas/io/parsers.py", line 1898, in __init__
    self._reader = parsers.TextReader(self.handles.handle, **kwds)
  File "pandas/_libs/parsers.pyx", line 518, in pandas._libs.parsers.TextReader.__cinit__
  File "pandas/_libs/parsers.pyx", line 649, in pandas._libs.parsers.TextReader._get_header
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte

Output of pd.show_versions()

INSTALLED VERSIONS
------------------
commit           : 2cb96529396d93b46abab7bbc73a208e708c642e
python           : 3.8.8.final.0
python-bits      : 64
OS               : Darwin
OS-release       : 20.2.0
Version          : Darwin Kernel Version 20.2.0: Wed Dec  2 20:39:59 PST 2020; root:xnu-7195.60.75~1/RELEASE_X86_64
machine          : x86_64
processor        : i386
byteorder        : little
LC_ALL           : None
LANG             : None
LOCALE           : None.UTF-8

pandas           : 1.2.4
numpy            : 1.20.1
pytz             : 2021.1
dateutil         : 2.8.1
pip              : 21.0.1
setuptools       : 54.0.0
Cython           : None
pytest           : 6.2.2
hypothesis       : None
sphinx           : None
blosc            : None
feather          : None
xlsxwriter       : None
lxml.etree       : 4.6.2
html5lib         : None
pymysql          : None
psycopg2         : 2.8.6 (dt dec pq3 ext lo64)
jinja2           : None
IPython          : None
pandas_datareader: None
bs4              : 4.9.3
bottleneck       : None
fsspec           : None
fastparquet      : None
gcsfs            : None
matplotlib       : None
numexpr          : None
odfpy            : None
openpyxl         : 3.0.6
pandas_gbq       : None
pyarrow          : None
pyxlsb           : None
s3fs             : None
scipy            : 1.6.2
sqlalchemy       : 1.4.1
tables           : None
tabulate         : None
xarray           : None
xlrd             : 2.0.1
xlwt             : None
numba            : None

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments: 9 (9 by maintainers)

Top GitHub Comments

1 reaction
amznero commented, Apr 17, 2021

I looked into pandas/io/common.py and pandas/io/parser.py, found some points to discuss.

1. In common.py (line 783), the memory-mapped handle always decodes bytes as "utf-8" when the user sets the memory_map option; the encoding option has no effect.

newline = newbytes.decode("utf-8")

2. A "utf-16" byte string (from f.readline()) cannot be decoded directly when "\n" is at the end, because the newline byte splits a two-byte code unit.

Code Snippet.

# 1. write some txt

data = ["The quick brown", "fox jumps over", "the lazy dog"]
with open("utf16_test.txt", "w", encoding="utf-16") as f:
    for line in data:
        f.write(f"{line}\n")


# 2. read in "r" mode + "utf-16" encoding and readlines
with open("utf16_test.txt", "r", encoding="utf-16") as f:
    lines = f.readlines()
    print(f"lines lens: {len(lines)}")
    for line in lines:
        print(line, end="")

# lines lens: 3
# The quick brown
# fox jumps over
# the lazy dog

# 3. read in "rb" mode and readlines
with open("utf16_test.txt", "rb") as f:
    lines = f.readlines()
    print(f"lines lens: {len(lines)}")
    for each in lines:
        print(each)
    print("-"*50)
    for each in data:
        print(each.encode("utf-16"))

# lines lens: 4
# b'\xff\xfeT\x00h\x00e\x00 \x00q\x00u\x00i\x00c\x00k\x00 \x00b\x00r\x00o\x00w\x00n\x00\n'
# b'\x00f\x00o\x00x\x00 \x00j\x00u\x00m\x00p\x00s\x00 \x00o\x00v\x00e\x00r\x00\n' # abnormal bytes
# b'\x00t\x00h\x00e\x00 \x00l\x00a\x00z\x00y\x00 \x00d\x00o\x00g\x00\n'           # abnormal bytes
# b'\x00'                                                                         # abnormal bytes
# --------------------------------------------------
# b'\xff\xfeT\x00h\x00e\x00 \x00q\x00u\x00i\x00c\x00k\x00 \x00b\x00r\x00o\x00w\x00n\x00'
# b'\xff\xfef\x00o\x00x\x00 \x00j\x00u\x00m\x00p\x00s\x00 \x00o\x00v\x00e\x00r\x00'
# b'\xff\xfet\x00h\x00e\x00 \x00l\x00a\x00z\x00y\x00 \x00d\x00o\x00g\x00'


# lines[0].decode("utf-16") will raise "UnicodeDecodeError: 'utf-16-le' codec can't decode byte 0x0a in position 32: truncated data"

# works fine when the trailing \n is dropped
lines[0][:-1].decode("utf-16")
# return "The quick brown"

strs = "The quick brown\n".encode("utf-16")
# \xff\xfeT\x00h\x00e\x00 \x00q\x00u\x00i\x00c\x00k\x00 \x00b\x00r\x00o\x00w\x00n\x00\n\x00
# note the \x00 appended after \n: the newline itself is a two-byte code unit

# 4. read in "rb" mode and read() the whole file at once

with open("utf16_test.txt", "rb") as f:
    bytes_data = f.read()
    print(bytes_data)
    print(bytes_data.decode("utf-16"))

# b'\xff\xfeT\x00h\x00e\x00 \x00q\x00u\x00i\x00c\x00k\x00 \x00b\x00r\x00o\x00w\x00n\x00\n\x00f\x00o\x00x\x00 \x00j\x00u\x00m\x00p\x00s\x00 \x00o\x00v\x00e\x00r\x00\n\x00t\x00h\x00e\x00 \x00l\x00a\x00z\x00y\x00 \x00d\x00o\x00g\x00\n\x00'
# The quick brown
# fox jumps over
# the lazy dog
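A sketch of one way to sidestep the per-line problem (illustrative only, not what pandas does): an incremental decoder carries partial code units across chunk boundaries, so the stray \x00 that follows each \n in UTF-16-LE is consumed correctly.

```python
import codecs

data = ["The quick brown", "fox jumps over", "the lazy dog"]
with open("utf16_inc.txt", "w", encoding="utf-16") as f:
    for line in data:
        f.write(f"{line}\n")

# An incremental decoder keeps state between calls, so feeding it
# arbitrary chunks (here: 7 bytes at a time, deliberately misaligned
# with the 2-byte UTF-16 code units) still decodes correctly.
decoder = codecs.getincrementaldecoder("utf-16")()
with open("utf16_inc.txt", "rb") as f:
    text = "".join(decoder.decode(chunk)
                   for chunk in iter(lambda: f.read(7), b""))
text += decoder.decode(b"", final=True)  # flush any buffered bytes
print(text.splitlines())
# ['The quick brown', 'fox jumps over', 'the lazy dog']
```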

3. After the file handle is mapped into an mmap object, the wrapped handle behaves differently from the raw handle: it yields raw bytes instead of decoded text. Maybe related to point 2?

Code snippet

import mmap
import pandas as pd
from typing import cast
from pandas.io.common import _MMapWrapper

df = pd.DataFrame({'name': ['Raphael', 'Donatello', 'Miguel Angel', 'Leonardo'],
                                     'mask': ['red', 'purple', 'orange', 'blue'],
                                     'weapon': ['sai', 'bo staff', 'nunchunk', 'katana']})

df.to_csv("tmnt.csv", index=False, encoding="utf-16")

file_handle = open("tmnt.csv", "r", encoding="utf-16", newline="")

for _ in range(3):
    print(next(file_handle))

# Raphael,red,sai
# 
# Donatello,purple,bo staff
# 
# Miguel Angel,orange,nunchunk
# 

wrapped_handle = cast(mmap.mmap, _MMapWrapper(file_handle))
for _ in range(3):
    print(next(wrapped_handle))

# b'\xff\xfen\x00a\x00m\x00e\x00,\x00m\x00a\x00s\x00k\x00,\x00w\x00e\x00a\x00p\x00o\x00n\x00\n'
# b'\x00R\x00a\x00p\x00h\x00a\x00e\x00l\x00,\x00r\x00e\x00d\x00,\x00s\x00a\x00i\x00\n'
# b'\x00D\x00o\x00n\x00a\x00t\x00e\x00l\x00l\x00o\x00,\x00p\x00u\x00r\x00p\x00l\x00e\x00,\x00b\x00o\x00 \x00s\x00t\x00a\x00f\x00f\x00\n'
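Building on that observation, one possible direction (a sketch with illustrative names, not the actual pandas fix) is to put a text-decoding layer back on top of the memory map, so line iteration returns decoded strings again:

```python
import io
import mmap

class MMapReader(io.RawIOBase):
    """Minimal raw-IO adapter so io.TextIOWrapper can decode an mmap.

    MMapReader is a hypothetical helper for illustration; it is not a
    pandas class.
    """
    def __init__(self, mm: mmap.mmap) -> None:
        self.mm = mm

    def readable(self) -> bool:
        return True

    def readinto(self, b) -> int:
        data = self.mm.read(len(b))
        b[: len(data)] = data
        return len(data)

# Build a tiny UTF-16 file to map (contents are illustrative).
with open("utf16_wrap.txt", "w", encoding="utf-16") as f:
    f.write("name,mask\nRaphael,red\n")

with open("utf16_wrap.txt", "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # TextIOWrapper restores text-mode line iteration with the
    # requested encoding on top of the memory-mapped bytes.
    text = io.TextIOWrapper(io.BufferedReader(MMapReader(mm)),
                            encoding="utf-16")
    lines = text.readlines()
    print(lines)
# ['name,mask\n', 'Raphael,red\n']
```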

1 reaction
twoertwein commented, Apr 16, 2021

Does it work with the python engine on 1.1.x and 1.2.x? pd.read_csv(filepath_or_buffer="tmnt.csv", encoding="utf-16", memory_map=True, ..., engine="python").
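For reference, the suggested check could be run like this (a sketch; tmnt.csv is recreated here so the snippet is self-contained, and whether the call succeeds depends on the installed pandas version):

```python
import pandas as pd

# Recreate the repro file from the issue (shortened).
df = pd.DataFrame({"name": ["Raphael", "Leonardo"],
                   "mask": ["red", "blue"]})
df.to_csv("tmnt.csv", index=False, encoding="utf-16")

# engine="python" takes a different parsing code path than the default
# C engine, which helps isolate whether the regression is C-parser-only.
out = pd.read_csv("tmnt.csv", encoding="utf-16",
                  memory_map=True, engine="python")
print(out.shape)
```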

I will have more time to look into this next week.
