ParserError: Error tokenizing data. C error: Expected 1 fields in line 23, saw 46
When I try:
from get_all_tickers import get_tickers as gt
tickers = gt.get_tickers()
I get an error (the same happens with tickers = gt.get_tickers(NASDAQ=False)):
---------------------------------------------------------------------------
ParserError Traceback (most recent call last)
c:\Users\Mislav\Documents\GitHub\stocksee\stocksee\ib_market_data.py in
----> 36 tickers = gt.get_tickers(NASDAQ=False)
C:\ProgramData\Anaconda3\lib\site-packages\get_all_tickers\get_tickers.py in get_tickers(NYSE, NASDAQ, AMEX)
71 tickers_list = []
72 if NYSE:
---> 73 tickers_list.extend(__exchange2list('nyse'))
74 if NASDAQ:
75 tickers_list.extend(__exchange2list('nasdaq'))
C:\ProgramData\Anaconda3\lib\site-packages\get_all_tickers\get_tickers.py in __exchange2list(exchange)
136
137 def __exchange2list(exchange):
--> 138 df = __exchange2df(exchange)
139 # removes weird tickers
140 df_filtered = df[~df['Symbol'].str.contains("\.|\^")]
C:\ProgramData\Anaconda3\lib\site-packages\get_all_tickers\get_tickers.py in __exchange2df(exchange)
132 response = requests.get('https://old.nasdaq.com/screening/companies-by-name.aspx', headers=headers, params=params(exchange))
133 data = io.StringIO(response.text)
--> 134 df = pd.read_csv(data, sep=",")
135 return df
136
~\AppData\Roaming\Python\Python38\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision)
674 )
675
--> 676 return _read(filepath_or_buffer, kwds)
677
678 parser_f.__name__ = name
~\AppData\Roaming\Python\Python38\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds)
452
453 try:
--> 454 data = parser.read(nrows)
455 finally:
456 parser.close()
~\AppData\Roaming\Python\Python38\site-packages\pandas\io\parsers.py in read(self, nrows)
1131 def read(self, nrows=None):
1132 nrows = _validate_integer("nrows", nrows)
-> 1133 ret = self._engine.read(nrows)
1134
1135 # May alter columns / col_dict
~\AppData\Roaming\Python\Python38\site-packages\pandas\io\parsers.py in read(self, nrows)
2035 def read(self, nrows=None):
2036 try:
-> 2037 data = self._reader.read(nrows)
2038 except StopIteration:
2039 if self._first_chunk:
pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader.read()
pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory()
pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._read_rows()
pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._tokenize_rows()
pandas\_libs\parsers.pyx in pandas._libs.parsers.raise_parser_error()
ParserError: Error tokenizing data. C error: Expected 1 fields in line 23, saw 46
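The root cause is visible in the traceback: __exchange2df fetches https://old.nasdaq.com/screening/companies-by-name.aspx, which no longer serves CSV. The response is an HTML page, so pandas infers a single column from the first line and then fails on a later line that happens to contain 45 commas. A minimal reproduction of the same ParserError (the HTML content here is invented for illustration):

```python
import io
import pandas as pd

# Simulate a server that answers with an HTML error page instead of CSV.
# pandas infers one column from the first line, then chokes on a later
# line with 46 comma-separated fields.
fake_response = io.StringIO("<!DOCTYPE html>\n<html>\n" + "a," * 45 + "a\n")

try:
    pd.read_csv(fake_response, sep=",")
    msg = ""
except pd.errors.ParserError as exc:
    msg = str(exc)

print(msg)  # Error tokenizing data. C error: Expected 1 fields in line 3, saw 46
```

So the parser options are not at fault; any non-CSV payload with an inconsistent comma count per line triggers this exact message.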
Issue Analytics
- State:
- Created: 3 years ago
- Reactions: 4
- Comments: 21
Top Results From Across the Web

Python Pandas Error tokenizing data - csv - Stack Overflow
If this error arises when reading a file written by pandas.to_csv(), it MIGHT be because there is a '\r' in a column...

How To Fix pandas.parser.CParserError: Error tokenizing data
The most obvious solution to the problem is to fix the data file manually by removing the extra separators in the lines causing...

Error tokenizing data. C error: Expected X fields in line
In this example, I'll explain an easy fix for the "ParserError: Error tokenizing data. C error: Expected X fields in line Y, saw..."

How to fix CParserError: Error tokenizing data
The Error tokenizing data may arise when you're using a separator (e.g. a comma ',') as a delimiter and there are more separators than...

How To Solve Python Pandas Error Tokenizing Data Error?
While reading a CSV file, you may get the "Pandas Error Tokenizing Data" error. This mostly occurs due to incorrect data in the...
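Several of the results above suggest skipping malformed rows rather than fixing the file. That only masks the problem in this issue (the "CSV" is an HTML page), but for a genuinely malformed CSV the pandas option looks like this (on_bad_lines requires pandas >= 1.3; older versions used error_bad_lines=False; the data here is invented for illustration):

```python
import io
import pandas as pd

# Invented two-column CSV in which one row has an extra, unquoted comma.
raw = io.StringIO(
    "Symbol,Name\n"
    "AAPL,Apple Inc.\n"
    "BRK,Berkshire,Hathaway\n"  # 3 fields instead of 2 -- malformed
    "MSFT,Microsoft\n"
)

# on_bad_lines="skip" drops rows with the wrong field count
# instead of raising ParserError.
df = pd.read_csv(raw, on_bad_lines="skip")
print(list(df["Symbol"]))  # ['AAPL', 'MSFT']
```

The malformed row is silently dropped, so use this only when losing those rows is acceptable.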
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I made a quick-and-dirty fix in get_tickers.py. Filtering by mktcap_min, mktcap_max and sectors works for me; I didn't test regions. GitHub doesn't allow me to upload a .py file, so you need to remove the '.txt' ending of this one and replace the corresponding file in the package: get_tickers.py.txt. Thanks to @Possums for the basics!
The Nasdaq API got updated, so the old URL is no longer available, I believe.
The following is a quick implementation of the new API.
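The commenter's actual code is not included in this capture. As a sketch only, a fetch against the newer JSON screener might look like the following; the endpoint path, query parameters, headers, and JSON layout are assumptions inferred from the comments, not confirmed by this thread.

```python
import json
import urllib.parse
import urllib.request

# Assumed replacement endpoint for the retired old.nasdaq.com CSV screener.
API_URL = "https://api.nasdaq.com/api/screener/stocks"

def rows_to_symbols(rows):
    """Extract ticker symbols from the screener's row dicts (pure, testable)."""
    return [row["symbol"].strip() for row in rows if row.get("symbol")]

def get_exchange_tickers(exchange):
    # Query parameters are assumptions modeled on the comments above.
    params = urllib.parse.urlencode(
        {"tableonly": "true", "limit": 25000, "exchange": exchange, "download": "true"}
    )
    req = urllib.request.Request(
        f"{API_URL}?{params}",
        # Without a browser-like User-Agent the API tends to hang or refuse.
        headers={"User-Agent": "Mozilla/5.0"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        payload = json.load(resp)
    return rows_to_symbols(payload["data"]["rows"])

if __name__ == "__main__":
    print(get_exchange_tickers("nyse")[:10])
```

Because the JSON arrives whole, there is no line-by-line tokenizing and the ParserError cannot occur; if the response shape differs from the assumed {"data": {"rows": [...]}}, only the last line of get_exchange_tickers needs adjusting.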