clevercsv sniffer slows to a crawl on large-ish files (e.g. FEC data)
Hello,
This is a very neat project! I was thinking "I should collect a bunch of CSV files from the web and gather statistics on which dialects occur, and how often, to be able to better detect them", and then I found your paper and Python package! Congrats on this very nice contribution.
I am trying to see how `clevercsv` performs on FEC data. For instance, let's consider this file:
https://www.fec.gov/files/bulk-downloads/1980/indiv80.zip
```
$ head -5 fec-indiv-1979-1980.csv
C00078279|A|M11|P|80031492155|22Y||MCKENNON, K R|MIDLAND|MI|00000|||10031979|400|||||CONTRIBUTION REF TO INDIVIDUAL|3062020110011466469
C00078279|A|M11||79031415137|15||OREFFICE, P|MIDLAND|MI|00000|DOW CHEMICAL CO||10261979|1500||||||3061920110000382948
C00078279|A|M11||79031415137|15||DOWNEY, J|MIDLAND|MI|00000|DOW CHEMICAL CO||10261979|300||||||3061920110000382949
C00078279|A|M11||79031415137|15||BLAIR, E|MIDLAND|MI|00000|DOW CHEMICAL CO||10261979|1000||||||3061920110000382950
C00078287|A|Q1||79031231889|15||BLANCHARD, JOHN A|CHICAGO|IL|60685|||03201979|200||||||3061920110000383914
```
When I try to open the file with `clevercsv`, it takes an inordinate amount of time and seems to hang. So I tried to use the sniffer, as suggested in your example Binder.
```python
import clevercsv

# downloaded, unzipped and renamed to a CSV file from:
# https://www.fec.gov/files/bulk-downloads/1980/indiv80.zip
content = open("fec-indiv-1979-1980.csv").read()
clevercsv.Sniffer().sniff(content, verbose=True)
```
It prints out this:
```
Running normal form detection ...
Not normal, has potential escapechar.
Running data consistency measure ...
```
and then, a few minutes later, it starts printing:
```
Considering 92 dialects.
SimpleDialect(',', '', ''): P = 22104.867952 T = 0.003101 Q = 68.546737
SimpleDialect(',', '"', ''): P = 13927.762095 T = 0.003668 Q = 51.090510
SimpleDialect(',', '"', '/'): P = 13839.682333 T = 0.002461 Q = 34.060128
SimpleDialect(',', "'", ''): P = 12072.093333 T = 0.003278 Q = 39.571560
SimpleDialect(';', '', ''): P = 106.613556 T = 0.000003 Q = 0.000345
SimpleDialect(';', '"', ''): P = 99.261000 T = 0.000000 Q = 0.000000
SimpleDialect(';', '"', '/'): P = 50.238917 skip.
SimpleDialect(';', "'", ''): P = 49.981222 skip.
SimpleDialect('', '', ''): P = 308.696000 T = 0.000000 Q = 0.000000
SimpleDialect('', '"', ''): P = 194.530000 T = 0.000000 Q = 0.000000
SimpleDialect('', '"', '/'): P = 96.652000 T = 0.000000 Q = 0.000000
SimpleDialect('', "'", ''): P = 144.787000 T = 0.000000 Q = 0.000000
SimpleDialect(' ', '', ''): P = 17818.683137 T = 0.346978 Q = 6182.686103
SimpleDialect(' ', '', '/'): P = 17818.565863 T = 0.346984 Q = 6182.762051
SimpleDialect(' ', '"', ''): P = 11300.749933 T = 0.353544 Q = 3995.309179
SimpleDialect(' ', '"', '/'): P = 10372.973520 T = 0.355343 Q = 3685.960429
SimpleDialect(' ', "'", ''): P = 7231.699311 T = 0.343354 Q = 2483.032090
SimpleDialect(' ', "'", '/'): P = 7231.658120 T = 0.343362 Q = 2483.075319
SimpleDialect('#', '', ''): P = 163.330000 skip.
SimpleDialect('#', '"', ''): P = 103.253000 skip.
SimpleDialect('#', '"', '/'): P = 67.761333 skip.
SimpleDialect('#', "'", ''): P = 78.132000 skip.
SimpleDialect('$', '', ''): P = 155.096500 skip.
SimpleDialect('$', '"', ''): P = 97.764000 skip.
SimpleDialect('$', '"', '/'): P = 64.601000 skip.
SimpleDialect('$', "'", ''): P = 72.892500 skip.
SimpleDialect('%', '', ''): P = 104.950222 skip.
SimpleDialect('%', '', '\\'): P = 104.783889 skip.
SimpleDialect('%', '"', ''): P = 65.896889 skip.
SimpleDialect('%', '"', '/'): P = 65.765333 skip.
SimpleDialect('%', '"', '\\'): P = 65.730556 skip.
SimpleDialect('%', "'", ''): P = 49.648556 skip.
SimpleDialect('%', "'", '\\'): P = 49.482222 skip.
SimpleDialect('&', '', ''): P = 2940.570750 skip.
SimpleDialect('&', '', '/'): P = 2940.446000 skip.
SimpleDialect('&', '"', ''): P = 1936.209667 skip.
SimpleDialect('&', '"', '/'): P = 1441.305200 skip.
SimpleDialect('&', "'", ''): P = 1340.900250 skip.
SimpleDialect('&', "'", '/'): P = 1340.775500 skip.
SimpleDialect('*', '', ''): P = 156.344000 skip.
SimpleDialect('*', '"', ''): P = 97.514500 skip.
SimpleDialect('*', '"', '/'): P = 65.599000 skip.
SimpleDialect('*', "'", ''): P = 73.142000 skip.
SimpleDialect('+', '', ''): P = 156.344000 skip.
SimpleDialect('+', '', '\\'): P = 156.094500 skip.
SimpleDialect('+', '"', ''): P = 99.011500 skip.
SimpleDialect('+', '"', '/'): P = 65.266333 skip.
SimpleDialect('+', '"', '\\'): P = 99.011500 skip.
SimpleDialect('+', "'", ''): P = 73.890500 skip.
SimpleDialect('+', "'", '\\'): P = 73.890500 skip.
SimpleDialect('-', '', ''): P = 1456.570500 skip.
SimpleDialect('-', '"', ''): P = 914.921750 skip.
SimpleDialect('-', '"', '/'): P = 700.916267 skip.
SimpleDialect('-', "'", ''): P = 687.513667 skip.
SimpleDialect(':', '', ''): P = 155.096500 skip.
SimpleDialect(':', '"', ''): P = 97.514500 skip.
SimpleDialect(':', '"', '/'): P = 64.933667 skip.
SimpleDialect('<', '', ''): P = 155.845000 skip.
SimpleDialect('<', '"', ''): P = 97.764000 skip.
SimpleDialect('<', '"', '/'): P = 65.100000 skip.
SimpleDialect('<', "'", ''): P = 73.391500 skip.
SimpleDialect('?', '', ''): P = 155.595500 skip.
SimpleDialect('?', '"', ''): P = 98.512500 skip.
SimpleDialect('?', '"', '/'): P = 96.652000 skip.
SimpleDialect('@', '', ''): P = 156.344000 skip.
SimpleDialect('@', '"', ''): P = 98.762000 skip.
SimpleDialect('@', '"', '/'): P = 64.767333 skip.
SimpleDialect('@', "'", ''): P = 73.391500 skip.
SimpleDialect('\\', '', ''): P = 105.282889 skip.
SimpleDialect('\\', '"', ''): P = 66.063222 skip.
SimpleDialect('\\', '"', '/'): P = 66.098000 skip.
SimpleDialect('\\', "'", ''): P = 74.140000 skip.
SimpleDialect('^', '', ''): P = 154.597500 skip.
SimpleDialect('^', '"', ''): P = 97.514500 skip.
SimpleDialect('^', '"', '/'): P = 64.601000 skip.
SimpleDialect('^', "'", ''): P = 72.643000 skip.
SimpleDialect('_', '', ''): P = 156.094500 skip.
SimpleDialect('_', '"', ''): P = 98.013500 skip.
SimpleDialect('_', '"', '/'): P = 65.100000 skip.
SimpleDialect('_', "'", ''): P = 73.391500 skip.
SimpleDialect('|', '', ''): P = 293996.190476 T = 0.946519 Q = 278273.106576
SimpleDialect('|', '', '/'): P = 146998.094048 skip.
SimpleDialect('|', '', '@'): P = 146998.094048 skip.
SimpleDialect('|', '', '\\'): P = 146998.092857 skip.
SimpleDialect('|', '"', ''): P = 185266.666667 skip.
SimpleDialect('|', '"', '/'): P = 46024.763214 skip.
SimpleDialect('|', '"', '@'): P = 92633.332143 skip.
SimpleDialect('|', '"', '\\'): P = 92633.330952 skip.
SimpleDialect('|', "'", ''): P = 12535.572981 skip.
SimpleDialect('|', "'", '/'): P = 12535.572981 skip.
SimpleDialect('|', "'", '@'): P = 12535.572765 skip.
SimpleDialect('|', "'", '\\'): P = 12535.572981 skip.
```
I would say this takes about 30 minutes, and finally it concludes:
```
SimpleDialect('|', '', '')
```
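(Aside, for readers of the log: the Q column is the data-consistency score, simply the product of the pattern score P and the type score T. For example, for the first comma dialect above, 22104.867952 × 0.003101 ≈ 68.55, matching its Q. The pipe dialect wins by a wide margin.)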
I think I understand what's going on: you designed this for small-ish datasets, and so you reprocess the whole file for every dialect to determine which one makes the most sense.
I would be tempted to think this is because I feed the data as a variable `content`, following your example, rather than providing the filename directly. However, when I tried calling the `read_csv` method directly with the filename, it was also really, very, very slow. So I think that in all situations, `clevercsv` currently trips on this file, and more generally on this type of file.
When I take the initiative to truncate the data arbitrarily, `clevercsv` works beautifully. But shouldn't the truncating be something the library does, rather than the user?
```python
clevercsv.Sniffer().sniff(content[0:1000], verbose=True)
```
provides in a few seconds:
```
Running normal form detection ...
Didn't match any normal forms.
Running data consistency measure ...
Considering 4 dialects.
SimpleDialect(',', '', ''): P = 4.500000 T = 0.000000 Q = 0.000000
SimpleDialect('', '', ''): P = 0.009000 T = 0.000000 Q = 0.000000
SimpleDialect(' ', '', ''): P = 1.562500 T = 0.312500 Q = 0.488281
SimpleDialect('|', '', ''): P = 8.571429 T = 0.952381 Q = 8.163265
SimpleDialect('|', '', '')
```
@GjjvdBurg If this is not a known problem, may I suggest using some variation of “infinite binary search”?
- We start with a small size and truncate the file to that size.
- The detected dialect may be wrong because we are only looking at a small subset of the file, so we double the amount of content we provide to the sniffer and check whether it returns the same answer.
- We repeat this a predetermined number of times (for instance, 4 times), until the sniffer has detected the same dialect on successively larger portions of the file (see the sketch below).
I have implemented this algorithm here: https://gist.github.com/jlumbroso/c123a30a2380b58989c7b12fe4b4f49e
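In outline, a minimal sketch of the approach (the `probe_sniff` name matches the gist, but the starting size and stopping rule here are illustrative assumptions; see the gist for the exact parameters):

```python
import clevercsv

def probe_sniff(content, initial_size=1000, confirmations=4):
    """Sniff the dialect on a growing prefix of ``content``.

    The sample is doubled until the detected dialect has stayed the
    same on several successively larger prefixes, or until the whole
    content has been consumed.
    """
    size, dialect, confirmed = initial_size, None, 0
    while confirmed < confirmations and size < 2 * len(content):
        new_dialect = clevercsv.Sniffer().sniff(content[:size])
        if new_dialect is not None and new_dialect == dialect:
            confirmed += 1
        else:
            confirmed = 0
        dialect, size = new_dialect, size * 2
    return dialect
```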
When I run it on the above-mentioned file, it immediately (without any futzing) produces the correct answer:
```python
In [3]: probe_sniff(content)
Out[3]: SimpleDialect('|', '', '')
```
And on the off-chance you would like me to add this algorithm to the codebase, where would it go?
Top GitHub Comments
Hi @jlumbroso,
This took a bit longer than expected, but I've now added a comparison study to the repo (see here). This experiment evaluates the accuracy and runtime of dialect detection as a function of the number of lines used for the detection. I've included a version of the "infinite binary search" you suggested, dubbed `clevercsv_grow` in the figures.

I changed the parameters of your algorithm a bit when testing, as the results depend on how many lines are used initially and how many steps are taken. In the comparison, `clevercsv_grow` starts with a buffer of 100 lines, which explains why its accuracy is the same as CleverCSV's when `line_count <= 100`. The most relevant figures are those for dialect detection accuracy on files with at least 1000 or 10000 lines. They show that using a growing buffer unfortunately costs a few percent in accuracy. However, as the runtime plots show, it makes quite a difference for large files. A potential solution might be some sort of hybrid method that uses the binary-search technique only when the number of lines is above some threshold (roughly along the lines of the sketch below).

I'm curious to hear your thoughts on this!
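A sketch of such a hybrid (the threshold, initial sample size, and function name are illustrative placeholders, not part of the package):

```python
import clevercsv

def hybrid_sniff(content, line_threshold=1000, initial_size=10000):
    """Run full detection on small inputs, a growing sample on large ones."""
    if content.count("\n") <= line_threshold:
        # Small file: the full consistency measure is fast enough.
        return clevercsv.Sniffer().sniff(content)
    # Large file: double the sample until the detected dialect stops
    # changing (the growing-buffer idea discussed above).
    size, dialect = initial_size, None
    while size < 2 * len(content):
        new_dialect = clevercsv.Sniffer().sniff(content[:size])
        if new_dialect is not None and new_dialect == dialect:
            return new_dialect
        dialect, size = new_dialect, size * 2
    return dialect
```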
Hi @jlumbroso, thanks for the detailed bug report!
You’re describing an issue that I’ve been thinking about for a while, but never had the time to seriously investigate, so I’m glad that you raised this. Yes, CleverCSV can be quite slow for large files, and chunking the file during detection would be a good solution to this. I agree this is something the library should handle if it can.
I should mention, though, that the `read_csv` wrapper and the `clevercsv` command line application both take a `num_chars` argument, added specifically for large files (see the usage sketch at the end of this comment; moreover, the Sniffer docs suggest using a sample). But if that's not clear from the CleverCSV documentation, that's on me and should be addressed.

I would definitely be open to adding a fix for this to the package, but I'm a bit undecided about the best approach. In any case, I'd want to do some testing on the accuracy of this method compared to reading the full file. An alternative I've had in mind is to keep the `Parser` object in memory for each dialect and feed these parsers additional characters in the same way that you propose. That way the type detection wouldn't have to re-run on cells it has already seen in a previous batch, and we could be a bit more clever about dropping unlikely dialects. On the other hand, that would probably require more significant refactoring/rewriting of the code, whereas your algorithm could wrap the `Detector.detect` method quite easily.

All that said, I did some debugging on the specific file you mention, and it appears to suffer from the same issue as #13, but with the `url` regex. With the modified url regex suggested in that issue, I can detect the dialect in about 5 minutes on my machine. While that's still way too long, it's an improvement over the 30 minutes you found. (Using 10000 characters gives the dialect in about 0.6 seconds.)

So I'll first update the package with that regex and improve the documentation w.r.t. using a smaller sample on big files. Then I'd like to add some sort of benchmark to the repo that allows us to evaluate the performance of the chunking method (it'll likely be a few weeks before I get around to this, unfortunately). If you'd like to work on adding your algorithm to the package, it would be great if you could prepare a pull request; we can discuss the implementation details there more easily (again, I think wrapping the detect method is probably the easiest at this point). How does that sound?
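As a usage sketch of that `num_chars` escape hatch (the 10000-character sample size is just the figure from the timing above):

```python
import clevercsv

# Let the wrapper detect the dialect from only the first 10000
# characters instead of scanning the whole file.
rows = clevercsv.read_csv("fec-indiv-1979-1980.csv", num_chars=10000)

# Or sniff a bounded sample yourself and reuse the result:
with open("fec-indiv-1979-1980.csv", newline="") as fh:
    dialect = clevercsv.Sniffer().sniff(fh.read(10000))
```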