Dataset(df).upload() doesn't work if the only columns are the_geom and cartodb_id
It seems to want more columns when it builds a query:
```
CartoException: Batch SQL job failed with result: {'user': 'eschbacher', 'status': 'failed',
'query': "BEGIN; DROP TABLE IF EXISTS test_upload; CREATE TABLE test_upload (, the_geom geometry(MultiPolygon, 4326)); SELECT CDB_CartodbfyTable('eschbacher', 'test_upload'); COMMIT;",
'created_at': '2019-08-20T18:26:51.936Z',
'updated_at': '2019-08-20T18:26:51.954Z',
'failed_reason': 'syntax error at or near ","',
'job_id': '5b6a1396-5b6d-4581-808f-27c3c8947464'}
```
Specifically, you can see the query it constructs here:
```sql
CREATE TABLE test_upload (, the_geom geometry(MultiPolygon, 4326))
```
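A plausible cause (an assumption from the error message, not verified against the cartoframes source) is that the column definitions are joined and then a geometry column is appended with an unconditional comma, which leaves a stray separator when every non-geometry column has been filtered out:

```python
# Hypothetical sketch of how the malformed CREATE TABLE can arise.
# build_create_table and other_cols are illustrative names, not
# cartoframes internals.
def build_create_table(table, other_cols):
    cols_sql = ", ".join(f"{name} {ctype}" for name, ctype in other_cols)
    # Bug: the comma before the_geom is emitted even when cols_sql is empty
    return f"CREATE TABLE {table} ({cols_sql}, the_geom geometry(MultiPolygon, 4326))"

# With only the_geom and cartodb_id in the dataframe, the "other" column
# list is empty and the query matches the failing one above:
print(build_create_table("test_upload", []))
# -> CREATE TABLE test_upload (, the_geom geometry(MultiPolygon, 4326))
```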
Issue Analytics
- State:
- Created 4 years ago
- Comments: 16 (16 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I wouldn’t add specific logic like renaming columns, checking for nulls, checking existence, etc. I’d upload what we have, and if cartodbfication fails (for example because cartodb_id is not unique), the backend will raise an error and the user will have to modify their dataframe to make it work (in most cases by just dropping or resetting the cartodb_id column). We can solve this with good documentation. I’d prefer that we explain a simple process users can understand, rather than adding too much magic and finding lots of corner cases (in any case, we can add magic in the future, once we understand how users upload data).
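The user-side fix suggested above can be sketched with pandas (a minimal illustration; the only assumption taken from the error is that the conflicting column is named cartodb_id):

```python
import pandas as pd

# If a previous download left a cartodb_id column that blocks re-upload,
# drop it (or reset it) before calling Dataset(df).upload().
df = pd.DataFrame({"cartodb_id": [1, 1, 2], "name": ["a", "b", "c"]})

if "cartodb_id" in df.columns:
    df = df.drop(columns=["cartodb_id"]).reset_index(drop=True)

print(list(df.columns))  # -> ['name']
```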
About this, IIRC we upload the index as a regular column, so when you download the dataset back, the index is cartodb_id and you keep your previous index as a column. My 2 cents.
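The round trip described can be illustrated in pandas (a sketch of the behavior as described in the comment, not of cartoframes internals; all names here are illustrative):

```python
import pandas as pd

# Original dataframe with a named index.
df = pd.DataFrame({"value": [10, 20]}, index=pd.Index([5, 7], name="my_idx"))

# On upload, the index is stored as a regular column.
uploaded = df.reset_index()

# On download, cartodb_id becomes the new index and the previous
# index survives as an ordinary column.
downloaded = uploaded.copy()
downloaded.index = pd.RangeIndex(1, len(downloaded) + 1, name="cartodb_id")

print(downloaded.index.name)           # -> cartodb_id
print("my_idx" in downloaded.columns)  # -> True
```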
🥇 Thanks @oleurud!