CSV import config is very slow for a 50K row file
Please note, this is not for the import itself; this is for the dialog that defines the file and the format of the upload.

OS: macOS 11.6.7
DBeaver version: 22.1.1.202206261508
DB: PostgreSQL 14.2
Driver name: postgres
I'm trying to upload a file with 50,000 rows and I can't get past the file selection dialog. The wizard gets stuck on "Update data producer settings from import stream". It might eventually move to the next screen, but by that time the connection to the database has dropped. My table has 10 columns.
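A possible workaround while the wizard is stuck (an editor's sketch, not something suggested in the issue thread): load the file outside DBeaver with PostgreSQL's COPY. The file path and the CSV options below are assumptions about the attached file's format.

```sql
-- Hedged workaround sketch: bypass the import wizard and load the CSV
-- directly with PostgreSQL's COPY. Path, HEADER, and DELIMITER are
-- assumptions about the attached file, not values confirmed in the issue.
COPY scribe.ff_communication_log
FROM '/path/to/messages_small.txt'
WITH (FORMAT csv, HEADER true, DELIMITER ',');
```

When the file lives on the client machine rather than the server, psql's \copy variant of the same statement runs client-side.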
This is the DDL, and I have attached the file: [messages_small.txt](https://github.com/dbeaver/dbeaver/files/9039331/messages_small.txt)
```sql
CREATE TABLE scribe.ff_communication_log (
id_ff_communication_log int4 NOT NULL,
id_communication int4 NOT NULL,
communication_instance_id int4 NULL,
id_consumer int4 NULL,
id_auth_group int4 NULL,
id_barcode int4 NULL,
manual_entry_enc varchar(20) NULL DEFAULT NULL::character varying,
email varchar(255) NULL DEFAULT NULL::character varying,
tlf varchar(18) NULL DEFAULT NULL::character varying,
id_ff_channel int4 NULL DEFAULT 1,
hash varchar(255) NULL DEFAULT NULL::character varying,
status varchar(45) NULL DEFAULT NULL::character varying,
batch_number int4 NULL,
"token" varchar(255) NULL DEFAULT NULL::character varying,
CONSTRAINT ff_communication_log_pkey PRIMARY KEY (id_ff_communication_log)
);
CREATE INDEX email_ix ON scribe.ff_communication_log USING btree (email);
CREATE INDEX ff_communication_log_account_fk ON scribe.ff_communication_log USING btree (id_auth_group);
CREATE INDEX ff_communication_log_barcode_fk ON scribe.ff_communication_log USING btree (id_barcode);
CREATE INDEX ff_communication_log_communication_instance_id_ix ON scribe.ff_communication_log USING btree (communication_instance_id);
CREATE INDEX ff_communication_log_customer_fk ON scribe.ff_communication_log USING btree (id_consumer);
CREATE INDEX ff_communication_log_ff_communication_fk ON scribe.ff_communication_log USING btree (id_communication);
CREATE INDEX hash ON scribe.ff_communication_log USING btree (hash);
CREATE INDEX id_ff_channel ON scribe.ff_communication_log USING btree (id_ff_channel);
CREATE INDEX indexmanualentry ON scribe.ff_communication_log USING btree (manual_entry_enc);
```
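Worth noting for the load itself (separate from the dialog slowness): each inserted row has to update the primary key plus nine secondary indexes. A common bulk-load pattern, offered here as a general sketch rather than advice from the thread, is to drop the secondary indexes, load, and rebuild:

```sql
-- General bulk-load sketch (not from the issue): drop secondary indexes,
-- load the rows, then rebuild each index in a single pass.
DROP INDEX IF EXISTS scribe.email_ix;
DROP INDEX IF EXISTS scribe.hash;
-- ... drop the remaining secondary indexes the same way ...

-- load the data here (COPY, or the DBeaver import wizard)

-- Rebuild afterwards; definitions copied from the DDL above.
CREATE INDEX email_ix ON scribe.ff_communication_log USING btree (email);
CREATE INDEX hash ON scribe.ff_communication_log USING btree (hash);
```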
Issue Analytics
- Created: a year ago
- Comments: 12 (5 by maintainers)
Top Results From Across the Web

Import-Csv slow performance - powershell - Stack Overflow
"I'm using Import-Csv as suggested and I noted the performance is still relatively very slow. On average it took around 20 minutes to execute on..."

CSV import background job speedup · Legacy Forums - Omeka
"I'm using Omeka CSV Import and the imports take a long time to finish and I'm ... How big and how many files are..."

Solved: CSV import too slow - Atlassian Community
"Hello, I have a problem with the import from a CSV which does not have many columns or lines (10x7), it is..."

Import Data from 48 GB csv File to SQL Server
"The script even batches your import into 50K rows so that during import, it does not hog the memory. Edit: ON SQL Server..."

High-Performance Techniques for Importing CSV to SQL ...
"If you're wondering why batching was used, it's because the datatable seems to get exponentially slower as it grows above a few hundred..."
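Several of these results credit batching for the speedup. A minimal SQL illustration of the idea, using made-up values (real imports batch anywhere from a few hundred to 50K rows per statement):

```sql
-- Batching sketch with hypothetical values: one multi-row INSERT per
-- batch instead of one statement per row cuts round-trips and commit
-- overhead.
BEGIN;
INSERT INTO scribe.ff_communication_log
  (id_ff_communication_log, id_communication, email)
VALUES
  (1, 101, 'a@example.com'),
  (2, 101, 'b@example.com'),
  (3, 102, 'c@example.com');  -- ...up to the batch size...
COMMIT;
```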
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
> Thank you for your help.

> I'm not sure if we can fix that. I guess the issue can be closed now.