executeBatch implementation
See original GitHub issue

Currently I am working on an integration between Metabase and ClickHouse.
It seems that the clickhouse-jdbc driver doesn’t provide support for formats like:
INSERT INTO test.batch_insert (s, i) VALUES (?, ?), (?, ?)
and
INSERT INTO test.batch_insert (s, i) VALUES (?, 101), (?, 102)
ClickHouse itself appears to support these formats (I’ve written some tests for them and fixed them), so can they be supported in JDBC as well?
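To make the requested form concrete: the issue asks for a single prepared statement whose VALUES clause contains several placeholder groups. A hypothetical helper (not part of clickhouse-jdbc) that generates such a statement from a table name, column list, and row count might look like this:

```java
import java.util.Collections;

public class MultiRowInsert {
    // Builds the multi-row parameterized INSERT form requested in the issue,
    // e.g. build("test.batch_insert", new String[]{"s", "i"}, 2) produces
    // "INSERT INTO test.batch_insert (s, i) VALUES (?, ?), (?, ?)".
    static String build(String table, String[] columns, int rows) {
        // One placeholder group per row: "(?, ?, ...)" with one "?" per column.
        String group = "(" + String.join(", ",
                Collections.nCopies(columns.length, "?")) + ")";
        return "INSERT INTO " + table
                + " (" + String.join(", ", columns) + ") VALUES "
                + String.join(", ", Collections.nCopies(rows, group));
    }

    public static void main(String[] args) {
        // Prints: INSERT INTO test.batch_insert (s, i) VALUES (?, ?), (?, ?)
        System.out.println(build("test.batch_insert", new String[]{"s", "i"}, 2));
    }
}
```

The table and column names here are taken from the issue; the helper itself is only a sketch of the statement shape, not driver code.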
Issue Analytics
- State:
- Created 7 years ago
- Reactions: 9
- Comments: 12 (6 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Hi, any update on this issue? Would love to see it resolved so that @Badya could hopefully complete a connector from Metabase to ClickHouse. We are using Metabase internally for Greenplum, and it is a fantastic interface for SQL queries and visualization. Being able to incorporate ClickHouse into Metabase would allow us to work on much larger datasets.
Hi. There haven’t been any updates. Let me describe my view:
ClickHouse works efficiently with big inserts of up to 1M rows, and the data for such “bulk” inserts can be compressed for network transfer.
I see two ways to support these complex inserts:
1. Turn them into simple “bulk” inserts with some tricky parsing, which I would rather not do.
2. Keep the current implementation for simple inserts and “degrade” to copying value groups into the query (producing one long query with many value groups) for complex ones. This is less efficient but easier to implement.
I prefer the second option, with clear documentation that it is inefficient and not recommended. Does this sound reasonable to you, and do you think it solves the problem? If so, I’ll do this. In any case, contributions are welcome.
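The “degrade” option described above can be sketched as plain string building: each row’s values are rendered as literals and concatenated into one long INSERT. This is my reading of the proposal, not the actual driver implementation; the class, method names, and escaping rules below are illustrative assumptions.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class InlineValuesInsert {
    // Render a value as a SQL literal (illustrative escaping only):
    // numbers pass through, strings are single-quoted with backslash escapes.
    static String literal(Object v) {
        if (v == null) return "NULL";
        if (v instanceof Number) return v.toString();
        return "'" + v.toString().replace("\\", "\\\\").replace("'", "\\'") + "'";
    }

    // Expand all rows into one long query with many value groups,
    // as in the second option: simpler than true bulk inserts, but
    // less efficient for large batches.
    static String build(String table, String[] columns, List<Object[]> rows) {
        String values = rows.stream()
                .map(r -> Arrays.stream(r)
                        .map(InlineValuesInsert::literal)
                        .collect(Collectors.joining(", ", "(", ")")))
                .collect(Collectors.joining(", "));
        return "INSERT INTO " + table
                + " (" + String.join(", ", columns) + ") VALUES " + values;
    }

    public static void main(String[] args) {
        List<Object[]> rows = List.of(
                new Object[]{"a", 101},
                new Object[]{"b", 102});
        // Prints: INSERT INTO test.batch_insert (s, i) VALUES ('a', 101), ('b', 102)
        System.out.println(build("test.batch_insert", new String[]{"s", "i"}, rows));
    }
}
```

The resulting single statement grows linearly with the batch size, which is why the comment above recommends documenting this path as inefficient and not recommended for large batches.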