Multiple results tabs opening when executing script under Apache Spark connector
Description
When executing a script on an Apache Spark connection, a results tab is opened for each statement, whether or not it returns a result set. Consider the following query:
CREATE TABLE MyTable(PersonName VARCHAR(50));
INSERT INTO MyTable(PersonName) SELECT 'John Doe';
SELECT * FROM MyTable;
DROP TABLE MyTable;
On other connections (tested on MS SQL Server), one result window is opened, which is expected since there is one SELECT statement. On Apache Spark, however, four tabs are opened, one for each statement.
MS SQL Server: (screenshot)
Apache Spark: (screenshot)
DBeaver Version
22.2.4.202211061524
Operating System
Windows 11 22H2
Database and driver
Spark 3.3.1; Apache Spark driver 1.2.1.spark2
Steps to reproduce
- Create an Apache Spark connection
- Create a script consisting of several SQL statements (DDL and DML) that do not return result sets
- Run all the statements as a single script by pressing Alt+X or clicking the Execute script button
- Multiple result windows are opened
Additional context
No response
Issue Analytics
- State:
- Created: 10 months ago
- Comments: 5 (3 by maintainers)
Top GitHub Comments
Wow, thank you! I believe this would be enough.
@ShadelessFox Are there any settings I can change or play around with on the driver? Or in DBeaver?
If you need a Spark dev environment, it’s fairly straightforward to stand up a single-node Spark cluster in standalone mode. Just tested these commands in an LXC container running Ubuntu 22.04:
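The actual commands were not captured in this copy of the thread. As a rough sketch only (the Spark version, download URL, and package name here are assumptions, not the commenter's exact commands), a single-node standalone cluster with the Thrift JDBC server could be brought up like this:

```shell
# Install a JRE (Spark 3.x runs on Java 8/11/17)
sudo apt-get update && sudo apt-get install -y openjdk-11-jre-headless

# Download and unpack Spark (3.3.1 matches the version in the report)
wget https://archive.apache.org/dist/spark/spark-3.3.1/spark-3.3.1-bin-hadoop3.tgz
tar xzf spark-3.3.1-bin-hadoop3.tgz
cd spark-3.3.1-bin-hadoop3

# Start the standalone master and one worker on the same machine
./sbin/start-master.sh
./sbin/start-worker.sh "spark://$(hostname):7077"

# Start the Thrift JDBC/ODBC server; it listens on port 10000,
# which is what DBeaver's Apache Spark driver connects to
./sbin/start-thriftserver.sh --master "spark://$(hostname):7077"
```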
Next, add a Spark connection in DBeaver. For the host, use the IP/name of the machine hosting Spark. Leave port as 10000. I put default as the database, but I don’t think this matters. Leave username/password blank:
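For reference, the Spark Thrift server speaks the HiveServer2 protocol, so the resulting JDBC URL that DBeaver builds from those settings should look roughly like this (`spark-host` is a placeholder for your machine's IP/name):

```
jdbc:hive2://spark-host:10000/default
```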
You should then be up and running.
I can also help with testing. I don’t have Java experience, but if there are any guides for compiling/running tests, I can follow them.