"No suitable driver" error
See original GitHub issue

When trying to write to Azure SQL using the snippet below, we see the following error:
final_df.write.format("com.microsoft.sqlserver.jdbc.spark") \
    .mode("overwrite") \
    .option("url", url) \
    .option("dbtable", table_name) \
    .option("user", username) \
    .option("password", password) \
    .save()
java.sql.SQLException: No suitable driver
at java.sql.DriverManager.getDriver(DriverManager.java:315)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.$anonfun$driverClass$2(JDBCOptions.scala:105)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:105)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcOptionsInWrite.<init>(JDBCOptions.scala:194)
at com.microsoft.sqlserver.jdbc.spark.SQLServerBulkJdbcOptions.<init>(SQLServerBulkJdbcOptions.scala:25)
at com.microsoft.sqlserver.jdbc.spark.SQLServerBulkJdbcOptions.<init>(SQLServerBulkJdbcOptions.scala:27)
at com.microsoft.sqlserver.jdbc.spark.DefaultSource.createRelation(DefaultSource.scala:55)
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:90)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:127)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:126)
at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:962)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:767)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:962)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:414)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:398)
Issue Analytics
- State:
- Created 2 years ago
- Reactions: 1
- Comments: 6
Top Results From Across the Web

No Suitable Driver Found For JDBC - Javatpoint
"No suitable driver found for JDBC" is an exception in Java that generally occurs when any driver is not found for making the...

No suitable driver found for jdbc:mysql://localhost:3306/dbname
Make sure you run this first: Class.forName("com.mysql.jdbc.Driver");. This forces the driver to register itself, so that Java knows how to ...

How to Fix java.sql.SQLException: No suitable driver found for ...
In order to solve this error, you need the MySQL JDBC driver like mysql-connector-java-5.1.36.jar in your classpath. If you use a driver which...

java.sql.SQLException: No suitable driver found for 'jdbc ...
This error comes when you are trying to connect to MySQL database from Java program using JDBC but either the JDBC driver for...

Resolve java.sql.SQLException: No suitable driver found for ...
You will get this type of exception whenever your JDBC URL is not accepted by any of the loaded JDBC drivers by the...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Already… here is what I could do to make it work. Something is either goofed up in the dependencies, or the docs need to be updated.
Add the JDBC driver explicitly to Spark:
bin/spark-shell --packages org.apache.hadoop:hadoop-azure:2.7.3,com.microsoft.azure:azure-storage:8.6.6,com.microsoft.azure:spark-mssql-connector_2.12:1.1.0,com.microsoft.sqlserver:mssql-jdbc:8.4.1.jre8
Add the driver option:
final_df.write.format("com.microsoft.sqlserver.jdbc.spark").mode("overwrite").option("url", url).option("dbtable", table_name).option("user", username).option("password", password).option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver").save()
Glad to know it works now. The connector does have the JDBC driver as a dependency.
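A quick way to confirm the driver jar actually landed on the driver's classpath is to ask the JVM for the class directly. A debugging sketch, assuming a live PySpark session; note that sparkContext._jvm is an internal py4j handle, not a public API:

# Raises a py4j error if the jar is missing; returns the class otherwise.
# sparkContext._jvm is internal to PySpark -- debugging use only.
spark.sparkContext._jvm.java.lang.Class.forName(
    "com.microsoft.sqlserver.jdbc.SQLServerDriver")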