
Table creation doesn't support writes due to incorrect configurations

Bug

Describe the problem

While running TestBenchmark and the other benchmark tests, I ran into the following error when trying to create a Delta table and insert values into it: Table implementation does not support writes.

Steps to reproduce

  • Download the benchmark classes into a new IntelliJ project and set up build.sbt with the configuration noted below
  • Run TestBenchmark.scala with these params (not using the whole test harness; see the sketch below): --test-param testdb --benchmark-path file:///delta-data/miniobenchmarks --db-name miniotest2
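
For reference, a minimal sketch of driving the same entry point programmatically rather than from an IDE run configuration. The class and arguments are the ones above; the wrapper object is hypothetical:

import benchmark.TestBenchmark

// Hypothetical wrapper that invokes the benchmark's main with the same args.
object ReproduceIssue {
  def main(args: Array[String]): Unit = {
    TestBenchmark.main(Array(
      "--test-param", "testdb",
      "--benchmark-path", "file:///delta-data/miniobenchmarks",
      "--db-name", "miniotest2"))
  }
}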

Observed results

TestBenchmark fails when trying to create the table. Failing query:

CREATE TABLE $tableName USING delta SELECT 1 AS x

org.apache.spark.SparkException: Table implementation does not support writes: miniotest2.test
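
Judging from the stack trace below, the exception is raised in Spark's DataSourceV2 write path: CreateTableAsSelectExec only proceeds when the resolved table implements SupportsWrite. A rough paraphrase of that check, as an illustration only (not the actual Spark source):

import org.apache.spark.sql.connector.catalog.{SupportsWrite, Table}

// A table resolved through a catalog without Delta support does not mix in
// SupportsWrite, so the match fails with the error seen above.
def ensureWritable(table: Table): SupportsWrite = table match {
  case writable: SupportsWrite => writable
  case other =>
    throw new UnsupportedOperationException(
      s"Table implementation does not support writes: ${other.name}")
}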

Expected results

The table is created and seeded with a single row containing the value 1.

Further details

The failing statement (CREATE TABLE $tableName USING delta SELECT 1 AS x) succeeds when run in a Databricks notebook.

This alternate syntax also failed with the same error:

runQuery(s"CREATE TABLE $tableName USING delta", "table-create-test")
runQuery(s"INSERT INTO $tableName VALUES (10), (9), (8), (7), (6)", "table-insert-test")


This syntax succeeded:
runQuery(s"CREATE TABLE $tableName USING delta", "table-create-test")
spark.range(0,5).write.format("delta").mode("append").save(dbLocation+"/"+tableName)
runQuery(s"INSERT INTO $tableName VALUES (10), (9), (8), (7), (6)", "table-insert-test")

Full Stack Trace

org.apache.spark.SparkException: Table implementation does not support writes: miniotest2.test
  at org.apache.spark.sql.errors.QueryExecutionErrors$.unsupportedTableWritesError(QueryExecutionErrors.scala:761)
  at org.apache.spark.sql.execution.datasources.v2.TableWriteExecHelper.$anonfun$writeToTable$1(WriteToDataSourceV2Exec.scala:515)
  at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1538)
  at org.apache.spark.sql.execution.datasources.v2.TableWriteExecHelper.writeToTable(WriteToDataSourceV2Exec.scala:491)
  at org.apache.spark.sql.execution.datasources.v2.TableWriteExecHelper.writeToTable$(WriteToDataSourceV2Exec.scala:486)
  at org.apache.spark.sql.execution.datasources.v2.CreateTableAsSelectExec.writeToTable(WriteToDataSourceV2Exec.scala:68)
  at org.apache.spark.sql.execution.datasources.v2.CreateTableAsSelectExec.run(WriteToDataSourceV2Exec.scala:92)
  at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
  at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
  at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
  at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)
  at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:109)
  at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:169)
  at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95)
  at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
  at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
  at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:94)
  at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:584)
  at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:176)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:584)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:560)
  at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:94)
  at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:81)
  at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:79)
  at org.apache.spark.sql.Dataset.<init>(Dataset.scala:220)
  at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
  at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
  at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:622)
  at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:617)
  at benchmark.Benchmark.runQuery(Benchmark.scala:153)
  at benchmark.TestBenchmark.runInternal(TestBenchmark.scala:71)
  at benchmark.Benchmark.run(Benchmark.scala:128)
  at benchmark.TestBenchmark$.$anonfun$main$1(TestBenchmark.scala:84)
  at benchmark.TestBenchmark$.$anonfun$main$1$adapted(TestBenchmark.scala:83)
  at scala.Option.foreach(Option.scala:437)
  at benchmark.TestBenchmark$.main(TestBenchmark.scala:83)
  at benchmark.TestBenchmark.main(TestBenchmark.scala)

Environment information

I copied the benchmark package (https://github.com/delta-io/delta/tree/master/benchmarks/src/main/scala/benchmark) into my own project. My build.sbt contains:

scalaVersion := "2.13.6"
val sparkVersion = "3.3.0"
val deltaVersion = "2.0.0"

libraryDependencies += "org.apache.spark" %% "spark-core" % sparkVersion
libraryDependencies += "org.apache.spark" %% "spark-sql" % sparkVersion
libraryDependencies += "org.apache.spark" %% "spark-catalyst" % sparkVersion
libraryDependencies += "io.delta" %% "delta-core" % deltaVersion
libraryDependencies += "io.delta" % "delta-storage" % deltaVersion
libraryDependencies += "org.apache.hadoop" % "hadoop-aws" % "3.3.4"
libraryDependencies += "com.amazonaws" % "aws-java-sdk" % "1.12.290"
libraryDependencies += "com.amazonaws" % "aws-java-sdk-bundle" % "1.12.290"
libraryDependencies += "com.github.scopt" %% "scopt" % "4.0.1"

Willingness to contribute

The Delta Lake Community encourages bug fix contributions. Would you or another member of your organization be willing to contribute a fix for this bug to the Delta Lake code base?

  • Yes. I can contribute a fix for this bug independently.
  • Yes. I would be willing to contribute a fix for this bug with guidance from the Delta Lake community.
  • No. I cannot contribute a bug fix at this time.

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

1 reaction
zsxwing commented on Aug 31, 2022

Just to be clear, I meant the right code should be:

.config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
.config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
.config("spark.ui.proxyBase", "")
.master("local[*]")
0 reactions
christoburp commented on Aug 31, 2022

It looks like it was a configuration problem. Thanks for helping me resolve it.
