Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Exception During Insert

See original GitHub issue

Setup: org.apache.hudi:hudi-spark-bundle_2.11:0.5.3, org.apache.spark:spark-avro_2.11:2.4.4
Client: PySpark
Storage: S3

hudi_options = {
    'hoodie.table.name': self.table_name,
    'hoodie.datasource.write.recordkey.field': 'column',
    'hoodie.datasource.write.table.name': self.table_name,
    'hoodie.datasource.write.precombine.field': 'column',
    'hoodie.datasource.write.partitionpath.field': 'dl_snapshot_date',
    'hoodie.upsert.shuffle.parallelism': 2,
    'hoodie.insert.shuffle.parallelism': 2
}

The data gets written and can be loaded back with Spark, but the write produces the exception below (a minimal reproduction sketch follows the stack trace):

20/07/02 21:53:36 ERROR PriorityBasedFileSystemView: Got error running preferred function. Trying secondary
org.apache.hudi.exception.HoodieRemoteException: xx.xx.xx.xx:xxxx failed to respond
	at org.apache.hudi.common.table.view.RemoteHoodieTableFileSystemView.getPendingCompactionOperations(RemoteHoodieTableFileSystemView.java:376)
	at org.apache.hudi.common.table.view.PriorityBasedFileSystemView.execute(PriorityBasedFileSystemView.java:66)
	at org.apache.hudi.common.table.view.PriorityBasedFileSystemView.getPendingCompactionOperations(PriorityBasedFileSystemView.java:199)
	at org.apache.hudi.table.CleanHelper.<init>(CleanHelper.java:78)
	at org.apache.hudi.table.HoodieCopyOnWriteTable.scheduleClean(HoodieCopyOnWriteTable.java:288)
	at org.apache.hudi.client.HoodieCleanClient.scheduleClean(HoodieCleanClient.java:118)
	at org.apache.hudi.client.HoodieCleanClient.clean(HoodieCleanClient.java:95)
	at org.apache.hudi.client.HoodieWriteClient.clean(HoodieWriteClient.java:835)
	at org.apache.hudi.client.HoodieWriteClient.postCommit(HoodieWriteClient.java:512)
	at org.apache.hudi.client.AbstractHoodieWriteClient.commit(AbstractHoodieWriteClient.java:157)
	at org.apache.hudi.client.AbstractHoodieWriteClient.commit(AbstractHoodieWriteClient.java:101)
	at org.apache.hudi.client.AbstractHoodieWriteClient.commit(AbstractHoodieWriteClient.java:92)
	at org.apache.hudi.HoodieSparkSqlWriter$.checkWriteStatus(HoodieSparkSqlWriter.scala:268)
	at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:188)
	at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:108)
	at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
	at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.lang.Thread.run(Thread.java:748)
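For context, here is a minimal sketch of the reported write and read-back using the options above. It assumes Hudi 0.5.3 on Spark 2.4.4 as per the setup; the table name, S3 path, and DataFrame contents are illustrative placeholders, not values from the issue.

from pyspark.sql import SparkSession

# Placeholder values -- the real table name and S3 path are not given in the issue.
table_name = "example_table"
base_path = "s3://example-bucket/hudi/example_table"

spark = (
    SparkSession.builder
    .appName("hudi-insert-repro")
    # The bundles from the setup above must be on the classpath, e.g. via
    # --packages org.apache.hudi:hudi-spark-bundle_2.11:0.5.3,org.apache.spark:spark-avro_2.11:2.4.4
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

# The options from the issue, with the placeholder table name substituted.
hudi_options = {
    'hoodie.table.name': table_name,
    'hoodie.datasource.write.recordkey.field': 'column',
    'hoodie.datasource.write.table.name': table_name,
    'hoodie.datasource.write.precombine.field': 'column',
    'hoodie.datasource.write.partitionpath.field': 'dl_snapshot_date',
    'hoodie.upsert.shuffle.parallelism': 2,
    'hoodie.insert.shuffle.parallelism': 2
}

# Illustrative data: a record key column, a value, and the partition column.
df = spark.createDataFrame(
    [("k1", 1, "2020-07-02")],
    ["column", "value", "dl_snapshot_date"],
)

# The write that triggers the HoodieRemoteException in the stack trace above.
df.write.format("org.apache.hudi").options(**hudi_options).mode("append").save(base_path)

# Reading back still works, matching the report that the data lands despite
# the error; the glob walks one level per partition segment plus the files.
spark.read.format("org.apache.hudi").load(base_path + "/*/*").show()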

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 24 (14 by maintainers)

Top GitHub Comments

1 reaction
bvaradar commented, Jul 14, 2020

@asheeshgarg : If the table is registered as a plain Parquet table, Presto queries will start showing duplicates when multiple file versions are present, or could fail while writes are happening (no snapshot isolation). Creating the table via Hive sync ensures that only valid, single file versions are read.
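For readers applying this advice from PySpark, a hedged sketch of enabling Hive sync on the write from the reproduction code above might look like the following; the option keys are the standard hoodie.datasource.hive_sync.* settings, while the database name and HiveServer2 JDBC URL are placeholders:

# Hedged sketch, continuing the reproduction code after the stack trace above:
# Hive-sync options register the table in the Hive metastore so engines such
# as Presto read a consistent, committed snapshot instead of every raw
# parquet file version. Database name and JDBC URL are placeholders.
hive_sync_options = {
    'hoodie.datasource.hive_sync.enable': 'true',
    'hoodie.datasource.hive_sync.database': 'default',  # placeholder
    'hoodie.datasource.hive_sync.table': table_name,
    'hoodie.datasource.hive_sync.partition_fields': 'dl_snapshot_date',
    'hoodie.datasource.hive_sync.partition_extractor_class':
        'org.apache.hudi.hive.MultiPartKeysValueExtractor',
    'hoodie.datasource.hive_sync.jdbcurl': 'jdbc:hive2://hiveserver:10000',  # placeholder
}

(df.write.format("org.apache.hudi")
    .options(**hudi_options)
    .options(**hive_sync_options)
    .mode("append")
    .save(base_path))

Once synced this way, Presto queries go through the registered Hive table rather than bare parquet files, avoiding the duplicate reads described above.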

0 reactions
bvaradar commented, Aug 4, 2020

Closing this issue. Please reopen if needed.

Read more comments on GitHub >

Top Results From Across the Web

PL/SQL - Insert command doesn't execute in EXCEPTION block
The first issue I see is that you are trying to insert into the same table in your exception that you are attempting...
Read more >
10 Handling PL/SQL Errors
In PL/SQL, the pragma EXCEPTION_INIT tells the compiler to associate an exception name with an Oracle error number. That lets you refer to...
Read more >
Insert error message to a table in exception handler
Insert error message to a table in exception handler : Exception Handle « PL SQL « Oracle PL / SQL.
Read more >
Reg: Exception on INSERT Statement - SAP Community
Hi Friends, I am getting the CX_SY_OPEN_SQL_DB type of exception quite often. There are no duplicate records in the table, still I am...
Read more >
Exception handling for INSERT - MSDN - Microsoft
User1355545883 posted. I simply want the syntax for how to encapsulate an INSERT statement with error-handling. I would like...
Read more >
