
Cannot perform join between points and polygon using Scala 2.11 and Spark 2.3.1

See original GitHub issue

I'm currently trying to join two dataframes with the following command:

val df_green_pickup = green_data.join(neighborhoods).where($"pickup_point" within $"polygon")
display(df_green_pickup)
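
For context, a minimal sketch of what the surrounding setup presumably looks like (assuming Magellan 1.0.5 on Spark 2.3.1 in a notebook where spark and spark.implicits._ are in scope; the file paths and coordinate column names are illustrative, not from the original issue):

// Sketch of the presumed setup; paths and coordinate columns are hypothetical.
import org.apache.spark.sql.magellan.dsl.expressions._  // provides point(), within, intersects
import org.apache.spark.sql.types.DoubleType

// Polygons loaded from a shapefile through Magellan's data source.
val neighborhoods = spark.read.format("magellan").load("/data/neighborhoods/")

// Trip records, with pickup coordinates converted to Magellan points.
val green_data = spark.read.option("header", "true").csv("/data/green_tripdata.csv")
  .withColumn("pickup_point",
    point($"pickup_longitude".cast(DoubleType), $"pickup_latitude".cast(DoubleType)))

// Cartesian join filtered by the within predicate; codegen for Within is
// where the NoSuchMethodError below is thrown at runtime.
val df_green_pickup = green_data.join(neighborhoods)
  .where($"pickup_point" within $"polygon")
display(df_green_pickup)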

Running the join produces the following exception:

SparkException: Job aborted due to stage failure: Task 0 in stage 44.0 failed 4 times, most recent failure: Lost task 0.3 in stage 44.0 (TID 875, 10.139.64.11, executor 10): java.lang.NoSuchMethodError: org.apache.spark.sql.catalyst.expressions.codegen.ExprCode.value()Ljava/lang/String;
  at org.apache.spark.sql.catalyst.expressions.Within$$anonfun$doGenCode$2.apply(predicates.scala:202)
  at org.apache.spark.sql.catalyst.expressions.Within$$anonfun$doGenCode$2.apply(predicates.scala:180)
  at org.apache.spark.sql.catalyst.expressions.BinaryExpression.nullSafeCodeGen(Expression.scala:553)
  at org.apache.spark.sql.catalyst.expressions.Within.doGenCode(predicates.scala:180)
  at org.apache.spark.sql.catalyst.expressions.Expression$$anonfun$genCode$2.apply(Expression.scala:111)
  at org.apache.spark.sql.catalyst.expressions.Expression$$anonfun$genCode$2.apply(Expression.scala:108)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.catalyst.expressions.Expression.genCode(Expression.scala:108)
  at org.apache.spark.sql.catalyst.expressions.codegen.GeneratePredicate$.create(GeneratePredicate.scala:60)
  at org.apache.spark.sql.catalyst.expressions.codegen.GeneratePredicate$.generate(GeneratePredicate.scala:46)
  at org.apache.spark.sql.execution.SparkPlan.newPredicate(SparkPlan.scala:382)
  at org.apache.spark.sql.execution.joins.CartesianProductExec$$anonfun$doExecute$1.apply(CartesianProductExec.scala:84)
  at org.apache.spark.sql.execution.joins.CartesianProductExec$$anonfun$doExecute$1.apply(CartesianProductExec.scala:81)
  at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndexInternal$1$$anonfun$apply$24.apply(RDD.scala:830)
  at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndexInternal$1$$anonfun$apply$24.apply(RDD.scala:830)
  at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:42)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:336)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:300)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
  at org.apache.spark.scheduler.Task.run(Task.scala:112)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:384)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)

Has anyone tried this with the same versions?

Thank you

Issue Analytics

  • State: open
  • Created: 5 years ago
  • Comments: 9 (2 by maintainers)

Top GitHub Comments

3 reactions
Perados commented, Sep 19, 2018

@djpirra and @lmerchante Magellan 1.0.5 is not compatible with Spark 2.3.1. You have to wait until the next release or compile from source, since the master branch is already compatible with Spark 2.3.1. I tested it last week and it works just fine.
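
For reference, building from source would look roughly like this (a sketch; it assumes sbt is installed, and the exact jar name and version under target/ may differ):

git clone https://github.com/harsha2010/magellan.git
cd magellan
sbt package   # produces a jar under target/scala-2.11/ (exact name may vary)
# attach the built jar to the cluster or shell, e.g.:
spark-shell --jars target/scala-2.11/magellan_2.11-1.0.6-SNAPSHOT.jar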

2 reactions
djpirra commented, Sep 20, 2018

It does not work with full compatibility. I compiled from master and was able to read the points and polygons, but it didn't work when I tried to intersect them… Something is still not working right.

– Best regards, Luis Simões
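
The failing intersection step presumably looked something like the following (a sketch using Magellan's intersects predicate; the dataframe and column names follow the snippet earlier in the thread, and df_intersections is a hypothetical name):

// Same query shape as the pickup join above, but with intersects;
// per this comment it still failed on the master build at the time.
val df_intersections = green_data.join(neighborhoods)
  .where($"pickup_point" intersects $"polygon")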

Read more comments on GitHub >

Top Results From Across the Web

geospark-datasys/Lobby - Gitter
Hi, I am using geospark spatial join on skewed data set (polygons in GB and points in KB/MB) and it's taking very long...

No Encoder found for org.locationtech.jts.geom.Point
The versions that I am using in my SBT are: spark: 2.3.0; scala: 2.11.12; geomesa: 2.2.1; jst-*: 1.17.0-SNAPSHOT.

Overview - Spark 2.3.1 Documentation - Apache Spark
Get Spark from the downloads page of the project website. ... For the Scala API, Spark 2.3.1 uses Scala 2.11. You will need...

GeoSpark: Bring sf to spark - README
And the geospark R package is keeping close with geospatial and big data ... as y FROM polygons) polygons INNER JOIN (SELECT ST_GeomFromWKT...

Using Big Data Spatial and Graph with Spatial Data
When you create custom processing classes, you can use the Oracle Spatial Hadoop Raster Simulator Framework to do the following by "pretending" to...
