java.util.NoSuchElementException: key not found: path
See original GitHub issue

I'm trying to test this code:
from pyspark.sql import SQLContext
from pyspark import SparkContext
sc = SparkContext(appName="Connect Spark with Redshift")
sql_context = SQLContext(sc)
sc._jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", "ACCESSID")
sc._jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", "ACEESKEY")
df = sql_context.read \
    .option("url", "jdbc:redshift://example.coyf2i236wts.eu-central1.redshift.amazonaws.com:5439/agcdb?user=user&password=pwd") \
    .option("dbtable", "table_name") \
    .option("tempdir", "s3://bucket/path") \
    .load()
but I'm getting the error from the title (java.util.NoSuchElementException: key not found: path).
Any ideas?
Issue Analytics
- Created 7 years ago
- Comments: 10 (5 by maintainers)
Top Results From Across the Web
Spark throws java.util.NoSuchElementException: key not found
I believe the issue is because of closure. When you run your application locally, everything might be running in the same memory/process.

key not found: path -- when writing new table · Issue #205 ...
Testing in a spark-shell, I can successfully create tables, insertIntoTable and create from External successfully; however, when loading an ...

RE: SparkSql - java.util.NoSuchElementException: key not found
NoSuchElementException: key not found: node when accessing a JSON array. From: ... but looks like my syntax is off: sqlContext.sql("SELECT path, `timestamp`, ...

java.util.NoSuchElementException: key not found: date
I'm guessing the issue is that your WishCountTable class is not being properly set on the executor classpath. Unfortunately you haven't sent the...

java.util.NoSuchElementException: key not found
java.util.NoSuchElementException: key not found: _PYSPARK_DRIVER_CALLBACK_HOST at scala.collection.MapLike$class.default(MapLike.scala:228)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Note that you wrote --jars after the Python script, but this is an option of spark-submit, so it must come before the script path. For the record, this worked for me; a sketch of the general command shape follows.
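The exact command the commenter ran isn't preserved here, so the following is an illustration only; the JAR names and the script name are hypothetical placeholders. The key point is where --jars sits in the invocation:

# --jars (like all spark-submit options) must precede the application script;
# anything placed after the script is passed to the script as its own arguments.
spark-submit \
  --jars spark-redshift.jar,redshift-jdbc-driver.jar \
  my_redshift_script.py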
Okay, and did you also add .format("com.databricks.spark.redshift") to your code?
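Putting the two comments together: without an explicit format, DataFrameReader.load() falls back to Spark's default data source, which expects a "path" option, and that would explain the key not found: path error in the title. A minimal sketch of the corrected read, assuming the spark-redshift connector JARs are on the classpath; the credentials and URLs are the placeholders from the question:

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="Connect Spark with Redshift")
sql_context = SQLContext(sc)

# Placeholder credentials, as in the question.
sc._jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", "ACCESSID")
sc._jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", "ACEESKEY")

# Naming the Redshift source explicitly is the fix suggested above; without
# it, load() uses the default source and goes looking for a "path" option.
df = sql_context.read \
    .format("com.databricks.spark.redshift") \
    .option("url", "jdbc:redshift://example.coyf2i236wts.eu-central1.redshift.amazonaws.com:5439/agcdb?user=user&password=pwd") \
    .option("dbtable", "table_name") \
    .option("tempdir", "s3://bucket/path") \
    .load()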