toDF() isn't working on the shell
I get the same error (see attached) when trying orgs.toDF().show()
or memberships.select_fields(['organization_id']).toDF().distinct().show()
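For context, here is a minimal sketch of the kind of Glue script these calls come from; the database and table names below are assumptions, modeled on the AWS Glue "Joining and relationalizing data" example referenced further down:

from awsglue.context import GlueContext
from pyspark.context import SparkContext

glueContext = GlueContext(SparkContext.getOrCreate())

# Load two DynamicFrames from the Glue Data Catalog (names are illustrative).
orgs = glueContext.create_dynamic_frame.from_catalog(
    database="legislators", table_name="organizations_json")
memberships = glueContext.create_dynamic_frame.from_catalog(
    database="legislators", table_name="memberships_json")

# toDF() converts a DynamicFrame into a Spark DataFrame so DataFrame methods apply:
orgs.toDF().show()
memberships.select_fields(['organization_id']).toDF().distinct().show()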
Issue Analytics
- State:
- Created 6 years ago
- Comments: 9 (2 by maintainers)
Top Results From Across the Web
toDF is not working in spark scala ide, but works perfectly ...
When I ran it from the shell, it works perfectly. But in the IDE, it gives a compilation error. Please help package...

Solved: Spark/Scala Error: value toDF is not a member of o. ...
Hi all, I am trying to create a DataFrame of a text file which gives me the error: "value toDF is not a member...

Spark SQL, DataFrames and Datasets Guide
When working with Hive one must construct a HiveContext, which inherits from SQLContext and adds support for finding tables in the...

Create DataFrame with Examples - PySpark
You can manually create a PySpark DataFrame using toDF() and ... relational databases, which I've not covered here and will leave this...

Code example: Joining and relationalizing data - AWS Glue
Following the steps in Working with crawlers on the AWS Glue console, ... The toDF() converts a DynamicFrame to an Apache Spark DataFrame...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Thanks for using AWS Glue.
Please refer to step 5 of the AWS Glue documentation on using a REPL shell: http://docs.aws.amazon.com/glue/latest/dg/tutorial-development-endpoint-repl.html
The solution is to stop the existing SparkContext and create a new one through GlueContext:
spark.stop()
glueContext = GlueContext(SparkContext.getOrCreate())
If you have further questions, you can also use the AWS Glue Forum: https://forums.aws.amazon.com/forum.jspa?forumID=262
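For completeness, a minimal sketch of that fix as it would be entered in the Glue REPL; it assumes the shell has already created a SparkSession named spark, and the trailing orgs.toDF().show() call reuses the DynamicFrame name from the issue above:

from awsglue.context import GlueContext
from pyspark.context import SparkContext

# Stop the SparkContext/SparkSession the REPL created on startup...
spark.stop()
# ...and recreate it through GlueContext so DynamicFrame operations work correctly.
glueContext = GlueContext(SparkContext.getOrCreate())

# After this, toDF() on a DynamicFrame returns a regular Spark DataFrame, e.g.:
# orgs.toDF().show()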
The fix with
spark.stop()
worked for me. Let me also post the exact error message here for better indexing by search engines: