Standalone apache spark read from BigQuery errors out when loading data.
Hi, I am using standalone Apache Spark on-prem to connect to GCP BigQuery.
The connection works and shows the schema details (Scala), but when I try to print the data it throws an error:
df.show()
df.collect
java.lang.IllegalStateException: Could not find TLS ALPN provider; no working netty-tcnative, Conscrypt, or Jetty NPN/ALPN available
Spark env: 2.3.1 with Scala 2.11; connector used: spark-bigquery-with-dependencies_2.11.
The same issue occurs on public cloud as well. Am I missing any dependencies?
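One way to pull an ALPN provider onto the classpath in a standalone setup is to add the Conscrypt uber-jar alongside the connector when launching the shell. This is a sketch, not a verified fix for this exact environment; the version numbers below are illustrative examples, so substitute ones matching your Spark/Scala build:

```shell
# Launch spark-shell with the BigQuery connector plus Conscrypt as a TLS ALPN provider.
# Both coordinates are real Maven artifacts; the versions shown are examples only.
spark-shell \
  --packages com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.17.3,org.conscrypt:conscrypt-openjdk-uber:2.5.1
```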
scala> val df = spark.read.format("bigquery").option("parentProject", "bigquery-public-data").option("table", "bigquery-public-data:samples.shakespeare").load()
df: org.apache.spark.sql.DataFrame = [word: string, word_count: bigint ... 2 more fields]
scala> df.columns
def columns: Array[String]
scala> df.show()
java.lang.IllegalStateException: Could not find TLS ALPN provider; no working netty-tcnative, Conscrypt, or Jetty NPN/ALPN available
at com.google.cloud.spark.bigquery.repackaged.io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts.defaultSslProvider(GrpcSslContexts.java:246)
at com.google.cloud.spark.bigquery.repackaged.io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts.configure(GrpcSslContexts.java:146)
at com.google.cloud.spark.bigquery.repackaged.io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts.forClient(GrpcSslContexts.java:95)
at com.google.cloud.spark.bigquery.repackaged.io.grpc.netty.shaded.io.grpc.netty.NettyChannelBuilder$DefaultProtocolNegotiator.newNegotiator(NettyChannelBuilder.java:628)
at com.google.cloud.spark.bigquery.repackaged.io.grpc.netty.shaded.io.grpc.netty.NettyChannelBuilder.buildTransportFactory(NettyChannelBuilder.java:530)
at com.google.cloud.spark.bigquery.repackaged.io.grpc.netty.shaded.io.grpc.netty.NettyChannelBuilder$NettyChannelTransportFactoryBuilder.buildClientTransportFactory(NettyChannelBuilder.java:188)
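For context on the stack trace: the connector's shaded gRPC client needs a TLS ALPN implementation. Java 9+ provides ALPN in the JDK's own TLS stack (and it was later backported to newer Java 8 updates), but on older Java 8 builds gRPC must find netty-tcnative or Conscrypt on the classpath; if it finds neither, it throws exactly this IllegalStateException. A quick diagnostic, assuming a typical standalone install with `SPARK_HOME` set (the grep pattern and jar location are assumptions, adjust to your layout):

```shell
# Which Java does the driver run on? ALPN is built in on Java 9+.
java -version

# Is an ALPN provider already on Spark's classpath? (location is an assumption)
ls "$SPARK_HOME/jars" | grep -iE 'conscrypt|tcnative'
```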
Issue Analytics
- Created a year ago
- Comments: 6 (2 by maintainers)
Top Results From Across the Web
- Issues · GoogleCloudDataproc/spark-bigquery-connector — BigQuery data source for Apache Spark: Read data from BigQuery into DataFrames, ... Standalone apache spark read from BigQuery errors out when loading...
- Use the BigQuery connector with Spark - Google Cloud — The connector writes the data to BigQuery by first buffering all the data into a Cloud Storage temporary table.
- Spark - Read from BigQuery Table - Kontext — It is a fully managed scalable service that can be used to perform different kinds of data processing and transformations. Dataproc also has ...
- Dataproc: Errors when reading and writing data from BigQuery ... — 1 Answer · Add the BigQuery connector as a dependency through spark. · Specify the correct table name in <project>. · The...
- Spark SQL, DataFrames and Datasets Guide — Loading Data Programmatically; Partition Discovery; Schema Merging ... Spark SQL can also be used to read data from an existing Hive installation.

Thanks, I will check the Conscrypt issue.
update -
Thanks