Ingestion with Spark: Job Management for Beam Spark Runner

We would like to run ingestion on Spark (Streaming), i.e. with the Beam Spark Runner. Thus, an implementation of Feast’s job management is needed.

There are a couple of factors that make this a bit less straightforward than it is with Google Cloud Dataflow:

  1. There is no standard remote/HTTP API for job submission and management built into Spark*.
  2. The Beam Spark Runner does not upload your executable job artifact and submit it for you the way it does for Dataflow. This is partly because of 1, and partly because there is no assumed cloud service like GCS to stage it in: conventions vary depending on how and where organizations run Spark, and job packages might be ferried via S3, HDFS, or an artifact repository to wherever they are accessible from the runtime (YARN, Mesos, Kubernetes, EMR).

* Other than starting a SparkContext connected to the remote cluster, in-process in Feast Core. I feel that isn’t workable for a number of reasons, not least of which are the heavy dependency on Spark as a library and the unnecessary coupling of the streaming ingestion jobs’ lifecycle to that of the Feast Core instance.

Planned Approach

Job Management

We initially plan to implement JobManager using the Java client library for Apache Livy, a REST interface to Spark. This will use only an HTTP client, so it is light on dependencies and shouldn’t get in the way of alternative JobManagers for Spark, should another organization wish to implement one for something other than Livy. (Edit: it turns out that Livy’s livy-http-client artifact still depends on Spark as a library; it’s not a plain REST client, so we’ll avoid that…)
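
To make the plain-REST idea concrete, the sketch below (assuming Java 11’s built-in HTTP client) shows roughly what a Livy-backed submission could look like via Livy’s Batch API; the Livy URL, artifact path, and main class are hypothetical placeholders, not Feast’s actual values.

```java
// Minimal sketch: submit the ingestion job through Livy's Batch REST API (POST /batches)
// using only the JDK HTTP client, i.e. without the livy-http-client artifact.
// The Livy URL, JAR location, and main class below are placeholders for illustration.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LivyBatchSubmitSketch {
  public static void main(String[] args) throws Exception {
    String livyUrl = "http://livy.example.com:8998"; // hypothetical Livy endpoint

    // Batch request body per Livy's REST API: the JAR to run, its main class, and job args.
    String payload = "{"
        + "\"file\": \"hdfs:///jobs/feast-ingestion-spark.jar\","  // hypothetical artifact path
        + "\"className\": \"feast.ingestion.ImportJob\","          // hypothetical main class
        + "\"args\": [\"--runner=SparkRunner\", \"--streaming=true\"]"
        + "}";

    HttpClient client = HttpClient.newHttpClient();
    HttpRequest submit = HttpRequest.newBuilder()
        .uri(URI.create(livyUrl + "/batches"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(payload))
        .build();

    // Livy responds with JSON containing the new batch id and its state; a JobManager
    // would parse the id, poll GET /batches/{id}, and DELETE /batches/{id} to stop the job.
    HttpResponse<String> response = client.send(submit, HttpResponse.BodyHandlers.ofString());
    System.out.println("Livy response: " + response.statusCode() + " " + response.body());
  }
}
```

Because only HTTP and JSON are involved, this keeps Spark itself off Feast Core’s classpath, which is the main motivation for avoiding livy-http-client.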

We have internal experience and precedent using Livy, but not for Spark Streaming applications, so we have some uncertainty about whether it will work well. If it doesn’t, we’ll probably look at spark-jobserver, which does explicitly claim support for Streaming jobs.

Ingestion Job Artifact

We’re a bit less certain about how users should get the Feast ingestion Beam job artifact to their Spark cluster, due to the above-mentioned variation in deployments.

Roughly speaking, Feast Ingestion would be packaged as an assembly JAR that also includes beam-runners-spark. So a new ingestion-spark module may be added to the Maven build, consisting simply of a POM that does just that.

Deployment itself may then need to rely on documentation.

Beam Spark Runner

A minor note, but we will use the “legacy”, non-portable Beam Spark Runner. As the Beam docs cover, the runner based on Spark Structured Streaming is incomplete and only supports batch jobs, and the non-portable runner is still recommended for Java-only needs.

In theory this is runtime configuration for Feast users: if they want to try the portable runner, it should be possible, but we’ll most likely be testing with the non-portable one.
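
As a rough illustration, a pipeline could be pointed at the non-portable runner through standard Beam pipeline options; in the sketch below the master URL and streaming flag are assumptions about how the ingestion job might be launched, not settled configuration.

```java
// Minimal sketch: select the "legacy" (non-portable) Spark runner via Beam pipeline options.
// The master URL is a placeholder and the actual ingestion transforms are omitted.
import org.apache.beam.runners.spark.SparkPipelineOptions;
import org.apache.beam.runners.spark.SparkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class SparkRunnerOptionsSketch {
  public static void main(String[] args) {
    SparkPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).as(SparkPipelineOptions.class);
    options.setRunner(SparkRunner.class); // non-portable runner; the portable runner would be configured differently
    options.setSparkMaster("yarn");       // placeholder; depends on how the cluster is deployed
    options.setStreaming(true);           // ingestion runs as a streaming job

    Pipeline pipeline = Pipeline.create(options);
    // ... apply the ingestion transforms here ...
    pipeline.run().waitUntilFinish();
  }
}
```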

cc @smadarasmi

Reference issues to keep tabs on during implementation: #302, #361.

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Reactions: 2
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

1 reaction
dr3s commented, Jun 9, 2020

This might be another option for modularity using a spark operator - https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/sparkctl/README.md#create

0 reactions
woop commented, Feb 8, 2021

Closing this issue. We have ingestion support for Spark with EMR, Dataproc, and Spark on K8s.
