Deploying Spark ML pipelines in production on AWS: how to publish pipeline artifacts and run pipelines in production
"Translating a Spark application from running in a local environment to running on a production cluster in the cloud requires several critical steps, including publishing artifacts, installing dependencies, and defining the steps in a pipeline. This video is a hands-on guide through the process...
Main Author: | |
---|---|
Format: | eBook |
Language: | English |
Published: | [Place of publication not identified] : O'Reilly, 2017 |
Subjects: | |
Online Access: | |
Collection: | O'Reilly - Collection details see MPG.ReNa |
Summary: | "Translating a Spark application from running in a local environment to running on a production cluster in the cloud requires several critical steps, including publishing artifacts, installing dependencies, and defining the steps in a pipeline. This video is a hands-on guide through the process of deploying your Spark ML pipelines in production. You'll learn how to create a pipeline that supports model reproducibility--making your machine learning models more reliable--and how to update your pipeline incrementally as the underlying data change. Learners should have basic familiarity with the following: Scala or Python; Hadoop, Spark, or Pandas; SBT or Maven; Amazon Web Services such as S3, EMR, and EC2; Bash, Docker, and REST."--Resource description page |
Item Description: | Title from title screen (Safari, viewed January 15, 2018). - Release date from resource description page (Safari, viewed January 15, 2018) |
Physical Description: | 1 streaming video file (23 min., 20 sec.) |
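
The summary above describes publishing a fitted pipeline as an artifact so it can be reloaded unchanged on a production cluster. As a rough illustrative sketch only (not taken from the video), the PySpark example below fits a small pipeline and persists it to S3; the bucket path, column names, and toy data are placeholder assumptions.

```python
# Sketch: persist a fitted Spark ML pipeline so a production cluster
# (e.g. EMR) can reload the exact same model. Paths and data are placeholders.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("pipeline-publish").getOrCreate()

# Hypothetical training data: two feature columns and a label.
train = spark.createDataFrame(
    [(1.0, 2.0, 3.0), (2.0, 4.0, 6.1), (3.0, 6.0, 9.2)],
    ["x1", "x2", "label"],
)

assembler = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
lr = LinearRegression(featuresCol="features", labelCol="label")
pipeline = Pipeline(stages=[assembler, lr])

model = pipeline.fit(train)

# Publish the artifact; the bucket is a placeholder and assumes S3 access
# is configured on the cluster.
model.write().overwrite().save("s3://my-bucket/models/pipeline-v1")

# In production, reload the same artifact for scoring.
reloaded = PipelineModel.load("s3://my-bucket/models/pipeline-v1")
```

Reloading the saved `PipelineModel` applies the same fitted stages and coefficients produced during training, which is the reproducibility property the summary refers to.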