Applied data science using PySpark: learn the end-to-end predictive model-building cycle

Discover the capabilities of PySpark and its application in the realm of data science. This comprehensive guide with hand-picked examples of daily use cases will walk you through the end-to-end predictive model-building cycle with the latest techniques and tricks of the trade.

Bibliographic Details
Main Author: Kakarla, Ramcharan
Other Authors: Krishnan, Sundar, Alla, Sridhar
Format: eBook
Language: English
Published: Berkeley, CA: Apress, 2021
Collection: O'Reilly - Collection details see MPG.ReNa
LEADER 06605nmm a2200577 u 4500
001 EB001909206
003 EBX01000000000000001072108
005 00000000000000.0
007 cr|||||||||||||||||||||
008 210123 ||| eng
020 |a 9781484265000 
020 |a 1484265009 
020 |a 9781484265017 
050 4 |a QA76.9.B45 
100 1 |a Kakarla, Ramcharan 
245 0 0 |a Applied data science using PySpark  |b learn the end-to-end predictive model-building cycle  |c Ramcharan Kakarla, Sundar Krishnan, Sridhar Alla 
260 |a Berkeley, CA  |b Apress  |c 2021 
300 |a 427 pages 
505 0 |a Intro -- Table of Contents -- About the Authors -- About the Technical Reviewer -- Acknowledgments -- Foreword 1 -- Foreword 2 -- Foreword 3 -- Introduction -- Chapter 1: Setting Up the PySpark Environment -- Local Installation using Anaconda -- Step 1: Install Anaconda -- Step 2: Conda Environment Creation -- Step 3: Download and Unpack Apache Spark -- Step 4: Install Java 8 or Later -- Step 5: Mac & Linux Users -- Step 6: Windows Users -- Step 7: Run PySpark -- Step 8: Jupyter Notebook Extension -- Docker-based Installation -- Why Do We Need to Use Docker? -- What Is Docker? 
505 0 |a Create a Simple Docker Image -- Download PySpark Docker -- Step-by-Step Approach to Understanding the Docker PySpark run Command -- Databricks Community Edition -- Create Databricks Account -- Create a New Cluster -- Create Notebooks -- How Do You Import Data Files into the Databricks Environment? -- Basic Operations -- Upload Data -- Access Data -- Calculate Pi -- Summary -- Chapter 2: PySpark Basics -- PySpark Background -- PySpark Resilient Distributed Datasets (RDDs) and DataFrames -- Data Manipulations -- Reading Data from a File -- Reading Data from Hive Table -- Reading Metadata 
505 0 |a Counting Records -- Subset Columns and View a Glimpse of the Data -- Missing Values -- One-Way Frequencies -- Sorting and Filtering One-Way Frequencies -- Casting Variables -- Descriptive Statistics -- Unique/Distinct Values and Counts -- Filtering -- Creating New Columns -- Deleting and Renaming Columns -- Summary -- Chapter 3: Utility Functions and Visualizations -- Additional Data Manipulations -- String Functions -- Registering DataFrames -- Window Functions -- Other Useful Functions -- Collect List -- Sampling -- Caching and Persisting -- Saving Data -- Pandas Support -- Joins 
505 0 |a Dropping Duplicates -- Data Visualizations -- Introduction to Machine Learning -- Summary -- Chapter 4: Variable Selection -- Exploratory Data Analysis -- Cardinality -- Missing Values -- Missing at Random (MAR) -- Missing Completely at Random (MCAR) -- Missing Not at Random (MNAR) -- Code 1: Cardinality Check -- Code 2: Missing Values Check -- Step 1: Identify Variable Types -- Step 2: Apply StringIndexer to Character Columns -- Step 3: Assemble Features -- Built-in Variable Selection Process: Without Target -- Principal Component Analysis -- Mechanics -- Singular Value Decomposition 
505 0 |a Built-in Variable Selection Process: With Target -- ChiSq Selector -- Model-based Feature Selection -- Custom-built Variable Selection Process -- Information Value Using Weight of Evidence -- Monotonic Binning Using Spearman Correlation -- How Do You Calculate the Spearman Correlation by Hand? -- How Is Spearman Correlation Used to Create Monotonic Bins for Continuous Variables? -- Custom Transformers -- Main Concepts in Pipelines -- Voting-based Selection -- Summary -- Chapter 5: Supervised Learning Algorithms -- Basics -- Regression -- Classification -- Loss Functions -- Optimizers -- Gradient Descent 
653 |a Parallel processing (Electronic computers) / http://id.loc.gov/authorities/subjects/sh85097826 
653 |a Big data / fast 
653 |a Machine learning / http://id.loc.gov/authorities/subjects/sh85079324 
653 |a Python (Computer program language) / fast 
653 |a Big data / http://id.loc.gov/authorities/subjects/sh2012003227 
653 |a Python (Computer program language) / http://id.loc.gov/authorities/subjects/sh96008834 
653 |a Computer software / fast 
653 |a Données volumineuses 
653 |a Parallélisme (Informatique) 
653 |a Machine learning / fast 
653 |a Apprentissage automatique 
653 |a Parallel processing (Electronic computers) / fast 
653 |a Python (Langage de programmation) 
700 1 |a Krishnan, Sundar 
700 1 |a Alla, Sridhar 
041 0 7 |a eng  |2 ISO 639-2 
989 |b OREILLY  |a O'Reilly 
500 |a Includes index 
028 5 0 |a 10.1007/978-1-4842-6500-0 
776 |z 9781484264997 
776 |z 9781484265017 
776 |z 1484264991 
776 |z 9781484265000 
776 |z 1484265009 
856 4 0 |u https://learning.oreilly.com/library/view/~/9781484265000/?ar  |x Verlag  |3 Volltext 
082 0 |a 005.7 
082 0 |a 004 
520 |a Discover the capabilities of PySpark and its application in the realm of data science. This comprehensive guide with hand-picked examples of daily use cases will walk you through the end-to-end predictive model-building cycle with the latest techniques and tricks of the trade. Applied Data Science Using PySpark is divided into six sections which walk you through the book. In section 1, you start with the basics of PySpark, focusing on data manipulation. We make you comfortable with the language and then build upon it to introduce you to the mathematical functions available off the shelf. In section 2, you will dive into the art of variable selection, where we demonstrate various selection techniques available in PySpark. In section 3, we take you on a journey through machine learning algorithms, implementations, and fine-tuning techniques. We will also talk about different validation metrics and how to use them for picking the best models. Sections 4 and 5 go through machine learning pipelines and various methods available to operationalize the model and serve it through Docker or an API. In the final section, you will cover reusable objects for easy experimentation and learn some tricks that can help you optimize your programs and machine learning pipelines. By the end of this book, you will have seen the flexibility and advantages of PySpark in data science applications. This book is recommended to those who want to unleash the power of parallel computing by simultaneously working with big datasets. You will: Build an end-to-end predictive model; Implement multiple variable selection techniques; Operationalize models; Master multiple algorithms and implementations