LEADER |
06819nmm a2200517 u 4500 |
001 |
EB002138091 |
003 |
EBX01000000000000001276218 |
005 |
00000000000000.0 |
007 |
cr||||||||||||||||||||| |
008 |
230102 ||| eng |
020 |
|
|
|a 9781484289259
|
050 |
|
4 |
|a QA76.73.P98
|
100 |
1 |
|
|a Mishra, Pradeepta
|
245 |
0 |
0 |
|a PyTorch recipes
|b A Problem-Solution Approach to Build, Train and Deploy Neural Network Models
|c Pradeepta Mishra
|
250 |
|
|
|a Second edition
|
260 |
|
|
|a Berkeley, CA
|b Apress L. P.
|c 2022
|
300 |
|
|
|a xxiv, 266 pages
|b illustrations
|
505 |
0 |
|
|a Intro -- Table of Contents -- About the Author -- About the Technical Reviewer -- Acknowledgments -- Introduction -- Chapter 1: Introduction to PyTorch, Tensors, and Tensor Operations -- What Is PyTorch? -- PyTorch Installation -- Recipe 1-1. Using Tensors -- Problem -- Solution -- How It Works -- Conclusion -- Chapter 2: Probability Distributions Using PyTorch -- Recipe 2-1. Sampling Tensors -- Problem -- Solution -- How It Works -- Recipe 2-2. Variable Tensors -- Problem -- Solution -- How It Works -- Recipe 2-3. Basic Statistics -- Problem -- Solution -- How It Works
|
505 |
0 |
 |
 |a Recipe 2-4. Gradient Computation -- Problem -- Solution -- How It Works -- Recipe 2-5. Tensor Operations -- Problem -- Solution -- How It Works -- Recipe 2-6. Tensor Operations -- Problem -- Solution -- How It Works -- Recipe 2-7. Distributions -- Problem -- Solution -- How It Works -- Conclusion -- Chapter 3: CNN and RNN Using PyTorch -- Recipe 3-1. Setting Up a Loss Function -- Problem -- Solution -- How It Works -- Recipe 3-2. Estimating the Derivative of the Loss Function -- Problem -- Solution -- How It Works -- Recipe 3-3. Fine-Tuning a Model -- Problem -- Solution -- How It Works
 |
505 |
0 |
 |
 |a Recipe 3-4. Selecting an Optimization Function -- Problem -- Solution -- How It Works -- Recipe 3-5. Further Optimizing the Function -- Problem -- Solution -- How It Works -- Recipe 3-6. Implementing a Convolutional Neural Network (CNN) -- Problem -- Solution -- How It Works -- Recipe 3-7. Reloading a Model -- Problem -- Solution -- How It Works -- Recipe 3-8. Implementing a Recurrent Neural Network -- Problem -- Solution -- How It Works -- Recipe 3-9. Implementing a RNN for Regression Problems -- Problem -- Solution -- How It Works -- Recipe 3-10. Using PyTorch's Built-In Functions -- Problem
 |
505 |
0 |
 |
 |a Solution -- How It Works -- Recipe 3-11. Working with Autoencoders -- Problem -- Solution -- How It Works -- Recipe 3-12. Fine-Tuning Results Using Autoencoder -- Problem -- Solution -- How It Works -- Recipe 3-13. Restricting Model Overfitting -- Problem -- Solution -- How It Works -- Recipe 3-14. Visualizing the Model Overfit -- Problem -- Solution -- How It Works -- Recipe 3-15. Initializing Weights in the Dropout Rate -- Problem -- Solution -- How It Works -- Recipe 3-16. Adding Math Operations -- Problem -- Solution -- How It Works -- Recipe 3-17. Embedding Layers in RNN -- Problem
 |
505 |
0 |
|
|a Solution -- How It Works -- Conclusion -- Chapter 4: Introduction to Neural Networks Using PyTorch -- Recipe 4-1. Working with Activation Functions -- Problem -- Solution -- How It Works -- Linear Function -- Bilinear Function -- Sigmoid Function -- Hyperbolic Tangent Function -- Log Sigmoid Transfer Function -- ReLU Function -- Leaky ReLU -- Recipe 4-2. Visualizing the Shape of Activation Functions -- Problem -- Solution -- How It Works -- Recipe 4-3. Basic Neural Network Model -- Problem -- Solution -- How It Works -- Recipe 4-4. Tensor Differentiation -- Problem -- Solution -- How It Works -- Conclusion
|
653 |
|
|
|a Réseaux neuronaux (Informatique)
|
653 |
|
|
|a Neural networks (Computer science) / fast
|
653 |
|
|
|a Machine learning / http://id.loc.gov/authorities/subjects/sh85079324
|
653 |
|
|
|a Python (Computer program language) / fast
|
653 |
|
|
|a Python (Computer program language) / http://id.loc.gov/authorities/subjects/sh96008834
|
653 |
|
|
|a Neural networks (Computer science) / http://id.loc.gov/authorities/subjects/sh90001937
|
653 |
|
|
|a Machine learning / fast
|
653 |
|
|
|a Apprentissage automatique
|
653 |
|
|
|a Python (Langage de programmation)
|
041 |
0 |
7 |
|a eng
|2 ISO 639-2
|
989 |
|
|
|b OREILLY
|a O'Reilly
|
500 |
|
|
|a Description based upon print version of record
|
028 |
5 |
0 |
|a 10.1007/978-1-4842-8925-9
|
776 |
|
|
|z 1484289250
|
776 |
|
|
|z 1484289242
|
776 |
|
|
|z 9781484289242
|
776 |
|
|
|z 9781484289259
|
856 |
4 |
0 |
|u https://learning.oreilly.com/library/view/~/9781484289259/?ar
|x Verlag
|3 Volltext
|
082 |
0 |
|
|a 331
|
082 |
0 |
|
|a 500
|
082 |
0 |
|
|a 006.3/2
|
520 |
 |
 |
 |a Learn how to use PyTorch to build neural network models using code snippets updated for this second edition. This book includes new chapters covering topics such as distributed PyTorch modeling, deploying PyTorch models in production, and developments around PyTorch with updated code. You'll start by learning how to use tensors to develop and fine-tune neural network models and implement deep learning models such as LSTMs and RNNs. Next, you'll explore probability distribution concepts using PyTorch, as well as supervised and unsupervised algorithms with PyTorch. This is followed by a deep dive into building models with convolutional neural networks, deep neural networks, and recurrent neural networks using PyTorch. This new edition also covers topics such as skorch, a scikit-learn-compatible library for PyTorch models, model quantization to reduce parameter size, and preparing a model for deployment within a production system.
 |
520 |
 |
 |
 |a Distributed parallel processing for balancing PyTorch workloads, along with using PyTorch for image processing, audio analysis, and model interpretation, is also covered in detail. Each chapter includes recipe code snippets to perform specific activities. By the end of this book, you will be able to confidently build neural network models using PyTorch.
 |
520 |
 |
 |
 |a What You Will Learn: Utilize new code snippets and models to train machine learning models using PyTorch -- Train deep learning models with fewer and smarter implementations -- Explore the PyTorch framework for model explainability and to bring transparency to model interpretation -- Build, train, and deploy neural network models designed to scale with PyTorch -- Understand best practices for evaluating and fine-tuning models using PyTorch -- Use advanced torch features in training deep neural networks -- Explore various neural network models using PyTorch -- Discover functions compatible with scikit-learn models -- Perform distributed PyTorch training and execution. Who This Book Is For: Machine learning engineers, data scientists, Python programmers, and software developers interested in learning the PyTorch framework
 |