Generative Deep Learning: Teaching Machines to Paint, Write, Compose, and Play

Generative AI is the hottest topic in tech. This practical book teaches machine learning engineers and data scientists how to use TensorFlow and Keras to create impressive generative deep learning models from scratch, including variational autoencoders (VAEs), generative adversarial networks (GANs),...


Bibliographic Details
Main Author: Foster, David
Other Authors: Friston, Karl (writer of foreword)
Format: eBook
Language: English
Published: Sebastopol, CA : O'Reilly Media, Incorporated, 2023
Edition: Second edition
Collection: O'Reilly (for collection details, see MPG.ReNa)
Table of Contents:
  • Includes bibliographical references and index
  • Part I. Introduction to Generative Deep Learning
  • Chapter 1. Generative Modeling
  • What Is Generative Modeling?
  • Generative Versus Discriminative Modeling
  • The Rise of Generative Modeling
  • Generative Modeling and AI
  • Our First Generative Model
  • Hello World!
  • The Generative Modeling Framework
  • Representation Learning
  • Core Probability Theory
  • Generative Model Taxonomy
  • The Generative Deep Learning Codebase
  • Cloning the Repository
  • Using Docker
  • Running on a GPU
  • Summary
  • Chapter 2. Deep Learning
  • Data for Deep Learning
  • Deep Neural Networks
  • What Is a Neural Network?
  • Learning High-Level Features
  • TensorFlow and Keras
  • Multilayer Perceptron (MLP)
  • Preparing the Data
  • Building the Model
  • Compiling the Model
  • Training the Model
  • Evaluating the Model
  • Convolutional Neural Network (CNN)
  • Convolutional Layers
  • Batch Normalization
  • Dropout
  • Building the CNN
  • Training and Evaluating the CNN
  • Summary
  • Part II. Methods
  • Chapter 3. Variational Autoencoders
  • Introduction
  • Autoencoders
  • The Fashion-MNIST Dataset
  • The Autoencoder Architecture
  • The Encoder
  • The Decoder
  • Joining the Encoder to the Decoder
  • Reconstructing Images
  • Visualizing the Latent Space
  • Generating New Images
  • Variational Autoencoders
  • The Encoder
  • The Loss Function
  • Training the Variational Autoencoder
  • Analysis of the Variational Autoencoder
  • Exploring the Latent Space
  • The CelebA Dataset
  • Training the Variational Autoencoder
  • Analysis of the Variational Autoencoder
  • Generating New Faces
  • Latent Space Arithmetic
  • Morphing Between Faces
  • Summary
  • Chapter 4. Generative Adversarial Networks
  • Introduction
  • Deep Convolutional GAN (DCGAN)
  • The Bricks Dataset
  • The Discriminator
  • The Generator
  • Training the DCGAN
  • Analysis of the DCGAN
  • GAN Training: Tips and Tricks
  • Wasserstein GAN with Gradient Penalty (WGAN-GP)
  • Wasserstein Loss
  • The Lipschitz Constraint
  • Enforcing the Lipschitz Constraint
  • The Gradient Penalty Loss
  • Training the WGAN-GP
  • Analysis of the WGAN-GP
  • Conditional GAN (CGAN)
  • CGAN Architecture
  • Training the CGAN
  • Analysis of the CGAN
  • Summary
  • Chapter 5. Autoregressive Models
  • Introduction
  • Long Short-Term Memory Network (LSTM)
  • The Recipes Dataset
  • Working with Text Data
  • Tokenization
  • Creating the Training Set
  • The LSTM Architecture
  • The Embedding Layer
  • The LSTM Layer
  • The LSTM Cell
  • Training the LSTM
  • Analysis of the LSTM
  • Recurrent Neural Network (RNN) Extensions
  • Stacked Recurrent Networks
  • Gated Recurrent Units
  • Bidirectional Cells
  • PixelCNN