Parallel Programming and Compilers

The second half of the 1970s was marked by impressive advances in array/vector architectures, vectorization techniques, and compilers. This progress continued, with a particular focus on vector machines, until the middle of the 1980s. The majority of supercomputers during this period were regist...

Bibliographic Details
Main Author: Polychronopoulos, Constantine D.
Format: eBook
Language: English
Published: New York, NY: Springer US, 1988
Edition: 1st ed. 1988
Series: The Springer International Series in Engineering and Computer Science
Collection: Springer Book Archives (-2004)
Table of Contents:
  • 1 Parallel Architectures and Compilers
  • 1.1 Introduction
  • 1.2 Book Overview
  • 1.3 Vector and Parallel Machines
  • 1.4 Parallelism in Programs
  • 1.5 Basic Concepts and Definitions
  • 2 Program Restructuring for Parallel Execution
  • 2.1 Data Dependences
  • 2.2 Common Optimizations
  • 2.3 Transformations for Vector/Parallel Loops
  • 2.4 Cycle Shrinking
  • 2.5 Loop Spreading
  • 2.6 Loop Coalescing
  • 2.7 Run-Time Dependence Testing
  • 2.8 Subscript Blocking
  • 2.9 Future Directions
  • 3 A Comprehensive Environment for Automatic Packaging and Scheduling of Parallelism
  • 3.1 Introduction
  • 3.2 A Comprehensive Approach to Scheduling
  • 3.3 Auto-Scheduling Compilers
  • 4 Static and Dynamic Loop Scheduling
  • 4.1 Introduction
  • 4.2 The Guided Self-Scheduling (GSS(k)) Algorithm
  • 4.3 Simulation Results
  • 4.4 Static Loop Scheduling
  • 5 Run-Time Overhead
  • 5.1 Introduction
  • 5.2 Bounds for Dynamic Loop Scheduling
  • 5.3 Overhead of Parallel Tasks
  • 5.4 Two Run-Time Overhead Models
  • 5.5 Deciding the Minimum Unit of Allocation
  • 6 Static Program Partitioning
  • 6.1 Introduction
  • 6.2 Methods for Program Partitioning
  • 6.3 Optimal Task Composition for Chains
  • 6.4 Details of Interprocessor Communication
  • 7 Static Task Scheduling
  • 7.1 Introduction
  • 7.2 Optimal Allocations for High Level Spreading
  • 7.3 Scheduling Independent Serial Tasks
  • 7.4 High Level Spreading for Complete Task Graphs
  • 7.5 Bounds for Static Scheduling
  • 8 Speedup Bounds for Parallel Programs
  • 8.1 Introduction
  • 8.2 General Bounds on Speedup
  • 8.3 Speedup Measures for Task Graphs
  • 8.4 Speedup Measures for Doacross Loops
  • 8.5 Multiprocessors vs. Vector/Array Machines
  • References