Recent Advances in Parallel Virtual Machine and Message Passing Interface 11th European PVM/MPI Users' Group Meeting, Budapest, Hungary, September 19-22, 2004, Proceedings

Bibliographic Details
Other Authors: Kranzlmüller, Dieter (Editor), Kacsuk, Peter (Editor), Dongarra, Jack (Editor)
Format: eBook
Language: English
Published: Berlin, Heidelberg: Springer Berlin Heidelberg, 2004
Edition: 1st ed. 2004
Series: Lecture Notes in Computer Science
Collection: Springer Book Archives -2004 - Collection details see MPG.ReNa
LEADER 07797nmm a2200469 u 4500
001 EB000652528
003 EBX01000000000000000505610
005 00000000000000.0
007 cr|||||||||||||||||||||
008 140122 ||| eng
020 |a 9783540302186 
100 1 |a Kranzlmüller, Dieter  |e [editor] 
245 0 0 |a Recent Advances in Parallel Virtual Machine and Message Passing Interface  |h Elektronische Ressource  |b 11th European PVM/MPI Users' Group Meeting, Budapest, Hungary, September 19-22, 2004, Proceedings  |c edited by Dieter Kranzlmüller, Peter Kacsuk, Jack Dongarra 
250 |a 1st ed. 2004 
260 |a Berlin, Heidelberg  |b Springer Berlin Heidelberg  |c 2004, 2004 
300 |a XIV, 458 p  |b online resource 
505 0 |a Invited Talks -- PVM Grids to Self-assembling Virtual Machines -- The Austrian Grid Initiative – High Level Extensions to Grid Middleware -- Fault Tolerance in Message Passing and in Action -- MPI and High Productivity Programming -- High Performance Application Execution Scenarios in P-GRADE -- An Open Cluster System Software Stack -- Advanced Resource Connector (ARC) – The Grid Middleware of the NorduGrid -- Next Generation Grid: Learn from the Past, Look to the Future -- Tutorials -- Production Grid Systems and Their Programming -- Tools and Services for Interactive Applications on the Grid – The CrossGrid Tutorial -- Extensions and Improvements -- Verifying Collective MPI Calls -- Fast Tuning of Intra-cluster Collective Communications -- More Efficient Reduction Algorithms for Non-Power-of-Two Number of Processors in Message-Passing Parallel Systems -- Zero-Copy MPI Derived Datatype Communication over InfiniBand --  
505 0 |a Minimizing Synchronization Overhead in the Implementation of MPI One-Sided Communication -- Efficient Implementation of MPI-2 Passive One-Sided Communication on InfiniBand Clusters -- Providing Efficient I/O Redundancy in MPI Environments -- The Impact of File Systems on MPI-IO Scalability -- Open MPI: Goals, Concept, and Design of a Next Generation MPI Implementation -- Open MPI’s TEG Point-to-Point Communications Methodology: Comparison to Existing Implementations -- The Architecture and Performance of WMPI II -- A New MPI Implementation for Cray SHMEM -- Algorithms -- A Message Ordering Problem in Parallel Programs -- BSP/CGM Algorithms for Maximum Subsequence and Maximum Subarray -- A Parallel Approach for a Non-rigid Image Registration Algorithm -- Neighborhood Composition: A Parallelization of Local Search Algorithms -- Asynchronous Distributed Broadcasting in Cluster Environment -- A Simple Work-Optimal Broadcast Algorithm for Message-Passing Parallel Systems --
505 0 |a Nesting OpenMP and MPI in the Conjugate Gradient Method for Band Systems -- An Asynchronous Branch and Bound Skeleton for Heterogeneous Clusters -- Applications -- Parallelization of GSL: Architecture, Interfaces, and Programming Models -- Using Web Services to Run Distributed Numerical Applications -- A Grid-Based Parallel Maple -- A Pipeline-Based Approach for Mapping Message-Passing Applications with an Input Data Stream -- Parallel Simulations of Electrophysiological Phenomena in Myocardium on Large 32 and 64-bit Linux Clusters -- Tools and Environments -- MPI I/O Analysis and Error Detection with MARMOT -- Parallel I/O in an Object-Oriented Message-Passing Library -- Detection of Collective MPI Operation Patterns -- Detecting Unaffected Race Conditions in Message-Passing Programs -- MPI Cluster System Software -- A Lightweight Framework for Executing Task Parallelism on Top of MPI -- Easing Message-Passing Parallel Programming Through a Data Balancing Service --  
505 0 |a TEG: A High-Performance, Scalable, Multi-network Point-to-Point Communications Methodology -- Cluster and Grid -- Efficient Execution on Long-Distance Geographically Distributed Dedicated Clusters -- Identifying Logical Homogeneous Clusters for Efficient Wide-Area Communications -- Coscheduling and Multiprogramming Level in a Non-dedicated Cluster -- Heterogeneous Parallel Computing Across Multidomain Clusters -- Performance Evaluation and Monitoring of Interactive Grid Applications -- A Domain Decomposition Strategy for GRID Environments -- A PVM Extension to Exploit Cluster Grids -- Performance -- An Initial Analysis of the Impact of Overlap and Independent Progress for MPI -- A Performance-Oriented Technique for Hybrid Application Development -- A Refinement Strategy for a User-Oriented Performance Analysis -- What Size Cluster Equals a Dedicated Chip -- Architecture and Performance of the BlueGene/L Message Layer -- Special Session: ParSim 2004 --
505 0 |a Special Session of EuroPVM/MPI 2004. Current Trends in Numerical Simulation for Parallel Engineering Environments. ParSim 2004 -- Parallelization of a Monte Carlo Simulation for a Space Cosmic Particles Detector -- On the Parallelization of a Cache-Optimal Iterative Solver for PDEs Based on Hierarchical Data Structures and Space-Filling Curves -- Parallelization of an Adaptive Vlasov Solver -- A Framework for Optimising Parameter Studies on a Cluster Computer by the Example of Micro-system Design -- Numerical Simulations on PC Graphics Hardware
653 |a Computer systems 
653 |a Compilers (Computer programs) 
653 |a Compilers and Interpreters 
653 |a Programming Techniques 
653 |a Computer science 
653 |a Numerical Analysis 
653 |a Computer System Implementation 
653 |a Computer programming 
653 |a Computer arithmetic and logic units 
653 |a Numerical analysis 
653 |a Arithmetic and Logic Structures 
653 |a Theory of Computation 
700 1 |a Kacsuk, Peter  |e [editor] 
700 1 |a Dongarra, Jack  |e [editor] 
041 0 7 |a eng  |2 ISO 639-2 
989 |b SBA  |a Springer Book Archives -2004 
490 0 |a Lecture Notes in Computer Science 
028 5 0 |a 10.1007/b100820 
856 4 0 |u https://doi.org/10.1007/b100820?nosfx=y  |x Verlag  |3 Volltext 
082 0 |a 004.2 
520 |a The message passing paradigm is the most frequently used approach to develop high-performance computing applications on parallel and distributed computing architectures. Parallel Virtual Machine (PVM) and Message Passing Interface (MPI) are the two main representatives in this domain. This volume comprises 50 selected contributions presented at the 11th European PVM/MPI Users’ Group Meeting, which was held in Budapest, Hungary, September 19–22, 2004. The conference was organized by the Laboratory of Parallel and Distributed Systems (LPDS) at the Computer and Automation Research Institute of the Hungarian Academy of Sciences (MTA SZTAKI). The conference was previously held in Venice, Italy (2003), Linz, Austria (2002), Santorini, Greece (2001), Balatonfüred, Hungary (2000), Barcelona, Spain (1999), Liverpool, UK (1998), and Krakow, Poland (1997). The first three conferences were devoted to PVM and were held in Munich, Germany (1996), Lyon, France (1995), and Rome, Italy (1994). In its eleventh year, this conference is well established as the forum for users and developers of PVM, MPI, and other message passing environments. Interactions between these groups have proved to be very useful for developing new ideas in parallel computing, and for applying some of those already existent to new practical fields. The main topics of the meeting were evaluation and performance of PVM and MPI, extensions, implementations and improvements of PVM and MPI, parallel algorithms using the message passing paradigm, and parallel applications in science and engineering. In addition, the topics of the conference were extended to include cluster and grid computing, in order to reflect the importance of this area for the high-performance computing community
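
To illustrate the message passing paradigm the abstract refers to, here is a minimal point-to-point example in C using standard MPI calls (MPI_Init, MPI_Send, MPI_Recv, MPI_Finalize). It is a sketch for orientation only, not code from the proceedings: rank 0 sends a single integer and rank 1 receives and prints it.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    if (size < 2) {
        if (rank == 0) fprintf(stderr, "Run with at least 2 processes.\n");
        MPI_Finalize();
        return 1;
    }

    if (rank == 0) {
        /* Rank 0 sends one integer to rank 1 with message tag 0. */
        int payload = 42;
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Rank 1 receives the integer from rank 0 and prints it. */
        int payload;
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", payload);
    }

    MPI_Finalize();
    return 0;
}

Assuming an MPI implementation such as Open MPI or MPICH is installed, this typically compiles with mpicc and runs with mpirun -np 2 ./a.out.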