Advances in Randomized Parallel Computing


Bibliographic Details
Other Authors: Pardalos, Panos M. (Editor), Rajasekaran, Sanguthevar (Editor)
Format: eBook
Language: English
Published: New York, NY: Springer US, 1999
Edition: 1st ed. 1999
Series: Combinatorial Optimization
Subjects:
Online Access:
Collection: Springer Book Archives -2004 - Collection details see MPG.ReNa
LEADER 06152nmm a2200397 u 4500
001 EB000622324
003 EBX01000000000000000475406
005 00000000000000.0
007 cr|||||||||||||||||||||
008 140122 ||| eng
020 |a 9781461332824 
100 1 |a Pardalos, Panos M.  |e [editor] 
245 0 0 |a Advances in Randomized Parallel Computing  |h Elektronische Ressource  |c edited by Panos M. Pardalos, Sanguthevar Rajasekaran 
250 |a 1st ed. 1999 
260 |a New York, NY  |b Springer US  |c 1999, 1999 
300 |a XXVI, 287 p  |b online resource 
505 0 |a 1 Optimal Bounds on Tail Probabilities: A Study of an Approach -- 1.1 Introduction -- 1.2 Bounding Tail Probabilities with the Laplace Transform -- 1.3 When only the Mean is given: The Hoeffding Bound -- 1.4 When the Mean and the Variance are given: A Simple Proof of the Bennett Bound -- 1.5 When the First n Moments are Given: A Glimpse of the General Theory -- 1.6 An Application: Improved Bounds for the List Update Problem -- References -- 2 Parallelism in Comparison Problems -- 2.1 Introduction -- 2.2 Selection -- 2.3 Merging -- 2.4 Sorting -- 2.5 Conclusions -- References -- 3 Random Sampling -- 3.1 Introduction -- 3.2 Preliminaries -- 3.3 Partitioning I: Sorting -- 3.4 Partitioning II: List Ranking -- 3.5 Pruning I: Selection -- 3.6 Pruning II: Row Maxima of Monotone Matrices -- 3.7 Pruning III: Graph Connected Components -- 3.8 Other Examples -- 3.9 Bibliographic Notes -- References -- 4 Randomized Algorithms on the Mesh -- 4.1 Introduction -- 4.2 Preliminaries --  
505 0 |a 4.3 Routing on the Mesh -- 4.4 Sorting on the Mesh -- 4.5 Selection on the Mesh -- References -- 5 Efficient Randomized Algorithms -- 5.1 Introduction -- 5.2 Preliminaries -- 5.3 Randomized Routing -- 5.4 Randomized Selection -- 5.5 Randomized Sorting -- 5.6 Randomized PRAM Emulation -- 5.7 Selection and Sorting Schemes for Processing Large Distributed Files -- 5.8 Conclusions -- References -- 6 Ultrafast Randomized Parallel Algorithms for Spanning Forests -- 6.1 Introduction -- 6.2 Ultrafast Parallel Algorithms -- 6.3 Dense Instances -- 6.4 Ultrafast Algorithms for Spanning Forests -- 6.5 Open Problems and Further Research -- References -- 7 Parallel Randomized Techniques for Some Fundamental Geometric Problems -- 7.1 Introduction -- 7.2 Preliminaries and Definitions -- 7.3 The Use of Randomization in Computational Geometry -- 7.4 Applications to Fundamental Geometric Problems -- 7.5 Summary -- References -- 8 Capturing the Connectivity of High-Dimensional Geometric Spaces --  
505 0 |a 8.1 Introduction -- 8.2 Basic Probabilistic Roadmap Planner -- 8.3 Other Sampling Strategies -- 8.4 Roadmap Coverage -- 8.5 Roadmap Connectedness -- 8.6 Current and Future Work -- Appendix: A. Proof of Theorem 1 -- Appendix: B. Proof of Theorem 2 -- Appendix: C. Proof of Theorem 3 -- Appendix: D. Proof of Theorem 4 -- References -- 9 Randomized Parallel Prefetching and Buffer Management -- 9.1 Introduction -- 9.2 Definitions -- 9.3 Read-Once Reference Strings -- 9.4 Read-Many Reference Strings -- 9.5 Concluding Remarks -- References -- 10 DFA Problems -- 10.1 Introduction -- 10.2 Membership Problem -- 10.3 Containment and Equivalence Problems -- 10.4 Ranking and Related Problems -- 10.5 Coarsest Partition Problems -- 10.6 Automata Testing Problems -- 10.7 Conversion from Regular Expression to NFA -- 10.8 Applications -- 10.9 Open Problems -- References -- 11 LAPACK90 -- 11.1 Introduction -- 11.2 Interface Blocks for LAPACK 77 -- 11.3 Interface Blocks for LAPACK 90 --  
505 0 |a 11.4 Code of LAPACK90 Routines -- 11.5 LAPACK90 Documentation -- 11.6 LAPACK90 Test Programs -- 11.7 LAPACK90 User Callable Routines -- References -- Appendix: A, Generic Interfaces -- A.1 LAPACK77 Generic Interface Blocks -- A.2 LAPACK90 Generic Interface Blocks -- Appendix: B, Interface Subroutines -- B.1 LA_GESV and LA_GETRI Subroutines -- B.2 Auxiliary Routines -- Appendix: C -- C.1 Documentation of LA_GESV -- Appendix: D -- D.1 The LA_GESV Test Results -- Appendix: E -- E.1 LAPACK90 User Callable Routines 
653 |a Computer graphics 
653 |a Computer science 
653 |a Algorithms 
653 |a Processor Architectures 
653 |a Microprocessors 
653 |a Theory of Computation 
653 |a Computer architecture 
700 1 |a Rajasekaran, Sanguthevar  |e [editor] 
041 0 7 |a eng  |2 ISO 639-2 
989 |b SBA  |a Springer Book Archives -2004 
490 0 |a Combinatorial Optimization 
028 5 0 |a 10.1007/978-1-4613-3282-4 
856 4 0 |u https://doi.org/10.1007/978-1-4613-3282-4?nosfx=y  |x Verlag  |3 Volltext 
082 0 |a 518.1 
520 |a The technique of randomization has been employed to solve numerous problems of computing, both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance, both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: In the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n^2), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at the O(n log n) average run time for quicksort is that each input permutation is equally likely. Clearly, any average case analysis is only as good as the validity of the assumption made on the input space. Randomized algorithms achieve superior performance without making any assumptions on the inputs, by making coin flips within the algorithm. Any analysis of a randomized algorithm is valid for all possible inputs.
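The coin-flip idea described in the abstract can be illustrated with randomized quicksort, the standard example of this technique: by choosing the pivot uniformly at random, the expected O(n log n) running time holds for every input, with no assumption that all input permutations are equally likely. A minimal sketch in Python (an illustration, not code from the book):

```python
import random

def randomized_quicksort(a):
    """Sort a list using quicksort with a uniformly random pivot.

    The random pivot choice is the algorithm's internal "coin flip":
    the expected running time is O(n log n) for *every* input, rather
    than only on average over an assumed input distribution.
    """
    if len(a) <= 1:
        return a[:]
    pivot = random.choice(a)  # coin flip: pick a pivot at random
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)
```

Even an adversarially ordered input (e.g. an already sorted list, which drives a fixed-pivot quicksort to its O(n^2) worst case) is handled in expected O(n log n) time, since the expectation is over the algorithm's own random choices.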