Qualitative Motion Understanding

Mobile robots operating in real-world, outdoor scenarios depend on dynamic scene understanding for detecting and avoiding obstacles, recognizing landmarks, acquiring models, and for detecting and tracking moving objects. Motion understanding has been an active research effort for more than a decade,...


Bibliographic Details
Main Authors: Burger, Wilhelm; Bhanu, Bir (Author)
Format: eBook
Language: English
Published: New York, NY : Springer US, 1992
Edition: 1st ed. 1992
Series: The Springer International Series in Engineering and Computer Science
Collection: Springer Book Archives -2004 - Collection details see MPG.ReNa
LEADER 03973nmm a2200385 u 4500
001 EB000625155
003 EBX01000000000000000478237
005 00000000000000.0
007 cr|||||||||||||||||||||
008 140122 ||| eng
020 |a 9781461535669 
100 1 |a Burger, Wilhelm 
245 0 0 |a Qualitative Motion Understanding  |h Elektronische Ressource  |c by Wilhelm Burger, Bir Bhanu 
250 |a 1st ed. 1992 
260 |a New York, NY  |b Springer US  |c 1992, 1992 
300 |a XIII, 210 p  |b online resource 
505 0 |a 1 Introduction -- 1.1 Aims of Motion Understanding -- 1.2 Autonomous Land Vehicle Navigation -- 1.3 Multi-Level Vision and Motion Analysis -- 1.4 Approaches to Motion Understanding -- 1.5 Outline of this Book -- 2 Framework for Qualitative Motion Understanding -- 2.1 Moving Through a Changing Environment -- 2.2 The “DRIVE” Approach -- 2.3 Low-Level Motion -- 2.4 Camera Motion and Scene Structure -- 2.5 Detecting 3-D Motion -- 2.6 Qualitative Modeling and Reasoning -- 3 Effects of Camera Motion -- 3.1 Viewing Geometry -- 3.2 Effects of Camera Rotation -- 3.3 Computing the Camera Rotation Angles -- 3.4 Effects of Camera Translation -- 3.5 Computing the Translation Parameters -- 4 Decomposing Image Motion -- 4.1 Motion Between Successive Frames -- 4.2 FOE from Rotations -- 4.3 Rotations from FOE -- 5 The Fuzzy FOE -- 5.1 Avoiding Unrealistic Precision -- 5.2 Defining the Fuzzy FOE -- 5.3 Computing the Fuzzy FOE -- 5.4 Experiments -- 6 Reasoning about Structure and Motion -- 6.1 Abstracting Image Events -- 6.2 Interpreting Image Events -- 6.3 Reasoning About 3-D Scene Structure -- 6.4 Reasoning About 3-D Motion -- 7 The Qualitative Scene Model -- 7.1 Basic Elements of the Model -- 7.2 Representing Multiple Interpretations -- 7.3 Conflict Resolution -- 7.4 Dynamic Evolution of the QSM -- 8 Examples -- 8.1 Simulated Data -- 8.2 Real Data -- 8.3 Implementation Issues -- 9 Summary -- A.1 Geometric Constraint Method for Camera Motion -- A.2 Estimating Absolute Velocity -- References. 
653 |a Image processing / Digital techniques 
653 |a Control, Robotics, Automation 
653 |a Computer vision 
653 |a Artificial Intelligence 
653 |a Computer Vision 
653 |a Computer Imaging, Vision, Pattern Recognition and Graphics 
653 |a Control engineering 
653 |a Artificial intelligence 
653 |a Robotics 
653 |a Automation 
700 1 |a Bhanu, Bir  |e [author] 
041 0 7 |a eng  |2 ISO 639-2 
989 |b SBA  |a Springer Book Archives -2004 
490 0 |a The Springer International Series in Engineering and Computer Science 
028 5 0 |a 10.1007/978-1-4615-3566-9 
856 4 0 |u https://doi.org/10.1007/978-1-4615-3566-9?nosfx=y  |x Verlag  |3 Volltext 
082 0 |a 006 
520 |a Mobile robots operating in real-world, outdoor scenarios depend on dynamic scene understanding for detecting and avoiding obstacles, recognizing landmarks, acquiring models, and for detecting and tracking moving objects. Motion understanding has been an active research effort for more than a decade, searching for solutions to some of these problems; however, it still remains one of the more difficult and challenging areas of computer vision research. Qualitative Motion Understanding describes a qualitative approach to dynamic scene and motion analysis, called DRIVE (Dynamic Reasoning from Integrated Visual Evidence). The DRIVE system addresses the problems of (a) estimating the robot's egomotion; (b) reconstructing the observed 3-D scene structure; and (c) evaluating the motion of individual objects from a sequence of monocular images. The approach is based on the FOE (focus of expansion) concept, but it takes a somewhat unconventional route. The DRIVE system uses a qualitative scene model and a fuzzy focus of expansion to estimate robot motion from visual cues, to detect and track moving objects, and to construct and maintain a global dynamic reference model.
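
Illustrative note on the FOE concept named in the abstract: the sketch below (Python/NumPy) is not the book's DRIVE or fuzzy-FOE method, only a minimal reminder of the underlying idea. Assuming a purely translating camera and a rigid scene, every optical-flow vector lies on a line through the focus of expansion, so a single crisp FOE can be recovered from a few flow vectors by linear least squares; all coordinates, values, and names here are hypothetical.

import numpy as np

def estimate_foe(points, flows):
    # points: (N, 2) image positions; flows: (N, 2) optical-flow vectors (u, v).
    # Under pure camera translation each flow vector is collinear with the ray
    # from the FOE (xf, yf) to its image point (x, y):
    #   v*(xf - x) - u*(yf - y) = 0  ->  v*xf - u*yf = v*x - u*y.
    # Stacking one such linear constraint per point gives A @ [xf, yf] = b.
    x, y = points[:, 0], points[:, 1]
    u, v = flows[:, 0], flows[:, 1]
    A = np.column_stack((v, -u))
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe  # (xf, yf) in image coordinates

# Toy check with a synthetic, noise-free expansion field around (160, 120).
true_foe = np.array([160.0, 120.0])
pts = np.random.default_rng(0).uniform([0, 0], [320, 240], size=(50, 2))
flw = 0.05 * (pts - true_foe)
print(estimate_foe(pts, flw))  # approximately [160. 120.]

The book's fuzzy FOE, by contrast, deliberately avoids committing to a single precise point (cf. Chapter 5, "The Fuzzy FOE", Section 5.1 "Avoiding Unrealistic Precision" in the contents above).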