- •CONTENTS
- •PREFACE
- •Abstract
- •1. Introduction
- •2.1. Differential Geometry of Space Curves
- •2.2. Inverse Problem Formulation
- •2.3. Reconstruction of Unique Space Curves
- •3. Rigid Motion Estimation by Tracking the Space Curves
- •4. Motion Estimation Using Double Stereo Rigs
- •4.1. Single Stereo Rig
- •4.2. Double Stereo Rigs
- •5.1. Space-Time or Virtual Camera Generation
- •5.2. Visual Hull Reconstruction from Silhouettes of Multiple Views
- •5.2.1. Volume Based Visual Hull
- •5.2.1.1. Intersection Test in Octree Cubes
- •5.2.1.2. Synthetic Model Results
- •5.2.2. Edge Based Visual Hull
- •5.2.2.1. Synthetic Model Results
- •Implementation and Experimental Results
- •Conclusions
- •Acknowledgment
- •References
- •Abstract
- •Introduction: Ocular Dominance
- •Demography of Ocular Dominance
- •A Taxonomy of Ocular Dominance
- •Is Ocular Dominance Test Specific?
- •I. Tests of Rivalry
- •II. Tests of Asymmetry
- •III. Sighting Tests
- •Some Misconceptions
- •Resolving the Paradox of Ocular Dominance
- •Some Clinical Implications of Ocular Dominance
- •Conclusion
- •References
- •Abstract
- •1. Introduction
- •2. Basic Theory
- •3. Bezier Networks for Surface Contouring
- •4. Parameter of the Vision System
- •5. Experimental Results
- •Conclusions
- •References
- •Abstract
- •Introduction
- •Terminology (Definitions)
- •Clinical Assessment
- •Examination Techniques: Motility
- •Ocular Motility Recordings
- •Semiautomatic Analysis of Eye Movement Recordings
- •Slow Eye Movements in Congenital Nystagmus
- •Conclusion
- •References
- •EVOLUTION OF COMPUTER VISION SYSTEMS
- •Abstract
- •Introduction
- •Present-Day Level of CVS Development
- •Full-Scale Universal CVS
- •Integration of CVS and AI Control System
- •Conclusion
- •References
- •Introduction
- •1. Advantages of Binocular Vision
- •2. Foundations of Binocular Vision
- •3. Stereopsis as the Highest Level of Binocular Vision
- •4. Binocular Viewing Conditions on Pupil Near Responses
- •5. Development of Binocular Vision
- •Conclusion
- •References
- •Abstract
- •Introduction
- •Methods
- •Results
- •Discussion
- •Conclusion
- •References
- •Abstract
- •1. Preferential Processing of Emotional Stimuli
- •1.1. Two Pathways for the Processing of Emotional Stimuli
- •1.2. Intensive Processing of Negative Valence or of Arousal?
- •2. "Blind" in One Eye: Binocular Rivalry
- •2.1. What Helmholtz Knew Already
- •2.3. Possible Influences from Non-visual Neuronal Circuits
- •3.1. Significance and Predominance
- •3.2. Emotional Discrepancy and Binocular Rivalry
- •4. Binocular Rivalry Experiments at Our Lab
- •4.1. Predominance of Emotional Scenes
- •4.1.1. Possible Confounds
- •4.2. Dominance of Emotional Facial Expressions
- •4.3. Inter-Individual Differences: Phobic Stimuli
- •4.4. Controlling for Physical Properties of Stimuli
- •4.5. Validation of Self-report
- •4.6. Summary
- •References
- •Abstract
- •1. Introduction
- •2. Algorithm Overview
- •3. Road Surface Estimation
- •3.1. 3D Data Point Projection and Cell Selection
- •3.2. Road Plane Fitting
- •3.2.1. Dominant 2D Straight Line Parametrisation
- •3.2.2. Road Plane Parametrisation
- •4. Road Scanning
- •5. Candidate Filtering
- •6. Experimental Results
- •7. Conclusions
- •Acknowledgements
- •References
- •DEVELOPMENT OF SACCADE CONTROL
- •Abstract
- •1. Introduction
- •2. Fixation and Fixation Stability
- •2.1. Monocular Instability
- •2.2. Binocular Instability
- •2.3. Eye Dominance in Binocular Instability
- •3. Development of Saccade Control
- •3.1. The Optomotor Cycle and the Components of Saccade Control
- •3.4. Antisaccades: Voluntary Saccade Control
- •3.5. The Age Curves of Saccade Control
- •3.6. Left – Right Asymmetries
- •3.7. Correlations and Independence
- •References
- •OCULAR DOMINANCE
- •INDEX
New Trends in Surface Reconstruction Using Space-Time Cameras
of the object's silhouette. The silhouette of an object in an image is the curve that separates the object from the background. The visual hull cannot recover concave regions, regardless of the number of images used. In addition, it needs a large number of different views to recover fine details. To mitigate the first drawback, the visual hull can be combined with stereo matching. To address the second, more silhouettes of the object can be captured by a limited number of cameras across time. Cheng et al. presented a method that enhances the shape approximation by combining multiple silhouette images captured across time [34]. Employing a basic property of the visual hull, which states that each bounding edge must touch the object in at least one point, they use multi-view stereo to extract these touching points, called Colored Surface Points (CSPs), on the surface of the object. The CSPs are then fed to a 3D alignment algorithm that finds the six rotation and translation parameters of the rigid motion between two visual hulls, using the color consistency of the object to align the points. Once the rigid motion across time is known, all of the silhouette images can be treated as if captured at the same time instant, and the shape of the object is refined.
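The alignment step above reduces to estimating a rotation and translation from matched 3D points. As an illustration only (not Cheng et al.'s implementation, and ignoring the color-consistency matching), a least-squares rigid alignment of two already-matched point sets can be computed with the classical Kabsch/Procrustes method:

```python
import numpy as np

def rigid_motion(src, dst):
    """Estimate rotation R and translation t such that dst ~ R @ src + t,
    in the least-squares sense (Kabsch algorithm).
    src, dst: (N, 3) arrays of corresponding 3D points,
    e.g. matched surface points from two visual hulls."""
    c_src = src.mean(axis=0)                      # centroids
    c_dst = dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

The six motion parameters (three rotation angles, three translation components) can then be read off R and t.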
Motion estimation by the CSP method suffers from some drawbacks. Inaccurate color calibration between cameras introduces errors into the color-consistency test. Moreover, variation of the lighting angle as the object moves around the light source produces additional error.
Our motion estimation method, which uses only edge information in the form of space curves, is robust against color miscalibration of the cameras and against shading changes during object motion. Moreover, it can be effectively used to extract the visual hull of poorly textured objects.
In the remainder of this section, it is assumed that the motion parameters are known for multiple views of the object, and the goal is to reconstruct the 3D shape of the object from silhouette information across time.
5.1. Space-Time or Virtual Camera Generation
Let P, defined in Eq. 40, be the projection matrix of the camera, which maps a 3D point W in world coordinates to the point (x_im, y_im) in the image coordinates of the camera plane:

    w [x_im, y_im, 1]^T = P W
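Given the projection matrices of the real and virtual (space-time) cameras, a volume-based visual hull can be approximated by testing whether each voxel projects inside every silhouette. The following is a minimal sketch of that idea, with illustrative names (it uses a flat voxel grid rather than the octree subdivision discussed in Section 5.2.1):

```python
import numpy as np

def carve_visual_hull(voxels, cameras, silhouettes):
    """Keep only voxels whose projection lies inside every silhouette.
    voxels: (N, 3) voxel centers in world coordinates.
    cameras: list of 3x4 projection matrices P.
    silhouettes: list of boolean images, True = object pixel."""
    keep = np.ones(len(voxels), dtype=bool)
    hom = np.hstack([voxels, np.ones((len(voxels), 1))])  # homogeneous coords
    for P, sil in zip(cameras, silhouettes):
        proj = hom @ P.T                         # w * (x_im, y_im, 1)
        x = (proj[:, 0] / proj[:, 2]).round().astype(int)
        y = (proj[:, 1] / proj[:, 2]).round().astype(int)
        h, w = sil.shape
        inside = (x >= 0) & (x < w) & (y >= 0) & (y < h)
        ok = np.zeros(len(voxels), dtype=bool)
        ok[inside] = sil[y[inside], x[inside]]   # silhouette membership test
        keep &= ok                               # must pass in every view
    return voxels[keep]
```

Silhouettes captured at different time instants simply contribute additional (P, silhouette) pairs once the estimated rigid motion has been folded into their projection matrices.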