- •CONTENTS
- •PREFACE
- •Abstract
- •1. Introduction
- •2.1. Differential Geometry of Space Curves
- •2.2. Inverse Problem Formulation
- •2.3. Reconstruction of Unique Space Curves
- •3. Rigid Motion Estimation by Tracking the Space Curves
- •4. Motion Estimation Using Double Stereo Rigs
- •4.1. Single Stereo Rig
- •4.2. Double Stereo Rigs
- •5.1. Space-Time or Virtual Camera Generation
- •5.2. Visual Hull Reconstruction from Silhouettes of Multiple Views
- •5.2.1. Volume Based Visual Hull
- •5.2.1.1. Intersection Test in Octree Cubes
- •5.2.1.2. Synthetic Model Results
- •5.2.2. Edge Based Visual Hull
- •5.2.2.1. Synthetic Model Results
- •Implementation and Experimental Results
- •Conclusions
- •Acknowledgment
- •References
- •Abstract
- •Introduction: Ocular Dominance
- •Demography of Ocular Dominance
- •A Taxonomy of Ocular Dominance
- •Is Ocular Dominance Test Specific?
- •I. Tests of Rivalry
- •II. Tests of Asymmetry
- •III. Sighting Tests
- •Some Misconceptions
- •Resolving the Paradox of Ocular Dominance
- •Some Clinical Implications of Ocular Dominance
- •Conclusion
- •References
- •Abstract
- •1. Introduction
- •2. Basic Theory
- •3. Bezier Networks for Surface Contouring
- •4. Parameter of the Vision System
- •5. Experimental Results
- •Conclusions
- •References
- •Abstract
- •Introduction
- •Terminology (Definitions)
- •Clinical Assessment
- •Examination Techniques: Motility
- •Ocular Motility Recordings
- •Semiautomatic Analysis of Eye Movement Recordings
- •Slow Eye Movements in Congenital Nystagmus
- •Conclusion
- •References
- •EVOLUTION OF COMPUTER VISION SYSTEMS
- •Abstract
- •Introduction
- •Present-Day Level of CVS Development
- •Full-Scale Universal CVS
- •Integration of CVS and AI Control System
- •Conclusion
- •References
- •Introduction
- •1. Advantages of Binocular Vision
- •2. Foundations of Binocular Vision
- •3. Stereopsis as the Highest Level of Binocular Vision
- •4. Binocular Viewing Conditions on Pupil Near Responses
- •5. Development of Binocular Vision
- •Conclusion
- •References
- •Abstract
- •Introduction
- •Methods
- •Results
- •Discussion
- •Conclusion
- •References
- •Abstract
- •1. Preferential Processing of Emotional Stimuli
- •1.1. Two Pathways for the Processing of Emotional Stimuli
- •1.2. Intensive Processing of Negative Valence or of Arousal?
- •2. "Blind" in One Eye: Binocular Rivalry
- •2.1. What Helmholtz Knew Already
- •2.3. Possible Influences from Non-visual Neuronal Circuits
- •3.1. Significance and Predominance
- •3.2. Emotional Discrepancy and Binocular Rivalry
- •4. Binocular Rivalry Experiments at Our Lab
- •4.1. Predominance of Emotional Scenes
- •4.1.1. Possible Confounds
- •4.2. Dominance of Emotional Facial Expressions
- •4.3. Inter-Individual Differences: Phobic Stimuli
- •4.4. Controlling for Physical Properties of Stimuli
- •4.5. Validation of Self-report
- •4.6. Summary
- •References
- •Abstract
- •1. Introduction
- •2. Algorithm Overview
- •3. Road Surface Estimation
- •3.1. 3D Data Point Projection and Cell Selection
- •3.2. Road Plane Fitting
- •3.2.1. Dominant 2D Straight Line Parametrisation
- •3.2.2. Road Plane Parametrisation
- •4. Road Scanning
- •5. Candidate Filtering
- •6. Experimental Results
- •7. Conclusions
- •Acknowledgements
- •References
- •DEVELOPMENT OF SACCADE CONTROL
- •Abstract
- •1. Introduction
- •2. Fixation and Fixation Stability
- •2.1. Monocular Instability
- •2.2. Binocular Instability
- •2.3. Eye Dominance in Binocular Instability
- •3. Development of Saccade Control
- •3.1. The Optomotor Cycle and the Components of Saccade Control
- •3.4. Antisaccades: Voluntary Saccade Control
- •3.5. The Age Curves of Saccade Control
- •3.6. Left–Right Asymmetries
- •3.7. Correlations and Independence
- •References
- •OCULAR DOMINANCE
- •INDEX
New Trends in Surface Reconstruction Using Space-Time Cameras
carving [35, 36] and view ray sampling [37]. In the voxel carving method, a discrete set of voxels is constructed around the volume of interest. Each voxel is then checked against all silhouettes, and any voxel that projects outside a silhouette is removed from the volume. Voxel carving can be accelerated with an octree representation, which employs a coarse-to-fine hierarchy. In view ray sampling, a sampled representation of the visual hull is constructed in a view-dependent manner: for each viewing ray in some desired view, the intersection points with all surfaces of the visual hull are computed. Moezzi et al. [38] construct the visual hull from voxels in an off-line processing system. Cheung et al. [39, 40] show that the voxel method can achieve interactive reconstruction results. The polyhedral visual hull system developed by Matusik et al. [41] also runs at interactive rates.
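The carving test just described can be sketched as follows; a minimal Python illustration under simplifying assumptions (orthographic projections and a single reused silhouette in the toy example), not the voxel carving implementation of [35, 36]:

```python
import numpy as np

def carve(voxels, silhouettes, project):
    """Keep only voxels whose projection falls inside every silhouette.

    voxels      : (N, 3) array of voxel centres
    silhouettes : list of 2D boolean images (True = inside silhouette)
    project     : function (voxels, view_index) -> (N, 2) integer pixels
    """
    keep = np.ones(len(voxels), dtype=bool)
    for i, sil in enumerate(silhouettes):
        px = project(voxels, i)
        h, w = sil.shape
        inside = (px[:, 0] >= 0) & (px[:, 0] < h) & (px[:, 1] >= 0) & (px[:, 1] < w)
        vals = np.zeros(len(voxels), dtype=bool)
        vals[inside] = sil[px[inside, 0], px[inside, 1]]
        keep &= vals          # a voxel outside any silhouette is carved away
    return voxels[keep]

# toy example: a 10x10x10 grid carved by two orthographic views of a square
grid = np.stack(np.meshgrid(*[np.arange(10)] * 3, indexing="ij"), -1).reshape(-1, 3)
sil = np.zeros((10, 10), dtype=bool)
sil[3:7, 3:7] = True
proj = lambda v, i: v[:, [0, 1]] if i == 0 else v[:, [0, 2]]
hull = carve(grid, [sil, sil], proj)
```

With the two axis-aligned views above, the surviving voxels form a 4x4x4 block, which is the visual hull of the square silhouettes under these projections.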
In this section, two efficient algorithms are presented to speed up visual hull extraction. The first algorithm accelerates the voxel carving method by reducing the number of check points in the intersection test procedure. The octree division method is optimized, by minimizing the number of check points, to find the intersection between cubes and silhouette images. To this end, points are checked on the edges of the octree cubes rather than inside the volume. Furthermore, the points are checked hierarchically, and their number changes according to the size of the octree cubes. The second algorithm employs the ray sampling method to extract the bounding-edge model of the visual hull. To find the segments of a ray that lie inside the other silhouette cones, the points of the ray are checked hierarchically.
5.2.1. Volume Based Visual Hull
Many algorithms have been developed to construct volumetric models from a set of silhouette images [35, 36, 37, 39]. Starting from a bounding volume that is known to surround the whole scene, the volume is divided into voxels. The task is to find which voxels belong to the surface of the 3D object, corresponding to the intersection of the back-projected silhouette cones. The most important step in these algorithms is the intersection test. To make the projection and intersection test more efficient, most methods use an octree representation and test voxels in a coarse-to-fine hierarchy.
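The coarse-to-fine traversal can be sketched as a recursion over grey cubes; a hedged Python illustration in which the classify function (here a crude, deliberately conservative sphere test) stands in for the silhouette intersection test described in the next subsection:

```python
import math

def refine(cube, classify, level, max_level, out):
    """Recursively subdivide grey (intersected) cubes, coarse to fine.
    cube is (origin, size); out collects the cubes kept at the end."""
    mark = classify(cube)
    if mark == 'outside':
        return                        # carved away: no children generated
    if mark == 'inside' or level == max_level:
        out.append(cube)              # solid, or finest-level surface cube
        return
    (x, y, z), s = cube
    h = s / 2.0
    for dx in (0.0, h):               # split the grey cube into 8 sub-cubes
        for dy in (0.0, h):
            for dz in (0.0, h):
                refine(((x + dx, y + dy, z + dz), h),
                       classify, level + 1, max_level, out)

def sphere_classify(cube):
    """Stand-in test against a sphere of radius 0.4 centred at (0.5, 0.5, 0.5);
    conservative about declaring a cube 'outside'."""
    (x, y, z), s = cube
    corners = [(x + a, y + b, z + c) for a in (0, s) for b in (0, s) for c in (0, s)]
    d = [math.dist(p, (0.5, 0.5, 0.5)) for p in corners]
    if min(d) > 0.4 + s * math.sqrt(3):   # whole cube provably outside
        return 'outside'
    if max(d) < 0.4:                      # whole cube inside the sphere
        return 'inside'
    return 'grey'

cubes = []
refine(((0.0, 0.0, 0.0), 1.0), sphere_classify, 0, 3, cubes)
```

Because outside cubes return without spawning children, the work concentrates on the grey (surface) cubes, which is exactly what makes the octree hierarchy cheaper than testing a dense voxel grid.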
5.2.1.1. Intersection Test in Octree Cubes
Hossein Ebrahimnezhad and Hassan Ghassemian

The most important and time-consuming part of octree reconstruction is checking the intersection of the cubes with the silhouette images. All algorithms use one common rule to decide whether an intersection occurs between a cube and the object: a cube is marked as outside if all points inside it are "1" and as inside if all points inside it are "0"; an intersected cube is one that contains at least two points of different colors. The different point-checking methods are classified in figure 10. The number of check points may be constant for all cube sizes, or may change dynamically with the size of the cube. In all methods, the 8 corners of each cube are first checked by projecting them onto all the silhouettes. If at least two corners have different colors, an intersection is inferred and the process for this cube terminates; otherwise, more points in the cube are checked. Whenever a color difference is found, the cube is marked as intersected and the process terminates. If no color difference is found after all points have been checked, the cube is identified as outside (or inside) according to the color of its points, "1" (or "0").
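The marking rule above can be written compactly with early termination; a minimal Python sketch in which the colour convention follows the text ("1" outside, "0" inside), while the sample_colour combination rule across views and the project interface are our assumptions, not the authors' code:

```python
import numpy as np

def sample_colour(p, silhouettes, project):
    """Colour of a check point: '0' if it projects inside every silhouette,
    '1' otherwise (assumption: a hull point must agree with all views)."""
    for i, sil in enumerate(silhouettes):
        u, v = project(p, i)
        if not (0 <= u < sil.shape[0] and 0 <= v < sil.shape[1] and sil[u, v]):
            return 1
    return 0

def mark_cube(points, silhouettes, project):
    """Classify a cube from its check points with early termination:
    'intersected' as soon as two colours differ, else 'outside' (all '1')
    or 'inside' (all '0'), following the rule in the text."""
    first = None
    for p in points:                     # hierarchical ordering pays off here
        colour = sample_colour(p, silhouettes, project)
        if first is None:
            first = colour
        elif colour != first:
            return 'intersected'         # two different colours: stop early
    return 'outside' if first == 1 else 'inside'

# toy silhouette: a 2x2 square inside a 4x4 image; identity "projection"
sil = np.zeros((4, 4), dtype=bool)
sil[1:3, 1:3] = True
proj = lambda p, i: (int(p[0]), int(p[1]))
```

For example, mark_cube([(1, 1), (1, 2)], [sil], proj) returns 'inside', while mixing the check points (0, 0) and (1, 1) returns 'intersected' after only two samples.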
To compare the complexity of the different types of intersection check in the octree cubes, the following parameters are considered: L is the level of octree division; C_L is the number of grey (intersected, or surface) cubes in level L; N_L is the maximum number of check points needed to identify the mark of a cube in level L; and S is the number of silhouettes. Since each grey cube is divided into 8 sub-cubes in octree division, the number of grey cubes in level L−1 will be equal to or greater than 1/8 of the grey cubes in level L, depending on the number of grey child cubes. The total number of point projections onto the silhouette images in the worst case is:
N_tot(max) = S (N_L C_L + N_{L−1} C_{L−1} + N_{L−2} C_{L−2} + N_{L−3} C_{L−3} + ···)
           ≥ S C_L (N_L + N_{L−1}/8 + N_{L−2}/64 + N_{L−3}/512 + ···)          (45)
Figure 10. Checking methods in octree cubes. a) Sequential check in volume b) Random check in volume c) Sequential check on edges d) Hierarchical check on edges.
Obviously, the total number of check points will be smaller than N_tot(max), because intersected cubes are normally recognized early in the checking. This number can therefore be used as a measure to compare different methods of intersection check; the computing time is proportional to N_tot.

Edge-based checking is one approach to decreasing the number of check points needed to identify the mark of a cube without loss of accuracy. For one-piece objects, any intersection between a cube and a silhouette must occur through the cube's edges. There is one exceptional case: when the object is small and lies inside the cube, so that the object intersects the cube through a face but not through any edge. In such cases the edge-based method cannot decide whether the object is inside the cube or intersects it through a face, and checking some points inside the volume is inevitable. If the size of the bounding cube is chosen properly, comparable to that of the object, the cube will be larger than the object only in the first level of division, so the ambiguity of edge-based intersection testing remains only for the first level. Since the octree division is always performed in the first level without checking for intersection, the edge-based intersection test can be applied with certainty to one-piece objects.

Another approach to decreasing the number of check points is to change the number of points dynamically at each level. Large cubes may intersect small parts of a silhouette, so more points must be checked to detect the intersection; in small cubes this situation cannot occur, and there is no need to check as many points. By choosing N_L = 8 (checking only the corners of the cube in the last level) and increasing the number of check points by a factor of k in the lower levels, N_tot(max) is minimized as below:
N_tot(max) ≥ S C_L (8 + 8k/8 + 8k²/64 + 8k³/512 + ···) = 8 S C_L (1 + k/8 + k²/64 + k³/512 + ···)          (46)
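The worst-case bound of equation (46) is easy to evaluate numerically; a small sketch in which the values of S, C_L, the truncation depth, and k are illustrative assumptions:

```python
def ntot_bound(S, C_L, k, terms=4):
    """Worst-case projection count of Eq. (46): 8*S*C_L * sum_i (k/8)**i,
    truncated after `terms` levels of the octree."""
    return 8 * S * C_L * sum((k / 8.0) ** i for i in range(terms))

# illustrative values: 18 silhouettes, 1000 grey cubes at the finest level
print(ntot_bound(S=18, C_L=1000, k=2))   # 191250.0
```

For k < 8 the series converges geometrically, so the cost is dominated by the corner checks of the finest level, which is the point of the dynamic-number strategy.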
The final approach to increasing the speed is to check the edge points hierarchically. In this way, the chance of finding two points of different colors in the early checks is increased.
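One way to realize such a hierarchical check is a midpoint-first (bisection) enumeration of the points along an edge, so that widely separated points are compared first; the specific ordering below is our interpretation of "hierarchical", not necessarily the authors' exact scheme:

```python
def hierarchical_order(n):
    """Order indices 0..n-1 endpoints-first, then midpoints of ever smaller
    intervals, so points far apart on the edge are compared early."""
    if n == 1:
        return [0]
    order, seen = [0, n - 1], {0, n - 1}
    intervals = [(0, n - 1)]
    while intervals:
        nxt = []
        for a, b in intervals:
            m = (a + b) // 2
            if m not in seen:
                order.append(m)
                seen.add(m)
            if m - a > 1:
                nxt.append((a, m))
            if b - m > 1:
                nxt.append((m, b))
        intervals = nxt
    return order

print(hierarchical_order(9))   # [0, 8, 4, 2, 6, 1, 3, 5, 7]
```

A cube edge crossing the silhouette boundary then tends to reveal two differently coloured points within the first few samples, triggering the early termination of the marking rule.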
5.2.1.2. Synthetic Model Results
To determine the capability of the presented algorithm and to quantify its performance, we have tested it on synthetically generated models named Bunny and Horse. The simulation was run on a Pentium III 933 MHz PC using Matlab and C++-generated MEX files. In this analysis, 18 silhouettes of the Bunny from equally spaced viewing angles have been used. Figure 11 shows the simulation results for the different methods. In the figure, 'CN' and 'DN' denote a Constant Number or Dynamic Number of check points for the different cube sizes; 'S', 'H' and 'R' denote the Sequential, Hierarchical and Random point-checking methods, respectively; and the last letter, 'V' or 'E', indicates whether the check points are selected inside the Volume or on the Edges of the cube. To compare the efficiency of the methods, the computing time for a fixed number of recovered cubes (voxels) in the last level is evaluated for each type of intersection check. As the figure shows, the DNHE method gives the best result and the CNRV method the worst. The computing time of the random check method is high because some check points may be chosen near each other, as illustrated in figure 10-b.
[Figure 11 plot: computing time in seconds (log scale, 1–1000) versus the number of recovered voxels of the visual hull (6880–7000), with curves for the CNSV, DNSV, CNSE, CNHE, DNHE and CNRV methods.]
Figure 11. Computation time for different types of intersection check.
In figure 12, the 3D shape of a synthetic object named Bunny has been reconstructed using the DNHE algorithm. The synthetic object has been captured from different views, and 18 silhouettes of the object have been prepared. These silhouettes are shown in figure 12-a. The different levels of octree division are illustrated in figure 12-b, and the depth map of the reconstructed 3D model is shown in figure 12-c
from different views. Figure 13 shows the result of shape reconstruction for another synthetic object named Horse.
Figure 12. Three-dimensional shape reconstruction of the Bunny from 18 silhouettes using the DNHE algorithm. a) silhouettes of the object from 18 view angles; b) different levels of octree division using the DNHE algorithm; c) depth map of the reconstructed 3D model from different view angles.
Figure 13. Three-dimensional shape reconstruction of the Horse from 18 silhouettes using the DNHE algorithm. a) silhouettes of the object from 18 view angles; b) different levels of octree division using the DNHE algorithm; c) depth map of the reconstructed 3D model from different view angles.