- •CONTENTS
- •PREFACE
- •Abstract
- •1. Introduction
- •2.1. Differential Geometry of Space Curves
- •2.2. Inverse Problem Formulation
- •2.3. Reconstruction of Unique Space Curves
- •3. Rigid Motion Estimation by Tracking the Space Curves
- •4. Motion Estimation Using Double Stereo Rigs
- •4.1. Single Stereo Rig
- •4.2. Double Stereo Rigs
- •5.1. Space-Time or Virtual Camera Generation
- •5.2. Visual Hull Reconstruction from Silhouettes of Multiple Views
- •5.2.1. Volume Based Visual Hull
- •5.2.1.1. Intersection Test in Octree Cubes
- •5.2.1.2. Synthetic Model Results
- •5.2.2. Edge Based Visual Hull
- •5.2.2.1. Synthetic Model Results
- •Implementation and Experimental Results
- •Conclusions
- •Acknowledgment
- •References
- •Abstract
- •Introduction: Ocular Dominance
- •Demography of Ocular Dominance
- •A Taxonomy of Ocular Dominance
- •Is Ocular Dominance Test Specific?
- •I. Tests of Rivalry
- •II. Tests of Asymmetry
- •III. Sighting Tests
- •Some Misconceptions
- •Resolving the Paradox of Ocular Dominance
- •Some Clinical Implications of Ocular Dominance
- •Conclusion
- •References
- •Abstract
- •1. Introduction
- •2. Basic Theory
- •3. Bezier Networks for Surface Contouring
- •4. Parameter of the Vision System
- •5. Experimental Results
- •Conclusions
- •References
- •Abstract
- •Introduction
- •Terminology (Definitions)
- •Clinical Assessment
- •Examination Techniques: Motility
- •Ocular Motility Recordings
- •Semiautomatic Analysis of Eye Movement Recordings
- •Slow Eye Movements in Congenital Nystagmus
- •Conclusion
- •References
- •EVOLUTION OF COMPUTER VISION SYSTEMS
- •Abstract
- •Introduction
- •Present-Day Level of CVS Development
- •Full-Scale Universal CVS
- •Integration of CVS and AI Control System
- •Conclusion
- •References
- •Introduction
- •1. Advantages of Binocular Vision
- •2. Foundations of Binocular Vision
- •3. Stereopsis as the Highest Level of Binocular Vision
- •4. Binocular Viewing Conditions on Pupil Near Responses
- •5. Development of Binocular Vision
- •Conclusion
- •References
- •Abstract
- •Introduction
- •Methods
- •Results
- •Discussion
- •Conclusion
- •References
- •Abstract
- •1. Preferential Processing of Emotional Stimuli
- •1.1. Two Pathways for the Processing of Emotional Stimuli
- •1.2. Intensive Processing of Negative Valence or of Arousal?
- •2. "Blind" in One Eye: Binocular Rivalry
- •2.1. What Helmholtz Knew Already
- •2.3. Possible Influences from Non-visual Neuronal Circuits
- •3.1. Significance and Predominance
- •3.2. Emotional Discrepancy and Binocular Rivalry
- •4. Binocular Rivalry Experiments at Our Lab
- •4.1. Predominance of Emotional Scenes
- •4.1.1. Possible Confounds
- •4.2. Dominance of Emotional Facial Expressions
- •4.3. Inter-Individual Differences: Phobic Stimuli
- •4.4. Controlling for Physical Properties of Stimuli
- •4.5. Validation of Self-report
- •4.6. Summary
- •References
- •Abstract
- •1. Introduction
- •2. Algorithm Overview
- •3. Road Surface Estimation
- •3.1. 3D Data Point Projection and Cell Selection
- •3.2. Road Plane Fitting
- •3.2.1. Dominant 2D Straight Line Parametrisation
- •3.2.2. Road Plane Parametrisation
- •4. Road Scanning
- •5. Candidate Filtering
- •6. Experimental Results
- •7. Conclusions
- •Acknowledgements
- •References
- •DEVELOPMENT OF SACCADE CONTROL
- •Abstract
- •1. Introduction
- •2. Fixation and Fixation Stability
- •2.1. Monocular Instability
- •2.2. Binocular Instability
- •2.3. Eye Dominance in Binocular Instability
- •3. Development of Saccade Control
- •3.1. The Optomotor Cycle and the Components of Saccade Control
- •3.4. Antisaccades: Voluntary Saccade Control
- •3.5. The Age Curves of Saccade Control
- •3.6. Left – Right Asymmetries
- •3.7. Correlations and Independence
- •References
- •OCULAR DOMINANCE
- •INDEX
Hossein Ebrahimnezhad and Hassan Ghassemian
points to the tracking curves, with more precise motion parameters, can be achieved.
Once the six motion parameters have been estimated for two consecutive frames of the sequence, the motion matrix can be constructed as:
$$
M = \begin{bmatrix} R(\varphi_x,\varphi_y,\varphi_z) & T \\ 0\quad 0\quad 0 & 1 \end{bmatrix} \qquad (27)
$$
Where:

$$
R(\varphi_x,\varphi_y,\varphi_z) =
\begin{bmatrix} \cos\varphi_z & \sin\varphi_z & 0 \\ -\sin\varphi_z & \cos\varphi_z & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} \cos\varphi_y & 0 & -\sin\varphi_y \\ 0 & 1 & 0 \\ \sin\varphi_y & 0 & \cos\varphi_y \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\varphi_x & \sin\varphi_x \\ 0 & -\sin\varphi_x & \cos\varphi_x \end{bmatrix};
\quad T = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix} \qquad (28)
$$
The new position of each point in the next frame can be calculated by multiplying the motion matrix by the position vector:

$$
W^{(n+1)} = M_n\, W^{(n)}, \quad \text{where } W^{(1)} = \left(X_w,\, Y_w,\, Z_w,\, 1\right)^{T} \qquad (29)
$$
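As a sketch of Eqs. 27–29, assuming the Rz·Ry·Rx factorization of Eq. 28 (the function and variable names below are my own, not from the chapter), the motion matrix and the frame-to-frame point update can be written in NumPy:

```python
import numpy as np

def rotation_matrix(phi_x, phi_y, phi_z):
    """R(phi_x, phi_y, phi_z) = Rz @ Ry @ Rx, following Eq. 28."""
    cz, sz = np.cos(phi_z), np.sin(phi_z)
    cy, sy = np.cos(phi_y), np.sin(phi_y)
    cx, sx = np.cos(phi_x), np.sin(phi_x)
    Rz = np.array([[ cz, sz, 0.0],
                   [-sz, cz, 0.0],
                   [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, -sy],
                   [0.0, 1.0, 0.0],
                   [sy, 0.0,  cy]])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0,  cx,  sx],
                   [0.0, -sx,  cx]])
    return Rz @ Ry @ Rx

def motion_matrix(phi, t):
    """4x4 homogeneous motion matrix M of Eq. 27."""
    M = np.eye(4)
    M[:3, :3] = rotation_matrix(*phi)
    M[:3, 3] = t
    return M

# Eq. 29: propagate a homogeneous world point to the next frame.
M = motion_matrix((0.01, 0.02, 0.03), (0.5, 0.0, -0.2))
W1 = np.array([1.0, 2.0, 3.0, 1.0])   # (Xw, Yw, Zw, 1)^T
W2 = M @ W1
```

The homogeneous fourth coordinate stays 1 after the update, so the same matrix can be chained across frames.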
4. Motion Estimation Using Double Stereo Rigs
In this section, we present a double stereo configuration to achieve as much accuracy as possible in the estimation of motion parameters. The basic idea is to find an arrangement of stereo cameras in which the sensitivity of image pose variation to space pose variation is maximized. First, the single stereo setup is investigated; then a perpendicular double stereo configuration is presented and its superiority over the single stereo is demonstrated.
4.1. Single Stereo Rig
As mentioned in section 2.3, the baseline of the stereo rig is adjusted to be neither too small nor too wide, as a compromise between depth uncertainty and occlusion. Moreover, to utilize the linear part of the camera lens and to avoid the complex computations of nonlinear distortion, the view angle is chosen as small as possible. Hence, the size of the object is usually much smaller than its distance from the camera center, i.e. 2r << tz (see figure 8).
New Trends in Surface Reconstruction Using Space-Time Cameras
Figure 8. Single stereo setup with small view angle (tz >> 2r).
Now, we would like to answer the question of how much accuracy is achievable in space motion estimation by tracking the projections of points in the camera planes. For the sake of simplicity, we assume that the optical axis of camera1 lies along the depth direction of the world coordinate frame (i.e. Zw). The projection of any point (Xw, Yw, Zw) onto the image plane of camera1 is computed as:
$$
\left(x_{im1},\, y_{im1}\right) = \left(-f_{x1}\,\frac{X_w}{Z_w+t_z},\;\; -f_{y1}\,\frac{Y_w}{Z_w+t_z}\right) \qquad (30)
$$

By differentiating, we can write:

$$
\Delta x_{im1} = \frac{\partial x_{im1}}{\partial X_w}\,\Delta X_w + \frac{\partial x_{im1}}{\partial Y_w}\,\Delta Y_w + \frac{\partial x_{im1}}{\partial Z_w}\,\Delta Z_w, \qquad
\Delta y_{im1} = \frac{\partial y_{im1}}{\partial X_w}\,\Delta X_w + \frac{\partial y_{im1}}{\partial Y_w}\,\Delta Y_w + \frac{\partial y_{im1}}{\partial Z_w}\,\Delta Z_w \qquad (31)
$$

$$
\Delta x_{im1} = \frac{f_{x1}}{Z_w+t_z}\left(-\Delta X_w + \frac{X_w}{Z_w+t_z}\,\Delta Z_w\right), \qquad
\Delta y_{im1} = \frac{f_{y1}}{Z_w+t_z}\left(-\Delta Y_w + \frac{Y_w}{Z_w+t_z}\,\Delta Z_w\right) \qquad (32)
$$
Under the small-view-angle condition (i.e. tz >> 2r) and assuming |Xw|, |Yw|, |Zw| ≤ r, we have:
$$
\left|\frac{X_w}{Z_w+t_z}\right| \ll 1, \qquad \left|\frac{Y_w}{Z_w+t_z}\right| \ll 1 \qquad (33)
$$
Therefore, xim1 and yim1 are far more sensitive to Xw and Yw than to Zw. As explained in section 3, the six motion parameters are adjusted so that the distance error in the image planes is minimized. Hence, the assumption of ∆xim1 ≈ 0 and ∆yim1 ≈ 0 is reasonable for each tracked point after convergence of the motion estimation algorithm. Combining this assumption with Eq.32 and Eq.33 results in:
$$
\Delta x_{im1} \approx 0 \;\rightarrow\; \Delta X_w \approx \frac{X_w}{Z_w+t_z}\,\Delta Z_w \;\rightarrow\; \Delta Z_w \gg \Delta X_w, \quad \text{or } \Delta X_w,\, \Delta Z_w \approx 0
$$
$$
\Delta y_{im1} \approx 0 \;\rightarrow\; \Delta Y_w \approx \frac{Y_w}{Z_w+t_z}\,\Delta Z_w \;\rightarrow\; \Delta Z_w \gg \Delta Y_w, \quad \text{or } \Delta Y_w,\, \Delta Z_w \approx 0 \qquad (34)
$$
This equation reveals that the inverse problem of 3D motion estimation by tracking points in the camera plane is ill-posed and does not have a unique solution. Any small estimation error in Xw or Yw (i.e. ∆Xw ≠ 0 or ∆Yw ≠ 0) imposes a large estimation error in Zw (i.e. ∆Zw >> ∆Xw or ∆Zw >> ∆Yw). Therefore, the total 3D positional error √(∆Xw² + ∆Yw² + ∆Zw²) will increase notably and inaccurate 3D motion parameters will be estimated.
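The imbalance of Eq. 33 can be checked numerically. The focal length, object radius, and distance below are hypothetical values chosen only to satisfy tz >> 2r:

```python
import numpy as np

# Hypothetical numbers: focal length f, object radius r, camera distance tz,
# with tz >> 2r as in figure 8.
f, r, tz = 1000.0, 0.1, 10.0
Xw, Yw, Zw = 0.08, -0.05, 0.06          # point inside the object: |.| <= r

# Partial derivatives of x_im1 = -f * Xw / (Zw + tz)   (Eq. 30)
d_dX = -f / (Zw + tz)                    # sensitivity to lateral motion Xw
d_dZ = f * Xw / (Zw + tz) ** 2           # sensitivity to depth motion Zw

# Their ratio is exactly |Xw / (Zw + tz)| of Eq. 33.
ratio = abs(d_dZ / d_dX)
print(ratio)   # ~0.008 << 1: depth changes are nearly invisible in the image
```

A depth error two orders of magnitude larger than a lateral error produces the same image displacement, which is the ill-posedness described above.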
4.2. Double Stereo Rigs
Due to the limitation on large baseline selection in a single stereo rig, both stereo cameras have approximately the same effect on the motion estimation process. To take advantage of both small and wide baseline stereo cameras, we present a combined double stereo setup. This combination is composed of two single stereo rigs set at an angle θ to each other (see figure 9).
Figure 9. Structure of double stereo setup: (a) Double stereo setup with angle θ, (b) Perpendicular double stereo setup.
Similar to single stereo setup and considering the rotation angle of θ for camera3, it can be easily shown that:
$$
x_{im3} = -f_{x3}\,\frac{X_w\cos\theta - Z_w\sin\theta}{X_w\sin\theta + Z_w\cos\theta + t_z} + x_{o3}, \qquad
y_{im3} = -f_{y3}\,\frac{Y_w}{X_w\sin\theta + Z_w\cos\theta + t_z} + y_{o3} \qquad (35)
$$

Where:

$$
\Delta x_{im3} = \frac{f_{x3}}{A}\left(-(Z_w + t_z\cos\theta)\,\Delta X_w + (X_w + t_z\sin\theta)\,\Delta Z_w\right)
$$
$$
\Delta y_{im3} = \frac{f_{y3}}{A}\left(Y_w\sin\theta\,\Delta X_w - \left(X_w\sin\theta + Z_w\cos\theta + t_z\right)\Delta Y_w + Y_w\cos\theta\,\Delta Z_w\right) \qquad (36)
$$

$$
A = \left(X_w\sin\theta + Z_w\cos\theta + t_z\right)^2 \qquad (37)
$$
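The closed-form partials of Eq. 36 can be cross-checked against finite differences of the projection in Eq. 35. The focal length, distance, angle, and point below are hypothetical, and the image-center offset x_o3 is omitted since it cancels in differences:

```python
import numpy as np

# Hypothetical focal length, camera distance, and rotation angle for camera3.
f, tz = 1000.0, 10.0
theta = np.deg2rad(35.0)
Xw, Yw, Zw = 0.08, -0.05, 0.06

def x_im3(X, Z):
    # Projection of Eq. 35 (constant offset x_o3 dropped: it cancels below).
    num = X * np.cos(theta) - Z * np.sin(theta)
    den = X * np.sin(theta) + Z * np.cos(theta) + tz
    return -f * num / den

# Closed-form partials from Eq. 36, with A given by Eq. 37.
A = (Xw * np.sin(theta) + Zw * np.cos(theta) + tz) ** 2
dX_closed = -f * (Zw + tz * np.cos(theta)) / A
dZ_closed = f * (Xw + tz * np.sin(theta)) / A

# Central finite differences should agree with the closed forms.
h = 1e-6
dX_num = (x_im3(Xw + h, Zw) - x_im3(Xw - h, Zw)) / (2 * h)
dZ_num = (x_im3(Xw, Zw + h) - x_im3(Xw, Zw - h)) / (2 * h)
```

Note how the depth sensitivity dZ_closed grows with the tz·sinθ term, which is the lever the double rig exploits.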
By choosing a proper value of θ, it is possible to increase the sensitivity of xim3 and yim3 to Zw as much as possible. Therefore, we can minimize the 3D motion estimation errors ∆Xw and ∆Yw by minimizing ∆xim1 and ∆yim1, and the estimation
error ∆Zw by minimizing ∆xim3 and ∆yim3. It can be verified, by differentiating, that the maximum sensitivity is achieved at θ = 90°. For θ = 90°, Eq.36 simplifies to:
$$
\Delta x_{im3} = \frac{f_{x3}}{X_w+t_z}\left(-\frac{Z_w}{X_w+t_z}\,\Delta X_w + \Delta Z_w\right), \qquad
\Delta y_{im3} = \frac{f_{y3}}{X_w+t_z}\left(\frac{Y_w}{X_w+t_z}\,\Delta X_w - \Delta Y_w\right) \qquad (38)
$$
Similar to the single stereo case, we can assume (∆xim1, ∆yim1 ≈ 0) for each tracked point in camera1 and (∆xim3, ∆yim3 ≈ 0) for each tracked point in camera3 after convergence of the motion estimation algorithm. Hence:
$$
\Delta x_{im3} \approx 0 \;\rightarrow\; \frac{Z_w}{X_w+t_z}\,\Delta X_w \approx \Delta Z_w \;\rightarrow\; \Delta X_w \gg \Delta Z_w, \quad \text{or } \Delta X_w,\, \Delta Z_w \approx 0
$$
$$
\Delta y_{im3} \approx 0 \;\rightarrow\; \frac{Y_w}{X_w+t_z}\,\Delta X_w \approx \Delta Y_w \;\rightarrow\; \Delta X_w \gg \Delta Y_w, \quad \text{or } \Delta Y_w,\, \Delta Z_w \approx 0 \qquad (39)
$$
Combining Eq.34 and Eq.39 results in ∆Xw, ∆Yw, ∆Zw ≈ 0. Therefore, the total 3D positional error √(∆Xw² + ∆Yw² + ∆Zw²) will decrease notably in the perpendicular double stereo setup, and more precise motion parameters will result.
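A compact way to see why the perpendicular rig removes the ambiguity is to stack the linearized sensitivities of camera1 (Eq. 32) and camera3 at θ = 90° (Eq. 38) into one Jacobian; the numbers below are hypothetical:

```python
import numpy as np

# Hypothetical geometry: same focal length f and distance tz for both rigs,
# tracked point near the object centre.
f, tz = 1000.0, 10.0
Xw, Yw, Zw = 0.08, -0.05, 0.06

# Camera1 sensitivities (Eq. 32): rows = (dx_im1, dy_im1), cols = (dXw, dYw, dZw).
J1 = (f / (Zw + tz)) * np.array([[-1.0, 0.0, Xw / (Zw + tz)],
                                 [0.0, -1.0, Yw / (Zw + tz)]])

# Camera3 sensitivities at theta = 90 deg (Eq. 38): depth Zw now acts laterally.
J3 = (f / (Xw + tz)) * np.array([[-Zw / (Xw + tz), 0.0, 1.0],
                                 [Yw / (Xw + tz), -1.0, 0.0]])

J = np.vstack([J1, J3])  # perpendicular double rig

# Camera1 alone has only 2 informative directions for 3 unknowns;
# the stacked system constrains all three.
print(np.linalg.matrix_rank(J1), np.linalg.matrix_rank(J))  # 2 3
```

With camera1 alone, the depth column of J1 is nearly zero, so ∆Zw is essentially unobservable; the stacked Jacobian makes the least-squares motion update well-posed.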
5. Shape Reconstruction from Object Silhouettes Across Time
Three-dimensional model reconstruction by extracting the visual hull of an object has been used extensively in recent years [31-34], and it has become a standard and popular method of shape estimation. The visual hull is defined as a rough model of the object surface, which can be calculated from different views