- •CONTENTS
- •PREFACE
- •Abstract
- •1. Introduction
- •2.1. Differential Geometry of Space Curves
- •2.2. Inverse Problem Formulation
- •2.3. Reconstruction of Unique Space Curves
- •3. Rigid Motion Estimation by Tracking the Space Curves
- •4. Motion Estimation Using Double Stereo Rigs
- •4.1. Single Stereo Rig
- •4.2. Double Stereo Rigs
- •5.1. Space-Time or Virtual Camera Generation
- •5.2. Visual Hull Reconstruction from Silhouettes of Multiple Views
- •5.2.1. Volume Based Visual Hull
- •5.2.1.1. Intersection Test in Octree Cubes
- •5.2.1.2. Synthetic Model Results
- •5.2.2. Edge Based Visual Hull
- •5.2.2.1. Synthetic Model Results
- •Implementation and Experimental Results
- •Conclusions
- •Acknowledgment
- •References
- •Abstract
- •Introduction: Ocular Dominance
- •Demography of Ocular Dominance
- •A Taxonomy of Ocular Dominance
- •Is Ocular Dominance Test Specific?
- •I. Tests of Rivalry
- •II. Tests of Asymmetry
- •III. Sighting Tests
- •Some Misconceptions
- •Resolving the Paradox of Ocular Dominance
- •Some Clinical Implications of Ocular Dominance
- •Conclusion
- •References
- •Abstract
- •1. Introduction
- •2. Basic Theory
- •3. Bezier Networks for Surface Contouring
- •4. Parameter of the Vision System
- •5. Experimental Results
- •Conclusions
- •References
- •Abstract
- •Introduction
- •Terminology (Definitions)
- •Clinical Assessment
- •Examination Techniques: Motility
- •Ocular Motility Recordings
- •Semiautomatic Analysis of Eye Movement Recordings
- •Slow Eye Movements in Congenital Nystagmus
- •Conclusion
- •References
- •EVOLUTION OF COMPUTER VISION SYSTEMS
- •Abstract
- •Introduction
- •Present-Day Level of CVS Development
- •Full-Scale Universal CVS
- •Integration of CVS and AI Control System
- •Conclusion
- •References
- •Introduction
- •1. Advantages of Binocular Vision
- •2. Foundations of Binocular Vision
- •3. Stereopsis as the Highest Level of Binocular Vision
- •4. Binocular Viewing Conditions on Pupil Near Responses
- •5. Development of Binocular Vision
- •Conclusion
- •References
- •Abstract
- •Introduction
- •Methods
- •Results
- •Discussion
- •Conclusion
- •References
- •Abstract
- •1. Preferential Processing of Emotional Stimuli
- •1.1. Two Pathways for the Processing of Emotional Stimuli
- •1.2. Intensive Processing of Negative Valence or of Arousal?
- •2. "Blind" in One Eye: Binocular Rivalry
- •2.1. What Helmholtz Knew Already
- •2.3. Possible Influences from Non-visual Neuronal Circuits
- •3.1. Significance and Predominance
- •3.2. Emotional Discrepancy and Binocular Rivalry
- •4. Binocular Rivalry Experiments at Our Lab
- •4.1. Predominance of Emotional Scenes
- •4.1.1. Possible Confounds
- •4.2. Dominance of Emotional Facial Expressions
- •4.3. Inter-Individual Differences: Phobic Stimuli
- •4.4. Controlling for Physical Properties of Stimuli
- •4.5. Validation of Self-report
- •4.6. Summary
- •References
- •Abstract
- •1. Introduction
- •2. Algorithm Overview
- •3. Road Surface Estimation
- •3.1. 3D Data Point Projection and Cell Selection
- •3.2. Road Plane Fitting
- •3.2.1. Dominant 2D Straight Line Parametrisation
- •3.2.2. Road Plane Parametrisation
- •4. Road Scanning
- •5. Candidate Filtering
- •6. Experimental Results
- •7. Conclusions
- •Acknowledgements
- •References
- •DEVELOPMENT OF SACCADE CONTROL
- •Abstract
- •1. Introduction
- •2. Fixation and Fixation Stability
- •2.1. Monocular Instability
- •2.2. Binocular Instability
- •2.3. Eye Dominance in Binocular Instability
- •3. Development of Saccade Control
- •3.1. The Optomotor Cycle and the Components of Saccade Control
- •3.4. Antisaccades: Voluntary Saccade Control
- •3.5. The Age Curves of Saccade Control
- •3.6. Left–Right Asymmetries
- •3.7. Correlations and Independence
- •References
- •OCULAR DOMINANCE
- •INDEX
Three-Dimensional Vision Based on Binocular Imaging… | 87
Figure 3. Maximum position from a set of pixels fitted to a Bezier curve.
3. Bezier Networks for Surface Contouring
In the binocular images, the line displacement is proportional to the object depth. A Bezier network is built to compute the object depth h(x,y) from the displacement as(x,y) or es(x,y). The network is constructed from a line projected onto objects of known dimensions. Its structure consists of an input vector, a parametric input, a hidden layer and an output layer, as shown in figure 4. Each layer of the network is deduced as follows. The input includes the object dimensions hi, the stripe displacements as and es, and the parametric values u and v. The input data as0, as1, as2,…,asn and es0, es1, es2,…,esn are the stripe displacements obtained by the image processing described in section 2. From these displacements, linear combinations LCa and LCe are determined [14] to compute u and v. The relationship between the displacements and the parametric values is described by
u = b0 + b1 as,   (6)

v = c0 + c1 es,   (7)
88 | J. Apolinar Muñoz-Rodríguez
where bi and ci are unknown constants. The Bezier curves are defined on the intervals 0 ≤ u ≤ 1 and 0 ≤ v ≤ 1 [17]. For the depth h0, the displacements as0 and es0 are produced; therefore u = 0 for as0 and v = 0 for es0. For hn, the displacements asn and esn are produced; therefore u = 1 for asn and v = 1 for esn. Substituting the pairs (as0, u=0) and (asn, u=1) into Eq.(6) gives two equations in two unknown constants; solving them determines b0 and b1, and Eq.(6) is completed. Likewise, substituting (es0, v=0) and (esn, v=1) into Eq.(7) and solving determines c0 and c1, and Eq.(7) is completed. Thus, for the displacements as and es, the parametric values u and v are computed via Eq.(6) and Eq.(7), respectively. The inputs h0, h1, h2,…,hn are obtained from the pattern objects, whose dimensions are known. The hidden layer is constructed from the Bezier basis function [16], which is described by
Bi(u) = [n! / (i!(n − i)!)] u^i (1 − u)^(n−i),   (8)
where n!/(i!(n − i)!) denotes the binomial coefficient. The output layer is the summation of the hidden-layer neurons, each multiplied by a weight. The output is the depth h(x,y), which is represented by ah(u) and eh(v). These two outputs are described by the following equations
ah(u) = ∑ i=0…n  wi Bi(u) hi,   0 ≤ u ≤ 1,   (9)

eh(v) = ∑ i=0…n  ri Bi(v) hi,   0 ≤ v ≤ 1,   (10)
where wi and ri are the weights, hi are the known dimensions of the pattern objects and Bi is the Bezier basis function of Eq.(8). To obtain the network of Eq.(9) and Eq.(10), suitable weights wi and ri must be determined. To obtain the weights w0, w1, w2,…,wn, the network is forced to produce the outputs h0, h1, h2,…,hn by means of an adjustment mechanism. To do so, each value hi and its parametric value u are substituted into Eq.(9). The value u is computed via Eq.(6) from the displacement as that corresponds to the known dimension hi.
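The parametric mapping of Eq.(6)/Eq.(7) and the basis and output layers of Eq.(8)–(10) can be sketched in a few lines. This is a minimal illustration, not the author's implementation; the displacement values used in the example are invented.

```python
from math import comb

def parametric_map(s0, sn):
    """Constants of Eq.(6)/Eq.(7): u = k0 + k1*s, with the endpoint
    conditions u(s0) = 0 and u(sn) = 1 described in the text."""
    k1 = 1.0 / (sn - s0)
    return -s0 * k1, k1

def bernstein(i, n, t):
    """Bezier basis function of Eq.(8): C(n,i) * t^i * (1-t)^(n-i)."""
    return comb(n, i) * t ** i * (1 - t) ** (n - i)

def network_output(weights, h, t):
    """Output layer of Eq.(9)/Eq.(10): sum_i w_i * B_i(t) * h_i."""
    n = len(h) - 1
    return sum(w * bernstein(i, n, t) * hi
               for i, (w, hi) in enumerate(zip(weights, h)))

# Example with invented endpoint displacements as0 = 12, asn = 48:
b0, b1 = parametric_map(12.0, 48.0)
u = b0 + b1 * 30.0   # Eq.(6): an intermediate displacement maps into [0, 1]
```

Because the Bernstein basis sums to one, a network with equal weights and equal pattern depths simply reproduces that depth for any t.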
For each input u, an output ah is produced. Thus, the following equation system is obtained
ah0 = h0 = w0 B0(u0) h0 + w1 B1(u0) h1 + … + wn Bn(u0) hn,
ah1 = h1 = w0 B0(u1) h0 + w1 B1(u1) h1 + … + wn Bn(u1) hn,   (11)
⋮
ahn = hn = w0 B0(un) h0 + w1 B1(un) h1 + … + wn Bn(un) hn,

where Bi is the basis function of Eq.(8) and uj ∈ [0, 1] is the parametric value obtained via Eq.(6) from the displacement asj.
This linear system of Eq.(11) can be represented as
h0 = w0 β0,0 + w1 β0,1 + … + wn β0,n
h1 = w0 β1,0 + w1 β1,1 + … + wn β1,n   (12)
⋮
hn = w0 βn,0 + w1 βn,1 + … + wn βn,n

where βj,i = Bi(uj) hi.
This system can be rewritten as the product of the matrix of input data and the vector of weights, βW = H. The linear system is represented by the following matrix equation
[ β0,0  β0,1  β0,2  …  β0,n ] [ w0 ]   [ h0 ]
[ β1,0  β1,1  β1,2  …  β1,n ] [ w1 ] = [ h1 ]   (13)
[   ⋮     ⋮     ⋮         ⋮  ] [  ⋮ ]   [  ⋮ ]
[ βn,0  βn,1  βn,2  …  βn,n ] [ wn ]   [ hn ]
The system Eq.(13) is solved by the Cholesky method [17]. Thus the weights w0, w1, w2,…,wn are calculated and Eq.(9) is completed.
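The Cholesky step can be sketched as below. Since the matrix β of Eq.(13) is not symmetric in general, this sketch applies the factorization to the normal equations (βᵀβ)W = βᵀH — an assumption on our part, as the text does not detail how the factorization is applied.

```python
import numpy as np

def solve_weights(beta, h):
    """Solve beta @ w = h (Eq.(13)) via Cholesky factorization.
    beta is generally not symmetric, so the factorization is applied
    to the normal equations (beta.T @ beta) w = beta.T @ h."""
    a = beta.T @ beta          # symmetric positive definite if beta has full rank
    b = beta.T @ h
    l = np.linalg.cholesky(a)  # a = l @ l.T
    y = np.linalg.solve(l, b)          # forward substitution
    return np.linalg.solve(l.T, y)     # back substitution
```

For a well-conditioned β this reproduces the direct solution; for large n the Bernstein collocation matrix becomes ill-conditioned and the normal equations make this worse, so a direct `np.linalg.solve` on Eq.(13) may be preferable in practice.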
To determine the weights r0, r1, r2,…,rn, the network is again forced to produce the outputs h0, h1, h2,…,hn. To do so, the values v and hi are substituted into Eq.(10). The value v is calculated via Eq.(7) using the displacement es that corresponds to the dimension hi. For each input v, an output eh is produced and the following equation system is obtained
eh0 = h0 = r0 B0(v0) h0 + r1 B1(v0) h1 + … + rn Bn(v0) hn,
eh1 = h1 = r0 B0(v1) h0 + r1 B1(v1) h1 + … + rn Bn(v1) hn,   (14)
⋮
ehn = hn = r0 B0(vn) h0 + r1 B1(vn) h1 + … + rn Bn(vn) hn,

where vj ∈ [0, 1] is the parametric value obtained via Eq.(7) from the displacement esj.
The linear system Eq.(14) can be represented as the product of the input-data matrix and the vector of weights, βR = H, where now βj,i = Bi(vj) hi. The linear system is represented by the following matrix equation
[ β0,0  β0,1  β0,2  …  β0,n ] [ r0 ]   [ h0 ]
[ β1,0  β1,1  β1,2  …  β1,n ] [ r1 ] = [ h1 ]   (15)
[   ⋮     ⋮     ⋮         ⋮  ] [  ⋮ ]   [  ⋮ ]
[ βn,0  βn,1  βn,2  …  βn,n ] [ rn ]   [ hn ]
This linear system Eq.(15) is solved and the weights r0, r1, r2,…,rn are determined. In this manner, Eq.(10) is completed. Thus, the network produces the shape dimension via Eq.(9) and Eq.(10) from the line displacements as and es, respectively, and provides the object depth by means of h(x,y) = ah(u) and h(x,y) = eh(v). This network is applied to the binocular images shown in figure 5(a) and figure 5(b), which correspond to a line projected on a dummy face. From these images, the stripe positions ap and ep are computed along the y-axis. Then, the displacements as(x,y) and es(x,y) are computed via Eq.(1) and Eq.(2), respectively.
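Putting the pieces together, the calibration and a depth query can be sketched end to end. The displacement and depth values below are invented stand-ins for the pattern-object measurements, not data from the chapter.

```python
import numpy as np
from math import comb

def bernstein_row(n, t):
    """B_0(t), ..., B_n(t) of Eq.(8)."""
    return np.array([comb(n, i) * t ** i * (1 - t) ** (n - i)
                     for i in range(n + 1)])

# Hypothetical calibration data: stripe displacements as_i measured on
# pattern objects of known depth h_i (numbers invented for illustration).
as_cal = np.array([10.0, 20.0, 30.0, 40.0])
h_cal = np.array([1.0, 5.0, 12.0, 20.0])
n = len(h_cal) - 1

u_cal = (as_cal - as_cal[0]) / (as_cal[-1] - as_cal[0])         # Eq.(6)
beta = np.vstack([bernstein_row(n, u) * h_cal for u in u_cal])  # Eq.(12)
w = np.linalg.solve(beta, h_cal)                                # Eq.(13)

def depth(as_new):
    """h(x,y) = ah(u) of Eq.(9) for a new displacement value."""
    u = (as_new - as_cal[0]) / (as_cal[-1] - as_cal[0])
    return float(bernstein_row(n, u) * h_cal @ w)
```

By construction the network reproduces the calibration depths at the pattern displacements; the eh(v) branch of Eq.(10) is identical with es, v and ri in place of as, u and wi.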
The contours provided by the network are shown in figure 6(a) and figure 6(b), respectively. The contour in figure 6(a) is incomplete due to the occlusion in figure 5(a). Figure 5(b) contains no line occlusions, so the complete contour is obtained in figure 6(b).
Figure 4. Structure of the proposed Bezier network.
Figure 5 (a). First line captured by the binocular system.
Figure 5 (b). Second line captured by the binocular system.