
New Trends in Surface Reconstruction Using Space-Time Cameras


of the object's silhouette. The silhouette of an object in an image is the curve that separates the object from the background. The visual hull cannot recover concave regions, regardless of the number of images used. In addition, it needs a large number of different views to recover fine details. To mitigate the first drawback, the visual hull can be combined with stereo matching. To overcome the second drawback, more silhouettes of the object can be captured across time by a limited number of cameras. Cheng et al. presented a method to improve the shape approximation by combining multiple silhouette images captured across time [34]. Exploiting a basic property of the visual hull, which states that each bounding edge must touch the object in at least one point, they use multi-view stereo to extract these touching points, called Colored Surface Points (CSPs), on the surface of the object. These CSPs are then used in a 3D image alignment algorithm to find the six rotation and translation parameters of the rigid motion between two visual hulls. They exploit the color consistency of the object to align the CSPs. Once the rigid motion across time is known, all of the silhouette images are treated as if captured at the same time instant and the shape of the object is refined.
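Once correspondences between CSPs at two time instants are established (e.g., by the color-consistency test), the six rigid-motion parameters can be recovered by a least-squares fit. The sketch below uses the standard SVD-based (Kabsch) solution for aligning two corresponding 3D point sets; it is a generic illustration of this alignment step, not the authors' exact algorithm:

```python
import numpy as np

def rigid_motion(src, dst):
    """Least-squares rigid motion (R, t) such that dst ~ R @ src + t.

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. CSPs
    matched between two time instants by color consistency.
    """
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    # 3x3 cross-covariance of the centered point sets
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

In practice the correspondences from the color-consistency test contain outliers, so such a fit would typically be wrapped in a robust scheme such as RANSAC.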

Motion estimation by the CSP method suffers from some drawbacks. Inaccurate color calibration between cameras introduces error into the color-consistency test. Moreover, variation of the illumination angle as the object moves relative to the light source produces additional error.

Our motion-estimation method, which uses only edge information in the form of space curves, is robust against camera color maladjustment and against shading changes during object motion. Moreover, it can be used effectively to extract the visual hull of poorly textured objects.

In the remainder of this section, it is assumed that the motion parameters are known for multiple views of the object, and the goal is to reconstruct the 3D shape of the object from silhouette information across time.

5.1. Space-Time or Virtual Camera Generation

Let P, defined in Eq. 40, be the projection matrix of the camera, which maps a 3D point W in world coordinates to the image coordinates (x_im, y_im) on the camera plane: