
performance - 89.66% correct detection - is satisfactory when taking into account that the visual routine can now actually lock on to the eye. Post-processed eye detection results using WTA are shown in Fig. 6.6.

Ground Truth / Test Decision    Eye Regions    Non-Eye Regions
Eye                             95.69 %        4.31 %
Non-Eye                         6.03 %         93.97 %

Table 6.1 Confusion Matrix for Eye Detection on the Database

The results presented above for eye detection compare favorably with those reported by Johnson et al (1994), albeit on a different task related to human actions. Note that what matters most when making such comparisons is to use the results obtained during testing rather than those derived during learning; it is testing, after all, that indicates the intrinsic generalization ability of the visual routine. Johnson et al (1994), using bitmap rather than gray level images and presegmented images, report their best cross validation performance, on one subtask only (left hand detection), in the range of 77% ± 21%.

Figure 6.5 Eye Detection Results

Figure 6.6 Eye Detection Results on DB Using WTA


The goals of the novel architecture for eye detection are twofold: (i) derivation of the saliency attention map using consensus between navigation routines encoded as finite state automata (FSA) evolved using GAs, and (ii) selection of optimal features for eye classification using GAs and induction of decision trees (DT) for classifying the most salient locations identified earlier as eye vs. non-eye regions. Specifically, we describe what the image-based representations are, how visual routines are encoded as FSA and evolved, and how consensus methods can integrate ('fuse') visual routine outputs during testing on unseen facial images.

This subsection provides a detailed description regarding feature representation, the derivation of FSA through evolution, and the use of consensus methods to integrate conspicuity outputs from several animats.

Feature Maps

The input consists of 256-gray-level facial images whose resolution is 192x192 (see Fig. 6.7). To account for illumination changes, the original images are processed using 5x5 Laplacian masks. The Laplacian filters out small changes due to illumination and detects those image transitions usually associated with changes in image content or contrast. Three feature maps corresponding to the mean, entropy, and standard deviation are then computed over 6x6 windows and compressed to yield 32x32 images, each map encoded using only four gray levels (2 bits). Examples of such feature maps are shown in Fig. 6.8 below.
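To make the representation concrete, the following is a minimal sketch of the feature-map pipeline under the stated parameters (5x5 Laplacian, 6x6 windows, 192x192 input, 2-bit quantization); the exact Laplacian coefficients, the histogram used for window entropy, and the min-max quantization are illustrative assumptions not fixed by the text.

```python
import numpy as np
from scipy.ndimage import convolve

def feature_maps(img):
    """Compute 32x32 mean, standard deviation, and entropy maps (2 bits each)."""
    # 5x5 Laplacian mask (one common choice) to suppress slow illumination changes.
    lap = -np.ones((5, 5))
    lap[2, 2] = 24.0
    filtered = convolve(img.astype(float), lap, mode='nearest')

    h, w = filtered.shape                  # expected 192x192
    win = 6                                # 6x6 non-overlapping windows -> 32x32
    blocks = filtered[:h - h % win, :w - w % win]
    blocks = blocks.reshape(h // win, win, w // win, win)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(h // win, w // win, -1)

    mean_map = blocks.mean(axis=2)
    std_map = blocks.std(axis=2)

    def block_entropy(b):                  # entropy of a coarse value histogram
        hist, _ = np.histogram(b, bins=16)
        p = hist[hist > 0] / hist.sum()
        return -(p * np.log2(p)).sum()
    ent_map = np.apply_along_axis(block_entropy, 2, blocks)

    def quantize(m):                       # four gray levels (2 bits) per map
        lo, hi = m.min(), m.max()
        return np.clip(((m - lo) / (hi - lo + 1e-9) * 4).astype(int), 0, 3)

    return quantize(mean_map), quantize(std_map), quantize(ent_map)
```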

Figure 6.7 Face Images

Figure 6.8 Mean, Standard Deviation, and Entropy Feature Maps

The FSA implements an animat (autonomous agent) exploring the 32x32 feature maps in order to generate trajectories consisting of conspicuous points on the path to salient eye locations. The animat searches the features landscape starting from some defined initial point, in our case the chin. The FSA encoding, string-like, resembles a chromosome, the basic unit upon which evolution will later operate. If PS and NS stand for the present and next state, the FSA is defined in terms of f: {PS, INPUT} → {NS, ACTION}, known as the transition function. The FSA is assumed to start from some initial state IS. The animat (FSA) exploring the features landscape consists of eight states, and as it moves around it measures ('forages') three precomputed feature maps, whose composite range is encoded using 6 bits for 64 levels. As measurements are taken, the animat decides on its next state and an appropriate course of action ('move'). As shown in Fig. 6.9, both the present state (PS) and the composite feature being sensed are implicitly represented using 8 consecutive (state) fields <0> through <7>, with 64 corresponding (feature) subfields <0> through <63> for each state. The explicit contents of the FSA consist of the next state (NS) and the move (see Fig. 6.10). The animat never moves backwards and can choose from five possible directions for a total of eight possible moves: two moves are sideways (left and right), while two moves each are allocated to left 45°, straight on, and right 45°. The shaded blocks in the Fig. 6.10 transition table, corresponding to the next state and directional move, together with the initial state, are subject to learning through evolution as described in the next section.
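A minimal sketch of this encoding and of a single foraging run follows, assuming the conventions above (8 states, 64 composite feature levels, at most 64 moves, never moving backwards); the grid orientation (y decreasing toward the upper boundary), the random initialization, and the helper names are assumptions.

```python
import random

N_STATES, N_INPUTS, MAX_MOVES = 8, 64, 64
# Eight moves over five directions: two sideways, two slots each for
# left 45, straight on, and right 45 (the animat never moves backwards).
MOVE_DELTAS = [(-1, 0), (1, 0),        # sideways left / right
               (-1, -1), (-1, -1),     # left 45 (two slots)
               (0, -1), (0, -1),       # straight on (two slots)
               (1, -1), (1, -1)]       # right 45 (two slots)

def random_fsa():
    """Chromosome: initial-state pointer plus a (next state, move) pair
    for every (present state, composite feature level) combination."""
    table = [[(random.randrange(N_STATES), random.randrange(len(MOVE_DELTAS)))
              for _ in range(N_INPUTS)] for _ in range(N_STATES)]
    return random.randrange(N_STATES), table

def run(fsa, feature_map, start):
    """Forage the 32x32 composite feature map from `start`; return the path."""
    state, table = fsa
    x, y = start
    path = [(x, y)]
    for _ in range(MAX_MOVES):
        level = feature_map[y][x]          # composite 6-bit measurement, 0..63
        state, move = table[state][level]  # transition: {PS, INPUT} -> {NS, ACTION}
        dx, dy = MOVE_DELTAS[move]
        x = min(max(x + dx, 0), 31)
        y = max(y + dy, 0)
        path.append((x, y))
        if y == 0:                         # reached the upper image boundary
            break
    return path
```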

Figure 6.9 FSA Chromosome (a pointer to the initial state, followed by states #0 through #7, each holding input fields #0 through #63)

Figure 6.10 FSA Animat (a) State Transition Table and (b) Moves

Evolution of FSA Animats

Learning an FSA is known to be a difficult and computationally complex problem. As one expects the autonomy of behavior to be the result of evolutionary pressures, it becomes natural to evolve the FSA using GAs. Evolution is driven by fitness, here defined as the ability of the FSA to find its way to the left or right eye within a limited number of moves (fewer than 64) and home in on the eye within 2 pixels of its center for 10 training images. The GA component is implemented using GENESIS (Grefenstette, 1991). The standard default parameter settings from the GA literature were used, resulting in a constant population size of 50 FSAs, a crossover rate of 0.6, and a mutation rate of 0.001. It takes on the order of 2,000 generations before evolution yields successful animats (100% performance on training data). As FSAs become more fit, left or right animats eventually learn to locate the corresponding eyes. Fig. 6.11 shows the conspicuous paths followed by the animats searching for the left or right eye location using some of the images shown in Fig. 6.7 above.
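Under these settings, the fitness of an individual FSA can be sketched as the fraction of training images on which it homes in on the eye; the helper names (run, CHIN_START) refer to the earlier sketch, and the eye-center test is an assumption consistent with the 2-pixel criterion above.

```python
CHIN_START = (16, 31)   # assumed chin location at the bottom of the 32x32 map

def fitness(fsa, training_set):
    """training_set: list of (feature_map, (ex, ey)) pairs; returns hit rate."""
    hits = 0
    for feature_map, (ex, ey) in training_set:
        path = run(fsa, feature_map, CHIN_START)
        # success: some point on the path lies within 2 pixels of the eye center
        if any(abs(x - ex) <= 2 and abs(y - ey) <= 2 for x, y in path):
            hits += 1
    return hits / len(training_set)   # evolution stops at 1.0 on the 10 images
```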

Figure 6.11 Conspicuous Paths Leading to the Left and Right Eye Location Found during Training

Consensus Methods and Derivation of the Saliency Map

So far we have shown how one can train FSAs as successful autonomous agents for exploring the facial ('features') landscape. As several animats (FSAs) search the landscape in parallel, one has to collect and integrate their conspicuous outputs so that eventually the most salient eye locations are determined. The motivation for such an approach comes from the fact that if one were to let loose trained animats, it is likely that areas of major traffic, subject to model constraints, would correspond to the eye regions. Towards that end we trained many different animats on two similar tasks, Left and Right eye detection, using random seeds to start the GA/FSA model described in the previous subsection. Once the (L and R) animats end their travel on the upper boundary of the face image, the L and R traffic density across the facial landscape is collected, and one generates the (L - R) traffic map with the expectation that the eyes will show up strongly, indicating increased image saliency, the nose regions will cancel out, and other facial areas will show only insignificant strength.

The consensus method implemented here consists of the following steps. Left and Right local but conspicuous traffic counts are accumulated for a number of different Left and Right animats, the (L - R) traffic map is generated, and its significant local maxima are then detected using hysteresis and thresholding. This procedure, illustrated stepwise in Fig. 6.12, shows how animats detect salient eye locations on an unseen face image using 20 Left and 20 Right trained FSAs and consensus as described above.
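A hedged sketch of these steps follows; the use of |L - R| to make both eyes stand out and the two hysteresis thresholds are assumptions, since the text fixes only the overall procedure.

```python
import numpy as np
from scipy.ndimage import label

def consensus_map(left_paths, right_paths, shape=(32, 32), t_hi=8, t_lo=4):
    """Accumulate Left/Right animat traffic, form (L - R), keep strong maxima."""
    L, R = np.zeros(shape), np.zeros(shape)
    for path in left_paths:
        for x, y in path:
            L[y, x] += 1
    for path in right_paths:
        for x, y in path:
            R[y, x] += 1
    diff = np.abs(L - R)          # eyes reinforce strongly; nose traffic cancels
    strong = diff >= t_hi         # hysteresis seeds
    weak = diff >= t_lo           # kept only when connected to a seed
    components, n = label(weak)
    keep = np.zeros(shape, dtype=bool)
    for i in range(1, n + 1):
        region = components == i
        if strong[region].any():
            keep[region] = True
    return keep                   # boolean saliency map of candidate locations
```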

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

Figure 6.12 Saliency Derivation through Consensus (Left and Right traffic conspicuity maps combine into the (L - R) traffic map; significant local maxima are then detected using thresholding and hysteresis)

The goal for the derivation of the saliency map is to screen out most of the facial landscape as possible eye locations so the recognition channel ('pathway') can operate on fewer but more promising data. This goal, as can be seen from Fig. 6.13, has been achieved to a large degree. At the same time, one has yet to find the means for discarding salient but false positive eye locations while not missing any of the true eye locations. Both eyes are correctly identified as salient candidates in Figs. 6.13a, 6.13b, and 6.13c, while for the test images shown in Figs. 6.13d and 6.13e, one eye has not yet been declared a salient candidate.

Figure 6.13 Saliency Eye Detection Using Conspicuous Traffic and Consensus (panels a-e)

In order to overcome the problem of missing eye locations, we expand on the consensus method and start the animats ('swarm') from five adjacent locations (close to the chin), collecting the corresponding traffic. Consensus then proceeds as before to identify salient eye locations. The result of such an approach is shown in Fig. 6.14. As can be seen, all the eye locations are now correctly identified as salient, while several false positive eye locations are still kept.
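As a usage sketch building on the earlier code, pooling traffic over five assumed start offsets near the chin might look as follows; left_fsas, right_fsas, fmap, and the chin location (cx, cy) are placeholders, not names from the dissertation.

```python
starts = [(cx + dx, cy) for dx in (-2, -1, 0, 1, 2)]    # five adjacent starts
left_paths = [run(fsa, fmap, s) for fsa in left_fsas for s in starts]
right_paths = [run(fsa, fmap, s) for fsa in right_fsas for s in starts]
salient = consensus_map(left_paths, right_paths)        # consensus as before
```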

Figure 6.14 Salient Eye Locations Using Multiple Starting Positions

Verification of Salient Eye Locations

Verification of the salient eye locations is done using the hybrid genetic and learning approach described earlier in Chap. 5.2. During evolution, each generation consists of a constant population of fifty individuals, and the crossover and mutation rates are 0.6 and 0.001, respectively. The set of six hundred examples, 120 (+) eye and 480 (-) non-eye examples, is divided into three equal subsets for cross validation (CV) training and tuning, and a tournament consisting of three sets of CV rounds takes place. The corresponding error rates on tuning data, including both false positives (false detection of eyes) and false negatives (missed eyes), are shown as fitness measures in Fig. 6.4. The feature subset corresponding to the tree derived from the third CV round, which achieved the smallest error rate (4.87%), consists of only 60 of the original 147 features (Fig. 6.4). This feature subset is the one used to evaluate the overall performance on the eye detection task using all the candidates suggested by the saliency maps.
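The fitness computation underlying this hybrid step can be sketched as follows, with scikit-learn's CART inducer standing in for the dissertation's decision-tree learner; the GA bit-string simply masks the 147 features, and tuning-set error drives selection.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def subset_error(mask, X_train, y_train, X_tune, y_tune):
    """mask: boolean vector over the 147 features; returns the tuning error."""
    cols = np.flatnonzero(mask)
    tree = DecisionTreeClassifier().fit(X_train[:, cols], y_train)
    return 1.0 - tree.score(X_tune[:, cols], y_tune)

# GA fitness would be 1 - subset_error(...); the best reported subset kept
# 60 of the 147 features at a 4.87% tuning error on the third CV round.
```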

Once training is completed, the eye locations suggested by the saliency map are tested across all 20 testing facial images. Fig. 6.15 displays the 24x16 windows, centered on the eye locations of interest indicated by the saliency map, which are used as testing cases for eye recognition.

Figure 6.15 Salient Eye Locations

We detected clusters of candidates, as shown in Fig. 6.16. No false negatives, i.e., missing eye locations, have been observed, but due to the coarse resolution of the saliency map the true eye locations overlap. As a consequence, post-processing is warranted and WTA (Winner Take All) is used. The WTA procedure clusters adjacent candidates, finds their corresponding centers, and filters out those candidates that fail the pairwise (horizontal and vertical) distance constraints holding between the two eyes, namely that the horizontal distance be maximal and the vertical distance minimal.
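A minimal sketch of such a WTA step is given below: adjacent candidates are merged into cluster centers, and the winning pair is the one that best satisfies the between-eye geometry (large horizontal separation, small vertical offset); the merge radius and the scoring function are assumptions.

```python
from itertools import combinations

def wta(candidates, merge_dist=2):
    """candidates: list of (x, y); returns the winning pair of cluster centers."""
    clusters = []                              # incremental centroid clustering
    for x, y in candidates:
        for c in clusters:
            cx, cy = c['sx'] / c['n'], c['sy'] / c['n']
            if abs(x - cx) <= merge_dist and abs(y - cy) <= merge_dist:
                c['sx'] += x; c['sy'] += y; c['n'] += 1
                break
        else:
            clusters.append({'sx': x, 'sy': y, 'n': 1})
    centers = [(c['sx'] / c['n'], c['sy'] / c['n']) for c in clusters]
    # winner: the pair with maximal horizontal and minimal vertical distance
    return max(combinations(centers, 2),
               key=lambda p: abs(p[0][0] - p[1][0]) - abs(p[0][1] - p[1][1]))
```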

Figure 6.16 Verification of Eye Location Candidates

Post-processed eye detection results using WTA are shown in Fig. 6.17. Evolution modulated by learning appears to be beneficial to eye detection, as we missed only one eye in two out of the 20 test images.

Figure 6.17 Final Results for Eye Detection

CHAPTER

7

CONCLUSIONS

This thesis is concerned with Automated Face Recognition (AFR), which is a major challenge for applications related to biometrics, telecommunications and medicine. Towards that end, we have advanced novel strategies for both face and eye detection. Face detection is important because it restricts the field of view and thus reduces the amount of computation, while eye detection is important because it enables face normalization and leads to size invariant face recognition. The novel strategies for both face and eye detection are adaptive, are based on learning and evolution, and are characteristic of Behavior-Based AI and Active and Selective Vision. The feasibility of our novel methodology for detection tasks related to face recognition has been proved using FERET, which is a large and standard face image database.

Automatic Face Recognition (AFR) can become relevant only if it displays robust performance and if it can scale up and cope with hundreds of images. Towards that end, generalization and prototyping become imperative, and adaptation is the method of choice for addressing such problems. Robustness involves accurate performance despite changes in the image acquisition process or cluttered images. Furthermore, one expects that AFR can move from still imagery to video processing and take advantage of motion. The contributions of our thesis are twofold. First, we have introduced an adaptive methodology for face detection tasks that should carry over to the more general area of behavior-based AI and artificial life. Furthermore, we have investigated the interactions between learning and evolution and have advanced a hybrid approach where learning supports evolution by providing the fitness function. Second, we proved the feasibility of our approach in support of a real and very important technological challenge, that of face recognition using both still ('photography') and time-varying imagery ('video'). The robustness of our face detection approach applies to both grayscale and color images and has been proved using a large database consisting of 2,340 face images drawn from the FERET database. The algorithm is able to decide first if a face is present, and if so it crops ('boxes') the face. Using grayscale imagery, the performance on the face and eye detection tasks yields an accuracy of 96% and 90%, respectively, and as the approach does not require multiple (scale) face templates, the system thus displays scale invariance for face detection. Eye detection can be approached using an exhaustive search, or one can consider the possibility of navigating the facial landscape in search of the eyes. Towards that end we have evolved optimal navigational skills taking the form of Finite State Automata (FSA). Using such an approach, on a limited data set consisting of 20 images, we have achieved 95% accuracy. This approach is relevant because it reduces the search space, and it can also become relevant for robot navigation and for speech recognition systems, as FSA are quite similar in structure to Hidden Markov Models (HMM).


Based on our findings and experimental results, future research should address what form the optimal representations supporting an adaptive and behavior-based AI methodology should take. In particular, one should consider the evolution of optimal scale-space wavelet representations and the means to make them invariant to the image acquisition process. As face recognition is just one of the means for personal authentication, one should also consider the possibility of multi-modal authentication merging video and speech processing.


REFERENCES

Allport, A. (1989), Visual Attention, in M. Posner (Ed.), Foundations of Cognitive Science, MIT Press.

Arad, N., N. Dyn, D. Reisfeld, and Y. Yeshurun (1994), Image Warping by Radial Basis Functions: Application to Facial Expressions, CVGIP: Graphical Models and Image Processing, Vol. 56, No. 2, pp.161-172.

Atick, J. and A. Redlich (1993), Convergent algorithm for sensory receptive field development, Neural Computation, Vol. 5, pp.45-60.

Bala, J., P. Pachowicz, and K. De Jong (1994), Multistrategy Learning from Engineering Data by Integrating Inductive Generalization and Genetic Algorithms, in Machine Learning: A Multistrategy Approach, Vol. IV, R.S. Michalski and G. Tecuci (Eds.), Morgan Kaufmann, San Mateo, CA, pp.121-138.

Bala, J., K. De Jong, J. Huang, H. Vafaie, and H. Wechsler (1996), Using Learning to Facilitate the Evolution of Features for Recognizing Visual Concepts, Special Issue of Evolutionary Computation - Evolution, Learning, and Instinct: 100 Years of the Baldwin Effect, Fall 1996, MIT.

Baldwin, J. M. (1896), A New Factor in Evolution. American Naturalist, 30, pp.441-451.

Baron, R. J. (1981), Mechanisms of Human Facial Recognition, Int. J. of Man-Machine Studies, Vol. 15, pp.137-178.

Bigun, J., B. Duc, F. Smeraldi, S. Fischer, and A. Makarov (1998), Multi-Modal Person Authentication, in Face Recognition: From Theory to Applications, H. Wechsler, P. J. Phillips, V. Bruce, F. Fogelman-Soulie and T. Huang (Eds.), Springer-Verlag (to appear).

Brooks, R. A. (1985), Visual map making for a mobile robot, IEEE Int. Conference on Robotics and Automation, pp.819-824.

Brooks, R. A. (1986), A robust layered control system for a mobile robot, IEEE Journal of Robotics and Automation, Vol. 2, pp.14-22.

Burel, G. and D. Carel (1994), Detection and Localization of Faces on Digital Images, Pattern Recognition Letters, Vol. 15, pp.963-967.

Chellappa R., C. L. Wilson, and S. Sirohey (1995), Human and Machine Recognition of Faces: A Survey, Proc. IEEE 83, pp.705-740.

Coifman, R. and V. Wickerhauser (1992), Entropy-based algorithms for best basis selection, IEEE Trans. on Information Theory, 38 (2), pp.713-718.

Culhane, S. M. and J. K. Tsotsos (1992), An Attentional Prototype for Early Vision, Proc. of the 2nd European Conf. on Computer Vision, Santa Margherita Ligure, Italy.

Daubechies, I. (1988), Orthonormal bases of compactly supported wavelets, Commun. on Pure and Appl. Math., Vol. 41, pp.909-996.

DePersia, A. T. and P. J. Phillips (1995), The FERET Program: Overview and Accomplishments.

Ducottet, C., J. Daniere, M. Moine, J. P. Schon, and M. Courbon (1994), Localization of Objects with Circular Symmetry in a Noisy Image Using Wavelet Transforms and Adapted Correlation, Pattern Recognition, Vol. 27, No. 3, pp.351-364.

Edelman, G. M. (1987), Neural Darwinism, Basic Books.

Erman, D., F. Hayes-Roth, V. R. Lesser, and D. R. Reddy (1980), The Hearsay-II speech-understanding system: Integrating knowledge to resolve uncertainty, ACM Comput. Surv., Vol. 12, pp.213-253.


Fukuda, T., et al. (1994), Optimization of Group Behavior on Cellular Robotic System in Dynamic Environment, IEEE Int. Conference on Robotics and Automation, pp.1027-1032.

Fukunaga, K. (1991), Introduction to Statistical Pattern Recognition, 2nd Edition, Academic Press.

Gofman, Y. and N. Kiryati (1996), Detecting Symmetry in Gray Level Images: The Global Optimization Approach, Proceedings of ICPR '96, pp.889-894.

Goldberg, D. E. (1989), Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley.

Grefenstette, J., L. Davis, and D. Cerys (1991), Genesis and OOGA: Two Genetic Algorithm Systems, TSP: Melrose, MA.

Gruau F., and D. Whitley (1993), Adding Learning to the Cellular Development of Neural Networks: Evolution and the Baldwin Effect, Evolutionary Computation, Vol.1, No.3, pp. 213-234.

Gutta, S., J. Huang, I. Shah, D. Singh, B. Takacs, and H. Wechsler (1995), Benchmark Studies on Face Recognition, Int. Workshop on Automatic Face and Gesture Recognition, Zurich, Switzerland.

Hildreth, E. and D. Marr (1980), Theory of Edge Detection, Proceedings of the Royal Society of London, Vol. 207, pp.187-217.

Hinton, G. E. and S. J. Nowlan (1987), How Learning Can Guide Evolution, Complex Systems, Vol. 1, pp.495-502.

Holland, J. H. (1975), Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, MI.

Holland, J. H. (1993), Echoing emergence: Objectives, rough definitions, and speculations for Echo-class models (Working Paper 93-04-023), in Complexity: Metaphors, Models, and Reality, G. Cowan, D. Pines, and D. Melzner (Eds.), Addison-Wesley.

Horn, B. K. P. and B. G. Schunck (1980), Determining Optical Flow, Artificial Intelligence, Vol. 17, pp.185-203.

Horswill, I. (1995), Visual routines and visual search, Int. Joint Conf. on Artificial Intelligence, Montreal, Canada.

Huang, J., S. Gutta, and H. Wechsler (1996), Detection of Human Faces Using Decision Trees, in Proceedings of the 2nd International Conference on Automated Face and Gesture Recognition (ICAFGR), Killington, VT.

Huber, P. J. (1981), Robust Statistics, John Wiley, New York.

Johnson, M. P., P. Maes, and T. Darrell (1994), Evolving visual routines, in Artificial Life IV, R. A. Brooks and P. Maes (Eds.), MIT Press.

Jolliffe, I. T. (1986), Principal Component Analysis, Springer, New York.

Koch, C. and S. Ullman (1987), Shifts in Selective Visual Attention: Towards the Underlying Neural Circuitry, in L. Vaina (Ed.), Matters of Intelligence, Reidel Publishing.

Koza, J. R. (1992), Genetic Programming, MIT Press.

Krogh, A. and J. Vedelsby (1995), Neural Network Ensembles, Cross Validation and Active Learning, in Advances in Neural Information Processing Systems (NIPS), D. S. Touretzky (Ed.), Morgan Kaufmann, Vol. 2, pp.231-238.

Lades, M., J. Vorbruggen, J. Buhmann, J. Lange, C. v.d. Malsburg, and R. Wurtz (1993), Distortion invariant object recognition in the dynamic link architecture, IEEE Trans. Computers, Vol. 42, pp.300-311.

Lam, K. M. and H. Yan (1996), Locating and Extracting the Eye in Human Face Images, Pattern Recognition, Vol. 29, No. 5, pp.771-779.

Langton, C. (1989), Artificial Life, Addison-Wesley.

Linsker, R. (1988), Self-organization in a perceptual neural network, Computer, 21, pp.105-117.

Liu C. and H. Wechsler (1998), Evolution of Optimal Projection Axes (OPA) for Face Recognition, 2nd Int. Conf. on Automatic Face and Gesture Recognition, Nara, Japan.

MacFarland, D. (Ed) (1987), The Oxford Companion to Animal Behavior, Oxford University Press.
