Eye Movements: A Window on Mind and Brain (Van Gompel, 2007)

254

R. Radach et al.

basis of L1 processing, the eyes go back to the attended region. Does this involve a different, non-decoupled (or better: coupled) mode of control? Our data presented in Figure 2 can be taken as evidence for such a view. The left panel of the figure shows the well-known effect of saccade launch distance on initial progressive saccade-landing position, commonly referred to as the saccade distance effect (McConkie, Kerr, Reddix, & Zola, 1988). For launch distances further away from the target word, the resulting distribution of landing positions is shifted to the right. The right panel depicts landing positions for regressive saccades launched from positions to the right of the target word. Quite strikingly, the saccadic range error is no longer present, and saccades from all launch sites attain the word center with remarkable precision (see Radach & McConkie, 1998, for a detailed discussion). This is in line with the high spatial accuracy observed in attention-saccade coupling experiments such as Deubel & Schneider (1996), suggesting that it may be the aid of perceptual attention that eliminates the range error. This, in turn, points to the possibility that “normal” progressive saccades are in fact not coupled with perceptual attention.
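The contrast between the two panels can be mimicked with a toy simulation. This is a minimal sketch, not the McConkie et al. (1988) model: the linear range-error slope, the "preferred" launch distance of 7 letters, and the landing-site standard deviation are illustrative values chosen for the example, not fitted parameters.

```python
import random

def landing_position(launch_dist, word_center, regressive=False,
                     slope=0.5, sd=1.6):
    """Sample one saccade landing position (in letters, 0 = word center).

    Progressive saccades show a range error: the mean landing site drifts
    away from the word center as launch distance departs from an assumed
    preferred distance of 7 letters (slope and preferred distance are
    illustrative).  Regressive saccades, as in the right panel of
    Figure 2, aim at the word center from any launch site.
    """
    if regressive:
        mean = word_center                          # no range error
    else:
        mean = word_center + slope * (7 - launch_dist)
    return random.gauss(mean, sd)

random.seed(1)
results = {}
for dist in (4, 7, 10):                             # launch distances
    prog = [landing_position(dist, 0.0) for _ in range(5000)]
    reg = [landing_position(dist, 0.0, regressive=True) for _ in range(5000)]
    results[dist] = (sum(prog) / 5000, sum(reg) / 5000)
    print(dist, round(results[dist][0], 2), round(results[dist][1], 2))
```

Mean progressive landing sites shift systematically with launch distance, while the simulated regressive means stay at the word center regardless of launch site.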

This argument reveals an interesting similarity between SAS theory and many PG models. As an example, Glenmore, in line with the theoretical framework proposed by Findlay & Walker (1999), assumes that word-objects are represented as potential saccade targets on a vector of saliency. When a fixate center triggers the execution of a saccade, it will go to the target with the highest saliency, co-determined by visual and linguistic processing. This process of target specification within the saliency map is already equivalent to the initial phase of saccade programming. In this model, eye movements are not “coupled” to a separate mechanism of attentional selection because such a mechanism does not play a role in the automated routine model of oculomotor control. This may be different in the preparation of interword regressions which are assumed to rely on spatial memory for prior fixation positions (Inhoff, Weger, & Radach, 2005). Hence, for quite different reasons, both E-Z reader and Glenmore appear compatible with the effect pattern shown in Figure 2.

Looking at models emphasizing low-level aspects of processing, the problem of spatial selection is approached in an essentially non-cognitive way. Here, spatial selection follows rather simple heuristic rules (like “attain the largest word within a 20-letter window”, Reilly & O’Regan, 1998) or may be based on a more sophisticated form of “educated guessing” in the spirit of Brysbaert & Vitu (1998; see also Brysbaert, Drieghe, & Vitu, 2005). The latter route was taken with impressive success in the SERIF model by McDonald, Carpenter, & Shillcock (2005), who nonetheless acknowledge that reading involves much more than moving eyes across a page and that such a low-level “shortcut mechanism” will, in later versions of their model, need to be supplemented with or replaced by a word-processing module.

In line with most other models, the educated guessing mechanism in the SERIF model uses low-level word unit information obtained via parafoveal visual processing for saccade target selection. In our discussion we have repeatedly pointed to the overwhelming empirical evidence in favor of word-based visuomotor control. A theoretical conception that casts doubt on the importance of word units in eye guidance has been suggested

Ch. 11: Models of Oculomotor Control in Reading

255

by Yang & McConkie (2004). These authors report experiments in which they used gaze contingent display changes to mask lines of text for the duration of some critical fixations so that (among other changes) word boundary information was not available. As it turned out, only relatively long fixations were affected by manipulations of this kind, and distributions of saccade landing sites were quite similar in conditions with and without the presence of word boundaries. In their contribution to this book, Yang & Vitu claim that the planning of some saccades is word based while others seem not to be influenced by low spatial frequency word boundary information. We suggest caution when applying data from masking studies to the question of word-based control. Yang & McConkie (2004) themselves discuss a number of alternative word-based hypotheses that might explain part of their results. A factor that may play a critical role is that word length information was only removed during a single critical fixation and then immediately restored. If, as suggested by McConkie, Kerr, Reddix, & Zola (1988), low spatial frequency information serves as the basis for saccade targeting, it is reasonable to assume that this information is accumulated over consecutive fixations at least within the total perceptual span. Thus information acquired during earlier fixations may have supported eye guidance when word spaces were temporarily filled during some fixations.

Rayner, Fischer, & Pollatsek (1998) have shown that the permanent removal of word space information interferes with both word-processing and eye-movement control, again providing solid support for word-based eye guidance. We recently examined eye movements while reading Thai, an alphabetic writing system with no spatial segmentation at word boundaries (Reilly, Radach, Corbic, & Luksaneeyanawin, 2005). Analyses of local eye-movement patterns revealed the existence of an attenuated preferred viewing position phenomenon in Thai reading. Interestingly, the steepness of the Gaussian distribution of initial saccade-landing positions (see Figure 1, left panel) was a function of the frequency with which specific letters occur at the beginning and end of words. We concluded that, in the absence of visual cues for word segmentation, orthographic information can serve as a basis for the parafoveal specification of saccade target units. This mode of oculomotor control requires a substantial degree of distributed (and perhaps interactive) processing where word segmentation may be a result of rather than a precondition for lexical access. From a more fundamental point of view, these observations can be taken as an intriguing example of the principle that, in the interests of optimal resource allocation for linguistic processing, oculomotor control is as low level as possible and as cognitive as necessary (Underwood & Radach, 1998).

With respect to the question of sequential vs parallel processing of visual target objects within the functional field of view, the traditional battlefield within the domain of reading is the highly debated evidence for so-called “parafovea-on-fovea effects”, where properties of neighboring words appear to affect viewing duration on the currently fixated word. A recent study by Kliegl, Nuthmann, & Engbert (2006) can be taken as an example. Using one of the largest available databases, in which 222 participants each read 144 sentences, they showed that linguistic properties of both the prior and the next word in the sentence influenced the viewing time on a fixated word. Such effects of a parafoveal word on a fixated (foveal) word should not occur if the attention-controlled


linguistic processing of words is strictly serial and if the saccade from the fixated word to the next word is programmed before a corresponding shift of attention takes place (for similar findings, see, e.g., Inhoff, Radach, Starr, & Greenberg, 2000; Inhoff, Starr, & Schindler, 2000; Kennedy & Pynte, 2005; Schroyens, Vitu, Brysbaert, & d’Ydewalle, 1999; Starr & Inhoff, 2004; Underwood, Binns, & Walker, 2000). However, it should be noted that parafovea-on-fovea effects have not always been found and that methodological objections can be raised against some studies (see Rayner & Juhasz, 2004, for a recent discussion).
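The logic of such corpus analyses can be illustrated with fabricated data: generate fixation durations that depend slightly on the frequency of the upcoming word, then recover that dependency as a difference in mean viewing times. All numbers below (effect size, frequency classes, noise level) are invented for the sketch and have no empirical status; a real analysis such as Kliegl et al. (2006) uses large corpora and mixed-effects regression models.

```python
import random

random.seed(0)

def simulate_fixation(freq_fixated, freq_next, parafoveal_effect=8.0):
    """Fabricated fixation duration (ms): longer for low-frequency
    fixated words and -- the parafovea-on-fovea pattern -- slightly
    longer when the *next* word is of low frequency.  Frequencies are
    coded as classes 1 (rare) to 5 (frequent)."""
    noise = random.gauss(0, 10)
    return 250 - 12 * freq_fixated - parafoveal_effect * freq_next + noise

durations_rare_next = [simulate_fixation(3, 1) for _ in range(2000)]
durations_freq_next = [simulate_fixation(3, 5) for _ in range(2000)]
mean_rare = sum(durations_rare_next) / 2000
mean_freq = sum(durations_freq_next) / 2000
# difference should recover roughly 8 ms/class * 4 classes = ~32 ms
print(round(mean_rare - mean_freq, 1))
```

The recovered difference in mean viewing time on the fixated word, driven entirely by the parafoveal word's frequency, is the signature such studies look for.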

In fact, Pollatsek, Reichle, & Rayner (2003) mustered a very clever defense against this type of evidence, by pointing out that during reading many saccades do not land on the intended target word. Therefore, a fixation falling on one of the last letters of word n might have been intended to land at word n+1, producing a spurious effect of word n+1 on word n. The issue of mislocated fixations in reading has assumed prominence also as a result of efforts to account for the inverted optimal viewing position (IOVP) effect observed for fixation durations in fluent reading (Vitu, McConkie, Kerr, & O’Regan, 2001; Nuthmann, Engbert, & Kliegl, 2005).

Chapter 14 of this book provides an elegant analytic solution to the problem of estimating the number of misplaced fixations on a word given that the landing site distribution comprises both intended landings and landings aimed at neighboring words. The surprising result is that the average rate of misplaced fixations is over 20% and significantly greater in cases where the intended targets are shorter words. The authors’ estimate is considerably larger than some researchers have thus far assumed (e.g., Reichle, Rayner, & Pollatsek, 2003, p. 510). This chapter provides an excellent starting point for taking a much closer look at the potential impact of misplaced fixations on the reading process. The primary motivation for Engbert et al.’s analysis was to account for the IOVP phenomenon, which they do here and elsewhere (Nuthmann, Engbert, & Kliegl, 2005) with remarkable success. The broader and no less interesting issue is what impact the estimated high incidence of misplaced fixations might have on the various classes of reading model. In the context of the discussion about spatially distributed and temporally overlapping word processing, the techniques developed by Engbert et al. may provide a way to estimate whether, as suggested by Pollatsek, Reichle, & Rayner (2003), parafovea-on-fovea effects are indeed compromised by misplaced fixations.6
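The logic behind such an estimate can be sketched numerically. Suppose, purely for illustration, that landing sites observed on a word are a mixture of intended landings (Gaussian around that word's center) and undershoots aimed at the following word (Gaussian around an assumed next-word center); the share of observed fixations that were in fact mislocated then follows from the mixture densities. All distributions, the mixture weight, and the next-word position below are invented; this is not the analytic solution of Chapter 14.

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def mislocation_rate(word_len, p_intended=0.8, sigma=1.8):
    """Estimated share of fixations observed on a word of length
    word_len that were actually aimed at the following word.

    Intended landings: Gaussian centered on this word.
    Mislocated landings: Gaussian centered on the next word (assumed,
    for illustration, to start two letters past this word's end).
    Both densities are evaluated only at the letters of the fixated
    word, i.e. where the mislocated fixation is actually observed."""
    mu_intended = (word_len + 1) / 2       # center of letters 1..word_len
    mu_next = word_len + 3                 # illustrative next-word center
    mislocated = total = 0.0
    for pos in range(1, word_len + 1):
        p_i = p_intended * normal_pdf(pos, mu_intended, sigma)
        p_m = (1 - p_intended) * normal_pdf(pos, mu_next, sigma)
        mislocated += p_m
        total += p_i + p_m
    return mislocated / total

for n in (3, 5, 8):
    print(n, round(mislocation_rate(n), 3))
```

Even with these toy parameters the sketch reproduces the qualitative pattern reported in Chapter 14: shorter words attract a higher proportion of mislocated fixations, because more of their letters lie close to the neighboring target.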

6 As noted before, a distinguishing feature among current models of reading is the relative importance each ascribes to linguistic and oculomotor factors in eye guidance. In models where linguistic or lexical processes play a significant role in driving eye-movement control, the prospect of over a fifth of landings on a word being misdirected clearly needs to be taken into account. One would expect that E-Z reader, for example, might have problems dealing with saccades that undershoot the intended target word, resulting in either a refixation of the current word or a fixation on a word that was intended to be skipped. Given the importance to E-Z reader of early lexical processing, landing on an already-processed word is likely to have some disruptive effects. This problem should be less serious in models that permit the simultaneous processing of several words. In any case these models also need to account for misplaced fixations and the inverted OVP effect (see, e.g., Engbert, Nuthmann, Richter, & Kliegl, 2005; McDonald, Carpenter & Shillcock, 2005).


In addition to the somewhat indirect argumentation based on parafovea-on-fovea effects, there have also been attempts to examine the issue of sequential vs parallel word processing more directly. Inhoff, Eiter, & Radach (2005) examined several preview conditions in a sentence reading experiment involving a display change occurring 150 ms after fixation onset on a pretarget word. When this technique was used to allow a preview of the subsequent target word exclusively during the initial part of the pretarget fixation, a 24 ms preview benefit emerged relative to a control condition involving a target fully masked by a pseudoword. This effect was not very large relative to the 90 ms full preview benefit, and it was also smaller than the benefit from an end-of-fixation preview. However, it supported the view that there is some temporal overlap between the processing of subsequent words and hence some degree of parallel processing.

Experimental evidence in support of the sequential processing position has been presented by Rayner, Juhasz, & Brown (in press), who used a saccade contingent display change technique where an invisible boundary was set either at the end of a pretarget word (word n − 1) or at the end of word n − 2. Replicating a large number of studies on this issue (see Rayner, 1998, for a review), they obtained a substantial parafoveal preview benefit when the boundary was located at word n − 1. Importantly, no such benefit was obtained when the boundary was set at word n − 2. This result is in line with the assumption common to all SAS models that word processing is restricted to exactly one word at a time and contradicts the prediction of any processing gradient model that some preview benefit should occur when letter information from a word two positions to the right of the current fixation is available during prior fixations.

Eventually, the position one takes in debates of this kind will also depend on the more general issue of whether information processing and oculomotor control in reading are seen as task specific or whether reading is considered to be a special case of universal processing mechanisms involved in “active vision” (Findlay & Gilchrist, 2003). If the latter view is adopted, the multiple lines of evidence reviewed in the previous section will play their role in assessing the viability of theoretical conceptions and conceptual models in reading.

One area where generalization from the mainstream of basic research on visual processing is particularly important is the time course of attentional orienting. As our discussion has shown, any approach that takes the notion of an attention shift seriously will need to allocate a certain amount of time for the triggering and execution of the attentional “movement”. Pollatsek, Reichle, & Rayner (2006) have noted that the assumption of an “instantaneous attention shift” in their model is not very plausible. They suggest that in later versions of E-Z reader the time it takes to shift attention may be counted toward the duration of the L2 phase of lexical processing. However, if lexical processing and the shift are considered to be sequential, this would mean that the time allowed for L2 would need to be reduced by at least 50 ms. To avoid this erosion of the sequential time line, it could be stated that part of L2 is equivalent to a “latency” for the shift, which is equivalent to positing that both occur in parallel (see also Reichle, Pollatsek, & Rayner, for a discussion of possible interpretations of L1 and L2). However, this eliminates the


idea that the completion of lexical access is the trigger for moving attention, which in our view would constitute a major change in the philosophy and architecture of the model.

4. Problems of comparing and evaluating models

As evident from our discussions above, there is now a rich diversity of approaches to modeling information processing and eye-movement control during reading. In the concluding section of this chapter we would like to point to some problems related to the comparison and evaluation of these competing models. We will try to avoid repeating the points made in prior discussions of these problems by Reichle, Rayner, & Pollatsek (1998) and Jacobs (2000). Our intention is to supplement their views by contributing a few remarks that may help raise awareness of what we believe to be a major deficit in the current state of the field. The question is how much has changed since Jacobs's refusal to compare three computational models presented in the volume edited by Kennedy, Radach, Heller, & Pynte (2000), based on his impression that these models differed in so many respects that a fair comparison was impossible.

4.1. Levels of description for computational models

As we pointed out in the introduction, from a theoretical point of view these models can be classified along the two axes of oculomotor vs cognitive control and sequential vs parallel word processing. Looking at the existing models from a more technical point of view, there are a number of additional aspects that can provide useful classifications. On a conceptual level, computational models are necessarily abstractions from a larger phenomenon, described by a theory, of which the researcher is seeking to gain a deeper understanding. The conceptual aspects of the model comprise the building blocks or conceptual units that the modeler considers essential to the theoretical account of the target phenomenon. For competing models to be comparable, there must be some degree of agreement between theorists regarding the core conceptual units of a model. Of course, the choice and scope of these units and of testable data is an issue for debate among theorists. Fortunately, as noted by Grainger (2003), the development of models in the field of continuous reading is characterized by a broad consensus regarding the critical phenomena that are to be explained. A useful compilation of mostly well-replicated and unquestionable facts about eye movements during reading that serve as accepted benchmarks for modeling can be found in Reichle, Rayner, & Pollatsek (2003).

Those aspects of a model that are formally described mathematically represent its core. In a complex, cognitive science domain such as reading, such formalizations will necessarily be partial. A key feature of the formal components of a model, or more correctly, their computational realization, is their success or otherwise in fitting empirical data. Since one can fit any data given enough free parameters, a measure of a model’s power is the extent to which it can formally account for the data with the fewest free parameters. Any complex model will, inevitably, be unable to propose a complete, integrated


formal account of all its conceptual components. In particular, it may not be possible to adequately account for how the various components interact. We refer to the non- or semi-formal framework that is used to integrate the various formal components of the model as the model’s architecture. Usually, this architecture is provided by the model’s computer implementation. So, for example, a computational model of eye-movement control in reading will comprise several distinct equations each describing the behavior of a conceptual unit of the model (word recognition, saccade triggering, etc.). These equations will be integrated within an algorithmic structure that can be used to generate data, which in turn can be compared to empirical observations.
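The parsimony criterion invoked above (accounting for the data with the fewest free parameters) is commonly operationalized with information criteria. A minimal sketch using AIC; the log-likelihoods and parameter counts below are fabricated for illustration and do not correspond to any published model fit.

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2 ln L.  Lower is better;
    each free parameter costs two units of fit quality."""
    return 2 * n_params - 2 * log_likelihood

# Fabricated comparison: model B fits the data slightly better but
# buys that fit with many more free parameters, so model A is
# preferred once parsimony is taken into account.
aic_a = aic(log_likelihood=-1200.0, n_params=3)    # lean model
aic_b = aic(log_likelihood=-1195.0, n_params=15)   # flexible model
print(aic_a, aic_b)
```

The same logic underlies the observation that a model can fit any data given enough free parameters: the penalty term makes such fits visible.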

The dynamical aspects of a model refer to the temporally extended behavior of the model. This is a function of the interaction between the model’s formally realized components and its architecture. Ultimately, it is the dynamics of the model that generate testable data. Computational models that can generate moment-to-moment data on the time course of, for example, saccade generation in reading will provide the most testable and convincing accounts of the process, as they (should) generate principally refutable predictions.

4.2. Implicit and explicit model assumptions

While the above-mentioned dimensions are largely uncontroversial and most computational models can be readily characterized in this way, it is more interesting to explore the boundary between what aspects of a particular dimension are explicitly highlighted by the modeler as theoretically crucial and what aspects are left implicit. This can often be the main arena for comparison between models, since what is made explicit is usually what the model designer considers testable and potentially refutable, whereas the implicit aspects are assumed to be uncontroversial and not critical to the explanatory status of the model. However, what is uncontroversial for one researcher may be a key battleground for another.

For all of the dimensions of the model described above, we can identify implicit assumptions that may or may not be well founded, but which are necessary in order to get the model to work. Beneath the exterior of any computational model there is a considerable amount of superstructure built upon a foundation of varying solidity. For example, in the case of reading, there is a common implicit assumption that the visual segmentation of a word should occur before it can be identified. However, an examination of reading data from non-spaced texts such as Thai suggests that this assumption may not be well founded, and that when reading in this type of script there may be an interactive process of segmentation and lexical identification involving multiple word candidates and multiple segmentation hypotheses (Reilly, Radach, et al., 2005). This raises an interesting question about which of the existing models could, in principle, survive exposure to alternative writing systems including Chinese and the different Japanese systems.

This is an example of a conceptual level assumption. However, implicit assumptions can be made at the formal, architectural and dynamical levels as well. For example, in the case of the well-known Interactive Activation (IA) model of word recognition


(Rumelhart & McClelland, 1982), the designers made an architectural decision to represent letters in separate banks of letter-position channels. Certainly, there is no evidence from Rumelhart and McClelland’s description of their model that this decision was anything other than a computational convenience in order to get the model to function. Nonetheless, this did not prevent some researchers choosing to test the model on the basis of this particular architectural feature (e.g. Mewhort & Johns, 1988).7

As computational models become more complex, it will become increasingly difficult to delineate those aspects that are meant to carry empirical weight from those that are less central to theory testing. Ultimately one wishes to test the theory, not the model. If spurious tests of the model are to be avoided, one needs to find a principled rather than an ad hoc way of indicating those aspects of the model that are of central theoretical significance and those that are not. The proposals in the next section aim to go some way toward this.

4.3. Some methodological proposals

To go some way to avoid problems relating to the testing of implicit and explicit aspects of a model, we propose a set of methodological approaches to the modeling exercise. Our methodological proposals fall under three headings: (1) the facilitation of the comparison of the structural and functional assumptions of competing models; (2) the grounding of models in the neuroscience of vision and language; and (3) the establishment of data sets for model comparison and benchmarking.

(1) With regard to the comparison of the structure and function of models, this could be facilitated by using a common implementation framework comprising a set of reusable software components (Schmidt & Fayad, 1997). In software engineering terms, a framework is a reusable, “semicomplete” application that can be specialized to produce particular applications or, in this case, particular models. The components would need to be fine-grained enough to accommodate the range of model types and model instances that are to be considered. If one could develop an acceptable and widely adopted modeling framework, it would be possible to establish a common basis on which to implement a variety of models. This would make the models more directly comparable in terms of not only their ability to account for data, but also their underlying theoretical assumptions. The modeling environment could provide a semi-formal language within which a model’s structures and process functions could both be unambiguously articulated. This would aid both the task of designing the models and communicating the design to other researchers.
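In concrete (hypothetical) form, such a framework might define the shared conceptual units as interchangeable components behind a fixed interface, so that, say, a serial-attention word-processing module and a processing-gradient module can be swapped inside one simulation loop and compared on identical ground. A sketch in Python; all class and method names here are our invention, not an existing framework.

```python
class WordProcessor:
    """Shared interface: a pluggable 'conceptual unit' specifying how
    much processing each word receives during the current fixation."""
    def processing_rate(self, word_index, eccentricity):
        raise NotImplementedError

class SerialProcessor(WordProcessor):
    """SAS-style component: only the attended word is processed."""
    def __init__(self, attended=0):
        self.attended = attended
    def processing_rate(self, word_index, eccentricity):
        return 1.0 if word_index == self.attended else 0.0

class GradientProcessor(WordProcessor):
    """PG-style component: processing rate falls off with eccentricity."""
    def processing_rate(self, word_index, eccentricity):
        return 1.0 / (1.0 + abs(eccentricity))

def fixation_snapshot(processor, words, fixated=0):
    """One simulated fixation; the same loop serves either theory."""
    return [processor.processing_rate(i, i - fixated)
            for i in range(len(words))]

words = ["the", "quick", "fox"]
print(fixation_snapshot(SerialProcessor(), words))
print(fixation_snapshot(GradientProcessor(), words))
```

Because both components are exercised by the identical simulation code, any difference in generated data reflects the theoretical assumption being swapped, not incidental implementation choices.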

7 It is somewhat ironic that a key conceptual component of the IA model, namely the concept of interaction between letter and word units and its supposed central role in mediating the word-superiority effect proved not to be as crucial as the model’s designers first thought. As Norris (1992) demonstrated, it is possible to produce the word-superiority effect in a feed-forward network, without feedback connections, and without explicit interaction. Nonetheless, despite this lack of specificity the IA model still stands as a tour de force of cognitive modeling with an impressive set of empirical findings to its credit.


(2) Functionalist computational models, of which E-Z reader is an excellent example, are inherently underdetermined in terms of their relationship to the brain mechanisms that underlie them. For example, one could envisage a family of E-Z reader–like models with quite different combinations of parameters and/or parameter values that would be capable of providing an equally good fit to the empirical data (e.g., Engbert & Kliegl, 2001). One way to reduce this lack of determinism is to invoke a criterion of biological plausibility when comparing models. There is an increasingly rich set of data emerging from the field of cognitive neuroscience which could be used to augment the traditional behavioral sources of constraint on computational models. An excellent example of this approach is the use of ERP analyses to delineate the time course of lexical access (e.g. Sereno, Rayner & Posner, 1998; Hauk & Pulvermüller, 2004). Another, not unrelated, factor in assessing competing models is to take account of the evolutionary context in which our visual system evolved. Because it evolved for purposes quite different from reading, we need to beware of too easy recourse to arguments of parsimony, particularly when they are couched solely in terms of the reading process itself. A model with the minimum of modifiable parameters may be parsimonious on its own terms but fail the test of biological realism when compared with, say, a model that comprises an artificial neural network with many hundreds of adjustable parameters. While evolution is parsimonious in the large, when we look at brain subsystems in isolation, such as those involved in reading, we need to be careful how we wield Occam’s razor.

(3) Finally, the issue of appropriate data sets with which to test and compare computational models of eye-movement control needs closer attention than it has been given to date. For example, the Schilling et al. (1998) data set used to parameterize and test E-Z reader and several other models is not particularly extensive. A good case can be made for establishing a range of publicly accessible data sets against which any proposed model can be tested. This would be similar to what has been done, for example, in machine learning, in data mining and most notably in the field of language acquisition (MacWhinney, 1995). Furthermore, the corpus of benchmark data should be extended to include corpora with common specifications in a variety of languages, alphabets and scripts. An excellent first step in this direction is the development of the Dundee Corpus in English and French (Kennedy, 2003) that has been used to develop and test the SERIF model (McDonald, Carpenter, & Shillcock, 2005). In the long term, the more successful models will be those that can readily generalize beyond just one language and one writing system.

5. Challenges for future model developments

As the present literature, including the following three chapters of this book, shows, progress in the area has been impressive since the mid-1990s. There is now a rich spectrum of competing theories and models. Old debates about oculomotor vs cognitive control models have been replaced with much more complex approaches covering a theoretical middle ground in which both ends of the spectrum have their place. The scope of these


models is still limited, partly for purposes of tractability, so that they cover a range of core phenomena, centered on the coordination of “eye” and “mind” during reading.

However, a close look at the state of the art in modeling eye-movement control in reading shows a striking deficit: So far no model has been published that is capable of accommodating inter-individual differences and intra-individual variations in reading. An obvious candidate for the latter is variation along the axis of superficial vs careful reading (O’Regan, 1992), as can be induced by changing the reading task to elicit rather shallow vs deep linguistic processing. Examples of inter-individual variation are the development of eye-movement control from childhood to skilled adult reading (Feng, 2006) and the changes that occur in ageing readers (e.g. Kliegl, Grabner, Rolfs, & Engbert, 2004).

In addition, there are several aspects of the reading process itself that appear underspecified in current models. We would like to point to four areas that in our view deserve consideration. This list includes processes and mechanisms that manifest themselves relatively clearly via measurement of eye movements. As an example, a key component of reading that is likely to modulate oculomotor control but is not readily traceable in standard data sets is the processing of phonological information, both at the level of word processing (e.g. Lee, Binder, Kim, Pollatsek, & Rayner, 1999) and phonological working memory (Inhoff, Connine, Eiter, Radach, & Heller, 2004).

5.1. Binocular coordination

One of the most solid (and rarely challenged) implicit assumptions in research about eye movements is that both eyes behave essentially in the same way. Eye movements measured from one eye are routinely generalized to both, and the few existing studies on binocular coordination have thus far received relatively little attention. As demonstrated by Heller & Radach (1999), there are systematic differences in the amplitudes of saccades made by the two eyes, resulting in mean disparities on the order of 1–1.5 letters, which are in turn partly offset by slow convergence movements (Hendricks, 1996). A comprehensive metrical description of binocular coordination has been provided by Liversedge, White, Findlay & Rayner (2006). Critical for our discussion is Juhasz, Liversedge, White & Rayner (2006), who have shown that variation in word frequency does not affect fixation disparity or any other aspect of binocular coordination. Recent data from our laboratory confirm these observations for a sample of elementary school students who showed larger disparities than adults but again no effects of word frequency. The fact that binocular coordination is essentially a physiological phenomenon (see Collewijn, Erkelens, & Steinman, 1988, for a seminal discussion) with no sensitivity to local variation of cognitive workload is good news for the modeling community. At this point we do not see the inclusion of this aspect of oculomotor behavior in future computational models as a priority. An important exception is the family of “split fovea models” (e.g. McDonald, Carpenter, & Shillcock, 2005). These make strong claims about both linguistic processing and eye movement control during reading based on the fact that the information entering the left vs right visual hemifields is projected to opposite brain hemispheres. In this


context the disparity between both eyes needs to be accounted for, as a retinal split will feed different information to the hemispheres when the eyes exhibit uncrossed or crossed disparity.

5.2. Letter processing within the perceptual span

Current models do a relatively good job in approximating the visual and informational (e.g. orthographic) constraints for word processing within the perceptual span. In SWIFT, the rate with which letter level input is processed is approximated using a Gaussian distribution, and in the current version of Glenmore a gamma distribution scales the input to the saliency map. In the SERIF model a stochastic selection mechanism is based on the (Gaussian) extended optimal viewing position described by Brysbaert & Vitu (1998). However, as shown by McConkie & Zola (1987, see Figure 1), letter discrimination performance around the current fixation position is more complex, as word boundaries play a modulating role. In recent years there has been a lively debate on the role of letter position coding in word recognition (e.g. Peressotti & Grainger, 1999; Stevens & Grainger, 2003) and dynamic reading (e.g. Inhoff, Radach, Eiter, & Skelly, 2003). We anticipate that results from these lines of research will eventually lead to major refinements in the way letter recognition within the perceptual span is understood and implemented in computational models of reading.
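The approximations just cited are easy to state concretely. Below is a sketch of a SWIFT-style Gaussian processing gradient over letter positions; the span parameter is illustrative, the real models use fitted values, Glenmore uses a gamma rather than a Gaussian distribution, and none of this captures the word-boundary modulation visible in McConkie & Zola's (1987) discrimination data.

```python
from math import exp

def letter_processing_rate(letter_pos, fixation_pos, sigma=3.0):
    """Relative rate of letter-level processing as a Gaussian function
    of distance (in letters) from the current fixation position.
    sigma (the span of effective processing) is an illustrative value,
    not a fitted model parameter."""
    return exp(-0.5 * ((letter_pos - fixation_pos) / sigma) ** 2)

fixation = 5
rates = [round(letter_processing_rate(p, fixation), 3) for p in range(12)]
print(rates)
```

The resulting profile peaks at the fixated letter and falls off symmetrically; the empirical pattern is more complex, which is precisely the refinement anticipated in the text.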

5.3. Orthographic and lexical processing

In most current models, such as E-Z reader or SWIFT, there are no explicit mechanisms to simulate the microlevel of word processing. Instead, the time course of word processing is approximated on the basis of parameters like word length, frequency and contextual predictability. One exception is the Glenmore model, which includes a relatively realistic connectionist processing module, where the dynamics of activation and inhibition on the level of letter and word nodes determine the flow of linguistic processing. Although this is a step in the right direction, a more comprehensive approach would be to combine a model of continuous reading with one of the existing computational models of single-word recognition. These models presently ignore the dynamic aspects of continuous reading but provide detailed and plausible accounts of sub-lexical and lexical aspects of word processing. Candidates that could be considered for such a combination include the revised activation-verification model (Paap, Johansen, Chun, & Vonnahme, 2000), the multiple read-out model (Jacobs, Graf, & Kinder, 2003) and the cascaded dual route model of visual word recognition (Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001).
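The parameter-based approximation described above can be made concrete. In the spirit of E-Z reader's familiarity-check stage, word-processing time decreases with the logarithm of corpus frequency and with contextual predictability; the functional form is in that spirit only, and the coefficients below are invented for the sketch, not the published parameter estimates.

```python
from math import log

def lexical_processing_time(frequency, predictability,
                            base=228.0, freq_coef=10.0, pred_coef=90.0):
    """Illustrative E-Z reader-style approximation: lexical processing
    time (ms) decreases with the word's log corpus frequency and with
    its contextual predictability.  All coefficients are invented."""
    return base - freq_coef * log(frequency) - pred_coef * predictability

# frequent, highly predictable word vs rare, unpredictable word
fast = lexical_processing_time(frequency=10000, predictability=0.8)
slow = lexical_processing_time(frequency=10, predictability=0.05)
print(round(fast), round(slow))
```

This is exactly the sense in which such models approximate rather than simulate word processing: the microlevel dynamics that a connectionist module like Glenmore's would compute are collapsed into a closed-form duration.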

5.4. Sentence-level processing

So far, computational models of eye movements during reading have eschewed consideration of specific sentence-level factors. Instead, in a number of models, supra-lexical knowledge is captured by a generic factor, word “predictability”, empirically established