- •Preface
- •Contents
- •1 Disability and Assistive Technology Systems
- •Learning Objectives
- •1.1 The Social Context of Disability
- •1.2 Assistive Technology Outcomes: Quality of Life
- •1.2.1 Some General Issues
- •1.2.2 Definition and Measurement of Quality of Life
- •1.2.3 Health Related Quality of Life Measurement
- •1.2.4 Assistive Technology Quality of Life Procedures
- •1.2.5 Summary and Conclusions
- •1.3 Modelling Assistive Technology Systems
- •1.3.1 Modelling Approaches: A Review
- •1.3.2 Modelling Human Activities
- •1.4 The Comprehensive Assistive Technology (CAT) Model
- •1.4.1 Justification of the Choice of Model
- •1.4.2 The Structure of the CAT Model
- •1.5 Using the Comprehensive Assistive Technology Model
- •1.5.1 Using the Activity Attribute of the CAT Model to Determine Gaps in Assistive Technology Provision
- •1.5.2 Conceptual Structure of Assistive Technology Systems
- •1.5.3 Investigating Assistive Technology Systems
- •1.5.4 Analysis of Assistive Technology Systems
- •1.5.5 Synthesis of Assistive Technology Systems
- •1.6 Chapter Summary
- •Questions
- •Projects
- •References
- •2 Perception, the Eye and Assistive Technology Issues
- •Learning Objectives
- •2.1 Perception
- •2.1.1 Introduction
- •2.1.2 Common Laws and Properties of the Different Senses
- •2.1.3 Multisensory Perception
- •2.1.4 Multisensory Perception in the Superior Colliculus
- •2.1.5 Studies of Multisensory Perception
- •2.2 The Visual System
- •2.2.1 Introduction
- •2.2.2 The Lens
- •2.2.3 The Iris and Pupil
- •2.2.4 Intraocular Pressure
- •2.2.5 Extraocular Muscles
- •2.2.6 Eyelids and Tears
- •2.3 Visual Processing in the Retina, Lateral Geniculate Nucleus and the Brain
- •2.3.1 Nerve Cells
- •2.3.2 The Retina
- •2.3.3 The Optic Nerve, Optic Tract and Optic Radiation
- •2.3.4 The Lateral Geniculate Body or Nucleus
- •2.3.5 The Primary Visual or Striate Cortex
- •2.3.6 The Extrastriate Visual Cortex and the Superior Colliculus
- •2.3.7 Visual Pathways
- •2.4 Vision in Action
- •2.4.1 Image Formation
- •2.4.2 Accommodation
- •2.4.3 Response to Light
- •2.4.4 Colour Vision
- •2.4.5 Binocular Vision and Stereopsis
- •2.5 Visual Impairment and Assistive Technology
- •2.5.1 Demographics of Visual Impairment
- •2.5.2 Illustrations of Some Types of Visual Impairment
- •2.5.3 Further Types of Visual Impairment
- •2.5.4 Colour Blindness
- •2.5.5 Corrective Lenses
- •2.6 Chapter Summary
- •Questions
- •Projects
- •References
- •3 Sight Measurement
- •Learning Objectives
- •3.1 Introduction
- •3.2 Visual Acuity
- •3.2.1 Using the Chart
- •3.2.2 Variations in Measuring Visual Acuity
- •3.3 Field of Vision Tests
- •3.3.1 The Normal Visual Field
- •3.3.2 The Tangent Screen
- •3.3.3 Kinetic Perimetry
- •3.3.4 Static Perimetry
- •3.4 Pressure Measurement
- •3.5 Biometry
- •3.6 Ocular Examination
- •3.7 Optical Coherence Tomography
- •3.7.1 Echo Delay
- •3.7.2 Low Coherence Interferometry
- •3.7.3 An OCT Scanner
- •3.8 Ocular Electrophysiology
- •3.8.1 The Electrooculogram (EOG)
- •3.8.2 The Electroretinogram (ERG)
- •3.8.3 The Pattern Electroretinogram
- •3.8.4 The Visual Evoked Cortical Potential
- •3.8.5 Multifocal Electrophysiology
- •3.9 Chapter Summary
- •Glossary
- •Questions
- •Projects
- •4 Haptics as a Substitute for Vision
- •Learning Objectives
- •4.1 Introduction
- •4.1.1 Physiological Basis
- •4.1.2 Passive Touch, Active Touch and Haptics
- •4.1.3 Exploratory Procedures
- •4.2 Vision and Haptics Compared
- •4.3 The Capacity of Bare Fingers in Real Environments
- •4.3.1 Visually Impaired People’s Use of Haptics Without any Technical Aid
- •4.3.2 Speech Perceived by Hard-of-hearing People Using Bare Hands
- •4.3.3 Natural Capacity of Touch and Evaluation of Technical Aids
- •4.4 Haptic Low-tech Aids
- •4.4.1 The Long Cane
- •4.4.2 The Guide Dog
- •4.4.3 Braille
- •4.4.4 Embossed Pictures
- •4.4.5 The Main Lesson from Low-tech Aids
- •4.5 Matrices of Point Stimuli
- •4.5.1 Aids for Orientation and Mobility
- •4.5.2 Aids for Reading Text
- •4.5.3 Aids for Reading Pictures
- •4.6 Computer-based Aids for Graphical Information
- •4.6.1 Aids for Graphical User Interfaces
- •4.6.2 Tactile Computer Mouse
- •4.7 Haptic Displays
- •4.7.1 Information Available via a Haptic Display
- •4.7.2 What Information Can Be Obtained with the Reduced Information?
- •4.7.3 Haptic Displays as Aids for the Visually Impaired
- •4.8 Chapter Summary
- •4.9 Concluding Remarks
- •Questions
- •Projects
- •References
- •5 Mobility: An Overview
- •Learning Objectives
- •5.1 Introduction
- •5.2 The Travel Activity
- •5.2.1 Understanding Mobility
- •5.2.2 Assistive Technology Systems for the Travel Process
- •5.3 The Historical Development of Travel Aids for Visually Impaired and Blind People
- •5.4 Obstacle Avoidance AT: Guide Dogs and Robotic Guide Walkers
- •5.4.1 Guide Dogs
- •5.4.2 Robotic Guides and Walkers
- •5.5 Obstacle Avoidance AT: Canes
- •5.5.1 Long Canes
- •5.5.2 Technology Canes
- •5.6 Other Mobility Assistive Technology Approaches
- •5.6.1 Clear-path Indicators
- •5.6.2 Obstacle and Object Location Detectors
- •5.6.3 The vOICe System
- •5.7 Orientation Assistive Technology Systems
- •5.7.1 Global Positioning System Orientation Technology
- •5.7.2 Other Technology Options for Orientation Systems
- •5.8 Accessible Environments
- •5.9 Chapter Summary
- •Questions
- •Projects
- •References
- •6 Mobility AT: The Batcane (UltraCane)
- •Learning Objectives
- •6.1 Mobility Background and Introduction
- •6.2 Principles of Ultrasonics
- •6.2.1 Ultrasonic Waves
- •6.2.2 Attenuation and Reflection Interactions
- •6.2.3 Transducer Geometry
- •6.3 Bats and Signal Processing
- •6.3.1 Principles of Bat Sonar
- •6.3.2 Echolocation Call Structures
- •6.3.3 Signal Processing Capabilities
- •6.3.4 Applicability of Bat Echolocation to Sonar System Design
- •6.4 Design and Construction Issues
- •6.4.1 Outline Requirement Specification
- •6.4.2 Ultrasonic Spatial Sensor Subsystem
- •6.4.3 Trial Prototype Spatial Sensor Arrangement
- •6.4.4 Tactile User Interface Subsystem
- •6.4.5 Cognitive Mapping
- •6.4.6 Embedded Processing Control Requirements
- •6.5 Concept Phase and Engineering Prototype Phase Trials
- •6.6 Case Study in Commercialisation
- •6.7 Chapter Summary
- •Questions
- •Projects
- •References
- •7 Navigation AT: Context-aware Computing
- •Learning objectives
- •7.1 Defining the Orientation/Navigation Problem
- •7.1.1 Orientation, Mobility and Navigation
- •7.1.2 Traditional Mobility Aids
- •7.1.3 Limitations of Traditional Aids
- •7.2 Cognitive Maps
- •7.2.1 Learning and Acquiring Spatial Information
- •7.2.2 Factors that Influence How Knowledge Is Acquired
- •7.2.3 The Structure and Form of Cognitive Maps
- •7.3 Overview of Existing Technologies
- •7.3.1 Technologies for Distant Navigation
- •7.3.2 User Interface Output Technologies
- •7.4 Principles of Mobile Context-aware Computing
- •7.4.1 Adding Context to User-computer Interaction
- •7.4.2 Acquiring Useful Contextual Information
- •7.4.3 Capabilities of Context-awareness
- •7.4.4 Application of Context-aware Principles
- •7.4.5 Technological Challenges and Unresolved Usability Issues
- •7.5 Test Procedures
- •7.5.1 Human Computer Interaction (HCI)
- •7.5.2 Cognitive Mapping
- •7.5.3 Overall Approach
- •7.6 Future Positioning Technologies
- •7.7 Chapter Summary
- •7.7.1 Conclusions
- •Questions
- •Projects
- •References
- •Learning Objectives
- •8.1 Defining the Navigation Problem
- •8.1.1 What is the Importance of Location Information?
- •8.1.2 What Mobility Tools and Traditional Maps are Available for the Blind?
- •8.2 Principles of Global Positioning Systems
- •8.2.1 What is the Global Positioning System?
- •8.2.2 Accuracy of GPS: Some General Issues
- •8.2.3 Accuracy of GPS: Some Technical Issues
- •8.2.4 Frequency Spectrum of GPS, Present and Future
- •8.2.5 Other GPS Systems
- •8.3 Application of GPS Principles
- •8.4 Design Issues
- •8.5 Development Issues
- •8.5.1 Choosing an Appropriate Platform
- •8.5.2 Choosing the GPS Receiver
- •8.5.3 Creating a Packaged System
- •8.5.4 Integration vs Stand-alone
- •8.6 User Interface Design Issues
- •8.6.1 How to Present the Information
- •8.6.2 When to Present the Information
- •8.6.3 What Information to Present
- •8.7 Test Procedures and Results
- •8.8 Case Study in Commercialisation
- •8.8.1 Understanding the Value of the Technology
- •8.8.2 Limitations of the Technology
- •8.8.3 Ongoing Development
- •8.9 Chapter Summary
- •Questions
- •Projects
- •References
- •9 Electronic Travel Aids: An Assessment
- •Learning Objectives
- •9.1 Introduction
- •9.2 Why Do an Assessment?
- •9.3 Methodologies for Assessments of Electronic Travel Aids
- •9.3.1 Eliciting User Requirements
- •9.3.2 Developing a User Requirements Specification and Heuristic Evaluation
- •9.3.3 Hands-on Assessments
- •9.3.4 Methodology Used for Assessments in this Chapter
- •9.4 Modern-day Electronic Travel Aids
- •9.4.1 The Distinction Between Mobility and Navigation Aids
- •9.4.2 The Distinction Between Primary and Secondary Aids
- •9.4.3 User Requirements: Mobility and Navigation Aids
- •9.4.4 Mobility Aids
- •9.4.5 Mobility Aids: Have They Solved the Mobility Challenge?
- •9.4.6 Navigation Aids
- •9.4.7 Navigation Aids: Have They Solved the Navigation Challenge?
- •9.5 Training
- •9.6 Chapter Summary and Conclusions
- •Questions
- •Projects
- •References
- •10 Accessible Environments
- •Learning Objectives
- •10.1 Introduction
- •10.1.1 Legislative and Regulatory Framework
- •10.1.2 Accessible Environments: An Overview
- •10.1.3 Principles for the Design of Accessible Environments
- •10.2 Physical Environments: The Streetscape
- •10.2.1 Pavements and Pathways
- •10.2.2 Road Crossings
- •10.2.3 Bollards and Street Furniture
- •10.3 Physical Environments: Buildings
- •10.3.1 General Exterior Issues
- •10.3.2 General Interior Issues
- •10.3.4 Signs and Notices
- •10.3.5 Interior Building Services
- •10.4 Environmental Information and Navigation Technologies
- •10.4.1 Audio Information System: General Issues
- •10.4.2 Some Technologies for Environmental Information Systems
- •10.5 Accessible Public Transport
- •10.5.1 Accessible Public Transportation: Design Issues
- •10.6 Chapter Summary
- •Questions
- •Projects
- •References
- •11 Accessible Bus System: A Bluetooth Application
- •Learning Objectives
- •11.1 Introduction
- •11.2 Bluetooth Fundamentals
- •11.2.1 Brief History of Bluetooth
- •11.2.2 Bluetooth Power Class
- •11.2.3 Protocol Stack
- •11.2.4 Bluetooth Profile
- •11.2.5 Piconet
- •11.3 Design Issues
- •11.3.1 System Architecture
- •11.3.2 Hardware Requirements
- •11.3.3 Software Requirements
- •11.4 Developmental Issues
- •11.4.1 Bluetooth Server
- •11.4.2 Bluetooth Client (Mobile Device)
- •11.4.3 User Interface
- •11.5 Commercialisation Issues
- •11.6 Chapter Summary
- •Questions
- •Projects
- •References
- •12 Accessible Information: An Overview
- •Learning Objectives
- •12.1 Introduction
- •12.2 Low Vision Aids
- •12.2.1 Basic Principles
- •12.3 Low Vision Assistive Technology Systems
- •12.3.1 Large Print
- •12.3.2 Closed Circuit Television Systems
- •12.3.3 Video Magnifiers
- •12.3.4 Telescopic Assistive Systems
- •12.4 Audio-transcription of Printed Information
- •12.4.1 Stand-alone Reading Systems
- •12.4.2 Read IT Project
- •12.5 Tactile Access to Information
- •12.5.1 Braille
- •12.5.2 Moon
- •12.5.3 Braille Devices
- •12.6 Accessible Computer Systems
- •12.6.1 Input Devices
- •12.6.2 Output Devices
- •12.6.3 Computer-based Reading Systems
- •12.6.4 Accessible Portable Computers
- •12.7 Accessible Internet
- •12.7.1 World Wide Web Guidelines
- •12.7.2 Guidelines for Web Authoring Tools
- •12.7.3 Accessible Adobe Portable Document Format (PDF) Documents
- •12.7.4 Bobby Approval
- •12.8 Telecommunications
- •12.8.1 Voice Dialling General Principles
- •12.8.2 Talking Caller ID
- •12.8.3 Mobile Telephones
- •12.9 Chapter Summary
- •Questions
- •Projects
- •References
- •13 Screen Readers and Screen Magnifiers
- •Learning Objectives
- •13.1 Introduction
- •13.2 Overview of Chapter
- •13.3 Interacting with a Graphical User Interface
- •13.4 Screen Magnifiers
- •13.4.1 Overview
- •13.4.2 Magnification Modes
- •13.4.3 Other Interface Considerations
- •13.4.4 The Architecture and Implementation of Screen Magnifiers
- •13.5 Screen Readers
- •13.5.1 Overview
- •13.5.2 The Architecture and Implementation of a Screen Reader
- •13.5.3 Using a Braille Display
- •13.5.4 User Interface Issues
- •13.6 Hybrid Screen Reader Magnifiers
- •13.7 Self-magnifying Applications
- •13.8 Self-voicing Applications
- •13.9 Application Adaptors
- •13.10 Chapter Summary
- •Questions
- •Projects
- •References
- •14 Speech, Text and Braille Conversion Technology
- •Learning Objectives
- •14.1 Introduction
- •14.1.1 Introducing Mode Conversion
- •14.1.2 Outline of the Chapter
- •14.2 Prerequisites for Speech and Text Conversion Technology
- •14.2.1 The Spectral Structure of Speech
- •14.2.2 The Hierarchical Structure of Spoken Language
- •14.2.3 Prosody
- •14.3 Speech-to-text Conversion
- •14.3.1 Principles of Pattern Recognition
- •14.3.2 Principles of Speech Recognition
- •14.3.3 Equipment and Applications
- •14.4 Text-to-speech Conversion
- •14.4.1 Principles of Speech Production
- •14.4.2 Principles of Acoustical Synthesis
- •14.4.3 Equipment and Applications
- •14.5 Braille Conversion
- •14.5.1 Introduction
- •14.5.2 Text-to-Braille Conversion
- •14.5.3 Braille-to-text Conversion
- •14.6 Commercial Equipment and Applications
- •14.6.1 Speech vs Braille
- •14.6.2 Speech Output in Devices for Daily Life
- •14.6.3 Portable Text-based Devices
- •14.6.4 Access to Computers
- •14.6.5 Reading Machines
- •14.6.6 Access to Telecommunication Devices
- •14.7 Discussion and the Future Outlook
- •14.7.1 End-user Studies
- •14.7.2 Discussion and Issues Arising
- •14.7.3 Future Developments
- •Questions
- •Projects
- •References
- •15 Accessing Books and Documents
- •Learning Objectives
- •15.1 Introduction: The Challenge of Accessing the Printed Page
- •15.2 Basics of Optical Character Recognition Technology
- •15.2.1 Details of Optical Character Recognition Technology
- •15.2.2 Practical Issues with Optical Character Recognition Technology
- •15.3 Reading Systems
- •15.4 DAISY Technology
- •15.4.1 DAISY Full Audio Books
- •15.4.2 DAISY Full Text Books
- •15.4.3 DAISY and Other Formats
- •15.5 Players
- •15.6 Accessing Textbooks
- •15.7 Accessing Newspapers
- •15.8 Future Technology Developments
- •15.9 Chapter Summary and Conclusion
- •15.9.1 Chapter Summary
- •15.9.2 Conclusion
- •Questions
- •Projects
- •References
- •Learning Objectives
- •16.1 Introduction
- •16.1.1 Print Impairments
- •16.1.2 Music Notation
- •16.2 Overview of Accessible Music
- •16.2.1 Formats
- •16.2.2 Technical Aspects
- •16.3 Some Recent Initiatives and Projects
- •16.3.2 Play 2
- •16.3.3 Dancing Dots
- •16.3.4 Toccata
- •16.4 Problems to Be Overcome
- •16.4.1 A Content Processing Layer
- •16.4.2 Standardization of Accessible Music Technology
- •16.5 Unifying Accessible Design, Technology and Musical Content
- •16.5.1 Braille Music
- •16.5.2 Talking Music
- •16.6 Conclusions
- •16.6.1 Design for All or Accessibility from Scratch
- •16.6.2 Applying Design for All in Emerging Standards
- •16.6.3 Accessibility in Emerging Technology
- •Questions
- •Projects
- •References
- •17 Assistive Technology for Daily Living
- •Learning Objectives
- •17.1 Introduction
- •17.2 Personal Care
- •17.2.1 Labelling Systems
- •17.2.2 Healthcare Monitoring
- •17.3 Time-keeping, Alarms and Alerting
- •17.3.1 Time-keeping
- •17.3.2 Alarms and Alerting
- •17.4 Food Preparation and Consumption
- •17.4.1 Talking Kitchen Scales
- •17.4.2 Talking Measuring Jug
- •17.4.3 Liquid Level Indicator
- •17.4.4 Talking Microwave Oven
- •17.4.5 Talking Kitchen and Remote Thermometers
- •17.4.6 Braille Salt and Pepper Set
- •17.5 Environmental Control and Use of Appliances
- •17.5.1 Light Probes
- •17.5.2 Colour Probes
- •17.5.3 Talking and Tactile Thermometers and Barometers
- •17.5.4 Using Appliances
- •17.6 Money, Finance and Shopping
- •17.6.1 Mechanical Money Indicators
- •17.6.2 Electronic Money Identifiers
- •17.6.3 Electronic Purse
- •17.6.4 Automatic Teller Machines (ATMs)
- •17.7 Communications and Access to Information: Other Technologies
- •17.7.1 Information Kiosks and Other Self-service Systems
- •17.7.2 Using Smart Cards
- •17.7.3 EZ Access®
- •17.8 Chapter Summary
- •Questions
- •Projects
- •References
- •Learning Objectives
- •18.1 Introduction
- •18.2 Education: Learning and Teaching
- •18.2.1 Accessing Educational Processes and Approaches
- •18.2.2 Educational Technologies, Devices and Tools
- •18.3 Employment
- •18.3.1 Professional and Person-centred
- •18.3.2 Scientific and Technical
- •18.3.3 Administrative and Secretarial
- •18.3.4 Skilled and Non-skilled (Manual) Trades
- •18.3.5 Working Outside
- •18.4 Recreational Activities
- •18.4.1 Accessing the Visual, Audio and Performing Arts
- •18.4.2 Games, Puzzles, Toys and Collecting
- •18.4.3 Holidays and Visits: Museums, Galleries and Heritage Sites
- •18.4.4 Sports and Outdoor Activities
- •18.4.5 DIY, Art and Craft Activities
- •18.5 Chapter Summary
- •Questions
- •Projects
- •References
- •Biographical Sketches of the Contributors
- •Index
74 2 Perception, the Eye and Assistive Technology Issues
ships, whereas those in the temporal lobe affect memory for objects and faces. However, lesions in both areas may produce only transient impairments, implying either that visual functions are widely distributed or that other areas can take over functions previously carried out by the damaged areas. The model of separate streams of visual processing concerned with ‘what’ and ‘where’ is therefore a simplification, and cortical processing is more complicated.
There is also another important perceptual pathway from the eye, called the collicular pathway, which includes the superior colliculus, but which will not be discussed further here.
2.4 Vision in Action
2.4.1 Image Formation
Points on real objects reflect diverging light rays with negative light vergence. On entering the eye, the light rays are limited by the aperture of the pupil, so the divergence of the admitted bundle decreases as the pupil constricts. The amount of refraction required to focus light on the retina increases with the vergence of the light. Using Figure 2.5, the vergence can be stated in terms of the angle φ of light from an object at distance d passing through a pupil of diameter a, as

tan(φ/2) = a/(2d)

so that

φ = 2 tan⁻¹(a/(2d))
Figure 2.5. Vergence of light through the eye pupil
For a fixed distance, the vergence varies approximately linearly with the aperture size and, for a fixed aperture size, approximately inversely with the distance.
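The vergence relation can be checked numerically. A minimal sketch, assuming illustrative pupil diameter and object distances not taken from the text:

```python
import math

def vergence_angle_deg(pupil_diameter_m, object_distance_m):
    """Angle phi subtended at the pupil by a point object:
    phi = 2 * arctan(a / (2 * d)), returned in degrees."""
    a, d = pupil_diameter_m, object_distance_m
    return math.degrees(2 * math.atan(a / (2 * d)))

# A 4 mm pupil viewing a point at 0.5 m and at 5 m
phi_near = vergence_angle_deg(0.004, 0.5)
phi_far = vergence_angle_deg(0.004, 5.0)
print(phi_near, phi_far)  # the angle falls roughly tenfold with tenfold distance
```

For such small angles tan(φ/2) ≈ φ/2, which is why the vergence varies almost exactly linearly with aperture size and inversely with distance.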
Image formation by a lens results from refraction of light, which occurs because the speed of light varies with the medium it is in. Light moves more slowly in matter, such as the lens of the eye, than in a vacuum. The degree of reduction in speed depends on the material: the ratio of the speed of light in a vacuum, c, to its speed in another medium, ν, is the refractive index, n, of the medium, i.e.
n = c /ν > 1
The speed of light in air is very close to that in a vacuum, so the refractive index of air is generally taken to be 1.0. The refractive index also depends on wavelength, and is greater for shorter wavelengths than longer ones. The change in speed on passing between media generally also produces a change in direction, i.e. a bending of the light rays. This change in direction is expressed by Snell’s law, which states that the product of the refractive index and the sine of the ray’s angle is the same in both media. For incident and refracted angles θi and θr in media of refractive indices ni and nr:

ni sin θi = nr sin θr

so that

sin θi / sin θr = nr / ni
Since the degree of bending or refraction is determined by the difference between the refractive indices of the two media, the air-cornea interface has the greatest effect on the eye’s optical power. Vision is blurred underwater because of the reduced difference between the refractive indices of the cornea and water, compared with the cornea and air. Any scarring of the surface of the cornea or asymmetrical curvature can significantly reduce image quality, whereas the small loss in power due to removal of the lens can be compensated for by spectacles, contact lenses or intraocular lens implants.
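Snell’s law makes the underwater-blur argument quantitative. A hedged sketch: the refractive indices used (water ≈ 1.333, cornea ≈ 1.376) are standard textbook values rather than figures given in this chapter, and the 30° incidence angle is purely illustrative:

```python
import math

def refraction_angle_deg(theta_i_deg, n_i, n_r):
    """Solve Snell's law n_i*sin(theta_i) = n_r*sin(theta_r) for theta_r."""
    sin_r = n_i * math.sin(math.radians(theta_i_deg)) / n_r
    return math.degrees(math.asin(sin_r))

N_AIR, N_WATER, N_CORNEA = 1.000, 1.333, 1.376

# How much a ray incident at 30 degrees is bent at the cornea's front surface
bend_in_air = 30 - refraction_angle_deg(30, N_AIR, N_CORNEA)
bend_in_water = 30 - refraction_angle_deg(30, N_WATER, N_CORNEA)
print(bend_in_air, bend_in_water)
```

The bending in water is only a small fraction of that in air, which is why vision blurs underwater: the cornea loses most of its refractive power.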
In general, the optical properties of the eye can be modelled by a lens of negligible thickness. Parallel rays striking the lens are made to converge towards a point on the axis called the focal point of the lens. A ray of light passing through the centre of a thin lens does not change direction. The nodal point is the point on the axis of the lens where rays of light pass through without changing direction. The eye is a compound optical system that has two nodal points. However, since the two nodal points are close together relative to the distance from the eye to the object being viewed, they can be treated in practice as one nodal point about 15–17 mm in front of the retina. This distance is called the posterior nodal distance and varies slightly between people. Using the one nodal point approximation makes it easier to calculate the size of the image on the retina. From Figure 2.6, the angle φ is given by
φ = tan⁻¹(y/(x + d)) ≈ tan⁻¹(y/x) = tan⁻¹(height of object / distance of object from eye)
Figure 2.6. Simple model for eye optics using one nodal point
The height of the retinal image y′ is given by

y′ = tan φ × posterior nodal distance ≈ posterior nodal distance × (y/x) = 17 × (y/x) mm
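The single-nodal-point approximation gives a simple way to estimate retinal image sizes. A minimal sketch using the 17 mm posterior nodal distance from the text (the object dimensions are illustrative):

```python
def retinal_image_height_mm(object_height_m, object_distance_m,
                            nodal_distance_mm=17.0):
    """Retinal image height via the single-nodal-point approximation:
    y' ~= posterior nodal distance * (object height / object distance)."""
    return nodal_distance_mm * object_height_m / object_distance_m

# A 1.8 m tall person viewed from 10 m away
image_mm = retinal_image_height_mm(1.8, 10.0)
print(image_mm)  # about 3 mm on the retina
```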
2.4.2 Accommodation
Accommodation is the process of changing the curvature of the lens, mainly at its front surface, to change the degree of refraction of incoming light. When the ciliary muscle contracts, it moves closer to the lens, tension on the zonules relaxes and the lens becomes more curved, so that near objects are focused on the retina (see Figure 2.7b). When the ciliary muscle is relaxed, tension on the zonules and the lens capsule increases, flattening the lens so that rays of light from distant objects are focused on the retina (see Figure 2.7a). The accommodation reflex appears to be controlled by a combination of negative feedback and a tonic controller, which together minimise blur of the retinal image. When blur exceeds a threshold, a neural signal stimulates or inhibits the ciliary muscle to correct the focusing error. This produces clear vision quickly, but not in the long term, as the neural signal decays rapidly. A slow or tonic controller provides sustained, self-adaptive regulation of accommodation through long-term shifts in the tone of the ciliary muscle. When there is no retinal image, for instance in the dark, the tonic controller generally adds about 1.5 D of accommodation, curving the lens enough to give sharp focus on an object at about 67 cm. The ability to adapt to distant targets may have a role in preventing work-induced myopia, as it relaxes the ciliary muscle after long periods of close work.
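The tonic accommodation figure can be checked directly, since accommodation in dioptres is the reciprocal of the focus distance in metres. A small sketch (the 25 cm reading distance is an illustrative value, not from the text):

```python
def accommodation_demand_dioptres(object_distance_m):
    """Accommodation needed to focus an object at the given distance,
    relative to the relaxed, distance-focused state: D = 1 / d."""
    return 1.0 / object_distance_m

# The ~1.5 D tonic resting level corresponds to a focus distance of about 67 cm
resting_focus_m = 1.0 / 1.5
# Reading at a typical 25 cm requires considerably more accommodation
reading_demand = accommodation_demand_dioptres(0.25)
print(resting_focus_m, reading_demand)
```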
2.4.3 Response to Light
When exposed to light, the visual pigments absorb photons and are chemically changed into other compounds that absorb light less well or have a different wavelength sensitivity. This process is called bleaching, as the photoreceptor pigment
Figure 2.7a,b. Accommodation diagram
loses colour. Most nerve cells depolarise in response to a stimulus, that is, the potential across their cell membranes is reduced. The retinal photoreceptors, however, hyperpolarise: the potential across the cell membrane increases in response to light. This may be because the photoreceptor membrane potential in the dark is smaller in magnitude than that of other nerve cells, about −50 mV rather than −70 mV.
In the dark the sodium pores, which are the portions of the receptor membrane with greater permeability to sodium ions, are open and sodium ions flow into the cells and potassium ions flow out. This gives a current called the dark current, which leads to the depolarisation of the receptor and its continued activity. When the visual pigment is bleached in response to light, the sodium pores close, the dark current decreases and the cell is hyperpolarised, leading to a decrease in the release of transmitter, as this is greater at lower rather than higher potentials. The bleaching of the visual pigment leads to the activation of many molecules of an enzyme called transducin, each of which inactivates hundreds of molecules of a chemical called cGMP (cyclic guanosine monophosphate). This leads to the closure of millions of sodium pores, as cGMP is responsible for keeping them open. This cascade response allows a single photon to excite a rod sufficiently to transmit a signal. Rods only respond to changes in illumination in dim light and not in bright light. Their high sensitivity means that the sodium pores are already closed in bright light, making the rods saturated and unable to respond to any further increases in light.
2.4.4 Colour Vision
Visible light consists of electromagnetic radiation with wavelengths between 400 and 700 nm. Both natural light and artificial illumination consist of an approximately even mixture of light energy at different wavelengths, called white light.
When light strikes an object, it can be absorbed (with the energy converted to heat), transmitted and/or reflected. In many cases some light is absorbed and some reflected, in proportions that depend on the wavelength. A pigment absorbs energy at some wavelengths and reflects others, and appears coloured if the reflected energy is in the visible range. However, the colour perceived depends on the properties of the visual system as well as on the wavelengths reflected.
Rods contain rhodopsin, which has a peak sensitivity at about 510 nm, in the green part of the spectrum. Rhodopsin is sometimes called visual purple, as it reflects blue and red light, making it look purple. There are three different types of cones, with peak absorptions at about 430, 530 and 560 nm, and they are therefore referred to as blue, green and red cones. However, these names refer to the peak absorption sensitivities rather than to how the pigments appear: monochromatic light at 430, 530 and 560 nm looks violet, blue-green and yellow-green rather than blue, green and red. In addition, the three cone types have overlapping sensitivity curves. Light at 600 nm evokes the greatest response from the red cones, with their peak sensitivity at 560 nm, but also a weaker response from the other two cone types (see Figure 2.8).
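The overlap of the cone sensitivity curves can be illustrated with a toy model. The peak wavelengths (430, 530, 560 nm) come from the text, but the Gaussian shape and the ~40 nm width are simplifying assumptions of this sketch, not measured cone spectra:

```python
import math

CONE_PEAKS_NM = {"blue": 430.0, "green": 530.0, "red": 560.0}
WIDTH_NM = 40.0  # assumed width of the idealised sensitivity curves

def cone_responses(wavelength_nm):
    """Relative response of each cone type, modelled as a Gaussian
    centred on its peak absorption wavelength."""
    return {name: math.exp(-((wavelength_nm - peak) / WIDTH_NM) ** 2)
            for name, peak in CONE_PEAKS_NM.items()}

# Light at 600 nm: strongest response from red cones, weaker from green,
# essentially none from blue -- the overlap the text describes
responses = cone_responses(600.0)
print(responses)
```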
The presence of three different types of cones allows colour and brightness to be distinguished, and coloured light to be distinguished from white light. If there were only two types of cones, the monochromatic wavelength that stimulated the two cone systems in the same ratio as white light would be indistinguishable from white light. Therefore, colour-blind people with only two types of cones have a wavelength of light that they cannot distinguish from white light.
The three cone types function together as a single system, whereas the rods work on their own. Although rods and cones may both be functioning at intermediate light levels, their effects are not combined: in dim light, people can distinguish shapes fairly well but are unable to see colours.
Theories of colour vision need to account for the colours of the rainbow; the purples obtained by stimulating the red and blue cones or combining long- and short-wavelength light; the pale or desaturated colours obtained by adding white to any other colour; and the browns. There are two main theories: the Young-Helmholtz-Maxwell trichromatic theory (see Figure 2.9) and the opponent theory due to Hering (see Figure 2.10). These two theories were originally thought to be contradictory, but it is now realised that they are complementary. According to the trichromatic theory, the perception of colour results from unequal stimulation of the three types of cones. Light with a broad spectrum of wavelengths, such as light from the sun or a candle, stimulates all three types of cones approximately equally, giving white. The appearance of white light can also be obtained from two complementary beams of narrow-band light, such as yellow and blue or red and blue-green, which give white when mixed together (see Figure 2.9).
The trichromatic theory can predict and explain many, but not all, colour phenomena. For instance, it does not explain why the colour of an object depends on simultaneous and successive colour contrasts, that is, the colours surrounding it and the colours seen immediately before it respectively. According to the opponent colour theory, there are two chromatic visual pairs, comprising yellow and blue,
Figure 2.8. Sensitivity of the cones to different wavelengths of light
and red and green and an achromatic pair, white and black. Opponency means that the colours of an opponent chromatic pair cannot be seen together and therefore there are no reddish green or bluish yellow colours (see Figure 2.10).
Some retinal ganglion cells are excited by one colour in the pair and inhibited by the other, for instance, excited by red light and inhibited by green light, and only fire when there is no green light. These cells have a {red-on, green-off} receptive field and do not respond to a combination of red and green lights if the excitatory and inhibitory effects are approximately equal and therefore cancel each other. There are also cells with {green-on, red-off}, {blue-on, yellow-off} and {yellow-on, blue-off} responses. Some cells have centre-surround receptive fields, such as a {red-on centre, green-off surround}.
A {red-on, green-off} cell receives excitatory input from an L (red) cone and inhibitory input from an M (green) cone, whereas a {yellow-on, blue-off} cell receives excitatory input from both M and L cones and inhibitory input from S (blue) cones. When the input is a combination of red and green light, {red, green} cells do not fire due to the opponency of the M and L cone inputs. However, the {yellow-on, blue-off} cells are excited by the M and L cone inputs, whereas the {blue-on, yellow-off} cells are inhibited, so that yellow is transmitted and the combination of green and red is perceived as yellow.
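The opponent combination described above can be sketched as simple arithmetic on cone signals. The channel definitions (red-green comparing L against M, yellow-blue comparing the combined L+M signal against S) follow the text; the equal weights are an illustrative assumption:

```python
def opponent_channels(L, M, S):
    """Illustrative opponent recoding of cone signals:
    positive red_green signals red, negative signals green;
    positive yellow_blue signals yellow, negative signals blue."""
    return {"red_green": L - M,
            "yellow_blue": (L + M) / 2.0 - S}

# A mixture of red and green light drives L and M equally: the red-green
# channel cancels, while the yellow-blue channel signals yellow
mix = opponent_channels(L=1.0, M=1.0, S=0.0)
print(mix)
```

With blue light alone (S active, L and M silent) the yellow_blue channel goes negative, signalling blue, consistent with the opponency described.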
When the input is a combination of yellow and blue light, the yellow light affects the M and L cones and inhibits the {red, green} cells due to opponent input from these cones. The blue light affects the S cones and the {blue, yellow} system is
Figure 2.9. Response of the cones to different mixtures of coloured light
inactivated due to the opponent nature of the S and M+L cones. The black channel is also inhibited and therefore white is perceived. When looking at a grey or white background after a bright colour, such as red, people see the contrast colour, in this case green. A bright red flash or prolonged view of red makes the L cones adapt and reduces their sensitivity compared to the M cones. Therefore, subsequent viewing
Figure 2.10. Opponent theory of colour vision
of white light, which contains medium and long wavelengths, has a greater effect on the M cones than on the adapted L cones, so the green system is activated more strongly than the red one. Similar arguments hold for other colour contrasts.