- •Preface
- •Contents
- •1 Disability and Assistive Technology Systems
- •Learning Objectives
- •1.1 The Social Context of Disability
- •1.2 Assistive Technology Outcomes: Quality of Life
- •1.2.1 Some General Issues
- •1.2.2 Definition and Measurement of Quality of Life
- •1.2.3 Health Related Quality of Life Measurement
- •1.2.4 Assistive Technology Quality of Life Procedures
- •1.2.5 Summary and Conclusions
- •1.3 Modelling Assistive Technology Systems
- •1.3.1 Modelling Approaches: A Review
- •1.3.2 Modelling Human Activities
- •1.4 The Comprehensive Assistive Technology (CAT) Model
- •1.4.1 Justification of the Choice of Model
- •1.4.2 The Structure of the CAT Model
- •1.5 Using the Comprehensive Assistive Technology Model
- •1.5.1 Using the Activity Attribute of the CAT Model to Determine Gaps in Assistive Technology Provision
- •1.5.2 Conceptual Structure of Assistive Technology Systems
- •1.5.3 Investigating Assistive Technology Systems
- •1.5.4 Analysis of Assistive Technology Systems
- •1.5.5 Synthesis of Assistive Technology Systems
- •1.6 Chapter Summary
- •Questions
- •Projects
- •References
- •2 Perception, the Eye and Assistive Technology Issues
- •Learning Objectives
- •2.1 Perception
- •2.1.1 Introduction
- •2.1.2 Common Laws and Properties of the Different Senses
- •2.1.3 Multisensory Perception
- •2.1.4 Multisensory Perception in the Superior Colliculus
- •2.1.5 Studies of Multisensory Perception
- •2.2 The Visual System
- •2.2.1 Introduction
- •2.2.2 The Lens
- •2.2.3 The Iris and Pupil
- •2.2.4 Intraocular Pressure
- •2.2.5 Extraocular Muscles
- •2.2.6 Eyelids and Tears
- •2.3 Visual Processing in the Retina, Lateral Geniculate Nucleus and the Brain
- •2.3.1 Nerve Cells
- •2.3.2 The Retina
- •2.3.3 The Optic Nerve, Optic Tract and Optic Radiation
- •2.3.4 The Lateral Geniculate Body or Nucleus
- •2.3.5 The Primary Visual or Striate Cortex
- •2.3.6 The Extrastriate Visual Cortex and the Superior Colliculus
- •2.3.7 Visual Pathways
- •2.4 Vision in Action
- •2.4.1 Image Formation
- •2.4.2 Accommodation
- •2.4.3 Response to Light
- •2.4.4 Colour Vision
- •2.4.5 Binocular Vision and Stereopsis
- •2.5 Visual Impairment and Assistive Technology
- •2.5.1 Demographics of Visual Impairment
- •2.5.2 Illustrations of Some Types of Visual Impairment
- •2.5.3 Further Types of Visual Impairment
- •2.5.4 Colour Blindness
- •2.5.5 Corrective Lenses
- •2.6 Chapter Summary
- •Questions
- •Projects
- •References
- •3 Sight Measurement
- •Learning Objectives
- •3.1 Introduction
- •3.2 Visual Acuity
- •3.2.1 Using the Chart
- •3.2.2 Variations in Measuring Visual Acuity
- •3.3 Field of Vision Tests
- •3.3.1 The Normal Visual Field
- •3.3.2 The Tangent Screen
- •3.3.3 Kinetic Perimetry
- •3.3.4 Static Perimetry
- •3.4 Pressure Measurement
- •3.5 Biometry
- •3.6 Ocular Examination
- •3.7 Optical Coherence Tomography
- •3.7.1 Echo Delay
- •3.7.2 Low Coherence Interferometry
- •3.7.3 An OCT Scanner
- •3.8 Ocular Electrophysiology
- •3.8.1 The Electrooculogram (EOG)
- •3.8.2 The Electroretinogram (ERG)
- •3.8.3 The Pattern Electroretinogram
- •3.8.4 The Visual Evoked Cortical Potential
- •3.8.5 Multifocal Electrophysiology
- •3.9 Chapter Summary
- •Glossary
- •Questions
- •Projects
- •4 Haptics as a Substitute for Vision
- •Learning Objectives
- •4.1 Introduction
- •4.1.1 Physiological Basis
- •4.1.2 Passive Touch, Active Touch and Haptics
- •4.1.3 Exploratory Procedures
- •4.2 Vision and Haptics Compared
- •4.3 The Capacity of Bare Fingers in Real Environments
- •4.3.1 Visually Impaired People’s Use of Haptics Without any Technical Aid
- •4.3.2 Speech Perceived by Hard-of-hearing People Using Bare Hands
- •4.3.3 Natural Capacity of Touch and Evaluation of Technical Aids
- •4.4 Haptic Low-tech Aids
- •4.4.1 The Long Cane
- •4.4.2 The Guide Dog
- •4.4.3 Braille
- •4.4.4 Embossed Pictures
- •4.4.5 The Main Lesson from Low-tech Aids
- •4.5 Matrices of Point Stimuli
- •4.5.1 Aids for Orientation and Mobility
- •4.5.2 Aids for Reading Text
- •4.5.3 Aids for Reading Pictures
- •4.6 Computer-based Aids for Graphical Information
- •4.6.1 Aids for Graphical User Interfaces
- •4.6.2 Tactile Computer Mouse
- •4.7 Haptic Displays
- •4.7.1 Information Available via a Haptic Display
- •4.7.2 What Information Can Be Obtained with the Reduced Information?
- •4.7.3 Haptic Displays as Aids for the Visually Impaired
- •4.8 Chapter Summary
- •4.9 Concluding Remarks
- •Questions
- •Projects
- •References
- •5 Mobility: An Overview
- •Learning Objectives
- •5.1 Introduction
- •5.2 The Travel Activity
- •5.2.1 Understanding Mobility
- •5.2.2 Assistive Technology Systems for the Travel Process
- •5.3 The Historical Development of Travel Aids for Visually Impaired and Blind People
- •5.4 Obstacle Avoidance AT: Guide Dogs and Robotic Guide Walkers
- •5.4.1 Guide Dogs
- •5.4.2 Robotic Guides and Walkers
- •5.5 Obstacle Avoidance AT: Canes
- •5.5.1 Long Canes
- •5.5.2 Technology Canes
- •5.6 Other Mobility Assistive Technology Approaches
- •5.6.1 Clear-path Indicators
- •5.6.2 Obstacle and Object Location Detectors
- •5.6.3 The vOICe System
- •5.7 Orientation Assistive Technology Systems
- •5.7.1 Global Positioning System Orientation Technology
- •5.7.2 Other Technology Options for Orientation Systems
- •5.8 Accessible Environments
- •5.9 Chapter Summary
- •Questions
- •Projects
- •References
- •6 Mobility AT: The Batcane (UltraCane)
- •Learning Objectives
- •6.1 Mobility Background and Introduction
- •6.2 Principles of Ultrasonics
- •6.2.1 Ultrasonic Waves
- •6.2.2 Attenuation and Reflection Interactions
- •6.2.3 Transducer Geometry
- •6.3 Bats and Signal Processing
- •6.3.1 Principles of Bat Sonar
- •6.3.2 Echolocation Call Structures
- •6.3.3 Signal Processing Capabilities
- •6.3.4 Applicability of Bat Echolocation to Sonar System Design
- •6.4 Design and Construction Issues
- •6.4.1 Outline Requirement Specification
- •6.4.2 Ultrasonic Spatial Sensor Subsystem
- •6.4.3 Trial Prototype Spatial Sensor Arrangement
- •6.4.4 Tactile User Interface Subsystem
- •6.4.5 Cognitive Mapping
- •6.4.6 Embedded Processing Control Requirements
- •6.5 Concept Phase and Engineering Prototype Phase Trials
- •6.6 Case Study in Commercialisation
- •6.7 Chapter Summary
- •Questions
- •Projects
- •References
- •7 Navigation AT: Context-aware Computing
- •Learning Objectives
- •7.1 Defining the Orientation/Navigation Problem
- •7.1.1 Orientation, Mobility and Navigation
- •7.1.2 Traditional Mobility Aids
- •7.1.3 Limitations of Traditional Aids
- •7.2 Cognitive Maps
- •7.2.1 Learning and Acquiring Spatial Information
- •7.2.2 Factors that Influence How Knowledge Is Acquired
- •7.2.3 The Structure and Form of Cognitive Maps
- •7.3 Overview of Existing Technologies
- •7.3.1 Technologies for Distant Navigation
- •7.3.2 User Interface Output Technologies
- •7.4 Principles of Mobile Context-aware Computing
- •7.4.1 Adding Context to User-computer Interaction
- •7.4.2 Acquiring Useful Contextual Information
- •7.4.3 Capabilities of Context-awareness
- •7.4.4 Application of Context-aware Principles
- •7.4.5 Technological Challenges and Unresolved Usability Issues
- •7.5 Test Procedures
- •7.5.1 Human Computer Interaction (HCI)
- •7.5.2 Cognitive Mapping
- •7.5.3 Overall Approach
- •7.6 Future Positioning Technologies
- •7.7 Chapter Summary
- •7.7.1 Conclusions
- •Questions
- •Projects
- •References
- •Learning Objectives
- •8.1 Defining the Navigation Problem
- •8.1.1 What is the Importance of Location Information?
- •8.1.2 What Mobility Tools and Traditional Maps are Available for the Blind?
- •8.2 Principles of Global Positioning Systems
- •8.2.1 What is the Global Positioning System?
- •8.2.2 Accuracy of GPS: Some General Issues
- •8.2.3 Accuracy of GPS: Some Technical Issues
- •8.2.4 Frequency Spectrum of GPS, Present and Future
- •8.2.5 Other GPS Systems
- •8.3 Application of GPS Principles
- •8.4 Design Issues
- •8.5 Development Issues
- •8.5.1 Choosing an Appropriate Platform
- •8.5.2 Choosing the GPS Receiver
- •8.5.3 Creating a Packaged System
- •8.5.4 Integration vs Stand-alone
- •8.6 User Interface Design Issues
- •8.6.1 How to Present the Information
- •8.6.2 When to Present the Information
- •8.6.3 What Information to Present
- •8.7 Test Procedures and Results
- •8.8 Case Study in Commercialisation
- •8.8.1 Understanding the Value of the Technology
- •8.8.2 Limitations of the Technology
- •8.8.3 Ongoing Development
- •8.9 Chapter Summary
- •Questions
- •Projects
- •References
- •9 Electronic Travel Aids: An Assessment
- •Learning Objectives
- •9.1 Introduction
- •9.2 Why Do an Assessment?
- •9.3 Methodologies for Assessments of Electronic Travel Aids
- •9.3.1 Eliciting User Requirements
- •9.3.2 Developing a User Requirements Specification and Heuristic Evaluation
- •9.3.3 Hands-on Assessments
- •9.3.4 Methodology Used for Assessments in this Chapter
- •9.4 Modern-day Electronic Travel Aids
- •9.4.1 The Distinction Between Mobility and Navigation Aids
- •9.4.2 The Distinction Between Primary and Secondary Aids
- •9.4.3 User Requirements: Mobility and Navigation Aids
- •9.4.4 Mobility Aids
- •9.4.5 Mobility Aids: Have They Solved the Mobility Challenge?
- •9.4.6 Navigation Aids
- •9.4.7 Navigation Aids: Have They Solved the Navigation Challenge?
- •9.5 Training
- •9.6 Chapter Summary and Conclusions
- •Questions
- •Projects
- •References
- •10 Accessible Environments
- •Learning Objectives
- •10.1 Introduction
- •10.1.1 Legislative and Regulatory Framework
- •10.1.2 Accessible Environments: An Overview
- •10.1.3 Principles for the Design of Accessible Environments
- •10.2 Physical Environments: The Streetscape
- •10.2.1 Pavements and Pathways
- •10.2.2 Road Crossings
- •10.2.3 Bollards and Street Furniture
- •10.3 Physical Environments: Buildings
- •10.3.1 General Exterior Issues
- •10.3.2 General Interior Issues
- •10.3.4 Signs and Notices
- •10.3.5 Interior Building Services
- •10.4 Environmental Information and Navigation Technologies
- •10.4.1 Audio Information System: General Issues
- •10.4.2 Some Technologies for Environmental Information Systems
- •10.5 Accessible Public Transport
- •10.5.1 Accessible Public Transportation: Design Issues
- •10.6 Chapter Summary
- •Questions
- •Projects
- •References
- •11 Accessible Bus System: A Bluetooth Application
- •Learning Objectives
- •11.1 Introduction
- •11.2 Bluetooth Fundamentals
- •11.2.1 Brief History of Bluetooth
- •11.2.2 Bluetooth Power Class
- •11.2.3 Protocol Stack
- •11.2.4 Bluetooth Profile
- •11.2.5 Piconet
- •11.3 Design Issues
- •11.3.1 System Architecture
- •11.3.2 Hardware Requirements
- •11.3.3 Software Requirements
- •11.4 Developmental Issues
- •11.4.1 Bluetooth Server
- •11.4.2 Bluetooth Client (Mobile Device)
- •11.4.3 User Interface
- •11.5 Commercialisation Issues
- •11.6 Chapter Summary
- •Questions
- •Projects
- •References
- •12 Accessible Information: An Overview
- •Learning Objectives
- •12.1 Introduction
- •12.2 Low Vision Aids
- •12.2.1 Basic Principles
- •12.3 Low Vision Assistive Technology Systems
- •12.3.1 Large Print
- •12.3.2 Closed Circuit Television Systems
- •12.3.3 Video Magnifiers
- •12.3.4 Telescopic Assistive Systems
- •12.4 Audio-transcription of Printed Information
- •12.4.1 Stand-alone Reading Systems
- •12.4.2 Read IT Project
- •12.5 Tactile Access to Information
- •12.5.1 Braille
- •12.5.2 Moon
- •12.5.3 Braille Devices
- •12.6 Accessible Computer Systems
- •12.6.1 Input Devices
- •12.6.2 Output Devices
- •12.6.3 Computer-based Reading Systems
- •12.6.4 Accessible Portable Computers
- •12.7 Accessible Internet
- •12.7.1 World Wide Web Guidelines
- •12.7.2 Guidelines for Web Authoring Tools
- •12.7.3 Accessible Adobe Portable Document Format (PDF) Documents
- •12.7.4 Bobby Approval
- •12.8 Telecommunications
- •12.8.1 Voice Dialling General Principles
- •12.8.2 Talking Caller ID
- •12.8.3 Mobile Telephones
- •12.9 Chapter Summary
- •Questions
- •Projects
- •References
- •13 Screen Readers and Screen Magnifiers
- •Learning Objectives
- •13.1 Introduction
- •13.2 Overview of Chapter
- •13.3 Interacting with a Graphical User Interface
- •13.4 Screen Magnifiers
- •13.4.1 Overview
- •13.4.2 Magnification Modes
- •13.4.3 Other Interface Considerations
- •13.4.4 The Architecture and Implementation of Screen Magnifiers
- •13.5 Screen Readers
- •13.5.1 Overview
- •13.5.2 The Architecture and Implementation of a Screen Reader
- •13.5.3 Using a Braille Display
- •13.5.4 User Interface Issues
- •13.6 Hybrid Screen Reader Magnifiers
- •13.7 Self-magnifying Applications
- •13.8 Self-voicing Applications
- •13.9 Application Adaptors
- •13.10 Chapter Summary
- •Questions
- •Projects
- •References
- •14 Speech, Text and Braille Conversion Technology
- •Learning Objectives
- •14.1 Introduction
- •14.1.1 Introducing Mode Conversion
- •14.1.2 Outline of the Chapter
- •14.2 Prerequisites for Speech and Text Conversion Technology
- •14.2.1 The Spectral Structure of Speech
- •14.2.2 The Hierarchical Structure of Spoken Language
- •14.2.3 Prosody
- •14.3 Speech-to-text Conversion
- •14.3.1 Principles of Pattern Recognition
- •14.3.2 Principles of Speech Recognition
- •14.3.3 Equipment and Applications
- •14.4 Text-to-speech Conversion
- •14.4.1 Principles of Speech Production
- •14.4.2 Principles of Acoustical Synthesis
- •14.4.3 Equipment and Applications
- •14.5 Braille Conversion
- •14.5.1 Introduction
- •14.5.2 Text-to-Braille Conversion
- •14.5.3 Braille-to-text Conversion
- •14.6 Commercial Equipment and Applications
- •14.6.1 Speech vs Braille
- •14.6.2 Speech Output in Devices for Daily Life
- •14.6.3 Portable Text-based Devices
- •14.6.4 Access to Computers
- •14.6.5 Reading Machines
- •14.6.6 Access to Telecommunication Devices
- •14.7 Discussion and the Future Outlook
- •14.7.1 End-user Studies
- •14.7.2 Discussion and Issues Arising
- •14.7.3 Future Developments
- •Questions
- •Projects
- •References
- •15 Accessing Books and Documents
- •Learning Objectives
- •15.1 Introduction: The Challenge of Accessing the Printed Page
- •15.2 Basics of Optical Character Recognition Technology
- •15.2.1 Details of Optical Character Recognition Technology
- •15.2.2 Practical Issues with Optical Character Recognition Technology
- •15.3 Reading Systems
- •15.4 DAISY Technology
- •15.4.1 DAISY Full Audio Books
- •15.4.2 DAISY Full Text Books
- •15.4.3 DAISY and Other Formats
- •15.5 Players
- •15.6 Accessing Textbooks
- •15.7 Accessing Newspapers
- •15.8 Future Technology Developments
- •15.9 Chapter Summary and Conclusion
- •15.9.1 Chapter Summary
- •15.9.2 Conclusion
- •Questions
- •Projects
- •References
- •Learning Objectives
- •16.1 Introduction
- •16.1.1 Print Impairments
- •16.1.2 Music Notation
- •16.2 Overview of Accessible Music
- •16.2.1 Formats
- •16.2.2 Technical Aspects
- •16.3 Some Recent Initiatives and Projects
- •16.3.2 Play 2
- •16.3.3 Dancing Dots
- •16.3.4 Toccata
- •16.4 Problems to Be Overcome
- •16.4.1 A Content Processing Layer
- •16.4.2 Standardization of Accessible Music Technology
- •16.5 Unifying Accessible Design, Technology and Musical Content
- •16.5.1 Braille Music
- •16.5.2 Talking Music
- •16.6 Conclusions
- •16.6.1 Design for All or Accessibility from Scratch
- •16.6.2 Applying Design for All in Emerging Standards
- •16.6.3 Accessibility in Emerging Technology
- •Questions
- •Projects
- •References
- •17 Assistive Technology for Daily Living
- •Learning Objectives
- •17.1 Introduction
- •17.2 Personal Care
- •17.2.1 Labelling Systems
- •17.2.2 Healthcare Monitoring
- •17.3 Time-keeping, Alarms and Alerting
- •17.3.1 Time-keeping
- •17.3.2 Alarms and Alerting
- •17.4 Food Preparation and Consumption
- •17.4.1 Talking Kitchen Scales
- •17.4.2 Talking Measuring Jug
- •17.4.3 Liquid Level Indicator
- •17.4.4 Talking Microwave Oven
- •17.4.5 Talking Kitchen and Remote Thermometers
- •17.4.6 Braille Salt and Pepper Set
- •17.5 Environmental Control and Use of Appliances
- •17.5.1 Light Probes
- •17.5.2 Colour Probes
- •17.5.3 Talking and Tactile Thermometers and Barometers
- •17.5.4 Using Appliances
- •17.6 Money, Finance and Shopping
- •17.6.1 Mechanical Money Indicators
- •17.6.2 Electronic Money Identifiers
- •17.6.3 Electronic Purse
- •17.6.4 Automatic Teller Machines (ATMs)
- •17.7 Communications and Access to Information: Other Technologies
- •17.7.1 Information Kiosks and Other Self-service Systems
- •17.7.2 Using Smart Cards
- •17.7.3 EZ Access®
- •17.8 Chapter Summary
- •Questions
- •Projects
- •References
- •Learning Objectives
- •18.1 Introduction
- •18.2 Education: Learning and Teaching
- •18.2.1 Accessing Educational Processes and Approaches
- •18.2.2 Educational Technologies, Devices and Tools
- •18.3 Employment
- •18.3.1 Professional and Person-centred
- •18.3.2 Scientific and Technical
- •18.3.3 Administrative and Secretarial
- •18.3.4 Skilled and Non-skilled (Manual) Trades
- •18.3.5 Working Outside
- •18.4 Recreational Activities
- •18.4.1 Accessing the Visual, Audio and Performing Arts
- •18.4.2 Games, Puzzles, Toys and Collecting
- •18.4.3 Holidays and Visits: Museums, Galleries and Heritage Sites
- •18.4.4 Sports and Outdoor Activities
- •18.4.5 DIY, Art and Craft Activities
- •18.5 Chapter Summary
- •Questions
- •Projects
- •References
- •Biographical Sketches of the Contributors
- •Index
deaf blind people have been able to perceive the speaker’s talking movements by putting their fingers on the active parts of the speaker’s face (the TADOMA method). The user of this method holds the thumb lightly on the lips and the other fingers on the cheek and neck, thereby obtaining information about the movements of the talking person’s lips and jaw, the airflow and the vibrations of the vocal cords, and so being able to perceive speech (Reed et al. 1992). This capacity to perceive speech tactually is acquired only after considerable training, preferably from an early age (Schultz et al. 1984). To investigate the mechanisms responsible for this tactile capacity, Reed et al. (1985) built an artificial mechanical face capable of producing information similar to that of a talking person, and investigations with this device analysed the different components involved in understanding speech in this way (Chomsky 1986; Leotta et al. 1988; Rabinowitz et al. 1990).
Although it is not the topic of this chapter, it may be mentioned that several tactile aids for hearing impaired people have also been developed (Summers 1992).
4.3.3 Natural Capacity of Touch and Evaluation of Technical Aids
As discussed above, haptics has great potential in many respects when working under natural conditions, including detecting edges, identifying objects and perceiving their texture, hardness, form and weight. It is important to keep this potential in mind when the efficiency of technical aids is investigated. Likewise, it is important to consider the conditions under which successful performance occurred, especially what stimulus properties were available and what exploratory procedures were possible. When a device fails to provide the desired information, a probable reason is that it does not successfully utilize the capabilities of haptics, that is, it does not provide the stimulus properties and exploration possibilities that are essential for haptic functioning. As will be discussed below, present-day haptic displays usually place many constraints on these basic requirements in comparison with what is naturally available.
4.4 Haptic Low-tech Aids
High technology is not a prerequisite for useful technical aids for visually impaired people. This is demonstrated by the successful use of the long cane and the guide dog for mobility, Braille for reading and embossed pictures for pictorial information.
4.4.1 The Long Cane
A cane has been used as an aid to the mobility of visually impaired people for centuries. Its basic feature is that it allows users to extend their reach to the ground and objects in front of them. Such an extension of the arm is typical of many tools. The contact between the user and the environment is at the tip of the tool, not at the hand holding it, which gives a kind of direct contact with the world that simplifies perception. As humans use many tools, the introduction of the cane as a tool can be considered quite natural.
Although canes have been used for a long time, they were not used systematically until the twentieth century, and especially after the Second World War (Bledsoe 1997). Two key features of the long cane developed at that time were that it was long compared with earlier canes and that it was intended mainly to be swung over the ground in front of the pedestrian. This makes it possible to preview the ground for about 1 m in front of the user and the space up to the level of the user’s waist, providing information and warnings important for safe walking (Barth and Foulke 1979). In addition to being long, the canes are usually white in order to be more easily seen by other travellers.
In spite of its technical simplicity, a long cane can inform its user about many features of the environment, such as the location of objects, the texture and hardness of the ground, and bumps and drop-offs which might cause the traveller to fall. Additionally, it makes possible the detection of street crossings and the direction of kerbs, information that is helpful in the direct guidance of locomotion (Guth and Rieser 1997; Jansson 2000b). The haptic stimulus properties made available via the cane are based on vibrations in the cane and force feedback during its handling. There is also important auditory information, but that is outside the scope of this chapter (Schenkman 1985; Schenkman and Jansson 1986).
There are several practical requirements on even such a simple device: for instance, it should conduct vibrations well, have a good weight distribution, and be lightweight yet heavy enough not to be deflected by a breeze, as well as strong and durable. For convenience, many people want a collapsible cane, which introduces further requirements. The suitability of the cane tip and grip also needs to be considered, as does the cane’s visibility to other travellers (Farmer and Smith 1997).
The usefulness of the long cane depends significantly on its being handled in a proper way and on the user having suitable training in its use (Zimmerman and Roman 1997). There are also special programmes developed for preschool children and older persons, as well as for learners with hearing, motor, cognitive and health problems in addition to vision problems; chapters describing these special training programmes can be found in the book edited by Blasch et al. (1997).
In addition to the long cane, there are many technically simple arrangements that can be useful for the guidance of walking without sight: handrails, traffic sounds and audible traffic lights (Guth et al. 1989; Guth and LaDuke 1994), and aids providing information, via hearing (Gesink et al. 1996; Guth et al. 1996) or touch (Jansson 1995), about veering from an intended path.
4.4.2 The Guide Dog
At first thought, the guide dog for the visually impaired pedestrian may not be considered a technical aid, as the dog is a living creature. However, it may be thought of in that way. The dog and the person form a unit joined by the harness, a girdle encircling the dog’s chest with a handle for the user. With the handle, a visually impaired pedestrian both controls the guide dog and receives information about its activities. Haptics is thus important for guide dog users as well.
Dogs were used to guide visually impaired people thousands of years ago, as depicted, for instance, on the walls of Pompeii. However, the systematic and widespread use of guide dogs started only after the First World War (Hännestrand 1995; Whitstock et al. 1997). As with the long cane, training is an important prerequisite for success. A puppy intended to be a guide dog is carefully selected and trained at a special school, and the guide dog user also has to be trained. Guide dogs are very important for many visually impaired people, but availability and cost restrict their use. It should also be noted that the guide dog, like the long cane, can be helpful for mobility in near-space, but not for orientation in the larger environment. Orientation information must come from other sources, both from the environment and from the pedestrian’s memory.
4.4.3 Braille
A visually impaired person can read text either by listening to spoken text or by using the hands to obtain a tactile equivalent. The former medium is often called the talking book; the most common version of the latter is Braille. Letters and other symbols in Braille are coded within a six-point (or sometimes eight-point) rectangular matrix that is presented in embossed form and read by the fingers (Foulke 1991). It is an invention that found its form during the early nineteenth century, and it has been a most important reading medium for visually impaired people ever since. However, much training is needed to become a fluent Braille reader, which is why it is mainly used by people who are blind from birth or become blind at an early age. The main reason for the difficulty many older people have in learning Braille is that their tactual acuity is not as good as that of a young person (Stevens 1992; Stevens et al. 1996). However, the often cited problems of older people in learning new skills, namely slower movements and quicker loss of memory, may be overstated and need be no obstacle to learning to read Braille in older age (see, for example, Millar 1997, pp 239 ff).
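The six-point coding described above can be illustrated with a short sketch. The mapping below follows the standard Braille cell numbering (dots 1–3 down the left column, 4–6 down the right) and renders cells using the Unicode Braille Patterns block, in which dot n corresponds to bit n−1; only a small sample of the Grade 1 (uncontracted) alphabet is shown.

```python
# Sketch: Grade 1 (uncontracted) Braille letters as dot patterns.
# Dots in the 2x3 cell are numbered 1-3 down the left column, 4-6 down
# the right. The Unicode Braille Patterns block (U+2800 onwards)
# encodes dot n as bit n-1 of the code point offset.

DOTS = {  # a sample of the Grade 1 alphabet
    'a': (1,), 'b': (1, 2), 'c': (1, 4), 'd': (1, 4, 5),
    'e': (1, 5), 'f': (1, 2, 4), 'g': (1, 2, 4, 5),
    'h': (1, 2, 5), 'i': (2, 4), 'j': (2, 4, 5),
}

def to_unicode_braille(letter: str) -> str:
    """Map a letter to its Unicode Braille cell."""
    bits = sum(1 << (dot - 1) for dot in DOTS[letter])
    return chr(0x2800 + bits)

print(to_unicode_braille('a'))   # dot 1 only: U+2801
print(''.join(to_unicode_braille(c) for c in 'abc'))
```

The same bit convention extends directly to eight-dot cells, where dots 7 and 8 sit below the standard matrix.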
There is another embossed reading system, Moon, in which the letter symbols are similar to ordinary letters and can therefore be read with less training, but this method is not very widespread. However, Moon text has been adapted for use in connection with simulated tactile diagrams (Hardwick 2004). Technological devices for reading Braille have been developed and will be discussed below; further discussion can be found in Chapters 12 and 14 of this volume.
A basic prerequisite for reading Braille is that the fingers can discriminate between the point patterns within the matrix. To facilitate this, strict rules for the properties of the point matrices have been developed, and changes have to be carefully justified. A thorough analysis of the perceptual, linguistic and cognitive aspects of Braille reading was given by Millar (1997).
Many thought that Braille would disappear when the talking book arrived. However, that has not been the case, probably because Braille has features that the talking book does not have to the same extent, for instance, easier skimming of the text to locate specific parts.
4.4.4 Embossed Pictures
Pictorial information can also be presented in embossed form: edges and surfaces can be reproduced with embossed lines and patterns of embossed units, respectively. The idea of embossed pictures is quite old; such pictures were available in the eighteenth century (Eriksson 1998). They have not been used as widely as the long cane and Braille, in spite of much effort to make them useful (for example, see Edman 1992). Some kinds of pictures can be read without much difficulty (Kennedy et al. 1991), and it has also been reported that blind people can draw pictures (in two dimensions) of three-dimensional objects and scenes (Kennedy 1993). However, more complex embossed pictures are more difficult to perceive haptically, the main problem being to get an overview of the picture. Even if some information can be obtained with a “haptic glance”, as discussed above, identification is often achieved only after exploratory movements that can take considerable time and that sometimes do not result in an understanding of the picture. Further, tactile pictures are typically two-dimensional, and perceiving three-dimensional properties of a scene haptically is usually difficult, even if some possibilities have been reported. Kennedy (1993) described the understanding of perspective by blind people, while Holmes et al. (1998) and Jansson and Holmes (2003) reported on the haptic perception of depth in pictures via texture gradients, one of the main features of visual 3D perception of pictures.
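The texture-gradient depth cue mentioned above can be sketched with a few lines of code. This illustration is not from the chapter: it simply renders rows of texture elements whose projected spacing shrinks in inverse proportion to simulated distance, the geometric regularity that an embossed version of such a pattern would present to the fingertips; the function name and parameters are hypothetical.

```python
# Illustrative sketch: a texture gradient rendered as rows of dots whose
# projected spacing shrinks with simulated distance. Under perspective
# projection, the spacing of equally sized ground elements is inversely
# proportional to their distance from the observer.

def texture_gradient(rows: int = 6, width: int = 40, scale: float = 8.0):
    """Return one text row per simulated ground distance (1, 2, ... rows)."""
    lines = []
    for i in range(rows):
        distance = i + 1.0
        # Projected spacing falls off as scale / distance (at least 1).
        spacing = max(1, round(scale / distance))
        n_dots = width // (spacing + 1)
        lines.append(('o' + ' ' * spacing) * n_dots)
    return lines

for line in texture_gradient():
    print(line)  # dots become denser row by row, conveying receding depth
```

In an embossed picture the same gradient would be produced with raised dots, so that the increasing density toward the "far" edge carries the depth information haptically.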
4.4.4.1 Techniques for Making Embossed Pictures
Originally, embossed pictures were “applications”, that is, the embossments were obtained by gluing strings, cloth and other materials onto a paper or board. This method is still used to some extent, but it is expensive, as all the material has to be applied manually to each picture. Thermoforming makes it possible to produce copies from one such manually made application: a three-dimensional model is copied onto a plastic sheet. In a thermoforming machine, the plastic sheet is placed over the model, heated, and formed to the model’s shape as the air between the sheet and the model is evacuated. For the production of models, special kits of symbols considered useful for tactile reading have been developed, especially for maps (see, for instance, the work of James and Armstrong 1976 and Barth 1982).
Another method that uses heat to produce embossment is microcapsule paper, or swell paper. This kind of paper has a layer of microscopic plastic capsules that expand when heated. With the proper heat exposure, black points, lines and patterns written or printed on the paper cause the paper beneath them to swell, while the rest of the paper remains flat (see, for instance, Edman 1992, pp 82–89). No heating is needed for raised-line drawing boards, which consist of a thin plastic sheet fastened to some kind of firm backing. The embossment is obtained by writing on the sheet with, for instance, a ballpoint pen. This method is probably the most important one for visually impaired children’s drawings (see, for instance, Edman 1992, pp 15–17). Other methods, as well as more details
