- •Preface
- •Contents
- •1 Disability and Assistive Technology Systems
- •Learning Objectives
- •1.1 The Social Context of Disability
- •1.2 Assistive Technology Outcomes: Quality of Life
- •1.2.1 Some General Issues
- •1.2.2 Definition and Measurement of Quality of Life
- •1.2.3 Health Related Quality of Life Measurement
- •1.2.4 Assistive Technology Quality of Life Procedures
- •1.2.5 Summary and Conclusions
- •1.3 Modelling Assistive Technology Systems
- •1.3.1 Modelling Approaches: A Review
- •1.3.2 Modelling Human Activities
- •1.4 The Comprehensive Assistive Technology (CAT) Model
- •1.4.1 Justification of the Choice of Model
- •1.4.2 The Structure of the CAT Model
- •1.5 Using the Comprehensive Assistive Technology Model
- •1.5.1 Using the Activity Attribute of the CAT Model to Determine Gaps in Assistive Technology Provision
- •1.5.2 Conceptual Structure of Assistive Technology Systems
- •1.5.3 Investigating Assistive Technology Systems
- •1.5.4 Analysis of Assistive Technology Systems
- •1.5.5 Synthesis of Assistive Technology Systems
- •1.6 Chapter Summary
- •Questions
- •Projects
- •References
- •2 Perception, the Eye and Assistive Technology Issues
- •Learning Objectives
- •2.1 Perception
- •2.1.1 Introduction
- •2.1.2 Common Laws and Properties of the Different Senses
- •2.1.3 Multisensory Perception
- •2.1.4 Multisensory Perception in the Superior Colliculus
- •2.1.5 Studies of Multisensory Perception
- •2.2 The Visual System
- •2.2.1 Introduction
- •2.2.2 The Lens
- •2.2.3 The Iris and Pupil
- •2.2.4 Intraocular Pressure
- •2.2.5 Extraocular Muscles
- •2.2.6 Eyelids and Tears
- •2.3 Visual Processing in the Retina, Lateral Geniculate Nucleus and the Brain
- •2.3.1 Nerve Cells
- •2.3.2 The Retina
- •2.3.3 The Optic Nerve, Optic Tract and Optic Radiation
- •2.3.4 The Lateral Geniculate Body or Nucleus
- •2.3.5 The Primary Visual or Striate Cortex
- •2.3.6 The Extrastriate Visual Cortex and the Superior Colliculus
- •2.3.7 Visual Pathways
- •2.4 Vision in Action
- •2.4.1 Image Formation
- •2.4.2 Accommodation
- •2.4.3 Response to Light
- •2.4.4 Colour Vision
- •2.4.5 Binocular Vision and Stereopsis
- •2.5 Visual Impairment and Assistive Technology
- •2.5.1 Demographics of Visual Impairment
- •2.5.2 Illustrations of Some Types of Visual Impairment
- •2.5.3 Further Types of Visual Impairment
- •2.5.4 Colour Blindness
- •2.5.5 Corrective Lenses
- •2.6 Chapter Summary
- •Questions
- •Projects
- •References
- •3 Sight Measurement
- •Learning Objectives
- •3.1 Introduction
- •3.2 Visual Acuity
- •3.2.1 Using the Chart
- •3.2.2 Variations in Measuring Visual Acuity
- •3.3 Field of Vision Tests
- •3.3.1 The Normal Visual Field
- •3.3.2 The Tangent Screen
- •3.3.3 Kinetic Perimetry
- •3.3.4 Static Perimetry
- •3.4 Pressure Measurement
- •3.5 Biometry
- •3.6 Ocular Examination
- •3.7 Optical Coherence Tomography
- •3.7.1 Echo Delay
- •3.7.2 Low Coherence Interferometry
- •3.7.3 An OCT Scanner
- •3.8 Ocular Electrophysiology
- •3.8.1 The Electrooculogram (EOG)
- •3.8.2 The Electroretinogram (ERG)
- •3.8.3 The Pattern Electroretinogram
- •3.8.4 The Visual Evoked Cortical Potential
- •3.8.5 Multifocal Electrophysiology
- •3.9 Chapter Summary
- •Glossary
- •Questions
- •Projects
- •4 Haptics as a Substitute for Vision
- •Learning Objectives
- •4.1 Introduction
- •4.1.1 Physiological Basis
- •4.1.2 Passive Touch, Active Touch and Haptics
- •4.1.3 Exploratory Procedures
- •4.2 Vision and Haptics Compared
- •4.3 The Capacity of Bare Fingers in Real Environments
- •4.3.1 Visually Impaired People’s Use of Haptics Without any Technical Aid
- •4.3.2 Speech Perceived by Hard-of-hearing People Using Bare Hands
- •4.3.3 Natural Capacity of Touch and Evaluation of Technical Aids
- •4.4 Haptic Low-tech Aids
- •4.4.1 The Long Cane
- •4.4.2 The Guide Dog
- •4.4.3 Braille
- •4.4.4 Embossed Pictures
- •4.4.5 The Main Lesson from Low-tech Aids
- •4.5 Matrices of Point Stimuli
- •4.5.1 Aids for Orientation and Mobility
- •4.5.2 Aids for Reading Text
- •4.5.3 Aids for Reading Pictures
- •4.6 Computer-based Aids for Graphical Information
- •4.6.1 Aids for Graphical User Interfaces
- •4.6.2 Tactile Computer Mouse
- •4.7 Haptic Displays
- •4.7.1 Information Available via a Haptic Display
- •4.7.2 What Information Can Be Obtained with the Reduced Information?
- •4.7.3 Haptic Displays as Aids for the Visually Impaired
- •4.8 Chapter Summary
- •4.9 Concluding Remarks
- •Questions
- •Projects
- •References
- •5 Mobility: An Overview
- •Learning Objectives
- •5.1 Introduction
- •5.2 The Travel Activity
- •5.2.1 Understanding Mobility
- •5.2.2 Assistive Technology Systems for the Travel Process
- •5.3 The Historical Development of Travel Aids for Visually Impaired and Blind People
- •5.4 Obstacle Avoidance AT: Guide Dogs and Robotic Guide Walkers
- •5.4.1 Guide Dogs
- •5.4.2 Robotic Guides and Walkers
- •5.5 Obstacle Avoidance AT: Canes
- •5.5.1 Long Canes
- •5.5.2 Technology Canes
- •5.6 Other Mobility Assistive Technology Approaches
- •5.6.1 Clear-path Indicators
- •5.6.2 Obstacle and Object Location Detectors
- •5.6.3 The vOICe System
- •5.7 Orientation Assistive Technology Systems
- •5.7.1 Global Positioning System Orientation Technology
- •5.7.2 Other Technology Options for Orientation Systems
- •5.8 Accessible Environments
- •5.9 Chapter Summary
- •Questions
- •Projects
- •References
- •6 Mobility AT: The Batcane (UltraCane)
- •Learning Objectives
- •6.1 Mobility Background and Introduction
- •6.2 Principles of Ultrasonics
- •6.2.1 Ultrasonic Waves
- •6.2.2 Attenuation and Reflection Interactions
- •6.2.3 Transducer Geometry
- •6.3 Bats and Signal Processing
- •6.3.1 Principles of Bat Sonar
- •6.3.2 Echolocation Call Structures
- •6.3.3 Signal Processing Capabilities
- •6.3.4 Applicability of Bat Echolocation to Sonar System Design
- •6.4 Design and Construction Issues
- •6.4.1 Outline Requirement Specification
- •6.4.2 Ultrasonic Spatial Sensor Subsystem
- •6.4.3 Trial Prototype Spatial Sensor Arrangement
- •6.4.4 Tactile User Interface Subsystem
- •6.4.5 Cognitive Mapping
- •6.4.6 Embedded Processing Control Requirements
- •6.5 Concept Phase and Engineering Prototype Phase Trials
- •6.6 Case Study in Commercialisation
- •6.7 Chapter Summary
- •Questions
- •Projects
- •References
- •7 Navigation AT: Context-aware Computing
- •Learning objectives
- •7.1 Defining the Orientation/Navigation Problem
- •7.1.1 Orientation, Mobility and Navigation
- •7.1.2 Traditional Mobility Aids
- •7.1.3 Limitations of Traditional Aids
- •7.2 Cognitive Maps
- •7.2.1 Learning and Acquiring Spatial Information
- •7.2.2 Factors that Influence How Knowledge Is Acquired
- •7.2.3 The Structure and Form of Cognitive Maps
- •7.3 Overview of Existing Technologies
- •7.3.1 Technologies for Distant Navigation
- •7.3.2 User Interface Output Technologies
- •7.4 Principles of Mobile Context-aware Computing
- •7.4.1 Adding Context to User-computer Interaction
- •7.4.2 Acquiring Useful Contextual Information
- •7.4.3 Capabilities of Context-awareness
- •7.4.4 Application of Context-aware Principles
- •7.4.5 Technological Challenges and Unresolved Usability Issues
- •7.5 Test Procedures
- •7.5.1 Human Computer Interaction (HCI)
- •7.5.2 Cognitive Mapping
- •7.5.3 Overall Approach
- •7.6 Future Positioning Technologies
- •7.7 Chapter Summary
- •7.7.1 Conclusions
- •Questions
- •Projects
- •References
- •Learning Objectives
- •8.1 Defining the Navigation Problem
- •8.1.1 What is the Importance of Location Information?
- •8.1.2 What Mobility Tools and Traditional Maps are Available for the Blind?
- •8.2 Principles of Global Positioning Systems
- •8.2.1 What is the Global Positioning System?
- •8.2.2 Accuracy of GPS: Some General Issues
- •8.2.3 Accuracy of GPS: Some Technical Issues
- •8.2.4 Frequency Spectrum of GPS, Present and Future
- •8.2.5 Other GPS Systems
- •8.3 Application of GPS Principles
- •8.4 Design Issues
- •8.5 Development Issues
- •8.5.1 Choosing an Appropriate Platform
- •8.5.2 Choosing the GPS Receiver
- •8.5.3 Creating a Packaged System
- •8.5.4 Integration vs Stand-alone
- •8.6 User Interface Design Issues
- •8.6.1 How to Present the Information
- •8.6.2 When to Present the Information
- •8.6.3 What Information to Present
- •8.7 Test Procedures and Results
- •8.8 Case Study in Commercialisation
- •8.8.1 Understanding the Value of the Technology
- •8.8.2 Limitations of the Technology
- •8.8.3 Ongoing Development
- •8.9 Chapter Summary
- •Questions
- •Projects
- •References
- •9 Electronic Travel Aids: An Assessment
- •Learning Objectives
- •9.1 Introduction
- •9.2 Why Do an Assessment?
- •9.3 Methodologies for Assessments of Electronic Travel Aids
- •9.3.1 Eliciting User Requirements
- •9.3.2 Developing a User Requirements Specification and Heuristic Evaluation
- •9.3.3 Hands-on Assessments
- •9.3.4 Methodology Used for Assessments in this Chapter
- •9.4 Modern-day Electronic Travel Aids
- •9.4.1 The Distinction Between Mobility and Navigation Aids
- •9.4.2 The Distinction Between Primary and Secondary Aids
- •9.4.3 User Requirements: Mobility and Navigation Aids
- •9.4.4 Mobility Aids
- •9.4.5 Mobility Aids: Have They Solved the Mobility Challenge?
- •9.4.6 Navigation Aids
- •9.4.7 Navigation Aids: Have They Solved the Navigation Challenge?
- •9.5 Training
- •9.6 Chapter Summary and Conclusions
- •Questions
- •Projects
- •References
- •10 Accessible Environments
- •Learning Objectives
- •10.1 Introduction
- •10.1.1 Legislative and Regulatory Framework
- •10.1.2 Accessible Environments: An Overview
- •10.1.3 Principles for the Design of Accessible Environments
- •10.2 Physical Environments: The Streetscape
- •10.2.1 Pavements and Pathways
- •10.2.2 Road Crossings
- •10.2.3 Bollards and Street Furniture
- •10.3 Physical Environments: Buildings
- •10.3.1 General Exterior Issues
- •10.3.2 General Interior Issues
- •10.3.4 Signs and Notices
- •10.3.5 Interior Building Services
- •10.4 Environmental Information and Navigation Technologies
- •10.4.1 Audio Information System: General Issues
- •10.4.2 Some Technologies for Environmental Information Systems
- •10.5 Accessible Public Transport
- •10.5.1 Accessible Public Transportation: Design Issues
- •10.6 Chapter Summary
- •Questions
- •Projects
- •References
- •11 Accessible Bus System: A Bluetooth Application
- •Learning Objectives
- •11.1 Introduction
- •11.2 Bluetooth Fundamentals
- •11.2.1 Brief History of Bluetooth
- •11.2.2 Bluetooth Power Class
- •11.2.3 Protocol Stack
- •11.2.4 Bluetooth Profile
- •11.2.5 Piconet
- •11.3 Design Issues
- •11.3.1 System Architecture
- •11.3.2 Hardware Requirements
- •11.3.3 Software Requirements
- •11.4 Developmental Issues
- •11.4.1 Bluetooth Server
- •11.4.2 Bluetooth Client (Mobile Device)
- •11.4.3 User Interface
- •11.5 Commercialisation Issues
- •11.6 Chapter Summary
- •Questions
- •Projects
- •References
- •12 Accessible Information: An Overview
- •Learning Objectives
- •12.1 Introduction
- •12.2 Low Vision Aids
- •12.2.1 Basic Principles
- •12.3 Low Vision Assistive Technology Systems
- •12.3.1 Large Print
- •12.3.2 Closed Circuit Television Systems
- •12.3.3 Video Magnifiers
- •12.3.4 Telescopic Assistive Systems
- •12.4 Audio-transcription of Printed Information
- •12.4.1 Stand-alone Reading Systems
- •12.4.2 Read IT Project
- •12.5 Tactile Access to Information
- •12.5.1 Braille
- •12.5.2 Moon
- •12.5.3 Braille Devices
- •12.6 Accessible Computer Systems
- •12.6.1 Input Devices
- •12.6.2 Output Devices
- •12.6.3 Computer-based Reading Systems
- •12.6.4 Accessible Portable Computers
- •12.7 Accessible Internet
- •12.7.1 World Wide Web Guidelines
- •12.7.2 Guidelines for Web Authoring Tools
- •12.7.3 Accessible Adobe Portable Document Format (PDF) Documents
- •12.7.4 Bobby Approval
- •12.8 Telecommunications
- •12.8.1 Voice Dialling General Principles
- •12.8.2 Talking Caller ID
- •12.8.3 Mobile Telephones
- •12.9 Chapter Summary
- •Questions
- •Projects
- •References
- •13 Screen Readers and Screen Magnifiers
- •Learning Objectives
- •13.1 Introduction
- •13.2 Overview of Chapter
- •13.3 Interacting with a Graphical User Interface
- •13.4 Screen Magnifiers
- •13.4.1 Overview
- •13.4.2 Magnification Modes
- •13.4.3 Other Interface Considerations
- •13.4.4 The Architecture and Implementation of Screen Magnifiers
- •13.5 Screen Readers
- •13.5.1 Overview
- •13.5.2 The Architecture and Implementation of a Screen Reader
- •13.5.3 Using a Braille Display
- •13.5.4 User Interface Issues
- •13.6 Hybrid Screen Reader Magnifiers
- •13.7 Self-magnifying Applications
- •13.8 Self-voicing Applications
- •13.9 Application Adaptors
- •13.10 Chapter Summary
- •Questions
- •Projects
- •References
- •14 Speech, Text and Braille Conversion Technology
- •Learning Objectives
- •14.1 Introduction
- •14.1.1 Introducing Mode Conversion
- •14.1.2 Outline of the Chapter
- •14.2 Prerequisites for Speech and Text Conversion Technology
- •14.2.1 The Spectral Structure of Speech
- •14.2.2 The Hierarchical Structure of Spoken Language
- •14.2.3 Prosody
- •14.3 Speech-to-text Conversion
- •14.3.1 Principles of Pattern Recognition
- •14.3.2 Principles of Speech Recognition
- •14.3.3 Equipment and Applications
- •14.4 Text-to-speech Conversion
- •14.4.1 Principles of Speech Production
- •14.4.2 Principles of Acoustical Synthesis
- •14.4.3 Equipment and Applications
- •14.5 Braille Conversion
- •14.5.1 Introduction
- •14.5.2 Text-to-Braille Conversion
- •14.5.3 Braille-to-text Conversion
- •14.6 Commercial Equipment and Applications
- •14.6.1 Speech vs Braille
- •14.6.2 Speech Output in Devices for Daily Life
- •14.6.3 Portable Text-based Devices
- •14.6.4 Access to Computers
- •14.6.5 Reading Machines
- •14.6.6 Access to Telecommunication Devices
- •14.7 Discussion and the Future Outlook
- •14.7.1 End-user Studies
- •14.7.2 Discussion and Issues Arising
- •14.7.3 Future Developments
- •Questions
- •Projects
- •References
- •15 Accessing Books and Documents
- •Learning Objectives
- •15.1 Introduction: The Challenge of Accessing the Printed Page
- •15.2 Basics of Optical Character Recognition Technology
- •15.2.1 Details of Optical Character Recognition Technology
- •15.2.2 Practical Issues with Optical Character Recognition Technology
- •15.3 Reading Systems
- •15.4 DAISY Technology
- •15.4.1 DAISY Full Audio Books
- •15.4.2 DAISY Full Text Books
- •15.4.3 DAISY and Other Formats
- •15.5 Players
- •15.6 Accessing Textbooks
- •15.7 Accessing Newspapers
- •15.8 Future Technology Developments
- •15.9 Chapter Summary and Conclusion
- •15.9.1 Chapter Summary
- •15.9.2 Conclusion
- •Questions
- •Projects
- •References
- •16 Designing Accessible Music Software for Print Impaired People
- •Learning Objectives
- •16.1 Introduction
- •16.1.1 Print Impairments
- •16.1.2 Music Notation
- •16.2 Overview of Accessible Music
- •16.2.1 Formats
- •16.2.2 Technical Aspects
- •16.3 Some Recent Initiatives and Projects
- •16.3.2 Play 2
- •16.3.3 Dancing Dots
- •16.3.4 Toccata
- •16.4 Problems to Be Overcome
- •16.4.1 A Content Processing Layer
- •16.4.2 Standardization of Accessible Music Technology
- •16.5 Unifying Accessible Design, Technology and Musical Content
- •16.5.1 Braille Music
- •16.5.2 Talking Music
- •16.6 Conclusions
- •16.6.1 Design for All or Accessibility from Scratch
- •16.6.2 Applying Design for All in Emerging Standards
- •16.6.3 Accessibility in Emerging Technology
- •Questions
- •Projects
- •References
- •17 Assistive Technology for Daily Living
- •Learning Objectives
- •17.1 Introduction
- •17.2 Personal Care
- •17.2.1 Labelling Systems
- •17.2.2 Healthcare Monitoring
- •17.3 Time-keeping, Alarms and Alerting
- •17.3.1 Time-keeping
- •17.3.2 Alarms and Alerting
- •17.4 Food Preparation and Consumption
- •17.4.1 Talking Kitchen Scales
- •17.4.2 Talking Measuring Jug
- •17.4.3 Liquid Level Indicator
- •17.4.4 Talking Microwave Oven
- •17.4.5 Talking Kitchen and Remote Thermometers
- •17.4.6 Braille Salt and Pepper Set
- •17.5 Environmental Control and Use of Appliances
- •17.5.1 Light Probes
- •17.5.2 Colour Probes
- •17.5.3 Talking and Tactile Thermometers and Barometers
- •17.5.4 Using Appliances
- •17.6 Money, Finance and Shopping
- •17.6.1 Mechanical Money Indicators
- •17.6.2 Electronic Money Identifiers
- •17.6.3 Electronic Purse
- •17.6.4 Automatic Teller Machines (ATMs)
- •17.7 Communications and Access to Information: Other Technologies
- •17.7.1 Information Kiosks and Other Self-service Systems
- •17.7.2 Using Smart Cards
- •17.7.3 EZ Access®
- •17.8 Chapter Summary
- •Questions
- •Projects
- •References
- •Learning Objectives
- •18.1 Introduction
- •18.2 Education: Learning and Teaching
- •18.2.1 Accessing Educational Processes and Approaches
- •18.2.2 Educational Technologies, Devices and Tools
- •18.3 Employment
- •18.3.1 Professional and Person-centred
- •18.3.2 Scientific and Technical
- •18.3.3 Administrative and Secretarial
- •18.3.4 Skilled and Non-skilled (Manual) Trades
- •18.3.5 Working Outside
- •18.4 Recreational Activities
- •18.4.1 Accessing the Visual, Audio and Performing Arts
- •18.4.2 Games, Puzzles, Toys and Collecting
- •18.4.3 Holidays and Visits: Museums, Galleries and Heritage Sites
- •18.4.4 Sports and Outdoor Activities
- •18.4.5 DIY, Art and Craft Activities
- •18.5 Chapter Summary
- •Questions
- •Projects
- •References
- •Biographical Sketches of the Contributors
- •Index
16 Designing Accessible Music Software for Print Impaired People
16.2 Overview of Accessible Music
This section gives an overview of the instruments and tools that are available for producing accessible music and the methods that are currently used.
From a production logistics point of view, providing accessible solutions for print impaired people presents several problems. The accessible market is a niche market, so the problem is both one of logistics and one of design. Solutions are primarily designed with the mainstream market in mind, and only once those user requirements have been adequately met are secondary user needs considered. Because the accessibility requirements are treated as a secondary concern or an afterthought, the result at this stage is very often a piggyback solution. This creates a very poor design environment and fails to incorporate the basic ideas of Design and Accessibility for All (http://www.design-for-all.org/). The original software is usually designed with robust, modern design methodologies, yet only a quick fix is produced for the niche market.
One of the most important requirements of musical presentation in Braille or Talking Music formats for print impaired users is ease of comprehension of the context of the musical material. The information needed to deduce context must be clearly perceivable: the end-user must be able to establish their location in the musical material and so understand the function of the material at hand. The end-user must not be overloaded with redundant information, but must at the same time be given enough musical material to be able to deduce its function.
16.2.1 Formats
16.2.1.1 Braille Music
Ever since Louis Braille invented his system for representing music, blind musicians have been able to obtain scores in the Braille music format. There are international guidelines for Braille music notation (Krolick 1996). Materials in Braille music make up the largest portion of the available alternative formats, and include the standard repertoire for most instruments, vocal and choral music, some popular music, librettos, textbooks, instructional method books, and music periodicals. However, Braille music is produced by a relatively small number of institutions throughout the world.
A Braille model built on notions of BrailleCells and sequences of BrailleCells is used to codify musical aspects. The protocols of these coding rules are not standardized, and some serious problems remain.
BrailleMusic is a sequential protocol with a defined grammar, much like serial communication through an RS-232 serial interface. Starting from a graphical score, this means that a great amount of parallel data has to be ‘packed’ into sequential packages with a consistent structure. This can be achieved using the WedelMusic Accessible Music Module to produce BrailleMusic (see Section 16.2.1.1.1).
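The ‘packing’ of parallel score data into a sequential stream can be illustrated with a small sketch. The data structures and token names below are invented for illustration only and are far simpler than the actual WedelMusic encoding rules:

```python
# Illustrative sketch (hypothetical data and token names): the parallel
# voices of each measure are flattened into one sequential token stream,
# as a BrailleMusic transcription must do.

def pack_score(measures):
    """measures: list of dicts mapping a voice name to its note tokens."""
    stream = []
    for i, measure in enumerate(measures, start=1):
        stream.append(f"MEASURE {i}")
        # Parallel voices are serialised one after another, separated
        # by an 'in-accord' style delimiter.
        for v, (_voice, notes) in enumerate(sorted(measure.items())):
            if v > 0:
                stream.append("IN-ACCORD")
            stream.extend(notes)
    return stream

score = [{"upper": ["g2-half", "a2-quarter"],
          "lower": ["d4-quarter", "g3-quarter"]}]
print(pack_score(score))
```

The essential point is that information which is simultaneous on the page (the two voices of a measure) must be given a fixed, grammatical order in the output stream.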
16.2.1.1.1 WedelMusic
The WedelMusic project (http://www.wedelmusic.org), which ran from 2001 to 2003, provided a fully integrated solution that aimed at allowing publishers and consumers to manage interactive music: that is, music which can be manipulated (arranged, transposed, modified, reformatted, printed, etc.). This is done within a secure environment. The key components are a unified XML-based format for modelling music; reliable mechanisms for protecting music in symbolic, image and audio formats; and a full set of tools for building, converting, storing and distributing music on the Internet.
The interface for visually impaired people enabled blind musicians to access and share the facilities of the WedelMusic system in the same manner as sighted people. The implementation used many state-of-the-art as well as conventional technologies, such as speech synthesis from text (widely known as TTS) and Braille printing, alongside technologies developed specifically by the WedelMusic project. The key element of the interface for blind people was the speech engine, based on Microsoft SAPI4 components (http://www.microsoft.com). A screen reader program compensates for the lack of visual feedback by conveying audible feedback to the user; for visual feedback to be fully and efficiently replaced, every activity is reported to the user. A Braille printing module was also developed to ensure that Braille music output is easily available, in the format specified by the end-user. This development was important because it gives visually impaired people far greater access to reading and creating music scores.
An example of a first measure of a piece of music produced using the WedelMusic Accessible Music module is shown in Figure 16.1.
The BrailleMusic transcription can be described using Figure 16.2.
An example of the second measure after a navigation action by the end user to move to the next measure is illustrated in Figure 16.3.
Since the measures are represented as separate though associated objects, one choice is to interpret only these measures. In other words, the perceived context can be limited to the current measure alone. Invoking the Braille interpreter on higher-level musical collections permits BrailleMusic interpretations of higher-level contexts. A simple example of such usage would be interpreting a page of BrailleMusic.
Figure 16.1. Navigation action through a musical score with additional real-time output to BrailleMusic
Figure 16.2. Measure 1, BrailleMusic explained
Figure 16.3. BrailleMusic interpretation of the second measure. Note that the key information is missing because the key has not changed: in this particular set of interpretation settings, only key changes are presented
Figure 16.4. Real-time generation of a higher-level context description of a piece of music (in this case a page of measures)
An example of a page containing measures of a piece of music produced after selecting the appropriate menu item is shown in Figures 16.4 and 16.5.
The lower part preview of the BrailleMusic generated in this particular case is explained in Figure 16.6.
Figure 16.5. Measure 2, BrailleMusic explained
Figure 16.6. Higher-level musical information represented
16.2.1.2 Talking Music
This section firstly gives a general explanation of Talking Music followed by an explanation of an implementation of Talking Music using DAISY.
Reading Braille music is a complex task and, for people who become visually impaired beyond the age of about 20, learning Braille itself is extremely difficult. There is also the problem of finding a suitable teacher of Braille music, which can be difficult in some countries. Braille music can be seen as a specific dialect of Braille, so in theory anyone who can read Braille can learn to read Braille music. For those who do learn it, there are often significant delays in receiving music scores, as the highly specialized production process is both time-consuming and expensive. Indeed, in many European countries the service either does not exist or has been discontinued, for reasons of cost or a drop in demand caused by fewer people learning Braille music.
One of the most popular alternatives to Braille music is Talking Music. Talking Music is presented as a played fragment of music, a few bars long, together with a spoken description of that fragment. The spoken description conveys the musical information in language familiar to musicians (e.g. fourth octave E quarter note). The musical examples are now produced automatically by software and, as far as possible, as a non-interpreted version of the music: all notes are played at the same volume and velocity, and all notes of a given notated duration last exactly the same time. The musical example provides the user with a context for the Talking material, making it easier to understand and play. Being non-interpreted means there is as little expression and interpretation in the playback as possible, which affords the Talking Music user the same interpretative freedom over the musical content as a sighted user.
The most important piece of information is the name of the note; all other information should be centred around it. Some additional musical information should be placed before the note name, and some after. The guiding principles are how the musical information will be processed, the duration of the musical information, and its importance. For instance, the octave a note belongs to must be indicated before the note name is spoken, so that the listener can prepare the motor movements needed to play it. As an illustration of the second principle, when a dynamic sign and a bow occur at the same position in the score, the one with the longer duration should be mentioned first. Third, not all information is equally important to everyone: fingering, for example, is additional information, and is mentioned after the note and its duration.
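These ordering principles can be sketched as a small function. The field names and output wording below are invented for illustration and are not the DEDICON production rules:

```python
# Sketch of the word-order rules described above: octave before the
# note name (allowing motor preparation), duration after it, and
# low-priority extras such as fingering last. All names are invented.

def spoken_note(octave, name, duration, fingering=None):
    parts = [f"{octave} octave", name, duration]
    if fingering is not None:
        parts.append(f"fingering {fingering}")  # least important: last
    return " ".join(parts) + "."

print(spoken_note("fourth", "e", "quarter"))            # fourth octave e quarter.
print(spoken_note("second", "g", "half", fingering=2))  # second octave g half fingering 2.
```

The fixed ordering matters because speech is strictly serial: the listener cannot glance back, so each piece of information must arrive when it is most useful.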
Naturally, it is preferable not to describe every facet of every note, as some data are implicit. Having established a basic set of rules within DEDICON (formerly FNB, Dutch Federation of Libraries for the Blind) with several users, it was then necessary to consider ways of limiting the volume of information without losing any critical data. This led to the use of abbreviations. For example, the octave information does not need to be included for every note. If the interval between two consecutive notes is a third or less, then the octave information can be excluded.
Conversely, if the following note is a sixth or more above the current note, octave information has to be included. If the following note is a fourth or fifth above the current note and the notes belong to different octaves, octave information also has to be included. The abbreviations adopted were the same as those employed in Braille music.
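The interval rules above can be expressed as a small predicate. This is only a sketch: intervals here are assumed to be given as generic interval numbers (second = 2, third = 3, and so on), which is an assumption about the representation rather than part of the rules as stated:

```python
# Sketch of the octave-marking rule described in the text: whether the
# octave of the next note must be stated, given the melodic interval
# from the current note.

def needs_octave_mark(interval, octave_changes):
    """interval: generic interval size (unison=1 ... octave=8).
    octave_changes: True if the two notes lie in different octaves."""
    if interval <= 3:        # a third or less: octave never marked
        return False
    if interval >= 6:        # a sixth or more: octave always marked
        return True
    return octave_changes    # fourth or fifth: marked only on octave change

assert needs_octave_mark(3, True) is False
assert needs_octave_mark(6, False) is True
assert needs_octave_mark(4, True) is True
assert needs_octave_mark(5, False) is False
```

The middle case is the interesting one: for fourths and fifths the rule cannot be decided from the interval alone, which is why both the interval and the octave relationship appear as inputs.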
16.2.1.2.1 The DAISY Standard
Talking Music scores make use of the DAISY standard for Talking Books. The DAISY consortium (http://www.daisy.org) developed the DAISY format DAISY 3.0/NISO z39.86 (http://www.daisy.org/z3986/default.asp), which allows data to have an underlying structure for navigation reasons. The main application for this technology is in Talking Books for print impaired people. There are now hardware players for navigating DAISY structures and these are often supplied free of charge to visually impaired users in many countries.
DAISY is a key international standard to incorporate in accessible formats. Not only is it a structure with which print impaired users are familiar, but it is also sufficiently well established to be supported by dedicated hardware. A DAISY book contains many files, which are synchronised to provide both the information and a means of navigating it. A simplified view of the DAISY format is shown in Figure 16.7.
The DAISY format can be consumed in several ways. Hardware players are available with simple interfaces that allow users to navigate from sentence to sentence, chapter to chapter or even book to book; most visually impaired users in Europe have access to these. Other users prefer software players, and various companies produce DAISY playback software.
A DAISY talking book is made up of fragments, which split the book into a structure. Each fragment contains an .mp3 file and a .smil file. The .mp3 file provides the audio information, while the .smil file provides the structural and metadata information for that particular fragment, essentially describing its content. Each fragment has this information, and the .smil files can communicate with each other to provide different levels of abstraction. The collection of files is controlled and synchronised by a master .smil file and an ncc.html file, which provides the hierarchy for the structure and is the file read by the DAISY player or software.
Figure 16.7. Basic structure of a DAISY Ebook
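The file relationships just described can be modelled roughly as follows. This is a sketch of the relationships only, with illustrative file names; a real DAISY book involves full SMIL and HTML files with synchronisation data:

```python
# Rough model of the files in a DAISY talking book: each fragment pairs
# an audio file with a .smil file, while ncc.html holds the navigation
# hierarchy and a master .smil file synchronises the fragments.
from dataclasses import dataclass, field

@dataclass
class Fragment:
    audio: str   # e.g. "bar01.mp3"  - the spoken/played content
    smil: str    # e.g. "bar01.smil" - structure and metadata

@dataclass
class DaisyBook:
    ncc: str = "ncc.html"              # master navigation file
    master_smil: str = "master.smil"   # synchronises the fragments
    fragments: list = field(default_factory=list)

book = DaisyBook(fragments=[Fragment("bar01.mp3", "bar01.smil"),
                            Fragment("bar02.mp3", "bar02.smil")])
print(len(book.fragments), book.ncc)
```

The point of the pairing is that audio and structure stay separable: a player can follow ncc.html to jump between fragments without decoding any audio it does not need.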
16.2.1.2.2 Talking Music Output
One of the guiding principles of universal design and the Design for All guidelines (www.design-for-all.org) is that print disabled people should have access to the same information as sighted people, with only the format in which the information is presented changing. For Talking Music, this means that everything on the page of a music score must be represented in a Talking format. Furthermore, this Talking format must be applicable to all types of music and instruments. By way of illustration, Figure 16.8 contains an excerpt from a Bach minuet.
The resulting Talking Music for measure 1 of the simple example above is shown below:
The title is: Minuet in G. Composer: Bach.
This piece contains 8 bars and consists of 2 sections. The piece is read in eighth notes, except where otherwise indicated.
Key Signature: 1 sharp. Time Signature: 3 fourths time. Section 1.
Bar 1.
second octave g half. a quarter.
in agreement with fourth octave d quarter. third octave g.
a.
b.
c.
The description above shows the information within the Talking Music book taken from a DAISY software player. The description starts with a header which delivers the document data about the score (composer, title, production notes), a description of the score (in terms of length and content), and then the key and time signatures. The description then continues to Section 1, where it describes the first measure note by note. The lowest voice is described first.
Figure 16.8. Bach excerpt – Minuet in G – BWV Anh. 114
This information exists as audio files; the text description above is used for production and explanatory purposes. The audio files are structured so that the user can easily navigate from one section of the score to another in order to build up an idea of what the score contains. The tree structure of a more complicated score, taken from a DAISY software player, is shown in Figure 16.9.
A Talking Music book exists as a collection of files controlled by an .ncc file, but to describe the structure it helps to think of a folder hierarchy. At the first level, the Talking Music book contains folders for the header information and a folder for each section. The header folders can be navigated vertically in order to skip to the desired piece of information. There is then a MIDI representation of the entire score, which provides a computer-generated playback of the music; this is usually played at the standard tempo, but the tempo can be altered in the production process if desired. Each section then contains a folder with the MIDI for that section (usually slower, depending on the complexity of the piece), followed by a folder for each measure, which in turn contains a folder for each voice. Below the first level, the score can be navigated both vertically and horizontally: the user can jump from section to section, measure to measure or voice to voice. This affords the user the luxury of relating the information to its context in a way similar to a sighted user, who can view the score at several levels of abstraction.
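The two navigation axes just described — vertical (descending into sections, measures and voices) and horizontal (moving to the next sibling at the same level) — can be sketched with a simple tree. The structure below is hypothetical and stands in for the folder hierarchy, not the actual file layout:

```python
# Sketch of vertical/horizontal navigation over a Talking Music score.

class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

score = Node("score", [
    Node("section 1", [Node("measure 1", [Node("voice 1"), Node("voice 2")]),
                       Node("measure 2", [Node("voice 1")])]),
    Node("section 2", [Node("measure 3", [Node("voice 1")])]),
])

def vertical(node):
    """Descend one level: first child, or None at a leaf (a voice)."""
    return node.children[0] if node.children else None

def horizontal(parent, index):
    """Next sibling at the same level, or None at the end."""
    sibs = parent.children
    return sibs[index + 1] if index + 1 < len(sibs) else None

assert vertical(score).label == "section 1"
assert horizontal(score, 0).label == "section 2"
assert horizontal(score.children[0], 0).label == "measure 2"
```

Separating the two moves is what lets a listener skim at a coarse level (section to section) and only descend where detail is wanted, mimicking how a sighted reader scans a page.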
Figure 16.9. Talking Music Score structure and an excerpt from the score
16.2.1.3 Large (Modified) Print Music
A large number of the users of accessible formats are not blind. One popular accessible format for many years has been Large Print Music (or modified stave notation). Because it is based on traditional common western music notation (CWMN), it is very widely used by people with diminished sight who could once read CWMN.
The concept is simple: the music notation is made larger than normal, with a scale factor appropriate to the quality of the user’s eyesight. The common way to do this is to take the A4 score and enlarge it to A3 using a photocopier. With the increasingly common use of computers as a composer’s or typesetter’s tool, the score becomes more adaptable to accessible solutions. Returning to the previous example of the Bach minuet, the zoom level in a music notation package can be increased; an example is given in Figure 16.10.
This is sufficient in many cases, but this resolution is the highest achievable before the pixels become visible and the image becomes less useful. When using music notation editing software, it is possible to optimise the score for large print versions. The notes can be typeset so that the durations are placed evenly across the measure, and there should be little or no overlapping information. As with all accessible solutions, consistency should exist wherever possible: if dynamics appear below the measure, they should continue to appear below the measure for the whole score. These guidelines make the score easier to view and so help the user in understanding the different elements of the data.
Large Print Music notation can benefit from the information digitisation process. It is important from an accessibility point of view that the traditional score is not modified, but that the information is presented in different ways. Interesting solutions would be possible if modern extensible methods were incorporated at the graphic rendering stage of music notation production. One possibility in this field is the use of scalable vector graphics (SVG; http://www.w3.org/TR/SVG/), where an XML-based description (http://www.w3.org/XML/) of the graphic is used; because the representation is vector-based, it can in theory be rendered at any size. This extensible modularisation of the information also allows the different elements in the score to be represented in different ways. This is especially important for dyslexic users, as the variance in their information-processing problems is greater, and personalised accessible formats may be a viable option.
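As a minimal illustration of why SVG suits large print, a vector description can be re-rendered at any viewport size while the drawing coordinates stay fixed, so no pixelation occurs. The note-head geometry below is invented for the example:

```python
# Sketch: the same SVG vector description rendered at any size by
# changing only width/height while keeping the viewBox fixed, so a
# large print score scales without pixelation. Geometry is illustrative.

def notehead_svg(size_px):
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{size_px}" height="{size_px}" viewBox="0 0 10 10">'
            f'<ellipse cx="5" cy="5" rx="3" ry="2" '
            f'transform="rotate(-20 5 5)"/></svg>')

small = notehead_svg(24)    # screen size
large = notehead_svg(480)   # large print: 20x bigger, still sharp
```

Contrast this with the photocopier approach described earlier, where every enlargement step degrades the image: here the enlargement is just a different viewport onto the same description.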
Figure 16.10. Example of large print music notation
