
- •Preface
- •Introduction
- •1.1 Spatial coordinate systems
- •1.2 Sound fields and their physical characteristics
- •1.2.1 Free-field and sound waves generated by simple sound sources
- •1.2.2 Reflections from boundaries
- •1.2.3 Directivity of sound source radiation
- •1.2.4 Statistical analysis of acoustics in an enclosed space
- •1.2.5 Principle of sound receivers
- •1.3 Auditory system and perception
- •1.3.1 Auditory system and its functions
- •1.3.2 Hearing threshold and loudness
- •1.3.3 Masking
- •1.3.4 Critical band and auditory filter
- •1.4 Artificial head models and binaural signals
- •1.4.1 Artificial head models
- •1.4.2 Binaural signals and head-related transfer functions
- •1.5 Outline of spatial hearing
- •1.6 Localization cues for a single sound source
- •1.6.1 Interaural time difference
- •1.6.2 Interaural level difference
- •1.6.3 Cone of confusion and head movement
- •1.6.4 Spectral cues
- •1.6.5 Discussion on directional localization cues
- •1.6.6 Auditory distance perception
- •1.7 Summing localization and spatial hearing with multiple sources
- •1.7.1 Summing localization with two sound sources
- •1.7.2 The precedence effect
- •1.7.3 Spatial auditory perceptions with partially correlated and uncorrelated source signals
- •1.7.4 Auditory scene analysis and spatial hearing
- •1.7.5 Cocktail party effect
- •1.8 Room reflections and auditory spatial impression
- •1.8.1 Auditory spatial impression
- •1.8.2 Sound field-related measures and auditory spatial impression
- •1.8.3 Binaural-related measures and auditory spatial impression
- •1.9.1 Basic principle of spatial sound
- •1.9.2 Classification of spatial sound
- •1.9.3 Developments and applications of spatial sound
- •1.10 Summary
- •2.1 Basic principle of a two-channel stereophonic sound
- •2.1.1 Interchannel level difference and summing localization equation
- •2.1.2 Effect of frequency
- •2.1.3 Effect of interchannel phase difference
- •2.1.4 Virtual source created by interchannel time difference
- •2.1.5 Limitation of two-channel stereophonic sound
- •2.2.1 XY microphone pair
- •2.2.2 MS transformation and the MS microphone pair
- •2.2.3 Spaced microphone technique
- •2.2.4 Near-coincident microphone technique
- •2.2.5 Spot microphone and pan-pot technique
- •2.2.6 Discussion on microphone and signal simulation techniques for two-channel stereophonic sound
- •2.3 Upmixing and downmixing between two-channel stereophonic and mono signals
- •2.4 Two-channel stereophonic reproduction
- •2.4.1 Standard loudspeaker configuration of two-channel stereophonic sound
- •2.4.2 Influence of front-back deviation of the head
- •2.5 Summary
- •3.1 Physical and psychoacoustic principles of multichannel surround sound
- •3.2 Summing localization in multichannel horizontal surround sound
- •3.2.1 Summing localization equations for multiple horizontal loudspeakers
- •3.2.2 Analysis of the velocity and energy localization vectors of the superposed sound field
- •3.2.3 Discussion on horizontal summing localization equations
- •3.3 Multiple loudspeakers with partly correlated and low-correlated signals
- •3.4 Summary
- •4.1 Discrete quadraphone
- •4.1.1 Outline of the quadraphone
- •4.1.2 Discrete quadraphone with pair-wise amplitude panning
- •4.1.3 Discrete quadraphone with the first-order sound field signal mixing
- •4.1.4 Some discussions on discrete quadraphones
- •4.2 Other horizontal surround sounds with regular loudspeaker configurations
- •4.2.1 Six-channel reproduction with pair-wise amplitude panning
- •4.2.2 The first-order sound field signal mixing and reproduction with M ≥ 3 loudspeakers
- •4.3 Transformation of horizontal sound field signals and Ambisonics
- •4.3.1 Transformation of the first-order horizontal sound field signals
- •4.3.2 The first-order horizontal Ambisonics
- •4.3.3 The higher-order horizontal Ambisonics
- •4.3.4 Discussion and implementation of the horizontal Ambisonics
- •4.4 Summary
- •5.1 Outline of surround sounds with accompanying picture and general uses
- •5.2 5.1-Channel surround sound and its signal mixing analysis
- •5.2.1 Outline of 5.1-channel surround sound
- •5.2.2 Pair-wise amplitude panning for 5.1-channel surround sound
- •5.2.3 Global Ambisonic-like signal mixing for 5.1-channel sound
- •5.2.4 Optimization of three frontal loudspeaker signals and local Ambisonic-like signal mixing
- •5.2.5 Time panning for 5.1-channel surround sound
- •5.3 Other multichannel horizontal surround sounds
- •5.4 Low-frequency effect channel
- •5.5 Summary
- •6.1 Summing localization in multichannel spatial surround sound
- •6.1.1 Summing localization equations for spatial multiple loudspeaker configurations
- •6.1.2 Velocity and energy localization vector analysis for multichannel spatial surround sound
- •6.1.3 Discussion on spatial summing localization equations
- •6.1.4 Relationship with the horizontal summing localization equations
- •6.2 Signal mixing methods for a pair of vertical loudspeakers in the median and sagittal plane
- •6.3 Vector base amplitude panning
- •6.4 Spatial Ambisonic signal mixing and reproduction
- •6.4.1 Principle of spatial Ambisonics
- •6.4.2 Some examples of the first-order spatial Ambisonics
- •6.4.4 Recreating a top virtual source with a horizontal loudspeaker arrangement and Ambisonic signal mixing
- •6.5 Advanced multichannel spatial surround sounds and problems
- •6.5.1 Some advanced multichannel spatial surround sound techniques and systems
- •6.5.2 Object-based spatial sound
- •6.5.3 Some problems related to multichannel spatial surround sound
- •6.6 Summary
- •7.1 Basic considerations on the microphone and signal simulation techniques for multichannel sounds
- •7.2 Microphone techniques for 5.1-channel sound recording
- •7.2.1 Outline of microphone techniques for 5.1-channel sound recording
- •7.2.2 Main microphone techniques for 5.1-channel sound recording
- •7.2.3 Microphone techniques for the recording of three frontal channels
- •7.2.4 Microphone techniques for ambience recording and combination with frontal localization information recording
- •7.2.5 Stereophonic plus center channel recording
- •7.3 Microphone techniques for other multichannel sounds
- •7.3.1 Microphone techniques for other discrete multichannel sounds
- •7.3.2 Microphone techniques for Ambisonic recording
- •7.4 Simulation of localization signals for multichannel sounds
- •7.4.1 Methods of the simulation of directional localization signals
- •7.4.2 Simulation of virtual source distance and extension
- •7.4.3 Simulation of a moving virtual source
- •7.5 Simulation of reflections for stereophonic and multichannel sounds
- •7.5.1 Delay algorithms and discrete reflection simulation
- •7.5.2 IIR filter algorithm of late reverberation
- •7.5.3 FIR, hybrid FIR, and recursive filter algorithms of late reverberation
- •7.5.4 Algorithms of audio signal decorrelation
- •7.5.5 Simulation of room reflections based on physical measurement and calculation
- •7.6 Directional audio coding and multichannel sound signal synthesis
- •7.7 Summary
- •8.1 Matrix surround sound
- •8.1.1 Matrix quadraphone
- •8.1.2 Dolby Surround system
- •8.1.3 Dolby Pro-Logic decoding technique
- •8.1.4 Some developments on matrix surround sound and logic decoding techniques
- •8.2 Downmixing of multichannel sound signals
- •8.3 Upmixing of multichannel sound signals
- •8.3.1 Some considerations in upmixing
- •8.3.2 Simple upmixing methods for front-channel signals
- •8.3.3 Simple methods for ambient component separation
- •8.3.4 Model and statistical characteristics of two-channel stereophonic signals
- •8.3.5 A scale-signal-based algorithm for upmixing
- •8.3.6 Upmixing algorithm based on principal component analysis
- •8.3.7 Algorithm based on the least mean square error for upmixing
- •8.3.8 Adaptive normalized algorithm based on the least mean square for upmixing
- •8.3.9 Some advanced upmixing algorithms
- •8.4 Summary
- •9.1 Each order approximation of ideal reproduction and Ambisonics
- •9.1.1 Each order approximation of ideal horizontal reproduction
- •9.1.2 Each order approximation of ideal three-dimensional reproduction
- •9.2 General formulation of multichannel sound field reconstruction
- •9.2.1 General formulation of multichannel sound field reconstruction in the spatial domain
- •9.2.2 Formulation of spatial-spectral domain analysis of circular secondary source array
- •9.2.3 Formulation of spatial-spectral domain analysis for a secondary source array on spherical surface
- •9.3 Spatial-spectral domain analysis and driving signals of Ambisonics
- •9.3.1 Reconstructed sound field of horizontal Ambisonics
- •9.3.2 Reconstructed sound field of spatial Ambisonics
- •9.3.3 Mixed-order Ambisonics
- •9.3.4 Near-field compensated higher-order Ambisonics
- •9.3.5 Ambisonic encoding of complex source information
- •9.3.6 Some special applications of spatial-spectral domain analysis of Ambisonics
- •9.4 Some problems related to Ambisonics
- •9.4.1 Secondary source array and stability of Ambisonics
- •9.4.2 Spatial transformation of Ambisonic sound field
- •9.5 Error analysis of Ambisonic-reconstructed sound field
- •9.5.1 Integral error of Ambisonic-reconstructed wavefront
- •9.5.2 Discrete secondary source array and spatial-spectral aliasing error in Ambisonics
- •9.6 Multichannel reconstructed sound field analysis in the spatial domain
- •9.6.1 Basic method for analysis in the spatial domain
- •9.6.2 Minimizing error in reconstructed sound field and summing localization equation
- •9.6.3 Multiple receiver position matching method and its relation to the mode-matching method
- •9.7 Listening room reflection compensation in multichannel sound reproduction
- •9.8 Microphone array for multichannel sound field signal recording
- •9.8.1 Circular microphone array for horizontal Ambisonic recording
- •9.8.2 Spherical microphone array for spatial Ambisonic recording
- •9.8.3 Discussion on microphone array recording
- •9.9 Summary
- •10.1 Basic principle and implementation of wave field synthesis
- •10.1.1 Kirchhoff–Helmholtz boundary integral and WFS
- •10.1.2 Simplification of the types of secondary sources
- •10.1.3 WFS in a horizontal plane with a linear array of secondary sources
- •10.1.4 Finite secondary source array and effect of spatial truncation
- •10.1.5 Discrete secondary source array and spatial aliasing
- •10.1.6 Some issues and related problems on WFS implementation
- •10.2 General theory of WFS
- •10.2.1 Green’s function of Helmholtz equation
- •10.2.2 General theory of three-dimensional WFS
- •10.2.3 General theory of two-dimensional WFS
- •10.2.4 Focused source in WFS
- •10.3 Analysis of WFS in the spatial-spectral domain
- •10.3.1 General formulation and analysis of WFS in the spatial-spectral domain
- •10.3.2 Analysis of the spatial aliasing in WFS
- •10.3.3 Spatial-spectral division method of WFS
- •10.4 Further discussion on sound field reconstruction
- •10.4.1 Comparison among various methods of sound field reconstruction
- •10.4.2 Further analysis of the relationship between acoustical holography and sound field reconstruction
- •10.4.3 Further analysis of the relationship between acoustical holography and Ambisonics
- •10.4.4 Comparison between WFS and Ambisonics
- •10.5 Equalization of WFS under nonideal conditions
- •10.6 Summary
- •11.1 Basic principles of binaural reproduction and virtual auditory display
- •11.1.1 Binaural recording and reproduction
- •11.1.2 Virtual auditory display
- •11.2 Acquisition of HRTFs
- •11.2.1 HRTF measurement
- •11.2.2 HRTF calculation
- •11.2.3 HRTF customization
- •11.3 Basic physical features of HRTFs
- •11.3.1 Time-domain features of far-field HRIRs
- •11.3.2 Frequency domain features of far-field HRTFs
- •11.3.3 Features of near-field HRTFs
- •11.4 HRTF-based filters for binaural synthesis
- •11.5 Spatial interpolation and decomposition of HRTFs
- •11.5.1 Directional interpolation of HRTFs
- •11.5.2 Spatial basis function decomposition and spatial sampling theorem of HRTFs
- •11.5.3 HRTF spatial interpolation and signal mixing for multichannel sound
- •11.5.4 Spectral shape basis function decomposition of HRTFs
- •11.6 Simplification of signal processing for binaural synthesis
- •11.6.1 Virtual loudspeaker-based algorithms
- •11.6.2 Basis function decomposition-based algorithms
- •11.7.1 Principle of headphone equalization
- •11.7.2 Some problems with binaural reproduction and VAD
- •11.8 Binaural reproduction through loudspeakers
- •11.8.1 Basic principle of binaural reproduction through loudspeakers
- •11.8.2 Virtual source distribution in two-front loudspeaker reproduction
- •11.8.3 Head movement and stability of virtual sources in Transaural reproduction
- •11.8.4 Timbre coloration and equalization in transaural reproduction
- •11.9 Virtual reproduction of stereophonic and multichannel surround sound
- •11.9.1 Binaural reproduction of stereophonic and multichannel sound through headphones
- •11.9.2 Stereophonic expansion and enhancement
- •11.9.3 Virtual reproduction of multichannel sound through loudspeakers
- •11.10.1 Binaural room modeling
- •11.10.2 Dynamic virtual auditory environments system
- •11.11 Summary
- •12.1 Physical analysis of binaural pressures in summing virtual source and auditory events
- •12.1.1 Evaluation of binaural pressures and localization cues
- •12.1.2 Method for summing localization analysis
- •12.1.3 Binaural pressure analysis of stereophonic and multichannel sound with amplitude panning
- •12.1.4 Analysis of summing localization with interchannel time difference
- •12.1.5 Analysis of summing localization at the off-central listening position
- •12.1.6 Analysis of interchannel correlation and spatial auditory sensations
- •12.2 Binaural auditory models and analysis of spatial sound reproduction
- •12.2.1 Analysis of lateral localization by using auditory models
- •12.2.2 Analysis of front-back and vertical localization by using a binaural auditory model
- •12.2.3 Binaural loudness models and analysis of the timbre of spatial sound reproduction
- •12.3 Binaural measurement system for assessing spatial sound reproduction
- •12.4 Summary
- •13.1 Analog audio storage and transmission
- •13.1.1 45°/45° Disk recording system
- •13.1.2 Analog magnetic tape audio recorder
- •13.1.3 Analog stereo broadcasting
- •13.2 Basic concepts of digital audio storage and transmission
- •13.3 Quantization noise and shaping
- •13.3.1 Signal-to-quantization noise ratio
- •13.3.2 Quantization noise shaping and 1-Bit DSD coding
- •13.4 Basic principle of digital audio compression and coding
- •13.4.1 Outline of digital audio compression and coding
- •13.4.2 Adaptive differential pulse-code modulation
- •13.4.3 Perceptual audio coding in the time-frequency domain
- •13.4.4 Vector quantization
- •13.4.5 Spatial audio coding
- •13.4.6 Spectral band replication
- •13.4.7 Entropy coding
- •13.4.8 Object-based audio coding
- •13.5 MPEG series of audio coding techniques and standards
- •13.5.1 MPEG-1 audio coding technique
- •13.5.2 MPEG-2 BC audio coding
- •13.5.3 MPEG-2 advanced audio coding
- •13.5.4 MPEG-4 audio coding
- •13.5.5 MPEG parametric coding of multichannel sound and unified speech and audio coding
- •13.5.6 MPEG-H 3D audio
- •13.6 Dolby series of coding techniques
- •13.6.1 Dolby digital coding technique
- •13.6.2 Some advanced Dolby coding techniques
- •13.7 DTS series of coding technique
- •13.8 MLP lossless coding technique
- •13.9 ATRAC technique
- •13.10 Audio video coding standard
- •13.11 Optical disks for audio storage
- •13.11.1 Structure, principle, and classification of optical disks
- •13.11.2 CD family and its audio formats
- •13.11.3 DVD family and its audio formats
- •13.11.4 SACD and its audio formats
- •13.11.5 BD and its audio formats
- •13.12 Digital radio and television broadcasting
- •13.12.1 Outline of digital radio and television broadcasting
- •13.12.2 Eureka-147 digital audio broadcasting
- •13.12.3 Digital radio mondiale
- •13.12.4 In-band on-channel digital audio broadcasting
- •13.12.5 Audio for digital television
- •13.13 Audio storage and transmission by personal computer
- •13.14 Summary
- •14.1 Outline of acoustic conditions and requirements for spatial sound intended for domestic reproduction
- •14.2 Acoustic consideration and design of listening rooms
- •14.3 Arrangement and characteristics of loudspeakers
- •14.3.1 Arrangement of the main loudspeakers in listening rooms
- •14.3.2 Characteristics of the main loudspeakers
- •14.3.3 Bass management and arrangement of subwoofers
- •14.4 Signal and listening level alignment
- •14.5 Standards and guidance for conditions of spatial sound reproduction
- •14.6 Headphones and binaural monitors of spatial sound reproduction
- •14.7 Acoustic conditions for cinema sound reproduction and monitoring
- •14.8 Summary
- •15.1 Outline of psychoacoustic and subjective assessment experiments
- •15.2 Contents and attributes for spatial sound assessment
- •15.3 Auditory comparison and discrimination experiment
- •15.3.1 Paradigms of auditory comparison and discrimination experiment
- •15.3.2 Examples of auditory comparison and discrimination experiment
- •15.4 Subjective assessment of small impairments in spatial sound systems
- •15.5 Subjective assessment of a spatial sound system with intermediate quality
- •15.6 Virtual source localization experiment
- •15.6.1 Basic methods for virtual source localization experiments
- •15.6.2 Preliminary analysis of the results of virtual source localization experiments
- •15.6.3 Some results of virtual source localization experiments
- •15.7 Summary
- •16.1.1 Application to commercial cinema and related problems
- •16.1.2 Applications to domestic reproduction and related problems
- •16.1.3 Applications to automobile audio
- •16.2.1 Applications to virtual reality
- •16.2.2 Applications to communication and information systems
- •16.2.3 Applications to multimedia
- •16.2.4 Applications to mobile and handheld devices
- •16.3 Applications to the scientific experiments of spatial hearing and psychoacoustics
- •16.4 Applications to sound field auralization
- •16.4.1 Auralization in room acoustics
- •16.4.2 Other applications of auralization technique
- •16.5 Applications to clinical medicine
- •16.6 Summary
- •References
- •Index

Spatial sound reproduction by wave field synthesis 489
the exterior radiation and thereby minimizes the influence of listening room reflections on the interior sound field (Betlehem and Poletti, 2014).
Chang and Jacobsen (2012) suggested using a circular double-layer array of secondary sources to control sound fields. Secondary sources with first-order directivity (a combination of monopole and dipole sources) are arranged in two concentric circular layers. The main axes of the secondary sources in the outer and inner layers point in the outward- and inward-normal directions of the circles, respectively. The driving signals of the secondary sources are derived by using the multiple receiver position matching and least-squares error methods similar to those in Section 9.6.3. From the viewpoint of the multipole expansion of a sound field, this array of secondary sources is closely related to the array of secondary monopole and dipole sources on a circle and is able to control the interior and exterior sound fields independently.
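As a rough numerical sketch of the multiple receiver position matching idea mentioned above (a least-squares solution as in Section 9.6.3, not the exact double-layer formulation of Chang and Jacobsen), the following code drives a single circular layer of secondary monopole line sources so that the target pressures at a set of interior receiver positions are matched in the least-squares sense. All geometry and wavenumber values are illustrative assumptions.

```python
import numpy as np
from scipy.special import hankel2

k = 1.0                  # wavenumber (illustrative, normalized units)
r0, M = 3.0, 32          # secondary source circle radius, number of sources
rr, O = 1.0, 24          # receiver circle radius, number of receivers
rS, thS = 6.0, 0.3       # target monopole line source outside the array

def g2d(x, y):
    """2D free-field Green's function -(j/4) H0^(2)(k|x - y|)."""
    return -0.25j * hankel2(0, k * np.linalg.norm(x - y, axis=-1))

th_sec = 2 * np.pi * np.arange(M) / M
sec = r0 * np.stack([np.cos(th_sec), np.sin(th_sec)], axis=1)
th_rec = 2 * np.pi * np.arange(O) / O
rec = rr * np.stack([np.cos(th_rec), np.sin(th_rec)], axis=1)
src = rS * np.array([np.cos(thS), np.sin(thS)])

# Transfer matrix from secondary sources to receivers, and target pressures
G = g2d(rec[:, None, :], sec[None, :, :])      # shape (O, M)
p_t = g2d(rec, src)                            # shape (O,)

# Least-squares driving signals matching the target at the receivers
E, *_ = np.linalg.lstsq(G, p_t, rcond=None)

resid = np.linalg.norm(G @ E - p_t) / np.linalg.norm(p_t)
```

Because the target field inside the array is smooth and the system is underdetermined here (M > O), the relative residual at the matching points is essentially at the numerical noise floor.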
Actually, the discussion in this section can be regarded as a kind of spatial multizone sound field reconstruction. Here, a two-dimensional space is divided into two sub-regions: one sub-region is inside the circular array, and the other is outside. By contrast, in Section 9.3.6, the region inside a circular array is divided into several sub-regions. The discussions in this section and Section 9.3.6 differ in the division of sub-regions.
10.4.3 Further analysis of the relationship between acoustical holography and Ambisonics
The relation between ideal acoustical holography and Ambisonics can be observed preliminarily from the discussion in Section 10.4.2. If the interior radiated sound field is controlled by an array of secondary monopole straight-line sources only, the driving signals of the secondary dipole straight-line sources vanish, i.e., Edip(θ′, f) = 0. In this case, Equation (10.4.3) is simplified into the general formulation of multichannel sound field reconstruction in Equation (9.2.1) or (9.2.7), and Equations (10.4.7) to (10.4.10) are equivalent to Equation (9.2.27).
A two-dimensional acoustical holography in a circular region with radius r0 is considered to further explore the relation between ideal acoustical holography and Ambisonics, and the reconstructed sound field is expressed in Equation (10.2.16). After the line integral along the circle is converted to an integral over the azimuth, Equation (10.2.16) becomes
$$
P(\mathbf{r},f)=\int_0^{2\pi}\left[G_{\mathrm{free}}^{2D}(\mathbf{r},\mathbf{r}',f)\,\frac{\partial P(\mathbf{r}',f)}{\partial n'}-P(\mathbf{r}',f)\,\frac{\partial G_{\mathrm{free}}^{2D}(\mathbf{r},\mathbf{r}',f)}{\partial n'}\right]r_0\,\mathrm{d}\theta'. \tag{10.4.17}
$$
The corresponding driving signals of secondary monopole and dipole sources are given in Equation (10.4.14).
If the target sound field is created by a monopole straight-line source with unit strength located at rS = (rS, θS) outside the circular array of secondary sources (rS > r0), then, similar to the case in Section 9.2.2, it is convenient to convert Equation (10.4.17) to the spatial-spectral domain. For this purpose, the target pressure P(r′, f) on the boundary and the Green's function G_free^2D(r, r′, f) are expanded as Bessel–Fourier series according to Equation (9.2.18):
$$
P(\mathbf{r}',f)=-\frac{j}{4}H_0\!\left(k\left|\mathbf{r}'-\mathbf{r}_S\right|\right)
=-\frac{j}{4}\left[J_0(kr_0)H_0(kr_S)+2\sum_{q=1}^{\infty}J_q(kr_0)H_q(kr_S)\cos q(\theta'-\theta_S)\right]. \tag{10.4.18}
$$
$$
G_{\mathrm{free}}^{2D}(\mathbf{r},\mathbf{r}',f)=-\frac{j}{4}H_0\!\left(k\left|\mathbf{r}-\mathbf{r}'\right|\right)
=-\frac{j}{4}\left[J_0(kr)H_0(kr_0)+2\sum_{q=1}^{\infty}J_q(kr)H_q(kr_0)\cos q(\theta-\theta')\right]. \tag{10.4.19}
$$
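The Bessel–Fourier expansions above can be checked numerically. The sketch below (assuming NumPy and SciPy are available; the values of k, r0, rS, and the angle difference are arbitrary test inputs, not values from the text) compares the Hankel function of the second kind evaluated at the actual source-to-boundary distance with its truncated series expansion, as in Equation (10.4.18):

```python
import numpy as np
from scipy.special import jv, hankel2

k, r0, rS = 1.0, 3.0, 6.0       # wavenumber, boundary radius, source radius
dth = 0.7                       # angle difference theta' - thetaS

# Left-hand side: H0^(2) of the true distance between the boundary point
# (r0, theta') and the source (rS, thetaS)
dist = np.sqrt(r0**2 + rS**2 - 2 * r0 * rS * np.cos(dth))
lhs = hankel2(0, k * dist)

# Right-hand side: Bessel-Fourier series truncated at order 59 (valid for
# rS > r0; the terms decay roughly like (r0/rS)**q)
q = np.arange(1, 60)
rhs = (jv(0, k * r0) * hankel2(0, k * rS)
       + 2 * np.sum(jv(q, k * r0) * hankel2(q, k * rS) * np.cos(q * dth)))

err = abs(lhs - rhs)
```

With these parameters the truncated series agrees with the closed form to well below single-precision accuracy.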
Equation (10.4.20) can be derived through the following steps: (1) substituting Equations (10.4.18) and (10.4.19) into Equation (10.4.17); (2) using the integral orthogonalities of trigonometric functions in Equations (4.3.19) and (4.3.20); (3) applying the relationship Hq(ξ) = Jq(ξ) − jYq(ξ) among the Hankel function of the second kind, the Bessel function, and the Neumann function; and (4) using the Wronskian formula in Equation (10.4.13):
$$
P(\mathbf{r},f)=\int_0^{2\pi}G_{\mathrm{free}}^{2D}(\mathbf{r},\mathbf{r}',f)\,E(\mathbf{r}',f)\,\mathrm{d}\theta', \tag{10.4.20}
$$
where
$$
E(\theta_S,r_S,r_0,\theta',f)=\frac{1}{2\pi}\left[\frac{H_0(kr_S)}{H_0(kr_0)}+2\sum_{q=1}^{\infty}\frac{H_q(kr_S)}{H_q(kr_0)}\left(\cos q\theta'\cos q\theta_S+\sin q\theta'\sin q\theta_S\right)\right]
$$
$$
=\frac{1}{2\pi}\left[\frac{H_0(kr_S)}{H_0(kr_0)}+2\sum_{q=1}^{\infty}\frac{H_q(kr_S)}{H_q(kr_0)}\cos q(\theta'-\theta_S)\right]. \tag{10.4.21}
$$
Equation (10.4.20) indicates that the interior pressure can be equivalently created by an array of secondary monopole straight-line sources alone, with driving signals expressed in Equation (10.4.21). For a target straight-line source with unit strength, the driving signals of the secondary sources are equal to their normalized amplitudes, i.e., E(θS, rS, r0, θ′, f) = A(θS, rS, r0, θ′, f). Equation (10.4.21) is the driving signal of horizontal near-field-compensated Ambisonics of infinite order. It is consistent with Equation (9.3.53) except for a normalized gain; the difference in gain arises from the difference between continuous and discrete secondary source arrays.
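The equivalence stated above can be illustrated numerically: substituting the driving signals of Equation (10.4.21), truncated at a finite order, into a quadrature approximation of Equation (10.4.20) reproduces the field of the target line source at an interior point. The sketch below assumes NumPy and SciPy; the geometry, wavenumber, truncation order, and quadrature resolution are arbitrary illustrative choices.

```python
import numpy as np
from scipy.special import hankel2

k = 1.0                         # wavenumber (normalized)
r0 = 3.0                        # radius of the secondary source circle
rS, thS = 6.0, 0.3              # target monopole line source (rS > r0)
r, th = 1.0, 1.2                # interior receiver point
Q, N = 40, 720                  # truncation order, quadrature points

thp = 2 * np.pi * np.arange(N) / N      # azimuths theta' on the circle

# Driving signals, Equation (10.4.21), truncated at order Q
q = np.arange(1, Q + 1)
E = (hankel2(0, k * rS) / hankel2(0, k * r0)
     + 2 * np.sum(hankel2(q, k * rS) / hankel2(q, k * r0)
                  * np.cos(np.outer(thp - thS, q)), axis=1)) / (2 * np.pi)

# Reconstruction, Equation (10.4.20), by quadrature over theta'
dist = np.sqrt(r**2 + r0**2 - 2 * r * r0 * np.cos(th - thp))
G = -0.25j * hankel2(0, k * dist)
p_rec = np.sum(G * E) * (2 * np.pi / N)

# Target pressure of the unit-strength line source at the receiver point
d_t = np.sqrt(r**2 + rS**2 - 2 * r * rS * np.cos(th - thS))
p_t = -0.25j * hankel2(0, k * d_t)
```

The ratio of Hankel functions in the driving signals decays rapidly with order for rS > r0, so the truncation at Q = 40 and the discrete quadrature introduce only negligible error at this interior point.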
In conclusion, if only the target interior sound field is to be controlled, the secondary monopole and dipole sources in acoustical holography, or in the Kirchhoff–Helmholtz boundary integral equation, have closely related radiation. Therefore, the target interior sound field can be equivalently reconstructed by an array of secondary monopole sources alone. In other words, the transition from acoustical holography to Ambisonics can occur without forcing the driving signals of the secondary dipole sources to vanish. This analysis can be extended to the case of spatial Ambisonics (Daniel et al., 2003; Poletti, 2005b).
10.4.4 Comparison between WFS and Ambisonics
WFS and higher-order Ambisonics, both of which use an array of a single type of secondary source, can be derived by simplifying acoustical holography or the Kirchhoff–Helmholtz boundary integral equation. However, the conditions and methods for simplification differ between the two cases.
In WFS, the Kirchhoff–Helmholtz boundary integral equation is approximated by Rayleigh integrals or an appropriate (Neumann) Green's function to simplify the types of
secondary sources. WFS can theoretically be achieved with an arbitrary array of secondary sources. When a curved array is used, a spatial window that depends on the target source direction should be applied to the driving signals to reconstruct the target sound field correctly. For example, such a window allows the secondary sources in half of a horizontal circular array to participate in the reconstruction of a target plane wave. Therefore, this process can be regarded as a local signal mixing method, similar to the local Ambisonic signal mixing in Section 5.2.4. Moreover, driving signals in WFS are not spatially bandlimited. In the case of horizontal WFS, the stationary phase method enables the substitution of secondary straight-line sources with point sources. However, mismatched secondary sources lead to errors in the reconstructed spectrum and in the overall magnitude of pressure. The former can be pre-equalized by applying a special filter to the driving signals, but the latter can only be equalized at a specific reference position or line.
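The target-direction-dependent spatial window described above can be sketched as follows. The selection rule used here is the commonly cited secondary source selection criterion for a plane wave (active where the wave enters the array through the source), written as an illustrative assumption rather than a formula quoted from this section; the array size and wave direction are arbitrary.

```python
import numpy as np

M = 64                                                  # secondary sources on a circle
th = 2 * np.pi * np.arange(M) / M
normals = np.stack([np.cos(th), np.sin(th)], axis=1)    # outward normals of the circle

th_pw = 0.3                                             # plane wave propagation direction
n_pw = np.array([np.cos(th_pw), np.sin(th_pw)])

# Spatial window: a secondary source is active when the plane wave enters
# the array through it, i.e., when its outward normal has a negative
# projection onto the propagation direction
window = (normals @ n_pw < 0.0).astype(float)
n_active = int(window.sum())
```

As expected, the window activates the secondary sources on half of the circle for a plane wave.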
Ideally, horizontal Ambisonics requires an array of secondary straight-line sources arranged in a circle. Under the far-field approximation, secondary monopole straight-line sources can be substituted with point sources. Spatial Ambisonics requires an array of secondary monopole point sources. For secondary source arrays on a horizontal circle or a spherical surface, the driving signals of the secondary monopole and dipole sources for controlling an interior sound field are mutually dependent. Therefore, an array of secondary monopole sources alone is sufficient to control the interior sound field. The corresponding driving signals can be obtained from a combination of the pressure and normal velocity of the medium on the boundary of the circle or spherical surface (Section 9.8.3). When the target sound field is decomposed into spatial harmonics, the driving signals are represented by a weighted combination of these spatial harmonics. In practical Ambisonics, the spatial harmonic decomposition is truncated at a certain order; thus, the driving signals are spatially bandlimited. Ambisonic driving signals pertain to global signal mixing: all secondary sources in the circular or spherical array take part in the reconstruction of a target sound field, and a spatial window for the driving signals is usually not required. Given an upper frequency limit, Ambisonics reconstructs the target sound field within a local region centered at the origin rather than in an extended region within the array.
In practical WFS, M secondary sources are used to control the pressure or normal velocity of the medium on the boundary and to reconstruct the target sound field in the entire region inside the boundary. As indicated in Section 10.1.5, the Shannon–Nyquist spatial sampling theorem requires that the arc length between adjacent secondary sources on the boundary should not exceed half a wavelength in the worst case. Therefore, more secondary sources are needed for reproduction in a larger region. By contrast, as indicated in Section 9.6.3, horizontal Ambisonics is equivalent to a scheme that controls the pressures at O uniform receiver positions on a circle with radius r through a uniform array of M secondary sources arranged on a circle with radius r0 > r. In Equation (9.3.15), the Shannon–Nyquist spatial sampling theorem requires that the number of secondary sources satisfy M ≥ O and that the arc length between adjacent receiver positions not exceed half a wavelength. The number of spatial samples required on a circle with radius r is smaller than that on a circle with radius r0 > r. In other words, Ambisonics reconstructs the target sound field in a smaller region (rather than the entire region inside the array) with fewer secondary sources than WFS. A similar idea underlies local WFS in Section 10.2.4: for a given number of secondary sources, local WFS improves the accuracy of sound field reconstruction at the cost of reducing the size of the reproduction region.
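A back-of-envelope comparison makes the trade-off above concrete. The sketch below applies the half-wavelength spacing arguments to example values of the array radius, listening-region radius, and upper frequency; all numbers are illustrative assumptions, and the ceilings are only rough estimates rather than design formulas from the text.

```python
import numpy as np

c = 343.0                # speed of sound (m/s)
f = 1500.0               # upper frequency of accurate reconstruction (Hz)
lam = c / f              # wavelength at that frequency
r0 = 2.0                 # radius of the secondary source circle (m)
r = 0.5                  # radius of the intended listening region (m)

# WFS: adjacent secondary sources on the circle of radius r0 should be no
# farther apart than half a wavelength, controlling the whole interior region
M_wfs = int(np.ceil(2 * np.pi * r0 / (lam / 2)))

# Ambisonics: receiver positions on the smaller circle of radius r should be
# no farther apart than half a wavelength, and M >= O secondary sources
# suffice, at the cost of a smaller accurately reconstructed region
O_amb = int(np.ceil(2 * np.pi * r / (lam / 2)))
```

For these example values the Ambisonic scheme needs roughly a quarter of the secondary sources that WFS does, mirroring the ratio r/r0 of the controlled regions.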
The reconstructed sound field of WFS and Ambisonics exhibits different physical and perceptual characteristics because of the aforementioned differences between them (Spors and Wierstorf, 2008). However, WFS and Ambisonics can be analyzed using similar methods because their reconstructed sound field and driving signals are closely related to each other.
A horizontal circular array of secondary monopole straight-line sources with radius r0 is considered. The spatial-spectral analysis of a circular array is discussed in Section 9.2.2, and the problems of spatial aliasing and mirror spatial spectra are addressed in Section 9.5.2. The discussions in Chapter 9 focus on Ambisonics, but some general methods and results are applicable to WFS.
The reconstructed sound field of WFS and Ambisonics can be evaluated by substituting the driving signals into Equation (9.2.1) or (9.2.6). In the case of a target plane wave, the driving signals of WFS and Ambisonics are expressed in Equations (10.2.32) and (9.3.53), respectively. Analyses on the horizontal-circular array of secondary monopole straight-line sources, including the calculation of a relative energy error in Equation (9.5.15), lead to the following results.
For WFS, the following conditions are observed:
1. The active secondary sources do not form a closed curve or surface because a spatial window is applied to the driving signals. Therefore, the problem of interior eigenmodes of an enclosed space does not occur. A unique solution for the driving signals exists at all frequencies, or more strictly, for all values of kr0, and the problem of instability in the solution of the driving signals does not arise. However, the spatial window causes an edge effect.
2. Even if the spatial aliasing error is ignored, an error occurs in the reconstructed sound field inside the array. This result is also observed in other arrays of secondary sources.
3. For a target plane wave, the driving signals are not spatially bandlimited. The analysis in Section 9.5.2 indicates that a discrete array of secondary sources leads to a spatial aliasing error in the reconstructed sound field. Above a certain frequency limit, spatial aliasing causes obvious interference patterns in the sound field.
4. The spatial distribution of the errors caused by a discrete array is irregular. Above a certain frequency limit, spatial aliasing occurs in the entire receiver region. However, the spatial aliasing error decreases as the receiver position moves away from the secondary source array.
5. Spatial aliasing in the reconstructed sound field may lead to perceivable timbre coloration.
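The aliasing behavior in points 3 to 5 is often summarized by a rule-of-thumb frequency consistent with the half-wavelength condition of Section 10.1.5, although the exact aliasing frequency also depends on the target source and listening geometry. The snippet below computes this estimate for an example secondary source spacing (an illustrative value, not one from the text).

```python
# Rule-of-thumb spatial aliasing frequency for a discrete secondary source
# array: artifacts appear above roughly f_al = c / (2 * dx) for spacing dx
c = 343.0        # speed of sound (m/s)
dx = 0.15        # spacing between adjacent secondary sources (m)
f_al = c / (2 * dx)
```

For a 15 cm spacing this gives an aliasing limit of roughly 1.1 kHz, which is why spatial aliasing in WFS can produce perceivable timbre coloration well within the audio band.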
For higher-order Ambisonics, the following conditions are presented:
1. The active secondary sources constitute a closed curve or surface. Interior eigenmodes occur at some frequencies, or more strictly, at some values of kr0. At these frequencies, the solutions for the driving signals are not unique [see the discussion after Equation (9.3.5)]. In other words, the interior sound field cannot be controlled at these frequencies, and the solution for the driving signals becomes unstable.
2. The driving signals of Q-order Ambisonics are spatially bandlimited. If the number of secondary sources satisfies M ≥ (2Q + 1), the driving signals do not cause spatial aliasing. However, the mirror spatial spectra of the driving signals caused by a discrete array may still introduce errors in the reconstructed sound field.
3. The spatial distribution of the error caused by mirror spatial spectra is regular. When the number of secondary sources satisfies M ≥ (2Q + 1), all v ≠ 0 terms in the summation of Equation (9.5.12) can be omitted if kr is smaller than a certain value [Equation (9.3.14)] because the Bessel function Jq(kr) oscillates and decays when its order q is not less than [exp(1)kr/2]. Therefore, Ambisonics can reconstruct the target sound field in a circular region centered at the origin and up to a certain frequency. The radius of this region and the upper frequency limit, or more strictly the