- Stellingen
- Propositions
- List of Figures
- List of Tables
- 1 Introduction
- Introduction
- Affect, emotion, and related constructs
- Affective Computing: A concise overview
- The closed loop model
- Three disciplines
- Human-Computer Interaction (HCI)
- Health Informatics
- Three disciplines, one family
- Outline
- 2 A review of Affective Computing
- Introduction
- Vision
- Speech
- Biosignals
- A review
- Time for a change
- 3 Statistical moments as signal features
- Introduction
- Emotion
- Measures of affect
- Affective wearables
- Experiment
- Participants
- Equipment and materials
- Procedure
- Data reduction
- Results
- Discussion
- Comparison with the literature
- Use in products
- 4 Time windows and event-related responses
- Introduction
- Data reduction
- Results
- Mapping events on signals
- Discussion and conclusion
- Interpreting the signals measured
- Looking back and forth
- 5 Emotion models, environment, personality, and demographics
- Introduction
- Emotions
- Modeling emotion
- Ubiquitous signals of emotion
- Method
- Participants
- International Affective Picture System (IAPS)
- Digital Rating System (DRS)
- Signal processing
- Signal selection
- Speech signal
- Heart rate variability (HRV) extraction
- Normalization
- Results
- Considerations with the analysis
- The (dimensional) valence-arousal (VA) model
- The six basic emotions
- The valence-arousal (VA) model versus basic emotions
- Discussion
- Conclusion
- 6 Static versus dynamic stimuli
- Introduction
- Emotion
- Method
- Preparation for analysis
- Results
- Considerations with the analysis
- The (dimensional) valence-arousal (VA) model
- The six basic emotions
- The valence-arousal (VA) model versus basic emotions
- Static versus dynamic stimuli
- Conclusion
- IV. Towards affective computing
- Introduction
- Data set
- Procedure
- Preprocessing
- Normalization
- Baseline matrix
- Feature selection
- k-Nearest Neighbors (k-NN)
- Support vector machines (SVM)
- Multi-Layer Perceptron (MLP) neural network
- Discussion
- Conclusions
- 8 Two clinical case studies on bimodal health-related stress assessment
- Introduction
- Post-Traumatic Stress Disorder (PTSD)
- Storytelling and reliving the past
- Emotion detection by means of speech signal analysis
- The Subjective Unit of Distress (SUD)
- Design and procedure
- Features extracted from the speech signal
- Results
- Results of the Stress-Provoking Story (SPS) sessions
- Results of the Re-Living (RL) sessions
- Overview of the features
- Discussion
- Stress-Provoking Stories (SPS) study
- Re-Living (RL) study
- Stress-Provoking Stories (SPS) versus Re-Living (RL)
- Conclusions
- 9 Cross-validation of bimodal health-related stress assessment
- Introduction
- Speech signal processing
- Outlier removal
- Parameter selection
- Dimensionality Reduction
- k-Nearest Neighbors (k-NN)
- Support vector machines (SVM)
- Multi-Layer Perceptron (MLP) neural network
- Results
- Cross-validation
- Assessment of the experimental design
- Discussion
- Conclusion
- 10 Guidelines for ASP
- Introduction
- Signal processing guidelines
- Physical sensing characteristics
- Temporal construction
- Normalization
- Context
- Pattern recognition guidelines
- Validation
- Triangulation
- Conclusion
- 11 Discussion
- Introduction
- Hot topics: On the value of this monograph
- Applications: Here and now!
- TV experience
- Knowledge representations
- Computer-Aided Diagnosis (CAD)
- Visions of the future
- Robot nannies
- Digital Human Model
- Conclusion
- Bibliography
- Summary
- Samenvatting
- Dankwoord
- Curriculum Vitae
- Publications and Patents: A selection
- Publications
- Patents
- SIKS Dissertation Series
2.4 Biosignals
2.4.2 Time for a change
Taken together, implicit messages of emotion are expressed through bodily (e.g., movements) and facial expressions [131, 192, 511, 652, 739] and by way of speech signal characteristics (e.g., intonation) [131, 182, 511, 590, 739]. In line with Picard [521, 524], I posit that this duo is not complete: physiological responses should be added to it to complete the palette of affective signals. Such responses, however, are hard for humans to notice, as is also the case with the activity of various facial muscles [643]. In contrast, computing devices augmented with biosensors can record such signals, as the last decade of research has shown; see Table 2.4.
Biosignals have one significant advantage over visual, movement, and speech signals: they are free from social masking [643]. This is in sharp contrast to visual appearance and speech, which can both be (conveniently) manipulated to some extent [643], in particular by trained individuals such as actors. Moreover, an important advantage of biosignals over either speech or vision is that they provide a continuous signal, whereas speech is only of use while the person is speaking and facial expressions tend to be sparse when people are, for example, doing computer work. Hence, biosignals enable communication where the traditional channels (i.e., vision and speech [148, 184]) are absent or fail (cf. [617]). As such, biosignals are suited par excellence to augment HCI as well as human-human interaction [315].
To bring biosignals as affective signals from research to practice, however, significant improvements are needed. Although some closed loop applications may well function satisfactorily in practice, in general either the number of emotional states recognized is rather limited (often 2 to 4) or the classification accuracy is relatively low (often below 80%). So, there is significant room, and need, for improvement to obtain the high accuracy levels for the classification of multiple emotional states that are necessary for the construction of smooth affective closed loops.
In the next three parts of this monograph, Parts II, III, and IV, I set out a series of studies that systematically review the options for improvement that remain open for ASP. These studies all address issues that are crucial for the development of closed loop ASP, as presented in Section 1.5; in particular, they are of importance for its signal processing + pattern recognition pipeline. These three parts are succeeded by an epilogue, in which the first chapter presents guidelines for each of these two steps in the processing pipeline. This monograph now first continues with two chapters that employ four biosignals (i.e., 3× EMG and EDA), use dynamic stimuli (i.e., movie fragments) to induce emotions, and explore the importance of the length of time windows for ASP.
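To make the notion of time windows for ASP concrete, the following sketch (my own illustration, assuming NumPy; the sampling rate, window length, and step size are hypothetical and not the parameters used in the studies) computes the first four statistical moments of a biosignal over sliding time windows, in the spirit of Chapter 3:

```python
import numpy as np

def window_features(signal, fs, win_s, step_s):
    """Slide a window over a 1-D biosignal (e.g., EDA) and compute, per
    window, the first four statistical moments: mean, standard deviation,
    skewness, and excess kurtosis."""
    win, step = int(win_s * fs), int(step_s * fs)
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = np.asarray(signal[start:start + win], dtype=float)
        m, s = w.mean(), w.std()
        if s > 0:
            z = (w - m) / s                      # standardize the window
            skew = (z ** 3).mean()               # third standardized moment
            kurt = (z ** 4).mean() - 3.0         # fourth moment, excess form
        else:
            skew, kurt = 0.0, 0.0                # undefined for a flat window
        feats.append((m, s, skew, kurt))
    return np.array(feats)

# Example: a 60 s synthetic signal sampled at 32 Hz, 10 s windows, 5 s steps.
sig = np.sin(np.linspace(0.0, 20.0, 1920)) + 0.1
features = window_features(sig, fs=32, win_s=10.0, step_s=5.0)
```

Shorter windows yield more (but noisier) feature vectors per recording; longer windows smooth over exactly the event-related responses that Chapter 4 examines, which is why the window length itself is treated as a design parameter.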
II. BASELINE-FREE ASP
