10.2.3 Normalization
Finding an appropriate normalization method is both important and difficult for sensors whose readings depend on factors that can easily change on a daily basis, such as sensor placement, humidity, temperature, and the use of contact gel, as was already noted in Section 10.2.1. Physiological signals can be normalized using:
•Baseline corrections: applied when comparing or generalizing multiple measurements from one individual across a variety of tasks [418].
•Range corrections: reduce the inter-individual variance by a transformation that sets each signal value to a proportion of the intra-individual range [62].
Probably the most frequently used and powerful correction for continuous biosignals (e.g., EDA and skin temperature) is standardization (method 4 in Table 10.4) [62]. It corrects not only for the baseline level but also for the variation in the signal, making it robust.
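To make the difference between these corrections concrete, the minimal sketch below illustrates a baseline (delta) correction, a range correction, and standardization for a single biosignal segment. The function names, the artificial EDA values, and the exact formulas are illustrative assumptions; they are not the literal definitions given in Table 10.4.

```python
import numpy as np

def baseline_correction(signal, baseline):
    """Delta (reaction) score: subtract a baseline level from every sample."""
    return signal - baseline

def range_correction(signal, person_min, person_max):
    """Express each sample as a proportion of the intra-individual range,
    which reduces the inter-individual variance."""
    return (signal - person_min) / (person_max - person_min)

def standardize(signal):
    """Standardization (z-score): corrects for both the baseline level and
    the variation in the signal."""
    return (signal - np.mean(signal)) / np.std(signal)

# Example: a short, artificial EDA segment (arbitrary units).
eda = np.array([2.1, 2.3, 2.2, 3.8, 4.1, 2.5])
print(baseline_correction(eda, baseline=eda[:3].mean()))
print(range_correction(eda, person_min=eda.min(), person_max=eda.max()))
print(standardize(eda))
```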
Other correction methods are tailored to specific features; for example, the amplitude of skin conductance responses is often corrected by dividing by the maximum amplitude. An alternative is the use of delta, or reaction, scores (Table 10.4, nr. 2), which is suitable and reliable for absolute level comparisons [418]. If no baseline measurements are available, method nr. 3 of Table 10.4 is a good alternative.
[Figure 10.2 plot: "Event in a 30 Minute Window"; EDA (approximately 500 to 3500) on the y-axis against time in hours (3.0 to 3.5) on the x-axis, with the event marked.]
Figure 10.2: A 30-minute time window of an EDA signal, which is a part near the end of the signal presented in Figure 10.1. Three close-ups around the event near 3.3 hours are presented in Figure 10.3.
Normalization methods 4-8 in Table 10.4 are often used as range corrections. In general, they provide a stronger normalization than baseline corrections 2 and 3 in Table 10.4. As such, range corrections can also be used to compensate for greater variability in signals. Typical measures that are subject to large inter-individual differences are skin conductance (tonic levels vary per person: 2-16 µS), skin temperature, and pulse volume.
Selecting a normalization method is difficult since each has different merits; see Table 10.4. Taking the minimum as baseline comes closest to taking the resting EDA that would normally be used in a laboratory experiment. This is the best method if a consistent minimum is apparent in all data being combined; the problem is that such a minimum must be apparent in each data segment. Point outliers, such as those at 3.7 hours and 3.9 hours in Figure 10.2, can be eliminated in a straightforward manner, yielding a more robust minimum baseline.
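One way to obtain such an outlier-robust minimum is sketched below: a low percentile of the segment is taken as the baseline instead of the absolute minimum, so that isolated point outliers do not determine the baseline. The synthetic segment, the sampling rate implied by its length, and the percentile value are illustrative assumptions, not values prescribed in this chapter.

```python
import numpy as np

def robust_minimum_baseline(segment, percentile=1.0):
    """Take a low percentile instead of the absolute minimum, so that
    isolated point outliers do not determine the baseline."""
    return np.percentile(segment, percentile)

# A synthetic resting EDA segment (roughly 30 minutes at 1 Hz, arbitrary
# units) with a single artifactual dip.
rng = np.random.default_rng(0)
segment = 2.5 + 0.1 * rng.standard_normal(1800)
segment[700] = 0.2  # point outlier (e.g., momentary loss of skin contact)

print(segment.min())                     # dominated by the outlier
print(robust_minimum_baseline(segment))  # close to the lowest genuine values
```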
In addition to choosing an appropriate normalization method, the selection of a period (i.e., a time window) over which to calculate the parameters of the selected method (the normalization period) is also important, as was discussed in Section 10.2.2. As an example, Figures 10.2 and 10.4 show several hours of an ambulatory EDA signal, along with two strategies for baseline correction: a minimum value baseline and a mean baseline (methods 2 and 3 in Table 10.4). Once the baseline is removed, the signal becomes a new base (or zero) and the original value is lost.
[Figure 10.3 plot: "Looking Through Different Windows"; three panels with EDA on the y-axis, showing the signal around the event in a 5-second, a 10-second, and a 1-minute window.]
Figure 10.3: Three close-ups around the event presented in Figure 10.2. The statistics accompanying the three close-ups can be found in Table 10.5.
For short-term experiments, a single baseline period is usually sufficient. However, when monitoring continuously, the baseline may have to be re-evaluated more frequently. The challenge here is to find a good strategy for dividing the signal into segments over which the baseline should be recalculated. A simple solution is to use a sliding window, for example one that takes the last 30 minutes into account. However, Figure 10.4 shows an obvious problem with this approach: lost data (e.g., because a sensor has fallen off); see also Section 10.2.2. In sum, the most useful correction method for each individual physiological measurement still has to be specified; which technique is most useful depends on the aim of the study.
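A minimal sketch of such a sliding-window strategy is given below: the baseline is re-estimated over the preceding 30 minutes and windows with too many missing samples are skipped. The function name, the sampling rate, the window length, and the threshold for "too much missing data" are illustrative assumptions, not values advocated in this chapter.

```python
import numpy as np

def sliding_baseline(signal, fs=4.0, window_min=30.0, max_missing=0.5):
    """Re-estimate the baseline per sample as the mean over the preceding
    `window_min` minutes. Lost data are expected to be marked as NaN; if
    more than `max_missing` of a window is missing, no baseline is set."""
    window = int(window_min * 60 * fs)  # window length in samples
    baseline = np.full(len(signal), np.nan)
    for i in range(window, len(signal)):
        seg = signal[i - window:i]
        if np.mean(np.isnan(seg)) <= max_missing:
            baseline[i] = np.nanmean(seg)
    return baseline

# Usage: subtract the running baseline from an EDA recording in which
# missing samples (e.g., a sensor that has fallen off) are marked as NaN.
# corrected = eda - sliding_baseline(eda, fs=4.0, window_min=30.0)
```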
10.2.4 Context
“When humans talk with humans, they are able to use implicit situational information, or context, to increase the conversational bandwidth. Unfortunately, this ability to convey ideas does not transfer well to humans interacting with computers. In traditional interactive computing, users have an impoverished mechanism for providing input to computers. Consequently, computers are not currently enabled to take full advantage of the context of the human-computer dialogue. By improving the computer’s access to context, we increase the richness of communication in human-computer interaction and make it possible to produce more useful computational services.” A. K. Dey [158, p. 4]

If anything, the experience and transmission of emotions via biosignals depends heavily on context [585, Chapter 23]. However, as is stated in the quote above, capturing context is easier said than done [6, 325, 668, 669]. Handling context is even considered to be one of AI’s traditional struggles [649, 675]. Perhaps this can be attributed partly to the fact that, in the vast majority of cases, research on context-aware computing has taken a technology-centered perspective as opposed to a human-centered perspective [383]. This technology push has been fruitful, though: among many other technologies, sensor networks, body area networks, GPS, and RFID have been developed. Their use can be considered a first step towards context-aware computing. However, not only is gathering context challenging; processing it (e.g., feature extraction) and interpreting it are also hard [21, 676, 699].
Potentially, context-aware computing can aid ASP significantly. Biosensors can be embedded in jewelry (e.g., a ring or necklace), in consumer electronics (e.g., a cell phone or music player), or otherwise worn as wearables (e.g., embedded in clothes or as part of a body area network). Connected to (more powerful) processing units, they can record, tag, and interpret events [158] and, in parallel, tap into our emotional reactions through our physiological responses. However, affective biosignals are influenced by (the interaction between) a variety
