2.4 Biosignals

2.4.2 Time for a change

Taken together, implicit messages of emotion are expressed through bodily (e.g., movements) and facial expressions [131, 192, 511, 652, 739] and by way of speech signal characteristics (e.g., intonation) [131, 182, 511, 590, 739]. In line with Picard [521, 524], I posit that this duo is not complete and that physiological responses should be added to it to complete the palette of affective signals. Admittedly, such responses are hard for humans to notice, as is also the case with various facial muscles [643]. In contrast, computing devices augmented with biosensors can record such signals, as has been shown in the last decade of research; see Table 2.4.

Biosignals have one significant advantage over visual, movement, and speech signals: they are free from social masking [643]. This is in sharp contrast to visual appearance and speech, which can all be (conveniently) manipulated to some extent [643], in particular by trained individuals such as actors. Moreover, an important advantage of biosignals over either speech or vision is that they provide a continuous signal, as opposed to speech, which is only of use while the person is speaking, or facial expressions, which tend to be sparse when people are doing, for example, computer work. Biosignals thus enable communication where the traditional channels (i.e., vision and speech [148, 184]) are absent or fail (cf. [617]). Hence, biosignals are par excellence suited to augment HCI as well as human-human interaction [315].

To bring biosignals as affective signals from research to practice, however, significant improvements are needed. Although it is quite possible that some closed-loop applications already function satisfactorily in practice, in general either the number of emotional states recognized is rather limited (often 2 to 4) or the classification accuracy is relatively low (often below 80%). So, there is both room and need for improvement to reach the high accuracy levels for the classification of multiple emotional states that the construction of smooth affective closed loops requires.
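To make this classification step concrete, the sketch below trains a standard classifier on a feature matrix derived from biosignals and estimates its accuracy over four emotional states. It is an illustration only, not the pipeline used in this monograph: the feature matrix, the labels, and the choice of classifier are all assumptions for the sake of the example.

```python
# Minimal sketch: classifying a small number of emotional states from
# precomputed biosignal features (synthetic data; illustrative only).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per stimulus segment; columns could
# be, e.g., mean EDA level, EDA response rate, and EMG power per channel.
X = rng.normal(size=(200, 6))
# Hypothetical labels for four emotional states (e.g., quadrants of the
# valence-arousal plane).
y = rng.integers(0, 4, size=200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean accuracy over 5 folds: {scores.mean():.2f}")
```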

In the next three parts of this monograph, Parts II, III, and IV, I set out a series of studies that systematically review the options for improvement that are still open for ASP. These studies all address issues that are crucial for the development of closed-loop ASP, as presented in Section 1.5. In particular, they are of importance for its signal processing + pattern recognition pipeline. These three parts will be followed by an epilogue, whose first chapter presents guidelines for each of these two steps in the processing pipeline. This monograph now first continues with two chapters that employ four biosignals (i.e., 3× EMG and EDA), use dynamic stimuli (i.e., movie fragments) to induce emotions, and explore the importance of the length of time windows for ASP, as sketched below.
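As a rough illustration of the time-window question examined in the next chapters, the following sketch slices a synthetic EDA and EMG recording into non-overlapping windows of a given length and computes a few simple features per window. The sampling rate, window lengths, and feature set are assumptions for illustration, not the configuration used in the studies reported later.

```python
# Sketch: per-window feature extraction from biosignals for several
# candidate window lengths (synthetic signals; parameters illustrative).
import numpy as np

FS = 32  # assumed sampling rate in Hz

def window_features(signal: np.ndarray, window_s: float, fs: int = FS) -> np.ndarray:
    """Split a 1-D signal into non-overlapping windows of `window_s` seconds
    and return mean, standard deviation, and mean absolute first difference
    per window (three common, simple biosignal features)."""
    n = int(window_s * fs)
    n_windows = len(signal) // n
    windows = signal[: n_windows * n].reshape(n_windows, n)
    return np.column_stack([
        windows.mean(axis=1),
        windows.std(axis=1),
        np.abs(np.diff(windows, axis=1)).mean(axis=1),
    ])

# Synthetic 10-minute recordings standing in for EDA and one EMG channel.
t = np.arange(0, 600, 1 / FS)
eda = 2 + 0.3 * np.sin(2 * np.pi * t / 60) + 0.05 * np.random.randn(t.size)
emg = 0.1 * np.random.randn(t.size)

for window_s in (10, 30, 60):  # candidate window lengths in seconds
    feats = np.hstack([window_features(eda, window_s),
                       window_features(emg, window_s)])
    print(f"{window_s:>3}s windows -> feature matrix {feats.shape}")
```

Longer windows yield fewer, smoother feature vectors per recording, whereas shorter windows give more training examples but noisier estimates; the trade-off between these is exactly what the window-length studies probe.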

II. Baseline-free ASP
