
11 Discussion

Table 11.1: A description of the four categories of affective computing in terms of computer science's input/output (I/O) operations. In terms of affective computing, I/O denotes the expression (O) and the perception, impression, or recognition (I) of affect. This division is adapted from the four cases identified by Rosalind W. Picard [520].

                 O
              no     yes
   I   no    –/–     –/O
       yes   I/–     I/O

Table 11.1 presents the four categories of affective computing [520]. Entities without any affective I/O (i.e., –/–), such as traditional machinery, can be very useful in all situations where emotions hinder instead of help. Entities with only affective O could, for example, be avatars (e.g., characters in games), consumer products (e.g., a sports car), toys for children, and our TV (see also Section 11.5.1). However, such entities would not know what affective state their users are in and, hence, what affect to show, as they lack the affective I for it. So, as the name emotion-aware systems already gives away, a requirement for such systems is affective I.
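The four categories can be captured in a few lines of code. This is an illustrative sketch only: the function name is hypothetical, and ASCII hyphens stand in for the dashes of Table 11.1.

```python
# Illustrative sketch (hypothetical names): the four affective I/O
# categories of Table 11.1, derived from whether a system perceives
# affect (I) and/or expresses affect (O).

def affective_category(has_input: bool, has_output: bool) -> str:
    """Map (I, O) presence to the four-case notation of Picard [520]."""
    i = "I" if has_input else "-"
    o = "O" if has_output else "-"
    return f"{i}/{o}"

# Examples from the text: traditional machinery, an expressive avatar,
# a break-scheduling monitor for pilots, and an emotion-aware TV.
print(affective_category(False, False))  # -/-
print(affective_category(False, True))   # -/O
print(affective_category(True, False))   # I/-
print(affective_category(True, True))    # I/O
```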

Throughout this monograph, I have focused on the affective I; that is, the perception, impression, or recognition of affect. It has been shown to be complex and promising at the same time; however, Chapter 10 provided a set of prerequisites that ASP research and development can hold on to. Following these guidelines, successful ASP can be employed. In some cases, only affective I is possible (i.e., I/–). In such cases, the affective I alters other processes (e.g., scheduling breaks for pilots) and no affective O is given; instead, another type of output closes the system's loop (cf. Section 1.5 and see Section 11.5.2). In the case of affective I/O, the affective O can follow the affective I immediately or with a (fixed or varying) delay. The affective O can also take various forms, as was already denoted in Section 1.6. Moreover, the person who provides the affective I is not necessarily the person who receives the affective O (see Section 11.5.3).

The theoretical framework concerning affective processes is a topic of continuous debate, as was already argued in Section 1.2. Consequently, an accurate interpretation of affective I and, subsequently, an appropriate affective O are hard to establish. In particular in real-world settings, where several sources of noise will disturb the closed loop (see Section 1.5), this will be a challenging endeavor. So, currently, it is best to apply simple and robust mechanisms to generate affective O (e.g., at the reflex-agent level [568]) or slightly more advanced ones. Moreover, it is not specific states of affect that should be targeted but rather the user's core affect, which needs to be smoothly (and unnoticeably) directed toward a target core state [316]; see also Section 10.2.2. The next section will provide a few such applications.
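A reflex-agent-level mechanism for affective O can be sketched as a single condition-action rule. All thresholds, names, and actions below are illustrative assumptions, not the monograph's implementation; the point is only how little machinery such a mechanism needs.

```python
# A minimal sketch (hypothetical thresholds and actions) of affective O
# generated at the reflex-agent level [568]: a fixed condition-action
# rule nudges the estimated arousal of the user's core affect toward a
# target state, with no internal model and no planning.

TARGET_AROUSAL = 0.4   # assumed target core-affect arousal (scale 0..1)
DEADBAND = 0.1         # no action when close enough to the target

def reflex_output(estimated_arousal: float) -> str:
    """Condition-action rule mapping affective I to a coarse affective O."""
    if estimated_arousal > TARGET_AROUSAL + DEADBAND:
        return "calming"      # e.g., dim the lights, slow the music
    if estimated_arousal < TARGET_AROUSAL - DEADBAND:
        return "activating"   # e.g., brighter lights, faster tempo
    return "none"             # within the deadband: leave the user alone

print(reflex_output(0.8))  # calming
```

The deadband keeps the mechanism robust against noisy affective I: small estimation errors around the target produce no output at all.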

11.5 Applications: Here and now!

In Part IV of this monograph, I presented the research conducted towards affective computing. Moreover, in the previous section I have discussed affective I/O to aid a structured



discussion towards closing the loop in real-world practice. However, this did not bring us to the development of consumer applications. That is what I will do here and now! In line with affective I/O, as outlined in Section 11.4, the two golden rules that secure such endeavors are: control the complexity and follow the guidelines (see Chapter 10).

One of the main rationales behind the applications that will be presented is that the influencing algorithm of the closed loop system (see Figure 1.1) is kept as simple as possible. This suggestion stems from the idea that ASP can never be entirely based on psychological changes. As has been discussed in Chapters 2 and 10, many factors outside people’s emotional state can contaminate affective signals. A pragmatic solution for this problem can be to express the goals of ASP in terms of the affective signals themselves [615], instead of in terms of emotional states. This approach has also been baptized the physiology-driven perspective [654, 680]. With this perspective in mind, I will now present three possible consumer products, one in each discipline of computer science discussed in Chapter 1: HCI, AI, and health informatics.
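The physiology-driven perspective can be made concrete with a toy controller. This is a sketch under stated assumptions: the function name, the choice of skin conductance level (SCL) as the signal, and all numbers are illustrative, not taken from the monograph.

```python
# A sketch of the physiology-driven perspective [654, 680]: the goal is
# expressed in the affective signal's own terms (return skin conductance
# level to a personal baseline) instead of in terms of an inferred
# emotional state. All names and numbers here are assumptions.

def control_step(scl: float, baseline: float, gain: float = 0.5) -> float:
    """Return an actuator intensity in [-1, 1]; the sign gives direction."""
    error = scl - baseline    # distance of the signal from its goal
    u = -gain * error         # simple proportional correction
    return max(-1.0, min(1.0, u))

print(control_step(scl=7.0, baseline=5.0))  # -1.0 (damp a raised signal)
print(control_step(scl=4.0, baseline=5.0))  # 0.5
```

Note that nothing in this loop claims to know the user's emotion; it only steers a measurable signal, which is exactly what makes the goal testable.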

11.5.1 TV experience

HCI, or better, human-media interaction is part of everyday life. Not only do we interact with our PC but, for example, also with our television [715]. However, as will be illustrated here, human-media interaction will soon stretch far beyond that. About a decade ago, Philips developed Ambient Lighting Technology, best known as Ambilight [161]. Using an array of LEDs mounted at the back of the television, Ambilight generates, in real time, light effects around the television that match the video content. These effects not only reduce eye fatigue [74], but also enlarge the virtual screen, resulting in a more immersive viewing experience [597, 599]. The latter can be understood by considering the characteristics of human peripheral vision.

Using real-time analysis of both the video and audio signals [255, 705, 742], Ambilight can be augmented and used to amplify the content's effect on the viewer's emotions. This would result in a loop similar to that presented in Section 1.5. However, note that this would require the viewer to be connected to the TV with a biosensing device (see Figure 1.1). Moreover, the feedback actuator would be the specifications of the Ambilight and/or of the audio signals. So, the loop would not be closed but rather open. Such an application is well within reach; part of the loop proposed here has already been developed repeatedly over the last decade, namely the extraction of emotion-related content from audio and video [254, 255, 681, 705, 742].
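One way such an augmentation could steer the light is a direct mapping from an arousal estimate to light parameters. This is a hypothetical sketch: the parameter names and the particular mapping are my assumptions, not Philips' or the monograph's design.

```python
# An illustrative sketch (all parameter names are assumptions) of how an
# arousal estimate extracted from the audio/video content could steer
# Ambilight-style effects: higher estimated arousal yields more vivid,
# warmer light. The mapping is kept deliberately simple and monotonic.

def arousal_to_light(arousal: float) -> dict:
    """Map an arousal estimate in [0, 1] to hypothetical light settings."""
    arousal = max(0.0, min(1.0, arousal))  # clamp noisy estimates
    return {
        "saturation": 0.3 + 0.7 * arousal,  # more saturated when aroused
        "hue_deg": 240 - 240 * arousal,     # blue (calm) -> red (tense)
    }

print(arousal_to_light(0.0))
print(arousal_to_light(1.0))
```

Clamping the input is the kind of simple robustness measure argued for above: a spurious arousal estimate can distort the light, but never drive it outside its range.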

In more advanced settings, user identification (see Section 10.3.3) could be employed to construct user profiles and tap into the user’s personal space [464]. Moreover, physical characteristics (see Section 10.2.1) and the context (see Section 10.2.4) could be taken into



account. Ambient light should be considered as just one example of a next generation of applications, which can be extended conveniently to various other domains, such as our clothes and (ambient) lighting.

11.5.2 Knowledge representations

One of AI’s core challenges is knowledge representation, which is traditionally approached from an engineering rather than from a user perspective (cf. Chapter 1). Knowledge representation can play five distinct roles: i) a substitute for an object itself; ii) a set of ontological commitments; iii) a fragmentary theory of intelligent reasoning; iv) a medium for pragmatically efficient computation; and v) a medium of human expression [678].

Although knowledge representation has shown its value in a range of domains, I propose to take it one step further, using W3C's Emotion Markup Language (EmotionML). EmotionML is a 'plug-in' language for: i) manual annotation of data; ii) automatic recognition of emotion-related states from user behavior; and iii) generation of emotion-related system behavior. As such, EmotionML enables a fusion between ASP and traditional knowledge representations. Amongst a range of other applications, this enables the digital preservation of our experiences, augmented with emotions. For example, not only can I record my son's first words or the first time he plays with a ball, I could also preserve how my son, my wife, and I felt while this happened. Our affective signals (see Chapters 2-7) could be recorded, processed, and filed using EmotionML. In parallel, our affective signals could also be filed as raw signals. In the future, this would perhaps enable advanced digital human modeling.
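Filing a recognized state as EmotionML could look as follows. The category value and vocabulary choice are made-up examples; only the namespace and the emotion/category elements follow the W3C EmotionML specification.

```python
# A sketch of filing a recognized affective state as EmotionML, using
# only the Python standard library. The concrete values are invented;
# the namespace and element names follow the W3C specification.
import xml.etree.ElementTree as ET

NS = "http://www.w3.org/2009/10/emotionml"
ET.register_namespace("", NS)  # emit EmotionML as the default namespace

root = ET.Element(f"{{{NS}}}emotionml")
emotion = ET.SubElement(root, f"{{{NS}}}emotion")
# Assumed vocabulary reference (Ekman's 'big six' categories):
emotion.set("category-set", "http://www.w3.org/TR/emotion-voc/xml#big6")
ET.SubElement(emotion, f"{{{NS}}}category",
              {"name": "happiness", "value": "0.9"})

print(ET.tostring(root, encoding="unicode"))
```

The same document could carry several <emotion> elements over time, one per processed segment of the recorded affective signals, while the raw signals are archived alongside it.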

11.5.3 Computer-Aided Diagnosis (CAD)

As was already stressed in the introduction (Chapter 1) of this monograph, emotions also have an impact on our health [326]. They directly influence our emotional and mental wellbeing and, as such, also indirectly affect our physical health. Consequently, ASP should be considered an important part of health informatics. In Chapter 8, two models were developed that relate people's stress level to their speech signal. These two models can serve as a springboard for the development of Computer-Aided Diagnosis (CAD), which can provide a second opinion for a therapist.

As was shown in Chapters 5 and 6, when the application area allows it, other biosignals can conveniently be combined with a speech signal. A combination of biosignals would improve the predictive power of the model, as was already discussed in Chapter 10. The models developed in Chapter 8 were tailored to PTSD patients. Follow-up research could either aim at other specific groups of patients or, preferably, at a generic modeling template
