
2 A Probabilistic Fusion Model of Trust

Analysis of qualitative feedback in relation to an image of Vladimir Putin1 revealed a surprising number of participants commenting that they did not trust the image simply because they did not like Putin, or did not think he was honest [7]. This was in spite of the fact that subjects were carefully instructed to judge the trustworthiness of the image itself. In other words, these participants seemed to be confounding the decision of whether they trust the image with the decision of whether they trust the content of the image. Conversely, qualitative feedback from other participants revealed that they trusted the image because they could not detect any evidence of manipulation. Such feedback aligns with the visual fluency hypothesis. A similar dichotomy appeared in relation to an image of a strange-looking creature known as a frill shark. Both cases are revealing, as they seem to indicate that both content and representation features are involved when participants judge the trustworthiness of images. A question that we will explore below is whether decision making around these components is transacted independently, and how that relates to contextuality.

Deciding whether an image of Putin is trustworthy obviously involves uncertainty, e.g., it may have been photoshopped. It is therefore natural to consider a probabilistic decision model. For example, consider the simple model depicted in Fig. 1. S is a random variable which ranges over a set of image stimuli, such as the Putin image. Bivalent random variables C1, C2 relate to features associated with the content of the image. For example, C1 may model the decision whether the subject of the image is honest. Conversely, R1 and R2 are bivalent random variables that relate to representational aspects. For example, R1 may model the decision whether the image has been manipulated or not. Variable R2 might model the decision whether there was something unexpected in the image. Of course, there may be any number of variables related to decisions involving the content and representation of the image. In addition, the variables could be continuous rather than bivalent. However, for simplicity we will use the four bivalent variables: two content based and two representation based. A further assumption of the model is the latent variables γ and ρ. The latent variable γ models the decision whether the content of the image is trustworthy, which depends only on variables related to the content. For example, this would equate to the decision whether Putin the person is deemed trustworthy. Conversely, the latent variable ρ models the decision whether the image is deemed to be a true and accurate depiction of reality. Finally, the variable T corresponds to the decision whether the human subject trusts what they have been shown. Such a decision depends on both the decision whether the content is trusted and the decision whether the representation is trusted, as modeled by the two latent variables.
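To make the structure concrete, here is a minimal sketch of such a fusion model in Python. The conditional probability tables, the fusion weight w_content, and the function name p_trust are all invented for illustration; none of these values come from the study.

```python
# A minimal sketch of the fusion model in Fig. 1. All probabilities
# below are hypothetical placeholders, not values from the pilot study.

# p(gamma = trust | C1, C2): content is trusted mainly when both
# content decisions come out positively ("y").
p_gamma = {(c1, c2): 0.9 if (c1, c2) == ("y", "y") else 0.2
           for c1 in "yn" for c2 in "yn"}

# p(rho = trust | R1, R2): representation is trusted when no
# manipulation (R1 = n) and nothing unexpected (R2 = n) is detected.
p_rho = {(r1, r2): 0.9 if (r1, r2) == ("n", "n") else 0.1
         for r1 in "yn" for r2 in "yn"}

def p_trust(c1, c2, r1, r2, w_content=0.5):
    """p(T = y): marginalise out gamma and rho, then fuse them.

    w_content controls which component dominates; a small value
    models broken visual fluency, where p(T = y | rho) outweighs
    p(T = y | gamma)."""
    pg, pr = p_gamma[(c1, c2)], p_rho[(r1, r2)]
    total = 0.0
    for g, pg_val in ((1, pg), (0, 1 - pg)):
        for r, pr_val in ((1, pr), (0, 1 - pr)):
            total += pg_val * pr_val * (w_content * g + (1 - w_content) * r)
    return total

# Image judged unmanipulated and unsurprising, but content distrusted:
print(p_trust("n", "n", "n", "n"))  # 0.55 with equal weighting
```

Lowering w_content pushes the same inputs toward p(T = y|ρ), which is exactly the dominance behaviour the model is meant to allow for.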

We term such a model a probabilistic fusion model, as both content and representation components are reconciled in order to produce the final decision of trustworthiness. The model allows for one component to dominate decision making. For example, if visual fluency is broken, then p(T = y|ρ) would be much higher than p(T = y|γ), reflecting that representational aspects of the image dominate decision making of image trustworthiness.

1 https://www.newyorker.com/humor/borowitz-report/putin-announces-historic-g1-summit.

In addition, the following probabilistic relationships are a consequence of the decision model:

$$
\begin{aligned}
p(R_1 = y) &= p(R_1 = y, C_1 = y) + p(R_1 = y, C_1 = n) \\
           &= p(R_1 = y, C_2 = y) + p(R_1 = y, C_2 = n) \\
p(R_2 = y) &= p(R_2 = y, C_1 = y) + p(R_2 = y, C_1 = n) \\
           &= p(R_2 = y, C_2 = y) + p(R_2 = y, C_2 = n)
\end{aligned}
$$

and the converse

$$
\begin{aligned}
p(C_1 = y) &= p(C_1 = y, R_1 = y) + p(C_1 = y, R_1 = n) \\
           &= p(C_1 = y, R_2 = y) + p(C_1 = y, R_2 = n) \\
p(C_2 = y) &= p(C_2 = y, R_1 = y) + p(C_2 = y, R_1 = n) \\
           &= p(C_2 = y, R_2 = y) + p(C_2 = y, R_2 = n)
\end{aligned}
$$

The preceding probabilistic relationships express that decision making around content and representation do not influence each other. For example, the probability of a decision that a subject trusts Putin (the person), denoted p(C1 = y), does not vary according to whether the subject decides that the image has been manipulated, denoted p(C1 = y, R1 = y) + p(C1 = y, R1 = n), or whether they detect something unexpected in the image, denoted p(C1 = y, R2 = y) + p(C1 = y, R2 = n).
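As a concrete check of these identities, the following sketch verifies that p(C1 = y) is the same whether C1 was measured together with R1 or with R2. The joint probabilities and the names ctx_c1_r1, ctx_c1_r2, and marginal_c1 are made up for illustration.

```python
# Hypothetical joint probabilities for two contexts sharing C1:
# context (C1, R1) and context (C1, R2). Keys are (c1, r) outcomes.
ctx_c1_r1 = {("y", "y"): 0.40, ("y", "n"): 0.20,
             ("n", "y"): 0.10, ("n", "n"): 0.30}
ctx_c1_r2 = {("y", "y"): 0.35, ("y", "n"): 0.25,
             ("n", "y"): 0.15, ("n", "n"): 0.25}

def marginal_c1(ctx):
    """p(C1 = y), obtained by summing out the representation variable."""
    return ctx[("y", "y")] + ctx[("y", "n")]

# The identities above demand equal marginals across the two contexts:
assert abs(marginal_c1(ctx_c1_r1) - marginal_c1(ctx_c1_r2)) < 1e-9
print(marginal_c1(ctx_c1_r1))  # 0.6 in both contexts
```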

[Fig. 1. Probabilistic fusion model of trust. The stimulus S feeds the content variables C1, C2 and the representation variables R1, R2; these feed the latent variables γ and ρ respectively, which jointly determine the trust decision T.]


3 Contextuality

A Bell scenario experiment involves two systems C (content) and R (representation). The content system C is probed with two questions modeled by bivalent variables C1 and C2 both of which range over the outcomes {y, n}. Similarly for system R with variables R1 and R2. Four measurement contexts are defined by jointly measuring one variable from each system:


$$
\begin{array}{c|cc|cc}
 & R_1 = y & R_1 = n & R_2 = y & R_2 = n \\
\hline
C_1 = y & p_1 & p_2 & p_5 & p_6 \\
C_1 = n & p_3 & p_4 & p_7 & p_8 \\
\hline
C_2 = y & p_9 & p_{10} & p_{13} & p_{14} \\
C_2 = n & p_{11} & p_{12} & p_{15} & p_{16}
\end{array} \tag{1}
$$

According to the first principle of Contextuality-by-Default, random variables should be indexed according to the experimental conditions in which they are measured [5]. For example, variable C1 is jointly measured with R1 in one experimental condition and with R2 in another. For this reason, two variables C11 and C12 are introduced. The same holds for the other three random variables, resulting in eight random variables in total. Their expectations are computed as follows [4]:

$$
\begin{aligned}
\langle C_{11} \rangle &= 2(p_1 + p_2) - 1 && (2) \\
\langle C_{12} \rangle &= 2(p_5 + p_6) - 1 && (3) \\
\langle C_{21} \rangle &= 2(p_9 + p_{10}) - 1 && (4) \\
\langle C_{22} \rangle &= 2(p_{13} + p_{14}) - 1 && (5) \\
\langle R_{11} \rangle &= 2(p_1 + p_3) - 1 && (6) \\
\langle R_{12} \rangle &= 2(p_9 + p_{11}) - 1 && (7) \\
\langle R_{21} \rangle &= 2(p_5 + p_7) - 1 && (8) \\
\langle R_{22} \rangle &= 2(p_{13} + p_{15}) - 1 && (9)
\end{aligned}
$$
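Equations (2)-(9) are straightforward to compute from the sixteen joint probabilities in (1). The sketch below uses placeholder probabilities and a hypothetical helper expect; the numbers are illustrative, not data from the experiment.

```python
# Expectations of the eight context-indexed variables, Eqs. (2)-(9),
# from the sixteen joint probabilities of table (1). Placeholder values.
p = {i: v for i, v in enumerate(
    [0.40, 0.20, 0.10, 0.30,   # p1..p4   context (C1, R1)
     0.35, 0.25, 0.15, 0.25,   # p5..p8   context (C1, R2)
     0.30, 0.30, 0.20, 0.20,   # p9..p12  context (C2, R1)
     0.25, 0.35, 0.25, 0.15],  # p13..p16 context (C2, R2)
    start=1)}

def expect(i, j):
    """<X> = 2 * p(X = y) - 1, where p(X = y) = p_i + p_j in table (1)."""
    return 2 * (p[i] + p[j]) - 1

E = {
    "C11": expect(1, 2),  "C12": expect(5, 6),
    "C21": expect(9, 10), "C22": expect(13, 14),
    "R11": expect(1, 3),  "R12": expect(9, 11),
    "R21": expect(5, 7),  "R22": expect(13, 15),
}
print(E)
```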

Analysis of contextuality in Bell scenario experiments relies on the no-signalling condition. The experience so far in quantum cognition is that it is challenging to design experiments where this condition holds [6]. This challenge raises the question of whether any meaningful conception of contextuality exists when signalling is present. [4] present a theory that specifies a threshold for signalling below which meaningful contextuality analysis can be performed. Using their approach, the degree of signalling Δ0 between the content and representation systems is computed as follows:

$$
\Delta_0 = \frac{1}{2}\left(|\langle C_{11}\rangle - \langle C_{12}\rangle| + |\langle C_{21}\rangle - \langle C_{22}\rangle| + |\langle R_{11}\rangle - \langle R_{12}\rangle| + |\langle R_{21}\rangle - \langle R_{22}\rangle|\right) \tag{10}
$$
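Equation (10) reduces to a few absolute differences once the expectations are known. A minimal transcription, with a hypothetical function name delta_0 and invented expectation values:

```python
def delta_0(E):
    """Degree of signalling, Eq. (10): half the sum of absolute
    differences between each variable's expectations across contexts."""
    pairs = [("C11", "C12"), ("C21", "C22"), ("R11", "R12"), ("R21", "R22")]
    return 0.5 * sum(abs(E[a] - E[b]) for a, b in pairs)

# Example with hypothetical expectations: no signalling in the content
# system, a small amount in the representation system.
E = {"C11": 0.2, "C12": 0.2, "C21": 0.2, "C22": 0.2,
     "R11": 0.0, "R12": 0.1, "R21": -0.1, "R22": 0.0}
print(delta_0(E))  # 0.1
```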