- •Preface
- •Contents
- •Contributors
- •Modeling Meaning Associated with Documental Entities: Introducing the Brussels Quantum Approach
- •1 Introduction
- •2 The Double-Slit Experiment
- •3 Interrogative Processes
- •4 Modeling the QWeb
- •5 Adding Context
- •6 Conclusion
- •Appendix 1: Interference Plus Context Effects
- •Appendix 2: Meaning Bond
- •References
- •1 Introduction
- •2 Bell Test in the Problem of Cognitive Semantic Information Retrieval
- •2.1 Bell Inequality and Its Interpretation
- •2.2 Bell Test in Semantic Retrieving
- •3 Results
- •References
- •1 Introduction
- •2 Basics of Quantum Probability Theory
- •3 Steps to Build an HSM Model
- •3.1 How to Determine the Compatibility Relations
- •3.2 How to Determine the Dimension
- •3.5 Compute the Choice Probabilities
- •3.6 Estimate Model Parameters, Compare and Test Models
- •4 Computer Programs
- •5 Concluding Comments
- •References
- •Basics of Quantum Theory for Quantum-Like Modeling Information Retrieval
- •1 Introduction
- •3 Quantum Mathematics
- •3.1 Hermitian Operators in Hilbert Space
- •3.2 Pure and Mixed States: Normalized Vectors and Density Operators
- •4 Quantum Mechanics: Postulates
- •5 Compatible and Incompatible Observables
- •5.1 Post-Measurement State From the Projection Postulate
- •6 Interpretations of Quantum Mechanics
- •6.1 Ensemble and Individual Interpretations
- •6.2 Information Interpretations
- •7 Quantum Conditional (Transition) Probability
- •9 Formula of Total Probability with the Interference Term
- •9.1 Växjö (Realist Ensemble Contextual) Interpretation of Quantum Mechanics
- •10 Quantum Logic
- •11 Space of Square Integrable Functions as a State Space
- •12 Operation of Tensor Product
- •14 Qubit
- •15 Entanglement
- •References
- •1 Introduction
- •2 Background
- •2.1 Distributional Hypothesis
- •2.2 A Brief History of Word Embedding
- •3 Applications of Word Embedding
- •3.1 Word-Level Applications
- •3.2 Sentence-Level Application
- •3.3 Sentence-Pair Level Application
- •3.4 Seq2seq Application
- •3.5 Evaluation
- •4 Reconsidering Word Embedding
- •4.1 Limitations
- •4.2 Trends
- •4.4 Towards Dynamic Word Embedding
- •5 Conclusion
- •References
- •1 Introduction
- •2 Motivating Example: Car Dealership
- •3 Modelling Elementary Data Types
- •3.1 Orthogonal Data Types
- •3.2 Non-orthogonal Data Types
- •4 Data Type Construction
- •5 Quantum-Based Data Type Constructors
- •5.1 Tuple Data Type Constructor
- •5.2 Set Data Type Constructor
- •6 Conclusion
- •References
- •Incorporating Weights into a Quantum-Logic-Based Query Language
- •1 Introduction
- •2 A Motivating Example
- •5 Logic-Based Weighting
- •6 Related Work
- •7 Conclusion
- •References
- •Searching for Information with Meet and Join Operators
- •1 Introduction
- •2 Background
- •2.1 Vector Spaces
- •2.2 Sets Versus Vector Spaces
- •2.3 The Boolean Model for IR
- •2.5 The Probabilistic Models
- •3 Meet and Join
- •4 Structures of a Query-by-Theme Language
- •4.1 Features and Terms
- •4.2 Themes
- •4.3 Document Ranking
- •4.4 Meet and Join Operators
- •5 Implementation of a Query-by-Theme Language
- •6 Related Work
- •7 Discussion and Future Work
- •References
- •Index
- •Preface
- •Organization
- •Contents
- •Fundamentals
- •Why Should We Use Quantum Theory?
- •1 Introduction
- •2 On the Human Science/Natural Science Issue
- •3 The Human Roots of Quantum Science
- •4 Qualitative Parallels Between Quantum Theory and the Human Sciences
- •5 Early Quantitative Applications of Quantum Theory to the Human Sciences
- •6 Epilogue
- •References
- •Quantum Cognition
- •1 Introduction
- •2 The Quantum Persuasion Approach
- •3 Experimental Design
- •3.1 Testing for Perspective Incompatibility
- •3.2 Quantum Persuasion
- •3.3 Predictions
- •4 Results
- •4.1 Descriptive Statistics
- •4.2 Data Analysis
- •4.3 Interpretation
- •5 Discussion and Concluding Remarks
- •References
- •1 Introduction
- •2 A Probabilistic Fusion Model of Trust
- •3 Contextuality
- •4 Experiment
- •4.1 Subjects
- •4.2 Design and Materials
- •4.3 Procedure
- •4.4 Results
- •4.5 Discussion
- •5 Summary and Conclusions
- •References
- •Probabilistic Programs for Investigating Contextuality in Human Information Processing
- •1 Introduction
- •2 A Framework for Determining Contextuality in Human Information Processing
- •3 Using Probabilistic Programs to Simulate Bell Scenario Experiments
- •References
- •1 Familiarity and Recollection, Verbatim and Gist
- •2 True Memory, False Memory, over Distributed Memory
- •3 The Hamiltonian Based QEM Model
- •4 Data and Prediction
- •5 Discussion
- •References
- •Decision-Making
- •1 Introduction
- •1.2 Two Stage Gambling Game
- •2 Quantum Probabilities and Waves
- •2.1 Intensity Waves
- •2.2 The Law of Balance and Probability Waves
- •2.3 Probability Waves
- •3 Law of Maximal Uncertainty
- •3.1 Principle of Entropy
- •3.2 Mirror Principle
- •4 Conclusion
- •References
- •1 Introduction
- •4 Quantum-Like Bayesian Networks
- •7.1 Results and Discussion
- •8 Conclusion
- •References
- •Cybernetics and AI
- •1 Introduction
- •2 Modeling of the Vehicle
- •2.1 Introduction to Braitenberg Vehicles
- •2.2 Quantum Approach for BV Decision Making
- •3 Topics in Eigenlogic
- •3.1 The Eigenlogic Operators
- •3.2 Incorporation of Fuzzy Logic
- •4 BV Quantum Robot Simulation Results
- •4.1 Simulation Environment
- •5 Quantum Wheel of Emotions
- •6 Discussion and Conclusion
- •7 Credits and Acknowledgements
- •References
- •1 Introduction
- •2.1 What Is Intelligence?
- •2.2 Human Intelligence and Quantum Cognition
- •2.3 In Search of the General Principles of Intelligence
- •3 Towards a Moral Test
- •4 Compositional Quantum Cognition
- •4.1 Categorical Compositional Model of Meaning
- •4.2 Proof of Concept: Compositional Quantum Cognition
- •5 Implementation of a Moral Test
- •5.2 Step II: A Toy Example, Moral Dilemmas and Context Effects
- •5.4 Step IV. Application for AI
- •6 Discussion and Conclusion
- •Appendix A: Example of a Moral Dilemma
- •References
- •Probability and Beyond
- •1 Introduction
- •2 The Theory of Density Hypercubes
- •2.1 Construction of the Theory
- •2.2 Component Symmetries
- •2.3 Normalisation and Causality
- •3 Decoherence and Hyper-decoherence
- •3.1 Decoherence to Classical Theory
- •4 Higher Order Interference
- •5 Conclusions
- •A Proofs
- •References
- •Information Retrieval
- •1 Introduction
- •2 Related Work
- •3 Quantum Entanglement and Bell Inequality
- •5 Experiment Settings
- •5.1 Dataset
- •5.3 Experimental Procedure
- •6 Results and Discussion
- •7 Conclusion
- •A Appendix
- •References
- •Investigating Bell Inequalities for Multidimensional Relevance Judgments in Information Retrieval
- •1 Introduction
- •2 Quantifying Relevance Dimensions
- •3 Deriving a Bell Inequality for Documents
- •3.1 CHSH Inequality
- •3.2 CHSH Inequality for Documents Using the Trace Method
- •4 Experiment and Results
- •5 Conclusion and Future Work
- •A Appendix
- •References
- •Short Paper
- •An Update on Updating
- •References
- •Author Index
- •The Sure Thing principle, the Disjunction Effect and the Law of Total Probability
- •Material and methods
- •Experimental results.
- •Experiment 1
- •Experiment 2
- •More versus less risk averse participants
- •Theoretical analysis
- •Shared features of the theoretical models
- •The Markov model
- •The quantum-like model
- •Logistic model
- •Theoretical model performance
- •Model comparison for risk attitude partitioning.
- •Discussion
- •Authors contributions
- •Ethical clearance
- •Funding
- •Acknowledgements
- •References
- •Markov versus quantum dynamic models of belief change during evidence monitoring
- •Results
- •Model comparisons.
- •Discussion
- •Methods
- •Participants.
- •Task.
- •Procedure.
- •Mathematical Models.
- •Acknowledgements
- •New Developments for Value-based Decisions
- •Context Effects in Preferential Choice
- •Comparison of Model Mechanisms
- •Qualitative Empirical Comparisons
- •Quantitative Empirical Comparisons
- •Neural Mechanisms of Value Accumulation
- •Neuroimaging Studies of Context Effects and Attribute-Wise Decision Processes
- •Concluding Remarks
- •Acknowledgments
- •References
- •Comparison of Markov versus quantum dynamical models of human decision making
- •CONFLICT OF INTEREST
- •Endnotes
- •FURTHER READING
- •REFERENCES
suai.ru/our-contacts |
quantum machine learning |
96 C. Moreira and A. Wichert
An example of a quantum-like influence diagram is presented in Fig. 2. One can notice the three different types of nodes: (1) random variable nodes (circle-shaped), denoted by X1, ..., XN, belonging to some quantum-like Bayesian network; (2) a decision node (rectangle-shaped), denoted by D, which corresponds to the decision that we want to make; and (3) a utility node (diamond-shaped), denoted by U, which in the scope of this paper represents the payoffs in the Prisoner's Dilemma game.
The goal is to maximise the expected utility by taking into consideration the probabilistic inferences of the quantum-like Bayesian network, which makes use of quantum interference effects to accommodate and predict violations of the Sure Thing Principle.
In the next sections, we will address each of these three components separately.
4 Quantum-Like Bayesian Networks
Quantum-like Bayesian networks were initially proposed by Moreira and Wichert (2014, 2016). They can be defined by a directed acyclic graph structure in which each node represents a different quantum random variable and each edge represents a direct influence from the source node to the target node. The graph can represent independence relationships between variables, and each node is associated with a conditional probability table that specifies a distribution of complex quantum probability amplitudes over the values of the node given each possible joint assignment of values of its parents. In other words, a quantum-like Bayesian network is defined in the same way as a classical network, with the difference that real probability values are replaced by complex probability amplitudes.
In order to perform exact inferences in a quantum-like Bayesian network, one needs to compute the following quantities.
–Quantum-like full joint probability distribution. The quantum-like full joint complex probability amplitude distribution over a set of N random variables, ψ(X1, X2, ..., XN), corresponds to the probability distribution assigned to all of these random variables occurring together in a Hilbert space. The full joint complex probability amplitude distribution of a quantum-like Bayesian network is given by

\[
\psi(X_1, \dots, X_N) = \prod_{j=1}^{N} \psi(X_j \mid \text{Parents}(X_j)) \qquad (2)
\]

Note that, in Eq. 2, the Xj are the random variables (the nodes of the network), Parents(Xj) corresponds to all parent nodes of Xj, and ψ(Xj | Parents(Xj)) is the complex probability amplitude associated with Xj given its parents. The probability value is extracted by applying Born's rule, that is, by taking the squared magnitude of the joint probability amplitude ψ(X1, ..., XN):

\[
Pr(X_1, \dots, X_N) = |\psi(X_1, \dots, X_N)|^2 \qquad (3)
\]
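As a quick sanity check, the product rule of Eq. 2 and Born's rule of Eq. 3 can be sketched in a few lines of Python. The two-node network, amplitude magnitudes, and phases below are hypothetical, chosen only so that the amplitude tables are properly normalised.

```python
import numpy as np

# Hypothetical two-node network X1 -> X2 with binary values 't'/'f'.
# Amplitude tables (complex numbers); magnitudes satisfy |.|^2 sums to 1.
psi_X1 = {'t': 0.8 * np.exp(1j * 0.0), 'f': 0.6 * np.exp(1j * 0.5)}
psi_X2_given_X1 = {  # key = (x2 value, x1 value)
    ('t', 't'): 0.7, ('f', 't'): np.sqrt(1 - 0.49),
    ('t', 'f'): 0.5, ('f', 'f'): np.sqrt(1 - 0.25),
}

def joint_amplitude(x1, x2):
    """Eq. (2): product of conditional amplitudes along the DAG."""
    return psi_X1[x1] * psi_X2_given_X1[(x2, x1)]

def joint_probability(x1, x2):
    """Eq. (3): Born's rule, squared magnitude of the joint amplitude."""
    return abs(joint_amplitude(x1, x2)) ** 2

print(joint_probability('t', 't'))  # |0.8 * 0.7|^2, approximately 0.3136
```

The phase of psi_X1['f'] does not affect this single-path probability; phases only matter once paths are summed, as in the marginalisation below.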
Introducing Quantum-Like Influence Diagrams for Violations
–Quantum-like marginalization. Given a query random variable X, let Y denote the unobserved variables in the network. The marginal distribution of X is the amplitude probability distribution of X, averaging over the information about Y. The quantum-like marginal probability is defined by Eq. 4, where the summation runs over all possible y, i.e., all possible combinations of values of the unobserved variables Y. The term γ corresponds to a normalisation factor. Since the conditional probability tables used in Bayesian networks are not unitary operators with the constraint of double stochasticity (as required in other works in the literature (Busemeyer et al. 2006b; Pothos and Busemeyer 2009)), we need to normalise the final scores. This normalisation is consistent with the normalisation of wave functions used in Feynman's path diagrams. In classical Bayesian inference, on the other hand, normalisation is performed due to the independence assumptions made in Bayes' rule.

\[
Pr(X \mid e) = \gamma \left| \sum_{y} \prod_{k=1}^{N} \psi(X_k \mid \text{Parents}(X_k), e, y) \right|^{2} \qquad (4)
\]
Expanding Eq. 4 leads to the quantum marginalisation formula (Moreira and Wichert 2014), which is composed of two parts: one representing the classical probability and the other representing the quantum interference term (which corresponds to the emergence of destructive/constructive interference effects):

\[
Pr(X \mid e) = \gamma \left[ \sum_{i=1}^{|Y|} \left| \prod_{k=1}^{N} \psi(X_k \mid \text{Parents}(X_k), e, y = i) \right|^{2} + 2 \cdot Interference \right] \qquad (5)
\]

\[
Interference = \sum_{i=1}^{|Y|-1} \sum_{j=i+1}^{|Y|} \left| \prod_{k=1}^{N} \psi(X_k \mid \text{Parents}(X_k), e, y = i) \right| \cdot \left| \prod_{k=1}^{N} \psi(X_k \mid \text{Parents}(X_k), e, y = j) \right| \cdot \cos(\theta_i - \theta_j)
\]
Note that, in Eq. 5, if one sets (θi − θj ) to π/2, then cos(θi − θj ) = 0. This means that the quantum interference term is canceled and the quantum-like Bayesian Network collapses to its classical counterpart.
Formal methods to assign values to quantum interference terms are still an open research question; however, some work has already been done in that direction (Yukalov and Sornette 2011; Moreira and Wichert 2016, 2017).
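The marginalisation of Eq. 5 can be sketched directly. The path magnitudes `a` and phases `theta` below are hypothetical, standing in for the magnitudes of the products of conditional amplitudes over a single unobserved binary variable Y.

```python
import numpy as np

# Hypothetical magnitudes of the amplitude products for y = 1, 2,
# i.e. |prod_k psi(Xk | Parents(Xk), e, y=i)|, and their phases.
a = np.array([0.6, 0.8])
theta = np.array([0.0, np.pi / 3])

def marginal_score(a, theta):
    """Eq. (5): classical terms plus twice the pairwise interference."""
    classical = np.sum(a ** 2)
    interference = 0.0
    for i in range(len(a) - 1):
        for j in range(i + 1, len(a)):
            interference += a[i] * a[j] * np.cos(theta[i] - theta[j])
    return classical + 2 * interference

# gamma would normalise these scores across the outcomes of the query
# variable X; with a single outcome shown here we inspect the raw score.
print(marginal_score(a, theta))

# A phase difference of pi/2 cancels the interference term, recovering
# the classical law of total probability, as noted in the text.
assert abs(marginal_score(a, np.array([0.0, np.pi / 2])) - np.sum(a ** 2)) < 1e-12
```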
5 Maximum Expected Utility in Classical Influence Diagrams
In a decision scenario, D, given a set of possible decision rules, δA, the goal of influence diagrams is to compute the decision rule that leads to the maximum expected utility. PrδA(x|a) corresponds to a full joint probability distribution over all possible outcomes, x, given the different actions a belonging to the decision rule δA.

\[
EU[D[\delta_A]] = \sum_{x} Pr_{\delta_A}(x \mid a)\, U(x, a) \qquad (6)
\]
The goal is to choose some action a that maximises the expected utility with respect to some decision rule, δA:
\[
a = \operatorname{argmax}_{\delta_A} EU[D[\delta_A]]
\]
One can map the expected utility formalism to the scope of Bayesian networks in the following way. Knowing that PrδA(x|a) corresponds to a full joint probability distribution over all possible outcomes, x, given the different actions a belonging to the decision rule δA, we can decompose the full joint probability distribution, via the chain rule of probability theory, into the product of each node given its parent nodes.
\[
EU[D[\delta_A]] = \sum_{x} Pr_{\delta_A}(x \mid a)\, U(x, a) \qquad (7)
\]

\[
EU[D[\delta_A]] = \sum_{X_1, \dots, X_N, A} \prod_{i} Pr(X_i \mid Pa_{X_i})\, U(Pa_U)\, \delta_A(A \mid Z) \qquad (8)
\]
In Eq. 8, Z = Pa_A represents the parent nodes of the action A. We can factorise Eq. 8 in terms of the decision rule, δA, obtaining
\[
EU[D[\delta_A]] = \sum_{Z, A} \delta_A(A \mid Z) \sum_{W} \prod_{i} Pr(X_i \mid Pa_{X_i})\, U(Pa_U), \qquad (9)
\]
where W = {X1, . . . , XN } − Z corresponds to all nodes of the Bayesian Network that are not contained in the set of nodes in Z.
By marginalising the summation over W , we obtain an expected utility formula that is written only in terms of the factor μ(A, Z). Note that this factor corresponds to a conditional distribution table of random variable Z (the outcomes of some action a) and action a.
\[
EU[D[\delta_A]] = \sum_{Z, A} \delta_A(A \mid Z)\, \mu(A, Z) \qquad (10)
\]
The maximum expected utility decision rule for a classical influence diagram is given by (Koller and Friedman 2009):
\[
\delta_A^{*}(a \mid Z) =
\begin{cases}
1 & \text{if } a = \operatorname{argmax}_{A}\, \mu(A, Z) \\
0 & \text{otherwise}
\end{cases} \qquad (11)
\]
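The classical decision rule of Eqs. 10-11 amounts to a per-context argmax over the factor μ(A, Z). A minimal sketch, with hypothetical action names, contexts, and μ values:

```python
# Hypothetical factor mu(A, Z): (action, context) -> expected utility.
mu = {
    ('defect', 'z0'): 4.0, ('cooperate', 'z0'): 2.5,
    ('defect', 'z1'): 1.0, ('cooperate', 'z1'): 3.0,
}

def optimal_rule(mu):
    """Eq. (11): delta_A(a|z) = 1 iff a maximises mu(A, z), else 0.

    Returned as a map z -> best action (the deterministic rule)."""
    actions = {a for (a, _) in mu}
    contexts = {z for (_, z) in mu}
    return {z: max(actions, key=lambda a: mu[(a, z)]) for z in contexts}

print(sorted(optimal_rule(mu).items()))  # [('z0', 'defect'), ('z1', 'cooperate')]
```

Plugging the deterministic rule back into Eq. 10 collapses the sum over A, so the expected utility is just the sum of the per-context maxima of μ.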
6 Maximum Expected Utility in Quantum-Like Influence Diagrams
The proposed quantum-like influence diagram is built upon the formalism of quantum-like Bayesian networks. This means that real classical probabilities need to be replaced by complex quantum amplitudes.
We start the derivation with the notion of expected utility already presented in the previous section. In a decision scenario, D, given a set of possible decision rules, δA, the goal of the quantum-like influence diagram is to compute the decision rule that leads to the maximum expected utility. PrδA(x|a) corresponds to a full joint probability distribution over all possible outcomes, x, given the different actions a belonging to the decision rule δA.
EU [D [δA]] = P rδA (x|a) U (x, a) |
(12) |
x |
|
For simplicity, let’s consider the decision scenario where we have two binary events X1 and X2. Then, we can decompose the classical expected utility equation as
\[
EU[D[\delta_A]] = \sum_{X_1, X_2, A} \delta_A(A \mid X_2)\, Pr(X_1)\, Pr(X_2 \mid X_1)\, U(X_1, A) \qquad (13)
\]
Like before, we can factorise this formula in terms of the decision rule δA, obtaining
\[
EU[D[\delta_A]] = \sum_{A, X_2} \delta_A(A \mid X_2) \sum_{X_1} Pr(X_1)\, Pr(X_2 \mid X_1)\, U(X_1, A) \qquad (14)
\]
For binary events, marginalising out X1 leaves an expression over both X2 and A,
\[
EU[D[\delta_A]] = \sum_{A, X_2} \delta_A(A \mid X_2) \cdot \mu(X_2, A) \qquad (15)
\]
where μ (X2, A) is a factor with the utility function expressed in terms of the distribution of X2. More specifically, it is given by
\[
\mu(X_2, A) = Pr(X_1 = t)\, Pr(X_2 \mid X_1 = t)\, U(X_1 = t, A) + Pr(X_1 = f)\, Pr(X_2 \mid X_1 = f)\, U(X_1 = f, A) \qquad (16)
\]
Since the proposed quantum-like influence diagram makes use of a quantum-like Bayesian network, we need to convert the classical real probabilities into complex quantum amplitudes. This conversion follows Born's rule: for some event A with quantum amplitude ψA, the classical probability is the squared magnitude of the amplitude, Pr(A) = |ψA|² (Deutsch 1988; Zurek 2011). Since in Eq. 16 we have a utility factor expressed in terms of the probability distribution of X1, we cannot apply Born's rule directly, because we would not be satisfying its definition. For this reason, it is not possible to make a direct mapping of this joint probability distribution over a utility function into a quantum-like scenario.
We propose to split Eq. 16 into two factors: one containing a classical probability and another containing the utility function. This procedure is inspired by the Quantum Decision Theory model of Yukalov and Sornette (2015). In their model, a prospect πa is a composite event represented in the Hilbert space by an eigenstate |a⟩. The probability of the prospect is composed of two factors: a utility factor, f(πa) (a factor containing the classical utility of a lottery), and an attraction factor, q(πa) (a probabilistic factor that results from the quantum interference effect). More specifically, for a lottery La, the utility factor f(πa) corresponds to minimizing the Kullback-Leibler information functional, which in the simple case of uncertainty yields (Yukalov and Sornette 2015)

\[
f(\pi_a) = \frac{U(L_a)}{\sum_{a} U(L_a)}.
\]

The attraction factor, on the other hand, represents the behavioural biases, which are expressed through quantum interference, and takes a value in the range −1 < q(πa) < 1. The final probability of the prospect is then given by Pr(πa) = f(πa) + q(πa).
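The decomposition Pr(πa) = f(πa) + q(πa) can be illustrated directly. The lottery names, utility values, and the value of the attraction factor q below are hypothetical, chosen only to show the arithmetic.

```python
# Hypothetical lottery utilities for two prospects.
utilities = {'L1': 6.0, 'L2': 4.0}

def utility_factor(name, utilities):
    """f(pi_a) = U(L_a) / sum_a U(L_a), per Yukalov and Sornette (2015)."""
    return utilities[name] / sum(utilities.values())

# Attraction factor from interference, constrained to (-1, 1); the value
# here is an assumption, not an estimate from data.
q = -0.15

prospect_probability = utility_factor('L1', utilities) + q
print(prospect_probability)  # 0.6 - 0.15 = 0.45
```

A negative q captures an aversion that depresses the choice probability below its purely utility-based value, which is the mechanism the chapter reuses for μ(X2, A).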
In this work, taking Pr(πa) to be the classical probability part of the factor μ(X2, A) and f(πa) to be the classical utility corresponding to the choice of some action A in the same factor, we obtain
\[
Pr(\pi_a) = Pr(X_1 = t)\, Pr(X_2 \mid X_1 = t) + Pr(X_1 = f)\, Pr(X_2 \mid X_1 = f)
\]

\[
f(\pi_a) = U(X_1 = t, A) + U(X_1 = f, A)
\]
We can obtain the attraction factor, q(πa), by replacing the classical real numbers in Pr(πa) with quantum-like amplitudes. The quantum interference effects emerge by applying Born's rule,
\[
q(\pi_a) = |\psi(X_1 = t)\psi(X_2 \mid X_1 = t) + \psi(X_1 = f)\psi(X_2 \mid X_1 = f)|^2
\]

\[
q(\pi_a) = |\psi(X_1 = t)\psi(X_2 \mid X_1 = t)|^2 + |\psi(X_1 = f)\psi(X_2 \mid X_1 = f)|^2 + 2\, Interf, \qquad (17)
\]
where the quantum interference term is given by
\[
Interf = |\psi(X_1 = t)\psi(X_2 \mid X_1 = t)|\, |\psi(X_1 = f)\psi(X_2 \mid X_1 = f)|\, \cos(\theta_1 - \theta_2). \qquad (18)
\]

Since the factor μ(X2, A) represents a probability distribution over the utility functions, we need to update the utility factor f(πa) so that it also represents this distribution over the quantum interference term. The quantum interference term, for N random variables, grows as (Moreira and Wichert 2016)

\[
2 \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} |\psi(X_i = t)\psi(X_j \mid X_i = t)|\, |\psi(X_i = f)\psi(X_j \mid X_i = f)|\, \cos(\theta_i - \theta_j).
\]
The factor μ(X2, A) already specifies the utility function expressed in terms of the distribution of X2. When we consider X2 a quantum random variable, this probability distribution is extended to incorporate the quantum interference effects. So, given that the utility function is expressed in terms of this probability distribution, we propose to update it in the same way as the interference term, as

\[
\sum_{i=1}^{N} U(X_i = t, A)\, U(X_i = f, A).
\]
For our example, where the utility factor μ(X2, A) is expressed in terms of the distribution of X2, we have

\[
f(\pi_a) = U(X_1 = t, A) + U(X_1 = f, A) + U(X_1 = t, A)\, U(X_1 = f, A).
\]
Under this representation, the result of the marginalisation, μ(X2, A), will be given by the inner product of the vector representation of the utility factor f(πa) with the attraction factor q(πa),

\[
\mu(X_2, A) = \langle q(\pi_a) | f(\pi_a) \rangle,
\]

where the vector representations correspond to

\[
| q(\pi_a) \rangle =
\begin{pmatrix}
|\psi(X_1 = t)\psi(X_2 \mid X_1 = t)|^2 \\
|\psi(X_1 = f)\psi(X_2 \mid X_1 = f)|^2 \\
Interf
\end{pmatrix},
\qquad
| f(\pi_a) \rangle =
\begin{pmatrix}
U(X_1 = t, A) \\
U(X_1 = f, A) \\
U(X_1 = t, A)\, U(X_1 = f, A)
\end{pmatrix}.
\]
This way, the final marginalisation for the quantum-like influence diagram is

\[
\mu(X_2, A) = \langle q(\pi_a) | f(\pi_a) \rangle = |\psi(X_1 = t)\psi(X_2 \mid X_1 = t)|^2 \cdot U(X_1 = t, A) + |\psi(X_1 = f)\psi(X_2 \mid X_1 = f)|^2 \cdot U(X_1 = f, A) + Interf \cdot U(X_1 = t, A)\, U(X_1 = f, A), \qquad (19)
\]

where

\[
Interf = 2\, |\psi(X_1 = t)\psi(X_2 \mid X_1 = t)|\, |\psi(X_1 = f)\psi(X_2 \mid X_1 = f)|\, \cos(\theta_1 - \theta_2).
\]
Note that, in Eq. 19, if one sets the phase difference (θ1 − θ2) to π/2, then cos(θ1 − θ2) = 0. This means that the quantum interference term is canceled and the quantum-like influence diagram collapses to its classical counterpart.
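A sketch of the marginalisation ⟨q(πa)|f(πa)⟩ for the two-variable example. The amplitude magnitudes and utilities are hypothetical; the check at the end confirms that a phase difference of π/2 recovers the classical expected-utility factor.

```python
import numpy as np

# Hypothetical amplitude magnitudes and utilities for the example.
amp_t = 0.7 * 0.8    # |psi(X1 = t) psi(X2 | X1 = t)|
amp_f = 0.5 * 0.6    # |psi(X1 = f) psi(X2 | X1 = f)|
U_t, U_f = 5.0, 2.0  # U(X1 = t, A), U(X1 = f, A)

def mu(theta1, theta2):
    """<q(pi_a)|f(pi_a)>: inner product of the two three-entry vectors."""
    q_vec = np.array([amp_t ** 2, amp_f ** 2,
                      2 * amp_t * amp_f * np.cos(theta1 - theta2)])
    f_vec = np.array([U_t, U_f, U_t * U_f])
    return q_vec @ f_vec

# With a phase difference of pi/2 the interference entry vanishes and the
# diagram collapses to the classical expected-utility factor.
classical = amp_t ** 2 * U_t + amp_f ** 2 * U_f
assert abs(mu(0.0, np.pi / 2) - classical) < 1e-12

print(mu(0.0, 0.0) - classical)  # positive interference contribution
```

Varying θ1 − θ2 over [0, 2π] sweeps the factor between its maximally boosted and maximally suppressed values, which is exactly the freedom exploited in the Prisoner's Dilemma fit below.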
Finally, the goal is to find the decision rule δA that maximizes μ (X2, A),
\[
\delta_A^{*}(a \mid X_2) =
\begin{cases}
1 & \text{if } a = \operatorname{argmax}_{A}\, \mu(X_2, A) \\
0 & \text{otherwise}
\end{cases} \qquad (20)
\]
We will refer to Eq. 20 as the quantum-like influence equation, since we cannot call it a maximization of the expected utility in the real sense, because we had to change the distribution of the utility factor μ(X2, A) in order to match the distribution over the quantum interference terms.
7 A Quantum-Like Influence Diagram for the Prisoner's Dilemma Game
Several paradoxical findings reported in the literature show that individuals do not act rationally in decision scenarios under uncertainty (Kuhberger et al. 2001; Tversky and Shafir 1992; Lambdin and Burdsal 2007; Hristova and Grinberg 2008; Busemeyer et al. 2006a). The quantum-like influence diagram can help accommodate these paradoxical decisions by manipulating the quantum interference effects that emerge from the inferences in the quantum-like Bayesian network, which can be used to re-estimate the expected utilities.
Fig. 3. Quantum-like influence diagram representing the Prisoner's Dilemma experiment from Shafir and Tversky (1992). Random variable X1 corresponds to the player's own beliefs about the action of his opponent (either believing the opponent will defect or that the opponent will cooperate), and X2 corresponds to the player's own strategy, i.e. either he confesses the crime or he remains silent.
We will model the works reported in Table 1 under the proposed quantum-like influence diagram. Figure 3 corresponds to the representation of the work of Shafir and Tversky (1992). The three types of nodes in the represented quantum-like influence diagram are the following:
–Random variable nodes: the circle-shaped nodes are the random variables belonging to the quantum-like Bayesian network representing the player who needs to make a decision in the Prisoner's Dilemma without being aware of the decision of his opponent. We modelled this network with two binary random variables, X1 and X2. X1 corresponds to the player's own beliefs about the action of his opponent (either believing the opponent will defect or that the opponent will cooperate), and X2 corresponds to the player's own strategy, i.e. either he confesses the crime (and therefore would find it safe to engage in a defect strategy) or he remains silent (and would prefer to engage in a cooperate strategy). The tables next to each random variable are conditional probability tables showing the distribution of the variable given its parent nodes. These conditional probability tables match the probability distributions reported in Table 1. In the specific case of Fig. 3, the table is filled with the values of the probability amplitudes identified in the work of Shafir and Tversky (1992).
–Action node: the rectangle-shaped node represents the action about which we want to make a decision. In the context of the Prisoner's Dilemma, we are interested in computing the maximum expected utility of defecting or not defecting (i.e. cooperating).
–Utility node: the diamond-shaped node corresponds to the payoffs that the player will receive for taking (or not taking) the action defect, given his own personal strategy. The values in this node are populated with the different payoffs used across the different experiments of the Prisoner's Dilemma game reported in the literature.
In the conditions where the player knows the strategy of his opponent, the quantum-like influence diagram collapses to its classical counterpart, since there is no uncertainty. This was already noticed in the previous works of Moreira and Wichert (2014, 2016, 2017). However, when the player is not informed about his opponent's decision, the quantum-like Bayesian network will produce interference effects (Eq. 5). When computing the maximum expected utility, we will marginalise out X1 as shown in Eq. 16. This results in a factor expressing the distribution of the player's personal strategy (either confess or remain silent) given his beliefs about his opponent's actions (either to defect or to cooperate). The quantum interference term will play an important role in determining which quantum parameters can influence the player's decision to switch from the classical (and rational) defect action towards the paradoxical decision found in the works in the literature, i.e. to cooperate.
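To make the mechanism concrete, here is a toy version of the diagram with hypothetical belief amplitudes and payoffs (the Table 1 values are not reproduced here). Classically, defection dominates; a suitable phase makes the interference term penalise defection enough to flip the preference towards cooperation.

```python
import numpy as np

# Hypothetical belief amplitude magnitudes over the opponent's move.
belief = {'defect': 0.8, 'cooperate': 0.6}

# Hypothetical payoffs U(opponent_move, my_action); defection dominates
# classically for both possible opponent moves.
U = {('defect', 'defect'): 3.0, ('cooperate', 'defect'): 5.0,
     ('defect', 'cooperate'): 1.0, ('cooperate', 'cooperate'): 4.0}

def quantum_like_score(action, theta):
    """mu-style score for one action, with interference phase theta."""
    b_d, b_c = belief['defect'], belief['cooperate']
    interf = 2 * b_d * b_c * np.cos(theta)
    return (b_d ** 2 * U[('defect', action)]
            + b_c ** 2 * U[('cooperate', action)]
            + interf * U[('defect', action)] * U[('cooperate', action)])

# Phase pi/2: no interference, so defection wins, as classically expected.
assert quantum_like_score('defect', np.pi / 2) > quantum_like_score('cooperate', np.pi / 2)

# Phase pi: negative interference hits defection harder (its utility
# product is larger), so the preference flips towards cooperation.
assert quantum_like_score('cooperate', np.pi) > quantum_like_score('defect', np.pi)
```

Fitting the phase to the observed cooperation rates is exactly the open parameter-estimation question noted earlier in the chapter.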
Fig. 4. Impact of quantum interference terms on the overall expected utility: (left) quantum parameters that maximize a cooperate decision, (center) variation of the expected utility when the player confesses (defects), and (right) variation of the expected utility when the player remains silent (cooperates).
Figure 4 demonstrates the impact of the quantum interference effects on the player's decision. The graphs in the centre and on the right of Fig. 4 represent