- •Preface
- •Contents
- •Contributors
- •Modeling Meaning Associated with Documental Entities: Introducing the Brussels Quantum Approach
- •1 Introduction
- •2 The Double-Slit Experiment
- •3 Interrogative Processes
- •4 Modeling the QWeb
- •5 Adding Context
- •6 Conclusion
- •Appendix 1: Interference Plus Context Effects
- •Appendix 2: Meaning Bond
- •References
- •1 Introduction
- •2 Bell Test in the Problem of Cognitive Semantic Information Retrieval
- •2.1 Bell Inequality and Its Interpretation
- •2.2 Bell Test in Semantic Retrieving
- •3 Results
- •References
- •1 Introduction
- •2 Basics of Quantum Probability Theory
- •3 Steps to Build an HSM Model
- •3.1 How to Determine the Compatibility Relations
- •3.2 How to Determine the Dimension
- •3.5 Compute the Choice Probabilities
- •3.6 Estimate Model Parameters, Compare and Test Models
- •4 Computer Programs
- •5 Concluding Comments
- •References
- •Basics of Quantum Theory for Quantum-Like Modeling Information Retrieval
- •1 Introduction
- •3 Quantum Mathematics
- •3.1 Hermitian Operators in Hilbert Space
- •3.2 Pure and Mixed States: Normalized Vectors and Density Operators
- •4 Quantum Mechanics: Postulates
- •5 Compatible and Incompatible Observables
- •5.1 Post-Measurement State From the Projection Postulate
- •6 Interpretations of Quantum Mechanics
- •6.1 Ensemble and Individual Interpretations
- •6.2 Information Interpretations
- •7 Quantum Conditional (Transition) Probability
- •9 Formula of Total Probability with the Interference Term
- •9.1 Växjö (Realist Ensemble Contextual) Interpretation of Quantum Mechanics
- •10 Quantum Logic
- •11 Space of Square Integrable Functions as a State Space
- •12 Operation of Tensor Product
- •14 Qubit
- •15 Entanglement
- •References
- •1 Introduction
- •2 Background
- •2.1 Distributional Hypothesis
- •2.2 A Brief History of Word Embedding
- •3 Applications of Word Embedding
- •3.1 Word-Level Applications
- •3.2 Sentence-Level Application
- •3.3 Sentence-Pair Level Application
- •3.4 Seq2seq Application
- •3.5 Evaluation
- •4 Reconsidering Word Embedding
- •4.1 Limitations
- •4.2 Trends
- •4.4 Towards Dynamic Word Embedding
- •5 Conclusion
- •References
- •1 Introduction
- •2 Motivating Example: Car Dealership
- •3 Modelling Elementary Data Types
- •3.1 Orthogonal Data Types
- •3.2 Non-orthogonal Data Types
- •4 Data Type Construction
- •5 Quantum-Based Data Type Constructors
- •5.1 Tuple Data Type Constructor
- •5.2 Set Data Type Constructor
- •6 Conclusion
- •References
- •Incorporating Weights into a Quantum-Logic-Based Query Language
- •1 Introduction
- •2 A Motivating Example
- •5 Logic-Based Weighting
- •6 Related Work
- •7 Conclusion
- •References
- •Searching for Information with Meet and Join Operators
- •1 Introduction
- •2 Background
- •2.1 Vector Spaces
- •2.2 Sets Versus Vector Spaces
- •2.3 The Boolean Model for IR
- •2.5 The Probabilistic Models
- •3 Meet and Join
- •4 Structures of a Query-by-Theme Language
- •4.1 Features and Terms
- •4.2 Themes
- •4.3 Document Ranking
- •4.4 Meet and Join Operators
- •5 Implementation of a Query-by-Theme Language
- •6 Related Work
- •7 Discussion and Future Work
- •References
- •Index
- •Preface
- •Organization
- •Contents
- •Fundamentals
- •Why Should We Use Quantum Theory?
- •1 Introduction
- •2 On the Human Science/Natural Science Issue
- •3 The Human Roots of Quantum Science
- •4 Qualitative Parallels Between Quantum Theory and the Human Sciences
- •5 Early Quantitative Applications of Quantum Theory to the Human Sciences
- •6 Epilogue
- •References
- •Quantum Cognition
- •1 Introduction
- •2 The Quantum Persuasion Approach
- •3 Experimental Design
- •3.1 Testing for Perspective Incompatibility
- •3.2 Quantum Persuasion
- •3.3 Predictions
- •4 Results
- •4.1 Descriptive Statistics
- •4.2 Data Analysis
- •4.3 Interpretation
- •5 Discussion and Concluding Remarks
- •References
- •1 Introduction
- •2 A Probabilistic Fusion Model of Trust
- •3 Contextuality
- •4 Experiment
- •4.1 Subjects
- •4.2 Design and Materials
- •4.3 Procedure
- •4.4 Results
- •4.5 Discussion
- •5 Summary and Conclusions
- •References
- •Probabilistic Programs for Investigating Contextuality in Human Information Processing
- •1 Introduction
- •2 A Framework for Determining Contextuality in Human Information Processing
- •3 Using Probabilistic Programs to Simulate Bell Scenario Experiments
- •References
- •1 Familiarity and Recollection, Verbatim and Gist
- •2 True Memory, False Memory, over Distributed Memory
- •3 The Hamiltonian Based QEM Model
- •4 Data and Prediction
- •5 Discussion
- •References
- •Decision-Making
- •1 Introduction
- •1.2 Two Stage Gambling Game
- •2 Quantum Probabilities and Waves
- •2.1 Intensity Waves
- •2.2 The Law of Balance and Probability Waves
- •2.3 Probability Waves
- •3 Law of Maximal Uncertainty
- •3.1 Principle of Entropy
- •3.2 Mirror Principle
- •4 Conclusion
- •References
- •1 Introduction
- •4 Quantum-Like Bayesian Networks
- •7.1 Results and Discussion
- •8 Conclusion
- •References
- •Cybernetics and AI
- •1 Introduction
- •2 Modeling of the Vehicle
- •2.1 Introduction to Braitenberg Vehicles
- •2.2 Quantum Approach for BV Decision Making
- •3 Topics in Eigenlogic
- •3.1 The Eigenlogic Operators
- •3.2 Incorporation of Fuzzy Logic
- •4 BV Quantum Robot Simulation Results
- •4.1 Simulation Environment
- •5 Quantum Wheel of Emotions
- •6 Discussion and Conclusion
- •7 Credits and Acknowledgements
- •References
- •1 Introduction
- •2.1 What Is Intelligence?
- •2.2 Human Intelligence and Quantum Cognition
- •2.3 In Search of the General Principles of Intelligence
- •3 Towards a Moral Test
- •4 Compositional Quantum Cognition
- •4.1 Categorical Compositional Model of Meaning
- •4.2 Proof of Concept: Compositional Quantum Cognition
- •5 Implementation of a Moral Test
- •5.2 Step II: A Toy Example, Moral Dilemmas and Context Effects
- •5.4 Step IV. Application for AI
- •6 Discussion and Conclusion
- •Appendix A: Example of a Moral Dilemma
- •References
- •Probability and Beyond
- •1 Introduction
- •2 The Theory of Density Hypercubes
- •2.1 Construction of the Theory
- •2.2 Component Symmetries
- •2.3 Normalisation and Causality
- •3 Decoherence and Hyper-decoherence
- •3.1 Decoherence to Classical Theory
- •4 Higher Order Interference
- •5 Conclusions
- •A Proofs
- •References
- •Information Retrieval
- •1 Introduction
- •2 Related Work
- •3 Quantum Entanglement and Bell Inequality
- •5 Experiment Settings
- •5.1 Dataset
- •5.3 Experimental Procedure
- •6 Results and Discussion
- •7 Conclusion
- •A Appendix
- •References
- •Investigating Bell Inequalities for Multidimensional Relevance Judgments in Information Retrieval
- •1 Introduction
- •2 Quantifying Relevance Dimensions
- •3 Deriving a Bell Inequality for Documents
- •3.1 CHSH Inequality
- •3.2 CHSH Inequality for Documents Using the Trace Method
- •4 Experiment and Results
- •5 Conclusion and Future Work
- •A Appendix
- •References
- •Short Paper
- •An Update on Updating
- •References
- •Author Index
- •The Sure Thing principle, the Disjunction Effect and the Law of Total Probability
- •Material and methods
- •Experimental results.
- •Experiment 1
- •Experiment 2
- •More versus less risk averse participants
- •Theoretical analysis
- •Shared features of the theoretical models
- •The Markov model
- •The quantum-like model
- •Logistic model
- •Theoretical model performance
- •Model comparison for risk attitude partitioning.
- •Discussion
- •Authors contributions
- •Ethical clearance
- •Funding
- •Acknowledgements
- •References
- •Markov versus quantum dynamic models of belief change during evidence monitoring
- •Results
- •Model comparisons.
- •Discussion
- •Methods
- •Participants.
- •Task.
- •Procedure.
- •Mathematical Models.
- •Acknowledgements
- •New Developments for Value-based Decisions
- •Context Effects in Preferential Choice
- •Comparison of Model Mechanisms
- •Qualitative Empirical Comparisons
- •Quantitative Empirical Comparisons
- •Neural Mechanisms of Value Accumulation
- •Neuroimaging Studies of Context Effects and Attribute-Wise Decision Processes
- •Concluding Remarks
- •Acknowledgments
- •References
- •Comparison of Markov versus quantum dynamical models of human decision making
- •CONFLICT OF INTEREST
- •Endnotes
- •FURTHER READING
- •REFERENCES
suai.ru/our-contacts | quantum machine learning

Are Decisions of Image Trustworthiness Contextual? A Pilot Study
P. D. Bruza and L. Fell
2 A Probabilistic Fusion Model of Trust
Analysis of qualitative feedback in relation to an image of Vladimir Putin1 revealed that a surprising number of participants included comments that they didn't trust the image simply because they didn't like Putin, or didn't think he was honest [7]. This was in spite of the fact that subjects were carefully instructed to judge the trustworthiness of the image itself. In other words, these participants seemed to be confounding a decision of whether they trust the image with a decision of whether they trust the content of the image. Conversely, qualitative feedback from other participants revealed that they trusted the image because they couldn't detect any evidence of manipulation. Such feedback aligns with the visual fluency hypothesis. A similar dichotomy appeared in relation to an image of a strange looking creature known as a frill shark. Both cases are revealing, as they seem to indicate that both content and representation features are involved when participants judge the trustworthiness of images. A question that we will explore below is whether decision making around these components is transacted independently, and how that relates to contextuality.
Deciding whether an image of Putin is trustworthy obviously involves uncertainty, e.g., it may be photoshopped. It is therefore natural to consider a probabilistic decision model. For example, consider the simple model depicted in Fig. 1. The variable S is a random variable which ranges over a set of image stimuli, such as the Putin image. Bivalent random variables C1, C2 relate to features associated with the content of the image. For example, C1 may model the decision whether the subject of the image is honest. Conversely, R1 and R2 are bivalent random variables that relate to representational aspects. For example, R1 may model the decision whether the image has been manipulated, or not. Variable R2 might model the decision whether there was something unexpected in the image. Of course, there may be any number of variables related to decisions involving the content and representation of the image. In addition, the variables could be continuous rather than bivalent. However, for simplicity we will use the four bivalent variables; two content based and two representation based. A further assumption of the model is the pair of latent variables γ and ρ. The latent variable γ models the decision whether the content of the image is trustworthy, which depends only on variables related to the content. For example, this would equate to the decision whether Putin the person is deemed trustworthy. Conversely, the latent variable ρ models the decision whether the image is deemed to be a true and accurate depiction of reality. Finally, the variable T corresponds to the decision whether the human subject trusts what they have been shown. Such a decision depends on both the decision whether the content is trusted and the decision whether the representation is trusted, as modeled by the latent variables.
We term such a model a probabilistic fusion model, as both content and representation components are reconciled in order to produce the final decision of trustworthiness. The model allows for one component to dominate decision
1 https://www.newyorker.com/humor/borowitz-report/putin-announces-historic-g1-summit.
making. For example, if visual fluency is broken, then p(T = y|ρ) would be much higher than p(T = y|γ), reflecting that representational aspects of the image dominate decision making about image trustworthiness.
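The fusion of the two latent decisions can be illustrated with a toy calculation. All numbers below are hypothetical assumptions for illustration (not estimates from the pilot study): γ and ρ are taken as independent, and T depends on both latent decisions.

```python
# Toy sketch of the fusion model in Fig. 1; all probabilities are
# illustrative assumptions, not data from the pilot study.
p_gamma = 0.60  # p(gamma = y): content deemed trustworthy
p_rho   = 0.30  # p(rho = y): representation deemed faithful
                # (a low value, e.g. because visual fluency is broken)

# p(T = y | gamma, rho): trusting the image given both latent decisions.
p_T = {('y', 'y'): 0.95, ('y', 'n'): 0.15,
       ('n', 'y'): 0.40, ('n', 'n'): 0.05}

def weight(v, p_y):
    # Probability of latent outcome v ('y' or 'n') given p(v = y) = p_y.
    return p_y if v == 'y' else 1 - p_y

# Marginal probability of trusting the image: sum over both latents.
p_trust = sum(p_T[(g, r)] * weight(g, p_gamma) * weight(r, p_rho)
              for g in 'yn' for r in 'yn')
print(round(p_trust, 3))  # 0.296
```

With these illustrative numbers the broken fluency (low p_rho) drags the overall trust decision down, even though the content is mostly trusted.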
In addition, the following probabilistic relationships are a consequence of the decision model:
p(R1 = y) = p(R1 = y, C1 = y) + p(R1 = y, C1 = n)
          = p(R1 = y, C2 = y) + p(R1 = y, C2 = n)
p(R2 = y) = p(R2 = y, C1 = y) + p(R2 = y, C1 = n)
          = p(R2 = y, C2 = y) + p(R2 = y, C2 = n)
and the converse
p(C1 = y) = p(C1 = y, R1 = y) + p(C1 = y, R1 = n)
          = p(C1 = y, R2 = y) + p(C1 = y, R2 = n)
p(C2 = y) = p(C2 = y, R1 = y) + p(C2 = y, R1 = n)
          = p(C2 = y, R2 = y) + p(C2 = y, R2 = n)
The preceding probabilistic relationships express that decision making around content and representation do not influence each other. For example, the probability of a decision that a subject trusts Putin (the person), denoted p(C1 = y), does not vary according to whether the subject decides that the image has been manipulated, denoted p(C1 = y, R1 = y) + p(C1 = y, R1 = n), or whether they detect something unexpected in the image, denoted p(C1 = y, R2 = y) + p(C1 = y, R2 = n).
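These no-influence relationships can be checked mechanically. The sketch below uses four hypothetical 2×2 context tables (illustrative numbers, chosen so that the equalities hold) and verifies that each marginal does not depend on which variable from the other system was co-measured.

```python
# Four hypothetical measurement contexts, one joint 2x2 table each.
# joint[(Ci, Rj)][(c, r)] = p(Ci = c, Rj = r); numbers are illustrative.
joint = {
    ('C1', 'R1'): {('y','y'): .30, ('y','n'): .30, ('n','y'): .20, ('n','n'): .20},
    ('C1', 'R2'): {('y','y'): .36, ('y','n'): .24, ('n','y'): .24, ('n','n'): .16},
    ('C2', 'R1'): {('y','y'): .20, ('y','n'): .20, ('n','y'): .30, ('n','n'): .30},
    ('C2', 'R2'): {('y','y'): .24, ('y','n'): .16, ('n','y'): .36, ('n','n'): .24},
}

def p_c_y(ci, rj):
    # p(Ci = y) recovered from the context where Ci is measured with Rj.
    t = joint[(ci, rj)]
    return t[('y', 'y')] + t[('y', 'n')]

def p_r_y(ci, rj):
    # p(Rj = y) recovered from the context where Rj is measured with Ci.
    t = joint[(ci, rj)]
    return t[('y', 'y')] + t[('n', 'y')]

# Each marginal must not depend on the co-measured variable:
assert abs(p_c_y('C1', 'R1') - p_c_y('C1', 'R2')) < 1e-9
assert abs(p_c_y('C2', 'R1') - p_c_y('C2', 'R2')) < 1e-9
assert abs(p_r_y('C1', 'R1') - p_r_y('C2', 'R1')) < 1e-9
assert abs(p_r_y('C1', 'R2') - p_r_y('C2', 'R2')) < 1e-9
```

If any of these assertions failed, the marginals would be signalling between the content and representation systems, which is the complication taken up in Sect. 3.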
[Fig. 1 diagram: stimulus S; content variables C1 and C2 feeding the latent variable γ; representation variables R1 and R2 feeding the latent variable ρ; γ and ρ jointly feeding the trust decision T.]
Fig. 1. Probabilistic fusion model of trust
3 Contextuality
A Bell scenario experiment involves two systems C (content) and R (representation). The content system C is probed with two questions modeled by bivalent variables C1 and C2 both of which range over the outcomes {y, n}. Similarly for system R with variables R1 and R2. Four measurement contexts are defined by jointly measuring one variable from each system:
                 R1                R2
              y       n        y       n
    C1   y    p1      p2       p5      p6
         n    p3      p4       p7      p8
    C2   y    p9      p10      p13     p14               (1)
         n    p11     p12      p15     p16
According to the first principle of Contextuality-by-Default, random variables should be indexed according to the experimental conditions in which they are measured [5]. For example, variable C1 is jointly measured with R1 in one experimental condition as well as being jointly measured with a variable R2 in another experimental condition. For this reason, two variables C11 and C12 are introduced. The same holds for the other three random variables resulting in eight random variables. Their expectations are computed as follows [4]:
C11 = 2(p1 + p2) − 1     (2)
C12 = 2(p5 + p6) − 1     (3)
C21 = 2(p9 + p10) − 1    (4)
C22 = 2(p13 + p14) − 1   (5)
R11 = 2(p1 + p3) − 1     (6)
R12 = 2(p9 + p11) − 1    (7)
R21 = 2(p5 + p7) − 1     (8)
R22 = 2(p13 + p15) − 1   (9)
Analysis of contextuality in Bell scenario experiments relies on the no-signalling condition. The experience so far in quantum cognition is that it is challenging to design experiments where this condition holds [6]. Moreover, the question that this challenge poses is whether any meaningful conception of contextuality exists when signalling is present. [4] have presented a theory that specifies a threshold for signalling below which meaningful contextuality analysis can be performed. Using their approach, the degree of signalling Δ0 between the content and representation systems is computed as follows:

Δ0 = ½ (|C11 − C12| + |C21 − C22| + |R11 − R12| + |R21 − R22|)    (10)
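As a worked sketch, the code below computes the eight expectations (2)-(9) and the signalling measure of Eq. (10) from the sixteen cell probabilities of table (1). The cell values are illustrative assumptions (each context sums to 1), not the pilot-study data.

```python
# Illustrative cell probabilities p1..p16 of table (1); each context
# (four consecutive cells) sums to 1. These are hypothetical numbers.
p = {i: v for i, v in enumerate([
    .40, .10, .10, .40,   # p1..p4:   context (C1, R1)
    .35, .15, .15, .35,   # p5..p8:   context (C1, R2)
    .30, .20, .25, .25,   # p9..p12:  context (C2, R1)
    .25, .25, .20, .30,   # p13..p16: context (C2, R2)
], start=1)}

# Expectations (2)-(9): E = p(outcome y) - p(outcome n) = 2*p(y) - 1.
E = {
    'C11': 2 * (p[1] + p[2]) - 1,   'C12': 2 * (p[5] + p[6]) - 1,
    'C21': 2 * (p[9] + p[10]) - 1,  'C22': 2 * (p[13] + p[14]) - 1,
    'R11': 2 * (p[1] + p[3]) - 1,   'R12': 2 * (p[9] + p[11]) - 1,
    'R21': 2 * (p[5] + p[7]) - 1,   'R22': 2 * (p[13] + p[15]) - 1,
}

# Degree of signalling, Eq. (10): how much each variable's expectation
# shifts across the two contexts in which it is measured.
delta0 = 0.5 * (abs(E['C11'] - E['C12']) + abs(E['C21'] - E['C22'])
                + abs(E['R11'] - E['R12']) + abs(E['R21'] - E['R22']))
print(round(delta0, 3))  # 0.1
```

Here only the R variables shift across contexts (R12 and R22 differ from their counterparts), yielding a nonzero degree of signalling; a perfectly no-signalling system would give Δ0 = 0.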