- Preface
- Contents
- Contributors
- Modeling Meaning Associated with Documental Entities: Introducing the Brussels Quantum Approach
- 1 Introduction
- 2 The Double-Slit Experiment
- 3 Interrogative Processes
- 4 Modeling the QWeb
- 5 Adding Context
- 6 Conclusion
- Appendix 1: Interference Plus Context Effects
- Appendix 2: Meaning Bond
- References
- 1 Introduction
- 2 Bell Test in the Problem of Cognitive Semantic Information Retrieval
- 2.1 Bell Inequality and Its Interpretation
- 2.2 Bell Test in Semantic Retrieving
- 3 Results
- References
- 1 Introduction
- 2 Basics of Quantum Probability Theory
- 3 Steps to Build an HSM Model
- 3.1 How to Determine the Compatibility Relations
- 3.2 How to Determine the Dimension
- 3.5 Compute the Choice Probabilities
- 3.6 Estimate Model Parameters, Compare and Test Models
- 4 Computer Programs
- 5 Concluding Comments
- References
- Basics of Quantum Theory for Quantum-Like Modeling Information Retrieval
- 1 Introduction
- 3 Quantum Mathematics
- 3.1 Hermitian Operators in Hilbert Space
- 3.2 Pure and Mixed States: Normalized Vectors and Density Operators
- 4 Quantum Mechanics: Postulates
- 5 Compatible and Incompatible Observables
- 5.1 Post-Measurement State From the Projection Postulate
- 6 Interpretations of Quantum Mechanics
- 6.1 Ensemble and Individual Interpretations
- 6.2 Information Interpretations
- 7 Quantum Conditional (Transition) Probability
- 9 Formula of Total Probability with the Interference Term
- 9.1 Växjö (Realist Ensemble Contextual) Interpretation of Quantum Mechanics
- 10 Quantum Logic
- 11 Space of Square Integrable Functions as a State Space
- 12 Operation of Tensor Product
- 14 Qubit
- 15 Entanglement
- References
- 1 Introduction
- 2 Background
- 2.1 Distributional Hypothesis
- 2.2 A Brief History of Word Embedding
- 3 Applications of Word Embedding
- 3.1 Word-Level Applications
- 3.2 Sentence-Level Application
- 3.3 Sentence-Pair Level Application
- 3.4 Seq2seq Application
- 3.5 Evaluation
- 4 Reconsidering Word Embedding
- 4.1 Limitations
- 4.2 Trends
- 4.4 Towards Dynamic Word Embedding
- 5 Conclusion
- References
- 1 Introduction
- 2 Motivating Example: Car Dealership
- 3 Modelling Elementary Data Types
- 3.1 Orthogonal Data Types
- 3.2 Non-orthogonal Data Types
- 4 Data Type Construction
- 5 Quantum-Based Data Type Constructors
- 5.1 Tuple Data Type Constructor
- 5.2 Set Data Type Constructor
- 6 Conclusion
- References
- Incorporating Weights into a Quantum-Logic-Based Query Language
- 1 Introduction
- 2 A Motivating Example
- 5 Logic-Based Weighting
- 6 Related Work
- 7 Conclusion
- References
- Searching for Information with Meet and Join Operators
- 1 Introduction
- 2 Background
- 2.1 Vector Spaces
- 2.2 Sets Versus Vector Spaces
- 2.3 The Boolean Model for IR
- 2.5 The Probabilistic Models
- 3 Meet and Join
- 4 Structures of a Query-by-Theme Language
- 4.1 Features and Terms
- 4.2 Themes
- 4.3 Document Ranking
- 4.4 Meet and Join Operators
- 5 Implementation of a Query-by-Theme Language
- 6 Related Work
- 7 Discussion and Future Work
- References
- Index
- Preface
- Organization
- Contents
- Fundamentals
- Why Should We Use Quantum Theory?
- 1 Introduction
- 2 On the Human Science/Natural Science Issue
- 3 The Human Roots of Quantum Science
- 4 Qualitative Parallels Between Quantum Theory and the Human Sciences
- 5 Early Quantitative Applications of Quantum Theory to the Human Sciences
- 6 Epilogue
- References
- Quantum Cognition
- 1 Introduction
- 2 The Quantum Persuasion Approach
- 3 Experimental Design
- 3.1 Testing for Perspective Incompatibility
- 3.2 Quantum Persuasion
- 3.3 Predictions
- 4 Results
- 4.1 Descriptive Statistics
- 4.2 Data Analysis
- 4.3 Interpretation
- 5 Discussion and Concluding Remarks
- References
- 1 Introduction
- 2 A Probabilistic Fusion Model of Trust
- 3 Contextuality
- 4 Experiment
- 4.1 Subjects
- 4.2 Design and Materials
- 4.3 Procedure
- 4.4 Results
- 4.5 Discussion
- 5 Summary and Conclusions
- References
- Probabilistic Programs for Investigating Contextuality in Human Information Processing
- 1 Introduction
- 2 A Framework for Determining Contextuality in Human Information Processing
- 3 Using Probabilistic Programs to Simulate Bell Scenario Experiments
- References
- 1 Familiarity and Recollection, Verbatim and Gist
- 2 True Memory, False Memory, over Distributed Memory
- 3 The Hamiltonian Based QEM Model
- 4 Data and Prediction
- 5 Discussion
- References
- Decision-Making
- 1 Introduction
- 1.2 Two Stage Gambling Game
- 2 Quantum Probabilities and Waves
- 2.1 Intensity Waves
- 2.2 The Law of Balance and Probability Waves
- 2.3 Probability Waves
- 3 Law of Maximal Uncertainty
- 3.1 Principle of Entropy
- 3.2 Mirror Principle
- 4 Conclusion
- References
- 1 Introduction
- 4 Quantum-Like Bayesian Networks
- 7.1 Results and Discussion
- 8 Conclusion
- References
- Cybernetics and AI
- 1 Introduction
- 2 Modeling of the Vehicle
- 2.1 Introduction to Braitenberg Vehicles
- 2.2 Quantum Approach for BV Decision Making
- 3 Topics in Eigenlogic
- 3.1 The Eigenlogic Operators
- 3.2 Incorporation of Fuzzy Logic
- 4 BV Quantum Robot Simulation Results
- 4.1 Simulation Environment
- 5 Quantum Wheel of Emotions
- 6 Discussion and Conclusion
- 7 Credits and Acknowledgements
- References
- 1 Introduction
- 2.1 What Is Intelligence?
- 2.2 Human Intelligence and Quantum Cognition
- 2.3 In Search of the General Principles of Intelligence
- 3 Towards a Moral Test
- 4 Compositional Quantum Cognition
- 4.1 Categorical Compositional Model of Meaning
- 4.2 Proof of Concept: Compositional Quantum Cognition
- 5 Implementation of a Moral Test
- 5.2 Step II: A Toy Example, Moral Dilemmas and Context Effects
- 5.4 Step IV. Application for AI
- 6 Discussion and Conclusion
- Appendix A: Example of a Moral Dilemma
- References
- Probability and Beyond
- 1 Introduction
- 2 The Theory of Density Hypercubes
- 2.1 Construction of the Theory
- 2.2 Component Symmetries
- 2.3 Normalisation and Causality
- 3 Decoherence and Hyper-decoherence
- 3.1 Decoherence to Classical Theory
- 4 Higher Order Interference
- 5 Conclusions
- A Proofs
- References
- Information Retrieval
- 1 Introduction
- 2 Related Work
- 3 Quantum Entanglement and Bell Inequality
- 5 Experiment Settings
- 5.1 Dataset
- 5.3 Experimental Procedure
- 6 Results and Discussion
- 7 Conclusion
- A Appendix
- References
- Investigating Bell Inequalities for Multidimensional Relevance Judgments in Information Retrieval
- 1 Introduction
- 2 Quantifying Relevance Dimensions
- 3 Deriving a Bell Inequality for Documents
- 3.1 CHSH Inequality
- 3.2 CHSH Inequality for Documents Using the Trace Method
- 4 Experiment and Results
- 5 Conclusion and Future Work
- A Appendix
- References
- Short Paper
- An Update on Updating
- References
- Author Index
- The Sure Thing Principle, the Disjunction Effect and the Law of Total Probability
- Material and methods
- Experimental results
- Experiment 1
- Experiment 2
- More versus less risk averse participants
- Theoretical analysis
- Shared features of the theoretical models
- The Markov model
- The quantum-like model
- Logistic model
- Theoretical model performance
- Model comparison for risk attitude partitioning
- Discussion
- Authors' contributions
- Ethical clearance
- Funding
- Acknowledgements
- References
- Markov versus quantum dynamic models of belief change during evidence monitoring
- Results
- Model comparisons
- Discussion
- Methods
- Participants
- Task
- Procedure
- Mathematical Models
- Acknowledgements
- New Developments for Value-based Decisions
- Context Effects in Preferential Choice
- Comparison of Model Mechanisms
- Qualitative Empirical Comparisons
- Quantitative Empirical Comparisons
- Neural Mechanisms of Value Accumulation
- Neuroimaging Studies of Context Effects and Attribute-Wise Decision Processes
- Concluding Remarks
- Acknowledgments
- References
- Comparison of Markov versus quantum dynamical models of human decision making
- CONFLICT OF INTEREST
- Endnotes
- FURTHER READING
- REFERENCES
J.B. Broekaert, et al. / Cognitive Psychology 117 (2020) 101262
Table 2
Observed gamble patterns, with notation order Win-Lose-Unknown, in two-stage gamble experiments. TS92 is the original Tversky and Shafir study with consecutive delayed presentation of conditions (Tversky & Shafir, 1992). KKP-Exp3 is the within-participants (‘third’) experiment of Kühberger, Komunska, and Perner with random-order juxtaposed conditions (Kühberger et al., 2001). LB-Coin is the coin experiment of Lambdin and Burdsal with random-order juxtaposed conditions (Lambdin & Burdsal, 2007). BBP-Exp1 is extracted from our Experiment 1 study for payoff parameter X = 2, with randomly ordered conditions. BBP Exp2KtoU are the extracted X = 2 results from our Experiment 2 study with blocked Win and Lose outcome conditioned gambles preceding the block of Unknown outcome gambles. BBP Exp2UtoK has the blocked conditions in reverse order. Dominant patterns are tagged with an asterisk.
| W | L | U | TS92 (N = 98) | KKP-Exp3 (N = 35) | LB-Coin (N = 55) | BBP-Exp1 (N_att = 94) | BBP Exp2KtoU (N_att = 407) | BBP Exp2UtoK (N_att = 415) |
|---|---|---|---|---|---|---|---|---|
| g | g | g | .14 | .29 | .22 | .23 | 0.371* | 0.323* |
| s | g | g | .06 | .0 | .04 | .24* | 0.194 | 0.195 |
| g | s | g | .11 | .11 | .11 | .07 | 0.076 | 0.113 |
| s | s | g | .04 | .03 | .02 | .06 | 0.032 | 0.070 |
| g | g | s | .27* | .09 | .16 | .05 | 0.081 | 0.065 |
| s | g | s | .12 | .0 | .05 | .14 | 0.086 | 0.068 |
| g | s | s | .17 | .31* | .15 | .10 | 0.076 | 0.077 |
| s | s | s | .08 | .17 | .25* | .11 | 0.084 | 0.089 |
of interference effects (see also Busemeyer, Wang, & Townsend, 2006).
For both the Markov and the quantum-like models, we solve the dynamical differential equations up to a certain time point to compute the predicted probability of a response at that time point, which can then be used to calculate the likelihood of the observed responses. Note that we did not collect response times; past work on the disjunction effect has not examined response times, and modeling response-time data is left for future research. As a final note on how a response is generated from predicted probabilities: if the model produces a state indicating, e.g., p(gamble) = .7, a response is sampled by a Bernoulli process producing either ‘gamble’ or ‘stop’ with probabilities .7 and .3, respectively. No additional mechanism is therefore required to go from probabilities to responses, and model probabilities translate directly into predictions about choice proportions. Diffusion models are applied to choice proportions in exactly the same way.
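The Bernoulli response mechanism described above can be sketched as follows (a minimal illustration, not the authors' code; the probability .7 matches the example in the text, and the function name is ours):

```python
import random

def sample_response(p_gamble, rng=random):
    """Sample one choice from the model's predicted probability.

    A predicted state with p(gamble) = .7 yields 'gamble' with
    probability .7 and 'stop' with probability .3.
    """
    return "gamble" if rng.random() < p_gamble else "stop"

# Averaged over many simulated trials, the observed choice proportion
# converges to the model's predicted probability (law of large numbers),
# so model probabilities translate directly into choice proportions.
rng = random.Random(0)
responses = [sample_response(0.7, rng) for _ in range(10_000)]
proportion = responses.count("gamble") / len(responses)
```

This makes explicit why no extra response mechanism is needed: the Bernoulli sample is the response, and its mean is the predicted choice proportion.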
Finally, a third model is included as a comparison baseline for the two dynamical process models (Section 4.4). This baseline is essentially a logistic regression which, as in latent-trait modeling, relates heuristic utility to theoretical gamble probabilities, and which is further parametrized to mimic the contextual and carry-over features shared by the two dynamical decision models (Section 4.4).
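The core of such a logistic baseline, mapping a heuristic utility onto a gamble probability, might be sketched as follows (parameter names `bias` and `sensitivity` are illustrative placeholders, not the paper's parametrization, which additionally includes contextual and carry-over terms):

```python
import math

def logistic_gamble_probability(utility, bias=0.0, sensitivity=1.0):
    """Map a heuristic utility to a theoretical gamble probability
    via the logistic (sigmoid) function, as in latent-trait modeling."""
    return 1.0 / (1.0 + math.exp(-(bias + sensitivity * utility)))

# Higher utility -> higher probability of choosing to gamble.
p_low = logistic_gamble_probability(-1.0)
p_mid = logistic_gamble_probability(0.0)   # 0.5 at zero utility
p_high = logistic_gamble_probability(2.0)
```

The fitted parameters would then be compared against the Markov and quantum-like models on the same likelihood footing.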
1.1. The Sure Thing principle, the Disjunction Effect and the Law of Total Probability
The Disjunction Effect and the violation of the Sure Thing Principle are not necessarily related. In principle, the aggregation of individual gamble strategies could mask a violation of the Sure Thing Principle and, vice versa, a dominant strategy of violating the Sure Thing Principle need not lead to a Disjunction Effect in the aggregate data. Tversky and Shafir asserted a strong relation between the occurrence of the violation of the Sure Thing Principle and the Disjunction Effect. In support of this they observed that when the Disjunction Effect occurred, the dominant gamble pattern followed by their participants was ‘gamble - gamble - stop’, where the notation order of the decisions always follows the fixed order ‘Win - Lose - Unknown’ of the conditions. We will use a letter notation to denote these choices as (g|W, g|L, s|U), where ‘g’ stands for ‘do the gamble’ and ‘s’ stands for ‘stop the gamble’. In this notation, adhering to the STP by replying ‘gamble - gamble - gamble’ corresponds to the pattern (g|W, g|L, g|U). In their study, precisely 26 out of 98 participants (Table 2, col. TS92) violated the STP by producing the pattern (g|W, g|L, s|U), while 14 out of 98 abided by the STP by producing (g|W, g|L, g|U). Assessing the validity of the STP by such a frequency comparison treats each outcome gamble pattern as composed of three logically correlated – consequentially evaluated – rational decisions. We assert instead that each gamble decision, and hence each gamble triplet, should be considered a stochastic realisation of a latent trait.
What choice patterns, then, lie at the origin of the Disjunction Effect? In terms of gamble patterns, how does a violation of the Law of Total Probability come about? Strictly speaking, the Law of Total Probability can be violated in two ways: the gamble probability under the Unknown outcome condition either undershoots or overshoots the span of the probabilities for gambling under the Known outcome conditions.
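Since the LTP expresses p(gamble | Unknown) as a convex combination of the two known-outcome probabilities, it must lie within their span; the two kinds of violation can be sketched as a simple check (an illustrative helper of our own, exercised here with the well-known choice proportions from Tversky and Shafir's 1992 study):

```python
def ltp_violation(p_g_win, p_g_lose, p_g_unknown):
    """Classify a Law-of-Total-Probability violation.

    Under the LTP, p(gamble | Unknown) is a convex combination of
    p(gamble | Win) and p(gamble | Lose), hence lies within their
    span.  Returns 'undershoot', 'overshoot', or None (no violation).
    """
    lo, hi = sorted((p_g_win, p_g_lose))
    if p_g_unknown < lo:
        return "undershoot"
    if p_g_unknown > hi:
        return "overshoot"
    return None

# Tversky & Shafir (1992): gambling is likely after either known
# outcome (.69 win, .59 lose), yet drops to .36 when the outcome is
# unknown, i.e. the Unknown condition undershoots the span.
result = ltp_violation(0.69, 0.59, 0.36)
```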
The occurrence of these violations is related to the frequency of specific gamble patterns.2 With three conditions in the two-stage gamble paradigm there are eight possible gamble patterns in total (Fig. 1). Each conditional gamble probability can be expressed as the marginal of the joint probabilities identified by their corresponding gamble patterns.
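Writing the joint pattern probabilities as p(x, y, z) with x, y, z ∈ {g, s} in the fixed order Win - Lose - Unknown, the marginalization referred to above reads (a reconstruction of the standard identities, not the paper's numbered equation):

```latex
\begin{align}
p(g \mid W) &= p(g,g,g) + p(g,g,s) + p(g,s,g) + p(g,s,s),\\
p(g \mid L) &= p(g,g,g) + p(g,g,s) + p(s,g,g) + p(s,g,s),\\
p(g \mid U) &= p(g,g,g) + p(g,s,g) + p(s,g,g) + p(s,s,g).
\end{align}
```

Each conditional probability sums the four pattern probabilities that have a ‘g’ in the corresponding slot, which is how the pattern frequencies of Table 2 connect to the LTP.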
2 A single binary response pattern to gambles indicates neither a violation of nor consistency with the LTP. The intrinsic probability – the individual's probability to gamble, given some conditions (e.g., payoffs) – generates binary random-variable responses. That is, a priori, any individual pattern is not a reliable indicator with respect to putative violations of either the LTP or the STP. The intrinsic probabilities produce binary choices that lead to choice proportions when averaged across participants, and a law of large numbers assures the convergence of the observed choice proportions to the intrinsic probabilities.