- •Preface
- •Contents
- •Contributors
- •Modeling Meaning Associated with Documental Entities: Introducing the Brussels Quantum Approach
- •1 Introduction
- •2 The Double-Slit Experiment
- •3 Interrogative Processes
- •4 Modeling the QWeb
- •5 Adding Context
- •6 Conclusion
- •Appendix 1: Interference Plus Context Effects
- •Appendix 2: Meaning Bond
- •References
- •1 Introduction
- •2 Bell Test in the Problem of Cognitive Semantic Information Retrieval
- •2.1 Bell Inequality and Its Interpretation
- •2.2 Bell Test in Semantic Retrieving
- •3 Results
- •References
- •1 Introduction
- •2 Basics of Quantum Probability Theory
- •3 Steps to Build an HSM Model
- •3.1 How to Determine the Compatibility Relations
- •3.2 How to Determine the Dimension
- •3.5 Compute the Choice Probabilities
- •3.6 Estimate Model Parameters, Compare and Test Models
- •4 Computer Programs
- •5 Concluding Comments
- •References
- •Basics of Quantum Theory for Quantum-Like Modeling Information Retrieval
- •1 Introduction
- •3 Quantum Mathematics
- •3.1 Hermitian Operators in Hilbert Space
- •3.2 Pure and Mixed States: Normalized Vectors and Density Operators
- •4 Quantum Mechanics: Postulates
- •5 Compatible and Incompatible Observables
- •5.1 Post-Measurement State From the Projection Postulate
- •6 Interpretations of Quantum Mechanics
- •6.1 Ensemble and Individual Interpretations
- •6.2 Information Interpretations
- •7 Quantum Conditional (Transition) Probability
- •9 Formula of Total Probability with the Interference Term
- •9.1 Växjö (Realist Ensemble Contextual) Interpretation of Quantum Mechanics
- •10 Quantum Logic
- •11 Space of Square Integrable Functions as a State Space
- •12 Operation of Tensor Product
- •14 Qubit
- •15 Entanglement
- •References
- •1 Introduction
- •2 Background
- •2.1 Distributional Hypothesis
- •2.2 A Brief History of Word Embedding
- •3 Applications of Word Embedding
- •3.1 Word-Level Applications
- •3.2 Sentence-Level Application
- •3.3 Sentence-Pair Level Application
- •3.4 Seq2seq Application
- •3.5 Evaluation
- •4 Reconsidering Word Embedding
- •4.1 Limitations
- •4.2 Trends
- •4.4 Towards Dynamic Word Embedding
- •5 Conclusion
- •References
- •1 Introduction
- •2 Motivating Example: Car Dealership
- •3 Modelling Elementary Data Types
- •3.1 Orthogonal Data Types
- •3.2 Non-orthogonal Data Types
- •4 Data Type Construction
- •5 Quantum-Based Data Type Constructors
- •5.1 Tuple Data Type Constructor
- •5.2 Set Data Type Constructor
- •6 Conclusion
- •References
- •Incorporating Weights into a Quantum-Logic-Based Query Language
- •1 Introduction
- •2 A Motivating Example
- •5 Logic-Based Weighting
- •6 Related Work
- •7 Conclusion
- •References
- •Searching for Information with Meet and Join Operators
- •1 Introduction
- •2 Background
- •2.1 Vector Spaces
- •2.2 Sets Versus Vector Spaces
- •2.3 The Boolean Model for IR
- •2.5 The Probabilistic Models
- •3 Meet and Join
- •4 Structures of a Query-by-Theme Language
- •4.1 Features and Terms
- •4.2 Themes
- •4.3 Document Ranking
- •4.4 Meet and Join Operators
- •5 Implementation of a Query-by-Theme Language
- •6 Related Work
- •7 Discussion and Future Work
- •References
- •Index
- •Preface
- •Organization
- •Contents
- •Fundamentals
- •Why Should We Use Quantum Theory?
- •1 Introduction
- •2 On the Human Science/Natural Science Issue
- •3 The Human Roots of Quantum Science
- •4 Qualitative Parallels Between Quantum Theory and the Human Sciences
- •5 Early Quantitative Applications of Quantum Theory to the Human Sciences
- •6 Epilogue
- •References
- •Quantum Cognition
- •1 Introduction
- •2 The Quantum Persuasion Approach
- •3 Experimental Design
- •3.1 Testing for Perspective Incompatibility
- •3.2 Quantum Persuasion
- •3.3 Predictions
- •4 Results
- •4.1 Descriptive Statistics
- •4.2 Data Analysis
- •4.3 Interpretation
- •5 Discussion and Concluding Remarks
- •References
- •1 Introduction
- •2 A Probabilistic Fusion Model of Trust
- •3 Contextuality
- •4 Experiment
- •4.1 Subjects
- •4.2 Design and Materials
- •4.3 Procedure
- •4.4 Results
- •4.5 Discussion
- •5 Summary and Conclusions
- •References
- •Probabilistic Programs for Investigating Contextuality in Human Information Processing
- •1 Introduction
- •2 A Framework for Determining Contextuality in Human Information Processing
- •3 Using Probabilistic Programs to Simulate Bell Scenario Experiments
- •References
- •1 Familiarity and Recollection, Verbatim and Gist
- •2 True Memory, False Memory, over Distributed Memory
- •3 The Hamiltonian Based QEM Model
- •4 Data and Prediction
- •5 Discussion
- •References
- •Decision-Making
- •1 Introduction
- •1.2 Two Stage Gambling Game
- •2 Quantum Probabilities and Waves
- •2.1 Intensity Waves
- •2.2 The Law of Balance and Probability Waves
- •2.3 Probability Waves
- •3 Law of Maximal Uncertainty
- •3.1 Principle of Entropy
- •3.2 Mirror Principle
- •4 Conclusion
- •References
- •1 Introduction
- •4 Quantum-Like Bayesian Networks
- •7.1 Results and Discussion
- •8 Conclusion
- •References
- •Cybernetics and AI
- •1 Introduction
- •2 Modeling of the Vehicle
- •2.1 Introduction to Braitenberg Vehicles
- •2.2 Quantum Approach for BV Decision Making
- •3 Topics in Eigenlogic
- •3.1 The Eigenlogic Operators
- •3.2 Incorporation of Fuzzy Logic
- •4 BV Quantum Robot Simulation Results
- •4.1 Simulation Environment
- •5 Quantum Wheel of Emotions
- •6 Discussion and Conclusion
- •7 Credits and Acknowledgements
- •References
- •1 Introduction
- •2.1 What Is Intelligence?
- •2.2 Human Intelligence and Quantum Cognition
- •2.3 In Search of the General Principles of Intelligence
- •3 Towards a Moral Test
- •4 Compositional Quantum Cognition
- •4.1 Categorical Compositional Model of Meaning
- •4.2 Proof of Concept: Compositional Quantum Cognition
- •5 Implementation of a Moral Test
- •5.2 Step II: A Toy Example, Moral Dilemmas and Context Effects
- •5.4 Step IV. Application for AI
- •6 Discussion and Conclusion
- •Appendix A: Example of a Moral Dilemma
- •References
- •Probability and Beyond
- •1 Introduction
- •2 The Theory of Density Hypercubes
- •2.1 Construction of the Theory
- •2.2 Component Symmetries
- •2.3 Normalisation and Causality
- •3 Decoherence and Hyper-decoherence
- •3.1 Decoherence to Classical Theory
- •4 Higher Order Interference
- •5 Conclusions
- •A Proofs
- •References
- •Information Retrieval
- •1 Introduction
- •2 Related Work
- •3 Quantum Entanglement and Bell Inequality
- •5 Experiment Settings
- •5.1 Dataset
- •5.3 Experimental Procedure
- •6 Results and Discussion
- •7 Conclusion
- •A Appendix
- •References
- •Investigating Bell Inequalities for Multidimensional Relevance Judgments in Information Retrieval
- •1 Introduction
- •2 Quantifying Relevance Dimensions
- •3 Deriving a Bell Inequality for Documents
- •3.1 CHSH Inequality
- •3.2 CHSH Inequality for Documents Using the Trace Method
- •4 Experiment and Results
- •5 Conclusion and Future Work
- •A Appendix
- •References
- •Short Paper
- •An Update on Updating
- •References
- •Author Index
- •The Sure Thing principle, the Disjunction Effect and the Law of Total Probability
- •Material and methods
- •Experimental results.
- •Experiment 1
- •Experiment 2
- •More versus less risk averse participants
- •Theoretical analysis
- •Shared features of the theoretical models
- •The Markov model
- •The quantum-like model
- •Logistic model
- •Theoretical model performance
- •Model comparison for risk attitude partitioning.
- •Discussion
- •Authors contributions
- •Ethical clearance
- •Funding
- •Acknowledgements
- •References
- •Markov versus quantum dynamic models of belief change during evidence monitoring
- •Results
- •Model comparisons.
- •Discussion
- •Methods
- •Participants.
- •Task.
- •Procedure.
- •Mathematical Models.
- •Acknowledgements
- •New Developments for Value-based Decisions
- •Context Effects in Preferential Choice
- •Comparison of Model Mechanisms
- •Qualitative Empirical Comparisons
- •Quantitative Empirical Comparisons
- •Neural Mechanisms of Value Accumulation
- •Neuroimaging Studies of Context Effects and Attribute-Wise Decision Processes
- •Concluding Remarks
- •Acknowledgments
- •References
- •Comparison of Markov versus quantum dynamical models of human decision making
- •CONFLICT OF INTEREST
- •Endnotes
- •FURTHER READING
- •REFERENCES
parameter {s}; however, this parameter now controls a hyperbolic tangent version of the logistic function, Eqs. (41). A similar 'coupled-switching' dynamics, controlled by a mixing parameter, is implemented for the attention switching from Win to Lose, correlated with the switching between the Gamble and Stop decisions. The mixing Hamiltonian, Eq. (42), and the Markov intensity-rate mixing, Eq. (24), are structured differently because of the Hermiticity requirement instead of the probability-conservation requirement. In the gamble block with Win and Lose outcome conditions the context effect is implemented by the weight parameters {, µ} for the first and second period respectively.
In the quantum-like model the carry-over effect from the first to the second period is implemented differently in the belief-action state for the Unknown condition: instead of weighting the two components, the carry-over parameter now causes an interference between them by introducing a relative complex phase.
The quantum-like model, just like the Markov model, relies on 9 parameters to cover the process dynamics and the initial belief-action states in both flow orders, both periods and all payoffs, amounting to theoretical values for 30 data points. The Supplementary Materials (SM 2) provide more details on the temporal evolution in this model.
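To make the interference mechanism concrete, the following minimal Python sketch (not the paper's Eqs. (41)-(42); the state, projector and phase values are invented for illustration) shows how a relative complex phase between the Win and Lose components of a two-component belief state inflates or deflates the gamble probability in the Unknown condition relative to the classical average over the two Known conditions:

```python
import numpy as np

# Minimal sketch (not the paper's Eqs. (41)-(42)): a two-component belief
# state |psi_U> = c_W|W> + e^{i*phi} c_L|L>, measured with a "gamble"
# projector M that is not diagonal in the {|W>, |L>} basis. All numbers
# are illustrative assumptions, not fitted values from the study.
m = np.array([np.sqrt(0.7), np.sqrt(0.3)])       # hypothetical action direction
M = np.outer(m, m)                               # projector onto the gamble action

p_gamble_W = M[0, 0]                             # p(gamble | Win)  = 0.7
p_gamble_L = M[1, 1]                             # p(gamble | Lose) = 0.3
classical = 0.5 * p_gamble_W + 0.5 * p_gamble_L  # law of total probability

for phi in (0.0, np.pi / 2, np.pi):
    psi = np.array([np.sqrt(0.5), np.exp(1j * phi) * np.sqrt(0.5)])
    p_gamble_U = (psi.conj() @ M @ psi).real     # classical part + interference term
    print(f"phi={phi:.2f}  p(gamble|U)={p_gamble_U:.3f}  classical={classical:.3f}")
```

With phase 0 the probability is inflated above the classical mixture and with phase π it is deflated; a Markov-style mixture of probabilities, by contrast, always returns the classical value.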
4.4. Logistic model
In order to compare the performance of the Markov and quantum-like process models, we devised a third model that aims to reproduce the observed gamble probabilities heuristically. Like the Markov and quantum-like models, this baseline model is context and order sensitive, but it does not comprise a dynamic process for the belief-action state. Instead, for each gamble condition the baseline model implements ad hoc weightings of the two utilities for the Win and Lose outcomes, Eqs. (16). The gamble probability is then obtained from a logistic function of the heuristic utility
$$p(\mathrm{gamble}\mid X,\mathrm{Cond}) \;=\; \frac{1}{1 + e^{-s\cdot U(X,\mathrm{Cond})}}, \tag{50}$$
where the parameter s functions as a sensitivity parameter. In the first period, the utility of taking the second-stage gamble in the Win and in the Lose condition will be set according to
$$U_{KU}(W) = \mathit{KUK}\,u_W + (1-\mathit{KUK})\,u_L,\qquad U_{KU}(L) = (1-\mathit{KUK})\,u_W + \mathit{KUK}\,u_L, \tag{51}$$
where KUK is a weight parameter, 0 ≤ KUK ≤ 1, expressing the participant's inclination towards the Win and Lose beliefs. Hence, also in the Logistic model we allow for partial adherence to the information in the outcome condition, but now this occurs at the level of utility instead of belief probability amplitude. It can be argued that each presented gamble is embedded between Win-outcome and Lose-outcome games, and this engenders residual utility-based support.
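A minimal sketch of Eqs. (50) and (51) in Python, assuming the conventional negative sign in the logistic exponent and using invented values for the payoff utilities u_W, u_L (which in the model follow from Eqs. (16)), the weight parameter (written KUK above) and the sensitivity s:

```python
import math

def gamble_probability(utility, s):
    """Eq. (50): p(gamble | X, Cond) = 1 / (1 + exp(-s * U(X, Cond)))."""
    return 1.0 / (1.0 + math.exp(-s * utility))

def conditioned_utilities(u_W, u_L, w):
    """Eq. (51): weighted first-period utilities in the Win and Lose conditions."""
    U_W = w * u_W + (1.0 - w) * u_L
    U_L = (1.0 - w) * u_W + w * u_L
    return U_W, U_L

# Illustrative numbers only; u_W and u_L are not the fitted values of the paper.
U_W, U_L = conditioned_utilities(u_W=2.0, u_L=-1.0, w=0.8)
print(gamble_probability(U_W, s=1.0), gamble_probability(U_L, s=1.0))
```

With w = 1 the participant fully adheres to the announced outcome; intermediate values give the partial adherence described above.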
In the first period the utility of the Unknown outcome of a first-stage gamble is
$$U_{UK}(U) = \mathit{UKU}\,U_{UK}(W) + (1-\mathit{UKU})\,U_{UK}(L), \tag{52}$$
which expresses that the resulting utility is a weighting, by the parameter UKU, of the Win- and Lose-conditioned utility assessments.
In the second period the context effect is now modified by the carry-over effect, which changes the weighting in the utility of the Win and Lose outcome gambles
$$U_{UK}(W) = \mathit{UKK}\,u_W + (1-\mathit{UKK})\,u_L,\qquad U_{UK}(L) = (1-\mathit{UKK})\,u_W + \mathit{UKK}\,u_L, \tag{53}$$
in which the weight parameter is now UKK.
In the Unknown first-stage outcome gamble the utility weighting, KUU, is now changed because of the carry-over effect
$$U_{KU}(U) = \mathit{KUU}\,U_{KU}(W) + (1-\mathit{KUU})\,U_{KU}(L), \tag{54}$$
with 0 ≤ KUU ≤ 1.
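The Unknown-condition utilities of Eqs. (52) and (54) are simple convex combinations of the conditioned utilities, and the carry-over effect enters only through a change of weight between the periods. A hedged sketch with invented numbers (the weight names follow the subscripts used in the text):

```python
import math

def gamble_probability(utility, s=1.0):
    """Eq. (50), as above."""
    return 1.0 / (1.0 + math.exp(-s * utility))

def unknown_utility(U_W, U_L, w):
    """Eqs. (52)/(54): U(U) = w * U(W) + (1 - w) * U(L)."""
    return w * U_W + (1.0 - w) * U_L

# Illustrative conditioned utilities (in the model they follow from Eqs. (51)/(53)).
U_W, U_L = 1.4, -0.4

# Only the weight changes between the first period (Eq. (52), weight UKU)
# and the second period after carry-over (Eq. (54), weight KUU).
for label, w in [("first period (UKU)", 0.6), ("second period (KUU)", 0.45)]:
    print(label, gamble_probability(unknown_utility(U_W, U_L, w)))
```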
Parametrization. In contrast to the Markov and quantum-like models, the Logistic model does not rely on belief-action states but rather on heuristically adapted utility functions. For each condition of Known or Unknown outcome, gamble payoff and period, a dedicated utility weighting drives a logistic function to render the probabilities for the second-stage gamble. The Logistic model requires the same four dynamical parameters for the driving utility difference as in the two dynamical models, namely {0W, 1W} and {0L, 1L}. The effect of the utility difference on the decision is controlled by a sensitivity parameter {s} on a logistic function, Eq. (50). In the Logistic model the carry-over effect and the context effect are covered by four ad hoc weighting parameters, {KUK, KUU, UKK, UKU}.
Like the Markov and the quantum-like models, the Logistic model uses 9 parameters to produce the gamble probabilities for both flow orders, both periods and all payoffs, amounting to theoretical values for 30 data points.
5. Theoretical model performance
The three models have been parametrised by maximum-likelihood estimation on the data set. With three initial gamble outcome conditions and five variable payoff amounts, the survey produces fifteen observed proportions for each block ordering. For each model the objective function G for the parameter optimisation is
Table 3
G-statistic for the lack-of-fit test for the Markov, quantum-like and Logistic models fitted to the partitioning of less and more risk averse participants.

| group | Markov (Less risk averse) | quantum-like (Less risk averse) | Logistic (Less risk averse) | Markov (More risk averse) | quantum-like (More risk averse) | Logistic (More risk averse) |
|---|---|---|---|---|---|---|
| all | 70.96 | 18.96 | 62.15 | 52.77 | 23.23 | 61.00 |
Fig. 6. Observed and theoretical gamble probabilities for Less risk averse participants, for each of the three models (Markov, quantum-like and Logistic) and for the K-to-U (left) and U-to-K (right) flow orders. The payoff, parametrised by XLevel ∈ [1, 5], appears on the x-axis. Error bars represent the standard error of the mean.
$$G \;=\; 2N_{KU}\sum_{i=1}^{15}\left[\, o_i\ln\frac{o_i}{e_i} + (1-o_i)\ln\frac{1-o_i}{1-e_i} \,\right] \;+\; 2N_{UK}\sum_{i=1}^{15}(\text{idem}), \tag{55}$$
where $N_{KU}$ and $N_{UK}$ are the numbers of participants in each flow order, the $o_i$ and $e_i$ are the observed and expected probabilities, and the two sums cover the order conditions 'K-to-U' and 'U-to-K' respectively. The G statistic expresses the lack of fit of the model predictions to the observed values. The numerical optimisation was executed in Matlab using a $3^9$ grid of initial parameter vectors (Supplementary Materials, SM 8).
Model performance is compared using the Bayesian Information Criterion (BIC). The BIC penalises a model for complexity through its number of free parameters. With $\mathrm{BIC} = G + p\ln N$, where $p$ is the number of free parameters and $N$ the number of data points, and with the three models sharing the same number of parameters and data points, the BIC comparison reduces to a comparison of G values.
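For concreteness, a small Python sketch of Eq. (55) and the BIC comparison; the proportions and participant counts below are invented for illustration (the actual study has fifteen proportions per flow order), while p = 9 and N = 30 follow the parameter and data-point counts stated above:

```python
import numpy as np

def g_statistic(obs, exp, n):
    """Eq. (55), one flow order: 2*n * sum_i [o_i*ln(o_i/e_i) + (1-o_i)*ln((1-o_i)/(1-e_i))]."""
    obs, exp = np.asarray(obs), np.asarray(exp)
    return 2.0 * n * np.sum(obs * np.log(obs / exp)
                            + (1.0 - obs) * np.log((1.0 - obs) / (1.0 - exp)))

# Invented observed/expected proportions (three shown instead of fifteen)
# and invented participant counts per flow order.
o_ku, e_ku = [0.55, 0.40, 0.62], [0.50, 0.45, 0.60]
o_uk, e_uk = [0.48, 0.35, 0.58], [0.52, 0.38, 0.55]
G = g_statistic(o_ku, e_ku, n=60) + g_statistic(o_uk, e_uk, n=60)

p, N = 9, 30                  # free parameters and data points, same for all three models
BIC = G + p * np.log(N)       # identical penalty term, so ranking by BIC = ranking by G
print(G, BIC)
```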
5.1. Model comparison for risk attitude partitioning.
In the sample-partitioning approach by risk attitude, the observed gamble proportions are separated into the 'Less risk averse' and 'More risk averse' attitude groups. The BIC comparison favours the quantum-like model as the best fit for the risk-partitioned observations: for the Logistic model G = 123.15, for the Markov model G = 123.73 and for the quantum-like model G = 42.19 (Table 3). In this partitioning of participants both the Markov model and the Logistic model perform equally unsatisfactorily (Figs. 6 and 7).¹⁵ This is in line with expectation, since both the Logistic and the Markov models abide by classical logical constraints on the probabilities, which prevent these models from producing inflative or deflative Disjunction Effects.
¹⁵ The more fine-grained partitioning using the played gamble patterns of each participant (Supplementary Materials, Section SM 5) reveals that the Markov model performs better than the Logistic model in only four out of eight subgroups, namely for the 'More risk averse' attitude in the ID-ranges [-2,2], [0,0] and [-1,0], and for the 'Less risk averse' attitude in the ID-range [-2,2] (the ID-ranges are defined in Table S2). This analysis also shows that the two subgroups with pronounced contributions to the violation of the Law of Total Probability are the ones best modeled by the quantum model (namely, for the 'More risk averse' attitude, the ID-ranges ]-2,2] for UK and [-2,2[ for KU).