- •Preface
- •Contents
- •Contributors
- •Modeling Meaning Associated with Documental Entities: Introducing the Brussels Quantum Approach
- •1 Introduction
- •2 The Double-Slit Experiment
- •3 Interrogative Processes
- •4 Modeling the QWeb
- •5 Adding Context
- •6 Conclusion
- •Appendix 1: Interference Plus Context Effects
- •Appendix 2: Meaning Bond
- •References
- •1 Introduction
- •2 Bell Test in the Problem of Cognitive Semantic Information Retrieval
- •2.1 Bell Inequality and Its Interpretation
- •2.2 Bell Test in Semantic Retrieving
- •3 Results
- •References
- •1 Introduction
- •2 Basics of Quantum Probability Theory
- •3 Steps to Build an HSM Model
- •3.1 How to Determine the Compatibility Relations
- •3.2 How to Determine the Dimension
- •3.5 Compute the Choice Probabilities
- •3.6 Estimate Model Parameters, Compare and Test Models
- •4 Computer Programs
- •5 Concluding Comments
- •References
- •Basics of Quantum Theory for Quantum-Like Modeling Information Retrieval
- •1 Introduction
- •3 Quantum Mathematics
- •3.1 Hermitian Operators in Hilbert Space
- •3.2 Pure and Mixed States: Normalized Vectors and Density Operators
- •4 Quantum Mechanics: Postulates
- •5 Compatible and Incompatible Observables
- •5.1 Post-Measurement State From the Projection Postulate
- •6 Interpretations of Quantum Mechanics
- •6.1 Ensemble and Individual Interpretations
- •6.2 Information Interpretations
- •7 Quantum Conditional (Transition) Probability
- •9 Formula of Total Probability with the Interference Term
- •9.1 Växjö (Realist Ensemble Contextual) Interpretation of Quantum Mechanics
- •10 Quantum Logic
- •11 Space of Square Integrable Functions as a State Space
- •12 Operation of Tensor Product
- •14 Qubit
- •15 Entanglement
- •References
- •1 Introduction
- •2 Background
- •2.1 Distributional Hypothesis
- •2.2 A Brief History of Word Embedding
- •3 Applications of Word Embedding
- •3.1 Word-Level Applications
- •3.2 Sentence-Level Application
- •3.3 Sentence-Pair Level Application
- •3.4 Seq2seq Application
- •3.5 Evaluation
- •4 Reconsidering Word Embedding
- •4.1 Limitations
- •4.2 Trends
- •4.4 Towards Dynamic Word Embedding
- •5 Conclusion
- •References
- •1 Introduction
- •2 Motivating Example: Car Dealership
- •3 Modelling Elementary Data Types
- •3.1 Orthogonal Data Types
- •3.2 Non-orthogonal Data Types
- •4 Data Type Construction
- •5 Quantum-Based Data Type Constructors
- •5.1 Tuple Data Type Constructor
- •5.2 Set Data Type Constructor
- •6 Conclusion
- •References
- •Incorporating Weights into a Quantum-Logic-Based Query Language
- •1 Introduction
- •2 A Motivating Example
- •5 Logic-Based Weighting
- •6 Related Work
- •7 Conclusion
- •References
- •Searching for Information with Meet and Join Operators
- •1 Introduction
- •2 Background
- •2.1 Vector Spaces
- •2.2 Sets Versus Vector Spaces
- •2.3 The Boolean Model for IR
- •2.5 The Probabilistic Models
- •3 Meet and Join
- •4 Structures of a Query-by-Theme Language
- •4.1 Features and Terms
- •4.2 Themes
- •4.3 Document Ranking
- •4.4 Meet and Join Operators
- •5 Implementation of a Query-by-Theme Language
- •6 Related Work
- •7 Discussion and Future Work
- •References
- •Index
- •Preface
- •Organization
- •Contents
- •Fundamentals
- •Why Should We Use Quantum Theory?
- •1 Introduction
- •2 On the Human Science/Natural Science Issue
- •3 The Human Roots of Quantum Science
- •4 Qualitative Parallels Between Quantum Theory and the Human Sciences
- •5 Early Quantitative Applications of Quantum Theory to the Human Sciences
- •6 Epilogue
- •References
- •Quantum Cognition
- •1 Introduction
- •2 The Quantum Persuasion Approach
- •3 Experimental Design
- •3.1 Testing for Perspective Incompatibility
- •3.2 Quantum Persuasion
- •3.3 Predictions
- •4 Results
- •4.1 Descriptive Statistics
- •4.2 Data Analysis
- •4.3 Interpretation
- •5 Discussion and Concluding Remarks
- •References
- •1 Introduction
- •2 A Probabilistic Fusion Model of Trust
- •3 Contextuality
- •4 Experiment
- •4.1 Subjects
- •4.2 Design and Materials
- •4.3 Procedure
- •4.4 Results
- •4.5 Discussion
- •5 Summary and Conclusions
- •References
- •Probabilistic Programs for Investigating Contextuality in Human Information Processing
- •1 Introduction
- •2 A Framework for Determining Contextuality in Human Information Processing
- •3 Using Probabilistic Programs to Simulate Bell Scenario Experiments
- •References
- •1 Familiarity and Recollection, Verbatim and Gist
- •2 True Memory, False Memory, over Distributed Memory
- •3 The Hamiltonian Based QEM Model
- •4 Data and Prediction
- •5 Discussion
- •References
- •Decision-Making
- •1 Introduction
- •1.2 Two Stage Gambling Game
- •2 Quantum Probabilities and Waves
- •2.1 Intensity Waves
- •2.2 The Law of Balance and Probability Waves
- •2.3 Probability Waves
- •3 Law of Maximal Uncertainty
- •3.1 Principle of Entropy
- •3.2 Mirror Principle
- •4 Conclusion
- •References
- •1 Introduction
- •4 Quantum-Like Bayesian Networks
- •7.1 Results and Discussion
- •8 Conclusion
- •References
- •Cybernetics and AI
- •1 Introduction
- •2 Modeling of the Vehicle
- •2.1 Introduction to Braitenberg Vehicles
- •2.2 Quantum Approach for BV Decision Making
- •3 Topics in Eigenlogic
- •3.1 The Eigenlogic Operators
- •3.2 Incorporation of Fuzzy Logic
- •4 BV Quantum Robot Simulation Results
- •4.1 Simulation Environment
- •5 Quantum Wheel of Emotions
- •6 Discussion and Conclusion
- •7 Credits and Acknowledgements
- •References
- •1 Introduction
- •2.1 What Is Intelligence?
- •2.2 Human Intelligence and Quantum Cognition
- •2.3 In Search of the General Principles of Intelligence
- •3 Towards a Moral Test
- •4 Compositional Quantum Cognition
- •4.1 Categorical Compositional Model of Meaning
- •4.2 Proof of Concept: Compositional Quantum Cognition
- •5 Implementation of a Moral Test
- •5.2 Step II: A Toy Example, Moral Dilemmas and Context Effects
- •5.4 Step IV. Application for AI
- •6 Discussion and Conclusion
- •Appendix A: Example of a Moral Dilemma
- •References
- •Probability and Beyond
- •1 Introduction
- •2 The Theory of Density Hypercubes
- •2.1 Construction of the Theory
- •2.2 Component Symmetries
- •2.3 Normalisation and Causality
- •3 Decoherence and Hyper-decoherence
- •3.1 Decoherence to Classical Theory
- •4 Higher Order Interference
- •5 Conclusions
- •A Proofs
- •References
- •Information Retrieval
- •1 Introduction
- •2 Related Work
- •3 Quantum Entanglement and Bell Inequality
- •5 Experiment Settings
- •5.1 Dataset
- •5.3 Experimental Procedure
- •6 Results and Discussion
- •7 Conclusion
- •A Appendix
- •References
- •Investigating Bell Inequalities for Multidimensional Relevance Judgments in Information Retrieval
- •1 Introduction
- •2 Quantifying Relevance Dimensions
- •3 Deriving a Bell Inequality for Documents
- •3.1 CHSH Inequality
- •3.2 CHSH Inequality for Documents Using the Trace Method
- •4 Experiment and Results
- •5 Conclusion and Future Work
- •A Appendix
- •References
- •Short Paper
- •An Update on Updating
- •References
- •Author Index
- •The Sure Thing principle, the Disjunction Effect and the Law of Total Probability
- •Material and methods
- •Experimental results.
- •Experiment 1
- •Experiment 2
- •More versus less risk averse participants
- •Theoretical analysis
- •Shared features of the theoretical models
- •The Markov model
- •The quantum-like model
- •Logistic model
- •Theoretical model performance
- •Model comparison for risk attitude partitioning.
- •Discussion
- •Authors contributions
- •Ethical clearance
- •Funding
- •Acknowledgements
- •References
- •Markov versus quantum dynamic models of belief change during evidence monitoring
- •Results
- •Model comparisons.
- •Discussion
- •Methods
- •Participants.
- •Task.
- •Procedure.
- •Mathematical Models.
- •Acknowledgements
- •New Developments for Value-based Decisions
- •Context Effects in Preferential Choice
- •Comparison of Model Mechanisms
- •Qualitative Empirical Comparisons
- •Quantitative Empirical Comparisons
- •Neural Mechanisms of Value Accumulation
- •Neuroimaging Studies of Context Effects and Attribute-Wise Decision Processes
- •Concluding Remarks
- •Acknowledgments
- •References
- •Comparison of Markov versus quantum dynamical models of human decision making
- •CONFLICT OF INTEREST
- •Endnotes
- •FURTHER READING
- •REFERENCES
Fig. 1. Hilbert space representation of Order Effects
Suppose that while judging Document d, the user has the order Topicality → Reliability in mind. Then the final probability of relevance is the projection from $|d\rangle \to |T\rangle \to |R\rangle$, as shown in Fig. 1a. This is calculated as $|\langle T|d\rangle|^2\,|\langle R|T\rangle|^2 = |0.3535|^2\,|0.5651|^2 = 0.0399$. If the user reverses the order of the relevance dimensions considered while judging document d, we get $|\langle R|d\rangle|^2\,|\langle T|R\rangle|^2 = |0.9715|^2\,|0.5651|^2 = 0.3014$, which is 7.5 times larger (Fig. 1b).
Order Effects in decision making have been successfully modeled and predicted using the Quantum framework [7, 16].
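For illustration, the sequential-projection arithmetic above can be reproduced with a few lines of linear algebra. This is a minimal sketch: the amplitudes are the ones quoted in the running example (0.3535, 0.5651, 0.9715); everything else is just bookkeeping.

```python
import numpy as np

# Amplitudes quoted in the running example:
# <T|d> = 0.3535, <R|T> = 0.5651, <R|d> = 0.9715
T_d, R_T, R_d = 0.3535, 0.5651, 0.9715

# Order Topicality -> Reliability: project |d> onto |T>, then |T> onto |R>
p_T_then_R = abs(T_d) ** 2 * abs(R_T) ** 2      # ~0.0399

# Reversed order Reliability -> Topicality (<T|R> = <R|T> for real vectors)
p_R_then_T = abs(R_d) ** 2 * abs(R_T) ** 2      # ~0.3014

print(p_T_then_R, p_R_then_T, p_R_then_T / p_T_then_R)   # ratio ~7.5
```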
3 Deriving a Bell Inequality for Documents
3.1 CHSH Inequality
In Sect. 2, we showed how we can calculate the relevance probabilities of a document for different dimensions. We constructed a Hilbert space for each document, consisting of seven different bases, one representing each dimension of relevance. Two or more such documents can be considered as a composite system by taking the tensor product of the document Hilbert spaces. If $|d_1\rangle$ and $|d_2\rangle$ are the state vectors of two documents, we can represent the tensor product as $|d_1\rangle|d_2\rangle$. Figure 2 shows the geometrical representation of two such Hilbert spaces. Here
$|R\rangle_{hab}$ represents Relevance in the Habit basis, or in IR terms, relevance of document d with respect to the Habit dimension. Similarly, $|\bar{R}\rangle_{hab}$ represents irrelevance in the Habit basis.
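A minimal sketch of the composite-system construction described above, using the Kronecker product; the two-dimensional document vectors are made up for illustration only.

```python
import numpy as np

# Hypothetical 2-D document state vectors (unit norm).
d1 = np.array([0.6, 0.8])
d2 = np.array([0.28, 0.96])

composite = np.kron(d1, d2)   # |d1>|d2> lives in the 4-D tensor-product space
print(composite, np.linalg.norm(composite))   # still a unit vector
```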
In the CHSH inequality, we have observables $A_1$ and $A_2$ for a system, taking values in $\pm 1$. For a document $d_1$, we have observables corresponding to the different relevance dimensions. Taking the case of two relevance dimensions, Habit and Novelty, we have observables $R_{hab}$ and $R_{nov}$ which take values in $\pm 1$, where $R_{hab} = +1$ corresponds to a projection onto the basis vector $|R\rangle_{hab}$ and $R_{hab} = -1$ corresponds to the projection onto its orthogonal basis vector $|\bar{R}\rangle_{hab}$.
Fig. 2. Tensor product of two Hilbert spaces
Taking two documents as a composite system, we can write the CHSH inequality in the following way:
$$\left|\,\langle R_{hab1}R_{hab2}\rangle + \langle R_{hab1}R_{nov2}\rangle + \langle R_{nov1}R_{hab2}\rangle - \langle R_{nov1}R_{nov2}\rangle\,\right| \le 2 \qquad (8)$$
where the subscripts 1 and 2 denote that the observables belong to document 1 and document 2 respectively. Using the fact that $\langle AB\rangle = (1)\,P(AB = 1) + (-1)\,P(AB = -1)$ and $P(AB = 1) + P(AB = -1) = 1$, we can convert the above inequality into its probability form:
$$1 \le P(R_{hab1}R_{hab2} = 1) + P(R_{hab1}R_{nov2} = 1) + P(R_{nov1}R_{hab2} = 1) + P(R_{nov1}R_{nov2} = -1) \le 3 \qquad (9)$$
We do not have the joint probabilities $P(AB)$ in our dataset; hence, assuming $P(AB) = P(A)P(B)$ (this is where the assumption of realism is incorrectly made, which will not lead to the CHSH inequality violation), we get:
$$\begin{aligned}
1 \le\; & P(R_{hab1}=1)P(R_{hab2}=1) + P(R_{hab1}=-1)P(R_{hab2}=-1)\; + \\
& P(R_{hab1}=1)P(R_{nov2}=1) + P(R_{hab1}=-1)P(R_{nov2}=-1)\; + \\
& P(R_{nov1}=1)P(R_{hab2}=1) + P(R_{nov1}=-1)P(R_{hab2}=-1)\; + \\
& P(R_{nov1}=1)P(R_{nov2}=-1) + P(R_{nov1}=-1)P(R_{nov2}=1) \;\le 3
\end{aligned} \qquad (10)$$
As we mentioned above, $R_{hab} = +1$ corresponds to the basis vector $|R\rangle_{hab}$, and therefore $P(R_{hab1} = 1)$ corresponds to the probability that document $d_1$ is relevant with respect to the Habit dimension of relevance. Therefore we can calculate these probabilities as projections in the Hilbert space:
$$\begin{aligned}
P(R_{hab1} = 1) &= |\langle R_{hab}|d_1\rangle|^2 \qquad & P(R_{hab1} = -1) &= |\langle \bar{R}_{hab}|d_1\rangle|^2 \\
P(R_{nov1} = 1) &= |\langle R_{nov}|d_1\rangle|^2 \qquad & P(R_{nov1} = -1) &= |\langle \bar{R}_{nov}|d_1\rangle|^2
\end{aligned} \qquad (11)$$
and similarly for document d2.
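As a sanity check on Eqs. (9)-(11), the sketch below computes single-document projection probabilities, factorizes the joint probabilities as in Eq. (10), and evaluates the bound. The document states and the Habit/Novelty basis angles are hypothetical numbers chosen only for illustration; they are not taken from the paper's dataset.

```python
import numpy as np

def ket(t):
    """Unit vector in R^2 at angle t (a stand-in for a relevance basis vector)."""
    return np.array([np.cos(t), np.sin(t)])

def rel_probs(d, basis):
    """P(R = +1), P(R = -1) as squared projections onto a relevance basis (Eq. 11)."""
    r, r_bar = basis
    return abs(r @ d) ** 2, abs(r_bar @ d) ** 2

def p_equal(pA, pB):
    """P(AB = +1) under the factorization P(AB) = P(A)P(B) used in Eq. (10)."""
    return pA[0] * pB[0] + pA[1] * pB[1]

# Hypothetical document states and Habit / Novelty bases (all angles made up).
d1, d2 = ket(0.3), ket(1.1)
hab = (ket(0.0), ket(np.pi / 2))
nov = (ket(0.7), ket(0.7 + np.pi / 2))

h1, n1 = rel_probs(d1, hab), rel_probs(d1, nov)
h2, n2 = rel_probs(d2, hab), rel_probs(d2, nov)

# Middle expression of inequality (9); the last term is P(R_nov1 R_nov2 = -1).
S = p_equal(h1, h2) + p_equal(h1, n2) + p_equal(n1, h2) + (1 - p_equal(n1, n2))
print(S, 1 <= S <= 3)   # with factorized probabilities the bound always holds
```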
3.2 CHSH Inequality for Documents Using the Trace Method
Another way to define the CHSH inequality for documents is by directly calculating the expectation values using the trace rule. According to this rule, the expectation value of an observable $A$ in a state $|d\rangle$ is given by
$$\langle A \rangle = \mathrm{tr}(A\rho) \qquad (12)$$

where the quantity $\rho = |d\rangle\langle d|$ is the density matrix for the state $|d\rangle$.

Let the two documents be represented in the standard basis as follows:
$$|D_1\rangle = a_1|H\rangle_1 + b_1|\bar{H}\rangle_1 \qquad |D_2\rangle = a_2|H\rangle_2 + b_2|\bar{H}\rangle_2 \qquad (13)$$

where $|H\rangle_{1,2} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $|\bar{H}\rangle_{1,2} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$. Hence, the state vector and the density matrix for a document $|D_1\rangle$ can be written as:

$$|D_1\rangle = \begin{pmatrix} a_1 \\ b_1 \end{pmatrix} \qquad \rho = |D_1\rangle\langle D_1| = \begin{pmatrix} a_1^2 & a_1 b_1 \\ a_1 b_1 & b_1^2 \end{pmatrix} \qquad (14)$$
The document representations in another basis are as follows:
$$|D_1\rangle = c_1|N\rangle_1 + d_1|\bar{N}\rangle_1 \qquad |D_2\rangle = c_2|N\rangle_2 + d_2|\bar{N}\rangle_2 \qquad (15)$$
H and N are basically relevance with respect to two relevance dimensions, say Habit and Novelty. We can write the N basis in terms of the H basis (see Appendix A) as:
$$\begin{aligned}
|N\rangle_1 &= (a_1 c_1 + b_1 d_1)\,|H\rangle_1 + (b_1 c_1 - a_1 d_1)\,|\bar{H}\rangle_1 \\
|\bar{N}\rangle_1 &= (a_1 d_1 - b_1 c_1)\,|H\rangle_1 + (a_1 c_1 + b_1 d_1)\,|\bar{H}\rangle_1
\end{aligned} \qquad (16)$$

and similarly for the second document.
Thus we get the vector representations for the basis states $|N\rangle_1$ and $|\bar{N}\rangle_1$ as:

$$|N\rangle_1 = \begin{pmatrix} a_1 c_1 + b_1 d_1 \\ b_1 c_1 - a_1 d_1 \end{pmatrix} \qquad |\bar{N}\rangle_1 = \begin{pmatrix} a_1 d_1 - b_1 c_1 \\ a_1 c_1 + b_1 d_1 \end{pmatrix} \qquad (17)$$
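The change of basis in Eqs. (16)-(17) can be checked numerically. In the sketch below the inputs are hypothetical: a document state $(a_1, b_1)$ and a rotation angle defining the $N$ basis; the coefficients $(c_1, d_1)$ are then derived so that both expansions describe the same $|D_1\rangle$, as required by Eqs. (13) and (15).

```python
import numpy as np

# Hypothetical inputs: a document state and an angle defining the N basis.
a1, b1 = 0.6, 0.8                     # |D1> = a1|H> + b1|H_bar>  (real, normalized)
phi = 0.4                             # rotation from the H basis to the N basis

H, H_bar = np.array([1.0, 0.0]), np.array([0.0, 1.0])
N     = np.cos(phi) * H + np.sin(phi) * H_bar
N_bar = -np.sin(phi) * H + np.cos(phi) * H_bar

D1 = a1 * H + b1 * H_bar
c1, d1 = N @ D1, N_bar @ D1           # |D1> = c1|N> + d1|N_bar>, Eq. (15)

# Eq. (16)/(17): express the N basis back in the H basis from the four coefficients.
N_reconstructed     = np.array([a1 * c1 + b1 * d1, b1 * c1 - a1 * d1])
N_bar_reconstructed = np.array([a1 * d1 - b1 * c1, a1 * c1 + b1 * d1])

print(np.allclose(N, N_reconstructed), np.allclose(N_bar, N_bar_reconstructed))  # True True
```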
Now the observables H and N are defined as:
$$H = |H\rangle\langle H| - |\bar{H}\rangle\langle\bar{H}| \qquad N = |N\rangle\langle N| - |\bar{N}\rangle\langle\bar{N}| \qquad (18)$$

where $|H\rangle\langle H|$ and $|\bar{H}\rangle\langle\bar{H}|$ are the projection operators for the standard basis vectors, with eigenvalues $1$ and $-1$ respectively. This is the spectral decomposition of the observables. We get $H = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$. The matrix for observable $N$ is