- •Preface
- •Contents
- •Contributors
- •Modeling Meaning Associated with Documental Entities: Introducing the Brussels Quantum Approach
- •1 Introduction
- •2 The Double-Slit Experiment
- •3 Interrogative Processes
- •4 Modeling the QWeb
- •5 Adding Context
- •6 Conclusion
- •Appendix 1: Interference Plus Context Effects
- •Appendix 2: Meaning Bond
- •References
- •1 Introduction
- •2 Bell Test in the Problem of Cognitive Semantic Information Retrieval
- •2.1 Bell Inequality and Its Interpretation
- •2.2 Bell Test in Semantic Retrieving
- •3 Results
- •References
- •1 Introduction
- •2 Basics of Quantum Probability Theory
- •3 Steps to Build an HSM Model
- •3.1 How to Determine the Compatibility Relations
- •3.2 How to Determine the Dimension
- •3.5 Compute the Choice Probabilities
- •3.6 Estimate Model Parameters, Compare and Test Models
- •4 Computer Programs
- •5 Concluding Comments
- •References
- •Basics of Quantum Theory for Quantum-Like Modeling Information Retrieval
- •1 Introduction
- •3 Quantum Mathematics
- •3.1 Hermitian Operators in Hilbert Space
- •3.2 Pure and Mixed States: Normalized Vectors and Density Operators
- •4 Quantum Mechanics: Postulates
- •5 Compatible and Incompatible Observables
- •5.1 Post-Measurement State From the Projection Postulate
- •6 Interpretations of Quantum Mechanics
- •6.1 Ensemble and Individual Interpretations
- •6.2 Information Interpretations
- •7 Quantum Conditional (Transition) Probability
- •9 Formula of Total Probability with the Interference Term
- •9.1 Växjö (Realist Ensemble Contextual) Interpretation of Quantum Mechanics
- •10 Quantum Logic
- •11 Space of Square Integrable Functions as a State Space
- •12 Operation of Tensor Product
- •14 Qubit
- •15 Entanglement
- •References
- •1 Introduction
- •2 Background
- •2.1 Distributional Hypothesis
- •2.2 A Brief History of Word Embedding
- •3 Applications of Word Embedding
- •3.1 Word-Level Applications
- •3.2 Sentence-Level Application
- •3.3 Sentence-Pair Level Application
- •3.4 Seq2seq Application
- •3.5 Evaluation
- •4 Reconsidering Word Embedding
- •4.1 Limitations
- •4.2 Trends
- •4.4 Towards Dynamic Word Embedding
- •5 Conclusion
- •References
- •1 Introduction
- •2 Motivating Example: Car Dealership
- •3 Modelling Elementary Data Types
- •3.1 Orthogonal Data Types
- •3.2 Non-orthogonal Data Types
- •4 Data Type Construction
- •5 Quantum-Based Data Type Constructors
- •5.1 Tuple Data Type Constructor
- •5.2 Set Data Type Constructor
- •6 Conclusion
- •References
- •Incorporating Weights into a Quantum-Logic-Based Query Language
- •1 Introduction
- •2 A Motivating Example
- •5 Logic-Based Weighting
- •6 Related Work
- •7 Conclusion
- •References
- •Searching for Information with Meet and Join Operators
- •1 Introduction
- •2 Background
- •2.1 Vector Spaces
- •2.2 Sets Versus Vector Spaces
- •2.3 The Boolean Model for IR
- •2.5 The Probabilistic Models
- •3 Meet and Join
- •4 Structures of a Query-by-Theme Language
- •4.1 Features and Terms
- •4.2 Themes
- •4.3 Document Ranking
- •4.4 Meet and Join Operators
- •5 Implementation of a Query-by-Theme Language
- •6 Related Work
- •7 Discussion and Future Work
- •References
- •Index
- •Preface
- •Organization
- •Contents
- •Fundamentals
- •Why Should We Use Quantum Theory?
- •1 Introduction
- •2 On the Human Science/Natural Science Issue
- •3 The Human Roots of Quantum Science
- •4 Qualitative Parallels Between Quantum Theory and the Human Sciences
- •5 Early Quantitative Applications of Quantum Theory to the Human Sciences
- •6 Epilogue
- •References
- •Quantum Cognition
- •1 Introduction
- •2 The Quantum Persuasion Approach
- •3 Experimental Design
- •3.1 Testing for Perspective Incompatibility
- •3.2 Quantum Persuasion
- •3.3 Predictions
- •4 Results
- •4.1 Descriptive Statistics
- •4.2 Data Analysis
- •4.3 Interpretation
- •5 Discussion and Concluding Remarks
- •References
- •1 Introduction
- •2 A Probabilistic Fusion Model of Trust
- •3 Contextuality
- •4 Experiment
- •4.1 Subjects
- •4.2 Design and Materials
- •4.3 Procedure
- •4.4 Results
- •4.5 Discussion
- •5 Summary and Conclusions
- •References
- •Probabilistic Programs for Investigating Contextuality in Human Information Processing
- •1 Introduction
- •2 A Framework for Determining Contextuality in Human Information Processing
- •3 Using Probabilistic Programs to Simulate Bell Scenario Experiments
- •References
- •1 Familiarity and Recollection, Verbatim and Gist
- •2 True Memory, False Memory, over Distributed Memory
- •3 The Hamiltonian Based QEM Model
- •4 Data and Prediction
- •5 Discussion
- •References
- •Decision-Making
- •1 Introduction
- •1.2 Two Stage Gambling Game
- •2 Quantum Probabilities and Waves
- •2.1 Intensity Waves
- •2.2 The Law of Balance and Probability Waves
- •2.3 Probability Waves
- •3 Law of Maximal Uncertainty
- •3.1 Principle of Entropy
- •3.2 Mirror Principle
- •4 Conclusion
- •References
- •1 Introduction
- •4 Quantum-Like Bayesian Networks
- •7.1 Results and Discussion
- •8 Conclusion
- •References
- •Cybernetics and AI
- •1 Introduction
- •2 Modeling of the Vehicle
- •2.1 Introduction to Braitenberg Vehicles
- •2.2 Quantum Approach for BV Decision Making
- •3 Topics in Eigenlogic
- •3.1 The Eigenlogic Operators
- •3.2 Incorporation of Fuzzy Logic
- •4 BV Quantum Robot Simulation Results
- •4.1 Simulation Environment
- •5 Quantum Wheel of Emotions
- •6 Discussion and Conclusion
- •7 Credits and Acknowledgements
- •References
- •1 Introduction
- •2.1 What Is Intelligence?
- •2.2 Human Intelligence and Quantum Cognition
- •2.3 In Search of the General Principles of Intelligence
- •3 Towards a Moral Test
- •4 Compositional Quantum Cognition
- •4.1 Categorical Compositional Model of Meaning
- •4.2 Proof of Concept: Compositional Quantum Cognition
- •5 Implementation of a Moral Test
- •5.2 Step II: A Toy Example, Moral Dilemmas and Context Effects
- •5.4 Step IV. Application for AI
- •6 Discussion and Conclusion
- •Appendix A: Example of a Moral Dilemma
- •References
- •Probability and Beyond
- •1 Introduction
- •2 The Theory of Density Hypercubes
- •2.1 Construction of the Theory
- •2.2 Component Symmetries
- •2.3 Normalisation and Causality
- •3 Decoherence and Hyper-decoherence
- •3.1 Decoherence to Classical Theory
- •4 Higher Order Interference
- •5 Conclusions
- •A Proofs
- •References
- •Information Retrieval
- •1 Introduction
- •2 Related Work
- •3 Quantum Entanglement and Bell Inequality
- •5 Experiment Settings
- •5.1 Dataset
- •5.3 Experimental Procedure
- •6 Results and Discussion
- •7 Conclusion
- •A Appendix
- •References
- •Investigating Bell Inequalities for Multidimensional Relevance Judgments in Information Retrieval
- •1 Introduction
- •2 Quantifying Relevance Dimensions
- •3 Deriving a Bell Inequality for Documents
- •3.1 CHSH Inequality
- •3.2 CHSH Inequality for Documents Using the Trace Method
- •4 Experiment and Results
- •5 Conclusion and Future Work
- •A Appendix
- •References
- •Short Paper
- •An Update on Updating
- •References
- •Author Index
- •The Sure Thing principle, the Disjunction Effect and the Law of Total Probability
- •Material and methods
- •Experimental results.
- •Experiment 1
- •Experiment 2
- •More versus less risk averse participants
- •Theoretical analysis
- •Shared features of the theoretical models
- •The Markov model
- •The quantum-like model
- •Logistic model
- •Theoretical model performance
- •Model comparison for risk attitude partitioning.
- •Discussion
- •Authors contributions
- •Ethical clearance
- •Funding
- •Acknowledgements
- •References
- •Markov versus quantum dynamic models of belief change during evidence monitoring
- •Results
- •Model comparisons.
- •Discussion
- •Methods
- •Participants.
- •Task.
- •Procedure.
- •Mathematical Models.
- •Acknowledgements
- •New Developments for Value-based Decisions
- •Context Effects in Preferential Choice
- •Comparison of Model Mechanisms
- •Qualitative Empirical Comparisons
- •Quantitative Empirical Comparisons
- •Neural Mechanisms of Value Accumulation
- •Neuroimaging Studies of Context Effects and Attribute-Wise Decision Processes
- •Concluding Remarks
- •Acknowledgments
- •References
- •Comparison of Markov versus quantum dynamical models of human decision making
- •CONFLICT OF INTEREST
- •Endnotes
- •FURTHER READING
- •REFERENCES
150 S. Gogioso and C. M. Scandolo
4 Higher Order Interference
In this section, we will show that the theory of density hypercubes displays third- and fourth-order interference effects, broadly inspired by the framework for higher-order interference in GPTs presented by [1, 2, 18]. Because interference has to do with decompositions of the identity map in terms of certain projectors, we begin by introducing a handy graphical notation for keeping track of the various pieces that the identity map is composed of.
The identity map of hyper-quantum systems id_{DD(H)} : DD(H) → DD(H) takes the following explicit form in fHilb, for any orthonormal basis (|ψ_x⟩)_{x∈X} of the Hilbert space H:

[Equation (19): string diagram — the identity decomposes as a sum over all 4-tuples of index values x_{00}, x_{01}, x_{10}, x_{11} ∈ X, each piece built from the maps ψ_{x_{01}}^T, ψ_{x_{01}}, ψ_{x_{10}}^T, ψ_{x_{10}}, ψ_{x_{11}}^†, ψ_{x_{11}}, ψ_{x_{00}}^†, ψ_{x_{00}} on the four wires of type H.]
In order to denote the pieces in the decomposition corresponding to specific values x_{00}, x_{01}, x_{10}, x_{11} ∈ X of the indices, we adopt the following graphical notation, inspired by the Z_2 × Z_2 symmetry of the components:

[Equation (20): graphical shorthand — a single decomposition piece, labelled by the four index values x_{00}, x_{01}, x_{10}, x_{11} arranged in a 2 × 2 pattern.]
In fact, we will adopt the same colour-based notation for index values which we originally introduced in Sect. 2, so that the following is a decomposition piece involving two distinct index values {•, •} ⊆ X:

[Equation (21): graphical shorthand — the decomposition piece whose four index slots take the two distinct values • and •.]
Using the colour-based notation defined above for its pieces, the identity on a 2-dimensional hyper-quantum system (with X = {•, •}) would be fully decomposed as follows:
Density Hypercubes, Higher Order Interference and Hyper-decoherence (151)
id_{C^2} = [sum of all 16 decomposition pieces for X = {•, •}: the 2 single-value pieces plus the 14 pieces mixing the two values]   (22)

The same notation can be used to graphically decompose projectors corresponding to various subspaces determined by the orthonormal basis (|ψ_x⟩)_{x∈X}. For any non-empty subset U ⊆ X, we define the following projector on DD(H):
P_U := DD( Σ_{x∈U} |ψ_x⟩⟨ψ_x| )   (23)
In particular, the P_{•} for • ∈ X are the projectors corresponding to the individual vectors |ψ_•⟩ of the basis, while P_X is the identity id_{DD(H)}. No matter how large X is (with #X ≥ 2), the projectors P_{•,•} corresponding to 2-element subsets {•, •} ⊆ X are always decomposed as follows:
P_{•,•} = [the same sum of 16 decomposition pieces as in (22)]   (24)

The presence of higher order interference in the theory of density hypercubes is really a matter of shapes: when the dimension of H is at least 3, the identity contains pieces of shapes which do not appear in projectors for 1-element and 2-element subsets. Because of this, in the theory of density hypercubes the probabilities obtained from 1-slit and 2-slit interference experiments will not be enough to explain the probabilities obtained from 3-slit and/or 4-slit experiments; however, the probabilities obtained from 1-slit, 2-slit, 3-slit and 4-slit experiments will always be enough to explain the probabilities obtained in experiments with 5 or more slits.
Below you can see an atlas of all possible shapes that pieces of the identity can take in our graphical notation, together with a note of the smallest dimension that a projector must have to contain pieces of that shape:
[Equations (25)–(26): atlas of the possible piece shapes — the single 1-dimensional shape and the seven 2-dimensional shapes in (25), the six 3-dimensional shapes and the single 4-dimensional shape in (26), each labelled with the smallest dimension that a projector must have to contain pieces of that shape.]
The shape labelled as 1-dimensional only requires a single index value, and hence pieces of that shape appear in all projectors. The shapes labelled as 2-dimensional all require exactly two distinct index values, and hence pieces of those shapes can only appear in projectors for subsets with at least 2 elements. The shapes labelled as 3-dimensional all require exactly three distinct index values, and hence pieces of those shapes can only appear in projectors for subsets with at least 3 elements. Finally, the shape labelled as 4-dimensional requires exactly four distinct index values, and hence pieces of that shape can only appear in projectors for subsets with at least 4 elements.
Thanks to the graphical notation introduced above, we already have a first intuition of why density hypercubes display higher-order interference. However, a rigorous proof requires a complete set-up with states, projectors, measurements and probabilities for a d-slit interference experiment, so that is what we now endeavour to provide.
1. We choose a d-dimensional space H ≅ C^d, and we value our tensor indices in the set X = {1, ..., d} (the same set that we use to label the d slits).
2. We fix an orthonormal basis (|x⟩)_{x∈X}, and we interpret |x⟩ to be the state in which the particle goes through slit x with certainty.
3. The initial state for the particle is the superposition state in which the particle goes through each slit with the same amplitude. More precisely, it is the pure normalised density hypercube state ρ_+ corresponding to the vector |ψ_+⟩ := (1/√d)(|1⟩ + ... + |d⟩):

[Equation (27): string diagram — ρ_+ := (1/d²) times the doubled diagram built from two copies of the state Ψ_+ on wires of type H.]
4. The particle goes through some non-empty subset U ⊆ X of slits at random: afterwards, the experimenter knows which subset the particle passed through, but no more information than that is available in the universe.
5. The particle is measured at the screen, and the experimenter estimates the probability P[+|U] that the particle is still in state ρ_+ after having passed through the given subset U of the slits:

[Equation (28): string diagram — P[+|U] is obtained by sandwiching the projector P_U between the effect (1/d²) Ψ_+^† and the state (1/d²) Ψ_+.]
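Steps 1–5 can be sketched numerically. The string diagram of Eq. (28) is not modelled directly here; instead the sketch exploits the fact that, for this particular state and this family of projectors, the closed form (31) below coincides with the fourth power of the ordinary Born amplitude ⟨ψ_+| p_U |ψ_+⟩ = #U/d (an illustrative shortcut, not a general feature of the theory):

```python
# Sketch of the d-slit set-up (steps 1-5), working with plain vectors in C^d.
# Assumption for illustration: for psi+ and the subset projectors p_U, the
# density-hypercube probability P[+|U] equals the fourth power of the Born
# amplitude <psi+| p_U |psi+> = #U / d, matching Eq. (31).

def amplitude(d, U):
    """<psi+| p_U |psi+> for psi+ = (1/sqrt d) sum_x |x>, p_U = sum_{x in U} |x><x|."""
    psi = [1 / d**0.5] * d                                  # coordinates of |psi+>
    projected = [psi[x] if x in U else 0.0 for x in range(d)]  # p_U |psi+>
    return sum(a * b for a, b in zip(psi, projected))

def hypercube_prob(d, U):
    """P[+|U] of Eq. (28); equals (#U)^4 / d^4 by Eq. (31)."""
    return amplitude(d, U) ** 4

# Sanity check against the closed form (31) for every subset size.
d = 5
for k in range(1, d + 1):
    assert abs(hypercube_prob(d, set(range(k))) - k**4 / d**4) < 1e-12
```

In particular, for U = X the probability is d⁴/d⁴ = 1, as expected of the identity projector.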
It is immediate to see that the outcome probability P[+|U] depends solely on the number of different pieces appearing in the decomposition of the projector P_U:

P[+|U] = (1/d⁴) · (number of pieces in P_U)   (29)
To count the number of pieces in P_U, it is convenient to group them by shapes. If U is a subset of size k, standard combinatorial arguments can be used to obtain the number of pieces of each shape appearing in the decomposition (as a convention, we set (k choose j) = 0 for j > k):

  1-dimensional shape:   (k choose 1) · 1! pieces
  2-dimensional shapes:  7 shapes, (k choose 2) · 2! pieces each
  3-dimensional shapes:  6 shapes, (k choose 3) · 3! pieces each
  4-dimensional shape:   (k choose 4) · 4! pieces   (30)
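The counts in (30) can be checked by brute force. The sketch below reads a "shape" as the equality pattern of the 4-tuple of index values (i.e. a set partition of the four index positions; this reading is our assumption, but it is consistent with the quoted shape counts 1/7/6/1, since a 4-element set has exactly 15 partitions, split 1/7/6/1 by number of blocks):

```python
# Brute-force check of Eq. (30): enumerate the pieces of P_U for #U = k as
# 4-tuples of index values, group them by equality pattern ("shape"), and
# compare with the counts (k choose j) * j! per shape.
from itertools import product
from math import comb, factorial
from collections import Counter

def shape(tup):
    # Canonical equality pattern, e.g. (a, a, b, a) -> (0, 0, 1, 0).
    seen = {}
    return tuple(seen.setdefault(v, len(seen)) for v in tup)

k = 3                                        # size of the subset U
shapes = Counter(shape(t) for t in product(range(k), repeat=4))

by_blocks = Counter()                        # shapes needing j distinct values
for s, count in shapes.items():
    j = len(set(s))
    by_blocks[j] += 1
    assert count == comb(k, j) * factorial(j)   # per-shape count, Eq. (30)

assert sum(shapes.values()) == k**4          # total pieces, cf. Eqs. (29)/(31)
print(dict(sorted(by_blocks.items())))       # {1: 1, 2: 7, 3: 6} for k = 3
```

The 4-value shape does not appear for k = 3, in line with the convention (k choose j) = 0 for j > k.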
By adding up the contributions from pieces of each shape, we get the following closed expression for the outcome probability P[+|U ]:
P[+|U] = (1/d⁴) · (#U)⁴   (31)
For d ≥ 3 we observe third-order interference, witnessed (by definition) by the following inequality:
P[+|{1, 2, 3}] ≠ Σ_{V ⊆ {1,2,3} s.t. #V = 2} P[+|V] − Σ_{V ⊆ {1,2,3} s.t. #V = 1} P[+|V]   (32)
Indeed, the left hand side evaluates to 81/d⁴, while the right hand side evaluates to the following expression (again by standard combinatorial arguments):

(1/d⁴) [ (3 choose 2) · 2⁴ − (3 choose 1) · 1⁴ ] = 45/d⁴ ≠ 81/d⁴   (33)
The difference between left and right hand sides is 36/d⁴, which is exactly the contribution (1/d⁴) · 6 · (3 choose 3) · 3! of the 6 shapes requiring 3 distinct values (appearing in P_{1,2,3} but not in any of the sub-projectors). For d ≥ 4 we observe fourth-order interference, witnessed (by definition) by the following inequality:
P[+|{1, 2, 3, 4}] ≠ Σ_{V ⊆ {1,2,3,4} s.t. #V = 3} P[+|V] − Σ_{V ⊆ {1,2,3,4} s.t. #V = 2} P[+|V] + Σ_{V ⊆ {1,2,3,4} s.t. #V = 1} P[+|V]   (34)
Indeed, the left hand side evaluates to 256/d⁴, while the right hand side evaluates to the following expression (again by standard combinatorial arguments):

(1/d⁴) [ (4 choose 3) · 3⁴ − (4 choose 2) · 2⁴ + (4 choose 1) · 1⁴ ] = 232/d⁴ ≠ 256/d⁴   (35)
The difference between left and right hand sides is 24/d⁴, which is exactly the contribution (1/d⁴) · (4 choose 4) · 4! of the shape requiring 4 distinct values (appearing in P_{1,2,3,4} but not in any of the sub-projectors).
For d ≥ 5, however, we observe absence of fifth-order (or higher-order) interference, witnessed (by definition) by the following equality:

P[+|{1, 2, 3, 4, 5}] = Σ_{V ⊆ {1,2,3,4,5} s.t. #V = 4} P[+|V] − Σ_{V ⊆ {1,2,3,4,5} s.t. #V = 3} P[+|V] + Σ_{V ⊆ {1,2,3,4,5} s.t. #V = 2} P[+|V] − Σ_{V ⊆ {1,2,3,4,5} s.t. #V = 1} P[+|V]   (36)
Indeed, the left hand side evaluates to 625/d⁴, and the right hand side yields the same:

(1/d⁴) [ (5 choose 4) · 4⁴ − (5 choose 3) · 3⁴ + (5 choose 2) · 2⁴ − (5 choose 1) · 1⁴ ] = 625/d⁴   (37)
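The witnesses (32), (34) and (36) can be evaluated mechanically from the closed form (31). A minimal sketch, using exact rationals so the cancellations in (37) are verified without floating-point noise:

```python
# Inclusion-exclusion witnesses for higher-order interference, using the
# closed form P[+|U] = (#U)^4 / d^4 of Eq. (31). witness(d, n) computes the
# n-slit probability minus the alternating sum over smaller subsets, with the
# sign convention of Eqs. (32), (34) and (36).
from itertools import combinations
from fractions import Fraction

def P(d, k):
    """P[+|U] for any subset U of size k, Eq. (31)."""
    return Fraction(k**4, d**4)

def witness(d, n):
    slits = range(n)
    rhs = Fraction(0)
    for size in range(n - 1, 0, -1):          # sizes n-1, n-2, ..., 1
        sign = (-1) ** (n - 1 - size)         # leading term enters positively
        rhs += sign * sum(P(d, size) for _ in combinations(slits, size))
    return P(d, n) - rhs

d = 6
assert witness(d, 3) == Fraction(36, d**4)    # third-order, cf. Eq. (33)
assert witness(d, 4) == Fraction(24, d**4)    # fourth-order, cf. Eq. (35)
assert witness(d, 5) == 0                     # no fifth-order, Eq. (37)
```

The nonzero values 36/d⁴ and 24/d⁴ are exactly the shape contributions computed in the text, and the d = 6 choice is arbitrary (any d ≥ 5 gives the same pattern).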