- •Preface
- •Contents
- •Contributors
- •Modeling Meaning Associated with Documental Entities: Introducing the Brussels Quantum Approach
- •1 Introduction
- •2 The Double-Slit Experiment
- •3 Interrogative Processes
- •4 Modeling the QWeb
- •5 Adding Context
- •6 Conclusion
- •Appendix 1: Interference Plus Context Effects
- •Appendix 2: Meaning Bond
- •References
- •1 Introduction
- •2 Bell Test in the Problem of Cognitive Semantic Information Retrieval
- •2.1 Bell Inequality and Its Interpretation
- •2.2 Bell Test in Semantic Retrieving
- •3 Results
- •References
- •1 Introduction
- •2 Basics of Quantum Probability Theory
- •3 Steps to Build an HSM Model
- •3.1 How to Determine the Compatibility Relations
- •3.2 How to Determine the Dimension
- •3.5 Compute the Choice Probabilities
- •3.6 Estimate Model Parameters, Compare and Test Models
- •4 Computer Programs
- •5 Concluding Comments
- •References
- •Basics of Quantum Theory for Quantum-Like Modeling Information Retrieval
- •1 Introduction
- •3 Quantum Mathematics
- •3.1 Hermitian Operators in Hilbert Space
- •3.2 Pure and Mixed States: Normalized Vectors and Density Operators
- •4 Quantum Mechanics: Postulates
- •5 Compatible and Incompatible Observables
- •5.1 Post-Measurement State From the Projection Postulate
- •6 Interpretations of Quantum Mechanics
- •6.1 Ensemble and Individual Interpretations
- •6.2 Information Interpretations
- •7 Quantum Conditional (Transition) Probability
- •9 Formula of Total Probability with the Interference Term
- •9.1 Växjö (Realist Ensemble Contextual) Interpretation of Quantum Mechanics
- •10 Quantum Logic
- •11 Space of Square Integrable Functions as a State Space
- •12 Operation of Tensor Product
- •14 Qubit
- •15 Entanglement
- •References
- •1 Introduction
- •2 Background
- •2.1 Distributional Hypothesis
- •2.2 A Brief History of Word Embedding
- •3 Applications of Word Embedding
- •3.1 Word-Level Applications
- •3.2 Sentence-Level Application
- •3.3 Sentence-Pair Level Application
- •3.4 Seq2seq Application
- •3.5 Evaluation
- •4 Reconsidering Word Embedding
- •4.1 Limitations
- •4.2 Trends
- •4.4 Towards Dynamic Word Embedding
- •5 Conclusion
- •References
- •1 Introduction
- •2 Motivating Example: Car Dealership
- •3 Modelling Elementary Data Types
- •3.1 Orthogonal Data Types
- •3.2 Non-orthogonal Data Types
- •4 Data Type Construction
- •5 Quantum-Based Data Type Constructors
- •5.1 Tuple Data Type Constructor
- •5.2 Set Data Type Constructor
- •6 Conclusion
- •References
- •Incorporating Weights into a Quantum-Logic-Based Query Language
- •1 Introduction
- •2 A Motivating Example
- •5 Logic-Based Weighting
- •6 Related Work
- •7 Conclusion
- •References
- •Searching for Information with Meet and Join Operators
- •1 Introduction
- •2 Background
- •2.1 Vector Spaces
- •2.2 Sets Versus Vector Spaces
- •2.3 The Boolean Model for IR
- •2.5 The Probabilistic Models
- •3 Meet and Join
- •4 Structures of a Query-by-Theme Language
- •4.1 Features and Terms
- •4.2 Themes
- •4.3 Document Ranking
- •4.4 Meet and Join Operators
- •5 Implementation of a Query-by-Theme Language
- •6 Related Work
- •7 Discussion and Future Work
- •References
- •Index
- •Preface
- •Organization
- •Contents
- •Fundamentals
- •Why Should We Use Quantum Theory?
- •1 Introduction
- •2 On the Human Science/Natural Science Issue
- •3 The Human Roots of Quantum Science
- •4 Qualitative Parallels Between Quantum Theory and the Human Sciences
- •5 Early Quantitative Applications of Quantum Theory to the Human Sciences
- •6 Epilogue
- •References
- •Quantum Cognition
- •1 Introduction
- •2 The Quantum Persuasion Approach
- •3 Experimental Design
- •3.1 Testing for Perspective Incompatibility
- •3.2 Quantum Persuasion
- •3.3 Predictions
- •4 Results
- •4.1 Descriptive Statistics
- •4.2 Data Analysis
- •4.3 Interpretation
- •5 Discussion and Concluding Remarks
- •References
- •1 Introduction
- •2 A Probabilistic Fusion Model of Trust
- •3 Contextuality
- •4 Experiment
- •4.1 Subjects
- •4.2 Design and Materials
- •4.3 Procedure
- •4.4 Results
- •4.5 Discussion
- •5 Summary and Conclusions
- •References
- •Probabilistic Programs for Investigating Contextuality in Human Information Processing
- •1 Introduction
- •2 A Framework for Determining Contextuality in Human Information Processing
- •3 Using Probabilistic Programs to Simulate Bell Scenario Experiments
- •References
- •1 Familiarity and Recollection, Verbatim and Gist
- •2 True Memory, False Memory, over Distributed Memory
- •3 The Hamiltonian Based QEM Model
- •4 Data and Prediction
- •5 Discussion
- •References
- •Decision-Making
- •1 Introduction
- •1.2 Two Stage Gambling Game
- •2 Quantum Probabilities and Waves
- •2.1 Intensity Waves
- •2.2 The Law of Balance and Probability Waves
- •2.3 Probability Waves
- •3 Law of Maximal Uncertainty
- •3.1 Principle of Entropy
- •3.2 Mirror Principle
- •4 Conclusion
- •References
- •1 Introduction
- •4 Quantum-Like Bayesian Networks
- •7.1 Results and Discussion
- •8 Conclusion
- •References
- •Cybernetics and AI
- •1 Introduction
- •2 Modeling of the Vehicle
- •2.1 Introduction to Braitenberg Vehicles
- •2.2 Quantum Approach for BV Decision Making
- •3 Topics in Eigenlogic
- •3.1 The Eigenlogic Operators
- •3.2 Incorporation of Fuzzy Logic
- •4 BV Quantum Robot Simulation Results
- •4.1 Simulation Environment
- •5 Quantum Wheel of Emotions
- •6 Discussion and Conclusion
- •7 Credits and Acknowledgements
- •References
- •1 Introduction
- •2.1 What Is Intelligence?
- •2.2 Human Intelligence and Quantum Cognition
- •2.3 In Search of the General Principles of Intelligence
- •3 Towards a Moral Test
- •4 Compositional Quantum Cognition
- •4.1 Categorical Compositional Model of Meaning
- •4.2 Proof of Concept: Compositional Quantum Cognition
- •5 Implementation of a Moral Test
- •5.2 Step II: A Toy Example, Moral Dilemmas and Context Effects
- •5.4 Step IV. Application for AI
- •6 Discussion and Conclusion
- •Appendix A: Example of a Moral Dilemma
- •References
- •Probability and Beyond
- •1 Introduction
- •2 The Theory of Density Hypercubes
- •2.1 Construction of the Theory
- •2.2 Component Symmetries
- •2.3 Normalisation and Causality
- •3 Decoherence and Hyper-decoherence
- •3.1 Decoherence to Classical Theory
- •4 Higher Order Interference
- •5 Conclusions
- •A Proofs
- •References
- •Information Retrieval
- •1 Introduction
- •2 Related Work
- •3 Quantum Entanglement and Bell Inequality
- •5 Experiment Settings
- •5.1 Dataset
- •5.3 Experimental Procedure
- •6 Results and Discussion
- •7 Conclusion
- •A Appendix
- •References
- •Investigating Bell Inequalities for Multidimensional Relevance Judgments in Information Retrieval
- •1 Introduction
- •2 Quantifying Relevance Dimensions
- •3 Deriving a Bell Inequality for Documents
- •3.1 CHSH Inequality
- •3.2 CHSH Inequality for Documents Using the Trace Method
- •4 Experiment and Results
- •5 Conclusion and Future Work
- •A Appendix
- •References
- •Short Paper
- •An Update on Updating
- •References
- •Author Index
- •The Sure Thing principle, the Disjunction Effect and the Law of Total Probability
- •Material and methods
- •Experimental results.
- •Experiment 1
- •Experiment 2
- •More versus less risk averse participants
- •Theoretical analysis
- •Shared features of the theoretical models
- •The Markov model
- •The quantum-like model
- •Logistic model
- •Theoretical model performance
- •Model comparison for risk attitude partitioning.
- •Discussion
- •Authors contributions
- •Ethical clearance
- •Funding
- •Acknowledgements
- •References
- •Markov versus quantum dynamic models of belief change during evidence monitoring
- •Results
- •Model comparisons.
- •Discussion
- •Methods
- •Participants.
- •Task.
- •Procedure.
- •Mathematical Models.
- •Acknowledgements
- •New Developments for Value-based Decisions
- •Context Effects in Preferential Choice
- •Comparison of Model Mechanisms
- •Qualitative Empirical Comparisons
- •Quantitative Empirical Comparisons
- •Neural Mechanisms of Value Accumulation
- •Neuroimaging Studies of Context Effects and Attribute-Wise Decision Processes
- •Concluding Remarks
- •Acknowledgments
- •References
- •Comparison of Markov versus quantum dynamical models of human decision making
- •CONFLICT OF INTEREST
- •Endnotes
- •FURTHER READING
- •REFERENCES
suai.ru/our-contacts |
quantum machine learning |
Introduction to Hilbert Space Multi-Dimensional Modeling |
45 |
3 Steps to Build an HSM Model
An HSM model is constructed in the following six steps:
1. Determine the compatibility and incompatibility relations among the variables.
2. Determine the dimension of the Hilbert space based on the assumed compatibility relations.
3. Define the initial state given the dimension of the Hilbert space.
4. Define the projectors for the variables using unitary transformations to change the basis.
5. Compute the choice probabilities given the initial state and the projectors.
6. Estimate model parameters, and compare the fit of competing models.
3.1 How to Determine the Compatibility Relations
There are two ways to investigate and determine compatibility between a pair of variables. The direct way is to test empirically whether the joint frequencies change depending on the order of presentation: if there are order effects, that is evidence for incompatibility. An indirect way is to compare the fits of models that assume different compatibility relations. This indirect method may be needed when no empirical tests of order effects are available.
3.2 How to Determine the Dimension
The basic idea of HSM modeling is to start with the minimum dimension required, and then add dimensions only if needed to obtain a satisfactory fit to the data. Of course, this model comparison and model selection process needs to provide a reasonable balance between accuracy and parsimony. For example, when fitting the models using maximum likelihood estimation, model comparison indices such as the Bayesian information criterion or the Akaike information criterion can be used.
The minimum dimension is determined by the maximum number of combinations of values produced by the largest set of mutually compatible variables. For example, in Fig. 1, suppose variables B and C are compatible with each other, and variables A and D are compatible with each other, but the pair B, C is incompatible with the pair A, D. In this case, at most two variables are compatible at a time. The B, C pair produces 2 · 4 = 8 combinations of values, but the A, D pair produces 3 · 5 = 15 combinations. The minimum dimension must accommodate all 15 combinations produced by the A, D pair; therefore, the minimum dimension is 15 in this example.
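The minimal-dimension rule above can be sketched in a few lines of code. This is only an illustration of the counting argument; the variable cardinalities (A has 3 values, B has 2, C has 4, D has 5) and the grouping of compatible variables are the ones assumed in the running example.

```python
# Minimal Hilbert-space dimension for an HSM model: the largest product of
# cardinalities over any set of mutually compatible variables.
from math import prod

cardinality = {"A": 3, "B": 2, "C": 4, "D": 5}
# B,C are compatible with each other, as are A,D; the two pairs are
# incompatible with each other.
compatible_groups = [["B", "C"], ["A", "D"]]

# Each compatible group needs one basis vector per combination of its values;
# the space must be large enough for the biggest group.
dims = [prod(cardinality[v] for v in group) for group in compatible_groups]
min_dimension = max(dims)

print(min_dimension)  # -> 15 (the A,D pair: 3 * 5 combinations)
```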
J. Busemeyer and Z. J. Wang
3.3 Define the Initial State
The compatible variables can be chosen to form the basis used to define the coordinates of the initial state ψ. In this example, the space is 15-dimensional, and the compatible pair A, D can be chosen to define the initial basis for the unit-length 15 × 1 column matrix ψ. Each coordinate ψ_ij represents the amplitude corresponding to the pair of values (A = i, D = j), i = 1, 2, 3; j = 1, 2, …, 5, in the initial state. The squared magnitude of a coordinate equals the probability of the combination, p(A = i, D = j) = |ψ_ij|². In general, the coordinates can be complex, but in practice they are usually estimated as real values.
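A minimal numerical sketch of such an initial state follows. The random vector here is merely a placeholder for an estimated state; the only properties the text requires are unit length and the Born-rule reading of the squared magnitudes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Real-valued initial state in the 15-dimensional A,D basis (3 x 5 values),
# normalized to unit length. A random start stands in for estimated amplitudes.
psi = rng.random(15)
psi = psi / np.linalg.norm(psi)

# Reshape so that amplitudes[i, j] is psi_ij, the amplitude of (A = i, D = j).
amplitudes = psi.reshape(3, 5)

# Born rule: p(A = i, D = j) = |psi_ij|^2; the 15 probabilities sum to 1.
prob = amplitudes ** 2
print(prob.sum())
```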
3.4 Define the Projectors
The orthogonal projector for an event that is defined in the initial basis is simply an indicator matrix that picks out the coordinates corresponding to the event. For example, continuing the previous example, the projector for the event (A = i, D = j) is P_{A=i,D=j} = diag(0, …, 1, …, 0), where the one is located in the row corresponding to (i, j); this is a one-dimensional projector. The projector for the event (A = i) equals P_{A=i} = Σ_j P_{A=i,D=j}, and the projector for the event (D = j) equals P_{D=j} = Σ_i P_{A=i,D=j}; these are multi-dimensional projectors.
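These indicator projectors can be constructed directly. The sketch below uses the example's 3 × 5 layout and 0-based indices; the helper names are illustrative, not from the original text.

```python
import numpy as np

nA, nD = 3, 5          # cardinalities of A and D in the running example
dim = nA * nD          # 15-dimensional space

def P_AD(i, j):
    """One-dimensional projector for the event (A = i, D = j), 0-indexed."""
    P = np.zeros((dim, dim))
    P[i * nD + j, i * nD + j] = 1.0
    return P

# Marginal-event projectors are sums over the other variable's values.
def P_A(i):
    return sum(P_AD(i, j) for j in range(nD))   # P_{A=i} = sum_j P_{A=i,D=j}

def P_D(j):
    return sum(P_AD(i, j) for i in range(nA))   # P_{D=j} = sum_i P_{A=i,D=j}
```

A quick check on these definitions: each marginal projector is idempotent, and summing P_A(i) over all i (or P_D(j) over all j) resolves the identity.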
The projectors for incompatible events require changing from the original basis to a new basis for the incompatible variables. For example, suppose we wish to define the events for the variables B, C. If we had originally defined the initial state ψ in the B, C basis from the start, then these projectors would simply be defined by indicator matrices as well. Recall that the dimension of the space is 15, but there are only 8 combinations of values for B, C. Therefore one or more of the combinations for B, C must be defined by a multi-dimensional projector M_{kl}, which is simply an indicator matrix, such as M_{kl} = diag(1, 0, …, 1, 0), that picks out two or more coordinates for the event (B = k, C = l). The collection of indicator matrices {M_{kl}, k = 1, 2; l = 1, 2, 3, 4} forms a complete orthogonal set of projectors. The projector for the event (B = k), in the B, C basis, is simply M_{B=k} = Σ_l M_{B=k,C=l}, and the projector for (C = l) in the B, C basis is M_{C=l} = Σ_k M_{B=k,C=l}.
We did not, however, define the initial state ψ in terms of the B, C basis; instead, we defined it in terms of the A, D basis. Therefore we need to "rotate" from the A, D basis to the B, C basis to form the B, C projectors as follows: P_{B=k,C=l} = U† · M_{kl} · U, where U is a unitary matrix. Now the projector for the event (B = k), in the A, D basis, is P_{B=k} = Σ_l P_{B=k,C=l}, and the projector for (C = l) in the A, D basis is P_{C=l} = Σ_k P_{B=k,C=l}.
Any unitary matrix can be constructed from a Hermitian matrix H = H† by the matrix exponential U = exp(−i · H). Therefore, the most challenging problem is to construct a Hermitian matrix that captures the change of basis. This is facilitated
by using substantive theory from the domain under investigation. We describe this step in more detail in the original articles.
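The basis-change construction can be sketched numerically. Here a randomly generated Hermitian matrix stands in for the substantively motivated H discussed above, and the two coordinates picked out by the indicator M are an arbitrary illustrative choice; the point is only that U = exp(−i·H) is unitary and that U† M U remains a projector.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
dim = 15

# Placeholder Hermitian matrix (real symmetric) for the change of basis.
X = rng.standard_normal((dim, dim))
H = (X + X.T) / 2

# Unitary change of basis: U = exp(-i * H).
U = expm(-1j * H)

# An indicator projector M_kl in the B,C basis picking out two coordinates
# (which two is a hypothetical choice for illustration).
M = np.zeros((dim, dim))
M[0, 0] = 1.0
M[8, 8] = 1.0

# Rotate it into the A,D basis: P_{B=k,C=l} = U^dagger * M_kl * U.
P = U.conj().T @ M @ U
```

The rotated P is no longer a diagonal indicator matrix, but it is still Hermitian and idempotent, and its trace (= 2 here) still counts the dimensions of the subspace it projects onto.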
3.5 Compute the Choice Probabilities
Once the projectors have been defined, it is straightforward to compute the probabilities for any contingency table using the quantum algorithm described earlier. For example, the probabilities for the A, B table are obtained from the equation p(A = i, B = k) = ‖P_{B=k} · P_{A=i} · ψ‖², and the probabilities for the A, B, D table are obtained from the equation p(A = i, B = k, D = j) = ‖P_{D=j} · P_{B=k} · P_{A=i} · ψ‖².
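The sequential-projection rule can be sketched as follows. The state, the Hamiltonian behind U, and the particular indicator chosen for the B event are all placeholders for estimated quantities; only the formula p = ‖P_B P_A ψ‖² is taken from the text.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
dim = 15

# Placeholder unit-length initial state in the A,D basis.
psi = rng.random(dim)
psi = psi / np.linalg.norm(psi)

# Placeholder unitary built from a random Hermitian matrix.
X = rng.standard_normal((dim, dim))
U = expm(-1j * (X + X.T) / 2)

def proj(coords):
    """Indicator projector picking out the listed coordinates."""
    P = np.zeros((dim, dim))
    idx = list(coords)
    P[idx, idx] = 1.0
    return P

P_A1 = proj(range(0, 5))          # event A = 1: coordinates (1, j), j = 1..5
M_B1 = proj(range(0, 8))          # hypothetical B,C-basis indicator for B = 1
P_B1 = U.conj().T @ M_B1 @ U      # same event expressed in the A,D basis

# p(A = 1, B = 1) = ||P_B1 P_A1 psi||^2
p = np.linalg.norm(P_B1 @ P_A1 @ psi) ** 2
```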
3.6 Estimate Model Parameters, Compare and Test Models
Once the model has been defined, the parameters of the initial state ψ and the parameters of the Hamiltonian matrix H can be estimated from the frequencies contained in contingency table data. This can be accomplished using maximum likelihood estimation procedures. Suppose the dimension equals n (n = 15 in our example). If we use a real-valued initial state, then the initial state has n − 1 parameters (because the state is restricted to unit length). If the Hamiltonian is restricted to real values, then the Hamiltonian has (n · (n + 1)/2) − 1 parameters (one diagonal entry is arbitrary). However, it is often possible to use a smaller number of parameters for the Hamiltonian. Model comparison methods, such as the Bayesian information criterion or the Akaike information criterion, can be used to compare models on accuracy adjusted for parsimony (defined by the number of parameters).
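The parameter counts above are easy to tabulate; a small helper (illustrative, not from the original text) makes the bookkeeping explicit.

```python
# Free-parameter counts for an n-dimensional HSM model with a real-valued
# initial state and a real (symmetric) Hamiltonian, as described above.
def hsm_parameter_counts(n):
    state_params = n - 1                        # unit-length constraint removes one
    hamiltonian_params = n * (n + 1) // 2 - 1   # symmetric; one diagonal entry arbitrary
    return state_params, hamiltonian_params

print(hsm_parameter_counts(15))  # -> (14, 119) for the running example
```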
HSM models can also be empirically tested using a generalization criterion. After estimating the projectors from the two-way tables shown in Fig. 1, the model can be used to make a priori predictions for the A by D table or for a three-way table such as A by B by D. This provides strong empirical tests of the model's predictions.
The model also provides interpretable parameters that help make sense of a complex array of contingency tables. The estimate of the initial state ψ captures the initial tendencies to respond to questions: in the previous example, ψ determines the probabilities of responses to the A, D questions, and the rotation U · ψ gives the initial tendencies to respond to the B, C questions. The squared magnitude of an entry of the unitary matrix, |u_jk|², represents the squared correlation between a basis vector representing an event in one basis (e.g., an event in the A, D basis) and a basis vector representing an event in the other basis (e.g., the B, C basis). These squared correlations describe the inter-relations between the variables, independent of the initial state.
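One useful structural fact, implicit in the interpretation above, is that the matrix of squared correlations |u_jk|² is doubly stochastic: because U is unitary, every row and every column sums to 1. A short check, using a placeholder unitary built from a random Hermitian matrix:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n = 15

# Placeholder unitary (stands in for an estimated change-of-basis matrix).
X = rng.standard_normal((n, n))
U = expm(-1j * (X + X.T) / 2)

# |u_jk|^2: squared correlation between basis vector j of one basis (e.g.,
# an A,D event) and basis vector k of the other basis (a B,C event).
sq_corr = np.abs(U) ** 2

# Unitarity makes this matrix doubly stochastic.
print(sq_corr.sum(axis=0).round(6))  # every column sums to 1
print(sq_corr.sum(axis=1).round(6))  # every row sums to 1
```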