- •Preface
- •Contents
- •Contributors
- •Modeling Meaning Associated with Documental Entities: Introducing the Brussels Quantum Approach
- •1 Introduction
- •2 The Double-Slit Experiment
- •3 Interrogative Processes
- •4 Modeling the QWeb
- •5 Adding Context
- •6 Conclusion
- •Appendix 1: Interference Plus Context Effects
- •Appendix 2: Meaning Bond
- •References
- •1 Introduction
- •2 Bell Test in the Problem of Cognitive Semantic Information Retrieval
- •2.1 Bell Inequality and Its Interpretation
- •2.2 Bell Test in Semantic Retrieving
- •3 Results
- •References
- •1 Introduction
- •2 Basics of Quantum Probability Theory
- •3 Steps to Build an HSM Model
- •3.1 How to Determine the Compatibility Relations
- •3.2 How to Determine the Dimension
- •3.5 Compute the Choice Probabilities
- •3.6 Estimate Model Parameters, Compare and Test Models
- •4 Computer Programs
- •5 Concluding Comments
- •References
- •Basics of Quantum Theory for Quantum-Like Modeling Information Retrieval
- •1 Introduction
- •3 Quantum Mathematics
- •3.1 Hermitian Operators in Hilbert Space
- •3.2 Pure and Mixed States: Normalized Vectors and Density Operators
- •4 Quantum Mechanics: Postulates
- •5 Compatible and Incompatible Observables
- •5.1 Post-Measurement State From the Projection Postulate
- •6 Interpretations of Quantum Mechanics
- •6.1 Ensemble and Individual Interpretations
- •6.2 Information Interpretations
- •7 Quantum Conditional (Transition) Probability
- •9 Formula of Total Probability with the Interference Term
- •9.1 Växjö (Realist Ensemble Contextual) Interpretation of Quantum Mechanics
- •10 Quantum Logic
- •11 Space of Square Integrable Functions as a State Space
- •12 Operation of Tensor Product
- •14 Qubit
- •15 Entanglement
- •References
- •1 Introduction
- •2 Background
- •2.1 Distributional Hypothesis
- •2.2 A Brief History of Word Embedding
- •3 Applications of Word Embedding
- •3.1 Word-Level Applications
- •3.2 Sentence-Level Application
- •3.3 Sentence-Pair Level Application
- •3.4 Seq2seq Application
- •3.5 Evaluation
- •4 Reconsidering Word Embedding
- •4.1 Limitations
- •4.2 Trends
- •4.4 Towards Dynamic Word Embedding
- •5 Conclusion
- •References
- •1 Introduction
- •2 Motivating Example: Car Dealership
- •3 Modelling Elementary Data Types
- •3.1 Orthogonal Data Types
- •3.2 Non-orthogonal Data Types
- •4 Data Type Construction
- •5 Quantum-Based Data Type Constructors
- •5.1 Tuple Data Type Constructor
- •5.2 Set Data Type Constructor
- •6 Conclusion
- •References
- •Incorporating Weights into a Quantum-Logic-Based Query Language
- •1 Introduction
- •2 A Motivating Example
- •5 Logic-Based Weighting
- •6 Related Work
- •7 Conclusion
- •References
- •Searching for Information with Meet and Join Operators
- •1 Introduction
- •2 Background
- •2.1 Vector Spaces
- •2.2 Sets Versus Vector Spaces
- •2.3 The Boolean Model for IR
- •2.5 The Probabilistic Models
- •3 Meet and Join
- •4 Structures of a Query-by-Theme Language
- •4.1 Features and Terms
- •4.2 Themes
- •4.3 Document Ranking
- •4.4 Meet and Join Operators
- •5 Implementation of a Query-by-Theme Language
- •6 Related Work
- •7 Discussion and Future Work
- •References
- •Index
- •Preface
- •Organization
- •Contents
- •Fundamentals
- •Why Should We Use Quantum Theory?
- •1 Introduction
- •2 On the Human Science/Natural Science Issue
- •3 The Human Roots of Quantum Science
- •4 Qualitative Parallels Between Quantum Theory and the Human Sciences
- •5 Early Quantitative Applications of Quantum Theory to the Human Sciences
- •6 Epilogue
- •References
- •Quantum Cognition
- •1 Introduction
- •2 The Quantum Persuasion Approach
- •3 Experimental Design
- •3.1 Testing for Perspective Incompatibility
- •3.2 Quantum Persuasion
- •3.3 Predictions
- •4 Results
- •4.1 Descriptive Statistics
- •4.2 Data Analysis
- •4.3 Interpretation
- •5 Discussion and Concluding Remarks
- •References
- •1 Introduction
- •2 A Probabilistic Fusion Model of Trust
- •3 Contextuality
- •4 Experiment
- •4.1 Subjects
- •4.2 Design and Materials
- •4.3 Procedure
- •4.4 Results
- •4.5 Discussion
- •5 Summary and Conclusions
- •References
- •Probabilistic Programs for Investigating Contextuality in Human Information Processing
- •1 Introduction
- •2 A Framework for Determining Contextuality in Human Information Processing
- •3 Using Probabilistic Programs to Simulate Bell Scenario Experiments
- •References
- •1 Familiarity and Recollection, Verbatim and Gist
- •2 True Memory, False Memory, over Distributed Memory
- •3 The Hamiltonian Based QEM Model
- •4 Data and Prediction
- •5 Discussion
- •References
- •Decision-Making
- •1 Introduction
- •1.2 Two Stage Gambling Game
- •2 Quantum Probabilities and Waves
- •2.1 Intensity Waves
- •2.2 The Law of Balance and Probability Waves
- •2.3 Probability Waves
- •3 Law of Maximal Uncertainty
- •3.1 Principle of Entropy
- •3.2 Mirror Principle
- •4 Conclusion
- •References
- •1 Introduction
- •4 Quantum-Like Bayesian Networks
- •7.1 Results and Discussion
- •8 Conclusion
- •References
- •Cybernetics and AI
- •1 Introduction
- •2 Modeling of the Vehicle
- •2.1 Introduction to Braitenberg Vehicles
- •2.2 Quantum Approach for BV Decision Making
- •3 Topics in Eigenlogic
- •3.1 The Eigenlogic Operators
- •3.2 Incorporation of Fuzzy Logic
- •4 BV Quantum Robot Simulation Results
- •4.1 Simulation Environment
- •5 Quantum Wheel of Emotions
- •6 Discussion and Conclusion
- •7 Credits and Acknowledgements
- •References
- •1 Introduction
- •2.1 What Is Intelligence?
- •2.2 Human Intelligence and Quantum Cognition
- •2.3 In Search of the General Principles of Intelligence
- •3 Towards a Moral Test
- •4 Compositional Quantum Cognition
- •4.1 Categorical Compositional Model of Meaning
- •4.2 Proof of Concept: Compositional Quantum Cognition
- •5 Implementation of a Moral Test
- •5.2 Step II: A Toy Example, Moral Dilemmas and Context Effects
- •5.4 Step IV. Application for AI
- •6 Discussion and Conclusion
- •Appendix A: Example of a Moral Dilemma
- •References
- •Probability and Beyond
- •1 Introduction
- •2 The Theory of Density Hypercubes
- •2.1 Construction of the Theory
- •2.2 Component Symmetries
- •2.3 Normalisation and Causality
- •3 Decoherence and Hyper-decoherence
- •3.1 Decoherence to Classical Theory
- •4 Higher Order Interference
- •5 Conclusions
- •A Proofs
- •References
- •Information Retrieval
- •1 Introduction
- •2 Related Work
- •3 Quantum Entanglement and Bell Inequality
- •5 Experiment Settings
- •5.1 Dataset
- •5.3 Experimental Procedure
- •6 Results and Discussion
- •7 Conclusion
- •A Appendix
- •References
- •Investigating Bell Inequalities for Multidimensional Relevance Judgments in Information Retrieval
- •1 Introduction
- •2 Quantifying Relevance Dimensions
- •3 Deriving a Bell Inequality for Documents
- •3.1 CHSH Inequality
- •3.2 CHSH Inequality for Documents Using the Trace Method
- •4 Experiment and Results
- •5 Conclusion and Future Work
- •A Appendix
- •References
- •Short Paper
- •An Update on Updating
- •References
- •Author Index
- •The Sure Thing principle, the Disjunction Effect and the Law of Total Probability
- •Material and methods
- •Experimental results.
- •Experiment 1
- •Experiment 2
- •More versus less risk averse participants
- •Theoretical analysis
- •Shared features of the theoretical models
- •The Markov model
- •The quantum-like model
- •Logistic model
- •Theoretical model performance
- •Model comparison for risk attitude partitioning.
- •Discussion
- •Authors contributions
- •Ethical clearance
- •Funding
- •Acknowledgements
- •References
- •Markov versus quantum dynamic models of belief change during evidence monitoring
- •Results
- •Model comparisons.
- •Discussion
- •Methods
- •Participants.
- •Task.
- •Procedure.
- •Mathematical Models.
- •Acknowledgements
- •New Developments for Value-based Decisions
- •Context Effects in Preferential Choice
- •Comparison of Model Mechanisms
- •Qualitative Empirical Comparisons
- •Quantitative Empirical Comparisons
- •Neural Mechanisms of Value Accumulation
- •Neuroimaging Studies of Context Effects and Attribute-Wise Decision Processes
- •Concluding Remarks
- •Acknowledgments
- •References
- •Comparison of Markov versus quantum dynamical models of human decision making
- •CONFLICT OF INTEREST
- •Endnotes
- •FURTHER READING
- •REFERENCES
An Update on Updating
Bart Jacobs(B)
Institute for Computing and Information Sciences, Radboud University, Nijmegen, The Netherlands bart@cs.ru.nl
The main aim of this short contribution is to give an introduction to some challenging research issues with respect to updating and probabilistic logic, together with some relevant references. We use the word 'update' for what is also called 'belief update' or (probabilistic) 'conditioning'. It involves the adaptation of a probability distribution in the light of certain evidence. Such updating is typically expressed via conditional probabilities and is governed by Bayes' rule. It is a fascinating topic, with wide applications, ranging from statistical data analysis to cognition theory (see e.g. [3, 9]).
Updating exists both in classical probability and in quantum probability. One of the key characteristics of updating in a quantum setting is that it is not commutative: successive updates do not commute. This forms a basis for using quantum theory in cognition theory [1] since the human mind is also very sensitive to the order in which information is presented—or, in different words, to the order of priming. The study of quantum updating is still in its infancy, but already two different mathematical definitions have appeared, called 'lower' and 'upper' conditioning in [5], see also [2, 7]. Interestingly, the lower version satisfies the product rule, whereas the upper version satisfies Bayes' rule proper. Classically, there is no difference between these two rules (see [5] for details).
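The non-commutativity of successive quantum updates can already be seen with the familiar Lüders update rule for sharp measurements (a standard presentation of quantum conditioning; see [5] for the precise lower/upper definitions, which this sketch does not reproduce). The helper name `luders_update` below is ours, purely for illustration:

```python
import numpy as np

def luders_update(rho, p):
    """Update a density matrix rho with a projector p via the
    Lueders rule: rho -> p rho p / tr(rho p)."""
    return (p @ rho @ p) / np.trace(rho @ p)

# maximally mixed qubit state (no prior information)
rho = np.eye(2) / 2

# two non-commuting projectors: Z-basis and X-basis outcomes
p_z = np.array([[1.0, 0.0], [0.0, 0.0]])        # |0><0|
plus = np.array([[1.0], [1.0]]) / np.sqrt(2)
p_x = plus @ plus.T                              # |+><+|

# update with p_z then p_x, and in the opposite order
rho_zx = luders_update(luders_update(rho, p_z), p_x)
rho_xz = luders_update(luders_update(rho, p_x), p_z)

print(np.allclose(rho_zx, rho_xz))  # False: the order of updating matters
```

Starting from the same uninformative state, the two orders end in the orthogonal-looking states |+⟩⟨+| and |0⟩⟨0| respectively: quantum updating is order-sensitive even when the prior carries no information at all.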
In classical probability things seem to be better understood. But that is only because in practice people mostly restrict themselves to sharp evidence, given by subsets of the space at hand. These subsets are used as predicates in updating. The situation changes when soft or fuzzy evidence is allowed, of the form: "I was 80% sure that I heard the alarm." Updating with fuzzy evidence can be done basically in two ways, called 'constructive' (following Pearl) or 'destructive' (following Jeffrey), see [4]. Constructive and destructive updating agree on point evidence, but they can give completely different outcomes when applied with the same (soft) evidence (and the same prior). It is unclear which version of updating should be applied when. This is a bit worrying. Should we start asking our doctors: did you arrive at this most likely diagnosis via constructive or destructive updating?
Constructive updating involves a smooth integration of the prior distribution with the evidence, following the standard formula: posterior ∝ prior · likelihood. Constructive updating is commutative. It has the property that if the evidence contains no information (is constant/uniform), then you learn nothing new from updating.
Destructive updating involves overriding the prior by the evidence. As a result, it is not commutative. If the evidence is what we can predict, then we learn nothing new from destructive updating. This also makes sense. Given the precise mathematical distinction between constructive and destructive updating in [4], the question also arises: which form of updating best matches cognitive experiments?

© Springer Nature Switzerland AG 2019. B. Coecke and A. Lambert-Mogiliansky (Eds.): QI 2018, LNCS 11690, pp. 191–192, 2019. https://doi.org/10.1007/978-3-030-35895-2
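The contrast between the two classical rules can be made concrete on a small finite space (an illustrative sketch with made-up numbers, not the formalization of [4]): destructive (Jeffrey-style) updating overrides the prior's probabilities for the cells of a partition, so evidence the prior already predicts changes nothing, while constructive (Pearl-style) updating with the same soft evidence still shifts the distribution; on sharp evidence the two rules agree:

```python
def pearl_update(prior, evidence):
    """Constructive updating: posterior proportional to prior * evidence."""
    unnorm = [p * e for p, e in zip(prior, evidence)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

def jeffrey_update(prior, partition, new_cell_probs):
    """Destructive updating: the evidence overrides the prior's cell
    probabilities; conditionals inside each cell are preserved."""
    posterior = [0.0] * len(prior)
    for cell, q in zip(partition, new_cell_probs):
        cell_mass = sum(prior[i] for i in cell)
        for i in cell:
            posterior[i] = q * prior[i] / cell_mass
    return posterior

prior = [0.5, 0.3, 0.2]
partition = [[0, 1], [2]]    # two cells: {0, 1} and {2}

# Soft evidence: "80% sure the first cell holds". The prior already
# assigns that cell probability 0.8, so destructive updating changes
# nothing, while constructive updating still moves the distribution.
jeff = jeffrey_update(prior, partition, [0.8, 0.2])
pearl = pearl_update(prior, [0.8, 0.8, 0.2])
assert all(abs(j - p) < 1e-9 for j, p in zip(jeff, prior))   # unchanged
assert abs(pearl[0] - prior[0]) > 0.05                       # shifted

# On point (sharp) evidence the two rules agree.
assert jeffrey_update(prior, partition, [0.0, 1.0]) == [0.0, 0.0, 1.0]
assert pearl_update(prior, [0.0, 0.0, 1.0]) == [0.0, 0.0, 1.0]
```

The run with predictable soft evidence illustrates the "learn nothing new" property of destructive updating stated above, and the final two assertions show the agreement on point evidence.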
Thus, in the end we have four forms of updating: two quantum ones (lower and upper) and two classical ones (constructive and destructive). Clearly more research is needed to understand this situation. Part of this research should involve developing a proper probabilistic language for expressing logical and computational properties (see also [6]). It is an embarrassment to the field that no widely accepted and used probabilistic symbolic logic exists so far. Developing such a logic is by no means an easy task, for instance because probabilistic updating leads to non-monotonicity: adding assumptions may weaken the validity of the conclusion. Non-monotonicity is avoided by most logicians. However, it is quite natural in a probabilistic setting, as becomes clear in the quote below from [8] with which we conclude.
To those trained in traditional logics, symbolic reasoning is the standard, and nonmonotonicity a novelty. To students of probability, on the other hand, it is symbolic reasoning that is novel, not nonmonotonicity. Dealing with new facts that cause probabilities to change abruptly from very high values to very low values is a commonplace phenomenon in almost every probabilistic exercise and, naturally, has attracted special attention among probabilists. The new challenge for probabilists is to find ways of abstracting out the numerical character of high and low probabilities, and cast them in linguistic terms that reflect the natural process of accepting and retracting beliefs.
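The abrupt high-to-low probability change described in this quote is easy to reproduce: in the classic birds-and-penguins example (with made-up illustrative weights), learning the additional fact "penguin" flips a near-certain conclusion to an impossible one:

```python
# toy worlds (bird, penguin, flies) with made-up illustrative weights
worlds = {
    (True,  False, True ): 0.85,   # ordinary flying birds
    (True,  False, False): 0.04,   # other flightless cases
    (True,  True,  False): 0.01,   # penguins: birds that do not fly
    (False, False, False): 0.10,   # non-birds
}

def prob(pred, given=lambda w: True):
    """Conditional probability of pred given the condition 'given'."""
    num = sum(p for w, p in worlds.items() if pred(w) and given(w))
    den = sum(p for w, p in worlds.items() if given(w))
    return num / den

flies = lambda w: w[2]
bird = lambda w: w[0]
bird_and_penguin = lambda w: w[0] and w[1]

print(prob(flies, bird))              # high, about 0.94
print(prob(flies, bird_and_penguin))  # 0.0: the added fact reverses it
```

Adding the assumption "penguin" weakens (here: destroys) the conclusion "flies", exactly the non-monotone behaviour that probabilists treat as commonplace.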
References
1. Busemeyer, J., Bruza, P.: Quantum Models of Cognition and Decision. Cambridge University Press, Cambridge (2012)
2. Coecke, B., Spekkens, R.: Picturing classical and quantum Bayesian inference. Synthese 186(3), 651–696 (2012)
3. Hohwy, J.: The Predictive Mind. Oxford University Press, Oxford (2013)
4. Jacobs, B.: A mathematical account of soft evidence, and of Jeffrey's 'destructive' versus Pearl's 'constructive' updating. arXiv:1807.05609 (2018)
5. Jacobs, B.: Lower and upper conditioning in quantum Bayesian theory. In: Quantum Physics and Logic, EPTCS (2018)
6. Jacobs, B., Zanasi, F.: The logical essentials of Bayesian reasoning. arXiv:1804.01193; book chapter, to appear
7. Leifer, M., Spekkens, R.: Towards a formulation of quantum theory as a causally neutral theory of Bayesian inference. Phys. Rev. A 88(5), 052130 (2013)
8. Pearl, J.: Probabilistic semantics for nonmonotonic reasoning: a survey. In: Brachman, R., Levesque, H., Reiter, R. (eds.) First International Conference on Principles of Knowledge Representation and Reasoning, pp. 505–516. Morgan Kaufmann, San Mateo (1989)
9. Sloman, S.: Causal Models. How People Think About the World and Its Alternatives. Oxford University Press, Oxford (2005)