- •Preface
- •Contents
- •Contributors
- •Modeling Meaning Associated with Documental Entities: Introducing the Brussels Quantum Approach
- •1 Introduction
- •2 The Double-Slit Experiment
- •3 Interrogative Processes
- •4 Modeling the QWeb
- •5 Adding Context
- •6 Conclusion
- •Appendix 1: Interference Plus Context Effects
- •Appendix 2: Meaning Bond
- •References
- •1 Introduction
- •2 Bell Test in the Problem of Cognitive Semantic Information Retrieval
- •2.1 Bell Inequality and Its Interpretation
- •2.2 Bell Test in Semantic Retrieving
- •3 Results
- •References
- •1 Introduction
- •2 Basics of Quantum Probability Theory
- •3 Steps to Build an HSM Model
- •3.1 How to Determine the Compatibility Relations
- •3.2 How to Determine the Dimension
- •3.5 Compute the Choice Probabilities
- •3.6 Estimate Model Parameters, Compare and Test Models
- •4 Computer Programs
- •5 Concluding Comments
- •References
- •Basics of Quantum Theory for Quantum-Like Modeling Information Retrieval
- •1 Introduction
- •3 Quantum Mathematics
- •3.1 Hermitian Operators in Hilbert Space
- •3.2 Pure and Mixed States: Normalized Vectors and Density Operators
- •4 Quantum Mechanics: Postulates
- •5 Compatible and Incompatible Observables
- •5.1 Post-Measurement State From the Projection Postulate
- •6 Interpretations of Quantum Mechanics
- •6.1 Ensemble and Individual Interpretations
- •6.2 Information Interpretations
- •7 Quantum Conditional (Transition) Probability
- •9 Formula of Total Probability with the Interference Term
- •9.1 Växjö (Realist Ensemble Contextual) Interpretation of Quantum Mechanics
- •10 Quantum Logic
- •11 Space of Square Integrable Functions as a State Space
- •12 Operation of Tensor Product
- •14 Qubit
- •15 Entanglement
- •References
- •1 Introduction
- •2 Background
- •2.1 Distributional Hypothesis
- •2.2 A Brief History of Word Embedding
- •3 Applications of Word Embedding
- •3.1 Word-Level Applications
- •3.2 Sentence-Level Application
- •3.3 Sentence-Pair Level Application
- •3.4 Seq2seq Application
- •3.5 Evaluation
- •4 Reconsidering Word Embedding
- •4.1 Limitations
- •4.2 Trends
- •4.4 Towards Dynamic Word Embedding
- •5 Conclusion
- •References
- •1 Introduction
- •2 Motivating Example: Car Dealership
- •3 Modelling Elementary Data Types
- •3.1 Orthogonal Data Types
- •3.2 Non-orthogonal Data Types
- •4 Data Type Construction
- •5 Quantum-Based Data Type Constructors
- •5.1 Tuple Data Type Constructor
- •5.2 Set Data Type Constructor
- •6 Conclusion
- •References
- •Incorporating Weights into a Quantum-Logic-Based Query Language
- •1 Introduction
- •2 A Motivating Example
- •5 Logic-Based Weighting
- •6 Related Work
- •7 Conclusion
- •References
- •Searching for Information with Meet and Join Operators
- •1 Introduction
- •2 Background
- •2.1 Vector Spaces
- •2.2 Sets Versus Vector Spaces
- •2.3 The Boolean Model for IR
- •2.5 The Probabilistic Models
- •3 Meet and Join
- •4 Structures of a Query-by-Theme Language
- •4.1 Features and Terms
- •4.2 Themes
- •4.3 Document Ranking
- •4.4 Meet and Join Operators
- •5 Implementation of a Query-by-Theme Language
- •6 Related Work
- •7 Discussion and Future Work
- •References
- •Index
- •Preface
- •Organization
- •Contents
- •Fundamentals
- •Why Should We Use Quantum Theory?
- •1 Introduction
- •2 On the Human Science/Natural Science Issue
- •3 The Human Roots of Quantum Science
- •4 Qualitative Parallels Between Quantum Theory and the Human Sciences
- •5 Early Quantitative Applications of Quantum Theory to the Human Sciences
- •6 Epilogue
- •References
- •Quantum Cognition
- •1 Introduction
- •2 The Quantum Persuasion Approach
- •3 Experimental Design
- •3.1 Testing for Perspective Incompatibility
- •3.2 Quantum Persuasion
- •3.3 Predictions
- •4 Results
- •4.1 Descriptive Statistics
- •4.2 Data Analysis
- •4.3 Interpretation
- •5 Discussion and Concluding Remarks
- •References
- •1 Introduction
- •2 A Probabilistic Fusion Model of Trust
- •3 Contextuality
- •4 Experiment
- •4.1 Subjects
- •4.2 Design and Materials
- •4.3 Procedure
- •4.4 Results
- •4.5 Discussion
- •5 Summary and Conclusions
- •References
- •Probabilistic Programs for Investigating Contextuality in Human Information Processing
- •1 Introduction
- •2 A Framework for Determining Contextuality in Human Information Processing
- •3 Using Probabilistic Programs to Simulate Bell Scenario Experiments
- •References
- •1 Familiarity and Recollection, Verbatim and Gist
- •2 True Memory, False Memory, over Distributed Memory
- •3 The Hamiltonian Based QEM Model
- •4 Data and Prediction
- •5 Discussion
- •References
- •Decision-Making
- •1 Introduction
- •1.2 Two Stage Gambling Game
- •2 Quantum Probabilities and Waves
- •2.1 Intensity Waves
- •2.2 The Law of Balance and Probability Waves
- •2.3 Probability Waves
- •3 Law of Maximal Uncertainty
- •3.1 Principle of Entropy
- •3.2 Mirror Principle
- •4 Conclusion
- •References
- •1 Introduction
- •4 Quantum-Like Bayesian Networks
- •7.1 Results and Discussion
- •8 Conclusion
- •References
- •Cybernetics and AI
- •1 Introduction
- •2 Modeling of the Vehicle
- •2.1 Introduction to Braitenberg Vehicles
- •2.2 Quantum Approach for BV Decision Making
- •3 Topics in Eigenlogic
- •3.1 The Eigenlogic Operators
- •3.2 Incorporation of Fuzzy Logic
- •4 BV Quantum Robot Simulation Results
- •4.1 Simulation Environment
- •5 Quantum Wheel of Emotions
- •6 Discussion and Conclusion
- •7 Credits and Acknowledgements
- •References
- •1 Introduction
- •2.1 What Is Intelligence?
- •2.2 Human Intelligence and Quantum Cognition
- •2.3 In Search of the General Principles of Intelligence
- •3 Towards a Moral Test
- •4 Compositional Quantum Cognition
- •4.1 Categorical Compositional Model of Meaning
- •4.2 Proof of Concept: Compositional Quantum Cognition
- •5 Implementation of a Moral Test
- •5.2 Step II: A Toy Example, Moral Dilemmas and Context Effects
- •5.4 Step IV. Application for AI
- •6 Discussion and Conclusion
- •Appendix A: Example of a Moral Dilemma
- •References
- •Probability and Beyond
- •1 Introduction
- •2 The Theory of Density Hypercubes
- •2.1 Construction of the Theory
- •2.2 Component Symmetries
- •2.3 Normalisation and Causality
- •3 Decoherence and Hyper-decoherence
- •3.1 Decoherence to Classical Theory
- •4 Higher Order Interference
- •5 Conclusions
- •A Proofs
- •References
- •Information Retrieval
- •1 Introduction
- •2 Related Work
- •3 Quantum Entanglement and Bell Inequality
- •5 Experiment Settings
- •5.1 Dataset
- •5.3 Experimental Procedure
- •6 Results and Discussion
- •7 Conclusion
- •A Appendix
- •References
- •Investigating Bell Inequalities for Multidimensional Relevance Judgments in Information Retrieval
- •1 Introduction
- •2 Quantifying Relevance Dimensions
- •3 Deriving a Bell Inequality for Documents
- •3.1 CHSH Inequality
- •3.2 CHSH Inequality for Documents Using the Trace Method
- •4 Experiment and Results
- •5 Conclusion and Future Work
- •A Appendix
- •References
- •Short Paper
- •An Update on Updating
- •References
- •Author Index
- •The Sure Thing principle, the Disjunction Effect and the Law of Total Probability
- •Material and methods
- •Experimental results.
- •Experiment 1
- •Experiment 2
- •More versus less risk averse participants
- •Theoretical analysis
- •Shared features of the theoretical models
- •The Markov model
- •The quantum-like model
- •Logistic model
- •Theoretical model performance
- •Model comparison for risk attitude partitioning.
- •Discussion
- •Authors contributions
- •Ethical clearance
- •Funding
- •Acknowledgements
- •References
- •Markov versus quantum dynamic models of belief change during evidence monitoring
- •Results
- •Model comparisons.
- •Discussion
- •Methods
- •Participants.
- •Task.
- •Procedure.
- •Mathematical Models.
- •Acknowledgements
- •New Developments for Value-based Decisions
- •Context Effects in Preferential Choice
- •Comparison of Model Mechanisms
- •Qualitative Empirical Comparisons
- •Quantitative Empirical Comparisons
- •Neural Mechanisms of Value Accumulation
- •Neuroimaging Studies of Context Effects and Attribute-Wise Decision Processes
- •Concluding Remarks
- •Acknowledgments
- •References
- •Comparison of Markov versus quantum dynamical models of human decision making
- •CONFLICT OF INTEREST
- •Endnotes
- •FURTHER READING
- •REFERENCES
suai.ru/our-contacts |
quantum machine learning |
154 |
E. Di Buccio and M. Melucci |
Meet and join only resemble the intersection and union of sets. In fact, some properties of set operators no longer hold for meet and join; for example, the distributive law holds for sets, but it does not hold for subspaces of a vector space.
From the point of view of information access, an interpretation of meet and join is necessary. The interpretation provided in this chapter for these two operators starts from the notion of basis. A basis vector mathematically represents a basic concept corresponding to data such as keywords or terms. When the basis is canonical, the basis vectors represent the starting vocabulary through which the content of documents and queries is produced and matched. When a document and a query, or two documents, share a concept, their mathematical representations share a basis vector with non-null weight.
Consider the meet of two distinct planes. The result is a ray, that is, a one-dimensional subspace spanned by a single vector; indeed, the vector spanning the ray is the only basis vector of this subspace. Since a basis vector can be the mathematical representation of a basic concept, the meet of two planes can also represent a basic concept. The two planes meeting at the basis vector represent information sharing one concept, namely, the concept represented by the meet: the vector resulting from the meet may be a basis vector for both planes, provided that each plane is spanned by a basis consisting of the meet and another independent vector.
Consider the join of two distinct rays. The result is a plane, that is, a two-dimensional subspace spanned by two vectors, namely, the vectors spanning the two rays. The plane resulting from the join represents information based on two concepts, i.e., the concept represented by the basis vector of one ray and the concept represented by the basis vector of the other. Indeed, every vector belonging to the plane is expressed by the two basis vectors, each representing one individual concept.
Strictly speaking, meet and join are only mathematical representations, and nothing can be concluded about the meaning of what they represent. Nevertheless, if the planes meeting at a basis vector represent information sharing one concept, and the rays joined into a plane represent information consisting of two concepts, then the vector resulting from the meet of two planes, and the subspace resulting from the join of two rays, may be viewed as sensible mathematical representations of complex concepts.
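As a concrete illustration of these geometric operators, meet and join can be computed numerically. The sketch below is ours, not part of the chapter: it assumes each subspace is given as a matrix whose columns span it, and checks that two distinct planes in R³ meet in a ray and join to the whole space.

```python
import numpy as np
from scipy.linalg import null_space, orth

def join(A, B):
    """Join: the smallest subspace containing both column spaces,
    i.e. the span of all columns of A and B together."""
    return orth(np.hstack([A, B]))

def meet(A, B):
    """Meet: the intersection of the two column spaces.
    x lies in both spans iff x = A u = B v for some u, v, i.e.
    [A | -B] [u; v] = 0; each null-space vector then yields A u."""
    N = null_space(np.hstack([A, -B]))
    if N.size == 0:
        return np.zeros((A.shape[0], 0))   # trivial intersection {0}
    return orth(A @ N[:A.shape[1], :])

# Two distinct planes in R^3: the xy-plane and the xz-plane.
P1 = np.array([[1., 0.], [0., 1.], [0., 0.]])
P2 = np.array([[1., 0.], [0., 0.], [0., 1.]])

print(meet(P1, P2).shape[1])   # 1: the meet is a ray (the x-axis)
print(join(P1, P2).shape[1])   # 3: the join is all of R^3
```

Note that, unlike set intersection, the meet of two planes here is computed through a null space, not by comparing elements; this is exactly the sense in which meet and join only resemble intersection and union.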
4 Structures of a Query-by-Theme Language
This section introduces the building blocks of a QTL. First, features and terms are introduced in Sect. 4.1. Then, Sect. 4.2 presents themes, which are further exploited to rank documents as explained in Sect. 4.3. Finally, the composition of themes using meet and join is described in Sect. 4.4. Table 2 summarizes the notation used in this chapter.

Table 2 Notations used in this chapter

| Symbol | Meaning | Comment |
|--------|---------|---------|
| \|w⟩ | Feature | Textual keyword, or another modality depending on the media |
| \|t⟩ | Term | Unigrams, bigrams, trigrams, etc., such as "information retrieval" and "quantum theory" |
| \|τ⟩ | Theme | Expressions built from terms, such as combinations of "information retrieval" and "quantum theory" |
| \|φ⟩ | Document | Webpages, news, posts, etc. |
4.1 Features and Terms
Consider the features extracted from a collection of documents; for example, a word is a textual feature, the gray level of a pixel or a code word of an image is a visual feature, and a chroma-based descriptor for content-based music representation is an audio feature. Despite their differences, the features extracted from a collection of multimedia or multimodal documents can co-exist in the same vector space if each feature is represented by a canonical basis vector. Consider k distinct features and the following.
Definition 7 (Term) Given the canonical basis¹ |e₁⟩, …, |e_k⟩ of a subspace over the real field, a term is defined as

$$|t\rangle = \sum_{i=1}^{k} a_i\,|e_i\rangle = (a_1, \ldots, a_k), \qquad a_i \in \mathbb{R},$$

where the a's are the coefficients with respect to the basis. Therefore, terms are a combination of features; for example, if k = 2 with textual features, say "information" and "retrieval", then "information retrieval" is a term represented by

$$|\text{information retrieval}\rangle = a_{\text{information}}\,|\text{information}\rangle + a_{\text{retrieval}}\,|\text{retrieval}\rangle.$$
The main difference between features and terms lies in orthogonality: the feature vectors are assumed to be mutually orthogonal, whereas the term vectors are only assumed to be mutually independent. Non-orthogonal independence also distinguishes the QTL from the VSM, since term vectors might not be, and often are not, orthogonal, whereas keyword vectors are usually assumed orthogonal; for example,

¹ The i-th canonical basis vector has k − 1 zeros and a 1 at the i-th component.
$$|\text{retrieval system}\rangle = a_{\text{system}}\,|\text{system}\rangle + a_{\text{retrieval}}\,|\text{retrieval}\rangle$$

is not orthogonal to |information retrieval⟩, yet it is still independent of it.
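Definition 7 can be made concrete with a small numerical sketch. The three-feature vocabulary and the coefficient values below are our own illustrative choices, not values from the chapter: term vectors are linear combinations of canonical feature vectors, and two terms sharing a feature are independent but not orthogonal.

```python
import numpy as np

# Canonical feature basis: one axis of R^3 per feature.
features = ["information", "retrieval", "system"]
e = {w: np.eye(len(features))[i] for i, w in enumerate(features)}

def term(coeffs):
    """|t> = sum_i a_i |e_i> for a dict {feature: coefficient}."""
    return sum(a * e[w] for w, a in coeffs.items())

t1 = term({"information": 0.8, "retrieval": 0.6})   # |information retrieval>
t2 = term({"system": 0.8, "retrieval": 0.6})        # |retrieval system>

# Independent (neither is a multiple of the other) but not orthogonal,
# because they share the "retrieval" feature with non-null weight:
print(np.dot(t1, t2))                                    # 0.36
print(np.linalg.matrix_rank(np.column_stack([t1, t2])))  # 2
```

The non-zero inner product is exactly the "non-orthogonal independence" that distinguishes the QTL from the VSM.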
4.2 Themes
Consider a vector space over the real field and the following:²
Definition 8 (Theme) Given m independent term vectors |t₁⟩, …, |tₘ⟩, where 1 ≤ m ≤ k, a theme is represented by the m-dimensional subspace of all vectors of the form

$$|\tau\rangle = b_1\,|t_1\rangle + \cdots + b_m\,|t_m\rangle, \qquad b_i \in \mathbb{R}.$$
From the definition, one can see that a feature is a term and that a term is the simplest form of a theme; in particular, a term is a one-dimensional theme. Moreover, if |t⟩ is a term, then c|t⟩ is the same term for every non-zero c ∈ ℝ.
Themes can be combined to define more complex themes. To start with, a theme can be represented by a one-dimensional subspace (i.e., a ray): if |t⟩ represents a term, then |τ⟩ = b|t⟩ spans a ray and represents a theme. A theme can also be represented by a two-dimensional subspace (i.e., a plane) in the k-dimensional space: if |t₁⟩ and |t₂⟩ are term vectors, then b₁|t₁⟩ + b₂|t₂⟩ spans a plane representing a theme.
In general, a theme may be represented by a multi-dimensional subspace obtained in different ways; for example, "information retrieval systems" can be a term represented as a linear combination of three feature vectors (e.g., keywords), or it can be a theme represented by a linear combination of two or more term vectors such as "information retrieval", "retrieval", "systems", "retrieval systems", or "information". Therefore, the correspondence between themes and subspaces is more complex than the correspondence between keywords and vectors in the VSM. The conceptual relationships between themes are depicted in Fig. 2.
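The "information retrieval systems" example can be sketched numerically: as a single term the expression spans a ray, while as a combination of two term vectors it spans a plane. The coefficient values below are illustrative assumptions, not values from the chapter.

```python
import numpy as np
from scipy.linalg import orth

# Feature basis: "information", "retrieval", "systems" (canonical basis of R^3).

# As one term, "information retrieval systems" is a single vector, i.e. a ray:
t = np.array([0.6, 0.6, 0.52])
dim_as_term = orth(t.reshape(-1, 1)).shape[1]

# As a theme spanned by the terms "information retrieval" and
# "retrieval systems", it is a two-dimensional subspace, i.e. a plane:
t1 = np.array([0.8, 0.6, 0.0])
t2 = np.array([0.0, 0.6, 0.8])
dim_as_theme = orth(np.column_stack([t1, t2])).shape[1]

print(dim_as_term, dim_as_theme)   # 1 2
```

The same phrase thus maps to subspaces of different dimensions depending on how it is decomposed, which is why the theme-to-subspace correspondence is richer than the keyword-to-vector correspondence of the VSM.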
4.3 Document Ranking
A document is represented by a vector

$$|\phi\rangle = (c_1, \ldots, c_m), \qquad c_i \in \mathbb{R},$$

² Note that we are overloading the symbol |τ⟩ to mean both the theme subspace and a vector of that subspace.
Fig. 2 A pictorial representation of features, terms, and themes. Three feature vectors |e₁⟩, |e₂⟩, |e₃⟩ span a tridimensional vector space, but they are not depicted for the sake of clarity; the reader may suppose that the feature vectors are any triple of independent vectors. Each term vector |t⟩ can be spanned by any subset of feature vectors; for example, |t₁⟩ = a₁,₁|e₁⟩ + a₁,₂|e₂⟩ for some a₁,₁, a₁,₂. A theme can be represented by a subspace spanned by term vectors; for example, |t₁⟩ and |t₂⟩ span a bi-dimensional subspace representing a theme and including all vectors |τ₁⟩ = b₁,₁|t₁⟩ + b₁,₂|t₂⟩
expressed on the same basis as that used for themes and terms, such that cᵢ is the measure of the degree to which the document represented by the vector is about term i.
The ranking rule for measuring the degree to which a document is about a theme relies on the theory of abstract vector spaces. To measure this degree, a representation of the document and a representation of the theme in the same vector space are necessary. A document and a theme share the same representation if they are expressed with respect to the same basis of m term vectors. When orthogonality holds, a ranking rule is the squared projection of |φ⟩ onto the subspace spanned by the m term vectors, as explained in [17].
To describe the implementation of the ranking rule, projectors are necessary. To this end, an orthogonal basis of the same subspace can be obtained through a linear transformation of the |t⟩'s. Let {|v₁⟩, …, |vₘ⟩} be such an orthogonal basis, which determines the projector onto the subspace as follows:

$$\tau = |v_1\rangle\langle v_1| + \cdots + |v_m\rangle\langle v_m|.$$
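A minimal sketch of this projector construction and of the squared-projection score, assuming toy term vectors (the names and coefficient values are ours): orthonormalize the term vectors spanning the theme, form the projector, and score a document by the squared length of its projection.

```python
import numpy as np
from scipy.linalg import orth

def theme_projector(term_vectors):
    """tau = sum_j |v_j><v_j| for an orthonormal basis {|v_j>}
    obtained from the term vectors spanning the theme."""
    V = orth(np.column_stack(term_vectors))
    return V @ V.T

def score(phi, P):
    """Squared projection of the document vector |phi> onto the theme."""
    p = P @ phi
    return float(p @ p)

t1 = np.array([0.8, 0.6, 0.0])   # |information retrieval>
t2 = np.array([0.0, 0.6, 0.8])   # |retrieval system>
P = theme_projector([t1, t2])

doc_in  = 0.5 * t1 + 0.5 * t2            # lies inside the theme plane
doc_out = np.array([0.48, -0.64, 0.48])  # orthogonal to both term vectors

print(round(score(doc_in, P), 2))   # 0.68: the full squared length survives
print(round(score(doc_out, P), 2))  # 0.0: nothing projects onto the theme
```

A document lying in the theme subspace keeps its whole squared length under projection, while a document orthogonal to the theme scores zero, which matches the intended reading of the score as "degree of aboutness".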
The measure of the degree to which a document is about a theme τ, represented by the subspace spanned by the basis vectors |v⟩, is the size of the projection of the document vector on the theme subspace, that is,