
- •Preface
- •Contents
- •Contributors
- •Modeling Meaning Associated with Documental Entities: Introducing the Brussels Quantum Approach
- •1 Introduction
- •2 The Double-Slit Experiment
- •3 Interrogative Processes
- •4 Modeling the QWeb
- •5 Adding Context
- •6 Conclusion
- •Appendix 1: Interference Plus Context Effects
- •Appendix 2: Meaning Bond
- •References
- •1 Introduction
- •2 Bell Test in the Problem of Cognitive Semantic Information Retrieval
- •2.1 Bell Inequality and Its Interpretation
- •2.2 Bell Test in Semantic Retrieving
- •3 Results
- •References
- •1 Introduction
- •2 Basics of Quantum Probability Theory
- •3 Steps to Build an HSM Model
- •3.1 How to Determine the Compatibility Relations
- •3.2 How to Determine the Dimension
- •3.5 Compute the Choice Probabilities
- •3.6 Estimate Model Parameters, Compare and Test Models
- •4 Computer Programs
- •5 Concluding Comments
- •References
- •Basics of Quantum Theory for Quantum-Like Modeling Information Retrieval
- •1 Introduction
- •3 Quantum Mathematics
- •3.1 Hermitian Operators in Hilbert Space
- •3.2 Pure and Mixed States: Normalized Vectors and Density Operators
- •4 Quantum Mechanics: Postulates
- •5 Compatible and Incompatible Observables
- •5.1 Post-Measurement State From the Projection Postulate
- •6 Interpretations of Quantum Mechanics
- •6.1 Ensemble and Individual Interpretations
- •6.2 Information Interpretations
- •7 Quantum Conditional (Transition) Probability
- •9 Formula of Total Probability with the Interference Term
- •9.1 Växjö (Realist Ensemble Contextual) Interpretation of Quantum Mechanics
- •10 Quantum Logic
- •11 Space of Square Integrable Functions as a State Space
- •12 Operation of Tensor Product
- •14 Qubit
- •15 Entanglement
- •References
- •1 Introduction
- •2 Background
- •2.1 Distributional Hypothesis
- •2.2 A Brief History of Word Embedding
- •3 Applications of Word Embedding
- •3.1 Word-Level Applications
- •3.2 Sentence-Level Application
- •3.3 Sentence-Pair Level Application
- •3.4 Seq2seq Application
- •3.5 Evaluation
- •4 Reconsidering Word Embedding
- •4.1 Limitations
- •4.2 Trends
- •4.4 Towards Dynamic Word Embedding
- •5 Conclusion
- •References
- •1 Introduction
- •2 Motivating Example: Car Dealership
- •3 Modelling Elementary Data Types
- •3.1 Orthogonal Data Types
- •3.2 Non-orthogonal Data Types
- •4 Data Type Construction
- •5 Quantum-Based Data Type Constructors
- •5.1 Tuple Data Type Constructor
- •5.2 Set Data Type Constructor
- •6 Conclusion
- •References
- •Incorporating Weights into a Quantum-Logic-Based Query Language
- •1 Introduction
- •2 A Motivating Example
- •5 Logic-Based Weighting
- •6 Related Work
- •7 Conclusion
- •References
- •Searching for Information with Meet and Join Operators
- •1 Introduction
- •2 Background
- •2.1 Vector Spaces
- •2.2 Sets Versus Vector Spaces
- •2.3 The Boolean Model for IR
- •2.5 The Probabilistic Models
- •3 Meet and Join
- •4 Structures of a Query-by-Theme Language
- •4.1 Features and Terms
- •4.2 Themes
- •4.3 Document Ranking
- •4.4 Meet and Join Operators
- •5 Implementation of a Query-by-Theme Language
- •6 Related Work
- •7 Discussion and Future Work
- •References
- •Index
- •Preface
- •Organization
- •Contents
- •Fundamentals
- •Why Should We Use Quantum Theory?
- •1 Introduction
- •2 On the Human Science/Natural Science Issue
- •3 The Human Roots of Quantum Science
- •4 Qualitative Parallels Between Quantum Theory and the Human Sciences
- •5 Early Quantitative Applications of Quantum Theory to the Human Sciences
- •6 Epilogue
- •References
- •Quantum Cognition
- •1 Introduction
- •2 The Quantum Persuasion Approach
- •3 Experimental Design
- •3.1 Testing for Perspective Incompatibility
- •3.2 Quantum Persuasion
- •3.3 Predictions
- •4 Results
- •4.1 Descriptive Statistics
- •4.2 Data Analysis
- •4.3 Interpretation
- •5 Discussion and Concluding Remarks
- •References
- •1 Introduction
- •2 A Probabilistic Fusion Model of Trust
- •3 Contextuality
- •4 Experiment
- •4.1 Subjects
- •4.2 Design and Materials
- •4.3 Procedure
- •4.4 Results
- •4.5 Discussion
- •5 Summary and Conclusions
- •References
- •Probabilistic Programs for Investigating Contextuality in Human Information Processing
- •1 Introduction
- •2 A Framework for Determining Contextuality in Human Information Processing
- •3 Using Probabilistic Programs to Simulate Bell Scenario Experiments
- •References
- •1 Familiarity and Recollection, Verbatim and Gist
- •2 True Memory, False Memory, over Distributed Memory
- •3 The Hamiltonian Based QEM Model
- •4 Data and Prediction
- •5 Discussion
- •References
- •Decision-Making
- •1 Introduction
- •1.2 Two Stage Gambling Game
- •2 Quantum Probabilities and Waves
- •2.1 Intensity Waves
- •2.2 The Law of Balance and Probability Waves
- •2.3 Probability Waves
- •3 Law of Maximal Uncertainty
- •3.1 Principle of Entropy
- •3.2 Mirror Principle
- •4 Conclusion
- •References
- •1 Introduction
- •4 Quantum-Like Bayesian Networks
- •7.1 Results and Discussion
- •8 Conclusion
- •References
- •Cybernetics and AI
- •1 Introduction
- •2 Modeling of the Vehicle
- •2.1 Introduction to Braitenberg Vehicles
- •2.2 Quantum Approach for BV Decision Making
- •3 Topics in Eigenlogic
- •3.1 The Eigenlogic Operators
- •3.2 Incorporation of Fuzzy Logic
- •4 BV Quantum Robot Simulation Results
- •4.1 Simulation Environment
- •5 Quantum Wheel of Emotions
- •6 Discussion and Conclusion
- •7 Credits and Acknowledgements
- •References
- •1 Introduction
- •2.1 What Is Intelligence?
- •2.2 Human Intelligence and Quantum Cognition
- •2.3 In Search of the General Principles of Intelligence
- •3 Towards a Moral Test
- •4 Compositional Quantum Cognition
- •4.1 Categorical Compositional Model of Meaning
- •4.2 Proof of Concept: Compositional Quantum Cognition
- •5 Implementation of a Moral Test
- •5.2 Step II: A Toy Example, Moral Dilemmas and Context Effects
- •5.4 Step IV. Application for AI
- •6 Discussion and Conclusion
- •Appendix A: Example of a Moral Dilemma
- •References
- •Probability and Beyond
- •1 Introduction
- •2 The Theory of Density Hypercubes
- •2.1 Construction of the Theory
- •2.2 Component Symmetries
- •2.3 Normalisation and Causality
- •3 Decoherence and Hyper-decoherence
- •3.1 Decoherence to Classical Theory
- •4 Higher Order Interference
- •5 Conclusions
- •A Proofs
- •References
- •Information Retrieval
- •1 Introduction
- •2 Related Work
- •3 Quantum Entanglement and Bell Inequality
- •5 Experiment Settings
- •5.1 Dataset
- •5.3 Experimental Procedure
- •6 Results and Discussion
- •7 Conclusion
- •A Appendix
- •References
- •Investigating Bell Inequalities for Multidimensional Relevance Judgments in Information Retrieval
- •1 Introduction
- •2 Quantifying Relevance Dimensions
- •3 Deriving a Bell Inequality for Documents
- •3.1 CHSH Inequality
- •3.2 CHSH Inequality for Documents Using the Trace Method
- •4 Experiment and Results
- •5 Conclusion and Future Work
- •A Appendix
- •References
- •Short Paper
- •An Update on Updating
- •References
- •Author Index
- •The Sure Thing principle, the Disjunction Effect and the Law of Total Probability
- •Material and methods
- •Experimental results.
- •Experiment 1
- •Experiment 2
- •More versus less risk averse participants
- •Theoretical analysis
- •Shared features of the theoretical models
- •The Markov model
- •The quantum-like model
- •Logistic model
- •Theoretical model performance
- •Model comparison for risk attitude partitioning.
- •Discussion
- •Authors contributions
- •Ethical clearance
- •Funding
- •Acknowledgements
- •References
- •Markov versus quantum dynamic models of belief change during evidence monitoring
- •Results
- •Model comparisons.
- •Discussion
- •Methods
- •Participants.
- •Task.
- •Procedure.
- •Mathematical Models.
- •Acknowledgements
- •New Developments for Value-based Decisions
- •Context Effects in Preferential Choice
- •Comparison of Model Mechanisms
- •Qualitative Empirical Comparisons
- •Quantitative Empirical Comparisons
- •Neural Mechanisms of Value Accumulation
- •Neuroimaging Studies of Context Effects and Attribute-Wise Decision Processes
- •Concluding Remarks
- •Acknowledgments
- •References
- •Comparison of Markov versus quantum dynamical models of human decision making
- •CONFLICT OF INTEREST
- •Endnotes
- •FURTHER READING
- •REFERENCES
Arithmetic Formula on Operands The main idea of [14–16] is to apply arithmetic formulas to the operands of a disjunction or conjunction. Thus, none of these approaches fulfills R4. For example, Sung [13] defines S_Θ(μ_1(o), …, μ_n(o), θ_1, …, θ_n) = S(θ_1·μ_1(o), …, θ_n·μ_n(o)). Interestingly, the operand formula 1 − θ·(1 − μ(o)) for the t-norm min proposed by Carson et al. [16] is very close to our evaluation formula in CQQL.
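As a minimal illustration of how such operand transforms behave (a sketch assuming scores and weights in [0, 1] and min as the t-norm; the function names are ours, not taken from [13] or [16]):

```python
# Sketch of operand-transform weighting in the style of Sung [13] and
# Carson et al. [16]; scores and weights are assumed to lie in [0, 1].

def sung_weighted(scores, weights, combine=min):
    # Sung: scale every operand by its weight before combining, i.e.
    # S_Theta(mu_1, ..., mu_n, theta_1, ..., theta_n) = S(theta_1*mu_1, ..., theta_n*mu_n).
    return combine(theta * mu for theta, mu in zip(weights, scores))

def carson_weighted_min(scores, weights):
    # Carson et al.: transform every operand to 1 - theta*(1 - mu), then take min.
    return min(1 - theta * (1 - mu) for theta, mu in zip(weights, scores))

# A weight of 0 effectively drops a condition in the Carson formula (its operand
# becomes 1), whereas Sung's scaling pulls the whole conjunction towards 0.
print(sung_weighted([0.8, 0.6], [1.0, 0.0]))        # 0.0
print(carson_weighted_min([0.8, 0.6], [1.0, 0.0]))  # 0.8
```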
Weighted Sum Singitham et al. [17], Fuhr and Großjohann [18], and Oracle Corporation [19] propose a weighted-sum approach that is completely independent of any logic. As shown, we can simulate the weighted sum by means of connected weights.
Logic-Based Weighting Approach on min/max The approaches proposed by Dubois and Prade [20], Pasi [21], and Yager [22] are very similar to our weighting approach and fulfill R1, R2, R3, and R4. However, they are strictly tied to the t-norm/t-conorm pair min/max. This leads to a problem described by Fagin and Wimmers [4]. First, linearity cannot be fulfilled. Second, if μ_1(o) ≥ 1 − θ_2/θ_1 ≥ μ_2(o) holds, the result is completely independent of μ_1(o) and μ_2(o).
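The degenerate case can be sketched as follows (assuming the common weighted-min formulation min_i max(1 − θ_i/θ_max, μ_i(o)); the code illustrates this family of approaches, not the exact formulations of [20–22]):

```python
# Sketch of a weighted-min evaluation in the style of Dubois/Prade and Yager;
# assumption: weights are normalised by the largest weight before use.

def weighted_min(scores, weights):
    theta_max = max(weights)
    return min(max(1 - theta / theta_max, mu)
               for theta, mu in zip(weights, scores))

# Degeneracy described by Fagin and Wimmers [4]: with theta = (1.0, 0.4) the
# threshold is 1 - theta_2/theta_1 = 0.6; whenever mu_1 >= 0.6 >= mu_2 the
# result is 0.6, no matter how mu_1 and mu_2 vary within that region.
print(weighted_min([0.9, 0.2], [1.0, 0.4]))  # 0.6
print(weighted_min([0.7, 0.5], [1.0, 0.4]))  # 0.6
```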
OWA Approach The OWA approach is discussed, for example, in [9, 23]. It was not developed to weight individual conditions. Instead, the user assigns weights to the highest score, to the second highest score, and so on. As a result, the characteristic of a conjunction can be gradually shifted towards that of a disjunction. Thus, a weight does not express the importance of a condition.
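A short sketch of the OWA idea (the position weights below are illustrative and assumed to sum to 1):

```python
# Ordered weighted averaging (OWA): weights attach to rank positions
# (highest score, second highest, ...) rather than to individual conditions.

def owa(scores, position_weights):
    ordered = sorted(scores, reverse=True)  # highest score first
    return sum(w * s for w, s in zip(position_weights, ordered))

scores = [0.9, 0.3, 0.6]
print(owa(scores, [0.0, 0.0, 1.0]))  # 0.3  -> equals min, i.e. pure conjunction
print(owa(scores, [1.0, 0.0, 0.0]))  # 0.9  -> equals max, i.e. pure disjunction
print(owa(scores, [0.2, 0.3, 0.5]))  # 0.51 -> a blend in between
```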
We conclude that our weighting approach can be seen as a logic-based generalization of existing weighting approaches.
7 Conclusion
In this paper we propose a weighting mechanism that is completely embedded in a logic. As the underlying logic we used that of our query language CQQL, and we then showed that our approach can also be applied to other logics. A particularly interesting result is the concept of connected weights in CQQL, which produces the weighted sum by purely logical means.
One problem not tackled here is whether a user is always able to specify weight values. We propose to use user interaction to infer these values: for example, a user starts with equal weights and adjusts them after seeing the query result. Another approach is to learn weight values from preferences that the user requires over result objects.
We evaluated our approach successfully in a content-based image retrieval context [24]. There, weight values are not given by users but are learnt from user interactions.
References
1. Zadeh, L. A. (1988). Fuzzy logic. IEEE Computer, 21, 83–93.
2. Schmitt, I. (2008). QQL: A DB&IR query language. The VLDB Journal: The International Journal on Very Large Data Bases, 17, 39–56.
3. Cooper, W. S. (1988). Getting beyond Boole. Information Processing & Management, 24, 243–248.
4. Fagin, R., & Wimmers, E. L. (2000). A formula for incorporating weights into scoring rules. Theoretical Computer Science, 239, 309–338.
5. van Rijsbergen, C. J. (1979). Information retrieval. London: Butterworths.
6. Baeza-Yates, R., & Ribeiro-Neto, B. (1999). Modern information retrieval. Essex: ACM Press.
7. van Rijsbergen, C. J. (2004). The geometry of information retrieval. Cambridge: Cambridge University Press.
8. Olteanu, D., Huang, J., & Koch, C. (2009). SPROUT: Lazy vs. eager query plans for tuple-independent probabilistic databases. In 2009 IEEE 25th International Conference on Data Engineering (pp. 640–651).
9. Herrera-Viedma, E., Lopez-Herrera, A. G., Alonso, S., Porcel, C., & Cabrerizo, F. J. (2007). A linguistic multi-level weighted query language to represent user information needs. In Proceedings of the IEEE International Conference on Fuzzy Systems, London, July 23–26, 2007 (pp. 1–6). Piscataway: IEEE.
10. Robertson, S. E., & Jones, K. S. (1976). Relevance weighting of search terms. Journal of the American Society for Information Science, 27, 129–146.
11. van Rijsbergen, C. J., Robertson, S. E., & Porter, M. F. (1980). New models in probabilistic information retrieval. London: British Library Research and Development Department.
12. Schulz, N., & Schmitt, I. (2003). Relevanzwichtung in komplexen Ähnlichkeitsanfragen. In G. Weikum, H. Schöning, & E. Rahm (Eds.), Datenbanksysteme in Business, Technologie und Web, BTW'03, 10. GI-Fachtagung, Leipzig, Februar 2003. Lecture notes in informatics (LNI) (Vol. P-26, pp. 187–196). Bonn: Gesellschaft für Informatik.
13. Sung, S. (1998). A linear transform scheme for combining weights into scores. Technical report, Rice University.
14. Bookstein, A. (1980). Fuzzy requests: An approach to weighted Boolean searches. Journal of the American Society for Information Science (JASIS), 31, 240–247.
15. Bookstein, A. (1980). A comparison of two weighting schemes for Boolean retrieval. In R. N. Oddy, S. E. Robertson, C. J. van Rijsbergen, & P. W. Williams (Eds.), Information Retrieval Research, Proceedings of the Joint ACM/BCS Symposium in Information Storage and Retrieval, Cambridge, June 1980 (pp. 23–34).
16. Carson, C., Belongie, S., Greenspan, H., & Malik, J. (1997). Region-based image querying. In Proceedings of the IEEE CVPR '97 Workshop on Content-Based Access of Image and Video Libraries, Puerto Rico (pp. 42–49).
17. Singitham, P. K. C., Mahabhashyam, M. S., & Raghavan, P. (2004). Efficiency-quality tradeoffs for vector score aggregation. In M. A. Nascimento, M. T. Özsu, D. Kossmann, R. J. Miller, J. A. Blakeley, & K. B. Schiefer (Eds.), Proceedings of the Thirtieth International Conference on Very Large Data Bases, Toronto, August 31–September 3, 2004 (pp. 624–635). Burlington: Morgan Kaufmann.
18. Fuhr, N., & Großjohann, K. (2001). XIRQL: A query language for information retrieval in XML documents. In W. B. Croft, D. J. Harper, D. H. Kraft, & J. Zobel (Eds.), SIGIR 2001: Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, September 9–13, 2001, New Orleans, LA (pp. 172–180). New York: ACM.
19. Oracle Corporation. (1999). Oracle8i visual information retrieval – user guide and reference.
20. Dubois, D., & Prade, H. (1986). Weighted minimum and maximum operations in fuzzy set theory. Information Sciences, 39, 205–210.
21. Pasi, G. (1999). A logical formulation of the Boolean model and of weighted Boolean models. In Proceedings of the Workshop on Logical and Uncertainty Models for Information Systems (LUMIS 99). London: University College London.
22. Yager, R. R. (1987). A note on weighted queries in information retrieval systems. Journal of the American Society for Information Science, 38, 23–24.
23. Boughanem, M., Loiseau, Y., & Prade, H. (2005). Rank-ordering documents according to their relevance in information retrieval using refinements of ordered-weighted aggregations. In M. Detyniecki, J. M. Jose, A. Nürnberger, & C. J. van Rijsbergen (Eds.), Adaptive Multimedia Retrieval: User, Context, and Feedback, Third International Workshop, AMR 2005, Glasgow, July 28–29, 2005, Revised Selected Papers. Lecture notes in computer science (Vol. 3877, pp. 44–54). Berlin: Springer.
24. Zellhöfer, D. (2015). A preference-based relevance feedback approach for polyrepresentative multimedia retrieval. Doctoral thesis, BTU Cottbus-Senftenberg.