

An Update on Updating

Bart Jacobs

Institute for Computing and Information Sciences, Radboud University, Nijmegen, The Netherlands
bart@cs.ru.nl

The main aim of this short contribution is to give an introduction to some challenging research issues with respect to updating and probabilistic logic, together with some relevant references. We use the word 'update' for what is also called 'belief update' or (probabilistic) 'conditioning'. It involves the adaptation of a probability distribution in the light of certain evidence. Such updating is typically expressed via conditional probabilities and is governed by Bayes' rule. It is a fascinating topic, with wide applications, ranging from statistical data analysis to cognition theory (see e.g. [3, 9]).
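As a minimal illustration (the medical labels and numbers below are invented), classical updating with sharp evidence, i.e. a subset of the sample space, amounts to restricting the prior to that subset and renormalising:

```python
# Sharp-evidence updating of a discrete prior (hypothetical numbers):
# keep only the outcomes consistent with the evidence, then renormalise.
prior = {"flu": 0.1, "cold": 0.3, "healthy": 0.6}
evidence = {"flu", "cold"}  # sharp evidence: the patient is ill

norm = sum(p for x, p in prior.items() if x in evidence)
posterior = {x: (p / norm if x in evidence else 0.0)
             for x, p in prior.items()}
# posterior: flu 0.25, cold 0.75, healthy 0.0 (up to rounding)
```

This is Bayes' rule in its simplest form: each surviving outcome is scaled by 1/P(evidence), and excluded outcomes get probability zero.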

Updating exists both in classical probability and in quantum probability. One of the key characteristics of updating in a quantum setting is that it is not commutative: successive updates do not commute. This forms a basis for using quantum theory in cognition theory [1], since the human mind is also very sensitive to the order in which information is presented, or, in other words, to the order of priming. The study of quantum updating is still in its infancy, but already two different mathematical definitions have appeared, called 'lower' and 'upper' conditioning in [5]; see also [2, 7]. Interestingly, the lower version satisfies the product rule, whereas the upper version satisfies Bayes' rule proper. Classically, there is no difference between these two rules (see [5] for details).
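The non-commutativity is easy to witness concretely. The sketch below uses a standard Lüders-style update with projector effects (a generic textbook form, not the specific lower or upper conditioning of [5]; the state and projectors are chosen for illustration) and applies two incompatible projectors in both orders:

```python
import numpy as np

def update(rho, P):
    """Lüders-style update with a projector P: rho -> P rho P / tr(P rho)."""
    post = P @ rho @ P
    return post / np.trace(post)

ket0 = np.array([[1.0], [0.0]])               # |0>
ketp = np.array([[1.0], [1.0]]) / np.sqrt(2)  # |+>
Pz = ket0 @ ket0.T   # projector onto |0>
Px = ketp @ ketp.T   # projector onto |+>

rho = np.diag([0.75, 0.25])  # an arbitrary prior state

zx = update(update(rho, Pz), Px)  # first Pz, then Px
xz = update(update(rho, Px), Pz)  # first Px, then Pz
print(np.allclose(zx, xz))        # False: the order of updates matters
```

The two orders end in different states (the second projector always wins here), which is precisely the order-sensitivity that the quantum-cognition literature exploits.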

In classical probability things seem to be better understood. But that is only because in practice people mostly restrict themselves to sharp evidence, given by subsets of the space at hand; these subsets are used as predicates in updating. The situation changes when soft or fuzzy evidence is allowed, of the form: I was 80% sure that I heard the alarm. Updating with fuzzy evidence can be done in basically two ways, called 'constructive' (following Pearl) or 'destructive' (following Jeffrey), see [4]. Constructive and destructive updating agree on point evidence, but they can give completely different outcomes when applied with the same (soft) evidence (and the same prior). It is unclear which version of updating should be applied when. This is a bit worrying. Should we start asking our doctors: did you arrive at this most likely diagnosis via constructive or destructive updating?
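The difference shows up already on a toy example (the state space, partition, and numbers below are all invented; the two rules follow the shape described in [4]):

```python
prior = {"a": 0.1, "b": 0.2, "c": 0.7}
# Soft evidence: 80% sure that the event {a, b} occurred.
pred = {"a": 0.8, "b": 0.8, "c": 0.2}   # fuzzy predicate

# Constructive (Pearl): posterior is proportional to prior times predicate.
norm = sum(prior[x] * pred[x] for x in prior)
pearl = {x: prior[x] * pred[x] / norm for x in prior}

# Destructive (Jeffrey): force P({a, b}) = 0.8, keep conditionals fixed.
pE = prior["a"] + prior["b"]
jeffrey = {"a": 0.8 * prior["a"] / pE,
           "b": 0.8 * prior["b"] / pE,
           "c": 0.2 * prior["c"] / (1 - pE)}

print({x: round(v, 3) for x, v in pearl.items()})
print({x: round(v, 3) for x, v in jeffrey.items()})
```

With the same prior and the same 80% evidence, the constructive posterior is roughly (0.211, 0.421, 0.368) while the destructive one is (0.267, 0.533, 0.2): visibly different answers to the same question.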

Constructive updating involves a smooth integration of the prior distribution with the evidence, following the standard formula: posterior ∝ prior · likelihood. Constructive updating is commutative. Moreover, if the evidence contains no information (i.e. it is constant/uniform), then you learn nothing new from updating.
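Both properties can be checked numerically; the helper function and numbers below are mine:

```python
# Two properties of constructive (Pearl-style) updating:
# order-independence, and no change under a constant predicate.
def pearl(prior, pred):
    norm = sum(prior[x] * pred[x] for x in prior)
    return {x: prior[x] * pred[x] / norm for x in prior}

prior = {"a": 0.1, "b": 0.2, "c": 0.7}
p = {"a": 0.9, "b": 0.5, "c": 0.1}
q = {"a": 0.2, "b": 0.6, "c": 0.8}

pq = pearl(pearl(prior, p), q)   # update with p, then q
qp = pearl(pearl(prior, q), p)   # update with q, then p
assert all(abs(pq[x] - qp[x]) < 1e-9 for x in prior)        # commutative

uniform = {x: 0.5 for x in prior}
after = pearl(prior, uniform)
assert all(abs(after[x] - prior[x]) < 1e-9 for x in prior)  # no change
```

Commutativity is immediate from the formula: updating twice just multiplies the prior pointwise by both predicates, and pointwise multiplication commutes.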

© Springer Nature Switzerland AG 2019
B. Coecke and A. Lambert-Mogiliansky (Eds.): QI 2018, LNCS 11690, pp. 191–192, 2019. https://doi.org/10.1007/978-3-030-35895-2

Destructive updating involves overriding the prior by the evidence. As a result, it is not commutative. If the evidence is exactly what we would predict anyway, then we learn nothing new from destructive updating. This also makes sense. Given the precise mathematical distinction between constructive and destructive updating in [4], the question also arises: which form of updating best matches cognitive experiments?
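Both claims about destructive updating can also be checked on a small example (the Jeffrey-update helper and all numbers are invented):

```python
# Destructive (Jeffrey-style) updating on a two-block partition:
# force P(block) = q while keeping the conditional distributions
# inside and outside the block unchanged.
def jeffrey(prior, block, q):
    pB = sum(prior[x] for x in block)
    return {x: q * prior[x] / pB if x in block
            else (1 - q) * prior[x] / (1 - pB) for x in prior}

prior = {"a": 0.1, "b": 0.2, "c": 0.7}

# Evidence equal to the prior's own prediction P({a, b}) = 0.3: a no-op.
same = jeffrey(prior, {"a", "b"}, 0.3)
assert all(abs(same[x] - prior[x]) < 1e-9 for x in prior)

# Two successive Jeffrey updates, applied in both orders: different results.
u = jeffrey(jeffrey(prior, {"a", "b"}, 0.6), {"a"}, 0.5)
v = jeffrey(jeffrey(prior, {"a"}, 0.5), {"a", "b"}, 0.6)
assert max(abs(u[x] - v[x]) for x in prior) > 0.01   # not commutative
```

The second assertion shows the overriding character of the rule: each update forces the partition's probability to the stated value, so the last update on a block "wins", and the order matters.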

Thus, in the end we have four forms of updating: two quantum ones (lower and upper) and two classical ones (constructive and destructive). Clearly more research is needed to understand this situation. Part of this research should involve developing a proper probabilistic language for expressing logical and computational properties (see also [6]). It is an embarrassment to the field that no widely accepted and used probabilistic symbolic logic exists so far. Developing such a logic is by no means an easy task, for instance because probabilistic updating leads to non-monotonicity: adding assumptions may weaken the validity of the conclusion. Non-monotonicity is avoided by most logicians. However, it is quite natural in a probabilistic setting, as becomes clear in the quote below from [8], with which we conclude.
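As a minimal numeric illustration of this non-monotonicity (the joint distribution is invented): a hypothesis that is highly probable a priori can become improbable once one extra fact is conditioned on:

```python
# Joint distribution over (hypothesis h, extra evidence e); numbers mine.
joint = {("h", "e"): 0.01, ("h", "not_e"): 0.89,
         ("not_h", "e"): 0.09, ("not_h", "not_e"): 0.01}

p_h = joint[("h", "e")] + joint[("h", "not_e")]   # P(h) = 0.9: h looks safe
p_h_given_e = joint[("h", "e")] / (
    joint[("h", "e")] + joint[("not_h", "e")])    # P(h | e) = 0.1
print(round(p_h, 2), round(p_h_given_e, 2))       # 0.9 0.1
```

Adding the assumption e flips the conclusion h from probability 0.9 to 0.1, exactly the abrupt high-to-low change that the quote below describes.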

To those trained in traditional logics, symbolic reasoning is the standard, and nonmonotonicity a novelty. To students of probability, on the other hand, it is symbolic reasoning that is novel, not nonmonotonicity. Dealing with new facts that cause probabilities to change abruptly from very high values to very low values is a commonplace phenomenon in almost every probabilistic exercise and, naturally, has attracted special attention among probabilists. The new challenge for probabilists is to find ways of abstracting out the numerical character of high and low probabilities, and cast them in linguistic terms that reflect the natural process of accepting and retracting beliefs.

References

1. Busemeyer, J., Bruza, P.: Quantum Models of Cognition and Decision. Cambridge University Press, Cambridge (2012)

2. Coecke, B., Spekkens, R.: Picturing classical and quantum Bayesian inference. Synthese 186(3), 651–696 (2012)

3. Hohwy, J.: The Predictive Mind. Oxford University Press, Oxford (2013)

4. Jacobs, B.: A mathematical account of soft evidence, and of Jeffrey's 'destructive' versus Pearl's 'constructive' updating. arXiv preprint arXiv:1807.05609 (2018)

5. Jacobs, B.: Lower and upper conditioning in quantum Bayesian theory. In: Quantum Physics and Logic, EPTCS (2018)

6. Jacobs, B., Zanasi, F.: The logical essentials of Bayesian reasoning. arXiv preprint arXiv:1804.01193, book chapter, to appear

7. Leifer, M., Spekkens, R.: Towards a formulation of quantum theory as a causally neutral theory of Bayesian inference. Phys. Rev. A 88(5), 052130 (2013)

8. Pearl, J.: Probabilistic semantics for nonmonotonic reasoning: a survey. In: Brachman, R., Levesque, H., Reiter, R. (eds.) First International Conference on Principles of Knowledge Representation and Reasoning, pp. 505–516. Morgan Kaufmann, San Mateo (1989)

9. Sloman, S.: Causal Models. How People Think About the World and Its Alternatives. Oxford University Press, Oxford (2005)