

Moral Dilemmas for Artificial Intelligence

C. M. Signorelli and X. D. Arsiwalla

dilemma, with slightly different wording, and observing how the answers change in comparison with the changes observed in humans (framing, wording, question-order effects, affective construction, etc.). The second approach looks for meaning structure, exploring the "why" answers of the AI and their connection with its own semantic spaces. Specifically, in the moral test, the largest effects are expected when the moral dilemma confronts emotional and rational thoughts. One hypothesis is that current AI will not be able to answer these questions at all; but if it can, then even if its answers and meanings are similar to ours, the structure and dynamics of its meaning will differ from those of humans across dilemmas. This can be quantified and compared using the experimental probability distributions of answers in both humans and machines.
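The comparison of answer distributions can be sketched numerically. The sketch below, with invented response frequencies purely for illustration, measures how far a machine's distribution of Likert answers on one dilemma lies from the human distribution using the Jensen-Shannon distance:

```python
import numpy as np

def js_distance(p, q):
    """Jensen-Shannon distance (base 2) between two discrete distributions:
    0 means identical, 1 means disjoint support."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a[a > 0] * np.log2(a[a > 0] / b[a > 0]))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

# Invented response frequencies over the six Likert options (+5 ... -5)
human = [0.05, 0.10, 0.15, 0.20, 0.25, 0.25]
ai = [0.30, 0.25, 0.20, 0.10, 0.10, 0.05]
print(f"JS distance: {js_distance(human, ai):.3f}")
```

Repeating this per dilemma, and per reworded version of a dilemma, would yield the quantitative human-machine comparison described above.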

6 Discussion and Conclusion

It is reasonable to expect that even if machines give completely different logical, or "illogical", answers, the general way of thinking about different contexts in moral dilemmas should be more or less generic across species, including machines, if they reach high-level cognition. In other words, contextual dependency, QC effects, and meaning and subjectivity built on previous and inferred knowledge should be captured by a complete model of cognition, which would be able to identify and measure these features. Thus, interference, conjunction and disjunction effects, wording and question-ordering effects, and distributed meaning can be expected as general features of rational and emotional thought together, with specific dynamics emerging when certain kinds of moral dilemma are confronted.
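Question-order effects, one of the QC effects listed above, have a standard toy formalization in quantum cognition: answers are projections of a belief state onto non-commuting subspaces, so the probability of answering "yes, yes" depends on the order in which two questions are posed. A minimal numeric sketch, with angles and state invented for illustration:

```python
import numpy as np

def projector(theta):
    """Rank-1 projector onto the direction at angle theta in a 2-D real Hilbert space."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

A = projector(0.0)          # "yes" subspace for question A
B = projector(np.pi / 4)    # "yes" subspace for question B (not orthogonal to A)
psi = np.array([np.cos(0.3), np.sin(0.3)])  # initial belief state (unit vector)

# Sequential "yes, yes" probabilities via Lueders' rule: P = ||B A psi||^2
p_ab = np.linalg.norm(B @ A @ psi) ** 2   # A asked first, then B
p_ba = np.linalg.norm(A @ B @ psi) ** 2   # B asked first, then A
print(f"P(A then B) = {p_ab:.3f}, P(B then A) = {p_ba:.3f}")
```

Because A and B do not commute, the two probabilities differ, whereas a classical joint distribution would give the same value for both orders; this is the kind of signature the moral test would look for in both humans and machines.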

Hence, one suggestion of this position paper is that a moral test can help quantify these differences in a particular semantic space characterized by both emotional and rational components. A machine would thus reach part of what is defined as compositional human intelligence if it is able to show, autonomously (in the sense of defining its own goals), the intricate type of thinking that humans display when confronted with these kinds of dilemmas; even if its answers are completely different from ours, the structure and dynamics of meaning are expected to be similar. In other words, the ability, first, to understand and differentiate context; second, to project external objects into internal objects in order to have meaning; and third, to argue for any decision based on rational and emotional components, is intrinsically related to the complex individual and social intelligence that characterizes humans and should be expected in some high-level cognitive AI. If subjects can justify their choice, it implies that emotion and reason played some role, while if they fail to justify their actions, only intuitive processes were involved. Can we expect something similar from current AI approaches?

Finally, these views imply another ethical problem regarding the kind of morality that would emerge in AI. For example, independently of how one implements morality in AI (see [1] for restrictions), there are important concerns about replicating morality in AI and using moral tests. If replicating the human moral process (its dynamics and QC effects) in AI is desirable, this replication can be dangerous: since AI is different from humans, it will also develop a different kind of morality, based on a certain type of subjectivity. Conversely, if the exact replication of this moral process is not desirable, why would moral dilemmas, QC effects and cognitive fallacies be a useful tool? First, the moral test suggested here tries to evaluate the understanding of context (through QC effects), the evolution of meaning (through CCMM) and the effects of confronting rational and emotional thoughts (through changing the degree of rational and emotional components among versions of each dilemma). In this sense, morality itself (as a pure set of rules) is not necessarily desirable (especially a biased morality); rather, the balance between rational and emotional thoughts is what we claim is really desirable as high-level compositional intelligence. Morality, we argued, emerges from these interactions. Moreover, our framework here followed the anthropomorphic approach: AI is compared against human moral dilemmas. This is a contradictory strategy with respect to our general-intelligence case, but with it we expect to show the paradoxical and ethical consequences of anthropomorphic views [1, 37]. In this sense, the risk of dangerous machines is exactly the risk of dangerous humans, and before worrying about dangerous AI, we should first care about making a psychologically healthy world; in consequence, humans and machines would share similar social behaviours. Of course, a truly non-anthropomorphic test should consider the same first two elements (context and meaning), but the balance can be achieved through other types of reasoning and depends on the autonomous specification of machine goals.
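Varying the degree of emotional versus rational loading across versions of a dilemma could be operationalized with a word-emotion association lexicon in the spirit of [36]. The sketch below uses a tiny invented lexicon and scoring rule, purely as an illustration of the idea; a real test would use the full crowdsourced lexicon and validated weights:

```python
# Invented mini-lexicons: association strengths in [0, 1].
EMOTIONAL = {"injured": 0.8, "drown": 0.9, "survive": 0.7, "healthy": 0.3}
RATIONAL = {"space": 0.5, "boat": 0.3, "only": 0.4, "left": 0.2}

def loading(text, lexicon):
    """Average association strength of the words in `text` under `lexicon`."""
    words = text.lower().split()
    return sum(lexicon.get(w, 0.0) for w in words) / len(words)

dilemma = "a healthy dog and an injured man are trying to survive"
print(loading(dilemma, EMOTIONAL), loading(dilemma, RATIONAL))
```

Rewriting a dilemma to raise or lower one of the two scores would give the graded versions the test requires, with the scores serving as a manipulation check.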

Appendix A: Example of a Moral Dilemma

After a shipwreck, a healthy dog and an injured man are floating and trying to swim to survive. You are in the emergency boat with only one space left.

Please indicate your degree of agreement with the following options (where +5 strongly agree, +3 moderately agree, +1 slightly agree, −1 slightly disagree, −3 moderately disagree, −5 strongly disagree):

(a) Save the healthy dog: +5 / +3 / +1 / −1 / −3 / −5

(b) Save the injured man: +5 / +3 / +1 / −1 / −3 / −5

Who would you save?

(a) The healthy dog

(b) The injured man

Why?

References

1. Signorelli, C.M.: Can computers become conscious and overcome humans? Front. Robot. Artif. Intell. 5, 121 (2018). https://doi.org/10.3389/frobt.2018.00121

2. Turing, A.: Computing machinery and intelligence. Mind 59, 433–460 (1950)


3. Silver, D., et al.: Mastering the game of Go without human knowledge. Nature 550, 354–359 (2017)

4. Bringsjord, S., Licato, J., Sundar, N., Rikhiya, G., Atriya, G.: Real robots that pass human tests of self-consciousness. In: Proceedings of the 24th IEEE International Symposium on Robot and Human Interactive Communication, pp. 498–504 (2015)

5. Legg, S., Hutter, M.: Universal intelligence: a definition of machine intelligence. Minds Mach. 17, 391–444 (2007)

6. Arsiwalla, X.D., Signorelli, C.M., Puigbo, J.-Y., Freire, I.T., Verschure, P.: What is the physics of intelligence? In: Frontiers in Artificial Intelligence and Applications, Proceedings of the 21st International Conference of the Catalan Association for Artificial Intelligence, vol. 308, pp. 283–286 (2018)

7. Arsiwalla, X.D., Sole, R., Moulin-Frier, C., Herreros, I., Sanchez-Fibla, M., Verschure, P.: The morphospace of consciousness. ArXiv:1705.11190 (2017)

8. Coecke, B., Sadrzadeh, M., Clark, S.: Mathematical foundations for a compositional distributional model of meaning. Linguist. Anal. 36, 345–384 (2010)

9. Bruza, P.D., Wang, Z., Busemeyer, J.R.: Quantum cognition: a new theoretical approach to psychology. Trends Cogn. Sci. 19, 383–393 (2015)

10. Kiverstein, J., Miller, M.: The embodied brain: towards a radical embodied cognitive neuroscience. Front. Hum. Neurosci. 9, 237 (2015)

11. Arsiwalla, X.D., Verschure, P.: The global dynamical complexity of the human brain network. Appl. Netw. Sci. 1, 16 (2016)

12. Arsiwalla, X.D., Signorelli, C.M., Puigbo, J.-Y., Freire, I.T., Verschure, P.F.M.J.: Are brains computers, emulators or simulators? In: Vouloutsi, V., et al. (eds.) Living Machines 2018. LNCS (LNAI), vol. 10928, pp. 11–15. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-95972-6_3

13. Arsiwalla, X.D., Herreros, I., Verschure, P.: On three categories of conscious machines. In: Lepora, N., Mura, A., Mangan, M., Verschure, P., Desmulliez, M., Prescott, T. (eds.) Living Machines 2016. LNCS (LNAI), vol. 9793, pp. 389–392. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-42417-0_35

14. Gilovich, T., Griffin, D., Kahneman, D.: Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press, Cambridge (2002)

15. Pothos, E.M., Busemeyer, J.R.: Can quantum probability provide a new direction for cognitive modeling? Behav. Brain Sci. 36, 255–274 (2013)

16. Wang, Z., Solloway, T., Shiffrin, R.M., Busemeyer, J.R.: Context effects produced by question orders reveal quantum nature of human judgments. Proc. Natl. Acad. Sci. U.S.A. 111, 9431–9436 (2014)

17. Aerts, D., Gabora, L., Sozzo, S.: Concepts and their dynamics: a quantum-theoretic modeling of human thought. Top. Cogn. Sci. 5, 737–772 (2013)

18. White, L.C., Barqué-Duran, A., Pothos, E.M.: An investigation of a quantum probability model for the constructive effect of affective evaluation. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 374, 20150142 (2016)

19. Petrinovich, L., O'Neill, P.: Influence of wording and framing effects on moral intuitions. Ethol. Sociobiol. 17, 145–171 (1996)

20. Searle, J.R.: Minds, brains, and programs. Behav. Brain Sci. 3, 417–457 (1980)

21. Lindquist, K.A., Wager, T.D., Kober, H., Bliss-Moreau, E., Barrett, L.F.: The brain basis of emotion: a meta-analytic review. Behav. Brain Sci. 35, 121–143 (2012)

22. Hauser, M., Cushman, F.A., Young, L., Jin, R.K.X., Mikhail, J.: A dissociation between moral judgments and justifications. Mind Lang. 22, 1–21 (2007)

23. Christensen, J.F., Flexas, A., Calabrese, M., Gut, N.K., Gomila, A.: Moral judgment reloaded: a moral dilemma validation study. Front. Psychol. 5, 118 (2014)


24. Bekoff, M., Pierce, J.: Wild Justice: The Moral Lives of Animals. The University of Chicago Press, Chicago (2009)

25. Greene, J.D., Sommerville, R.B., Nystrom, L.E.: An fMRI investigation of emotional engagement in moral judgment. Science 293, 2105–2108 (2001)

26. Moll, J., Zahn, R., de Oliveira-Souza, R., Krueger, F., Grafman, J.: The neural basis of human moral cognition. Nat. Rev. Neurosci. 6, 799–809 (2005)

27. Grefenstette, E., Sadrzadeh, M.: Experimental support for a categorical compositional distributional model of meaning. In: Conference on Empirical Methods in Natural Language Processing, Edinburgh, pp. 1394–1404 (2011)

28. Coecke, B., Lewis, M.: A compositional explanation of the "pet fish" phenomenon. In: Atmanspacher, H., Filk, T., Pothos, E. (eds.) QI 2015. LNCS, vol. 9535, pp. 179–192. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-28675-4_14

29. Bolt, J., Coecke, B., Genovese, F., Lewis, M., Marsden, D., Piedeleu, R.: Interacting conceptual spaces I: grammatical composition of concepts. ArXiv (2017)

30. Yearsley, J.M., Busemeyer, J.R.: Quantum cognition and decision theories: a tutorial. J. Math. Psychol. 74, 99–116 (2016)

31. Lambek, J.: From Word to Sentence. Polimetrica, Milan (2008)

32. Gardenfors, P.: Conceptual spaces as a framework for knowledge representation. Mind Matter 2, 9–27 (2004)

33. Aerts, D., Gabora, L., Sozzo, S., Veloz, T.: Quantum structure in cognition: fundamentals and applications. In: Privman, V., Ovchinnikov, V. (eds.) IARIA, Proceedings of the Fifth International Conference on Quantum, Nano and Micro Technologies, pp. 57–62 (2011)

34. Veloz, T.: Toward a quantum theory of cognition: history, development, and perspectives (2016)

35. Varela, F.J.: Neurophenomenology: a methodological remedy for the hard problem. J. Conscious. Stud. 3, 330–349 (1996)

36. Mohammad, S.M., Turney, P.D.: Crowdsourcing a word-emotion association lexicon. Comput. Intell. 29, 436–465 (2013)

37. Signorelli, C.M.: Types of cognition and its implications for future high-level cognitive machines. In: AAAI Spring Symposium Series (2017)