Artificial intelligences as extended minds. Why not?

Gianfranco Pellegrino, Mirko Daniel Garasic

Abstract


Artificial intelligences and robots increasingly mimic human mental powers and intelligent behaviour. Many authors nevertheless claim that ascribing human mental powers to them is both conceptually mistaken and morally dangerous. This article defends the view that artificial intelligences can have human-like mental powers, arguing that both human and artificial minds can be seen as extended minds, along the lines of Clark and Chalmers's account of mind and cognition. The central idea is that the Extended Mind Model is independently plausible and extends naturally to artificial intelligences, providing a solid basis for concluding that artificial intelligences possess minds. This may warrant viewing them as morally responsible agents.

Keywords: Artificial Intelligence; Mind; Moral Responsibility; Extended Cognition






References


ADAMS, F., AIZAWA, K. (2001). The bounds of cognition. In: «Philosophical Psychology», vol. XIV, n. 1, pp. 43-64.

ADAMS, F., AIZAWA, K. (2009). Why the mind is still in the head. In: P. ROBBINS, M. AYDEDE (eds.), The Cambridge handbook of situated cognition, Cambridge University Press, Cambridge, pp. 78-95.

BOSTROM, N. (2014). Superintelligence: Paths, dangers, strategies, Oxford University Press, Oxford.

BOSTROM, N., YUDKOWSKY, E. (2014). The ethics of artificial intelligence. In: K. FRANKISH, W.M. RAMSEY (eds.), The Cambridge handbook of Artificial Intelligence, Cambridge University Press, Cambridge, pp. 316-334.

BRINGSJORD, S., SUNDAR GOVINDARAJULU, N. (2018). Artificial Intelligence. In: E.N. ZALTA (ed.), Stanford encyclopedia of philosophy, Spring Edition, URL: https://plato.stanford.edu/entries/artificial-intelligence/.

CLARK, A. (2003). Natural-born cyborgs: Minds, technologies, and the future of human intelligence, Oxford University Press, Oxford.

CLARK, A. (2005). Intrinsic content, active memory and the extended mind. In: «Analysis», vol. LXV, n. 285, pp. 1-11.

CLARK, A. (2008). Supersizing the mind: Embodiment, action, and cognitive extension, Oxford University Press, Oxford.

CLARK, A. (2010). Memento’s revenge: The extended mind extended. In: R. MENARY (ed.), The extended mind, MIT Press, Cambridge (MA), pp. 43-66.

CLARK, A. (2010). Coupling, constitution, and the cognitive kind: A reply to Adams and Aizawa. In: R. MENARY (ed.), The extended mind, MIT Press, Cambridge (MA), pp. 81-99.

CLARK, A. (2016). Surfing uncertainty. Prediction, action, and the embodied mind, Oxford University Press, Oxford.

CLARK, A., CHALMERS, D. (1998). The extended mind. In: «Analysis», vol. LVIII, n. 1, pp. 7-19.

COLOMBO, M., IRVINE, E., STAPLETON, M. (eds.) (2019). Andy Clark and his critics, Oxford University Press, Oxford.

DEGRAZIA, D. (2005). Human identity and bioethics, Cambridge University Press, Cambridge.

FLORIDI, L. (2013). The ethics of information, Oxford University Press, Oxford.

FLORIDI, L. (2016). Faultless responsibility: On the nature and allocation of moral responsibility for distributed moral actions. In: «Philosophical Transactions of the Royal Society A», vol. CCCLXXIV, n. 2083, Art.Nr. 20160112 – doi: 10.1098/rsta.2016.0112.

FLORIDI, L., SANDERS, J.W. (2004). On the morality of artificial agents. In: «Minds & Machines», vol. XIV, n. 3, pp. 349-379.

FRENCH, P. (1984). Collective and corporate responsibility, Columbia University Press, New York.

FRENCH, R.M. (1990). Subcognition and the limits of the Turing test. In: «Mind», vol. XCIX, n. 393, pp. 53-65.

GETTIER, E.L. (1963). Is justified true belief knowledge?. In: «Analysis», vol. XXIII, n. 6, pp. 121-123.

GILBERT, M. (2000). Sociality and responsibility. New essays in plural subject theory, Rowman & Littlefield, Lanham.

HUEBNER, B. (2014). Macrocognition. A theory of distributed cognition and collective intentionality, Oxford University Press, Oxford.

KURZWEIL, R. (2002). We are becoming cyborgs - URL: https://www.kurzweilai.net/we-are-becoming-cyborgs.

KURZWEIL, R. (2006). The singularity is near: When humans transcend biology, Duckworth, London.

LIST, C., PETTIT, P. (2011). Group agency: The possibility, design, and status of corporate agents, Oxford University Press, New York.

MCMAHAN, J. (2002). The ethics of killing: Problems at the margins of life, Oxford University Press, New York.

MORE, M., VITA-MORE, N. (eds.) (2013). The transhumanist reader: Classical and contemporary essays on the science, technology, and philosophy of the human future, Wiley-Blackwell, London.

NAGEL, T. (1997). Justice and nature. In: «Oxford Journal of Legal Studies», vol. XVII, n. 2, pp. 303-321.

OPPY, G., DOWE, D. (2019). The Turing test. In: E.N. ZALTA (ed.), The Stanford encyclopedia of philosophy, Spring Edition, URL: https://plato.stanford.edu/archives/spr2019/entries/turing-test/.

SEARLE, J.R. (1981). Minds, brains, and programs. In: «Behavioral & Brain Sciences», vol. III, n. 3, pp. 417-457.

SEARLE, J.R. (1984). Minds, brains and science: The 1984 Reith Lectures, Harvard University Press, Cambridge.

SEARLE, J.R. (1992). The rediscovery of the mind, MIT Press, Cambridge (MA).

STONE, C.D. (2010). Should trees have standing? Law, morality, and the environment, Oxford University Press, Oxford.

TASIOULAS, J. (2019). First steps towards an ethics of robots and artificial intelligence. In: «Journal of Practical Ethics», vol. VII, n. 1, pp. 61-95.

TOMASELLO, M. (2009). The origins of human communication, MIT Press, Cambridge (MA).

TRIBE, L.H. (1974). Ways not to think about plastic trees: New foundations for environmental law. In: «Yale Law Journal», vol. LXXXIII, pp. 1315-1346.

TURING, A.M. (1950). Computing machinery and intelligence. In: «Mind», vol. LIX, n. 236, pp. 433-460.

UNESCO (2017). Report of COMEST on robotics ethics - UNESCO Digital Library, Paris - URL: https://unesdoc.unesco.org/ark:/48223/pf0000253952.

WIGGINS, D. (2016). Continuants: Their activity, their being, and their identity, Oxford University Press, Oxford.




DOI: https://doi.org/10.4453/rifp.2020.0010

Copyright (c) 2020 Gianfranco Pellegrino, Mirko Daniel Garasic

License URL: http://creativecommons.org/licenses/by/4.0/

Rivista internazionale di Filosofia e Psicologia - ISSN: 2039-4667 (print) - E-ISSN: 2239-2629 (online)

Registered at the Court of Milan, no. 634 of 26-11-2010 - Managing Editor: Aurelia Delfino


Creative Commons License
Unless otherwise specified, the contents of Rivista Internazionale di Filosofia e Psicologia are distributed under a Creative Commons Attribution 4.0 International License.