QUESTIONING QUASI-OTHERNESS

Artificial Intelligence (“AI”) and trends toward digitization are unavoidable and are developing at accelerating rates. With this implementation of the digital, we face the reality of our evolving existence as a society dependent on technology and AI. In The Onlife Manifesto, Floridi coins the term ‘onlife’ to describe “the new experience of a hyperconnected reality in which it is no longer sensible to ask whether one may be online or offline” (2015: 1). In this condition, information and communication technologies (ICTs) become “not mere tools, but rather environmental forces” with the capacity to alter and affect concepts related to self-identity, social interactions, and reality (Floridi 2015: 2). In this new environment, enveloped by and increasingly reliant upon digitization, the extent to which we are able to recognize and distinguish between reality and virtuality becomes essential. To conflate the two realms - the real and the virtual - is to risk conflating human existence and AI existence. The two are not destined to become identical; instead, human existence should be complemented by the development of AI and new digital technologies.

Ihde refers to the quasi-face and quasi-otherness, writing that “[t]echnological otherness is quasi-otherness” (1990), and Wellner extends this quasi-face to describe the screen with which a human can interact (2014). However, the claim that any screen embodies the potential to be a quasi-face is misleading, and this is the case for two reasons. First, ‘quasi’ implies that a face visible via a screen is ‘nearly’ or ‘almost’ equivalent to a face - the face of which Levinas writes in describing the face-to-face between the Self and the Other. The face of the Other that meets the Self via a screen, in a digitized format, is not identical or nearly identical to meeting and confronting the face of the Other in person. It embodies the exteriority of the face, but it is not nearly the same; it is a different and distinct version of the face. Second, in meeting face-to-face there is an occupation of the same space and no differentiation in the experience of temporality. This does not occur when the Self and the Other meet digitally through a screen; the screen - whether a phone, computer, or other device - is a stand-in or replacement for the in-person confrontation between the Self and the Other.

The screen acts as a terminal for viewing the face of the Other, but this remains dissimilar to meeting face-to-face. The screen facilitates an interaction that is necessarily altered from Levinas’ face-to-face through its digitization. Not only does this eliminate the occupation of the same spatiality, it also minimizes the Other’s capacity to command the Self. The command of responsibility requires, for example, that someone receive and answer a call, and it is threatened or jeopardized by the ability to end the digital interaction - by ending the call or ceasing to reply or respond via a digital interface. These digitized interactions between the Self and the Other differ from Levinas’ meeting face-to-face in that there exist more options to avoid direct or continued confrontation by the Other.

To describe and define this interaction, I would instead employ the Latin word Alius: the face of the Other with which one interacts digitally or through a screen is not almost or nearly the same as a face-to-face engagement of the Self and the Other. The interaction of face-to-Alius is what occurs when humans connect using digital technology, wherein the Alius is a variation of the Other, presented to the Self via a digital terminal such as a screen. Interaction is still possible, but it remains limited by spatiality and temporality.

The digital interface is not identical and cannot achieve the same result of beckoning or commanding responsibility from the Self to the Other. It is limited by temporality - the Other can only command me for as long as the engagement on the digital interface lasts. It is also limited by space, as the face of the Self and the face of the Other do not inhabit and interact within the same space. A definitive difference exists between the reality of the face-to-face and the undeniable virtuality of digital communication through a digital terminal or screen. This is why the word Alius ought to be employed - the Alius of the face of the Other appears as the Other, but it is not identical to the Other. The Other has the option to be available yet veiled to the Self by use of a digital terminal; the digitized perception creates a veil between the Self and the Other. The perception of Self to Other and Other to Self differs in person and face-to-face from what it is in a digitized format. Digitization also offers the ability to mute audio and to eliminate video - features of the digital interaction that have no equivalent in face-to-face meetings between the Self and the Other. It is imperative to be able to “distinguish between artefact employment and reality” and to comprehend the differences between in-person, face-to-face human interaction and digitized forms of communication (Floridi 2015: 44).

A question that arises out of the rampant digitization we are witness to is this: does there exist the potential for AI to recognize humans as the Other? And conversely, is there a likelihood that humans will recognize AI as the Other? Let us first turn to the possibility of AI recognizing humans as the Other. In this hypothetical event, for AI to encounter humans in Levinas’ face-to-face interaction, AI would be required to have a sense of self and a sense of beingness equal to that which exists for humans. This is a more expansive question that will not be undertaken in great detail herein, but it can be acknowledged that there is a unique human awareness related to human beingness, human self-conception, and human identity. Descartes, for example, took up this hypothetical and concluded that a machine could not be identical to a human in all modes of thought, awareness, and being. Methods such as the Turing Test and the Lovelace Test were designed to investigate to what degree, if any, digital and computational technologies have the capacity for human-like cognition and consciousness (Copeland 2017). Relatively few existing technologies have passed these tests, and the tests themselves continue to be debated with regard to their usefulness and sufficiency. Searle, with the Chinese Room argument, concludes that programmed computers do not have what could be called cognitive states of being (Searle 1980). For the purposes of this examination, we may affirm that AI as we know it may exist in an intelligent ‘body’ or environment, but it does not duplicate the wholeness or totality of the human body or human awareness. Digital tools can enhance human cognition and create opportunities for Alius connection and a better quality of human life. New developments in digitization have the potential to improve human lives on a global scale; however, this will not occur through machines replacing humans. It will occur through machines and digital technology complementing human thought and human existence (Johnson 2000).

What evokes the responsibility of the Self to the Other is the inhabitation and experience of the same spatial reality and the recognition of the Self in the face of the Other, which then compels and commands the Self toward greater responsibility (Levinas 1961: 210, 214). Given this premise, humans will likely not see themselves in the face of technological artefacts, no matter how closely those artefacts optically resemble humans, simulate human awareness, or are trained on it. Humans, therefore, will not be drawn to the sense of responsibility that Levinas’ face-to-face allows for. This awareness is unique to humans and cannot be replicated, yet AI development should strive to create forms of awareness that mimic some of its elements. This is to ensure that AI is designed with regard for the care, support, and sustainability of human life and the natural environments we live in. This is not problematic; it is instead indicative of the fact that AI will remain a continuation of human thought - not a replacement of it - and that human interaction and acknowledgement of the responsibility of the Self to the Other must underlie all AI development.

The rationale for applying Levinas’ Self and Other to the discussion herein is that it exemplifies a system of ethics relevant to the most pressing and evolving ethical dilemmas of our time with respect to digitization. For Levinas, “the epiphany of the face is ethical” (1961: 199). Given the rapidity with which AI and digital technology continue to develop, there has been a persistent lag in how the moral and ethical implications of ICTs are examined on a global scale. Contemporary philosophers of information, including Floridi, call for due care, mindfulness, and better design within AI, machine learning, and digital technologies. The necessary governance of the digital must take into account the role of these technologies, not as a replica of or replacement for humans or human interactions, but rather as an enhancement of human existence. Levinas’ ethics of responsibility to the Other is imperative at a time when the digitization of communication is becoming increasingly normative and unquestioned (Bergen & Verbeek 2020). Employing the ethical relationship of the Self in relation to the Other is a method for imbuing the responsibility for the Other into the continued development of AI and digital technologies.

AI is not destined to replace uniquely human experiences and functions. It should enhance human decision-making, support human responsibility, and augment human work and thought. AI is not a replacement of human cognition; it is the next step in the evolution of human cognition. A continually puzzling piece of this for human self-identity is the necessity to reevaluate human centrism, as humans have ceased to be the entities with the highest processing power - computational technologies can now outperform humans at many tasks that require intelligence. This fact has served as a cause for fear of AI, although that fear results purely from a misunderstanding of what is truly human. It is autonomy, not intelligence, which is unique and will remain unique to humans. AI can be intelligent, but it must be programmed to act, and it is therefore not autonomous in a way identical to human thought and human action. Centering humans should, therefore, be taken up with regard to human autonomy as opposed to human intelligence. We can then envision the possibility and potential of AI in positive terms.

AI design and development should include teaching AI to possess forms of awareness that better support human care and cognition, and the philosophy of AI remains pertinent and essential for ensuring that such an approach is included. This undertaking will continue to be complex as concepts of digitization and AI continue to evolve (Floridi 2004: 558). Face-to-face interaction remains essential, as does the recognition that the Alius is the digitized presentation of the Other to the Self and not an identical replica, substitute, or substantive alternative to the face-to-face. It is a digitized tool for the Self to meet and interact with the Other, but it does not equate to the same experience or resulting ethical responsibility. Encountering the Alius does not compel the Self to acknowledge the same sense of responsibility toward the Other. As such, the human face-to-face element, consisting of the responsibility that the Self owes the Other, cannot be excluded from the continuing development of the ethics of AI and further globalized digitization.

Bibliography

Bergen, J. P., & Verbeek, P. 2020. “To-Do Is to Be: Foucault, Levinas, and Technologically Mediated Subjectivation.” Philosophy & Technology. doi:10.1007/s13347-019-00390-7

Copeland, J., et al. (eds.) 2017. The Turing Guide. Oxford: Oxford University Press.

Floridi, L. 2004. “Open Problems in the Philosophy of Information.” Metaphilosophy 35(4): 554-582.

Floridi, L. (ed.) 2015. The Onlife Manifesto: Being Human in a Hyperconnected Era. Springer International Publishing.

Ihde, D. 1990. Technology and the Lifeworld: From Garden to Earth. Bloomington and Indianapolis: Indiana University Press.

Johnson, D. 2000. Computer Ethics. Third edition. Upper Saddle River, NJ: Prentice Hall.

Levinas, E. 1961. Totality and Infinity: An Essay on Exteriority. The Hague: Martinus Nijhoff Publishers and Duquesne University Press.

Searle, J. 1980. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3: 417-457.

Wellner, G. 2014. “The Quasi-Face of the Cell Phone: Rethinking Alterity and Screens.” Human Studies 37(3): 299-316.