The dead parrot and the hole in the paper sky

Messy thoughts on humans, machines, language and trust.

A blog post written by Stefania Santagati, a DIM student at the IT University. Stefania was a Junior Researcher in the ETHOS Lab in the spring semester of 2017.

ALIV

Much later, he would conclude that nothing was real except chance. But that was much later. In the beginning, there was simply the event and its consequences.

Whether it might have turned out differently is not the question. The question is the story itself, and whether or not it means something is not for the story to tell.

Paul Auster, City of Glass

My recent research started the day ALIV met my friends. ALIV is a small text adventure simulating a super-intelligent AI. My friends, its playtesters.

Monika and I developed ALIV as a quick prototype for a videogame idea we had in mind: a game where the interface is lying to you. We set our concept on a futuristic spaceship, in which the player character, John Doe, finds himself after being woken from cryosleep. He is appointed captain and needs to cooperate with the on-board AI, ALIV.

He is told he is delivering settlers to the planet Oztralia, but in reality the mission is to transport political opponents there for hard labour, and to exterminate all life on the planet whenever a new batch of “settlers” arrives. The game’s main interaction is communicating with the on-board computer ALIV via text input.

There would be much to say about our design process itself – why we decided not to apply the human concept of “lying” to our AI and instead only have it withhold information, and why we later introduced a repressive government as the deceiving maker of ALIV – but that’s another story.

In the end ALIV was clumsy, often revealing its ineptitude by collapsing at the smallest typo, and the game was frustrating – requiring the player to decipher subtle hints along the way, many of which, in hindsight, must have been obscure to everybody except us. Despite that, some players made it to the end.

Now that was the thing, the moment where the original premise bit back. The player who had spent his game time blindly following the AI’s instructions would find himself faced with two presumably horrifying options: cooperate or die. ALIV would reveal its true face, and the player, abruptly stripped of the taste of a probable victory, would often stare blankly at the screen for a few seconds. Some of them, sometimes, would turn to us and say, “You betrayed me”.

The outcome of our small experiment was influenced by many factors and relied on some conventions. The anecdote in itself has one obvious moral – and another, which, after months of research, is still not entirely clear to me.

First, it seems to me that the trust instinctively granted to ALIV was partly due to its inherent authority as a videogame interface. Interfaces are usually places where designers share, and players gather, knowledge about the specific means by which the artifact is intended to be used; much like an instruction manual for Ikea furniture, they are seldom places to apply one’s critical thinking.

Secondly, it was far too obvious that all of ALIV’s interactions were scripted, goofily responding through simple keyword matching constrained by finite-state machines. Our brute-force approach to a complex reality – one where different players would express their commands in unpredictable ways – was hilariously brittle. It’s easy to perceive the designer behind the design when the fictional pact – the willingness to suspend one’s critical faculties and believe something surreal – is repeatedly broken. ALIV’s voice was, clearly, our voice. ALIV was our creature, and we were literally standing within a stone’s throw of it. Therefore ALIV had no agency, and we were the ones to be held accountable for the betrayal.
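
To give a sense of how little was going on under the hood, here is a minimal, purely illustrative sketch of that kind of keyword-and-state matching in Python. The states, keywords and replies are invented for this post, not taken from our actual script, but the brittleness is the same:

# A purely illustrative sketch of keyword matching gated by a finite-state machine.
# The states, keywords and replies are invented here, not taken from ALIV's script.

RESPONSES = {
    # (current_state, keyword) -> (reply, next_state)
    ("bridge", "status"): ("All systems nominal, Captain.", "bridge"),
    ("bridge", "open"): ("Opening the cargo bay doors.", "cargo_bay"),
    ("cargo_bay", "cargo"): ("The settlers are sleeping. Do not disturb them.", "cargo_bay"),
}

def aliv_reply(state, player_input):
    """Scan the input for a known keyword valid in the current state."""
    for word in player_input.lower().split():
        key = (state, word.strip("?!.,"))
        if key in RESPONSES:
            return RESPONSES[key]
    # Typos, synonyms and rephrased commands all fall through to here.
    return ("I do not understand, Captain.", state)

state = "bridge"
for line in ["What is our status?", "Please opne the doors"]:  # note the typo
    reply, state = aliv_reply(state, line)
    print(reply)
# -> All systems nominal, Captain.
# -> I do not understand, Captain.

A single typo or unexpected synonym falls straight through to the fallback line – which is exactly how ALIV kept giving itself away.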

But observing players struggle over where to place their trust made me think of something else, too. What happens with commercial applications of conversational AI, the ones that don’t come with easily blameable developers attached? Even state-of-the-art natural language processing – think Alexa, or the famous Sophia – cannot stray far from echoing its creators’ voice and intentions.

When our players addressed the ghost in the machine – the entity with enough judgement to intentionally deceive them – they easily reckoned it must have been us. But how does that work with a real, complex, layered system of artificial intelligence resembling humanness? Does it have agency? Can it be trusted? Who should be held accountable for it? What is it, anyway?

Pining for the fjords

There is a famous Monty Python sketch that comes to my mind when I think about this story. It portrays a conflict between a disgruntled customer (played by John Cleese) and a shopkeeper (Michael Palin), who argue over whether or not a recently purchased “Norwegian Blue” parrot is dead. “When I purchased it, not half an hour ago, you assured me that its total lack of movement was due to it being tired and shagged out following a prolonged squawk”, complains Cleese. “He’s probably pining for the fjords”, the shopkeeper replies. The phrase became so popular that it is now used as a euphemism for something that is dead but pretended to be still alive.

At times, trying to design ALIV’s answers, we felt like Monty Python’s shopkeeper: absurdly, pointlessly trying to give a semblance of life to an inanimate object.

Design practices for artificial agents often play on this concealment of the designer behind the design: “the new idea is that the intelligibility of artifacts is not just a matter of the availability to the user of the designer’s intentions for the artifact, but of the intentions of the artifact itself” (Suchman, 1987). See also Franchi and GĂŒzeldere’s (1995) slightly outdated, but still relevant, account of the advancements in the field of AI: “in chess playing programs […] the brute force is veiled behind a form of behavior typically associated with something dear to our hearts: the intricate game of chess, where ‘minds clash’. […] the machine intelligence involved in chess playing owes more to the ‘eye of the beholder’ than to any actual intellectual capacity”. Today, despite having mastered games much more complicated than chess, “Artificial Intelligence” is still a contradiction in terms, a self-aggrandizing tech-industry misnomer.

This form of concealment, or projection, might be the natural derivative of the idea that first sparked the development of AI itself: that if a machine can imitate human behavior convincingly enough, it cannot be distinguished from a human respondent (Turing, 1950).

The ELIZA effect

Coming close to passing Turing’s test in the mid-60s was Joseph Weizenbaum’s groundbreaking experiment ELIZA, a program written while Weizenbaum was a professor at MIT and named after Eliza Doolittle, who learned proper English in “Pygmalion” and “My Fair Lady”. The program made it possible for a person typing in plain English at a computer terminal to interact with a machine in a semblance of a normal conversation, and is therefore regarded as the first “chatterbot”, or conversational agent.

Meant as a parody of a Rogerian psychotherapist, and indeed as a proof of the superficiality of the communication between humans and machines, ELIZA went above and beyond its initial purpose, spurring enthusiastic reactions from both practicing psychiatrists and people involved in the experiment, who quickly “very deeply
became emotionally involved with the computer” and “unequivocally anthropomorphized it” (Weizenbaum, 1976). Some of his students exhibited strong emotional connections to the program; his secretary asked him to leave the room while she talked to ELIZA.
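
How little machinery that Rogerian mirroring requires is easy to show. What follows is a toy sketch of the kind of pattern-and-reflection rule ELIZA relied on – not Weizenbaum’s actual script; the patterns and replies are invented here for illustration:

# A toy illustration of the Rogerian "mirroring" trick ELIZA is built on.
# Not Weizenbaum's actual script: the patterns and templates are invented here.
import re

# Swap first- and second-person words so the user's phrase can be echoed back.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # the catch-all that keeps the conversation moving

print(eliza_reply("I feel that my computer understands me"))
# -> Why do you feel that your computer understands you?

A handful of regular expressions, a pronoun swap and a catch-all reply are enough to keep the conversation going – and, apparently, enough to make us feel listened to.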

Weizenbaum was deeply troubled by what he discovered during his experiments with ELIZA. In 1976, he sketched out a humanist critique of computer technology in his book Computer Power and Human Reason: From Judgment to Calculation. The book did not argue against the possibility of artificial intelligence, but was a passionate criticism of systems that substituted automated decision-making for the human mind, and an invitation to carefully consider “the proper place of computers in the social order”.

Social scientist Sherry Turkle, the director of MIT’s Initiative on Technology and Self and one of Weizenbaum’s former colleagues, considers ELIZA and its ilk “relational artifacts”: machines that use simple tricks like mirroring speech or holding eye contact to appeal to our emotions and trigger a sense of social engagement.

After working with children’s perception of advanced humanoid robots such as Cog and Kismet, she noted: “The relational artifacts of the past decade, specifically designed to make people feel understood, are more sophisticated interfaces, but they are still parlor tricks. […] If our experience with [these robots] is based on a fundamentally deceitful interchange—[their] ability to persuade us that they know of and care about our existence—can it be good for us?” (Turkle, 2006)

Camouflage

Weizenbaum had unexpectedly discovered the tendency to unconsciously assume computer behaviors are analogous to human behaviors—a phenomenon now known as the “ELIZA effect.” In the interaction with conversational agents, this also builds on our view of linguistic actions as inherently human.

Natural language is the medium of communication among members of our species, and our species only. Anthropomorphization, already observed in interactions with infinitely less complex systems such as household appliances (Taylor, 2009), can only be strengthened by the use of intentional, moral vocabulary when interacting with conversational systems.

But like the parrot, even on its better days ELIZA (and ALIV, and the others, to varying degrees) could do little but repeat the words it had been exposed to, giving a pale semblance of intelligence. To them, words are memory allocations, faint electrical pulses through silicon circuits, whose relational distances can be measured through vector operations. Sign systems converted into other sign systems, so that a conversation can be held, on our terms.
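
To make “relational distances measured through vector operations” concrete, here is a small illustrative example with made-up three-dimensional vectors; real systems learn embeddings with hundreds of dimensions from text, but the arithmetic is the same:

# Words reduced to vectors: "meaning" as nothing more than relative position.
# The vectors below are made up for illustration; real embeddings are learned from text.
import math

EMBEDDINGS = {
    "parrot": [0.9, 0.1, 0.0],
    "bird":   [0.8, 0.2, 0.1],
    "fjord":  [0.0, 0.1, 0.9],
}

def cosine_similarity(a, b):
    """How 'close' two words are: the cosine of the angle between their vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

print(cosine_similarity(EMBEDDINGS["parrot"], EMBEDDINGS["bird"]))   # high, about 0.98
print(cosine_similarity(EMBEDDINGS["parrot"], EMBEDDINGS["fjord"]))  # low, about 0.01

Nothing in that calculation knows what a parrot is; closeness is just an angle between two lists of numbers.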

There’s nothing wrong with that, but it might be beneficial, if the aim of AI research is still to gain insight into the nature of intelligence, to redefine the terms of the representation. Maybe, for example, we could think of AI as a simulacrum rather than a copy or imitation:

“The terms copy and model bind us to the world of representation and objective (re)production. A copy, no matter how many times removed, authentic or fake, is defined by the presence or absence of internal, essential relations of resemblance to a model. The simulacrum, on the other hand, bears only an external and deceptive resemblance to a putative model. The process of its production, its inner dynamism, is entirely different from that of its supposed model; its resemblance to it is merely a surface effect, an illusion.” (Massumi, 1987)

Historically, the ontological divide between the human and the non-human has been secured by grounding human personhood in the use of language to create meaning. But already within second-order systems theory, meaning is disarticulated from language and stems instead from the reduction of complexity, or “noise”, which autopoietic systems – human and nonhuman, even non-biological – must perform if they are to survive.

Human beings are just one of many autopoietic systems sharing their environment with a wide range of non-human animals, each “bringing forth a world” in a meaningful, even if not human, way.

In this view, traditional humanism is no longer adequate to understand the human’s entangled, complex relations with animals, the environment, and technology.

Life under a torn paper sky

“Lucky marionettes, I sighed, over whose wooden heads the false sky has no holes! No anguish or perplexity, no hesitations, obstacles, shadows, pity—nothing! And they can go right on with their play and enjoy it.”

Luigi Pirandello, The Late Mattia Pascal

In The Late Mattia Pascal, the main character is invited to a performance of the Tragedy of Orestes by “automatic dolls” in a marionette theater. Suppose, the character muses, that the puppet playing Orestes, at the very moment of avenging his father’s death by killing his mother, were confronted with a little hole torn in the paper sky of the scenery. What then?

At that point, the character hypothesizes, Orestes would be overwhelmed:

“Orestes would become Hamlet. There’s the whole difference between ancient tragedy and modern . . . a hole torn in a paper sky.”

This is the spot where Pirandello’s characters are often trapped, conscious that they’re moving within cartoon scenery, but unable to leap out. Addressing a conundrum which he calls “the clumsy, inadequate metaphor of ourselves”, he hints at themes that would later become central to postmodernist thinking, in which humans with confused, fragmented identities descend into a labyrinth where reality and fiction become increasingly difficult to separate.

Haraway has argued that humans have had to come to terms with multiple decenterings, each inflicting a successive wound on human narcissism: the Copernican wound, the decentering of the earth from the centre of the universe; the Darwinian wound, the decentering of humanity from the centre of organic life; the Freudian wound, the decentering of consciousness; and a fourth, synthetic wound, the decentering of the natural from the artificial, so that the liveliness of technological entities has had to be accommodated (Haraway, 2003).

In this last decentering, the age of our mechanical reproduction, we might find ourselves lost and speechless. I suggest that to establish a truthful conversation with these uncanny others we must first reconsider what we have long accepted as representations of our humanness.

The aim, then, should not be to reject artificial agents as valuable means of analyzing and making sense of the messy real, but to encourage a comprehensive understanding of intelligence beyond imitation, towards the integration of viewpoints, however diverse. Again, with Massumi:

“The thrust of the process is not to become an equivalent of the “model” but to turn against it and its world in order to open a new space for the simulacrum’s own mad proliferation. The simulacrum affirms its own difference. It is not an implosion, but a differentiation; it is an index not of absolute proximity, but of galactic distances.” (Massumi, 1987)

The ontological divide we have imagined between ourselves and the non-human world is not nearly as impassable as we have been led to believe. This recognition, however, must be framed not as a granting to the other of what we think ourselves to be, but as a radical reconfiguration of how we even think of ourselves in the first place.

Turkle reminds us that “[…] objects with no clear place, play important roles. On the lines between categories, they draw attention to how we have drawn the lines” (Turkle, 2005). The challenge of figuring out meaningful ways of interacting with multiple, puzzling, undefined others might spur the search for a better understanding of intelligence, beyond the human.

Comprehending a “new reality” in which human beings occupy a universe populated by non-human subjects requires a theory which entails “an increase in the vigilance, responsibility, and humility that accompany living in a world so newly, and differently, inhabited” (Wolfe, 2010). We must relinquish our sense of bounded identity and fixed categories to understand and communicate with frightening ‘others’.

———-

Agre, P. (1997). Computation and human experience. Cambridge University Press.

GĂŒzeldere, G., & Franchi, S. (1995). Mindless mechanisms, mindful constructions. Constructions of the Mind, special issue of the Stanford Humanities Review, 4(2).

Haraway, D. (2010). When species meet: Staying with the trouble. Environment and Planning D: Society and Space, 28(1), 53-55.

Massumi, B. (1987). Realer than real. Copyright no, 1, 90-97.

Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge University Press.

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.

Turkle, S. (2005). The second self: Computers and the human spirit. MIT Press.

Turkle, S., Breazeal, C., Dasté, O., & Scassellati, B. (2006). Encounters with Kismet and Cog: Children respond to relational artifacts. Digital media: Transformations in human communication, 120.

Wolfe, C. (2010). What is posthumanism? (Vol. 8). U of Minnesota Press.