Written by Jan Redzisz. Jan has a background in applied cultural analysis and is currently a visiting researcher at ETHOS Lab, where he is working on speculative games for engaging future users of robots and artificial intelligence. As part of his visit, Jan is writing a series of articles on the topic of his research, of which this is the second. The first article can be found here.

In 2014, the obscure Robot Film Festival gave out one of its obscure Botsker awards (in the “most uncanny” category) to a video about Brad the toaster. All that Simone Rebaudengo of automato.farm needed to tell Brad’s tale was four minutes of mockumentary about the toaster’s lifestyle and users. The thing about Brad the toaster was that he (sic!) was needy, pickily assessing his owners on whether they were worthy of his delicious toast. Should they fail to provide Brad with enough love and appreciation, he would promptly remove himself from their household and ship himself off to more suitable owners. The absurdity of this story is that it blows out of proportion a reality in which it is we, the dumb users, who serve the products and their imagined selves. However, I refuse to write it off as mere absurdity. In fact, I think Brad really helps us unveil the bizarre outcomes of today’s consumerism that are already with us. Siri, meet Brad. Brad, meet Siri. I hope you two get along! The rest of you, please set your sat-nav to Morgan Freeman’s voice and buckle up for the second instalment in my series of ETHOS articles.

Speaking of toasters, any Battlestar Galactica fans out there? Remember the Cylons? The evil robot species, otherwise known as “toasters”, that rebelled against their human creators in a bid for Earth domination. Okay, that was a little far-fetched as segues go, I’ll give you that. However, I don’t see why we wouldn’t discuss Brad the toaster in the same category as robots (Cylon or not), sex dolls, and other objects that we so fondly insist on attributing identities to. We tend to endow products with back-stories and pet names, and often fantasize about customizing them further so that they possess voices, which we deem necessary for developing new kinds of interaction. So far, such interaction takes shape mostly in written form, on various screens, like in the computer games of yore. I take comfort in thinking that my past gaming experience could one day serve as a skill for interacting with the new social actors that will come out of new tech. I sincerely believe that my life would have turned out differently had it not been for RPGs, and Baldur’s Gate in particular. Don’t you dare reject the fact that life’s chores are simply side quests that happen on the way to its main objectives.

After years of arduous battles to obtain a university degree, you have succeeded in receiving an ancient diploma from a group of old men in robes and funny hats. You find yourself relieved, yet unemployed. You should have SO not studied humanities, old pal. What do you do?

A. Enrol in A-kasse fund
B. Run away to a gap year of adventures
C. Beg on the streets of Copenhagen
D. Prostitute yourself

Great! You make your way to the unemployment fund office. A gentle-looking lady offers you coffee and evaluates your CV. You nervously spill the coffee onto her lap and your CV – she becomes aggravated. Select your weapon or your charm.
As entertaining as that is, there’s a fundamental difference between my interaction with the coffee-covered lady and disgruntled Brad the toaster. I know the lady is aggravated because of my hopeless clumsiness, that hot coffee is painful, that stains are hard to wash out, that she’s older than me and Danish, that she’s been at work for six hours and that that’s tiring. I know, or assume, so much more – which is why the list of my RPG answers to appease her is almost limitless. What I know about Brad, by contrast, is only what his creators decided to equip him with, as it manifests itself in our interaction. I know he’s a moody bugger, and that he’s apparently male. Here’s exactly where I object: we seem to be giving voices and identities to products like robots, which are ultimately as much of an empty vessel as the toaster. We sometimes do it because it is much cuter that way, sure – just relax, it’s for fun. I know, and I am not disputing the cuteness or fun factor of it at all. What I advocate, however, is the kind of interaction with objects that allows for a multitude of possible narrative angles, so that next time a shopping mall security robot accidentally steps on an unattended kid’s foot, we don’t dive head-first into some ready-made cliché. In an ideal world, firstly, we wouldn’t be introducing the problem as “RoboCop” (sic!), and secondly, we wouldn’t report on it like futurism.com did:
“Autonomous Robot Intentionally Knocks down 1 Year Old Child and Continues Assault”

In this case, the journalists seemingly chose to speak on behalf of the robot, as if it possessed a moral code of conduct, and furthermore placed it in a suitable narrative in which the robot refuses to follow the morally right procedure. If we are so keen to allow objects a voice, let’s allow them to accumulate a “story” of their own too. In my previous article I quoted Kathleen Richardson (2015) on how she considers robots “empty artefacts”, which can only reflect the properties we fill them with. So far, it seems like we are doing so either involuntarily, or on purpose but with a limited understanding of what we expect from a particular product of ours. As for the first case, we have only recently heard concerned voices about how AI search algorithms might be showing “white” prejudice. Simply read the NYT’s “Artificial Intelligence’s White Guy Problem” by K. Crawford.

In the second case, it is just as hard to point fingers at anyone in particular. The nature of innovation is that we often don’t initiate it with a whole mental map of anticipated outcomes. Instead, we simply experiment, and the experiment then takes on a life of its own. Only this summer, the National Science Foundation (medium.com/@NSF) jokingly released a handful of motivational posters for robots as part of the #GenerationR campaign, to show how we could approach robotics in a more endearing way and to promote its development and exposure to the public. Disclaimer: I don’t seek any beef with that! However, the motivational posters really do illustrate how we sometimes might be a little too keen to seek out Brad the toaster in the various fruits of today’s innovation. The campaign encourages social robots to remain strong, since real strength is to be gentle; evolutionary robots are cheered on to commitment in order to compete with real organisms; industrial robots are motivated towards empathy, nano-robots towards power, and various others towards teamwork, communication, and so on. It’s just a joke, and a private one at that, it seems. But the joke is on all of us. Last month, I wrote about robot-literacy and how we can all instinctively tap into the pop-cultural narratives at hand in order to retell future robotic developments. I see those motivational posters for robots as yet another example of such mechanisms. Past the joke, we put up little fight in subscribing to the idea that one day a social robot should remain patient in its quest for more gentleness and empathy. By momentarily suspending our knowledge that such robots are mere objects, we choose to favour the fantasy of them having a life and a story of their own. Why don’t we pause for a second and rephrase the main objective here? Is it for a robot to BE more gentle, or for it to MEDIATE more gentleness? It seems like we could benefit from taking a few steps back and revisiting the notion of what we want ANY robot to stand for, in the first place.

Drawing from Marshall McLuhan’s theory, which states that “(…) the personal and social consequences of any medium – that is, of any extension of ourselves – result from the new scale that is introduced into our affairs by each extension of ourselves, or by any new technology” (in Understanding Media, 1964), we can try to redefine what it is that robots introduce into our lives. The robotic seal Paro, popular among seniors and children in therapy, does its job by simply purring, moving ever so slightly and uttering cute noises of reassurance. In McLuhan’s world of media, Paro mediates our human capacity for closeness and warmth, as well as providing an outlet for our human need to care for others. In this world, the robotic surgical arm is a medium for a surgeon’s actual arm, just as much as a bicycle is for our legs. The controversy starts when we go beyond mediating mechanical abilities and into the psychological capacities that cute Paro is meant to extend. To some, Paro does a great therapeutic duty to people in need. Others worry that it might be replacing human bonds with an unnatural, lesser substitute (and in so doing be a tad inhumane). It seems to me that the two sides of the argument might have opposing visions of what Paro the seal actually is, in its essence. If, in your mind, you endow Paro with its own self, then it would BE a substitute for another (human) self. If you endow Paro with a mediating function, then it would merely EXTEND various human selves – that of a caretaker’s ability to tend to patients, and that of a patient’s to give caring warmth to things. In a sense, a common cat usually does Paro’s job, but in big homes for the elderly a lot of people might have allergies, so Paro swoops in.

A robotic sea mammal already proves a problematic notion, so what about androids? What do they mediate, if anything? To me, asking this specific question should be a key homework exercise for all of us, prior to attributing humanoid robots with any additional human traits. For what are today’s robots, as we know them, if not walking toasters with proverbial baggage? Heck, forget racist AI, what was the deal with David Hasselhoff’s car friend in “Knight Rider” (1982)?! A cross between a police partner, a butler and a friend, the car had been perfectly gendered into its social role within the story. We simply knew that “KITT” was a car-dude, with this or that story, identity, motivation, etc. Just as we knew that the car in John Carpenter’s horror “Christine” (1983) was a blood-sucking femme fatale. Both cars had personalities reflected in their design. The masculine black-and-red racing KITT stood in stark contrast with blood-red Christine, whose dashboard would light up in poisonous green to underline the suave treachery of an evil woman. Those were cars with some serious gender baggage… Cars! Perhaps it’s a cautionary tale about what might happen if you embody a singular trait in an object and let its personality live solely around that one property.

No wonder sex robotics is often clouded by various parties’ disapproval. You can’t create a human-looking object, equip it with exclusively one aspect of the human condition (sexuality), and not expect it to seem downright creepy in the eyes of the public. Truth be told, there are degrees to that phenomenon. There’s a reason fewer people will perv out over window-display mannequins than over sex robots. That reason is the curatorial element in designing just how sexy an object should be. In episode 5 of the New York Times documentary series Robotica, “The Uncanny Lover” (2015), Matt McMullen, CEO of Abyss Creations, explains how he wants to create a perfect illusion of pleasure communicated by a sex doll, so that a client could feel a “real” bond in the interaction. In so doing, McMullen equips the AI with the singular trait of being a “lover”, while not wanting her to “sound dumb”, as that might be a turn-off. Through this, as much as through perfect-looking animatronics, the robot is being fitted with a sense of performative self, which in turn is intended to interact with human selves. In my opinion, the root of the problem here is the lack of a blatant mediation factor, abandoned in favour of a pretend autonomous identity (which is simply not there). At the same time, McMullen proclaims he would rather keep his sex robots looking like dolls, so as not to evoke the “uncanny valley” effect – the psychological and physical repulsion we feel when faced with something seemingly human, but not quite, or no longer (hyper-realistic models, but also cadavers). Where is the logic, then, in introducing a new humanoid actor into social relations that conveys totally believable sexual servitude while posing no physical claim to being human? If a garbage-can-looking security robot runs over a toddler’s foot and we call it “continued assault”, what do you think we’ll be calling the victims of failed relationships with a perfectly humanoid doll that keeps begging for sex?

What I am trying to demonstrate here is that whatever property we imbue an object with, it’s an arbitrary process, subject to curatorial decisions. And where there’s curation, there’s a curator; where there’s a curator, there’s an agenda. In times to follow, other human traits might be projected onto other androids, and there will always be actual humans to decide on the rights and wrongs of what it means to be racist, Catholic, male, an athlete, a pop star, etc. Wouldn’t it make more sense to make that process of arbitration less centralized? Especially when we think of robots as potential media (in the McLuhan sense), which should enhance our own abilities – abilities that are bound to be ultimately individualistic.

In her “Dilemma of expertise: Democratising expertise and socially robust knowledge” (2003), H. Nowotny writes about the possibility of resolving expert problems in an “agora” of sorts, where problems are generated (raised) but also solved in a way that allows for all types of competing expertise, along with the participation of the general public. In such a way, various expertises would be both exercised and negotiated. At the same time, J. Law & J. Urry, in their “Enacting the social” (2005), advocate the need for new methodologies to address otherwise outdated understandings of the social sciences. New methods would need to keep up with the complex and elusive character of the social domain, tapping into its multiplicities and latent factors. Both articles point to the need for a middle way of resolving social issues in a more decentralized manner, one that wouldn’t rely on such rigid divisions as those currently used, e.g. when we debate new tech. In other words, we need new methods for new times. Or, as I like to think of it, we need more bottom-up engagement in innovation, utilizing the vast pool of experience that we, as future users, don’t seem to be activating enough.

During last month’s ETHOS Summer Lab, I ran a workshop on lateral thinking in innovation. Using contextual props and a roulette-like manner of work, the Lab’s participants attempted to uncover latent factors that could result from combining different technologies – factors that might be dormant at present, but could be utilized or avoided in the future. Among many other things, we entertained the issue of social stratification in the aftermath of brain-stimulating devices; pondered the implications of the sensors at ITU’s Cafe Analog for the space industry; and asked whether AI-supported film script writing could impact our future crop management. Nonsensical as it sounds here, it was a good exercise in mapping out possible futures and dynamics, as well as in uncovering unaddressed notions that already play their parts around us. It was our little nod to practicing self-reflexivity in innovation, drawing inspiration from Actor-Network Theory (albeit not totally copying it), semiotics and general preparedness. It could be one of many new ways of addressing the elusive, complex dilemmas of new tech, just as called for by Law and Urry. It could also be an example of creating our own little agora, as anticipated by Nowotny, and a potential contribution to redefining robots as media. For clearly, an android model put in the context of crop management in outer space will differ from the same product put in the context of river traffic control. The role of a killer drone in a discussion about the Canadian nightlife scene will differ from its role in a discussion about using drones in health examinations. When we discuss an object within the wider frame of what it can mediate for us, we immediately take a step sideways from what that same object is meant to signify as an entity. Better yet, it would allow for more voices in debating specific applications of objects than just one researcher’s ramblings and hopes.
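For the curious, the roulette mechanic itself is trivial to put into code. Here is a minimal Python sketch of such a prompt generator, under the assumption of two hypothetical prop lists – the entries below are my own illustrative stand-ins, not the actual props we used at the Summer Lab:

```python
import random

# Hypothetical prop lists – the actual cards at the Summer Lab were
# physical props, so these entries are illustrative stand-ins only.
TECHNOLOGIES = [
    "brain-stimulating devices",
    "cafe occupancy sensors",
    "AI-supported film script writing",
    "robotic seals",
]
CONTEXTS = [
    "social stratification",
    "the space industry",
    "future crop management",
    "river traffic control",
]

def spin(rounds: int = 3) -> list[str]:
    """Draw random tech/context pairings to use as workshop prompts."""
    prompts = []
    for _ in range(rounds):
        tech = random.choice(TECHNOLOGIES)
        context = random.choice(CONTEXTS)
        prompts.append(
            f"Consider {tech} in the context of {context}: "
            "what latent factors emerge?"
        )
    return prompts

if __name__ == "__main__":
    for prompt in spin():
        print(prompt)
```

The randomness is the whole point: it forces pairings that no single curator’s agenda would have chosen, which is a small, mechanical guarantee of lateral thinking.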

Additionally, I think bottom-up engagement in designing Human-Robot Interaction could also help individualize the products – something surely handy in the legal realities to come, e.g. when the same model of robot as your beloved robotic seal or hedgehog is being lynched by the press as the next Terminator. Possibly my computer game deliberations are as abstract as the relevance of Cafe Analog to the space industry, or the emotional needs of Brad the toaster, but let me have my fun in speculating. If we just cannot refrain from giving objects their own voices, let’s think again about the need for variable narrations and the possibilities of an RPG-like dialogue in Human-Robot communication, as in the sketch and vignette below. It could allow for storing more personalized feedback from various users and cataloguing it into social cues on the complex expectations that arise from our socio-cultural diversity. Perhaps it could also help the same type of robot mediate different roles in our respective lifestyles. Maybe this way, we won’t be thinking so much about robot exploitation as about the utilization of new media.
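To make the speculation concrete, here is a minimal sketch of such an RPG-like exchange, assuming a hypothetical dialogue-tree structure of my own invention – the node class, option texts and feedback log below are illustrative, not any existing HRI framework. Each choice a user makes is logged, which is exactly the kind of personalized feedback described above:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class DialogueNode:
    prompt: str  # what the robot says at this point
    # option number -> (the user's possible reply, the node it leads to);
    # a node with no options ends the conversation
    options: dict[int, tuple[str, DialogueNode]] = field(default_factory=dict)

@dataclass
class DialogueSession:
    # (robot prompt, chosen option) pairs; over many sessions these logs
    # could be aggregated into social cues about what users expect
    log: list[tuple[str, int]] = field(default_factory=list)

    def run(self, node: DialogueNode) -> None:
        while node.options:
            print(node.prompt)
            for key, (reply, _) in sorted(node.options.items()):
                print(f"  {key}. {reply}")
            choice = int(input("> "))  # a real system would validate input
            self.log.append((node.prompt, choice))
            _, node = node.options[choice]
        print(node.prompt)

# A toy tree loosely modelled on the dinner-party vignette below
farewell = DialogueNode("Understood. I shall remove myself from this situation.")
follow_up = DialogueNode("Ah, I see.", {
    1: ("Your dialogue options are tedious.", farewell),
    2: ("I'm uncomfortable having someone's sex robot at a social event.", farewell),
})
root = DialogueNode("Why won't you help me set the table?", {
    1: ("I don't like setting dinner tables.", follow_up),
    2: ("Because you're a robot.", follow_up),
})

if __name__ == "__main__":
    DialogueSession().run(root)
```

The design choice worth noting is that the log stores the robot’s prompt alongside the user’s pick, so the same model of robot, deployed in different households, would accumulate genuinely different “stories” – mediation, rather than a single curated identity.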

You visit your workmate’s house for dinner. To your surprise, a sex robot is seated next to you at the table, while your host cooks away in the kitchen. You promptly distance yourself from the robot and refuse to help her set the table, even though she asked you to. She then asks you:

  1. Is it because you don’t like setting dinner tables?
  2. Is it because I’m a robot?
  3. Is it because I’m a sex robot?
  4. Different reason?

Ah, I see. Says the robot.

  1. Do you find my dialogue options tedious?
  2. Do you find my current attire insulting?
  3. Are you uncomfortable having someone’s sex robot at a social event?

Understood. I shall remove myself from this situation. Enjoy your meeting.

I shall also remove myself from this article, and will see you one last time in next month’s account of the upcoming conference “What Social Robots Can and Should Do? Robophilosophy 2016 | TRANSOR 2016” in Aarhus, October 17–21.