Hackable Bodies. An Unknown Error Has Occurred

Florence Gouvrit Montaño

Ohio State University / Columbus College of Art & Design

Jordi Vallverdú

Universitat Autònoma de Barcelona


Not only are the spaces in which humans live increasingly intelligent, but human bodies and even their extensions are becoming smarter too: prosthetics, phones, computers, robots, glasses, etc. This is an extraordinary context in which bodies as spaces are not only extended but also intertwined, while remaining fragile and prone to being hacked by malicious agents. Humans increasingly live in a more intelligent and invisible milieu. Current technologies have inevitably led us to create Ambient Intelligence (AmI) environments, where an invisible, distributed and omnipresent technological agent performs all tasks that affect the user (climate control, security checking, healthcare, feeding needs, net media management, to name a few). During these processes the user receives human-like assistance in a hidden, almost ‘magical’ way. Before technology reached these responsive capabilities, humans were attended to by other humans, which activated the internal imitative sensorimotor cortical areas (the F5 area of the brain, according to Fadiga [1]). By ‘sensorimotor’ we mean the brain processes that occur during the execution, observation or even imagination of a physical activity, which invariably involve unconscious mechanical bodily calculations. For example, someone watching a waiter serving a drink or someone walking: in both cases, the observer is unconsciously calculating body movements and anticipating the following actions. Passively, humans observe humans performing tasks. With AmI, there is no longer observation but, rather, direct connection: humans become connected to their technological environment in such a way that the human brain cannot naturally understand how AmI works.

In the Ambient Intelligence environment, humans feel understood: their bodies are in symbiosis with the environment, which has been designed to be one step ahead of their basic needs. This is a crucial and confusing aspect for the user. The interaction with these intelligent systems is not placed within a mutual and symbiotic sensorimotor or emotional context. When these technologies fail, which can happen very easily, users have no means of understanding what is happening, nor a cognitively natural way to reconnect with these fake intelligences. If this happens with Bodily Integrated Technologies (BIT) like medical prosthetics (e.g. artificial pacemakers), which can be hacked, then the loss of control becomes an expression of the human incapacity to control these expanded environments. Even worse, it violates all of our natural ways of understanding that something is going wrong by atrophying a naturalized (or even cultural) common sense. On the one hand there is an inevitable fear of present and future techno-failures; on the other, there is also a production of wonder and awe.

A hackable body opens the possibility for creating a better ‘you.’ Here, the meaning of ‘hacking’ refers only to the possibility of modification, not to an unwanted manipulation. This modified/upgraded/extended ‘me’ can intervene in the perception and interpretation of the world, either through ‘upgraded’ emotions or hacked and integrated multisensory inputs. From an evolutionary perspective, there has already been a development of such synesthetic ‘skills.’

With upgraded bodies moving through intelligent spaces, we are creating a new action domain, which we will call ‘ambient wetware.’ This new space is a mixture of informational technologies with human bodies, their embedded extensions and surrounding intelligent spaces. Given these new bodies with technological extensions, an external hacker could manipulate a wide range of the user’s emotional and sensorimotor actions. Despite security mechanisms, some e-health pacemakers, as well as some body prosthetics, can be externally controlled. If NASA satellites have been hacked, why couldn’t these ambient wetwares be? With the operational hacking capacity of direct intervention into the realm of physical bodies or the personal and intimate space of the spectator, the body hacker might access the informational space of an augmented human.

In this article we review some new media artworks that examine these new lines of research, which integrate the cognitive sciences and AI, focusing especially on Ambient Intelligence environments and body hacking. Given that the cognitive neurosciences have demonstrated over the last decades that crucial aspects of sensorimotor processes are related to cognitive processes, we start from an embodied cognition premise, a theory in which cognition is shaped by bodily (motor + perceptual) systems. Among cognitive processes, emotions emerge as the fundamental forces that integrate and make possible the modulation, capture and selection of a complex array of multisensory inputs. With this premise, we ask whether the intervention or hacking of environments can become an available space for new media or intermedia art, and whether, in such a space, the artist could become physically integrated with the user in a way that renders the artist a symbiotic part of, rather than external to, the audience.

Searching Human Spaces for Bodies and Minds

“We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy.” – Hans Moravec

Human beings are prewired with genetic data that makes possible the existence of bodies, which are at the same time co-managed by cultural dynamics/traditions. Although the historical understanding of cognition was focused on the mind and its symbolic processing (including Aristotle’s syllogisms, Leibniz’s mathesis universalis, and the attempts of Frege, Boole, and Russell and Whitehead to reduce reasoning to logic), this has changed drastically in the past few decades.

Since the 1980s a naturalization tendency has been introduced into cognitive studies: natural minds were the result of natural bodies. Just a decade later, in the 1990s, the notion of ‘mind’ was extended. First, the body/mind separation was blurred [2], then extended, embodied, enacted. Emotions were then declared the core of any cognitive or informational process inside living entities [3] under the overwhelming influence of the neurosciences. [4] However, at the beginning of the 21st century, a new actor appeared: artificial systems that share cognitive processes with humans. Machines, computers, and robots share and help humans in the process of knowledge acquisition, processing, and analysis. [5, 6] The recent notion of ‘grounded cognition’ epitomizes all these attempts. [7] At the same time those debates were experimentally translated and performed in two related domains: AI and robotics. At first AI, with Allen Newell and Herbert Simon as its leaders, was essentially symbolic, but the 1980s brought a revolution, with researchers like Rodney Brooks at MIT defending the embodied robotics of his small insect-like robots, a view that today is widely accepted. [8] Brooks argued that true artificial intelligence can only be achieved by machines that have sensory and motor skills connected to the world through a body. The sensorimotor dominance over all perceptual, planning, and guiding processes has transformed the classic dualistic debate into a more unified and practical one. While these ideas about artificial intelligence have modified the limits of what were considered stagnant fields, their growth offered new options within the study of cognition, the expanded body and environments, and opened several new domains for hacking. Here we consider how human existence has been transformed through symbiotic artificial systems, and how artists and non-scientists have engaged with these possibilities.


AI and expert systems are examples of ways of extending human cognition outside the brain. New computational efforts, like e-Science, computer simulations or automated theorem proving, place us in front of epistemic black boxes that can only be supported by joint human-machine actions. [9] These new cognitive frameworks are interactive, and the automation is performed beyond human control. We must now trust our scientific and cognitive data, which are computationally mediated. Open-source software strategies have defied common knowledge-management ideas based on centralized control; instead, collaborative work is possible and highly efficient.

Some minds have been hacked, but always through the modification and implantation of brain sensors. José Manuel Rodríguez Delgado’s pioneering research into mind control through electrical stimulation was famously demonstrated when he faced a ‘hacked’ bull during a scientific performance: Delgado stopped the charging bull with a remote-control button connected to a stimoceiver implanted in the bull’s brain. [10]

Within the arts there are two examples that express the cognition-intervention paradigm. First, Ken Goldberg’s seminal artwork The Telegarden, which explores information resulting from live interaction with remote physical environments, opening the question of how telerobotics provides new insights into the nature and possibility of knowledge. This approach is related to Dreyfus, who suggests that advances in Internet telerobotics could reinvigorate the notion that our knowledge of the world is fundamentally indirect, provoking further advances and refutations. [11, 12] The second work is Processing.org, by Reas and Fry. [13] Though a programming environment, Processing is a structure for art development that was started by the two artists and quickly grew into an online project and platform; its distributed cognition model as main framework is a clear example of collaborative cognition.

The Telegarden, 1995–2004, Ken Goldberg, networked art installation at Ars Electronica Museum, Austria. Image courtesy of the artist. Photo by Robert Wedemeyer. (Used with permission.)



Our bodies are not just what we see or feel. They are the sum of several modules, some of them not even the result of DNA’s ‘machinery.’ [14] We are also hacking our body genetically: thanks to genetic engineering, synthetic biology and related techniques, we can select, modify, or create DNA informational strings. Finally, we are hacking our body morphologically: pacemakers, intelligent prosthetics, exoskeletons and cyborg extensions (glasses, wearable devices, etc.) modify our everyday activities. Wearable devices like smart clothes or watches connected to smartphones link our bodies with monitoring practices and Internet control. In some of these cases, wearable devices become part of our bodies, and some of them are accessible and programmable via Bluetooth, using security programs as well as Android/iPhone apps. Some implantable medical devices (IMDs) like artificial cardiac pacemakers can be accessed by NFC or Bluetooth technologies. [15] While security protocols are implemented, they are technically easy to bypass, meaning that these devices can be externally hacked. This unexpected situation triggers the emotional responses of wonder and/or fear: wonder if users feel themselves unexpectedly helped by the system; fear if they understand the ambient AI as a menace to their own decision-making process or are forced to suffer an unexpected harmful action.

There have been several examples of external body upgrades. Kevin Warwick was the first person to implant an RFID chip into his arm, which connected him to systems at the University of Reading, allowing him to open doors and turn lights on or off thanks to different ambient scanning sensors. He even linked his nervous system to his wife’s, connecting their sensations.

Stelarc’s third arm was an early instance of body extension. Today this approach is applied to people with disabilities (mostly amputees). [16] There are other body hacking examples that not only replace a missing sense or body part but also create new sensorial experiences. Neil Harbisson, who has achromatopsia (complete color-blindness), deals with his condition using an antenna implanted in his skull, with a color sensor that transforms light waves into sound waves; with this system, he can hear colors. When Harbisson talks about the cyborg experience and hacking the body, he has mentioned the possibility of being hacked. [17]

Other examples include Body Modification Artists who, since the late nineties, have been refining fingertip magnetic implants, which have become a ‘common’ body modification (over a thousand users) that enables the wearer to sense electromagnetic fields. [18] These implants enhance perception and the way wearers work and learn with their hands. Tim Cannon, one of the founders of the biohacking collective Grindhouse Wetwares, has, meanwhile, been enhancing his senses to distinguish between alternating and direct current in electricity. [19]

Within the new media arts, artists have been pushing the boundaries of body hacking as an art form. Jalila Essaïdi explores how to modify the properties of human skin in bulletproof skin, embedding transgenic spider silk in the skin and testing the new material’s resistance to a bullet. [20] With this work, she takes the conversation to the social, political, ethical, and cultural issues surrounding safety in a world where new biotechnologies are embedded in our everyday practices.

bulletproof skin, 2014, Jalila Essaïdi, bio art. Image courtesy of the artist. (Used with permission.)


A different example of body hacking comes from the artist duo Marion Laval-Jeantet and Benoît Mangin, in May the horse live in me. In this work, Laval-Jeantet was injected with horse blood plasma. In the months preceding the performance, she built up her tolerance to the foreign animal bodies by being injected with horse antibodies. [21]


With new sensors and computational systems, our houses can become home-automated or ‘domotic’ spaces. These environments can communicate through their informational devices among themselves or with their users (creating relationships between doors, lights, fridge, car, heating, music system, etc., and even the human body using wearable ID devices or anthropometric recognition). Philips laboratories coined the term “ambient intelligence” to describe these environments. [22] For example, a modified fridge that only opens after the user smiles can ultimately alter our mental and behavioral condition. [23]
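The smile-gated fridge [23] can be reduced to a simple control loop: a classifier scores each camera frame for a smile, and the latch releases only when the score stays high across several consecutive frames. The sketch below is a hypothetical reconstruction of that logic, not the cited product’s actual implementation; the threshold and debounce values are illustrative assumptions.

```python
def should_unlock(smile_scores, threshold=0.8, hold_frames=3):
    """Release the fridge latch only after `hold_frames` consecutive
    camera frames whose smile-classifier score exceeds `threshold`.
    The debounce keeps a single noisy detection from opening the door.
    (Hypothetical logic; all parameter values are illustrative.)"""
    consecutive = 0
    for score in smile_scores:
        consecutive = consecutive + 1 if score > threshold else 0
        if consecutive >= hold_frames:
            return True
    return False
```

A fleeting grin registered in a single frame is ignored, while a smile held for three frames unlocks the door; it is exactly this kind of hidden behavioral rule, enforced by an appliance, that can ‘alter our mental and behavioral condition.’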

Likewise, artists have been exploring for years how we react to systems that ask us to display facial behaviors or emotions, forcing us to engage with machines as we would usually do only with living creatures. In Brian Knep’s Big Smile [24] an archetypal smiley face blinks, looks at viewers, and smiles only when no one is looking directly at it. By smiling only when no one is looking, the piece is disdainful of the viewer’s participation, forcing the viewer to engage in reciprocal recognition with the device. Meanwhile, Ken Rinaldo’s Paparazzi Bots decide to photograph some people while ignoring others, basing their decision on whether the viewer is smiling or, for example, on the shape of their smile. [25]

Big Smile, 2003, Brian Knep, installation. Image courtesy of the artist. (Used with permission.)


Paparazzi Bots, 2011, Ken Rinaldo, robotic installation. Image courtesy of the artist. (Used with permission.)


Users can also modify the environment. Lucy & Bart create wearable outfits that explore the space between the body and its near environment. [26] Frisson is a body suit with LEDs that light up according to the wearer’s state of excitement, while the Blush Dress is equipped with sensors that respond to changes in the wearer’s emotions and project them onto the outer textile.
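Garments like Frisson map an arousal signal onto light. A minimal sketch of such a mapping, assuming a galvanic-skin-response sensor as the arousal proxy (the sensor scale, baseline and saturation values here are illustrative assumptions, not Lucy & Bart’s actual design):

```python
def led_level(gsr_microsiemens, baseline=2.0, span=6.0, max_pwm=255):
    """Map a skin-conductance reading (a common arousal proxy) to an
    8-bit LED PWM value. Readings at or below the wearer's resting
    baseline give 0; readings `span` units above baseline saturate
    at full brightness. (Illustrative mapping, not the artists' design.)"""
    excess = max(0.0, gsr_microsiemens - baseline)
    return min(max_pwm, round(excess / span * max_pwm))
```

At rest the suit stays dark; strong excitement drives the LEDs to full brightness, making an internal state legible on the body’s surface.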

Like Warwick, Erin Gee connects her bodily impulses to the ambient environment. In the cybernetic musical performance work Swarming Emotional Pianos (2012–2014) she uses microneurography (a microelectrode inserted into a nerve to record its electrical activity) to control a set of mobile robot performers. [27]

A New Conceptual Framework: Some Conclusions

The possibility of hacking the body radically changes the notion of ‘intimacy’ and blurs the distinction between self and others. Human bodies are increasingly being extended into technological domains as well as upgraded with technological devices. In order to merge these two notions, we suggest a new concept: ambient wetware. This idea encompasses humans with upgraded bodies as well as humans performing their actions within extended artificial ambient intelligences. In a future of bodies mixed with intra- as well as extra-corporeal devices, it is reasonable to think about the consequences of the hackability of these beings or systems.


  1. L. Fadiga, et al. “Visuomotor Neurons: Ambiguity of the Discharge or ‘Motor’ Perception?” International Journal of Psychophysiology 35 (2000): 165–177.
  2. Coursera,“Emotions: A Philosophical Introduction,” accessed March 26, 2015, https://www.coursera.org/course/emotions
  3. Antonio Damasio, The Feeling of What Happens: Body, Emotion and the Making of Consciousness (London: Vintage, 1999).
  4. Paolo Legrenzi and Carlo Umiltà, Neuromania: On the Limits of Brain Science (Oxford: Oxford University Press, 2011).
  5. Jordi Vallverdú, “Computational Epistemology and e-Science: A New Way of Thinking,” Minds and Machines 19, no. 4 (2009): 557–567.
  6. David Casacuberta and Jordi Vallverdú, “E-Science and the Data Deluge,” Philosophical Psychology 27, no. 1 (2014): 126–140.
  7. Lawrence Barsalou, “Grounded Cognition,” Annual Review of Psychology 59 (2008): 617–645.
  8. Rolf Pfeifer, University of Zurich, accessed March 20, 2015, http://www.ifi.uzh.ch/ailab/group/professors/rolfpfeifer.html; https://www.youtube.com/watch?v=mhWwoaoxIyc/
  9. Jordi Vallverdú, ed., Thinking Machines and the Philosophy of Computer Science: Concepts and Principles (Hershey, PA: IGI Global Group, 2010).
  10. “José Delgado, Implants, and Electromagnetic Mind Control,” accessed March 26, 2015,
  11. Ken Goldberg, The Robot in the Garden: Telerobotics and Telepistemology in the Age of the Internet, ed. (Cambridge, MA: MIT Press, 2000). See also: http://goldberg.berkeley.edu/garden/Ars/; http://goldberg.berkeley.edu/art/ (accessed March 20, 2015).
  12. Hubert Dreyfus, “Telepistemology: Descartes’s Last Stand,” in The Robot in the Garden: Telerobotics and Telepistemology in the Age of the Internet, ed. Ken Goldberg (Cambridge, Mass. MIT Press, 2000).
  13. Casey Reas and Ben Fry’s websites, accessed March 20, 2015, http://reas.com/; http://benfry.com/; https://processing.org/
  14. “The NIH Human Microbiome Project,” Genome Research 19, no. 12 (2009): 2317–2323.
  15. Byungjo Kim, Jiung Yu, and Hyogon Kim, “In-Vivo NFC for Remote Monitoring of Implanted Medical Devices with Improved Privacy,” SenSys 2012: Proceedings of the 10th ACM Conference on Embedded Network Sensor Systems, Toronto, November 6–9, 2012, 327–328.
  16. Aviva Rutkin, “Bionic Arm Gives Cyborg Drummer Superhuman Skills,” New Scientist, March 2014, http://www.newscientist.com/article/dn25142-bionic-arm-gives-cyborg-drummer-superhuman-skills.html#; Michelle Starr, “Amputee Simultaneously Controls Two Prosthetic Arms with his Mind,” cnet.com, December 17, 2014, accessed March 26, 2015, http://www.cnet.com/news/amputee-simultaneously-controls-two-prosthetic-arms-with-his-mind/
  17. Ivan Barajas, “Skull-Implanted Antenna Device Allows Color-Blind Man To See Color,” NewEgg Blog, April 16, 2014, accessed April 15, 2015, http://blog.newegg.com/skull-implanted-antenna-device-color-blind-man-color/
  18. Dann Berg, “I Have a Magnet Implant In My Finger,” Gizmodo, March 22, 2012, accessed March 26, 2015, http://gizmodo.com/5895555/i-have-a-magnet-implant-in-my-finger
  19. Grindhouse Wetware website, http://grindhousewetware.com/ (accessed March 26, 2015).
  20. Jalila Essaïdi’s website, http://jalilaessaidi.com/; http://vimeo.com/65254981 (accessed March 26, 2015).
  21. “Prix Forum I Hybrid Art – Art Orienté Objet,” accessed March 26, 2015, https://www.youtube.com/watch?v=dxztkwiAJCY
  22. Philips website, “Ambient Intelligence,” accessed March 26, 2015, http://www.research.philips.com/technologies/ambintel.html
  23. Samantha Murphy Kelly, “This Refrigerator Only Opens if You Smile,” Mashable, October 10, 2012, accessed March 26, 2015, http://mashable.com/2012/10/10/smile-refrigerator/
  24. Brian Knep’s website, “Big Smile, 2003,” accessed March 26, 2015, http://www.blep.com/works/big-smile/; https://www.youtube.com/watch?v=SgruVmGw0sI
  25. Ken Rinaldo’s websites, http://www.paparazzibot.com/; http://kenrinaldo.com; “The Paparazzi Bots by Ken Rinaldo,” https://vimeo.com/32243812 (accessed March 26, 2015).

  26. Lucy & Bart’s website (Lucy McRae & Bart Hess), http://lucyandbart.blogspot.com/; Regine Debatty, “Lucy McRae’s talk at NEXT,” We Make Money Not Art, December 4, 2006, accessed March 20, 2015, http://we-make-money-not-art.com/archives/2006/12/lucy-mcraes-tal.php#
  27. Erin Gee’s website, accessed March 20, 2015, http://www.erin-gee.com/


Florence Gouvrit is a Mexico City/Columbus-based new media artist. She has a BFA from “La Esmeralda” in Mexico City, an MPhil in Science and Technology from the National Autonomous University of Mexico (UNAM), and an MFA in Art & Technology from Ohio State University. From 2000 to 2008 she was part of the virtual reality and art theory research labs in the Multimedia Center in Mexico City. Her work is based on synthetic emotions, artificiality, human-machine interaction, empathy, culture and technologically mediated behavior. Her work utilizes interactive video installations, robotics, data visualization and digital fabrication. She has received the FONCA Fellowship for young creators twice (2002, 2004) and has shown her work and research in Mexico, Spain, and the United States. She received an Ohio State GTA award, a FONCA Study Abroad grant, and a GCAC grant. Since 2011, she has been teaching at the Ohio State University in the Art & Technology program in the Department of Art, and since 2014 at the Columbus College of Art and Design.

Jordi Vallverdú, Ph.D., M.Sci., B.Mus, B.Phil, is a tenured professor at Universitat Autònoma de Barcelona (Catalonia, Spain), where he teaches Philosophy and History of Science and Computing. His research is dedicated to the epistemological, cognitive, ethical, and educational aspects of the Philosophy of Computing, Science and AI. He is Editor-in-Chief of the International Journal of Synthetic Emotions (IJSE). He was a keynote speaker at ECAP09 (TUM, München, Germany), EBICC2012 (UNESP, Brazil) and SLACTIONS 2013 (Portugal). He has written eight books as author or (co)editor, among them Bioética computacional (México: FCE, 2009), Handbook of Research on Synthetic Emotions and Sociable Robotics (USA: IGI Global Group, 2009), and Thinking Machines and the Philosophy of Computer Science: Concepts and Principles (USA: IGI Global Group, 2011). In 2011 he won a prestigious Japanese JSPS fellowship to conduct his research on computational HRI interfaces at Kyoto University.