Jesse Allison
Associate Professor in Experimental Music & Digital Media, Louisiana State University
Vincent A. Cellucci
Poet and CxC Coordinator for the College of Art + Design, Louisiana State University
Derick Ostrenko
Assistant Professor in Digital Art, Louisiana State University
Keywords: interactive, experience, performance, creative, data mining, poetry, TED talk, interdisciplinary, new media, collaboration
One of the most popular new media platforms for the proliferation and distribution of ideas related to technology, entertainment, and design is the TED Talk. For its web-based video series, the nonprofit TED hosts lecturers from various disciplines who deliver “ideas worth spreading” to an increasingly global community. TED solicits, curates, and produces its talks, resulting in high-quality media that is increasingly used in education for reasons ranging from subject matter to public speaking skills. But perhaps what drives the attraction is that many people favor streaming video as a form of collective consumption. The TED brand has become so widely recognized for its curated speeches that the organization has blossomed into independently organized (designated by the x) TEDx events. Yet even with over 10,000 local TEDx events, a share of them taking place at or affiliated with universities, TED Talks remain relatively absent from the academic publishing world; they have certainly found their way to campuses, however, where they are consumed by today’s digital-native students.
More specifically, the TED Talk, since TED’s inception in 1984, has become a fairly standardized new media form, one that is even common in American households given its wide distribution through the TED television application, Netflix, and the ever-popular YouTube. The genre is likewise fairly standard: a speaker focuses on one central idea, framed in a compact yet entertaining personal narrative, while maintaining an air of optimism. Given the replication of its form and the high prices and celebrity linked to the flagship TED conference, the brand is not without its detractors. Benjamin Bratton wrote an essay criticizing the “oversimplification” he finds in the talks and admonishing the public to be suspicious of the “solutions” they offer. Joey Watson, co-founder of TEDxLSU and founder of TEDxGSW, claims in his dissertation on the rhetoric of TED that the nonprofit “employs current uses of digital media technologies to manufacture its ethos of expertise within public culture.” Watson believes that higher education pedagogy needs to catch up to the new generation of learners, a shift that resembles the ongoing changes in news delivery mechanisms, participatory culture, and the quality and credibility of content within journalism. The purpose of this particular essay, however, lies beyond the critical debate surrounding TED. Rather, its project is to introduce a particular artistic mode that reworks the very form of the medium: interactive capabilities that creatively reconstitute and extend the audience experience of the traditional TED Talk.
We were approached to give a performance at TEDxLSU 2016 because of a pre-existing interdisciplinary project, an interactive poetry app, that we had unveiled at a series of campus performances the previous academic year. The collaboration—consisting of Jesse Allison (an experimental music professor), Derick Ostrenko (a digital arts professor), and Vincent Cellucci (a working poet and education professional)—drew upon our collective expertise and digital media fluencies to “make available for musical purposes any and all sounds that can be heard . . . [and introduce] mediums for the synthetic production of music.” Interactive web and mobile technologies served as the interstices for combining our overlapping disciplines of art, music, and poetry to realize the kind of experimental music performance Cage referred to in his credo—we sought to repurpose an analogous vision for making media expansive within performance.
The TEDxLSU performance, Diamonds in Dystopia, was our second attempt to create a technological talisman that arrives at the ability to write poems. The first iteration was an interactive (solely audio and visual) web poetry app, to which we added a “creative data mining” functionality central to the new platform. This update gave the performer the ability to receive data from the audience interaction and generate improvisational text to incorporate into the performance. Again akin to Cage’s vision: “This means that each performance of such a piece [experimental or fragmented] of music is unique, as interesting to its composer as to others listening. It is easy to see again the parallel with nature, for even with leaves of the same tree, no two are exactly alike.” The remainder of this article details the background development, the technical project specifications (including the functionality we refer to as “creative data mining”), and the future implications of incorporating interactive media into performances that challenge people’s perceptions and expectations for digitally experiencing the arts.
Communication Across the Curriculum (CxC) is a unique interdisciplinary program at Louisiana State University that focuses on students in the disciplines improving their skillsets not only in written and spoken modes of communication but also in visual and technological modes. As new faculty appointments within the Center for Computation & Technology (CCT), we (Professors Jesse Allison and Derick Ostrenko) established a pedagogical alliance within the parameters of CxC. Both CCT, which functions as a faculty research collective, and CxC, a teaching collective (and champion of TEDxLSU), foster interdisciplinary, inventive collaborations. Vincent Cellucci, the College of Art + Design CxC Coordinator, has worked with these faculty on course development for several years. While we had mutual admiration for each other’s art practices and loosely conversed about ideas and the potential for collaboration, it took until the summer of 2015 for the three of us to find time for meaningful collaboration.
When the trio first began to brainstorm about the potential for an interactive poetry app, the guiding phrase we used was “a guitar pedal for poets.” Although this verbiage was beneficial, for it helped us isolate and prioritize performance as a key function of the application, it implied a digital instrument controlled solely by the poet, which shortchanged a second and equally important priority: interactivity. We knew that we wanted mobile devices (given their ubiquitous presence in American society) to contain a user interface through which each audience member could interact with the words of an original poem. We also wanted a theater view to visualize the audience’s interactivity. Finally, we determined there would need to be a controller for the creators and/or performers to start and stop the performance and push sound effects, as well as the more immediately necessary ability to maneuver through sections of a long poem that could not be displayed on a single screen without disruptive scrolling.
The first iteration of our interactive poetry application was titled Causeway (after a poem of the same title that Cellucci wrote about post-Katrina Louisiana), and it enabled audience members to individually trigger sound effects of pre-recorded lines of poetry. We took full advantage of the 92-speaker constellation sound system of CCT’s theater, which allowed audience members to select their location in the venue and—given the sheer quantity of directional speakers—increased the likelihood that they would recognize a direct correspondence between their interactive taps on poetry lines and the sound effects generated. When developing the concept, we were inspired by a number of early collaborative mobile device pieces, in particular Dialtones (A Telesymphony), created in 2001 by Golan Levin, Gregory Shakar, Scott Gibbons, et al., in which audience members create a symphony using their own cell phones.
In addition to establishing a causal sound relationship, Causeway was designed to provide a visual one. Each audience member was randomly assigned a color, which underlined their text interactions on both the mobile UI and the theater visualization; the latter begins as a blurry, visually obscured version of the poem. The more taps an individual line received, the more clearly it read, until it became fully legible. This revealing process takes ten individual user taps to clarify the text, after which subsequent interactions return the visualization to the blurred state, producing a quilt-like effect of the audience authoring the theater-visualized poem through their interactive selections. This kind of interactivity gave rise to the catchphrase we used as the sole instructions for the audience: “Tapping is the new snapping”—a play on the performance poetry tradition of snapping after favorite or profound lines. We decided this minimal description encouraged exploration of the poem’s language and the resulting effects, and since the user’s palette of interactivity was limited to the text, sound, and the visual process of clearing and obscuring, we considered it a fairly restrained system. The intended reaction was that audience members would immerse themselves in the language of the poem, essentially tapping the lines they “liked”—a well-nurtured, albeit limited, reaction to mobile interactivity conditioned by their experience with social media networks like Facebook. The digital effects created a multi-sensory environment, unique to each performance, culminating collectively in sound and visualization that the attendees had direct agency in enacting or sustaining.
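The reveal-and-reblur behavior described above can be sketched in a few lines. This is a minimal illustration in our own terms, not the production code; the class and names are hypothetical, assuming only the mechanics stated here (ten taps clear a line, further taps re-obscure it).

```python
# Hypothetical sketch of Causeway's tap-to-reveal logic: each line's
# blur level falls with taps until fully legible at ten, after which
# further taps return it toward the blurred state.

CLARIFY_TAPS = 10  # taps needed to fully reveal a line

class PoemLine:
    def __init__(self, text):
        self.text = text
        self.taps = 0

    def tap(self):
        self.taps += 1

    def blur(self):
        """Blur amount in [0.0, 1.0]: 1.0 is fully obscured, 0.0 legible.

        The level oscillates: ten taps clear the line, the next ten
        re-obscure it, and so on (the quilt-like effect).
        """
        phase = self.taps % (2 * CLARIFY_TAPS)
        if phase <= CLARIFY_TAPS:
            return 1.0 - phase / CLARIFY_TAPS   # clearing
        return (phase - CLARIFY_TAPS) / CLARIFY_TAPS  # re-blurring

line = PoemLine("the causeway holds the lake at bay")
for _ in range(10):
    line.tap()
print(line.blur())  # 0.0, fully legible after ten taps
```

Each audience member's assigned color would then be drawn at an opacity proportional to their share of a line's taps, though that rendering detail is beyond this sketch.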
We noticed a range of engagement after performing the piece several times. For instance, a trigger-happy audience member who repeatedly taps every word or multiple words creates an aural cacophony in that section of the performance, while a more selective audience member, tapping only the words that resonate most, has a more semantic experience of the interactivity. It also must be stated that there was a slight learning curve before individual audience members made the causal association between tapping, sound, and visualization. Some audience members, for either technical or personal reasons, preferred simply enjoying the reading and multimedia landscape; the interactivity element was not the major attraction for them, and they found the effects contributed by others sufficient for their experience. We anticipated this and welcomed it as a margin of safety, considering the quantities of users and bandwidth expected for each performance. Lastly, the technology enabled us to track user analytics to assess participation. For our most recent performance of Causeway, we logged 5,805 taps from an audience of 70, with 33 users participating at least once, averaging 176 taps per user; one user tapped 590 times during the approximately ten-minute window.
Causeway provided a concrete series of performances as well as a pathway toward an online container version and a more traditional fine arts installation version of the content, one retaining the interactivity but not the performative aspect. After informally evaluating the achievements of the first iteration, we decided the second should parallel and embody a multisensory experience akin to the imaginative experience of writing a poem. As a result, human mediation was introduced as a constraint in the second iteration; it was also needed to filter the improvisation for an exponentially larger audience (around 600 people). To handle this extra load, we wanted to take advantage of HIVE (High Performance Visualization and Electroacoustics), a new 448-core cloud computing system being built by Allison and Ostrenko.
The second collaborative interactive web poetry app, developed for TEDxLSU and titled Diamonds in Dystopia, aggregated 2,200+ TED transcripts, which acted as the found text from which new stanzas were generated using a Markov chain process. The RiTa “toolkit for computational literature” by Daniel Howe was an inspirational launching point for ideating the text-mining operations and Markov-based generation. The expanded level of interactivity featured a text database that generated language sent to a performer, and it is this process we mean when we refer to “creative data mining.” Our process was similar to that of other artists working with an abundance of freely available data. Here we followed Lev Manovich’s directive: “While the availability of large digitized collections of humanities data certainly creates the case for humanists to use computational tools, the rise of social media and globalization of professional culture leave us no other choice.” We augmented our software to enable a succinct recombination of massive amounts of language as source material, pushing the boundaries of the traditional TED Talk by adding another exciting and popular form of new media, interactivity, to the mixture of mediums currently found online.
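The Markov chain process works roughly as follows: the source text is scanned to record which words follow which, and new text is produced by walking those recorded transitions. The toy sketch below is our own simplified stand-in, not the RiTa-based production code; the function names and the tiny sample transcript are hypothetical.

```python
import random
from collections import defaultdict

# A minimal bigram Markov chain over transcript text: build a model of
# word-to-successor transitions, then walk it to emit a new line.

def build_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, seed_word, length=8, rng=random):
    """Walk the chain from a seed word, choosing successors at random."""
    out = [seed_word]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break  # dead end: the last word never had a successor
        out.append(rng.choice(successors))
    return " ".join(out)

transcript = "ideas worth spreading are ideas worth sharing with the world"
model = build_model(transcript)
print(generate(model, "ideas"))
```

Because repeated word pairs appear multiple times in the successor lists, frequent transitions in the source transcripts are proportionally more likely in the generated stanzas, which is what lends the output its found-text flavor.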
Performance-scaled interactivity, specifically using the mobile devices of an audience comprising hundreds of users, replaces an individualized, output-only information dissemination system with one capable of creative input and collaboration. The user interaction enables a parallel creative bond to form between the audience experience and the performing poet through the found text methodology, a common practice among contemporary poets in which language from separate contexts is integrated into a poem. Cellucci crafted the seed poem predominantly using TED Talk transcripts as found texts, subjected to the same Markov chain creative process. By tapping the individual words of the poet’s original text that resonate with them, audience members collectively pull transcripts from the database associated with their word choices. This text fodder is recombined and sent as a flurry of generated stanzas to the poet, who improvises them into the poem on stage, creating and archiving an event-specific version of the poem and performance. The collaborative text contributed by the audience becomes incorporated into the poet’s reading and the theater projection. The final poem, crowd-sourced, remixed from transcripts, and improvised by the poet, at times leads the participants to “read the machine rather than the text.”
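The retrieval step of this “creative data mining” can be illustrated with an inverted index: each word maps to the transcripts containing it, so a tapped seed-poem word pulls all associated source texts at once. This is a simplified sketch under our own assumptions (the production system queried the full 2,200+ transcript corpus; the ids and sample texts here are invented).

```python
from collections import defaultdict

# Build an inverted index from transcripts: word -> set of transcript ids.
# Tapping a word in the seed poem then resolves, in one lookup, to every
# transcript that can feed the Markov generation for that word.

def build_index(transcripts):
    """Map each lowercase word to the ids of transcripts containing it."""
    index = defaultdict(set)
    for talk_id, text in transcripts.items():
        for word in text.lower().split():
            index[word].add(talk_id)
    return index

transcripts = {
    "talk_001": "Cities are engines of creativity",
    "talk_002": "Creativity flourishes under constraint",
    "talk_003": "Oceans regulate the climate",
}
index = build_index(transcripts)
print(sorted(index["creativity"]))  # ['talk_001', 'talk_002']
```

In performance, the matched transcripts would then be fed to the Markov process and the resulting stanzas streamed to the poet, completing the loop from audience tap to improvised line.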
Much as news coverage is live or streaming, this web app is a new media performance enhancer that enables a live stream of poetry, interactively taking the audience into the sensory decisions and experience of creating a poem collectively. Together, the audience helps author the poetry performance while experiencing the instant gratification of interactivity, both aural and visual, tied to new media stimuli. The interactivity also triggers synthesized audio effects at varying pitches to provide a musical experience and contributes to the animation aesthetic of the visual projection of the poem. In the TEDxLSU performance, a background composition was accented by live voice synthesis triggered by the interactivity. These sound effects varied in pitch over time so that different chords and patterns emerged, an aural sensation similar to the visual, quilt-like effect of Causeway. In the next section, we share the technology behind this sensory experience.
Details of Technology
In terms of hardware, audience members use their own mobile devices connected to the internet via WiFi or a cellular network; these devices also add to the musical sound installation by contributing sound effects at a pitch that varies per user, synthesized voices of the tapped words, and pushed audio effects sent from the controller. As for serving the content, Diamonds relies on virtualized “infrastructure as a service” products such as OpenStack and Google Compute Engine. A venue’s projectors and speakers further visualize and sonify the media performance content for the audience in the theater.
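One way to realize the per-user pitch variation mentioned above is to hash each user id onto a musical scale, so every device sounds a distinct but consonant pitch when its owner taps. This is an illustrative mapping of our own devising, not the production synthesis code; the scale choice and function names are assumptions.

```python
import zlib

# Deterministically assign each user a MIDI pitch drawn from a major
# pentatonic scale, so that simultaneous taps from many devices land on
# consonant notes rather than arbitrary frequencies.

PENTATONIC = [0, 2, 4, 7, 9]  # major pentatonic degrees, in semitones
BASE_MIDI = 60                # middle C

def pitch_for_user(user_id, octaves=3):
    """Map a user id to a MIDI note; the same id always yields the same note."""
    slots = len(PENTATONIC) * octaves
    slot = zlib.crc32(user_id.encode("utf-8")) % slots  # stable hash
    octave, degree = divmod(slot, len(PENTATONIC))
    return BASE_MIDI + 12 * octave + PENTATONIC[degree]

# The same user always receives the same pitch across taps:
print(pitch_for_user("user-42") == pitch_for_user("user-42"))  # True
```

A stable checksum (rather than Python's built-in `hash`, which varies between runs) keeps a user's pitch consistent for the duration of a performance and across server restarts.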
Over the past year of collaboration, we have performed Diamonds in Dystopia at TEDxLSU and Causeway three times (at the CCT Katrina 10th Anniversary Symposium; at an experimental music showcase, Digital Divide; and most recently at the international conference on New Interfaces for Musical Expression in Brisbane, Australia). We have an upcoming performance of Diamonds in Dystopia at South by Southwest (SXSW) Interactive in March 2017. These opportunities helped us adapt the app to various media contexts (e.g., as a musical performance for NIME 2016, or exhibiting the container as an art installation for Louisiana Contemporary at the Ogden Museum of Southern Art in New Orleans) as well as to various subject or thematic contexts (e.g., the Katrina anniversary, TEDxLSU).
Future iterations could employ the creative data mining abilities built into Diamonds while opening up the system (and subsequently the context) so that performers can upload large collections of manuscripts and their own seed texts to enable future poetry and new media art performances. This way, one need not be a developer to perform heightened text collaboration, with the opportunity for generative input from an audience and a dynamic set of texts, or to create unique, collaborative new media poetry from mobile interactivity. Such a system would be attractive to contemporary poets, performing artists, and audiences seeking more engagement from readings, as well as to scholars curious about using language processing to creatively data-mine subtexts.
This essay outlines the evolution of several theoretical and technical questions we have been asking ourselves as researchers, artists, and performers as we open our creative workflows to interactive technologies. For example, what does a digitally engaging collaboration between musical, visual, and literary artists look like, and how might it evolve? How can interactive mobile technology be harnessed to benefit the presentation and experience of the visual, musical, or literary arts? In a post-human technological world, can art forms thrive on a level of audience interactivity previously impossible? Causeway and Diamonds in Dystopia are our first two drafts. While these applications animate our practices, they perhaps provide us less with answers than present us with new questions. We eagerly anticipate sharing more quandaries and discoveries with those interested in using technologies to integrate authors, audiences, and performers in new creative ways.
- Benjamin Bratton, “We Need to Talk about TED,” The Guardian, December 30, 2013, accessed January 30, 2017, https://www.theguardian.com/commentisfree/2013/dec/30/we-need-to-talk-about-ted.
- Joseph Watson, “Screening TED: A rhetorical analysis of the intersections of rhetoric, digital media, and pedagogy.” PhD dissertation, Louisiana State University, 2014.
- John Cage and Kyle Gann, Silence: Lectures and Writings, 50th Anniversary Edition, 2nd ed. (Middletown, CT: Wesleyan University Press, 2011), 4.
- Charles O. Hartman, Virtual Muse: Experiments in Computer Poetry, 1st ed. (Hanover, NH: Wesleyan University Press, 1996), 1.
- John Cage and Kyle Gann, Silence: Lectures and Writings, 50th Anniversary Edition, 2nd ed. (Middletown, CT: Wesleyan University Press, 2011), 11.
- Golan Levin, Gregory Shakar, Scott Gibbons, et al., “Dialtones (A Telesymphony)” (Ars Electronica, Linz, Austria, September 2001).
- Daniel C. Howe, “RiTa: Creativity Support for Computational Literature” (ACM Press, 2009), 205, doi:10.1145/1640233.1640265.
- Lev Manovich, “How to Compare One Million Images?,” in Understanding Digital Humanities, ed. D. Berry, 2012 edition (Houndmills, Basingstoke, Hampshire; New York: Palgrave Macmillan, 2012), 250.
- Roberto Simanowski, Digital Art and Meaning: Reading Kinetic Poetry, Text Machines, Mapping Art, and Interactive Installations (Minneapolis: University of Minnesota Press, 2011), 113.