Associate Professor, HR Hope School of Fine Arts
Senior Research Scientist, Pervasive Technology Institute
Indiana University, Bloomington
My artworks capture intimate moments of belonging, both as it relates to the self and to the collective consciousness. I draw and paint automatically, without forming any a priori or specific intention, witnessing my hand’s movement across the page or computer screen. This methodology is a type of active imagination that captures a stream-of-consciousness moment. From the drawings, I decide what to develop as the foundation for paintings and virtual environments. The paintings are a type of digital surrealism that evokes the power of thoughts and mental situations. The virtual environments are non-linear and non-hierarchical, much like the generative unfolding of a dream. The artwork includes 3D rapid-prototype sculptures, interactive projections, and virtual reality environments for the CAVE, a stereoscopic 3D theater. The interactive projections use motion tracking, facial detection, and/or body detection.
Figuratively Speaking is an immersive, interactive 3D stereo virtual reality art environment based on a series of paintings featuring figures that appear predominantly as faces. The paintings served as a method for designing the virtual portrait as an environment. The body parts and facial features provide a means of wayfinding through the landscape. Navigation occurs through a subversive confrontation and communication with the figures. The faces speak when they are approached, so interacting and speaking with the figures enhances the emotional aesthetics of the experience.
Poke holes in my thoughts (swimming at the edge) simulates the surface of a pond where fish glide idly by and vegetation slowly sways to invisible currents below the surface. Visitors can see their reflection on the smooth surface, touch the water to make it ripple, stir the water to make waves, or even jump in and scare the fish. The visitors’ slightest touch triggers water-drop audio events. The pond is mapped with a range of low to high notes based on the vertical and horizontal locations of the hands and head, adding an interactive musical experience. If a visitor physically crosses the virtual surface of the pond, the character of their reflection changes to simulate being partially submerged underwater, and the visitor becomes part of the pond and of the artwork itself.
Margaret Dolinsky has been working with virtual environments since 1995, creating interactive art experiences that have been exhibited at SIGGRAPH, Ars Electronica, the ICC in Tokyo, and the Walker Art Center. She was commissioned by the Indianapolis Museum of Art to create “Cabinet of Dreams,” a VR experience of Chinese antiquities. She has had several exhibits in China, including her piece “Emotable Portraits,” which uses facial detection. She designed interactive video for the American Opera Theater’s production Annunciation + Visitation: Operatic Projections of her sexual insight. Her recent work involves digital projections for opera and experimental film. Dolinsky is co-chair of the IS&T/SPIE Engineering Reality of Virtual Reality conference with Ian McDowall of Fakespace Labs.