#consumer electronics #gaming #human-machine interaction #learning #storytelling #interfaces #social science #machine learning #affective computing #ethics #industry #interactive #computer science #entertainment #art #artificial intelligence #design #creativity #history #technology #archives
HoloLens, Kinect, RealSense D315, Volumetric Capture Stage, Scatter’s Depthkit, Looking Glass, Leap Motion Controller, Lighting Equipment, various ephemera of volumetric media history.
Graduate student assistants Armighan Behzad and Huy Ngo collaborated on and assisted in the development and presentation of this project.
In the Spring of 2019, I was invited by the Science Museum of Western Virginia to develop a Holo-Booth, a project exploring volumetric capture in a public context, hosted during an exhibit opening at the museum.
Using a contemporary mixed reality headset, the Microsoft HoloLens, along with the volumetric capture software Depthkit, I arranged 3D stereo cameras to create a photo-shoot-like environment. This setup enabled us to rapidly capture holograms of visitors to the opening exhibit, each of whom was prompted to give a short, improvisational performance.
Capturing real-time depth video and viewing it wirelessly, streamed directly into mixed reality, presented a significant technical challenge. By implementing a novel workflow that streamed captured footage from a live Unity application onto the HoloLens 1, we were able to immediately project each holographic performance into the environment of the museum.
Each 3D holographic capture, once loaded into the HoloLens, remained visible in the environment of the Science Museum throughout the opening.
In addition to engaging participants with the HoloLens, we used a Looking Glass volumetric display to demonstrate the diversity of modern holographic hardware.
As part of my ongoing interest in engaging audiences with the media archaeology of technology, I developed a kit containing early stereoscopic displays, vintage viewfinders, and a range of historic image-making technologies.
This project used a number of emerging media in a socially engaging way, encouraging the public to experience new technology firsthand while broadening their awareness of where that technology sits in the history of media and culture.
Captures were made with a combination of contemporary volumetric capture hardware, including a Kinect V2 and an Intel RealSense depth camera, and each capture was imported from Depthkit into the Unity game engine.
Each capture was then displayed on both the HoloLens and the Looking Glass volumetric display.
Mixed Reality Performance
For the event, approximately 30 participants were captured, each creating a short, improvisational performance.
Each 3D video was then edited and looped into a holographic performance that participants could view through the HoloLens, seeing themselves among the collections of the museum.
The resulting work, a composite of all the participants' performances, became a holographic re-creation of the entire event.
When viewed through a HoloLens, the museum filled with all of the attendees, permanently on loop, in a spatially aware holographic simulation.
Mixed Reality Independent Study
In the Spring of 2020, as the pandemic took hold, Aisling Kelliher and I oversaw an independent study that made use of the comprehensive dataset of depth video captured over the preceding two years.
Graduate student Nikita Shokov and I developed a HoloLens application that positioned the depth videos throughout the Cube, a three-story virtual reality research space.
We also created a fully virtual version for VR; a 360° animation of the VR experience is featured to the right.