Drosera Obscura

Fall 2025

Drosera Obscura is a deeply immersive XR project that dissolves the boundaries between the organic and the digital through a multi-sensory environment shaped by sight, sound, scent, and touch. Set in a speculative post-human bogscape, the experience invites participants to explore a living world of animatronic flora, reactive soundscapes, and tactile interfaces that blur the line between virtual and physical. Inspired by evolved carnivorous plants and their symbiotic relationships with sound and synthetic biology, Drosera Obscura transforms passive observation into active participation. Participants influence the environment in real time: disturbing the digital foliage releases subtle fragrances, shifting light patterns, and spatialized sound. The experience expands VR outward, turning an often isolated medium into a performative, theatrical ecosystem that responds to human presence with intimacy and beauty.

VR Experience

Below is an example of the VR experience in Drosera Obscura.

Performance

In October 2025, we gave a performance at the Center for the Arts at Virginia Tech. It was held in the Cube, a room equipped with surround sound and projection-mapping technology. We made full use of both, creating a 15-minute performance that combined animatronic movement, puppetry, vocal performance, projection-mapped environments, and an immersive surround-sound score.

My Role in the Project

In this project, I worked on the construction of the animatronic, the communication between the animatronic and the game engine, and the supporting visuals for the performance.

Establishing communication between the animatronic and Unreal Engine was essential so that the model's movements in-engine matched the movements of the physical model. I helped build the Wi-Fi communication between Unreal Engine and PlatformIO, an open-source, cross-platform ecosystem for embedded software development that provides a build system, libraries, and frameworks for a wide range of microcontrollers and development boards. From PlatformIO, we programmed an ESP32 board to drive the animatronic's motors, and we used encoders to read back the motor angles so that the in-engine model could mirror the animatronic's actual pose.
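The general shape of that link can be sketched with a small example. The packet layout, joint count, and function names below are hypothetical (the real message format lives in the project code); this only illustrates the pattern of packing target angles into a UDP datagram for the board and unpacking the encoder angles it reports back:

```python
import socket
import struct

# Hypothetical packet layout: one float32 angle (in degrees) per joint,
# little-endian, in a fixed joint order. The real project's format may differ.
JOINT_COUNT = 3

def pack_target_angles(angles):
    """Pack target joint angles (degrees) into a UDP payload."""
    if len(angles) != JOINT_COUNT:
        raise ValueError(f"expected {JOINT_COUNT} angles, got {len(angles)}")
    return struct.pack(f"<{JOINT_COUNT}f", *angles)

def unpack_encoder_angles(payload):
    """Unpack the encoder readings the board sends back (same layout)."""
    return list(struct.unpack(f"<{JOINT_COUNT}f", payload))

def send_pose(sock, board_addr, angles):
    """Send one pose update to the ESP32 (fire-and-forget over UDP)."""
    sock.sendto(pack_target_angles(angles), board_addr)

if __name__ == "__main__":
    # Round-trip the packing locally as a sanity check; the values chosen
    # here are exactly representable as float32, so they survive the trip.
    payload = pack_target_angles([90.0, 45.5, -10.0])
    print(unpack_encoder_angles(payload))
```

UDP suits this kind of pose streaming because updates arrive many times per second and a dropped packet is simply superseded by the next one; the encoder readings flowing back let the engine correct drift rather than trusting the commanded angles.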

I also helped construct the animatronic, creating parts such as a rotating base plate, wheel stabilizers, and the handle used to control the puppet.

Finally, for the performance, I created the 15-minute animations and used projection mapping to project them onto scrims built by other members of the project.

Collaborators
Jason Hodge (Student in Creative Technologies, Virginia Tech)
Sydney Dechow (Student in Industrial Design, Virginia Tech)

Thomas Tucker (Professor in Creative Technologies, Virginia Tech)
Matthew Swarts (Georgia Tech Research Institute)
Joseph Kubalak (DREAMS Lab, Virginia Tech)
Dongsoo Choi (Professor in Creative Technologies, Virginia Tech)
Tohm Judson (University of Washington, US)
Brook Kennedy (Professor in Industrial Design, Virginia Tech)
Yamin Xu (Assistant Professor in Digital Arts, Bowling Green State University)
