CAVE2 Hybrid Reality Environment

RESEARCH | 2011-2013


Maxine Brown, Andrew Johnson, Jason Leigh, Lance Long, Tom Peterka, JD Pirtle, Dana Plepys, Luc Renambot, Daniel Sandin, Jonas Talandis, Alan Verlo

TL;DR? Here's a link to a short documentary on CAVE2.

CAVE2, the next-generation large-scale virtual-reality environment, is a hybrid system that combines the benefits of scalable-resolution display walls and virtual-reality systems to create a seamless 2D/3D environment. It supports both information-rich analysis and virtual-reality simulation exploration at a resolution matching human visual acuity.

Alongside the other developers listed above and a group of BS, MS, and PhD students, I participated in every step of the design, construction, and implementation of CAVE2. In addition, I designed and implemented the sound system, both hardware and software. The 22-channel audio system in CAVE2 gives developers the ability to place sonic objects matching visual events in 2D Ambisonic space.

I wrote the CAVE2 Sound Server in SuperCollider, an open-source programming language and environment for real-time audio synthesis and algorithmic composition. A primary goal at the outset of software development was to provide a simple set of commands for interacting with the CAVE2 Sound Server, facilitating playback and positioning of sound objects in virtual space; developers building for CAVE2 access these commands via a C++ sound API. In addition to this functionality, real-time audio synthesis is also possible.