Dream Sequence is based on a recurring dream that I have had in several variations. Entirely sound-based, the piece is made up of a database of story fragments and ambient sounds, which are triggered by users' movements through the space as tracked by an overhead camera.
Faster movement results in many sounds and story fragments overlapping, while slow movements produce a more measured, coherent experience.
In the dream, I am on foot, approaching a beach that I know I have seen before. My dreaming memory is of a magically beautiful place that I have always longed to return to. In the way of dreams, although I hurry towards my destination, I invariably wake before I set foot on the beach or glimpse the water over the dunes.
Sound output evolves over time, so that users who remain in the space and continue to interact are rewarded with a more complete story, and more diverse information. Motion tracking uses the Flob library for Processing.
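The mapping from movement speed to sound density described above can be sketched with simple frame differencing. This is an illustrative reconstruction, not the installation's actual code (which uses Flob's blob tracking in Processing); the frame data, threshold, and clip counts here are all hypothetical:

```java
// Illustrative sketch: maps the amount of motion between two camera frames
// to a number of simultaneously playing sound clips, mirroring how faster
// movement in Dream Sequence produces denser, overlapping audio.
public class MotionToSoundDensity {

    // Fraction of pixels (0.0-1.0) that changed between two grayscale
    // frames, where pixel values range 0-255.
    static double motionAmount(int[] prev, int[] curr, int threshold) {
        int changed = 0;
        for (int i = 0; i < prev.length; i++) {
            if (Math.abs(curr[i] - prev[i]) > threshold) changed++;
        }
        return (double) changed / prev.length;
    }

    // More motion means more overlapping story fragments (1..maxClips);
    // stillness yields a single, coherent voice.
    static int overlappingClips(double motion, int maxClips) {
        return 1 + (int) Math.round(motion * (maxClips - 1));
    }

    public static void main(String[] args) {
        int[] still = new int[100];                    // empty scene
        int[] moving = new int[100];
        for (int i = 0; i < 40; i++) moving[i] = 200;  // 40% of pixels changed

        double slow = motionAmount(still, still, 30);  // 0.0: no movement
        double fast = motionAmount(still, moving, 30); // 0.4: a person crossing

        System.out.println(overlappingClips(slow, 8)); // prints 1
        System.out.println(overlappingClips(fast, 8)); // prints 4
    }
}
```

In the actual piece, the motion estimate would come from Flob's tracked blobs rather than raw pixel differences, but the principle is the same: the interaction maps a continuous motion measure onto the number of concurrently triggered fragments.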