Levels of interaction

Selected interaction scenarios.






Full list of executed scenarios. VR experiences available upon request.

Level 1: Spatial orientation, object detection


01
Sound is represented as an unsaturated stream of particles, distributed in a 15-degree cone, with the particle lifetime set to 15 seconds. As particles age, they grow slightly in size while gradually fading to transparent.
The scene is completely dark
When particles collide with an object, they stop moving and stick to it until they die (see the sketch after this scenario's description).
The combination of particles stuck to objects and head tracking allows the user's brain to interpret what is visible as spatial information and lets them “see” in the dark.
The scene contains simple, easy-to-recognize objects, such as cubes and boxes.
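The stick-on-collision behavior above can be summarized in engine-agnostic terms. The following is a minimal, hypothetical Python sketch (names such as Particle, emit_particle, and the collides callback are illustrative and not taken from the actual VR implementation): particles are emitted in a 15-degree cone, live for 15 seconds, grow and fade with age, and freeze in place on collision until they expire.

```python
import math
import random
from dataclasses import dataclass

PARTICLE_LIFE = 15.0                       # seconds, as in the scenario
CONE_HALF_ANGLE = math.radians(15.0 / 2)   # 15-degree emission cone

@dataclass
class Particle:
    position: list
    velocity: list
    age: float = 0.0
    stuck: bool = False

    @property
    def size(self) -> float:
        # Particles slightly grow in size as they age...
        return 1.0 + 0.5 * (self.age / PARTICLE_LIFE)

    @property
    def alpha(self) -> float:
        # ...while gradually fading to transparent.
        return max(0.0, 1.0 - self.age / PARTICLE_LIFE)

def emit_particle(direction, speed=2.0):
    """Emit one particle roughly inside a 15-degree cone around `direction` (hypothetical helper)."""
    yaw = random.uniform(-CONE_HALF_ANGLE, CONE_HALF_ANGLE)
    pitch = random.uniform(-CONE_HALF_ANGLE, CONE_HALF_ANGLE)
    # Crude perturbation of the forward direction; a real engine would rotate a unit vector instead.
    velocity = [speed * (direction[0] + math.tan(yaw)),
                speed * (direction[1] + math.tan(pitch)),
                speed * direction[2]]
    return Particle(position=[0.0, 0.0, 0.0], velocity=velocity)

def update(particles, dt, collides):
    """Advance the stream; `collides(pos)` stands in for the engine's collision test."""
    alive = []
    for p in particles:
        p.age += dt
        if p.age >= PARTICLE_LIFE:
            continue                      # the particle dies after 15 s, even if it is stuck
        if not p.stuck:
            p.position = [c + v * dt for c, v in zip(p.position, p.velocity)]
            if collides(p.position):
                p.stuck = True            # stop moving and stick to the object
        alive.append(p)
    return alive
```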

02
Same as above
In addition, particles change color from warm (red) to cold (blue) as they age, which helps the user perceive the depth of the scene even without head tracking. Note: Adding color replicates its role in human spatial perception, where the horizon appears colder and less saturated due to atmospheric haze. However, this extreme interpretation requires specific practice, and it may take a while to adapt to it.
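As a hedged illustration of the age-to-color mapping (not the project's actual shader or particle settings, which most likely use the engine's color-over-lifetime gradient), a simple linear interpolation from warm red to cold blue across the 15-second lifetime could look like this:

```python
PARTICLE_LIFE = 15.0  # seconds

def age_to_color(age: float) -> tuple:
    """Map particle age to an RGB color: warm (red) when young, cold (blue) when old. Illustrative only."""
    t = min(max(age / PARTICLE_LIFE, 0.0), 1.0)   # normalized age in [0, 1]
    warm = (1.0, 0.2, 0.1)                        # red-ish
    cold = (0.1, 0.3, 1.0)                        # blue-ish
    return tuple(w + (c - w) * t for w, c in zip(warm, cold))

print(age_to_color(0.0))    # warm red: a fresh particle reads as near
print(age_to_color(15.0))   # cold blue: an old particle reads as far
```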


03
Same as above
The scene contains complex geometries, such as a human skull in this example. Note: While it is relatively easy to detect and recognize primitive objects such as boxes, it can be difficult with more complex geometries, namely organic forms. Having passed through the previous two levels of interaction, users become better adapted to this form of “seeing” and have fewer issues recognizing these objects.

04
Same as above
The scene contains invisible objects. Just as animals that use echolocation struggle to detect sound-absorbing objects, this simulation exposes users to a similar situation. The scene contains simple objects that sound particles can collide with, but the particles are instantly destroyed on contact, so the object appears invisible. However, thanks to head tracking, the user can identify these objects as negative space.
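One minimal way to express the difference between a regular object and a "sound-absorbing" one is a per-object flag checked at collision time. The sketch below is a hypothetical extension of the earlier particle update (SceneObject, absorbs_sound, and on_particle_hit are assumed names, not the project's actual code):

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    absorbs_sound: bool = False   # True -> particles are destroyed on contact, so the object stays "invisible"

def on_particle_hit(particle, obj: SceneObject, particles: list):
    """Hypothetical collision handler for scenario 04."""
    if obj.absorbs_sound:
        particles.remove(particle)   # instantly destroyed: nothing sticks, the object reads only as negative space
    else:
        particle.stuck = True        # normal behavior from scenario 01: stick until the particle dies
```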


05
Same as above, but the scene is lit - daylight simulation
Individual objects are still invisible and are detectable only through the sound particles. Note: This simulation combines normal visual perception under daylight conditions with simulated echolocation. The user can see naturally while having the sound particles as an extra sense, or skill, allowing them to detect and identify objects beyond the reality visible to the human eye.



Level 2: Interaction with objects



06
Same as above
Specific objects respond to sound particles. When a particle collision is detected, they notice that they are being observed and hide: a simple script moves them under the simulated ground and brings them back after 10 seconds. Any new particle collision causes them to hide again (see the sketch below).
Particles that hit the object remain in its place even after the object hides, which allows the user to detect these objects after the interaction. Note: The user learns that observation is always bidirectional and never passive; looking at an object causes interaction.
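The hide-and-return script can be described as a small timer-driven state update. The following is a hedged, engine-agnostic sketch: the 10-second timer follows the description above, while HidingObject, HIDE_DEPTH, and the method names are illustrative assumptions.

```python
from dataclasses import dataclass

HIDE_DURATION = 10.0   # seconds the object stays under the ground
HIDE_DEPTH = 5.0       # illustrative offset below the simulated ground

@dataclass
class HidingObject:
    y: float                 # current vertical position of the object
    rest_y: float            # original position to return to
    hide_timer: float = 0.0  # counts down while the object is hidden

    def on_particle_hit(self):
        # Being "observed" (hit by a sound particle) makes the object hide;
        # a new hit while hidden simply restarts the 10-second countdown.
        self.hide_timer = HIDE_DURATION
        self.y = self.rest_y - HIDE_DEPTH

    def update(self, dt: float):
        if self.hide_timer > 0.0:
            self.hide_timer -= dt
            if self.hide_timer <= 0.0:
                self.y = self.rest_y   # bring the object back after 10 seconds
```

Because stuck particles do not follow the object underground, they keep marking its last visible position, which is what lets the user detect it after the interaction.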

07
Same as above, but the scene is dark again
A new type of object: when a collision with sound particles is detected, this object turns on a local light source and illuminates the space around itself for 10 seconds. Note: Objects can contribute to the user's perception.
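The light-up object follows the same timer pattern as the hiding object; in the sketch below, the hypothetical light_on flag stands in for enabling the engine's local light source.

```python
from dataclasses import dataclass

LIGHT_DURATION = 10.0   # seconds of local illumination after a hit

@dataclass
class GlowingObject:
    light_timer: float = 0.0

    @property
    def light_on(self) -> bool:
        return self.light_timer > 0.0   # stands in for toggling a local light source

    def on_particle_hit(self):
        self.light_timer = LIGHT_DURATION   # illuminate the surroundings for 10 seconds

    def update(self, dt: float):
        if self.light_timer > 0.0:
            self.light_timer -= dt
```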



Level 3: Locomotion


08
Same as above, daylight
A new type of object: when a collision with sound particles is detected, this object attracts the user and eventually transports them to the object's location. Before the user is transported, the object flashes to signal the upcoming event. Note: This is the first example of triggered locomotion, and it may cause discomfort and even motion sickness. However, because the user is warned about the upcoming motion, they get used to it with practice, and the duration of the warning flash can then be reduced.
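One way to read the "flash, then transport" behavior is as a short warning delay before the user's position is moved. The sketch below is an assumption about the sequencing; WARN_TIME is illustrative, since the text only says the flash duration can later be reduced.

```python
from dataclasses import dataclass

WARN_TIME = 2.0   # illustrative warning-flash duration; shortened as the user gains practice

@dataclass
class AttractorObject:
    position: tuple
    warn_timer: float = 0.0
    flashing: bool = False

    def on_particle_hit(self):
        if not self.flashing:
            self.flashing = True
            self.warn_timer = WARN_TIME   # flash first, so the user expects the motion

    def update(self, dt: float, user):
        if self.flashing:
            self.warn_timer -= dt
            if self.warn_timer <= 0.0:
                user.position = self.position   # transport the user to the object's location
                self.flashing = False
```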


09
Same as above, daylight
Confirmed transportation: when such an object detects a collision with sound particles, it toggles into a “ready” state and indicates this with local illumination. Another particle collision transports the user to the object's location; otherwise the object returns to its “sleep” state after 10 seconds. Note: This interaction is analogous to a computer mouse double-click event. With the first particle burst the user selects the object, and with the second one the interaction is confirmed and initiated. If the second burst does not happen within the given period, the object is “unselected.”
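The double-click analogy maps naturally onto a two-state machine: a first hit arms the object ("ready", locally lit), a second hit within 10 seconds confirms the transport, and the timeout puts it back to "sleep". A hedged sketch, with ConfirmedTeleporter and the user object assumed for illustration:

```python
from dataclasses import dataclass

READY_TIMEOUT = 10.0   # seconds before an unconfirmed object falls back asleep

@dataclass
class ConfirmedTeleporter:
    position: tuple
    state: str = "sleep"     # "sleep" or "ready"
    ready_timer: float = 0.0

    def on_particle_hit(self, user):
        if self.state == "sleep":
            # First "click": select the object and light it up.
            self.state = "ready"
            self.ready_timer = READY_TIMEOUT
        else:
            # Second "click" within the window: confirm and transport.
            user.position = self.position
            self.state = "sleep"

    def update(self, dt: float):
        if self.state == "ready":
            self.ready_timer -= dt
            if self.ready_timer <= 0.0:
                self.state = "sleep"   # "unselected": no confirmation arrived in time
```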

10
Same as above
Self-locomotion: motion is triggered by activating an object anchored to the user's virtual body. When sound particles collide with this object, the user is transported in the direction of that particle stream. Note: This interaction allows the user to experience free locomotion across the simulated environment without using additional controllers.
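Self-locomotion can be read as moving the user along the incoming particle stream whenever the body-anchored object is hit. In the sketch below, BodyAnchor, MOVE_SPEED, and the velocity-based direction are assumptions made for illustration only.

```python
from dataclasses import dataclass

MOVE_SPEED = 1.5   # illustrative movement speed, in meters per second

@dataclass
class User:
    position: list

@dataclass
class BodyAnchor:
    """Hypothetical trigger object anchored to the user's virtual body."""

    def on_particle_hit(self, user: User, particle_velocity, dt: float):
        # Move the user in the direction of the particle stream that hit the anchor.
        length = sum(v * v for v in particle_velocity) ** 0.5 or 1.0
        direction = [v / length for v in particle_velocity]
        user.position = [p + MOVE_SPEED * d * dt for p, d in zip(user.position, direction)]
```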


