1) An open development platform
2) An environment without a light source or ambient lighting
3) Simulation of echolocation
4) Alignment of virtual and real

Development platform

Despite the wide availability of dedicated, easy-to-use VR development software tools, such as Samsara, Steam Workshop or Sinuscape, the need to control the user's spatial position and orientation, along with real-time sound capture, required a much more open solution.
Fortunately, support for various VR APIs has found its way into game development engines.
The very first attempts to implement the B_BODY idea in a tangible form were made in Unreal Engine, chosen mainly for its visual scripting language, which does not require in-depth knowledge of any specific programming language. However, it soon turned out that visual scripting challenges a developer's skills as much as a conventional programming language does. The lack of local resources that could support the development through consulting then forced a switch to Unity 3d.

An environment without light

Making the scene dark is easy in any of the tools mentioned above; the real challenge is making objects and space discoverable and recognizable without natural light.
First of all, the objects must be easy to recognize. Anything too complicated, or too unique in its form, may confuse. Consequently, the only objects the user interacts with in the beginning are simple boxes and pillars of different sizes. As the user grows accustomed to the B_BODY experience, object complexity may increase.
Secondly, removing the light from the scene produced a flat, computer-like impression: depth perception was lost. As sound travels through space, it loses intensity to the air it passes through, so attenuation is an attribute of the environment itself. A fog-like effect was therefore added to restore that sense of depth.
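A minimal sketch of how such a dark scene with fog-like attenuation might be configured in Unity. The parameter values are illustrative assumptions, not B_BODY's actual settings:

```csharp
using UnityEngine;

// Illustrative sketch: a lightless scene with exponential fog so that
// particle brightness falls off with distance, mimicking the loss of
// sound intensity through air. All values are assumptions.
public class DarkSceneSetup : MonoBehaviour
{
    void Start()
    {
        // No ambient contribution; pure black background (HSL lightness = 0).
        RenderSettings.ambientLight = Color.black;
        var cam = Camera.main;
        cam.clearFlags = CameraClearFlags.SolidColor;
        cam.backgroundColor = Color.black;

        // Fog-like depth cue: distant particles dim exponentially.
        RenderSettings.fog = true;
        RenderSettings.fogMode = FogMode.Exponential;
        RenderSettings.fogColor = Color.black;
        RenderSettings.fogDensity = 0.15f; // illustrative density
    }
}
```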

Simulation of echolocation

The B_BODY algorithm interprets echolocation as a stream of visible particles representing sound traveling through the air. Unity's collision system lets the particles reach an object and eventually bounce away. As particles age, they fade out, simulating the loss of acoustic energy over time.
A particle burst is initiated by sound detection handled by the MicControl script. Each burst contains up to 16,000 particles, emitted in a conic shape.
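The loudness-triggered burst could be sketched as follows. The `loudness` field on `MicControl` and the scaling factor are assumptions made for illustration; only the threshold of 2 and the 16,000-particle cap come from the project description:

```csharp
using UnityEngine;

// Sketch: emit a particle burst whenever microphone loudness exceeds a
// threshold, with louder input producing more particles.
[RequireComponent(typeof(ParticleSystem))]
public class EcholocationBurst : MonoBehaviour
{
    public MicControl mic;                // assumed to expose a loudness value
    public float loudnessThreshold = 2f;  // bursts start above level 2
    public int maxParticlesPerBurst = 16000;

    ParticleSystem ps;

    void Awake()
    {
        ps = GetComponent<ParticleSystem>();
    }

    void Update()
    {
        float loudness = mic.loudness;    // assumed field name
        if (loudness > loudnessThreshold)
        {
            // Louder input emits more particles, up to the 16,000 cap.
            int count = Mathf.Min(
                Mathf.RoundToInt(loudness * 1000f), // illustrative scaling
                maxParticlesPerBurst);
            ps.Emit(count);
        }
    }
}
```

The conic emission shape itself would be configured on the ParticleSystem's Shape module rather than in code.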

Alignment of virtual and real
The particle emitter is anchored to the user's virtual head, in exactly the position of the real head wearing the HMD. Aligning particle emissions with head movements provides a natural link between the virtual and the real.
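One way this anchoring might look in code, assuming a reference to the rig's head camera transform (the field name is hypothetical):

```csharp
using UnityEngine;

// Sketch: lock the particle emitter to the HMD pose each frame so bursts
// always originate at the user's virtual head and face forward.
public class HeadAnchoredEmitter : MonoBehaviour
{
    public Transform hmdHead; // the rig's tracked head/eye camera transform

    void LateUpdate()
    {
        transform.SetPositionAndRotation(hmdHead.position, hmdHead.rotation);
    }
}
```

In practice the same effect can also be achieved by simply parenting the emitter under the head camera in the rig hierarchy.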


  • SoundParticles:
    • B_Body uses the basic Open VR Player Rig
    • Both hand controllers and their tracking are removed
    • A forward-facing particle emitter produces a visible stream of particles representing the sound traveling through the air
    • The amount of particles is controlled with a modified MicControl script, where:
      • Loudness above level 2 initiates a particle burst
      • Loudness controls the amount of particles emitted in one burst
    • The particle system is anchored to the user's head origin and follows the direction and movement of the face
    • Particles generate collision data, which are used as interaction triggers
    • When particles collide with scene objects, they either:
      • stick to the object's surface, or
      • ignore the object and travel through it (invisible objects)
    • Particles change their size and opacity over time, until they “die” after 16 seconds.
  • Scene and object types:
    • All lighting removed
    • No ambient light
    • Solid-color background, lightness = 0 (HSL color model)
    • Cubes and boxes for initial recognition tests
    • Complex objects for extended recognition tests: a human skull
    • Transparent objects: collision detection removed
    • Interactive objects: a particle collision triggers an event (a light turns on, an object moves, etc.)
    • Global turbulence influences particle trajectories to simulate air fluctuation
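The interaction-trigger idea above could be sketched with Unity's particle collision callback. The responding object and its behavior are hypothetical examples; the callback itself requires "Send Collision Messages" to be enabled on the particle system:

```csharp
using UnityEngine;

// Sketch: a scene object that reacts when echolocation particles hit it.
// Unity invokes OnParticleCollision on the struck GameObject when a
// particle system with collision messages enabled hits its collider.
public class ParticleInteractionTrigger : MonoBehaviour
{
    public Light lampToTurnOn; // hypothetical example response

    void OnParticleCollision(GameObject particleSource)
    {
        if (lampToTurnOn != null)
        {
            lampToTurnOn.enabled = true; // e.g. a light turns on
        }
    }
}
```

Transparent objects, by contrast, need no script at all: removing or disabling their colliders is enough for particles to pass through.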
