Introduction


What if the tools we design with “easy to use” in mind, following the “human-centered design” framework, actually represent a less efficient solution to the task at hand? Imagine asking a person to move from point “A” to point “B” in the shortest time and with the least effort. We could observe the best runners, study their movement patterns, and then design the lightest, most comfortable pair of walking shoes. Unsurprisingly, simply wearing them would let the person reach the destination a bit faster, without having to learn anything new. Alternatively, we could hand them a bicycle and force them to learn to ride it. Once they adapt to this new way of moving, once it becomes subconscious, they will be able to beat any runner wearing any over-designed pair of sneakers.

What if our natural ability to adapt allowed us to complete any task in less time and with less effort, once we pass through the adaptation phase?

According to contemporary neuroscience, our senses merely contribute to the brain’s perception of reality and to our consciousness. The brain seems to build an inner model of the outside world, one that is continuously refined by our senses, our interactions, and our experiences.

What does that mean? We might be able to feed that inner model in virtual reality in many other ways, and perhaps more efficiently. Instead of creating a user-friendly interface, we might be able to adapt to an interface that is highly efficient within a given context, within a given reality.
We might be able to create a task-friendly interface.

Speaking of which, here are the central questions we will try to answer with this experiment:


What is the minimal VR UI? (How much reliance on physical controllers can we avoid?)

What is the minimal set of channels that still allows essential interactions?

Would such an interface even appeal to users?





