Walk it off


In 1985, GM made a decision that changed the way we design cars, and much else besides: it signed a contract with Alias Systems to develop NURBS modeling technology compatible with its existing CAD tools. Just three years later, the software package Alias/2 had become a substantial part of the design process at most of the industry leaders, including brands such as Honda, Volvo, BMW, ILM, Apple, and Sony. Computer Aided Industrial Design was born, and from that day on, digital modeling and visualization anchored themselves in the development of human-made products, alongside traditional model making in wood or clay.

It was natural to expect that digital modeling would replace clay modeling. Just as nobody writes on a mechanical typewriter anymore (except a few pathetic hipsters, perhaps), the transition seemed inevitable. What is interesting, though, is that despite many attempts to fully digitalize the design process, it has never entirely happened. Despite today's democratized access to computers, despite room-sized high-resolution screens and, lately, VR, clay has never disappeared from the process. It seems that when it comes to full-size models, the choice is clear, and it is clay, not the computer screen.

There are a number of reasons why it would make sense to avoid clay modeling. First, building a model by hand takes a great deal of time, and even NC-milling it from the digital model is not as direct as it appears. Changes to the sculpted surfaces are relatively easy to make, but they then have to be scanned and re-surfaced in a CAID tool, since the rest of the process is digital. Additionally, industrial clay requires specific working conditions, such as a well-ventilated room, because the clay may contain sulfur.

So why are we still using clay? Is it just a pathetic choice made by prominent chief designers? Or is it the sheer joy of pushing the model out into the sunlight? I suspect everyone who has ever experienced it would confirm the satisfaction of walking around a model outside the modeling hall, but that alone won't justify spending so much time and money on clay modeling.

There is no clear answer. For example, it is tough to accept the opinion that Chris Svensson, director of design for Ford's North and South American operations, shared with the Wall Street Journal in 2014: "We always came back to clay." The problem, he says, is that digital projections can't accurately show how light will play on a car's surface: "You can't replicate the sun." While that may sound about right, it is far from the truth. Today's digital tools control and evaluate highlights far more precisely than anything we know in the analog world. We can simulate pretty much any lighting scenario and support the visual fidelity with physically correct shaders and materials. We can even present design models in virtual reality, see them at real size, and observe them from any angle as we turn the camera view in the space in front of us. And still, it fails to deliver enough stimuli to judge and evaluate forms accurately. Nevertheless, Svensson pushed me in the right direction: if we can replicate the sun, what else do we need to replicate the "real" visual experience in a digital simulation?

It appears that we can generate digital content convincing enough to satisfy our eyes. Yet there is much more to visual perception than eyesight alone. Neuroscientist Anil Seth gets at the truth in his talk "Neuroscience of consciousness": "What we consciously see is our brain's best guess of the causes of its sensory inputs." David Eagleman of Stanford University adds: "Our brain continually creates a visual model of the outside world refined by our eyesight and combined with proprioception." On top of that, he claims in his book "Incognito" that we don't truly see in 3D at all; instead, we compute a three-dimensional mental image from the different viewing angles produced by the offset of our eyes, the orientation of our head, and the movement of our body through space. Incidentally, this theory also explains why some people who have lost sight in one eye are still capable of perceiving depth.

So what does this mean, and does it have anything to do with our case of clay modeling? Quite a lot, in fact. Whenever we walk around an observed object, we add information that sharpens our perception of it. As we tilt our heads and circle the object, we continuously refine our inner mental image with new viewing angles. At the same time, our brain draws on the senses of our own body (our height, the length of our arms, our proprioception) to refine its judgment of the object's size and proportions. It also compares the object with others around it, especially objects of known size and proportion, such as human figures. All of these inputs improve the way we interpret what we observe; if any of them is missing from the experience, the brain's guess is incomplete.

We can certainly produce hyper-realistic, highly detailed visualizations of a digital object, and we can display them as stereoscopic projections, but when we leave our physical body out of the experience, visual perception remains far from complete. Meanwhile, a clay model pushed out into the sunlight, despite all its imperfections, will tell us more about itself than any virtual reality immersion can, if in VR we can't walk around the object and can't use our body to complement our vision.

Am I suggesting that we only need to add a walking system to VR to achieve the required visual fidelity? I would say yes, but there is another path to the same goal: our ability to correctly evaluate observed objects can be trained. The only problem is that it may take long years of practice as a clay modeler or designer to learn to see that skillfully. It is a skill anyone can acquire, just as babies learn to recognize faces or understand colors, although it may take years. So for the rest of us: we have to walk it off.
