As it becomes increasingly immersive, virtual reality will demand virtual materials. Users, such as doctors performing operations remotely via a robotic interface, might need not only to see their artificial reality but to feel it too. The interface will therefore need to be haptic: to create a sensation of touch, a simulacrum of the mechanical properties of materials that the user is supposedly manipulating.

Haptic interfaces have existed for many years, but touch remains challenging to emulate. There is still plenty to be understood about how the mind develops a tactile sense of materials — their softness, compliance, texture and so forth — from the delicate feedback between skin and brain. It’s not clear, for example, what the relevant coordinates are for tactile space: how we categorize such sensory characteristics.

A new study of haptic sensation in virtual reality (VR) supplies a demonstration of how touch is acutely sensitive to other sensory cues, especially vision. Berger et al. tested users of a VR system that generates an illusion of material objects from small vibrations delivered to handheld controllers, one in each hand [1]. When the vibrations are suitably synchronized, the user experiences the sense of a single, material source located in the empty space between the hands. In the experiments, this source could be rendered visually in the VR headset as a vibrating white marble.
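
The phantom-source idea lends itself to a simple illustration. Below is a minimal, hypothetical sketch (in Python, using a linear amplitude-panning rule and function names invented here for illustration, not the actual method or hardware interface of Berger et al.): two controllers receive the same synchronized vibration waveform, and the relative amplitude delivered to each hand places the perceived source somewhere in the space between them.

```python
# Hypothetical sketch of a "phantom source" between two vibrating controllers.
# The linear panning rule and all names are assumptions for illustration only,
# not the scheme used in the study discussed above.

import numpy as np


def phantom_source_waveforms(source_pos, freq_hz=200.0, duration_s=0.5,
                             sample_rate=2000, base_amplitude=1.0):
    """Return synchronized vibration waveforms for the left and right hands.

    source_pos: position of the virtual source between the hands,
                0.0 = at the left hand, 1.0 = at the right hand.
    """
    t = np.arange(0.0, duration_s, 1.0 / sample_rate)
    carrier = np.sin(2.0 * np.pi * freq_hz * t)  # identical phase in both hands

    # Simple linear panning: the closer the virtual source is to a hand,
    # the stronger that hand's vibration.
    left_gain = base_amplitude * (1.0 - source_pos)
    right_gain = base_amplitude * source_pos

    return left_gain * carrier, right_gain * carrier


if __name__ == "__main__":
    # A source midway between the hands: equal, synchronized vibration.
    left, right = phantom_source_waveforms(source_pos=0.5)
    print(left[:5], right[:5])
```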

It seems natural to assume that the more intense the haptic sensation, the more realistic and immersive the VR environment will be. But it’s not as simple as that. Participants reported a good sense of localization for the source of vibration, yet making this sensation more realistic (by controlling the synchrony and amplitude of the vibration) did not enhance the sense of immersion but rather diminished it, unless the visual cues were similarly enhanced.

This diminution of the illusion could be avoided, however, if the VR headset showed an animated cloud that ‘obscured’ the marble, or if the haptic stimulation occurred only in response to user movements — in both cases offering a plausible ‘reason’ for the mismatch of stimuli.

Berger et al. interpret their findings in the context of the well-known ‘uncanny valley’ of robotics. Robots that closely approach but do not quite attain fully human appearance elicit more unease — a greater cognitive dissonance — than ones with lower realism. They argue that there is a haptic uncanny valley too.

On the one hand, these results can be regarded as a cautionary note for designing haptic interfaces: it benefits you little to enhance the tactile experience if other stimuli are not similarly improved. On the other, they indicate the subtlety with which the human mind creates our reality by integrating sensations and judging them against prior knowledge. Put simply, that process is not easily fooled.

Or perhaps one should say, the brain demands causal consistency. We will believe what we experience only if we can construct reasons — a narrative — for it, a creative act that enlists all available sensory input. A key factor here is agency: I sense this because that caused it, or indeed because I caused it. It’s in this respect that the haptic experiments truly connect with notions of the uncanny in robotics and AI more generally. Robin Murphy suggests that the uncanny valley exists only when we suspect a humanoid robot of being a zombie-like automaton emulating a conscious agent (so-called weak AI) [2]. If, in contrast, we have reason to suspect the robot is a genuine thinking entity (strong AI), our sympathy is engaged and the creepiness disappears. This distinction, says Murphy, is apparent in the robots from the original 1973 movie Westworld (weak AI) and those of the new HBO series (strong AI). By the same token, it seems, we will not be misled by zombie materials.