The brain tricks itself into perceiving unambiguous depth
Stereogram experiments confirm that higher visual areas use feedback to censor conflicting visual cues
Li Zhaoping of the Max Planck Institute for Biological Cybernetics in Tübingen uses stereograms to test how different brain areas interact when they process visual data. Her new study shows how certain conflicting visual cues for perceiving depth are censored by the brain if it is given enough viewing time. The findings provide algorithmic details for a longstanding hypothesis about how different visual areas in the brain interact with each other.
Humans can perceive visual depth quite well. This is partly because the left and the right eye, located only a few centimeters apart, see their surroundings from slightly different perspectives and hence capture subtly different images of the world. The brain uses these differences to infer what is close and what is far. A popular way to exploit this phenomenon for fun is the stereogram: when you look at one cross-eyed or wall-eyed (which can be achieved by slowly moving the stereogram away from your eyes), it presents slightly different images to the left and to the right eye, so that the brain perceives visual depth.
Random-dot stereograms are relatively simple versions of stereograms: they are pairs of images that contain nothing but dots. Rather than requiring a cross-eyed or wall-eyed stare, the two images are presented separately to the left and to the right eye via a setup of mirrors. Some pairs of dots – one dot in the image presented to the left eye, the other in the image presented to the right eye – sit at identical positions relative to the overall images; other pairs are marginally offset. Normally, if the dot seen by the left eye lies a bit further to the right than its partner, the pair is perceived as closer; if it lies further to the left, the brain perceives the pair as further away.
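To make the pairing-and-offset idea concrete, here is a minimal sketch of how such a stereogram could be generated. It is an illustration only, not the stimuli used in the study; the image size, disk radius and offset are assumed values.

```python
import numpy as np

# Minimal random-dot stereogram sketch: two images of random black/white dots;
# inside a central disk, the right-eye image copies the left-eye image with a
# small horizontal offset (disparity), which the brain reads as depth.
rng = np.random.default_rng(0)
size, radius, disparity = 200, 40, 4        # pixels; illustrative values only

left = rng.integers(0, 2, (size, size))     # 0 = black dot, 1 = white dot
right = left.copy()

# Mask of the central disk.
y, x = np.mgrid[:size, :size]
disk = (x - size // 2) ** 2 + (y - size // 2) ** 2 < radius ** 2

# Inside the disk, take the left-eye dots shifted a few pixels sideways; those
# dot pairs now carry a disparity, so the disk appears to float in front of
# (or, with the opposite sign of the shift, behind) the surrounding dots.
right[disk] = np.roll(left, -disparity, axis=1)[disk]
```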
Neurons switch their roles
Vision is a complex process that requires the collaboration of different brain areas. In a first step, signals from the eyes are processed in a brain area called the primary visual cortex, before a small fraction of them is transmitted to the higher visual areas. Interestingly, different neurons in the primary visual cortex are in charge of signaling ‘close’ and ‘far’: some neurons respond strongly to seemingly closer pairs of dots, others to seemingly more remote pairs. However, if a black dot is paired with a white dot, these neurons switch their jobs: for neurons in the primary visual cortex, the depth signal is reversed when the two dots of a pair have opposite contrast.
“While the primary visual cortex is quite well understood, we wanted to find out more about the role of the higher visual areas in depth perception,” says Li Zhaoping. So she and her collaborators asked test subjects to look at various random-dot stereograms with black and white dots, all of which created the perception of a central disk in front of (or sometimes behind) a surrounding ring. In the central disk of some of the stereograms, however, the researchers reversed the shades of some dots so that some of the pairs were black-white. Since the depth signal is reversed for black-white pairs, the mix of matching and black-white pairs prompts the primary visual cortex to send contradictory signals to the higher visual areas. One might expect this to make it more difficult to make out the central disk.
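In terms of the stereogram sketch above (and reusing `left`, `right`, `disk` and `rng` from it), the manipulation amounts to reversing the contrast of some dots in one eye's image inside the disk. The flip fraction below is an assumed value for illustration, not a parameter from the study.

```python
# Continuing the sketch above: inside the central disk, flip the contrast of a
# fraction of the dots in the right-eye image only. Each flipped dot now forms
# a black-white pair whose depth signal is reversed in the primary visual
# cortex, contradicting the signal from the unchanged, matching pairs.
flip_fraction = 0.3                                   # assumed value
flip = disk & (rng.random((size, size)) < flip_fraction)
right[flip] = 1 - right[flip]                         # white <-> black in one eye only
```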
Interestingly, however, it turned out that depth perception was only impaired when subjects could view the stereograms for a short time – given enough viewing time, the otherwise confusing black-white dot pairs no longer hindered depth perception.
Higher visual areas send feedback asking for more information
“What is really fascinating here is that these findings, as contradictory as they may seem at first glance, back up a long-standing hypothesis in visual research,” explains Zhaoping. “When the primary visual cortex transmits conflicting data – when the information from the black-white dot pairs does not match the information from the other dot pairs – the signals from the black-white pairs at first simply get added to the other signals. This is why the subjects had difficulty making out the disk in experiments with incongruent signals and short viewing time.”
But given enough time, the higher visual areas come more into play: they use the visual input to guess the content of the visual scene, and employ their internal model of reality to generate a synthetic input image. “This image is then fed back to the primary visual cortex as a query to amend and complete the original data. One might say that the higher visual areas ask the primary visual cortex: ‘This is what a disk in the foreground would look like – please check if this matches the input you’re getting.’”
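A toy sketch of this query-and-verify idea might look as follows. The function name, the use of a correlation measure, and the threshold are assumptions made for illustration; they are not the model used in the study.

```python
import numpy as np

def feedback_verification(v1_input, hypotheses, render, threshold=0.8):
    """Toy 'analysis by synthesis' loop: higher areas propose scene hypotheses,
    synthesize the input each would produce, and keep only those that the
    actual input from the primary visual cortex confirms."""
    confirmed = []
    for hypothesis in hypotheses:
        synthetic = render(hypothesis)        # "what a disk in front would look like"
        # Compare the synthetic image with the actual input; veto on mismatch.
        match = np.corrcoef(synthetic.ravel(), v1_input.ravel())[0, 1]
        if match > threshold:
            confirmed.append(hypothesis)      # only confirmed guesses are perceived
    return confirmed
```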
To build a consistent view of the world, the synthetic and actual visual images are compared to confirm or veto the guess, and only confirmed guesses are perceived: in the case of the contradictory stereograms, the feedback ends up vetoing the confusing signals from the black-white dot pairs, so that the image of the disk is perceived clearly. “We even discovered some other cases where the brain can use the black-white pairs to construct an enhanced depth perception, and we can explain those phenomena in a similar manner.” For Li Zhaoping, this study is just one step on her way to understanding vision: “These findings motivate further investigations to test how the higher visual areas interact with the primary visual cortex and to reveal the underlying neural mechanisms and their perceptual consequences.”