A study at the University of Pennsylvania looked into the brain’s visual recognition memory to learn how it focuses on what’s new and ignores what isn’t. The research examined the theory of repetition suppression, namely that lower activity in the inferotemporal (IT) cortex means the image in question is familiar. That theory didn’t sit right with neuroscientist Nicole Rust. “Different images produce different amounts of activation even when they are all novel,” said Rust, an associate professor in the Department of Psychology.
Rust and her associates put forth a new theory: the brain knows how much activation to expect from a given image and corrects for it, and what remains after that correction is a signal for familiarity. Rust’s lab calls this sensory-referenced suppression.
The big question surrounding vision is how information from the outside world gets into our heads in a form that can be interpreted. Simplified for clarity, the sensory system breaks it down this way: visual information enters the eye by way of the rods and cones, then travels through successive stages of neurons to a visual area called the IT cortex. The roughly 16 million neurons there fire in different patterns depending on what is being seen, and the brain must interpret each pattern to understand what is in view: one pattern for a specific face, another for a cup, and so on.
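To make the idea of a population pattern concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the firing rates, the four-neuron population, the nearest-pattern readout); it is not the study's data or method, just an illustration of how distinct activity patterns could stand for distinct objects.

import numpy as np

# Illustrative toy population code: each object is associated with a
# characteristic pattern of firing rates across a (tiny, hypothetical)
# set of IT neurons.
rng = np.random.default_rng(0)

stored_patterns = {
    "face": np.array([8.0, 1.0, 6.0, 2.0]),  # spikes/sec per neuron
    "cup":  np.array([1.0, 7.0, 2.0, 9.0]),
}

def identify(observed_rates):
    """Read out identity as the stored pattern closest to the observed one."""
    return min(stored_patterns,
               key=lambda name: np.linalg.norm(observed_rates - stored_patterns[name]))

# A noisy presentation of a face still maps onto the "face" pattern.
noisy_face = stored_patterns["face"] + rng.normal(0, 0.5, size=4)
print(identify(noisy_face))  # -> face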
The IT cortex also plays a role in memory. The old theory of repetition suppression holds that more neural activity tells the brain an image is novel, and less activity that it is familiar. But many factors affect neural activity, so the brain cannot tell from the firing rate alone what caused a response: it could be memory, image contrast, or something else. The researchers propose that the brain corrects for the changes caused by these other factors, and what remains is the activity signature of something previously seen.
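A toy model can make the correction concrete. The sketch below is an assumed, simplified form of the idea, not the lab's actual equations: a novel low-contrast image and a familiar high-contrast image can evoke the same raw firing rate, but subtracting the expected, contrast-driven response leaves a residual that separates them.

import numpy as np

def expected_response(contrast):
    """Hypothetical stimulus-driven firing rate as a function of contrast
    (made-up parameters, for illustration only)."""
    return 20.0 + 30.0 * contrast  # spikes/sec

def familiarity_signal(observed_rate, contrast):
    """Residual after correcting for the expected, contrast-driven response.
    A strongly negative residual means suppression, i.e. a familiar image."""
    return observed_rate - expected_response(contrast)

# A low-contrast novel image and a high-contrast familiar image can produce
# the SAME raw firing rate, which is why raw suppression is ambiguous.
novel_low     = expected_response(0.2)         # 26 spikes/sec, no suppression
familiar_high = expected_response(0.8) - 18.0  # also 26 spikes/sec, suppressed

print(familiarity_signal(novel_low, 0.2))      #   0.0 -> reads as novel
print(familiarity_signal(familiar_high, 0.8))  # -18.0 -> reads as familiar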
To test this idea, Rust’s lab presented sequences of grayscale images to two adult male rhesus macaques. Each image appeared twice, first as novel and later as familiar, across a range of high- and low-contrast combinations. Each presentation lasted half a second, and the animals were trained to report with eye movements whether an image was new or familiar while disregarding its contrast.
The researchers recorded neural activity in the IT cortex and, using a mathematical decoding approach, interpreted the patterns of spikes that accounted for how the animals could distinguish memory from contrast (a simplified version of this kind of readout is sketched below). The analysis showed that both familiarity and contrast change neural firing rates, and that the brain can isolate one from the other. This research could have implications for both artificial intelligence and Alzheimer’s disease. For artificial intelligence, understanding how the brain combines information in memory with sensory inputs like contrast could guide the design of machines that process information the way the brain does. For Alzheimer’s, understanding how memory works in a healthy brain is a step toward treating the disease.
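While the paper's actual decoding analysis is more sophisticated, a minimal simulation illustrates the principle: even when familiarity and contrast both change firing rates, a linear readout of the population pattern can recover the memory signal. Everything below (population size, modulation weights, the least-squares decoder) is an assumption made for illustration, not the lab's method.

import numpy as np

# Simulate a small IT-like population whose rates are modulated by BOTH
# familiarity and contrast, then fit a linear readout for familiarity.
rng = np.random.default_rng(1)
n_neurons, n_trials = 50, 400

w_familiar = rng.normal(0, 1, n_neurons)   # per-neuron familiarity modulation
w_contrast = rng.normal(0, 1, n_neurons)   # per-neuron contrast modulation

familiar = rng.integers(0, 2, n_trials)    # 0 = novel, 1 = familiar
contrast = rng.uniform(0.1, 1.0, n_trials) # varies independently of memory

rates = (10.0
         - 4.0 * np.outer(familiar, w_familiar)  # familiarity shifts some neurons
         + 6.0 * np.outer(contrast, w_contrast)  # contrast drives others
         + rng.normal(0, 1.0, (n_trials, n_neurons)))

# Least-squares linear decoder: read familiarity out of the population pattern.
X = np.column_stack([rates, np.ones(n_trials)])
beta, *_ = np.linalg.lstsq(X, familiar, rcond=None)
accuracy = np.mean((X @ beta > 0.5) == familiar)
print(f"familiarity decoded at {accuracy:.0%} despite contrast variation")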
Whether an image is familiar or novel, the brain has its work cut out for it when it comes to processing the information that comes in.
Source:
https://penntoday.upenn.edu/news/Penn-research-what-happens-brain-when-something-looks-familiar