A little research volunteer gazes at the screen, ready to demonstrate some surprising new findings about how babies use context to direct their attention. (Image courtesy of Kristen Tummeltshammer)

Just six months into the world, babies already have the capacity to learn, remember and use contextual cues in a scene to guide their search for objects of interest, such as faces, a new Brown University study shows.

“It was pretty surprising to find that 6-month-olds were capable of this memory-guided attention,” said lead author Kristen Tummeltshammer, a postdoctoral scholar at Brown. “We didn’t expect them to be so successful so young.”

In the experiment, described in the journal Developmental Science, babies showed steady improvement at finding faces in repeated scenes, but didn’t get any quicker or more accurate at finding faces in new scenes. Senior author Dima Amso, an associate professor in Brown’s Department of Cognitive, Linguistic and Psychological Sciences, said the finding that infants can recognize and exploit patterns of context provides important new insights into typical and possibly atypical brain development.

“What that means is that they are efficient in using the structure in their environment to maximize attentional resources on the one hand and to reduce uncertainty and distraction on the other,” Amso said. “A critical question in our lab has been whether infants at risk for neurodevelopmental disorders, especially autism spectrum disorders, have differences in the way that they process visual information, and whether this would impact future learning and attention. These data lay the developmental groundwork for asking whether there are differences in using previously learned visual information to guide future learning and attention across various neurodevelopmental populations.”

Find the face

For the study, Tummeltshammer and Amso invited 46 healthy, full-term infants, either 6 or 10 months old, to their lab to play a little game of finding faces. Seated on a parent’s lap, each baby simply had to watch a screen that displayed a series of arrangements of four colored shapes. In each arrangement, the shapes would turn around, with one of them revealing a face. An eye-tracking system measured where the baby looked.

Eventually the babies would always look at the face, especially because after two seconds the face would become animated and say words like “peekaboo.” In all, each baby saw 48 arrangements over eight minutes, with little breaks to watch clips of Elmo from “Sesame Street.” That, Tummeltshammer said, helped keep them (and maybe their parents) engaged and happy.

The trick of the experiment was that half the time the shape arrangements were randomly scrambled, so the face could be revealed anywhere, while the other half of the time the same arrangements were repeated, meaning a baby could learn from that context to predict where the face would appear. In this way, the babies saw faces in both novel and repeated contexts. If babies could notice the repeated context pattern, remember it and put it to use, they should become quicker and more accurate at finding the face when that kind of scene came up again.

By several measures reported in the study, the babies demonstrated that capacity clearly. For example, as they saw more scenes, babies consistently reduced the time it took to find the face in repeated-context scenes, but not in new-context scenes. They also became better at ignoring non-face shapes in repeated-context scenes as the experiment went along, but didn’t show the same improvement in new-context scenes.

Babies even learned to anticipate where the faces would be on the screen based on their experiences in the experiment.

Tummeltshammer said there was little difference between the 6-month-olds and the 10-month-olds, suggesting that the skill is already developed at the younger age.

In new research, Tummeltshammer said, she and Amso plan to experiment with more realistic scenes. After all, babies rarely need to look for faces among cleanly defined abstract shapes. A more real-world challenge for a baby, for instance, might be finding a parent’s familiar and comforting face across a holiday dinner table.

But even in this simpler experimental setting, the ability was clearly established.

“We think of babies as being quite reactive in how they spread their attention,” Tummeltshammer said. “This helps us recognize that they are actually quite proactive. They are able to use recent memory and to extract what’s common in an environment as a shortcut to be able to locate things quickly.”

Source: Brown University
