Similar Articles
20 similar articles found (search time: 562 ms)
1.
One of the most salient social categories conveyed by human faces and voices is gender. We investigated the developmental emergence of the ability to perceive the coherence of auditory and visual attributes of gender in 6‐ and 9‐month‐old infants. Infants viewed two side‐by‐side video clips of a man and a woman singing a nursery rhyme and heard a synchronous male or female sound track. Results showed that 6‐month‐old infants did not match the audible and visible attributes of gender, and 9‐month‐old infants matched only female faces and voices. These findings indicate that the ability to perceive the multisensory coherence of gender emerges relatively late in infancy and that it reflects the greater experience that most infants have with female faces and voices.

2.
Unimodal emotionally salient visual and auditory stimuli capture attention and have been found to do so cross-modally. However, little is known about the combined influences of auditory and visual threat cues on directing spatial attention. In particular, fearful facial expressions signal the presence of danger and capture attention. Yet, it is unknown whether human auditory distress signals that accompany fearful facial expressions potentiate their capture of attention. It was hypothesized that the capture of attention by fearful faces would be enhanced when co-presented with auditory distress signals. To test this hypothesis, we used a modified multimodal dot-probe task where fearful faces were paired with three sound categories: no sound control, non-distressing human vocalizations, and distressing human vocalizations. Fearful faces captured attention across all three sound conditions. In addition, this effect was potentiated when fearful faces were paired with auditory distress signals. The results provide initial evidence suggesting that emotional attention is facilitated by multisensory integration.

3.
The study of vocal coordination between infants and adults has led to important insights into the development of social, cognitive, emotional, and linguistic abilities. We used an automatic system to identify vocalizations produced by infants and adults over the course of the day for fifteen infants studied longitudinally during the first 2 years of life. We measured three different types of vocal coordination: coincidence‐based, rate‐based, and cluster‐based. Coincidence‐based coordination and rate‐based coordination are established measures in the developmental literature. Cluster‐based coordination is new and measures the strength of matching in the degree to which vocalization events occur in hierarchically nested clusters. We investigated whether various coordination patterns differ as a function of vocalization type, whether different coordination patterns provide unique information about the dynamics of vocal interaction, and how the various coordination patterns each relate to infant age. All vocal coordination patterns displayed greater coordination for infant speech‐related vocalizations, adults adapted the hierarchical clustering of their vocalizations to match that of infants, and each of the three coordination patterns had unique associations with infant age. Altogether, our results indicate that vocal coordination between infants and adults is multifaceted, suggesting a complex relationship between vocal coordination and the development of vocal communication.
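The abstract names the coincidence‐based measure only qualitatively. A minimal sketch of what such a measure might look like, assuming it is the proportion of infant vocalization onsets that an adult vocalization onset follows within a fixed response window (the window length and the exact rule here are illustrative assumptions, not the authors' algorithm):

```python
def coincidence_coordination(infant_onsets, adult_onsets, window=2.0):
    """Fraction of infant vocalization onsets that are 'answered' by an
    adult vocalization onset within `window` seconds (hypothetical rule)."""
    if not infant_onsets:
        return 0.0
    answered = 0
    for t in infant_onsets:
        # Count an infant onset as answered if any adult onset falls
        # strictly after it but no later than `window` seconds after it.
        if any(t < a <= t + window for a in adult_onsets):
            answered += 1
    return answered / len(infant_onsets)

# Hypothetical onset times in seconds from two interaction streams:
infant = [1.0, 5.0, 12.0, 20.0]
adult = [2.1, 5.5, 30.0]
print(coincidence_coordination(infant, adult))  # 2 of 4 onsets answered -> 0.5
```

A rate‐based measure would instead correlate vocalization counts per time bin across the two streams; the cluster‐based measure described in the abstract requires estimating nested burst structure and is not sketched here.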

4.
For several decades, many authors have claimed the existence, early in life, of a tight link between perceptual and productive systems in speech. However, the question of whether this link is acquired or already present at birth remains open. This study investigated this question using the paradigm of neonatal facial imitation. We compared imitative responses of newborn infants presented with either visual‐only, audiovisual congruent, or audiovisual incongruent models. Our results revealed that the newborns imitated the model's mouth movements significantly more quickly when the model was audiovisual congruent rather than visual‐only. Moreover, when observing an audiovisual incongruent model, the newborns did not produce imitative behavior. These findings, by highlighting the influence of speech perception on newborns' imitative responses, suggest that the neural architecture for perception–production is already in place at birth. The implications of these results are discussed in terms of a link between language and neonatal imitation, which could represent a precursor of more mature forms of vocal imitation and of speech development in general.

5.
For effective communication, infants must develop the phonology of sounds and the ability to use vocalizations in social interactions. Few studies have examined the development of the pragmatic use of prelinguistic vocalizations, possibly because gestures are considered hallmarks of early pragmatic skill. The current study investigated infant vocal production and maternal responsiveness to examine the relationship between infant and maternal behavior in the development of infants' vocal communication. Specifically, we asked whether maternal responses to vocalizations could influence the development of prelinguistic vocal usage, as has been documented in recent experimental studies exploring the relation between maternal responses and phonological development. Twelve mother–infant dyads participated over a six‐month period (between 8 and 14 months of age). Mothers completed the MacArthur Communicative Development Inventory when infants were 15 months old. Maternal sensitive responses to infant vocalizations, rather than overall response rate, predicted infants' mother‐directed vocalizations in the following months. Furthermore, mothers' sensitive responding to mother‐directed vocalizations was correlated with an increase in developmentally advanced, consonant–vowel vocalizations and some language measures. This is the first study to document a social shaping mechanism influencing developmental change in pragmatic usage of vocalizations, in addition to identifying the specific behaviors underlying development.

6.
Research has demonstrated that infants recognize emotional expressions of adults in the first half year of life. We extended this research to a new domain, infant perception of the expressions of other infants. In an intermodal matching procedure, 3.5‐ and 5‐month‐old infants heard a series of infant vocal expressions (positive and negative affect) along with side‐by‐side dynamic videos in which one infant conveyed positive facial affect and another infant conveyed negative facial affect. Results demonstrated that 5‐month‐olds matched the vocal expressions with the affectively congruent facial expressions, whereas 3.5‐month‐olds showed no evidence of matching. These findings indicate that by 5 months of age, infants detect, discriminate, and match the facial and vocal affective displays of other infants. Further, because the facial and vocal expressions were portrayed by different infants and shared no face–voice synchrony and no common temporal or intensity patterning, matching was likely based on detection of a more general affective valence common to the face and voice.

7.
Speech preferences emerge very early in infancy, pointing to a special status for speech in auditory processing and a crucial role of prosody in driving infant preferences. Recent theoretical models suggest that infant auditory perception may initially encompass a broad range of human and nonhuman vocalizations, then tune in to relevant sounds for the acquisition of species‐specific communication sounds. However, little is known about the sound properties eliciting infants' tuning‐in to speech. To address this issue, we presented a group of 4‐month‐olds with segments of non‐native speech (Mandarin Chinese) and birdsong, a nonhuman vocalization that shares some prosodic components with speech. A second group of infants was presented with the same segment of birdsong paired with Mandarin played in reverse. Infants showed an overall preference for birdsong over non‐native speech. Moreover, infants in the Backward condition preferred birdsong over backward speech, whereas infants in the Forward condition did not show a clear preference. These results confirm the prominent role of prosody in early auditory processing and suggest that infants' preferences may privilege communicative vocalizations characterized by certain prosodic dimensions regardless of the biological source of the sound, human or nonhuman.

8.
Perceptual narrowing—a phenomenon in which perception is broad from birth, but narrows as a function of experience—has previously been tested with primate faces. In the first 6 months of life, infants can discriminate among individual human and monkey faces. Though the ability to discriminate monkey faces is lost after about 9 months, infants retain human face discrimination, presumably because of their experience with human faces. The current study demonstrates that 4‐ to 6‐month‐old infants are able to discriminate nonprimate faces as well. In a visual paired comparison test, 4‐ to 6‐month‐old infants (n = 26) looked significantly longer at novel sheep (Ovis aries) faces, compared to a familiar sheep face (p = .017), while 9‐ to 11‐month‐olds (n = 26) showed no visual preference, and adults (n = 27) had a familiarity preference (p < .001). Infants' face recognition systems are broadly tuned at birth—not just for primate faces, but for nonprimate faces as well—allowing infants to become specialists in recognizing the types of faces encountered in their first year of life.
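The visual paired comparison test above is scored from looking times: a novelty (or familiarity) preference is the share of total looking spent on the novel stimulus, compared against the chance level of 0.5. A minimal sketch of this score, with looking times that are illustrative rather than taken from the study:

```python
def novelty_preference(novel_look_s, familiar_look_s):
    """Proportion of total looking time spent on the novel stimulus.
    Values above 0.5 indicate a novelty preference, below 0.5 a
    familiarity preference; 0.5 is chance."""
    total = novel_look_s + familiar_look_s
    # With no looking at all, the score is undefined; report chance.
    return novel_look_s / total if total else 0.5

# Hypothetical looking times (seconds) for a single test trial:
print(novelty_preference(6.3, 3.7))  # ≈ 0.63, i.e. a novelty preference
```

In the study's pattern of results, 4‐ to 6‐month‐olds would score above 0.5 (novelty preference), 9‐ to 11‐month‐olds near 0.5 (no preference), and adults below 0.5 (familiarity preference).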

9.
This study investigates the role of dynamic information in facial identity recognition at birth. More specifically, we tested whether semi‐rigid motion, conveyed by a change in facial expression, facilitates identity recognition. In Experiment 1, two‐day‐old newborns habituated to a static happy or fearful face (static condition) were tested with a pair of novel‐ and familiar‐identity static faces bearing a neutral expression. Results indicated that newborns manifested a preference for the familiar stimulus, independently of the facial expression. In contrast, in Experiment 2, newborns habituated to a semi‐rigid moving face (dynamic condition) manifested a preference for the novel face, independently of the facial expression. In Experiment 3, a multistatic image of a happy face with different degrees of intensity was used to test the role of different amounts of static pictorial information in identity recognition (multistatic image condition). Results indicated that newborns were not able to recognize the face to which they had been familiarized. Overall, the results clearly demonstrate that newborns' face recognition depends strictly on the learning conditions and that the semi‐rigid motion conveyed by facial expressions facilitates identity recognition from birth.

10.
Although vocalization and mouthing are behaviors frequently performed by infants, little is known about the characteristics of vocalizations that occur with objects, hands, or fingers in infants' mouths. The purpose of this research was to investigate characteristics of vocalizations associated with mouthing in 6‐ to 9‐month‐old infants during play with a primary caregiver. Results suggest that mouthing may influence the phonetic characteristics of vocalizations by introducing vocal tract closure and variation in consonant production.

11.
Previous work has shown that 4‐month‐olds can discriminate between two‐dimensional (2D) depictions of structurally possible and impossible objects [S. M. Shuwairi (2009), Journal of Experimental Child Psychology, 104, 115; S. M. Shuwairi, M. K. Albert, & S. P. Johnson (2007), Psychological Science, 18, 303]. Here, we asked whether evidence of discrimination of possible and impossible pictures would also be revealed in infants' patterns of reaching and manual exploration. Nine‐month‐old infants were presented with realistic photograph displays of structurally possible and impossible cubes along with a series of perceptual controls, and engaged in more frequent manual exploration of pictures of impossible objects. In addition, the impossible cube display elicited significantly more social referencing and vocalizations than the possible cube and perceptual control displays. The increased manual gestures associated with the incoherent figure suggest that perceptual and manual action mechanisms are interrelated in early development. The infant's visual system extracts structural information contained in 2D images in analyzing the projected 3D configuration, and this information serves to control both the oculomotor and manual action systems.

12.
Past studies have found equivocal support for the ability of young infants to discriminate infant‐directed (ID) speech information in the presence of auditory‐only versus auditory + visual displays (faces + voices). Generally, younger infants appear to have more difficulty discriminating a change in the vocal properties of ID speech when they are accompanied by faces. Forty 4‐month‐old infants were tested using either an infant‐controlled habituation procedure (Experiment 1) or a fixed‐trial habituation procedure (Experiment 2). The prediction was that the infant‐controlled habituation procedure would be a more sensitive measure of infant attention to complex displays. The results indicated that 4‐month‐old infants discriminated voice changes in dynamic face + voice displays depending on the order in which they were viewed during the infant‐controlled habituation procedure. In contrast, no evidence of discrimination was found in the fixed‐trial procedure. The findings suggest that the selection of experimental methodology plays a significant role in the empirical observations of infant perceptual abilities.
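The contrast above turns on how habituation is declared: in an infant‐controlled procedure, trials continue until the infant's own looking drops below a criterion, whereas a fixed‐trial procedure presents a preset number of trials regardless of looking. A minimal sketch of one common infant‐controlled criterion, assuming habituation is reached when the mean looking time of a block of trials falls below half the mean of the first block (the specific block size and criterion are assumptions, not necessarily this study's rule):

```python
def habituated(look_times, criterion=0.5, block=3):
    """True once the mean looking time of any later block of trials drops
    below `criterion` x the mean of the first `block` trials (a common
    infant-controlled habituation rule; exact parameters are assumed)."""
    if len(look_times) < 2 * block:
        return False  # not enough trials to compare a later block to baseline
    baseline = sum(look_times[:block]) / block
    for i in range(block, len(look_times) - block + 1):
        window = look_times[i:i + block]
        if sum(window) / block < criterion * baseline:
            return True
    return False

# Hypothetical per-trial looking times (seconds) showing declining attention:
looks = [10.0, 9.0, 8.0, 6.0, 4.0, 3.0]
print(habituated(looks))  # -> True: mean of last block (4.33 s) < 0.5 x 9.0 s
```

A fixed‐trial procedure, by contrast, would simply present (say) six trials and move to test regardless of whether this criterion was ever met, which is one reason it can be the less sensitive measure.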

13.
Infant contingent responsiveness to maternal language and gestures was examined in 190 Mexican American, Dominican American, and African American infant–mother dyads when infants were 14 and 24 months old. Dyads were video‐recorded during book‐sharing and play. Videos were coded for the timing of infants' vocalizations and gestures and mothers' referential language (i.e., statements that inform infants about objects and events in the world; e.g., "That's a big doggy!"), regulatory language (i.e., statements that regulate infants' attention or actions; e.g., "Look at that", "Put it down!"), and gestures. Infants of all three ethnicities responded within 3 sec of mothers' language and gestures, increased their responsiveness over development, and displayed specificity in their responses: They vocalized and gestured following mothers' referential language and gestures, but were less likely than chance to communicate following mothers' regulatory language. At an individual level, responsive infants had responsive mothers.

14.
Eye gaze has been shown to be an effective cue for directing attention in adults. Whether this ability operates from birth is unknown. Three experiments were carried out with 2‐ to 5‐day‐old newborns. The first experiment replicated the previous finding that newborns are able to discriminate between direct and averted gaze, and extended this finding from real to schematic faces. In Experiments 2 and 3 newborns were faster to make saccades to peripheral targets cued by the direction of eye movement of a central schematic face, but only when the motion of the pupils was visible. These results suggest that newborns may show a rudimentary form of gaze following.

15.
Two studies illustrate the functional significance of a new category of prelinguistic vocalizing—object‐directed vocalizations (ODVs)—and show that these sounds are connected to learning about words and objects. Experiment 1 tested 12‐month‐old infants' perceptual learning of objects that elicited ODVs. Fourteen infants' vocalizations were recorded as they explored novel objects. Infants learned visual features of objects that elicited the most ODVs but not of objects that elicited the fewest vocalizations. Experiment 2 assessed the role of ODVs in learning word–object associations. Forty infants aged 11.5 months played with a novel object and received a label either contingently on an ODV or on a look alone. Only infants who received labels in response to an ODV learned the association. Taken together, the findings suggest that infants' ODVs signal a state of attention that facilitates learning.

16.
17.
Recent research on human nonverbal vocalizations has led to considerable progress in our understanding of vocal communication of emotion. However, in contrast to studies of animal vocalizations, this research has focused mainly on the emotional interpretation of such signals. The repertoire of human nonverbal vocalizations as acoustic types, and the mapping between acoustic and emotional categories, thus remain underexplored. In a cross-linguistic naming task (Experiment 1), verbal categorization of 132 authentic (non-acted) human vocalizations by English-, Swedish- and Russian-speaking participants revealed the same major acoustic types: laugh, cry, scream, moan, and possibly roar and sigh. The association between call type and perceived emotion was systematic but non-redundant: listeners associated every call type with a limited, but in some cases relatively wide, range of emotions. The speed and consistency of naming the call type predicted the speed and consistency of inferring the caller's emotion, suggesting that acoustic and emotional categorizations are closely related. However, participants preferred to name the call type before naming the emotion. Furthermore, nonverbal categorization of the same stimuli in a triad classification task (Experiment 2) was more compatible with classification by call type than by emotion, indicating the former's greater perceptual salience. These results suggest that acoustic categorization may precede attribution of emotion, highlighting the need to distinguish between the overt form of nonverbal signals and their interpretation by the perceiver. Both within- and between-call acoustic variation can then be modeled explicitly, bringing research on human nonverbal vocalizations more in line with the work on animal communication.

18.
Behavioral and electrophysiological evidence suggests a gradual, experience‐dependent specialization of cortical face processing systems that takes place largely in the first year of life. To further investigate these findings, event‐related potentials (ERPs) were collected from typically developing 9‐month‐old infants presented with pictures of familiar and unfamiliar monkey or human faces in two different orientations. Analyses revealed differential processing across changes in monkey and human faces. The N290 was greater for familiar compared to unfamiliar faces, regardless of species or orientation. In contrast, the P400 to unfamiliar faces was greater than to familiar faces, but only for the monkey condition. The P400 to human faces differentiated the orientation of both familiar and unfamiliar faces. These results suggest more specific processing of human compared to monkey faces in 9‐month‐olds.

19.
Leading up to explicit mirror self‐recognition, infants rely on two crucial sources of information: the continuous integration of sensorimotor and multisensory signals, as when seeing one's movements reflected in the mirror, and the unique facial features associated with the self. While visual appearance and multisensory contingent cues may be two likely candidates of the processes that enable self‐recognition, their respective contribution remains poorly understood. In this study, 18‐month‐old infants saw side‐by‐side pictures of themselves and a peer, which were systematically and simultaneously touched on the face with a hand. While watching the stimuli, the infant's own face was touched either in synchrony or out of synchrony and their preferential looking behavior was measured. Subsequently, the infants underwent the mirror‐test task. We demonstrated that infants who were coded as nonrecognizers at the mirror test spent significantly more time looking at the picture of their own face compared to the other‐face, irrespective of whether the multisensory input was synchronous or asynchronous. Our results suggest that right before the onset of mirror self‐recognition, featural information about the self might be more relevant in the process of recognizing one's face, compared to multisensory cues.

20.
Experiment 1 investigated whether infants process facial identity and emotional expression independently or in conjunction with one another. Eight‐month‐old infants were habituated to two upright or two inverted faces varying in facial identity and emotional expression. Infants were tested with a habituation face, a switch face, and a novel face. In the switch faces, a new combination of identity and emotional expression was presented. The results show that infants differentiated between switch and habituation faces only in the upright condition but not in the inverted condition. Experiment 2 provides evidence that infants' nonresponse in the inverted condition can be attributed to their independent processing of facial identity and emotional expression. This suggests that infants in the upright condition processed facial identity and emotional expression in conjunction with one another.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号