Similar Articles
1.
Over half the world's population speaks a tone language, yet infant speech perception research has typically focused on consonants and vowels. Very young infants can discriminate a wide range of native and nonnative consonants and vowels, and then, in a process of perceptual reorganization over the 1st year, discrimination of most nonnative speech sounds deteriorates. We investigated perceptual reorganization for tones by testing 6‐ and 9‐month‐old infants from tone (Chinese) and nontone (English) language environments for speech (lexical tone) and nonspeech (violin sound) tone discrimination in both cross‐sectional and longitudinal studies. Overall, Chinese infants performed equally well at 6 and 9 months for both speech and nonspeech tone discrimination. Conversely, English infants' discrimination of lexical tone declined between 6 and 9 months of age, whereas their nonspeech tone discrimination remained constant. These results indicate that the reorganization of tone perception is a function of the native language environment, and that this reorganization is linguistically based. Supplementary materials to this article are available on the World Wide Web at http://www.infancyarchives.com

2.
Infants' responses in speech sound discrimination tasks can be nonmonotonic over time. Stager and Werker (1997) reported such data in a bimodal habituation task. In this task, 8‐month‐old infants were capable of discriminations that involved minimal contrast pairs, whereas 14‐month‐old infants were not. It was argued that the older infants' attenuated performance was linked to their processing of the stimuli for meaning. The authors suggested that these data are diagnostic of a qualitative shift in infant cognition. We describe an associative connectionist model showing a similar decrement in discrimination without any qualitative shift in processing. The model suggests that responses to phonemic contrasts may be a nonmonotonic function of experience with language. The implications of this idea are discussed. The model also provides a formal framework for studying habituation‐dishabituation behaviors in infancy.
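The abstract does not specify the model's architecture, so the following is only a minimal sketch of how an associative network can show dishabituation without any qualitative processing shift: habituation is modeled as learning to reconstruct a stimulus, and dishabituation as the excess reconstruction error for a novel stimulus. The feature vectors for the "bih"/"dih" minimal pair and all parameter values are invented for illustration.

```python
# Minimal sketch (not the authors' model): habituation as learning to
# reconstruct a stimulus; dishabituation as excess reconstruction error
# for a novel stimulus. Feature vectors below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(pattern, hidden=4, epochs=500, lr=0.1):
    """Fit a one-hidden-layer autoencoder to a single stimulus pattern."""
    n = pattern.shape[1]
    W1 = rng.normal(0, 0.1, (n, hidden))
    W2 = rng.normal(0, 0.1, (hidden, n))
    for _ in range(epochs):
        h = np.tanh(pattern @ W1)                  # encode
        err = h @ W2 - pattern                     # decoding error
        W2 -= lr * h.T @ err
        W1 -= lr * pattern.T @ ((err @ W2.T) * (1 - h ** 2))
    return W1, W2

def recon_error(x, W1, W2):
    h = np.tanh(x @ W1)
    return float(np.mean((h @ W2 - x) ** 2))

# Hypothetical minimal pair: the two nonwords differ in one feature slot.
bih = np.array([[1.0, 0.0, 1.0, 1.0, 0.0, 1.0]])
dih = np.array([[0.0, 1.0, 1.0, 1.0, 0.0, 1.0]])

W1, W2 = train_autoencoder(bih)                    # habituation phase
novelty = recon_error(dih, W1, W2) - recon_error(bih, W1, W2)
print(f"dishabituation index: {novelty:.4f}")      # > 0 means discrimination
```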

3.
The infant literature suggests that humans enter the world with impressive built‐in talker processing abilities. For example, newborns prefer the sound of their mother's voice over the sound of another woman's voice, and well before their first birthday, infants tune in to language‐specific speech cues for distinguishing between unfamiliar talkers. The early childhood literature, however, suggests that preschoolers are unable to learn to identify the voices of two unfamiliar talkers unless these voices are highly distinct from one another, and that adult‐level talker recognition does not emerge until children near adolescence. How can we reconcile these apparently paradoxical messages conveyed by the infant and early childhood literatures? Here, we address this question by testing 16.5‐month‐old infants (N = 80) in three talker recognition experiments. Our results demonstrate that infants at this age have difficulty recognizing unfamiliar talkers, suggesting that talker recognition (associating voices with people) is mastered later in life than talker discrimination (telling voices apart). We conclude that methodological differences across the infant and early childhood literatures—rather than a true developmental discontinuity—account for the performance differences in talker processing between these two age groups. Related findings in other areas of developmental psychology are discussed.

4.
Previous work suggested that humans' sophisticated speech perception abilities stem from an early capacity to pay attention to speech in the auditory environment. What are the roots of this early preference? We assess the extent to which it reflects speech being a vocal sound, a natural sound, and a familiar sound through a meta-analytic approach, classifying experiments as a function of whether they used native or foreign speech and whether the competitor, against which preference is tested, was vocal or non-vocal, natural or artificial. We also tested for the effect of age. Synthesizing data from 791 infants across 39 experiments, we found a medium effect size, confirming at the scale of the literature that infants reliably prefer speech over other sounds. This preference was not significantly moderated by the language used, by the vocal quality or naturalness of the competitor, or by infant age. The current body of evidence appears most compatible with the hypothesis that speech is preferred consistently as such, and not just due to its vocal, natural, or familiar nature. We discuss limitations of the extant body of work on speech preference, including evidence consistent with a publication bias and low representation of certain stimulus types and ages.
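For readers unfamiliar with how a pooled effect size of this kind is obtained, the sketch below shows a standard DerSimonian–Laird random-effects aggregation. The abstract does not report per-experiment values, so the effect sizes and variances here are placeholders, and the authors' actual model may well differ.

```python
# Minimal sketch of random-effects meta-analytic pooling (DerSimonian-Laird).
# The per-experiment effect sizes and variances below are made up.
import numpy as np

d = np.array([0.6, 0.3, 0.45, 0.2, 0.7])        # hypothetical effect sizes
v = np.array([0.05, 0.04, 0.06, 0.03, 0.08])    # hypothetical sampling variances

w = 1 / v                                       # fixed-effect weights
d_fe = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fe) ** 2)                 # heterogeneity statistic
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(d) - 1)) / c)         # between-study variance

w_re = 1 / (v + tau2)                           # random-effects weights
d_re = np.sum(w_re * d) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"pooled d = {d_re:.2f}, "
      f"95% CI [{d_re - 1.96 * se:.2f}, {d_re + 1.96 * se:.2f}]")
```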

5.
Language rhythm determines young infants' language discrimination abilities. However, it is unclear whether young bilingual infants exposed to rhythmically similar languages develop sensitivities to cross‐linguistic rhythm cues to discriminate their dual language input. To address this question, 3.5‐month‐old monolingual Basque, monolingual Spanish, and bilingual Basque‐Spanish infants' language discrimination abilities (across low‐pass filtered speech samples of Basque and Spanish) were tested using the visual habituation procedure. Although falling within the same rhythmic class, Basque and Spanish exhibit significant differences in their distributions of vocalic intervals (within‐rhythmic‐class variation). All infant groups in our study successfully discriminated between the languages, although each group exhibited a different pattern. Monolingual Spanish infants succeeded only when they heard Basque during habituation, suggesting that they were influenced by native language recognition. The bilingual and the Basque monolingual infants showed no such asymmetries and succeeded irrespective of the language of habituation. Additionally, bilingual infants exhibited longer looking times in the test phase than monolinguals, suggesting that bilingual infants attend to their native languages differently than monolinguals do. Overall, the results suggest that bilingual infants are sensitive to within‐rhythm acoustic regularities of their native language(s), facilitating language discrimination and hence supporting early bilingual acquisition.

6.
Assessing speech discrimination skills in individual infants from clinical populations (e.g., infants with hearing impairment) has important diagnostic value. However, most infant speech discrimination paradigms have been designed to test group effects rather than individual differences. Other procedures suffer from high attrition rates. In this study, we developed 4 variants of the Visual Habituation Procedure (VHP) and assessed their robustness in detecting individual 9‐month‐old infants' ability to discriminate highly contrastive nonwords. In each variant, infants were first habituated to audiovisual repetitions of a nonword (seepug) before entering the test phase. The test phase in Experiment 1 (extended variant) consisted of 7 old trials (seepug) and 7 novel trials (boodup) in alternating order. In Experiment 2, we tested 3 novel variants that incorporated methodological features of other behavioral paradigms. For the oddity variant, only 4 novel trials and 10 old trials were used. The stimulus alternation variant was identical to the extended variant except that novel trials were replaced with “alternating” trials—trials that contained repetitions of both the old and novel nonwords. The hybrid variant incorporated elements from both the oddity and the stimulus alternation variants. The hybrid variant proved to be the most successful in detecting statistically significant discrimination in individual infants (8 out of 10), suggesting that both the oddity and the stimulus alternation features contribute to providing a robust methodology for assessing discrimination in individual infants. In Experiment 3, we found that the hybrid variant had good test‐retest reliability. Implications of these results for future infant speech perception work with clinical populations are discussed.
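The abstract does not spell out the habituation criterion used in these variants. As a point of reference, the sketch below implements one widely used infant-controlled rule, declaring habituation when looking time over a 3-trial sliding window falls below 50% of the initial 3-trial baseline; the window size, threshold, and looking times are assumptions for illustration, not the study's parameters.

```python
# Minimal sketch of a common infant-controlled habituation criterion
# (an assumption, not necessarily the rule used in this study).
def habituated(looking_times, window=3, threshold=0.5):
    """Return the 1-based trial index at which the criterion is met, or None."""
    if len(looking_times) < 2 * window:
        return None
    baseline = sum(looking_times[:window]) / window
    for i in range(window, len(looking_times) - window + 1):
        current = sum(looking_times[i:i + window]) / window
        if current < threshold * baseline:
            return i + window          # criterion met at the end of this window
    return None

# Hypothetical looking times (seconds) across habituation trials:
print(habituated([12.0, 10.5, 11.0, 8.0, 6.0, 4.5, 3.0]))  # -> 7
```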

7.
When adults speak or sing with infants, they sound different than they do in adult-directed communication. Infant-directed (ID) communication helps caregivers to regulate infants' emotions and helps infants to process speech information, at least in ID-speech. However, it is largely unclear whether infants can also process speech information presented in ID-singing. Therefore, we examined whether infants discriminate vowels in ID-singing, as well as potential differences from ID-speech. Using an alternating trial preference procedure, infants aged 4–6 and 8–10 months were tested on their discrimination of an unfamiliar non-native vowel contrast presented in ID-like speech and singing. Relying on models of early speech sound perception, we expected that infants in their first half year of life would discriminate the vowels, in contrast to older infants, whose non-native sound perception should deteriorate, at least in ID-like speech. Our results showed that infants of both age groups were able to discriminate the vowels in ID-like singing, while only the younger group discriminated the vowels in ID-like speech. These results show that infants process speech sound information in song from early on. They also hint at diverging perceptual or attentional mechanisms guiding infants' sound processing in ID-speech versus ID-singing toward the end of the first year of life.

8.
Attunement theories of speech perception development suggest that native‐language exposure is one of the main factors shaping infants' phonemic discrimination capacity within the second half of their first year. Here, we focus on the role of acoustic–perceptual salience and language‐specific experience by assessing the discrimination of acoustically subtle Basque sibilant contrasts. We used the infant‐controlled version of the habituation procedure to assess discrimination in 6‐ to 7‐month and 11‐ to 12‐month‐old infants who varied in their amount of exposure to Basque and Spanish. We observed no significant variation in the infants' discrimination behavior as a function of their linguistic experience. Infants in both age groups exhibited poor discrimination, consistent with Basque adults finding these contrasts more difficult than some others. Our findings agree with previous research showing that perceptual discrimination of subtle speech sound contrasts may follow a different developmental trajectory, in which increased native‐language exposure seems to be a prerequisite.

9.
Infant phonetic perception reorganizes in accordance with the native language by 10 months of age. One mechanism that may underlie this perceptual change is distributional learning, a statistical analysis of the distributional frequency of speech sounds. Previous distributional learning studies have tested infants of 6–8 months, an age at which native phonetic categories have not yet developed. Here, three experiments test infants of 10 months to help illuminate perceptual ability following perceptual reorganization. English‐learning infants did not change discrimination in response to nonnative speech sound distributions from either a voicing distinction (Experiment 1) or a place‐of‐articulation distinction (Experiment 2). In Experiment 3, familiarization to the place‐of‐articulation distinction was doubled to increase the amount of exposure, and in this case infants began discriminating the sounds. These results extend the processes of distributional learning to a new phonetic contrast, and reveal that at 10 months of age, distributional phonetic learning remains effective, but is more difficult than before perceptual reorganization.
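As a concrete illustration of the distributional-learning idea (not the study's stimuli or analysis), the sketch below builds bimodal and unimodal token-frequency distributions over an 8-step phonetic continuum and asks, via Gaussian-mixture model selection, how many categories each distribution supports: a learner tracking a bimodal distribution should infer two categories, a unimodal one only one. The frequency counts and jitter are invented.

```python
# Minimal sketch of distributional learning as category inference:
# token frequencies along an 8-step continuum, with BIC-based model
# selection between a one- and a two-category Gaussian mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
steps = np.arange(1, 9)
bimodal = np.array([4, 8, 4, 2, 2, 4, 8, 4])    # two frequency peaks
unimodal = np.array([1, 3, 6, 8, 8, 6, 3, 1])   # one central peak

for name, freqs in [("bimodal", bimodal), ("unimodal", unimodal)]:
    # Expand frequencies into a familiarization sample with slight jitter.
    tokens = np.repeat(steps, freqs) + rng.normal(0, 0.25, freqs.sum())
    X = tokens.reshape(-1, 1)
    bics = [GaussianMixture(k, random_state=0).fit(X).bic(X) for k in (1, 2)]
    best = int(np.argmin(bics)) + 1
    print(f"{name}: BIC favors {best} category(ies)")
```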

10.
Children with hearing loss (HL) remain at risk for poorer language abilities than normal-hearing (NH) children despite targeted interventions; the reasons for these differences remain unclear. In NH children, research suggests speech discrimination is related to language outcomes, yet we know little about it in children with HL under the age of 2 years. We utilized a vowel contrast, /a-i/, and a consonant-vowel contrast, /ba-da/, to examine speech discrimination in 47 NH infants and 40 infants with HL. At a mean age of 3 months, EEG recorded from 11 scalp electrodes was used to compute the time-frequency mismatched response (TF-MMRSE) to the contrasts; at a mean age of 9 months, behavioral discrimination was assessed using a head-turn task. A machine learning (ML) classifier was used to predict behavioral discrimination when given an arbitrary TF-MMRSE as input, achieving accuracies of 73% for exact classification and 92% for classification within a distance of one class. Linear fits revealed a robust relationship regardless of hearing status or speech contrast. TF-MMRSE responses in the delta (1–3.5 Hz), theta (3.5–8 Hz), and alpha (8–12 Hz) bands explained the most variance in behavioral task performance. Our findings demonstrate the feasibility of using the TF-MMRSE to predict later behavioral speech discrimination.
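The evaluation scheme reported above (exact accuracy plus accuracy within a distance of one class) can be mimicked on synthetic data as follows. The feature generation, the choice of a logistic-regression classifier, and the number of classes are assumptions for illustration, not the study's pipeline.

```python
# Minimal sketch: score a classifier both exactly and "within one class"
# on ordinal labels, using synthetic EEG-like band-power features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_feat, n_class = 200, 6, 4
y = rng.integers(0, n_class, n)
# Features correlate with class plus noise, loosely mimicking
# delta/theta/alpha band responses.
X = y[:, None] + rng.normal(0, 1.0, (n, n_feat))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)

exact = np.mean(pred == y_te)
within_one = np.mean(np.abs(pred - y_te) <= 1)
print(f"exact: {exact:.2f}, within one class: {within_one:.2f}")
```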

11.
Both quality and quantity of speech from the primary caregiver have been found to impact language development. A third aspect of the input has been largely ignored: the number of talkers who provide input. Some infants spend most of their waking time with only one person; others hear many different talkers. Even if the very same words are spoken the same number of times, the pronunciations can be more variable when several talkers pronounce them. Is language acquisition affected by the number of people who provide input? To shed light on the possible link between how many people provide input in daily life and infants’ native vowel discrimination, three age groups were tested: 4‐month‐olds (before attunement to native vowels), 6‐month‐olds (at the cusp of native vowel attunement) and 12‐month‐olds (well attuned to the native vowel system). No relationship was found between talker number and native vowel discrimination skills in 4‐ and 6‐month‐olds, who are overall able to discriminate the vowel contrast. At 12 months, we observe a small positive relationship, but further analyses reveal that the data are also compatible with the null hypothesis of no relationship. Implications in the context of infant language acquisition and cognitive development are discussed.

12.
This paper quantifies the extent to which infants can perceive audio–visual congruence for speech information and assesses whether this ability changes with native language exposure over time. A hierarchical Bayesian robust regression model of 92 separate effect sizes extracted from 24 studies indicates a moderate effect size in a positive direction (0.35, CI [0.21, 0.50]). This result suggests that infants possess a robust ability to detect audio–visual congruence for speech. Moderator analyses, moreover, suggest that infants’ audio–visual matching ability for speech emerges at an early point in the process of language acquisition and remains stable for both native and non-native speech throughout early development. A sensitivity analysis of the meta-analytic data, however, indicates that a moderate publication bias for significant results could shift the lower credible interval to include null effects. Based on these findings, we outline recommendations for new lines of enquiry and suggest ways to improve the replicability of results in future investigations.
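One common way to probe the kind of publication bias flagged in this abstract is Egger's regression for funnel-plot asymmetry. The sketch below runs it on simulated effect sizes with deliberately injected small-study inflation; it uses invented data and does not reproduce the paper's hierarchical Bayesian sensitivity analysis.

```python
# Minimal sketch of Egger's regression test: regress standardized effects
# on precision; a nonzero intercept suggests funnel-plot asymmetry.
import numpy as np

rng = np.random.default_rng(0)
se = rng.uniform(0.05, 0.4, 30)         # hypothetical standard errors
d = 0.35 + rng.normal(0, se)            # effects scattered around a true mean
d += 0.5 * se                           # inject small-study (publication) bias

z, precision = d / se, 1 / se
slope, intercept = np.polyfit(precision, z, 1)
print(f"Egger intercept: {intercept:.2f} (0 expected under no bias)")
```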

13.
To successfully acquire language, infants must be able to track multiple levels of regularities in the input. In many cases, regularities only emerge after some learning has already occurred. For example, the grammatical relationships between words are only evident once the words have been segmented from continuous speech. To ask whether infants can engage in this type of learning process, 12‐month‐old infants in 2 experiments were familiarized with multiword utterances synthesized as continuous speech. The words in the utterances were ordered based on a simple finite‐state grammar. Following exposure, infants were tested on novel grammatical and ungrammatical sentences. The results indicate that the infants were able to perform 2 statistical learning tasks in sequence: first segmenting the words from continuous speech, and subsequently discovering the permissible orderings of the words. Given a single set of input, infants were able to acquire multiple levels of structure, suggesting that multiple levels of representation (initially syllable‐level combinations, subsequently word‐level combinations) can emerge during the course of learning.
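The first of the two learning steps, segmenting words via syllable co-occurrence statistics, can be made concrete with a small Saffran-style simulation. The mini-lexicon and the 0.5 transitional-probability threshold below are illustrative assumptions, not the study's stimuli.

```python
# Minimal sketch of segmentation by transitional probabilities (TPs):
# within-word syllable TPs are high, cross-word TPs are low, so a learner
# can posit word boundaries at TP dips. The mini-lexicon is invented.
import random
from collections import Counter

random.seed(0)
words = ["pabiku", "tibudo", "golatu"]             # three tri-syllabic "words"
stream = "".join(random.choice(words) for _ in range(300))
syls = [stream[i:i + 2] for i in range(0, len(stream), 2)]  # CV syllables

bigram = Counter(zip(syls, syls[1:]))
unigram = Counter(syls)
tp = {(a, b): c / unigram[a] for (a, b), c in bigram.items()}

segmented, current = [], [syls[0]]
for a, b in zip(syls, syls[1:]):
    if tp[(a, b)] < 0.5:                           # TP dip -> word boundary
        segmented.append("".join(current))
        current = []
    current.append(b)
segmented.append("".join(current))
print(sorted(set(segmented)))                      # recovers the three words
```

The abstract's second step, discovering which orderings of the segmented words a finite-state grammar permits, would then operate over the recovered word sequence rather than over raw syllables.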

14.
The ability to distinguish phonetic variations in speech that are relevant to meaning is essential for infants' language development. Previous studies into the acquisition of prosodic categories have focused on lexical stress, lexical pitch accent, or lexical tone. However, very little is known about the developmental course of infants' perception of linguistic intonation. In this study, we investigate infants' perception of the correlates of the statement/yes–no question contrast in a language that marks this sentence type distinction only by prosodic means, European Portuguese (EP). Using a modified version of the visual habituation paradigm, EP‐learning infants at 5–6 and 8–9 months were able to successfully discriminate segmentally varied, single‐prosodic word intonational phrases presented with statement or yes–no question intonation, demonstrating that they are sensitive to the prosodic cues marking this distinction as early as 5 months and maintain this sensitivity throughout the first year. These results suggest the presence of precocious discrimination abilities for intonation across segmental variation, similarly to previous reports for lexical pitch accent, but unlike previous findings for word stress.

15.
When addressing infants, many adults adopt a particular type of speech, known as infant‐directed speech (IDS). IDS is characterized by exaggerated intonation, as well as reduced speech rate, shorter utterance duration, and grammatical simplification. It is commonly asserted that IDS serves in part to facilitate language learning. Although intuitively appealing, direct empirical tests of this claim are surprisingly scarce. Additionally, studies that have examined associations between IDS and language learning have measured learning within a single laboratory session rather than the type of long‐term storage of information necessary for word learning. In this study, 7‐ and 8‐month‐old infants' long‐term memory for words was assessed when words were spoken in IDS and adult‐directed speech (ADS). Word recognition over the long term was successful for words introduced in IDS, but not for those introduced in ADS, regardless of the register in which recognition stimuli were produced. Findings are discussed in the context of the influence of particular input styles on emergent word knowledge in prelexical infants.

16.
The maternal voice appears to have a special role in infants’ language processing. The current eye‐tracking study investigated whether 24‐month‐olds (N = 149) learn novel words more easily while listening to their mother's voice than while hearing unfamiliar speakers. Our results show that maternal speech facilitates the formation of new word–object mappings across two different learning settings: a live setting in which infants are taught by their own mother or the experimenter, and a prerecorded setting in which infants hear the voice of either their own or another mother through loudspeakers. Furthermore, this study explored whether infants’ pointing gestures and novel word productions over the course of the word learning task serve as meaningful indexes of word learning behavior. Infants who repeated more target words also showed a larger learning effect in their looking behavior. Thus, maternal speech and infants’ willingness to repeat novel words are positively linked with novel word learning.

17.
Families of infants who are congenitally deaf now have the option of cochlear implantation at a very young age. In order to assess the effectiveness of early cochlear implantation, however, new behavioral procedures are needed to measure speech perception and language skills during infancy. One important component of language development is word learning: a complex skill that involves learning arbitrary relations between words and their referents. A precursor to word learning is the ability to perceive and encode intersensory relations between co-occurring auditory and visual events. Recent studies in infants with normal hearing have shown that intersensory redundancies, such as temporal synchrony, can facilitate the ability to learn arbitrary pairings between speech sounds and objects (Gogate & Bahrick, 1998). To investigate the early stages of learning arbitrary pairings of sounds and objects after cochlear implantation, we used the Preferential Looking Paradigm (PLP) to assess infants' ability to associate speech sounds with objects that moved in temporal synchrony with the onsets and offsets of the signals. Children with normal hearing at ages 6, 9, 18, and 30 months served as controls and demonstrated the ability to learn arbitrary pairings between temporally synchronous speech sounds and dynamic visual events. Infants who received their cochlear implants (CIs) at earlier ages (7-15 months of age) performed similarly to the infants with normal hearing after about 2-6 months of CI experience. In contrast, infants who received their implants at later ages (16-25 months of age) did not demonstrate learning of the associations within the context of this experiment. Possible implications of these findings are discussed.

18.
Visual speech cues from a speaker's talking face aid speech segmentation in adults, but despite the importance of speech segmentation in language acquisition, little is known about the possible influence of visual speech on infants' speech segmentation. Here, to investigate whether there is facilitation of speech segmentation by visual information, two groups of English-learning 7-month-old infants were presented with continuous speech passages, one group with auditory-only (AO) speech and the other with auditory-visual (AV) speech. Additionally, the possible relation between infants' relative attention to the speaker's mouth versus eye regions and their segmentation performance was examined. Both the AO and the AV groups of infants successfully segmented words from the continuous speech stream, but segmentation performance persisted for longer for infants in the AV group. Interestingly, while AV group infants showed no significant relation between the relative amount of time spent fixating the speaker's mouth versus eyes and word segmentation, their attention to the mouth was greater than that of AO group infants, especially early in test trials. The results are discussed in relation to the possible pathways through which visual speech cues aid speech perception.

19.
The present experiments were designed to assess infants' abilities to use syllable co-occurrence regularities to segment fluent speech across contexts. Specifically, we investigated whether 9-month-old infants could use statistical regularities in one speech context to support speech segmentation in a second context. Contexts were defined by different word sets representing contextual differences that might occur across conversations or utterances. This mimics the integration of information across multiple interactions within a single language, which is critical for language acquisition. In particular, we performed two experiments to assess whether a statistically segmented word could be used to anchor segmentation in a second, more challenging context, namely speech with variable word lengths. The results of Experiment 1 were consistent with past work suggesting that statistical learning may be hindered by speech with word-length variability, which is inherent to infants' natural speech environments. In Experiment 2, we found that infants could use a previously statistically segmented word to support word segmentation in a novel, challenging context. We also present findings suggesting that this ability was associated with infants' early word knowledge but not their performance on a cognitive development assessment.

20.
Human languages rely on the ability to learn and produce an indefinite number of words by combining consonants and vowels in a lawful manner. The categorization of speech representations into consonants and vowels is evidenced by the tendency of adult speakers, attested in many languages, to use consonants and vowels for different tasks. Consonants are favored in lexical tasks, while vowels are favored for learning structural regularities. Recent results suggest that this specialization is already observable at 12 months of age in Italian participants. Here, we investigated the representations of younger infants. In a series of anticipatory looking experiments, we showed that Italian 6‐month‐olds rely more on vowels than on consonants when learning the predictions made by individual words (Experiment 1) and are better at generalizing a structure when it is implemented over vowels than when it is implemented over consonants (Experiments 2 and 3). Up to 6 months of age, infants thus show a general vocalic bias, which contrasts with the specialization previously observed at 12 months. These results suggest that the format of speech representations changes during the second half of the first year of life.
