Similar Documents
 Found 20 similar documents (search time: 250 ms)
1.
The ability of infants to recognize phonotactic patterns in their native language is widely acknowledged. However, the specific ability of infants to recognize patterns created by nonadjacent vowels in words has seldom been investigated. In Semitic languages such as Hebrew, groups of multisyllabic words are identical in their nonadjacent vowel sequences and stress position but differ in the consonants interposed between the vowels. The goals of this study were to assess whether infants learning Hebrew show a preference for (1) a nonadjacent vocalic pattern or template, common in Hebrew nouns (CéCeC), over a nonattested nonadjacent vocalic pattern (CóCoC), and (2) a nonadjacent vocalic pattern common in Hebrew words (CaCóC) over an existing but less common pattern (CaCéC). Twenty Hebrew‐learning infants aged 8 to 11 months were presented with lists of nonsense words featuring the first two patterns (Experiment 1), and 20 were presented with nonsense words featuring the second two patterns (Experiment 2). The results showed longer listening to CéCeC than to CóCoC lists and to CaCóC than to CaCéC lists, suggesting that infants recognized the common nonadjacent vocalic patterns in both cases. The study thus demonstrates that Hebrew‐learning infants are able to disregard the intervening consonants within words and generalize their vocalic pattern to previously unheard nonwords, whether this pattern includes identical or different vowels and regardless of the rhythmic pattern of the word (trochaic or iambic). Analysis of the occurrence of the relevant vowel patterns in input speech in three Hebrew corpora (two addressed to children and one to adults) suggests that exposure to these patterns in words underlies the infants' preferences.

2.
Forms that are nonlinguistic markers in one language (i.e., “tsk‐tsk” in English) may be part of the phoneme inventory—and hence part of words—in another language. In the current paper, we demonstrate that infants' ability to learn words containing unfamiliar language sounds is influenced by the age and vocabulary size of the infant learner, as well as by cues to the speaker's referential intent. When referential cues were available, infants at 14 months learned words with non‐native speech sounds, but at 20 months only those infants with smaller vocabularies succeeded. When no referential cues were present, infants at both 14 and 20 months failed to learn the same words. The implications of the relation between linguistic sophistication and non‐native word learning are discussed.

3.
Recognizing word boundaries in continuous speech requires detailed knowledge of the native language. In the first year of life, infants acquire considerable word segmentation abilities. Infants at this early stage in word segmentation rely to a large extent on the metrical pattern of their native language, at least in stress‐based languages. In Dutch and English (both languages with a preferred trochaic stress pattern), segmentation of strong‐weak words develops rapidly between 7 and 10 months of age. Nevertheless, trochaic languages contain not only strong‐weak words but also words with a weak‐strong stress pattern. In this article, we present electrophysiological evidence of the beginnings of weak‐strong word segmentation in Dutch 10‐month‐olds. At this age, the ability to combine different cues for efficient word segmentation does not yet seem to be completely developed. We provide evidence that Dutch infants still largely rely on strong syllables, even for the segmentation of weak‐strong words.

4.
Fourteen‐month‐olds are sensitive to mispronunciations of the vowels and consonants in familiar words (N. Mani & K. Plunkett (2007), Journal of Memory and Language, 57, 252; D. Swingley & R. N. Aslin (2002), Psychological Science, 13, 480). To examine the development of this sensitivity further, the current study tests 12‐month‐olds’ sensitivity to different kinds of vowel and consonant mispronunciations of familiar words. The results reveal that vocalic changes influence word recognition, irrespective of the kinds of vocalic changes made. While consonant changes influenced word recognition in a similar manner, this was restricted to place and manner of articulation changes. Infants did not display sensitivity to voicing changes. Infants’ sensitivity to vowel mispronunciations, but not consonant mispronunciations, was influenced by their vocabulary size—infants with larger vocabularies were more sensitive to vowel mispronunciations than infants with smaller vocabularies. The results are discussed in terms of different models attempting to chart the development of acoustically or phonologically specified representations of words during infancy.

5.
The literature reports some contradictory results on the degree of phonological specificity of infants’ early lexical representations in French, a Romance language, and in Germanic languages. It is not clear whether these discrepancies are due to differences in method, in language characteristics, or in participants’ age. In this study, we examined whether 12‐ and 17‐month‐old French‐speaking infants are able to distinguish well‐pronounced from mispronounced words (differing by one or two features of their initial consonant). To this end, 46 infants participated in a preferential looking experiment in which they were presented with pairs of pictures together with a spoken word that was either well pronounced or mispronounced. The results show that both 12‐ and 17‐month‐old infants look longer at the pictures corresponding to well‐pronounced words than to mispronounced words, but show no difference between the two mispronunciation types. These results suggest that, as early as 12 months, French‐speaking infants, like those exposed to Germanic languages, already possess detailed phonological representations of familiar words.

6.
Previous research using the name‐based categorization task has shown that 20‐month‐old infants can simultaneously learn 2 words that only differ by 1 consonantal feature but fail to do so when the words only differ by 1 vocalic feature. This asymmetry was taken as evidence for the proposal that consonants are more important than vowels at the lexical level. This study explores this consonant‐vowel asymmetry in 16‐month‐old infants, using an interactive word learning task. It shows that the pattern of the 16‐month‐olds is the same as that of the 20‐month‐olds. Infants succeeded with 1‐feature consonantal contrasts (either place or voicing) but were at chance level with 1‐feature vocalic contrasts (either place or height). These results thus contribute to a growing body of evidence establishing, from early infancy to adulthood, that consonants and vowels have different roles in lexical acquisition and processing.

7.
Although the realization of the same speech sound is far from consistent across different contexts, speech recognition has to rely on phonetic detail in order to detect words. Previous findings suggest that young infants cannot avoid noticing subtle speech sound variation whenever it occurs; only later are they able to tolerate speech sound variation in some word recognition tasks. Here, we test whether this ability is associated with the time infants start storing their first word forms. We recorded event‐related potentials (ERPs) in a priming paradigm. German words (targets) followed syllables (primes) with varying amounts of phoneme overlap. We tested infants at three, six, and nine months after birth. ERPs reflected sensitivity to prime‐target variation in a single phoneme in three‐month‐olds, tolerance of this variation in six‐month‐olds, and both processing aspects in nine‐month‐olds. Our findings reveal individual developmental priorities for different aspects of speech processing, with very detailed speech processing dominating at around 3 months, rough processing dominating at around half a year after birth, and an architecture of parallel rough and detailed processing at around 9 months. Functional parallelism at the end of infancy might explain the heterogeneous pattern of results regarding the degree of acoustic detail that toddlers appear to consider at different ages and across different paradigms.

8.
While phonological development is well‐studied in infants, we know less about morphological development. Previous studies suggest that infants around one year of age can process words analytically (i.e., they can decompose complex forms into a word stem and its affixes) in morphologically simpler languages such as English and French. The current study explored whether 15‐month‐old infants learning Hungarian, a morphologically complex, agglutinative language with vowel harmony, are able to decompose words into a word stem and a suffix. Potential differences between analytical processing of complex forms with back versus front vowels were also studied. The results of Experiment 1 indicate that Hungarian infants process morphologically complex words analytically when they contain a frequent suffix. According to the results of Experiment 2, analytic processing is present for complex forms with both back and front vowels. In light of the results, we argue for the potential relevance of the early development of analytic processing for language development.

9.
Over half the world's population speaks a tone language, yet infant speech perception research has typically focused on consonants and vowels. Very young infants can discriminate a wide range of native and nonnative consonants and vowels, and then in a process of perceptual reorganization over the 1st year, discrimination of most nonnative speech sounds deteriorates. We investigated perceptual reorganization for tones by testing 6‐ and 9‐month‐old infants from tone (Chinese) and nontone (English) language environments for speech (lexical tone) and nonspeech (violin sound) tone discrimination in both cross‐sectional and longitudinal studies. Overall, Chinese infants performed equally well at 6 and 9 months for both speech and nonspeech tone discrimination. Conversely, English infants' discrimination of lexical tone declined between 6 and 9 months of age, whereas their nonspeech tone discrimination remained constant. These results indicate that the reorganization of tone perception is a function of the native language environment, and that this reorganization is linguistically based. Supplementary materials to this article are available at http://www.infancyarchives.com

10.
This study investigated prosodic and structural characteristics of infant‐directed speech to hearing‐impaired (HI) infants as they gained hearing experience with a cochlear implant over a 12‐month period. Mothers were recorded during a play interaction with their HI infants (N = 27, mean age 18.4 months) at 3, 6, and 12 months postimplantation. Two separate control groups of mothers with age‐matched normal‐hearing infants (NH‐AM) (N = 21, mean age 18.1 months) and hearing experience‐matched normal‐hearing infants (NH‐EM) (N = 24, mean age 3.1 months) were recorded at three testing sessions. Mothers produced less exaggerated pitch characteristics, a larger number of syllables per utterance, and a faster speaking rate when interacting with NH‐AM as compared to HI infants. Mothers also produced more syllables and demonstrated a trend suggesting a faster speaking rate in speech to NH‐EM relative to HI infants. Age‐related modifications included decreased pitch standard deviation and increased number of syllables in speech to NH‐AM infants, and increased number of syllables in speech to HI and NH‐EM infants across the 12‐month period. These results suggest that mothers are sensitive to the hearing status of their infants and modify characteristics of infant‐directed speech over time.

11.
Adults typically use an exaggerated, distinctive speaking style when addressing infants. However, the effects of infant‐directed (ID) speech on infants' learning are not yet well understood. This research investigates how ID speech affects how infants perform a key function in language acquisition, associating the sounds of words with their meanings. Seventeen‐month‐old infants were presented with two label‐object pairs in a habituation‐based word learning task. In Experiment 1, the labels were produced in adult‐directed (AD) speech. In Experiment 2, the labels were produced in ID prosody; they had higher pitch, greater pitch variation, and longer durations than the AD labels. We found that infants failed to learn the labels in AD speech, but succeeded in learning the same labels when they were produced in ID speech. Experiment 3 investigated the role of variability in learning from ID speech. When the labels were presented in ID prosody with no variation across tokens, infants failed to learn them. Our findings indicate that ID prosody can affect how readily infants map sounds to meanings and that the variability in prosody that is characteristic of ID speech may play a key role in its effect on learning new words.

12.
By the end of their first year of life, infants’ representations of familiar words contain phonetic detail; yet little is known about the nature of these representations at the very beginning of word learning. Bouchon et al. (2015) showed that French‐learning 5‐month‐olds could detect a vowel change in their own name but not a consonant change, and also that infants reacted to the acoustic distance between vowels. Here, we tested British English‐learning 5‐month‐olds in a similar study to examine whether the acoustic/phonological characteristics of the native language shape the nature of the acoustic/phonetic cues that infants pay attention to. In the first experiment, British English‐learning infants failed to recognize their own name compared to a mispronunciation of the initial consonant (e.g., Molly versus Nolly) or vowel (e.g., April versus Ipril). Yet in the second experiment, they did so when the contrasted name was phonetically dissimilar (e.g., Sophie versus Amber). Differences in phoneme category (stops versus continuants) between the correct consonant and the incorrect one significantly predicted infants’ own name recognition in the first experiment. Altogether, these data suggest that infants might enter into a phonetic mode of processing through different paths depending on the acoustic characteristics of their native language.

13.
Previous studies show that young monolingual infants use language‐specific cues to segment words in their native language. Here, we asked whether 8‐ and 10‐month‐old infants (N = 84) have the capacity to segment words in an inter‐mixed bilingual context. Infants heard an English‐French mixed passage that contained one target word in each language, and were then tested on their recognition of the two target words. The English‐monolingual and French‐monolingual infants showed evidence of segmentation in their native language, but not in the other, unfamiliar language. As a group, the English‐French bilingual infants segmented in both of their native languages. However, exploratory analyses suggest that exposure to language mixing may play a role in bilingual infants’ segmentation skills. Taken together, these results indicate a close relation between language experience and word segmentation skills.

14.
Infants rapidly learn both linguistic and nonlinguistic representations of their environment and begin to link these from around 6 months. While there is an increasing body of evidence for the effect of labels heard in‐task on infants’ online processing, whether infants’ learned linguistic representations shape learned nonlinguistic representations is unclear. In this study, 10‐month‐old infants were trained over the course of a week with two 3D objects, one labeled and one unlabeled. Infants then took part in a looking time task in which 2D images of the objects were presented individually in a silent familiarization phase, followed by a preferential looking trial. During the critical familiarization phase, infants looked for longer at the previously labeled stimulus than the unlabeled stimulus, suggesting that learning a label for an object had shaped infants’ representations as indexed by looking times. We interpret these results in terms of label activation and novelty response accounts and discuss implications for our understanding of early representational development.

15.
Retaining detailed representations of unstressed syllables is a logical prerequisite for infants' use of probabilistic phonotactics to segment iambic words from fluent speech. The head‐turn preference study was used to investigate the nature of English‐learners' representations of iambic word onsets. Fifty‐four 10.5‐month‐olds were familiarized to passages containing the nonsense iambic word forms ginome and tupong. Following familiarization, infants were either tested on familiar (ginome and tupong) or near‐familiar (pinome and bupong) versus unfamiliar (kidar and mafoos) words. Infants in the familiar test group (familiar vs. unfamiliar) oriented significantly longer to familiar than unfamiliar test items, whereas infants in the near‐familiar test group (near‐familiar vs. unfamiliar) oriented equally long to near‐familiar and unfamiliar test items. Our results provide evidence that infants retain fairly detailed representations of unstressed syllables and therefore support the hypothesis that infants use phonotactic cues to find words in fluent speech.

16.
Previous research has shown that children as young as 2 can learn words from 3rd‐party conversations (Akhtar, Jipson, & Callanan, 2001). The focus of this study was to determine whether younger infants could learn a new word through overhearing. Novel object labels were introduced to 18‐month‐old infants in 1 of 2 conditions: directly by an experimenter or in the context of overhearing the experimenter use the word while interacting with another adult. The findings suggest that, when memory demands are not too high, 18‐month‐old infants can learn words through overhearing.

17.
To successfully acquire language, infants must be able to track multiple levels of regularities in the input. In many cases, regularities only emerge after some learning has already occurred. For example, the grammatical relationships between words are only evident once the words have been segmented from continuous speech. To ask whether infants can engage in this type of learning process, 12‐month‐old infants in 2 experiments were familiarized with multiword utterances synthesized as continuous speech. The words in the utterances were ordered based on a simple finite‐state grammar. Following exposure, infants were tested on novel grammatical and ungrammatical sentences. The results indicate that the infants were able to perform 2 statistical learning tasks in sequence: first segmenting the words from continuous speech, and subsequently discovering the permissible orderings of the words. Given a single set of input, infants were able to acquire multiple levels of structure, suggesting that multiple levels of representation (initially syllable‐level combinations, subsequently word‐level combinations) can emerge during the course of learning.

18.
The maternal voice appears to have a special role in infants’ language processing. The current eye‐tracking study investigated whether 24‐month‐olds (N = 149) learn novel words more easily while listening to their mother's voice compared to hearing unfamiliar speakers. Our results show that maternal speech facilitates the formation of new word–object mappings across two different learning settings: a live setting in which infants are taught by their own mother or the experimenter, and a prerecorded setting in which infants hear the voice of either their own or another mother through loudspeakers. Furthermore, this study explored whether infants’ pointing gestures and novel word productions over the course of the word learning task serve as meaningful indexes of word learning behavior. Infants who repeated more target words also showed a larger learning effect in their looking behavior. Thus, maternal speech and infants’ willingness to repeat novel words are positively linked with novel word learning.

19.
When mothers speak to infants at risk for developmental dyslexia, they do not hyperarticulate vowels in their infant‐directed speech (IDS). Here, we used an innovative cross‐dyad design to investigate whether the absence of vowel hyperarticulation in IDS to at‐risk infants is a product of maternal infant‐directed behavior or of infants’ parent‐directed cues. Interactions between mothers and infants who were at risk or not at risk for dyslexia were recorded in three conditions: when mothers interacted with (a) their own infants, (b) infants who were not their own but of the same risk status, and (c) infants who were not their own and of the opposite risk status. This design revealed both infant and parent effects. Mothers of not‐at‐risk infants hyperarticulated vowels significantly more when speaking to not‐at‐risk than to at‐risk infants. In contrast, mothers of at‐risk infants hyperarticulated vowels significantly less than mothers of not‐at‐risk infants, irrespective of the infant's risk status. Mothers of not‐at‐risk infants thus adjusted their IDS to the infant's risk status, while mothers of at‐risk infants did not. We suggest that IDS is determined reciprocally by characteristics of both partners in the dyad: Both infant and maternal factors are essential for the vowel hyperarticulation component of IDS.

20.
The interaction between infants' communicative competence and the responsiveness of caregivers facilitates the transition from prelinguistic to linguistic communication. It is thus important to know how infants' communicative behavior changes in relation to different caregiver responses, and how infants' modification of communicative behavior relates to language outcomes. We investigated 39 10‐month‐old infants' communication as a function of mothers' attention and responses and the relationship to language outcomes at 15 months. We elicited infants' communicative behavior in three conditions: (1) joint attention: Mothers were visually attending and responding to infants' attention and interest; (2) available: Mothers were visually attending to infants, but not responding contingently to infants' attention and interest; (3) unavailable: Mothers were neither attending to infants nor responding to them. Infants vocalized more when mothers attended and responded to them (conditions 1 and 2) than when mothers did not (condition 3), but infants' gesture and gesture‐vocal production did not differ across conditions. Furthermore, infants' production of a higher proportion of vocalizations in the unavailable condition relative to the joint attention condition correlated with, and predicted, infants' language scores at 15 months. Thus, infants who appear to be aware of the social effects of vocalizations may learn words better.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号