Similar Articles
20 similar articles found.
1.
An error analysis of the word recognition responses of cochlear implant users and listeners with normal hearing was conducted to determine the types of partial information used by these two populations when they identified spoken words under auditory-alone and audiovisual conditions. The results revealed that the two groups used different types of partial information in identifying spoken words under auditory-alone or audiovisual presentation. Different types of partial information were also used in identifying words with different lexical properties. In our study, however, there were no significant interactions with hearing status, indicating that cochlear implant users and listeners with normal hearing identify spoken words in a similar manner. The information available to users with cochlear implants preserves much of the partial information necessary for accurate spoken word recognition.

2.
3.
To learn their first words, infants must attend to a variety of cues that signal word boundaries. One such cue is language-specific phonotactics: infants might track legal combinations and positions of segments within a word. Studies have demonstrated that, when tested across statistically high- and low-frequency phonotactics, infants repeatedly reject the low-frequency wordforms. We explore whether the capacity to access low-frequency phonotactic combinations is available at 9 months when infants are pre-exposed to wordforms containing statistically low combinations of segments. Using a modified head-turn procedure, one group of infants was presented with nonwords with low-frequency complex onsets (dr-), and another group was presented with zero-frequency onset nonwords (dl-). Following pre-exposure and familiarization, infants were then tested on their ability to segment nonwords that contained either the low- or the zero-frequency onsets. Only infants in the low-frequency condition were successful at the task, suggesting that some experience with these onsets supports segmentation.

4.
While the specificity of infants' early lexical representations has been studied extensively, researchers have only recently begun to investigate how words are organized in the developing lexicon and what mental representations are activated during processing of a word. Integrating these two lines of research, the current study asks how specific the phonological match between a perceived word and its stored form has to be in order to lead to (cascaded) lexical activation of related words during infant lexical processing. We presented German 24‐month‐olds with a cross‐modal semantic priming task where the prime word was either correctly or incorrectly pronounced. Results indicate that correct pronunciations and mispronunciations both elicit similar semantic priming effects, suggesting that the infant word recognition system is flexible enough to handle deviations from the correct form. This might be an important prerequisite to children's ability to cope with imperfect input and to recognize words under more challenging circumstances.

5.
Retaining detailed representations of unstressed syllables is a logical prerequisite for infants' use of probabilistic phonotactics to segment iambic words from fluent speech. The head‐turn preference procedure was used to investigate the nature of English‐learners' representations of iambic word onsets. Fifty‐four 10.5‐month‐olds were familiarized with passages containing the nonsense iambic word forms ginome and tupong. Following familiarization, infants were tested on either familiar (ginome and tupong) or near‐familiar (pinome and bupong) versus unfamiliar (kidar and mafoos) words. Infants in the familiar test group (familiar vs. unfamiliar) oriented significantly longer to familiar than unfamiliar test items, whereas infants in the near‐familiar test group (near‐familiar vs. unfamiliar) oriented equally long to near‐familiar and unfamiliar test items. Our results provide evidence that infants retain fairly detailed representations of unstressed syllables and therefore support the hypothesis that infants use phonotactic cues to find words in fluent speech.

6.
Leher Singh, Paul C. Quinn. Infancy, 2023, 28(4): 738-753
Due to the COVID-19 pandemic, many children receive language input through face coverings. The impact of face coverings on children's ability to understand language remains unclear. Past research with monolingual children suggests that hearing words through surgical masks does not disrupt word recognition, but hearing words through transparent face shields proves more challenging. In this study, we investigated the effects of different face coverings (surgical masks and transparent face shields) on language comprehension in bilingual children. Three-year-old English-Mandarin bilingual children (N = 28) heard familiar words in both English and Mandarin spoken through transparent face shields, through surgical masks, and without masks. When tested in English, children recognized words presented without a mask and through a surgical mask, but did not recognize words presented with transparent face shields, replicating past findings with monolingual children. In contrast, when tested in Mandarin, children recognized words presented without a mask, through a surgical mask, and through a transparent face shield. Results are discussed in terms of specific properties of English and Mandarin that may elicit different effects for transparent face shields. Overall, the present findings suggest that face coverings, in particular surgical masks, do not disrupt spoken word recognition in young bilingual children.

7.
Reading skills in hearing children are closely related to their phonological processing skills, often measured using a nonword repetition task in which a child relies on abstract phonological representations in order to decompose, encode, rehearse in working memory, and reproduce novel phonological patterns. In the present study of children who are deaf and have cochlear implants, we found that nonword repetition performance was significantly related to nonword reading, single word reading, and sentence comprehension. Communication mode and nonverbal IQ were also found to be correlated with nonword repetition and reading skills. A measure of the children's lexical diversity, derived from an oral language sample, was found to be a mediating factor in the relationship between nonword repetition and reading skills. Taken together, the present findings suggest that the construction of robust phonological representations and phonological processing skills may be important contributors to the development of reading in children who are deaf and use cochlear implants.

8.
The present study investigated the development of audiovisual speech perception skills in children who are prelingually deaf and received cochlear implants. We analyzed results from the Pediatric Speech Intelligibility (Jerger, Lewis, Hawkins, & Jerger, 1980) test of audiovisual spoken word and sentence recognition skills obtained from a large group of young children with cochlear implants enrolled in a longitudinal study, from pre-implantation to 3 years post-implantation. The results revealed better performance under the audiovisual presentation condition compared with auditory-alone and visual-alone conditions. Performance in all three conditions improved over time following implantation. The results also revealed differential effects of early sensory and linguistic experience. Children from oral communication (OC) education backgrounds performed better overall than children from total communication (TC) backgrounds. Finally, children in the early-implanted group performed better than children in the late-implanted group in the auditory-alone presentation condition after 2 years of cochlear implant use, whereas children in the late-implanted group performed better than children in the early-implanted group in the visual-alone condition. The results of the present study suggest that measures of audiovisual speech perception may provide new methods to assess hearing, speech, and language development in young children with cochlear implants.

9.
Previous research has shown that infants begin to display sensitivities to language‐specific phonotactics and probabilistic phonotactics at around 9 months of age. However, certain phonotactic patterns have not yet been examined, such as contrast neutralization, in which phonemic contrasts are typically neutralized in syllable‐ or word‐final position. Thus, the acquisition of contrast neutralization depends on infants' ability to perceive certain contrasts in final position. The studies reported here test infants' sensitivity to voicing neutralization in word‐final position and infants' discrimination of voicing and place of articulation (POA) contrasts in word‐initial and word‐final position. Nine‐ and 11‐month‐old Dutch‐learning infants showed no preference for legal versus illegal voicing phonotactics that were contrasted in word‐final position. Furthermore, 10‐month‐old infants showed no discrimination of voicing or POA contrasts in word‐final position, whereas they did show sensitivity to the same contrasts in word‐initial position. By 16 months, infants were able to discriminate POA contrasts in word‐final position, although they showed no discrimination of the word‐final voicing contrast. These findings have broad implications for models of how learners acquire the phonological structures of their language, for the types of phonotactic structures to which infants are presumed to be sensitive, and for the relative sensitivity to phonemic distinctions by syllable and word position during acquisition.

10.
Families of infants who are congenitally deaf now have the option of cochlear implantation at a very young age. In order to assess the effectiveness of early cochlear implantation, however, new behavioral procedures are needed to measure speech perception and language skills during infancy. One important component of language development is word learning, a complex skill that involves learning arbitrary relations between words and their referents. A precursor to word learning is the ability to perceive and encode intersensory relations between co-occurring auditory and visual events. Recent studies in infants with normal hearing have shown that intersensory redundancies, such as temporal synchrony, can facilitate the ability to learn arbitrary pairings between speech sounds and objects (Gogate & Bahrick, 1998). To investigate the early stages of learning arbitrary pairings of sounds and objects after cochlear implantation, we used the Preferential Looking Paradigm (PLP) to assess infants' ability to associate speech sounds with objects that moved in temporal synchrony with the onsets and offsets of the signals. Children with normal hearing aged 6, 9, 18, and 30 months served as controls and demonstrated the ability to learn arbitrary pairings between temporally synchronous speech sounds and dynamic visual events. Infants who received their cochlear implants (CIs) at earlier ages (7-15 months of age) performed similarly to the infants with normal hearing after about 2-6 months of CI experience. In contrast, infants who received their implants at later ages (16-25 months of age) did not demonstrate learning of the associations within the context of this experiment. Possible implications of these findings are discussed.

11.
Language learners rapidly acquire extensive semantic knowledge, but the development of this knowledge is difficult to study, in part because it is difficult to assess young children's lexical semantic representations. In our studies, we solved this problem by investigating lexical semantic knowledge in 24‐month‐olds using the Head‐turn Preference Procedure. In Experiment 1, looking times to a repeating spoken word stimulus (e.g., kitty‐kitty‐kitty) were shorter for trials preceded by a semantically related word (e.g., dog‐dog‐dog) than for trials preceded by an unrelated word (e.g., juice‐juice‐juice). Experiment 2 yielded similar results using a method in which pairs of words were presented on the same trial. The studies provide evidence that young children activate lexical semantic knowledge and, critically, that they do so in the absence of visual referents or sentence contexts. Auditory lexical priming is a promising technique for studying the development and structure of semantic knowledge in young children.

12.
The literature reports some contradictory results on the degree of phonological specificity of infants' early lexical representations in a Romance language (French) and in Germanic languages. It is not clear whether these discrepancies are because of differences in method, in language characteristics, or in participants' age. In this study, we examined whether 12‐ and 17‐month‐old French‐learning infants are able to distinguish well‐pronounced from mispronounced words (differing by one or two features of their initial consonant). To this end, 46 infants participated in a preferential looking experiment in which they were presented with pairs of pictures together with a spoken word that was either well pronounced or mispronounced. The results show that both 12‐ and 17‐month‐old infants look longer at the pictures corresponding to well‐pronounced words than to mispronounced words, but show no difference between the two mispronunciation types. These results suggest that, as early as 12 months, French‐learning infants, like those exposed to Germanic languages, already possess detailed phonological representations of familiar words.

13.
In the present pilot study, the researchers investigated how people with impaired hearing identify emotions from auditory and visual stimuli, with people with normal hearing acting as controls. Two separate experiments were conducted. The focus was on the communicative and social function of emotion perception. Professional actors of both genders produced emotional nonsense samples without linguistic content, samples in the Finnish language, and prolonged vowel samples. In Experiment 1, nine cochlear implant users and nine controls participated in the listening test. In Experiment 2, nine users of a variety of hearing aids and nine controls participated in the perception test. The results of both experiments showed a statistically significant difference between the two groups, people with hearing impairment and people with normal hearing, in emotion identification and valence perception from both auditory and visual stimuli. The results suggest that hearing aids and cochlear implants do not transfer the nuances of vocally conveyed emotion well enough. The results also suggest difficulties in visual perception among people with hearing impairment. This warrants further studies with larger samples.

14.
This study aims to elucidate the factors that affect the robustness of word form representations by exploring the relative influence of lexical stress and segmental identity (consonant vs. vowel) on infant word recognition. Our main question was which changes to a word may go unnoticed and which may render it unrecognizable. One hundred 11‐month‐old Hebrew‐learning infants were tested in two experiments using the Central Fixation Procedure. In Experiment 1, 20 infants were presented with iambic Familiar and Unfamiliar words. The infants listened longer to Familiar than to Unfamiliar words, indicating their recognition of frequently heard word forms. In Experiment 2, four groups of 20 infants each were tested, one group in each of four conditions involving altered iambic Familiar words contrasted with iambic Unfamiliar nonwords. In each condition, one segment in the Familiar word was changed: either a consonant or a vowel, in either the first (unstressed) or the second (stressed) syllable. In each condition, recognition of the Familiar word despite the change indicates a less accurate or less well‐specified representation. Infants recognized Familiar words despite changes to the weak (first) syllable, regardless of whether the change involved a consonant or a vowel (conditions 2a, 2c). However, a change of either consonant or vowel in the stressed (second) syllable blocked word recognition (conditions 2b, 2d). These findings support the proposal that stress pattern plays a key role in early word representation, regardless of segmental identity.

15.
This study investigated the effects of age, hearing loss, and cochlear implantation on mothers' speech to infants and children. We recorded normal‐hearing (NH) mothers speaking to their children as they typically would at home and speaking to an adult experimenter. Nine infants (10-37 months) were hearing‐impaired and had used a cochlear implant (CI) for 3 to 18 months. Eighteen NH infants and children were matched either by chronological age (10-37 months) or hearing experience (3-18 months) to the CI children. Prosodic characteristics such as fundamental frequency, utterance duration, and pause duration were measured across utterances in the speech samples. The results revealed that mothers use a typical infant‐directed speech style when speaking to hearing‐impaired children with CIs. The results also suggested that NH mothers speak with more similar vocal styles to NH children and hearing‐impaired children with CIs when matched by hearing experience rather than chronological age. Thus, mothers are sensitive to the hearing experience and linguistic abilities of their NH children as well as of hearing‐impaired children with CIs.

16.
Mary K. Fagan. Infancy, 2019, 24(3): 338-355
Infant development has rarely been informed by the behavior of infants with sensory differences, despite increasing recognition that infant behavior itself creates sensory learning opportunities. The purpose of this study of object exploration was to compare the behavior of hearing and deaf infants, with and without cochlear implants, in order to identify the effects of profound sensorineural hearing loss on infant exploration before cochlear implantation, the behavioral effects of access to auditory feedback after cochlear implantation, and the sensory motivation for exploration behaviors performed by hearing infants as well. The results showed that 9‐month‐old deaf infants explored objects as often as hearing infants, but they used systematically different approaches and showed less variation before cochlear implantation than after it. Potential associations between these early experiences and later learning are discussed in the context of embodied developmental theory, comparative studies, and research with adults. The data call for increased recognition of the active sensorimotor nature of infant learning and for future research that investigates differences in sensorimotor experience as potential mechanisms in later learning and sequential memory development.

17.
We examined whether mothers' use of temporal synchrony between spoken words and moving objects, and infants' attention to object naming, predict infants' learning of word-object relations. Following 5 min of free play, 24 mothers taught their 6‐ to 8‐month‐olds the names of 2 toy objects, Gow and Chi, during a 3‐min play episode. Infants were then tested for their word mapping. The videotaped episodes were coded for mothers' object naming and infants' attention to different naming types. Results indicated that mothers' use of temporal synchrony and infants' attention during play covaried with infants' word‐mapping ability. Specifically, infants who switched eye gaze from mother to object most frequently during naming learned the word-object relations. The findings suggest that maternal naming and infants' word‐mapping abilities are bidirectionally related. Variability in infants' attention to maternal multimodal naming explains the variability in early lexical‐mapping development.

18.
Languages differ in their phonological use of vowel duration. For the child, learning how duration contributes to lexical contrast is complicated because segmental duration is implicated in many different linguistic distinctions. Using a language‐guided looking task, we measured English and Dutch 21‐month‐olds' recognition of familiar words with normal or manipulated vowel durations. Dutch but not English learners were affected by duration changes, even though distributions of short and long vowels in both languages are similar, and English uses vowel duration as a cue to (for example) consonant coda voicing. Additionally, we found that word recognition in Dutch toddlers was affected by shortening but not lengthening of vowels, matching an asymmetry also found in Dutch adults. Considering the subtlety of the cross‐linguistic difference in the input, and the complexity of duration as a phonetic feature, our results suggest a strong capacity for phonetic analysis in children before their second birthday.

19.
Detailed representations enable infants to distinguish words from one another and more easily recognize new words. We examined whether 17‐month‐old infants encode word stress in their familiar word representations. In Experiment 1, infants were presented with pairs of familiar objects while hearing a target label either properly pronounced with the correct stress (e.g., baby /ˈbeɪbi/) or mispronounced with the incorrect stress pattern (e.g., baby /beɪˈbi/). Infants mapped both the correctly stressed and mis‐stressed labels to the target objects; however, they were slower to fixate the target when hearing the mis‐stressed label. In Experiment 2, we examined whether infants appreciate that stress has a nonproductive role in English (i.e., altering the stress of a word does not typically signal a change in word meaning) by presenting infants with a familiar object paired with a novel object while hearing either correctly stressed or mis‐stressed familiar words. Here, infants mapped the correctly stressed label to the familiar object but did not map the mis‐stressed label reliably to either the target or distractor objects. These findings suggest that word stress impacts the processing of familiar words, and that infants have burgeoning knowledge that altering the stress pattern of a familiar word does not reliably signal a new referent.

20.
Toward the end of their first year of life, infants' overly specified word representations are thought to give way to more abstract ones, which helps them to better cope with variation not relevant to word identity (e.g., voice and affect). This developmental change may help infants process the ambient language more efficiently, thus enabling rapid gains in vocabulary growth. One particular kind of variability that infants must accommodate is that of dialectal accent, because most children will encounter speakers from different regions and backgrounds. In this study, we explored developmental changes in infants' ability to recognize words in continuous speech by familiarizing them with words spoken by a speaker of their own region (North Midland‐American English) or a different region (Southern Ontario Canadian English), and testing them with passages spoken by a speaker of the opposite dialectal accent. Our results demonstrate that 12‐ but not 9‐month‐olds readily recognize words in the face of dialectal variation.
