Similar Documents
20 similar documents found.
1.
Probabilistic phonotactics refers to the frequency with which segments and sequences of segments occur in syllables and words. Knowledge of phonotactics has been shown to be an important source of information in recognizing spoken words in listeners with normal hearing. Two online tasks (an auditory same-different task and an auditory lexical decision task) were used to examine the use of phonotactic information by postlingually deafened adults who have received cochlear implants. The results of the experiments showed that cochlear implant patients with better word recognition abilities (as measured by the Northwestern University Auditory Test No. 6 (NU-6)) produced patterns of results similar to those obtained from listeners with normal hearing in Vitevitch and Luce (1999). This finding suggests that cochlear implant patients with better word recognition abilities use lexical and sublexical representations to process spoken words, much like listeners with normal hearing. In contrast, cochlear implant patients with poor word recognition abilities could not differentiate between stimuli varying in phonotactic probability and lexicality, suggesting that these patients use less distinct representations to process spoken words. The implications of these results for outcome assessments and clinical interventions are discussed.

2.
The present study investigated the development of audiovisual speech perception skills in children who are prelingually deaf and received cochlear implants. We analyzed results from the Pediatric Speech Intelligibility test (Jerger, Lewis, Hawkins, & Jerger, 1980) of audiovisual spoken word and sentence recognition skills obtained from a large group of young children with cochlear implants enrolled in a longitudinal study, from pre-implantation to 3 years post-implantation. The results revealed better performance under the audiovisual presentation condition than under the auditory-alone and visual-alone conditions. Performance in all three conditions improved over time following implantation. The results also revealed differential effects of early sensory and linguistic experience. Children from oral communication (OC) education backgrounds performed better overall than children from total communication (TC) backgrounds. Finally, children in the early-implanted group performed better than children in the late-implanted group in the auditory-alone presentation condition after 2 years of cochlear implant use, whereas children in the late-implanted group performed better than children in the early-implanted group in the visual-alone condition. The results of the present study suggest that measures of audiovisual speech perception may provide new methods to assess hearing, speech, and language development in young children with cochlear implants.

3.
An error analysis of the word recognition responses of cochlear implant users and listeners with normal hearing was conducted to determine the types of partial information used by these two populations when they identified spoken words under auditory-alone and audiovisual conditions. The results revealed that the two groups used different types of partial information in identifying spoken words under auditory-alone or audiovisual presentation. Different types of partial information were also used in identifying words with different lexical properties. In our study, however, there were no significant interactions with hearing status, indicating that cochlear implant users and listeners with normal hearing identify spoken words in a similar manner. The information available to users with cochlear implants preserves much of the partial information necessary for accurate spoken word recognition.

4.
This study investigated the effects of age, hearing loss, and cochlear implantation on mothers' speech to infants and children. We recorded normal‐hearing (NH) mothers speaking to their children as they typically would at home and speaking to an adult experimenter. Nine infants (10–37 months) were hearing‐impaired and had used a cochlear implant (CI) for 3 to 18 months. Eighteen NH infants and children were matched to the CI children either by chronological age (10–37 months) or by hearing experience (3–18 months). Prosodic characteristics such as fundamental frequency, utterance duration, and pause duration were measured across utterances in the speech samples. The results revealed that mothers use a typical infant‐directed speech style when speaking to hearing‐impaired children with CIs. The results also suggested that NH mothers speak with more similar vocal styles to NH children and hearing‐impaired children with CIs when matched by hearing experience rather than by chronological age. Thus, mothers are sensitive to the hearing experience and linguistic abilities of their NH children as well as of hearing‐impaired children with CIs.

5.
This study investigated prosodic and structural characteristics of infant‐directed speech to hearing‐impaired (HI) infants as they gained hearing experience with a cochlear implant over a 12‐month period. Mothers were recorded during a play interaction with their HI infants (N = 27, mean age 18.4 months) at 3, 6, and 12 months postimplantation. Two separate control groups of mothers, with age‐matched normal‐hearing infants (NH‐AM) (N = 21, mean age 18.1 months) and with hearing‐experience‐matched normal‐hearing infants (NH‐EM) (N = 24, mean age 3.1 months), were recorded at three testing sessions. Mothers produced less exaggerated pitch characteristics, a larger number of syllables per utterance, and a faster speaking rate when interacting with NH‐AM infants than with HI infants. Mothers also produced more syllables and showed a trend toward a faster speaking rate in speech to NH‐EM infants relative to HI infants. Age‐related modifications included decreased pitch standard deviation and an increased number of syllables in speech to NH‐AM infants, and an increased number of syllables in speech to HI and NH‐EM infants, across the 12‐month period. These results suggest that mothers are sensitive to the hearing status of their infants and modify characteristics of infant‐directed speech over time.

6.
There are reasons to believe that infant‐directed (ID) speech may make language acquisition easier for infants. However, the effects of ID speech on infants' learning remain poorly understood. The experiments reported here assess whether ID speech facilitates word segmentation from fluent speech. One group of infants heard a set of nonsense sentences spoken with intonation contours characteristic of adult‐directed (AD) speech, and the other group heard the same sentences spoken with intonation contours characteristic of ID speech. In both cases, the only cue to word boundaries was the statistical structure of the speech. Infants were able to distinguish words from syllable sequences spanning word boundaries after exposure to ID speech but not after hearing AD speech. These results suggest that ID speech facilitates word segmentation and may be useful for other aspects of language acquisition as well. Issues of direction of preference in preferential listening paradigms are also considered.
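
The statistical cue referred to in this abstract is usually operationalized as the transitional probability between adjacent syllables, which is high within words and low across word boundaries. The sketch below shows one way such boundaries can be computed from a syllable stream; the four-word vocabulary, the 0.5 threshold, and the stream itself are illustrative assumptions, not the study's actual stimuli.

    import random
    from collections import Counter

    def transitional_probabilities(syllables):
        """P(next | current) for each adjacent syllable pair in the stream."""
        pair_counts = Counter(zip(syllables, syllables[1:]))
        first_counts = Counter(syllables[:-1])
        return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

    def likely_boundaries(syllables, threshold=0.5):
        """Posit a word boundary wherever transitional probability dips below threshold."""
        tp = transitional_probabilities(syllables)
        return [i + 1 for i, pair in enumerate(zip(syllables, syllables[1:]))
                if tp[pair] < threshold]

    # Hypothetical familiarization stream built from four trisyllabic nonsense words.
    random.seed(0)
    words = [["pa", "bi", "ku"], ["ti", "bu", "do"], ["go", "la", "tu"], ["da", "ro", "pi"]]
    stream = [syl for word in random.choices(words, k=200) for syl in word]
    print(likely_boundaries(stream)[:5])  # boundaries fall at multiples of 3

Within-word pairs (e.g. pa→bi) occur with probability 1.0, while syllables spanning a word boundary follow each other only about a quarter of the time, so a simple threshold recovers the word edges: the statistical structure the infants had to exploit.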

7.
Children with hearing loss (HL) remain at risk for poorer language abilities than normal-hearing (NH) children despite targeted interventions; the reasons for these differences remain unclear. In NH children, research suggests that speech discrimination is related to language outcomes, yet we know little about it in children with HL under the age of 2 years. We utilized a vowel contrast, /a-i/, and a consonant-vowel contrast, /ba-da/, to examine speech discrimination in 47 NH infants and 40 infants with HL. At a mean age of 3 months, EEG recorded from 11 scalp electrodes was used to compute the time-frequency mismatched response (TF-MMRSE) to the contrasts; at a mean age of 9 months, behavioral discrimination was assessed using a head-turn task. A machine learning (ML) classifier was used to predict behavioral discrimination when given an arbitrary TF-MMRSE as input, achieving accuracies of 73% for exact classification and 92% for classification within a distance of one class. Linear fits revealed a robust relationship regardless of hearing status or speech contrast. TF-MMRSE responses in the delta (1–3.5 Hz), theta (3.5–8 Hz), and alpha (8–12 Hz) bands explained the most variance in behavioral task performance. Our findings demonstrate the feasibility of using the TF-MMRSE to predict later behavioral speech discrimination.
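
A minimal sketch of the kind of pipeline this abstract describes: band-limited power of a mismatch response is extracted in the delta, theta, and alpha ranges and fed to a classifier that predicts a behavioral discrimination class, scored both exactly and within one class. The sampling rate, band edges as feature bins, choice of a random-forest classifier, and synthetic data are all illustrative assumptions; the study's actual feature extraction and model are not specified here.

    import numpy as np
    from scipy.signal import welch
    from sklearn.ensemble import RandomForestClassifier

    FS = 250  # assumed EEG sampling rate (Hz)
    BANDS = {"delta": (1.0, 3.5), "theta": (3.5, 8.0), "alpha": (8.0, 12.0)}

    def band_powers(mmr, fs=FS):
        """Mean spectral power of a mismatch-response waveform in each band."""
        freqs, psd = welch(mmr, fs=fs, nperseg=min(len(mmr), fs))
        return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]

    # Hypothetical data: one MMR waveform per infant, plus a behavioral class label.
    rng = np.random.default_rng(0)
    mmrs = rng.standard_normal((40, 2 * FS))    # 40 infants, 2 s of signal each
    labels = rng.integers(0, 4, size=40)        # e.g. four ordinal performance classes

    X = np.array([band_powers(m) for m in mmrs])
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)

    # In-sample for brevity; the study's accuracies would come from held-out data.
    pred = clf.predict(X)
    print("exact:", np.mean(pred == labels))
    print("within one class:", np.mean(np.abs(pred - labels) <= 1))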

8.
This study investigated the effects of anxiety on nonverbal aspects of speech using data collected in the framework of a large study of social phobia treatment. The speech of social phobics (N = 71) was recorded during an anxiogenic public speaking task both before and after treatment. The speech samples were analyzed with respect to various acoustic parameters related to pitch, loudness, voice quality, and temporal aspects of speech. The samples were further content-masked by low-pass filtering (which obscures the linguistic content of the speech but preserves nonverbal affective cues) and subjected to listening tests. Results showed that a decrease in experienced state anxiety after treatment was accompanied by corresponding decreases in (a) several acoustic parameters (i.e., mean and maximum voice pitch, high-frequency components in the energy spectrum, and proportion of silent pauses), and (b) listeners’ perceived level of nervousness. Both speakers’ self-ratings of state anxiety and listeners’ ratings of perceived nervousness were further correlated with similar acoustic parameters. The results complement earlier studies on vocal affect expression which have been conducted on posed, rather than authentic, emotional speech.

9.
Families of infants who are congenitally deaf now have the option of cochlear implantation at a very young age. In order to assess the effectiveness of early cochlear implantation, however, new behavioral procedures are needed to measure speech perception and language skills during infancy. One important component of language development is word learning: a complex skill that involves learning arbitrary relations between words and their referents. A precursor to word learning is the ability to perceive and encode intersensory relations between co-occurring auditory and visual events. Recent studies in infants with normal hearing have shown that intersensory redundancies, such as temporal synchrony, can facilitate the ability to learn arbitrary pairings between speech sounds and objects (Gogate & Bahrick, 1998). To investigate the early stages of learning arbitrary pairings of sounds and objects after cochlear implantation, we used the Preferential Looking Paradigm (PLP) to assess infants' ability to associate speech sounds with objects that moved in temporal synchrony with the onsets and offsets of the signals. Children with normal hearing aged 6, 9, 18, and 30 months served as controls and demonstrated the ability to learn arbitrary pairings between temporally synchronous speech sounds and dynamic visual events. Infants who received their cochlear implants (CIs) at earlier ages (7-15 months of age) performed similarly to the infants with normal hearing after about 2-6 months of CI experience. In contrast, infants who received their implants at later ages (16-25 months of age) did not demonstrate learning of the associations within the context of this experiment. Possible implications of these findings are discussed.

10.
The Supreme Court's recent decision in Miller v. Alabama found that juvenile life-without-parole sentences for homicide crimes are unconstitutional when mandated by state law; such a sentence is permitted only after an individualized decision determines that the sanction is proportional given the circumstances of the offense and any mitigating factors. This decision, for a number of reasons, does not go far enough in protecting youthful offenders afflicted with maltreatment victimization, mental health problems, and/or learning disabilities, all potential links for some adolescents to serious offending and potentially homicide. While the Supreme Court has not protected these youthful offenders from a potential life sentence, early interventions and preventative programming can help decrease serious adolescent offending behaviors. So while many states will, post Miller, continue to allow this life imprisonment sentence, it is only just, in light of the extensive difficulties many of these adolescents face, that their futures allow at least the possibility of a parole hearing.

11.
Accurate sound source localization has advantages for the performance of work by humans. The ability to accurately localize sound sources contributes to perception, decision making, and task performance. Two studies were conducted to investigate the prevalence of accurate sound source localization and the enhancement that spatially separated sound source locations can have on speech perception. The first study was conducted to characterize the ability to detect the location of sound sources in the horizontal plane. A sample of 117 participants with hearing within normal limits participated in the study. The results indicated that sound sources located towards the front of the participant were identified more frequently than sound sources located towards the rear positions. Based on the results of the first study, a second study was conducted to assess performance within a listening task. Three different spatial configurations were used to assess whether similar trends in performance translated to sound sources presented through headphones. Fifteen research participants performed a Coordinated Response Measure (CRM) task requiring the identification of a speech phrase and its associated information for a diotic configuration and two different spatial sound source configurations. Performance measured for the diotic configuration was significantly (p < 0.05) lower than for the two spatial configurations. The current studies indicate distinct advantages of utilizing localized sound sources to present auditory signals and speech to listeners.

12.
Assessing speech discrimination skills in individual infants from clinical populations (e.g., infants with hearing impairment) has important diagnostic value. However, most infant speech discrimination paradigms have been designed to test group effects rather than individual differences. Other procedures suffer from high attrition rates. In this study, we developed 4 variants of the Visual Habituation Procedure (VHP) and assessed their robustness in detecting individual 9‐month‐old infants' ability to discriminate highly contrastive nonwords. In each variant, infants were first habituated to audiovisual repetitions of a nonword (seepug) before entering the test phase. The test phase in Experiment 1 (extended variant) consisted of 7 old trials (seepug) and 7 novel trials (boodup) in alternating order. In Experiment 2, we tested 3 novel variants that incorporated methodological features of other behavioral paradigms. For the oddity variant, only 4 novel trials and 10 old trials were used. The stimulus alternation variant was identical to the extended variant except that novel trials were replaced with "alternating" trials—trials that contained repetitions of both the old and novel nonwords. The hybrid variant incorporated elements from both the oddity and the stimulus alternation variants. The hybrid variant proved to be the most successful in detecting statistically significant discrimination in individual infants (8 out of 10), suggesting that both the oddity and the stimulus alternation features contribute to providing a robust methodology for assessing discrimination in individual infants. In Experiment 3, we found that the hybrid variant had good test‐retest reliability. Implications of these results for future infant speech perception work with clinical populations are discussed.

13.
A review of speech identification studies examining the abilities of listeners to distinguish African American and European American voices shows that Americans can recognize many African American voices with a high degree of accuracy even in the absence of stereotypical morphosyntactic and lexical features. Experiments to determine what cues listeners use to distinguish ethnicity have not yielded such consistent results, perhaps suggesting that listeners may access a wide variety of cues if necessary. An experiment involving African Americans with features of a European American vernacular demonstrated that African Americans with atypical features are difficult for listeners to identify. Analysis suggested that vowel quality and intonation could have misled respondents but did not rule out timing and voice quality as factors in identification.

14.
Infants' responses in speech sound discrimination tasks can be nonmonotonic over time. Stager and Werker (1997) reported such data in a bimodal habituation task. In this task, 8‐month‐old infants were capable of discriminations that involved minimal contrast pairs, whereas 14‐month‐old infants were not. It was argued that the older infants' attenuated performance was linked to their processing of the stimuli for meaning. The authors suggested that these data are diagnostic of a qualitative shift in infant cognition. We describe an associative connectionist model showing a similar decrement in discrimination without any qualitative shift in processing. The model suggests that responses to phonemic contrasts may be a nonmonotonic function of experience with language. The implications of this idea are discussed. The model also provides a formal framework for studying habituation‐dishabituation behaviors in infancy.
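
To make the habituation‐dishabituation logic such models formalize concrete: "looking time" can be modeled as a network's prediction (reconstruction) error, which declines over repeated presentations of the habituation stimulus and recovers for a novel one. The delta-rule autoassociator below is a deliberately simple illustration of that framework, not the authors' architecture; the dimensionality, learning rate, and random patterns are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def delta_step(W, p, lr=0.1):
        """One presentation: nudge weights toward reconstructing pattern p."""
        return W + lr * np.outer(p - W @ p, p)

    def looking_time(W, p):
        """Model 'looking time' as reconstruction (prediction) error for p."""
        return float(np.linalg.norm(p - W @ p))

    habituation = rng.standard_normal(12); habituation /= np.linalg.norm(habituation)
    novel = rng.standard_normal(12); novel /= np.linalg.norm(novel)

    W = np.zeros((12, 12))
    for trial in range(8):  # habituation phase: error declines trial by trial
        print("habituation trial", trial, round(looking_time(W, habituation), 3))
        W = delta_step(W, habituation)
    print("old  ", round(looking_time(W, habituation), 3))
    print("novel", round(looking_time(W, novel), 3))  # error recovers for the novel pattern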

15.
A long line of research investigates how infants learn the sounds and words in their ambient language over the first year of life, through behavioral tasks involving discrimination and recognition. More recently, individual performance in such tasks has been used to predict later language development. Does this mean that dependent measures in such tasks are reliable and can stably measure speech perception skills over short time spans? Our three laboratories independently tested infants with a given task and retested them within 0–18 days. Together, we can report data from 12 new experiments (total number of paired observations N = 409), ranging from vowel and consonant discrimination to recognition of phrasal units. Results reveal that reliability is extremely variable across experiments. We discuss possible causes and implications of this variability, as well as the main effects revealed by this work. Additionally, we offer suggestions for the field of infant speech perception to improve the reliability of its methodologies through data repositories and crowdsourcing.

16.
The infant literature suggests that humans enter the world with impressive built‐in talker processing abilities. For example, newborns prefer the sound of their mother's voice over the sound of another woman's voice, and well before their first birthday, infants tune in to language‐specific speech cues for distinguishing between unfamiliar talkers. The early childhood literature, however, suggests that preschoolers are unable to learn to identify the voices of two unfamiliar talkers unless these voices are highly distinct from one another, and that adult‐level talker recognition does not emerge until children near adolescence. How can we reconcile these apparently paradoxical messages conveyed by the infant and early childhood literatures? Here, we address this question by testing 16.5‐month‐old infants (N = 80) in three talker recognition experiments. Our results demonstrate that infants at this age have difficulty recognizing unfamiliar talkers, suggesting that talker recognition (associating voices with people) is mastered later in life than talker discrimination (telling voices apart). We conclude that methodological differences across the infant and early childhood literatures, rather than a true developmental discontinuity, account for the performance differences in talker processing between these two age groups. Related findings in other areas of developmental psychology are discussed.

17.
Reading skills in hearing children are closely related to their phonological processing skills, often measured using a nonword repetition task in which a child relies on abstract phonological representations in order to decompose, encode, rehearse in working memory and reproduce novel phonological patterns. In the present study of children who are deaf and have cochlear implants, we found that nonword repetition performance was significantly related to nonword reading, single word reading and sentence comprehension. Communication mode and nonverbal IQ were also found to be correlated with nonword repetition and reading skills. A measure of the children's lexical diversity, derived from an oral language sample, was found to be a mediating factor in the relationship between nonword repetition and reading skills. Taken together, the present findings suggest that the construction of robust phonological representations and phonological processing skills may be important contributors to the development of reading in children who are deaf and use cochlear implants.

18.
While both face-to-face and telephone interaction involve problems of management for stutterers and their listeners, the absence of visual cues in telephone talk poses special problems which can lead to interactional breakdown. These problems are accentuated by factors such as an individual's pattern of stuttering and adaptations to stuttering, the awareness context in which interaction takes place, and the nature of the relationship between the speakers. The author's sociological perspective goes beyond the clinical perspective of speech pathology in helping to understand the interactional and identity troubles of stutterers. His analysis also shows how both stutterers' breaches of conversational norms and the practices used to remedy these breaches illuminate taken-for-granted expectations in telephone interaction.

19.
In the present pilot study, the researchers investigated how people with impaired hearing identify emotions from auditory and visual stimuli, with people with normal hearing acting as controls. Two separate experiments were conducted. The focus was on the communicative and social function of emotion perception. Professional actors of both genders produced emotional nonsense samples without linguistic content, samples in the Finnish language, and prolonged vowel samples. In Experiment 1, nine cochlear implant users and nine controls participated in the listening test. In Experiment 2, nine users of a variety of hearing aids and nine controls participated in the perception test. The results of both experiments showed a statistically significant difference between the two groups, people with hearing impairment and people with normal hearing, in emotion identification and valence perception from both auditory and visual stimuli. The results suggest that hearing aids and cochlear implants do not adequately transfer the emotional nuances conveyed by the voice. The results also suggest difficulties in visual perception among people with hearing impairment. This warrants further study with larger samples.

20.
Linguistic stress and sequential statistical cues to word boundaries interact during speech segmentation in infancy. However, little is known about how the different acoustic components of stress constrain statistical learning. The current studies were designed to investigate whether intensity and duration each function independently as cues to initial prominence (trochaic‐based hypothesis) or whether, as predicted by the Iambic‐Trochaic Law (ITL), intensity and duration have characteristic and separable effects on rhythmic grouping (ITL‐based hypothesis) in a statistical learning task. Infants were familiarized with an artificial language (Experiments 1 and 3) or a tone stream (Experiment 2) in which there was an alternation in either intensity or duration. In addition to potential acoustic cues, the familiarization sequences also contained statistical cues to word boundaries. In speech (Experiment 1) and nonspeech (Experiment 2) conditions, 9‐month‐old infants demonstrated discrimination patterns consistent with an ITL‐based hypothesis: intensity signaled initial prominence and duration signaled final prominence. The results of Experiment 3, in which 6.5‐month‐old infants were familiarized with the speech streams from Experiment 1, suggest that there is a developmental change in infants’ willingness to treat increased duration as a cue to word offsets in fluent speech. Infants’ perceptual systems interact with linguistic experience to constrain how infants learn from their auditory environment.
