Similar Literature
20 similar documents found (search time: 31 ms)
1.
Disparities in children's early language skills associated with socioeconomic factors have led to many studies examining children's early language environments, but few as yet in the first year of life. This longitudinal study assessed the home language environments of 50 Australian infants, who varied in maternal education (university education, or not). Full‐day audio recordings were collected and analyzed using the LENA system when infants were aged 6–9 months and 12–15 months. Using the device‐specific analysis software, we assessed 12‐h projected counts of (1) adult speech input, (2) conversational interactions, and (3) child vocalizations. At both ages, higher maternal education was associated with higher counts of adult words and conversational turns, but not child vocalizations. The study adds to the literature by demonstrating disparities in infants' language experience within the first year of life, related to mothers' education, with implications for early intervention and parenting support.

2.
Forms that are nonlinguistic markers in one language (e.g., "tsk‐tsk" in English) may be part of the phoneme inventory—and hence part of words—in another language. In the current paper, we demonstrate that infants' ability to learn words containing unfamiliar language sounds is influenced by the age and vocabulary size of the infant learner, as well as by cues to the speaker's referential intent. When referential cues were available, infants at 14 months learned words with non‐native speech sounds, but at 20 months only those infants with smaller vocabularies succeeded. When no referential cues were present, infants at both 14 and 20 months failed to learn the same words. The implications of the relation between linguistic sophistication and non‐native word learning are discussed.

3.
The ability to distinguish phonetic variations in speech that are relevant to meaning is essential for infants' language development. Previous studies into the acquisition of prosodic categories have focused on lexical stress, lexical pitch accent, or lexical tone. However, very little is known about the developmental course of infants' perception of linguistic intonation. In this study, we investigate infants' perception of the correlates of the statement/yes–no question contrast in a language that marks this sentence type distinction only by prosodic means, European Portuguese (EP). Using a modified version of the visual habituation paradigm, EP‐learning infants at 5–6 and 8–9 months were able to successfully discriminate segmentally varied, single‐prosodic word intonational phrases presented with statement or yes–no question intonation, demonstrating that they are sensitive to the prosodic cues marking this distinction as early as 5 months and maintain this sensitivity throughout the first year. These results suggest the presence of precocious discrimination abilities for intonation across segmental variation, similarly to previous reports for lexical pitch accent, but unlike previous findings for word stress.

4.
Language rhythm determines young infants' language discrimination abilities. However, it is unclear whether young bilingual infants exposed to rhythmically similar languages develop sensitivities to cross‐linguistic rhythm cues to discriminate their dual language input. To address this question, 3.5‐month‐old monolingual Basque, monolingual Spanish and bilingual Basque‐Spanish infants' language discrimination abilities (across low‐pass filtered speech samples of Basque and Spanish) were tested using the visual habituation procedure. Although falling within the same rhythmic class, Basque and Spanish exhibit significant differences in their distributions of vocalic intervals (within‐rhythmic class variation). All infant groups in our study successfully discriminated between the languages, although each group exhibited a different pattern. Monolingual Spanish infants succeeded only when they heard Basque during habituation, suggesting that they were influenced by native language recognition. The bilingual and the Basque monolingual infants showed no such asymmetries and succeeded irrespective of the language of habituation. Additionally, bilingual infants exhibited longer looking times in the test phase as compared with monolinguals, suggesting that bilingual infants attend to their native languages differently than monolinguals do. Overall, results suggest that bilingual infants are sensitive to within‐rhythm acoustic regularities of their native language(s), facilitating language discrimination and hence supporting early bilingual acquisition.

5.
Previous studies show that young monolingual infants use language‐specific cues to segment words in their native language. Here, we asked whether 8 and 10‐month‐old infants (N = 84) have the capacity to segment words in an inter‐mixed bilingual context. Infants heard an English‐French mixed passage that contained one target word in each language, and were then tested on their recognition of the two target words. The English‐monolingual and French‐monolingual infants showed evidence of segmentation in their native language, but not in the other unfamiliar language. As a group, the English‐French bilingual infants segmented in both of their native languages. However, exploratory analyses suggest that exposure to language mixing may play a role in bilingual infants' segmentation skills. Taken together, these results indicate a close relation between language experience and word segmentation skills.

6.
Recognizing word boundaries in continuous speech requires detailed knowledge of the native language. In the first year of life, infants acquire considerable word segmentation abilities. Infants at this early stage in word segmentation rely to a large extent on the metrical pattern of their native language, at least in stress‐based languages. In Dutch and English (both languages with a preferred trochaic stress pattern), segmentation of strong‐weak words develops rapidly between 7 and 10 months of age. Nevertheless, trochaic languages contain not only strong‐weak words but also words with a weak‐strong stress pattern. In this article, we present electrophysiological evidence of the beginnings of weak‐strong word segmentation in Dutch 10‐month‐olds. At this age, the ability to combine different cues for efficient word segmentation does not yet seem to be completely developed. We provide evidence that Dutch infants still largely rely on strong syllables, even for the segmentation of weak‐strong words.

7.
Children with hearing loss (HL) remain at risk for poorer language abilities than normal hearing (NH) children despite targeted interventions; reasons for these differences remain unclear. In NH children, research suggests speech discrimination is related to language outcomes, yet we know little about it in children with HL under the age of 2 years. We utilized a vowel contrast, /a-i/, and a consonant-vowel contrast, /ba-da/, to examine speech discrimination in 47 NH infants and 40 infants with HL. At a mean age of 3 months, EEG recorded from 11 scalp electrodes was used to compute the time-frequency mismatched response (TF-MMRSE) to the contrasts; at a mean age of 9 months, behavioral discrimination was assessed using a head-turn task. A machine learning (ML) classifier was used to predict behavioral discrimination when given an arbitrary TF-MMRSE as input, achieving accuracies of 73% for exact classification and 92% for classification within a distance of one class. Linear fits revealed a robust relationship regardless of hearing status or speech contrast. TF-MMRSE responses in the delta (1–3.5 Hz), theta (3.5–8 Hz), and alpha (8–12 Hz) bands explained the most variance in behavioral task performance. Our findings demonstrate the feasibility of using TF-MMRSE to predict later behavioral speech discrimination.

8.
Infants perceptually tune to the phonemes of their native languages in the first year of life, thereby losing the ability to discriminate non‐native phonemes. Infants who perceptually tune earlier have been shown to develop stronger language skills later in childhood. We hypothesized that socioeconomic disparities, which have been associated with differences in the quality and quantity of language in the home, would contribute to individual differences in phonetic discrimination. Seventy‐five infants were assessed on measures of phonetic discrimination at 9 months, on the quality of the home environment at 15 months, and on language abilities at both ages. Phonetic discrimination did not vary according to socioeconomic status (SES), but was significantly associated with the quality of the home environment. This association persisted when controlling for 9‐month expressive language abilities, rendering it less likely that infants with better expressive language skills were simply engendering higher quality home interactions. This suggests that infants from linguistically richer home environments may be more tuned to their native language and therefore less able to discriminate non‐native contrasts at 9 months relative to infants whose home environments are less responsive. These findings indicate that home language environments may be more critical than SES in contributing to early language perception, with possible implications for language development more broadly.

9.
Caregiver voices may provide cues to mobilize or calm infants. This study examined whether maternal prosody predicted changes in infants' biobehavioral state after the still face, a stressor in which the mother withdraws and reinstates social engagement. Ninety-four dyads participated in the study (infant age 4–8 months). Infants' heart rate and respiratory sinus arrhythmia (measuring cardiac vagal tone) were derived from an electrocardiogram (ECG). Infants' behavioral distress was measured by negative vocalizations, facial expressions, and gaze aversion. Mothers' vocalizations were measured via a composite of spectral analysis and spectro-temporal modulation using a two-dimensional fast Fourier transformation of the audio spectrogram. High values on the maternal prosody composite were associated with decreases in infants' heart rate (β = −.26, 95% CI: [−0.46, −0.05]) and behavioral distress (β = −.23, 95% CI: [−0.42, −0.03]), and increases in cardiac vagal tone in infants whose vagal tone was low during the stressor (1 SD below mean: β = .39, 95% CI: [0.06, 0.73]). High infant heart rate predicted increases in the maternal prosody composite (β = .18, 95% CI: [0.03, 0.33]). These results suggest specific vocal acoustic features of speech that are relevant for regulating infants' biobehavioral state and demonstrate mother–infant bi-directional dynamics.

10.
Families of infants who are congenitally deaf now have the option of cochlear implantation at a very young age. In order to assess the effectiveness of early cochlear implantation, however, new behavioral procedures are needed to measure speech perception and language skills during infancy. One important component of language development is word learning: a complex skill that involves learning arbitrary relations between words and their referents. A precursor to word learning is the ability to perceive and encode intersensory relations between co-occurring auditory and visual events. Recent studies in infants with normal hearing have shown that intersensory redundancies, such as temporal synchrony, can facilitate the ability to learn arbitrary pairings between speech sounds and objects (Gogate & Bahrick, 1998). To investigate the early stages of learning arbitrary pairings of sounds and objects after cochlear implantation, we used the Preferential Looking Paradigm (PLP) to assess infants' ability to associate speech sounds with objects that moved in temporal synchrony with the onsets and offsets of the signals. Children with normal hearing aged 6, 9, 18, and 30 months served as controls and demonstrated the ability to learn arbitrary pairings between temporally synchronous speech sounds and dynamic visual events. Infants who received their cochlear implants (CIs) at earlier ages (7-15 months of age) performed similarly to the infants with normal hearing after about 2-6 months of CI experience. In contrast, infants who received their implants at later ages (16-25 months of age) did not demonstrate learning of the associations within the context of this experiment. Possible implications of these findings are discussed.

11.
During their first year, infants attune to the faces and language(s) that are frequent in their environment. The present study investigates the impact of language familiarity on how French-learning 9- and 12-month-olds recognize own-race faces. In Experiment 1, infants were familiarized with the talking face of a Caucasian bilingual German-French speaker reciting a nursery rhyme in French (native condition) or in German (non-native condition). In the test phase, infants' face recognition was tested by presenting a picture of the speaker's face they were familiarized with, side by side with a novel face. At 9 and 12 months, neither infants in the native condition nor the ones in the non-native condition clearly recognized the speaker's face. In Experiment 2, we familiarized infants with the still picture of the speaker's face, along with the auditory speech stream. This time, both 9- and 12-month-olds recognized the face of the speaker they had been familiarized with, but only if she spoke in their native language. This study shows that at least from 9 months of age, language modulates the way faces are recognized.

12.
Although the second year of life is characterized by dramatic changes in expressive language and by increases in negative emotion expression, verbal communication and emotional communication are often studied separately. With a sample of twenty‐five one‐year‐olds (12–23 months), we used Language Environment Analysis (LENA; Xu, Yapanel, & Gray, 2009, Reliability of the LENA™ Language Environment Analysis System in young children's natural home environment. LENA Foundation) to audio‐record and quantify parent–toddler communication, including toddlers' vocal negative emotion expressions, across a full waking day. Using a multilevel extension of lag‐sequential analysis, we investigated whether parents are differentially responsive to toddlers' negative emotion expressions compared to their verbal or preverbal vocalizations, and we examined the effects of parents' verbal responses on toddlers' subsequent communicative behavior. Toddlers' negative emotions were less likely than their vocalizations to be followed by parent speech. However, when negative emotions were followed by parent speech, toddlers were most likely to vocalize next. Post hoc analyses suggest that older toddlers and toddlers with higher language abilities were more likely to shift from negative emotion to verbal or preverbal vocalization following parent response. Implications of the results for understanding the parent–toddler communication processes that support both emotional development and verbal development are discussed.

13.
Strollers and backpacks are employed early, frequently, and throughout the first year, with parents overwhelmingly using strollers. However, because these transport modalities put infants in different proximities to caregivers, postures, and states of alertness, their use may translate to different opportunities that are of developmental consequence, particularly with regard to language. We used GoPro technology in a within‐subjects counterbalanced design to measure dyadic vocalizations in strollers and backpacks with 7‐ to 11‐month‐old infants. Parent‐infant dyads (N = 36) who regularly used both transport modes took two 8‐min walks in their own neighborhoods using their own carriers while wearing lightweight head‐mounted GoPros. There was significantly more parent speech, infant vocalizations, dyadic conversations, and infant‐initiated speech in backpacks, as well as more head motion consistent with visual scanning by infants. Backpacks appear to be a practical way to encourage more engaging, language‐enriched developmental opportunities in the critical first year.

14.
Sensitivity to language‐specific stress patterns during infancy facilitates finding, mapping, and recognizing words, and early preferences for the predominant stress pattern of the infant's native language have been argued to facilitate language‐relevant outcomes (Ference & Curtin, 2013; Weber et al., 2005). We examined 12‐month‐old infant siblings of typically developing children (SIBS‐TD) and infant siblings of children diagnosed with autism spectrum disorder (ASD; SIBS‐A) on their ability to map differentially stressed labels to objects. We also examined whether success at this task relates to infants' vocabulary size at 12 months, and more specifically to SIBS‐A's vocabulary at both 12 and 24 months. SIBS‐TD successfully mapped the word–object pairings, which related to their vocabulary comprehension at 12 months. In contrast, SIBS‐A as a group did not map the word–object pairings, which was unrelated to vocabulary size at 12 months. However, success on this task for SIBS‐A predicted expressive language abilities at 24 months using the Mullen Scales of Early Learning (MSEL; Mullen, 1995) and the MacArthur‐Bates Communicative Development Inventory (MB‐CDI; Fenson et al., 1993). Our study is the first to demonstrate that 12‐month‐old SIBS‐A who succeed at word mapping using lexical stress are more likely to have stronger expressive language abilities at 24 months.

15.
Infant phonetic perception reorganizes in accordance with the native language by 10 months of age. One mechanism that may underlie this perceptual change is distributional learning, a statistical analysis of the distributional frequency of speech sounds. Previous distributional learning studies have tested infants of 6–8 months, an age at which native phonetic categories have not yet developed. Here, three experiments test infants of 10 months to help illuminate perceptual ability following perceptual reorganization. English‐learning infants did not change discrimination in response to nonnative speech sound distributions from either a voicing distinction (Experiment 1) or a place‐of‐articulation distinction (Experiment 2). In Experiment 3, familiarization to the place‐of‐articulation distinction was doubled to increase the amount of exposure, and in this case infants began discriminating the sounds. These results extend the processes of distributional learning to a new phonetic contrast, and reveal that at 10 months of age, distributional phonetic learning remains effective, but is more difficult than before perceptual reorganization.

16.
Caregivers' touches that occur alongside words and utterances could aid in the detection of word/utterance boundaries and the mapping of word forms to word meanings. We examined changes in caregivers' use of touches with their speech directed to infants using a multimodal cross-sectional corpus of 35 Korean mother-child dyads across three age groups of infants (8, 14, and 27 months). We tested the hypothesis that caregivers' frequency and use of touches with speech change with infants' development. Results revealed that the frequency of word/utterance-touch alignment as well as word + touch co-occurrence is highest in speech addressed to the youngest group of infants. Thus, this study provides support for the hypothesis that caregivers' use of touch during dyadic interactions is sensitive to infants' age in a way similar to caregivers' use of speech alone and could provide cues useful to infants' language learning at critical points in early development.

17.
Speech preferences emerge very early in infancy, pointing to a special status for speech in auditory processing and a crucial role of prosody in driving infant preferences. Recent theoretical models suggest that infant auditory perception may initially encompass a broad range of human and nonhuman vocalizations, then tune in to relevant sounds for the acquisition of species‐specific communication sounds. However, little is known about the sound properties eliciting infants' tuning‐in to speech. To address this issue, we presented a group of 4‐month‐olds with segments of non‐native speech (Mandarin Chinese) and birdsong, a nonhuman vocalization that shares some prosodic components with speech. A second group of infants was presented with the same segment of birdsong paired with Mandarin played in reverse. Infants showed an overall preference for birdsong over non‐native speech. Moreover, infants in the Backward condition preferred birdsong over backward speech, whereas infants in the Forward condition did not show a clear preference. These results confirm the prominent role of prosody in early auditory processing and suggest that infants' preferences may privilege communicative vocalizations characterized by certain prosodic dimensions regardless of the biological source of the sound, human or nonhuman.

18.
Speech rhythm is considered one of the first windows into the native language, and the taxonomy of rhythm classes is commonly used to explain early language discrimination. Relying on formal rhythm classification is problematic for two reasons. First, it is not known to which extent infants' sensitivity to language variation is attributable to rhythm alone, and second, it is not known how infants discriminate languages not classified in any of the putative rhythm classes. Employing a central-fixation preference paradigm with natural stimuli, this study tested whether infants differentially attend to native versus nonnative varieties that differ only in temporal rhythm cues, and both of which are rhythmically unclassified. An analysis of total looking time did not detect any rhythm preferences at any age. First-look duration, arguably more closely reflecting infants' underlying perceptual sensitivities, indicated age-specific preferences for native versus non-native rhythm: 4-month-olds seemed to prefer the native-, and 6-month-olds the non-native language-variety. These findings suggest that infants indeed acquire native rhythm cues rather early, by the 4th month, supporting the theory that rhythm can bootstrap further language development. Our data on infants' processing of rhythmically unclassified languages suggest that formal rhythm classification does not determine infants' ability to discriminate language varieties.

19.
Infants demonstrate robust audiovisual (AV) perception, detecting, for example, which visual face matches auditory speech in many paradigms. For simple phonetic segments, like vowels, previous work has assumed developmental stability in AV matching. This study shows dramatic differences in matching performance for different vowels across the first year of life: 3‐, 6‐, and 9‐month‐olds were familiarized for 40 sec with a visual face articulating a vowel in synchrony with auditory presentations of that vowel, but crucially, the mouth of the face was occluded. At test, infants were shown two still photos of the same face without occlusion for 1 min in silence. One face had a static articulatory configuration matching the previously heard vowel, while the other face had a static configuration matching a different vowel. Three auditory vowels were used: /a/, /i/, and /u/. Results suggest that AV matching performance varies according to age and to the familiarized vowel. Interestingly, results are not linked to the frequency of vowels in auditory input, but may instead be related to infants' ability to produce the target vowel. A speculative hypothesis is that vowel production in infancy modulates AV vowel matching.

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号