Similar Literature
20 similar articles found (search time: 10 ms)
1.
We demonstrate that 18-month-olds, but not 14-month-olds, can anticipate others' actions based on an interpretation of shared goals that bind together individual actions into a collaborative sequence. After viewing a sequence of actions performed by two people who socially interact, 18-month-olds bound together the socially engaged actors' actions such that they later expected the actors to share the same final goal. Eighteen-month-olds who saw nonsocially engaged actors did not have this expectation, and neither did 14-month-olds when viewing either socially or nonsocially engaged actors. The results are discussed in light of the possibility that experience in collaborations could be necessary for understanding collaboration from a third-person perspective.

2.
Infants' sensitivity to changes in social contingency was investigated by presenting 2-, 4-, and 6-month-old infants with 3 episodes of social interaction from mothers and strangers: 2 contingent interactions and 1 noncontingent replay. Three orders were presented: (a) contingent, noncontingent, contingent; (b) contingent, contingent, noncontingent; and (c) noncontingent, contingent, contingent. Contingency and carryover effects were shown to both mothers and strangers in the different orders of presentation. Infants were more visually attentive to contingent interactions than to the noncontingent replay when contingent interactions occurred prior to the replay, and the infants' level of attention to the noncontingent replay carried over to subsequent contingent interactions. The 4- and 6-month-old infants showed contingency and carryover effects by their visual attention and smiling. Examination of effect sizes for attention suggests 2-month-old infants may be beginning to show the effects. Reasons for age changes in sensitivity to social contingency are discussed.

3.
Past research using a deferred imitation task has shown that 6-month-olds remember a 3-part action sequence for only 1 day. The concept of a time window suggests that there is a limited period within which additional information can be integrated with a prior memory. Its width tracks the forgetting function of the memory. This study asked whether retrieving the memory of the modeled actions at the end of the time window protracts its retention, whether the type of retrieval (active or passive) differentially influences retention, and whether the retrieval delay influences its specificity. In Experiment 1, 6-month-olds either imitated the modeled actions (active retrieval group) or merely watched them modeled again (passive retrieval group) 1 day after the original demonstration. Both groups showed deferred imitation after 10 days. In Experiment 2, 6-month-olds who repeatedly retrieved the memory at or near the end of the time window deferred imitation for 2.5 months. In Experiment 3, 6-month-olds spontaneously generalized imitation late in the time window after 1 prior retrieval, whether it was active or passive. These studies reveal that the retention benefit of multiple retrievals late in the time window is substantial. Because most retrievals are undoubtedly latent, the contribution of repeated events to the growth of the knowledge base early in infancy has been greatly underestimated.

4.
Ross Flom & Anne D. Pick, Infancy, 2005, 7(2), 207–218
The study of gaze following in infants younger than 12 months of age has emphasized the effects of gesture, type of target, and its position or placement. This experiment extends this literature by examining the effects of adults' affective expression on 7-month-olds' gaze following. The effects of 3 affective expressions (happy, sad, and neutral) on 7-month-olds' frequency of gaze following were examined. The results indicated that infants more frequently followed the gaze of an adult posing a neutral expression than that of an adult posing either a happy or a sad expression. The infants also looked proportionately longer toward the indicated target when the adult's expression was neutral. The results are interpreted in terms of infants' flexibility of attention.

5.
Human languages rely on the ability to learn and produce an indefinite number of words by combining consonants and vowels in a lawful manner. The categorization of speech representations into consonants and vowels is evidenced by the tendency of adult speakers, attested in many languages, to use consonants and vowels for different tasks. Consonants are favored in lexical tasks, while vowels are favored for learning structural regularities. Recent results suggest that this specialization is already observable at 12 months of age in Italian participants. Here, we investigated the representations of younger infants. In a series of anticipatory looking experiments, we showed that Italian 6-month-olds rely more on vowels than on consonants when learning the predictions made by individual words (Experiment 1) and are better at generalizing a structure when it is implemented over vowels than when it is implemented over consonants (Experiments 2 and 3). Until 6 months of age, infants thus show a general vocalic bias, which contrasts with the specialization previously observed at 12 months. These results suggest that the format of speech representations changes during the second half of the first year of life.

6.
Complex systems are often built from a relatively small set of basic features or operations that can be combined in myriad ways. We investigated the developmental origins of this compositional architecture in 9-month-old infants, extending recent work that demonstrated rudimentary compositional abilities in preschoolers. Infants viewed two separate object-occlusion events that depicted a single-feature-change operation. They were then tested with a combined operation to determine whether they expected the outcome of the two feature changes, even though this combination was unfamiliar. In contrast to preschoolers, infants did not appear to predictively compose these simple feature-change operations. A second experiment demonstrated the ability of infants to track two operations when not combined. The failure to compose basic operations is consistent with limitations on object tracking and early numerical cognition (Feigenson & Yamaguchi, Infancy, 2009, 14, 244). We suggest that these results can be unified via a general principle: Infants have difficulty with multiple updates to a representation of an unobservable.

7.
Mark Nielsen, Infancy, 2009, 14(3), 377–389
Following Meltzoff's (1995) behavioral reenactment paradigm, this study investigated the ability of 12-month-olds (N = 44) to reproduce a model's attempted-but-failed actions on objects. Testing was conducted using a novel set of objects designed to enable young infants to readily identify the potential outcome of the model's actions. Infants who saw an adult's attempted-but-failed actions produced her intended outcomes at an equivalent rate to infants who saw the model's completed acts, and significantly more so than infants who either observed an adult manipulating the test apparatus using nontarget actions or who did not see any actions demonstrated on the test apparatus. This result shows that, contrary to previous studies, 12-month-olds can produce the intended but unconsummated acts of others.

8.
The experiments reported here investigated the development of a fundamental component of cognition: the ability to recognize and generalize abstract relations. Infants were presented with simple rule-governed patterned sequences of visual shapes (ABB, AAB, and ABA) that could be discriminated by differences in the position of the repeated element (late, early, or nonadjacent, respectively). Eight-month-olds were found to distinguish patterns on the basis of the repetition, but appeared insensitive to its position in the sequence; 11-month-olds distinguished patterns on the basis of the position of the repetition, but appeared insensitive to the nonadjacent repetition. These results suggest that abstract pattern detection may develop incrementally in a process of constructing complex relations from more primitive components.

9.
Infants infer social and pragmatic intentions underlying attention-directing gestures, but the basis on which infants make these inferences is not well understood. Previous studies suggest that infants rely on information from preceding shared action contexts and joint perceptual scenes. Here, we tested whether 12-month-olds use information from act-accompanying cues, in particular prosody and hand shape, to guide their pragmatic understanding. In Experiment 1, caregivers directed infants' attention to an object to request it, share interest in it, or inform them about a hidden aspect. Caregivers used distinct prosodic and gestural patterns to express each pragmatic intention. Experiment 2 was identical except that experimenters provided identical lexical information across conditions and used three sets of trained prosodic and gestural patterns. In all conditions, the joint perceptual scenes and preceding shared action contexts were identical. In both experiments, infants reacted appropriately to the adults' intentions by attending to the object mostly in the sharing-interest condition, offering the object mostly in the imperative condition, and searching for the referent mostly in the informing condition. Infants' ability to comprehend pragmatic intentions based on prosody and gesture shape expands infants' communicative understanding from common activities to novel situations for which shared background knowledge is missing.

10.
Maternal depression has been associated with the mother-child dyad's ability to engage in joint attention. This study of 69 depressed and 63 control mothers and their 18-month-olds addresses how aspects of maternal psychopathology are related to joint attention during a snack interaction. Although nondepressed-mother dyads appeared better at joint attention than depressed-mother dyads, this difference was not statistically significant. Among the depressed-mother dyads, joint attention was related to the presence of a comorbid Axis I diagnosis (usually an anxiety disorder) versus a diagnosis of major depressive disorder (MDD) only. Surprisingly, dyads with mothers who met criteria for a comorbid diagnosis were better at joint attention than those with MDD only, despite the fact that those mothers were likely to have longer and more severe depressive histories. The relationship between comorbid status and joint attention was mediated by the mother's affect. A rationale for the paradoxical finding that the "more pathological" mothers had greater success in engaging in joint attention is discussed.

11.
The role of selective attention in infant phonetic perception was examined using a distraction masker paradigm. We compared perception of /bu/ versus /gu/ in 6- to 8-month-olds using a visual fixation procedure. Infants were habituated to multiple natural productions of 1 syllable type and then presented 4 test trials (old-new-old-new). Perception of the new syllable (indexed as novelty preference) was compared across 3 groups: habituated and tested on syllables in quiet (Group 1), habituated and tested on syllables mixed with a nonspeech signal (Group 2), and habituated with syllables mixed with a nonspeech signal and tested on syllables in quiet (Group 3). In Groups 2 and 3, each syllable was mixed with a segment spliced from a recording of bird and cricket songs. This nonspeech signal has no overlapping frequencies with the syllable; it is not expected to alter the sensory structure or perceptual coherence of the syllable. Perception was negatively affected by the presence of the auditory distracter during habituation; individual performance levels also varied more in these groups. The findings show that perceiving speech in the presence of irrelevant sounds poses a cognitive challenge for young infants. We conclude that selective attention is an important skill that supports speech perception in infants; the significance of this skill for language learning during infancy deserves investigation.

12.
This study examined the hypothesis that toddlers interpret an adult's head turn as evidence that the adult was looking at something, whereas younger infants interpret gaze based on an expectancy that an interesting object will be present on the side to which the adult has turned. Infants of 12 months and toddlers of 24 months were first shown that an adult head turn to the side predicted the activation of a remote-controlled toy on that side of the room. After this connection had been demonstrated, participants were assigned to 2 conditions. In the head turn condition, the toys were removed but the adult continued to produce head turns to the side. In the toy condition, the adult stopped turning but the toys continued to be activated when the participant turned toward them. Results showed that, compared to 12-month-olds, 24-month-olds were more likely to continue to turn to the side when the adult continued to turn even though there was no longer anything of interest to see. In contrast, compared to 24-month-olds, 12-month-olds were, if anything, more likely to continue to turn to the side in the condition in which the adult stopped turning. The latter result was replicated in a condition in which the activation of the toy was not contingent on the child's own head turn. These results imply that the meaning of gaze following may change significantly over the 2nd year of life. For 12-month-olds, gaze is a useful predictor of where interesting sights may occur. In contrast, for 24-month-olds, gaze may be a signal that the adult is looking at something.

13.
This study examines 16-month-olds' understanding of word order and inflectional properties of familiar nouns and verbs. Infants preferred grammatical sentences over ungrammatical sentences when the ungrammaticality was cued by both misplaced inflection and word order reversal of nouns and verbs. Infants were also sensitive to inflection alone as a cue to grammaticality, but not word order alone. The preference for grammatical sentence forms was also disrupted when adjacent function word cues were removed from the stimuli, and when familiar content words were replaced by nonce words. These results suggest that sensitivity to the relationship between functional morphemes and content words, rather than sensitivity to either independently, drives the development of early grammatical knowledge. Furthermore, infants showed some ability to generalize from familiar to nonce content word contexts.

14.
Previous research has shown that infants begin to display sensitivities to language-specific phonotactics and probabilistic phonotactics at around 9 months of age. However, certain phonotactic patterns have not yet been examined, such as contrast neutralization, in which phonemic contrasts are typically neutralized in syllable- or word-final position. Thus, the acquisition of contrast neutralization is dependent on infants' ability to perceive certain contrasts in final position. The studies reported here test infants' sensitivity to voicing neutralization in word-final position and infants' discrimination of voicing and place of articulation (POA) contrasts in word-initial and word-final position. Nine- and 11-month-old Dutch-learning infants showed no preference for legal versus illegal voicing phonotactics that were contrasted in word-final position. Furthermore, 10-month-old infants showed no discrimination of voicing or POA contrasts in word-final position, whereas they did show sensitivity to the same contrasts in word-initial position. By 16 months, infants were able to discriminate POA contrasts in word-final position, although showing no discrimination of the word-final voicing contrast. These findings have broad implications for models of how learners acquire the phonological structures of their language, for the types of phonotactic structures to which infants are presumed to be sensitive, and for the relative sensitivity to phonemic distinctions by syllable and word position during acquisition.

15.
Ben Kenward, Infancy, 2010, 15(4), 337–361
It is known that young infants can learn to perform an action that elicits a reinforcer, and that they can visually anticipate a predictable stimulus by looking at its location before it begins. Here, in an investigation of the display of these abilities in tandem, I report that 10-month-olds anticipate a reward stimulus that they generate through their own action: 0.5 sec before pushing a button to start a video reward, they increase their rate of gaze shifts to the reward location, and during periods of extinction, reward location gaze shifts correlate with bouts of button pushing. The results are consistent with the hypothesis that the infants have an expectation of the outcome of their actions; several alternative hypotheses are ruled out by yoked controls. Such an expectation may, however, be procedural, have minimal content, and is not necessarily sufficient to motivate action.

16.
Previous research has found that young children recognize an adult as being acquainted with an object most readily when the child and adult have previously engaged socially with that object together. In the current study, we tested the hypothesis that such social engagement is so powerful that it can sometimes lead children to overestimate what has been shared. After having shared two objects with an adult in turn, 2-year-old children played with a third object the adult could not see. In three out of four conditions, the adult remained co-present and/or communicated to the child while she played with the third object. Children falsely perceived the adult as being acquainted with the third object when she remained co-present (whether or not she also communicated) but not when she clearly terminated the interaction by disengaging and leaving. These results suggest that when young children are engaged with a co-present person they tend to overestimate the other's knowledge.

17.
We examined changes in the efficiency of visual selection over the first postnatal year with an adapted version of a spatial negative priming paradigm. In this task, when a previously ignored location becomes the target to be selected, responses to it are impaired, providing a measure of visual selection. Oculomotor latencies to target selection were the dependent measure. Each trial consisted of a prime and a probe presentation, separated by a 67-, 200-, or 550-msec interstimulus interval (ISI), to test the efficiency of selection as a function of processing time. In the prime, the target was accompanied by a distractor item. In the probe, the target appeared either in the location formerly occupied by the distractor (repeated distractor trials) or in one of the other two locations (control trials). We tested 41 infants in each of 3 age groups (3, 6, and 9 months) on the three different ISIs. Nine-month-old infants' saccade latencies were slowed on repeated distractor trials relative to control trials, given sufficiently long ISIs. Saccade latencies in the youngest two age groups showed only facilitation on repeated distractor trials at short ISIs. These results suggest that visual selection efficiency is a function of the interaction of the processing limitations of a system with environmental conditions, in this case the time allotted for the selection process.

18.
Although a large literature discusses infants' preference for infant-directed speech (IDS), few studies have examined how this preference might change over time or across listening situations. The work reported here compares infants' preference for IDS while listening in a quiet versus a noisy environment, and across 3 points in development: 4.5 months of age, 9 months of age, and 13 months of age. Several studies have suggested that IDS might help infants to pick out speech in the context of noise (Colombo, Frick, Ryther, Coldren, & Mitchell, 1995; Fernald, 1984; Newman, 2003); this might suggest that infants' preference for IDS would increase in these settings. However, this was not found to be the case; at all 3 ages, infants showed a similar advantage (or lack thereof) for IDS as compared to adult-directed speech when presented in noise versus silence. There was, however, a significant interaction across ages: Infants aged 4.5 months showed an overall preference for IDS, whereas older infants did not, despite listening to the same stimuli. The lack of an effect with older infants replicates and extends recent findings by Hayashi, Tamekawa, and Kiritani (2001), suggesting that the variations in fundamental frequency and affect are not sufficient cues to IDS for older infants.

19.
Two experiments tested the DeLoache, Pierroutsakos, Uttal, Rosengren, and Gottlieb (1998) claim that 9-month-old infants attempt to grasp objects depicted in photographs. In Experiment 1, 9-month-olds viewed an object, a photograph of the object, and 2 flat, nonpictorial displays. On average, they reached for the photograph and nonpictorial displays with their hands approximately horizontal and close to the display surfaces, but reached for the object with their hands oriented obliquely and at significantly greater heights. The infants also exhibited similar behaviors when touching the photograph and nonpictorial displays. In Experiment 2, 9-month-olds exhibited similar behaviors when touching a photograph of an object and a photograph of textured carpet. The results of both experiments suggest that 9-month-olds treat photographs of objects as 2-dimensional surfaces and not as graspable objects.

20.
Comprehending spoken words requires a lexicon of sound patterns and knowledge of their referents in the world. Tincoff and Jusczyk (1999) demonstrated that 6-month-olds link the sound patterns "Mommy" and "Daddy" to video images of their parents, but not to other adults. This finding suggests that comprehension emerges at this young age and might take the form of very specific word-world links, as in "Mommy" referring only to the infant's mother and "Daddy" referring only to the infant's father. The current study was designed to investigate whether 6-month-olds also show evidence of comprehending words that can refer to categories of objects. The results show that 6-month-olds link the sound patterns "hand" and "feet" to videos of an adult's hand and feet. This finding suggests that very early comprehension has a capacity beyond specific, one-to-one associations. Future research will need to consider how developing categorization abilities, social experiences, and parent word use influence the beginnings of word comprehension.

