Similar Literature
 20 similar documents found (search time: 31 ms)
1.
We examined whether mothers' use of temporal synchrony between spoken words and moving objects, and infants' attention to object naming, predict infants' learning of word–object relations. Following 5 min of free play, 24 mothers taught their 6‐ to 8‐month‐olds the names of 2 toy objects, Gow and Chi, during a 3‐min play episode. Infants were then tested for their word mapping. The videotaped episodes were coded for mothers' object naming and infants' attention to different naming types. Results indicated that mothers' use of temporal synchrony and infants' attention during play covaried with infants' word‐mapping ability. Specifically, infants who switched eye gaze from mother to object most frequently during naming learned the word–object relations. The findings suggest that maternal naming and infants' word‐mapping abilities are bidirectionally related. Variability in infants' attention to maternal multimodal naming explains the variability in early lexical‐mapping development.

2.
3.
Seven‐month‐old infants require redundant information, such as temporal synchrony, to learn arbitrary syllable‐object relations (Gogate & Bahrick, 1998). Infants learned the relations between 2 spoken syllables, /a/ and /i/, and 2 moving objects only when temporal synchrony was present during habituation. This article presents 2 experiments to address infants' memory for these relations. In Experiment 1, infants remembered the syllable‐object relations after 10 min, only when temporal synchrony between the vocalizations and moving objects was provided during learning. In Experiment 2, 7‐month‐olds were habituated to the same syllable‐object pairs in the presence of temporal synchrony and tested for memory after 4 days. Once again, infants learned and showed emerging memory for the syllable‐object relations 4 days after original learning under the temporally synchronous condition. These findings are consistent with the view that prior to symbolic development, infants learn and remember word‐object relations by perceiving redundant information in the vocal and gestural communication of adults.

4.
We tested 7‐month‐old infants' sensitivity to others' goals in an imitation task, and assessed whether infants are as likely to imitate the goals of nonhuman agents as they are to imitate human goals. In the current studies, we used the paradigm developed by Hamlin et al. (in press) to test infants' responses to human actions versus closely matched inanimate object motions. The experimental events resembled those from Luo and Baillargeon's (2005) looking‐time study in which infants responded to the movements of an inanimate object (a self‐propelled box) as goal‐directed. Although infants responded visually to the goal structure of the object's movement, here they did not reproduce the box's goal. These results provide further evidence that 7‐month‐olds' goal representations are sufficiently robust to drive their own manual actions. However, they indicate that infants' responses to inanimate object movements may not be robust in this way.

5.
This study examined 6‐month‐old infants' abilities to use the visual information provided by simulated self‐movement through the world, and movement of an object through the world, for spatial orientation. Infants were habituated to a visual display in which they saw a toy hidden, followed by either rotation of the point of observation through the world (simulated self‐movement) or movement of the object itself through the world (object movement). Following habituation, infants saw test displays in which the hidden toy reappeared at the correct or incorrect location, relative to the earlier movements. Infants habituated to simulated self‐movement looked longer at the recovery of the toy from an incorrect, relative to correct location. In contrast, infants habituated to object movement showed no differential looking to either correct or incorrect test displays. These findings are discussed within a theoretical framework of spatial orientation emphasizing the availability and use of spatial information.

6.
Prior research suggests that when very simple event sequences are used, 4.5‐month‐olds demonstrate the ability to individuate objects based on the continuity or disruption of their speed of motion (Wilcox & Schweinle, 2003). However, infants demonstrate their ability to individuate objects in an event‐monitoring task (i.e., infants must keep track of an ongoing event) at a younger age than in an event‐mapping task (i.e., infants must compare information from 2 different events). The research presented here built on these findings by examining infants' capacity to succeed on an event‐mapping task with a more complex event sequence to determine if the complexity of the event interferes with their ability to form summary representations of the event, and, in short, individuate the objects. Three experiments were conducted with infants 4.5 to 9.5 months of age. The results indicated that (a) increasing the complexity of the objects' trajectories adversely affected infants' performance on the task, and (b) boys were more likely to succeed than girls. These findings shed light on how representational capacities change during the first year of life and are discussed in terms of information processing and representational capabilities as well as neuro‐anatomical development.

7.
Research on the influence of multimodal information on infants' learning is inconclusive. While one line of research finds that multimodal input has a negative effect on learning, another finds positive effects. The present study aims to shed some new light on this discussion by studying the influence of multimodal information and accompanying stimulus complexity on the learning process. We assessed the influence of multimodal input on the trial‐by‐trial learning of 8‐ and 11‐month‐old infants. Using an anticipatory eye movement paradigm, we measured how infants learn to anticipate the correct stimulus–location associations when exposed to visual‐only, auditory‐only (unimodal), or auditory and visual (multimodal) information. Our results show that infants in both the multimodal and visual‐only conditions learned the stimulus–location associations. Although infants in the visual‐only condition appeared to learn in fewer trials, infants in the multimodal condition showed better anticipating behavior: as a group, they had a higher chance of anticipating correctly on more consecutive trials than infants in the visual‐only condition. These findings suggest that effects of multimodal information on infant learning operate chiefly through effects on infants' attention.

8.
This study examined infants' sensitivity to a speaker's verbal accuracy and whether the reliability of the speaker had an effect on their selective trust. Forty‐nine 18‐month‐old infants were exposed to a speaker who either accurately or inaccurately labeled familiar objects. Subsequently, the speaker administered a series of tasks in which infants had an opportunity to: learn a novel word, imitate the speaker's “irrational” actions, and help the speaker obtain an out‐of‐reach object. In contrast to infants in the accurate (reliable) condition, those in the inaccurate (unreliable) condition performed more poorly on a word‐learning task and were less likely to imitate. All infants demonstrated high rates of instrumental helping behavior. These results are the first to demonstrate that infants as young as 18 months of age can not only detect a speaker's verbal inaccuracy but also use this information to attenuate their word recognition and learning of novel actions.

9.
Work with infants on the “visual cliff” links avoidance of drop‐offs to experience with self‐produced locomotion. Adolph's (2002) research on infants' perception of slope and gap traversability suggests that learning to avoid falling down is highly specific to the postural context in which it occurs. Infants, for example, who have learned to avoid crossing risky slopes while crawling must learn anew such avoidance when they start walking. Do newly walking infants avoid crossing the drop‐off of the visual cliff? Twenty prewalking but experienced crawling infants were compared with 20 similarly aged newly walking infants on their reactions to the visual cliff. Newly walking infants avoided moving onto the cliff's deep side even more consistently than did the prewalking crawlers. Thus, in the context of drop‐offs in visual texture, our results show that once avoidance of drop‐offs is established under conditions of crawling, it is developmentally maintained once infants begin walking.

10.
Twelve‐month‐old infants' ability to perceive gaze direction in static video images was investigated. The images showed a woman who performed attention‐directing actions by looking or pointing toward 1 of 4 objects positioned in front of her (2 on each side). When the model just pointed at the objects, she looked straight ahead, and when she just looked, her hands were hidden below the tabletop. An eye movement system (TOBII) was used to register the gaze of the participants. We found that the infants clearly discriminated the gaze directions to the objects. There was no tendency to mix up the 2 object positions, located 10° apart, on the same side of the model. The infants spent more time looking at the attended objects than the unattended ones and they shifted gaze more often from the face of the model to the attended object than to the unattended objects. Pointing did not significantly increase the infants' tendency to move gaze to the attended object, irrespective of whether the pointing gesture was accompanied by looking or not. In all conditions the infants spent most of the time looking at the model's face. This tendency was especially noticeable in the pointing‐only condition and the condition where the model just looked straight ahead.

11.
Recent work has suggested the value of electroencephalographic (EEG) measures in the study of infants' processing of human action. Studies in this area have investigated desynchronization of the sensorimotor mu rhythm during action execution and action observation in infancy. Untested but critical to theory is whether the mu rhythm shows a differential response to actions which share similar goals but have different motor requirements or sensory outcomes. By varying the invisible property of object weight, we controlled for the abstract goal (reach, grasp, and lift the object), while allowing other aspects of the action to vary. The mu response during 14‐month‐old infants' own executed actions showed a differential hemispheric response between acting on heavier and lighter objects. EEG responses also showed sensitivity to “expected object weight” when infants simply observed an experimenter reach for objects that the infants' prior experience indicated were heavier vs. lighter. Crucially, this neural reactivity was predictive—during the observation of the other reaching toward the object, before lifting occurred. This suggests that infants' own self‐experience with a particular object's weight influences their processing of others' actions on the object, with implications for developmental social‐cognitive neuroscience.

12.
What do novice word learners know about the sound of words? Word‐learning tasks suggest that young infants (14 months old) confuse similar‐sounding words, whereas mispronunciation detection tasks suggest that slightly older infants (18–24 months old) correctly distinguish similar words. Here we explore whether the difficulty at 14 months stems from infants' novice status as word learners or whether it is inherent in the task demands of learning new words. Results from 3 experiments support a developmental explanation. In Experiment 1, infants of 20 months learned to pair 2 phonetically similar words to 2 different objects under precisely the same conditions that infants of 14 months (Experiment 2) failed. In Experiment 3, infants of 17 months showed intermediate, but still successful, performance in the task. Vocabulary size predicted word‐learning performance, but only in the younger, less experienced word learners. The implications of these results for theories of word learning and lexical representation are discussed.

13.
Much research has been devoted to questions regarding how infants begin to perceive the unity of partly occluded objects, and it is clear that object motion plays a central role. Little is known, however, about how infants' motion processing skills are used in such tasks. One important kinetic cue for object shape is structure from motion, but its role in unity perception remains unknown. To address this issue, we presented 2‐ and 4‐month‐old infants with displays in which object unity was specified by vertical rotation. After habituation to this display, infants viewed broken and complete versions of the object to test their preference for the broken object, an indication of perception of unity in the occlusion display. Positive evidence for the perception of unity was provided by both age groups. Concomitant edge translation available in 1 condition did not appear to contribute above and beyond simple rotation. These results suggest that structure from motion, and perhaps contour deformation and shading cues, can contribute important information for veridical object percepts in very young infants.

14.
One of the most powerful sources of information about spatial relationships available to mobile organisms is the pattern of visual motion called optic flow. Despite its importance for spatial perception and for guiding locomotion, very little is known about how the ability to perceive one's direction of motion, or heading, from optic flow develops early in life. In this article, we report the results of 3 experiments that tested the abilities of 4‐month‐old infants to discriminate optic flow patterns simulating different directions of self‐motion. The combined results from 2 different experimental paradigms suggest that 4‐month‐olds discriminate optic flow patterns that simulate only large (> 32°) changes in the direction of the observer's motion through space. This suggests that prior to the onset of locomotion, there are limitations on infants' abilities to process patterns of optic flow related to self‐motion.

15.
In order to disentangle the effects of an adult model's eye gaze and head orientation on infants' processing of objects attended to by the adult, we presented 4‐month‐olds with faces that either (1) shifted eye gaze toward or away from an object while the head stayed stationary or (2) turned their head while maintaining gaze directed straight ahead. Infants' responses to the previously attended and unattended objects were measured using eye‐tracking and event‐related potentials. In both conditions, infants responded to objects that were not cued by the adult's head or eye gaze shift with more visual attention and an increased negative central (Nc) component relative to cued objects. This suggests that cued objects had been encoded more effectively, whereas uncued objects required further processing. We conclude that eye gaze and head orientation act independently as cues to direct infants' attention and object processing. Both head orientation and eye gaze, when presented in motion, even override the effects of incongruent stationary information from the other kind of cue.

16.
Recent research has revealed the important role of multimodal object exploration in infants' cognitive and social development. Yet, the real‐time effects of postural position on infants' object exploration have been largely ignored. In the current study, 5‐ to 7‐month‐old infants (N = 29) handled objects while placed in supported sitting, supine, and prone postures, and their spontaneous exploratory behaviors were observed. Infants produced more manual, oral, and visual exploration in sitting compared to lying supine and prone. Moreover, while sitting, infants more often coupled manual exploration with mouthing and visual examination. Infants' opportunities for learning from object exploration are embedded within a real‐time postural context that constrains the quantity and quality of exploratory behavior.

17.
Previous research has demonstrated that social interactions underlie the development of object‐directed imitation. For example, infants differentially learn object action sequences from a live social partner compared to a social partner over a video monitor; however, what is not well understood is what aspects of social interactions influence social learning. Previous studies have found variable influences of different types of caregiver responsiveness on attention, language, and cognitive development. Therefore, the purpose of this study was to examine how the responsive style of a social partner influenced the learning of object‐directed action sequences. Infants interacted with either a sensitive or redirective experimenter before the learning trial. Results revealed infants changed their patterns of engagement; infants interacting with a sensitive experimenter had longer periods of attentional engagement than infants interacting with a redirective experimenter. Furthermore, during the learning trial, the amount of sensitivity during interaction with the social partner predicted learning scores. These findings suggest that infants' attention is influenced by social partners' interactive style during ongoing interaction, which subsequently affects how infants learn from these social partners.

18.
Most research on object individuation in infants has focused on the visual domain. Yet the problem of object individuation is not unique to the visual system, but shared by other sensory modalities. This research examined 4.5‐month‐old infants' capacity to use auditory information to individuate objects. Infants were presented with events in which they heard 2 distinct sounds, separated by a temporal gap, emanate from behind a wide screen; the screen was then lowered to reveal 1 or 2 objects. Longer looking to the 1‐ than 2‐object display was taken as evidence that the infants (a) interpreted the auditory event as involving 2 objects and (b) found the presence of only 1 object when the screen was lowered unexpected. The results indicated that the infants used sounds produced by rattles, but not sounds produced by an electronic keyboard, as the basis for object individuation (Experiments 1 and 2). Data collected with adult participants revealed that adults are also more sensitive to rattle sounds than electronic tones. A final experiment assessed conditions under which young infants attend to rattle sounds (Experiment 3). Collectively, the outcomes of these experiments suggest that infants and adults are more likely to use some sounds than others as the basis for individuating objects. We propose that these results reflect a processing bias to attend to sounds that reveal something about the physical properties of an object—sounds that are obviously linked to object structure—when determining object identity.

19.
Is infant looking behavior in ambiguous situations best described in terms of information seeking (social referencing) or as attachment behavior? Twelve‐month‐old infants were assigned to 1 of 2 conditions (Study 1): in one, each infant's mother provided positive information about an ambiguous toy, and in the other, an experimenter provided the positive information. In Study 2, 12‐month‐old infants were assigned to 1 of 3 conditions: mother provided positive information about the toy, mother was inattentive, or mother provided negative information; the experimenter was inattentive. The infants preferred to look at the experimenter in almost all conditions and they regulated their behavior in accordance with information obtained from the experimenter. Neither study lends support to an explanation in terms of behaviors deriving from the attachment system, and both raise questions concerning social referencing interpretations of infants' looking behavior. Other alternatives for explaining infant looking behavior in social referencing situations (e.g., associative learning) are discussed.

20.
Two studies illustrate the functional significance of a new category of prelinguistic vocalizing—object‐directed vocalizations (ODVs)—and show that these sounds are connected to learning about words and objects. Experiment 1 tested 12‐month‐old infants’ perceptual learning of objects that elicited ODVs. Fourteen infants’ vocalizations were recorded as they explored novel objects. Infants learned visual features of objects that elicited the most ODVs but not of objects that elicited the fewest vocalizations. Experiment 2 assessed the role of ODVs in learning word–object associations. Forty infants aged 11.5 months played with a novel object and received a label either contingently on an ODV or on a look alone. Only infants who received labels in response to an ODV learned the association. Taken together, the findings suggest that infants’ ODVs signal a state of attention that facilitates learning.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号