Similar Articles
20 similar articles found (search time: 218 ms)
1.
To examine key parameters of the initial conditions in early category learning, two studies compared 5‐month‐olds’ object categorization between tasks involving previously unseen novel objects, and between measures within tasks. Infants in Experiment 1 participated in a visual familiarization–novelty preference (VFNP) task with two‐dimensional (2D) stimulus images. Infants provided no evidence of categorization by either their looking or their examining even though infants in previous research systematically categorized the same objects by examining when they could handle them directly. Infants in Experiment 2 participated in a VFNP task with 3D stimulus objects that allowed visual examination of objects’ 3D instantiation while denying manual contact with the objects. Under these conditions, infants demonstrated categorization by examining but not by looking. Focused examination appears to be a key component of young infants’ ability to form category representations of novel objects, and 3D instantiation appears to better engage such examining.

2.
Infants in laboratory settings look longer at events that violate their expectations, learn better about objects that behave unexpectedly, and match utterances to the objects that likely elicited them. The paradigms revealing these behaviors have become cornerstones of research on preverbal cognition. However, little is known about whether these canonical behaviors are observed outside laboratory settings. Here, we describe a series of online protocols that replicate classic laboratory findings, detailing our methods throughout. In Experiment 1a, 15-month-old infants (N = 24) looked longer at an online support event culminating in an Unexpected outcome (i.e., appearing to defy gravity) than an Expected outcome. Infants did not, however, show the same success with an online solidity event. In Experiment 1b, 15-month-old infants (N = 24) showed surprise-induced learning following online events—they were better able to learn a novel object's label when the object had behaved unexpectedly compared to when it behaved expectedly. Finally, in Experiment 2, 16-month-old infants (N = 20) who heard a valenced utterance (“Yum!”) showed preferential looking to the object most likely to have generated that utterance. Together, these results suggest that, with some adjustments, online testing is a feasible and promising approach for infant cognition research.

3.
Infants can infer agents’ goals after observing agents’ goal‐directed actions on objects and can subsequently make predictions about how agents will act on objects in the future. We investigated the representations supporting these predictions. We familiarized 6‐month‐old infants to an agent who preferentially reached for one of two featurally distinct objects following a cue. At test, the objects were sequentially occluded from the infant in the agent's presence. We asked whether infants could generate action predictions without visual access to the relevant objects by measuring whether infants shifted their gaze to the location of the agent's hidden goal object following the cue. We also examined what infants represented about the hidden objects by removing one of the occluders to reveal either the original hidden object or the unexpected other object and measuring infants’ looking time. We found that, even without visual access to the objects, infants made predictive gazes to the location of the agent's occluded goal object, but failed to represent the features of either hidden object. These results suggest that infants make goal‐based action predictions when the relevant objects in the scene are occluded, but doing so may come at the expense of maintaining representations of the objects.

4.
Yuyan Luo, Infancy, 2010, 15(4), 392–419
Some actions of agents are ambiguous in terms of goal‐directedness to young infants. If given reasons why an agent performed these ambiguous actions, would infants then be able to perceive the actions as goal‐directed? Prior results show that infants younger than 12 months cannot encode the relationship between a human agent’s looking behavior and the target of her gaze as goal‐directed. In the present experiments, 8‐month‐olds responded in ways suggesting that they interpreted an agent’s action of looking at object‐A as opposed to object‐B as evidence for her goal directed toward object‐A, if her looking action was rational given certain situational constraints: a barrier separated her from the objects or her hands were occupied. Therefore, the infants seem to consider situational constraints when attributing goals to agents’ otherwise ambiguous actions; they seem to realize that within such constraints, these actions are efficient ways for agents to achieve goals.

5.
Event Set × Event Set designs were used to study the rotating screen paradigm introduced by Baillargeon, Spelke, and Wasserman (1985). In Experiment 1, 36 5 1/2‐month‐old infants were habituated to a screen rotating 180° with no block, a screen rotating 120° up to a block, or a screen rotating 180° up to and seemingly through a block. All infants were then tested on the same 3 events and also a screen rotating 120° with no block. The results indicate that infants are using novelty and familiarity preference to determine their looking times. To confirm this, in Experiment 2, 52 5 1/2‐month‐old infants were familiarized on either 3 or 7 trials to a screen rotating 180° with no block or a screen rotating 120° with no block. All infants were then tested on the same test events as in Experiment 1. Infants with fewer familiarization trials were more likely to prefer the familiar rotation event. The results of these 2 experiments indicate that infants did not use the possibility or impossibility of events but instead used familiarity or novelty relations between the habituation events and the test events to determine their looking times, and suggest that the Baillargeon et al. study should not be interpreted as indicating object permanence or solidity knowledge in young infants.

6.
Two experiments investigated 9‐month‐old infants’ abilities to recognize the correspondence between an actual three‐dimensional (3D) object and its two‐dimensional (2D) representation, looking specifically at representations that did not literally depict the actual object: schematic line drawings. In Experiment 1, infants habituated to a line drawing of either a doll or a sheep and were then tested with the actual objects themselves. Infants habituated to the sheep drawing recovered to the unfamiliar but not the familiar object, showing a novelty preference. Infants habituated to the doll drawing, however, recovered to both familiar and unfamiliar objects, failing to show any preference between the two. In Experiment 2, infants habituated to the 3D objects and were then tested with the 2D line drawings. In this case, both groups of infants showed a preference only for the novel displays. Together these findings demonstrate that 9‐month‐old infants recognize the correspondence between 3D objects and their 2D representations, even when these representations are not literal copies of the objects themselves.

7.
Infant visual attention has been studied extensively within cognitive paradigms using measures such as look duration and reaction time, but less work has examined how infant attention operates in social contexts. In addition, little is known about the stability of individual differences in attention across cognitive and social contexts. In this study, a cross‐sectional sample of 50 infants (4 and 6 months of age) were first tested in a look duration and reaction time task with static visual stimuli. Next, their mothers participated with the infants in the still‐face procedure, a mildly distressing social interaction paradigm that involves violation of expectancy. Individual differences in looking and emotion were stable across the phases of the still‐face task. Further, individual differences in looking measures from the visual attention task were related to the pattern of looking shown across the phases of the still‐face procedure. Results indicate that individual differences in attentional measures show moderate stability within cognitive and social contexts, and that the ability of infants to shift and disengage looks may affect their ability to regulate interaction in social contexts.

8.
In 2 experiments, the interplay of action perception and action production was investigated in 6‐month‐old infants. In Experiment 1, infants received 2 versions of a means‐end task in counterbalanced order. In the action perception version, a preferential looking paradigm in which infants were shown an actor performing means‐end behavior with an expected and an unexpected outcome was used. In the action production version, infants had to pull a cloth to receive a toy. In Experiment 2, infants' ability to perform the action production task with a cloth was compared to their ability to perform the action production task with a less flexible board. Finally, Experiment 3 was designed to control for alternative low‐level explanations of the differences in the looking times toward the final states presented in Experiment 1 by only presenting the final states of the action perception task without showing the initial action sequence. Results obtained in Experiment 1 showed that in the action perception task, infants discriminated between the expected and the unexpected outcome. This perceptual ability was independent of their actual competence in executing means‐end behavior in the action production task. Experiment 2 showed no difference in 6‐month‐olds' performance in the action production task depending on the properties of the support under the toy. Similarly, in Experiment 3, no differences in looking times between the 2 final states were found. The findings are discussed in light of theories on the development of action perception and action production.

9.
Two habituation experiments were conducted to investigate how 4‐month‐old infants perceive partly occluded shapes. In the first experiment, we presented a simple, partly occluded shape to the infants until habituation was reached. Then we showed either a probable completion (one that would be predicted on the basis of both local and global cues) or an improbable completion. Longer looking times were found for the improbably completed shape (compared to probable and control conditions), suggesting that the probable shape was perceived during partial occlusion. In the second experiment, infants were habituated to more ambiguous partly occluded shapes, where local and global cues would result in different completions. For adults, the percept of these shapes is usually dominated by global influences. However, after habituation the infants looked longer at the globally completed shapes. These results suggest that by the age of 4 months, infants are able to infer the perceptual completion of partly occluded shapes, but for more ambiguous shapes, this completion seems to be dominated by local influences.

10.
Most research on object individuation in infants has focused on the visual domain. Yet the problem of object individuation is not unique to the visual system, but shared by other sensory modalities. This research examined 4.5‐month‐old infants' capacity to use auditory information to individuate objects. Infants were presented with events in which they heard 2 distinct sounds, separated by a temporal gap, emanate from behind a wide screen; the screen was then lowered to reveal 1 or 2 objects. Longer looking to the 1‐ than 2‐object display was taken as evidence that the infants (a) interpreted the auditory event as involving 2 objects and (b) found the presence of only 1 object when the screen was lowered unexpected. The results indicated that the infants used sounds produced by rattles, but not sounds produced by an electronic keyboard, as the basis for object individuation (Experiments 1 and 2). Data collected with adult participants revealed that adults are also more sensitive to rattle sounds than electronic tones. A final experiment assessed conditions under which young infants attend to rattle sounds (Experiment 3). Collectively, the outcomes of these experiments suggest that infants and adults are more likely to use some sounds than others as the basis for individuating objects. We propose that these results reflect a processing bias to attend to sounds that reveal something about the physical properties of an object—sounds that are obviously linked to object structure—when determining object identity.

11.
We monitored changes in looking that emerged when 3‐ to 6‐month‐old infants were presented with 48 trials pairing familiar and novel faces. Graphic displays were used to identify changes in looking throughout the task. Many infants exhibited strong side biases produced by infants looking repeatedly in the same direction. Although an overall novelty preference was found for the group, individual infants exhibited brief novelty runs. Few infants began with a familiarity preference. We argue that variable looking patterns emerged during the task from competition between the infants' preference to look for something novel versus their tendency to look back to previous locations. Our data suggest that looking during paired‐comparison tasks is a dynamic process dependent on perceptual‐motor events happening during the task itself.

12.
Rochelle S. Newman, Infancy, 2011, 16(5), 447–470
Infants and toddlers are often spoken to in the presence of background sounds, including speech from other talkers. Prior work has suggested that infants 1 year of age and younger can only recognize speech when it is louder than any distracters in the environment. The present study tests 24‐month‐olds’ ability to understand speech in a multitalker environment. Children were presented with a preferential‐looking task in which a target voice told them to find one of two objects. At the same time, multitalker babble was presented as a distracter, at one of four signal‐to‐noise ratios. Children showed some ability to understand speech and look at the appropriate referent at signal‐to‐noise ratios as low as −5 dB. These findings suggest that 24‐month‐olds are better able to selectively attend to an interesting voice in the context of competing distracter voices than are younger infants. There were significant correlations between individual children’s performance and their vocabulary size, but only at one of the four noise levels; thus, it does not appear that vocabulary size is the driving factor in children’s listening improvement, although it may be a contributing factor to performance in noisy environments.

13.
We conducted two experiments to address questions over whether 9‐month‐old infants believe that objects depicted in realistic photographs can be picked up. In Experiment 1, we presented 9‐month‐old infants with realistic color photographs of objects, colored outlines of objects, abstract colored “blobs,” and blank pages. Infants most commonly rubbed or patted depictions of all types. They also showed significantly more grasps toward the realistic photographs than toward the colored outlines, blobs, and blank pages, but only 24% of infants directed grasping exclusively at the photographs. In Experiment 2, we further explored infants’ actions toward objects and pictures while controlling for tactile information. We presented 9‐month‐old infants with objects and pictures of objects under a glass cover in a false‐bottom table. Although there were no significant differences between the proportion of rubs and pats infants directed toward the objects versus the photographs, infants exhibited significantly more grasping toward the objects than the photographs. Together, these findings show that 9‐month‐old infants largely direct appropriate actions toward realistic photographs and real objects, indicating that they perceive different affordances for pictures and objects.

14.
Infants rapidly learn both linguistic and nonlinguistic representations of their environment and begin to link these from around 6 months. While there is an increasing body of evidence for the effect of labels heard in‐task on infants’ online processing, whether infants’ learned linguistic representations shape learned nonlinguistic representations is unclear. In this study 10‐month‐old infants were trained over the course of a week with two 3D objects, one labeled, and one unlabeled. Infants then took part in a looking time task in which 2D images of the objects were presented individually in a silent familiarization phase, followed by a preferential looking trial. During the critical familiarization phase, infants looked for longer at the previously labeled stimulus than the unlabeled stimulus, suggesting that learning a label for an object had shaped infants’ representations as indexed by looking times. We interpret these results in terms of label activation and novelty response accounts and discuss implications for our understanding of early representational development.

15.
Toddlers show a surprising lack of knowledge about solidity when they are asked to search for a ball that rolled behind a screen and stopped at a barrier whose top was visible above the screen. They search incorrectly, failing to take into account the position of the barrier. This study examined details of this failure by simplifying the task in 2 ways: by using a search task that did not require children to understand solidity, and by using a prediction task that did not require children to conduct a search. For the prediction task, children had to predict where a ball was going to stop when a barrier intersected its trajectory. Children 2.0 and 2.5 years old were able to conduct a simple search and could predict, to some extent, where the ball was going to stop when the barrier was fully visible. Children did not show this predictive knowledge when the barrier was partially occluded by the screen.

16.
What do novice word learners know about the sound of words? Word‐learning tasks suggest that young infants (14 months old) confuse similar‐sounding words, whereas mispronunciation detection tasks suggest that slightly older infants (18–24 months old) correctly distinguish similar words. Here we explore whether the difficulty at 14 months stems from infants' novice status as word learners or whether it is inherent in the task demands of learning new words. Results from 3 experiments support a developmental explanation. In Experiment 1, infants of 20 months learned to pair 2 phonetically similar words to 2 different objects under precisely the same conditions that infants of 14 months (Experiment 2) failed. In Experiment 3, infants of 17 months showed intermediate, but still successful, performance in the task. Vocabulary size predicted word‐learning performance, but only in the younger, less experienced word learners. The implications of these results for theories of word learning and lexical representation are discussed.

17.
Nine‐month‐old infants were presented with an engaging and challenging task of visually tracking and reaching for a rolling ball that disappeared and reappeared from behind an occluder. On some trials, the infant observed the experimenter place a barrier on the ball's track; the barrier remained partially visible above the occluder throughout the remainder of the trial. When the task involved only predictive tracking, infants' anticipatory gaze shifts were faster when no barrier was present. When the task involved both tracking and reaching, there were more reaches when no barrier was present. If the infant reached, the timing and extension of the reach and the accompanying gaze shift did not differ with regard to the barrier. Because catching the ball was quite difficult for these infants, task demands interfered with the integration of visual information and visuospatial reasoning about the barrier with the reaching action.

18.
In the two-alternative forced-choice (2AFC) paradigm, manual responses such as pointing have been widely used as measures to estimate cognitive abilities. While pointing measurements can be easily collected, coded, analyzed, and interpreted, absent responses are often observed particularly when adopting these measures for toddler studies, which leads to an increase of missing data. Although looking responses such as preferential looking can be available as alternative measures in such cases, it is unknown how well looking measurements can be interpreted as equivalent to manual ones. This study aimed to answer this question by investigating how accurately pointing responses (i.e., left or right) could be predicted from concurrent preferential looking. Using pre-existing videos of toddlers aged 18–23 months engaged in an intermodal word comprehension task, we developed models predicting manual from looking responses. Results showed substantial prediction accuracy for both the Simple Majority Vote and Machine Learning-Based classifiers, which indicates that looking responses would be reasonable alternative measures of manual ones. However, a further exploratory analysis revealed that when applying the created models to data of toddlers who did not produce clear pointing responses, the estimation agreement of missing pointing between the models and the human coders slightly dropped. This indicates that looking responses without pointing were qualitatively different from those with pointing. Bridging two measurements in forced-choice tasks would help researchers avoid wasting collected data due to the absence of manual responses and interpret results from different modalities comprehensively.
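The Simple Majority Vote classifier mentioned in this abstract can be sketched in a few lines: predict the pointed-at side as whichever side the toddler looked at in the majority of coded video frames. This is only an illustrative sketch, not the authors' implementation; the function name, the 'L'/'R'/'away' coding labels, and the tie-handling rule are all hypothetical.

```python
from collections import Counter

def predict_pointing(gaze_frames):
    """Predict the pointed-at side ('L' or 'R') from frame-by-frame gaze
    codes via a simple majority vote. Frames coded as anything other than
    'L' or 'R' (e.g., 'away') are ignored; ties or trials with no usable
    looking data yield None (no prediction)."""
    votes = Counter(f for f in gaze_frames if f in ("L", "R"))
    if votes["L"] == votes["R"]:
        return None
    return "L" if votes["L"] > votes["R"] else "R"

# Hypothetical trial: the toddler looks mostly at the right-hand picture.
trial = ["R", "R", "away", "L", "R", "R", "away", "R"]
print(predict_pointing(trial))  # prints: R
```

A machine-learning variant would replace the vote with a classifier trained on richer looking features (e.g., looking proportions over time), as the study's second model family does.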

19.
Much research has been devoted to questions regarding how infants begin to perceive the unity of partly occluded objects, and it is clear that object motion plays a central role. Little is known, however, about how infants' motion processing skills are used in such tasks. One important kinetic cue for object shape is structure from motion, but its role in unity perception remains unknown. To address this issue, we presented 2‐ and 4‐month‐old infants with displays in which object unity was specified by vertical rotation. After habituation to this display, infants viewed broken and complete versions of the object to test their preference for the broken object, an indication of perception of unity in the occlusion display. Positive evidence for the perception of unity was provided by both age groups. Concomitant edge translation available in 1 condition did not appear to contribute above and beyond simple rotation. These results suggest that structure from motion, and perhaps contour deformation and shading cues, can contribute important information for veridical object percepts in very young infants.

20.
Three studies were conducted to determine whether differential patterns of categorization observed in studies using visual familiarization and object‐examining measures hold up as procedural confounds are eliminated. In Experiment 1, we attempted as direct a comparison as possible between visual and object‐examining measures of categorization. Consistent with previous reports, 9‐month‐old infants distinguished a basic‐level contrast (dog–horse) in the visual task, but not in the examining task. Experiment 2 was designed to reduce levels of nonexploratory activity in an examining task; 9‐month‐olds again failed to distinguish categories of dogs and horses. In Experiment 3, we adopted a paired‐comparison test format in the object‐examining task. Infants did display a novel category preference under paired testing conditions. The results suggest that the different patterns of categorization often seen in looking and touching tasks are a reflection, not of different categorization processes, but of the differential sensitivity of the tasks.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号