Similar Articles
20 similar records found (search time: 214 ms)
1.
The goal of this study was to examine developmental change in visual attention to dynamic visual and audiovisual stimuli in 3‐, 6‐, and 9‐month‐old infants. Infant look duration was measured during exposure to dynamic geometric patterns and Sesame Street video clips under three different stimulus modality conditions: unimodal visual, synchronous audiovisual, and asynchronous audiovisual. Infants looked longer toward Sesame Street stimuli than geometric patterns, and infants also looked longer during multimodal audiovisual (synchronous and asynchronous) presentations than during unimodal visual presentations. There was a three‐way interaction of age, stimulus type, and stimulus modality. Significant differences were found within and between age groups related to stimulus modality (visual or audiovisual) while viewing Sesame Street clips. No significant interaction was found between age and stimulus type while infants viewed dynamic geometric patterns. These findings indicate that patterns of developmental change in infant attention vary based on stimulus complexity and modality of presentation.

2.
Research has demonstrated that intersensory redundancy (stimulation synchronized across multiple senses) is highly salient and facilitates processing of amodal properties in multimodal events, bootstrapping early perceptual development. The present study is the first to extend this central principle of the intersensory redundancy hypothesis (IRH) to certain types of intrasensory redundancy (stimulation synchronized within a single sense). Infants were habituated to videos of a toy hammer tapping silently (unimodal control), depicting intersensory redundancy (synchronized with a soundtrack) or intrasensory redundancy (synchronized with another visual event; light flashing or bat tapping). In Experiment 1, 2‐month‐olds showed both intersensory and intrasensory facilitation (with respect to the unimodal control) for detecting a change in tempo. However, intrasensory facilitation was found when the hammer was synchronized with the light flashing (different motion) but not with the bat tapping (same motion). Experiment 2 tested 3‐month‐olds using a somewhat easier tempo contrast. Results supported a similarity hypothesis: intrasensory redundancy between two dissimilar events was more effective than that between two similar events for promoting processing of amodal properties. These findings extend the IRH and indicate that in addition to intersensory redundancy, intrasensory redundancy between two synchronized dissimilar visual events is also effective in promoting perceptual processing of amodal event properties.

3.
Information presented concurrently and redundantly to 2 or more senses (intersensory redundancy) has been shown to recruit attention and promote perceptual learning of amodal stimulus properties in animal embryos and human infants. This study examined whether the facilitative effect of intersensory redundancy also extends to the domain of memory. We assessed bobwhite quail chicks' ability to remember and prefer an individual maternal call presented either unimodally or redundantly and synchronously with patterned light during the period prior to hatching. Embryos provided with unimodal auditory exposure failed to prefer the familiar call over a novel maternal call postnatally at 48 hr and 72 hr following exposure. In contrast, embryos provided with redundant, synchronous audiovisual stimulation significantly preferred the familiar call at 48 hr following exposure, but not at 72 hr. A second experiment provided chicks with a single 10‐min refamiliarization with the familiar call at either 48 hr or 72 hr following hatching. Chicks given only unimodal auditory exposure prior to hatching did not appear to benefit from this brief postnatal refamiliarization, showing no preference for the familiar call at either 72 or 96 hr. Chicks that received redundant audiovisual stimulation prenatally showed a significant preference for the familiar call (following the brief reexposure 24 hr earlier) at both 72 and 96 hr of age. These results are the first to demonstrate that redundantly specified information is remembered longer and reactivated more easily than the same information presented unimodally. These findings provide further evidence of the salience of intersensory redundancy in guiding selective attention and perceptual learning during early development.

4.
Research in developmental cognitive science reveals that human infants perceive shape changes in 2D visual forms that are repeatedly presented over long durations. Nevertheless, infants’ sensitivity to shape under the brief conditions of natural viewing has been little studied. Three experiments tested for this sensitivity by presenting 128 seven‐month‐old infants with shapes for the briefer durations under which they might see them in dynamic scenes. The experiments probed infants’ sensitivity to two fundamental geometric properties of scale‐ and orientation‐invariant shape: relative length and angle. Infants detected shape changes in closed figures, which presented changes in both geometric properties. Infants also detected shape changes in open figures differing in angle when figures were presented at limited orientations. In contrast, when open figures were presented at unlimited orientations, infants detected changes in relative length but not in angle. The present research therefore suggests that, as infants look around at the cluttered and changing visual world, relative length is the primary geometric property by which they perceive scale‐ and orientation‐invariant shape.

5.
Previous research shows that infants represent approximate number: After habituation to a constant numerosity (e.g., eight dots), 6‐month‐old infants dishabituate to a novel numerosity (e.g., 16 dots). However, numerical information in the real world is far more variable and rarely offers repeated presentations of a single quantity. Instead, we often encounter quantities in the form of distributions around a central tendency. It remains unknown whether infants can represent frequency distributions from this type of distributed numerical input. Here, we asked whether 6‐month‐old infants can represent distributions of large approximate numerosities. In two experiments, we first familiarized infants to sequences of dot collections with varying numerosities. For half the infants, the sequence contained a unimodal frequency distribution, with numerosities centered around a single mean, and for the other half, it contained a bimodal frequency distribution of numerosities with two numerical peaks. We then tested infants with alternating or constant numerosities. Infants who had been familiarized to a unimodal distribution looked longer at alternating numerosities than constant numerosities (Experiments 1 and 2), whereas infants who had been familiarized to a bimodal distribution looked longer at constant numerosities (Experiment 2). These findings suggest that infants can spontaneously extract frequency distributions from distributed numerical input.

6.
Studies of infant development concerned with the emergence of specific perceptual or cognitive abilities have typically focused on responsiveness in only one sensory modality. Research on infant perception, learning, and memory often attempts to reduce multimodal stimulation to “noise” and to control or omit stimulation from other sensory modalities in experimental designs. This type of unimodal research, although important, may not generalize well to the behavior of infants in the multimodal context of the everyday world. Research from animal and human development is reviewed that documents that significant differences in infants' perceptual skills and abilities can be observed under conditions of unimodal versus multimodal stimulation. These studies provide converging evidence for a functional distinction between unimodal and multimodal stimulation during early development and suggest that ecological validity can be enhanced when research findings are generalized appropriately to the natural environment and are not overgeneralized across stimulus properties, tasks, or contexts.

7.
For several decades, many authors have claimed the existence, early in life, of a tight link between perceptual and productive systems in speech. However, the question of whether this link is acquired or already present at birth remains open. This study investigated this question using the paradigm of neonatal facial imitation. We compared imitative responses of newborn infants presented with either visual‐only, audiovisual congruent, or audiovisual incongruent models. Our results revealed that the newborns imitated the movements of the model's mouth significantly more quickly when this model was audiovisual congruent rather than visual‐only. Moreover, when observing an audiovisual incongruent model, the newborns did not produce imitative behavior. These findings, by highlighting the influence of speech perception on newborns' imitative responses, suggest that the neural architecture for perception–production is already in place at birth. The implications of these results are discussed in terms of a link between language and neonatal imitation, which could represent a precursor of more mature forms of vocal imitation and speech development in general.

8.
The development of spatial visual attention has been extensively studied in infants, but far less is known about the emergence of object‐based visual attention. We tested 3–5‐ and 9–12‐month‐old infants on a task that allowed us to measure infants’ attention orienting bias toward whole objects when they competed with color, motion, and orientation feature information. Infants’ attention orienting to whole objects was affected by the dimension of the competing visual feature. Whether attention was biased toward the whole object or its salient competing feature (e.g., “ball” or “red”) changed with age for the color feature, with older infants biased toward whole objects. Moreover, family socioeconomic status predicted feature‐based attention in the youngest infants and object‐based attention in the older infants when color feature information competed with whole‐object information.

9.
Twenty‐four infants were tested monthly for gaze and point following between 9 and 15 months of age and mother‐infant free play sessions were also conducted at 9, 12, and 15 months (Carpenter, Nagell, & Tomasello, 1998). Using this data set, this study explored relations between maternal talk about mental states during mothers' free play with their infants and the emergence of joint visual attention in infants. Contrary to hypothesis, mothers' comments about their infants' perceptual states significantly declined after their infants began to engage in joint visual attention. Comments about other mental states did not change relative to acquisition of joint visual attention skill. We speculate that after infants begin to reliably follow gaze and points, mothers may switch the focus of their conversation from their infants' visual behavior and experiences to the object of their mutual attention.

10.
In 3 experiments, the temporal processing sequence of local and global visual properties was investigated with 3‐month‐old infants. Across the experiments, a global pattern was discriminated under conditions of less familiarization than was necessary for local elements to be discriminated, thus indicating a global precedence in the sequence of visual processing at 3 months of age. Patterns of discrimination were also observed to vary as a function of individual differences in infants' look duration. Furthermore, the pattern of novelty and familiarity preferences for short‐looking infants varied in complex ways as a function of familiarization time: Preferences for novel global properties were supplanted by familiarity preferences at the point in familiarization at which infants first became sensitive to local properties.

11.
Research on the influence of multimodal information on infants' learning is inconclusive. While one line of research finds that multimodal input has a negative effect on learning, another finds positive effects. The present study aims to shed some new light on this discussion by studying the influence of multimodal information and accompanying stimulus complexity on the learning process. We assessed the influence of multimodal input on the trial‐by‐trial learning of 8‐ and 11‐month‐old infants. Using an anticipatory eye movement paradigm, we measured how infants learn to anticipate the correct stimulus–location associations when exposed to visual‐only, auditory‐only (unimodal), or auditory and visual (multimodal) information. Our results show that infants in both the multimodal and visual‐only conditions learned the stimulus–location associations. Although infants in the visual‐only condition appeared to learn in fewer trials, infants in the multimodal condition showed better anticipating behavior: as a group, they had a higher chance of anticipating correctly on more consecutive trials than infants in the visual‐only condition. These findings suggest that effects of multimodal information on infant learning operate chiefly through effects on infants' attention.

12.
By 7 months, infants, when reaching for an object, visually guide their grasp by preorienting their hands to match the object's orientation. Evidence at earlier ages, however, for prospective grasp control via anticipatory hand orientation is mixed. This study examined longitudinally the development of anticipatory hand orientation in 15 infants, seen every 3 weeks between 5 and 7.5 months. On each visit, infants were given 8 trials of reaching for an object oriented vertically and horizontally. Hand orientation at the first point of contact, prior to any tactile feedback, indexed infant prospective grasp. Between 5 and 7 months, infants showed evidence for qualitative transition in prospective control of grasp, supporting the contention that control of grasp shifts from being based on tactual feedback to being visually and therefore prospectively based. Implications for how prospective grasp emerges developmentally are discussed.

13.
There is an increasing interest in alpha-range rhythms in the electroencephalogram (EEG) in relation to perceptual and attentional processes. The infant mu rhythm has been extensively studied in the context of linkages between action observation and action production in infancy, but less is known about the mu rhythm in relation to cross-modal processes involving somatosensation. We investigated differences in mu responses to cued vibrotactile stimulation of the hand in two age groups of infants: from 6 to 7 months and from 13 to 14 months. We were also interested in anticipatory neural responses in the alpha frequency range prior to tactile stimulation. Tactile stimulation of infants’ left or right hand was preceded by an audiovisual cue signaling which hand would be stimulated. In response to the tactile stimulus, infants demonstrated significant mu desynchronization over the central areas contralateral to the hand stimulated, with higher mu peak frequency and greater contralateral mu desynchronization for older infants. Prior to the tactile stimulus, both age groups showed significant bilateral alpha desynchronization over frontocentral sites, which may be indicative of generalized anticipation of an upcoming stimulus. The findings highlight the potential of examining the sensorimotor mu rhythm in the context of infant attentional development.

14.
Infant visual attention develops rapidly over the first year of life, significantly altering the way infants respond to peripheral visual events. Here, we present data from 5‐, 7‐, and 10‐month‐old infants using the Infant Orienting With Attention (IOWA) task, designed to capture developmental changes in visual spatial attention and saccade planning. Results indicate rapid development of spatial attention and visual response competition between 5 and 10 months. We use a dynamic neural field (DNF) model to link behavioral findings to neural population activity, providing a possible mechanistic explanation for observed developmental changes. Together, the behavioral and model simulation results provide new insights into the specific mechanisms that underlie spatial cueing effects, visual competition, and visual interference in infancy.

15.
Several studies have shown that at 7 months of age, infants display an attentional bias toward fearful facial expressions. In this study, we analyzed visual attention and heart rate data from a cross‐sectional study with 5‐, 7‐, 9‐, and 11‐month‐old infants (Experiment 1) and visual attention from a longitudinal study with 5‐ and 7‐month‐old infants (Experiment 2) to examine the emergence and stability of the attentional bias to fearful facial expressions. In both experiments, the attentional bias to fearful faces appeared to emerge between 5 and 7 months of age: 5‐month‐olds did not show a difference in disengaging attention from fearful and nonfearful faces, whereas 7‐ and 9‐month‐old infants had a lower probability of disengaging attention from fearful than nonfearful faces. Across the age groups, heart rate (HR) data (Experiment 1) showed a more pronounced and longer‐lasting HR deceleration to fearful than nonfearful expressions. The results are discussed in relation to the development of the perception and experience of fear and the interaction between emotional and attentional processes.

16.
Given the importance of infants' perception of bimodal speech for emerging language and emotion development, this study used eye‐tracking technology to examine infants' attention to face+voice displays differing by emotion (fear, sad, happy) and visual stimulus (dynamic versus static). Peripheral distracters were presented to measure attention disengagement. It was predicted that infants would look longer at and disengage more slowly from dynamic bimodal emotion displays, especially when viewing dynamic fear. However, the results from twenty‐two 10‐month‐olds found significantly greater attention on dynamic versus static trials, independent of emotion. Interestingly, infants looked equally to mouth and eye regions of speakers' faces except when viewing/hearing dynamic fear; in this case, they fixated more on the speakers' mouth region. Average latencies to distracters were longer on dynamic compared to static bimodal stimuli, but not differentiated by emotion. Thus, infants' attention was enhanced (in terms of both elicitation and maintenance) by dynamic, bimodal emotion displays. Results are compared to conflicting findings using static emotion displays, with suggestions for future research using more ecologically relevant dynamic, multimodal displays to gain a richer understanding of infants' processing of emotion.

17.
Languages instantiate many different kinds of dependencies, some holding between adjacent elements and others holding between nonadjacent elements. In the domain of phonology–phonotactics, sensitivity to adjacent dependencies has been found to appear between 6 and 10 months. However, no study has directly established the emergence of sensitivity to nonadjacent phonological dependencies in the native language. The present study focuses on the emergence of a perceptual Labial‐Coronal (LC) bias, a dependency involving two nonadjacent consonants. First, Experiment 1 shows that a preference for monosyllabic consonant‐vowel‐consonant LC words over CL (Coronal‐Labial) words emerges between 7 and 10 months in French‐learning infants. Second, two experiments, presenting only the first or last two phonemes of the original stimuli, establish that the LC bias at 10 months cannot be explained by adjacent dependencies or by a preference for more frequent coronal consonants (Experiments 2a and 2b). At 7 months, by contrast, infants appear to react to the higher frequency of coronal consonants (Experiments 3a and 3b). The present study thus demonstrates that infants become sensitive to nonadjacent phonological dependencies between 7 and 10 months. It further establishes a change between these two ages from sensitivity to local properties to nonadjacent dependencies in the phonological domain.

18.
A fundamental question of perceptual development concerns how infants come to perceive partly hidden objects as unified across a spatial gap imposed by an occluder. Much is known about the time course of development of perceptual completion during the first several months after birth, as well as some of the visual information that supports unity perception in infants. The goal of this investigation was to examine the inputs to this process. We recorded eye movements in 3‐month‐old infants as they participated in a standard object unity task and found systematic differences in scanning patterns between those infants whose post‐habituation preferences were indicative of unity perception versus those infants who did not perceive unity. Perceivers, relative to nonperceivers, scanned more reliably in the vicinity of the visible rod parts and scanned more frequently across the range of rod motion. These results suggest that emerging object concepts are tied closely to available visual information in the environment, and the process of information pickup.

19.
Infants are readily able to use their recent experience to shape their future behavior. Recent work has confirmed that infants generate neural predictions based on their recent experience (Emberson, Richards, & Aslin, 2015) and that neural predictions trigger visual system activity similar to that elicited by visual stimulation. This study uses behavioral methods to ask, how visual is visual prediction? In Experiment 1, we confirmed that when additional trials provide additional visual experience with the experimental shape, infants exhibit a robust novelty preference. In Experiment 2, we removed the visual stimulus from some trials and presented the predictive auditory cue alone, allowing the effects of neural prediction to be assessed. We found no evidence of looking preferences at test, suggesting that visual prediction does not contribute to the computation of visual familiarity. In Experiment 3, we provided infants with a degraded visual stimulus to test whether visual prediction could bias visual perception under ambiguous conditions. Again, we found no evidence of looking preferences at test, suggesting that visual prediction is not biasing perception of an uncertain stimulus. Overall, our results suggest that visual prediction is not visual, in the strictest sense, despite the presence of visual system activation.

20.
We examined whether infants organize information according to the newly proposed principle of common region, which states that elements within a region are grouped together and separated from those of other regions. In Experiment 1, 6‐ to 7‐month‐olds exhibited sensitivity to regions by discriminating between the displacement of an element within a region versus across regions. In Experiments 2 (6‐ to 7‐month‐olds) and 3 (3‐ to 4‐month‐olds), infants who were habituated to 2 elements in each of 2 regions subsequently discriminated between a familiar and novel grouping in familiar and novel regions. Thus, infants as young as 3 to 4 months of age are not only sensitive to regions in visual images, but also use these regions to group elements in accord with the principle of common region. Because common region analysis is critical to such basic visual functions as figure‐ground and object segregation, these results suggest that the organizational mechanism that underlies many vital visual functions is already operational by 3 to 4 months of age.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) · 京ICP备09084417号