Similar Articles
Found 20 similar articles (search time: 62 ms)
1.
This study investigated the effects of two different types of hand gestures on memory recall of preschool children. Experiment 1 found that children who were instructed to use representational gestures while retelling an unfamiliar story retrieved more information about the story than children who were asked to hold their hands still. In addition, children who engaged in some forms of bodily movements other than hand gestures also recalled better. Experiment 2 showed that a simpler and more basic form of gesture, the pointing gesture, had a similar effect on recollecting and retelling the details of a story. The findings provide evidence for the beneficial effects of hand gestures, both representational gestures and pointing gestures, on cognitive processes such as memory retrieval and verbal communication for preschool-aged children.

2.
Two experiments investigated gesture as a form of external support for spoken language comprehension. In both experiments, children selected blocks according to a set of videotaped instructions. Across trials, the instructions were given using no gesture, gestures that reinforced speech, and gestures that conflicted with speech. Experiment 1 used spoken messages that were complex for preschool children but not for kindergarten children. Reinforcing gestures facilitated speech comprehension for preschool children but not for kindergarten children, and conflicting gestures hindered comprehension for kindergarten children but not for preschool children. Experiment 2 tested preschool children with simpler spoken messages. Unlike Experiment 1, preschool children's comprehension was not facilitated by reinforcing gestures. However, children's comprehension also was not hindered by conflicting gestures. Thus, the effects of gesture on speech comprehension depend both on the relation of gesture to speech, and on the complexity of the spoken message.

3.
4.
To investigate the influence of different kinds of gesture on children’s memory, 60 6- to 7-year-old children participated in an event conducted by the experimenters (“visiting the pirate”) and were interviewed to assess memory for the event approximately 2 weeks later. Children were assigned to 1 of 4 conditions; in 3 conditions, gesture was possible (gesture-instructed, gesture-modelled, gesture-allowed), whereas in the fourth condition (gesture-not-allowed), children’s hands were constrained. The amount of gesture engaged in was limited but was greatest in the gesture-instructed condition. Children in the gesture-instructed condition, who were asked to gesture during the interview, recalled more than did those in the other conditions. Further, relative to children in the gesture-modelled and gesture-allowed conditions, children in the gesture-instructed condition conveyed significantly more information in gesture that had not also been reported verbally. Although further research is necessary to understand the underlying mechanism, the findings suggest that instructing children to gesture as well as verbally recall an experience has cognitive and communicative benefits. Elizabeth Stevanoni and Karen Salmon are affiliated with the School of Psychology, University of New South Wales, Sydney, Australia. We thank the children, parents, teachers and principals at the participating schools, St Michaels and Villa Maria Primary Schools, and acknowledge Kay Pegg for help with data collection.

5.
There is growing evidence that addressees in interaction integrate the semantic information conveyed by speakers’ gestures. Little is known, however, about whether and how addressees’ attention to gestures and the integration of gestural information can be modulated. This study examines the influence of a social factor (speakers’ gaze to their own gestures), and two physical factors (the gesture’s location in gesture space and gestural holds) on addressees’ overt visual attention to gestures (direct fixations of gestures) and their uptake of gestural information. It also examines the relationship between gaze and uptake. The results indicate that addressees’ overt visual attention to gestures is affected both by speakers’ gaze and holds but for different reasons, whereas location in space plays no role. Addressees’ uptake of gesture information is only influenced by speakers’ gaze. There is little evidence of a direct relationship between addressees’ direct fixations of gestures and their uptake.

6.
7.
8.
Naming is a verbal developmental capability and cusp that allows children to acquire listener and speaker functions without direct instruction (e.g., incidental learning of words for objects). We screened 19 typically developing 2- and 3-year-old children for the presence of Naming for 3-dimensional objects. All 9 of the 3-year-olds had Naming, whereas 8 of the 10 2-year-olds lacked it. For the 2-year-old children who lacked Naming, we used multiple-probe designs (2 groups of 4 children) to test the effect of multiple exemplar instruction (MEI) across speaker and listener responses on the emergence of Naming. Prior to the MEI, the children could not emit untaught listener or speaker responses following match-to-sample instruction with novel stimuli, during which they had heard the experimenter tact the stimuli. After MEI with a different set of novel stimuli, the children emitted listener and speaker responses when probed with the original stimuli, in the absence of any further instruction with those stimuli. Seven of 8 children acquired the speaker and listener responses of Naming at 83% to 100% accuracy. We discuss the basic and applied science implications.

9.
Infants infer social and pragmatic intentions underlying attention‐directing gestures, but the basis on which infants make these inferences is not well understood. Previous studies suggest that infants rely on information from preceding shared action contexts and joint perceptual scenes. Here, we tested whether 12‐month‐olds use information from act‐accompanying cues, in particular prosody and hand shape, to guide their pragmatic understanding. In Experiment 1, caregivers directed infants’ attention to an object to request it, share interest in it, or inform them about a hidden aspect. Caregivers used distinct prosodic and gestural patterns to express each pragmatic intention. Experiment 2 was identical except that experimenters provided identical lexical information across conditions and used three sets of trained prosodic and gestural patterns. In all conditions, the joint perceptual scenes and preceding shared action contexts were identical. In both experiments, infants reacted appropriately to the adults’ intentions by attending to the object mostly in the sharing interest condition, offering the object mostly in the imperative condition, and searching for the referent mostly in the informing condition. Infants’ ability to comprehend pragmatic intentions based on prosody and gesture shape expands infants’ communicative understanding from common activities to novel situations for which shared background knowledge is missing.

10.
Two experiments examined how developmental changes in processing speed, reliance on visual articulatory cues, memory retrieval, and the ability to interpret representational gestures influence memory for spoken language presented with a view of the speaker (visual-spoken language). Experiment 1 compared 16 children (M = 9.5 yrs.) and 16 young adults, using an immediate recall procedure. Experiment 2 replicated the methods with new speakers, stimuli, and participants. Results showed that both children's and adults' memory for sentences was aided by the presence of visual articulatory information and gestures. Children's slower processing speeds did not adversely affect their ability to process visual-spoken language. However, children's ability to retrieve the words from memory was poorer than adults'. Children's memory was also more influenced by representational gestures that appeared along with predicate terms than by gestures that co-occurred with nouns.

11.
Seven‐month‐old infants require redundant information, such as temporal synchrony, to learn arbitrary syllable‐object relations (Gogate & Bahrick, 1998). Infants learned the relations between 2 spoken syllables, /a/ and /i/, and 2 moving objects only when temporal synchrony was present during habituation. This article presents 2 experiments to address infants' memory for these relations. In Experiment 1, infants remembered the syllable‐object relations after 10 min, only when temporal synchrony between the vocalizations and moving objects was provided during learning. In Experiment 2, 7‐month‐olds were habituated to the same syllable‐object pairs in the presence of temporal synchrony and tested for memory after 4 days. Once again, infants learned and showed emerging memory for the syllable‐object relations 4 days after original learning under the temporally synchronous condition. These findings are consistent with the view that prior to symbolic development, infants learn and remember word‐object relations by perceiving redundant information in the vocal and gestural communication of adults.

12.
This study focused on a niche not yet investigated in research on infants' gestures: the effect of the prior context of one specific gestural behavior (gives) on maternal behavior. For this purpose, we recruited 23 infants, observed at 11 and 13 months of age, yielding 246 giving-gesture bouts performed in three contexts: typical (the object was offered immediately), contingent on exploration, and contingent on play. The analysis revealed that maternal responses to infants' giving gestures varied and were affected by infant age and gesture context. Hence, mothers amended their responses according to the background that generated each gesture. The number of verbal responses to infants' giving gestures decreased as the infants aged, whereas the number of pretense responses increased. For infants aged 11 months, mothers generally provided motor responses to typical gestures. However, for infants aged 13 months, this trend declined and was replaced by a strong positive correlation between giving gestures contingent on play and verbal responses. We concluded that the type of activity with objects prior to employing giving gestures could enhance infants' symbolic skills, because caregivers monitor the contingent act that yields the gesture, which in turn shapes their response.

13.
Playing infants often direct smiling looks toward social partners. In some cases the smile begins before the look, so it cannot be a response to the sight or behavior of the social partner. In this study we asked whether smiles that anticipate social contact are used by 8‐ to 12‐month‐old infants as voluntary social signals. Eighty infants (20 at each of 8, 9, 10, and 12 months of age) completed 5 tasks. The tasks assessed anticipatory smiling during toy play, means‐end understanding (2 tasks), intentional communication via gesture and vocalizations, and memory for mother's location. Across all ages, anticipatory smiling was strongly predicted by intentional gestural and vocal communication and by means‐end understanding. The findings are discussed in terms of the nature and origins of infants' voluntary communications.

14.
This study investigates the role of dynamic information in identity face recognition at birth. More specifically, we tested whether semi‐rigid motion, conveyed by a change in facial expression, facilitates identity recognition. In Experiment 1, two‐day‐old newborns, habituated to a static happy or fearful face (static condition), were tested using a pair of novel- and familiar-identity static faces with a neutral expression. Results indicated that newborns manifested a preference for the familiar stimulus, independently of the facial expression. In contrast, in Experiment 2, newborns habituated to a semi‐rigid moving face (dynamic condition) manifested a preference for the novel face, independently of the facial expression. In Experiment 3, a multistatic image of a happy face with different degrees of intensity was used to test the role of different amounts of static pictorial information in identity recognition (multistatic image condition). Results indicated that newborns were not able to recognize the face to which they had been familiarized. Overall, the results clearly demonstrate that newborns' face recognition depends strictly on the learning conditions and that the semi‐rigid motion conveyed by facial expressions facilitates identity recognition from birth.

15.
Eva Murillo & Marta Casla, Infancy, 2021, 26(1), 104-122
The aim of this study was to analyze the use of representational gestures from a multimodal point of view in the transition from one-word to multi-word constructions. Twenty-one Spanish-speaking children were observed longitudinally at 18, 21, 24, and 30 months of age. We analyzed the production of deictic, symbolic, and conventional gestures and their coordination with different verbal elements. Moreover, we explored the relationship between gestural multimodal and unimodal productions and independent measures of language development. Results showed that gesture production remains stable in the period studied. Whereas deictic gestures are frequent and mostly multimodal from the beginning, conventional gestures are rare and mainly unimodal. Symbolic gestures are initially unimodal, but between 24 and 30 months of age, this pattern reverses, with more multimodal symbolic gestures than unimodal. In addition, the frequency of multimodal representational gestures at specific ages seems to be positively related to independent measures of vocabulary and morphosyntax development. By contrast, the production of unimodal representational gestures appears negatively related to these measures. Our results suggest that multimodal representational gestures could have a facilitating role in the process of learning to combine meanings for communicative goals.

16.
Young (M = 23 years) and older (M = 77 years) adults' interpretation of, and memory for, the emotional content of spoken discourse were examined in an experiment using short, videotaped scenes of two young actresses talking to each other about emotionally laden events. Emotional nonverbal information (prosody or facial expressions) was conveyed at the end of each scene at low, medium, and high intensities. Nonverbal information indicating anger, happiness, or fear conflicted with the verbal information. Older adults' ability to differentiate levels of emotional intensity (for happiness and anger) was weaker than that of younger adults. An incidental memory task revealed that older adults, more often than younger adults, reconstruct what people state verbally to coincide with the meaning of the nonverbal content, if the nonverbal content is conveyed through facial expressions. A second experiment with older participants showed that the high level of memory reconstructions favoring the nonverbal interpretation was maintained when the ages of the participants and actresses were matched, and when the nonverbal content was conveyed both through prosody and facial expressions.

17.
Previous studies have shown that teachers’ gestures are beneficial for student learning. In this research, we investigate whether teachers’ gestures have comparable effects in face-to-face live instruction and video-based instruction. We provided sixty-three 7- to 10-year-old students with instruction about mathematical equivalence problems (e.g., 3 + 4 + 5 = __ + 5). Students were assigned to one of four experimental conditions in a 2 × 2 factorial design that varied (1) instruction medium (video vs. live) and (2) instruction modality (speech vs. speech + gesture). There was no main effect of medium: The same amount of learning occurred whether instruction was done live or on video. There was a main effect of modality: Speech instruction accompanied by gesture resulted in significantly more learning and transfer than instruction conveyed through speech only. Gesture’s effect on instruction was stronger for video instruction than for live instruction. These findings suggest that there may be a limit to gesture’s role in communication that results in student learning.

18.
Nonconscious processing of sexual information: a generalization to women
Sexually competent stimuli may nonconsciously activate sexual memory and set up sexual responding. In men, subliminally presented sexual pictures facilitated recognition of sexual information (Spiering, Everaerd, & Janssen, 2003). The goal of the two experiments reported here was to investigate to what extent this result can be generalized to women. A direct replication in women failed in Experiment 1. In Experiment 2, besides the male-oriented sexual picture set, pictures of two other sets were presented: female-oriented sexual pictures and baby pictures. Effects of the menstrual cycle were also examined. In Experiment 2, only male-oriented pictures showed a facilitation effect. Sensitivity for reproductive stimuli was enhanced during the midluteal phase. Like men, women may nonconsciously recognize a stimulus as sexual. This recognition process seems unrelated to the potential of the stimulus to elicit subjective arousal.

20.
Infants initially use words and symbolic gestures in markedly similar ways, to name and refer to objects. The goal of these studies is to examine how parental verbal and gestural input shapes infants' expectations about the communicative functions of words and gestures. The studies reported here suggest that infants may initially accept both words and gestures as symbols because parents often produce both verbal labels and gestural routines within the same joint-attention contexts. In two studies, we examined the production of verbal and gestural labels in parental input during joint-attention episodes. In Study 1, parent-infant dyads engaged in a picture-book reading task in which parents introduced their infants to drawings of unfamiliar objects (e.g., accordion). Parents' verbal labeling far outstripped their gestural communication, but the number of gestures produced was non-trivial and was highly predictive of infant gestural production. In Study 2, parent-infant dyads engaged in a free-play session with familiar objects. In this context, parents produced both verbal and gestural symbolic acts frequently with reference to objects. Overall, these studies support an input-driven explanation for why infants acquire both words and gestures as object names, early in development.

Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)