Similar Literature
20 similar records found (search time: 31 ms)
1.
In past studies, different kinds of gestures have shown different developmental trajectories: iconic gestures are acquired after words, whereas other gestures appear before them. Similarly, when speech is missing or weak, iconic gestures are rarely used to compensate. These results suggest that iconic gestures are less independent of speech than other kinds of gestures. The present study tested this idea in French-English bilingual children with unequal proficiency in their two languages. Eight children between the ages of 3;6 and 4;11 were videotaped in two separate free-play sessions, one in each language, and their use of gestures was coded. The results showed that the children used iconic gestures at a higher rate in their more proficient language, whereas their use of other kinds of gestures did not differ by proficiency. These findings suggest that iconic gestures are more closely tied to speech than other kinds of gestures and therefore cannot serve, in the preschool years, as a compensatory strategy for weak proficiency.

2.
Gesture is widely regarded as playing an important role in communication, both in conjunction with and independent of speech. Indeed, gesture is known to develop even before the onset of spoken words. However, little is known about the communicative conditions under which gesture emerges. The aim of this study was to explore the role of vision in early gesturing. We examined gesture development in 5 congenitally blind and 5 sighted toddlers videotaped longitudinally between the ages of 14 and 28 months in their homes while engaging in free play with a parent or experimenter. All of the blind children produced at least some gestures during the one-word stage of language development. However, the blind children produced gestures at a lower rate than their sighted peers. Moreover, although blind and sighted children produced the same overall set of gesture types, the distribution of gestures across categories differed. In addition, blind children used gestures primarily to communicate about nearby objects, whereas sighted children used them for nearby as well as distally located objects. These findings suggest that gesture may play different roles in the language-learning process for sighted and blind children. Nevertheless, they also make it clear that gesture is a robust phenomenon of early communicative development, emerging even in the absence of experience with a visual model.

3.
Infants initially use words and symbolic gestures in markedly similar ways, to name and refer to objects. The goal of these studies is to examine how parental verbal and gestural input shapes infants' expectations about the communicative functions of words and gestures. The studies reported here suggest that infants may initially accept both words and gestures as symbols because parents often produce both verbal labels and gestural routines within the same joint-attention contexts. In two studies, we examined the production of verbal and gestural labels in parental input during joint-attention episodes. In Study 1, parent-infant dyads engaged in a picture-book reading task in which parents introduced their infants to drawings of unfamiliar objects (e.g., accordion). Parents' verbal labeling far outstripped their gestural communication, but the number of gestures produced was non-trivial and was highly predictive of infant gestural production. In Study 2, parent-infant dyads engaged in a free-play session with familiar objects. In this context, parents produced both verbal and gestural symbolic acts frequently with reference to objects. Overall, these studies support an input-driven explanation for why infants acquire both words and gestures as object names, early in development.

4.
5.
Two experiments examined how developmental changes in processing speed, reliance on visual articulatory cues, memory retrieval, and the ability to interpret representational gestures influence memory for spoken language presented with a view of the speaker (visual-spoken language). Experiment 1 compared 16 children (M = 9.5 yrs.) and 16 young adults, using an immediate recall procedure. Experiment 2 replicated the methods with new speakers, stimuli, and participants. Results showed that both children's and adults' memory for sentences was aided by the presence of visual articulatory information and gestures. Children's slower processing speeds did not adversely affect their ability to process visual-spoken language. However, children's ability to retrieve the words from memory was poorer than adults'. Children's memory was also more influenced by representational gestures that appeared along with predicate terms than by gestures that co-occurred with nouns.

6.
This study investigated the effects of two different types of hand gestures on the memory recall of preschool children. Experiment 1 found that children who were instructed to use representational gestures while retelling an unfamiliar story retrieved more information about the story than children who were asked to hold their hands still. In addition, children who engaged in some forms of bodily movement other than hand gestures also recalled better. Experiment 2 showed that a simpler and more basic form of gesture, the pointing gesture, had a similar effect on recollecting and retelling the details of a story. The findings provide evidence for the beneficial effects of hand gestures, both representational and pointing gestures, on cognitive processes such as memory retrieval and verbal communication in preschool-aged children.

7.
Infants' early communicative repertoires include both words and symbolic gestures. The current study examined the extent to which infants organize words and gestures in a single unified lexicon. As a window into lexical organization, eighteen-month-olds' (N = 32) avoidance of word–gesture overlap was examined and compared with avoidance of word–word overlap. The current study revealed that when presented with novel words, infants avoided lexical overlap, mapping novel words onto novel objects. In contrast, when presented with novel gestures, infants sought overlap, mapping novel gestures onto familiar objects. The results suggest that infants do not treat words and gestures as equivalent lexical items and that during a period of development when word and symbolic gesture processing share many similarities, important differences also exist between these two symbolic forms.

8.
Two experiments investigated gesture as a form of external support for spoken language comprehension. In both experiments, children selected blocks according to a set of videotaped instructions. Across trials, the instructions were given using no gesture, gestures that reinforced speech, and gestures that conflicted with speech. Experiment 1 used spoken messages that were complex for preschool children but not for kindergarten children. Reinforcing gestures facilitated speech comprehension for preschool children but not for kindergarten children, and conflicting gestures hindered comprehension for kindergarten children but not for preschool children. Experiment 2 tested preschool children with simpler spoken messages. Unlike Experiment 1, preschool children's comprehension was not facilitated by reinforcing gestures. However, children's comprehension also was not hindered by conflicting gestures. Thus, the effects of gesture on speech comprehension depend both on the relation of gesture to speech, and on the complexity of the spoken message.

9.
Observing hand gestures during learning consistently benefits learners across a variety of tasks. How observation of gestures benefits learning, however, remains unanswered, and cannot be answered without further understanding of which types of gestures aid learning. Specifically, the effects of observing different types of iconic gestures have yet to be established. Across two studies we examined the role that observing different types of iconic hand gestures plays in assisting adult narrative comprehension. Some iconic hand gestures (typical gestures) were produced more frequently than others (atypical gestures). Crucially, observing these different types of gestures during a narrative comprehension task did not provide equal benefit for comprehension. Rather, observing typical gestures significantly enhanced narrative comprehension beyond observing atypical gestures or no gestures. We argue that iconic gestures may be split into separate categories of typical and atypical gestures, which in turn have differential effects on narrative comprehension.

10.
In face-to-face communication, speakers typically integrate information acquired through different sources, including what they see and what they know, into their communicative messages. In this study, we asked how these different input sources influence the frequency and type of iconic gestures produced by speakers during a communication task, under two degrees of task complexity. Specifically, we investigated whether speakers gestured differently when they had to describe an object presented to them as an image or as a written word (input modality) and, additionally, when they were allowed to explicitly name the object or not (task complexity). Our results show that speakers produced more gestures when they attended to a picture. Further, speakers more often gesturally depicted shape information when they attended to an image, and more often demonstrated the function of an object when they attended to a word. However, when we increased the complexity of the task by forbidding speakers to name the target objects, these patterns disappeared, suggesting that speakers may have strategically adapted their use of iconic strategies to better meet the task's goals. Our study also revealed (independent) effects of object manipulability on the type of gestures produced by speakers and, in general, highlighted a predominance of molding and handling gestures. These gestures may reflect stronger motoric and haptic simulations, lending support to activation-based gesture production accounts.

11.
Many children with severe developmental disabilities emit idiosyncratic gestures that may function as verbal operants (Sigafoos et al., 2000). This study examined the effectiveness of a functional analysis methodology to identify the variables responsible for gestures emitted by 2 young children with severe developmental disabilities. Potential verbal operants for each participant were functionally analyzed using a multi-element design. Results indicate that gestures were maintained by access to tangible items or the delivery of information about novel stimuli. This study extends the use of functional analysis to identify conditions under which children with developmental disabilities emit gestural verbal behavior.

12.
Previous studies have shown that a speaker's co-speech gestures help the listener construct a complete and articulated mental model of the discourse; the literature on co-speech gestures also shows that they help the speaker organize the stream of thought. Given these findings, we hypothesized that a person who listens to a discourse accompanied by gestures would produce fewer co-speech gestures when recollecting that discourse than a person who listens to a discourse not accompanied by gestures. The analysis of the co-speech gestures produced by the participants in two experiments while recollecting the content of a discourse confirmed our predictions.

13.
This paper examines the importance of better recognizing and representing haafu students in Japanese education policy, using Fraser's tripartite theory of social justice. In today's transnational Japan, there has been a remarkable increase in the number of haafu, a term used for children with one Japanese and one non-Japanese parent. However, the educational experiences of haafu children have not been adequately investigated by researchers or addressed in government education policy. Central to these arguments is the concern that haafu children occupy a liminal space and hence are potentially educationally "at risk." They are generally viewed as Japanese because of their nationality and are expected to perform like the majority of Japanese students with two Japanese parents, given their familiarity with Japanese culture. In practice, however, there is a paradox: haafu students may be marginalized as a consequence of being viewed as not Japanese enough. In this context, how should public education respond to an increasingly culturally diverse student body? This paper argues that public education, its policy, and its practices need to more effectively recognize, represent, and redistribute resources (the three dimensions of social justice as Fraser frames them) in support of these students.

14.
Eva Murillo & Marta Casla, Infancy, 2021, 26(1), 104-122
The aim of this study was to analyze the use of representational gestures from a multimodal point of view in the transition from one-word to multi-word constructions. Twenty-one Spanish-speaking children were observed longitudinally at 18, 21, 24, and 30 months of age. We analyzed the production of deictic, symbolic, and conventional gestures and their coordination with different verbal elements. Moreover, we explored the relationship between gestural multimodal and unimodal productions and independent measures of language development. Results showed that gesture production remains stable in the period studied. Whereas deictic gestures are frequent and mostly multimodal from the beginning, conventional gestures are rare and mainly unimodal. Symbolic gestures are initially unimodal, but between 24 and 30 months of age, this pattern reverses, with more multimodal symbolic gestures than unimodal. In addition, the frequency of multimodal representational gestures at specific ages seems to be positively related to independent measures of vocabulary and morphosyntax development. By contrast, the production of unimodal representational gestures appears negatively related to these measures. Our results suggest that multimodal representational gestures could have a facilitating role in the process of learning to combine meanings for communicative goals.

15.
This paper describes the use of video to explore cultural differences in gestures. Video recordings were used to capture a large sample of international gestures, which were edited into a documentary video, A World of Gestures: Culture and Nonverbal Communication. This paper describes the approach and methodology used and examines a number of specific questions: Are there universally understood hand gestures? Are there universal categories of gestures, that is, universal messages with unique instances in each society? Can the exact same gesture have opposite meanings in two cultures? Can individuals articulate and explain the gestures common in their culture? How can video methods provide "visual replication" of nuanced behaviors such as gestures? Are there gender differences in knowing or performing gestures? And finally, is global diversity collapsing toward Western gestural forms under the onslaught of cultural imperialism? The research findings suggest that there are both cultural "differences" and cultural "meta-differences": more profound differences involving deeply embedded categories of meaning that make cultures unique.

16.
17.
18.
Laura L. Namy, Infancy, 2001, 2(1), 73-86
Infants begin acquiring object labels as early as 12 months of age. Recent research has indicated that the ability to acquire object names extends beyond verbal labels to other symbolic forms, such as gestures. This experiment examines the latitude of infants' early naming abilities. We tested 17-month-olds' ability to map gestures, nonverbal sounds, and pictograms to object categories using a forced-choice triad task. Results indicated that infants accept a wide range of symbolic forms as object names when they are embedded in familiar referential naming routines. These data suggest that infants may initially have no priority for words over other symbolic forms as object names, although the relative status of words appears to change with development. The implications of these findings for the development of criteria for determining whether a symbol constitutes an object name early in development are considered.

19.
20.
A persistent question in the deception literature has been the extent to which nonverbal behaviors can reliably distinguish between truth and deception. It has been argued that deception instigates cognitive load and arousal that are betrayed through visible nonverbal indicators. Yet empirical evidence has often failed to find statistically significant or strong relationships. Given that interpersonal message production is characterized by a high degree of simultaneous and serial patterning among multiple behaviors, it may be that patterns of behaviors are more diagnostic of veracity. Or it may be that the theorized linkage between internal states of arousal, cognitive taxation, and efforts to control behavior, on the one hand, and nonverbal behaviors, on the other, is wrong. The current investigation addressed these possibilities by applying a software program called THEME to analyze the patterns of kinesic movements (adaptor gestures, illustrator gestures, and speaker and listener head movements) rated by trained coders for participants in a mock crime experiment. Our multifaceted analysis revealed that the quantity and quality of patterns distinguish truths from untruths. Quantitative and qualitative analyses conducted by case and condition revealed high variability in the types and complexities of the patterns produced and differences between truthful and deceptive respondents questioned about a theft. Patterns incorporating adaptor and illustrator gestures were correlated in counterintuitive ways with arousal, cognitive load, and behavioral control, and qualitative analyses produced unique insights into truthful and untruthful communication.

