Similar Articles
20 similar articles found (search time: 78 ms)
1.
Older adults tend to perform worse on emotion perception tasks compared to younger adults. How this age difference relates to other interpersonal perception tasks and conversation ability remains an open question. In the present study, we assessed 32 younger and 30 older adults’ accuracy when perceiving (1) static facial expressions, (2) emotions, attitudes, and intentions from videos, and (3) interpersonal constructs (e.g., kinship). Participants’ conversation ability was rated by coders from a videotaped, dyadic problem-solving task. Younger adults were more accurate than older adults at perceiving some but not all emotions. No age differences in accuracy were found on any other perception task or in conversation ability. Some but not all of the interpersonal perception tasks were related. None of the perception tasks predicted conversation ability. Thus, although the literature suggests a robust age difference in emotion perception accuracy, this difference does not seem to transfer to other interpersonal perception tasks or interpersonal outcomes.

2.
Recent research has demonstrated that preschool children can decode emotional meaning in expressive body movement; however, to date, no research has considered preschool children's ability to encode emotional meaning in this medium. The current study investigated 4- (N = 23) and 5- (N = 24) year-old children's ability to encode the emotional meaning of an accompanying music segment by moving a teddy bear using previously modeled expressive movements to indicate one of four target emotions (happiness, sadness, anger, or fear). Adult judges visually categorized the silent videotaped expressive movement performances by children of both ages with greater than chance level accuracy. In addition, accuracy in categorizing the emotion being expressed varied as a function of age of child and emotion. A subsequent cue analysis revealed that children as young as 4 years old were systematically varying their expressive movements with respect to force, rotation, shifts in movement pattern, tempo, and upward movement in the process of emotional communication. The theoretical significance of such encoding ability is discussed with respect to children's nonverbal skills and the communication of emotion.

3.
Despite known differences in the acoustic properties of children’s and adults’ voices, no work to date has examined the vocal cues associated with emotional prosody in youth. The current study investigated whether child (n = 24, 17 female, aged 9–15) and adult (n = 30, 15 female, aged 18–63) actors differed in the vocal cues underlying their portrayals of basic emotions (anger, disgust, fear, happiness, sadness) and social expressions (meanness, friendliness). We also compared the acoustic characteristics of meanness and friendliness to comparable basic emotions. The pattern of distinctions between expressions varied as a function of age for voice quality and mean pitch. Specifically, adults’ portrayals of the various expressions were more distinct in mean pitch than children’s, whereas children’s representations differed more in voice quality than adults’. Given the importance of pitch variables for the interpretation of a speaker’s intended emotion, expressions generated by adults may thus be easier for listeners to decode than those of children. Moreover, the vocal cues associated with the social expressions of meanness and friendliness were distinct from those of basic emotions like anger and happiness respectively. Overall, our findings highlight marked differences in the ways in which adults and children convey socio-emotional expressions vocally, and expand our understanding of the communication of paralanguage in social contexts. Implications for the literature on emotion recognition are discussed.

4.
The present study was designed to assess the contribution of general features of gross body movements to the attribution of emotions. Eighty-five adult subjects were shown ninety-six videotaped body movements, performed by three actors. Each movement was determined by seven general dimensions: trunk movement, arm movement, vertical direction, sagittal direction, force, velocity and directness. Using rating scales, the subjects judged the compatibility of each movement with each of twelve emotion categories. The results showed which movement features predicted particular ratings. Emotion categories differed as to the amount, type, and weights of predicting movement features. Three factors were extracted from the original ratings and interpreted as Rejection-Acceptance, Withdrawal-Approach, and Preparation-Defeatedness.

I wish to thank Jacques van Meel for his stimulating contribution to this study, and the reviewers for their comments on earlier drafts of the final report.

5.
There is consistent evidence that older adults have difficulties in perceiving emotions. However, emotion perception measures to date have focused on one particular type of assessment: using standard photographs of facial expressions posing six basic emotions. We argue that it is important in future research to explore adult age differences in understanding more complex, social and blended emotions. Using stimuli which are dynamic records of the emotions expressed by people of all ages, and the use of genuine rather than posed emotions, would also improve the ecological validity of future research into age differences in emotion perception. Important questions remain about possible links between difficulties in perceiving emotional signals and the implications that this has for the everyday interpersonal functioning of older adults.

6.
The facial expressions of emotion and the circumstances under which the expressions occurred in a sample of the most popular children's television programs were investigated in this study. Fifteen-second randomly selected intervals from episodes of five television programs were analyzed for displays of happiness, sadness, anger, fear, disgust, and surprise. In addition, the contexts in which the emotions occurred were examined. Results indicated that particular emotional expressions occurred at significantly different frequencies and that there was an association between emotional displays and emotion-contexts. The high rate of emotional displays found in television shows has implications for the development of knowledge regarding emotional display rules in viewers.

We are grateful to Sharon Galligan for assistance in coding part of the data and to Carolyn Saarni and Amy Halberstadt for helpful comments on an earlier draft of this paper. This research was supported in part by a grant from the National Institute of Disabilities and Rehabilitation Research, #GOO85351. The opinions expressed herein do not necessarily reflect the position or policy of the U.S. Department of Education.

7.
Human body postures provide perceptual cues that can be used to discriminate and recognize emotions. It was previously found that 7-month-olds’ fixation patterns discriminated fear from other emotion body expressions, but it is not clear whether they also process the emotional content of those expressions. The emotional content of visual stimuli can increase arousal level, resulting in pupil dilations. To provide evidence that infants also process the emotional content of expressions, we analyzed variations in pupil size in response to emotion stimuli. Forty-eight 7-month-old infants viewed adult body postures expressing anger, fear, happiness and neutral expressions, while their pupil size was measured. There was a significant emotion effect between 1040 and 1640 ms after image onset, when fear elicited larger pupil dilations than neutral expressions. A similar trend was found for anger expressions. Our results suggest that infants have increased arousal to negative-valence body expressions. Thus, in combination with previous fixation results, the pupil data show that infants as young as 7 months can perceptually discriminate static body expressions and process the emotional content of those expressions. The results extend information about infant processing of emotion expressions conveyed through other means (e.g., faces).

8.
Young (M = 23 years) and older (M = 77 years) adults' interpretation and memory for the emotional content of spoken discourse was examined in an experiment using short, videotaped scenes of two young actresses talking to each other about emotionally laden events. Emotional nonverbal information (prosody or facial expressions) was conveyed at the end of each scene at low, medium, and high intensities. Nonverbal information indicating anger, happiness, or fear conflicted with the verbal information. Older adults' ability to differentiate levels of emotional intensity (for happiness and anger) was not as strong as younger adults'. An incidental memory task revealed that older adults, more often than younger adults, reconstruct what people state verbally to coincide with the meaning of the nonverbal content, if the nonverbal content is conveyed through facial expressions. A second experiment with older participants showed that the high level of memory reconstructions favoring the nonverbal interpretation was maintained when the ages of the participants and actresses were matched, and when the nonverbal content was conveyed both through prosody and facial expressions.

9.
Women were videotaped while they spoke about a positive and a negative experience either in the presence of an experimenter or alone. They gave self-reports of their emotional experience, and the videotapes were rated for facial and verbal expression of emotion. Participants spoke less about their emotions when the experimenter (E) was present. When E was present, during positive disclosures they smiled more, but in negative disclosures they showed less negative and more positive expression. Facial behavior was only related to experienced emotion during positive disclosure when alone. Verbal behavior was related to experienced emotion for positive and negative disclosures when alone. These results show that verbal and nonverbal behaviors, and their relationship with emotional experience, depend on the type of emotion, the nature of the emotional event, and the social context.

10.
Marriage & Family Review, 2013, 49(3-4), 182-212
SUMMARY

According to Tomkins' (1991) theory on the socialization of emotion, young children's emotional and social competence are influenced by others' reactions to the children's emotions. Patterns of parental reactions to emotions have been shown to account for significant variance in preschoolers' emotion and social competence. However, the impact of others significant in the preschooler's life has been largely ignored. To help fill this gap, associations were examined between older siblings' reactions to 41 preschoolers' emotions and the preschoolers' social-emotional competence (i.e., affective balance, emotion knowledge, positive, prosocial, and provocative responding to peers' emotions, sociometric likability, and teacher-rated social competence). Using a multiple regression strategy, the contributions of sibling reactions and moderating demographic variables to preschooler emotional and social competence were evaluated. Certain sibling reactions, especially positive emotional responsiveness, were shown to play important roles. Many predictions were moderated by age of child and sex of one dyad member.

11.
The aim of this study was to examine the effect of two basic emotions, happiness and sadness, on dance movement. A total of 32 adult participants were induced to feel emotional states of either happiness or sadness and then danced intuitively to an emotionally ‘neutral’ piece of music, composed specifically for the experiment. Based on an Effort-Shape analysis of body movement, full body movement was captured and seven different movement cues were examined, in order to explore whether differences in corporeal articulations between the happy and sad condition existed. Results revealed that in the happy condition, participants moved faster, with more acceleration, and made more expanded and more impulsive movements than in the sad condition. Results are discussed with respect to possible consequences for future research on human movement.

12.
A method for studying emotional expression using posthypnotic suggestion to induce emotional states is presented. Subjects were videotaped while role-playing happiness, sadness, or anger, or after hypnotic induction of one of these three emotions. Judges then rated these videotapes for emotional tone. Results indicated a main effect for emotion expressed, with happiness and sadness more easily identified by judges than anger. Accuracy was also greater for happiness and sadness in the hypnotically induced condition. However, role-played anger was more easily identified than hypnotically induced anger. An interaction of channel (body/face) and emotion indicated that identification of sadness and anger was easier for judges when the body alone was shown. Findings are discussed in terms of display rules for the expression of emotion.

We gratefully acknowledge Irving Kirsch, Ross Buck, and Paul Ekman for their helpful comments on a draft of this article. Special thanks to Reuben Baron, without whose support neither this article nor our careers in psychology would have been possible. A preliminary report of this study was presented at the meeting of the American Psychological Association in Toronto, August 1984.

13.
Whether recognition of emotion from facial expression is affected by distortions of pictorial quality has rarely been tested, with the exception of the influence of picture size on emotion recognition. On the other hand, this question is important for (low-cost) tele-communication and tele-conferencing. Here an attempt is made to study whether emotion recognition from facial expression is impaired when video stimuli are distorted both with respect to spatial (pixel) resolution and with respect to temporal resolution (refresh rate). Fifty-six stimuli, in which professional actors encoded 14 different emotions, were presented to groups of judges (N = 10 in each condition) in six different distortion conditions. Furthermore, judges were confronted with a control condition, presenting the non-distorted stimuli. Channel of information (close-up of the face versus full body recording) was in addition manipulated. Results indicate (besides main effects for type of emotion encoded and for channel) that emotion recognition is impaired by reductions of both spatial resolution and temporal resolution, but that even very low spatial and temporal resolutions result in recognition rates which are still considerably above chance expectation. Results are discussed with respect to the importance of facial expression and body movement in communicating emotions, and with respect to applied aspects concerning tele-communication.

14.
It was tested whether professional actors would be able to communicate emotional meaning via facial expression when this is presented to judges without context information. Forty takes were selected from movies, depicting a female or a male actor in close-up showing facial expression. These close-ups were selected by two expert judges, who knew the complete movie and had to agree on the emotion expressed or expected to be expressed by the female or male actor in the respective take. Five takes each were selected to represent the basic emotions of joy, sadness, fear, and anger. Twenty takes each were selected showing female and male actors. These 40 takes (edited in random order onto videotape) were presented to 90 judges (about half female and half male; about half younger pupils and about half older ones), whose task it was to judge the emotion(s) expressed on nine 5-point emotion scales. Results indicated that female actors are somewhat better (though not statistically significantly) in communicating emotion via facial expression without context than male actors are. Furthermore, significant interactions between portrayed emotion and gender of actors were found: while recognition rate for joy is very high and identical for both genders, female actors seem to be superior in communicating fear and sadness via facial expression, while male actors are more successful in communicating anger. A display rule approach seems to be appropriate for the interpretation of these results, indicating that gender-specific attempts to control/inhibit certain emotions do not only exist in real life, but also in movies.

The research reported here was supported by a grant of the Deutsche Forschungsgemeinschaft (WA 519/2-2). I thank Uwe Balser and Tina Mertesacker for their help in selecting the stimuli, preparing the judgment tapes, recruiting subjects and conducting the study.

15.
Do infants show distinct negative facial expressions for different negative emotions? To address this question, European American, Chinese, and Japanese 11-month-olds were videotaped during procedures designed to elicit mild anger or frustration and fear. Facial behavior was coded using Baby FACS, an anatomically based scoring system. Infants' nonfacial behavior differed across procedures, suggesting that the target emotions were successfully elicited. However, evidence for distinct emotion-specific facial configurations corresponding to fear versus anger was not obtained. Although facial responses were largely similar across cultures, some differences also were observed. Results are discussed in terms of functionalist and dynamical systems approaches to emotion and emotional expression.

16.
Social anxiety may be related to a different pattern of facial mimicry and contagion of others’ emotions. We report two studies in which participants with different levels of social anxiety reacted to others’ emotional displays, either shown on a computer screen (Study 1) or in an actual social interaction (Study 2). Study 1 examined facial mimicry and emotional contagion in response to displays of happiness, anger, fear, and contempt. Participants mimicked negative and positive emotions to some extent, but we found no relation between mimicry and the social anxiety level of the participants. Furthermore, socially anxious individuals were more prone to experience negative emotions and felt more irritated in response to negative emotion displays. In Study 2, we found that social anxiety was related to enhanced mimicry of smiling, but this was only the case for polite smiles and not for enjoyment smiles. These results suggest that socially anxious individuals tend to catch negative emotions from others, but suppress their expression by mimicking positive displays. This may be explained by the tendency of socially anxious individuals to avoid conflict or rejection.

17.
This article outlines an emotional achievement perspective for the study of emotions in social movements. Following Denzin's work on emotions, I consider emotions as self-feelings that are situated, interactional, and temporal in nature. The concept of emotions as achievement complements Hochschild's emotion management perspective. While management focuses on control, achievement emphasizes articulation and creativity. I argue that, although individuals may be compelled to suppress feelings in the organizational context, different social contexts and practices make it possible for individuals to pursue emotional fulfillment and self-realization. In social movements, the process of emotional achievement among participants unfolds as a process of mobilization. An analysis of the emotional dynamics of the 1989 Chinese student movement shows that emotions were inextricably intertwined with identities and action and that the emotional dynamics generated in this process significantly contributed to movement mobilization. The article concludes with a discussion of the theoretical contributions of the emotional achievement perspective.

18.
We report on what, to our knowledge, represents the first study of nonverbal emotional behavior in crowded public places combining naturalistic videotaping of situated activity, objective coding of facial movement, and sequential analysis of behavior. In the first part of the study we argue that passengers do not lose emotional sensitivity to physical contact as density (passengers per square meter) increases, which indicates that physical contact is experienced as a territorial intrusion regardless of crowdedness. In the second part of the study, we suggest that passengers resolve the emotions due to intrusive physical contacts through two interactional strategies involving facial movements usually interpreted as “expressions of emotions.” Since proxemic violations seem to represent a pervasive emotion elicitor, the protocol can be extended to other means of transportation and replicated in other locations. We conclude that the methodology provides an effective tool for theory-building in the study of nonverbal emotional behavior.

19.
This research utilized a novel methodology to explore the relative salience of facial cues to age, sex, race, and emotion in differentiating faces. Inspired by the Stroop interference effect, participants viewed pairs of schematic faces on a computer that differed simultaneously along two facial dimensions (e.g., race and age) and were prompted to make similarity judgments about the faces along one of the dimensions (e.g., race). On a second round of trials, judgments were made along the other dimension (e.g., age). Analysis of response speed and accuracy revealed that participants judged the race of the faces more quickly and with fewer errors compared to their age, gender, or emotional expression. Methodological and theoretical implications for studying the role of facial cues in social perception are discussed.

20.
The present study examined preschoolers' and adults' ability to identify and label the emotions of happiness, sadness, and anger when presented through either the face channel alone, the voice channel alone, or the face and voice channels together. Subjects were also asked to rate the intensity of the expression. The results revealed that children aged three to five years are able to accurately identify and label the emotions of happiness, sadness, and anger regardless of channel presentation. Similar results were obtained for the adult group. While younger children (33 to 53 months of age) were equally accurate in identifying the three emotions, older children (54 to 68 months of age) and adults made more incorrect responses when identifying expressions of sadness. Intensity ratings also differed according to the age of the subject and the emotion being rated.

Support for this research was provided by a grant from the National Science Foundation (#01523721) to Nathan A. Fox. The authors would like to thank Professor A. Caron for providing the original videotape, Joyce Dinsmoor for her help in data collection, and the staff of the Center for Young Children for their cooperation.
