Similar Literature (20 results)
1.
The present study examined effects of temporarily salient and chronic self-construal on decoding accuracy for positive and negative facial expressions of emotion. We primed independent and interdependent self-construal in a sample of participants who then rated the emotion expressions of a central character (target) in a cartoon showing a happy, sad, angry, or neutral facial expression in a group setting. Primed interdependence was associated with lower recognition accuracy for negative emotion expressions. Primed and chronic self-construal interacted such that for interdependence-primed participants, higher chronic interdependence was associated with lower decoding accuracy for negative emotion expressions. Chronic independent self-construal was associated with higher decoding accuracy for negative emotion. These findings add to a growing literature that highlights the significance of perceivers' socio-cultural factors, self-construal in particular, for emotion perception.

2.
To examine children's ability to control their affective expression facially, 68 second- and fourth-grade boys and girls were unobtrusively videotaped while discussing six self-chosen activities about which they felt positively, neutrally, or negatively. Children then performed three facial management tasks: (a) inhibition (showing no emotion instead of a felt emotion); (b) simulation (showing an emotion when not feeling anything); and (c) masking (showing an emotion different from what is felt). Twelve raters judged the edited tapes of the children's performance for affect positivity and deceptiveness. Repeated measures ANOVAs indicated, in contrast to previous research, that children were highly competent in managing facial displays. To understand children's techniques for managing affective displays, two raters categorized the primary cognitive strategies children used. Results showed that fourth graders used more complex strategies more often than second graders. These results highlight children's skill and strategies in affect management.

Funding for this project was provided by a NICHHD grant (#HD22367) to the first author. We gratefully acknowledge the assistance of Nancy A. Ballard and Michael G. Rakouskas in data collection and preparation. We also thank the children whose participation and cooperation made this research possible.

3.
The Intensity of Emotional Facial Expressions and Decoding Accuracy
The influence of the physical intensity of emotional facial expressions on perceived intensity and emotion category decoding accuracy was assessed for expressions of anger, disgust, sadness, and happiness. The facial expressions of two men and two women posing each of the four emotions were used as stimuli. Six different levels of intensity of expression were created for each pose using a graphics morphing program. Twelve men and 12 women rated each of the 96 stimuli for perceived intensity of the underlying emotion and for the qualitative nature of the emotion expressed. The results revealed that perceived intensity varied linearly with the manipulated physical intensity of the expression. Emotion category decoding accuracy varied largely linearly with the manipulated physical intensity of the expression for expressions of anger, disgust, and sadness. For the happiness expressions only, the findings were consistent with a categorical judgment process. Sex of encoder produced significant effects for both dependent measures. These effects remained even after possible gender differences in encoding were controlled for, suggesting a perceptual bias on the part of the decoders.

4.
The capacity to engage with one’s child in a reciprocally responsive way is an important element of successful and rewarding parent–child conversations, which are common contexts for emotion socialization. The degree to which a parent–child dyad shows a mutually responsive orientation presumably depends on both individuals’ socio-emotional skills. For example, one or both members of a dyad needs to be able to accurately interpret and respond to the other’s nonverbal cues, such as facial expressions, to facilitate mutually responsive interactions. Little research, however, has examined whether and how mother and/or child facial expression decoding skill relates to dyads’ emotional mutuality during conversations. We thus examined associations between both mother and child facial expression decoding skill and observed emotional mutuality during parent-preschooler conversations about happy child memories. Results lend support to our hypotheses by suggesting that both mother and child capacities to read others’ emotional cues make distinct contributions to parent–child emotional mutuality in the context of reminiscing conversations. Specifically, mothers’ accurate decoding of child facial expressions predicted maternal displays of positive affect and interest, while children’s accurate decoding of adult facial expressions predicted dyadic displays of mutual enjoyment. Contrary to our hypotheses, however, parent/child facial expression decoding skills did not interact to predict observed mutual responsiveness. These findings underscore the importance of attending to both parent and child contributions to successful dyadic interactions that facilitate effective emotion socialization.

5.
Successful social interaction depends on the ability to decode emotion in the nonverbal behaviors of others. Because relationship difficulties are paramount in adolescents with schizotypal personality disorder (SPD), we predicted that adolescents with SPD would (1) make more emotion decoding errors than adolescents with other personality disorders (OPD) or non-psychiatric controls (NPC); and (2) exhibit more social and thought problems than OPD or NPC adolescents. Further, we predicted greater emotion decoding errors for all adolescents would relate to concurrent and future social problems, thought problems, and social reasoning deficits. SPD adolescents made more errors than OPD and NPC adolescents in decoding voices but not faces (except in specific emotion categories). For all adolescents, vocal errors correlated with greater social problems, and facial and vocal errors correlated with greater thought difficulties concurrently and a year later.

6.
A procedure for improving children's skill in decoding facial expressions of emotion was studied in this experiment. In the first phase of the study, thirty-six fifth- and sixth-grade children watched video segments showing facial expressions of stimulus persons experiencing happiness, sadness, or fear and tried to identify each stimulus person's emotion. Subjects assigned to the feedback condition were given the correct answer for each segment, and subjects assigned to the no-feedback condition received no information. Results for the second phase of the experiment, in which subjects' decoding skills were assessed, showed that the feedback method was effective in improving general decoding abilities. Furthermore, differences between subjects in the feedback and no-feedback conditions were affected by subjects' sex and the specific emotion being decoded.

Portions of this study were presented at the annual meeting of the Eastern Psychological Association, Boston, April, 1989. This study was funded by a grant from the Marks Meadow Research Foundation, as well as through ongoing support from the National Institute of Disability and Rehabilitation Research, U.S. Department of Education, to the second author.

7.
There is consistent evidence that older adults have difficulties in perceiving emotions. However, emotion perception measures to date have focused on one particular type of assessment: using standard photographs of facial expressions posing six basic emotions. We argue that it is important in future research to explore adult age differences in understanding more complex, social and blended emotions. Using stimuli which are dynamic records of the emotions expressed by people of all ages, and the use of genuine rather than posed emotions, would also improve the ecological validity of future research into age differences in emotion perception. Important questions remain about possible links between difficulties in perceiving emotional signals and the implications that this has for the everyday interpersonal functioning of older adults.

8.
The extent to which distress of the female model contributed to the erotic value of sexually explicit photographs of women in bondage was studied for a sample of 54 young-adult college males. In addition, subjects were categorized by level of antisociality and level of facial-decoding skill with the prediction that the erotic value of a model in distress would be greatest for subjects departing most from social values (antisociality) and most capable of recognizing emotions as facially displayed by another person (facial decoding). There was an overall sadism effect. Most of the men reported pictures depicting a distressed model in bondage to be more sexually stimulating than pictures in which the female model displayed positive affect. The erotic value of distressed females in bondage was greatest when subjects combined greater antisociality and better facial-decoding skill.

9.
The relation between knowledge of American Sign Language (ASL) and the ability to decode facial expressions of emotion was explored in this study. Subjects were 60 college students, half of whom were intermediate-level students of ASL and half of whom had no exposure to a signed language. Subjects viewed and judged silent video segments of stimulus persons experiencing spontaneous emotional reactions representing either happiness, sadness, anger, disgust, or fear/surprise. Results indicated that hearing subjects knowledgeable in ASL were generally better than hearing non-signers at identifying facial expressions of emotion, although there were variations in decoding accuracy regarding the specific emotion being judged. In addition, females were more successful decoders than males. Results have implications for better understanding the nature of nonverbal communication in deaf and hearing individuals.

We are grateful to Karl Scheibe for comments on an earlier version of this paper and to Erik Coats for statistical analysis. This study was conducted as part of a Senior Honors thesis at Wesleyan University.

10.
This study examined preschool children's decoding and encoding of facial emotions and gestures, interrelationships between these skills, and the relationship between these skills and children's popularity. Subjects were 34 preschoolers (eighteen 4-year-olds, sixteen 5-year-olds), with an equal number of boys and girls. Children's nonverbal skill was measured on four tasks: decoding emotions, decoding gestures, encoding facial emotions, and encoding gestures. Children's popularity was measured by teacher ratings. Analyses revealed the following major findings: (a) There were no age or gender effects on performance on any of the tasks. (b) Children performed better on decoding than encoding tasks, suggesting that nonverbal comprehension precedes production. Also, children appeared better at facial emotion skills than gesture skills. There were significant correlations between decoding and encoding gestures, and between encoding gestures and encoding emotions. (c) Multiple regression analyses indicated that encoding emotions and decoding gestures were marginally predictive of popularity. In addition, when children's scores on the four tasks were combined via z-score transformations, children's aggregate nonverbal skill correlated significantly with peer popularity.

Portions of this paper were presented at the meeting of the American Psychological Society, San Diego, CA, June, 1992. We thank the Child Study Center of Wellesley College, Janine Jarrell, Jennifer Mascola, and David Mills for their cooperation, and Carlene Nelson, Mark Runco, and Ed Stearns for statistical support. We also appreciate the valuable suggestions from Robin Akert, Annick Mansfield, the anonymous reviewers, and especially the guest editor.

11.
Lipps (1907) presented a model of empathy which had an important influence on later formulations. According to Lipps, individuals tend to mimic an interaction partner's behavior, and this nonverbal mimicry induces, via a feedback process, the corresponding affective state in the observer. The resulting shared affect is believed to foster the understanding of the observed person's self. The present study tested this model in the context of judgments of emotional facial expressions. The results confirm that individuals mimic emotional facial expressions, and that the decoding of facial expressions is accompanied by shared affect. However, no evidence was found that emotion recognition accuracy or shared affect are mediated by mimicry. Yet, voluntary mimicry was found to have some limited influence on the observer's assessment of the observed person's personality. The implications of these results with regard to Lipps' original hypothesis are discussed.

12.
This study was designed to investigate the potential association between social anxiety and children's ability to decode nonverbal emotional cues. Participants were 62 children between 8 and 10 years of age, who completed self-report measures of social anxiety, depressive symptomatology, and nonspecific anxious symptomatology, as well as nonverbal decoding tasks assessing accuracy at identifying emotion in facial expressions and vocal tones. Data were analyzed with multiple regression analyses controlling for generalized cognitive ability, and nonspecific anxious and depressive symptomatology. Results provided partial support for the hypothesis that social anxiety would relate to nonverbal decoding accuracy. Difficulty identifying emotions conveyed in children's and adults' voices was associated with general social avoidance and distress. At higher levels of social anxiety, children more frequently mislabeled fearful voices as sad. Possible explanations for the obtained results are explored.

13.
Twenty-five high-functioning, verbal children and adolescents with autism spectrum disorders (ASD; age range 8–15 years) who demonstrated a facial emotion recognition deficit were block randomized to an active intervention (n = 12) or waitlist control (n = 13) group. The intervention was a modification of a commercially available, computerized, dynamic facial emotion training tool, the MiX by Humintell©. Modifications were introduced to address the special learning needs of individuals with ASD and to address limitations in current emotion recognition programs. Modifications included coach assistance, didactic instruction for seven basic emotions, scaffolded instruction with repeated practice at increased presentation speeds, guided attention to relevant facial cues, and imitation of expressions. Training occurred twice each week for 45–60 min across an average of six sessions. Outcome measures were administered prior to and immediately after treatment, as well as after a delay period of 4–6 weeks. Outcome measures included (a) direct assessment of facial emotion recognition, (b) emotion self-expression, and (c) generalization through emotion awareness in videos and stories, use of emotion words, and self-, parent-, and teacher-report on social functioning questionnaires. The facial emotion training program enabled children and adolescents with ASD to more accurately and quickly identify feelings in facial expressions with stimuli from both the training tool and generalization measures, and to demonstrate improved self-expression of facial emotion.

14.
This study examined age and gender differences in decoding nonverbal cues in a school population of 606 (pre)adolescents (9–15 years). The focus was on differences in the perceived intensity of several emotions in both basic and non-basic facial expressions. Age differences were found in decoding low intensity and ambiguous faces, but not in basic expressions. Older adolescents indicated more negative meaning in these more subtle and complex facial cues. Girls attributed more anger to both basic and non-basic facial expressions and showed a general negative bias in decoding non-basic facial expressions compared to boys. Findings are interpreted in the light of the development of emotion regulation and the importance for developing relationships.

15.
We report data concerning cross-cultural judgments of emotion in spontaneously produced facial expressions. Americans, Japanese, British, and International Students in the US reliably attributed emotions to the expressions of Olympic judo athletes at the end of a match for a medal, and at two times during the subsequent medal ceremonies. There were some observer culture differences in absolute attribution agreement rates, but high cross-cultural agreement in differences in attribution rates across expressions (relative agreement rates). Moreover, we operationalized signal clarity and demonstrated that it was associated with agreement rates similarly in all cultures. Finally, we obtained judgments of won-lost match outcomes and medal finish, and demonstrated that the emotion judgments were associated with accuracy in judgments of outcomes. These findings demonstrated that members of different cultures reliably judge spontaneously expressed emotions, and that across observer cultures, lower absolute agreement rates are related to noise produced by non-emotional facial behaviors. Also, the findings suggested that observers of different cultures utilize the same facial cues when judging emotions, and that the signal value of facial expressions is similar across cultures.

16.
The relationship between an individual's habitual, emotional dispositions or tendencies (an aspect of personality) and his ability to recognize facially expressed emotions was explored. Previous studies have used global scores of recognition accuracy across several emotions, but failed to examine the relationship between emotion traits and recognition of specific emotion expressions. In the present study, these more specific relationships were examined. Results indicated that females with an inhibited-non-assertive personality style tended to have poorer emotion recognition scores than more socially oriented females. In contrast, for males, the relationship between traits and recognition scores was much more emotion specific: Emotion traits were not significantly related to a global measure of recognition accuracy but were related to recognition rates of certain emotion expressions — sadness, anger, fear, surprise, and disgust. For most of the emotions, males appeared to be more likely to see what they feel. Possible explanations of the findings in terms of perceptual set and other mechanisms are discussed. Implications for clinical studies and research are noted. The study also highlights the importance of separate analysis of data for specific emotions, as well as for sex.

17.
Several studies have already documented how Americans and Japanese differ in both the expression and perception of facial expressions of emotion in general, and of smiles in particular. These cultural differences can be linked to differences in cultural display and decoding rules (Ekman, 1972; and Buck, 1984, respectively). The existence of these types of rules suggests that people of different cultures may hold different assumptions about social-personality characteristics, on the basis of smiling versus non-smiling faces. We suggest that Americans have come to associate more positive characteristics to smiling faces than do the Japanese. We tested this possibility by presenting American and Japanese judges with smiles or neutral faces (i.e., faces with no muscle movement) depicted by both Caucasian and Japanese male and female posers. The judges made scalar ratings of each face they viewed on four different dimensions. The findings did indicate that Americans and Japanese differed in their judgments, but not on all dimensions.

David Matsumoto was supported in part by a research grant from the National Institute of Mental Health (MH 42749-01), and from a Faculty Award for Creativity, Scholarship, and Research from San Francisco State University. We would like to thank Masami Kobayashi, Fazilet Kasri, Deborah Krupp, Bill Roberts, and Michelle Weissman for their aid in our research program on emotion. We would especially like to thank the Editor for her excellent suggestions and help in conceptualizing this research.

18.
The present study was designed to determine whether the technique used to control the semantic content of emotional communications might influence the results of research on the effects of gender, age, and particular affects on accuracy of decoding tone of voice. Male and female college and elementary school students decoded a 48-item audio tape-recording of emotional expressions encoded by two children and two college students. Six emotions (anger, fear, happiness, jealousy, pride, and sadness) were expressed in two types of content-standard messages, namely letters of the alphabet and an affectively neutral sentence. The results of the study indicate that different methods for controlling content can indeed influence the results of studies of determinants of decoding performance. Overall, subjects demonstrated greater accuracy when decoding emotions expressed in the standard sentence than when decoding emotions embedded in letters of the alphabet. A technique by emotion interaction, however, revealed that this was especially true for the purer emotions of anger, fear, happiness, and sadness. Subjects identified the less pure emotions of jealousy and pride relatively more accurately when these emotions were embedded in the alphabet technique. The implications of these results for research concerning the vocal communication of affect are briefly discussed.

Preparation of this article was supported in part by the National Science Foundation.

19.
Women and blacks are more likely than men and whites to use prayer to manage negative emotions such as anger. However, the pathways explaining these associations are not fully understood. Using data from the 1996 General Social Survey’s emotion module, we evaluate four potential mechanisms that might account for these associations: women’s and blacks’ relatively high levels of religious participation, relatively low socioeconomic status, extended duration of their negative emotional experiences, and relatively lower perceived control. Women’s and blacks’ higher likelihood of using prayer to manage anger is partially accounted for by their higher levels of religious participation, lower socioeconomic status, and duration of anger. Lower levels of perceived control contribute only to blacks’ use of prayer to manage anger. Our findings highlight the importance of identifying pathways that explain why particular social groups use particular emotion management strategies.

20.
The purpose of the present study was to investigate the relation between nonverbal decoding skills and relationship well-being. Sixty college students were administered tests of their abilities to identify the affective meanings in facial expressions and tones of voice. The students also completed self-report measures of relationship well-being and depression. Correlational analyses indicated that errors in decoding facial expressions and tones of voice were associated with less relationship well-being and greater depression. Hierarchical regression revealed that nonverbal decoding accuracy was significantly related to relationship well-being even after controlling for depression.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)