Similar Documents
A total of 20 similar documents were found.
1.
The perception of emotional facial expressions may activate corresponding facial muscles in the receiver, also referred to as facial mimicry. Facial mimicry is highly dependent on the context and type of facial expressions. While previous research almost exclusively investigated mimicry in response to pictures or videos of emotional expressions, studies with a real, face-to-face partner are still rare. Here we compared facial mimicry of angry, happy, and sad expressions and emotion recognition in a dyadic face-to-face setting. In sender-receiver dyads, we recorded facial electromyograms in parallel. Senders communicated to the receivers—with facial expressions only—the emotions felt during specific personal situations in the past, eliciting anger, happiness, or sadness. Receivers mimicked happiness the most, sadness to a lesser degree, and anger the least. In actor-partner interdependence models we showed that the receivers’ own facial activity influenced their ratings, which increased the agreement between the senders’ and receivers’ ratings for happiness, but not for angry and sad expressions. These results are in line with the Emotion Mimicry in Context View, which holds that humans mimic happy expressions according to affiliative intentions. The mimicry of sad expressions is less intense, presumably because it signals empathy and might imply personal costs. Direct anger expressions are mimicked the least, possibly because anger communicates threat and aggression. Taken together, we show that incidental facial mimicry in a face-to-face setting is positively related to recognition accuracy for non-stereotypical happy expressions, supporting the functionality of facial mimicry.
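As a rough illustration of the actor-partner interdependence modeling mentioned above, the sketch below fits a dyadic mixed-effects model in Python with statsmodels. The file name and columns (dyad, actor_emg, partner_emg, rating) are hypothetical placeholders, not the authors' materials.

```python
# A minimal APIM-style sketch: predict a receiver's emotion rating from
# their own facial EMG activity (actor effect) and the sender's activity
# (partner effect), with dyads as random groups to absorb the
# non-independence of dyad members. All names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("dyads.csv")  # hypothetical long-format data, one row per rating

model = smf.mixedlm("rating ~ actor_emg + partner_emg",
                    data=df, groups=df["dyad"])
result = model.fit()
print(result.summary())
```

In this framing, a significant actor_emg coefficient would correspond to the reported finding that receivers' own facial activity fed into their ratings.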

2.
We assessed the impact of social context on the judgment of emotional facial expressions as a function of self-construal and decoding rules. German and Greek participants rated spontaneous emotional faces shown either alone or surrounded by other faces with congruent or incongruent facial expressions. Greek participants were higher in interdependence than German participants. In line with cultural decoding rules, Greek participants rated anger expressions less intensely and sad and disgust expressions more intensely. Social context affected the ratings by both groups in different ways. In the more interdependent culture (Greece) participants perceived anger least intensely when the group showed neutral expressions, whereas sadness expressions were rated as most intense in the absence of social context. In the independent culture (Germany) a group context (others expressing anger or happiness) additionally amplified the perception of angry and happy expressions. In line with the notion that these effects are mediated by more holistic processing linked to higher interdependence, this difference disappeared when we controlled for interdependence on the individual level. The findings confirm the usefulness of considering both country level and individual level factors when studying cultural differences.

3.
The aim of the current study was to investigate the influence of happy and sad mood on facial muscular reactions to emotional facial expressions. Following film clips intended to induce happy and sad mood states, participants observed faces with happy, sad, angry, and neutral expressions while their facial muscular reactions were recorded electromyographically. Results revealed that after watching the happy clip participants showed congruent facial reactions to all emotional expressions, whereas watching the sad clip led to a general reduction of facial muscular reactions. Results are discussed with respect to the information processing style underlying the lack of mimicry in a sad mood state and also with respect to the consequences for social interactions and for embodiment theories.

4.
Facial expressions related to sadness are a universal signal of nonverbal communication. Although results of many psychology studies have shown that drooping of the lip corners, raising of the chin, and oblique eyebrow movements (a combination of inner brow raising and brow lowering) express sadness, no report has described a study elucidating facial expression characteristics under well-controlled circumstances with people actually experiencing the emotion of sadness itself. Therefore, spontaneous facial expressions associated with sadness remain unclear. We conducted this study to accumulate important findings related to spontaneous facial expressions of sadness. We recorded the spontaneous facial expressions of a group of participants as they experienced sadness during an emotion-elicitation task. This task required a participant to recall neutral and sad memories while listening to music. We subsequently conducted a detailed analysis of their sad and neutral expressions using the Facial Action Coding System. The prototypical facial expressions of sadness in earlier studies were not observed when people experienced sadness as an internal state under non-social circumstances. By contrast, they expressed tension around the mouth, which might function as a form of suppression. Furthermore, results show that parts of these facial actions are not only related to sad experiences but also to other emotional experiences such as disgust, fear, anger, and happiness. This study revealed the possibility that new facial expressions contribute to the experience of sadness as an internal state.

5.
The present study examined preschoolers' and adults' ability to identify and label the emotions of happiness, sadness, and anger when presented through either the face channel alone, the voice channel alone, or the face and voice channels together. Subjects were also asked to rate the intensity of the expression. The results revealed that children aged three to five years are able to accurately identify and label emotions of happy, sad, and angry regardless of channel presentation. Similar results were obtained for the adult group. While younger children (33 to 53 months of age) were equally accurate in identifying the three emotions, older children (54 to 68 months of age) and adults made more incorrect responses when identifying expressions of sadness. Intensity ratings also differed according to the age of the subject and the emotion being rated.Support for this research came from a grant from the National Science Foundation (#01523721) to Nathan A. Fox. The authors would like to thank Professor A. Caron for providing the original videotape, Joyce Dinsmoor for her help in data collection, and the staff of the Center for Young Children for their cooperation.

6.
A method for studying emotional expression using posthypnotic suggestion to induce emotional states is presented. Subjects were videotaped while role-playing happiness, sadness, or anger, or after hypnotic induction of one of these three emotions. Judges then rated these videotapes for emotional tone. Results indicated a main effect for emotion expressed, with happiness and sadness more easily identified by judges than anger. Accuracy was also greater for happiness and sadness in the hypnotically induced condition. However, role-played anger was more easily identified than hypnotically induced anger. An interaction of channel (body/face) and emotion indicated that identification of sadness and anger was easier for judges when the body alone was shown. Findings are discussed in terms of display rules for the expression of emotion.We gratefully acknowledge Irving Kirsch, Ross Buck, and Paul Ekman for their helpful comments on a draft of this article. Special thanks to Reuben Baron, without whose support neither this article nor our careers in psychology would have been possible.A preliminary report of this study was presented at the meeting of the American Psychological Association in Toronto, August 1984.

7.
The present study examined preschoolers' and adults' ability to identify and label the emotions of happiness, sadness, and anger when presented through either the face channel alone, the voice channel alone, or the face and voice channels together. Subjects were also asked to rate the intensity of the expression. The results revealed that children aged 3 to 5 years are able to accurately identify and label emotions of happy, sad, and angry regardless of channel presentation. Similar results were obtained for the adult group. While younger children (33 to 53 months of age) were equally accurate in identifying the three emotions, older children (54 to 68 months of age) and adults made more incorrect responses when identifying expressions of sadness. Intensity ratings also differed according to the age of the subject and the emotion being rated.Support for this research was from a grant by the National Science Foundation (#BNS8317229) to Nathan A. Fox. The research was also supported by a grant awarded to Nathan Fox from the National Institutes of Health (#R01MH/HD17899). The authors would like to thank Professor A. Caron for providing the original videotape, Joyce Dinsmoor for help in data collection and the staff of the Center for Young Children for their cooperation.

8.
9.
This study tested the hypothesis derived from ecological theory that adaptive social perceptions of emotion expressions fuel trait impressions. Moreover, it was predicted that these impressions would be overgeneralized and perceived in faces that were not intentionally posing expressions but nevertheless varied in emotional demeanor. To test these predictions, perceivers viewed 32 untrained targets posing happy, surprised, angry, sad, and fearful expressions and formed impressions of their dominance and affiliation. When targets posed happiness and surprise they were perceived as high in dominance and affiliation whereas when they posed anger they were perceived as high in dominance and low in affiliation. When targets posed sadness and fear they were perceived as low in dominance. As predicted, many of these impressions were overgeneralized and attributed to targets who were not posing expressions. The observed effects were generally independent of the impact of other facial cues (i.e., attractiveness and babyishness).

10.
Facial expressions of emotion influence interpersonal trait inferences
Theorists have argued that facial expressions of emotion serve the interpersonal function of allowing one animal to predict another's behavior. Humans may extend these predictions into the indefinite future, as in the case of trait inference. The hypothesis that facial expressions of emotion (e.g., anger, disgust, fear, happiness, and sadness) affect subjects' interpersonal trait inferences (e.g., dominance and affiliation) was tested in two experiments. Subjects rated the dispositional affiliation and dominance of target faces with either static or apparently moving expressions. They inferred high dominance and affiliation from happy expressions, high dominance and low affiliation from angry and disgusted expressions, and low dominance from fearful and sad expressions. The findings suggest that facial expressions of emotion convey not only a target's internal state, but also differentially convey interpersonal information, which could potentially seed trait inference.This article constitutes a portion of my dissertation research at Stanford University, which was supported by a National Science Foundation Fellowship and an American Psychological Association Dissertation Award. Thanks to Nancy Alvarado, Chris Dryer, Paul Ekman, Bertram Malle, Susan Nolen-Hoeksema, Steven Sutton, Robert Zajonc, and more anonymous reviewers than I can count on one hand for their comments.

11.
To increase the effectiveness of fundraising campaigns, many human‐need charities include pictures of beneficiaries in their ads. However, it is unclear when and why the facial expression of these beneficiaries (sad versus happy) may influence the effectiveness of charity ads. To answer these questions, an experiment was conducted to investigate the effect of the facial expression on donation intentions, while considering the moderating role of psychological involvement with charities. It found that psychological involvement with charities moderated the impact of the facial expression on donation intentions in that seeing a picture of a sad versus happy person increased intentions to give among participants with lower levels of psychological involvement, whereas the reverse was true for highly involved participants. The moderating effect of psychological involvement was fully explained by the perceived efficacy of one's donation. The findings not only contribute to our understanding of the effect of the facial expression of people pictured in charity appeals on donation behavior, but also suggest that nonprofits should tailor their ads to target potential donors with various levels of psychological involvement with charities.

12.
Although self-reported gambling urge intensities have clinical utility in the treatment of pathological gambling (PG), prior studies have not investigated their neural correlates. Functional magnetic resonance imaging (fMRI) was conducted while 10 men with PG and 11 control comparison (CON) men viewed videotaped scenarios of gambling, happy or sad content. Participants rated the intensity of their emotions and motivations and reported the qualities of their responses. Relative to the CON group, the PG group reported similar responses to sad and happy scenarios, but stronger emotional responses and gambling urges when viewing the gambling scenarios. Correlations between self-reported responses and brain activations were typically strongest during the period of reported onset of emotional/motivational response and more robust in PG than in CON subjects for all conditions. During this epoch, corresponding with conscious awareness of an emotional/motivational response, subjective ratings of gambling urges in the PG group were negatively correlated with medial prefrontal cortex activation and positively correlated with middle temporal gyrus and temporal pole activations. Sadness ratings in the PG group correlated positively with activation of the medial orbitofrontal cortex, middle temporal gyrus, and retrosplenial cortex, while self-reported happiness during the happy videos demonstrated largely inverse correlations with activations in the temporal poles. Brain areas identified in the PG subjects have been implicated in explicit, self-referential processing and episodic memory. The findings demonstrate different patterns of correlations between subjective measures of emotions and motivations in PG and CON subjects when viewing material of corresponding content, suggesting in PG alterations in the neural correlates underlying experiential aspects of affective processing.
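The brain-behavior analysis described here reduces to correlating, across participants, a subjective rating with the mean activation of a region of interest. The sketch below shows that computation on random placeholder data; the sample size and region name are borrowed from the abstract purely for illustration.

```python
# A minimal sketch of a subject-level brain-behavior correlation:
# Pearson r between self-reported gambling-urge intensity and mean
# medial prefrontal cortex (mPFC) activation. Data are random stand-ins.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects = 10                                     # e.g., the 10 PG participants
urge_ratings = rng.uniform(0, 10, n_subjects)       # placeholder self-reports
mpfc_activation = rng.normal(0.0, 1.0, n_subjects)  # placeholder mean ROI betas

r, p = pearsonr(urge_ratings, mpfc_activation)
print(f"urge vs. mPFC activation: r = {r:.2f}, p = {p:.3f}")
```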

13.
Adult judges were presented with videotape segments showing an infant displaying facial configurations hypothesized to express discomfort/pain, anger, or sadness according to differential emotions theory (Izard, Dougherty, & Hembree, 1983). The segments also included the infant's nonfacial behavior and aspects of the situational context. Judges rated the segments using a set of emotion terms or a set of activity terms. Results showed that judges perceived the discomfort/pain and anger segments as involving one or more negative emotions not predicted by differential emotions theory. The sadness segments were perceived as involving relatively little emotion overall. Body activity accompanying the discomfort/pain and anger configurations was judged to be more jerky and active than body activity accompanying the sadness configurations. The sadness segments were accompanied by relatively little body movement overall. The results thus fail to conform to the predictions of differential emotions theory but provide information that may contribute to the development of a theory of infant expressive behavior.This article is based on the second author's master's thesis. The authors thank Dennis Ross for his expert assistance in the data analyses.

14.
This paper describes an application of emotion recognition in human gait based on kinetic and kinematic data and artificial neural nets. Two experiments were undertaken, one attempting to identify participants’ emotional states from gait patterns, and the second analyzing the effects on gait patterns of listening to music while walking. In the first experiment gait was analyzed as participants attempted to simulate four distinct emotional states (normal, happy, sad, angry). In the second experiment, participants were asked to listen to different types of music (excitatory, calming, no music) before and during gait analysis. The derived data were fed into different types of artificial neural nets. The results not only showed a clear distinction between individuals, but also revealed clear indications of emotion recognition by the nets.
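A minimal sketch, not the authors' pipeline, of the kind of classifier such a study implies: a small feed-forward net trained to map gait-derived features onto the four emotional states. The feature dimensionality and labels are assumptions, and the data are random placeholders.

```python
# Toy gait-emotion classifier: standardize kinetic/kinematic features,
# then fit a small multilayer perceptron. Shapes and labels are assumed.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 12))      # placeholder gait features per trial
y = rng.integers(0, 4, size=200)    # 0=normal, 1=happy, 2=sad, 3=angry

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)
print("test accuracy:", clf.score(scaler.transform(X_test), y_test))
```

With random features the accuracy hovers near chance (0.25), which is exactly the baseline a real feature set would need to beat.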

15.
The aim of the present study was to investigate developmental differences in reliance on situational versus vocal cues for recognition of emotions. Turkish preschool, second, and fifth grade children participated in the study. Children listened to audiotape recordings of situations between a mother and a child where the emotional cues implied by the context of a vignette and the vocal expression were either consistent or inconsistent. After listening to each vignette, participants were questioned about the content of the incident and were asked to make a judgment about the emotion of the mother referred to in the recording. Angry, happy, and neutral emotions were utilized. Results revealed that 1) recognition of emotions improved with age, and 2) children relied more on the channel depicting either anger or happiness than on the channel depicting neutrality.

16.
In drawing, psychological mood can be denoted in a direct way (i.e., “literally”) through facial expression cues (e.g., a frowning face denotes sadness in a direct way), but it can also be connoted in an indirect way (i.e., “non-literally”) through figurative or non-figurative cues. This study examines how child and adult drawers selectively use literal and non-literal expressive strategies in accordance with the nature of the topic being depicted. In a between-subject design, 120 participants produced drawings of either a person or a house, in one of three versions: baseline, happy, and sad. The results indicated that drawers preferentially used literal expressive strategies for the person and non-literal strategies for the house. There was an increasing tendency between 7 and 11 years of age to express the drawn person’s mood non-literally in addition to literally. The positive correlation obtained between representational and expressive drawing ability suggests that enrichment of drawers’ graphic repertoire enhances their ability to draw expressively. Implications for clinical and educational practitioners are discussed.

17.
Inconsistencies in previous findings concerning the relationship between emotion and social context are likely to reflect the multi-dimensionality of the sociality construct. In the present study we focused on the role of the other person by manipulating two aspects of this role: co-experience of the event and expression of emotion. We predicted that another's co-experience and expression would affect emotional responses and that the direction of these effects would depend upon the manipulated emotion and how the role of the other person is appraised. Participants read vignettes eliciting four different emotions: happiness, sadness, anxiety, and anger. As well as an alone condition, there were four conditions in which a friend was present, either co-experiencing the event or merely observing it, and either expressing emotions consistent with the event or not showing any emotion. There were significant effects of co-experience in the case of anger situations, and of expression in the case of happiness and sadness situations. Social appraisal also appeared to influence emotional response. We discuss different processes that might be responsible for these results.

18.
The purpose of this study was to characterize the movement qualities of 5 target emotions during walking. We used an autobiographical memories paradigm for elicitation and observer judgments for emotion recognition. For each of the felt and recognized emotion portrayals, 6 Effort-Shape qualities were judged on a continuum between opposite qualities at the anchor points. Three general categories of movement style emerged, such that anger and joy shared anchor qualities at one end of the continuum, sadness had qualities at the opposite anchor, and content and neutral had qualities between the anchor extremes. The Effort-Shape profiles were nevertheless unique for each target emotion, and mean scores differed between emotions even when emotions shared similar qualities. Emotions were classified using the Effort-Shape scores, with accuracies ranging from 74% to 32% for sad, anger, content, and joy, respectively. For most of the target emotions, decoding accuracy was related to at least 4 Effort-Shape qualities, suggesting that decoding accuracy may be associated with a profile of movement qualities. This study highlights the importance of movement quality in bodily expression of emotion and demonstrates the effectiveness of Effort-Shape analysis in distinguishing among emotion-related movement styles.
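As one concrete reading of "classified using the Effort-Shape scores", the sketch below trains a simple discriminant classifier on six-dimensional Effort-Shape profiles. The abstract does not name the classifier, so LDA here is an assumption, and the data are random placeholders.

```python
# Toy classification of emotion labels from six Effort-Shape quality
# scores, evaluated with 5-fold cross-validation. All data are assumed.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(150, 6))  # placeholder Effort-Shape profiles
y = rng.integers(0, 5, size=150)           # anger, joy, sadness, content, neutral

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```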

19.
Ross Flom & Anne D. Pick, Infancy, 2005, 7(2), 207-218
The study of gaze following in infants younger than 12 months of age has emphasized the effects of gesture, type of target, and its position or placement. This experiment extends this literature by examining the effects of adults' affective expression on 7‐month‐olds' gaze following. The effects of 3 affective expressions—happy, sad, and neutral—on 7‐month‐olds' frequency of gaze following were examined. The results indicated that infants more frequently followed the gaze of an adult posing a neutral expression than that of an adult posing either a happy or a sad expression. The infants also looked proportionately longer toward the indicated target when the adult's expression was neutral. The results are interpreted in terms of infants' flexibility of attention.

20.
Darwin (1872) hypothesized that some facial muscle actions associated with emotion cannot be consciously inhibited, particularly when the to-be-concealed emotion is strong. The present study investigated emotional “leakage” in deceptive facial expressions as a function of emotional intensity. Participants viewed low or high intensity disgusting, sad, frightening, and happy images, responding to each with a 5 s videotaped genuine or deceptive expression. Each 1/30 s frame of the 1,711 expressions (256,650 frames in total) was analyzed for the presence and duration of universal expressions. Results strongly supported the inhibition hypothesis. In general, emotional leakage lasted longer in both the upper and lower face during high-intensity masked, relative to low-intensity, masked expressions. High intensity emotion was more difficult to conceal than low intensity emotion during emotional neutralization, leading to a greater likelihood of emotional leakage in the upper face. The greatest and least amount of emotional leakage occurred during fearful and happiness expressions, respectively. Untrained observers were unable to discriminate real and false expressions above the level of chance.
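The frame-level bookkeeping implied by the 1/30 s coding reduces to simple arithmetic: leakage duration is the count of coded frames divided by the frame rate. A minimal sketch, with a hypothetical coding array standing in for the real FACS output:

```python
# Compute emotional-leakage duration from per-frame coding at 30 fps.
# The boolean array is a hypothetical stand-in for real coded frames.
import numpy as np

FPS = 30                                # expressions coded frame by frame at 30 fps
frames = np.zeros(5 * FPS, dtype=bool)  # one 5 s expression = 150 frames
frames[40:70] = True                    # hypothetical frames showing leakage

leakage_seconds = frames.sum() / FPS
print(f"emotional leakage: {leakage_seconds:.2f} s")  # -> 1.00 s
```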
