Similar documents
20 similar documents found (search time: 93 ms)
1.
The aim of the current study was to investigate the influence of happy and sad mood on facial muscular reactions to emotional facial expressions. Following film clips intended to induce happy and sad mood states, participants observed faces with happy, sad, angry, and neutral expressions while their facial muscular reactions were recorded electromyographically. Results revealed that after watching the happy clip participants showed congruent facial reactions to all emotional expressions, whereas watching the sad clip led to a general reduction of facial muscular reactions. Results are discussed with respect to the information-processing style underlying the lack of mimicry in a sad mood state, and with respect to the consequences for social interactions and for embodiment theories.

2.
The perception of emotional facial expressions may activate corresponding facial muscles in the receiver, a phenomenon referred to as facial mimicry. Facial mimicry is highly dependent on the context and type of facial expressions. While previous research almost exclusively investigated mimicry in response to pictures or videos of emotional expressions, studies with a real, face-to-face partner are still rare. Here we compared facial mimicry of angry, happy, and sad expressions and emotion recognition in a dyadic face-to-face setting. In sender-receiver dyads, we recorded facial electromyograms in parallel. Senders communicated to the receivers—with facial expressions only—the emotions felt during specific personal situations in the past, eliciting anger, happiness, or sadness. Receivers mimicked happiness most, sadness to a lesser degree, and anger least. In actor-partner interdependence models we showed that the receivers’ own facial activity influenced their ratings, which increased the agreement between the senders’ and receivers’ ratings for happiness, but not for angry and sad expressions. These results are in line with the Emotion Mimicry in Context view, which holds that humans mimic happy expressions according to affiliative intentions. The mimicry of sad expressions is less intense, presumably because it signals empathy and might imply personal costs. Direct anger expressions are mimicked the least, possibly because anger communicates threat and aggression. Taken together, we show that incidental facial mimicry in a face-to-face setting is positively related to recognition accuracy for non-stereotypical happy expressions, supporting the functionality of facial mimicry.

3.
The present study examined effects of temporarily salient and chronic self-construal on decoding accuracy for positive and negative facial expressions of emotion. We primed independent and interdependent self-construal in a sample of participants who then rated the emotion expressions of a central character (target) in a cartoon showing a happy, sad, angry, or neutral facial expression in a group setting. Primed interdependence was associated with lower recognition accuracy for negative emotion expressions. Primed and chronic self-construal interacted such that for interdependence-primed participants, higher chronic interdependence was associated with lower decoding accuracy for negative emotion expressions. Chronic independent self-construal was associated with higher decoding accuracy for negative emotion. These findings add to an increasing literature that highlights the significance of perceivers’ socio-cultural factors, self-construal in particular, for emotion perception.

4.
We assessed the impact of social context on the judgment of emotional facial expressions as a function of self-construal and decoding rules. German and Greek participants rated spontaneous emotional faces shown either alone or surrounded by other faces with congruent or incongruent facial expressions. Greek participants were higher in interdependence than German participants. In line with cultural decoding rules, Greek participants rated anger expressions less intensely and sadness and disgust expressions more intensely. Social context affected the ratings of both groups in different ways. In the more interdependent culture (Greece), participants perceived anger least intensely when the group showed neutral expressions, whereas sadness expressions were rated as most intense in the absence of social context. In the independent culture (Germany), a group context (others expressing anger or happiness) additionally amplified the perception of angry and happy expressions. In line with the notion that these effects are mediated by more holistic processing linked to higher interdependence, this difference disappeared when we controlled for interdependence on the individual level. The findings confirm the usefulness of considering both country-level and individual-level factors when studying cultural differences.

5.
Subjects were exposed to a sequence of facial expression photographs to determine whether viewing earlier expressions in a sequence would alter intensity judgments of a final expression in the sequence. Results showed that whether the preceding expressions were shown by the same person who displayed the final expression, or by different people, intensity ratings of both sad and happy final expressions were enhanced when preceded by a sequence of contrasting as opposed to similar or identical facial expressions. Results are discussed from the perspective of adaptation-level theory.

6.
Adults' perceptions provide information about the emotional meaning of infant facial expressions. This study asks whether similar facial movements influence adult perceptions of emotional intensity in both infant positive (smile) and negative (cry face) facial expressions. Ninety-five college students rated a series of naturally occurring and digitally edited images of infant facial expressions. Naturally occurring smiles and cry faces involving the co-occurrence of greater lip movement, mouth opening, and eye constriction were rated as expressing stronger positive and negative emotion, respectively, than expressions without these three features. Ratings of digitally edited expressions indicated that eye constriction contributed to higher ratings of positive emotion in smiles (i.e., in Duchenne smiles) and greater eye constriction contributed to higher ratings of negative emotion in cry faces. Stronger mouth opening contributed to higher ratings of arousal in both smiles and cry faces. These findings indicate that a set of similar facial movements is linked to perceptions of greater emotional intensity, whether the movements occur in positive or negative infant emotional expressions. This proposal is discussed with reference to discrete, componential, and dynamic systems theories of emotion.

7.
The hypotheses of this investigation were based on conceiving of facial mimicry reactions in face-to-face interactions as an early automatic component in the process of emotional empathy. Differences between individuals high and low in emotional empathy were investigated. The parameters compared were facial mimicry reactions, as represented by electromyographic (EMG) activity, when individuals were exposed to pictures of angry or happy faces. The present study distinguished between spontaneous facial reactions and facial expressions associated with more controlled or modulated emotions at different information-processing levels, first at a preattentive level and then consecutively at more consciously controlled levels: 61 participants were exposed to pictures at three different exposure times (17, 56, and 2350 ms). A significant difference in facial mimicry reactions between high- and low-empathy participants emerged at the short exposure time of 56 ms, representing automatic, spontaneous reactions, with high-empathy participants showing a significant mimicking reaction. The low-empathy participants did not display mimicking at any exposure time. On the contrary, the low-empathy participants showed, in response to angry faces, a tendency toward elevated activation in the cheek region, which is often associated with smiling.

8.
Darwin (1872) hypothesized that some facial muscle actions associated with emotion cannot be consciously inhibited, particularly when the to-be-concealed emotion is strong. The present study investigated emotional “leakage” in deceptive facial expressions as a function of emotional intensity. Participants viewed low- or high-intensity disgusting, sad, frightening, and happy images, responding to each with a 5 s videotaped genuine or deceptive expression. Each 1/30 s frame of the 1,711 expressions (256,650 frames in total) was analyzed for the presence and duration of universal expressions. Results strongly supported the inhibition hypothesis. In general, emotional leakage lasted longer in both the upper and lower face during high-intensity masked expressions relative to low-intensity masked expressions. High-intensity emotion was more difficult to conceal than low-intensity emotion during emotional neutralization, leading to a greater likelihood of emotional leakage in the upper face. The greatest and least amounts of emotional leakage occurred during fearful and happiness expressions, respectively. Untrained observers were unable to discriminate genuine and deceptive expressions above the level of chance.
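As a quick arithmetic check, the frame totals reported above are internally consistent: 1,711 five-second expressions sampled at one frame per 1/30 s do yield 256,650 frames. A minimal sketch (the constant names are ours, not the study's):

```python
# Verify the frame counts reported in the abstract.
FPS = 30            # one frame per 1/30 s
CLIP_SECONDS = 5    # each videotaped expression lasted 5 s
N_EXPRESSIONS = 1_711

frames_per_clip = FPS * CLIP_SECONDS          # 150 frames per expression
total_frames = N_EXPRESSIONS * frames_per_clip
print(total_frames)  # 256650, matching the reported total
```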

9.
Cross-cultural and laboratory research indicates that some facial expressions of emotion are recognized more accurately and faster than others. We assessed the hypothesis that such differences depend on the frequency with which each expression occurs in social encounters. Thirty observers recorded how often they saw different facial expressions during natural conditions in their daily life. For a total of 90 days (3 days per observer), 2,462 samples of seen expressions were collected. Among the basic expressions, happy faces were observed most frequently (31%), followed by surprised (11.3%), sad (9.3%), angry (8.7%), disgusted (7.2%), and fearful faces, which were the least frequent (3.4%). A significant proportion (29%) of non-basic emotional expressions (e.g., pride or shame) was also observed. We correlated our frequency data with recognition accuracy and response latency data from prior studies. In support of the hypothesis, significant correlations (generally above .70) emerged, with recognition accuracy increasing and latency decreasing as a function of frequency. We conclude that the efficiency of facial emotion recognition is modulated by familiarity with the expressions.
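The reported category percentages can be tallied to confirm that they account for essentially all 2,462 sampled expressions (the ~0.1% shortfall is rounding). A minimal sketch, using only the values given in the abstract (the dictionary keys are our labels):

```python
# Percentages of observed expressions, as reported in the abstract.
freqs = {
    "happy": 31.0,
    "surprised": 11.3,
    "sad": 9.3,
    "angry": 8.7,
    "disgusted": 7.2,
    "fearful": 3.4,
    "non-basic (e.g., pride, shame)": 29.0,
}

total = sum(freqs.values())
print(round(total, 1))  # 99.9 — nearly 100%, remainder is rounding
```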

10.
Research has demonstrated that infants recognize emotional expressions of adults in the first half year of life. We extended this research to a new domain, infant perception of the expressions of other infants. In an intermodal matching procedure, 3.5- and 5-month-old infants heard a series of infant vocal expressions (positive and negative affect) along with side-by-side dynamic videos in which one infant conveyed positive facial affect and another infant conveyed negative facial affect. Results demonstrated that 5-month-olds matched the vocal expressions with the affectively congruent facial expressions, whereas 3.5-month-olds showed no evidence of matching. These findings indicate that by 5 months of age, infants detect, discriminate, and match the facial and vocal affective displays of other infants. Further, because the facial and vocal expressions were portrayed by different infants and shared no face–voice synchrony, temporal, or intensity patterning, matching was likely based on detection of a more general affective valence common to the face and voice.

11.
This study tested the hypothesis, derived from ecological theory, that adaptive social perceptions of emotion expressions fuel trait impressions. Moreover, it was predicted that these impressions would be overgeneralized and perceived in faces that were not intentionally posing expressions but nevertheless varied in emotional demeanor. To test these predictions, perceivers viewed 32 untrained targets posing happy, surprised, angry, sad, and fearful expressions and formed impressions of their dominance and affiliation. When targets posed happiness and surprise they were perceived as high in dominance and affiliation, whereas when they posed anger they were perceived as high in dominance and low in affiliation. When targets posed sadness and fear they were perceived as low in dominance. As predicted, many of these impressions were overgeneralized and attributed to targets who were not posing expressions. The observed effects were generally independent of the impact of other facial cues (i.e., attractiveness and babyishness).

12.
Accurate assessment of emotion requires the coordination of information from different sources such as faces, bodies, and voices. Adults readily integrate facial and bodily emotions. However, not much is known about the developmental origin of this capacity. Using a familiarization paired-comparison procedure, 6.5-month-olds in the current experiments were familiarized to happy, angry, or sad emotions in faces or bodies and tested with the opposite image type portraying the familiar emotion paired with a novel emotion. Infants looked longer at the familiar emotion across faces and bodies (except when familiarized to angry images and tested on the happy/angry contrast). This matching occurred not only for emotions from different affective categories (happy, angry) but also within the negative affective category (angry, sad). Thus, 6.5-month-olds, like adults, integrate emotions from bodies and faces in a fairly sophisticated manner, suggesting rapid development of emotion processing early in life.

13.
Character judgments, based on facial appearance, impact both perceivers’ and targets’ interpersonal decisions and behaviors. Nonetheless, the resilience of such effects in the face of longer acquaintanceship duration is yet to be determined. To address this question, we had 51 elderly long-term married couples complete self and informant versions of a Big Five Inventory. Participants were also photographed while they were requested to maintain an emotionally neutral expression. A subset of the initial sample completed a shortened version of the Big Five Inventory in response to the pictures of other opposite-sex participants (with whom they were unacquainted). Oosterhof and Todorov’s (2008; Proc Natl Acad Sci USA 105:11087–11092) computer-based model of face evaluation was used to generate facial trait scores on trustworthiness, dominance, and attractiveness, based on participants’ photographs. Results revealed that structural facial characteristics suggestive of greater trustworthiness predicted positively biased, global informant evaluations of a target’s personality, among both spouses and strangers. Among spouses, this effect was impervious to marriage length. There was also evidence suggestive of a Dorian Gray effect on personality, since facial trustworthiness predicted not only spousal and stranger, but also self-ratings of extraversion. Unexpectedly, though, follow-up analyses revealed that (low) facial dominance, rather than (high) trustworthiness, was the strongest predictor of self-rated extraversion. Our present findings suggest that subtle emotional cues, embedded in the structure of emotionally neutral faces, exert long-lasting effects on personality judgments even among very well-acquainted targets and perceivers.

14.
We studied whether emotional empathy is related to sensitivity to facial feedback. The participants, 112 students, rated themselves on the Questionnaire Measure of Emotional Empathy (QMEE) and were divided into a high- and a low-empathy group. Facial expressions were manipulated to produce a happy or a sulky expression. During the manipulation, participants rated humorous films with respect to funniness; these ratings were the dependent variable. No main effect of facial expression was found. However, a significant interaction between empathy and condition indicated that the high-empathy group, compared to the low-empathy group, rated the films as funnier in the happy condition and tended to rate them as less funny in the sulky condition. On the basis of the present results, we suggest that emotional empathy is an important and previously overlooked factor in explaining individual differences in facial feedback effects.

15.
Psychopathic individuals are characterized as “intra-species predators”—callous, impulsive, aggressive, and proficient at interpersonal manipulation. For example, despite their high risk for re-offending, psychopathic offenders often receive early release on parole. While reputed to be social chameleons, research suggests that even naive observers can accurately infer high levels of psychopathic traits in others with very brief exposures to behavior, but accuracy degrades with extended observation. We utilized a lens model approach to examine the communication styles (emotional facial expressions, body language, and verbal content) of offenders varying in levels of psychopathic traits using “thin slice” video clips of psychological assessment interviews and to reveal which cues observers use to inform their evaluations of psychopathy. Psychopathic traits were associated with more (a) Duchenne smiles, (b) negative (angry) emotional language, and (c) hand gestures (illustrators). Further, psychopathy was associated with a marked behavioral incongruence; when individuals scoring high in psychopathic traits engaged in Duchenne smiles they were also more likely to use angry language. Naive observers relied on each of these valid behavioral signals to quickly and accurately detect psychopathic traits. These findings provide insight into psychopathic communication styles, opportunities for improving the detection of psychopathic personality traits, and may provide an avenue for understanding successful psychopathic manipulation.

16.
To examine individual differences in decoding facial expressions, college students judged type and emotional intensity of emotional faces at five intensity levels and completed questionnaires on family expressiveness, emotionality, and self-expressiveness. For decoding accuracy, family expressiveness was negatively related, with strongest effects for more prototypical faces, and self-expressiveness was positively related. For perceptions of emotional intensity, family expressiveness was positively related, emotionality tended to be positively related, and self-expressiveness tended to be negatively related; these findings were all qualified by level of ambiguity/clarity of the facial expressions. Decoding accuracy and perceived emotional intensity also related positively with each other. Results suggest that even simple facial judgments are made through an interpretive lens partially created by family expressiveness, individuals’ own emotionality, and self-expressiveness.

17.
This work constitutes a systematic review of the empirical literature on emotional facial expressions displayed in the context of family interaction. Searches of electronic databases from January 1990 to December 2016 generated close to 4,400 articles, of which only 26 met the inclusion criteria. The evidence indicates that affective expressions were mostly examined through laboratory and naturalistic observations, within a wide range of interactive contexts in which mother–child dyads significantly outnumbered father–child dyads. Moreover, dyadic partners were found to match each other's displays, with positive and neutral facial expressions proving more frequent than negative facial expressions. Finally, researchers observed some developmental and gender differences regarding the frequency, intensity, and category of emotional displays, and identified links among facial expression behavior, family relations, personal adjustment, and peer-related social competence.

18.
Infants are attuned to emotional facial and vocal expressions, reacting most prominently when they are exposed to negative expressions. However, it remains unknown if infants can detect whether a person's emotions are justifiable given a particular context. The focus of the current paper was to examine whether infants react the same way to unjustified (e.g., distress following a positive experience) and justified (e.g., distress following a negative experience) emotional reactions. Infants aged 15 and 18 months were shown an actor experiencing negative and positive experiences, with one group exposed to an actor whose emotional reactions were consistently unjustified (i.e., did not match the event), while the other saw an actor whose emotional reactions were justified (i.e., always matched the event). Infants' looking times and empathic reactions were examined. Only 18-month-olds detected the mismatching facial expressions: Those in the unjustified group showed more hypothesis testing (i.e., checking) across events than the justified group. Older infants in the justified group also showed more concerned reactions to negative expressions than those in the unjustified group. The present findings indicate that infants implicitly understand how the emotional valence of experiences is linked to subsequent emotional expressions.

19.
The goal of this study was to examine whether individual differences in the intensity of facial expressions of emotion are associated with individual differences in the voluntary control of facial muscles. Fifty college students completed a facial mimicry task and were judged on the accuracy and intensity of their facial movements. Self-reported emotional experience was measured after subjects viewed positive and negative affect-eliciting film clips, and intensity of facial expressiveness was measured from videotapes recorded while the subjects viewed the film clips. There were significant sex differences in both facial mimicry task performance and responses to the film clips. Accuracy and intensity scores on the mimicry task, which were not significantly correlated with one another, were both positively correlated with the intensity of facial expressiveness in response to the film clips, but were not associated with reported experiences. We wish to thank the Editor and two anonymous reviewers for their helpful comments on an earlier draft of this paper.

20.
One of the most prevalent problems in face transplant patients is an inability to generate facial expressions of emotion. The purpose of this study was to measure the subjective recognition of patients’ emotional expressions by other people. We examined facial expressions of six emotions in two facial transplant patients (patient A = partial, patient B = full) and one healthy control, using video clips to evoke emotions. We recorded target subjects’ facial expressions with a video camera while they were watching the clips. These were then shown to a panel of 130 viewers and rated in terms of degree of emotional expressiveness on a 7-point Likert scale. The scores for emotional expressiveness were higher for the healthy control than for patients A and B, and varied as a function of emotion. The most recognizable emotion was happiness. The least recognizable emotions in patient A were fear, surprise, and anger. The expressions of patient B scored lower than those of patient A and the healthy control. The findings show that partial and full face transplant patients may have difficulties in generating facial expressions of emotions even if they can feel those emotions, and different parts of the face seem to play critical roles in different emotional expressions.


Copyright © Beijing Qinyun Science and Technology Development Co., Ltd. (京ICP备09084417号)