Similar Articles
20 similar articles found.
1.
The aim of the current study was to investigate the influence of happy and sad mood on facial muscular reactions to emotional facial expressions. Following film clips intended to induce happy and sad mood states, participants observed faces with happy, sad, angry, and neutral expressions while their facial muscular reactions were recorded electromyographically. Results revealed that after watching the happy clip participants showed congruent facial reactions to all emotional expressions, whereas watching the sad clip led to a general reduction of facial muscular reactions. Results are discussed with respect to the information-processing style underlying the lack of mimicry in a sad mood state, and also with respect to the consequences for social interactions and for embodiment theories.
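For readers unfamiliar with facial EMG, congruent ("mimicry") reactions of this kind are conventionally scored as baseline-corrected activity at muscle sites that track smiling (zygomaticus major) and frowning (corrugator supercilii). Below is a minimal Python sketch of that scoring convention; the values, windows, and function name are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def emg_change(baseline: np.ndarray, reaction: np.ndarray) -> float:
    """Baseline-corrected mean EMG amplitude for one muscle site.

    Positive values indicate activation above the pre-stimulus baseline.
    Hypothetical scoring convention; not the authors' actual analysis.
    """
    return float(reaction.mean() - baseline.mean())

# A congruent (mimicry) reaction to a happy face: zygomaticus rises while
# corrugator stays flat or relaxes. Amplitudes below are made-up values.
zyg = emg_change(np.array([1.0, 1.1, 0.9]), np.array([1.6, 1.8, 1.7]))
cor = emg_change(np.array([2.0, 2.1, 1.9]), np.array([1.8, 1.9, 2.0]))
print(f"zygomaticus: {zyg:+.2f}, corrugator: {cor:+.2f}")
```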

2.
The impact of singular (e.g. sadness alone) and compound (e.g. sadness and anger together) facial expressions on individuals' recognition of faces was investigated. In three studies, a face recognition paradigm was used as a measure of the proficiency with which participants processed compound and singular facial expressions. For both positive and negative facial expressions, participants displayed greater proficiency in processing compound expressions relative to singular expressions. Specifically, the accuracy with which faces displaying compound expressions were recognized was significantly higher than the accuracy with which faces displaying singular expressions were recognized. Possible explanations involving the familiarity, distinctiveness, and salience of the facial expressions are discussed.

3.
Research has demonstrated that infants recognize emotional expressions of adults in the first half year of life. We extended this research to a new domain, infant perception of the expressions of other infants. In an intermodal matching procedure, 3.5‐ and 5‐month‐old infants heard a series of infant vocal expressions (positive and negative affect) along with side‐by‐side dynamic videos in which one infant conveyed positive facial affect and another infant conveyed negative facial affect. Results demonstrated that 5‐month‐olds matched the vocal expressions with the affectively congruent facial expressions, whereas 3.5‐month‐olds showed no evidence of matching. These findings indicate that by 5 months of age, infants detect, discriminate, and match the facial and vocal affective displays of other infants. Further, because the facial and vocal expressions were portrayed by different infants and shared no face–voice synchrony and no common temporal or intensity patterning, matching was likely based on detection of a more general affective valence common to the face and voice.

4.
This work constitutes a systematic review of the empirical literature on emotional facial expressions displayed in the context of family interaction. Searches of electronic databases from January 1990 until December 2016 generated close to 4400 articles, of which only 26 met the inclusion criteria. The evidence indicates that affective expressions were mostly examined through laboratory and naturalistic observations, within a wide range of interactive contexts in which mother–child dyads significantly outnumbered father–child dyads. Moreover, dyadic partners were found to match each other's displays, with positive and neutral facial expressions proving more frequent than negative ones. Finally, researchers observed some developmental and gender differences in the frequency, intensity, and category of emotional displays and identified certain links among facial expression behavior, family relations, personal adjustment, and peer-related social competence.

5.
Izard (2004/this issue) clarifies the position of differential emotions theory by proposing a distinction between hard and soft versions of event‐emotion expression relations. We concur that the best design to examine situational specificity in facial expressions is one that utilizes multiple stimulus situations assessed over multiple occasions and ages. However, the problem of how to identify, a priori, a family of stimulus situations remains. We offer an example from our own recent work demonstrating how facial expressions and physiological indexes may converge to indicate the presence of a meaningful family of stimulus situations. Specifically, we found evidence for a family of frustrating, goal‐blocking events that elicited expressions and cortisol responses indicative of anger at 4 months. Yet, individual differences exist in that these situations also elicited expressions and cortisol changes indicative of sadness. Identification of a more comprehensive set of such situations throughout infancy will allow researchers to more systematically examine the degree to which situational specificity of emotions is present.

6.
This study examined age and gender differences in decoding nonverbal cues in a school population of 606 (pre)adolescents (9–15 years). The focus was on differences in the perceived intensity of several emotions in both basic and non-basic facial expressions. Age differences were found in decoding low-intensity and ambiguous faces, but not in basic expressions. Older adolescents indicated more negative meaning in these more subtle and complex facial cues. Girls attributed more anger to both basic and non-basic facial expressions and showed a general negative bias in decoding non-basic facial expressions compared to boys. Findings are interpreted in light of the development of emotion regulation and the importance of these skills for developing relationships.

7.
We examined 6‐month‐old infants' abilities to discriminate smiling and frowning from neutral stimuli. In addition, we assessed the relationship between infants' preferences for varying intensities of smiling and frowning facial expressions and their mothers' history of depressive symptoms. Forty‐six infants were presented with pairs of facial expressions, and their preferential looking time was recorded. They also participated in a 3‐min interaction with their mothers, for which the duration of both mother and infant gazing and smiling was coded. Analyses revealed that the infants reliably discriminated between varying intensities of smiling and frowning facial expressions and a paired neutral expression. In addition, infants' preferences for smiling and frowning expressions were related to self‐reports of maternal depressive symptoms experienced since the birth of the infant. Potential implications for social cognitive development are discussed.

8.
The Intensity of Emotional Facial Expressions and Decoding Accuracy
The influence of the physical intensity of emotional facial expressions on perceived intensity and emotion category decoding accuracy was assessed for expressions of anger, disgust, sadness, and happiness. The facial expressions of two men and two women posing each of the four emotions were used as stimuli. Six different levels of intensity of expression were created for each pose using a graphics morphing program. Twelve men and 12 women rated each of the 96 stimuli for perceived intensity of the underlying emotion and for the qualitative nature of the emotion expressed. The results revealed that perceived intensity varied linearly with the manipulated physical intensity of the expression. Emotion category decoding accuracy varied largely linearly with the manipulated physical intensity of the expression for expressions of anger, disgust, and sadness. For the happiness expressions only, the findings were consistent with a categorical judgment process. Sex of encoder produced significant effects for both dependent measures. These effects remained even after possible gender differences in encoding were controlled for, suggesting a perceptual bias on the part of the decoders.
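For intuition about the stimulus construction, graded intensity levels of this kind can be approximated by blending a neutral image toward a full-intensity expression. The sketch below is a simple linear pixel blend under assumed inputs; the study's dedicated morphing program would also warp facial geometry rather than merely cross-fading pixels.

```python
import numpy as np

def intensity_series(neutral: np.ndarray, full: np.ndarray,
                     levels: int = 6) -> list[np.ndarray]:
    """Linear pixel blends from low to full intensity (illustrative only)."""
    alphas = np.linspace(1 / levels, 1.0, levels)  # e.g., ~17% ... 100%
    return [(1 - a) * neutral + a * full for a in alphas]

# 4 posers (2 men, 2 women) x 4 emotions x 6 levels = the 96 stimuli above.
neutral_face = np.zeros((64, 64))   # placeholder images
full_anger = np.ones((64, 64))
morphs = intensity_series(neutral_face, full_anger)
print(len(morphs))  # 6
```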

9.
The capacity to engage with one's child in a reciprocally responsive way is an important element of successful and rewarding parent–child conversations, which are common contexts for emotion socialization. The degree to which a parent–child dyad shows a mutually responsive orientation presumably depends on both individuals' socio-emotional skills. For example, one or both members of a dyad need to be able to accurately interpret and respond to the other's nonverbal cues, such as facial expressions, to facilitate mutually responsive interactions. Little research, however, has examined whether and how mother and/or child facial expression decoding skill relates to dyads' emotional mutuality during conversations. We thus examined associations between both mother and child facial expression decoding skill and observed emotional mutuality during parent-preschooler conversations about happy child memories. Results lend support to our hypotheses by suggesting that both mother and child capacities to read others' emotional cues make distinct contributions to parent–child emotional mutuality in the context of reminiscing conversations. Specifically, mothers' accurate decoding of child facial expressions predicted maternal displays of positive affect and interest, while children's accurate decoding of adult facial expressions predicted dyadic displays of mutual enjoyment. Contrary to our hypotheses, however, parent and child facial expression decoding skills did not interact to predict observed mutual responsiveness. These findings underscore the importance of attending to both parent and child contributions to successful dyadic interactions that facilitate effective emotion socialization.

10.
Darwin (1872) hypothesized that some facial muscle actions associated with emotion cannot be consciously inhibited, particularly when the to-be-concealed emotion is strong. The present study investigated emotional “leakage” in deceptive facial expressions as a function of emotional intensity. Participants viewed low- or high-intensity disgusting, sad, frightening, and happy images, responding to each with a 5 s videotaped genuine or deceptive expression. Each 1/30 s frame of the 1,711 expressions (256,650 frames in total) was analyzed for the presence and duration of universal expressions. Results strongly supported the inhibition hypothesis. In general, emotional leakage lasted longer in both the upper and lower face during high-intensity masked expressions than during low-intensity masked expressions. High-intensity emotion was more difficult to conceal than low-intensity emotion during emotional neutralization, leading to a greater likelihood of emotional leakage in the upper face. The greatest and least amounts of emotional leakage occurred during fearful and happy expressions, respectively. Untrained observers were unable to discriminate genuine and deceptive expressions above the level of chance.
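To make the frame-level bookkeeping concrete: 5 s at 30 fps gives 150 frames per expression, and 1,711 × 150 = 256,650 frames. Below is a hypothetical Python sketch of how per-frame emotion codes might be aggregated into a leakage duration; the labels and coding scheme are assumptions, not the authors' actual protocol.

```python
def leakage_seconds(frame_codes: list[str], intended: str,
                    fps: int = 30) -> float:
    """Seconds in which a non-intended, non-neutral emotion is visible.

    `frame_codes` holds one emotion label per video frame (30 fps).
    Hypothetical aggregation; not the study's published analysis.
    """
    leaked = sum(code not in (intended, "neutral") for code in frame_codes)
    return leaked / fps

# Example: a deceptive 'happy' mask with 12 frames of fear leaking through.
codes = ["happy"] * 138 + ["fear"] * 12   # 150 frames = one 5 s expression
print(leakage_seconds(codes, intended="happy"))  # 0.4 seconds of leakage
```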

11.
In this study, we investigated the emotional effect of dynamic presentation of facial expressions. Dynamic and static facial expressions of negative and positive emotions were presented using computer-morphing (Experiment 1) and videos of natural changes (Experiment 2), as well as other dynamic and static mosaic images. Participants rated the valence and arousal of their emotional response to the stimuli. The participants consistently reported higher arousal responses to dynamic than to static presentation of facial expressions and mosaic images for both valences. Dynamic presentation had no effect on the valence ratings. These results suggest that dynamic presentation of emotional facial expressions enhances the overall emotional experience without a corresponding qualitative change in the experience, although this effect is not specific to facial images.

12.
The ability to model facial expressions by applying shading-gradient manipulations to Langer stress lines was assessed in the current experiments. It was hypothesized that the perceived intensity of expressions and the judged faceness of the schematic stimuli would vary as a function of the depth or density of stress lines as measured by shading differences: light and dark contrasts on the facial surface. Expressiveness differences based on the shading manipulations were observed for some stimulus configurations, with some differences differentiating the upper and lower halves of the face. The results implicate the shading manipulations as a potential notation system for describing the informational support for the perception of facial expressions, thus providing an empirical measure of relevant stimulus parameters.

13.
This study examined the relationship between decoding nonverbal cues and depressive symptoms in a general school population of 606 children and adolescents (9–15 years). The focus was on the perceived intensity of several emotions in both basic and non-basic facial expressions. The perceived intensities of anger and joy in low-intensity facial expressions were related to depression. The higher the perceived intensity of anger, the more depressed adolescents were, whereas the reverse effect was found for the perception of joy, but only in girls. These results suggest that the development of decoding biases in low-intensity facial expressions may be useful for understanding the development of individual and gender differences in depression during adolescence.

14.
The present study examined the impact of conflict over emotional expression on the nonverbal communication process between romantic partners. Fifty-four romantically involved female undergraduate students who scored within the upper or lower 30th percentile range on the Ambivalence over the Expression of Emotion Questionnaire (AEQ; King & Emmons, 1990) were recruited along with their romantic partners. The facial expressions of these women were examined during a conflict resolution task. Analyses indicated that highly ambivalent women expressed a greater number of negative facial expressions and shorter lasting positive facial expressions (measured with FACES; Kring & Sloan, 1992) than less ambivalent women. These expressions were not entirely explained by current mood, as ambivalence predicted a greater number of negative facial expressions, and a briefer display of positive facial expressions, above and beyond current levels of negative and positive affect. Furthermore, analyses indicated that the number of women's negative expressions predicted significant increases in men's dysphoria and marginal increases in men's anxiety, suggesting potential negative interactional patterns between ambivalent women and their partners.

15.
Previous research has demonstrated that individuals who were accurate at recognizing facial expressions of emotion reported better relationships with family and friends. The purpose of the present study was to test whether the ability to recognize facial expressions of negative emotions predicted greater satisfaction with romantic relationships and whether this link was mediated by constructive responses to conflict. Participants currently involved in a romantic relationship completed a validated performance measure of recognition of facial expressions, then reported on the responses they engaged in during conflict with their romantic partner and rated their romantic relationship satisfaction. Results showed that accurate recognition of facial expressions of negative emotions (anger, contempt, disgust, fear, and sadness) predicted fewer conflict-engaging behaviors during conflict with romantic partners (but not positive problem solving or withdrawal), which in turn predicted greater relationship satisfaction. The present study is the first to show that the ability to recognize facial expressions of negative emotions is related to romantic relationship satisfaction and that constructive responses to conflict, such as fewer conflict-engaging behaviors, mediate this process.

16.
Categorical perception, demonstrated as reduced discrimination of within‐category relative to between‐category differences in stimuli, has been found in a variety of perceptual domains in adults. To examine the development of categorical perception in the domain of facial expression processing, we used behavioral and event‐related potential (ERP) methods to assess discrimination of within‐category (happy‐happy) and between‐category (happy‐sad) differences in facial expressions in 7‐month‐old infants. Data from a visual paired‐comparison test and recordings of attention‐sensitive ERPs showed no discrimination of facial expressions in the within‐category condition, whereas reliable discrimination was observed in the between‐category condition. The results also showed that face‐sensitive ERPs over occipital‐temporal scalp (P400) were attenuated in the within‐category condition relative to the between‐category condition, suggesting a potential neural basis for the reduced within‐category sensitivity. Together, these results suggest that the neural systems underlying categorical representation of facial expressions emerge during the early stages of postnatal development, before acquisition of language.
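As background on the visual paired-comparison test, discrimination is conventionally inferred from a novelty-preference score: the proportion of total looking time directed at the novel stimulus. A minimal sketch of that standard metric follows; it is an assumption about the conventional computation, not this study's exact statistics.

```python
def novelty_preference(novel_look_s: float, familiar_look_s: float) -> float:
    """Proportion of total looking time spent on the novel expression.

    Scores reliably above 0.5 indicate discrimination of the novel
    expression from the familiarized one; 0.5 is chance.
    """
    return novel_look_s / (novel_look_s + familiar_look_s)

# Between-category (happy vs. sad) trials would yield scores above chance;
# within-category (happy vs. happy) trials would hover near 0.5.
print(novelty_preference(6.2, 3.8))  # 0.62
```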

17.
We assessed the impact of social context on the judgment of emotional facial expressions as a function of self-construal and decoding rules. German and Greek participants rated spontaneous emotional faces shown either alone or surrounded by other faces with congruent or incongruent facial expressions. Greek participants were higher in interdependence than German participants. In line with cultural decoding rules, Greek participants rated anger expressions less intensely and sad and disgust expressions more intensely. Social context affected the ratings of both groups in different ways. In the more interdependent culture (Greece), participants perceived anger least intensely when the group showed neutral expressions, whereas sadness expressions were rated as most intense in the absence of social context. In the independent culture (Germany), a group context (others expressing anger or happiness) additionally amplified the perception of angry and happy expressions. In line with the notion that these effects are mediated by the more holistic processing linked to higher interdependence, this difference disappeared when we controlled for interdependence at the individual level. The findings confirm the usefulness of considering both country-level and individual-level factors when studying cultural differences.

18.
19.
Facial expressions of emotion influence interpersonal trait inferences
Theorists have argued that facial expressions of emotion serve the interpersonal function of allowing one animal to predict another's behavior. Humans may extend these predictions into the indefinite future, as in the case of trait inference. The hypothesis that facial expressions of emotion (e.g., anger, disgust, fear, happiness, and sadness) affect subjects' interpersonal trait inferences (e.g., dominance and affiliation) was tested in two experiments. Subjects rated the dispositional affiliation and dominance of target faces with either static or apparently moving expressions. They inferred high dominance and affiliation from happy expressions, high dominance and low affiliation from angry and disgusted expressions, and low dominance from fearful and sad expressions. The findings suggest that facial expressions of emotion convey not only a target's internal state, but also differentially convey interpersonal information, which could potentially seed trait inference.

Author note: This article constitutes a portion of my dissertation research at Stanford University, which was supported by a National Science Foundation Fellowship and an American Psychological Association Dissertation Award. Thanks to Nancy Alvarado, Chris Dryer, Paul Ekman, Bertram Malle, Susan Nolen-Hoeksema, Steven Sutton, Robert Zajonc, and more anonymous reviewers than I can count on one hand for their comments.

20.
Lipps (1907) presented a model of empathy which had an important influence on later formulations. According to Lipps, individuals tend to mimic an interaction partner's behavior, and this nonverbal mimicry induces, via a feedback process, the corresponding affective state in the observer. The resulting shared affect is believed to foster the understanding of the observed person's self. The present study tested this model in the context of judgments of emotional facial expressions. The results confirm that individuals mimic emotional facial expressions and that the decoding of facial expressions is accompanied by shared affect. However, no evidence was found that emotion recognition accuracy or shared affect is mediated by mimicry. Yet voluntary mimicry was found to have some limited influence on the observer's assessment of the observed person's personality. The implications of these results with regard to Lipps' original hypothesis are discussed.
