Similar Articles
20 similar articles found (search time: 609 ms)
1.
The perception of emotional facial expressions may activate corresponding facial muscles in the receiver, also referred to as facial mimicry. Facial mimicry is highly dependent on the context and type of facial expressions. While previous research almost exclusively investigated mimicry in response to pictures or videos of emotional expressions, studies with a real, face-to-face partner are still rare. Here we compared facial mimicry of angry, happy, and sad expressions and emotion recognition in a dyadic face-to-face setting. In sender-receiver dyads, we recorded facial electromyograms in parallel. Senders communicated to the receivers—with facial expressions only—the emotions felt during specific personal situations in the past, eliciting anger, happiness, or sadness. Receivers mimicked happiness most, sadness to a lesser degree, and anger least. In actor-partner interdependence models we showed that the receivers’ own facial activity influenced their ratings, which increased the agreement between the senders’ and receivers’ ratings for happiness, but not for angry and sad expressions. These results are in line with the Emotion Mimicry in Context View, holding that humans mimic happy expressions according to affiliative intentions. The mimicry of sad expressions is less intense, presumably because it signals empathy and might imply personal costs. Direct anger expressions are mimicked the least, possibly because anger communicates threat and aggression. Taken together, we show that incidental facial mimicry in a face-to-face setting is positively related to the recognition accuracy for non-stereotype happy expressions, supporting the functionality of facial mimicry.

2.
When we perceive the emotions of other people, we extract much information from the face. The present experiment used FACS (Facial Action Coding System), which is an instrument that measures the magnitude of facial action from a neutral face to a changed, emotional face. Japanese undergraduates judged the emotion in pictures of 66 static Japanese male faces (11 static pictures for each of six basic expressions: happiness, surprise, fear, anger, sadness, and disgust), ranging from neutral faces to maximally expressed emotions. The stimuli had previously been scored with FACS and were presented in random order. A high correlation between the subjects' judgments of facial expressions and the FACS scores was found.
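The correlation reported above—between observers' intensity judgments and FACS scores for the same stimuli—is a plain Pearson product-moment correlation. A minimal sketch, using invented example values (the study's actual scores and ratings are not given here):

```python
# Hypothetical illustration: correlating FACS action intensity scores
# with mean observer ratings for a set of expression pictures.
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented data: FACS intensity scores for six pictures of one expression,
# from neutral (0) to maximal, and the mean observer rating of each picture.
facs_scores = [0.0, 0.8, 1.5, 2.4, 3.1, 4.0]
ratings     = [1.0, 1.9, 2.6, 3.8, 4.2, 5.1]
r = pearson_r(facs_scores, ratings)
```

With roughly linear data like the invented values above, `r` approaches 1, the kind of "high correlation" the abstract describes.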

3.
We assessed the impact of social context on the judgment of emotional facial expressions as a function of self-construal and decoding rules. German and Greek participants rated spontaneous emotional faces shown either alone or surrounded by other faces with congruent or incongruent facial expressions. Greek participants were higher in interdependence than German participants. In line with cultural decoding rules, Greek participants rated anger expressions less intensely and sad and disgust expressions more intensely. Social context affected the ratings by both groups in different ways. In the more interdependent culture (Greece) participants perceived anger least intensely when the group showed neutral expressions, whereas sadness expressions were rated as most intense in the absence of social context. In the independent culture (Germany) a group context (others expressing anger or happiness) additionally amplified the perception of angry and happy expressions. In line with the notion that these effects are mediated by more holistic processing linked to higher interdependence, this difference disappeared when we controlled for interdependence on the individual level. The findings confirm the usefulness of considering both country level and individual level factors when studying cultural differences.

4.
While numerous studies have investigated children’s recognition of facial emotional expressions, little evidence has been gathered concerning their explicit knowledge of the components included in such expressions. Thus, we investigated children’s knowledge of the facial components involved in the expressions of happiness, sadness, anger, and surprise. Four- and 5-year-old Japanese children were presented with the blank face of a young character, and asked to select facial components in order to depict the emotions he felt. Children’s overall performance in the task increased as a function of age, and was above chance level for each emotion in both age groups. Children were likely to select the Cheek raiser and Lip corner puller to depict happiness, the Inner brow raiser, Brow lowerer, and Lid droop to depict sadness, the Brow lowerer and Upper lid raiser to depict anger, and the Upper lid raiser and Jaw drop to depict surprise. Furthermore, older children demonstrated a better knowledge of the involvement of the Upper lid raiser in surprise expressions.

5.
The aim of the current study was to investigate the influence of happy and sad mood on facial muscular reactions to emotional facial expressions. Following film clips intended to induce happy and sad mood states, participants observed faces with happy, sad, angry, and neutral expressions while their facial muscular reactions were recorded electromyographically. Results revealed that after watching the happy clip participants showed congruent facial reactions to all emotional expressions, whereas watching the sad clip led to a general reduction of facial muscular reactions. Results are discussed with respect to the information processing style underlying the lack of mimicry in a sad mood state and also with respect to the consequences for social interactions and for embodiment theories.

6.
Physical attractiveness is suggested to be an indicator of biological quality and therefore should be stable. However, transient factors such as gaze direction and facial expression affect facial attractiveness, suggesting it is not. We compared the relative importance of variation between faces with variation within faces due to facial expressions. 128 participants viewed photographs of 14 men and 16 women displaying the six basic facial expressions (anger, disgust, fear, happiness, sadness, surprise) and a neutral expression. Each rater saw each model only once with a randomly chosen expression. The effect of expressions on attractiveness was similar in male and female faces, although several expressions were not significantly different from each other. Identity was 2.2 times as important as emotion in attractiveness for both male and female pictures, suggesting that attractiveness is stable. Since the hard tissues of the face are unchangeable, people may still be able to perceive facial structure whatever expression the face is displaying, and still make attractiveness judgements based on structural cues.

7.
This study examined the emergence of affect specificity in infancy. In this study, infants received verbal and facial signals of 2 different, negatively valenced emotions (fear and sadness) as well as neutral affect via a television monitor to determine if they could make qualitative distinctions among emotions of the same valence. Twenty 12‐ to 14‐month‐olds and 20 16‐ to 18‐month‐olds were examined. Results suggested that younger infants showed no evidence of referential specificity, as they responded similarly to both the target and distracter toys, and showed no evidence of affect specificity, showing no difference in play between affect conditions. Older infants, in contrast, showed evidence both of referential and affect specificity. With respect to affect specificity, 16‐ to 18‐month‐olds touched the target toy less in the fear condition than in the sad condition and showed a larger proportion of negative facial expressions in the sad condition versus the fear condition. These findings suggest a developmental emergence after 15 months of age for affect specificity in relating emotional messages to objects.

8.
Facial expressions of emotion influence interpersonal trait inferences
Theorists have argued that facial expressions of emotion serve the interpersonal function of allowing one animal to predict another's behavior. Humans may extend these predictions into the indefinite future, as in the case of trait inference. The hypothesis that facial expressions of emotion (e.g., anger, disgust, fear, happiness, and sadness) affect subjects' interpersonal trait inferences (e.g., dominance and affiliation) was tested in two experiments. Subjects rated the dispositional affiliation and dominance of target faces with either static or apparently moving expressions. They inferred high dominance and affiliation from happy expressions, high dominance and low affiliation from angry and disgusted expressions, and low dominance from fearful and sad expressions. The findings suggest that facial expressions of emotion convey not only a target's internal state, but also differentially convey interpersonal information, which could potentially seed trait inference. This article constitutes a portion of my dissertation research at Stanford University, which was supported by a National Science Foundation Fellowship and an American Psychological Association Dissertation Award. Thanks to Nancy Alvarado, Chris Dryer, Paul Ekman, Bertram Malle, Susan Nolen-Hoeksema, Steven Sutton, Robert Zajonc, and more anonymous reviewers than I can count on one hand for their comments.

9.
This study found that the facial action of moderately or widely opening the mouth is accompanied by brow raising in infants, thus producing surprise expressions in non-surprise situations. Infants (age = 5 months and 7 months) were videotaped as they were presented with toys that they often grasped and brought to their mouths. Episodes of mouth opening were identified and accompanying brow, nose, and eyelid movements were coded. Results indicated that mouth opening is selectively associated with raised brows rather than with other brow movements. Trace levels of eyelid raising also tended to accompany this facial configuration. The findings are discussed in terms of a dynamical systems theory of facial behavior and suggest that facial expression cannot be used as investigators' sole measure of surprise in infants. This research was conducted as part of the second author's undergraduate honors program project and was supported in part by a grant from the NICHHD #1RO1 HD 22399-A3 awarded to G. F. Michel.

10.
Subjects were exposed to a sequence of facial expression photographs to determine whether viewing earlier expressions in a sequence would alter intensity judgments of a final expression in the sequence. Results showed that whether the preceding expressions were shown by the same person who displayed the final expression, or different people, intensity ratings of both sad and happy final expressions were enhanced when preceded by a sequence of contrasting as opposed to similar or identical facial expressions. Results are discussed from the perspective of adaptation-level theory.

11.
Self-report studies have found evidence that cultures differ in the display rules they have for facial expressions (i.e., for what is appropriate for different people at different times). However, observational studies of actual patterns of facial behavior have been rare and typically limited to the analysis of dozens of participants from two or three regions. We present the first large-scale evidence of cultural differences in observed facial behavior, including 740,984 participants from 12 countries around the world. We used an Internet-based framework to collect video data of participants in two different settings: in their homes and in market research facilities. Using computer vision algorithms designed for this dataset, we measured smiling and brow furrowing expressions as participants watched television ads. Our results reveal novel findings and provide empirical evidence to support theories about cultural and gender differences in display rules. Participants from more individualist cultures displayed more brow furrowing overall, whereas smiling depended on both culture and setting. Specifically, participants from more individualist countries were more expressive in the facility setting, while participants from more collectivist countries were more expressive in the home setting. Female participants displayed more smiling and less brow furrowing than male participants overall, with the latter difference being more pronounced in more individualist countries. This is the first study to leverage advances in computer science to enable large-scale observational research that would not have been possible using traditional methods.

12.
Studies with socially anxious adults suggest that social anxiety is associated with problems in decoding other persons' facial expressions of emotions. Corresponding studies with socially anxious children are lacking. The aim of the present study was to test whether socially phobic children show deficits in classifying facial expressions of emotions or show a response bias for negative facial expressions. Fifty socially anxious and 25 socially non-anxious children (age 8 to 12) participated in the study. Pictures of faces with either neutral, positive (joyful) or negative (angry, disgusted, sad) facial expressions (24 per category) were presented for 60 ms on a monitor screen in random order. The children were asked to indicate by pressing a key whether the facial expression was neutral, positive, or negative, and to rate how confident they were about their classification. With regard to frequency of errors, the socially anxious children reported significantly more often that they saw emotions when neutral faces were presented. Moreover, their reaction times were longer. However, they did not feel less certain about their performance. There was neither an indication of an enhanced ability to decode negative facial expressions in socially anxious children, nor a specific tendency to interpret neutral or positive faces as negative.

13.
We examined 6‐month‐old infants' abilities to discriminate smiling and frowning from neutral stimuli. In addition, we assessed the relationship between infants' preferences for varying intensities of smiling and frowning facial expressions and their mothers' history of depressive symptoms. Forty‐six infants were presented pairs of facial expressions, and their preferential looking time was recorded. They also participated in a 3‐min interaction with their mothers for which duration of both mother and infant gazing and smiling were coded. Analyses revealed that the infants reliably discriminated between varying intensities of smiling and frowning facial expressions and a paired neutral expression. In addition, infants' preferences for smiling and frowning expressions were related to self‐reports of maternal depressive symptoms experienced since the birth of the infant. Potential implications for social cognitive development are discussed.

14.
Sullivan, M. W., & Lewis, M. (2012). Infancy, 17(2), 159–178.
Infants and their mothers participated in a longitudinal study of the sequelae of infant goal blockage responses. Four-month-old infants participated in a standard contingency learning/goal blockage procedure during which anger and sad facial expressions to the blockage were coded. When infants were 12 and 20 months old, mothers completed a questionnaire about their children's tantrums. Tantrum scores increased with age and boys tended to show more tantrum behavior than girls. Anger expressed to goal blockage at 4 months was unrelated to tantrum behavior. There was a gender by sad expression interaction. Girls who expressed sadness in response to the goal blockage had lower total tantrum scores than boys; otherwise there was no difference. These results suggest that the tantrums of infants who display sad, rather than angry, expressions in response to goal blockage are differentially influenced by children's gender.

15.
Sex, age and education differences in facial affect recognition were assessed within a large sample (n = 7,320). Results indicate superior performance by females and younger individuals in the correct identification of facial emotion, with the largest advantage for low intensity expressions. Though there were no demographic differences for identification accuracy on neutral faces, controlling for response biases by males and older individuals to label faces as neutral revealed sex and age differences for these items as well. This finding suggests that inferior facial affect recognition performance by males and older individuals may be driven primarily by instances in which they fail to detect the presence of emotion in facial expressions. Older individuals also demonstrated a greater tendency to label faces with negative emotion choices, while females exhibited a response bias for sad and fear. These response biases have implications for understanding demographic differences in facial affect recognition.

16.
The relation between knowledge of American Sign Language (ASL) and the ability to decode facial expressions of emotion was explored in this study. Subjects were 60 college students, half of whom were intermediate level students of ASL and half of whom had no exposure to a signed language. Subjects viewed and judged silent video segments of stimulus persons experiencing spontaneous emotional reactions representing either happiness, sadness, anger, disgust, or fear/surprise. Results indicated that hearing subjects knowledgeable in ASL were generally better than hearing non-signers at identifying facial expressions of emotion, although there were variations in decoding accuracy regarding the specific emotion being judged. In addition, females were more successful decoders than males. Results have implications for better understanding the nature of nonverbal communication in deaf and hearing individuals. We are grateful to Karl Scheibe for comments on an earlier version of this paper and to Erik Coats for statistical analysis. This study was conducted as part of a Senior Honors thesis at Wesleyan University.

17.
Cognitive models of social anxiety provide a basis for predicting that the ability to process nonverbal information accurately and quickly should be impaired during the experience of state anxiety. To test this hypothesis, we assigned participants to threatening and non-threatening conditions and asked them to label the emotions expressed in a series of faces. It was predicted that social anxiety would be positively associated with errors and response times in threatening conditions, but not in a non-threatening condition. It was also predicted that high social anxiety would be associated with more errors and longer response times when identifying similar expressions such as sadness, anger, and fear. The results indicate that social anxiety was not associated with errors in identifying facial expressions of emotion, regardless of the level of state anxiety experienced. However, social anxiety scores were found to be significantly related to response times to identify facial expressions, but the relationship varied depending on the level of state anxiety experienced. Methodological and theoretical implications of using response time data when assessing nonverbal ability are discussed.

18.
The present study examined effects of temporarily salient and chronic self-construal on decoding accuracy for positive and negative facial expressions of emotion. We primed independent and interdependent self-construal in a sample of participants who then rated the emotion expressions of a central character (target) in a cartoon showing a happy, sad, angry, or neutral facial expression in a group setting. Primed interdependence was associated with lower recognition accuracy for negative emotion expressions. Primed and chronic self-construal interacted such that for interdependence primed participants, higher chronic interdependence was associated with lower decoding accuracy for negative emotion expressions. Chronic independent self-construal was associated with higher decoding accuracy for negative emotion. These findings add to an increasing literature that highlights the significance of perceivers’ socio-cultural factors, self-construal in particular, for emotion perception.

19.
A method was developed for automated coding of facial behavior in computer-aided test or game situations. Facial behavior is registered automatically with the aid of small plastic dots which are affixed to pre-defined regions of the subject's face. During a task, the subject's face is videotaped, and the picture is digitized. A special pattern-recognition algorithm identifies the dot pattern, and an artificial neural network classifies the dot pattern according to the Facial Action Coding System (FACS; Ekman & Friesen, 1978). The method was tested in coding the posed facial expressions of three subjects, themselves FACS experts. Results show that it is possible to identify and differentiate facial expressions by their corresponding dot patterns. The method is independent of individual differences in physiognomy.
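The pipeline described above—tracked dot positions compared against a neutral face, then classified into FACS categories—can be sketched in miniature. This is not the study's implementation: the original used a trained artificial neural network, which is replaced here by a nearest-centroid classifier over per-dot displacement vectors, and all dot coordinates, prototypes, and labels are invented for illustration.

```python
# Hypothetical sketch of the dot-pattern classification step: displacements
# of facial dots from their neutral positions are matched to the closest
# prototype displacement pattern for a FACS-style action unit.
import math

def displacement(neutral, current):
    """Per-dot (dx, dy) displacement from the neutral-face dot positions."""
    return [(cx - nx, cy - ny) for (nx, ny), (cx, cy) in zip(neutral, current)]

def pattern_distance(a, b):
    """Euclidean distance between two displacement patterns of equal length."""
    return math.sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2
                         for (ax, ay), (bx, by) in zip(a, b)))

def classify(pattern, prototypes):
    """Return the label whose prototype displacement pattern is nearest."""
    return min(prototypes, key=lambda label: pattern_distance(pattern, prototypes[label]))

# Invented coordinates: two dots at the mouth corners (y grows downward).
neutral = [(10.0, 20.0), (30.0, 20.0)]
smile   = [(8.0, 17.0), (32.0, 17.0)]   # corners pulled up and outward
prototypes = {
    "AU12_lip_corner_puller":    [(-2.0, -3.0), (2.0, -3.0)],
    "AU15_lip_corner_depressor": [(-1.0, 3.0), (1.0, 3.0)],
}
label = classify(displacement(neutral, smile), prototypes)
```

In the study a neural network learned this mapping from examples; the nearest-centroid stand-in only illustrates why displacement patterns, rather than raw positions, make the coding independent of individual physiognomy.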

20.
The impact of singular (e.g. sadness alone) and compound (e.g. sadness and anger together) facial expressions on individuals' recognition of faces was investigated. In three studies, a face recognition paradigm was used as a measure of the proficiency with which participants processed compound and singular facial expressions. For both positive and negative facial expressions, participants displayed greater proficiency in processing compound expressions relative to singular expressions. Specifically, the accuracy with which faces displaying compound expressions were recognized was significantly higher than the accuracy with which faces displaying singular expressions were recognized. Possible explanations involving the familiarity, distinctiveness, and salience of the facial expressions are discussed.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号