Similar Articles
20 similar articles found.
1.
The present study examined effects of temporarily salient and chronic self-construal on decoding accuracy for positive and negative facial expressions of emotion. We primed independent and interdependent self-construal in a sample of participants who then rated the emotion expressions of a central character (target) in a cartoon showing a happy, sad, angry, or neutral facial expression in a group setting. Primed interdependence was associated with lower recognition accuracy for negative emotion expressions. Primed and chronic self-construal interacted such that for interdependence-primed participants, higher chronic interdependence was associated with lower decoding accuracy for negative emotion expressions. Chronic independent self-construal was associated with higher decoding accuracy for negative emotion. These findings add to a growing literature that highlights the significance of perceivers' socio-cultural factors, self-construal in particular, for emotion perception.

2.
Whether posed expressions (e.g., actor portrayals) differ from spontaneous expressions has been the subject of much debate in the study of vocal emotion expression. In the present investigation, we assembled a new database consisting of 1,877 voice clips from 23 datasets, and used it to systematically compare spontaneous and posed expressions across three experiments. Results showed that (a) spontaneous expressions were generally rated as more genuinely emotional than were posed expressions, even when controlling for differences in emotion intensity, (b) the two stimulus types differed in their acoustic characteristics, and (c) spontaneous expressions with a high emotion intensity conveyed discrete emotions to listeners to a similar degree as has previously been found for posed expressions, supporting a dose–response relationship between intensity of expression and discreteness in perceived emotions. Our conclusion is that there are reliable differences between spontaneous and posed expressions, though not necessarily in the ways commonly assumed. Implications for emotion theories and the use of emotion portrayals in studies of vocal expression are discussed.

3.
Adults' perceptions provide information about the emotional meaning of infant facial expressions. This study asks whether similar facial movements influence adult perceptions of emotional intensity in both infant positive (smile) and negative (cry face) facial expressions. Ninety-five college students rated a series of naturally occurring and digitally edited images of infant facial expressions. Naturally occurring smiles and cry faces involving the co-occurrence of greater lip movement, mouth opening, and eye constriction were rated as expressing stronger positive and negative emotion, respectively, than expressions without these three features. Ratings of digitally edited expressions indicated that eye constriction contributed to higher ratings of positive emotion in smiles (i.e., in Duchenne smiles) and that greater eye constriction contributed to higher ratings of negative emotion in cry faces. Stronger mouth opening contributed to higher ratings of arousal in both smiles and cry faces. These findings indicate that a set of similar facial movements is linked to perceptions of greater emotional intensity, whether the movements occur in positive or negative infant emotional expressions. These findings are discussed with reference to discrete, componential, and dynamic systems theories of emotion.

4.
Human body postures provide perceptual cues that can be used to discriminate and recognize emotions. It was previously found that 7-month-olds' fixation patterns discriminated fear from other emotion body expressions, but it is not clear whether they also process the emotional content of those expressions. The emotional content of visual stimuli can increase arousal level, resulting in pupil dilation. To provide evidence that infants also process the emotional content of expressions, we analyzed variations in pupil size in response to emotion stimuli. Forty-eight 7-month-old infants viewed adult body postures expressing anger, fear, happiness, and neutral expressions while their pupil size was measured. There was a significant emotion effect between 1,040 and 1,640 ms after image onset, when fear elicited larger pupil dilations than neutral expressions. A similar trend was found for anger expressions. Our results suggest that infants show increased arousal to negative-valence body expressions. Thus, in combination with previous fixation results, the pupil data show that infants as young as 7 months can perceptually discriminate static body expressions and process the emotional content of those expressions. The results extend information about infant processing of emotion expressions conveyed through other means (e.g., faces).

5.
Darwin (1872) hypothesized that some facial muscle actions associated with emotion cannot be consciously inhibited, particularly when the to-be-concealed emotion is strong. The present study investigated emotional "leakage" in deceptive facial expressions as a function of emotional intensity. Participants viewed low- or high-intensity disgusting, sad, frightening, and happy images, responding to each with a 5 s videotaped genuine or deceptive expression. Each 1/30 s frame of the 1,711 expressions (256,650 frames in total) was analyzed for the presence and duration of universal expressions. Results strongly supported the inhibition hypothesis. In general, emotional leakage lasted longer in both the upper and lower face during high-intensity masked expressions than during low-intensity masked expressions. High-intensity emotion was more difficult to conceal than low-intensity emotion during emotional neutralization, leading to a greater likelihood of emotional leakage in the upper face. The greatest and least amounts of emotional leakage occurred during fearful and happy expressions, respectively. Untrained observers were unable to discriminate genuine from deceptive expressions above chance level.
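The frame total follows directly from the design reported above: each of the 1,711 expressions was a 5 s clip sampled at 30 frames per second, so 1,711 × 5 × 30 = 1,711 × 150 = 256,650 frames.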

6.
Facial expressions of emotion influence interpersonal trait inferences
Theorists have argued that facial expressions of emotion serve the interpersonal function of allowing one animal to predict another's behavior. Humans may extend these predictions into the indefinite future, as in the case of trait inference. The hypothesis that facial expressions of emotion (e.g., anger, disgust, fear, happiness, and sadness) affect subjects' interpersonal trait inferences (e.g., dominance and affiliation) was tested in two experiments. Subjects rated the dispositional affiliation and dominance of target faces with either static or apparently moving expressions. They inferred high dominance and affiliation from happy expressions, high dominance and low affiliation from angry and disgusted expressions, and low dominance from fearful and sad expressions. The findings suggest that facial expressions of emotion convey not only a target's internal state, but also differentially convey interpersonal information, which could potentially seed trait inference. This article constitutes a portion of my dissertation research at Stanford University, which was supported by a National Science Foundation Fellowship and an American Psychological Association Dissertation Award. Thanks to Nancy Alvarado, Chris Dryer, Paul Ekman, Bertram Malle, Susan Nolen-Hoeksema, Steven Sutton, Robert Zajonc, and more anonymous reviewers than I can count on one hand for their comments.

7.
The Intensity of Emotional Facial Expressions and Decoding Accuracy
The influence of the physical intensity of emotional facial expressions on perceived intensity and emotion category decoding accuracy was assessed for expressions of anger, disgust, sadness, and happiness. The facial expressions of two men and two women posing each of the four emotions were used as stimuli. Six different levels of intensity of expression were created for each pose using a graphics morphing program. Twelve men and 12 women rated each of the 96 stimuli for perceived intensity of the underlying emotion and for the qualitative nature of the emotion expressed. The results revealed that perceived intensity varied linearly with the manipulated physical intensity of the expression. Emotion category decoding accuracy varied largely linearly with the manipulated physical intensity of the expression for expressions of anger, disgust, and sadness. For the happiness expressions only, the findings were consistent with a categorical judgment process. Sex of encoder produced significant effects for both dependent measures. These effects remained even after possible gender differences in encoding were controlled for, suggesting a perceptual bias on the part of the decoders.
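As an illustration of the kind of intensity manipulation described above, here is a minimal Python sketch that generates six intensity levels by linearly cross-fading a neutral photograph into a full-intensity expression. It is a sketch under stated assumptions, not the study's actual procedure: the file paths and function name are invented, the images are assumed pre-aligned and equal-sized, and a real graphics morphing program would also warp facial geometry rather than only blend pixels.

```python
# Hypothetical sketch of an intensity-morph manipulation (not the study's tool).
import numpy as np
from PIL import Image

def morph_levels(neutral_path, expression_path, n_levels=6):
    """Return n_levels images cross-faded from neutral toward full expression."""
    neutral = np.asarray(Image.open(neutral_path).convert("L"), dtype=float)
    expression = np.asarray(Image.open(expression_path).convert("L"), dtype=float)
    levels = []
    for k in range(1, n_levels + 1):
        alpha = k / n_levels  # physical intensity: 1/6, 2/6, ..., 6/6
        blend = (1 - alpha) * neutral + alpha * expression
        levels.append(Image.fromarray(blend.astype(np.uint8)))
    return levels

# Usage (paths are placeholders):
# stimuli = morph_levels("poser1_neutral.png", "poser1_anger.png")
```

A linear blend of this kind makes "physical intensity" a single scalar, which is what allows the abstract's linear relationship between manipulated and perceived intensity to be tested.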

8.
Nonverbally expressed emotions are not always linked to people's true emotions. We investigated whether observers' ability to distinguish truths from lies differs for positive and negative emotional expressions. Participants judged targets either simulating or truly experiencing positive or negative emotions. Deception detection was measured by participants' inference of the targets' emotions and by their direct judgments of deception. Results on the direct measure showed that participants could not accurately distinguish between truth tellers and liars, regardless of which emotion was expressed. As anticipated, the effects emerged on the indirect emotion measure: participants distinguished liars from truth tellers when inferring experienced emotions from negative emotional expressions, but not from positive emotional expressions.

9.
Twenty-five high-functioning, verbal children and adolescents with autism spectrum disorders (ASD; age range 8–15 years) who demonstrated a facial emotion recognition deficit were block-randomized to an active intervention (n = 12) or waitlist control (n = 13) group. The intervention was a modification of a commercially available, computerized, dynamic facial emotion training tool, the MiX by Humintell©. Modifications were introduced to address the special learning needs of individuals with ASD and to address limitations in current emotion recognition programs. They included coach assistance; a combination of didactic instruction for seven basic emotions; scaffolded instruction, which included repeated practice with increased presentation speeds; guided attention to relevant facial cues; and imitation of expressions. Training occurred twice each week for 45–60 min across an average of six sessions. Outcome measures were administered prior to and immediately after treatment, as well as after a delay period of 4–6 weeks. Outcome measures included (a) direct assessment of facial emotion recognition, (b) emotion self-expression, and (c) generalization through emotion awareness in videos and stories, use of emotion words, and self-, parent-, and teacher-report on social functioning questionnaires. The facial emotion training program enabled children and adolescents with ASD to identify feelings in facial expressions more accurately and quickly, with stimuli from both the training tool and the generalization measures, and to demonstrate improved self-expression of facial emotion.

10.
We report data concerning cross-cultural judgments of emotion in spontaneously produced facial expressions. Americans, Japanese, British, and international students in the US reliably attributed emotions to the expressions of Olympic judo athletes at the end of a match for a medal, and at two times during the subsequent medal ceremonies. There were some observer culture differences in absolute attribution agreement rates, but high cross-cultural agreement in differences in attribution rates across expressions (relative agreement rates). Moreover, we operationalized signal clarity and demonstrated that it was associated with agreement rates similarly in all cultures. Finally, we obtained judgments of won-lost match outcomes and medal finish, and demonstrated that the emotion judgments were associated with accuracy in judgments of outcomes. These findings demonstrated that members of different cultures reliably judge spontaneously expressed emotions, and that across observer cultures, lower absolute agreement rates are related to noise produced by non-emotional facial behaviors. Also, the findings suggested that observers of different cultures utilize the same facial cues when judging emotions, and that the signal value of facial expressions is similar across cultures.

11.
Sex, age, and education differences in facial affect recognition were assessed within a large sample (n = 7,320). Results indicate superior performance by females and younger individuals in the correct identification of facial emotion, with the largest advantage for low-intensity expressions. Though there were no demographic differences in identification accuracy for neutral faces, controlling for the response biases of males and older individuals to label faces as neutral revealed sex and age differences for these items as well. This finding suggests that inferior facial affect recognition performance by males and older individuals may be driven primarily by instances in which they fail to detect the presence of emotion in facial expressions. Older individuals also demonstrated a greater tendency to label faces with negative emotion choices, while females exhibited a response bias toward sad and fear labels. These response biases have implications for understanding demographic differences in facial affect recognition.

12.
This study examined 12- and 13-month-old infants' behavioral strategies for emotion regulation, emotional expressions, regulatory styles, and attachment quality with fathers and mothers. Eighty-five infants participated in the Strange Situation procedure to assess attachment quality with mothers and fathers. Infants' behavioral strategies for emotion regulation were examined with each parent during a competing demands task. Emotion regulation styles were meaningfully related to infant-father attachment quality. Although expressions of distress and positive affect were not consistent across mothers and fathers, there was consistency in infant strategy use, emotion regulation style, and attachment quality with mothers and fathers. Furthermore, infants who were securely attached to both parents showed greater consistency in parent-oriented strategies than infants who were insecurely attached to one or both parents. Limitations of this study include the constrained laboratory setting, potential carryover effects, and a homogeneous, middle-class sample.

13.
Journal of Nonverbal Behavior - Age-related deficits are often observed in emotion categorization tasks that include negative emotional expressions like anger, fear, and sadness. Stimulus...

14.
Cross-cultural and laboratory research indicates that some facial expressions of emotion are recognized more accurately and faster than others. We assessed the hypothesis that such differences depend on the frequency with which each expression occurs in social encounters. Thirty observers recorded how often they saw different facial expressions under natural conditions in their daily life. Over a total of 90 days (3 days per observer), 2,462 samples of seen expressions were collected. Among the basic expressions, happy faces were observed most frequently (31%), followed by surprised (11.3%), sad (9.3%), angry (8.7%), disgusted (7.2%), and fearful faces, which were the least frequent (3.4%). A significant proportion (29%) of non-basic emotional expressions (e.g., pride or shame) was also observed. We correlated our frequency data with recognition accuracy and response latency data from prior studies. In support of the hypothesis, significant correlations (generally above .70) emerged, with recognition accuracy increasing and latency decreasing as a function of frequency. We conclude that the efficiency of facial emotion recognition is modulated by the familiarity of the expressions.
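The frequency-efficiency analysis above reduces to a simple correlation, sketched below in Python. The frequency percentages come directly from the abstract; the accuracy values are hypothetical placeholders standing in for the prior-study recognition data, which are not reported here.

```python
# Minimal sketch of the frequency-recognition correlation described above.
from statistics import correlation  # Python 3.10+

emotions  = ["happy", "surprised", "sad", "angry", "disgusted", "fearful"]
frequency = [31.0, 11.3, 9.3, 8.7, 7.2, 3.4]     # % of daily-life samples (from the abstract)
accuracy  = [0.95, 0.85, 0.80, 0.78, 0.74, 0.68]  # hypothetical placeholder values

r = correlation(frequency, accuracy)  # Pearson's r
print(f"r = {r:.2f}")  # the paper reports correlations generally above .70
```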

15.
The perception of emotional facial expressions may activate corresponding facial muscles in the receiver, a phenomenon referred to as facial mimicry. Facial mimicry is highly dependent on the context and type of facial expression. While previous research almost exclusively investigated mimicry in response to pictures or videos of emotional expressions, studies with a real, face-to-face partner are still rare. Here we compared facial mimicry of angry, happy, and sad expressions, and emotion recognition, in a dyadic face-to-face setting. In sender-receiver dyads, we recorded facial electromyograms in parallel. Senders communicated to the receivers, using facial expressions only, the emotions felt during specific personal situations in the past that elicited anger, happiness, or sadness. Receivers mimicked happiness most, sadness to a lesser degree, and anger least. In actor-partner interdependence models, we showed that the receivers' own facial activity influenced their ratings, which increased the agreement between senders' and receivers' ratings for happiness, but not for angry or sad expressions. These results are in line with the Emotion Mimicry in Context view, which holds that humans mimic happy expressions in accordance with affiliative intentions. The mimicry of sad expressions is less intense, presumably because it signals empathy and might imply personal costs. Direct anger expressions are mimicked the least, possibly because anger communicates threat and aggression. Taken together, we show that incidental facial mimicry in a face-to-face setting is positively related to recognition accuracy for non-stereotypical happy expressions, supporting the functionality of facial mimicry.
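For readers unfamiliar with actor-partner interdependence models, the generic textbook form (standard APIM notation, not the paper's exact model specification) expresses each dyad member's outcome as a function of both members' predictors:

$Y_{\text{receiver}} = a\,X_{\text{receiver}} + p\,X_{\text{sender}} + e$

where $a$ is the actor effect (here, the receiver's own facial activity predicting the receiver's emotion rating) and $p$ is the partner effect (the sender's behavior predicting that same rating).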

16.
Journal of Nonverbal Behavior - Past research has demonstrated that children understand distinct emotion concepts and can accurately recognize facial expressions of distinct emotions by a young...

17.
The current study examined the effects of institutionalization on the discrimination of facial expressions of emotion in three groups of 42-month-old children. One group consisted of children abandoned at birth who were randomly assigned to Care-as-Usual (institutional care) following a baseline assessment. Another group consisted of children abandoned at birth who were randomly assigned to high-quality foster care following a baseline assessment. A third group consisted of never-institutionalized children who were reared by their biological parents. All children were familiarized to happy, sad, fearful, and neutral facial expressions and tested on their ability to discriminate familiar versus novel facial expressions. Contrary to our prediction, all three groups of children were equally capable of discriminating among the different expressions. Furthermore, in contrast to findings at 13–30 months of age, these same children showed familiarity rather than novelty preferences toward different expressions. There were also asymmetries in children's discrimination of facial expressions depending on which facial expression served as the familiar versus novel stimulus. Collectively, early institutionalization appears not to impact the development of the ability to discriminate facial expressions of emotion, at least when preferential looking serves as the dependent measure. These findings are discussed in the context of the myriad domains that are affected by early institutionalization.

18.
Carroll E. Izard, Infancy, 2004, 6(3), 417–423
Bennett, Bendersky, and Lewis (2002) highlighted a need for revision, or at least clarification, of aspects of differential emotions theory (DET) that relate to the development of facial expressions of discrete emotions. Their article reveals a need for a better theoretical integration of propositions about the emergence of discrete emotions, the generality and flexibility of emotion responding, and the issue of specificity in event-emotion relations. Bennett et al. tested and partly disconfirmed a hard version of an event-emotion hypothesis that predicts a specific discrete emotion expression for a specific stimulus at a particular age (4 months). They noted that some statements of DET supported their hypothesis, whereas others did not. I clarify the relevant theoretical issues and formulate a soft hypothesis of event-emotion relations. I also suggest methodological changes that may prove necessary to verify or disconfirm hypotheses relating to infants' capacity to encode a specific discrete emotion expression at a given age.

19.
This article introduces the Children's Scales of Pleasure and Arousal as instruments to enable children to provide judgments of emotions they witness or experience along the major dimensions of affect. In two studies (Study 1: N = 160, 3–11 years and adults; Study 2: N = 280, 3–5 years and adults), participants used the scales to indicate the levels of pleasure or arousal they perceived in stylized drawings of facial expressions, in photographs of facial expressions, or in emotion labels. All age groups used the Pleasure Scale reliably and accurately with all three types of stimuli. All used the Arousal Scale with stylized faces and with facial expressions, but only 5-year-olds did so for emotion labels.

20.
This preliminary study presents data on training to improve the accuracy of judging facial expressions of emotion, a core component of emotional intelligence. Feedback following judgments of angry, fearful, sad, and surprised states indicated the correct answers as well as the difficulty level of the stimuli. Improvement was greater for emotional expressions originating from a cultural group more distant from participants' own family background, for which feedback likely provides more novel information. These results suggest that training via feedback can improve emotion perception skill. Thus, the current study also provides suggestive evidence for cultural learning in emotion, for which previous research has been cross-sectional and subject to selection biases. Hillary Anger Elfenbein is affiliated with Organizational Behavior and Industrial Relations, University of California, Berkeley, CA. This research was supported by National Institute of Mental Health Behavioral Science Track Award for Rapid Transition 1R03MH071294-1. I thank Howard Friedman, Ursula Hess, Abigail Marsh, and three anonymous reviewers for their helpful comments, and Ken Coelho and Cindy Lau for research assistance.

