Similar Literature
20 similar documents retrieved.
1.
Twenty subjects judged 80 video segments containing brief episodes of smiling behavior for expression intensity and happiness of the stimulus person. The video records were produced under instructions to (a) pose, (b) experience a happy feeling, or (c) both experience and show a happy feeling. An analysis of the integrated facial electromyogram (EMG), recorded over four muscle regions (zygomaticus major, depressor anguli oris, corrugator supercilii, and masseter), showed that judgments of happiness and of intensity of expression could be predicted in a multiple regression analysis (multiple R = .64 for perceived happiness and .79 for perceived expression intensity). The perception of happiness was affected by EMG activity in regions other than zygomaticus major. The use of parameters other than the mean of the integrated EMG, namely variance, skewness, kurtosis, and properties of the amplitude distributions across time, provided accurate classification of the elicitation conditions (posed happiness versus experienced happiness) in a discriminant analysis. For the discrimination of posed and felt smiles, variables describing aspects of facial activity in the temporal domain were more useful than any of the other measures. It is suggested that facial EMG can be a useful tool in the analysis of both the encoding and decoding of expressive behavior. The results indicate the advantage of using multiple-site EMG recordings as well as amplitude and temporal characteristics of the facial EMG measures. The research was supported in part by funds associated with the John Sloan Dickey Third Century Professorship (Kleck) and in part by grant BNS-8507600 from the National Science Foundation (Lanzetta). Ursula Hess and Arvid Kappas were supported by stipends from the Deutscher Akademischer Austauschdienst (German Academic Exchange Service).
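To make the feature set concrete, here is a minimal sketch (Python; the array layout, variable names, and use of scipy/scikit-learn are illustrative assumptions, not the authors' original analysis software) that summarizes each trial's integrated-EMG amplitude distribution by its mean, variance, skewness, and kurtosis, then separates posed from felt smiles with a linear discriminant analysis:

```python
# Sketch only: data layout and names are hypothetical, not the 1989 pipeline.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def amplitude_features(emg_trial):
    """Distribution features of one trial's integrated EMG (1-D array)."""
    return np.array([
        emg_trial.mean(),   # mean amplitude
        emg_trial.var(),    # variance
        skew(emg_trial),    # skewness of the amplitude distribution
        kurtosis(emg_trial) # kurtosis of the amplitude distribution
    ])

def classify_posed_vs_felt(trials, labels):
    """trials: list of 1-D integrated-EMG arrays; labels: 0 = posed, 1 = felt."""
    X = np.vstack([amplitude_features(t) for t in trials])
    lda = LinearDiscriminantAnalysis()
    return lda.fit(X, labels)  # lda.score(X, labels) gives classification accuracy
```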

2.
The facial expressions of emotion and the circumstances under which the expressions occurred in a sample of the most popular children's television programs were investigated in this study. Fifteen-second randomly selected intervals from episodes of five television programs were analyzed for displays of happiness, sadness, anger, fear, disgust, and surprise. In addition, the contexts in which the emotions occurred were examined. Results indicated that particular emotional expressions occurred at significantly different frequencies and that there was an association between emotional displays and emotion contexts. The high rate of emotional displays found in television shows has implications for the development of knowledge regarding emotional display rules in viewers. We are grateful to Sharon Galligan for assistance in coding part of the data and to Carolyn Saarni and Amy Halberstadt for helpful comments on an earlier draft of this paper. This research was supported in part by a grant from the National Institute of Disabilities and Rehabilitation Research, #GOO85351. The opinions expressed herein do not necessarily reflect the position or policy of the U.S. Department of Education.

3.
Subjects imagined situations during which they reported feeling happiness, sadness, anger, or fear, at both low and high levels of imagined sociality. Electromyographic (EMG) activity was recorded from four facial sites overlying the left forehead, brow, cheek, and lip. Controlling for reported emotion, facial EMG activity was influenced by the sociality of the imagery. Results corroborate previous findings of imaginary audience effects on smiling, and extend these effects to imagined situations that elicit dysphoria. This research was supported in part by a University of California Academic Senate grant to the first author. We thank Norman Severe for his assistance in running subjects.

4.
The hypotheses of this investigation were based on conceiving of facial mimicry reactions in face-to-face interactions as an early automatic component in the process of emotional empathy. Differences between individuals high and low in emotional empathy were investigated. The parameters compared were facial mimicry reactions, as represented by electromyographic (EMG) activity, when individuals were exposed to pictures of angry or happy faces. The present study distinguished between spontaneous facial reactions and facial expressions associated with more controlled or modulated emotions at different information-processing levels, first at a preattentive level and then consecutively at more consciously controlled levels: 61 participants were exposed to pictures at three different exposure times (17, 56, and 2350 ms). A significant difference in facial mimicry reactions between high- and low-empathy participants emerged at the short exposure time of 56 ms, representing automatic, spontaneous reactions, with high-empathy participants showing a significant mimicking reaction. The low-empathy participants did not display mimicking at any exposure time. On the contrary, in response to angry faces the low-empathy participants showed a tendency toward elevated activation in the cheek region, which is often associated with smiling.

5.
Although still-face effects are well studied, little is known about the degree to which the Face-to-Face/Still-Face (FFSF) procedure is associated with the production of intense affective displays. Duchenne smiling expresses more intense positive affect than non-Duchenne smiling, while Duchenne cry-faces express more intense negative affect than non-Duchenne cry-faces. Forty 4-month-old infants and their mothers completed the FFSF, and key affect-indexing facial Action Units (AUs) were coded by expert Facial Action Coding System coders for the first 30 s of each FFSF episode. Computer vision software, automated facial affect recognition (AFAR), identified AUs for the entire 2-min episodes. Expert coding and AFAR produced similar infant and mother Duchenne and non-Duchenne FFSF effects, highlighting the convergent validity of automated measurement. Substantive AFAR analyses indicated that both infant Duchenne and non-Duchenne smiling declined from the face-to-face (FF) to the still-face (SF) episode, but only Duchenne smiling increased from the SF to the reunion (RE) episode. In similar fashion, the magnitude of mother Duchenne smiling changes over the FFSF was 2–4 times greater than that of non-Duchenne smiling changes. Duchenne expressions appear to be a sensitive index of intense infant and mother affective valence that is accessible to automated measurement and may be a target for future FFSF research.
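The Duchenne/non-Duchenne distinction itself is straightforward to apply to automated AU output: per the Facial Action Coding System, a Duchenne smile combines AU12 (lip corner puller) with AU6 (cheek raiser), while AU12 without AU6 is a non-Duchenne smile. A hedged sketch follows; the frame-dictionary layout and the 0.5 occurrence threshold are assumptions, not the AFAR tool's actual interface:

```python
# Sketch only: AU detections per frame are assumed to arrive as
# {"AU06": score, "AU12": score, ...}; 0.5 is a hypothetical threshold.
from typing import Dict, List

def code_smile(frame_aus: Dict[str, float], threshold: float = 0.5) -> str:
    """Label one frame as a Duchenne smile, non-Duchenne smile, or no smile."""
    au6 = frame_aus.get("AU06", 0.0) >= threshold   # cheek raiser
    au12 = frame_aus.get("AU12", 0.0) >= threshold  # lip corner puller
    if au12 and au6:
        return "duchenne"
    if au12:
        return "non_duchenne"
    return "none"

def smile_proportions(frames: List[Dict[str, float]]) -> Dict[str, float]:
    """Proportion of frames in an episode showing each smile type."""
    labels = [code_smile(f) for f in frames]
    n = max(len(labels), 1)
    return {k: labels.count(k) / n for k in ("duchenne", "non_duchenne", "none")}
```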

6.
Disgust has recently been implicated in the development and maintenance of female sexual dysfunction, yet most empirical studies have been conducted with sexually healthy samples. The current study contributes to the literature by expanding the application of a disgust model of sexual functioning to a clinically relevant sample of women with low sexual desire/arousal and accompanying sexual distress. Young women (mean age = 19.12 years) with psychometrically defined sexual dysfunction (i.e., the female sexual interest/arousal disorder [FSIAD] group) and a healthy control group were compared in their affective (i.e., facial electromyography [EMG] and self-report) and autonomic (i.e., heart rate and electrodermal activity) responses to disgusting, erotic, positive, and neutral images. Significant differences were predicted in responses to erotic images only. Specifically, it was hypothesized that the FSIAD group would display affective and autonomic responses consistent with a disgust response, while responses from the control group would align with a general appetitive response. Results largely supported the study hypotheses. The FSIAD group displayed significantly greater negative facial affect, reported more subjective disgust, and showed greater heart rate deceleration than the control group in response to erotic stimuli. Greater subjective disgust corresponded with more sexual avoidance behavior. Planned follow-up analyses explored correlates of subjective disgust responses.
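Heart rate deceleration scores of the kind reported here are typically computed as mean post-onset heart rate minus a pre-stimulus baseline, so that negative values indicate deceleration. A minimal sketch, with window lengths as illustrative assumptions rather than the study's actual scoring parameters:

```python
# Sketch only: the 2 s baseline and 6 s response windows are hypothetical.
import numpy as np

def hr_deceleration(hr, onset, fs=1.0, baseline_s=2.0, response_s=6.0):
    """hr: 1-D heart rate series in bpm, sampled at fs Hz;
    onset: sample index of picture onset."""
    base = hr[onset - int(baseline_s * fs):onset].mean()  # pre-stimulus baseline
    resp = hr[onset:onset + int(response_s * fs)].mean()  # post-onset window
    return resp - base  # negative values indicate deceleration
```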

7.
This work constitutes a systematic review of the empirical literature on emotional facial expressions displayed in the context of family interaction. Searches of electronic databases from January 1990 until December 2016 generated close to 4400 articles, of which only 26 met the inclusion criteria. The evidence indicates that affective expressions were mostly examined through laboratory and naturalistic observations, within a wide range of interactive contexts in which mother–child dyads significantly outnumbered father–child dyads. Moreover, dyadic partners were found to match each other's displays, with positive and neutral facial expressions proving more frequent than negative facial expressions. Finally, researchers observed some developmental and gender differences regarding the frequency, intensity, and category of emotional displays and identified certain links among facial expression behavior, family relations, personal adjustment, and peer-related social competence.

8.
Facial expressions of emotions convey not only information about emotional states but also about interpersonal intentions. The present study investigated whether factors known to influence the decoding of emotional expressions—the gender and ethnicity of the stimulus person as well as the intensity of the expression—would also influence attributions of interpersonal intentions. For this, 145 men and women rated emotional facial expressions posed by both Caucasian and Japanese male and female stimulus persons on perceived dominance and affiliation. The results showed that the sex and the ethnicity of the encoder influenced observers' ratings of dominance and affiliation. For anger displays only, this influence was mediated by expectations regarding how likely it is that a particular encoder group would display anger. Further, affiliation ratings were influenced equally by low-intensity and high-intensity expressions, whereas only fairly intense emotional expressions affected attributions of dominance.

9.
The perception of emotional facial expressions may activate corresponding facial muscles in the receiver, a phenomenon referred to as facial mimicry. Facial mimicry is highly dependent on the context and type of facial expressions. While previous research almost exclusively investigated mimicry in response to pictures or videos of emotional expressions, studies with a real, face-to-face partner are still rare. Here we compared facial mimicry of angry, happy, and sad expressions and emotion recognition in a dyadic face-to-face setting. In sender-receiver dyads, we recorded facial electromyograms in parallel. Senders communicated to the receivers—with facial expressions only—the emotions felt during specific personal situations in the past, eliciting anger, happiness, or sadness. Receivers mimicked happiness most, sadness to a lesser degree, and anger least. In actor-partner interdependence models, we showed that the receivers' own facial activity influenced their ratings, which increased the agreement between the senders' and receivers' ratings for happy, but not for angry and sad, expressions. These results are in line with the Emotion Mimicry in Context view, which holds that humans mimic happy expressions according to affiliative intentions. The mimicry of sad expressions is less intense, presumably because it signals empathy and might imply personal costs. Direct anger expressions are mimicked the least, possibly because anger communicates threat and aggression. Taken together, we show that incidental facial mimicry in a face-to-face setting is positively related to recognition accuracy for non-stereotypical happy expressions, supporting the functionality of facial mimicry.
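For readers unfamiliar with actor-partner interdependence models, the receiver-side analysis described above can be approximated as a mixed-effects regression of each receiver's rating on the receiver's own facial activity (actor effect) and the sender's activity (partner effect), with a random intercept per dyad. A hedged sketch, with the data frame and column names assumed purely for illustration:

```python
# Sketch only: df columns (dyad, rating, own_emg, partner_emg) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def fit_apim(df: pd.DataFrame):
    """Receiver-side APIM approximation: rating ~ actor + partner effects,
    with a random intercept for each dyad."""
    model = smf.mixedlm("rating ~ own_emg + partner_emg", df, groups=df["dyad"])
    result = model.fit()
    return result  # own_emg coefficient = actor effect, partner_emg = partner effect
```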

10.
Social anxiety may be related to a different pattern of facial mimicry and contagion of others' emotions. We report two studies in which participants with different levels of social anxiety reacted to others' emotional displays, shown either on a computer screen (Study 1) or in an actual social interaction (Study 2). Study 1 examined facial mimicry and emotional contagion in response to displays of happiness, anger, fear, and contempt. Participants mimicked negative and positive emotions to some extent, but we found no relation between mimicry and participants' social anxiety level. Furthermore, socially anxious individuals were more prone to experience negative emotions and felt more irritated in response to negative emotion displays. In Study 2, we found that social anxiety was related to enhanced mimicry of smiling, but only for polite smiles and not for enjoyment smiles. These results suggest that socially anxious individuals tend to catch negative emotions from others but suppress their expression by mimicking positive displays. This may be explained by the tendency of socially anxious individuals to avoid conflict or rejection.

11.
Male and female encoding and decoding of spontaneous and enacted nonverbal affective behavior was evaluated using the Buck (1977) slide-viewing paradigm. The eliciting stimuli were carefully selected and evaluated to ensure a comparable emotional impact on both sexes, and all subjects received the same decoding task. Consistent with previous research, females were superior decoders overall. Also as predicted, females were superior encoders, principally when reacting spontaneously to the slides. Given no evidence of differential affective arousal, this sex difference in spontaneous encoding is interpreted to reflect differences in male-female display rules. Contrary to several previous findings, spontaneous and enacted encoding measures were not strongly related, especially for males, for whom display rules may modify spontaneous and enacted expressive behavior in comparison to females. There was no consistent positive or negative relationship between dimensional or category measures of encoding-decoding for either sex. Future investigations should evaluate encoding-decoding phenomena separately for each sex, employing more precise methods to evaluate the specific nonverbal behaviors actually important to the encoding-decoding communication process.

12.
Eighty-two younger and older adults participated in a two-part study of the decoding of emotion through body movements and gestures. In the first part, younger and older adults identified emotions depicted in brief videotaped displays of young adult actors portraying emotional situations. In each display, the actors were silent and their faces were electronically blurred in order to isolate the body cues to emotion. Although both groups made accurate emotion identifications well above chance levels, older adults made more overall errors, and this was especially true for negative emotions. Moreover, their errors were more likely to reflect the misidentification of emotional displays as neutral in content. In the second part, younger and older adults rated the videotaped displays using scales reflecting several movement dimensions (e.g., form, tempo, force, and movement). The ratings of both age groups were in high agreement and provided reliable information about particular body cues to emotion. The errors made by older adults were linked to reactions to exaggerated or ambiguous body cues.

13.
Research has demonstrated that infants recognize emotional expressions of adults in the first half year of life. We extended this research to a new domain, infant perception of the expressions of other infants. In an intermodal matching procedure, 3.5- and 5-month-old infants heard a series of infant vocal expressions (positive and negative affect) along with side-by-side dynamic videos in which one infant conveyed positive facial affect and another infant conveyed negative facial affect. Results demonstrated that 5-month-olds matched the vocal expressions with the affectively congruent facial expressions, whereas 3.5-month-olds showed no evidence of matching. These findings indicate that by 5 months of age, infants detect, discriminate, and match the facial and vocal affective displays of other infants. Further, because the facial and vocal expressions were portrayed by different infants and shared no face–voice synchrony, temporal, or intensity patterning, matching was likely based on detection of a more general affective valence common to the face and voice.

14.
We assessed the impact of social context on the judgment of emotional facial expressions as a function of self-construal and decoding rules. German and Greek participants rated spontaneous emotional faces shown either alone or surrounded by other faces with congruent or incongruent facial expressions. Greek participants were higher in interdependence than German participants. In line with cultural decoding rules, Greek participants rated anger expressions less intensely and sadness and disgust expressions more intensely. Social context affected the two groups' ratings in different ways. In the more interdependent culture (Greece), participants perceived anger least intensely when the group showed neutral expressions, whereas sadness expressions were rated as most intense in the absence of social context. In the independent culture (Germany), a group context (others expressing anger or happiness) additionally amplified the perception of angry and happy expressions. In line with the notion that these effects are mediated by more holistic processing linked to higher interdependence, this difference disappeared when we controlled for interdependence at the individual level. The findings confirm the usefulness of considering both country-level and individual-level factors when studying cultural differences.

15.
Aspects of 47 preschoolers' emotional competence—their patterns of emotional expressiveness and reactions to others' emotion displays—were observed in two settings, with mother and with peers, and their general social competence was rated by their preschool teachers. Intrapersonal and interpersonal (i.e., socialization) correlates of children's emotional competence were identified, and a causal model incorporating direct and indirect influences on social competence was evaluated. Maternal patterns of expressiveness, reactions to children's emotion displays, and self-reported affective environment were associated with children's emotional competence in the preschool. Children's emotional competence with mother predicted their emotional competence in the preschool somewhat less strongly, suggesting that emotional competence may differ according to the interpersonal relationship studied. Taken as a whole, the findings reassert the importance of the domain of emotional expression to the development of social competence. Reprint requests can be sent to the author, Department of Psychology, George Mason University, 4400 University Drive, Fairfax, VA 22030. Grateful acknowledgment goes to the mothers and children who so clearly expressed and reacted to emotions; thanks also are due to Christine Alban, Joanne Ayyash, Michael Casey, Elizabeth Couchoud, Huynh Dung, Merlina Hemingway, Soueang Lay, Vanna Nguyen, Emilianne Slayden and Kimberly Sproul, as well as the director and teachers of the Project for the Study of Young Children. An earlier version of this material was presented at the 1991 Biennial Meetings of the Society for Research in Child Development, Seattle, WA. University support from Grants #2-10150, 2-10176 and 2-10073 also made the research possible.

16.
Subjects imagined situations in which they reported enjoying themselves either alone or with others. Electromyographic (EMG) activity was recorded bilaterally from regions overlying the zygomatic major muscles responsible for smiling. Controlling for equal rated happiness in the two conditions, subjects showed more smiling in high-sociality than in low-sociality imagery. In confirming imaginary audience effects during imagery, these data corroborate hypotheses that solitary facial displays are mediated by the presence of imaginary interactants, and suggest caution in employing them as measures of felt emotion. Avery Gilbert and Amy Jaffey had compelling insights throughout the course of the study. We thank Paul Ekman, Carroll Izard, and Paul Rozin for extensive comments on earlier drafts. We also thank Bernard Apfelbaum, Jon Baron, Janet Bavelas, John Cacioppo, Linda Camras, Dean Delis, Rob DeRubeis, Alan Fiske, Stephen Fowler, Greg McHugo, Harriet Oster, David Premack, W. John Smith, and David Williams for their valuable comments and suggestions.

17.
The present research examined whether the observation of emotional expressions rapidly induces congruent emotional experiences and facial responses in observers under strong test conditions. Specifically, participants rated their emotional reactions after (a) single, brief exposures of (b) a range of human emotional facial expressions that included (c) a neutral-face comparison, using a procedure designed to (d) minimize potential experimental demand. Even with these strong test conditions in place, participants reported discrete expression-congruent changes in emotional experience. Participants' corrugator supercilii facial muscle activity immediately following the presentation of an emotional expression appeared to reflect both expressive congruence with the observed expression and a response indicative of the amount of cognitive load necessary to interpret the observed expression. The complexity of the corrugator supercilii response suggests caution in using facial muscle activity as a nonverbal measure of emotional contagion.

18.
One of the most prevalent problems in face transplant patients is an inability to generate facial expressions of emotion. The purpose of this study was to measure the subjective recognition of patients' emotional expressions by other people. We examined facial expressions of six emotions in two facial transplant patients (patient A = partial, patient B = full) and one healthy control, using video clips to evoke emotions. We recorded the target subjects' facial expressions with a video camera while they were watching the clips. These were then shown to a panel of 130 viewers and rated in terms of degree of emotional expressiveness on a 7-point Likert scale. The scores for emotional expressiveness were higher for the healthy control than for patients A and B, and varied as a function of emotion. The most recognizable emotion was happiness. The least recognizable emotions in patient A were fear, surprise, and anger. The expressions of patient B scored lower than those of patient A and the healthy control. The findings show that partial and full face transplant patients may have difficulties generating facial expressions of emotion even if they can feel those emotions, and different parts of the face seem to play critical roles in different emotional expressions.

19.
Recent studies have demonstrated that in adults and children the recognition of face identity and facial expression mutually interact (Bate, Haslam, & Hodgson, 2009; Spangler, Schwarzer, Korell, & Maier-Karius, 2010). Here, using a familiarization paradigm, we explored the relation between these processes in early infancy, investigating whether 3-month-old infants' ability to recognize an individual face is affected by the positive (happiness) or neutral emotional expression displayed. Results indicated that infants' face recognition appears enhanced when faces display a happy emotional expression, suggesting the presence of a mutual interaction between face identity and emotion recognition as early as 3 months of age.

20.
Recent research has demonstrated that preschool children can decode emotional meaning in expressive body movement; however, to date, no research has considered preschool children's ability to encode emotional meaning in this medium. The current study investigated 4-year-old (N = 23) and 5-year-old (N = 24) children's ability to encode the emotional meaning of an accompanying music segment by moving a teddy bear, using previously modeled expressive movements, to indicate one of four target emotions (happiness, sadness, anger, or fear). Adult judges visually categorized the silent videotaped expressive movement performances by children of both ages with greater than chance-level accuracy. In addition, accuracy in categorizing the emotion being expressed varied as a function of the child's age and the emotion. A subsequent cue analysis revealed that children as young as 4 years old were systematically varying their expressive movements with respect to force, rotation, shifts in movement pattern, tempo, and upward movement in the process of emotional communication. The theoretical significance of such encoding ability is discussed with respect to children's nonverbal skills and the communication of emotion.
