Similar Literature
20 similar documents found.
1.
The Facial Action Coding System (FACS) (Ekman & Friesen, 1978) is a comprehensive and widely used method of objectively describing facial activity. Little is known, however, about inter-observer reliability in coding the occurrence, intensity, and timing of individual FACS action units. The present study evaluated the reliability of these measures. Observational data came from three independent laboratory studies designed to elicit a wide range of spontaneous expressions of emotion. Emotion challenges included olfactory stimulation, social stress, and cues related to nicotine craving. Facial behavior was video-recorded and independently scored by two FACS-certified coders. Overall, we found good to excellent reliability for the occurrence, intensity, and timing of individual action units and for corresponding measures of more global emotion-specified combinations.
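The occurrence reliability examined in this study is typically quantified with a chance-corrected agreement index such as Cohen's kappa. A minimal illustrative sketch, not the authors' actual analysis; the two coders' occurrence vectors below are hypothetical:

```python
# Illustrative sketch: chance-corrected inter-coder agreement (Cohen's kappa)
# for the occurrence of a single FACS action unit. The binary occurrence
# vectors are hypothetical, not data from the study.

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two binary (0/1) occurrence vectors."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    p_a1 = sum(coder_a) / n          # coder A's rate of scoring "present"
    p_b1 = sum(coder_b) / n          # coder B's rate of scoring "present"
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)  # chance agreement
    return (observed - expected) / (1 - expected)

# Hypothetical AU12 (lip-corner puller) occurrence codes for 10 video segments
coder_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
coder_b = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
print(f"kappa = {cohens_kappa(coder_a, coder_b):.2f}")  # -> kappa = 0.78
```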

2.
When we perceive the emotions of other people, we extract much information from the face. The present experiment used FACS (Facial Action Coding System), which is an instrument that measures the magnitude of facial action from a neutral face to a changed, emotional face. Japanese undergraduates judged the emotion in pictures of 66 static Japanese male faces (11 static pictures for each of six basic expressions: happiness, surprise, fear, anger, sadness, and disgust), ranging from neutral faces to maximally expressed emotions. The stimuli had previously been scored with FACS and were presented in random order. A high correlation between the subjects' judgments of facial expressions and the FACS scores was found.
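The reported result is a correlation between two measurement series: the FACS scores of the stimuli and the observers' judgments. A minimal sketch of such a computation, with made-up values rather than the study's data:

```python
# Illustrative sketch: Pearson correlation between FACS intensity scores of
# stimulus faces and observers' mean emotion-intensity judgments.
# Both arrays are hypothetical, not the study's data.
import numpy as np

facs_scores = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])   # scored expression magnitude
judgments   = np.array([0.2, 1.1, 1.9, 3.2, 3.8, 5.1])   # mean observer ratings

r = np.corrcoef(facs_scores, judgments)[0, 1]
print(f"r = {r:.3f}")  # close to 1 for these made-up values
```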

3.
Do infants show distinct negative facial expressions for different negative emotions? To address this question, European American, Chinese, and Japanese 11-month-olds were videotaped during procedures designed to elicit mild anger or frustration and fear. Facial behavior was coded using Baby FACS, an anatomically based scoring system. Infants' nonfacial behavior differed across procedures, suggesting that the target emotions were successfully elicited. However, evidence for distinct emotion-specific facial configurations corresponding to fear versus anger was not obtained. Although facial responses were largely similar across cultures, some differences also were observed. Results are discussed in terms of functionalist and dynamical systems approaches to emotion and emotional expression.

4.
The facial behavior during a marble rolling game was analyzed for two samples of children between the ages of 2 and 5 years using the Facial Action Coding System (FACS). In addition, for a subsample of children, temperament ratings were available. Analysis of coding reliability showed that both frequency and temporal location coding can be reliably performed for preschoolers. The facial movements showed a frequency distribution that was highly similar in both samples. Movements of the mouth, especially the components of smiling, and some movements of the eye region, were observed frequently. Most other facial movements were infrequent events. The more frequently shown facial movements were stable over a period of up to 18 months. In addition, sum-scores of emotion-relevant Action Units were meaningfully related to infant temperament characteristics.

5.
A comparative perspective has remained central to the study of human facial expressions since Darwin’s [(1872/1998). The expression of the emotions in man and animals (3rd ed.). New York: Oxford University Press] insightful observations on the presence and significance of cross-species continuities and species-unique phenomena. However, cross-species comparisons are often difficult to draw due to methodological limitations. We report the application of a common methodology, the Facial Action Coding System (FACS) to examine facial movement across two species of hominoids, namely humans and chimpanzees. FACS [Ekman & Friesen (1978). Facial action coding system. CA: Consulting Psychology Press] has been employed to identify the repertoire of human facial movements. We demonstrate that FACS can be applied to other species, but highlight that any modifications must be based on both underlying anatomy and detailed observational analysis of movements. Here we describe the ChimpFACS and use it to compare the repertoire of facial movement in chimpanzees and humans. While the underlying mimetic musculature shows minimal differences, important differences in facial morphology impact upon the identification and detection of related surface appearance changes across these two species.

6.
A number of investigators have reported that observers can reliably distinguish facial expressions of pain. The purpose of this study was to describe the consistencies which might exist in facial behavior shown during pain. Sixteen candid photographs showing faces of individuals in situations associated with intense, acute pain (e.g., childbirth, various injuries, surgery without anesthesia) were coded using the anatomically-based Facial Action Coding System (FACS) of Ekman and Friesen. A characteristic pain expression—brow lowering with skin drawn in tightly around closed eyes, accompanied by a horizontally-stretched, open mouth, often with deepening of the nasolabial furrow—occurred consistently in this series.

During this work the investigator was supported by NIMH Grant No. 5T32MH 14592. The author wishes to thank Paul Ekman for supplying the pain photographs and Paul Ekman and Jerry Boucher for helpful suggestions throughout the course of this research.

7.
We analyzed the facial behavior of 100 volunteers who video-recorded their own expressions while experiencing an episode of sexual excitement that concluded in an orgasm, and then posted their video clip on an Internet site. Four distinct observational periods from the video clips were analyzed and coded by FACS (Facial Action Coding System, Ekman and Friesen 1978). We found nine combinations of muscular movements produced by at least 5% of the senders. These combinations were consistent with facial expressions of sexual excitement described by Masters and Johnson (Human sexual response, 1966), and they included the four muscular movements of the core expression of pain (Prkachin, Pain, 51, 297–306, 1992).

8.
9.
Ethnic bias in the recognition of facial expressions
Ethnic bias in the recognition of facial expressions was assessed by having college students from the United States and Zambia assign emotion labels to facial expressions produced by imitation by United States and Zambian students. Bidirectional ethnic bias was revealed by the fact that Zambian raters labeled the Zambian facial expressions with less uncertainty than the U.S. facial expressions, and that U.S. raters labeled the U.S. facial expressions with less uncertainty than the Zambian facial expressions. In addition, the Facial Action Coding System was used to assess accuracy in the imitation of facial expressions. These results and the results of other analyses of recognition accuracy are reported.

Portions of this paper were presented at the annual meeting of the Society for Cross-Cultural Research, Philadelphia, Pennsylvania, February 1980 (Note 1).
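The "uncertainty" with which a rater group labels an expression can be operationalized as the spread of the group's label distribution; the study's exact index is not reproduced here, so the sketch below uses Shannon entropy over hypothetical label counts purely for illustration:

```python
# Illustrative sketch: quantifying labeling "uncertainty" as the Shannon entropy
# of the distribution of emotion labels a rater group assigns to one expression.
# Label counts are made up, not the study's data.
import math

def label_entropy(counts):
    """Entropy (bits) of a distribution given as raw label counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

# Hypothetical counts of six emotion labels chosen by a rater group
own_group   = [46, 2, 1, 0, 1, 0]   # near-consensus -> low uncertainty
other_group = [20, 10, 8, 6, 4, 2]  # spread out     -> high uncertainty
print(f"own-group entropy:   {label_entropy(own_group):.2f} bits")
print(f"other-group entropy: {label_entropy(other_group):.2f} bits")
```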

10.
Facial expressions related to sadness are a universal signal of nonverbal communication. Although results of many psychology studies have shown that drooping of the lip corners, raising of the chin, and oblique eyebrow movements (a combination of inner brow raising and brow lowering) express sadness, no report has described a study elucidating facial expression characteristics under well-controlled circumstances with people actually experiencing the emotion of sadness itself. Therefore, spontaneous facial expressions associated with sadness remain unclear. We conducted this study to accumulate important findings related to spontaneous facial expressions of sadness. We recorded the spontaneous facial expressions of a group of participants as they experienced sadness during an emotion-elicitation task. This task required a participant to recall neutral and sad memories while listening to music. We subsequently conducted a detailed analysis of their sad and neutral expressions using the Facial Action Coding System. The prototypical facial expressions of sadness in earlier studies were not observed when people experienced sadness as an internal state under non-social circumstances. By contrast, they expressed tension around the mouth, which might function as a form of suppression. Furthermore, results show that parts of these facial actions are not only related to sad experiences but also to other emotional experiences such as disgust, fear, anger, and happiness. This study revealed the possibility that new facial expressions contribute to the experience of sadness as an internal state.

11.
One of the most prevalent problems in face transplant patients is an inability to generate facial expression of emotions. The purpose of this study was to measure the subjective recognition of patients’ emotional expressions by other people. We examined facial expression of six emotions in two facial transplant patients (patient A = partial, patient B = full) and one healthy control using video clips to evoke emotions. We recorded target subjects’ facial expressions with a video camera while they were watching the clips. These were then shown to a panel of 130 viewers and rated in terms of degree of emotional expressiveness on a 7-point Likert scale. The scores for emotional expressiveness were higher for the healthy control than they were for patients A and B, and these varied as a function of emotion. The most recognizable emotion was happiness. The least recognizable emotions in Patient A were fear, surprise, and anger. The expressions of Patient B scored lower than those of Patient A and the healthy control. The findings show that partial and full-face transplant patients may have difficulties in generating facial expression of emotions even if they can feel those emotions, and different parts of the face seem to play critical roles in different emotional expressions.

12.
Physical attractiveness is suggested to be an indicator of biological quality and therefore should be stable. However, transient factors such as gaze direction and facial expression affect facial attractiveness, suggesting it is not. We compared the relative importance of variation between faces with variation within faces due to facial expressions. A total of 128 participants viewed photographs of 14 men and 16 women displaying the six basic facial expressions (anger, disgust, fear, happiness, sadness, surprise) and a neutral expression. Each rater saw each model only once with a randomly chosen expression. The effect of expressions on attractiveness was similar in male and female faces, although several expressions were not significantly different from each other. Identity was 2.2 times as important as emotion in attractiveness for both male and female pictures, suggesting that attractiveness is stable. Since the hard tissues of the face are unchangeable, people may still be able to perceive facial structure whatever expression the face is displaying, and still make attractiveness judgements based on structural cues.
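The 2.2:1 weighting of identity over emotion amounts to comparing variance components in a faces-by-expressions ratings matrix. A sketch of one way such a decomposition could be computed, using simulated ratings rather than the study's data:

```python
# Illustrative sketch: comparing variance in attractiveness ratings explained
# by face identity versus facial expression, from a faces x expressions matrix
# of mean ratings. The matrix is simulated, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
n_faces, n_expr = 30, 7
face_effect = rng.normal(0, 1.0, size=(n_faces, 1))   # stable identity differences
expr_effect = rng.normal(0, 0.6, size=(1, n_expr))    # expression-induced shifts
ratings = 4 + face_effect + expr_effect + rng.normal(0, 0.3, size=(n_faces, n_expr))

var_identity = ratings.mean(axis=1).var(ddof=1)  # between-face variance
var_emotion  = ratings.mean(axis=0).var(ddof=1)  # between-expression variance
print(f"identity/emotion variance ratio: {var_identity / var_emotion:.1f}")
```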

13.
The perception of emotional facial expressions may activate corresponding facial muscles in the receiver, also referred to as facial mimicry. Facial mimicry is highly dependent on the context and type of facial expressions. While previous research almost exclusively investigated mimicry in response to pictures or videos of emotional expressions, studies with a real, face-to-face partner are still rare. Here we compared facial mimicry of angry, happy, and sad expressions and emotion recognition in a dyadic face-to-face setting. In sender-receiver dyads, we recorded facial electromyograms in parallel. Senders communicated to the receivers—with facial expressions only—the emotions felt during specific personal situations in the past, eliciting anger, happiness, or sadness. Receivers mimicked happiness most strongly, sadness to a lesser degree, and anger least of all. In actor-partner interdependence models, we showed that the receivers' own facial activity influenced their ratings, which increased the agreement between the senders' and receivers' ratings for happiness, but not for angry and sad expressions. These results are in line with the Emotion Mimicry in Context View, which holds that humans mimic happy expressions according to affiliative intentions. The mimicry of sad expressions is less intense, presumably because it signals empathy and might imply personal costs. Direct anger expressions are mimicked the least, possibly because anger communicates threat and aggression. Taken together, we show that incidental facial mimicry in a face-to-face setting is positively related to recognition accuracy for non-stereotypical happy expressions, supporting the functionality of facial mimicry.
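Mimicry in parallel EMG recordings is often indexed by how strongly the receiver's muscle activity tracks the sender's with a short delay. The study itself used actor-partner interdependence models; the sketch below shows only a simpler lagged-correlation index on synthetic signals:

```python
# Illustrative sketch: indexing facial mimicry as the lagged correlation between
# a sender's and a receiver's zygomaticus EMG activity. Signals are synthetic;
# the study analyzed its dyadic data with actor-partner interdependence models.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 10, 0.05)                       # 10 s sampled at 20 Hz
sender = np.clip(np.sin(t) + rng.normal(0, 0.2, t.size), 0, None)
lag = 10                                         # receiver follows ~0.5 s later
receiver = np.roll(sender, lag) * 0.6 + rng.normal(0, 0.2, t.size)

def lagged_corr(x, y, max_lag=20):
    """Best correlation of x with y shifted forward by 0..max_lag samples."""
    return max(np.corrcoef(x[:-k or None], y[k:])[0, 1]
               for k in range(max_lag + 1))

print(f"peak mimicry correlation: {lagged_corr(sender, receiver):.2f}")
```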

14.
Measuring facial movement
A procedure has been developed for measuring visibly different facial movements. The Facial Action Code was derived from an analysis of the anatomical basis of facial movement. The method can be used to describe any facial movement (observed in photographs, motion picture film or videotape) in terms of anatomically based action units. The development of the method is explained, contrasting it to other methods of measuring facial behavior. An example of how facial behavior is measured is provided, and ideas about research applications are discussed.

The research reported here was supported by a grant from NIMH, MH 167845. The authors are grateful to Wade Seaford, Dickinson College, for encouraging us to build our measurement system on the basis of specific muscular action. He convinced us that it would allow more precision, and that learning the anatomy would not be an overwhelming obstacle. Neither he nor we realized, however, how detailed and elaborate this undertaking would be. Seaford (1976) recently advanced some of the arguments we have made here about the value of an anatomically based measurement system. We are grateful also to those who first learned FAC and gave us many helpful suggestions as to how to improve the manual. We thank Linda Camras, Joe Hager, Harriet Oster, and Maureen O'Sullivan also for their comments on this report.

15.
Research has demonstrated that infants recognize emotional expressions of adults in the first half year of life. We extended this research to a new domain, infant perception of the expressions of other infants. In an intermodal matching procedure, 3.5- and 5-month-old infants heard a series of infant vocal expressions (positive and negative affect) along with side-by-side dynamic videos in which one infant conveyed positive facial affect and another infant conveyed negative facial affect. Results demonstrated that 5-month-olds matched the vocal expressions with the affectively congruent facial expressions, whereas 3.5-month-olds showed no evidence of matching. These findings indicate that by 5 months of age, infants detect, discriminate, and match the facial and vocal affective displays of other infants. Further, because the facial and vocal expressions were portrayed by different infants and shared no face–voice synchrony, temporal, or intensity patterning, matching was likely based on detection of a more general affective valence common to the face and voice.

16.
Preschool-age children drew, decoded, and encoded facial expressions depicting five different emotions. Each child was rated by two teachers on measures of school adjustment. Facial expressions encoded by the children were decoded by college undergraduates and the children's parents. Results were as follows: (1) accuracy of drawing, decoding, and encoding each of the five emotions was consistent across the three tasks; (2) decoding ability was correlated with drawing ability among female subjects, but neither of these abilities was correlated with encoding ability; (3) decoding ability increased with age, while encoding ability increased with age among females and slightly decreased among males; (4) parents decoded facial expressions of their own children better than facial expressions of other children, and female parents were better decoders than male parents; (5) children's adjustment to school was related to their encoding and decoding skills and to their mothers' decoding skills; (6) children with better decoding skills were rated as being better adjusted by their parents.

The authors gratefully acknowledge the assistance of the Greater Rochester Jewish Community Center, Early Childhood Department, and the parents of the participating children in the completion of this study. Special thanks to Sandra Walter, Director of the Early Childhood Department, for her cooperation and support during all stages of the project.

17.
To investigate the perception of emotional facial expressions, researchers rely on shared sets of photos or videos, most often generated by actor portrayals. The drawback of such standardized material is a lack of flexibility and controllability, as it does not allow the systematic parametric manipulation of specific features of facial expressions on the one hand, and of more general properties of the facial identity (age, ethnicity, gender) on the other. To remedy this problem, we developed FACSGen: a novel tool that allows the creation of realistic synthetic 3D facial stimuli, both static and dynamic, based on the Facial Action Coding System. FACSGen provides researchers with total control over facial action units, and corresponding informational cues in 3D synthetic faces. We present four studies validating both the software and the general methodology of systematically generating controlled facial expression patterns for stimulus presentation.

18.
This study examines the relationship between the ratings made of a set of smiling and neutral expressions and the facial features which influence these ratings. Judges were shown forty real face photographs of smile and neutral expressions and forty line drawings derived from these photographs, and were asked to rate the degree of smiling behavior of each expression. The line drawings of the face were generated by a microcomputer which utilizes a mathematical model to quantify facial expression. Twelve facial measures were generated by the computer. Significant differences were found between the ratings of smile and neutral expressions. The Mode of Presentation did not contribute significantly to the ratings. Using the facial measures as separate covariates, five mouth measures and one eye measure were found to discriminate significantly between the ratings made on smile and neutral expressions. When entered as simultaneous covariates, only four mouth measures contributed to the differences found in the expression ratings. Future research projects which may utilise the computer model are discussed.

The research reported in this paper was conducted in the Department of Psychology, University of Adelaide. The authors would like to thank Ulana Sudomlak for her assistance in the gathering and recording of the data for this project, and the reviewers for their helpful comments on an earlier version of this paper.
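Viewed in modern terms, separating smile from neutral expressions on the basis of geometric facial measures is a classification problem. The study entered its twelve measures as covariates in analyses of the ratings; the sketch below instead fits a logistic-regression classifier to hypothetical mouth and eye measures, purely for illustration:

```python
# Illustrative sketch: separating smile vs. neutral expressions from geometric
# facial measures (e.g., mouth-corner displacement, mouth width, eye opening).
# Feature values and labels are hypothetical; the original study used such
# measures as covariates in analyses of judges' ratings, not as classifier input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 40  # forty stimuli, as in the study design
smile = rng.integers(0, 2, size=n)                    # 1 = smile, 0 = neutral
mouth_curve = 0.8 * smile + rng.normal(0, 0.3, n)     # corners raised when smiling
mouth_width = 0.5 * smile + rng.normal(0, 0.3, n)
eye_opening = -0.3 * smile + rng.normal(0, 0.3, n)    # slight eye narrowing
X = np.column_stack([mouth_curve, mouth_width, eye_opening])

clf = LogisticRegression().fit(X, smile)
print(f"training accuracy: {clf.score(X, smile):.2f}")
```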

19.
The impact of singular (e.g. sadness alone) and compound (e.g. sadness and anger together) facial expressions on individuals' recognition of faces was investigated. In three studies, a face recognition paradigm was used as a measure of the proficiency with which participants processed compound and singular facial expressions. For both positive and negative facial expressions, participants displayed greater proficiency in processing compound expressions relative to singular expressions. Specifically, the accuracy with which faces displaying compound expressions were recognized was significantly higher than the accuracy with which faces displaying singular expressions were recognized. Possible explanations involving the familiarity, distinctiveness, and salience of the facial expressions are discussed.

20.
Behavioral countermeasures are the strategies employed by liars to deliberately control face or body behavior to fool lie catchers. To date, research has not shown whether deceivers can suppress elements of their facial expression as a behavioral countermeasure. This study examined whether participants could suppress facial actions such as eyebrow movements or smiles on command when under scrutiny by a lie catcher. The results, derived from micro-momentary coding, revealed that facial actions can be reduced, but not eliminated, and that instructions to suppress one element of the expression resulted in a reduction in all facial movement, regardless of veracity. The resulting implications for security contexts are discussed.
