Similar Articles
20 similar articles retrieved.
1.
2.
When we perceive the emotions of other people, we extract much information from the face. The present experiment used FACS (Facial Action Coding System), which is an instrument that measures the magnitude of facial action from a neutral face to a changed, emotional face. Japanese undergraduates judged the emotion in pictures of 66 static Japanese male faces (11 static pictures for each of six basic expressions: happiness, surprise, fear, anger, sadness, and disgust), ranging from neutral faces to maximally expressed emotions. The stimuli had previously been scored with FACS and were presented in random order. A high correlation between the subjects' judgments of facial expressions and the FACS scores was found.

3.
A method was developed for automated coding of facial behavior in computer-aided test or game situations. Facial behavior is registered automatically with the aid of small plastic dots which are affixed to pre-defined regions of the subject's face. During a task, the subject's face is videotaped, and the picture is digitized. A special pattern-recognition algorithm identifies the dot pattern, and an artificial neural network classifies the dot pattern according to the Facial Action Coding System (FACS; Ekman & Friesen, 1978). The method was tested in coding the posed facial expressions of three subjects, themselves FACS experts. Results show that it is possible to identify and differentiate facial expressions by their corresponding dot patterns. The method is independent of individual differences in physiognomy.
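A minimal sketch, in Python, of the kind of pipeline described above; it is not the authors' implementation. Simulated dot displacements stand in for the tracked markers, made-up Action Unit labels stand in for the FACS codes, and scikit-learn's MLPClassifier stands in for the artificial neural network:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: each sample is the (x, y) displacement of 10
# facial dots relative to the neutral face, flattened into a 20-dimensional vector.
n_samples, n_dots = 600, 10
X = rng.normal(size=(n_samples, n_dots * 2))
# Hypothetical Action Unit labels (e.g., AU6 cheek raiser, AU12 lip corner puller).
y = rng.choice(["AU6", "AU12", "AU15", "neutral"], size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small feed-forward network standing in for the classifier described in the article.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))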

4.
The facial behavior during a marble rolling game was analyzed for two samples of children between the ages of 2 and 5 years using the Facial Action Coding System (FACS). In addition, temperament ratings were available for a subsample of children. Analysis of coding reliability showed that frequency as well as temporal location coding can be reliably performed for preschoolers. The facial movements show a frequency distribution which is highly similar in both samples. Movements of the mouth, especially the components of smiling, and some movements of the eye region were observed frequently. Most other facial movements were infrequent events. The more frequently shown facial movements were stable over a period of up to 18 months. In addition, sum-scores of emotion-relevant Action Units were meaningfully related to infant temperament characteristics.

5.
A comparative perspective has remained central to the study of human facial expressions since Darwin’s [(1872/1998). The expression of the emotions in man and animals (3rd ed.). New York: Oxford University Press] insightful observations on the presence and significance of cross-species continuities and species-unique phenomena. However, cross-species comparisons are often difficult to draw due to methodological limitations. We report the application of a common methodology, the Facial Action Coding System (FACS) to examine facial movement across two species of hominoids, namely humans and chimpanzees. FACS [Ekman & Friesen (1978). Facial action coding system. CA: Consulting Psychology Press] has been employed to identify the repertoire of human facial movements. We demonstrate that FACS can be applied to other species, but highlight that any modifications must be based on both underlying anatomy and detailed observational analysis of movements. Here we describe the ChimpFACS and use it to compare the repertoire of facial movement in chimpanzees and humans. While the underlying mimetic musculature shows minimal differences, important differences in facial morphology impact upon the identification and detection of related surface appearance changes across these two species.

6.
In this article, we report the development of a new test designed to measure individual differences in emotion recognition ability (ERA), five studies examining the reliability and validity of the scores produced using this test, and the first evidence for a correlation between ERA measured by a standardized test and personality. Utilizing Matsumoto and Ekman's (1988) Japanese and Caucasian Facial Expressions of Emotion (JACFEE) and Neutral Faces (JACNeuF), we call this measure the Japanese and Caucasian Brief Affect Recognition Test (JACBART). The JACBART improves on previous measures of ERA by (1) using expressions that have substantial validity and reliability data associated with them, (2) including posers of two visibly different races, (3) balanced across seven universal emotions, (4) with equal distribution of poser race and sex across emotions, and (5) in a format that eliminates afterimages associated with fast exposures. Scores derived using the JACBART are reliable, and three studies demonstrated a correlation between ERA and the personality constructs of Openness and Conscientiousness, while one study reports a correlation with Extraversion and Neuroticism.

7.
Substantial research has documented the universality of several emotional expressions. However, recent findings have demonstrated cultural differences in level of recognition and ratings of intensity. When testing cultural differences, stimulus sets must meet certain requirements. Matsumoto and Ekman's Japanese and Caucasian Facial Expressions of Emotion (JACFEE) is the only set that meets these requirements. The purpose of this study was to obtain judgment reliability data on the JACFEE, and to test for possible cross-national differences in judgments as well. Subjects from Hungary, Japan, Poland, Sumatra, United States, and Vietnam viewed the complete JACFEE photo set and judged which emotions were portrayed in the photos and rated the intensity of those expressions. Results revealed high agreement across countries in identifying the emotions portrayed in the photos, demonstrating the reliability of the JACFEE. Despite high agreement, cross-national differences were found in the exact level of agreement for photos of anger, contempt, disgust, fear, sadness, and surprise. Cross-national differences were also found in the level of intensity attributed to the photos. No systematic variation due to either preceding emotion or presentation order of the JACFEE was found. Also, we found that grouping the countries into a Western/Non-Western dichotomy was not justified according to the data. Instead, the cross-national differences are discussed in terms of possible sociopsychological variables that influence emotion judgments.

8.
A number of investigators have reported that observers can reliably distinguish facial expressions of pain. The purpose of this study was to describe the consistencies which might exist in facial behavior shown during pain. Sixteen candid photographs showing faces of individuals in situations associated with intense, acute pain (e.g., childbirth, various injuries, surgery without anesthesia) were coded using the anatomically-based Facial Action Coding System (FACS) of Ekman and Friesen. A characteristic pain expression—brow lowering with skin drawn in tightly around closed eyes, accompanied by a horizontally-stretched, open mouth, often with deepening of the nasolabial furrow—occurred consistently in this series. During this work the investigator was supported by NIMH Grant No. 5T32MH 14592. The author wishes to thank Paul Ekman for supplying the pain photographs and Paul Ekman and Jerry Boucher for helpful suggestions throughout the course of this research.

9.
A long line of research investigates how infants learn the sounds and words in their ambient language over the first year of life, through behavioral tasks involving discrimination and recognition. More recently, individual performance in such tasks has been used to predict later language development. Does this mean that dependent measures in such tasks are reliable and can stably measure speech perception skills over short time spans? Our three laboratories independently tested infants with a given task and retested them within 0–18 days. Together, we can report data from 12 new experiments (total number of paired observations N = 409), ranging from vowel and consonant discrimination to recognition of phrasal units. Results reveal that reliability is extremely variable across experiments. We discuss possible causes and implications of this variability, as well as the main effects revealed by this work. Additionally, we offer suggestions for the field of infant speech perception to improve the reliability of its methodologies through data repositories and crowd sourcing.
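A minimal illustration, using simulated scores rather than any of these datasets, of the test-retest reliability statistic at issue here: the correlation between the same infants' scores at the first and the second session. The sample size and noise level are arbitrary placeholders.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_infants = 40
true_skill = rng.normal(size=n_infants)                        # stable individual difference
session1 = true_skill + rng.normal(scale=0.8, size=n_infants)  # measurement noise at time 1
session2 = true_skill + rng.normal(scale=0.8, size=n_infants)  # measurement noise at time 2

# Test-retest reliability as the Pearson correlation between the two sessions.
r, p = pearsonr(session1, session2)
print(f"test-retest r = {r:.2f} (p = {p:.3f})")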

10.
Few measures parallel the robust depth offered in the existing multidimensional and ecologically informed theories of resilience. This study sought to evaluate the test–retest reliability, construct validity, and predictive validity of the individual, family, and community resilience resource profile (IFCR-R). The IFCR-R measures a family's access to resilience resources and protective factors across multiple ecological levels. Confirmatory factor analysis was used with a sample of n = 810 low-income families, and 159 families completed measures at multiple time points for the evaluation of test–retest reliability and predictive validity. Results of this study support the proposed multidimensional ecological structure of the IFCR-R and show that the IFCR-R offers acceptable test–retest reliability and predictive validity for outcomes of mental and physical health.

11.
To investigate the perception of emotional facial expressions, researchers rely on shared sets of photos or videos, most often generated by actor portrayals. The drawback of such standardized material is a lack of flexibility and controllability, as it does not allow the systematic parametric manipulation of specific features of facial expressions on the one hand, and of more general properties of the facial identity (age, ethnicity, gender) on the other. To remedy this problem, we developed FACSGen: a novel tool that allows the creation of realistic synthetic 3D facial stimuli, both static and dynamic, based on the Facial Action Coding System. FACSGen provides researchers with total control over facial action units, and corresponding informational cues in 3D synthetic faces. We present four studies validating both the software and the general methodology of systematically generating controlled facial expression patterns for stimulus presentation.

12.
Measuring facial movement
A procedure has been developed for measuring visibly different facial movements. The Facial Action Code was derived from an analysis of the anatomical basis of facial movement. The method can be used to describe any facial movement (observed in photographs, motion picture film or videotape) in terms of anatomically based action units. The development of the method is explained, contrasting it to other methods of measuring facial behavior. An example of how facial behavior is measured is provided, and ideas about research applications are discussed. The research reported here was supported by a grant from NIMH, MH 167845. The authors are grateful to Wade Seaford, Dickinson College, for encouraging us to build our measurement system on the basis of specific muscular action. He convinced us that it would allow more precision, and that learning the anatomy would not be an overwhelming obstacle. Neither he nor we realized, however, how detailed and elaborate this undertaking would be. Seaford (1976) recently advanced some of the arguments we have made here about the value of an anatomically based measurement system. We are grateful also to those who first learned FAC and gave us many helpful suggestions as to how to improve the manual. We also thank Linda Camras, Joe Hager, Harriet Oster, and Maureen O'Sullivan for their comments on this report.

13.
Few attempts have been made since the pioneering work of Ekman et al. (1980) to examine the development of the deliberate control of facial action units in children. We are reporting here two studies concerned with this issue. In Study 1, we investigated children's ability to activate facial action units involved in sadness and happiness expressions as well as combinations of these action units. In Study 2, we examined children's ability to pose happiness and sadness with their face, without telling them which action unit to activate. The children who took part in this study were simply asked to portray happiness and sadness as convincingly as possible. The results of Study 1 indicate a strong developmental progression in children's ability to produce elementary facial components of both emotions, as well as in their ability to produce a combination of the elements in the case of happiness. In agreement with prior research in motor development, several non-target action units were also activated when children performed the task. Their occurrence persisted throughout childhood, indicating limitations in the finer motor control achieved by children across ages. The results obtained in Study 2 paralleled those obtained in Study 1 in many respects, providing evidence that the children used the technique of deliberate action to pose the two target emotions.

14.
Do infants show distinct negative facial expressions for different negative emotions? To address this question, European American, Chinese, and Japanese 11-month-olds were videotaped during procedures designed to elicit mild anger or frustration and fear. Facial behavior was coded using Baby FACS, an anatomically based scoring system. Infants' nonfacial behavior differed across procedures, suggesting that the target emotions were successfully elicited. However, evidence for distinct emotion-specific facial configurations corresponding to fear versus anger was not obtained. Although facial responses were largely similar across cultures, some differences were also observed. Results are discussed in terms of functionalist and dynamical systems approaches to emotion and emotional expression.

15.
16.
We analyzed the facial behavior of 100 volunteers who video-recorded their own expressions while experiencing an episode of sexual excitement that concluded in an orgasm, and then posted their video clip on an Internet site. Four distinct observational periods from the video clips were analyzed and coded by FACS (Facial Action Coding System, Ekman and Friesen 1978). We found nine combinations of muscular movements produced by at least 5% of the senders. These combinations were consistent with facial expressions of sexual excitement described by Masters and Johnson (Human sexual response, 1966), and they included the four muscular movements of the core expression of pain (Prkachin, Pain, 51, 297–306, 1992).

17.
I propose a framework for drawing inferences about an unobserved variable using qualitative and quantitative information. Using this framework, I study the timing and persistence of monetary policy regimes and compute probabilistic measures of the qualitative indicator's reliability. These estimates suggest that (1) it is over one and one-half times more likely that monetary policy is not restrictive at any point in time, (2) Boschen and Mills's [1995] policy index is a reliable indicator of the stance of monetary policy, and (3) certain qualitative indicators of monetary policy improve interest rate forecasts that are based on linear forecasting models. (JEL C22, E52)
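A minimal Python sketch of the kind of probabilistic reasoning involved: updating the probability that policy is in a restrictive regime after observing a qualitative indicator, via Bayes' rule. The prior and the indicator's hit and false-alarm rates below are illustrative numbers, not estimates from the article.

# Illustrative inputs (not estimates from the article).
prior_restrictive = 0.40           # P(regime is restrictive) before seeing the indicator
p_signal_given_restrictive = 0.85  # P(indicator says "restrictive" | restrictive)
p_signal_given_not = 0.20          # P(indicator says "restrictive" | not restrictive)

# Bayes' rule: P(restrictive | indicator says "restrictive").
evidence = (p_signal_given_restrictive * prior_restrictive
            + p_signal_given_not * (1 - prior_restrictive))
posterior = p_signal_given_restrictive * prior_restrictive / evidence
print(f"P(restrictive | indicator) = {posterior:.2f}")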

18.
The co-authorship among members of a research group can commonly be represented by a (co-authorship) graph in which nodes represent the researchers that make up the group and edges represent the connections between two agents (i.e., the co-authorship between these agents). The current study measures the reliability of networks by taking into consideration unreliable nodes (researchers) and perfectly reliable edges (co-authorship between two researchers). A Bayesian approach for the reliability of a network represented by the co-authorship among members of a real research group is proposed, obtaining Bayesian estimates and credible intervals for the individual components (nodes or researchers) and the network. Weakly informative and non-informative prior distributions are assumed for those components, and the posterior summaries are obtained by Markov chain Monte Carlo methods. The results show the relevance of an inferential approach to the reliability of a scientific co-authorship network. The results also demonstrate that the contribution of each researcher is highly relevant to the maintenance of a research group. In addition, the Bayesian methodology was feasible and easy to implement computationally.
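As a simpler, non-Bayesian point of reference (not the article's model), a Monte Carlo sketch in Python of node-based network reliability: in each trial the researchers survive independently with their assumed reliabilities, edges never fail, and the network counts as working if the surviving researchers still form a connected graph. The graph and the node reliabilities are made up for illustration.

import random
import networkx as nx

random.seed(0)

# Hypothetical co-authorship graph: nodes are researchers, edges are co-authorships.
G = nx.Graph()
G.add_edges_from([("A", "B"), ("B", "C"), ("C", "D"), ("B", "D"), ("D", "E")])
# Hypothetical per-researcher reliabilities; edges are treated as perfectly reliable.
node_reliability = {"A": 0.95, "B": 0.90, "C": 0.85, "D": 0.90, "E": 0.80}

def one_trial() -> bool:
    # Keep each researcher with probability equal to his or her reliability.
    alive = [v for v in G if random.random() < node_reliability[v]]
    if len(alive) < 2:
        return False  # fewer than two surviving researchers: treat the group as failed
    return nx.is_connected(G.subgraph(alive))

n_trials = 20_000
estimate = sum(one_trial() for _ in range(n_trials)) / n_trials
print(f"estimated network reliability: {estimate:.3f}")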

19.
Sarah C. Creel, Infancy, 2012, 17(2), 141–158
Morgante et al. (in press) find inconsistencies in the time reporting of a Tobii T60XL eye tracker. Their study raises important questions about the use of the Tobii T-series in particular, and various software and hardware in general, in different infant eye tracking paradigms. It leaves open the question of the source of the inconsistencies. Here, observations from a Tobii eye tracker are presented to elucidate possible sources of timing inconsistencies, including those found by Morgante et al. The ramifications of the reported timing inconsistencies are related to various infant paradigms. The focus is on the level of concern a researcher should have if any eye tracker displays these timing characteristics, and what corrective measures may be taken. While posing no problems for some paradigms, timing inconsistencies are potentially problematic (but correctable) when assessing event-related looking behavior. Observed timing contraindicates use in fast gaze-contingent displays (<100 ms). General suggestions are made regarding timing in eye-tracked data collection.

20.
This study examined genetic and environmental effects on individual variation in pubertal timing using two national samples of siblings from the Nonshared Environment of Adolescent Development (NEAD) and the National Longitudinal Study of Adolescent Health (Add Health). In each sample, female and male siblings with different degrees of genetic relatedness, i.e., monozygotic twins, dizygotic twins, full siblings, half siblings, and unrelated siblings in blended families, were assessed. Timing of pubertal development was measured by age-adjusted self-report measures of the Pubertal Development Scale in NEAD and a four-item scale of pubertal development in the Add Health. The results indicated that both genetic and environmental influences play an important role in determining the relative timing of pubertal development for both boys and girls.
