Similar Articles
20 similar articles found (search time: 984 ms)
1.
Archer et al. (1983) found that visual depictions (e.g., photographs) of men tend to show more face and less of the body (a characteristic that was termed high face-ism) than visual depictions of women. Furthermore, photographs (of both men and women) high in face-ism elicited more favorable impressions than photographs low in face-ism. The present studies examined possible reasons for sex differences in face-ism and their implications concerning the effects of high and low face-ism on interpersonal perception. Study 1 showed that the greater face-ism in photographs of men was less pronounced when the photographs were taken from periodicals that are oriented toward women's issues. Study 2 showed that photographs high in face-ism received higher ratings on dominance, a dimension related to masculinity, but not on positivity, a dimension related to femininity. This study also indicated that facial expressions provided more information about degrees of positivity while body cues provided more information about dominance and submission. Consistent with these latter results, Study 3 showed that amateur drawings portraying kind or hostile persons showed more of the face while drawings presenting dominant or weak persons showed more of the body. The two phenomena—the relationship of high face-ism with impressions of high dominance and the different types of information transmitted by the face and body—were considered in the discussion. The author would like to express his appreciation to Diana R. Satin and BiancaMaria (Mia) Penati for their assistance with this project.
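The abstract does not spell out how face-ism is quantified; in the face-ism literature the index is usually reported as the ratio of the depicted face height to the total depicted height of the figure. A minimal sketch of that computation, under that assumption and with made-up pixel measurements, might look like this:

```python
def face_ism_index(face_height_px: float, body_height_px: float) -> float:
    """Face-ism index as commonly attributed to Archer et al. (1983):
    distance from the top of the head to the chin, divided by the distance
    from the top of the head to the lowest visible part of the body.
    Values near 1.0 mean the image is mostly face; values near 0.0 mean
    the face occupies only a small fraction of the depicted figure.
    """
    if body_height_px <= 0:
        raise ValueError("body_height_px must be positive")
    return face_height_px / body_height_px

# Hypothetical measurements (in pixels) from two magazine photographs.
print(face_ism_index(180, 220))  # close-up portrait -> high face-ism (~0.82)
print(face_ism_index(60, 600))   # full-body shot    -> low face-ism (0.10)
```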

2.
Facial expressions of emotion influence interpersonal trait inferences (cited 4 times in total: 0 self-citations, 4 by others)
Theorists have argued that facial expressions of emotion serve the interpersonal function of allowing one animal to predict another's behavior. Humans may extend these predictions into the indefinite future, as in the case of trait inference. The hypothesis that facial expressions of emotion (e.g., anger, disgust, fear, happiness, and sadness) affect subjects' interpersonal trait inferences (e.g., dominance and affiliation) was tested in two experiments. Subjects rated the dispositional affiliation and dominance of target faces with either static or apparently moving expressions. They inferred high dominance and affiliation from happy expressions, high dominance and low affiliation from angry and disgusted expressions, and low dominance from fearful and sad expressions. The findings suggest that facial expressions of emotion convey not only a target's internal state, but also differentially convey interpersonal information, which could potentially seed trait inference. This article constitutes a portion of my dissertation research at Stanford University, which was supported by a National Science Foundation Fellowship and an American Psychological Association Dissertation Award. Thanks to Nancy Alvarado, Chris Dryer, Paul Ekman, Bertram Malle, Susan Nolen-Hoeksema, Steven Sutton, Robert Zajonc, and more anonymous reviewers than I can count on one hand for their comments.

3.
Lipps (1907) presented a model of empathy which had an important influence on later formulations. According to Lipps, individuals tend to mimic an interaction partner's behavior, and this nonverbal mimicry induces—via a feedback process—the corresponding affective state in the observer. The resulting shared affect is believed to foster the understanding of the observed person's self. The present study tested this model in the context of judgments of emotional facial expressions. The results confirm that individuals mimic emotional facial expressions, and that the decoding of facial expressions is accompanied by shared affect. However, no evidence was found that emotion recognition accuracy or shared affect is mediated by mimicry. Yet, voluntary mimicry was found to have some limited influence on the observer's assessment of the observed person's personality. The implications of these results with regard to Lipps' original hypothesis are discussed.

4.
Subjects imagined situations in which they reported enjoying themselves either alone or with others. Electromyographic (EMG) activity was recorded bilaterally from regions overlying the zygomatic major muscles responsible for smiling. Controlling for equal rated happiness in the two conditions, subjects showed more smiling in high-sociality than low-sociality imagery. In confirming imaginary audience effects during imagery, these data corroborate hypotheses that solitary facial displays are mediated by the presence of imaginary interactants, and suggest caution in employing them as measures of felt emotion. Avery Gilbert and Amy Jaffey had compelling insights throughout the course of the study. We thank Paul Ekman, Carroll Izard, and Paul Rozin for extensive comments on earlier drafts. We also thank Bernard Apfelbaum, Jon Baron, Janet Bavelas, John Cacioppo, Linda Camras, Dean Delis, Rob DeRubeis, Alan Fiske, Stephen Fowler, Greg McHugo, Harriet Oster, David Premack, W. John Smith, and David Williams for their valuable comments and suggestions.

5.
A method was developed for automated coding of facial behavior in computer-aided test or game situations. Facial behavior is registered automatically with the aid of small plastic dots which are affixed to pre-defined regions of the subject's face. During a task, the subject's face is videotaped, and the picture is digitized. A special pattern-recognition algorithm identifies the dot pattern, and an artificial neural network classifies the dot pattern according to the Facial Action Coding System (FACS; Ekman & Friesen, 1978). The method was tested in coding the posed facial expressions of three subjects, themselves FACS experts. Results show that it is possible to identify and differentiate facial expressions by their corresponding dot patterns. The method is independent of individual differences in physiognomy.
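The abstract only outlines the pipeline (tracked marker dots, a pattern-recognition step, a neural-network classifier for FACS action units). A minimal sketch of the classification stage, not the authors' implementation, could use scikit-learn's MLPClassifier on hypothetical dot-displacement features:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each sample is a flattened vector of (dx, dy) displacements of the facial
# marker dots relative to a neutral-face baseline; labels are FACS action
# units (e.g., 12 = lip corner puller). All data here are synthetic
# placeholders, not the authors' recordings.
rng = np.random.default_rng(0)
n_dots = 20                                         # hypothetical number of marker dots
X_train = rng.normal(size=(200, n_dots * 2))        # 200 synthetic dot-displacement vectors
y_train = rng.choice([1, 4, 6, 12, 15], size=200)   # a few example AU labels

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

new_frame = rng.normal(size=(1, n_dots * 2))        # dot pattern from one video frame
print("Predicted action unit:", clf.predict(new_frame)[0])
```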

6.
7.
We analyzed the facial behavior of 100 volunteers who video-recorded their own expressions while experiencing an episode of sexual excitement that concluded in an orgasm, and then posted their video clip on an Internet site. Four distinct observational periods from the video clips were analyzed and coded by FACS (Facial Action Coding System, Ekman and Friesen 1978). We found nine combinations of muscular movements produced by at least 5% of the senders. These combinations were consistent with facial expressions of sexual excitement described by Masters and Johnson (Human sexual response, 1966), and they included the four muscular movements of the core expression of pain (Prkachin, Pain, 51, 297–306, 1992).

8.
We measured facial behaviors shown by participants in a laboratory study in which a film was used to elicit intense emotions. Participants provided subjective reports of their emotions and their faces were recorded by a concealed camera. We did not find the coherence claimed by other authors (e.g., Rosenberg & Ekman, 1994) between the displayed facial expressions and subjective reports of emotion. We thus concluded that certain emotions are not a necessary or sufficient precondition of certain spontaneous expressions.

9.
People can discriminate cheaters from cooperators on the basis of negative facial expressions. However, such cheater detection is far from perfect in real-world situations. Therefore, it is possible that cheaters have the ability to disguise negative emotional expressions that signal their uncooperative attitude. To test this possibility, emotional intensity and trustworthiness were evaluated for facial photographs of cheaters and cooperators defined by scores in an economic game. The facial photographs had either posed happy or angry expressions. The angry expressions of cheaters were rated angrier and less trustworthy than those of cooperators. On the other hand, happy expressions of cheaters were higher in emotional intensity but comparable to those of cooperators in trustworthiness. These results suggest that cheater detection based on the processing of negative facial expressions can be thwarted by a posed or fake smile, which cheaters put on with higher intensity than cooperators.

10.
A method for studying emotional expression using posthypnotic suggestion to induce emotional states is presented. Subjects were videotaped while role-playing happiness, sadness or anger or after hypnotic induction of one of these three emotions. Judges then rated these videotapes for emotional tone. Results indicated a main effect for emotion expressed, with happiness and sadness more easily identified by judges than anger. Accuracy was also greater for happiness and sadness in the hypnotically induced condition. However, role-played anger was more easily identified than hypnotically induced anger. An interaction of channel (body/face) and emotion indicated that identification of sadness and anger was easier for judges when the body alone was shown. Findings are discussed in terms of display rules for the expression of emotion. We gratefully acknowledge Irving Kirsch, Ross Buck, and Paul Ekman for their helpful comments on a draft of this article. Special thanks to Reuben Baron, without whose support neither this article nor our careers in psychology would have been possible. A preliminary report of this study was presented at the meeting of the American Psychological Association in Toronto, August 1984.

11.
Measuring facial movement (cited 5 times in total: 0 self-citations, 5 by others)
A procedure has been developed for measuring visibly different facial movements. The Facial Action Code was derived from an analysis of the anatomical basis of facial movement. The method can be used to describe any facial movement (observed in photographs, motion picture film or videotape) in terms of anatomically based action units. The development of the method is explained, contrasting it to other methods of measuring facial behavior. An example of how facial behavior is measured is provided, and ideas about research applications are discussed. The research reported here was supported by a grant from NIMH, MH 167845. The authors are grateful to Wade Seaford, Dickinson College, for encouraging us to build our measurement system on the basis of specific muscular action. He convinced us that it would allow more precision, and that learning the anatomy would not be an overwhelming obstacle. Neither he nor we realized, however, how detailed and elaborate this undertaking would be. Seaford (1976) recently advanced some of the arguments we have made here about the value of an anatomically based measurement system. We are grateful also to those who first learned FAC and gave us many helpful suggestions as to how to improve the manual. We thank Linda Camras, Joe Hager, Harriet Oster, and Maureen O'Sullivan also for their comments on this report.

12.
Facial expressions of emotions convey information not only about emotional states but also about interpersonal intentions. The present study investigated whether factors known to influence the decoding of emotional expressions—the gender and ethnicity of the stimulus person as well as the intensity of the expression—would also influence attributions of interpersonal intentions. For this, 145 men and women rated emotional facial expressions posed by both Caucasian and Japanese male and female stimulus persons on perceived dominance and affiliation. The results showed that the sex and the ethnicity of the encoder influenced observers' ratings of dominance and affiliation. For anger displays only, this influence was mediated by expectations regarding how likely it is that a particular encoder group would display anger. Further, affiliation ratings were equally influenced by low-intensity and by high-intensity expressions, whereas only fairly intense emotional expressions affected attributions of dominance.

13.
Conceptual issues about deceit, specifically why lies fail and when and how behavior may betray a lie, provide the basis for considering the types of experimental situations that may be fruitful for the study of deceit. New evidence, integrating past reports with new unpublished findings, compares the relative efficacy of facial, bodily, vocal, paralinguistic and textual measures in discriminating deceptive from honest behavior. The findings also show that most people do not rely upon the most useful sources of information in judging whether someone is lying. The information reported here also appears in Credibility Assessment—A Unified Theoretical and Research Perspective, J. Yuille (Ed.), in press, Kluwer. The work described was supported by a Research Scientist Award from the National Institute of Mental Health (MH 06092) and a previous grant from NIMH (MH11976).

14.
Young mothers are at risk for depression. The article first reviews research on social and cognitive risk factors for depression and then considers the relationship between depression and child maltreatment. Cognitive-behavioral casework techniques that may improve social integration and self-management—mitigators of depression—are detailed. Finally, a case study demonstrates the use of these techniques with a depressed and maltreating mother. I am indebted to Joni Hardcastle, Andre Ivanoff, Rita Marlow, and Josie Solseng Maxwell for their contributions to the development and testing of the methods herein described and to Victoria Velasquez for permission to use the case study. I am grateful to Steven Paul Schinke and the staff of Social Work Research, Child Development and Mental Retardation Center, University of Washington, for their support of this project and the William T. Grant Foundation for funding this work and for their commitment to improving the mental health care of young people. Special thanks to Pixie Reiten and Mary Ann Liebert. For assistance preparing this article I thank Jordana Ash, Lorretta Morales Dodson, Christine Frazita, Lois Holt, Vicki Keller and Sharon Ikami.

15.
Ruth Ayaß, Visual Studies, 2020, 35(2-3): 169-192
The essay analyses the photographs produced and circulated during the March 2011 tsunami and earthquake disaster in Japan, which show destroyed buildings, flooded landscapes, and desperate people. Disaster destroys existing order. At first sight, the photographs of disaster depict this dissolution of order. The empirical analysis shows that the disastrousness of disaster is (also) created through the pictures of the disasters. The paper discusses why and in what way certain pictures reveal themselves to be iconographic of disaster and how they enter the visual memory as its representative. To achieve this, the study will invoke ethnomethodology. At the centre of the analysis is the question of the particular means with which these images illustrate the destructive force of the disasters and how pictures showing destruction and pain (‘pictures of pain’) turn into pictures effecting pain in the viewer (‘painful pictures’). The empirical study demonstrates: the photographs show the destruction of order; however, they do this in an orderly manner.

For Jörg Bergmann’s 70th birthday.

16.
While much research on nonverbal components of social interaction effectively employs film or videotape recording, a variety of circumstances preclude their use. For those situations that require direct observation of ongoing interactions, a simple technique is proposed that facilitates the reliable judgment of diverse behaviors on a time-sampling basis. This particular technique involves the tape-recorded cuing of exact observation times and instructions, which direct the observer's visual attention to the critical behaviors of specific subjects. The implementation of this technique in one setting and its potential application are briefly discussed. The author wishes to thank Paul Ekman for his comments on an earlier draft of this paper.
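As a rough illustration of the cueing idea (pre-recorded prompts telling the observer exactly when to look and at whom), the following sketch generates a time-sampling cue schedule; the subjects, interval, and instruction wording are placeholders, not details taken from the paper:

```python
import itertools

def cue_schedule(subjects, duration_s, interval_s):
    """Yield (time, instruction) pairs for a time-sampling observation session.
    Subjects are cued in rotation at a fixed interval; all parameter values
    here are illustrative, not those used in the original study.
    """
    rotation = itertools.cycle(subjects)
    t = 0
    while t < duration_s:
        yield t, f"Observe {next(rotation)}: code gaze direction and facial activity"
        t += interval_s

# Example: two subjects observed in turn every 15 seconds for one minute.
for t, instruction in cue_schedule(["Subject A", "Subject B"], duration_s=60, interval_s=15):
    print(f"{t:>4d} s  {instruction}")
```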

17.
Emotional intelligence (EI) can be defined as the ability to understand, perceive, and manage emotions. However, there is little research investigating how EI influences decision-making during emotionally difficult situations. We hypothesized that higher EI would correlate with greater utilization of socially relevant facial cues during emotional decision-making. Sixty-two 18- to 45-year-olds completed a decision-making task simulating an airport security screening during a credible terrorist threat. Participants viewed a series of facial photographs of potential passengers (white men and women) and selected which passengers to detain for further interrogation. The face photographs were previously rated for a set of character traits (e.g., aggression) by independent judges. Participants completed measures of trait (self-perceived) and ability (performance-based) EI and cognitive intelligence (IQ). With higher ability EI, participants were more likely to detain only individuals judged to be particularly high in negative traits (e.g., “aggression”) and especially low in positive traits (e.g., “trustworthy”), suggesting greater acuity in decision choices. These associations were driven primarily by the facilitating and understanding branches of EI (i.e., the ability to generate and use emotion to facilitate decision-making, and the ability to understand factors that generate and modify emotions). No association between trait EI or IQ and detainment decisions was found. Findings suggest that individuals with higher ability EI were more likely to utilize the available but limited social information (i.e., facial features) when completing an emotional decision-making task than those with lower EI. These findings have implications for real-life situations involving similarly difficult emotional decision-making processes.

18.
Character judgments, based on facial appearance, impact both perceivers’ and targets’ interpersonal decisions and behaviors. Nonetheless, the resilience of such effects in the face of longer acquaintanceship duration is yet to be determined. To address this question, we had 51 elderly long-term married couples complete self and informant versions of a Big Five Inventory. Participants were also photographed, while they were requested to maintain an emotionally neutral expression. A subset of the initial sample completed a shortened version of the Big Five Inventory in response to the pictures of other opposite-sex participants (with whom they were unacquainted). Oosterhof and Todorov’s (Proc Natl Acad Sci USA, 105:11087–11092, 2008) computer-based model of face evaluation was used to generate facial trait scores on trustworthiness, dominance, and attractiveness, based on participants’ photographs. Results revealed that structural facial characteristics, suggestive of greater trustworthiness, predicted positively biased, global informant evaluations of a target’s personality, among both spouses and strangers. Among spouses, this effect was impervious to marriage length. There was also evidence suggestive of a Dorian Gray effect on personality, since facial trustworthiness predicted not only spousal and stranger, but also self-ratings of extraversion. Unexpectedly, though, follow-up analyses revealed that (low) facial dominance, rather than (high) trustworthiness, was the strongest predictor of self-rated extraversion. Our present findings suggest that subtle emotional cues, embedded in the structure of emotionally neutral faces, exert long-lasting effects on personality judgments even among very well-acquainted targets and perceivers.

19.
The Facial Action Coding System (FACS) (Ekman & Friesen, 1978) is a comprehensive and widely used method of objectively describing facial activity. Little is known, however, about inter-observer reliability in coding the occurrence, intensity, and timing of individual FACS action units. The present study evaluated the reliability of these measures. Observational data came from three independent laboratory studies designed to elicit a wide range of spontaneous expressions of emotion. Emotion challenges included olfactory stimulation, social stress, and cues related to nicotine craving. Facial behavior was video-recorded and independently scored by two FACS-certified coders. Overall, we found good to excellent reliability for the occurrence, intensity, and timing of individual action units and for corresponding measures of more global emotion-specified combinations.
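The abstract does not state which agreement statistic was used; one common way to quantify inter-observer reliability for the occurrence of an action unit is Cohen's kappa, sketched here with invented codes for two hypothetical coders:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical occurrence codes (1 = action unit present, 0 = absent) assigned
# by two certified FACS coders to the same 12 video segments for AU 12.
coder_a = [1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1]
coder_b = [1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1]

# Cohen's kappa corrects raw percent agreement for agreement expected by
# chance; values around .61-.80 are conventionally read as "good" reliability.
print("kappa =", round(cohen_kappa_score(coder_a, coder_b), 2))
```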

20.
Few attempts have been made since the pioneering work of Ekman et al. (1980) to examine the development of the deliberate control of facial action units in children. We are reporting here two studies concerned with this issue. In Study 1, we investigated children’s ability to activate facial action units involved in sadness and happiness expressions as well as combinations of these action units. In Study 2, we examined children’s ability to pose happiness and sadness with their face, without telling them which action unit to activate. The children who took part in this study were simply asked to portray happiness and sadness as convincingly as possible. The results of Study 1 indicate a strong developmental progression in children’s ability to produce elementary facial components of both emotions as well as in their ability to produce a combination of the elements in the case of happiness. In agreement with prior research in motor development, several non-target action units were also activated when children performed the task. Their occurrence persisted throughout childhood, indicating limitations in the finer motor control achieved by children across age. The results obtained in Study 2 paralleled those obtained in Study 1 in many respects, providing evidence that the children used the technique of deliberate action to pose the two target emotions.

