Similar Documents
20 similar documents found (search time: 46 ms)
1.
To better understand early positive emotional expression, automated software measurements of facial action were supplemented with anatomically based manual coding. These convergent measurements were used to describe the dynamics of infant smiling and predict perceived positive emotional intensity. Over the course of infant smiles, degree of smile strength varied with degree of eye constriction (cheek raising, the Duchenne marker), which varied with degree of mouth opening. In a series of three rating studies, automated measurements of smile strength and mouth opening predicted naïve (undergraduate) observers’ continuous ratings of video clips of smile sequences, as well as naïve and experienced (parent) ratings of positive emotion in still images from the sequences. An a priori measure of smile intensity combining anatomically based manual coding of both smile strength and mouth opening predicted positive emotion ratings of the still images. The findings indicate the potential of automated and fine-grained manual measurements of facial actions to describe the course of emotional expressions over time and to predict perceptions of emotional intensity.

2.
Automated facial measurement using computer vision has the potential to objectively document continuous changes in behavior. To examine emotional expression and communication, we used automated measurements to quantify smile strength, eye constriction, and mouth opening in two 6-month-old infant-mother dyads who each engaged in a face-to-face interaction. Automated measurements showed high associations with anatomically based manual coding (concurrent validity); measurements of smiling showed high associations with mean ratings of positive emotion made by naive observers (construct validity). For both infants and mothers, smile strength and eye constriction (the Duchenne marker) were correlated over time, creating a continuous index of smile intensity. Infant and mother smile activity exhibited changing (nonstationary) local patterns of association, suggesting the dyadic repair and dissolution of states of affective synchrony. The study provides insights into the potential and limitations of automated measurement of facial action.
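The "changing (nonstationary) local patterns of association" reported here are the kind of statistic a sliding-window correlation captures. Below is a minimal Python sketch of that idea, not the authors' actual pipeline: the smile-strength series, sampling rate, window length, and infant-mother lag are all hypothetical stand-ins.

```python
# Minimal sketch of windowed (nonstationary) association between two
# smile-strength time series; all signals and parameters are hypothetical.
import numpy as np

def windowed_correlation(x: np.ndarray, y: np.ndarray, win: int) -> np.ndarray:
    """Pearson r of x and y within each sliding window of `win` samples."""
    r = np.full(x.size - win + 1, np.nan)
    for start in range(r.size):
        xs, ys = x[start:start + win], y[start:start + win]
        if xs.std() > 0 and ys.std() > 0:  # r is undefined for constant windows
            r[start] = np.corrcoef(xs, ys)[0, 1]
    return r

# Hypothetical demo: 60 s of smile strength sampled at 30 Hz, with the
# mother's series echoing the infant's at a ~1.5 s lag plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 1800)
infant = np.sin(2 * np.pi * t / 15) + 0.3 * rng.standard_normal(t.size)
mother = np.roll(infant, 45) + 0.3 * rng.standard_normal(t.size)
local_r = windowed_correlation(infant, mother, win=150)  # 5-s windows
print(f"local r ranges from {np.nanmin(local_r):.2f} to {np.nanmax(local_r):.2f}")
```

Plotting `local_r` over time would show the association strengthening and dissolving, consistent with the repair and dissolution of affective synchrony described above.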

3.
Perceptual responses to infant distress signals were studied in 16 cocaine-using and 15 comparison mothers. All mothers rated tape recordings of 48 replications of a newborn infant's hunger cry digitally altered to increase in fundamental frequency in 100-Hz increments. Cries were rated on 4 perceptual (arousing, aversive, urgent, and sick) and 6 caregiving rating scale items (clean, cuddle, feed, give pacifier, pick up, and wait and see) used in previous studies. Analyses of variance showed that, as cry pitch increased, cries were rated as more arousing, aversive, and urgent sounding. The highest pitched cries received the highest ratings for caregiving interventions. Main effects for cocaine use showed cocaine-using mothers (a) rated cries as less arousing, aversive, urgent, and sick; (b) indicated they were less likely to pick up or feed the infant; and (c) indicated they were more likely to give the crying infant a pacifier or just “wait and see.” A Group × Cry Pitch interaction effect showed that mothers in the cocaine group gave higher ratings to wait and see as the pitch of the cries increased, whereas mothers in the comparison group gave lower ratings to wait and see as the pitch of the cries increased. These ratings indicate that cocaine-using mothers found cries to be less perceptually salient and less likely to elicit nurturant caregiving responses. These results suggest that maternal cocaine use is associated with altered perceptions of infant distress signals that may provide the basis for differential social responsivity in the caregiving context.
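The stimulus manipulation used here (and again in item 5), a cry's fundamental frequency raised in 100-Hz increments, can be illustrated on a synthetic signal. The sketch below is a rough stand-in rather than the studies' actual digital alteration of a recorded newborn cry: the base fundamental, duration, and harmonic structure are assumptions.

```python
# Illustrative only: harmonic tones whose fundamental rises in 100-Hz
# steps, mimicking the logic of the pitch-altered cry stimuli. The base
# fundamental, duration, and harmonic count are hypothetical.
import numpy as np

SR = 44_100        # sample rate (Hz)
DUR = 1.0          # stimulus duration (s)
BASE_F0 = 500.0    # assumed starting fundamental (Hz)

def harmonic_tone(f0: float, n_harmonics: int = 5) -> np.ndarray:
    """A crude harmonic complex at fundamental f0, peak-normalized."""
    t = np.linspace(0.0, DUR, int(SR * DUR), endpoint=False)
    wave = sum(np.sin(2 * np.pi * f0 * k * t) / k
               for k in range(1, n_harmonics + 1))
    return wave / np.abs(wave).max()

# One stimulus per 100-Hz increment above the base fundamental.
stimuli = {BASE_F0 + 100.0 * step: harmonic_tone(BASE_F0 + 100.0 * step)
           for step in range(6)}
print(sorted(stimuli))  # [500.0, 600.0, ..., 1000.0]
```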

4.
Julia R. Irwin. Infancy, 2003, 4(4): 503–516.
This study examined whether perceivers can detect infant distress in the visual and acoustic signals within the cry. Parent and nonparent perceivers rated distress in 3-, 6-, 8-, and 12-month-old infants' cries that were manipulated to separate facial, vocal, and bodily action. Mean perceiver ratings differed for high- and low-distress cries at each infant age on the basis of facial and vocal action, but not bodily movement. Perceivers rated the cry sound as more distressed and the cry face as less distressed with increasing infant age. Parents rated the cries as less distressed overall than did nonparents. The results suggest that information about distress is available for perceivers in the crying infant's face and voice.

5.
Fifteen nondepressed, 15 moderately depressed, and 15 severely depressed women rated tape recordings of a newborn infant's hunger cry digitally altered to increase in fundamental frequency in 100-Hz increments. Cries were rated on 4 perceptual (e.g., arousing-not arousing) and 6 caregiving rating scale items (e.g., cuddle, feed) used in previous studies (Zeskind, 1983). Analyses of variance showed that, as cry pitch increased, cries were rated as more arousing, aversive, urgent, and sick sounding. The highest pitched cries received the highest ratings for caregiving interventions. Severely depressed women rated cries as less perceptually salient and less likely to elicit active caregiving responses. Interaction effects showed that severely depressed women were least responsive to the highest pitched cries. These results suggest that women's depression may alter perceptions of infant distress signals, especially at times of greater infant distress.

6.
We assessed the impact of social context on the judgment of emotional facial expressions as a function of self-construal and decoding rules. German and Greek participants rated spontaneous emotional faces shown either alone or surrounded by other faces with congruent or incongruent facial expressions. Greek participants were higher in interdependence than German participants. In line with cultural decoding rules, Greek participants rated anger expressions less intensely and sad and disgust expressions more intensely. Social context affected the ratings by both groups in different ways. In the more interdependent culture (Greece) participants perceived anger least intensely when the group showed neutral expressions, whereas sadness expressions were rated as most intense in the absence of social context. In the independent culture (Germany) a group context (others expressing anger or happiness) additionally amplified the perception of angry and happy expressions. In line with the notion that these effects are mediated by more holistic processing linked to higher interdependence, this difference disappeared when we controlled for interdependence on the individual level. The findings confirm the usefulness of considering both country level and individual level factors when studying cultural differences.

7.
Several studies have already documented how Americans and Japanese differ in both the expression and perception of facial expressions of emotion in general, and of smiles in particular. These cultural differences can be linked to differences in cultural display and decoding rules (Ekman, 1972, and Buck, 1984, respectively). The existence of these types of rules suggests that people of different cultures may hold different assumptions about social-personality characteristics on the basis of smiling versus non-smiling faces. We suggest that Americans have come to associate more positive characteristics with smiling faces than do the Japanese. We tested this possibility by presenting American and Japanese judges with smiles or neutral faces (i.e., faces with no muscle movement) depicted by both Caucasian and Japanese male and female posers. The judges made scalar ratings of each face they viewed on four different dimensions. The findings did indicate that Americans and Japanese differed in their judgments, but not on all dimensions.

David Matsumoto was supported in part by a research grant from the National Institute of Mental Health (MH 42749-01) and by a Faculty Award for Creativity, Scholarship, and Research from San Francisco State University. We would like to thank Masami Kobayashi, Fazilet Kasri, Deborah Krupp, Bill Roberts, and Michelle Weissman for their aid in our research program on emotion. We would especially like to thank the Editor for her excellent suggestions and help in conceptualizing this research.

8.
The facial feedback hypothesis states that facial actions modulate subjective experiences of emotion. Using the voluntary facial action technique, in which participants react with instruction-induced smiles and frowns when exposed to positive and negative emotional pictures and then rate the pleasantness of these stimuli, four questions were addressed in the present study. The results in Experiment 1 demonstrated a feedback effect: participants experienced the stimuli as more pleasant during smiling than during frowning. However, this effect was present only during the critical actions of smiling and frowning, with no remaining effects after 5 min or after 1 day. In Experiment 2, feedback effects were found only when the facial action (smile/frown) was incongruent with the presented emotion (positive/negative), demonstrating attenuating but not enhancing modulation. Finally, no difference in the intensity of the produced feedback effect was found between smiling and frowning, and no difference in feedback effect was found between positive and negative emotions. In conclusion, facial feedback appears to occur mainly during actual facial actions, and primarily to attenuate ongoing emotional states.

9.
Eighty-two younger and older adults participated in a two-part study of the decoding of emotion through body movements and gestures. In the first part, younger and older adults identified emotions depicted in brief videotaped displays of young adult actors portraying emotional situations. In each display, the actors were silent and their faces were electronically blurred in order to isolate the body cues to emotion. Although both groups made accurate emotion identifications well above chance levels, older adults made more overall errors, and this was especially true for negative emotions. Moreover, their errors were more likely to reflect the misidentification of emotional displays as neutral in content. In the second part, younger and older adults rated the videotaped displays using scales reflecting several movement dimensions (e.g., form, tempo, force, and movement). The ratings of both age groups were in high agreement and provided reliable information about particular body cues to emotion. The errors made by older adults were linked to reactions to exaggerated or ambiguous body cues.

10.
Social anxiety may be related to a different pattern of facial mimicry and contagion of others’ emotions. We report two studies in which participants with different levels of social anxiety reacted to others’ emotional displays, either shown on a computer screen (Study 1) or in an actual social interaction (Study 2). Study 1 examined facial mimicry and emotional contagion in response to displays of happiness, anger, fear, and contempt. Participants mimicked negative and positive emotions to some extent, but we found no relation between mimicry and the social anxiety level of the participants. Furthermore, socially anxious individuals were more prone to experience negative emotions and felt more irritated in response to negative emotion displays. In Study 2, we found that social anxiety was related to enhanced mimicry of smiling, but this was only the case for polite smiles and not for enjoyment smiles. These results suggest that socially anxious individuals tend to catch negative emotions from others, but suppress their expression by mimicking positive displays. This may be explained by the tendency of socially anxious individuals to avoid conflict or rejection.

11.
This study examines the relationship between the ratings made of a set of smiling and neutral expressions and the facial features which influence these ratings. Judges were shown forty real face photographs of smile and neutral expressions and forty line drawings derived from these photographs and were asked to rate the degree of smiling behavior of each expression. The line drawings of the face were generated by a microcomputer which utilizes a mathematical model to quantify facial expression. Twelve facial measures were generated by the computer. Significant differences were found between the ratings of smile and neutral expressions. The Mode of Presentation did not contribute significantly to the ratings. Using the facial measures as separate covariates, five mouth measures and one eye measure were found to discriminate significantly between the ratings made on smile and neutral expressions. When entered as simultaneous covariates, only four mouth measures contributed to the differences found in the expression ratings. Future research projects which may utilise the computer model are discussed.

The research reported in this paper was conducted in the Department of Psychology, University of Adelaide. The authors would like to thank Ulana Sudomlak for her assistance in the gathering and recording of the data for this project, and the reviewers for their helpful comments on an earlier version of this paper.

12.
The present study examined effects of temporarily salient and chronic self-construal on decoding accuracy for positive and negative facial expressions of emotion. We primed independent and interdependent self-construal in a sample of participants who then rated the emotion expressions of a central character (target) in a cartoon showing a happy, sad, angry, or neutral facial expression in a group setting. Primed interdependence was associated with lower recognition accuracy for negative emotion expressions. Primed and chronic self-construal interacted such that for interdependence primed participants, higher chronic interdependence was associated with lower decoding accuracy for negative emotion expressions. Chronic independent self-construal was associated with higher decoding accuracy for negative emotion. These findings add to an increasing literature that highlights the significance of perceivers’ socio-cultural factors, self-construal in particular, for emotion perception.

13.
Women were videotaped while they spoke about a positive and a negative experience either in the presence of an experimenter or alone. They gave self-reports of their emotional experience, and the videotapes were rated for facial and verbal expression of emotion. Participants spoke less about their emotions when the experimenter (E) was present. When E was present, during positive disclosures they smiled more, but in negative disclosures they showed less negative and more positive expression. Facial behavior was only related to experienced emotion during positive disclosure when alone. Verbal behavior was related to experienced emotion for positive and negative disclosures when alone. These results show that verbal and nonverbal behaviors, and their relationship with emotional experience, depend on the type of emotion, the nature of the emotional event, and the social context.

14.
Sex, age and education differences in facial affect recognition were assessed within a large sample (n = 7,320). Results indicate superior performance by females and younger individuals in the correct identification of facial emotion, with the largest advantage for low intensity expressions. Though there were no demographic differences for identification accuracy on neutral faces, controlling for response biases by males and older individuals to label faces as neutral revealed sex and age differences for these items as well. This finding suggests that inferior facial affect recognition performance by males and older individuals may be driven primarily by instances in which they fail to detect the presence of emotion in facial expressions. Older individuals also demonstrated a greater tendency to label faces with negative emotion choices, while females exhibited a response bias for sad and fear. These response biases have implications for understanding demographic differences in facial affect recognition.

15.
In this study, we investigated the emotional effect of dynamic presentation of facial expressions. Dynamic and static facial expressions of negative and positive emotions were presented using computer-morphing (Experiment 1) and videos of natural changes (Experiment 2), as well as other dynamic and static mosaic images. Participants rated the valence and arousal of their emotional response to the stimuli. The participants consistently reported higher arousal responses to dynamic than to static presentation of facial expressions and mosaic images for both valences. Dynamic presentation had no effect on the valence ratings. These results suggest that dynamic presentation of emotional facial expressions enhances the overall emotional experience without a corresponding qualitative change in the experience, although this effect is not specific to facial images.

16.
Research has demonstrated that infants recognize emotional expressions of adults in the first half year of life. We extended this research to a new domain, infant perception of the expressions of other infants. In an intermodal matching procedure, 3.5- and 5-month-old infants heard a series of infant vocal expressions (positive and negative affect) along with side-by-side dynamic videos in which one infant conveyed positive facial affect and another infant conveyed negative facial affect. Results demonstrated that 5-month-olds matched the vocal expressions with the affectively congruent facial expressions, whereas 3.5-month-olds showed no evidence of matching. These findings indicate that by 5 months of age, infants detect, discriminate, and match the facial and vocal affective displays of other infants. Further, because the facial and vocal expressions were portrayed by different infants and shared no face–voice synchrony, temporal, or intensity patterning, matching was likely based on detection of a more general affective valence common to the face and voice.

17.
The goal of this study was to examine whether individual differences in the intensity of facial expressions of emotion are associated with individual differences in the voluntary control of facial muscles. Fifty college students completed a facial mimicry task, and were judged on the accuracy and intensity of their facial movements. Self-reported emotional experience was measured after subjects viewed positive and negative affect-eliciting filmclips, and intensity of facial expressiveness was measured from videotapes recorded while the subjects viewed the filmclips. There were significant sex differences in both facial mimicry task performance and responses to the filmclips. Accuracy and intensity scores on the mimicry task, which were not significantly correlated with one another, were both positively correlated with the intensity of facial expressiveness in response to the filmclips, but were not associated with reported experiences.

We wish to thank the Editor and two anonymous reviewers for their helpful comments on an earlier draft of this paper.

18.
Does mood influence people’s tendency to accept observed facial expressions as genuine? Based on recent theories of affect and cognition, two experiments predicted and found that negative mood increased and positive mood decreased people’s skepticism about the genuineness of facial expressions. After a mood induction, participants viewed images of faces displaying (a) positive, neutral, and negative expressions (Exp. 1), or (b) displays of six specific emotions (Exp. 2). Judgments of genuineness, valence, and confidence ratings were collected. As predicted, positive affect increased, and negative affect decreased, the perceived genuineness of facial expressions, and there was some evidence for affect-congruence in judgments. The relevance of these findings for everyday nonverbal communication and strategic interpersonal behavior is considered, and their implications for recent affect-cognition theories are discussed.

19.
The impact of singular (e.g. sadness alone) and compound (e.g. sadness and anger together) facial expressions on individuals' recognition of faces was investigated. In three studies, a face recognition paradigm was used as a measure of the proficiency with which participants processed compound and singular facial expressions. For both positive and negative facial expressions, participants displayed greater proficiency in processing compound expressions relative to singular expressions. Specifically, the accuracy with which faces displaying compound expressions were recognized was significantly higher than the accuracy with which faces displaying singular expressions were recognized. Possible explanations involving the familiarity, distinctiveness, and salience of the facial expressions are discussed.

20.
We report two studies which attempt to explain why some researchers found that neutral faces determine judgments of recognition as strongly as expressions of basic emotion, even through discrepant contextual information. In the first study we discarded the possibility that neutral faces could have an intense but undetected emotional content: 60 students' dimensional ratings showed that 10 neutral faces were perceived as less emotional than 10 emotional expressions. In Study 2 we tested whether neutral faces can convey strong emotional messages in some contexts: 128 students' dimensional ratings on 36 discrepant combinations of neutral faces or expressions with contextual information were more predictable from expressions when the contextual information consisted of common, everyday situations, but were more predictable from neutral faces when the context was an uncommon, extreme situation. In line with our hypothesis, we discuss these paradoxical findings as being caused by the salience of neutral faces in some particular contexts.

This research was conducted as a part of the first author's doctoral dissertation, and was supported by a grant (PS89-022) of the Spanish DGICyT. We thank David Weston for his help in preparing the text. We also thank two anonymous reviewers for their valuable comments on a previous draft of this article.
