Similar Articles
20 similar articles found
1.
2.
Whether recognition of emotion from facial expression is affected by distortions of pictorial quality has rarely been tested, with the exception of the influence of picture size on emotion recognition. Yet this question is important for (low-cost) telecommunication and teleconferencing. Here an attempt is made to study whether emotion recognition from facial expression is impaired when video stimuli are degraded both in spatial (pixel) resolution and in temporal resolution (refresh rate). N = 56 stimuli, in which professional actors encoded 14 different emotions, were presented to groups of judges (N = 10 in each condition) in six different distortion conditions. Furthermore, judges were confronted with a control condition presenting the non-distorted stimuli. Channel of information (close-up of the face versus full-body recording) was also manipulated. Results indicate (besides main effects for type of emotion encoded and for channel) that emotion recognition is impaired by reductions of both spatial and temporal resolution, but that even very low spatial and temporal resolutions yield recognition rates still considerably above chance expectation. Results are discussed with respect to the importance of facial expression and body movement in communicating emotions, and with respect to applied aspects of telecommunication.

3.
The facial feedback hypothesis states that facial actions modulate subjective experiences of emotion. Using the voluntary facial action technique, in which participants react with instruction-induced smiles and frowns when exposed to positive and negative emotional pictures and then rate the pleasantness of these stimuli, four questions were addressed in the present study. The results of Experiment 1 demonstrated a feedback effect: participants experienced the stimuli as more pleasant when smiling than when frowning. However, this effect was present only during the critical actions of smiling and frowning, with no remaining effects after 5 min or after 1 day. In Experiment 2, feedback effects were found only when the facial action (smile/frown) was incongruent with the presented emotion (positive/negative), demonstrating attenuating but not enhancing modulation. Finally, no difference in the intensity of the produced feedback effect was found between smiling and frowning, and no difference in feedback effect was found between positive and negative emotions. In conclusion, facial feedback appears to occur mainly during actual facial actions, and primarily to attenuate ongoing emotional states.

4.
5.
When we perceive the emotions of other people, we extract much information from the face. The present experiment used FACS (Facial Action Coding System), which is an instrument that measures the magnitude of facial action from a neutral face to a changed, emotional face. Japanese undergraduates judged the emotion in pictures of 66 static Japanese male faces (11 static pictures for each of six basic expressions: happiness, surprise, fear, anger, sadness, and disgust), ranging from neutral faces to maximally expressed emotions. The stimuli had previously been scored with FACS and were presented in random order. A high correlation between the subjects' judgments of facial expressions and the FACS scores was found.

6.
According to the facial feedback hypothesis, facial muscles not only express emotions; they can also modulate subjective experiences of emotion and initiate emotions. This study examined the voluntary facial action technique, in which participants were instructed to react with the zygomaticus major muscle (smile) or the corrugator supercilii muscle (frown) when exposed to different stimuli. The results demonstrate that the technique effectively induces facial feedback effects. Using this technique, we further addressed three important areas of facial feedback and found, first, that facial feedback did not modulate the experience of positive and negative emotion-evoking stimuli differently. Second, the modulating ability produced significant feedback effects, while the initiating ability did not. Third, a feedback effect remained and could be detected even some time after the critical manipulation. It is concluded that the present technique can be used in future studies of facial feedback.

7.
Facial expressions of fear and disgust have repeatedly been found to be less well recognized than those of other basic emotions by children. We undertook two studies in which we investigated the recognition and visual discrimination of these expressions in school-age children. In Study 1, children (5, 6, 9, and 10 years of age) were shown pairs of facial expressions, and asked to tell which one depicted a target emotion. The results indicated that accuracy in 9- and 10-year-olds was higher than in 5- and 6-year-olds for three contrasts: disgust–anger, fear–surprise, and fear–sadness. Younger children had more difficulty recognizing disgust when it was presented along with anger, and in recognizing fear when it was presented along with surprise. In Study 2, children (5, 6, 9, and 10 years of age) were shown a target expression along with two other expressions, and were asked to point to the expression that was the most similar to the target. Contrary to our expectations, even 5- and 6-year-olds were very accurate in discriminating fear and disgust from the other emotions, suggesting that visual perception was not the main limiting factor for the recognition of these emotions in school-age children.

8.
The purpose of this study was to examine the recognition of facial expressions of six emotions as a function of the sex and level of education (high school, college, university) of the subjects. Three hundred French-speaking citizens of Quebec judged which emotion was expressed in various facial stimuli presented on slides. Results show that, overall, recognition of emotions was very good. However, there were significant and strong differences between emotions, while sex and level of education did not have strong effects on the results. This research was supported by grant EQ-1717 from Fonds FCAC (Gouvernement du Québec).

9.
Older adults tend to perform worse on emotion perception tasks than younger adults. How this age difference relates to other interpersonal perception tasks and to conversation ability remains an open question. In the present study, we assessed 32 younger and 30 older adults' accuracy in perceiving (1) static facial expressions, (2) emotions, attitudes, and intentions from videos, and (3) interpersonal constructs (e.g., kinship). Participants' conversation ability was rated by coders from a videotaped, dyadic problem-solving task. Younger adults were more accurate than older adults at perceiving some, but not all, emotions. No age differences in accuracy were found on the other perception tasks or in conversation ability. Some, but not all, of the interpersonal perception tasks were related. None of the perception tasks predicted conversation ability. Thus, although the literature suggests a robust age difference in emotion perception accuracy, this difference does not seem to transfer to other interpersonal perception tasks or interpersonal outcomes.

10.
11.
Facial expressions related to sadness are a universal signal of nonverbal communication. Although results of many psychology studies have shown that drooping of the lip corners, raising of the chin, and oblique eyebrow movements (a combination of inner brow raising and brow lowering) express sadness, no report has described a study elucidating facial expression characteristics under well-controlled circumstances with people actually experiencing the emotion of sadness itself. Therefore, spontaneous facial expressions associated with sadness remain unclear. We conducted this study to accumulate important findings related to spontaneous facial expressions of sadness. We recorded the spontaneous facial expressions of a group of participants as they experienced sadness during an emotion-elicitation task. This task required a participant to recall neutral and sad memories while listening to music. We subsequently conducted a detailed analysis of their sad and neutral expressions using the Facial Action Coding System. The prototypical facial expressions of sadness in earlier studies were not observed when people experienced sadness as an internal state under non-social circumstances. By contrast, they expressed tension around the mouth, which might function as a form of suppression. Furthermore, results show that parts of these facial actions are not only related to sad experiences but also to other emotional experiences such as disgust, fear, anger, and happiness. This study revealed the possibility that new facial expressions contribute to the experience of sadness as an internal state.

12.
We assessed the impact of social context on the judgment of emotional facial expressions as a function of self-construal and decoding rules. German and Greek participants rated spontaneous emotional faces shown either alone or surrounded by other faces with congruent or incongruent facial expressions. Greek participants were higher in interdependence than German participants. In line with cultural decoding rules, Greek participants rated anger expressions less intensely and sad and disgust expressions more intensely. Social context affected the ratings by both groups in different ways. In the more interdependent culture (Greece) participants perceived anger least intensely when the group showed neutral expressions, whereas sadness expressions were rated as most intense in the absence of social context. In the independent culture (Germany) a group context (others expressing anger or happiness) additionally amplified the perception of angry and happy expressions. In line with the notion that these effects are mediated by more holistic processing linked to higher interdependence, this difference disappeared when we controlled for interdependence on the individual level. The findings confirm the usefulness of considering both country level and individual level factors when studying cultural differences.

13.
Twenty-five high-functioning, verbal children and adolescents with autism spectrum disorders (ASD; age range 8–15 years) who demonstrated a facial emotion recognition deficit were block randomized to an active intervention (n = 12) or a waitlist control (n = 13) group. The intervention was a modification of a commercially available, computerized, dynamic facial emotion training tool, the MiX by Humintell©. Modifications were introduced to address the special learning needs of individuals with ASD and to address limitations in current emotion recognition programs. Modifications included coach assistance, didactic instruction for seven basic emotions, scaffolded instruction with repeated practice at increasing presentation speeds, guided attention to relevant facial cues, and imitation of expressions. Training occurred twice each week for 45–60 min across an average of six sessions. Outcome measures were administered prior to and immediately after treatment, as well as after a delay period of 4–6 weeks. Outcome measures included (a) direct assessment of facial emotion recognition, (b) emotion self-expression, and (c) generalization through emotion awareness in videos and stories, use of emotion words, and self-, parent-, and teacher-report on social functioning questionnaires. The facial emotion training program enabled children and adolescents with ASD to identify feelings in facial expressions more accurately and quickly, with stimuli from both the training tool and the generalization measures, and to demonstrate improved self-expression of facial emotion.

14.
Do infants show distinct negative facial expressions for different negative emotions? To address this question, European American, Chinese, and Japanese 11-month-olds were videotaped during procedures designed to elicit mild anger or frustration and fear. Facial behavior was coded using Baby FACS, an anatomically based scoring system. Infants' nonfacial behavior differed across procedures, suggesting that the target emotions were successfully elicited. However, evidence for distinct, emotion-specific facial configurations corresponding to fear versus anger was not obtained. Although facial responses were largely similar across cultures, some differences were also observed. Results are discussed in terms of functionalist and dynamical systems approaches to emotion and emotional expression.

15.
Women were videotaped while they spoke about a positive and a negative experience either in the presence of an experimenter or alone. They gave self-reports of their emotional experience, and the videotapes were rated for facial and verbal expression of emotion. Participants spoke less about their emotions when the experimenter (E) was present. When E was present, they smiled more during positive disclosures, but showed less negative and more positive expression during negative disclosures. Facial behavior was related to experienced emotion only during positive disclosures when alone. Verbal behavior was related to experienced emotion for both positive and negative disclosures when alone. These results show that verbal and nonverbal behaviors, and their relationship with emotional experience, depend on the type of emotion, the nature of the emotional event, and the social context.

16.
The perception of emotional facial expressions may activate corresponding facial muscles in the receiver, also referred to as facial mimicry. Facial mimicry is highly dependent on the context and type of facial expressions. While previous research almost exclusively investigated mimicry in response to pictures or videos of emotional expressions, studies with a real, face-to-face partner are still rare. Here we compared facial mimicry of angry, happy, and sad expressions and emotion recognition in a dyadic face-to-face setting. In sender-receiver dyads, we recorded facial electromyograms in parallel. Senders communicated to the receivers—with facial expressions only—the emotions felt during specific personal situations in the past, eliciting anger, happiness, or sadness. Receivers mimicked happiness most, sadness to a lesser degree, and anger least. In actor-partner interdependence models we showed that the receivers' own facial activity influenced their ratings, which increased the agreement between the senders' and receivers' ratings for happy, but not for angry or sad, expressions. These results are in line with the Emotion Mimicry in Context View, holding that humans mimic happy expressions according to affiliative intentions. The mimicry of sad expressions is less intense, presumably because it signals empathy and might imply personal costs. Direct anger expressions are mimicked the least, possibly because anger communicates threat and aggression. Taken together, we show that incidental facial mimicry in a face-to-face setting is positively related to recognition accuracy for non-stereotypical happy expressions, supporting the functionality of facial mimicry.

17.
Power is associated with living in reward-rich environments and causes behavioural disinhibition (Keltner, Gruenfeld, & Anderson, 2003). Powerful people also have greater freedom of emotional expression (Hecht & LaFrance, 1998). Two studies were conducted with the aim of (a) analyzing the effect of dispositional power on emotion suppression, and (b) exploring the simple and interaction effects of dispositional and situational power on emotion suppression. In the first, correlational study, individuals' power was found to be negatively correlated with emotion suppression. In the second, experimental study, participants were assigned to a powerful or powerless position, and negative emotions were induced with pictures. Participants were asked to suppress their emotions during the presentation of the pictures. Emotion suppression was measured using the suppression subscale of the Emotion Regulation Questionnaire (Gross & John, 2003). Results showed that dispositionally powerless participants suppressed their emotions more than dispositionally powerful participants only when they were assigned to a low-power position. These results are discussed.

18.
We measured facial behaviors shown by participants in a laboratory study in which a film was used to elicit intense emotions. Participants provided subjective reports of their emotions, and their faces were recorded by a concealed camera. We did not find the coherence claimed by other authors (e.g., Rosenberg & Ekman, 1994) between the displayed facial expressions and subjective reports of emotion. We thus concluded that certain emotions are not a necessary or sufficient precondition for certain spontaneous expressions.

19.
This article introduces the Children's Scales of Pleasure and Arousal as instruments to enable children to provide judgments of emotions they witness or experience along the major dimensions of affect. In two studies (Study 1: N = 160, 3–11 years and adults; Study 2: N = 280, 3–5 years and adults), participants used the scales to indicate the levels of pleasure or arousal they perceived in stylized drawings of facial expressions, in photographs of facial expressions, or in emotion labels. All age groups used the Pleasure Scale reliably and accurately with all three types of stimuli. All used the Arousal Scale with stylized faces and with facial expressions, but only 5-year-olds did so for emotion labels.

20.
Infants can detect individuals who demonstrate emotions that are incongruent with an event and are less likely to trust them. However, the nature of the mechanisms underlying this selectivity is currently subject to controversy. The objective of this study was to examine whether infants' socio-cognitive and associative learning skills are linked to their selective trust. A total of 102 14-month-olds were exposed to a person who demonstrated congruent or incongruent emotional referencing (e.g., happy when looking inside an empty box), and were tested on their willingness to follow the emoter's gaze. Knowledge inference and associative learning tasks were also administered. It was hypothesized that infants would be less likely to trust the incongruent emoter and that this selectivity would be related to their associative learning skills, and not their socio-cognitive skills. The results revealed that infants were not only able to detect the incongruent emoter, but were subsequently less likely to follow her gaze toward an object invisible to them. More importantly, infants who demonstrated superior performance on the knowledge inference task, but not the associative learning task, were better able to detect the person's emotional incongruency. These findings provide additional support for the rich interpretation of infants' selective trust.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号