Similar Articles
20 similar articles retrieved (search time: 62 ms)
1.
Adults' perceptions provide information about the emotional meaning of infant facial expressions. This study asks whether similar facial movements influence adult perceptions of emotional intensity in both infant positive (smile) and negative (cry face) facial expressions. Ninety-five college students rated a series of naturally occurring and digitally edited images of infant facial expressions. Naturally occurring smiles and cry faces involving the co-occurrence of greater lip movement, mouth opening, and eye constriction were rated as expressing stronger positive and negative emotion, respectively, than expressions without these three features. Ratings of digitally edited expressions indicated that eye constriction contributed to higher ratings of positive emotion in smiles (i.e., in Duchenne smiles) and that greater eye constriction contributed to higher ratings of negative emotion in cry faces. Stronger mouth opening contributed to higher ratings of arousal in both smiles and cry faces. These findings indicate that a set of similar facial movements is linked to perceptions of greater emotional intensity, whether the movements occur in positive or negative infant emotional expressions. This account is discussed with reference to discrete, componential, and dynamic systems theories of emotion.

2.
Sex, age, and education differences in facial affect recognition were assessed in a large sample (n = 7,320). Results indicate superior performance by females and younger individuals in the correct identification of facial emotion, with the largest advantage for low-intensity expressions. Although there were no demographic differences in identification accuracy for neutral faces, controlling for the response bias of males and older individuals to label faces as neutral revealed sex and age differences for these items as well. This finding suggests that the poorer facial affect recognition of males and older individuals may be driven primarily by instances in which they fail to detect the presence of emotion in facial expressions. Older individuals also showed a greater tendency to label faces with negative emotion choices, while females exhibited a response bias toward sad and fear labels. These response biases have implications for understanding demographic differences in facial affect recognition.

3.
This study examined age and gender differences in decoding nonverbal cues in a school population of 606 (pre)adolescents (9–15 years). The focus was on differences in the perceived intensity of several emotions in both basic and non-basic facial expressions. Age differences were found in decoding low-intensity and ambiguous faces, but not basic expressions. Older adolescents read more negative meaning into these more subtle and complex facial cues. Girls attributed more anger to both basic and non-basic facial expressions and, compared with boys, showed a general negative bias in decoding non-basic facial expressions. The findings are interpreted in light of the development of emotion regulation and its importance for developing relationships.
Yolanda van Beek

4.
Research has demonstrated that infants recognize the emotional expressions of adults in the first half year of life. We extended this research to a new domain: infant perception of the expressions of other infants. In an intermodal matching procedure, 3.5- and 5-month-old infants heard a series of infant vocal expressions (positive and negative affect) along with side-by-side dynamic videos in which one infant conveyed positive facial affect and another conveyed negative facial affect. Results demonstrated that 5-month-olds matched the vocal expressions with the affectively congruent facial expressions, whereas 3.5-month-olds showed no evidence of matching. These findings indicate that by 5 months of age, infants detect, discriminate, and match the facial and vocal affective displays of other infants. Further, because the facial and vocal expressions were portrayed by different infants and shared no face–voice synchrony or common temporal or intensity patterning, matching was likely based on detection of a more general affective valence common to the face and voice.

5.
One of the most prevalent problems in face transplant patients is an inability to generate facial expressions of emotion. The purpose of this study was to measure the subjective recognition of patients' emotional expressions by other people. We examined facial expressions of six emotions in two facial transplant patients (patient A = partial, patient B = full) and one healthy control, using video clips to evoke emotions. We recorded the target subjects' facial expressions with a video camera while they watched the clips. These recordings were then shown to a panel of 130 viewers and rated for degree of emotional expressiveness on a 7-point Likert scale. Emotional expressiveness scores were higher for the healthy control than for patients A and B, and varied as a function of emotion. The most recognizable emotion was happiness. The least recognizable emotions in patient A were fear, surprise, and anger. The expressions of patient B scored lower than those of patient A and the healthy control. The findings show that partial and full face transplant patients may have difficulty generating facial expressions of emotion even if they can feel those emotions, and that different parts of the face appear to play critical roles in different emotional expressions.

6.
This study investigated parents' emotion-related beliefs, experience, and expression, and children's recognition of their parents' emotions, in 40 parent-child dyads. Parents reported their beliefs about the danger of children's emotions and about guiding children's emotions. While viewing emotion-eliciting film clips, parents self-reported their emotional experience and their masking of emotion. Children and observers rated videos of the parents watching the emotion-eliciting film clips. Fathers reported more masking than mothers, and their emotional expressions were more difficult for both observers and children to recognize than mothers' expressions. For fathers, but not mothers, showing clearer expressions was related to children's general skill at recognizing emotional expressions. Parents who believed emotions are dangerous reported greater masking of emotional expression. Contrary to hypothesis, when parents strongly believed in guiding their child's emotion socialization, children showed less accurate recognition of their parents' emotions.
Julie C. Dunsmore

7.
To better understand early positive emotional expression, automated software measurements of facial action were supplemented with anatomically based manual coding. These convergent measurements were used to describe the dynamics of infant smiling and predict perceived positive emotional intensity. Over the course of infant smiles, degree of smile strength varied with degree of eye constriction (cheek raising, the Duchenne marker), which varied with degree of mouth opening. In a series of three rating studies, automated measurements of smile strength and mouth opening predicted naïve (undergraduate) observers' continuous ratings of video clips of smile sequences, as well as naïve and experienced (parent) ratings of positive emotion in still images from the sequences. An a priori measure of smile intensity combining anatomically based manual coding of both smile strength and mouth opening predicted positive emotion ratings of the still images. The findings indicate the potential of automated and fine-grained manual measurements of facial actions to describe the course of emotional expressions over time and to predict perceptions of emotional intensity.
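A minimal sketch of the analytic idea, assuming invented data: ordinary least squares relating two automated facial measurements (smile strength and mouth opening) to observers' continuous positive-emotion ratings. None of the variable names or values come from the study.

```python
# Hedged sketch (not the authors' code): relating automated measurements of
# smile strength and mouth opening to observers' positive-emotion ratings
# via ordinary least squares. All data below are made-up placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame automated measurements for one smile sequence.
smile_strength = rng.uniform(0, 1, size=200)  # e.g., a lip-corner-pull score
mouth_opening = rng.uniform(0, 1, size=200)   # e.g., a jaw-drop score
rating = 0.6 * smile_strength + 0.3 * mouth_opening + rng.normal(0, 0.1, 200)

# Design matrix with an intercept column.
X = np.column_stack([np.ones_like(smile_strength), smile_strength, mouth_opening])
coef, *_ = np.linalg.lstsq(X, rating, rcond=None)
pred = X @ coef

# Proportion of rating variance the two measurements explain (R^2).
r2 = 1 - np.sum((rating - pred) ** 2) / np.sum((rating - rating.mean()) ** 2)
print(f"intercept={coef[0]:.2f}, smile={coef[1]:.2f}, mouth={coef[2]:.2f}, R^2={r2:.2f}")
```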

8.
This study examined the relationship between decoding nonverbal cues and depressive symptoms in a general school population of 606 children and adolescents (9–15 years). The focus was on the perceived intensity of several emotions in both basic and non-basic facial expressions. The perceived intensities of anger and joy in low-intensity facial expressions were related to depression. The higher the perceived intensity of anger, the more depressed adolescents were, whereas the reverse effect was found for the perception of joy, though only in girls. These results suggest that studying the development of decoding biases in low-intensity facial expressions may be useful for understanding the development of individual and gender differences in depression during adolescence.
Yolanda van Beek

9.
Previous research employing the facial affect decision task (FADT) indicates that when listeners are exposed to semantically anomalous utterances produced in different emotional tones (prosody), the emotional meaning of the prosody primes decisions about an emotionally congruent rather than incongruent facial expression (Pell, M. D., Journal of Nonverbal Behavior, 29, 45–73). This study developed the FADT further by investigating the approximate time course of prosody–face interactions in nonverbal emotion processing. Participants made facial affect decisions about happy and sad face targets after listening to utterance fragments produced in an emotionally related, unrelated, or neutral prosody, cut to 300, 600, or 1000 ms in duration. Results indicated that prosodic information lasting at least 600 ms was necessary, presumably to activate the shared emotion knowledge responsible for prosody–face congruity effects. Marc D. Pell is affiliated with the School of Communication Sciences and Disorders, McGill University, Montréal, Canada.

10.
The present research examined whether observing emotional expressions rapidly induces congruent emotional experiences and facial responses in observers under strong test conditions. Specifically, participants rated their emotional reactions after (a) single, brief exposures to (b) a range of human emotional facial expressions that included (c) a neutral-face comparison, using a procedure designed to (d) minimize potential experimental demand. Even with these strong test conditions in place, participants reported discrete, expression-congruent changes in emotional experience. Participants' corrugator supercilii muscle activity immediately following the presentation of an emotional expression appeared to reflect both expressive congruence with the observed expression and the cognitive load required to interpret it. The complexity of the corrugator response suggests caution in using facial muscle activity as a nonverbal measure of emotional contagion.
David H. Zald

11.
The Intensity of Emotional Facial Expressions and Decoding Accuracy
The influence of the physical intensity of emotional facial expressions on perceived intensity and emotion category decoding accuracy was assessed for expressions of anger, disgust, sadness, and happiness. The facial expressions of two men and two women posing each of the four emotions were used as stimuli. Six different levels of intensity of expression were created for each pose using a graphics morphing program. Twelve men and 12 women rated each of the 96 stimuli for perceived intensity of the underlying emotion and for the qualitative nature of the emotion expressed. The results revealed that perceived intensity varied linearly with the manipulated physical intensity of the expression. Emotion category decoding accuracy varied largely linearly with the manipulated physical intensity of the expression for expressions of anger, disgust, and sadness. For the happiness expressions only, the findings were consistent with a categorical judgment process. Sex of encoder produced significant effects for both dependent measures. These effects remained even after possible gender differences in encoding were controlled for, suggesting a perceptual bias on the part of the decoders.
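As a rough illustration of how graded physical intensities can be produced, the sketch below linearly cross-fades pixel values between a neutral image and a full-intensity (apex) expression image. This is a simplification: the study used a dedicated graphics morphing program, which also warps facial geometry, and the arrays here are placeholders.

```python
# Hedged sketch: creating graded expression intensities by linear interpolation
# between a neutral face image and an apex expression image. A plain pixel
# cross-fade stands in for true morphing, which additionally warps geometry.
import numpy as np

def blend(neutral: np.ndarray, apex: np.ndarray, alpha: float) -> np.ndarray:
    """Return an image at physical intensity alpha (0 = neutral, 1 = apex)."""
    return (1.0 - alpha) * neutral + alpha * apex

# Placeholder grayscale images; in practice, load aligned face photographs.
neutral_img = np.zeros((128, 128))
apex_img = np.ones((128, 128))

# Six evenly spaced intensity levels, as in the study design.
levels = [blend(neutral_img, apex_img, a) for a in np.linspace(1/6, 1.0, 6)]
print([img.mean().round(2) for img in levels])  # mean pixel value tracks alpha
```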

12.
The capacity to engage with one's child in a reciprocally responsive way is an important element of successful and rewarding parent–child conversations, which are common contexts for emotion socialization. The degree to which a parent–child dyad shows a mutually responsive orientation presumably depends on both individuals' socio-emotional skills. For example, one or both members of a dyad needs to be able to accurately interpret and respond to the other's nonverbal cues, such as facial expressions, to facilitate mutually responsive interactions. Little research, however, has examined whether and how mother and/or child facial expression decoding skill relates to dyads' emotional mutuality during conversations. We thus examined associations between both mother and child facial expression decoding skill and observed emotional mutuality during parent-preschooler conversations about happy child memories. Results lend support to our hypotheses by suggesting that both mother and child capacities to read others' emotional cues make distinct contributions to parent–child emotional mutuality in the context of reminiscing conversations. Specifically, mothers' accurate decoding of child facial expressions predicted maternal displays of positive affect and interest, while children's accurate decoding of adult facial expressions predicted dyadic displays of mutual enjoyment. Contrary to our hypotheses, however, parent/child facial expression decoding skills did not interact to predict observed mutual responsiveness. These findings underscore the importance of attending to both parent and child contributions to successful dyadic interactions that facilitate effective emotion socialization.

13.
To examine individual differences in decoding facial expressions, college students judged the type and emotional intensity of emotional faces at five intensity levels and completed questionnaires on family expressiveness, emotionality, and self-expressiveness. Decoding accuracy was negatively related to family expressiveness, with the strongest effects for more prototypical faces, and positively related to self-expressiveness. Perceived emotional intensity was positively related to family expressiveness, tended to be positively related to emotionality, and tended to be negatively related to self-expressiveness; these findings were all qualified by the level of ambiguity/clarity of the facial expressions. Decoding accuracy and perceived emotional intensity were also positively related to each other. The results suggest that even simple facial judgments are made through an interpretive lens created in part by family expressiveness, individuals' own emotionality, and self-expressiveness.

14.
We analyzed the facial behavior of 100 volunteers who video-recorded their own expressions while experiencing an episode of sexual excitement that concluded in an orgasm, and then posted the video clip on an Internet site. Four distinct observational periods from the video clips were analyzed and coded with FACS (Facial Action Coding System; Ekman and Friesen 1978). We found nine combinations of muscular movements produced by at least 5% of the senders. These combinations were consistent with the facial expressions of sexual excitement described by Masters and Johnson (Human Sexual Response, 1966), and they included the four muscular movements of the core expression of pain (Prkachin, Pain, 51, 297–306, 1992).
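A minimal sketch of the tallying step, assuming hypothetical coded data: counting action-unit (AU) combinations across senders and keeping those produced by at least 5% of them. The AU sets below are illustrative placeholders, not the study's codes.

```python
# Hedged sketch: tallying FACS action-unit (AU) combinations across senders
# and keeping those shown by at least 5% of them. The AU codes below are
# invented examples, not the study's coded data.
from collections import Counter

# One entry per sender: the set of AUs coded during an observation period.
senders = [
    frozenset({4, 6, 7, 43}),   # e.g., brow lowerer + orbit tightening + eyes closed
    frozenset({4, 6, 7, 43}),
    frozenset({12, 25}),        # e.g., lip corner puller + lips part
    frozenset({4, 6, 7, 43}),
    frozenset({12, 25}),
]

counts = Counter(senders)
threshold = 0.05 * len(senders)  # the 5%-of-senders cutoff
common = {combo: n for combo, n in counts.items() if n >= threshold}
for combo, n in sorted(common.items(), key=lambda kv: -kv[1]):
    print(sorted(combo), f"{n}/{len(senders)} senders")
```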

15.
While numerous studies have investigated children's recognition of facial emotional expressions, little evidence has been gathered concerning their explicit knowledge of the components included in such expressions. Thus, we investigated children's knowledge of the facial components involved in the expressions of happiness, sadness, anger, and surprise. Four- and 5-year-old Japanese children were presented with the blank face of a young character, and asked to select facial components in order to depict the emotions he felt. Children's overall performance in the task increased as a function of age, and was above chance level for each emotion in both age groups. Children were likely to select the Cheek raiser and Lip corner puller to depict happiness, the Inner brow raiser, Brow lowerer, and Lid droop to depict sadness, the Brow lowerer and Upper lid raiser to depict anger, and the Upper lid raiser and Jaw drop to depict surprise. Furthermore, older children demonstrated a better knowledge of the involvement of the Upper lid raiser in surprise expressions.

16.
When we perceive the emotions of other people, we extract much information from the face. The present experiment used FACS (Facial Action Coding System), an instrument that measures the magnitude of facial action from a neutral face to a changed, emotional face. Japanese undergraduates judged the emotion in pictures of 66 static Japanese male faces (11 static pictures for each of six basic expressions: happiness, surprise, fear, anger, sadness, and disgust), ranging from neutral faces to maximally expressed emotions. The stimuli had previously been scored with FACS and were presented in random order. A high correlation was found between the subjects' judgments of the facial expressions and the FACS scores.
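A minimal sketch of the reported analysis under invented data: correlating FACS-scored action magnitude with mean emotion judgments across stimuli. The stimulus counts mirror the study's design; the values are placeholders.

```python
# Hedged sketch: correlating observers' mean emotion judgments with FACS
# action-magnitude scores across stimuli. All values are made-up placeholders.
import numpy as np

rng = np.random.default_rng(1)

# 11 stimuli for one expression, ranging from neutral to maximal.
facs_magnitude = np.linspace(0, 5, 11)                        # FACS-scored magnitude
judged_intensity = facs_magnitude + rng.normal(0, 0.4, 11)    # hypothetical mean ratings

r = np.corrcoef(facs_magnitude, judged_intensity)[0, 1]
print(f"Pearson r across stimuli: {r:.2f}")
```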

17.
Few attempts have been made since the pioneering work of Ekman et al. (1980) to examine the development of deliberate control of facial action units in children. We report here two studies concerned with this issue. In Study 1, we investigated children's ability to activate facial action units involved in sadness and happiness expressions, as well as combinations of these action units. In Study 2, we examined children's ability to pose happiness and sadness with their faces without being told which action units to activate; the children were simply asked to portray happiness and sadness as convincingly as possible. The results of Study 1 indicate a strong developmental progression in children's ability to produce elementary facial components of both emotions, as well as in their ability to produce a combination of the elements in the case of happiness. In agreement with prior research on motor development, several non-target action units were also activated when children performed the task. Their occurrence persisted throughout childhood, indicating limits on the fine motor control children achieve across age. The results of Study 2 paralleled those of Study 1 in many respects, providing evidence that the children used the technique of deliberate action to pose the two target emotions.

18.
Models of the etiology of adolescent antisocial behavior suggest that externalizing problems may reflect a susceptibility to crime exposure and a diminished capacity for emotion introspection. In this study, adolescents of Mexican origin completed a neuroimaging task in which they rated either their subjective feelings of sadness in response to emotional facial expressions or a nonemotional aspect of each face. The positive association between community crime exposure and externalizing problems was stronger at lower levels of neural activity during sadness introspection in the posterior cingulate and left temporoparietal junction, regions involved in mentalizing, and in the left amygdala, a region involved in emotion. Specifying emotion introspection as a psychological process that shows neural variation may help inform targeted interventions to positively affect adolescent behavior.

19.
This preregistered study examined how face masks influenced face memory in a North American sample of 6- to 9-month-old infants (N = 58) born during the COVID-19 pandemic. Infants' memory was tested using a standard visual paired comparison (VPC) task. We crossed whether or not the faces were masked during familiarization and test, yielding four trial types (masked-familiarization/masked-test, unmasked-familiarization/masked-test, masked-familiarization/unmasked-test, and unmasked-familiarization/unmasked-test). Infants showed memory for the faces if the faces were unmasked at test, regardless of whether or not the face was masked during familiarization. However, infants did not show robust evidence of memory when test faces were masked, regardless of the familiarization condition. In addition, infants' bias for looking at the upper (eye) region was greater for masked than unmasked faces, although this difference was unrelated to memory performance. In summary, although the presence of face masks does appear to influence infants' processing of and memory for faces, they can form memories of masked faces and recognize those familiar faces even when unmasked.
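A minimal sketch of how VPC memory is conventionally scored, with placeholder data: the proportion of test-trial looking directed to the novel face, tested against the 0.5 chance level. The study's preregistered analyses may have differed in detail.

```python
# Hedged sketch: scoring a visual paired comparison (VPC) test trial as the
# proportion of looking time directed to the novel face; values above the
# 0.5 chance level are conventionally taken as evidence of memory.
# The looking times below are invented placeholders.
import numpy as np

# Hypothetical per-infant looking times (seconds) at test.
novel_looking = np.array([6.2, 5.1, 7.0, 4.8, 5.5])
familiar_looking = np.array([3.8, 4.9, 3.0, 5.2, 4.5])

novelty_pref = novel_looking / (novel_looking + familiar_looking)
print("novelty preference per infant:", novelty_pref.round(2))

# One-sample t statistic against chance (0.5), computed from first principles.
n = novelty_pref.size
t = (novelty_pref.mean() - 0.5) / (novelty_pref.std(ddof=1) / np.sqrt(n))
print(f"mean = {novelty_pref.mean():.2f}, t({n - 1}) = {t:.2f}")
```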

20.
The specificity predicted by differential emotions theory (DET) for early facial expressions in response to five different eliciting situations was studied in a sample of 4-month-old infants (n = 150). Infants were videotaped during tickle, sour-taste, jack-in-the-box, arm-restraint, and masked-stranger situations, and their expressions were coded second by second. Infants showed a variety of facial expressions in each situation; however, more infants exhibited positive expressions (joy and surprise) than negative expressions (anger, disgust, fear, and sadness) across all situations except sour taste. Consistent with DET-predicted specificity, joy expressions were the most common in response to tickling and less common in response to the other situations. Surprise expressions were the most common in response to the jack-in-the-box, as predicted, but were also the most common in response to the arm-restraint and masked-stranger situations, indicating a lack of specificity. No evidence of predicted specificity was found for anger, disgust, fear, or sadness expressions. Evidence of individual differences in expressivity within situations, as well as stability in the pattern across situations, underscores the need to examine both child and contextual factors in studying emotional development. The results provide little support for the DET postulate of situational specificity and suggest that a synthesis of differential emotions and dynamic systems theories of emotional expression should be considered.
