Similar Articles
20 similar articles found (search time: 46 ms)
1.
Despite known differences in the acoustic properties of children’s and adults’ voices, no work to date has examined the vocal cues associated with emotional prosody in youth. The current study investigated whether child (n = 24, 17 female, aged 9–15) and adult (n = 30, 15 female, aged 18–63) actors differed in the vocal cues underlying their portrayals of basic emotions (anger, disgust, fear, happiness, sadness) and social expressions (meanness, friendliness). We also compared the acoustic characteristics of meanness and friendliness to comparable basic emotions. The pattern of distinctions between expressions varied as a function of age for voice quality and mean pitch. Specifically, adults’ portrayals of the various expressions were more distinct in mean pitch than children’s, whereas children’s representations differed more in voice quality than adults’. Given the importance of pitch variables for the interpretation of a speaker’s intended emotion, expressions generated by adults may thus be easier for listeners to decode than those of children. Moreover, the vocal cues associated with the social expressions of meanness and friendliness were distinct from those of basic emotions like anger and happiness, respectively. Overall, our findings highlight marked differences in the ways in which adults and children convey socio-emotional expressions vocally, and expand our understanding of the communication of paralanguage in social contexts. Implications for the literature on emotion recognition are discussed.

2.
We assessed the impact of social context on the judgment of emotional facial expressions as a function of self-construal and decoding rules. German and Greek participants rated spontaneous emotional faces shown either alone or surrounded by other faces with congruent or incongruent facial expressions. Greek participants were higher in interdependence than German participants. In line with cultural decoding rules, Greek participants rated anger expressions less intensely and sad and disgust expressions more intensely. Social context affected the ratings by both groups in different ways. In the more interdependent culture (Greece) participants perceived anger least intensely when the group showed neutral expressions, whereas sadness expressions were rated as most intense in the absence of social context. In the independent culture (Germany) a group context (others expressing anger or happiness) additionally amplified the perception of angry and happy expressions. In line with the notion that these effects are mediated by more holistic processing linked to higher interdependence, this difference disappeared when we controlled for interdependence on the individual level. The findings confirm the usefulness of considering both country-level and individual-level factors when studying cultural differences.

3.
Human body postures provide perceptual cues that can be used to discriminate and recognize emotions. It was previously found that 7-month-olds’ fixation patterns discriminated fear from other emotion body expressions, but it is not clear whether they also process the emotional content of those expressions. The emotional content of visual stimuli can increase arousal level, resulting in pupil dilation. To provide evidence that infants also process the emotional content of expressions, we analyzed variations in pupil size in response to emotion stimuli. Forty-eight 7-month-old infants viewed adult body postures expressing anger, fear, happiness and neutral expressions, while their pupil size was measured. There was a significant emotion effect between 1040 and 1640 ms after image onset, when fear elicited larger pupil dilations than neutral expressions. A similar trend was found for anger expressions. Our results suggest that infants have increased arousal to negative-valence body expressions. Thus, in combination with previous fixation results, the pupil data show that infants as young as 7 months can perceptually discriminate static body expressions and process the emotional content of those expressions. The results extend information about infant processing of emotion expressions conveyed through other means (e.g., faces).

4.
This study investigated whether people can decode emotion (happiness, neutrality, and anger) communicated via hand movements in Finnish sign language when these emotions are expressed in semantically neutral sentences. Twenty volunteer participants without any knowledge of sign language took part in the experiment. The results indicated that the subjects were able to reliably decode anger and neutrality from the quality of hand movements. For happy hand expressions, the responses of happiness and neutrality were confused. Thus, the study showed that emotion-related information can be encoded in the quality of hand movements during signing and that this information can be decoded without previous experience with this particular mode of communication.

5.
Previous research employing the facial affect decision task (FADT) indicates that when listeners are exposed to semantically anomalous utterances produced in different emotional tones (prosody), the emotional meaning of the prosody primes decisions about an emotionally congruent rather than incongruent facial expression (Pell, M. D., Journal of Nonverbal Behavior, 29, 45–73). This study undertook further development of the FADT by investigating the approximate time course of prosody–face interactions in nonverbal emotion processing. Participants executed facial affect decisions about happy and sad face targets after listening to utterance fragments produced in an emotionally related, unrelated, or neutral prosody, cut to 300, 600, or 1000 ms in duration. Results underscored that prosodic information enduring at least 600 ms was necessary to activate the shared emotion knowledge presumed to underlie prosody–face congruity effects. Marc D. Pell is affiliated with the School of Communication Sciences and Disorders, McGill University, Montréal, Canada.
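The gating manipulation described in this abstract, cutting each utterance down to fixed 300, 600, or 1000 ms fragments, can be illustrated with a short preparation script. This is a minimal sketch only, not the study's actual materials: the folder layout, file names, and the use of pydub are assumptions for illustration.

```python
# Illustrative sketch (hypothetical paths, pydub assumed installed):
# trim each .wav utterance to its first 300, 600, and 1000 ms.
from pathlib import Path
from pydub import AudioSegment

GATE_DURATIONS_MS = (300, 600, 1000)

def make_gated_fragments(src_dir="utterances", out_dir="fragments"):
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for wav_path in sorted(Path(src_dir).glob("*.wav")):
        audio = AudioSegment.from_wav(str(wav_path))
        for duration_ms in GATE_DURATIONS_MS:
            fragment = audio[:duration_ms]  # pydub slices are in milliseconds
            fragment.export(str(out / f"{wav_path.stem}_{duration_ms}ms.wav"),
                            format="wav")

if __name__ == "__main__":
    make_gated_fragments()
```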

6.
The perception of emotional facial expressions may activate corresponding facial muscles in the receiver, also referred to as facial mimicry. Facial mimicry is highly dependent on the context and type of facial expressions. While previous research almost exclusively investigated mimicry in response to pictures or videos of emotional expressions, studies with a real, face-to-face partner are still rare. Here we compared facial mimicry of angry, happy, and sad expressions and emotion recognition in a dyadic face-to-face setting. In sender-receiver dyads, we recorded facial electromyograms in parallel. Senders communicated to the receivers, using facial expressions only, the emotions felt during specific personal situations in the past, eliciting anger, happiness, or sadness. Receivers mimicked happiness most, sadness to a lesser degree, and anger least. In actor-partner interdependence models, we showed that the receivers’ own facial activity influenced their ratings, which increased the agreement between the senders’ and receivers’ ratings for happiness, but not for angry and sad expressions. These results are in line with the Emotion Mimicry in Context View, holding that humans mimic happy expressions according to affiliative intentions. The mimicry of sad expressions is less intense, presumably because it signals empathy and might imply personal costs. Direct anger expressions are mimicked the least, possibly because anger communicates threat and aggression. Taken together, we show that incidental facial mimicry in a face-to-face setting is positively related to the recognition accuracy for non-stereotype happy expressions, supporting the functionality of facial mimicry.

7.
While numerous studies have investigated children’s recognition of facial emotional expressions, little evidence has been gathered concerning their explicit knowledge of the components included in such expressions. Thus, we investigated children’s knowledge of the facial components involved in the expressions of happiness, sadness, anger, and surprise. Four- and 5-year-old Japanese children were presented with the blank face of a young character, and asked to select facial components in order to depict the emotions he felt. Children’s overall performance in the task increased as a function of age, and was above chance level for each emotion in both age groups. Children were likely to select the Cheek raiser and Lip corner puller to depict happiness, the Inner brow raiser, Brow lowerer, and Lid droop to depict sadness, the Brow lowerer and Upper lid raiser to depict anger, and the Upper lid raiser and Jaw drop to depict surprise. Furthermore, older children demonstrated a better knowledge of the involvement of the Upper lid raiser in surprise expressions.

8.
Facial expressions related to sadness are a universal signal of nonverbal communication. Although many psychology studies have shown that drooping of the lip corners, raising of the chin, and oblique eyebrow movements (a combination of inner brow raising and brow lowering) express sadness, no study has characterized these expressions under well-controlled conditions in people actually experiencing sadness. Therefore, spontaneous facial expressions associated with sadness remain unclear. We conducted this study to accumulate findings on spontaneous facial expressions of sadness. We recorded the spontaneous facial expressions of a group of participants as they experienced sadness during an emotion-elicitation task. This task required a participant to recall neutral and sad memories while listening to music. We subsequently conducted a detailed analysis of their sad and neutral expressions using the Facial Action Coding System. The prototypical facial expressions of sadness reported in earlier studies were not observed when people experienced sadness as an internal state under non-social circumstances. Instead, participants showed tension around the mouth, which might function as a form of suppression. Furthermore, results show that some of these facial actions are related not only to sad experiences but also to other emotional experiences such as disgust, fear, anger, and happiness. This study revealed the possibility that new facial expressions contribute to the experience of sadness as an internal state.

9.
When we perceive the emotions of other people, we extract much information from the face. The present experiment used FACS (Facial Action Coding System), which is an instrument that measures the magnitude of facial action from a neutral face to a changed, emotional face. Japanese undergraduates judged the emotion in pictures of 66 static Japanese male faces (11 static pictures for each of six basic expressions: happiness, surprise, fear, anger, sadness, and disgust), ranging from neutral faces to maximally expressed emotions. The stimuli had previously been scored with FACS and were presented in random order. A high correlation between the subjects' judgments of facial expressions and the FACS scores was found.
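The correlation reported here, between observers' emotion judgments and the FACS scores of the stimuli, amounts to a simple per-stimulus association. The sketch below shows that kind of computation only; the CSV file and column names are hypothetical stand-ins, not the study's data.

```python
# Illustrative sketch: correlate FACS-based action scores with the mean
# judged intensity across face stimuli (hypothetical file and columns).
import pandas as pd
from scipy.stats import pearsonr

faces = pd.read_csv("face_stimuli.csv")  # one row per face stimulus
r, p = pearsonr(faces["facs_score"], faces["mean_judged_intensity"])
print(f"Pearson r = {r:.2f} (p = {p:.3f}) across {len(faces)} stimuli")
```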

10.
Facial expressions of emotions convey not only information about emotional states but also about interpersonal intentions. The present study investigated whether factors known to influence the decoding of emotional expressions—the gender and ethnicity of the stimulus person as well as the intensity of the expression—would also influence attributions of interpersonal intentions. For this, 145 men and women rated emotional facial expressions posed by both Caucasian and Japanese male and female stimulus persons on perceived dominance and affiliation. The results showed that the sex and the ethnicity of the encoder influenced observers' ratings of dominance and affiliation. For anger displays only, this influence was mediated by expectations regarding how likely it is that a particular encoder group would display anger. Further, affiliation ratings were equally influenced by low intensity and by high intensity expressions, whereas only fairly intense emotional expressions affected attributions of dominance.

11.
The relation between knowledge of American Sign Language (ASL) and the ability to decode facial expressions of emotion was explored in this study. Subjects were 60 college students, half of whom were intermediate-level students of ASL and half of whom had no exposure to a signed language. Subjects viewed and judged silent video segments of stimulus persons experiencing spontaneous emotional reactions representing either happiness, sadness, anger, disgust, or fear/surprise. Results indicated that hearing subjects knowledgeable in ASL were generally better than hearing non-signers at identifying facial expressions of emotion, although there were variations in decoding accuracy regarding the specific emotion being judged. In addition, females were more successful decoders than males. Results have implications for better understanding the nature of nonverbal communication in deaf and hearing individuals. We are grateful to Karl Scheibe for comments on an earlier version of this paper and to Erik Coats for statistical analysis. This study was conducted as part of a Senior Honors thesis at Wesleyan University.

12.
This study tested the hypothesis derived from ecological theory that adaptive social perceptions of emotion expressions fuel trait impressions. Moreover, it was predicted that these impressions would be overgeneralized and perceived in faces that were not intentionally posing expressions but nevertheless varied in emotional demeanor. To test these predictions, perceivers viewed 32 untrained targets posing happy, surprised, angry, sad, and fearful expressions and formed impressions of their dominance and affiliation. When targets posed happiness and surprise, they were perceived as high in dominance and affiliation, whereas when they posed anger they were perceived as high in dominance and low in affiliation. When targets posed sadness and fear, they were perceived as low in dominance. As predicted, many of these impressions were overgeneralized and attributed to targets who were not posing expressions. The observed effects were generally independent of the impact of other facial cues (i.e., attractiveness and babyishness).

13.
Previous research has suggested that the ability to recognize vocal portrayals of socio-emotional expressions improves with age throughout childhood and adolescence. The current study examined whether stimulus-level factors (i.e., the age of the speaker and the type of expression being conveyed) interacted with listeners’ developmental stage to predict listeners’ recognition accuracy. We assessed mid-adolescent (n = 50, aged 13–15 years) and adult (n = 87, 18–30 years) listeners’ ability to recognize basic emotions and social expressions in the voices of both adult and youth actors. Adults’ emotional prosody was better recognized than that of youth, and adult listeners were more accurate overall than were mid-adolescents. Interaction effects revealed that youths’ accuracy was equivalent to adult listeners’ when hearing adult portrayals of anger, disgust, friendliness, happiness, and meanness, and youth portrayals of disgust, happiness, and meanness. Our findings highlight the importance of speaker characteristics and type of expression on listeners’ ability to recognize vocal cues of emotion and social intent.

14.
The Intensity of Emotional Facial Expressions and Decoding Accuracy
The influence of the physical intensity of emotional facial expressions on perceived intensity and emotion category decoding accuracy was assessed for expressions of anger, disgust, sadness, and happiness. The facial expressions of two men and two women posing each of the four emotions were used as stimuli. Six different levels of intensity of expression were created for each pose using a graphics morphing program. Twelve men and 12 women rated each of the 96 stimuli for perceived intensity of the underlying emotion and for the qualitative nature of the emotion expressed. The results revealed that perceived intensity varied linearly with the manipulated physical intensity of the expression. Emotion category decoding accuracy varied largely linearly with the manipulated physical intensity of the expression for expressions of anger, disgust, and sadness. For the happiness expressions only, the findings were consistent with a categorical judgment process. Sex of encoder produced significant effects for both dependent measures. These effects remained even after possible gender differences in encoding were controlled for, suggesting a perceptual bias on the part of the decoders.
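The linearity claim in this abstract, that perceived intensity tracks the manipulated morph step, lends itself to a simple regression check. The sketch below is purely illustrative: the data file, column names, and the use of statsmodels are assumptions, not the authors' analysis code.

```python
# Illustrative sketch: regress perceived intensity on the morph step (1-6)
# and compare against a quadratic fit to gauge departures from linearity.
import pandas as pd
import statsmodels.formula.api as smf

ratings = pd.read_csv("intensity_ratings.csv")  # hypothetical: one row per rating
linear = smf.ols("perceived_intensity ~ morph_level", data=ratings).fit()
quadratic = smf.ols("perceived_intensity ~ morph_level + I(morph_level ** 2)",
                    data=ratings).fit()
print(linear.summary())
# A roughly linear relation shows a strong morph_level slope and little
# gain in adjusted R^2 from the quadratic term.
print(linear.rsquared_adj, quadratic.rsquared_adj)
```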

15.
Affective associations between a speaker’s voice (emotional prosody) and a facial expression were investigated using a new on-line procedure, the Facial Affect Decision Task (FADT). Faces depicting one of four basic emotions were paired with utterances conveying an emotionally related or unrelated prosody, followed by a yes/no judgement of the face as a true exemplar of emotion. Results established that prosodic characteristics facilitate the accuracy and speed of decisions about an emotionally congruent target face, supplying empirical support for the idea that information about discrete emotions is shared across major nonverbal channels. The FADT represents a promising tool for future on-line studies of nonverbal processing in both healthy and disordered individuals. The author gratefully acknowledges Elmira Chan, Sarah Addleman, and Marta Fundamenski for running the experiment, and the helpful comments received from M. Harris, K. Scherer, and anonymous reviewers of an earlier draft of the paper. This research was funded by the Natural Sciences and Engineering Research Council of Canada.

16.
Cognitive models of social anxiety provide a basis for predicting that the ability to process nonverbal information accurately and quickly should be impaired during the experience of state anxiety. To test this hypothesis, we assigned participants to threatening and non-threatening conditions and asked them to label the emotions expressed in a series of faces. It was predicted that social anxiety would be positively associated with errors and response times in threatening conditions, but not in a non-threatening condition. It was also predicted that high social anxiety would be associated with more errors and longer response times when identifying similar expressions such as sadness, anger, and fear. The results indicate that social anxiety was not associated with errors in identifying facial expressions of emotion, regardless of the level of state anxiety experienced. However, social anxiety scores were found to be significantly related to response times to identify facial expressions, but the relationship varied depending on the level of state anxiety experienced. Methodological and theoretical implications of using response time data when assessing nonverbal ability are discussed.

17.
This article examines gender differences in the emotion management of men and women in the workplace. The belief in American culture that women are more emotional than men has limited women's opportunities in many types of work. Because emotional expression is often tightly controlled in the workplace, examining emotion management performed at work presents an opportunity to evaluate gender differences in response to similar working conditions. Previous research suggests that men and women do not differ in their experiences of emotion and the expression of emotion is linked to status positions. An analysis of survey data collected from workers in a diverse group of occupations illustrates that women express anger less and happiness more than men in the workplace. Job and status characteristics explain the association between gender and anger management at work but were unrelated to the management of happiness expressions in the workplace.

18.
19.
The purpose of this study was to determine whether it is more difficult to decode facial expressions of pain in older than in younger adults. The facial expressions of 10 younger and 10 older chronic pain patients undergoing a painful diagnostic test were viewed on videotape by untrained judges. Judges estimated the severity of pain being experienced by the patients. Ratings made of the older faces during painful moments described more pain, and appeared more accurate, than those made of younger faces. Judges also reported seeing more pain in posed, masked, and baseline facial expressions in the older adults. Age-related structural changes to the face were not responsible for this bias. This suggests that judges were predisposed to see pain in the faces of the older patients, and undermines the assumption that their ratings of pain in the painful moment segments were accurate.

20.
This study was designed to investigate the potential association between social anxiety and children's ability to decode nonverbal emotional cues. Participants were 62 children between 8 and 10 years of age, who completed self-report measures of social anxiety, depressive symptomatology, and nonspecific anxious symptomatology, as well as nonverbal decoding tasks assessing accuracy at identifying emotion in facial expressions and vocal tones. Data were analyzed with multiple regression analyses controlling for generalized cognitive ability, and nonspecific anxious and depressive symptomatology. Results provided partial support for the hypothesis that social anxiety would relate to nonverbal decoding accuracy. Difficulty identifying emotions conveyed in children's and adults' voices was associated with general social avoidance and distress. At higher levels of social anxiety, children more frequently mislabeled fearful voices as sad. Possible explanations for the obtained results are explored.
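The analysis this abstract describes, a multiple regression predicting decoding accuracy from social anxiety while controlling for cognitive ability and nonspecific anxious and depressive symptoms, can be sketched as below. It is a hypothetical illustration only: the data file, variable names, and the use of statsmodels are assumptions, not the authors' code.

```python
# Illustrative sketch: decoding accuracy regressed on social anxiety with
# cognitive ability and nonspecific symptom scores entered as controls.
import pandas as pd
import statsmodels.formula.api as smf

children = pd.read_csv("child_decoding.csv")  # hypothetical: one row per child
model = smf.ols(
    "decoding_accuracy ~ social_anxiety + cognitive_ability"
    " + general_anxiety + depression",
    data=children,
).fit()
print(model.summary())  # the social_anxiety coefficient is the effect of interest
```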
