Similar documents
20 similar documents found (search time: 62 ms)
1.
It has been the subject of much debate in the study of vocal expression of emotions whether posed expressions (e.g., actor portrayals) are different from spontaneous expressions. In the present investigation, we assembled a new database consisting of 1877 voice clips from 23 datasets, and used it to systematically compare spontaneous and posed expressions across 3 experiments. Results showed that (a) spontaneous expressions were generally rated as more genuinely emotional than were posed expressions, even when controlling for differences in emotion intensity, (b) there were differences between the two stimulus types with regard to their acoustic characteristics, and (c) spontaneous expressions with a high emotion intensity conveyed discrete emotions to listeners to a similar degree as has previously been found for posed expressions, supporting a dose–response relationship between intensity of expression and discreteness in perceived emotions. Our conclusion is that there are reliable differences between spontaneous and posed expressions, though not necessarily in the ways commonly assumed. Implications for emotion theories and the use of emotion portrayals in studies of vocal expression are discussed.

2.
This study investigated the effects of anxiety on nonverbal aspects of speech using data collected in the framework of a large study of social phobia treatment. The speech of social phobics (N = 71) was recorded during an anxiogenic public speaking task both before and after treatment. The speech samples were analyzed with respect to various acoustic parameters related to pitch, loudness, voice quality, and temporal aspects of speech. The samples were further content-masked by low-pass filtering (which obscures the linguistic content of the speech but preserves nonverbal affective cues) and subjected to listening tests. Results showed that a decrease in experienced state anxiety after treatment was accompanied by corresponding decreases in (a) several acoustic parameters (i.e., mean and maximum voice pitch, high-frequency components in the energy spectrum, and proportion of silent pauses), and (b) listeners’ perceived level of nervousness. Both speakers’ self-ratings of state anxiety and listeners’ ratings of perceived nervousness were further correlated with similar acoustic parameters. The results complement earlier studies on vocal affect expression which have been conducted on posed, rather than authentic, emotional speech.
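The content-masking step described in this abstract can be sketched with a standard low-pass filter. The 400 Hz cutoff and filter order below are illustrative assumptions, not values reported in the study; the demo signal is synthetic.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def content_filter(signal, sr, cutoff_hz=400.0, order=6):
    """Low-pass filter that obscures linguistic content while preserving
    prosodic cues (pitch and loudness contours). The 400 Hz cutoff is an
    illustrative choice, not one taken from the study."""
    sos = butter(order, cutoff_hz, btype="low", fs=sr, output="sos")
    # Zero-phase filtering avoids shifting the prosodic contour in time.
    return sosfiltfilt(sos, signal)

# Demo: a 200 Hz "voiced" component survives; a 3 kHz component is removed.
sr = 16000
t = np.arange(sr) / sr
low = np.sin(2 * np.pi * 200 * t)
high = np.sin(2 * np.pi * 3000 * t)
filtered = content_filter(low + high, sr)
```

Zero-phase `sosfiltfilt` is used so that the timing of pauses and pitch movements, which the study's listening tests depend on, is not distorted by filter delay.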

3.
Despite known differences in the acoustic properties of children’s and adults’ voices, no work to date has examined the vocal cues associated with emotional prosody in youth. The current study investigated whether child (n = 24, 17 female, aged 9–15) and adult (n = 30, 15 female, aged 18–63) actors differed in the vocal cues underlying their portrayals of basic emotions (anger, disgust, fear, happiness, sadness) and social expressions (meanness, friendliness). We also compared the acoustic characteristics of meanness and friendliness to comparable basic emotions. The pattern of distinctions between expressions varied as a function of age for voice quality and mean pitch. Specifically, adults’ portrayals of the various expressions were more distinct in mean pitch than children’s, whereas children’s representations differed more in voice quality than adults’. Given the importance of pitch variables for the interpretation of a speaker’s intended emotion, expressions generated by adults may thus be easier for listeners to decode than those of children. Moreover, the vocal cues associated with the social expressions of meanness and friendliness were distinct from those of basic emotions like anger and happiness respectively. Overall, our findings highlight marked differences in the ways in which adults and children convey socio-emotional expressions vocally, and expand our understanding of the communication of paralanguage in social contexts. Implications for the literature on emotion recognition are discussed.

4.
Recent research on human nonverbal vocalizations has led to considerable progress in our understanding of vocal communication of emotion. However, in contrast to studies of animal vocalizations, this research has focused mainly on the emotional interpretation of such signals. The repertoire of human nonverbal vocalizations as acoustic types, and the mapping between acoustic and emotional categories, thus remain underexplored. In a cross-linguistic naming task (Experiment 1), verbal categorization of 132 authentic (non-acted) human vocalizations by English-, Swedish- and Russian-speaking participants revealed the same major acoustic types: laugh, cry, scream, moan, and possibly roar and sigh. The association between call type and perceived emotion was systematic but non-redundant: listeners associated every call type with a limited, but in some cases relatively wide, range of emotions. The speed and consistency of naming the call type predicted the speed and consistency of inferring the caller’s emotion, suggesting that acoustic and emotional categorizations are closely related. However, participants preferred to name the call type before naming the emotion. Furthermore, nonverbal categorization of the same stimuli in a triad classification task (Experiment 2) was more compatible with classification by call type than by emotion, indicating the former’s greater perceptual salience. These results suggest that acoustic categorization may precede attribution of emotion, highlighting the need to distinguish between the overt form of nonverbal signals and their interpretation by the perceiver. Both within- and between-call acoustic variation can then be modeled explicitly, bringing research on human nonverbal vocalizations more in line with the work on animal communication.

5.
Although the second year of life is characterized by dramatic changes in expressive language and by increases in negative emotion expression, verbal communication and emotional communication are often studied separately. With a sample of twenty‐five one‐year‐olds (12–23 months), we used Language Environment Analysis (LENA; Xu, Yapanel, & Gray, 2009, Reliability of the LENA™ language environment analysis system in young children’s natural home environment. LENA Foundation) to audio‐record and quantify parent–toddler communication, including toddlers’ vocal negative emotion expressions, across a full waking day. Using a multilevel extension of lag‐sequential analysis, we investigated whether parents are differentially responsive to toddlers’ negative emotion expressions compared to their verbal or preverbal vocalizations, and we examined the effects of parents’ verbal responses on toddlers’ subsequent communicative behavior. Toddlers’ negative emotions were less likely than their vocalizations to be followed by parent speech. However, when negative emotions were followed by parent speech, toddlers were most likely to vocalize next. Post hoc analyses suggest that older toddlers and toddlers with higher language abilities were more likely to shift from negative emotion to verbal or preverbal vocalization following parent response. Implications of the results for understanding the parent–toddler communication processes that support both emotional development and verbal development are discussed.
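At its core, a first-order lag-sequential analysis like the one described above estimates transition probabilities P(next behavior | current behavior) from a coded event stream; the study's multilevel extension is beyond this sketch, and the event codes below are invented for illustration.

```python
from collections import Counter

def transition_probs(events):
    """First-order lag-sequential analysis: estimate
    P(next event | current event) from an ordered sequence of coded
    behaviors by counting adjacent pairs."""
    pairs = Counter(zip(events, events[1:]))
    totals = Counter(events[:-1])  # how often each code appears as "current"
    return {(a, b): n / totals[a] for (a, b), n in pairs.items()}

# Toy codes: N = toddler negative emotion, V = toddler vocalization,
# P = parent speech (sequence invented for illustration).
seq = ["N", "P", "V", "N", "V", "P", "V", "N", "P", "V"]
probs = transition_probs(seq)
```

For example, `probs[("N", "P")]` gives the estimated probability that parent speech immediately follows a toddler's negative emotion expression in this toy sequence.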

6.
The present study aimed to clarify how listeners decode emotions from human nonverbal vocalizations, exploring unbiased recognition accuracy of vocal emotions selected from the Montreal Affective Voices (MAV) (Belin et al. in Trends Cognit Sci 8:129–135, 2004. doi: 10.1016/j.tics.2004.01.008). The MAV battery includes 90 nonverbal vocalizations expressing anger, disgust, fear, pain, sadness, surprise, happiness, sensual pleasure, as well as neutral expressions, uttered by female and male actors. Using a forced-choice recognition task, 156 native speakers of Portuguese were asked to identify the emotion category underlying each MAV sound, and additionally to rate the valence, arousal and dominance of these sounds. The analysis focused on unbiased hit rates (Hu Score; Wagner in J Nonverbal Behav 17(1):3–28, 1993. doi: 10.1007/BF00987006), as well as on the dimensional ratings for each discrete emotion. Further, we examined the relationship between categorical and dimensional ratings, as well as the effects of speaker’s and listener’s sex on these two types of assessment. Surprise vocalizations were associated with the poorest accuracy, whereas happy vocalizations were the most accurately recognized, contrary to previous studies. Happiness was associated with the highest valence and dominance ratings, whereas fear elicited the highest arousal ratings. Recognition accuracy and dimensional ratings of vocal expressions were dependent both on speaker’s sex and listener’s sex. Further, discrete vocal emotions were not consistently predicted by dimensional ratings. Using a large sample size, the present study provides, for the first time, unbiased recognition accuracy rates for a widely used battery of nonverbal vocalizations. The results demonstrated a dynamic interplay between listener’s and speaker’s variables (e.g., sex) in the recognition of emotion from nonverbal vocalizations. Further, they support the use of both categorical and dimensional accounts of emotion when probing how emotional meaning is decoded from nonverbal vocal cues.
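Wagner's (1993) unbiased hit rate used in the study above has a simple closed form: for each category, the squared count of correct responses divided by the product of the stimulus-row total and the response-column total of the confusion matrix. A minimal sketch, with a confusion matrix invented for illustration:

```python
import numpy as np

def unbiased_hit_rates(confusion):
    """Wagner's (1993) unbiased hit rate (Hu score). For category i:
    Hu_i = correct_i**2 / (stimuli presented in i * responses given as i).
    This corrects raw accuracy for a rater's response bias toward a label."""
    confusion = np.asarray(confusion, dtype=float)
    correct = np.diag(confusion)          # hits per category
    presented = confusion.sum(axis=1)     # row totals (stimuli)
    responded = confusion.sum(axis=0)     # column totals (responses)
    return correct**2 / (presented * responded)

# Toy 2-emotion confusion matrix: rows = presented, columns = response.
cm = [[8, 2],
      [4, 6]]
hu = unbiased_hit_rates(cm)
```

Note how category 1 is penalized relative to its raw hit rate (8/10) because its label was over-used as a response (12 of 20 responses), which is exactly the bias the Hu score corrects for.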

7.
8.
When a chief executive officer or spokesperson responds to an organizational crisis, he or she communicates not only with verbal cues but also visual and vocal cues. While most research in the area of crisis communication has focused on verbal cues (e.g., apologies, denial), this paper explores the relative importance of visual and vocal cues by spokespersons of organizations in crisis. Two experimental studies have more specifically examined the impact of a spokesperson’s visual cues of deception (i.e., gaze aversion, posture shifts, adaptors), because sending a credible response is crucial in times of crisis. Each study focused on the interplay of these visual cues with two specific vocal cues that have also been linked to perceptions of deception (speech disturbances in study 1; voice pitch in study 2). Both studies show that visual cues of deception negatively affect both consumers’ attitudes towards the organization (study 1) and their purchase intentions (study 2) after a crisis. In addition, the findings indicate that in crisis communication, the impact of visual cues dominates the outcomes of vocal cues. In both studies, vocal cues only affected consumers’ perceptions when the spokesperson displayed visual cues of deception. More specifically, the findings show that crisis communication messages with speech disturbances (study 1) or a raised voice pitch (study 2) can negatively affect organizational post-crisis perceptions.

9.
The present study examined preschoolers' and adults' ability to identify and label the emotions of happiness, sadness, and anger when presented through either the face channel alone, the voice channel alone, or the face and voice channels together. Subjects were also asked to rate the intensity of the expression. The results revealed that children aged three to five years are able to accurately identify and label emotions of happy, sad, and angry regardless of channel presentation. Similar results were obtained for the adult group. While younger children (33 to 53 months of age) were equally accurate in identifying the three emotions, older children (54 to 68 months of age) and adults made more incorrect responses when identifying expressions of sadness. Intensity ratings also differed according to the age of the subject and the emotion being rated. Support for this research was provided by a grant from the National Science Foundation (#01523721) to Nathan A. Fox. The authors would like to thank Professor A. Caron for providing the original videotape, Joyce Dinsmoor for her help in data collection and the staff of the Center for Young Children for their cooperation.

10.
The purpose of this study was to investigate whether actors playing homosexual male characters in North-American television shows speak with a feminized voice, thus following longstanding stereotypes that attribute feminine characteristics to male homosexuals. We predicted that when playing homosexual characters, actors would raise the frequency components of their voice towards more stereotypically feminine values. This study compares fundamental frequency (F0) and formant frequency (Fi) parameters in the speech of fifteen actors playing homosexual and heterosexual characters in North-American television shows. Our results reveal that the voices of actors playing homosexual male characters are characterized by a raised F0 (corresponding to a higher pitch), and raised formant frequencies (corresponding to a less baritone timbre), approaching values typical of female voices. Besides providing further evidence of the existence of an “effeminacy” stereotype in portraying male homosexuals in the media, these results show that actors perform pitch and vocal tract length adjustments in order to alter their perceived sexual orientation, emphasizing the role of these frequency components in the behavioral expression of gender attributes in the human voice.
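The F0 measurements central to the study above can be approximated with a basic autocorrelation pitch estimator. This is a simplified sketch rather than the study's actual analysis pipeline; the sampling rate, search range, and test tone are chosen for illustration.

```python
import numpy as np

def estimate_f0(frame, sr, fmin=75.0, fmax=500.0):
    """Rough F0 estimate for one voiced frame: find the autocorrelation
    peak at lags inside the plausible voice range [fmin, fmax].
    A sketch, not a Praat-grade pitch tracker."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # lag bounds from pitch range
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

# Demo on a 40 ms synthetic 120 Hz tone (roughly a low male pitch).
sr = 16000
t = np.arange(int(0.04 * sr)) / sr
f0 = estimate_f0(np.sin(2 * np.pi * 120 * t), sr)
```

Restricting the lag search to the expected pitch range is what keeps the estimator from locking onto harmonics or subharmonics; formant measurement would additionally require spectral envelope (e.g., LPC) analysis, which is outside this sketch.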

11.
This paper examines the impact of expressing different discrete emotions with a mixed valence (anger and hope) in organizational crisis communication on negative word-of-mouth on social media. In particular, the effects of expressing discrete emotions with a single valence (either positive or negative) versus mixed valence (expressing both positive and negative emotions) emotions are studied by means of a 4 (emotional message framing: control vs. positive emotion vs. negative emotion vs. mixed valence emotions) by 2 (crisis type: victim vs. preventable crisis) between-subjects experimental design (N = 295). Results show that in a preventable crisis, expressing mixed valence emotions elicits higher perceived sincerity and more empathy towards the spokesperson, and subsequently less negative word-of-mouth compared to expressing either single emotions or the control condition. However, in the case of a victim crisis, expressing single emotions, and especially a negative emotion like anger, results in less negative word-of-mouth through an increase in perceived sincerity and empathy towards the spokesperson.

12.
The present study examined preschoolers' and adults' ability to identify and label the emotions of happiness, sadness, and anger when presented through either the face channel alone, the voice channel alone, or the face and voice channels together. Subjects were also asked to rate the intensity of the expression. The results revealed that children aged 3 to 5 years are able to accurately identify and label emotions of happy, sad, and angry regardless of channel presentation. Similar results were obtained for the adult group. While younger children (33 to 53 months of age) were equally accurate in identifying the three emotions, older children (54 to 68 months of age) and adults made more incorrect responses when identifying expressions of sadness. Intensity ratings also differed according to the age of the subject and the emotion being rated. Support for this research was provided by a grant from the National Science Foundation (#BNS8317229) to Nathan A. Fox. The research was also supported by a grant awarded to Nathan Fox from the National Institutes of Health (#R01MH/HD17899). The authors would like to thank Professor A. Caron for providing the original videotape, Joyce Dinsmoor for help in data collection and the staff of the Center for Young Children for their cooperation.

13.

Objective: This study tested the interactive relationships between college students’ perceived capability of regulating negative emotions and savoring positive emotions on mental health outcomes, including anxiety and depressive symptoms. Participants: Participants were healthy undergraduates (n = 167) recruited from two universities in Hong Kong. Methods: Students completed four scales assessing their perceived capability of using strategies to regulate negative and positive emotions and their anxiety and depressive symptoms. Results: Findings revealed that both anxiety and depressive symptoms were negatively linked to perceived capabilities of regulating negative emotions and savoring positive emotions. Furthermore, regulating negative emotions interacted with savoring positive emotions to predict anxiety symptoms. Conclusions: The need to simultaneously perform negative and positive emotion regulation is highlighted. The results suggest the priority of regulating negative emotions over savoring positive emotions in alleviating anxiety symptoms. Nevertheless, enhancing positive emotion shows greater benefits for those who are less adept at regulating negative emotions.
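The interaction effect reported above corresponds to a moderated regression with a product term. A minimal sketch on simulated data, where all coefficients and the sample are invented for illustration and NumPy's least squares stands in for the study's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
neg_reg = rng.normal(size=n)    # perceived capability: regulating negative emotion
savoring = rng.normal(size=n)   # perceived capability: savoring positive emotion

# Simulated anxiety scores with a true interaction (coefficients illustrative):
# higher regulation capability lowers anxiety, and the two strategies interact.
anxiety = (5 - 0.6 * neg_reg - 0.3 * savoring
           + 0.4 * neg_reg * savoring
           + rng.normal(scale=0.5, size=n))

# Moderated regression: anxiety ~ 1 + neg_reg + savoring + neg_reg:savoring
X = np.column_stack([np.ones(n), neg_reg, savoring, neg_reg * savoring])
beta, *_ = np.linalg.lstsq(X, anxiety, rcond=None)
# beta[3] estimates the interaction: how the slope of one capability
# changes with the level of the other.
```

A significant `beta[3]` is the statistical counterpart of the paper's finding that the benefit of savoring positive emotions depends on how adept a student is at regulating negative ones.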

14.
Evidence suggests that people can manipulate their vocal intonations to convey a host of emotional, trait, and situational images. We asked 40 participants (20 men and 20 women) to intentionally manipulate the sound of their voices in order to portray four traits: attractiveness, confidence, dominance, and intelligence to compare these samples to their normal speech. We then asked independent raters of the same- and opposite-sex to assess the degree to which each voice sample projected the given trait. Women’s manipulated voices were judged as sounding more attractive than their normal voices, but this was not the case for men. In contrast, men’s manipulated voices were rated by women as sounding more confident than their normal speech, but this did not hold true for women’s voices. Further, women were able to manipulate their voices to sound just as dominant as the men’s manipulated voices, and both sexes were able to modify their voices to sound more intelligent than their normal voice. We also assessed all voice samples objectively using spectrogram analyses and several vocal patterns emerged for each trait; among them we found that when trying to sound sexy/attractive, both sexes slowed their speech and women lowered their pitch and had greater vocal hoarseness. Both sexes raised their pitch and spoke louder to sound dominant and women had less vocal hoarseness. These findings are discussed using an evolutionary perspective and implicate voice modification as an important, deliberate aspect of communication, especially in the realm of mate selection and competition.

15.
Two studies examined vocal affect in medical providers’ and patients’ content-filtered (CF) speech. A digital methodology for content-filtering and a set of reliable global affect rating scales for CF voice were developed. In Study 1, ratings of affect in physicians’ CF voice correlated with patients’ satisfaction, perceptions of choice/control, medication adherence, mental and physical health, and physicians’ satisfaction. In Study 2, ratings of affect in the CF voices of physicians and nurses correlated with their patients’ satisfaction, and the CF voices of nurses and patients reflected their satisfaction. Voice tone ratings of providers and patients were intercorrelated, suggesting reciprocity in their vocal affective communication.

16.
Previous research has shown that more attractive voices are associated with more favorable personality impressions. The present study examined which acoustic characteristics make a voice attractive. Segments of recorded voices were rated on various dimensions of voice quality, attractiveness, and personality impressions. Objective measures of voice quality were obtained from spectrogram analysis. Overall, the subjective ratings of voice quality predicted vocal attractiveness better than the objective measures. When vocal attractiveness was regressed onto both subjective and objective measures, the final regression equation included 8 subjective measures, which together accounted for 74% of the variance of the attractiveness scores. It also was found that the measures of voice quality accounted for variance in favorableness of personality impressions above and beyond the contribution of vocal attractiveness. Thus, attractiveness captures an important dimension of the voice but does not cover all aspects of voice quality. This research was supported by National Institute of Mental Health Grant MH40498.

17.
Past studies have found equivocal support for the ability of young infants to discriminate infant‐directed (ID) speech information in the presence of auditory‐only versus auditory + visual displays (faces + voices). Generally, younger infants appear to have more difficulty discriminating a change in the vocal properties of ID speech when they are accompanied by faces. Forty 4‐month‐old infants were tested using either an infant‐controlled habituation procedure (Experiment 1) or a fixed‐trial habituation procedure (Experiment 2). The prediction was that the infant‐controlled habituation procedure would be a more sensitive measure of infant attention to complex displays. The results indicated that 4‐month‐old infants discriminated voice changes in dynamic face + voice displays depending on the order in which they were viewed during the infant‐controlled habituation procedure. In contrast, no evidence of discrimination was found in the fixed‐trial procedure. The findings suggest that the selection of experimental methodology plays a significant role in the empirical observations of infant perceptual abilities.

18.
The study of emotion elicitation in the caregiver‐infant dyad has focused almost exclusively on the facial and vocal channels, whereas little attention has been given to the contribution of the tactile channel. This study was undertaken to investigate the effects of touch on infants' emotions. During the time that objects were presented to the dyad, mothers provided tactile stimulation to their 12‐month‐old infants by either (a) tensing their fingers around the infants' abdomen while abruptly inhaling, (b) relaxing their grip around the infants' abdomen, or (c) not providing additional tactile stimulation (control condition). The results revealed that infants in the first condition (a) touched the objects less and waited longer to touch the objects while displaying more negative emotional displays compared to infants in the control condition. However, no apparent differences were found between infants in the second condition (b) and the control condition. The results suggest that infants' emotions may be elicited by specific parameters of touch.

19.
20.
Family narratives about the past are an important context for the socialization of emotion, but relations between expression of negative emotion and children's emerging competence are conflicting. In this study, 24 middle‐class two‐parent families narrated a shared negative experience together and we examined the process (initiations and collaborations) and function (the expression and explanation of emotions) of co‐constructed narratives in relation to preadolescents' perceived competencies and self‐esteem. Family narratives in which specific emotions were expressed and explained in a collaborative fashion, especially negative emotion, were positively related to preadolescents' reported competencies and self‐esteem, whereas family narratives that expressed general positive emotion were negatively related to preadolescents' perceived competencies. Implications of family narratives about emotional events, specifically the ways in which families discuss emotion, in relation to preadolescents' self‐development are discussed.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号