Similar Articles
20 similar articles retrieved.
1.
When a chief executive officer or spokesperson responds to an organizational crisis, he or she communicates not only with verbal cues but also with visual and vocal cues. While most research in the area of crisis communication has focused on verbal cues (e.g., apologies, denial), this paper explores the relative importance of visual and vocal cues by spokespersons of organizations in crisis. Two experimental studies have more specifically examined the impact of a spokesperson’s visual cues of deception (i.e., gaze aversion, posture shifts, adaptors), because sending a credible response is crucial in times of crisis. Each study focused on the interplay of these visual cues with two specific vocal cues that have also been linked to perceptions of deception (speech disturbances in study 1; voice pitch in study 2). Both studies show that visual cues of deception negatively affect both consumers’ attitudes towards the organization (study 1) and their purchase intentions (study 2) after a crisis. In addition, the findings indicate that in crisis communication, the impact of visual cues dominates the outcomes of vocal cues. In both studies, vocal cues only affected consumers’ perceptions when the spokesperson displayed visual cues of deception. More specifically, the findings show that crisis communication messages with speech disturbances (study 1) or a raised voice pitch (study 2) can negatively affect organizational post-crisis perceptions.

2.
Previous research has shown that more attractive voices are associated with more favorable personality impressions. The present study examined which acoustic characteristics make a voice attractive. Segments of recorded voices were rated on various dimensions of voice quality, attractiveness, and personality impressions. Objective measures of voice quality were obtained from spectrogram analysis. Overall, the subjective ratings of voice quality predicted vocal attractiveness better than the objective measures. When vocal attractiveness was regressed onto both subjective and objective measures, the final regression equation included 8 subjective measures, which together accounted for 74% of the variance of the attractiveness scores. It also was found that the measures of voice quality accounted for variance in favorableness of personality impressions above and beyond the contribution of vocal attractiveness. Thus, attractiveness captures an important dimension of the voice but does not cover all aspects of voice quality. This research was supported by National Institute of Mental Health Grant MH40498.

3.
Character judgments, based on facial appearance, impact both perceivers’ and targets’ interpersonal decisions and behaviors. Nonetheless, the resilience of such effects in the face of longer acquaintanceship duration is yet to be determined. To address this question, we had 51 elderly long-term married couples complete self and informant versions of a Big Five Inventory. Participants were also photographed while being asked to maintain an emotionally neutral expression. A subset of the initial sample completed a shortened version of the Big Five Inventory in response to the pictures of other opposite sex participants (with whom they were unacquainted). Oosterhof and Todorov’s (in Proc Natl Acad Sci USA 105:11087–11092, 2008) computer-based model of face evaluation was used to generate facial trait scores on trustworthiness, dominance, and attractiveness, based on participants’ photographs. Results revealed that structural facial characteristics, suggestive of greater trustworthiness, predicted positively biased, global informant evaluations of a target’s personality, among both spouses and strangers. Among spouses, this effect was impervious to marriage length. There was also evidence suggestive of a Dorian Gray effect on personality, since facial trustworthiness predicted not only spousal and stranger, but also self-ratings of extraversion. Unexpectedly, though, follow-up analyses revealed that (low) facial dominance, rather than (high) trustworthiness, was the strongest predictor of self-rated extraversion. Our present findings suggest that subtle emotional cues, embedded in the structure of emotionally neutral faces, exert long-lasting effects on personality judgments even among very well-acquainted targets and perceivers.

4.
The present study aimed to clarify how listeners decode emotions from human nonverbal vocalizations, exploring unbiased recognition accuracy of vocal emotions selected from the Montreal Affective Voices (MAV) (Belin et al. in Trends Cognit Sci 8:129–135, 2008. doi: 10.1016/j.tics.2004.01.008). The MAV battery includes 90 nonverbal vocalizations expressing anger, disgust, fear, pain, sadness, surprise, happiness, sensual pleasure, as well as neutral expressions, uttered by female and male actors. Using a forced-choice recognition task, 156 native speakers of Portuguese were asked to identify the emotion category underlying each MAV sound, and additionally to rate the valence, arousal and dominance of these sounds. The analysis focused on unbiased hit rates (Hu Score; Wagner in J Nonverbal Behav 17(1):3–28, 1993. doi: 10.1007/BF00987006), as well as on the dimensional ratings for each discrete emotion. Further, we examined the relationship between categorical and dimensional ratings, as well as the effects of speaker’s and listener’s sex on these two types of assessment. Surprise vocalizations were associated with the poorest accuracy, whereas happy vocalizations were the most accurately recognized, contrary to previous studies. Happiness was associated with the highest valence and dominance ratings, whereas fear elicited the highest arousal ratings. Recognition accuracy and dimensional ratings of vocal expressions were dependent both on speaker’s sex and listener’s sex. Further, discrete vocal emotions were not consistently predicted by dimensional ratings. Using a large sample size, the present study provides, for the first time, unbiased recognition accuracy rates for a widely used battery of nonverbal vocalizations. The results demonstrated a dynamic interplay between listener’s and speaker’s variables (e.g., sex) in the recognition of emotion from nonverbal vocalizations. Further, they support the use of both categorical and dimensional accounts of emotion when probing how emotional meaning is decoded from nonverbal vocal cues.
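For readers unfamiliar with the unbiased hit rate cited above, the sketch below illustrates, under the usual reading of Wagner (1993), how Hu can be computed from a listener's stimulus-by-response confusion matrix: for each category, the hit rate (hits divided by stimuli presented) is multiplied by the proportion of uses of that response that were correct. The confusion counts, category set, and function name are invented for illustration and are not taken from the MAV study.

```python
import numpy as np

def unbiased_hit_rates(confusion):
    """Wagner-style unbiased hit rate Hu per stimulus category.

    confusion[i, j] = how often stimulus category i received response j
    (rows = presented categories, columns = chosen response categories).
    Hu_i = (hits_i / stimuli_i) * (hits_i / uses_of_response_i)
         = hits_i ** 2 / (row_total_i * column_total_i).
    """
    confusion = np.asarray(confusion, dtype=float)
    hits = np.diag(confusion)
    row_totals = confusion.sum(axis=1)   # how often each category was presented
    col_totals = confusion.sum(axis=0)   # how often each response was chosen
    hu = np.zeros_like(hits)
    ok = (row_totals > 0) & (col_totals > 0)
    hu[ok] = hits[ok] ** 2 / (row_totals[ok] * col_totals[ok])
    return hu

# Toy data for one listener: 3 categories, 10 stimuli each.
conf = np.array([[8, 1, 1],    # anger
                 [3, 6, 1],    # fear
                 [0, 2, 8]])   # happiness
print(unbiased_hit_rates(conf))  # anger: 8**2 / (10 * 11) ≈ 0.58
```

Wagner also recommends arcsine-transforming Hu scores before parametric tests; that step is omitted in this sketch.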

5.
Two studies examined the effects of attractiveness of voice and physical appearance on impressions of personality. Subject-senders were videotaped as they read a standard-content text (Study 1) or randomly selected texts (Study 2). Judges rated the senders' vocal attractiveness from the auditory portion of the tape and their physical attractiveness from the visual portion of the tape. Other judges rated the senders' personality on the basis of their voice, face, or face plus voice. Senders with more attractive voices were rated more favorably in both the voice and face plus voice conditions; senders with more attractive faces were rated more favorably in both the face and face plus voice conditions. The effects of both vocal and physical attractiveness were more pronounced in the single channels (voice condition and face condition, respectively) than in the multiple channel (face plus voice condition). Possible antecedents and consequences of the vocal attractiveness stereotype are discussed. "Her voice was ever soft, gentle, and low, an excellent thing in woman." (Shakespeare, King Lear, Act V, Sc. 3) This research was supported in part by National Institute of Mental Health Grant RO1 MH40498-01A2. The authors would like to thank Thomas J. Hernandez, James R. Laguzza, Andrea Lurier, and Mary Elizabeth Sementilli for running the videotaping sessions in Study 1, and Craig B. Partyka, Kimberly A. Radoane, and Kelly B. Sanborn for running the videotaping sessions in Study 2. Grateful acknowledgment is extended to Kate Johnson and Michael Zygmuntowicz for running rating sessions in Study 2, to BiancaMaria Penati for running rating sessions and coding data in Study 2, and to Bradley C. Olson for his assistance with data analysis.

6.
Despite known differences in the acoustic properties of children’s and adults’ voices, no work to date has examined the vocal cues associated with emotional prosody in youth. The current study investigated whether child (n = 24, 17 female, aged 9–15) and adult (n = 30, 15 female, aged 18–63) actors differed in the vocal cues underlying their portrayals of basic emotions (anger, disgust, fear, happiness, sadness) and social expressions (meanness, friendliness). We also compared the acoustic characteristics of meanness and friendliness to comparable basic emotions. The pattern of distinctions between expressions varied as a function of age for voice quality and mean pitch. Specifically, adults’ portrayals of the various expressions were more distinct in mean pitch than children’s, whereas children’s representations differed more in voice quality than adults’. Given the importance of pitch variables for the interpretation of a speaker’s intended emotion, expressions generated by adults may thus be easier for listeners to decode than those of children. Moreover, the vocal cues associated with the social expressions of meanness and friendliness were distinct from those of basic emotions like anger and happiness respectively. Overall, our findings highlight marked differences in the ways in which adults and children convey socio-emotional expressions vocally, and expand our understanding of the communication of paralanguage in social contexts. Implications for the literature on emotion recognition are discussed.

7.
We examined personality impressions on five NEO subscales (Costa & McCrae, 1985) as a function of senders' vocal and physical attractiveness. There were four major findings: (a) both vocal and physical attractiveness produced more favorable ratings, and these effects were more pronounced in a single channel (voice only or face only, respectively) than in a multiple channel (voice plus face); (b) the influence of attractiveness, both vocal and physical, was moderated by subscale—the effect of vocal attractiveness was most pronounced for Neuroticism and nonexistent for Agreeableness; the effect of physical attractiveness was most pronounced for Extraversion and nonexistent for Conscientiousness; (c) a vocal attractiveness × physical attractiveness interaction indicated that effects of the two stereotypes were particularly strong for senders who were attractive on both channels; (d) the effects of attractiveness, both vocal and physical, diminished when judges were familiar with the target persons. This research was supported in part by National Institute of Mental Health Grant RO1 MH 40498.

8.
Although appearance-based cues can help to diagnose physical illness, visual manifestations of mental disorder may be more elusive. Here, we investigated whether individuals could distinguish women with a serious mental disorder (borderline personality disorder) from demographically- and IQ-matched non-psychiatric controls. From photos alone, participants rated the mentally ill targets as more likely to have a mental disorder at levels of accuracy above chance, despite not believing that such judgments were possible. The configuration of facial cues played an important role in these judgments, as interfering with the spatial relationships between facial features reduced participants’ accuracy to chance guessing. Further investigation showed similar results when participants rated the targets for specific mental disorders (borderline personality disorder, major depressive disorder) and rated the mentally ill targets as more depressed, angry, anxious, disgusted, emotionally unstable, distressed, and less happy. Moreover, the depression ratings significantly correlated with the targets’ actual depressive symptoms. Thus, individuals may be able to infer aspects of mental disorder from minimal facial cues.

9.
Previous research has not considered the effects of nonverbal synchronization by a speaker on message processing and acceptance by a listener. In this experiment, 178 subjects watched one of three versions of a message—high synchrony, minimal synchrony or dissynchrony—presented by one of two speakers. Receivers of the high synchrony message, which employed kinesic cues synchronized to the vocal/verbal stream, showed higher recall of the message and were more persuaded by it than receivers of the dissynchronous message, which had kinesic cues out of sync with the vocal/verbal stream. Results on three other dependent measures—credibility, distraction and counterarguing—were mixed but were generally consistent with the credibility-yielding and distraction-yielding formulations outlined.

10.
This study investigated the effects of anxiety on nonverbal aspects of speech using data collected in the framework of a large study of social phobia treatment. The speech of social phobics (N = 71) was recorded during an anxiogenic public speaking task both before and after treatment. The speech samples were analyzed with respect to various acoustic parameters related to pitch, loudness, voice quality, and temporal aspects of speech. The samples were further content-masked by low-pass filtering (which obscures the linguistic content of the speech but preserves nonverbal affective cues) and subjected to listening tests. Results showed that a decrease in experienced state anxiety after treatment was accompanied by corresponding decreases in (a) several acoustic parameters (i.e., mean and maximum voice pitch, high-frequency components in the energy spectrum, and proportion of silent pauses), and (b) listeners’ perceived level of nervousness. Both speakers’ self-ratings of state anxiety and listeners’ ratings of perceived nervousness were further correlated with similar acoustic parameters. The results complement earlier studies on vocal affect expression which have been conducted on posed, rather than authentic, emotional speech.
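As a rough illustration of the content-masking step described above, the sketch below low-pass filters a recording so that the words become unintelligible while gross prosodic cues (pitch contour, loudness envelope, pausing) survive. The cutoff frequency, filter order, file names, and function name are illustrative assumptions, not the parameters used in the cited study.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

def content_mask(in_path, out_path, cutoff_hz=400.0, order=6):
    """Low-pass filter speech to obscure linguistic content while keeping
    nonverbal affective cues (illustrative parameters only)."""
    sr, samples = wavfile.read(in_path)
    samples = samples.astype(np.float64)
    if samples.ndim > 1:                 # mix stereo down to mono
        samples = samples.mean(axis=1)
    sos = butter(order, cutoff_hz, btype="low", fs=sr, output="sos")
    masked = sosfiltfilt(sos, samples)   # zero-phase filtering, no time shift
    masked = np.int16(masked / np.max(np.abs(masked)) * 32767)
    wavfile.write(out_path, sr, masked)

# content_mask("speech.wav", "speech_masked.wav")
```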

11.
The aim of this research was to analyze the main vocal cues and strategies used by a liar. Thirty-one male university students were asked to raise doubts in an expert in law about a picture. The subjects were required to describe the picture in three experimental conditions: telling the truth (T) and lying to a speaker when acquiescent (L1) and when suspicious (L2). The utterances were then subjected to a digitized acoustic analysis in order to measure nonverbal vocal variables. Verbal variables were also analyzed (number of words, eloquency and disfluency index). Results showed that deception provoked an increment in F0, a greater number of pauses and words, and higher eloquency and fluency indexes. The F0 related to the two types of lie—prepared and unprepared—identified three classes of liars: good liars, tense liars (more numerous in L1), and overcontrolled liars (more numerous in L2). It is argued that these differences are correlated to the complex task of lying and the need to control one's emotions during deception. The liar's effort to control his/her voice, however, can lead to his/her tone being overcontrolled or totally lacking in control (leakage). Finally, the research puts forward an explanation of the strategies used by the good liar and, in particular, considers the self-deception hypothesis.
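The acoustic side of analyses like the one above can be approximated with off-the-shelf tools. The sketch below is a minimal illustration rather than the study's actual measurement procedure: it estimates mean F0 with the pYIN tracker in librosa and counts silent pauses with a simple energy threshold. The pitch range, silence threshold, and function name are assumptions made for the example.

```python
import numpy as np
import librosa

def basic_vocal_cues(path, top_db=30):
    """Rough per-recording estimates of mean F0 and silent pausing
    (illustrative thresholds, not the cited study's procedure)."""
    y, sr = librosa.load(path, sr=None)
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)
    mean_f0 = float(np.nanmean(f0))                    # mean fundamental frequency (Hz)

    # Treat stretches quieter than the threshold as silent pauses.
    speech = librosa.effects.split(y, top_db=top_db)   # (start, end) sample indices
    voiced_samples = sum(end - start for start, end in speech)
    return {"mean_f0_hz": mean_f0,
            "n_pauses": max(len(speech) - 1, 0),
            "pause_proportion": 1.0 - voiced_samples / len(y)}
```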

12.
There is some debate about whether or not sex offenders are similar to non-sex offenders with regard to family background (parental characteristics), personality, and psychopathology. The central aim of this study was to compare juvenile sex offenders and non-sex offenders. The sample consisted of incarcerated juvenile male sex (n = 30) and non-sex (n = 368) offenders. Sex offenders resembled non-sex offenders with respect to most of the offender and parental characteristics; the two groups differed on only a minority of characteristics. Limitations of the study are discussed, especially the low number of sex offenders, followed by suggestions for further research.

13.
This research utilized a novel methodology to explore the relative salience of facial cues to age, sex, race, and emotion in differentiating faces. Inspired by the Stroop interference effect, participants viewed pairs of schematic faces on a computer that differed simultaneously along two facial dimensions (e.g., race and age) and were prompted to make similarity judgments about the faces along one of the dimensions (e.g., race). On a second round of trials, judgments were made along the other dimension (e.g., age). Analysis of response speed and accuracy revealed that participants judged the race of the faces more quickly and with fewer errors compared to their age, gender, or emotional expression. Methodological and theoretical implications for studying the role of facial cues in social perception are discussed.

14.
Aiming at a more comprehensive assessment of nonverbal vocal emotion communication, this article presents the development and validation of a new rating instrument for the assessment of perceived voice and speech features. In two studies, using two different sets of emotion portrayals by German and French actors, ratings of perceived voice and speech characteristics (loudness, pitch, intonation, sharpness, articulation, roughness, instability, and speech rate) were obtained from non-expert (untrained) listeners. In addition, standard acoustic parameters were extracted from the voice samples. Overall, highly similar patterns of results were found in both studies. Rater agreement (reliability) reached highly satisfactory levels for most features. Multiple discriminant analysis results reveal that both perceived vocal features and acoustic parameters allow a high degree of differentiation of the actor-portrayed emotions. Positive emotions can be classified with a higher hit rate on the basis of perceived vocal features, confirming suggestions in the literature that it is difficult to find acoustic valence indicators. The results show that the suggested scales (Geneva Voice Perception Scales) can be reliably measured and make a substantial contribution to a more comprehensive assessment of the process of emotion inferences from vocal expression.
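To make the classification logic concrete, the sketch below shows the general shape of a discriminant-analysis "hit rate" computation with scikit-learn. It stands in for, rather than reproduces, the multiple discriminant analysis reported above, and the data are random placeholders (one row per portrayal, one column per perceived-voice scale or acoustic parameter).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))     # placeholder: 120 portrayals x 8 rating scales
y = rng.integers(0, 6, size=120)  # placeholder: 6 intended emotion categories

lda = LinearDiscriminantAnalysis()
hit_rates = cross_val_score(lda, X, y, cv=5)  # proportion correctly classified per fold
print("mean cross-validated hit rate:", hit_rates.mean())
```

With random placeholder data the hit rate hovers near chance (about 1/6); with real perceived-voice ratings or acoustic parameters it would index how well those features separate the portrayed emotions.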

15.
Physical personality involves perceptible aspects of human appearance (both morphological and expressive) that are reliable cues about human traits and dispositions. Inferences made about personality on the basis of perceiving the cues in appearance may be valid as well as invalid. While the perception of many such characteristics has been studied before, perception of suggestibility from facial appearance has remained unexplored. Here we present the results of a study in which we show that real suggestibility does not correlate with perceived suggestibility. However, there is a significant correlation between perceived suggestibility and some facial characteristics such as babyfacedness and merriness.

16.
17.
Audiotapes of the voices of 77 preschool children were prepared. Subjects listened to the tapes, and then provided their impressions of the competence, leadership, dominance, warmth, and honesty of the children. Judgments of the voices' babyishness and attractiveness were also obtained. Perceivers reliably discriminated the children's voices along the dimensions of babyishness and attractiveness. Moreover, analyses revealed that the previously documented impact of these characteristics on first impressions of adults extends to impressions of young children. The similarity of the effects of these characteristics on impressions formed about children to those revealed for adults suggests that vocal qualities may have an impact on personality development via a process of self-fulfilling prophecy.

18.
Research has demonstrated that infants recognize emotional expressions of adults in the first half year of life. We extended this research to a new domain, infant perception of the expressions of other infants. In an intermodal matching procedure, 3.5‐ and 5‐month‐old infants heard a series of infant vocal expressions (positive and negative affect) along with side‐by‐side dynamic videos in which one infant conveyed positive facial affect and another infant conveyed negative facial affect. Results demonstrated that 5‐month‐olds matched the vocal expressions with the affectively congruent facial expressions, whereas 3.5‐month‐olds showed no evidence of matching. These findings indicate that by 5 months of age, infants detect, discriminate, and match the facial and vocal affective displays of other infants. Further, because the facial and vocal expressions were portrayed by different infants and shared no face–voice synchrony, temporal, or intensity patterning, matching was likely based on detection of a more general affective valence common to the face and voice.

19.
The lesbian dyad     
Little is known regarding how respondents interpret terms that are commonly used in sexual behavior surveys. The present study assessed the impact of four factors on respondents’ judgments of whether the hypothetical actors “Jim” and “Susie” would consider a particular behavior that they had engaged in to be “sex.” The four factors were respondent's gender, actor's gender, type of act (vaginal, anal, or oral intercourse), and who achieved orgasm (neither, Jim only, Susie only, or both). Two hundred twenty‐three undergraduates (22.2 ± 2.2 years; 65% female) were asked to read 16 scenarios featuring Jim and Susie and to judge whether each actor would consider the described behavior to be sex. Results indicated that vaginal and anal intercourse were considered sex under most circumstances. Whether oral intercourse was labeled as sex depended on the gender and viewpoint of the actor, and whether orgasm occurred. Findings suggest that items in sexual behavior surveys need to be clearly delineated to avoid subjective interpretations by respondents.

20.
The present research examined if the impact of a babyface on trait impressions documented in previous research holds true for moving faces. It also assessed the relative impact of a babyface and a childlike voice on impressions of talking faces. To achieve these goals, male and female targets' traits as well as their facial and vocal characteristics were rated in one of four information conditions: Static Face, Moving Face, Voice Only, or Talking Face. Facial structure measurements were also made by two independent judges. Data for male faces supported the experimental hypotheses. Specifically, regression analyses revealed that although a babyish facial structure created the impression of weakness even when a target moved his face, this effect was diminished when he also talked. Here a childlike voice and dynamic babyishness, as assessed by moving face ratings, were more important predictors. Similarly, a babyish facial structure had less impact on impressions of a talking target's warmth than did dynamic babyishness or other facial movement. A childlike voice had no impact on impressions of warmth when facial information was available. This research was supported by a NIMH Grant #BSR 5 R01 MH42684 to the first author. The authors would like to thank Linda Linn for her assistance in preparing the stimulus materials and Danylle Rudin for her help in collecting the data. Thanks are also extended to David M. Buss for his suggestions about alternative explanations which may help guide future work.
