Similar Documents
Found 20 similar documents; search took 15 ms.
1.
Human–human communication studies suggest that within communicative interactions, individuals acknowledge each other as intentional agents and adjust their nonverbal emotional behavior to one another. This process has been defined as emotional attunement. In this study, we examine the emotional attunement process in the context of affective human–computer interactions. To this end, participants were assigned to one of two conditions. In one, they played with a computer that simulated understanding of their emotional reactions while guiding them through four game-like activities; in the other, the computer guided participants through the activities without mentioning any ability to understand emotional responses. Face movements, gaze direction, posture, vocal behavior, electrocardiographic activity, and electrodermal activity were recorded simultaneously during the experimental sessions. Results showed that participants who were aware of interacting with an agent able to recognize their emotions reported that the computer was able to “understand” them, and displayed a greater number of nonverbal behaviors during the most interactive activity. The implications are discussed.

2.
The present study aimed to clarify how listeners decode emotions from human nonverbal vocalizations, exploring unbiased recognition accuracy for vocal emotions selected from the Montreal Affective Voices (MAV) (Belin et al. in Trends Cognit Sci 8:129–135, 2004. doi: 10.1016/j.tics.2004.01.008). The MAV battery includes 90 nonverbal vocalizations expressing anger, disgust, fear, pain, sadness, surprise, happiness, and sensual pleasure, as well as neutral expressions, uttered by female and male actors. Using a forced-choice recognition task, 156 native speakers of Portuguese were asked to identify the emotion category underlying each MAV sound and to rate the valence, arousal, and dominance of these sounds. The analysis focused on unbiased hit rates (Hu score; Wagner in J Nonverbal Behav 17(1):3–28, 1993. doi: 10.1007/BF00987006), as well as on the dimensional ratings for each discrete emotion. We further examined the relationship between categorical and dimensional ratings, and the effects of speaker’s and listener’s sex on both types of assessment. Surprise vocalizations were associated with the poorest accuracy, whereas happy vocalizations were the most accurately recognized, contrary to previous studies. Happiness was associated with the highest valence and dominance ratings, whereas fear elicited the highest arousal ratings. Recognition accuracy and dimensional ratings of vocal expressions depended on both the speaker’s sex and the listener’s sex. Furthermore, discrete vocal emotions were not consistently predicted by dimensional ratings. Using a large sample, the present study provides, for the first time, unbiased recognition accuracy rates for a widely used battery of nonverbal vocalizations. The results demonstrate a dynamic interplay between listener and speaker variables (e.g., sex) in the recognition of emotion from nonverbal vocalizations, and they support the use of both categorical and dimensional accounts of emotion when probing how emotional meaning is decoded from nonverbal vocal cues.
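For context, Wagner’s (1993) unbiased hit rate cited above corrects raw accuracy for response bias. In its standard formulation (stated here for reference, not quoted from the abstract), for emotion category $i$, let $a_{ii}$ be the number of correct identifications of category $i$, $a_{i\cdot}$ the number of stimuli presented from category $i$, and $a_{\cdot i}$ the total number of times response $i$ was given:

$$H_u = \frac{a_{ii}^{2}}{a_{i\cdot}\, a_{\cdot i}} = \frac{a_{ii}}{a_{i\cdot}} \times \frac{a_{ii}}{a_{\cdot i}}$$

Hu is thus the product of a category’s hit rate and the proportion of that response’s uses that were correct, so a listener who answers “surprise” on every trial gets a perfect hit rate for surprise but a low Hu.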

3.
Young (M = 23 years) and older (M = 77 years) adults' interpretation and memory for the emotional content of spoken discourse were examined in an experiment using short, videotaped scenes of two young actresses talking to each other about emotionally laden events. Emotional nonverbal information (prosody or facial expressions) was conveyed at the end of each scene at low, medium, and high intensities, and nonverbal information indicating anger, happiness, or fear conflicted with the verbal information. Older adults were less able than younger adults to differentiate levels of emotional intensity for happiness and anger. An incidental memory task revealed that older adults, more often than younger adults, reconstructed what speakers said verbally to coincide with the meaning of the nonverbal content when that content was conveyed through facial expressions. A second experiment with older participants showed that the high rate of memory reconstructions favoring the nonverbal interpretation was maintained when the ages of the participants and actresses were matched, and when the nonverbal content was conveyed through both prosody and facial expressions.

4.
Alcohol-dependent patients (ADs) are known to encounter severe interpersonal problems, and nonverbal communication skills are important for the development of healthy relationships. The present study explored emotional facial expression (EFE) recognition as well as posed and spontaneous EFE expressivity in male ADs divided into two groups according to Cloninger’s typology, and the impact of interpersonal relationship quality on potential nonverbal deficits. Twenty type I ADs, twenty-one type II ADs, and twenty control participants took part in an EFE recognition task and an EFE expressivity task that considered personal emotional events (spontaneous expressivity) and EFEs produced in response to a photo or word cue (posed expressivity). Coding was based on judges’ ratings of participants’ emotional facial expressions. Participants additionally completed a questionnaire on interpersonal relationship quality. No differences between the three groups emerged in the EFE recognition task. Type II ADs showed greater deficits than type I ADs in EFE expressivity: judges perceived less accurate posed EFEs in response to a word cue, and less intense and less positive spontaneous EFEs, in type II ADs compared to control participants. In addition, type II ADs reported more relationship difficulties than both type I ADs and control participants, and these difficulties were related to some of their EFE expressivity deficits. This study underlines important differences in interpersonal functioning between AD subtypes.

5.
6.
This study tested the possibility that individual differences in nonverbal expressiveness function as a mediating factor in the transmission of emotion through social comparison. In a quasi-experimental design, small groups consisting of one expressive person and two unexpressive people were created, in which the participants sat facing each other without talking for two minutes. Self-report measures of mood indicated that the feelings of the unexpressive people were influenced by the expressive people, whereas the expressive people were relatively unlikely to be influenced by the unexpressive people. The findings have implications for the role of nonverbal communication in the emotional side of group interaction. This research was supported by NIMH Grant #R03MH31453 and by an Intramural Research Grant from UC Riverside to Howard Friedman. We would like to thank Louise M. Prince and Dan Segall for their assistance and Eliot Smith, Joe Schwartz and Keith Widaman for suggestions.

7.
The present studies examined how sensitivity to spatiotemporal percepts such as rhythm, angularity, configuration, and force predicts accuracy in perceiving emotion. In Study 1, participants (N = 99) completed a nonverbal test battery consisting of three nonverbal emotion perception tests and two perceptual sensitivity tasks assessing rhythm and angularity sensitivity. Study 2 (N = 101) extended the findings of Study 1 with the addition of a fourth nonverbal test, a third sensitivity task assessing configuration, and a fourth assessing force. Regression analyses across both studies revealed partial support for an association between perceptual sensitivity to spatiotemporal percepts and greater emotion perception accuracy. The results indicate that accuracy in perceiving emotions may be predicted by sensitivity to specific percepts embedded within channel- and emotion-specific displays. The significance of such research lies in understanding how individuals acquire emotion perception skill and the processes by which distinct features of percepts relate to the perception of emotion.

8.
Recent research has demonstrated that preschool children can decode emotional meaning in expressive body movement; however, to date, no research had considered preschool children's ability to encode emotional meaning in this medium. The current study investigated 4-year-old (N = 23) and 5-year-old (N = 24) children's ability to encode the emotional meaning of an accompanying music segment by moving a teddy bear, using previously modeled expressive movements, to indicate one of four target emotions (happiness, sadness, anger, or fear). Adult judges visually categorized the silent videotaped expressive-movement performances of children of both ages with greater-than-chance accuracy. In addition, accuracy in categorizing the expressed emotion varied as a function of the child's age and the emotion. A subsequent cue analysis revealed that children as young as 4 years systematically varied their expressive movements with respect to force, rotation, shifts in movement pattern, tempo, and upward movement in the process of emotional communication. The theoretical significance of such encoding ability is discussed with respect to children's nonverbal skills and the communication of emotion.

9.
Nonlinguistic communication is typically proposed to convey representational messages, implying that particular signals are associated with specific signaler emotions, intentions, or external referents. However, common signals produced by both nonhuman primates and humans may not exhibit such specificity; human laughter, for example, shows significant diversity in both acoustic form and production context. We therefore outline an alternative to the representational approach, arguing that laughter and other nonlinguistic vocalizations are used to influence the affective states of listeners, thereby also affecting their behavior. In the case of laughter, we propose a primary function of accentuating or inducing positive affect in the perceiver in order to promote a more favorable stance toward the laugher. Two simple strategies are identified: producing laughter with acoustic features that have an immediate impact on listener arousal, and pairing these sounds with positive affect in the listener to create learned affective responses. Both depend on factors such as the listener's current emotional state and past interactions with the vocalizer, with laughers predicted to adjust their sounds accordingly. This approach is used to explain findings from two experimental studies that examined the use of laughter in same-sex and different-sex dyads composed of either friends or strangers, and it may be applicable to other forms of nonlinguistic communication.

10.
Although the second year of life is characterized by dramatic changes in expressive language and by increases in negative emotion expression, verbal communication and emotional communication are often studied separately. With a sample of twenty-five one-year-olds (12–23 months), we used Language Environment Analysis (LENA; Xu, Yapanel, & Gray, 2009, Reliability of the LENA™ language environment analysis system in young children's natural home environment. LENA Foundation) to audio-record and quantify parent–toddler communication, including toddlers' vocal negative emotion expressions, across a full waking day. Using a multilevel extension of lag-sequential analysis, we investigated whether parents are differentially responsive to toddlers' negative emotion expressions compared with their verbal or preverbal vocalizations, and we examined the effects of parents' verbal responses on toddlers' subsequent communicative behavior. Toddlers' negative emotions were less likely than their vocalizations to be followed by parent speech. However, when negative emotions were followed by parent speech, toddlers were most likely to vocalize next. Post hoc analyses suggest that older toddlers and toddlers with higher language abilities were more likely to shift from negative emotion to verbal or preverbal vocalization following a parent response. Implications of the results for understanding the parent–toddler communication processes that support both emotional development and verbal development are discussed.
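As an aside, the lag-sequential analysis referenced above asks how likely one coded event is to immediately follow another. A minimal single-level sketch of the idea (the study itself used a multilevel extension; the event codes and data here are hypothetical):

```python
# Minimal sketch of lag-1 sequential analysis (not the authors' multilevel
# model): given a time-ordered sequence of coded parent-toddler events,
# estimate P(next event = B | current event = A).
from collections import Counter, defaultdict

def lag1_transition_probs(events):
    """Return {antecedent: {consequent: probability}} over adjacent pairs."""
    pair_counts = defaultdict(Counter)
    for a, b in zip(events, events[1:]):
        pair_counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in pair_counts.items()}

# Hypothetical codes: toddler vocalization (TV), toddler negative emotion (TN),
# parent speech (PS).
day = ["TV", "PS", "TV", "TN", "PS", "TV", "TN", "TN", "PS", "TV"]
probs = lag1_transition_probs(day)
# Comparing P(PS | TV) with P(PS | TN) mirrors the abstract's question of
# whether parents respond more to vocalizations than to negative emotion.
print(probs.get("TV", {}).get("PS", 0.0), probs.get("TN", {}).get("PS", 0.0))
```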

11.
Men and women were videotaped while they silently viewed emotionally toned slides (view period) and then described their feelings (talk period). They then rated their feelings on scales of pleasantness, strength, and 10 specific emotions. Videotapes of the two sending periods were shown separately to receivers, who tried to identify the type of slide the sender was viewing or describing (categorization measure) and rated the senders' expressions on the same scales (emotion correlation measure). Results indicated that communication accuracy, and gender differences in sending, varied with sending period, type of slide, communication measure, and specific emotion. On the categorization measure, women were generally better senders. On the emotion correlation measures, women were better senders of pleasantness, disgust, distress, fear, and anger, while men were slightly better senders of guilt. Accuracy was generally better in the view period than in the talk period, and the view period produced more pronounced gender differences. It is argued that categorization and correlation measures are sensitive to different aspects of emotion communication. Used in conjunction with modifications of the slide-viewing paradigm, the two types of measure provide versatile means of investigating social aspects of emotional expression. The sending phase of this study was conducted while the first author was visiting the University of Connecticut, Storrs. The receiving phase was conducted by the third author at the University of Manchester, England. The authors wish to thank Rachel Calam for her help in the conduct of the sending phase.

12.
Unimodal emotionally salient visual and auditory stimuli capture attention and have been found to do so cross-modally. However, little is known about the combined influence of auditory and visual threat cues on directing spatial attention. In particular, fearful facial expressions signal the presence of danger and capture attention, yet it is unknown whether the human auditory distress signals that accompany fearful facial expressions potentiate this capture of attention. It was hypothesized that the capture of attention by fearful faces would be enhanced when they were co-presented with auditory distress signals. To test this hypothesis, we used a modified multimodal dot-probe task in which fearful faces were paired with three sound categories: a no-sound control, non-distressing human vocalizations, and distressing human vocalizations. Fearful faces captured attention in all three sound conditions, and this effect was potentiated when fearful faces were paired with auditory distress signals. The results provide initial evidence that emotional attention is facilitated by multisensory integration.
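For readers unfamiliar with the paradigm, a dot-probe task typically indexes attention capture as a reaction-time difference between congruent trials (the probe replaces the emotional face) and incongruent trials (the probe appears at the other location). A minimal sketch with hypothetical values, not the authors' actual analysis:

```python
# Hedged sketch of a dot-probe attention-bias index. Congruent = probe
# appears where the fearful face was; incongruent = probe appears at the
# other location. All reaction times (ms) below are made up for illustration.
from statistics import mean

rt_congruent = [412, 398, 405, 420]    # probe at fearful-face location
rt_incongruent = [448, 455, 439, 460]  # probe at opposite location

# Faster congruent responses imply attention was already at the face:
bias_ms = mean(rt_incongruent) - mean(rt_congruent)
print(f"attention bias: {bias_ms:.1f} ms")  # positive => capture by the face
```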

13.
This paper tested whether humans can detect, from thin slices of athletes' nonverbal behavior, whether the athletes are trailing or leading in sports. In Experiment 1, participants who were inexperienced in the respective sports watched short videos depicting basketball and table tennis players and rated whether the athletes were trailing or leading. Results indicated that participants could significantly differentiate between trailing and leading athletes in both team and individual sports. Experiment 2 showed that children were also able to distinguish between trailing and leading athletes based on nonverbal behavior; comparison with the adult results from Experiment 1 revealed that the adult ratings corresponded more closely to the actual scores during the game than the children's did. In Experiment 3, we replicated the findings of Experiment 1 with both expert and inexperienced participants and a different set of stimuli from team handball; both groups were able to differentiate between leading and trailing athletes. Our findings are in line with evolutionary accounts of nonverbal behavior and suggest that humans display nonverbal signals, as a consequence of leading or trailing, that are reliably interpreted by others. By comparing this effect across age groups, we provide evidence that although even young children can differentiate between leading and trailing athletes, the decoding of subtle nonverbal cues continues to develop with increasing experience and maturation.

14.
The human body plays a central role in nonverbal communication, conveying attitudes, personality, and values during social interactions. Three experiments in a large, open classroom setting investigated whether the visibility of torso-located cues affects nonverbal communication of similarity. In Experiments 1 and 2, half the participants wore a black plastic bag over their torso. Participants interacted with an unacquainted same-sex individual selected from a large class who was also wearing (or also not wearing) a bag. Experiment 3 added a clear-bag condition, in which visual torso cues were not obscured. Across experiments, black-bag-wearing participants selected partners who were less similar to them in attitudes, behaviors, and personality than did the bag-less and clear-bag participants. Nonverbal cues from the torso thus communicate information about similarity of attitudes, behavior, and personality; the center of the body plays a surprisingly central role in early-stage person perception and attraction.

15.
Humans perceive emotions in terms of categories, such as “happiness,” “sadness,” and “anger.” To learn these complex conceptual emotion categories, humans must first be able to perceive regularities in expressive behaviors (e.g., facial configurations) across individuals. Recent research suggests that infants spontaneously form “basic-level” categories of facial configurations (e.g., happy vs. fear) but not “superordinate” categories (e.g., positive vs. negative). The current studies further explore how infant age and language affect superordinate categorization of facial configurations associated with different negative emotions. Across all experiments, infants were habituated to one person displaying facial configurations associated with anger and disgust. While 10-month-olds formed a category of person identity (Experiment 1), 14-month-olds formed a category that included negative facial configurations displayed by the same person (Experiment 2). However, neither age group formed the hypothesized superordinate category of negative valence. When a verbal label (“toma”) was added to each of the habituation events (Experiment 3), 10-month-olds formed a category similar to that formed by 14-month-olds in Experiment 2. These findings connect to a larger conversation about the nature and development of children's emotion categories and highlight the importance of considering developmental processes, such as language learning and attentional/memory development, in the design and interpretation of infant categorization studies.

16.
Women were videotaped while they spoke about a positive and a negative experience, either in the presence of an experimenter or alone. They gave self-reports of their emotional experience, and the videotapes were rated for facial and verbal expression of emotion. Participants spoke less about their emotions when the experimenter (E) was present. When E was present, participants smiled more during positive disclosures but showed less negative and more positive expression during negative disclosures. Facial behavior was related to experienced emotion only during positive disclosure when alone, whereas verbal behavior was related to experienced emotion for both positive and negative disclosures when alone. These results show that verbal and nonverbal behaviors, and their relationship to emotional experience, depend on the type of emotion, the nature of the emotional event, and the social context.

17.
We investigated how power priming affects facial emotion recognition in the context of body postures conveying the same or a different emotion. Facial emotions are usually recognized better when the face is presented with a congruent body posture and worse when the body posture is incongruent. In our study, we primed participants to low, high, or neutral power prior to a facial-emotion categorization task in which faces were presented together with a congruent or incongruent body posture. Facial emotion recognition in high-power participants was not affected by body posture. In contrast, low-power and neutral-power participants were significantly affected by the congruence of facial and body emotions: they displayed better facial emotion recognition when the body posture was congruent and worse performance when it was incongruent. In a subsequent task, we trained the same participants to categorize two sets of novel checkerboard stimuli and then engaged them in a recognition test involving compounds of these stimuli. High-, low-, and neutral-power participants all showed a strong congruence effect for the compound checkerboard stimuli. We discuss our results with reference to the literature on power and social perception.

18.
Facial expressions related to sadness are a universal signal of nonverbal communication. Although many psychology studies have shown that drooping of the lip corners, raising of the chin, and oblique eyebrow movements (a combination of inner-brow raising and brow lowering) express sadness, no study has characterized these expressions under well-controlled circumstances in people actually experiencing sadness, so spontaneous facial expressions of sadness remain unclear. We conducted this study to accumulate findings on spontaneous facial expressions of sadness. We recorded the spontaneous facial expressions of a group of participants as they experienced sadness during an emotion-elicitation task, which required participants to recall neutral and sad memories while listening to music. We then conducted a detailed analysis of their sad and neutral expressions using the Facial Action Coding System. The prototypical facial expressions of sadness reported in earlier studies were not observed when people experienced sadness as an internal state under non-social circumstances. Instead, participants expressed tension around the mouth, which might function as a form of suppression. Furthermore, some of these facial actions were related not only to sad experiences but also to other emotional experiences such as disgust, fear, anger, and happiness. This study raises the possibility that previously undescribed facial expressions contribute to the experience of sadness as an internal state.

19.
The claim that nonverbal signals are more important than verbal signals in the communication of affect is widely accepted and has had considerable impact on therapy, counselling, and education. In a typical experiment, subjects are presented with a long series of artificially constructed inconsistent messages (messages in which the verbal and nonverbal components are opposite in valence) and asked to judge the strength of the emotion felt by the encoder. In such studies, little attempt is made to camouflage the nature of the stimuli or the intent of the experimenter. In this study, it is argued that the absence of camouflage (defined as naturally occurring consistent messages) may bias the results in favour of the nonverbal dominance effect, such that as the level of camouflage is increased, the size of the effect decreases. Four groups of subjects (34 per group) rated a series of audiovisually presented messages. The level of camouflage varied between groups: 0% (all messages inconsistent), 50% (half consistent, half inconsistent), 83% (the majority consistent), and 94%. The results clearly demonstrated that the nonverbal dominance effect was present when the level of camouflage was low and disappeared when it was high. The implications of these findings for the nonverbal dominance hypothesis are discussed. This research was supported by a grant from the Australian Research Grants Scheme (Reference No. A78515618).

20.
Previous research has found that high emotional expressivity contributes to interpersonal attraction independently of, and on a par with, physical attractiveness. Using an evolutionary perspective, we argue that emotional expressivity can act as a marker of cooperative behavior or trustworthiness. We consider theoretical and empirical work from social dilemma research pointing to the advantages of having a signal for cooperation, as well as the limited number of studies that have examined expressive behavior within a social exchange context. We also argue for injecting nonverbal emotional behavior into the social dilemma paradigm, which has downplayed or ignored its role in the communication processes associated with cooperation. Finally, we offer an outline for testing our theory and for expanding the role of nonverbal emotional processes within research on cooperation and social exchange.
