Similar Documents
20 similar documents retrieved.
1.
Despite known differences in the acoustic properties of children’s and adults’ voices, no work to date has examined the vocal cues associated with emotional prosody in youth. The current study investigated whether child (n = 24, 17 female, aged 9–15) and adult (n = 30, 15 female, aged 18–63) actors differed in the vocal cues underlying their portrayals of basic emotions (anger, disgust, fear, happiness, sadness) and social expressions (meanness, friendliness). We also compared the acoustic characteristics of meanness and friendliness to comparable basic emotions. The pattern of distinctions between expressions varied as a function of age for voice quality and mean pitch. Specifically, adults’ portrayals of the various expressions were more distinct in mean pitch than children’s, whereas children’s representations differed more in voice quality than adults’. Given the importance of pitch variables for the interpretation of a speaker’s intended emotion, expressions generated by adults may thus be easier for listeners to decode than those of children. Moreover, the vocal cues associated with the social expressions of meanness and friendliness were distinct from those of basic emotions like anger and happiness respectively. Overall, our findings highlight marked differences in the ways in which adults and children convey socio-emotional expressions vocally, and expand our understanding of the communication of paralanguage in social contexts. Implications for the literature on emotion recognition are discussed.

2.
This study was designed to investigate the potential association between social anxiety and children's ability to decode nonverbal emotional cues. Participants were 62 children between 8 and 10 years of age, who completed self-report measures of social anxiety, depressive symptomatology, and nonspecific anxious symptomatology, as well as nonverbal decoding tasks assessing accuracy at identifying emotion in facial expressions and vocal tones. Data were analyzed with multiple regression analyses controlling for generalized cognitive ability, and nonspecific anxious and depressive symptomatology. Results provided partial support for the hypothesis that social anxiety would relate to nonverbal decoding accuracy. Difficulty identifying emotions conveyed in children's and adults' voices was associated with general social avoidance and distress. At higher levels of social anxiety, children more frequently mislabeled fearful voices as sad. Possible explanations for the obtained results are explored.

3.
The present study examined preschoolers' and adults' ability to identify and label the emotions of happiness, sadness, and anger when presented through either the face channel alone, the voice channel alone, or the face and voice channels together. Subjects were also asked to rate the intensity of the expression. The results revealed that children aged three to five years are able to accurately identify and label emotions of happy, sad, and angry regardless of channel presentation. Similar results were obtained for the adult group. While younger children (33 to 53 months of age) were equally accurate in identifying the three emotions, older children (54 to 68 months of age) and adults made more incorrect responses when identifying expressions of sadness. Intensity ratings also differed according to the age of the subject and the emotion being rated. Support for this research was provided by a grant from the National Science Foundation (#01523721) to Nathan A. Fox. The authors would like to thank Professor A. Caron for providing the original videotape, Joyce Dinsmoor for her help in data collection, and the staff of the Center for Young Children for their cooperation.

4.
When a chief executive officer or spokesperson responds to an organizational crisis, he or she communicates not only with verbal cues but also with visual and vocal cues. While most research in the area of crisis communication has focused on verbal cues (e.g., apologies, denial), this paper explores the relative importance of visual and vocal cues by spokespersons of organizations in crisis. Two experimental studies more specifically examined the impact of a spokesperson’s visual cues of deception (i.e., gaze aversion, posture shifts, adaptors), because sending a credible response is crucial in times of crisis. Each study focused on the interplay of these visual cues with two specific vocal cues that have also been linked to perceptions of deception (speech disturbances in study 1; voice pitch in study 2). Both studies show that visual cues of deception negatively affect both consumers’ attitudes towards the organization (study 1) and their purchase intentions (study 2) after a crisis. In addition, the findings indicate that in crisis communication, the impact of visual cues dominates the outcomes of vocal cues. In both studies, vocal cues only affected consumers’ perceptions when the spokesperson displayed visual cues of deception. More specifically, the findings show that crisis communication messages with speech disturbances (study 1) or a raised voice pitch (study 2) can negatively affect organizational post-crisis perceptions.

5.
The present study examined preschoolers' and adults' ability to identify and label the emotions of happiness, sadness, and anger when presented through either the face channel alone, the voice channel alone, or the face and voice channels together. Subjects were also asked to rate the intensity of the expression. The results revealed that children aged 3 to 5 years are able to accurately identify and label emotions of happy, sad, and angry regardless of channel presentation. Similar results were obtained for the adult group. While younger children (33 to 53 months of age) were equally accurate in identifying the three emotions, older children (54 to 68 months of age) and adults made more incorrect responses when identifying expressions of sadness. Intensity ratings also differed according to the age of the subject and the emotion being rated. Support for this research was provided by a grant from the National Science Foundation (#BNS8317229) to Nathan A. Fox. The research was also supported by a grant awarded to Nathan Fox from the National Institutes of Health (#R01MH/HD17899). The authors would like to thank Professor A. Caron for providing the original videotape, Joyce Dinsmoor for help in data collection, and the staff of the Center for Young Children for their cooperation.

6.
The study of emotion elicitation in the caregiver‐infant dyad has focused almost exclusively on the facial and vocal channels, whereas little attention has been given to the contribution of the tactile channel. This study was undertaken to investigate the effects of touch on infants' emotions. During the time that objects were presented to the dyad, mothers provided tactile stimulation to their 12‐month‐old infants by either (a) tensing their fingers around the infants' abdomen while abruptly inhaling, (b) relaxing their grip around the infants' abdomen, or (c) not providing additional tactile stimulation (control condition). The results revealed that infants in the first condition (a) touched the objects less, waited longer to touch the objects, and showed more negative emotional displays compared to infants in the control condition. However, no apparent differences were found between infants in the second condition (b) and the control condition. The results suggest that infants' emotions may be elicited by specific parameters of touch.

7.
The present study aimed to clarify how listeners decode emotions from human nonverbal vocalizations, exploring unbiased recognition accuracy of vocal emotions selected from the Montreal Affective Voices (MAV) (Belin et al. in Trends Cognit Sci 8:129–135, 2008. doi: 10.1016/j.tics.2004.01.008). The MAV battery includes 90 nonverbal vocalizations expressing anger, disgust, fear, pain, sadness, surprise, happiness, sensual pleasure, as well as neutral expressions, uttered by female and male actors. Using a forced-choice recognition task, 156 native speakers of Portuguese were asked to identify the emotion category underlying each MAV sound, and additionally to rate the valence, arousal and dominance of these sounds. The analysis focused on unbiased hit rates (Hu Score; Wagner in J Nonverbal Behav 17(1):3–28, 1993. doi: 10.1007/BF00987006), as well as on the dimensional ratings for each discrete emotion. Further, we examined the relationship between categorical and dimensional ratings, as well as the effects of speaker’s and listener’s sex on these two types of assessment. Surprise vocalizations were associated with the poorest accuracy, whereas happy vocalizations were the most accurately recognized, contrary to previous studies. Happiness was associated with the highest valence and dominance ratings, whereas fear elicited the highest arousal ratings. Recognition accuracy and dimensional ratings of vocal expressions were dependent both on speaker’s sex and listener’s sex. Further, discrete vocal emotions were not consistently predicted by dimensional ratings. Using a large sample size, the present study provides, for the first time, unbiased recognition accuracy rates for a widely used battery of nonverbal vocalizations. The results demonstrated a dynamic interplay between listener’s and speaker’s variables (e.g., sex) in the recognition of emotion from nonverbal vocalizations. Further, they support the use of both categorical and dimensional accounts of emotion when probing how emotional meaning is decoded from nonverbal vocal cues.

8.
Studies with socially anxious adults suggest that social anxiety is associated with problems in decoding other persons' facial expressions of emotions. Corresponding studies with socially anxious children are lacking. The aim of the present study was to test whether socially phobic children show deficits in classifying facial expressions of emotions or show a response bias for negative facial expressions. Fifty socially anxious and 25 socially non-anxious children (age 8 to 12) participated in the study. Pictures of faces with either neutral, positive (joyful) or negative (angry, disgusted, sad) facial expressions (24 per category) were presented for 60 ms on a monitor screen in random order. The children were asked to indicate by pressing a key whether the facial expression was neutral, positive, or negative, and to rate how confident they were about their classification. With regard to frequency of errors, the socially anxious children reported significantly more often that they saw emotions when neutral faces were presented. Moreover, their reaction times were longer. However, they did not feel less certain about their performance. There was neither an indication of an enhanced ability to decode negative facial expressions in socially anxious children, nor a specific tendency to interpret neutral or positive faces as negative.

9.
Debates about children's mental health problems have raised questions about the reliability and validity of diagnosis and treatment. However, little research has focused on social reactions to children with mental health problems. This gap in research raises questions about competing theories of stigma, as well as specific factors shaping prejudice and discrimination toward those children. Here, we organize a general model of stigma that synthesizes previous research. We apply a reduced version of this model to data from a nationally representative sample responding to vignettes depicting several stigmatizing scenarios, including attention-deficit/hyperactivity disorder (ADHD), depression, asthma, or "normal troubles." Results from the National Stigma Study-Children suggest a gradient of rejection from highest to lowest, as follows: ADHD, depression, "normal troubles," and physical illness. Stigmatizing reactions are highest toward adolescents. Importantly, respondents who label the vignette child's situation as a mental illness, compared to those who label the problem as a physical illness or a "normal" situation, report greater preferences for social distance, a pattern that appears to result from perceptions that the child is dangerous.

10.
Previous research has suggested that the ability to recognize vocal portrayals of socio-emotional expressions improves with age throughout childhood and adolescence. The current study examined whether stimulus-level factors (i.e., the age of the speaker and the type of expression being conveyed) interacted with listeners’ developmental stage to predict listeners’ recognition accuracy. We assessed mid-adolescent (n = 50, aged 13–15 years) and adult (n = 87, aged 18–30 years) listeners’ ability to recognize basic emotions and social expressions in the voices of both adult and youth actors. Adults’ emotional prosody was better recognized than that of youth, and adult listeners were more accurate overall than were mid-adolescents. Interaction effects revealed that youths’ accuracy was equivalent to adult listeners’ when hearing adult portrayals of anger, disgust, friendliness, happiness, and meanness, and youth portrayals of disgust, happiness, and meanness. Our findings highlight the importance of speaker characteristics and type of expression for listeners’ ability to recognize vocal cues of emotion and social intent.

11.
This study examined gender differences in self-reported and observed conversations about sexual issues. Fifty mother–adolescent dyads reported on their conversations about sexual issues and participated in videotaped conversations about dating and sexuality in a laboratory setting. Gender differences (more mother–daughter than mother–son) were found in the extent of sexual communication based on adolescents’ reports, but no gender differences were found based on mothers’ reports, or on observations of conversations. Aspects of laboratory interactions, however, did distinguish mother–daughter and mother–son dyads, and related to self-report measures. Girls’ reported sexuality communication frequency related to behavior in the laboratory setting. During mother–son conversations, one person usually took on the role of questioner, whereas the other did not. In contrast, there was evidence for mutuality of positive emotions for mother–daughter dyads, but not for mother–son dyads.

12.
Preschool-age children drew, decoded, and encoded facial expressions depicting five different emotions. Each child was rated by two teachers on measures of school adjustment. Facial expressions encoded by the children were decoded by college undergraduates and the children's parents. Results were as follows: (1) accuracy of drawing, decoding and encoding each of the five emotions was consistent across the three tasks; (2) decoding ability was correlated with drawing ability among female subjects, but neither of these abilities was correlated with encoding ability; (3) decoding ability increased with age, while encoding ability increased with age among females and slightly decreased among males; (4) parents decoded facial expressions of their own children better than facial expressions of other children, and female parents were better decoders than male parents; (5) children's adjustment to school was related to their encoding and decoding skills and to their mothers' decoding skills; (6) children with better decoding skills were rated as being better adjusted by their parents. The authors gratefully acknowledge the assistance of the Greater Rochester Jewish Community Center, Early Childhood Department, and the parents of the participating children in the completion of this study. Special thanks to Sandra Walter, Director of the Early Childhood Department, for her cooperation and support during all stages of the project.

13.
This is a [de]-composed conversation between the author and a few feminist writers about how a mother’s resistance to interpellating gender is an echoed call to resistance against society’s interpellation of mother. Alongside the words of a few selected 20th- and 21st-century writers, as well as a brief personal and a clinical vignette, the author illustrates ways in which the bad enough mother survives the relentless scrutiny of society, starting with experts like doctors, sociologists, politicians, and psychoanalysts. In turn, she teaches her children survival of the very interpellative tenets of culture that tried to funnel her into a binary-gendered social order.

14.
The aim of this research was to analyze the main vocal cues and strategies used by a liar. Thirty-one male university students were asked to raise doubts in an expert in law about a picture. The subjects were required to describe the picture in three experimental conditions: telling the truth (T) and lying to a speaker who was either acquiescent (L1) or suspicious (L2). The utterances were then subjected to a digitized acoustic analysis in order to measure nonverbal vocal variables. Verbal variables were also analyzed (number of words, eloquency and disfluency indexes). Results showed that deception provoked an increment in F0, a greater number of pauses and words, and higher eloquency and fluency indexes. The F0 related to the two types of lie (prepared and unprepared) identified three classes of liars: good liars, tense liars (more numerous in L1), and overcontrolled liars (more numerous in L2). It is argued that these differences are correlated with the complex task of lying and the need to control one's emotions during deception. The liar's effort to control his/her voice, however, can lead to his/her tone being overcontrolled or totally lacking in control (leakage). Finally, the research offers an explanation of the strategies used by the good liar and, in particular, addresses the self-deception hypothesis.

15.
This study examined lay persons' emotional reactions to abuse, with special attention to two types of disease: Alzheimer's disease and osteoporosis. A total of 169 adults (mean age = 60) were interviewed face-to-face using a vignette methodology. Although the majority of the participants found the vignette to describe a situation of abuse, one-quarter did not consider it an abusive situation. The person described in the vignette elicited more positive than negative emotions, with a high percentage of participants expressing sympathy, desire to help, and concern. The various emotional reactions to abuse are associated with different correlates.

16.
Eighty-two younger and older adults participated in a two-part study of the decoding of emotion through body movements and gestures. In the first part, younger and older adults identified emotions depicted in brief videotaped displays of young adult actors portraying emotional situations. In each display, the actors were silent and their faces were electronically blurred in order to isolate the body cues to emotion. Although both groups made accurate emotion identifications well above chance levels, older adults made more overall errors, and this was especially true for negative emotions. Moreover, their errors were more likely to reflect the misidentification of emotional displays as neutral in content. In the second part, younger and older adults rated the videotaped displays using scales reflecting several movement dimensions (e.g., form, tempo, force, and movement). The ratings of both age groups were in high agreement and provided reliable information about particular body cues to emotion. The errors made by older adults were linked to reactions to exaggerated or ambiguous body cues.

17.
18.
Twenty-five high-functioning, verbal children and adolescents with autism spectrum disorders (ASD; age range 8–15 years) who demonstrated a facial emotion recognition deficit were block randomized to an active intervention (n = 12) or waitlist control (n = 13) group. The intervention was a modification of a commercially available, computerized, dynamic facial emotion training tool, the MiX by Humintell©. Modifications were introduced to address the special learning needs of individuals with ASD and to address limitations in current emotion recognition programs. Modifications included coach assistance and a combination of didactic instruction for seven basic emotions, scaffolded instruction that included repeated practice with increased presentation speeds, guided attention to relevant facial cues, and imitation of expressions. Training occurred twice each week for 45–60 min across an average of six sessions. Outcome measures were administered prior to and immediately after treatment, as well as after a delay period of 4–6 weeks. Outcome measures included (a) direct assessment of facial emotion recognition, (b) emotion self-expression, and (c) generalization through emotion awareness in videos and stories, use of emotion words, and self-, parent-, and teacher-report on social functioning questionnaires. The facial emotion training program enabled children and adolescents with ASD to identify feelings in facial expressions more accurately and quickly, with stimuli from both the training tool and the generalization measures, and to demonstrate improved self-expression of facial emotion.

19.
20.
New legislation in England and Wales requires adoption agencies to specify the contact adopted children will have with their birth families and obliges agencies to offer all parties support to maintain such contact. This study, based on responses from 112 adoption social workers in England and Wales, used a case vignette methodology to explore workers' attitudes towards supporting post-adoption contact. The findings suggest that social workers think primarily about the child's needs and about providing services to or on behalf of the child. In contrast, adults' needs, especially the relationship between birth parents and adoptive parents, were less often considered. Workers were least orientated toward supporting birth parents. Workers from different agencies had very different attitudes towards the case vignette. This study suggests that supporting post-adoption contact is a complex professional task likely to be influenced by workers' own attitudes. Implications for training, support and supervision are discussed.
