Similar Articles
1.
This article introduces the Children’s Scales of Pleasure and Arousal as instruments to enable children to provide judgments of emotions they witness or experience along the major dimensions of affect. In two studies (Study 1: N = 160, 3–11 years and adults; Study 2: N = 280, 3–5 years and adults), participants used the scales to indicate the levels of pleasure or arousal they perceived in stylized drawings of facial expressions, in photographs of facial expressions, or in emotion labels. All age groups used the Pleasure Scale reliably and accurately with all three types of stimuli. All used the Arousal Scale with stylized faces and with facial expressions, but only 5-year-olds did so for emotion labels.

2.
Cross-cultural and laboratory research indicates that some facial expressions of emotion are recognized more accurately and faster than others. We assessed the hypothesis that such differences depend on the frequency with which each expression occurs in social encounters. Thirty observers recorded how often they saw different facial expressions under natural conditions in their daily life. For a total of 90 days (3 days per observer), 2,462 samples of seen expressions were collected. Among the basic expressions, happy faces were observed most frequently (31%), followed by surprised (11.3%), sad (9.3%), angry (8.7%), disgusted (7.2%), and fearful faces, which were the least frequent (3.4%). A substantial proportion (29%) of the observed expressions were non-basic emotional expressions (e.g., pride or shame). We correlated our frequency data with recognition accuracy and response latency data from prior studies. In support of the hypothesis, significant correlations (generally above .70) emerged, with recognition accuracy increasing and latency decreasing as a function of frequency. We conclude that the efficiency of facial emotion recognition is modulated by the familiarity of the expressions.
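
A minimal sketch of the frequency-by-performance correlation described above, assuming hypothetical recognition accuracy and latency values; the frequency percentages are the ones reported in the abstract, everything else is a placeholder:

```python
# Hedged sketch: correlating observed expression frequency with recognition performance.
# The frequency percentages come from the abstract above; the accuracy and latency
# values are hypothetical placeholders, NOT data from the cited prior studies.
import numpy as np
from scipy.stats import pearsonr

expressions = ["happy", "surprised", "sad", "angry", "disgusted", "fearful"]
frequency = np.array([31.0, 11.3, 9.3, 8.7, 7.2, 3.4])      # % of observed samples (from abstract)
accuracy  = np.array([0.95, 0.88, 0.80, 0.78, 0.72, 0.65])  # hypothetical recognition accuracy
latency   = np.array([650, 800, 900, 920, 980, 1100])       # hypothetical response latency (ms)

r_acc, p_acc = pearsonr(frequency, accuracy)
r_lat, p_lat = pearsonr(frequency, latency)
print(f"frequency vs. accuracy: r = {r_acc:.2f} (p = {p_acc:.3f})")
print(f"frequency vs. latency:  r = {r_lat:.2f} (p = {p_lat:.3f})")
```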

3.
Despite known differences in the acoustic properties of children’s and adults’ voices, no work to date has examined the vocal cues associated with emotional prosody in youth. The current study investigated whether child (n = 24, 17 female, aged 9–15) and adult (n = 30, 15 female, aged 18–63) actors differed in the vocal cues underlying their portrayals of basic emotions (anger, disgust, fear, happiness, sadness) and social expressions (meanness, friendliness). We also compared the acoustic characteristics of meanness and friendliness to comparable basic emotions. The pattern of distinctions between expressions varied as a function of age for voice quality and mean pitch. Specifically, adults’ portrayals of the various expressions were more distinct in mean pitch than children’s, whereas children’s representations differed more in voice quality than adults’. Given the importance of pitch variables for the interpretation of a speaker’s intended emotion, expressions generated by adults may thus be easier for listeners to decode than those of children. Moreover, the vocal cues associated with the social expressions of meanness and friendliness were distinct from those of basic emotions like anger and happiness respectively. Overall, our findings highlight marked differences in the ways in which adults and children convey socio-emotional expressions vocally, and expand our understanding of the communication of paralanguage in social contexts. Implications for the literature on emotion recognition are discussed.

4.
The facial feedback hypothesis states that facial actions modulate subjective experiences of emotion. Using the voluntary facial action technique, in which participants react with instruction-induced smiles and frowns when exposed to positive and negative emotional pictures and then rate the pleasantness of these stimuli, four questions were addressed in the present study. The results of Experiment 1 demonstrated a feedback effect: participants experienced the stimuli as more pleasant during smiling than during frowning. However, this effect was present only during the critical actions of smiling and frowning, with no remaining effects after 5 min or after 1 day. In Experiment 2, feedback effects were found only when the facial action (smile/frown) was incongruent with the presented emotion (positive/negative), demonstrating attenuating but not enhancing modulation. Finally, no difference in the intensity of the feedback effect was found between smiling and frowning, and no difference in the feedback effect was found between positive and negative emotions. In conclusion, facial feedback appears to occur mainly during actual facial actions and primarily to attenuate ongoing emotional states.

5.
Previous research has suggested that the ability to recognize vocal portrayals of socio-emotional expressions improves with age throughout childhood and adolescence. The current study examined whether stimulus-level factors (i.e., the age of the speaker and the type of expression being conveyed) interacted with listeners’ developmental stage to predict listeners’ recognition accuracy. We assessed mid-adolescent (n = 50, aged 13–15 years) and adult (n = 87, 18–30 years) listeners’ ability to recognize basic emotions and social expressions in the voices of both adult and youth actors. Adults’ emotional prosody was better recognized than that of youth, and adult listeners were more accurate overall than were mid-adolescents. Interaction effects revealed that youths’ accuracy was equivalent to adult listeners’ when hearing adult portrayals of anger, disgust, friendliness, happiness, and meanness, and youth portrayals of disgust, happiness, and meanness. Our findings highlight the importance of speaker characteristics and type of expression on listeners’ ability to recognize vocal cues of emotion and social intent.

6.
Human social interaction is enriched with synchronous movement, which is said to be essential to establish interactional flow. One commonly investigated phenomenon in this regard is facial mimicry, the tendency of humans to mirror facial expressions. Because studies investigating facial mimicry in face-to-face interactions are lacking, the temporal dynamics of facial mimicry remain unclear. We therefore developed and tested the suitability of a novel approach to quantifying facial expression synchrony in face-to-face interactions: windowed cross-lagged correlation analysis (WCLC) for electromyography signals. We recorded muscle activations related to smiling (zygomaticus major) and frowning (corrugator supercilii) of two interaction partners simultaneously in 30 dyadic affiliative interactions. We expected WCLC to reliably detect facial expression synchrony above chance level and, based on previous research, expected the occurrence of rapid synchronization of smiles within 200 ms. WCLC significantly detected synchrony of smiling, but not frowning, compared to a control condition of chance-level synchrony in six different interactional phases (smiling: dz = .85–1.11; frowning: dz = .01–.30). Synchronizations of smiles between interaction partners predominantly occurred within 1000 ms, with a significant proportion occurring within 200 ms. This rapid synchronization of smiles supports the notion of an anticipated mimicry response for smiles. We conclude that WCLC is suited to quantify the temporal dynamics of facial expression synchrony in dyadic interactions and discuss implications for different psychological research areas.
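
A minimal sketch of windowed cross-lagged correlation (WCLC) between two signals, assuming synthetic data and illustrative window, step, and maximum-lag settings; the study's actual parameters and EMG preprocessing are not given in the abstract:

```python
# Hedged sketch of windowed cross-lagged correlation between two signals.
# Window length, step, and maximum lag are illustrative assumptions.
import numpy as np

def wclc(x, y, win, step, max_lag):
    """Windowed cross-lagged Pearson correlation between signals x and y.
    Returns an array of shape (n_windows, 2*max_lag + 1); column j holds the
    correlation at lag (j - max_lag) samples (positive lag = y follows x)."""
    lags = np.arange(-max_lag, max_lag + 1)
    starts = np.arange(max_lag, len(x) - win - max_lag, step)
    r = np.zeros((len(starts), len(lags)))
    for i, s in enumerate(starts):
        xw = x[s:s + win]
        for j, lag in enumerate(lags):
            r[i, j] = np.corrcoef(xw, y[s + lag:s + lag + win])[0, 1]
    return r

# Toy example: y is a noisy copy of x delayed by 20 samples (e.g., 200 ms at 100 Hz).
rng = np.random.default_rng(0)
x = rng.normal(size=3000)
y = np.roll(x, 20) + 0.5 * rng.normal(size=3000)
r = wclc(x, y, win=200, step=50, max_lag=50)
print("peak lag (samples):", r.mean(axis=0).argmax() - 50)
```

In this toy example, `y` is a delayed copy of `x`, so the lag profile averaged over windows peaks near the imposed delay, which is the kind of pattern interpreted as rapid synchronization in the abstract.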

7.
This paper reports on the cross-validation of the Gambling Problem Severity Subscale of the Canadian Adolescent Gambling Index (CAGI/GPSS). The CAGI/GPSS was included in a large school-based drug use and health survey conducted in 2015. Data were drawn from students in grades 9–12 (ages 13–20 years; N = 3,369). The CAGI/GPSS produced an alpha of 0.789. A principal component analysis revealed two eigenvalues greater than one, and an oblique rotation showed these components to represent consequences and over-involvement. The CAGI/GPSS indicated that 1% of the students fell into the “red” category, indicating a severe problem, and an additional 3.3% scored in the “yellow” category, indicating low to moderate problems. The CAGI/GPSS was significantly correlated with gambling frequency (r = 0.36), largest expenditure (r = 0.37), sex (males more likely to score higher; r = −0.19), lower school marks (r = −0.07), hazardous drinking (r = 0.16), problem video game play (r = 0.16), and substance abuse. The CAGI/GPSS was cross-validated against a shortened version of the SOGS (short SOGS), r = 0.48. In addition, the CAGI/GPSS and short SOGS produced very similar patterns of correlations. The results support the validity and reliability of the CAGI/GPSS as a measure of gambling problems among adolescents.
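
A hedged sketch of the two reliability/structure checks mentioned above (Cronbach's alpha and a count of eigenvalues greater than one), run on simulated item responses rather than the survey data:

```python
# Hedged sketch: internal consistency (Cronbach's alpha) and a Kaiser-style
# eigenvalue check. The item responses below are simulated placeholders,
# not the CAGI/GPSS survey data described in the abstract.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(3369, 1))                     # one underlying trait
items = latent + rng.normal(scale=1.0, size=(3369, 9))  # 9 correlated items

print("alpha =", round(cronbach_alpha(items), 3))

# Eigenvalues of the item correlation matrix; count components with eigenvalue > 1.
eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))
print("eigenvalues > 1:", int(np.sum(eigvals > 1)))
```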

8.
This study examined gender, age, and task differences in positive touch and physical proximity during mother–child and father–child conversations. Sixty-five Spanish mothers and fathers and their 4- (M = 53.50 months, SD = 3.54) and 6-year-old (M = 77.07 months, SD = 3.94) children participated in this study. Positive touch was examined during a play-related storytelling task and a reminiscence task (conversation about past emotions). Fathers touched their children positively more frequently during the play-related storytelling task than did mothers. Both mothers and fathers were in closer proximity to their 6-year-olds than their 4-year-olds. Mothers and fathers touched their children positively more frequently when reminiscing than when playing. Finally, 6-year-olds remained closer to their parents than did 4-year-olds. Implications of these findings for future research on children’s socioemotional development are discussed.

9.
Sex, age, and education differences in facial affect recognition were assessed within a large sample (n = 7,320). Results indicate superior performance by females and younger individuals in the correct identification of facial emotion, with the largest advantage for low-intensity expressions. Though there were no demographic differences in identification accuracy for neutral faces, controlling for the tendency of males and older individuals to label faces as neutral revealed sex and age differences for these items as well. This finding suggests that inferior facial affect recognition performance by males and older individuals may be driven primarily by instances in which they fail to detect the presence of emotion in facial expressions. Older individuals also demonstrated a greater tendency to label faces with negative emotion choices, while females exhibited a response bias toward the sad and fear labels. These response biases have implications for understanding demographic differences in facial affect recognition.

10.
Problem gambling rates in older adults have risen dramatically in recent years and require further investigation. Limited available research has suggested that social needs may motivate gambling and hence problem gambling in older adults. Un-partnered older adults may be at greater risk of problem gambling than those with a partner. The current study explored whether loneliness mediated the marital status–problem gambling relationship, and whether gender moderated the mediation model. It was hypothesised that the relationship between being un-partnered and higher levels of loneliness would be stronger for older men than older women. A community sample of Australian men (n = 92) and women (n = 91) gamblers aged from 60 to 90 years (M = 69.75, SD = 7.28) completed the UCLA Loneliness Scale and the Problem Gambling Severity Index. The results supported the moderated mediation model, with loneliness mediating the relationship between marital status and problem gambling for older men but not for older women. It appears that felt loneliness is an important predictor of problem gambling in older adults, and that meeting the social and emotional needs of un-partnered men is important.
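
A rough sketch of a first-stage moderated mediation of the kind described above (gender moderating the path from being un-partnered to loneliness), using simulated data and a bootstrap for the conditional indirect effects; the variable coding and effect sizes are illustrative assumptions, not the study's analysis:

```python
# Hedged sketch of moderated mediation estimated by OLS, with a bootstrap CI
# for the conditional indirect effects. All data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(42)
n = 183
unpartnered = rng.integers(0, 2, n)          # X: 1 = un-partnered
male = rng.integers(0, 2, n)                 # W: 1 = male (moderator of the a-path)
loneliness = 0.6 * unpartnered * male + 0.2 * unpartnered + rng.normal(size=n)  # M
gambling = 0.5 * loneliness + rng.normal(size=n)                                # Y

def ols(y, *cols):
    """Least-squares coefficients: intercept first, then one per column."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

def conditional_indirect(idx):
    x, w, m, y = unpartnered[idx], male[idx], loneliness[idx], gambling[idx]
    a = ols(m, x, w, x * w)          # a-path model: intercept, a1, a2, a3
    b = ols(y, m, x)[1]              # b-path (M -> Y, controlling for X)
    return ((a[1] + a[3] * 0) * b,   # indirect effect for women (W = 0)
            (a[1] + a[3] * 1) * b)   # indirect effect for men   (W = 1)

boot = np.array([conditional_indirect(rng.integers(0, n, n)) for _ in range(2000)])
for label, col in (("women", 0), ("men", 1)):
    lo, hi = np.percentile(boot[:, col], [2.5, 97.5])
    print(f"indirect effect ({label}): 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```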

11.
Journal of Nonverbal Behavior - Although emotion expressions are typically dynamic and include the whole person, much emotion recognition research uses static, posed facial expressions. In this...

12.
Ethnic bias in the recognition of facial expressions
Ethnic bias in the recognition of facial expressions was assessed by having college students from the United States and Zambia assign emotion labels to facial expressions produced by imitation by United States and Zambian students. Bidirectional ethnic bias was revealed by the fact that Zambian raters labeled the Zambian facial expressions with less uncertainty than the U.S. facial expressions, and that U.S. raters labeled the U.S. facial expressions with less uncertainty than the Zambian facial expressions. In addition, the Facial Action Coding System was used to assess accuracy in the imitation of facial expressions. These results and the results of other analyses of recognition accuracy are reported. Portions of this paper were presented at the annual meeting of the Society for Cross-Cultural Research, Philadelphia, Pennsylvania, February 1980 (Note 1).

13.
One of the most prevalent problems in face transplant patients is an inability to generate facial expression of emotions. The purpose of this study was to measure the subjective recognition of patients’ emotional expressions by other people. We examined facial expression of six emotions in two facial transplant patients (patient A = partial, patient B = full) and one healthy control using video clips to evoke emotions. We recorded target subjects’ facial expressions with a video camera while they were watching the clips. These were then shown to a panel of 130 viewers and rated in terms of degree of emotional expressiveness on a 7-point Likert scale. The scores for emotional expressiveness were higher for the healthy control than they were for patients A and B, and these varied as a function of emotion. The most recognizable emotion was happiness. The least recognizable emotions in Patient A were fear, surprise, and anger. The expressions of Patient B scored lower than those of Patient A and the healthy control. The findings show that partial and full-face transplant patients may have difficulties in generating facial expression of emotions even if they can feel those emotions, and different parts of the face seem to play critical roles in different emotional expressions.

14.
Physical attractiveness is suggested to be an indicator of biological quality and should therefore be stable. However, transient factors such as gaze direction and facial expression affect facial attractiveness, suggesting that it is not. We compared the relative importance of variation between faces with variation within faces due to facial expressions. A total of 128 participants viewed photographs of 14 men and 16 women displaying the six basic facial expressions (anger, disgust, fear, happiness, sadness, surprise) and a neutral expression. Each rater saw each model only once with a randomly chosen expression. The effect of expressions on attractiveness was similar in male and female faces, although several expressions were not significantly different from each other. Identity was 2.2 times as important as emotion in attractiveness for both male and female pictures, suggesting that attractiveness is stable. Since the hard tissues of the face are unchangeable, people may still be able to perceive facial structure whatever expression the face is displaying, and still make attractiveness judgements based on structural cues.

15.
The perception of emotional facial expressions may activate corresponding facial muscles in the receiver, a phenomenon referred to as facial mimicry. Facial mimicry is highly dependent on the context and type of facial expressions. While previous research almost exclusively investigated mimicry in response to pictures or videos of emotional expressions, studies with a real, face-to-face partner are still rare. Here we compared facial mimicry of angry, happy, and sad expressions and emotion recognition in a dyadic face-to-face setting. In sender-receiver dyads, we recorded facial electromyograms in parallel. Senders communicated to the receivers, using facial expressions only, the emotions felt during specific personal situations in the past, eliciting anger, happiness, or sadness. Receivers mimicked happiness most, sadness to a lesser degree, and anger least. In actor-partner interdependence models we showed that the receivers’ own facial activity influenced their ratings, which increased the agreement between the senders’ and receivers’ ratings for happiness, but not for angry and sad expressions. These results are in line with the Emotion Mimicry in Context View, holding that humans mimic happy expressions in accordance with affiliative intentions. The mimicry of sad expressions is less intense, presumably because it signals empathy and might imply personal costs. Direct anger expressions are mimicked the least, possibly because anger communicates threat and aggression. Taken together, we show that incidental facial mimicry in a face-to-face setting is positively related to recognition accuracy for non-stereotypical happy expressions, supporting the functionality of facial mimicry.
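
A small sketch of an actor-partner interdependence model fitted as a linear mixed model with dyad as the grouping factor, using simulated dyadic data; the variable names and the mixed-model formulation are assumptions for illustration, not the authors' exact specification:

```python
# Hedged sketch of an actor-partner interdependence model (APIM): each person's
# emotion rating is regressed on their own (actor) and their partner's facial
# EMG activity, with dyad as a random grouping factor. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
rows = []
for d in range(30):                                   # 30 dyads, two persons each
    emg = rng.normal(size=2)                          # smile-related EMG per partner
    dyad_effect = rng.normal(scale=0.5)               # shared dyad-level variation
    for person in (0, 1):
        actor, partner = emg[person], emg[1 - person]
        rating = 0.5 * actor + 0.3 * partner + dyad_effect + rng.normal(scale=0.5)
        rows.append({"dyad": d, "actor_emg": actor, "partner_emg": partner, "rating": rating})

df = pd.DataFrame(rows)
apim = smf.mixedlm("rating ~ actor_emg + partner_emg", df, groups=df["dyad"]).fit()
print(apim.summary())
```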

16.
This study examined the association between work–family conflict and couple relationship quality. We conducted a meta-analytic review of 49 samples from 33 papers published between 1986 and 2014. The results indicated a significant negative relationship between work–family conflict and couple relationship quality (r = −.19, k = 49). Several moderators were included in this analysis: gender, region, parental status, dual-earner status, and the measures used for the work–family conflict and marital quality variables. The strength of the relationship varied based on the region of the sample: samples from Europe and Asia had a significantly weaker relationship between work–family conflict and relationship quality than those from North America. In addition, the relationship was significantly weaker in samples of dual-earner couples and when non-standardized scales were used. Implications of the results and directions for future research are suggested.
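
A minimal sketch of a random-effects meta-analysis of correlations (Fisher z transform with a DerSimonian-Laird estimate of between-sample variance); the per-sample correlations and sample sizes below are hypothetical placeholders, not the 49 samples analyzed in the review:

```python
# Hedged sketch: random-effects (DerSimonian-Laird) meta-analysis of correlations
# via Fisher's z transform. r values and sample sizes are hypothetical placeholders.
import numpy as np

r = np.array([-0.25, -0.18, -0.30, -0.10, -0.22, -0.15])  # hypothetical per-sample correlations
n = np.array([120, 300, 85, 410, 200, 150])               # hypothetical sample sizes

z = np.arctanh(r)               # Fisher z transform
v = 1.0 / (n - 3)               # within-sample variance of z
w = 1.0 / v                     # fixed-effect weights

# DerSimonian-Laird estimate of between-sample variance tau^2
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(r) - 1)) / c)

w_re = 1.0 / (v + tau2)          # random-effects weights
z_re = np.sum(w_re * z) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
lo, hi = np.tanh(z_re - 1.96 * se), np.tanh(z_re + 1.96 * se)
print(f"pooled r = {np.tanh(z_re):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```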

17.
This study examined whether distinct subgroups could be identified among a sample of non-treatment-seeking problem and pathological/disordered gamblers (PG) using Blaszczynski and Nower’s (Addiction 97:487–499, 2002) pathways model (N = 150, 50% female). We examined coping motives for gambling, childhood trauma, boredom proneness, risk-taking, impulsivity, attention-deficit/hyperactivity disorder (ADHD), and antisocial personality disorder as defining variables in a hierarchical cluster analysis to identify subgroups. Subgroup differences in gambling, psychiatric, and demographic variables were also assessed to establish concurrent validity. Consistent with the pathways model, our analyses identified three gambling subgroups: (1) behaviorally conditioned (BC), (2) emotionally vulnerable (EV), and (3) antisocial-impulsivist (AI) gamblers. BC gamblers (n = 47) reported the lowest levels of lifetime depression, anxiety, gambling severity, and interest in problem gambling treatment. EV gamblers (n = 53) reported the highest levels of childhood trauma, motivation to gamble to cope with negative emotions, gambling-related suicidal ideation, and family history of gambling problems. AI gamblers (n = 50) reported the highest levels of antisocial personality disorder and ADHD symptoms, as well as higher rates of impulsivity and risk-taking than EV gamblers. The findings provide evidence for the validity of the pathways model as a framework for conceptualizing PG subtypes in a non-treatment-seeking sample, and underscore the importance of tailoring treatment approaches to meet the respective clinical needs of these subtypes.
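
A brief sketch of hierarchical cluster analysis on standardized defining variables, cut at three clusters to mirror the three pathways subgroups; the data are random placeholders, and Ward linkage is an assumption since the abstract does not name the linkage method:

```python
# Hedged sketch: Ward hierarchical clustering on standardized defining variables,
# cut at three clusters. The data are random placeholders, not the gambler sample.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(3)
# columns (illustrative): coping motives, childhood trauma, boredom proneness,
# risk-taking, impulsivity, ADHD symptoms, antisocial personality features
X = rng.normal(size=(150, 7))

Z = linkage(zscore(X, axis=0), method="ward")
clusters = fcluster(Z, t=3, criterion="maxclust")
print("subgroup sizes:", np.bincount(clusters)[1:])
```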

18.
The present studies examined how sensitivity to spatiotemporal percepts such as rhythm, angularity, configuration, and force predicts accuracy in perceiving emotion. In Study 1, participants (N = 99) completed a nonverbal test battery consisting of three nonverbal emotion perception tests and two perceptual sensitivity tasks assessing rhythm sensitivity and angularity sensitivity. Study 2 (N = 101) extended the findings of Study 1 with the addition of a fourth nonverbal test, a third configural sensitivity task, and a fourth force sensitivity task. Regression analyses across both studies revealed partial support for the association between perceptual sensitivity to spatiotemporal percepts and greater emotion perception accuracy. Results indicate that accuracy in perceiving emotions may be predicted by sensitivity to specific percepts embedded within channel- and emotion-specific displays. The significance of such research lies in the understanding of how individuals acquire emotion perception skill and the processes by which distinct features of percepts are related to the perception of emotion.

19.

Background

Since age-related loss of muscle strength cannot be explained solely by muscle atrophy, other determinants must also contribute to muscle strength in the elderly. The present study aimed to clarify the contribution of the neuromuscular activation pattern to muscle strength in an elderly group. In 88 elderly participants (age: 61–83 years), multi-channel surface electromyography (EMG) of the vastus lateralis muscle was recorded with a two-dimensional grid of 64 electrodes during isometric submaximal ramp-up knee extension to assess the neuromuscular activation pattern. Correlation analysis and stepwise regression analysis were performed between muscle strength and parameters describing signal amplitude and the spatial distribution pattern, i.e., root mean square (RMS), correlation coefficient, and modified entropy of the multi-channel surface EMG.

Results

There was a significant correlation between muscle strength and RMS (r = 0.361, p = 0.001) in the elderly. Muscle thickness (r = 0.519, p < 0.001), RMS (r = 0.288, p = 0.001), and normalized RMS (r = 0.177, p = 0.047) were selected as major determinants of muscle strength in the stepwise regression analysis (r = 0.664 for the selected model).

Conclusion

These results suggest that inter-individual differences in muscle strength in the elderly can be partly explained by surface EMG amplitude. We conclude that the neuromuscular activation pattern, in addition to indicators of muscle volume, is a major determinant of muscle strength in the elderly.
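
A hedged sketch of how the amplitude and spatial-distribution parameters named in the Background (per-channel RMS and a modified entropy of the RMS map) might be computed and related to strength; the entropy definition used here (Shannon entropy of the normalized squared channel RMS) is one common convention and, like the simulated data, an assumption rather than the study's exact procedure:

```python
# Hedged sketch: per-channel RMS and a "modified entropy" of the spatial RMS
# distribution from a simulated 64-channel EMG grid, correlated with simulated
# strength scores. Definitions and data are illustrative assumptions.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects, n_channels, n_samples = 88, 64, 2000

def emg_features(emg):
    """emg: (n_channels, n_samples) surface EMG. Returns mean RMS and modified entropy."""
    rms = np.sqrt(np.mean(emg ** 2, axis=1))      # per-channel RMS amplitude
    p = rms ** 2 / np.sum(rms ** 2)               # normalized squared RMS per channel
    entropy = -np.sum(p * np.log(p))              # spatial ("modified") entropy
    return rms.mean(), entropy

strength = rng.normal(50, 10, n_subjects)         # simulated knee-extension strength
features = np.array([
    emg_features(rng.normal(scale=0.5 + 0.01 * strength[s], size=(n_channels, n_samples)))
    for s in range(n_subjects)
])

for name, col in (("mean RMS", 0), ("modified entropy", 1)):
    r, p = pearsonr(features[:, col], strength)
    print(f"{name} vs. strength: r = {r:.2f}, p = {p:.3f}")
```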

20.
Adults’ perceptions provide information about the emotional meaning of infant facial expressions. This study asks whether similar facial movements influence adult perceptions of emotional intensity in both infant positive (smile) and negative (cry face) facial expressions. Ninety-five college students rated a series of naturally occurring and digitally edited images of infant facial expressions. Naturally occurring smiles and cry faces involving the co-occurrence of greater lip movement, mouth opening, and eye constriction were rated as expressing stronger positive and negative emotion, respectively, than expressions without these three features. Ratings of digitally edited expressions indicated that eye constriction contributed to higher ratings of positive emotion in smiles (i.e., in Duchenne smiles) and greater eye constriction contributed to higher ratings of negative emotion in cry faces. Stronger mouth opening contributed to higher ratings of arousal in both smiles and cry faces. These findings indicate that a set of similar facial movements is linked to perceptions of greater emotional intensity, whether the movements occur in positive or negative infant emotional expressions. This proposal is discussed with reference to discrete, componential, and dynamic systems theories of emotion.
