Similar Articles
20 similar articles found
1.
Although facial expressions are inherently dynamic phenomena, much of our understanding of how infants attend to and scan them is based on static face stimuli. Here we investigate how six-, nine-, and twelve-month-old infants allocate their visual attention toward dynamic, interactive videos of the six basic emotional expressions, and compare their responses with static images of the same stimuli. We find that infants show clear differences in how they attend to and scan dynamic versus static expressions, looking longer toward the dynamic faces and toward the lower-face regions. Infants across all age groups show differential interest in expressions, and show precise scanning of regions “diagnostic” for emotion recognition. These data also indicate that infants' attention toward dynamic expressions develops over the first year of life, including relative increases in interest and scanning precision toward some negative facial expressions (e.g., anger, fear, and disgust).

2.
3.
When we perceive the emotions of other people, we extract much information from the face. The present experiment used the Facial Action Coding System (FACS), an instrument that measures the magnitude of facial actions as a face changes from neutral to emotional. Japanese undergraduates judged the emotion in 66 static pictures of Japanese male faces (11 pictures for each of six basic expressions: happiness, surprise, fear, anger, sadness, and disgust), ranging from neutral faces to maximally expressed emotions. The stimuli had previously been scored with FACS and were presented in random order. A high correlation was found between the subjects' judgments of the facial expressions and the FACS scores.
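A minimal sketch of the kind of judgment-versus-FACS correlation reported above, written in Python. The arrays here are randomly generated stand-ins, not data from the study, and the noise model is purely illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical stimulus set: one FACS-derived intensity score per
# face picture (66 pictures, as in the study above).
facs_scores = rng.uniform(0.0, 5.0, size=66)

# Simulated mean observer judgments: assume ratings track the FACS
# scores with some rating noise (an assumption for illustration).
judgments = facs_scores + rng.normal(0.0, 0.5, size=66)

# Pearson correlation between FACS scores and judged intensity,
# analogous to the judgment/FACS correlation the abstract reports.
r, p = stats.pearsonr(facs_scores, judgments)
print(f"r = {r:.2f}, p = {p:.3g}")
```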

4.
Do infants show distinct negative facial expressions for different negative emotions? To address this question, European American, Chinese, and Japanese 11-month-olds were videotaped during procedures designed to elicit mild anger or frustration, and fear. Facial behavior was coded using Baby FACS, an anatomically based scoring system. Infants' nonfacial behavior differed across procedures, suggesting that the target emotions were successfully elicited. However, evidence for distinct emotion-specific facial configurations corresponding to fear versus anger was not obtained. Although facial responses were largely similar across cultures, some differences were also observed. Results are discussed in terms of functionalist and dynamical systems approaches to emotion and emotional expression.

5.
The Intensity of Emotional Facial Expressions and Decoding Accuracy
The influence of the physical intensity of emotional facial expressions on perceived intensity and emotion category decoding accuracy was assessed for expressions of anger, disgust, sadness, and happiness. The facial expressions of two men and two women posing each of the four emotions were used as stimuli. Six different levels of intensity of expression were created for each pose using a graphics morphing program. Twelve men and 12 women rated each of the 96 stimuli for perceived intensity of the underlying emotion and for the qualitative nature of the emotion expressed. The results revealed that perceived intensity varied linearly with the manipulated physical intensity of the expression. Emotion category decoding accuracy varied largely linearly with the manipulated physical intensity of the expression for expressions of anger, disgust, and sadness. For the happiness expressions only, the findings were consistent with a categorical judgment process. Sex of encoder produced significant effects for both dependent measures. These effects remained even after possible gender differences in encoding were controlled for, suggesting a perceptual bias on the part of the decoders.
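Graded-intensity stimuli of the kind described above can be approximated with a simple linear pixel blend between a neutral photograph and a full-intensity expression of the same poser. This is a sketch only: the file paths are hypothetical, and a real graphics morphing program of the sort the abstract mentions also warps facial geometry rather than just mixing pixel values.

```python
import numpy as np
from PIL import Image

def intensity_levels(neutral_path: str, apex_path: str, n_levels: int = 6):
    """Blend a neutral face with a maximal ('apex') expression of the
    same poser to create n_levels evenly spaced intensity steps.
    Assumes both photographs have the same size and alignment."""
    neutral = np.asarray(Image.open(neutral_path).convert("RGB"), dtype=float)
    apex = np.asarray(Image.open(apex_path).convert("RGB"), dtype=float)
    levels = []
    for w in np.linspace(1.0 / n_levels, 1.0, n_levels):
        blend = (1.0 - w) * neutral + w * apex
        levels.append(Image.fromarray(blend.astype(np.uint8)))
    return levels

# Hypothetical usage: six steps from 1/6 intensity up to full intensity.
# levels = intensity_levels("poser1_neutral.png", "poser1_anger.png")
```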

6.
Journal of Nonverbal Behavior - Age-related deficits are often observed in emotion categorization tasks that include negative emotional expressions like anger, fear, and sadness. Stimulus...

7.
Memory for in-group faces tends to be better than memory for out-group faces. Ackerman et al. (Psychological Science 17:836–840, 2006) found that this effect reverses when male faces display anger, supposedly due to their functional value in signaling intergroup threat. We explored the generalizability of this reverse effect. White participants viewed Black and White male or female faces displaying angry, fearful, or neutral expressions. Recognition accuracy for White male faces was better than for Black male faces when faces were neutral, but this reversed when the faces displayed anger or fear. For female targets, Black faces were generally better recognized than White faces, and female faces were better remembered when they displayed anger rather than fear, whereas male faces were better remembered when they displayed fear rather than anger. These findings are difficult to reconcile with a functional account and suggest (a) that the processing of male out-group faces is influenced by negative emotional expressions in general; and (b) that gender role expectations lead to differential remembering of male and female faces as a function of emotional expression.

8.
The present study was designed to determine whether the technique used to control the semantic content of emotional communications might influence the results of research on the effects of gender, age, and particular affects on accuracy of decoding tone of voice. Male and female college and elementary school students decoded a 48-item audio tape-recording of emotional expressions encoded by two children and two college students. Six emotions (anger, fear, happiness, jealousy, pride, and sadness) were expressed in two types of content-standard messages, namely letters of the alphabet and an affectively neutral sentence. The results of the study indicate that different methods for controlling content can indeed influence the results of studies of determinants of decoding performance. Overall, subjects demonstrated greater accuracy when decoding emotions expressed in the standard sentence than when decoding emotions embedded in letters of the alphabet. A technique by emotion interaction, however, revealed that this was especially true for the purer emotions of anger, fear, happiness, and sadness. Subjects identified the less pure emotions of jealousy and pride relatively more accurately when these emotions were embedded in the alphabet technique. The implications of these results for research concerning the vocal communication of affect are briefly discussed.

Preparation of this article was supported in part by the National Science Foundation.

9.
One of the most prevalent problems in face transplant patients is an inability to generate facial expressions of emotion. The purpose of this study was to measure how well other people could recognize patients' emotional expressions. We examined facial expressions of six emotions in two facial transplant patients (patient A = partial, patient B = full) and one healthy control, using video clips to evoke the emotions. We recorded the target subjects' facial expressions with a video camera while they watched the clips. These recordings were then shown to a panel of 130 viewers, who rated them for degree of emotional expressiveness on a 7-point Likert scale. Expressiveness scores were higher for the healthy control than for patients A and B, and varied as a function of emotion. The most recognizable emotion was happiness. The least recognizable emotions in patient A were fear, surprise, and anger. The expressions of patient B scored lower than those of patient A and the healthy control. The findings show that partial and full-face transplant patients may have difficulty generating facial expressions of emotion even if they can feel those emotions, and that different parts of the face seem to play critical roles in different emotional expressions.
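As a rough illustration of how such panel ratings might be summarized, here is a sketch that averages 7-point expressiveness scores by target and emotion. The table layout, column names, and numbers are invented for illustration and are not the study's data:

```python
import pandas as pd

# Hypothetical ratings: one row per viewer x target x emotion,
# each with a 1-7 expressiveness score (names and values invented).
ratings = pd.DataFrame({
    "target":  ["control", "patient_A", "patient_B"] * 2,
    "emotion": ["happiness"] * 3 + ["fear"] * 3,
    "score":   [6, 4, 2, 5, 2, 1],
})

# Mean expressiveness per target and emotion, mirroring the
# target-by-emotion comparison described in the abstract.
print(ratings.groupby(["target", "emotion"])["score"].mean())
```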

10.
Differentiation models contend that the organization of facial expressivity increases during infancy. Accordingly, infants are believed to exhibit increasingly specific facial expressions in response to stimuli as a function of development. This study tested this hypothesis in a sample of 151 infants (83 boys and 68 girls) observed in 4 situations (tickle, sour taste, arm restraint, and masked stranger) at 4 and 12 months of age. Three of the 4 situations showed evidence of increasing specificity over time. In response to tickle, the number of infants exhibiting joy expressions increased from 4 to 12 months, while the number exhibiting interest and surprise expressions and surprise blends decreased. In response to the sour taste, more infants exhibited disgust over time, and fewer exhibited joy and interest expressions or fear and surprise blends. For arm restraint, more infants exhibited anger expressions and anger blends over time, and fewer exhibited interest and surprise expressions and surprise blends. In response to the masked stranger, however, no evidence of increased specificity was found. Overall, these findings suggest that infants increasingly exhibit particular expressions in response to specific stimuli during the 1st year of life. These data provide partial support for the hypothesis that facial expressivity becomes increasingly organized over time.
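One simple way to test whether the count of infants showing a given expression changes between 4 and 12 months is a chi-square test on the contingency table of age by expression occurrence. This is only a sketch of that style of analysis, not necessarily the test the authors used, and the counts below are invented (they merely sum to the study's sample of 151 per age):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of infants who did vs. did not show a joy
# expression to tickle at each age (rows: 4 and 12 months;
# columns: joy shown, joy not shown). Numbers are invented.
table = np.array([
    [60, 91],   # 4 months
    [95, 56],   # 12 months
])

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2(dof={dof}) = {chi2:.2f}, p = {p:.3g}")
```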

11.
The specificity predicted by differential emotions theory (DET) for early facial expressions in response to 5 different eliciting situations was studied in a sample of 4‐month‐old infants (n = 150). Infants were videotaped during tickle, sour taste, jack‐in‐the‐box, arm restraint, and masked‐stranger situations and their expressions were coded second by second. Infants showed a variety of facial expressions in each situation; however, more infants exhibited positive (joy and surprise) than negative expressions (anger, disgust, fear, and sadness) across all situations except sour taste. Consistent with DET‐predicted specificity, joy expressions were the most common in response to tickling, and were less common in response to other situations. Surprise expressions were the most common in response to the jack‐in‐the‐box, as predicted, but also were the most common in response to the arm restraint and masked‐stranger situations, indicating a lack of specificity. No evidence of predicted specificity was found for anger, disgust, fear, and sadness expressions. Evidence of individual differences in expressivity within situations, as well as stability in the pattern across situations, underscores the need to examine both child and contextual factors in studying emotional development. The results provide little support for the DET postulate of situational specificity and suggest that a synthesis of differential emotions and dynamic systems theories of emotional expression should be considered.

12.
Sam V. Wass & Tim J. Smith, Infancy, 2014, 19(4), 352–384
Little research hitherto has examined how individual differences in attention, as assessed using standard experimental paradigms, relate to individual differences in how attention is spontaneously allocated in more naturalistic contexts. Here, we analyzed the time intervals between refoveating eye movements (fixation durations) while typically developing 11‐month‐old infants viewed a 90‐min battery ranging from complex dynamic to noncomplex static materials. The same infants also completed experimental assessments of cognitive control, psychomotor reaction times (RT), processing speed (indexed via peak look during habituation), and arousal (indexed via tonic pupil size). High test–retest reliability was found for fixation duration, across testing sessions and across types of viewing material. Increased cognitive control and increased arousal were associated with reduced variability in fixation duration. For fixations to dynamic stimuli, in which a large proportion of saccades may be exogenously cued, we found that psychomotor RT measures were most predictive of mean fixation duration; for fixations to static stimuli, in contrast, in which there is less exogenous attentional capture, we found that psychomotor RT did not predict performance, but that measures of cognitive control and arousal did. The implications of these findings for understanding the development of attentional control in naturalistic settings are discussed.
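Fixation durations of the kind analyzed above are commonly extracted from raw gaze samples with a velocity-threshold (I-VT) segmentation: runs of samples whose point-to-point velocity stays below a threshold are treated as fixations. The sketch below assumes that generic approach with an arbitrary illustrative threshold; it is not necessarily the pipeline used in the study.

```python
import numpy as np

def fixation_durations(timestamps, gaze_xy, velocity_thresh=30.0):
    """Segment gaze samples into fixations with a simple velocity
    threshold and return the duration of each fixation (the interval
    between refoveating eye movements). The units of velocity_thresh
    (gaze units per second) are purely illustrative."""
    t = np.asarray(timestamps, dtype=float)
    xy = np.asarray(gaze_xy, dtype=float)
    dt = np.diff(t)
    step = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    fixating = (step / dt) < velocity_thresh

    durations, start = [], None
    for i, is_fix in enumerate(fixating):
        if is_fix and start is None:
            start = i                            # fixation begins
        elif not is_fix and start is not None:
            durations.append(t[i] - t[start])    # fixation ends here
            start = None
    if start is not None:                        # fixation runs to the end
        durations.append(t[-1] - t[start])
    return durations
```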

13.
Categorical perception, indicated by better discrimination between stimuli that cross a category boundary than between stimuli within a category, is an efficient manner of classification. The current study examined the development of categorical perception of emotional stimuli in infancy. We used morphed facial images to investigate whether infants find contrasts between emotional facial images that cross categorical boundaries more salient than those that do not, while matching the degree of physical difference in the two contrasts. Five‐month‐olds exhibited sensitivity to the categorical boundary between sadness and disgust, between happiness and surprise, and between sadness and anger, but not between anger and disgust. Even 9‐month‐olds failed to exhibit evidence of a definitive category boundary between anger and disgust. These findings indicate the presence of discrete boundaries between some, but not all, of the basic emotions early in life. Implications of these findings for the major theories of emotion representation are discussed.
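The key design constraint above is that the within-category and between-category pairs differ by the same physical amount. Here is a schematic sketch of that pair selection on a morph continuum; plain 1-D vectors stand in for images, and the boundary location, step size, and indices are all hypothetical:

```python
import numpy as np

# Schematic 11-step morph continuum between two expression endpoints
# (1-D vectors stand in for face images; real stimuli would be morphs).
sad = np.zeros(64)
disgust = np.ones(64)
continuum = [(1 - w) * sad + w * disgust for w in np.linspace(0, 1, 11)]

STEP = 3      # physical difference, in morph steps, matched across pairs
BOUNDARY = 5  # hypothetical category boundary from pilot categorization

within_pair = (continuum[0], continuum[0 + STEP])  # both on the 'sad' side
between_pair = (continuum[BOUNDARY - 1],
                continuum[BOUNDARY - 1 + STEP])    # straddles the boundary

# Categorical perception predicts better discrimination (e.g., greater
# novelty preference in infants) for between_pair than within_pair,
# even though both pairs are the same number of morph steps apart.
```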

14.
Facial expressions related to sadness are a universal signal of nonverbal communication. Although many psychological studies have shown that drooping of the lip corners, raising of the chin, and oblique eyebrow movements (a combination of inner brow raising and brow lowering) express sadness, no study has characterized facial behavior under well-controlled conditions in people actually experiencing sadness itself. Spontaneous facial expressions associated with sadness therefore remain unclear. This study was conducted to accumulate findings on spontaneous facial expressions of sadness. We recorded the spontaneous facial expressions of a group of participants as they experienced sadness during an emotion-elicitation task, which required each participant to recall neutral and sad memories while listening to music. We then conducted a detailed analysis of their sad and neutral expressions using the Facial Action Coding System. The prototypical facial expressions of sadness reported in earlier studies were not observed when people experienced sadness as an internal state under non-social circumstances. Instead, participants showed tension around the mouth, which might function as a form of suppression. Furthermore, parts of these facial actions were related not only to sad experiences but also to other emotional experiences such as disgust, fear, anger, and happiness. This study thus raises the possibility that previously undescribed facial expressions are involved in the experience of sadness as an internal state.

15.
Recent research has demonstrated that preschool children can decode emotional meaning in expressive body movement; however, to date, no research has considered preschool children's ability to encode emotional meaning in this medium. The current study investigated 4-year-old (N = 23) and 5-year-old (N = 24) children's ability to encode the emotional meaning of an accompanying music segment by moving a teddy bear, using previously modeled expressive movements, to indicate one of four target emotions (happiness, sadness, anger, or fear). Adult judges visually categorized the silent videotaped expressive-movement performances of children of both ages with greater than chance accuracy. In addition, accuracy in categorizing the expressed emotion varied as a function of the child's age and the emotion. A subsequent cue analysis revealed that children as young as 4 years old were systematically varying their expressive movements with respect to force, rotation, shifts in movement pattern, tempo, and upward movement in the process of emotional communication. The theoretical significance of such encoding ability is discussed with respect to children's nonverbal skills and the communication of emotion.

16.
17.
The goals of this study were to examine the effectiveness of emotional reappraisal in regulating male sexual arousal and to investigate a set of variables theoretically linked to sexual arousal regulation success. Participants first completed a series of online sexuality questionnaires. Subsequently, they were assessed for their success in regulating sexual arousal in the laboratory. Results showed that the ability to regulate emotion may cross emotional domains: those men best able to regulate sexual arousal were also the most skilled at regulating their level of amusement in response to humorous stimuli. Participants, on average, were somewhat able to regulate their physiological and cognitive sexual arousal, although there was a wide range of regulation success. Whereas some men were very adept at regulating their sexual arousal, others became more sexually aroused while trying to regulate. Age, sexual experience, and sexual compulsivity were unrelated to sexual arousal regulation. Conversely, sexual excitation, inhibition, and desire correlated with sexual arousal regulation success. Increased sexual excitation and desire were associated with poorer regulatory performance, whereas a propensity for sexual inhibition due to fear of performance consequences was related to regulatory success.

18.
Because of the close connection between culture and language, a number of writers have suggested that bilinguals will differ in their behavior depending on the degree to which they have assimilated each of their cultures. We tested this notion by obtaining data from bilingual (English and Hindi) college students in India, using a well-studied cross-cultural research paradigm involving emotional perception. Subjects judged universal facial expressions of emotion in two separate sessions, one conducted entirely in English and the other in Hindi. In each session, they judged which emotion was being portrayed and how intensely. Subjects recognized anger, fear, and sadness more accurately in English than in Hindi. They also attributed greater intensity to female photos of anger when rating in Hindi, but greater intensity to female photos of sadness when rating in English. These findings are discussed in relation to the theoretical connection between culture and language.

19.
We investigated how power priming affects facial emotion recognition in the context of body postures conveying the same or a different emotion. Facial emotions are usually recognized better when the face is presented with a congruent body posture, and worse when the body posture is incongruent. In our study, we primed participants to low, high, or neutral power prior to a facial-emotion categorization task in which faces were presented together with a congruent or incongruent body posture. Facial emotion recognition in high-power participants was not affected by body posture. In contrast, low-power and neutral-power participants were significantly affected by the congruence of facial and body emotions: they displayed better facial emotion recognition when the body posture was congruent and worse performance when it was incongruent. In a follow-up task, we trained the same participants to categorize two sets of novel checkerboard stimuli and then engaged them in a recognition test involving compounds of these stimuli. High-, low-, and neutral-power participants all showed a strong congruence effect for the compound checkerboard stimuli. We discuss our results with reference to the literature on power and social perception.

20.
This study investigated the facial expressions of emotion, and the circumstances under which those expressions occurred, in a sample of the most popular children's television programs. Fifteen-second randomly selected intervals from episodes of five television programs were analyzed for displays of happiness, sadness, anger, fear, disgust, and surprise. In addition, the contexts in which the emotions occurred were examined. Results indicated that particular emotional expressions occurred at significantly different frequencies and that there was an association between emotional displays and emotion contexts. The high rate of emotional displays found in television shows has implications for viewers' developing knowledge of emotional display rules.

We are grateful to Sharon Galligan for assistance in coding part of the data, and to Carolyn Saarni and Amy Halberstadt for helpful comments on an earlier draft of this paper. This research was supported in part by a grant from the National Institute on Disability and Rehabilitation Research, #GOO85351. The opinions expressed herein do not necessarily reflect the position or policy of the U.S. Department of Education.
