Similar Articles
20 similar articles retrieved.
1.
Young (M = 23 years) and older (M = 77 years) adults' interpretation and memory for the emotional content of spoken discourse were examined in an experiment using short, videotaped scenes of two young actresses talking to each other about emotionally-laden events. Emotional nonverbal information (prosody or facial expressions) was conveyed at the end of each scene at low, medium, and high intensities. Nonverbal information indicating anger, happiness, or fear conflicted with the verbal information. Older adults' ability to differentiate levels of emotional intensity was not as strong (for happiness and anger) as that of younger adults. An incidental memory task revealed that older adults, more often than younger adults, reconstruct what people state verbally to coincide with the meaning of the nonverbal content, if the nonverbal content is conveyed through facial expressions. A second experiment with older participants showed that the high level of memory reconstructions favoring the nonverbal interpretation was maintained when the ages of the participants and actresses were matched, and when the nonverbal content was conveyed through both prosody and facial expressions.

2.
The specificity predicted by differential emotions theory (DET) for early facial expressions in response to 5 different eliciting situations was studied in a sample of 4‐month‐old infants (n = 150). Infants were videotaped during tickle, sour taste, jack‐in‐the‐box, arm restraint, and masked‐stranger situations and their expressions were coded second by second. Infants showed a variety of facial expressions in each situation; however, more infants exhibited positive (joy and surprise) than negative expressions (anger, disgust, fear, and sadness) across all situations except sour taste. Consistent with DET‐predicted specificity, joy expressions were the most common in response to tickling, and were less common in response to other situations. Surprise expressions were the most common in response to the jack‐in‐the‐box, as predicted, but also were the most common in response to the arm restraint and masked‐stranger situations, indicating a lack of specificity. No evidence of predicted specificity was found for anger, disgust, fear, and sadness expressions. Evidence of individual differences in expressivity within situations, as well as stability in the pattern across situations, underscores the need to examine both child and contextual factors in studying emotional development. The results provide little support for the DET postulate of situational specificity and suggest that a synthesis of differential emotions and dynamic systems theories of emotional expression should be considered.

3.
Differentiation models contend that the organization of facial expressivity increases during infancy. Accordingly, infants are believed to exhibit increasingly specific facial expressions in response to stimuli as a function of development. This study tested this hypothesis in a sample of 151 infants (83 boys and 68 girls) observed in 4 situations (tickle, sour taste, arm restraint, and masked stranger) at 4 and 12 months of age. Three of the 4 situations showed evidence of increasing specificity over time. In response to tickle, the number of infants exhibiting joy expressions increased and the number exhibiting interest, surprise, and surprise blends decreased from 4 to 12 months. In tasting a sour substance, more infants exhibited disgust and fewer exhibited joy and interest expressions, and fear and surprise blends over time. For arm restraint, more infants exhibited anger expressions and anger blends and fewer exhibited interest and surprise expressions and surprise blends over time. In response to a masked stranger, however, no evidence of increased specificity was found. Overall, these findings suggest that infants increasingly exhibit particular expressions in response to specific stimuli during the 1st year of life. These data provide partial support for the hypothesis that facial expressivity becomes increasingly organized over time.

4.
Eighty-two younger and older adults participated in a two-part study of the decoding of emotion through body movements and gestures. In the first part, younger and older adults identified emotions depicted in brief videotaped displays of young adult actors portraying emotional situations. In each display, the actors were silent and their faces were electronically blurred in order to isolate the body cues to emotion. Although both groups made accurate emotion identifications well above chance levels, older adults made more overall errors, and this was especially true for negative emotions. Moreover, their errors were more likely to reflect the misidentification of emotional displays as neutral in content. In the second part, younger and older adults rated the videotaped displays using scales reflecting several movement dimensions (e.g., form, tempo, force, and movement). The ratings of both age groups were in high agreement and provided reliable information about particular body cues to emotion. The errors made by older adults were linked to reactions to exaggerated or ambiguous body cues.

5.
Sex, age and education differences in facial affect recognition were assessed within a large sample (n = 7,320). Results indicate superior performance by females and younger individuals in the correct identification of facial emotion, with the largest advantage for low intensity expressions. Though there were no demographic differences for identification accuracy on neutral faces, controlling for response biases by males and older individuals to label faces as neutral revealed sex and age differences for these items as well. This finding suggests that inferior facial affect recognition performance by males and older individuals may be driven primarily by instances in which they fail to detect the presence of emotion in facial expressions. Older individuals also demonstrated a greater tendency to label faces with negative emotion choices, while females exhibited a response bias for sad and fear. These response biases have implications for understanding demographic differences in facial affect recognition.

6.
A method was developed for automated coding of facial behavior in computer-aided test or game situations. Facial behavior is registered automatically with the aid of small plastic dots which are affixed to pre-defined regions of the subject's face. During a task, the subject's face is videotaped, and the picture is digitized. A special pattern-recognition algorithm identifies the dot pattern, and an artificial neural network classifies the dot pattern according to the Facial Action Coding System (FACS; Ekman & Friesen, 1978). The method was tested in coding the posed facial expressions of three subjects, themselves FACS experts. Results show that it is possible to identify and differentiate facial expressions by their corresponding dot patterns. The method is independent of individual differences in physiognomy.
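The abstract above describes a pipeline (marker dots → digitized video frames → pattern recognition → neural-network classification into FACS categories) without implementation details. The sketch below is a minimal, hypothetical Python illustration of the final classification step only; it is not the authors' implementation. The dot count, the action-unit label names, the network size, and the synthetic coordinate data are all assumptions introduced for illustration, and scikit-learn's MLPClassifier stands in for the article's unspecified neural network.

```python
# Hypothetical sketch: classify per-frame marker-dot coordinates into
# FACS-style action-unit labels with a small feed-forward network.
# All data here are synthetic placeholders, not the study's recordings.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

N_DOTS = 20        # assumed number of plastic dots affixed to the face
N_FRAMES = 600     # assumed number of digitized video frames
ACTION_UNITS = ["AU1_inner_brow_raiser", "AU4_brow_lowerer", "AU12_lip_corner_puller"]

rng = np.random.default_rng(0)

# Each frame yields (x, y) coordinates for every dot; in a real system these
# would come from the pattern-recognition step applied to the digitized image.
X = rng.normal(size=(N_FRAMES, N_DOTS * 2))
y = rng.integers(0, len(ACTION_UNITS), size=N_FRAMES)  # placeholder AU labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Small neural network standing in for the article's dot-pattern classifier.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
print("predicted label for first test frame:", ACTION_UNITS[clf.predict(X_test[:1])[0]])
```

With real dot trajectories rather than random noise, the same structure (flattened coordinates in, action-unit label out) would be one straightforward way to reproduce the kind of dot-pattern classification the abstract reports.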

7.
The perception of emotional facial expressions may activate corresponding facial muscles in the receiver, also referred to as facial mimicry. Facial mimicry is highly dependent on the context and type of facial expressions. While previous research almost exclusively investigated mimicry in response to pictures or videos of emotional expressions, studies with a real, face-to-face partner are still rare. Here we compared facial mimicry of angry, happy, and sad expressions and emotion recognition in a dyadic face-to-face setting. In sender-receiver dyads, we recorded facial electromyograms in parallel. Senders communicated to the receivers, with facial expressions only, the emotions felt during specific personal situations in the past, eliciting anger, happiness, or sadness. Receivers mimicked happiness the most, sadness to a lesser degree, and anger the least. In actor-partner interdependence models we showed that the receivers' own facial activity influenced their ratings, which increased the agreement between the senders' and receivers' ratings for happiness, but not for angry and sad expressions. These results are in line with the Emotion Mimicry in Context View, holding that humans mimic happy expressions according to affiliative intentions. The mimicry of sad expressions is less intense, presumably because it signals empathy and might imply personal costs. Direct anger expressions are mimicked the least, possibly because anger communicates threat and aggression. Taken together, we show that incidental facial mimicry in a face-to-face setting is positively related to recognition accuracy for non-stereotypical happy expressions, supporting the functionality of facial mimicry.

8.
This study examined the emergence of affect specificity in infancy. Infants received verbal and facial signals of 2 different, negatively valenced emotions (fear and sadness) as well as neutral affect via a television monitor to determine if they could make qualitative distinctions among emotions of the same valence. Twenty 12‐ to 14‐month‐olds and 20 16‐ to 18‐month‐olds were examined. Results suggested that younger infants showed no evidence of referential specificity, as they responded similarly to both the target and distracter toys, and showed no evidence of affect specificity, showing no difference in play between affect conditions. Older infants, in contrast, showed evidence both of referential and affect specificity. With respect to affect specificity, 16‐ to 18‐month‐olds touched the target toy less in the fear condition than in the sad condition and showed a larger proportion of negative facial expressions in the sad condition versus the fear condition. These findings suggest a developmental emergence after 15 months of age for affect specificity in relating emotional messages to objects.

9.
Do infants show distinct negative facial expressions for different negative emotions? To address this question, European American, Chinese, and Japanese 11‐month‐olds were videotaped during procedures designed to elicit mild anger or frustration and fear. Facial behavior was coded using Baby FACS, an anatomically based scoring system. Infants' nonfacial behavior differed across procedures, suggesting that the target emotions were successfully elicited. However, evidence for distinct emotion‐specific facial configurations corresponding to fear versus anger was not obtained. Although facial responses were largely similar across cultures, some differences also were observed. Results are discussed in terms of functionalist and dynamical systems approaches to emotion and emotional expression.

10.
This study examined three- to seven-year-old children's abilities to recognize and label their own facial expressions of emotion. Each child posed four facial expressions (Happy, Sad, Angry, and Scared) which were photographed with a Polaroid camera. The child then selected each expression from the array of his/her own photos and labeled the facial expression of each photo. In addition to children's self-evaluations, the photo set's expressive content was evaluated by a panel of adult raters. Happy was the expression easiest for children to pose; Scared was the most difficult. Abilities involved in evaluating one's own facial expression (i.e., recognizing and labeling) appear not to be acquired simultaneously with the ability to pose the expression. Not all children conformed to adult standards in evaluating their own expressions. Nearly one-fifth of the children studied exhibited evidence of an idiosyncratic expressive scheme for at least one of their facial expressions.

11.
The present study examined the nonverbal correlates of repressive coping, extending previous research in two ways: (1) participants' nonverbal behaviors were observed in either of two conditions that differed with respect to the salience of public identity; (2) an anatomically-based facial coding system was used to assess participants' emotion expressions and symbolic communication behaviors. Sixty female undergraduates, classified as repressive, low-anxious, or high-anxious, were videotaped during the preparation and delivery of a self-disclosing speech. During both the preparation and delivery, the salience of participants' public identities was either minimized (low-salience condition) or maximized (high-salience condition). Repressors and nonrepressors exhibited similar frequencies of hostile facial expressions. Repressors differed from nonrepressors by their frequent expressions of social smiles and conversational illustrators when their public selves were most salient. These findings suggest that certain symbolic communication behaviors may be nonverbal analogues of cognitive coping processes, and they support the utility of including expressive behaviors in conceptualizations of emotion-focused coping.

12.
There is consistent evidence that older adults have difficulties in perceiving emotions. However, emotion perception measures to date have focused on one particular type of assessment: using standard photographs of facial expressions posing six basic emotions. We argue that it is important in future research to explore adult age differences in understanding more complex, social and blended emotions. Using stimuli that are dynamic records of the emotions expressed by people of all ages, and using genuine rather than posed emotions, would also improve the ecological validity of future research into age differences in emotion perception. Important questions remain about possible links between difficulties in perceiving emotional signals and the implications that this has for the everyday interpersonal functioning of older adults.

13.
Several previous experiments have found that newborn and young infants will spend more time looking at attractive faces when these are shown paired with faces judged by adults to be unattractive. Two experimental conditions are described with the aim of finding whether the “attractiveness effect” results from attention to internal or external facial features, or both. Pairs of attractive and less attractive faces (as judged by adults) were shown to newborn infants (mean age 2 days, 9 hours), where each pair had either identical internal features (and different external features) or identical external features (and different internal features). In the latter, but not the former, condition the infants looked longer at the attractive faces. These findings are clear evidence that newborn infants use information about internal facial features in making preferences based on attractiveness. It is suggested that when newborn (and older) infants are presented with facial stimuli, whether dynamic or static, they are able to attend both to internal and external facial features.

14.
Older adults tend to perform worse on emotion perception tasks compared to younger adults. How this age difference relates to other interpersonal perception tasks and conversation ability remains an open question. In the present study, we assessed 32 younger and 30 older adults' accuracy when perceiving (1) static facial expressions, (2) emotions, attitudes, and intentions from videos, and (3) interpersonal constructs (e.g., kinship). Participants' conversation ability was rated by coders from a videotaped, dyadic problem-solving task. Younger adults were more accurate than older adults at perceiving some, but not all, emotions. No age differences in accuracy were found on the other perception tasks or in conversation ability. Some but not all of the interpersonal perception tasks were related. None of the perception tasks predicted conversation ability. Thus, although the literature suggests a robust age difference in emotion perception accuracy, this difference does not seem to transfer to other interpersonal perception tasks or interpersonal outcomes.

15.
Research has demonstrated that infants recognize emotional expressions of adults in the first half year of life. We extended this research to a new domain, infant perception of the expressions of other infants. In an intermodal matching procedure, 3.5‐ and 5‐month‐old infants heard a series of infant vocal expressions (positive and negative affect) along with side‐by‐side dynamic videos in which one infant conveyed positive facial affect and another infant conveyed negative facial affect. Results demonstrated that 5‐month‐olds matched the vocal expressions with the affectively congruent facial expressions, whereas 3.5‐month‐olds showed no evidence of matching. These findings indicate that by 5 months of age, infants detect, discriminate, and match the facial and vocal affective displays of other infants. Further, because the facial and vocal expressions were portrayed by different infants and shared no face–voice synchrony, temporal, or intensity patterning, matching was likely based on detection of a more general affective valence common to the face and voice.

16.
This study found that the facial action of moderately or widely opening the mouth is accompanied by brow raising in infants, thus producing surprise expressions in non-surprise situations. Infants (age = 5 months and 7 months) were videotaped as they were presented with toys that they often grasped and brought to their mouths. Episodes of mouth opening were identified and accompanying brow, nose, and eyelid movements were coded. Results indicated that mouth opening is selectively associated with raised brows rather than with other brow movements. Trace levels of eyelid raising also tended to accompany this facial configuration. The findings are discussed in terms of a dynamical systems theory of facial behavior and suggest that facial expression cannot be used as investigators' sole measure of surprise in infants. This research was conducted as part of the second author's undergraduate honors program project and was supported in part by a grant from the NICHHD #1RO1 HD 22399-A3 awarded to G. F. Michel.

17.
Women were videotaped while they spoke about a positive and a negative experience either in the presence of an experimenter or alone. They gave self-reports of their emotional experience, and the videotapes were rated for facial and verbal expression of emotion. Participants spoke less about their emotions when the experimenter (E) was present. When E was present, they smiled more during positive disclosures, but showed less negative and more positive expression during negative disclosures. Facial behavior was only related to experienced emotion during positive disclosure when alone. Verbal behavior was related to experienced emotion for positive and negative disclosures when alone. These results show that verbal and nonverbal behaviors, and their relationship with emotional experience, depend on the type of emotion, the nature of the emotional event, and the social context.

18.
We report data concerning cross-cultural judgments of emotion in spontaneously produced facial expressions. Americans, Japanese, British, and International Students in the US reliably attributed emotions to the expressions of Olympic judo athletes at the end of a match for a medal, and at two times during the subsequent medal ceremonies. There were some observer culture differences in absolute attribution agreement rates, but high cross-cultural agreement in differences in attribution rates across expressions (relative agreement rates). Moreover, we operationalized signal clarity and demonstrated that it was associated with agreement rates similarly in all cultures. Finally, we obtained judgments of won-lost match outcomes and medal finish, and demonstrated that the emotion judgments were associated with accuracy in judgments of outcomes. These findings demonstrated that members of different cultures reliably judge spontaneously expressed emotions, and that across observer cultures, lower absolute agreement rates are related to noise produced by non-emotional facial behaviors. Also, the findings suggested that observers of different cultures utilize the same facial cues when judging emotions, and that the signal value of facial expressions is similar across cultures.

19.
Humans perceive emotions in terms of categories, such as “happiness,” “sadness,” and “anger.” To learn these complex conceptual emotion categories, humans must first be able to perceive regularities in expressive behaviors (e.g., facial configurations) across individuals. Recent research suggests that infants spontaneously form “basic-level” categories of facial configurations (e.g., happy vs. fear), but not “superordinate” categories of facial configurations (e.g., positive vs. negative). The current studies further explore how infant age and language impact superordinate categorization of facial configurations associated with different negative emotions. Across all experiments, infants were habituated to one person displaying facial configurations associated with anger and disgust. While 10-month-olds formed a category of person identity (Experiment 1), 14-month-olds formed a category that included negative facial configurations displayed by the same person (Experiment 2). However, neither age group formed the hypothesized superordinate category of negative valence. When a verbal label (“toma”) was added to each of the habituation events (Experiment 3), 10-month-olds formed a category similar to 14-month-olds in Experiment 2. These findings intersect a larger conversation about the nature and development of children's emotion categories and highlight the importance of considering developmental processes, such as language learning and attentional/memory development, in the design and interpretation of infant categorization studies.

20.
People can discriminate cheaters from cooperators on the basis of negative facial expressions. However, such cheater detection is far from perfect in real-world situations. Therefore, it is possible that cheaters have the ability to disguise negative emotional expressions that signal their uncooperative attitude. To test this possibility, emotional intensity and trustworthiness were evaluated for facial photographs of cheaters and cooperators defined by scores in an economic game. The facial photographs had either posed happy or angry expressions. The angry expressions of cheaters were rated angrier and less trustworthy than those of cooperators. On the other hand, happy expressions of cheaters were higher in emotional intensity but comparable to those of cooperators in trustworthiness. These results suggest that cheater detection based on the processing of negative facial expressions can be thwarted by a posed or fake smile, which cheaters put on with higher intensity than cooperators.
