Similar Articles
6 similar articles found.
1.
Automated facial measurement using computer vision has the potential to objectively document continuous changes in behavior. To examine emotional expression and communication, we used automated measurements to quantify smile strength, eye constriction, and mouth opening in two 6‐month‐old infant‐mother dyads who each engaged in a face‐to‐face interaction. Automated measurements showed high associations with anatomically based manual coding (concurrent validity); measurements of smiling showed high associations with mean ratings of positive emotion made by naive observers (construct validity). For both infants and mothers, smile strength and eye constriction (the Duchenne marker) were correlated over time, creating a continuous index of smile intensity. Infant and mother smile activity exhibited changing (nonstationary) local patterns of association, suggesting the dyadic repair and dissolution of states of affective synchrony. The study provides insights into the potential and limitations of automated measurement of facial action.
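The changing local patterns of association reported here are commonly quantified with a windowed (sliding) cross-correlation, which lets the strength of infant–mother synchrony rise and fall over the interaction rather than assuming one stationary correlation. A minimal Python sketch of the idea on synthetic data; the function name, window size, and time series are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def windowed_correlation(infant, mother, win=150, step=30):
    """Pearson correlation of two aligned time series inside a sliding
    window, one coefficient per window position. At 30 fps video,
    win=150 frames corresponds to a 5-second window."""
    rs = []
    for start in range(0, len(infant) - win + 1, step):
        a = infant[start:start + win]
        b = mother[start:start + win]
        # Guard against constant segments, where correlation is undefined.
        if np.std(a) == 0 or np.std(b) == 0:
            rs.append(np.nan)
        else:
            rs.append(np.corrcoef(a, b)[0, 1])
    return np.array(rs)

# Synthetic example: synchrony that appears, dissolves, and reappears.
t = np.arange(1800)                       # one minute of 30 fps video
infant = np.sin(t / 40.0) + 0.3 * np.random.randn(1800)
mother = np.sin(t / 40.0) + 0.3 * np.random.randn(1800)
mother[600:1200] = np.random.randn(600)   # mid-interaction desynchrony
print(windowed_correlation(infant, mother).round(2))
```

Runs of high within-window correlation would correspond to states of affective synchrony; drops and recoveries would correspond to their dissolution and repair.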

2.
Adults' perceptions provide information about the emotional meaning of infant facial expressions. This study asks whether similar facial movements influence adult perceptions of emotional intensity in both infant positive (smile) and negative (cry face) facial expressions. Ninety‐five college students rated a series of naturally occurring and digitally edited images of infant facial expressions. Naturally occurring smiles and cry faces involving the co‐occurrence of greater lip movement, mouth opening, and eye constriction were rated as expressing stronger positive and negative emotion, respectively, than expressions without these three features. Ratings of digitally edited expressions indicated that eye constriction contributed to higher ratings of positive emotion in smiles (i.e., in Duchenne smiles) and that greater eye constriction contributed to higher ratings of negative emotion in cry faces. Stronger mouth opening contributed to higher ratings of arousal in both smiles and cry faces. These findings indicate that a set of similar facial movements is linked to perceptions of greater emotional intensity, whether the movements occur in positive or negative infant emotional expressions. This proposal is discussed with reference to discrete, componential, and dynamic systems theories of emotion.
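The study's central comparison — whether images showing the three co-occurring features receive higher intensity ratings than images without them — reduces to a between-condition comparison of mean ratings. A minimal illustrative sketch; all data, column names, and the simple t-test are hypothetical stand-ins for the study's actual analysis:

```python
import pandas as pd
from scipy import stats

# Hypothetical long-format ratings: one row per rater x image.
df = pd.DataFrame({
    "features_present": [True, True, True, False, False, False] * 10,
    "rating": [6, 7, 6, 3, 4, 3] * 10,
})

# Mean rating per condition, then a between-condition comparison.
by_cond = df.groupby("features_present")["rating"].mean()
t, p = stats.ttest_ind(
    df.loc[df.features_present, "rating"],
    df.loc[~df.features_present, "rating"],
)
print(by_cond)
print(f"t = {t:.2f}, p = {p:.4f}")
```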

3.
Although still-face effects are well-studied, little is known about the degree to which the Face-to-Face/Still-Face (FFSF) procedure is associated with the production of intense affective displays. Duchenne smiling expresses more intense positive affect than non-Duchenne smiling, while Duchenne cry-faces express more intense negative affect than non-Duchenne cry-faces. Forty 4-month-old infants and their mothers completed the FFSF, and key affect-indexing facial Action Units (AUs) were coded by expert Facial Action Coding System coders for the first 30 s of each FFSF episode. Computer vision software, automated facial affect recognition (AFAR), identified AUs for the entire 2-min episodes. Expert coding and AFAR produced similar infant and mother Duchenne and non-Duchenne FFSF effects, highlighting the convergent validity of automated measurement. Substantive AFAR analyses indicated that both infant Duchenne and non-Duchenne smiling declined from the Face-to-Face (FF) episode to the Still-Face (SF) episode, but only Duchenne smiling increased from the SF to the Reunion (RE) episode. Similarly, the magnitude of mother Duchenne smiling changes over the FFSF was 2–4 times greater than that of non-Duchenne smiling changes. Duchenne expressions appear to be a sensitive index of intense infant and mother affective valence that is accessible to automated measurement and may be a target for future FFSF research.
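In Facial Action Coding System terms, a Duchenne smile is lip-corner raising (AU12) co-occurring with eye constriction (AU6); AU12 without AU6 is a non-Duchenne smile. A minimal sketch of deriving such per-frame labels from automated AU scores; the threshold, column names, and score format are assumptions, not AFAR's actual output:

```python
import numpy as np
import pandas as pd

def label_smiles(au6, au12, thresh=0.5):
    """Per-frame labels from AU occurrence scores: 'duchenne' when
    smiling (AU12) co-occurs with eye constriction (AU6),
    'non_duchenne' when AU12 occurs alone, else 'none'."""
    smiling = au12 >= thresh
    eyes = au6 >= thresh
    labels = np.where(smiling & eyes, "duchenne",
             np.where(smiling, "non_duchenne", "none"))
    return pd.Series(labels)

# Hypothetical AU scores for six video frames.
frames = pd.DataFrame({"AU6":  [0.9, 0.1, 0.8, 0.0, 0.7, 0.2],
                       "AU12": [0.8, 0.9, 0.1, 0.0, 0.9, 0.7]})
labels = label_smiles(frames["AU6"], frames["AU12"])
# Per-episode proportions of each label could then be compared
# across the FF, SF, and RE episodes.
print(labels.value_counts(normalize=True))
```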

4.
The capacity to engage with one’s child in a reciprocally responsive way is an important element of successful and rewarding parent–child conversations, which are common contexts for emotion socialization. The degree to which a parent–child dyad shows a mutually responsive orientation presumably depends on both individuals’ socio-emotional skills. For example, one or both members of a dyad needs to be able to accurately interpret and respond to the other’s nonverbal cues, such as facial expressions, to facilitate mutually responsive interactions. Little research, however, has examined whether and how mother and/or child facial expression decoding skill relates to dyads’ emotional mutuality during conversations. We thus examined associations between both mother and child facial expression decoding skill and observed emotional mutuality during parent-preschooler conversations about happy child memories. Results lend support to our hypotheses by suggesting that both mother and child capacities to read others’ emotional cues make distinct contributions to parent–child emotional mutuality in the context of reminiscing conversations. Specifically, mothers’ accurate decoding of child facial expressions predicted maternal displays of positive affect and interest, while children’s accurate decoding of adult facial expressions predicted dyadic displays of mutual enjoyment. Contrary to our hypotheses, however, parent/child facial expression decoding skills did not interact to predict observed mutual responsiveness. These findings underscore the importance of attending to both parent and child contributions to successful dyadic interactions that facilitate effective emotion socialization.
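The hypothesized structure — a main effect of each partner's decoding skill plus their interaction — corresponds to a standard moderated regression. A minimal sketch with simulated data; the variable names and effect sizes are hypothetical, chosen only to mirror the reported pattern of distinct main effects without an interaction:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "mother_decoding": rng.normal(size=n),
    "child_decoding": rng.normal(size=n),
})
# Hypothetical outcome: two main effects, no interaction.
df["mutuality"] = (0.4 * df.mother_decoding
                   + 0.3 * df.child_decoding
                   + rng.normal(scale=0.5, size=n))

# 'a * b' in the formula expands to a + b + a:b, fitting both
# main effects and the interaction term the study tested.
model = smf.ols("mutuality ~ mother_decoding * child_decoding",
                data=df).fit()
print(model.summary())
```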

5.
6.
We examined the effects of the temporal quality of smile displays on impressions and decisions made in a simulated job interview. We also investigated whether similar judgments were made in response to synthetic (Study 1) and human facial stimuli (Study 2). Participants viewed short video excerpts of female interviewees exhibiting dynamic authentic smiles, dynamic fake smiles, or neutral expressions, and rated them with respect to a number of attributes. In both studies, perceivers’ judgments and employment decisions were significantly shaped by the temporal quality of smiles, with dynamic authentic smiles generally leading to more favorable job, person, and expression ratings than dynamic fake smiles or neutral expressions. Furthermore, authentically smiling interviewees were judged to be more suitable and were more likely to be short-listed and selected for the job. The findings show a high degree of correspondence in the effects created by synthetic and human facial stimuli, suggesting that temporal features of smiles similarly influence perceivers’ judgments and decisions across the two types of stimulus.
