REVINCLUSO - Revista Inclusão & Sociedade                                 ISSN 2764-4537

CODIFICAÇÃO AUTOMATIZADA DE MARCADORES NÃO-MANUAIS: ANÁLISE DE CORPORA DE LÍNGUAS DE SINAIS UTILIZANDO O FACEREADER

CODIFICACIÓN AUTOMATIZADA DE EXPRESIONES NO-MANUALES: ANÁLISIS DE CORPUS DE LENGUAS DE SEÑAS UTILIZANDO FACEREADER

AUTOMATED CODING OF NON-MANUAL MARKERS: SIGN LANGUAGE CORPORA ANALYSIS USING FACEREADER

Leticia Kaori Hanada[1]

Resumo

O presente estudo tem como objetivo introduzir a possibilidade de codificar expressões faciais e movimentos da cabeça em diferentes línguas de sinais utilizando o programa FaceReader. Considerando que os Marcadores Não-Manuais (MNMs) (expressões faciais, movimentos da cabeça e do tronco) são uma parte importante da gramática das línguas de sinais, a capacidade de codificação automática do FaceReader seria muito útil para a análise de um maior número de dados sinalizados, uma vez que o programa é capaz de calibrar o rosto dos participantes e anotar automaticamente emoções (expressões faciais emotivas), Unidades de Ação (MNMs gramaticais) e suas respectivas intensidades, assim como posições e movimentos da cabeça. No geral, parece ser muito benéfico usar o programa para o reconhecimento de movimentos faciais e da cabeça em línguas de sinais, mas também há algumas desvantagens em adotá-lo: o preço, a falta de codificação de movimentos do tronco e de projeção da cabeça para frente e para trás e a dificuldade de analisar participantes com barba e óculos. Algumas dessas desvantagens podem ser compensadas ou superadas com outros softwares ou estratégias metodológicas.

Palavras-chave: FaceReader; Marcadores Não-Manuais; Codificação; línguas de sinais.

Resumen

El presente estudio tiene como objetivo introducir la posibilidad de codificar expresiones faciales y movimientos de cabeza en diferentes lenguas de señas utilizando el programa FaceReader. Teniendo en cuenta que los Marcadores No-Manuales (MNMs) (expresiones faciales, movimientos de cabeza y torso) son una parte importante de la gramática de las lenguas de señas, la capacidad de codificación automática de FaceReader sería muy útil para investigar un mayor número de datos, ya que el programa es capaz de calibrar el rostro de los participantes y anotar automáticamente emociones (MNMs emotivos), Unidades de Acción (MNMs gramaticales) y sus intensidades, así como movimientos y posiciones de cabeza. En resumen, utilizar el programa es muy beneficioso para el reconocimiento de movimientos faciales y de cabeza en las lenguas de señas, pero también hay algunas desventajas al adoptarlo: su precio, la falta de codificación de los movimientos del torso y de la proyección de la cabeza hacia delante y hacia atrás, así como la dificultad de analizar a participantes con barba y gafas. Algunas de estas desventajas pueden ser superadas con otros softwares o estrategias metodológicas.

Palabras clave: FaceReader; Marcadores No-Manuales; Codificación; lenguas de señas.

Abstract

The present study aims to introduce the possibility of coding facial expressions and head movements in different sign languages using the FaceReader program. Considering that Non-Manual Markers (NMMs) (facial expressions, head and torso movements) are an important part of the grammar of signed languages, FaceReader's automatic coding ability would be very useful for investigating a larger amount of signed data, since the program can calibrate participants' faces and automatically annotate emotions (emotive facial expressions), Action Units (AUs) (grammatical NMMs) and their intensities, as well as head positions and movements. Overall, it seems highly beneficial to use the program for recognizing facial and head movements in sign languages. However, there are some disadvantages to adopting it, such as pricing, the lack of coding for torso movements and head protraction/retraction, and difficulties in analyzing participants with beards and glasses. Some of these disadvantages can be addressed through other software or methodological strategies.

Keywords: FaceReader; Non-Manual Markers; Coding; sign languages.

  1. Introduction

1.1 Facial expressions of emotions

Facial expressions of emotions have been under investigation mainly by naturalists (Darwin, 1872), anthropologists (Lutz & White, 1986; Dinculescu et al., 2019), and psychologists (Allport, 1924; Asch, 1952; Tomkins, 1962; 1963; Lewinski et al., 2014; Küntzler et al., 2021) for over a hundred years. In the field of emotion sciences, facial expressions have been analyzed in various disciplines, including psychophysiology (Levenson et al., 1990), neural bases (Calder et al., 1996; Davidson et al., 1990), development (Malatesta et al., 1989; Matias & Cohn, 1993), perception (Ambadar et al., 2005), and emotion disorders (Kaiser, 2002; Sloan et al., 1997), among others. In all these fields, there has been a debate regarding the existence of emotions that are universally recognized across all human cultures (Ekman, 1992; 1973); in other words, a debate over whether the following six prototypic emotions are universal or not (Jack et al., 2014; Mansourian et al., 2016; Gu et al., 2015; 2016; Wang & Pereira, 2016):

Figure 1 - The six prototypic emotional facial expressions: Anger, Disgust, Fear, Joy, Sadness, and Surprise (from left to right). Source: Cohn-Kanade database (Kanade et al., 2000, cited in Shan & Braspenning, 2010, p. 03)

Black-and-white figure of the same woman expressing the six prototypical facial expressions, presented in the following order (from left to right): Anger, Disgust, Fear, Joy, Sadness, and Surprise.

This debate is not central to our analysis, but it demonstrates the importance of facial expressions in many different areas. The significance of this research led to the development of facial expression measurement techniques (Ekman & Friesen, 1978; 1982; Ekman et al., 1971; Izard, 1979; 1983; Izard & Dougherty, 1981). Among the various systems for describing facial expressions, the Facial Action Coding System (FACS: Ekman & Friesen, 1978; Ekman et al., 2002) is "the most comprehensive, psychometrically rigorous, and widely used" (Cohn & Ekman, 2005; Ekman & Rosenberg, 2005) (see more in section 2.2.2).

1.2 Why are facial expressions and head movements important to (sign) languages?

As mentioned, facial expressions have become a subject of investigation in various fields. More recently, linguistic analyses have been conducted as well, recognizing that facial expressions also serve as a natural form of human communication. Studies by Abelin (2004), Blossom and Morgan (2006), and Fontes and Madureira [s.d.] have examined facial expressions, as well as gestures, in relation to spoken languages. These analyses contribute to our understanding of the role of non-verbal communication in expressing meaning and information, particularly in cross-linguistic multimodality.

However, when it comes to sign languages, which are visuo-spatial languages used by deaf communities (Libras – Brazilian Sign Language – from Brazil, ASL – American Sign Language – from the USA, BSL – British Sign Language – from the UK, etc.), the analysis of facial expressions and of head and torso movements is not merely complementary; it is essential. Since the 1960s, linguists (Stokoe, 1960; Battison, 1974; Friedman, 1975; Wilbur, 1987; Quadros & Karnopp, 2004) have been studying and providing evidence that sign languages are natural languages, just as spoken languages are, with their own grammatical structures (Wilcox & Wilcox, 2005). Unlike in spoken languages, however, facial expressions appear to be part of the grammar of sign languages. These facial expressions are encompassed by what sign language linguists refer to as Non-Manual Markers (NMMs). According to Liddell (2003, cited in Baker-Shenk & Cokely, 1980), NMMs describe aspects of signing that go beyond the hand movements, including facial expressions as well as head and torso movements. They serve syntactic functions such as agreement, emphasis, topicalization, and sentence modality, as well as phonological functions, such as lexicalization, pronominal reference, spatial reference, and assertive and negative particles, among others (Quadros & Karnopp, 2004) (see more in 2.2.2).

  2. Coding signed language NMMs

2.1 ELAN program

A software tool that has been used for transcribing both speech and (manual and non-manual) signs is ELAN (Wittenburg et al., 2006). It is a free tool that facilitates multimodal research on digital audio and video media, allowing for multiple tiers and the ability to open multiple files per transcription (Crasborn & Sloetjes, 2010). In summary, according to Fung et al. (2008), one major advantage of ELAN for sign language studies is its capacity to represent different kinds of linguistic information simultaneously on separate tiers (refer to Crasborn & Sloetjes (2008) for more information on ELAN functionality for sign language corpora). However, a crucial disadvantage of this program is the time required, particularly when working with extensive signed corpora or conducting quantitative research. The researcher often needs to manually transcribe each sign and movement, and this task becomes even more complex when analyzing NMMs, which involve movements of the eyebrows, eyes, mouth, nose, cheeks, head, and torso (Ferreira-Brito, 1995). Essentially, software capable of automatically analyzing all these movements would save valuable time for the researcher, allowing them to allocate that time to other research activities and analyses.

Figure 2 - SIGN LANGUAGE NOTATION IN ELAN. Source: Crasborn et al. (2006, p. 03)

The figure shows a screenshot of the ELAN program interface during sign language transcription: the video of the participant (a blonde woman signing against a blue background) appears in the top left corner, with the tiers and research annotations below the video. In the top right, there is a graph of the participant's movements.
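Because ELAN stores each transcription as an XML file (.eaf), tier contents can also be processed programmatically once the manual annotation is done. The following Python sketch, which assumes a hypothetical file name ("session01.eaf") and tier name ("NMM_brows"), illustrates how annotations and their time stamps could be extracted with the standard library; it is only an illustration of the file format, not a component of ELAN itself.

```python
# A minimal sketch of reading annotations from an ELAN .eaf file (which is XML),
# assuming a hypothetical file "session01.eaf" with a tier named "NMM_brows".
# It uses only the Python standard library.
import xml.etree.ElementTree as ET

def read_tier(eaf_path, tier_id):
    tree = ET.parse(eaf_path)
    root = tree.getroot()

    # Map each time slot ID to its value in milliseconds.
    time_slots = {
        ts.get("TIME_SLOT_ID"): int(ts.get("TIME_VALUE", 0))
        for ts in root.find("TIME_ORDER")
    }

    annotations = []
    for tier in root.findall("TIER"):
        if tier.get("TIER_ID") != tier_id:
            continue
        for ann in tier.iter("ALIGNABLE_ANNOTATION"):
            start = time_slots[ann.get("TIME_SLOT_REF1")]
            end = time_slots[ann.get("TIME_SLOT_REF2")]
            value = ann.findtext("ANNOTATION_VALUE", default="")
            annotations.append((start, end, value))
    return annotations

# Example: list every eyebrow annotation on the tier with its duration.
for start, end, value in read_tier("session01.eaf", "NMM_brows"):
    print(f"{start:>8} ms - {end:>8} ms  ({end - start} ms): {value}")
```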

2.2 FaceReader

Facial expressions in ASL have been compared to intonation in spoken languages (Padden, 1990; Wilbur, 1990). This comparison arises from the observation that facial expressions convey both emotional (affective) and linguistic information, just as intonation in spoken languages uses pitch, voice quality, and volume to convey emotional cues while also conveying syntactic and lexical information (Reilly et al., 1992).

2.2.1 Affective facial expressions

FaceReader is a program designed for the detection of facial expressions, capable of classifying them according to emotions such as happiness, sadness, anger, surprise, fear, disgust, and neutrality (Figure 1) (Loijens & Krips, 2018). This categorization is important for investigating affective facial expressions in sign languages, as well as in multimodal analyses of spoken languages. Affective facial expressions in ASL are typically used independently of language and exhibit variability in intensity (Reilly, 2006) and inconsistent timing (Scherer, 1986). To illustrate this, imagine that when we are angry or disapprove of something, we may furrow our brows for a few seconds or minutes. Figure 3 provides an example of an anger facial expression curve, which begins before the signs, continues after their completion, and varies in intensity.

Figure 3 - ANGER FACIAL EXPRESSION CURVE IN THE SENTENCE “I HATE HOMEWORK”. Source: Reilly (2006, p.267)

Figure with a curve representing the intensity of the facial expression of anger. It starts at zero, increases, and then returns to zero; its shape resembles a mountain. Below the curve is the gloss "ME HATE HOMEWORK", whose English translation would be: I hate homework.
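For illustration, a curve like the one in Figure 3 could be inspected from a FaceReader data export. The sketch below assumes a hypothetical CSV export ("participant01_export.csv") with a time column in seconds and one intensity column per emotion; the actual column names and formats should be checked against the researcher's own FaceReader output.

```python
# A sketch of how the scope of an affective curve like Figure 3 could be located
# in an exported FaceReader log. The file name and column names ("Video Time",
# "Angry") are assumptions for illustration only.
import pandas as pd

log = pd.read_csv("participant01_export.csv")   # hypothetical export file
time = log["Video Time"]                         # time in seconds (assumed)
anger = log["Angry"]                             # intensity between 0 and 1

# Treat any stretch above a small threshold as the scope of the affective NMM.
threshold = 0.2
active = anger > threshold
if active.any():
    onset = time[active].iloc[0]
    offset = time[active].iloc[-1]
    peak = anger.max()
    print(f"Anger scope: {onset:.2f}s to {offset:.2f}s, peak intensity {peak:.2f}")
else:
    print("No anger above threshold in this clip.")
```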

2.2.2 Grammatical NMMs - Action Units

NMMs also constitute a significant component of sign language grammar (Baker & Cokely, 1980; Bergman, 1984; Engberg-Pedersen, 1990). In contrast to affective facial expressions, grammatical NMMs consistently occur with a manually signed sentence as they are governed by linguistic rules (Reilly, 2006). They typically begin a few milliseconds before the start of the manual phrase, reach their intensity peak, and maintain this intensity until the end of the manual phrase (Baker-Shenk, 1983). Additionally, the articulation’s onset and offset happen abruptly and are synchronized with the syntactic function (Baker-Shenk, 1983; Liddell, 1978, 1980).

FaceReader can facilitate the analysis of grammatical NMMs by relying on the Facial Action Coding System (FACS) for reliable coding. FACS decomposes almost every possible facial expression into action units (AUs) (Figure 4), AU combinations (Figure 5), and AU intensity levels (Figure 6) (Cohn et al., 2007), and FaceReader is able to code these automatically.

Figure 4 - FACS ACTION UNITS[2]. Source: Tian et al. (2011, p.490)

The figure presents 30 action units: 1-inner brow raiser; 2-outer brow raiser; 3-brow lowerer; 4-upper lid raiser; 5-cheek raiser; 6-lid tightener; 7-lid droop; 8-slit; 9-eyes closed; 10-squint; 11-blink; 12-wink; 13-nose wrinkler; 14-upper lip raiser; 15-nasolabial deepener; 16-lip corner puller; 17-cheek puffer; 18-dimpler; 19-lip corner depressor; 20-lower lip depressor; 21-chin raiser; 22-lip puckerer; 23-lip stretcher; 24-lip funneler; 25-lip tightener; 26-lip pressor; 27-lips part; 28-jaw drop; 29-mouth stretch; 30-lip suck.

Figure 5 - COMBINATIONS OF FACS ACTION UNITS. Source: Tian et al. (2011, p.491)

The figure presents 20 combinations of action units: 1-Inner Brow Raiser with Outer Brow Raiser; 2-Inner Brow Raiser with Brow Lowerer; 3-Brow Lowerer with Upper Lid Raiser; 4-Inner Brow Raiser with Outer Brow Raiser and Brow Lowerer; 5-Inner Brow Raiser with Outer Brow Raiser and Upper Lid Raiser; 6-Inner Brow Raiser with Cheek Raiser; 7-Cheek Raiser and Lid Tightener; 8-Inner Brow Raiser with Outer Brow Raiser, Upper Lid Raiser, Cheek Raiser, and Lid Tightener; 9-Lip Tightener and Lip Pressor; 10-Nose Wrinkler with Chin Raiser; 11-Nose Wrinkler with Lips Part; 12-Nose Wrinkler with Chin Raiser, Lip Tightener, and Lip Pressor; 13-Upper Lip Raiser with Chin Raiser; 14-Upper Lip Raiser with Lips Part; 15-Upper Lip Raiser with Lip Corner Depressor and Chin Raiser; 16-Lip Corner Puller with Lips Part; 17-Lip Corner Puller with Jaw Drop; 18-Lip Corner Depressor with Chin Raiser; 19-Chin Raiser with Lip Tightener and Lip Pressor; 20-Lip Stretcher with Lips Part.

Figure 6 - FACS INTENSITY DEGREES[3]. Source: FaceReader Reference Manual 9 (p. 262)

The figure shows the scale of AU intensity levels along a timeline, from Not active through A (Trace), B (Slight), C (Pronounced), and D (Severe) to E (Max) (see note 3).
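To make the intensity scale of Figure 6 concrete, the short sketch below converts continuous AU intensity values (between 0 and 1) into the FACS letter grades, using the thresholds reported in the FaceReader Reference Manual (see note 3). The AU values themselves are invented for the example.

```python
# A small sketch that converts FaceReader's continuous AU intensity values (0-1)
# into the FACS letter grades of Figure 6, using the thresholds listed in the
# FaceReader Reference Manual (note 3). The sample AU values are invented.
GRADES = [
    (0.100, "Not active"),
    (0.217, "A (Trace)"),
    (0.334, "B (Slight)"),
    (0.622, "C (Pronounced)"),
    (0.910, "D (Severe)"),
    (1.000, "E (Max)"),
]

def au_grade(intensity: float) -> str:
    """Return the FACS grade for a given AU intensity value."""
    for upper, label in GRADES:
        if intensity <= upper:
            return label
    return "E (Max)"

for au, value in {"AU1 (inner brow raiser)": 0.05,
                  "AU4 (brow lowerer)": 0.48,
                  "AU26 (jaw drop)": 0.93}.items():
    print(f"{au}: {value:.2f} -> {au_grade(value)}")
```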

The following example from Israeli Sign Language (ISL) illustrates a conditional sentence marked with grammatical NMMs. It conveys the meaning "If the goalkeeper had caught the ball, they would have won the game". In this sentence, a brow raise and a squint have scope over the conditional clause, which is concluded with a forward lean. The main clause, on the other hand, is marked by a head up and back, as well as a neutral facial expression (Dachkovsky & Sandler, 2009).

Figure 7 - CONDITIONAL CLAUSE IN ISRAELI SIGN LANGUAGE. Source: Dachkovsky & Sandler (2009, p. 252).

The figure illustrates the ISL conditional sentence described above, with its grammatical non-manual marking.

2.2.3 Grammatical NMMs - Head movements

        As mentioned, NMMs play a crucial role in sign languages. To fully comprehend the role of non-manuality in sign languages, researchers need to analyze the functions of the different articulators separately, including the upper and lower face, head, and torso. Many researchers have focused on studying manual movements (Ann, 2005; Eccarius & Brentari, 2007; Henner et al., 2013, among others), some have explored facial articulation (Wilbur & Patschke, 1998; Boyes Braem & Sutton-Spence, 2001), and only a few have studied head and torso movements (Liddell, 1986; Schalber, 2006; Lackner, 2015).

Head movements have been observed in negative and positive statements, as well as in questions (e.g., Zeshan, 2006; Lackner, 2015; Puupponen et al., 2015), and they also play a role in marking prosodic components and their boundaries (e.g., Sandler, 2012). For example, head movements, along with chest movements and hand orientation, are responsible for the pronominal distinction (second vs. third person) in sign languages such as American, Brazilian, Croatian, and French Belgian Sign Language (Berenz, 2002; Ciciliani & Wilbur, 2006; Meurant, 2008).

        In this regard, FaceReader can support the study of head movements by automatically coding head orientation, including yaw, pitch, and roll, which represent degrees of deviation from looking straight ahead (Figure 8) and correspond, respectively, to the head rotation, flexion, and lateral flexion shown in Figure 9. Additionally, it can analyze head position along the horizontal, vertical, and depth axes, measured in millimeters relative to the camera (FaceReader Reference Manual).

Figure 8 - HEAD ORIENTATION. Source: FaceReader Reference Manual (p.287)

The figure presents the head of a man crossed by three 180-degree axes: 1- one that crosses his nose (labeled roll); 2- one that crosses his ears (labeled pitch); 3- one that crosses the center of his skull, parallel to the face (labeled yaw).

Figure 9 - ANATOMY OF HEAD MOVEMENTS ACCORDING TO THE CARDINAL PLANES. Source: Puupponen (2015, p.180)

The figure presents four head movements: 1- Rotation: when someone turns their head sideways, like the movement for expressing NO; 2- Flexion or extension: when someone moves their head up and down, like the movement for expressing YES; 3- Lateral flexion: when someone tilts their head sideways; 4- Protraction/retraction: when someone projects the head forward and backward.
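As a simple illustration of how the angles in Figure 8 relate to the anatomical categories in Figure 9, the sketch below labels a frame according to its largest angular deviation, treating yaw as rotation, pitch as flexion/extension, and roll as lateral flexion. The 10-degree cut-off and the example angles are arbitrary assumptions, and protraction/retraction cannot be captured this way (see section 2.3).

```python
# A sketch relating FaceReader's head orientation angles (Figure 8) to the
# anatomical categories of Figure 9: yaw ~ rotation (e.g. a headshake), pitch ~
# flexion/extension (e.g. a nod), roll ~ lateral flexion (a head tilt).
# The 10-degree cut-off and the example values are illustrative assumptions.
def head_movement_label(yaw: float, pitch: float, roll: float,
                        min_angle: float = 10.0) -> str:
    """Return the anatomical label of the largest deviation, if any."""
    deviations = {
        "rotation (yaw)": abs(yaw),
        "flexion/extension (pitch)": abs(pitch),
        "lateral flexion (roll)": abs(roll),
    }
    label, angle = max(deviations.items(), key=lambda item: item[1])
    return label if angle >= min_angle else "near neutral"

# Example frames (in degrees): a headshake-like frame and a nod-like frame.
print(head_movement_label(yaw=25.0, pitch=3.0, roll=2.0))    # rotation (yaw)
print(head_movement_label(yaw=2.0, pitch=-18.0, roll=1.0))   # flexion/extension (pitch)
```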

2.2.4 Model Quality

        Since sign language data is visual and recorded in videos, it is important to ensure that the videos have a minimum resolution and that the individuals being recorded are at an appropriate distance from the camera. This allows for better analysis of facial, head, and torso movements. In other words, a high-quality video is essential for both manual and automated visual analysis.

To assist researchers in assessing video quality for facial and head analysis, FaceReader provides a Model Quality bar. This tool helps researchers determine whether the video meets the required quality standards for automated coding by the program. Figure 10 illustrates the Model Quality bar, in which the green bar should cross the dashed line:

Figure 10 - MODEL QUALITY. Source: FaceReader Reference Manual 9 (p.96)

The figure is extracted from the FaceReader program: a man is smiling, and the program indicates that his Cheek is Raised, his Lip Corner is Pulled, his Lips are Parted, and his Jaw is Dropped. The figure also presents the Model Quality of this frame in green, indicating good quality.

In the Model Quality bar, the colors indicate the level of quality for the automatic identification of AUs: red represents low quality, orange represents medium quality, and green represents good quality (FaceReader Reference Manual). To determine the Model Quality, FaceReader relies on the analysis of pitch and yaw movements. It requires that the participant stand or sit in front of the camera, and by default the software accepts a maximum angle of 30° in both directions. If the angle exceeds 30°, the face model will be rejected, indicating a potential decrease in the accuracy of the automatic coding (FaceReader Reference Manual 9). This ensures that the program provides reliable results when analyzing AUs.
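A researcher can anticipate part of this behaviour before running a full analysis. The sketch below, assuming a hypothetical export file with "Pitch" and "Yaw" columns in degrees, estimates how many frames of a recording exceed the default 30° limit and would therefore be likely to lower the Model Quality or be rejected.

```python
# A sketch of a pre-screening step inspired by FaceReader's default behaviour of
# rejecting face models whose pitch or yaw exceeds 30 degrees. The file name and
# column names ("Pitch", "Yaw") are assumptions; the actual check is performed
# inside FaceReader, so this only helps estimate how much of a recording is
# likely to be analysable.
import pandas as pd

frames = pd.read_csv("participant01_head.csv")        # hypothetical export file
rejected = (frames["Pitch"].abs() > 30) | (frames["Yaw"].abs() > 30)

share = rejected.mean() * 100
print(f"{share:.1f}% of frames exceed the default 30-degree pitch/yaw limit "
      f"and would likely be rejected by the face model.")
```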

Nowadays (2023), the program works with three different face models: the General, EastAsian, and Baby models[4]. This means that, by selecting one of these options, the program will perform with greater accuracy depending on the participants' age and phenotype. Also, there have not been enough investigations regarding atypical signers with facial paralysis and how the program would assess their facial expressions. Addressing this issue could enhance the program's usability, benefiting not only the analysis of sign languages but also various other fields in which facial expressions play a crucial role.

2.2.5 Calibrations

        When conducting a study involving a deaf community, it is common to collect data from multiple signers who show individual variation in their facial expressions. It is important to consider that some individuals may naturally exhibit specific facial expressions even in their neutral state. For example, in Figure 11, the woman's lowered outer eyebrows might lead someone to interpret her expression as sad, when in reality it could simply be her neutral facial expression.

Figure 11 - WOMAN WITH LOWERED OUTER EYEBROWS. Source: Transforming (2004)

The figure presents the upper part of a girl's face: her eyes are open and her outer eyebrows are lowered.

To ensure accurate interpretation and analysis of facial expressions, it is crucial to establish a baseline understanding of each participant's natural facial expressions. In this context, FaceReader allows the normalization of participants' faces by calibrating them and eliminating person-specific biases. The researcher can choose to calibrate their data using one of two methods: "Participant calibration" or "Continuous calibration".

If the researcher is able to capture a neutral phase before the experiment, they can use the "Participant calibration" method. The software then compares subsequent facial expressions to the participant's calibrated neutral face, providing a more accurate assessment of emotions and AUs. On the other hand, if capturing a neutral face before the experiment is not possible, the "Continuous calibration" method can be employed. This method continuously adjusts the analysis based on the participant's changing facial expressions. We can observe the effect of calibration in Figure 12. In this example, before the face calibration, the program mainly recognized the individual's face as neutral (grey), but still with remnants of a happy (green) and angry (red) face. After calibration, however, it was capable of identifying the expression as neutral (neither happy nor angry in this case).

Figure 12 - CALIBRATION EFFECT. Source: FaceReader Reference Manual 9 (p.111)

The figure presents two graphs, one indicating someone's facial expressions before calibration: neutral expression around 0.60, happy around 0.08, and angry around 0.02.

The second graph represents the facial expressions of the same person after calibration: the neutral expression is now around 0.80, and the happy and angry expressions dropped to 0.
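Conceptually, "Participant calibration" works like a baseline correction: the expression values of a participant's resting face are discounted from later frames so that person-specific features (such as naturally lowered outer eyebrows) are not misread as emotions. The sketch below is not Noldus's actual algorithm, only a toy illustration of that idea, with numbers chosen to mirror the order of magnitude shown in Figure 12.

```python
# A conceptual analogue of "Participant calibration": subtract a signer's
# resting-face bias from a later frame and renormalise the scores.
# This is NOT Noldus's actual algorithm; the values are illustrative only.
baseline = {"neutral": 0.60, "happy": 0.08, "angry": 0.02}   # neutral-phase means
frame    = {"neutral": 0.62, "happy": 0.09, "angry": 0.03}   # a later frame

# Remove the resting bias from the non-neutral categories and renormalise.
corrected = {k: max(frame[k] - baseline[k], 0.0) for k in frame if k != "neutral"}
corrected["neutral"] = max(1.0 - sum(corrected.values()), 0.0)

total = sum(corrected.values()) or 1.0
corrected = {k: round(v / total, 2) for k, v in corrected.items()}
print(corrected)   # e.g. {'happy': 0.01, 'angry': 0.01, 'neutral': 0.98}
```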

2.3 Software disadvantages

        Despite all the positive aspects previously described, the use of the FaceReader program for sign language analysis has some disadvantages that need to be addressed here. The first, and the most challenging one to overcome, is the pricing. As mentioned, FaceReader is of interest to researchers in many academic areas, such as psychologists, anthropologists, naturalists, and linguists, among others. It is used by over 1,000 universities worldwide and has gained popularity among renowned enterprises such as Philips, L'Oréal, Fiat, and others for predicting advertisement effectiveness (see more in Measure [s.d.]) and for conducting consumer research and user experience analysis (see more in FaceReader [s.d.]). The high demand for this product directly affects its pricing. The most accessible scenario would be for the university or professors to purchase the program license and make it available to their students. However, this scenario is not feasible in every country, as not every university has enough research funding available to invest in such a program.

Secondly, a comprehensive study of a sign language would involve investigating various aspects, including the movements of the face, head, hands, and torso. As mentioned earlier, the software can analyze emotions (affective facial expressions), AUs (grammatical NMMs), and head movements. However, there is currently no technology available that can accurately code the movements of the hands in sign languages. Additionally, the FaceReader program does not offer the capability to code torso movements or the protraction/retraction of the head (Figure 9). Torso movements[5] often resemble the three head movements described in section 2.2.3 (torso rotation, flexion/extension, and lateral flexion, as depicted in Figure 13). It is possible that future advancements in technology may allow for the inclusion of these features in the program's analysis capabilities.

Figure 13 - ANATOMY OF TORSO MOVEMENTS ACCORDING TO THE CARDINAL PLANES. Source: Puupponen (2015, p.180).

The figure presents three torso movements: 1- Rotation: when someone turns their torso sideways, like the movement for expressing NO; 2- Flexion or extension: when someone bends their torso up and down, like the movement for expressing YES; 3- Lateral flexion: when someone tilts their torso sideways.

 Lastly, it is worth noting that the presence of certain physical characteristics can impact the Model Quality of FaceReader. Specifically, if a participant has a beard or wears glasses, the program’s ability to accurately recognize AUs may be compromised, since both beards and glasses can interfere with the visibility of facial movements and expressions, leading to a potentially lower Model Quality. Researchers should take into consideration the potential influence of these factors when using the program.

Conclusions

Considering the capabilities of the FaceReader program in calibrating participants' faces, automatically annotating emotions (affective facial expressions), AUs (grammatical NMMs), and their intensities, as well as capturing head movements, we can affirm that FaceReader is indeed highly beneficial for recognizing facial and head movements in sign languages. Despite the mentioned disadvantages, such as pricing, the lack of torso movement and head protraction/retraction coding, and challenges with participants with specific characteristics such as wearing glasses or having beards, there are possible solutions to overcome these limitations. Pricing can potentially be negotiated with Noldus, the company behind FaceReader, and universities or professors can purchase the software license and make it available to their research groups or students. For torso movements and head protraction/retraction, manual transcription can be performed with the ELAN program (as discussed in section 2.1). Furthermore, depending on the type of experiment, researchers can ask participants to remove their glasses or shave their beards to minimize interference with facial recognition. By addressing these considerations, researchers can harness the benefits of the FaceReader program for the analysis of facial and head movements in sign languages.

References

Abelin, Å. (2004). Cross-Cultural Multimodal Interpretation of Emotional Expressions _ An Experimental Study of Spanish and Swedish. In SPEECH PROSODY 2004, INTERNATIONAL CONFERENCE.

Allport, F. H. (1924). Social psychology. Boston, Houghton.

Asch, S. E. (1952). Social Psychology. Englewood Cliffs, New Jersey: PrenticeHall.

Ambadar, Z., Schooler, J. W., & Cohn, J. F. (2005). Deciphering the enigmatic face: The importance of facial dynamics in interpreting subtle facial expressions. PSYCHOLOGICAL SCIENCE, 16(5), 403-410.

Ann, J. (2005). A functional explanation of Taiwan Sign Language handshape frequency. LANGUAGE AND LINGUISTICS (TAIPEI), 6(2), 217.

Baker-Shenk , C. (1983). A micro-analysis of the nonmanual components of questions in American Sign Language. Unpublished doctoral dissertation, University of California, Berkeley.

Baker-Shenk, C. L., & Cokely, D. (1991). American Sign Language: A teacher's resource text on grammar and culture. Gallaudet University Press.

Berenz, N. (2002). Insights into person deixis. SIGN LANGUAGE & LINGUISTICS, 5(2), 203-227.

Bergman, B. (1984). Non-manual components of signed language: Some sentence types in Swedish Sign Language. RECENT RESEARCH ON EUROPEAN SIGN LANGUAGES, 49-59.

Blossom, M., & Morgan, J. L. (2006). Does the face say what the mouth says? A study of infants’ sensitivity to visual prosody. PROCEEDINGS OF THE 30TH ANNUAL BOSTON UNIVERSITY CONFERENCE ON LANGUAGE DEVELOPMENT. Somerville, MA.

Braem, P. B. (1999). Rhythmic temporal patterns in the signing of deaf early and late learners of Swiss German Sign Language. LANGUAGE AND SPEECH, 42(2-3), 177-208.

Boyes-Braem, P., & Sutton-Spence, R. (2001). The hands are the head of the mouth: The mouth as articulator in sign languages.

Calder, A. J., Young, A. W., Rowland, D., Perrett, D. I., Hodges, J. R., & Etcoff, N. L. (1996). Facial emotion recognition after bilateral amygdala damage: Differentially severe impairment of fear. COGNITIVE NEUROPSYCHOLOGY, 13, 699–745

Ciciliani, T. A., & Wilbur, R. B. (2006). Pronominal system in Croatian Sign Language. SIGN LANGUAGE & LINGUISTICS, 9(1-2), 95-132.

Cohn, J. F., Ambadar, Z., & Ekman, P. (2007). Observer-based measurement of facial expression with the Facial Action Coding System. THE HANDBOOK OF EMOTION ELICITATION AND ASSESSMENT, 1(3), 203-221.

Cohn, J. F., & Ekman, P. (2005). Measuring facial action. In J. A. Harrigan, R. Rosenthal, & K. R. Scherer (Eds.), THE NEW HANDBOOK OF NONVERBAL BEHAVIOR RESEARCH (pp. 9–64). New York: Oxford University Press.

Crasborn, O., Sloetjes, H., Auer, E., & Wittenburg, P. (2006). Combining video and numeric data in the analysis of sign languages with the ELAN annotation software. In 2ND WORKSHOP ON THE REPRESENTATION AND PROCESSING OF SIGN LANGUAGES: LEXICOGRAPHIC MATTERS AND DIDACTIC SCENARIOS (pp. 82-87). ELRA.

Crasborn, O. A., & Sloetjes, H. (2010). Using ELAN for annotating sign language corpora in a team setting.

Crasborn, O., & Sloetjes, H. (2008). Enhanced ELAN functionality for sign language corpora. In 6TH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION (LREC 2008)/3RD WORKSHOP ON THE REPRESENTATION AND PROCESSING OF SIGN LANGUAGES: CONSTRUCTION AND EXPLOITATION OF SIGN LANGUAGE CORPORA (pp. 39-43).

Crasborn, O., & Van der Kooij, E. (2013). The phonology of focus in Sign Language of the Netherlands. JOURNAL OF LINGUISTICS, 49(3), 515-565.

Dachkovsky, S., & Sandler, W. (2009). Visual intonation in the prosody of a sign language. LANGUAGE AND SPEECH, 52 (2-3), 287-314.

Darwin, C. (1872). The expression of emotions in animals and man. London: Murray, 11, 1872.

Davidson, R. J., Ekman, P., Saron, C. D., Senulis, J. A., & Friesen, W. V. (1990). Approach-withdrawal and cerebral asymmetry: Emotional expression and brain physiology: I. JOURNAL OF PERSONALITY AND SOCIAL PSYCHOLOGY, 58, 330–341

Dinculescu, A. et al. (2019). Automatic identification of anthropological face landmarks for emotion detection. 2019 9th International Conference on Recent Advances in Space Technologies (RAST). IEEE, 585-590.

Eccarius, P., & Brentari, D. (2007). Symmetry and dominance: A cross-linguistic study of signs and classifier constructions. LINGUA, 117(7), 1169-1201.

Ekman, P. (1973). Universal facial expressions in emotion. STUDIA PSYCHOLOGICA, 15(2), 140.

Ekman, P. (1992). Are there basic emotions? Psychol. Rev, 99, 550–553. 10.1037/0033-295X.99.3.550

Ekman, P., & Friesen, W. V. (1978). Facial action coding system. ENVIRONMENTAL PSYCHOLOGY & NONVERBAL BEHAVIOR.

Ekman, P., & Friesen, W. V. (1982). Rationale and reliability for EMFACS coders. Unpublished manuscript.

Ekman, P., Friesen, W. V., & Tomkins, S. S. (1971). Facial affect scoring technique: A first validation study. SEMIOTICA, 3, 37–58.

Ekman, P., Friesen, W. V., & Hager, J. C. (Eds.). (2002). Facial Action Coding System [E-book]. Salt Lake City, UT: Research Nexus.Ekman, P., Friesen, W. V., & O’Sullivan, M. (1988). Smiles when lying. JOURNAL OF PERSONALITY AND SOCIAL PSYCHOLOGY, 54, 414–420

Ekman, P., & Rosenberg, E. (Eds.). (2005). What the face reveals (2nd ed.). New York: Oxford University Press

Engberg-Pedersen, E. (1990). Pragmatics of nonmanual behaviour in Danish Sign Language. SLR 87, 121-128.

Engberg-Pedersen, E. (1993). Space in Danish Sign Language: The semantics and morphosyntax of the use of space in a visual language (Vol. 19). Gallaudet University Press.

FaceReader: emotion analysis. [s.d.]. Noldus. Retrieved June 21, 2023, from https://www.noldus.com/facereader

Ferreira-Brito, L. F. (1995). Por uma gramática de lınguas de sinais. TEMPO BRASILEIRO, Rio de Janeiro.

Fontes, M. A., & Madureira, S. [s.d.]. Um experimento sobre a linguagem não verbal na detecção de efeitos de sentidos: o questionamento da autenticidade. ESTUDOS EM VARIAÇÃO LINGUÍSTICA NAS LÍNGUAS ROMÂNICAS-2, 138.

Fung, C. H., Sze, F., Lam, S., & Tang, G. (2008). Simultaneity vs. sequentiality: Developing a transcription system of Hong Kong Sign Language acquisition data. In SIGN-LANG@ LREC 2008 (pp. 22-27). European Language Resources Association (ELRA).

Gu, S., Wang, F., Yuan, T., Guo, B., and Huang, H. (2015). Differentiation of primary emotions through neuromodulators: review of literature. INT. J. NEUROL. RES. 1, 43–50. 10.17554/j.issn.2313-5611.2015.01.19

Gu, S., Wang, W., Wang, F., and Huang, J. H. (2016). Neuromodulator and emotion biomarker for stress induced mental disorders. NEURAL PLAST. 2016:2609128. 10.1155/2016/2609128

Henner, J., Geer, L. C., & Lillo-Martin, D. (2013, May). Calculating frequency of occurrence of ASL handshapes. In LSA ANNUAL MEETING EXTENDED ABSTRACTS (Vol. 4, pp. 16-1).

Hodge, G. C. E., & Ferrara, L. (2014). Showing the story: Enactment as performance in Auslan narratives. In SELECTED PAPERS FROM THE 44TH CONFERENCE OF THE AUSTRALIAN LINGUISTIC SOCIETY, 2013 (Vol. 44, pp. 372-397). University of Melbourne.

Izard, C. E. (1979). Facial expression scoring manual (FESM). Newark: University of Delaware Press.

Izard, C. E. (1983). Maximally discriminative facial movement coding system (MAX). Unpublished manuscript, University of Delaware, Newark

Izard, C. E., & Dougherty, L. M. (1982). Two complementary systems for measuring facial expressions in infants and children. MEASURING EMOTIONS IN INFANTS AND CHILDREN, 1, 97-126.

Jack, R., Garrod, O., and Schyns, P. (2014). Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time. CURR. BIOL. 24, 187–192. doi: 10.1016/j.cub.2013.11.064

Kaiser, S. (2002). Facial expressions as indicators of “functional” and “dysfunctional” emotional processes. In M. Katsikitis (Ed.), THE HUMAN FACE: MEASUREMENT AND MEANING (pp. 235– 253). Dordrecht, Netherlands: Kluwer Academic.

Kanade, T., Cohn, J., & Tian, Y. (2000). Comprehensive database for facial expression analysis.  Proceedings fourth IEEE international conference on automatic face and gesture recognition (cat. No. PR00580). IEEE, 46-53.

Küntzler, T., Höfling, T., Tim, A. & Alpers, G. W. (2021). Automatic facial expression recognition in standardized and non-standardized emotional expressions. Frontiers in psychology, 12, 1086.

Lackner, A. (2015). Linguistic functions of head and body movements in Austrian Sign Language (ÖGS): A corpus-based analysis (Karl-Franzens-University Graz, 2013). SIGN LANGUAGE & LINGUISTICS, 18(1), 151-157.

Levenson, R. W., Ekman, P., & Friesen, W. V. (1990). Voluntary facial action generates emotion-specific autonomic nervous system activity. Psychophysiology, 27, 363–384.

Lewinski, P., Den, U., Tim, M. & Butler, C. (2014). Automated facial coding: Validation of basic emotions and FACS AUs in FaceReader. Journal of Neuroscience, Psychology, and Economics, 7 (4), 227.

Liddell, S. K. (1978). Nonmanual signals and relative clauses in ASL. In: Patricia Siple (Ed.), Understanding language through sign language research. New York: Academic Press.

Liddell, S. K. (1980). American Sign Language syntax. The Hague: Mouton

Liddell, S. K. (1986). Head thrust in ASL conditional marking. SIGN LANGUAGE STUDIES, 52(1), 244-262.

Loijens, L., & Krips, O. (2018). FaceReader methodology note. A WHITE PAPER BY NOLDUS INFORMATION TECHNOLOGY.

Lutz, C., & White, G. M. (1986). The anthropology of emotions. ANNUAL REVIEW OF ANTHROPOLOGY, 15(1), 405-436.

Malatesta, C. Z., Culver, C., Tesman, J. R., & Shephard, B. (1989). The development of emotion expression during the first two years of life. Monographs of the Society for Research in Child Development, 54.

Mansourian, S., Corcoran, J., Enjin, A., Lofstedt, C., Dacke, M., and Stensmyr, M. (2016). Fecal-derived phenol induces egg-laying aversion in drosophila. CURR. BIOL. 26, 2762–2769. 10.1016/j.cub.2016.07.065

Matias, R., & Cohn, J. F. (1993). Are max-specified infant facial expressions during face-to-face interaction consistent with differential emotions theory? Developmental Psychology, 29, 524–531.

Measure advertisement effectiveness with Emotion AI. [s.d.]. FaceReader-online. Retrieved June 21, 2023, from https://www.facereader-online.io/main

Meurant, L. (2008). The Speaker’s Eye Gaze Creating deictic, anaphoric and pseudo-deictic spaces of reference. SIGN LANGUAGES: SPINNING AND UNRAVELING THE PAST, PRESENT AND FUTURE. TISLR9, 403-414.

Padden, C. (1990). The relation between space and grammar in ASL verb morphology. SIGN LANGUAGE RESEARCH: THEORETICAL ISSUES, 118-132.

Puupponen, A., Wainio, T., Burger, B., & Jantunen, T. (2015). Head movements in Finnish Sign Language on the basis of Motion Capture data: A study of the form and function of nods, nodding, head thrusts, and head pulls. SIGN LANGUAGE & LINGUISTICS, 18(1), 41-89.

Quadros, R. M., & Karnopp, L. B. (2004). Língua de sinais brasileira: estudos linguísticos. Porto Alegre: Artmed Editora.

Reilly, J. S., McIntire, M. L., & Seago, H. (1992). Affective prosody in American sign language. SIGN LANGUAGE STUDIES, 113-128.

Reilly, J. (2006). How faces come to serve grammar: The development of nonmanual morphology in American Sign Language. Advances in the sign language development of deaf children, 262-290.

Sandler, W. (2012). 4. Visual prosody. In SIGN LANGUAGE (pp. 55-76). De Gruyter Mouton.

Schalber, K. (2006). What is the chin doing? An analysis of interrogatives in Austrian Sign Language. SIGN LANGUAGE & LINGUISTICS, 9(1-2), 133-150.

Sloan, D. M., Straussa, M. E., Quirka, S. W., & Sajatovic, M. (1997). Subjective and expressive emotional responses in depression. JOURNAL OF AFFECTIVE DISORDERS, 46, 135–141.

Tian, Y., Kanade, T., & Cohn, J. F. (2011). Facial expression recognition. HANDBOOK OF FACE RECOGNITION, 487-519.

Tomkins, S. (1962). Affect imagery consciousness: Volume I: The positive affects. Springer publishing company.

Tomkins, S. (1963). Affect imagery consciousness: Volume II: The negative affects. Springer publishing company.

Transforming the perfect brow shape. (2004). Brow Diva. Retrieved June 21, 2023, from https://browdiva.com/blog/transforming-the-perfect-brow-shape

Wang, F., and Pereira, A. (2016). Neuromodulation, emotional feelings and affective disorders. MENS SANA MONOGR. 14, 5–29. doi: 10.4103/0973-1229.154533

Wilbur, R. B. (1987). American Sign Language: linguistic and applied dimensions. Little, Brown and Co.

Wilbur, R. B. (1990). Why syllables? What the notion means for ASL research. THEORETICAL ISSUES IN SIGN LANGUAGE RESEARCH, 1, 81-108.

Wilbur, R. B., & Patschke, C. G. (1998). Body leans and the marking of contrast in American Sign Language. JOURNAL OF PRAGMATICS, 30(3), 275-303.

Wilbur, R. B. (2013). Phonological and prosodic layering of nonmanuals in American Sign Language. In THE SIGNS OF LANGUAGE REVISITED (pp. 196-220). Psychology Press.

Wittenburg, P., Brugman, H., Russel, A., Klassmann, A., & Sloetjes, H. (2006). ELAN: A professional framework for multimodality research. In 5TH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION (LREC 2006) (pp. 1556-1559).

Zeshan, U. (2006). INTERROGATIVE AND NEGATIVE CONSTRUCTIONS IN SIGN LANGUAGE (p. 375). Ishara Press.

Revista Inclusão e Sociedade, v.3, n.1, 2023        


[1] Institute of Language Studies, University of Campinas, Brazil, https://orcid.org/0000-0002-0135-1473

[2] Every Action Unit can be visually accessed in https://imotions.com/blog/facial-action-coding-system/

[3] AU intensities: Not active: 0.00 - 0.100; A (Trace): 0.100 - 0.217; B (Slight): 0.217 - 0.334; C (Pronounced): 0.334 - 0.622; D (Severe): 0.622 - 0.910; E (Max): 0.910 - 1.000 (FaceReader Reference Manual)

[4] The Children and Elderly models have become obsolete.

[5] Body movements have been observed to play a role in constructed action (CA), helping to construct meaning in the signed space and aligning with various discourse-level components (Engberg-Pedersen, 1993; Wilbur & Patschke, 1998; Boyes Braem, 1999; Crasborn & Van der Kooij, 2013; Hodge & Ferrara, 2013).