Self-focused versus dialogic features of gesturing during simultaneous interpreting


Abstract

The present study considers an implicit debate in the field of gesture studies as to whether gestures are produced primarily for the speaker or for the addressee. It considers the unique monologic setting of simultaneous interpreters working in a booth in which there is no visible audience present and where they only hear and do not see the speaker whose words they are interpreting. The hypotheses (H) are that the interpreters might produce more representational gestures, to aid in their own idea formulation (H1), and self-adapter movements, to maintain their self-focus (H2), rather than pragmatic gestures, which are known to serve interactive functions. Forty-nine interpreters were videorecorded as they interpreted two portions of popular science lectures, one from either English or German (their L2) into Russian (their L1) and one from Russian into their respective L2. The results showed that the vast majority of the gestures produced were either pragmatic in function or self-adapters. H2 was thus supported, but H1 was not. The frequent use of pragmatic gestures is interpreted in terms of the internalized dialogic nature of talk and gesturing itself. Both beat gestures expressing emphasis and reduced forms of presentation gestures can facilitate the interpreters’ speaking by prompting the presentation and emphasis of ideas. Though focused on their own process of speech production, simultaneous interpreters may embody elements of the lecturer of the source text engaging with the audience, blended with their own dialogic speaking behaviors, aspects of which we may see in their gesturing.

Full Text

  1. Introduction

There is a long-standing debate in the field of gesture studies about whether gestures primarily serve the person producing them—as an aid in formulating and expressing their ideas—or whether they mainly serve the goal of facilitating communication with others (Iverson & Goldin-Meadow 1998). In some ways, this dichotomy is artificial, since the two views are not mutually exclusive. However, the distinction highlights two main directions that gesture research has taken over the past decades: on the one hand, work mainly in the field of cognitive psychology dedicated to studying the relation of gesture to the gesturer’s presumed thought processes; and, on the other hand, research in interaction analysis of various kinds examining the roles that gestures play in interaction. The two views can be further characterized as follows.

The cognitive view grew significantly along with the interest in gesture studies generated to a large degree by the publication of McNeill’s (1992) book Hand and mind: What gestures reveal about thought. This research put a spotlight on issues such as how speakers’ idea units (which McNeill called “growth points”) develop and are unpacked into speech and gesture on a moment-by-moment basis during talk. The attention in this research is therefore on internal processes of conceptualization leading to gesture production. In addition, this work considers how descriptions of spatial configurations and movements relate to the lexical and grammatical means that speakers have in their language for expressing spatial events (e.g. Kita & Özyürek 2003). This line of inquiry was related to Slobin’s (1987, 1996) work on thinking for speaking and it expanded to include consideration of gesture in this process (e.g. McNeill & Duncan 2000, see also Boutet et al. 2016). There was a particular focus on representational gestures, given the attention to the expression of spatial concepts and especially movement events. Others (e.g. Rauscher et al. 1996) homed in more on the potential role of gesturing in lexical retrieval during speaking. In general, these lines of research consider gesture from the perspective of how it is used by an individual speaker—from conceptualization of an idea to the “performance” of the gesture.

The other area of interest, which we can call the interactional view, is associated by many with the work of Kendon, an overview of which appears in his (2004) book Gesture: Visible Action as Utterance, but which has much earlier roots (e.g. Kendon 1973, 1980). This perspective could be characterized as an external one, taking into consideration the role of gestures in interaction. The use of gestures to serve pragmatic functions has played a prominent role here (Payrató & Teßendorf 2014), including in discovering what kinds of gestures recur across speech events in a given community with such functions (e.g. Bressem & Müller 2014, Ladewig 2014). It has been influential in the study of face-to-face interaction, helping to expand the field of conversation analysis in the direction of taking the visual side of communication into account (e.g. Mondada 2013, Streeck 2009). Certain strands of research in social psychology on the interactional role of gestures (e.g. Bavelas et al. 1992, Clark 2003) share some areas of interest with this research, but from their own perspective. Studies from cultural anthropology have also contributed to our knowledge of the pragmatic and interactional roles of gesture (see Brookes & Le Guen 2019). Some have even extended this into the analysis of political discourse (e.g. Cienki & Giansante 2014, Ponton 2016, Way 2021).

Most of the existing research has naturally entailed contexts involving two or more interlocutors, even if the context only entails one person speaking monologically to another (especially in experimental settings). Much less research has considered situations of monologic talk in which the addressee is not present, and indeed, when the person may only be speaking to an imagined audience. Such contexts include people talking on the phone when it is only an audio (not a video) call and even less interactive settings, such as simultaneous interpreters working in a booth where they may not see the audience hearing them.

These contexts provide fertile ground for raising the question as to how gestures are used. Previous research on simultaneous interpreters’ gestures has primarily focused on the context in which they can see the person whose speech they are interpreting. For example, Zagar Galvão (2015, 2020) and Galhano-Rodrigues & Zagar Galvão (2010) found that the degree to which interpreters imitate the gestures of the speaker they are interpreting varies widely from one interpreter to another. Stachowiak-Szymczak (2019), by contrast, compared the gesturing of interpreters seeing a blank screen versus seeing images congruent or incongruent with a written source text they were interpreting, but she only analyzed their use of beat gestures—small down-and-up or out-and-in movements that often align with prosodic stress.

When interpreters only hear and do not see the speaker, they do not know what gestures the latter is producing. One might expect that in such contexts, the interpreter would only be producing gestures for their own thinking for speaking. Taking that perspective, one could hypothesize a high frequency of representational gestures, to aid the interpreter in their lexical retrieval. Representational gestures, even if schematic in how they depict referents in the speech, might also help interpreters maintain concepts for the short intervals required to interpret them, by serving as a form of external memory (cf. Kita 2000 on how representational gestures help speaking). We therefore hypothesize that simultaneous interpreters working in a booth might use a larger number of representational gestures when performing their task (to help themselves with lexical retrieval in the target language or to literally hold on to concepts as a form of externalized memory), rather than interactive, pragmatically oriented gestures. This should be especially true in a context where the interpreters are not facing an audience, or indeed have no audience physically present at all. But is that what one encounters in practice?

In addition, it is worth considering another type of gesture, i.e. self-adapters. These are self-oriented movements that involve “fixing” oneself in some way. As pointed out by Freedman (1972), this can involve discrete body-focused movements—such as briefly pushing one’s hair back from one’s face or adjusting one’s eyeglasses—as well as sustained, continuous body-focused movements, e.g. rubbing one’s fingers together or scratching oneself. Such (mainly manual) actions were of particular interest during the early years of modern gesture studies (e.g. Condon & Ogston 1966, Freedman 1972, Kendon 1972) because of the way the analysis could be applied in clinical psychology, e.g. in examining correlations between the way patients mentioned certain subjects and used correlated self-touching movements. This was of interest because of the functions attributed to such movements, such as controlling stress or self-comforting (Ekman & Friesen 1969). However, such movements have been ignored in much subsequent research in the field of gesture studies that developed from the 1990s onward.1 This is because the emphasis turned to the role of gestures in the two domains outlined above: depicting concepts that speakers were trying to express and engaging in interaction with interlocutors. The known functions of such self-adapters (self-soothing and maintaining one’s mental focus) mean that they can be expected to be commonly used by simultaneous interpreters to regulate the stress of the task they are engaging in. Considering them in the current study therefore gives rise to a second hypothesis, namely that self-adapters will also be used during interpreting in larger quantities than pragmatic gestures, given the self-orientation of the former versus the communicative functions that the latter are associated with. This would also be in line with the expectation behind the first hypothesis that, while working in the interpreting booth, interpreters will not be employing as many interactionally oriented gestures.

  2. Data

The present study used a simulated setting of simultaneous interpreting to study the interpreters’ use of gestures during their work. It involves data collected between 2019 and 2021 at Moscow State Linguistic University, employing an interpreting booth used in training simultaneous interpreters. All the participants were native speakers of Russian (which we will call their first language, or L1), trained in interpreting from and into either English or German as a second language (L2). Data were collected from 29 simultaneous interpreters interpreting from Russian into English and vice versa (called the ENG-RUS dataset) and from 20 interpreters interpreting from Russian into German and vice versa (called the GER-RUS dataset). Each interpreter heard a ten-minute portion of each of two popular science lectures, one in Russian (from the website PostNauka) that they had to interpret into L2 and one from their respective L2 (either a TED Talk in English or a lecture in German from the ARD Mediathek) that they had to interpret into Russian. In each case, the lecture dealt with topics related to evolution, biodiversity, and the extinction of species. Each participant was provided with a vocabulary list beforehand that suggested translation equivalents for uncommon discipline-specific terms used in the lecture. Each session began with one minute of practice interpreting from a portion of the lecture that was not part of the ten minutes in the actual interpreting task; this allowed them to warm up to the speaker and to the conditions of the recording session.

For the purposes of our study, participants were not allowed to bring any paper, pens, or phones into the interpreting booth. While this does differ from their regular practice, this intervention was introduced in order to examine interpreters’ behavior using only their own ‘natural media’ (Gibbon 2005)—their own minds and bodies—and not any kind of external memory (such as written notes). In addition, the interpreters only listened to the audio of the lectures with headphones, but were not shown the videos of the talks. This was done in order to see how the interpreters gestured on their own, without any potential influence from seeing the gestures of the original lecturers.

The set-up for recording the interpreting sessions was as follows. Interpreters sat at a table in an interpreting booth, looking at an empty classroom in front of them through the booth window. The audio recording was played from a laptop out of their view. Their interpreting sessions were recorded with three cameras: one on a tripod behind their right shoulder, providing a bird’s-eye view of their hands on the desk; a small GoPro camera placed at the edge of the desk, facing them, allowing for a close-up frontal view of their hands; and a camera inside the Tobii eye-tracking glasses that participants wore, showing where they looked during the task. The videos from the three cameras were then synchronized and edited into one composite video for analysis, as shown in Figure 1. The present study used the views from the first two cameras for the gesture analysis.


Figure 1. The three angles from which the videorecording was performed

  3. Methods

Each composite video from each interpreting session was analyzed in its own file created in the software ELAN (version of March 13, 2024)2. Given the overall amount of the recorded data (980 minutes = 10 minutes × 2 rounds × 49 interpreters), two minutes were chosen from each video for analysis: minutes 3:00–3:59 and 8:00–8:59. These were selected since they provided moments from the first and second halves of each interpreting session, yet not the very first minutes (when the interpreters were becoming accustomed to the task) nor the last minute (when they might have experienced more fatigue).
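As a simple illustration of the sampling just described, the following minimal sketch in Python reproduces the arithmetic behind the total recorded time and the subset selected for coding; the variable names are our own, not part of the study’s materials.

```python
# Sampling arithmetic for the recorded data, following the description above.
MINUTES_PER_ROUND = 10      # each interpreting round lasted ten minutes
ROUNDS_PER_INTERPRETER = 2  # one round into the L2 and one into the L1
N_INTERPRETERS = 29 + 20    # ENG-RUS plus GER-RUS participants

total_recorded = MINUTES_PER_ROUND * ROUNDS_PER_INTERPRETER * N_INTERPRETERS
print(total_recorded)       # 980 minutes of recorded data

# Two one-minute excerpts (3:00-3:59 and 8:00-8:59) were coded per video.
EXCERPT_MINUTES_PER_VIDEO = 2
total_coded = EXCERPT_MINUTES_PER_VIDEO * ROUNDS_PER_INTERPRETER * N_INTERPRETERS
print(total_coded)          # 196 minutes selected for gesture coding
```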

These portions of the videos were coded for the functions of the gesture strokes; the identification of gesture strokes and their possible functions in context were defined in detail over four pages of the coding manual we devised. The gesture functions under consideration can be briefly characterized in the following way.

Representational gestures depict the content of the speech in one of five ways – acting, molding, holding, tracing, or embodying the referent that was mentioned in speech or that can be inferred from the context (building on Müller’s 1998, 2014 inventory of gestural modes of representation). For example, when a person in a restaurant asks the waiter if they can pay the bill by pretending to write something in the air, with their hand pursed as if holding a pen, this would be an example of acting out writing. Molding involves moving one’s hands as if touching a surface, and holding is the static version of that mode of representation. Tracing involves drawing with one’s finger, while in embodying, the hand becomes the referent referred to (as when one might use one’s other hand as a flat piece of paper being written on in the example of asking the waiter for the bill).

Deictic gestures involve pointing to an object with a finger or hand, or touching the desk in front of oneself to identify a spatial or temporal location.

Pragmatic functions presuppose using any one of a set of gestures known to be common in European cultures (see Bressem & Müller 2014), and particularly in Russian culture (Grishina 2017), for showing a stance towards what is being said, such as negating (e.g. with a sweeping away gesture), presenting an idea (e.g. with a palm-up open hand), expressing uncertainty (e.g. with a wavering flat hand), emphasizing one’s point with beats, etc.

Adapters3 involve self-adapter (SAD) movements (gripping one’s own hands, scratching oneself, etc.) and other adapters (OAD) (e.g. rubbing the desk); the former are oriented towards oneself (see Ekman & Friesen 1969 for details), while the latter are directed away from oneself.

The sub-categories mentioned for each main category above were intended to provide reference points for making distinctions between the main functions based on more concrete characteristics. The analysis here focused on the main categories rather than the sub-categories, with the exception of adapters, which were analyzed as separate sub-categories for reasons described below.

The videos were distributed among three teams of three coders each. The coding of gesture functions was performed in ELAN, with cross-checking among the members of the team assigned to each video in order to produce a consensus ELAN file for that video. Subsequently, the ELAN files and videos were exchanged with another team, which performed a further round of cross-checking. Instances of disagreement in coding were then discussed and resolved at regular data sessions held with all members of all teams plus the project coordinator. Any resolutions of disagreements that resulted in changes to the coding manual were then applied to previously coded files. However, since the coding manual derived from a previous project in which it had been developed in detail for gesture function coding, such amendments to the manual for this project were few. The results from ELAN were then exported to Excel files for quantitative analysis.
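To give a concrete sense of this final quantification step, here is a minimal sketch of how per-function gesture counts could be aggregated from an ELAN tab-delimited export; the file name and the column label ‘Function’ are hypothetical assumptions for illustration, not the project’s actual tier names or export layout.

```python
import pandas as pd

# Hypothetical ELAN export: one row per annotated gesture stroke, with a
# 'Function' column holding the coded category (e.g. representational,
# deictic, pragmatic, SAD, OAD). These labels are assumed for illustration.
df = pd.read_csv("interpreter_consensus_export.txt", sep="\t")

# Tally strokes per function and convert the counts to percentages,
# analogous to the relative frequencies reported in Figures 2a and 2b.
counts = df["Function"].value_counts()
percentages = (counts / counts.sum() * 100).round(1)
print(percentages)
```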

  4. Results

The percentages of the different functions of gestures are given in Figures 2a and 2b.


Figures 2a and 2b. The relative use of gesture functions in the two datasets

The majority of the gestures produced in the portions of the data analyzed served either a pragmatic or an adapter function, and among adapters, the proportion was greatly skewed towards self-adapters. The distinction between SADs and OADs was therefore maintained for the analysis. The first hypothesis posed at the beginning of this study was thus not supported, either for the English-Russian dataset or for the German-Russian dataset: gestures were not used by the interpreters primarily to represent ideas that they were trying to express. Gestures served a deictic function in only a small percentage of cases. The second hypothesis, however, concerning SADs, was supported; many of the interpreters produced numerous self-adaptive movements while performing their task.

These findings will be considered further in the Discussion below, but first it is worth considering a few of the details that became apparent during the coding of the pragmatic gestures and SADs. One is that the coding of the gestures with pragmatic functions involved recognizing variations from the standard types of examples that are (understandably) used as illustrations of different pragmatic gesture functions in the literature. The best example here is the category of gestures used to present a point. The classic example for this is the palm up open hand gesture, described in detail in Müller (2004). However, this function was accomplished in our dataset with a great deal of variation in the effort exerted. The rotation of the upper arm needed to truly turn one’s palm up requires an effort, and the participants in our study could rarely afford to exert such effort for one gestural movement, given the fast pace of their speaking task in performing simultaneous interpreting. Their position, sitting with their hands on a desk, is also a contextual factor in their normal work in an interpreting booth that constrains their movement in certain ways. Consequently, the function of presenting a point was accomplished by a variety of kinds of hand turn-outs, sometimes with the palm only moving slightly outward, and sometimes with a movement as small as a finger extension outward. See Cienki (2021) for a detailed analysis of the variations on these forms.

Another point concerned the great variety possible in the forms of SAD movements. Fortunately, the placement of the GoPro camera directly in front of the participants allowed for a detailed, close-up view of their hands in a way that is rarely possible in gesture research, where the camera is normally placed further away, so as to provide a view of a larger space. The range in types of self-adapters was great, from rubbing one’s fingers or pulling the skin on one’s arm, to wringing one’s hands or rubbing one’s face or head. The use of these different types also varied widely between individuals. This relates to the overall individual variation in the degree to which participants gestured: while some were more active and made movements more frequently, others kept their hands folded on the desk and gestured less frequently.

Finally, while the interpreters sometimes produced strings of SADs one after another, or sequences of pragmatic gestures, they also frequently alternated between SADs and pragmatic gestures. That is, they sometimes broke a pattern of producing SADs by making small outward beat movements or turn-outs of the hand, serving the pragmatic function of marking emphasis along with stressed syllables in their speech. See Figure 3 for an illustration.


Figure 3. Pragmatic gesture interspersed between SADs (a “gesture matryoshka”)

Given the way in which these pragmatic gestures appeared to pop out of a string of surrounding SADs, we informally referred to such occurrences as “gesture matryoshki”, reminiscent of the nested wooden dolls in which a small doll in the center appears to pop out when the tops of the dolls surrounding it are removed. This phenomenon will be considered further in the following Discussion section.

  5. Discussion

The infrequent use of representational gestures by the interpreters did not support our original hypothesis that gestures might facilitate their thinking for interpreting by helping them maintain concepts that they needed to interpret, that is, as a form of external memory. This finding is consonant with that of Leonteva et al. (2023), who considered a different interpreting situation in which the interpreters did see a video of the original speaker. In that context, it was found that metaphoric gestures used by the original speaker to depict abstract concepts were only minimally copied by the interpreters. Instead, the interpreters largely used pragmatic gestures, such as a palm up open hand to present ideas. The iconicity of the metaphors in such gestures was argued in Leonteva et al. (2023) to be minimal, because the as-if holding of an idea on the palm of one’s hand (as argued in Müller 2004, for example) could apply to any idea; the form of the gesture is not related iconically in any obvious way to specific source domains of different metaphors by which ideas can be objectified. Leonteva et al. (2023) also argued that interpreters are under such time pressure to produce their utterances that they do not have the luxury of creating concept-specific imagery in a variety of representational gestures. The thinking for gesturing that would be involved to do so could, in fact, increase their cognitive load, given the effort needed to produce various distinct manual forms related to the concepts they are interpreting.

While deictic gestures were not a focus of this study, the small number of them produced in this context also raises questions. One might have expected them to be used as a means of reference tracking, as when one indicates different spaces to stand for different ideas (see McNeill et al. 1993 on abstract deixis). However, in line with the minimal use of representational gestures, deictic gestures were not employed very frequently to keep track of referents either. This suggests that this type of gesturing also entails a kind of conceptual complexity for which there is no time, or no need, during the fast-paced task of simultaneous interpreting.

The extensive use of different types of pragmatic gestures raises several points for consideration. One is that there is a way in which the use of certain pragmatic gestures, and especially beat gestures (Lucero et al. 2014), might actually be facilitating speech production after all. For example, some participants in the study used presentation gestures (turning one’s open hand outward toward a palm up position) in a serial fashion, as they presented one point after another in their renditions. This resulted in a kind of “revving up” motion, whereby the repeated action might have helped keep the participant’s production of interpretations going. Thus, what might typically be considered gestures for others (pragmatic gestures of presenting or emphasizing ideas with a palm up open hand, hand turn-out, or beats) can also function for oneself. As Iverson and Thelen (1999) argue, gesture and speech can function as coupled oscillators in terms of how the movement of each is related to the other; see also Pouw et al. (2021) on voice-gesture biomechanical coupling.

There is also another point to consider here regarding the role of pragmatic gestures in monologic speech with no audience present. The use of such gestures could simply arise from the fact that we sometimes cannot help but produce them, e.g. in contexts in which we are talking on the phone and cannot see our addressee. Our ability to speak inherently develops ontogenetically in dialogic contexts, and so language at its core is dialogic in nature (viz. Bakhtin et al. 1981). The notion of a “private language” is problematic (Wittgenstein 1953), since even thinking to ourselves in verbal form is based on a given language that developed historically in interactive contexts. In a similar way, pragmatic gesturing is inherently part of what adult speakers do when they talk, performing speech acts (witness the performative function of gestures discussed in Kendon (2004) and Müller (1998)), showing one’s stance (Andries et al. 2023, Debras 2013), etc. They are built into our speaking routines. Children acquiring their first language do so in dialogic contexts, and we also acquire gestures from the interactional contexts of the culture of our first language. It is also interesting to note that the finger lifts when presenting a point, considered in Cienki (2021) as miniature versions of the palm up open hand, are qualitatively different from what have classically been studied as pragmatic or interactive gestures. Whereas the latter are often oriented toward the interlocutor and are performed further out from the speaker, in the interactive space, the former, involving small finger lifts, remain close to the speaker and consist of a simple movement outward and back, rather than appearing to be oriented deictically in any particular direction. This is another way in which these gestures appear to have a pragmatic function that has become speaker-internal—that is, one that has been internalized as part of the speaker’s talking behavior, regardless of the context.

A third point to consider is that there is another way in which ‘dialogicity’ comes into play in the interpreting context. Interpreters are dual speakers—uttering their own words, which are derived from the original speaker’s ideas, and uttering them as if being the original speaker. (See Vranjes & Brône (2021) on interpreters as laminated speakers, picking up on Goodwin and Goodwin’s (2004) characterization.) In the present study we see how this is the case even beyond their speech. That is, some of the pragmatic gestures present a viewpoint on the utterance that the interpreter is rendering, but it is not clear whether the viewpoint is that of the interpreter or an imagined viewpoint of the lecturer whose speech is being interpreted. For example, in one instance, a participant uttered the following statement in his interpretation from German into Russian: «За последние десять тысяч лет мы потеряли почти половину площади дождевых лесов на Земле» (In the last ten thousand years we have lost almost half of the area of the rain forests on Earth). When stating the number “ten thousand years” in Russian, he separates his hands to produce a small grasping motion with the thumb, index finger, and middle finger of his right hand, as shown in Figure 4.

With this “precision grip” gesture, as Morris (2002) calls it, it appears as if the hand is “delicately taking hold of an imaginary, small object” (p. 79). This gesture is known to be used in various European cultures when speakers want to show that they are emphasizing a precise point (Kendon 2004: ch. 12). Here it could be an instance of the interpreter mentally simulating the speaker of the source text and embodying that speaker’s imagined gesture in that moment. However, it could also be the interpreter’s own reaction as to how he might utter the point that was being rendered. Indeed, the two viewpoints can be blended in the interpreter at that moment, as part of his bi-faceted role as the animator of another author’s words, to use Goffman’s (1981) categories of footing. (See Cienki & Iriskhanova (2020) for more on viewpoint blending in simultaneous interpreting.)


Figure 4. The interpreter highlights the number with a precision grip gesture (right hand) in the second image

Finally, the extensive use of self-adapters did support our second hypothesis. This finding makes sense in terms of the stress-management and focusing functions that these movements have, as discussed earlier (see also Iriskhanova et al. 2019). However, the blended nature of pragmatic gestures embedded in SADs—the so-called “gesture matryoshki”—illustrates that it is not simply a matter of using SADs in one section of talk and pragmatic gestures in another. To put it another way, it is not just an issue of the speaker gesturing purely for himself versus for others, as the original debate posed in the introduction to this article would have it. Rather, the dialogic nature of gesture plays out here in another way, in the interpreters’ rapid alternation between SADs and gestures with pragmatic functions. We see a kind of vacillation happening on a micro-timescale between movements that are oriented toward the self (SADs) and ones that extend outward (pragmatic gestures), perhaps to an imagined interlocutor. This is another way in which the nature of the interpreters’ gesturing is dialogic, involving a dialogue between internal and external functions of gestures.

  6. Conclusion

The interpreters in the present study did not primarily produce gestures to represent the concepts they were rendering, but rather made ones which are known to serve pragmatic functions, or made movements known as self-adapters. The context of the interpreter working in a booth without an audience might seem to be one that would be conducive to the production of gestures that would facilitate one’s own thinking for speaking, via the (partial) depiction of the concepts that one is rendering. However, this was not found to be the case; this can be attributed to the kind of speaking that is involved. In contrast to someone in a conversation who is spontaneously developing their own ideas and having to formulate them verbally, the interpreter is already provided with the ideas of another through their words in one language, which the interpreter then conceptualizes and speaks in another language. This context of speaking, combined with the time pressure and cognitive load experienced by the interpreter, can help account for the low number of gestures serving to represent ideas.

In contrast, the large number of SADs can be related to the self-adaptive function of these movements, which can aid interpreters in their concentration and help them manage the stress that is part of their job. The frequent use of pragmatic gestures appears related to the dialogic nature of the role of the interpreter in several senses: being engaged in a communicative task, which is inherently an interactive process, and doing so in the role of an intermediary, potentially reflecting not only their own stance towards the ideas being uttered, but also the imagined stance of the person whose speech is being interpreted.

 

1 Notable exceptions to this include the research connected with the NEUROGES system of gesture coding (e.g. Lausberg 2013, Lausberg & Sloetjes 2016) and related to the guide for “Annotating multichannel discourse” (Kibrik & Fedorova 2020, Litvinenko et al. 2018).

2 https://archive.mpi.nl/tla/elan

3 Since this article is written using American English, the American spelling ‘adapter’ will be used, but it is worth noting that much research on this type of gesture appears under the term ‘adaptor’.


About the authors

Alan Cienki

Vrije Universiteit Amsterdam

Author for correspondence.
Email: a.cienki@vu.nl
ORCID iD: 0000-0003-2951-9722

Alan Cienki has a PhD in Slavic linguistics and is Professor of Language Use & Cognition and English Linguistics in the Department of Language, Literature and Communication at the Vrije Universiteit Amsterdam, Netherlands. His research interests include cognitive linguistics, semantics, gesture studies, metaphor studies and political discourse analysis.

Amsterdam, Netherlands

References

  1. Andries, Fien, Katharina Meissl, Clarissa de Vries, Kurt Feyaerts, Bert Oben, Paul Sambre, Myriam Vermeerbergen & Geert Brône. 2023. Multimodal stance-taking in interaction-A systematic literature review. Frontiers in Communication 8. 1187977. https://doi.org/10.3389/fcomm.2023.1187977
  2. Bakhtin, Mikhail, Michael Holquist & Caryl Emerson. 1981. The Dialogic Imagination: Four Essays by M. M. Bakhtin. Austin, TX: University of Texas Press.
  3. Bavelas, Janet B., Nicole Chovil, Douglas A. Lawrie & Allan Wade. 1992. Interactive gestures. Discourse Processes 15. 469-489. https://doi.org/10.1080/01638539209544823
  4. Boutet, Dominique, Aliyah Morgenstern & Alan Cienki. 2016. Grammatical aspect and gesture in French: A kinesiological approach. Russian Journal of Linguistics 20 (3). 132-151.
  5. Bressem, Jana & Cornelia Müller. 2014. The family of Away gestures: Negation, refusal, and negative assessment. In Cornelia Müller, Alan Cienki, Ellen Fricke, Silva Ladewig, David McNeill & Jana Bressem (eds.), Body - language - communication: An international handbook on multimodality in human interaction, 1592-1604. Berlin: De Gruyter Mouton.
  6. Brookes, Heather & Olivier Le Guen. 2019. Gesture studies and anthropological perspectives. Gesture 18 (2-3). 119-141. https://doi.org/10.1075/gest.00040.bro
  7. Cienki, Alan. 2021. From the finger lift to the palm-up open hand when presenting a point: A methodological exploration of forms and functions. Languages and Modalities 1. 17-30. https://doi.org/10.3897/lamo.1.68914
  8. Cienki, Alan & Gianluca Giansante. 2014. Conversational framing in televised political discourse: A comparison from the 2008 elections in the United States and Italy. Journal of Language and Politics 13 (2). 255-288. https://doi.org/10.1075/jlp.13.2.04cie
  9. Cienki, Alan & Olga K. Iriskhanova. 2020. Patterns of multimodal behavior under cognitive load: An analysis of simultaneous interpretation from L2 to L1. Voprosy Kognitivnoy Lingvistiki 1. 5-11. https://doi.org/10.20916/1812-3228-2020-1-5-11
  10. Clark, Herbert H. 2003. Pointing and placing. In Sotaro Kita (ed.), Pointing: Where language, culture, and cognition meet, 243-268. Mahwah, NJ: Lawrence Erlbaum Associates.
  11. Condon, William S. & W. D. Ogston. 1966. Soundfilm analysis of normal and pathological behavior patterns. Journal of Nervous and Mental Disease 143 (4). 338-347.
  12. Debras, Camille. 2013. L’Expression Multimodale du Positionnement Interactionnel (Multimodal Stance-taking). Paris: Université Sorbonne Nouvelle - Paris 3.
  13. Ekman, Paul & Wallace V. Friesen. 1969. The repertoire of nonverbal behavior: Categories, origins, usage, and coding. Semiotica 1 (1). 49-98.
  14. Freedman, Norbert. 1972. The analysis of movement behavior during the clinical interview. In Aron Wolfe Siegman & Benjamin Pope (eds.), Studies in dyadic communication, 153-175. New York, NY: Pergamon Press.
  15. Galhano-Rodrigues, Isabel & Elena Zagar Galvão. 2010. The importance of listening with one’s eyes: A case study of multimodality in simultaneous interpreting. In Jorge Díaz-Cintas, Anna Matamala & Josélia Neves (eds.), New insights into audiovisual translation and media accessibility, 241-253. Amsterdam: Rodopi.
  16. Gibbon, Dafydd. 2005. Prerequisites for a multimodal semantics of gesture and prosody. Proceedings of the International Workshop on Computational Semantics 6. 2-15.
  17. Goffman, Erving. 1981. Forms of Talk. Philadelphia, PA: University of Pennsylvania Press.
  18. Goodwin, Charles & Marjorie H. Goodwin. 2004. Participation. In Alessandro Duranti (ed.), A companion to linguistic anthropology, 222-243. Oxford: Basil Blackwell.
  19. Grishina, Elena A. 2017. Russkaya zhestikulyatsiya s lingvisticheskoi tochki zreniya: korpusnye issledovanya (Russian gesticulation from a linguistic point of view: Corpus studies). Moscow: Languages of Slavic Culture. (In Russ.).
  20. Iriskhanova, Olga K., Andrei A. Petrov, Alina I. Makoveeva & Anna V. Leonteva. 2019. Kognitivnaya nagruzka v usloviyakh sinkhronnogo perevoda: opyt polimodal’nogo analiza (Cognitive load in the context of simultaneous interpreting: An experiment in multimodal analysis). Kognitivnye issledovaniya yazyka 38. 100-115. (In Russ.).
  21. Iverson, Jana M. & Susan Goldin-Meadow. 1998. Why people gesture when they speak. Nature 396. 228.
  22. Iverson, Jana M. & Esther Thelen. 1999. Hand, mouth and brain. The dynamic emergence of speech and gesture. Journal of Consciousness Studies 6 (11-12). 19-40.
  23. Kendon, Adam. 1972. Some relations between body motion and speech: An analysis of an example. In Aron Wolfe Siegman & Benjamin Pope (eds.), Studies in dyadic communication, 177-210. New York, NY: Pergamon Press.
  24. Kendon, Adam. 1973. The role of visible behaviour in the organization of social interaction. In Mario von Cranach & Ian Vine (eds.), Social communication and movement: Studies in interaction and expression in man and chimpanzee, 29-74. New York, NY: Academic Press.
  25. Kendon, Adam. 1980. Gesticulation and speech: Two aspects of the process of utterance. In Mary R. Key (ed.), The relation of verbal and nonverbal communication, 207-227. The Hague: Mouton.
  26. Kendon, Adam. 2004. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
  27. Kibrik, Andrei A. & Olga V. Fedorova (eds.) 2020. Annotating Multichannel Discourse: A Guide Book. Moscow: Institute of Linguistics RAS.
  28. Kita, Sotaro. 2000. How representational gestures help speaking. In David McNeill (ed.), Language and gesture, 162-185. Cambridge: Cambridge University Press.
  29. Kita, Sotaro & Aslı Özyürek. 2003. What does cross-linguistic variation in semantic coordination of speech and gesture reveal: Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language 48. 16-32.
  30. Ladewig, Silva H. 2014. Recurrent gestures. In Cornelia Müller, Alan Cienki, Ellen Fricke, Silva Ladewig, David McNeill & Jana Bressem (eds.), Body - language - communication: An international handbook on multimodality in human interaction, 1558-1574. Berlin: De Gruyter Mouton.
  31. Lausberg, Hedda. 2013. Understanding Body Movement. A Guide to Empirical Research on Nonverbal Behaviour - with an Introduction to the NEUROGES Coding System. Frankfurt am Main: Peter Lang.
  32. Lausberg, Hedda & Han Sloetjes. 2016. The revised NEUROGES-ELAN system: An objective and reliable interdisciplinary analysis tool for nonverbal behavior and gesture. Behavior Research Methods 48. 973-993. https://doi.org/10.3758/s13428-015-0622-z
  33. Leonteva, Anna V., Alan Cienki & Olga V. Agafonova. 2023. Metaphoric gestures in simultaneous interpreting. Russian Journal of Linguistics 27 (3). 820-842. https://doi.org/10.22363/2687-0088-36189
  34. Litvinenko, Alla O., Andrei A. Kibrik, Olga V. Fedorova & Yuliya V. Nikolaeva. 2018. Annotating hand movements in multichannel discourse: Gestures, adaptors and manual postures. The Russian Journal of Cognitive Science 5 (2). 4-17.
  35. Lucero, Ché, Holly Zaharchuk & Daniel Casasanto. 2014. Beat gestures facilitate speech production. Proceedings of the Annual Meeting of the Cognitive Science Society 36. 898-903.
  36. McNeill, David. 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago, IL: University of Chicago Press.
  37. McNeill, David & Susan D. Duncan. 2000. Growth points in thinking for speaking. In David McNeill (ed.), Language and gesture, 141-161. Cambridge: Cambridge University Press.
  38. McNeill, David, Justine Cassell & Elena T. Levy. 1993. Abstract deixis. Semiotica 95 (1/2). 5-19. https://doi.org/10.1515/semi.1993.95.1-2.5
  39. Mondada, Lorenza. 2013. Multimodal interaction. In Cornelia Müller, Alan Cienki, Ellen Fricke & David McNeill (eds.), Body - language - communication: An international handbook on multimodality in human interaction, 577-589. Berlin: De Gruyter Mouton.
  40. Morris, Desmond. 2002. Peoplewatching. London: Vintage.
  41. Müller, Cornelia. 1998. Redebegleitende Gesten. Kulturgeschichte - Theorie - Sprachvergleich. Berlin: Berlin Verlag A. Spitz.
  42. Müller, Cornelia. 2004. Forms and uses of the Palm Up Open Hand: A case of a gesture family? In Cornelia Müller & Roland Posner (eds.), The semantics and pragmatics of everyday gestures, 233-256. Berlin: Weidler.
  43. Müller, Cornelia. 2014. Gestural modes of representation as techniques of depiction. In Cornelia Müller, Alan Cienki, Ellen Fricke, Silva Ladewig, David McNeill & Jana Bressem (eds.), Body - language - communication: An international handbook on multimodality in human interaction, 1687-1702. Berlin: De Gruyter Mouton.
  44. Payrató, Lluís & Sedinha Teßendorf. 2014. Pragmatic gestures. In Cornelia Müller, Alan Cienki, Ellen Fricke, Silva Ladewig, David McNeill & Jana Bressem (eds.), Body - language - communication: An international handbook on multimodality in human interaction, 1531-1539. Berlin: De Gruyter Mouton.
  45. Ponton, Douglas M. 2016. Movements and meanings: Towards an integrated approach to political discourse analysis. Russian Journal of Linguistics 20 (4). 122-139. https://doi.org/10.22363/2687-0088-15152
  46. Pouw, Wim, Shannon Proksch, Linda Drijvers, Marco Gamba, Judith Holler, Christopher Kello, Rebecca S. Schaefer & Geraint A. Wiggins. 2021. Multilevel rhythms in multimodal communication. Philosophical Transactions of the Royal Society B 376. 20200334. https://doi.org/10.1098/rstb.2020.0334
  47. Rauscher, Frances H., Robert M. Krauss & Yihsiu Chen. 1996. Gesture, speech, and lexical access: The role of lexical movements in speech production. Psychological Science 7 (4). 226-231.
  48. Slobin, Dan I. 1987. Thinking for speaking. In Jon Aske, Natasha Beery, Laura Michaelis & Hana Filip (eds.), Proceedings of the 13th Annual Meeting of the Berkeley Linguistics Society, 435-445. Berkeley, CA: Berkeley Linguistics Society.
  49. Slobin, Dan I. 1996. From ‘thought and language’ to ‘thinking for speaking.’ In John Gumperz & Stephen C. Levinson (eds.), Rethinking linguistic relativity, 70-96. Cambridge: Cambridge University Press.
  50. Stachowiak-Szymczak, Katarzyna. 2019. Eye Movements and Gestures in Simultaneous and Consecutive Interpreting. Cham: Springer.
  51. Streeck, Jürgen. 2009. Gesturecraft: The Manu-facture of Meaning. Amsterdam: John Benjamins.
  52. Vranjes, Jelena & Geert Brône. 2021. Interpreters as laminated speakers: Gaze and gesture as interpersonal deixis in consecutive dialogue interpreting. Journal of Pragmatics 181. 83-99. https://doi.org/10.1016/j.pragma.2021.05.008
  53. Way, Lyndon C. 2021. Trump, memes and the Alt-right: Emotive and affective criticism and praise. Russian Journal of Linguistics 25 (3). 789-809. https://doi.org/10.22363/2687-0088-2021-25-3-789-809
  54. Wittgenstein, Ludwig. 1953. Philosophical Investigations. Oxford: Basil Blackwell.
  55. Zagar Galvão, Elena. 2015. Gesture in Simultaneous Interpreting from English into European Portuguese: An Exploratory Study. Porto: University of Porto, Portugal.
  56. Zagar Galvão, Elena. 2020. Gesture functions and gestural style in simultaneous interpreting. In Heidi Salaets & Geert Brône (eds.), Linking up with video: Perspectives on interpreting practice and research, 151-179. Amsterdam: John Benjamins Publishing Company.

Copyright (c) 2024 Cienki A.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
