Debate

The status of multimodal constructions in model development and empirical testing: a debate

Language can be viewed as a system of signs shared by community members, as a communicative behaviour, or as a mental phenomenon “hard-wired” in the brain. But what does this mean for our understanding of language, and for the methods of its investigation and modelling?

For de Saussure (1916), language is a formal system of signs, each consisting of a signifier (roughly, the form) and a signified (i.e., the meaning). It is one of a variety of sign systems used for communication and is part of social psychology. Its description rests on identifying how signs differ from each other in form and meaning. In this sense, the study of language is self-contained, based on introspective analysis of the observed products of communication: it does not need to cross disciplines, a view which today may well seem rather obsolete.

Cognitive Linguistics, at its beginnings (Langacker 1987, 1991), also defined the linguistic sign as a bipolar symbol with a phonological pole (i.e., the form) and a semantic pole (i.e., the meaning). Similarly, in Construction Grammar (Fillmore 1988, Goldberg 1995), the unit of grammar is a construction, understood as a form–function pairing. On this account, the formal pole includes both syntactic and phonological information (such as prosody and intonation), while the function pole encodes semantic and pragmatic content. Constructions thus understood form a taxonomic network, which constitutes the grammar of a language and which is entirely usage-based (Hopper 1987, Barlow & Kemmer 1999, Bybee 2006), i.e. it emerges from language use. The emergent mental constructions may exhibit different levels of schematization and can be dynamically transformed as language users encounter novel examples. This inherent dynamicity and the resultant variation mean that grammar should be viewed as probabilistic rather than rule-based. It follows that grammar should be described in terms of usage patterns retrievable through generalization from actual language use, as attested in corpora (e.g., Stefanowitsch & Gries 2006, Gries & Stefanowitsch 2006, Glynn & Fischer 2010, Glynn & Robinson 2014).

Beyond corpus-based investigations into the structure of language, Cognitive Linguistics also makes a strong claim about the psychological plausibility of the linguistic model, seeing psycholinguistic experiments as an important part of testing hypotheses about language (Lakoff 2008). Recently, the language/emotion interface has generated a great deal of experimental research, challenging purely linguistic-semantic approaches to the exploration of emotional content encoded in, and communicated via, language. Numerous studies show that emotional content, both verbal and nonverbal, exerts a strong impact on the perception and processing of linguistic content, facilitating processing at the lexical and sentence levels (Kissler et al. 2007, Kissler 2013). This new paradigm raises a range of questions concerning the language/emotion interface: do linguistic and emotional contents belong to different systems of meaning representation? Should we develop a new semantics to incorporate emotional, experientially acquired meanings into language systems? Do we acquire emotional meanings and linguistic meanings separately?

Finally, in recent years the multimodal character of language has received much attention from cognitive linguists (e.g., Cienki & Müller 2008, Cienki 2016). Accordingly, rather than being seen as a purely verbal system with prosodic features, language is also claimed to incorporate visual cues, such as body posture, gestures or facial expressions. On this understanding, co-speech gestures are of utmost importance for interpreting the meaning of a communicative act. But what exactly constitutes a gesture? Are gestures part of constructions? Is there a grammar of gestures? How should gestures be represented in a multimodal model of language? And how does sign language express itself multimodally?

Importantly, multimodality is not only a feature of spoken communication. Written language and other non-spoken communicative modes such as internet discourse, adverts, cartoons or comics can also be described as multimodal, in that both images and the (verbal) text contribute to the meaning-making process, and even the shape of the fonts may influence our evaluation of the message (e.g., Kress & van Leeuwen 2001, Dancygier & Sweetser 2012, Dancygier & Vandelanotte 2017, Vandelanotte & Dancygier 2017).

But what does this mean for our understanding, investigation and modelling of language and communication?

Are multimodal interactions or texts understood by language users through reference to mental representations? If so, are these representations mono- or multimodal? (Representations need not be understood as stable patterns but as dynamic co-activation of neuronal assemblies.)

Or are linguistic/semiotic signs merely useful tools with which analysts describe the communicative behaviour of participants? (After all, a map is not the territory, and yet it helps us navigate through space.)

Finally, is language, as a verbal system of communication, independent of other communicative systems such as gesture?

These (and potentially other) questions will be the focus of the debate between our experts: Barbara Dancygier, Dylan Glynn, Johanna Kissler and Cornelia Müller.

References:

Barlow, M. & S. Kemmer (Eds.). 1999. Usage-Based Models of Language. Stanford: CSLI Publications.
Bybee, Joan. 2006. From usage to grammar: The mind's response to repetition. Language 82: 711–733.
Cienki, A. 2016. Cognitive Linguistics, gesture studies, and multimodal communication. Cognitive Linguistics 27: 603–618.
Cienki, A., & Müller, C. 2008. Metaphor and Gesture. Amsterdam: John Benjamins.
Dancygier, B. & E. Sweetser. 2012. Viewpoint in Language. A multimodal perspective. Cambridge: Cambridge University Press.
Dancygier, B. & L. Vandelanotte (Eds.). 2017. Viewpoint phenomena in multimodal communication, special issue. Cognitive Linguistics 28(3).
de Saussure, F. 1916. Cours de linguistique générale. Paris: Payot.
Fillmore, C. J. 1988. The mechanisms of ‘Construction Grammar’. Berkeley Linguistics Society 14: 35–55.
Glynn, D. & Fischer, K. (Eds.). 2010. Quantitative Methods in Cognitive Semantics: Corpus-driven approaches. Berlin: Mouton de Gruyter.
Glynn, D. & Robinson, J. (Eds.). 2014. Corpus Methods for Semantics: Quantitative studies in polysemy and synonymy. Amsterdam: John Benjamins.
Goldberg, A. 1995. Constructions: A Construction Grammar approach to argument structure. Chicago: University of Chicago Press.
Gries, S. T. & Stefanowitsch, A. (Eds.). 2006. Corpora in Cognitive Linguistics: Corpus-based approaches to syntax and lexis. Berlin: Mouton de Gruyter.
Hopper, P. 1987. Emergent grammar. Berkeley Linguistics Society 13: 139–157.
Kissler, J., Herbert, C., Peyk, P. & Junghofer, M. 2007. Buzzwords: Early cortical responses to emotional words during reading. Psychological Science 18(6): 475–480.
Kissler, J. 2013. Love letters and hate mail: Cerebral processing of emotional language content. In J. Armony & P. Vuilleumier (Eds.), The Cambridge Handbook of Human Affective Neuroscience, 304–330. Cambridge: Cambridge University Press.
Kress, G. & T. van Leeuwen. 2001. Multimodal Discourse: The modes and media of contemporary communication. London: Arnold.
Lakoff, G. 2008. The neural theory of metaphor. In R. Gibbs (Ed.), The Cambridge Handbook of Metaphor and Thought, 17–38. Cambridge: Cambridge University Press.
Langacker, R. 1987. Foundations of Cognitive Grammar. Vol. 1: Theoretical prerequisites. Stanford: Stanford University Press.
Langacker, R. 1991. Foundations of Cognitive Grammar. Vol. 2: Descriptive application. Stanford: Stanford University Press.
Stefanowitsch, A. & S. T. Gries (Eds.). 2006. Corpus-based Approaches to Metaphor and Metonymy. Berlin: Mouton de Gruyter.
Vandelanotte, L. & B. Dancygier (Eds.). 2017. Multimodal artefacts and the texture of viewpoint, special issue. Journal of Pragmatics 122.