Vol 26, No 2 (2022): Computational Linguistics and Discourse Complexology

Articles

Computational linguistics and discourse complexology: Paradigms and research methods

Solovyev V.D., Solnyshkina M.I., McNamara D.S.

Abstract

The dramatic expansion of modern linguistic research and the enhanced accuracy of linguistic analysis have become a reality due to the ability of artificial neural networks not only to learn and adapt, but also to carry out automated linguistic analysis and to select, modify and compare texts of various types and genres. The purpose of this article, and of the journal issue as a whole, is to present modern areas of research in computational linguistics and linguistic complexology, as well as to define a solid rationale for the new interdisciplinary field, i.e. discourse complexology. The review of trends in computational linguistics focuses on the following aspects of research: applied problems and methods, computational linguistic resources, the contribution of theoretical linguistics to computational linguistics, and the use of deep learning neural networks. The special issue also addresses the problem of objective and relative text complexity and its assessment. We focus on the two main approaches to linguistic complexity assessment: the “parametric approach” and machine learning. The findings of the studies published in this special issue indicate a major contribution of computational linguistics to discourse complexology, including new algorithms developed to solve discourse complexology problems. The issue outlines the research areas of linguistic complexology and provides a framework to guide its further development, including the design of a complexity matrix for texts of various types and genres, refining the list of complexity predictors, validating new complexity criteria, and expanding databases for natural language.

Russian Journal of Linguistics. 2022;26(2):275-316

Natural language processing and discourse complexity studies

Solnyshkina M.I., McNamara D.S., Zamaletdinov R.R.

Abstract

The study presents an overview of discursive complexology, an integral paradigm of linguistics, cognitive studies and computational linguistics aimed at defining discourse complexity. The article comprises three main parts, which successively outline views on the category of linguistic complexity, the history of discursive complexology, and modern methods of text complexity assessment. Distinguishing the concepts of linguistic complexity, text complexity and discourse complexity, we recognize the absolute nature of text complexity assessment and the relative nature of discourse complexity, determined by the linguistic and cognitive abilities of a recipient. Founded in the 19th century, text complexity theory is still focused on defining and validating complexity predictors and criteria for text perception difficulty. We briefly characterize the five previous stages of discursive complexology: formative, classical, the period of closed tests, constructive-cognitive, and the period of natural language processing. We also present the theoretical foundations of Coh-Metrix, an automatic analyzer based on a five-level cognitive model of perception. Computing not only lexical and syntactic parameters, but also text-level parameters, situational models and rhetorical structures, Coh-Metrix provides a high level of accuracy in discourse complexity assessment. We also show the benefits of natural language processing models and the wide range of application areas of text profilers and digital platforms such as LEXILE and ReaderBench. We view parametrization and the development of a complexity matrix for texts of various genres as the nearest prospect for the development of discursive complexology, which may enable higher accuracy of inter- and intra-linguistic contrastive studies, as well as automated selection and modification of texts for various pragmatic purposes.

Russian Journal of Linguistics. 2022;26(2):317-341

ReaderBench: Multilevel analysis of Russian text characteristics

Corlatescu D., Ruseti Ș., Dascalu M.

Abstract

This paper introduces an adaptation of the open-source ReaderBench framework that now supports Russian multilevel analyses of text characteristics, integrating both textual complexity indices and state-of-the-art language models, namely Bidirectional Encoder Representations from Transformers (BERT). The evaluation of the proposed processing pipeline was conducted on a dataset containing Russian texts from two language levels for foreign learners (A - Basic user and B - Independent user). Our experiments showed that the ReaderBench complexity indices are statistically significant in differentiating between the two language-level classes from both: a) a statistical perspective, where a Kruskal-Wallis analysis was performed and features such as the “nmod” dependency tag or the number of nouns at the sentence level proved to be the most predictive; and b) a neural network perspective, where our model combining textual complexity indices and contextualized embeddings obtained an accuracy of 92.36% in a leave-one-text-out cross-validation, outperforming the BERT baseline. ReaderBench can be employed by designers and developers of educational materials to evaluate and rank materials by difficulty, as well as by a larger audience for assessing text complexity in different domains, including law, science, or politics.
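The leave-one-text-out protocol reported above can be sketched in a few lines. The single feature (mean word length) and the threshold classifier below are simplified stand-ins for illustration only, not the actual ReaderBench pipeline, which combines many complexity indices with BERT embeddings:

```python
# Leave-one-text-out cross-validation: each text is held out once,
# a classifier is fit on the remaining texts, and accuracy is
# averaged over the held-out predictions.

def mean_word_length(text):
    """A single toy complexity feature: average word length."""
    words = text.split()
    return sum(len(w) for w in words) / len(words)

def leave_one_text_out(texts, labels):
    """labels: 'A' (easier) or 'B' (more complex), one per text."""
    correct = 0
    for i in range(len(texts)):
        train = [(mean_word_length(t), y)
                 for j, (t, y) in enumerate(zip(texts, labels)) if j != i]
        # Toy threshold classifier: midpoint between the two class means.
        a = [f for f, y in train if y == "A"]
        b = [f for f, y in train if y == "B"]
        threshold = (sum(a) / len(a) + sum(b) / len(b)) / 2
        pred = "B" if mean_word_length(texts[i]) > threshold else "A"
        correct += pred == labels[i]
    return correct / len(texts)
```

The protocol matters because with only a few texts per class, a random train/test split would waste data; holding out one text at a time uses every text for evaluation exactly once.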

Russian Journal of Linguistics. 2022;26(2):342-370

What neural networks know about linguistic complexity

Sharoff S.A.

Abstract

Linguistic complexity is a complex phenomenon, as it manifests itself on different levels (from texts to sentences to words to subword units), through different features (from genres to syntax to semantics), and via different tasks (language learning, translation training, the specific needs of other kinds of audiences). Finally, the results of complexity analysis differ across languages because of their typological properties, the cultural traditions associated with specific genres in these languages, or simply because of the properties of the individual datasets used for analysis. This paper investigates these aspects of linguistic complexity by using artificial neural networks to predict complexity and to explain the predictions. Neural networks optimise millions of parameters to produce empirically efficient prediction models while operating as a black box, without determining which linguistic factors lead to a specific prediction. This paper shows how to link neural predictions of text difficulty to detectable properties of linguistic data, for example, to the frequency of conjunctions, discourse particles or subordinate clauses. The specific study concerns neural difficulty prediction models which have been trained to differentiate easier and more complex texts in different genres in English and Russian and have been probed for the linguistic properties which correlate with their predictions. The study shows how the rate of nouns and the related complexity of noun phrases affect difficulty via statistical estimates of what the neural model predicts as easy and difficult texts. The study also analyses the interplay between difficulty and genres, as linguistic features often specialise for genres rather than for inherent difficulty, so that some associations between the features and difficulty are caused by differences in the relevant genres.
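Probing of the kind described above can be reduced to correlating an interpretable feature (e.g. the rate of nouns per text) with the model's difficulty predictions. A minimal sketch, assuming both are available as per-text numeric lists (the feature values and predictions here are placeholders, not the paper's data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation between a linguistic feature (xs) and
    per-text neural difficulty predictions (ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A strongly positive correlation would suggest the black-box model relies (directly or via confounds such as genre) on that feature, which is exactly the genre/difficulty entanglement the abstract warns about.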

Russian Journal of Linguistics. 2022;26(2):371-390

A cognitive linguistic approach to analysis and correction of orthographic errors

Reynolds R., Janda L., Nesset T.

Abstract

In this paper, we apply usage-based linguistic analysis to systematize the inventory of orthographic errors observed in the writing of non-native users of Russian. The data come from a longitudinal corpus (560K tokens) of non-native academic writing. Traditional spellcheckers mark errors and suggest corrections, but do not attempt to model why errors are made. Our approach makes it possible to recognize not only the errors themselves, but also the conceptual causes of these errors, which lie in misunderstandings of Russian phonotactics and morphophonology and the way they are represented by orthographic conventions. With this linguistically based system in place, we can propose targeted grammar explanations that improve users’ command of Russian morphophonology rather than merely correcting errors. Based on errors attested in the non-native academic writing corpus, we introduce a taxonomy of errors, organized by pedagogical domains. Then, on the basis of this taxonomy, we create a set of mal-rules to expand an existing finite-state analyzer of Russian. The resulting morphological analyzer tags wordforms that fit our taxonomy with specific error tags. For each error tag, we also develop an accompanying grammar explanation to help users understand why and how to correct the diagnosed errors. Using our augmented analyzer, we build a webapp to allow users to type or paste a text and receive detailed feedback and corrections on common Russian morphophonological and orthographic errors.
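The mal-rule idea can be illustrated with a deliberately tiny sketch. The real system is a finite-state analyzer; here a single regular-expression rule stands in for it, using one well-known Russian orthographic convention (after ж and ш one writes и, not ы). The tag name and explanation text are illustrative, not the authors' actual taxonomy:

```python
import re

# Each mal-rule: (error pattern, error tag, grammar explanation).
RULES = [
    (re.compile("(ж|ш)ы"), "Err/ZhiShi",
     "After ж and ш, write и, not ы (the жи/ши rule)."),
]

def analyze(wordform):
    """Return (tag, explanation) pairs for every mal-rule the form matches."""
    return [(tag, note) for pat, tag, note in RULES if pat.search(wordform)]
```

The point of the design is that a matched rule yields not just a flag but a diagnosis: the explanation targets the phonotactic misunderstanding behind the error, rather than silently substituting the correct form.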

Russian Journal of Linguistics. 2022;26(2):391-408

Collection and evaluation of lexical complexity data for Russian language using crowdsourcing

Abramov A.V., Ivanov V.V.

Abstract

Estimating word complexity with binary or continuous scores is a challenging task that has been studied for several domains and natural languages. Commonly this task is referred to as Complex Word Identification (CWI) or Lexical Complexity Prediction (LCP). Correct evaluation of word complexity can be an important step in many Lexical Simplification pipelines. Earlier works have usually presented methodologies of lexical complexity estimation with several restrictions: hand-crafted features correlated with word complexity, feature engineering to describe target words with features such as the number of hypernyms, the count of consonants, or the Named Entity tag, and evaluations with carefully selected target audiences. More recent works have investigated the use of transformer-based models, which can extract features from the surrounding context as well. However, the majority of papers have been devoted to pipelines for the English language, and only a few have adapted them to other languages such as German, French, and Spanish. In this paper we present a dataset of lexical complexity in context based on the Russian Synodal Bible, collected using a crowdsourcing platform. We describe a methodology for collecting the data using a 5-point Likert scale for annotation, present descriptive statistics, and compare results with analogous work for the English language. We evaluate a linear regression model as a baseline for predicting word complexity on hand-crafted features and on fastText and ELMo embeddings of target words. The result is a corpus consisting of 931 distinct words used in 3,364 different contexts.
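The aggregation step implied by Likert-scale annotation can be sketched as follows. This illustrates the general idea (averaging each annotator pool's 1-5 ratings and rescaling to a continuous score), not the authors' exact pipeline:

```python
def complexity_score(ratings):
    """Mean of 1-5 Likert ratings, rescaled to [0, 1]:
    1 (very simple) -> 0.0, 5 (very complex) -> 1.0."""
    mean = sum(ratings) / len(ratings)
    return (mean - 1) / 4

def aggregate(annotations):
    """annotations: {(word, context_id): [rating, ...]} -> complexity scores.
    Keyed by (word, context) because the same word may be rated
    differently in different contexts."""
    return {key: complexity_score(r) for key, r in annotations.items()}
```

Rescaling to [0, 1] makes the continuous target directly usable by a regression baseline and comparable across datasets annotated on different scales.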

Russian Journal of Linguistics. 2022;26(2):409-425

Text complexity and linguistic features: Their correlation in English and Russian

Morozov D.A., Glazkova A.V., Iomdin B.L.

Abstract

Text complexity assessment is a challenging task requiring various linguistic aspects to be taken into consideration. The complexity level of a text should correspond to the reader’s competence: a text that is too complicated may be incomprehensible, whereas one that is too simple may be boring. For many years, simple features were used to assess readability, e.g. the average length of words and sentences or vocabulary variety. Thanks to the development of natural language processing methods, the set of text parameters used for evaluating readability has expanded significantly. In recent years, many articles have investigated the contribution of various lexical, morphological, and syntactic features to the readability level. Nevertheless, as the methods and corpora are quite diverse, it may be hard to draw general conclusions as to the effectiveness of linguistic information for evaluating text complexity. Moreover, the cross-lingual impact of different features on various datasets has not been investigated. The purpose of this study is to conduct a large-scale comparison of features of different nature. We experimentally assessed seven commonly used feature types (readability, traditional features, morphological features, punctuation, syntax, frequency, and topic modeling) on six corpora for text complexity assessment in English and Russian, employing four common machine learning models: logistic regression, random forest, a convolutional neural network, and a feedforward neural network. One of the corpora, a corpus of fiction literature read by Russian school students, was constructed for the experiment using a large-scale survey to ensure the objectivity of the labeling. We show which feature types can significantly improve performance and analyze their impact according to dataset characteristics, language, and data source.

Russian Journal of Linguistics. 2022;26(2):426-448

Discourse complexity in the light of eye-tracking: a pilot Russian language study

Toldova S.Y., Slioussar N.A., Bonch-Osmolovskaya A.A.

Abstract

The paper explores the influence of discourse structure on text complexity. We assume that certain types of discourse units are easier to read than others, due to their explicit discourse structure, which makes their informational input more accessible. As a data source, we use the dataset from the MECO corpus, which contains eye movement data for 12 Russian texts read by 35 native speakers. We demonstrate that an approach relying on elementary discourse units (EDUs) can be felicitously used in the analysis of eye movement data, since fixation patterns on EDUs are similar to those on whole sentences. Our analysis identified EDU outliers, which show a shorter first-fixation time than estimated. We arranged these outliers into several groups associated with different discourse structures. The first group comprises statements with nominal predicates that set the exposition of the text or a macroproposition and, following those, EDUs that elaborate on the previous statement and signal the beginning of the narrative. The second group comprises EDUs that serve as the middle component of a listing or of a group of coordinated clauses or phrases. The final group comprises EDUs that are part of an opposition, contrast or comparison. Discourse analysis based on EDUs has never before been applied to eye movement data, so our project opens many avenues for further research into the complexity of discourse structure.

Russian Journal of Linguistics. 2022;26(2):449-470

Word-formation complexity: a learner corpus-based study

Lyashevskaya O.N., Pyzhak J.V., Vinogradova O.I.

Abstract

This article explores the word-formation dimension of learner text complexity, which indicates how skilful non-native speakers are in using more and less complex - and varied - derivational constructions. In order to analyse the association between complexity and writing accuracy in word formation, as well as the interactive effects of task type, text register, and native language background, we examine the materials of the REALEC corpus of English essays written by university students with Russian L1. We present an approach to measuring derivational complexity based on the classification of suffixes offered in Bauer and Nation (1993) and then compare the complexity results with the number of word-formation errors annotated in the texts. Starting from the hypothesis that the number of errors will decrease with increasing complexity, we apply statistical analysis to examine the association between complexity and accuracy. We found, first, that the use of more advanced word-formation suffixes affects the number of errors in texts. Second, different levels of suffixes in the hierarchy affect derivational accuracy in different ways. In particular, the use of irregular derivational models is positively associated with the number of errors. Third, the type of examination task and the expected format and register of writing should be taken into consideration. The hypothesis holds true for regular but infrequent advanced suffixal models used in more formal descriptive essays associated with an academic register. However, for less formal texts with lower academic register requirements, the hypothesis needs to be amended.

Russian Journal of Linguistics. 2022;26(2):471-492

Word frequency and text complexity: an eye-tracking study of young Russian readers

Laposhina A.N., Lebedeva M.Y., Berlin Khenis A.A.

Abstract

Although word frequency is often associated with the cognitive load on the reader and is widely used in automated text complexity assessment, to date no eye-tracking data have been obtained on the effectiveness of this parameter for predicting text complexity for Russian primary school readers. Moreover, the optimal ways of taking the frequency of individual words into account when assessing the complexity of an entire text have not yet been precisely determined. This article aims to fill these gaps. The study was conducted on a sample of 53 children of primary school age. As stimulus material, we used 6 texts that differ in the classical Flesch readability score and in word frequency data. As sources of the frequency data, we used the common frequency dictionary based on the material of the Russian National Corpus and DetCorpus, a corpus of literature addressed to children. The speed of reading the text aloud, in words per minute averaged over the grades, was employed as a measure of text complexity. The best prediction of the relative reading time was obtained using the lemma frequency data from DetCorpus. At the text level, the highest correlation with reading speed was shown by the text’s coverage by a list of the 5,000 most frequent words, with both sources of the lists - the Russian National Corpus and DetCorpus - showing almost the same correlation values. For a more detailed analysis, we also calculated the correlation of the frequency parameters of specific word forms and lemmas with three parameters of oculomotor activity: dwell time, fixation count, and average fixation duration. At the word-by-word level, lemma frequency in DetCorpus demonstrated the highest correlation with relative reading time. The results confirm the feasibility of using frequency data in text complexity assessment for primary school children and demonstrate the optimal ways to calculate frequency data.
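For reference, the classical Flesch Reading Ease formula mentioned above combines average sentence length with average word length in syllables. The coefficients below are Flesch's original English-language values; adaptations for Russian re-fit them for the different syllable structure of the language:

```latex
RE = 206.835
   - 1.015 \times \frac{\text{total words}}{\text{total sentences}}
   - 84.6  \times \frac{\text{total syllables}}{\text{total words}}
```

Higher scores indicate easier texts: short sentences with short words push $RE$ up, long syllable-heavy words pull it down.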

Russian Journal of Linguistics. 2022;26(2):493-514

Russian dictionary with concreteness/abstractness indices

Solovyev V.D., Volskaya Y.A., Andreeva M.I., Zaikin A.A.

Abstract

The demand for a Russian dictionary with indices of the abstractness/concreteness of words has been expressed in a number of areas, including linguistics, psychology, neurophysiology, and cognitive studies focused on how concepts are imaged in human cognitive systems. Although dictionaries of abstractness/concreteness have been compiled for a number of languages, Russian has until recently been viewed as an under-resourced language due to the lack of one. The Laboratory of Quantitative Linguistics of Kazan Federal University has implemented two methods of compiling dictionaries of abstract/concrete words: a respondent survey and the extrapolation of human estimates with the help of an original computer program. In this article, we provide a detailed description of the methodology used for assessing the abstractness/concreteness of words by native Russian respondents, as well as the control algorithms validating the survey quality. The implementation of this methodology has enabled us to create a Russian dictionary (1,500 words) with indices of the concreteness/abstractness of words, including those missing from the Russian Semantic Dictionary by N.Yu. Shvedova (1998). We have also created three versions of a machine dictionary of abstractness/concreteness based on the extrapolation of the respondents' ratings. The third, most accurate version contains 22,000 words and was compiled with the use of modern deep learning neural network technology. The paper provides statistical characteristics (histograms of the distribution of ratings, dispersion, etc.) of both the machine dictionary and the dictionary obtained by interviewing informants. The quality of the machine dictionary was validated on a test set of words by contrasting machine and human evaluations, with the latter viewed as more credible.
The purpose of the paper is to give a detailed description of the methodology employed to create the concrete/abstract dictionary, as well as to demonstrate its application in theoretical and applied research with concrete examples. The paper shows the practical use of this dictionary in six case studies: predicting the complexity of school textbooks as a function of the share of abstract words; comparing the abstractness indices of Russian-English equivalents; assessing the concreteness/abstractness of polysemantic words; contrasting the ratings of different age groups of respondents; contrasting the ratings of respondents with different levels of education; and analyzing the concepts of “concreteness” and “specificity”.
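The extrapolation step can be illustrated with a deliberately simple scheme. The authors use a deep neural network, but the underlying idea, assigning an unrated word a rating derived from semantically similar rated words, can be sketched as similarity-weighted nearest neighbours over word vectors (all vectors and ratings below are illustrative placeholders):

```python
import math

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def extrapolate(word_vec, rated, k=2):
    """Estimate a rating for an unrated word from its k most similar
    rated neighbours. rated: list of (vector, human_rating) pairs."""
    sims = sorted(((cosine(word_vec, v), r) for v, r in rated),
                  reverse=True)[:k]
    total = sum(s for s, _ in sims)
    # Similarity-weighted average of the neighbours' human ratings.
    return sum(s * r for s, r in sims) / total
```

The appeal of extrapolation is leverage: a survey covering 1,500 words can seed machine estimates for tens of thousands, with human ratings kept as the gold standard for validation.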

Russian Journal of Linguistics. 2022;26(2):515-549

BOOK REVIEWS

Review of Sean Wallis. 2021. Statistics in Corpus Linguistics: A New Approach. New York/Oxon, Routledge. ISBN 9781138589384 ISBN 9780429491696 (eBook)

Privalova I.V., Kazachkova M.B.


Russian Journal of Linguistics. 2022;26(2):550-557

Review of A.Ya. Shajkevich, V.M. Andryushchenko, N.A. Rebeckaya. 2021. Distributive-statistical analysis of the language of Russian prose of the 1850-1870s, vol. 3. Publishing House YaSK, Moscow. ISBN 978-5-907290-61-7

Bayrasheva V.R.


Russian Journal of Linguistics. 2022;26(2):558-564
