What neural networks know about linguistic complexity

Abstract

Linguistic complexity is a multifaceted phenomenon: it manifests itself at different levels (from texts to sentences to words to subword units), through different features (from genres to syntax to semantics) and in different tasks (language learning, translation training, the specific needs of other kinds of audiences). Finally, the results of complexity analysis differ across languages because of their typological properties, the cultural traditions associated with specific genres in these languages, or simply the properties of the individual datasets used for analysis. This paper investigates these aspects of linguistic complexity by using artificial neural networks to predict complexity and to explain the predictions. Neural networks optimise millions of parameters to produce empirically efficient prediction models, yet they operate as black boxes that do not reveal which linguistic factors lead to a specific prediction. This paper shows how to link neural predictions of text difficulty to detectable properties of linguistic data, for example, the frequency of conjunctions, discourse particles or subordinate clauses. The specific study concerns neural difficulty prediction models which have been trained to differentiate easier and more complex texts of different genres in English and Russian and which have been probed for the linguistic properties that correlate with their predictions. The study shows how the rate of nouns and the related complexity of noun phrases affect difficulty, via statistical estimates over what the neural model predicts as easy and difficult texts. The study also analyses the interplay between difficulty and genres: linguistic features often specialise for genres rather than for inherent difficulty, so some associations between features and difficulty are caused by differences in the relevant genres.
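
As a rough illustration of the probing step described above, the sketch below correlates interpretable linguistic features with a model's difficulty predictions. It is a minimal Python example under stated assumptions: the per-text difficulty scores are assumed to come from an already fine-tuned neural classifier, and the feature names, the synthetic data, the Spearman correlation and the logistic-regression probe are illustrative choices, not the paper's actual pipeline.

# Hypothetical probing sketch: relate per-text linguistic features to the
# difficulty scores of a neural classifier.  All data below is synthetic.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder inputs, one row per text:
#   neural_difficulty -- probability of the "difficult" class, assumed to be
#                        produced by an already trained neural model
#   features          -- frequencies of linguistic markers per 1000 tokens,
#                        e.g. counted over a morphosyntactic parse of each text
n_texts = 500
neural_difficulty = rng.uniform(0.0, 1.0, n_texts)
feature_names = ["nouns", "conjunctions", "discourse_particles", "subordinate_clauses"]
features = rng.normal(loc=100.0, scale=20.0, size=(n_texts, len(feature_names)))

# 1. Rank correlation between each feature and the neural prediction.
for name, column in zip(feature_names, features.T):
    rho, p = spearmanr(column, neural_difficulty)
    print(f"{name:>22}: rho={rho:+.3f} (p={p:.3g})")

# 2. A logistic-regression probe approximates which combinations of markers
#    separate texts the neural model treats as easy from those it treats as difficult.
labels = (neural_difficulty > 0.5).astype(int)
probe = LogisticRegression(max_iter=1000).fit(features, labels)
for name, coef in zip(feature_names, probe.coef_[0]):
    print(f"{name:>22}: weight={coef:+.3f}")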

About the author

Serge Sharoff

University of Leeds

Primary contact for editorial correspondence.
Email: s.sharoff@leeds.ac.uk
ORCID iD: 0000-0002-4877-0210

Researcher at the Centre for Translation Studies

Leeds, UK


Copyright © Sharoff S., 2022

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
