<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE root>
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ali="http://www.niso.org/schemas/ali/1.0/" article-type="research-article" dtd-version="1.2" xml:lang="en"><front><journal-meta><journal-id journal-id-type="publisher-id">RUDN Journal of Studies in Literature and Journalism</journal-id><journal-title-group><journal-title xml:lang="en">RUDN Journal of Studies in Literature and Journalism</journal-title><trans-title-group xml:lang="ru"><trans-title>Вестник Российского университета дружбы народов. Серия: Литературоведение. Журналистика</trans-title></trans-title-group></journal-title-group><issn publication-format="print">2312-9220</issn><issn publication-format="electronic">2312-9247</issn><publisher><publisher-name xml:lang="en">Peoples’ Friendship University of Russia named after Patrice Lumumba (RUDN University)</publisher-name></publisher></journal-meta><article-meta><article-id pub-id-type="publisher-id">47801</article-id><article-id pub-id-type="doi">10.22363/2312-9220-2025-30-4-813-823</article-id><article-id pub-id-type="edn">REJMQZ</article-id><article-categories><subj-group subj-group-type="toc-heading" xml:lang="en"><subject>JOURNALISM</subject></subj-group><subj-group subj-group-type="toc-heading" xml:lang="ru"><subject>Журналистика</subject></subj-group><subj-group subj-group-type="article-type"><subject>Research Article</subject></subj-group></article-categories><title-group><article-title xml:lang="en">Artificial Intelligence and Human-Machine Communication: A Challenge for Mediatization Studies</article-title><trans-title-group xml:lang="ru"><trans-title>Искусственный интеллект и человеко-машинная коммуникация: вызов исследованиям медиатизации</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author"><contrib-id contrib-id-type="orcid">https://orcid.org/0000-0001-7349-9429</contrib-id><contrib-id 
contrib-id-type="spin">3280-9452</contrib-id><name-alternatives><name xml:lang="en"><surname>Nim</surname><given-names>Evgeniya G.</given-names></name><name xml:lang="ru"><surname>Ним</surname><given-names>Евгения Генриевна</given-names></name></name-alternatives><bio xml:lang="en"><p>PhD in Sociology, Associate Professor, Institute of Media</p></bio><bio xml:lang="ru"><p>кандидат социологических наук, доцент, Институт медиа</p></bio><email>nimeg@mail.ru</email><xref ref-type="aff" rid="aff1"/></contrib></contrib-group><aff-alternatives id="aff1"><aff><institution xml:lang="en">HSE University</institution></aff><aff><institution xml:lang="ru">Национальный исследовательский университет «Высшая школа экономики»</institution></aff></aff-alternatives><pub-date date-type="pub" iso-8601-date="2025-12-29" publication-format="electronic"><day>29</day><month>12</month><year>2025</year></pub-date><volume>30</volume><issue>4</issue><issue-title xml:lang="en">PUSHKIN IN CONTEMPORARY STUDIES</issue-title><issue-title xml:lang="ru">ПУШКИН В СОВРЕМЕННЫХ ИССЛЕДОВАНИЯХ</issue-title><fpage>813</fpage><lpage>823</lpage><history><date date-type="received" iso-8601-date="2025-12-28"><day>28</day><month>12</month><year>2025</year></date></history><permissions><copyright-statement xml:lang="en">Copyright © 2025, Nim E.G.</copyright-statement><copyright-statement xml:lang="ru">Copyright © 2025, Ним Е.Г.</copyright-statement><copyright-year>2025</copyright-year><copyright-holder xml:lang="en">Nim E.G.</copyright-holder><copyright-holder xml:lang="ru">Ним Е.Г.</copyright-holder><ali:free_to_read xmlns:ali="http://www.niso.org/schemas/ali/1.0/"/><license><ali:license_ref xmlns:ali="http://www.niso.org/schemas/ali/1.0/">https://creativecommons.org/licenses/by-nc/4.0</ali:license_ref></license></permissions><self-uri xlink:href="https://journals.rudn.ru/literary-criticism/article/view/47801">https://journals.rudn.ru/literary-criticism/article/view/47801</self-uri><abstract 
xml:lang="en"><p>The development of artificial intelligence technologies, including communicative AI, has become a serious challenge for mediatization studies. The article explores the limits of applicability of the mediatization research program and Andreas Hepp’s figurational approach to automated (human-machine) communication. The author poses and consistently develops three questions: how do mediatization studies consider AI as a subject of theoretical and empirical analysis; what are the potential and limitations of the figurational approach to human-machine communication; and what are the possible directions for reassembling this approach (and the mediatization program as a whole) in the era of communicative AI. The critical examination of mediatization theory, and in particular of Andreas Hepp’s figurational approach, focuses on five key concepts: figurations, hybrid agency, media logics, quasi-communication, and communicative AI. Assessing the heuristic potential of the figurational lens in AI research, the author formulates several directions for its revision and further development. Given the cross-cutting and multifunctional nature of AI technologies, which permeate multiple social worlds and practices, the analysis should not be limited to individual figurations; poly- and interfigurative contexts also matter. In addition, it is necessary to study how figurations (the social knowledge, values, and behavioral norms of different communities and institutions) are incorporated into artificial agents acting as a (multi)figurative Other. Finally, there are grounds to rethink the concepts of “hybrid figurations” and “hybrid agency” in the context of the growing influence of AI and “artificial humans”.</p></abstract><trans-abstract xml:lang="ru"><p>Развитие технологий искусственного интеллекта, включая коммуникативный ИИ, стало серьезным вызовом для исследований медиатизации. 
Изучаются границы применимости программы медиатизации и фигуративного подхода Андреаса Хеппа к автоматизированной (человеко-машинной) коммуникации. Автор ставит и последовательно раскрывает три вопроса: как исследования медиатизации рассматривают ИИ в качестве предмета теоретического и эмпирического анализа; каким потенциалом и ограничениями обладает фигуративный подход к человеко-машинной коммуникации; каковы возможные направления пересборки этого подхода (и программы медиатизации в целом) в эпоху коммуникативного ИИ. Критическое осмысление теории медиатизации, в частности фигуративного подхода Андреаса Хеппа, фокусируется на пяти ключевых концептах: фигурации, гибридная агентность, медиалогика, квазикоммуникация и коммуникативный ИИ. Оценивая эвристический потенциал фигуративной оптики в исследовании ИИ, автор формулирует ряд направлений ее ревизии и дальнейшего развития. Учитывая сквозной и многофункциональный характер ИИ-технологий, пронизывающих множество социальных миров и практик, анализ не должен замыкаться на отдельных фигурациях, важны также поли- и межфигуративные контексты. Кроме того, необходимо изучать, как фигурации (социальные знания, ценности и поведенческие нормы разных сообществ и институтов) инкорпорированы в искусственных агентов, выступающих в роли (мульти)фигуративного Другого. 
Наконец, есть основания переосмыслить понятия «гибридная фигурация» и «гибридная агентность» в контексте возрастающего влияния ИИ и «искусственных людей».</p></trans-abstract><kwd-group xml:lang="en"><kwd>artificial intelligence</kwd><kwd>communicative AI</kwd><kwd>human-machine communication</kwd><kwd>mediatization studies</kwd><kwd>figurational approach</kwd><kwd>Andreas Hepp</kwd></kwd-group><kwd-group xml:lang="ru"><kwd>искусственный интеллект</kwd><kwd>коммуникативный ИИ</kwd><kwd>человеко-машинная коммуникация</kwd><kwd>исследования медиатизации</kwd><kwd>фигуративный подход</kwd><kwd>Андреас Хепп</kwd></kwd-group><funding-group/></article-meta><fn-group/></front><body></body><back><ref-list><ref id="B1"><label>1.</label><mixed-citation>Berger, V. (2023). Mediatized Love: A Materialist Phenomenology of Tinder. Social Media + Society, 9(4), 1–15. https://doi.org/10.1177/20563051231216922</mixed-citation></ref><ref id="B2"><label>2.</label><mixed-citation>Bolin, G. (2024). Communicative AI and techno-semiotic mediatization: Understanding the communicative role of the machine. Human-Machine Communication, 7, 65–81. https://doi.org/10.30658/hmc.7.4</mixed-citation></ref><ref id="B3"><label>3.</label><mixed-citation>Couldry, N., &amp; Hepp, A. (2016). The Mediated Construction of Reality. Cambridge, UK; Malden, MA: Polity Press.</mixed-citation></ref><ref id="B4"><label>4.</label><mixed-citation>Elias, N. (1978). What Is Sociology? Columbia University Press.</mixed-citation></ref><ref id="B5"><label>5.</label><mixed-citation>Esposito, E. (2017). Artificial communication? The production of contingency by algorithms. Zeitschrift für Soziologie, 46(4), 249–265. https://doi.org/10.1515/zfsoz-2017-1014</mixed-citation></ref><ref id="B6"><label>6.</label><mixed-citation>Fortunati, L., Edwards, A., &amp; Edwards, Ch. (2024). The perturbing mediatization of voice-based virtual assistants: The case of Alexa. Human-Machine Communication, 7, 99–119. 
https://doi.org/10.30658/hmc.7.6</mixed-citation></ref><ref id="B7"><label>7.</label><mixed-citation>Gerhard, U., &amp; Hepp, A. (2018). Appropriating Digital Traces of Self-Quantification: Contextualizing Pragmatic and Enthusiast Self-Trackers. International Journal of Communication, 12, 683–700.</mixed-citation></ref><ref id="B8"><label>8.</label><mixed-citation>Guzman, A.L., &amp; Lewis, S.C. (2020). Artificial intelligence and communication: A human-machine communication research agenda. New Media &amp; Society, 22(1), 70–86. https://doi.org/10.1177/1461444819858691</mixed-citation></ref><ref id="B9"><label>9.</label><mixed-citation>Hepp, A. (2020). Deep Mediatization. London and New York: Routledge.</mixed-citation></ref><ref id="B10"><label>10.</label><mixed-citation>Hepp, A., Bolin, G., Guzman, A., &amp; Loosen, W. (2024). Mediatization and human-machine communication: Trajectories, discussions, perspectives. Human-Machine Communication, 7, 7–21. https://doi.org/10.30658/hmc.7.1</mixed-citation></ref><ref id="B11"><label>11.</label><mixed-citation>Hepp, A., Loosen, W., Dreyer, S., Jarke, J., Kannengießer, S., Katzenbach, Ch., Malaka, R., Pfadenhauer, M., Puschmann, C., &amp; Schulz, W. (2023). ChatGPT, LaMDA, and the hype around communicative AI: The automation of communication as a field of research in media and communication studies. Human-Machine Communication, 6, 41–63. https://doi.org/10.30658/hmc.6.4</mixed-citation></ref><ref id="B12"><label>12.</label><mixed-citation>Hills, M. (2016). From para-social to multisocial interaction: theorizing material/digital fandom and celebrity. In P.D. Marshall, &amp; S. Redmond (Eds.), A Companion to Celebrity (pp. 463–482). Wiley-Blackwell. https://doi.org/10.1002/9781118475089.ch25</mixed-citation></ref><ref id="B13"><label>13.</label><mixed-citation>Hjarvard, S. (2013). The Mediatization of Culture and Society. 
Routledge.</mixed-citation></ref><ref id="B14"><label>14.</label><mixed-citation>Horton, D., &amp; Wohl, R.R. (1956). Mass communication and para-social interaction: Observations on intimacy at a distance. Psychiatry, 19(3), 215–229. https://doi.org/10.1080/00332747.1956.11023049</mixed-citation></ref><ref id="B15"><label>15.</label><mixed-citation>Kalpokas, I. (2020). Problematising reality: The promises and perils of synthetic media. SN Social Sciences, 1(1), 1–11. https://doi.org/10.1007/s43545-020-00010-8</mixed-citation></ref><ref id="B16"><label>16.</label><mixed-citation>Mascheroni, G. (2024). A new family member or just another digital interface? Smart speakers in the lives of families with young children. Human-Machine Communication, 7, 45–63. https://doi.org/10.30658/hmc.7.3</mixed-citation></ref><ref id="B17"><label>17.</label><mixed-citation>Natale, S., &amp; Depounti, I. (2024). Artificial sociality. Human-Machine Communication, 7, 83–98. https://doi.org/10.30658/hmc.7.5</mixed-citation></ref><ref id="B18"><label>18.</label><mixed-citation>Nim, E.G. (2021). Deep mediatization: rethinking a figurational approach. RUDN Journal of Studies in Literature and Journalism, 26(4), 664–671. https://doi.org/10.22363/2312-9220-2021-26-4-664-671</mixed-citation></ref><ref id="B19"><label>19.</label><mixed-citation>Tsuria, R., &amp; Tsuria, Y. (2024). Artificial intelligence’s understanding of religion: Investigating the moralistic approaches presented by generative artificial intelligence tools. Religions, 15(3), 375. https://doi.org/10.3390/rel15030375</mixed-citation></ref><ref id="B20"><label>20.</label><mixed-citation>Waisbord, S. (2019). Communication: A Post-Discipline. John Wiley &amp; Sons.</mixed-citation></ref><ref id="B21"><label>21.</label><mixed-citation>Ytre-Arne, B., &amp; Moe, H. (2020). Folk theories of algorithms: Understanding digital irritation. Media, Culture &amp; Society, 43(5), 807–824. 
https://doi.org/10.1177/0163443720972314</mixed-citation></ref></ref-list></back></article>
