<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20190208//EN" "JATS-journalpublishing1.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ali="http://www.niso.org/schemas/ali/1.0/" article-type="research-article" dtd-version="1.2" xml:lang="en"><front><journal-meta><journal-id journal-id-type="publisher-id">RUDN Journal of Psychology and Pedagogics</journal-id><journal-title-group><journal-title xml:lang="en">RUDN Journal of Psychology and Pedagogics</journal-title><trans-title-group xml:lang="ru"><trans-title>Вестник Российского университета дружбы народов. Серия: Психология и педагогика</trans-title></trans-title-group></journal-title-group><issn publication-format="print">2313-1683</issn><issn publication-format="electronic">2313-1705</issn><publisher><publisher-name xml:lang="en">Peoples’ Friendship University of Russia named after Patrice Lumumba (RUDN University)</publisher-name></publisher></journal-meta><article-meta><article-id pub-id-type="publisher-id">46372</article-id><article-id pub-id-type="doi">10.22363/2313-1683-2025-22-1-96-122</article-id><article-id pub-id-type="edn">UBVZFJ</article-id><article-categories><subj-group subj-group-type="toc-heading" xml:lang="en"><subject>PERSONALITY AND DIGITAL TECHNOLOGIES: OPPORTUNITIES AND CHALLENGES</subject></subj-group><subj-group subj-group-type="toc-heading" xml:lang="ru"><subject>ЛИЧНОСТЬ И ЦИФРОВЫЕ ТЕХНОЛОГИИ: ВОЗМОЖНОСТИ И РИСКИ</subject></subj-group><subj-group subj-group-type="article-type"><subject>Research Article</subject></subj-group></article-categories><title-group><article-title xml:lang="en">Toward a New Level of Human-Chatbot Communication: Goal Management and Mutual Verbal Adaptation</article-title><trans-title-group xml:lang="ru"><trans-title>К новому уровню коммуникации человека и чат-бота: управление целями и взаимная речевая адаптация</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author"><contrib-id 
contrib-id-type="orcid">https://orcid.org/0000-0001-8552-5639</contrib-id><name-alternatives><name xml:lang="en"><surname>Palenova</surname><given-names>Violetta V.</given-names></name><name xml:lang="ru"><surname>Палёнова</surname><given-names>Виолетта Викторовна</given-names></name></name-alternatives><bio xml:lang="en"><p>PhD Student</p></bio><bio xml:lang="ru"><p>аспирантка</p></bio><email>violetta.palenova@yandex.ru</email><xref ref-type="aff" rid="aff1"/></contrib><contrib contrib-type="author"><contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-6612-9726</contrib-id><contrib-id contrib-id-type="scopus">7103245935</contrib-id><contrib-id contrib-id-type="spin">2852-2031</contrib-id><name-alternatives><name xml:lang="en"><surname>Voronin</surname><given-names>Anatoly N.</given-names></name><name xml:lang="ru"><surname>Воронин</surname><given-names>Анатолий Николаевич</given-names></name></name-alternatives><bio xml:lang="en"><p>Doctor of Psychology, Professor, Head of the Laboratory of Speech Psychology and Psycholinguistics</p></bio><bio xml:lang="ru"><p>доктор психологических наук, профессор, заведующий лабораторией психологии речи и психолингвистики</p></bio><email>voroninan@bk.ru</email><xref ref-type="aff" rid="aff2"/></contrib></contrib-group><aff-alternatives id="aff1"><aff><institution xml:lang="en">State Academic University for the Humanities</institution></aff><aff><institution xml:lang="ru">Государственный академический университет гуманитарных наук</institution></aff></aff-alternatives><aff-alternatives id="aff2"><aff><institution xml:lang="en">Institute of Psychology, Russian Academy of Sciences</institution></aff><aff><institution xml:lang="ru">Институт психологии Российской академии наук</institution></aff></aff-alternatives><pub-date date-type="pub" iso-8601-date="2025-10-10" publication-format="electronic"><day>10</day><month>10</month><year>2025</year></pub-date><volume>22</volume><issue>1</issue><issue-title xml:lang="en">VOL 22, 
NO. 1 (2025)</issue-title><issue-title xml:lang="ru">ТОМ 22, №1 (2025)</issue-title><fpage>96</fpage><lpage>122</lpage><history><date date-type="received" iso-8601-date="2025-10-10"><day>10</day><month>10</month><year>2025</year></date></history><permissions><copyright-statement xml:lang="en">Copyright © 2025, Palenova V.V., Voronin A.N.</copyright-statement><copyright-statement xml:lang="ru">Copyright © 2025, Палёнова В.В., Воронин А.Н.</copyright-statement><copyright-year>2025</copyright-year><copyright-holder xml:lang="en">Palenova V.V., Voronin A.N.</copyright-holder><copyright-holder xml:lang="ru">Палёнова В.В., Воронин А.Н.</copyright-holder><ali:free_to_read xmlns:ali="http://www.niso.org/schemas/ali/1.0/"/><license><ali:license_ref xmlns:ali="http://www.niso.org/schemas/ali/1.0/">https://creativecommons.org/licenses/by-nc/4.0</ali:license_ref></license></permissions><self-uri xlink:href="https://journals.rudn.ru/psychology-pedagogics/article/view/46372">https://journals.rudn.ru/psychology-pedagogics/article/view/46372</self-uri><abstract xml:lang="en"><p>As artificial intelligence becomes increasingly integrated into everyday communication, understanding the dynamics of human-chatbot interaction has become a matter of both theoretical importance and practical urgency. This study explores the goals, communicative tactics, and adaptive strategies employed by users and AI chatbots in dialogue, using grounded theory methodology. Based on a corpus of 316 dialogues with ChatGPT, we conducted multi-level coding - substantive, selective, and theoretical - to identify recurring patterns in the organization of digital communication. The analysis revealed a wide range of user goals, including informational, task-oriented, generative, emotional, and exploratory intentions. Chatbots, in turn, pursued structurally narrower but functionally adaptive goals aimed at supporting dialogue coherence and user engagement. 
Both sides employed diverse communicative tactics, including primary, combined, and compensatory strategies. While users initiated goal setting and frequently adjusted their tactics, chatbots demonstrated reactive behavior through clarification, tone adaptation, and metacommunicative responses. A key result is the identification of six basic communicative scenarios in user-chatbot interaction: informational-analytical, practical, creative, emotional-reflective, entertaining-playful, and exploratory-provocative. Each scenario reflects a stable alignment of goals and tactics between the participants, revealing the functional architecture of digital dialogue. The study demonstrates that interaction with generative chatbots is not random, but unfolds within structured communicative configurations. These findings contribute to the theoretical understanding of digital interaction and provide a typological framework for analyzing, designing, and optimizing AI-based communication systems across various domains.</p></abstract><trans-abstract xml:lang="ru"><p>По мере того как искусственный интеллект все глубже интегрируется в повседневную коммуникацию, изучение динамики взаимодействия человека и чат-бота приобретает как теоретическую значимость, так и практическую актуальность. В настоящем исследовании с использованием методологии обоснованной теории проанализированы цели, коммуникативные тактики и адаптационные стратегии, применяемые пользователями и чат-ботами в процессе диалога. На основе корпуса из 316 диалогов с ChatGPT было проведено многоуровневое кодирование - субстантивное, избирательное и теоретическое - с целью определения устойчивых паттернов в организации цифровой коммуникации. Анализ выявил широкий спектр пользовательских целей, включая информационные, практико-ориентированные, генеративные, эмоциональные и исследовательские. 
Цели чат-ботов, в свою очередь, оказались структурно более узкими, но функционально адаптивными - они были направлены на поддержание связности диалога и вовлеченности пользователя. Обе стороны использовали разнообразные коммуникативные тактики, включая первичные, комбинированные и компенсаторные. Пользователи инициировали постановку целей и часто изменяли тактики в ходе взаимодействия, тогда как чат-боты демонстрировали реактивное поведение посредством прояснения, адаптации тона и метакоммуникативных ответов. Ключевым результатом исследования является выделение шести базовых коммуникативных сценариев взаимодействия пользователя и чат-бота: информационно-аналитического, практического, креативного, эмоционально-рефлексивного, развлекательно-игрового и исследовательски-провокационного. Каждый сценарий отражает устойчивую согласованность целей и тактик участников, раскрывая функциональную архитектуру цифрового диалога. Исследование показало, что взаимодействие с генеративными чат-ботами не является случайным, а разворачивается в рамках структурированных коммуникативных конфигураций. 
Полученные результаты способствуют теоретическому осмыслению цифрового взаимодействия и предлагают типологическую основу для анализа, проектирования и оптимизации систем коммуникации на базе ИИ в различных сферах.</p></trans-abstract><kwd-group xml:lang="en"><kwd>communicative tactics</kwd><kwd>chatbot interaction</kwd><kwd>ChatGPT</kwd><kwd>communication goals</kwd><kwd>coping strategies</kwd><kwd>artificial intelligence</kwd><kwd>adaptive speech strategies</kwd></kwd-group><kwd-group xml:lang="ru"><kwd>коммуникативные тактики</kwd><kwd>взаимодействие с чат-ботом</kwd><kwd>ChatGPT</kwd><kwd>цели коммуникации</kwd><kwd>копинг-стратегии</kwd><kwd>искусственный интеллект</kwd><kwd>адаптивные речевые стратегии</kwd></kwd-group><funding-group><award-group><funding-source><institution-wrap><institution xml:lang="ru">Исследование выполнено в рамках государственного задания Министерства науки и высшего образования Российской Федерации (тема № ФСФУ-2025-0005): «Комплексная оценка когнитивных и эмоциональных ресурсов участников онлайн-коммуникации на родном и иностранном языках».</institution></institution-wrap><institution-wrap><institution xml:lang="en">The study was carried out as part of the state assignment from the Ministry of Science and Higher Education of the Russian Federation (topic No. FSFU-2025-0005): “Integrated assessment of cognitive and emotional resources of participants in online communication in their native and foreign languages.”</institution></institution-wrap></funding-source></award-group></funding-group></article-meta><fn-group/></front><body></body><back><ref-list><ref id="B1"><label>1.</label><mixed-citation>Altay, S., Hacquin, A.-S., Chevallier, C., &amp; Mercier, H. (2023). Information delivered by a chatbot has a positive impact on COVID-19 vaccines attitudes and intentions. Journal of Experimental Psychology: Applied, 29(1), 52–62. 
https://doi.org/10.1037/xap0000400</mixed-citation></ref><ref id="B2"><label>2.</label><mixed-citation>Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183–189. https://doi.org/10.1016/j.chb.2018.03.051</mixed-citation></ref><ref id="B3"><label>3.</label><mixed-citation>Bedrina, I.S. (2010). Functional semantic stylistic text analyses. Lingua Mobilis, (7), 19–26. (In Russ.). EDN: MWCGAH</mixed-citation></ref><ref id="B4"><label>4.</label><mixed-citation>Brown, P., &amp; Levinson, S.C. (1987). Politeness: Some universals in language usage. Cambridge: Cambridge University Press. https://doi.org/10.1017/cbo9780511813085</mixed-citation></ref><ref id="B5"><label>5.</label><mixed-citation>Cheng, X., Yin, L., Lin, C., Shi, Z., Zheng, H., Zhu, L., Liu, X., Chen, K., &amp; Dong, R. (2024). Chatbot dialogic reading boosts comprehension for Chinese kindergarteners with higher language skills. Journal of Experimental Child Psychology, 240, 105842. https://doi.org/10.1016/j.jecp.2023.105842</mixed-citation></ref><ref id="B6"><label>6.</label><mixed-citation>Ciechanowski, L., Przegalinska, A., Magnuski, M., &amp; Gloor, P. (2019). In the shades of the uncanny valley: An experimental study of human–chatbot interaction. Future Generation Computer Systems, 92, 539–548. https://doi.org/10.1016/j.future.2018.01.055</mixed-citation></ref><ref id="B7"><label>7.</label><mixed-citation>Croes, E.A.J., &amp; Antheunis, M.L. (2021). Can we be friends with Mitsuku? A longitudinal study on the process of relationship formation between humans and a social chatbot. Journal of Social and Personal Relationships, 38(1), 279–300. https://doi.org/10.1177/0265407520959463</mixed-citation></ref><ref id="B8"><label>8.</label><mixed-citation>De Gennaro, M., Krumhuber, E.G., &amp; Lucas, G. (2020). 
Effectiveness of an empathic chatbot in combating adverse effects of social exclusion on mood. Frontiers in Psychology, 10, 3061. https://doi.org/10.3389/fpsyg.2019.03061</mixed-citation></ref><ref id="B9"><label>9.</label><mixed-citation>Dillard, J.P., Segrin, C., &amp; Harden, J.M. (1989). Primary and secondary goals in the production of interpersonal influence messages. Communication Monographs, 56(1), 19–38. https://doi.org/10.1080/03637758909390247</mixed-citation></ref><ref id="B10"><label>10.</label><mixed-citation>Gayanova, M.M., &amp; Vulfin, A.M. (2022). Structural and semantic analysis of scientific publications in a selected subject area. Systems Engineering and Information Technologies, 4(1), 37–43. (In Russ.). https://doi.org/10.54708/26585014_2022_41837 EDN: SRLPRF</mixed-citation></ref><ref id="B11"><label>11.</label><mixed-citation>Glaser, B.G., &amp; Strauss, A.L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago, IL: Aldine.</mixed-citation></ref><ref id="B12"><label>12.</label><mixed-citation>Grice, H.P. (1975). Logic and conversation. In P. Cole, &amp; J.L. Morgan (Eds.), Syntax and semantics. Vol. 3. Speech acts (pp. 41–58). New York: Academic Press. https://doi.org/10.1163/9789004368811_003</mixed-citation></ref><ref id="B13"><label>13.</label><mixed-citation>Hancock, J.T., Naaman, M., &amp; Levy, K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 25(1), 89–100. https://doi.org/10.1093/jcmc/zmz022</mixed-citation></ref><ref id="B14"><label>14.</label><mixed-citation>Ischen, C., Butler, J., &amp; Ohme, J. (2024). Chatting about the unaccepted: Self-disclosure of unaccepted news exposure behaviour to a chatbot. Behaviour &amp; Information Technology, 43(10), 2044–2056. https://doi.org/10.1080/0144929x.2023.2237605</mixed-citation></ref><ref id="B15"><label>15.</label><mixed-citation>Janson, A. (2023). 
How to leverage anthropomorphism for chatbot service interfaces: The interplay of communication style and personification. Computers in Human Behavior, 149, 107954. https://doi.org/10.1016/j.chb.2023.107954</mixed-citation></ref><ref id="B16"><label>16.</label><mixed-citation>Jiang, Y., Yang, X., &amp; Zheng, T. (2023). Make chatbots more adaptive: Dual pathways linking human-like cues and tailored response to trust in interactions with chatbots. Computers in Human Behavior, 138, 107485. https://doi.org/10.1016/j.chb.2022.107485</mixed-citation></ref><ref id="B17"><label>17.</label><mixed-citation>Konya-Baumbach, E., Biller, M., &amp; von Janda, S. (2023). Someone out there? A study on the social presence of anthropomorphized chatbots. Computers in Human Behavior, 139, 107513. https://doi.org/10.1016/j.chb.2022.107513</mixed-citation></ref><ref id="B18"><label>18.</label><mixed-citation>Liu, B., &amp; Sundar, S.S. (2018). Should machines express sympathy and empathy? Experiments with a health advice chatbot. Cyberpsychology, Behavior, and Social Networking, 21(10), 625–636. https://doi.org/10.1089/cyber.2018.0110</mixed-citation></ref><ref id="B19"><label>19.</label><mixed-citation>Lu, L., McDonald, C., Kelleher, T., Lee, S., Chung, Y.J., Mueller, S., Vielledent, M., &amp; Yue, C.A. (2022). Measuring consumer-perceived humanness of online organizational agents. Computers in Human Behavior, 128, 107092. https://doi.org/10.1016/j.chb.2021.107092</mixed-citation></ref><ref id="B20"><label>20.</label><mixed-citation>Markowitz, D.M., Hancock, J.T., &amp; Bailenson, J.N. (2024). Linguistic markers of inherently false AI communication and intentionally false human communication: Evidence from hotel reviews. Journal of Language and Social Psychology, 43(1), 63–82. https://doi.org/10.1177/0261927x231200201</mixed-citation></ref><ref id="B21"><label>21.</label><mixed-citation>Maurya, R.K. (2024). 
A qualitative content analysis of ChatGPT’s client simulation role-play for practising counselling skills. Counselling and Psychotherapy Research, 24(2), 614–630. https://doi.org/10.1002/capr.12699</mixed-citation></ref><ref id="B22"><label>22.</label><mixed-citation>McGowan, A., Gui, Y., Dobbs, M., Shuster, S., Cotter, M., Selloni, A., Goodman, M., Srivastava, A., Cecchi, G.A., &amp; Corcoran, C.M. (2023). ChatGPT and Bard exhibit spontaneous citation fabrication during psychiatry literature search. Psychiatry Research, 326, 115334. https://doi.org/10.1016/j.psychres.2023.115334</mixed-citation></ref><ref id="B23"><label>23.</label><mixed-citation>Palomares, N.A. (2014). The goal construct in interpersonal communication. In C.R. Berger (Ed.), Interpersonal Communication (pp. 77–100). Berlin, Boston: De Gruyter Mouton. https://doi.org/10.1515/9783110276794.77</mixed-citation></ref><ref id="B24"><label>24.</label><mixed-citation>Park, G., Chung, J., &amp; Lee, S. (2022). Effect of AI chatbot emotional disclosure on user satisfaction and reuse intention for mental health counseling: A serial mediation model. Current Psychology, 42(32), 28663–28673. https://doi.org/10.1007/s12144-022-03932-z</mixed-citation></ref><ref id="B25"><label>25.</label><mixed-citation>Park, G., Yim, M.C., Chung, J., &amp; Lee, S. (2023). Effect of AI chatbot empathy and identity disclosure on willingness to donate: The mediation of humanness and social presence. Behaviour &amp; Information Technology, 42(12), 1998–2010. https://doi.org/10.1080/0144929x.2022.2105746</mixed-citation></ref><ref id="B26"><label>26.</label><mixed-citation>Prescott, J., Ogilvie, L., &amp; Hanley, T. (2024). Student therapists’ experiences of learning using a machine client: A proof-of-concept exploration of an emotionally responsive interactive client (ERIC). Counselling and Psychotherapy Research, 24(2), 524–531. 
https://doi.org/10.1002/capr.12685</mixed-citation></ref><ref id="B27"><label>27.</label><mixed-citation>Rashkin, H., Smith, E.M., Li, M., &amp; Boureau, Y.-L. (2019). Towards empathetic open-domain conversation models: A new benchmark and dataset. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 5370–5381). Florence, Italy: Association for Computational Linguistics. https://doi.org/10.18653/v1/p19-1534</mixed-citation></ref><ref id="B28"><label>28.</label><mixed-citation>Rhee, C.E., &amp; Choi, J. (2020). Effects of personalization and social role in voice shopping: An experimental study on product recommendation by a conversational voice agent. Computers in Human Behavior, 109, 106359. https://doi.org/10.1016/j.chb.2020.106359</mixed-citation></ref><ref id="B29"><label>29.</label><mixed-citation>Rhim, J., Kwak, M., Gong, Y., &amp; Gweon, G. (2022). Application of humanization to survey chatbots: Change in chatbot perception, interaction experience, and survey data quality. Computers in Human Behavior, 126, 107034. https://doi.org/10.1016/j.chb.2021.107034</mixed-citation></ref><ref id="B30"><label>30.</label><mixed-citation>Ricon, T. (2024). How chatbots perceive sexting by adolescents. Computers in Human Behavior: Artificial Humans, 2(1), 100068. https://doi.org/10.1016/j.chbah.2024.100068</mixed-citation></ref><ref id="B31"><label>31.</label><mixed-citation>Sahab, S., Haqbeen, J., Hadfi, R., Ito, T., Imade, R.E., Ohnuma, S., &amp; Hasegawa, T. (2024). E-contact facilitated by conversational agents reduces interethnic prejudice and anxiety in Afghanistan. Communications Psychology, 2(1), 22. https://doi.org/10.1038/s44271-024-00070-z</mixed-citation></ref><ref id="B32"><label>32.</label><mixed-citation>Schrader, D.C., &amp; Dillard, J.P. (1998). Goal structures and interpersonal influence. Communication Studies, 49(4), 276–293. 
https://doi.org/10.1080/10510979809368538</mixed-citation></ref><ref id="B33"><label>33.</label><mixed-citation>Seitz, L. (2024). Artificial empathy in healthcare chatbots: Does it feel authentic? Computers in Human Behavior: Artificial Humans, 2(1), 100067. https://doi.org/10.1016/j.chbah.2024.100067</mixed-citation></ref><ref id="B34"><label>34.</label><mixed-citation>Shaikh, S., Yayilgan, S.Y., Klimova, B., &amp; Pikhart, M. (2023). Assessing the usability of ChatGPT for formal English language learning. European Journal of Investigation in Health, Psychology and Education, 13(9), 1937–1960. https://doi.org/10.3390/ejihpe13090140</mixed-citation></ref><ref id="B35"><label>35.</label><mixed-citation>Shin, H., Bunosso, I., &amp; Levine, L.R. (2023). The influence of chatbot humour on consumer evaluations of services. International Journal of Consumer Studies, 47(2), 545–562. https://doi.org/10.1111/ijcs.12849</mixed-citation></ref><ref id="B36"><label>36.</label><mixed-citation>Skjuve, M., Følstad, A., Fostervold, K.I., &amp; Brandtzaeg, P.B. (2021). My chatbot companion — A study of human-chatbot relationships. International Journal of Human-Computer Studies, 149, 102601. https://doi.org/10.1016/j.ijhcs.2021.102601</mixed-citation></ref><ref id="B37"><label>37.</label><mixed-citation>Sperber, D., &amp; Wilson, D. (1995). Relevance: Communication and cognition (2nd ed.). Oxford: Blackwell Publishers Ltd.</mixed-citation></ref><ref id="B38"><label>38.</label><mixed-citation>Stamp, G.H., &amp; Knapp, M.L. (1990). The construct of intent in interpersonal communication. Quarterly Journal of Speech, 76(3), 282–299. https://doi.org/10.1080/00335639009383920</mixed-citation></ref><ref id="B39"><label>39.</label><mixed-citation>Titscher, S., Meyer, M., Wodak, R., &amp; Vetter, E. (2000). Methods of text and discourse analysis. London: SAGE Publications Ltd. 
https://doi.org/10.4135/9780857024480</mixed-citation></ref><ref id="B40"><label>40.</label><mixed-citation>Wang, X. (2020). Semantic and structural analysis of Internet texts. E-Scio, (4), 51–60. (In Russ.). EDN: PBIGEH</mixed-citation></ref><ref id="B41"><label>41.</label><mixed-citation>Wilson, S.R. (1995). Elaborating the cognitive rules model of interaction goals: The problem of accounting for individual differences in goal formation. Annals of the International Communication Association, 18(1), 3–25. https://doi.org/10.1080/23808985.1995.11678905</mixed-citation></ref><ref id="B42"><label>42.</label><mixed-citation>Youn, S., &amp; Jin, S.V. (2021). “In A.I. we trust?” The effects of parasocial interaction and technopian versus luddite ideological views on chatbot-based customer relationship management in the emerging “feeling economy”. Computers in Human Behavior, 119, 106721. https://doi.org/10.1016/j.chb.2021.106721</mixed-citation></ref></ref-list></back></article>
