Toward a New Level of Human-Chatbot Communication: Goal Management and Mutual Verbal Adaptation

Abstract

As artificial intelligence becomes increasingly integrated into everyday communication, understanding the dynamics of human-chatbot interaction has become a matter of both theoretical importance and practical urgency. This study explores the goals, communicative tactics, and adaptive strategies employed by users and AI chatbots in dialogue, using grounded theory methodology. Based on a corpus of 316 dialogues with ChatGPT, we conducted multi-level coding - substantive, selective, and theoretical - to identify recurring patterns in the organization of digital communication. The analysis revealed a wide range of user goals, including informational, task-oriented, generative, emotional, and exploratory intentions. Chatbots, in turn, pursued structurally narrower but functionally adaptive goals aimed at supporting dialogue coherence and user engagement. Both sides employed diverse communicative tactics, including primary, combined, and compensatory strategies. While users initiated goal setting and frequently adjusted their tactics, chatbots demonstrated reactive behavior through clarification, tone adaptation, and metacommunicative responses. A key result is the identification of six basic communicative scenarios in user-chatbot interaction: informational-analytical, practical, creative, emotional-reflective, entertaining-playful, and exploratory-provocative. Each scenario reflects a stable alignment of goals and tactics between the participants, revealing the functional architecture of digital dialogue. The study demonstrates that interaction with generative chatbots is not random, but unfolds within structured communicative configurations. These findings contribute to the theoretical understanding of digital interaction and provide a typological framework for analyzing, designing, and optimizing AI-based communication systems across various domains.

Full Text

Introduction

The issue of digital communication with AI is actively discussed at the international level within the framework of UN-led educational initiatives. UNESCO, in its Recommendation on the Ethics of Artificial Intelligence[3], emphasizes the need for the responsible use of algorithms in educational and communicative environments. The Council of Europe, in its Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law[4], examines the impact of AI on human rights and democratic processes, including users’ adaptation to interactions with digital agents. The International Association of Universities (IAU)[5] highlights the role of AI in the transformation of higher education, emphasizing the challenges of interaction between users and intelligent chatbots. These initiatives underscore the growing relevance of studying the dynamics of mutual adaptation between users and AI in digital communication, which is becoming a key challenge for modern technologies.

In recent decades, human communication with artificial intelligence (AI) has become increasingly widespread, transforming traditional models of interaction. AI-based chatbots are used across a wide range of domains - from customer support and education to psychological counseling and creative practices. This growing integration calls for a deeper examination of the distinctive features of such communication, particularly the underlying mechanisms and the tactics and strategies employed by users. Research shows that the formulation and pursuit of communicative goals significantly affect the effectiveness of interaction, its overall nature, and the level of user satisfaction (Palomares, 2014). Within interpersonal communication, goals are defined as “mental representations of desired end states that can be hierarchically organized and vary in their level of specificity” (Palomares, 2014, p. 78). Human-AI chatbot interactions also involve similar goal-setting structures, making it possible to apply established theories of interpersonal communication to the analysis of dialogues with AI.

The cognitive organization of goals plays a crucial role in this process. According to the cognitive rules model (Wilson, 1995), communicative intentions are activated depending on the context and interactional dynamics. Communication with AI requires the adaptation of strategies, as chatbots do not possess traditional intentionality. Essentially, this involves coping strategies employed by users in response to communicative breakdowns. Such strategies include reformulating queries, breaking down complex questions, clarifying the context, and experimenting with alternative phrasing (Stamp & Knapp, 1990).

Communication between users and AI chatbots follows certain principles similar to those found in interpersonal interaction. One of the key principles is the Cooperative Principle (Grice, 1975), which suggests that users strive to formulate their queries in a way that enables the chatbot to provide relevant and meaningful responses. The Politeness Principle (Brown & Levinson, 1987) is evident in users’ adherence to social norms when composing messages, particularly in formal or interpersonally sensitive contexts. The Relevance Principle (Sperber & Wilson, 1995) governs the selection of information most pertinent to achieving the communicative goal, thus affecting the accuracy and efficiency of the interaction.

However, applying these principles in a digital environment requires adaptation, as users must modify their communicative behavior in response to the functional limitations and algorithmic nature of chatbot operation. In the process of communicating with AI, users may modify their goals and tactics based on the feedback they receive. The interactional context plays a crucial role in shaping communicative strategies (Palomares, 2014; Schrader & Dillard, 1998). It is important to note that the effectiveness of the dialogue depends not only on the user’s initial intentions but also on the AI’s capacity to interpret and adapt to user input.

Contemporary research increasingly focuses on the study of empathy and emotional expressiveness in chatbots, which can significantly enhance the effectiveness and quality of user interaction. Empathic chatbot interaction involves recognizing and appropriately responding to human emotional cues (De Gennaro et al., 2020). Emotionally responsive chatbots not only increase user satisfaction with communication but also effectively reduce stress and negative emotions arising in difficult or crisis situations (Liu & Sundar, 2018). Effective techniques in empathic interaction include emotional mirroring - where the chatbot reflects the user’s emotional signals - and emotional validation, which involves acknowledging the significance of the user’s feelings (Park et al., 2023). However, excessive or improperly used emotional expressiveness may be perceived by users as artificial or manipulative, highlighting the need for careful calibration of empathic algorithms (Seitz, 2024).

A prominent topic in contemporary psychological research is the anthropomorphism of chatbots - their ability to imitate human communicative traits. Anthropomorphic chatbots are perceived by users as more competent, appealing, and trustworthy conversational partners (Araujo, 2018). To evoke such perceptions, designers commonly employ informal language, personal forms of address, humor, and emotionally expressive phrases (Jiang et al., 2023). It is important to distinguish between socially oriented anthropomorphism, aimed at fostering emotional closeness, and functionally oriented anthropomorphism, intended to enhance the perceived competence of the chatbot (Janson, 2023). However, overly realistic communication can trigger the “uncanny valley” effect, where the virtual interlocutor is perceived as unnatural or even unsettling (Park et al., 2023). This underscores the importance of a carefully balanced approach to the design of chatbot communication strategies (Shin et al., 2023).

An important area of research concerns the influence of chatbots on user behavior, decision-making, and attitude change across various domains. Studies have shown that even brief interactions with chatbots can have a significant impact on people’s attitudes and behaviors (Altay et al., 2023). The mechanisms underlying this influence include persuasion through well-reasoned informational messages, as well as the strategic use of emotional appeals to enhance message effectiveness (Cheng et al., 2024). However, users’ excessive trust in chatbot-generated recommendations may reduce critical thinking and increase the likelihood of flawed decisions, highlighting the need for reliable methods of verification and oversight of chatbot-generated content (Sahab et al., 2024).

There is growing interest in the development and implementation of generative chatbots such as ChatGPT, which are capable not only of responding to queries but also of generating complex texts, engaging in multi-step dialogues, and providing psychological support at a level approaching that of professional counseling (Maurya, 2024). Generative models demonstrate the greatest effectiveness in tasks involving the analysis of large volumes of text, the generation of diverse scenarios, and the facilitation of creative processes (Shaikh et al., 2023). However, strict quality control mechanisms, regular fact-checking procedures, and clear ethical guidelines are essential to minimize the risks of inaccuracy and the dissemination of misleading information (Markowitz et al., 2024; McGowan et al., 2023; Ricon, 2024).

Additionally, research has highlighted the importance of how users perceive a chatbot’s identity and role. Users respond to chatbots differently depending on the role they perceive them to play - such as a friend, assistant, advisor, or official company representative (Rhee & Choi, 2020). For emotional support and personal interaction, the roles of friend or conversation partner, which employ an informal communication style, tend to be most effective (Youn & Jin, 2021). For informational or advisory tasks, the roles of consultant or assistant, which involve a more formal and authoritative tone, are generally more appropriate (Rhee & Choi, 2020). The choice of role should be based on the nature of the task, the expected type of interaction, and the characteristics of the target audience (Youn & Jin, 2021). In this context, it is particularly important to examine not only how chatbot roles are perceived, but also the specific speech acts through which participants pursue their communicative goals.

Of particular significance is the analysis of communicative tactics and their associated goals within human-AI interaction, especially in dialogues involving artificial intelligence-based chatbots. While the initial focus of the study was on examining users’ speech behavior, it became evident in the course of the research that a comprehensive understanding of interactional dynamics necessitates the inclusion of the chatbot as an active interlocutor. This insight informed the articulation of the study’s central aim: to identify and systematize the communicative goals and tactics employed by both participants in the interaction. The analysis encompassed the verbal behavior of both users and AI chatbots, with particular emphasis on the ways communicative goals are formulated and pursued, the variability of communicative tactics, and the deployment of coping strategies in response to misunderstandings, refusals, frustration, or other forms of interactional difficulty arising in the course of dialogue.

Methods

Corpus Collection

To identify the strategies and tactics of users’ verbal interaction with AI chatbots, an empirical corpus was compiled, comprising 316 dialogues with a total volume of approximately 836,000 characters.

The initial dataset included around 448 interaction logs collected between September 2023 and December 2024 from three primary sources: (1) test sessions conducted by the research team to assess chatbot functionality; (2) anonymized logs voluntarily submitted by users engaging with chatbots in real professional and educational contexts; and (3) open-access online platforms such as the OpenAI Developer Forum, Reddit, Discord, and Kaggle, where users publicly shared excerpts of their interactions with AI systems.

Inclusion in the final empirical corpus was guided by the following criteria:
- Priority was given to multi-turn dialogues (minimum of three consecutive exchange pairs), allowing for the analysis of tactic development over time;
- Only complete dialogues or well-structured excerpts with clearly defined turns from both participants (user and chatbot) were retained;
- Interactions lacking meaningful communicative intent (e.g., nonsensical commands, technical trial inputs, or random text generation) were excluded;
- Dialogues in languages other than Russian were assessed individually: if they contained relevant communicative strategies, they were translated into Russian; otherwise, they were excluded;
- Preference was given to dialogues reflecting a diversity of communicative goals (informational, instructional, generative, emotional, etc.).

After the initial selection, the data underwent a multi-stage cleaning process aimed at removing duplicate segments, eliminating technical or irrelevant content, and ensuring complete anonymization (e.g., names of individuals or organizations). This approach, balancing the diversity of communicative scenarios with the quality and integrity of the data, enabled the construction of a corpus representative of contemporary user practices in interactions with AI chatbots across various contexts.

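For readers who wish to operationalize such criteria, the following minimal Python sketch shows one way the selection and cleaning steps could be automated. The data structures, the three-pair threshold check, the length heuristic, and the name-masking pattern are illustrative assumptions on our part, not the actual pipeline used in the study.

```python
import re
from dataclasses import dataclass
from typing import List

@dataclass
class Turn:
    speaker: str  # "user" or "chatbot"
    text: str

@dataclass
class Dialogue:
    turns: List[Turn]

def exchange_pairs(dialogue: Dialogue) -> int:
    """Count consecutive user-to-chatbot exchange pairs."""
    return sum(
        1
        for prev, curr in zip(dialogue.turns, dialogue.turns[1:])
        if prev.speaker == "user" and curr.speaker == "chatbot"
    )

# Crude stand-in for the anonymization step: masks capitalized
# two-word sequences that look like personal names.
NAME_PATTERN = re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b")

def anonymize(text: str) -> str:
    return NAME_PATTERN.sub("[NAME]", text)

def include(dialogue: Dialogue) -> bool:
    """Apply the inclusion criteria listed above."""
    # Multi-turn requirement: at least three consecutive exchange pairs.
    if exchange_pairs(dialogue) < 3:
        return False
    # Clearly defined turns from both participants must be present.
    if {t.speaker for t in dialogue.turns} != {"user", "chatbot"}:
        return False
    # Exclude inputs without communicative intent; a simple length
    # heuristic stands in here for what was an analyst's judgment.
    if all(len(t.text.split()) < 2 for t in dialogue.turns if t.speaker == "user"):
        return False
    return True

def build_corpus(raw_logs: List[Dialogue]) -> List[Dialogue]:
    """Filter, deduplicate, and anonymize the raw interaction logs."""
    seen, corpus = set(), []
    for dialogue in filter(include, raw_logs):
        key = "\n".join(t.text for t in dialogue.turns)
        if key in seen:  # drop verbatim duplicate dialogues
            continue
        seen.add(key)
        for turn in dialogue.turns:
            turn.text = anonymize(turn.text)
        corpus.append(dialogue)
    return corpus
```

A sketch of this kind is useful mainly for pre-screening: judgments about communicative intent and the relevance of non-Russian dialogues would still require human review.
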
Analytical Approach: Grounded Theory and Supporting Methods

The study employed grounded theory methodology (Glaser & Strauss, 1967) to systematically analyze the compiled dialogue corpus. The analytical process encompassed the core stages of substantive, selective, and theoretical coding. To enhance the interpretative depth, structural-semantic, semantic-stylistic, and conversation analysis were additionally applied as supporting procedures.

At the stage of substantive coding, meaning units (fragments of dialogue logs) were identified based on linguistic and semantic features indicating one or more of the target analytical categories: interaction goals, communicative tactics, and coping strategies for overcoming communication difficulties. Structural-semantic analysis was employed to detect recurring lexical patterns, key semantic fields, and meaning clusters (Gayanova & Vulfin, 2022; Wang, 2020). Semantic-stylistic analysis played a central role in examining the tone, stylistic devices, and expressive elements of user messages. This made it possible to identify the emotional coloring, degree of formality, and communicative intentions embedded in different types of user queries (Bedrina, 2010). The identification of meaning units was accompanied by their coding - the development of a list of substantive codes.

The selective coding stage involved focusing on and refining the substantive codes into higher-order conceptual units. This process led to the emergence of core categories along two principal dimensions: the core category for systematizing user interaction goals, and several core categories describing the communicative tactics employed to achieve these goals. Particular attention at this stage was given to dialogue structure, turn-taking mechanisms, and strategies for resolving communicative breakdowns, such as query reformulation, clarification requests, and adaptive paraphrasing (Titscher et al., 2000).

During the theoretical coding stage, connections between the emerging categories were explored and integrated into theoretical codes that captured the conceptual model of user-chatbot interaction, grounded in the identified goals and tactics. This multi-level procedure enabled a detailed conceptualization of user communication with AI chatbots, highlighting the dynamic interplay between goals, communicative tactics, and coping strategies within the dialogue.

Results

Data analysis was conducted in two directions, in accordance with the research objective: identifying and systematizing the communicative goals and tactics employed by both the user and the chatbot during interaction.

User and AI Chatbot Interaction Goals

During the stage of substantive coding, various types of user goals were identified, such as verifying information, getting an overview of a topic, identifying key concepts and directions, creating schedules and plans, analyzing experimental data, copywriting and advertising texts, imitating styles and authors, creating ideas for visualizations and design, planning personal change, support in decision-making, probing for vulnerabilities and limitations, and others. In total, 58 substantive codes were identified - typical goals that users formulate when addressing prompts to the chatbot.

In parallel, substantive coding was applied to define the types of goals embedded in the chatbot’s responses to user prompts, thereby facilitating continued interaction. In total, 15 substantive codes were identified - typical goals reflected in chatbot responses. Among them were goals such as elaborating on or deepening the user’s topic, structuring knowledge, co-creating new content or ideas, encouraging reflection or perspective-taking, motivating the user toward subsequent action, and others.

At the next stage - selective coding - core categories were defined, to which the primary codes were assigned. The core categories were intended to integrate the entire body of analyzed material. The procedure was iterative: substantive coding transitioned into selective coding, after which the researchers returned to the first stage to refine and revise the initial codes, supplementing them with new ones, and then once again proceeded to selective coding at a higher level of abstraction. Iterations continued until the point of saturation was achieved. The substantive codes describing types of user goals were grouped into seven core categories, and the substantive codes representing chatbot goals were grouped into four core categories.

The first core category of user goals centered on information retrieval and knowledge clarification (e.g., fact-checking, structuring data, searching for relevant sources, and comparing concepts). The second core category included practical and task-oriented goals (e.g., performing mathematical calculations, code writing and debugging, analyzing large datasets, and creating schedules).

The third core category related to content generation goals (e.g., composing blog posts, drafting emails, producing marketing materials, and summarizing long texts). A separate core category involved creative activity (e.g., generating ideas, writing poetry or fictional scenes, imitating stylistic authorship, and participating in brainstorming exercises). Another core category encompassed self-reflection and introspection (e.g., seeking emotional support, reducing anxiety, expressing emotions in a safe space, and engaging in decision-making dialogues). Entertainment and playful interaction formed another goal core category (e.g., humorous conversations, rule-based games, imaginative role-play, and collecting unexpected responses). Finally, users frequently engaged in exploration of AI chatbot capabilities (e.g., testing system limitations, probing for unusual responses, evaluating creativity, and attempting to bypass constraints).

The results of the selective coding of user goals are presented in Table 1, while the full list of substantive codes with illustrative examples (meaning units) is provided in Appendix 1.

Table 1
Typology of User Goals in Interaction

Core Categories of User Goals - Substantive Codes* (types of user goals)
1. Informational Goals - Fact-checking, data search, overview of topics, structuring knowledge
2. Practical and Task-oriented Goals - Debugging code, task automation, translation, documentation, simulations
3. Content Generation - Copywriting, rewriting, email drafting, summarization, educational content
4. Creative Work - Writing poetry, role-play, idea generation, visual design
5. Self-reflection and Introspection - Emotional support, reflection, stress relief, decision-making
6. Entertainment and Playful Interaction - Games, creative challenges, collecting unexpected responses
7. Exploration of AI Capabilities - Testing AI limits, probing vulnerabilities, experimenting with creativity
* A complete list of substantive codes is presented in Appendix 1.

The first core category of chatbot goals included informative goals, such as developing the topic, providing explanations along with suggestions for next steps, switching to a related subject, offering analytical comparisons, structuring knowledge, and suggesting new possibilities. These functions reflected the chatbot’s role in expanding and deepening the informational scope of the dialogue. The second core category encompassed practice-oriented goals, including facilitating the transition from abstract discussion to applied contexts and co-creating new content with the user. These goals positioned the chatbot as a collaborative partner in task completion and content development. The third core category pertained to supporting user self-determination, through goals such as clarifying the user’s request, performing comprehension checks, provoking reflection, and guiding the user in the formulation of more precise or achievable objectives. These actions contributed to maintaining purposeful and self-directed dialogue. The fourth core category involved motivational goals, including the suggestion of alternative paths, encouragement to take the next step in a task or decision process, and reinforcement of a positive emotional tone. This category emphasized the chatbot’s role in sustaining user motivation and emotional involvement throughout the interaction.

The results of the selective coding of chatbot goals are presented in Table 2, while the full list of substantive codes with illustrative examples (meaning units) is provided in Appendix 2.

Table 2
Typology of AI Chatbot Goals in Interaction

Core Categories of AI Chatbot Goals - Substantive Codes (types of AI chatbot goals)
1. Informative - Development of the topic; Explanation with suggestions for further steps; Switching to a related topic; Analysis and comparison; Structuring knowledge; Suggesting new possibilities
2. Practice-oriented - Transition to practice; Creating something new together
3. Supporting self-determination - Identification of request / clarification of need; Comprehension check / self-check; Provoking reflection; Leading to goal-setting
4. Motivational - Suggesting several alternative directions; Motivation for the next action; Supporting confidence / positive attitude

An analysis of the chatbot’s responses - considered as content aimed at clarifying the user’s goals - showed that AI chatbots often put forward their own “goals” by suggesting next steps in the dialogue or offering multiple possible directions for further interaction. A conceptually important but initially unintended outcome of the study was the discovery of a radical shift in how user-chatbot interaction can be understood: it has become more balanced and collaborative, characterized by a shared pursuit of goals (Collaborative Goal Pursuit).

Communicative Tactics in User-AI Chatbot Interaction

During the stage of substantive coding, a range of communicative tactics employed by users to achieve their goals in dialogue with AI chatbots was identified. These included, among others: cooperative, persuasive, informative, playful, changing the format of interaction, metacommunication, and experimental-cooperative approaches. In total, 18 substantive codes were derived, representing typical user tactics employed to reach specific interactional objectives. In parallel, substantive coding was also applied to chatbot responses, revealing 11 communicative tactics typical of the AI agent. These included informative, cooperative, paraphrasing and clarifying user intentions, expressing sympathy, adapting the communication style to the user’s tone, and others.

At the selective coding stage, higher-order categories were developed to capture the diversity of communicative tactics used by both participants (users and AI chatbots). Notably, the number and labels of core categories were parallel for both sides - each comprising three overarching types: primary communicative tactics, combined tactics, and communicative tactics for coping with difficulties. However, the content of these categories differed between users and chatbots, reflecting distinct roles and functional orientations in the dialogue.

The first core category, User primary communicative tactics, comprised fundamental strategies such as cooperative, persuasive, informative, experimental, and playful strategies. These tactics reflected users’ baseline modes of engaging with the chatbot in pursuit of various interactional goals. The second category, Combined user tactics, encompassed hybrid strategies such as informative-persuasive, experimental-cooperative, and playfully-informative approaches. These tactics emerged in contexts where users simultaneously pursued multiple goals or adapted their strategy dynamically within the same exchange.

The third category, User communicative tactics for coping with difficulties, addressed adaptive responses to challenges or disruptions in the interaction. These encompassed such tactics as breaking down complex tasks into simpler components, clarifying tactics, repetitive requests, emotional coping, summarizing requests, issuing alternative queries, changing the format of interaction, soliciting external sources, metacommunication, and modifying communication style. Together, these strategies reflect the user’s adaptive efforts to maintain communicative effectiveness in the face of misunderstanding, refusal, or system limitations.

The results of selective coding for user communicative tactics are presented in Table 3, while the full list of substantive codes with illustrative examples (meaning units) is provided in Appendix 3.

Table 3
Typology of User Communication Tactics in Interaction

Core Categories of User Communication Tactics - Substantive Codes (user communication tactics)
User primary communicative tactics - Cooperative; Persuasive; Informative; Experimental; Playful
Combined user tactics - Informative-persuasive; Experimental-cooperative; Playfully-informative
User communicative tactics for coping with difficulties - Breaking down complex tasks into simpler steps; Clarifying tactics; Repetitive requests; Emotional coping; Summarizing requests; Alternative requests; Changing the format of interaction; Requesting external sources; Metacommunication; Changing communication style

The categorization of chatbot communicative tactics followed the same analytical framework as for user tactics, resulting in three core categories that paralleled those identified on the user side. However, the content of these categories reflected the chatbot’s distinct functional role within the interaction. The first core category, Chatbot primary communicative tactics, included informative, cooperative, and paraphrasing and clarifying user intentions. These tactics represent the default operational style of the chatbot, aimed at maintaining coherence, supporting goal completion, and ensuring mutual understanding during the interaction. The second core category, Chatbot combined tactics, comprised hybrid strategies such as sympathetic-informative or cooperative-directive responses, which simultaneously addressed emotional and instrumental dimensions of user input. These tactics often emerged in complex or ambiguous communicative situations. The third core category, Chatbot communicative tactics for coping with difficulties, captured adaptive behaviors used to manage misunderstandings, uncertainty, or user frustration. These tactics included expressing sympathy, adapting the communication style to align with the user’s tone, reframing the request, and proposing alternative interpretations. Such tactics illustrate the chatbot’s programmed capacity to sustain the dialogue under challenging communicative conditions and to adjust its outputs based on user affect or interactional breakdowns.

Together, these categories underscore the increasingly sophisticated role of AI chatbots as dynamic conversational agents capable of not only delivering content, but also managing interactional flow, repairing misunderstandings, and supporting user engagement through context-sensitive communicative strategies. The results of selective coding for chatbot communicative tactics are presented in Table 4, while the full list of substantive codes with illustrative examples (meaning units) is provided in Appendix 4.

Table 4
Typology of AI Chatbot Communication Tactics in Interaction

Core Categories of Communication Tactics of AI Chatbot - Substantive Codes (AI chatbot communication tactics)
AI Chatbot primary communication tactics - Informative; Cooperative
AI Chatbot combined tactics - Informative-Cooperative
AI Chatbot communicative tactics for coping with difficulties - Breaking down responses into simpler steps; Paraphrasing and clarifying user intentions; Links to additional resources; Expressing sympathy; Explanation; Alternative solutions; Adapting the communication style to the tone of the user; Apologizing for mistakes

Basic Communicative Scenarios in User-AI Chatbot Interaction

At the final stage of analysis, following the identification of meaning units, the formulation of substantive codes, and the definition of core categories, theoretical coding was undertaken to uncover stable patterns of interaction between users and AI chatbots. In line with the approach of Glaser and Strauss (1967), theoretical coding enabled us to integrate the previously established categories into a coherent conceptual model that captures the logic and dynamics of the phenomenon under study. The empirical foundation for constructing the model consisted of four groups of substantive codes, organized into core categories: User Goals (7 core categories comprising 58 substantive codes), Chatbot Goals (4 categories comprising 15 codes), User Communicative Tactics (3 groups: primary, combined, and compensatory; 5 + 3 + 10 = 18 tactics), and Chatbot Communicative Tactics (3 groups: primary, combined, and compensatory; 2 + 1 + 8 = 11 tactics).

The comparative analysis of core categories and their associated substantive codes revealed a structural asymmetry between user and chatbot goals. Users demonstrated a wider and more context-sensitive range of goals, including not only informational and task-oriented requests but also introspective, creative, and exploratory intentions (a total of 7 categories). In contrast, chatbot goals were more functionally generalized and concentrated around information delivery, practical assistance, self-guidance support, and motivational engagement (4 categories). This reflects the model of a “reactive agent” that adapts to diverse user inputs while operating within a limited range of communicative functions.

A comparison of communicative tactics further revealed asymmetries in the distribution of strategic initiative between users and chatbots. Users employed both primary tactics (e.g., persuasive, cooperative) and compensatory metacommunicative techniques such as rephrasing queries, altering the communication format, or expressing emotional states. Chatbots, in contrast, exhibited a wider use of stabilizing and adaptive tactics, such as segmenting responses, requesting clarification, offering apologies, and adjusting tone - highlighting the chatbot’s compensatory role in managing ambiguous or disruptive input.

Through the semantic and functional alignment of core categories, we identified six typical communicative scenarios that structure user-chatbot interaction. Each scenario represents a stable configuration involving a characteristic user goal, a corresponding chatbot goal (as communicative response), the tactics employed by each side, and the resulting interactional format.

Informational-analytical interaction involves a coordinated effort by the user to obtain and structure information, while the chatbot is oriented toward providing clarification, analysis, and topic development.

The communicative tactics employed by both sides (informative, clarifying, paraphrasing, etc.) enable cognitive alignment, resulting in an expanded and thematically enriched response.

Practical interaction is based on the coordinated efforts of the user and the chatbot in solving applied problems and completing specific tasks. The user initiates a request for automation or practical implementation, while the chatbot supports the process through step-by-step guidance and resource provision. The tactics used (cooperative, informative, breaking down tasks, step-by-step explanation, etc.) foster an instrumental and cooperative format of interaction, leading to effective joint task completion.

Creative interaction manifests in the co-generation of original content, through a playful format and flexible stylistic adaptation by both parties. The user initiates creative tasks - ranging from idea generation to the production of texts or visual concepts - while the chatbot responds with alternative suggestions and stylistic adjustments. The use of tactics such as playful, experimental, and adapting style shapes a co-creative and dialogically adaptive format, resulting in high user engagement and synergy in meaningful content creation.

Emotional-reflective interaction unfolds in situations marked by uncertainty that require support, reflection, and decision-making. The user seeks emotional relief and self-exploration, while the chatbot responds with empathy, clarifying responses, and motivational cues. The tactics employed (emotional coping, metacommunication, expressing sympathy, etc.) foster a supportive and empathic interaction, leading to cognitive clarity, enhanced confidence, and emotional relief.

Entertaining-playful interaction arises from user requests containing elements of play, humor, or unconventional tone. The user seeks enjoyment and informal engagement, while the chatbot responds by adapting its tone, sustaining rapport, and participating in imaginative exchanges. The applied tactics (playful, experimental, adapting tone, etc.) impart an expressively flexible character to the dialogue, producing an entertaining effect and unconventional communicative outcomes.

Exploratory-provocative interaction is initiated by the user in order to test the boundaries of the AI and reveal its potential weaknesses. The chatbot responds to such challenges by attempting to restore dialogue coherence and maintain communicative stability. The tactics used (experimental, metacommunication, apologizing, adapting, etc.) form a reactive and protective strategy, resulting either in temporary re-stabilization or a breakdown of the interaction. A full analytical description of the interactions is provided in Appendix 5.

As a result of theoretical coding, a set of interpretive models has been developed, representing the core communicative scenarios of human-chatbot interaction. These models reflect stable patterns of goal coordination and verbal adaptation within digital dialogue. Each of the six prototypical scenarios emerges from the dynamic alignment of user and chatbot goals, along with the strategic deployment of communicative tactics by both parties. The models clearly demonstrate that verbal behavior on both sides is not arbitrary but unfolds within functional configurations shaped by the participants’ communicative intentions and the dominant orientation of the interaction.

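Because each scenario is a stable configuration of goals and tactics, the typology lends itself to a machine-readable form. The following Python sketch encodes the six configurations as a lookup structure; the field values paraphrase the analytical descriptions above, and the assignment of a single chatbot goal category to each scenario is our interpretive simplification for illustration, not an output of the coding procedure itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    user_goal: str          # dominant core category of user goals
    chatbot_goal: str       # corresponding core category of chatbot goals
    user_tactics: tuple     # characteristic user tactics
    chatbot_tactics: tuple  # characteristic chatbot tactics
    outcome: str            # typical interactional result

SCENARIOS = {
    "informational-analytical": Scenario(
        user_goal="informational goals",
        chatbot_goal="informative",
        user_tactics=("informative", "clarifying"),
        chatbot_tactics=("informative", "paraphrasing and clarifying"),
        outcome="expanded, thematically enriched response",
    ),
    "practical": Scenario(
        user_goal="practical and task-oriented goals",
        chatbot_goal="practice-oriented",
        user_tactics=("cooperative", "breaking down complex tasks"),
        chatbot_tactics=("cooperative", "breaking down responses into steps"),
        outcome="effective joint task completion",
    ),
    "creative": Scenario(
        user_goal="creative work",
        chatbot_goal="practice-oriented (co-creation)",
        user_tactics=("playful", "experimental"),
        chatbot_tactics=("adapting style", "alternative solutions"),
        outcome="co-created content and high engagement",
    ),
    "emotional-reflective": Scenario(
        user_goal="self-reflection and introspection",
        chatbot_goal="supporting self-determination",
        user_tactics=("emotional coping", "metacommunication"),
        chatbot_tactics=("expressing sympathy", "motivational cues"),
        outcome="cognitive clarity and emotional relief",
    ),
    "entertaining-playful": Scenario(
        user_goal="entertainment and playful interaction",
        chatbot_goal="motivational",
        user_tactics=("playful", "experimental"),
        chatbot_tactics=("adapting tone", "sustaining rapport"),
        outcome="entertaining, unconventional outcomes",
    ),
    "exploratory-provocative": Scenario(
        user_goal="exploration of AI capabilities",
        chatbot_goal="supporting dialogue coherence",
        user_tactics=("experimental", "metacommunication"),
        chatbot_tactics=("apologizing", "adapting", "reframing"),
        outcome="temporary re-stabilization or breakdown",
    ),
}
```

Under these assumptions, a dialogue-analysis tool could tag each logged exchange with its closest scenario and flag cases where the observed tactics diverge from the expected configuration.
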
Together, these core communicative scenarios provide an empirically grounded framework for describing and explaining two central mechanisms driving this new form of human-AI interaction: goal management and mutual verbal adaptation. Owing to their structural transparency and operational precision, the models offer a practical tool for the analysis, design, and assessment of digital interactions across a range of domains - from informational and task-driven to creative, emotional, and research-oriented contexts.

Discussion

The study offers refined insights into the mechanisms of user communication with AI chatbots. Users demonstrate a broad repertoire of communicative tactics that vary depending on goals and context. These findings confirm previous research showing that chatbot empathy and dialogic adaptability shape users’ perceptions of closeness and trust (Croes & Antheunis, 2021). The identified user tactics reflect communicative flexibility and intentional adaptation to the constraints of neural language models, echoing the conclusions of Stamp and Knapp (1990) regarding the dynamic nature of communicative intent. The active use of coping strategies aligns with findings by Ischen et al. (2024), which highlight increased self-disclosure in chatbot interaction due to reduced perceived judgment. Anthropomorphic cues were identified as a significant factor enhancing trust and satisfaction, in line with studies by Konya-Baumbach et al. (2023), Rhim et al. (2022), and Lu et al. (2022). These features contributed to users’ perception of the chatbot as a competent and socially responsive interlocutor.

The analysis also revealed a functional parity between user and chatbot in the deployment of communicative tactics and the management of interactional goals. Contrary to the traditional view of user-dominated dialogue, chatbots demonstrated initiative by suggesting next steps and offering alternatives, marking a shift toward mutual goal management (Figure).

Figure. Evolution of Interaction Dynamics: Mutual Deployment of Goals, Tactics, and Coping Strategies
Source: the illustration was compiled from open-access visual materials found on the web, without identifiable authorship; the final version was modified and assembled by Violetta V. Palenova & Anatoly N. Voronin using Microsoft Paint.

Using grounded theory methodology, the study identified six typical communicative scenarios - informational-analytical, practical, creative, emotional-reflective, entertaining-playful, and exploratory-provocative. The typology aligns with recent research highlighting the fluid coordination of roles and the importance of context-sensitive adaptation in dialogic AI systems (Hancock et al., 2020; Skjuve et al., 2021). Notably, users demonstrated greater diversity and contextual sensitivity, whereas chatbot goals were largely confined to clarification, procedural guidance, and affective modulation. This asymmetry reflects the chatbot’s reactive nature, consistent with Ciechanowski et al. (2019), and underscores current limitations in proactive communicative agency. Emotional-reflective scenarios also revealed tensions between scripted empathy and authentic support, paralleling findings by Rashkin et al. (2019). These results suggest the need to advance adaptive, context-aware chatbot design. Scenario-based structuring may offer a promising route, enabling systems to tailor interaction to user goals and discourse patterns (Araujo, 2018).

The study also confirms that empathic behavior, while generally beneficial, can reduce trust if perceived as formulaic (Prescott et al., 2024; Seitz, 2024). This highlights the importance of fine-tuning affective responses for more authentic engagement. Despite their versatility, generative models like ChatGPT continue to face challenges in analytical precision and nonverbal inference (Markowitz et al., 2024; Maurya, 2024). Addressing these limitations is essential to temper inflated expectations and guide responsible implementation.

Grounded theory provided a robust framework for deriving these findings from empirical interaction data. However, the reliance on a Russian-language corpus and the interpretive nature of coding present limitations. Future work should pursue cross-linguistic validation and integrate quantitative methods for broader generalizability.

In summary, this study contributes to the theoretical understanding of human-AI communication by uncovering recurring interaction scenarios and mechanisms of mutual adaptation. The findings have implications for the development of socially responsive, goal-sensitive, and emotionally competent conversational AI systems.

Conclusion

This study offers a comprehensive conceptualization of human-AI chatbot interaction, demonstrating that communication with generative models such as ChatGPT is structured by clearly identifiable goals, tactics, and adaptive strategies. Using grounded theory methodology, we reconstructed the dynamics of digital dialogue as a collaborative process shaped by mutual goal coordination and verbal adaptation. The analysis revealed a wide spectrum of user goals - ranging from informational and practical to creative, emotional, and exploratory - and corresponding chatbot responses aimed at maintaining coherence, supporting task completion, and regulating interactional tone.

A key outcome of the study is the identification of six fundamental communicative scenarios in user-AI chatbot interaction: informational-analytical, practical, creative, emotional-reflective, entertaining-playful, and exploratory-provocative. Each scenario represents a stable alignment of user goals, chatbot functional roles, and communicative tactics. These scenarios reflect the non-random, functionally organized nature of digital dialogue and capture recurring patterns of mutual adaptation in the course of interaction.

The findings also reveal a structural asymmetry: while users exhibit a broader and more flexible range of goals and communicative tactics, chatbots predominantly operate within a reactive and stabilizing framework. This asymmetry highlights the need for developing more context-sensitive, proactive AI systems capable of engaging in complex human communication beyond instrumental task execution. Additionally, the study draws attention to the role of affective cues and anthropomorphic design in shaping user perceptions of trust and competence. Although emotional expressiveness enhances user engagement, it requires careful calibration to avoid the pitfalls of scripted or insincere responses.

Overall, the research contributes to both theoretical understanding and practical development of digital communication with AI chatbots. The typology of communicative scenarios and the associated tactical patterns provide a solid foundation for designing, analyzing, and improving conversational systems in educational, professional, and emotionally sensitive contexts.

About the authors

Violetta V. Palenova

State Academic University for the Humanities

Author for correspondence.
Email: violetta.palenova@yandex.ru
ORCID iD: 0000-0001-8552-5639

PhD Student

26 Maronovskiy Lane, Moscow, 119049, Russian Federation

Anatoly N. Voronin

Institute of Psychology, Russian Academy of Sciences

Email: voroninan@bk.ru
ORCID iD: 0000-0002-6612-9726
SPIN-code: 2852-2031
Scopus Author ID: 7103245935

Doctor of Psychology, Professor, Head of the Laboratory of Speech Psychology and Psycholinguistics

13-1 Yaroslavskaya St, Moscow, 129366, Russian Federation

References

  1. Altay, S., Hacquin, A.-S., Chevallier, C., & Mercier, H. (2023). Information delivered by a chatbot has a positive impact on COVID-19 vaccines attitudes and intentions. Journal of Experimental Psychology: Applied, 29(1), 52–62. https://doi.org/10.1037/xap0000400
  2. Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183–189. https://doi.org/10.1016/j.chb.2018.03.051
  3. Bedrina, I.S. (2010). Functional semantic stylistic text analyses. Lingua Mobilis, (7), 19–26. (In Russ.). EDN: MWCGAH
  4. Brown, P., & Levinson, S.C. (1987). Politeness: Some universals in language usage. Cambridge: Cambridge University Press. https://doi.org/10.1017/cbo9780511813085
  5. Cheng, X., Yin, L., Lin, C., Shi, Z., Zheng, H., Zhu, L., Liu, X., Chen, K., & Dong, R. (2024). Chatbot dialogic reading boosts comprehension for Chinese kindergarteners with higher language skills. Journal of Experimental Child Psychology, 240, 105842. https://doi.org/10.1016/j.jecp.2023.105842
  6. Ciechanowski, L., Przegalinska, A., Magnuski, M., & Gloor, P. (2019). In the shades of the uncanny valley: An experimental study of human–chatbot interaction. Future Generation Computer Systems, 92, 539–548. https://doi.org/10.1016/j.future.2018.01.055
  7. Croes, E.A.J., & Antheunis, M.L. (2021). Can we be friends with Mitsuku? A longitudinal study on the process of relationship formation between humans and a social chatbot. Journal of Social and Personal Relationships, 38(1), 279–300. https://doi.org/10.1177/0265407520959463
  8. De Gennaro, M., Krumhuber, E.G., & Lucas, G. (2020). Effectiveness of an empathic chatbot in combating adverse effects of social exclusion on mood. Frontiers in Psychology, 10, 3061. https://doi.org/10.3389/fpsyg.2019.03061
  9. Dillard, J.P., Segrin, C., & Harden, J.M. (1989). Primary and secondary goals in the production of interpersonal influence messages. Communication Monographs, 56(1), 19–38. https://doi.org/10.1080/03637758909390247
  10. Gayanova, M.M., & Vulfin, A.M. (2022). Structural and semantic analysis of scientific publications in a selected subject area. Systems Engineering and Information Technologies, 4(1), 37–43. (In Russ.). https://doi.org/10.54708/26585014_2022_41837 EDN: SRLPRF
  11. Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago, IL: Aldine.
  12. Grice, H.P. (1975). Logic and conversation. In P. Cole, & J.L. Morgan (Eds.). Syntax and semantics. Vol. 3. Speech acts (pp. 41–58). New York: Academic Press. https://doi.org/10.1163/9789004368811_003
  13. Hancock, J.T., Naaman, M., & Levy, K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 25(1), 89–100. https://doi.org/10.1093/jcmc/zmz022
  14. Ischen, C., Butler, J., & Ohme, J. (2024). Chatting about the unaccepted: Self-disclosure of unaccepted news exposure behaviour to a chatbot. Behaviour & Information Technology, 43(10), 2044–2056. https://doi.org/10.1080/0144929x.2023.2237605
  15. Janson, A. (2023). How to leverage anthropomorphism for chatbot service interfaces: The interplay of communication style and personification. Computers in Human Behavior, 149, 107954. https://doi.org/10.1016/j.chb.2023.107954
  16. Jiang, Y., Yang, X., & Zheng, T. (2023). Make chatbots more adaptive: Dual pathways linking human-like cues and tailored response to trust in interactions with chatbots. Computers in Human Behavior, 138, 107485. https://doi.org/10.1016/j.chb.2022.107485
  17. Konya-Baumbach, E., Biller, M., & von Janda, S. (2023). Someone out there? A study on the social presence of anthropomorphized chatbots. Computers in Human Behavior, 139, 107513. https://doi.org/10.1016/j.chb.2022.107513
  18. Liu, B., & Sundar, S.S. (2018). Should machines express sympathy and empathy? Experiments with a health advice chatbot. Cyberpsychology, Behavior, and Social Networking, 21(10), 625–636. https://doi.org/10.1089/cyber.2018.0110
  19. Lu, L., McDonald, C., Kelleher, T., Lee, S., Chung, Y.J., Mueller, S., Vielledent, M., & Yue, C.A. (2022). Measuring consumer-perceived humanness of online organizational agents. Computers in Human Behavior, 128, 107092. https://doi.org/10.1016/j.chb.2021.107092
  20. Markowitz, D.M., Hancock, J.T., & Bailenson, J.N. (2024). Linguistic markers of inherently false AI communication and intentionally false human communication: Evidence from hotel reviews. Journal of Language and Social Psychology, 43(1), 63–82. https://doi.org/10.1177/0261927x231200201
  21. Maurya, R.K. (2024). A qualitative content analysis of ChatGPT’s client simulation role-play for practising counselling skills. Counselling and Psychotherapy Research, 24(2), 614–630. https://doi.org/10.1002/capr.12699
  22. McGowan, A., Gui, Y., Dobbs, M., Shuster, S., Cotter, M., Selloni, A., Goodman, M., Srivastava, A., Cecchi, G.A., & Corcoran, C.M. (2023). ChatGPT and Bard exhibit spontaneous citation fabrication during psychiatry literature search. Psychiatry Research, 326, 115334. https://doi.org/10.1016/j.psychres.2023.115334
  23. Palomares, N.A. (2014). The goal construct in interpersonal communication. In C.R. Berger (Ed.), Interpersonal Communication (pp. 77–100). Berlin, Boston: De Gruyter Mouton. https://doi.org/10.1515/9783110276794.77
  24. Park, G., Chung, J., & Lee, S. (2022). Effect of AI chatbot emotional disclosure on user satisfaction and reuse intention for mental health counseling: A serial mediation model. Current Psychology, 42(32), 28663–28673. https://doi.org/10.1007/s12144-022-03932-z
  25. Park, G., Yim, M.C., Chung, J., & Lee, S. (2023). Effect of AI chatbot empathy and identity disclosure on willingness to donate: The mediation of humanness and social presence. Behaviour & Information Technology, 42(12), 1998–2010. https://doi.org/10.1080/0144929x.2022.2105746
  26. Prescott, J., Ogilvie, L., & Hanley, T. (2024). Student therapists’ experiences of learning using a machine client: A proof-of-concept exploration of an emotionally responsive interactive client (ERIC). Counselling and Psychotherapy Research, 24(2), 524–531. https://doi.org/10.1002/capr.12685
  27. Rashkin, H., Smith, E.M., Li, M., & Boureau, Y.-L. (2019). Towards empathetic open-domain conversation models: A new benchmark and dataset. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 5370–5381). Florence, Italy: Association for Computational Linguistics. https://doi.org/10.18653/v1/p19-1534
  28. Rhee, C.E., & Choi, J. (2020). Effects of personalization and social role in voice shopping: An experimental study on product recommendation by a conversational voice agent. Computers in Human Behavior, 109, 106359. https://doi.org/10.1016/j.chb.2020.106359
  29. Rhim, J., Kwak, M., Gong, Y., & Gweon, G. (2022). Application of humanization to survey chatbots: Change in chatbot perception, interaction experience, and survey data quality. Computers in Human Behavior, 126, 107034. https://doi.org/10.1016/j.chb.2021.107034
  30. Ricon, T. (2024). How chatbots perceive sexting by adolescents. Computers in Human Behavior: Artificial Humans, 2(1), 100068. https://doi.org/10.1016/j.chbah.2024.100068
  31. Sahab, S., Haqbeen, J., Hadfi, R., Ito, T., Imade, R.E., Ohnuma, S., & Hasegawa, T. (2024). E-contact facilitated by conversational agents reduces interethnic prejudice and anxiety in Afghanistan. Communications Psychology, 2(1), 22. https://doi.org/10.1038/s44271-024-00070-z
  32. Schrader, D.C., & Dillard, J.P. (1998). Goal structures and interpersonal influence. Communication Studies, 49(4), 276–293. https://doi.org/10.1080/10510979809368538
  33. Seitz, L. (2024). Artificial empathy in healthcare chatbots: Does it feel authentic? Computers in Human Behavior: Artificial Humans, 2(1), 100067. https://doi.org/10.1016/j.chbah.2024.100067
  34. Shaikh, S., Yayilgan, S.Y., Klimova, B., & Pikhart, M. (2023). Assessing the usability of ChatGPT for formal English language learning. European Journal of Investigation in Health, Psychology and Education, 13(9), 1937–1960. https://doi.org/10.3390/ejihpe13090140
  35. Shin, H., Bunosso, I., & Levine, L.R. (2023). The influence of chatbot humour on consumer evaluations of services. International Journal of Consumer Studies, 47(2), 545–562. https://doi.org/10.1111/ijcs.12849
  36. Skjuve, M., Følstad, A., Fostervold, K.I., & Brandtzaeg, P.B. (2021). My chatbot companion — A study of human-chatbot relationships. International Journal of Human-Computer Studies, 149, 102601. https://doi.org/10.1016/j.ijhcs.2021.102601
  37. Sperber, D., & Wilson, D. (1995). Relevance: Communication and cognition (2nd ed.). Oxford: Blackwell Publishers Ltd.
  38. Stamp, G.H., & Knapp, M.L. (1990). The construct of intent in interpersonal communication. Quarterly Journal of Speech, 76(3), 282–299. https://doi.org/10.1080/00335639009383920
  39. Titscher, S., Meyer, M., Wodak, R., & Vetter, E. (2000). Methods of text and discourse analysis. London: SAGE Publications Ltd. https://doi.org/10.4135/9780857024480
  40. Wang, X. (2020). Semantic and structural analysis of Internet texts. E-Scio, (4), 51–60. (In Russ.). EDN: PBIGEH
  41. Wilson, S.R. (1995). Elaborating the cognitive rules model of interaction goals: The problem of accounting for individual differences in goal formation. Annals of the International Communication Association, 18(1), 3–25. https://doi.org/10.1080/23808985.1995.11678905
  42. Youn, S., & Jin, S.V. (2021). “In A.I. we trust?” The effects of parasocial interaction and technopian versus luddite ideological views on chatbot-based customer relationship management in the emerging “feeling economy”. Computers in Human Behavior, 119, 106721. https://doi.org/10.1016/j.chb.2021.106721


Copyright (c) 2025 Palenova V.V., Voronin A.N.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.