Artificial journalism: the reverse of human-machine communication paradigm. Mapping the field of AI critical media studies
- Authors: Shilina M.G.1,2, Volkova I.I.3, Bombin A.Y.4, Smirnova A.A.4
- Affiliations:
- Plekhanov Russian University of Economics
- Lomonosov Moscow State University
- RUDN University
- Saint Petersburg State University of Economics
- Issue: Vol 28, No 4 (2023): Media and Crisis – Reversible Paradigms
- Pages: 757-768
- Section: JOURNALISM
- URL: https://journals.rudn.ru/literary-criticism/article/view/38098
- DOI: https://doi.org/10.22363/2312-9220-2023-28-4-757-768
- EDN: https://elibrary.ru/IWYVNU
Abstract
The study for the first time endeavours to elucidate the distinct conceptual nuances of AI-driven journalism, exploring how it reshapes the core technological and communicative attributes of the field while influencing societal dynamics. The crisis within AI-driven human-machine interaction in journalism, rooted in the essence and processing of information, is defined. Although the paradigm of journalism is rooted in a human-centered approach, its AI-driven paradigm is the same – but in a reversible mode. Journalism involves the translation of personal perspectives and experiences through the filter of memory. Algorithms function without the nuances of personal and social memory, thereby undermining the core principles of the journalistic profession. The loss of genuine, “analog” memory among journalists and their audiences, alongside the digital “memory” of algorithms, jeopardizes the fundamental societal role of journalism – upholding social order. Re-thinking the AI phenomenon as artificial communication, the authors propose the term “artificial journalism”. At the basic technological level it is based on various forms of automation and embedded within digital infrastructures; at the societal level it is designed for the central purpose of journalism and entangled with human practices. Both levels are reversible. The term could serve as an umbrella term for all AI-driven journalism activities. It also removes contradictions in human-machine communication and clarifies the essence of AI performance in journalism and media studies, and for users. The emergence of AI-driven media practices exposes the basic conceptual contradictions of the crisis, which provokes new realms of research and necessitates the establishment of critical AI media studies.
Full Text
Introduction
Over the past decade, the landscape of artificial intelligence (AI) driven journalism has expanded significantly, growing in complexity and scope (Henestrosa et al., 2023). Algorithms have evolved to autonomously generate journalistic content, taking on tasks such as data collection and analysis, news production and distribution, and audience behavior prediction. This phenomenon, often referred to as automated media (Andrejevic, 2019), automated, algorithmic, or robot journalism, primarily involves the algorithmic generation of journalistic content (Graefe, Bohlken, 2020). It is worth distinguishing between these terms: “automated” pertains to the mode of processing, while “algorithmic” and “robot” are more subject-oriented. The functions of AI in journalism are, however, broader. In 2023, the International Center for Journalists (ICFJ, Washington, D.C., the U.S.) identified several problem areas in the interaction of artificial intelligence and journalists, among them the problem of information detection and the question of so-called “AI hallucinations”.
Throughout history, journalism has always been influenced by technological advancements, but the advent of AI brings about profound changes in how media content is produced, distributed, and discussed (Hepp et al., 2023; Volcic, Andrejevic, 2023). The fundamentals of AI-driven journalism diverge from those of human journalism. Although the paradigm of journalism is rooted in a human-centered approach, its AI-driven paradigm is the same – but in a reversible mode. For instance, algorithms emulate human behaviors to generate predictive and relevant data; the technological disparities are manifold – machines generate data rather than information, and their “intelligence” and performance are distinct from human capacities.
From the inception of content automation to the present day, advancements in Natural Language Generation (NLG) have made AI-written media texts indistinguishable from those crafted by human hands. In essence, algorithms have become qualified “authors” in the realm of journalism, acting as quasi-actors. AI-driven journalism also introduces a range of indirect quasi-actors, including platform and data owners, programmers, and more.
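At the most basic technological level, much of this automation began with template-based generation of texts from structured data. The fragment below is a minimal illustrative sketch of that approach only; the data fields, company name and template are hypothetical, and contemporary systems rely on far more elaborate NLG pipelines and large language models.

```python
# Illustrative sketch of template-based automated news writing.
# The data fields and template are hypothetical, not any newsroom's actual pipeline.

from dataclasses import dataclass


@dataclass
class QuarterlyReport:
    company: str
    quarter: str
    revenue_mln: float
    prev_revenue_mln: float


def generate_news_item(report: QuarterlyReport) -> str:
    """Turn structured data into a short news text by filling a fixed template."""
    change = (report.revenue_mln - report.prev_revenue_mln) / report.prev_revenue_mln * 100
    direction = "rose" if change >= 0 else "fell"
    return (
        f"{report.company} reported revenue of {report.revenue_mln:.1f} million in "
        f"{report.quarter}, which {direction} by {abs(change):.1f}% quarter on quarter."
    )


print(generate_news_item(QuarterlyReport("Acme Media", "Q3 2023", 12.4, 11.0)))
```

Even in this toy form, the machine produces a readable news sentence from data alone, without any journalistic observation or memory of previous reporting – the contrast the following sections explore.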
Within the realm of media studies, the impact of AI and automation on journalism constitutes a relatively recent area of research. Scholars from Russia and beyond delve into the overarching challenges posed by automated journalism and automated texts (Henestrosa et al., 2023), defining AI-generated content as a category of specific quasi-actors and media agents (Gambino et al., 2020). Ongoing scholarly discourse revolves around clarifying the landscape of the wider research approach of human-machine communication (Hepp et al., 2023), particularly concerning the profound conceptual shifts catalysed by artificial intelligence.
This paper endeavours to elucidate the distinct conceptual nuances of AI-driven journalism, exploring how it reshapes the core technological and communicative attributes of the field while influencing societal dynamics. To address this inquiry, we will answer two central research questions:
RQ1: What are the key features and inherent contradictions of AI-driven journalism in the context of machine-human communication?
RQ2: What defines the societal essence of AI-driven journalism and its broader impact?
Methodology
AI-driven journalism, and automated journalism in particular, represents a form of communication that exists at the intersection of human and machine interaction. This phenomenon prompts researchers to delve into both its technological intricacies and its broader societal implications. This research orientation is closely aligned with a multilevel methodology, which investigates digital and AI-driven communication from a societal vantage point. Over the past decade, this multifaceted approach has been successfully applied in our studies of digital media and data journalism, as well as in critical analyses of data and AI (Shilina, 2012, 2022). Consequently, our research aims to delineate the technical facets of AI implementation in journalism by elucidating its fundamental technological human-machine communicative attributes at a foundational level. Simultaneously, we intend to uncover the underlying societal significance of these technological features. Finally, we conceptualize the analysed changes.
Artificial intelligence: technological level of human and machine performance
Despite the novelty surrounding the interaction between humans and machines within the realm of AI-driven journalism, discussions regarding the aspects of human and machine performance have been present since the 1980s within the realms of mathematical communication theory, informatics, and cybernetics. These conversations initially focused on technology-driven communication automation. By the 1990s, within media and communication studies, research into human-computer interaction gained momentum alongside the rise of digitalization.
Since the 2010s, the surge in digitalization catalyzed processes of mediation, mediatization, and the transgression of media during the COVID-19 crisis. As datafication and platform economics gained prominence, the fields of media and communication studies converged with technology studies, fostering a symbiotic relationship. Analysis of AI-driven practices spurred critical discourse concerning datafication, automated data processing, and the institutionalization of critical data and AI studies, which is still a nascent field within media studies.
Presently, scholars continue to elucidate the conceptual shifts provoked by artificial intelligence across all facets of human-machine communication (Fortunati, Edwards, 2021; Richards et al., 2022).
At the foundational technological level, the key crisis paradox in contemplating AI-driven human-machine interaction lies in our conventional human-centric representations and their limitations. However, drawing analogies between human and machine performance, such as thinking and acting, proves problematic. Notably, the disparity between human and machine “intelligence” arises from vastly different underlying mechanisms. Even contemporary machine learning (ML) algorithms do not merely replicate or mimic human “intelligence”; generative AI “generates” new content in its own, often unpredictable way.
Crucially, algorithms do not generate information, a cornerstone of journalism, but rather data. Human information processing revolves around meaning, whereas the algorithmic shift is underpinned by the “revolutionary communicative meaning of big data... to produce information from data that is not itself information” (Esposito, 2022, p. 15). Additionally, algorithms operate solely on “secondary” information.
Algorithms exhibit exceptional speed and efficiency in processing data. However, the outcomes do not inherently reveal causation. Predictive and prescriptive analytical outcomes fail to facilitate a comprehensive understanding of both the analysed phenomena and the rationale behind the predicted changes. The technological nature of AI inherently precludes deep comprehension (Hepp et al., 2022).
Thus, the crisis within AI-driven human-machine interaction in journalism stems from a fundamental technological contradiction, manifesting in the core element of journalism – the essence and processing of information.
Memory as the techno-social AI-driven contradiction?
The crisis of algorithmic performance arises from the absence of a human function and mental cognitive process, particularly intelligence in the form of memory. Human memory encompasses the processes of recording, storing, and subsequently recalling information, impressions, and experiences for present use. In the digital era, memory is defined not only as a process, but also as “the ability of a living system to record the fact of interaction with the environment (external or internal), to save the result of this interaction in the form of experience and to use it in behavior” (Druzhinin, 2023). Essentially, human memory allows individuals to draw upon past experiences in their current actions. Its essence is cultural-communicative, since “memory lives and is preserved in communication. If the latter terminates, or if the referential framework of communicative reality disappears or changes, oblivion follows” (Assmann, 2004, p. 37).
Despite relying on “secondary” data, a form of data from the past, algorithms are engineered to function in the immediate “here and now” using a specific set of relevant data, even if it is extensive. Algorithms, in essence, calculate and operate without the need for remembering or forgetting.
For AI-driven human-machine interaction, this absence of memory – or, more precisely, the lack of memory in the human sense – presents a genuine technological and professional challenge.
This shortfall in technological memory can lead to significant errors in the analysis and representation of even basic data. For instance, generative AI might produce drawings of humans with an incorrect number of fingers.
This problem has serious longer-term implications, which, as mentioned earlier, can range from minor errors to significant issues. Its main aspects can be outlined in four directions: context and narrow specialized experience, potential data errors, lack of personalization and recognizability, and data security and confidentiality.
Context and narrow specialized experience. AI systems without memory tend to “forget” information, especially if they are unable to generalize information and knowledge from previous interactions. This can limit their ability to adapt to various stories and scenarios and to make decisions based on accumulated experience.
Potential data errors. The lack of memory can lead to the creation of erroneous data or content. A generative adversarial network (GAN), for instance, can produce images or texts that do not match real data if it is not provided with sufficient context.
Lack of personalization and recognizability. The absence of memory can also affect the AI system's ability to recognize and interact with specific users. This can result in less personalized and less satisfactory interactions between humans and machines.
Data security and confidentiality. If AI systems are unable to store and manage data properly, this can create vulnerabilities in terms of security and confidentiality. Ensuring secure and reliable data storage is crucial, especially when dealing with sensitive information. For instance, Google services continued to store location data even when users had turned off location tracking – a clear breach of user trust and privacy.
To overcome this problem, AI developers and researchers are actively working on developing methods and technologies that would enable systems to retain and use information from previous interactions. This includes researching reinforcement learning techniques and developing more complex AI architectures capable of long-term learning and knowledge retention.
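As a purely illustrative sketch of this direction of work (not a description of any specific system), the fragment below shows the difference an explicit memory buffer makes: without it, every reply is computed with no trace of previous interactions; with it, earlier exchanges can be recalled and reused. The agent, its buffer and its matching rule are hypothetical.

```python
# Toy sketch of a conversational agent with an explicit memory of past interactions.
# Real long-term memory architectures are far more elaborate; the point is the contrast
# between answering in isolation and recalling earlier exchanges.

from collections import deque


class MemoryAwareAgent:
    def __init__(self, memory_size: int = 100):
        # Bounded buffer of (user_message, reply) pairs from previous interactions.
        self.memory = deque(maxlen=memory_size)

    def reply(self, user_message: str) -> str:
        # Recall earlier user messages that share words with the current one.
        current_words = set(user_message.lower().split())
        related = [past for past, _ in self.memory if current_words & set(past.lower().split())]
        answer = f"Noted: {user_message}"
        if related:
            answer += f" (earlier you mentioned: '{related[-1]}')"
        self.memory.append((user_message, answer))
        return answer


agent = MemoryAwareAgent()
print(agent.reply("I am interested in data journalism"))
print(agent.reply("Any updates on data journalism today?"))
```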
According to Esposito, “agents that manage data move in an eternal present, without remembering and without forgetting” (Esposito, 2022, p. 72). This shift away from analog memory is a consequence of digital media's influence, where modern societies now prioritize the ability to forget, shifting from individual to communicative frames of reference (Esposito, 2022, pp. 76–77).
The absence of social and personal memory poses a broader challenge in contemporary societies, extending beyond data privacy concerns in the digitalized world. Memory maintains commonalities through mechanisms of social and national identity, which include many components – first of all mentality and self-knowledge, social comparison, historical memory, national character, customs, traditions, etc. In the general sense, identity resilience is most often defined as that which allows a person to define his or her place in the socio-cultural space and navigate the world (Oleshko, Oleshko, 2020).
The phenomenon of memory – or rather its absence in the human sense – has the potential to lead to personal and even national catastrophes. To illustrate this point, consider two literary metaphors. The first is provided by Georgi Gospodinov, winner of the 2023 International Booker Prize, whose novel “Time Shelter” (2020) depicts a European Alzheimer’s disease rooted in the historical trauma of Nazism and World War II. Chingiz Aitmatov, a renowned Soviet Kyrgyz writer, echoed similar concerns about memory loss and its consequences in “And the Day Lasts Longer than a Century” (1980), highlighting the plight of memory-deprived slaves, the “mankurts”.
Thus, the loss of genuine, “analog” memory among journalists and their audiences, alongside the digital “memory” of algorithms, jeopardizes the fundamental societal role of journalism – upholding social order. Journalism involves the translation of personal perspectives and experiences through the filter of memory. Algorithms possess a digital form of “remembering and forgetting” that starkly contrasts with human analog memory. They function without the nuances of personal and social memory (Assmann, 2011), thereby undermining the core principles of the journalistic profession.
From artificial communication to artificial journalism?
Nowadays, the concept of artificial intelligence is being clarified by communication researchers. Starting from an analysis of its basic technological features, they have proposed re-thinking this phenomenon not in terms of artificial intelligence but in terms of artificial communication (Guzman, Lewis, 2020; Esposito, 2022), because the core AI-driven processes are human-machine and driven by communication. According to Hepp et al. (2023), if machines contribute to social intelligence, it will not be because they have learned how to think like us but because we have learned how to communicate with them. In other words, this reversible idea logically opens up a new approach and concept of communicative AI.
In communication studies, communicative AI is a sensitizing concept, leading to a human-centered communicative construction. For instance, according to Hepp et al., artificial communication is based on three criteria: it is based on various forms of automation designed for the central purpose of communication; it is embedded within digital infrastructures; and it is entangled with human practices (Hepp et al., 2023, p. 50).
The differences between artificial communication and artificial journalism from a conceptual point of view can be seen in the process of determining the components of both phenomena. The Table shows the results of a comparative analysis of the component parts of each of the analysed areas.
To understand how each of the elements proposed above is present in the two groups, “Artificial Communication” and “Artificial Journalism”, we create a schematic representation of their presence as overlapping areas, each representing a specific element (Figure).
The Figure shows that a total of four elements are present in both groups. On this basis, we can conclude that the proportion of overlapping elements between the “Artificial Communication” and “Artificial Journalism” groups is approximately 44% (a computational sketch of this proportion follows the Figure below).
Thus, artificial communication refers to the use of AI and related technologies to facilitate, improve, or automate communication between humans or between humans and machines. Artificial journalism, in turn, is a subset of artificial communication focused specifically on the application of AI in the field of journalism. It involves using AI technologies to support or augment journalistic processes, such as news gathering, content generation, and distribution.
Table. Component analysis of artificial communication and artificial journalism

| Code | Component | Description |
|------|-----------|-------------|
| | Artificial communication (AC) | |
| AC1 | Chatbots & virtual assistants | Chatbots and virtual assistants are AI-powered systems designed to engage in natural language conversations with users. They are used in customer support, information retrieval, and other communication-intensive tasks |
| AC2 | Language translation | AI can be used to automatically translate text or speech from one language to another, enabling cross-cultural communication |
| AC3 | Speech recognition | AI systems can transcribe spoken language into text, which is useful for applications like transcription services, voice assistants, and more |
| AC4 | Content generation | AI can assist in generating written content, such as news articles, reports, or marketing materials |
| AC5 | Social media analysis | AI tools can analyze social media data to understand trends, sentiment, and public opinion |
| AC6 | Personalization | AI can tailor communication content based on user preferences and behavior, leading to more relevant and engaging interactions |
| | Artificial journalism (AJ) | |
| AJ1 | Automated news writing | AI algorithms can be employed to generate news articles quickly and efficiently, often based on structured data or information from various sources |
| AJ2 | Fact-checking | AI can assist in fact-checking and verifying information presented in news articles, helping to combat misinformation and fake news |
| AJ3 | Personalized news delivery | AI-driven recommendation systems can tailor news content to individual readers’ preferences and interests |
| AJ4 | Data journalism | AI tools can analyze large datasets to uncover trends, patterns, and insights that can inform news stories |
| AJ5 | Audience engagement | AI can help media organizations understand their audiences better and engage with them through personalized content and interactions |

Source: compiled by the authors.
Figure. Intersection of elements between the “Artificial Communication” and “Artificial Journalism” groups
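One way to reconstruct the reported figure with simple set arithmetic: four of the eleven listed components fall into the overlap region, and merging the two overlapping pairs leaves nine distinct elements, so 4/9 ≈ 44%. Which AC and AJ components count as “the same” element is the authors’ choice shown in the Figure; the concrete pairing in the sketch below is a hypothetical assumption used only to illustrate the computation.

```python
# Sketch of the overlap arithmetic behind the ~44% figure.
ac = {"AC1", "AC2", "AC3", "AC4", "AC5", "AC6"}  # artificial communication components
aj = {"AJ1", "AJ2", "AJ3", "AJ4", "AJ5"}         # artificial journalism components

# Hypothetical pairs treated as one underlying element, e.g. content generation ~
# automated news writing, personalization ~ personalized news delivery.
shared_pairs = [("AC4", "AJ1"), ("AC6", "AJ3")]

overlapping_items = {code for pair in shared_pairs for code in pair}   # 4 items lie in the overlap
distinct_elements = len(ac) + len(aj) - len(shared_pairs)              # 11 listed - 2 merged = 9

print(f"{len(overlapping_items) / distinct_elements:.0%}")             # 4 / 9 ≈ 44%
```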
Do the concepts of artificial communication and communicative AI fit journalism studies? Such an understanding of AI as a communication phenomenon – and of the algorithm as an actor – suits AI-driven journalism research from the technological, professional and societal perspectives, because of the fundamentally human-centered and human-driven background of these technologies and of hybrid communication (Shilina, 2012, 2022).
Artificial communication in journalism is a part of societal communication by origin. It is designed for the central purpose of journalism – to improve social values. AI-driven journalism is based on various forms of human-driven automation and embedded within digital infrastructures, and entangled with human practices.
Thus, artificial journalism could be defined as a specific field of artificial communication: AI-driven digital journalism based on communicative AI. At the basic technological level, artificial journalism is based on various forms of automation and embedded within digital infrastructures; at the societal level it is designed for the central purpose of journalism and entangled with human practices. Both levels are reversible.
The term “artificial journalism” is as metaphoric as most terms for the phenomena of post-modernity. But it is wider and more complex than the previous ones and could serve as an umbrella term for all AI-driven journalism activities. It also removes contradictions in human-machine communication and clarifies the essence of AI performance in journalism and media studies, and for users.
Conclusion
The realm of AI-driven journalism is intricately tied to the reversible paradigms of human-machine interaction within the technosocial landscape and provokes several crisis contradictions. At its core, a profound professional contradiction arises at the basic technological level, centering on the essence and processing of information in journalism. The unique and opaque nature of AI technologies, coupled with the outcomes of data analysis, inherently limits clear interpretation and understanding of the mechanism of content production.
Algorithms have the capacity to provide audiences with information that aligns with their preferences rather than with objective reality. While human-created and algorithm-generated media content are of similar quality, this raises ethical problems. The shift towards AI-driven experiences in daily life has only potentially mitigated this contradiction, since truth and trust remain pivotal factors in journalism. This dynamic has given rise to ethical dilemmas, which could be conceptualized as the “AI-driven Media Trust Divide”.
De facto, there is no strict contradiction between machine-produced and human-produced texts; rather, the technological and human-specific features of AI-driven interaction are reversible.
Journalism involves the personal creative process of translating personal vision and experience, a process inherently tied to a person’s memory. Algorithms are designed for immediate and specific tasks, possessing a form of digital “remembering and forgetting” that diverges significantly from human “analog” memory. The absence of “personal” and “social” memory within algorithms could undermine sociocultural memory as a foundation of journalism. This potential loss of genuine memory – amid the digital “memory” of algorithms, among both journalists and their audiences – erodes the personal and professional identity of the journalist and the profession’s societal mission of upholding social order. It is crucial to recognize that the essence of journalism lies in the sharing of personal human experiences, something that cannot be reduced to mere lines of code.
However, drawing direct analogies between “analog” and digital AI-driven journalism proves problematic, as their conceptual frameworks differ. Emerging ideas within media and communication scholarship suggest adopting a new perspective by replacing the vague term “artificial intelligence” with “communicative AI” as a sensitizing concept that fosters human-centered communicative constructs, in AI-driven “artificial” journalism in particular.
The proposed term “artificial journalism” encompasses the wide and intricate practices of contemporary digital AI-driven journalism. Artificial journalism, seen as a digital techno-human phenomenon and a process of AI-driven media communication, extends beyond mere content generation to encompass the broader landscape of media processing. The term, while metaphoric like many concepts of post-modernity, is more systematic and encompassing. It resolves contradictions in human-machine communication research, clarifies AI’s role in journalism and media studies, and provides greater clarity for target audiences.
Since its inception, AI-driven journalism has been marked as a field of contradictions. The emergence of AI-driven media practices exposes the basic conceptual contradictions of the crisis, which provoke new realms of research and necessitate the establishment of critical AI media studies. Ultimately, the concept of artificial intelligence potentially serves as a catalyst for deeper reflections within the field.
About the authors
Marina G. Shilina
Plekhanov Russian University of Economics; Lomonosov Moscow State University
Author for correspondence.
Email: marina.shilina@gmail.com
ORCID iD: 0000-0002-9608-352X
Dr. Sc., Professor, Professor of the Department of Advertising, Public Relations and Design, Plekhanov Russian University of Economics; Professor of the Department of Advertising and Public Relations, Faculty of Journalism, Lomonosov Moscow State University
36 Stremyannyi Pereulok, Moscow, 115054, Russian Federation; 9 Mokhovaya St, bldg 1, Moscow, 125009, Russian Federation
Irina I. Volkova
RUDN University
Email: volkova-ii@rudn.ru
ORCID iD: 0000-0002-2693-1204
Dr. Sc. in Philology, Professor, Professor of the Department of Mass Communication, Faculty of Philology
6 Miklukho-Maklaya St, Moscow, 117198, Russian Federation
Andrey Yu. Bombin
Saint Petersburg State University of Economics
Email: bombin.a@unecon.ru
ORCID iD: 0000-0002-1151-7721
senior lecturer, Department of Communication Technologies and Public Relations
30-32 Naberezhnaya Kanala Griboedova, St. Petersburg, 191023, Russian Federation
Anna A. Smirnova
Saint Petersburg State University of Economics
Email: smirnova.aa@unecon.ru
ORCID iD: 0000-0003-1392-2832
senior lecturer, Department of Communication Technologies and Public Relations, Deputy Dean of the Faculty of Humanities
30-32 Naberezhnaya Kanala Griboedova, St. Petersburg, 191023, Russian Federation
References
- Andrejevic, M. (2019). Automated media. New York: Routledge.
- Assmann, A. (2004). Four formats of memory: From individual to collective constructions of the past. In C. Emden & D. Midgley (Eds.), Cultural Memory and Historical Consciousness in the German-Speaking World since 1500 (pp. 19-37). Bern: Peter Lang.
- Assmann, A. (2011). Cultural memory and Western civilization: Functions, media, archives. Cambridge: Cambridge University Press.
- Druzhinin, V.N. (2023). The psychology of general ability. Moscow: Urait Publ. (In Russ.)
- Esposito, E. (2022). Artificial communication: How algorithms produce social intelligence. Cambridge, MA: The MIT Press.
- Fortunati, L., & Edwards, A.P. (2021). Moving ahead with human-machine communication. Human-Machine Communication, 2, 7-28. https://doi.org/10.30658/hmc.2.1
- Gambino, A., Fox, J., & Ratan, R. (2020). Building a stronger CASA: Extending the computers are social actors paradigm. Human-Machine Communication, 1, 71-86. https://doi.org/10.30658/hmc.1.5
- Graefe, A., & Bohlken, N.A. (2020). Automated journalism: A meta-analysis of readers’ perceptions of human-written in comparison to automated news. Media and Communication, 8(3), 50-59. https://doi.org/10.17645/mac.v8i3.3019
- Guzman, A.L., & Lewis, S.C. (2020). Artificial intelligence and communication: A human-machine communication research agenda. New Media & Society, 22(1), 70-86. https://doi.org/10.1177/1461444819858691
- Henestrosa, A.L., Greving, H., & Kimmerle, J. (2023). Automated journalism: The effects of AI authorship and evaluative information on the perception of a science journalism article. Computers in Human Behavior, 138, 107445. https://doi.org/10.1016/j.chb.2022.107445
- Hepp, A., Jarke, J., & Kramp, L. (2022). New perspectives in critical data studies. London: Palgrave Macmillan.
- Hepp, A., Loosen, W., Dreyer, S., Jarke, J., Kannengießer, S., Katzenbach, C., Malaka, R., Pfadenhauer, M., Puschmann, C., & Schulz, W. (2023). ChatGPT, LaMDA, and the hype around communicative AI: The automation of communication as a field of research in media and communication studies. Human-Machine Communication, 6, 41-63. https://doi.org/10.30658/hmc.6.4
- Oleshko, V., & Oleshko, E. (2020). Reading as a democratic value and resource for the formation of the communicative and cultural memory of a nation. KnE Social Sciences, 4(2), 284-298. https://doi.org/10.18502/kss.v4i2.6347
- Richards, R., Spence, P., & Edwards, C. (2022). Human-machine communication scholarship trends: An examination of research from 2011 to 2021 in communication journals. Human-Machine Communication, 4, 45-65.
- Shilina, M.G. (2012). The theory of public relations: Creating non-classical methodology. Mediascope, (1). Retrieved September 24, 2023, from https://mediascope.ru/node/1028
- Shilina, M.G. (2022). Mediatization in the context of the global crisis: Temporality as a research modality. Russian School of Public Relations, 24, 45-57.
- Volcic, Z., & Andrejevic, M. (2023). Automated media and commercial populism. Cultural Studies, 37(1), 149-167. https://doi.org/10.1080/09502386.2022.2042581