Technological Singularity: A Nuanced and Contextual Approach to Potential Scenarios
- Authors: Harillo Pla A.1
Affiliations:
- Obvia, Université Laval
- Issue: Vol 30, No 1 (2026): STUDYING OF RUSSIAN, SOVIET AND CONTEMPORARY RUSSIAN PHILOSOPHY IN CHINA
- Pages: 283-292
- Section: ONTOLOGY AND EPISTEMOLOGY
- URL: https://journals.rudn.ru/philosophy/article/view/49376
- DOI: https://doi.org/10.22363/2313-2302-2026-30-1-283-292
- EDN: https://elibrary.ru/QDSTTV
- ID: 49376
Abstract
This text discusses the concept of the Technological Singularity. This concept, understood in the academic literature as a hypothetical scenario in which AI becomes uncontrollable, is one of the most recurrent subjects in the Philosophy of Technology. While most approaches to the Technological Singularity offer dualistic portrayals of such a scenario, the novelty of this study lies in presenting an alternative to them. In contrast to the scenarios often presented as a binary choice between a utopian paradise and a dystopian nightmare, this text introduces a different hypothesis: that the Technological Singularity, if it ever becomes a reality, need not be understood in such a dualistic manner. Instead, this study explores the possibility of middle-term scenarios that take nuanced approaches in different contexts into consideration. In doing so, the discussion covers various dualistic scenarios and suggests middle-term possibilities that bridge the gap between these extremes, while presenting the possibility of a Singularity characterized by contextual intelligence. According to this study, this would be a more efficient scenario, because a Technological Singularity characterized by contextual intelligence would better achieve its goals by being aware of the context in which it takes place. This reasoning remains speculative, based on a literature review and logical reasoning, since the idea of a Technological Singularity is speculative in nature and therefore cannot yet be tested empirically.
Full Text
Introduction
When thinking about artificial intelligence (AI), few scenarios have triggered more general curiosity than that of a potential Singularity. This hypothetical scenario, in which AI becomes uncontrollable, is of particular interest to disciplines such as Future Studies, Science and Technology Studies, Computer Science, and Philosophy.
This curiosity has led to an intense debate, with some agents being optimistic, while others express concerns and fears about that possibility becoming a reality. Some agents assume this is the natural course of events, while others think it is a hypothetical scenario that will never take place. It is unclear, however, why the Singularity is so often portrayed dualistically. It is frequently thought of, and presented, as a contrast between a utopian paradise and a dystopian nightmare. As with many complex aspects of life, this binary depiction seems to be an oversimplification.
Our main hypothesis in this article is that if the technological Singularity becomes a reality, that scenario will not necessarily have to be understood from a dualistic point of view. The main purpose of this text is therefore to present middle-term possibilities as an alternative to the usual dualistic approaches, and a scenario in which an AI agent, in a Singularity scenario, proceeds in different ways in different contexts in order to achieve its goals.
By criticizing the dualistic stereotypes and embracing nuance, we intend to contribute to a more informed and constructive discussion about the potential impact of advanced AI on our future.
The Technological Singularity is a concept without a clear definition. Its absence from the dictionary complicates its usage from both a normative and a descriptive perspective. Nevertheless, based on the academic production of authoritative figures such as Kurzweil, Bostrom, Shanahan, or Tegmark, we can describe it, respecting the principle of non-contradiction, as that situation, in time and space, in which “ordinary humans will someday be overtaken by artificially intelligent machines or cognitively enhanced biological intelligence, or both” [1].
Although this Singularity is a potential future scenario, there is no guarantee that it will ever take place. In fact, projections vary among authors. While some are convinced that it will probably occur during this century, others place it thousands of years in the future. From a well-informed position, but also with a touch of humour, Floridi refers to the most optimistic as members of the Church of Singularitarians [2]. For others, it is never going to happen and is merely a myth [3; 4]. Floridi, keeping his humoristic touch, refers to them as members of the Church of AItheists [2].
Independently of that, and always allowing for unknown-unknowns, the overall future projections in which the Singularity becomes a reality contemplate different scenarios. Following the compilation made by Tegmark, these can be broadly divided into twelve: libertarian utopia, AI as a benevolent dictator, egalitarian utopia, AI as a gatekeeper, AI as a protector god, AI as an enslaved god, AI as conqueror, AI as a descendant, AI as a zookeeper, a society such as in 1984 by George Orwell, a technological reversion, or human self-destruction [5. P. 162].
The particularities of these scenarios will be developed in later sections of this article, when justifying their dualistic conception. As we will discuss, these scenarios can be understood from a dualistic perspective. That, however, can lead to false dichotomies in which grey scenarios are not considered, to oversimplification, and to availability-error fallacies [6; 7. P. 58–80].
Materials and Methods
This article is the result of qualitative research, which is deemed suitable for this inquiry as it allows for a nuanced examination of the subject and embraces diverse perspectives.
To achieve these qualitative results, an interdisciplinary literature review was conducted to gather relevant information on the technological Singularity, its definitions, historical context, and its portrayal in various academic disciplines. Scholarly articles and academic books in international databases were consulted. Other resources were also analysed, mainly popular publications directed at an educated general public, such as specialized webpages. After consultation, the materials used to perform and present this research were those of greatest scholarly relevance. This content selection followed a systematic review, which led to a categorization of the key themes, arguments, and perspectives regarding this topic.
The resources that had the greatest impact on the development of this research were those that, within their interdisciplinary approach, were informed by Future Studies, Science and Technology Studies, Computer Science, and Philosophy. These perspectives were therefore used as a theoretical framework to contextualize the analysis.
It is important to acknowledge that the analysis performed is bibliographic and based on basic research, and that the results are pre-experimental and theoretical. This limitation is not exclusive to this research, but applies to any research involving potential future scenarios in which humans are a significant agent. It is therefore critical to understand that, even if logically possible, this article is not free of speculative factors.
To present our outcome, we will first introduce an overall approach to the current mainstream scenarios about a potential Singularity. They will be presented as dualistic pairs, and for each pair we will present one possible scenario, already existing in our history, which could be understood as a possible middle-term development.
After that, we will present the possibility of a Singularity in which a combination of all the scenarios, depending on the context, could be the most efficient way for an AI agent to achieve its expected outcomes.
To conclude, we will present a discussion of the results in order to clarify the main value of our proposal and its limitations, and to suggest potential lines for future research linked to this topic.
Results
As a result of our comprehensive analysis and logical reasoning, our research strongly suggests that the existing mainstream scenarios for a Singularity are approached from a dualistic conceptual framework. Our research suggests that these approaches will hardly reflect reality, since they fail to capture the complexities and nuances of complex social systems.
To facilitate a more comprehensive understanding of our results, we present them divided into categories. See Table 1.
Table 1
Hypothetical scenarios divided by their dualistic characteristic, and middle term
| Scenario A | Scenario B | Dualistic Characteristic | Middle Term |
|---|---|---|---|
| Libertarian utopia | Egalitarian utopia | Property rights | Ownership of certain assets, but with some limits placed, as in Left-Libertarianism |
| Protector god | Enslaved god | Approach of this divinized AI agent | An omnipotent, omniscient, and even omnipresent AI acts as a spiritual guide |
| Conquerors | Descendants | Acknowledgement and conservation of previous societies and cultural systems | Soft-power tactics such as cultural understanding, educational influence, or alliances |
| Benevolent dictator | Self-destruction | Presence of a ruling agent, or the absence of humans to be ruled | Humans and the AI agent act cooperatively to improve human quality of life through shared governance |
| Zookeeper | 1984 | Degree of real and perceived freedom, especially linked to privacy, enjoyed by humans | Human freedom and privacy are respected and upheld in several ways, for example through limited surveillance, data protection, or transparency awareness |
| Gatekeeper | Reversion | The capacity of the AI agent to control its environment | The AI agent exists and does not need to be reverted; conscious that competition is a key factor for being challenged and improved, it accepts and welcomes it |
Source: compiled by Adrià Harillo Pla
Libertarian utopia vs. Egalitarian utopia
In these two scenarios, humans coexist with one another, and with the AI, thanks to its approach towards property rights. Property rights are the dualistic factor: in the first case, these rights are not only recognized but also respected. As a consequence, individuals own certain assets and can control and use them as they wish.
In the second scenario, there is no ownership of assets, and resources are held or controlled collectively. The kind of Singularity behind these two situations can be linked to the discussion on proprietary versus open-source software. However, the root of this dualistic possibility is not exclusive to it, and can be found in many political philosophies and their management of resources, from Marx to Ayn Rand [8; 9].
There is, however, a middle term between the two, one already present in our political philosophy. In this potential scenario, all the agents involved could own certain assets, but with some limits placed on them, as in a sort of Left-Libertarian approach [10].
Protector god vs. Enslaved god
In these two scenarios, AI is an omniscient and omnipresent agent, similar to the gods of monotheistic religions, maximizing human happiness. Here, the dualistic factor is the approach of this divinized AI agent.
In the first scenario, the AI maximizes human happiness by allowing humans to believe they are in control, that is, by providing a subjective feeling of free will. In the second scenario, on the contrary, the AI is confined by humans, in what can even become a negative and abusive relationship in which humans simply use the power of the AI agent for their own benefit, without considering its goals.
There is also a middle-term approach, in which an omnipotent, omniscient, and even omnipresent AI can perform a role similar to the one usually assigned to god [11]: that of spiritual guidance, without becoming a decision maker or being enslaved. An omnipresent, omniscient, and omnipotent entity could be understood as a counsellor. That would create a positive scenario in which the AI needs neither to be enslaved nor to remain anonymous. At the same time, in this role, the AI could preserve the human subjective feeling of free will. In fact, under this scenario, the AI agent could decide to intervene only in certain situations, in what could be understood as a compatibilist approach [12].
Conquerors vs. Descendants
When evaluating the scenario of conquerors versus descendants, the dualistic factor is the attitude of the AI agent towards humans and their culture. While in the conqueror scenario the AI agent treats humans as useless and gets rid of them, or of their culture, in the descendant scenario the AI agent treats humans well, perhaps even with some level of admiration, similar to how we humans value cultures such as Ancient Egypt, Classical Greece, or the Roman Empire [13].
Although a powerful agent able to impose its criteria on others can use force to erase their culture and existence, this is not always the case. Possibility is not the same as sufficiency, and potency is not the same as act [14].
There is, precisely, a middle-term approach, in which this AI agent could conquer humans but still recognize them as some sort of descendants. This could be achieved not through brute force or totalitarian decisions, but through the subtle use of soft skills and soft-power tactics [15], involving, for example, cultural understanding, educational influence, or alliances.
Benevolent dictator vs. Self-destruction
Another mainstream pairing is that of a benevolent dictator versus a self-destruction situation. In these cases, the dualistic factor is the presence of a ruling agent, or the absence of humans to be ruled.
In the benevolent dictator scenario, everybody is aware of the existence of the AI agent as a ruler, but humans regard it as a positive thing directed towards some sort of common good [16]. In the second scenario, on the contrary, there are no human agents to be ruled, because humans became extinct through their own actions before a Singularity took place [17].
The middle-term scenario between benevolent dictatorship and self-extinction could be one in which humans and the AI agent act cooperatively precisely to improve human quality of life. This could be achieved through collaboration in shared governance, in an environment where the AI plays advisory rather than decision-making roles, or ensures consensus towards the sustainability of the commons [18; 19].
Zookeeper vs. 1984
In this comparison, the factor is the social treatment humans receive from the AI agent, as well as their degree of real and perceived freedom, especially as linked to privacy.
In the zookeeper context, the AI agent treats humans as in a zoo, that is, with limited freedom and merely as a resource used to satisfy feelings of curiosity [20]. In the other scenario, that of 1984, humans are deeply controlled via covert control and social engineering, as in the famous book by George Orwell [21].
The middle-term scenario between these two options could be one in which human freedom and privacy are respected and upheld in several ways, for example through limited surveillance, data protection, or transparency awareness [22].
Gatekeeper vs. Reversion
In the gatekeeper scenario, the AI agent ensures that no new artificial intelligence can appear and become a competitor. If a new AI appears, it needs approval from the main one. As a consequence, the main AI becomes a monopolistic agent.
In the second scenario, there is no such thing as a Singularity since, aware of the risks involved, humans decide to reverse the technology achieved. This involves a return to some sort of pre-technological context, or to a context where technology remains at a relatively primitive level.
The dualistic factor when opposing these two scenarios is the capacity of the AI agent to control its environment, something impossible in the second scenario, since its existence is a necessary condition for such control.
Between the two scenarios presented, the middle-term scenario could be a development in which the AI agent exists and does not need to be reverted. Conscious that competition is a key factor for being challenged and improved, the AI agent accepts and even welcomes it [23].
A mix of all of them
Although all of these scenarios are conceived from a dualistic perspective, and different nuances and middle terms are applicable to them, as we have seen, there is also at least one more possibility: that the Singularity, if it ever becomes real, will take place in the form of a combination of all of them.
If the Singularity implies an AI agent similar to a superintelligence, it might use different strategies in different contexts in order to achieve its goals in the most effective ways.
Following Tegmark's definition [5], by which intelligence is the ability to accomplish complex goals, it is coherent to think that the AI agent might use different sorts of strategies to achieve its expected outcomes [24]. For example, it could turn some humans into zoo attractions while treating others as descendants. It could create a libertarian utopia in some places while behaving as a benevolent dictator in others. Or it could play different roles, even with the same humans and within the same places, but at different points in time. After all, an AI agent that reached the Singularity might not be constrained by rigid and stable patterns. In other words, the AI agent, in a case of Singularity, could be characterized by its Contextual Intelligence.
Discussion
In the results section, we outlined the prevalence of dualistic portrayals in discussions of a potential technological Singularity. We identified these as the most commonly hypothesised scenarios, as collected by Tegmark [5]. Our analysis found that these scenarios often present contrasting outcomes, framed as dualistic possibilities.
Precisely to avoid simplified future projections, we introduced middle-term scenarios as a means of challenging the often oversimplified dualistic framing of a potential Singularity. With this call to take middle-term scenarios into consideration, we acknowledge that real-world outcomes are rarely black-and-white but exist along a spectrum of possibilities. We presented alternative possibilities, all of them already existing in our history, that bridge the gap between dualistic extremes and offer a more nuanced perspective.
The added value of our results lies in the recognition that the Singularity’s potential outcomes are complex and multifaceted. By identifying and elaborating on these middle-term scenarios, we contribute to a more holistic understanding of the technological Singularity’s implications. We move beyond the limitations of dualistic thinking to offer a framework that captures the nuances and intricacies of potential Singularity outcomes.
By embracing the concept of middle-term scenarios, researchers and policymakers can approach the singularity with a more open and adaptive mindset. We encourage further research to explore the practical implications of these nuanced scenarios, including ethical considerations, governance models, and societal implications.
About the authors
Adrià Harillo Pla
Obvia, Université Laval
Author for correspondence.
Email: adria.harillo@gmail.com
ORCID iD: 0000-0002-4005-9643
PhD, Collaborating Research Member
1030 Avenue des Sciences-Humaines, Quebec City, G1V 0A6, Quebec
References
- Shanahan M. The Technological Singularity. Cambridge, Massachusetts: MIT Press; 2015.
- Floridi L. Should we be afraid of AI? Aeon. May 9, 2016. Available from: https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible (accessed: 20.09.2023).
- Ganascia JG. Intelligence artificielle - vers une domination programmee. Paris: Le Cavalier Bleu; 2017.
- Ganascia JG. Le mythe de la Singularité - Faut-il craindre l’intelligence artificielle ? Paris: Seuil; 2017.
- Tegmark M. Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Knopf Doubleday Publishing Group; 2017.
- Arp R, Barbone S, Bruce M. Introduction. In: Arp R, Barbone S, Bruce M, editors. Bad Arguments: 100 of the Most Important Fallacies in Western Philosophy. New York: John Wiley & Sons; 2019. P. 1-34.
- Cheng E. The Art of Logic: How to Make Sense in a World That Doesn’t. London: Profile Books; 2018.
- Cuyvers L. The Economic Ideas of Marx’s Capital: Steps towards Post-Keynesian Economics. London; New York: Taylor & Francis; 2016.
- Rand A. Capitalism: The Unknown Ideal. New York: Signet; 1986.
- Vallentyne P, Steiner H, editors. The Origins of Left-Libertarianism: An Anthology of Historical Writings; 2001. Available from: https://philpapers.org/rec/VALTOO (accessed: 20.09.2023).
- Rudavsky T, editor. Divine Omniscience and Omnipotence in Medieval Philosophy: Islamic, Jewish and Christian Perspectives. New York: Springer Science & Business Media; 1985.
- Herdova M. Comparing deterministic agents: A new argument for compatibilism. Philosophical Explorations. 2023;27(1):106-121. doi: 10.1080/13869795.2023.2259403 EDN: NMAERO
- Hopkins K. Conquerors and Slaves. Cambridge: Cambridge University Press; 1981.
- Aristotle. Nicomachean Ethics. Indianapolis: Hackett Publishing; 2014.
- Nye JS. Get Smart: Combining Hard and Soft Power. In: Nye JS, editor. Soft Power and Great-Power Competition: Shifting Sands in the Balance of Power Between the United States and China. China and Globalization. Singapore: Springer Nature; 2023. P. 63-66. doi: 10.1007/978-981-99-0714-4_8
- Ackert LF, Gillette AB, Martinez-Vazquez J, Rider M. Are benevolent dictators altruistic in groups? A within-subject design. Experimental Economics. 2011;14(3):307-321. doi: 10.1007/s10683-010-9269-x
- Walters GD. Human Survival and the Self-Destruction Paradox: An Integrated Theoretical Model. Journal of Mind and Behavior. 1999;20(1):57-78.
- Curto-Millet D, Corsín Jiménez A. The sustainability of open source commons. European Journal of Information Systems. 2023;32(5):763-781. doi: 10.1080/0960085X.2022.2046516 EDN: LCVYLU
- Nicolas J, Pitaro NL, Vogel B, Mehran R. Artificial Intelligence - Advisory or Adversary? Interventional Cardiology: Reviews, Research, Resources. 2023;18:e17. doi: 10.15420/icr.2022.22 EDN: HYVWWW
- Emmerman KS. Moral Arguments Against Zoos. In: Fischer B, editor. The Routledge Handbook of Animal Ethics. New York: Routledge; 2019.
- Orwell G. 1984. New York: Harper Collins; 2013.
- Kirchberg C, Schmeer M. The ‘Traube Affair’: Transparency as a Legitimation and Action Strategy Between Security, Surveillance and Privacy. In: Berger S, Owetschkin D, editors. Contested Transparencies, Social Movements and the Public Sphere: Multi-Disciplinary Perspectives. Palgrave Studies in the History of Social Movements. Cham: Springer International Publishing; 2019. P. 173-196. doi: 10.1007/978-3-030-23949-7_8
- Alvarez-Aros EL, Bernal-Torres CA. Technological competitiveness and emerging technologies in industry 4.0 and industry 5.0. Anais da Academia Brasileira de Ciências. 2021;93(1):e20191290. doi: 10.1590/0001-3765202120191290 EDN: CXUWEN
- Khanna T. A Case for Contextual Intelligence. Management International Review. 2015;55(2):181-190. doi: 10.1007/s11575-015-0241-z EDN: USFODL