Philosophy and Science on the Way to Knowing and Making Consciousness


Abstract

The latest progress in empirical studies of consciousness and the spectacular advances in AI technologies are kicking philosophy out of the familiar comfort of uncontrolled proliferation of concepts and scholastic disputes. In an overview of the current state of empirical theories of consciousness, I show that those theories are still at a pre-paradigmatic stage and therefore do not yet pose an immediate existential threat to the philosophy of consciousness, though they do put it on notice. I attempt to deal with the familiar polysemy of the term 'consciousness' by stripping its meaning of the parts already susceptible to science and technology and of the parts still highly unlikely to be explained away. Besides, I specify the relation between philosophy and science in general by analyzing each down to its inner dynamics of theories and ontologies, showing that for science the distinction between the two is substantially more important than for philosophy. From this perspective, I argue that philosophical schemas of consciousness claiming to be ‘experiential’ must meet the recently formulated criteria for empirical theories of consciousness, or else they fail to explain anything in the domain. Along the way, I touch on the issues of intentionality and representation. Finally, I add my own pragmatic criterion, which addresses the technological prospects a theory provides. At the end of the day, a winning competitive theory will have to let us produce and control artificial conscious devices.

Full Text

These days, as we try to finally cope with the painful riddles of what makes us feel and think, we are met face-to-face with a whole lot of theories of consciousness (ToC), some of them relying on the brain circuitry, which constantly slips away from any final explanation, while others focus on our phenomenal ‘insides’ in the certainty that this is the thing. (There is, however, a minority ready to add the subject to Emil du Bois-Reymond’s list of ignoramus et ignorabimus, but let us confine ourselves to what may be discussed any further.) One popular view is that theories of the first kind must be considered scientific or empirical[1], while the second group is obviously philosophical. This distinction is worth a quick look.

Two prominent students of mind and consciousness take on the distinction between scientific and metaphysical ToC as follows: “the former typically at least tacitly assumes materialism and aims at explanation and mechanistic approaches to consciousness, whereas the latter is concerned with the ultimate nature of consciousness rather than with specifics about neuronal mechanisms” (Hohwy and Seth 2020). I suppose that this distinction is not quite accurate, as Hohwy and Seth attribute a tacit metaphysical stance to empirical students, which may not be universally acknowledged. On the other hand, there are explicit materialists among philosophers (e.g. Paul Churchland) for whom there is nothing to consciousness except the neural mechanisms. The ultimate difference between metaphysics and science lies not in the ‘what’- but in the ‘how’-approaches. This difference is better grasped by Gerhard Schurz: “while typical speculations postulate for each new phenomenon a new kind of theoretical cause, good science introduces new theoretical entities only if they figure as common causes of several intercorrelated phenomena” (Schurz 2011:108). But even this formula is not effective enough to guarantee the demarcation in all cases, as Hegel-style philosophies excel in inventing universal causes, while science is not barred from identifying specific causes for specific instances. I would propose an even more delicate analysis based on the distinction between a theory and an ontology (I. Mikhailov 2019). Every theory, except the most formal ones, needs an ontology, which is the same as a model for its interpretation. And this is what unites philosophy and science. The difference starts with the attribution of truth values and with what I would call the ‘finalizing’ of a study. A Hegel-style philosopher would say that (1) there is an Absolute Spirit that generates all the rest of what there is, and (2) this proposition is true.
A neighboring scientist, if committed to the same ontology, would try to elaborate a system of measurements for the activity of the said Spirit to formulate its universal laws in their most precise form possible, for them to be testable against some measured observables. And only propositions of these laws and their corollaries may be evaluated as true or false. And only these propositions constitute what may be referred to as a theory proper, unlike the ontological commitments serving as its interpretation.

Here starts another distinction, that between scientific realism and instrumentalism. The former claims that ontological commitments somehow inherit the truth-value of a theory interpreted on them, which is to say that the ontology of a theory considered to be true is the true picture of the world. The latter, probably paying more attention to the real history of science, is more cautious in that respect.

As far as I can tell, this discussion implies that one of the main objectives of non-Hegel-style philosophy is to determine whether ‘consciousness’ refers to an ontological commitment at risk of vanishing soon, as ‘caloric’ and ‘phlogiston’ did, under the pressure of well-measured and operationalized scientific models, or whether it stands for a real entity that threatens the causal closure of the physical world[2].

I am inclined to leave the second option alone, for I personally don’t know what to do with it (some remarks follow below). As for the first one, it presumes that to scientifically explain anything non-physical means to ‘expel it plain’, if I may say so, in plain coherence with Wittgenstein’s “<t>he solution of the problem of life is seen in the vanishing of the problem” (Wittgenstein 2021:250). But, once the ghost is expelled from the machine, the question remains of what drives the machine and how it is set up. Actual science offers two principal ways. Physics, chemistry, biology, and genetics incline us to believe that mind and consciousness are emergent effects of the extremely complex interplay of multi-level natural forces, which gives us quantum, chemical and genetic conjectures on the nature of the phenomenon. Meanwhile, mathematics and computer science tend to elaborate a simpler kind of explanation, presuming that parts of mental mechanisms impact each other with their abstract structural properties that line up in functional dependencies, which turns one part into a representation of another. This makes the whole thing an information-processing, i.e. computational, system.

The computational approach, while apparently simpler, raises a bunch of methodological problems discussed in thousands of publications nowadays. But its possible outcome is that whatever makes us capable of thinking, feeling, and giving an account thereof may be reproduced in other material systems with similar computational capacities, and we will be able to create as many conscious companions for ourselves as we may need.

Building on this clarification of the philosophy/science relations, I am inclined to pronounce in favor of methodological naturalism[3]. This approach consists, firstly, in understanding the task of philosophical research as the generation and adjustment of scientific (domain) ontologies. Since ontologies in this sense are conceptual schemes or abstract models on which the provisions of scientific theories are interpreted, meta-ontology (Van Inwagen 1998), to which philosophical research is mainly dedicated, is equivalent to conceptual analysis. The latter was considered the only possible way of philosophical research by the later positivists.

Methodological naturalism consists, secondly, in disputing that philosophy is immune to empirical data and scientific concepts, i.e., in denying the strictly a priori character of philosophical propositions. The most famous proponent of this view was W. V. Quine. Philosophy, of course, does not have exact procedures for empirical verification or falsification, but it cannot ignore what is happening in the sciences.

With these observations in mind — and in consciousness as well — I will try to analyze some of the current tendencies and approaches to building successful empirical ToC.

 

What is consciousness exactly?

Far from referring to an exact scientific concept, the term ‘consciousness’ rather stands for another idea resistant to any attempt at strict definition. Highlighted by Wittgenstein, this class of ideas has no categorical label of its own, so I will speak of them as ‘intractable notions’. They are usually fundamental for thought and culture and are very difficult or impossible to define discursively: ‘game’, ‘culture’, ‘consciousness’, ‘knowledge’, ‘time’, ‘space’, ‘number’, etc. The principal shortcoming of Wittgenstein’s ‘family resemblances’ concept (Wittgenstein 1986:32e) is that it does not set any limits to the scope of intractable notions: why, for example, does not everything in the Universe become a game, if every instance of ‘game’ is linked to any other one only by some contingent non-universal property? It seems impossible to delimit the concept of non-game in this case. One possible explanation is that there is a universal common property of the instances of a class, although not of a substance-attribute kind but of a procedural (algorithmic) one: ‘time’ covers all possible actions with duration, ‘game’ is a term for any regular, though disinterested, behavior, where the brain rewards practicing the rule-following with dopamine, and ‘consciousness’ stands for all purposeful accountable actions combined with the possibility of making non-standard decisions. If my assumption is correct, then ‘consciousness’ refers to some (procedural) properties of mind and behavior.

Still, we will try to define what exactly we mean by ‘consciousness’: (I) intelligence, (C) control, including the ability to give an account (awareness), or (Q) qualia, a.k.a. ‘phenomenal consciousness’. (I) does not cause serious problems in the theoretical and philosophical part of the cognitive sciences, while consistently and in many cases successfully finding technological embodiments. Besides, as Ned Block puts it, “<c>onsciousness and intelligence are on the face of it very different things. We all understand science fiction stories in which intelligent machines lack some or all forms of consciousness. And on the face of it, mice or even lower animals might have phenomenal consciousness without much intelligence” (Block 2009:1112).

(C) is what is lacking while we are in deep sleep or under complete anesthesia and returns upon exiting these states. There are several competing approaches to the subject: the global workspace theory by Bernard Baars, the integrated information theory by Giulio Tononi, and some others. Someday one of these theories (or maybe another one still unknown) will win the competition and become paradigmatic. Anyway, this is what Chalmers would refer to as the ‘easy problems of consciousness’, so this issue can hardly be considered unsolvable for cognitive science either. Only (Q) claims the title of the ‘hard problem’ because, thanks to the scholastic efforts of philosophers like Frank Jackson, many are convinced that the existence of qualia (which is not contested by these philosophers, because, like Descartes, they believe in the a priori truth and reliability of their ‘subjective experience’) somehow contradicts the physicalist picture of the world. I see two possible ways out here: (Q1) scientific investigation of qualia using the ‘small steps’ approach — for example, starting with the biological sensitivity of the simplest organisms and reconstructing the evolutionary stairway from them to the specialized ‘modular’ neurons and neural ensembles of a developed brain, etc., which does not guarantee success, but does not exclude it a priori either; or (Q2) recognizing this problem as scientifically unsolvable.

But whichever is chosen, neither of these alternatives prevents science from studying C-consciousness independently of Q-experience, as the latter, while giving consciousness some unique and interesting flavors, does not account for what it actually is.

 

…And what is it not?

At the heart of the Cartesian cogito, which some mistakenly consider the starting point of the modern-era philosophy of mind, lies the following logical move: my doubt about the existence of anything whatsoever is the guarantee of my own existence. However, this conclusion can be justified only if the following — mainly tacit — presuppositions are true.

Assumption 1: My consciousness is given to me as a thing-in-itself (my perception of my inner life cannot be false).

Assumption 2: Thinking (doubt) can only be a conscious act, and my own account of my thought acts is necessarily reliable (unconscious distortions, biases and attitudes are ignored).

Obviously, each of them can be the subject of reasonable philosophical suspicion.

The fallibility of introspection as a source for knowing consciousness, according to Kant, makes psychology as a science proper impossible (Kant 2004:8). Nevertheless, classical phenomenology advocates the possibility of a transition from conscious experience to transcendental knowledge of mental properties. Husserl thought of phenomenology as a transcendental justification of the sciences. In accordance with his original project, it implied rejection of the ‘natural attitude’ and of the opposition of ‘internal’ and ‘external’, as well as the possibility of inferring from the ‘givenness’ of an object to the transcendental structure of consciousness.

Even though Merleau-Ponty insisted on the fundamental difference between phenomenological and psychological introspection, Gallagher and Zahavi define phenomenology as the study of the world in which we live from a new reflective point of view, namely from the point of view of its meaning and manifestation for consciousness (Gallagher and Zahavi 2013:25). More than once in their book, they speak of phenomenology as a theory based on experience (Gallagher and Zahavi 2013:6–7). From my point of view, as a theory based on experience, it must meet, firstly, the general criteria for empirical theories of consciousness and, secondly, the pragmatic criterion, both of which are discussed further below. But, anyway, one gets an immediate impression that phenomenology begins with a Kantian transcendental approach but proceeds to share all the faults of Hegelianism, such as optional conclusions and the claim to extra-scientific knowledge.

In some philosophical conceptions, phenomenology included, intentionality is considered an important part of the ontology of consciousness. Here the intentional object is supposed to be a certain ‘aspect’ of the real object, visible from the first-person perspective (FPP). The concept of intentionality, created or revived by Franz Brentano, has proliferated into numerous versions and variations, not all of which are compatible with each other. In addition to Husserl, one can mention Anscombe (Anscombe 1957), Meinong (Meinong 1960), Mally (Mally 1913), Searle (Searle 1983), Dennett (Dennett 1987), and others who built this idea into their own conceptions. Here I will only try to lay down the core of the problem in its original form in the following way: how is it possible that the statements “Unicorns have a horn in their mid-forehead” and “Unicorns do not exist” can both be true? Brentano at one time believed that the property of ‘intentional inexistence’ of an intentional object is inherent in all mental states and only in them, but many have subsequently disputed this identification of the mental with the intentional.

If we proceed from the supposition that consciousness is constituted by phenomenal properties (which is far from obvious to everyone), then the task of its scientific explanation (from the perspective of the ‘natural attitude’, as Husserl labels it) is to reduce them to natural properties. For this, at the conceptual and philosophical level, it is important to know what they are properties of: of the ‘soul’, of the thing perceived by it, or of the interaction of both, as in Locke’s philosophy.

But if, from my FPP, some natural object O looks like an intentional object Oi, and Oi has a phenomenal property A, does this property also belong to object O in some form? If so, then the introduction of intentionality implies no progress compared to Locke. If not, then they are different objects. And then, since such a doubling of entities does not agree with the scientific (or physicalist, if you like) picture of the world, its proponents should provide an amended ontology on which the known facts would be interpreted consistently and the theories — probably renovated — would retain their predictive capabilities.

As for the phenomenal aspect of consciousness, the issue with it may be somewhat reminiscent of the Flying Arrow paradox attributed to Zeno of Elea. We have a continuum of neural activity, described by a dynamic state-space model operating with real numbers. Our communication system, based on the static semantics of discrete symbols, snatches arbitrarily fractional snapshots thereof. The remainder, not covered by fixed points of linguistic meanings, is perceived as something ineffable. Such an explanatory model excludes a special ‘qualitative’ side of consciousness, which, upon closer examination, turns out to be equal to those parts of the stream that, as it were, overflow the edges of the discrete linguistic structure with its systematicity, compositionality and productivity (Tacca 2010).
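The arrow-like mismatch between a continuous stream and its discrete linguistic snapshots can be put in miniature with a toy sketch (the signal, the sampling rate, and the symbol alphabet below are my own illustrative assumptions, not a model of neural dynamics):

```python
import math

# A dense "continuous" trajectory, stood in for by a sine mixture
# (purely illustrative; nothing here models real neural activity).
def trajectory(t):
    return math.sin(2 * math.pi * t) + 0.3 * math.sin(10 * math.pi * t)

# "Linguistic" discretization: sample sparsely and quantize to a small
# symbol alphabet (multiples of 0.5), as a discrete code would.
N_SYMBOLS = 20
symbols = [round(trajectory(i / N_SYMBOLS) * 2) / 2 for i in range(N_SYMBOLS)]

def nearest_symbol(t):
    """The discrete description available for a moment t of the stream."""
    i = min(range(N_SYMBOLS), key=lambda k: abs(k / N_SYMBOLS - t))
    return symbols[i]

# The "ineffable remainder": how much of the dense stream escapes the
# nearest sample-and-quantize description at its worst point.
residual = max(abs(trajectory(i / 1000) - nearest_symbol(i / 1000))
               for i in range(1000))
print(f"max residual left out by the discrete description: {residual:.3f}")
```

The residual never vanishes for any finite alphabet and sampling rate, which is the sketch's whole point: whatever the fixed points of meaning capture, something overflows them.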

 

The proper place of representations in theories of consciousness

I have had a chance to discuss the issues of representation within a more general scope of cognitive science (I. F. Mikhailov 2019).

Taking into consideration all the piercing critique of the concept (Hutto 2011; Hutto and Myin 2013), let me clarify that the point is not that this concept as such is empty and unnecessary. The fact is rather that:

  1. The concept is often interpreted vaguely and too broadly.
  2. Representation as a function is not necessary in each type of mental act.

The relation of representation presupposes a functional relation (sometimes complex and indirect) of what is represented to the representational vehicle. As such, this relationship must be strong enough to make control feasible.

In the case of the intentional view of the mental within the classical symbolic cognitive approach, a kind of Homunculus Paradox arises: to manipulate symbols according to their semantics, the cognitive system must ‘know’ this semantic relation. But knowledge is representation. Thus, any representation within the framework of semantically dependent manipulation needs a representation that supports it, and so on ad infinitum. With syntactically dependent processing, we avoid this paradox but leave unexplained the mechanism by which mental states have content at all, i.e., intentionality.

In contrast, within the connectionist view, the content of a representation is the result of numerous iterations of learning: action — input — reaction — feedback — correction of the reaction, etc. As a result, the (phenomenal) representation of the input is perceived by consciousness as a representation of the ‘thing’, which is itself a complex representation of the whole context: goals, ways to achieve them (affordances), obstacles, etc.
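The action–feedback–correction cycle above can be sketched in a few lines (the linear ‘environment’, the step size, and all numbers are my own toy assumptions, not a connectionist model from the literature):

```python
# A toy feedback loop: the system's "representation" of a regularity is
# whatever weight settles after repeated action -> feedback -> correction.
target = 0.7        # the environment's hidden regularity
weight = 0.0        # the system's evolving representation of it
lr = 0.3            # size of each correction step

for _ in range(50):
    action = weight              # act on the current representation
    feedback = target - action   # the environment pushes back
    weight += lr * feedback      # correct the reaction

print(f"learned representation: {weight:.4f}")  # converges toward 0.7
```

Nothing here ‘knows’ the semantics in advance; the content of the representation is constituted entirely by the history of corrections, which is the connectionist moral of the paragraph.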

If the chessboard is displayed on a straight one-dimensional line in the form of named segments (or, alternatively, on the tape of a Turing machine in the form of named cells), then it will be just as easy or even easier for a computer to play, while more difficult for a person. This may be because underlying spatial perceptions occur outside of consciousness and are presented to consciousness as a re-presentation, i.e., an economical way of notification. This way, each higher level of the system is spared the load of processing the data of the lower one, saving energy costs.
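The flattening of a board onto a tape is just row-major indexing; a minimal sketch (the naming scheme is my own):

```python
FILES = "abcdefgh"

def square_to_cell(square: str) -> int:
    """Map a chess square like 'e4' to a cell index on a 1-D tape (row-major)."""
    file, rank = square[0], int(square[1])
    return (rank - 1) * 8 + FILES.index(file)

def cell_to_square(cell: int) -> str:
    """Inverse mapping: tape cell back to the named square."""
    rank, file = divmod(cell, 8)
    return f"{FILES[file]}{rank + 1}"

# 'a1' is cell 0 and 'h8' is cell 63; the machine loses nothing in the
# flattening, while a human loses the spatial adjacency that pre-conscious
# vision supplies for free.
assert square_to_cell("a1") == 0
assert square_to_cell("h8") == 63
assert cell_to_square(square_to_cell("e4")) == "e4"
```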

In the literature, the term ‘representation’ is used in three different senses: (1) the material vehicle of the phenomenal image, (2) the phenomenal image itself, and (3) the module-dependent mode of representation[4] (color, geometric, symbolic, tonal, etc.). As far as I can tell, only (3) is scientifically operational, as it provides a chance to close the explanatory gap by involving explanations dealing with coding/decoding algorithms.

Representation is the result, not the subject, of computation. Temperature is a representation of the Brownian motion of molecules: to be informative, it must provide some integrative value thereof, computed according to some consistent function. But for this to take place, coarse graining is needed as well. Coarse graining, as a part of the computation, aims to create coarse representations of complex entities or processes in order to reduce the thermodynamic cost of further computation.
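The temperature example can be made concrete with a toy sketch (the ‘microstate’, the units, and the binning scheme are my own illustrative assumptions; real statistical mechanics involves constants omitted here):

```python
import random

random.seed(0)

# Microstate: individual molecular speeds (arbitrary units) -- the process.
speeds = [random.gauss(0, 1) for _ in range(10_000)]

# "Temperature": one integrative value computed by a consistent function
# (here, simply the mean squared speed).
temperature = sum(v * v for v in speeds) / len(speeds)

# Coarse graining: bin the microstate to one decimal, so that downstream
# computation works on ~40 bins instead of 10,000 values.
bins = {}
for v in speeds:
    b = round(v, 1)
    bins[b] = bins.get(b, 0) + 1

# The coarse-grained version yields nearly the same representation
# at a fraction of the processing cost.
coarse_temperature = sum(count * b * b for b, count in bins.items()) / len(speeds)
print(temperature, coarse_temperature)
```

The two numbers agree to within a small binning error, which is exactly the trade the paragraph describes: a cheaper computation over a coarse representation in exchange for a negligible loss of integrative accuracy.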

 

Empirical theories of consciousness

There are quite a few ToC listings in various reviews. Thus, Hohwy and Seth point out Global Neuronal Workspace theory, Integrated Information theory, Recurrent Processing theory, Higher Order Thought theories coupled with Metacognitive theories, the Radical Plasticity thesis, Virtual Reality theories, Attention-based theories, Heterophenomenology, Core Consciousness theory, Orchestrated Objective Reduction, and Electromagnetic theory (Hohwy and Seth 2020). Ned Block in an earlier review had focused on Higher Order Thought, Global Workspace, Integrated Information and on what he calls the Biological Theory dating back to the 1950s, which happens to be his preference (Block 2009:1111–1113). A more recent attempt to review and analyze ongoing approaches in the empirical studies of consciousness has been made in (Doerig, Schurger, and Herzog 2021), which I will touch on a bit later.

One of the common ideas brought about by experimental studies of conscious and unconscious perceptions is that perceptions beyond the threshold of conscious processing nevertheless affect other cognitions and behaviors of the subject. The following events have been identified as objective neural correlates of conscious states (NCC is now a widely accepted abbreviation for these): a late increase in the corresponding sensory activity, cortical-cortical synchronization at beta and gamma frequencies over long distances, and the ‘ignition’ of a large-scale prefronto-parietal network. These data agree with the well-known Global Neuronal Workspace theory (Dehaene and Changeux 2011). It has also been found that the global neuronal workspace (GNW) hypothesis can become the basis for a synthesis of empirical data regarding conscious access, attention and working memory (Mashour et al. 2020). Studies of patients under anesthesia have shown that consciousness disappears when anesthetics cause a functional shutdown in the posterior parietal region, interrupting cortical communications and causing loss of integration, or when they lead to bistable stereotyped responses, causing a loss of informational capacity. Thus, it seems most likely that anesthetics cause unconsciousness when they block the brain's ability to integrate information (Alkire, Hudetz, and Tononi 2008). Studies of returning to consciousness have demonstrated that neural activity on the way to conscious states is limited to a low-dimensional subspace. In this subspace, neural activity forms discrete metastable states that persist for minutes. The network of transitions that links these metastable states is structured so that some states form hubs connecting groups of otherwise unrelated states. Although there are different paths through the network, to eventually enter a state of activity compatible with consciousness, the brain must first pass signals through these hubs in an orderly manner (Hudson et al. 2014). In general, the available literature on empirical studies of conscious states suggests that conscious states are characterized by a greater degree of connectivity between different parts of the brain and neural ensembles. That is, conscious and unconscious states do not form a dichotomy but are characterized by gradual quantitative transitions, for which a rheostat might be a proper metaphor (Arp 2007). And even if some information has not become the subject of broadcasting and, therefore, a fact of consciousness, it can still be processed by the cognitive system and causally affect the state and behavior of the subject.

According to the said Global Neuronal Workspace theory (GNW) (Baars 1993; Baars, Franklin, and Ramsoy 2013; Boly et al. 2013), unconscious processes and mental states compete for the center of attention, from which global information is broadcast throughout the system. Consciousness is identified with this global broadcasting and, according to Baars, is an important means of functional and biological adaptation. The closest competitor to GNW is the integrated information theory (IIT) by G. Tononi and C. Koch (Edlund et al. 2011; Mayner et al. 2018; Tononi 2008, 2012; Tononi et al. 2016). It is one of the few modern theories that offers a measurable indicator for the degree of consciousness in a system, labelled Φ (phi), which shows the degree of physical integration of information. According to Tononi, this indicator can be measured even in relatively simple physical systems, which, of course, leads to panpsychist conclusions of a kind. Nevertheless, the theory is said to be well interpreted on some neurophysiological data coming from anesthetic practices and other situations of transition between conscious and unconscious states.
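The general idea of grading integration with a single number can be illustrated with a crude stand-in. The sketch below computes multi-information (total correlation) over observed states; this is emphatically NOT Tononi's Φ, whose actual definition involves partitioning the system's cause-effect structure, but it shows how one scalar can separate an integrated system from a non-integrated one:

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy (bits) of an empirical distribution of samples."""
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in Counter(samples).values())

def toy_integration(states):
    """Sum of per-part entropies minus whole-system entropy.

    A crude stand-in for an integration measure: it is positive exactly
    when the parts constrain one another, zero when they are independent.
    """
    whole = entropy(states)
    parts = sum(entropy([s[i] for s in states]) for i in range(len(states[0])))
    return parts - whole

# Two toy two-unit binary systems observed over time (hypothetical samples):
independent = [(a, b) for a in (0, 1) for b in (0, 1)]  # parts unrelated
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]              # parts locked together

print(toy_integration(independent))  # 0.0 bits: no integration
print(toy_integration(coupled))      # 1.0 bit: fully integrated
```

Even this toy already shows the panpsychist worry the text mentions: any two coupled bits score above zero, so without a principled threshold the measure grades almost everything as minimally ‘integrated’.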

Both theories are, in a sense, models, since they point to possible neurobiological realizations of the functions under consideration. However, there are also purely theoretical ideas that are not tied to specific implementation models. These include, for example, the ‘Higher Order Thought’ (HOT) theory by David Rosenthal (Rosenthal 1997, 2008). According to it, lower-order (LO) mental states include sensations and perceptions caused by the impact of objects of the external world on the sense organs. Higher-order (HO) mental states have other mental states as their object. LO mental states become conscious only when they become the object of HO mental states. This theory has some similarities with various approaches stating the recursive nature of individual consciousness (Baryshnikov 2018; Corballis 2011; Vergauwen 2014).

 

Criteria for ToC

In a comparatively recent paper entitled “Hard Criteria for Empirical Theories of Consciousness” (Doerig et al. 2021) the following criteria for ToC have been proposed:

  1. Whether a theory sticks to paradigm cases of consciousness and the unconscious alternative.
    As is clarified therein, “<p>aradigm cases with an unconscious alternative ensure that consciousness is the dependent variable in experiments, and contrast with approaches where only conscious states are investigated” (Doerig et al. 2021:5).
  2. Whether a ToC is free from being subject to the ‘unfolding argument’, which amounts to its adhering to a falsifiable causal structure.
    Doerig et al. give the example of the Recurrent Processing Theory (Lamme 2010, 2020; Lamme and Roelfsema 2000), according to which visual consciousness emerges when stimuli, having passed through an initial feed-forward network (FFN) of the visual tract, start being broadcast via recurrent networks (RN) that connect the visual processing regions to other parts of the brain. The problem with this and similar approaches is that, as can be shown mathematically, any RN may be unfolded into an FFN, thus being functionally identical thereto. Moreover, patients whose damaged RN-regions have been replaced with FFN-implants experience no faults in their consciousness. Therefore, according to Doerig et al., theories of this kind are not falsifiable, which puts them outside the scope of science.
  3. Whether a ToC is free from the small (and large) network argument.
    This argument boils down to the problem of panpsychism and the unity of consciousness. If a theory lacks a quantitative criterion for a system's being conscious, it may imply that a network of, say, ten neurons is conscious. A theory implying that a small-size network is enough to produce consciousness faces two issues: one is panpsychism, and the other is the problem of the unity of consciousness in large-scale networks, such as the human brain.
  4. Whether a ToC is resistant to the multiple realizations argument.
    Resorting to the computer metaphor of an application implemented on different operating systems, the authors claim that “ToCs that explain consciousness by pointing to certain brain regions or characteristics claimed to be sufficient for consciousness need to explain why they are also necessary for consciousness. Hence, our fourth criterion asks whether ToCs can make clear-cut and specific predictions about which other systems are conscious, apart from humans” (Doerig et al. 2021:8).
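The unfolding point behind criterion 2 can be shown in miniature with a linear scalar toy (all weights and inputs below are my own illustrative numbers; the general result holds for nonlinear networks with matrix weights as well):

```python
# The unfolding argument in miniature: a recurrent network run for T steps
# computes the same input-output function as a feed-forward composition of
# depth T with tied weights -- same function, different causal structure.

W_in, W_rec = 0.5, 0.8   # toy "weight matrices" (scalars here)

def recurrent(xs):
    """State fed back at every step: the recurrent causal structure."""
    h = 0.0
    for x in xs:
        h = W_rec * h + W_in * x
    return h

def unfolded(xs):
    """Purely feed-forward closed form: step i contributes W_rec^(T-1-i) * W_in * x_i."""
    T = len(xs)
    return sum(W_rec ** (T - 1 - i) * W_in * x for i, x in enumerate(xs))

xs = [1.0, -2.0, 0.5]
print(recurrent(xs), unfolded(xs))  # same value (up to float rounding)
```

Since an experiment can only probe the input-output function, no observation distinguishes the two architectures, which is exactly why Doerig et al. treat recurrence-based ToC as unfalsifiable.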

The paper is interesting in its conclusions as well. Comparing ToC issues with those biologists are regularly faced with, the authors point out that the latter lack any rigid definition of life, which doesn’t prevent them from knowing what life is, as they associate it with a set of necessary processes, like homeostasis, reproduction, etc. The current situation with ToC, in their view, is like that of magnetism in ancient science. Researchers just look for known or unknown ‘things’ to identify with their subject of interest. Meanwhile, as Doerig et al. put it, it may well be that “consciousness is a ‘solution’, a by-product, or a core component of a computational challenge that information processing systems need to solve – and that we have not discovered yet” (Doerig et al. 2021:16).

I would add that in a future sequel to their seminal paper, somebody should examine how diverse species of the computational approach meet the four criteria.

Turning back to the issues with the philosophy of consciousness, still commonly built up as metaphysics, I would like to add that if a philosophical ‘theory’ claims to be based on experience of whatever kind, it must be consistent with the above criteria, a requirement that phenomenology will obviously fail.

 

The pragmatic criterion added

From the standpoint of the previous discussion, to know is to obtain a reliable theory coherent with certain criteria that make a theory properly explanatory and predictive. From the point of view of social pragmatics, to know is to be able to make. But there may be a substantial difference between various kinds of making. We can make a clock to know what time it is now, but do we then know what time as such is? We would, but only if we made a time machine. Being able to measure is not the same as being able to alter.

These days, we can make AI devices, primarily, in the form of neural networks, but we still cannot determine the level of their consciousness, let alone provide them with it.

 

[1] For overviews of empirical theories of consciousness see Ned Block on some of the most important empirical theories (Block 2009), a perspective on neuroscientific advancements (Dehaene and Changeux 2011), and perhaps the latest analysis of the situation in this realm (Doerig, Schurger, and Herzog 2021). Some of them will be approached further herein.

[2] It is worth noting that ‘the physical world’ refers to some integrated ontology of the current physical theories considered to be true.

[3] For the distinction between physicalism and naturalism see, e.g., (Vintiadis 2013).

[4] Which is probably equal to Wittgenstein’s ‘law of projection’ (Wittgenstein 2021:117).


About the authors

Igor Felixovich Mikhailov

Author for correspondence.
Email: ifmikhailov@gmail.com

References

  1. Alkire, Michael T., Anthony G. Hudetz, and Giulio Tononi. 2008. “Consciousness and Anesthesia.” Science 322(5903):876–80.
  2. Anscombe, G. E. M. 1957. Intention. Vol. 57. Harvard University Press.
  3. Arp, R. 2007. “Consciousness and Awareness. Switched-On Rheostats: A Response to de Quincey.” Journal of Consciousness Studies 14(3):101–106.
  4. Baars, Bernard J. 1993. Cognitive Theory of Consciousness. Cambridge: Cambridge University Press.
  5. Baars, Bernard J., Stan Franklin, and Thomas Zoega Ramsoy. 2013. “Global Workspace Dynamics: Cortical ‘Binding and Propagation’ Enables Conscious Contents.” Frontiers in Psychology 4(MAY).
  6. Baryshnikov, P. N. 2018. “Language, Brain and Computation: From Semiotic Asymmetry to Recursive Rules.” RUDN Journal of Philosophy 22(2):168–82.
  7. Block, Ned. 2009. “Comparing the Major Theories of Consciousness.” Pp. 1111–22 in The Cognitive Neurosciences, edited by M. S. Gazzaniga. MIT Press.
  8. Boly, Melanie, Anil K. Seth, Melanie Wilke, Paul Ingmundson, Bernard Baars, Steven Laureys, David B. Edelman, and Naotsugu Tsuchiya. 2013. “Consciousness in Humans and Non-Human Animals: Recent Advances and Future Directions.” Frontiers in Psychology 4(OCT):1–20.
  9. Corballis, Michael C. 2011. The Recursive Mind: The Origins of Human Language, Thought, and Civilization.
  10. Dehaene, Stanislas, and Jean-Pierre Changeux. 2011. “Experimental and Theoretical Approaches to Conscious Processing.” Neuron 70(2):200–227.
  11. Dennett, Daniel C. 1987. The Intentional Stance. MIT Press.
  12. Doerig, Adrien, Aaron Schurger, and Michael H. Herzog. 2021. “Hard Criteria for Empirical Theories of Consciousness.” Cognitive Neuroscience 12(2):41–62.
  13. Edlund, Jeffrey A., Nicolas Chaumont, Arend Hintze, Christof Koch, Giulio Tononi, and Christoph Adami. 2011. “Integrated Information Increases with Fitness in the Evolution of Animats.” PLoS Computational Biology 7(10):e1002236.
  14. Gallagher, Shaun, and Dan Zahavi. 2013. The Phenomenological Mind. Routledge.
  15. Hohwy, Jakob, and Anil Seth. 2020. “Predictive Processing as a Systematic Basis for Identifying the Neural Correlates of Consciousness.” Philosophy and the Mind Sciences 1(II).
  16. Hudson, Andrew E., Diany Paola Calderon, Donald W. Pfaff, and Alex Proekt. 2014. “Recovery of Consciousness Is Mediated by a Network of Discrete Metastable Activity States.” Proceedings of the National Academy of Sciences of the United States of America 111(25):9283–88.
  17. Hutto, Daniel D. 2011. “Representation Reconsidered.” Philosophical Psychology 24(1):135–39.
  18. Hutto, Daniel D., and Erik Myin. 2013. Radicalizing Enactivism: Basic Minds Without Content. MIT Press.
  19. Van Inwagen, Peter. 1998. “Meta-Ontology.” Erkenntnis 48(48):233–250.
  20. Kant, Immanuel. 2004. Metaphysical Foundations of Natural Science. Cambridge University Press.
  21. Lamme, Victor A. F. 2010. “How Neuroscience Will Change Our View on Consciousness.” Cognitive Neuroscience 1(3):204–20.
  22. Lamme, Victor A. F. 2020. “Visual Functions Generating Conscious Seeing.” Frontiers in Psychology 11:83.
  23. Lamme, Victor A. F., and Pieter R. Roelfsema. 2000. “The Distinct Modes of Vision Offered by Feedforward and Recurrent Processing.” Trends in Neurosciences 23(11):571–79.
  24. Mally, Ernst. 1913. “Gegenstandstheoretische Grundlagen Der Logik Und Logistik.” Revue de Métaphysique et de Morale 21(3).
  25. Mashour, George A., Pieter Roelfsema, Jean Pierre Changeux, and Stanislas Dehaene. 2020. “Conscious Processing and the Global Neuronal Workspace Hypothesis.” Neuron 105(5):776–98.
  26. Mayner, William G. P., William Marshall, Larissa Albantakis, Graham Findlay, Robert Marchman, and Giulio Tononi. 2018. “PyPhi: A Toolbox for Integrated Information Theory.” PLoS Computational Biology 14(7).
  27. Meinong, Alexius. 1960. “On the Theory of Objects (Translation of ‘Über Gegenstandstheorie’, 1904).” Pp. 76–117 in Realism and the Background of Phenomenology, edited by R. Chisholm. Free Press.
  28. Mikhailov, I. 2019. “Has Time of Philosophy Passed?” Voprosy Filosofii (1):15–25.
  29. Mikhailov, I. F. 2019. “The Proper Place of Computations and Representations in Cognitive Science.” Pp. 329–48 in Automata’s Inner Movie: Science and Philosophy of Mind, edited by S. S. G. Manuel Curado. Vernon Press.
  30. Rosenthal, David M. 1997. “A Theory of Consciousness.” in The Nature of Consciousness, edited by N. Block, O. J. Flanagan, and G. Guzeldere. MIT Press.
  31. Rosenthal, David M. 2008. “Consciousness and Its Function.” Neuropsychologia 46(3):829–40.
  32. Schurz, Gerhard. 2011. “Structural Correspondence, Indirect Reference, and Partial Truth: Phlogiston Theory and Newtonian Mechanics.” Synthese 180(2):103–20.
  33. Searle, John. 1983. Intentionality. Vol. 20. Oxford University Press.
  34. Tacca, Michela C. 2010. “Syntactic Compositionality, Systematicity, and Productivity.” Pp. 37–52 in Seeing Objects: The Structure of Visual Representation. Brill | mentis.
  35. Tononi, G. 2012. “Integrated Information Theory of Consciousness: An Updated Account.” Archives Italiennes de Biologie 150(4):293–329.
  36. Tononi, Giulio. 2008. “Consciousness as Integrated Information: A Provisional Manifesto.” Biological Bulletin 215(3):216–42.
  37. Tononi, Giulio, Melanie Boly, Marcello Massimini, and Christof Koch. 2016. “Integrated Information Theory: From Consciousness to Its Physical Substrate.” Nature Reviews Neuroscience 17(7):450–61.
  38. Vergauwen, Roger. 2014. “Consciousness, Recursion and Language.” in Language and Recursion.
  39. Vintiadis, Elly. 2013. “Why a Naturalist Should Be an Emergentist about the Mind.” SATS 14(1):38–62.
  40. Wittgenstein, Ludwig. 1986. Philosophical Investigations. 3rd ed. Blackwell Publishers Ltd.
  41. Wittgenstein, Ludwig. 2021. “Tractatus Logico-Philosophicus.” Pp. 56–250 in Tractatus Logico-Philosophicus. Anthem Press.

Copyright (c) 2023 Mikhailov I.F.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
