User:Tancre/2/thesis

From XPUB & Lens-Based wiki
Revision as of 12:05, 19 February 2020

Thesis

Introduction

Part 1



Then, just as the frightened technicians felt they could hold their breath no longer, there was a sudden springing to life of the teletype attached to that portion of Multivac.
Five words were printed: INSUFFICIENT DATA FOR MEANINGFUL ANSWER.


Isaac Asimov, The Last Question, 1956 [1]


  1. Asimov, I (November 1956) 'The Last Question' in Science Fiction Quarterly



The Hard Problem of Consciousness

Within the standard scientific method, the challenge of explaining the mind has mainly been addressed by disassembling it into its functional, dynamical and structural properties. [1] Consciousness has been described as cognition, thought, knowledge, intelligence, self-awareness, agency and so on, assuming that explaining the physical brain will resolve the mystery of the mind [2]. From this perspective, our brain works as a complex mechanism that eventually triggers some sort of behavior. Consciousness is the result of a series of physical processes happening in the cerebral matter and determining our experience of having a body, thinking and feeling. This view has been able to explain many unknowns of what happens in our mind, leading some to think that soon we will be able to fully explain its inner secrets.

In 1995 the philosopher of mind David Chalmers published his article Facing up to the problem of consciousness [3], where he points out that the objective scientific explanation of the brain can solve only an easy problem. If we want to fully explain the mystery of the mind, we instead have to face up to the hard problem of consciousness: How do physical processes in the brain give rise to the subjective experience of the mind and of the world? Why is there a subjective, first-person experience of having a particular kind of brain? [4]

Explaining the brain as an objective mechanism is a relatively easy problem that could eventually be solved given enough time. But a complete understanding of consciousness and its subjective experience is a hard problem that scientific objectivity cannot directly access. Scientists instead have to develop new methodologies, acknowledging that a hard problem exists and must be taken into consideration: how is it possible that such a thing as the subjective experience of being me, here, now, takes place in the brain?

Echoing the mind-body problem initiated by Descartes [5], subjective experience, also called 'phenomenal consciousness' [6], underlies any attempt to investigate the nature of our mind. It challenges the physicalist ontology of the scientific method, exposing the unbridgeable explanatory gap [7] between its dogmatic view and a full understanding of consciousness. This creates the need for a paradigm shift, allowing new, alternative scientific methods to embrace the challenge of investigating phenomenal consciousness [8].

The reactions to Chalmers's paper range from total denial of the issue (Ryle 1949, Dennett 1978, 1988, Wilkes 1984, Rey 1997) to panpsychist positions (Nagel 1979, Bohme 1980, Tononi and Koch 2015), with some isolated cases of mysterianism (McGinn 1989, 2012) arguing that the mystery is impossible to solve. In any case, the last 30 years have seen exponential growth in multidisciplinary research facing the hard problem, with a constant struggle to build the blocks of a science of consciousness finally accepted as a valid field of study. This is a central subject of this thesis and we will regularly return to this contested field later in this text.



  1. Weisberg, J (2012) 'The hard problem of consciousness' in J. Feiser & B. Dowden (eds.), Internet Encyclopedia of Philosophy
  2. This position is called physicalism and it is closely related to materialism
  3. Chalmers, D (1995) 'Facing up to the problem of consciousness' in Journal of Consciousness Studies 2 (3):200-19
  4. Nagel, T (1974) 'What is it like to be a bat?' in Philosophical Review 83 (October):435-50
  5. Descartes, R (1641) 'Meditationes de prima philosophia'
  6. Block, N (2002) 'Some concepts of consciousness' in D. Chalmers (ed.), Philosophy of Mind: Classical and Contemporary Readings. pp. 206-219
  7. Levine, J (2009) 'The explanatory gap' in Harold Pashler (ed.), Encyclopedia of the Mind. SAGE Publications
  8. Varela, F (1996) 'Neurophenomenology: A methodological remedy for the hard problem' in Journal of Consciousness Studies 3 (4):330-49


The Machinic Life and Its Discontent (I)

To fully understand the relevance and consequences of exploring consciousness, we must shift our attention to the evolution of the technological system, and in particular to the attempts to simulate the mechanisms of the mind by building autonomous and intelligent machines. In The Allure of the Machinic Life [1], John Johnston attempts to organize the contemporary discourse on machines under a single framework that he calls machinic life.

By machinic life I mean the forms of nascent life that have been made to emerge in and through technical interactions in human-constructed environments. Thus the webs of connection that sustain machinic life are material (or virtual) but not directly of the natural world. (...) Machinic life, unlike earlier mechanical forms, has a capacity to alter itself and to respond dynamically to changing situations. [2]

Encompassing the whole attempt to produce life out of artificial hardware and software, the definition of machinic life allows us to reconsider the different experiences of the last century under the common goal of building autonomous adaptive machines, and to understand their theoretical backgrounds as a continuum.

The mythological intuition of technology, subsumed in the concept of techné [3], already shows the main paths of the contemporary discourse. In the myth of Talos and in Daedalus' labyrinth we can find the first life-like automaton and the first architectural design reflecting the complexity of existence and outsourcing thought from human dominion. However, only in the 19th century, with new technological discoveries and a positivistic approach to knowledge [4], did scientists start to build the bearing structures of what would become the two main fields of research on autonomous machines in the 20th century: Cybernetics and Artificial Intelligence (AI).

On the one hand, this process begins with the development of the steam engine (Watt 1776) and the study of thermodynamics (Sadi Carnot 1824), later joined by the studies of evolutionary biology (Lamarck 1809, Darwin 1859). In 1858, Alfred Wallace wrote a letter to Charles Darwin drawing a specific relation between the 'vapor engine' and the evolutionary process.

The action of this [feedback] principle is exactly like that of the centrifugal governor of the steam engine, which checks and corrects any irregularities almost before they become evident; and in like manner no unbalanced deficiency in the animal kingdom can ever reach any conspicuous magnitude, because it would make itself felt at the very first step, by rendering existence difficult and extinction almost sure soon to follow. [5]

In the 1870s Samuel Butler speculated on the evolution of machines [6]. The autoregulation of animals (evolution) and machines (feedback mechanisms) found their equivalence in the concept of adaptation. These theories reflect the effort to reintroduce the idea of teleology (purpose) which, strictly related to free will and choice, had been denied in the debate on the origin of humankind, contended between the vitalism of creationism's purposeful design (Paley 1802) and the blind chance of the Darwinian mechanism. These developments made it possible to theorize, as Butler did, a framework where machines can auto-regulate and reproduce themselves, evolving exactly as biological organisms do. In the twentieth century, Wallace and Butler's speculative theories would find their scientific correlative in the process of homeostasis. This biological autoregulatory system was described by the biologist Walter Bradford Cannon [7], making possible its closer study and its simulation in mechanical machines by cybernetics which, it is worth remembering, was defined as: control and communication in the animal and the machine.
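The feedback principle Wallace describes, which "checks and corrects any irregularities almost before they become evident", can be sketched as a minimal negative-feedback loop. This is a speculative illustration rather than a model of any historical governor; the setpoint, gain and disturbance values are invented for the example:

```python
# A minimal negative-feedback loop in the spirit of the centrifugal
# governor: the controller corrects deviations from a setpoint before
# they can grow. All numbers here are illustrative, not historical.

def simulate_governor(setpoint=100.0, gain=0.5, disturbance=20.0, steps=30):
    """Return the speed trajectory of a system nudged back to its setpoint."""
    speed = setpoint + disturbance    # a sudden irregularity
    trajectory = [speed]
    for _ in range(steps):
        error = setpoint - speed      # measure the deviation
        speed += gain * error         # corrective action proportional to it
        trajectory.append(speed)
    return trajectory

path = simulate_governor()
print(round(path[-1], 6))  # → 100.0: the deviation has decayed away
```

With each step the deviation is halved, so the irregularity never reaches a "conspicuous magnitude", which is exactly the homeostatic behavior cybernetics would later formalize.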

Parallel to these developments, the study of mathematics and logic, along with the revolution of Jacquard's loom (1804), led to the construction of advanced discrete-state machines [8] and the first practical translation of elementary logical functions into binary algebra. Charles Babbage and Ada Lovelace's effort to develop and program the Analytical Engine (1837), together with Boolean logic (1854), heralded a new computational era in which mental labor was no longer the exclusive prerogative of humans but could be performed by an economy of machines [9]. The idea of formalizing thought in a set of rules (an algorithm) can be traced back to Plato [10] and was theorized in the 17th century by Leibniz [11] as a universal symbolical system capable of solving every possible problem. Alan Turing and Alonzo Church put this speculation to a rigorous mathematical test in 1936, leading to the formalization of the theory of computation [12]. Together with the formalization of the computer's architecture by John von Neumann in 1945 and Claude Shannon's information theory in 1948, the digital computer was born, establishing the framework for the rise of Artificial Intelligence (AI).
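Turing's own illustration of a discrete-state machine, quoted in note 8 (a wheel that clicks round through three positions, can be stopped by an external lever, and lights a lamp in one position), can be written out as a small state table. This is a sketch of the idea with hypothetical state names:

```python
# Turing's toy discrete-state machine (see note 8): a wheel with three
# positions q1, q2, q3 that advances one click per tick unless a lever
# holds it, and a lamp that lights only in position q1.

NEXT = {"q1": "q2", "q2": "q3", "q3": "q1"}  # the wheel's clicks

def step(state, lever_pressed):
    """Advance the wheel one tick; a pressed lever holds it in place."""
    return state if lever_pressed else NEXT[state]

def lamp(state):
    return state == "q1"

state = "q1"
history = []
for lever in [False, False, True, False]:
    state = step(state, lever)
    history.append((state, lamp(state)))
print(history)  # the lever at tick 3 delays the lamp by one tick
```

The whole machine is exhausted by a finite table of states and transitions, which is precisely what makes such machines amenable to the theory of computation described above.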

If the classical world had the intuition of the sentient machine, and the modern world the realization of its possibility, it is only with the practical experience of cybernetics and AI that the contemporary discourse of machinic life can be formulated. The dual nature of this discourse embodies the convergence of theories of biological, mechanical and computational systems within a multidisciplinary approach to knowledge and life, driven by complexity and information. However, it already shows some of the weaknesses and biases that will become the limits of 'machinic life' in understanding and building working models of consciousness: for example, the idea that (a) life can be reproduced outside of biological systems and (b) the human mind works as a symbolic computer, assumptions which will be discussed in the next chapter.



  1. Johnston, J (2008) 'The allure of machinic life' in The MIT Press
  2. Ibid. p. ix
  3. 'An art, skill, or craft; a technique, principle, or method by which something is achieved or created.' in Oxford Dictionary
  4. This approach, called Positivism, was formulated by Auguste Comte in the early 19th century; it rejects subjective experience because it is not verifiable by empirical evidence.
  5. Wallace, A R (1858) 'On The Tendency of Varieties to Depart Indefinitely from the Original Type' Retrieved 18 April 2009
  6. Butler, S (1863) 'Darwin among the machines'
    Butler, S (1872) 'Erewhon'
    Butler, S (1879) 'Evolution old and new'
  7. Cannon, W B (1936) 'The wisdom of the body' in International Journal of Ethics 43 (2):234-235
  8. "These [discrete-state machines] are the machines which move by sudden jumps or clicks from one quite definite state to another. [...] As an example of a discrete-state machine we might consider a wheel which clicks round through 120° once a second, but may be stopped by a lever which can be operated from outside; in addition a lamp is to light in one of the positions of the wheel." Turing, A (1950) 'Computing machinery and intelligence' in Mind 59: 433-460
  9. Babbage, C () 'On the economy of machinery and manufactures'
  10. "I want to know what is characteristic of piety which makes all actions pious [...] that I may have it to turn to, and to use as a standard whereby to judge your actions and those of other men." Plato 'Euthyphro' VII, trans. F. J. Church (1948) in New York: Library of Liberal Arts p. 7.
  11. "Once the characteristic numbers are established for most concepts, mankind will then possess a new instrument which will enhance the capabilities of the mind to far greater extent than optical instruments strengthen the eyes, and will supersede the microscope and telescope to the same extent that reason is superior to eyesight." Leibniz, W G 'Selections' in Philip Wiener ed. (New York: Scribner, 1951), p. 18.
  12. The Church-Turing thesis states that whatever is effectively calculable is computable, and can therefore be represented as a Turing Machine


The Machinic Life and Its Discontent (II)

Gaining its identity at the Macy conferences (NYC 1946) [1], Cybernetics was the first framework capable of generating a working theory of machines. Its influence spread across disciplines such as sociology, psychology, ecology and economics [2], as well as popular culture (cyber-culture). The prefix cyber-, in fact, would become emblematic of a new understanding of the human condition as profoundly connected with machines. Researchers previously involved in technological development during World War II met to discuss and experiment with a new idea of life. Supported by statistical information theory, experimental psychology, behaviorism and Norbert Wiener's control theory, biological organisms were understood as self-regulating machines, thanks to their embodied homeostatic processes. This machinic behavior can be formalized in models and simulated in artificial organisms, conceptually leading to the dissolution of the boundaries between natural and artificial, men and machines, bodies and minds. Life becomes a complex adaptive system made of an organism adapting to its environment through long and short feedback loops (homeostasis). [3] Human beings become machines and vice-versa: cybernetic subjects sharing a new critical idea of life, no longer a matter of substance but of structural complexity. The implications of this astonishing view broke the boundaries of human identity, leading theorists to talk about post-humanism [4] and to explore new realms of control and speculation on the nature of simulation.

Despite the variety of subfields developed by Cybernetics [5], the parallel advent of the digital computer obscured most of its paths for decades. The focus of researchers and national funding shifted to the framework of Artificial Intelligence (AI). This new focus on intelligence, of which consciousness was considered a feature, was made possible by establishing a strict relation between the mind/brain and the digital computer. In fact, another revolution was taking place in the field of psychology: the incapacity of Behaviorism to include mental processes in the understanding of humans and animals was opening the doors to the 'cognitive revolution'. Comparing the mind, understood as the cradle of cognitive processes, with the computer's information processing, some researchers began to see the possibility of testing psychological theories on the digital computer, envisioned as an artificial brain. [6]

Before AI was officially born, in 1950 Alan Turing published his article 'Computing machinery and intelligence' [7], where he designed the 'imitation game', best known as the 'Turing test'. The computational power of the discrete-state machine was identified with the act of thinking and therefore with intelligence. "The reader must accept it as a fact that digital computers can be constructed, and indeed have been constructed, according to the principles we have described, and that they can in fact, mimic the actions of a human computer very closely." [8] Because phrasing the problem as 'can machines think?' yields ambiguous results, Turing reversed the question into a behavioral test that allows computer scientists to explore the possibility of creating intelligent machines: can we say a machine is thinking when it imitates a human so well that its interlocutor believes they are talking to another human? If you cannot recognize that your interlocutor is a machine, it does not matter whether it is actually thinking, because in any case the result would be the same: human-level communication. Thinking and mimicking thinking become equivalent, allowing machines to be called intelligent. In his text, Turing dismisses the argument of phenomenal consciousness and the actual presence of subjective experience by maintaining that such a problem does not necessarily need to be solved before being able to answer his question. Indeed, the Turing test suggests more than a simple game: it signals the beginning of a new inquiry into the theoretical and practical possibility of building 'real' intelligent machines, while at the same time indicating some possible directions [9] for building a machine capable of passing the test.

Riding the new wave of the cognitive revolution and embracing the cybernetic comparison between men and machines, in 1956 a group of polymaths began to meet at Dartmouth College [10], the birthplace of AI. They developed the first working program, called Logic Theorist, exploring the possibility of automating reasoning through its formalization and manipulation in a symbolic system. This approach, called Symbolic AI, would become the workhorse in the attempt to pass from programs able to reproduce only specific aspects of intelligence (narrow AI) to an artificial general intelligence (AGI) capable of any task, achieving the human-level AI (HLAI) prospected by Turing. This overstated goal would lead the fathers of AI [11] to be remembered as enthusiastic researchers drawn into a spiral of optimistic predictions and hyperbolic claims [12], mostly failed or not yet achieved.

Infected by the same enthusiasm, philosophers of science, already struggling with the possible comparison between the brain and the Turing Machine, started to attempt a serious interpretation of the human mind based on the information processing of the new digital computers. The movement, called computationalism, led to several theories. The Computational Theory of Mind (CTM) (1967) (Putnam, Fodor, Dennett, Pinker, Marr) understands the mind effectively as a linear 'input-processing-output' machine. Jerry Fodor's Language of Thought Hypothesis (LOTH) (1975) claims that thinking is only possible in a 'language-like' structure that builds thoughts at the top level. The hypothesis of A. Newell and H. Simon (1976) sees in the physical symbol system everything needed to build a true intelligence. In popular culture as well, the same enthusiasm led to a new ideology of the machine, with its climax in the fictional character HAL 9000 in the movie 2001: A Space Odyssey by Stanley Kubrick. [13]

Despite the great enthusiasm and expectations, the idea that computers can do all the things a human can do has been heavily criticized. Philosophers such as Hubert Dreyfus (1965, 1972, 1986) and Noam Chomsky (1968) have highlighted the problems of computationalism in building working theories of the mind. Starting a critical analysis of AI, they revealed the simplistic assumptions [14] perpetuated by the unjustified hype and the incapacity for self-criticism of major AI researchers. They showed the technical limitations of physical symbol systems, which are unable to grasp the value of context, essential in gaining knowledge and achieving common sense, as well as the impossibility of formalizing aspects of intelligence such as creativity and intuition.

In the same direction, the philosopher John Searle, criticizing the comparison of the human mind with computers when it comes to understanding things, developed a thought experiment called the 'Chinese room' (1980) [15], arguing for an underlying distinction between a 'strong AI' capable of really understanding and a 'weak AI' which merely simulates understanding. Searle's argument raises the same problems as the 'hard problem' of consciousness, defining a threshold between actual AI and the human mind. Other thought experiments, such as Jackson's 'Mary's room' (1986) [16], touch directly on the subjectivity of experience, which seems to resist all the efforts of the scientific community to reduce it to a machine and its weak computational intelligence.



  1. maci conference + partecipants
  2. cyberdisciplines
  3. Ashby, W R (1952) 'Design for a brain' in London : Chapman and Hall
  4. Hayles N K (1999) 'How we became posthuman' in The University of Chicago Press
  5. Self-organizing systems, neural networks and adaptive machines, evolutionary programming, biological computation, and bionics.
  6. cognitive simulation
  7. Turing, A (1950) 'Computing machinery and intelligence' in Mind 59: 433-460
  8. Ibid. p.
  9. Natural language processing, problem-solving, chess-playing, the child program idea, and genetic algorithms
  10. dartmouth + names
  11. McCorduck (1979) 'Machines who think' in Freeman
  12. Dreyfus, H (1972) 'What computers can't do' in Harper & Row
  13. HAL 9000 is depicted as a malevolent human-like artificial intelligence capable of feeling emotions, designed with the technical consultancy of Marvin Minsky.
  14. dreyf assumptions
  15. chinese room
  16. Mary's room


The Machinic Life and Its Discontent (III)

Computational symbolic AI, also called good old-fashioned AI (GOFAI), assumes that through a top-down approach all the aspects of the mind, including consciousness, can be engineered in digital computers. However, despite early successes (still limited when compared to the aforementioned goals), wrong predictions and conceptual limitations led to a series of failures resulting in two periods of recession best known as AI winters. After these periods, the criticism directed at symbolic AI and the development of new research inspired by cybernetics led to different approaches to understanding intelligence and designing life.

Looking closely at the architecture of the brain, cyberneticists were already exploring the possibility of reproducing its networks of neurons in artificial neural networks (ANN) [1]. However, this system became effective only during the 80s, with the development of parallel distributed processing (PDP) pairing multiple layers of ANNs [2] and the birth of a new approach in AI: connectionism. Instead of the upstream representation of knowledge typical of symbol manipulation, this bottom-up approach makes it possible to design AI systems capable of learning and finding useful patterns by inspecting sets of data and reinforcing the connections between their neurons. Thanks to the internet, ANNs can now be fed with large amounts of data, drastically increasing their capacity to learn. This new perspective, called deep learning, was introduced in 2012, producing a renewed hype in connectionism and AI.
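The connectionist idea of learning by reinforcing connections can be illustrated with the smallest possible case: a single artificial neuron that learns the logical AND function by adjusting its weights from examples, rather than by manipulating explicit symbols. The learning rate and training loop here are illustrative choices, not taken from any specific PDP model:

```python
# A minimal connectionist sketch: one perceptron learns logical AND by
# strengthening or weakening its two input connections whenever its
# prediction is wrong. Hyperparameters are arbitrary illustrative values.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # connection weights
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                    # a few passes over the data suffice
    for x, target in data:
        error = target - predict(x)
        w[0] += lr * error * x[0]      # reinforce the connections
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])   # → [0, 0, 0, 1]
```

No rule "output 1 only when both inputs are 1" is ever written down; the behavior emerges from the data, which is the contrast with symbolic AI that the paragraph describes.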

Another relevant approach that cybernetics produced in AI is the intelligent agent paradigm. Reintroducing the discourse on complex systems, the concept of the 'rational agent' (borrowed from economics) became a neutral way to refer to anything capable of interacting with an environment through sensors and actuators. This concept made it possible to develop AI systems capable of achieving a goal by keeping track of their environment, learning and improving their performance autonomously. In parallel with the developments in AI, cybernetics led to a new paradigm of machinic life focusing on the simulation of biological evolution through software: Artificial Life (ALife) [3]. Starting from very simple operators and basic laws of interaction in a given environment, complex systems automatically arise, creating chaotic interactions, stable loops and astonishing patterns that are impossible to predict.
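The emergence of stable loops from basic laws of interaction can be illustrated with the best-known system in the cellular-automata tradition that ALife grew out of: Conway's Game of Life. The 'blinker' below is a minimal stable loop, a pattern that oscillates with period 2; the grid coordinates are arbitrary:

```python
# An ALife-flavored sketch: Conway's Game of Life, where complex patterns
# arise from simple local rules. A live cell survives with 2 or 3 live
# neighbors; a dead cell becomes alive with exactly 3.
from collections import Counter

def step(cells):
    """Apply the Game of Life rules to a set of live (row, col) cells."""
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in cells
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {
        cell for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in cells)
    }

blinker = {(1, 0), (1, 1), (1, 2)}     # a horizontal bar of three cells
print(step(blinker))                    # flips to a vertical bar
print(step(step(blinker)) == blinker)   # period-2 oscillation: True
```

Nothing in the two rules mentions oscillation; the stable loop is an emergent property of their repeated local application, which is the point the paragraph makes about complex systems.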

Despite these new developments, both ALife and AI are encountering their respective boundaries. Life, like intelligence, emerges from the interaction with an extremely complex and variegated environment: the noisy physical world made of radiation and electromagnetic phenomena, particles and wavelengths in continuous interaction. A chaotic world that neither contemporary computers' capabilities nor the internet's amount of data can simulate. Furthermore, the companies relying on deep learning are facing up to the problem of understanding why these learning systems make their choices. Their autonomous way of learning through layers of networked neurons creates nested black boxes that are extremely difficult to unpack, raising a whole debate on the discrimination and biases embedded in software. [4]

To escape these limitations, scientists are now working on a more holistic understanding of intelligence which mixes the sub-symbolic approach of machine learning with the knowledge representation of symbolic AI. In robotics, situated AI takes robots outside of the labs to interact with the 'noisy' physical world, hoping to find new ways to generate direct knowledge instead of just simulating it. Finally, these new ways of defining life and intelligence are moving toward a deeper understanding of cognition which, instead of being represented only as a symbolic system, also lies on a sub-symbolic level [5], and instead of being a designed product is part of evolutionary processes [6]. Slowly, machinic life is reaching that adaptive unconscious and embodied knowledge which seems to be the key to simulating the high-level intelligence typical of intuition, creativity and the spontaneous complexity of life. In general, the prospects of engineering general-purpose systems and populating the world with new forms of life are growing fast.

Almost 70 years after the first AI program, we are still surrounded only by weak-and-narrow AIs. On the one hand, part of the research community has reformulated the goal of building human-level systems, as well as 'strong' AI, toward more practical aims. On the other hand, private institutions such as MIT, tech entrepreneurs such as Elon Musk, and many other researchers in AI-related fields, such as Ray Kurzweil, are repeating the same errors as the old fathers. Riding the regenerated hype made possible by the boom of deep learning, a new wave of enthusiasts is calling for the 'big picture' of AI. They daydream of a future-oriented techno-utopianist world reminiscent of the morally dubious neo-liberal Californian Ideology. Passing through the development of general-purpose human-level AIs, the technological singularity will realize a strong artificial consciousness, and the emancipated, black-boxed AI will 'think for itself', becoming the dominant, superintelligent form of life of the future (eventually helping, killing or snubbing human beings).

The AI's 'big picture' inflames the nervous system of popular culture. It creates misunderstandings about the actual state of affairs, false expectations for the near future and doubts about positive future perspectives. However, if the plan of machinic life in general is still to unveil the 'mystery of the mind', it cannot continue to pretend its assumptions are absolutely true and stay silent on the technical issues regarding how consciousness works. In particular, it will have to deal with the 'hard problem' of consciousness in any attempt to create a machine really capable of thinking. It will need to engineer that subjective experience which, at this point in time, still differentiates humans from machines, allowing us to imagine a present-oriented future where the 'big picture' is resized to its 'weak' actuality. Or, in the best case, if machinic life succeeds, as Matteo Pasquinelli hopefully interprets the words of Turing, it will be "that of a new kind of alliance between the two forms of cognition".

Before proceeding with a detailed account of the characteristics of subjective experience and its similarities to and differences from the computer, in the next chapter I will briefly introduce other approaches that, instead of the autonomous machine of machinic life, explore different relations between humans and their technological system.



  1. McCulloch, W S and Pitts, W (1943) 'A Logical Calculus of the Ideas Immanent in Nervous Activity' in Journal of Symbolic Logic 9 (2):49-50
  2. 'A General Framework for Parallel Distributed Processing'
  3. Artificial Life (Alife) was born from the studies of cellular automata begun by Von Neumann and developed by ... until its birth was made possible by Langton and his conference ...
  4. In this direction, deep learning researchers are using deep learning methods to analyze deep learning itself and unpack the behavior of its layered neurons.
  5. Kahneman, D (2011) Thinking, Fast and Slow, Farrar, Straus and Giroux
  6. Langton, C G (1990) 'Computation at the edge of chaos: phase transitions and emergent computation' in Physica D 42, 12-37


Beyond humans and machines

[S: This section needs more development and discussion. you must be clear about what the central question of the thesis is and orientate around that issue. I understand the research is wide, Can we continue to discuss it in terms of Hard and easy questions of consciousness? We can discuss this on Thursday.]

A totally different story overlaps with the plans of machinic life. Instead of designing intelligent artificial organisms, other machines can be developed with the capacity to augment human intelligence. In a play on words, intelligence augmentation (IA) inverts the acronym AI, with the aim of creating a symbiosis between humans and machines. Augmented intelligence means that the power of artificial computational systems lies in their capacity to amplify human intelligence.

computer-aided symbiosis / immersive gaming

IA can be seen most directly in the medical, aeronautic and aerospace technologies capable of augmenting human cognition, creating the actual symbiosis envisioned by computer scientist J.C.R. Licklider, and in the sociological, political and economic technologies which Simon Ramo reports on and conceptualizes as the mixing of intelligence and electronics: 'intellectronics'. Another relevant example can be found in the design of the human-computer interface (HCI), where the idea of building an intermediary between humans and machines passes from Vannevar Bush to Douglas Engelbart, the inventor of the computer mouse. It reaches its extreme point in the contemporary idea of designing a brain-computer interface (BCI), which would potentially make possible that extrasensory perception which Alan Turing refers to when he writes that "these disturbing phenomena seem to deny all our usual scientific ideas" (Turing, 1950).

AI and IA represent two instances of the relation between humans and machines, both making it possible to separate and reconnect two independent elements. Where one tries to ward off the technological system, the other tries to bring humans closer to it. Nevertheless, these two views are the poles of a more subtle intertwining already happening between human beings and their technological system.

Cyborg theory represents yet another view, one trying to valorize the different bonds possible between humans and machines. Not only is this symbiosis already happening, as between a person and their smartphone, but it also exists at an internal level, where biological beings carry artificial implants and vice versa. In this direction, hybrid biorobotics is a further framework backing away from the purely artificial goal of AI and standard robotics, exploring the different ways of mixing the biological and the artificial. If we think in terms of hardware and software, we can, on the one hand, build artificial hardware running biological software: there is research, for example, trying to use the patterns emerging in biological neural networks as software to run on artificial hardware. On the other hand, we can use artificial software to control biological hardware, as in RoboRoach, where the movements of a cockroach are controlled through an implant that sends electrical impulses. Using artificial software to stimulate the brain could lead to what Thomas Metzinger calls 'neuro-enhancement', the artificial control of emotional and altered states of consciousness. It could also make possible the creation of artificial consciousness, as it seems easier to control the brain than to build an artificial correspondent of it.

If cyborg theory and hybrid biorobotics are attempts to build patchworks containing a certain percentage of the biological and the artificial, cybernetics instead works on a different and more complex understanding of man and machine. The fundamental difference between biological and artificial systems collapses into a common ground, a continuum where life is no longer contained in the substance but instead dwells in the structure, and might take place in whatever material. This continuum will later be developed in Deleuze and Guattari's 'Capitalism and Schizophrenia' (source) as the 'machinic phylum'. The abstracted computational power of the machine is identified with the power of nature to create life by making assemblages of ... (?)

All these different understandings of the relation between the social and the technological, man and machine, have a common denominator. The first step seems to be the much-acclaimed singularity, intended as a particular moment in time at which there will be a drastic change in how we deal with technologies. It could be the advent of AGI, HLAI or super artificial intelligence (SAI), the construction of an affordable BCI, the rise of a cyborg society or the synthesis of artificial consciousness. But the final point, the farthest moment where these theories conflate, is the bio-digital fusion that will follow the exponential growth of humans and machines.

The stars and Galaxies died and snuffed out, and space grew black after ten trillion years of running down. One by one Man fused with AC [Automatic Computer], each physical body losing its mental identity in a manner that was somehow not a loss but a gain. Man's last mind paused before fusion, looking over a space that included nothing but the dregs of one last dark star and nothing besides but incredibly thin matter, agitated randomly by the tag ends of heat wearing out, asymptotically, to the absolute zero[1]



  1. Asimov, I (1956) 'The Last Question' in Science Fiction Quarterly, November 1956


Part 2

Here, me, now

Subjective experience is phenomenal consciousness, and since the standard scientific method relies on an objective account of the mind based on empirical evidence, it cannot directly explain it. Philosophy, instead, has already developed different methods to look at phenomena (the things that appear to us) in themselves. At the end of the 19th century, Edmund Husserl's phenomenology inquired into the nature of mental content, acknowledging the possibility of inferring objective knowledge about it and about the external world. During the first half of the 20th century, analytic philosophers theorized the sense datum, later called qualia: minimal mind-dependent unities which, combined together, constitute the whole of phenomenal consciousness [1].

A standard cognitive understanding of the mind involves the mental representation of the external world (representational realism) instead of direct contact with it (naive realism). Our perception is deconstructed, processed in different areas of the brain, and recomposed into the world as we experience it. [2] The contents of our phenomenal consciousness accessible through introspection can be summed up in the mental representations of the here, the me, and the now: the experience of having a first-person perspective on the world at a specific moment in time. These mental representations can be understood as functional models produced by the evolutionary process and naturally selected for their survival and adaptive value.

In a common scenario, our point of view takes place from within our body, which is itself represented in the world, giving us a sense of ownership and selfhood, location, presence, and agency. If phenomenal space and time are essential structures, the self is a higher feature, only in part recognized in mammals and other animals as a non-conscious proto-self. [3] I will return to this argument later, but it is important to notice that the self allows us to access memories and project ourselves into the future, to use language and logico-mathematical thinking. Most importantly, turning the first-person perspective inward makes us self-aware beings, able to explore our own mental states and to account for and experience experience itself. The absence of an active subject capable of inwardness makes us doubt to what extent certain animals are able to experience emotions and feelings as originating from within themselves. However, the impossibility of knowing what it is like to be another living being leaves this argument open to debate. [4]

Particular clinical cases and limit experiences, such as neuropsychiatric syndromes, dreams and altered states of mind, can instead help in this direction. They make it possible to have a direct account of phenomenal worlds with different contents, showing us that, far from being an all-or-nothing affair, consciousness is graded and non-unitary. In his book The Ego Tunnel [5], Thomas Metzinger defines consciousness as the appearance of a world. In fact, to explain subjective experience you have to solve the one-world problem first: how different phenomenal facts are merged together (world binding) into a coherent whole.

Given the hypothesis that the brain is the sufficient cause for consciousness to exist, whatever constitutes consciousness must have some sort of correlation with the physical brain. This is what scientists call the neural correlate of consciousness (NCC): an extremely complex but "coherent island emerging from a less coherent flow of neural activity" that then becomes a more abstract "information cloud hovering above a neurological substrate". [6] If we manage to link a particular subjective experience with a pattern of chattering neurons, we could get closer to solving the hard problem of consciousness. The study of different phenomenal worlds also helps in this direction, because it permits us to map which parts of the brain are activated in the absence of certain particular contents of consciousness. This effort could lead to the definition of a global NCC and of the minimal phenomenal consciousness necessary for a world to appear.

The functionalist theory developed by Metzinger leads him to suppose that the self, instead of being a stable thing, is a process running in our brain when we are conscious and turning off when we go to sleep: the phenomenal self-model (PSM). In particular, the study of the out-of-body experience (OBE), in which you experience yourself from outside of your body, led Metzinger to theorize that the PSM can be separated from its own body. De facto, the experience of having a consciousness that cannot be experientially identified with the brain can be understood as what Antti Revonsuo calls an out-of-brain experience.



  1. [missing reference: qualia]
  2. It is implied that the physical reality described by nuclear and quantum physics exists and our phenomenal experience is projected on top of it.
  3. Damasio, A (1999) The Feeling of What Happens: Body and Emotion in the Making of Consciousness, Harcourt (on core consciousness and the proto-self)
  4. Nagel, T (1974) 'What Is It Like to Be a Bat?' in The Philosophical Review 83 (4), 435-450
  5. Metzinger, T (2009) The Ego Tunnel, Basic Books; see also Metzinger, T (2003) Being No One, MIT Press
  6. Metzinger, T (2009) The Ego Tunnel, Basic Books


Engines and experiences

As Metzinger suggests, our consciousness is a world engine, making it possible for a world to appear. Furthermore, because we do not identify the experience of being conscious as coming from the brain (even though we know it is strictly related to it), it can be called an out-of-brain experience. These two concepts, the world engine and the out-of-brain experience, led me to think about a different way of approaching computers, and in particular software, one which highlights the personal, subjective relation that we create with them.

If the computer metaphor has its limitations in practice, as a metaphor it helps us to think about many aspects of our own being. In fact, the difference between hardware and software reflects our struggle to interpret the relation between our body and our mind, our brain and our consciousness. We know that what we call software is a series of abstractions relying on each other, but in the end nothing more than a series of electrical impulses happening in the hardware; therefore, to steal the famous words of Friedrich Kittler, 'there is no software'. However, software influences our culture and is in fact studied as a cultural object in software studies. The same happens with our consciousness and its properties, which scientists continuously try to reduce to the brain itself, leading Metzinger to argue that 'there is no self'. However, we have only just started to explore what consciousness is, and further developments might lead to a totally different understanding of its nature.


Given the relation between consciousness and software, my proposal is to give some hints towards a possible cultural theory of software which, instead of understanding software as a means to create artificial beings, relies upon subjective experience.


From this view, it can be said that a piece of software is an out-of-hardware experience, precisely because it is experienced by a subject interacting with it.












Conclusion