= Thesis =
== Introduction ==
The meaning of software as described in this thesis does not address technologies as an idealized offspring of autonomous life, nor as mere tools. Neither does it address society and its politics, which privilege groups over individuals and institutionalize objective, scientific knowledge as the sole source of wisdom. Instead, the meaning of software sprawls toward subjective experience which, notwithstanding being partly determined by the intertwining of society with the technical system, still stands as the primal perspective from which to acknowledge the world and its objects.
How to articulate this kind of discourse?
And what consequences unfold from such an understanding of software?
When I started unpacking these thoughts, my interest was directed at justifying how my own attitude and practice resist the ruling views on the aims of both the social and the technological system. First, I was concerned about the tireless attempts of an omnipresent society to determine ''what, who and why'' I am. Then, this imposed subjectification pushed me to experiment with new technologies, expecting to find an easier route to self-determination. In the last year, however, I realized that even there, the bearing structure of digital technologies, built upon a century of coding software and manufacturing hardware, was determined by the same politics and social dynamics I was trying to get away from.
My naive expectations have now been dispelled, but my interest in claiming the fundamental role of self-determination is unchanged, and a critical approach to the study and use of computers and software still keeps my personal door to wisdom open. This path, it must be said, is not a one-man enterprise but a historical trail weaving together different discourses and involving disciplines that are themselves multidisciplinary and arduous to summarize as definite unities. To capture the whole and help the reader not to get lost in the labyrinth, a map needs to be drawn here: a map showing the path and the main ''ciceroni'' who keep me from getting lost myself.
This thesis looks back at the frameworks that tried to instantiate the scientific explanation of the mind and the body in machines, gathered here under ''machinic life'', an umbrella term coined by John Johnston. In parallel, it explores recent theoretical approaches in the humanities and scientific findings shedding new light on the study of consciousness. These recent views make it possible to correct the attempts of ''machinic life'' which, however, has already spread the consequences of its premature assumptions through a society that is meanwhile enmeshed with its technological system. In this direction, my thoughts resonate particularly with the work of David Chalmers, who calls for a paradigm shift allowing the study of consciousness as a valid field of research and has built a solid network of scientists facing the challenge of explaining consciousness; with Thomas Metzinger, who, through the study of altered states of mind, is one of the few to propose an alternative model of consciousness capable of explaining the nature of the self; and with Katherine N. Hayles who, developing her discourse from the social (and therefore from a direction opposite to mine, which emphasizes the subject), particularly impressed me by articulating a discourse on machines as a form of cognition other than human yet sharing something with it. This is the same idea I was trying to draw by following the path of the "proto-subjectivity" described by Felix Guattari, and that she develops from a scientific point of view and extends to social theory.
Finally, the result of these discourses, applied to software, instead of addressing its technical and cultural aspects directly, reveals a new aspect of software pointing toward the subjective experience of a new phenomenal world that can be built through an external form of cognition. Knowledge, from this point of view, is not inaccessible to the individual, given by science and normalized through society; it is a process built through the construction of worlds, simulating the real or creating useful fictions, but always validating the subject as the designer (or hacker) of its own experience.
== Part 1 ==
<br><br>
''<q>Then, just as the frightened technicians felt they could hold their breath no longer, there was a sudden springing to life of the teletype attached to that portion of Multivac. Five words were printed: INSUFFICIENT DATA FOR MEANINGFUL ANSWER.</q>''
—
Isaac Asimov, ''The Last Question'', 1956
<ref name="Asimov">Asimov, I (November 1956) 'The Last Question' in Science Fiction Quarterly</ref>
---------------------
<references/>
---------------------
<br><br>
=== The Hard Problem of Consciousness ===
Through the standard scientific method, the challenge of explaining the mind has mainly been addressed by disassembling it into its functional, dynamical and structural properties. <ref name="weisenberg">Weisberg, J (2012) 'The hard problem of consciousness' in J. Feiser & B. Dowden (eds.), Internet Encyclopedia of Philosophy</ref> Consciousness has been described as cognition, thought, knowledge, intelligence, self-awareness, agency and so on, with the assumption that explaining the physical brain would resolve the mystery of the mind. <ref name="physicalism">This position is called physicalism and it is closely related to materialism</ref> From this perspective, our brain works as a complex mechanism that eventually triggers some sort of behavior. Consciousness is the result of a series of physical processes happening in the cerebral matter and determining our experience of having a body, thinking, and feeling. This view has been able to explain many previously unknown aspects of what happens in our minds.
In 1995 the philosopher of mind David Chalmers published his article ''Facing up to the problem of consciousness'' <ref name="chalmers">Chalmers, D (1995) 'Facing up to the problem of consciousness' in Journal of Consciousness Studies 2 (3):200-19</ref>, where he points out that the objective scientific explanation of the brain can solve only an ''easy problem''. If we want to fully explain the mystery of the mind, we have to face up to the ''hard problem'' of consciousness: ''How do physical processes in the brain give rise to the subjective experience of the mind and of the world? Why is there a subjective, first-person experience of having a particular kind of brain?'' <ref name="nagel">Nagel, T (1974) 'What is it like to be a bat?' in Philosophical Review 83 (October):435-50</ref>
Explaining the brain as an objective mechanism is a relatively ''easy problem'' that, in time, could eventually be solved. But a complete understanding of consciousness and its subjective experience is a ''hard problem'' that scientific objectivity cannot access directly. Instead, scientists have to develop new methodologies that acknowledge that a ''hard problem'' exists — ''How is it possible that such a thing as the subjective experience of being me, here, now takes place in the brain?''
Echoing the ''mind-body problem'' initiated by Descartes, <ref name="descartes">Descartes, R (1641) 'Meditationes de prima philosophia'</ref> subjective experience, also called ''phenomenal consciousness'', <ref name="block">Block, N (2002) 'Some concepts of consciousness' in D. Chalmers (ed.), Philosophy of Mind: Classical and Contemporary Readings. pp. 206-219</ref> underlies any attempt to investigate the nature of our mind. This problem challenges the physicalist ontology of the scientific method, showing the unbridgeable ''explanatory gap'' <ref name="levine">Levine, J (2009) 'The explanatory gap' in Harold Pashler (ed.), Encyclopedia of the Mind. SAGE Publications</ref> between the latter's dogmatic view and a full understanding of consciousness. Hence the necessity of a paradigm shift allowing new, alternative scientific methods to embrace the challenge of investigating phenomenal consciousness <ref name="neurophenomenology">Varela, F (1996) 'Neurophenomenology: A methodological remedy for the hard problem' in Journal of Consciousness Studies 3 (4):330-49</ref>.
The reactions to Chalmers's paper range from total denial of the issue (Ryle 1949, Dennett 1978, 1988, Wilkes 1984, Rey 1997) to panpsychist positions (Nagel 1979, Bohme 1980, Tononi and Koch 2015), with some isolated cases of mysterianism (McGinn 1989, 2012) arguing that such a mystery is impossible to solve. In any case, the last 30 years have seen exponential growth in multidisciplinary research facing the ''hard problem'', with a constant struggle to build the blocks of a science of consciousness finally accepted as a valid field of study. This is a central subject of this thesis and we will return to this contested field regularly later in the text.
---------------------
<references/>
---------------------
<br>
=== Machinic Life and Its Discontent (I) ===
To fully understand the importance and consequences of exploring consciousness, we must shift our attention to the evolution of the technological system; in particular, to the attempts to simulate the mechanisms of the mind in order to build autonomous and intelligent machines. In ''The Allure of Machinic Life'' <ref name="Johnston">Johnston, J (2008) 'The allure of machinic life' The MIT Press</ref>, John Johnston organizes the contemporary discourse on machines under a single framework that he calls ''machinic life''. <blockquote>By machinic life I mean the forms of nascent life that have been made to emerge in and through technical interactions in human-constructed environments. Thus the webs of connection that sustain machinic life are material (or virtual) but not directly of the natural world. (...) Machinic life, unlike earlier mechanical forms, has a capacity to alter itself and to respond dynamically to changing situations. <ref name="Johnston2">Ibid. p. ix</ref> </blockquote> Implying the whole attempt to produce life and its processes out of artificial hardware and software, the definition of ''machinic life'' allows us to reconsider the different experiences of the last century under the common goal of building autonomous adaptive machines, and to understand their theoretical backgrounds as a continuum.
The mythological intuition of technology, subsumed in the concept of ''techné'' <ref name="technè">'An art, skill, or craft; a technique, principle, or method by which something is achieved or created.' in Oxford Dictionary</ref>, already shows the main paths of the contemporary discourse. In the myth of Talos and in Daedalus' labyrinth we can find the first life-like automaton and the first architectural design reflecting the complexity of existence and outsourcing thought from human dominion. However, only in the 19th century, with new technological discoveries and a positivistic approach to knowledge <ref name="positivism">This approach, called Positivism and formulated by Auguste Comte in the early 19th century, rejects subjective experience because it is not verifiable by empirical evidence.</ref>, did scientists start building the bearing structures of what would become the two main fields of machinic life in the 20th century: Cybernetics and Artificial Intelligence (AI).
On the one hand, this process begins with the improvement of the steam engine through the study of thermodynamics (Sadi Carnot 1824), joined with the debate on the origin of human beings opposing evolutionary biology (Lamarck 1809, Darwin 1859) to the religious belief in Creationism (Paley 1802). This new view, however, could not grant any possibility of choice and will to human existence which, unleashed from the religious teleology imposed by God, was consigned to ''natural selection'' and the random ''chance'' introduced by Charles Darwin. In 1858, Alfred Wallace wrote a letter to Darwin drawing a specific relation between the 'vapor engine' and the evolutionary process. <q>''The action of this [natural selection] principle is exactly like that of the centrifugal governor of the steam engine, which checks and corrects any irregularities almost before they become evident.''</q> <ref name="wallace">Wallace, A R (1858) 'On The Tendency of Varieties to Depart Indefinitely from the Original Type' Retrieved 18 April 2009</ref> Between the 1860s and the 1870s Samuel Butler, speculating on the evolution of machines in texts such as ''Darwin among the Machines'' (1863) and ''Erewhon'' (1872)<ref name="butler">Butler, S (1863) 'Darwin among the machines' <br>Butler, S (1872) 'Erewhon' <br>Butler, S (1879) 'Evolution old and new'</ref>, reintroduced the idea of teleology (purpose) into human nature, grounding the equivalence of the self-regulatory systems of animals (evolution) and machines (feedback mechanisms) in the concept of ''adaptation''. These developments made it possible to theorize a framework in which machines can auto-regulate and reproduce themselves, evolving exactly as biological organisms do.
In the twentieth century, Wallace's and Butler's speculative theories would find their scientific correlative in the biological process of ''homeostasis'', <ref name="cannon">Cannon, W B (1936) 'The wisdom of the body' in International Journal of Ethics 43 (2):234-235</ref> making possible its closer study and its simulation in technical systems by cybernetics, which, it is worth remembering, was defined as the study of control and communication in the ''animal'' and the ''machine''.
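The self-regulating pattern shared by Wallace's centrifugal governor and Cannon's homeostasis can be reduced to a single mechanism: a sensor measures the deviation from a set point and an effector applies a proportional correction. The following is a minimal illustrative sketch of such a negative feedback loop (the function and parameter names are my own, not from the sources cited above):

```python
# A negative feedback loop: measure the deviation from a set point and
# apply a proportional correction, over and over. This is the abstract
# mechanism behind both the centrifugal governor and homeostasis.

def regulate(value, set_point, gain=0.5, steps=20):
    """Repeatedly correct `value` toward `set_point`; return the trajectory."""
    trajectory = [value]
    for _ in range(steps):
        error = set_point - value   # deviation detected by the 'sensor'
        value += gain * error       # corrective action by the 'effector'
        trajectory.append(value)
    return trajectory

# An overheated 'body' settles back to its set point, step by step.
path = regulate(value=100.0, set_point=37.0)
print(round(path[-1], 3))  # converges to (very nearly) 37.0
```

Whatever the initial disturbance, each pass through the loop shrinks the error, "checking and correcting any irregularities" exactly as Wallace described.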
Parallel to these developments, the study of mathematics and logic, along with the revolution of Jacquard's loom (1804), led to the construction of advanced ''discrete-state machines'' <ref name="discrete-state machine">"These [discrete-state machines] are the machines which move by sudden jumps or clicks from one quite definite state to another. [...] As an example of a discrete-state machine we might consider a wheel which clicks round through 120° once a second, but may be stopped by a lever which can be operated from outside; in addition a lamp is to light in one of the positions of the wheel." Turing, A (1950) 'Computing machinery and intelligence' in Mind 59: 433-460</ref> and the first practical translation of elementary logical functions into binary algebra. Charles Babbage and Ada Lovelace's effort to develop and program the ''analytical engine'' (1837), the first general-purpose computer, together with Boolean logic (1854), introduced a new computational era in which ''mental labor'' was no longer the exclusive prerogative of humans but could be performed by an economy of machines <ref name="babbage">Babbage, C (1832) 'On the economy of machinery and manufactures'</ref>. The idea of formalizing thought in a set of rules (an algorithm) can be traced back to Plato <ref name="plato">"I want to know what is characteristic of piety which makes all actions pious [...] that I may have it to turn to, and to use as a standard whereby to judge your actions and those of other men." Plato 'Euthyphro' VII, trans. F. J. Church (1948) in New York: Library of Liberal Arts p. 7.</ref> and was theorized in the 17th century by Leibniz <ref name="leibniz">"Once the characteristic numbers are established for most concepts, mankind will then possess a new instrument which will enhance the capabilities of the mind to far greater extent than optical instruments strengthen the eyes, and will supersede the microscope and telescope to the same extent that reason is superior to eyesight." Leibniz, W G 'Selections' in Philip Wiener ed. (New York: Scribner, 1951), p. 18.</ref> as a universal symbolic system capable of solving every possible problem. Alan Turing and Alonzo Church would later put this speculation to the mathematical test, in 1936, leading to the formalization of the theory of computation <ref name="turing-church">The Church-Turing thesis states that if something is calculable, then it is also computable and can be represented as a Turing Machine</ref>. Along with the formalization of the computer's architecture by John von Neumann in 1945 and Claude Shannon's information theory in 1948, the digital computer was born, establishing the framework for the rise of Artificial Intelligence (AI).
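Turing's own example of a discrete-state machine, quoted in the footnote above, is small enough to write out in full: a wheel with three positions that clicks forward on every tick unless a lever holds it, with a lamp lit in one position. A sketch (state names and input labels are mine):

```python
# Turing's wheel as a finite transition table: the entire machine is the
# pair (TRANSITIONS, LAMP). Nothing else is needed to predict its behavior.

TRANSITIONS = {  # (current state, lever input) -> next state
    ('q1', 'free'): 'q2', ('q2', 'free'): 'q3', ('q3', 'free'): 'q1',
    ('q1', 'held'): 'q1', ('q2', 'held'): 'q2', ('q3', 'held'): 'q3',
}
LAMP = {'q1': False, 'q2': False, 'q3': True}  # the lamp lights in one position

def run(state, lever_inputs):
    """Feed a sequence of lever inputs to the wheel; return (state, lamp)."""
    for lever in lever_inputs:
        state = TRANSITIONS[(state, lever)]
    return state, LAMP[state]

print(run('q1', ['free', 'free']))  # ('q3', True)  - two clicks light the lamp
print(run('q1', ['free', 'held']))  # ('q2', False) - the lever stops the wheel
```

Because the machine "moves by sudden jumps from one quite definite state to another", its complete behavior is captured by a finite table, which is exactly what makes discrete-state machines amenable to mathematical treatment.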
If the classical world had the intuition of the sentient machine, and the modern world realized its possibility, it is only with the practical experience of cybernetics and AI that the contemporary discourse of machinic life could be formulated. This discourse embodies the convergence of different theories of biological, mechanical and computational systems within a multidisciplinary approach to knowledge and life, driven by complexity and information. However, it already shows some of the weaknesses and biases that would become the limits of machinic life in understanding and building working models of consciousness: for example, (a) the idea that life can be reproduced outside of biological systems and (b) the assumption that the human mind works as a symbolic computer, topics which will be discussed in the next chapter.
---------------------
<references/>
---------------------
<br>
=== Machinic Life and Its Discontent (II) ===
Gaining its identity at the Macy conferences (New York, 1946) <ref name="maci">Macy conferences + participants</ref>, Cybernetics was the first framework capable of generating a working theory of machines. Its influence spread across disciplines such as sociology, psychology, ecology and economics <ref name="cyberdisciplines">cyberdisciplines</ref> as well as popular culture (cyber-culture); the prefix cyber-, in fact, would become emblematic of a new understanding of the human condition as profoundly connected with machines. Researchers who had been involved in technological development during World War II met to discuss and experiment with a new idea of life. Supported by statistical information theory, experimental psychology, behaviorism and Norbert Wiener's control theory, biological organisms were understood as self-regulating machines, thanks to their embodied homeostatic processes. This machinic behavior could be formalized in models and simulated in artificial organisms, conceptually leading to the dissolution of the boundaries between natural and artificial, humans and machines, bodies and minds. Life becomes a complex adaptive system made of an organism adapting to its environment through long and short feedback loops (homeostasis). <ref name="ashby">Ashby, W R (1952) 'Design for a brain' London: Chapman and Hall</ref> Human beings become machines and vice-versa. Cybernetic subjects share a new, critical idea of life, no longer a matter of organic or inorganic substance but of structural complexity. The implications of this astonishing view broke the boundaries of human identity, leading theorists to talk about post-humanism <ref name="hayles post">Hayles, N K (1999) 'How we became posthuman' The University of Chicago Press</ref> and to explore new realms of control and speculation on the nature of simulation.
Despite the variety of subfields developed by Cybernetics <ref name="cybersubfields">Self-organizing systems, neural networks and adaptive machines, evolutionary programming, biological computation, and bionics.</ref>, the parallel advent of the digital computer obscured most of its paths for decades. The focus of researchers and national funding shifted to the framework of Artificial Intelligence (AI). This new focus on intelligence, of which consciousness is allegedly a feature, was made possible by establishing a strict relation between the mind/brain and the digital computer. In fact, another revolution was taking place in the field of psychology: the incapacity of Behaviorism to include mental processes in the understanding of humans and animals was opening the doors to the 'cognitive revolution'. Comparing the mind, understood as the cradle of cognitive processes, with the computer's information processing, some researchers began to see the possibility of testing psychological theories on the digital computer, envisioned as an artificial brain. <ref name="cogsim">cognitive simulation</ref>
Before AI was officially born, Alan Turing published his 1950 article 'Computing machinery and intelligence', <ref name="turin">Turing, A (1950) 'Computing machinery and intelligence' in Mind 59: 433-460</ref> where he designed the 'imitation game', better known as the 'Turing test'. The computational power of the discrete-state machine was identified with the act of thinking and therefore with intelligence. "The reader must accept it as a fact that digital computers can be constructed, and indeed have been constructed, according to the principles we have described, and that they can in fact mimic the actions of a human computer very closely." <ref name="turin2">Ibid. p. </ref> Because the phrasing of the problem as ''Can machines think?'' can produce ambiguous results, Turing reversed the question into a behavioral test that allows computer scientists to explore the possibility of creating intelligent machines: can we say a machine is thinking when it imitates a human so well that its interlocutor believes they are talking to another human? If you cannot recognize that your interlocutor is a machine, it does not matter whether it is actually thinking, because in any case the result would be the same: human-level communication. Thinking and mimicking thinking become equivalent, allowing machines to be called intelligent. In his text, Turing dismisses the argument of phenomenal consciousness and the actual presence of subjective experience by maintaining that such a problem does not necessarily need to be solved before being able to answer his question. Indeed, the Turing test suggests more than a simple game: it signals the beginning of a new inquiry into the theoretical and practical possibility of building 'real' intelligent machines, while indicating some possible directions <ref name="subturin">Natural language processing, problem-solving, chess-playing, the child program idea, and genetic algorithms</ref> for building a machine capable of passing the test.
Riding the new wave of the cognitive revolution and embracing the cybernetic comparison between humans and machines, a group of polymaths met in 1956 at Dartmouth College <ref name="Dartmouth">Dartmouth + names</ref>, the birthplace of AI. There, the first working program, called ''Logic Theorist'', explored the possibility of automating reasoning through its formalization and manipulation in a symbolic system. This approach, called Symbolic AI, would become the workhorse for passing from programs able to reproduce only specific aspects of intelligence (narrow AI) to an artificial general intelligence (AGI) capable of performing any task, achieving the human-level AI (HLAI) prospected by Turing. This overstated goal, which would lead the fathers of AI <ref name="mccorduck">McCorduck, P (1979) 'Machines who think' Freeman</ref> to be remembered as enthusiastic researchers drawn into a spiral of positive predictions and hyperbolic claims <ref name="dreyfus">Dreyfus, H (1972) 'What computers can't do' Harper & Row</ref>, mostly failed, or at least has not yet been achieved.
Infected by the same enthusiasm, philosophers of science, already struggling with the possible comparison between the brain and the Turing Machine, started to attempt a serious interpretation of the human mind based on the information processing of the new digital computers. The movement, called computationalism, led to several theories. The Computational Theory of Mind (CTM) (1967) (Putnam, Fodor, Dennett, Pinker, Marr) essentially understands the mind as a linear 'input-processing-output' machine. Jerry Fodor's Language of Thought Hypothesis (LOTH) (1975) claims thinking is only possible in a 'language-like' structure that builds thoughts at the top level. The hypothesis of A. Newell and H. Simon (1976) sees in the physical symbol system everything needed to build a true intelligence. In popular culture as well, the same enthusiasm led to a new ideology of the machine, with its climax in the fictional character HAL 9000 in Stanley Kubrick's movie 2001: A Space Odyssey. <ref name="hal9000">HAL 9000 is depicted as a malevolent human-like artificial intelligence capable of feeling emotions, designed with the technical consultancy of Marvin Minsky.</ref>
Despite the great enthusiasm and expectations, the idea that computers can do all the things a human can do has been heavily criticized. Philosophers such as Hubert Dreyfus (1965, 1972, 1986) and Noam Chomsky (1968) have highlighted the problematic aspects of computationalism in building working theories of the mind. Beginning a critical analysis of AI, they revealed the simplistic assumptions <ref name="dreyf assumptions">dreyf assumptions</ref> perpetuated by the unjustified hype and the incapacity for self-criticism of major AI researchers. They showed the technical limitations of physical symbol systems, which are unable to grasp the value of context, essential to gaining knowledge and achieving common sense, as well as the impossibility of formalizing all the aspects of intelligence, such as creativity and intuition.
In the same direction, the philosopher John Searle, criticizing the comparison of the human mind with computers, developed a thought experiment called 'The Chinese room' (1980) <ref name="chinese room">chinese room</ref> arguing for an underlying distinction between a 'strong AI' capable of really understanding and a 'weak AI' which just simulates understanding. Searle's argument raises the same issues as the 'hard problem' of consciousness, defining a threshold between actual AI and the human mind. Other thought experiments, such as Jackson's 'Mary's room' (1986), <ref name="Mary's room">Mary's room</ref> directly touch the subjectivity of experience, which seems to resist all the efforts of the scientific community to reduce it to a machine and its weak computational intelligence.
---------------------
<references/>
---------------------
<br>
=== Machinic Life and Its Discontent (III) ===
Computational symbolic AI, also called ''good old-fashioned AI'' (GOFAI), holds that, through a top-down approach, all the aspects of the mind, including consciousness, can be engineered in digital computers. However, despite early successes (still limited when compared to the stated goals mentioned above), wrong predictions and conceptual limitations led to a series of failures resulting in two periods of recession best known as ''AI winters''. After these periods, criticism of symbolic AI prompted new research inspired by cybernetics and the search for different approaches to understanding intelligence and designing life.
Looking closely at the architecture of the brain, cyberneticists were already exploring the possibility of reproducing its networks of neurons in artificial neural networks (ANN) <ref name="nn">McCulloch, W S and Pitts, W (1943) 'A Logical Calculus of the Ideas Immanent in Nervous Activity' in Journal of Symbolic Logic 9 (2):49-50</ref>. However, this approach would become effective only in the 1980s, with the development of parallel distributed processing (PDP) pairing multiple layers of ANNs <ref name="connectivism">Rumelhart, D E, Hinton, G E and McClelland, J L (1986) 'A General Framework for Parallel Distributed Processing' in Parallel Distributed Processing: Explorations in the Microstructure of Cognition. The MIT Press</ref> and the birth of a new approach in AI: connectionism. Instead of the upstream representation of knowledge typical of symbol manipulation, this bottom-up approach makes it possible to design AI systems capable of learning and finding useful patterns by inspecting sets of data and reinforcing the connections between their neurons. Thanks to the internet, ANNs can now be fed with vast amounts of data, drastically increasing their capacity to learn. This new perspective, called ''deep learning'', rose to prominence around 2012, producing renewed hype in connectionism and AI.
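The connectionist idea of "reinforcing the connections between neurons" can be shown at its smallest scale. Below is a minimal, illustrative sketch (my own example, not drawn from the sources above): a single artificial neuron that learns the logical AND function by adjusting its connection weights from data, rather than by manipulating predefined symbols.

```python
# A single-neuron perceptron: weighted sum, threshold, and a simple
# error-driven update rule that strengthens or weakens connections.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and bias for binary inputs from (inputs, target) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # nonzero only when the neuron is wrong
            w[0] += lr * err * x1       # reinforce (or weaken) each connection
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Logical AND as a tiny data set: output 1 only for input (1, 1).
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in AND])
# -> [0, 0, 0, 1]
```

No rule for AND is ever written down; the behavior emerges from repeated corrections, which is the bottom-up character of connectionism in miniature. Deep learning stacks many layers of such units, trained on far larger data sets.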
Another relevant approach that cybernetics produced in AI is the ''intelligent agent'' paradigm. Reintroducing the discourse on complex systems, the concept of the 'rational agent' (borrowed from economics) becomes a neutral way to refer to anything capable of interacting with an environment through sensors and actuators. This concept made it possible to develop AI systems capable of achieving a goal by keeping track of their environment, learning and improving their performance autonomously. In parallel with the developments in AI, cybernetics led to a new paradigm of machinic life focused on the simulation of biological evolution through software: Artificial Life (ALife) <ref name ="AL">Artificial Life (ALife) was born from the studies of cellular automata begun by Von Neumann and developed by ... until its birth made possible by Langton and his conference ...</ref>. Starting from very simple operators and basic laws of interaction in a given environment, complex systems automatically arise, creating chaotic interactions, stable loops and astonishing patterns that are impossible to predict.
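The cellular automata at the root of ALife make this "complexity from simple operators" concrete. The sketch below (an illustrative example of my own, not taken from the text) runs Rule 110, a one-dimensional automaton in which each cell updates from a trivial local table, yet the global pattern that unfolds is famously intricate:

```python
# Elementary cellular automaton: each cell looks only at itself and its two
# neighbors; the rule number's binary digits are the entire 'law of physics'.

RULE = 110  # binary 01101110: maps each 3-cell neighborhood (0-7) to 0 or 1

def step(cells):
    """Apply one synchronous update to a ring of 0/1 cells."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31 + [1] + [0] * 31      # start from a single live cell
for _ in range(16):                   # print 16 generations
    print(''.join('#' if c else '.' for c in row))
    row = step(row)
```

From one live cell and an eight-entry lookup table, a growing triangular structure of interacting sub-patterns appears, the kind of emergent, hard-to-predict order that Langton's ALife program took as its object of study.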
Despite these new developments, both ALife and AI are encountering their respective boundaries. Life, like intelligence, is generated through interaction with an extremely complex and variegated environment: the ''noisy'' physical world made of radiation and electromagnetic phenomena, particles and wavelengths in continuous interaction. A chaotic world that neither contemporary computers' capabilities nor the internet's amount of data can simulate. Furthermore, the companies relying on deep learning are confronting the problem of understanding why these learning systems make the choices they make. Their autonomous way of learning through layers of networked neurons creates nested black boxes that are extremely difficult to unpack, raising a whole debate on the discriminations and biases embedded in software. <ref name="soft">In this direction, deep learning researchers are using deep learning methods to analyze deep learning itself and unpack the behavior of its layered neurons.</ref>
To escape these limitations, scientists are now working on a more holistic understanding of intelligence that combines the sub-symbolic approach of machine learning with the knowledge representation of symbolic AI. In robotics, ''situated AI'' takes robots out of the labs to interact with the 'noisy' physical world, hoping to find new ways to generate direct knowledge instead of just simulating it. Finally, these new ways of defining life and intelligence are moving toward a deeper understanding of cognition which, instead of being represented only as a symbolic system, also lies on a sub-symbolic level <ref name ="slow and fast">Kahneman, D (2011) 'Thinking, fast and slow' Farrar, Straus and Giroux</ref> and, instead of being a designed product, is seen as part of evolutionary processes <ref name ="evo">Langton, C (1990) 'Computation at the edge of chaos' in Physica D 42, 12-37</ref>. Slowly, machinic life is reaching that ''adaptive unconscious'' and ''embodied knowledge'' which seem to be the key to simulating the high-level intelligence typical of intuition and creativity, and the spontaneous complexity of life. In general, the prospects of engineering general-purpose systems and populating the world with new forms of life are growing fast.
Almost 70 years after the first AI program, we are still surrounded only by weak-and-narrow AIs. On the one hand, part of the research community redirected the goal of building ''human-level'' systems, as well as 'strong' AI, toward more practical aims. On the other hand, private institutions, such as MIT, tech entrepreneurs, such as Elon Musk, and many other researchers in AI-related fields, such as Ray Kurzweil, are repeating the same errors as the founding fathers. Riding the regenerated hype made possible by the boom of deep learning, a new wave of enthusiasts is calling for the 'big picture' of AI. They daydream of a future-oriented, techno-utopianist world reminiscent of the morally dubious neo-liberal Californian Ideology. Passing through the development of general-purpose human-level AIs, the technological singularity will realize a strong artificial consciousness, and the emancipated, black-boxed AI will 'think for itself', becoming the dominant, superintelligent form of life of the future (eventually helping, killing or snubbing human beings).
The AI 'big picture' inflames the nervous system of popular culture. It creates misunderstandings about the actual state of affairs, expectations about the near future and doubts about positive future perspectives. However, if the plan of machinic life in general is still to unveil the 'mystery of the mind', its proponents cannot continue to pretend their assumptions are absolutely true and stay silent on the technical issues regarding how consciousness works. In particular, they will have to deal with the 'hard problem' of consciousness in any attempt to create a machine really capable of thinking. They will need to engineer subjective experience, which at this point in time still differentiates humans from machines, allowing us to imagine a present-oriented future where the 'big picture' will be resized to its 'weak' actuality. Or, in the best case, if machinic life succeeds, then, as Matteo Pasquinelli hopefully interprets the words of Turing, it will be <q>''that of a new kind of alliance between the two forms of cognition''</q>.
Before proceeding with a detailed account of the characteristics of subjective experience, its similarities and differences with the computer, in the next chapter I will briefly introduce other approaches that, instead of the autonomous machine of machinic life, explore different relations between humans and their technological system. | |||
--------------------- | |||
<references/> | |||
--------------------- | |||
<br> | |||
=== Beyond humans and machines === | |||
Before the 19th century, the technological system was envisioned as a tool extending the organs of the human operator, leader of the intellectual and productive process. This conception started to corrode with the critique of historical materialism pictured by Marx and Engels, and changed drastically after World War II. Inverting the dichotomy and confining human beings to a subaltern level, fated to become futile, the intelligent and autonomous artificial organism conceived by cybernetics and AI drew an unsurpassable threshold between human and machine performance. However, beyond these power-play configurations between natural and artificial agents, other possible worlds can be articulated. Worlds where humans and machines not only coexist but merge, achieving that level of close interaction between organisms known as symbiosis and leading to the paradigms of intelligence amplification (IA) and cyborg theory.
On the one hand, in parallel to AI and inverting its acronym, IA claims the possibility of augmenting human intelligence through technological means. Anticipated in 1945 by the prophetic words of Vannevar Bush and theoretically formalized in the forge of cybernetics by W. Ross Ashby, "it seems to follow that intellectual power, like physical power, can be amplified. Let no one say that it cannot be done [...]". Fostered by the visions of J.C.R. Licklider's ''man-machine symbiosis'' and Simon Ramo's ''intellectronics'', both working in close contact with the United States Department of Defense, the 1960s saw the consolidation of this promising paradigm in the development of interactive computing and the user interface. The work of Douglas Engelbart and his political plan to bootstrap human intelligence, automatically affecting society, will be remembered as the highest peak of IA before disappearing into the less politicized human-computer interaction (HCI) of the late 70s. Nowadays, a new frontier of amplification/interaction linking the brain directly with the computer is becoming possible. The brain-computer interface (BCI) gets us closer to realizing what Turing referred to as the "disturbing phenomena" called extrasensory perception, "[which] seem to deny all our usual scientific ideas". The same BCI which Elon Musk is trying to develop in his project ''Neuralink'', among other things, as a universal panacea to communicate with the artificial super-intelligence of the dystopic future.
On the other hand, the disclosure of the cybernetic concept of life dissolves the human-machine dichotomy into an ecosystem of patchworked organisms mixing artificial and biological parts. This continuum, called the ''machinic phylum'' by Deleuze and Guattari, is the home of the cybernetic organism (cyborg), which transforms its body into the playground where internal and external assemblages of parts such as implants, different in their substance but communicating through feedback loops, coexist. The cyborg society represents all the possible shades articulating the space between what is human and what is machine. In this direction, hybrid biorobotics is another framework that backs away from the purely artificial goal of AI and standard robotics, exploring the possibility of mixed species. The idea is that we can build artificial hardware running biological software as well as use artificial software to control biological hardware.
If the first way tries to deploy patterns emerging in biological neural networks to run on artificial computers, the second finds its example in RoboRoach, where the movements of a cockroach are controlled through an artificial implant sending electrical impulses to its nerves. This last technique reconnects to the BCI discussed above and, when used to directly stimulate the brain, leads to what Thomas Metzinger calls ''neuro-enhancement'', the artificial control of mental states (the neuro version of psychopharmacology). Hybrid biorobotics seems to be the closest way to synthesize consciousness, based on the fact that it appears easier to control the brain than to build an artificial correspondent of it.
All these different configurations, and the consequent understandings of the relation between the human and the machinic, have a common denominator. The first step seems to be the much-acclaimed singularity, intended as a particular moment in time in which there will be a drastic change in how we deal with technologies: the advent of AGI, HLAI or super artificial intelligence (SAI), the construction of an affordable BCI, the rise of a cyborg society, or the synthesis of machinic consciousness. But the final point, the farthest moment where theories conflate, is the bio-digital fusion that will follow the exponential growth of humans and machines.
''<q>The stars and Galaxies died and snuffed out, and space grew black after ten trillion years of running down. One by one Man fused with AC [Automatic Computer], each physical body losing its mental identity in a manner that was somehow not a loss but a gain. Man's last mind paused before fusion, looking over a space that included nothing but the dregs of one last dark star and nothing besides but incredibly thin matter, agitated randomly by the tag ends of heat wearing out, asymptotically, to the absolute zero</q>''<ref name="Asimov">Asimov, I (1956) 'The Last Question'</ref>
--------------------- | |||
<references/> | |||
--------------------- | |||
<br> | |||
== Part 2 == | |||
<br><br> | |||
''<q>I am not advocating that we go back to an animistic way of thinking, but nevertheless, I would propose that we attempt to consider that in the machine, and at the machinic interface, there exists something that would not quite be of the order of the soul, human or animal, anima, but of the order of a proto-subjectivity. This means that there is a function of consistency in the machine, both a relationship to itself and a relationship to alterity. It is along these two axes that I shall endeavour to proceed.</q>'' | |||
— | |||
Felix Guattari, ''On Machines'', 1995
<ref name="on machines"> Guattari, F (1995) 'On Machines' in Benjamin, A (ed.) ''Complexity''. </ref>
--------------------- | |||
<references/> | |||
--------------------- | |||
<br> | |||
=== Here, me, now === | |||
Subjective experience is phenomenal consciousness, and since the standard scientific method relies on an objective account of the mind based on empirical evidence, it cannot directly explain it. Philosophy, instead, has developed different methods to look at phenomena (the things that appear to us) in themselves. <ref name="buddism">buddism</ref> At the end of the 19th century, Edmund Husserl's ''phenomenology'' inquired into the nature of mental content, acknowledging the possibility of inferring objective knowledge about it and the external world. During the first half of the 20th century, analytic philosophers theorized the ''sense datum'' (intext ref), later ''qualia'' (intext ref): minimal mind-dependent unities which, combined together, constitute the whole of phenomenal consciousness. These approaches, and the description of the mind portrayed by the aforementioned cognitive revolution, involve a mental representation <ref name="mental rep"> These mental representations can be understood as functional models produced by the evolutionary process and naturally selected for their survival and adaptive value.</ref> of the external world (representational realism) instead of direct contact with it (naive realism). Our perception is deconstructed, processed in different areas of the brain, and recomposed into the world as we experience it. <ref name="asdad"> It is implied that the physical reality described by nuclear and quantum physics exists and our phenomenal experience is projected on top of it.</ref>
The contents of our phenomenal consciousness accessible through introspection can be summarized as the experience of having a first-person perspective (''me'') on the world (''here'') in a specific moment in time (''now''). Generally, our point of view takes place from within our body, which is itself represented as part of the world, giving us a sense of ownership and selfhood, location, presence, and agency. On the one hand, the ''me'' or self, as experienced by humans and a few mammals, is built on a higher level of consciousness allowing us to access memories and be projected into the future, using language and logico-mathematical thinking. Turning the first-person perspective inward, this extended or secondary consciousness (Damasio 2000; Edelman and Tononi 2000) makes us particularly self-aware beings, able to explore our own mental states and to account for and experience 'experience' itself. The lower level, called core or primary consciousness (Damasio 2000; Edelman and Tononi 2000), is common in humans, a large number of mammals, and marine organisms such as octopuses, and consists of a more basic form of self-awareness. On the other hand, the representation of space and time persists in most species at a basic level called the ''nonconscious protoself'' (Damasio 2000). I will return to this argument later, but it is important to highlight that the absence of a consistent subject capable of inwardness makes us doubt to what extent certain animals are able to experience emotions and feelings as originating from within themselves. However, the impossibility of knowing what it is like to be another living being leaves this argument open to debate. <ref name="nagel">Nagel, T (1974) 'What is it like to be a bat?'</ref>
Given the hypothesis that the brain is the sufficient cause for consciousness to exist <ref name="reductionism">reductionism</ref>, whatever constitutes it must have some correlation with the physical brain. This is what scientists call the neural correlate of consciousness (NCC) (intext ref), an extremely complex but "coherent island emerging from a less coherent flow of neural activity" that then becomes a more abstract "information cloud hovering above a neurological substrate". <ref name="metz"> ncc metz quote </ref> Clinical cases and limit experiences that are directly accountable (neuropsychiatric syndromes, dreams, meditation, altered states of mind and so on) help to map which part of the brain is activated when the experience of the ''here, me, now'' happens in different circumstances. In fact, far from being an all-or-nothing process, consciousness is graded and non-unitary, taking place in different phenomenal worlds. If we manage to link a particular subjective experience with a pattern of chattering neurons, we could get closer to solving the ''hard problem'' of consciousness. In particular, the first step to explain subjective experience would be to solve the ''one-world problem'': how different phenomenal facts are merged together (''world binding'') into a coherent whole. Defining particular NCCs should lead to finding the global NCC and the minimal NCC necessary for phenomenal consciousness to take place.
In his book ''The Ego Tunnel'', Thomas Metzinger defines consciousness as ''the appearance of a world'', and the brain is understood as a ''world engine'' capable of creating a wide variety of explorable phenomenal worlds. In particular, he focuses on the phenomenal worlds of dreams and out-of-body experiences (OBE) in order to develop a functionalist, reductionist theory of consciousness. These limit experiences, where a complete experience of disembodiment can be achieved, have led him to a particular definition of the self which, instead of being a stable instance, is a process running in our brain when we are conscious and turning off when we go to sleep. Just as the experience of the ''here'' and ''now'' is possible because they exist as internal mental representations, Metzinger's self is defined as a phenomenal self-model (PSM) created for better control over the whole organism. Although the internal modeling of the ''here, me, now'' allows a deeper understanding of phenomenal consciousness as simulated virtual reality, Metzinger claims that ''"no such things as selves exist in the world: nobody ever had or was a self"''. This provocative claim might be misleading in understanding the nature of the ego, which, notwithstanding the perspective of a PSM, seems more ontologically rooted when we consider the tangibility of experience itself.
Metzinger, like other researchers, tries to explain why it really looks like we are living in a simulation created by our own brains. Conscious experience seems to take place far away from the physical world, dwelling in a place other than the physical brain, the focus of most of the scientific community. Drawing a liminal space existing between our brain and the physical world, and claiming a reality of the phenomenal world closer to dreams, Antti Revonsuo reasonably calls the experience of being ''here, me, now'' an ''out-of-brain experience''.
--------------------- | |||
<references/> | |||
--------------------- | |||
<br> | |||
=== Engines and experiences === | |||
If the computer metaphor has its limitations in practice, kept as a metaphor it helps us think about many aspects of our being. In particular, the difference between hardware and software reflects our struggle to interpret the relation between our body and our mind, our brain and our consciousness. The first part of this text highlights how computers can produce symbolic and sub-symbolic operations, evolutionary dynamics and embodied knowledge, resulting in external behaviors identical to those of living beings. However, the available ''thinking machines'' cannot be said to be conscious. Most evidently, computers lack that active individual instance called the self which makes a world appear. But what about the ''here'' and the ''now'' of computers?
In her book ''Hamlet on the Holodeck'', literary critic Janet H. Murray develops a theory of new media based on their literary nature. She reports a quote from Italo Calvino describing the experience of a writer in front of his typewriter. ''<blockquote> "Every time I sit down here I read, “It was a dark and stormy night...” and the impersonality of that incipit seems to open the passage from one world to the other, from the time and space of here and now to the time and space of the written word; I feel the thrill of a beginning that can be followed by multiple developments, inexhaustibly."</blockquote>'' Murray explains how the overwhelming capacity of the analog text to project the reader into its world is reconfigured and augmented in new media. Not only can the text be translated into a digital file, displayed and multiplied, but the whole nature of computer software, where the digital text takes shape, is itself textual. Both the stack of layers of programming languages and the binary code dwelling at its foundation are texts expressing meaning. This computational backstage has been used in concrete poetry (Goldsmith) and software art (Cramer) to unveil the textual nature and the conceptual realm of the processes underlying the graphical user interface (GUI) and to compare them to human nature. Referred to as the ''Rorschach metaphor'' (Nelson, Turkle), the projective character of digital media is increased by the unique spatial aspects of the software environment. Often called cyberspace, it represents a geographical space we can move through, in an interactive process of navigation and exploration. Furthermore, the ''user/interactor'', the active part of this process, triggers events in temporal immediacy: ''<q>You are not just reading about an event that occurred in the past; the event is happening now, and, unlike the action on the stage of a theater, it is happening to you.</q>''
Integrating space and time, software enables a world to be experienced. Like the brain described by Metzinger, the computer's hardware works as a ''world engine''. However, because of the absence of an internal experiencing consciousness making the world appear, its ''here'' and ''now'' are actualized only through the subjective experience of an external ''me''. Similarly to this view of software as potential worlds, Seymour Papert, a prominent computer scientist in educational software, developed the concept of the ''microworld'': ''<q>a little world, a little slice of reality. It's strictly limited, completely defined [...]. But it is rich.</q>'' <ref name="papert microworld">"[a] representation of some well-defined domain, such as Newtonian physics, such that there is a simple mapping between the rules and structures of the microworld and those of the domain". </ref> The microworld works as an educational tool helping children learn how to operate and design multiple contained digital environments. In the long term, this knowledge of different small worlds can be used to create something larger: a ''macroworld''.
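Papert's notion can be made tangible in a few lines of code. What follows is a hypothetical sketch, not Papert's original Logo: the `Turtle` class and its `forward`/`left` commands are illustrative names for a minimal turtle microworld, a strictly limited and completely defined little world that is nevertheless rich enough to compose endless figures.

```python
# A minimal 'microworld': a turtle on a grid. The world is strictly
# limited and completely defined by a handful of rules, yet rich.
class Turtle:
    def __init__(self):
        self.x, self.y = 0, 0      # position in the little world
        self.heading = 0           # 0=east, 1=north, 2=west, 3=south

    def forward(self, n):
        # Move n steps in the current direction.
        dx, dy = [(1, 0), (0, 1), (-1, 0), (0, -1)][self.heading]
        self.x, self.y = self.x + dx * n, self.y + dy * n

    def left(self):
        # Rotate 90 degrees counter-clockwise.
        self.heading = (self.heading + 1) % 4

# Within these few rules, structures can be composed without limit:
t = Turtle()
for _ in range(4):                 # trace a square and return home
    t.forward(10)
    t.left()
```

Composing several such small worlds (a turtle, a physics sandbox, a music box) into a larger whole would be, in Papert's terms, the step from microworld to macroworld.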
What we call software is a stack of abstractions relying on each other but, in the end, it is nothing more than electrical impulses happening on the physical level of the hardware. Because of this, Friedrich Kittler suggests that in fact ''<q>there is no software</q>''. The same happens with our consciousness, which scientists continuously try to reduce to the brain itself, and, to paraphrase Metzinger, ''<q>there is no self</q>''. However, the influence of software on our society is widespread. The worlds created by software shape the physical world and, in fact, software is increasingly considered a cultural object worthy of being studied in depth. Something similar is happening to the self, which is actually experienced as more than an abstract model switching on and off. It contains the means enabling us to transform a meaningless physical world into a meaningful phenomenal universe worth exploring, and gives us the means to create our complex society. When the self interacts with the self-less computer, the projective mechanisms of the textual software activate, transporting the individual into the experience of a new phenomenal world. From this view, if the hardware represents the physical level, the software represents the possibility of a phenomenal world which is actualized only when experienced by a self. This phenomenal dimension that the software acquires can be described as an ''out-of-hardware experience'', precisely because it is experienced by a conscious subject located outside of the hardware.
However, if software is experienced ''out-of-hardware'', and consciousness is experienced ''out-of-brain'', where exactly is subjective experience located? The identification of subjectivity within the hardware is typical of those researchers whose scientific approach negates subjective experience and reduces it to a brain mechanism. They easily tend to alienate their own selves, idealizing computers as living organisms and predicting their ability to generate consciousness autonomously. Instead, when the problem is posed in these terms, the individual can claim back their power over the machine in shaping the center of phenomenal consciousness. In fact, the ''one-world problem'' of subjective experience mentioned before implies that one world is first needed for consciousness to take place. A first mental simulation is necessary; then, from this one world, other simulations similar to the ''microworlds'' described by Papert can be performed, predicting the results of an action or recalling a past event. <ref name="nested structure">This nested hierarchical structure is common in conscious mental simulations as well as in software, where at the top level runs the operating system.</ref> However, the hardware and the brain are two different kinds of ''world engine''. They are two different systems and, even when producing the same results, they differ precisely in substance, structure, and process. When we experience software, a phenomenal world other than the simulation of our main world opens in front of us. From the inside of the first ''out-of-brain'' world, a second ''out-of-hardware'' world can appear. In computer science, when a system runs a simulation of another system, this is called emulation. Given this notion, the brain and the hardware can be understood respectively as a ''world simulator'' and a ''world emulator'', when seen from the perspective of subjective experience.
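The computer-science sense of emulation invoked above can be illustrated with a toy example (a hypothetical sketch of my own, not drawn from the thesis's sources): a host program reproduces, step by step, the state transitions of a simpler guest machine, so that the guest's little world exists only while the host runs it. The two-instruction machine and the `emulate` function are illustrative inventions.

```python
# A toy 'world emulator': the host program reproduces, step by step,
# the state transitions of a simpler guest machine. The guest's world
# exists only while the host runs it.
def emulate(program, steps):
    """Run a two-instruction guest machine: ('INC', reg) increments a
    register; ('JNZ', reg, addr) jumps to addr if the register is non-zero."""
    regs = {'a': 0, 'b': 0}
    pc = 0                          # the guest's program counter
    for _ in range(steps):          # the host bounds the guest's time
        if pc >= len(program):
            break                   # the guest program has halted
        op = program[pc]
        if op[0] == 'INC':
            regs[op[1]] += 1
            pc += 1
        elif op[0] == 'JNZ':
            pc = op[2] if regs[op[1]] != 0 else pc + 1
    return regs

# A looping guest program, observed from outside by the host:
state = emulate([('INC', 'a'), ('JNZ', 'a', 0)], steps=6)
```

The guest never 'experiences' its loop; its entire world is a data structure inside the host, much as the ''out-of-hardware'' world of software is actualized only inside the experience of a conscious subject.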
--------------------- | |||
<references/> | |||
--------------------- | |||
<br> | |||
=== Extending cognition === | |||
To better understand the relationship connecting computational media and human beings, it is necessary to understand how the phenomenal ''here, me, now'', the compound of consciousness, differs from the ''here'' and ''now'' of the selfless world of software. However, even though the debate has progressed in many directions, the foundational elements needed to understand this relationship have been there all along.
A first connection is already contained in ''Erewhon'', the main novel of the aforementioned forerunner of cybernetics, Samuel Butler, and in its influence on the work of Deleuze and Guattari. Meant to be read backward as a deliberate misspelling of ''nowhere'', ''Erewhon'' contains ''the book of the machines'' <ref name="bookmachine"> a compendium of his earlier texts on the Darwinian discourse on evolution</ref>, where consciousness bound humans and machines for the first time. Subsequently, Deleuze's critique of representation, articulated through his concepts of difference and repetition <ref name="deleuze diffrep"> deleuze difference and repetition</ref>, developed a particular understanding of ''Ideas'', which are also called ''Erewhon''; in particular, he reframed the term not just as a ''no-where'' but as a ''now-here''. Later, in their collaborative work ''Anti-Oedipus'' <ref name="antioed"> deleuze&guattari antioedipus</ref>, Deleuze and Guattari related the same term to their concept of the ''desiring-machine'' and Butler's understanding of machines to the ''body without organs''. Finally, Guattari described the machine as a ''proto-singularity'' differing from biological organisms but closely related to their nature. <ref name="onmachines"> guatttari on machines</ref>
The term ''proto-singularity'' suggests a direct link to the aforesaid ''proto-self'', defined 10 years later by Damasio as the <q>''ensemble of brain devices that continuously and nonconsciously maintain the body state within the narrow range and relative stability required for survival''</q>, and representing the deep roots of the <q>''elusive sense of self''</q> of conscious experience. <ref name="damasiou"> damasiou cit protoself</ref> Though still referring to two different domains, technical and biological, the theoretical correspondence of these terms can be traced back to the offspring of cybernetics of the late 60s <ref name="2ndcyb"> second order cybernetics</ref>, and in particular to the research of biologists Humberto Maturana and Francisco Varela. They first developed the idea that cognition emerges in living systems from their ability to self-organize as self-contained systems (''autopoiesis'') <ref name="autopo"> autpoiesis and cognition</ref>, later enlarging this position to comprehend the sensorimotor capacity to match and interact with the environment (''enaction''). <ref name="enact"> enaction</ref> Proposing an alternative to computationalism and connectionism, the enactivist paradigm extends cognition beyond the brain and consciousness into the nonconscious inner processes happening in the organism as a body (''embodied cognition'') <ref name="embodied"> embodied cog</ref> and its interaction with an external environment (''situated cognition'') <ref name="situated"> situated cog</ref>. This radical view of cognition can be extended even further outside of the body to create frameworks comprehending not only animals and plants but also the technical system and, eventually, natural processes (''distributed cognition'') <ref name="distr"> distributed cog</ref>, getting closer to the panpsychist view where mind becomes a fundamental element of the whole of reality.
In her recent work, <ref name="listHayles"> list work of hayles on nonconscious cog</ref> N. Katherine Hayles reframes Damasio's ''protoself'' as ''nonconscious cognition'', emphasizing the extension of cognition outside consciousness into ''embodied'' and ''situated'' processes, and the relevance of the nonconscious as a new cognitive sphere comprehending both biological and technical systems. Furthermore, according to Hayles, because cognition presupposes interpretation and the production of meaning <ref name="biosem"> biosemiosi</ref>, the nonconscious provides a framework, which she calls ''cognitive assemblages'', to extend social theory beyond anthropocentrism and consciousness into a cognitive ecology of human and nonhuman ''cognizers''. <ref name="cognitiveassembl"> cognitive assemblage & cognizers</ref> Differing from the unconscious in its inaccessibility to conscious states, the nonconscious posits itself in between material processes and consciousness, providing the first layer of meaningful representations needed by consciousness to take place. Furthermore, according to new empirical evidence, the nonconscious works faster and can process more information than consciousness, preventing the latter from being overwhelmed. However, with its ability to choose, at its simplest level between a zero and a one, and to perform faster than consciousness, the technical nonconscious can condition our decisions and behaviors, making new techniques of surveillance and control possible. <ref name="missinghalfs"> missing half second</ref> From these perspectives, the study of computational media becomes a necessity in order to complete a coherent map of social interactions and to openly accept their active role in the production of culture.
The framework of ''nonconscious cognition'' developed by Hayles provides a working model to understand the actual relationship between consciousness and software. Given a cognition extended beyond consciousness, on the "biological" hand we find conscious processes relying on internal, dynamical representations provided by the biological nonconscious. These representations, maps of the environment and the body continuously updated within a window of time, provide consciousness with the building blocks of an embodied sense of ''self'' and a point of view through which it experiences a coherent phenomenal world. On the "artificial" hand, there is no consciousness and no self to reinterpret the representations provided by the technical nonconscious. Furthermore, far from being ''embodied'' and ''situated'' like biological organisms, the technical nonconscious is an ''embedded'' system <ref name="embedded system"> embedded system</ref>, compiling and interpreting lines of internally stored text and manifesting its represented content through an interface. This technical cognitive process happens in ''real time'', like the biological one, and represents the abstract spatial dimension described by its code, providing the ''here'' and ''now'' necessary for a world to appear. The software stands for the possibility that the representational processes of the technical nonconscious be extrapolated, internalized and re-represented by a consciousness that integrates its ''"self"'' to experience a new phenomenal world as an out-of-hardware experience.
--------------------- | |||
<references/> | |||
--------------------- | |||
<br> | |||
== Conclusion - a walk through the language maze == | |||
''<q> But why should I repeat the whole story? At last we came to the kingly art, and enquired whether that gave and caused happiness, and then we got into a labyrinth, and when we thought we were at the end, came out again at the beginning, having still to seek as much as ever. </q>'' <ref name="platos"> Plato - Euthydemus </ref> | |||
The attempts of the proponents of ''machinic life'' to build autonomous machines, and the articulations of human-machine symbiosis, are essential steps in exploring the processes of cognition. However, these frameworks rely on premature assumptions perpetuated as pretended actualities, while failing to consider the consequences of their claims and products for the broader public. The focus of these disciplines should change, because the symbiosis is clearly already happening and it necessarily changes our lives and our societies through a ''"control without control"''. Indeed, the mimesis of consciousness in technical systems, and its underlying faith in a true ''artificial consciousness'', must rely on an understanding of biological consciousness and, eventually, must be re-framed in accordance with it. Instead of rushing to increase the capabilities of technical systems, developing a science of consciousness is the necessary step we must take first, one that will allow us to disclose the nature of subjective experience and to re-organize our understanding of the physical world, the technical system, and the mind.
Along the same trajectory, understanding consciousness provides new means to look beyond consciousness itself. It permits us to find the natural position of an elusive object of inquiry which, because it is observable only inside ourselves as subjects, has been used for ages to perpetrate an unnatural anthropocentrism now threatened by our own technologies. The extension of cognition outside consciousness allows us to think of a natural social ecology where different forms of cognition, conscious and not, shape each other in a communal influence. Instead of being a threat, it opens new physical and intellectual relationships with new forms of cognition in and beyond the biological realm.
The interaction between human beings and the technical system through software, as discussed in this thesis, confirms the validity of such developments and insists on the necessity to continue in this direction. It envisions new ways to articulate the study of software: on the one hand, by standing firmly on the materiality of the physical processes constituting it and reducing it completely to the hardware; on the other hand, by highlighting how the interaction with a conscious subject permits us to rethink software in terms of an experiential world abstracted from the underlying material processes. Drastically different from other phenomena, which fail to provide the complexity of an experiential world, software can arguably be said to augment consciousness instead of just augmenting cognition and intelligence.
The technical questions on the validity and consequences of this thesis depend on future developments in our understanding of consciousness, of the physical world, and of their still uncertain relationship. Eventually, these developments will allow us to seriously consider the fact of living inside artificially simulated worlds, which right now are still games impossible to mistake for real worlds. Eventually, it will be understood whether consciousness can really be instantiated in artificial machines able to feel feelings and perceive themselves as embodied in a physical world. But right now, we must be able to understand why this is not really happening and why we are inherently different from computers.
Perhaps, the link between the human and the machinic consists of the maze created by their intertwining layers of languages. <ref name="languagemaze"> Language here is intended in the broad sense, comprehending non-verbal language and sensorial perception, to suggest all the possible signifiers. </ref> A ''"language maze"'' made of verbal and non-verbal languages, natural languages and formal languages, computer code and machine languages. A Daedalus' labyrinth of material, informational, algorithmic and literary explorable spaces developing in the horizontal and the vertical direction, from microscopic to macroscopic territories, internal and external spheres. A Penelope's web made of rooms that, like Escherian paintings, hide recursive simulations and emulations of other rooms, other mazes and itself. Perhaps, what distinguishes the human from the machinic is the feeling, illusory or real, of being whole with the ''"language maze"'' as an infinite space (logos) where new worlds can be built from scratch.
''<q> The Labyrinth is presented, then, as a human creation, a creation of the artist and of the inventor, of the man of knowledge, of the Apollonian individual, yet in the service of Dionysus the animal-god. </q>'' <ref name="colli"> Giorgio Colli - the birth of philosophy </ref> | |||
--------------------- | |||
<references/> | |||
--------------------- | |||
Latest revision as of 20:07, 22 March 2020
= Thesis =
== Introduction ==
The meaning of software, as described in this thesis, does not address technologies as an idealized offspring of autonomous life, nor as mere tools. Neither does it address society and its politics, which privilege groups over individuals and institutionalize objective, scientific knowledge as the sole source of wisdom. Instead, the meaning of software sprawls toward subjective experience which, notwithstanding being partly determined by the intertwining of society with the technical system, still stands as the primal perspective from which to acknowledge the world and its objects.
How can this kind of discourse be articulated? And what consequences unfold from such an understanding of software?
When I started unpacking these thoughts, my interest was directed at justifying how my own attitude and practice resist the ruling views on the aims of both the social and the technological system. First, I was concerned about the tireless attempts of an omnipresent society to determine what, who and why I am. Then, this imposed subjectification pushed me toward experimenting with new technologies, expecting to find an easier route to self-determination. However, in the last year, I realized that even there, the bearing structure of digital technologies, built upon a century of coding software and manufacturing hardware, was determined by the same politics and social dynamics I was trying to get away from.
My naive expectations are now reified, but still, my interest in claiming the fundamental role of self-determination has not changed, and a critical approach to the study and use of computers and software still keeps my personal door to wisdom open. This path, it must be said, is not a one-man enterprise but a historical trail weaving together different discourses and involving disciplines that are themselves multidisciplinary and arduous to summarize as definite unities. To capture the whole and help the reader not get lost in the labyrinth, a map needs to be drawn here: a map showing the path and the main "cicerones" that help me not to get lost myself.
This thesis looks back at the frameworks that instantiate the scientific explanation of the mind and the body in machines, here called "machinic life", an umbrella term coined by John Johnston. In parallel, it explores the recent theoretical approaches in the humanities and the scientific evidence shedding new light on the study of consciousness. These recent views make it possible to correct the attempts of "machinic life" which, however, has already spread the consequences of its premature assumptions in a society that is in the meanwhile enmeshed with its technological system. In this direction, I have found a particular resonance of my thoughts with the work of David Chalmers, who calls for a paradigm shift in science allowing the study of consciousness as a valid field of research, and who has built a solid network of scientists aiming to face the challenge of explaining consciousness; with Thomas Metzinger, who, through the study of altered states of mind, is one of the few to propose an alternative model of consciousness capable of explaining the nature of the self; and with Katherine N. Hayles, who develops her discourse starting from the social (and therefore opposite to mine, which emphasizes the subject), and who particularly impressed me by articulating the discourse on machines as a form of cognition that is other than human while sharing something with it. This is the same idea I was trying to draw by following the path of the "proto-subjectivity" described by Felix Guattari, and which she develops from a scientific point of view and extends to social theory.
Finally, the result of these discourses, applied to software, instead of addressing its technical and cultural aspects directly, reveals a new aspect of software pointing toward the subjective experience of a new phenomenal world that can be built through an external form of cognition. Knowledge, from this point of view, is not inaccessible to the individual, given by science and normalized through society; it is a process built through the construction of worlds, simulating the real or creating useful fictions, while still validating the subject as the designer (or hacker) of its own experience.
== Part 1 ==
Then, just as the frightened technicians felt they could hold their breath no longer, there was a sudden springing to life of the teletype attached to that portion of Multivac. Five words were printed: INSUFFICIENT DATA FOR MEANINGFUL ANSWER.
— Isaac Asimov, The Last Question, 1956 [1]
- ↑ Asimov, I (November 1956) 'The Last Question' in Science Fiction Quarterly
=== The Hard Problem of Consciousness ===
Through the standard scientific method, the challenge of explaining the mind has mainly been addressed by disassembling it into its functional, dynamical and structural properties. [1] Consciousness has been described as cognition, thought, knowledge, intelligence, self-awareness, agency and so on, with the assumption that explaining the physical brain would resolve the mystery of the mind. [2] From this perspective, our brain works as a complex mechanism that eventually triggers some sort of behavior. Consciousness is the result of a series of physical processes happening in the cerebral matter and determining our experience of having a body, thinking, and feeling. This view has been able to explain many unknown elements of what happens in our minds.
In 1995 the philosopher of mind David Chalmers published his article 'Facing up to the problem of consciousness' [3], where he points out that the objective scientific explanation of the brain can solve only an easy problem. If we want to fully explain the mystery of the mind, we instead have to face up to the hard problem of consciousness: How do physical processes in the brain give rise to the subjective experience of the mind and of the world? Why is there a subjective, first-person experience of having a particular kind of brain? [4]
Explaining the brain as an objective mechanism is a relatively easy problem that eventually, in time, could be solved. But a complete understanding of consciousness and its subjective experience is a hard problem that scientific objectivity cannot access directly. Instead, scientists have to develop new methodologies, acknowledging that a hard problem exists: How is it possible that such a thing as the subjective experience of being me, here, now, takes place in the brain?
Echoing the mind-body problem initiated by Descartes [5], subjective experience, also called phenomenal consciousness [6], underlies any attempt to investigate the nature of our mind. This problem challenged the physicalist ontology of the scientific method, showing the unbridgeable explanatory gap [7] between the latter's dogmatic view and a full understanding of consciousness. It creates the need for a paradigm shift allowing new, alternative scientific methods to embrace the challenge of investigating phenomenal consciousness [8].
The reactions to Chalmers's paper range from a total denial of the issue (Ryle 1949, Dennett 1978, 1988, Wilkes 1984, Rey 1997) to panpsychist positions (Nagel 1979, Bohme 1980, Tononi and Koch 2015), with some isolated cases of mysterianism (McGinn 1989, 2012) advocating the impossibility of solving such a mystery. In any case, the last 30 years have seen exponential growth in multidisciplinary research facing the hard problem, with a constant struggle to build the blocks of a science of consciousness finally accepted as a valid field of study. This is a central subject of this thesis, and we will regularly return to this contested field later in the text.
- ↑ Weisberg, J (2012) 'The hard problem of consciousness' in J. Feiser & B. Dowden (eds.), Internet Encyclopedia of Philosophy
- ↑ This position is called physicalism and it is closely related to materialism
- ↑ Chalmers, D (1995) 'Facing up to the problem of consciousness' in Journal of Consciousness Studies 2 (3):200-19
- ↑ Nagel, T (1974) 'What is it like to be a bat?' in Philosophical Review 83 (October):435-50
- ↑ Descartes, R (1641) 'Meditationes de prima philosophia'
- ↑ Block, N (2002) 'Some concepts of consciousness' in D. Chalmers (ed.), Philosophy of Mind: Classical and Contemporary Readings. pp. 206-219
- ↑ Levine, J (2009) 'The explanatory gap' in Harold Pashler (ed.), Encyclopedia of the Mind. SAGE Publications
- ↑ Varela, F (1996) 'Neurophenomenology: A methodological remedy for the hard problem' in Journal of Consciousness Studies 3 (4):330-49
=== Machinic Life and Its Discontent (I) ===
To fully understand the importance and consequences involved in exploring consciousness, we must shift our attention toward the evolution of the technological system, and in particular toward the attempts to simulate the mechanisms of the mind in order to build autonomous and intelligent machines. In The Allure of Machinic Life [1], John Johnston attempts to organize the contemporary discourse on machines under a single framework that he calls machinic life.
By machinic life I mean the forms of nascent life that have been made to emerge in and through technical interactions in human-constructed environments. Thus the webs of connection that sustain machinic life are material (or virtual) but not directly of the natural world. (...) Machinic life, unlike earlier mechanical forms, has a capacity to alter itself and to respond dynamically to changing situations. [2]
Encompassing the whole attempt to produce life and its processes out of artificial hardware and software, the definition of machinic life allows us to reconsider the different experiences of the last century under the common goal of building autonomous, adaptive machines, and to understand their theoretical backgrounds as a continuum.
The mythological intuition of technology, subsumed in the concept of techné [3], already shows the main paths of the contemporary discourse. In fact, in the myth of Talos and in Daedalus' labyrinth, we can find the first life-like automaton and the first architectural design reflecting the complexity of existence and outsourcing thought from human dominion. However, only in the 19th century, with the new technological discoveries and a positivistic approach to knowledge [4], did scientists start building the bearing structures of what would become the two main fields of machinic life of the 20th century: Cybernetics and Artificial Intelligence (AI).
On the one hand, this process begins with the improvement of the steam engine through the study of thermodynamics (Sadi Carnot 1824), joined with the debate on the origin of human beings opposing evolutionary biology (Lamarck 1809, Darwin 1859) to the religious belief in Creationism (Paley 1802). This new view, however, wasn't able to sustain any possibility of choice and will in human existence which, unleashed from the religious teleology imposed by God, was consigned to natural selection and the random chance introduced by Charles Darwin. In 1858, Alfred Wallace wrote a letter to Darwin making a specific relation between the 'vapor engine' and the evolutionary process: 'The action of this [natural selection] principle is exactly like that of the centrifugal governor of the steam engine, which checks and corrects any irregularities almost before they become evident.'
[5] In the 1860s and 1870s Samuel Butler, speculating on the evolution of machines in texts such as Darwin Among the Machines (1863) and Erewhon (1872) [6], reintroduced the idea of teleology (purpose) in human nature, founding the equivalence between the autoregulatory systems of animals (evolution) and machines (feedback mechanisms) on the concept of adaptation. Furthermore, these developments made it possible to theorize a framework where machines can auto-regulate and reproduce themselves, evolving exactly as biological organisms do. In the twentieth century, Wallace's and Butler's speculative theories would find their scientific correlative in the biological process of homeostasis [7], making possible its closer study, and its simulation in technical systems, by cybernetics, which, it is worth remembering, was defined as 'control and communication in the animal and the machine'.
Parallel to these developments, the study of mathematics and logic, along with the revolution of Jacquard's loom (1804), led to the construction of advanced discrete-state machines [8] and the first practical translation of elementary logical functions into binary algebra. Charles Babbage and Ada Lovelace's effort to develop and program the analytical engine (1837), the first general-purpose computer, together with Boolean logic (1854), introduced a new computational era in which mental labor was not exclusively the prerogative of humans but could be performed by an economy of machines [9]. The idea of formalizing thought in a set of rules (an algorithm) can be traced back to Plato [10] and was theorized in the 17th century by Leibniz [11] as a universal symbolic system capable of solving every possible problem. Alan Turing and Alonzo Church demonstrated this speculation mathematically in 1936, leading to the formalization of the theory of computation [12]. Along with the formalization of the computer's architecture by John von Neumann in 1945 and Claude Shannon's information theory in 1948, the digital computer was born, establishing the framework for the rise of Artificial Intelligence (AI).
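To make the 'translation of elementary logical functions into binary algebra' concrete, here is a small sketch of my own (not drawn from Boole's or Babbage's actual notation): a half-adder, the elementary circuit that sums two binary digits using nothing but Boolean operations, arithmetic reduced to logic.

```python
# A half-adder: addition of two bits expressed purely in Boolean algebra.
# The sum bit is the exclusive-or of the inputs; the carry is their conjunction.

def half_adder(a, b):
    sum_bit = a ^ b   # XOR: 1 when exactly one input is 1
    carry = a & b     # AND: 1 when both inputs are 1
    return sum_bit, carry

# 1 + 1 in binary is '10': sum bit 0 with a carry of 1
assert half_adder(1, 1) == (0, 1)
assert half_adder(1, 0) == (1, 0)
assert half_adder(0, 0) == (0, 0)
```

Chaining such adders, feeding each carry into the next stage, yields arithmetic over numbers of any size: the sense in which mental labor could be delegated to an economy of machines.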
If the classical world had the intuition of the sentient machine, and the modern world the realization of its possibility, it is only with the practical experience of cybernetics and AI that the contemporary discourse of machinic life can be formulated. The dual nature of this discourse embodies the convergence of different theories of biological, mechanical and computational systems within a multidisciplinary approach to knowledge and life, driven by complexity and information. However, it already shows some of the weaknesses and biases that will become the limits of 'machinic life' in understanding and building working models of consciousness: for example, (a) the idea that life can be reproduced outside of biological systems and (b) the assumption that the human mind works as a symbolic computer, topics which will be discussed in the next chapter.
- ↑ Johnston, J (2008) 'The allure of machinic life' in The MIT Press
- ↑ Ibid. p. ix
- ↑ 'An art, skill, or craft; a technique, principle, or method by which something is achieved or created.' in Oxford Dictionary
- ↑ This approach is called Positivism. Formulated by Auguste Comte in the early 19th century, it rejects subjective experience because it is not verifiable by empirical evidence.
- ↑ Wallace, A R (1858) 'On the Tendency of Varieties to Depart Indefinitely from the Original Type'
- ↑ Butler, S (1863) 'Darwin among the machines'
Butler, S (1872) 'Erewhon'
Butler, S (1879) 'Evolution old and new'
- ↑ Cannon, W B (1936) 'The wisdom of the body' in International Journal of Ethics 43 (2):234-235
- ↑ "These [discrete-state machines] are the machines which move by sudden jumps or clicks from one quite definite state to another. [...] As an example of a discrete-state machine we might consider a wheel which clicks round through 120° once a second, but may be stopped by a lever which can be operated from outside; in addition a lamp is to light in one of the positions of the wheel." Turing, A (1950) 'Computing machinery and intelligence' in Mind 59: 433-460
- ↑ Babbage, C (1832) 'On the economy of machinery and manufactures'
- ↑ "I want to know what is characteristic of piety which makes all actions pious [...] that I may have it to turn to, and to use as a standard whereby to judge your actions and those of other men." Plato 'Euthyphro' VII, trans. F. J. Church (1948) in New York: Library of Liberal Arts p. 7.
- ↑ "Once the characteristic numbers are established for most concepts, mankind will then possess a new instrument which will enhance the capabilities of the mind to far greater extent than optical instruments strengthen the eyes, and will supersede the microscope and telescope to the same extent that reason is superior to eyesight." Leibniz, W G 'Selections' in Philip Wiener ed. (New York: Scribner, 1951), p. 18.
- ↑ The Church-Turing thesis states that if something is effectively calculable, then it is computable and can be represented by a Turing machine
=== Machinic Life and Its Discontent (II) ===
Gaining its identity at the Macy Conferences (New York, 1946) [1], Cybernetics was the first framework capable of generating a working theory of machines. Its influence spread into different disciplines such as sociology, psychology, ecology and economics [2], as well as into popular culture (cyber-culture). The prefix cyber-, in fact, would become emblematic of a new understanding of the human condition as profoundly connected with machines. Researchers who had been involved in technological development during World War II met to discuss and experiment with a new idea of life. Supported by statistical information theory, experimental psychology, behaviorism and Norbert Wiener's control theory, biological organisms were understood as self-regulating machines, thanks to their embodied homeostatic processes. This machinic behavior can be formalized in models and simulated in artificial organisms, conceptually leading to the dissolution of the boundaries between natural and artificial, humans and machines, bodies and minds. Life becomes a complex adaptive system made of an organism adapting to its environment through long and short feedback loops (homeostasis). [3] Human beings become machines and vice-versa. Cybernetic subjects share the new critical idea of life, which is no longer a matter of organic and inorganic substance but of structural complexity. The implications of this astonishing view broke the boundaries of human identity, leading theorists to talk about post-humanism [4] and to explore new realms of control and speculations on the nature of simulation.
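The homeostatic feedback loop at the centre of this view can be sketched in a few lines (a toy negative-feedback controller of my own, not a model from the cybernetics literature): the system senses its deviation from a set point and acts against it, the same 'control and communication' pattern cybernetics saw in thermostats and in body-temperature regulation alike.

```python
# Negative feedback: measure the deviation (error) from a set point,
# then act against it. The system settles near its target on its own.

def regulate(temp, target=37.0, gain=0.5, steps=30):
    history = [temp]
    for _ in range(steps):
        error = target - temp   # communication: sensing the deviation
        temp += gain * error    # control: acting against the deviation
        history.append(temp)
    return history

trace = regulate(20.0)
# starting far from the set point, the trace converges toward 37.0
assert abs(trace[-1] - 37.0) < 0.01
```

Whether the disturbance pushes the quantity above or below the set point, the same loop pulls it back, which is why the mechanism reads as 'self-regulation' rather than external steering.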
Despite the variety of subfields developed by Cybernetics [5], the parallel advent of the digital computer obscured most of its paths for decades. The focus of researchers and national funding shifted to the framework of Artificial Intelligence (AI). This new focus on intelligence, of which consciousness is allegedly a feature, was made possible by establishing a strict relation between the mind/brain and the digital computer. In fact, another revolution was taking place in the field of psychology. The incapacity of Behaviorism to include mental processes in the understanding of humans and animals was opening the doors to the 'cognitive revolution'. Comparing the mind, understood as the cradle of cognitive processes, with the computer's information processing, some researchers began to see the possibility of testing psychological theories on the digital computer, envisioned as an artificial brain. [6]
Before AI was officially born, in 1950 Alan Turing published his article 'Computing machinery and intelligence' [7], where he designed the 'imitation game', best known as the 'Turing test'. The computational power of the discrete-state machine was identified with the act of thinking and therefore with intelligence. "The reader must accept it as a fact that digital computers can be constructed, and indeed have been constructed, according to the principles we have described, and that they can in fact, mimic the actions of a human computer very closely." [8] Because phrasing the problem as 'Can machines think?' can have ambiguous results, Turing reversed the question into a behavioral test, allowing computer scientists to explore the possibility of creating intelligent machines: can we say a machine is thinking when it imitates a human so well that s/he thinks s/he is talking to another human? If you can't recognize that your interlocutor is a machine, it doesn't matter whether it is actually thinking, because in any case the result would be the same: human-level communication. Thinking and mimicking thinking become equivalent, allowing machines to be called intelligent. In his text, Turing dismisses the argument of phenomenal consciousness and the actual presence of subjective experience by maintaining that such a problem does not necessarily need to be solved before his question can be answered. Indeed, the Turing test suggests more than a simple game. It signals the beginning of a new inquiry into the theoretical and practical possibility of building 'real' intelligent machines, while at the same time indicating some possible directions [9] for building a machine capable of passing the test.
Riding the new wave of the cognitive revolution and embracing the cybernetic comparison between humans and machines, a group of polymaths began to meet in 1956 at Dartmouth College [10], the birthplace of AI. They developed the first working program, called Logic Theorist, exploring the possibility of automating reasoning through its formalization and manipulation in a symbolic system. This approach, called Symbolic AI, would become the workhorse meant to pass from programs able to reproduce only specific aspects of intelligence (narrow AI) to an artificial general intelligence (AGI) capable of doing any task, achieving the human-level AI (HLAI) prospected by Turing. This overstated goal, which would lead the fathers of AI [11] to be remembered as enthusiastic researchers drawn into a spiral of positive predictions and hyperbolic claims [12], mostly failed, or at least has not yet been achieved.
Infected by the same enthusiasm, philosophers of science, already struggling over the possible comparison between the brain and the Turing machine, started to attempt a serious interpretation of the human mind based on the information processing of the new digital computers. The movement, called computationalism, led to several theories. The Computational Theory of Mind (CTM) (1967) (Putnam, Fodor, Dennett, Pinker, Marr) basically understands the mind as a linear 'input-processing-output' machine. Jerry Fodor's Language of Thought Hypothesis (LOTH) (1975) claims thinking is only possible in a 'language-like' structure that builds thoughts at the top level. The hypothesis of A. Newell and H. Simon (1976) sees in the physical symbol system everything needed to build a true intelligence. In popular culture as well, the same enthusiasm led to a new ideology of the machine, with its climax in the fictional character HAL 9000 in the movie 2001: A Space Odyssey by Stanley Kubrick. [13]
Despite the great enthusiasm and expectations, the idea that computers can do everything a human can do has been heavily criticized. Philosophers such as Hubert Dreyfus (1965, 1972, 1986) and Noam Chomsky (1968) highlighted the problematic aspects of computationalism in building working theories of the mind. Starting a critical analysis of AI, they revealed the simplistic assumptions [14] perpetuated by the unjustified hype and the incapacity for self-criticism of major AI researchers. They showed the technical limitations of physical symbol systems, which are unable to grasp the value of context, essential for gaining knowledge and achieving common sense, as well as the impossibility of formalizing all the aspects of intelligence, such as creativity and intuition.
In the same direction, the philosopher John Searle, criticizing the comparison of the human mind with computers with respect to understanding, developed a thought experiment called 'The Chinese Room' (1980) [15], arguing for an underlying distinction between a 'strong AI' capable of really 'understanding' and a 'weak AI' which just simulates understanding. Searle's argument raises the same issues as the 'hard problem' of consciousness, defining a threshold between actual AI and the human mind. Other thought experiments, such as Jackson's 'Mary's Room' (1986) [16], directly touch the subjectivity of experience, which seems to resist all the efforts of the scientific community to reduce it to a machine and its weak computational intelligence.
- ↑ The Macy Conferences, held in New York between 1946 and 1953, gathered, among others, Norbert Wiener, John von Neumann, Warren McCulloch, Claude Shannon, Margaret Mead and Gregory Bateson.
- ↑ cyberdisciplines
- ↑ Ashby, W R (1952) 'Design for a brain' in London : Chapman and Hall
- ↑ Hayles N K (1999) 'How we became posthuman' in The University of Chicago Press
- ↑ Self-organizing systems, neural networks and adaptive machines, evolutionary programming, biological computation, and bionics.
- ↑ cognitive simulation
- ↑ Turing, A (1950) 'Computing machinery and intelligence' in Mind 59: 433-460
- ↑ Ibid. p.
- ↑ Natural language processing, problem-solving, chess-playing, the child program idea, and genetic algorithms
- ↑ The Dartmouth Summer Research Project on Artificial Intelligence (1956), proposed by John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon.
- ↑ McCorduck, P (1979) 'Machines Who Think' in W. H. Freeman
- ↑ Dreyfus, H (1972) 'What computers can't do' in Harper & Row
- ↑ HAL 9000 is depicted as a malevolent human-like artificial intelligence capable of feeling emotions, designed with the technical consultancy of Marvin Minsky.
- ↑ Dreyfus identified four unjustified assumptions underlying AI research: the biological, the psychological, the epistemological and the ontological assumption.
- ↑ Searle, J (1980) 'Minds, brains, and programs' in Behavioral and Brain Sciences 3 (3):417-457
- ↑ Jackson, F (1986) 'What Mary didn't know' in Journal of Philosophy 83 (5):291-295
=== Machinic Life and Its Discontent (III) ===
Computational symbolic AI, also called good old-fashioned AI (GOFAI), holds that all the aspects of the mind, including consciousness, can be engineered in digital computers through a top-down approach. However, despite the early successes (still limited when compared to the aforementioned goals), the wrong predictions and their conceptual limitations led to a series of failures resulting in two periods of recession best known as AI winters. After these periods, criticism moved toward symbolic AI, fostering the development of new research inspired by cybernetics and the search for different approaches to understanding intelligence and designing life.
Looking closely at the architecture of the brain, cyberneticists were already exploring the possibility of reproducing its networks of neurons in artificial neural networks (ANNs) [1]. However, this system would become effective only during the 1980s, with the development of parallel distributed processing (PDP) pairing multiple layers of ANNs [2] and the birth of a new approach in AI: connectionism. Instead of the upstream representation of knowledge typical of symbol manipulation, this bottom-up approach makes it possible to design AI systems capable of learning and finding useful patterns by inspecting sets of data and reinforcing the connections between their neurons. Thanks to the internet, ANNs can now be fed with large amounts of data, drastically increasing their capacity to learn. This new perspective, called deep learning, broke through in 2012, producing a renewed hype around connectionism and AI.
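The connectionist idea of 'reinforcing the connections between neurons' can be illustrated at its smallest scale with a single artificial neuron, a perceptron, trained here on the logical OR function (a minimal sketch of my own, far removed from a PDP-scale multi-layer network):

```python
# A single perceptron: learning means nudging connection weights until
# the neuron's outputs agree with the examples (here, logical OR).

def step(x):
    return 1 if x >= 0 else 0

def train(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection weights
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            # reinforce or weaken each connection in proportion to its input
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

or_samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(or_samples)
for (x1, x2), target in or_samples:
    assert step(w[0] * x1 + w[1] * x2 + b) == target
```

No rule for OR is ever written down; the behavior settles into the weights through repeated correction, which is exactly the bottom-up alternative to symbol manipulation described above.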
Another relevant approach that cybernetics produced in AI is the intelligent agent paradigm. Reintroducing the discourse on complex systems, the concept of the 'rational agent' (borrowed from economics) becomes a neutral way to refer to anything capable of interacting with an environment through sensors and actuators. This concept made it possible to develop AI systems capable of achieving a goal by keeping track of their environment, learning, and improving their performance autonomously. In parallel with the developments in AI, cybernetics led to a new paradigm of machinic life focusing on the simulation of biological evolution through software: Artificial Life (ALife) [3]. Starting from very simple operators and basic laws of interaction in a given environment, complex systems automatically arise, creating chaotic interactions, stable loops and astonishing patterns that are impossible to predict.
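A canonical illustration of simple operators generating unpredictable complexity (my choice of example; Johnston discusses ALife more broadly) is Conway's Game of Life, where two local rules about a cell's eight neighbours are enough to produce gliders, oscillators and chaotic global patterns:

```python
# Conway's Game of Life on an unbounded grid of live cells.
# Rule: a live cell survives with 2 or 3 live neighbours; an empty cell
# is born with exactly 3. Nothing else is specified, yet complex
# global patterns emerge.

def neighbours(cell):
    x, y = cell
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def life_step(alive):
    counts = {}
    for cell in alive:
        for n in neighbours(cell):
            counts[n] = counts.get(n, 0) + 1
    return {cell for cell, c in counts.items()
            if c == 3 or (c == 2 and cell in alive)}

# A 'blinker': three cells in a row oscillating with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
assert life_step(blinker) == {(1, 0), (1, 1), (1, 2)}
assert life_step(life_step(blinker)) == blinker
```

The rules say nothing about oscillation, yet the blinker oscillates: the pattern is a property of the system, not of any instruction, which is the sense in which ALife speaks of emergence.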
Despite these new developments, both ALife and AI are encountering their respective boundaries. Life, like intelligence, is generated by interaction with an extremely complex and variegated environment: the noisy physical world made of radiation and electromagnetic phenomena, particles and wavelengths in continuous interaction. A chaotic world that neither contemporary computers' capabilities nor the internet's amount of data can simulate. Furthermore, the companies relying on deep learning are facing the problem of understanding why these learning systems make their choices. Their autonomous way of learning through layers of networked neurons creates nested black boxes that are extremely difficult to unpack, raising a whole debate on the discriminations and biases embedded in software. [4]
To escape these limitations, scientists are now working on a more holistic understanding of intelligence which combines the sub-symbolic approach of machine learning with the knowledge representation of symbolic AI. In robotics, situated AI takes robots outside of the labs to interact with the 'noisy' physical world, hoping to find new ways to generate direct knowledge instead of just simulating it. Finally, these new ways of defining life and intelligence are moving toward a deeper understanding of cognition which, instead of being represented only as a symbolic system, also lies on a sub-symbolic level [5], and which, instead of being a designed product, is seen as part of evolutionary processes [6]. Slowly, machinic life is reaching that adaptive unconscious and embodied knowledge which seems to be the key to simulating the high-level intelligence typical of intuition and creativity and the spontaneous complexity of life. In general, the prospects of engineering general-purpose systems and populating the world with new forms of life are growing fast.
Almost 70 years after the first AI program, we are still surrounded only by weak-and-narrow AIs. On the one hand, part of the research community has reformulated the goal of building human-level systems, as well as 'strong' AI, toward more practical aims. On the other hand, private institutions such as MIT, tech entrepreneurs such as Elon Musk, and many other researchers in AI-related fields, such as Ray Kurzweil, are repeating the same errors as the old fathers. Riding the renewed hype made possible by the boom of deep learning, a new wave of enthusiasts is calling for the 'big picture' of AI. They daydream of a future-oriented, techno-utopianist world reminiscent of the morally dubious neo-liberal Californian Ideology. In this vision, passing through the development of general-purpose human-level AIs, the technological singularity will realize a strong artificial consciousness, and the emancipated, black-boxed AI will 'think for itself', becoming the dominant, superintelligent form of life of the future (eventually helping, killing or snubbing human beings).
The AI 'big picture' inflames the nervous system of popular culture. It creates misunderstandings about the actual state of affairs, inflated expectations for the near future and doubts about positive future perspectives. However, if the plan of machinic life in general is still to unveil the 'mystery of the mind', it cannot continue to pretend its assumptions are absolutely true while staying silent on the technical issues regarding how consciousness works. In particular, it will have to deal with the 'hard problem' of consciousness before any attempt to create a machine really capable of thinking can succeed. It will need to engineer subjective experience, which at this point in time still differentiates humans from machines, allowing us to imagine a present-oriented future where the 'big picture' will be resized to its 'weak' actuality. Or, in the best case, if machinic life succeeds and, as Matteo Pasquinelli hopefully interprets the words of Turing, the result will be a new kind of alliance between the two forms of cognition.
Before proceeding with a detailed account of the characteristics of subjective experience and its similarities to and differences from the computer, in the next chapter I will briefly introduce other approaches that, instead of the autonomous machine of machinic life, explore different relations between humans and their technological system.
- ↑ McCulloch, W S and Pitts, W (1943) 'A Logical Calculus of the Ideas Immanent in Nervous Activity' in Bulletin of Mathematical Biophysics 5:115-133
- ↑ Rumelhart, D E, Hinton, G E and McClelland, J L (1986) 'A General Framework for Parallel Distributed Processing' in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, MIT Press
- ↑ Artificial Life (ALife) was born from the studies of cellular automata begun by Von Neumann and developed by ... until its birth was made possible by Langton and his conference ...
- ↑ In this direction, deep learning researchers are using deep learning methods to analyze deep learning itself and unpack the behavior of its layered neurons.
- ↑ Kahneman, D (2011) 'Thinking, Fast and Slow', Farrar, Straus and Giroux
- ↑ Langton, C (1990) 'Computation at the edge of chaos' in Physica D 42, 12-37
== Beyond humans and machines ==
Before the 19th century, the technological system was envisioned as a tool extending the organs of the human operator, who remained the leader of the intellectual and productive process. This conception started to corrode with the critique of historical materialism pictured by Marx and Engels, and changed drastically after World War II. Inverting the dichotomy and confining human beings to a totally subaltern level, fated to become futile, the intelligent and autonomous artificial organism conceived by cybernetics and AI drew an unsurpassable threshold between human and machine performance. However, beyond these power-play configurations between natural and artificial agents, other possible worlds can be articulated. Worlds where humans and machines not only coexist but melt together, achieving that level of close interaction between organisms known as symbiosis and leading to the paradigms of intelligence amplification (IA) and cyborg theory.
On the one hand, in parallel to AI and inverting its acronym, IA claims the possibility of augmenting human intelligence through technological means. Anticipated in 1945 by the prophetic words of Vannevar Bush and theoretically formalized in the forge of cybernetics by W. Ross Ashby, "it seems to follow that intellectual power, like physical power, can be amplified. Let no one say that it cannot be done [...]". Fostered by the visions of J.C.R. Licklider's man-machine symbiosis and Simon Ramo's intellectronics, both working in close contact with the United States Department of Defense, the 1960s saw the consolidation of this promising paradigm in the development of interactive computing and the user interface. The work of Douglas Engelbart and his political plan to bootstrap human intelligence, and thereby society, will be remembered as the highest peak of IA before it dissolved into the less politicized field of human-computer interaction (HCI) in the late 70s. Nowadays, a new frontier of amplification/interaction linking the brain directly to the computer is becoming possible. The brain-computer interface (BCI) brings us closer to realizing what Turing referred to as the "disturbing phenomena" called extrasensory perception, "[which] seem to deny all our usual scientific ideas". It is the same BCI which Elon Musk is trying to develop in his project Neuralink, among other things, as a universal panacea for communicating with the artificial super-intelligence of the dystopian future.
On the other hand, the disclosure of the cybernetic concept of life dissolves the human-machine dichotomy into an ecosystem of patchworked organisms mixing together artificial and biological parts. This continuum, called the machinic phylum by Deleuze and Guattari, is the home of the cybernetic organism (cyborg), which transforms its body into a playground where internal and external assemblages of parts such as implants, different in their substance but communicating through feedback loops, coexist. The cyborg society represents all the possible shades articulating the space between what is human and what is machine. In this direction, hybrid biorobotics is another framework backing away from the purely artificial goal of AI and standard robotics to explore the possibility of mixed species. The idea is that we can build artificial hardware running biological software, as well as use artificial software to control biological hardware. If the first way tries to deploy patterns emerging in biological neural networks to run on artificial computers, the second finds its example in RoboRoach, where the movements of a cockroach are controlled through an artificial implant sending electrical impulses to its nerves. This last technique reconnects to the BCI discussed above and, when used to directly stimulate the brain, leads to what Thomas Metzinger called neuro-enhancement, the artificial control of mental states (the neuro version of psychopharmacology). Hybrid biorobotics seems to be the closest way to synthesize consciousness, given that it seems easier to control the brain than to build an artificial counterpart of it.
All these different configurations, and the consequent understandings of the relation between the human and the machinic, have a common denominator. The first step seems to be the much acclaimed singularity, intended as a particular moment in time in which there will be a drastic change in how we deal with technologies. It could be the advent of AGI, HLAI or super artificial intelligence (SAI), the construction of an affordable BCI, the rise of a cyborg society or the synthesis of machinic consciousness. But the final point, the farthest moment where these theories conflate, is the bio-digital fusion that will follow the exponential growth of humans and machines.
The stars and Galaxies died and snuffed out, and space grew black after ten trillion years of running down. One by one Man fused with AC [Automatic Computer], each physical body losing its mental identity in a manner that was somehow not a loss but a gain. Man's last mind paused before fusion, looking over a space that included nothing but the dregs of one last dark star and nothing besides but incredibly thin matter, agitated randomly by the tag ends of heat wearing out, asymptotically, to the absolute zero
[1]
- ↑ Asimov, I (1956) 'The Last Question' in Science Fiction Quarterly
== Part 2 ==
I am not advocating that we go back to an animistic way of thinking, but nevertheless, I would propose that we attempt to consider that in the machine, and at the machinic interface, there exists something that would not quite be of the order of the soul, human or animal, anima, but of the order of a proto-subjectivity. This means that there is a function of consistency in the machine, both a relationship to itself and a relationship to alterity. It is along these two axes that I shall endeavour to proceed.
— Felix Guattari, On Machines, 1995 [1]
- ↑ Guattari, F (1995) 'On Machines', 'Complexity' in Andrew Benjamin ed.
== Here, me, now ==
Subjective experience is phenomenal consciousness, and since the standard scientific method relies on an objective account of the mind based on empirical evidence, it cannot directly explain it. Philosophy, instead, has developed different methods to look at the phenomena (the things that appear to us) in themselves. [1] At the end of the 19th century, Edmund Husserl's phenomenology inquired into the nature of mental content, acknowledging the possibility of inferring objective knowledge about it and the external world. During the first half of the 20th century, analytic philosophers theorized the sense datum (intext ref), later qualia (intext ref): minimal mind-dependent unities which, combined together, constitute the whole of phenomenal consciousness. These approaches, like the description of the mind portrayed by the aforementioned cognitive revolution, involve a mental representation [2] of the external world (representational realism) instead of direct contact with it (naive realism). Our perception is deconstructed, processed in different areas of the brain, and recomposed into the world as we experience it. [3]
The contents of our phenomenal consciousness accessible through introspection can be resumed in the experience of having a first-person perspective (me) on the world (here) in a specific moment in time (now). Generally, our point of view takes place from within our body, which is itself represented as part of the world, giving us a sense of ownership and selfhood, location, presence, and agency. On the one hand, the me or self, as experienced by humans and a few mammals, is built on a higher level of consciousness allowing us to access memories and be projected into the future, using language and logico-mathematical thinking. Turning the first-person perspective inward, this extended or secondary consciousness (Damasio 2000; Edelman and Tononi 2000) makes us particularly self-aware beings, able to explore our own mental states and to account for and experience 'experience' itself. The lower level, called core or primary consciousness (Damasio 2000; Edelman and Tononi 2000), is common to humans, a large number of mammals, and marine organisms such as octopuses, and consists of a more basic form of self-awareness. On the other hand, the representation of space and time persists in most species at a basic level called the nonconscious protoself (Damasio 2000). I will return to this argument later, but it is important to highlight that the absence of a consistent subject capable of inwardness makes us doubt to what extent certain animals are able to experience emotions and feelings as originating from within themselves. However, the impossibility of knowing what it is like to be another living being leaves this argument open to debate. [4]
Given the hypothesis that the brain is the sufficient cause for consciousness to exist [5], whatever constitutes consciousness must have some correlation with the physical brain. This is what scientists call the neural correlate of consciousness (NCC) (intext ref), an extremely complex but "coherent island emerging from a less coherent flow of neural activity" that then becomes a more abstract "information cloud hovering above a neurological substrate". [6] Clinical cases and limit experiences that are directly accountable (neuropsychiatric syndromes, dreams, meditation, altered states of mind and so on) help to map which parts of the brain are activated when the experience of the here, me, now happens in different circumstances. In fact, far from being an all-or-nothing process, consciousness is graded and non-unitary, taking place in different phenomenal worlds. If we manage to link a particular subjective experience with a pattern of chattering neurons, we could get closer to solving the hard problem of consciousness. In particular, the first step in explaining subjective experience would be to solve the one-world problem: how different phenomenal facts are merged together (world binding) into a coherent whole. Defining particular NCCs should then lead to finding the global NCC and the minimal NCC necessary for phenomenal consciousness to take place.
In his book The Ego Tunnel, Thomas Metzinger defines consciousness as the appearance of a world, and the brain is understood as a world engine capable of creating a wide variety of explorable phenomenal worlds. In particular, he focuses on the phenomenal worlds of dreams and out-of-body experiences (OBE) in order to develop a functionalist, reductionist theory of consciousness. These limit experiences, in which a complete experience of disembodiment can be achieved, have led him to a particular definition of the self which, instead of being a stable instance, is a process running in our brain when we are conscious and turning off when we go to sleep. Just as the experience of the here and now is possible because they exist as internal mental representations, Metzinger's self is defined as a phenomenal self-model (PSM) created for better control over the whole organism. Although the internal modelization of the here, me, now allows a deeper understanding of phenomenal consciousness as simulated virtual reality, Metzinger claims that "no such things as selves exist in the world: nobody ever had or was a self". This provocative claim might be misleading in understanding the nature of the ego, which, notwithstanding the perspective of a PSM, seems more ontologically rooted when we consider the tangibility of experience itself.
Metzinger, like other researchers, tries to explain why it really looks like we are living in a simulation created by our own brain. Conscious experience seems to take place at a distance from the physical world, yet it also seems to dwell in a place other than the physical brain, the focus of most of the scientific community. Drawing a liminal space existing between our brain and the physical world, and claiming for the phenomenal world a reality closer to dreams, Antti Revonsuo reasonably calls the experience of being here, me, now an out-of-brain experience.
- ↑ Buddhism
- ↑ These mental representations can be understood as functional models produced by the evolutionary process and naturally selected for their survival and adaptive value.
- ↑ It is implied that the physical reality described by nuclear and quantum physics exists and our phenomenal experience is projected on top of it.
- ↑ Nagel, T (1974) 'What Is It Like to Be a Bat?' in The Philosophical Review 83 (4):435-450
- ↑ reductionism
- ↑ ncc metz quote
== Engines and experiences ==
If the computer metaphor has its limitations in practice, kept as a metaphor it helps us think about many aspects of our being. In particular, the difference between hardware and software reflects our struggle to interpret the relation between our body and our mind, our brain and our consciousness. The first part of this text highlights how computers can produce symbolic and sub-symbolic operations, evolutionary dynamics and embodied knowledge, resulting in external behaviors identical to those of living beings. However, the available thinking machines cannot be said to be conscious. Most evidently, computers lack that active individual instance called the self which makes a world appear. But what about the here and the now of computers?
In her book Hamlet on the Holodeck, literary critic Janet H. Murray develops a theory of new media based on their literary nature. She reports a quote from Italo Calvino describing the experience of a writer in front of his typewriter.
"Every time I sit down here I read, “It was a dark and stormy night...” and the impersonality of that incipit seems to open the passage from one world to the other, from the time and space of here and now to the time and space of the written word; I feel the thrill of a beginning that can be followed by multiple developments, inexhaustibly."
Murray explains how the overwhelming capacity of the analog text to project the reader into its world is reconfigured and augmented in new media. Not only can the text be translated into a digital file, displayed and multiplied, but the whole nature of computer software, where the digital text takes shape, is itself textual. Both the stack of layers of programming languages and the binary code dwelling at its foundation are texts expressing meaning. This computational backstage has been used in concrete poetry (Goldsmith) and software art (Cramer) to unveil the textual nature and the conceptual realm of the processes underlying the graphical user interface (GUI), and to compare them to human nature. Referred to as the Rorschach metaphor (Nelson, Turkle), the projective character of digital media is increased by the unique spatial aspects of the software's environment. Often called cyberspace, it represents a geographical space we can move through, in an interactive process of navigation and exploration. Furthermore, the user/interactor, an active part of this process, triggers certain events to happen with a temporal immediacy: "You are not just reading about an event that occurred in the past; the event is happening now, and, unlike the action on the stage of a theater, it is happening to you."
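The textual layering described above can be made concrete with a small, purely illustrative sketch: even a single line of digital text is, beneath the interface, another text of bytes, and beneath that, a text of binary digits.

```python
# Illustrative sketch: the same fragment of text read at three layers,
# as characters, as bytes, and as binary digits.
text = "It was a dark and stormy night"
as_bytes = text.encode("utf-8")                       # the byte layer
as_bits = " ".join(f"{b:08b}" for b in as_bytes[:5])  # the bit layer (first 5 bytes)
print(as_bytes[:5])  # b'It wa'
print(as_bits)
```

Each layer is legible on its own terms, which is precisely the sense in which the "backstage" of software remains a text expressing meaning.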
Integrating space and time, software enables a world to be experienced. Like the brain described by Metzinger, the computer's hardware works as a world engine. However, because of the absence of an internal experiencing consciousness making the world appear, its here and now is actualized only through the subjective experience of an external me. Similarly to this view of software as potential worlds, Seymour Papert, a prominent computer scientist in educational software, developed the concept of the microworld: "a little world, a little slice of reality. It's strictly limited, completely defined [...]. But it is rich." [1] The microworld works as an educational tool helping children learn how to operate and design multiple contained digital environments. In the long term, this knowledge of different small worlds can be used to create something larger: a macroworld.
What we call software is a stack of abstractions relying on each other but, in the end, it is nothing more than electrical impulses happening on the physical level of the hardware. Because of this, Friedrich Kittler suggests that in fact there is no software. The same happens to our consciousness, which scientists continuously try to reduce to the brain itself and, to paraphrase Metzinger, there is no self. However, the influence of software on our society is widespread. The worlds created by software shape the physical world, and software is, in fact, increasingly considered a cultural object worthy of being studied in depth. Something similar is happening to the self, which is actually experienced as more than an abstract model switching on and off. It contains the means enabling us to transform a meaningless physical world into a meaningful phenomenal universe worth exploring, and it gives us the means to create our complex society. When the self interacts with the self-less computer, the projective mechanisms of the textual software activate, transporting the individual to experience a new phenomenal world. From this view, if the hardware represents the physical level, the software represents the possibility of a phenomenal world, which is actualized only when experienced by a self. This phenomenal dimension that the software acquires can be described as an out-of-hardware experience, precisely because it is experienced by a conscious subject located outside of the hardware.
However, if the software is experienced out-of-hardware, and consciousness is experienced out-of-brain, where exactly is subjective experience located? The identification of subjectivity with the hardware is typical of those researchers whose scientific approach negates subjective experience and reduces it to a mechanism of the brain. They easily tend to alienate their own selves, idealizing computers as living organisms and predicting their ability to generate consciousness autonomously. Instead, when the problem is posed in these terms, the individual can claim back its power over the machine in shaping the center of phenomenal consciousness. In fact, the 'one-world problem' of subjective experience mentioned before implies that one world is first needed for consciousness to take place. A first mental simulation is necessary, and then, from this one world, other simulations similar to the microworlds described by Papert can be performed, predicting the results of an action or recalling a past event. [2] However, the hardware and the brain are two different kinds of world engine. They are two different systems and, even when producing the same results, they differ precisely in substance, structure, and process. When we experience software, a phenomenal world other than the simulation of our main world opens in front of us. From the inside of the first out-of-brain world, a second out-of-hardware world can appear. In computer science, when a system runs a simulation of another system, this is called emulation. Given this notion, the brain and the hardware can be understood, from the perspective of subjective experience, respectively as a world simulator and a world emulator.
- ↑ "[a] representation of some well-defined domain, such as Newtonian physics, such that there is a simple mapping between the rules and structures of the microworld and those of the domain".
- ↑ This nested hierarchical structure is common in conscious mental simulations as well as in software, where at the top-level runs the operating system.
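The computer-science sense of emulation invoked above can be illustrated with a toy example: one system (here, the Python runtime) internally reproducing, step by step, the behavior of another, simpler machine. The two-instruction machine below is invented purely for illustration.

```python
# Illustrative sketch of emulation: the host system steps through the
# instructions of a guest machine, reproducing its behavior internally.

def emulate(program):
    """Run a toy stack machine with ('PUSH', n) and ('ADD',) instructions."""
    stack = []
    for instr in program:
        if instr[0] == "PUSH":
            stack.append(instr[1])   # the guest machine's 'world' is this stack
        elif instr[0] == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]

print(emulate([("PUSH", 2), ("PUSH", 3), ("ADD",)]))  # prints 5
```

The guest machine never physically exists: its states appear only inside the host's execution, which is the nested, world-within-a-world structure the chapter draws on.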
== Extending cognition ==
To better understand the relationship connecting computational media and human beings, it is necessary to understand how the phenomenal here, me, now, the compound of consciousness, differs from the here and now of the selfless world of software. However, even though the debate has progressed in many directions, the foundational elements for understanding this relationship have been there all along.
A first connection is already contained in Erewhon, the main novel of the aforementioned forerunner of cybernetics, Samuel Butler, and in its influence on the work of Deleuze and Guattari. Meant to be read backward as a deliberate misspelling of 'nowhere', Erewhon contains 'The Book of the Machines' [1], where consciousness for the first time binds humans and machines. Earlier, Deleuze's critique of representation, articulated through his concepts of difference and repetition [2], had developed a particular understanding of Ideas, which are also called Erewhon; in particular, he reframes this term not just as a no-where but as a now-here. Later, in their collaborative work Anti-Oedipus [3], Deleuze and Guattari relate the same term to their concept of the desiring-machine, and Butler's understanding of machines to the body without organs. Finally, Guattari describes the machine as a proto-singularity, differing from biological organisms but closely related to their nature. [4]
The term proto-singularity suggests a direct link to the aforesaid protoself, defined ten years later by Damasio as the ensemble of brain devices that "continuously and nonconsciously maintain the body state within the narrow range and relative stability required for survival", representing "the deep roots of the elusive sense of self" of conscious experience. [5] Although still referring to two different domains, the technical and the biological, the theoretical correspondence of these terms can be traced back to the offspring of cybernetics of the late 60s [6], and in particular to the research of the biologists Humberto Maturana and Francisco Varela. They first developed the idea that cognition emerges in living systems from their ability to self-organize as self-contained systems (autopoiesis) [7], and then enlarged this position to comprehend the sensorimotor capacity to match and interact with the environment (enaction). [8] Proposing an alternative to computationalism and connectionism, the enactivist paradigm extends cognition beyond the brain and consciousness into the nonconscious inner processes happening in the organism as a body (embodied cognition) [9] and in its interaction with an external environment (situated cognition) [10]. This radical view of cognition can be extended even further outside of the body to create frameworks comprehending not only animals and plants but also the technical system and, eventually, natural processes (distributed cognition) [11], getting closer to the panpsychist view where mind becomes a fundamental element of the whole of reality.
In her recent work [12], N. Katherine Hayles reframes Damasio's protoself as nonconscious cognition, emphasizing the extension of cognition outside consciousness into embodied and situated processes, and the relevance of the nonconscious as a new cognitive sphere comprehending both biological and technical systems. Furthermore, according to Hayles, because cognition presupposes interpretation and the production of meaning [13], the nonconscious provides a framework, which she calls cognitive assemblages, to extend social theory beyond anthropocentrism and consciousness into a cognitive ecology of human and nonhuman cognizers. [14] Differing from the unconscious in its inaccessibility to conscious states, the nonconscious positions itself in between material processes and consciousness, providing the first layer of meaningful representations needed by consciousness to take place. Furthermore, according to new empirical evidence, the nonconscious works faster and can process a larger amount of information than consciousness, preventing the latter from being overwhelmed. However, with its ability to choose, at its simplest level between a zero and a one, and to perform faster than consciousness, the technical nonconscious can condition our decisions and behaviors, making new techniques of surveillance and control possible. [15] From these perspectives, the study of computational media becomes a necessity in order to complete a coherent map of social interactions and to openly accept their active role in the production of culture.
The framework of nonconscious cognition developed by Hayles provides a working model for understanding the actual relationship between consciousness and software. In fact, given a cognition extended beyond consciousness, on the 'biological' hand we find conscious processes relying on internal, dynamical representations provided by the biological nonconscious. These representations, which are maps of the environment and the body continuously updated within a window of time, provide consciousness with the building blocks of an embodied sense of self and a point of view through which it experiences a coherent phenomenal world. On the 'artificial' hand, there is no consciousness and no self to reinterpret the representations provided by the technical nonconscious. Furthermore, far from being embodied and situated like biological organisms, the technical nonconscious is an embedded system [16], compiling and interpreting internally stored lines of text and manifesting its represented content through an interface. This technical cognitive process happens in real time, like the biological one, and represents the abstract spatial dimension described by its code, providing the here and now necessary for a world to appear. The software stands for the possibility for the representational processes of the technical nonconscious to be extrapolated, internalized and re-represented by a consciousness that integrates its 'self' into them to experience a new phenomenal world as an out-of-hardware experience.
- ↑ a compendium of his earlier texts on the Darwinian discourse on evolution
- ↑ deleuze difference and repetition
- ↑ deleuze&guattari antioedipus
- ↑ guattari on machines
- ↑ damasio cit protoself
- ↑ second order cybernetics
- ↑ autopoiesis and cognition
- ↑ enaction
- ↑ embodied cog
- ↑ situated cog
- ↑ distributed cog
- ↑ list work of hayles on nonconscious cog
- ↑ biosemiosis
- ↑ cognitive assemblage & cognizers
- ↑ missing half second
- ↑ embedded system
== Conclusion - a walk through the language maze ==
But why should I repeat the whole story? At last we came to the kingly art, and enquired whether that gave and caused happiness, and then we got into a labyrinth, and when we thought we were at the end, came out again at the beginning, having still to seek as much as ever.
[1]
The attempts of the proponents of machinic life to build autonomous machines, and the articulations of human-machine symbiosis, are essential steps in exploring the processes of cognition. However, these frameworks rely on premature assumptions perpetuated as pretended actualities, while failing to consider the consequences of their claims and products for the broad public. The focus of these disciplines should change, because the symbiosis is clearly already happening and it necessarily changes our lives and our societies through a "control without control". Indeed, the mimesis of consciousness in technical systems, and its underlying faith in a true artificial consciousness, must rely on the understanding of biological consciousness and, eventually, must be re-framed in accordance with it. Instead of rushing to increase the capabilities of technical systems, developing a science of consciousness is the necessary step we must take first, one that will allow us to disclose the nature of subjective experience and to re-organize our understanding of the physical world, the technical system, and the mind.
In the same trajectory, understanding consciousness provides new means to look beyond consciousness itself. It permits us to find the natural position of an elusive object of inquiry which, because it is observable only inside ourselves as subjects, has been used for ages to perpetuate an unnatural anthropocentrism that now feels threatened by our own technologies. The extension of cognition outside consciousness allows us to think of a natural social ecology where different forms of cognition, conscious and not, shape each other through mutual influence. Instead of being a threat, it opens new physical and intellectual relationships with new forms of cognition in and beyond the biological realm.
The interaction between human beings and the technical system through software, as discussed in this thesis, confirms the validity of such developments and insists on the necessity of continuing in this direction. It envisions new ways to articulate the study of software: on the one hand, by standing firmly on the materiality of the physical processes constituting it and reducing it completely to the hardware; on the other hand, by highlighting how the interaction with a conscious subject permits us to rethink software in terms of an experiential world abstracted from the underlying material processes. Drastically different from other phenomena, which fail to provide the complexity of an experiential world, software can arguably be said to augment consciousness instead of just augmenting cognition and intelligence. The technical questions on the validity and consequences of this thesis depend on the next developments in our understanding of consciousness, of the physical world, and of their still-uncertain relationship. Eventually, these developments will allow us to really consider the fact of living inside artificially simulated worlds, which right now are still games impossible to mistake for real worlds. Eventually, it will be understood whether consciousness can really be instantiated in artificial machines able to feel feelings and perceive themselves as embodied in a physical world. But right now, we must be able to understand why this is not really happening and why we are inherently different from computers.
Perhaps the link between the human and the machinic consists of the maze created by their intertwining layers of languages. [2] A "language maze" made of verbal and non-verbal languages, natural languages and formal languages, computer code and machine languages. A Daedalian labyrinth of material, informational, algorithmic and literary explorable spaces, developing in the horizontal and the vertical direction, from microscopic to macroscopic territories, internal and external spheres. A Penelope's web made of rooms that, like Escherian paintings, hide recursive simulations and emulations of other rooms, other mazes and itself. Perhaps what distinguishes the human from the machinic is the feeling, illusory or real, of being whole with the "language maze" as an infinite space (logos) in which to build new worlds from scratch.
The Labyrinth is presented, then, as a human creation, a creation of the artist and of the inventor, of the man of knowledge, of the Apollonian individual, yet in the service of Dionysus the animal-god.
[3]