Revision as of 18:52, 9 February 2020
Thesis
Part 1
Then, just as the frightened technicians felt they could hold their breath no longer, there was a sudden springing to life of the teletype attached to that portion of Multivac.
Five words were printed: INSUFFICIENT DATA FOR MEANINGFUL ANSWER.
—
Isaac Asimov, The Last Question, 1956
The Hard Problem of Consciousness
Science has traditionally addressed the challenge of explaining the mind by disassembling it into its functional, dynamical and structural properties [1]. Consciousness has been described as cognition, thought, knowledge, intelligence, self-awareness, agency and so on, on the assumption that explaining the physical brain will resolve the mystery of the mind [2]. From this perspective, our brain works as a complex mechanism that eventually triggers some sort of behavior. Consciousness is the result of a series of physical processes happening in the cerebral matter and determining our experience of controlling our body, thinking and feeling. This view has been able to explain many unknowns of what happens in our mind, leading some to think that in a not-too-distant future we will be able to fully explain its inner secrets.
In 1995 the philosopher of mind David Chalmers published his article Facing Up to the Problem of Consciousness [3], where he points out that the objective scientific explanation of the brain can solve only an easy problem. If we want to fully explain the mystery of the mind, we instead have to face up to the hard problem of consciousness: How do physical processes in the brain give rise to the subjective experience of the mind and of the world? Why is there a subjective, first-person experience of having a particular kind of brain? [4]
Explaining the brain as an objective mechanism is a relatively easy problem that could eventually be solved as a matter of time. But a complete understanding of consciousness and its subjective experience is a hard problem that scientific objectivity cannot directly access. Scientists instead have to develop new methodologies, accepting that a hard problem exists and must be taken into consideration: how is it possible that such a thing as the subjective experience of being me, here, now, takes place in the brain?
This new perspective, which distinguishes subjective experience as phenomenal consciousness [5], echoes the mind-body problem initiated by Descartes [6] and underlies any attempt to investigate the nature of our mind. It challenges the physicalist ontology of the scientific method, showing the unbridgeable explanatory gap [7] between this dogmatic view of the world and the possibility of formulating a full understanding of consciousness. This produces the necessity of a paradigm shift in science, allowing the study of consciousness through new, alternative scientific methods [8] that embrace the challenge of investigating phenomenal consciousness.
The reactions to Chalmers's paper range from total denial of the issue (Ryle 1949, Dennett 1978, 1988, Wilkes 1984, Rey 1997) to panpsychist positions (Nagel 1979, Bohme 1980, Tononi and Koch 2015), with some isolated cases of mysterianism (McGinn 1989, 2012) advocating the impossibility of solving such a mystery. In any case, the last 30 years have seen exponential growth in multidisciplinary research facing the hard problem, with a constant struggle to build the blocks of a science of consciousness finally accepted as a valid field of study. This is a central subject of this thesis, and we will regularly return to this contested field later in the text.
- ↑ Josh Weisberg, 'The Hard Problem of Consciousness', Internet Encyclopedia of Philosophy
- ↑ This position is called physicalism and is closely related to materialism
- ↑ David Chalmers, 'Facing Up to the Problem of Consciousness', 1995
- ↑ Thomas Nagel, 'What Is It Like to Be a Bat?', 1974
- ↑ Ned Block, 'Concepts of Consciousness', 2002
- ↑ Descartes
- ↑ Joseph Levine, 'Materialism and Qualia: The Explanatory Gap', 1983
- ↑ Neurophenomenology
The Machinic Life and Its Discontent (I)
To fully understand the relevance and consequences of exploring consciousness, we must shift our attention to the evolution of the technological system, and in particular to the attempts to simulate the mechanisms of the mind by building autonomous, intelligent machines. In The Allure of the Machinic Life [1], John Johnston attempts to organize the contemporary discourse on machines under a single framework that he calls machinic life: "By machinic life I mean the forms of nascent life that have been made to emerge in and through technical interactions in human-constructed environments. Thus the webs of connection that sustain machinic life are material (or virtual) but not directly of the natural world. (...) Machinic life, unlike earlier mechanical forms, has a capacity to alter itself and to respond dynamically to changing situations." [2]
Encompassing the whole attempt to produce life out of artificial hardware and software, the definition of machinic life allows us to reconsider the different experiences of the last century under the common goal of building autonomous, adaptive machines, and to understand their theoretical backgrounds as a continuum.
The mythological intuition of technology, subsumed in the concept of technè [3], already shows the main paths of the contemporary discourse. In the myth of Talos and in Daedalus' labyrinth we can find the first life-like automaton and the first architectural design reflecting the complexity of existence and outsourcing thought from human dominion. However, only in the 19th century, with new technological discoveries and a positivistic approach to knowledge [4], did scientists start to build the bearing structures of what would become the two main fields of research on autonomous machines of the 20th century.
On the one hand, the development of the steam engine (Watt 1776) and the study of thermodynamics (Sadi Carnot 1824) were joined with the studies of evolutionary biology (Lamarck 1809, Darwin 1859). In 1858, Alfred Wallace drew a specific relation between the 'vapor engine' and the evolutionary process [5], and in the 1870s Samuel Butler speculated on the evolution of machines [6]. The autoregulation of animals (evolution) and of machines (feedback mechanisms) found their equivalence in the concept of adaptation. These theories reflect the effort to reintroduce the idea of teleology (purpose), which the then ongoing debate on the origin of humankind, contended between the purposeful design of Creationism (Paley 1802) and the blind chance of Darwinism, had denied. These developments made it possible to theorize a framework in which machines can auto-regulate and reproduce themselves, evolving exactly as biological organisms do. Wallace's and Butler's speculative theories would find their scientific correlative in the process of homeostasis, the biological autoregulatory system described by Walter Bradford Cannon in 1926 [7], which made possible its closer study and its simulation in mechanical machines by Cybernetics.
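The homeostatic autoregulation Cannon described, and that Cybernetics would later simulate in machines, is at bottom a negative-feedback loop. The toy sketch below (a hypothetical thermostat-like regulator; all names and constants are invented for illustration, not drawn from the sources discussed) shows the mechanism: the system senses its deviation from a set point and acts against it.

```python
# Minimal sketch of a homeostatic (negative-feedback) loop: at each step the
# controller senses the error between the current state and a set point and
# applies a corrective action that reduces it. Illustrative only.

def regulate(state: float, set_point: float, gain: float = 0.5) -> float:
    """One feedback step: sense the deviation, act against it."""
    error = set_point - state       # sense
    return state + gain * error     # correct (negative feedback)

state = 30.0                        # start far from equilibrium
for _ in range(20):
    state = regulate(state, set_point=37.0)

print(round(state, 3))              # the state has converged to the set point
```

Whatever the disturbance, repeated application of the same rule pulls the state back toward equilibrium, which is exactly the 'adaptation' that let Cybernetics treat organisms and machines in one vocabulary.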
On the other hand, the study of mathematics and logic, along with the revolution of Jacquard's loom (1804), led to the construction of advanced discrete-state machines [8] and the first practical translation of elementary logical functions into binary algebra. Charles Babbage and Ada Lovelace's effort to develop and program the 'analytical engine' (1837), together with Boolean logic (1854), gave notice of a new computational era in which mental labor was no longer the exclusive prerogative of humans but could be performed by an economy of machines [9]. The idea of formalizing thought in a set of rules (an algorithm) can be traced back to Plato [10] and was theorized in the 17th century by Leibniz [11] as a universal symbolic system capable of solving every possible problem. Alan Turing and Alonzo Church gave this speculation its mathematical treatment in 1936, leading to the formalization of the theory of computation [12]. Together with John von Neumann's formalization of the computer's architecture in 1945 and Claude Shannon's information theory in 1948, the digital computer was born, making it possible for the framework of Artificial Intelligence (AI) to arise.
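Boole's reduction of elementary logic to binary algebra is compact enough to show directly. The sketch below is illustrative: AND becomes multiplication over {0, 1}, NOT becomes complement, and OR can be derived from the other two, which is why a discrete-state machine switching 0s and 1s suffices for formal logic.

```python
# Elementary logical functions as binary algebra (0/1 arithmetic), in the
# spirit of Boole's 1854 system. Purely illustrative.

def AND(x: int, y: int) -> int:
    return x * y            # product is 1 only when both inputs are 1

def NOT(x: int) -> int:
    return 1 - x            # complement

def OR(x: int, y: int) -> int:
    # De Morgan: x OR y = NOT(NOT x AND NOT y)
    return NOT(AND(NOT(x), NOT(y)))

# The full truth table of OR, built from the two primitives:
print([OR(x, y) for x in (0, 1) for y in (0, 1)])  # [0, 1, 1, 1]
```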
If the classical world had the intuition of the sentient machine, and the modern world the realization of its possibility, it is only with the practical experience of Cybernetics and AI that the contemporary discourse of machinic life can be formulated. The dual nature of this discourse embodies the convergence of different theories of biological, mechanical and computational systems within a multidisciplinary approach to knowledge and life, driven by complexity and information. However, it already shows some of the weaknesses and biases that will become the limits of 'machinic life' in understanding and building working models of consciousness: for example, the idea that life can be reproduced outside of biological systems, and the assumption, discussed in the next chapter, that the human mind works as a symbolic computer.
The Machinic Life and Its Discontent (II)
Developed during World War II and gaining its identity at the Macy conferences (NYC 1946), Cybernetics was the first framework capable of generating a working theory of machines. Its influence spread across disciplines such as sociology, ecology and economics, as well as into popular culture (cyber-culture); the prefix cyber-, in fact, would become emblematic of a new understanding of the human condition as profoundly connected with machines. Researchers earlier involved in technological development during the war met to discuss and experiment with a new idea of life. Supported by statistical information theory, behaviorism and Norbert Wiener's control theory, biological organisms were understood as self-regulating machines thanks to their homeostatic processes. This machinic behavior could be formalized in models and simulated in artificial organisms, conceptually dissolving the boundaries between the natural and the artificial, between human beings and machines. Life becomes a complex adaptive system made of an organism adapting to its environment through feedback loops, exactly as advocated by Butler's speculative theory of purpose. Human beings and machines become two different kinds of living organisms sharing a new, critical idea of life: no longer a matter of substance but of structural complexity. The implications of this astonishing view break the boundaries of human identity, leading theorists to talk about post-humanism, to explore new realms of possible control over the human body, and to speculate on the nature of a simulation that cannot be distinguished from the original.
Despite the variety of subfields developed by Cybernetics [note][self-organizing systems, neural networks and adaptive machines, evolutionary programming, biological computation, and bionics], the parallel advent of the digital computer liquidated most of its paths for decades, shifting the focus of researchers and national funding into the framework of Artificial Intelligence (AI). The new focus on intelligence, of which consciousness is a feature, was made possible by the strict relation between the computer and the mind. In fact, another revolution was taking place in the field of psychology: the incapacity of Behaviorism to include mental processes in the understanding of humans and animals was opening the doors to the 'cognitive revolution' of the 1950s. Influenced by the idea of the mind as the cradle of cognitive processes, some researchers saw the possibility of testing psychological theories on digital computers, envisioning a correspondence between the mind and digital machines in the way information is processed.
Before AI was officially born, Alan Turing published his 1950 article 'Computing Machinery and Intelligence', where he designed the 'imitation game', best known as the 'Turing test'. The computational power of the discrete-state machine was identified with the act of thinking and therefore with intelligence: "The reader must accept it as a fact that digital computers can be constructed, and indeed have been constructed, according to the principles we have described, and that they can in fact, mimic the actions of a human computer very closely." Because phrasing the problem as 'Can machines think?' yields ambiguous results, he reversed the question into a behavioral test that allows computer scientists to explore the possibility of creating intelligent machines: can we say a machine is thinking when it imitates a human so well that its interlocutor believes they are talking to another human? If you cannot recognize that your interlocutor is a machine, it does not matter whether it is actually thinking, because in any case the result would be the same: human-level communication. Thinking and mimicking thinking become equivalent, allowing machines to be intelligent. In his text, Turing dismisses the argument from phenomenal consciousness and the actual presence of subjective experience, maintaining that such a problem does not necessarily need to be solved before his question can be answered. Indeed, the Turing test suggests more than a simple game: it signals the beginning of a new inquiry into the theoretical and practical possibility of building real intelligent machines [note], while indicating some possible directions to take (natural language processing, problem-solving, chess-playing, the child-program idea, and genetic algorithms) to build a machine capable of passing the test.
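The logical skeleton of the imitation game can be sketched in a few lines. Everything below is a stand-in invented for illustration (the players, the judge's rule, and the arithmetic question, which is borrowed from Turing's own example); the point is only that the judge sees behavior, never the 'thinking' behind it.

```python
# The imitation game reduced to its skeleton: an interrogator exchanges text
# with a hidden interlocutor and must judge, from the transcript alone,
# whether it is human. Both players are toy stand-ins.

import random

def human(question: str) -> str:
    return "I'd rather not do arithmetic quickly."

def machine(question: str) -> str:
    # A machine imitating human hedging instead of computing the answer.
    return "I'd rather not do arithmetic quickly."

def imitation_game(interrogate) -> str:
    player = random.choice([human, machine])      # identity is hidden
    answer = player("What is 34957 + 70764?")
    return interrogate(answer)                    # the judge sees only text

verdict = imitation_game(lambda a: "human" if "rather not" in a else "machine")
print(verdict)  # 'human' either way: the behaviors are indistinguishable
```

Because the two players behave identically, the judge's verdict carries no information about what is 'really' thinking, which is precisely the equivalence the test establishes.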
Riding the new wave of the cognitive revolution, a group composed mainly of mathematicians began to meet at Dartmouth College in 1956. The first small working programs they developed explored the possibility of synthesizing human intelligence through its formalization and manipulation in a symbolic system. This approach, called Symbolic AI, would become the workhorse of the imitation of human intelligence. Allen Newell, Herbert Simon, Marvin Minsky, and John McCarthy will be remembered as the fathers of the new field of AI [note]. At the same time, they were enthusiastic researchers drawn into a spiral of optimistic predictions about how quickly they would come close to building a human-level AI. These aims (mostly failed or not yet achieved) soon spread into other fields. In particular, philosophers of science, infected by the same enthusiasm, attempted to interpret the human mind on the basis of the information processing of digital computers. The movement, called computationalism, led to several theories: the Computational Theory of Mind (CTM) (1967) (Putnam, Fodor, Dennett, Pinker, Marr) understands the mind effectively as a computer; Jerry Fodor's Language of Thought Hypothesis (LOTH) (1975) claims thinking is only possible in a 'language-like' structure that builds thoughts at the top level; and the hypothesis of A. Newell and H. Simon (1976) sees in the physical symbol system everything needed to build a true intelligence. In popular culture as well, the same enthusiasm led to a new ideology of the machine, with its climax in the fictional character of HAL 9000 in Stanley Kubrick's 2001: A Space Odyssey. HAL 9000, designed with the technical consultancy of Marvin Minsky, is depicted as a malevolent human-like artificial intelligence capable of feeling emotions.
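Newell and Simon's idea is easiest to grasp through a miniature production system, where 'reasoning' is nothing but pattern-matched rewriting of symbol structures. The rules and facts below are invented for illustration and are not taken from the historical programs.

```python
# A miniature 'physical symbol system': knowledge is stored as symbolic
# rules (condition -> conclusion), and inference is forward chaining, i.e.
# repeatedly rewriting the set of symbol structures until nothing new
# appears. Illustrative toy, not historical code.

rules = [
    (("Socrates", "is", "human"),  ("Socrates", "is", "mortal")),
    (("Socrates", "is", "mortal"), ("Socrates", "can", "die")),
]

facts = {("Socrates", "is", "human")}

changed = True
while changed:                       # fixed point: stop when no rule fires
    changed = False
    for condition, conclusion in rules:
        if condition in facts and conclusion not in facts:
            facts.add(conclusion)    # derive a new symbol structure
            changed = True

print(("Socrates", "can", "die") in facts)  # True
```

On the symbolic view, scaling this scheme up (more symbols, more rules, better search) is all that separates the toy from a mind, which is exactly the assumption the critics of the next paragraph attack.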
Despite the great enthusiasm and expectations, the idea that computers can do everything a human can do has been heavily criticized. Philosophers such as Hubert Dreyfus (1965, 1972, 1986) and Noam Chomsky (1968) highlighted the problems of computationalism in building working theories of the mind. Starting a critical analysis of AI, they revealed the simplistic assumptions perpetuated by the unjustified hype and incapacity for self-criticism of major AI researchers. They showed the technical limitations of physical symbol systems, which are unable to grasp the value of context, essential to gaining knowledge and achieving common sense, as well as the impossibility of formalizing aspects of intelligence such as creativity and intuition. In the same direction, the philosopher John Searle, criticizing the comparison of the human mind with computers, developed a thought experiment called the 'Chinese room' (1980) [note], arguing for an underlying distinction between a 'strong AI' capable of really understanding and a 'weak AI' which merely simulates understanding. Searle's argument raises the same problems as the 'hard problem' of consciousness, defining a threshold between actual AI and the human mind. Other thought experiments, such as Jackson's 'Mary's room' (1986) [note], touch directly on the subjectivity of experience, which seems to resist all the efforts of the scientific community to reduce it to a machine and its weak computational intelligence.
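Searle's intuition can be dramatized in code: a program that answers by rote lookup can pass a narrow behavioral test while manipulating symbols it does not understand. The rule book below is invented for illustration; it plays the role of Searle's instruction book, with the program as the person in the room.

```python
# The 'Chinese room' as a program: the system answers by matching the shape
# of the input symbols against a rule book and emitting the prescribed
# output symbols. Pure syntax, no semantics -- Searle's 'weak AI'.
# The rule-book entries are invented for illustration.

rule_book = {
    "你好": "你好！",              # greeting in, greeting out
    "你懂中文吗？": "当然懂。",      # "Do you understand Chinese?" -> "Of course."
}

def chinese_room(symbols: str) -> str:
    # Look up the input pattern; fall back to "Please say that again."
    return rule_book.get(symbols, "请再说一遍。")

print(chinese_room("你懂中文吗？"))  # 当然懂。
```

From outside, the exchange looks like understanding; inside, there is only table lookup. Whether any scaled-up version of this could ever amount to real understanding is precisely what Searle denies and what the 'strong AI' position asserts.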