==Precognitive Systems: How Cybernetics Could Control the Future==


In my previous essay, titled Cybernetic Ideologies, I discussed how a generation of young activists in the 1960s and 1970s re-appropriated cybernetic theories in their attempts to create a new type of non-hierarchical society. In rebellion against the post-war trends of consumerism and “mechanisation”, the counterculture claimed that the new society would be based on the accessibility of knowledge and powered by new possibilities offered by computer technologies. The application of cybernetics that created the supposed 'mechanisation of society' came from the military-led research into systems theory in the first two decades of the Cold War. Game theory, operational theory, and cybernetics – the three 'war sciences' (Galison, 1994: 231) – allowed the American military to rapidly develop new technologies of warfare, one of these being the computer. The computer became a physical manifestation of the paranoid drive for 'control' over a chaotic system of international relations, and another example of the desire to illustrate how Capitalism triumphs over Communism.
In this essay, I aim to provide a brief overview of the military application of cybernetics in the Cold War era, particularly during the period of high innovation and investment in the various research and development labs during the 1950s and 1960s. In the United States, the fear of an imminent communist invasion drove an obsession with control, further exacerbated by crises such as the Berlin Blockade and the Cuban Missile Crisis, which very nearly ignited another world war. Around the same time, Norbert Wiener's development of cybernetics – a science of 'control and communication in the animal and the machine' – promised an attractive mathematical framework with which to organise military policy and develop new technologies to combat the Soviet threat. The resulting application of cybernetics can be seen as a science of prediction: Wiener's initial theories were expanded by his contemporaries into complex models that could supposedly predict the behaviour of the Russians, and suggest how to respond in the event of an invasion. As the many research laboratories around the United States enjoyed massive military investment, cybernetics became not only a possible solution to the problems of the era, but an ideology – a belief that the world was composed of complex systems that could be controlled, manipulated, and perhaps most importantly, predicted. As David Mindell states in his paper Cybernetics: Knowledge Domains in Engineering Systems, cybernetics became what could be considered a 'philosophy of technology.' (2000: 1)


In the 1940s, Norbert Wiener experimented with creating a model of a manned anti-aircraft gun that could estimate the future position of an enemy aircraft. In developing what he called the Anti-Aircraft Predictor, Wiener had to create a computational system that could account for “not only the mind of an inaccessible Axis opponent, but of the Allied anti-aircraft gunner as well, and then even more widely to include the vast array of human proprioceptive and electro-physiological feedback systems.” (Galison, 1994: 229) Wiener expanded this unifying vision of man and machine into a mathematical framework that he named Cybernetics, presenting his studies in the book Cybernetics: Or Control and Communication in the Animal and the Machine (1948), and to great acclaim at the Macy Conferences held between 1946 and 1953. From its inception, it is clear that Cybernetics was directly concerned with temporality – more specifically, the conditions of a dynamic system at a moment in time. In developing the Predictor, the mathematical challenge was in 'predicting the future value of a pseudo-random function based on its statistical history (its autocorrelation).' (Mindell, 2000: 2) The result was the augmentation of the human operator's abilities into a servo-mechanical precognitive system, adjusting the aim of the gun to take into account the potential future manoeuvres of the aircraft.
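
The mathematical kernel of the Predictor can be sketched in a few lines of code. The following Python fragment – a minimal illustration of one-step linear prediction from autocorrelation, using invented signal parameters, and not a reconstruction of Wiener's actual apparatus – fits predictor weights to a noisy 'flight path' by solving the Yule-Walker equations and extrapolates the next position:

<pre>
# A minimal sketch of 'predicting the future value of a pseudo-random
# function based on its statistical history (its autocorrelation)'.
# The flight path below is an invented, illustrative signal.
import numpy as np

def fit_predictor(history, order=4):
    """Derive linear predictor weights from the signal's empirical autocorrelation."""
    x = history - history.mean()
    n = len(x)
    # empirical autocorrelation for lags 0..order
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Yule-Walker equations: Toeplitz system R w = r[1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:]), history.mean()

def predict_next(history, weights, mean):
    """One-step prediction: weighted sum of the most recent samples, newest first."""
    recent = (history - mean)[-1:-len(weights) - 1:-1]
    return mean + np.dot(weights, recent)

# an 'evasive manoeuvre': a drifting sinusoid plus noise
rng = np.random.default_rng(0)
t = np.arange(200)
path = 0.05 * t + np.sin(0.2 * t) + 0.1 * rng.standard_normal(200)

weights, mean = fit_predictor(path)
print("predicted next position:", predict_next(path, weights, mean))
</pre>

The prediction is only as good as the statistical regularities in the target's past behaviour – precisely the sense in which Wiener's system had to contain a model of the 'mind of an inaccessible Axis opponent'.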


Faced with the choice of anthropomorphising the mechanical gun or mechanising the human operator, Wiener chose the latter, because the mathematics of mechanical devices was understood in far greater depth than the physiological and cognitive functioning of the human body. The human was rendered as a rational subject governed by simplified laws of self-preservation: a 'self-maintaining mechanism'. (Galison, 1994: 246)
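
The 'self-maintaining mechanism' is, at root, a negative feedback loop, and it is this loop that makes gunner and servo mathematically interchangeable. A minimal sketch – with an invented gain and target sequence, purely for illustration – shows the whole cybernetic gesture: measure the error between output and goal, and feed a fraction of it back as correction:

<pre>
# A minimal sketch of negative feedback, the basic cybernetic loop.
# Gain and target values are illustrative assumptions.
def servo_track(targets, gain=0.5):
    """Proportional controller: each step corrects `gain` times the current error."""
    aim, trace = 0.0, []
    for target in targets:
        error = target - aim   # communication: compare output with the goal
        aim += gain * error    # control: apply a correction proportional to the error
        trace.append(round(aim, 3))
    return trace

# the mechanism 'hunts' its goal, converging so long as 0 < gain < 2
print(servo_track([10, 10, 10, 12, 12, 12]))
</pre>

Under this description it makes no difference whether the corrective step is taken by a motor or by a gunner's hand on a crank: both are 'self-maintaining mechanisms' chasing an error signal towards zero.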


This switch in understanding followed the wider trend of the conceptual mechanisation of organic subjects, allowing for greater depths of precognitive control. For example, cybernetic computing systems were seen as an important political tool for analysing the rapidly increasing quantities of data from sources such as the U.S. census. With the use of such systems, the statistics could be processed far more quickly, and presented in a more comprehensible manner to policy-makers, shortening the temporal gap between survey and action. The sociological results of this application of cybernetics are readily visible in the control systems of the 21st century – as Brian Holmes writes: “The myriad forms of contemporary electronic surveillance now constitute a proactive force, the irremediably multiple feedback loops of a cybernetic society, devoted to controlling the future.” (2007) In this regard, we can understand cybernetics as a surveillant mathematics, as it attempts to make sense of the behaviour of information within a given system.
 
During a time of tense Soviet-American relations, the importance of cybernetics as a war science was largely agreed upon amongst academics, while the media portrayed cybernetics as 'the epitome of computerized modernity' to the masses. (Barbrook, 2005: 48) After the shock of the Sputnik satellite launch, the dynamics of funding research into technology were dramatically altered: “Science, technology, and engineering were totally reworked and massively funded in the shadow of Sputnik.” (Hunger, 2008: 6) Fuelled by massive investment from the US military, designed to retain the country's hegemonic position, cybernetics became increasingly relevant to the advancement of computer technologies at research institutes such as the National Defense Research Committee, RAND, and ARPA. As the computer technologies advanced, so did the perceived ability of the Americans to control the future. By the 1960s, the functionality of these cybernetic computer systems was broad: they were used by the military to “plan the destruction of Russian cities, organise the invasion of 'unfriendly' countries, […] and pay the wages of its troops and manage its supply chain.” (Barbrook, 2005: 41) That the variety of uses stretches from the purely utilitarian to the planning of a potential military campaign illustrates how cybernetics had permeated as an ideology: it was not simply a mathematical toolkit, but a universal methodology for controlling and understanding flows of information.
 
While the military purpose of much of the technological advancement made at the research and development laboratories across the US is undeniable, the Americans also gained greatly from a propagandistic perspective. The great funding drive of the 1950s and 1960s was a means to beat the Soviets on a technological war front, publicly illustrating how Capitalism offered a more advanced future for its citizens – the 1964/65 New York World's Fair being an expensive performance of this. The IBM pavilion was a museum for the future, attempting to explain the complexity of cybernetic computer systems to the general public through a spectacle of interactive mainframes and a nine-panel projected animation by Charles and Ray Eames:
“The theme of this ‘mind-blowing’ multi-media show was how computers solved problems in the same way as the human mind. The audience learnt that the System/360 mainframes exhibited in the IBM pavilion were in the process of acquiring consciousness: artificial intelligence.” (Barbrook, 2005: 15) This public manifestation of cybernetic computing research at events such as the World's Fair was about a control of appearance: visitors were invited to interact with the machines, re-presenting them not as nuclear missile launchers but as curious and helpful tools that, in the near-future, would be present as artificially intelligent devices in the homes of suburban America. Barbrook calls this unrealised fantasy the 'imaginary future' – a propagandistic performance of the future designed to illustrate the great promise of the Capitalist system.
 
The attempts to demystify the machine and perpetuate the fantasy of the robot servant at the IBM pavilion were partially a reaction to the cinematic representation of technology at this time. Barbrook describes the public's fascination with how characters such as Robby the Robot from the 1956 film Forbidden Planet could become consumer objects in the suspended near-future, as suggested by IBM at the World's Fair. (2005: 16) While much science-fiction rendered cybernetic technologies as friendly and convenient, a number of high-profile films in the 1960s began to engage with the political background of the computer as a control interface.
 
The most obvious example of this is 2001: A Space Odyssey (1968), in which Stanley Kubrick and Arthur C. Clarke raise the point that the increasing degree of control afforded to machines also increases the possibility of a 'catastrophic glitch'. In 2001, the relationship between the astronauts and HAL, the artificially intelligent computer on board the spaceship, becomes tense when, after HAL misdiagnoses a technical problem, the astronauts consider shutting the system down as a precautionary measure. The paranoid fear of losing control causes HAL to prioritise self-preservation and retain power over the ship, resulting in a series of catastrophic events for the on-board astronauts. The great fear of the catastrophic glitch was not only the infliction of immediate damage to a given system, but the wider attack on the assumed stability of computer systems and the ideology of cybernetics that helped create them. For the human operator, a glitch represents a loss of control. Therefore, a glitch in a precognitive system shatters the illusion of control over the future. The glitch is a reminder that the window into the near-future provided by cybernetic systems is a simulation, governed by potentiality and statistics.
 
“In 1961, the influence of the men from RAND increased dramatically. The new president John F. Kennedy turned to them to impose order – not only on nuclear strategy, but the arms race, which was threatening to run out of control.” (Curtis, 1992)
The militarisation of cybernetics created a new ideology symptomatic of the Cold War era of anxiety and paranoia. In his documentary Pandora's Box: To the Brink of Eternity, Adam Curtis describes how the advocates of cybernetics and game theory used rational systems as a means to decide what political and military steps should be taken in response to the Cuban Missile Crisis. Faced with the potential of escalating the crisis into a third world war through an incorrect precognition, “they found out they had no idea how the other side would respond to any move they made.” (Curtis, 1992) While science-fiction cinema heightened anxiety through dramatic narratives about runaway machines, the cybernetic ideology was suffering from a different kind of glitch: the very real issue of human error. In the very moments of crisis when the war sciences could have demonstrated their value, it became apparent that their cybernetic systems only simulated precognition.
 
The coupling of the catastrophic technological glitch with human error displays an inherent unsuitability in the military application of cybernetics. In To the Brink of Eternity, the systems-theorists working at the many research labs around the United States are portrayed as egotistical and megalomaniacal, using cybernetic theory as an excuse to maintain their personal positions of authority. As a result, the systems themselves were distorted in order to prove the worth of further investment – Curtis gives the example of the US Air Force providing the researchers with flawed data on the Soviet stock of nuclear weapons in order to acquire more aircraft. When the statistical basis of a system is fictitious, its fragile simulation of the future is thrown into complete disarray. At a time when men in various military departments fought for their own agendas of control and investment, the subjectivity of statistical data was an inherent flaw in the supposedly unassailable war sciences. By the early 1950s, Norbert Wiener had come to question the ethics of systematically creating military weaponry, writing that he felt he had lost control over the future of cybernetics; he 'repeatedly stressed the power of cybernetics to save, enslave, or destroy humanity.' (Galison, 1994: 254) While the application of cybernetics prevalent during the 1950s and 1960s largely focused on managing possible future military campaigns and justifying the powerful position of the US Department of Defense, the countercultural re-readings of cybernetic concepts in the late 1960s and 1970s would focus on the first of Wiener's three visions of the future: its power to save.
 
==Bibliography==
 
Barbrook, Richard (2005) Imaginary Futures
 
Galison, Peter (1994) The Ontology of the Enemy
 
Holmes, Brian (2007) FUTURE MAP: Or, How Cyborgs Learned to Stop Worrying and Love Surveillance
source: http://brianholmes.wordpress.com/2007/09/09/future-map/ accessed 25-03-12
 
Hunger, Francis (2008) Setun: An Inquiry into the Soviet Ternary Computer
 
Mindell, David (2000) Cybernetics: Knowledge Domains in Engineering Systems
source: web.mit.edu/esd.83/www/notebook/Cybernetics.PDF
 
Triclot, Mathieu (2006) Norbert Wiener's Politics and the History of Cybernetics
 
==Filmography==
 
Curtis, Adam (1992) Pandora's Box
source: http://thoughtmaybe.com/video/pandoras-box
 
Eames, Charles and Ray (1958) The Information Machine
source: http://archive.org/details/InformationM
 
Kubrick, Stanley (1968) 2001: A Space Odyssey
 
Sargent, Joseph (1970) Colossus: The Forbin Project
