Semantic Simulations


# what have you done

My digital [object] will be a set of five short films that present conversations between a person and an AI machine. The conversations are cut-ups from five iconic science fiction films, in which the relationship between human and machine is turned around. The cut-ups include moments where it is not the user who articulates a request, but the machine, which starts to request or even instruct its user. Combined, these cut-ups highlight a reversed master-servant relationship.


# why have you done it

This master-servant dialectic has its origin in Hegel's writings, and it applies remarkably well to the 'intelligent' devices of recent years. Thanks to the smartphone, and thanks to the 'cloud', we now have Siri, Cortana, Google Now and Echo, produced by Apple, Microsoft, Google and Amazon. They all interact with their users without any physical contact: they listen to us, and answer our requests through our own linguistic device: the voice.

As early as 1995, Brian Massumi described the presence of information as 'anything, anytime, anywhere'. Because of this flood of information, the big corporations now produce their own devices that are meant to help us digest it.

But Massumi also pointed to a blurred boundary between the information requester (the master) and the information deliverers (the servants). Who is serving whom? Who adjusts to whom? The user is surely not only the master. Massumi described it as "the human-designed machine designing the human".

What is interesting about AI technology is that humans try to build a simulation, a model whose intelligence comes as close as possible to the complexity of human intelligence. Nobody has succeeded yet, and so we now live with a huge range of dogged attempts: the Siris, the Cortanas and the Echos.

In these products:

The interface is the voice: both the machine's interface and the human's.

Being smart = being human. Being intelligent = being human.


# how

To create the short films, I worked with '2001: A Space Odyssey' (1968), Steven Spielberg's 'AI' (2001), 'Moon' (2009), 'Her' (2013), and Black Mirror's 'Be Right Back' (2013).

To create the cut-ups I used 'Videogrep', a piece of software (built upon MoviePy and FFmpeg) that makes it possible to search through a film's subtitle file and select fragments according to a certain search query. Inside the subtitle files of the five films I marked the moments in which the master-servant relation is turned around; a rough sketch of this workflow follows below.
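
Videogrep itself does this kind of cutting from the command line, but the underlying principle can be sketched in a few lines of Python on top of MoviePy. This is only a rough sketch under my own assumptions: the file names, the '[REVERSED]' marker and the helper functions are hypothetical, and stand for the markers I place by hand in the subtitle file to tag the reversed moments.

```python
# Rough sketch of the cut-up principle behind Videogrep, using MoviePy directly.
# The file names and the '[REVERSED]' marker are hypothetical examples.

import re
from moviepy.editor import VideoFileClip, concatenate_videoclips

MARKER = "[REVERSED]"  # hand-placed tag inside the .srt file


def srt_time(t):
    """Convert an SRT timestamp like '01:02:03,500' into seconds."""
    hours, minutes, rest = t.split(":")
    seconds, millis = rest.split(",")
    return int(hours) * 3600 + int(minutes) * 60 + int(seconds) + int(millis) / 1000.0


def marked_moments(srt_path):
    """Yield (start, end) pairs for every subtitle block containing MARKER."""
    blocks = open(srt_path, encoding="utf-8").read().split("\n\n")
    for block in blocks:
        if MARKER not in block:
            continue
        match = re.search(r"(\d\d:\d\d:\d\d,\d\d\d) --> (\d\d:\d\d:\d\d,\d\d\d)", block)
        if match:
            yield srt_time(match.group(1)), srt_time(match.group(2))


def cut_up(video_path, srt_path, output_path):
    """Cut every marked moment out of the film and join them into one clip."""
    film = VideoFileClip(video_path)
    fragments = [film.subclip(start, end) for start, end in marked_moments(srt_path)]
    concatenate_videoclips(fragments).write_videofile(output_path)


cut_up("her-2013.mp4", "her-2013.srt", "her-cutup.mp4")
```

Videogrep wraps this same idea behind a search interface, so the fragments can be selected with a query instead of a hand-placed marker.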

The audio files will be combined with a visual layer, which I'm still thinking about at the moment.


# about the semantic web

From an interest in the semantic web, I started this project with the idea of creating a simulation of such a 'semantically linked data' structure.

The idea of the semantic web comes from the 60s, but it is mostly known as one of the 'dreams' Tim Berners-Lee had in 1989. In that year Berners-Lee proposed the web and, shortly afterwards, designed the markup language HTML, a language that would standardize the vocabulary of the internet. Through this universal form, and through the birth of the hyperlink, it became possible to connect web pages, information, and media files to each other.

The next step would be a meta-language, which would describe any piece of information (web page/text/image) on the internet. This meta-description is written in the form of so-called 'triples'. Triples contain three elements: a subject, a predicate and an object. For example: the Eiffel Tower (subject) is located in (predicate) Paris (object), and: Paris (subject) is located in (predicate) France (object). These triples make it possible to search for information that falls into the same classification or relation-type, as opposed to searching for a keyword.
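
To make the triple structure concrete, here is a minimal sketch in plain Python. It is not an actual RDF store: the strings stand in for the URIs a real semantic web database would use, and the two triples are just the examples from the text. It shows how information can be searched by relation-type rather than by keyword.

```python
# A minimal sketch of the triple structure: (subject, predicate, object).
# Plain strings are used here where a real semantic web store would use URIs.

triples = [
    ("Eiffel Tower", "is located in", "Paris"),
    ("Paris", "is located in", "France"),
]


def search_by_relation(predicate, obj):
    """Find every subject that stands in the given relation to the given object."""
    return [s for (s, p, o) in triples if p == predicate and o == obj]


def follow(subject, predicate):
    """Follow a relation outward from a subject (e.g. where is the Eiffel Tower?)."""
    return [o for (s, p, o) in triples if s == subject and p == predicate]


print(search_by_relation("is located in", "Paris"))   # ['Eiffel Tower']
print(follow("Eiffel Tower", "is located in"))        # ['Paris']

# Chaining relations: Eiffel Tower -> Paris -> France
print(follow(follow("Eiffel Tower", "is located in")[0], "is located in"))  # ['France']
```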