User:Manetta/interview-text

From XPUB & Lens-Based wiki
Revision as of 14:27, 22 April 2015 by Manetta

2015-04-22 interview: Julie --> Manetta


During the first and second trimesters, two things were important for me.

The first one has to do with a shift of tools. I'm a graphic designer, and I'm shifting tools at the moment. I'm working in another OS, and I'm learning to write code. That means I'm not using Adobe software anymore. Instead I'm using text-based software that works through the terminal, and different suites of open-source design programs. This changes my practice quite a lot, and it also makes me approach design in a different way.

The second main thing is closely related to the first. I started this master with a linguistic interest, so I've been looking at a set of linguistic tools over the last six months. I tried to see how the computer understands human speech. For example, I've been looking into speech-to-text software: into the dictionaries it uses, how these dictionaries are constructed, and which structures are used to understand human language.

And I found that super interesting, and in a way that is also the link to my 'new' toolset. Writing code is writing a language itself; it is very structured, and there are clear correlations with human language.


As writing code is something pretty new to me, I've mainly been making prototypes over the last six months. One of them is a little script that translates human written text into computer phonemes. It is just a little thing, but I think it reveals the way a computer interprets human language.
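The actual script isn't reproduced here, but the idea can be sketched roughly like this, assuming a small hand-written lookup table (real speech software would use a full pronunciation dictionary):

```python
# Minimal sketch of a text-to-phoneme translation.
# The mini-dictionary below is illustrative, not a real resource.
PHONEMES = {
    "hello": ["HH", "AH", "L", "OW"],
    "world": ["W", "ER", "L", "D"],
}

def to_phonemes(text):
    """Translate written words into phoneme sequences, word by word."""
    out = []
    for word in text.lower().split():
        # unknown words get a placeholder instead of a pronunciation
        out.append(" ".join(PHONEMES.get(word, ["?"])))
    return " | ".join(out)

print(to_phonemes("Hello world"))  # HH AH L OW | W ER L D
```

The interesting part is exactly the reduction: the computer only 'hears' what its dictionary allows it to hear.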

I've also been looking into the topic of Artificial Intelligence, mainly because I saw different interests coming together within the field of AI: speech-to-text recognition, which is used to receive voice commands; linked data, used to connect the requested information sources; and text-to-speech, yet another linguistic tool, through which the machine replies to the user.

This survey into the topic of AI resulted in a film, made by considering AI as an interface. I looked at five iconic science-fiction films in which AI machines are the main character: HAL (from 2001: A Space Odyssey, 1968), GERTY (from Moon, 2009), Samantha (from Her, 2013), Ash (from 'Be Right Back', Black Mirror, 2013) and David (from Steven Spielberg's A.I., 2001). Stating the years of the films seems quite important, as it shows the context in which these films were made and since when people have been imagining AI machines in film. HAL was made in 1968, one year before the first man on the moon.

But that hasn't been my reason for working with these films. I'm more interested in seeing these AI machines as interfaces. An AI interface mimics a human being, and I'm interested in how that influences the way you communicate with it, and also in how the user actually adapts to the machine. So: how the relation between computer and user is shaped by the fact that the interface is a human simulation.


There is a kind of feedback loop happening.


The film is not made with a regular piece of editing software, but with a tool called Videogrep. It's a tool you use in the command line, so you use it through code. It uses the subtitle files of the films: it can grep certain keywords from them, which generates a new 'version' of the film. That made it possible for me to work with film editing in another way. I'm not a film editor myself (I had some classes in my bachelor, but nothing more than that), but this tool made it possible for me to edit on a linguistic level, as I was using the dialogs from the film directly. With this method I could highlight the sentences from the dialogs that I wanted to use. This was both a pretty flexible and a text-based working process.
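The principle behind this subtitle-grepping can be sketched in a few lines. This is not Videogrep's own code, just an illustration of the idea: search a film's .srt subtitle file for a keyword and collect the time ranges whose dialog contains it, which a video tool could then cut together.

```python
# Toy .srt data; a real subtitle file has the same block structure.
SRT = """1
00:00:01,000 --> 00:00:04,000
Open the pod bay doors, HAL.

2
00:00:05,000 --> 00:00:08,000
I'm sorry, Dave.
"""

def grep_subtitles(srt_text, keyword):
    """Return (start, end, dialog) for every block matching keyword."""
    cuts = []
    # each block: index line, "start --> end" line, dialog line(s)
    for block in srt_text.strip().split("\n\n"):
        lines = block.split("\n")
        timing, dialog = lines[1], " ".join(lines[2:])
        if keyword.lower() in dialog.lower():
            start, end = timing.split(" --> ")
            cuts.append((start, end, dialog))
    return cuts

for start, end, dialog in grep_subtitles(SRT, "sorry"):
    print(start, end, dialog)
```

Because the selection happens on the text of the dialogs, editing becomes a kind of searching and highlighting rather than scrubbing through footage.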


You could say that a graphic design practice is mainly based in space, while making a film is rather time-based. But I think there is not much difference between the two: designing within space and designing within time share a common basis. With film you have factors such as 'time', and 'composition' is less present there. But if you compare that with making a poster, for example, it has a lot to do with 'composition', 'layers', and using details. The point size of the typography can ask the audience to zoom in or zoom out. So I see a lot of correlations between the two media.

But film is a medium that is not that static. It could be quite close to the medium of the book, in the sense that it has a kind of duration and experience factor to it. And it asks for a similar way of thinking, in structure, material/atmosphere, and tone.


For this film I've been using Videogrep in a very particular way, using it as a highlight tool to select specific cuts. Videogrep is a pretty powerful tool which could easily be used with other films. I still want to make a summarization tool, which would make it possible to quite easily make a summary of a lecture, for example. It would be good to make a simple interface for that, so you could use it in the browser. Technically, at this moment, I wouldn't know where to start, but that could be something for the future.


I would like to make a comparison between human speech and the computer, and look at the dialog between the two systems. I'm not sure whether to put the two in a particular order: human speech above the language of the computer, or the other way around.


A programming language is quite comparable to human language. Structurally, there are interesting similarities going on. The way you can save a certain value in a variable, for example, is quite similar to the way we use different levels of abstraction in our speech. We can call something an animal, or we can call it a cow, or a very specific type of cow. When we're having a conversation and I tell you about a cow I saw, I would probably first tell you a bit about that cow. You would save that information during the conversation, and if I referred to a cow later, you would probably know that I'm talking about the cow from earlier in the conversation.
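The analogy above can be shown in code. This is a deliberately simple sketch (the names and the breed are made up for illustration): a variable 'saves' a referent during the conversation, and a later mention resolves to it.

```python
# "Saving" a referent, as in conversation: once the specific cow is
# introduced, later references point back to the stored value.
animal = "cow"                # the general category
the_cow = "Frisian Holstein"  # the specific cow introduced earlier

def refer_back():
    # later in the conversation, "the cow" resolves to the saved referent
    return f"the {animal} I mentioned was a {the_cow}"

print(refer_back())  # the cow I mentioned was a Frisian Holstein
```

The variable plays the role of the listener's memory: one short name standing in for richer information given earlier.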


(todo --> make a little list of more examples)


Currently I'm looking into WordNet, a lexical database used for many different applications, mainly for machine-learning purposes. WordNet is an interesting resource: a sort of 'truth' base, upon which other applications base their truth.
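WordNet's core structure connects back to the cow example: words are grouped into sets of synonyms, linked by hypernym ("is a kind of") relations that climb from the specific to the abstract. A toy sketch of that structure, with a hand-made, illustrative mini-hierarchy instead of the real database:

```python
# Tiny illustrative hypernym relation; the real WordNet is far larger
# and links synsets rather than bare words.
HYPERNYMS = {
    "holstein": "cow",
    "cow": "animal",
    "animal": "organism",
}

def hypernym_chain(word):
    """Walk up the hypernym relation to ever more abstract terms."""
    chain = [word]
    while word in HYPERNYMS:
        word = HYPERNYMS[word]
        chain.append(word)
    return chain

print(hypernym_chain("holstein"))  # ['holstein', 'cow', 'animal', 'organism']
```

Applications that treat WordNet as a 'truth' base are effectively trusting exactly this kind of chain: that a holstein is a cow, that a cow is an animal, and so on.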