Editing - Narrative

From XPUB & Lens-Based wiki
Revision as of 10:31, 24 March 2015 by Sol (talk | contribs) (→‎Kinect)

Prototyping Editing/Narrative

Connecting the Kinect to the book ‘The Society of the Spectacle’ by Guy Debord. Not the whole book, but a part of it. As research I want to look at the differences between the movie and the book, because Debord argues that people’s relationships are mediated by images, yet the movie itself is an image-based medium. So what does that say about the text and the movie? Maybe I have to re-edit Debord’s movie.

The other part of my research is the Kinect. I am going to focus on one sensor, prototype with it, and lastly connect the text with the technical part. In the end I also have to think about how people react to the work and how they engage with it. On Windows, the Kinect can provide the following features:

1. Raw sensor streams: Access to low-level streams from the depth sensor, color camera sensor, and four-element microphone array.

2. Skeletal tracking: The capability to track the skeleton image of one or two people moving within Kinect's field of view for gesture-driven applications.

3. Advanced audio capabilities: Audio processing capabilities include sophisticated acoustic noise suppression and echo cancellation, beamforming to identify the current sound source, and integration with the Windows speech recognition API.

4. Sample code and documentation.

http://en.wikipedia.org/wiki/Kinect
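The skeletal-tracking feature (2) is what the “Trigger Audio When a Skeleton is Tracked” tutorial linked below builds on. As a minimal sketch of that logic, with the Kinect driver stubbed out (the `frames` input below is a hypothetical stand-in for whatever stream of tracking results the Microsoft SDK or OpenNI would actually deliver):

```python
# Sketch of the trigger logic behind "play audio when a skeleton is tracked".
# The Kinect hardware is NOT accessed here: `frames` simulates the per-frame
# skeletal-tracking results an SDK would provide. Only the edge-detection
# logic (enter / leave the field of view) is shown.

def audio_triggers(frames):
    """Yield 'start'/'stop' events when a skeleton appears or disappears."""
    tracking = False
    for skeleton_visible in frames:
        if skeleton_visible and not tracking:
            tracking = True
            yield "start"   # a viewer stepped into the Kinect's field of view
        elif not skeleton_visible and tracking:
            tracking = False
            yield "stop"    # the viewer left again

# Simulated stream: empty room, someone enters, stays a while, leaves.
frames = [False, False, True, True, True, False, False]
print(list(audio_triggers(frames)))   # ['start', 'stop']
```

In an installation, “start” would begin playback of the Debord audio and “stop” would pause or fade it; the same edge-detection shape applies whichever Kinect library delivers the frames.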

Kinect.JPG


How to:

http://kin-educate.blogspot.nl/2012/06/speech-recognition-for-kinect-easy-way.html

https://gist.github.com/elbruno/e4816d4d5a59a3b159eb#file-elbrunokw4v2speech

https://raychambers.wordpress.com/2011/12/31/lesson-9kinect-speech-recognition/

http://wiki.roberttwomey.com/UNTREF_Speech_Workshop#Hands-on_with_Sphinx4_Library_for_Processing

http://learning.codasign.com/index.php?title=Trigger_Audio_When_a_Skeleton_is_Tracked

SPEECH TO TEXT LIBRARY - http://florianschulz.info/stt/

http://ericmedine.com/tutorials/LECTURE_kinect_hacking.htm



Kinect

Let’s not focus too much on the tool but on the concept. The Kinect is not working out for me, so I want to focus more on Debord’s book and movie. How can I make people engage with ‘The Society of the Spectacle’, to make them aware of it or to understand it? Do we look differently at an image or movie once we have read Debord’s text? Maybe video is not the right medium, and I have to incorporate physical objects too. And why the whole movie or book, and not just one part, since it’s complicated enough? For me the best part of the book is Thesis 4:


"The spectacle is not a collection of images; rather, it is a social relationship between people that is mediated by images."


I use Thesis 4 as a starting point for my research and collected almost everything on the internet that has to do with Thesis 4. I have 218 screenshots of PDFs and websites, and only five videos. The screenshots I want to print, because that gives a better idea of the number of people who have used the text. The videos I’ll combine with the film ‘The Society of the Spectacle’, using the audio of the film underneath the five videos. So you get an overload of Debord’s Thesis 4, in different kinds of contexts. What does this say about Thesis 4? They all have a social relationship through the text. Isn’t the collection itself a representation of Debord’s Thesis 4?

They all have one thing in common: the footnote, Debord, Guy. “Thesis 4.” The Society of the Spectacle. 1967.


Google results.JPG

Thesis 4

After I printed all the screenshots and combined them into a kind of booklet, it looked like a script (a movie script). Since it felt and looked like a script, why not use it as one? For each screenshot I look up a video on the internet. Some screenshots are of artists, so I show pictures of their work; some mention names like Michael Jackson, so I show one of his music videos. All these varied videos are collected together in one playlist that the Google search has created, with only one thing in common: Thesis 4 by Guy Debord.

After finding 25 videos for the screenshots, I already have a playlist of a few hours. You won’t be able to watch the whole thing, but it gives you an idea of what it’s about. Only, the viewer does not know which screenshot belongs to which video. So I make a double screen, or two screens: one with the video of all the found footage, the other with the screenshots. Then you can relate each video to its screenshot. The script is not specifically for the viewer; it was a tool for me to make a narrative.

Next to the YouTube playlist I want to make my own narrative, not directed by Google: my interpretation of the screenshots and of Thesis 4. So I create a contrast between Google’s YouTube playlist and my ‘movie’. I start by setting up some basic, simple rules, like screening each video for 1 minute, as a prototype. After that I create other rules to see what happens to the video and its context. I want to make a video that YouTube can’t create.
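The “each video screens for 1 minute” rule can be sketched as a simple cut-list computation: given the durations of the found-footage clips, cap each at 60 seconds and place it on the timeline. The clip durations below are made up for illustration; the rule itself is the one named in the text.

```python
# Minimal sketch of the first prototype rule: every clip screens for at most
# 1 minute. Input: clip durations in seconds (hypothetical values here).
# Output: a cut list of (clip_index, timeline_start, timeline_end) segments
# plus the total running time of the edit.

RULE_MAX = 60  # seconds per clip, the basic rule from the prototype

def cut_list(durations, max_len=RULE_MAX):
    cuts, t = [], 0.0
    for i, d in enumerate(durations):
        take = min(d, max_len)          # trim longer clips to the rule length
        cuts.append((i, t, t + take))   # where this clip sits on the timeline
        t += take
    return cuts, t

durations = [42, 180, 75]               # three hypothetical found clips
cuts, total = cut_list(durations)
print(cuts)    # [(0, 0.0, 42.0), (1, 42.0, 102.0), (2, 102.0, 162.0)]
print(total)   # 162.0 (total running time after applying the rule)
```

Changing `max_len`, or swapping `min` for another selection rule, is how the later “other rules” variants (Thesis 4.4, 4.5) could be expressed in the same shape.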


Prototypes Thesis 4

File:Thesis 4 pages 4-21.pdf - a PDF with the first 25 screenshots; based on those screenshots I found videos on YouTube. Prototypes:



Thesis 4 - Google video results for Thesis 4, with Debord's voice-over.

Thesis 4.1 - YouTube videos based on the research of the first 25 screenshots; each video screens for 1 minute.

Thesis 4.2 - YouTube videos based on the research of the first 25 screenshots; each video screens for 1 minute, with Debord's voice-over.

Thesis 4.3 - YouTube videos based on the research of the first 25 screenshots; each video screens for 1 minute, with Debord's voice-over repeated.

Thesis 4.4 - YouTube videos based on the research of the first 25 screenshots; videos edited together so that they react to each other.

Thesis 4.5 - YouTube videos based on the research of the first 25 screenshots; videos edited together from dream to reality, using a digital voice-over.