exp.03 // 2012.02.20 // 2:40 pm // Kinect hack


[http://vimeo.com/38635471 test results] | password: gradproj

===notes===

'''initial intention'''<br/>
Use the three-dimensional point cloud generated by the Xbox Kinect as a data source in order to create abstract imagery.

'''inputs'''<br/>
The live feed captured by the camera(s) of the Xbox Kinect.

'''modifiers'''<br/>
Through Processing, the feed from the Kinect is analyzed and a three-dimensional point cloud is displayed on the screen. I modified the original Processing code to connect the points in the cloud with lines, and then modified it further to connect dots at random. (Other tests and variations can be found in this test; they represent the process of modifying the code.)
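
As an illustration, the sketch below shows roughly what these two modifications look like in Processing. The point-cloud acquisition from the Kinect (normally done through a Processing Kinect library) is replaced here by a synthetic cloud of random points so the drawing logic can run on its own; the point count, colours and coordinate ranges are placeholder values, not the ones used in the test.

<pre>
// Rough sketch of the two line-drawing modifications described above.
// The real point cloud comes from the Kinect depth data via a Processing
// Kinect library; a synthetic cloud of random points stands in here.

int numPoints = 500;
PVector[] cloud = new PVector[numPoints];

void setup() {
  size(640, 480, P3D);
  // placeholder cloud: in the real sketch these would be the world-space
  // positions computed from the Kinect depth pixels
  for (int i = 0; i < numPoints; i++) {
    cloud[i] = new PVector(random(-200, 200), random(-150, 150), random(-400, -100));
  }
}

void draw() {
  background(0);                    // black background, as in the test videos
  stroke(220);                      // white/grey lines
  translate(width / 2, height / 2, 0);

  // modification 1: connect consecutive points in the cloud with lines
  for (int i = 0; i < numPoints - 1; i++) {
    line(cloud[i].x, cloud[i].y, cloud[i].z,
         cloud[i + 1].x, cloud[i + 1].y, cloud[i + 1].z);
  }

  // modification 2: also connect randomly chosen pairs of dots
  for (int i = 0; i < 50; i++) {
    PVector a = cloud[int(random(numPoints))];
    PVector b = cloud[int(random(numPoints))];
    line(a.x, a.y, a.z, b.x, b.y, b.z);
  }
}
</pre>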

'''output'''<br/>
Single-screen video feed. (Because Processing normally produces a live feed displayed on the computer screen, I modified the code to capture a still every time the screen is refreshed, and was therefore able to create a video file out of the image sequence.)
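
In Processing this can be done with the built-in saveFrame() call at the end of draw(), which writes one numbered image per screen refresh; the file name pattern below is only an example.

<pre>
void draw() {
  background(0);
  // ... point-cloud / line drawing as above ...

  // write one numbered still per refresh; the resulting image sequence
  // can then be assembled into a video file (e.g. in After Effects)
  saveFrame("frames/kinect-####.png");   // #### is replaced by the frame number
}
</pre>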

'''backend'''<br/>
Xbox Kinect
Processing
After Effects

'''conclusions'''<br/>
I am very happy with the results of this test for various reasons. The aesthetic of the video generated through this method is a continuation of other videos I have generated as part of other tests for this project; more specifically, it can be described as white/grey lines outlining the content on a black background. This background allows everything to connect easily, and I am therefore able to create a continuous background in a 360-degree environment. Additionally, this test represents a tool/method/filter that allows me to continue using a device with a lens to generate my imagery. Since the content has quickly started to move in a direction where I was not able to use videos generated by a video camera, this method allows me to keep filming (and to think specifically about cinematography) in order to generate content for the project.

Finally, I want to continue making modifications to the code. A few of the things I have lined up are: connecting dots with lines based on their proximity to each other, and creating polygons and volumes using the dots in the cloud (a sketch of the proximity rule follows below). Understanding the possibilities, and how each change to the code affects the image, will allow me to think about what specifically to film. The code modifies the image, but the initial image that is filmed is also very important.
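
As a hypothetical starting point, the proximity-based connection could look something like the function below, which only draws a line between two dots when they are closer than a chosen threshold. The threshold value is arbitrary and would need tuning against real depth data; the function is meant to slot into the draw() loop of the earlier sketch.

<pre>
// Possible shape of the planned proximity rule: connect two dots only
// when the distance between them falls below a chosen threshold.
float threshold = 40;   // arbitrary value, to be tuned

void drawProximityLines(PVector[] cloud) {
  stroke(220);
  for (int i = 0; i < cloud.length; i++) {
    for (int j = i + 1; j < cloud.length; j++) {
      if (PVector.dist(cloud[i], cloud[j]) < threshold) {
        line(cloud[i].x, cloud[i].y, cloud[i].z,
             cloud[j].x, cloud[j].y, cloud[j].z);
      }
    }
  }
}
</pre>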