
First Draft

Synopsis/Abstract

what is the thesis
This thesis takes the form of a project report.

what are you going to talk about
Within this project report I would like to cover three main topics:

  1. the theoretical/conceptual and practical background to new-media art, looking at:
    • the history of immersive art (created with 360-degree environments)
    • technologically driven art (such as the E.A.T. collective)
    • the history of projection
  2. the conceptual background of my own project
    • flatland
    • journey through multiple dimensions
  3. the technical aspects of my own project:
    • software
    • hardware
    • techniques
    • the Xbox Kinect as a way of continuing to use a lens, with a filter that makes the imagery aesthetically continuous with the rest of the generated content.

what is your project
Concept: My project is an audio-visual immersive performance in which the audience is taken on a multi-dimensional journey. The main subject of play is perspective, and the goal is to affect the way the audience perceives the space surrounding them. Since the piece is about a journey through multiple dimensions, I first want the audience to perceive the space around them as two-dimensional, then as three-dimensional, and finally to become aware of the fourth and possibly the fifth dimension. The most basic method I intend to use is this: for each dimension I set up a set of rules that become true/real to the audience; once I feel they believe in those rules, I break them in order to transcend to the next dimension, where the process is repeated.

Practical: The project consists of a mapped, 360-degree projected environment.

Previous practice / relation
This project is an extension of my previous work on mapping and the perception of space. Previously I worked with augmented reality, focusing on the perception of a single object. This time, rather than allowing the audience to view the object from the outside, my intention is to place the audience within the object, so that they are completely immersed in the space that I create.

Additionally, as a result of a recent test in which I hacked the Xbox Kinect to create abstract imagery through a lens, I am continuing to work on my cinematographic skills by creating moving images. The abstraction, and the hacking of the image through code, is also an extension of motion graphics, a field I have been heavily involved in over the last few years. All of this relates to the content that I am generating: when it is not created through this modified camera, it is motion graphics/animation.

Sections/Tags

  • Perspective
  • Multi-dimensionality
  • Space
  • Light + Shadow/Darkness
  • Screen(s)
  • Sensory stimulation/overstimulation

exp01

initial intention
The initial intention of this test was to observe the effects of a projected luminous object against a stroboscopic background (flickering between white and black). Secondarily, based on visualization tests conducted in Cinema4D, I became interested in observing the effects that the relationship between an object and its shadow can create. From the visualization tests, I noticed that the stroboscopic background can affect the way we perceive the shadow; more specifically, the shadow appeared to move with a latency relative to the object casting it.
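For reference, the stroboscopic background itself is simple to approximate in code. The following is a minimal Processing sketch written for illustration only (the original background was rendered in Cinema4D); the flicker rate and the stand-in triangle are assumptions:

 int strobeRate = 2; // frames per flip (assumed value)

 void setup() {
   size(1280, 720);
   frameRate(30);
 }

 void draw() {
   // flip the background between white and black every strobeRate frames
   if ((frameCount / strobeRate) % 2 == 0) {
     background(255);
   } else {
     background(0);
   }
   // a simple triangle stands in for the projected luminous pyramid
   fill(200);
   noStroke();
   triangle(width/2, height/3, width/3, 2*height/3, 2*width/3, 2*height/3);
 }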

inputs [projected materials]
A series of videos of a pyramid and its shadow was created. In each video the relationship between the pyramid and its shadow is different, varying from a static shadow to synchronized and desynchronized shadows. Additionally, different textures were used for the pyramid to test its “luminosity” when projected against the stroboscopic background; these textures included highly luminous, matte and reflective finishes.

modifiers
Modifiers were not strictly needed for this test, as the goal was to observe the relationship between the luminous object and the stroboscopic background when projected. That said, the playback speed of the videos was altered, thereby changing their effective frame rate, especially when working with the stroboscopic source video.
Additionally, simple mapping techniques (scaling/positioning) were used to display multiple videos on the same screen, each placed in a specific part of the screen. A rough code sketch of these operations follows below.
Potentially, a blur effect could also be of use in this section.
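The speed change and the scaling/positioning were done inside VDMX, which is a GUI tool. Purely as an illustration of the same operations in code, here is a hypothetical Processing sketch using the standard Video library; the file name and the values are placeholders:

 import processing.video.*;

 Movie strobe;

 void setup() {
   size(1280, 720);
   strobe = new Movie(this, "strobe.mov"); // placeholder for the stroboscopic source video
   strobe.loop();
   strobe.speed(2.0); // double playback speed, changing the effective frame rate
 }

 void movieEvent(Movie m) {
   m.read(); // read each new frame as it becomes available
 }

 void draw() {
   background(0);
   // crude "mapping": scale the video and place it in a specific part of the screen
   image(strobe, width/4, height/4, width/2, height/2);
 }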

output [screen(s)]
For this test, a flat, two-dimensional surface was primarily used. Part of the test was conducted using an additional two-dimensional screen positioned in front of the original surface to add physical depth. This secondary screen was also tested as a moving screen.

back end
VDMX - a VJing software - was used to project, map and apply any other changes to the video.
The bank of videos was generated using Cinema4D and After Effects.

conclusion
While the test showed the initial intuition to be not very effective, new things were discovered. The effect did not work because the relationship between the shadow and the object needs to be visualized more drastically: the shadow needs to go evidently out of sync. Furthermore, the effect does start to have an impact when the video is edited to emphasize the change, i.e. a clip with the shadow moving in sync, seamlessly joined to a clip where the shadow stops; the transition itself has the most effect. The stroboscopic background, by contrast, overpowers the content and makes the change hard to see, so it is better used as a transitional effect, for example to flicker between different video files in sync with the stroboscopic background.

One of the new discoveries relates to perspective, more precisely forced perspective. When the pyramids were arranged in a forced perspective, walking around the screen created parallax, meaning that the brain constructs a space that is actually absent. This technique will be investigated further.

Returning to the forced depth, where an additional screen was placed in front of the original screen: when you walk around the space, the forced perspective makes the object on the foreground screen appear either in front of, or actually behind, the object on the original screen. Moving the screen emphasizes this effect further.

exp02

initial intention
The initial intention was to observe the effects of projecting onto a spherical mirror, focusing on how the image is spread to cover a large area (over 180 degrees of view).

inputs [projected materials]
Video file.

modifiers
None (other than the spherical mirror).

output
A large projection: the spherical mirror spreads the image across more than 180 degrees.

back end
Spherical mirror / silver-coated ball

conclusion
The problem with projecting via a spherical mirror is that the light is spread almost too much, and therefore actually lights up a dark space. Secondly, the contrast, intensity and visibility of the image are lowered, and projecting graphical videos results in an image that is difficult to distinguish; this is also the reason why the projected image in this test is a photographic moving image. That being said, if the camera in the source material moves a lot, the effect of the whole room moving is quite strong. I see this technique being used to create an ambiance in the space, and although seeing the whole room move is a strong effect, I think a synced 360-degree projection from multiple projectors could be stronger due to the higher image quality.

exp03

initial intention
Use the three-dimensional point cloud generated by the Xbox Kinect as a data source in order to create abstract imagery.

inputs
The live feed captured by the camera(s) of the Xbox Kinect.

modifiers
Through Processing, the feed from the Kinect is analyzed and a three-dimensional point cloud is displayed on the screen. I modified the original Processing code to connect the points in the cloud with lines, and then modified it further to connect randomly chosen dots. (Other tests and variations can be found in this test; they represent the process of modifying the code.) A minimal sketch of the setup follows below.
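As a rough illustration of the setup described above, here is a minimal Processing sketch, assuming Daniel Shiffman's Open Kinect for Processing library (the one commonly used for Kinect point clouds); the sampling step and the number of random lines are illustrative values, not necessarily those used in the test:

 import org.openkinect.processing.*;

 Kinect kinect;
 int skip = 20;      // sample every 20th depth pixel (illustrative)
 int numLines = 200; // random connections drawn per frame (illustrative)

 void setup() {
   size(800, 600, P3D);
   kinect = new Kinect(this);
   kinect.initDepth();
 }

 void draw() {
   background(0);
   stroke(255);
   translate(width/2, height/2, -50);

   // collect a subsampled set of valid depth points
   int[] depth = kinect.getRawDepth();
   ArrayList<PVector> points = new ArrayList<PVector>();
   for (int x = 0; x < kinect.width; x += skip) {
     for (int y = 0; y < kinect.height; y += skip) {
       int d = depth[x + y * kinect.width];
       if (d > 0 && d < 2047) { // 2047 marks invalid depth on the Kinect v1
         // crude mapping of raw depth into scene space
         points.add(new PVector(x - kinect.width/2, y - kinect.height/2,
                                map(d, 0, 2047, -200, 200)));
       }
     }
   }

   // connect randomly chosen pairs of points with lines
   if (points.size() > 1) {
     for (int i = 0; i < numLines; i++) {
       PVector a = points.get((int) random(points.size()));
       PVector b = points.get((int) random(points.size()));
       line(a.x, a.y, a.z, b.x, b.y, b.z);
     }
   }
 }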

output
A single-screen video feed. (Because Processing normally displays a live feed on the computer screen, I modified the code to save a still every time the screen was refreshed, and was therefore able to create a video file out of the image sequence; see the sketch below.)
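The frame capture can be done with Processing's built-in saveFrame() call; a minimal sketch of that step, with a placeholder folder and file pattern:

 void draw() {
   // ... render the point cloud as in the sketch above ...

   // save the rendered frame as a numbered still; #### is replaced by
   // the current frame number, and the resulting image sequence can be
   // assembled into a video file afterwards
   saveFrame("frames/frame-####.png");
 }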

back end
Xbox Kinect
Processing
After Effects

conclusions
I am very happy with the results of this test for various reasons. The aesthetic of the video generated through this method is a continuation of the other videos I have generated as part of other tests for this project; more specifically, this aesthetic can be described as white/grey lines outlining the content on a black background. The black background allows everything to connect easily, so I am able to create a continuous background in a 360-degree environment. Additionally, this test represents a tool/method/filter that allows me to continue using a device with a lens to generate my imagery. Since the content has quickly started to go in a direction where I was not able to use videos generated by a video camera, this method allows me to keep filming (and to keep thinking specifically about cinematography) in order to generate content for the project.

Finally, I want to continue making modifications to the code. A few of the things I have lined up are: connecting dots with lines based on their proximity to each other (sketched below), and creating polygons and volumes from the dots in the cloud. Understanding the possibilities, and how each change to the code affects the image, will allow me to think about what specifically to film. The code modifies the image, but the initial image that is filmed is also very important.
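For the proximity idea, the random pairing in the earlier sketch would be replaced by a distance test. A possible sketch, where the threshold value is an assumption:

 float maxDist = 50; // connection threshold in scene units (illustrative)

 // draw a line between every pair of points closer than maxDist;
 // the pairwise check is O(n^2), so it only stays responsive
 // when the depth image is subsampled as in the earlier sketch
 void connectByProximity(ArrayList<PVector> points) {
   stroke(255, 80);
   for (int i = 0; i < points.size(); i++) {
     for (int j = i + 1; j < points.size(); j++) {
       PVector a = points.get(i);
       PVector b = points.get(j);
       if (PVector.dist(a, b) < maxDist) {
         line(a.x, a.y, a.z, b.x, b.y, b.z);
       }
     }
   }
 }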

exp04

initial intention
The initial intention of this test was to get familiar with the Plexus plugin itself and to visualize the idea of making abstract images in which a cloud of lines is displayed, connecting dots based on their proximity.

inputs [projected materials]
The input for this particular plugin is a predetermined shape layer that is analyzed for vertices/points. The rest is created through a generative method in which the program draws lines based on settings configured beforehand.

modifiers
There are no particular modifiers here; the result is created through a generative process. Settings that can be adjusted in the plugin include rotation, depth of field (focus), position, and multiplication of the initial image. The shape that is created can also be animated over time. A rough code equivalent of this process is sketched below.
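Plexus itself is a closed After Effects plug-in, so the following Processing sketch is only a hypothetical approximation of the process it performs: sample vertices from a predetermined shape, multiply the shape in depth, generatively connect nearby vertices, and animate the rotation over time. All values are illustrative:

 int vertsPerRing = 24;  // vertex count per shape copy (assumed)
 int copies = 6;         // "multiplication of the initial image" (assumed)
 float ringRadius = 150;
 float zSpacing = 60;
 float maxDist = 80;     // connection threshold (assumed)

 void setup() {
   size(800, 600, P3D);
 }

 void draw() {
   background(0);
   stroke(255, 120);
   translate(width/2, height/2);
   rotateY(frameCount * 0.01); // rotation animated over time

   // build the multiplied point set: a ring of vertices repeated along z
   ArrayList<PVector> verts = new ArrayList<PVector>();
   for (int c = 0; c < copies; c++) {
     for (int i = 0; i < vertsPerRing; i++) {
       float a = TWO_PI * i / vertsPerRing;
       verts.add(new PVector(cos(a) * ringRadius, sin(a) * ringRadius,
                             (c - copies/2.0) * zSpacing));
     }
   }

   // generatively draw lines between vertices within the threshold
   for (int i = 0; i < verts.size(); i++) {
     for (int j = i + 1; j < verts.size(); j++) {
       PVector a = verts.get(i);
       PVector b = verts.get(j);
       if (PVector.dist(a, b) < maxDist) {
         line(a.x, a.y, a.z, b.x, b.y, b.z);
       }
     }
   }
 }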

output
The output is a single-screen video file.

back end
After Effects
Plexus (AE plug-in)

conclusion
The results are aesthetically continuous with the rest of the tests/videos I have done, especially those generated with the Kinect. I see two potential uses for this plug-in: creating a bank of video files that represent landscapes, and creating more volumetric images where the sense of depth and space is central. These results will be published in an upcoming test.